Differential geometry of surfaces
The mathematics of smooth surfaces
In mathematics, the differential geometry of surfaces deals with the differential geometry of smooth surfaces with various additional structures, most often, a Riemannian metric.
Surfaces have been extensively studied from various perspectives: "extrinsically", relating to their embedding in Euclidean space and "intrinsically", reflecting their properties determined solely by the distance within the surface as measured along curves on the surface. One of the fundamental concepts investigated is the Gaussian curvature, first studied in depth by Carl Friedrich Gauss, who showed that curvature was an intrinsic property of a surface, independent of its isometric embedding in Euclidean space.
Surfaces naturally arise as graphs of functions of a pair of variables, and sometimes appear in parametric form or as loci associated to space curves. An important role in their study has been played by Lie groups (in the spirit of the Erlangen program), namely the symmetry groups of the Euclidean plane, the sphere and the hyperbolic plane. These Lie groups can be used to describe surfaces of constant Gaussian curvature; they also provide an essential ingredient in the modern approach to intrinsic differential geometry through connections. On the other hand, extrinsic properties relying on an embedding of a surface in Euclidean space have also been extensively studied. This is well illustrated by the non-linear Euler–Lagrange equations in the calculus of variations: although Euler developed the one variable equations to understand geodesics, defined independently of an embedding, one of Lagrange's main applications of the two variable equations was to minimal surfaces, a concept that can only be defined in terms of an embedding.
History.
The volumes of certain quadric surfaces of revolution were calculated by Archimedes. The development of calculus in the seventeenth century provided a more systematic way of computing them. Curvature of general surfaces was first studied by Euler. In 1760 he proved a formula for the curvature of a plane section of a surface and in 1771 he considered surfaces represented in a parametric form. Monge laid down the foundations of their theory in his classical memoir "L'application de l'analyse à la géometrie" which appeared in 1795. The defining contribution to the theory of surfaces was made by Gauss in two remarkable papers written in 1825 and 1827. This marked a new departure from tradition because for the first time Gauss considered the "intrinsic" geometry of a surface, the properties which are determined only by the geodesic distances between points on the surface independently of the particular way in which the surface is located in the ambient Euclidean space. The crowning result, the Theorema Egregium of Gauss, established that the Gaussian curvature is an intrinsic invariant, i.e. invariant under local isometries. This point of view was extended to higher-dimensional spaces by Riemann and led to what is known today as Riemannian geometry. The nineteenth century was the golden age for the theory of surfaces, from both the topological and the differential-geometric point of view, with most leading geometers devoting themselves to their study. Darboux collected many results in his four-volume treatise "Théorie des surfaces" (1887–1896).
Overview.
It is intuitively quite familiar to say that the leaf of a plant, the surface of a glass, or the shape of a face, are curved in certain ways, and that all of these shapes, even after ignoring any distinguishing markings, have certain geometric features which distinguish one from another. The differential geometry of surfaces is concerned with a mathematical understanding of such phenomena. The study of this field, which was initiated in its modern form in the 1700s, has led to the development of higher-dimensional and abstract geometry, such as Riemannian geometry and general relativity.
The essential mathematical object is that of a regular surface. Although conventions vary in their precise definition, these form a general class of subsets of three-dimensional Euclidean space (ℝ3) which capture part of the familiar notion of "surface." By analyzing the class of curves which lie on such a surface, and the degree to which the surfaces force them to curve in ℝ3, one can associate to each point of the surface two numbers, called the principal curvatures. Their average is called the mean curvature of the surface, and their product is called the Gaussian curvature.
There are many classic examples of regular surfaces, including spheres, tori of revolution, hyperboloids and helicoids; several of these are described in detail below.
A surprising result of Carl Friedrich Gauss, known as the theorema egregium, showed that the Gaussian curvature of a surface, which by its definition has to do with how curves on the surface change directions in three dimensional space, can actually be measured by the lengths of curves lying on the surfaces together with the angles made when two curves on the surface intersect. Terminologically, this says that the Gaussian curvature can be calculated from the first fundamental form (also called metric tensor) of the surface. The second fundamental form, by contrast, is an object which encodes how lengths and angles of curves on the surface are distorted when the curves are pushed off of the surface.
Despite measuring different aspects of length and angle, the first and second fundamental forms are not independent from one another, and they satisfy certain constraints called the Gauss-Codazzi equations. A major theorem, often called the fundamental theorem of the differential geometry of surfaces, asserts that whenever two objects satisfy the Gauss-Codazzi constraints, they will arise as the first and second fundamental forms of a regular surface.
Using the first fundamental form, it is possible to define new objects on a regular surface. Geodesics are curves on the surface which satisfy a certain second-order ordinary differential equation which is specified by the first fundamental form. They are very directly connected to the study of lengths of curves; a geodesic of sufficiently short length will always be the curve of "shortest" length on the surface which connects its two endpoints. Thus, geodesics are fundamental to the optimization problem of determining the shortest path between two given points on a regular surface.
One can also define parallel transport along any given curve, which gives a prescription for how to deform a tangent vector to the surface at one point of the curve to tangent vectors at all other points of the curve. The prescription is determined by a first-order ordinary differential equation which is specified by the first fundamental form.
The above concepts are essentially all to do with multivariable calculus. The Gauss-Bonnet theorem is a more global result, which relates the Gaussian curvature of a surface to its topological type. It asserts that the average value of the Gaussian curvature is completely determined by the Euler characteristic of the surface together with its surface area.
Any regular surface is an example both of a Riemannian manifold and Riemann surface. Essentially all of the theory of regular surfaces as discussed here has a generalization in the theory of Riemannian manifolds and their submanifolds.
Regular surfaces in Euclidean space.
Definition.
It is intuitively clear that a sphere is smooth, while a cone or a pyramid, due to their vertex or edges, are not. The notion of a "regular surface" is a formalization of the notion of a smooth surface. The definition utilizes the local representation of a surface via maps between Euclidean spaces. There is a standard notion of smoothness for such maps; a map between two open subsets of Euclidean space is smooth if its partial derivatives of every order exist at every point of the domain.
The following gives three equivalent ways to present the definition; the middle definition is perhaps the most visually intuitive, as it essentially says that a regular surface is a subset of ℝ3 which is locally the graph of a smooth function (whether over a region in the "yz" plane, the "xz" plane, or the "xy" plane).
The homeomorphisms appearing in the first definition are known as local parametrizations or local coordinate systems or local charts on S. The equivalence of the first two definitions asserts that, around any point on a regular surface, there always exist local parametrizations of the form ("u", "v") ↦ ("h"("u", "v"), "u", "v"), ("u", "v") ↦ ("u", "h"("u", "v"), "v"), or ("u", "v") ↦ ("u", "v", "h"("u", "v")), known as Monge patches. Functions F as in the third definition are called local defining functions. The equivalence of all three definitions follows from the implicit function theorem.
Given any two local parametrizations "f" : "V" → "U" and "f" ′ : "V" ′→ "U" ′ of a regular surface, the composition "f" −1 ∘ "f" ′ is necessarily smooth as a map between open subsets of ℝ2. This shows that any regular surface naturally has the structure of a smooth manifold, with a smooth atlas being given by the inverses of local parametrizations.
In the classical theory of differential geometry, surfaces are usually studied only in the regular case. It is, however, also common to study non-regular surfaces, in which the two partial derivatives ∂"u ""f" and ∂"v ""f" of a local parametrization may fail to be linearly independent. In this case, S may have singularities such as cuspidal edges. Such surfaces are typically studied in singularity theory. Other weakened forms of regular surfaces occur in computer-aided design, where a surface is broken apart into disjoint pieces, with the derivatives of local parametrizations failing to even be continuous along the boundaries.
Simple examples.
A simple example of a regular surface is given by the 2-sphere {("x", "y", "z") | "x"2 + "y"2 + "z"2 = 1}; this surface can be covered by six Monge patches (two of each of the three types given above), taking "h"("u", "v") = ±(1 − "u"2 − "v"2)1/2. It can also be covered by two local parametrizations, using stereographic projection. The set {("x", "y", "z") : (("x"2 + "y"2)1/2 − "r")2 + "z"2 = "R"2} is a torus of revolution with radii r and R. It is a regular surface; local parametrizations can be given of the form
formula_0
The hyperboloid of two sheets {("x", "y", "z") : "z"2 = 1 + "x"2 + "y"2} is a regular surface; it can be covered by two Monge patches, with "h"("u", "v") = ±(1 + "u"2 + "v"2)1/2. The helicoid appears in the theory of minimal surfaces. It is covered by a single local parametrization, "f"("u", "v") = ("u" sin "v", "u" cos "v", "v").
Tangent vectors and normal vectors.
Let S be a regular surface in ℝ3, and let p be an element of S. Using any of the above definitions, one can single out certain vectors in ℝ3 as being tangent to S at p, and certain vectors in ℝ3 as being orthogonal to S at p.
One sees that the "tangent space" or "tangent plane" to S at p, which is defined to consist of all tangent vectors to S at p, is a two-dimensional linear subspace of ℝ3; it is often denoted by "T""p""S". The "normal space" to S at p, which is defined to consist of all normal vectors to S at p, is a one-dimensional linear subspace of ℝ3 which is orthogonal to the tangent space "T""p""S". As such, at each point p of S, there are two normal vectors of unit length (unit normal vectors). The unit normal vectors at p can be given in terms of local parametrizations, Monge patches, or local defining functions, via the formulas
formula_1
following the same notations as in the previous definitions.
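For instance, in terms of the three presentations just mentioned, the standard expressions (given here as a sketch rather than a reproduction of the displayed formulas) are, for a local parametrization f, a Monge patch ("u", "v") ↦ ("u", "v", "h"("u", "v")), and a local defining function F with nonvanishing gradient,

```latex
n = \pm\frac{\partial_u f \times \partial_v f}{\bigl|\partial_u f \times \partial_v f\bigr|},
\qquad
n = \pm\frac{(-h_u,\,-h_v,\,1)}{\sqrt{1+h_u^2+h_v^2}},
\qquad
n = \pm\frac{\nabla F}{\left|\nabla F\right|},
```

each evaluated at the relevant preimage of p.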
It is also useful to note an "intrinsic" definition of tangent vectors, which is typical of the generalization of regular surface theory to the setting of smooth manifolds. It defines the tangent space as an abstract two-dimensional real vector space, rather than as a linear subspace of ℝ3. In this definition, one says that a tangent vector to S at p is an assignment, to each local parametrization "f" : "V" → "S" with "p" ∈ "f"("V"), of two numbers "X"1 and "X"2, such that for any other local parametrization "f" ′ : "V" ′ → "S" with "p" ∈ "f" ′("V" ′) (and with corresponding numbers ("X" ′)1 and ("X" ′)2), one has
formula_2
where "A""f" ′("p") is the Jacobian matrix of the mapping "f" −1 ∘ "f" ′, evaluated at the point "f" ′("p"). The collection of tangent vectors to S at p naturally has the structure of a two-dimensional vector space. A tangent vector in this sense corresponds to a tangent vector in the previous sense by considering the vector
formula_3
in ℝ3. The Jacobian condition on "X"1 and "X"2 ensures, by the chain rule, that this vector does not depend on f.
For smooth functions on a surface, vector fields (i.e. tangent vector fields) have an important interpretation as first order operators or derivations. Let formula_4 be a regular surface, formula_5 an open subset of the plane and formula_6 a coordinate chart. If formula_7, the space formula_8 can be identified with formula_9. Similarly formula_10 identifies vector fields on formula_5 with vector fields on formula_11. Taking standard variables u and v, a vector field has the form formula_12, with a and b smooth functions. If formula_13 is a vector field and formula_14 is a smooth function, then formula_15 is also a smooth function. The first order differential operator formula_13 is a "derivation", i.e. it satisfies the Leibniz rule formula_16
For vector fields X and Y it is simple to check that the operator formula_17 is a derivation corresponding to a vector field. It is called the Lie bracket formula_18. It is skew-symmetric formula_19 and satisfies the Jacobi identity:
formula_20
In summary, vector fields on formula_5 or formula_11 form a Lie algebra under the Lie bracket.
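In the coordinates u and v, writing X = a ∂u + b ∂v as above and Y = c ∂u + d ∂v (the functions c and d are introduced here only for illustration), a short computation gives the bracket explicitly:

```latex
[X,Y] = \bigl(a\,\partial_u c + b\,\partial_v c - c\,\partial_u a - d\,\partial_v a\bigr)\,\partial_u
      + \bigl(a\,\partial_u d + b\,\partial_v d - c\,\partial_u b - d\,\partial_v b\bigr)\,\partial_v .
```

The second-order derivatives of the test function cancel in X(Yf) − Y(Xf), which is why the bracket is again a first-order operator.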
First and second fundamental forms, the shape operator, and the curvature.
Let S be a regular surface in ℝ3. Given a local parametrization "f" : "V" → "S" and a unit normal vector field n to "f"("V"), one defines the following objects as real-valued or matrix-valued functions on V. The first fundamental form depends only on f, and not on n. The fourth column records the way in which these functions depend on f, by relating the functions "E" ′, "F" ′, "G" ′, "L" ′, etc., arising for a different choice of local parametrization, "f" ′ : "V" ′ → "S", to those arising for f. Here A denotes the Jacobian matrix of "f" –1 ∘ "f" ′. The key relation in establishing the formulas of the fourth column is then
formula_21
as follows by the chain rule.
By a direct calculation with the matrix defining the shape operator, it can be checked that the Gaussian curvature is the determinant of the shape operator, the mean curvature is half of the trace of the shape operator, and the principal curvatures are the eigenvalues of the shape operator; moreover the Gaussian curvature is the product of the principal curvatures and the mean curvature is their sum. These observations can also be formulated as definitions of these objects. These observations also make clear that the last three rows of the fourth column follow immediately from the previous row, as similar matrices have identical determinant, trace, and eigenvalues. It is fundamental to note E, G, and "EG" − "F"2 are all necessarily positive. This ensures that the matrix inverse in the definition of the shape operator is well-defined, and that the principal curvatures are real numbers.
Note also that a negation of the choice of unit normal vector field will negate the second fundamental form, the shape operator, the mean curvature, and the principal curvatures, but will leave the Gaussian curvature unchanged. In summary, this has shown that, given a regular surface S, the Gaussian curvature of S can be regarded as a real-valued function on S; relative to a choice of unit normal vector field on all of S, the two principal curvatures and the mean curvature are also real-valued functions on S.
Geometrically, the first and second fundamental forms can be viewed as giving information on how "f"("u", "v") moves around in ℝ3 as ("u", "v") moves around in V. In particular, the first fundamental form encodes how quickly f moves, while the second fundamental form encodes the extent to which its motion is in the direction of the normal vector n. In other words, the second fundamental form at a point p encodes the distance from nearby points of S to the tangent plane to S at p; in particular it gives the quadratic function which best approximates this distance. This thinking can be made precise by the formulas
formula_22
as follows directly from the definitions of the fundamental forms and Taylor's theorem in two dimensions. The principal curvatures can be viewed in the following way. At a given point p of S, consider the collection of all planes which contain the orthogonal line to S. Each such plane has a curve of intersection with S, which can be regarded as a plane curve inside of the plane itself. The two principal curvatures at p are the maximum and minimum possible values of the curvature of this plane curve at p, as the plane under consideration rotates around the normal line.
The following summarizes the calculation of the above quantities relative to a Monge patch "f"("u", "v") = ("u", "v", "h"("u", "v")). Here "h""u" and "h""v" denote the two partial derivatives of h, with analogous notation for the second partial derivatives. The second fundamental form and all subsequent quantities are calculated relative to the given choice of unit normal vector field.
Christoffel symbols, Gauss–Codazzi equations, and the Theorema Egregium.
Let S be a regular surface in ℝ3. The Christoffel symbols assign, to each local parametrization "f" : "V" → "S", eight functions on V, defined by
formula_23
They can also be defined by the following formulas, in which n is a unit normal vector field along "f"("V") and "L", "M", "N" are the corresponding components of the second fundamental form:
formula_24
The key to this definition is that ∂"f"/∂"u", ∂"f"/∂"v", and n form a basis of ℝ3 at each point, relative to which each of the three equations uniquely specifies the Christoffel symbols as coordinates of the second partial derivatives of f. The choice of unit normal has no effect on the Christoffel symbols, since if n is exchanged for its negation, then the components of the second fundamental form are also negated, and so the signs of "Ln", "Mn", "Nn" are left unchanged.
The second definition shows, in the context of local parametrizations, that the Christoffel symbols are geometrically natural. Although the formulas in the first definition appear less natural, they have the importance of showing that the Christoffel symbols can be calculated from the first fundamental form, which is not immediately apparent from the second definition. The equivalence of the definitions can be checked by directly substituting the first definition into the second, and using the definitions of "E", "F", "G".
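For reference, with D = EG − F2 and with the symmetry Γk12 = Γk21, the resulting expressions take the following standard form (a sketch; the displayed formulas above are not reproduced):

```latex
\Gamma^1_{11} = \frac{G E_u - 2F F_u + F E_v}{2D},\quad
\Gamma^2_{11} = \frac{2E F_u - E E_v - F E_u}{2D},\quad
\Gamma^1_{12} = \frac{G E_v - F G_u}{2D},
```
```latex
\Gamma^2_{12} = \frac{E G_u - F E_v}{2D},\quad
\Gamma^1_{22} = \frac{2G F_v - G G_u - F G_v}{2D},\quad
\Gamma^2_{22} = \frac{E G_v - 2F F_v + F G_u}{2D}.
```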
The Codazzi equations assert that
formula_25
These equations can be directly derived from the second definition of Christoffel symbols given above; for instance, the first Codazzi equation is obtained by differentiating the first equation with respect to v, the second equation with respect to u, subtracting the two, and taking the dot product with n. The Gauss equation asserts that
formula_26
These can be derived similarly to the Codazzi equations, with one using the Weingarten equations instead of taking the dot product with n. Although these are written as three separate equations, they are identical when the definitions of the Christoffel symbols, in terms of the first fundamental form, are substituted in. There are many ways to write the resulting expression, one of them derived in 1852 by Brioschi through a skillful use of determinants:
formula_27
When the Christoffel symbols are considered as being defined by the first fundamental form, the Gauss and Codazzi equations represent certain constraints between the first and second fundamental forms. The Gauss equation is particularly noteworthy, as it shows that the Gaussian curvature can be computed directly from the first fundamental form, without the need for any other information; equivalently, this says that "LN" − "M"2 can actually be written as a function of "E", "F", "G", even though the individual components "L", "M", "N" cannot. This is known as the theorema egregium, and was a major discovery of Carl Friedrich Gauss. It is particularly striking when one recalls the geometric definition of the Gaussian curvature of S as being defined by the maximum and minimum radii of osculating circles; they seem to be fundamentally defined by the geometry of how S bends within ℝ3. Nevertheless, the theorem shows that their product can be determined from the "intrinsic" geometry of S, having only to do with the lengths of curves along S and the angles formed at their intersections. As said by Marcel Berger:
This theorem is baffling. [...] It is the kind of theorem which could have waited dozens of years more before being discovered by another mathematician since, unlike so much of intellectual history, it was absolutely not in the air. [...] To our knowledge there is no simple geometric proof of the theorema egregium today.
The Gauss-Codazzi equations can also be succinctly expressed and derived in the language of connection forms due to Élie Cartan. In the language of tensor calculus, making use of natural metrics and connections on tensor bundles, the Gauss equation can be written as "H"2 − |"h"|2 = "R" and the two Codazzi equations can be written as ∇1 "h"12 = ∇2 "h"11 and ∇1 "h"22 = ∇2 "h"12; the complicated expressions to do with Christoffel symbols and the first fundamental form are completely absorbed into the definitions of the covariant tensor derivative ∇"h" and the scalar curvature R. Pierre Bonnet proved that two quadratic forms satisfying the Gauss-Codazzi equations always uniquely determine an embedded surface locally. For this reason the Gauss-Codazzi equations are often called the fundamental equations for embedded surfaces, precisely identifying where the intrinsic and extrinsic curvatures come from. They admit generalizations to surfaces embedded in more general Riemannian manifolds.
Isometries.
A diffeomorphism formula_28 between open sets formula_5 and formula_11 in a regular surface formula_4 is said to be an isometry if it preserves the metric, i.e. the first fundamental form. Thus for every point formula_29 in formula_5 and tangent vectors formula_30 at formula_29, there are equalities
formula_31
In terms of the inner product coming from the first fundamental form, this can be rewritten as
formula_32.
On the other hand, the length of a parametrized curve formula_33 can be calculated as
formula_34
and, if the curve lies in formula_5, the rules for change of variables show that
formula_35
Conversely if formula_28 preserves the lengths of all parametrized curves then formula_28 is an isometry. Indeed, for suitable choices of formula_36, the tangent vectors formula_37 and formula_38 give arbitrary tangent vectors formula_39 and formula_40. The equalities must hold for all choices of tangent vectors formula_39 and formula_40 as well as formula_41 and formula_42, so that formula_43.
A simple example of an isometry is provided by two parametrizations formula_44 and formula_45 of an open set formula_5 into regular surfaces formula_46 and formula_47. If formula_48, formula_49 and formula_50, then formula_51 is an isometry of formula_52 onto formula_53.
The cylinder and the plane give examples of surfaces that are locally isometric but which cannot be extended to an isometry for topological reasons. As another example, the catenoid and helicoid are locally isometric.
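As a concrete sketch of the first example: the parametrizations f("u", "v") = ("u", "v", 0) of the plane and f′("u", "v") = (cos "u", sin "u", "v") of the unit cylinder, defined on a common open set V, have identical first fundamental forms,

```latex
E = E' = 1, \qquad F = F' = 0, \qquad G = G' = 1,
```

so f′ ∘ f−1 is a local isometry, even though the two surfaces have different second fundamental forms in ℝ3.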
Covariant derivatives.
A tangential vector field X on S assigns, to each p in S, a tangent vector "X""p" to S at p. According to the "intrinsic" definition of tangent vectors given above, a tangential vector field X then assigns, to each local parametrization "f" : "V" → "S", two real-valued functions "X"1 and "X"2 on V, so that
formula_54
for each p in S. One says that X is smooth if the functions "X"1 and "X"2 are smooth, for any choice of f. According to the other definitions of tangent vectors given above, one may also regard a tangential vector field X on S as a map "X" : "S" → ℝ3 such that "X"("p") is contained in the tangent space "T""p""S" ⊂ ℝ3 for each p in S. As is common in the more general situation of smooth manifolds, tangential vector fields can also be defined as certain differential operators on the space of smooth functions on S.
The covariant derivatives (also called "tangential derivatives") of Tullio Levi-Civita and Gregorio Ricci-Curbastro provide a means of differentiating smooth tangential vector fields. Given a tangential vector field X and a tangent vector Y to S at p, the covariant derivative ∇"Y""X" is a certain tangent vector to S at p. Consequently, if X and Y are both tangential vector fields, then ∇"Y""X" can also be regarded as a tangential vector field; iteratively, if X, Y, and Z are tangential vector fields, then one may compute ∇"Z"∇"Y""X", which will be another tangential vector field. There are a few ways to define the covariant derivative; the first below uses the Christoffel symbols and the "intrinsic" definition of tangent vectors, and the second is more manifestly geometric.
Given a tangential vector field X and a tangent vector Y to S at p, one defines ∇"Y""X" to be the tangent vector to S at p which assigns to a local parametrization "f" : "V" → "S" the two numbers
formula_55
where "D"("Y"1, "Y"2) is the directional derivative. This is often abbreviated in the less cumbersome form (∇"Y""X")"k"
∂"Y"("X" "k") + "Y" "i"Γ"X"" j", making use of Einstein notation and with the locations of function evaluation being implicitly understood. This follows a standard prescription in Riemannian geometry for obtaining a connection from a Riemannian metric. It is a fundamental fact that the vector
formula_56
in ℝ3 is independent of the choice of local parametrization f, although this is rather tedious to check.
One can also define the covariant derivative by the following geometric approach, which does not make use of Christoffel symbols or local parametrizations. Let X be a vector field on S, viewed as a function "S" → ℝ3. Given any curve "c" : ("a", "b") → "S", one may consider the composition "X" ∘ "c" : ("a", "b") → ℝ3. As a map between Euclidean spaces, it can be differentiated at any input value to get an element ("X" ∘ "c")′("t") of ℝ3. The orthogonal projection of this vector onto "T""c"("t")"S" defines the covariant derivative ∇"c" ′("t")"X". Although this is a very geometrically clean definition, it is necessary to show that the result only depends on "c"′("t") and X, and not otherwise on the curve c; local parametrizations can be used for this small technical argument.
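In symbols, the same geometric definition can be sketched as follows: if n denotes a unit normal vector field near c(t), then

```latex
\nabla_{c'(t)} X \;=\; (X\circ c)'(t) \;-\; \bigl\langle (X\circ c)'(t),\; n(c(t))\bigr\rangle\, n(c(t)),
```

the tangential part of the ordinary derivative of X ∘ c.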
It is not immediately apparent from the second definition that covariant differentiation depends only on the first fundamental form of S; however, this is immediate from the first definition, since the Christoffel symbols can be defined directly from the first fundamental form. It is straightforward to check that the two definitions are equivalent. The key is that when one regards "X"1 ∂"f"/∂"u" + "X"2 ∂"f"/∂"v" as an ℝ3-valued function, its differentiation along a curve results in second partial derivatives ∂2"f"; the Christoffel symbols enter with orthogonal projection to the tangent space, due to the formulation of the Christoffel symbols as the tangential components of the second derivatives of f relative to the basis ∂"f"/∂"u", ∂"f"/∂"v", n. This is discussed in the above section.
The right-hand side of the three Gauss equations can be expressed using covariant differentiation. For instance, the right-hand side
formula_57
can be recognized as the second coordinate of
formula_58
relative to the basis ∂"f"/∂"u", ∂"f"/∂"v", as can be directly verified using the definition of covariant differentiation by Christoffel symbols. In the language of Riemannian geometry, this observation can also be phrased as saying that the right-hand sides of the Gauss equations are various components of the Ricci curvature of the Levi-Civita connection of the first fundamental form, when interpreted as a Riemannian metric.
Examples.
Surfaces of revolution.
A surface of revolution is obtained by rotating a curve in the "xz"-plane about the "z"-axis. Such surfaces include spheres, cylinders, cones, tori, and the catenoid. The general ellipsoids, hyperboloids, and paraboloids are not. Suppose that the curve is parametrized by
formula_59
with "s" drawn from an interval ("a", "b"). If "c"1 is never zero, if "c"1′ and "c"2′ are never both equal to zero, and if "c"1 and "c"2 are both smooth, then the corresponding surface of revolution
formula_60
will be a regular surface in ℝ3. A local parametrization "f" : ("a", "b") × (0, 2π) → "S" is given by
formula_61
Relative to this parametrization, the geometric data is:
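That data is not reproduced here; as an illustrative sketch, assuming the standard parametrization f("s", "t") = ("c"1("s") cos "t", "c"1("s") sin "t", "c"2("s")), the usual computations give

```latex
E = c_1'(s)^2 + c_2'(s)^2, \qquad F = 0, \qquad G = c_1(s)^2, \qquad
K = \frac{c_2'\,\bigl(c_1'\,c_2'' - c_1''\,c_2'\bigr)}{c_1\,\bigl(c_1'^2 + c_2'^2\bigr)^{2}} .
```

In the arclength case this expression for K reduces to −"c"1′′("s")/"c"1("s"), in line with the simplification described next.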
In the special case that the original curve is parametrized by arclength, i.e. ("c"1′("s"))2 + ("c"2′("s"))2 = 1, one can differentiate to find "c"1′("s")"c"1′′("s") + "c"2′("s")"c"2′′("s") = 0. On substitution into the Gaussian curvature, one has the simplified
formula_62
The simplicity of this formula makes it particularly easy to study the class of rotationally symmetric surfaces with constant Gaussian curvature. By reduction to the alternative case that "c"2("s") = "s", one can study the rotationally symmetric minimal surfaces, with the result that any such surface is part of a plane or a scaled catenoid.
Each constant-t curve on S can be parametrized as a geodesic; a constant-s curve on S can be parametrized as a geodesic if and only if "c"1′(s) is equal to zero. Generally, geodesics on S are governed by Clairaut's relation.
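Explicitly, Clairaut's relation can be sketched as follows (a standard statement, with ψ("s") denoting the angle between the geodesic and the meridian through a point at distance "c"1("s") from the axis of rotation):

```latex
c_1(s)\,\sin\psi(s) \;=\; \text{constant along the geodesic}.
```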
Quadric surfaces.
Consider the quadric surface defined by
formula_63
This surface admits a parametrization
formula_64
The Gaussian curvature and mean curvature are given by
formula_65
Ruled surfaces.
A ruled surface is one which can be generated by the motion of a straight line in E3. Choosing a "directrix" on the surface, i.e. a smooth unit speed curve "c"("t") orthogonal to the straight lines, and then choosing "u"("t") to be unit vectors along the curve in the direction of the lines, the velocity vector "v" = "c""t" and "u" satisfy
formula_66
The surface consists of points
formula_67
as "s" and "t" vary.
Then, if
formula_68
the Gaussian and mean curvature are given by
formula_69
The Gaussian curvature of the ruled surface vanishes if and only if "u""t" and "v" are proportional. This condition is equivalent to the surface being the envelope of the planes along the curve containing the tangent vector "v" and the orthogonal vector "u", i.e. to the surface being developable along the curve. More generally a surface in E3 has vanishing Gaussian curvature near a point if and only if it is developable near that point. (An equivalent condition is given below in terms of the metric.)
Minimal surfaces.
In 1760 Lagrange extended Euler's results on the calculus of variations involving integrals in one variable to two variables. He had in mind the following problem:
Given a closed curve in E3, find a surface having the curve as boundary with minimal area.
Such a surface is called a minimal surface.
In 1776 Jean Baptiste Meusnier showed that the differential equation derived by Lagrange was equivalent to the vanishing of the mean curvature of the surface:
A surface is minimal if and only if its mean curvature vanishes.
Minimal surfaces have a simple interpretation in real life: they are the shape a soap film will assume if a wire frame shaped like the curve is dipped into a soap solution and then carefully lifted out. The question as to whether a minimal surface with given boundary exists is called Plateau's problem after the Belgian physicist Joseph Plateau who carried out experiments on soap films in the mid-nineteenth century. In 1930 Jesse Douglas and Tibor Radó gave an affirmative answer to Plateau's problem (Douglas was awarded one of the first Fields medals for this work in 1936).
Many examples of minimal surfaces are known explicitly, such as the catenoid, the helicoid, the Scherk surface and the Enneper surface. There has been extensive research in this area. In particular a result of Osserman shows that if a minimal surface is non-planar, then its image under the Gauss map is dense in "S"2.
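As a quick illustrative check (using notation introduced here, not taken from the article), Scherk's surface can be written locally as the Monge patch z = log(cos y) − log(cos x), and the numerator of its mean curvature vanishes identically, so the surface is minimal:

```python
# Small sympy sketch: for the Monge patch z = h(x, y) = log(cos y) - log(cos x),
# the numerator of the mean curvature,
#   (1 + h_y^2) h_xx - 2 h_x h_y h_xy + (1 + h_x^2) h_yy,
# simplifies to zero, so the mean curvature vanishes.
import sympy as sp

x, y = sp.symbols('x y')
h = sp.log(sp.cos(y)) - sp.log(sp.cos(x))
hx, hy = sp.diff(h, x), sp.diff(h, y)
num = (1 + hy**2) * sp.diff(h, x, 2) \
      - 2 * hx * hy * sp.diff(h, x, y) \
      + (1 + hx**2) * sp.diff(h, y, 2)
print(sp.simplify(num))  # expect 0
```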
Surfaces of constant Gaussian curvature.
If a surface has constant Gaussian curvature, it is called a surface of constant curvature. The sphere has constant curvature +1 and the plane has constant curvature 0, while the hyperbolic plane, of constant curvature −1, can be realized as the surface "q"("x", "y", "z") = −1 in three-dimensional Minkowski space, where "q"("x", "y", "z") = "x"2 + "y"2 – "z"2.
The sphere, the plane and the hyperbolic plane each have a transitive Lie group of symmetries. This group theoretic fact has far-reaching consequences, all the more remarkable because of the central role these special surfaces play in the geometry of surfaces, due to Poincaré's uniformization theorem (see below).
Other examples of surfaces with Gaussian curvature 0 include cones, tangent developables, and more generally any developable surface.
Local metric structure.
For any surface embedded in Euclidean space of dimension 3 or higher, it is possible to measure the length of a curve on the surface, the angle between two curves and the area of a region on the surface. This structure is encoded infinitesimally in a Riemannian metric on the surface through "line elements" and "area elements". Classically in the nineteenth and early twentieth centuries only surfaces embedded in R3 were considered and the metric was given as a 2×2 positive definite matrix varying smoothly from point to point in a local parametrization of the surface. The idea of local parametrization and change of coordinate was later formalized through the current abstract notion of a manifold, a topological space where the smooth structure is given by local charts on the manifold, exactly as the planet Earth is mapped by atlases today. Changes of coordinates between different charts of the same region are required to be smooth. Just as contour lines on real-life maps encode changes in elevation, taking into account local distortions of the Earth's surface to calculate true distances, so the Riemannian metric describes distances and areas "in the small" in each local chart. In each local chart a Riemannian metric is given by smoothly assigning a 2×2 positive definite matrix to each point; when a different chart is taken, the matrix is transformed according to the Jacobian matrix of the coordinate change. The manifold then has the structure of a 2-dimensional Riemannian manifold.
Shape operator.
The differential "dn" of the Gauss map "n" can be used to define a type of extrinsic curvature, known as the shape operator or Weingarten map. This operator first appeared implicitly in the work of Wilhelm Blaschke and later explicitly in a treatise by Burali-Forti and Burgati. Since at each point "x" of the surface, the tangent space is an inner product space, the shape operator "S""x" can be defined as a linear operator on this space by the formula
formula_70
for tangent vectors "v", "w" (the inner product makes sense because "dn"("v") and "w" both lie in E3). The right hand side is symmetric in "v" and "w", so the shape operator is self-adjoint on the tangent space. The eigenvalues of "S""x" are just the principal curvatures "k"1 and "k"2 at "x". In particular the determinant of the shape operator at a point is the Gaussian curvature, but it also contains other information, since the mean curvature is half the trace of the shape operator. The mean curvature is an extrinsic invariant. In intrinsic geometry, a cylinder is developable, meaning that every piece of it is intrinsically indistinguishable from a piece of a plane since its Gauss curvature vanishes identically. Its mean curvature is not zero, though; hence extrinsically it is different from a plane.
Equivalently, the shape operator can be defined as a linear operator on tangent spaces, "S""p": "T""p""M"→"T""p""M". If "n" is a unit normal field to "M" and "v" is a tangent vector then
formula_71
(there is no standard agreement whether to use + or − in the definition).
In general, the eigenvectors and eigenvalues of the shape operator at each point determine the directions in which the surface bends at each point. The eigenvalues correspond to the principal curvatures of the surface and the eigenvectors are the corresponding principal directions. The principal directions specify the directions that a curve embedded in the surface must travel to have maximum and minimum curvature, these being given by the principal curvatures.
Geodesic curves on a surface.
Curves on a surface which minimize length between the endpoints are called geodesics; they are the shape that an elastic band stretched between the two points would take. Mathematically they are described using ordinary differential equations and the calculus of variations. The differential geometry of surfaces revolves around the study of geodesics. It is still an open question whether every Riemannian metric on a 2-dimensional local chart arises from an embedding in 3-dimensional Euclidean space: the theory of geodesics has been used to show this is true in the important case when the components of the metric are analytic.
Geodesics.
Given a piecewise smooth path "c"("t") = ("x"("t"), "y"("t")) in the chart for "t" in ["a", "b"], its "length" is defined by
formula_72
and "energy" by
formula_73
The length is independent of the parametrization of a path. By the Euler–Lagrange equations, if "c"("t") is a path minimising length, "parametrized by arclength", it must satisfy the Euler equations
formula_74
formula_75
where the Christoffel symbols Γ are given by
formula_76
where "g"11
"E", "g"12
"F", "g"22
"G" and "g""ij" is the inverse matrix to "g""ij". A path satisfying the Euler equations is called a geodesic. By the Cauchy–Schwarz inequality a path minimising energy is just a geodesic parametrised by arc length; and, for any geodesic, the parameter "t" is proportional to arclength.
Geodesic curvature.
The geodesic curvature "kg" at a point of a curve "c"("t"), parametrised by arc length, on an oriented surface is defined to be
formula_77
where n("t") is the "principal" unit normal to the curve in the surface, constructed by rotating the unit tangent vector ċ("t") through an angle of +90°.
The geodesic curvature measures in a precise way how far a curve on the surface is from being a geodesic.
Orthogonal coordinates.
When "F"
0 throughout a coordinate chart, such as with the geodesic polar coordinates discussed below, the images of lines parallel to the "x"- and "y"-axes are orthogonal and provide orthogonal coordinates. If "H"
("EG")<templatestyles src="Fraction/styles.css" />1⁄2, then the Gaussian curvature is given by
formula_78
If in addition "E"
1, so that "H"
"G"<templatestyles src="Fraction/styles.css" />1⁄2, then the angle "φ" at the intersection between geodesic ("x"("t"),"y"("t")) and the line "y" = constant is given by the equation
formula_79
The derivative of "φ" is given by a classical derivative formula of Gauss:
formula_80
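For reference, in these orthogonal coordinates the curvature takes the following standard form (a sketch; the displayed formulas above are not reproduced):

```latex
K = -\frac{1}{2\sqrt{EG}}\left[
      \frac{\partial}{\partial x}\!\Bigl(\frac{G_x}{\sqrt{EG}}\Bigr)
    + \frac{\partial}{\partial y}\!\Bigl(\frac{E_y}{\sqrt{EG}}\Bigr)\right],
\qquad\text{and, when } E = 1,\qquad
K = -\frac{\bigl(\sqrt{G}\bigr)_{xx}}{\sqrt{G}},
```

the second form being the one that reappears below as the Gauss–Jacobi equation.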
Geodesic polar coordinates.
Once a metric is given on a surface and a base point is fixed, there is a unique geodesic connecting the base point to each sufficiently nearby point. The direction of the geodesic at the base point and the distance uniquely determine the other endpoint. These two bits of data, a direction and a magnitude, thus determine a tangent vector at the base point. The map from tangent vectors to endpoints smoothly sweeps out a neighbourhood of the base point and defines what is called the "exponential map", defining a local coordinate chart at that base point. The neighbourhood swept out has similar properties to balls in Euclidean space, namely any two points in it are joined by a unique geodesic. This property is called "geodesic convexity" and the coordinates are called "normal coordinates". The explicit calculation of normal coordinates can be accomplished by considering the differential equation satisfied by geodesics. The convexity properties are consequences of Gauss's lemma and its generalisations. Roughly speaking this lemma states that geodesics starting at the base point must cut the spheres of fixed radius centred on the base point at right angles. Geodesic polar coordinates are obtained by combining the exponential map with polar coordinates on tangent vectors at the base point. The Gaussian curvature of the surface is then given by the second order deviation of the metric at the point from the Euclidean metric. In particular the Gaussian curvature is an invariant of the metric, Gauss's celebrated "Theorema Egregium". A convenient way to understand the curvature comes from an ordinary differential equation, first considered by Gauss and later generalized by Jacobi, arising from the change of normal coordinates about two different points. The Gauss–Jacobi equation provides another way of computing the Gaussian curvature. Geometrically it explains what happens to geodesics from a fixed base point as the endpoint varies along a small curve segment through data recorded in the Jacobi field, a vector field along the geodesic. One and a quarter centuries after Gauss and Jacobi, Marston Morse gave a more conceptual interpretation of the Jacobi field in terms of second derivatives of the energy function on the infinite-dimensional Hilbert manifold of paths.
Exponential map.
The theory of ordinary differential equations shows that if "f"("t", "v") is smooth then the differential equation "dv"/"dt" = "f"("t", "v") with initial condition "v"(0) = "v"0 has a unique solution for |"t"| sufficiently small and the solution depends smoothly on "t" and "v"0. This implies that for sufficiently small tangent vectors "v" at a given point "p" = ("x"0, "y"0), there is a geodesic "c""v"("t") defined on (−2, 2) with "c""v"(0) = ("x"0, "y"0) and "ċ""v"(0) = "v". Moreover, if |"s"| ≤ 1, then "c""sv"("t") = "c""v"("st"). The "exponential map" is defined by
exp"p"("v") = "c""v"(1)
and gives a diffeomorphism between a disc ‖"v"‖ < "δ" and a neighbourhood of "p"; more generally the map sending ("p", "v") to exp"p"("v") gives a local diffeomorphism onto a neighbourhood of ("p", "p"). The exponential map gives "geodesic normal coordinates" near "p".
Computation of normal coordinates.
There is a standard technique for computing the change of variables to normal coordinates "u", "v" at a point as a formal Taylor series expansion. If the coordinates "x", "y" at (0,0) are locally orthogonal, write
"x"("u","v") = "αu" + "L"("u","v") + "λ"("u","v") + …
"y"("u","v") = "βv" + "M"("u","v") + "μ"("u","v") + …
where "L", "M" are quadratic and "λ", "μ" cubic homogeneous polynomials in "u" and "v". If "u" and "v" are fixed, "x"("t") = "x"("tu","tv") and "y"("t") = "y"("tu", "tv") can be considered as formal power series solutions of the Euler equations: this uniquely determines "α", "β", "L", "M", "λ" and "μ".
Gauss's lemma.
In these coordinates the matrix "g"("x") satisfies "g"(0) = "I" and the lines "t" ↦ "tv" are geodesics through 0. Euler's equations imply the matrix equation
"g"("v")"v" = "v",
a key result, usually called the Gauss lemma. Geometrically it states that the geodesics through 0 cut the circles centred at 0 orthogonally.
Taking polar coordinates ("r","θ"), it follows that the metric has the form
"ds"2 = "dr"2 + "G"("r","θ") "dθ"2.
In geodesic coordinates, it is easy to check that the geodesics through zero minimize length. The topology on the Riemannian manifold is then given by a distance function "d"("p","q"), namely the infimum of the lengths of piecewise smooth paths between "p" and "q". This distance is realised locally by geodesics, so that in normal coordinates "d"(0,"v") = ‖"v"‖. If the radius "δ" is taken small enough, a slight sharpening of the Gauss lemma shows that the image "U" of the disc ‖"v"‖ < "δ" under the exponential map is geodesically convex, i.e. any two points in "U" are joined by a unique geodesic lying entirely inside "U".
Theorema Egregium.
Gauss's Theorema Egregium, the "Remarkable Theorem", shows that the Gaussian curvature of a surface can be computed solely in terms of the metric and is thus an intrinsic invariant of the surface, independent of any isometric embedding in E3 and unchanged under coordinate transformations. In particular, isometries and local isometries of surfaces preserve Gaussian curvature.
This theorem can be expressed in terms of the power series expansion of the metric: in normal coordinates ("u", "v"), the metric is given by
"ds"2 = "du"2 + "dv"2 − "K"("u dv" – "v du")2/12 + ….
Gauss–Jacobi equation.
Taking a coordinate change from normal coordinates at "p" to normal coordinates at a nearby point "q" yields the Sturm–Liouville equation satisfied by "H"("r","θ") = "G"("r","θ")1/2, discovered by Gauss and later generalised by Jacobi,
"H""rr" = –"KH".
The Jacobian of this coordinate change at "q" is equal to "H""r". This gives another way of establishing the intrinsic nature of Gaussian curvature. Because "H"("r","θ") can be interpreted as the length of the line element in the "θ" direction, the Gauss–Jacobi equation shows that the Gaussian curvature measures the spreading of geodesics on a geometric surface as they move away from a point.
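As a sketch of how this works on the model surfaces, in geodesic polar coordinates one has

```latex
H(r,\theta) = r \;(K=0), \qquad
H(r,\theta) = \sin r \;(K=+1), \qquad
H(r,\theta) = \sinh r \;(K=-1),
```

and in each case the equation "H""rr" = −"KH" is immediately verified.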
Laplace–Beltrami operator.
On a surface with local metric
formula_81
and Laplace–Beltrami operator
formula_82
where "H"2
"EG" − "F"2, the Gaussian curvature at a point is given by the formula
formula_83
where "r" denotes the geodesic distance from the point.
In isothermal coordinates, first considered by Gauss, the metric is required to be of the special form
formula_84
In this case the Laplace–Beltrami operator is given by
formula_85
and "φ" satisfies Liouville's equation
formula_86
Isothermal coordinates are known to exist in a neighbourhood of any point on the surface, although all proofs to date rely on non-trivial results on partial differential equations. There is an elementary proof for minimal surfaces.
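Concretely, the relationship between the conformal factor and the curvature can be sketched as follows:

```latex
ds^2 = e^{2\varphi}\,(du^2 + dv^2)
\quad\Longrightarrow\quad
K = -\,e^{-2\varphi}\bigl(\varphi_{uu} + \varphi_{vv}\bigr),
```

which is one form of Liouville's equation. For example, "φ" = log 2 − log(1 − "u"2 − "v"2), corresponding to the Poincaré metric of the unit disk discussed below, gives K = −1.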
Gauss–Bonnet theorem.
On a sphere or a hyperboloid, the area of a geodesic triangle, i.e. a triangle all the sides of which are geodesics, is proportional to the difference of the sum of the interior angles and π. The constant of proportionality is just the Gaussian curvature, a constant for these surfaces. For the torus, the difference is zero, reflecting the fact that its Gaussian curvature is zero. These are standard results in spherical, hyperbolic and high school trigonometry (see below). Gauss generalised these results to an arbitrary surface by showing that the integral of the Gaussian curvature over the interior of a geodesic triangle is also equal to this angle difference or excess. His formula showed that the Gaussian curvature could be calculated near a point as the limit of angle excess over area for geodesic triangles shrinking to the point. Since any closed surface can be decomposed into geodesic triangles, the formula could also be used to compute the integral of the curvature over the whole surface. As a special case of what is now called the Gauss–Bonnet theorem, Gauss proved that this integral was remarkably always 2π times an integer, a topological invariant of the surface called the Euler characteristic. This invariant is easy to compute combinatorially in terms of the number of vertices, edges, and faces of the triangles in the decomposition, also called a triangulation. This interaction between analysis and topology was the forerunner of many later results in geometry, culminating in the Atiyah-Singer index theorem. In particular properties of the curvature impose restrictions on the topology of the surface.
Geodesic triangles.
Gauss proved that, if Δ is a geodesic triangle on a surface with angles "α", "β" and "γ" at vertices "A", "B" and "C", then
formula_87
In fact taking geodesic polar coordinates with origin "A" and "AB", "AC" the radii at polar angles 0 and "α":
formula_88
where the second equality follows from the Gauss–Jacobi equation and the fourth from Gauss's derivative formula in the orthogonal coordinates ("r","θ").
Gauss's formula shows that the curvature at a point can be calculated as the limit of "angle excess" "α" + "β" + "γ" − π over "area" for successively smaller geodesic triangles near the point. Qualitatively a surface is positively or negatively curved according to the sign of the angle excess for arbitrarily small geodesic triangles.
Gauss–Bonnet theorem.
Since every compact oriented 2-manifold "M" can be triangulated by small geodesic triangles, it follows that
formula_89
where "χ"("M") denotes the Euler characteristic of the surface.
In fact if there are "F" faces, "E" edges and "V" vertices, then 3"F" = 2"E" and the left hand side equals 2π"V" – π"F" = 2π("V" – "E" + "F") = 2π"χ"("M").
This is the celebrated Gauss–Bonnet theorem: it shows that the integral of the Gaussian curvature is a topological invariant of the manifold, namely 2π times the Euler characteristic. This theorem can be interpreted in many ways; perhaps one of the most far-reaching has been as the index theorem for an elliptic differential operator on "M", one of the simplest cases of the Atiyah-Singer index theorem. Another related result, which can be proved using the Gauss–Bonnet theorem, is the Poincaré-Hopf index theorem for vector fields on "M" which vanish at only a finite number of points: the sum of the indices at these points equals the Euler characteristic, where the "index" of a point is defined as follows: on a small circle round each isolated zero, the vector field defines a map into the unit circle; the index is just the winding number of this map.
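Two quick consistency checks: the unit sphere has K ≡ 1, area 4π and χ = 2, while a flat torus has K ≡ 0 and χ = 0, so that

```latex
\int_{S^2} K\,dA = 4\pi = 2\pi\,\chi(S^2), \qquad
\int_{T^2} K\,dA = 0 = 2\pi\,\chi(T^2).
```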
Curvature and embeddings.
If the Gaussian curvature of a surface "M" is everywhere positive, then the Euler characteristic is positive so "M" is homeomorphic (and therefore diffeomorphic) to S2. If in addition the surface is isometrically embedded in E3, the Gauss map provides an explicit diffeomorphism. As Hadamard observed, in this case the surface is convex; this criterion for convexity can be viewed as a 2-dimensional generalisation of the well-known second derivative criterion for convexity of plane curves. Hilbert proved that every isometrically embedded closed surface must have a point of positive curvature. Thus a closed Riemannian 2-manifold of non-positive curvature can never be embedded isometrically in E3; however, as Adriano Garsia showed using the Beltrami equation for quasiconformal mappings, this is always possible for some conformally equivalent metric.
Surfaces of constant curvature.
The simply connected surfaces of constant curvature 0, +1 and –1 are the Euclidean plane, the unit sphere in E3, and the hyperbolic plane. Each of these has a transitive three-dimensional Lie group of orientation preserving isometries "G", which can be used to study their geometry. Each of the two non-compact surfaces can be identified with the quotient "G" / "K" where "K" is a maximal compact subgroup of "G". Here "K" is isomorphic to SO(2). Any other closed Riemannian 2-manifold "M" of constant Gaussian curvature, after scaling the metric by a constant factor if necessary, will have one of these three surfaces as its universal covering space. In the orientable case, the fundamental group Γ of "M" can be identified with a torsion-free uniform subgroup of "G" and "M" can then be identified with the double coset space Γ \ "G" / "K". In the case of the sphere and the Euclidean plane, the only possible examples are the sphere itself and tori obtained as quotients of R2 by discrete rank 2 subgroups. For closed surfaces of genus "g" ≥ 2, the moduli space of Riemann surfaces obtained as Γ varies over all such subgroups, has real dimension 6"g" − 6. By Poincaré's uniformization theorem, any orientable closed 2-manifold is conformally equivalent to a surface of constant curvature 0, +1 or –1. In other words, by multiplying the metric by a positive scaling factor, the Gaussian curvature can be made to take exactly one of these values (the sign of the Euler characteristic of "M").
Euclidean geometry.
In the case of the Euclidean plane, the symmetry group is the Euclidean motion group, the semidirect product of
the two dimensional group of translations by the group of rotations. Geodesics are straight lines and the geometry is encoded in the elementary formulas of trigonometry, such as the cosine rule for a triangle with sides "a", "b", "c" and angles "α", "β", "γ":
formula_90
Flat tori can be obtained by taking the quotient of R2 by a lattice, i.e. a free Abelian subgroup of rank 2. These closed surfaces have no isometric embeddings in E3. They do nevertheless admit isometric embeddings in E4; in the easiest case this follows from the fact that the torus is a product of two circles and each circle can be isometrically embedded in E2.
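For instance, for the square torus R2/2πZ2 the map

```latex
(x,y) \;\longmapsto\; (\cos x,\ \sin x,\ \cos y,\ \sin y)
```

is well defined on the quotient and pulls the Euclidean metric of E4 back to "dx"2 + "dy"2, since its partial derivative vectors (−sin "x", cos "x", 0, 0) and (0, 0, −sin "y", cos "y") are orthonormal at every point; it therefore gives an isometric embedding of this flat torus into E4 (a scaled copy of the Clifford torus).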
Spherical geometry.
The isometry group of the unit sphere "S"2 in E3 is the orthogonal group O(3), with the rotation group SO(3) as the subgroup of isometries preserving orientation. It is the direct product of SO(3) with the antipodal map, sending "x" to –"x". The group SO(3) acts transitively on "S"2. The stabilizer subgroup of the unit vector (0,0,1) can be identified with SO(2), so that "S"2 = SO(3)/SO(2).
The geodesics between two points on the sphere are the great circle arcs with these given endpoints. If the points are not antipodal, there is a unique shortest geodesic between the points. The geodesics can also be described group theoretically: each geodesic through the North pole (0,0,1) is the orbit of the subgroup of rotations about an axis through antipodal points on the equator.
A spherical triangle is a geodesic triangle on the sphere. It is defined by points "A", "B", "C" on the sphere with sides "BC", "CA", "AB" formed from great circle arcs of length less than π. If the lengths of the sides are "a", "b", "c" and the angles between the sides "α", "β", "γ", then the spherical cosine law states that
formula_91
The area of the triangle is given by
Area = "α" + "β" + "γ" − π.
Using stereographic projection from the North pole, the sphere can be identified with the extended complex plane C ∪ {∞}. The explicit map is given by
formula_92
Under this correspondence every rotation of "S"2 corresponds to a Möbius transformation in SU(2), unique up to sign. With respect to the coordinates ("u", "v") in the complex plane, the spherical metric becomes
formula_93
The unit sphere is the unique closed orientable surface with constant curvature +1. The quotient SO(3)/O(2) can be identified with the real projective plane. It is non-orientable and can be described as the quotient of "S"2 by the antipodal map (multiplication by −1). The sphere is simply connected, while the real projective plane has fundamental group Z2. The finite subgroups of SO(3), corresponding to the finite subgroups of O(2) and the symmetry groups of the platonic solids, do not act freely on "S"2, so the corresponding quotients are not 2-manifolds, just orbifolds.
Hyperbolic geometry.
Non-Euclidean geometry was first discussed in letters of Gauss, who made extensive computations at the turn of the nineteenth century which, although privately circulated, he decided not to put into print. In 1830 Lobachevsky and independently in 1832 Bolyai, the son of one of Gauss's correspondents, published synthetic versions of this new geometry, for which they were severely criticized. However it was not until 1868 that Beltrami, followed by Klein in 1871 and Poincaré in 1882, gave concrete analytic models for what Klein dubbed hyperbolic geometry. The four models of 2-dimensional hyperbolic geometry that emerged were the Beltrami–Klein model, the Poincaré disk model, the Poincaré upper half-plane model, and the hyperboloid model in three-dimensional Minkowski space.
The first model, based on a disk, has the advantage that geodesics are actually line segments (that is, intersections of Euclidean lines with the open unit disk). The last model has the advantage that it gives a construction which is completely parallel to that of the unit sphere in 3-dimensional Euclidean space. Because of their application in complex analysis and geometry, however, the models of Poincaré are the most widely used: they are interchangeable thanks to the Möbius transformations between the disk and the upper half-plane.
Let
formula_94
be the Poincaré disk in the complex plane with Poincaré metric
formula_95
In polar coordinates ("r", "θ") the metric is given by
formula_96
The length of a curve "γ":["a","b"] → "D" is given by the formula
formula_97
The group "G"
SU(1,1) given by
formula_98
acts transitively by Möbius transformations on "D" and the stabilizer subgroup of 0 is the rotation group
formula_99
The quotient group SU(1,1)/±"I" is the group of orientation-preserving isometries of "D". Any two points "z", "w" in "D" are joined by a unique geodesic, given by the portion of the circle or straight line passing through "z" and "w" and orthogonal to the boundary circle. The distance between "z" and "w" is given by
formula_100
In particular "d"(0,"r")
2 tanh−1 "r" and "c"("t")
tanh "t" is the geodesic through 0 along the real axis, parametrized by arclength.
The topology defined by this metric is equivalent to the usual Euclidean topology, although, unlike the Euclidean metric on the open disk, the metric space ("D","d") is complete.
A hyperbolic triangle is a geodesic triangle for this metric: any three points in "D" are vertices of a hyperbolic triangle. If the sides have length "a", "b", "c" with corresponding angles "α", "β", "γ", then the hyperbolic cosine rule states that
formula_101
The area of the hyperbolic triangle is given by Area = π − "α" − "β" − "γ".
The unit disk and the upper half-plane
formula_102
are conformally equivalent by the Möbius transformations
formula_103
Under this correspondence the action of SL(2,R) by Möbius transformations on "H" corresponds to that of SU(1,1) on "D". The metric on "H" becomes
formula_104
Since lines or circles are preserved under Möbius transformations, geodesics are again described by lines or circles orthogonal to the real axis.
The unit disk with the Poincaré metric is the unique simply connected oriented 2-dimensional Riemannian manifold with constant curvature −1. Any oriented closed surface "M" with this property has "D" as its universal covering space. Its fundamental group can be identified with a torsion-free cocompact subgroup Γ of SU(1,1), in such a way that
formula_105
In this case Γ is a finitely presented group. The generators and relations are encoded in a geodesically convex fundamental geodesic polygon in "D" (or "H") corresponding geometrically to closed geodesics on "M".
Examples.
Uniformization.
Given an oriented closed surface "M" with Gaussian curvature "K", the metric on "M" can be changed conformally by scaling it by a factor "e"2"u". The new Gaussian curvature "K′" is then given by
formula_106
where Δ is the Laplacian for the original metric. Thus to show that a given surface is conformally equivalent to a metric with constant curvature "K′" it suffices to solve the following variant of Liouville's equation:
formula_107
When "M" has Euler characteristic 0, so is diffeomorphic to a torus, "K′"
0, so this amounts to solving
formula_108
By standard elliptic theory, this is possible because the integral of "K" over "M" is zero, by the Gauss–Bonnet theorem.
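To make the role of the zero-mean condition concrete, here is a rough sketch (illustrative only: it uses the flat Laplacian on the square torus and an invented zero-mean function "K", not the metric Laplacian of the text) that inverts Δ"u" = "K" mode by mode in Fourier space:

```python
# Spectral solution of  Laplacian(u) = K  on the flat torus [0, 2*pi)^2.
# The zero-mean condition on K is exactly what allows dividing by -|k|^2
# on the non-constant Fourier modes; the constant mode is left at zero.
import numpy as np

n = 64
x = np.linspace(0, 2*np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing='ij')
K = np.cos(X) * np.cos(2*Y) + 0.5 * np.sin(3*X)     # sample right-hand side with mean zero

k = np.fft.fftfreq(n, d=1.0/n)                      # integer wave numbers
KX, KY = np.meshgrid(k, k, indexing='ij')
symbol = -(KX**2 + KY**2)                           # Fourier symbol of the flat Laplacian

K_hat = np.fft.fft2(K)
u_hat = np.zeros_like(K_hat)
nz = symbol != 0
u_hat[nz] = K_hat[nz] / symbol[nz]                  # invert on the non-constant modes
u = np.real(np.fft.ifft2(u_hat))

residual = np.real(np.fft.ifft2(symbol * np.fft.fft2(u))) - K
print(np.max(np.abs(residual)))                     # close to machine precision
```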
When "M" has negative Euler characteristic, "K′"
−1, so the equation to be solved is:
formula_109
Using the continuity of the exponential map on Sobolev space due to Neil Trudinger, this non-linear equation can always be solved.
Finally in the case of the 2-sphere, "K′" = 1 and the equation becomes:
formula_110
So far this non-linear equation has not been analysed directly, although classical results such as the Riemann–Roch theorem imply that it always has a solution. The method of Ricci flow, developed by Richard S. Hamilton, gives another proof of existence based on non-linear partial differential equations. In fact the Ricci flow on conformal metrics on "S"2 is defined on functions "u"("x", "t") by
formula_111
Chow showed that "K′" becomes positive after finite time; previous results of Hamilton could then be used to show that "K′" converges to +1. Prior to these results on Ricci flow, an alternative and technically simpler approach to uniformization had been given, based on the flow on Riemannian metrics "g" defined by log det Δ"g".
A proof using elliptic operators, discovered in 1988, runs as follows. Let "G" be the Green's function on "S"2 satisfying Δ"G" = 1 + 4π"δ""P", where "δ""P" is the point measure at a fixed point "P" of "S"2. The equation Δ"v" = 2"K" − 2 has a smooth solution "v", because the right hand side has integral 0 by the Gauss–Bonnet theorem. Thus "φ" = 2"G" + "v" satisfies Δ"φ" = 2"K" away from "P". It follows that "g"1 = "e""φ""g" is a complete metric of constant curvature 0 on the complement of "P", which is therefore isometric to the plane. Composing with stereographic projection, it follows that there is a smooth function "u" such that "e"2"u""g" has Gaussian curvature +1 on the complement of "P". The function "u" automatically extends to a smooth function on the whole of "S"2.
Riemannian connection and parallel transport.
The classical approach of Gauss to the differential geometry of surfaces was the standard elementary approach which predated the emergence of the concepts of Riemannian manifold initiated by Bernhard Riemann in the mid-nineteenth century and of connection developed by Tullio Levi-Civita, Élie Cartan and Hermann Weyl in the early twentieth century. The notion of connection, covariant derivative and parallel transport gave a more conceptual and uniform way of understanding curvature, which not only allowed generalisations to higher dimensional manifolds but also provided an important tool for defining new geometric invariants, called characteristic classes. The approach using covariant derivatives and connections is nowadays the one adopted in more advanced textbooks.
Covariant derivative.
Connections on a surface can be defined from various equivalent but equally important points of view. The Riemannian connection or Levi-Civita connection is perhaps most easily understood in terms of lifting vector fields, considered as first order differential operators acting on functions on the manifold, to differential operators on the tangent bundle or frame bundle. In the case of an embedded surface, the lift to an operator on vector fields, called the covariant derivative, is very simply described in terms of orthogonal projection. Indeed, a vector field on a surface embedded in R3 can be regarded as a function from the surface into R3. Another vector field acts as a differential operator component-wise. The resulting vector field will not be tangent to the surface, but this can be corrected by taking its orthogonal projection onto the tangent space at each point of the surface. As Ricci and Levi-Civita realised at the turn of the twentieth century, this process depends only on the metric and can be locally expressed in terms of the Christoffel symbols.
Parallel transport.
Parallel transport of tangent vectors along a curve in the surface was the next major advance in the subject, due to Levi-Civita. It is related to the earlier notion of covariant derivative, because it is the monodromy of the ordinary differential equation on the curve defined by the covariant derivative with respect to the velocity vector of the curve. Parallel transport along geodesics, the "straight lines" of the surface, can also easily be described directly. A vector in the tangent plane is transported along a geodesic as the unique vector field with constant length and making a constant angle with the velocity vector of the geodesic. For a general curve, this process has to be modified using the geodesic curvature, which measures how far the curve departs from being a geodesic.
A vector field "v"("t") along a unit speed curve "c"("t"), with geodesic curvature "k""g"("t"), is said to be parallel along the curve if
formula_112
This recaptures the rule for parallel transport along a geodesic or piecewise geodesic curve, because in that case "k""g" = 0, so that the angle "θ"("t") should remain constant on any geodesic segment. The existence of parallel transport follows because "θ"("t") can be computed as the integral of the geodesic curvature. Since it therefore depends continuously on the "L"2 norm of "k""g", it follows that parallel transport for an arbitrary curve can be obtained as the limit of parallel transport along approximating piecewise geodesic curves.
The connection can thus be described in terms of lifting paths in the manifold to paths in the tangent or orthonormal frame bundle, thus formalising the classical theory of the "moving frame", favoured by French authors. Lifts of loops about a point give rise to the holonomy group at that point. The Gaussian curvature at a point can be recovered from parallel transport around increasingly small loops at the point. Equivalently curvature can be calculated directly at an infinitesimal level in terms of Lie brackets of lifted vector fields.
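The recovery of curvature from parallel transport around small loops can be illustrated numerically. In the sketch below (an approximation, not the construction in the text: transport is imitated by repeatedly projecting the vector onto successive tangent planes of the embedded sphere), the holonomy angle around a circle of latitude comes out equal to the area it encloses, as expected for curvature +1:

```python
# Approximate parallel transport around a circle of colatitude phi on the unit
# sphere.  The holonomy angle should equal the enclosed area 2*pi*(1 - cos(phi)),
# i.e. the integral of the Gaussian curvature (+1) over the spherical cap.
import numpy as np

phi, steps = 1.0, 20000
t = np.linspace(0, 2*np.pi, steps + 1)
points = np.column_stack((np.sin(phi)*np.cos(t),
                          np.sin(phi)*np.sin(t),
                          np.full_like(t, np.cos(phi))))

v = np.array([np.cos(phi), 0.0, -np.sin(phi)])   # unit tangent vector at the starting point
v0 = v.copy()
for p in points[1:]:
    v = v - np.dot(v, p) * p                     # project onto the new tangent plane
    v /= np.linalg.norm(v)                       # keep unit length

holonomy = np.arccos(np.clip(np.dot(v, v0), -1.0, 1.0))
print(holonomy, 2*np.pi*(1 - np.cos(phi)))       # approximately equal (first-order scheme)
```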
Connection 1-form.
The approach of Cartan and Weyl, using connection 1-forms on the frame bundle of "M", gives a third way to understand the Riemannian connection. They noticed that parallel transport dictates that a path in the surface be lifted to a path in the frame bundle so that its tangent vectors lie in a special subspace of codimension one in the three-dimensional tangent space of the frame bundle. The projection onto this subspace is defined by a differential 1-form on the orthonormal frame bundle, the connection form. This enabled the curvature properties of the surface to be encoded in differential forms on the frame bundle and formulas involving their exterior derivatives.
This approach is particularly simple for an embedded surface: the connection 1-form on a surface embedded in Euclidean space E3 is just the pullback under the Gauss map of the connection 1-form on "S"2. Using the identification of "S"2 with the homogeneous space SO(3)/SO(2), the connection 1-form is just a component of the Maurer–Cartan 1-form on SO(3).
Global differential geometry of surfaces.
Although the characterisation of curvature involves only the local geometry of a surface, there are important global aspects such as the Gauss–Bonnet theorem, the uniformization theorem, the von Mangoldt-Hadamard theorem, and the embeddability theorem. There are other important aspects of the global geometry of surfaces. These include:
and the length of its smallest closed geodesic. This improved a theorem of Bonnet who showed in 1855 that the diameter of a closed surface of positive Gaussian curvature is always bounded above by "δ"; in other words a geodesic realising the metric distance between two points cannot have length greater than "δ".
Reading guide.
One of the most comprehensive introductory surveys of the subject charts the historical development from before Gauss to modern times. Accounts of the classical theory are given in several standard references, while more modern, copiously illustrated undergraduate textbooks may be found more accessible. More sophisticated graduate-level treatments develop the material using the Riemannian connection on a surface.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "f(s,t)=\\big((R \\cos s +r)\\cos t, (R \\cos s +r) \\sin t, R\\sin s\\big)."
},
{
"math_id": 1,
"text": "\\pm\\left.\\frac{\\frac{\\partial f}{\\partial u}\\times\\frac{\\partial f}{\\partial v}}{\\big\\|\\frac{\\partial f}{\\partial u}\\times\\frac{\\partial f}{\\partial v}\\big\\|}\\right|_{f^{-1}(p)},\\qquad \\pm\\left.\\frac{\\big(\\frac{\\partial h}{\\partial u},\\frac{\\partial h}{\\partial v},-1\\big)}{\\sqrt{1+\\big(\\frac{\\partial h}{\\partial u}\\big)^2+\\big(\\frac{\\partial h}{\\partial v}\\big)^2}}\\right|_{(p_1,p_2)},\\qquad\\text{or}\\qquad\\pm\\frac{\\nabla F(p)}{\\big\\|\\nabla F(p)\\big\\|},"
},
{
"math_id": 2,
"text": "\\begin{pmatrix}X^1\\\\ X^2\\end{pmatrix}=A_{f'(p)}\\begin{pmatrix}(X')^1\\\\ (X')^2\\end{pmatrix},"
},
{
"math_id": 3,
"text": "X^1\\frac{\\partial f}{\\partial u}+X^2\\frac{\\partial f}{\\partial v}."
},
{
"math_id": 4,
"text": "S"
},
{
"math_id": 5,
"text": "U"
},
{
"math_id": 6,
"text": "f:U\\rightarrow S"
},
{
"math_id": 7,
"text": "V=f(U)"
},
{
"math_id": 8,
"text": "C^\\infty(U)"
},
{
"math_id": 9,
"text": "C^\\infty(V)"
},
{
"math_id": 10,
"text": "f"
},
{
"math_id": 11,
"text": "V"
},
{
"math_id": 12,
"text": "X= a\\partial_u + b\\partial_v"
},
{
"math_id": 13,
"text": "X"
},
{
"math_id": 14,
"text": "g"
},
{
"math_id": 15,
"text": "Xg"
},
{
"math_id": 16,
"text": "X(gh)= (Xg) h + g (Xh)."
},
{
"math_id": 17,
"text": "[X,Y]=XY-YX"
},
{
"math_id": 18,
"text": "[X,Y]"
},
{
"math_id": 19,
"text": "[X,Y]=-[Y,X]"
},
{
"math_id": 20,
"text": " [[X,Y],Z] + [[Y,Z],X] + [[Z,X],Y]=0."
},
{
"math_id": 21,
"text": "\\begin{pmatrix}\\frac{\\partial f'}{\\partial u}\\\\ \\frac{\\partial f'}{\\partial v}\\end{pmatrix}=A\\begin{pmatrix}\\frac{\\partial f}{\\partial u}\\\\ \\frac{\\partial f}{\\partial v}\\end{pmatrix},"
},
{
"math_id": 22,
"text": "\\begin{align}\n\\lim_{(h,k)\\to(0,0)}\\frac{\\big|f(u+h,v+k) - f(u,v)\\big|^2-\\big(Eh^2+2Fhk+Gk^2\\big)}{h^2+k^2} &= 0\\\\\n\\lim_{(h,k)\\to(0,0)}\\frac{\\big(f(u+h,v+k) - f(u,v)\\big)\\cdot n-\\frac{1}{2}\\big(Lh^2 +2M hk + Nk^2\\big)}{h^2+k^2} &= 0,\n\\end{align}"
},
{
"math_id": 23,
"text": "\\begin{pmatrix}\\Gamma_{11}^1&\\Gamma_{12}^1&\\Gamma_{21}^1&\\Gamma_{22}^1\\\\ \\Gamma_{11}^2&\\Gamma_{12}^2&\\Gamma_{21}^2&\\Gamma_{22}^2\\end{pmatrix}=\\begin{pmatrix}E&F\\\\ F&G\\end{pmatrix}^{-1}\\begin{pmatrix}\\frac{1}{2}\\frac{\\partial E}{\\partial u}&\\frac{1}{2}\\frac{\\partial E}{\\partial v}&\\frac{1}{2}\\frac{\\partial E}{\\partial v} &\\frac{\\partial F}{\\partial v}-\\frac{1}{2}\\frac{\\partial G}{\\partial u}\\\\ \\frac{\\partial F}{\\partial u}-\\frac{1}{2}\\frac{\\partial E}{\\partial v}&\\frac{1}{2}\\frac{\\partial G}{\\partial u}&\\frac{1}{2}\\frac{\\partial G}{\\partial u}&\\frac{1}{2}\\frac{\\partial G}{\\partial v}\\end{pmatrix}."
},
{
"math_id": 24,
"text": "\\begin{align}\n\\frac{\\partial^2f}{\\partial u^2}&=\\Gamma_{11}^1\\frac{\\partial f}{\\partial u}+\\Gamma_{11}^2\\frac{\\partial f}{\\partial v}+Ln\\\\\n\\frac{\\partial^2f}{\\partial u\\partial v}&=\\Gamma_{12}^1\\frac{\\partial f}{\\partial u}+\\Gamma_{12}^2\\frac{\\partial f}{\\partial v}+Mn\\\\\n\\frac{\\partial^2f}{\\partial v^2}&=\\Gamma_{22}^1\\frac{\\partial f}{\\partial u}+\\Gamma_{22}^2\\frac{\\partial f}{\\partial v}+Nn.\n\\end{align}"
},
{
"math_id": 25,
"text": "\\begin{align}\n\\frac{\\partial L}{\\partial v}-\\frac{\\partial M}{\\partial u}&=L\\Gamma_{12}^1 + M(\\Gamma_{12}^2-\\Gamma_{11}^1) - N\\Gamma_{11}^2\\\\\n\\frac{\\partial M}{\\partial v}-\\frac{\\partial N}{\\partial u}&=L\\Gamma_{22}^1 + M(\\Gamma_{22}^2-\\Gamma_{12}^1) - N\\Gamma_{12}^2.\n\\end{align}"
},
{
"math_id": 26,
"text": "\\begin{align}\nKE&=\\frac{\\partial\\Gamma_{11}^2}{\\partial v}-\\frac{\\partial \\Gamma_{21}^2}{\\partial u}+\\Gamma_{21}^2\\Gamma_{11}^1+\\Gamma_{22}^2\\Gamma_{11}^2-\\Gamma_{11}^2\\Gamma_{21}^1-\\Gamma_{12}^2\\Gamma_{21}^2\\\\\nKF&=\\frac{\\partial\\Gamma_{12}^2}{\\partial v}-\\frac{\\partial\\Gamma_{22}^2}{\\partial u}+\\Gamma_{21}^2\\Gamma_{12}^1-\\Gamma_{11}^2\\Gamma_{22}^1\\\\\nKG&=\\frac{\\partial\\Gamma_{22}^1}{\\partial u}-\\frac{\\partial\\Gamma_{12}^1}{\\partial v}+\\Gamma_{11}^1\\Gamma_{22}^1+\\Gamma_{12}^1\\Gamma_{22}^2-\\Gamma_{21}^1\\Gamma_{12}^1-\\Gamma_{22}^1\\Gamma_{12}^2\n\\end{align}"
},
{
"math_id": 27,
"text": "K = \\frac{1}{(EG-F^2)^2}\\det\\begin{pmatrix}-{1 \\over 2}\\frac{\\partial^2E}{\\partial v^2} + \\frac{\\partial^2F}{\\partial u\\partial v} - {1 \\over 2}\\frac{\\partial^2G}{\\partial u^2} & {1 \\over 2}\\frac{\\partial E}{\\partial u} & \\frac{\\partial F}{\\partial u} - {1 \\over 2} \\frac{\\partial E}{\\partial v}\\\\\n\\frac{\\partial F}{\\partial v} - {1 \\over 2}\\frac{\\partial G}{\\partial u} & E & F \\\\ {1\\over 2}\\frac{\\partial G}{\\partial v}& F & G\\end{pmatrix}-\\frac{1}{(EG-F^2)^2}\\det\\begin{pmatrix} 0 & {1 \\over 2}\\frac{\\partial E}{\\partial v} & {1 \\over 2}\\frac{\\partial G}{\\partial u}\\\\\n{1 \\over 2}\\frac{\\partial E}{\\partial v} & E & F \\\\ {1 \\over 2}\\frac{\\partial G}{\\partial u}& F & G\\end{pmatrix}."
},
{
"math_id": 28,
"text": "\\varphi"
},
{
"math_id": 29,
"text": "p"
},
{
"math_id": 30,
"text": "w_1,\\,\\, w_2"
},
{
"math_id": 31,
"text": " E(p) w_1\\cdot w_1 + 2F(p) w_1\\cdot w_2 + G(p) w_2\\cdot w_2= E(\\varphi(p)) \\varphi^\\prime(w_1)\\cdot \\varphi^\\prime(w_1) +2F(\\varphi(p)) \\varphi^\\prime(w_1)\\cdot \\varphi^\\prime(w_2) + G (\\varphi(p)) \\varphi^\\prime(w_1)\\cdot \\varphi^\\prime(w_2)."
},
{
"math_id": 32,
"text": "(w_1,w_2)_p=(\\varphi^\\prime(w_1),\\varphi^\\prime(w_2))_{\\varphi(p)}"
},
{
"math_id": 33,
"text": "\\gamma(t)=(x(t),y(t))"
},
{
"math_id": 34,
"text": "L(\\gamma)=\\int_a^b \\sqrt{E\\dot{x}\\cdot \\dot{x} +2F \\dot{x}\\cdot \\dot{y} +G\\dot{y}\\cdot \\dot{y} } \\, dt"
},
{
"math_id": 35,
"text": "L(\\varphi\\circ \\gamma) = L(\\gamma)."
},
{
"math_id": 36,
"text": "\\gamma"
},
{
"math_id": 37,
"text": "\\dot{x}"
},
{
"math_id": 38,
"text": "\\dot{y}"
},
{
"math_id": 39,
"text": "w_1"
},
{
"math_id": 40,
"text": "w_2"
},
{
"math_id": 41,
"text": "\\varphi^\\prime(w_1)"
},
{
"math_id": 42,
"text": "\\varphi^\\prime(w_2)"
},
{
"math_id": 43,
"text": "(\\varphi^\\prime(w_1),\\varphi^\\prime(w_2))_{\\varphi(p)} = (w_1,w_1)_p"
},
{
"math_id": 44,
"text": "f_1"
},
{
"math_id": 45,
"text": "f_2"
},
{
"math_id": 46,
"text": "S_1"
},
{
"math_id": 47,
"text": "S_2"
},
{
"math_id": 48,
"text": "E_1=E_2"
},
{
"math_id": 49,
"text": "F_1=F_2"
},
{
"math_id": 50,
"text": "G_1=G_2"
},
{
"math_id": 51,
"text": "\\varphi=f_2\\circ f_1^{-1}"
},
{
"math_id": 52,
"text": "f_1(U)"
},
{
"math_id": 53,
"text": "f_2(U)"
},
{
"math_id": 54,
"text": "X_p=X^1\\big(f^{-1}(p)\\big)\\frac{\\partial f}{\\partial u}\\Big|_{f^{-1}(p)}+X^2\\big(f^{-1}(p)\\big)\\frac{\\partial f}{\\partial v}\\Big|_{f^{-1}(p)}"
},
{
"math_id": 55,
"text": "(\\nabla_YX)^k=D_{(Y^1,Y^2)}X^k\\Big|_{f^{-1}(p)}+\\sum_{i=1}^2\\sum_{j=1}^2\\big(\\Gamma_{ij}^kX^j\\big)\\Big|_{f^{-1}(p)}Y^i,\\qquad(k=1,2)"
},
{
"math_id": 56,
"text": "(\\nabla_YX)^1\\frac{\\partial f}{\\partial u}+(\\nabla_YX)^2\\frac{\\partial f}{\\partial v}"
},
{
"math_id": 57,
"text": "\\frac{\\partial\\Gamma_{11}^2}{\\partial v}-\\frac{\\partial \\Gamma_{21}^2}{\\partial u}+\\Gamma_{21}^2\\Gamma_{11}^1+\\Gamma_{22}^2\\Gamma_{11}^2-\\Gamma_{11}^2\\Gamma_{21}^1-\\Gamma_{12}^2\\Gamma_{21}^2"
},
{
"math_id": 58,
"text": "\\nabla_{\\frac{\\partial f}{\\partial v}}\\nabla_{\\frac{\\partial f}{\\partial u}}\\frac{\\partial f}{\\partial u}-\\nabla_{\\frac{\\partial f}{\\partial u}}\\nabla_{\\frac{\\partial f}{\\partial v}}\\frac{\\partial f}{\\partial u}"
},
{
"math_id": 59,
"text": " x= c_1(s),\\,\\, z=c_2(s)"
},
{
"math_id": 60,
"text": "S=\\Big\\{\\big(c_1(s)\\cos t, c_1(s)\\sin t,c_2(s)\\big)\\colon s\\in (a,b)\\text{ and }t\\in\\mathbb{R}\\Big\\}"
},
{
"math_id": 61,
"text": "f(s,t)=\\big(c_1(s)\\cos t, c_1(s)\\sin t,c_2(s)\\big)."
},
{
"math_id": 62,
"text": "K=-\\frac{c_1''(s)}{c_1(s)}\\qquad\\text{and}\\qquad H=c_1'(s)c_2''(s)-c_2'(s)c_1''(s)+\\frac{c_2'(s)}{c_1(s)}."
},
{
"math_id": 63,
"text": " {x^2\\over a} + {y^2\\over b} +{z^2\\over c}=1."
},
{
"math_id": 64,
"text": "x=\\sqrt{a(a-u)(a-v)\\over (a-b)(a-c)},\\,\\, y=\\sqrt{b(b-u)(b-v)\\over (b-a) (b-c)}, \\,\\, z=\\sqrt{c(c-u)(c-v)\\over (c-b)(c-a)}."
},
{
"math_id": 65,
"text": "K={abc\\over u^2 v^2} ,\\,\\,K_m=-(u+v)\\sqrt{abc\\over u^3v^3}."
},
{
"math_id": 66,
"text": "u\\cdot v=0, \\,\\,\\|u\\|=1,\\,\\,\\|v\\|=1."
},
{
"math_id": 67,
"text": "c(t) + s\\cdot u(t)"
},
{
"math_id": 68,
"text": "a=\\|u_t\\|, \\,\\, b=u_t\\cdot v, \\,\\, \\alpha=-\\frac{b}{a^2}, \\,\\, \\beta=\\frac{\\sqrt{a^2-b^2}}{a^2},"
},
{
"math_id": 69,
"text": "K=-{\\beta^2\\over ((s-\\alpha)^2 +\\beta^2)^2} ,\\,\\, K_m=-{r[(s-\\alpha)^2 +\\beta^2)] +\\beta_t(s-\\alpha) + \\beta\\alpha_t\\over\n [(s-\\alpha)^2 +\\beta^2]^{\\frac32}}."
},
{
"math_id": 70,
"text": " (S_x v, w) =(dn(v), w)"
},
{
"math_id": 71,
"text": "S(v)=\\pm \\nabla_{v}n"
},
{
"math_id": 72,
"text": " L(c) = \\int_a^b (E\\dot{x}^2 + 2F \\dot{x}\\dot{y} + G \\dot{y}^2)^{\\frac12}\\, dt "
},
{
"math_id": 73,
"text": " E(c) = \\int_a^b (E\\dot{x}^2 + 2F \\dot{x}\\dot{y} + G \\dot{y}^2)\\, dt. "
},
{
"math_id": 74,
"text": "\\ddot{x} + \\Gamma_{11}^1 \\dot{x}^2 + 2\\Gamma_{12}^1 \\dot{x}\\dot{y}+ \\Gamma_{22}^1\\dot{y}^2 =0"
},
{
"math_id": 75,
"text": "\\ddot{y}+ \\Gamma_{11}^2 \\dot{x}^2 + 2\\Gamma_{12}^2 \\dot{x}\\dot{y}+ \\Gamma_{22}^2 \\dot{y}^2 =0"
},
{
"math_id": 76,
"text": "\\Gamma_{ij}^k = \\tfrac12 g^{km}(\\partial_j g_{im} + \\partial_i g_{jm} - \\partial_m g_{ij})"
},
{
"math_id": 77,
"text": "k_g= \\ddot{c}(t)\\cdot \\mathbf{n}(t)."
},
{
"math_id": 78,
"text": " K=-{1\\over 2H} \\left[\\partial_x\\left(\\frac{G_x}{H}\\right) +\\partial_y\\left(\\frac{E_y}{H}\\right)\\right]."
},
{
"math_id": 79,
"text": "\\tan \\varphi = H\\cdot \\frac{\\dot{y}}{\\dot{x}}."
},
{
"math_id": 80,
"text": " \\dot{\\varphi} = -H_x \\cdot \\dot{y}."
},
{
"math_id": 81,
"text": " ds^2 = E \\, dx^2 + 2F \\, dx \\, dy + G \\, dy^2 "
},
{
"math_id": 82,
"text": "\\Delta f = {1\\over H} \\left(\\partial_x {G\\over H} \\partial_x f - \\partial_x {F\\over H}\\partial_y f -\\partial_y {F\\over H}\\partial_x f + \\partial_y {E\\over H}\\partial_yf\\right),"
},
{
"math_id": 83,
"text": " K=- 3 \\lim_{r\\rightarrow 0} \\Delta (\\log r),"
},
{
"math_id": 84,
"text": "ds^2 = e^\\varphi (dx^2+dy^2). \\, "
},
{
"math_id": 85,
"text": "\\Delta = e^{-\\varphi} \\left(\\frac{\\partial^2 }{\\partial x^2} + \\frac{\\partial^2 }{\\partial y^2}\\right)"
},
{
"math_id": 86,
"text": "\\Delta \\varphi=-2K. \\, "
},
{
"math_id": 87,
"text": "\\int_\\Delta K\\,dA = \\alpha + \\beta + \\gamma - \\pi."
},
{
"math_id": 88,
"text": "\\begin{align}\n\\int_\\Delta K\\,dA &= \\int_\\Delta KH\\,dr\\,d\\theta = - \\int_0^\\alpha \\int_0^{r_\\theta} \\! H_{rr}\\,dr\\,d\\theta \\\\\n&= \\int_0^\\alpha 1 -H_r(r_\\theta,\\theta)\\,d\\theta = \\int_0^\\alpha d\\theta + \\int_{\\pi-\\beta}^\\gamma \\!\\! d\\varphi \\\\\n&= \\alpha + \\beta + \\gamma - \\pi,\n\\end{align}"
},
{
"math_id": 89,
"text": " \\int_M K dA = 2\\pi\\,\\chi(M)"
},
{
"math_id": 90,
"text": " c^2 = a^2 +b^2 -2ab \\,\\cos \\gamma."
},
{
"math_id": 91,
"text": "\\cos c = \\cos a \\, \\cos b + \\sin a\\, \\sin b \\,\\cos \\gamma."
},
{
"math_id": 92,
"text": "\\pi(x,y,z)={x+iy\\over 1-z}\\equiv u + iv."
},
{
"math_id": 93,
"text": " ds^2 = {4(du^2 + dv^2)\\over (1+u^2+v^2)^2}."
},
{
"math_id": 94,
"text": "D=\\{z\\,\\colon |z|<1\\}"
},
{
"math_id": 95,
"text": "ds^2= {4(dx^2 +dy^2)\\over (1-x^2-y^2)^2}."
},
{
"math_id": 96,
"text": " ds^2= {4(dr^2 + r^2\\, d\\theta^2)\\over (1-r^2)^2}."
},
{
"math_id": 97,
"text": "\\ell(\\gamma)=\\int_a^b {2|\\gamma^\\prime(t)|\\, dt\\over 1 -|\\gamma(t)|^2}."
},
{
"math_id": 98,
"text": "G=\\left\\{ \\begin{pmatrix}\n\\alpha & \\beta \\\\\n\\overline{\\beta} & \\overline{\\alpha}\n\\end{pmatrix} : \\alpha,\\beta\\in\\mathbf{C},\\,|\\alpha|^2 -|\\beta|^2=1 \\right\\}"
},
{
"math_id": 99,
"text": " K=\\left\\{ \\begin{pmatrix}\n\\zeta & 0 \\\\\n0 & \\overline{\\zeta}\n\\end{pmatrix} : \\zeta\\in\\mathbf{C},\\,|\\zeta| =1 \\right\\}."
},
{
"math_id": 100,
"text": "d(z,w)=2 \\tanh^{-1} \\frac{|z-w|}{|1-\\overline{w}z|}."
},
{
"math_id": 101,
"text": "\\cosh c = \\cosh a\\, \\cosh b - \\sinh a \\,\\sinh b \\,\\cos \\gamma."
},
{
"math_id": 102,
"text": "H=\\{w=x+iy \\,\\colon\\, y >0\\}"
},
{
"math_id": 103,
"text": " w=i {1+z\\over 1-z},\\,\\, z={w-i\\over w+i}."
},
{
"math_id": 104,
"text": " ds^2 = {dx^2 + dy^2\\over y^2}."
},
{
"math_id": 105,
"text": " M= \\Gamma\\backslash G /K."
},
{
"math_id": 106,
"text": "K^\\prime(x)= e^{-2u} (K(x) - \\Delta u),"
},
{
"math_id": 107,
"text": "\\Delta u = K^\\prime e^{2u} + K(x)."
},
{
"math_id": 108,
"text": " \\Delta u = K(x)."
},
{
"math_id": 109,
"text": "\\Delta u = -e^{2u} + K(x)."
},
{
"math_id": 110,
"text": "\\Delta u = e^{2u} + K(x)."
},
{
"math_id": 111,
"text": " u_t = 4\\pi - K'(x,t) = 4\\pi -e^{-2u} (K(x) - \\Delta u). "
},
{
"math_id": 112,
"text": "\\dot{\\theta}(t) = - k_g(t)"
}
] |
https://en.wikipedia.org/wiki?curid=15513875
|
15515301
|
Foundations of statistics
|
The foundations of statistics are the mathematical and philosophical bases for statistical methods. These bases are theoretical frameworks that ground and justify methods of statistical inference, estimation, hypothesis testing, uncertainty quantification, and the interpretation of statistical conclusions. Further, a foundation can be used to explain statistical paradoxes, provide descriptions of statistical laws, and guide the application of statistics to real-world problems.
Different statistical foundations may provide different, contrasting perspectives on the analysis and interpretation of data, and some of these contrasts have been subject to centuries of debate. Examples include Bayesian inference versus frequentist inference; the distinction between Fisher's "significance testing" and the Neyman-Pearson "hypothesis testing"; and whether the likelihood principle holds.
Certain frameworks may be preferred for specific applications, such as the use of Bayesian methods in fitting complex ecological models.
Bandyopadhyay & Forster identify four statistical paradigms: classical statistics (or error statistics), Bayesian statistics, likelihood-based statistics, and information-based statistics using the Akaike Information Criterion. More recently, Judea Pearl reintroduced formal mathematics for attributing causality in statistical systems that addressed the fundamental limitations of both Bayesian and Neyman-Pearson methods, as discussed in his book "Causality".
Fisher's "significance testing" vs. Neyman–Pearson "hypothesis testing".
During the 20th century, the development of classical statistics led to the emergence of two competing foundations for inductive statistical testing. The merits of these models were extensively debated. Although a hybrid approach combining elements of both methods is commonly taught and utilized, the philosophical questions raised during the debate remain unresolved.
Significance testing.
Publications by Fisher, like "Statistical Methods for Research Workers" in 1925 and "The Design of Experiments" in 1935, contributed to the popularity of significance testing, which is a probabilistic approach to deductive inference. In practice, a statistic is computed based on the experimental data and the probability of obtaining a value greater than that statistic under a default or "null" model is compared to a predetermined threshold. This threshold represents the level of discord required (typically established by convention). One common application of this method is to determine whether a treatment has a noticeable effect based on a comparative experiment. In this case, the null hypothesis corresponds to the absence of a treatment effect, implying that the treated group and the control group are drawn from the same population. Statistical significance measures probability and does not address practical significance. It can be viewed as a criterion for the statistical signal-to-noise ratio. It is important to note that the test cannot prove the hypothesis (of no treatment effect), but it can provide evidence against it.
The Fisher significance test involves a single hypothesis, but the choice of the test statistic requires an understanding of relevant directions of deviation from the hypothesized model.
Hypothesis testing.
Neyman and Pearson collaborated on the problem of selecting the most appropriate hypothesis based solely on experimental evidence, which differed from significance testing. Their most renowned joint paper, published in 1933, introduced the Neyman-Pearson lemma, which states that a ratio of probabilities serves as an effective criterion for hypothesis selection (with the choice of the threshold being arbitrary). The paper demonstrated the optimality of the Student's t-test, one of the significance tests. Neyman believed that hypothesis testing represented a generalization and improvement of significance testing. The rationale for their methods can be found in their collaborative papers.
Hypothesis testing involves considering multiple hypotheses and selecting one among them, akin to making a multiple-choice decision. The absence of evidence is not an immediate factor to be taken into account. The method is grounded in the assumption of repeated sampling from the same population (the classical frequentist assumption), although Fisher criticized this assumption.
Grounds of disagreement.
The duration of the dispute allowed for a comprehensive discussion of various fundamental issues in the field of statistics.
Fisher's attack.
Repeated sampling of the same population
Type II errors
Inductive behavior
Neyman's rebuttal.
Fisher's theory of fiducial inference is flawed
A purely probabilistic theory of tests requires an alternative hypothesis. Fisher's attacks on Type II errors have faded with time. In the intervening years, statistics has separated the exploratory from the confirmatory. In the current environment, the concept of Type II errors is used in power calculations for determining the sample size of confirmatory hypothesis tests.
Discussion.
Fisher's attack based on frequentist probability failed but was not without result. He identified a specific case (2×2 table) where the two schools of testing reached different results. This case is one of several that are still troubling. Commentators believe that the "right" answer is context-dependent. Fiducial probability has not fared well, being virtually without advocates, while frequentist probability remains a mainstream interpretation.
Fisher's attack on inductive behavior has been largely successful because he selected the field of battle. While "operational decisions" are routinely made on a variety of criteria (such as cost), "scientific conclusions" from experimentation are typically made based on probability alone.
During this exchange, Fisher also discussed the requirements for inductive inference, specifically criticizing cost functions that penalize erroneous judgments. Neyman countered by mentioning the use of such functions by Gauss and Laplace. These arguments occurred 15 years "after" textbooks began teaching a hybrid theory of statistical testing.
Fisher and Neyman held different perspectives on the foundations of statistics (though they both opposed the Bayesian viewpoint):
Fisher and Neyman diverged in their attitudes and, perhaps, their language. Fisher was a scientist and an intuitive mathematician, and inductive reasoning came naturally to him. Neyman, on the other hand, was a rigorous mathematician who relied on deductive reasoning rather than probability calculations based on experiments. Hence, there was an inherent clash between applied and theoretical approaches (between science and mathematics).
Related history.
In 1938, Neyman relocated to the West Coast of the United States of America, effectively ending his collaboration with Pearson and their work on hypothesis testing. Subsequent developments in the field were carried out by other researchers.
By 1940, textbooks began presenting a hybrid approach that combined elements of significance testing and hypothesis testing. However, none of the main contributors were directly involved in the further development of the hybrid approach currently taught in introductory statistics.
Statistics subsequently branched out into various directions, including decision theory, Bayesian statistics, exploratory data analysis, robust statistics, and non-parametric statistics. Neyman-Pearson hypothesis testing made significant contributions to decision theory, which is widely employed, particularly in statistical quality control. Hypothesis testing also extended its applicability to incorporate prior probabilities, giving it a Bayesian character. While Neyman-Pearson hypothesis testing has evolved into an abstract mathematical subject taught at the post-graduate level, much of what is taught and used in undergraduate education under the umbrella of hypothesis testing can be attributed to Fisher.
Contemporary opinion.
There have been no major conflicts between the two classical schools of testing in recent decades, although occasional criticism and disputes persist. However, it is highly unlikely that one theory of statistical testing will completely supplant the other in the foreseeable future.
The hybrid approach, which combines elements from both competing schools of testing, can be interpreted in different ways. Some view it as an amalgamation of two mathematically complementary ideas, while others see it as a flawed union of philosophically incompatible concepts. Fisher's approach had certain philosophical advantages, while Neyman and Pearson emphasized rigorous mathematics. Hypothesis testing remains a subject of controversy for some users, but the most widely accepted alternative method, confidence intervals, is based on the same mathematical principles.
Due to the historical development of testing, there is no single authoritative source that fully encompasses the hybrid theory as it is commonly practiced in statistics. Additionally, the terminology used in this context may lack consistency. Empirical evidence indicates that individuals, including students and instructors in introductory statistics courses, often have a limited understanding of the meaning of hypothesis testing.
Bayesian inference versus frequentist inference.
Two distinct interpretations of probability have existed for a long time, one based on objective evidence and the other on subjective degrees of belief. The debate between Gauss and Laplace could have taken place more than 200 years ago, giving rise to two competing schools of statistics. Classical inferential statistics emerged primarily during the second quarter of the 20th century, largely in response to the controversial principle of indifference used in Bayesian probability at that time. The resurgence of Bayesian inference was a reaction to the limitations of frequentist probability, leading to further developments and reactions.
While the philosophical interpretations have a long history, the specific statistical terminology is relatively recent. The terms "Bayesian" and "frequentist" became standardized in the second half of the 20th century. However, the terminology can be confusing, as the "classical" interpretation of probability aligns with Bayesian principles, while "classical" statistics follow the frequentist approach. Moreover, even within the term "frequentist," there are variations in interpretation, differing between philosophy and physics.
The intricate details of philosophical probability interpretations are explored elsewhere. In the field of statistics, these alternative interpretations "allow" for the analysis of different datasets using distinct methods based on various models, aiming to achieve slightly different objectives. When comparing the competing schools of thought in statistics, pragmatic criteria beyond philosophical considerations are taken into account.
Major contributors.
Fisher and Neyman were significant figures in the development of frequentist (classical) methods. While Fisher had a unique interpretation of probability that differed from Bayesian principles, Neyman adhered strictly to the frequentist approach. In the realm of Bayesian statistical philosophy, mathematics, and methods, de Finetti, Jeffreys, and Savage emerged as notable contributors during the 20th century. Savage played a crucial role in popularizing de Finetti's ideas in English-speaking regions and establishing rigorous Bayesian mathematics. In 1965, Dennis Lindley's two-volume work titled "Introduction to Probability and Statistics from a Bayesian Viewpoint" played a vital role in introducing Bayesian methods to a wide audience. For three generations, statistics have progressed significantly, and the views of early contributors are not necessarily considered authoritative in present times.
Contrasting approaches.
Frequentist inference.
The earlier description briefly highlights frequentist inference, which encompasses Fisher's "significance testing" and Neyman-Pearson's "hypothesis testing." Frequentist inference incorporates various perspectives and allows for scientific conclusions, operational decisions, and parameter estimation with or without confidence intervals.
Bayesian inference.
A classical frequency distribution provides information about the probability of the observed data. By applying Bayes' theorem, a more abstract concept is introduced, which involves estimating the probability of a hypothesis (associated with a theory) given the data. This concept, formerly referred to as "inverse probability," is realized through Bayesian inference. Bayesian inference involves updating the probability estimate for a hypothesis as new evidence becomes available. It explicitly considers both the evidence and prior beliefs, enabling the incorporation of multiple sets of evidence.
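As a toy illustration of such updating (the conjugate beta–binomial model and all numbers here are invented for the example and are not taken from the text), a prior belief about a coin's bias can be revised by observed tosses in a few lines:

```python
# Bayesian updating in the conjugate beta-binomial model: a Beta(a, b) prior
# for the probability of heads becomes Beta(a + k, b + n - k) after observing
# k heads in n tosses.
from scipy import stats

a, b = 2, 2                      # prior pseudo-counts encoding the prior belief
k, n = 7, 10                     # observed evidence: 7 heads in 10 tosses
posterior = stats.beta(a + k, b + n - k)

print(posterior.mean())          # updated point estimate of the bias
print(posterior.interval(0.95))  # 95% credible interval for the bias
```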
Comparisons of characteristics.
Frequentists and Bayesians employ distinct probability models. Frequentists typically view parameters as fixed but unknown, whereas Bayesians assign probability distributions to these parameters. As a result, Bayesians discuss probabilities that frequentists do not acknowledge. Bayesians consider the probability of a theory, whereas true frequentists can only assess the evidence's consistency with the theory. For instance, a frequentist does not claim a 95% probability that the true value of a parameter falls within a confidence interval; rather, they state that 95% of confidence intervals encompass the true value.
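This reading of a 95% interval can be demonstrated by simulation; the statement is about the long-run behaviour of the interval-producing procedure, not about any single realised interval. A rough sketch (the normal model, known variance, and all numbers are arbitrary choices for the illustration):

```python
# Repeated sampling from a normal population with known variance: about 95% of
# the z-intervals produced cover the fixed true mean, which is the frequentist
# content of the phrase "95% confidence".
import numpy as np

rng = np.random.default_rng(0)
true_mean, sigma, n, trials = 10.0, 2.0, 25, 10_000
covered = 0
for _ in range(trials):
    sample = rng.normal(true_mean, sigma, n)
    half_width = 1.96 * sigma / np.sqrt(n)
    covered += (sample.mean() - half_width <= true_mean <= sample.mean() + half_width)

print(covered / trials)          # close to 0.95
```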
Mathematical results.
Both the frequentist and Bayesian schools are subject to mathematical critique, and neither readily embraces such criticism. For instance, Stein's paradox highlights the intricacy of determining a "flat" or "uninformative" prior probability distribution in high-dimensional spaces. While Bayesians perceive this as tangential to their fundamental philosophy, they find frequentist statistics plagued with inconsistencies, paradoxes, and unfavorable mathematical behavior. Frequentists can account for most of these issues. Certain "problematic" scenarios, like estimating the weight variability of a herd of elephants based on a single measurement (Basu's elephants), exemplify extreme cases that defy statistical estimation. The likelihood principle has been a contentious area of debate.
Statistical results.
Both the frequentist and Bayesian schools have demonstrated notable accomplishments in addressing practical challenges. Classical statistics, with its reliance on mechanical calculators and specialized printed tables, boasts a longer history of obtaining results. Bayesian methods, on the other hand, have shown remarkable efficacy in analyzing sequentially sampled information, such as radar and sonar data. Several Bayesian techniques, as well as certain recent frequentist methods like the bootstrap, necessitate the computational capabilities that have become widely accessible in the past few decades. There is an ongoing discourse regarding the integration of Bayesian and frequentist approaches, although concerns have been raised regarding the interpretation of results and the potential diminishment of methodological diversity.
Philosophical results.
Bayesians share a common stance against the limitations of frequentism, but they are divided into various philosophical camps (empirical, hierarchical, objective, personal, and subjective), each emphasizing different aspects. A philosopher of statistics from the frequentist perspective has observed a shift from the statistical domain to philosophical interpretations of probability over the past two generations. Some perceive that the successes achieved with Bayesian applications do not sufficiently justify the associated philosophical framework. Bayesian methods often develop practical models that deviate from traditional inference and have minimal reliance on philosophy. Neither the frequentist nor the Bayesian philosophical interpretations of probability can be considered entirely robust. The frequentist view is criticized for being overly rigid and restrictive, while the Bayesian view can encompass both objective and subjective elements, among others.
The likelihood principle.
In common usage, likelihood is often considered synonymous with probability. However, according to statistics, this is not the case. In statistics, probability refers to variable data given a fixed hypothesis, whereas likelihood refers to variable hypotheses given a fixed set of data. For instance, when making repeated measurements with a ruler under fixed conditions, each set of observations corresponds to a probability distribution, and the observations can be seen as a sample from that distribution, following the frequentist interpretation of probability. On the other hand, a set of observations can also arise from sampling various distributions based on different observational conditions. The probabilistic relationship between a fixed sample and a variable distribution stemming from a variable hypothesis is referred to as likelihood, representing the Bayesian view of probability. For instance, a set of length measurements may represent readings taken by observers with specific characteristics and conditions.
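The contrast can be made concrete with a small sketch (the binomial model and the numbers are purely illustrative): once the data are fixed, the likelihood is a function of the hypothesised parameter, and it is not itself a probability distribution over that parameter.

```python
# With the data fixed at 7 heads in 10 tosses, the likelihood
#   L(p) = C(10, 7) * p**7 * (1 - p)**3
# varies over the hypotheses p in [0, 1].
import numpy as np
from math import comb

k, n = 7, 10
p = np.linspace(0, 1, 1001)
likelihood = comb(n, k) * p**k * (1 - p)**(n - k)

print(p[np.argmax(likelihood)])   # maximum-likelihood estimate, at k/n = 0.7
print(likelihood.mean())          # grid average ~ integral over p: about 1/11, not 1
```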
Likelihood is a concept that was introduced and developed by Fisher over a span of more than 40 years, although earlier references to the concept exist and Fisher's support for it was not wholehearted. The concept was subsequently accepted and substantially revised by Jeffreys. In 1962, Birnbaum "proved" the likelihood principle based on premises that were widely accepted among statisticians, although his proof has been subject to dispute by statisticians and philosophers. Notably, by 1970, Birnbaum had rejected one of these premises (the conditionality principle) and had also abandoned the likelihood principle due to their incompatibility with the frequentist "confidence concept of statistical evidence." The likelihood principle asserts that all the information in a sample is contained within the likelihood function, which is considered a valid probability distribution by Bayesians but not by frequentists.
Certain significance tests employed by frequentists are not consistent with the likelihood principle. Bayesians, on the other hand, embrace the principle as it aligns with their philosophical standpoint (perhaps in response to frequentist discomfort). The likelihood approach is compatible with Bayesian statistical inference, where the posterior Bayes distribution for a parameter is derived by multiplying the prior distribution by the likelihood function using Bayes' Theorem. Frequentists interpret the likelihood principle unfavourably, as it suggests a lack of concern for the reliability of evidence. The likelihood principle, according to Bayesian statistics, implies that information about the experimental design used to collect evidence does not factor into the statistical analysis of the data. Some Bayesians, including Savage, acknowledge this implication as a vulnerability.
The likelihood principle's staunchest proponents argue that it provides a more solid foundation for statistics compared to the alternatives presented by Bayesian and frequentist approaches. These supporters include some statisticians and philosophers of science. While Bayesians recognize the importance of likelihood for calculations, they contend that the posterior probability distribution serves as the appropriate basis for inference.
Modelling.
Inferential statistics relies on statistical models. Classical hypothesis testing, for instance, has often relied on the assumption of data normality. To reduce reliance on this assumption, robust and nonparametric statistics have been developed. Bayesian statistics, on the other hand, interpret new observations based on prior knowledge, assuming continuity between the past and present. The experimental design assumes some knowledge of the factors to be controlled, varied, randomized, and observed. Statisticians are aware of the challenges in establishing causation, often stating that "correlation does not imply causation," which is more of a limitation in modelling than a mathematical constraint.
As statistics and data sets have become more complex, questions have arisen regarding the validity of models and the inferences drawn from them. There is a wide range of conflicting opinions on modelling.
Models can be based on scientific theory or ad hoc data analysis, each employing different methods. Advocates exist for each approach. Model complexity is a trade-off and less subjective approaches such as the Akaike information criterion and Bayesian information criterion aim to strike a balance.
Concerns have been raised even about simple regression models used in the social sciences, as a multitude of assumptions underlying model validity are often neither mentioned nor verified. In some cases, a favorable comparison between observations and the model is considered sufficient.
Traditional observation-based models often fall short in addressing many significant problems, requiring the utilization of a broader range of models, including algorithmic ones. "If the model is a poor emulation of nature, the conclusions may be wrong."
Modelling is frequently carried out inadequately, with improper methods employed, and the reporting of models is often subpar.
Given the lack of a strong consensus on the philosophical review of statistical modeling, many statisticians adhere to the cautionary words of George Box: "All models are wrong, but some are useful."
Other reading.
For a concise introduction to the fundamentals of statistics, refer to "Stuart, A.; Ord, J.K. (1994). "Ch. 8 – Probability and statistical inference" in Kendall's Advanced Theory of Statistics, Volume I: Distribution Theory (6th ed.), published by Edward Arnold".
In his book "Statistics as Principled Argument", Robert P. Abelson presents the perspective that statistics serve as a standardized method for resolving disagreements among scientists, who could otherwise engage in endless debates about the merits of their respective positions. From this standpoint, statistics can be seen as a form of rhetoric. However, the effectiveness of statistical methods depends on the consensus among all involved parties regarding the chosen approach.
Footnotes.
<templatestyles src="Reflist/styles.css" />
Citations.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\alpha"
}
] |
https://en.wikipedia.org/wiki?curid=15515301
|
15515379
|
Taylor–Green vortex
|
In fluid dynamics, the Taylor–Green vortex is an unsteady flow of a decaying vortex, which has an exact closed form solution of the incompressible Navier–Stokes equations in Cartesian coordinates. It is named after the British physicist and mathematician Geoffrey Ingram Taylor and his collaborator A. E. Green.
Original work.
In the original work of Taylor and Green, a particular flow is analyzed in three spatial dimensions, with the three velocity components formula_0 at time formula_1 specified by
formula_2
formula_3
formula_4
The continuity equation formula_5 determines that formula_6. The small time behavior of the flow is then found through simplification of the incompressible Navier–Stokes equations using the initial flow to give a step-by-step solution as time progresses.
An exact solution in two spatial dimensions is known, and is presented below.
Incompressible Navier–Stokes equations.
The incompressible Navier–Stokes equations in the absence of body force, and in two spatial dimensions, are given by
formula_7
formula_8
formula_9
The first of the above equations is the continuity equation and the other two are the momentum equations.
Taylor–Green vortex solution.
In the domain formula_10, the solution is given by
formula_11
where formula_12, formula_13 being the kinematic viscosity of the fluid. Following the analysis of Taylor and Green for the two-dimensional situation, and for formula_14, agreement with this exact solution is obtained at small times if the exponential is expanded as a Taylor series, i.e. formula_15.
The pressure field formula_16 can be obtained by substituting the velocity solution in the momentum equations and is given by
formula_17
The stream function of the Taylor–Green vortex solution, i.e. which satisfies formula_18 for flow velocity formula_19, is
formula_20
Similarly, the vorticity, which satisfies formula_21, is given by
formula_22
The Taylor–Green vortex solution may be used for testing and validation of temporal accuracy of Navier–Stokes algorithms.
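Because the solution is exact, it can be checked symbolically. The sketch below (an illustration using SymPy, not part of the original analysis) substitutes the velocity and pressure fields into the continuity and momentum equations and confirms that the residuals vanish identically:

```python
# Symbolic verification that the two-dimensional Taylor-Green fields solve the
# incompressible Navier-Stokes equations.  rho and nu are left as symbols.
import sympy as sp

x, y, t, nu, rho = sp.symbols('x y t nu rho', positive=True)
F = sp.exp(-2*nu*t)
u = sp.sin(x) * sp.cos(y) * F
v = -sp.cos(x) * sp.sin(y) * F
p = rho/4 * (sp.cos(2*x) + sp.cos(2*y)) * F**2

print(sp.simplify(sp.diff(u, x) + sp.diff(v, y)))     # continuity: 0

res_x = (sp.diff(u, t) + u*sp.diff(u, x) + v*sp.diff(u, y)
         + sp.diff(p, x)/rho - nu*(sp.diff(u, x, 2) + sp.diff(u, y, 2)))
res_y = (sp.diff(v, t) + u*sp.diff(v, x) + v*sp.diff(v, y)
         + sp.diff(p, y)/rho - nu*(sp.diff(v, x, 2) + sp.diff(v, y, 2)))
print(sp.simplify(res_x), sp.simplify(res_y))         # momentum residuals: 0 0
```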
A generalization of the Taylor–Green vortex solution to three dimensions has also been described.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbf{v}=(u,v,w)"
},
{
"math_id": 1,
"text": "t=0"
},
{
"math_id": 2,
"text": "\nu = A \\cos ax \\sin by \\sin cz,\n"
},
{
"math_id": 3,
"text": "\nv = B \\sin ax \\cos by \\sin cz,\n"
},
{
"math_id": 4,
"text": "\nw = C \\sin ax \\sin by \\cos cz.\n"
},
{
"math_id": 5,
"text": " \\nabla \\cdot \\mathbf{v}=0"
},
{
"math_id": 6,
"text": "Aa+Bb+Cc=0"
},
{
"math_id": 7,
"text": "\n\\frac{\\partial u}{\\partial x}+ \\frac{\\partial v}{\\partial y} = 0,\n"
},
{
"math_id": 8,
"text": "\n\\frac{\\partial u}{\\partial t} + u\\frac{\\partial u}{\\partial x} + v\\frac{\\partial u}{\\partial y} =\n-\\frac{1}{\\rho} \\frac{\\partial p}{\\partial x} + \\nu \\left( \\frac{\\partial^2 u}{\\partial x^2} +\n\\frac{\\partial^2 u}{\\partial y^2} \\right),\n"
},
{
"math_id": 9,
"text": "\n\\frac{\\partial v}{\\partial t} + u\\frac{\\partial v}{\\partial x} + v\\frac{\\partial v}{\\partial y} =\n-\\frac{1}{\\rho} \\frac{\\partial p}{\\partial y} + \\nu \\left( \\frac{\\partial^2 v}{\\partial x^2} +\n\\frac{\\partial^2 v}{\\partial y^2} \\right).\n"
},
{
"math_id": 10,
"text": "0 \\le x,y \\le 2\\pi "
},
{
"math_id": 11,
"text": "\nu = \\sin x \\cos y \\,F(t), \\qquad \\qquad v = -\\cos x \\sin y \\, F(t),\n"
},
{
"math_id": 12,
"text": "F(t) = e^{-2\\nu t}"
},
{
"math_id": 13,
"text": "\\nu"
},
{
"math_id": 14,
"text": "A=a=b=1"
},
{
"math_id": 15,
"text": "F(t) = 1 - 2\\nu t + O(t^2)"
},
{
"math_id": 16,
"text": "p"
},
{
"math_id": 17,
"text": "\np = \\frac{\\rho}{4} \\left( \\cos 2x + \\cos 2y \\right) F^2(t).\n"
},
{
"math_id": 18,
"text": " \\mathbf{v} = \\nabla \\times \\boldsymbol{\\psi}"
},
{
"math_id": 19,
"text": "\\mathbf{v}"
},
{
"math_id": 20,
"text": "\n\\boldsymbol{\\psi} = \\sin x \\sin y F(t)\\, \\hat{\\mathbf{z}}.\n"
},
{
"math_id": 21,
"text": " \\boldsymbol{\\mathbf{\\omega}} = \\nabla \\times \\mathbf{v} "
},
{
"math_id": 22,
"text": "\n\\boldsymbol{\\mathbf{\\omega}} = 2\\sin x \\sin y \\,F(t) \\hat{\\mathbf{z}}.\n"
}
] |
https://en.wikipedia.org/wiki?curid=15515379
|
15523181
|
Riemannian Penrose inequality
|
Estimates the mass of a spacetime in terms of the total area of its black holes
In mathematical general relativity, the Penrose inequality, first conjectured by Sir Roger Penrose, estimates the mass of a spacetime in terms of the total area of its black holes and is a generalization of the positive mass theorem. The Riemannian Penrose inequality is an important special case. Specifically, if ("M", "g") is an asymptotically flat Riemannian 3-manifold with nonnegative scalar curvature and ADM mass "m", and "A" is the area of the outermost minimal surface (possibly with multiple connected components), then the Riemannian Penrose inequality asserts
formula_0
This is purely a geometrical fact, and it corresponds to the case of a complete three-dimensional, space-like, totally geodesic submanifold
of a (3 + 1)-dimensional spacetime. Such a submanifold is often called a time-symmetric initial data set for a spacetime. The condition of ("M", "g") having nonnegative scalar curvature is equivalent to the spacetime obeying the dominant energy condition.
This inequality was first proved by Gerhard Huisken and Tom Ilmanen in 1997 in the case where "A" is the area of the largest component of the outermost minimal surface. Their proof relied on the machinery of weakly defined inverse mean curvature flow, which they developed. In 1999, Hubert Bray gave the first complete proof of the above inequality using a conformal flow of metrics. Both of the papers were published in 2001.
Physical motivation.
The original physical argument that led Penrose to conjecture such an inequality invoked the Hawking area theorem and the cosmic censorship hypothesis.
Case of equality.
Both the Bray and Huisken–Ilmanen proofs of the Riemannian Penrose inequality state that under the hypotheses, if
formula_1
then the manifold in question is isometric to a slice of the Schwarzschild spacetime outside its outermost minimal surface, which is a sphere of Schwarzschild radius.
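As a simple numerical illustration of the equality case (the masses below are arbitrary), the horizon of the Schwarzschild slice of mass "m" is a round sphere of areal radius 2"m", so its area is "A" = 16π"m"2 and the Penrose bound returns "m" exactly:

```python
# Equality in the Riemannian Penrose inequality for the Schwarzschild slice:
# the horizon area is A = 4*pi*(2m)^2 = 16*pi*m^2, so sqrt(A / (16*pi)) = m.
import math

for m in (0.5, 1.0, 2.5):
    A = 4 * math.pi * (2 * m) ** 2
    print(m, math.sqrt(A / (16 * math.pi)))   # the two numbers agree
```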
Penrose conjecture.
More generally, Penrose conjectured that an inequality as above should hold for spacelike submanifolds of spacetimes that are not necessarily time-symmetric. In this case, nonnegative scalar curvature is replaced with the dominant energy condition, and one possibility is to replace the minimal surface condition with an apparent horizon condition. Proving such an inequality remains an open problem in general relativity, called the Penrose conjecture.
|
[
{
"math_id": 0,
"text": "m \\geq \\sqrt{\\frac{A}{16\\pi}}."
},
{
"math_id": 1,
"text": "m = \\sqrt{\\frac{A}{16\\pi}},"
}
] |
https://en.wikipedia.org/wiki?curid=15523181
|
15524228
|
Cribbage statistics
|
In cribbage, the probability and maximum and minimum score of each type of hand can be computed.
Distinct hands.
formula_0
formula_1
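Both expressions count the same collection of hands, once by choosing the four cards held and then one of the 48 remaining cards as the starter, and once by choosing five cards and designating one of them as the starter. A quick check with Python's standard math.comb (not part of the article) confirms the total:

```python
# Two equivalent counts of distinct cribbage hands (four held cards plus a starter).
from math import comb

print(comb(52, 4) * 48)   # 12,994,800: choose the 4 held cards, then the starter
print(comb(52, 5) * 5)    # 12,994,800: choose 5 cards, then which one is the starter
```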
Minimum scores.
Minimum while holding a 5.
If a player holds a 5 in their hand, that player is guaranteed at least two points, as shown below:
A 0-point hand must have five distinct cards without forming a run or a fifteen combination. If such a hand includes a 5, it cannot hold a 10 or a face card. It also cannot include both an A and a 9; both a 2 and an 8; both a 3 and a 7; or both a 4 and a 6. Since four more cards are needed, exactly one must be taken from each of those sets. Running through the sixteen possible choices shows that each one produces a fifteen or a run.
Therefore, every set of five cards including a 5 has a pair, a run, or a fifteen, and thus at least two points.
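The case analysis can also be confirmed exhaustively by machine. The sketch below (illustrative code, not from the article; only ranks matter for pairs, fifteens and runs, so suits are ignored) enumerates every multiset of five ranks containing a 5 and finds that every one of them contains a pair, a fifteen, or a run:

```python
# Exhaustive check: every five-card hand containing a 5 has a pair, a fifteen,
# or a run of three or more, hence scores at least two points.
from itertools import combinations, combinations_with_replacement

VALUE = {r: min(r, 10) for r in range(1, 14)}    # A=1, ..., 9=9; 10/J/Q/K count ten

def has_pair(ranks):
    return len(set(ranks)) < len(ranks)

def has_fifteen(ranks):
    vals = [VALUE[r] for r in ranks]
    return any(sum(c) == 15 for k in range(2, 6) for c in combinations(vals, k))

def has_run(ranks):
    s = sorted(set(ranks))
    longest = current = 1
    for a, b in zip(s, s[1:]):
        current = current + 1 if b == a + 1 else 1
        longest = max(longest, current)
    return longest >= 3

violations = [h for h in combinations_with_replacement(range(1, 14), 5)
              if 5 in h and not (has_pair(h) or has_fifteen(h) or has_run(h))]
print(len(violations))    # 0
```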
Interestingly, a hand with two 5s can also score as little as two points; an example is 2 5 5 7 9, which would most likely be a crib hand, and would not score a flush because of the pair, although said hand can be a non-crib four-card flush if either 5 is the starter. A hand with three 5s scores at least eight points; a hand with all four 5s scores 20 points and is improved only with a 10, J, Q, or K (scoring 28, except for the 29 hand previously described).
It is also true that holding both a 2 and a 3, or an A and a 4 (two cards adding up to five), guarantees a non-zero score.
Odds.
Scoring Breakdown, assuming random discard(s) to the crib
Note that these statistics do not reflect frequency of occurrence in 5 or 6-card play. For 6-card play the mean for non-dealer is 7.8580 with standard deviation 3.7996, and for dealer is 7.7981 and 3.9082 respectively. The means are higher because the player can choose those four cards that maximize their point holdings. For 5-card play the mean is about 5.4.
Slightly different scoring rules apply in the crib: only 5-point flushes are counted; in other words, all five cards, including the turn-up, must be of the same suit, not just the four cards in the crib. Because of this, a slightly different distribution is observed:
Scoring Breakdown (crib/box hands only)
As above, these statistics do not reflect the true distributions in 5 or 6 card play, since both the dealer and non-dealer will discard tactically in order to maximise or minimise the possible score in the crib/box.
Hand plus Crib statistics.
If both the hand and the crib are considered as a sum (and both are drawn at random, rather than formed with strategy as is realistic in an actual game setting) there are 2,317,817,502,000 (2.3 trillion) 9-card combinations.
formula_2
Scoring Breakdown
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "{52 \\choose 4} \\times 48 = 12,994,800"
},
{
"math_id": 1,
"text": "{52 \\choose 5} \\times 5 = 12,994,800"
},
{
"math_id": 2,
"text": "{52 \\choose 4} \\times {48 \\choose 4} \\times 44 = 2,317,817,502,000"
}
] |
https://en.wikipedia.org/wiki?curid=15524228
|
1552505
|
Vickers hardness test
|
Hardness test
The Vickers hardness test was developed in 1921 by Robert L. Smith and George E. Sandland at Vickers Ltd as an alternative to the Brinell method to measure the hardness of materials. The Vickers test is often easier to use than other hardness tests since the required calculations are independent of the size of the indenter, and the indenter can be used for all materials irrespective of hardness. The basic principle, as with all common measures of hardness, is to observe a material's ability to resist plastic deformation from a standard source.
The Vickers test can be used for all metals and has one of the widest scales among hardness tests.
The unit of hardness given by the test is known as the Vickers Pyramid Number (HV) or Diamond Pyramid Hardness (DPH). The hardness number can be converted into units of pascals, but should not be confused with pressure, which uses the same units. The hardness number is determined by the load over the surface area of the indentation and not the area normal to the force, and is therefore not pressure.
Implementation.
It was decided that the indenter shape should be capable of producing geometrically similar impressions, irrespective of size; the impression should have well-defined points of measurement; and the indenter should have high resistance to self-deformation. A diamond in the form of a square-based pyramid satisfied these conditions. It had been established that the ideal size of a Brinell impression was <templatestyles src="Fraction/styles.css" />3⁄8 of the ball diameter. As two tangents to the circle at the ends of a chord 3"d"/8 long intersect at 136°, it was decided to use this as the included angle between plane faces of the indenter tip. This gives an angle of 22° between each face normal and the normal to the horizontal plane (the indentation axis), on each side. The angle was varied experimentally and it was found that the hardness value obtained on a homogeneous piece of material remained constant, irrespective of load. Accordingly, loads of various magnitudes are applied to a flat surface, depending on the hardness of the material to be measured. The HV number is then determined by the ratio "F"/"A", where "F" is the force applied to the diamond in kilograms-force and "A" is the surface area of the resulting indentation in square millimeters.
formula_0
which can be approximated by evaluating the sine term to give,
formula_1
where "d" is the average length of the diagonal left by the indenter in millimeters. Hence,
formula_2,
where "F" is in kgf and "d" is in millimeters.
The corresponding unit of HV is then the kilogram-force per square millimeter (kgf/mm2) or HV number. In the above equation, "F" could be in N and "d" in mm, giving HV in the SI unit of MPa. To calculate Vickers hardness number (VHN) using SI units one needs to convert the force applied from newtons to kilogram-force by dividing by 9.806 65 (standard gravity). This leads to the following equation:
formula_3
where "F" is in N and "d" is in millimeters. A common error is that the above formula to calculate the HV number does not result in a number with the unit newton per square millimeter (N/mm2), but results directly in the Vickers hardness number (usually given without units), which is in fact one kilogram-force per square millimeter (1 kgf/mm2).
Vickers hardness numbers are reported as xxxHVyy, e.g. 440HV30, or as xxxHVyy/zz if the duration of force application differs from the standard 10 s to 15 s, e.g. 440HV30/20, where xxx is the hardness number, yy is the load in kilograms-force, and zz is the loading time in seconds.
Precautions.
When doing the hardness tests, the minimum distance between indentations and the distance from the indentation to the edge of the specimen must be taken into account to avoid interaction between the work-hardened regions and effects of the edge. These minimum distances are different for ISO 6507-1 and ASTM E384 standards.
Vickers values are generally independent of the test force: they will come out the same for 500 gf and 50 kgf, as long as the force is at least 200 gf. However, lower load indents often display a dependence of hardness on indent depth known as the indentation size effect (ISE). Small indent sizes will also have microstructure-dependent hardness values.
For thin samples indentation depth can be an issue due to substrate effects. As a rule of thumb the sample thickness should be kept greater than 2.5 times the indent diameter. Alternatively indent depth, formula_4, can be calculated according to:
formula_5
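A hedged sketch of this depth estimate (the values are illustrative only):
<syntaxhighlight lang="python">
import math

def indent_depth(d_avg_mm, included_angle_deg=136):
    """Approximate Vickers indent depth from the mean diagonal, using the pyramid geometry."""
    return d_avg_mm / (2 * math.sqrt(2) * math.tan(math.radians(included_angle_deg / 2)))

d = 0.360                                   # mean diagonal in mm (example value)
print(indent_depth(d), d / 7.0006)          # both ~0.0514 mm
print("rule-of-thumb minimum sample thickness:", 2.5 * d, "mm")
</syntaxhighlight>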
Conversion to SI units.
To convert the Vickers hardness number to SI units the hardness number in kilograms-force per square millimeter (kgf/mm2) has to be multiplied by the standard gravity, formula_6, to get the hardness in MPa (N/mm2), and further divided by 1000 to get the hardness in GPa.
formula_7
Vickers hardness can also be converted to an SI hardness based on the projected area of the indent rather than the surface area. The projected area, formula_8, is defined as the following for a Vickers indenter geometry:
formula_9
This hardness is sometimes referred to as the mean contact pressure or Meyer hardness, and ideally can be directly compared with other hardness tests also defined using projected area. Care must be used when comparing other hardness tests due to various size scale factors which can impact the measured hardness.
formula_10
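A small sketch of the two conversions, assuming the constants quoted above:
<syntaxhighlight lang="python">
def hv_to_gpa(hv, projected=False):
    """Convert a Vickers number (kgf/mm^2) to GPa using the surface- or projected-area definition."""
    gpa = 9.80665 / 1000 * hv               # surface-area-based hardness
    if projected:
        gpa *= 2 / 1.854                    # rescale to the projected-area (Meyer-type) definition
    return gpa

print(hv_to_gpa(440))                        # ~4.31 GPa
print(hv_to_gpa(440, projected=True), 440 / 94.5)   # both ~4.66 GPa
</syntaxhighlight>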
Estimating tensile strength.
If HV is first expressed in N/mm2 (MPa), or otherwise by converting from kgf/mm2, then the tensile strength (in MPa) of the material can be approximated as σu ≈ HV/c, where c is a constant determined by the yield strength, Poisson's ratio, work-hardening exponent and geometrical factors, usually ranging between 2 and 4. In other words, if HV is expressed in N/mm2 (i.e. in MPa) then the tensile strength (in MPa) ≈ HV/3. The accuracy of this empirical relation varies with the work-hardening behavior of the material.
Application.
The fin attachment pins and sleeves in the Convair 580 airliner were specified by the aircraft manufacturer to be hardened to a Vickers Hardness specification of 390HV5, the '5' meaning five kiloponds. However, on the aircraft flying Partnair Flight 394 the pins were later found to have been replaced with sub-standard parts, leading to rapid wear and finally loss of the aircraft. On examination, accident investigators found that the sub-standard pins had a hardness value of only some 200–230HV5.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "A = \\frac{d^2}{2 \\sin(136^\\circ/2)},"
},
{
"math_id": 1,
"text": "A \\approx \\frac{d^2}{1.8544},"
},
{
"math_id": 2,
"text": "\\mathrm{HV} = \\frac{F}{A} \\approx \\frac{1.8544 F}{d^2} \\quad [\\textrm{kgf/mm}^2]"
},
{
"math_id": 3,
"text": "\\mathrm{HV} \\approx {0.1891}\\frac{F}{d^2} \\quad [\\textrm{N/mm}^2],"
},
{
"math_id": 4,
"text": "t"
},
{
"math_id": 5,
"text": "t = \\frac{d_{\\rm avg}}{2\\sqrt{2}\\tan{\\frac{\\theta}{2}}} \\approx \\frac{d_{\\rm avg}}{7.0006},"
},
{
"math_id": 6,
"text": "g_0"
},
{
"math_id": 7,
"text": "\\text{surface area hardness (GPa)} = \\frac{g_0}{1000}HV = \\frac{9.80665}{1000}HV"
},
{
"math_id": 8,
"text": "A_{\\rm p}"
},
{
"math_id": 9,
"text": "A_{\\rm p} = \\frac{d_{\\rm avg}^2}{2} = \\frac{1.854}{2}{A_s}"
},
{
"math_id": 10,
"text": "\\text{projected area hardness (GPa)} = \\frac{g_0}{1000}\\frac{2}{1.854}HV \\approx \\frac{HV}{94.5}"
}
] |
https://en.wikipedia.org/wiki?curid=1552505
|
1552607
|
Linkage (mechanical)
|
Assembly of systems connected to manage forces and movement
A mechanical linkage is an assembly of systems connected so as to manage forces and movement. The movement of a body, or link, is studied using geometry so the link is considered to be rigid. The connections between links are modeled as providing ideal movement, pure rotation or sliding for example, and are called joints. A linkage modeled as a network of rigid links and ideal joints is called a kinematic chain.
Linkages may be constructed from open chains, closed chains, or a combination of open and closed chains. Each link in a chain is connected by a joint to one or more other links. Thus, a kinematic chain can be modeled as a graph in which the links are paths and the joints are vertices, which is called a linkage graph.
The movement of an ideal joint is generally associated with a subgroup of the group of Euclidean displacements. The number of parameters in the subgroup is called the degrees of freedom (DOF) of the joint.
Mechanical linkages are usually designed to transform a given input force and movement into a desired output force and movement. The ratio of the output force to the input force is known as the mechanical advantage of the linkage, while the ratio of the input speed to the output speed is known as the speed ratio. The speed ratio and mechanical advantage are defined so they yield the same number in an ideal linkage.
A kinematic chain, in which one link is fixed or stationary, is called a mechanism, and a linkage designed to be stationary is called a structure.
History.
Archimedes applied geometry to the study of the lever. Into the 1500s the works of Archimedes and Hero of Alexandria were the primary sources of machine theory. It was Leonardo da Vinci who brought an inventive energy to machines and mechanisms.
In the mid-1700s the steam engine was of growing importance, and James Watt realized that efficiency could be increased by using different cylinders for expansion and condensation of the steam. This drove his search for a linkage that could transform rotation of a crank into a linear slide, and resulted in his discovery of what is called Watt's linkage. This led to the study of linkages that could generate straight lines, even if only approximately; and inspired the mathematician J. J. Sylvester, who lectured on the Peaucellier linkage, which generates an exact straight line from a rotating crank.
The work of Sylvester inspired A. B. Kempe, who showed that linkages for addition and multiplication could be assembled into a system that traced a given algebraic curve. Kempe's design procedure has inspired research at the intersection of geometry and computer science.
In the late 1800s F. Reuleaux, A. B. W. Kennedy, and L. Burmester formalized the analysis and synthesis of linkage systems using descriptive geometry, and P. L. Chebyshev introduced analytical techniques for the study and invention of linkages.
In the mid-1900s F. Freudenstein and G. N. Sandor used the newly developed digital computer to solve the loop equations of a linkage and determine its dimensions for a desired function, initiating the computer-aided design of linkages. Within two decades these computer techniques were integral to the analysis of complex machine systems and the control of robot manipulators.
R. E. Kaufman combined the computer's ability to rapidly compute the roots of polynomial equations with a graphical user interface to unite Freudenstein's techniques with the geometrical methods of Reuleaux and Burmester and form "KINSYN," an interactive computer graphics system for linkage design.
The modern study of linkages includes the analysis and design of articulated systems that appear in robots, machine tools, and cable driven and tensegrity systems. These techniques are also being applied to biological systems and even the study of proteins.
Mobility.
The configuration of a system of rigid links connected by ideal joints is defined by a set of configuration parameters, such as the angles around a revolute joint and the slides along prismatic joints measured between adjacent links. The geometric constraints of the linkage allow calculation of all of the configuration parameters in terms of a minimum set, which are the "input parameters". The number of input parameters is called the "mobility", or degree of freedom, of the linkage system.
A system of "n" rigid bodies moving in space has 6"n" degrees of freedom measured relative to a fixed frame. Include this frame in the count of bodies, so that mobility is independent of the choice of the fixed frame, then we have "M" = 6("N" − 1), where "N" = "n" + 1 is the number of moving bodies plus the fixed body.
Joints that connect bodies in this system remove degrees of freedom and reduce mobility. Specifically, hinges and sliders each impose five constraints and therefore remove five degrees of freedom. It is convenient to define the number of constraints "c" that a joint imposes in terms of the joint's freedom "f", where "c" = 6 − "f". In the case of a hinge or slider, which are one degree of freedom joints, we have "f" = 1 and therefore "c" = 6 − 1 = 5.
Thus, the mobility of a linkage system formed from "n" moving links and "j" joints each with "f""i", "i" = 1, ..., "j", degrees of freedom can be computed as,
formula_0
where "N" includes the fixed link. This is known as Kutzbach–Grübler's equation
There are two important special cases: (i) a simple open chain, and (ii) a simple closed chain. A simple open chain consists of "n" moving links connected end to end by "j" joints, with one end connected to a ground link. Thus, in this case "N" = "j" + 1 and the mobility of the chain is
formula_1
For a simple closed chain, "n" moving links are connected end-to-end by "n"+1 joints such that the two ends are connected to the ground link forming a loop. In this case, we have "N"="j" and the mobility of the chain is
formula_2
An example of a simple open chain is a serial robot manipulator. These robotic systems are constructed from a series of links connected by six one degree-of-freedom revolute or prismatic joints, so the system has six degrees of freedom.
An example of a simple closed chain is the RSSR (revolute-spherical-spherical-revolute) spatial four-bar linkage. The sum of the freedom of these joints is eight, so the mobility of the linkage is two, where one of the degrees of freedom is the rotation of the coupler around the line joining the two S joints.
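The formula is straightforward to evaluate directly. A minimal sketch in Python (the function name and the space_dof argument, 6 for spatial linkages and 3 for planar or spherical ones, are illustrative, not standard notation):
<syntaxhighlight lang="python">
def mobility(moving_links, joint_freedoms, space_dof=6):
    """Kutzbach-Gruebler mobility for a linkage with the given joint freedoms f_i."""
    N = moving_links + 1                         # include the fixed link in the body count
    j = len(joint_freedoms)
    return space_dof * (N - 1 - j) + sum(joint_freedoms)

# Serial 6R robot arm: open chain, six revolute joints, mobility 6.
print(mobility(6, [1] * 6))
# RSSR spatial four-bar: two revolute and two spherical joints, mobility 2.
print(mobility(3, [1, 3, 3, 1]))
</syntaxhighlight>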
Planar and spherical movement.
It is common practice to design the linkage system so that the movement of all of the bodies is constrained to lie on parallel planes, to form what is known as a "planar linkage". It is also possible to construct the linkage system so that all of the bodies move on concentric spheres, forming a "spherical linkage". In both cases, the degrees of freedom of each link are now three rather than six, and the constraints imposed by joints are now "c" = 3 − "f".
In this case, the mobility formula is given by
formula_3
and we have the special cases,
formula_4
formula_5
An example of a planar simple closed chain is the planar four-bar linkage, which is a four-bar loop with four one degree-of-freedom joints and therefore has mobility "M" = 1.
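Reusing the mobility sketch above with the planar value space_dof=3 reproduces this result:
<syntaxhighlight lang="python">
# Planar four-bar loop: three moving links, four revolute joints (f = 1 each).
print(mobility(3, [1, 1, 1, 1], space_dof=3))   # M = 1
</syntaxhighlight>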
Joints.
The most familiar joints for linkage systems are the revolute, or hinged, joint denoted by an R, and the prismatic, or sliding, joint denoted by a P. Most other joints used for spatial linkages are modeled as combinations of revolute and prismatic joints. For example, the cylindric joint, which allows both rotation about and sliding along a single axis, is equivalent to a revolute and a prismatic joint with coincident axes.
Analysis and synthesis of linkages.
The primary mathematical tool for the analysis of a linkage is known as the kinematic equations of the system. This is a sequence of rigid body transformation along a serial chain within the linkage that locates a floating link relative to the ground frame. Each serial chain within the linkage that connects this floating link to ground provides a set of equations that must be satisfied by the configuration parameters of the system. The result is a set of non-linear equations that define the configuration parameters of the system for a set of values for the input parameters.
Freudenstein introduced a method to use these equations for the design of a planar four-bar linkage to achieve a specified relation between the input parameters and the configuration of the linkage. Another approach to planar four-bar linkage design was introduced by L. Burmester, and is called Burmester theory.
Planar one degree-of-freedom linkages.
The mobility formula provides a way to determine the number of links and joints in a planar linkage that yields a one degree-of-freedom linkage. If we require the mobility of a planar linkage to be "M" = 1 and "f""i" = 1, the result is
formula_6
or
formula_7
This formula shows that the linkage must have an even number of links, so we have "N" = 4, 6, 8, ... with "j" = 4, 7, 10, ... joints respectively.
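A short sketch tabulating the link and joint counts allowed by this relation:
<syntaxhighlight lang="python">
# One degree-of-freedom planar linkages: j = 3N/2 - 2 single-dof joints for N links.
for N in (4, 6, 8, 10, 12):
    print(N, "links ->", 3 * N // 2 - 2, "joints")   # 4->4, 6->7, 8->10, ...
</syntaxhighlight>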
See Sunkari and Schmidt for the number of 14- and 16-bar topologies, as well as the number of linkages that have two, three and four degrees-of-freedom.
The planar four-bar linkage is probably the simplest and most common linkage. It is a one degree-of-freedom system that transforms an input crank rotation or slider displacement into an output rotation or slide.
Examples of four-bar linkages include the crank-rocker, the slider-crank used in piston engines, and Watt's straight-line linkage.
Biological linkages.
Linkage systems are widely distributed in animals. The most thorough overview of the different types of linkages in animals has been provided by Mees Muller, who also designed a new classification system which is especially well suited for biological systems. A well-known example is the cruciate ligaments of the knee.
An important difference between biological and engineering linkages is that revolving bars are rare in biology and that usually only a small part of the theoretically possible range of motion is used, because of additional functional constraints (especially the necessity to deliver blood). Biological linkages frequently are compliant. Often one or more bars are formed by ligaments, and often the linkages are three-dimensional. Coupled linkage systems are known, as well as five-, six-, and even seven-bar linkages. Four-bar linkages are by far the most common though.
Linkages can be found in joints, such as the knee of tetrapods, the hock of sheep, and the cranial mechanism of birds and reptiles. The latter is responsible for the upward motion of the upper bill in many birds.
Linkage mechanisms are especially frequent and manifold in the head of bony fishes, such as wrasses, which have evolved many specialized feeding mechanisms. Especially advanced are the linkage mechanisms of jaw protrusion. For suction feeding a system of linked four-bar linkages is responsible for the coordinated opening of the mouth and 3-D expansion of the buccal cavity. Other linkages are responsible for protrusion of the premaxilla.
Linkages are also present as locking mechanisms, such as in the knee of the horse, which enables the animal to sleep standing, without active muscle contraction. In pivot feeding, used by certain bony fishes, a four-bar linkage at first locks the head in a ventrally bent position by the alignment of two bars. The release of the locking mechanism jets the head up and moves the mouth toward the prey within 5–10 ms.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "M = 6n - \\sum_{i=1}^j (6 - f_i) = 6(N-1 - j) + \\sum_{i=1}^j\\ f_i,"
},
{
"math_id": 1,
"text": " M = \\sum_{i=1}^j\\ f_i ."
},
{
"math_id": 2,
"text": " M = \\sum_{i=1}^j\\ f_i - 6."
},
{
"math_id": 3,
"text": "M = 3(N- 1 - j)+ \\sum_{i=1}^j\\ f_i, "
},
{
"math_id": 4,
"text": " M = \\sum_{i=1}^j\\ f_i, "
},
{
"math_id": 5,
"text": " M = \\sum_{i=1}^j\\ f_i - 3. "
},
{
"math_id": 6,
"text": " M = 3(N - 1 - j) + j = 1, \\!"
},
{
"math_id": 7,
"text": " j = \\frac{3}{2}N - 2. \\!"
}
] |
https://en.wikipedia.org/wiki?curid=1552607
|
1552730
|
Binary Ordered Compression for Unicode
|
MIME compatible Unicode compression scheme
Binary Ordered Compression for Unicode (BOCU) is a MIME compatible Unicode compression scheme. BOCU-1 combines the wide applicability of UTF-8 with the compactness of Standard Compression Scheme for Unicode (SCSU). This Unicode encoding is designed to be useful for compressing short strings, and maintains code point order. BOCU-1 is specified in a Unicode Technical Note.
For comparison, SCSU was adopted as the standard Unicode compression scheme, with a byte/code point ratio similar to language-specific code pages. SCSU has not been widely adopted, as it is not suitable for MIME "text" media types. For example, SCSU cannot be used directly in emails and similar protocols. SCSU requires a complicated encoder design for good performance. Usually, the zip, bzip2, and other industry standard algorithms compact larger amounts of Unicode text more efficiently.
Both SCSU and BOCU-1 are IANA registered charsets.
Details.
All numbers in this section are hexadecimal, and all ranges are inclusive.
Code points from codice_0 to codice_1 are encoded in BOCU-1 as the corresponding byte value. All other code points (that is, codice_2 through codice_3 and codice_4 through codice_5) are encoded as a difference between the code point and a normalized version of the most recently encoded code point that was not an ASCII space (codice_1). The initial state is codice_7. The normalization mapping is as follows:
The difference between the current code point and the normalized previous code point is encoded as follows:
Each byte range is lexicographically ordered with the following thirteen byte values excluded: codice_8. For example, the byte sequence codice_9, coding for a difference of codice_10, is immediately followed by the byte sequence codice_11, coding for a difference of codice_12.
Any ASCII input codice_0 to codice_14 excluding space codice_1 resets the encoder to codice_7. Because the above-mentioned values cover the line end code points codice_17 and codice_18 "as is" (codice_19), the encoder is in a known state at the beginning of each line. The corruption of a single byte therefore affects at most one line. For comparison, the corruption of a single byte in UTF-8 affects at most one code point; for SCSU it can affect the entire document.
BOCU-1 offers similar robustness for input texts without the above-mentioned values by means of the special reset code codice_20. When a decoder finds this octet it resets its state to codice_7 as for a line end. The use of codice_20 reset bytes is not recommended in the BOCU-1 specification, because it conflicts with other BOCU-1 design goals, notably the "binary order".
The optional use of a signature codice_23 at the beginning of BOCU-1 encoded texts, i.e. the BOCU-1 byte sequence codice_24, changes the initial state codice_7 to codice_26. In other words, the signature cannot simply be stripped as in most other Unicode encoding schemes. Adding a reset byte after the signature (codice_27) could avoid this effect, but the BOCU-1 specification does not recommend this practice.
In theory UTF-1 and UTF-8 could encode the original UCS-4 set with 31 bits up to codice_28. BOCU-1 and UTF-16 can encode the modern Unicode set from codice_0 to codice_5. Excluding the thirteen "protected" code points encoded as single octets, BOCU-1 can use formula_0 octets in multi-byte encodings. BOCU-1 needs at most four bytes consisting of a lead byte and one to three trail bytes. The trail bytes encode a remaining "modulo 243" (base 243) difference, while the lead byte determines the number of trail bytes and an initial difference. Note that the reset byte codice_20 is not "protected" and can occur as a trail byte.
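The base-243 arithmetic behind the trail bytes can be illustrated with a short sketch. This is not a BOCU-1 encoder: real BOCU-1 also folds part of the difference into the lead byte and maps each base-243 digit onto the 243 permitted byte values, skipping the thirteen protected codes.
<syntaxhighlight lang="python">
def trail_digits(diff, n_trail):
    """Split a code point difference into base-243 trail 'digits' plus a lead remainder."""
    digits = []
    for _ in range(n_trail):
        diff, digit = divmod(diff, 243)
        digits.append(digit)
    return diff, list(reversed(digits))

print(trail_digits(100000, 2))   # lead remainder 1, trail digits [168, 127]
</syntaxhighlight>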
Patent.
Prior to 16 November 2022, the general BOCU algorithm was covered by United States Patent #6,737,994, which also mentions the specific BOCU-1 implementation. This patent has now expired.
IBM, which employed both of the inventors of BOCU-1 at the time it was created, stated in the Unicode Technical Note that implementers of a "fully compliant version of BOCU-1" had to contact IBM to request a royalty-free license. BOCU-1 is the only Unicode compression scheme described on the Unicode Web site that is known to have been encumbered with intellectual property restrictions.
By contrast, IBM also filed for a patent on UTF-EBCDIC, but it chose in that case to make the documentation and encoding scheme "freely available to anyone concerned towards making the transformation format as part of the UCS standards", instead of requiring implementers to request a license.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "256 - 13 = 243"
}
] |
https://en.wikipedia.org/wiki?curid=1552730
|
1552884
|
Cylinder-head-sector
|
Historical method for giving addresses to physical data blocks on hard disk drives
Cylinder-head-sector (CHS) is an early method for giving addresses to each physical block of data on a hard disk drive.
It is a 3D-coordinate system made out of a vertical coordinate "head", a horizontal (or radial) coordinate "cylinder", and an angular coordinate "sector". Head selects a circular surface: a platter in the disk (and one of its two sides). Cylinder is a cylindrical intersection through the stack of platters in a disk, centered around the disk's spindle. Combined, cylinder and head intersect to a circular line, or more precisely: a circular strip of physical data blocks called "track". Sector finally selects which data block in this track is to be addressed, as the track is subdivided into several equally-sized portions, each of which is an arc of (360/n) degrees, where n is the number of sectors in the track.
CHS addresses were exposed, instead of simple linear addresses (going from "0" to the "total block count on disk - 1"), because early hard drives didn't come with an embedded disk controller that would hide the physical layout. A separate "generic" controller card was used, so that the operating system had to know the exact physical "geometry" of the "specific" drive attached to the controller to correctly address data blocks. The traditional limits were 512 bytes/sector × 63 sectors/track × 255 heads (tracks/cylinder) × 1024 cylinders, resulting in a limit of 8032.5 MiB for the total capacity of a disk.
As the geometry became more complicated (for example, with the introduction of zone bit recording) and drive sizes grew over time, the CHS addressing method became restrictive. Since the late 1980s, hard drives began shipping with an embedded disk controller that had good knowledge of the physical geometry; they would however report a false geometry to the computer, e.g., a larger number of heads than actually present, to gain more addressable space. These logical CHS values would be translated by the controller, thus CHS addressing no longer corresponded to any physical attributes of the drive.
By the mid 1990s, hard drive interfaces replaced the CHS scheme with logical block addressing (LBA), but many tools for manipulating the master boot record (MBR) partition table still aligned partitions to cylinder boundaries; thus, artifacts of CHS addressing were still seen in partitioning software by the late 2000s.
In the early 2010s, the disk size limitations imposed by MBR became problematic and the GUID Partition Table (GPT) was designed as a replacement; modern computers using UEFI firmware without MBR support no longer use any notions from CHS addressing.
Definitions.
CHS addressing is the process of identifying individual sectors (aka. physical block of data) on a disk by their position in a track, where the track is determined by the head and cylinder numbers. The terms are explained bottom up, for disk addressing the "sector" is the smallest unit. Disk controllers can introduce address translations to map logical to physical positions, e.g., zone bit recording stores fewer sectors in shorter (inner) tracks, physical disk formats are not necessarily cylindrical, and sector numbers in a track can be skewed.
Sectors.
Floppy disks and controllers had used physical sector sizes of 128, 256, 512 and 1024 bytes (e.g., PC/AX), but formats with 512 bytes per physical sector became dominant in the 1980s.
The most common physical sector size for hard disks today is 512 bytes, but there have been hard disks with 520 bytes per sector as well for non-IBM compatible machines. In 2005 some Seagate custom hard disks used sector sizes of 1024 bytes per sector. Advanced Format hard disks use 4096 bytes per physical sector (4Kn) since 2010, but will also be able to emulate 512 byte sectors (512e) for a transitional period.
Magneto-optical drives use sector sizes of 512 and 1024 bytes on 5.25-inch drives and 512 and 2048 bytes on 3.5-inch drives.
In CHS addressing the "sector" numbers always start at 1, there is no "sector 0",[#endnote_] which can lead to confusion since logical sector addressing schemes typically start counting with 0, e.g., logical block addressing (LBA), or "relative sector addressing" used in DOS.
For physical disk geometries the maximal sector number is determined by the "low level format" of the disk. However, for disk access with the BIOS of IBM-PC compatible machines, the sector number was encoded in six bits, resulting in a maximal number of 111111 (63) sectors per track. This maximum is still in use for virtual CHS geometries.
Tracks.
The tracks are the thin concentric circular strips of sectors. At least one head is required to read a single track. With respect to disk geometries the terms "track" and "cylinder" are closely related. For a single or double sided floppy disk "track" is the common term; and for more than two heads "cylinder" is the common term. Strictly speaking a "track" is a given codice_0 combination consisting of codice_1 sectors, while a "cylinder" consists of codice_2 sectors.
Cylinders.
A cylinder is a division of data in a disk drive, as used in the CHS addressing mode of a Fixed Block Architecture disk or the cylinder–head–record (CCHHR) addressing mode of a CKD disk.
The concept is concentric, hollow, cylindrical slices through the physical disks (platters), collecting the respective circular tracks aligned through the stack of platters. The number of cylinders of a disk drive exactly equals the number of tracks on a single surface in the drive. It comprises the same track number on each platter, spanning all such tracks across each platter surface that is able to store data (without regard to whether or not the track is "bad"). Cylinders are vertically formed by tracks. In other words, track 12 on platter 0 plus track 12 on platter 1 etc. is cylinder 12.
Other forms of Direct Access Storage Device (DASD), such as drum memory devices or the IBM 2321 Data Cell, might give blocks addresses that include a cylinder address, although the cylinder address doesn't select a (geometric) cylindrical slice of the device.
Heads.
A device called a head reads and writes data in a hard drive by manipulating the magnetic medium that composes the surface of an associated disk platter. Naturally, a platter has 2 sides and thus 2 surfaces on which data can be manipulated; usually there are 2 heads per platter, one per side. (Sometimes the term "side" is substituted for "head," since platters might be separated from their head assemblies, as with the removable media of a "floppy" drive.)
The codice_3 addressing supported by the code of IBM-PC compatible BIOSes used eight bits for heads, giving a maximum of 256 heads counted as head 0 up to 255 (codice_4). However, a bug in all versions of Microsoft DOS/IBM PC DOS up to and including 7.10 causes these operating systems to crash on boot when encountering volumes with 256 heads[#endnote_]. Therefore, all compatible BIOSes use mappings with at most 255 heads (codice_5), including in virtual codice_6 geometries.
This historical oddity can affect the maximum disk size in old BIOS INT 13h code as well as old PC DOS or similar operating systems:
codice_7 MB, but actually codice_8 MB, yields what is known as the 8 GB limit. In this context the seemingly relevant definition 8 GB = 8192 MB is another incorrect limit, because it would require CHS codice_9 with 64 sectors per track.
"Tracks" and "cylinders" are counted from 0, i.e., track 0 is the first (outer-most) track on floppy or other cylindrical disks. Old BIOS code supported ten bits in CHS addressing with up to 1024 cylinders (codice_10). Adding six bits for sectors and eight bits for heads results in the 24 bits supported by BIOS interrupt 13h. Subtracting the disallowed sector number 0 in codice_11 tracks corresponds to 128 MB for a sector size of 512 bytes (codice_12); and codice_13 confirms the (roughly) 8 GB limit.
CHS addressing starts at codice_14 with a maximal value codice_15 for codice_16 bits, or codice_17 for 24 bits limited to 255 heads. CHS values used to specify the geometry of a disk have to count cylinder 0 and head 0, resulting in a maximum (codice_18 or) codice_19 for 24 bits with (256 or) 255 heads. In CHS tuples specifying a geometry, S actually means sectors per track, and where the (virtual) geometry still matches the capacity, the disk contains codice_20 sectors. As larger hard disks have come into use, a cylinder has also become a logical disk structure, standardised at 16 065 sectors (codice_21).
CHS addressing with 28 bits (EIDE and ATA-2) permits eight bits for sectors still starting at 1, i.e., sectors 1...255, four bits for heads 0...15, and sixteen bits for cylinders 0...65535. This results in a roughly 128 GB limit; actually codice_22 sectors corresponding to 130560 MB for a sector size of 512 bytes. The codice_23 bits in the ATA-2 specification are also covered by Ralf Brown's Interrupt List, and an old working draft of this now expired standard was published.
With an old BIOS limit of 1024 cylinders and the ATA limit of 16 heads the combined effect was codice_24 sectors, i.e., a 504 MB limit for sector size 512. BIOS translation schemes known as ECHS and "revised ECHS" mitigated this limitation by using 128 or 240 instead of 16 heads, simultaneously reducing the numbers of cylinders and sectors to fit into codice_25 (ECHS limit: 4032 MB) or codice_26 (revised ECHS limit: 7560 MB) for the given total number of sectors on a disk.
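These classic limits can be reproduced with a short sketch (MB here meaning MiB, i.e. 2^20 bytes, as in the figures above):
<syntaxhighlight lang="python">
def chs_capacity_mib(cylinders, heads, sectors, bytes_per_sector=512):
    """Addressable capacity in MiB for a given (virtual) CHS geometry."""
    return cylinders * heads * sectors * bytes_per_sector / 2**20

print(chs_capacity_mib(1024, 255, 63))   # 8032.5 -> the "8 GB" BIOS limit
print(chs_capacity_mib(1024, 16, 63))    # 504.0  -> combined old BIOS / ATA limit
print(chs_capacity_mib(1024, 128, 63))   # 4032.0 -> ECHS translation limit
print(chs_capacity_mib(1024, 240, 63))   # 7560.0 -> revised ECHS limit
</syntaxhighlight>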
Blocks and clusters.
The Unix communities employ the term "block" to refer to a sector or group of sectors. For example, the Linux fdisk utility, before version 2.25, displayed partition sizes using 1024-byte "blocks".
"Clusters" are allocation units for data on various file systems (FAT, NTFS, etc.), where "data" mainly consists of files. "Clusters" are not directly affected by the physical or virtual geometry of the disk, i.e., a cluster can begin at a sector near the end of a given codice_0 track, and end in a sector on the physically or logically next codice_0 track.
CHS to LBA mapping.
In 2002 the ATA-6 specification introduced optional 48-bit logical block addressing and declared CHS addressing obsolete, but still allowed the ATA-5 translations to be implemented. Unsurprisingly, the CHS to LBA translation formula given below also matches the last ATA-5 CHS translation. In the ATA-5 specification CHS support was mandatory for up to 16 514 064 sectors and optional for larger disks. The ATA-5 limit corresponds to CHS codice_29 or equivalent disk capacities (16514064 = 16383 × 16 × 63 = 1032 × 254 × 63), and requires 24 = 14 + 4 + 6 bits (16383 + 1 = 2^14).
CHS tuples can be mapped onto LBA addresses using the following formula:
formula_0
where A is the LBA address, "N"heads is the number of heads on the disk, "N"sectors is the maximum number of sectors per track, and ("c", "h", "s") is the CHS address.
A "Logical Sector Number" formula in the ECMA-107 and ISO/IEC 9293:1994 (superseding ISO 9293:1987) standards for FAT file systems matches exactly the LBA formula given above: "Logical Block Address" and "Logical Sector Number" (LSN) are synonyms. The formula does not use the number of cylinders, but requires the number of heads and the number of sectors per track in the disk geometry, because the same CHS tuple addresses different logical sector numbers depending on the geometry.
Examples:
For geometry codice_30 of a disk with 1028160 sectors, CHS codice_31 is LBA 3150 = ((3 × 16) + 2) × 63 + (1 – 1);
For geometry codice_32 of a disk with 1028160 sectors, CHS codice_31 is LBA 3570 = ((3 × 4) + 2) × 255 + (1 – 1);
For geometry codice_34 of a disk with 1028160 sectors, CHS codice_31 is LBA 48321 = ((3 × 255) + 2) × 63 + (1 – 1);
For geometry codice_36 of a disk with 1028160 sectors, CHS codice_31 is LBA 1504 = ((3 × 15) + 2) × 32 + (1 – 1).
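A direct transcription of the formula, checked against the worked examples above (the geometries are passed as heads and sectors per track; names are illustrative):
<syntaxhighlight lang="python">
def chs_to_lba(c, h, s, heads, sectors_per_track):
    """Logical block address of CHS tuple (c, h, s) under a given logical geometry."""
    return (c * heads + h) * sectors_per_track + (s - 1)

print(chs_to_lba(3, 2, 1, heads=16,  sectors_per_track=63))    # 3150
print(chs_to_lba(3, 2, 1, heads=4,   sectors_per_track=255))   # 3570
print(chs_to_lba(3, 2, 1, heads=255, sectors_per_track=63))    # 48321
print(chs_to_lba(3, 2, 1, heads=15,  sectors_per_track=32))    # 1504
</syntaxhighlight>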
To help visualize the sequencing of sectors into a linear LBA model, note that:
The first LBA sector is sector # zero; the same sector in a CHS model is called sector # one.
All the sectors of each head/track get counted before incrementing to the next head/track.
All the heads/tracks of the same cylinder get counted before incrementing to the next cylinder.
The outside half of a whole hard drive would be the first half of the drive.
History.
Cylinder Head Record format has been used by Count Key Data (CKD) hard disks on IBM mainframes since at least the 1960s. This is largely comparable to the Cylinder Head Sector format used by PCs, save that the sector size was not fixed but could vary from track to track based on the needs of each application. In contemporary use, the disk geometry presented to the mainframe is emulated by the storage firmware, and no longer has any relation to physical disk geometry.
Earlier hard drives used in the PC, such as MFM and RLL drives, divided each cylinder into an equal number of sectors, so the CHS values matched the physical properties of the drive. A drive with a CHS tuple of codice_38 would have 500 tracks per side on each platter, two platters (4 heads), and 32 sectors per track, with a total of 32 768 000 bytes (31.25 MiB).
ATA/IDE drives were much more efficient at storing data and have replaced the now "archaic" MFM and RLL drives. They use zone bit recording (ZBR), where the number of sectors dividing each track varies with the location of groups of tracks on the surface of the platter. Tracks nearer to the edge of the platter contain more blocks of data than tracks close to the spindle, because there is more physical space within a given track near the edge of the platter. Thus, the CHS addressing scheme cannot correspond directly with the physical geometry of such drives, due to the varying number of sectors per track for different regions on a platter. Because of this, many drives still have a surplus of sectors (less than 1 cylinder in size) at the end of the drive, since the total number of sectors rarely, if ever, ends on a cylinder boundary.
An ATA/IDE drive can be set in the system BIOS with any configuration of cylinders, heads and sectors that do not exceed the capacity of the drive (or the BIOS), since the drive will convert any given CHS value into an actual address for its specific hardware configuration. This however can cause compatibility problems.
For operating systems such as Microsoft DOS or older versions of Windows, each partition must start and end at a cylinder boundary. Only some of the relatively modern operating systems (Windows XP included) may disregard this rule, but doing so can still cause some compatibility issues, especially if the user wants to perform dual booting on the same drive. Microsoft has not followed this rule with its internal disk partitioning tools since Windows Vista.
References.
<templatestyles src="Reflist/styles.css" />
1.<templatestyles src="Citation/styles.css"/>^ This rule is true at least for all formats where the physical sectors are named 1 upwards. However, there are a few odd floppy formats (e.g., the 640 KB format used by BBC Master 512 with DOS Plus 2.1), where the first sector in a track is named "0" not "1".
2.<templatestyles src="Citation/styles.css"/>^ While computers begin counting at 0, DOS would begin counting at 1. In order to do this, DOS would add a 1 to the head count before displaying it on the screen. However, instead of converting the 8-bit unsigned integer to a larger size (such as a 16-bit integer) first, DOS just added the 1. This would overflow a head count of 255 (codice_39) into 0 (codice_40) instead of the 256 that would be expected. This was fixed with DOS 8, but by then, it had become a de facto standard to not use a head value of 255.
|
[
{
"math_id": 0,
"text": "A = (c \\times N_\\mathrm{heads} + h) \\times N_\\mathrm{sectors} + (s-1),"
}
] |
https://en.wikipedia.org/wiki?curid=1552884
|
15532
|
Integral
|
Operation in mathematical calculus
In mathematics, an integral is the continuous analog of a sum, which is used to calculate areas, volumes, and their generalizations. Integration, the process of computing an integral, is one of the two fundamental operations of calculus, the other being differentiation. Integration was initially used to solve problems in mathematics and physics, such as finding the area under a curve, or determining displacement from velocity. Usage of integration expanded to a wide variety of scientific fields thereafter.
A definite integral computes the signed area of the region in the plane that is bounded by the graph of a given function between two points in the real line. Conventionally, areas above the horizontal axis of the plane are positive while areas below are negative. Integrals also refer to the concept of an "antiderivative", a function whose derivative is the given function; in this case, they are also called "indefinite integrals". The fundamental theorem of calculus relates definite integration to differentiation and provides a method to compute the definite integral of a function when its antiderivative is known; differentiation and integration are inverse operations.
Although methods of calculating areas and volumes dated from ancient Greek mathematics, the principles of integration were formulated independently by Isaac Newton and Gottfried Wilhelm Leibniz in the late 17th century, who thought of the area under a curve as an infinite sum of rectangles of infinitesimal width. Bernhard Riemann later gave a rigorous definition of integrals, which is based on a limiting procedure that approximates the area of a curvilinear region by breaking the region into infinitesimally thin vertical slabs. In the early 20th century, Henri Lebesgue generalized Riemann's formulation by introducing what is now referred to as the Lebesgue integral; it is more general than Riemann's in the sense that a wider class of functions are Lebesgue-integrable.
Integrals may be generalized depending on the type of the function as well as the domain over which the integration is performed. For example, a line integral is defined for functions of two or more variables, and the interval of integration is replaced by a curve connecting two points in space. In a surface integral, the curve is replaced by a piece of a surface in three-dimensional space.
History.
Pre-calculus integration.
The first documented systematic technique capable of determining integrals is the method of exhaustion of the ancient Greek astronomer Eudoxus and philosopher Democritus ("ca." 370 BC), which sought to find areas and volumes by breaking them up into an infinite number of divisions for which the area or volume was known. This method was further developed and employed by Archimedes in the 3rd century BC and used to calculate the area of a circle, the surface area and volume of a sphere, area of an ellipse, the area under a parabola, the volume of a segment of a paraboloid of revolution, the volume of a segment of a hyperboloid of revolution, and the area of a spiral.
A similar method was independently developed in China around the 3rd century AD by Liu Hui, who used it to find the area of the circle. This method was later used in the 5th century by Chinese father-and-son mathematicians Zu Chongzhi and Zu Geng to find the volume of a sphere.
In the Middle East, Hasan Ibn al-Haytham, Latinized as Alhazen (c. 965 – c. 1040 AD) derived a formula for the sum of fourth powers. Alhazen determined the equations to calculate the area enclosed by the curve represented by formula_0 (which translates to the integral formula_1 in contemporary notation), for any given non-negative integer value of formula_2. He used the results to carry out what would now be called an integration of this function, where the formulae for the sums of integral squares and fourth powers allowed him to calculate the volume of a paraboloid.
The next significant advances in integral calculus did not begin to appear until the 17th century. At this time, the work of Cavalieri with his method of indivisibles, and work by Fermat, began to lay the foundations of modern calculus, with Cavalieri computing the integrals of "x""n" up to degree "n" = 9 in Cavalieri's quadrature formula. The case "n" = −1 required the invention of a function, the hyperbolic logarithm, achieved by quadrature of the hyperbola in 1647.
Further steps were made in the early 17th century by Barrow and Torricelli, who provided the first hints of a connection between integration and differentiation. Barrow provided the first proof of the fundamental theorem of calculus. Wallis generalized Cavalieri's method, computing integrals of x to a general power, including negative powers and fractional powers.
Leibniz and Newton.
The major advance in integration came in the 17th century with the independent discovery of the fundamental theorem of calculus by Leibniz and Newton. The theorem demonstrates a connection between integration and differentiation. This connection, combined with the comparative ease of differentiation, can be exploited to calculate integrals. In particular, the fundamental theorem of calculus allows one to solve a much broader class of problems. Equal in importance is the comprehensive mathematical framework that both Leibniz and Newton developed. Given the name infinitesimal calculus, it allowed for precise analysis of functions with continuous domains. This framework eventually became modern calculus, whose notation for integrals is drawn directly from the work of Leibniz.
Formalization.
While Newton and Leibniz provided a systematic approach to integration, their work lacked a degree of rigour. Bishop Berkeley memorably attacked the vanishing increments used by Newton, calling them "ghosts of departed quantities". Calculus acquired a firmer footing with the development of limits. Integration was first rigorously formalized, using limits, by Riemann. Although all bounded piecewise continuous functions are Riemann-integrable on a bounded interval, subsequently more general functions were considered—particularly in the context of Fourier analysis—to which Riemann's definition does not apply, and Lebesgue formulated a different definition of integral, founded in measure theory (a subfield of real analysis). Other definitions of integral, extending Riemann's and Lebesgue's approaches, were proposed. These approaches based on the real number system are the ones most common today, but alternative approaches exist, such as a definition of integral as the standard part of an infinite Riemann sum, based on the hyperreal number system.
Historical notation.
The notation for the indefinite integral was introduced by Gottfried Wilhelm Leibniz in 1675. He adapted the integral symbol, ∫, from the letter "ſ" (long s), standing for "summa" (written as "ſumma"; Latin for "sum" or "total"). The modern notation for the definite integral, with limits above and below the integral sign, was first used by Joseph Fourier in "Mémoires" of the French Academy around 1819–1820, reprinted in his book of 1822.
Isaac Newton used a small vertical bar above a variable to indicate integration, or placed the variable inside a box. The vertical bar was easily confused with ẋ or "x"′, which are used to indicate differentiation, and the box notation was difficult for printers to reproduce, so these notations were not widely adopted.
First use of the term.
The term was first printed in Latin by Jacob Bernoulli in 1690: "Ergo et horum Integralia aequantur".
Terminology and notation.
In general, the integral of a real-valued function "f"("x") with respect to a real variable "x" on an interval ["a", "b"] is written as
formula_3
The integral sign ∫ represents integration. The symbol "dx", called the differential of the variable "x", indicates that the variable of integration is "x". The function "f"("x") is called the integrand, the points "a" and "b" are called the limits (or bounds) of integration, and the integral is said to be over the interval ["a", "b"], called the interval of integration.
A function is said to be integrable if its integral over its domain is finite. If limits are specified, the integral is called a definite integral.
When the limits are omitted, as in
formula_4
the integral is called an indefinite integral, which represents a class of functions (the antiderivative) whose derivative is the integrand. The fundamental theorem of calculus relates the evaluation of definite integrals to indefinite integrals. There are several extensions of the notation for integrals to encompass integration on unbounded domains and/or in multiple dimensions (see later sections of this article).
In advanced settings, it is not uncommon to leave out "dx" when only the simple Riemann integral is being used, or the exact type of integral is immaterial. For instance, one might write formula_5 to express the linearity of the integral, a property shared by the Riemann integral and all generalizations thereof.
Interpretations.
Integrals appear in many practical situations. For instance, from the length, width and depth of a swimming pool which is rectangular with a flat bottom, one can determine the volume of water it can contain, the area of its surface, and the length of its edge. But if it is oval with a rounded bottom, integrals are required to find exact and rigorous values for these quantities. In each case, one may divide the sought quantity into infinitely many infinitesimal pieces, then sum the pieces to achieve an accurate approximation.
As another example, to find the area of the region bounded by the graph of the function "f"("x") = formula_6 between "x" = 0 and "x" = 1, one can divide the interval into five pieces (0, 1/5, 2/5, ..., 1), then construct rectangles using the right end height of each piece (thus √(1/5), √(2/5), and so on up to √(5/5) = 1) and sum their areas to get the approximation
formula_7
which is larger than the exact value. Alternatively, when replacing these subintervals by ones with the left end height of each piece, the approximation one gets is too low: with twelve such subintervals the approximated area is only 0.6203. However, when the number of pieces increases to infinity, it will reach a limit which is the exact value of the area sought (in this case, 2/3). One writes
formula_8
which means 2/3 is the result of a weighted sum of function values, √"x", multiplied by the infinitesimal step widths, denoted by "dx", on the interval [0, 1].
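These approximations are easy to reproduce numerically; a minimal sketch (illustrative code):
<syntaxhighlight lang="python">
from math import sqrt

def right_sum(n):
    """Right-endpoint Riemann sum of sqrt(x) on [0, 1] with n equal subintervals."""
    return sum(sqrt(i / n) / n for i in range(1, n + 1))

def left_sum(n):
    """Left-endpoint Riemann sum of sqrt(x) on [0, 1] with n equal subintervals."""
    return sum(sqrt(i / n) / n for i in range(n))

print(right_sum(5))        # ~0.7497, an overestimate
print(left_sum(12))        # ~0.6203, an underestimate
print(right_sum(10**6))    # approaches the exact value 2/3 as n grows
</syntaxhighlight>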
Formal definitions.
There are many ways of formally defining an integral, not all of which are equivalent. The differences exist mostly to deal with differing special cases which may not be integrable under other definitions, but are also occasionally for pedagogical reasons. The most commonly used definitions are Riemann integrals and Lebesgue integrals.
Riemann integral.
The Riemann integral is defined in terms of Riemann sums of functions with respect to "tagged partitions" of an interval. A tagged partition of a closed interval ["a", "b"] on the real line is a finite sequence
formula_9
This partitions the interval ["a", "b"] into n sub-intervals ["x""i"−1, "x""i"] indexed by i, each of which is "tagged" with a specific point "t""i" ∈ ["x""i"−1, "x""i"]. A "Riemann sum" of a function f with respect to such a tagged partition is defined as
formula_10
thus each term of the sum is the area of a rectangle with height equal to the function value at the chosen point of the given sub-interval, and width the same as the width of the sub-interval, Δ"i" = "x""i" − "x""i"−1. The "mesh" of such a tagged partition is the width of the largest sub-interval formed by the partition, max"i"=1..."n" Δ"i". The "Riemann integral" of a function f over the interval ["a", "b"] is equal to S if:
For all formula_11 there exists formula_12 such that, for any tagged partition formula_13 with mesh less than formula_14,
formula_15
When the chosen tags are the maximum (respectively, minimum) value of the function in each interval, the Riemann sum becomes an upper (respectively, lower) Darboux sum, suggesting the close connection between the Riemann integral and the Darboux integral.
Lebesgue integral.
It is often of interest, both in theory and applications, to be able to pass to the limit under the integral. For instance, a sequence of functions can frequently be constructed that approximate, in a suitable sense, the solution to a problem. Then the integral of the solution function should be the limit of the integrals of the approximations. However, many functions that can be obtained as limits are not Riemann-integrable, and so such limit theorems do not hold with the Riemann integral. Therefore, it is of great importance to have a definition of the integral that allows a wider class of functions to be integrated.
Such an integral is the Lebesgue integral, that exploits the following fact to enlarge the class of integrable functions: if the values of a function are rearranged over the domain, the integral of a function should remain the same. Thus Henri Lebesgue introduced the integral bearing his name, explaining this integral thus in a letter to Paul Montel:
<templatestyles src="Template:Blockquote/styles.css" />I have to pay a certain sum, which I have collected in my pocket. I take the bills and coins out of my pocket and give them to the creditor in the order I find them until I have reached the total sum. This is the Riemann integral. But I can proceed differently. After I have taken all the money out of my pocket I order the bills and coins according to identical values and then I pay the several heaps one after the other to the creditor. This is my integral.
As Folland puts it, "To compute the Riemann integral of f, one partitions the domain ["a", "b"] into subintervals", while in the Lebesgue integral, "one is in effect partitioning the range of f ". The definition of the Lebesgue integral thus begins with a measure, μ. In the simplest case, the Lebesgue measure "μ"("A") of an interval "A" = ["a", "b"] is its width, "b" − "a", so that the Lebesgue integral agrees with the (proper) Riemann integral when both exist. In more complicated cases, the sets being measured can be highly fragmented, with no continuity and no resemblance to intervals.
Using the "partitioning the range of f " philosophy, the integral of a non-negative function "f" : R → R should be the sum over t of the areas between a thin horizontal strip between "y" = "t" and "y" = "t" + "dt". This area is just "μ"{ "x" : "f"("x") > "t"} "dt". Let "f"∗("t") = "μ"{ "x" : "f"("x") > "t" }. The Lebesgue integral of f is then defined by
formula_16
where the integral on the right is an ordinary improper Riemann integral ("f"∗ is a strictly decreasing positive function, and therefore has a well-defined improper Riemann integral). For a suitable class of functions (the measurable functions) this defines the Lebesgue integral.
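As a numerical illustration (a sketch, assuming f(x) = x² on [0, 1]): here μ{ x : f(x) > t } = 1 − √t for 0 ≤ t ≤ 1, and integrating this decreasing function of t recovers the familiar value 1/3.
<syntaxhighlight lang="python">
from math import sqrt

n = 1_000_000
dt = 1.0 / n
# Integrate f*(t) = measure of {x in [0, 1] : x**2 > t} = 1 - sqrt(t) over t (midpoint rule).
layer_cake = sum((1 - sqrt((k + 0.5) * dt)) * dt for k in range(n))
print(layer_cake)   # ~0.3333..., matching the ordinary integral of x**2 over [0, 1]
</syntaxhighlight>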
A general measurable function f is Lebesgue-integrable if the sum of the absolute values of the areas of the regions between the graph of f and the x-axis is finite:
formula_17
In that case, the integral is, as in the Riemannian case, the difference between the area above the x-axis and the area below the x-axis:
formula_18
where
formula_19
Other integrals.
Although the Riemann and Lebesgue integrals are the most widely used definitions of the integral, a number of others exist, including the Darboux integral, the Riemann–Stieltjes integral, the Lebesgue–Stieltjes integral, the Daniell integral, the Haar integral, the Henstock–Kurzweil integral, and the Itô integral.
Properties.
Linearity.
The collection of Riemann-integrable functions on a closed interval ["a", "b"] forms a vector space under the operations of pointwise addition and multiplication by a scalar, and the operation of integration
formula_20
is a linear functional on this vector space. Thus, the collection of integrable functions is closed under taking linear combinations, and the integral of a linear combination is the linear combination of the integrals:
formula_21
Similarly, the set of real-valued Lebesgue-integrable functions on a given measure space E with measure μ is closed under taking linear combinations and hence form a vector space, and the Lebesgue integral
formula_22
is a linear functional on this vector space, so that:
formula_23
More generally, consider the vector space of all measurable functions on a measure space ("E","μ"), taking values in a locally compact complete topological vector space V over a locally compact topological field "K", "f" : "E" → "V". Then one may define an abstract integration map assigning to each function f an element of V or the symbol "∞",
formula_24
that is compatible with linear combinations. In this situation, the linearity holds for the subspace of functions whose integral is an element of V (i.e. "finite"). The most important special cases arise when K is R, C, or a finite extension of the field Q"p" of p-adic numbers, and V is a finite-dimensional vector space over K, and when "K" = C and V is a complex Hilbert space.
Linearity, together with some natural continuity properties and normalization for a certain class of "simple" functions, may be used to give an alternative definition of the integral. This is the approach of Daniell for the case of real-valued functions on a set X, generalized by Nicolas Bourbaki to functions with values in a locally compact topological vector space. See for an axiomatic characterization of the integral.
Inequalities.
A number of general inequalities hold for Riemann-integrable functions defined on a closed and bounded interval ["a", "b"] and can be generalized to other notions of integral (Lebesgue and Daniell).
Conventions.
In this section, f is a real-valued Riemann-integrable function. The integral
formula_34
over an interval ["a", "b"] is defined if "a" < "b". This means that the upper and lower sums of the function f are evaluated on a partition "a"
"x"0 ≤ "x"1 ≤ . . . ≤ "x""n"
"b" whose values "x""i" are increasing. Geometrically, this signifies that integration takes place "left to right", evaluating f within intervals ["x" "i" , "x" "i" +1] where an interval with a higher index lies to the right of one with a lower index. The values a and b, the end-points of the interval, are called the limits of integration of f. Integrals can also be defined if "a" > "b":""
formula_35
With "a"
"b", this implies:
formula_36
The first convention is necessary in consideration of taking integrals over subintervals of ["a", "b"]; the second says that an integral taken over a degenerate interval, or a point, should be zero. One reason for the first convention is that the integrability of f on an interval ["a", "b"] implies that f is integrable on any subinterval ["c", "d"], but in particular integrals have the property that if c is any element of ["a", "b"], then:
formula_37
With the first convention, the resulting relation
formula_38
is then well-defined for any cyclic permutation of a, b, and c.
Fundamental theorem of calculus.
The "fundamental theorem of calculus" is the statement that differentiation and integration are inverse operations: if a continuous function is first integrated and then differentiated, the original function is retrieved. An important consequence, sometimes called the "second fundamental theorem of calculus", allows one to compute integrals by using an antiderivative of the function to be integrated.
First theorem.
Let f be a continuous real-valued function defined on a closed interval ["a", "b"]. Let F be the function defined, for all x in ["a", "b"], by
formula_39
Then, F is continuous on ["a", "b"], differentiable on the open interval ("a", "b"), and
formula_40
for all x in ("a", "b").
Second theorem.
Let f be a real-valued function defined on a closed interval ["a", "b"] that admits an antiderivative F on ["a", "b"]. That is, f and F are functions such that for all x in ["a", "b"],
formula_41
If f is integrable on ["a", "b"] then
formula_42
Extensions.
Improper integrals.
A "proper" Riemann integral assumes the integrand is defined and finite on a closed and bounded interval, bracketed by the limits of integration. An improper integral occurs when one or more of these conditions is not satisfied. In some cases such integrals may be defined by considering the limit of a sequence of proper Riemann integrals on progressively larger intervals.
If the interval is unbounded, for instance at its upper end, then the improper integral is the limit as that endpoint goes to infinity:
formula_43
If the integrand is only defined or finite on a half-open interval, for instance ("a", "b"], then again a limit may provide a finite result:
formula_44
That is, the improper integral is the limit of proper integrals as one endpoint of the interval of integration approaches either a specified real number, or ∞, or −∞. In more complicated cases, limits are required at both endpoints, or at interior points.
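A minimal numerical sketch of this limiting process, for the assumed integrand 1/x² on [1, ∞) (exact value 1), using scipy.integrate.quad for the proper integrals:
```python
from scipy.integrate import quad

# Sketch: the improper integral of 1/x**2 over [1, infinity) as the limit of
# proper integrals over [1, b]; the exact value is 1.
for b in (10.0, 100.0, 1000.0):
    value, _ = quad(lambda x: 1.0 / x**2, 1.0, b)   # proper integral on [1, b]
    print(b, value)                                 # approaches 1 as b grows
```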
Multiple integration.
Just as the definite integral of a positive function of one variable represents the area of the region between the graph of the function and the "x"-axis, the "double integral" of a positive function of two variables represents the volume of the region between the surface defined by the function and the plane that contains its domain. For example, a function in two dimensions depends on two real variables, "x" and "y", and the integral of a function "f" over the rectangle "R" given as the Cartesian product of two intervals formula_45 can be written
formula_46
where the differential "dA" indicates that integration is taken with respect to area. This double integral can be defined using Riemann sums, and represents the (signed) volume under the graph of "z"
"f"("x","y") over the domain "R". Under suitable conditions (e.g., if "f" is continuous), Fubini's theorem states that this integral can be expressed as an equivalent iterated integral
formula_47
This reduces the problem of computing a double integral to computing one-dimensional integrals. Because of this, another notation for the integral over "R" uses a double integral sign:
formula_48
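A short sketch of Fubini's theorem on an assumed example, f(x, y) = x·y on R = [0, 1] × [0, 2] (exact value 1), computing the iterated integral with nested one-dimensional quadratures and comparing it with a direct two-dimensional quadrature:
```python
from scipy.integrate import quad, dblquad

# Sketch: double integral of f(x, y) = x*y over the rectangle [0, 1] x [0, 2].
f = lambda x, y: x * y

# iterated integral: integrate in y for each fixed x, then integrate the result in x
iterated, _ = quad(lambda x: quad(lambda y: f(x, y), 0.0, 2.0)[0], 0.0, 1.0)

# direct double integral over R (dblquad expects the integrand as func(y, x))
direct, _ = dblquad(lambda y, x: f(x, y), 0.0, 1.0, lambda x: 0.0, lambda x: 2.0)

print(iterated, direct)   # both 1.0 up to numerical error
```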
Integration over more general domains is possible. The integral of a function "f", with respect to volume, over an "n-"dimensional region "D" of formula_49 is denoted by symbols such as:
formula_50
Line integrals and surface integrals.
The concept of an integral can be extended to more general domains of integration, such as curved lines and surfaces inside higher-dimensional spaces. Such integrals are known as line integrals and surface integrals respectively. These have important applications in physics, as when dealing with vector fields.
A "line integral" (sometimes called a "path integral") is an integral where the function to be integrated is evaluated along a curve. Various different line integrals are in use. In the case of a closed curve it is also called a "contour integral".
The function to be integrated may be a scalar field or a vector field. The value of the line integral is the sum of values of the field at all points on the curve, weighted by some scalar function on the curve (commonly arc length or, for a vector field, the scalar product of the vector field with a differential vector in the curve). This weighting distinguishes the line integral from simpler integrals defined on intervals. Many simple formulas in physics have natural continuous analogs in terms of line integrals; for example, the fact that work is equal to force, F, multiplied by displacement, s, may be expressed (in terms of vector quantities) as:
formula_51
For an object moving along a path "C" in a vector field F such as an electric field or gravitational field, the total work done by the field on the object is obtained by summing up the differential work done in moving from s to s + "d"s. This gives the line integral
formula_52
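A numerical sketch of the work integral for an assumed field and path, F(x, y) = (y, x) along r(t) = (t, t²) for 0 ≤ t ≤ 1; since F is the gradient of xy, the exact work is 1:
```python
from scipy.integrate import quad

# Sketch: W = integral over C of F . ds, evaluated via the parametrization r(t).
def integrand(t):
    x, y = t, t**2                 # the point r(t)
    dxdt, dydt = 1.0, 2.0 * t      # the tangent vector r'(t)
    Fx, Fy = y, x                  # the field evaluated at r(t)
    return Fx * dxdt + Fy * dydt   # F(r(t)) . r'(t)

W, _ = quad(integrand, 0.0, 1.0)
print(W)   # approximately 1.0
```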
A "surface integral" generalizes double integrals to integration over a surface (which may be a curved set in space); it can be thought of as the double integral analog of the line integral. The function to be integrated may be a scalar field or a vector field. The value of the surface integral is the sum of the field at all points on the surface. This can be achieved by splitting the surface into surface elements, which provide the partitioning for Riemann sums.
For an example of applications of surface integrals, consider a vector field v on a surface "S"; that is, for each point "x" in "S", v("x") is a vector. Imagine that a fluid flows through "S", such that v("x") determines the velocity of the fluid at x. The flux is defined as the quantity of fluid flowing through "S" per unit time. To find the flux, one needs to take the dot product of v with the unit surface normal to "S" at each point, which gives a scalar field that is integrated over the surface:
formula_53
The fluid flux in this example may be from a physical fluid such as water or air, or from electrical or magnetic flux. Thus surface integrals have applications in physics, particularly with the classical theory of electromagnetism.
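A numerical sketch of such a flux integral for an assumed field, v(x, y, z) = (x, y, z) through the unit sphere; there v·n = 1 at every surface point, so the flux equals the surface area 4π:
```python
import numpy as np
from scipy.integrate import dblquad

# Sketch: flux of v(x, y, z) = (x, y, z) through the unit sphere, parametrized by
# spherical angles theta (polar) and phi (azimuthal).
def integrand(phi, theta):
    # point on the unit sphere, which is also the outward unit normal there
    n = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    v = n                                        # the field evaluated on the surface
    return float(np.dot(v, n)) * np.sin(theta)   # (v . n) dS, with dS = sin(theta) dtheta dphi

flux, _ = dblquad(integrand, 0.0, np.pi, lambda t: 0.0, lambda t: 2.0 * np.pi)
print(flux, 4.0 * np.pi)   # both approximately 12.566
```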
Contour integrals.
In complex analysis, the integrand is a complex-valued function of a complex variable z instead of a real function of a real variable x. When a complex function is integrated along a curve formula_54 in the complex plane, the integral is denoted as follows
formula_55
This is known as a contour integral.
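A numerical sketch of a contour integral for the assumed example f(z) = 1/z around the unit circle, whose exact value is 2πi:
```python
import numpy as np
from scipy.integrate import quad

# Sketch: contour integral of 1/z around the unit circle z(t) = exp(i t), 0 <= t <= 2*pi.
def integrand(t):
    z = np.exp(1j * t)
    dzdt = 1j * np.exp(1j * t)
    return (1.0 / z) * dzdt        # f(z(t)) z'(t), a complex number

# quad handles real integrands, so integrate the real and imaginary parts separately
re, _ = quad(lambda t: integrand(t).real, 0.0, 2.0 * np.pi)
im, _ = quad(lambda t: integrand(t).imag, 0.0, 2.0 * np.pi)
print(complex(re, im))             # approximately 0 + 6.2832j = 2*pi*i
```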
Integrals of differential forms.
A differential form is a mathematical concept in the fields of multivariable calculus, differential topology, and tensors. Differential forms are organized by degree. For example, a one-form is a weighted sum of the differentials of the coordinates, such as:
formula_56
where "E", "F", "G" are functions in three dimensions. A differential one-form can be integrated over an oriented path, and the resulting integral is just another way of writing a line integral. Here the basic differentials "dx", "dy", "dz" measure infinitesimal oriented lengths parallel to the three coordinate axes.
A differential two-form is a sum of the form
formula_57
Here the basic two-forms formula_58 measure oriented areas parallel to the coordinate two-planes. The symbol formula_59 denotes the wedge product, which is similar to the cross product in the sense that the wedge product of two forms representing oriented lengths represents an oriented area. A two-form can be integrated over an oriented surface, and the resulting integral is equivalent to the surface integral giving the flux of formula_60.
Unlike the cross product and three-dimensional vector calculus, the wedge product and the calculus of differential forms make sense in arbitrary dimension and on more general manifolds (curves, surfaces, and their higher-dimensional analogs). The exterior derivative plays the role of the gradient and curl of vector calculus, and Stokes' theorem simultaneously generalizes the three theorems of vector calculus: the divergence theorem, Green's theorem, and the Kelvin–Stokes theorem.
Summations.
The discrete equivalent of integration is summation. Summations and integrals can be put on the same foundations using the theory of Lebesgue integrals or time-scale calculus.
Functional integrals.
An integration that is performed not over a variable (or, in physics, over a space or time dimension), but over a space of functions, is referred to as a functional integral.
Applications.
Integrals are used extensively in many areas. For example, in probability theory, integrals are used to determine the probability of some random variable falling within a certain range. Moreover, the integral of a probability density function over its entire domain must equal 1, which provides a test of whether a function with no negative values could be a density function or not.
Integrals can be used for computing the area of a two-dimensional region that has a curved boundary, as well as computing the volume of a three-dimensional object that has a curved boundary. The area of a two-dimensional region can be calculated using the aforementioned definite integral. The volume of a three-dimensional object such as a disc or washer can be computed by disc integration using the equation for the volume of a cylinder, formula_61, where formula_62 is the radius. In the case of a simple disc created by rotating a curve about the "x"-axis, the radius is given by "f"("x"), and its height is the differential "dx". Using an integral with bounds "a" and "b", the volume of the disc is equal to:
formula_63
Integrals are also used in physics, in areas like kinematics to find quantities like displacement, time, and velocity. For example, in rectilinear motion, the displacement of an object over the time interval formula_64 is given by
formula_65
where formula_66 is the velocity expressed as a function of time. The work done by a force formula_67 (given as a function of position) from an initial position formula_68 to a final position formula_69 is:
formula_70
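The formulas in this section can be checked numerically; the following sketch uses assumed example functions (a semicircular profile for the disc volume, v(t) = 3t² for the displacement, and an illustrative spring force F(x) = kx for the work):
```python
import math
from scipy.integrate import quad

# Sketch: disc-integration volume pi * integral of f(x)^2 for f(x) = sqrt(1 - x^2)
# (the unit ball, volume 4*pi/3), displacement for v(t) = 3t^2 on [0, 2] (exactly 8),
# and work for a spring force F(x) = k*x from 0 to L (k*L^2 / 2).
volume = math.pi * quad(lambda x: 1.0 - x**2, -1.0, 1.0)[0]   # ~ 4.18879
displacement = quad(lambda t: 3.0 * t**2, 0.0, 2.0)[0]        # ~ 8.0

k, L = 50.0, 0.1                                              # illustrative spring constant and stretch
work = quad(lambda x: k * x, 0.0, L)[0]                       # ~ 0.25 = k*L^2/2
print(volume, displacement, work)
```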
Integrals are also used in thermodynamics, where thermodynamic integration is used to calculate the difference in free energy between two given states.
Computation.
Analytical.
The most basic technique for computing definite integrals of one real variable is based on the fundamental theorem of calculus. Let "f"("x") be the function of x to be integrated over a given interval ["a", "b"]. Then, find an antiderivative of f; that is, a function F such that "F"′ = "f" on the interval. Provided the integrand and integral have no singularities on the path of integration, by the fundamental theorem of calculus,
formula_71
Sometimes it is necessary to use one of the many techniques that have been developed to evaluate integrals. Most of these techniques rewrite one integral as a different one which is hopefully more tractable. Techniques include integration by substitution, integration by parts, integration by trigonometric substitution, and integration by partial fractions.
Alternative methods exist to compute more complex integrals. Many nonelementary integrals can be expanded in a Taylor series and integrated term by term. Occasionally, the resulting infinite series can be summed analytically. The method of convolution using Meijer G-functions can also be used, assuming that the integrand can be written as a product of Meijer G-functions. There are also many less common ways of calculating definite integrals; for instance, Parseval's identity can be used to transform an integral over a rectangular region into an infinite sum. Occasionally, an integral can be evaluated by a trick; for an example of this, see Gaussian integral.
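A sketch of term-by-term integration of a Taylor series for the assumed nonelementary example ∫ exp(−x²) dx from 0 to 1 (approximately 0.746824):
```python
import math
from scipy.integrate import quad

# Sketch: exp(-x^2) = sum_n (-1)^n x^(2n) / n!, so integrating term by term over
# [0, 1] gives sum_n (-1)^n / (n! * (2n + 1)).
series_value = sum((-1)**n / (math.factorial(n) * (2 * n + 1)) for n in range(20))

# numerical check with a quadrature routine
numeric_value, _ = quad(lambda x: math.exp(-x**2), 0.0, 1.0)
print(series_value, numeric_value)   # both approximately 0.746824
```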
Computations of volumes of solids of revolution can usually be done with disk integration or shell integration.
Specific results which have been worked out by various techniques are collected in the list of integrals.
Symbolic.
Many problems in mathematics, physics, and engineering involve integration where an explicit formula for the integral is desired. Extensive tables of integrals have been compiled and published over the years for this purpose. With the spread of computers, many professionals, educators, and students have turned to computer algebra systems that are specifically designed to perform difficult or tedious tasks, including integration. Symbolic integration has been one of the motivations for the development of the first such systems, like Macsyma and Maple.
A major mathematical difficulty in symbolic integration is that in many cases, a relatively simple function does not have an integral that can be expressed in closed form involving only elementary functions, including rational and exponential functions, the logarithm, trigonometric functions and inverse trigonometric functions, and the operations of multiplication and composition. The Risch algorithm provides a general criterion to determine whether the antiderivative of an elementary function is elementary and to compute the integral if it is elementary. However, functions with closed expressions for their antiderivatives are the exception, and consequently, computer algebra systems have no hope of being able to find an antiderivative for a randomly constructed elementary function. On the positive side, if the 'building blocks' for antiderivatives are fixed in advance, it may still be possible to decide whether the antiderivative of a given function can be expressed using these blocks and operations of multiplication and composition and to find the symbolic answer whenever it exists. The Risch algorithm, implemented in Mathematica, Maple and other computer algebra systems, does just that for functions and antiderivatives built from rational functions, radicals, logarithm, and exponential functions.
Some special integrands occur often enough to warrant special study. In particular, it may be useful to have, in the set of antiderivatives, the special functions (like the Legendre functions, the hypergeometric function, the gamma function, the incomplete gamma function and so on). Extending Risch's algorithm to include such functions is possible but challenging and has been an active research subject.
More recently, a new approach has emerged, using "D"-finite functions, which are the solutions of linear differential equations with polynomial coefficients. Most of the elementary and special functions are "D"-finite, and the integral of a "D"-finite function is also a "D"-finite function. This provides an algorithm to express the antiderivative of a "D"-finite function as the solution of a differential equation. This theory also allows one to compute the definite integral of a "D"-finite function as the sum of a series given by the first coefficients and provides an algorithm to compute any coefficient.
Rule-based integration systems facilitate integration. Rubi, a computer algebra system rule-based integrator, pattern matches an extensive system of symbolic integration rules to integrate a wide variety of integrands. This system uses over 6600 integration rules to compute integrals. The method of brackets is a generalization of Ramanujan's master theorem that can be applied to a wide range of univariate and multivariate integrals. A set of rules are applied to the coefficients and exponential terms of the integrand's power series expansion to determine the integral. The method is closely related to the Mellin transform.
Numerical.
Definite integrals may be approximated using several methods of numerical integration. The rectangle method relies on dividing the region under the function into a series of rectangles corresponding to function values and multiplies by the step width to find the sum. A better approach, the trapezoidal rule, replaces the rectangles used in a Riemann sum with trapezoids. The trapezoidal rule weights the first and last values by one half, then multiplies by the step width to obtain a better approximation. The idea behind the trapezoidal rule, that more accurate approximations to the function yield better approximations to the integral, can be carried further: Simpson's rule approximates the integrand by a piecewise quadratic function.
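A sketch comparing these three rules on the assumed example ∫ eˣ dx from 0 to 1, whose exact value is e − 1 ≈ 1.718282, using ten subintervals:
```python
import numpy as np

# Sketch: left-endpoint rectangles, the trapezoidal rule, and Simpson's rule
# applied to exp(x) on [0, 1] with n = 10 subintervals (n even for Simpson).
f = np.exp
a, b, n = 0.0, 1.0, 10
x = np.linspace(a, b, n + 1)
h = (b - a) / n

rectangle = h * np.sum(f(x[:-1]))                                  # left-endpoint rectangles
trapezoid = h * (0.5 * f(x[0]) + np.sum(f(x[1:-1])) + 0.5 * f(x[-1]))
simpson = (h / 3.0) * (f(x[0]) + 4.0 * np.sum(f(x[1:-1:2]))
                       + 2.0 * np.sum(f(x[2:-1:2])) + f(x[-1]))

print(rectangle, trapezoid, simpson)   # increasingly close to 1.718282
```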
Riemann sums, the trapezoidal rule, and Simpson's rule are examples of a family of quadrature rules called the Newton–Cotes formulas. The degree n Newton–Cotes quadrature rule approximates the function on each subinterval by a degree "n" polynomial. This polynomial is chosen to interpolate the values of the function on the interval. Higher degree Newton–Cotes approximations can be more accurate, but they require more function evaluations, and they can suffer from numerical inaccuracy due to Runge's phenomenon. One solution to this problem is Clenshaw–Curtis quadrature, in which the integrand is approximated by expanding it in terms of Chebyshev polynomials.
Romberg's method halves the step widths incrementally, giving trapezoid approximations denoted by "T"("h"0), "T"("h"1), and so on, where "h""k"+1 is half of "h""k". For each new step size, only half the new function values need to be computed; the others carry over from the previous size. It then interpolates a polynomial through the approximations and extrapolates to "T"(0). Gaussian quadrature evaluates the function at the roots of a set of orthogonal polynomials. An n-point Gaussian method is exact for polynomials of degree up to 2"n" − 1.
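A sketch of Gaussian quadrature using the Gauss–Legendre nodes and weights available in NumPy; with three points the rule is exact for the assumed degree-four example below:
```python
import numpy as np

# Sketch: a 3-point Gauss-Legendre rule is exact for polynomials of degree <= 5,
# so it integrates p(x) = 5x^4 + x^2 over [-1, 1] exactly (value 2 + 2/3).
nodes, weights = np.polynomial.legendre.leggauss(3)
p = lambda x: 5.0 * x**4 + x**2
gauss = np.sum(weights * p(nodes))
print(gauss, 2.0 + 2.0 / 3.0)   # both 2.666666...
```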
The computation of higher-dimensional integrals (for example, volume calculations) makes important use of such alternatives as Monte Carlo integration.
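A sketch of Monte Carlo integration for an assumed three-dimensional example, the integral of xyz over the unit cube, whose exact value is 1/8:
```python
import numpy as np

# Sketch: Monte Carlo estimate of the integral of f(x, y, z) = x*y*z over [0, 1]^3
# by averaging f at uniformly random sample points (the cube has volume 1).
rng = np.random.default_rng(0)
samples = rng.random((1_000_000, 3))          # uniform points in the unit cube
estimate = np.mean(np.prod(samples, axis=1))  # average of f times the cube's volume
print(estimate)                               # approximately 0.125
```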
Mechanical.
The area of an arbitrary two-dimensional shape can be determined using a measuring instrument called a planimeter. The volume of irregular objects can be measured with precision by the fluid displaced as the object is submerged.
Geometrical.
Area can sometimes be found via geometrical compass-and-straightedge constructions of an equivalent square.
Integration by differentiation.
Kempf, Jackson and Morales demonstrated mathematical relations that allow an integral to be calculated by means of differentiation. Their calculus involves the Dirac delta function and the partial derivative operator formula_72. This can also be applied to functional integrals, allowing them to be computed by functional differentiation.
Examples.
Using the fundamental theorem of calculus.
The fundamental theorem of calculus allows straightforward calculations of basic functions:
formula_73
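The same example can be checked numerically; the sketch below evaluates the antiderivative at the limits and compares the result with a quadrature routine:
```python
import math
from scipy.integrate import quad

# Sketch: integral of sin(x) over [0, pi] via the antiderivative F(x) = -cos(x)
# and, as a check, via numerical quadrature.
F = lambda x: -math.cos(x)
by_antiderivative = F(math.pi) - F(0.0)
by_quadrature, _ = quad(math.sin, 0.0, math.pi)
print(by_antiderivative, by_quadrature)   # both 2.0
```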
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "y=x^k"
},
{
"math_id": 1,
"text": "\\int x^k \\, dx"
},
{
"math_id": 2,
"text": "k"
},
{
"math_id": 3,
"text": "\\int_{a}^{b} f(x) \\,dx."
},
{
"math_id": 4,
"text": "\\int f(x) \\,dx,"
},
{
"math_id": 5,
"text": "\\int_a^b (c_1f+c_2g) = c_1\\int_a^b f + c_2\\int_a^b g "
},
{
"math_id": 6,
"text": "\\sqrt{x}"
},
{
"math_id": 7,
"text": "\\textstyle \\sqrt{\\frac{1}{5}}\\left(\\frac{1}{5}-0\\right)+\\sqrt{\\frac{2}{5}}\\left(\\frac{2}{5}-\\frac{1}{5}\\right)+\\cdots+\\sqrt{\\frac{5}{5}}\\left(\\frac{5}{5}-\\frac{4}{5}\\right)\\approx 0.7497,"
},
{
"math_id": 8,
"text": "\\int_{0}^{1} \\sqrt{x} \\,dx = \\frac{2}{3},"
},
{
"math_id": 9,
"text": " a = x_0 \\le t_1 \\le x_1 \\le t_2 \\le x_2 \\le \\cdots \\le x_{n-1} \\le t_n \\le x_n = b . \\,\\!"
},
{
"math_id": 10,
"text": "\\sum_{i=1}^n f(t_i) \\, \\Delta_i ; "
},
{
"math_id": 11,
"text": "\\varepsilon > 0"
},
{
"math_id": 12,
"text": "\\delta > 0"
},
{
"math_id": 13,
"text": "[a, b]"
},
{
"math_id": 14,
"text": "\\delta"
},
{
"math_id": 15,
"text": "\\left| S - \\sum_{i=1}^n f(t_i) \\, \\Delta_i \\right| < \\varepsilon."
},
{
"math_id": 16,
"text": "\\int f = \\int_0^\\infty f^*(t)\\,dt"
},
{
"math_id": 17,
"text": "\\int_E |f|\\,d\\mu < + \\infty."
},
{
"math_id": 18,
"text": "\\int_E f \\,d\\mu = \\int_E f^+ \\,d\\mu - \\int_E f^- \\,d\\mu"
},
{
"math_id": 19,
"text": "\\begin{alignat}{3}\n & f^+(x) &&{}={} \\max \\{f(x),0\\} &&{}={} \\begin{cases}\n f(x), & \\text{if } f(x) > 0, \\\\\n 0, & \\text{otherwise,}\n \\end{cases}\\\\\n & f^-(x) &&{}={} \\max \\{-f(x),0\\} &&{}={} \\begin{cases}\n -f(x), & \\text{if } f(x) < 0, \\\\\n 0, & \\text{otherwise.}\n \\end{cases}\n\\end{alignat}"
},
{
"math_id": 20,
"text": " f \\mapsto \\int_a^b f(x) \\; dx"
},
{
"math_id": 21,
"text": " \\int_a^b (\\alpha f + \\beta g)(x) \\, dx = \\alpha \\int_a^b f(x) \\,dx + \\beta \\int_a^b g(x) \\, dx. \\,"
},
{
"math_id": 22,
"text": " f\\mapsto \\int_E f \\, d\\mu "
},
{
"math_id": 23,
"text": " \\int_E (\\alpha f + \\beta g) \\, d\\mu = \\alpha \\int_E f \\, d\\mu + \\beta \\int_E g \\, d\\mu. "
},
{
"math_id": 24,
"text": " f\\mapsto\\int_E f \\,d\\mu, \\,"
},
{
"math_id": 25,
"text": " m(b - a) \\leq \\int_a^b f(x) \\, dx \\leq M(b - a). "
},
{
"math_id": 26,
"text": " \\int_a^b f(x) \\, dx \\leq \\int_a^b g(x) \\, dx. "
},
{
"math_id": 27,
"text": " \\int_a^b f(x) \\, dx < \\int_a^b g(x) \\, dx. "
},
{
"math_id": 28,
"text": " \\int_c^d f(x) \\, dx \\leq \\int_a^b f(x) \\, dx. "
},
{
"math_id": 29,
"text": "\n (fg)(x)= f(x) g(x), \\; f^2 (x) = (f(x))^2, \\; |f| (x) = |f(x)|."
},
{
"math_id": 30,
"text": "\\left| \\int_a^b f(x) \\, dx \\right| \\leq \\int_a^b | f(x) | \\, dx. "
},
{
"math_id": 31,
"text": "\\left( \\int_a^b (fg)(x) \\, dx \\right)^2 \\leq \\left( \\int_a^b f(x)^2 \\, dx \\right) \\left( \\int_a^b g(x)^2 \\, dx \\right). "
},
{
"math_id": 32,
"text": "\\left|\\int f(x)g(x)\\,dx\\right| \\leq\n\\left(\\int \\left|f(x)\\right|^p\\,dx \\right)^{1/p} \\left(\\int\\left|g(x)\\right|^q\\,dx\\right)^{1/q}."
},
{
"math_id": 33,
"text": "\\left(\\int \\left|f(x)+g(x)\\right|^p\\,dx \\right)^{1/p} \\leq\n\\left(\\int \\left|f(x)\\right|^p\\,dx \\right)^{1/p} +\n\\left(\\int \\left|g(x)\\right|^p\\,dx \\right)^{1/p}."
},
{
"math_id": 34,
"text": " \\int_a^b f(x) \\, dx "
},
{
"math_id": 35,
"text": "\\int_a^b f(x) \\, dx = - \\int_b^a f(x) \\, dx. "
},
{
"math_id": 36,
"text": "\\int_a^a f(x) \\, dx = 0. "
},
{
"math_id": 37,
"text": " \\int_a^b f(x) \\, dx = \\int_a^c f(x) \\, dx + \\int_c^b f(x) \\, dx."
},
{
"math_id": 38,
"text": "\\begin{align}\n \\int_a^c f(x) \\, dx &{}= \\int_a^b f(x) \\, dx - \\int_c^b f(x) \\, dx \\\\\n &{} = \\int_a^b f(x) \\, dx + \\int_b^c f(x) \\, dx\n\\end{align}"
},
{
"math_id": 39,
"text": "F(x) = \\int_a^x f(t)\\, dt."
},
{
"math_id": 40,
"text": "F'(x) = f(x)"
},
{
"math_id": 41,
"text": "f(x) = F'(x)."
},
{
"math_id": 42,
"text": "\\int_a^b f(x)\\,dx = F(b) - F(a)."
},
{
"math_id": 43,
"text": "\\int_a^\\infty f(x)\\,dx = \\lim_{b \\to \\infty} \\int_a^b f(x)\\,dx."
},
{
"math_id": 44,
"text": "\\int_a^b f(x)\\,dx = \\lim_{\\varepsilon \\to 0} \\int_{a+\\epsilon}^{b} f(x)\\,dx."
},
{
"math_id": 45,
"text": "R=[a,b]\\times [c,d]"
},
{
"math_id": 46,
"text": "\\int_R f(x,y)\\,dA"
},
{
"math_id": 47,
"text": "\\int_a^b\\left[\\int_c^d f(x,y)\\,dy\\right]\\,dx."
},
{
"math_id": 48,
"text": "\\iint_R f(x,y) \\, dA."
},
{
"math_id": 49,
"text": "\\mathbb{R}^n"
},
{
"math_id": 50,
"text": "\\int_D f(\\mathbf x) d^n\\mathbf x \\ = \\int_D f\\,dV."
},
{
"math_id": 51,
"text": "W=\\mathbf F\\cdot\\mathbf s."
},
{
"math_id": 52,
"text": "W=\\int_C \\mathbf F\\cdot d\\mathbf s."
},
{
"math_id": 53,
"text": "\\int_S {\\mathbf v}\\cdot \\,d{\\mathbf S}."
},
{
"math_id": 54,
"text": "\\gamma"
},
{
"math_id": 55,
"text": "\\int_\\gamma f(z)\\,dz."
},
{
"math_id": 56,
"text": "E(x,y,z)\\,dx + F(x,y,z)\\,dy + G(x,y,z)\\, dz"
},
{
"math_id": 57,
"text": "G(x,y,z) \\, dx\\wedge dy + E(x,y,z) \\, dy\\wedge dz + F(x,y,z) \\, dz\\wedge dx."
},
{
"math_id": 58,
"text": "dx\\wedge dy, dz\\wedge dx, dy\\wedge dz"
},
{
"math_id": 59,
"text": "\\wedge"
},
{
"math_id": 60,
"text": "E\\mathbf i+F\\mathbf j+G\\mathbf k"
},
{
"math_id": 61,
"text": "\\pi r^2 h "
},
{
"math_id": 62,
"text": "r"
},
{
"math_id": 63,
"text": "\\pi \\int_a^b f^2 (x) \\, dx."
},
{
"math_id": 64,
"text": "[a,b]"
},
{
"math_id": 65,
"text": "x(b)-x(a) = \\int_a^b v(t) \\,dt,"
},
{
"math_id": 66,
"text": "v(t)"
},
{
"math_id": 67,
"text": "F(x)"
},
{
"math_id": 68,
"text": "A"
},
{
"math_id": 69,
"text": "B"
},
{
"math_id": 70,
"text": "W_{A\\rightarrow B} = \\int_A^B F(x)\\,dx."
},
{
"math_id": 71,
"text": "\\int_a^b f(x)\\,dx=F(b)-F(a)."
},
{
"math_id": 72,
"text": "\\partial_x"
},
{
"math_id": 73,
"text": "\\int_0^\\pi \\sin(x) \\,dx = -\\cos(x) \\big|^{x = \\pi}_{x = 0} = -\\cos(\\pi) - \\big(-\\cos(0)\\big) = 2."
}
] |
https://en.wikipedia.org/wiki?curid=15532
|
1553317
|
Optical medium
|
Medium through which electromagnetic waves propagate
In optics, an optical medium is a material through which light and other electromagnetic waves propagate. It is a form of transmission medium. The permittivity and permeability of the medium define how electromagnetic waves propagate in it.
Properties.
The optical medium has an "intrinsic impedance", given by
formula_0
where formula_1 and formula_2 are the electric field and magnetic field, respectively.
In a region with no electrical conductivity, the expression simplifies to:
formula_3
For example, in free space the intrinsic impedance is called the characteristic impedance of vacuum, denoted "Z"0, and
formula_4
Waves propagate through a medium with velocity formula_5, where formula_6 is the frequency and formula_7 is the wavelength of the electromagnetic waves. This equation also may be put in the form
formula_8
where formula_9 is the angular frequency of the wave and formula_10 is the wavenumber of the wave. In electrical engineering, the symbol formula_11, called the "phase constant", is often used instead of formula_10.
The propagation velocity of electromagnetic waves in free space, an idealized standard reference state (like absolute zero for temperature), is conventionally denoted by "c"0:
formula_12
where formula_13 is the electric constant and formula_14 is the magnetic constant.
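Evaluating these formulas with the vacuum constants (here taken from scipy.constants) gives the familiar numerical values; a minimal sketch:
```python
from scipy.constants import epsilon_0, mu_0

# Sketch: characteristic impedance of vacuum and propagation velocity in free space
# computed from the electric and magnetic constants.
Z0 = (mu_0 / epsilon_0) ** 0.5        # approximately 376.73 ohms
c0 = 1.0 / (epsilon_0 * mu_0) ** 0.5  # approximately 2.9979e8 m/s
print(Z0, c0)
```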
For a general introduction, see Serway. For a discussion of synthetic media, see Joannopoulos.
Notes and references.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\eta = {E_x \\over H_y}"
},
{
"math_id": 1,
"text": "E_x"
},
{
"math_id": 2,
"text": "H_y"
},
{
"math_id": 3,
"text": "\\eta = \\sqrt{\\mu \\over \\varepsilon}\\ ."
},
{
"math_id": 4,
"text": "Z_0 = \\sqrt{\\mu_0 \\over \\varepsilon_0}\\ ."
},
{
"math_id": 5,
"text": "c_w = \\nu \\lambda "
},
{
"math_id": 6,
"text": "\\nu"
},
{
"math_id": 7,
"text": "\\lambda"
},
{
"math_id": 8,
"text": " c_w = {\\omega \\over k}\\ ,"
},
{
"math_id": 9,
"text": "\\omega"
},
{
"math_id": 10,
"text": "k"
},
{
"math_id": 11,
"text": "\\beta"
},
{
"math_id": 12,
"text": "c_0 = {1 \\over \\sqrt{\\varepsilon_0 \\mu_0}}\\ ,"
},
{
"math_id": 13,
"text": "\\varepsilon_0"
},
{
"math_id": 14,
"text": "~ \\mu_0 \\ "
}
] |
https://en.wikipedia.org/wiki?curid=1553317
|
15533279
|
Görtler vortices
|
In fluid dynamics, Görtler vortices are secondary flows that appear in a boundary layer flow along a concave wall. If the boundary layer is thin compared to the radius of curvature of the wall, the pressure remains constant across the boundary layer. On the other hand, if the boundary layer thickness is comparable to the radius of curvature, the centrifugal action creates a pressure variation across the boundary layer. This leads to the centrifugal instability (Görtler instability) of the boundary layer and consequent formation of Görtler vortices.
Görtler number.
The onset of Görtler vortices can be predicted using the dimensionless number called Görtler number (G). It is the ratio of centrifugal effects to the viscous effects in the boundary layer and is defined as
formula_0
where
formula_1 = external velocity
formula_2 = momentum thickness
formula_3 = kinematic viscosity
formula_4 = radius of curvature of the wall
Görtler instability occurs when G exceeds about 0.3.
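A minimal sketch of this criterion as a function; the numerical inputs below are illustrative only, not taken from the article:
```python
# Sketch: the Görtler number G = (U_e * theta / nu) * sqrt(theta / R), with an
# instability check against the approximate threshold G ~ 0.3.
def gortler_number(U_e, theta, nu, R):
    """Ratio of centrifugal to viscous effects in a boundary layer on a concave wall."""
    return (U_e * theta / nu) * (theta / R) ** 0.5

# illustrative values: external velocity, momentum thickness, kinematic viscosity, wall radius
G = gortler_number(U_e=10.0, theta=1e-3, nu=1.5e-5, R=0.5)
print(G, "unstable" if G > 0.3 else "stable")
```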
Other instances.
A similar phenomenon arising from the same centrifugal action is sometimes observed in rotational flows which do not follow a curved wall, such as the rib vortices seen in the wakes of cylinders and generated behind moving structures.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\n\\mathrm{G} = \\frac{U_e \\theta}{\\nu} \\left( \\frac{\\theta}{R} \\right)^{1/2}\n"
},
{
"math_id": 1,
"text": "U_e"
},
{
"math_id": 2,
"text": "\\theta"
},
{
"math_id": 3,
"text": "\\nu"
},
{
"math_id": 4,
"text": "R"
}
] |
https://en.wikipedia.org/wiki?curid=15533279
|
15536340
|
Post's lattice
|
In logic and universal algebra, Post's lattice denotes the lattice of all clones on a two-element set {0, 1}, ordered by inclusion. It is named for Emil Post, who published a complete description of the lattice in 1941. The relative simplicity of Post's lattice is in stark contrast to the lattice of clones on a three-element (or larger) set, which has the cardinality of the continuum, and a complicated inner structure. A modern exposition of Post's result can be found in Lau (2006).
Basic concepts.
A Boolean function, or logical connective, is an "n"-ary operation "f": 2"n" → 2 for some "n" ≥ 1, where 2 denotes the two-element set {0, 1}. Particular Boolean functions are the projections
formula_0
and given an "m"-ary function "f", and "n"-ary functions "g"1, ..., "g""m", we can construct another "n"-ary function
formula_1
called their composition. A set of functions closed under composition, and containing all projections, is called a clone.
Let "B" be a set of connectives. The functions which can be defined by a formula using propositional variables and connectives from "B" form a clone ["B"], indeed it is the smallest clone which includes "B". We call ["B"] the clone "generated" by "B", and say that "B" is the "basis" of ["B"]. For example, [¬, ∧] are all Boolean functions, and [0, 1, ∧, ∨] are the monotone functions.
We use the operations ¬, N"p", (negation), ∧, K"pq", (conjunction or meet), ∨, A"pq", (disjunction or join), →, C"pq", (implication), ↔, E"pq", (biconditional), +, J"pq" (exclusive disjunction or Boolean ring addition), ↛, L"pq", (nonimplication), ?: (the ternary ) and the constant unary functions 0 and 1. Moreover, we need the threshold functions
formula_2
For example, th1"n" is the large disjunction of all the variables "x""i", and th"n""n" is the large conjunction. Of particular importance is the majority function
formula_3
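A small sketch (not part of the article) of the threshold functions and the majority function on the two-element set, checking maj against the formula above on all eight inputs:
```python
from itertools import product

# Sketch: the threshold function th^n_k and the majority function maj = th^3_2.
def th(k, xs):
    """1 if at least k of the arguments equal 1, else 0."""
    return 1 if sum(xs) >= k else 0

maj = lambda x, y, z: th(2, (x, y, z))

for x, y, z in product((0, 1), repeat=3):
    # maj agrees with (x AND y) OR (x AND z) OR (y AND z) on every truth-assignment
    assert maj(x, y, z) == (x & y) | (x & z) | (y & z)
print("maj agrees with (x∧y)∨(x∧z)∨(y∧z) on all inputs")
```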
We denote elements of 2"n" (i.e., truth-assignments) as vectors: a = ("a"1, ..., "a""n"). The set 2"n" carries a natural product Boolean algebra structure. That is, ordering, meets, joins, and other operations on "n"-ary truth assignments are defined pointwise:
formula_4
formula_5
Naming of clones.
Intersection of an arbitrary number of clones is again a clone. It is convenient to denote intersection of clones by simple juxtaposition, i.e., the clone "C"1 ∩ "C"2 ∩ ... ∩ "C""k" is denoted by "C"1"C"2..."C""k". Some special clones are introduced below:
formula_6
for every "i" ≤ "n", a, b ∈ 2"n", and "c", "d" ∈ 2. Equivalently, the functions expressible as "f"("x"1, ..., "x""n") = "a"0 + "a"1"x"1 + ... + "a""n""x""n" for some "a"0, a.
formula_9
Moreover, formula_10 = [↛] is the set of functions bounded above by a variable: there exists "i" = 1, ..., "n" such that "f"(a) ≤ "a""i" for all a.
As a special case, "P"0 = "T"01 = [∨, +] is the set of "0-preserving" functions: "f"(0) = 0. Furthermore, ⊤ can be considered "T"00 when one takes the empty meet into account.
formula_11
and formula_12 = [→] is the set of functions bounded below by a variable: there exists "i" = 1, ..., "n" such that "f"(a) ≥ "a""i" for all a.
The special case "P"1 = "T"11 = [∧, →] consists of the "1-preserving" functions: "f"(1) = 1. Furthermore, ⊤ can be considered "T"10 when one takes the empty join into account.
Description of the lattice.
The set of all clones is a closure system, hence it forms a complete lattice. The lattice is countably infinite, and all its members are finitely generated. All the clones are listed in the table below.
The eight infinite families also have members with "k" = 1, but these appear separately in the table: T01 = P0, T11 = P1, PT01 = PT11 = P, MT01 = MP0, MT11 = MP1, MPT01 = MPT11 = MP.
The lattice has a natural symmetry mapping each clone "C" to its dual clone "C""d" = {"f"d | "f" ∈ "C"}, where "f""d"("x"1, ..., "x""n") = ¬"f"(¬"x"1, ..., ¬"x""n") is the de Morgan dual of a Boolean function "f". For example, Λ"d" = V, (T0"k")"d" = T1"k", and M"d" = M.
Applications.
The complete classification of Boolean clones given by Post helps to resolve various questions about classes of Boolean functions. For example:
Variants.
Clones requiring the constant functions.
If one only considers clones that are required to contain the constant functions, the classification is much simpler: there are only 7 such clones: UM, Λ, V, U, A, M, and ⊤. While this can be derived from the full classification, there is a simpler proof, taking less than a page.
Clones allowing nullary functions.
Composition alone does not allow one to generate a nullary function from the corresponding unary constant function; this is the technical reason why nullary functions are excluded from clones in Post's classification. If we lift the restriction, we get more clones. Namely, each clone "C" in Post's lattice which contains at least one constant function corresponds to two clones under the less restrictive definition: "C", and "C" together with all nullary functions whose unary versions are in "C".
Iterative systems.
Post originally did not work with the modern definition of clones, but with the so-called "iterative systems", which are sets of operations closed under substitution
formula_13
as well as permutation and identification of variables. The main difference is that iterative systems do not necessarily contain all projections. Every clone is an iterative system, and there are 20 non-empty iterative systems which are not clones. (Post also excluded the empty iterative system from the classification, hence his diagram has no least element and fails to be a lattice.) As another alternative, some authors work with the notion of a "closed class", which is an iterative system closed under introduction of dummy variables. There are four closed classes which are not clones: the empty set, the set of constant 0 functions, the set of constant 1 functions, and the set of all constant functions.
|
[
{
"math_id": 0,
"text": "\\pi_k^n(x_1,\\dots,x_n)=x_k,"
},
{
"math_id": 1,
"text": "h(x_1,\\dots,x_n)=f(g_1(x_1,\\dots,x_n),\\dots,g_m(x_1,\\dots,x_n)),"
},
{
"math_id": 2,
"text": "\\mathrm{th}^n_k(x_1,\\dots,x_n)=\\begin{cases}1&\\text{if }\\bigl|\\{i\\mid x_i=1\\}\\bigr|\\ge k,\\\\\n0&\\text{otherwise.}\\end{cases}"
},
{
"math_id": 3,
"text": "\\mathrm{maj}=\\mathrm{th}^3_2=(x\\land y)\\lor(x\\land z)\\lor(y\\land z)."
},
{
"math_id": 4,
"text": "(a_1,\\dots,a_n)\\le(b_1,\\dots,b_n)\\iff a_i\\le b_i\\text{ for }i=1,\\dots,n,"
},
{
"math_id": 5,
"text": "(a_1,\\dots,a_n)\\land(b_1,\\dots,b_n)=(a_1\\land b_1,\\dots,a_n\\land b_n)."
},
{
"math_id": 6,
"text": "\n\\begin{align}\n& f(a_1,\\dots,a_{i-1},c,a_{i+1},\\dots,a_n)=f(a_1,\\dots,d,a_{i+1},\\dots)\\\\\n\\Rightarrow & f(b_1,\\dots,c,b_{i+1},\\dots)=f(b_1,\\dots,d,b_{i+1},\\dots)\n\\end{align}\n"
},
{
"math_id": 7,
"text": "f(x_1,\\dots,x_n)=\\bigwedge_{i\\in I}x_i"
},
{
"math_id": 8,
"text": "f(x_1,\\dots,x_n)=\\bigvee_{i\\in I}x_i"
},
{
"math_id": 9,
"text": "\\mathbf a^1\\land\\cdots\\land\\mathbf a^k=\\mathbf 0\\ \\Rightarrow\\ f(\\mathbf a^1)\\land\\cdots\\land f(\\mathbf a^k)=0."
},
{
"math_id": 10,
"text": "\\mathrm{T}_0^\\infty=\\bigcap_{k=1}^\\infty\\mathrm{T}_0^k"
},
{
"math_id": 11,
"text": "\\mathbf a^1\\lor\\cdots\\lor\\mathbf a^k=\\mathbf 1\\ \\Rightarrow\\ f(\\mathbf a^1)\\lor\\cdots\\lor f(\\mathbf a^k)=1,"
},
{
"math_id": 12,
"text": "\\mathrm{T}_1^\\infty=\\bigcap_{k=1}^\\infty\\mathrm{T}_1^k"
},
{
"math_id": 13,
"text": "h(x_1,\\dots,x_{n+m-1})=f(x_1,\\dots,x_{n-1},g(x_n,\\dots,x_{n+m-1})),"
}
] |
https://en.wikipedia.org/wiki?curid=15536340
|
15537009
|
Variational vector field
|
Vector field
In the mathematical fields of the calculus of variations and differential geometry, the variational vector field is a certain type of vector field defined on the tangent bundle of a differentiable manifold which gives rise to variations along a vector field in the manifold itself.
Specifically, let "X" be a vector field on "M". Then "X" generates a one-parameter group of local diffeomorphisms "Fl"Xt, the flow along "X". The differential of "Fl"Xt gives, for each "t", a mapping
formula_0
where "TM" denotes the tangent bundle of "M". This is a one-parameter group of local diffeomorphisms of the tangent bundle. The variational vector field of "X", denoted by "T"("X") is the tangent to the flow of "d Fl"Xt.
|
[
{
"math_id": 0,
"text": " d\\mathrm{Fl}_X^t : TM \\to TM "
}
] |
https://en.wikipedia.org/wiki?curid=15537009
|
15537745
|
Frequentist inference
|
Probability Theory
Frequentist inference is a type of statistical inference based in frequentist probability, which treats “probability” in equivalent terms to “frequency” and draws conclusions from sample-data by means of emphasizing the frequency or proportion of findings in the data. Frequentist inference underlies frequentist statistics, in which the well-established methodologies of statistical hypothesis testing and confidence intervals are founded.
History of frequentist statistics.
The primary formulation of frequentism stems from the presumption that statistics could be perceived to have been a probabilistic frequency. This view was primarily developed by Ronald Fisher and the team of Jerzy Neyman and Egon Pearson. Ronald Fisher contributed to frequentist statistics by developing the frequentist concept of "significance testing", which is the study of the significance of a measure of a statistic when compared to the hypothesis. Neyman and Pearson extended Fisher's ideas to the comparison of two hypotheses: by basing the test on the ratio of the probabilities of the data under the two hypotheses, their approach maximizes the power of the test at a given significance level, and it also provides the basis of type I and type II errors. For more, see the foundations of statistics page.
Definition.
For statistical inference, the statistic about which we want to make inferences is formula_0, where the random vector formula_1 is a function of an unknown parameter, formula_2. The parameter formula_2 is further partitioned into (formula_3), where formula_4 is the parameter of interest, and formula_5 is the nuisance parameter. For concreteness, formula_4 might be the population mean, formula_6, and the nuisance parameter formula_5 the population standard deviation, formula_7.
Thus, statistical inference is concerned with the expectation of random vector formula_1, formula_8.
To construct areas of uncertainty in frequentist inference, a pivot is used which defines the area around formula_4 that can be used to provide an interval to estimate uncertainty. A pivot is a function formula_9 of the data and the parameter of interest such that formula_10 is strictly increasing in formula_4, where formula_11 is a random vector. This allows that, for some 0 < formula_12 < 1, we can define formula_13, which is the probability that the pivot function is less than some well-defined value. This implies formula_14, where formula_15 is a formula_16 upper limit for formula_4. Note that formula_16 is a range of outcomes that define a one-sided limit for formula_4, and that formula_17 is a two-sided limit for formula_4, when we want to estimate a range of outcomes where formula_4 may occur. This rigorously defines the confidence interval, which is the range of outcomes about which we can make statistical inferences.
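A minimal sketch of a pivot-based confidence interval under an assumed normal model with known standard deviation, using the pivotal quantity (x̄ − μ)/(σ/√n) and the approximate two-sided 95% quantile 1.96:
```python
import numpy as np

# Sketch: a 95% confidence interval for the parameter of interest psi = mu when the
# nuisance parameter sigma is known, built from the standard normal pivot.
rng = np.random.default_rng(1)
mu_true, sigma, n = 5.0, 2.0, 50            # illustrative values only
sample = rng.normal(loc=mu_true, scale=sigma, size=n)
xbar = sample.mean()
half_width = 1.96 * sigma / np.sqrt(n)
print((xbar - half_width, xbar + half_width))   # covers mu_true in about 95% of repetitions
```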
Fisherian reduction and Neyman-Pearson operational criteria.
Two complementary concepts in frequentist inference are the Fisherian reduction and the Neyman-Pearson operational criteria. Together these concepts illustrate a way of constructing frequentist intervals that define the limits for formula_4. The Fisherian reduction is a method of determining the interval within which the true value of formula_4 may lie, while the Neyman-Pearson operational criteria is a decision rule about making "a priori" probability assumptions.
The Fisherian reduction is defined as follows:
Essentially, the Fisherian reduction is designed to find where the sufficient statistic can be used to determine the range of outcomes where formula_4 may occur on a probability distribution that defines all the potential values of formula_4. This is necessary for formulating confidence intervals, where we can find a range of outcomes over which formula_4 is likely to occur in the long-run.
The Neyman-Pearson operational criteria is an even more specific understanding of the range of outcomes where the relevant statistic, formula_4, can be said to occur in the long run. The Neyman-Pearson operational criteria defines the likelihood of that range actually being adequate or of the range being inadequate. The Neyman-Pearson criteria defines the range of the probability distribution that, if formula_4 exists in this range, is still below the true population statistic. For example, if the distribution from the Fisherian reduction exceeds a threshold that we consider to be "a priori" implausible, then the Neyman-Pearson reduction's evaluation of that distribution can be used to infer where looking purely at the Fisherian reduction's distributions can give us inaccurate results. Thus, the Neyman-Pearson reduction is used to find the probability of type I and type II errors. As a point of reference, the complement to this in Bayesian statistics is the minimum Bayes risk criterion.
Because of the reliance of the Neyman-Pearson criteria on our ability to find a range of outcomes where formula_4 is likely to occur, the Neyman-Pearson approach is only possible where a Fisherian reduction can be achieved.
Experimental design and methodology.
Frequentist inferences are associated with the application of frequentist probability to experimental design and interpretation, and specifically with the view that any given experiment can be considered one of an infinite sequence of possible repetitions of the same experiment, each capable of producing statistically independent results. In this view, the frequentist inference approach to drawing conclusions from data is effectively to require that the correct conclusion should be drawn with a given (high) probability, among this notional set of repetitions.
However, exactly the same procedures can be developed under a subtly different formulation. This is one where a pre-experiment point of view is taken. It can be argued that the design of an experiment should include, before undertaking the experiment, decisions about exactly what steps will be taken to reach a conclusion from the data yet to be obtained. These steps can be specified by the scientist so that there is a high probability of reaching a correct decision where, in this case, the probability relates to a yet to occur set of random events and hence does not rely on the frequency interpretation of probability. This formulation has been discussed by Neyman, among others. This is especially pertinent because the significance of a frequentist test can vary under model selection, a violation of the likelihood principle.
The statistical philosophy of frequentism.
Frequentism is the study of probability with the assumption that results occur with a given frequency over some period of time or with repeated sampling. As such, frequentist analysis must be formulated with consideration of the assumptions of the problem frequentism attempts to analyze. This requires looking into whether the question at hand is concerned with understanding the variability of a statistic or locating the true value of a statistic. "The difference between these assumptions is critical for interpreting a hypothesis test". The next paragraph elaborates on this.
There are broadly two camps of statistical inference, the "epistemic approach" and the "epidemiological approach". The epistemic approach is the study of "variability"; namely, how often do we expect a statistic to deviate from some observed value. The epidemiological approach is concerned with the study of "uncertainty"; in this approach, the value of the statistic is fixed but our understanding of that statistic is incomplete. For concreteness, imagine trying to measure the stock market quote versus evaluating an asset's price. The stock market fluctuates so greatly that trying to find exactly where a stock price is going to be is not useful: the stock market is better understood using the epistemic approach, where we can try to quantify its fickle movements. Conversely, the price of an asset might not change that much from day to day: it is better to locate the true value of the asset rather than find a range of prices and thus the epidemiological approach is better. The difference between these approaches is non-trivial for the purposes of inference.
For the epistemic approach, we formulate the problem as if we want to attribute probability to a hypothesis. This can only be done with Bayesian statistics, where the interpretation of probability is straightforward because Bayesian statistics is conditional on the entire sample space, whereas frequentist testing is concerned with the whole experimental design. Frequentist statistics is conditioned not on solely the data but also on the "experimental design". In frequentist statistics, the cutoff for understanding the frequency occurrence is derived from the family distribution used in the experiment design. For example, a binomial distribution and a negative binomial distribution can be used to analyze exactly the same data, but because their tail ends are different the frequentist analysis will realize different levels of statistical significance for the same data that assumes different probability distributions. This difference does not occur in Bayesian inference. For more, see the likelihood principle, which frequentist statistics inherently violates.
For the epidemiological approach, the central idea behind frequentist statistics must be discussed. Frequentist statistics is designed so that, in the "long-run", the frequency of a statistic may be understood, and in the "long-run" the range of the true mean of a statistic can be inferred. This leads to the Fisherian reduction and the Neyman-Pearson operational criteria, discussed above. When we define the Fisherian reduction and the Neyman-Pearson operational criteria for any statistic, we are assessing, according to these authors, the likelihood that the true value of the statistic will occur within a given range of outcomes assuming a number of repetitions of our sampling method. This allows for inference where, in the long-run, a 95% confidence interval literally means that the true mean lies in the confidence interval 95% of the time, but "not" that the mean is in a particular confidence interval with 95% certainty. This is a popular misconception.
Very commonly the epistemic view and the epidemiological view are regarded as interconvertible. This is demonstrably false. First, the epistemic view is centered around Fisherian significance tests that are designed to provide inductive evidence against the null hypothesis, formula_20, in a single experiment, and is defined by the Fisherian p-value. Conversely, the epidemiological view, conducted with Neyman-Pearson hypothesis testing, is designed to minimize the Type II false acceptance errors in the long-run by providing error minimizations that work in the long-run. The difference between the two is critical because the epistemic view stresses the conditions under which we might find one value to be statistically significant; meanwhile, the epidemiological view defines the conditions under which long-run results present valid results. These are extremely different inferences, because one-time, epistemic conclusions do not inform long-run errors, and long-run errors cannot be used to certify whether one-time experiments are meaningful. Extrapolating one-time experiments to long-run occurrences is a misattribution, and extrapolating long-run trends to individual experiments is an example of the ecological fallacy.
Relationship with other approaches.
Frequentist inferences stand in contrast to other types of statistical inferences, such as Bayesian inferences and fiducial inferences. While the "Bayesian inference" is sometimes held to include the approach to inferences leading to optimal decisions, a more restricted view is taken here for simplicity.
Bayesian inference.
Bayesian inference is based in Bayesian probability, which treats “probability” as equivalent with “certainty”, and thus that the essential difference between the frequentist inference and the Bayesian inference is the same as the difference between the two interpretations of what a “probability” means. However, where appropriate, Bayesian inferences (meaning in this case an application of Bayes' theorem) are used by those employing frequency probability.
There are two major differences in the frequentist and Bayesian approaches to inference that are not included in the above consideration of the interpretation of probability:
References.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "y \\in Y"
},
{
"math_id": 1,
"text": "Y"
},
{
"math_id": 2,
"text": "\\theta"
},
{
"math_id": 3,
"text": "\\psi, \\lambda"
},
{
"math_id": 4,
"text": "\\psi"
},
{
"math_id": 5,
"text": "\\lambda"
},
{
"math_id": 6,
"text": "\\mu"
},
{
"math_id": 7,
"text": "\\sigma"
},
{
"math_id": 8,
"text": "E(Y)=E(Y;\\theta)=\\int y f_Y (y;\\theta)dy "
},
{
"math_id": 9,
"text": "p"
},
{
"math_id": 10,
"text": "p(t,\\psi)"
},
{
"math_id": 11,
"text": "t \\in T"
},
{
"math_id": 12,
"text": "c"
},
{
"math_id": 13,
"text": "P\\{p(T,\\psi)\\leq p^*_c\\}"
},
{
"math_id": 14,
"text": "P\\{\\psi \\leq q(T,c)\\} = 1-c"
},
{
"math_id": 15,
"text": "q(t,c)"
},
{
"math_id": 16,
"text": "1-c"
},
{
"math_id": 17,
"text": "1-2c"
},
{
"math_id": 18,
"text": "S"
},
{
"math_id": 19,
"text": "S = s"
},
{
"math_id": 20,
"text": "H_0"
}
] |
https://en.wikipedia.org/wiki?curid=15537745
|
1554065
|
Brianchon's theorem
|
The 3 long diagonals of a hexagon tangent to a conic section meet in a single point
In geometry, Brianchon's theorem is a theorem stating that when a hexagon is circumscribed around a conic section, its principal diagonals (those connecting opposite vertices) meet in a single point. It is named after Charles Julien Brianchon (1783–1864).
Formal statement.
Let formula_0 be a hexagon formed by six tangent lines of a conic section. Then lines formula_1 (extended diagonals each connecting opposite vertices) intersect at a single point formula_2, the Brianchon point.
Connection to Pascal's theorem.
The polar reciprocal and projective dual of this theorem give Pascal's theorem.
Degenerations.
As for Pascal's theorem there exist "degenerations" for Brianchon's theorem, too: Let coincide two neighbored tangents. Their point of intersection becomes a point of the conic. In the diagram three pairs of neighbored tangents coincide. This procedure results in a statement on inellipses of triangles. From a projective point of view the two triangles formula_3 and formula_4 lie perspectively with center formula_2. That means there exists a central collineation, which maps the one onto the other triangle. But only in special cases this collineation is an affine scaling. For example for a Steiner inellipse, where the Brianchon point is the centroid.
In the affine plane.
Brianchon's theorem is true in both the affine plane and the real projective plane. However, its statement in the affine plane is in a sense less informative and more complicated than that in the projective plane. Consider, for example, five tangent lines to a parabola. These may be considered sides of a hexagon whose sixth side is the line at infinity, but there is no line at infinity in the affine plane. In two instances, a line from a (non-existent) vertex to the opposite vertex would be a line "parallel to" one of the five tangent lines. Brianchon's theorem stated only for the affine plane would therefore have to be stated differently in such a situation.
The projective dual of Brianchon's theorem has exceptions in the affine plane but not in the projective plane.
Proof.
Brianchon's theorem can be proved by the idea of radical axis or reciprocation.
To prove it, take an arbitrary length (MN) and carry it on the tangents starting from the contact points: PL = RJ = QH = MN etc. Draw circles a, b, c tangent to opposite sides of the hexagon at the created points (H,W), (J,V) and (L,Y) respectively. One sees easily that the concurring lines coincide with the radical axes ab, bc, ca respectively, of the three circles taken in pairs. Thus O coincides with the radical center of these three circles.
The theorem takes particular forms in the case of circumscriptible pentagons e.g. when R and Q tend to coincide with F, a case where AFE is transformed to the tangent at F. Then, taking a further similar identification of points T,C and U, we obtain a corresponding theorem for quadrangles.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "P_1P_2P_3P_4P_5P_6"
},
{
"math_id": 1,
"text": "\\overline{P_1P_4},\\; \\overline{P_2P_5},\\; \\overline{P_3P_6}"
},
{
"math_id": 2,
"text": "B"
},
{
"math_id": 3,
"text": "P_1P_3P_5"
},
{
"math_id": 4,
"text": "P_2P_4P_6"
}
] |
https://en.wikipedia.org/wiki?curid=1554065
|
155407
|
Kleene's recursion theorem
|
Theorem in computability theory
In computability theory, Kleene's recursion theorems are a pair of fundamental results about the application of computable functions to their own descriptions. The theorems were first proved by Stephen Kleene in 1938 and appear in his 1952 book "Introduction to Metamathematics". A related theorem, which constructs fixed points of a computable function, is known as Rogers's theorem and is due to Hartley Rogers, Jr.
The recursion theorems can be applied to construct fixed points of certain operations on computable functions, to generate quines, and to construct functions defined via recursive definitions.
Notation.
The statement of the theorems refers to an admissible numbering formula_0 of the partial recursive functions, such that the function corresponding to index formula_1 is formula_2.
If formula_3 and formula_4 are partial functions on the natural numbers, the notation formula_5 indicates that, for each "n", either formula_6 and formula_7 are both defined and are equal, or else formula_6 and formula_7 are both undefined.
Rogers's fixed-point theorem.
Given a function formula_3, a fixed point of formula_3 is an index formula_1 such that formula_8. Note that the comparison of in- and outputs here is not in terms of numerical values, but in terms of their associated functions.
Rogers describes the following result as "a simpler version" of Kleene's (second) recursion theorem.
<templatestyles src="Math_theorem/styles.css" />
Rogers's fixed-point theorem — If formula_3 is a total computable function, it has a fixed point in the above sense.
This essentially means that if we apply an effective transformation to programs (say, replace instructions such as successor, jump, remove lines), there will always be a program whose behaviour is not altered by the transformation. This theorem can therefore be interpreted in the following manner: “given any effective procedure to transform programs, there is always a program that, when modified by the procedure, does exactly what it did before”, or: “it’s impossible to write a program that changes the extensional behaviour of all programs”.
Proof of the fixed-point theorem.
The proof uses a particular total computable function formula_9, defined as follows. Given a natural number formula_10, the function formula_9 outputs the index of the partial computable function that performs the following computation:
Given an input formula_11, first attempt to compute formula_12. If that computation returns an output formula_1, then compute formula_13 and return its value, if any.
Thus, for all indices formula_10 of partial computable functions, if formula_14 is defined, then formula_15. If formula_14 is not defined, then formula_16 is a function that is nowhere defined. The function formula_9 can be constructed from the partial computable function formula_17 described above and the s-m-n theorem: for each formula_10, formula_18 is the index of a program which computes the function formula_19.
To complete the proof, let formula_3 be any total computable function, and construct formula_9 as above. Let formula_1 be an index of the composition formula_20, which is a total computable function. Then formula_21 by the definition of formula_9.
But, because formula_1 is an index of formula_20, formula_22, and thus formula_23. By the transitivity of formula_24, this means formula_25. Hence formula_26 for formula_27.
This proof is a construction of a partial recursive function which implements the Y combinator.
Fixed-point-free functions.
A function formula_3 such that formula_28 for all formula_1 is called fixed-point free. The fixed-point theorem shows that no total computable function is fixed-point free, but there are many non-computable fixed-point-free functions. Arslanov's completeness criterion states that the only recursively enumerable Turing degree that computes a fixed-point-free function is 0′, the degree of the halting problem.
Kleene's second recursion theorem.
The second recursion theorem is a generalization of Rogers's theorem with a second input in the function. One informal interpretation of the second recursion theorem is that it is possible to construct self-referential programs; see "Application to quines" below.
The second recursion theorem. For any partial recursive function formula_29 there is an index formula_30 such that formula_31.
The theorem can be proved from Rogers's theorem by letting formula_32 be a function such that formula_33 (a construction described by the S-m-n theorem). One can then verify that a fixed-point of this formula_3 is an index formula_30 as required. The theorem is constructive in the sense that a fixed computable function maps an index for formula_34 into the index formula_30.
Comparison to Rogers's theorem.
Kleene's second recursion theorem and Rogers's theorem can both be proved, rather simply, from each other. However, a direct proof of Kleene's theorem does not make use of a universal program, which means that the theorem holds for certain subrecursive programming systems that do not have a universal program.
Application to quines.
A classic example using the second recursion theorem is the function formula_35. The corresponding index formula_30 in this case yields a computable function that outputs its own index when applied to any value. When expressed as computer programs, such indices are known as quines.
The following example in Lisp illustrates how the formula_30 in the corollary can be effectively produced from the function formula_34. The function codice_0 in the code is the function of that name produced by the S-m-n theorem.
codice_1 can be changed to any two-argument function.
The results of the following expressions should be the same. formula_0 codice_2
codice_3
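The Lisp code referred to above is not reproduced here, but the underlying idea, a program that outputs its own description, can be sketched in any language. The following JavaScript program (an illustrative assumption, not the original Lisp example) prints an exact copy of its own source:
// A minimal quine: this program prints an exact copy of its own source.
const s = "// A minimal quine: this program prints an exact copy of its own source.\nconst s = %s;\nconsole.log(s.replace('%s', JSON.stringify(s)));";
console.log(s.replace('%s', JSON.stringify(s)));
Running it and comparing the output with the source shows that they are identical, which is the kind of self-reference whose existence the second recursion theorem guarantees.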
Application to elimination of recursion.
Suppose that formula_36 and formula_9 are total computable functions that are used in a recursive definition for a function formula_37:
formula_38
formula_39
The second recursion theorem can be used to show that such equations define a computable function, where the notion of computability does not have to allow, prima facie, for recursive definitions (for example, it may be defined by μ-recursion, or by Turing machines). This recursive definition can be converted into a computable function formula_40 that assumes formula_1 is an index to itself, to simulate recursion:
formula_41
formula_42
The recursion theorem establishes the existence of a computable function formula_43 such that formula_44. Thus formula_37 satisfies the given recursive definition.
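The effect of this construction can be imitated in an ordinary programming language by handing a function "itself" as an extra argument instead of letting it call itself by name. The following JavaScript sketch is only an illustration of that idea; here ordinary JavaScript values stand in for indices of partial recursive functions, which is an assumption of the example:
// Recursion eliminated by passing the function its own "description":
// fact is not defined recursively; it receives self and applies self to itself.
const fact = (self) => (n) => (n === 0 ? 1 : n * self(self)(n - 1));
console.log(fact(fact)(5)); // 120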
Reflexive programming.
Reflexive, or reflective, programming refers to the usage of self-reference in programs. Jones presents a view of the second recursion theorem based on a reflexive language.
It is shown that the reflexive language defined is not stronger than a language without reflection (because an interpreter for the reflexive language can be implemented without using reflection); then, it is shown that the recursion theorem is almost trivial in the reflexive language.
The first recursion theorem.
While the second recursion theorem is about fixed points of computable functions, the first recursion theorem is related to fixed points determined by enumeration operators, which are a computable analogue of inductive definitions. An enumeration operator is a set of pairs ("A","n") where "A" is a (code for a) finite set of numbers and "n" is a single natural number. Often, "n" will be viewed as a code for an ordered pair of natural numbers, particularly when functions are defined via enumeration operators. Enumeration operators are of central importance in the study of enumeration reducibility.
Each enumeration operator Φ determines a function from sets of naturals to sets of naturals given by
formula_45
A recursive operator is an enumeration operator that, when given the graph of a partial recursive function, always returns the graph of a partial recursive function.
A fixed point of an enumeration operator Φ is a set "F" such that Φ("F") = "F". The first enumeration theorem shows that fixed points can be effectively obtained if the enumeration operator itself is computable.
First recursion theorem. The following statements hold.
# For any computable enumeration operator Φ there is a recursively enumerable set "F" such that Φ("F") = "F" and "F" is the smallest set with this property.
# For any recursive operator Ψ there is a partial computable function φ such that Ψ(φ) = φ and φ is the smallest partial computable function with this property.
The first recursion theorem is also called Fixed point theorem (of recursion theory). There is also a definition which can be applied to recursive functionals as follows:
Let formula_46 be a recursive functional. Then formula_47 has a least fixed point formula_48 which is computable i.e.
1) formula_49
2) formula_50 such that formula_51 it holds that formula_52
3) formula_53 is computable
Example.
Like the second recursion theorem, the first recursion theorem can be used to obtain functions satisfying systems of recursion equations. To apply the first recursion theorem, the recursion equations must first be recast as a recursive operator.
Consider the recursion equations for the factorial function "f": formula_54 The corresponding recursive operator Φ will have information that tells how to get to the next value of "f" from the previous value. However, the recursive operator will actually define the graph of "f". First, Φ will contain the pair formula_55. This indicates that "f"(0) is unequivocally 1, and thus the pair (0,1) is in the graph of "f".
Next, for each "n" and "m", Φ will contain the pair formula_56. This indicates that, if "f"("n") is "m", then "f"("n" + 1) is ("n" + 1)"m", so that the pair ("n" + 1, ("n" + 1)"m") is in the graph of "f". Unlike the base case "f"(0) = 1, the recursive operator requires some information about "f"("n") before it defines a value of "f"("n" + 1).
The first recursion theorem (in particular, part 1) states that there is a set "F" such that Φ("F") = F. The set "F" will consist entirely of ordered pairs of natural numbers, and will be the graph of the factorial function "f", as desired.
The restriction to recursion equations that can be recast as recursive operators ensures that the recursion equations actually define a least fixed point. For example, consider the set of recursion equations: formula_57 There is no function "g" satisfying these equations, because they imply "g"(2) = 1 and also imply "g"(2) = 0. Thus there is no fixed point "g" satisfying these recursion equations. It is possible to make an enumeration operator corresponding to these equations, but it will not be a recursive operator.
Proof sketch for the first recursion theorem.
The proof of part 1 of the first recursion theorem is obtained by iterating the enumeration operator Φ beginning with the empty set. First, a sequence "F""k" is constructed, for formula_58. Let "F"0 be the empty set. Proceeding inductively, for each "k", let "F""k" + 1 be formula_59. Finally, "F" is taken to be formula_60. The remainder of the proof consists of a verification that "F" is recursively enumerable and is the least fixed point of Φ. The sequence "F""k" used in this proof corresponds to the Kleene chain in the proof of the Kleene fixed-point theorem.
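The iteration just described can be imitated concretely for the factorial operator of the example above. In the following JavaScript sketch (the encoding of a pair (n, m) as the string "n,m" is an assumption made only for this illustration), each round computes the next approximation from the previous one, starting from the empty set:
// The enumeration operator for the factorial equations: from a finite part F of
// the graph of f, derive the pairs that the equations force into the graph.
function phi(F) {
  const out = new Set(["0,1"]);               // f(0) = 1, from the empty premise
  for (const pair of F) {
    const [n, m] = pair.split(",").map(Number);
    out.add((n + 1) + "," + (n + 1) * m);     // f(n+1) = (n+1)·f(n)
  }
  return out;
}
let F = new Set();                            // the empty starting set
for (let k = 0; k < 6; k++) {
  F = new Set([...F, ...phi(F)]);             // next approximation: union with Φ applied to it
}
console.log([...F]); // [ '0,1', '1,1', '2,2', '3,6', '4,24', '5,120' ]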
The second part of the first recursion theorem follows from the first part. The assumption that Φ is a recursive operator is used to show that the fixed point of Φ is the graph of a partial function. The key point is that if the fixed point "F" is not the graph of a function, then there is some "k" such that "F""k" is not the graph of a function.
Comparison to the second recursion theorem.
Compared to the second recursion theorem, the first recursion theorem produces a stronger conclusion but only when narrower hypotheses are satisfied. Rogers uses the term weak recursion theorem for the first recursion theorem and strong recursion theorem for the second recursion theorem.
One difference between the first and second recursion theorems is that the fixed points obtained by the first recursion theorem are guaranteed to be least fixed points, while those obtained from the second recursion theorem may not be least fixed points.
A second difference is that the first recursion theorem only applies to systems of equations that can be recast as recursive operators. This restriction is similar to the restriction to continuous operators in the Kleene fixed-point theorem of order theory. The second recursion theorem can be applied to any total recursive function.
Generalized theorem.
In the context of his theory of numberings, Ershov showed that Kleene's recursion theorem holds for any precomplete numbering. A Gödel numbering is a precomplete numbering on the set of computable functions so the generalized theorem yields the Kleene recursion theorem as a special case.
Given a precomplete numbering formula_61, then for any partial computable function formula_37 with two parameters there exists a total computable function formula_62 with one parameter such that
formula_63
References.
Footnotes
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\varphi"
},
{
"math_id": 1,
"text": "e"
},
{
"math_id": 2,
"text": "\\varphi_e"
},
{
"math_id": 3,
"text": "F"
},
{
"math_id": 4,
"text": "G"
},
{
"math_id": 5,
"text": "F \\simeq G"
},
{
"math_id": 6,
"text": "F(n)"
},
{
"math_id": 7,
"text": "G(n)"
},
{
"math_id": 8,
"text": "\\varphi_e \\simeq \\varphi_{F(e)}"
},
{
"math_id": 9,
"text": "h"
},
{
"math_id": 10,
"text": "x"
},
{
"math_id": 11,
"text": "y"
},
{
"math_id": 12,
"text": "\\varphi_{x}(x)"
},
{
"math_id": 13,
"text": "\\varphi_e(y)"
},
{
"math_id": 14,
"text": "\\varphi_x(x)"
},
{
"math_id": 15,
"text": "\\varphi_{h(x)} \\simeq \\varphi_{\\varphi_x(x)}"
},
{
"math_id": 16,
"text": "\\varphi_{h(x)}"
},
{
"math_id": 17,
"text": "g(x,y)"
},
{
"math_id": 18,
"text": "h(x)"
},
{
"math_id": 19,
"text": "y \\mapsto g(x,y)"
},
{
"math_id": 20,
"text": "F \\circ h"
},
{
"math_id": 21,
"text": "\\varphi_{h(e)} \\simeq \\varphi_{\\varphi_e(e)}"
},
{
"math_id": 22,
"text": "\\varphi_e(e) = (F \\circ h)(e)"
},
{
"math_id": 23,
"text": "\\varphi_{\\varphi_e(e)} \\simeq \\varphi_{F(h(e))}"
},
{
"math_id": 24,
"text": "\\simeq"
},
{
"math_id": 25,
"text": "\\varphi_{h(e)} \\simeq \\varphi_{F(h(e))}"
},
{
"math_id": 26,
"text": "\\varphi_n \\simeq \\varphi_{F(n)}"
},
{
"math_id": 27,
"text": "n = h(e)"
},
{
"math_id": 28,
"text": " \\varphi_e \\not \\simeq \\varphi_{F(e)}"
},
{
"math_id": 29,
"text": "Q(x,y)"
},
{
"math_id": 30,
"text": "p"
},
{
"math_id": 31,
"text": "\\varphi_p \\simeq \\lambda y.Q(p,y)"
},
{
"math_id": 32,
"text": "F(p)"
},
{
"math_id": 33,
"text": "\\varphi_{F(p)}(y) = Q(p,y)"
},
{
"math_id": 34,
"text": "Q"
},
{
"math_id": 35,
"text": "Q(x,y)=x"
},
{
"math_id": 36,
"text": "g"
},
{
"math_id": 37,
"text": "f"
},
{
"math_id": 38,
"text": "f(0,y) \\simeq g(y),"
},
{
"math_id": 39,
"text": "f(x+1,y) \\simeq h(f(x,y),x,y),"
},
{
"math_id": 40,
"text": "\\varphi_{F}(e,x,y)"
},
{
"math_id": 41,
"text": "\\varphi_{F}(e,0,y) \\simeq g(y),"
},
{
"math_id": 42,
"text": "\\varphi_{F}(e,x+1,y) \\simeq h(\\varphi_e(x,y),x,y)."
},
{
"math_id": 43,
"text": "\\varphi_f"
},
{
"math_id": 44,
"text": "\\varphi_f(x,y) \\simeq \\varphi_{F}(f,x,y)"
},
{
"math_id": 45,
"text": "\\Phi(X) = \\{ n \\mid \\exists A \\subseteq X [(A,n) \\in \\Phi]\\}."
},
{
"math_id": 46,
"text": "\\Phi: \\mathbb{F}(\\mathbb{N}^k) \\rightarrow (\\mathbb{N}^k)"
},
{
"math_id": 47,
"text": "\\Phi"
},
{
"math_id": 48,
"text": "f_{\\Phi}: \\mathbb{N}^k \\rightarrow \\mathbb{N}"
},
{
"math_id": 49,
"text": "\\Phi(f_{\\phi})=f_{\\Phi}"
},
{
"math_id": 50,
"text": "\\forall g \\in \\mathbb{F}(\\mathbb{N}^k)"
},
{
"math_id": 51,
"text": "\\Phi(g)=g"
},
{
"math_id": 52,
"text": "f_{\\Phi}\\subseteq g"
},
{
"math_id": 53,
"text": "f_{\\Phi}"
},
{
"math_id": 54,
"text": "\\begin{align}\n&f(0) = 1 \\\\\n&f(n+1) = (n + 1) \\cdot f(n)\n\\end{align}"
},
{
"math_id": 55,
"text": "( \\varnothing, (0, 1))"
},
{
"math_id": 56,
"text": "( \\{ (n, m) \\}, (n+1, (n+1)\\cdot m))"
},
{
"math_id": 57,
"text": "\\begin{align}\n&g(0) = 1\\\\\n&g(n + 1) = 1\\\\\n&g(2n) = 0\n\\end{align}"
},
{
"math_id": 58,
"text": "k = 0, 1, \\ldots"
},
{
"math_id": 59,
"text": "F_k \\cup \\Phi(F_k)"
},
{
"math_id": 60,
"text": "\\bigcup F_k"
},
{
"math_id": 61,
"text": "\\nu"
},
{
"math_id": 62,
"text": "t"
},
{
"math_id": 63,
"text": "\\forall n \\in \\mathbb{N} : \\nu \\circ f(n,t(n)) = \\nu \\circ t(n)."
}
] |
https://en.wikipedia.org/wiki?curid=155407
|
155414
|
Computability theory
|
Study of computable functions and Turing degrees
Computability theory, also known as recursion theory, is a branch of mathematical logic, computer science, and the theory of computation that originated in the 1930s with the study of computable functions and Turing degrees. The field has since expanded to include the study of generalized computability and definability. In these areas, computability theory overlaps with proof theory and effective descriptive set theory.
Basic questions addressed by computability theory include:
Although there is considerable overlap in terms of knowledge and methods, mathematical computability theorists study the theory of relative computability, reducibility notions, and degree structures; those in the computer science field focus on the theory of subrecursive hierarchies, formal methods, and formal languages. The study of which mathematical constructions can be effectively performed is sometimes called recursive mathematics.
Introduction.
Computability theory originated in the 1930s, with the work of Kurt Gödel, Alonzo Church, Rózsa Péter, Alan Turing, Stephen Kleene, and Emil Post.
The fundamental results the researchers obtained established Turing computability as the correct formalization of the informal idea of effective calculation. In 1952, these results led Kleene to coin the two names "Church's thesis" and "Turing's thesis". Nowadays these are often considered as a single hypothesis, the Church–Turing thesis, which states that any function that is computable by an algorithm is a computable function. Although initially skeptical, by 1946 Gödel argued in favor of this thesis:
"Tarski has stressed in his lecture (and I think justly) the great importance of the concept of general recursiveness (or Turing's computability). It seems to me that this importance is largely due to the fact that with this concept one has for the first time succeeded in giving an absolute notion to an interesting epistemological notion, i.e., one not depending on the formalism chosen."
With a definition of effective calculation came the first proofs that there are problems in mathematics that cannot be effectively decided. In 1936, Church and Turing, inspired by techniques Gödel had used to prove his incompleteness theorems in 1931, independently demonstrated that the Entscheidungsproblem is not effectively decidable. This result showed that there is no algorithmic procedure that can correctly decide whether arbitrary mathematical propositions are true or false.
Many problems in mathematics have been shown to be undecidable after these initial examples were established. In 1947, Markov and Post published independent papers showing that the word problem for semigroups cannot be effectively decided. Extending this result, Pyotr Novikov and William Boone showed independently in the 1950s that the word problem for groups is not effectively solvable: there is no effective procedure that, given a word in a finitely presented group, will decide whether the element represented by the word is the identity element of the group. In 1970, Yuri Matiyasevich proved (using results of Julia Robinson) Matiyasevich's theorem, which implies that Hilbert's tenth problem has no effective solution; this problem asked whether there is an effective procedure to decide whether a Diophantine equation over the integers has a solution in the integers.
Turing computability.
The main form of computability studied in the field was introduced by Turing in 1936. A set of natural numbers is said to be a "computable set" (also called a "decidable", "recursive", or "Turing computable" set) if there is a Turing machine that, given a number "n", halts with output 1 if "n" is in the set and halts with output 0 if "n" is not in the set. A function "f" from natural numbers to natural numbers is a "(Turing) computable," or "recursive function" if there is a Turing machine that, on input "n", halts and returns output "f"("n"). The use of Turing machines here is not necessary; there are many other models of computation that have the same computing power as Turing machines; for example the μ-recursive functions obtained from primitive recursion and the μ operator.
The terminology for computable functions and sets is not completely standardized.
The definition in terms of μ-recursive functions as well as a different definition of functions by Gödel led to the traditional name "recursive" for sets and functions computable by a Turing machine. The word "decidable" stems from the German word "Entscheidungsproblem", which was used in the original papers of Turing and others. In contemporary use, the term "computable function" has various definitions: according to Nigel J. Cutland, it is a partial recursive function (which can be undefined for some inputs), while according to Robert I. Soare it is a total recursive (equivalently, general recursive) function. This article follows the second of these conventions. In 1996, Soare gave additional comments about the terminology.
Not every set of natural numbers is computable. The halting problem, which is the set of (descriptions of) Turing machines that halt on input 0, is a well-known example of a noncomputable set. The existence of many noncomputable sets follows from the facts that there are only countably many Turing machines, and thus only countably many computable sets, but according to Cantor's theorem, there are uncountably many sets of natural numbers.
Although the halting problem is not computable, it is possible to simulate program execution and produce an infinite list of the programs that do halt. Thus the halting problem is an example of a "computably enumerable (c.e.) set", which is a set that can be enumerated by a Turing machine (other terms for computably enumerable include "recursively enumerable" and "semidecidable"). Equivalently, a set is c.e. if and only if it is the range of some computable function. The c.e. sets, although not decidable in general, have been studied in detail in computability theory.
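The dovetailing idea can be illustrated on a toy scale. In the following JavaScript sketch, generator functions stand in for Turing machines (an assumption of the illustration only); every program is advanced by one step per round, so every program that halts is eventually reported, while non-halting programs are simply never reported:
// Toy dovetailing: run all "programs" in parallel, one step each per round,
// and report each one as soon as it halts. Non-halting programs are never rejected.
function* loopsForever() { while (true) yield; }
function* haltsAfter(k) { for (let i = 0; i < k; i++) yield; }
const programs = [loopsForever(), haltsAfter(3), haltsAfter(7), loopsForever(), haltsAfter(1)];
const halted = new Set();
for (let round = 0; round < 20; round++) {      // in reality this loop never stops
  programs.forEach((p, i) => {
    if (!halted.has(i) && p.next().done) {
      halted.add(i);
      console.log("program " + i + " halts");    // enumerates the halting programs
    }
  });
}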
Areas of research.
Beginning with the theory of computable sets and functions described above, the field of computability theory has grown to include the study of many closely related topics. These are not independent areas of research: each of these areas draws ideas and results from the others, and most computability theorists are familiar with the majority of them.
Relative computability and the Turing degrees.
Computability theory in mathematical logic has traditionally focused on "relative computability", a generalization of Turing computability defined using oracle Turing machines, introduced by Turing in 1939. An oracle Turing machine is a hypothetical device which, in addition to performing the actions of a regular Turing machine, is able to ask questions of an "oracle", which is a particular set of natural numbers. The oracle machine may only ask questions of the form "Is "n" in the oracle set?". Each question will be immediately answered correctly, even if the oracle set is not computable. Thus an oracle machine with a noncomputable oracle will be able to compute sets that a Turing machine without an oracle cannot.
Informally, a set of natural numbers "A" is "Turing reducible" to a set "B" if there is an oracle machine that correctly tells whether numbers are in "A" when run with "B" as the oracle set (in this case, the set "A" is also said to be ("relatively") "computable from" "B" and "recursive in" "B"). If a set "A" is Turing reducible to a set "B" and "B" is Turing reducible to "A" then the sets are said to have the same "Turing degree" (also called "degree of unsolvability"). The Turing degree of a set gives a precise measure of how uncomputable the set is.
The natural examples of sets that are not computable, including many different sets that encode variants of the halting problem, have two properties in common:
Many-one reductions are "stronger" than Turing reductions: if a set "A" is many-one reducible to a set "B", then "A" is Turing reducible to "B", but the converse does not always hold. Although the natural examples of noncomputable sets are all many-one equivalent, it is possible to construct computably enumerable sets "A" and "B" such that "A" is Turing reducible to "B" but not many-one reducible to "B". It can be shown that every computably enumerable set is many-one reducible to the halting problem, and thus the halting problem is the most complicated computably enumerable set with respect to many-one reducibility and with respect to Turing reducibility. In 1944, Post asked whether "every" computably enumerable set is either computable or Turing equivalent to the halting problem, that is, whether there is no computably enumerable set with a Turing degree intermediate between those two.
As intermediate results, Post defined natural types of computably enumerable sets like the simple, hypersimple and hyperhypersimple sets. Post showed that these sets are strictly between the computable sets and the halting problem with respect to many-one reducibility. Post also showed that some of them are strictly intermediate under other reducibility notions stronger than Turing reducibility. But Post left open the main problem of the existence of computably enumerable sets of intermediate Turing degree; this problem became known as "Post's problem". After ten years, Kleene and Post showed in 1954 that there are intermediate Turing degrees between those of the computable sets and the halting problem, but they failed to show that any of these degrees contains a computably enumerable set. Very soon after this, Friedberg and Muchnik independently solved Post's problem by establishing the existence of computably enumerable sets of intermediate degree. This groundbreaking result opened a wide study of the Turing degrees of the computably enumerable sets which turned out to possess a very complicated and non-trivial structure.
There are uncountably many sets that are not computably enumerable, and the investigation of the Turing degrees of all sets is as central in computability theory as the investigation of the computably enumerable Turing degrees. Many degrees with special properties were constructed: "hyperimmune-free degrees" where every function computable relative to that degree is majorized by a (unrelativized) computable function; "high degrees" relative to which one can compute a function "f" which dominates every computable function "g" in the sense that there is a constant "c" depending on "g" such that "g(x) < f(x)" for all "x > c"; "random degrees" containing algorithmically random sets; "1-generic" degrees of 1-generic sets; and the degrees below the halting problem of limit-computable sets.
The study of arbitrary (not necessarily computably enumerable) Turing degrees involves the study of the Turing jump. Given a set "A", the "Turing jump" of "A" is a set of natural numbers encoding a solution to the halting problem for oracle Turing machines running with oracle "A". The Turing jump of any set is always of higher Turing degree than the original set, and a theorem of Friedberg shows that any set that computes the halting problem can be obtained as the Turing jump of another set. Post's theorem establishes a close relationship between the Turing jump operation and the arithmetical hierarchy, which is a classification of certain subsets of the natural numbers based on their definability in arithmetic.
Much recent research on Turing degrees has focused on the overall structure of the set of Turing degrees and the set of Turing degrees containing computably enumerable sets. A deep theorem of Shore and Slaman states that the function mapping a degree "x" to the degree of its Turing jump is definable in the partial order of the Turing degrees. A survey by Ambos-Spies and Fejer gives an overview of this research and its historical progression.
Other reducibilities.
An ongoing area of research in computability theory studies reducibility relations other than Turing reducibility. Post introduced several "strong reducibilities", so named because they imply truth-table reducibility. A Turing machine implementing a strong reducibility will compute a total function regardless of which oracle it is presented with. "Weak reducibilities" are those where a reduction process may not terminate for all oracles; Turing reducibility is one example.
The strong reducibilities include:
One-one reducibility: "A" is "one-one reducible" (or "1-reducible") to "B" if there is a total computable injective function "f" such that each "n" is in "A" if and only if "f"("n") is in "B".
Many-one reducibility: This is essentially one-one reducibility without the constraint that "f" be injective. "A" is "many-one reducible" (or "m-reducible") to "B" if there is a total computable function "f" such that each "n" is in "A" if and only if "f"("n") is in "B".
Truth-table reducibility: "A" is truth-table reducible to "B" if "A" is Turing reducible to "B" via an oracle Turing machine that computes a total function regardless of the oracle it is given. Because of compactness of Cantor space, this is equivalent to saying that the reduction presents a single list of questions (depending only on the input) to the oracle simultaneously, and then having seen their answers is able to produce an output without asking additional questions regardless of the oracle's answer to the initial queries. Many variants of truth-table reducibility have also been studied.
Further reducibilities (positive, disjunctive, conjunctive, linear and their weak and bounded versions) are discussed in the article Reduction (computability theory).
The major research on strong reducibilities has been to compare their theories, both for the class of all computably enumerable sets as well as for the class of all subsets of the natural numbers. Furthermore, the relations between the reducibilities have been studied. For example, it is known that every Turing degree is either a truth-table degree or is the union of infinitely many truth-table degrees.
Reducibilities weaker than Turing reducibility (that is, reducibilities that are implied by Turing reducibility) have also been studied. The most well known are arithmetical reducibility and hyperarithmetical reducibility. These reducibilities are closely connected to definability over the standard model of arithmetic.
Rice's theorem and the arithmetical hierarchy.
Rice showed that for every nontrivial class "C" (which contains some but not all c.e. sets) the index set "E" = {"e": the "e"th c.e. set "We" is in "C"} has the property that either the halting problem or its complement is many-one reducible to "E", that is, can be mapped using a many-one reduction to "E" (see Rice's theorem for more detail). But many of these index sets are even more complicated than the halting problem. These types of sets can be classified using the arithmetical hierarchy. For example, the index set FIN of the class of all finite sets is on the level Σ2, the index set REC of the class of all recursive sets is on the level Σ3, the index set COFIN of all cofinite sets is also on the level Σ3 and the index set COMP of the class of all Turing-complete sets is on the level Σ4. These hierarchy levels are defined inductively: Σ"n"+1 contains exactly the sets which are computably enumerable relative to Σ"n", and Σ1 contains the computably enumerable sets. The index sets given here are even complete for their levels, that is, all the sets in these levels can be many-one reduced to the given index sets.
Reverse mathematics.
The program of "reverse mathematics" asks which set-existence axioms are necessary to prove particular theorems of mathematics in subsystems of second-order arithmetic. This study was initiated by Harvey Friedman and was studied in detail by Stephen Simpson and others; in 1999, Simpson gave a detailed discussion of the program. The set-existence axioms in question correspond informally to axioms saying that the powerset of the natural numbers is closed under various reducibility notions. The weakest such axiom studied in reverse mathematics is "recursive comprehension", which states that the powerset of the naturals is closed under Turing reducibility.
Numberings.
A numbering is an enumeration of functions; it has two parameters, "e" and "x", and outputs the value of the "e"-th function in the numbering on the input "x". Numberings can be partial-computable, although some of their members are total computable functions. Admissible numberings are those into which all others can be translated. A Friedberg numbering (named after its discoverer) is a one-one numbering of all partial-computable functions; it is necessarily not an admissible numbering. Later research dealt also with numberings of other classes, such as classes of computably enumerable sets. Goncharov discovered, for example, a class of computably enumerable sets for which the numberings fall into exactly two classes with respect to computable isomorphisms.
The priority method.
Post's problem was solved with a method called the "priority method"; a proof using this method is called a "priority argument". This method is primarily used to construct computably enumerable sets with particular properties. To use this method, the desired properties of the set to be constructed are broken up into an infinite list of goals, known as "requirements", so that satisfying all the requirements will cause the set constructed to have the desired properties. Each requirement is assigned a natural number representing its priority; 0 is assigned to the most important priority, 1 to the second most important, and so on. The set is then constructed in stages, each stage attempting to satisfy one or more of the requirements by either adding numbers to the set or banning numbers from the set so that the final set will satisfy the requirement. It may happen that satisfying one requirement will cause another to become unsatisfied; the priority order is used to decide what to do in such an event.
Priority arguments have been employed to solve many problems in computability theory, and have been classified into a hierarchy based on their complexity. Because complex priority arguments can be technical and difficult to follow, it has traditionally been considered desirable to prove results without priority arguments, or to see if results proved with priority arguments can also be proved without them.
For example, Kummer published a paper on a proof for the existence of Friedberg numberings without using the priority method.
The lattice of computably enumerable sets.
When Post defined the notion of a simple set as a c.e. set with an infinite complement not containing any infinite c.e. set, he started to study the structure of the computably enumerable sets under inclusion. This lattice became a well-studied structure. Computable sets can be defined in this structure by the basic result that a set is computable if and only if the set and its complement are both computably enumerable. Infinite c.e. sets always have infinite computable subsets; on the other hand, simple sets exist but do not always have a coinfinite computable superset. Post already introduced hypersimple and hyperhypersimple sets; later, maximal sets were constructed, which are c.e. sets such that every c.e. superset is either a finite variant of the given maximal set or is co-finite. Post's original motivation in the study of this lattice was to find a structural notion such that every set which satisfies this property is neither in the Turing degree of the computable sets nor in the Turing degree of the halting problem. Post did not find such a property and the solution to his problem applied priority methods instead; in 1991, Harrington and Soare eventually found such a property.
Automorphism problems.
Another important question is the existence of automorphisms in computability-theoretic structures. One of these structures is that of the computably enumerable sets under inclusion modulo finite difference; in this structure, "A" is below "B" if and only if the set difference "B" − "A" is finite. Maximal sets (as defined in the previous paragraph) have the property that they cannot be automorphic to non-maximal sets, that is, if there is an automorphism of the computably enumerable sets under the structure just mentioned, then every maximal set is mapped to another maximal set. In 1974, Soare showed that the converse also holds, that is, every two maximal sets are automorphic. So the maximal sets form an orbit: every automorphism preserves maximality, and any two maximal sets are transformed into each other by some automorphism. Harrington gave a further example of an automorphic property: that of the creative sets, the sets which are many-one equivalent to the halting problem.
Besides the lattice of computably enumerable sets, automorphisms are also studied for the structure of the Turing degrees of all sets as well as for the structure of the Turing degrees of c.e. sets. In both cases, Cooper claims to have constructed nontrivial automorphisms which map some degrees to other degrees; this construction has, however, not been verified and some colleagues believe that the construction contains errors and that the question of whether there is a nontrivial automorphism of the Turing degrees is still one of the main unsolved questions in this area.
Kolmogorov complexity.
The field of Kolmogorov complexity and algorithmic randomness was developed during the 1960s and 1970s by Chaitin, Kolmogorov, Levin, Martin-Löf and Solomonoff (the names are given here in alphabetical order; much of the research was independent, and the unity of the concept of randomness was not understood at the time). The main idea is to consider a universal Turing machine "U" and to measure the complexity of a number (or string) "x" as the length of the shortest input "p" such that "U"("p") outputs "x". This approach revolutionized earlier ways to determine when an infinite sequence (equivalently, characteristic function of a subset of the natural numbers) is random or not by invoking a notion of randomness for finite objects. Kolmogorov complexity became not only a subject of independent study but is also applied to other subjects as a tool for obtaining proofs.
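Although Kolmogorov complexity itself is not computable, upper bounds on it can be observed with any general-purpose compressor. The following Node.js sketch (the choice of zlib and of the example strings is an assumption of the illustration) shows that a highly regular string admits a much shorter description than an irregular string of the same length:
// Crude upper bound on description length via a general-purpose compressor.
// This is only an illustration: true Kolmogorov complexity is uncomputable.
const zlib = require('zlib');
const approxK = (s) => zlib.deflateSync(Buffer.from(s)).length;   // compressed size in bytes
const letters = "abcdefghijklmnopqrstuvwxyz";
const regular = "ab".repeat(500);                                 // 1000 characters, very regular
const irregular = Array.from({ length: 1000 },
  () => letters[Math.floor(Math.random() * letters.length)]).join("");
console.log(approxK(regular));    // small
console.log(approxK(irregular));  // much larger: there is little structure to exploit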
There are still many open problems in this area.
Frequency computation.
This branch of computability theory analyzed the following question: for fixed "m" and "n" with 0 < "m" < "n", for which functions "A" is it possible to compute, for any "n" different inputs "x"1, "x"2, ..., "xn", a tuple of "n" numbers "y1, y2, ..., yn" such that at least "m" of the equations "A"("xk") = "yk" are true. Such sets are known as ("m", "n")-recursive sets. The first major result in this branch of computability theory is Trakhtenbrot's result that a set is computable if it is ("m", "n")-recursive for some "m", "n" with 2"m" > "n". On the other hand, Jockusch's semirecursive sets (which were already known informally before Jockusch introduced them in 1968) are examples of a set which is ("m", "n")-recursive if and only if 2"m" < "n" + 1. There are uncountably many of these sets and also some computably enumerable but noncomputable sets of this type. Later, Degtev established a hierarchy of computably enumerable sets that are (1, "n" + 1)-recursive but not (1, "n")-recursive. After a long phase of research by Russian scientists, this subject became repopularized in the west by Beigel's thesis on bounded queries, which linked frequency computation to the above-mentioned bounded reducibilities and other related notions. One of the major results was Kummer's Cardinality Theorem, which states that a set "A" is computable if and only if there is an "n" such that some algorithm enumerates, for each tuple of "n" different numbers, up to "n" many possible choices of the cardinality of this set of "n" numbers intersected with "A"; these choices must contain the true cardinality but leave out at least one false one.
Inductive inference.
This is the computability-theoretic branch of learning theory. It is based on E. Mark Gold's model of learning in the limit from 1967 and has developed since then more and more models of learning. The general scenario is the following: given a class "S" of computable functions, is there a learner (that is, a computable functional) which outputs, for any input of the form ("f"(0), "f"(1), ..., "f"("n")), a hypothesis? A learner "M" learns a function "f" if almost all hypotheses are the same index "e" of "f" with respect to a previously agreed-on acceptable numbering of all computable functions; "M" learns "S" if "M" learns every "f" in "S". Basic results are that all computably enumerable classes of functions are learnable while the class REC of all computable functions is not learnable. Many related models have been considered, and the learning of classes of computably enumerable sets from positive data is also a topic studied from Gold's pioneering paper in 1967 onwards.
Generalizations of Turing computability.
Computability theory includes the study of generalized notions of this field such as arithmetic reducibility, hyperarithmetical reducibility and α-recursion theory, as described by Sacks in 1990. These generalized notions include reducibilities that cannot be executed by Turing machines but are nevertheless natural generalizations of Turing reducibility. These studies include approaches to investigate the analytical hierarchy which differs from the arithmetical hierarchy by permitting quantification over sets of natural numbers in addition to quantification over individual numbers. These areas are linked to the theories of well-orderings and trees; for example the set of all indices of computable (nonbinary) trees without infinite branches is complete for level formula_0 of the analytical hierarchy. Both Turing reducibility and hyperarithmetical reducibility are important in the field of effective descriptive set theory. The even more general notion of degrees of constructibility is studied in set theory.
Continuous computability theory.
Computability theory for digital computation is well developed. Computability theory is less well developed for analog computation that occurs in analog computers, analog signal processing, analog electronics, artificial neural networks and continuous-time control theory, modelled by differential equations and continuous dynamical systems. For example, models of computation such as the Blum–Shub–Smale machine model have formalized computation on the reals.
Relationships between definability, proof and computability.
There are close relationships between the Turing degree of a set of natural numbers and the difficulty (in terms of the arithmetical hierarchy) of defining that set using a first-order formula. One such relationship is made precise by Post's theorem. A weaker relationship was demonstrated by Kurt Gödel in the proofs of his completeness theorem and incompleteness theorems. Gödel's proofs show that the set of logical consequences of an effective first-order theory is a computably enumerable set, and that if the theory is strong enough this set will be uncomputable. Similarly, Tarski's indefinability theorem can be interpreted both in terms of definability and in terms of computability.
Computability theory is also linked to second-order arithmetic, a formal theory of natural numbers and sets of natural numbers. The fact that certain sets are computable or relatively computable often implies that these sets can be defined in weak subsystems of second-order arithmetic. The program of reverse mathematics uses these subsystems to measure the non-computability inherent in well known mathematical theorems. In 1999, Simpson discussed many aspects of second-order arithmetic and reverse mathematics.
The field of proof theory includes the study of second-order arithmetic and Peano arithmetic, as well as formal theories of the natural numbers weaker than Peano arithmetic. One method of classifying the strength of these weak systems is by characterizing which computable functions the system can prove to be total. For example, in primitive recursive arithmetic any computable function that is provably total is actually primitive recursive, while Peano arithmetic proves that functions like the Ackermann function, which are not primitive recursive, are total. Not every total computable function is provably total in Peano arithmetic, however; an example of such a function is provided by Goodstein's theorem.
Name.
The field of mathematical logic dealing with computability and its generalizations has been called "recursion theory" since its early days. Robert I. Soare, a prominent researcher in the field, has proposed that the field should be called "computability theory" instead. He argues that Turing's terminology using the word "computable" is more natural and more widely understood than the terminology using the word "recursive" introduced by Kleene. Many contemporary researchers have begun to use this alternate terminology. These researchers also use terminology such as "partial computable function" and "computably enumerable "("c.e.")" set" instead of "partial recursive function" and "recursively enumerable "("r.e.")" set". Not all researchers have been convinced, however, as explained by Fortnow and Simpson.
Some commentators argue that both the names "recursion theory" and "computability theory" fail to convey the fact that most of the objects studied in computability theory are not computable.
In 1967, Rogers suggested that a key property of computability theory is that its results and structures should be invariant under computable bijections on the natural numbers (this suggestion draws on the ideas of the Erlangen program in geometry). The idea is that a computable bijection merely renames numbers in a set, rather than indicating any structure in the set, much as a rotation of the Euclidean plane does not change any geometric aspect of lines drawn on it. Since any two infinite computable sets are linked by a computable bijection, this proposal identifies all the infinite computable sets (the finite computable sets are viewed as trivial). According to Rogers, the sets of interest in computability theory are the noncomputable sets, partitioned into equivalence classes by computable bijections of the natural numbers.
Professional organizations.
The main professional organization for computability theory is the "Association for Symbolic Logic", which holds several research conferences each year. The interdisciplinary research Association "Computability in Europe" ("CiE") also organizes a series of annual conferences.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\Pi^1_1"
}
] |
https://en.wikipedia.org/wiki?curid=155414
|
15541634
|
CaRMetal
|
Interactive geometry program
CaRMetal is an interactive geometry program which inherited the C.a.R. engine. The software was created by Eric Hakenholz, in Java. CaRMetal is free, under the GNU GPL license. It retains much of the functionality of C.a.R. but uses a different graphical interface which purportedly eliminates some intermediate dialogs and provides direct access to numerous effects. Constructions are done using a main palette, which contains some useful construction shortcuts in addition to the standard compass and ruler tools. These include perpendicular bisector, circle through three points, circumcircular arc through three points, and conic section through five points. Also of interest are the loci, functions, parametric curves, and implicit plots. Element thickness, color, label, and other attributes (including the so-called magnetic property) can be set using a separate panel.
CaRMetal also supports a configurable restricted construction palette and has assignment capabilities, which use an apparently unique feature called "Monkey". CaRMetal has a scripting language (JavaScript) which allows the user to build rather complex figures like fractals. CaRMetal has several locales including French, English, Spanish, German, Italian, Dutch, Portuguese and Arabic.
Didactic interest.
Anticipation.
When one chooses a tool like the parallel to a line through a point, or a circle, the intended object appears in yellow and follows the mouse movements. This allows the user to make conjectures even before the construction is finished. This constant interaction between the pupil and the object of experimentation is in line with modern theories of didactics, and in this view CaRMetal is intended to be used by students.
Amodality.
The windows which show the history, the tools palette, the properties of the selected object are around the figure and never above it. These windows are not modal windows in the sense that they never hide the construction. For example, whenever the user wants to change the color of a polygon, he sees the new color immediately.
Transformations.
When a transformation (for example a macro) has been defined, such that it transforms points into points, this transformation can also be applied to curves. Once again, this allows the learning subject to see the properties of the transformation at a glance, even before the transformation has actually been applied.
Assignments.
The workbooks (see below) can be exported as HTML files, with a restricted tools palette (for example, leaving only the intersection and circle tools lets the pupil make compass-only constructions). To create an assignment, the teacher chooses the initial objects and the objects to be created by the pupil, and writes a text explaining what is to be done. Since 2010, when the pupil has finished the construction and wants to test it, random variations are tested (with a tool called "Monkey") and a quality grade is attributed to the pupil (actually, the percentage of correct constructions amongst the variations).
Macros.
The macros can be organized in a hierarchy of folders, which makes it easy to turn CaRMetal into a tool for exploring non-Euclidean geometries.
Special features.
Workbooks.
Since 2010, CaRMetal uses a folder system allowing one to put several figures in one folder, called a "workbook". It is easy to navigate between the sheets of a workbook, to duplicate a sheet (or figure), and to merge several workbooks into one. CaRMetal allows one to include picture files and JavaScript files in a figure. The file extension of a figure is "zir", as in C.a.R. (the two programs are largely compatible), and the file structure is a meta-description of the figure in the XML language. A workbook, however, is saved as a zipped folder containing all the "zir" figures, plus the included pictures (GIF, JPEG or PNG) and a "preferences" file.
Numeric display.
It is possible to convert any numerical measure of the figure into text, for display purposes. For example, if a segment called 's1' is 4.5 units long, writing
codice_0
creates a character string which displays as "The length of the segment is 4.5". This character string can be included in the figure, but it can also be set as the "alias" of an object (for example "s1") or the name of an expression. Of course, when one of the extremities of the segment is moved with the mouse, the text is updated in real time. This is called a dynamic text.
CaRMetal uses "HotEqn" and "JLatexMath" which are LaTeX parsers, and it is possible to write LaTeX formulae inside text objects. For example, if "poly1" is a square, and one wishes to find a circle which area is the same as the square's one, one can build a text expression like this:
codice_1
This can give a text such as this: formula_0
The strength of this feature comes from the fact that it is possible to mix dynamic texts with LaTeX formulae, getting "dynamic LaTeX" (when the size of the square changes, the display changes too)!
3D.
CaRMetal allows the user to set some properties of the objects, like their color or their visibility, as conditional. Each object can also have a layer number. An important application of these features was the "2.5D" mode of CaRMetal, emulating 3D geometry. Since version 4.0, CaRMetal has had a real 3D mode which comes with a regular tetrahedron, a cube, a diamond and a regular dodecahedron. It is also possible to bind a point to the inside of a (3D) circle or polygon. This feature, inherited from C.a.R., is based on barycentric coordinates. Since version 4.1, CaRMetal supports turtle graphics (programmed in JavaScript), either in 2D or in 3D.
Magnetism.
A point can be made magnetic by giving it a distance and a list of objects; it is attracted to these objects whenever it comes sufficiently near one or several of them ("sufficiently near" means that the distance between them is less than the magnetic distance, which is a property of the point and is measured in pixels). For example, when a point is attracted to a finite set of points, which themselves are fixed, it can explore a finite geometry.
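A hypothetical sketch of such a snapping rule in JavaScript follows; this is not CaRMetal's actual implementation, and the function name and data layout are assumptions made only for the illustration:
// Snap a dragged point to the nearest attracting point if it lies within
// the magnetic distance (in pixels); otherwise leave it where it is.
function snap(dragged, attractors, magneticDist) {
  let best = null, bestD = Infinity;
  for (const a of attractors) {
    const d = Math.hypot(a.x - dragged.x, a.y - dragged.y);
    if (d < bestD) { best = a; bestD = d; }
  }
  return bestD <= magneticDist ? { x: best.x, y: best.y } : dragged;
}
console.log(snap({ x: 101, y: 99 }, [{ x: 100, y: 100 }, { x: 0, y: 0 }], 5));
// -> { x: 100, y: 100 } : the dragged point is captured by the nearby fixed point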
Network.
Since 2013, it has been possible to run one CaRMetal figure as a server (typically, the teacher's one) and several others as clients. Therefore, it is possible
JavaScript inside CaRMetal.
The script tool mixes algorithmics and geometry. Such scripting tools also exist in DrGeo, Kig and Cinderella (software). To run a script, one clicks on the icon representing a traffic light. A script can be attached to one or several points, so that any movement of one of these points runs the script. This allows some kind of inverse kinematics, much like with GeoLicia.
Variables.
To create a geometric object in JavaScript, it suffices to click on an icon representing the object. The JavaScript instruction appears in the editor, with predefined parameters. The user then only has to edit these, and does not have to rely on mnemonics. But when a geometric object is created, the variable to which the call is assigned really holds a character string containing the name of the object.
For example,
a=Point(2,3);
creates a point, usually called "P1", and the variable "a" contains the string "P1". This makes it possible to refer to the point by its name. The coordinates of the point are initialized, but the point can still be moved with the mouse. It is also possible to create a point in procedural programming with
Point("A",2,3);
In this case, the name of the point is "A" (unless an object called "A" already exists), and no variable is set to the name "A".
Input-Output.
To output a variable, there are four ways:
To input a variable, there is
In this paradigm, the variables of the program are not necessarily numeric or string variables; they can refer to graphic objects too. This is a feature shared with Kig (where the scripting language is Python) and DrGeo (where it is Scheme).
Strings.
It is also possible to set the coordinates of a point as character strings written in the language of CaRMetal. For example, to have a point "B" which follows "A" except that B's coordinates are integers (to model a Gaussian integer), one can write
a=Point("2.72","3.14");
b=Point("round(x_a)","round(y_a)");
Loops.
As an example, the Sierpinski triangle can be built up as an iterated function system with the following script, which is rather short because of the already available graphic instructions such as "MidPoint":
a=Point(-4,-2);
b=Point(4,-2);
c=Point(0,4);
m=Point(Math.random(),Math.random());
SetHide(m,true);
for(n=0;n<2000;n++){
  dice=Math.ceil(Math.random()*3); // a 3-faced dice
  switch(dice){
    case 1: p=MidPoint(a,m); break;
    case 2: p=MidPoint(b,m); break;
    case 3: p=MidPoint(c,m); break;
  }
  SetPointType(p,"point");
  m=p;
}
After the cloud of points has been built up (and even while the script is still running!) one can make "A", "B" and "C" move with the mouse (or automatically with the "Monkey"): The triangle is "dynamic"!
JavaScript objects.
CaRMetal can also use the JavaScript objects like
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mbox{The radius of the circle would be }\\sqrt{\\frac{4}{\\pi}}\\simeq 1.13"
}
] |
https://en.wikipedia.org/wiki?curid=15541634
|
15542466
|
Display contrast
|
Difference in appearance of two or more parts of a field seen simultaneously or successively
Contrast, in physics and digital imaging, is a quantifiable property used to describe the difference in appearance between elements within a visual field. It is closely linked with the perceived brightness of objects and is typically defined by specific formulas that involve the luminances of the stimuli. For example, contrast can be quantified as ΔL/L near the luminance threshold, known as Weber contrast, or as LH/LL at much higher luminances. Further, contrast can result from differences in chromaticity, which are specified by colorimetric characteristics such as the color difference ΔE in the CIE 1976 UCS (Uniform Colour Space).
Understanding contrast is crucial in fields such as imaging and display technologies, where it significantly affects the quality of visual content rendering. The contrast of electronic visual displays is influenced by the type of signal driving mechanism used, which can be either analog or digital. This mechanism directly influences how well the display renders images under varying conditions. Additionally, the contrast is affected by ambient illumination and the viewer's direction of observation, which can alter perceived brightness and color accuracy.
Luminance contrast.
The "luminance contrast" is the ratio between the higher luminance, LH, and the lower luminance, LL, that define the feature to be detected. This ratio, often called contrast ratio, CR, (actually being a luminance ratio), is often used for high luminances and for specification of the contrast of electronic visual display devices. The luminance contrast (ratio), CR, is a dimensionless number, often indicated by adding ":1" to the value of the quotient (e.g. CR = 900:1).
formula_0 with 1 ≤ CR ≤ formula_1
A "contrast ratio" of CR = 1 means no contrast.
The contrast can also be specified by the contrast modulation (or Michelson contrast), CM, defined as:
formula_2 with 0 ≤ CM ≤ 1.
CM = 0 means no contrast.
Another contrast definition, a practical application of the Weber contrast sometimes found in the electronic displays field and denoted K or CW, is:
formula_3 with 0 ≤ CW ≤ 1.
CW = 0 means no contrast, while the maximum contrast, CWmax, equals one; like the Michelson contrast, it is more commonly expressed as a percentage (100%).
A modification of Weber by Hwaung/Peli adds a glare offset to the denominator to more accurately model computer displays. Thus the modified Weber is:
formula_4
This more accurately models the loss of contrast that occurs on darker display luminance due to ambient light conditions.
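As an illustration of the definitions above, the following is a minimal Python sketch (the function name and the sample luminances are hypothetical) that computes the contrast ratio, the Michelson contrast, the Weber contrast, and the modified Weber contrast with the 0.05 glare offset from a pair of luminance values:
def contrast_metrics(l_high, l_low, glare=0.05):
    """Compute the contrast measures defined above from two luminances."""
    cr = l_high / l_low                        # contrast ratio CR, 1 <= CR
    cm = (l_high - l_low) / (l_high + l_low)   # Michelson contrast CM, 0 <= CM <= 1
    cw = (l_high - l_low) / l_high             # Weber contrast CW, 0 <= CW <= 1
    cmw = (l_high - l_low) / (l_high + glare)  # modified Weber with glare offset
    return cr, cm, cw, cmw

# Illustrative white and black luminances in cd/m^2 for a hypothetical display
print(contrast_metrics(350.0, 0.35))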
Color contrast.
Two parts of a visual field can be of equal luminance while their color (chromaticity) is different. Such a "color contrast" can be described by a distance in a suitable chromaticity system (e.g. CIE 1976 UCS, CIELAB, CIELUV).
A metric for "color contrast" often used in the electronic displays field is the color difference ΔE*uv or ΔE*ab.
Full-screen contrast.
During measurement of the luminance values used for evaluation of the contrast, the active area of the display screen is often completely set to one of the optical states for which the contrast is to be determined, e.g. completely white (R=G=B=100%) and completely black (R=G=B=0%) and the luminance is measured one after the other (time sequential).
This way of proceeding is suitable only when the display device does not exhibit "loading effects", i.e. when the luminance of a test pattern does not vary with its size. Such loading effects can be found in CRT displays and in PDPs. A small test pattern (e.g. a 4% window pattern) displayed on these devices can have significantly higher luminance than the corresponding full-screen pattern because the supply current may be limited by special electronic circuits.
Full-swing contrast.
Any two test patterns that are not completely identical can be used to evaluate a contrast between them. When one test pattern comprises the completely bright state (full-white, R=G=B=100%) and the other one the completely dark state (full-black, R=G=B=0%) the resulting contrast is called "full-swing contrast". This contrast is the highest (maximum) contrast the display can achieve. If no test pattern is specified in a data sheet together with a contrast statement, it will most probably refer to the "full-swing contrast".
Static contrast.
The standard procedure for contrast evaluation is as follows:
When luminance and/or chromaticity are measured before the optical response has settled to a stable steady state, some kind of "transient contrast" has been measured instead of the "static contrast".
Transient contrast.
When the image content is changing rapidly, e.g. during the display of video or movie content, the optical state of the display may not reach the intended stable steady state because of slow response and thus the apparent contrast is reduced if compared to the static contrast.
Dynamic contrast.
This is a technique for expanding the contrast of LCD-screens.
LCD-screens comprise a backlight unit which permanently emits light and an LCD-panel in front of it which modulates the transmission of light with respect to intensity and chromaticity. In order to increase the contrast of such LCD-screens, the backlight can be (globally) dimmed when the image to be displayed is dark (i.e. does not comprise high-intensity image data), while the image data is numerically corrected and adapted to the reduced backlight intensity. In this way the dark regions in dark images can be improved and the contrast between subsequent frames can be substantially increased. The contrast within one frame can also be expanded intentionally depending on the histogram of the image (some sporadic highlights in an image may be cut or suppressed). Considerable digital signal processing is required to implement the "dynamic contrast control technique" in a way that is pleasing to the human visual system (e.g. without inducing flicker effects).
The contrast within individual frames ("simultaneous contrast") can be increased when the backlight can be locally dimmed. This can be achieved with backlight units that are realized with arrays of LEDs. High-dynamic-range (HDR) LCDs use that technique in order to realize (static) contrast values in the range of CR > 100,000.
Dark-room contrast.
In order to measure the highest contrast possible, the dark state of the display under test must not be corrupted by light from the surroundings, since even small increments ΔL in the denominator of the ratio (LH + ΔL) / (LL + ΔL) effect a considerable reduction of that quotient. This is the reason why most contrast ratios used for advertising purposes are measured under dark-room conditions (illuminance EDR ≤ 1 lx).
All emissive electronic displays (e.g. CRTs, PDPs) theoretically do not emit light in the black state (R=G=B=0%); thus, under darkroom conditions with no ambient light reflected from the display surface into the light measuring device, the luminance of the black state is zero and the contrast becomes infinite.
When these display-screens are used outside a completely dark room, e.g. in the living room (illuminance approx. 100 lx) or in an office situation (illuminance 300 lx minimum), ambient light is reflected from the display surface, adding to the luminance of the dark state and thus reducing the contrast considerably.
A relatively new TV screen realized with OLED technology is specified with a "dark-room contrast" ratio of CR = 1,000,000 (one million). In a realistic application situation with 100 lx illuminance, the contrast ratio drops to about 350; with 300 lx it is reduced to about 120.
"Ambient contrast".
The contrast that can be experienced or measured in the presence of ambient illumination is called "ambient contrast". A special kind of "ambient contrast" is the contrast under outdoor illumination conditions, when the illuminance can be very intense (up to 100,000 lx). The contrast apparent under such conditions is called "daylight contrast".
Since the dark areas of a display are always corrupted by reflected light, reasonable "ambient contrast" values can only be maintained when the display is provided with efficient measures to reduce reflections, such as anti-reflection and/or anti-glare coatings.
Concurrent contrast.
When a test pattern is displayed that contains areas with different luminance and/or chromaticity (e.g. a checkerboard pattern), and an observer sees the different areas simultaneously, the apparent contrast is called "concurrent contrast" (the term "simultaneous contrast" is already taken for a different effect). Contrast values obtained from two subsequently displayed full-screen patterns may be different from the values evaluated from a checkerboard pattern with the same optical states. That discrepancy may be due to non-ideal properties of the display-screen (e.g. crosstalk, halation, etc.) and/or due to straylight problems in the light measuring device.
Successive contrast.
When a contrast is established between two optical states that are perceived or measured one after the other, this contrast is called "successive contrast". The contrast between two full-screen patterns (full-screen contrast) always is a "successive contrast".
Methods of measurement.
Depending on the nature of the display under test (direct-view or projection), the contrast is evaluated as a quotient of luminance values (direct-view displays) or as a quotient of illuminance values (projection displays) if the properties of the projection screen are separated from those of the projector. In the latter case, a checkerboard pattern with full-white and full-black rectangles is projected and the illuminance is measured at the center of the rectangles. The standard ANSI IT7.215-1992 defines test patterns, measurement locations, and a way to obtain the luminous flux from illuminance measurements; it does not, however, define a quantity named "ANSI lumen".
If the reflective properties of the projection screen (usually depending on direction) are included in the measurement, the luminance reflected from the centers of the rectangles has to be measured for a (set of) specific directions of observation.
The luminance, contrast and chromaticity of LCD-screens usually vary with the direction of observation (i.e. viewing direction). The variation of electro-optical characteristics with viewing direction can be measured sequentially by mechanical scanning of the viewing cone ("gonioscopic" approach) or by simultaneous measurements based on conoscopy.
|
[
{
"math_id": 0,
"text": "CR = \\frac{L_H}{L_L}"
},
{
"math_id": 1,
"text": "\\infty"
},
{
"math_id": 2,
"text": "C_M = \\frac{(L_H - L_L)}{(L_H + L_L)}"
},
{
"math_id": 3,
"text": "C_W = \\frac{(L_H - L_L)}{L_H}"
},
{
"math_id": 4,
"text": "C_mW = \\frac{(L_H - L_L)}{(L_H + 0.05)}"
}
] |
https://en.wikipedia.org/wiki?curid=15542466
|
15542628
|
97.5th percentile point
|
Number useful in statistics for analyzing a normal curve
In probability and statistics, the 97.5th percentile point of the standard normal distribution is a number commonly used for statistical calculations. The approximate value of this number is 1.96, meaning that 95% of the area under a normal curve lies within approximately 1.96 standard deviations of the mean. Because of the central limit theorem, this number is used in the construction of approximate 95% confidence intervals. Its ubiquity is due to the arbitrary but common convention of using confidence intervals with 95% probability in science and frequentist statistics, though other probabilities (90%, 99%, etc.) are sometimes used. This convention seems particularly common in medical statistics, but is also common in other areas of application, such as earth sciences, social sciences and business research.
There is no single accepted name for this number; it is also commonly referred to as the "standard normal deviate", "normal score" or "Z score" for the 97.5 percentile point, the .975 point, or just its approximate value, 1.96.
If "X" has a standard normal distribution, i.e. "X" ~ N(0,1),
formula_0
formula_1
and as the normal distribution is symmetric,
formula_2
One notation for this number is "z".975. From the probability density function of the standard normal distribution, the exact value of "z".975 is determined by
formula_3
History.
The use of this number in applied statistics can be traced to the influence of Ronald Fisher's classic textbook, Statistical Methods for Research Workers, first published in 1925:
<templatestyles src="Template:Blockquote/styles.css" />"The value for which P = .05, or 1 in 20, is 1.96 or nearly 2; it is convenient to take this point as a limit in judging whether a deviation is to be considered significant or not."
In Table 1 of the same work, he gave the more precise value 1.959964.
In 1970, the value truncated to 20 decimal places was calculated to be
1.95996 39845 40054 23552...
The commonly used approximate value of 1.96 is therefore accurate to better than one part in 50,000, which is more than adequate for applied work.
Some people even use the value of 2 in the place of 1.96, reporting a 95.4% confidence interval as a 95% confidence interval. This is not recommended but is occasionally seen.
Software functions.
The inverse of the standard normal CDF can be used to compute the value. The following is a table of function calls that return 1.96 in some commonly used applications:
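One such call, shown here as a minimal Python sketch (assuming SciPy is available), obtains the value from the inverse of the standard normal CDF:
from scipy.stats import norm

z = norm.ppf(0.975)                 # inverse standard normal CDF at 0.975
print(z)                            # 1.959963984540054
print(norm.cdf(z) - norm.cdf(-z))   # about 0.95, as expected by symmetry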
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathrm{P}(X > 1.96) \\approx 0.025, \\,"
},
{
"math_id": 1,
"text": " \\mathrm{P}(X < 1.96) \\approx 0.975, \\,"
},
{
"math_id": 2,
"text": " \\mathrm{P}(-1.96 < X < 1.96) \\approx 0.95. \\,"
},
{
"math_id": 3,
"text": " \\frac{1}{\\sqrt{2\\pi}}\\int_{-z_{.975}}^{z_{.975}} e^{-x^2/2} \\, \\mathrm{d}x = 0.975."
}
] |
https://en.wikipedia.org/wiki?curid=15542628
|
155430
|
Kleene algebra
|
Idempotent semiring endowed with a closure operator
In mathematics, a Kleene algebra ( ; named after Stephen Cole Kleene) is an idempotent (and thus partially ordered) semiring endowed with a closure operator. It generalizes the operations known from regular expressions.
Definition.
Various inequivalent definitions of Kleene algebras and related structures have been given in the literature. Here we will give the definition that seems to be the most common nowadays.
A Kleene algebra is a set "A" together with two binary operations + : "A" × "A" → "A" and · : "A" × "A" → "A" and one function * : "A" → "A", written as "a" + "b", "ab" and "a"* respectively, so that the following axioms are satisfied.
The above axioms define a semiring. We further require:
It is now possible to define a partial order ≤ on "A" by setting "a" ≤ "b" if and only if "a" + "b" = "b" (or equivalently: "a" ≤ "b" if and only if there exists an "x" in "A" such that "a" + "x" = "b"; with any definition, "a" ≤ "b" ≤ "a" implies "a" = "b"). With this order we can formulate the last four axioms about the operation *:
Intuitively, one should think of "a" + "b" as the "union" or the "least upper bound" of "a" and "b" and of "ab" as some multiplication which is monotonic, in the sense that "a" ≤ "b" implies "ax" ≤ "bx". The idea behind the star operator is "a"* = 1 + "a" + "aa" + "aaa" + ... From the standpoint of programming language theory, one may also interpret + as "choice", · as "sequencing" and * as "iteration".
Examples.
Let Σ be a finite set (an "alphabet") and let "A" be the set of all regular expressions over Σ. We consider two such regular expressions equal if they describe the same language. Then "A" forms a Kleene algebra. In fact, this is a free Kleene algebra in the sense that any equation among regular expressions follows from the Kleene algebra axioms and is therefore valid in every Kleene algebra.
Again let Σ be an alphabet. Let "A" be the set of all regular languages over Σ (or the set of all context-free languages over Σ; or the set of all recursive languages over Σ; or the set of "all" languages over Σ). Then the union (written as +) and the concatenation (written as ·) of two elements of "A" again belong to "A", and so does the Kleene star operation applied to any element of "A". We obtain a Kleene algebra "A" with 0 being the empty set and 1 being the set that only contains the empty string.
Let "M" be a monoid with identity element "e" and let "A" be the set of all subsets of "M". For two such subsets "S" and "T", let "S" + "T" be the union of "S" and "T" and set "ST" = {"st" : "s" in "S" and "t" in "T"}. "S"* is defined as the submonoid of "M" generated by "S", which can be described as {"e"} ∪ "S" ∪ "SS" ∪ "SSS" ∪ ... Then "A" forms a Kleene algebra with 0 being the empty set and 1 being {"e"}. An analogous construction can be performed for any small category.
The linear subspaces of a unital algebra over a field form a Kleene algebra. Given linear subspaces "V" and "W", define "V" + "W" to be the sum of the two subspaces, and 0 to be the trivial subspace {0}. Define "V" · "W" = span {v · w|v ∈ V, w ∈ W}, the linear span of the product of vectors from "V" and "W" respectively. Define 1 = span {I}, the span of the unit of the algebra. The closure of "V" is the direct sum of all powers of "V".
formula_0
Suppose "M" is a set and "A" is the set of all binary relations on "M". Taking + to be the union, · to be the composition and * to be the reflexive transitive closure, we obtain a Kleene algebra.
Every Boolean algebra with operations formula_1 and formula_2 turns into a Kleene algebra if we use formula_1 for +, formula_2 for · and set "a"* = 1 for all "a".
A quite different Kleene algebra can be used to implement the Floyd–Warshall algorithm (which computes the shortest path length between every two vertices of a weighted directed graph) in the same way that Kleene's algorithm computes a regular expression for every two states of a deterministic finite automaton.
Using the extended real number line, take "a" + "b" to be the minimum of "a" and "b" and "ab" to be the ordinary sum of "a" and "b" (with the sum of +∞ and −∞ being defined as +∞). "a"* is defined to be the real number zero for nonnegative "a" and −∞ for negative "a". This is a Kleene algebra with zero element +∞ and one element the real number zero.
A weighted directed graph can then be considered as a deterministic finite automaton, with each transition labelled by its weight.
For any two graph nodes (automaton states), the regular expressions computed by Kleene's algorithm evaluate, in this particular Kleene algebra, to the shortest path length between the nodes.
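A minimal Python sketch of this (min, +) Kleene algebra applied to shortest paths via a Floyd–Warshall-style iteration (the weight matrix is hypothetical and the weights are non-negative, so a* is 0, the unit of the algebra, for every entry):
import math

INF = math.inf   # the 0 (additive identity) of this Kleene algebra

def shortest_paths(weights):
    """Floyd-Warshall over the (min, +) semiring: d[i][j] ends up as the shortest path length."""
    n = len(weights)
    d = [row[:] for row in weights]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # the algebra's "a + b" is min, its "ab" is ordinary addition
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

# Hypothetical 4-node weighted digraph; d[i][i] = 0 is the 1 (multiplicative identity)
w = [[0, 3, INF, 7],
     [8, 0, 2, INF],
     [5, INF, 0, 1],
     [2, INF, INF, 0]]
print(shortest_paths(w))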
Properties.
Zero is the smallest element: 0 ≤ "a" for all "a" in "A".
The sum "a" + "b" is the least upper bound of "a" and "b": we have "a" ≤ "a" + "b" and "b" ≤ "a" + "b" and if "x" is an element of "A" with "a" ≤ "x" and "b" ≤ "x", then "a" + "b" ≤ "x". Similarly, "a"1 + ... + "a""n" is the least upper bound of the elements "a"1, ..., "a""n".
Multiplication and addition are monotonic: if "a" ≤ "b", then
for all "x" in "A".
Regarding the star operation, we have
If "A" is a Kleene algebra and "n" is a natural number, then one can consider the set M"n"("A") consisting of all "n"-by-"n" matrices with entries in "A".
Using the ordinary notions of matrix addition and multiplication, one can define a unique *-operation so that M"n"("A") becomes a Kleene algebra.
History.
Kleene introduced regular expressions and gave some of their algebraic laws.
Although he didn't define Kleene algebras, he asked for a decision procedure for equivalence of regular expressions.
Redko proved that no finite set of "equational" axioms can characterize the algebra of regular languages.
Salomaa gave complete axiomatizations of this algebra; however, they depend on problematic inference rules.
The problem of providing a complete set of axioms which would allow derivation of all equations among regular expressions was intensively studied by John Horton Conway under the name of "regular algebras"; however, the bulk of his treatment was infinitary.
In 1981, Kozen gave a complete infinitary equational deductive system for the algebra of regular languages.
In 1994, he gave the above finite axiom system, which uses unconditional and conditional equalities (considering "a" ≤ "b" as an abbreviation for "a" + "b" = "b"), and is equationally complete for the algebra of regular languages, that is, two regular expressions "a" and "b" denote the same language if and only if "a" = "b" follows from the above axioms.
Generalization (or relation to other structures).
Kleene algebras are a particular case of closed semirings, also called quasi-regular semirings or Lehmann semirings, which are semirings in which every element has at least one quasi-inverse satisfying the equation: "a"* = "aa"* + 1 = "a"*"a" + 1. This quasi-inverse is not necessarily unique. In a Kleene algebra, "a"* is the least solution to the fixpoint equations: "X" = "aX" + 1 and "X" = "Xa" + 1.
Closed semirings and Kleene algebras appear in algebraic path problems, a generalization of the shortest path problem.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "V^{*} = \\bigoplus_{i = 0}^{\\infty} V^{i}"
},
{
"math_id": 1,
"text": "\\lor"
},
{
"math_id": 2,
"text": "\\land"
}
] |
https://en.wikipedia.org/wiki?curid=155430
|
15544038
|
Hamaker constant
|
Physical constant related to Van der Waals interactions
In molecular physics, the Hamaker constant (denoted A; named for H. C. Hamaker) is a physical constant that can be defined for a van der Waals (vdW) body–body interaction:
formula_0
where "ρ"1, "ρ"2 are the number densities of the two interacting kinds of particles, and C is the London coefficient in the particle–particle pair interaction. The magnitude of this constant reflects the strength of the vdW-force between two particles, or between a particle and a substrate.
The Hamaker constant provides the means to determine the interaction parameter C from the vdW-pair potential,
formula_1
Hamaker's method and the associated Hamaker constant ignores the influence of an intervening medium between the two particles of interaction. In 1956 Lifshitz developed a description of the vdW energy but with consideration of the dielectric properties of this intervening medium (often a continuous phase).
The Van der Waals forces are effective only up to several hundred angstroms. When the interacting particles are farther apart, the dispersion potential decays faster than formula_2 this is called the retarded regime, and the result is a Casimir–Polder force.
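A minimal Python sketch of the defining relation above; the values chosen for C and the number densities are purely illustrative order-of-magnitude figures, not data for any particular material:
import math

def hamaker_constant(c_london, rho1, rho2):
    """A = pi^2 * C * rho1 * rho2 (SI units: C in J*m^6, densities in m^-3, A in J)."""
    return math.pi ** 2 * c_london * rho1 * rho2

C = 1e-77        # hypothetical London coefficient, J*m^6
rho = 3e28       # hypothetical number density, m^-3
print(hamaker_constant(C, rho, rho))   # of order 1e-19 J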
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "A=\\pi^2C\\rho_1\\rho_2,"
},
{
"math_id": 1,
"text": "w(r) = \\frac{-C}{r^6}."
},
{
"math_id": 2,
"text": "1/r^6;"
}
] |
https://en.wikipedia.org/wiki?curid=15544038
|
155443
|
Corrosion
|
Gradual destruction of materials by chemical reaction with its environment
Corrosion is a natural process that converts a refined metal into a more chemically stable oxide. It is the gradual deterioration of materials (usually a metal) by chemical or electrochemical reaction with their environment. Corrosion engineering is the field dedicated to controlling and preventing corrosion.
In the most common use of the word, this means electrochemical oxidation of metal in reaction with an oxidant such as oxygen, hydrogen, or hydroxide. Rusting, the formation of red-orange iron oxides, is a well-known example of electrochemical corrosion. This type of corrosion typically produces oxides or salts of the original metal and results in a distinctive coloration. Corrosion can also occur in materials other than metals, such as ceramics or polymers, although in this context, the term "degradation" is more common. Corrosion degrades the useful properties of materials and structures including mechanical strength, appearance, and permeability to liquids and gases. Corrosive is distinguished from caustic: the former implies mechanical degradation, the latter chemical.
Many structural alloys corrode merely from exposure to moisture in air, but the process can be strongly affected by exposure to certain substances. Corrosion can be concentrated locally to form a pit or crack, or it can extend across a wide area, more or less uniformly corroding the surface. Because corrosion is a diffusion-controlled process, it occurs on exposed surfaces. As a result, methods to reduce the activity of the exposed surface, such as passivation and chromate conversion, can increase a material's corrosion resistance. However, some corrosion mechanisms are less visible and less predictable.
The chemistry of corrosion is complex; it can be considered an electrochemical phenomenon. During corrosion at a particular spot on the surface of an object made of iron, oxidation takes place and that spot behaves as an anode. The electrons released at this anodic spot move through the metal to another spot on the object and reduce oxygen at that spot in the presence of H+ (which is believed to be available from carbonic acid (H2CO3) formed by the dissolution of carbon dioxide from the air into water under moist atmospheric conditions; hydrogen ions in water may also be available due to the dissolution of other acidic oxides from the atmosphere). This spot behaves as a cathode.
Galvanic corrosion.
Galvanic corrosion occurs when two different metals have physical or electrical contact with each other and are immersed in a common electrolyte, or when the same metal is exposed to electrolyte with different concentrations. In a galvanic couple, the more active metal (the anode) corrodes at an accelerated rate and the more noble metal (the cathode) corrodes at a slower rate. When immersed separately, each metal corrodes at its own rate. What type of metal(s) to use is readily determined by following the galvanic series. For example, zinc is often used as a sacrificial anode for steel structures. Galvanic corrosion is of major interest to the marine industry and also anywhere water (containing salts) contacts pipes or metal structures.
Factors such as relative size of anode, types of metal, and operating conditions (temperature, humidity, salinity, etc.) affect galvanic corrosion. The surface area ratio of the anode and cathode directly affects the corrosion rates of the materials. Galvanic corrosion is often prevented by the use of sacrificial anodes.
Galvanic series.
In any given environment (one standard medium is aerated, room-temperature seawater), one metal will be either more "noble" or more "active" than others, based on how strongly its ions are bound to the surface. Two metals in electrical contact share the same electrons, so that the "tug-of-war" at each surface is analogous to competition for free electrons between the two materials. Using the electrolyte as a host for the flow of ions in the same direction, the noble metal will take electrons from the active one. The resulting mass flow or electric current can be measured to establish a hierarchy of materials in the medium of interest. This hierarchy is called a "galvanic series" and is useful in predicting and understanding corrosion.
Corrosion removal.
Often, it is possible to chemically remove the products of corrosion. For example, phosphoric acid in the form of "naval jelly" is often applied to ferrous tools or surfaces to remove rust. Corrosion removal should not be confused with electropolishing, which removes some layers of the underlying metal to make a smooth surface. For example, phosphoric acid may also be used to electropolish copper but it does this by removing copper, not the products of copper corrosion.
Resistance to corrosion.
Some metals are more intrinsically resistant to corrosion than others (for some examples, see galvanic series). There are various ways of protecting metals from corrosion (oxidation) including painting, hot-dip galvanization, cathodic protection, and combinations of these.
Intrinsic chemistry.
The materials most resistant to corrosion are those for which corrosion is thermodynamically unfavorable. Any corrosion products of gold or platinum tend to decompose spontaneously into pure metal, which is why these elements can be found in metallic form on Earth and have long been valued. More common "base" metals can only be protected by more temporary means.
Some metals have naturally slow reaction kinetics, even though their corrosion is thermodynamically favorable. These include such metals as zinc, magnesium, and cadmium. While corrosion of these metals is continuous and ongoing, it happens at an acceptably slow rate. An extreme example is graphite, which releases large amounts of energy upon oxidation, but has such slow kinetics that it is effectively immune to electrochemical corrosion under normal conditions.
Passivation.
Passivation refers to the spontaneous formation of an ultrathin film of corrosion products, known as a passive film, on the metal's surface that act as a barrier to further oxidation. The chemical composition and microstructure of a passive film are different from the underlying metal. Typical passive film thickness on aluminium, stainless steels, and alloys is within 10 nanometers. The passive film is different from oxide layers that are formed upon heating and are in the micrometer thickness range – the passive film recovers if removed or damaged whereas the oxide layer does not. Passivation in natural environments such as air, water and soil at moderate pH is seen in such materials as aluminium, stainless steel, titanium, and silicon.
Passivation is primarily determined by metallurgical and environmental factors. The effect of pH is summarized using Pourbaix diagrams, but many other factors are influential. Some conditions that inhibit passivation include high pH for aluminium and zinc, low pH or the presence of chloride ions for stainless steel, high temperature for titanium (in which case the oxide dissolves into the metal, rather than the electrolyte) and fluoride ions for silicon. On the other hand, unusual conditions may result in passivation of materials that are normally unprotected, as the alkaline environment of concrete does for steel rebar. Exposure to a liquid metal such as mercury or hot solder can often circumvent passivation mechanisms.
Corrosion in passivated materials.
Passivation is extremely useful in mitigating corrosion damage, however even a high-quality alloy will corrode if its ability to form a passivating film is hindered. Proper selection of the right grade of material for the specific environment is important for the long-lasting performance of this group of materials. If breakdown occurs in the passive film due to chemical or mechanical factors, the resulting major modes of corrosion may include pitting corrosion, crevice corrosion, and stress corrosion cracking.
Pitting corrosion.
Certain conditions, such as low concentrations of oxygen or high concentrations of species such as chloride which compete as anions, can interfere with a given alloy's ability to re-form a passivating film. In the worst case, almost all of the surface will remain protected, but tiny local fluctuations will degrade the oxide film in a few critical points. Corrosion at these points will be greatly amplified, and can cause "corrosion pits" of several types, depending upon conditions. While the corrosion pits only nucleate under fairly extreme circumstances, they can continue to grow even when conditions return to normal, since the interior of a pit is naturally deprived of oxygen and locally the pH decreases to very low values and the corrosion rate increases due to an autocatalytic process. In extreme cases, the sharp tips of extremely long and narrow corrosion pits can cause stress concentration to the point that otherwise tough alloys can shatter; a thin film pierced by an invisibly small hole can hide a thumb sized pit from view. These problems are especially dangerous because they are difficult to detect before a part or structure fails. Pitting remains among the most common and damaging forms of corrosion in passivated alloys, but it can be prevented by control of the alloy's environment.
Pitting results when a small hole, or cavity, forms in the metal, usually as a result of de-passivation of a small area. This area becomes anodic, while part of the remaining metal becomes cathodic, producing a localized galvanic reaction. The deterioration of this small area penetrates the metal and can lead to failure. This form of corrosion is often difficult to detect due to the fact that it is usually relatively small and may be covered and hidden by corrosion-produced compounds.
Weld decay and knifeline attack.
Stainless steel can pose special corrosion challenges, since its passivating behavior relies on the presence of a major alloying component (chromium, at least 11.5%). Because of the elevated temperatures of welding and heat treatment, chromium carbides can form in the grain boundaries of stainless alloys. This chemical reaction robs the material of chromium in the zone near the grain boundary, making those areas much less resistant to corrosion. This creates a galvanic couple with the well-protected alloy nearby, which leads to "weld decay" (corrosion of the grain boundaries in the heat affected zones) in highly corrosive environments. This process can seriously reduce the mechanical strength of welded joints over time.
A stainless steel is said to be "sensitized" if chromium carbides are formed in the microstructure. A typical microstructure of a normalized type 304 stainless steel shows no signs of sensitization, while a heavily sensitized steel shows the presence of grain boundary precipitates. The dark lines in the sensitized microstructure are networks of chromium carbides formed along the grain boundaries.
Special alloys, either with low carbon content or with added carbon "getters" such as titanium and niobium (in types 321 and 347, respectively), can prevent this effect, but the latter require special heat treatment after welding to prevent the similar phenomenon of "knifeline attack". As its name implies, corrosion is limited to a very narrow zone adjacent to the weld, often only a few micrometers across, making it even less noticeable.
Crevice corrosion.
Crevice corrosion is a localized form of corrosion occurring in confined spaces (crevices), to which the access of the working fluid from the environment is limited. Formation of a differential aeration cell leads to corrosion inside the crevices. Examples of crevices are gaps and contact areas between parts, under gaskets or seals, inside cracks and seams, spaces filled with deposits, and under sludge piles.
Crevice corrosion is influenced by the crevice type (metal-metal, metal-non-metal), crevice geometry (size, surface finish), and metallurgical and environmental factors. The susceptibility to crevice corrosion can be evaluated with ASTM standard procedures. A critical crevice corrosion temperature is commonly used to rank a material's resistance to crevice corrosion.
Hydrogen grooving.
In the chemical industry, hydrogen grooving is the corrosion of piping at grooves created by the interaction of a corrosive agent, corroded pipe constituents, and hydrogen gas bubbles. For example, when sulfuric acid (H2SO4) flows through steel pipes, the iron in the steel reacts with the acid to form a passivation coating of iron sulfate (FeSO4) and hydrogen gas (H2). The iron sulfate coating will protect the steel from further reaction; however, if hydrogen bubbles contact this coating, it will be removed. Thus, a groove can be formed by a travelling bubble, exposing more steel to the acid and causing a vicious cycle. The grooving is exacerbated by the tendency of subsequent bubbles to follow the same path.
High-temperature corrosion.
High-temperature corrosion is chemical deterioration of a material (typically a metal) as a result of heating. This non-galvanic form of corrosion can occur when a metal is subjected to a hot atmosphere containing oxygen, sulfur ("sulfidation"), or other compounds capable of oxidizing (or assisting the oxidation of) the material concerned. For example, materials used in aerospace, power generation, and even in car engines must resist sustained periods at high temperature, during which they may be exposed to an atmosphere containing the potentially highly-corrosive products of combustion.
Some products of high-temperature corrosion can potentially be turned to the advantage of the engineer. The formation of oxides on stainless steels, for example, can provide a protective layer preventing further atmospheric attack, allowing for a material to be used for sustained periods at both room and high temperatures in hostile conditions. Such high-temperature corrosion products, in the form of compacted oxide layer glazes, prevent or reduce wear during high-temperature sliding contact of metallic (or metallic and ceramic) surfaces. Thermal oxidation is also commonly used to produce controlled oxide nanostructures, including nanowires and thin films.
Microbial corrosion.
Microbial corrosion, commonly known as microbiologically influenced corrosion (MIC), is corrosion caused or promoted by microorganisms, usually chemoautotrophs. It can apply to both metallic and non-metallic materials, in the presence or absence of oxygen. Sulfate-reducing bacteria are active in the absence of oxygen (anaerobic); they produce hydrogen sulfide, causing sulfide stress cracking. In the presence of oxygen (aerobic), some bacteria may directly oxidize iron to iron oxides and hydroxides, while other bacteria oxidize sulfur and produce sulfuric acid, causing biogenic sulfide corrosion. Concentration cells can form in the deposits of corrosion products, leading to localized corrosion.
Accelerated low-water corrosion (ALWC) is a particularly aggressive form of MIC that affects steel piles in seawater near the low water tide mark. It is characterized by an orange sludge, which smells of hydrogen sulfide when treated with acid. Corrosion rates can be very high and design corrosion allowances can soon be exceeded leading to premature failure of the steel pile. Piles that have been coated and have cathodic protection installed at the time of construction are not susceptible to ALWC. For unprotected piles, sacrificial anodes can be installed locally to the affected areas to inhibit the corrosion or a complete retrofitted sacrificial anode system can be installed. Affected areas can also be treated using cathodic protection, using either sacrificial anodes or applying current to an inert anode to produce a calcareous deposit, which will help shield the metal from further attack.
Metal dusting.
Metal dusting is a catastrophic form of corrosion that occurs when susceptible materials are exposed to environments with high carbon activities, such as synthesis gas and other high-CO environments. The corrosion manifests itself as a break-up of bulk metal to metal powder. The suspected mechanism is firstly the deposition of a graphite layer on the surface of the metal, usually from carbon monoxide (CO) in the vapor phase. This graphite layer is then thought to form metastable M3C species (where M is the metal), which migrate away from the metal surface. However, in some regimes, no M3C species is observed indicating a direct transfer of metal atoms into the graphite layer.
Protection from corrosion.
Various treatments are used to slow corrosion damage to metallic objects which are exposed to the weather, salt water, acids, or other hostile environments. Some unprotected metallic alloys are extremely vulnerable to corrosion, such as those used in neodymium magnets, which can spall or crumble into powder even in dry, temperature-stable indoor environments unless properly treated.
Surface treatments.
When surface treatments are used to reduce corrosion, great care must be taken to ensure complete coverage, without gaps, cracks, or pinhole defects. Small defects can act as an "Achilles' heel", allowing corrosion to penetrate the interior and causing extensive damage even while the outer protective layer remains apparently intact for a period of time.
Applied coatings.
Plating, painting, and the application of enamel are the most common anti-corrosion treatments. They work by providing a barrier of corrosion-resistant material between the damaging environment and the structural material. Aside from cosmetic and manufacturing issues, there may be tradeoffs in mechanical flexibility versus resistance to abrasion and high temperature. Platings usually fail only in small sections, but if the plating is more noble than the substrate (for example, chromium on steel), a galvanic couple will cause any exposed area to corrode much more rapidly than an unplated surface would. For this reason, it is often wise to plate with active metal such as zinc or cadmium. If the zinc coating is not thick enough the surface soon becomes unsightly with rusting obvious. The design life is directly related to the metal coating thickness.
Painting either by roller or brush is more desirable for tight spaces; spray would be better for larger coating areas such as steel decks and waterfront applications. Flexible polyurethane coatings, like Durabak-M26 for example, can provide an anti-corrosive seal with a highly durable slip resistant membrane. Painted coatings are relatively easy to apply and have fast drying times although temperature and humidity may cause dry times to vary.
Reactive coatings.
If the environment is controlled (especially in recirculating systems), corrosion inhibitors can often be added to it. These chemicals form an electrically insulating or chemically impermeable coating on exposed metal surfaces, to suppress electrochemical reactions. Such methods make the system less sensitive to scratches or defects in the coating, since extra inhibitors can be made available wherever metal becomes exposed. Chemicals that inhibit corrosion include some of the salts in hard water (Roman water systems are known for their mineral deposits), chromates, phosphates, polyaniline, other conducting polymers, and a wide range of specially designed chemicals that resemble surfactants (i.e., long-chain organic molecules with ionic end groups).
Anodization.
Aluminium alloys often undergo a surface treatment. Electrochemical conditions in the bath are carefully adjusted so that uniform pores, several nanometers wide, appear in the metal's oxide film. These pores allow the oxide to grow much thicker than passivating conditions would allow. At the end of the treatment, the pores are allowed to seal, forming a harder-than-usual surface layer. If this coating is scratched, normal passivation processes take over to protect the damaged area.
Anodizing is very resilient to weathering and corrosion, so it is commonly used for building facades and other areas where the surface will come into regular contact with the elements. While being resilient, it must be cleaned frequently; if left without cleaning, panel edge staining will naturally occur. In anodizing, the part to be treated forms the anode electrode of an electrolytic cell.
Biofilm coatings.
A new form of protection has been developed by applying certain species of bacterial films to the surface of metals in highly corrosive environments. This process increases the corrosion resistance substantially. Alternatively, antimicrobial-producing biofilms can be used to inhibit mild steel corrosion from sulfate-reducing bacteria.
Controlled permeability formwork.
Controlled permeability formwork (CPF) is a method of preventing the corrosion of reinforcement by naturally enhancing the durability of the cover during concrete placement. CPF has been used in environments to combat the effects of carbonation, chlorides, frost, and abrasion.
Cathodic protection.
Cathodic protection (CP) is a technique to control the corrosion of a metal surface by making it the cathode of an electrochemical cell. Cathodic protection systems are most commonly used to protect steel pipelines and tanks; steel pier piles, ships, and offshore oil platforms.
Sacrificial anode protection.
For effective CP, the potential of the steel surface is polarized (pushed) more negative until the metal surface has a uniform potential. With a uniform potential, the driving force for the corrosion reaction is halted. For galvanic CP systems, the anode material corrodes under the influence of the steel, and eventually it must be replaced. The polarization is caused by the current flow from the anode to the cathode, driven by the difference in electrode potential between the anode and the cathode. The most common sacrificial anode materials are aluminum, zinc, magnesium and related alloys. Aluminum has the highest capacity, and magnesium has the highest driving voltage and is thus used where resistance is higher. Zinc is general purpose and the basis for galvanizing.
A number of problems are associated with sacrificial anodes. Among these, from an environmental perspective, is the release of zinc, magnesium, aluminum and heavy metals such as cadmium into the environment, including seawater. From a working perspective, sacrificial anode systems are considered to be less precise than modern cathodic protection systems such as Impressed Current Cathodic Protection (ICCP) systems. Their ability to provide the requisite protection has to be checked regularly by means of underwater inspection by divers. Furthermore, as they have a finite lifespan, sacrificial anodes need to be replaced regularly over time.
Impressed current cathodic protection.
For larger structures, galvanic anodes cannot economically deliver enough current to provide complete protection. Impressed current cathodic protection (ICCP) systems use anodes connected to a DC power source (such as a cathodic protection rectifier). Anodes for ICCP systems are tubular and solid rod shapes of various specialized materials. These include high silicon cast iron, graphite, mixed metal oxide or platinum coated titanium or niobium coated rod and wires.
Anodic protection.
Anodic protection impresses anodic current on the structure to be protected (opposite to the cathodic protection). It is appropriate for metals that exhibit passivity (e.g. stainless steel) and a suitably small passive current over a wide range of potentials. It is used in aggressive environments, such as solutions of sulfuric acid. Anodic protection is an electrochemical method of corrosion protection that keeps the metal in the passive state.
Rate of corrosion.
The formation of an oxide layer is described by the Deal–Grove model, which is used to predict and control oxide layer formation in diverse situations. A simple test for measuring corrosion is the weight loss method. The method involves exposing a clean weighed piece of the metal or alloy to the corrosive environment for a specified time followed by cleaning to remove corrosion products and weighing the piece to determine the loss of weight. The rate of corrosion ("R") is calculated as
formula_0
where "k" is a constant, "W" is the weight loss of the metal in time "t", "A" is the surface area of the metal exposed, and "ρ" is the density of the metal (in g/cm3).
Other common expressions for the corrosion rate are penetration depth and change of mechanical properties.
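A minimal Python sketch of the weight-loss calculation above. The constant "k" depends on the units chosen; one common choice, with "W" in mg, "ρ" in g/cm3, "A" in cm2 and "t" in hours, is k = 87.6, which gives the rate in millimetres per year. The coupon data below are hypothetical:
def corrosion_rate_mm_per_year(weight_loss_mg, density_g_cm3, area_cm2, time_hours):
    """R = kW/(rho*A*t) with k = 87.6 for a result in mm/year."""
    k = 87.6
    return k * weight_loss_mg / (density_g_cm3 * area_cm2 * time_hours)

# Hypothetical coupon test: 120 mg lost from 10 cm^2 of steel (7.87 g/cm^3) over 30 days
print(corrosion_rate_mm_per_year(120.0, 7.87, 10.0, 30 * 24))   # roughly 0.19 mm/year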
Economic impact.
In 2002, the US Federal Highway Administration released a study titled "Corrosion Costs and Preventive Strategies in the United States" on the direct costs associated with metallic corrosion in US industry. In 1998, the total annual direct cost of corrosion in the US was roughly $276 billion (or 3.2% of the US gross domestic product at the time). Broken down into five specific industries, the economic losses are $22.6 billion in infrastructure, $17.6 billion in production and manufacturing, $29.7 billion in transportation, $20.1 billion in government, and $47.9 billion in utilities.
Rust is one of the most common causes of bridge accidents. As rust displaces a much higher volume than the originating mass of iron, its build-up can also cause failure by forcing apart adjacent components. It was the cause of the collapse of the Mianus River Bridge in 1983, when support bearings rusted internally and pushed one corner of the road slab off its support. Three drivers on the roadway at the time died as the slab fell into the river below. The following NTSB investigation showed that a drain in the road had been blocked for road re-surfacing, and had not been unblocked; as a result, runoff water penetrated the support hangers. Rust was also an important factor in the Silver Bridge disaster of 1967 in West Virginia, when a steel suspension bridge collapsed within a minute, killing 46 drivers and passengers who were on the bridge at the time.
Similarly, corrosion of concrete-covered steel and iron can cause the concrete to spall, creating severe structural problems. It is one of the most common failure modes of reinforced concrete bridges. Measuring instruments based on the half-cell potential can detect the potential corrosion spots before total failure of the concrete structure is reached.
Until 20–30 years ago, galvanized steel pipe was used extensively in the potable water systems for single- and multi-family residences as well as commercial and public construction. Today, these systems have long since consumed the protective zinc and are corroding internally, resulting in poor water quality and pipe failures. The economic impact on homeowners, condo dwellers, and the public infrastructure is estimated at $22 billion as the insurance industry braces for a wave of claims due to pipe failures.
Corrosion in nonmetals.
Most ceramic materials are almost entirely immune to corrosion. The strong chemical bonds that hold them together leave very little free chemical energy in the structure; they can be thought of as already corroded. When corrosion does occur, it is almost always a simple dissolution of the material or chemical reaction, rather than an electrochemical process. A common example of corrosion protection in ceramics is the lime added to soda–lime glass to reduce its solubility in water; though it is not nearly as soluble as pure sodium silicate, normal glass does form sub-microscopic flaws when exposed to moisture. Due to its brittleness, such flaws cause a dramatic reduction in the strength of a glass object during its first few hours at room temperature.
Corrosion of polymers.
Polymer degradation involves several complex and often poorly understood physiochemical processes. These are strikingly different from the other processes discussed here, and so the term "corrosion" is only applied to them in a loose sense of the word. Because of their large molecular weight, very little entropy can be gained by mixing a given mass of polymer with another substance, making them generally quite difficult to dissolve. While dissolution is a problem in some polymer applications, it is relatively simple to design against.
A more common and related problem is "swelling", where small molecules infiltrate the structure, reducing strength and stiffness and causing a volume change. Conversely, many polymers (notably flexible vinyl) are intentionally swelled with plasticizers, which can be leached out of the structure, causing brittleness or other undesirable changes.
The most common form of degradation, however, is a decrease in polymer chain length. Mechanisms which break polymer chains are familiar to biologists because of their effect on DNA: ionizing radiation (most commonly ultraviolet light), free radicals, and oxidizers such as oxygen, ozone, and chlorine. Ozone cracking is a well-known problem affecting natural rubber, for example. Plastic additives can slow these processes very effectively, and can be as simple as a UV-absorbing pigment (e.g., titanium dioxide or carbon black). Plastic shopping bags often do not include these additives so that they break down more easily into ultrafine particles of litter.
Corrosion of glass.
Glass is characterized by a high degree of corrosion resistance. Because of its high water resistance, it is often used as primary packaging material in the pharmaceutical industry since most medicines are preserved in a watery solution. Besides its water resistance, glass is also robust when exposed to certain chemically-aggressive liquids or gases.
Glass disease is the corrosion of silicate glasses in aqueous solutions. It is governed by two mechanisms: diffusion-controlled leaching (ion exchange) and hydrolytic dissolution of the glass network. Both mechanisms strongly depend on the pH of the contacting solution: the rate of ion exchange decreases with pH as 10^(−0.5·pH), whereas the rate of hydrolytic dissolution increases with pH as 10^(0.5·pH).
Mathematically, corrosion rates of glasses are characterized by normalized corrosion rates of elements NR"i" (g/cm2·d) which are determined as the ratio of total amount of released species into the water "Mi" (g) to the water-contacting surface area "S" (cm2), time of contact "t" (days), and weight fraction content of the element in the glass "fi":
formula_1.
The overall corrosion rate is a sum of contributions from both mechanisms (leaching + dissolution): NR"i"=NR"x""i"+NR"h". Diffusion-controlled leaching (ion exchange) is characteristic of the initial phase of corrosion and involves replacement of alkali ions in the glass by a hydronium (H3O+) ion from the solution. It causes an ion-selective depletion of near surface layers of glasses and gives an inverse-square-root dependence of corrosion rate with exposure time. The diffusion-controlled normalized leaching rate of cations from glasses (g/cm2·d) is given by:
formula_2,
where "t" is time, "D""i" is the "i"th cation effective diffusion coefficient (cm2/d), which depends on pH of contacting water as "D""i" = "D""i"0·10–pH, and "ρ" is the density of the glass (g/cm3).
Glass network dissolution is characteristic of the later phases of corrosion and causes a congruent release of ions into the water solution at a time-independent rate in dilute solutions (g/cm2·d):
formula_3,
where "rh" is the stationary hydrolysis (dissolution) rate of the glass (cm/d). In closed systems, the consumption of protons from the aqueous phase increases the pH and causes a fast transition to hydrolysis. However, a further saturation of solution with silica impedes hydrolysis and causes the glass to return to an ion-exchange; e.g., diffusion-controlled regime of corrosion.
In typical natural conditions, normalized corrosion rates of silicate glasses are very low and are of the order of 10^−7 to 10^−5 g/(cm2·d). The very high durability of silicate glasses in water makes them suitable for hazardous and nuclear waste immobilisation.
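A minimal Python sketch of the normalized-rate expressions above; every numerical input is hypothetical and serves only to illustrate the units and the pH dependence of the effective diffusion coefficient:
import math

def nr_total(released_g, area_cm2, weight_fraction, time_days):
    """NR_i = M_i / (S * f_i * t), in g/(cm^2*d)."""
    return released_g / (area_cm2 * weight_fraction * time_days)

def nr_leach(density_g_cm3, d0_cm2_per_day, ph, time_days):
    """Diffusion-controlled leaching rate: NRx_i = 2*rho*sqrt(D_i/(pi*t)), with D_i = D_i0*10**(-pH)."""
    d_i = d0_cm2_per_day * 10.0 ** (-ph)
    return 2.0 * density_g_cm3 * math.sqrt(d_i / (math.pi * time_days))

# Hypothetical example: 2e-6 g of an element (weight fraction 0.1) released from 10 cm^2 in 30 days
print(nr_total(2e-6, 10.0, 0.1, 30.0))
# Hypothetical leaching from a 2.5 g/cm^3 glass with D_i0 = 1e-9 cm^2/d at pH 7 after 30 days
print(nr_leach(2.5, 1e-9, 7.0, 30.0))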
Glass corrosion tests.
There exist numerous standardized procedures for measuring the corrosion (also called chemical durability) of glasses in neutral, basic, and acidic environments, under simulated environmental conditions, in simulated body fluid, at high temperature and pressure, and under other conditions.
The standard procedure ISO 719 describes a test of the extraction of water-soluble basic compounds under neutral conditions: 2 g of glass, particle size 300–500 μm, is kept for 60 min in 50 mL de-ionized water of grade 2 at 98 °C; 25 mL of the obtained solution is titrated against 0.01 mol/L HCl solution. The volume of HCl required for neutralization is classified according to the table below.
The standardized test ISO 719 is not suitable for glasses with poor or non-extractable alkaline components which are nevertheless still attacked by water, e.g., quartz glass, B2O3 glass or P2O5 glass.
Usual glasses are differentiated into the following classes:
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "R = \\frac{kW}{\\rho At}"
},
{
"math_id": 1,
"text": "\\mathrm{NR}_i = \\frac{M_i}{Sf_it}"
},
{
"math_id": 2,
"text": "\\mathrm{NR}x_i = 2\\rho \\sqrt{\\frac{D_i}{\\pi t}}"
},
{
"math_id": 3,
"text": "\\mathrm{NR}h = \\rho r_h "
}
] |
https://en.wikipedia.org/wiki?curid=155443
|
15544989
|
List of 8-bit computer hardware graphics
|
This is a list of notable color palettes and graphics capabilities of 8-bit computers, which were primarily manufactured from 1975 to 1985. Although some of them use RGB palettes, more commonly they have 4, 16 or more color palettes that are not bit or level combinations of RGB primaries, but fixed ROM/circuitry colors selected by the manufacturer. Due to mixed-bit architectures, the "n"-bit distinction is not always a strict categorization. Another common error is assuming that a computer's color palette represents what it can show on screen all at once. Resolution is also a crucial aspect when evaluating an 8-bit computer, as many offer different modes with different numbers of colors on screen and different resolutions, with the intent of trading off resolution for color, and vice versa.
3-bit RGB palettes.
Systems with a 3-bit RGB palette use 1 bit for each of the red, green and blue color components. That is, each component is either "on" or "off" with no intermediate states. This results in an 8-color palette ((2^1)^3 = 2^3 = 8) that has black, white, the three RGB primary colors red, green and blue, and their corresponding complementary colors cyan, magenta and yellow, as follows:
The color indices vary between implementations; therefore, index numbers are not given. A common selection has 3 bits (from LSB to MSB) directly representing the 'Red', 'Green' and 'Blue' (RGB) components in a number from 0 to 7. An alternate arrangement uses the bit order 'Blue', 'Red', 'Green' (BRG), such that the resultant palette - in numerical order - represents an increasing level of intensity on a monochrome display.
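A minimal Python sketch enumerating the eight colors under the RGB (red in the LSB) bit order described above; the 0/255 channel levels are just one conventional way of rendering the "on"/"off" components:
# Names of the eight 3-bit RGB colors, keyed by their (R, G, B) on/off bits
names = {(0, 0, 0): "black", (1, 0, 0): "red",     (0, 1, 0): "green", (1, 1, 0): "yellow",
         (0, 0, 1): "blue",  (1, 0, 1): "magenta", (0, 1, 1): "cyan",  (1, 1, 1): "white"}

for index in range(8):
    # RGB bit order: bit 0 = red, bit 1 = green, bit 2 = blue
    r, g, b = index & 1, (index >> 1) & 1, (index >> 2) & 1
    print(index, names[(r, g, b)], (255 * r, 255 * g, 255 * b))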
The 3-bit RGB palette is used by:
Specific details about implementation and actual graphical capabilities of specific systems, are listed on the next sub-sections.
World System Teletext Level 1.
World System Teletext Level 1 (1976) uses a 3-bit RGB, 8-color palette. Teletext has 40×25 characters per page, of which the first row is reserved for a page header. Every character cell has a background color and a text color. These attributes, along with others, are set through control codes which each occupy one character position. Graphics characters consisting of 2×3 cells can be used following a graphics color attribute. Up to a maximum of 72×69 blocky pixels can be used on a page.
Simulated image
BBC Micro.
BBC Micro has 8 display modes, with resolutions like 640×256 (max. 2 colors), 320×256 (max. 4 colors) and 160×256 (max. 16 logical colors). No display modes have cell attribute clashes. The palette available has only 8 physical colors, plus a further 8 flashing colors (each being one of the eight non-flashing colors alternating with its physical complement every second), and the display modes can have 16, 4 or 2 simultaneous colors.
Simulated image
BBC Micro display modes
Sinclair QL (Sinclair Quantum Leap).
On the Sinclair QL two video modes were available: 256×256 pixels with 8 RGB colors and per-pixel flashing, or 512×256 pixels with four colors (black, red, green and white). The supported colors could be stippled in 2×2 blocks to simulate up to 256 colors, an effect which did not reproduce reliably on a TV, especially over an RF connection.
Pixel aspect ratio was not square, with resulting image proportions close to 4.4:3, making the image extend into the horizontal overscan area of a CRT TV.
PC-8000 series.
The NEC PC-8000 was capable of displaying graphics with a resolution of 160x100 pixels and 8 colors.
4-bit RGBI palettes.
The 4-bit RGBI palette is similar to the 3-bit RGB palette but adds one bit for "intensity". This allows each of the colors of the 3-bit palette to have a variant (on most machines "dark" or "bright", though "saturated" or "unsaturated" was also possible), potentially giving a total of 2³ × 2 = 16 colors. Some implementations had only 15 effective colors because the "dark" and "bright" variations of black were displayed identically. Others generated a grey tone or a different color.
This 4-bit RGBI schema is used in several platforms with variations, so the table given below is a simple reference for the palette richness, and not an actual implemented palette. For this reason, no numbers are assigned to each color, and color order is arbitrary.
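As a rough illustration of the scheme (the exact levels, bit order and the handling of "bright black" all differ between machines, so these values are assumptions), one possible decode is:

def rgbi_color(index, base=170, boost=85):
    # lower three bits switch the red, green and blue primaries on or off
    r = base if index & 1 else 0
    g = base if index & 2 else 0
    b = base if index & 4 else 0
    if index & 8:          # the intensity bit brightens all three channels
        r, g, b = r + boost, g + boost, b + boost
    return (r, g, b)

With these arbitrary levels, index 8 comes out as a dark gray rather than a second black, which is one of the implementation choices mentioned above.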
Systems that used this palette scheme:
Specific details about implementation and actual graphical capabilities of specific systems, are listed on the next sub-sections.
ZX Spectrum.
The ZX Spectrum (and compatible) computers use a variation of the 4-bit RGBI palette philosophy. This gives each of the colors of the 3-bit palette a "basic" and a "bright" variant, with the exception of black. This was accomplished by using a maximum voltage level for the bright variant and a lower voltage level for the basic variant. As a result, black is the same in both variants.
The "attribute" byte associated with every 8×8 pixel cell comprises (from LSB to MSB): three bits for the background color; three bits for the foreground color; one bit for the "bright" variant for both, and one bit for the flashing effect (alternate foreground and background colors every 0.32 seconds). Thus the colors are not independently selectable as indices of a true palette (there are not color numbers 8 to 15, and the "bright" bit affects both colors within a cell). However, within a single set of 8 colors the BRG order of bits means that the colors appear in increasing order of brightness on a monochrome display.
The color number (0 to 7) can be employed with the following BASIC statements to choose:
And a value of 0 or 1 with the following statements to choose:
IBM PC/XT and compatible systems.
The original IBM PC launched in 1981 features an Intel 8088 CPU which has 8-bit data bus technology, though internally the CPU has a fully 16-bit architecture. It was offered with a Monochrome Display Adapter (MDA) or a Color Graphics Adapter (CGA). The MDA is a text mode-only display adapter, without any graphic ability beyond using the built-in code page 437 character set (which includes half-block and line-drawing characters), and employed an original IBM green monochrome monitor; only black, green and intensified green could be seen on its screen. Then, only the CGA had true graphic modes.
The IBM PC XT model, which succeeded the original PC in 1983, has an identical architecture and CPU to its predecessor, only with more expansion slots and a hard disk equipped as standard. The same two video cards, the MDA and the CGA, remained available for the PC XT, and no upgraded video hardware was offered by IBM until the EGA, which followed the introduction of the IBM Personal Computer/AT, with its full 16-bit bus design, in 1984.
CGA.
The Color Graphics Adapter (CGA) outputs what IBM called "digital RGB" (that is, the R, G, B (and I) signals from the graphics card to the monitor can each only have two states: on or off).
CGA supports a maximum of 16 colors. However, its 320×200 graphics mode is restricted to fixed palettes containing only four colors, and the 640×200 graphics mode offers only two colors. 16 simultaneous colors are only available in text mode or the "tweaked text" 160×100 mode.
A different set of 16 simultaneous colors is available on an NTSC TV or composite monitor by using artifact color techniques, with independent groups having demonstrated much larger color sets of over 256 colors; see Color Graphics Adapter#High color depth.
The CGA RGBI palette is a variant of the 4-bit RGBI schema, arranged internally like this:
Although the RGBI signals each have only two states, the CGA color monitor (usually mentioned as RGB monitor) decodes them as four level RGB signals. Darker colors are the basic RGB 2nd level signals except for brown, which is dark yellow with the level for the green component halved (1st level). Brighter colors are made by adding a uniform intensity one-level signal to every RGB signal of the dark ones, reaching the 3rd level (except dark gray which reaches only the 1st level), and in this case yellow is produced as if the brown were ordinary dark yellow.
The resulting displayed colors on RGB monitors are shown below:
A few early non-IBM-compatible CGA monitors lack the circuitry to decode the color numbers as four internal levels, and so they cannot show brown or dark gray. The above palette is displayed on such monitors as follows:
16-color palette modes.
The only full 16-color BIOS modes of the CGA are the text mode 0 (40×25) and mode 2 (80×25). Disabling the flashing attribute effect and using the IBM 437 codepage block characters 220 (DCh) ▄ (bottom half) or 223 (DFh) ▀ (upper half), the mode 2 screen buffer provides an 80×50 quasi-graphic mode.
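The 80×50 trick can be sketched as follows: each text cell is given character 223 (the upper-half block), so its attribute byte carries two independent "pixels". This is an illustration of the encoding only, not a complete display routine:

def cell_for_half_pixels(top_color, bottom_color):
    # character 223 shows the foreground colour in the upper half of the cell
    # and the background colour in the lower half; background values 8-15
    # require the blinking attribute effect to be disabled, as noted above
    char = 223
    attribute = ((bottom_color & 0x0F) << 4) | (top_color & 0x0F)
    return char, attribute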
Also, a tweak mode can be set in the CGA to give an extra, non-standard 160×100 pixels 16-color graphic mode.
4-color palette modes.
In the 320×200 graphics mode, every pixel has two bits. A value of 0 is always a selectable background-plus-border color (with the same register and/or BIOS call used for the foreground color in the 640×200 graphic mode; black by default), and the three remaining values 1 to 3 are indices to one of the predefined color palette entries.
The selection of a palette is a bit complex. There are two BIOS 320×200 CGA graphics modes: modes 4 and 5. Mode 4 has the composite color burst output enabled (in the Mode Control Register at I/O address 3D8H, bit 2 is cleared), and mode 5 has it disabled (the same bit 2 is set). Mode 5 is intended mainly for a monochrome composite video monitor, but because of a specific intentional feature of the CGA hardware, it also has a different palette for an RGBI color monitor. For mode 4, two palettes can be chosen: green/red/brown and cyan/magenta/white; the difference is the absence or presence of the blue signal in all three colors. (The palette is selected with bit 5 of the Color-Select Register at I/O address 3D9h, where the bit value 1 selects the cyan/magenta/white palette [a/k/a "palette #1" because it is the BIOS default] and 0 selects the green/red/brown palette [a/k/a "palette #2"]. This bit can be set using BIOS INT 10h function 0Bh, subfunction 1.) The palette for BIOS video mode 5 is always cyan/red/white: blue is always on, and red and green each are controlled directly by one of the two bits of the pixel color value. For each of these three palette options, a low or high intensity palette can be chosen with bit 4 of the aforementioned Color-Select Register: a value of 0 means low intensity and 1 means high intensity. (No BIOS call exists to switch between the two intensity modes.) The selected intensity setting simply controls the "I" output signal to the RGBI monitor for all colors in the palette. As a result, the green-red-brown palette appears as bright-green/bright-red/yellow when high intensity is selected. The combination of color-burst enable/disable selection, palette selection, and intensity selection yields a total of 6 different possible palettes for CGA 320×200 graphics.
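The register behaviour described above can be summarized in a short Python sketch (the function name is illustrative, and the colour names simply follow the text rather than any API):

def cga_320x200_palette(mode, palette_bit, intensity_bit):
    # BIOS mode 4 has the colour burst enabled, mode 5 has it disabled
    if mode == 5:
        base = ("cyan", "red", "white")
    elif palette_bit:                  # bit 5 of the Color-Select Register (3D9h)
        base = ("cyan", "magenta", "white")
    else:
        base = ("green", "red", "brown")
    if not intensity_bit:              # bit 4 of the same register
        return base                    # low-intensity variant
    bright = {"green": "bright green", "red": "bright red", "brown": "yellow",
              "cyan": "bright cyan", "magenta": "bright magenta",
              "white": "bright white"}
    return tuple(bright[c] for c in base)

Colour 0 of each palette remains the separately selectable background-plus-border colour.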
The sixteen combinations with the background color are:
(*) Useless due to the duplication of one of the colors.
The sixteen combinations with the background color are:
(*) Useless due to the duplication of one of the colors.
The sixteen combinations with the background color are:
(*) Useless due to the duplication of one of the colors.
The sixteen combinations with the background color are:
(*) Useless due to the duplication of one of the colors.
The sixteen combinations with the background color are:
(*) Useless due to the duplication of one of the colors.
The sixteen combinations with the background color are:
(*) Useless due to the duplication of one of the colors.
When viewed in a monochrome composite monitor, the mode 5 palettes above are shown as a (more or less brighter) 2-bit grayscale palette:
2-color palette mode.
In the 640×200 graphic mode (BIOS mode number 6), every pixel has only a single bit. The foreground color can be set, with the default being white.
The sixteen combinations are:
PCjr and Tandy 1000 series.
The IBM PCjr features a "CGA Plus" video subsystem, consisting mainly of a 6845 CRTC and an LSI video chip known as the "Video Gate Array", that can show all 16 CGA colors simultaneously on screen in the extended low-res graphic modes. The near-compatible Tandy 1000 series features almost 100% PCjr-compatible video hardware implemented in a Tandy proprietary chip. This graphics adapter is better known by the name Tandy Graphics Adapter, because the PCjr was short-lived but the Tandy 1000 line was quite popular for many years. The video mode capabilities of early-model Tandy 1000 computers are exactly the same as the PCjr's. (Later Tandy 1000 models featured "Tandy Video II" hardware which added a 640x200 16-color mode but surrendered PCjr hardware register-compatibility for CGA register-compatibility.)
The PCjr adds three video modes to the CGA mode set: 160×200 16-color "low-resolution" graphics, 320×200 16-color "medium-resolution" graphics, and 640×200 4-color "high-resolution" graphics. All PCjr/Tandy 1000 graphics modes can reassign any color index to any palette entry, allowing free selection of all palette colors in modes with fewer than 16 colors (including the plain CGA modes) and enabling color cycling effects in all modes. The PCjr also offers a graphics blink function which causes 8 colors to alternate between the low and high halves of the 16-color palette at the text blink rate. (A PCjr must be upgraded with a PCjr-specific internal 64 KB memory expansion card in order to use the latter two of these modes or any 80-column text mode. Tandy 1000 base models can use all video modes.)
Thomson.
For Thomson computers, a popular brand in France, the most common display mode is 320×200, with 8×1-pixel attribute cells limited to 2 colors. Here the intensity bit affects saturation rather than only brightness.
Thomson MO5.
The Thomson MO5 generated graphics with an EFGJ03L (or MA4Q-1200) gate array capable of a 40×25 text display and a resolution of 320×200 pixels with 16 colours (subject to proximity constraints: only two colours per 8×1 pixel area).
The colour palette has 8 basic RGB colours plus an intensity bit (called P for "Pastel") that controlled saturation ("saturated" or "pastel"). In memory, the bit order was PBGR. The desaturated colours were obtained by mixing the original RGB components within the video hardware. This is done by a PROM circuit, in which a two-bit mask controls colour mixing ratios of 0%, 33%, 66% and 100% of the saturated hue. This approach allows the display of orange instead of "desaturated white", and grey instead of "desaturated black".
According to the values specified in the computer's technical manual (“Manuel Technique du MO5”, pp. 11 & 19), the hardware palette was:
Actual colours on emulators and later models seem to have been tweaked, with normal blue and red being fully saturated.
Thomson TO7/70.
The Thomson TO7/70 graphics were similar to those of the Thomson MO5 and were generated by a Motorola MCA1300 gate array capable of a 40×25 text display and a resolution of 320×200 pixels with 16 colours (limited by 8×1 pixel colour attribute areas). The colour palette is 4-bit RGBI, with 8 basic RGB colours and an intensity bit (called P for "Pastel") that controlled saturation ("saturated" or "pastel").
Fixed color palette 1 (similar to MO5)
Fixed color palette 2
Fixed color palette 3
Mattel Aquarius.
The Mattel Aquarius computer has a text mode with 40×24 characters, that can be used as a semigraphic 80×72 low resolution graphics mode. There are spatial constraints ("attribute" areas) for different colors, consisting of 2x3 pixel groups.
The machine uses a TEA1002 graphic chip, and there are three bits for the RGB components (generating 8 primary colors at full saturation but 75% luminance - similar to the EBU colour bars) and an "intensity" bit that controls a variation of the base color (a 75% "luminance" decrease for white, creating gray; a 50% "chroma" saturation decrease for the RGB primary colors).
An alternate configuration of the chip allows it to output 95% luminance color bars - similar to BBC colour bars, more suited for usage in teletext decoders.
3 level RGB palettes.
Amstrad CPC series.
The Amstrad CPC 464/664/6128 series of computers generates the available palette with 3 levels (not bits) for every RGB primary. Thus, there are 27 different RGB combinations, from which 16 can be simultaneously displayed in low resolution mode, four in medium resolution mode and two in high resolution mode.
Simulations of actual images on the Amstrad's color monitor in each of the modes (160×200 with 16 colors; 320×200 with 4 colors; and 640×200 with 2 colors) follow. A cheaper green monochrome display was also available from the manufacturer; in this case the colors are seen as a 16-tone green scale, as shown in the last simulated image, because the monitor interprets the overall brightness of the full color signal rather than considering only the green intensity (as might, e.g., the "Philips CM8833" line).
The number in parentheses means the primary ink number for the Locomotive BASIC PEN, PAPER and INK statements (that is, "(1)" means ink #1 defaults to this color). Inks can also have a secondary color number, meaning they flash between two colors. By default, ink #14 alternates between colors 1 and 24 (blue and bright yellow) and ink #15 alternates between colors 11 and 16 (cyan-blue and pink). In addition, the paper defaults to ink #0 and the pen to ink #1, meaning yellow text on a dark blue background.
8-bit RGB palettes.
The 8-bit RGB palettes (also known as 3-3-2 bit RGB) use 3 bits for each of the red and green color components, and 2 bits for the blue component, owing to the human eye's lower sensitivity to this primary color. This results in an 8×8×4 = 256-color palette as follows:
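An illustrative conversion from a 3-3-2 index to 24-bit RGB (the R-G-B bit ordering shown here is just one convention; as noted below, the MSX2 packs its bits as green-red-blue):

def rgb332_to_rgb888(value):
    r = (value >> 5) & 0x07        # three red bits
    g = (value >> 2) & 0x07        # three green bits
    b =  value       & 0x03        # two blue bits
    return (r * 255 // 7, g * 255 // 7, b * 255 // 3)   # scale to 0-255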
Tiki 100.
The Tiki 100 uses an 8-bit RGB palette (also described as 3-3-2 bit RGB), with 3 bits for each of the red and green color components, and 2 bits for the blue component.
It supports 3 different resolutions with 256, 512 or 1024 by 256 pixels and 16, 4, or 2 colors respectively (freely selectable from the full 256-color palette).
Enterprise.
The Enterprise computer has five graphics modes: 40- and 80-column text modes, Lo-Res and Hi-Res bit-mapped graphics, and attribute graphics. Bit-mapped graphics modes allow selection between displays of 2, 4, 16 or 256 colors (from a 3-3-2 bit RGB palette), but horizontal resolution decreases as color depth increases.
Interlaced and non-interlaced modes are available. The maximum resolution is 640×512 pixels interlaced, or 640×256 pixels non-interlaced. These resolutions permit only a 2-colour display.
A 256-colour display has a maximum resolution of 80×256. The attribute graphics mode provides a 320×256 pixel resolution with 16 colors, selectable from a palette of 256.
Multiple pages can be displayed simultaneously on the screen, even if their graphics modes are different. Each page has its own palette, which allows more colors to be displayed onscreen simultaneously. The page height can be larger than the screen or the window it is displayed on. Each page is connected to a channel of the EXOS operating system, so it is possible to write on a hidden page.
MSX2.
On the MSX2 screen mode 8 is a high-resolution 256×212-pixel mode with an 8-bit color depth, giving a palette of 256 colors ("Fixed RGB" mode of the Yamaha V9938 video chip). From the MSB to LSB, there are three green bits, three red bits, and two blue bits. This mode uses half of the available colors overall, and can be considered a palette in its own right.
9-bit RGB palettes.
The MSX2 series features a Yamaha V9938 video chip, which manages a 9-bit RGB palette (512 colors in "Paletted RGB" mode) and has some extended graphic modes. Although its graphical capabilities are similar to, or even better than, those of 16-bit personal computers, the MSX2 and MSX2+ (see below) are pure 8-bit machines.
Screen mode 6 is a 512×212-pixel mode with a 4-color palette chosen from the available 512 colors.
Screen modes 5 and 7 are high-resolution 256×212-pixel and 512×212-pixel modes, respectively, with a 16-color palette chosen from the available 512 colors. Each pixel can be any of the 16 selected colors.
15-bit RGB palettes.
MSX2+.
The MSX2+ series (released in 1988) features a Yamaha V9958 video chip which manages a 15-bit RGB palette internally encoded in YJK (up to 19,268 different colors from the 32,768 theoretically possible) and has additional screen modes. Although its graphical capabilities are similar to, or even better than, those of 16-bit personal computers, the MSX2 (see above) and MSX2+ are pure 8-bit machines. YJK color encoding can be viewed as a lossy compression technique; in the RGB to YJK conversion, the average red and green levels are preserved, but blue is subsampled. Because every four pixels share a chroma value, in mode 12 it is not possible to have vertical lines of a single color. This is only possible in modes 10 and 11 thanks to the additional 16-color direct palette, which can be used to mix 16 indexed colors with a rich colorful background, in what can be considered a primitive video overlay technique.
Screen modes 10 & 11 – 12,499 YJK colors plus a 16-color palette. In this mode, the YJK technique encodes 16 levels of luminance into the four LSBs of each pixel and 64 levels of chroma, from −32 to +31, shared across every four consecutive pixels and stored in the three higher bits of the four pixels. If the fifth bit of the pixel is set, then the lower four bits of the pixel point to an index in the 16-color palette; otherwise, they specify the YJK luminance level of the pixel.
Screen mode 12 is similar to modes 10 and 11, but uses five bits to encode 32 levels of luminance for every pixel, thus it does not use an additional palette and, with YJK encoding, 19,268 different colors can be displayed simultaneously with 8-bit color depth.
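A sketch of the commonly cited V9958 YJK-to-RGB conversion (the exact formula is an assumption here, not stated in this article): each pixel carries a 5-bit luminance y, while the signed 6-bit j and k chroma values are shared by four consecutive pixels.

def yjk_to_rgb(y, j, k):
    clamp = lambda v: max(0, min(31, v))
    r = clamp(y + j)
    g = clamp(y + k)
    b = clamp((5 * y - 2 * j - k) // 4)
    return (r * 255 // 31, g * 255 // 31, b * 255 // 31)   # scale to 0-255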
18-bit RGB palettes.
FM-77 AV 40.
Fujitsu's FM-77 AV 40, released in 1986, uses an 18-bit RGB palette. Any 64,000 out of 262,144 colors can be displayed simultaneously at the 320×200 resolution, or 8 out of 262,144 colors at the 640×400 resolution.
Composite video palettes.
This section covers systems that generate color directly as composite video, closely related with display on analog CRT TVs. Many of the colors are non-standard and outside of RGB gamut, and would only display properly on NTSC hardware.
Due to the varying ways of converting a composite signal to sRGB (the standard for internet images), images in this section may be inconsistent with each other in color.
Atari 8-bit computers.
Early models of the Atari 400 and 800 use a palette of 128 colors, using 4 bits for chrominance and 3 for luminance. Screen modes may vary from 320×192 (384×240 with overscan) to 40×24, using 2 or 4 simultaneous colors, or 80×192 (96×240 with overscan) using 16 colors. After 2 years (late 1981) the CTIA graphics chip was replaced with the GTIA chip, increasing the palette to 256 colors (see CTIA and GTIA).
The ANTIC chip has an instruction set to run programs (called display lists) which permits many more colors on the screen at once. There are a number of possible software-driven graphics modes.
Apple II series.
The Apple II series features a 16-color composite video palette, based on the YIQ color space used by the NTSC color TV system.
Low-res mode palette
The 40x48 pixel lo-res mode allowed 15 different colors plus a duplicate gray.
High-res mode palette
The majority of Apple graphic applications used the hi-res mode, which had 280×192 pixels (effectively 140x192 on a color monitor). The hi-res mode allowed six colors: black, white, blue, orange, green and purple.
Systems based on MOS Technology chips.
For all the following computers from Commodore, the U and V coordinates of the composite video colors are always the cosine and the sine, respectively, of angles that are multiples of 22.5 degrees (i.e. a quarter of 90°), as the engineers were inspired by the NTSC color wheel, a radial way to lay out the U and V coordinates of points equidistant from the center of the chroma plane, which represents gray. Consumers in Europe (which uses PAL) considered the Commodore colors to be more "washed out" and less vivid than those provided by computers such as the ZX Spectrum.
VIC-20.
The VIC-20 uses a MOS Technology VIC chip which produces a 16-color YPbPr composite video palette. The palette lacks any intermediate shade of gray, and it has 5 or 9 levels of luminance.
The VIC-20 lacks any true graphic mode, but a 22×11 text mode with 200 definable characters of 8×16 bits each arranged as a matrix of 20×10 characters is usually used instead, giving a 3:2(NTSC)/5:3(PAL) pixel aspect ratio, 160×160 pixels, 8-color "high-res mode" or a 3:1(NTSC)/10:3(PAL) pixel aspect ratio, 80×160 pixels, 10-color "multicolor mode".
In the 8-color "high-res mode", every 8×8 pixels can have the background color (shared for the entire screen) or a free foreground color, both selectable among the first eight colors of the palette. In the 10-color "multicolor mode", a single pixel of every 4×8 block (a character cell) may have any of four colors: the background color, the auxiliary color (both shared for the entire screen and selectable among the entire palette), the same color as the overscan border (also a shared color) or a free foreground color, both selectable among the first eight colors of the palette.
Simulated images
On some models of the system, there are nine levels of luminance:
But on other models, there are only five levels of luminance:
Commodore 64.
The MOS Technology VIC-II is used in the Commodore 64 (and the Commodore 128 in 40-column mode), and features a 16-color YPbPr composite video palette. This palette is largely based on that of the VIC, but it replaces three colors with three levels of gray. When displayed over an analog NTSC composite video output, the actual resulting colors are more vivid.
The Commodore 64 has two graphic modes: Multicolor and High Resolution.
In the Multicolor 160×200, 16-color mode, every cell of 4×8, 2:1 aspect ratio pixels can have one of four colors: one shared with the entire screen, the two background and foreground colors of the corresponding text mode character, and one more color also stored in the color RAM area, all of them freely selectable among the entire palette.
In the High Resolution 320×200, 16-color mode, every cell of 8×8 pixels can have one of the two background and foreground colors of the corresponding text mode character, both freely selectable among the entire palette.
Simulated images
On most models of the Commodore 64, there are nine levels of luminance:
Commodore 16 and Plus/4.
The MOS Technology TED was used in the Commodore 16 and Commodore Plus/4. It has a palette of 121 YPbPr composite video colors consisting of sixteen hues (including black and white) at eight luminance levels. Black is the same color at every luminance level, so there are not 128 different colors. On the Commodore Plus/4, twelve colors formed a "default" palette of sorts accessible through keyboard shortcuts; these colors are underlined in the table below (RGB converted colors at a saturation level of 34%).
The Commodore 16 and Commodore Plus/4 have two graphic modes very similar to those of the Commodore 64: Multicolor and High Resolution.
In the Multicolor 160×200, 121-color mode, every cell of 4×8, 2:1 aspect ratio pixels can have one of four colors: two shared with the entire screen and the two background and foreground colors of the corresponding text mode character, all of them freely selectable among the entire 121-color palette (hue 0 to 15 and luminance 0 to 7 are set individually for each of them).
In the High Resolution 320×200, 121-color mode, every cell of 8×8 pixels can have one of the two background and foreground colors of the corresponding text mode character, both freely selectable among the entire 121-color palette (again setting both the hue and the luminance).
Simulated images
Notes:
Systems based on the Texas Instruments TMS9918 chip.
The TMS9918 is a Video Display Controller (VDC) manufactured by Texas Instruments and introduced in 1979. The TMS9918 and its variants were used in the Memotech MTX, MSX, Sord M5, Tatung Einstein and Tomy Tutor.
The TMS9918 chip uses a proprietary 15-color YUV-encoded composite video palette plus a "transparent" color, intended to be used by the hardware sprites and for simple video overlay. When used as an ordinary background color, it is rendered in the same color as the screen border.
Note: The colors inside the parentheses are out of RGB gamut.
MSX.
The MSX series has two graphic modes. The MSX BASIC Screen 3 mode is a low-resolution mode with 15 colors, in which every pixel can be any of the 15 available colors.
Screen mode 2 is a 256×192 high-resolution mode with 15 colors, in which every group of eight consecutive pixels can use only 2 colors.
Systems based on the Motorola 6847 chip.
The Motorola 6847 is a video display generator (VDG) first introduced by Motorola and used in the TRS-80 Color Computer, Dragon 32/64, Laser 200, TRS-80 MC-10, NEC PC-6000 series, Acorn Atom, and the APF Imagination Machine, among others.
Color is generated by the combination of three signals: formula_0 (luminance) with 6 possible levels, and formula_1 and formula_2 (chroma) with 3 possible levels, based on the YPbPr colorspace, which are then converted for output into an NTSC analog signal.
The following table shows the signal values used:
TRS-80 Color Computer.
The TRS-80 Color Computer is capable of displaying text and graphics contained within a roughly square display matrix 256 pixels wide by 192 lines high.
The hardware palette has 9 colors: black, green, yellow, blue, red, buff (almost-but-not-quite white), cyan, magenta, and orange.
All colors are available in text modes. In color modes (64×64, 128×64, 128×96, and 128×192) two four-color palettes are available:
a green border with the colors green, yellow, red, and blue;
a white border with the colors white, cyan, magenta, and orange.
NEC PC-6000 series.
Similar to other computers using the same video chip, the NEC PC-6000 series had four screen modes:
Other palettes.
Tandy Color Computer 3.
The Tandy Color Computer 3 could display all of the modes of the Tandy Color Computer 1 and 2 / TRS-80 Color Computer, except the Semigraphics modes. Taking the place of the graphics and memory hardware of the previous machines is an application-specific integrated circuit called (officially) the Advanced Color Video Chip (ACVC) or (unofficially) the Graphics Interrupt Memory Enhancer (GIME).
This chip allowed resolutions of 320x192x4, 320x192x16, 640x192x2, and 640x192x4 from a palette of 64 colors.
There are two palette modes - RGB (3 levels of intensity plus white, black and two grays) and Composite (total of 64 colors; 16 distinct chroma values with 4 levels of luminance).
SAM Coupé.
The 128-color master palette used by the SAM Coupé is produced via a unique method: it effectively contains 2 groups of 64 "RGB" colors of mildly different intensity, and is ultimately derived from a 512-color space. The closest equivalent in more popular and well-known machines would be the Commodore Amiga's 64-color "Extra Half-Brite" mode (with 32 explicitly set colors using 5 bitplanes, which are displayed at full or half brightness depending on the bit setting of a 6th plane).
Two bits are used for each of red, green and blue, giving a result similar to a normal 6-bit RGB palette (as seen on the IBM EGA or Sega Master System); the seventh bit encodes "brightness", which has an effect similar to, but more subtle than, the Spectrum's bright bit: it increases the output of all three channels by half the intensity of the lower bits of the main six. In this way it produces a genuine 128 colors, rather than 127 colors with "two blacks" and only a 7-level grayscale.
The layout of the byte that encodes each color is complicated: it resembles a Spectrum color nybble widened to a full byte, with an extra RGB bit-triplet prefixed to it and the MSB left unused.
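An illustrative decode of one SAM Coupé colour value based on the description above; the exact bit positions are an assumption (a Spectrum-style B, R, G and half-bright group in the low nybble, with a second B, R, G triplet above it), but the arithmetic shows how 2 bits per channel plus a shared half-step give 128 distinct colours:

def sam_colour(value):
    b = ((value >> 4) & 1) * 2 + ( value       & 1)
    r = ((value >> 5) & 1) * 2 + ((value >> 1) & 1)
    g = ((value >> 6) & 1) * 2 + ((value >> 2) & 1)
    half = (value >> 3) & 1            # adds half the weight of the low bit
    scale = 255 / 3.5                  # channel levels run from 0 to 3.5
    return tuple(round((c + 0.5 * half) * scale) for c in (r, g, b))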
Resulting color palette:
These colors can be used on the four available display modes:
Side-by-side comparison.
Since there are many 8-bit computers, a comparison table has been compiled to make evaluating the systems side by side easier.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "Y"
},
{
"math_id": 1,
"text": "\\phi A"
},
{
"math_id": 2,
"text": "\\phi B"
}
] |
https://en.wikipedia.org/wiki?curid=15544989
|
15546867
|
Equatorial electrojet
|
The equatorial electrojet (EEJ) is a narrow ribbon of current flowing eastward in the daytime equatorial region of the Earth's ionosphere. The abnormally large amplitude of variations in the horizontal components measured at equatorial geomagnetic observatories, a result of the EEJ, was noticed as early as 1920 at the Huancayo geomagnetic observatory. Observations by radar, rockets, satellites, and geomagnetic observatories are used to study the EEJ.
Causes.
The explanation for the existence of the equatorial electrojet lies with the anisotropic nature of ionospheric electrical conductivity and a process of self-reinforcement. Global-scale ionospheric circulation establishes a Sq (solar quiet) current system in the E region of the Earth's ionosphere (100–130 km altitude), and a primary eastwards electric field near day-side magnetic equator, where the magnetic field is horizontal and northwards. This electric field gives a primary eastwards Pedersen current. E cross B drift results in a downwards Hall current, sustaining vertical charge separation across the depth of the ionosphere, giving an upwards secondary electric field and a secondary Pedersen current that is opposite to the primary Hall current. A secondary Hall current then reinforces the original Pedersen current. At about 110 km height, the integration of the current density gives a peak current strength of about 100 kA, which supports a day-side electrojet magnetic-field enhancement of a factor of two or so.
Lunar Tide.
As the position of the sun, moon, and earth changes, so does the strength of the lunar tidal forces. Each lunar month, two spring tides occur when the sun, moon, and earth are aligned to produce a strong lunar tidal force. Likewise, two neap tides occur when the sun and moon are adjacent to one another to produce weak lunar tidal forces. The equatorial electrojet (EEJ) has an abnormally large amplitude of variations in the horizontal components due to the strength of the lunar tides. The lunar tide varies as described above and is changed by the gravitational attraction between the Moon and Earth. Because of this, the pressure and temperature of the lower atmosphere vary, and the effects propagate upward in a tidal wave form to the E region and modulate the electrodynamics.
Studies of the EEJ from satellite and ground magnetic data.
The EEJ phenomenon was first identified using geomagnetic data. The amplitude of the daily variation of the horizontal magnetic intensity (ΔH) measured at a geomagnetic observatory near the dip equator is 3–5 times higher than the variation measured in other regions of Earth. Typical diurnal data from an equatorial observatory show a peak strength of ~80 nT at 12:00 LT, with respect to the night-time level. Egedal (1947) showed that the enhancement of the quiet-day solar daily variation in ΔH (Sq(H)) lay within 5° of latitude centered on the dip equator. The proposed mechanism for this variation was a band of current about 300 km in width flowing over the dip equator.
EEJ studies from satellite data were initiated with the arrival of data from the POGO (Polar Orbiting Geophysical Observatories) series of satellites (1967–1970). The characteristic signature of the EEJ is a sharp negative V-shaped curve in the formula_0H field, attaining its minimum within 0.5° of the magnetic dip equator. The magnetic data from satellite missions like Ørsted (1999–present) and CHAMP (2000–present) have vastly improved our knowledge of the EEJ.
Recent studies have focused on the lunar-solar interaction in the EEJ. It was demonstrated that complexity is introduced into the EEJ by the interaction between lunar tidal variability in the equatorial electric field and solar-driven variability in the E-region conductivity.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\Delta"
}
] |
https://en.wikipedia.org/wiki?curid=15546867
|
15547210
|
Gyrokinetic ElectroMagnetic
|
Gyrokinetic plasma turbulence simulation
Gyrokinetic ElectroMagnetic (GEM) is a gyrokinetic plasma turbulence simulation that uses the formula_0 particle-in-cell method. It is used to study waves, instabilities and nonlinear behavior of tokamak fusion plasmas. Information about GEM can be found at the GEM web page.
There are two versions of GEM: one is a flux-tube version and the other is a global, general-geometry version. Both versions of GEM use a field-aligned coordinate system. Ions are treated kinetically but averaged over their gyro-orbits, and electrons are treated as drift-kinetic.
The modeling of the tokamak plasmas.
GEM solves the electromagnetic gyrokinetic equations which are the appropriate equations for well magnetized plasmas. The plasma is treated statistically as a kinetic distribution function. The distribution function depends on the three-dimensional position, the energy and magnetic moment. The time evolution of the distribution function is described by gyrokinetic theory which simply averages the Vlasov-Maxwell system of equations over the fast gyromotion associated with particles exhibiting cyclotron motion about the magnetic field lines. This eliminates fast time scales associated with the gyromotion and reduces the dimensionality of the problem from six down to five.
Algorithm to solve the equations.
GEM uses the delta-f particle-in-cell (PIC) plasma simulation method. An expansion about an adiabatic response is made for electrons to overcome the limit of small time step, which is caused by the fast motion of electrons. GEM uses a novel electromagnetic algorithm allowing direct numerical simulation of the electromagnetic problem at high plasma pressures. GEM uses a two-dimensional domain decomposition (see domain decomposition method) of the grid and particles to obtain good performance on massively parallel computers. A Monte Carlo method is used to model small angle Coulomb collisions.
Applications.
GEM is used to study nonlinear physics associated with tokamak plasma turbulence and transport. Tokamak turbulence driven by ion-temperature-gradient modes, electron-temperature gradient modes, trapped electron modes and micro-tearing modes has been investigated using GEM. It is also being used to look at energetic particle driven magnetohydrodynamic (see magnetohydrodynamics) eigenmodes.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\delta f"
}
] |
https://en.wikipedia.org/wiki?curid=15547210
|
15548993
|
Red blood cell indices
|
Details about red blood cells as part of a standard blood test
Red blood cell indices are blood tests that provide information about the hemoglobin content and size of red blood cells. Abnormal values indicate the presence of anemia and which type of anemia it is.
Mean corpuscular volume.
Mean corpuscular volume (MCV) is the average volume of a red blood cell and is calculated by dividing the hematocrit (Hct) by the red blood cell count (RBC concentration).
Mean corpuscular hemoglobin.
Mean corpuscular hemoglobin (MCH) is the average amount of hemoglobin (Hb) per red blood cell and is calculated by dividing the hemoglobin by the red blood cell count.
Mean corpuscular hemoglobin concentration.
Mean corpuscular hemoglobin concentration (MCHC) is the average concentration of hemoglobin per unit volume of red blood cells and is calculated by dividing the hemoglobin by the hematocrit.
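A small worked sketch of the three formulas above (the function name and units are chosen for illustration): with hemoglobin in g/L, hematocrit as a fraction, and the red cell count in cells per litre,

def red_cell_indices(hb, hct, rbc):
    mcv  = hct / rbc * 1e15    # femtolitres per cell
    mch  = hb / rbc * 1e12     # picograms per cell
    mchc = hb / hct            # g/L of packed red cells
    return mcv, mch, mchc

# e.g. Hb 150 g/L, Hct 0.45, RBC 5.0e12/L gives MCV 90 fL, MCH 30 pg, MCHC ~333 g/L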
Red blood cell distribution width.
Red blood cell distribution width (RDW or RDW-CV or RCDW and RDW-SD) is a measure of the range of variation of red blood cell (RBC) volume, yielding clues about morphology.
Erythropoietic precursor indices.
The reticulocyte production index (RPI) or corrected reticulocyte count (CRC) represents the true significance of the absolute reticulocyte count to provide some reflection of erythropoietic demand and supply. The immature reticulocyte fraction (IRF) goes a step further to cast more light on the same question.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\textit{MCV} = \\frac{\\textit{Hct}}{[\\textit{RBC}]} "
},
{
"math_id": 1,
"text": "MCH = \\frac{Hb}{RBC}"
},
{
"math_id": 2,
"text": "MCHC = \\frac{Hb}{Hct}"
}
] |
https://en.wikipedia.org/wiki?curid=15548993
|
15550368
|
List of forcing notions
|
In mathematics, forcing is a method of constructing new models "M"["G"] of set theory by adding a generic subset "G" of a poset "P" to a model "M". The poset "P" used will determine what statements hold in the new universe (the 'extension'); to force a statement of interest thus requires construction of a suitable "P". This article lists some of the posets "P" that have been used in this construction.
Amoeba forcing.
Amoeba forcing is forcing with the amoeba order, and adds a measure 1 set of random reals.
Cohen forcing.
and "p" < "q" if "p" ⊇ "q".
This poset satisfies the countable chain condition. Forcing with this poset adds ω2 distinct reals to the model; this was the poset used by Cohen in his original proof of the independence of the continuum hypothesis.
More generally, one can replace ω2 by any cardinal κ to construct a model where the continuum has size at least κ. There is no restriction on κ here, but if κ has cofinality ω, the reals end up bigger than κ.
Grigorieff forcing.
Grigorieff forcing (after Serge Grigorieff) destroys a free ultrafilter on ω.
Hechler forcing.
Hechler forcing (after Stephen Herman Hechler) is used to show that Martin's axiom implies that every family of less than "c" functions from ω to ω is eventually dominated by some such function.
"P" is the set of pairs ("s", "E") where "s" is a finite sequence of natural numbers (considered as functions from a finite ordinal to ω) and "E" is a finite subset of some fixed set "G" of functions from ω to ω. The element ("s", "E") is stronger than ("t", "F") if "t" is contained in "s", "F" is contained in "E", and if "k" is in the domain of "s" but not of "t" then "s"("k") > "h"("k") for all "h" in "F".
Jockusch–Soare forcing.
Forcing with formula_0 classes was invented by Robert Soare and Carl Jockusch to prove, among other results, the low basis theorem. Here "P" is the set of nonempty formula_0 subsets of formula_1 (meaning the sets of paths through infinite, computable subtrees of formula_2), ordered by inclusion.
Iterated forcing.
Iterated forcing with finite supports was introduced by Solovay and Tennenbaum to show the consistency of Suslin's hypothesis. Easton introduced another type of iterated forcing to determine the possible values of the continuum function at regular cardinals. Iterated forcing with countable support was investigated by Laver in his proof of the consistency of Borel's conjecture, Baumgartner, who introduced Axiom A forcing, and Shelah, who introduced proper forcing. Revised countable support iteration was introduced by Shelah to handle semi-proper forcings, such as Prikry forcing, and generalizations, notably including Namba forcing.
Laver forcing.
Laver forcing was used by Laver to show that Borel's conjecture, which says that all strong measure zero sets are countable, is consistent with ZFC. (Borel's conjecture is not consistent with the continuum hypothesis.)
A Laver tree "p" is a subset of the finite sequences of natural numbers such that
If "G" is generic for ("P", ≤), then the real {"s"("p") : p ∈ "G"}, called a "Laver-real", uniquely determines "G".
Laver forcing satisfies the Laver property.
Levy collapsing.
These posets will collapse various cardinals, in other words force them to be equal in size to smaller cardinals.
Levy collapsing is named for Azriel Levy.
Magidor forcing.
Amongst many forcing notions developed by Magidor, one of the best known is a generalization of Prikry forcing used to change the cofinality of a cardinal to a given smaller regular cardinal.
("t", "B") is stronger than ("s", "A") (("t", "B") < ("s", "A")) if "s" is an initial segment of "t", "B" is a subset of "A", and "t" is contained in "s" ∪ "A".
Mathias forcing.
Mathias forcing is named for Adrian Mathias.
Namba forcing.
Namba forcing (after Kanji Namba) is used to change the cofinality of ω2 to ω without collapsing ω1. "P" is the set of all trees formula_3 (nonempty, downward-closed sets of finite sequences of ordinals less than ω2) in which every node can be extended to a node with formula_4 immediate successors, ordered by inclusion.
Namba' forcing is the subset of "P" such that there is a node below which the ordering is linear and above which each node has formula_4 immediate successors.
Magidor and Shelah proved that if CH holds then a generic object of Namba forcing does not exist in the generic extension by Namba', and vice versa.
Prikry forcing.
In Prikry forcing (after Karel Prikrý) "P" is the set of pairs ("s", "A") where "s" is a finite subset of a fixed measurable cardinal κ, and "A" is an element of a fixed normal measure "D" on κ. A condition ("s", "A") is stronger than ("t", "B") if "t" is an initial segment of "s", "A" is contained in "B", and "s" is contained in "t" ∪ "B". This forcing notion can be used to change the cofinality of κ to ω while preserving all cardinals.
Product forcing.
Taking a product of forcing conditions is a way of simultaneously forcing all the conditions.
Radin forcing.
Radin forcing (after Lon Berk Radin), a technically involved generalization of Magidor forcing, adds a closed, unbounded subset to some regular cardinal λ.
If λ is a sufficiently large cardinal, then the forcing keeps λ regular, measurable, supercompact, etc.
Sacks forcing.
Sacks forcing has the Sacks property.
Shooting a fast club.
For "S" a stationary subset of formula_5 we set formula_6 is a closed sequence from "S" and "C" is a closed unbounded subset of formula_7, ordered by formula_8 iff formula_9 end-extends formula_10 and formula_11 and formula_12. In formula_13, we have that formula_14 is a closed unbounded subset of "S" almost contained in each club set in "V". formula_15 is preserved. This method was introduced by Ronald Jensen in order to show the consistency of the continuum hypothesis and the Suslin hypothesis.
Shooting a club with countable conditions.
For "S" a stationary subset of formula_5 we set "P" equal to the set of closed countable sequences from "S". In formula_13, we have that formula_16 is a closed unbounded subset of "S" and formula_15 is preserved, and if CH holds then all cardinals are preserved.
Shooting a club with finite conditions.
For "S" a stationary subset of formula_5 we set "P" equal to the set of finite sets of pairs of countable ordinals, such that if formula_17 and formula_18 then formula_19 and formula_20, and whenever formula_21 and formula_22 are distinct elements of "p" then either formula_23 or formula_24. "P" is ordered by reverse inclusion. In formula_13, we have that formula_25 is a closed unbounded subset of "S" and all cardinals are preserved.
Silver forcing.
Silver forcing (after Jack Howard Silver) is the set of all those partial functions from the natural numbers into {0, 1} whose domain is coinfinite; or equivalently the set of all pairs ("A", "p"), where "A" is a subset of the natural numbers with infinite complement, and "p" is a function from "A" into a fixed 2-element set. A condition "q" is stronger than a condition "p" if "q" extends "p".
Silver forcing satisfies Fusion, the Sacks property, and is minimal with respect to reals (but not minimal).
Vopěnka forcing.
Vopěnka forcing (after Petr Vopěnka) is used to generically add a set formula_26 of ordinals to formula_27.
Define first formula_28 as the set of all non-empty formula_29 subsets of the power set formula_30 of formula_31, where formula_32, ordered by inclusion: formula_33 iff formula_34.
Each condition formula_35 can be represented by a tuple formula_36 where formula_37, for all formula_38. The translation between formula_39 and its least representation is formula_29, and hence formula_28 is isomorphic to a poset formula_40 (the conditions being the minimal representations of elements of formula_28). This poset is the Vopěnka forcing for subsets of formula_31.
Defining formula_41 as the set of all representations for elements formula_35 such that formula_42, then formula_41 is formula_43-generic and formula_44.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\Pi^0_1"
},
{
"math_id": 1,
"text": "2^{\\omega}"
},
{
"math_id": 2,
"text": "2^{<\\omega}"
},
{
"math_id": 3,
"text": "T \\subseteq \\omega_2^{<\\omega}"
},
{
"math_id": 4,
"text": "\\aleph_2"
},
{
"math_id": 5,
"text": "\\omega_1"
},
{
"math_id": 6,
"text": "P=\\{\\langle \\sigma, C\\rangle\\,\\colon\\sigma"
},
{
"math_id": 7,
"text": "\\omega_1\\}"
},
{
"math_id": 8,
"text": "\\langle \\sigma',C'\\rangle\\leq\\langle\\sigma, C\\rangle"
},
{
"math_id": 9,
"text": "\\sigma'"
},
{
"math_id": 10,
"text": "\\sigma"
},
{
"math_id": 11,
"text": "C'\\subseteq C"
},
{
"math_id": 12,
"text": "\\sigma'\\subseteq\\sigma\\cup C"
},
{
"math_id": 13,
"text": "V[G]"
},
{
"math_id": 14,
"text": "\\bigcup\\{\\sigma\\,\\colon(\\exists C)(\\langle\\sigma,C\\rangle\\in G)\\}"
},
{
"math_id": 15,
"text": "\\aleph_1"
},
{
"math_id": 16,
"text": "\\bigcup G"
},
{
"math_id": 17,
"text": "p\\in P"
},
{
"math_id": 18,
"text": "\\langle\\alpha,\\beta\\rangle\\in p"
},
{
"math_id": 19,
"text": "\\alpha\\leq\\beta"
},
{
"math_id": 20,
"text": "\\alpha\\in S"
},
{
"math_id": 21,
"text": "\\langle\\alpha,\\beta\\rangle"
},
{
"math_id": 22,
"text": "\\langle\\gamma,\\delta\\rangle"
},
{
"math_id": 23,
"text": "\\beta<\\gamma"
},
{
"math_id": 24,
"text": "\\delta<\\alpha"
},
{
"math_id": 25,
"text": "\\{\\alpha\\,\\colon(\\exists\\beta)(\\langle\\alpha,\\beta\\rangle\\in\\bigcup G)\\}"
},
{
"math_id": 26,
"text": "A"
},
{
"math_id": 27,
"text": "{\\color{blue}\\text{HOD}}"
},
{
"math_id": 28,
"text": "P'"
},
{
"math_id": 29,
"text": "\\text{OD}"
},
{
"math_id": 30,
"text": "\\mathcal{P}(\\alpha)"
},
{
"math_id": 31,
"text": "\\alpha"
},
{
"math_id": 32,
"text": "A\\subseteq\\alpha"
},
{
"math_id": 33,
"text": "p\\leq q"
},
{
"math_id": 34,
"text": "p\\subseteq q"
},
{
"math_id": 35,
"text": "p\\in P'"
},
{
"math_id": 36,
"text": "(\\beta,\\gamma,\\varphi)"
},
{
"math_id": 37,
"text": "x\\in p\\Leftrightarrow V_\\beta\\models\\varphi(\\gamma,x)"
},
{
"math_id": 38,
"text": "x\\subseteq\\alpha"
},
{
"math_id": 39,
"text": "p"
},
{
"math_id": 40,
"text": "P\\in\\text{HOD}"
},
{
"math_id": 41,
"text": "G_A"
},
{
"math_id": 42,
"text": "A\\in p"
},
{
"math_id": 43,
"text": "\\text{HOD}"
},
{
"math_id": 44,
"text": "A\\in\\text{HOD}[G_A]"
}
] |
https://en.wikipedia.org/wiki?curid=15550368
|
15553696
|
Volta potential
|
The Volta potential (also called Volta potential difference, contact potential difference, outer potential difference, Δψ, or "delta psi") in electrochemistry, is the electrostatic potential difference between two metals (or one metal and one electrolyte) that are in contact and are in thermodynamic equilibrium. Specifically, it is the potential difference between a point close to the surface of the first metal and a point close to the surface of the second metal (or electrolyte).
The Volta potential is named after Alessandro Volta.
Volta potential between two metals.
When two metals are electrically isolated from each other, an arbitrary potential difference may exist between them. However, when two different neutral metal surfaces are brought into electrical contact (even indirectly, say, through a long electro-conductive wire), electrons will flow from the metal with the higher Fermi level to the metal with the lower Fermi level until the Fermi levels in the two phases are equal.
Once this has occurred, the metals are in thermodynamic equilibrium with each other (the actual number of electrons that passes between the two phases is usually small).
Just because the Fermi levels are equal, however, does not mean that the electric potentials are equal. The electric potential outside each material is controlled by its work function, and so dissimilar metals can show an electric potential difference even at equilibrium.
The Volta potential is "not" an intrinsic property of the two bulk metals under consideration, but rather is determined by work function differences between the metals' surfaces. Just like the work function, the Volta potential depends sensitively on surface state, contamination, and so on.
Measurement of Volta potential (Kelvin probe).
The Volta potential can be significant (of order 1 volt) but it cannot be measured directly by an ordinary voltmeter.
A voltmeter does not measure vacuum electrostatic potentials, but instead the difference in Fermi level between the two materials, a difference that is exactly zero at equilibrium.
The Volta potential, however, corresponds to a real electric field in the spaces between and around the two metal objects, a field generated by the accumulation of charges at their surfaces. The total charge formula_0 over each object's surface depends on the capacitance formula_1 between the two objects, by the relation formula_2, where formula_3 is the Volta potential. It follows therefore that the value of the potential can be measured by varying the capacitance between the materials by a known amount (e.g., by moving the objects further from each other) and measuring the displaced charge that flows through the wire that connects them.
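As a numerical illustration of this relation (the function and the figures are hypothetical):

def displaced_charge(volta_potential, c_before, c_after):
    # Q = C * delta_psi, so changing C at fixed delta_psi displaces
    # a charge delta_Q = delta_psi * (C_after - C_before) through the wire
    return volta_potential * (c_after - c_before)

# e.g. a 0.5 V Volta potential and a 1 pF change in capacitance
# displace 0.5e-12 C, which is what the external circuit measures.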
The Volta potential difference between a metal and an electrolyte can be measured in a similar fashion.
The Volta potential of a metal surface can be mapped on very small scales by use of a Kelvin probe force microscope, based on atomic force microscopy. Over larger areas on the order of millimeters to centimeters, a scanning Kelvin probe (SKP), which uses a wire probe of tens to hundreds of microns in size, can be used. In either case the capacitance change is not known—instead, a compensating DC voltage is added to cancel the Volta potential so that no current is induced by the change in capacitance. This compensating voltage is the negative of the Volta potential.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "Q"
},
{
"math_id": 1,
"text": "C"
},
{
"math_id": 2,
"text": "Q = C \\Delta \\psi"
},
{
"math_id": 3,
"text": "\\Delta \\psi"
}
] |
https://en.wikipedia.org/wiki?curid=15553696
|
155544
|
Charles's law
|
Relationship between volume and temperature of a gas at constant pressure
Charles's law (also known as the law of volumes) is an experimental gas law that describes how gases tend to expand when heated. A modern statement of Charles's law is:
When the pressure on a sample of a dry gas is held constant, the Kelvin temperature and the volume will be in direct proportion.
This relationship of direct proportion can be written as:
formula_0
So this means:
formula_1
where:
This law describes how a gas expands as the temperature increases; conversely, a decrease in temperature will lead to a decrease in volume. For comparing the same substance under two different sets of conditions, the law can be written as:
formula_2
The equation shows that, as absolute temperature increases, the volume of the gas also increases in proportion.
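A short numerical sketch of the two-state form of the law (the values are purely illustrative):

def charles_volume(v1, t1, t2):
    # V1/T1 = V2/T2 at constant pressure; temperatures must be absolute (kelvin)
    return v1 * t2 / t1

# e.g. 2.0 L of gas at 300 K heated to 450 K at constant pressure
# expands to 2.0 * 450 / 300 = 3.0 L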
History.
The law was named after scientist Jacques Charles, who formulated the original law in his unpublished work from the 1780s.
In two of a series of four essays presented between 2 and 30 October 1801, John Dalton demonstrated by experiment that all the gases and vapours that he studied expanded by the same amount between two fixed points of temperature. The French natural philosopher Joseph Louis Gay-Lussac confirmed the discovery in a presentation to the French National Institute on 31 Jan 1802, although he credited the discovery to unpublished work from the 1780s by Jacques Charles. The basic principles had already been described by Guillaume Amontons and Francis Hauksbee a century earlier.
Dalton was the first to demonstrate that the law applied generally to all gases, and to the vapours of volatile liquids if the temperature was well above the boiling point. Gay-Lussac concurred. With measurements only at the two thermometric fixed points of water (0°C and 100°C), Gay-Lussac was unable to show that the equation relating volume to temperature was a linear function. On mathematical grounds alone, Gay-Lussac's paper does not permit the assignment of any law stating the linear relation. Both Dalton's and Gay-Lussac's main conclusions can be expressed mathematically as:
formula_3
where V100 is the volume occupied by a given sample of gas at 100 °C; V0 is the volume occupied by the same sample of gas at 0 °C; and k is a constant which is the same for all gases at constant pressure. This equation does not contain the temperature and so is not what became known as Charles's Law. Gay-Lussac's value for k (1⁄2.6666) was identical to Dalton's earlier value for vapours and remarkably close to the present-day value of 1⁄2.7315. Gay-Lussac gave credit for this equation to unpublished statements by his fellow Republican citizen J. Charles in 1787. In the absence of a firm record, the gas law relating volume to temperature cannot be attributed to Charles.
Dalton's measurements had much greater scope in temperature than Gay-Lussac's, measuring the volume not only at the two fixed points of water but also at two intermediate points. Unaware of the inaccuracies of mercury thermometers of the time, which were divided into equal portions between the fixed points, Dalton concluded in Essay II that, in the case of vapours, “any elastic fluid expands nearly in a uniform manner into 1370 or 1380 parts by 180 degrees (Fahrenheit) of heat”, but was unable to confirm this for gases.
Relation to absolute zero.
Charles's law appears to imply that the volume of a gas will descend to zero at a certain temperature (−266.66 °C according to Gay-Lussac's figures, −273.15 °C by modern measurement). Gay-Lussac was clear in his description that the law was not applicable at low temperatures:
but I may mention that this last conclusion cannot be true except so long as the compressed vapours remain entirely in the elastic state; and this requires that their temperature shall be sufficiently elevated to enable them to resist the pressure which tends to make them assume the liquid state.
At absolute zero temperature, the gas possesses zero thermal energy and hence molecular motion ceases.
Gay-Lussac had no experience of liquid air (first prepared in 1877), although he appears to have believed (as did Dalton) that the "permanent gases" such as air and hydrogen could be liquified. Gay-Lussac had also worked with the vapours of volatile liquids in demonstrating Charles's law, and was aware that the law does not apply just above the boiling point of the liquid:
I may, however, remark that when the temperature of the ether is only a little above its boiling point, its condensation is a little more rapid than that of atmospheric air. This fact is related to a phenomenon which is exhibited by a great many bodies when passing from the liquid to the solid-state, but which is no longer sensible at temperatures a few degrees above that at which the transition occurs.
The first mention of a temperature at which the volume of a gas might descend to zero was by William Thomson (later known as Lord Kelvin) in 1848:
This is what we might anticipate when we reflect that infinite cold must correspond to a finite number of degrees of the air-thermometer below zero; since if we push the strict principle of graduation, stated above, sufficiently far, we should arrive at a point corresponding to the volume of air being reduced to nothing, which would be marked as −273° of the scale (−100/.366, if .366 be the coefficient of expansion); and therefore −273° of the air-thermometer is a point which cannot be reached at any finite temperature, however low.
However, the "absolute zero" on the Kelvin temperature scale was originally defined in terms of the second law of thermodynamics, which Thomson himself described in 1852. Thomson did not assume that this was equal to the "zero-volume point" of Charles's law, merely said that Charles's law provided the minimum temperature which could be attained. The two can be shown to be equivalent by Ludwig Boltzmann's statistical view of entropy (1870).
Charles also stated:
The volume of a fixed mass of dry gas increases or decreases by 1⁄273 times the volume at 0 °C for every 1 °C rise or fall in temperature. Thus:
formula_4
formula_5
where VT is the volume of gas at temperature T (in degrees Celsius) and V0 is the volume at 0 °C.
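For example (an illustrative calculation, not from the original sources), a sample occupying 100 mL at 0 °C would, at 20 °C and constant pressure, occupy

$$V_{20} = 100\ \text{mL}\times\left(1 + \tfrac{20}{273}\right) \approx 107.3\ \text{mL}.$$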
Relation to kinetic theory.
The kinetic theory of gases relates the macroscopic properties of gases, such as pressure and volume, to the microscopic properties of the molecules which make up the gas, particularly the mass and speed of the molecules. To derive Charles's law from kinetic theory, it is necessary to have a microscopic definition of temperature: this can be conveniently taken as the temperature being proportional to the average kinetic energy of the gas molecules, "E"k:
formula_6
Under this definition, the demonstration of Charles's law is almost trivial. The kinetic theory equivalent of the ideal gas law relates PV to the average kinetic energy:
formula_7
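A minimal sketch of the remaining step, assuming the proportionality constant is written in the conventional form $\bar{E}_{\rm k} = \tfrac{3}{2} k_{\rm B} T$: substituting this into the relation above gives

$$PV = \frac{2}{3} N \bar{E}_{\rm k} = \frac{2}{3} N \cdot \frac{3}{2} k_{\rm B} T = N k_{\rm B} T,$$

so at constant pressure and particle number, formula_0 — which is Charles's law.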
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "V \\propto T"
},
{
"math_id": 1,
"text": "\\frac{V}{T} = k, \\quad \\text{or} \\quad V=k T"
},
{
"math_id": 2,
"text": "\\frac{V_1}{T_1}=\\frac{V_2}{T_2}"
},
{
"math_id": 3,
"text": "V_{100} - V_0 = kV_0\\,"
},
{
"math_id": 4,
"text": "V_T=V_0+(\\tfrac{1}{273}\\times V_0 )\\times T"
},
{
"math_id": 5,
"text": "V_T=V_0 (1+\\tfrac{T}{273})"
},
{
"math_id": 6,
"text": "T \\propto \\bar{E_{\\rm k}}.\\,"
},
{
"math_id": 7,
"text": "PV = \\frac{2}{3} N \\bar{E_{\\rm k}}\\,"
}
] |
https://en.wikipedia.org/wiki?curid=155544
|
155562
|
Bode plot
|
Graph of the frequency response of a control system
In electrical engineering and control theory, a Bode plot is a graph of the frequency response of a system. It is usually a combination of a Bode magnitude plot, expressing the magnitude (usually in decibels) of the frequency response, and a Bode phase plot, expressing the phase shift.
As originally conceived by Hendrik Wade Bode in the 1930s, the plot is an asymptotic approximation of the frequency response, using straight line segments.
Overview.
Among his several important contributions to circuit theory and control theory, engineer Hendrik Wade Bode, while working at Bell Labs in the 1930s, devised a simple but accurate method for graphing gain and phase-shift plots. These bear his name, "Bode gain plot" and "Bode phase plot". In English, "Bode" is often pronounced "BOH-dee", although the Dutch pronunciation is closer to "BOH-duh".
Bode was faced with the problem of designing stable amplifiers with feedback for use in telephone networks. He developed the graphical design technique of the Bode plots to show the gain margin and phase margin required to maintain stability under variations in circuit characteristics caused during manufacture or during operation. The principles developed were applied to design problems of servomechanisms and other feedback control systems. The Bode plot is an example of analysis in the frequency domain.
Definition.
The Bode plot for a linear, time-invariant system with transfer function formula_0 (formula_1 being the complex frequency in the Laplace domain) consists of a magnitude plot and a phase plot.
The Bode magnitude plot is the graph of the function formula_2 of frequency formula_3 (with formula_4 being the imaginary unit). The formula_3-axis of the magnitude plot is logarithmic and the magnitude is given in decibels, i.e., a value for the magnitude formula_5 is plotted on the axis at formula_6.
The Bode phase plot is the graph of the phase, commonly expressed in degrees, of the transfer function formula_7 as a function of formula_8. The phase is plotted on the same logarithmic formula_8-axis as the magnitude plot, but the value for the phase is plotted on a linear vertical axis.
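To make these definitions concrete, the following Python sketch evaluates the two curves that make up a Bode plot (the rational transfer function and the frequency grid are arbitrary illustrative choices):

```python
import numpy as np

# Illustrative (assumed) transfer function: H(s) = 10*(s + 100) / ((s + 1)*(s + 1000))
H = lambda s: 10 * (s + 100) / ((s + 1) * (s + 1000))

w = np.logspace(-1, 5, 500)                # logarithmically spaced omega (rad/s)
Hjw = H(1j * w)                            # evaluate H at s = j*omega

magnitude_db = 20 * np.log10(np.abs(Hjw))  # Bode magnitude: 20*log10 |H(j*omega)|
phase_deg = np.degrees(np.angle(Hjw))      # Bode phase: arg H(j*omega), in degrees

# Plotted against w on a logarithmic axis, these two arrays form the Bode plot.
print(round(magnitude_db[0], 2), round(phase_deg[0], 2))
```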
Frequency response.
This section illustrates that a Bode plot is a visualization of the frequency response of a system.
Consider a linear, time-invariant system with transfer function formula_9. Assume that the system is subject to a sinusoidal input with frequency formula_8,
formula_10
that is applied persistently, i.e. from a time formula_11 to a time formula_12. The response will be of the form
formula_13
i.e., also a sinusoidal signal with amplitude formula_14 shifted by a phase formula_15 with respect to the input.
It can be shown that the amplitude of the response is formula_14 = formula_16, and that the phase shift is formula_15 = formula_17.
In summary, subjected to an input with frequency formula_8, the system responds at the same frequency with an output that is amplified by a factor formula_16 and phase-shifted by formula_17. These quantities, thus, characterize the frequency response and are shown in the Bode plot.
Rules for handmade Bode plot.
For many practical problems, the detailed Bode plots can be approximated with straight-line segments that are asymptotes of the precise response. The effect of each of the terms of a multiple element transfer function can be approximated by a set of straight lines on a Bode plot. This allows a graphical solution of the overall frequency response function. Before widespread availability of digital computers, graphical methods were extensively used to reduce the need for tedious calculation; a graphical solution could be used to identify feasible ranges of parameters for a new design.
The premise of a Bode plot is that one can consider the log of a function in the form
formula_18
as a sum of the logs of its zeros and poles:
formula_19
This idea is used explicitly in the method for drawing phase diagrams. The method for drawing amplitude plots implicitly uses this idea, but since the log of the amplitude of each pole or zero always starts at zero and only has one asymptote change (the straight lines), the method can be simplified.
Straight-line amplitude plot.
The amplitude is usually expressed in decibels, defined as formula_20. Given a transfer function in the form
formula_21
where formula_22 and formula_23 are constants, formula_24, formula_25, and formula_26 is the transfer function, the straight-line magnitude plot is drawn as follows. At every value of formula_8 where formula_27 (a zero), increase the slope of the line by formula_28 per decade; at every value of formula_8 where formula_29 (a pole), decrease the slope of the line by formula_30 per decade. The starting value of the plot depends on its boundaries and is found by substituting the lowest angular frequency of interest into the function and evaluating formula_31; the starting slope is determined, using the first two rules, by the zeros and poles lying below that frequency.
To handle irreducible 2nd-order polynomials, formula_32 can, in many cases, be approximated as formula_33.
Note that zeros and poles happen when formula_8 is "equal to" a certain formula_22 or formula_23. This is because the function in question is the magnitude of formula_34, and since it is a complex function, formula_35. Thus at any place where there is a zero or pole involving the term formula_36, the magnitude of that term is formula_37.
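A hedged Python sketch of these rules (the helper name, the example corner frequencies, and the gain are illustrative assumptions): each factor contributes a constant 20·a_n·log10 of its corner frequency below the corner and then rises (for a zero) or falls (for a pole) at 20·a_n or 20·b_n dB per decade above it, which can be written compactly with max(ω, corner):

```python
import numpy as np

def asymptotic_magnitude_db(w, gain, zeros, poles):
    """Straight-line Bode magnitude (in dB) at angular frequencies w.

    zeros and poles are lists of (corner, multiplicity) pairs, i.e. (|x_n|, a_n)
    and (|y_n|, b_n).  Each factor contributes 20*multiplicity*log10(max(w, corner)),
    added for zeros and subtracted for poles.
    """
    w = np.asarray(w, dtype=float)
    db = 20 * np.log10(abs(gain)) * np.ones_like(w)
    for corner, mult in zeros:
        db += 20 * mult * np.log10(np.maximum(w, corner))
    for corner, mult in poles:
        db -= 20 * mult * np.log10(np.maximum(w, corner))
    return db

# Illustrative system: unit gain, one zero with corner 10 rad/s, one pole with corner 1000 rad/s.
w = np.logspace(0, 5, 6)
print(asymptotic_magnitude_db(w, 1.0, zeros=[(10.0, 1)], poles=[(1000.0, 1)]))
```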
Corrected amplitude plot.
To correct a straight-line amplitude plot: at every zero, put a point formula_38 "above" the line; at every pole, put a point formula_39 "below" the line; then draw a smooth curve through those points using the straight-line segments as asymptotes.
Note that this correction method does not incorporate how to handle complex values of formula_22 or formula_23. In the case of an irreducible polynomial, the best way to correct the plot is to actually calculate the magnitude of the transfer function at the pole or zero corresponding to the irreducible polynomial, and put that dot over or under the line at that pole or zero.
Straight-line phase plot.
Given a transfer function in the same form as above,
formula_21
the idea is to draw separate plots for each pole and zero, then add them up. The actual phase curve is given by
formula_40
To draw the phase plot, for "each" pole and zero: if formula_41 is positive, start the line (with zero slope) at 0 degrees, and if formula_41 is negative, start it at −180 degrees; for a zero with formula_43, at formula_42 increase the slope by formula_44 degrees per decade, beginning one decade earlier (at formula_45); for a pole with formula_47, at formula_46 decrease the slope by formula_48 degrees per decade, beginning one decade earlier (at formula_49); "unstable" (right half-plane, formula_50) poles and zeros behave in the opposite way; flatten the slope again once the phase has changed by formula_51 degrees (for a zero) or formula_52 degrees (for a pole); finally, add the individual lines together.
Example.
To create a straight-line plot for a first-order (one-pole) low-pass filter, one considers the normalized form of the transfer function in terms of the angular frequency:
formula_53
The Bode plot is shown in Figure 1(b) above, and construction of the straight-line approximation is discussed next.
Magnitude plot.
The magnitude (in decibels) of the transfer function above (normalized and converted to angular-frequency form) is given by the decibel gain expression formula_54:
formula_55
When plotted versus input frequency formula_8 on a logarithmic scale, this magnitude can be approximated by "two lines", which form the asymptotic (approximate) magnitude Bode plot of the transfer function: for angular frequencies below formula_56 it is a horizontal line at 0 dB, and for angular frequencies above formula_56 it is a line with a slope of −20 dB per decade, i.e. formula_58.
These two lines meet at the corner frequency formula_56. From the plot, it can be seen that for frequencies well below the corner frequency, the circuit has an attenuation of 0 dB, corresponding to a unity pass-band gain, i.e. the amplitude of the filter output equals the amplitude of the input. Frequencies above the corner frequency are attenuated – the higher the frequency, the higher the attenuation.
Phase plot.
The phase Bode plot is obtained by plotting the phase angle of the transfer function given by
formula_59
versus formula_8, where formula_8 and formula_56 are the input and cutoff angular frequencies respectively. For input frequencies much lower than the corner, the ratio formula_57 is small, and therefore the phase angle is close to zero. As the ratio increases, the phase becomes more negative, reaching −45° when formula_60. For input frequencies much greater than the corner frequency, the phase angle asymptotically approaches −90°. The frequency scale for the phase plot is logarithmic.
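A short numerical check of these statements (assuming formula_56 = 1 rad/s; any value works, since only the ratio formula_57 matters):

```python
import numpy as np

w_c = 1.0                                    # assumed corner frequency (rad/s)
H_lp = lambda w: 1.0 / (1.0 + 1j * w / w_c)  # first-order low-pass H(j*omega)

for w in (0.1 * w_c, w_c, 10 * w_c):
    mag_db = 20 * np.log10(abs(H_lp(w)))
    phase = np.degrees(np.angle(H_lp(w)))
    print(f"w/w_c = {w / w_c:>4}: |H| = {mag_db:6.2f} dB, phase = {phase:6.1f} deg")

# Approximately -0.04 dB / -5.7 deg a decade below the corner,
# -3.01 dB / -45.0 deg at the corner, and -20.04 dB / -84.3 deg a decade above it.
```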
Normalized plot.
The horizontal frequency axis, in both the magnitude and phase plots, can be replaced by the normalized (nondimensional) frequency ratio formula_57. In such a case the plot is said to be normalized, and units of the frequencies are no longer used, since all input frequencies are now expressed as multiples of the cutoff frequency formula_56.
An example with zero and pole.
Figures 2-5 further illustrate construction of Bode plots. This example with both a pole and a zero shows how to use superposition. To begin, the components are presented separately.
Figure 2 shows the Bode magnitude plot for a zero and a low-pass pole, and compares the two with the Bode straight-line plots. The straight-line plots are horizontal up to the pole (zero) location and then drop (rise) at 20 dB/decade. Figure 3 does the same for the phase. The phase plots are horizontal up to a frequency factor of ten below the pole (zero) location and then drop (rise) at 45°/decade until the frequency is ten times higher than the pole (zero) location. The plots are then again horizontal at higher frequencies, at a final, total phase change of 90°.
Figure 4 and Figure 5 show how superposition (simple addition) of a pole and zero plot is done. The Bode straight-line plots again are compared with the exact plots. The zero has been moved to a higher frequency than the pole to make a more interesting example. Notice in Figure 4 that the 20 dB/decade drop of the pole is arrested by the 20 dB/decade rise of the zero, resulting in a horizontal magnitude plot for frequencies above the zero location. Notice in Figure 5 that the straight-line approximation of the phase is rather rough in the region where both pole and zero affect the phase. Notice also in Figure 5 that the range of frequencies where the phase changes in the straight-line plot is limited to frequencies a factor of ten above and below the pole (zero) location. Where the phase of the pole and the zero are both present, the straight-line phase plot is horizontal because the 45°/decade drop of the pole is arrested by the overlapping 45°/decade rise of the zero in the limited range of frequencies where both are active contributors to the phase.
Gain margin and phase margin.
Bode plots are used to assess the stability of negative-feedback amplifiers by finding the gain and phase margins of an amplifier. The notion of gain and phase margin is based upon the gain expression for a negative feedback amplifier given by
formula_61
where "A"FB is the gain of the amplifier with feedback (the "closed-loop gain"), "β" is the "feedback factor", and "A"OL is the gain without feedback (the "open-loop gain"). The gain "A"OL is a complex function of frequency, with both magnitude and phase. Examination of this relation shows the possibility of infinite gain (interpreted as instability) if the product β"A"OL = −1 (that is, the magnitude of β"A"OL is unity and its phase is −180°, the so-called Barkhausen stability criterion). Bode plots are used to determine just how close an amplifier comes to satisfying this condition.
Key to this determination are two frequencies. The first, labeled here as "f"180, is the frequency where the open-loop gain flips sign. The second, labeled here "f"0 dB, is the frequency where the magnitude of the product |β"A"OL| reaches unity, i.e. 0 dB. That is, frequency "f"180 is determined by the condition
formula_62
where vertical bars denote the magnitude of a complex number, and frequency "f"0 dB is determined by the condition
formula_63
One measure of proximity to instability is the gain margin. The Bode phase plot locates the frequency where the phase of β"A"OL reaches −180°, denoted here as frequency "f"180. Using this frequency, the Bode magnitude plot finds the magnitude of β"A"OL. If |β"A"OL|180 ≥ 1, the amplifier is unstable, as mentioned. If |β"A"OL|180 < 1, instability does not occur, and the separation in dB of the magnitude of |β"A"OL|180 from |β"A"OL| = 1 is called the "gain margin". Because a magnitude of 1 is 0 dB, the gain margin is simply one of the equivalent forms: formula_64.
Another equivalent measure of proximity to instability is the "phase margin". The Bode magnitude plot locates the frequency where the magnitude of |β"A"OL| reaches unity, denoted here as frequency "f"0 dB. Using this frequency, the Bode phase plot finds the phase of β"A"OL. If the phase of β"A"OL("f"0 dB) > −180°, the instability condition cannot be met at any frequency (because its magnitude is going to be < 1 when "f" = "f"180), and the distance of the phase at "f"0 dB in degrees above −180° is called the "phase margin".
If a simple "yes" or "no" on the stability issue is all that is needed, the amplifier is stable if "f"0 dB < "f"180. This criterion is sufficient to predict stability only for amplifiers satisfying some restrictions on their pole and zero positions (minimum phase systems). Although these restrictions usually are met, if they are not, then another method must be used, such as the Nyquist plot.
Optimal gain and phase margins may be computed using Nevanlinna–Pick interpolation theory.
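As a hedged numerical sketch of how the margins are read off (the loop gain β"A"OL below, a three-identical-pole example with a low-frequency loop gain of 2, is an arbitrary assumption and not the amplifier of the figures), one can sample β"A"OL on a dense frequency grid, locate "f"0 dB where its magnitude crosses unity and "f"180 where its phase crosses −180°, and read off the other quantity at each of those frequencies:

```python
import numpy as np

# Assumed loop gain: beta*A_OL = 2 / (1 + j*f/f_p)^3 with f_p = 10 kHz (illustrative only).
beta_AOL = lambda f: 2.0 / (1 + 1j * f / 1e4) ** 3

f = np.logspace(2, 7, 100000)                 # dense frequency grid, Hz
L = beta_AOL(f)
mag_db = 20 * np.log10(np.abs(L))
phase_deg = np.degrees(np.unwrap(np.angle(L)))

i_0dB = np.argmin(np.abs(mag_db))             # index where |beta*A_OL| is closest to 0 dB
i_180 = np.argmin(np.abs(phase_deg + 180.0))  # index where the phase is closest to -180 deg

phase_margin = phase_deg[i_0dB] + 180.0       # degrees above -180 deg at f_0dB
gain_margin = -mag_db[i_180]                  # dB below 0 dB at f_180

print(f"f_0dB ~ {f[i_0dB]:.3g} Hz, phase margin ~ {phase_margin:.0f} deg")   # ~ 68 deg
print(f"f_180 ~ {f[i_180]:.3g} Hz, gain margin  ~ {gain_margin:.0f} dB")     # ~ 12 dB
```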
Examples using Bode plots.
Figures 6 and 7 illustrate the gain behavior and terminology. For a three-pole amplifier, Figure 6 compares the Bode plot for the gain without feedback (the "open-loop" gain) "A"OL with the gain with feedback "A"FB (the "closed-loop" gain). See negative feedback amplifier for more detail.
In this example, "A"OL = 100 dB at low frequencies, and 1 / β = 58 dB. At low frequencies, "A"FB ≈ 58 dB as well.
Because the open-loop gain "A"OL is plotted and not the product β "A"OL, the condition "A"OL = 1 / β decides "f"0 dB. The feedback gain at low frequencies and for large "A"OL is "A"FB ≈ 1 / β (look at the formula for the feedback gain at the beginning of this section for the case of large gain "A"OL), so an equivalent way to find "f"0 dB is to look where the feedback gain intersects the open-loop gain. (Frequency "f"0 dB is needed later to find the phase margin.)
Near this crossover of the two gains at "f"0 dB, the Barkhausen criteria are almost satisfied in this example, and the feedback amplifier exhibits a massive peak in gain (it would be infinity if β "A"OL = −1). Beyond the unity gain frequency "f"0 dB, the open-loop gain is sufficiently small that "A"FB ≈ "A"OL (examine the formula at the beginning of this section for the case of small "A"OL).
Figure 7 shows the corresponding phase comparison: the phase of the feedback amplifier is nearly zero out to the frequency "f"180 where the open-loop gain has a phase of −180°. In this vicinity, the phase of the feedback amplifier plunges abruptly downward to become almost the same as the phase of the open-loop amplifier. (Recall, "A"FB ≈ "A"OL for small "A"OL.)
Comparing the labeled points in Figure 6 and Figure 7, it is seen that the unity gain frequency "f"0 dB and the phase-flip frequency "f"180 are very nearly equal in this amplifier, "f"180 ≈ "f"0 dB ≈ 3.332 kHz, which means the gain margin and phase margin are nearly zero. The amplifier is borderline stable.
Figures 8 and 9 illustrate the gain margin and phase margin for a different amount of feedback β. The feedback factor is chosen smaller than in Figure 6 or 7, moving the condition | β "A"OL | = 1 to lower frequency. In this example, 1 / β = 77 dB, and at low frequencies "A"FB ≈ 77 dB as well.
Figure 8 shows the gain plot. From Figure 8, the intersection of 1 / β and "A"OL occurs at "f"0 dB = 1 kHz. Notice that the peak in the gain "A"FB near "f"0 dB is almost gone.
Figure 9 is the phase plot. Using the value of "f"0 dB = 1 kHz found above from the magnitude plot of Figure 8, the open-loop phase at "f"0 dB is −135°, which is a phase margin of 45° above −180°.
Using Figure 9, for a phase of −180° the value of "f"180 = 3.332 kHz (the same result as found earlier, of course). The open-loop gain from Figure 8 at "f"180 is 58 dB, and 1 / β = 77 dB, so the gain margin is 19 dB.
Stability is not the sole criterion for amplifier response, and in many applications a more stringent demand than stability is good step response. As a rule of thumb, good step response requires a phase margin of at least 45°, and often a margin of over 70° is advocated, particularly where component variation due to manufacturing tolerances is an issue. See also the discussion of phase margin in the step response article.
Bode plotter.
The Bode plotter is an electronic instrument resembling an oscilloscope, which produces a Bode diagram, or a graph, of a circuit's voltage gain or phase shift plotted against frequency in a feedback control system or a filter. An example of this is shown in Figure 10. It is extremely useful for analyzing and testing filters and the stability of feedback control systems, through the measurement of corner (cutoff) frequencies and gain and phase margins.
This is identical to the function performed by a vector network analyzer, but the network analyzer is typically used at much higher frequencies.
For education and research purposes, plotting Bode diagrams for given transfer functions facilitates better understanding and getting faster results (see external links).
Related plots.
Two related plots that display the same data in different coordinate systems are the Nyquist plot and the Nichols plot. These are parametric plots, with frequency as the input and magnitude and phase of the frequency response as the output. The Nyquist plot displays these in polar coordinates, with magnitude mapping to radius and phase to argument (angle). The Nichols plot displays these in rectangular coordinates, on the log scale.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " H(s) "
},
{
"math_id": 1,
"text": "s"
},
{
"math_id": 2,
"text": " |H(s=j \\omega)|"
},
{
"math_id": 3,
"text": " \\omega"
},
{
"math_id": 4,
"text": "j"
},
{
"math_id": 5,
"text": "|H|"
},
{
"math_id": 6,
"text": "20 \\log_{10} |H|"
},
{
"math_id": 7,
"text": " \\arg \\left( H(s =j \\omega) \\right)"
},
{
"math_id": 8,
"text": "\\omega"
},
{
"math_id": 9,
"text": "H(s)"
},
{
"math_id": 10,
"text": "u(t) = \\sin (\\omega t),"
},
{
"math_id": 11,
"text": "-\\infty"
},
{
"math_id": 12,
"text": "t"
},
{
"math_id": 13,
"text": "y(t) = y_0 \\sin (\\omega t + \\varphi),"
},
{
"math_id": 14,
"text": "y_0"
},
{
"math_id": 15,
"text": "\\varphi"
},
{
"math_id": 16,
"text": "|H(\\mathrm{j} \\omega)|"
},
{
"math_id": 17,
"text": "\\arg H(\\mathrm{j} \\omega)"
},
{
"math_id": 18,
"text": "f(x) = A \\prod (x - c_n)^{a_n}"
},
{
"math_id": 19,
"text": "\\log(f(x)) = \\log(A) + \\sum a_n \\log(x - c_n)."
},
{
"math_id": 20,
"text": "\\text{dB} = 20 \\log_{10}(X)"
},
{
"math_id": 21,
"text": "H(s) = A \\prod \\frac{(s - x_n)^{a_n}}{(s - y_n)^{b_n}},"
},
{
"math_id": 22,
"text": "x_n"
},
{
"math_id": 23,
"text": "y_n"
},
{
"math_id": 24,
"text": "s = \\mathrm{j}\\omega"
},
{
"math_id": 25,
"text": "a_n, b_n > 0"
},
{
"math_id": 26,
"text": "H"
},
{
"math_id": 27,
"text": "\\omega = x_n"
},
{
"math_id": 28,
"text": "20 a_n\\ \\text{dB}"
},
{
"math_id": 29,
"text": "\\omega = y_n"
},
{
"math_id": 30,
"text": "20 b_n\\ \\text{dB}"
},
{
"math_id": 31,
"text": "|H(\\mathrm{j}\\omega)|"
},
{
"math_id": 32,
"text": "ax^2 + bx + c"
},
{
"math_id": 33,
"text": "(\\sqrt{a}x + \\sqrt{c})^2 "
},
{
"math_id": 34,
"text": "H(\\mathrm{j}\\omega)"
},
{
"math_id": 35,
"text": "|H(\\mathrm{j}\\omega)| = \\sqrt{H \\cdot H^*}"
},
{
"math_id": 36,
"text": "(s + x_n)"
},
{
"math_id": 37,
"text": "\\sqrt{(x_n + \\mathrm{j}\\omega)(x_n - \\mathrm{j}\\omega)} = \\sqrt{x_n^2 + \\omega^2}"
},
{
"math_id": 38,
"text": "3 a_n\\ \\text{dB}"
},
{
"math_id": 39,
"text": "3 b_n\\ \\text{dB}"
},
{
"math_id": 40,
"text": "\\varphi(s) = -\\arctan \\frac{\\operatorname{Im}[H(s)]}{\\operatorname{Re}[H(s)]}."
},
{
"math_id": 41,
"text": "A"
},
{
"math_id": 42,
"text": "\\omega = |x_n|"
},
{
"math_id": 43,
"text": "-\\operatorname{Re}(z) < 0"
},
{
"math_id": 44,
"text": "45 a_n"
},
{
"math_id": 45,
"text": "|x_n|/10"
},
{
"math_id": 46,
"text": "\\omega = |y_n|"
},
{
"math_id": 47,
"text": "-\\operatorname{Re}(p) < 0"
},
{
"math_id": 48,
"text": "45 b_n"
},
{
"math_id": 49,
"text": "|y_n|/10"
},
{
"math_id": 50,
"text": "\\operatorname{Re}(s) > 0"
},
{
"math_id": 51,
"text": "90 a_n"
},
{
"math_id": 52,
"text": "90 b_n"
},
{
"math_id": 53,
"text": "H_{\\text{lp}}(\\mathrm{j} \\omega) = \\frac{1}{1 + \\mathrm{j} \\frac{\\omega}{\\omega_\\text{c}}}."
},
{
"math_id": 54,
"text": "A_\\text{vdB}"
},
{
"math_id": 55,
"text": "\\begin{align}\nA_\\text{vdB} &= 20 \\log|H_{\\text{lp}}(\\mathrm{j}\\omega)| \\\\\n &= 20 \\log \\frac{1}{\\left| 1 + \\mathrm{j} \\frac{\\omega}{\\omega_\\text{c}} \\right|} \\\\\n &= -20 \\log \\left| 1 + \\mathrm{j} \\frac{\\omega}{\\omega_\\text{c}} \\right| \\\\\n &= -10 \\log \\left( 1 + \\frac{\\omega^2}{\\omega_\\text{c}^2} \\right).\n\\end{align}"
},
{
"math_id": 56,
"text": "\\omega_\\text{c}"
},
{
"math_id": 57,
"text": "\\omega/\\omega_\\text{c}"
},
{
"math_id": 58,
"text": "-20 \\log(\\omega/\\omega_\\text{c})"
},
{
"math_id": 59,
"text": "\\arg H_{\\text{lp}}(\\mathrm{j} \\omega) = -\\tan^{-1}\\frac{\\omega}{\\omega_\\text{c}}"
},
{
"math_id": 60,
"text": "\\omega = \\omega_\\text{c}"
},
{
"math_id": 61,
"text": "A_\\text{FB} = \\frac{A_\\text{OL}}{1 + \\beta A_\\text{OL}},"
},
{
"math_id": 62,
"text": "\\beta A_\\text{OL}(f_{180}) = -|\\beta A_\\text{OL}(f_{180})| = -|\\beta A_\\text{OL}|_{180},"
},
{
"math_id": 63,
"text": "|\\beta A_\\text{OL}(f_\\text{0 dB})| = 1."
},
{
"math_id": 64,
"text": "20 \\log_{10} |\\beta A_\\text{OL}|_{180} = 20 \\log_{10} |A_\\text{OL}| - 20 \\log_{10} \\beta^{-1}"
}
] |
https://en.wikipedia.org/wiki?curid=155562
|
15560529
|
Relative biological effectiveness
|
In radiobiology, the relative biological effectiveness (often abbreviated as RBE) is the ratio of biological effectiveness of one type of ionizing radiation relative to another, given the same amount of absorbed energy. The RBE is an empirical value that varies depending on the type of ionizing radiation, the energies involved, the biological effects being considered such as cell death, and the oxygen tension of the tissues or so-called oxygen effect.
Application.
The absorbed dose can be a poor indicator of the biological effect of radiation, as the biological effect can depend on many other factors, including the type of radiation, energy, and type of tissue. The relative biological effectiveness can help give a better measure of the biological effect of radiation. The relative biological effectiveness for radiation of type "R" on a tissue is defined as the ratio
formula_0
where "D""X" is a reference absorbed dose of radiation of a standard type "X", and "D""R" is the absorbed dose of radiation of type "R" that causes the same amount of biological damage. Both doses are quantified by the amount of energy absorbed in the cells.
Different types of radiation have different biological effectiveness mainly because they transfer their energy to the tissue in different ways. Photons and beta particles have a low linear energy transfer (LET) coefficient, meaning that they ionize atoms in the tissue that are spaced by several hundred nanometers (several tenths of a micrometer) apart, along their path. In contrast, the much more massive alpha particles and neutrons leave a denser trail of ionized atoms in their wake, spaced about one tenth of a nanometer apart (i.e., less than one-thousandth of the typical distance between ionizations for photons and beta particles).
RBEs can be used for either cancer/hereditary risks (stochastic) or for harmful tissue reactions (deterministic) effects. Tissues have different RBEs depending on the type of effect. For high LET radiation (i.e., alphas and neutrons), the RBEs for deterministic effects tend to be lower than those for stochastic effects.
The concept of RBE is relevant in medicine, such as in radiology and radiotherapy, and to the evaluation of risks and consequences of radioactive contamination in various contexts, such as nuclear power plant operation, nuclear fuel disposal and reprocessing, nuclear weapons, uranium mining, and ionizing radiation safety.
Relation to radiation weighting factors (WR).
For the purposes of computing the equivalent dose to an organ or tissue, the International Commission on Radiological Protection (ICRP) has defined a standard set of radiation weighting factors (WR), formerly termed the quality factor ("Q"). The radiation weighting factors convert absorbed dose (measured in SI units of grays or non-SI rads) into formal biological equivalent dose for radiation exposure (measured in units of sieverts or rem). However, ICRP states:
"The quantities equivalent dose and effective dose should not be used to quantify higher radiation doses or to make decisions on the need for any treatment related to tissue reactions [i.e., deterministic effects]. For such purposes, doses should be evaluated in terms of absorbed dose (in gray, Gy), and where high-LET radiations (e.g., neutrons or alpha particles) are involved, an absorbed dose, weighted with an appropriate RBE, should be used"
Radiation weighting factors are largely based on the RBE of radiation for stochastic health risks. However, for simplicity, the radiation weighting factors are not dependent on the type of tissue, and the values are conservatively chosen to be greater than the bulk of experimental values observed for the most sensitive cell types, with respect to external (external to the cell) sources. Radiation weighting factors have not been developed for internal sources of heavy ions, such as a recoil nucleus.
The ICRP 2007 standard values for relative effectiveness are given below. The higher the radiation weighting factor for a type of radiation, the more damaging it is, and this is incorporated into the calculation to convert from gray to sievert units.
Radiation weighting factors that go from physical energy to biological effect must not be confused with tissue weighting factors. The tissue weighting factors are used to convert an equivalent dose to a given tissue in the body, to an effective dose, a number that provides an estimation of total danger to the whole organism, as a result of the radiation dose to part of the body.
Experimental methods.
Typically the evaluation of relative biological effectiveness is done on various types of living cells grown in culture medium, including prokaryotic cells such as bacteria, simple eukaryotic cells such as single celled plants, and advanced eukaryotic cells derived from organisms such as rats. By irradiating batches of cells with different doses and types of radiation, a relationship between dose and the fraction of cells that die can be found, and then used to find the doses corresponding to some common survival rate. The ratio of these doses is the RBE of "R". Instead of death, the endpoint might be the fraction of cells that become unable to undergo mitotic division (or, for bacteria, binary fission), thus being effectively sterilized — even if they can still carry out other cellular functions.
The types "R" of ionizing radiation most considered in RBE evaluation are X-rays and gamma radiation (both consisting of photons), alpha radiations (helium-4 nuclei), beta radiation (electrons and positrons), neutron radiation, and heavy nuclei, including the fragments of nuclear fission. For some kinds of radiation, the RBE is strongly dependent on the energy of the individual particles.
Dependence on tissue type.
Early on it was found that X-rays, gamma rays, and beta radiation were essentially equivalent for all cell types. Therefore, the standard radiation type "X" is generally an X-ray beam with 250 keV photons or cobalt-60 gamma rays. As a result, the relative biological effectiveness of beta and photon radiation is essentially 1.
For other radiation types, the RBE is not a well-defined physical quantity, since it varies somewhat with the type of tissue and with the precise place of absorption within the cell. Thus, for example, the RBE for alpha radiation is 2–3 when measured on bacteria, 4–6 for simple eukaryotic cells, and 6–8 for higher eukaryotic cells. According to one source it may be much higher (6500 with X rays as the reference) on ovocytes. The RBE of neutrons is 4–6 for bacteria, 8–12 for simple eukaryotic cells, and 12–16 for higher eukaryotic cells.
Dependence on source location.
In the early experiments, the sources of radiation were all external to the cells that were irradiated. However, since alpha particles cannot traverse the outermost dead layer of human skin, they can do significant damage only if they come from the decay of atoms inside the body. Since the range of an alpha particle is typically about the diameter of a single eukaryotic cell, the precise location of the emitting atom in the tissue cells becomes significant.
For this reason, it has been suggested that the health impact of contamination by alpha emitters might have been substantially underestimated. Measurements of RBE with external sources also neglect the ionization caused by the recoil of the parent nucleus due to the alpha decay. While the recoil of the parent nucleus of the decaying atom typically carries only about 2% of the energy of the alpha particle that is emitted by the decaying atom, its range is extremely short (about 2–3 angstroms), due to its high electric charge and high mass. The parent nucleus is required to recoil, upon emission of an alpha particle, with a discrete kinetic energy due to conservation of momentum. Thus, all of the ionization energy from the recoil nucleus is deposited in an extremely small volume near its original location, typically in the cell nucleus on the chromosomes, which have an affinity for heavy metals. The bulk of studies, using sources that are external to the cell, have yielded RBEs between 10 and 20. Since most of the ionization damage from the passage of the alpha particle is deposited in the cytoplasm, whereas that from the recoil nucleus is deposited on the DNA itself, the recoil nucleus is likely to cause greater damage than the alpha particle itself.
History.
In 1931, Failla and Henshaw reported on determination of the relative biological effectiveness (RBE) of x rays and γ rays. This appears to be the first use of the term ‘RBE’. The authors noted that RBE was dependent on the experimental system being studied. Somewhat later, it was pointed out by Zirkle et al. (1952) that the biological effectiveness depends on the spatial distribution of the energy imparted and the density of ionisations per unit path length of the ionising particles. Zirkle et al. coined the term ‘linear energy transfer (LET)’ to be used in radiobiology for the stopping power, i.e. the energy loss per unit path length of a charged particle. The concept was introduced in the 1950s, at a time when the deployment of nuclear weapons and nuclear reactors spurred research on the biological effects of artificial radioactivity. It had been noticed that those effects depended both on the type and energy spectrum of the radiation, and on the kind of living tissue. The first systematic experiments to determine the RBE were conducted in that decade.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "RBE= \\frac{D_X}{D_R}"
}
] |
https://en.wikipedia.org/wiki?curid=15560529
|
155624
|
Heritability
|
Estimation of effect of genetic variation on phenotypic variation of a trait
Heritability is a statistic used in the fields of breeding and genetics that estimates the degree of "variation" in a phenotypic trait in a population that is due to genetic variation between individuals in that population. The concept of heritability can be expressed in the form of the following question: "What is the proportion of the variation in a given trait within a population that is "not" explained by the environment or random chance?"
Other causes of measured variation in a trait are characterized as environmental factors, including observational error. In human studies of heritability these are often apportioned into factors from "shared environment" and "non-shared environment" based on whether they tend to result in persons brought up in the same household being more or less similar to persons who were not.
Heritability is estimated by comparing individual phenotypic variation among related individuals in a population, by examining the association between individual phenotype and genotype data, or even by modeling summary-level data from genome-wide association studies (GWAS). Heritability is an important concept in quantitative genetics, particularly in selective breeding and behavior genetics (for instance, twin studies). It is the source of much confusion due to the fact that its technical definition is different from its commonly-understood folk definition. Therefore, its use conveys the incorrect impression that behavioral traits are "inherited" or specifically passed down through the genes. Behavioral geneticists also conduct heritability analyses based on the assumption that genes and environments contribute in a separate, additive manner to behavioral traits.
Overview.
Heritability measures the fraction of phenotype variability that can be attributed to genetic variation. This is not the same as saying that this fraction of an individual phenotype is caused by genetics. For example, it is incorrect to say that since the heritability of personality traits is about 0.6, that means that 60% of your personality is inherited from your parents and 40% comes from the environment. In addition, heritability can change without any genetic change occurring, such as when the environment starts contributing to more variation. As a case in point, consider that both genes and environment have the potential to influence intelligence. Heritability could increase if genetic variation increases, causing individuals to show more phenotypic variation, like showing different levels of intelligence. On the other hand, heritability might also increase if the environmental variation decreases, causing individuals to show less phenotypic variation, like showing more similar levels of intelligence. Heritability increases when genetics are contributing more variation or because non-genetic factors are contributing less variation; what matters is the relative contribution. Heritability is specific to a particular population in a particular environment. High heritability of a trait, consequently, does not necessarily mean that the trait is not very susceptible to environmental influences. Heritability can also change as a result of changes in the environment, migration, inbreeding, or the way in which heritability itself is measured in the population under study. The heritability of a trait should not be interpreted as a measure of the extent to which said trait is genetically determined in an individual.
The extent of dependence of phenotype on environment can also be a function of the genes involved. Matters of heritability are complicated because genes may canalize a phenotype, making its expression almost inevitable in all occurring environments. Individuals with the same genotype can also exhibit different phenotypes through a mechanism called phenotypic plasticity, which makes heritability difficult to measure in some cases. Recent insights in molecular biology have identified changes in transcriptional activity of individual genes associated with environmental changes. However, there are a large number of genes whose transcription is not affected by the environment.
Estimates of heritability use statistical analyses to help to identify the causes of differences between individuals. Since heritability is concerned with variance, it is necessarily an account of the differences between individuals in a population. Heritability can be univariate – examining a single trait – or multivariate – examining the genetic and environmental associations between multiple traits at once. This allows a test of the genetic overlap between different phenotypes: for instance hair color and eye color. Environment and genetics may also interact, and heritability analyses can test for and examine these interactions (GxE models).
A prerequisite for heritability analyses is that there is some population variation to account for. This last point highlights the fact that heritability cannot take into account the effect of factors which are invariant in the population. Factors may be invariant if they are absent from the population, such as no one having access to a particular antibiotic, or because they are omnipresent, as when everyone is drinking coffee. In practice, all human behavioral traits vary and almost all traits show some heritability.
Definition.
Any particular phenotype can be modeled as the sum of genetic and environmental effects:
Phenotype ("P") = Genotype ("G") + Environment ("E").
Likewise the phenotypic variance in the trait – Var (P) – is the sum of effects as follows:
Var("P") = Var("G") + Var("E") + 2 Cov("G","E").
In a planned experiment Cov("G","E") can be controlled and held at 0. In this case, heritability, formula_0 is defined as
formula_1
"H"2 is the broad-sense heritability. This reflects all the genetic contributions to a population's phenotypic variance including additive, dominant, and epistatic (multi-genic interactions), as well as maternal and paternal effects, where individuals are directly affected by their parents' phenotype, such as with milk production in mammals.
A particularly important component of the genetic variance is the additive variance, Var(A), which is the variance due to the average effects (additive effects) of the alleles. Since each parent passes a single allele per locus to each offspring, parent-offspring resemblance depends upon the average effect of single alleles. Additive variance represents, therefore, the genetic component of variance responsible for parent-offspring resemblance. The additive genetic portion of the phenotypic variance is known as narrow-sense heritability and is defined as
formula_2
An upper case "H"2 is used to denote broad sense, and lower case "h"2 for narrow sense.
For traits which are not continuous but dichotomous such as an additional toe or certain diseases, the contribution of the various alleles can be considered to be a sum, which past a threshold, manifests itself as the trait, giving the liability threshold model in which heritability can be estimated and selection modeled.
Additive variance is important for selection. If a selective pressure such as improving livestock is exerted, the response of the trait is directly related to narrow-sense heritability. The mean of the trait will increase in the next generation as a function of how much the mean of the selected parents differs from the mean of the population from which the selected parents were chosen. The observed "response to selection" leads to an estimate of the narrow-sense heritability (called realized heritability). This is the principle underlying artificial selection or breeding.
Example.
The simplest genetic model involves a single locus with two alleles (b and B) affecting one quantitative phenotype.
The number of B alleles can be 0, 1, or 2. For any genotype, ("B"i,"B"j), where "B"i and "B"j are either 0 or 1, the expected phenotype can then be written as the sum of the overall mean, a linear effect, and a dominance deviation (one can think of the dominance term as an "interaction" between "B"i and "B"j):
formula_3
The additive genetic variance at this locus is the weighted average of the squares of the additive effects:
formula_4
where formula_5
There is a similar relationship for the variance of dominance deviations:
formula_6
where formula_7
The linear regression of phenotype on genotype is shown in Figure 1.
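The decomposition just described can be reproduced numerically. The following Python sketch (the allele frequency, the genotypic values, and the function name are illustrative assumptions) performs the frequency-weighted regression of phenotype on the number of B alleles and returns the additive and dominance variances:

```python
import numpy as np

def single_locus_variances(p, values):
    """Additive and dominance variance at one biallelic locus.

    p      : frequency of the B allele (Hardy-Weinberg genotype frequencies assumed)
    values : genotypic values for (bb, Bb, BB)

    The additive effects are the fitted deviations from the population mean in the
    frequency-weighted regression of phenotype on allele count; the dominance
    deviations are the residuals, as in the text.
    """
    q = 1.0 - p
    freqs = np.array([q * q, 2 * p * q, p * p])   # f(bb), f(Bb), f(BB)
    x = np.array([0.0, 1.0, 2.0])                 # number of B alleles
    y = np.asarray(values, dtype=float)

    mean_x = np.sum(freqs * x)
    mean_y = np.sum(freqs * y)
    slope = np.sum(freqs * (x - mean_x) * (y - mean_y)) / np.sum(freqs * (x - mean_x) ** 2)

    additive = slope * (x - mean_x)        # a_ij; frequency-weighted mean is zero
    dominance = (y - mean_y) - additive    # d_ij; frequency-weighted mean is zero
    return np.sum(freqs * additive ** 2), np.sum(freqs * dominance ** 2)

# Illustrative numbers: B at frequency 0.3, genotypic values 0, 2, 2 (complete dominance of B).
print(single_locus_variances(0.3, [0.0, 2.0, 2.0]))   # approximately (0.8232, 0.1764)
```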
Assumptions.
Estimates of the total heritability of human traits assume the absence of epistasis, which has been called the "assumption of additivity". Although some researchers have cited such estimates in support of the existence of "missing heritability" unaccounted for by known genetic loci, the assumption of additivity may render these estimates invalid. There is also some empirical evidence that the additivity assumption is frequently violated in behavior genetic studies of adolescent intelligence and academic achievement.
Estimating heritability.
Since only "P" can be observed or measured directly, heritability must be estimated from the similarities observed in subjects varying in their level of genetic or environmental similarity. The statistical analyses required to estimate the genetic and environmental components of variance depend on the sample characteristics. Briefly, better estimates are obtained using data from individuals with widely varying levels of genetic relationship - such as twins, siblings, parents and offspring, rather than from more distantly related (and therefore less similar) subjects. The standard error for heritability estimates is improved with large sample sizes.
In non-human populations it is often possible to collect information in a controlled way. For example, among farm animals it is easy to arrange for a bull to produce offspring from a large number of cows and to control environments. Such experimental control is generally not possible when gathering human data, relying on naturally occurring relationships and environments.
In classical quantitative genetics, there were two schools of thought regarding estimation of heritability.
One school of thought was developed by Sewall Wright at The University of Chicago, and further popularized by C. C. Li (University of Chicago) and J. L. Lush (Iowa State University). It is based on the analysis of correlations and, by extension, regression. Path Analysis was developed by Sewall Wright as a way of estimating heritability.
The second was originally developed by R. A. Fisher and expanded at The University of Edinburgh, Iowa State University, and North Carolina State University, as well as other schools. It is based on the analysis of variance of breeding studies, using the intraclass correlation of relatives. Various methods of estimating components of variance (and, hence, heritability) from ANOVA are used in these analyses.
Today, heritability can be estimated from general pedigrees using linear mixed models and from genomic relatedness estimated from genetic markers.
Studies of human heritability often utilize adoption study designs, often with identical twins who have been separated early in life and raised in different environments. Such individuals have identical genotypes and can be used to separate the effects of genotype and environment. A limit of this design is the common prenatal environment and the relatively low numbers of twins reared apart. A second and more common design is the twin study in which the similarity of identical and fraternal twins is used to estimate heritability. These studies can be limited by the fact that identical twins are not completely genetically identical, potentially resulting in an underestimation of heritability.
In observational studies, or because of evocative effects (where a genome evokes environments by its effect on them), G and E may covary: gene environment correlation. Depending on the methods used to estimate heritability, correlations between genetic factors and shared or non-shared environments may or may not be confounded with heritability.
Regression/correlation methods of estimation.
The first school of estimation uses regression and correlation to estimate heritability.
Comparison of close relatives.
In the comparison of relatives, we find that in general,
formula_8
where "r" can be thought of as the coefficient of relatedness, "b" is the coefficient of regression and "t" is the coefficient of correlation.
Parent-offspring regression.
Heritability may be estimated by comparing parent and offspring traits (as in Fig. 2). The slope of the line (0.57) approximates the heritability of the trait when offspring values are regressed against the average trait in the parents. If only one parent's value is used then heritability is twice the slope. (This is the source of the term "regression," since the offspring values always tend to regress to the mean value for the population, "i.e.", the slope is always less than one). This regression effect also underlies the DeFries–Fulker method for analyzing twins selected for one member being affected.
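A minimal simulation sketch of this estimator (the sample size, a true narrow-sense heritability of 0.6, and a total phenotypic variance of 1 are all assumed values): the slope of the regression of offspring phenotype on the midparent value recovers the narrow-sense heritability.

```python
import numpy as np

rng = np.random.default_rng(0)
n, h2 = 20000, 0.6                                 # assumed sample size and true h^2

A_father = rng.normal(0, np.sqrt(h2), n)           # breeding values, variance h^2
A_mother = rng.normal(0, np.sqrt(h2), n)
E = lambda: rng.normal(0, np.sqrt(1 - h2), n)      # environmental deviations, variance 1 - h^2

P_father, P_mother = A_father + E(), A_mother + E()
A_offspring = 0.5 * (A_father + A_mother) + rng.normal(0, np.sqrt(h2 / 2), n)  # Mendelian sampling
P_offspring = A_offspring + E()

midparent = 0.5 * (P_father + P_mother)
slope = np.polyfit(midparent, P_offspring, 1)[0]   # offspring regressed on midparent
print(f"h^2 estimated from the midparent slope: {slope:.2f}")   # close to 0.6
```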
Sibling comparison.
A basic approach to heritability can be taken using full-sib designs: comparing similarity between siblings who share both a biological mother and a father. When there is only additive gene action, this sibling phenotypic correlation is an index of "familiarity" – the sum of half the additive genetic variance plus the full effect of the common environment. It thus places an upper limit on additive heritability of twice the full-sib phenotypic correlation. Half-sib designs compare phenotypic traits of siblings that share one parent with other sibling groups.
Twin studies.
Heritability for traits in humans is most frequently estimated by comparing resemblances between twins. "The advantage of twin studies, is that the total variance can be split up into genetic, shared or common environmental, and unique environmental components, enabling an accurate estimation of heritability". Fraternal or dizygotic (DZ) twins on average share half their genes (assuming there is no assortative mating for the trait), and so identical or monozygotic (MZ) twins on average are twice as genetically similar as DZ twins. A crude estimate of heritability, then, is approximately twice the difference in correlation between MZ and DZ twins, i.e. Falconer's formula "H"2=2(r(MZ)-r(DZ)).
The effect of shared environment, "c"2, contributes to similarity between siblings due to the commonality of the environment they are raised in. Shared environment is approximated by the DZ correlation minus half heritability, which is the degree to which DZ twins share the same genes, "c"2=DZ-1/2"h"2. Unique environmental variance, "e"2, reflects the degree to which identical twins raised together are dissimilar, "e"2=1-r(MZ).
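These formulas translate directly into code. A hedged Python sketch (the function name and the example correlations are illustrative assumptions, not measured values):

```python
def falconer_estimates(r_mz, r_dz):
    """Crude variance decomposition from twin correlations (Falconer's formulas)."""
    h2 = 2 * (r_mz - r_dz)    # heritability
    c2 = r_dz - 0.5 * h2      # shared (common) environment
    e2 = 1 - r_mz             # unique environment
    return h2, c2, e2

print(falconer_estimates(r_mz=0.8, r_dz=0.5))   # approximately (0.6, 0.2, 0.2)
```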
Analysis of variance methods of estimation.
The second set of methods of estimation of heritability involves ANOVA and estimation of variance components.
Basic model.
We use the basic discussion of Kempthorne. Considering only the most basic of genetic models, we can look at the quantitative contribution of a single locus with genotype Gi as
formula_9
where formula_10 is the effect of genotype Gi and formula_11 is the environmental effect.
Consider an experiment with a group of sires and their progeny from random dams. Since the progeny get half of their genes from the father and half from their (random) mother, the progeny equation is
formula_12
Intraclass correlations.
Consider the experiment above. We have two groups of progeny we can compare. The first is comparing the various progeny for an individual sire (called "within sire group"). The variance will include terms for genetic variance (since they did not all get the same genotype) and environmental variance. This is thought of as an "error" term.
The second group of progeny are comparisons of the means of half sibs with each other (called the "among sire group"). In addition to the error term as in the within-sire groups, there is an additional term due to the differences among the means of the different half-sib families. The intraclass correlation is
formula_13 ,
since environmental effects are independent of each other.
The ANOVA.
In an experiment with formula_14 sires and formula_15 progeny per sire, we can calculate the following ANOVA, using formula_16 as the genetic variance and formula_17 as the environmental variance:
The formula_18 term is the intraclass correlation between half sibs. We can easily calculate formula_19. The expected mean square is calculated from the relationship of the individuals (progeny within a sire are all half-sibs, for example), and an understanding of intraclass correlations.
The use of ANOVA to calculate heritability often fails to account for the presence of gene–environment interactions, because ANOVA has a much lower statistical power for testing for interaction effects than for direct effects.
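As a hedged illustration of the estimator in formula_19 (the function name and the example mean squares are assumptions chosen only to show the arithmetic):

```python
def half_sib_heritability(S, W, r):
    """Heritability estimate from a balanced sire-model ANOVA, as in formula_19.

    S : mean square among sire groups
    W : mean square within sire groups (the "error" term)
    r : number of progeny per sire
    """
    return 4.0 * (S - W) / (S + (r - 1) * W)

# Illustrative mean squares for a design with r = 10 progeny per sire:
print(round(half_sib_heritability(S=30.0, W=20.0, r=10), 3))   # about 0.19
```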
Model with additive and dominance terms.
For a model with additive and dominance terms, but not others, the equation for a single locus is
formula_20
where
formula_21 is the additive effect of the ith allele, formula_22 is the additive effect of the jth allele, formula_23 is the dominance deviation for the ijth genotype, and formula_11 is the environment.
Experiments can be run with a similar setup to the one given in Table 1. Using different relationship groups, we can evaluate different intraclass correlations. Using formula_24 as the additive genetic variance and formula_25 as the dominance deviation variance, intraclass correlations become linear functions of these parameters. In general,
Intraclass correlationformula_26
where formula_15 and formula_27 are found as
formula_28P[ alleles drawn at random from the relationship pair are identical by descent], and
formula_29P[ genotypes drawn at random from the relationship pair are identical by descent].
Some common relationships and their coefficients are given in Table 2.
Linear mixed models.
A wide variety of approaches using linear mixed models have been reported in literature. Via these methods, phenotypic variance is partitioned into genetic, environmental and experimental design variances to estimate heritability. Environmental variance can be explicitly modeled by studying individuals across a broad range of environments, although inference of genetic variance from phenotypic and environmental variance may lead to underestimation of heritability due to the challenge of capturing the full range of environmental influence affecting a trait. Other methods for calculating heritability use data from genome-wide association studies to estimate the influence on a trait by genetic factors, which is reflected by the rate and influence of putatively associated genetic loci (usually single-nucleotide polymorphisms) on the trait. This can lead to underestimation of heritability, however. This discrepancy is referred to as "missing heritability" and reflects the challenge of accurately modeling both genetic and environmental variance in heritability models.
When a large, complex pedigree or another aforementioned type of data is available, heritability and other quantitative genetic parameters can be estimated by restricted maximum likelihood (REML) or Bayesian methods. The raw data will usually have three or more data points for each individual: a code for the sire, a code for the dam and one or several trait values. Different trait values may be for different traits or for different time points of measurement.
The currently popular methodology relies on high degrees of certainty over the identities of the sire and dam; it is not common to treat the sire identity probabilistically. This is not usually a problem, since the methodology is rarely applied to wild populations (although it has been used for several wild ungulate and bird populations), and sires are invariably known with a very high degree of certainty in breeding programmes. There are also algorithms that account for uncertain paternity.
The pedigrees can be viewed using programs such as Pedigree Viewer, and analyzed with programs such as ASReml, VCE, WOMBAT, MCMCglmm within the R environment, or the BLUPF90 family of programs.
Pedigree models are helpful for untangling confounds such as reverse causality, maternal effects such as the prenatal environment, and confounding of genetic dominance, shared environment, and maternal gene effects.
Genomic heritability.
When genome-wide genotype data and phenotypes from large population samples are available, one can estimate the relationships between individuals based on their genotypes and use a linear mixed model to estimate the variance explained by the genetic markers. This gives a genomic heritability estimate based on the variance captured by common genetic variants. There are multiple methods that make different adjustments for allele frequency and linkage disequilibrium. In particular, the method called High-Definition Likelihood (HDL) can estimate genomic heritability using only GWAS summary statistics, making it easier to incorporate the large sample sizes available in various GWAS meta-analyses.
Response to selection.
In selective breeding of plants and animals, the expected response to selection of a trait with known narrow-sense heritability formula_30 can be estimated using the "breeder's equation":
formula_31
In this equation, the Response to Selection (R) is defined as the realized average difference between the parent generation and the next generation, and the Selection Differential (S) is defined as the average difference between the parent generation and the selected parents.
For example, imagine that a plant breeder is involved in a selective breeding project with the aim of increasing the number of kernels per ear of corn. For the sake of argument, let us assume that the average ear of corn in the parent generation has 100 kernels. Let us also assume that the selected parents produce corn with an average of 120 kernels per ear. If h2 equals 0.5, then the next generation will produce corn with an average of 0.5(120-100) = 10 additional kernels per ear. Therefore, the total number of kernels per ear of corn will equal, on average, 110.
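The corn example translates directly into a trivial code sketch (the function name is an arbitrary choice):

```python
def response_to_selection(h2, selection_differential):
    """Breeder's equation: R = h^2 * S."""
    return h2 * selection_differential

# Corn example from the text: population mean 100 kernels, selected parents 120, h^2 = 0.5.
R = response_to_selection(0.5, 120 - 100)
print(f"expected next-generation mean: {100 + R:.0f} kernels per ear")   # 110
```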
Observing the response to selection in an artificial selection experiment will allow calculation of realized heritability as in Fig. 4.
Heritability in the above equation is equal to the ratio formula_32 only if the genotype and the environmental noise follow Gaussian distributions.
Controversies.
Prominent critics of heritability estimates, such as Steven Rose, Jay Joseph, and Richard Bentall, focus largely on heritability estimates in the behavioral and social sciences. Bentall has claimed that such heritability scores are typically calculated counterintuitively to derive numerically high scores, that heritability is misinterpreted as genetic determination, and that this alleged bias distracts from other factors that researchers have found more causally important, such as childhood abuse causing later psychosis. Heritability estimates are also inherently limited because they do not convey any information regarding whether genes or environment play a larger role in the development of the trait under study. For this reason, David Moore and David Shenk describe the term "heritability" in the context of behavior genetics as "...one of the most misleading in the history of science" and argue that it has no value except in very rare cases. When studying complex human traits, it is impossible to use heritability analysis to determine the relative contributions of genes and environment, as such traits result from multiple causes interacting. In particular, Feldman and Lewontin emphasize that heritability is itself a function of environmental variation. However, some researchers argue that it is possible to disentangle the two.
The controversy over heritability estimates stems largely from their basis in twin studies. The limited success of molecular-genetic studies in corroborating such population-genetic studies' conclusions is known as the "missing heritability" problem. Eric Turkheimer has argued that newer molecular methods have vindicated the conventional interpretation of twin studies, although it remains mostly unclear how to explain the relations between genes and behaviors. According to Turkheimer, both genes and environment are heritable, genetic contribution varies by environment, and a focus on heritability distracts from other important factors. Overall, however, heritability remains a widely applicable concept.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "H^2,"
},
{
"math_id": 1,
"text": "H^2 = \\frac{\\mathrm{Var}(G)}{\\mathrm{Var}(P)}"
},
{
"math_id": 2,
"text": "h^2 = \\frac{\\mathrm{Var}(A)}{\\mathrm{Var}(P)}"
},
{
"math_id": 3,
"text": "\n\\begin{align}\nP_{ij} & = \\mu + \\alpha \\, (B_i + B_j) + \\delta \\, (B_i B_j) \\\\\n& = \\text{Population mean} + \\text{Additive Effect } (a_{ij} = \\alpha (B_i + B_j)) + \\text{Dominance Deviation } (d_{ij} = \\delta (B_i B_j)). \\\\\n\\end{align}\n"
},
{
"math_id": 4,
"text": "\\mathrm{Var}(A) = f(bb)a^2_{bb}+f(Bb)a^2_{Bb}+f(BB)a^2_{BB},"
},
{
"math_id": 5,
"text": "f(bb)a_{bb}+f(Bb)a_{Bb}+f(BB)a_{BB} = 0."
},
{
"math_id": 6,
"text": "\\mathrm{Var}(D) = f(bb)d^2_{bb}+f(Bb)d^2_{Bb}+f(BB)d^2_{BB},"
},
{
"math_id": 7,
"text": "f(bb)d_{bb}+f(Bb)d_{Bb}+f(BB)d_{BB} = 0."
},
{
"math_id": 8,
"text": "h^2 = \\frac{b}{r} = \\frac{t}{r}"
},
{
"math_id": 9,
"text": "y_i = \\mu + g_i + e"
},
{
"math_id": 10,
"text": "g_i"
},
{
"math_id": 11,
"text": "e"
},
{
"math_id": 12,
"text": "z_i = \\mu + \\frac{1}{2}g_i + e"
},
{
"math_id": 13,
"text": "\\mathrm{corr}(z,z') = \\mathrm{corr}(\\mu + \\frac{1}{2}g + e, \\mu + \\frac{1}{2}g + e') = \\frac{1}{4}V_g"
},
{
"math_id": 14,
"text": "n"
},
{
"math_id": 15,
"text": "r"
},
{
"math_id": 16,
"text": "V_g"
},
{
"math_id": 17,
"text": "V_e"
},
{
"math_id": 18,
"text": "\\frac{1}{4}V_g"
},
{
"math_id": 19,
"text": "H^2 = \\frac{V_g}{V_g+V_e} = \\frac{4(S-W)}{S+(r-1)W}"
},
{
"math_id": 20,
"text": "y_{ij} = \\mu + \\alpha_i + \\alpha_j + d_{ij} + e, "
},
{
"math_id": 21,
"text": "\\alpha_i"
},
{
"math_id": 22,
"text": "\\alpha_j"
},
{
"math_id": 23,
"text": "d_{ij}"
},
{
"math_id": 24,
"text": "V_a"
},
{
"math_id": 25,
"text": "V_d"
},
{
"math_id": 26,
"text": " = r V_a + \\theta V_d,"
},
{
"math_id": 27,
"text": "\\theta"
},
{
"math_id": 28,
"text": "r = "
},
{
"math_id": 29,
"text": "\\theta = "
},
{
"math_id": 30,
"text": " (h^2) "
},
{
"math_id": 31,
"text": " R = h^2 S "
},
{
"math_id": 32,
"text": "\\mathrm{Var}(A)/\\mathrm{Var}(P)"
}
] |
https://en.wikipedia.org/wiki?curid=155624
|
15568639
|
Cost–volume–profit analysis
|
Cost accounting model
Cost–volume–profit (CVP), in managerial economics, is a form of cost accounting. It is a simplified model, useful for elementary instruction and for short-run decisions.
Overview.
A critical part of CVP analysis is the point where total revenues equal total costs (both fixed and variable costs). At this break-even point, a company will experience no income or loss. This break-even point can be an initial examination that precedes a more detailed CVP analysis.
CVP analysis employs the same basic assumptions as in breakeven analysis. The assumptions underlying CVP analysis are:
The components of CVP analysis are:
Assumptions.
CVP assumes the following:
These are simplifying, largely linearizing assumptions, which are often implicitly assumed in elementary discussions of costs and profits. In more advanced treatments and practice, costs and revenue are nonlinear, and the analysis is more complicated, but the intuition afforded by linear CVP remains basic and useful.
One of the main tools of CVP analysis is the profit–volume (P/V) ratio, calculated as (contribution ÷ sales) × 100.
It expresses the contribution earned per unit of sales revenue.
Model.
Basic graph.
The assumptions of the CVP model yield the following linear equations for total costs and total revenue (sales):
Total costs = fixed costs + (unit variable cost × number of units)
Total revenue = sales price × number of units
These are linear because of the assumptions of constant costs and prices, and there is no distinction between units produced and units sold, as these are assumed to be equal. Note that when such a chart is drawn, the linear CVP model is assumed, often implicitly.
In symbols:
formula_0
formula_1
where
Profit is computed as TR-TC; it is a profit if positive, a loss if negative.
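A minimal Python sketch of the linear CVP model follows; the cost and price figures are illustrative assumptions, not values from the text, and the break-even formula X = TFC / (P − V) is the standard consequence of setting TR = TC:

```python
# Linear CVP model: TC = TFC + V*X, TR = P*X, profit = TR - TC.
# Break-even occurs where TR = TC, i.e. X = TFC / (P - V).

TFC = 10_000.0  # total fixed costs (illustrative)
V = 6.0         # unit variable cost (illustrative)
P = 10.0        # unit selling price (illustrative)

def total_cost(X: float) -> float:
    return TFC + V * X

def total_revenue(X: float) -> float:
    return P * X

def profit(X: float) -> float:
    return total_revenue(X) - total_cost(X)

break_even_units = TFC / (P - V)
print(break_even_units, profit(break_even_units))  # 2500.0 0.0
print(profit(3000))                                # 2000.0 (above break-even)
```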
Break down.
Costs and sales can be broken down, which provide further insight into operations.
One can decompose total costs as fixed costs plus variable costs:
formula_0
Following a matching principle of matching a portion of sales against variable costs, one can decompose sales as contribution plus variable costs, where contribution is "what's left after deducting variable costs". One can think of contribution as "the marginal contribution of a unit to the profit", or "contribution towards offsetting fixed costs".
In symbols:
formula_2
where
Subtracting variable costs from both costs and sales yields the simplified diagram and equation for profit and loss.
In symbols:
formula_3
These diagrams can be related by a rather busy diagram, which demonstrates how, if one subtracts variable costs, the sales and total costs lines shift down to become the contribution and fixed costs lines. Note that the profit and loss for any given number of unit sales is the same, and in particular the break-even point is the same, whether one computes by sales = total costs or as contribution = fixed costs. Mathematically, the contribution graph is obtained from the sales graph by a shear, to be precise formula_4, where V is the unit variable cost.
Applications.
CVP simplifies the computation of breakeven in break-even analysis, and more generally allows simple computation of target income sales. It simplifies analysis of short run trade-offs in operational decisions.
Limitations.
CVP is a short run, marginal analysis: it assumes that unit variable costs and unit revenues are constant, which is appropriate for small deviations from current production and sales, and assumes a neat division between fixed costs and variable costs, though in the long run all costs are variable. For longer-term analysis that considers the entire life-cycle of a product, one therefore often prefers activity-based costing or throughput accounting.
CVP analysis also demonstrates the point at which a firm makes neither profit nor loss, that is, where it operates at break-even. Additional limitations include:
1. Segregating total costs into their fixed and variable components is often a daunting task.
2. Fixed costs are unlikely to stay constant as output increases beyond a certain range of activity.
3. The analysis is restricted to the relevant range specified and beyond that the results can become unreliable.
4. Aside from volume, other elements such as inflation, efficiency, capacity, and technology also affect costs.
5. It is impractical to assume that the sales mix remains constant, since it depends on changing demand levels.
6. The assumed linearity of total cost and total revenue relies on unit variable cost and selling price being constant. In practice this holds only within a relevant range or period and is likely to change.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\text{TC} = \\text{TFC} + V \\times X"
},
{
"math_id": 1,
"text": "\\text{TR} = P \\times X"
},
{
"math_id": 2,
"text": "\\begin{align}\n\\text{TR} &= P \\times X\\\\\n &= \\bigl(\\left(P - V \\right)+V\\bigr)\\times X\\\\\n &= \\left(C+V\\right)\\times X\\\\\n &= C\\times X + V\\times X\n\\end{align}"
},
{
"math_id": 3,
"text": "\\begin{align}\n\\text{PL} &= \\text{TR} - \\text{TC}\\\\\n &= \\left(C+V\\right)\\times X\n - \\left(\\text{TFC} + V \\times X\\right)\\\\\n &= C \\times X - \\text{TFC}\n\\end{align}"
},
{
"math_id": 4,
"text": "\\left(\\begin{smallmatrix}1 & 0\\\\ -V & 1\\end{smallmatrix}\\right)"
}
] |
https://en.wikipedia.org/wiki?curid=15568639
|
1556952
|
Positive and negative predictive values
|
Statistical measures of whether a finding is likely to be true
The positive and negative predictive values (PPV and NPV respectively) are the proportions of positive and negative results in statistics and diagnostic tests that are true positive and true negative results, respectively. The PPV and NPV describe the performance of a diagnostic test or other statistical measure. A high result can be interpreted as indicating the accuracy of such a statistic. The PPV and NPV are not intrinsic to the test (as true positive rate and true negative rate are); they depend also on the prevalence. Both PPV and NPV can be derived using Bayes' theorem.
Although sometimes used synonymously, a "positive predictive value" generally refers to what is established by control groups, while a post-test probability refers to a probability for an individual. Still, if the individual's pre-test probability of the target condition is the same as the prevalence in the control group used to establish the positive predictive value, the two are numerically equal.
In information retrieval, the PPV statistic is often called the precision.
Definition.
Positive predictive value (PPV).
The positive predictive value (PPV), or precision, is defined as
formula_0
where a "true positive" is the event that the test makes a positive prediction, and the subject has a positive result under the gold standard, and a "false positive" is the event that the test makes a positive prediction, and the subject has a negative result under the gold standard. The ideal value of the PPV, with a perfect test, is 1 (100%), and the worst possible value would be zero.
The PPV can also be computed from sensitivity, specificity, and the prevalence of the condition:
formula_1
cf. Bayes' theorem
The complement of the PPV is the false discovery rate (FDR):
formula_2
Negative predictive value (NPV).
The negative predictive value is defined as:
formula_3
where a "true negative" is the event that the test makes a negative prediction, and the subject has a negative result under the gold standard, and a "false negative" is the event that the test makes a negative prediction, and the subject has a positive result under the gold standard. With a perfect test, one which returns no false negatives, the value of the NPV is 1 (100%), and with a test which returns no true negatives the NPV value is zero.
The NPV can also be computed from sensitivity, specificity, and prevalence:
formula_4
formula_5
The complement of the NPV is the <templatestyles src="Template:Visible anchor/styles.css" />false omission rate (FOR):
formula_6
Although sometimes used synonymously, a "negative predictive value" generally refers to what is established by control groups, while a negative post-test probability rather refers to a probability for an individual. Still, if the individual's pre-test probability of the target condition is the same as the prevalence in the control group used to establish the negative predictive value, then the two are numerically equal.
Relationship.
The following diagram illustrates how the "positive predictive value", "negative predictive value", sensitivity, and specificity are related.
<templatestyles src="Reflist/styles.css" />
Note that the positive and negative predictive values can only be estimated using data from a cross-sectional study or other population-based study in which valid prevalence estimates may be obtained. In contrast, the sensitivity and specificity can be estimated from case-control studies.
Worked example.
Suppose the fecal occult blood (FOB) screen test is used in 2030 people to look for bowel cancer:
The small positive predictive value (PPV = 10%) indicates that many of the positive results from this testing procedure are false positives. Thus it will be necessary to follow up any positive result with a more reliable test to obtain a more accurate assessment as to whether cancer is present. Nevertheless, such a test may be useful if it is inexpensive and convenient. The strength of the FOB screen test lies instead in its negative predictive value: when the test is negative for an individual, we can be highly confident that the negative result is true.
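As a minimal sketch of the prevalence-based PPV and NPV formulas given above, the following Python snippet computes both quantities; the sensitivity, specificity, and prevalence values are illustrative assumptions chosen so that the PPV comes out near the 10% quoted in the text, since the underlying 2×2 table is not reproduced here:

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    tp = sensitivity * prevalence                # true-positive rate in the population
    fp = (1 - specificity) * (1 - prevalence)    # false-positive rate in the population
    return tp / (tp + fp)

def npv(sensitivity: float, specificity: float, prevalence: float) -> float:
    tn = specificity * (1 - prevalence)          # true-negative rate in the population
    fn = (1 - sensitivity) * prevalence          # false-negative rate in the population
    return tn / (tn + fn)

# Illustrative values only:
sens, spec, prev = 0.667, 0.91, 0.0148
print(f"PPV = {ppv(sens, spec, prev):.3f}")  # ~0.10: low, because prevalence is low
print(f"NPV = {npv(sens, spec, prev):.3f}")  # ~0.99: high
```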
Problems.
Other individual factors.
Note that the PPV is not intrinsic to the test—it depends also on the prevalence. Due to the large effect of prevalence upon predictive values, a standardized approach has been proposed, where the PPV is normalized to a prevalence of 50%. PPV is directly proportional to the prevalence of the disease or condition. In the above example, if the group of people tested had included a higher proportion of people with bowel cancer, then the PPV would probably come out higher and the NPV lower. If everybody in the group had bowel cancer, the PPV would be 100% and the NPV 0%.
To overcome this problem, NPV and PPV should only be used if the ratio of the number of patients in the disease group and the number of patients in the healthy control group used to establish the NPV and PPV is equivalent to the prevalence of the diseases in the studied population, or, in case two disease groups are compared, if the ratio of the number of patients in disease group 1 and the number of patients in disease group 2 is equivalent to the ratio of the prevalences of the two diseases studied. Otherwise, positive and negative likelihood ratios are more accurate than NPV and PPV, because likelihood ratios do not depend on prevalence.
When an individual being tested has a different pre-test probability of having a condition than the control groups used to establish the PPV and NPV, the PPV and NPV are generally distinguished from the positive and negative post-test probabilities, with the PPV and NPV referring to the ones established by the control groups, and the post-test probabilities referring to the ones for the tested individual (as estimated, for example, by likelihood ratios). Preferably, in such cases, a large group of equivalent individuals should be studied, in order to establish separate positive and negative predictive values for use of the test in such individuals.
Bayesian updating.
Bayes' theorem confers inherent limitations on the accuracy of screening tests as a function of disease prevalence or pre-test probability. It has been shown that a testing system can tolerate significant drops in prevalence, up to a certain well-defined point known as the prevalence threshold, below which the reliability of a positive screening test drops precipitously. That said, Balayla et al. showed that sequential testing overcomes the aforementioned Bayesian limitations and thus improves the reliability of screening tests. For a desired positive predictive value formula_7, where formula_8, that approaches some constant formula_9, the number of positive test iterations formula_10 needed is:
formula_11
where
Of note, the denominator of the above equation is the natural logarithm of the positive likelihood ratio (LR+). Also, note that a critical assumption is that the tests must be independent. As described by Balayla et al., repeating the same test may violate this independence assumption; in fact, "A more natural and reliable method to enhance the positive predictive value would be, when available, to use a different test with different parameters altogether after an initial positive result is obtained."
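A small Python sketch of the iteration formula is given below. The list defining the symbols is not reproduced here, so the interpretation used in the code is an assumption: a is taken as the test sensitivity, b as the specificity, and φ as the pretest probability (prevalence), which is consistent with the remark that the denominator is the log of LR+ = a/(1 − b). The numeric inputs are illustrative:

```python
import math

def positive_iterations(k: float, a: float, b: float, phi: float) -> int:
    """
    Number of consecutive independent positive tests needed so that the
    cumulative PPV reaches the desired value k (a sketch of the formula
    quoted above; assumes a = sensitivity, b = specificity,
    phi = pretest probability, and independent test repetitions).
    """
    numerator = math.log((k * (phi - 1)) / (phi * (k - 1)))
    denominator = math.log(a / (1 - b))  # ln of the positive likelihood ratio
    return math.ceil(numerator / denominator)

# Illustrative values: sensitivity 0.9, specificity 0.9, prevalence 1%,
# desired PPV of 0.95 -> 4 consecutive positives are needed.
print(positive_iterations(k=0.95, a=0.9, b=0.9, phi=0.01))
```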
Different target conditions.
PPV is used to indicate the probability that, in the case of a positive test, the patient really has the specified disease. However, there may be more than one cause for a disease, and any single potential cause may not always result in the overt disease seen in a patient. There is potential to mix up related target conditions of PPV and NPV, such as interpreting the PPV or NPV of a test as referring to having a disease, when that PPV or NPV value actually refers only to a predisposition of having that disease.
An example is the microbiological throat swab used in patients with a sore throat. Usually publications stating PPV of a throat swab are reporting on the probability that this bacterium is present in the throat, rather than that the patient is ill from the bacteria found. If presence of this bacterium always resulted in a sore throat, then the PPV would be very useful. However the bacteria may colonise individuals in a harmless way and never result in infection or disease. Sore throats occurring in these individuals are caused by other agents such as a virus. In this situation the gold standard used in the evaluation study represents only the presence of bacteria (that might be harmless) but not a causal bacterial sore throat illness. It can be proven that this problem will affect positive predictive value far more than negative predictive value. To evaluate diagnostic tests where the gold standard looks only at potential causes of disease, one may use an extension of the predictive value termed the Etiologic Predictive Value.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\text{PPV} = \\frac{\\text{Number of true positives}}{\\text{Number of true positives} + \\text{Number of false positives}} = \\frac{\\text{Number of true positives}}{\\text{Number of positive calls}}"
},
{
"math_id": 1,
"text": " \\text{PPV} = \\frac{\\text{sensitivity} \\times \\text{prevalence}}{\\text{sensitivity} \\times \\text{prevalence} + (1 - \\text{specificity}) \\times (1 - \\text{prevalence})} "
},
{
"math_id": 2,
"text": " \\text{FDR} = 1 - \\text{PPV} = \\frac{\\text{Number of false positives}}{\\text{Number of true positives} + \\text{Number of false positives}} = \\frac{\\text{Number of false positives}}{\\text{Number of positive calls}}"
},
{
"math_id": 3,
"text": " \\text{NPV} = \\frac{\\text{Number of true negatives}}{\\text{Number of true negatives}+\\text{Number of false negatives}} =\n\\frac{\\text{Number of true negatives}}{\\text{Number of negative calls}}\n"
},
{
"math_id": 4,
"text": " \\text{NPV} = \\frac{\\text{specificity} \\times (1-\\text{prevalence})}{\\text{specificity} \\times (1-\\text{prevalence}) + (1-\\text{sensitivity}) \\times \\text{prevalence}} "
},
{
"math_id": 5,
"text": " \\text{NPV} = \\frac{TN}{TN + FN} "
},
{
"math_id": 6,
"text": " \\text{FOR} = 1 - \\text{NPV} = \\frac{\\text{Number of false negatives}}{\\text{Number of true negatives}+\\text{Number of false negatives}} =\n\\frac{\\text{Number of false negatives}}{\\text{Number of negative calls}}\n"
},
{
"math_id": 7,
"text": "\\rho"
},
{
"math_id": 8,
"text": "\\rho < 1"
},
{
"math_id": 9,
"text": "k"
},
{
"math_id": 10,
"text": "n_i"
},
{
"math_id": 11,
"text": "n_i =\\lim_{k \\to \\rho}\\left\\lceil\\frac{\\ln\\left[\\frac{k(\\phi-1)}{\\phi(k-1)}\\right]}{\\ln\\left[\\frac{a}{1-b}\\right]}\\right\\rceil"
},
{
"math_id": 12,
"text": "a"
},
{
"math_id": 13,
"text": "b"
},
{
"math_id": 14,
"text": "\\phi"
}
] |
https://en.wikipedia.org/wiki?curid=1556952
|
1557013
|
Arc measurement
|
Technique of determining the radius of Earth
Arc measurement, sometimes degree measurement (), is the astrogeodetic technique of determining the radius of Earth – more specifically, the local Earth radius of curvature of the figure of the Earth – by relating the latitude difference (sometimes also the longitude difference) and the geographic distance (arc length) surveyed between two locations on Earth's surface. The most common variant involves only astronomical latitudes and the meridian arc length and is called "meridian arc measurement"; other variants may involve only astronomical longitude ("parallel arc measurement") or both geographic coordinates ("oblique arc measurement").
Arc measurement campaigns in Europe were the precursors to the International Association of Geodesy (IAG).
History.
The first known arc measurement was performed by Eratosthenes (240 BC) between Alexandria and Syene in what is now Egypt, determining the radius of the Earth with remarkable accuracy.
In the early 8th century, Yi Xing performed a similar survey.
The French physician Jean Fernel measured the arc in 1528. The Dutch geodesist Snellius (~1620) repeated the experiment between Alkmaar and Bergen op Zoom using more modern geodetic instrumentation ("Snellius' triangulation").
Later arc measurements aimed at determining the flattening of the Earth ellipsoid by measuring at different geographic latitudes. The first of these was the "French Geodesic Mission", commissioned by the French Academy of Sciences in 1735–1738, involving measurement expeditions to Lapland (Maupertuis et al.) and Peru (Pierre Bouguer et al.).
Struve measured a geodetic control network via triangulation between the Arctic Sea and the Black Sea, the "Struve Geodetic Arc".
Bessel compiled several meridian arcs, to compute the famous Bessel ellipsoid (1841).
Nowadays, the method is replaced by worldwide geodetic networks and by satellite geodesy.
Determination.
Assume the astronomic latitudes of two endpoints, formula_0 (standpoint) and formula_1 (forepoint) are known; these can be determined by astrogeodesy, observing the zenith distances of sufficient numbers of stars (meridian altitude method).
The empirical Earth's meridional radius of curvature at the midpoint of the meridian arc can then be determined by inverting the great-circle distance (or circular arc length) formula:
formula_2
where the latitudes are in radians and formula_3 is the arc length on mean sea level (MSL).
Historically, the distance between two places has been determined at low precision by pacing or odometry.
High precision land surveys can be used to determine the distance between two places at nearly the same longitude by measuring a baseline and a triangulation network linking fixed points. The meridian distance formula_4 from one end point to a fictitious point at the same latitude as the second end point is then calculated by trigonometry. The surface distance formula_4 is reduced to the corresponding distance at MSL, formula_3 (see: Geographical distance#Altitude correction).
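As a minimal sketch of inverting the arc-length formula above, the following Python snippet computes the radius of curvature from a surveyed meridian distance and the two astronomical latitudes; the numeric inputs are illustrative assumptions:

```python
import math

def radius_of_curvature(arc_length_m: float, lat_standpoint_deg: float, lat_forepoint_deg: float) -> float:
    """R = Delta' / |phi_s - phi_f|, with the latitude difference in radians."""
    dphi = math.radians(abs(lat_standpoint_deg - lat_forepoint_deg))
    return arc_length_m / dphi

# Illustrative values: a 1-degree latitude difference spanning roughly 111.2 km
# of meridian arc at mean sea level gives a radius near 6371 km.
print(radius_of_curvature(111_200.0, 40.0, 41.0))  # ~6.37e6 m
```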
An additional arc measurement at another latitudinal band, delimited by a new pair of standpoint and forepoint, serves to determine Earth's flattening.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\phi_s"
},
{
"math_id": 1,
"text": "\\phi_f"
},
{
"math_id": 2,
"text": "R = \\frac{\\mathit{\\Delta}'}{\\vert\\phi_s - \\phi_f\\vert}"
},
{
"math_id": 3,
"text": "\\mathit{\\Delta}'"
},
{
"math_id": 4,
"text": "\\mathit{\\Delta}"
}
] |
https://en.wikipedia.org/wiki?curid=1557013
|
15570705
|
Intelligent Mail barcode
|
Barcode for use on U.S. mail
The Intelligent Mail Barcode (IMb) is a 65-bar barcode for use on mail in the United States. The term "Intelligent Mail" refers to services offered by the United States Postal Service for domestic mail delivery. The IM barcode is intended to provide greater information and functionality than its predecessors POSTNET and PLANET. An Intelligent Mail barcode has also been referred to as a "One Code Solution" and a "4-State Customer Barcode", abbreviated 4CB, 4-CB or USPS4CB. The complete specification can be found in USPS Document USPS-B-3200. It effectively incorporates the routing ZIP Code and tracking information included in previously used postal barcode standards.
The barcode is applied by the sender; the Postal Service required use of the Intelligent Mail barcode to qualify for automation prices beginning January 28, 2013. Use of the barcode provides increased overall efficiency, including improved deliverability, and new services.
Symbology.
The Intelligent Mail barcode is a height-modulated barcode that encodes up to 31 decimal digits of mail-piece data into 65 vertical bars.
The code is made up of four distinct symbols, which is why it was once referred to as the 4-State Customer Barcode. Each bar contains the central "tracker" portion, and may contain an ascender, descender, neither, or both (a "full bar").
The 65 bars represent 130 bits (or 39.13 decimal digits), grouped as ten 13-bit characters. Each character has 2, 5, 8, or 11 of its 13 bits set to one. The Hamming distance between characters is at least 2. Consequently, single-bit errors in a character can be detected (toggling one bit results in an invalid character). The characters are interleaved throughout the symbol.
The number of characters can be calculated from the binomial coefficient.
formula_0
The total number of characters is two times 1365, or 2730. log2(2730) is approximately 11.41469 bits per character, so the ten characters (65 bars, or 130 bits) encode a 114-bit message.
The encoding includes an eleven-bit cyclic redundancy check (CRC) to detect, but not correct, errors. Subtracting the 11 CRC bits from the 114-bit message leaves an information payload of 103 bits (the specification sets one of those bits to zero). Consequently, 27 of the 130 bits are devoted to error detection.
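The character count and bit bookkeeping described above can be reproduced with a short Python sketch; it only restates the arithmetic in the text and is not part of the USPS specification:

```python
from math import comb, log2

# Number of 13-bit characters with exactly 2, 5, 8, or 11 bits set:
characters = sum(comb(13, k) for k in (2, 5, 8, 11))
print(characters)                            # 2730 (= 2 * 1365)

bits_per_character = log2(characters)        # ~11.41469 bits per character
message_bits = int(10 * bits_per_character)  # ten characters -> 114 usable bits (fraction truncated)
print(round(bits_per_character, 5), message_bits)

crc_bits = 11
payload_bits = message_bits - crc_bits       # 103-bit information payload
print(payload_bits)
```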
Data payload.
The IM barcode carries a data payload of 31 digits representing the following elements:
Barcode identifier.
A barcode identifier is assigned by the United States Postal Service to encode the presort identification that is currently printed in human-readable form on the optional endorsement line (OEL). It is also available for future United States Postal Service use. This is accomplished using two digits, with the second digit in the range of 0–4. The allowable encoding ranges are 00–04, 10–14, 20–24, 30–34, 40–44, 50–54, 60–64, 70–74, 80–84, and 90–94.
The first digit of the barcode identifier is defined as:
Service type identifier (STID).
A three-digit value represents both the class of the mail (such as first-class, standard mail, or periodical), and any services requested by the sender.
Basic STIDs, for the purpose of automation only, are as follows:
For a detailed list of STIDs, see Appendix A of the USPS Guide to Intelligent Mail Letters and Flats or Service Type Identifiers.
Mailer ID.
A 6- or 9-digit number assigned by the United States Postal Service identifies the specific business sending the mailing. Higher-volume mailers are eligible to receive 6-digit mailer IDs, which have a larger range of associated sequence numbers; lower-volume mailers receive 9-digit mailer IDs. To make it possible to distinguish 6-digit IDs from 9-digit IDs, all 6-digit IDs begin with a digit between 0 and 8 inclusive, while all 9-digit IDs begin with the digit 9.
Sequence number.
A mailer-assigned 6- or 9-digit ID specific to one piece of mail, to identify the specific recipient or household. The mailer must ensure that this number remains unique for a 45-day period after the mail is sent if a full service discount is claimed; otherwise, it does not have to be unique. The sequence number is either 6 or 9 digits, based on the length of the mailer ID. If the mailer ID is 6 digits long, then the sequence number is 9 digits long, and conversely, so that there will always be 15 digits in total when the mailer ID and the sequence number are combined.
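The rule that the mailer ID and sequence number always total 15 digits, with a leading 9 signalling a 9-digit mailer ID, can be sketched in Python; the function below is an illustrative construction of mine, not code from USPS documentation:

```python
def split_mailer_and_sequence(digits15: str) -> tuple[str, str]:
    """
    Split the combined 15-digit field into (mailer_id, sequence_number):
    a leading '9' signals a 9-digit mailer ID (with a 6-digit sequence
    number); otherwise the mailer ID is 6 digits (with a 9-digit
    sequence number).
    """
    if len(digits15) != 15 or not digits15.isdigit():
        raise ValueError("expected exactly 15 decimal digits")
    cut = 9 if digits15[0] == "9" else 6
    return digits15[:cut], digits15[cut:]

print(split_mailer_and_sequence("123456000000001"))  # ('123456', '000000001')
print(split_mailer_and_sequence("987654321000001"))  # ('987654321', '000001')
```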
Delivery-point ZIP code.
This section of the code may be omitted, but if it is present, the 5-, 9-, or 11-digit forms of the ZIP Code are also encoded in the Intelligent Mail barcode. The full 11-digit form includes the standard 5-digit ZIP code, the ZIP + 4 code, and a 2-digit code indicating the exact delivery point. This is the same information that was encoded in the POSTNET barcode, which the Intelligent Mail barcode replaces.
|
[
{
"math_id": 0,
"text": "\\binom{13}{2} + \\binom{13}{5} + \\binom{13}{8} + \\binom{13}{11} = 78 + 1287 + 1287 + 78 = 2 \\cdot 1365 = 2730"
}
] |
https://en.wikipedia.org/wiki?curid=15570705
|
155715
|
Pyroelectricity
|
Voltage created when a crystal is heated
Pyroelectricity (from Greek: "pyr" (πυρ), "fire" and electricity) is a property of certain crystals which are naturally electrically polarized and as a result contain large electric fields. Pyroelectricity can be described as the ability of certain materials to generate a temporary voltage when they are heated or cooled. The change in temperature modifies the positions of the atoms slightly within the crystal structure, so that the polarization of the material changes. This polarization change gives rise to a voltage across the crystal. If the temperature stays constant at its new value, the pyroelectric voltage gradually disappears due to leakage current. The leakage can be due to electrons moving through the crystal, ions moving through the air, or current leaking through a voltmeter attached across the crystal.
Explanation.
Pyroelectric charge in minerals develops on the opposite faces of asymmetric crystals. The direction in which the propagation of the charge tends is usually constant throughout a pyroelectric material, but, in some materials, this direction can be changed by a nearby electric field. These materials are said to exhibit ferroelectricity.
All known pyroelectric materials are also piezoelectric, the two properties being closely related. However, novel materials such as boron aluminum nitride (BAlN) and boron gallium nitride (BGaN), despite being pyroelectric, have zero piezoelectric response for strain along the c-axis at certain compositions. Note also that some piezoelectric materials have a crystal symmetry that does not allow pyroelectricity.
Pyroelectric materials are mostly hard crystals; however, soft pyroelectricity can be achieved by using electrets.
Pyroelectricity is measured as the change in net polarization (a vector) proportional to a change in temperature. The total pyroelectric coefficient measured at constant stress is the sum of the pyroelectric coefficients at constant strain (primary pyroelectric effect) and the piezoelectric contribution from thermal expansion (secondary pyroelectric effect). Under normal circumstances, even polar materials do not display a net dipole moment. As a consequence, there are no electric dipole equivalents of bar magnets because the intrinsic dipole moment is neutralized by "free" electric charge that builds up on the surface by internal conduction or from the ambient atmosphere. Polar crystals only reveal their nature when perturbed in some fashion that momentarily upsets the balance with the compensating surface charge.
Spontaneous polarization is temperature dependent, so a good perturbation probe is a change in temperature which induces a flow of charge to and from the surfaces. This is the pyroelectric effect. All polar crystals are pyroelectric, so the 10 polar crystal classes are sometimes referred to as the pyroelectric classes. Pyroelectric materials can be used as infrared and millimeter wavelength radiation detectors.
An electret is the electrical equivalent of a permanent magnet.
Mathematical description.
The pyroelectric coefficient may be described as the change in the spontaneous polarization vector with temperature:
formula_0
where "pi" (Cm−2K−1) is the vector for the pyroelectric coefficient.
History.
The first record of the pyroelectric effect was made in 1707 by Johann Georg Schmidt, who noted that the "[hot] tourmaline could attract the ashes from the warm or burning coals, as the magnet does iron, but also repelling them again [after the contact]". In 1717 Louis Lemery noticed, as Schmidt had, that small scraps of non-conducting material were first attracted to tourmaline, but then repelled by it once they contacted the stone. In 1747 Linnaeus first related the phenomenon to electricity (he called tourmaline "Lapidem Electricum", "the electric stone"), although this was not proven until 1756 by Franz Ulrich Theodor Aepinus.
Research into pyroelectricity became more sophisticated in the 19th century. In 1824 Sir David Brewster gave the effect the name it has today. Both William Thomson in 1878 and Woldemar Voigt in 1897 helped develop a theory for the processes behind pyroelectricity. Pierre Curie and his brother, Jacques Curie, studied pyroelectricity in the 1880s, leading to their discovery of some of the mechanisms behind piezoelectricity.
The first record of pyroelectricity is sometimes mistakenly attributed to Theophrastus (c. 314 BC). The misconception arose soon after the discovery of the pyroelectric properties of tourmaline, which led mineralogists of the time to associate the legendary stone "Lyngurium" with it. Lyngurium is described in the work of Theophrastus as being similar to amber, without specifying any pyroelectric properties.
Crystal classes.
All crystal structures belong to one of thirty-two crystal classes based on the number of rotational axes and reflection planes they possess that leave the crystal structure unchanged (point groups). Of the thirty-two crystal classes, twenty-one are non-centrosymmetric (not having a centre of symmetry). Of these twenty-one, twenty exhibit direct piezoelectricity, the remaining one being the cubic class 432. Ten of these twenty piezoelectric classes are polar, i.e., they possess a spontaneous polarization, having a dipole in their unit cell, and exhibit pyroelectricity. If this dipole can be reversed by the application of an electric field, the material is said to be ferroelectric. Any dielectric material develops a dielectric polarization (electrostatics) when an electric field is applied, but a substance which has such a natural charge separation even in the absence of a field is called a polar material. Whether or not a material is polar is determined solely by its crystal structure. Only 10 of the 32 point groups are polar. All polar crystals are pyroelectric, so the ten polar crystal classes are sometimes referred to as the pyroelectric classes.
Piezoelectric crystal classes: 1, 2, m, 222, mm2, 4, -4, 422, 4mm, -42m, 3, 32, 3m, 6, -6, 622, 6mm, -62m, 23, -43m
Pyroelectric: 1, 2, m, mm2, 3, 3m, 4, 4mm, 6, 6mm
Related effects.
Two effects which are closely related to pyroelectricity are ferroelectricity and piezoelectricity. Normally materials are very nearly electrically neutral on the macroscopic level. However, the positive and negative charges which make up the material are not necessarily distributed in a symmetric manner. If the sum of charge times distance for all elements of the basic cell does not equal zero the cell will have an electric dipole moment (a vector quantity). The dipole moment per unit volume is defined as the dielectric polarization. If this dipole moment changes with the effect of applied temperature changes, applied electric field, or applied pressure, the material is pyroelectric, ferroelectric, or piezoelectric, respectively.
The ferroelectric effect is exhibited by materials which possess an electric polarization in the absence of an externally applied electric field such that the polarization can be reversed if the electric field is reversed. Since all ferroelectric materials exhibit a spontaneous polarization, all ferroelectric materials are also pyroelectric (but not all pyroelectric materials are ferroelectric).
The piezoelectric effect is exhibited by crystals (such as quartz or ceramic) for which an electric voltage across the material appears when pressure is applied. Similar to pyroelectric effect, the phenomenon is due to the asymmetric structure of the crystals that allows ions to move more easily along one axis than the others. As pressure is applied, each side of the crystal takes on an opposite charge, resulting in a voltage drop across the crystal.
Pyroelectricity should not be confused with thermoelectricity: In a typical demonstration of pyroelectricity, the whole crystal is changed from one temperature to another, and the result is a temporary voltage across the crystal. In a typical demonstration of thermoelectricity, one part of the device is kept at one temperature and the other part at a different temperature, and the result is a "permanent" voltage across the device as long as there is a temperature difference. Both effects convert temperature change to electrical potential, but the pyroelectric effect converts temperature change over "time" into electrical potential, while the thermoelectric effect converts temperature change with "position" into electrical potential.
Pyroelectric materials.
Although artificial pyroelectric materials have been engineered, the effect was first discovered in minerals such as tourmaline. The pyroelectric effect is also present in bone and tendon.
The most important example is gallium nitride, a semiconductor. The large electric fields in this material are detrimental in light emitting diodes (LEDs), but useful for the production of power transistors.
Progress has been made in creating artificial pyroelectric materials, usually in the form of a thin film, using gallium nitride (GaN), caesium nitrate (CsNO3), polyvinyl fluorides, derivatives of phenylpyridine, and cobalt phthalocyanine. Lithium tantalate (LiTaO3) is a crystal exhibiting both piezoelectric and pyroelectric properties, which has been used to create small-scale nuclear fusion ("pyroelectric fusion"). Recently, pyroelectric and piezoelectric properties have been discovered in doped hafnium oxide (HfO2), which is a standard material in CMOS manufacturing.
Applications.
Heat sensors.
Very small changes in temperature can produce a pyroelectric potential. Passive infrared sensors are often designed around pyroelectric materials, as the heat of a human or animal from several feet away is enough to generate a voltage.
Power generation.
A pyroelectric can be repeatedly heated and cooled (analogously to a heat engine) to generate usable electrical power. An example of a heat engine is the movement of the pistons in an internal combustion engine like that found in a gasoline powered automobile.
One group calculated that a pyroelectric in an Ericsson cycle could reach 50% of Carnot efficiency, while a different study found a material that could, in theory, reach 84-92% of Carnot efficiency (these efficiency values are for the pyroelectric itself, ignoring losses from heating and cooling the substrate, other heat-transfer losses, and all other losses elsewhere in the system).
Possible advantages of pyroelectric generators for generating electricity (as compared to the conventional heat engine plus electrical generator) include:
Although a few patents have been filed for such a device, such generators do not appear to be anywhere close to commercialization.
Nuclear fusion.
Pyroelectric materials have been used to generate large electric fields necessary to steer deuterium ions in a nuclear fusion process. This is known as pyroelectric fusion.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\np_i = \\frac{\\partial P_{S,i}} {\\partial T} \n"
}
] |
https://en.wikipedia.org/wiki?curid=155715
|
1557358
|
Molecular knot
|
Molecule whose structure resembles a knot
In chemistry, a molecular knot is a mechanically interlocked molecular architecture that is analogous to a macroscopic knot. Naturally-forming molecular knots are found in organic molecules like DNA, RNA, and proteins. It is not certain that naturally occurring knots are evolutionarily advantageous to nucleic acids or proteins, though knotting is thought to play a role in the structure, stability, and function of knotted biological molecules. The mechanism by which knots naturally form in molecules, and the mechanism by which a molecule is stabilized or improved by knotting, is ambiguous. The study of molecular knots involves the formation and applications of both naturally occurring and chemically synthesized molecular knots. Applying chemical topology and knot theory to molecular knots allows biologists to better understand the structures and synthesis of knotted organic molecules.
The term "knotane" was coined by Vögtle "et al." in 2000 to describe molecular knots by analogy with rotaxanes and catenanes, which are other mechanically interlocked molecular architectures. The term has not been broadly adopted by chemists and has not been adopted by IUPAC.
Naturally occurring molecular knots.
Organic molecules containing knots may fall into the categories of slipknots or pseudo-knots. They are not considered mathematical knots because they are not a closed curve, but rather a knot that exists within an otherwise linear chain, with termini at each end. Knotted proteins are thought to form molecular knots during their tertiary structure folding process, and knotted nucleic acids generally form molecular knots during genomic replication and transcription, though details of knotting mechanism continue to be disputed and ambiguous. Molecular simulations are fundamental to the research on molecular knotting mechanisms.
Knotted DNA was found first by Liu et al. in 1981, in single-stranded, circular, bacterial DNA, though double-stranded circular DNA has been found to also form knots. Naturally knotted RNA has not yet been reported.
A number of proteins containing naturally occurring molecular knots have been identified. The knot types found to be naturally occurring in proteins are the formula_0 and formula_1 knots, as identified in the KnotProt database of known knotted proteins.
Chemically synthesized molecular knots.
Several synthetic molecular knots have been reported. Knot types that have been successfully synthesized in molecules are the formula_2 and 819 knots. Though the formula_3 and formula_1 knots have been found to naturally occur in knotted molecules, they have not been successfully synthesized. Small-molecule composite knots have also not yet been synthesized.
Artificial DNA, RNA, and protein knots have been successfully synthesized. DNA is a particularly useful model for synthetic knot synthesis, as it naturally forms interlocked structures and can be easily manipulated to control precisely the raveling necessary to form knots. Molecular knots are often synthesized with the help of crucial metal ion ligands.
History.
The first researcher to suggest the existence of a molecular knot in a protein was Jane Richardson in 1977, who reported that carbonic anhydrase B (CAB) exhibited apparent knotting during her survey of various proteins' topological behavior. However, the researcher generally credited with the discovery of the first knotted protein is Marc L. Mansfield in 1994, as he was the first to specifically investigate the occurrence of knots in proteins and confirm the existence of the trefoil knot in CAB.
In 1989, Sauvage and coworkers reported the first synthetic knotted molecule: a trefoil synthesized via a double-helix complex with the aid of Cu+ ions.
Vögtle et al. were the first to describe molecular knots as "knotanes", in 2000. Also in 2000, William Taylor created an alternative computational method for analyzing protein knotting that sets the termini at fixed points far enough away from the knotted component of the molecule that the knot type can be well-defined. In this study, Taylor discovered a deep formula_4 knot in a protein, confirming the existence of deeply knotted proteins.
In 2007, Eric Yeates reported the identification of a molecular slipknot, in which the molecule contains knotted subchains even though its backbone chain as a whole is unknotted and does not contain completely knotted structures that are easily detectable by computational models. Mathematically, slipknots are difficult to analyze because they are not recognized in the examination of the complete structure.
A pentafoil knot prepared using dynamic covalent chemistry was synthesized by Ayme et al. in 2012, which at the time was the most complex non-DNA molecular knot prepared to date. Later in 2016, a fully organic pentafoil knot was also reported, including the very first use of a molecular knot to allosterically regulate catalysis. In January 2017, an 819 knot was synthesized by David Leigh's group, making the 819 knot the most complex molecular knot synthesized.
An important development in knot theory is allowing for intra-chain contacts within an entangled molecular chain. Circuit topology has emerged as a topological framework that formalises the arrangement of contacts as well as chain crossings in a folded linear chain. As a complementary approach, Colin Adams et al. developed a singular knot theory that is applicable to folded linear chains with intramolecular interactions.
Applications.
Many synthetic molecular knots have a distinct globular shape and dimensions that make them potential building blocks in nanotechnology.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "+3_1, -3_1, 4_1, -5_2, "
},
{
"math_id": 1,
"text": "+6_1 "
},
{
"math_id": 2,
"text": "3_1, 4_1, 5_1 "
},
{
"math_id": 3,
"text": "-5_2 "
},
{
"math_id": 4,
"text": "4_1 "
}
] |
https://en.wikipedia.org/wiki?curid=1557358
|
15575410
|
Euclidean plane
|
Geometric model of the planar projection of the physical universe
In mathematics, a Euclidean plane is a Euclidean space of dimension two, denoted formula_0 or formula_1. It is a geometric space in which two real numbers are required to determine the position of each point. It is an affine space, which includes in particular the concept of parallel lines. It also has metric properties induced by a distance, which make it possible to define circles and measure angles.
A Euclidean plane with a chosen Cartesian coordinate system is called a "Cartesian plane".
The set formula_2 of the ordered pairs of real numbers (the real coordinate plane), equipped with the dot product, is often called "the Euclidean plane", since every Euclidean plane is isomorphic to it.
History.
Books I through IV and VI of Euclid's Elements dealt with two-dimensional geometry, developing such notions as similarity of shapes, the Pythagorean theorem (Proposition 47), equality of angles and areas, parallelism, the sum of the angles in a triangle, and the three cases in which triangles are "equal" (have the same area), among many other topics.
Later, the plane was described in a so-called "Cartesian coordinate system", a coordinate system that specifies each point uniquely in a plane by a pair of numerical "coordinates", which are the signed distances from the point to two fixed perpendicular directed lines, measured in the same unit of length. Each reference line is called a "coordinate axis" or just "axis" of the system, and the point where they meet is its "origin", usually at ordered pair (0, 0). The coordinates can also be defined as the positions of the perpendicular projections of the point onto the two axes, expressed as signed distances from the origin.
The idea of this system was developed in 1637 in writings by Descartes and independently by Pierre de Fermat, although Fermat also worked in three dimensions, and did not publish the discovery. Both authors used a single (abscissa) axis in their treatments, with the lengths of ordinates measured along lines not-necessarily-perpendicular to that axis. The concept of using a pair of fixed axes was introduced later, after Descartes' "La Géométrie" was translated into Latin in 1649 by Frans van Schooten and his students. These commentators introduced several concepts while trying to clarify the ideas contained in Descartes' work.
Later, the plane was thought of as a field, where any two points could be multiplied and, except for 0, divided. This was known as the complex plane. The complex plane is sometimes called the Argand plane because it is used in Argand diagrams. These are named after Jean-Robert Argand (1768–1822), although they were first described by Danish-Norwegian land surveyor and mathematician Caspar Wessel (1745–1818). Argand diagrams are frequently used to plot the positions of the poles and zeroes of a function in the complex plane.
In geometry.
Coordinate systems.
In mathematics, analytic geometry (also called Cartesian geometry) describes every point in two-dimensional space by means of two coordinates. Two perpendicular coordinate axes are given which cross each other at the origin. They are usually labeled "x" and "y". Relative to these axes, the position of any point in two-dimensional space is given by an ordered pair of real numbers, each number giving the distance of that point from the origin measured along the given axis, which is equal to the distance of that point from the other axis.
Another widely used coordinate system is the polar coordinate system, which specifies a point in terms of its distance from the origin and its angle relative to a rightward reference ray.
Polytopes.
In two dimensions, there are infinitely many polytopes: the polygons. The first few regular ones are shown below:
Convex.
The Schläfli symbol formula_3 represents a regular n-gon.
Degenerate (spherical).
The regular monogon (or henagon) {1} and regular digon {2} can be considered degenerate regular polygons and exist nondegenerately in non-Euclidean spaces like a 2-sphere, 2-torus, or right circular cylinder.
Non-convex.
There exist infinitely many non-convex regular polytopes in two dimensions, whose Schläfli symbols consist of rational numbers {n/m}. They are called star polygons and share the same vertex arrangements of the convex regular polygons.
In general, for any natural number n, there are n-pointed non-convex regular polygonal stars with Schläfli symbols {"n"/"m"} for all "m" such that "m" < "n"/2 (strictly speaking {"n"/"m"} = {"n"/("n" − "m")}) and "m" and "n" are coprime.
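The enumeration of star polygons for a given n can be sketched in Python; the function below simply lists the Schläfli symbols {n/m} with 2 ≤ m < n/2 and gcd(m, n) = 1 (m = 1 is excluded because {n/1} is the convex polygon):

```python
from math import gcd

def star_polygons(n: int) -> list[str]:
    """Schlafli symbols {n/m} with 1 < m < n/2 and gcd(m, n) = 1."""
    return [f"{{{n}/{m}}}" for m in range(2, (n + 1) // 2) if gcd(m, n) == 1]

print(star_polygons(5))  # ['{5/2}']  (the pentagram)
print(star_polygons(7))  # ['{7/2}', '{7/3}']
print(star_polygons(6))  # []        (no regular star hexagon)
```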
Circle.
The hypersphere in 2 dimensions is a circle, sometimes called a 1-sphere ("S"1) because it is a one-dimensional manifold. In a Euclidean plane, it has the length 2π"r" and the area of its interior is
formula_4
where formula_5 is the radius.
Other shapes.
There are an infinitude of other curved shapes in two dimensions, notably including the conic sections: the ellipse, the parabola, and the hyperbola.
In linear algebra.
Another mathematical way of viewing two-dimensional space is found in linear algebra, where the idea of independence is crucial. The plane has two dimensions because the length of a rectangle is independent of its width. In the technical language of linear algebra, the plane is two-dimensional because every point in the plane can be described by a linear combination of two independent vectors.
Dot product, angle, and length.
The dot product of two vectors A = ["A"1, "A"2] and B = ["B"1, "B"2] is defined as:
formula_6
A vector can be pictured as an arrow. Its magnitude is its length, and its direction is the direction the arrow points. The magnitude of a vector A is denoted by formula_7. In this viewpoint, the dot product of two Euclidean vectors A and B is defined by
formula_8
where θ is the angle between A and B.
The dot product of a vector A by itself is
formula_9
which gives
formula_10
the formula for the Euclidean length of the vector.
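A minimal Python sketch ties these relations together, computing the dot product, the Euclidean lengths, and the angle between two vectors; the sample vectors are arbitrary:

```python
import math

def dot(a: tuple[float, float], b: tuple[float, float]) -> float:
    return a[0] * b[0] + a[1] * b[1]

def norm(a: tuple[float, float]) -> float:
    return math.sqrt(dot(a, a))            # |A| = sqrt(A . A)

def angle_between(a, b) -> float:
    """Angle theta with A.B = |A||B| cos(theta), in radians."""
    return math.acos(dot(a, b) / (norm(a) * norm(b)))

A, B = (3.0, 0.0), (1.0, 1.0)
print(dot(A, B))                           # 3.0
print(norm(A), norm(B))                    # 3.0 1.4142...
print(math.degrees(angle_between(A, B)))   # 45.0
```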
In calculus.
Gradient.
In a rectangular coordinate system, the gradient is given by
formula_11
Line integrals and double integrals.
For some scalar field "f" : "U" ⊆ R"2" → R, the line integral along a piecewise smooth curve "C" ⊂ "U" is defined as
formula_12
where r: [a, b] → "C" is an arbitrary bijective parametrization of the curve "C" such that r("a") and r("b") give the endpoints of "C" and formula_13.
For a vector field F : "U" ⊆ R"2" → R"2", the line integral along a piecewise smooth curve "C" ⊂ "U", in the direction of r, is defined as
formula_14
where · is the dot product and r: [a, b] → "C" is a bijective parametrization of the curve "C" such that r("a") and r("b") give the endpoints of "C".
A double integral refers to an integral within a region "D" in R2 of a function formula_15 and is usually written as:
formula_16
Fundamental theorem of line integrals.
The fundamental theorem of line integrals says that a line integral through a gradient field can be evaluated by evaluating the original scalar field at the endpoints of the curve.
Let formula_17. Then
formula_18
with p, q the endpoints of the curve γ.
Green's theorem.
Let "C" be a positively oriented, piecewise smooth, simple closed curve in a plane, and let "D" be the region bounded by "C". If "L" and "M" are functions of ("x", "y") defined on an open region containing "D" and have continuous partial derivatives there, then
formula_19
where the path of integration along C is counterclockwise.
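A small numerical check of Green's theorem can be sketched in Python; the choice L = −y, M = x on the unit disk is my own illustrative example, for which both sides equal 2π:

```python
import math

# Green's theorem check for L = -y, M = x on the unit disk D:
#   left side:  the contour integral of (L dx + M dy) over the unit circle, counterclockwise
#   right side: the double integral of (dM/dx - dL/dy) = 2 over D, i.e. 2 * area = 2*pi

N = 100_000
dt = 2 * math.pi / N
line_integral = 0.0
for i in range(N):
    t = i * dt
    x, y = math.cos(t), math.sin(t)
    dx, dy = -math.sin(t) * dt, math.cos(t) * dt
    line_integral += (-y) * dx + x * dy     # L dx + M dy

double_integral = 2 * math.pi * 1.0**2      # constant integrand 2 times area pi*r^2

print(line_integral, double_integral)       # both ~ 6.2832
```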
In topology.
In topology, the plane is characterized as being the unique contractible 2-manifold.
Its dimension is characterized by the fact that removing a point from the plane leaves a space that is connected, but not simply connected.
In graph theory.
In graph theory, a planar graph is a graph that can be embedded in the plane, i.e., it can be drawn on the plane in such a way that its edges intersect only at their endpoints. In other words, it can be drawn in such a way that no edges cross each other. Such a drawing is called a "plane graph" or "planar embedding of the graph". A plane graph can be defined as a planar graph with a mapping from every node to a point on a plane, and from every edge to a plane curve on that plane, such that the extreme points of each curve are the points mapped from its end nodes, and all curves are disjoint except on their extreme points.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\textbf{E}^2"
},
{
"math_id": 1,
"text": "\\mathbb{E}^2"
},
{
"math_id": 2,
"text": "\\mathbb{R}^2"
},
{
"math_id": 3,
"text": "\\{n\\}"
},
{
"math_id": 4,
"text": "A = \\pi r^{2}"
},
{
"math_id": 5,
"text": "r"
},
{
"math_id": 6,
"text": "\\mathbf{A}\\cdot \\mathbf{B} = A_1B_1 + A_2B_2"
},
{
"math_id": 7,
"text": "\\|\\mathbf{A}\\|"
},
{
"math_id": 8,
"text": "\\mathbf A\\cdot\\mathbf B = \\|\\mathbf A\\|\\,\\|\\mathbf B\\|\\cos\\theta,"
},
{
"math_id": 9,
"text": "\\mathbf A\\cdot\\mathbf A = \\|\\mathbf A\\|^2,"
},
{
"math_id": 10,
"text": " \\|\\mathbf A\\| = \\sqrt{\\mathbf A\\cdot\\mathbf A},"
},
{
"math_id": 11,
"text": "\\nabla f = \\frac{\\partial f}{\\partial x} \\mathbf{i} +\n\\frac{\\partial f}{\\partial y} \\mathbf{j} \\,."
},
{
"math_id": 12,
"text": "\\int\\limits_C f\\, ds = \\int_a^b f(\\mathbf{r}(t)) |\\mathbf{r}'(t)|\\,dt,"
},
{
"math_id": 13,
"text": "a < b"
},
{
"math_id": 14,
"text": "\\int\\limits_C \\mathbf{F}(\\mathbf{r})\\cdot\\,d\\mathbf{r} = \\int_a^b \\mathbf{F}(\\mathbf{r}(t))\\cdot\\mathbf{r}'(t)\\,dt,"
},
{
"math_id": 15,
"text": "f(x,y),"
},
{
"math_id": 16,
"text": "\\iint\\limits_D f(x,y)\\,dx\\,dy."
},
{
"math_id": 17,
"text": " \\varphi : U \\subseteq \\mathbb{R}^2 \\to \\mathbb{R}"
},
{
"math_id": 18,
"text": " \\varphi\\left(\\mathbf{q}\\right)-\\varphi\\left(\\mathbf{p}\\right) = \\int_{\\gamma[\\mathbf{p},\\mathbf{q}]} \\nabla\\varphi(\\mathbf{r})\\cdot d\\mathbf{r} , "
},
{
"math_id": 19,
"text": "\\oint_{C} (L\\, dx + M\\, dy) = \\iint_{D} \\left(\\frac{\\partial M}{\\partial x} - \\frac{\\partial L}{\\partial y}\\right)\\, dx\\, dy"
}
] |
https://en.wikipedia.org/wiki?curid=15575410
|
1557562
|
Propositional variable
|
Variable that can either be true or false
In mathematical logic, a propositional variable (also called a sentence letter, sentential variable, or sentential letter) is an input variable (that can either be true or false) of a truth function. Propositional variables are the basic building-blocks of propositional formulas, used in propositional logic and higher-order logics.
Uses.
Formulas in logic are typically built up recursively from some propositional variables, some number of logical connectives, and some logical quantifiers. Propositional variables are the atomic formulas of propositional logic, and are often denoted using capital roman letters such as formula_0, formula_1 and formula_2.
In a given propositional logic, a formula can be defined as follows:
Through this construction, all of the formulas of propositional logic can be built up from propositional variables as a basic unit. Propositional variables should not be confused with the metavariables, which appear in the typical axioms of propositional calculus; the latter effectively range over well-formed formulae, and are often denoted using lower-case greek letters such as formula_3, formula_4 and formula_5.
Predicate logic.
Propositional variables with no object variables (such as "x" and "y") attached to predicate letters (as in P"x" and "x"R"y"), but instead with individual constants "a", "b", ... attached to predicate letters, are propositional constants, such as P"a" and "a"R"b". These propositional constants are atomic propositions, not containing propositional operators.
The internal structure of propositional variables contains predicate letters such as P and Q, in association with bound individual variables (e.g., x, "y"), individual constants such as "a" and "b" (singular terms from a domain of discourse D), ultimately taking a form such as P"a", "a"R"b".(or with parenthesis, formula_6 and formula_7).
Propositional logic is sometimes called zeroth-order logic due to not considering the internal structure in contrast with first-order logic which analyzes the internal structure of the atomic sentences.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "P"
},
{
"math_id": 1,
"text": "Q"
},
{
"math_id": 2,
"text": "R"
},
{
"math_id": 3,
"text": "\\alpha"
},
{
"math_id": 4,
"text": "\\beta"
},
{
"math_id": 5,
"text": "\\gamma"
},
{
"math_id": 6,
"text": "P(11)"
},
{
"math_id": 7,
"text": "R(1, 3)"
}
] |
https://en.wikipedia.org/wiki?curid=1557562
|
155759
|
Launch window
|
Time period during which a rocket must be launched in order to reach its intended target
In the context of spaceflight, launch period is the collection of days and launch window is the time period on a given day during which a particular rocket must be launched in order to reach its intended target. If the rocket is not launched within a given window, it has to wait for the window on the next day of the period. Launch periods and launch windows are very dependent on both the rocket's capability and the orbit to which it is going.
A launch period refers to the days that the rocket can launch to reach its intended orbit. A mission could have a period of 365 days in a year, a few weeks each month, a few weeks every 26 months (e.g. Mars launch periods), or a short period of time that won't be repeated.
A launch window indicates the time frame on a given day in the launch period that the rocket can launch to reach its intended orbit. This can be as short as a second (referred to as an instantaneous window) or even the entire day. For operational reasons, the window almost always is limited to no more than a few hours. The launch window can stretch over two calendar days (e.g. starting at 11:46 p.m. and ending at 12:14 a.m.). Launch windows are only rarely exactly the same times each day.
Launch windows and launch periods are often used interchangeably in the public sphere, even within the same organization. However, these definitions are the ones used by NASA (and other space agencies) launch directors and trajectory analysts.
Launch period.
To go to another planet using the simple low-energy Hohmann transfer orbit, if eccentricity of orbits is not a factor, launch periods are periodic according to the synodic period; for example, in the case of Mars, the period is 780 days (2.1 years). In more complex cases, including the use of gravitational slingshots, launch periods are irregular. Sometimes rare opportunities arise, such as when "Voyager 2" took advantage of a planetary alignment occurring once in 175 years to visit Jupiter, Saturn, Uranus, and Neptune. When such an opportunity is missed, another target may be selected. For instance, ESA's "Rosetta" mission was originally intended for comet 46P/Wirtanen, but a launcher problem delayed it and a new target had to be selected (comet 67P/Churyumov-Gerasimenko).
Launch periods are often calculated from porkchop plots, which show the delta-v needed to achieve the mission plotted against the launch time.
Launch window.
The launch window is defined by the first launch point and ending launch point. It may be continuous (i.e. able to launch every second in the launch window) or may be a collection of discrete instantaneous points between the open and close. Launch windows and days are usually calculated in UTC and then converted to the local time of where the rocket and spacecraft operators are located (frequently multiple time zones for USA launches).
For trips into largely arbitrary Earth orbits, no specific launch time is required. But if the spacecraft intends to rendezvous with an object already in orbit, the launch must be carefully timed to occur around the times that the target vehicle's orbital plane intersects the launch site.
Earth observation satellites are often launched into sun-synchronous orbits which are near-polar. For these orbits, the launch window occurs at the time of day when the launch site location is aligned with the plane of the required orbit. To launch at another time would require an orbital plane change maneuver which would require a large amount of propellant.
For launches above low Earth orbit (LEO), the actual launch time can be somewhat flexible if a parking orbit is used, because the inclination and time the spacecraft initially spends in the parking orbit can be varied. An example is the launch window used by the "Mars Global Surveyor" spacecraft for its mission to the planet Mars.
Instantaneous launch window.
Achieving the correct orbit requires the correct right ascension of the ascending node (RAAN). RAAN is set by varying the launch time, waiting for the Earth to rotate until the launch site is in the correct position relative to the desired orbital plane. For missions with very specific orbits, such as rendezvousing with the International Space Station, the launch window may be a single moment in time, known as an instantaneous launch window.
Trajectories are programmed into a launch vehicle prior to launch. The launch vehicle will have a target, and the guidance system will alter the steering commands to attempt to reach the final end state. At least one variable (apogee, perigee, inclination, etc.) must be left free to alter the values of the others, otherwise the dynamics would be overconstrained. An instantaneous launch window allows the RAAN to be the uncontrolled variable. While some vehicles, such as the Centaur upper stage, can steer and adjust their RAAN after launch, choosing an instantaneous launch window allows the RAAN to be pre-determined for the spacecraft's guidance system.
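As an illustration of how a RAAN target translates into a time-of-day window, the following sketch scans one day for the instant when the launch site's local sidereal angle matches a desired RAAN. It assumes a simplified due-east, ascending-node-style launch and uses an approximate sidereal-time formula; the RAAN, longitude, and date are placeholder values, not mission data.
import math
from datetime import datetime, timedelta, timezone

def gmst_deg(t):
    """Approximate Greenwich mean sidereal time in degrees."""
    j2000 = datetime(2000, 1, 1, 12, tzinfo=timezone.utc)
    d = (t - j2000).total_seconds() / 86400.0      # days since J2000.0
    return (280.46061837 + 360.98564736629 * d) % 360.0

def launch_time_for_raan(raan_deg, site_lon_deg, day_utc):
    """Scan one day for the instant when the site's local sidereal angle
    matches the target RAAN (simplified due-east, ascending-pass model)."""
    best, best_err = None, 1e9
    for sec in range(0, 86400, 10):
        t = day_utc + timedelta(seconds=sec)
        lst = (gmst_deg(t) + site_lon_deg) % 360.0
        err = abs((lst - raan_deg + 180.0) % 360.0 - 180.0)
        if err < best_err:
            best, best_err = t, err
    return best

# Hypothetical inputs: target RAAN 130 deg, Cape Canaveral longitude -80.6 deg.
t0 = datetime(2024, 3, 1, tzinfo=timezone.utc)
print(launch_time_for_raan(130.0, -80.6, t0))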
Specific issues.
Space Shuttle missions to the International Space Station were restricted by beta angle cutout. Beta angle (formula_0) is defined as the angle between the orbit plane and the vector from the Sun. Due to the relationship between an orbiting object's beta angle (in this case, the ISS) and the percent of its orbit that is spent in sunlight, solar power generation and thermal control are affected by that beta angle. Shuttle launches to the ISS were normally attempted only when the ISS was in an orbit with a beta angle of less than 60 degrees.
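For reference, the beta angle can be computed from the orbit's inclination and RAAN together with the Sun's right ascension and declination using the standard spherical-geometry relation; the sketch below uses placeholder ISS-like elements and an arbitrary solar position rather than real ephemeris data.
import math

def beta_angle_deg(incl_deg, raan_deg, sun_ra_deg, sun_dec_deg):
    """Angle between the orbit plane and the Earth-Sun vector:
    sin(beta) = cos(dec_s)*sin(i)*sin(RAAN - RA_s) + sin(dec_s)*cos(i)."""
    i, O = math.radians(incl_deg), math.radians(raan_deg)
    ra, dec = math.radians(sun_ra_deg), math.radians(sun_dec_deg)
    s = math.cos(dec) * math.sin(i) * math.sin(O - ra) + math.sin(dec) * math.cos(i)
    return math.degrees(math.asin(s))

# Placeholder ISS-like inclination and an arbitrary solar position:
print(round(beta_angle_deg(51.6, 120.0, 40.0, 15.0), 1))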
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\beta"
}
] |
https://en.wikipedia.org/wiki?curid=155759
|
155760
|
Hohmann transfer orbit
|
Transfer manoeuvre between two orbits
In astronautics, the Hohmann transfer orbit () is an orbital maneuver used to transfer a spacecraft between two orbits of different altitudes around a central body. For example, a Hohmann transfer could be used to raise a satellite's orbit from low Earth orbit to geostationary orbit. In the idealized case, the initial and target orbits are both circular and coplanar. The maneuver is accomplished by placing the craft into an elliptical transfer orbit that is tangential to both the initial and target orbits. The maneuver uses two impulsive engine burns: the first establishes the transfer orbit, and the second adjusts the orbit to match the target.
The Hohmann maneuver often uses the lowest possible amount of impulse (which consumes a proportional amount of delta-v, and hence propellant) to accomplish the transfer, but requires a relatively longer travel time than higher-impulse transfers. In some cases where one orbit is much larger than the other, a bi-elliptic transfer can use even less impulse, at the cost of even greater travel time.
The maneuver was named after Walter Hohmann, the German scientist who published a description of it in his 1925 book "Die Erreichbarkeit der Himmelskörper" ("The Attainability of Celestial Bodies"). Hohmann was influenced in part by the German science fiction author Kurd Lasswitz and his 1897 book "Two Planets".
When used for traveling between celestial bodies, a Hohmann transfer orbit requires that the starting and destination points be at particular locations in their orbits relative to each other. Space missions using a Hohmann transfer must wait for this required alignment to occur, which opens a launch window. For a mission between Earth and Mars, for example, these launch windows occur every 26 months. A Hohmann transfer orbit also determines a fixed time required to travel between the starting and destination points; for an Earth-Mars journey this travel time is about 9 months. When transfer is performed between orbits close to celestial bodies with significant gravitation, much less delta-v is usually required, as the Oberth effect may be employed for the burns.
Hohmann transfers are also often used for these situations, but low-energy transfers, which take into account the thrust limitations of real engines and take advantage of the gravity wells of both planets, can be more fuel-efficient.
Example.
The diagram shows a Hohmann transfer orbit to bring a spacecraft from a lower circular orbit into a higher one. It is an elliptic orbit that is tangential both to the lower circular orbit the spacecraft is to leave (cyan, labeled "1" on diagram) and the higher circular orbit that it is to reach (red, labeled "3" on diagram). The transfer orbit (yellow, labeled "2" on diagram) is initiated by firing the spacecraft's engine to add energy and raise the apogee. When the spacecraft reaches apogee, a second engine firing adds energy to raise the perigee, putting the spacecraft in the larger circular orbit.
Due to the reversibility of orbits, a similar Hohmann transfer orbit can be used to bring a spacecraft from a higher orbit into a lower one; in this case, the spacecraft's engine is fired in the opposite direction to its current path, slowing the spacecraft and lowering its perigee to that of the elliptical transfer orbit. The engine is then fired again at the lower distance to slow the spacecraft into the lower circular orbit.
The Hohmann transfer orbit is based on two instantaneous velocity changes. Extra fuel is required to compensate for the fact that the bursts take time; this is minimized by using high-thrust engines to minimize the duration of the bursts. For transfers in Earth orbit, the two burns are labelled the "perigee burn" and the "apogee burn" (or "apogee kick"); more generally, they are labelled "periapsis" and "apoapsis" burns. Alternately, the second burn to circularize the orbit may be referred to as a "circularization burn".
Type I and Type II.
An ideal Hohmann transfer orbit transfers between two circular orbits in the same plane and traverses exactly 180° around the primary. In the real world, the destination orbit may not be circular, and may not be coplanar with the initial orbit. Real world transfer orbits may traverse slightly more, or slightly less, than 180° around the primary. An orbit which traverses less than 180° around the primary is called a "Type I" Hohmann transfer, while an orbit which traverses more than 180° is called a "Type II" Hohmann transfer.
Transfer orbits can go more than 360° around the primary. These multiple-revolution transfers are sometimes referred to as Type III and Type IV, where a Type III is a Type I plus 360°, and a Type IV is a Type II plus 360°.
Uses.
A Hohmann transfer orbit can be used to transfer an object's orbit toward another object, as long as they co-orbit a more massive body. In the context of Earth and the Solar System, this includes any object which orbits the Sun. An example of where a Hohmann transfer orbit could be used is to bring an asteroid, orbiting the Sun, into contact with the Earth.
Calculation.
For a small body orbiting another much larger body, such as a satellite orbiting Earth, the total energy of the smaller body is the sum of its kinetic energy and potential energy, and this total energy also equals half the potential at the average distance formula_0 (the semi-major axis):
formula_1
Solving this equation for velocity results in the vis-viva equation,
formula_2
where:
* formula_3 is the speed of an orbiting body,
* formula_4 is the standard gravitational parameter of the primary body, assuming formula_5 is not significantly bigger than formula_6 (which makes formula_7),
* formula_8 is the distance of the orbiting body from the primary focus,
* formula_0 is the semi-major axis of the body's orbit.
Therefore, the delta-"v" (Δv) required for the Hohmann transfer can be computed as follows, under the assumption of instantaneous impulses:
formula_9
to enter the elliptical orbit at formula_10 from the formula_11 circular orbit, where formula_12 is the aphelion of the resulting elliptical orbit, and
formula_13
to leave the elliptical orbit at formula_14 to the formula_12 circular orbit,
where formula_11 and formula_12 are respectively the radii of the departure and arrival circular orbits;
the smaller (greater) of formula_11 and formula_12 corresponds to the periapsis distance (apoapsis distance) of the Hohmann elliptical transfer orbit. Typically, formula_15 is given in units of m³/s², so be sure to use meters, not kilometers, for formula_11 and formula_12. The total formula_16 is then:
formula_17
Whether moving into a higher or lower orbit, by Kepler's third law, the time taken to transfer between the orbits is
formula_18
(one half of the orbital period for the whole ellipse), where formula_19 is length of semi-major axis of the Hohmann transfer orbit.
When traveling from one celestial body to another, it is crucial to start the maneuver at the time when the two bodies are properly aligned. With the target's angular velocity being
formula_20
the angular alignment α (in radians) between the source object and the target object at the time of departure must be
formula_21
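The formulas above translate directly into code. The sketch below (with μ in m³/s² and radii in meters) computes the two burns, the transfer time, and the required phase angle; the demonstration values anticipate the LEO-to-geostationary example discussed in the next section, and the heliocentric call uses standard textbook values for the Sun's μ and the planets' mean orbital radii.
import math

def hohmann(mu, r1, r2):
    """Two-impulse Hohmann transfer between coplanar circular orbits."""
    dv1 = math.sqrt(mu / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)    # first burn
    dv2 = math.sqrt(mu / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))    # second burn
    t_h = math.pi * math.sqrt((r1 + r2) ** 3 / (8 * mu))              # transfer time
    alpha = math.pi * (1 - (1 / (2 * math.sqrt(2))) * math.sqrt((r1 / r2 + 1) ** 3))
    return dv1, dv2, t_h, alpha

mu_earth = 3.986004418e14                 # m^3/s^2
dv1, dv2, t_h, _ = hohmann(mu_earth, 6.678e6, 4.2164e7)
print(round(dv1), round(dv2))             # ~2426 m/s and ~1467 m/s
print(round(t_h / 3600, 2))               # ~5.3 hours

mu_sun = 1.32712440018e20
alpha_mars = hohmann(mu_sun, 1.496e11, 2.279e11)[3]
print(round(math.degrees(alpha_mars), 1)) # Earth-to-Mars phase angle, ~44 degrees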
Example.
Consider a geostationary transfer orbit, beginning at "r"1 = 6,678 km (altitude 300 km) and ending in a geostationary orbit with "r"2 = 42,164 km (altitude 35,786 km).
In the smaller circular orbit the speed is 7.73 km/s; in the larger one, 3.07 km/s. In the elliptical orbit in between the speed varies from 10.15 km/s at the perigee to 1.61 km/s at the apogee.
Therefore the Δv for the first burn is 10.15 − 7.73 = 2.42 km/s, for the second burn 3.07 − 1.61 = 1.46 km/s, and for both together 3.88 km/s.
This is "greater" than the Δv required for an escape orbit: 10.93 − 7.73 = 3.20 km/s. Applying a Δv at the Low Earth orbit (LEO) of only 0.78 km/s more (3.20−2.42) would give the rocket the escape velocity, which is less than the Δv of 1.46 km/s required to circularize the geosynchronous orbit. This illustrates the Oberth effect that at large speeds the same Δv provides more specific orbital energy, and energy increase is maximized if one spends the Δv as quickly as possible, rather than spending some, being decelerated by gravity, and then spending some more to overcome the deceleration (of course, the objective of a Hohmann transfer orbit is different).
Worst case, maximum delta-"v".
As the example above demonstrates, the Δ"v" required to perform a Hohmann transfer between two circular orbits is not the greatest when the destination radius is infinite. (Escape speed is √2 times orbital speed, so the Δv required to escape is √2 − 1 (41.4%) of the orbital speed.) The Δv required is greatest (about 53.6% of the smaller orbit's speed) when the radius of the larger orbit is 15.5817... times that of the smaller orbit. This number is the positive root of "x"3 − 15"x"2 − 9"x" − 1 = 0, which is formula_22. For higher orbit ratios the Δ"v" required for the second burn decreases faster than the first increases.
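That worst-case figure can be checked numerically by expressing the total Δv in units of the initial circular speed and scanning over the orbit ratio; this is a brute-force sketch rather than a derivation.
import math

def total_dv_over_v1(R):
    """Total Hohmann delta-v divided by the initial circular speed, R = r2/r1."""
    dv1 = math.sqrt(2 * R / (1 + R)) - 1
    dv2 = (1 - math.sqrt(2 / (1 + R))) / math.sqrt(R)
    return dv1 + dv2

best_R = max((R / 1000 for R in range(1000, 100000)), key=total_dv_over_v1)
print(round(best_R, 2), round(total_dv_over_v1(best_R), 3))   # ~15.58 and ~0.536 (53.6%)
print(round(math.sqrt(2) - 1, 3))                             # escape case: ~0.414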
Application to interplanetary travel.
When used to move a spacecraft from orbiting one planet to orbiting another, the Oberth effect makes it possible to use less delta-"v" than the sum of the delta-"v" for separate manoeuvres to escape the first planet, followed by a Hohmann transfer to the second planet, followed by insertion into an orbit around the other planet.
For example, consider a spacecraft travelling from Earth to Mars. At the beginning of its journey, the spacecraft will already have a certain velocity and kinetic energy associated with its orbit around Earth. During the departure burn the rocket engine applies its delta-"v", and because kinetic energy grows with the square of speed, each increment of delta-"v" adds progressively more energy; the spacecraft first gains enough energy to escape the planet's gravitational potential, and then burns further to gain enough energy to get into the Hohmann transfer orbit (around the Sun). Because the rocket engine is able to make use of the initial kinetic energy of the propellant, far less delta-"v" is required over and above that needed to reach escape velocity, and the optimum situation is when the transfer burn is made at minimum altitude (low periapsis) above the planet. The delta-"v" needed is only 3.6 km/s, only about 0.4 km/s more than needed to escape Earth, even though this results in the spacecraft going 2.9 km/s faster than the Earth as it heads off for Mars (see table below).
At the other end, the spacecraft must decelerate for the gravity of Mars to capture it. This capture burn should optimally be done at low altitude to also make best use of the Oberth effect. Therefore, relatively small amounts of thrust at either end of the trip are needed to arrange the transfer compared to the free space situation.
However, with any Hohmann transfer, the alignment of the two planets in their orbits is crucial – the destination planet and the spacecraft must arrive at the same point in their respective orbits around the Sun at the same time. This requirement for alignment gives rise to the concept of launch windows.
The term lunar transfer orbit (LTO) is used for the Moon.
It is possible to apply the formula given above to calculate the Δv in km/s needed to enter a Hohmann transfer orbit to arrive at various destinations from Earth (assuming circular orbits for the planets). In this table, the column labeled "Δv to enter Hohmann orbit from Earth's orbit" gives the change from Earth's velocity to the velocity needed to get on a Hohmann ellipse whose other end will be at the desired distance from the Sun. The column labeled "LEO height" gives the velocity needed (in a non-rotating frame of reference centered on the Earth) when 300 km above the Earth's surface. This is obtained by combining that hyperbolic excess speed with the escape velocity (10.9 km/s) from this height: the required speed is the square root of the sum of their squares. The column "LEO" is simply the previous speed minus the LEO orbital speed of 7.73 km/s.
Note that in most cases, Δ"v" from LEO is less than the Δ"v" to enter Hohmann orbit from Earth's orbit.
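The bookkeeping for a single row of such a table can be sketched as follows for Mars, combining the hyperbolic excess speed quoted above (about 2.9 km/s, treated here as 2.94 km/s) with the escape and orbital speeds at 300 km altitude; all three input numbers are approximations.
import math

v_inf = 2.94e3    # m/s, approximate hyperbolic excess speed for a Mars transfer
v_esc = 10.93e3   # m/s, escape speed 300 km above Earth's surface
v_leo = 7.73e3    # m/s, circular orbit speed at 300 km altitude

v_depart = math.sqrt(v_inf**2 + v_esc**2)    # speed needed at 300 km altitude
print(round(v_depart / 1000, 2))             # ~11.32 km/s
print(round((v_depart - v_leo) / 1000, 2))   # dv from LEO ~3.59 km/s (~3.6 in the text)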
To get to the Sun, it is actually not necessary to use a Δ"v" of 24 km/s. One can use 8.8 km/s to go very far away from the Sun, then use a negligible Δ"v" to bring the angular momentum to zero, and then fall into the Sun. This can be considered a sequence of two Hohmann transfers, one up and one down. Also, the table does not give the values that would apply when using the Moon for a gravity assist. There are also possibilities of using one planet, like Venus which is the easiest to get to, to assist getting to other planets or the Sun.
Comparison to other transfers.
Bi-elliptic transfer.
The bi-elliptic transfer consists of two half-elliptic orbits. From the initial orbit, a first burn expends delta-v to boost the spacecraft into the first transfer orbit with an apoapsis at some point formula_23 away from the central body. At this point a second burn sends the spacecraft into the second elliptical orbit with periapsis at the radius of the final desired orbit, where a third burn is performed, injecting the spacecraft into the desired orbit.
While they require one more engine burn than a Hohmann transfer and generally require a greater travel time, some bi-elliptic transfers require a lower amount of total delta-v than a Hohmann transfer when the ratio of final to initial semi-major axis is 11.94 or greater, depending on the intermediate semi-major axis chosen.
The idea of the bi-elliptical transfer trajectory was first published by Ary Sternfeld in 1934.
Low-thrust transfer.
Low-thrust engines can perform an approximation of a Hohmann transfer orbit, by creating a gradual enlargement of the initial circular orbit through carefully timed engine firings. This requires a change in velocity (delta-"v") that is greater than that of the two-impulse transfer orbit and takes longer to complete.
Engines such as ion thrusters are more difficult to analyze with the delta-"v" model. These engines offer very low thrust but, at the same time, a much higher delta-"v" budget, much higher specific impulse, and lower mass of fuel and engine. A 2-burn Hohmann transfer maneuver would be impractical with such a low thrust; the maneuver mainly optimizes the use of fuel, but in this situation there is relatively plenty of it.
If only low-thrust maneuvers are planned on a mission, then continuously firing a low-thrust, but very high-efficiency engine might generate a higher delta-"v" and at the same time use less propellant than a conventional chemical rocket engine.
Going from one circular orbit to another by gradually changing the radius simply requires the same delta-"v" as the difference between the two speeds. Such a maneuver requires more delta-"v" than a 2-burn Hohmann transfer maneuver, but does so with continuous low thrust rather than the short applications of high thrust.
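For the LEO-to-geostationary case used earlier, this "difference of the two speeds" rule gives a noticeably larger total than the two-burn Hohmann figure; a quick check under the same assumed values:
import math

mu = 3.986004418e14
r1, r2 = 6.678e6, 4.2164e7

dv_spiral  = math.sqrt(mu / r1) - math.sqrt(mu / r2)   # slow spiral, ~4.65 km/s
dv_hohmann = (math.sqrt(mu / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)
              + math.sqrt(mu / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2))))  # ~3.89 km/s
print(round(dv_spiral / 1000, 2), round(dv_hohmann / 1000, 2))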
The amount of propellant mass used measures the efficiency of the maneuver plus the hardware employed for it. The total delta-"v" used measures the efficiency of the maneuver only. For electric propulsion systems, which tend to be low-thrust, the high efficiency of the propulsive system usually compensates for the higher delta-V compared to the more efficient Hohmann maneuver.
Transfer orbits using electrical propulsion or low-thrust engines optimize the transfer time to reach the final orbit and not the delta-v as in the Hohmann transfer orbit. For geostationary orbit, the initial orbit is set to be supersynchronous, and by thrusting continuously in the direction of the velocity at apogee, the transfer orbit transforms to a circular geosynchronous one. This method, however, takes much longer to achieve due to the low thrust.
Interplanetary Transport Network.
In 1997, a set of orbits known as the Interplanetary Transport Network (ITN) was published, providing even lower propulsive delta-"v" (though much slower and longer) paths between different orbits than Hohmann transfer orbits. The Interplanetary Transport Network is different in nature than Hohmann transfers because Hohmann transfers assume only one large body whereas the Interplanetary Transport Network does not. The Interplanetary Transport Network is able to achieve the use of less propulsive delta-"v" by employing gravity assist from the planets.
Citations.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "a"
},
{
"math_id": 1,
"text": "E=\\frac{m v^2}{2} - \\frac{G M m}{r} = \\frac{-G M m}{2 a}."
},
{
"math_id": 2,
"text": " v^2 = \\mu \\left( \\frac{2}{r} - \\frac{1}{a} \\right), "
},
{
"math_id": 3,
"text": "v"
},
{
"math_id": 4,
"text": "\\mu = GM"
},
{
"math_id": 5,
"text": "M + m"
},
{
"math_id": 6,
"text": "M"
},
{
"math_id": 7,
"text": "v_M \\ll v"
},
{
"math_id": 8,
"text": "r"
},
{
"math_id": 9,
"text": "\\Delta v_1 \n= \\sqrt{\\frac{\\mu}{r_1}}\n \\left( \\sqrt{\\frac{2 r_2}{r_1+r_2}} - 1 \\right),"
},
{
"math_id": 10,
"text": "r = r_1"
},
{
"math_id": 11,
"text": "r_1"
},
{
"math_id": 12,
"text": "r_2"
},
{
"math_id": 13,
"text": "\\Delta v_2 \n= \\sqrt{\\frac{\\mu}{r_2}}\n \\left(1 - \\sqrt{\\frac{2 r_1}{r_1+r_2}}\\right), "
},
{
"math_id": 14,
"text": "r = r_2"
},
{
"math_id": 15,
"text": "\\mu"
},
{
"math_id": 16,
"text": "\\Delta v"
},
{
"math_id": 17,
"text": "\\Delta v_\\text{total} = \\Delta v_1 + \\Delta v_2. "
},
{
"math_id": 18,
"text": " t_\\text{H}\n= \\frac{1}{2}\\sqrt{\\frac{4\\pi^2 a_\\text{H}^3}{\\mu}}\n= \\pi \\sqrt{\\frac {(r_1 + r_2)^3}{8\\mu}} "
},
{
"math_id": 19,
"text": " a_\\text{H}"
},
{
"math_id": 20,
"text": " \\omega_2 = \\sqrt{\\frac{\\mu}{r_2^3}}, "
},
{
"math_id": 21,
"text": " \\alpha = \\pi - \\omega_2 t_\\text{H} = \\pi\\left(1 -\\frac{1}{2\\sqrt{2}}\\sqrt{\\left(\\frac{r_1}{r_2} + 1\\right)^3}\\right). "
},
{
"math_id": 22,
"text": "5+4\\,\\sqrt{7}\\cos\\left({1\\over 3}\\arctan{\\sqrt{3}\\over 37}\\right)"
},
{
"math_id": 23,
"text": "r_b"
}
] |
https://en.wikipedia.org/wiki?curid=155760
|
1557634
|
Propositional formula
|
Logic formula
In propositional logic, a propositional formula is a type of syntactic formula which is well formed. If the values of all variables in a propositional formula are given, it determines a unique truth value. A propositional formula may also be called a propositional expression, a sentence, or a sentential formula.
A propositional formula is constructed from simple propositions, such as "five is greater than three" or propositional variables such as "p" and "q", using connectives or logical operators such as NOT, AND, OR, or IMPLIES; for example:
("p" AND NOT "q") IMPLIES ("p" OR "q").
In mathematics, a propositional formula is often more briefly referred to as a "proposition", but, more precisely, a propositional formula is not a proposition but a formal expression that "denotes" a proposition, a formal object under discussion, just like an expression such as ""x" + "y"" is not a value, but denotes a value. In some contexts, maintaining the distinction may be of importance.
Propositions.
For the purposes of the propositional calculus, propositions (utterances, sentences, assertions) are considered to be either simple or compound. Compound propositions are considered to be linked by sentential connectives, some of the most common of which are "AND", "OR", "IF ... THEN ...", "NEITHER ... NOR ...", "... IS EQUIVALENT TO ..." . The linking semicolon ";", and connective "BUT" are considered to be expressions of "AND". A sequence of discrete sentences is considered to be linked by "AND"s, and formal analysis applies a recursive "parenthesis rule" with respect to sequences of simple propositions (see more below about well-formed formulas).
For example: The assertion: "This cow is blue. That horse is orange but this horse here is purple." is actually a compound proposition linked by "AND"s: ( ("This cow is blue" AND "that horse is orange") AND "this horse here is purple" ) .
Simple propositions are declarative in nature, that is, they make assertions about the condition or nature of a "particular" object of sensation e.g. "This cow is blue", "There's a coyote!" ("That coyote IS "there", behind the rocks."). Thus the simple "primitive" assertions must be about specific objects or specific states of mind. Each must have at least a subject (an immediate object of thought or observation), a verb (in the active voice and present tense preferred), and perhaps an adjective or adverb. "Dog!" probably implies "I see a dog" but should be rejected as too ambiguous.
Example: "That purple dog is running", "This cow is blue", "Switch M31 is closed", "This cap is off", "Tomorrow is Friday".
For the purposes of the propositional calculus a compound proposition can usually be reworded into a series of simple sentences, although the result will probably sound stilted.
Relationship between propositional and predicate formulas.
The predicate calculus goes a step further than the propositional calculus to an "analysis of the "inner structure" of propositions" It breaks a simple sentence down into two parts (i) its subject (the object (singular or plural) of discourse) and (ii) a predicate (a verb or possibly verb-clause that asserts a quality or attribute of the object(s)). The predicate calculus then generalizes the "subject|predicate" form (where | symbolizes concatenation (stringing together) of symbols) into a form with the following blank-subject structure " ___|predicate", and the predicate in turn generalized to all things with that property.
Example: "This blue pig has wings" becomes two sentences in the "propositional calculus": "This pig has wings" AND "This pig is blue", whose internal structure is not considered. In contrast, in the predicate calculus, the first sentence breaks into "this pig" as the subject, and "has wings" as the predicate. Thus it asserts that object "this pig" is a member of the class (set, collection) of "winged things". The second sentence asserts that object "this pig" has an attribute "blue" and thus is a member of the class of "blue things". One might choose to write the two sentences connected with AND as:
p|W AND p|B
The generalization of "this pig" to a (potential) member of two classes "winged things" and "blue things" means that it has a truth-relationship with both of these classes. In other words, given a domain of discourse "winged things", p is either found to be a member of this domain or not. Thus there is a relationship W (wingedness) between p (pig) and { T, F }, W(p) evaluates to { T, F } where { T, F } is the set of the Boolean values "true" and "false". Likewise for B (blueness) and p (pig) and { T, F }: B(p) evaluates to { T, F }. So one now can analyze the connected assertions "B(p) AND W(p)" for its overall truth-value, i.e.:
( B(p) AND W(p) ) evaluates to { T, F }
In particular, simple sentences that employ notions of "all", "some", "a few", "one of", etc. called logical quantifiers are treated by the predicate calculus. Along with the new function symbolism "F(x)" two new symbols are introduced: ∀ (For all), and ∃ (There exists ..., At least one of ... exists, etc.). The predicate calculus, but not the propositional calculus, can establish the formal validity of the following statement:
"All blue pigs have wings but some pigs have no wings, hence some pigs are not blue".
Identity.
Tarski asserts that the notion of IDENTITY (as distinguished from LOGICAL EQUIVALENCE) lies outside the propositional calculus; however, he notes that if a logic is to be of use for mathematics and the sciences it must contain a "theory" of IDENTITY. Some authors refer to "predicate logic with identity" to emphasize this extension. See more about this below.
An algebra of propositions, the propositional calculus.
An algebra (and there are many different ones), loosely defined, is a method by which a collection of symbols called variables together with some other symbols such as parentheses (, ) and some sub-set of symbols such as *, +, ~, &, ∨, =, ≡, ∧, ¬ are manipulated within a system of rules. These symbols, and well-formed strings of them, are said to represent objects, but in a specific algebraic system these objects do not have meanings. Thus work inside the algebra becomes an exercise in obeying certain laws (rules) of the algebra's syntax (symbol-formation) rather than in semantics (meaning) of the symbols. The meanings are to be found outside the algebra.
For a well-formed sequence of symbols in the algebra —a formula— to have some usefulness outside the algebra the symbols are assigned meanings and eventually the variables are assigned values; then by a series of rules the formula is evaluated.
When the values are restricted to just two and applied to the notion of simple sentences (e.g. spoken utterances or written assertions) linked by propositional connectives this whole algebraic system of symbols and rules and evaluation-methods is usually called the propositional calculus or the sentential calculus.
While some of the familiar rules of arithmetic algebra continue to hold in the algebra of propositions (e.g. the commutative and associative laws for AND and OR), some do not (e.g. the distributive laws for AND, OR and NOT).
Usefulness of propositional formulas.
Analysis: In deductive reasoning, philosophers, rhetoricians and mathematicians reduce arguments to formulas and then study them (usually with truth tables) for correctness (soundness). For example: Is the following argument sound?
"Given that consciousness is sufficient for an artificial intelligence and only conscious entities can pass the Turing test, before we can conclude that a robot is an artificial intelligence the robot must pass the Turing test."
Engineers analyze the logic circuits they have designed using synthesis techniques and then apply various reduction and minimization techniques to simplify their designs.
Synthesis: Engineers in particular synthesize propositional formulas (that eventually end up as circuits of symbols) from truth tables. For example, one might write down a truth table for how binary addition should behave given the addition of variables "b" and "a" and "carry_in" "ci", and the results "carry_out" "co" and "sum" Σ:
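The truth table itself is not reproduced here, but the synthesis step can be sketched as follows: enumerate the eight input combinations of a one-bit full adder and check them against the usual minimized formulas (sum = a XOR b XOR ci, carry-out = the majority of the three inputs). The variable names follow the paragraph above; the minimized formulas are the standard ones, offered here as an illustration rather than the table the author had in mind.
from itertools import product

print(" a  b ci | co sum")
for a, b, ci in product((0, 1), repeat=3):
    total = a + b + ci                        # the behaviour being synthesized
    co, s = total // 2, total % 2
    # minimized propositional formulas read off the truth table:
    s_formula  = a ^ b ^ ci
    co_formula = (a & b) | (a & ci) | (b & ci)
    assert (co, s) == (co_formula, s_formula)
    print(f" {a}  {b}  {ci} |  {co}   {s}")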
Propositional variables.
The simplest type of propositional formula is a propositional variable. Propositions that are simple (atomic), symbolic expressions are often denoted by variables named "p", "q", or "P", "Q", etc. A propositional variable is intended to represent an atomic proposition (assertion), such as "It is Saturday" = "p" (here the symbol = means " ... is assigned the variable named ...") or "I only go to the movies on Monday" = "q".
Truth-value assignments, formula evaluations.
Evaluation of a propositional formula begins with assignment of a truth value to each variable. Because each variable represents a simple sentence, the truth values are being applied to the "truth" or "falsity" of these simple sentences.
Truth values in rhetoric, philosophy and mathematics
The truth values are only two: { TRUTH "T", FALSITY "F" }. An empiricist puts all propositions into two broad classes: "analytic"—true no matter what (e.g. tautology), and "synthetic"—derived from experience and thereby susceptible to confirmation by third parties (the verification theory of meaning). Empiricists hold that, in general, to arrive at the truth-value of a synthetic proposition, meanings (pattern-matching templates) must first be applied to the words, and then these meaning-templates must be matched against whatever it is that is being asserted. For example, consider my utterance "That cow is "blue"!" Is this statement a TRUTH? Truly I said it. And maybe I "am" seeing a blue cow—unless I am lying my statement is a TRUTH relative to the object of my (perhaps flawed) perception. But is the blue cow "really there"? What do you see when you look out the same window? In order to proceed with a verification, you will need a prior notion (a template) of both "cow" and "blue", and an ability to match the templates against the object of sensation (if indeed there is one).
Truth values in engineering
Engineers try to avoid notions of truth and falsity that bedevil philosophers, but in the final analysis engineers must trust their measuring instruments. In their quest for robustness, engineers prefer to pull known objects from a small library—objects that have well-defined, predictable behaviors even in large combinations, (hence their name for the propositional calculus: "combinatorial logic"). The fewest behaviors of a single object are two (e.g. { OFF, ON }, { open, shut }, { UP, DOWN } etc.), and these are put in correspondence with { 0, 1 }. Such elements are called digital; those with a continuous range of behaviors are called analog. Whenever decisions must be made in an analog system, quite often an engineer will convert an analog behavior (the door is 45.32146% UP) to digital (e.g. DOWN=0 ) by use of a comparator.
Thus an assignment of meaning of the variables and the two value-symbols { 0, 1 } comes from "outside" the formula that represents the behavior of the (usually) compound object. An example is a garage door with two "limit switches", one for UP labelled SW_U and one for DOWN labelled SW_D, and whatever else is in the door's circuitry. Inspection of the circuit (either the diagram or the actual objects themselves—door, switches, wires, circuit board, etc.) might reveal that, on the circuit board "node 22" goes to +0 volts when the contacts of switch "SW_D" are mechanically in contact ("closed") and the door is in the "down" position (95% down), and "node 29" goes to +0 volts when the door is 95% UP and the contacts of switch SW_U are in mechanical contact ("closed"). The engineer must define the meanings of these voltages and all possible combinations (all 4 of them), including the "bad" ones (e.g. both nodes 22 and 29 at 0 volts, meaning that the door is open and closed at the same time). The circuit mindlessly responds to whatever voltages it experiences without any awareness of TRUTH or FALSEHOOD, RIGHT or WRONG, SAFE or DANGEROUS.
Propositional connectives.
Arbitrary propositional formulas are built from propositional variables and other propositional formulas using propositional connectives. Examples of connectives include:
Connectives of rhetoric, philosophy and mathematics.
The following are the connectives common to rhetoric, philosophy and mathematics together with their truth tables. The symbols used will vary from author to author and between fields of endeavor. In general the abbreviations "T" and "F" stand for the evaluations TRUTH and FALSITY applied to the variables in the propositional formula (e.g. the assertion: "That cow is blue" will have the truth-value "T" for Truth or "F" for Falsity, as the case may be.).
The connectives go by a number of different word-usages, e.g. "a IMPLIES b" is also said "IF a THEN b". Some of these are shown in the table.
Engineering connectives.
In general, the engineering connectives are just the same as the mathematics connectives excepting they tend to evaluate with "1" = "T" and "0" = "F". This is done for the purposes of analysis/minimization and synthesis of formulas by use of the notion of "minterms" and Karnaugh maps (see below). Engineers also use the words logical product from Boole's notion (a*a = a) and logical sum from Jevons' notion (a+a = a).
CASE connective: IF ... THEN ... ELSE ....
The IF ... THEN ... ELSE ... connective appears as the simplest form of CASE operator of recursion theory and computation theory and is the connective responsible for conditional goto's (jumps, branches). From this one connective all other connectives can be constructed (see more below). Although " IF c THEN b ELSE a " sounds like an implication it is, in its most reduced form, a switch that makes a decision and offers as outcome only one of two alternatives "a" or "b" (hence the name switch statement in the C programming language).
The following three propositions are equivalent (as indicated by the logical equivalence sign ≡ ):
* ( IF 'c' THEN 'b' ELSE 'a' )
* ( (c → b) & (~c → a) )
* ( (c & b) ∨ (~c & a) )
Thus IF ... THEN ... ELSE—unlike implication—does not evaluate to an ambiguous "TRUTH" when the first proposition is false i.e. c = F in (c → b). For example, most people would reject the following compound proposition as a nonsensical "non sequitur" because the second sentence is "not connected in meaning" to the first.
Example: The proposition " IF 'Winston Churchill was Chinese' THEN 'The sun rises in the east' " evaluates as a TRUTH given that 'Winston Churchill was Chinese' is a FALSEHOOD and 'The sun rises in the east' evaluates as a TRUTH.
In recognition of this problem, the sign → of formal implication in the propositional calculus is called material implication to distinguish it from the everyday, intuitive implication.
The use of the IF ... THEN ... ELSE construction avoids controversy because it offers a completely deterministic choice between two stated alternatives; it offers two "objects" (the two alternatives b and a), and it "selects" between them exhaustively and unambiguously. In the truth table below, d1 is the formula: ( (IF c THEN b) AND (IF NOT-c THEN a) ). Its fully reduced form d2 is the formula: ( (c AND b) OR (NOT-c AND a) ). The two formulas are equivalent as shown by the columns "=d1" and "=d2". Electrical engineers call the fully reduced formula the AND-OR-SELECT operator. The CASE (or SWITCH) operator is an extension of the same idea to "n" possible, but mutually exclusive outcomes. Electrical engineers call the CASE operator a multiplexer.
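The equivalence of d1, d2 and the IF ... THEN ... ELSE selection can be checked mechanically; a short sketch treating the connectives as Boolean operations over all eight assignments:
from itertools import product

for c, b, a in product((False, True), repeat=3):
    ite = b if c else a                               # IF c THEN b ELSE a
    d1 = ((not c) or b) and (c or a)                  # (c -> b) AND (~c -> a)
    d2 = (c and b) or ((not c) and a)                 # AND-OR-SELECT form
    assert ite == d1 == d2
print("d1 and d2 agree with IF ... THEN ... ELSE on all 8 rows")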
IDENTITY and evaluation.
The first table of this section stars *** the entry logical equivalence to note the fact that "Logical equivalence" is not the same thing as "identity". For example, most would agree that the assertion "That cow is blue" is identical to the assertion "That cow is blue". On the other hand, "logical" equivalence sometimes appears in speech as in this example: " 'The sun is shining' means 'I'm biking' " Translated into a propositional formula the words become: "IF 'the sun is shining' THEN 'I'm biking', AND IF 'I'm biking' THEN 'the sun is shining'":
"IF 's' THEN 'b' AND IF 'b' THEN 's' " is written as ((s → b) & (b → s)) or in an abbreviated form as (s ↔ b). As the rightmost symbol string is a definition for a new symbol in terms of the symbols on the left, the use of the IDENTITY sign = is appropriate:
((s → b) & (b → s)) = (s ↔ b)
Different authors use different signs for logical equivalence: ↔ (e.g. Suppes, Goodstein, Hamilton), ≡ (e.g. Robbin), ⇔ (e.g. Bender and Williamson). Typically identity is written as the equals sign =. One exception to this rule is found in "Principia Mathematica". For more about the philosophy of the notion of IDENTITY see Leibniz's law.
As noted above, Tarski considers IDENTITY to lie outside the propositional calculus, but he asserts that without the notion, "logic" is insufficient for mathematics and the deductive sciences. In fact the sign comes into the propositional calculus when a formula is to be evaluated.
In some systems there are no truth tables, but rather just formal axioms (e.g. strings of symbols from a set { ~, →, (, ), variables p1, p2, p3, ... } and formula-formation rules (rules about how to make more symbol strings from previous strings by use of e.g. substitution and modus ponens)). The result of such a calculus will be another formula (i.e. a well-formed symbol string). Eventually, however, if one wants to use the calculus to study notions of validity and truth, one must add axioms that define the behavior of the symbols called "the truth values" {T, F} ( or {1, 0}, etc.) relative to the other symbols.
For example, Hamilton uses two symbols = and ≠ when he defines the notion of a valuation v of any well-formed formulas (wffs) "A" and "B" in his "formal statement calculus" L. A valuation v is a "function" from the wffs of his system L to the range (output) { T, F }, given that each variable p1, p2, p3 in a wff is assigned an arbitrary truth value { T, F }.
The two definitions (i) and (ii) define the equivalent of the truth tables for the ~ (NOT) and → (IMPLICATION) connectives of his system. The first one derives F ≠ T and T ≠ F, in other words " v("A") does not mean v(~"A")". Definition (ii) specifies the third row in the truth table, and the other three rows then come from an application of definition (i). In particular (ii) assigns the value F (or a meaning of "F") to the entire expression. The definitions also serve as formation rules that allow substitution of a value previously derived into a formula:
Some formal systems specify these valuation axioms at the outset in the form of certain formulas such as the law of contradiction or laws of identity and nullity. The choice of which ones to use, together with laws such as commutation and distribution, is up to the system's designer as long as the set of axioms is complete (i.e. sufficient to form and to evaluate any well-formed formula created in the system).
More complex formulas.
As shown above, the CASE (IF c THEN b ELSE a ) connective is constructed either from the 2-argument connectives IF ... THEN ... and AND or from OR and AND and the 1-argument NOT. Connectives such as the n-argument AND (a & b & c & ... & n), OR (a ∨ b ∨ c ∨ ... ∨ n) are constructed from strings of two-argument AND and OR and written in abbreviated form without the parentheses. These, and other connectives as well, can then be used as building blocks for yet further connectives. Rhetoricians, philosophers, and mathematicians use truth tables and the various theorems to analyze and simplify their formulas.
Electrical engineering uses drawn symbols and connects them with lines that stand for the mathematical acts of substitution and replacement. Engineers then verify their drawings with truth tables and simplify the expressions as shown below by use of Karnaugh maps or the theorems. In this way engineers have created a host of "combinatorial logic" (i.e. connectives without feedback) such as "decoders", "encoders", "multifunction gates", "majority logic", "binary adders", "arithmetic logic units", etc.
Definitions.
A definition creates a new symbol and its behavior, often for the purposes of abbreviation. Once the definition is presented, either form of the equivalent symbol or formula can be used. The symbolism =Df below follows the convention of Reichenbach. Some examples of convenient definitions drawn from the symbol set { ~, &, (, ) } and variables follow. Each definition produces a logically equivalent formula that can be used for substitution or replacement.
* definition of a new variable: (c & d) =Df s
* OR: ~(~a & ~b) =Df (a ∨ b)
* IMPLICATION: (~a ∨ b) =Df (a → b)
* XOR: (~a & b) ∨ (a & ~b) =Df (a ⊕ b)
* LOGICAL EQUIVALENCE: ( (a → b) & (b → a) ) =Df ( a ≡ b )
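Each of these definitions can be verified by exhausting the truth-value combinations. The sketch below takes only NOT and AND as primitive, builds the other connectives exactly as defined above, and checks them against their intended truth tables:
from itertools import product

NOT = lambda a: not a
AND = lambda a, b: a and b

OR      = lambda a, b: NOT(AND(NOT(a), NOT(b)))           # ~(~a & ~b)
IMPLIES = lambda a, b: OR(NOT(a), b)                      # (~a v b)
XOR     = lambda a, b: OR(AND(NOT(a), b), AND(a, NOT(b))) # (~a & b) v (a & ~b)
EQUIV   = lambda a, b: AND(IMPLIES(a, b), IMPLIES(b, a))  # (a -> b) & (b -> a)

for a, b in product((False, True), repeat=2):
    assert OR(a, b) == (a or b)
    assert IMPLIES(a, b) == ((not a) or b)
    assert XOR(a, b) == (a != b)
    assert EQUIV(a, b) == (a == b)
print("all defined connectives match their intended truth tables")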
Axiom and definition "schemas".
The definitions above for OR, IMPLICATION, XOR, and logical equivalence are actually schemas (or "schemata"), that is, they are "models" (demonstrations, examples) for a general formula "format" but shown (for illustrative purposes) with specific letters a, b, c for the variables, whereas any variable letters can go in their places as long as the letter substitutions follow the rule of substitution below.
Example: In the definition (~a ∨ b) =Df (a → b), other variable-symbols such as "SW2" and "CON1" might be used, i.e. formally:
a =Df SW2, b =Df CON1, so we would have as an "instance" of the definition schema (~SW2 ∨ CON1) =Df (SW2 → CON1)
Substitution versus replacement.
Substitution: The variable or sub-formula to be substituted with another variable, constant, or sub-formula must be replaced in all instances throughout the overall formula.
Example: start with (c & d) ∨ (p & ~(c & ~d)) and suppose (q1 & ~q2) ≡ d. Now wherever the variable "d" occurs, substitute (q1 & ~q2):
(c & (q1 & ~q2)) ∨ (p & ~(c & ~(q1 & ~q2)))
Replacement: (i) the formula to be replaced must be within a tautology, i.e. "logically equivalent" (connected by ≡ or ↔) to the formula that replaces it, and (ii) unlike substitution, it is permissible for the replacement to occur in only one place (i.e. for one formula).
Example: Use this set of formula schemas/equivalences:
# ( (a ∨ 0) ≡ a ).
# ( (a & ~a) ≡ 0 ).
# ( (~a ∨ b) =Df (a → b) ).
Inductive definition.
The classical presentation of propositional logic (see Enderton 2002) uses the connectives formula_6. The set of formulas over a given set of propositional variables is inductively defined to be the smallest set of expressions such that:
This inductive definition can be easily extended to cover additional connectives.
The inductive definition can also be rephrased in terms of a closure operation (Enderton 2002). Let "V" denote a set of propositional variables and let "XV" denote the set of all strings over an alphabet including the symbols in "V", left and right parentheses, and all the logical connectives under consideration. Each logical connective corresponds to a formula building operation, a function that takes one or two strings in "XV" and returns a string in "XV":
The set of formulas over "V" is defined to be the smallest subset of "XV" containing "V" and closed under all the formula building operations.
Parsing formulas.
The following "laws" of the propositional calculus are used to "reduce" complex formulas. The "laws" can be verified easily with truth tables. For each law, the principal (outermost) connective is associated with logical equivalence ≡ or identity =. A complete analysis of all 2n combinations of truth-values for its "n" distinct variables will result in a column of 1's (T's) underneath this connective. This finding makes each law, by definition, a tautology. And, for a given law, because its formula on the left and right are equivalent (or identical) they can be substituted for one another.
Enterprising readers might challenge themselves to invent an "axiomatic system" that uses the symbols { ∨, &, ~, (, ), variables a, b, c }, the formation rules specified above, and as few as possible of the laws listed below, and then derive as theorems the others as well as the truth-table valuations for ∨, &, and ~. One set attributed to Huntington (1904) (Suppes:204) uses eight of the laws defined below.
If used in an axiomatic system, the symbols 1 and 0 (or T and F) are considered to be well-formed formulas and thus obey all the same rules as the variables. Thus the laws listed below are actually axiom schemas, that is, they stand in place of an infinite number of instances. Thus ( x ∨ y ) ≡ ( y ∨ x ) might be used in one instance, ( p ∨ 0 ) ≡ ( 0 ∨ p ) and in another instance ( 1 ∨ q ) ≡ ( q ∨ 1 ), etc.
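A brute-force way to verify any such law is to evaluate it, written as a biconditional, over all 2^n assignments; a minimal sketch, with each law expressed as an ordinary Python function of its variables:
from itertools import product

def is_tautology(formula, n_vars):
    """True if the formula evaluates to True under every truth-value assignment."""
    return all(formula(*vals) for vals in product((False, True), repeat=n_vars))

# commutation of OR, stated as a biconditional:
print(is_tautology(lambda x, y: (x or y) == (y or x), 2))                              # True
# one distributive law:
print(is_tautology(lambda x, y, z: (x and (y or z)) == ((x and y) or (x and z)), 3))   # True
# NOT does not distribute over AND:
print(is_tautology(lambda x, y: (not (x and y)) == ((not x) and (not y)), 2))          # False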
Connective seniority (symbol rank).
In general, to avoid confusion during analysis and evaluation of propositional formulas, one can make liberal use of parentheses. However, quite often authors leave them out. To parse a complicated formula one first needs to know the seniority, or rank, that each of the connectives (excepting *) has over the other connectives. To "well-form" a formula, start with the connective with the highest rank and add parentheses around its components, then move down in rank (paying close attention to the connective's scope over which it is working). From most- to least-senior, with the predicate signs ∀x and ∃x, the IDENTITY = and arithmetic signs added for completeness:
; ≡: (LOGICAL EQUIVALENCE)
; →: (IMPLICATION)
; &: (AND)
; ∨: (OR)
; ~: (NOT)
; ∀x: (FOR ALL x)
; ∃x: (THERE EXISTS AN x)
; =: (IDENTITY)
; +: (arithmetic sum)
;*: (arithmetic multiply)
; ' : (s, arithmetic successor).
Thus the formula can be parsed—but because NOT does not obey the distributive law, the parentheses around the inner formula (~c & ~d) are mandatory:
Example: " d & c ∨ w " rewritten is ( (d & c) ∨ w )
Example: " a & a → b ≡ a & ~a ∨ b " rewritten (rigorously) is
* ≡ has seniority: ( ( a & a → b ) ≡ ( a & ~a ∨ b ) )
* → has seniority: ( ( a & (a → b) ) ≡ ( a & ~a ∨ b ) )
* & has seniority both sides: ( ( ( (a) & (a → b) ) ) ≡ ( ( (a) & (~a ∨ b) ) )
* ~ has seniority: ( ( ( (a) & (a → b) ) ) ≡ ( ( (a) & (~(a) ∨ b) ) )
* check 9 ( -parenthesis and 9 ) -parenthesis: ( ( ( (a) & (a → b) ) ) ≡ ( ( (a) & (~(a) ∨ b) ) )
Example:
d & c ∨ p & ~(c & ~d) ≡ c & d ∨ p & c ∨ p & ~d rewritten is ( ( (d & c) ∨ ( p & ~((c & ~(d)) ) ) ) ≡ ( (c & d) ∨ (p & c) ∨ (p & ~(d)) ) )
Commutative and associative laws.
Both AND and OR obey the commutative law and associative law:
* Commutation of AND: ( a & b ) ≡ ( b & a )
* Commutation of OR: ( a ∨ b ) ≡ ( b ∨ a )
* Association of AND: ( ( a & b ) & c ) ≡ ( a & ( b & c ) )
* Association of OR: ( ( a ∨ b ) ∨ c ) ≡ ( a ∨ ( b ∨ c ) )
Omitting parentheses in strings of AND and OR: The connectives are considered to be unary (one-variable, e.g. NOT) and binary (i.e. two-variable AND, OR, IMPLIES). For example:
( (c & d) ∨ (p & c) ∨ (p & ~d) ) above should be written ( ((c & d) ∨ (p & c)) ∨ (p & ~(d) ) ) or possibly ( (c & d) ∨ ( (p & c) ∨ (p & ~(d)) ) )
However, a truth-table demonstration shows that the form without the extra parentheses is perfectly adequate.
Omitting parentheses with regards to a single-variable NOT: While ~(a) where a is a single variable is perfectly clear, ~a is adequate and is the usual way this literal would appear. When the NOT is over a formula with more than one symbol, then the parentheses are mandatory, e.g. ~(a ∨ b).
Distributive laws.
OR distributes over AND and AND distributes over OR. NOT does not distribute over AND or OR. See below about De Morgan's law:
* Distribution of OR over AND: ( c ∨ ( a & b ) ) ≡ ( ( c ∨ a ) & ( c ∨ b ) )
* Distribution of AND over OR: ( c & ( a ∨ b ) ) ≡ ( ( c & a ) ∨ ( c & b ) )
De Morgan's laws.
NOT, when distributed over OR or AND, does something peculiar (again, these can be verified with a truth-table):
* ~( a ∨ b ) ≡ ( ~a & ~b )
* ~( a & b ) ≡ ( ~a ∨ ~b )
Laws of absorption.
Absorption, in particular the first one, causes the "laws" of logic to differ from the "laws" of arithmetic:
Laws of evaluation: Identity, nullity, and complement.
The sign " = " (as distinguished from logical equivalence ≡, alternately ↔ or ⇔) symbolizes the assignment of value or meaning. Thus the string (a & ~(a)) symbolizes "0", i.e. it means the same thing as symbol "0" ". In some "systems" this will be an axiom (definition) perhaps shown as ( (a & ~(a)) =Df 0 ); in other systems, it may be derived in the truth table below:
Well-formed formulas (wffs).
A key property of formulas is that they can be uniquely parsed to determine the structure of the formula in terms of its propositional variables and logical connectives. When formulas are written in infix notation, as above, unique readability is ensured through an appropriate use of parentheses in the definition of formulas. Alternatively, formulas can be written in Polish notation or reverse Polish notation, eliminating the need for parentheses altogether.
The inductive definition of infix formulas in the previous section can be converted to a formal grammar in Backus-Naur form:
<formula> ::= <propositional variable>
| ( ¬ <formula> )
| ( <formula> ∧ <formula> )
| ( <formula> ∨ <formula> )
| ( <formula> → <formula> )
| ( <formula> ↔ <formula> )
It can be shown that any expression matched by the grammar has a balanced number of left and right parentheses, and any nonempty initial segment of a formula has more left than right parentheses. This fact can be used to give an algorithm for parsing formulas. For example, suppose that an expression "x" begins with formula_17. Starting after the second symbol, match the shortest subexpression "y" of "x" that has balanced parentheses. If "x" is a formula, there is exactly one symbol left after this expression, this symbol is a closing parenthesis, and "y" itself is a formula. This idea can be used to generate a recursive descent parser for formulas.
Example of parenthesis counting:
This method locates as "1" the principal connective — the connective under which the overall evaluation of the formula occurs for the outer-most parentheses (which are often omitted). It also locates the inner-most connective where one would begin evaluation of the formula without the use of a truth table, e.g. at "level 6".
Well-formed formulas versus valid formulas in inferences.
The notion of valid argument is usually applied to inferences in arguments, but arguments reduce to propositional formulas and can be evaluated the same as any other propositional formula. Here a valid inference means: "The formula that represents the inference evaluates to "truth" beneath its principal connective, no matter what truth-values are assigned to its variables", i.e. the formula is a tautology.
Quite possibly a formula will be "well-formed" but not valid. Another way of saying this is: "Being well-formed is "necessary" for a formula to be valid but it is not "sufficient"." The only way to find out if it is "both" well-formed "and" valid is to submit it to verification with a truth table or by use of the "laws":
Reduced sets of connectives.
A set of logical connectives is called complete if every propositional formula is tautologically equivalent to a formula with just the connectives in that set. There are many complete sets of connectives, including formula_18, formula_19, and formula_20. There are two binary connectives that are complete on their own, corresponding to NAND and NOR, respectively. Some pairs are not complete, for example formula_21.
The stroke (NAND).
The binary connective corresponding to NAND is called the Sheffer stroke, and written with a vertical bar | or vertical arrow ↑. The completeness of this connective was noted in "Principia Mathematica" (1927:xvii). Since it is complete on its own, all other connectives can be expressed using only the stroke. For example, where the symbol " ≡ " represents "logical equivalence":
~p ≡ p|p
p → q ≡ p|~q
p ∨ q ≡ ~p|~q
p & q ≡ ~(p|q)
In particular, the zero-ary connectives formula_22 (representing truth) and formula_23 (representing falsity) can be expressed using the stroke:
formula_24
formula_25
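The four stroke identities above can be verified exhaustively; in the sketch below the stroke is modeled as NAND(p, q) = NOT(p AND q):
from itertools import product

stroke = lambda p, q: not (p and q)     # Sheffer stroke (NAND)

for p, q in product((False, True), repeat=2):
    assert (not p)   == stroke(p, p)                        # ~p  =  p|p
    assert ((not p) or q) == stroke(p, stroke(q, q))        # p -> q  =  p|~q
    assert (p or q)  == stroke(stroke(p, p), stroke(q, q))  # p v q  =  ~p|~q
    assert (p and q) == (not stroke(p, q))                  # p & q  =  ~(p|q)
print("all four stroke identities hold")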
IF ... THEN ... ELSE.
This connective together with { 0, 1 }, ( or { F, T } or { formula_23, formula_22 } ) forms a complete set. In the following the IF...THEN...ELSE relation (c, b, a) = d represents ( (c → b) & (~c → a) ) ≡ ( (c & b) ∨ (~c & a) ) = d
(c, b, a):
(c, 0, 1) ≡ ~c
(c, b, 1) ≡ (c → b)
(c, c, a) ≡ (c ∨ a)
(c, b, c) ≡ (c & b)
Example: The following shows how a theorem-based proof of "(c, b, 1) ≡ (c → b)" would proceed, below the proof is its truth-table verification. ( Note: (c → b) is "defined" to be (~c ∨ b) ):
* Begin with the reduced form: ( (c & b) ∨ (~c & a) )
* Substitute "1" for a: ( (c & b) ∨ (~c & 1) )
* Identity (~c & 1) = ~c: ( (c & b) ∨ (~c) )
* Law of commutation for ∨: ( (~c) ∨ (c & b) )
* Distribute "~c ∨" over (c & b): ( ((~c) ∨ c) & ((~c) ∨ b) )
* Law of excluded middle ( ((~c) ∨ c) = 1 ): ( (1) & ((~c) ∨ b) )
* Distribute "(1) &" over ((~c) ∨ b): ( ((1) & (~c)) ∨ ((1) & b) )
* Commutativity and Identity ( (1 & ~c) = (~c & 1) = ~c, and (1 & b) ≡ (b & 1) ≡ b ): ( ~c ∨ b )
* ( ~c ∨ b ) is defined as c → b Q. E. D.
In the following truth table the column labelled "taut" for tautology evaluates logical equivalence (symbolized here by ≡) between the two columns labelled d. Because all four rows under "taut" are 1's, the equivalence indeed represents a tautology.
Normal forms.
An arbitrary propositional formula may have a very complicated structure. It is often convenient to work with formulas that have simpler forms, known as normal forms. Some common normal forms include conjunctive normal form and disjunctive normal form. Any propositional formula can be reduced to its conjunctive or disjunctive normal form.
Reduction to normal form.
Reduction to normal form is relatively simple once a truth table for the formula is prepared. But further attempts to minimize the number of literals (see below) require some tools: reduction by De Morgan's laws and truth tables can be unwieldy, but Karnaugh maps are well suited to a small number of variables (5 or fewer). Some sophisticated tabular methods exist for more complex circuits with multiple outputs, but these are beyond the scope of this article; for more see Quine–McCluskey algorithm.
Literal, term and alterm.
In electrical engineering, a variable x or its negation ~(x) can be referred to as a literal. A string of literals connected by ANDs is called a term. A string of literals connected by OR is called an alterm. Typically the literal ~(x) is abbreviated ~x. Sometimes the &-symbol is omitted altogether in the manner of algebraic multiplication.
Minterms.
In the same way that a 2^n-row truth table displays the evaluation of a propositional formula for all 2^n possible values of its variables, n variables produce a 2^n-square Karnaugh map (even though we cannot draw it in its full-dimensional realization). For example, 3 variables produce 2^3 = 8 rows and 8 Karnaugh squares; 4 variables produce 16 truth-table rows and 16 squares and therefore 16 minterms. Each Karnaugh-map square and its corresponding truth-table evaluation represents one minterm.
Any propositional formula can be reduced to the "logical sum" (OR) of the active (i.e. "1"- or "T"-valued) minterms. When in this form the formula is said to be in disjunctive normal form. But even though it is in this form, it is not necessarily minimized with respect to either the number of terms or the number of literals.
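The step just described, reading the disjunctive normal form directly off the truth table by OR'ing the minterms of the "1"-valued rows, can be sketched in a few lines of Python. The helper names are assumptions; the example formula is the ((c & d) ∨ (p & ~(c & ~d))) treated later in this section:

from itertools import product

def minterm(assignment):
    """Render one truth-table row as a minterm, e.g. {'p': 1, 'd': 0, 'c': 1} -> '(p & ~d & c)'."""
    return "(" + " & ".join(v if val else "~" + v for v, val in assignment.items()) + ")"

def to_dnf(variables, formula):
    """OR together the minterms of every row where the formula evaluates to 1."""
    active = []
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(**assignment):
            active.append(minterm(assignment))
    return " ∨ ".join(active)

q = lambda p, d, c: (c and d) or (p and not (c and (not d)))   # ((c & d) ∨ (p & ~(c & ~d)))
print(to_dnf(["p", "d", "c"], q))
# (~p & d & c) ∨ (p & ~d & ~c) ∨ (p & d & ~c) ∨ (p & d & c)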
In the following table, observe the peculiar numbering of the rows: (0, 1, 3, 2, 6, 7, 5, 4, 0). The first column is the decimal equivalent of the binary equivalent of the digits "cba", in other words:
This numbering comes about because as one moves down the table from row to row only one variable at a time changes its value. Gray code is derived from this notion. This notion can be extended to three and four-dimensional hypercubes called Hasse diagrams where each corner's variables change only one at a time as one moves around the edges of the cube. Hasse diagrams (hypercubes) flattened into two dimensions are either Veitch diagrams or Karnaugh maps (these are virtually the same thing).
When working with Karnaugh maps one must always keep in mind that the top edge wraps around to the bottom edge, and the left edge wraps around to the right edge; the Karnaugh diagram is really a three- or four- or n-dimensional flattened object.
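The reflected Gray code underlying this numbering can be generated with a standard bit manipulation; a short Python sketch (the helper name is an assumption):

def gray_code(n_bits):
    """Reflected Gray code: consecutive entries differ in exactly one bit."""
    return [i ^ (i >> 1) for i in range(2 ** n_bits)]

codes = gray_code(3)
print(codes)                               # [0, 1, 3, 2, 6, 7, 5, 4]
print([format(c, "03b") for c in codes])   # ['000', '001', '011', '010', '110', '111', '101', '100']

# Adjacent codes (including the wrap-around from the last entry back to the first)
# differ in a single bit, which is why abutting Karnaugh squares differ in one literal.
for a, b in zip(codes, codes[1:] + codes[:1]):
    assert bin(a ^ b).count("1") == 1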
Reduction by use of the map method (Veitch, Karnaugh).
Veitch improved the notion of Venn diagrams by converting the circles to abutting squares, and Karnaugh simplified the Veitch diagram by converting the minterms, written in their literal-form (e.g. ~abc~d) into numbers. The method proceeds as follows:
Produce the formula's truth table.
Number its rows using the binary equivalents of the variables (usually just sequentially 0 through 2^n − 1) for n variables.
"Technically, the propositional function has been reduced to its (unminimized) conjunctive normal form: each row has its minterm expression and these can be OR'd to produce the formula in its (unminimized) conjunctive normal form."
Example: ((c & d) ∨ (p & ~(c & (~d)))) = q in disjunctive normal form is:
( (~p & d & c ) ∨ (p & d & c) ∨ (p & d & ~c) ∨ (p & ~d & ~c) ) = q
However, this formula can be reduced both in the number of terms (from 4 to 3) and in the total count of its literals (12 to 6).
Create the formula's Karnaugh map.
Use the values of the formula (e.g. "p") found by the truth-table method and place them into their respective (associated) Karnaugh squares (these are numbered per the Gray code convention). If values of "d" for "don't care" appear in the table, this adds flexibility during the reduction phase.
Reduce minterms.
Minterms of adjacent (abutting) 1-squares (T-squares) can be reduced with respect to the number of their literals, and the number of terms also will be reduced in the process. Two abutting squares (2 x 1 horizontal or 1 x 2 vertical, even the edges represent abutting squares) lose one literal, four squares in a 4 x 1 rectangle (horizontal or vertical) or 2 x 2 square (even the four corners represent abutting squares) lose two literals, eight squares in a rectangle lose 3 literals, etc. (One seeks out the largest squares or rectangles and ignores the smaller squares or rectangles contained totally within them.) This process continues until all abutting squares are accounted for, at which point the propositional formula is minimized.
For example, squares #3 and #7 abut, so these two squares can be combined and lose the one literal in which they differ ("p"), leaving (d & c). The same applies to larger groups: four abutting squares lose two literals, eight lose three, and so on, always taking the largest square or rectangle available.
Example: The map method usually is done by inspection. The following example expands the algebraic method to show the "trick" behind the combining of terms on a Karnaugh map:
Minterms #3 and #7 abut, #7 and #6 abut, and #4 and #6 abut (because the table's edges wrap around). So each of these pairs can be reduced.
Observe that by the Idempotency law (A ∨ A) = A, we can create more terms. Then by association and distributive laws the variables to disappear can be paired, and then "disappeared" with the Law of contradiction (x & ~x)=0. The following uses brackets [ and ] only to keep track of the terms; they have no special significance:
q = ( (~p & d & c ) ∨ (p & d & c) ∨ (p & d & ~c) ∨ (p & ~d & ~c) ) = ( #3 ∨ #7 ∨ #6 ∨ #4 )
( #3 ∨ [ #7 ∨ #7 ] ∨ [ #6 ∨ #6 ] ∨ #4 )
( [ #3 ∨ #7 ] ∨ [ #7 ∨ #6 ] ∨ [ #6 ∨ #4] )
[ (~p & d & c ) ∨ (p & d & c) ] ∨ [ (p & d & c) ∨ (p & d & ~c) ] ∨ [ (p & d & ~c) ∨ (p & ~d & ~c) ].
( [ (d & c) ∨ (~p & p) ] ∨ [ (p & d) ∨ (~c & c) ] ∨ [ (p & ~c) ∨ (c & ~c) ] )
( [ (d & c) ∨ (0) ] ∨ [ (p & d) ∨ (0) ] ∨ [ (p & ~c) ∨ (0) ] )
q = ( (d & c) ∨ (p & d) ∨ (p & ~c) )
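The reduction can be double-checked by exhaustive evaluation: the original four-minterm form and the minimized three-term form must agree on every assignment of p, d, c. A quick Python check (names are illustrative):

from itertools import product

original  = lambda p, d, c: (((not p) and d and c) or (p and d and c)
                             or (p and d and (not c)) or (p and (not d) and (not c)))
minimized = lambda p, d, c: (d and c) or (p and d) or (p and (not c))

assert all(original(p, d, c) == minimized(p, d, c)
           for p, d, c in product([False, True], repeat=3))
print("q = (d & c) ∨ (p & d) ∨ (p & ~c) is equivalent to the original four-minterm form")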
Impredicative propositions.
Given the following examples-as-definitions, what does one make of the subsequent reasoning:
(1) "This sentence is simple." (2) "This sentence is complex, and it is conjoined by AND."
Then assign the variable "s" to the left-most sentence "This sentence is simple". Define "compound" c = "not simple" ~s, and assign c = ~s to "This sentence is compound"; assign "j" to "It [this sentence] is conjoined by AND". The second sentence can be expressed as:
( NOT(s) AND j )
If truth values are to be placed on the sentences c = ~s and j, then all are clearly FALSEHOODS: e.g. "This sentence is complex" is a FALSEHOOD (it is "simple", by definition). So their conjunction (AND) is a falsehood. But when taken in its assembled form, the sentence is a TRUTH.
This is an example of the paradoxes that result from an impredicative definition, that is, when an object m has a property P, but the object m is defined in terms of property P. The best advice for a rhetorician or one involved in deductive analysis is to avoid impredicative definitions but at the same time to be on the lookout for them, because they can indeed create paradoxes. Engineers, on the other hand, put them to work in the form of propositional formulas with feedback.
Propositional formula with "feedback".
The notion of a propositional formula appearing as one of its own variables requires a formation rule that allows the assignment of the formula to a variable. In general there is no stipulation (in either axiomatic or truth-table systems of objects and relations) that forbids this from happening.
The simplest case occurs when an OR formula becomes one of its own inputs, e.g. p = q. Begin with (p ∨ s) = q, then let p = q. Observe that q's "definition" depends on itself "q" as well as on "s" and the OR connective; this definition of q is thus impredicative.
Either of two conditions can result: oscillation or memory.
It helps to think of the formula as a black box. Without knowledge of what is going on "inside" the formula-"box" from the outside it would appear that the output is no longer a function of the inputs alone. That is, sometimes one looks at q and sees 0 and other times 1. To avoid this problem one has to know the state (condition) of the "hidden" variable p inside the box (i.e. the value of q fed back and assigned to p). When this is known the apparent inconsistency goes away.
To understand [predict] the behavior of formulas with feedback requires the more sophisticated analysis of sequential circuits. Propositional formulas with feedback lead, in their simplest form, to state machines; they also lead to memories in the form of Turing tapes and counter-machine counters. From combinations of these elements one can build any sort of bounded computational model (e.g. Turing machines, counter machines, register machines, Macintosh computers, etc.).
Oscillation.
In the abstract (ideal) case the simplest oscillating formula is a NOT fed back to itself: ~(~(p=q)) = q. Analysis of an abstract (ideal) propositional formula in a truth-table reveals an inconsistency for both p=1 and p=0 cases: When p=1, q=0, this cannot be because p=q; ditto for when p=0 and q=1.
Oscillation with delay: If a delay (ideal or non-ideal) is inserted in the abstract formula between p and q then p will oscillate between 1 and 0: 101010...101... "ad infinitum". If either the delay or the NOT is not abstract (i.e. not ideal), the type of analysis to be used will depend upon the exact nature of the objects that make up the oscillator; such things fall outside mathematics and into engineering.
Analysis requires a delay to be inserted and then the loop cut between the delay and the input "p". The delay must be viewed as a kind of proposition that has "qd" (q-delayed) as output for "q" as input. This new proposition adds another column to the truth table. The inconsistency is now between "qd" and "p" as shown in red; two stable states resulting:
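The behaviour of the fed-back NOT with a delay can be imitated by a step-by-step simulation in which the delay is a single storage element updated once per time step; a minimal Python sketch (the initial value and step count are arbitrary assumptions):

def simulate_not_with_delay(initial=0, steps=10):
    """q = NOT(p), with q fed back to p through a one-step delay."""
    q_delayed = initial            # contents of the delay element; this plays the role of p
    history = []
    for _ in range(steps):
        q = int(not q_delayed)     # the combinational NOT is evaluated "instantly"
        history.append(q)
        q_delayed = q              # the delay hands q back to p for the next step
    return history

print(simulate_not_with_delay())   # [1, 0, 1, 0, 1, 0, 1, 0, 1, 0] -- oscillation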
Memory.
Without delay, inconsistencies must be eliminated from a truth table analysis. With the notion of "delay", this condition presents itself as a momentary inconsistency between the fed-back output variable q and p = qdelayed.
A truth table reveals the rows where inconsistencies occur between p = qdelayed at the input and q at the output. After "breaking" the feed-back, the truth table construction proceeds in the conventional manner. But afterwards, in every row the output q is compared to the now-independent input p and any inconsistencies between p and q are noted (i.e. p=0 together with q=1, or p=1 and q=0); when the "line" is "remade" both are rendered impossible by the Law of contradiction ~(p & ~p). Rows revealing inconsistencies are either considered transient states or just eliminated as inconsistent and hence "impossible".
Once-flip memory.
Perhaps the simplest memory results when the output of an OR feeds back to one of its inputs, in this case output "q" feeds back into "p". Given that the formula is first evaluated (initialized) with p=0 & q=0, it will "flip" once when "set" by s=1. Thereafter, output "q" will sustain "q" in the "flipped" condition (state q=1). This behavior, now time-dependent, is shown by the state diagram to the right of the once-flip.
Flip-flop memory.
The next simplest case is the "set-reset" flip-flop shown below the once-flip. Given that r=0 & s=0 and q=0 at the outset, it is "set" (s=1) in a manner similar to the once-flip. It however has a provision to "reset" q=0 when "r"=1. An additional complication occurs if both set=1 and reset=1. In this formula, set=1 "forces" the output q=1, so when and if (s=0 & r=1) the flip-flop will be reset. Or, if (s=1 & r=0) the flip-flop will be set. In the abstract (ideal) instance in which s=1 ⇒ s=0 & r=1 ⇒ r=0 simultaneously, the formula q will be indeterminate (undecidable). Due to delays in "real" OR, AND and NOT the result will be unknown at the outset but thereafter predictable.
Clocked flip-flop memory.
The formula known as "clocked flip-flop" memory ("c" is the "clock" and "d" is the "data") is given below. It works as follows: When c = 0 the data d (either 0 or 1) cannot "get through" to affect output q. When c = 1 the data d "gets through" and output q "follows" d's value. When c goes from 1 to 0 the last value of the data remains "trapped" at output "q". As long as c=0, d can change value without causing q to change.
The state diagram is similar in shape to the flip-flop's state diagram, but with different labelling on the transitions.
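The once-flip and the clocked flip-flop described above can be imitated with the same one-step-at-a-time simulation; a compact Python sketch (the input sequences are illustrative assumptions):

def once_flip(s_sequence, q=0):
    """q = (s ∨ q): once set by s = 1, the fed-back q keeps itself at 1."""
    out = []
    for s in s_sequence:
        q = int(s or q)            # output q fed back into one input of the OR
        out.append(q)
    return out

def clocked_flip_flop(c_sequence, d_sequence, q=0):
    """When c = 1, q follows d; when c = 0, q holds the last value of d."""
    out = []
    for c, d in zip(c_sequence, d_sequence):
        q = d if c else q
        out.append(q)
    return out

print(once_flip([0, 0, 1, 0, 0]))             # [0, 0, 1, 1, 1]
print(clocked_flip_flop([0, 1, 1, 0, 0],      # clock c
                        [1, 1, 0, 1, 1]))     # data d
# [0, 1, 0, 0, 0] -- the last value of d seen while c = 1 stays "trapped" at q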
Historical development.
Bertrand Russell (1912:74) lists three laws of thought that derive from Aristotle: (1) The law of identity: "Whatever is, is.", (2) The law of noncontradiction: "Nothing can both be and not be", and (3) The law of excluded middle: "Everything must be or not be."
The use of the word "everything" in the law of excluded middle renders Russell's expression of this law open to debate. If restricted to an expression about BEING or QUALITY with reference to a finite collection of objects (a finite "universe of discourse") -- the members of which can be investigated one after another for the presence or absence of the assertion—then the law is considered intuitionistically appropriate. Thus an assertion such as: "This object must either BE or NOT BE (in the collection)", or "This object must either have this QUALITY or NOT have this QUALITY (relative to the objects in the collection)" is acceptable. See more at Venn diagram.
Although a propositional calculus originated with Aristotle, the notion of an "algebra" applied to propositions had to wait until the early 19th century. In an (adverse) reaction to the 2000 year tradition of Aristotle's syllogisms, John Locke's "Essay concerning human understanding (1690)" used the word semiotics (theory of the use of symbols). By 1826 Richard Whately had critically analyzed the syllogistic logic with a sympathy toward Locke's semiotics. George Bentham's work (1827) resulted in the notion of "quantification of the predicate" (nowadays symbolized as ∀ ≡ "for all"). A "row" instigated by William Hamilton over a priority dispute with Augustus De Morgan "inspired George Boole to write up his ideas on logic, and to publish them as MAL [Mathematical Analysis of Logic] in 1847" (Grattan-Guinness and Bornet 1997:xxviii).
About his contribution Grattan-Guinness and Bornet comment:
"Boole's principal single innovation was [the] law [ xn = x ] for logic: it stated that the mental acts of choosing the property x and choosing x again and again is the same as choosing x once... As consequence of it he formed the equations x•(1-x)=0 and x+(1-x)=1 which for him expressed respectively the law of contradiction and the law of excluded middle" (p. xxviiff). For Boole "1" was the universe of discourse and "0" was nothing.
Gottlob Frege's massive undertaking (1879) resulted in a formal calculus of propositions, but his symbolism is so daunting that it had little influence except on one person: Bertrand Russell. First, as a student of Alfred North Whitehead, he studied Frege's work and suggested a (famous and notorious) emendation with respect to it (1904) around the problem of an antinomy that he discovered in Frege's treatment (cf. Russell's paradox). Russell's work led to a collaboration with Whitehead that, in the year 1910, produced the first volume of "Principia Mathematica" (PM). It is here that what we consider "modern" propositional logic first appeared. In particular, PM introduces NOT and OR and the assertion symbol ⊦ as primitives. In terms of these notions they define IMPLICATION → ( def. *1.01: ~p ∨ q ), then AND (def. *3.01: ~(~p ∨ ~q) ), then EQUIVALENCE p ←→ q (*4.01: (p → q) & ( q → p ) ).
Computation and switching logic:
Example: Given binary bits a_i and b_i and carry-in c_in_i, their summation Σ_i and carry-out c_out_i are:
* ( ( a_i XOR b_i ) XOR c_in_i ) = Σ_i
* ( ( a_i & b_i ) ∨ ( c_in_i & ( a_i XOR b_i ) ) ) = c_out_i
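A one-bit full adder built from these connectives can be checked against ordinary integer addition; a short Python sketch (function and variable names are illustrative). Note that the carry-out needs the term c_in_i & (a_i XOR b_i) in addition to a_i & b_i:

from itertools import product

def full_adder(a, b, c_in):
    """Sum and carry-out of one binary position, using XOR, AND and OR."""
    sum_bit   = (a ^ b) ^ c_in
    carry_out = (a & b) | (c_in & (a ^ b))
    return sum_bit, carry_out

for a, b, c_in in product([0, 1], repeat=3):
    s, c_out = full_adder(a, b, c_in)
    assert a + b + c_in == 2 * c_out + s      # the pair (c_out, s) is the 2-bit sum
print("full adder verified")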
Footnotes.
Citations.
|
[
{
"math_id": 0,
"text": "\\alpha"
},
{
"math_id": 1,
"text": "\\lnot \\alpha"
},
{
"math_id": 2,
"text": "\\land, \\lor, \\to, \\leftrightarrow"
},
{
"math_id": 3,
"text": "\\beta"
},
{
"math_id": 4,
"text": "(\\alpha \\to \\beta)"
},
{
"math_id": 5,
"text": "\\leftrightarrow"
},
{
"math_id": 6,
"text": "\\lnot, \\land, \\lor, \\to, \\leftrightarrow"
},
{
"math_id": 7,
"text": "(\\lnot \\alpha)"
},
{
"math_id": 8,
"text": " (\\alpha\\,\\Box\\,\\beta)"
},
{
"math_id": 9,
"text": "\\Box"
},
{
"math_id": 10,
"text": "\\mathcal{E}_\\lnot(z)"
},
{
"math_id": 11,
"text": "(\\lnot z)"
},
{
"math_id": 12,
"text": "\\mathcal{E}_\\land(y,z)"
},
{
"math_id": 13,
"text": "(y\\land z)"
},
{
"math_id": 14,
"text": "\\mathcal{E}_\\lor"
},
{
"math_id": 15,
"text": "\\mathcal{E}_\\to"
},
{
"math_id": 16,
"text": "\\mathcal{E}_\\leftrightarrow"
},
{
"math_id": 17,
"text": "( \\lnot"
},
{
"math_id": 18,
"text": "\\{\\land, \\lnot\\}"
},
{
"math_id": 19,
"text": "\\{\\lor, \\lnot\\}"
},
{
"math_id": 20,
"text": "\\{\\to, \\lnot\\}"
},
{
"math_id": 21,
"text": "\\{\\land, \\lor\\}"
},
{
"math_id": 22,
"text": "\\top"
},
{
"math_id": 23,
"text": "\\bot"
},
{
"math_id": 24,
"text": "\\top \\equiv (a|(a|a))"
},
{
"math_id": 25,
"text": "\\bot \\equiv (\\top | \\top)"
}
] |
https://en.wikipedia.org/wiki?curid=1557634
|
15577346
|
Denavit–Hartenberg parameters
|
Convention for attaching reference frames to links of a kinematic chain
In mechanical engineering, the Denavit–Hartenberg parameters (also called DH parameters) are the four parameters associated with a particular convention for attaching reference frames to the links of a spatial kinematic chain, or robot manipulator.
Jacques Denavit and Richard Hartenberg introduced this convention in 1955 in order to standardize the coordinate frames for spatial linkages.
Richard Paul demonstrated its value for the kinematic analysis of robotic systems in 1981.
While many conventions for attaching reference frames have been developed, the Denavit–Hartenberg convention remains a popular approach.
Denavit–Hartenberg convention.
A commonly used convention for selecting frames of reference in robotics applications is the Denavit and Hartenberg (D–H) convention which was introduced by Jacques Denavit and Richard S. Hartenberg. In this convention, coordinate frames are attached to the joints between two links such that one transformation is associated with the joint ["Z" ], and the second is associated with the link ["X" ]. The coordinate transformations along a serial robot consisting of n links form the kinematics equations of the robot:
formula_0
where ["T" ] is the transformation that characterizes the location and orientation of the end-link.
To determine the coordinate transformations ["Z" ] and ["X" ], the joints connecting the links are modeled as either hinged or sliding joints, each of which has a unique line S in space that forms the joint axis and defines the relative movement of the two links. A typical serial robot is characterized by a sequence of six lines "Si" ("i" = 1, 2, ..., 6), one for each joint in the robot. For each sequence of lines Si and "S""i"+1, there is a common normal line "A""i","i"+1. The system of six joint axes Si and five common normal lines "A""i","i"+1 forms the kinematic skeleton of the typical six degree-of-freedom serial robot. Denavit and Hartenberg introduced the convention that z-coordinate axes are assigned to the joint axes Si and x-coordinate axes are assigned to the common normals "A""i","i"+1.
This convention allows the definition of the movement of links around a common joint axis Si by the screw displacement:
formula_1
where θi is the rotation around and di is the sliding motion along the z-axis. Each of these parameters could be a constant depending on the structure of the robot. Under this convention the dimensions of each link in the serial chain are defined by the screw displacement around the common normal "A""i","i"+1 from the joint "S""i" to "S""i"+1, which is given by
formula_2
where "α""i","i"+1 and "r""i","i"+1 define the physical dimensions of the link in terms of the angle measured around and distance measured along the X axis.
In summary, the reference frames are laid out as follows:
Four Parameters.
The following four transformation parameters are known as D–H parameters: "d" (the offset along the previous z-axis to the common normal), "θ" (the joint angle about the previous z-axis between the old and new x-axes), "r" (also written "a"; the length of the common normal, i.e. the distance between the two joint axes measured along the x-axis), and "α" (the twist angle about the common normal x-axis between the old and new z-axes).
There is some choice in frame layout as to whether the previous x-axis or the next x-axis points along the common normal. The latter system allows branching chains more efficiently, as multiple frames can all point away from their common ancestor, but in the alternative layout the ancestor can only point toward one successor. Thus the commonly used notation places each down-chain x-axis collinear with the common normal, yielding the transformation calculations shown below.
We can note constraints on the relationships between the axes:
Denavit–Hartenberg matrix.
It is common to separate a screw displacement into the product of a pure translation along a line and a pure rotation about the line, so that
formula_4
and
formula_5
Using this notation, each link can be described by a coordinate transformation from the current coordinate system to the previous coordinate system.
formula_6
Note that this is the product of two screw displacements. The matrices associated with these operations are:
formula_7
formula_8
formula_9
formula_10
This gives:
formula_11
where "R" is the 3×3 submatrix describing rotation and "T" is the 3×1 submatrix describing translation.
In some books, the order of transformation for a pair of consecutive rotation and translation (such as formula_12and formula_13) is reversed. This is possible (despite the fact that in general, matrix multiplication is not commutative) since translations and rotations are concerned with the same axes formula_14 and formula_15, respectively. As matrix multiplication order for these pairs does not matter, the result is the same. For example: formula_16.
Therefore, we can write the transformation formula_17 as follows: <br>
formula_18
<br>
formula_19
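A direct transcription of the classic transform above into Python/NumPy, together with the chained kinematics equation formula_0, might look as follows. This is a sketch, not the API of any particular robotics library, and the example joint values are arbitrary:

import numpy as np

def dh_transform(theta, d, r, alpha):
    """Classic (distal) DH link transform: Trans_z(d) · Rot_z(theta) · Trans_x(r) · Rot_x(alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, r * ct],
                     [st,  ct * ca, -ct * sa, r * st],
                     [0.,       sa,       ca,      d],
                     [0.,       0.,       0.,     1.]])

def forward_kinematics(dh_rows):
    """Multiply the link transforms in order: [T] = T_1 · T_2 · ... · T_n."""
    T = np.eye(4)
    for theta, d, r, alpha in dh_rows:
        T = T @ dh_transform(theta, d, r, alpha)
    return T

# Example: a planar two-revolute arm with unit-length links and 45° joint angles.
T = forward_kinematics([(np.pi / 4, 0.0, 1.0, 0.0),
                        (np.pi / 4, 0.0, 1.0, 0.0)])
print(np.round(T[:3, 3], 3))   # end-effector position, approximately [0.707, 1.707, 0]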
Use of Denavit and Hartenberg matrices.
The Denavit and Hartenberg notation gives a standard (distal) methodology to write the kinematic equations of a manipulator. This is especially useful for serial manipulators where a matrix is used to represent the pose (position and orientation) of one body with respect to another.
The position of body formula_20 with respect to formula_21 may be represented by a position matrix indicated with the symbol formula_22 or formula_23
formula_24
This matrix is also used to transform a point from frame formula_20 to formula_21
formula_25
where the upper left formula_26 submatrix of formula_27 represents the relative orientation of the two bodies, and the upper right formula_28 submatrix represents their relative position, or more specifically the position of the origin of frame "n" expressed in frame "n" − 1.
The position of body formula_29 with respect to body formula_30 can be obtained as the product of the matrices representing the pose of formula_31 with respect of formula_30 and that of formula_29 with respect of formula_31
formula_32
An important property of Denavit and Hartenberg matrices is that the inverse is
formula_33
where formula_34 is both the transpose and the inverse of the orthogonal matrix formula_35, i.e. formula_36.
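The block form of the inverse can be verified numerically for any such pose matrix; a short NumPy check with arbitrary parameter values (this reuses the same classic link-transform layout as above):

import numpy as np

theta, d, r, alpha = 0.3, 0.2, 0.5, 1.1        # arbitrary link parameters
ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
M = np.array([[ct, -st * ca,  st * sa, r * ct],
              [st,  ct * ca, -ct * sa, r * st],
              [0.,       sa,       ca,      d],
              [0.,       0.,       0.,     1.]])

R, T = M[:3, :3], M[:3, 3]
M_inv = np.eye(4)
M_inv[:3, :3] = R.T            # the inverse of the orthogonal block R is its transpose
M_inv[:3, 3]  = -R.T @ T       # translation block of the inverse

assert np.allclose(M_inv @ M, np.eye(4))       # confirms the block formula for the inverse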
Kinematics.
Further matrices can be defined to represent velocity and acceleration of bodies.
The velocity of body formula_30 with respect to body formula_31 can be represented in frame formula_29 by the matrix
formula_37
where formula_38 is the angular velocity of body formula_39 with respect to body formula_40 and all the components are expressed in frame formula_41; formula_42 is the velocity of one point of body formula_39 with respect to body formula_43 (the pole). The pole is the point of formula_39 that is instantaneously coincident with the origin of frame formula_30.
The acceleration matrix can be defined as the sum of the time derivative of the velocity plus the velocity squared
formula_44
The velocity and the acceleration in frame formula_40 of a point of body formula_39 can be evaluated as
formula_45
formula_46
It is also possible to prove that
formula_47
formula_48
Velocity and acceleration matrices add up according to the following rules
formula_49
formula_50
in other words the absolute velocity is the sum of the parent velocity plus the relative velocity; for the acceleration the Coriolis term is also present.
The components of velocity and acceleration matrices are expressed in an arbitrary frame formula_29 and transform from one frame to another by the following rule
formula_51
formula_52
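A sketch of the 4×4 velocity matrix assembled from an angular velocity ω and a pole velocity v, with a numeric illustration of the composition rule formula_49 and of formula_45 (all numbers are arbitrary):

import numpy as np

def velocity_matrix(omega, v):
    """W = [[skew(omega), v], [0, 0]]: skew-symmetric angular part plus pole velocity."""
    wx, wy, wz = omega
    W = np.zeros((4, 4))
    W[:3, :3] = [[0.0, -wz,  wy],
                 [ wz, 0.0, -wx],
                 [-wy,  wx, 0.0]]
    W[:3, 3] = v
    return W

W_ij = velocity_matrix([0.1, 0.0, 0.2], [0.5, 0.0, 0.0])
W_jk = velocity_matrix([0.0, 0.3, 0.0], [0.0, 0.1, 0.2])
W_ik = W_ij + W_jk                 # relative velocity matrices simply add

P = np.array([1.0, 2.0, 0.5, 1.0]) # a point in homogeneous coordinates
print(W_ik @ P)                    # its velocity, since P_dot = W · P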
Dynamics.
For the dynamics, three further matrices are necessary to describe the inertia formula_53, the linear and angular momentum formula_54, and the forces and torques formula_55 applied to a body.
Inertia formula_53:
formula_56
where formula_57 is the mass, formula_58 represent the position of the center of mass, and the terms formula_59 represent inertia and are defined as
formula_60
formula_61
Action matrix formula_62, containing force formula_63 and torque formula_64:
formula_65
Momentum matrix formula_66, containing linear formula_67 and angular formula_68 momentum
formula_69
All the matrices are represented with the vector components in a certain frame formula_29. Transformation of the components from frame formula_41 to frame formula_70 follows the rule
formula_71
The matrices described allow the writing of the dynamic equations in a concise way.
Newton's law:
formula_72
Momentum:
formula_73
The first of these equations expresses Newton's law and is the equivalent of the vector equation formula_74 (force equals mass times acceleration) plus formula_75 (angular acceleration as a function of inertia and angular velocity); the second equation permits the evaluation of the linear and angular momentum when velocity and inertia are known.
Modified DH parameters.
Some books such as "Introduction to Robotics: Mechanics and Control (3rd Edition)" use modified (proximal) DH parameters. The difference between the classic (distal) DH parameters and the modified DH parameters are the locations of the coordinates system attachment to the links and the order of the performed transformations.
Compared with the classic DH parameters, the coordinate frame formula_76 is put on axis "i" − 1, not on axis "i" as in the classic DH convention. The coordinate frame formula_77 is put on axis "i", not on axis "i" + 1 as in the classic DH convention.
Another difference is that according to the modified convention, the transform matrix is given by the following order of operations:
formula_78
Thus, the matrix of the modified DH parameters becomes
formula_79
Note that some books use formula_80 and formula_81 to indicate the length and twist of link "n" − 1 rather than link "n". As a consequence, formula_82 is formed only with parameters using the same subscript.
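The modified transform can be transcribed in the same way as the classic one; a NumPy sketch following the matrix above, with the link length and twist carrying the subscript of the preceding link (again a sketch, not any library's API):

import numpy as np

def modified_dh_transform(alpha_prev, a_prev, theta, d):
    """Modified (proximal) DH link transform:
    Rot_x(alpha_{n-1}) · Trans_x(a_{n-1}) · Rot_z(theta_n) · Trans_z(d_n)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha_prev), np.sin(alpha_prev)
    return np.array([[ct,      -st,       0.,  a_prev],
                     [st * ca,  ct * ca, -sa, -d * sa],
                     [st * sa,  ct * sa,  ca,  d * ca],
                     [0.,       0.,       0.,      1.]])

# Spot check: with alpha_{n-1} = 0 and a_{n-1} = 0 this reduces to a rotation
# about z by theta followed by a translation d along z, as expected.
print(np.round(modified_dh_transform(0.0, 0.0, np.pi / 2, 0.3), 3))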
Surveys of DH conventions and their differences have been published.
References.
|
[
{
"math_id": 0,
"text": "[T] = [Z_1][X_1][Z_2][X_2]\\ldots[X_{n-1}][Z_n][X_n]\\!"
},
{
"math_id": 1,
"text": " [Z_i]=\\begin{bmatrix}\\cos\\theta_i & -\\sin\\theta_i & 0 & 0 \\\\ \\sin\\theta_i & \\cos\\theta_i & 0 & 0 \\\\ 0 & 0 & 1 & d_i \\\\ 0 & 0 & 0 & 1\\end{bmatrix}"
},
{
"math_id": 2,
"text": " [X_i]=\\begin{bmatrix} 1 & 0 & 0 & r_{i,i+1} \\\\ 0 & \\cos\\alpha_{i,i+1} & -\\sin\\alpha_{i,i+1} & 0 \\\\ 0& \\sin\\alpha_{i,i+1} & \\cos\\alpha_{i,i+1} & 0 \\\\ 0 & 0 & 0 & 1\\end{bmatrix},"
},
{
"math_id": 3,
"text": "x_n = z_n \\times z_{n-1} "
},
{
"math_id": 4,
"text": " [Z_i] = \\operatorname{Trans}_{Z_{i}}(d_i) \\operatorname{Rot}_{Z_{i}}(\\theta_i),"
},
{
"math_id": 5,
"text": " [X_i]=\\operatorname{Trans}_{X_i}(r_{i,i+1}) \\operatorname{Rot}_{X_i}(\\alpha_{i,i+1})."
},
{
"math_id": 6,
"text": "{}^{n - 1}T_n\n = [Z_{n-1}]\\cdot [X_n]"
},
{
"math_id": 7,
"text": "\\operatorname{Trans}_{z_{n - 1}}(d_n)\n =\n\\left[\n\\begin{array}{ccc|c}\n 1 & 0 & 0 & 0 \\\\\n 0 & 1 & 0 & 0 \\\\\n 0 & 0 & 1 & d_n \\\\\n \\hline\n 0 & 0 & 0 & 1\n \\end{array}\n\\right]\n"
},
{
"math_id": 8,
"text": "\\operatorname{Rot}_{z_{n - 1}}(\\theta_n)\n =\n\\left[\n\\begin{array}{ccc|c}\n \\cos\\theta_n & -\\sin\\theta_n & 0 & 0 \\\\\n \\sin\\theta_n & \\cos\\theta_n & 0 & 0 \\\\\n 0 & 0 & 1 & 0 \\\\\n \\hline\n 0 & 0 & 0 & 1\n \\end{array}\n\\right]\n"
},
{
"math_id": 9,
"text": "\\operatorname{Trans}_{x_n}(r_n)\n =\n\\left[\n\\begin{array}{ccc|c}\n 1 & 0 & 0 & r_n \\\\\n 0 & 1 & 0 & 0 \\\\\n 0 & 0 & 1 & 0 \\\\\n \\hline\n 0 & 0 & 0 & 1\n \\end{array}\n\\right]\n"
},
{
"math_id": 10,
"text": "\\operatorname{Rot}_{x_n}(\\alpha_n)\n =\n\\left[\n\\begin{array}{ccc|c}\n 1 & 0 & 0 & 0 \\\\\n 0 & \\cos\\alpha_n & -\\sin\\alpha_n & 0 \\\\\n 0 & \\sin\\alpha_n & \\cos\\alpha_n & 0 \\\\\n \\hline\n 0 & 0 & 0 & 1\n \\end{array}\n\\right]\n"
},
{
"math_id": 11,
"text": "\\operatorname{}^{n - 1}T_n\n =\n\\left[\n\\begin{array}{ccc|c}\n \\cos\\theta_n & -\\sin\\theta_n \\cos\\alpha_n & \\sin\\theta_n \\sin\\alpha_n & r_n \\cos\\theta_n \\\\\n \\sin\\theta_n & \\cos\\theta_n \\cos\\alpha_n & -\\cos\\theta_n \\sin\\alpha_n & r_n \\sin\\theta_n \\\\\n 0 & \\sin\\alpha_n & \\cos\\alpha_n & d_n \\\\\n \\hline\n 0 & 0 & 0 & 1\n \\end{array}\n\\right]\n\n=\n\n\\left[\n\\begin{array}{ccc|c}\n & & & \\\\\n & R & & T \\\\\n & & & \\\\\n \\hline\n 0 & 0 & 0 & 1\n \\end{array}\n\\right]\n"
},
{
"math_id": 12,
"text": " d_n"
},
{
"math_id": 13,
"text": "\\theta_n\n"
},
{
"math_id": 14,
"text": "z_{n-1}"
},
{
"math_id": 15,
"text": "x_{n}"
},
{
"math_id": 16,
"text": "\\operatorname{Trans}_{z_{n - 1}}(d_n) \\cdot\n \\operatorname{Rot}_{z_{n - 1}}(\\theta_n)\n = \\operatorname{Rot}_{z_{n - 1}}(\\theta_n) \\cdot \n \\operatorname{Trans}_{z_{n - 1}}(d_n) \n "
},
{
"math_id": 17,
"text": " \\operatorname{}^{n - 1}T_{n} "
},
{
"math_id": 18,
"text": "\n{}^{n - 1}T_n = \n\\operatorname{Trans}_{z_{n - 1}}(d_n) \\cdot\n\\operatorname{Rot}_{z_{n - 1}}(\\theta_n) \\cdot\n\\operatorname{Trans}_{x_n}(r_n) \\cdot\n\\operatorname{Rot}_{x_n}(\\alpha_n)\n"
},
{
"math_id": 19,
"text": "\n{}^{n - 1}T_n = \n\\operatorname{Rot}_{z_{n - 1}}(\\theta_n) \\cdot\n\\operatorname{Trans}_{z_{n - 1}}(d_n) \\cdot\n\\operatorname{Trans}_{x_n}(r_n) \\cdot\n\\operatorname{Rot}_{x_n}(\\alpha_n) \n"
},
{
"math_id": 20,
"text": "n"
},
{
"math_id": 21,
"text": "n-1"
},
{
"math_id": 22,
"text": " T "
},
{
"math_id": 23,
"text": " M "
},
{
"math_id": 24,
"text": " \\operatorname{}^{n - 1}T_n = M_{n-1,n} "
},
{
"math_id": 25,
"text": " M_{n-1,n} = \\left[ \\begin{array}{ccc|c} R_{xx} & R_{xy} & R_{xz} & T_x \\\\ R_{yx} & R_{yy} & R_{yz} & T_y \\\\ R_{zx} & R_{zy} & R_{zz} & T_z \\\\\n\\hline\n0 & 0 & 0 & 1 \\end{array}\\right]\n "
},
{
"math_id": 26,
"text": "3\\times 3"
},
{
"math_id": 27,
"text": "M"
},
{
"math_id": 28,
"text": "3\\times 1"
},
{
"math_id": 29,
"text": "k"
},
{
"math_id": 30,
"text": "i"
},
{
"math_id": 31,
"text": "j"
},
{
"math_id": 32,
"text": " M_{i,k}= M_{i,j} M_{j,k} "
},
{
"math_id": 33,
"text": " M^{-1} =\n\\left[\n\\begin{array}{ccc|c}\n & & & \\\\\n & R^T & & -R^T T \\\\\n & & & \\\\\n \\hline\n 0 & 0 & 0 & 1\n \\end{array}\n\\right]\n"
},
{
"math_id": 34,
"text": " R^T "
},
{
"math_id": 35,
"text": " R "
},
{
"math_id": 36,
"text": " R^{-1}_{ij}=R^T_{ij} = R_{ji} "
},
{
"math_id": 37,
"text": " W_{i,j(k)}=\\left[ \\begin{array}{ccc|c} 0 & -\\omega_z & \\omega_y & v_x \\\\ \\omega_z & 0 & -\\omega_x & v_y \\\\ -\\omega_y & \\omega_x & 0 & v_z \\\\\n\\hline\n0 & 0 & 0 & 0 \\end{array}\\right]"
},
{
"math_id": 38,
"text": " \\omega "
},
{
"math_id": 39,
"text": " j "
},
{
"math_id": 40,
"text": " i "
},
{
"math_id": 41,
"text": " k "
},
{
"math_id": 42,
"text": " v "
},
{
"math_id": 43,
"text": " i\n"
},
{
"math_id": 44,
"text": " H_{i,j(k)}=\\dot{W}_{i,j(k)}+W_{i,j(k)}^2 "
},
{
"math_id": 45,
"text": "\\dot{P} = W_{i,j} P "
},
{
"math_id": 46,
"text": "\\ddot{P} = H_{i,j} P "
},
{
"math_id": 47,
"text": "\\dot{M}_{i,j} = W_{i,j(i)} M_{i,j} "
},
{
"math_id": 48,
"text": "\\ddot{M}_{i,j} = H_{i,j(i)} M_{i,j} "
},
{
"math_id": 49,
"text": " W_{i,k}= W_{i,j} + W_{j,k} "
},
{
"math_id": 50,
"text": " H_{i,k}= H_{i,j} + H_{j,k} + 2W_{i,j} W_{j,k}"
},
{
"math_id": 51,
"text": " W_{(h)}=M_{h,k} W_{(k)} M_{k,h} "
},
{
"math_id": 52,
"text": " H_{(h)}=M_{h,k} H_{(k)} M_{k,h} "
},
{
"math_id": 53,
"text": " J "
},
{
"math_id": 54,
"text": " \\Gamma "
},
{
"math_id": 55,
"text": " \\Phi "
},
{
"math_id": 56,
"text": " J=\\left[ \\begin{array}{ccc|c} I_{xx} & I_{xy} & I_{xz} & x_g m \\\\ I_{yx} & I_{yy} &\nI_{yz} & y_g m \\\\ I_{zx} & I_{zy} & I_{zz} & z_g m \\\\\n\\hline\nx_g m & y_g m & z_g m & m \\end{array}\\right] "
},
{
"math_id": 57,
"text": " m "
},
{
"math_id": 58,
"text": " x_g,\\, y_g,\\, z_g "
},
{
"math_id": 59,
"text": " I_{xx},\\,I_{xy},\\ldots"
},
{
"math_id": 60,
"text": " I_{xx} =\\iint x^2 \\, dm "
},
{
"math_id": 61,
"text": "\n\\begin{align}\nI_{xy} & =\\iint xy \\, dm \\\\\nI_{xz} & = \\cdots \\\\\n& \\,\\,\\, \\vdots\n\\end{align}\n"
},
{
"math_id": 62,
"text": "\\Phi"
},
{
"math_id": 63,
"text": " f "
},
{
"math_id": 64,
"text": " t "
},
{
"math_id": 65,
"text": " \\Phi = \\left[ \\begin{array}{ccc|c} 0 & -t_z & t_y & f_x \\\\ t_z & 0 & -t_x & f_y \\\\ -t_y & t_x & 0 & f_z \\\\\n\\hline\n-f_x & -f_y & -f_z & 0 \\end{array}\\right]"
},
{
"math_id": 66,
"text": "\\Gamma"
},
{
"math_id": 67,
"text": " \\rho "
},
{
"math_id": 68,
"text": " \\gamma "
},
{
"math_id": 69,
"text": " \\Gamma = \\left[ \\begin{array}{ccc|c} 0 & -\\gamma_z & \\gamma_y & \\rho_x \\\\ \\gamma_z & 0 & -\\gamma_x & \\rho_y \\\\ -\\gamma_y & \\gamma_x & 0 & \\rho_z \\\\\n\\hline\n-\\rho_x & -\\rho_y & -\\rho_z & 0 \\end{array}\\right]"
},
{
"math_id": 70,
"text": " h "
},
{
"math_id": 71,
"text": "\n\\begin{align}\nJ_{(h)} & = M_{h,k} J_{(k)} M_{h,k}^T \\\\\n\\Gamma_{(h)} & = M_{h,k} \\Gamma_{(k)} M_{h,k}^T \\\\\n\\Phi_{(h)} & = M_{h,k} \\Phi_{(k)} M_{h,k}^T\n\\end{align}\n"
},
{
"math_id": 72,
"text": " \\Phi = H J - J H^t \\, "
},
{
"math_id": 73,
"text": " \\Gamma = W J - J W^t \\, "
},
{
"math_id": 74,
"text": " f = m a "
},
{
"math_id": 75,
"text": " t = J \\dot{\\omega} + \\omega \\times J \\omega "
},
{
"math_id": 76,
"text": "O_{i-1}"
},
{
"math_id": 77,
"text": "O_{i}"
},
{
"math_id": 78,
"text": "\n{}^{n - 1}T_n = \\operatorname{Rot}_{x_{n-1}}(\\alpha_{n-1}) \\cdot \\operatorname{Trans}_{x_{n-1}}(a_{n-1}) \\cdot \\operatorname{Rot}_{z_{n}}(\\theta_n) \\cdot \\operatorname{Trans}_{z_{n}}(d_n)\n "
},
{
"math_id": 79,
"text": "\\operatorname{}^{n - 1}T_n\n =\n\\left[\n\\begin{array}{ccc|c}\n \\cos\\theta_n & -\\sin\\theta_n & 0 & a_{n-1} \\\\\n \\sin\\theta_n \\cos\\alpha_{n-1} & \\cos\\theta_n \\cos\\alpha_{n-1} & -\\sin\\alpha_{n-1} & -d_n \\sin\\alpha_{n-1} \\\\\n \\sin\\theta_n\\sin\\alpha_{n-1} & \\cos\\theta_n \\sin\\alpha_{n-1} & \\cos\\alpha_{n-1} & d_n \\cos\\alpha_{n-1} \\\\\n \\hline\n 0 & 0 & 0 & 1\n \\end{array}\n\\right]\n"
},
{
"math_id": 80,
"text": "a_n"
},
{
"math_id": 81,
"text": "\\alpha_n"
},
{
"math_id": 82,
"text": "{}^{n - 1}T_n"
}
] |
https://en.wikipedia.org/wiki?curid=15577346
|
15580702
|
Target income sales
|
In cost accounting, target income sales are the sales necessary to achieve a given target income (or targeted income). It can be measured either in units or in currency (sales proceeds), and can be computed using contribution margin similarly to break-even point:
formula_0
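A small worked sketch of both forms of the formula in Python (all figures are illustrative assumptions):

def target_income_sales_units(fixed_costs, target_income, unit_contribution):
    """Units required: (fixed costs + target income) / contribution per unit."""
    return (fixed_costs + target_income) / unit_contribution

def target_income_sales_revenue(fixed_costs, target_income, contribution_margin_ratio):
    """Sales proceeds required: (fixed costs + target income) / contribution margin ratio."""
    return (fixed_costs + target_income) / contribution_margin_ratio

# Example: a selling price of $50 and variable cost of $30 per unit give a unit
# contribution of $20 and a contribution margin ratio of 20/50 = 0.4.
print(target_income_sales_units(100_000, 60_000, 20))      # 8000.0 units
print(target_income_sales_revenue(100_000, 60_000, 0.4))   # 400000.0 in sales proceeds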
|
[
{
"math_id": 0,
"text": "\\begin{align}\n&\\text{Target Income Sales (in Units)} & &= \\frac{\\text{Fixed Costs}+\\text{Target Income}}{\\text{Unit Contribution}}\\\\\n&\\text{Target Income Sales (in Sales proceeds)} & &= \\frac{\\text{Fixed Costs}+\\text{Target Income}}{\\text{Contribution Margin Ratio}}\n\\end{align}"
}
] |
https://en.wikipedia.org/wiki?curid=15580702
|
15581094
|
Numerical range
|
In the mathematical field of linear algebra and convex analysis, the numerical range or field of values of a complex formula_0 matrix "A" is the set
formula_1
where formula_2 denotes the conjugate transpose of the vector formula_3. The numerical range includes, in particular, the diagonal entries of the matrix (obtained by choosing "x" equal to the unit vectors along the coordinate axes) and the eigenvalues of the matrix (obtained by choosing "x" equal to the eigenvectors).
In engineering, numerical ranges are used as a rough estimate of eigenvalues of "A". Recently, generalizations of the numerical range are used to study quantum computing.
A related concept is the numerical radius, which is the largest absolute value of the numbers in the numerical range, i.e.
formula_4
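The numerical range can be explored numerically by evaluating the quotient on random nonzero vectors; a NumPy sketch (the sample matrix and sampling scheme are illustrative, and the printed value is only a lower bound on the numerical radius):

import numpy as np

def numerical_range_samples(A, n_samples=5000, seed=0):
    """Sample points x*Ax / x*x of W(A) for random complex vectors x."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    x = rng.standard_normal((n_samples, n)) + 1j * rng.standard_normal((n_samples, n))
    numerator   = np.einsum("ij,jk,ik->i", x.conj(), A, x)   # x* A x for each sample
    denominator = np.einsum("ij,ij->i", x.conj(), x)         # x* x  for each sample
    return numerator / denominator

A = np.array([[1.0, 1.0],
              [0.0, 1.0j]])
points = numerical_range_samples(A)
print(np.max(np.abs(points)))      # approximates (from below) the numerical radius r(A)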
References.
|
[
{
"math_id": 0,
"text": "n \\times n"
},
{
"math_id": 1,
"text": "W(A) = \\left\\{\\frac{\\mathbf{x}^*A\\mathbf{x}}{\\mathbf{x}^*\\mathbf{x}} \\mid \\mathbf{x}\\in\\mathbb{C}^n,\\ \\mathbf{x}\\not=0\\right\\} "
},
{
"math_id": 2,
"text": "\\mathbf{x}^*"
},
{
"math_id": 3,
"text": "\\mathbf{x}"
},
{
"math_id": 4,
"text": "r(A) = \\sup \\{ |\\lambda| : \\lambda \\in W(A) \\} = \\sup_{\\|x\\|=1} |\\langle Ax, x \\rangle|."
},
{
"math_id": 5,
"text": "W(\\alpha A+\\beta I)=\\alpha W(A)+\\{\\beta\\}"
},
{
"math_id": 6,
"text": "A"
},
{
"math_id": 7,
"text": "\\alpha"
},
{
"math_id": 8,
"text": "\\beta"
},
{
"math_id": 9,
"text": "I"
},
{
"math_id": 10,
"text": "W(A)"
},
{
"math_id": 11,
"text": "A+A^*"
},
{
"math_id": 12,
"text": "W(\\cdot)"
},
{
"math_id": 13,
"text": "W(A+B)\\subseteq W(A)+W(B)"
},
{
"math_id": 14,
"text": "2 \\times 2"
},
{
"math_id": 15,
"text": "[\\alpha, \\beta]"
},
{
"math_id": 16,
"text": "r(\\cdot)"
},
{
"math_id": 17,
"text": "r(A) \\leq \\|A\\| \\leq 2r(A) "
},
{
"math_id": 18,
"text": " \\|\\cdot\\|"
},
{
"math_id": 19,
"text": "r(A^n) \\le r(A)^n"
}
] |
https://en.wikipedia.org/wiki?curid=15581094
|
15581883
|
Federal Bridge Gross Weight Formula
|
Formula for estimating bridge weight limits
The Federal Bridge Gross Weight Formula, also known as Bridge Formula B or the Federal Bridge Formula, is a mathematical formula in use in the United States by truck drivers and Department of Transportation (DOT) officials to determine the appropriate maximum gross weight for a commercial motor vehicle (CMV) based on axle number and spacing. The formula is part of federal weight and size regulations regarding interstate commercial traffic (intrastate traffic is subject to state limits). The formula is necessary to prevent heavy vehicles from damaging roads and bridges. CMVs are most often tractor-trailers or buses, but the formula is of most interest to truck drivers due to the heavy loads their vehicles often carry.
Early 20th-century weight limits were enacted to protect dirt and gravel roads from damage caused by the solid wheels of heavy trucks. As time passed, truck weight limits focused primarily on gross weight limits (which had no prescribed limits on length). By 1974, bridges received special protection from increasing truck weight limits. The bridge formula law was enacted by the U.S. Congress to limit the weight-to-length ratio of heavy trucks, and to protect roads and bridges from the damage caused by the concentrated weight of shorter trucks. The formula effectively lowers the legal weight limit for shorter trucks, preventing them from causing premature deterioration of bridges and highway infrastructure.
Compliance with the law is checked when vehicles pass through a weigh station, often located at the borders between states or on the outskirts of major cities, where the vehicle may be weighed and measured. The one exception to the formula allows a standard five-axle semi-truck configuration to weigh the maximum legal gross weight. This exception was specifically requested by the American Trucking Associations to allow tank trucks to reach the maximum legal gross weight without violating the bridge formula law.
History.
The first truck weight limits were enacted by four states in 1913, ranging from in Maine to in Massachusetts. These laws were passed to protect earth and gravel-surfaced roads from damage caused by the steel and solid rubber wheels of early heavy trucks. By 1933, all states had some form of truck weight regulation. The Federal-Aid Highway Act of 1956 instituted the first federal truck weight regulation (set at ) and authorized the construction of the Interstate Highway System.
In the late 1950s, the American Association of State Highway and Transportation Officials (AASHTO) conducted a series of extensive field tests of roads and bridges to determine how traffic contributed to the deterioration of pavement materials. In 1964, the AASHTO recommended to Congress that a bridge formula table be used instead of a single gross weight limit for trucks. The Federal-Aid Highway Act Amendments of 1974 established the bridge formula as law, along with the gross weight limit of . Current applications of the formula allow for up to 7 axles and 86 feet or more length between axle sets, and a maximum load of 105,500 lbs.
Usage.
The formula was enacted as law to limit the weight-to-length ratio of a commercial motor vehicle (CMV). The formula is necessary to prevent the concentrated truck's axles from overstressing pavements and bridge members (possibly causing a bridge collapse). In simplified form, this is analogous to a person walking on thin ice. When standing upright, a person's weight is concentrated at the bottom of their feet, funneling all of their weight into a small area. When lying down, a person's weight is distributed over a much larger area. This difference in weight distribution would allow a person to cross an area of ice while crawling that might otherwise collapse under their body weight while standing up. For an overweight truck to comply with the formula, more axles must be added, the distance between axles must be increased, or weight must be removed.
While the Federal Motor Carrier Safety Administration (FMCSA) regulates safety for the U.S. trucking industry, the Federal Highway Administration (FHWA) oversees State enforcement of the Federal truck size and weight limits set by Congress for the Federal Aid System, as described in 23 CFR 658. The Federal size limits apply in all States to the National Network (NN), which is a network of Interstate Highways, U.S. Highways, and state highways. Provided a truck remains on the NN, it is not subject to State size limits. In a similar fashion, the Federal weight limits and the Federal Bridge Formula apply to the Interstate System in all States. State truck size and weight regulations apply to the Federal Aid System routes that do not have Federal limits.
The weight and size of CMVs are restricted for practical and safety reasons. CMVs are restricted by gross weight (total weight of vehicle and cargo), and by axle weight (i.e., the weight carried by each tire). The federal weight limits for CMVs are for gross weight (unless the bridge formula dictates a lower limit), for a tandem axle, and for a single axle. A tandem axle is defined as two or more consecutive axles whose centers are spaced more than but not more than apart. Axles spaced less than apart are considered a single axle.
In effect, the formula reduces the legal weight limit for shorter trucks with fewer axles (see table below). For example, a three-axle dump truck would have a gross weight limit of , instead of , which is the standard weight limit for a five-axle tractor-trailer. FHWA regulation §658.17 states: "The maximum gross vehicle weight shall be except where lower gross vehicle weight is dictated by the bridge formula."
Bridge collapse.
The August 2007 collapse of the Interstate 35W Mississippi River bridge in Minneapolis brought renewed attention to the issue of truck weights and their relation to bridge stress. In November 2008, the National Transportation Safety Board determined there had been several reasons for the bridge's collapse, including (but not limited to): faulty gusset plates, inadequate inspections, and the extra weight of heavy construction equipment combined with the weight of rush hour traffic. The I-35 Trade Corridor Study reported that the Federal Highway Administration (FHWA) expressed concern over bridges on the I-35 corridor due to an expected increase of international truck traffic from Canada and Mexico, with the FHWA listing it as "high-priority" in 2005.
As of 2007, federal estimates suggest truck traffic increased 216% since 1970, shortly before the federal gross weight limit for trucks was increased by . This is also the period during which many of the existing interstate bridges were built. Research shows that increased truck traffic (and therefore, increased stress) shortens the life of bridges. National Pavement Cost Model (NAPCOM) estimates indicate that one truck does as much damage to roads as 750 cars.
Some smaller bridges have a weight limit (or gross weight load rating) indicated by a posted sign (hence the reference to a "posted bridge"). These are necessary when the weight limit of the bridge is lower than the federal or state gross weight limit for trucks. Driving a truck over a bridge that is too weak to support it usually does not result in an immediate collapse. The bridge may develop cracks, which over time can weaken the bridge and cause it to collapse. Most of these cracks are discovered during mandated inspections of bridges. Most bridge collapses occur in rural areas, result in few injuries or deaths, and receive relatively little media attention. While the number varies from year to year, as many as 150 bridges can collapse in a year. About 1,500 bridges collapsed between 1966 and 2007, and most of those were the result of soil erosion around bridge supports. In 1987, the Schoharie Creek Bridge collapsed in upstate New York, due to erosion of soil around the foundation, which sparked renewed interest in bridge design and inspection procedures.
In special cases involving unusually overweight trucks (which require special permits), not observing a bridge weight limit can lead to disastrous consequences. Fifteen days after the collapse of the Minneapolis bridge, a heavy truck collapsed a small bridge in Oakville, Washington.
Formula law.
CMVs are required to pass through weigh stations at the borders of most states and some large cities. These weigh stations are run by state DOTs, and CMV weight and size enforcement is overseen by the FHWA. Weigh stations check each vehicle's gross weight and axle weight using a set of in-ground truck scales, and are usually where a truck's compliance with the formula is checked.
FMCSA regulation §658.17 states:
formula_0
Two or more consecutive axles may not exceed the weight computed by the bridge formula, even if the gross weight of the truck is otherwise within the legal limit. This means that the "outer group" (axles 1-5, which carries the entire Gross Vehicle Weight (GVW) of the truck) and every interior combination of axles must also comply with the bridge formula. A State may not issue fewer than four citations when a truck violates each of the Federal weight limits on the Interstate System, which are: 1) single axle, 2) tandem axle, 3) Gross Vehicle Weight (GVW), and 4) inner group.
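In the usual reading of Bridge Formula B, W is the maximum weight in pounds that can be carried on a group of two or more consecutive axles, L is the distance in feet between the outer axles of the group, and N is the number of axles in the group, with W rounded to the nearest 500 pounds. With that reading (stated here as an assumption, since the defining list is not reproduced above), the formula can be sketched as:

def bridge_formula_weight(L, N):
    """Maximum weight in pounds on a group of N consecutive axles whose outer axles are L feet apart."""
    return 500 * ((L * N) / (N - 1) + 12 * N + 36)

# Example: the outer group (axles 1-5) of a typical five-axle combination
# with 51 feet between the first and last axles.
print(bridge_formula_weight(51, 5))   # 79875.0, i.e. 80,000 lb to the nearest 500 lb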
Penalties for violating weight limits vary between states (bridge formula weight violations are treated as gross weight violations), as the states are responsible for enforcement and collection of fines. Some states, such as Connecticut, issue fines on a percentage basis (e.g. 20% overweight at $10 per ), which means larger trucks pay higher fines. For example, a truck with a legal gross limit of that violates the limit by would pay a fine of $500, while a truck with a legal gross limit of that violates the limit by 5,000 pounds would pay a fine of $250. Other states, such as New York, issue fines on a per-pound basis (e.g., 5,000 pounds overweight equals a $300 fine). Others, such as Massachusetts, impose a less complicated fine schedule whereby a vehicle that violates the limits by less than is fined $40 per , while a violation over pays $80 per (e.g. overweight equals a $200 fine).
Some states require overweight trucks to offload enough cargo to comply with the limits. In Florida, any vehicle that exceeds the limits by more than is required to be unloaded until the vehicle is in compliance. Florida also includes a scale tolerance, which allows for violations of less than 10% to be forgiven, and no fine issued. Florida also allows for a load to be shifted (e.g., moved from the front towards the rear of the vehicle) for the vehicle to comply with axle weight limits, without penalty.
Exception.
There is one exception to the formula: two consecutive sets of tandem axles may carry each if the overall distance between the first and last axles of these tandems is or more. For example, a five-axle truck may carry 34,000 pounds both on the tractor tandem axles (2 and 3) and the trailer tandem axles (4 and 5), provided axles 2 and 5 are spaced at least apart.
This exception allows for the standard 5-axle semi-truck configuration to gross up to (the legal limit) without being in violation of the bridge formula law. Without it, the bridge formula would allow an actual weight of only to on tandems spaced to apart; compared to with the exception. This exception was sought by the American Trucking Associations so trucking companies could use trailers and weigh . It was the only way tank truck operators could reach 80,000 pounds without adding axles to their fleets of trailers already in operation.
A CMV may exceed the bridge formula limits (or gross weight and its axle weight limits) by up to 550 pounds if the vehicle is equipped with an auxiliary power unit (APU) or idle reduction technology. This is permitted "in order to promote reduction of fuel use and emissions because of engine idling". To be eligible, the vehicle's operator must prove the weight of the APU with written certification, or—by demonstration or certification—that the idle reduction technology is fully functional at all times. Certification of the APU's weight must be available to law enforcement officers if the vehicle is found in violation of applicable weight laws. The additional weight allowed cannot exceed 550 pounds or the weight certified, whichever is less.
Issues.
The bridge formula (also referred to as Formula B) is based on research into single-span bridges, and fails to consider multiple-span bridges. Two-span bridges may not be fully protected by Formula B, depending on the truck length, span length, and other factors. Shorter wheelbase vehicles (usually specialized trucks such as garbage trucks and water trucks) have trouble complying with Formula B.
In 1987, the U.S. Congress passed the Surface Transportation and Uniform Relocation Assistance Act, requesting the Transportation Research Board (TRB) to conduct a study to develop alternatives to Formula B. The study recommended several alternatives that were never implemented. It suggested that Formula B was too strict for trucks with shorter axle lengths. One of the alternative formulas (later known as the TTI HS-20 Bridge Formula) was developed in conjunction with the Texas Transportation Institute. TTI HS-20 allowed shorter trucks to have higher weight limits than Formula B. For a 3-axle truck with an axle length of , the weight limit increased from to . TTI HS-20 also failed to address the problem of multiple-span bridges.
References.
|
[
{
"math_id": 0,
"text": "W = 500 \\left ( \\frac{LN}{N-1} + 12N + 36 \\right ) "
}
] |
https://en.wikipedia.org/wiki?curid=15581883
|
155823
|
Sievert
|
SI unit of equivalent dose of ionizing radiation
The sievert (symbol: Sv) is a unit in the International System of Units (SI) intended to represent the stochastic health risk of ionizing radiation, which is defined as the probability of causing radiation-induced cancer and genetic damage. The sievert is important in dosimetry and radiation protection. It is named after Rolf Maximilian Sievert, a Swedish medical physicist renowned for work on radiation dose measurement and research into the biological effects of radiation.
The sievert is used for radiation dose quantities such as equivalent dose and effective dose, which represent the risk of external radiation from sources outside the body, and committed dose, which represents the risk of internal irradiation due to inhaled or ingested radioactive substances. According to the International Commission on Radiological Protection (ICRP), one sievert results in a 5.5% probability of eventually developing fatal cancer based on the disputed linear no-threshold model of ionizing radiation exposure.
To calculate the value of stochastic health risk in sieverts, the physical quantity absorbed dose is converted into equivalent dose and effective dose by applying factors for radiation type and biological context, published by the ICRP and the International Commission on Radiation Units and Measurements (ICRU). One sievert equals 100 rem, which is an older, CGS radiation unit.
Conventionally, deterministic health effects due to acute tissue damage that is certain to happen, produced by high dose rates of radiation, are compared to the physical quantity absorbed dose measured by the unit gray (Gy).
Definition.
CIPM definition of the sievert.
The SI definition given by the International Committee for Weights and Measures (CIPM) says:
"The quantity dose equivalent "H" is the product of the absorbed dose "D" of ionizing radiation and the dimensionless factor "Q" (quality factor) defined as a function of linear energy transfer by the ICRU"
"H" = "Q" × "D"
The value of "Q" is not defined further by CIPM, but it requires the use of the relevant ICRU recommendations to provide this value.
The CIPM also says that "in order to avoid any risk of confusion between the absorbed dose "D" and the dose equivalent "H", the special names for the respective units should be used, that is, the name gray should be used instead of joules per kilogram for the unit of absorbed dose "D" and the name sievert instead of joules per kilogram for the unit of dose equivalent "H"".
In summary:
gray: quantity "D"—absorbed dose
1 Gy = 1 joule/kilogram—a physical quantity. 1 Gy is the deposit of a joule of radiation energy per kilogram of matter or tissue.
sievert: quantity "H"—equivalent dose
1 Sv = 1 joule/kilogram—a biological effect. The sievert represents the equivalent biological effect of the deposit of a joule of radiation energy in a kilogram of human tissue. The ratio to absorbed dose is denoted by "Q".
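As a simple numerical illustration of the relation "H" = "Q" × "D" (the factors used here, 1 for photons and about 20 for alpha particles, are commonly quoted round values given only for illustration, not as a substitute for the current ICRU/ICRP recommendations):

def dose_equivalent_sv(absorbed_dose_gy, quality_factor):
    """Dose equivalent H (Sv) from absorbed dose D (Gy) and the dimensionless factor Q."""
    return absorbed_dose_gy * quality_factor

print(dose_equivalent_sv(0.002, 1))    # 0.002 Sv for 2 mGy of photon radiation (Q = 1)
print(dose_equivalent_sv(0.002, 20))   # 0.04 Sv for 2 mGy of alpha radiation (Q ≈ 20)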
ICRP definition of the sievert.
The ICRP definition of the sievert is:
"The sievert is the special name for the SI unit of equivalent dose, effective dose, and operational dose quantities. The unit is joule per kilogram."
The sievert is used for a number of dose quantities which are described in this article and are part of the international radiological protection system devised and defined by the ICRP and ICRU.
External dose quantities.
When the sievert is used to represent the stochastic effects of external ionizing radiation on human tissue, the radiation doses received are measured in practice by radiometric instruments and dosimeters and are called operational quantities. To relate these actual received doses to likely health effects, protection quantities have been developed to predict the likely health effects using the results of large epidemiological studies. Consequently, this has required the creation of a number of different dose quantities within a coherent system developed by the ICRU working with the ICRP.
The external dose quantities and their relationships are shown in the accompanying diagram. The ICRU is primarily responsible for the operational dose quantities, based upon the application of ionising radiation metrology, and the ICRP is primarily responsible for the protection quantities, based upon modelling of dose uptake and biological sensitivity of the human body.
Naming conventions.
The ICRU/ICRP dose quantities have specific purposes and meanings, but some use common words in a different order. There can be confusion between, for instance, "equivalent dose" and "dose equivalent".
Although the CIPM definition states that the linear energy transfer function (Q) of the ICRU is used in calculating the biological effect, the ICRP in 1990 developed the "protection" dose quantities "effective" and "equivalent" dose which are calculated from more complex computational models and are distinguished by not having the phrase "dose equivalent" in their name. Only the operational dose quantities which still use Q for calculation retain the phrase "dose equivalent". However, there are joint ICRU/ICRP proposals to simplify this system by changes to the operational dose definitions to harmonise with those of protection quantities. These were outlined at the 3rd International Symposium on Radiological Protection in October 2015, and if implemented would make the naming of operational quantities more logical by introducing "dose to lens of eye" and "dose to local skin" as "equivalent doses".
In the USA there are differently named dose quantities which are not part of the ICRP nomenclature.
Physical quantities.
These are directly measurable physical quantities in which no allowance has been made for biological effects. Radiation fluence is the number of radiation particles impinging per unit area per unit time, kerma is the ionising effect on air of gamma rays and X-rays and is used for instrument calibration, and absorbed dose is the amount of radiation energy deposited per unit mass in the matter or tissue under consideration.
Operational quantities.
Operational quantities are measured in practice, and are the means of directly measuring dose uptake due to exposure, or predicting dose uptake in a measured environment. In this way they are used for practical dose control, by providing an estimate or upper limit for the value of the protection quantities related to an exposure. They are also used in practical regulations and guidance.
The calibration of individual and area dosimeters in photon fields is performed by measuring the collision "air kerma free in air" under conditions of secondary electron equilibrium. Then the appropriate operational quantity is derived applying a conversion coefficient that relates the air kerma to the appropriate operational quantity. The conversion coefficients for photon radiation are published by the ICRU.
Simple (non-anthropomorphic) "phantoms" are used to relate operational quantities to measured free-air irradiation. The ICRU sphere phantom is based on the definition of an ICRU 4-element tissue-equivalent material which does not really exist and cannot be fabricated. The ICRU sphere is a theoretical 30 cm diameter "tissue equivalent" sphere consisting of a material with a density of 1 g·cm−3 and a mass composition of 76.2% oxygen, 11.1% carbon, 10.1% hydrogen and 2.6% nitrogen. This material is specified to most closely approximate human tissue in its absorption properties. According to the ICRP, the ICRU "sphere phantom" in most cases adequately approximates the human body as regards the scattering and attenuation of penetrating radiation fields under consideration. Thus radiation of a particular energy fluence will have roughly the same energy deposition within the sphere as it would in the equivalent mass of human tissue.
To allow for back-scattering and absorption of the human body, the "slab phantom" is used to represent the human torso for practical calibration of whole body dosimeters. The slab phantom is 300 mm × 300 mm × 150 mm depth to represent the human torso.
The joint ICRU/ICRP proposals outlined at the 3rd International Symposium on Radiological Protection in October 2015 to change the definition of operational quantities would not change the present use of calibration phantoms or reference radiation fields.
Protection quantities.
Protection quantities are calculated models, and are used as "limiting quantities" to specify exposure limits to ensure, in the words of ICRP, "that the occurrence of stochastic health effects is kept below unacceptable levels and that tissue reactions are avoided". These quantities cannot be measured in practice but their values are derived using models of external dose to internal organs of the human body, using anthropomorphic phantoms. These are 3D computational models of the body which take into account a number of complex effects such as body self-shielding and internal scattering of radiation. The calculation starts with organ absorbed dose, and then applies radiation and tissue weighting factors.
As protection quantities cannot practically be measured, operational quantities must be used to relate them to practical radiation instrument and dosimeter responses.
Instrument and dosimetry response.
This is the actual reading obtained from an instrument such as an ambient dose gamma monitor, or from a personal dosimeter. Such instruments are calibrated using radiation metrology techniques that trace them to a national radiation standard, and thereby relate them to an operational quantity. The readings of instruments and dosimeters are used to prevent the uptake of excessive dose and to provide records of dose uptake to satisfy radiation safety legislation, such as, in the UK, the Ionising Radiations Regulations 1999.
Calculating protection dose quantities.
The sievert is used in external radiation protection for equivalent dose (the external-source, whole-body exposure effects, in a uniform field), and effective dose (which depends on the body parts irradiated).
These dose quantities are weighted averages of absorbed dose designed to be representative of the stochastic health effects of radiation, and use of the sievert implies that appropriate weighting factors have been applied to the absorbed dose measurement or calculation (expressed in grays).
The ICRP calculation provides two weighting factors to enable the calculation of protection quantities.
1. The radiation factor "W""R", which is specific for radiation type "R" – This is used in calculating the equivalent dose "H""T" which can be for the whole body or for individual organs.
2. The tissue weighting factor "W""T", which is specific for tissue type T being irradiated. This is used with "W""R" to calculate the contributory organ doses to arrive at an effective dose "E" for non-uniform irradiation.
When a whole body is irradiated uniformly only the radiation weighting factor "W""R" is used, and the effective dose equals the whole body equivalent dose. But if the irradiation of a body is partial or non-uniform the tissue factor "W""T" is used to calculate the dose to each organ or tissue. These are then summed to obtain the effective dose. In the case of uniform irradiation of the human body the tissue weighting factors sum to 1, but in the case of partial or non-uniform irradiation the factors applied sum to a lower value depending on the organs concerned, reflecting the lower overall health effect. The calculation process is shown on the accompanying diagram. This approach calculates the biological risk contribution to the whole body, taking into account complete or partial irradiation, and the radiation type or types.
The values of these weighting factors are conservatively chosen to be greater than the bulk of experimental values observed for the most sensitive cell types, based on averages of those obtained for the human population.
Radiation type weighting factor "W""R".
Since different radiation types have different biological effects for the same deposited energy, a corrective radiation weighting factor "WR", which is dependent on the radiation type and on the target tissue, is applied to convert the absorbed dose measured in the unit gray to determine the equivalent dose. The result is given the unit sievert.
The equivalent dose is calculated by multiplying the absorbed energy, averaged by mass over an organ or tissue of interest, by a radiation weighting factor appropriate to the type and energy of radiation. To obtain the equivalent dose for a mix of radiation types and energies, a sum is taken over all types of radiation energy dose.
formula_0
where
"HT" is the equivalent dose absorbed by tissue "T",
"D""T","R" is the absorbed dose in tissue "T" by radiation type "R" and
"WR" is the radiation weighting factor defined by regulation.
Thus for example, an absorbed dose of 1 Gy by alpha particles will lead to an equivalent dose of 20 Sv.
This may seem to be a paradox. It implies that the energy of the incident radiation field in joules has increased by a factor of 20, thereby violating the laws of conservation of energy. However, this is not the case. The sievert is used only to convey the fact that a gray of absorbed alpha particles would cause twenty times the biological effect of a gray of absorbed x-rays. It is this biological component that is being expressed when using sieverts rather than the actual energy delivered by the incident absorbed radiation.
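The weighted sum above lends itself to a direct computation. The following minimal Python sketch is not taken from any ICRP publication; the weighting factors shown are illustrative only, with the alpha value matching the 1 Gy → 20 Sv example above.

```python
# Equivalent dose H_T = sum over radiation types R of W_R * D_T,R.
# Illustrative weighting factors only: photons = 1, alpha particles = 20
# (consistent with the 1 Gy alpha -> 20 Sv example in the text).
RADIATION_WEIGHTING = {"photon": 1.0, "alpha": 20.0}  # W_R, dimensionless

def equivalent_dose_sv(absorbed_doses_gy):
    """Absorbed doses in gray, keyed by radiation type -> equivalent dose in sieverts."""
    return sum(RADIATION_WEIGHTING[r] * d for r, d in absorbed_doses_gy.items())

print(equivalent_dose_sv({"alpha": 1.0}))                  # 20.0 Sv
print(equivalent_dose_sv({"alpha": 1.0, "photon": 0.5}))   # 20.5 Sv
```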
Tissue type weighting factor "W""T".
The second weighting factor is the tissue factor "W""T", but it is used only if there has been non-uniform irradiation of a body. If the body has been subject to uniform irradiation, the effective dose equals the whole body equivalent dose, and only the radiation weighting factor "W""R" is used. But if there is partial or non-uniform body irradiation the calculation must take account of the individual organ doses received, because the sensitivity of each organ to irradiation depends on their tissue type. This summed dose from only those organs concerned gives the effective dose for the whole body. The tissue weighting factor is used to calculate those individual organ dose contributions.
The ICRP values for "W""T" are given in the table shown here.
The article on effective dose gives the method of calculation. The absorbed dose is first corrected for the radiation type to give the equivalent dose, and then corrected for the tissue receiving the radiation. Some tissues like bone marrow are particularly sensitive to radiation, so they are given a weighting factor that is disproportionally large relative to the fraction of body mass they represent. Other tissues like the hard bone surface are particularly insensitive to radiation and are assigned a disproportionally low weighting factor.
In summary, the sum of tissue-weighted doses to each irradiated organ or tissue of the body adds up to the effective dose for the body. The use of effective dose enables comparisons of overall dose received regardless of the extent of body irradiation.
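As a rough illustration of the summation just described, the sketch below combines per-organ equivalent doses into an effective dose. The tissue weighting factors used here are placeholders for illustration; the full ICRP table, whose factors sum to 1 over the whole body, is not reproduced in this article.

```python
# Effective dose E = sum over tissues T of W_T * H_T.
# The W_T values below are illustrative placeholders, not the full ICRP table.
TISSUE_WEIGHTING = {"lung": 0.12, "red_bone_marrow": 0.12, "skin": 0.01}  # W_T

def effective_dose_sv(organ_equivalent_doses_sv):
    """Per-organ equivalent doses in sieverts -> effective dose in sieverts."""
    return sum(TISSUE_WEIGHTING[t] * h for t, h in organ_equivalent_doses_sv.items())

# Non-uniform exposure: 10 mSv equivalent dose to the lungs only.
print(effective_dose_sv({"lung": 0.010}))   # 0.0012 Sv, i.e. 1.2 mSv
```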
Operational quantities.
The operational quantities are used in practical applications for monitoring and investigating external exposure situations. They are defined for practical operational measurements and assessment of doses in the body. Three external operational dose quantities were devised to relate operational dosimeter and instrument measurements to the calculated protection quantities. Also devised were two phantoms, the ICRU "slab" and "sphere" phantoms, which relate these quantities to incident radiation quantities using the Q(L) calculation.
Ambient dose equivalent.
This is used for area monitoring of penetrating radiation and is usually expressed as the quantity "H"*(10). This means the radiation is equivalent to that found 10 mm within the ICRU sphere phantom in the direction of origin of the field. An example of penetrating radiation is gamma rays.
Directional dose equivalent.
This is used for monitoring of low penetrating radiation and is usually expressed as the quantity "H"'(0.07). This means the radiation is equivalent to that found at a depth of 0.07 mm in the ICRU sphere phantom. Examples of low penetrating radiation are alpha particles, beta particles and low-energy photons. This dose quantity is used for the determination of equivalent dose to tissues such as the skin or the lens of the eye. In radiological protection practice the value of the angle Ω, which specifies the direction of the radiation field, is usually not specified, as the dose is usually at a maximum at the point of interest.
Personal dose equivalent.
This is used for individual dose monitoring, such as with a personal dosimeter worn on the body. The recommended depth for assessment is 10 mm which gives the quantity "H"p(10).
Proposals for changing the definition of protection dose quantities.
In order to simplify the means of calculating operational quantities and assist in the comprehension of radiation dose protection quantities, ICRP Committee 2 and ICRU Report Committee 26 started in 2010 an examination of different means of achieving this through dose coefficients related to effective dose or absorbed dose.
Specifically:
1. For area monitoring of effective dose of whole body it would be:
"H" = Φ × conversion coefficient
The driver for this is that "H"∗(10) is not a reasonable estimate of effective dose due to high energy photons, as a result of the extension of particle types and energy ranges to be considered in ICRP report 116. This change would remove the need for the ICRU sphere and introduce a new quantity called "E"max.
2. For individual monitoring, to measure deterministic effects on eye lens and skin, it would be:
"D" = Φ × conversion coefficient for absorbed dose.
The driver for this is the need to measure the deterministic effect, which it is suggested, is more appropriate than stochastic effect. This would calculate equivalent dose quantities "H"lens and "H"skin.
This would remove the need for the ICRU Sphere and the Q-L function. Any changes would replace ICRU report 51, and part of report 57.
A final draft report was issued in July 2017 by ICRU/ICRP for consultation.
Internal dose quantities.
The sievert is used for human internal dose quantities in calculating committed dose. This is dose from radionuclides which have been ingested or inhaled into the human body, and thereby "committed" to irradiate the body for a period of time. The concepts of calculating protection quantities as described for external radiation applies, but as the source of radiation is within the tissue of the body, the calculation of absorbed organ dose uses different coefficients and irradiation mechanisms.
The ICRP defines Committed effective dose, E("t") as the sum of the products of the committed organ or tissue equivalent doses and the appropriate tissue weighting factors "W"T, where "t" is the integration time in years following the intake. The commitment period is taken to be 50 years for adults, and to age 70 years for children.
The ICRP further states "For internal exposure, committed effective doses are generally determined from an assessment of the intakes of radionuclides from bioassay measurements or other quantities (e.g., activity retained in the body or in daily excreta). The radiation dose is determined from the intake using recommended dose coefficients".
A committed dose from an internal source is intended to carry the same effective risk as the same amount of equivalent dose applied uniformly to the whole body from an external source, or the same amount of effective dose applied to part of the body.
Health effects.
Ionizing radiation has deterministic and stochastic effects on human health. Deterministic (acute tissue effect) events happen with certainty, with the resulting health conditions occurring in every individual who received the same high dose. Stochastic (cancer induction and genetic) events are inherently random, with most individuals in a group failing to ever exhibit any causal negative health effects after exposure, while an unpredictable minority do, often with the resulting subtle negative health effects being observable only after large, detailed epidemiological studies.
The use of the sievert implies that only stochastic effects are being considered, and to avoid confusion deterministic effects are conventionally compared to values of absorbed dose expressed by the SI unit gray (Gy).
Stochastic effects.
Stochastic effects are those that occur randomly, such as radiation-induced cancer. The consensus of nuclear regulators, governments and the UNSCEAR is that the incidence of cancers due to ionizing radiation can be modeled as increasing linearly with effective dose at a rate of 5.5% per sievert. This is known as the linear no-threshold model (LNT model). Some argue that this LNT model is now outdated and should be replaced with a threshold below which the body's natural cell processes repair damage and/or replace damaged cells. There is general agreement that the risk is much higher for infants and fetuses than adults, higher for the middle-aged than for seniors, and higher for women than for men, though there is no quantitative consensus about this.
Deterministic effects.
The deterministic (acute tissue damage) effects that can lead to acute radiation syndrome only occur in the case of acute high doses (≳ 0.1 Gy) and high dose rates (≳ 0.1 Gy/h) and are conventionally not measured using the unit sievert, but use the unit gray (Gy).
A model of deterministic risk would require different weighting factors (not yet established) than are used in the calculation of equivalent and effective dose.
ICRP dose limits.
The ICRP recommends a number of limits for dose uptake in table 8 of report 103. These limits are "situational", for planned, emergency and existing situations. Within these situations, limits are given for the following groups:
For occupational exposure, the limit is 50 mSv in a single year with a maximum of 100 mSv in a consecutive five-year period, and for members of the public the limit is an average of 1 mSv (0.001 Sv) of effective dose per year, not including medical and occupational exposures.
For comparison, natural radiation levels inside the United States Capitol are such that a human body would receive an additional dose rate of 0.85 mSv/a, close to the regulatory limit, because of the uranium content of the granite structure. According to the conservative ICRP model, someone who spent 20 years inside the capitol building would have an extra one in a thousand chance of getting cancer, over and above any other existing risk (calculated as: 20 a·0.85 mSv/a·0.001 Sv/mSv·5.5%/Sv ≈ 0.1%). However, that "existing risk" is much higher; an average American would have a 10% chance of getting cancer during this same 20-year period, even without any exposure to artificial radiation (see epidemiology of cancer and cancer rates).
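The figure quoted above can be restated as a one-line calculation; the sketch below simply reproduces the arithmetic of the preceding paragraph using the ICRP 5.5 %/Sv coefficient.

```python
years = 20
dose_rate_sv_per_year = 0.85e-3    # 0.85 mSv/a
risk_per_sv = 0.055                # ICRP linear no-threshold coefficient
extra_risk = years * dose_rate_sv_per_year * risk_per_sv
print(f"{extra_risk:.2%}")         # ~0.09%, i.e. roughly one in a thousand
```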
Dose examples.
Significant radiation doses are not frequently encountered in everyday life. The following examples can help illustrate relative magnitudes; these are meant to be examples only, not a comprehensive list of possible radiation doses. An "acute dose" is one that occurs over a short and finite period of time, while a "chronic dose" is a dose that continues for an extended period of time so that it is better described by a dose rate.
Dose rate examples.
All conversions between hours and years have assumed continuous presence in a steady field, disregarding known fluctuations, intermittent exposure and radioactive decay. Converted values are shown in parentheses. "/a" is "per annum", which means per year. "/h" means "per hour".
History.
The sievert has its origin in the röntgen equivalent man (rem) which was derived from CGS units. The International Commission on Radiation Units and Measurements (ICRU) promoted a switch to coherent SI units in the 1970s, and announced in 1976 that it planned to formulate a suitable unit for equivalent dose. The ICRP pre-empted the ICRU by introducing the sievert in 1977.
The sievert was adopted by the International Committee for Weights and Measures (CIPM) in 1980, five years after adopting the gray. The CIPM then issued an explanation in 1984, recommending when the sievert should be used as opposed to the gray. That explanation was updated in 2002 to bring it closer to the ICRP's definition of equivalent dose, which had changed in 1990. Specifically, the ICRP had introduced equivalent dose, renamed the quality factor (Q) to radiation weighting factor (WR), and dropped another weighting factor "N" in 1990. In 2002, the CIPM similarly dropped the weighting factor "N" from their explanation but otherwise kept other old terminology and symbols. This explanation only appears in the appendix to the SI brochure and is not part of the definition of the sievert.
Common SI usage.
The sievert is named after Rolf Maximilian Sievert. As with every SI unit named for a person, its symbol starts with an upper case letter (Sv), but when written in full, it follows the rules for capitalisation of a common noun; i.e., "sievert" becomes capitalised at the beginning of a sentence and in titles but is otherwise in lower case.
Frequently used SI prefixes are the millisievert (1 mSv = 0.001 Sv) and microsievert (1 μSv = 0.000 001 Sv). Commonly used units for time derivative or "dose rate" indications on instruments and warnings for radiological protection are μSv/h and mSv/h. Regulatory limits and chronic doses are often given in units of mSv/a or Sv/a, where they are understood to represent an average over the entire year. In many occupational scenarios, the hourly dose rate might fluctuate to levels thousands of times higher for a brief period of time, without infringing on the annual limits. The conversion from hours to years varies because of leap years and exposure schedules, but approximate conversions are:
1 mSv/h = 8.766 Sv/a
114.1 μSv/h = 1 Sv/a
Conversion from hourly rates to annual rates is further complicated by seasonal fluctuations in natural radiation, decay of artificial sources, and intermittent proximity between humans and sources. The ICRP once adopted fixed conversions for occupational exposure, although these have not appeared in recent documents:
8 h = 1 day
40 h = 1 week
50 weeks = 1 year
Therefore, for occupational exposures over that working schedule,
1 mSv/h = 2 Sv/a
500 μSv/h = 1 Sv/a
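The conversions above follow directly from the number of hours assumed per year: roughly 8766 hours for continuous presence, and 2000 hours (8 h × 5 days × 50 weeks) for the nominal occupational schedule. A small sketch, with the schedule values taken from the list above:

```python
HOURS_CONTINUOUS = 8766          # 365.25 days x 24 h
HOURS_OCCUPATIONAL = 8 * 5 * 50  # = 2000 h per working year

def annual_dose_sv(rate_usv_per_hour, hours_per_year):
    return rate_usv_per_hour * 1e-6 * hours_per_year

print(annual_dose_sv(1000, HOURS_CONTINUOUS))    # 1 mSv/h continuously -> 8.766 Sv/a
print(annual_dose_sv(114.1, HOURS_CONTINUOUS))   # -> ~1 Sv/a
print(annual_dose_sv(1000, HOURS_OCCUPATIONAL))  # 1 mSv/h at work -> 2 Sv/a
print(annual_dose_sv(500, HOURS_OCCUPATIONAL))   # -> 1 Sv/a
```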
Ionizing radiation quantities.
The following table shows radiation quantities in SI and non-SI units:
Although the United States Nuclear Regulatory Commission permits the use of the units curie, rad, and rem alongside SI units, the European Union's units of measurement directives required that their use for "public health ... purposes" be phased out by 31 December 1985.
Rem equivalence.
An older unit for the dose equivalent is the rem, still often used in the United States. One sievert is equal to 100 rem.
|
[
{
"math_id": 0,
"text": "H_T = \\sum_R W_R \\cdot D_{T,R},"
}
] |
https://en.wikipedia.org/wiki?curid=155823
|
15583411
|
Schanuel's lemma
|
In mathematics, especially in the area of algebra known as module theory, Schanuel's lemma, named after Stephen Schanuel, allows one to compare how far modules depart from being projective. It is useful in defining the Heller operator in the stable category, and in giving elementary descriptions of dimension shifting.
Statement.
Schanuel's lemma is the following statement:
Let "R" be a ring with identity.
If 0 → "K" → "P" → "M" → 0 and 0 → "K′" → "P′" → "M" → 0 are short exact sequences of "R"-modules and "P" and "P′" are projective, then "K" ⊕ "P′" is isomorphic to "K′" ⊕ "P".
Proof.
Define the following submodule of formula_0, where formula_1 and formula_2:
formula_3
The map formula_4, where formula_5 is defined as the projection of the first coordinate of formula_6 into formula_7, is surjective. Since formula_8 is surjective, for any formula_9, one may find a formula_10 such that formula_11. This gives formula_12 with formula_13. Now examine the kernel of the map formula_5:
formula_14
We may conclude that there is a short exact sequence
formula_15
Since formula_7 is projective this sequence splits, so formula_16. Similarly, we can write another map formula_17, and the same argument as above shows that there is another short exact sequence
formula_18
and so formula_19. Combining the two isomorphisms for formula_6 gives the desired result.
Long exact sequences.
The above argument may also be generalized to long exact sequences.
Origins.
Stephen Schanuel discovered the argument in Irving Kaplansky's homological algebra course at the University of Chicago in Autumn of 1958. Kaplansky writes:
"Early in the course I formed a one-step projective resolution of a module, and remarked that if the kernel was projective in one resolution it was projective in all. I added that, although the statement was so simple and straightforward, it would be a while before we proved it. Steve Schanuel spoke up and told me and the class that it was quite easy, and thereupon sketched what has come to be known as "Schanuel's lemma.""
|
[
{
"math_id": 0,
"text": "P \\oplus P'"
},
{
"math_id": 1,
"text": "\\phi \\colon P \\to M"
},
{
"math_id": 2,
"text": "\\phi' \\colon P' \\to M"
},
{
"math_id": 3,
"text": "X = \\{ (p,q) \\in P \\oplus P' : \\phi(p) = \\phi'(q) \\}."
},
{
"math_id": 4,
"text": "\\pi \\colon X \\to P"
},
{
"math_id": 5,
"text": "\\pi"
},
{
"math_id": 6,
"text": "X"
},
{
"math_id": 7,
"text": "P"
},
{
"math_id": 8,
"text": "\\phi'"
},
{
"math_id": 9,
"text": "p \\in P"
},
{
"math_id": 10,
"text": "q \\in P'"
},
{
"math_id": 11,
"text": "\\phi(p) = \\phi'(q)"
},
{
"math_id": 12,
"text": "(p,q) \\in X"
},
{
"math_id": 13,
"text": "\\pi(p,q) = p"
},
{
"math_id": 14,
"text": "\\begin{align}\n\\ker \\pi &= \\{ (0,q): (0,q) \\in X \\} \\\\\n& = \\{ (0,q): \\phi'(q) =0 \\} \\\\\n& \\cong \\ker \\phi' \\cong K'.\n\\end{align}"
},
{
"math_id": 15,
"text": "0 \\rightarrow K' \\rightarrow X \\rightarrow P \\rightarrow 0."
},
{
"math_id": 16,
"text": "X \\cong K' \\oplus P"
},
{
"math_id": 17,
"text": "\\pi' \\colon X \\to P'"
},
{
"math_id": 18,
"text": "0 \\rightarrow K \\rightarrow X \\rightarrow P' \\rightarrow 0,"
},
{
"math_id": 19,
"text": "X \\cong P' \\oplus K"
}
] |
https://en.wikipedia.org/wiki?curid=15583411
|
15584416
|
Curve resistance (railroad)
|
Additional rolling resistance present in curved sections of rail track
In railway engineering, curve resistance is a part of train resistance, namely the additional rolling resistance a train must overcome when travelling on a curved section of track. Curve resistance is typically measured in per mille (parts per thousand), the corresponding physical unit being the newton per kilonewton (N/kN). Older texts still use the obsolete unit kilogram-force per tonne (kgf/t).
Curve resistance depends on various factors, the most important being the radius and the superelevation of a curve. Since curves are usually banked by superelevation, there will exist some speed at which there will be no sideways force on the train and where therefore curve resistance is minimum. At higher or lower speeds, curve resistance may be a few (or several) times greater.
Approximation formulas.
Formulas typically used in railway engineering in general compute the resistance as inversely proportional to the radius of curvature (thus, they neglect the fact that the resistance is dependent on both speed and superelevation). For example, in the USSR, the standard formula is Wr (curve resistance in parts per thousand or kgf/tonne) = 700/"R" where "R" is the radius of the curve in meters. Other countries often use the same formula, but with a different numerator-constant. For example, the US used 446/"R", Italy 800/"R", England 600/"R", China 573/"R", etc. In Germany, Austria, Switzerland, Czechoslovakia, Hungary, and Romania the term "R - b" is used in the denominator (instead of just "R"), where "b" is some constant. Typically, the expressions used are "Röckl's formula", which uses 650/("R" - 55) for "R" above 300 meters, and 500/("R" - 30) for smaller radii. The fact that, at 300 meters, the two values of Röckl's formula differ by more than 30% shows that these formulas are rough estimates at best.
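The following short Python sketch implements two of the rules of thumb quoted above (the 700/"R" formula and Röckl's two branches); the constants are exactly those cited in the text, and the results are in kgf/tonne (numerically equal to N/kN).

```python
def curve_resistance_700(radius_m):
    """The 700/R rule of thumb (kgf/tonne or N/kN)."""
    return 700.0 / radius_m

def curve_resistance_roeckl(radius_m):
    """Roeckl's formula with the two branches described in the text."""
    if radius_m > 300:
        return 650.0 / (radius_m - 55.0)
    return 500.0 / (radius_m - 30.0)

for r in (300, 600, 1000):
    print(r, round(curve_resistance_700(r), 2), round(curve_resistance_roeckl(r), 2))
```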
The Russian experiments cited below show that all these formulas are inaccurate. At balancing speed, they give a curve resistance a few times too high (or worse). However, these approximation formulas are still contained in practically all standard railway engineering textbooks. For the US, AREMA American Railway Engineering ..., PDF, p.57 claims that curve resistance is 0.04% per degree of curvature (or 8 lbf/ton or 4 kgf/tonne). Hay's textbook also claims it is independent of superelevation. For Russia in 2011, internet articles use 700/R. German textbooks contain Röckl's formulas.
Speed and cant dependence per Russian experiments.
In the 1960s in the Soviet Union curve resistance was found by experiment to be highly dependent on both the velocity and the banking of the curve, also known as superelevation or cant, as can be seen in the graph above. If a train car rounds a curve at balancing speed such that the component of centrifugal force in the lateral direction (towards the outside of the curve and parallel with the plane of the track) is equal to the component of gravitational force in the opposite direction there is very little curve resistance. At such balancing speed there is zero cant deficiency and results in a frictionless banked turn. But deviate from this speed (either higher or lower) and the curve resistance increases due to the unbalance in forces which tends to pull the vehicle sideways (and would be felt by a passenger in a passenger train). Note that for empty rail cars (low wheel loads) the specific curve resistance is higher, similar to the phenomena of higher rolling resistance for empty cars on a straight track.
However, these experiments did not provide usable formulas for curve resistance, because the experiments were, unfortunately, all done on a test track with the same curvature (radius = 955 meters). Therefore, it is not clear how to account for curvature. The Russian experiments plot curve resistance against velocity for various types of railroad cars and various axle loads. The plots all show smooth convex curves with the minimums at balancing speed where the slope of the plotted curve is zero. These plots tend to show curve resistance increasing more rapidly with decreases in velocity below balancing speed, than for increases in velocity (by the same amounts) above balancing speeds. No explanation for this "asymmetrical velocity effect" is to be found in the references cited nor is any explanation found explaining the smooth convex curve plots mentioned above (except for explaining how they were experimentally determined).
That curve resistance is expected to be minimized at balancing speed was also proposed by Schmidt in 1927, but unfortunately the tests he conducted were all at below balancing speed. However his results all show curve resistance decreasing with increasing speed in conformance with this expectation.
Russian method of measuring in 1960s.
To experimentally find the curve resistance of a certain railroad freight car with a given load on its axles (partly due to the weight of the freight), the same car was tested both on a curved track and on a straight track. The difference in measured resistance (at the same speed) was assumed to be the curve resistance. To get an average for several cars of the same type, and to reduce the effect of aerodynamic drag, one may test a group of the same type of cars coupled together (a short train without a locomotive). The curved track used in the experiments was the test track of the National Scientific Investigation Institute of Railroad Transport (ВНИИЖТ). A single test run can find the train resistance (force) at various velocities by letting the rolling stock being tested coast down from a higher speed to a low speed, while continuously measuring the deceleration and using Newton's second law of motion (force = acceleration × mass) to find the resistance force that is causing the railroad cars to slow. In such calculations, one must take into account the moment of inertia of the car wheels by adding an equivalent mass (of rotating wheels) to the mass of the train consist. Thus the effective mass of a rail car used for Newton's second law is larger than the car mass as weighed on a car weighing scale. This additional equivalent mass is tantamount to having the mass of each wheel-axle set be located at its radius of gyration. See "Inertia Resistance" (for automobile wheels, but it is the same formula for railroad wheels).
Deceleration was measured by recording the distance traveled (using what might be called a recording odometer, or distance markers placed along the track, for example every 50 meters) against time. Division of distance by time gives velocity, and the differences in velocities divided by time give the deceleration. A sample data sheet shows time (in seconds) being recorded with three digits after the decimal point (thousandths of a second).
It turns out that there is no need to know the mass of the rolling stock to find the specific train resistance in kgf/tonne. This unit is force divided by mass which is acceleration per Newton's second law. But one must multiply kilograms of force by g (gravity) to get force in the metric units of Newtons. So the specific force (the result) is the deceleration multiplied by a constant which is 1/g times a factor to account for the equivalent mass due to wheel rotation. Then this specific force in kgf/kg must be multiplied by 1000 to get kgf/tonne since a tonne is 1000 kg.
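Put as a formula, the reduction described above is w = a·(1 + γ)/g × 1000 kgf/tonne, where a is the measured deceleration and (1 + γ) is the factor accounting for the equivalent mass of the rotating wheels. The sketch below follows that prescription; the rotating-mass factor of 1.06 is an assumed illustrative value, not taken from the cited experiments.

```python
G = 9.81  # standard gravity, m/s^2

def specific_resistance_kgf_per_tonne(deceleration_m_s2, rotating_mass_factor=1.06):
    # deceleration from coast-down data, divided by g and scaled to kgf/tonne;
    # the rotating-mass factor is illustrative only.
    return deceleration_m_s2 * rotating_mass_factor / G * 1000.0

print(specific_resistance_kgf_per_tonne(0.02))   # a 0.02 m/s^2 deceleration -> ~2.2 kgf/tonne
```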
Formulas which try to account for superelevation (cant).
Astakhov (Астахов) proposed the use of a formula which, when plotted, is in substantial disagreement with the experimental result curves previously mentioned. His formula for curve resistance (in kgf/tonne) is the sum of two terms, the first term being a conventional k/R term (R is the curve radius in meters) with k = 200 instead of 700. The second term is directly proportional to (1.5 times) the absolute value of the unbalanced acceleration in the plane of the track and perpendicular to the rail, such lateral acceleration being equal to the centrifugal acceleration formula_0, minus the gravitational component opposing it, g·tan(θ), where θ is the angle of the banking due to superelevation and v is the train velocity in m/s.
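Written out, the formula just described reads w = 200/"R" + 1.5·|("v"²/"R")·cos θ − g·tan θ| in kgf/tonne. The sketch below codes it exactly as stated in the text; treat it as illustrative only, since the original source is not reproduced here.

```python
from math import cos, tan, radians

G = 9.81  # m/s^2

def curve_resistance_astakhov(v_m_s, radius_m, cant_angle_deg):
    theta = radians(cant_angle_deg)
    unbalanced = (v_m_s**2 / radius_m) * cos(theta) - G * tan(theta)  # lateral acceleration
    return 200.0 / radius_m + 1.5 * abs(unbalanced)                   # kgf/tonne

# Near balancing speed the second term almost vanishes, leaving roughly 200/R:
print(curve_resistance_astakhov(v_m_s=16.0, radius_m=955.0, cant_angle_deg=1.6))
```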
Wheel rail interface.
The state of repair of the railhead and of the wheelset, and the compatibility of the two as chosen for a given railway, have a significant impact on the curve resistance.
Very high speed passenger trains run on track that is laid and maintained very accurately, with a wheel/rail interface contour suited to fast running, and usually on continuously welded rail; this gives near-ideal conditions. Freight vehicles with very high axle loadings are often run slowly on relatively poor track with inaccuracies laterally and vertically, and on jointed rails that give rise to bounce and sway; these will have a very different performance profile.
The wetness of the rail from rain, and from unintended lubricants such as leaf litter pulverised at the wheel/rail interface, will reduce the rail drag, but will increase the risk that driven wheels on powered axles will lose adhesion.
The formulae given only take into account full-size railways of standard or broad gauge, such as 4'8.5" through 5'3". There is a large range of gauges that can carry commercial freight, and an even greater range that can carry passengers, including miniature railways.
Normally trains follow a curve by means of the natural steering effect of the coned wheel tread: the effective rolling diameter changes across the tread so that the outer wheel acts as if it had a larger radius than the inner wheel, and, because the two wheels are mounted on a rigid axle (unlike road vehicles), the outer wheel is forced to travel further. There are also many railways, particularly tramways, where the radius of the curve is too small for this natural steering effect to succeed, with the result that the flange rubs against the side of the rail to force compliance with the curvature. This introduces a massive increase in the curve resistance.
These considerations all need further study. Standard formulae should not be applied unless the circumstances of use match those for which the formulae were established; the differences in the empirical results identified earlier in this article may be explained by the different circumstances of each analysis.
|
[
{
"math_id": 0,
"text": "\\frac{v^2}{R} cos(\\theta)"
}
] |
https://en.wikipedia.org/wiki?curid=15584416
|
1558470
|
Asaṃkhyeya
|
Buddhist name for a large number
An asaṃkhyeya is a Buddhist name for the number 10^140, or alternatively for the number formula_0 as it is described in the Avatamsaka Sutra. The value of the number is different depending upon the translation. It is formula_1 in the translation of Buddhabhadra, formula_2 in that of Shikshananda and formula_3 in that of Thomas Cleary, who may have made an error in calculation. In these religious traditions, the word has the meaning of 'incalculable'.
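The numbers themselves are far too large to write out, but the exponents "a"·2^"b" used by the different translations can be compared directly; a small sketch:

```python
# Exponents of 10 in the three translations quoted above; the asamkhyeya is 10**e.
exponents = {
    "Buddhabhadra": 5 * 2**103,
    "Shikshananda": 7 * 2**103,
    "Cleary": 10 * 2**104,
}
for name, e in exponents.items():
    print(name, e)
```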
Asaṃkhyeya is a Sanskrit word that appears often in the Buddhist texts. For example, Shakyamuni Buddha is said to have practiced for 4 great asaṃkhyeya kalpas before becoming a Buddha.
|
[
{
"math_id": 0,
"text": "10^{(a\\cdot2^b)}"
},
{
"math_id": 1,
"text": "10^{(5\\cdot2^{103})}"
},
{
"math_id": 2,
"text": "10^{(7\\cdot2^{103})}"
},
{
"math_id": 3,
"text": "10^{(10\\cdot2^{104})}"
}
] |
https://en.wikipedia.org/wiki?curid=1558470
|
15584973
|
Ordered dithering
|
Image dithering algorithm
Ordered dithering is any image dithering algorithm which uses a pre-set threshold map tiled across an image. It is commonly used to display a continuous image on a display of smaller color depth. For example, Microsoft Windows uses it in 16-color graphics modes. The algorithm is characterized by noticeable crosshatch patterns in the result.
Threshold map.
The algorithm reduces the number of colors by applying a threshold map M to the pixels displayed, causing some pixels to change color, depending on the distance of the original color from the available color entries in the reduced palette.
The first threshold maps were designed by hand to minimise the perceptual difference between a grayscale image and its two-bit quantisation for up to a 4x4 matrix.
An optimal threshold matrix is one that for any possible quantisation of color has the minimum possible texture so that the greatest impression of the underlying feature comes from the image being quantised. It can be proven that for matrices whose side length is a power of two there is an optimal threshold matrix. The map may be rotated or mirrored without affecting the effectiveness of the algorithm.
This threshold map (for sides whose length is a power of two) is also known as a Bayer matrix or, when unscaled, an index matrix. For threshold maps whose dimensions are a power of two, the map can be generated recursively via:
formula_0
While the metric for texture that Bayer proposed could be used to find optimal matrices for sizes that are not a power of two, such matrices are uncommon as no simple formula for finding them exists, and relatively small matrix sizes frequently give excellent practical results (especially when combined with other modifications to the dithering algorithm).
This function can also be expressed using only bit arithmetic:
M(i, j) = bit_reverse(bit_interleave(bitwise_xor(i, j), i)) / n ^ 2
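A small Python sketch of the recursive construction above. It builds the integer index matrix (entries 0 to "n"²−1) and normalizes once at the end, which yields the same threshold map as normalizing at every recursion step as the formula does.

```python
def bayer_index(k):
    """Integer Bayer index matrix of size 2**k x 2**k, entries 0 .. 4**k - 1."""
    m = [[0]]
    for _ in range(k):
        n = len(m)
        top = [[4 * m[i][j] + off for off in (0, 2) for j in range(n)] for i in range(n)]
        bottom = [[4 * m[i][j] + off for off in (3, 1) for j in range(n)] for i in range(n)]
        m = top + bottom
    return m

index4 = bayer_index(2)                                   # the familiar 4x4 matrix, entries 0..15
threshold4 = [[v / 16 for v in row] for row in index4]    # normalised map with values in [0, 1)
for row in index4:
    print(row)
```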
Pre-calculated threshold maps.
Rather than storing the threshold map as a matrix of formula_1×formula_1 integers from 0 to formula_2, depending on the exact hardware used to perform the dithering, it may be beneficial to pre-calculate the thresholds of the map into a floating point format, rather than the traditional integer matrix format shown above.
For this, the following formula can be used:
Mpre(i,j) = Mint(i,j) / n^2
This generates a standard threshold matrix.
For the 2×2 map with integer entries ((0, 2), (3, 1)), this creates the pre-calculated map ((0, 0.5), (0.75, 0.25)).
Additionally, normalizing the values to average out their sum to 0 (as done in the dithering algorithm shown below) can be done during pre-processing as well by subtracting 1⁄2 of the largest value from every value:
Mpre(i,j) = Mint(i,j) / n^2 – 0.5 * maxValue
For the 2×2 map this creates the pre-calculated map ((−0.375, 0.125), (0.375, −0.125)).
Algorithm.
The ordered dithering algorithm renders the image normally, but for each pixel, it offsets its color value with a corresponding value from the threshold map according to its location, causing the pixel's value to be quantized to a different color if it exceeds the threshold.
For most dithering purposes, it is sufficient to simply add the threshold value to every pixel (without performing normalization by subtracting 1⁄2), or equivalently, to compare the pixel's value to the threshold: if the brightness value of a pixel is less than the number in the corresponding cell of the matrix, plot that pixel black; otherwise, plot it white. This lack of normalization slightly increases the average brightness of the image, and causes almost-white pixels to not be dithered. This is not a problem when using a grayscale palette (or any palette where the relative color distances are (nearly) constant), and it is often even desired, since the human eye perceives differences in darker colors more accurately than lighter ones. However, it produces incorrect results, especially when using a small or arbitrary palette, so proper normalization should be preferred.
In other words, the algorithm performs the following transformation on each color "c" of every pixel:
formula_3
where "M"("i", "j") is the threshold map on the "i"-th row and "j"-th column, "c"′ is the transformed color, and "r" is the amount of spread in color space. Assuming an RGB palette with 23"N" evenly distanced colors where each color (a triple of red, green and blue values) is represented by an octet from 0 to 255, one would typically choose formula_4. (<templatestyles src="Fraction/styles.css" />1⁄2 is again the normalizing term.)
Because the algorithm operates on single pixels and has no conditional statements, it is very fast and suitable for real-time transformations. Additionally, because the location of the dithering patterns always stays the same relative to the display frame, it is less prone to jitter than error-diffusion methods, making it suitable for animations. Because the patterns are more repetitive than error-diffusion method, an image with ordered dithering compresses better. Ordered dithering is more suitable for line-art graphics as it will result in straighter lines and fewer anomalies.
The values read from the threshold map should preferably scale into the same range as the minimal difference between distinct colors in the target palette. Equivalently, the size of the map selected should be equal to or larger than the ratio of source colors to target colors. For example, when quantizing a 24 bpp image to 15 bpp (256 colors per channel to 32 colors per channel), the smallest map one would choose would be 4×2, for the ratio of 8 (256:32). This allows expressing each distinct tone of the input with different dithering patterns.
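A minimal sketch of the per-pixel transformation above for a grayscale image quantised to a few evenly spaced levels. The hard-coded 4×4 index matrix is the standard Bayer matrix; the spread "r" is taken as the distance between adjacent palette levels, one reasonable choice among several.

```python
BAYER4 = [[0, 8, 2, 10],
          [12, 4, 14, 6],
          [3, 11, 1, 9],
          [15, 7, 13, 5]]          # integer index matrix, normalised below by 16

def ordered_dither(image, levels=4):
    """image: 2-D list of intensities 0..255 -> dithered image using `levels` gray levels."""
    n = len(BAYER4)
    step = 255.0 / (levels - 1)    # spread r = spacing between palette entries
    out = []
    for y, row in enumerate(image):
        new_row = []
        for x, c in enumerate(row):
            t = BAYER4[y % n][x % n] / (n * n) - 0.5      # normalised threshold in [-0.5, 0.5)
            q = round((c + step * t) / step) * step        # nearest palette colour
            new_row.append(int(min(255, max(0, q))))
        out.append(new_row)
    return out

gradient = [[x * 4 for x in range(64)] for _ in range(8)]  # simple test image
dithered = ordered_dither(gradient)
```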
Non-Bayer approaches.
The above thresholding matrix approach describes the Bayer family of ordered dithering algorithms. A number of other algorithms are also known; they generally involve changes in the threshold matrix, equivalent to the "noise" in general descriptions of dithering.
Halftone.
Halftone dithering performs a form of clustered dithering, creating a look similar to halftone patterns, using a specially crafted matrix.
Void and cluster.
The Void and cluster algorithm uses a pre-generated blue noise matrix as the threshold map for the dithering process. The blue noise matrix keeps the good high-frequency content of the Bayer matrix, but its more uniform coverage of all the frequencies involved produces much less visible patterning.
The "voids-and-cluster" method gets its name from the matrix generation procedure, where a black image with randomly initialized white pixels is gaussian-blurred to find the brightest and darkest parts, corresponding to voids and clusters. After a few swaps have evenly distributed the bright and dark parts, the pixels are numbered by importance. It takes significant computational resources to generate the blue noise matrix: on a modern computer a 64×64 matrix requires a couple seconds using the original algorithm.
This algorithm can be extended to make animated dither masks which also consider the axis of time. This is done by running the algorithm in three dimensions and using a kernel which is a product of a two-dimensional gaussian kernel on the XY plane, and a one-dimensional Gaussian kernel on the Z axis.
|
[
{
"math_id": 0,
"text": "\n\\mathbf M_{2 n} = \\frac{1}{(2n)^2} \\times \\begin{bmatrix}\n(2n)^2 \\times \\mathbf M_n & (2n)^2 \\times \\mathbf M_n + 2 \\\\\n(2n)^2 \\times \\mathbf M_n + 3 & (2n)^2 \\times \\mathbf M_n + 1\n\\end{bmatrix}\n"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "n^2"
},
{
"math_id": 3,
"text": "\nc' = \\mathrm{nearest\\_palette\\_color}\\mathopen{}\\left(c + r \\times \\left(M(x \\bmod n, y \\bmod n) - 1/2\\right)\\mathclose{}\\right)\n"
},
{
"math_id": 4,
"text": "r \\approx \\frac{255}{N}"
}
] |
https://en.wikipedia.org/wiki?curid=15584973
|
1558510
|
Pushforward (differential)
|
Linear approximation of smooth maps on tangent spaces
In differential geometry, the pushforward is a linear approximation of a smooth map between manifolds, acting on tangent spaces. Suppose that formula_0 is a smooth map between smooth manifolds; then the differential of formula_1 at a point formula_2, denoted formula_3, is, in some sense, the best linear approximation of formula_1 near formula_2. It can be viewed as a generalization of the total derivative of ordinary calculus. Explicitly, the differential is a linear map from the tangent space of formula_4 at formula_2 to the tangent space of formula_5 at formula_6, formula_7. Hence it can be used to "push" tangent vectors on formula_4 "forward" to tangent vectors on formula_5. The differential of a map formula_1 is also called, by various authors, the derivative or total derivative of formula_1.
Motivation.
Let formula_8 be a smooth map from an open subset formula_9 of formula_10 to an open subset formula_11 of formula_12. For any point formula_2 in formula_9, the Jacobian of formula_1 at formula_2 (with respect to the standard coordinates) is the matrix representation of the total derivative of formula_1 at formula_2, which is a linear map
formula_13
between their tangent spaces. Note the tangent spaces formula_14 are isomorphic to formula_15 and formula_16, respectively. The pushforward generalizes this construction to the case that formula_1 is a smooth function between "any" smooth manifolds formula_4 and formula_5.
The differential of a smooth map.
Let formula_17 be a smooth map of smooth manifolds. Given formula_18 the differential of formula_19 at formula_20 is a linear map
formula_21
from the tangent space of formula_22 at formula_20 to the tangent space of formula_23 at formula_24 The image formula_25 of a tangent vector formula_26 under formula_27 is sometimes called the pushforward of formula_28 by formula_29 The exact definition of this pushforward depends on the definition one uses for tangent vectors (for the various definitions see tangent space).
If tangent vectors are defined as equivalence classes of the curves formula_30 for which formula_31 then the differential is given by
formula_32
Here, formula_33 is a curve in formula_22 with formula_31 and formula_34 is tangent vector to the curve formula_33 at formula_35 In other words, the pushforward of the tangent vector to the curve formula_33 at formula_36 is the tangent vector to the curve formula_37 at formula_35
Alternatively, if tangent vectors are defined as derivations acting on smooth real-valued functions, then the differential is given by
formula_38
for an arbitrary function formula_39 and an arbitrary derivation formula_40 at point formula_41 (a derivation is defined as a linear map formula_42 that satisfies the Leibniz rule, see: definition of tangent space via derivations). By definition, the pushforward of formula_43 is in formula_44 and therefore itself is a derivation, formula_45.
After choosing two charts around formula_20 and around formula_46 formula_19 is locally determined by a smooth map formula_47 between open sets of formula_10 and formula_12, and
formula_48
in the Einstein summation notation, where the partial derivatives are evaluated at the point in formula_49 corresponding to formula_20 in the given chart.
Extending by linearity gives the following matrix
formula_50
Thus the differential is a linear transformation, between tangent spaces, associated to the smooth map formula_19 at each point. Therefore, in some chosen local coordinates, it is represented by the Jacobian matrix of the corresponding smooth map from formula_10 to formula_12. In general, the differential need not be invertible. However, if formula_19 is a local diffeomorphism, then formula_27 is invertible, and the inverse gives the pullback of formula_51
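In coordinates the differential is just the Jacobian matrix acting on the components of a tangent vector, which can be checked symbolically. A small sketch using sympy, with an arbitrary illustrative map (not one discussed in the text):

```python
from sympy import symbols, Matrix

x, y = symbols('x y')
phi = Matrix([x**2 + y, x*y])     # a smooth map from R^2 to R^2 (illustrative)
J = phi.jacobian([x, y])          # matrix of partial derivatives d(phi^b)/d(u^a)

p = {x: 1, y: 2}                  # base point
v = Matrix([3, -1])               # tangent vector at p in the coordinate basis

print(J.subs(p) * v)              # the pushforward d(phi)_p(v) = Matrix([[5], [5]])
```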
The differential is frequently expressed using a variety of other notations such as
formula_52
It follows from the definition that the differential of a composite is the composite of the differentials (i.e., functorial behaviour). This is the "chain rule" for smooth maps.
Also, the differential of a local diffeomorphism is a linear isomorphism of tangent spaces.
The differential on the tangent bundle.
The differential of a smooth map formula_1 induces, in an obvious manner, a bundle map (in fact a vector bundle homomorphism) from the tangent bundle of formula_4 to the tangent bundle of formula_5, denoted by formula_53. It commutes with the bundle projections, in the sense that formula_55 ∘ formula_53 = formula_1 ∘ formula_54, where formula_54 and formula_55 denote the bundle projections of the tangent bundles of formula_4 and formula_5 respectively.
formula_56 induces a bundle map from formula_57 to the pullback bundle "φ"∗"TN" over formula_4 via
formula_58
where formula_59 and formula_60 The latter map may in turn be viewed as a section of the vector bundle Hom("TM", "φ"∗"TN") over "M". The bundle map formula_56 is also denoted by formula_61 and called the tangent map. In this way, formula_62 is a functor.
Pushforward of vector fields.
Given a smooth map "φ" : "M" → "N" and a vector field "X" on "M", it is not usually possible to identify a pushforward of "X" by "φ" with some vector field "Y" on "N". For example, if the map "φ" is not surjective, there is no natural way to define such a pushforward outside of the image of "φ". Also, if "φ" is not injective there may be more than one choice of pushforward at a given point. Nevertheless, one can make this difficulty precise, using the notion of a vector field along a map.
A section of "φ"∗"TN" over "M" is called a vector field along "φ". For example, if "M" is a submanifold of "N" and "φ" is the inclusion, then a vector field along "φ" is just a section of the tangent bundle of "N" along "M"; in particular, a vector field on "M" defines such a section via the inclusion of "TM" inside "TN". This idea generalizes to arbitrary smooth maps.
Suppose that "X" is a vector field on "M", i.e., a section of "TM". Then, formula_63 yields, in the above sense, the pushforward "φ"∗"X", which is a vector field along "φ", i.e., a section of "φ"∗"TN" over "M".
Any vector field "Y" on "N" defines a pullback section "φ"∗"Y" of "φ"∗"TN" with ("φ"∗"Y")"x" = "Y""φ"("x"). A vector field "X" on "M" and a vector field "Y" on "N" are said to be "φ"-related if "φ"∗"X" = "φ"∗"Y" as vector fields along "φ". In other words, for all "x" in "M", "dφ""x"("X") = "Y""φ"("x").
In some situations, given a "X" vector field on "M", there is a unique vector field "Y" on "N" which is "φ"-related to "X". This is true in particular when "φ" is a diffeomorphism. In this case, the pushforward defines a vector field "Y" on "N", given by
formula_64
A more general situation arises when "φ" is surjective (for example the bundle projection of a fiber bundle). Then a vector field "X" on "M" is said to be projectable if for all "y" in "N", "dφ""x"("Xx") is independent of the choice of "x" in "φ"−1({"y"}). This is precisely the condition that guarantees that a pushforward of "X", as a vector field on "N", is well defined.
Examples.
Pushforward from multiplication on Lie groups.
Given a Lie group formula_65, we can use the multiplication map formula_66 to get left multiplication formula_67 and right multiplication formula_68 maps formula_69. These maps can be used to construct left or right invariant vector fields on formula_65 from its tangent space at the origin formula_70 (which is its associated Lie algebra). For example, given formula_71 we get an associated vector field formula_72 on formula_65 defined by
formula_73
for every formula_74. This can be readily computed using the curves definition of pushforward maps. If we have a curve
formula_75
where
formula_76
we get
formula_77
since formula_78 is constant with respect to formula_30. This implies we can interpret the tangent spaces formula_79 as formula_80.
Pushforward for some Lie groups.
For example, if formula_65 is the Heisenberg group given by matrices
formula_81
it has Lie algebra given by the set of matrices
formula_82
since we can find a path formula_83 whose derivative at zero takes any prescribed real value in any of the upper matrix entries with formula_84 (i-th row and j-th column). Then, for
formula_85
we have
formula_86
which is equal to the original set of matrices. This is not always the case, for example, in the group
formula_87
we have its Lie algebra as the set of matrices
formula_88
hence for some matrix
formula_89
we have
formula_90
which is not the same set of matrices.
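The last computation can be checked numerically. A small sketch with numpy, using the same "g" and a generic Lie-algebra element with illustrative values "a" = 1, "b" = 2:

```python
import numpy as np

g = np.array([[2.0, 3.0],
              [0.0, 0.5]])

def lie_algebra_element(a, b):
    return np.array([[a, b],
                     [0.0, -a]])

X = lie_algebra_element(1.0, 2.0)
print(g @ X)    # [[2a, 2b - 3a], [0, -a/2]] = [[2, 1], [0, -0.5]]
```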
|
[
{
"math_id": 0,
"text": "\\varphi\\colon M\\to N"
},
{
"math_id": 1,
"text": "\\varphi"
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "\\mathrm d\\varphi_x"
},
{
"math_id": 4,
"text": "M"
},
{
"math_id": 5,
"text": "N"
},
{
"math_id": 6,
"text": "\\varphi(x)"
},
{
"math_id": 7,
"text": "\\mathrm d\\varphi_x\\colon T_xM \\to T_{\\varphi(x)}N"
},
{
"math_id": 8,
"text": "\\varphi: U \\to V"
},
{
"math_id": 9,
"text": "U"
},
{
"math_id": 10,
"text": "\\R^m"
},
{
"math_id": 11,
"text": "V"
},
{
"math_id": 12,
"text": "\\R^n"
},
{
"math_id": 13,
"text": "d\\varphi_x:T_x\\R^m\\to T_{\\varphi(x)}\\R^n"
},
{
"math_id": 14,
"text": "T_x\\R^m,T_{\\varphi(x)}\\R^n"
},
{
"math_id": 15,
"text": "\\mathbb{R}^m"
},
{
"math_id": 16,
"text": "\\mathbb{R}^n"
},
{
"math_id": 17,
"text": "\\varphi \\colon M \\to N "
},
{
"math_id": 18,
"text": " x \\in M, "
},
{
"math_id": 19,
"text": " \\varphi "
},
{
"math_id": 20,
"text": " x "
},
{
"math_id": 21,
"text": "d\\varphi_x \\colon\\ T_xM\\to T_{\\varphi(x)}N\\,"
},
{
"math_id": 22,
"text": " M "
},
{
"math_id": 23,
"text": " N "
},
{
"math_id": 24,
"text": " \\varphi(x). "
},
{
"math_id": 25,
"text": " d\\varphi_x X "
},
{
"math_id": 26,
"text": " X \\in T_x M "
},
{
"math_id": 27,
"text": " d\\varphi_x "
},
{
"math_id": 28,
"text": " X "
},
{
"math_id": 29,
"text": " \\varphi. "
},
{
"math_id": 30,
"text": "\\gamma"
},
{
"math_id": 31,
"text": " \\gamma(0) = x, "
},
{
"math_id": 32,
"text": "d\\varphi_x(\\gamma'(0)) = (\\varphi \\circ \\gamma)'(0)."
},
{
"math_id": 33,
"text": " \\gamma "
},
{
"math_id": 34,
"text": "\\gamma'(0)"
},
{
"math_id": 35,
"text": " 0. "
},
{
"math_id": 36,
"text": " 0 "
},
{
"math_id": 37,
"text": "\\varphi \\circ \\gamma"
},
{
"math_id": 38,
"text": "d\\varphi_x(X)(f) = X(f \\circ \\varphi),"
},
{
"math_id": 39,
"text": "f \\in C^\\infty(N)"
},
{
"math_id": 40,
"text": "X \\in T_xM"
},
{
"math_id": 41,
"text": "x \\in M"
},
{
"math_id": 42,
"text": "X \\colon C^\\infty(M) \\to \\R"
},
{
"math_id": 43,
"text": "X"
},
{
"math_id": 44,
"text": "T_{\\varphi(x)}N"
},
{
"math_id": 45,
"text": "d\\varphi_x(X) \\colon C^\\infty(N) \\to \\R"
},
{
"math_id": 46,
"text": " \\varphi(x), "
},
{
"math_id": 47,
"text": "\\widehat{\\varphi} \\colon U \\to V"
},
{
"math_id": 48,
"text": "d\\varphi_x\\left(\\frac{\\partial}{\\partial u^a}\\right) = \\frac{\\partial{\\widehat{\\varphi}}^b}{\\partial u^a} \\frac{\\partial}{\\partial v^b},"
},
{
"math_id": 49,
"text": " U "
},
{
"math_id": 50,
"text": "\\left(d\\varphi_x\\right)_a^{\\;b} = \\frac{\\partial{\\widehat{\\varphi}}^b}{\\partial u^a}."
},
{
"math_id": 51,
"text": " T_{\\varphi(x)} N."
},
{
"math_id": 52,
"text": "D\\varphi_x,\\left(\\varphi_*\\right)_x, \\varphi'(x),T_x\\varphi."
},
{
"math_id": 53,
"text": "d\\varphi"
},
{
"math_id": 54,
"text": "\\pi_M"
},
{
"math_id": 55,
"text": "\\pi_N"
},
{
"math_id": 56,
"text": "\\operatorname{d}\\!\\varphi"
},
{
"math_id": 57,
"text": "TM"
},
{
"math_id": 58,
"text": "(m,v_m) \\mapsto (\\varphi(m),\\operatorname{d}\\!\\varphi (m,v_m)),"
},
{
"math_id": 59,
"text": "m \\in M"
},
{
"math_id": 60,
"text": "v_m \\in T_mM."
},
{
"math_id": 61,
"text": "T\\varphi"
},
{
"math_id": 62,
"text": "T"
},
{
"math_id": 63,
"text": "\\operatorname{d}\\!\\phi \\circ X"
},
{
"math_id": 64,
"text": "Y_y = \\phi_*\\left(X_{\\phi^{-1}(y)}\\right)."
},
{
"math_id": 65,
"text": "G"
},
{
"math_id": 66,
"text": "m(-,-) : G\\times G \\to G"
},
{
"math_id": 67,
"text": "L_g = m(g,-)"
},
{
"math_id": 68,
"text": "R_g = m(-,g)"
},
{
"math_id": 69,
"text": "G \\to G"
},
{
"math_id": 70,
"text": "\\mathfrak{g} = T_e G"
},
{
"math_id": 71,
"text": "X \\in \\mathfrak{g}"
},
{
"math_id": 72,
"text": "\\mathfrak{X}"
},
{
"math_id": 73,
"text": "\\mathfrak{X}_g = (L_g)_*(X) \\in T_g G"
},
{
"math_id": 74,
"text": "g \\in G"
},
{
"math_id": 75,
"text": "\\gamma: (-1,1) \\to G"
},
{
"math_id": 76,
"text": "\\gamma(0) = e \\, , \\quad \\gamma'(0) = X"
},
{
"math_id": 77,
"text": "\\begin{align}\n(L_g)_*(X) &= (L_g\\circ \\gamma)'(0) \\\\\n&= (g\\cdot \\gamma(t))'(0) \\\\\n&= \\frac{dg}{d\\gamma}\\gamma(0) + g\\cdot \\frac{d\\gamma}{dt} (0) \\\\\n&= g \\cdot \\gamma'(0)\n\\end{align}"
},
{
"math_id": 78,
"text": "L_g"
},
{
"math_id": 79,
"text": "T_g G"
},
{
"math_id": 80,
"text": "T_g G = g\\cdot T_e G = g\\cdot \\mathfrak{g}"
},
{
"math_id": 81,
"text": "H = \\left\\{\n\\begin{bmatrix}\n1 & a & b \\\\\n0 & 1 & c \\\\\n0 & 0 & 1\n\\end{bmatrix} : a,b,c \\in \\mathbb{R}\n\\right\\}"
},
{
"math_id": 82,
"text": "\\mathfrak{h} = \\left\\{\n\\begin{bmatrix}\n0 & a & b \\\\\n0 & 0 & c \\\\\n0 & 0 & 0\n\\end{bmatrix} : a,b,c \\in \\mathbb{R}\n\\right\\}"
},
{
"math_id": 83,
"text": "\\gamma:(-1,1) \\to H"
},
{
"math_id": 84,
"text": "i < j"
},
{
"math_id": 85,
"text": "g = \\begin{bmatrix}\n1 & 2 & 3 \\\\\n0 & 1 & 4 \\\\\n0 & 0 & 1\n\\end{bmatrix}"
},
{
"math_id": 86,
"text": "T_gH = g\\cdot \\mathfrak{h} = \n\\left\\{\n\\begin{bmatrix}\n0 & a & b + 2c \\\\\n0 & 0 & c \\\\\n0 & 0 & 0\n\\end{bmatrix} : a,b,c \\in \\mathbb{R}\n\\right\\}"
},
{
"math_id": 87,
"text": "G = \\left\\{\n\\begin{bmatrix}\na & b \\\\\n0 & 1/a\n\\end{bmatrix} : a,b \\in \\mathbb{R}, a \\neq 0\n\\right\\}"
},
{
"math_id": 88,
"text": "\\mathfrak{g} = \\left\\{\n\\begin{bmatrix}\na & b \\\\\n0 & -a\n\\end{bmatrix} : a,b \\in \\mathbb{R}\n\\right\\}"
},
{
"math_id": 89,
"text": "g = \\begin{bmatrix}\n2 & 3 \\\\\n0 & 1/2\n\\end{bmatrix}"
},
{
"math_id": 90,
"text": "T_gG = \\left\\{\n\\begin{bmatrix}\n2a & 2b - 3a \\\\\n0 & -a/2\n\\end{bmatrix} : a,b\\in \\mathbb{R}\n\\right\\}"
}
] |
https://en.wikipedia.org/wiki?curid=1558510
|
15585516
|
Matrix (chemical analysis)
|
Components of a chemical sample other than the substance of interest
In chemical analysis, matrix refers to the components of a sample other than the analyte of interest. The matrix can have a considerable effect on the way the analysis is conducted and on the quality of the results obtained; such effects are called matrix effects. For example, the ionic strength of the solution can have an effect on the activity coefficients of the analytes. The most common approach for accounting for matrix effects is to build a calibration curve using standard samples of known analyte concentration that approximate the matrix of the sample as closely as possible. This is especially important for solid samples where there is a strong matrix influence. In cases with complex or unknown matrices, the standard addition method can be used. In this technique, the response of the sample is measured and recorded, for example, using an electrode selective for the analyte. Then, a small volume of standard solution is added and the response is measured again. Ideally, the standard addition should increase the analyte concentration by a factor of 1.5 to 3, and several additions should be averaged. The volume of standard solution should be small enough to disturb the matrix as little as possible.
Matrix effect.
Matrix enhancement and suppression are frequently observed in modern analytical routines, such as GC, HPLC, and ICP.
The matrix effect is quantified using the following formula:
formula_0
where
A(extract) is the peak area of analyte, when diluted with matrix extract.
A(standard) is the peak area of analyte in the absence of matrix.
The concentration of analyte in both standards should be the same. A matrix effect value close to 100 indicates absence of matrix influence. A matrix effect value of less than 100 indicates suppression, while a value larger than 100 is a sign of matrix enhancement.
An alternative definition of matrix effect utilizes the formula:
formula_1
The advantages of this definition are that negative values indicate suppression, while positive values are a sign of matrix enhancement. Ideally, a value of 0 corresponds to the absence of a matrix effect.
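As a small illustration of the two definitions, the following sketch computes both values from a pair of peak areas; the function names and numbers are made up for the example.

```python
def matrix_effect(area_extract, area_standard):
    """Matrix effect as a percentage of the standard response (first definition)."""
    return 100.0 * area_extract / area_standard

def matrix_effect_signed(area_extract, area_standard):
    """Signed matrix effect: negative = suppression, positive = enhancement."""
    return 100.0 * area_extract / area_standard - 100.0

# Hypothetical peak areas for an analyte in matrix extract and in pure solvent.
print(matrix_effect(8.2e5, 1.0e6))         # 82.0  -> suppression
print(matrix_effect_signed(8.2e5, 1.0e6))  # -18.0 -> suppression
```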
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "ME = 100 \\left ( \\frac{A (extract)}{A (standard)} \\right ) "
},
{
"math_id": 1,
"text": "ME = 100 \\left ( \\frac{A (extract)}{A (standard)} \\right ) - 100 "
}
] |
https://en.wikipedia.org/wiki?curid=15585516
|
15585793
|
Gauss's lemma (Riemannian geometry)
|
Theorem in manifold theory
In Riemannian geometry, Gauss's lemma asserts that any sufficiently small sphere centered at a point in a Riemannian manifold is perpendicular to every geodesic through the point. More formally, let "M" be a Riemannian manifold, equipped with its Levi-Civita connection, and "p" a point of "M". The exponential map is a mapping from the tangent space at "p" to "M":
formula_0
which is a diffeomorphism in a neighborhood of zero. Gauss' lemma asserts that the image of a sphere of sufficiently small radius in "T"p"M" under the exponential map is perpendicular to all geodesics originating at "p". The lemma allows the exponential map to be understood as a radial isometry, and is of fundamental importance in the study of geodesic convexity and normal coordinates.
Introduction.
We define the exponential map at formula_1 by
formula_2
where formula_3 is the unique geodesic with formula_4 and initial tangent vector formula_5, and formula_6 is chosen small enough so that for every formula_7 the geodesic formula_8 is defined. So, if formula_9 is complete, then, by the Hopf–Rinow theorem, formula_10 is defined on the whole tangent space.
Let formula_11 be a differentiable curve in formula_12 such that formula_13 and formula_14. Since formula_15, it is clear that we can choose formula_16. In this case, by the definition of the differential of the exponential at formula_17 applied to formula_18, we obtain:
formula_19
So, with the natural identification formula_20, the differential of formula_21 at the origin is the identity. By the inverse function theorem, formula_21 is a diffeomorphism on a neighborhood of formula_22. Gauss's lemma now asserts that formula_21 is also a radial isometry.
The exponential map is a radial isometry.
Let formula_1. In what follows, we make the identification formula_23.
Gauss's Lemma states:
Let formula_24 and formula_25. Then,
formula_26
For formula_1, this lemma means that formula_21 is a radial isometry in the following sense: let formula_27, i.e. such that formula_21 is well defined, and let formula_28. Then the exponential formula_21 remains an isometry in formula_29, and, more generally, all along the geodesic formula_30 (in so far as formula_31 is well defined). Thus, radially, in all the directions permitted by the domain of definition of formula_21, it remains an isometry.
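The radial isometry property can be checked numerically on a concrete example. On the unit 2-sphere in ℝ³ the exponential map at a point p has the closed form exp_p(v) = cos(|v|) p + sin(|v|) v/|v|, and the sketch below compares the inner product of the pushed-forward vectors with the original inner product using finite differences. This is only an illustrative check under those assumptions, not part of the proof.

```python
import numpy as np

def exp_sphere(p, v):
    """Exponential map of the unit sphere at p (closed form for this example)."""
    t = np.linalg.norm(v)
    if t < 1e-14:
        return p
    return np.cos(t) * p + np.sin(t) * v / t

def d_exp(p, v, w, h=1e-6):
    """Finite-difference approximation of the differential of exp_p at v in direction w."""
    return (exp_sphere(p, v + h * w) - exp_sphere(p, v - h * w)) / (2 * h)

p = np.array([0.0, 0.0, 1.0])
v = 0.7 * np.array([1.0, 0.0, 0.0])   # tangent vector at p (orthogonal to p)
w = np.array([0.3, 0.5, 0.0])         # another tangent vector at p

lhs = np.dot(d_exp(p, v, v), d_exp(p, v, w))
print(lhs, np.dot(v, w))              # the two numbers agree to finite-difference accuracy
```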
Proof.
Recall that
formula_32
We proceed in three steps. First, we show that formula_33. Consider a curve formula_34 such that formula_35 and formula_36. Since formula_23, we can put formula_37.
Therefore,
formula_38
where formula_39 is the parallel transport operator and formula_40. The last equality holds because formula_30 is a geodesic, so formula_41 is parallel.
Now let us calculate the scalar product formula_42.
We separate formula_43 into a component formula_44 parallel to formula_18 and a component formula_45 normal to formula_18. In particular, we write formula_46 with formula_47.
The preceding step implies directly:
formula_48
formula_49
It therefore remains to show that the second term vanishes, because, according to Gauss's lemma, we must have:
formula_50
To show that formula_51, let us define the curve
formula_52
Note that
formula_53
Let us put:
formula_54
and we calculate:
formula_55
and
formula_56
Hence
formula_57
We can now verify that this scalar product is actually independent of the variable formula_58, and therefore that, for example:
formula_59
because, by what was shown above:
formula_60
given that the differential is a linear map. This will therefore prove the lemma.
It remains to verify that formula_61. We compute directly:
formula_63
Since the maps formula_62 are geodesics,
the function formula_64 is constant. Thus,
formula_65
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathrm{exp} : T_pM \\to M"
},
{
"math_id": 1,
"text": "p\\in M"
},
{
"math_id": 2,
"text": "\n\\exp_p: T_pM\\supset B_{\\epsilon}(0) \\longrightarrow M,\\quad vt \\longmapsto \\gamma_{p,v}(t),\n"
},
{
"math_id": 3,
"text": "\\gamma_{p,v}"
},
{
"math_id": 4,
"text": "\\gamma_{p,v}(0)=p"
},
{
"math_id": 5,
"text": "\\gamma_{p,v}'(0)=v \\in T_pM"
},
{
"math_id": 6,
"text": "\\epsilon"
},
{
"math_id": 7,
"text": " t \\in [0, 1], vt \\in B_{\\epsilon}(0) \\subset T_pM "
},
{
"math_id": 8,
"text": "\\gamma_{p,v}(t)"
},
{
"math_id": 9,
"text": "M"
},
{
"math_id": 10,
"text": " \\exp_p"
},
{
"math_id": 11,
"text": "\\alpha : I\\rightarrow T_pM"
},
{
"math_id": 12,
"text": "T_pM"
},
{
"math_id": 13,
"text": "\\alpha(0):=0"
},
{
"math_id": 14,
"text": "\\alpha'(0):=v"
},
{
"math_id": 15,
"text": "T_pM\\cong \\mathbb R^n"
},
{
"math_id": 16,
"text": "\\alpha(t):=vt"
},
{
"math_id": 17,
"text": "0"
},
{
"math_id": 18,
"text": "v"
},
{
"math_id": 19,
"text": "\nT_0\\exp_p(v) = \\frac{\\mathrm d}{\\mathrm d t} \\Bigl(\\exp_p\\circ\\alpha(t)\\Bigr)\\Big\\vert_{t=0} = \\frac{\\mathrm d}{\\mathrm d t} \\Bigl(\\exp_p(vt)\\Bigr)\\Big\\vert_{t=0}=\\frac{\\mathrm d}{\\mathrm d t} \\Bigl(\\gamma_{p, v}(t)\\Bigr)\\Big\\vert_{t=0}= \\gamma_{p, v}'(0)=v.\n"
},
{
"math_id": 20,
"text": "T_0 T_p M \\cong T_pM"
},
{
"math_id": 21,
"text": "\\exp_p"
},
{
"math_id": 22,
"text": "0 \\in T_pM"
},
{
"math_id": 23,
"text": "T_vT_pM\\cong T_pM\\cong \\mathbb R^n"
},
{
"math_id": 24,
"text": "v,w\\in B_\\epsilon(0)\\subset T_vT_pM\\cong T_pM"
},
{
"math_id": 25,
"text": "M\\ni q:=\\exp_p(v)"
},
{
"math_id": 26,
"text": "\n\\langle T_v\\exp_p(v), T_v\\exp_p(w)\\rangle_q = \\langle v,w\\rangle_p.\n"
},
{
"math_id": 27,
"text": "v\\in B_\\epsilon(0)"
},
{
"math_id": 28,
"text": "q:=\\exp_p(v)\\in M"
},
{
"math_id": 29,
"text": "q"
},
{
"math_id": 30,
"text": "\\gamma"
},
{
"math_id": 31,
"text": "\\gamma_{p, v}(1)=\\exp_p(v)"
},
{
"math_id": 32,
"text": "\nT_v\\exp_p \\colon T_pM\\cong T_vT_pM\\supset T_vB_\\epsilon(0)\\longrightarrow T_{\\exp_p(v)}M.\n"
},
{
"math_id": 33,
"text": "T_v\\exp_p(v)=v"
},
{
"math_id": 34,
"text": "\\alpha : \\mathbb R \\supset I \\rightarrow T_pM"
},
{
"math_id": 35,
"text": "\\alpha(0):=v\\in T_pM"
},
{
"math_id": 36,
"text": "\\alpha'(0):=v\\in T_vT_pM\\cong T_pM"
},
{
"math_id": 37,
"text": "\\alpha(t):=v(t+1)"
},
{
"math_id": 38,
"text": "\nT_v\\exp_p(v) = \\frac{\\mathrm d}{\\mathrm d t}\\Bigl(\\exp_p\\circ\\alpha(t)\\Bigr)\\Big\\vert_{t=0}=\\frac{\\mathrm d}{\\mathrm d t}\\Bigl(\\exp_p(tv)\\Bigr)\\Big\\vert_{t=1}=\\Gamma(\\gamma)_p^{\\exp_p(v)}v=v,\n"
},
{
"math_id": 39,
"text": "\\Gamma"
},
{
"math_id": 40,
"text": "\\gamma(t)=\\exp_p(tv)"
},
{
"math_id": 41,
"text": "\\gamma'"
},
{
"math_id": 42,
"text": "\\langle T_v\\exp_p(v), T_v\\exp_p(w)\\rangle"
},
{
"math_id": 43,
"text": "w"
},
{
"math_id": 44,
"text": "w_T"
},
{
"math_id": 45,
"text": "w_N"
},
{
"math_id": 46,
"text": "w_T:=a v"
},
{
"math_id": 47,
"text": "a \\in \\mathbb R"
},
{
"math_id": 48,
"text": "\n\\langle T_v\\exp_p(v), T_v\\exp_p(w)\\rangle = \\langle T_v\\exp_p(v), T_v\\exp_p(w_T)\\rangle + \\langle T_v\\exp_p(v), T_v\\exp_p(w_N)\\rangle"
},
{
"math_id": 49,
"text": "=a \\langle T_v\\exp_p(v), T_v\\exp_p(v)\\rangle + \\langle T_v\\exp_p(v), T_v\\exp_p(w_N)\\rangle=\\langle v, w_T\\rangle + \\langle T_v\\exp_p(v), T_v\\exp_p(w_N)\\rangle.\n"
},
{
"math_id": 50,
"text": "\\langle T_v\\exp_p(v), T_v\\exp_p(w_N)\\rangle = \\langle v, w_N\\rangle = 0."
},
{
"math_id": 51,
"text": "\\langle T_v\\exp_p(v), T_v\\exp_p(w_N)\\rangle = 0"
},
{
"math_id": 52,
"text": "\n\\alpha \\colon [-\\epsilon, \\epsilon]\\times [0,1] \\longrightarrow T_pM,\\qquad (s,t) \\longmapsto tv+tsw_N.\n"
},
{
"math_id": 53,
"text": "\n\\alpha(0,1) = v,\\qquad\n\\frac{\\partial \\alpha}{\\partial t}(s,t) = v+sw_N,\n\\qquad\\frac{\\partial \\alpha}{\\partial s}(0,t) = tw_N.\n"
},
{
"math_id": 54,
"text": "\nf \\colon [-\\epsilon, \\epsilon ]\\times [0,1] \\longrightarrow M,\\qquad (s,t)\\longmapsto \\exp_p(tv+tsw_N),\n"
},
{
"math_id": 55,
"text": "\nT_v\\exp_p(v)=T_{\\alpha(0,1)}\\exp_p\\left(\\frac{\\partial \\alpha}{\\partial t}(0,1)\\right)=\\frac{\\partial}{\\partial t}\\Bigl(\\exp_p\\circ\\alpha(s,t)\\Bigr)\\Big\\vert_{t=1, s=0}=\\frac{\\partial f}{\\partial t}(0,1)\n"
},
{
"math_id": 56,
"text": "\nT_v\\exp_p(w_N)=T_{\\alpha(0,1)}\\exp_p\\left(\\frac{\\partial \\alpha}{\\partial s}(0,1)\\right)=\\frac{\\partial}{\\partial s}\\Bigl(\\exp_p\\circ\\alpha(s,t)\\Bigr)\\Big\\vert_{t=1,s=0}=\\frac{\\partial f}{\\partial s}(0,1).\n"
},
{
"math_id": 57,
"text": "\n\\langle T_v\\exp_p(v), T_v\\exp_p(w_N)\\rangle = \\left\\langle \\frac{\\partial f}{\\partial t},\\frac{\\partial f}{\\partial s}\\right\\rangle(0,1).\n"
},
{
"math_id": 58,
"text": "t"
},
{
"math_id": 59,
"text": "\n\\left\\langle\\frac{\\partial f}{\\partial t},\\frac{\\partial f}{\\partial s}\\right\\rangle(0,1) = \\left\\langle\\frac{\\partial f}{\\partial t},\\frac{\\partial f}{\\partial s}\\right\\rangle(0,0) = 0,\n"
},
{
"math_id": 60,
"text": "\n\\lim_{t\\rightarrow 0}\\frac{\\partial f}{\\partial s}(0,t) = \\lim_{t\\rightarrow 0}T_{tv}\\exp_p(tw_N) = 0\n"
},
{
"math_id": 61,
"text": "\\frac{\\partial}{\\partial t}\\left\\langle \\frac{\\partial f}{\\partial t},\\frac{\\partial f}{\\partial s}\\right\\rangle=0"
},
{
"math_id": 62,
"text": "t\\mapsto f(s,t)"
},
{
"math_id": 63,
"text": "\n\\frac{\\partial}{\\partial t}\\left\\langle \\frac{\\partial f}{\\partial t},\\frac{\\partial f}{\\partial s}\\right\\rangle=\\left\\langle\\underbrace{\\frac{D}{\\partial t}\\frac{\\partial f}{\\partial t}}_{=0}, \\frac{\\partial f}{\\partial s}\\right\\rangle + \\left\\langle\\frac{\\partial f}{\\partial t},\\frac{D}{\\partial t}\\frac{\\partial f}{\\partial s}\\right\\rangle = \\left\\langle\\frac{\\partial f}{\\partial t},\\frac{D}{\\partial s}\\frac{\\partial f}{\\partial t}\\right\\rangle=\\frac12\\frac{\\partial }{\\partial s}\\left\\langle \\frac{\\partial f}{\\partial t}, \\frac{\\partial f}{\\partial t}\\right\\rangle.\n"
},
{
"math_id": 64,
"text": "t\\mapsto\\left\\langle\\frac{\\partial f}{\\partial t},\\frac{\\partial f}{\\partial t}\\right\\rangle"
},
{
"math_id": 65,
"text": "\n\\frac{\\partial }{\\partial s}\\left\\langle \\frac{\\partial f}{\\partial t}, \\frac{\\partial f}{\\partial t}\\right\\rangle\n=\\frac{\\partial }{\\partial s}\\left\\langle v+sw_N,v+sw_N\\right\\rangle\n=2\\left\\langle v,w_N\\right\\rangle=0.\n"
}
] |
https://en.wikipedia.org/wiki?curid=15585793
|
15585953
|
Itakura–Saito distance
|
The Itakura–Saito distance (or Itakura–Saito divergence) is a measure of the difference between an original spectrum formula_0 and an approximation formula_1 of that spectrum. Although it is not a perceptual measure, it is intended to reflect perceptual (dis)similarity. It was proposed by Fumitada Itakura and Shuzo Saito in the 1960s while they were with NTT.
The distance is defined as:
formula_2
The Itakura–Saito distance is a Bregman divergence generated by minus the logarithmic function, but it is not a true metric since it is not symmetric and does not fulfil the triangle inequality.
In non-negative matrix factorization, the Itakura–Saito divergence can be used as a measure of the quality of the factorization: this choice corresponds to a meaningful statistical model of the components, and the factorization can be computed by an iterative method.
The Itakura–Saito distance is the Bregman divergence associated with the gamma exponential family, where the information divergence of one distribution in the family from another element in the family is given by the Itakura–Saito divergence of the mean value of the first distribution from the mean value of the second distribution.
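For sampled spectra the integral is usually replaced by a sum over frequency bins. The following sketch computes that discrete version as a bin average; the example arrays and the function name are arbitrary.

```python
import numpy as np

def itakura_saito(p, p_hat):
    """Discrete Itakura–Saito divergence between two sampled power spectra (bin average)."""
    r = np.asarray(p, dtype=float) / np.asarray(p_hat, dtype=float)
    return np.mean(r - np.log(r) - 1.0)

p     = np.array([1.0, 2.0, 0.5, 1.5])   # "original" spectrum samples
p_hat = np.array([1.1, 1.8, 0.6, 1.4])   # approximation
print(itakura_saito(p, p_hat))           # 0 only when the two spectra coincide
```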
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "P(\\omega)"
},
{
"math_id": 1,
"text": "\\hat{P}(\\omega)"
},
{
"math_id": 2,
"text": "D_{IS}(P(\\omega),\\hat{P}(\\omega))=\\frac{1}{2\\pi}\\int_{-\\pi}^{\\pi} \\left[ \\frac{P(\\omega)}{\\hat{P}(\\omega)}-\\log \\frac{P(\\omega)}{\\hat{P}(\\omega)} - 1 \\right] \\, d\\omega"
}
] |
https://en.wikipedia.org/wiki?curid=15585953
|
1558844
|
Fast wavelet transform
|
Algorithm
The fast wavelet transform is a mathematical algorithm designed to turn a waveform or signal in the time domain into a sequence of coefficients based on an orthogonal basis of small finite waves, or wavelets. The transform can be easily extended to multidimensional signals, such as images, where the time domain is replaced with the space domain. This algorithm was introduced in 1989 by Stéphane Mallat.
It has as theoretical foundation the device of a finitely generated, orthogonal multiresolution analysis (MRA). In the terms given there, one selects a sampling scale "J" with sampling rate of 2"J" per unit interval, and projects the given signal "f" onto the space formula_0; in theory by computing the scalar products
formula_1
where formula_2 is the scaling function of the chosen wavelet transform; in practice by any suitable sampling procedure under the condition that the signal is highly oversampled, so
formula_3
is the orthogonal projection or at least some good approximation of the original signal in formula_0.
The MRA is characterised by its scaling sequence
formula_4 or, as Z-transform, formula_5
and its wavelet sequence
formula_6 or formula_7
(some coefficients might be zero). These allow one to compute the wavelet coefficients formula_8, at least over some range "k" = "M",...,"J" − 1, without having to approximate the integrals in the corresponding scalar products. Instead, those coefficients can be computed directly from the first approximation formula_9 with the help of convolution and decimation operators.
Forward DWT.
For the discrete wavelet transform (DWT),
one computes recursively, starting with the coefficient sequence formula_9 and counting down from "k = J - 1" to some "M < J",
formula_10 or formula_11
and
formula_12 or formula_13,
for "k=J-1,J-2...,M" and all formula_14. In the Z-transform notation:
* The downsampling operator formula_15 reduces an infinite sequence, given by its Z-transform, which is simply a Laurent series, to the sequence of the coefficients with even indices, formula_16.
* The starred Laurent polynomial formula_17 denotes the adjoint filter; it has "time-reversed" adjoint coefficients, formula_18. (The adjoint of a real number is the number itself, of a complex number its conjugate, of a real matrix the transposed matrix, and of a complex matrix its Hermitian adjoint.)
* Multiplication is polynomial multiplication, which is equivalent to the convolution of the coefficient sequences.
It follows that
formula_19
is the orthogonal projection of the original signal "f", or at least of the first approximation formula_20, onto the subspace formula_21, that is, with sampling rate of 2"k" per unit interval.
formula_22
where the difference or detail signals are computed from the detail coefficients as
formula_23
with formula_24 denoting the "mother wavelet" of the wavelet transform.
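One analysis step of the recursion above amounts to a correlation with the two filters followed by decimation by 2. The sketch below implements formula_10 and formula_12 directly for a filter indexed from −N to N; the periodic extension of the signal at the boundaries and the function name are assumptions of the example, not part of the text.

```python
import numpy as np

def dwt_step(s, a, b):
    """One analysis step: s^(k)_n = 1/2 * sum_m a_m s^(k+1)_(2n+m), likewise d^(k) with b.

    s holds the finer approximation sequence (assumed periodic); a and b hold the
    filter coefficients a_(-N), ..., a_N and b_(-N), ..., b_N.
    """
    N = (len(a) - 1) // 2
    L = len(s) // 2
    s_coarse = np.zeros(L)
    d_detail = np.zeros(L)
    for n in range(L):
        for m in range(-N, N + 1):
            s_coarse[n] += 0.5 * a[m + N] * s[(2 * n + m) % len(s)]
            d_detail[n] += 0.5 * b[m + N] * s[(2 * n + m) % len(s)]
    return s_coarse, d_detail
```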
Inverse DWT.
Given the coefficient sequence formula_25 for some "M" < "J" and all the difference sequences formula_26, "k" = "M"...,"J" − 1, one computes recursively
formula_27 or formula_28
for "k" = "J" − 1,"J" − 2...,"M" and all formula_14. In the Z-transform notation:
* The upsampling operator formula_29 creates zero-filled holes inside a given sequence. That is, every second element of the resulting sequence is an element of the given sequence and the remaining elements are zero; in formulas, formula_30. This linear operator is, in the Hilbert space formula_31, the adjoint of the downsampling operator formula_15.
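In the same spirit, one synthesis step formula_28 can be sketched as upsampling followed by convolution with the two filters and summation. As before, the periodic boundary handling and the filter indexing are assumptions of the example, and whether this step exactly inverts the analysis step depends on the chosen filter normalization.

```python
import numpy as np

def idwt_step(s, d, a, b):
    """One synthesis step: s^(k+1)(z) = a(z)*(up 2)s^(k)(z) + b(z)*(up 2)d^(k)(z)."""
    N = (len(a) - 1) // 2
    up_s = np.zeros(2 * len(s)); up_s[::2] = s   # upsampling: zeros in the odd positions
    up_d = np.zeros(2 * len(d)); up_d[::2] = d
    L = len(up_s)
    out = np.zeros(L)
    for n in range(L):
        for m in range(-N, N + 1):
            out[n] += a[m + N] * up_s[(n - m) % L] + b[m + N] * up_d[(n - m) % L]
    return out
```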
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
G. Beylkin, R. Coifman, V. Rokhlin, "Fast wavelet transforms and numerical algorithms", "Comm. Pure Appl. Math.", 44 (1991) pp. 141–183. (This article has been cited over 2400 times.)
|
[
{
"math_id": 0,
"text": "V_J"
},
{
"math_id": 1,
"text": "s^{(J)}_n:=2^J \\langle f(t),\\varphi(2^J t-n) \\rangle,"
},
{
"math_id": 2,
"text": "\\varphi"
},
{
"math_id": 3,
"text": " P_J[f](x):=\\sum_{n\\in\\Z} s^{(J)}_n\\,\\varphi(2^Jx-n)"
},
{
"math_id": 4,
"text": "a=(a_{-N},\\dots,a_0,\\dots,a_N)"
},
{
"math_id": 5,
"text": "a(z)=\\sum_{n=-N}^Na_nz^{-n}"
},
{
"math_id": 6,
"text": "b=(b_{-N},\\dots,b_0,\\dots,b_N)"
},
{
"math_id": 7,
"text": "b(z)=\\sum_{n=-N}^Nb_nz^{-n}"
},
{
"math_id": 8,
"text": "d^{(k)}_n"
},
{
"math_id": 9,
"text": "s^{(J)}"
},
{
"math_id": 10,
"text": "\ns^{(k)}_n:=\\frac12 \\sum_{m=-N}^N a_m s^{(k+1)}_{2n+m}\n"
},
{
"math_id": 11,
"text": "\ns^{(k)}(z):=(\\downarrow 2)(a^*(z)\\cdot s^{(k+1)}(z))\n"
},
{
"math_id": 12,
"text": "\nd^{(k)}_n:=\\frac12 \\sum_{m=-N}^N b_m s^{(k+1)}_{2n+m}\n"
},
{
"math_id": 13,
"text": "\nd^{(k)}(z):=(\\downarrow 2)(b^*(z)\\cdot s^{(k+1)}(z))\n"
},
{
"math_id": 14,
"text": "n\\in\\Z"
},
{
"math_id": 15,
"text": "(\\downarrow 2)"
},
{
"math_id": 16,
"text": "(\\downarrow 2)(c(z))=\\sum_{k\\in\\Z}c_{2k}z^{-k}"
},
{
"math_id": 17,
"text": "a^*(z)"
},
{
"math_id": 18,
"text": "a^*(z)=\\sum_{n=-N}^N a_{-n}^*z^{-n}"
},
{
"math_id": 19,
"text": "P_k[f](x):=\\sum_{n\\in\\Z} s^{(k)}_n\\,\\varphi(2^kx-n)"
},
{
"math_id": 20,
"text": "P_J[f](x)"
},
{
"math_id": 21,
"text": "V_k"
},
{
"math_id": 22,
"text": "P_J[f](x)=P_k[f](x)+D_k[f](x)+\\dots+D_{J-1}[f](x), "
},
{
"math_id": 23,
"text": "D_k[f](x):=\\sum_{n\\in\\Z} d^{(k)}_n\\,\\psi(2^kx-n), "
},
{
"math_id": 24,
"text": "\\psi"
},
{
"math_id": 25,
"text": "s^{(M)}"
},
{
"math_id": 26,
"text": "d^{(k)}"
},
{
"math_id": 27,
"text": "\ns^{(k+1)}_n:=\\sum_{k=-N}^N a_k s^{(k)}_{2n-k}+\\sum_{k=-N}^N b_k d^{(k)}_{2n-k}\n"
},
{
"math_id": 28,
"text": "\ns^{(k+1)}(z)=a(z)\\cdot(\\uparrow 2)(s^{(k)}(z))+b(z)\\cdot(\\uparrow 2)(d^{(k)}(z))\n"
},
{
"math_id": 29,
"text": "(\\uparrow 2)"
},
{
"math_id": 30,
"text": "(\\uparrow 2)(c(z)):=\\sum_{n\\in\\Z}c_nz^{-2n}"
},
{
"math_id": 31,
"text": "\\ell^2(\\Z,\\R)"
}
] |
https://en.wikipedia.org/wiki?curid=1558844
|
15593113
|
Barber–Layden–Power effect
|
The Barber–Layden–Power effect ("BLP effect" or colloquially "Bleep") is a blast wave phenomenon observed in the immediate aftermath of the successful functioning of air-delivered high-drag ordnance at the target. In common with a typical blast wave, the flow field can be approximated as a lead shock wave, followed by a 'self-similar' subsonic flow field. The phenomenon appears to adhere to the basic principles of the Sedov solution.
History.
The phenomenon is so named after the lead researchers from a joint team drawn from NASA Ames Research Center, the Field Artillery Training Center at Fort Sill, Oklahoma and instructors from the USAF Air Weapons School at Nellis AFB in response to a formal request for assistance from United States Central Command, MacDill AFB, Tampa, Florida, framed following events during Operation Anaconda. Instructors from the Royal School of Artillery's Gunnery Training Team also assisted.
Application.
The effect is caused by extremely localised fluctuations in surface pressure and humidity, which cause the initial shock wave to distort momentarily and refocus on itself, leading to a double shock wave, each of markedly reduced effect. This has distinct utility in the employment of air delivered ordnance close to key urban structures as part of an ongoing influence campaign.
The energy of the blast is so great that the pressure and temperature of the gas outside the shock front are negligible compared to the pressure and temperature inside. This substantially reduces the number of parameters available in the problem, leaving only the energy E of the blast, the resting density of the external gas, and the time t since the explosion. With only these three dimensional parameters, it is possible to form other quantities with unique functional dependences. In particular, the only length scale in the problem is
formula_0
The constant of proportionality will depend on the equation of state of the gas. R can be effectively treated as a constant due to the nature of blasting weapons versus heat/blast ordnance.
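Taking the proportionality constant to be of order one, the scaling can be evaluated directly; the constant, the function name, and the numbers below are purely illustrative assumptions.

```python
def blast_radius(E, rho0, t, k=1.0):
    """Sedov-type length scale R = k * (E / rho0)**(1/5) * t**(2/5)."""
    return k * (E / rho0) ** 0.2 * t ** 0.4

print(blast_radius(E=4.2e9, rho0=1.2, t=0.05))   # joules, kg/m^3, seconds -> metres
```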
Future developments.
Work is ongoing into capturing the exact environmental conditions in which the effect can be reliably repeated. This work is part of the 'Grays Study' and will report in late 2008.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " R \\propto E^{1/5}{{\\rho}_0}^{-1/5}t^{2/5} "
}
] |
https://en.wikipedia.org/wiki?curid=15593113
|
155941
|
Converse (logic)
|
Reverse of a categorical or hypothetical proposition
In logic and mathematics, the converse of a categorical or implicational statement is the result of reversing its two constituent statements. For the implication "P" → "Q", the converse is "Q" → "P". For the categorical proposition "All S are P", the converse is "All P are S". Either way, the truth of the converse is generally independent from that of the original statement.
Implicational converse.
Let "S" be a statement of the form "P implies Q" ("P" → "Q"). Then the "converse" of "S" is the statement "Q implies P" ("Q" → "P"). In general, the truth of "S" says nothing about the truth of its converse, unless the antecedent "P" and the consequent "Q" are logically equivalent.
For example, consider the true statement "If I am a human, then I am mortal." The converse of that statement is "If I am mortal, then I am a human," which is not necessarily true.
However, the converse of a statement with mutually inclusive terms remains true, given the truth of the original proposition. This is equivalent to saying that the converse of a definition is true. Thus, the statement "If I am a triangle, then I am a three-sided polygon" is logically equivalent to "If I am a three-sided polygon, then I am a triangle," because the definition of "triangle" is "three-sided polygon".
A truth table makes it clear that "S" and the converse of "S" are not logically equivalent, unless both terms imply each other:
Going from a statement to its converse is the fallacy of affirming the consequent. However, if the statement "S" and its converse are equivalent (i.e., "P" is true if and only if "Q" is also true), then affirming the consequent will be valid.
Converse implication is logically equivalent to the disjunction of formula_1 and formula_2.
In natural language, this could be rendered "not "Q" without "P"".
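The truth-table comparison, and the equivalence of "Q" → "P" with the disjunction of formula_1 and formula_2, can be checked by brute force over the four valuations; this small enumeration is only illustrative.

```python
from itertools import product

def implies(p, q):
    return (not p) or q

for p, q in product([False, True], repeat=2):
    statement = implies(p, q)            # P -> Q
    converse  = implies(q, p)            # Q -> P
    disjunct  = p or (not q)             # P or not Q
    print(p, q, statement, converse, converse == disjunct)
# The third and fourth columns differ in two rows, so a statement and its
# converse are not equivalent; the last column is always True.
```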
Converse of a theorem.
In mathematics, the converse of a theorem of the form "P" → "Q" will be "Q" → "P". The converse may or may not be true, and even if true, the proof may be difficult. For example, the four-vertex theorem was proved in 1912, but its converse was proved only in 1997.
In practice, when determining the converse of a mathematical theorem, aspects of the antecedent may be taken as establishing context. That is, the converse of "Given "P", if "Q" then "R"" will be "Given "P", if "R" then "Q"". For example, the Pythagorean theorem can be stated as:
"Given" a triangle with sides of length "formula_3", "formula_4", and "formula_5", "if" the angle opposite the side of length "formula_5" is a right angle, "then" formula_6.
The converse, which also appears in Euclid's "Elements" (Book I, Proposition 48), can be stated as:
"Given" a triangle with sides of length "formula_3", "formula_4", and "formula_5", "if" formula_6, "then" the angle opposite the side of length "formula_5" is a right angle.
Converse of a relation.
If formula_7 is a binary relation with formula_8 then the converse relation formula_9 is also called the "transpose".
Notation.
The converse of the implication "P" → "Q" may be written "Q" → "P", formula_0, but may also be notated formula_10, or "Bpq" (in Bocheński notation).
Categorical converse.
In traditional logic, the process of switching the subject term with the predicate term is called "conversion". For example, going from "No "S" are "P"" to its converse "No "P" are "S"". In the words of Asa Mahan: "The original proposition is called the exposita; when converted, it is denominated the converse. Conversion is valid when, and only when, nothing is asserted in the converse which is not affirmed or implied in the exposita." The "exposita" is more usually called the "convertend". In its simple form, conversion is valid only for E and I propositions:
The validity of simple conversion only for E and I propositions can be expressed by the restriction that "No term must be distributed in the converse which is not distributed in the convertend." For E propositions, both subject and predicate are distributed, while for I propositions, neither is.
For A propositions, the subject is distributed while the predicate is not, and so the inference from an A statement to its converse is not valid. As an example, for the A proposition "All cats are mammals", the converse "All mammals are cats" is obviously false. However, the weaker statement "Some mammals are cats" is true. Logicians define conversion "per accidens" to be the process of producing this weaker statement. Inference from a statement to its converse "per accidens" is generally valid. However, as with syllogisms, this switch from the universal to the particular causes problems with empty categories: "All unicorns are mammals" is often taken as true, while the converse "per accidens" "Some mammals are unicorns" is clearly false.
In first-order predicate calculus, "All S are P" can be represented as formula_11. It is therefore clear that the categorical converse is closely related to the implicational converse, and that "S" and "P" cannot be swapped in "All S are P".
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "P \\leftarrow Q"
},
{
"math_id": 1,
"text": "P"
},
{
"math_id": 2,
"text": "\\neg Q"
},
{
"math_id": 3,
"text": "a"
},
{
"math_id": 4,
"text": "b"
},
{
"math_id": 5,
"text": "c"
},
{
"math_id": 6,
"text": "a^2 + b^2 = c^2"
},
{
"math_id": 7,
"text": "R"
},
{
"math_id": 8,
"text": "R \\subseteq A \\times B,"
},
{
"math_id": 9,
"text": "R^T = \\{ (b,a) : (a,b) \\in R \\}"
},
{
"math_id": 10,
"text": "P \\subset Q"
},
{
"math_id": 11,
"text": "\\forall x. S(x) \\to P(x)"
}
] |
https://en.wikipedia.org/wiki?curid=155941
|
155942
|
Inverse (logic)
|
In logic, an inverse is a type of conditional sentence which is an immediate inference made from another conditional sentence. More specifically, given a conditional sentence of the form formula_0, the inverse refers to the sentence formula_1. Since an inverse is the contrapositive of the converse, inverse and converse are logically equivalent to each other.
For example, substituting propositions in natural language for logical variables, the inverse of the following conditional proposition
"If it's raining, then Sam will meet Jack at the movies."
would be
"If it's not raining, then Sam will not meet Jack at the movies."
The inverse of the inverse, that is, the inverse of formula_2, is formula_3, and since the double negation of any statement is equivalent to the original statement in classical logic, the inverse of the inverse is logically equivalent to the original conditional formula_4. Thus it is permissible to say that formula_2 and formula_4 are inverses of each other. Likewise, formula_5 and formula_6 are inverses of each other.
The inverse and the converse of a conditional are logically equivalent to each other, just as the conditional and its contrapositive are logically equivalent to each other. But "the inverse of a conditional cannot be inferred from the conditional itself" (e.g., the conditional might be true while its inverse might be false). For example, the sentence
"If it's not raining, Sam will not meet Jack at the movies"
cannot be inferred from the sentence
"If it's raining, Sam will meet Jack at the movies"
because in the case where it's not raining, additional conditions may still prompt Sam and Jack to meet at the movies, such as:
"If it's not raining and Jack is craving popcorn, Sam will meet Jack at the movies."
In traditional logic, where there are four named types of categorical propositions, only forms A (i.e., "All "S" are "P"") and E ("No "S" are "P"") have an inverse. To find the inverse of these categorical propositions, one must: replace the subject and the predicate of the invertend by their respective contradictories, and change the quantity from universal to particular. That is:
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "P \\rightarrow Q "
},
{
"math_id": 1,
"text": "\\neg P \\rightarrow \\neg Q "
},
{
"math_id": 2,
"text": "\\neg P \\rightarrow \\neg Q "
},
{
"math_id": 3,
"text": "\\neg \\neg P \\rightarrow \\neg \\neg Q "
},
{
"math_id": 4,
"text": "P \\rightarrow Q "
},
{
"math_id": 5,
"text": "P \\rightarrow \\neg Q "
},
{
"math_id": 6,
"text": "\\neg P \\rightarrow Q "
}
] |
https://en.wikipedia.org/wiki?curid=155942
|
1559901
|
Mean motion
|
Angular speed required for a body to complete one orbit
In orbital mechanics, mean motion (represented by "n") is the angular speed required for a body to complete one orbit, assuming constant speed in a circular orbit which completes in the same time as the variable speed, elliptical orbit of the actual body. The concept applies equally well to a small body revolving about a large, massive primary body or to two bodies of comparable size revolving about a common center of mass. While nominally a mean, and theoretically so in the case of two-body motion, in practice the mean motion is not typically an average over time for the orbits of real bodies, which only approximate the two-body assumption. It is rather the instantaneous value which satisfies the above conditions as calculated from the current gravitational and geometric circumstances of the body's constantly-changing, perturbed orbit.
Mean motion is used as an approximation of the actual orbital speed in making an initial calculation of the body's position in its orbit, for instance, from a set of orbital elements. This mean position is refined by Kepler's equation to produce the true position.
Definition.
Define the orbital period (the time period for the body to complete one orbit) as "P", with dimension of time. The mean motion is simply one revolution divided by this time, or,
formula_0
with dimensions of radians per unit time, degrees per unit time or revolutions per unit time.
The value of mean motion depends on the circumstances of the particular gravitating system. In systems with more mass, bodies will orbit faster, in accordance with Newton's law of universal gravitation. Likewise, bodies closer together will also orbit faster.
Mean motion and Kepler's laws.
Kepler's 3rd law of planetary motion states, "the square of the periodic time is proportional to the cube of the mean distance", or
formula_1
where "a" is the semi-major axis or mean distance, and "P" is the orbital period as above. The constant of proportionality is given by
formula_2
where "μ" is the standard gravitational parameter, a constant for any particular gravitational system.
If the mean motion is given in units of radians per unit of time, we can combine it with the above statement of Kepler's 3rd law,
formula_3
and reducing,
formula_4
which is another definition of Kepler's 3rd law. "μ", the constant of proportionality, is a gravitational parameter defined by the masses of the bodies in question and by the Newtonian constant of gravitation, "G" (see below). Therefore, "n" is also defined
formula_5
Expanding mean motion by expanding "μ",
formula_6
where "M" is typically the mass of the primary body of the system and "m" is the mass of a smaller body.
This is the complete gravitational definition of mean motion in a two-body system. Often in celestial mechanics, the primary body is much larger than any of the secondary bodies of the system, that is, "M" ≫ "m". It is under these circumstances that "m" becomes unimportant and the constant in Kepler's 3rd law is approximately the same for all of the smaller bodies.
Kepler's 2nd law of planetary motion states, "a line joining a planet and the Sun sweeps out equal areas in equal times", or
formula_7
for a two-body orbit, where d"A"/d"t" is the time rate of change of the area swept.
Letting "t" = "P", the orbital period, the area swept is the entire area of the ellipse, "A" = π"ab", where "a" is the semi-major axis and "b" is the semi-minor axis of the ellipse. Hence,
formula_8
Multiplying this equation by 2,
formula_9
From the above definition, mean motion "n" = 2π/"P". Substituting,
formula_10
and mean motion is also
formula_11
which is itself constant as "a", "b", and d"A"/d"t" are all constant in two-body motion.
Mean motion and the constants of the motion.
Because of the nature of two-body motion in a conservative gravitational field, two aspects of the motion do not change: the angular momentum and the mechanical energy.
The first constant, called specific angular momentum, can be defined as
formula_12
and substituting in the above equation, mean motion is also
formula_13
The second constant, called specific mechanical energy, can be defined as<ref name="BMW28">Bate, Roger R.; Mueller, Donald D.; White, Jerry E. (1971). p. 28.</ref>
formula_14
Rearranging and multiplying by −2/"a"2,
formula_15
From above, the square of mean motion "n"2 = "μ"/"a"3. Substituting and rearranging, mean motion can also be expressed,
formula_16
where the −2 shows that "ξ" must be defined as a negative number, as is customary in celestial mechanics and astrodynamics.
Mean motion and the gravitational constants.
Two gravitational constants are commonly used in Solar System celestial mechanics: "G", the Newtonian constant of gravitation and "k", the Gaussian gravitational constant. From the above definitions, mean motion is
formula_17
By normalizing parts of this equation and making some assumptions, it can be simplified, revealing the relation between the mean motion and the constants.
Setting the mass of the Sun to unity, "M" = 1. The masses of the planets are all much smaller, "m" ≪ "M". Therefore, for any particular planet,
formula_18
and also taking the semi-major axis as one astronomical unit,
formula_19
The Gaussian gravitational constant "k" = √"G", therefore, under the same conditions as above, for any particular planet
formula_20
and again taking the semi-major axis as one astronomical unit,
formula_21
Mean motion and mean anomaly.
Mean motion also represents the rate of change of mean anomaly, and hence can also be calculated,
formula_22
where "M"1 and "M"0 are the mean anomalies at particular points in time, and Δ"t" (≡ "t"1-"t"0) is the time elapsed between the two. "M"0 is referred to as the "mean anomaly at epoch" "t"0, and Δ"t" is the "time since epoch".
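For instance, propagating the mean anomaly of an Earth satellite over Δ"t" only requires the semi-major axis and the gravitational parameter. The following sketch uses the standard value of Earth's "μ" and made-up orbital data; the function names are illustrative.

```python
import math

MU_EARTH = 3.986004418e14          # m^3/s^2, standard gravitational parameter of Earth

def mean_motion(a):
    """Mean motion n = sqrt(mu / a^3) in radians per second."""
    return math.sqrt(MU_EARTH / a ** 3)

def propagate_mean_anomaly(M0, a, dt):
    """M1 = M0 + n * dt, wrapped to [0, 2*pi)."""
    return (M0 + mean_motion(a) * dt) % (2.0 * math.pi)

a = 6_778_000.0                    # ~400 km altitude circular orbit, in metres (illustrative)
print(mean_motion(a))              # about 1.13e-3 rad/s
print(propagate_mean_anomaly(0.0, a, 3600.0))
```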
Formulae.
For Earth satellite orbital parameters, the mean motion is typically measured in revolutions per day. In that case,
formula_23
where "d" is the length of the day expressed in the chosen units of time (for example, "d" = 86,400 when time is measured in seconds), and "M", "m", and "a" are as defined above.
To convert from radians per unit time to revolutions per day, consider the following:
formula_24
From above, mean motion in radians per unit time is:
formula_25
therefore the mean motion in revolutions per day is
formula_26
where "P" is the orbital period, as above.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n = \\frac{2\\pi}{P}, \\qquad n = \\frac{360^\\circ}{P}, \\quad \\mbox{or} \\quad n = \\frac{1}{P},"
},
{
"math_id": 1,
"text": "{a^3} \\propto {P^2},"
},
{
"math_id": 2,
"text": "\\frac{a^3}{P^2} = \\frac {\\mu}{4\\pi^2}"
},
{
"math_id": 3,
"text": "\\frac {\\mu}{4\\pi^2} = \\frac{a^3}{\\left(\\frac{2\\pi}{n}\\right)^2},"
},
{
"math_id": 4,
"text": "\\mu = a^3n^2,"
},
{
"math_id": 5,
"text": "n^2 = \\frac{\\mu}{a^3}, \\quad \\text{or} \\quad n = \\sqrt{\\frac{\\mu}{a^3}}."
},
{
"math_id": 6,
"text": "n = \\sqrt{\\frac{ G( M + m ) }{a^3}},"
},
{
"math_id": 7,
"text": "\\frac{\\mathrm{d}A}{\\mathrm{d}t} = \\text{constant}"
},
{
"math_id": 8,
"text": "\\frac{\\mathrm{d}A}{\\mathrm{d}t} = \\frac{\\pi ab}{P}."
},
{
"math_id": 9,
"text": "2 \\left( \\frac{\\mathrm{d}A}{\\mathrm{d}t} \\right) = 2 \\left( \\frac{\\pi ab}{P} \\right)."
},
{
"math_id": 10,
"text": "2\\frac{\\mathrm{d}A}{\\mathrm{d}t} = nab,"
},
{
"math_id": 11,
"text": "n = \\frac{2}{ab}\\frac{\\mathrm{d}A}{\\mathrm{d}t},"
},
{
"math_id": 12,
"text": "h = 2\\frac{\\mathrm{d}A}{\\mathrm{d}t},"
},
{
"math_id": 13,
"text": "n = \\frac{h}{ab}."
},
{
"math_id": 14,
"text": "\\xi = -\\frac{\\mu}{2a}."
},
{
"math_id": 15,
"text": "\\frac{-2\\xi}{a^2} = \\frac{\\mu}{a^3}."
},
{
"math_id": 16,
"text": "n = \\frac{1}{a}\\sqrt{-2\\xi},"
},
{
"math_id": 17,
"text": "n = \\sqrt{\\frac{ G( M + m ) }{a^3}}\\,\\!."
},
{
"math_id": 18,
"text": "n \\approx \\sqrt{\\frac{G}{a^3}},"
},
{
"math_id": 19,
"text": "n_{1\\;\\text{AU}} \\approx \\sqrt{G}."
},
{
"math_id": 20,
"text": "n \\approx \\frac{k}{\\sqrt{a^3}},"
},
{
"math_id": 21,
"text": "n_{1\\text{ AU}} \\approx k."
},
{
"math_id": 22,
"text": "\\begin{align}\nn &= \\frac{M_1 - M_0}{t_1 - t_0} = \\frac{M_1 - M_0}{\\Delta t}, \\\\\nM_1 &= M_0 + n \\times (t_1 - t_0) = M_0 + n \\times \\Delta t \n\\end{align}"
},
{
"math_id": 23,
"text": "n = \\frac{d}{2\\pi}\\sqrt{\\frac{ G( M + m ) }{a^3}} = d\\sqrt{\\frac{ G( M + m ) }{4\\pi^2 a^3}}\\,\\!"
},
{
"math_id": 24,
"text": "{\\rm \\frac{radians}{time\\ unit}\\times\\frac{1\\ revolution}{2\\pi\\ radians}\\times}\\frac{d\\ {\\rm time\\ units}}{1{\\rm \\ day}} = \\frac{d}{2\\pi} {\\rm\\ revolutions\\ per\\ day}"
},
{
"math_id": 25,
"text": "n = \\frac{2\\pi}{P},"
},
{
"math_id": 26,
"text": "n = \\frac{d}{2\\pi} \\frac{2\\pi}{P} = \\frac{d}{P},"
}
] |
https://en.wikipedia.org/wiki?curid=1559901
|
1559922
|
Spacecraft flight dynamics
|
Application of mechanical dynamics to model the flight of space vehicles
Spacecraft flight dynamics is the application of mechanical dynamics to model how the external forces acting on a space vehicle or spacecraft determine its flight path. These forces are primarily of three types: propulsive force provided by the vehicle's engines; gravitational force exerted by the Earth and other celestial bodies; and aerodynamic lift and drag (when flying in the atmosphere of the Earth or other body, such as Mars or Venus).
The principles of flight dynamics are used to model a vehicle's powered flight during launch from the Earth; a spacecraft's orbital flight; maneuvers to change orbit; translunar and interplanetary flight; launch from and landing on a celestial body, with or without an atmosphere; entry through the atmosphere of the Earth or other celestial body; and attitude control. They are generally programmed into a vehicle's inertial navigation systems, and monitored on the ground by a member of the flight controller team known in NASA as the flight dynamics officer, or in the European Space Agency as the spacecraft navigator.
Flight dynamics depends on the disciplines of propulsion, aerodynamics, and astrodynamics (orbital mechanics and celestial mechanics). It cannot be reduced to simply attitude control; real spacecraft do not have steering wheels or tillers like airplanes or ships. Unlike the way fictional spaceships are portrayed, a spacecraft actually does not bank to turn in outer space, where its flight path depends strictly on the gravitational forces acting on it and the propulsive maneuvers applied.
Basic principles.
A space vehicle's flight is determined by application of Newton's second law of motion:
formula_0
where F is the vector sum of all forces exerted on the vehicle, m is its current mass, and a is the acceleration vector, the instantaneous rate of change of velocity (v), which in turn is the instantaneous rate of change of displacement. Solving for a, acceleration equals the force sum divided by mass. Acceleration is integrated over time to get velocity, and velocity is in turn integrated to get position.
Flight dynamics calculations are handled by computerized guidance systems aboard the vehicle; the status of the flight dynamics is monitored on the ground during powered maneuvers by a member of the flight controller team known in NASA's Human Spaceflight Center as the flight dynamics officer, or in the European Space Agency as the spacecraft navigator.
For powered atmospheric flight, the three main forces which act on a vehicle are propulsive force, aerodynamic force, and gravitation. Other external forces such as centrifugal force, Coriolis force, and solar radiation pressure are generally insignificant due to the relatively short time of powered flight and small size of spacecraft, and may generally be neglected in simplified performance calculations.
Propulsion.
The thrust of a rocket engine, in the general case of operation in an atmosphere, is approximated by:
formula_1
where,
The effective exhaust velocity of the rocket propellant is proportional to the vacuum specific impulse and affected by the atmospheric pressure:
formula_8
where:
The specific impulse relates the delta-v capacity to the quantity of propellant consumed according to the Tsiolkovsky rocket equation:
formula_11
where:
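As a numerical illustration of the rocket equation in its usual form Δ"v" = "I"sp"g"0 ln("m"0/"m"1): the symbol names and the stage data in this sketch are the conventional ones and may differ from the article's notation.

```python
import math

def tsiolkovsky_delta_v(isp, m0, m1, g0=9.80665):
    """Delta-v = Isp * g0 * ln(m0 / m1), the classical rocket equation."""
    return isp * g0 * math.log(m0 / m1)

# Illustrative stage: Isp = 350 s, 120 t wet mass, 30 t dry mass.
print(tsiolkovsky_delta_v(350.0, 120_000.0, 30_000.0))   # about 4.8 km/s
```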
Aerodynamic force.
Aerodynamic forces, present near a body with a significant atmosphere such as Earth, Mars or Venus, are analyzed as: lift, defined as the force component perpendicular to the direction of flight (not necessarily upward to balance gravity, as for an airplane); and drag, the component parallel to, and in the opposite direction of flight. Lift and drag are modeled as the products of a coefficient times dynamic pressure acting on a reference area:
formula_16
formula_17
where:
Gravitation.
The gravitational force that a celestial body exerts on a space vehicle is modeled with the body and vehicle taken as point masses; the bodies (Earth, Moon, etc.) are simplified as spheres; and the mass of the vehicle is much smaller than the mass of the body so that its effect on the gravitational acceleration can be neglected. Therefore the gravitational force is calculated by:
formula_18
where:
Powered flight.
The equations of motion used to describe powered flight of a vehicle during launch can be as complex as six degrees of freedom for in-flight calculations, or as simple as two degrees of freedom for preliminary performance estimates. In-flight calculations will take perturbation factors into account such as the Earth's oblateness and non-uniform mass distribution, and gravitational forces of all nearby bodies, including the Moon, Sun, and other planets. Preliminary estimates can make some simplifying assumptions: a spherical, uniform planet; the vehicle can be represented as a point mass; solution of the flight path presents a two-body problem; and the local flight path lies in a single plane, with reasonably small loss of accuracy.
The general case of a launch from Earth must take engine thrust, aerodynamic forces, and gravity into account. The acceleration equation can be reduced from vector to scalar form by resolving it into its tangential (speed formula_24) and angular (flight path angle formula_25 relative to local vertical) time rate-of-change components relative to the launch pad. The two equations thus become:
formula_26
where:
Mass decreases as propellant is consumed and rocket stages, engines or tanks are shed (if applicable).
The planet-fixed values of v and θ at any time in the flight are then determined by numerical integration of the two rate equations from time zero (when both "v" and "θ" are 0):
formula_27
Finite element analysis can be used to integrate the equations, by breaking the flight into small time increments.
For most launch vehicles, relatively small levels of lift are generated, and a gravity turn is employed, depending mostly on the third term of the angle rate equation. At the moment of liftoff, when angle and velocity are both zero, the theta-dot equation is mathematically indeterminate and cannot be evaluated until velocity becomes non-zero shortly after liftoff. Notice that at this condition, the only force which can cause the vehicle to pitch over is the engine thrust acting at a non-zero angle of attack (first term) and perhaps a slight amount of lift (second term), until a non-zero pitch angle is attained. In the gravity turn, pitch-over is initiated by applying an increasing angle of attack (by means of gimbaled engine thrust), followed by a gradual decrease in angle of attack through the remainder of the flight.
Once velocity and flight path angle are known, altitude formula_28 and downrange distance formula_29 are computed as:
formula_30
The planet-fixed values of "v" and "θ" are converted to space-fixed (inertial) values with the following conversions:
formula_31
where "ω" is the planet's rotational rate in radians per second, "φ" is the launch site latitude, and "A""z" is the launch azimuth angle.
formula_32
Final "v""s", "θ""s" and "r" must match the requirements of the target orbit as determined by orbital mechanics (see Orbital flight, above), where final "v""s" is usually the required periapsis (or circular) velocity, and final "θ""s" is 90 degrees. A powered descent analysis would use the same procedure, with reverse boundary conditions.
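As an illustration of the time stepping described here, the sketch below integrates a drastically simplified version of the two rate equations (thrust along the velocity vector, no lift, no drag, non-rotating spherical planet). The reduced equations, the function name, and all numerical values are assumptions of the example rather than the article's full model; a tiny initial velocity and pitch angle are used to step past the indeterminacy at liftoff noted above.

```python
import math

def simulate_ascent(thrust, mdot, m0, dt, t_burn, r0=6.371e6, mu=3.986004418e14,
                    theta0=1e-4, v0=1.0):
    """Very simplified planar gravity-turn integration.

    theta is the flight path angle measured from the local vertical; no lift, no drag,
    zero angle of attack, non-rotating planet.
    """
    v, theta, h, s, m = v0, theta0, 0.0, 0.0, m0
    t = 0.0
    while t < t_burn:
        r = r0 + h
        g = mu / r ** 2
        v_dot = thrust / m - g * math.cos(theta)          # speed rate
        theta_dot = (g / v - v / r) * math.sin(theta)     # gravity-turn term only
        v += v_dot * dt
        theta += theta_dot * dt
        h += v * math.cos(theta) * dt                     # altitude rate
        s += v * math.sin(theta) * dt                     # approximate downrange rate
        m -= mdot * dt                                    # propellant consumption
        t += dt
    return v, math.degrees(theta), h, s

print(simulate_ascent(thrust=1.2e6, mdot=300.0, m0=8.0e4, dt=0.1, t_burn=150.0))
```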
Orbital flight.
Orbital mechanics are used to calculate flight in orbit about a central body. For sufficiently high orbits (generally at least in the case of Earth), aerodynamic force may be assumed to be negligible for relatively short term missions (though a small amount of drag may be present which results in decay of orbital energy over longer periods of time.) When the central body's mass is much larger than the spacecraft, and other bodies are sufficiently far away, the solution of orbital trajectories can be treated as a two-body problem.
This can be shown to result in the trajectory being ideally a conic section (circle, ellipse, parabola or hyperbola) with the central body located at one focus. Orbital trajectories are either circles or ellipses; the parabolic trajectory represents first escape of the vehicle from the central body's gravitational field. Hyperbolic trajectories are escape trajectories with excess velocity, and will be covered under Interplanetary flight below.
Elliptical orbits are characterized by three elements. The semi-major axis "a" is the average of the radius at apoapsis and periapsis:
formula_33
The eccentricity "e" can then be calculated for an ellipse, knowing the apses:
formula_34
The time period for a complete orbit is dependent only on the semi-major axis, and is independent of eccentricity:
formula_35
where formula_36 is the standard gravitational parameter of the central body.
The orientation of the orbit in space is specified by three angles: the inclination of the orbital plane with respect to the reference plane; the longitude of the ascending node; and the argument of periapsis.
The orbital plane is ideally constant, but is usually subject to small perturbations caused by planetary oblateness and the presence of other bodies.
The spacecraft's position in orbit is specified by the "true anomaly," formula_37, an angle measured from the periapsis, or for a circular orbit, from the ascending node or reference direction. The "semi-latus rectum", or radius at 90 degrees from periapsis, is:
formula_38
The radius at any position in flight is:
formula_39
and the velocity at that position is:
formula_40
Types of orbit.
Circular.
For a circular orbit, "r""a" = "r""p" = "a", and eccentricity is 0. Circular velocity at a given radius is:
formula_41
Elliptical.
For an elliptical orbit, "e" is greater than 0 but less than 1. The periapsis velocity is:
formula_42
and the apoapsis velocity is:
formula_43
The limiting condition is a parabolic escape orbit, when "e" = 1 and "r""a" becomes infinite. Escape velocity at periapsis is then
formula_44
Flight path angle.
The "specific angular momentum" of any conic orbit, "h", is constant, and is equal to the product of radius and velocity at periapsis. At any other point in the orbit, it is equal to:
formula_45
where "φ" is the flight path angle measured from the local horizontal (perpendicular to "r".) This allows the calculation of "φ" at any point in the orbit, knowing radius and velocity:
formula_46
Note that flight path angle is a constant 0 degrees (90 degrees from local vertical) for a circular orbit.
True anomaly as a function of time.
It can be shown that the angular momentum equation given above also relates the rate of change in true anomaly to "r", "v", and "φ", thus the true anomaly can be found as a function of time since periapsis passage by integration:
formula_47
Conversely, the time required to reach a given anomaly is:
formula_48
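A short numerical sketch of this integration is given below (midpoint rule, Earth's standard "μ", made-up orbital elements). It uses the standard relations p = a(1 − e²) for the semi-latus rectum and h = √(μp) for the angular momentum, which may differ in form from the article's formulas; the function name is illustrative.

```python
import math

def time_since_periapsis(nu_target, a, e, mu=3.986004418e14, steps=100_000):
    """Time from periapsis to true anomaly nu_target by integrating dt = r^2 / h dnu."""
    p = a * (1.0 - e ** 2)                 # semi-latus rectum
    h = math.sqrt(mu * p)                  # specific angular momentum
    dnu = nu_target / steps
    t = 0.0
    for i in range(steps):
        nu = (i + 0.5) * dnu               # midpoint rule
        r = p / (1.0 + e * math.cos(nu))
        t += r ** 2 / h * dnu
    return t

# Example: time from periapsis to nu = 90 deg on a mildly eccentric Earth orbit.
print(time_since_periapsis(math.pi / 2, a=8.0e6, e=0.1))
```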
Orbital maneuvers.
Once in orbit, a spacecraft may fire rocket engines to make in-plane changes to a different altitude or type of orbit, or to change its orbital plane. These maneuvers require changes in the craft's velocity, and the classical rocket equation is used to calculate the propellant requirements for a given delta-v. A delta-"v" budget will add up all the propellant requirements, or determine the total delta-v available from a given amount of propellant, for the mission. Most on-orbit maneuvers can be modeled as impulsive, that is as a near-instantaneous change in velocity, with minimal loss of accuracy.
In-plane changes.
Orbit circularization.
An elliptical orbit is most easily converted to a circular orbit at the periapsis or apoapsis by applying a single engine burn with a delta v equal to the difference between the desired orbit's circular velocity and the current orbit's periapsis or apoapsis velocity:
To circularize at periapsis, a retrograde burn is made:
formula_49
To circularize at apoapsis, a posigrade burn is made:
formula_50
Altitude change by Hohmann transfer.
A Hohmann transfer orbit is the simplest maneuver which can be used to move a spacecraft from one altitude to another. Two burns are required: the first to send the craft into the elliptical transfer orbit, and a second to circularize the target orbit.
To raise a circular orbit at formula_51, the first posigrade burn raises velocity to the transfer orbit's periapsis velocity:
formula_52
The second posigrade burn, made at apoapsis, raises velocity to the target orbit's velocity:
formula_53
A maneuver to lower the orbit is the mirror image of the raise maneuver; both burns are made retrograde.
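Under the usual impulsive-burn assumption, the two burns can be sized with the vis-viva equation. The sketch below does this with Earth's standard "μ" and illustrative radii; the helper name and the specific numbers are assumptions, and the formulas are the standard ones rather than quotations of the article's expressions.

```python
import math

MU_EARTH = 3.986004418e14   # m^3/s^2

def hohmann_delta_vs(r1, r2, mu=MU_EARTH):
    """Delta-v of the two posigrade burns of a Hohmann transfer from radius r1 to r2 > r1."""
    a_t = 0.5 * (r1 + r2)                                   # transfer orbit semi-major axis
    dv1 = math.sqrt(mu * (2.0 / r1 - 1.0 / a_t)) - math.sqrt(mu / r1)
    dv2 = math.sqrt(mu / r2) - math.sqrt(mu * (2.0 / r2 - 1.0 / a_t))
    return dv1, dv2

# Example: raise a ~300 km circular orbit to geostationary radius (illustrative values).
print(hohmann_delta_vs(6.671e6, 4.2164e7))   # roughly 2.4 km/s and 1.5 km/s
```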
Altitude change by bi-elliptic transfer.
A slightly more complicated altitude change maneuver is the bi-elliptic transfer, which consists of two half-elliptic orbits; the first, posigrade burn sends the spacecraft into an arbitrarily high apoapsis chosen at some point formula_54 away from the central body. At this point a second burn modifies the periapsis to match the radius of the final desired orbit, where a third, retrograde burn is performed to inject the spacecraft into the desired orbit. While this takes a longer transfer time, a bi-elliptic transfer can require less total propellant than the Hohmann transfer when the ratio of initial and target orbit radii is 12 or greater.
Burn 1 (posigrade):
formula_55
Burn 2 (posigrade or retrograde), to match periapsis to the target orbit's altitude:
formula_56
Burn 3 (retrograde):
formula_57
Change of plane.
Plane change maneuvers can be performed alone or in conjunction with other orbit adjustments. For a pure rotation plane change maneuver, consisting only of a change in the inclination of the orbit, the specific angular momenta, "h", of the initial and final orbits are equal in magnitude but not in direction. Therefore, the change in specific angular momentum can be written as:
formula_58
where "h" is the specific angular momentum before the plane change, and Δ"i" is the desired change in the inclination angle. From this it can be shown that the required delta-"v" is:
formula_59
From the definition of "h", this can also be written as:
formula_60
where "v" is the magnitude of velocity before plane change and "φ" is the flight path angle. Using the small-angle approximation, this becomes:
formula_61
The total delta-"v" for a combined maneuver can be calculated by a vector addition of the pure rotation delta-"v" and the delta-"v" for the other planned orbital change.
Translunar flight.
Vehicles sent on lunar or planetary missions are generally not launched by direct injection to departure trajectory, but first put into a low Earth parking orbit; this allows the flexibility of a bigger launch window and more time for checking that the vehicle is in proper condition for the flight.
Escape velocity is not required for flight to the Moon; rather the vehicle's apogee is raised high enough to take it through a point where it enters the Moon's gravitational sphere of influence (SOI). This is defined as the distance from a satellite at which its gravitational pull on a spacecraft equals that of its central body, which is
formula_62
where "D" is the mean distance from the satellite to the central body, and "m""c" and "m""s" are the masses of the central body and satellite, respectively. This value is approximately from Earth's Moon.
An accurate solution of the trajectory requires treatment as a three-body problem, but a preliminary estimate may be made using a patched conic approximation of orbits around the Earth and Moon, patched at the SOI point and taking into account the fact that the Moon is a revolving frame of reference around the Earth.
Translunar injection.
This must be timed so that the Moon will be in position to capture the vehicle, and might be modeled to a first approximation as a Hohmann transfer. However, the rocket burn duration is usually long enough, and occurs during a sufficient change in flight path angle, that this is not very accurate. It must be modeled as a non-impulsive maneuver, requiring integration by finite element analysis of the accelerations due to propulsive thrust and gravity to obtain velocity and flight path angle:
formula_63
where "F" is the engine thrust, "α" is the thrust vector's angle of attack, "m" is the vehicle mass, "g" is the local gravitational acceleration, "θ" is the flight path angle, "v" is the velocity, and "r" is the radial distance from the center of the Earth, as in the powered-flight equations given above.
Altitude formula_28, downrange distance formula_29, and radial distance formula_21 from the center of the Earth are then computed as:
formula_64
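A minimal, assumption-heavy sketch of such a finite-burn integration using crude forward-Euler steps of the equations above; the thrust, specific impulse, masses, and initial state are invented for illustration, and a real trajectory program would use a far better integrator and model.

```python
import math

MU = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2 (assumed)
R0 = 6.378e6             # reference radius used for g and downrange, m (assumed)
G0 = 9.80665

def simulate_burn(v, theta, r, mass, thrust, isp, alpha=0.0, t_burn=300.0, dt=0.1):
    """Forward-Euler integration of the finite-burn equations above: speed v,
    flight path angle theta (measured from the local vertical, in radians),
    altitude h and downrange distance s, with mass decreasing at thrust/(g0*Isp)."""
    h, s, t = r - R0, 0.0, 0.0
    mdot = thrust / (G0 * isp)                       # propellant mass flow rate
    while t < t_burn:
        g = MU / r**2
        v_dot = thrust * math.cos(alpha) / mass - g * math.cos(theta)
        theta_dot = (thrust * math.sin(alpha) / (mass * v)
                     + (g / v - v / r) * math.sin(theta))
        v += v_dot * dt
        theta += theta_dot * dt
        h += v * math.cos(theta) * dt
        s += (R0 / r) * v * math.sin(theta) * dt
        r = R0 + h
        mass -= mdot * dt
        t += dt
    return v, math.degrees(theta), h, s

# Hypothetical injection burn starting from a 200 km circular parking orbit
print(simulate_burn(v=7784.0, theta=math.radians(90.0), r=R0 + 200e3,
                    mass=140_000.0, thrust=1_000_000.0, isp=421.0, t_burn=300.0))
```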
Mid-course corrections.
A simple lunar trajectory stays in one plane, resulting in lunar flyby or orbit within a small range of inclination to the Moon's equator. This also permits a "free return", in which the spacecraft would return to the appropriate position for reentry into the Earth's atmosphere if it were not injected into lunar orbit. Relatively small velocity changes are usually required to correct for trajectory errors. Such a trajectory was used for the Apollo 8, Apollo 10, Apollo 11, and Apollo 12 crewed lunar missions.
Greater flexibility in lunar orbital or landing site coverage (at greater angles of lunar inclination) can be obtained by performing a plane change maneuver mid-flight; however, this takes away the free-return option, as the new plane would take the spacecraft's emergency return trajectory away from the Earth's atmospheric re-entry point, and leave the spacecraft in a high Earth orbit. This type of trajectory was used for the last five Apollo missions (13 through 17).
Lunar orbit insertion.
In the Apollo program, the retrograde lunar orbit insertion burn was performed at an altitude of approximately on the far side of the Moon. This became the pericynthion of the initial orbits, with an apocynthion on the order of . The delta-v was approximately . Two orbits later, the orbit was circularized at . For each mission, the flight dynamics officer prepared 10 lunar orbit insertion solutions so that the solution with the optimum (minimum) fuel burn that best met the mission requirements could be chosen; this was uploaded to the spacecraft computer and had to be executed and monitored by the astronauts on the lunar far side, while they were out of radio contact with Earth.
Interplanetary flight.
In order to completely leave one planet's gravitational field to reach another, a hyperbolic trajectory relative to the departure planet is necessary, with excess velocity added to (or subtracted from) the departure planet's orbital velocity around the Sun. The desired heliocentric transfer orbit to a superior planet will have its perihelion at the departure planet, requiring the hyperbolic excess velocity to be applied in the posigrade direction, when the spacecraft is away from the Sun. To an inferior planet destination, aphelion will be at the departure planet, and the excess velocity is applied in the retrograde direction when the spacecraft is toward the Sun. For accurate mission calculations, the orbital elements of the planets must be obtained from an ephemeris, such as that published by NASA's Jet Propulsion Laboratory.
Simplifying assumptions.
For the purpose of preliminary mission analysis and feasibility studies, certain simplified assumptions may be made to enable delta-v calculation with very small error:
Since interplanetary spacecraft spend a large portion of their flight time in heliocentric orbit between the planets, which are at relatively large distances from each other, the patched-conic approximation is much more accurate for interplanetary trajectories than for translunar trajectories. The patch point between the hyperbolic trajectory relative to the departure planet and the heliocentric transfer orbit occurs at the planet's sphere of influence radius relative to the Sun, as defined above in Orbital flight. Given the Sun's mass of 333,432 times that of Earth and the Earth's distance from the Sun of about 149,500,000 kilometers, the Earth's sphere of influence radius works out to roughly 1,000,000 kilometers.
Heliocentric transfer orbit.
The transfer orbit required to carry the spacecraft from the departure planet's orbit to the destination planet is chosen among several options:
Hyperbolic departure.
The required hyperbolic excess velocity "v"∞ (sometimes called "characteristic velocity") is the difference between the transfer orbit's departure speed and the departure planet's heliocentric orbital speed. Once this is determined, the injection velocity relative to the departure planet at periapsis is:
formula_65
The excess velocity vector for a hyperbola is displaced from the periapsis tangent by a characteristic angle, therefore the periapsis injection burn must lead the planetary departure point by the same angle:
formula_66
The geometric equation for eccentricity of an ellipse cannot be used for a hyperbola. But the eccentricity can be calculated from dynamics formulations as:
formula_67
where h is the specific angular momentum as given above in the Orbital flight section, calculated at the periapsis:
formula_68
and "ε" is the specific energy:
formula_69
Also, the equations for r and v given in Orbital flight depend on the semi-major axis, and thus are unusable for an escape trajectory. But setting radius at periapsis equal to the r equation at zero anomaly gives an alternate expression for the semi-latus rectum:
formula_70
which gives a more general equation for radius versus anomaly that is usable at any eccentricity:
formula_71
Substituting the alternate expression for p also gives an alternate expression for a (which is defined for a hyperbola, but no longer represents the semi-major axis). This gives an equation for velocity versus radius which is likewise usable at any eccentricity:
formula_72
The equations for flight path angle and anomaly versus time given in Orbital flight are also usable for hyperbolic trajectories.
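For illustration, a short sketch that evaluates the hyperbolic-departure quantities above (periapsis injection speed, eccentricity, and the lead angle δ); the parking-orbit radius and hyperbolic excess speed are assumed example values.

```python
import math

MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2 (assumed)

def hyperbolic_departure(r_p, v_infinity, mu=MU):
    """Periapsis injection speed, eccentricity and asymptote half-angle of a
    departure hyperbola, from v_p = sqrt(2*mu/r_p + v_inf**2), h = r_p*v_p,
    eps = v**2/2 - mu/r, e = sqrt(1 + 2*eps*h**2/mu**2), delta = arcsin(1/e)."""
    v_p = math.sqrt(2.0 * mu / r_p + v_infinity**2)
    h = r_p * v_p
    eps = v_p**2 / 2.0 - mu / r_p           # equals v_infinity**2 / 2
    e = math.sqrt(1.0 + 2.0 * eps * h**2 / mu**2)
    delta = math.degrees(math.asin(1.0 / e))
    return v_p, e, delta

# Hypothetical departure: v_infinity ~ 2.94 km/s from a 300 km parking orbit
v_p, e, delta = hyperbolic_departure(r_p=6.678e6, v_infinity=2940.0)
print(f"v_p = {v_p:.0f} m/s, e = {e:.3f}, lead angle = {delta:.1f} deg")
```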
Launch windows.
There is a great deal of variation with time of the velocity change required for a mission, because of the constantly varying relative positions of the planets. Therefore, optimum launch windows are often chosen from the results of porkchop plots that show contours of characteristic energy ("v"∞²) plotted versus departure and arrival time.
Atmospheric entry.
Controlled entry, descent, and landing of a vehicle are achieved by shedding the excess kinetic energy through aerodynamic heating from drag, which requires some means of heat shielding, and/or retrograde thrust. Terminal descent is usually achieved by means of parachutes and/or air brakes.
Attitude control.
Since spacecraft spend most of their flight time coasting unpowered through the vacuum of space, they are unlike aircraft in that their flight trajectory is not determined by their attitude (orientation), except during atmospheric flight to control the forces of lift and drag, and during powered flight to align the thrust vector. Nonetheless, attitude control is often maintained in unpowered flight to keep the spacecraft in a fixed orientation for purposes of astronomical observation, communications, or for solar power generation; or to place it into a controlled spin for passive thermal control, or to create artificial gravity inside the craft.
Attitude control is maintained with respect to an inertial frame of reference or another entity (the celestial sphere, certain fields, nearby objects, etc.). The attitude of a craft is described by angles relative to three mutually perpendicular axes of rotation, referred to as roll, pitch, and yaw. Orientation can be determined by calibration using an external guidance system, such as determining the angles to a reference star or the Sun, then internally monitored using an inertial system of mechanical or optical gyroscopes. Orientation is a vector quantity described by three angles for the instantaneous direction, together with the instantaneous rates of rotation about all three axes. The aspect of control implies both awareness of the instantaneous orientation and rotation rates, and the ability to change the rotation rates to assume a new orientation using either a reaction control system or other means.
Newton's second law, applied to rotational rather than linear motion, becomes:
formula_73
where formula_74 is the net torque about an axis of rotation exerted on the vehicle, "I"x is its moment of inertia about that axis (a physical property that combines the mass and its distribution around the axis), and formula_75 is the angular acceleration about that axis in radians per second per second. Therefore, the acceleration rate in degrees per second per second is
formula_76
Analogous to linear motion, the angular rotation rate formula_77 (degrees per second) is obtained by integrating α over time:
formula_78
and the angular rotation formula_79 is the time integral of the rate:
formula_80
The three principal moments of inertia "I"x, "I"y, and "I"z are taken about the roll, pitch, and yaw axes, which pass through the vehicle's center of mass.
The control torque for a launch vehicle is sometimes provided aerodynamically by movable fins, and usually by mounting the engines on gimbals to vector the thrust around the center of mass. For spacecraft, which operate in the absence of aerodynamic forces, torque is frequently applied by a reaction control system, a set of thrusters located about the vehicle. The thrusters are fired, either manually or under automatic guidance control, in short bursts to achieve the desired rate of rotation, and then fired in the opposite direction to halt rotation at the desired position. The torque about a specific axis is:
formula_81
where r is the thruster's distance from the center of mass, and F is the thrust of an individual thruster (only the component of F perpendicular to r is included).
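A toy example (with invented thruster size, moment arm, moment of inertia, and pulse length) of turning a thruster couple into an angular acceleration, rate, and angle using the relations above:

```python
import math

def spin_up(thrust, arm, inertia, burn_time):
    """Torque from a two-thruster couple, angular acceleration alpha = tau / I,
    rate omega = alpha * t, and angle theta = 0.5 * alpha * t**2 during the
    pulse, for a single axis (results converted to degrees)."""
    tau = 2.0 * thrust * arm                   # two thrusters firing as a couple
    alpha = tau / inertia                      # rad/s^2
    omega = alpha * burn_time                  # rad/s after the pulse
    theta = 0.5 * alpha * burn_time**2         # rad turned during the pulse
    return math.degrees(omega), math.degrees(theta)

# Hypothetical: 10 N thrusters on 2 m arms, I = 1500 kg*m^2, 0.5 s pulse
rate, turned = spin_up(thrust=10.0, arm=2.0, inertia=1500.0, burn_time=0.5)
print(f"rate = {rate:.2f} deg/s, turned = {turned:.2f} deg during the pulse")
```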
For situations where propellant consumption may be a problem (such as long-duration satellites or space stations), alternative means may be used to provide the control torque, such as reaction wheels or control moment gyroscopes.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbf{F} = m\\mathbf{a},"
},
{
"math_id": 1,
"text": "F = \\dot{m}\\;v_{e} = \\dot{m}\\;v_\\text{e-opt} + A_{e}(p_{e} - p_\\text{amb})"
},
{
"math_id": 2,
"text": "\\dot{m}"
},
{
"math_id": 3,
"text": "v_{e}"
},
{
"math_id": 4,
"text": "v_\\text{e-opt}"
},
{
"math_id": 5,
"text": "A_{e}"
},
{
"math_id": 6,
"text": "p_{e}"
},
{
"math_id": 7,
"text": "p_\\text{amb}"
},
{
"math_id": 8,
"text": "v_e = g_0 \\left(I_\\text{sp-vac} - \\frac{A_{e}\\, p_\\text{amb}}{\\dot{m}}\\right) "
},
{
"math_id": 9,
"text": "I_\\text{sp-vac}"
},
{
"math_id": 10,
"text": "g_0"
},
{
"math_id": 11,
"text": "\\Delta v\\ = v_e \\ln \\frac {m_0} {m_1}"
},
{
"math_id": 12,
"text": "m_0"
},
{
"math_id": 13,
"text": "m_1"
},
{
"math_id": 14,
"text": "v_e"
},
{
"math_id": 15,
"text": "\\Delta v "
},
{
"math_id": 16,
"text": "\\mathbf{L} = C_L q A_\\text{ref}"
},
{
"math_id": 17,
"text": "\\mathbf{D} = C_D q A_\\text{ref}"
},
{
"math_id": 18,
"text": "\\mathbf{W} = m \\cdot g"
},
{
"math_id": 19,
"text": "W"
},
{
"math_id": 20,
"text": "m"
},
{
"math_id": 21,
"text": "r"
},
{
"math_id": 22,
"text": "r_0"
},
{
"math_id": 23,
"text": "g = g_0\\left(\\frac{r_0}r\\right)^2"
},
{
"math_id": 24,
"text": "v"
},
{
"math_id": 25,
"text": "\\theta"
},
{
"math_id": 26,
"text": "\\begin{align}\n\\dot{v} &= \\frac{F\\cos\\alpha} m - \\frac D m - g\\cos\\theta \\\\\n\\dot{\\theta} &= \\frac{ F\\sin\\alpha }{mv} + \\frac L {mv} + \\left( \\frac g v - \\frac v r \\right) \\sin\\theta,\n\\end{align}"
},
{
"math_id": 27,
"text": "\\begin{align}\nv &= \\int_{t_0}^t \\dot{v}\\, dt \\\\\n\\theta &= \\int_{t_0}^t \\dot{\\theta}\\, dt\n\\end{align}"
},
{
"math_id": 28,
"text": "h"
},
{
"math_id": 29,
"text": "s"
},
{
"math_id": 30,
"text": "\\begin{align}\nh &= \\int_{t_0}^t v \\cos \\theta\\, dt \\\\\nr &= r_0 + h \\\\\ns &= r_0 \\int_{t_0}^t \\frac v r \\sin \\theta\\, dt\n\\end{align}"
},
{
"math_id": 31,
"text": "v_s = \\sqrt{v^2 + 2\\omega r v \\cos\\varphi \\sin\\theta \\sin A_z + (\\omega r \\cos\\theta)^2},"
},
{
"math_id": 32,
"text": "\\theta_s = \\arccos\\left(\\frac{ v \\cos\\theta}{v_s} \\right) "
},
{
"math_id": 33,
"text": "a = \\frac{r_a + r_p} 2 "
},
{
"math_id": 34,
"text": "e = \\frac{r_a} a - 1 "
},
{
"math_id": 35,
"text": " T = 2 \\pi \\sqrt{\\frac{a^3} \\mu}"
},
{
"math_id": 36,
"text": "\\mu"
},
{
"math_id": 37,
"text": "\\nu"
},
{
"math_id": 38,
"text": "p = a(1-e^2)\\,"
},
{
"math_id": 39,
"text": "r = \\frac p {1+e\\cos\\nu}"
},
{
"math_id": 40,
"text": "v = \\sqrt{\\mu\\left(\\frac 2 r - \\frac 1 a\\right)}"
},
{
"math_id": 41,
"text": "v_c = \\sqrt{\\frac\\mu r}"
},
{
"math_id": 42,
"text": "v_p = \\sqrt{\\frac{\\mu(1+e)}{a(1-e)}}"
},
{
"math_id": 43,
"text": "v_a = \\sqrt{\\frac{\\mu(1-e)}{a(1+e)}}\\,"
},
{
"math_id": 44,
"text": "v_e = \\sqrt{\\frac{2\\mu}{r_p}}"
},
{
"math_id": 45,
"text": "h = r v\\cos\\varphi,"
},
{
"math_id": 46,
"text": "\\varphi = \\arccos\\left(\\frac{r_p v_p}{r v}\\right)"
},
{
"math_id": 47,
"text": "\\nu = r_p v_p \\int_{t_p}^t \\frac 1 {r^2} \\, dt"
},
{
"math_id": 48,
"text": "t = \\frac 1 {r_p v_p} \\int_0^\\nu r^2 \\, d\\nu"
},
{
"math_id": 49,
"text": "\\Delta v\\ = v_c - v_p"
},
{
"math_id": 50,
"text": "\\Delta v\\ = v_c - v_a"
},
{
"math_id": 51,
"text": "v_1"
},
{
"math_id": 52,
"text": "\\Delta v_1\\ = v_p - v_1"
},
{
"math_id": 53,
"text": "\\Delta v_2\\ = v_2 - v_a"
},
{
"math_id": 54,
"text": "r_b"
},
{
"math_id": 55,
"text": "\\Delta v_1\\ = {v_p}_1 - v_1"
},
{
"math_id": 56,
"text": "\\Delta v_2\\ = {v_a}_2 - {v_a}_1"
},
{
"math_id": 57,
"text": "\\Delta v_3\\ = v_2 - {v_p}_2"
},
{
"math_id": 58,
"text": "\\Delta h = 2h\\sin\\left(\\frac {|\\Delta i|}{2} \\right)"
},
{
"math_id": 59,
"text": "\\Delta v = \\frac {2h\\sin\\frac {|\\Delta i|}{2}}{r}"
},
{
"math_id": 60,
"text": "\\Delta v = 2v\\cos \\varphi\\sin\\left(\\frac {\\left|\\Delta i\\right|} 2 \\right)"
},
{
"math_id": 61,
"text": "\\Delta v = v \\cos(\\varphi) \\left|\\Delta i\\right|"
},
{
"math_id": 62,
"text": "r_\\text{SOI} = D\\left(\\frac{m_s}{m_c}\\right)^{2/5},"
},
{
"math_id": 63,
"text": "\\begin{align}\n\\dot{v} &= \\frac{F\\cos\\alpha}m - g\\cos\\theta\\\\\n\\dot{\\theta} &= \\frac{F\\sin\\alpha}{mv} + \\left(\\frac g v - \\frac v r\\right) \\sin\\theta, \\\\\nv &= \\int_{t_0}^t \\dot{v}\\, dt \\\\\n\\theta &= \\int_{t_0}^t \\dot{\\theta}\\, dt\n\\end{align}"
},
{
"math_id": 64,
"text": "\\begin{align}\nh &= \\int_{t_0}^t v \\cos \\theta\\, dt \\\\\nr &= r_0+h \\\\\ns &= r_0 \\int_{t_0}^t \\frac v r \\sin \\theta\\, dt\n\\end{align}"
},
{
"math_id": 65,
"text": "v_p = \\sqrt{\\frac{2\\mu}{r_p} + v_\\infty^2}\\,"
},
{
"math_id": 66,
"text": "\\delta = \\arcsin\\frac 1 e"
},
{
"math_id": 67,
"text": "e = \\sqrt{1+\\frac{2\\varepsilon h^2}{\\mu^2}},"
},
{
"math_id": 68,
"text": "h = r_p v_p,"
},
{
"math_id": 69,
"text": "\\varepsilon = \\frac{v^2}2 - \\frac \\mu r\\,"
},
{
"math_id": 70,
"text": "p = r_p(1 + e),\\,"
},
{
"math_id": 71,
"text": "r = \\frac{r_p(1 + e)}{1+e\\cos\\nu}\\,"
},
{
"math_id": 72,
"text": "v = \\sqrt{\\mu\\left (\\frac{2}{r}-\\frac{1-e^2}{r_p(1+e)}\\right)}\\,"
},
{
"math_id": 73,
"text": "\\boldsymbol{\\tau}_x = I_x \\boldsymbol{\\alpha}_x,"
},
{
"math_id": 74,
"text": "\\boldsymbol{\\tau}_x"
},
{
"math_id": 75,
"text": "\\alpha_x"
},
{
"math_id": 76,
"text": "\\boldsymbol{\\alpha}_x = \\tfrac{180}{\\pi} \\boldsymbol{\\tau}_x/I_x,"
},
{
"math_id": 77,
"text": "\\boldsymbol{\\omega}_x"
},
{
"math_id": 78,
"text": "{\\omega_x} = \\int_{t_0}^t {\\alpha_x} dt"
},
{
"math_id": 79,
"text": "\\boldsymbol{\\theta}_x"
},
{
"math_id": 80,
"text": "\\theta_x = \\int_{t_0}^t {\\omega_x} dt"
},
{
"math_id": 81,
"text": "\\boldsymbol{\\tau} = \\sum_{i=1}^N (\\mathbf{r}_i \\times \\mathbf{F}_i ), "
}
] |
https://en.wikipedia.org/wiki?curid=1559922
|
1560073
|
Ivan Pervushin
|
Ivan Mikheevich Pervushin (sometimes transliterated as Pervusin or Pervouchine; 1827–1900) was a Russian clergyman and mathematician of the second half of the 19th century, known for his achievements in number theory. He discovered the ninth perfect number and its odd prime factor, the ninth Mersenne prime. He also proved that two Fermat numbers, the 12th and 23rd, are composite.
A contemporary of Pervushin's, writer A. D. Nosilov, wrote: "... this is the modest unknown worker of science ... All of his spacious study is filled up with the different mathematical books, ... here are the books of famous mathematicians: Chebyshev, Legendre, Riemann; not including all modern mathematical publications, which were sent to him by Russian and foreign scientific and mathematical societies. It seemed I was not in a study of the village priest, but in a study of an old mathematics professor ... Besides being a mathematician, he is also a statistician, a meteorologist, and a correspondent".
Life.
Ivan Pervushin was born on 27 January [O.S. 15 January] 1827 in Lysva, Permsky Uyezd, Perm Governorate, a district in the east of European Russia. He claimed his birthplace to be the town of Lysva (where his grandfather, John Pervushin, was a priest), but other sources suggest Pashii, in Gornozavodsk. However, recently found archival parish registers of 1827 from the Lysva church indicate that he was born in Lysva. He graduated from the Kazan clerical academy in 1852. Upon graduation, Pervushin was required to become a priest; he stayed for some time in Perm, then moved to the remote village of Zamaraevo, some 150 miles from Ekaterinburg, where he lived for 25 years.
In Zamaraevo, Pervushin founded a rural school in 1859.
He moved to the nearby town of Shadrinsk in 1883, where he published an article that ridiculed the local government. As a punishment, he was exiled to the village of Mehonskoe in 1887.
Ivan Pervushin died on 30 June [O.S. 17 June] 1900 in Mehonskoe at the age of 73.
Number theory.
The priest's job provided for Pervushin's life and left him plenty of free time to spend on mathematics. Pervushin was particularly interested in number theory. In 1877 and in the beginning of 1878 he presented two papers to the Russian Academy of Sciences. In these papers, he proved that the 12th and 23rd Fermat numbers are composite:
formula_0 is divisible by formula_1
and
formula_2 is divisible by formula_3
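These two divisibility claims are easy to verify today with modular exponentiation; the following short check is a modern illustration, not Pervushin's method.

```python
# p divides F_n = 2**(2**n) + 1 exactly when 2**(2**n) ≡ -1 (mod p)
for n, p in [(12, 7 * 2**14 + 1), (23, 5 * 2**25 + 1)]:
    print(n, p, pow(2, 2**n, p) == p - 1)    # fast modular exponentiation; both True
```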
In 1883 Pervushin demonstrated that the number
formula_4
is a Mersenne prime, and that correspondingly
formula_5
is a perfect number. At the time, these were the second largest known prime number, and the second largest known perfect number, after formula_6 and formula_7, proved prime and perfect by Édouard Lucas seven years earlier. They remained the second largest until 1911, when Ralph Ernest Powers proved that formula_8 is prime and formula_9 is perfect.
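The primality of formula_4 can likewise be confirmed with the modern Lucas–Lehmer test, shown below as an illustration (not necessarily the method Pervushin himself used).

```python
def lucas_lehmer(p):
    """2**p - 1 is prime iff s_(p-2) ≡ 0 (mod 2**p - 1), with s_0 = 4 and
    s_(k+1) = s_k**2 - 2 (valid for odd prime exponents p)."""
    m = 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print(lucas_lehmer(61))   # True: 2**61 - 1 is prime
```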
Pervushin was a contributor to the International World Congress of Mathematicians of 1893, a part of the World's Columbian Exposition in Chicago that became a precursor to the later International Congresses of Mathematicians. However, he did not attend.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "2^{2^{12}} + 1"
},
{
"math_id": 1,
"text": "7\\times2^{14}+1=114689 "
},
{
"math_id": 2,
"text": "2^{2^{23}} + 1"
},
{
"math_id": 3,
"text": "5\\times2^{25}+1=167772161."
},
{
"math_id": 4,
"text": "2^{61}-1 = 2305843009213693951 "
},
{
"math_id": 5,
"text": "2^{60}(2^{61}-1) = 2658455991569831744654692615953842176 "
},
{
"math_id": 6,
"text": "2^{127}-1"
},
{
"math_id": 7,
"text": "2^{126}(2^{127}-1)"
},
{
"math_id": 8,
"text": "2^{89}-1"
},
{
"math_id": 9,
"text": "2^{88}(2^{89}-1)"
}
] |
https://en.wikipedia.org/wiki?curid=1560073
|
1560090
|
Active-set method
|
In mathematical optimization, the active-set method is an algorithm used to identify the active constraints in a set of inequality constraints. The active constraints are then expressed as equality constraints, thereby transforming an inequality-constrained problem into a simpler equality-constrained subproblem.
An optimization problem is defined using an objective function to minimize or maximize, and a set of constraints
formula_0
that define the feasible region, that is, the set of all "x" to search for the optimal solution. Given a point formula_1 in the feasible region, a constraint
formula_2
is called active at formula_3 if formula_4, and inactive at formula_3 if formula_5 Equality constraints are always active. The active set at formula_3 is made up of those constraints formula_6 that are active at the current point.
The active set is particularly important in optimization theory, as it determines which constraints will influence the final result of optimization. For example, in solving the linear programming problem, the active set gives the hyperplanes that intersect at the solution point. In quadratic programming, as the solution is not necessarily on one of the edges of the bounding polygon, an estimation of the active set gives us a subset of inequalities to watch while searching for the solution, which reduces the complexity of the search.
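For illustration only, identifying the active set at a point amounts to checking which inequality constraints hold with equality, to within a tolerance; the constraint functions and tolerance below are made-up examples.

```python
def active_set(constraints, x, tol=1e-8):
    """Indices i with g_i(x) = 0 (within tol) among constraints g_i(x) >= 0."""
    return [i for i, g in enumerate(constraints) if abs(g(x)) <= tol]

# Feasible region 0 <= x <= 1; at x = 1 only the second constraint is active
g = [lambda x: x, lambda x: 1.0 - x]
print(active_set(g, 1.0))    # [1]
```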
Active-set methods.
In general an active-set algorithm has the following structure:
Find a feasible starting point
repeat until "optimal enough"
"solve" the equality problem defined by the active set (approximately)
"compute" the Lagrange multipliers of the active set
"remove" a subset of the constraints with negative Lagrange multipliers
"search" for infeasible constraints
end repeat
Methods that can be described as active-set methods include:
Performance.
Consider the problem of Linearly Constrained Convex Quadratic Programming. Under reasonable assumptions (the problem is feasible, the system of constraints is regular at every point, and the quadratic objective is strongly convex), the active-set method terminates after finitely many steps, and yields a global solution to the problem. Theoretically, the active-set method may perform a number of iterations exponential in "m", the number of constraints, like the simplex method. However, its practical behaviour is typically much better.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "g_1(x) \\ge 0, \\dots, g_k(x) \\ge 0"
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "g_i(x) \\ge 0"
},
{
"math_id": 3,
"text": "x_0"
},
{
"math_id": 4,
"text": "g_i(x_0) = 0"
},
{
"math_id": 5,
"text": "g_i(x_0) > 0."
},
{
"math_id": 6,
"text": "g_i(x_0)"
}
] |
https://en.wikipedia.org/wiki?curid=1560090
|
15601327
|
Deformation (meteorology)
|
Weather phenomenon
Deformation is the rate of change of shape of fluid bodies. Meteorologically, this quantity is very important in the formation of atmospheric fronts, in the explanation of cloud shapes, and in the diffusion of materials and properties.
Equations.
The deformation of horizontal wind is defined as formula_0, where formula_1 and formula_2 are combinations of the spatial derivatives of the wind components "u" and "v". Because these derivatives vary greatly with the rotation of the coordinate system, so do formula_3 and formula_4.
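As a hedged numerical illustration (not part of the original article), the total deformation can be evaluated on a gridded wind field with finite differences; the grid, the wind field, and the scaling below are invented, and NumPy's gradient function supplies the derivatives.

```python
import numpy as np

def total_deformation(u, v, dx, dy):
    """Total deformation sqrt(A**2 + B**2) of a horizontal wind field (u, v)
    on a regular grid, with A = dv/dx + du/dy and B = du/dx - dv/dy."""
    du_dy, du_dx = np.gradient(u, dy, dx)    # derivatives along rows, then columns
    dv_dy, dv_dx = np.gradient(v, dy, dx)
    A = dv_dx + du_dy
    B = du_dx - dv_dy
    return np.sqrt(A**2 + B**2)

# Hypothetical pure-deformation flow u = x, v = -y on a small grid
x = np.linspace(-1e5, 1e5, 21)
y = np.linspace(-1e5, 1e5, 21)
X, Y = np.meshgrid(x, y)
u, v = 1e-4 * X, -1e-4 * Y                   # wind in m/s, gradients in s^-1
print(total_deformation(u, v, dx=x[1] - x[0], dy=y[1] - y[0]).max())   # ~2e-4 s^-1
```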
Stretching direction.
The deformation elements formula_3 and formula_4 (above) can be used to find the "direction of the dilatation axis", the line along which the material elements stretch (also known as the "stretching direction"). Several flow patterns are characteristic of large deformation: confluence, diffluence, and shear flow. Confluence, also known as "stretching", is the elongating of a fluid body along the flow (streamline convergence). Diffluence, also known as "shearing", is the elongating of a fluid body normal to the flow (streamline divergence).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\operatorname{def} \\mathbf{V} = \\sqrt{A^2 + B^2}"
},
{
"math_id": 1,
"text": "\\ A = \\frac{\\partial v}{\\partial x} + \\frac{\\partial u}{\\partial y}"
},
{
"math_id": 2,
"text": "\\ B = \\frac{\\partial u}{\\partial x} - \\frac{\\partial v}{\\partial y}"
},
{
"math_id": 3,
"text": "\\ A"
},
{
"math_id": 4,
"text": "\\ B"
}
] |
https://en.wikipedia.org/wiki?curid=15601327
|
15602155
|
Log-spectral distance
|
The log-spectral distance (LSD), also referred to as log-spectral distortion or root mean square log-spectral distance, is a distance measure between two spectra. The log-spectral distance between spectra formula_0 and formula_1 is defined as the p-norm:
formula_2 where formula_0 and formula_1 are power spectra.
Unlike the Itakura–Saito distance, the log-spectral distance is symmetric.
In speech coding, log spectral distortion for a given frame is defined as the root mean square difference between the original LPC log power spectrum and the quantized or interpolated LPC log power spectrum. Usually the average of spectral distortion over a large number of frames is calculated and that is used as the measure of performance of quantization or interpolation.
Meaning.
When measuring the distortion between signals, the scale or temporality/spatiality of the signals can have different levels of significance to the distortion measures. To incorporate the proper level of significance, the signals can be transformed into a different domain.
When the signals are transformed into the spectral domain with transformation methods such as Fourier transform and DCT, the spectral distance is the measure to compare the transformed signals. LSD incorporates the logarithmic characteristics of the power spectra, and it becomes effective when the processing task of the power spectrum also has logarithmic characteristics, "e.g." human listening to the sound signal with different levels of loudness.
Moreover, by Parseval's theorem, the LSD is equal to the cepstral distance (the distance between the signals' cepstra) when the same p-number is used.
Other representations.
As LSD is in the form of p-norm, it can be represented with different p-numbers and log scales.
For instance, when it is expressed in dB with L2 norm, it is defined as:
formula_3.
When it is represented in the discrete space, it is defined as:
formula_4 where formula_5 and formula_6 are power spectra in discrete space.
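A small sketch of the discrete form, using the natural logarithm and |·|^p so that the p = 2 case reproduces the root-mean-square definition; the example spectra are made up for illustration.

```python
import numpy as np

def log_spectral_distance(P, P_hat, p=2):
    """Discrete LSD between two power spectra:
    D = ((1/N) * sum |log P - log P_hat|**p)**(1/p)."""
    P, P_hat = np.asarray(P, float), np.asarray(P_hat, float)
    diff = np.log(P) - np.log(P_hat)
    return (np.mean(np.abs(diff) ** p)) ** (1.0 / p)

# Hypothetical example: two flat spectra differing by a factor of 2 in one bin
P = np.array([1.0, 1.0, 1.0, 1.0])
P_hat = np.array([1.0, 2.0, 1.0, 1.0])
print(log_spectral_distance(P, P_hat))   # ~0.347 = ln(2) / sqrt(4)
```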
|
[
{
"math_id": 0,
"text": "P\\left(\\omega\\right)"
},
{
"math_id": 1,
"text": "\\hat{P}\\left(\\omega\\right)"
},
{
"math_id": 2,
"text": "D_{LS}={\\left\\{ \\frac{1}{2\\pi} \\int_{-\\pi}^\\pi \\left[ \\log P(\\omega) - \\log \\hat{P}(\\omega) \\right]^p \\,d\\omega \\right\\} }^{1/p},"
},
{
"math_id": 3,
"text": "D_{LS}=\\sqrt{\\frac{1}{2\\pi} \\int_{-\\pi}^\\pi \\left[ 10\\log_{10} \\frac{P(\\omega)}{\\hat{P}(\\omega)} \\right]^2 \\,d\\omega }"
},
{
"math_id": 4,
"text": "D_{LS}={\\left\\{ \\frac{1}{N} \\sum_{n=1}^N \\left[ \\log P(n) - \\log \\hat{P}(n) \\right]^p \\right\\} }^{1/p} ,"
},
{
"math_id": 5,
"text": "P\\left(n\\right)"
},
{
"math_id": 6,
"text": "\\hat{P}\\left(n\\right)"
}
] |
https://en.wikipedia.org/wiki?curid=15602155
|
15606646
|
Exceptional inverse image functor
|
In mathematics, more specifically sheaf theory, a branch of topology and algebraic geometry, the exceptional inverse image functor is the fourth and most sophisticated in a series of image functors for sheaves. It is needed to express Verdier duality in its most general form.
Definition.
Let "f": "X" → "Y" be a continuous map of topological spaces or a morphism of schemes. Then the exceptional inverse image is a functor
R"f"!: D("Y") → D("X")
where D(–) denotes the derived category of sheaves of abelian groups or modules over a fixed ring.
It is defined to be the right adjoint of the total derived functor R"f"! of the direct image with compact support. Its existence follows from certain properties of R"f"! and general theorems about existence of adjoint functors, as does the unicity.
The notation R"f"! is an abuse of notation insofar as there is in general no functor "f"! whose derived functor would be R"f"!.
"f"!("F") := "f"∗ "G",
where "G" is the subsheaf of "F" of which the sections on some open subset "U" of "Y" are the sections "s" ∈ "F"("U") whose support is contained in "X". The functor "f"! is left exact, and the above R"f"!, whose existence is guaranteed by abstract nonsense, is indeed the derived functor of this "f"!. Moreover "f"! is right adjoint to "f"!, too.
Duality of the exceptional inverse image functor.
Let formula_0 be a smooth manifold of dimension formula_1 and let formula_2 be the unique map which maps everything to one point. For a ring formula_3, one finds that formula_4 is the shifted formula_3-orientation sheaf.
On the other hand, let formula_0 be a smooth formula_5-variety of dimension formula_1. If formula_6 denotes the structure morphism then formula_7 is the shifted canonical sheaf on formula_0.
Moreover, let formula_0 be a smooth formula_5-variety of dimension formula_1 and formula_8 a prime invertible in formula_5. Then formula_9 where formula_10 denotes the Tate twist.
Recalling the definition of compactly supported cohomology as the lower-shriek pushforward, and noting that in the formula below the last formula_11 denotes the constant sheaf on formula_0 while the others denote the constant sheaf on formula_12, for formula_13 one has
formula_14
and so the above computation furnishes the formula_8-adic Poincaré duality
formula_15
from the repeated application of the adjunction condition.
|
[
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "d"
},
{
"math_id": 2,
"text": "f: X \\rightarrow *"
},
{
"math_id": 3,
"text": "\\Lambda"
},
{
"math_id": 4,
"text": "f^{!} \\Lambda=\\omega_{X, \\Lambda}[d]"
},
{
"math_id": 5,
"text": "k"
},
{
"math_id": 6,
"text": "f: X \\rightarrow \\operatorname{Spec}(k)"
},
{
"math_id": 7,
"text": "f^{!} k \\cong \\omega_{X}[d]"
},
{
"math_id": 8,
"text": "\\ell"
},
{
"math_id": 9,
"text": "f^{!} \\mathbb{Q}_{\\ell} \\cong \\mathbb{Q}_{\\ell}(d)[2 d]"
},
{
"math_id": 10,
"text": "(d)"
},
{
"math_id": 11,
"text": "\\mathbb{Q}_{\\ell}"
},
{
"math_id": 12,
"text": "*"
},
{
"math_id": 13,
"text": "f:X\\to *"
},
{
"math_id": 14,
"text": "\\mathrm{H}_{c}^{n}(X)^{*} \\cong \\operatorname{Hom}\\left(f_! f^{*} \\mathbb{Q}_{\\ell}[n], \\mathbb{Q}_{\\ell}\\right) \\cong \\operatorname{Hom}\\left(\\mathbb{Q}_{\\ell}, f_{*} f^{!} \\mathbb{Q}_{\\ell}[-n]\\right),"
},
{
"math_id": 15,
"text": "\\mathrm{H}_{c}^{n}\\left(X ; \\mathbb{Q}_{\\ell}\\right)^{*} \\cong \\mathrm{H}^{2 d-n}(X ; \\mathbb{Q}(d))"
}
] |
https://en.wikipedia.org/wiki?curid=15606646
|
1561053
|
Variance swap
|
A variance swap is an over-the-counter financial derivative that allows one to speculate on or hedge risks associated with the magnitude of movement, i.e. volatility, of some underlying product, like an exchange rate, interest rate, or stock index.
One leg of the swap will pay an amount based upon the realized variance of the price changes of the underlying product. Conventionally, these price changes will be daily log returns, based upon the most commonly used closing price. The other leg of the swap will pay a fixed amount, which is the strike, quoted at the deal's inception. Thus the net payoff to the counterparties will be the difference between these two and will be settled in cash at the expiration of the deal, though some cash payments will likely be made along the way by one or the other counterparty to maintain agreed upon margin.
Structure and features.
The features of a variance swap include the variance strike, the realised variance, and the vega notional amount.
The payoff of a variance swap is given as follows:
formula_0
where formula_1 is the variance notional, formula_2 is the annualised realised variance, and formula_3 is the variance strike fixed at inception.
The annualised realised variance is calculated based on a prespecified set of sampling points over the period. It does not always coincide with the classic statistical definition of variance as the contract terms may not subtract the mean. For example, suppose that there are formula_4 observed prices
formula_5
where formula_6
for formula_7 to formula_8. Define formula_9 the natural log returns.
Then
formula_10
where formula_11 is an annualisation factor normally chosen to be approximately the number of sampling points in a year (commonly 252) and formula_12 is set to be the swap's contract life, defined by the number formula_13. It can be seen that subtracting the mean return will decrease the realised variance. If this is done, it is common to use formula_14 as the divisor rather than formula_8, corresponding to an unbiased estimate of the sample variance.
It is market practice to determine the number of contract units as follows:
formula_15
where formula_16 is the corresponding vega notional for a volatility swap. This makes the payoff of a variance swap comparable to that of a volatility swap, another less popular instrument used to trade volatility.
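As an illustrative sketch (with invented prices, strike, and notional, and with volatilities quoted as decimals rather than vol points), the realised-variance leg and the payoff above can be computed as follows.

```python
import numpy as np

def realized_variance(prices, annualization=252):
    """Annualized realized variance (A/n) * sum(log-return**2), no mean subtraction."""
    prices = np.asarray(prices, float)
    r = np.diff(np.log(prices))
    return annualization * np.mean(r ** 2)

def variance_swap_payoff(prices, strike_vol, vega_notional, annualization=252):
    """Cash payoff N_var * (sigma_realized^2 - sigma_strike^2), with the market
    convention N_var = N_vega / (2 * strike_vol)."""
    n_var = vega_notional / (2.0 * strike_vol)
    return n_var * (realized_variance(prices, annualization) - strike_vol ** 2)

# Hypothetical daily closes, a 20% strike, and 100,000 of vega notional
prices = 100 * np.exp(np.cumsum(np.random.default_rng(0).normal(0, 0.012, 60)))
print(round(variance_swap_payoff(prices, strike_vol=0.20, vega_notional=1e5), 2))
```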
Pricing and valuation.
The variance swap may be hedged and hence priced using a portfolio of European call and put options with weights inversely proportional to the square of strike.
Any volatility smile model which prices vanilla options can therefore be used to price the variance swap. For example, using the Heston model, a closed-form solution can be derived for the fair variance swap rate. Care must be taken with the behaviour of the smile model in the wings as this can have a disproportionate effect on the price.
We can derive the payoff of a variance swap using Ito's Lemma. We first assume that the underlying stock is described as follows:
formula_17
Applying Ito's formula, we get:
formula_18
formula_19
Taking integrals, the total variance is:
formula_20
We can see that the total variance consists of a rebalanced hedge of formula_21 and short a log contract. Using a static replication argument, i.e., any twice continuously differentiable contract can be replicated using a bond, a future and infinitely many puts and calls, we can show that a short log contract position is equal to being short a futures contract and a collection of puts and calls:
formula_22
Taking expectations and setting the value of the variance swap equal to zero, we can rearrange the formula to solve for the fair variance swap strike:
formula_23
where:
formula_24 is the initial price of the underlying security,
formula_25 is an arbitrary cutoff,
formula_26 is the strike of each option in the collection of options used.
Often the cutoff formula_27 is chosen to be the current forward price formula_28, in which case the fair variance swap strike can be written in the simpler form:
formula_29
Analytically pricing variance swaps with discrete-sampling.
One might find discrete sampling of the realized variance, say formula_30 as defined earlier, more practical in valuing the variance strike since, in reality, the underlying price can only be observed discretely in time. This is even more persuasive since formula_30 converges in probability to the actual variance as the number of price observations increases.
Suppose that in the risk-neutral world with a martingale measure formula_31, the underlying asset price formula_32 solves the following SDE:
formula_33
where:
Given the payoff at expiry of a variance swap as defined above, formula_40, its expected value at time formula_41, denoted by formula_42, is
formula_43
To avoid an arbitrage opportunity, there should be no cost to enter a swap contract, meaning that formula_42 is zero. Thus, the value of the fair variance strike is simply expressed by
formula_44
which remains to be calculated either by finding its closed-form formula or utilizing numerical methods, like Monte Carlo methods.
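A minimal Monte Carlo sketch of this expectation under the simplest possible assumption of constant volatility (geometric Brownian motion); the parameters are invented, and with constant σ the estimated fair strike should land close to σ².

```python
import numpy as np

def fair_variance_strike_mc(r, sigma, T, n_steps=252, n_paths=20000, seed=1):
    """Monte Carlo estimate of K_var = E[realized variance] under GBM with
    constant volatility sigma and risk-neutral drift r, discrete sampling."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    log_ret = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    realized = np.mean(log_ret**2, axis=1) * (n_steps / T)   # annualized, per path
    return realized.mean()

# With constant sigma = 0.2 the fair strike should be close to 0.04
print(round(fair_variance_strike_mc(r=0.02, sigma=0.2, T=1.0), 5))
```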
Uses.
Many traders find variance swaps interesting or useful for their purity. An alternative way of speculating on volatility is with an option, but if one only has interest in volatility risk, this strategy will require constant delta hedging, so that direction risk of the underlying security is approximately removed. What is more, a replicating portfolio of a variance swap would require an entire strip of options, which would be very costly to execute. Finally, one might often find the need to be regularly rolling this entire strip of options so that it remains centered on the current price of the underlying security.
The advantage of variance swaps is that they provide pure exposure to the volatility of the underlying price, as opposed to call and put options which may carry directional risk (delta). The profit and loss from a variance swap depends directly on the difference between realized and implied volatility.
Another aspect that some speculators may find interesting is that the quoted strike is determined by the implied volatility smile in the options market, whereas the ultimate payout will be based upon actual realized variance. Historically, implied variance has been above realized variance, a phenomenon known as the variance risk premium, creating an opportunity for volatility arbitrage, in this case known as the rolling short variance trade. For the same reason, these swaps can be used to hedge options on realized variance.
Related instruments.
Closely related strategies include straddle, volatility swap, correlation swap, gamma swap, conditional variance swap, corridor variance swap, forward-start variance swap, option on realized variance and correlation trading.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "N_{\\operatorname{var}}(\\sigma_{\\text{realised}}^2-\\sigma_{\\text{strike}}^2)"
},
{
"math_id": 1,
"text": "N_{\\operatorname{var}}"
},
{
"math_id": 2,
"text": "\\sigma_{\\text{realised}}^2"
},
{
"math_id": 3,
"text": "\\sigma_{\\text{strike}}^2"
},
{
"math_id": 4,
"text": "n+1"
},
{
"math_id": 5,
"text": "S_{t_0},S_{t_1}, ..., S_{t_n} "
},
{
"math_id": 6,
"text": "0\\leq t_{i-1}<t_{i}\\leq T"
},
{
"math_id": 7,
"text": "i=1"
},
{
"math_id": 8,
"text": "n"
},
{
"math_id": 9,
"text": "R_{i} = \\ln(S_{t_{i}}/S_{t_{i-1}}),"
},
{
"math_id": 10,
"text": "\\sigma_{\\text{realised}}^2 = \\frac{A}{n} \\sum_{i=1}^n R_i^2 "
},
{
"math_id": 11,
"text": "A"
},
{
"math_id": 12,
"text": "T"
},
{
"math_id": 13,
"text": "n/A"
},
{
"math_id": 14,
"text": "n-1"
},
{
"math_id": 15,
"text": "N_{\\operatorname{var}}=\\frac{N_{\\text{vol}}}{2\\sigma_{\\text{strike}}}"
},
{
"math_id": 16,
"text": "N_{\\text{vol}}"
},
{
"math_id": 17,
"text": " \\frac{dS_t}{S_{t}}\\ = \\mu \\, dt + \\sigma \\, dZ_t "
},
{
"math_id": 18,
"text": " d(\\log S_t) = \\left ( \\mu - \\frac{\\sigma^2}{2}\\ \\right) \\, dt + \\sigma \\, dZ_t "
},
{
"math_id": 19,
"text": " \\frac{dS_t}{S_t}\\ - d(\\log S_t) = \\frac{\\sigma^2}{2}\\ dt "
},
{
"math_id": 20,
"text": " \\text{Variance} = \\frac{1}{T}\\ \\int\\limits_0^T \\sigma^2 \\, dt\\ = \\frac{2}{T}\\ \\left ( \\int\\limits_0^T \\frac{dS_t}{S_t}\\ \\ - \\ln \\left ( \\frac{S_T}{S_0}\\ \\right ) \\right ) "
},
{
"math_id": 21,
"text": " \\frac{1}{S_{t}}\\ "
},
{
"math_id": 22,
"text": " -\\ln \\left ( \\frac{S_T}{S^{*}}\\ \\right ) = -\\frac{S_T-S^{*}}{S^{*}}\\ + \\int\\limits_{K \\le S^{*} } (K-S_T)^{+} \\frac{dK}{K^2}\\ + \\int\\limits_{K \\ge S^{*} } (S_T-K)^{+} \\frac{dK}{K^2}\\ "
},
{
"math_id": 23,
"text": " K_\\text{var} = \\frac{2}{T}\\ \\left ( rT- \\left (\\frac{S_{0}}{S^{*}}\\ e^{rT} -1 \\right ) - \\ln\\left ( \\frac{S^{*}}{S_0} \\ \\right ) + e^{rT} \\int\\limits_0^{S^{*}} \\frac{1}{K^2}\\ P(K)\\, dK + e^{rT} \\int\\limits_{S^{*}}^\\infty \\frac{1}{K^2} C(K) \\, dK \\right )"
},
{
"math_id": 24,
"text": " S_0 "
},
{
"math_id": 25,
"text": " S^{*}>0 "
},
{
"math_id": 26,
"text": " K "
},
{
"math_id": 27,
"text": "S^{*}"
},
{
"math_id": 28,
"text": " S^{*} = F_0 = S_0e^{rT} "
},
{
"math_id": 29,
"text": " K_{var} = \\frac{2e^{rT}}{T} \\ \\left ( \\int\\limits_0^{F_0} \\frac{1}{K^2}\\ P(K) \\, dK + \\int\\limits_{F_0}^\\infty \\frac{1}{K^2}\\ C(K) \\, dK \\right )"
},
{
"math_id": 30,
"text": "\\sigma^2_{\\text{realized}}"
},
{
"math_id": 31,
"text": "\\mathbb{Q}"
},
{
"math_id": 32,
"text": "S=(S_t)_{0\\leq t \\leq T}"
},
{
"math_id": 33,
"text": "\\frac{dS_t}{S_t}=r(t) \\, dt+\\sigma(t) \\, dW_t, \\;\\; S_0>0"
},
{
"math_id": 34,
"text": "r(t)\\in\\mathbb{R}"
},
{
"math_id": 35,
"text": "\\sigma(t)>0"
},
{
"math_id": 36,
"text": "W=(W_t)_{0\\leq t \\leq T}"
},
{
"math_id": 37,
"text": "(\\Omega,\\mathcal{F},\\mathbb{F},\\mathbb{Q})"
},
{
"math_id": 38,
"text": "\\mathbb{F}=(\\mathcal{F}_t)_{0\\leq t \\leq T}"
},
{
"math_id": 39,
"text": "W"
},
{
"math_id": 40,
"text": " (\\sigma^2_{\\text{realized}} - \\sigma^2_{\\text{strike}})\\times N_{\\text{var}} "
},
{
"math_id": 41,
"text": "t_0"
},
{
"math_id": 42,
"text": "V_{t_0}"
},
{
"math_id": 43,
"text": "V_{t_0}=e^{\\int^T_{t_0}r(s)ds}\\mathbb{E}^{\\mathbb{Q}}[\\sigma^2_{\\text{realized}} - \\sigma^2_{\\text{strike}} \\mid \\mathcal{F}_ {t_0}] \\times N_{\\text{var}}."
},
{
"math_id": 44,
"text": "\\sigma^2_{\\text{strike}}=\\mathbb{E}^{\\mathbb{Q}}[\\sigma^2_{\\text{realized}} \\mid \\mathcal{F}_{t_0}],"
}
] |
https://en.wikipedia.org/wiki?curid=1561053
|
15611465
|
Magnetic Prandtl number
|
The Magnetic Prandtl number (Prm) is a dimensionless quantity occurring in magnetohydrodynamics which approximates the ratio of momentum diffusivity (viscosity) and magnetic diffusivity. It is defined as:
formula_0
where:
At the base of the Sun's convection zone the magnetic Prandtl number is approximately 10⁻², and in the interiors of planets and in liquid-metal laboratory dynamos it is approximately 10⁻⁵.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathrm{Pr}_\\mathrm{m} = \\frac{\\mathrm{Re_m}}{\\mathrm{Re}} = \\frac{\\nu}{\\eta} = \\frac{\\mbox{viscous diffusion rate}}{\\mbox{magnetic diffusion rate}}"
}
] |
https://en.wikipedia.org/wiki?curid=15611465
|
15613287
|
Parker vector
|
In mathematics, especially the field of group theory, the Parker vector is an integer vector that describes a permutation group in terms of the cycle structure of its elements, defined by Richard A. Parker.
Definition.
The Parker vector "P" of a permutation group "G" acting on a set of size "n", is the vector whose "k"th component for "k" = 1, ..., "n" is given by:
formula_0 where "c""k"("g") is the number of "k"-cycles in the cycle decomposition of "g".
Examples.
For the group of even permutations on three elements, the Parker vector is (1,0,2). The group of all permutations on three elements has Parker vector (1,1,1). For any of the subgroups of the above with just two elements, the Parker vector is (2,1,0). The trivial subgroup has Parker vector (3,0,0).
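These values can be reproduced with a short computation; the sketch below builds the groups explicitly for n = 3 and is illustrative only.

```python
from itertools import permutations

def cycle_type(perm):
    """Number of k-cycles, for each k, in the cycle decomposition of a
    permutation given as a tuple p with p[i] = image of i."""
    n, seen, counts = len(perm), set(), {}
    for start in range(n):
        if start not in seen:
            length, j = 0, start
            while j not in seen:
                seen.add(j)
                j = perm[j]
                length += 1
            counts[length] = counts.get(length, 0) + 1
    return counts

def parker_vector(group, n):
    """P_k = (k / |G|) * sum over g of c_k(g), for k = 1..n."""
    totals = [0] * (n + 1)
    for g in group:
        for k, c in cycle_type(g).items():
            totals[k] += c
    return [k * totals[k] / len(group) for k in range(1, n + 1)]

S3 = list(permutations(range(3)))                        # all permutations of 3 points
A3 = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]                   # the even permutations
print(parker_vector(S3, 3), parker_vector(A3, 3))        # [1.0, 1.0, 1.0] [1.0, 0.0, 2.0]
```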
|
[
{
"math_id": 0,
"text": "P_k = \\frac{k}{|G|} \\sum_{g \\in G} c_k(g)"
}
] |
https://en.wikipedia.org/wiki?curid=15613287
|
1561411
|
Color balance
|
Adjustment of color intensities in photography
In photography and image processing, color balance is the global adjustment of the intensities of the colors (typically red, green, and blue primary colors). An important goal of this adjustment is to render specific colors – particularly neutral colors like white or grey – correctly. Hence, the general method is sometimes called gray balance, neutral balance, or white balance. Color balance changes the overall mixture of colors in an image and is used for color correction. Generalized versions of color balance are used to correct colors other than neutrals or to deliberately change them for effect. White balance is one of the most common kinds of balancing; it adjusts colors so that a white object (such as a piece of paper or a wall) appears white rather than tinted with another color.
Image data acquired by sensors – either film or electronic image sensors – must be transformed from the acquired values to new values that are appropriate for color reproduction or display. Several aspects of the acquisition and display process make such color correction essential – including that the acquisition sensors do not match the sensors in the human eye, that the properties of the display medium must be accounted for, and that the ambient viewing conditions of the acquisition differ from the display viewing conditions.
The color balance operations in popular image editing applications usually operate directly on the red, green, and blue channel pixel values, without respect to any color sensing or reproduction model. In film photography, color balance is typically achieved by using color correction filters over the lights or on the camera lens.
Generalized color balance.
Sometimes the adjustment to keep neutrals neutral is called "white balance", and the phrase "color balance" refers to the adjustment that in addition makes other colors in a displayed image appear to have the same general appearance as the colors in an original scene. It is particularly important that neutral (gray, neutral, white) colors in a scene appear neutral in the reproduction.
Psychological color balance.
Humans relate to flesh tones more critically than other colors. Trees, grass and sky can all be off without concern, but if human flesh tones are 'off' then the human subject can look sick or dead. To address this critical color balance issue, the tri-color primaries themselves are formulated to "not" balance as a true neutral color. The purpose of this color primary imbalance is to more faithfully reproduce the flesh tones through the entire brightness range.
Illuminant estimation and adaptation.
Most digital cameras have means to select color correction based on the type of scene lighting, using either manual lighting selection, automatic white balance, or custom white balance. The algorithms for these processes perform generalized chromatic adaptation.
Many methods exist for color balancing. Setting a button on a camera is a way for the user to indicate to the processor the nature of the scene lighting. Another option on some cameras is a button which one may press when the camera is pointed at a gray card or other neutral colored object. This captures an image of the ambient light, which enables a digital camera to set the correct color balance for that light.
There is a large literature on how one might estimate the ambient lighting from the camera data and then use this information to transform the image data. A variety of algorithms have been proposed, and the quality of these has been debated. A few examples and examination of the references therein will lead the reader to many others. Examples are Retinex, an artificial neural network or a Bayesian method.
Chromatic colors.
Color balancing an image affects not only the neutrals, but other colors as well. An image that is not color balanced is said to have a color cast, as everything in the image appears to have been shifted towards one color. Color balancing may be thought of in terms of removing this color cast.
Color balance is also related to color constancy. Algorithms and techniques used to attain color constancy are frequently used for color balancing, as well. Color constancy is, in turn, related to chromatic adaptation. Conceptually, color balancing consists of two steps: first, determining the illuminant under which an image was captured; and second, scaling the components (e.g., R, G, and B) of the image or otherwise transforming the components so they conform to the viewing illuminant.
Viggiano found that white balancing in the camera's native RGB color model tended to produce less color inconstancy (i.e., less distortion of the colors) than in monitor RGB for over 4000 hypothetical sets of camera sensitivities. This difference typically amounted to a factor of more than two in favor of camera RGB. This means that it is advantageous to get color balance right at the time an image is captured, rather than edit later on a monitor. If one must color balance later, balancing the raw image data will tend to produce less distortion of chromatic colors than balancing in monitor RGB.
Mathematics of color balance.
Color balancing is sometimes performed on a three-component image (e.g., RGB) using a 3x3 matrix. This type of transformation is appropriate if the image was captured using the wrong white balance setting on a digital camera, or through a color filter.
Scaling monitor R, G, and B.
In principle, one wants to scale all relative luminances in an image so that objects which are believed to be neutral appear so. If, say, a surface with formula_0 was believed to be a white object, and if 255 is the count which corresponds to white, one could multiply all red values by 255/240. Doing analogously for green and blue would result, at least in theory, in a color balanced image. In this type of transformation the 3x3 matrix is a diagonal matrix.
formula_1
where formula_2, formula_3, and formula_4 are the color balanced red, green, and blue components of a pixel in the image; formula_5, formula_6, and formula_7 are the red, green, and blue components of the image before color balancing, and formula_8, formula_9, and formula_10 are the red, green, and blue components of a pixel which is believed to be a white surface in the image before color balancing. This is a simple scaling of the red, green, and blue channels, and is why color balance tools in Photoshop have a white eyedropper tool. It has been demonstrated that performing the white balancing in the phosphor set assumed by sRGB tends to produce large errors in chromatic colors, even though it can render the neutral surfaces perfectly neutral.
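A hedged NumPy sketch of this diagonal scaling; the toy image, the assumed reading of the patch believed to be white, and the clipping policy are invented for the example.

```python
import numpy as np

def white_balance(image, white_pixel):
    """Scale R, G, B so the chosen reference pixel maps to neutral white (255).
    `image` is an (H, W, 3) uint8 array; `white_pixel` is its (R', G', B')."""
    scale = 255.0 / np.asarray(white_pixel, float)      # diagonal matrix entries
    balanced = image.astype(float) * scale              # per-channel multiply
    return np.clip(balanced, 0, 255).astype(np.uint8)

# Hypothetical image with a warm cast; the patch believed to be white reads (240, 235, 210)
img = np.tile(np.array([200, 190, 170], dtype=np.uint8), (4, 4, 1))
print(white_balance(img, white_pixel=(240, 235, 210))[0, 0])   # roughly neutral now
```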
Scaling X, Y, Z.
If the image may be transformed into CIE XYZ tristimulus values, the color balancing may be performed there. This has been termed a "wrong von Kries" transformation. Although it has been demonstrated to offer usually poorer results than balancing in monitor RGB, it is mentioned here as a bridge to other things. Mathematically, one computes:
formula_11
where formula_12, formula_13, and formula_14 are the color-balanced tristimulus values; formula_15, formula_16, and formula_17 are the tristimulus values of the viewing illuminant (the white point to which the image is being transformed to conform to); formula_18, formula_19, and formula_20 are the tristimulus values of an object believed to be white in the un-color-balanced image, and formula_21, formula_22, and formula_23 are the tristimulus values of a pixel in the un-color-balanced image. If the tristimulus values of the monitor primaries are in a matrix formula_24 so that:
formula_25
where formula_26, formula_27, and formula_28 are the un-gamma corrected monitor RGB, one may use:
formula_29
Von Kries's method.
Johannes von Kries, whose theory of rods and three color-sensitive cone types in the retina has survived as the dominant explanation of color sensation for over 100 years, motivated the method of converting color to the LMS color space, representing the effective stimuli for the Long-, Medium-, and Short-wavelength cone types that are modeled as adapting independently. A 3x3 matrix converts RGB or XYZ to LMS, and then the three LMS primary values are scaled to balance the neutral; the color can then be converted back to the desired final color space:
formula_30
where formula_31, formula_32, and formula_33 are the color-balanced LMS cone tristimulus values; formula_34, formula_35, and formula_36 are the tristimulus values of an object believed to be white in the un-color-balanced image, and formula_37, formula_38, and formula_39 are the tristimulus values of a pixel in the un-color-balanced image.
Matrices to convert to LMS space were not specified by von Kries, but can be derived from CIE color matching functions and LMS color matching functions when the latter are specified; matrices can also be found in reference books.
Scaling camera RGB.
By Viggiano's measure, and using his model of gaussian camera spectral sensitivities, most camera RGB spaces performed better than either monitor RGB or XYZ. If the camera's raw RGB values are known, one may use the 3x3 diagonal matrix:
formula_1
and then convert to a working RGB space such as sRGB or Adobe RGB after balancing.
Preferred chromatic adaptation spaces.
Comparisons of images balanced by diagonal transforms in a number of different RGB spaces have identified several such spaces that work better than others, and better than camera or monitor spaces, for chromatic adaptation, as measured by several color appearance models; the systems that performed statistically as well as the best on the majority of the image test sets used were the "Sharp", "Bradford", "CMCCAT", and "ROMM" spaces.
General illuminant adaptation.
The best color matrix for adapting to a change in illuminant is not necessarily a diagonal matrix in a fixed color space. It has long been known that if the space of illuminants can be described as a linear model with "N" basis terms, the proper color transformation will be the weighted sum of "N" fixed linear transformations, not necessarily consistently diagonalizable.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "R=240"
},
{
"math_id": 1,
"text": "\\left[\\begin{array}{c} R \\\\ G \\\\ B \\end{array}\\right]=\\left[\\begin{array}{ccc}255/R'_w & 0 & 0 \\\\ 0 & 255/G'_w & 0 \\\\ 0 & 0 & 255/B'_w\\end{array}\\right]\\left[\\begin{array}{c}R' \\\\ G' \\\\ B' \\end{array}\\right]"
},
{
"math_id": 2,
"text": "R"
},
{
"math_id": 3,
"text": "G"
},
{
"math_id": 4,
"text": "B"
},
{
"math_id": 5,
"text": "R'"
},
{
"math_id": 6,
"text": "G'"
},
{
"math_id": 7,
"text": "B'"
},
{
"math_id": 8,
"text": "R'_w"
},
{
"math_id": 9,
"text": "G'_w"
},
{
"math_id": 10,
"text": "B'_w"
},
{
"math_id": 11,
"text": "\\left[\\begin{array}{c} X \\\\ Y \\\\ Z \\end{array}\\right]=\\left[\\begin{array}{ccc}X_w/X'_w & 0 & 0 \\\\ 0 & Y_w/Y'_w & 0 \\\\ 0 & 0 & Z_w/Z'_w\\end{array}\\right]\\left[\\begin{array}{c}X' \\\\ Y' \\\\ Z' \\end{array}\\right]"
},
{
"math_id": 12,
"text": "X"
},
{
"math_id": 13,
"text": "Y"
},
{
"math_id": 14,
"text": "Z"
},
{
"math_id": 15,
"text": "X_w"
},
{
"math_id": 16,
"text": "Y_w"
},
{
"math_id": 17,
"text": "Z_w"
},
{
"math_id": 18,
"text": "X'_w"
},
{
"math_id": 19,
"text": "Y'_w"
},
{
"math_id": 20,
"text": "Z'_w"
},
{
"math_id": 21,
"text": "X'"
},
{
"math_id": 22,
"text": "Y'"
},
{
"math_id": 23,
"text": "Z'"
},
{
"math_id": 24,
"text": "\\mathbf{P}"
},
{
"math_id": 25,
"text": "\\left[\\begin{array}{c} X \\\\ Y \\\\ Z \\end{array}\\right]=\\mathbf{P}\\left[\\begin{array}{c}L_R \\\\ L_G \\\\ L_B \\end{array}\\right]"
},
{
"math_id": 26,
"text": "L_R"
},
{
"math_id": 27,
"text": "L_G"
},
{
"math_id": 28,
"text": "L_B"
},
{
"math_id": 29,
"text": "\\left[\\begin{array}{c} L_R \\\\ L_G \\\\ L_B \\end{array}\\right]=\\mathbf{P^{-1}}\\left[\\begin{array}{ccc}X_w/X'_w & 0 & 0 \\\\ 0 & Y_w/Y'_w & 0 \\\\ 0 & 0 & Z_w/Z'_w\\end{array}\\right]\\mathbf{P}\\left[\\begin{array}{c}L_{R'} \\\\ L_{G'} \\\\ L_{B'} \\end{array}\\right]"
},
{
"math_id": 30,
"text": "\\left[\\begin{array}{c} L \\\\ M \\\\ S \\end{array}\\right]=\\left[\\begin{array}{ccc}1/L'_w & 0 & 0 \\\\ 0 & 1/M'_w & 0 \\\\ 0 & 0 & 1/S'_w\\end{array}\\right]\\left[\\begin{array}{c}L' \\\\ M' \\\\ S' \\end{array}\\right]"
},
{
"math_id": 31,
"text": "L"
},
{
"math_id": 32,
"text": "M"
},
{
"math_id": 33,
"text": "S"
},
{
"math_id": 34,
"text": "L'_w"
},
{
"math_id": 35,
"text": "M'_w"
},
{
"math_id": 36,
"text": "S'_w"
},
{
"math_id": 37,
"text": "L'"
},
{
"math_id": 38,
"text": "M'"
},
{
"math_id": 39,
"text": "S'"
}
] |
https://en.wikipedia.org/wiki?curid=1561411
|
1561620
|
H. J. Woodall
|
Herbert J. Woodall was a British mathematician, known as the namesake for the Woodall numbers.
In an 1889 publication, Woodall listed his affiliation as the Normal School of Science (now part of the Royal College of Science) in South Kensington. He was an Associate of the Royal College of Science, and taught physics at the Normal School from 1889 to 1892.
Woodall numbers.
A Woodall number is defined to be a number of the form formula_0 If a prime number can be written in this form, it is then called a Woodall prime. The generalized Woodall numbers and generalized Woodall primes substitute any base formula_1 for the base 2.
Woodall first announced his work on factorization in a 1911 publication, acknowledging in it his communication on the subject with Allan J. C. Cunningham. In 1925 Cunningham and Woodall gathered together all that was known about the primality and factorization of the Woodall numbers and the generalized Woodall numbers with base 10, and published a small book of tables. Since then many mathematicians have continued the work of filling in these tables.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n\\,2^n-1."
},
{
"math_id": 1,
"text": "b\\ge 2"
}
] |
https://en.wikipedia.org/wiki?curid=1561620
|
1561792
|
Curie–Weiss law
|
Model of magnetic susceptibility under certain conditions
In magnetism, the Curie–Weiss law describes the magnetic susceptibility χ of a ferromagnet in the paramagnetic region above the Curie temperature:
formula_0
where C is a material-specific Curie constant, T is the absolute temperature, and TC is the Curie temperature, both measured in kelvin. The law predicts a singularity in the susceptibility at T = TC. Below this temperature, the ferromagnet has a spontaneous magnetization. The law is named after Pierre Curie and Pierre Weiss.
Background.
A magnetic moment which is present even in the absence of the external magnetic field is called spontaneous magnetization. Materials with this property are known as ferromagnets, such as iron, nickel, and magnetite. However, when these materials are heated up, at a certain temperature they lose their spontaneous magnetization, and become paramagnetic. This threshold temperature below which a material is ferromagnetic is called the Curie temperature and is different for each material.
The Curie–Weiss law describes the changes in a material's magnetic susceptibility, formula_1, near its Curie temperature. The magnetic susceptibility is the ratio between the material's magnetization and the applied magnetic field.
Limitations.
In many materials, the Curie–Weiss law fails to describe the susceptibility in the immediate vicinity of the Curie point, since it is based on a mean-field approximation. Instead, there is a critical behavior of the form
formula_2
with the critical exponent γ. However, at temperatures T ≫ TC the expression of the Curie–Weiss law still holds true, but with TC replaced by a temperature Θ that is somewhat higher than the actual Curie temperature. Some authors call Θ the Weiss constant to distinguish it from the temperature of the actual Curie point.
Classical approaches to magnetic susceptibility and Bohr–van Leeuwen theorem.
According to the Bohr–van Leeuwen theorem, when statistical mechanics and classical mechanics are applied consistently, the thermal average of the magnetization is always zero: magnetism cannot be explained without quantum mechanics, that is, without taking into account that matter consists of atoms. The semi-classical approaches listed next use a simple atomic model; they are easy to understand and relate to, even though they are not perfectly correct.
The magnetic moment of a free atom is due to the orbital angular momentum and spin of its electrons and nucleus. When the atoms are such that their shells are completely filled, they do not have any net magnetic dipole moment in the absence of an external magnetic field. When such a field is present, it distorts the trajectories (a classical concept) of the electrons so that the applied field is opposed, as predicted by Lenz's law. In other words, the net magnetic dipole induced by the external field points in the opposite direction, and such materials are repelled by it. These are called diamagnetic materials.
Sometimes an atom has a net magnetic dipole moment even in the absence of an external magnetic field, because the contributions of the individual electrons and nucleus to the total angular momentum do not cancel each other. This happens when the shells of the atoms are not completely filled (Hund's rule). A collection of such atoms, however, may not have any net magnetic moment, as these dipoles are not aligned. An external magnetic field may serve to align them to some extent and develop a net magnetic moment per unit volume. Such alignment is temperature dependent, as thermal agitation acts to disorient the dipoles. Such materials are called paramagnetic.
In some materials, the atoms (with net magnetic dipole moments) can interact with each other to align themselves even in the absence of any external magnetic field when the thermal agitation is low enough. Alignment could be parallel (ferromagnetism) or anti-parallel. In the case of anti-parallel, the dipole moments may or may not cancel each other (antiferromagnetism, ferrimagnetism).
Density matrix approach to magnetic susceptibility.
We take a very simple situation in which each atom can be approximated as a two-state system. The thermal energy is so low that the atom is in the ground state. In this ground state, the atom is assumed to have no net orbital angular momentum but only one unpaired electron, which gives it a spin of one half. In the presence of an external magnetic field, the ground state splits into two states with an energy difference proportional to the applied field. The spin of the unpaired electron is parallel to the field in the higher-energy state and anti-parallel in the lower one.
A density matrix, formula_3, is a matrix that describes a quantum system in a mixed state, a statistical ensemble of several quantum states (here several similar 2-state atoms). This should be contrasted with a single state vector that describes a quantum system in a pure state. The expectation value of a measurement, formula_4, over the ensemble is formula_5. In terms of a complete set of states, formula_6, one can write
formula_7
Von Neumann's equation tells us how the density matrix evolves with time.
formula_8
In equilibrium,
one has formula_9, and the allowed density matrices are formula_10.
The canonical ensemble has formula_11, where formula_12.
For the 2-state system, we can write
formula_13.
Here formula_14 is the gyromagnetic ratio.
Hence formula_15, and
formula_16
From which
formula_17
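As a minimal numerical sketch of this calculation (in units with ħ = kB = 1, and with the sign convention stated above, namely that the spin-parallel state has the higher energy), the following snippet builds the canonical density matrix for the two-state system and recovers the closed-form expressions for the partition function and for the expectation value of Jz; the parameter values are arbitrary:
<syntaxhighlight lang="python">
# Two-state atom in a field: canonical density matrix rho = exp(-H/T)/Z.
# Units: hbar = k_B = 1; H = gamma*B*S_z, so the spin-parallel state is
# the higher-energy one, as described in the text.
import numpy as np
from scipy.linalg import expm

gamma, B, T = 1.0, 0.7, 1.3          # arbitrary illustrative values
Sz = 0.5 * np.diag([1.0, -1.0])      # J_z for spin one half
H = gamma * B * Sz

rho = expm(-H / T)
Z = np.trace(rho)
rho = rho / Z                        # canonical ensemble

Jz = np.trace(Sz @ rho)
print(Z, 2 * np.cosh(gamma * B / (2 * T)))      # matches the closed form for Z
print(Jz, -0.5 * np.tanh(gamma * B / (2 * T)))  # matches <J_z> above
</syntaxhighlight>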
Explanation of para and diamagnetism using perturbation theory.
In the presence of a uniform external magnetic field formula_18 along the z-direction, the Hamiltonian of the atom changes by
formula_19
where formula_20 are positive real numbers which do not depend on which atom we are looking at, but only on the mass and the charge of the electron, and the index formula_21 runs over the individual electrons of the atom.
We apply second-order perturbation theory to this situation. This is justified by the fact that, even for the highest presently attainable field strengths, the shifts in the energy levels due to formula_22 are quite small with respect to atomic excitation energies. Degeneracy of the original Hamiltonian is handled by choosing a basis which diagonalizes formula_22 in the degenerate subspaces. Let formula_23 be such a basis for the states of the atom (or rather of the electrons in the atom), and let formula_24 be the change in the energy of formula_25. So we get
formula_26
In our case we can ignore formula_27 and higher order terms. We get
formula_28
In the case of a diamagnetic material, the first two terms are absent, since the atoms have no net angular momentum in their ground state. In the case of a paramagnetic material, all three terms contribute.
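The second-order formula above can be checked numerically; the following sketch (a toy example, unrelated to any particular atom) compares it with exact diagonalization for a small Hermitian perturbation of a non-degenerate diagonal Hamiltonian:
<syntaxhighlight lang="python">
# Check the second-order perturbation formula
#   dE_n ~ <n|dH|n> + sum_{m: E_m != E_n} |<n|dH|m>|^2 / (E_n - E_m)
# against exact diagonalization for a small Hermitian perturbation dH.
import numpy as np

rng = np.random.default_rng(1)
E0 = np.array([0.0, 1.0, 2.5, 4.0])      # non-degenerate unperturbed levels
H0 = np.diag(E0)

V = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
V = (V + V.conj().T) / 2                 # Hermitian perturbation direction
dH = 1e-2 * V                            # small perturbation

exact = np.linalg.eigvalsh(H0 + dH) - E0  # exact shifts (levels keep their order)

pert = np.empty(4)
for n in range(4):
    second = sum(abs(dH[n, m]) ** 2 / (E0[n] - E0[m]) for m in range(4) if m != n)
    pert[n] = dH[n, n].real + second

print(np.max(np.abs(exact - pert)))       # difference is third order, i.e. very small
</syntaxhighlight>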
Adding spin–spin interaction in the Hamiltonian: Ising model.
So far, we have assumed that the atoms do not interact with each other. Even though this is a reasonable assumption in the case of diamagnetic and paramagnetic substances, it fails in the case of ferromagnetism, where the spins of the atoms try to align with each other to the extent permitted by the thermal agitation. In this case, we have to consider the Hamiltonian of the ensemble of atoms. Such a Hamiltonian contains all the terms described above for individual atoms, plus terms corresponding to the interactions among pairs of atoms. The Ising model is one of the simplest approximations of such a pairwise interaction.
formula_29
Here the two atoms of a pair are located at formula_30. Their interaction formula_31 is determined by their distance vector formula_32. In order to simplify the calculation, it is often assumed that the interaction occurs between neighboring atoms only and that formula_31 is a constant. The effect of such an interaction is often approximated as a mean field, in our case the Weiss field.
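As a sketch of how such a pairwise interaction reduces to a mean (Weiss) field, the following snippet solves the mean-field self-consistency equation m = tanh((zJm + h)/T) for a spin-1/2 Ising ferromagnet with coordination number z (units with kB = 1; the parameter values are illustrative):
<syntaxhighlight lang="python">
# Mean-field (Weiss) treatment of the spin-1/2 Ising model: solve the
# self-consistency equation m = tanh((z*J*m + h)/T) by fixed-point iteration.
# Units: k_B = 1; in mean field the critical temperature is T_c = z*J.
import numpy as np

def mean_field_m(T, J=1.0, z=4, h=0.0, m0=0.5, iters=2000):
    m = m0
    for _ in range(iters):
        m = np.tanh((z * J * m + h) / T)
    return m

for T in [2.0, 3.0, 3.9, 4.1, 5.0]:    # T_c = 4 for these illustrative values
    print(T, mean_field_m(T))           # nonzero below T_c, ~0 above (h = 0)
</syntaxhighlight>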
Modification of Curie's law due to Weiss field.
The Curie–Weiss law is an adapted version of Curie's law, which for a paramagnetic material may be written in SI units as follows, assuming formula_33:
formula_34
Here "μ"0 is the permeability of free space; "M" the magnetization (magnetic moment per unit volume), "B" = "μ"0"H" is the magnetic field, and "C" the material-specific Curie constant:
formula_35
where "k"B is the Boltzmann constant, "N" the number of magnetic atoms (or molecules) per unit volume, "g" the Landé "g"-factor, "μ"B the Bohr magneton, "J" the angular momentum quantum number.
For the Curie–Weiss law the total magnetic field is "B" + "λM", where "λ" is the Weiss molecular field constant, and then
formula_36
which can be rearranged to get
formula_37
which is the Curie–Weiss law
formula_38
where the Curie temperature "T"C is
formula_39
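A short sketch evaluating the Curie constant and the Curie–Weiss susceptibility from the expressions above, with illustrative (not material-specific) parameter values:
<syntaxhighlight lang="python">
# Curie constant C and Curie-Weiss susceptibility chi = C/(T - T_C) in SI
# units, for illustrative (not material-specific) parameter values.
import numpy as np

mu0 = 4e-7 * np.pi            # permeability of free space [H/m]
muB = 9.274e-24               # Bohr magneton [J/T]
kB = 1.381e-23                # Boltzmann constant [J/K]

N = 8.5e28                    # magnetic atoms per cubic metre (assumed)
g, J = 2.0, 0.5               # Lande g-factor, angular momentum quantum number
T_C = 630.0                   # assumed Curie temperature [K]

C = mu0 * muB**2 * N * g**2 * J * (J + 1) / (3 * kB)   # Curie constant [K]

for T in [700.0, 800.0, 1000.0]:        # paramagnetic region, T > T_C
    print(T, C / (T - T_C))             # dimensionless susceptibility
</syntaxhighlight>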
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\n\\chi = \\frac{C}{T - T_{\\rm C}}\n"
},
{
"math_id": 1,
"text": "\\chi"
},
{
"math_id": 2,
"text": "\n\\chi \\propto \\frac{1}{(T - T_{\\rm C})^\\gamma}\n"
},
{
"math_id": 3,
"text": " \\rho "
},
{
"math_id": 4,
"text": " A "
},
{
"math_id": 5,
"text": " \\langle A \\rangle = \\operatorname{Tr} (A \\rho) "
},
{
"math_id": 6,
"text": " |i\\rangle "
},
{
"math_id": 7,
"text": "\n\\rho = \\sum_{ij}\n\\rho_{ij} |i\\rangle \\langle j| .\n"
},
{
"math_id": 8,
"text": "\ni \\hbar \\frac d {dt} \\rho (t) = [H, \\rho(t)]\n"
},
{
"math_id": 9,
"text": " [H, \\rho] = 0 "
},
{
"math_id": 10,
"text": " f(H) "
},
{
"math_id": 11,
"text": " \\rho = \\exp(-H/T)/Z "
},
{
"math_id": 12,
"text": " Z =\\operatorname{Tr} \\exp(-H/T) "
},
{
"math_id": 13,
"text": " H = -\\gamma \\hbar B \\sigma_3 "
},
{
"math_id": 14,
"text": " \\gamma "
},
{
"math_id": 15,
"text": " Z = 2 \\cosh(\\gamma \\hbar B/(2T)) "
},
{
"math_id": 16,
"text": "\n\\rho(B,T) = \\frac 1 {2 \\cosh(\\gamma \\hbar B/(2T))} \n\\begin{pmatrix}\n\\exp (-\\gamma \\hbar B/(2T)) & 0 \\\\\n0 & \\exp (\\gamma \\hbar B/(2T))\n\\end{pmatrix}.\n"
},
{
"math_id": 17,
"text": "\n\\langle J_x \\rangle = \n\\langle J_y \\rangle = 0, \n\\langle J_z \\rangle = - \\frac \\hbar 2 \\tanh (\\gamma \\hbar B/(2T)).\n"
},
{
"math_id": 18,
"text": " B "
},
{
"math_id": 19,
"text": "\n\\Delta H = \\alpha J_z B + \\beta B^2 \\sum_i (x_i^2 + y_i^2 ),\n"
},
{
"math_id": 20,
"text": " \\alpha, \\beta "
},
{
"math_id": 21,
"text": " i "
},
{
"math_id": 22,
"text": " \\Delta H "
},
{
"math_id": 23,
"text": " |n\\rangle "
},
{
"math_id": 24,
"text": " \\Delta E_n "
},
{
"math_id": 25,
"text": " |n \\rangle "
},
{
"math_id": 26,
"text": "\n\\Delta E_n = \\langle n | \\Delta H | n \\rangle + \\sum_{m, E_m \\neq E_n} \n\\frac\n{| \\langle n | \\Delta H | m \\rangle |^2}\n{E_n - E_m}\n.\n"
},
{
"math_id": 27,
"text": " B^3 "
},
{
"math_id": 28,
"text": "\n\\Delta E_n = \\alpha B \\langle n | J_z | n \\rangle\n+\n\\alpha^2 B^2 \\sum_{m, E_m \\neq E_n} \n\\frac\n{| \\langle n | J_z | m \\rangle |^2}\n{E_n - E_m}\n+\n\\beta\nB^2 \\sum_i \\langle n | x_i^2 + y_i^2 | n \\rangle\n.\n"
},
{
"math_id": 29,
"text": "\nH_{\\text{pairs}} = \n- \\frac 1 2\n\\sum_{R,R'}\nS(R) \\cdot S(R') J (R - R')\n"
},
{
"math_id": 30,
"text": " R, R' "
},
{
"math_id": 31,
"text": " J "
},
{
"math_id": 32,
"text": " R - R' "
},
{
"math_id": 33,
"text": "\\chi \\ll 1 "
},
{
"math_id": 34,
"text": "\\chi = \\frac{M}{H} \\approx \\frac{M \\mu_0}{B} =\\frac{C}{T} ."
},
{
"math_id": 35,
"text": "C = \\frac{\\mu_0 \\mu_{\\rm B}^2}{3 k_{\\rm B}}N g^2 J(J+1),"
},
{
"math_id": 36,
"text": "\\chi =\\frac{M \\mu_0}{B} \\rightarrow \\frac{M \\mu_0}{B+\\lambda M} =\\frac{C}{T}"
},
{
"math_id": 37,
"text": "\\chi = \\frac{C}{T - \\frac{C \\lambda }{\\mu_0}}"
},
{
"math_id": 38,
"text": "\\chi = \\frac{C}{T - T_{\\rm C}}"
},
{
"math_id": 39,
"text": "T_{\\rm C} = \\frac{C \\lambda }{\\mu_0}"
}
] |
https://en.wikipedia.org/wiki?curid=1561792
|
15618376
|
Omega equation
|
"Elliptic equation estimating vertical velocity in meteorology"
The omega equation is a culminating result in synoptic-scale meteorology. It is an elliptic partial differential equation, named because its left-hand side produces an estimate of vertical velocity, customarily expressed by symbol formula_0, in a pressure coordinate measuring height in the atmosphere. Mathematically, formula_1, where formula_2 represents a material derivative. The underlying concept is more general, however, and can also be applied to the Boussinesq fluid equation system where vertical velocity is formula_3 in altitude coordinate "z".
Concept and summary.
Vertical wind is crucial to weather and storms of all types. Even slow, broad updrafts can create convective instability or bring air to its lifted condensation level creating stratiform cloud decks. Unfortunately, predicting vertical motion directly is difficult. For synoptic scales in Earth's broad and shallow troposphere, the vertical component of Newton's law of motion is sacrificed in meteorology's primitive equations, by accepting the hydrostatic approximation. Instead, vertical velocity must be solved through its link to horizontal laws of motion, via the mass continuity equation. But this presents further difficulties, because horizontal winds are mostly geostrophic, to a good approximation. Geostrophic winds merely circulate horizontally, and do not significantly converge or diverge in the horizontal to provide the needed link to mass continuity and thus vertical motion.
The key insight embodied by the quasi-geostrophic omega equation is that thermal wind balance (the combination of hydrostatic and geostrophic force balances above) holds "throughout time," even though the horizontal transport of momentum and heat by geostrophic winds will often tend to destroy that balance. Logically, then, a small non-geostrophic component of the wind (one which is divergent, and thus connected to vertical motion) must be acting as a secondary circulation to maintain balance of the geostrophic primary circulation. The quasi-geostrophic omega formula_4 is the hypothetical vertical motion whose adiabatic cooling or warming effect (based on the atmosphere's static stability) would prevent thermal wind "imbalance" from growing with time, by countering the balance-destroying (or imbalance-creating) effects of advection. Strictly speaking, QG theory approximates both the advected momentum and the advecting velocity as given by the geostrophic wind.
In summary, one may consider the vertical velocity that results from solving the omega equation as "that which would be needed to maintain geostrophy and hydrostasy in the face of advection by the geostrophic wind."
The equation reads:
<math>\sigma \nabla^2_\text{H}\,\omega + f^2 \frac{\partial^2 \omega}{\partial p^2} = f \frac{\partial}{\partial p}\left[\mathbf{V}_\text{g} \cdot \nabla_\text{H} \left(\zeta_\text{g} + f\right)\right] + \nabla^2_\text{H}\left[\mathbf{V}_\text{g} \cdot \nabla_\text{H} \left(-\frac{\partial \phi}{\partial p}\right)\right] \qquad (1)</math>
where formula_5 is the Coriolis parameter, formula_6 is related to the static stability, formula_7 is the geostrophic velocity vector, formula_8 is the geostrophic relative vorticity, formula_9 is the geopotential, formula_10 is the horizontal Laplacian operator and formula_11 is the horizontal del operator. Its sign and sense in typical weather applications are: "upward" motion is produced by "positive" vorticity advection "above" the level in question (the first term), plus "warm" advection (the second term).
Derivation.
The derivation of the formula_0 equation is based on the vertical component of the vorticity equation, and the thermodynamic equation. The vertical vorticity equation for a frictionless atmosphere may be written using pressure as the vertical coordinate:
Here formula_12 is the relative vorticity, formula_13 the horizontal wind velocity vector, whose components in the formula_14 and formula_15 directions are formula_16 and formula_17 respectively, formula_18 the absolute vorticity formula_19, formula_20 is the Coriolis parameter, formula_1 the material derivative of pressure formula_21, formula_22 is the unit vertical vector, formula_23 is the isobaric Del (grad) operator, formula_24 is the vertical advection of vorticity and formula_25 represents the "tilting" term or transformation of horizontal vorticity into vertical vorticity.
The thermodynamic equation may be written as:
where formula_26, in which formula_27 is the heating rate (supply of energy per unit time and unit mass), formula_28 is the specific heat of dry air, formula_29 is the gas constant for dry air, formula_30 is the potential temperature and formula_31 is the geopotential formula_32.
The formula_0 equation (1) is obtained from equations (2) and (3) by casting both equations in terms of the geopotential "Z", and eliminating time derivatives based on the physical assumption that the thermal wind imbalance remains small across time, or d/dt(imbalance) = 0. For the first step, the relative vorticity must be approximated as the geostrophic vorticity:
formula_33
Expanding the final "tilting" term in (2) into Cartesian coordinates (although we will soon neglect it), the vorticity equation reads:
Differentiating (4) with respect to formula_21 gives:
Taking the Laplacian (formula_34) of (3) gives:
Adding (5) to "g"/"f" times (6), substituting formula_35, and approximating horizontal advection with "geostrophic advection" (using the Jacobian formalism) gives:
Equation (7) is now a diagnostic, linear differential equation for formula_0, which can be split into two terms, namely formula_36 and formula_37, such that:
and
where formula_36 is the vertical velocity attributable to all the flow-dependent advective tendencies in Equation (8), and formula_37 is the vertical velocity due to the non-adiabatic heating, which includes the latent heat of condensation, sensible heat fluxes, radiative heating, etc. (Singh & Rathor, 1974). Since all advecting velocities in the horizontal have been replaced with geostrophic values, and geostrophic winds are nearly nondivergent, neglect of vertical advection terms is a consistent further assumption of the quasi-geostrophic set, leaving only the square bracketed term in Eqs. (7-8) to enter (1).
Interpretation.
Equation (1) for adiabatic formula_4 is used by meteorologists and operational weather forecasters to anticipate where upward motion will occur on synoptic charts. For sinusoidal or wavelike motions, the Laplacian operators act simply as a negative sign, and the equation's meaning can be expressed in words indicating the sign of the effect: "upward motion" is driven by "positive vorticity advection increasing with height" (or PVA for short), plus "warm air advection" (or WAA for short). The opposite-signed case is logically opposite, for this linear equation.
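The wavelike reasoning can be made concrete with a minimal sketch: for a single sinusoidal mode the left-hand-side operator of Eq. (1) reduces to multiplication by a negative number, so the sign of formula_4 is opposite to that of the combined advective forcing. The parameter values below are illustrative only:
<syntaxhighlight lang="python">
# Invert the omega equation for one sinusoidal mode. For
# omega ~ sin(k*x)*sin(m*p'), the operator sigma*Lap_H + f^2 d^2/dp^2 becomes
# multiplication by -(sigma*k**2 + f**2*m**2), so omega = -F/(sigma*k**2 + f**2*m**2):
# a positive combined forcing F yields omega < 0, i.e. upward motion.
import numpy as np

f = 1.0e-4              # Coriolis parameter [1/s]
sigma = 2.0e-6          # static stability parameter [m^2 Pa^-2 s^-2]
L = 3.0e6               # horizontal wavelength [m]        (assumed)
Dp = 8.0e4              # pressure depth of the wave [Pa]  (assumed)

k = 2 * np.pi / L       # horizontal wavenumber
m = np.pi / Dp          # vertical wavenumber in pressure

F = 2.0e-18             # combined advective forcing (assumed positive value)
omega = -F / (sigma * k**2 + f**2 * m**2)
print(omega)            # negative: rising motion in pressure coordinates
</syntaxhighlight>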
In a location where the imbalancing effects of adiabatic advection are acting to drive upward motion (where formula_38 in Eq. 1), the inertia of the geostrophic wind field (that is, its propensity to carry on forward) is creating a demand for decreasing thickness formula_39 in order for thermal wind balance to continue to hold. For instance, when there is an approaching upper-level cyclone or trough above the level in question, the part of formula_4 attributable to the first term in Eq. 1 is upward motion needed to create the increasingly cool air column that is required hypsometrically under the falling heights. That adiabatic reasoning must be supplemented by an appreciation of feedbacks from flow-dependent heating, such as latent heat release. If latent heat is released as air cools, then an additional upward motion will be required based on Eq. (9) to counteract its effect, in order to still create the necessary cool core. Another way to think about such a feedback is to consider an effective static stability that is smaller in saturated air than in unsaturated air, although a complication of that view is that latent heating mediated by convection need not be vertically local to the altitude where cooling by formula_4 triggers its formation. For this reason, retaining a separate Q term like Equation (9) is a useful approach.
|
[
{
"math_id": 0,
"text": "\\omega"
},
{
"math_id": 1,
"text": "\\omega = \\frac{dp}{dt}"
},
{
"math_id": 2,
"text": "{d \\over dt}"
},
{
"math_id": 3,
"text": "w = \\frac{dz}{dt}"
},
{
"math_id": 4,
"text": "\\omega_{QG}"
},
{
"math_id": 5,
"text": " f "
},
{
"math_id": 6,
"text": " \\sigma "
},
{
"math_id": 7,
"text": " \\mathbf{V}_\\text{g} "
},
{
"math_id": 8,
"text": " \\zeta_\\text{g} "
},
{
"math_id": 9,
"text": " \\phi "
},
{
"math_id": 10,
"text": " \\nabla^2_\\text{H} "
},
{
"math_id": 11,
"text": " \\nabla_\\text{H} "
},
{
"math_id": 12,
"text": "\\xi"
},
{
"math_id": 13,
"text": "V"
},
{
"math_id": 14,
"text": "x"
},
{
"math_id": 15,
"text": "y"
},
{
"math_id": 16,
"text": "u"
},
{
"math_id": 17,
"text": "v"
},
{
"math_id": 18,
"text": "\\eta"
},
{
"math_id": 19,
"text": "\\xi + f"
},
{
"math_id": 20,
"text": "f"
},
{
"math_id": 21,
"text": "p"
},
{
"math_id": 22,
"text": "\\hat k"
},
{
"math_id": 23,
"text": "\\nabla"
},
{
"math_id": 24,
"text": "\\left( \\xi \\frac{\\partial \\omega}{\\partial p} - \\omega \\frac{\\partial \\xi}{\\partial p} \\right)"
},
{
"math_id": 25,
"text": "\\hat k \\cdot \\nabla\\omega \\times \\frac{\\partial V}{\\partial p} "
},
{
"math_id": 26,
"text": " k \\equiv \\left( \\frac{\\partial Z}{\\partial p}\\right) \\frac{\\partial}{\\partial p} \\ln\\theta"
},
{
"math_id": 27,
"text": "Q"
},
{
"math_id": 28,
"text": "C_\\text{p}"
},
{
"math_id": 29,
"text": "R"
},
{
"math_id": 30,
"text": "\\theta"
},
{
"math_id": 31,
"text": "\\phi"
},
{
"math_id": 32,
"text": "(gZ)"
},
{
"math_id": 33,
"text": "\\xi = \\frac{g}{f}\\nabla^2 Z "
},
{
"math_id": 34,
"text": " \\nabla^2 "
},
{
"math_id": 35,
"text": "gk = \\sigma"
},
{
"math_id": 36,
"text": "\\omega_1"
},
{
"math_id": 37,
"text": "\\omega_2"
},
{
"math_id": 38,
"text": "\\omega_{QG} < 0"
},
{
"math_id": 39,
"text": "\\frac{-\\partial Z}{\\partial p}"
}
] |
https://en.wikipedia.org/wiki?curid=15618376
|
1561997
|
Contorsion tensor
|
The contorsion tensor in differential geometry is the difference between a connection with and without torsion in it. It commonly appears in the study of spin connections. Thus, for example, a vielbein together with a spin connection, when subject to the condition of vanishing torsion, gives a description of Einstein gravity. For supersymmetry, the same constraint, of vanishing torsion, gives (the field equations of) eleven-dimensional supergravity. That is, the contorsion tensor, along with the connection, becomes one of the dynamical objects of the theory, demoting the metric to a secondary, derived role.
The elimination of torsion in a connection is referred to as the "absorption of torsion", and is one of the steps of Cartan's equivalence method for establishing the equivalence of geometric structures.
Definition in metric geometry.
In metric geometry, the contorsion tensor expresses the difference between a metric-compatible affine connection with Christoffel symbol formula_0 and the unique torsion-free Levi-Civita connection for the same metric.
The contorsion tensor formula_1 is defined in terms of the torsion tensor formula_2 as (up to a sign, see below)
formula_3
where the indices are being raised and lowered with respect to the metric:
formula_4.
The non-obvious sum in the definition of the contorsion tensor arises from the sum–sum difference that enforces metric compatibility. The contorsion tensor is antisymmetric in its first two indices, whilst the torsion tensor itself is antisymmetric in its last two indices; this is shown below.
formula_3
formula_5
formula_6
formula_7
The full metric compatible affine connection can be written as:
formula_8
where formula_9 is the torsion-free Levi-Civita connection:
formula_10
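The algebraic properties stated above can be verified numerically; the following sketch (illustrative only) draws a random metric and a random torsion tensor, builds the contorsion tensor with all indices lowered, and checks both its antisymmetry in the first two indices and the fact that the combination of connection differences recovers the torsion tensor:
<syntaxhighlight lang="python">
# Numerical check of two algebraic properties of the contorsion tensor
# K_{ijk} = (T_{ijk} + T_{jki} - T_{kij})/2 for a random metric and a random
# torsion tensor (antisymmetric in its last two indices):
#   (i)  K_{ijk} + K_{jik} = 0        (antisymmetry in the first two indices)
#   (ii) K_{kij} - K_{kji} = T_{kij}  (the connection difference reproduces torsion)
import numpy as np

n = 3
rng = np.random.default_rng(0)

A = rng.normal(size=(n, n))
g = A @ A.T + n * np.eye(n)                # metric g_{ij}: symmetric, positive definite

T_up = rng.normal(size=(n, n, n))
T_up = T_up - np.swapaxes(T_up, 1, 2)      # torsion T^l_{jk}, antisymmetric in j, k

T = np.einsum('il,ljk->ijk', g, T_up)      # lower the first index: T_{ijk}

K = 0.5 * (T + np.einsum('jki->ijk', T) - np.einsum('kij->ijk', T))

print(np.allclose(K + np.swapaxes(K, 0, 1), 0.0))   # (i)  True
print(np.allclose(K - np.swapaxes(K, 1, 2), T))     # (ii) True
</syntaxhighlight>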
Definition in affine geometry.
In affine geometry, one does not have a metric nor a metric connection, and so one is not free to raise and lower indices on demand. One can still achieve a similar effect by making use of the solder form, allowing the bundle to be related to what is happening on its base space. This is an explicitly geometric viewpoint, with tensors now being geometric objects in the vertical and horizontal bundles of a fiber bundle, instead of being indexed algebraic objects defined only on the base space. In this case, one may construct a contorsion tensor, living as a one-form on the tangent bundle.
Recall that the torsion of a connection formula_11 can be expressed as
formula_12
where formula_13 is the solder form (tautological one-form). The subscript formula_11 serves only as a reminder that this torsion tensor was obtained from the connection.
By analogy to the lowering of the index on torsion tensor on the section above, one can perform a similar operation with the solder form, and construct a tensor
formula_14
Here formula_15 is the scalar product. This tensor can be expressed as
formula_16
The quantity formula_17 is the contorsion form and is "exactly" what is needed to add to an arbitrary connection to get the torsion-free Levi-Civita connection. That is, given an Ehresmann connection formula_11, there is another connection formula_18 that is torsion-free.
The vanishing of the torsion is then equivalent to having
formula_19
or
formula_20
This can be viewed as a field equation relating the dynamics of the connection to that of the contorsion tensor.
Derivation.
One way to quickly derive a metric compatible affine connection is to repeat the sum–sum difference idea used in the derivation of the Levi–Civita connection, but without taking the torsion to be zero. A derivation follows.
Convention for the derivation (the connection coefficients are defined as follows; the motivation is that of connection one-forms in gauge theory):
formula_21
formula_22
We begin with the metric-compatibility condition:
formula_23
Now we use the sum–sum difference (cycling the indices of the condition):
formula_24
formula_25
We now use the below torsion tensor definition (for a holonomic frame) to rewrite the connection:
formula_26
formula_27
Note that this definition of torsion has the opposite sign as the usual definition when using the above convention formula_28 for the lower index ordering of the connection coefficients, i.e. it has the opposite sign as the coordinate-free definition formula_29 in the below section on geometry. Rectifying this inconsistency (which seems to be common in the literature) would result in a contorsion tensor with the opposite sign.
Substitute the torsion tensor definition into what we have:
formula_30
Clean it up and combine like terms
formula_31
The torsion terms combine to make an object that transforms tensorially. Since these terms combine together in a metric compatible fashion, they are given a name, the contorsion tensor, which determines the skew-symmetric part of a metric compatible affine connection.
We define it here so that it matches the indices of the left-hand side of the equation above.
formula_32
Cleaning by using the anti-symmetry of the torsion tensor yields what we will define to be the contorsion tensor:
formula_33
Substituting this back into our expression, we have:
formula_34
Now isolate the connection coefficients, and group the torsion terms together:
formula_35
Recall that the first term with the partial derivatives is the Levi-Civita connection expression used often by relativists.
Following suit, define the following to be the torsion-free Levi-Civita connection:
formula_36
The full metric compatible affine connection can now be written as:
formula_8
Relationship to teleparallelism.
In the theory of teleparallelism, one encounters a connection, the Weitzenböck connection, which is flat (vanishing Riemann curvature) but has a non-vanishing torsion. The flatness is exactly what allows parallel frame fields to be constructed. These notions can be extended to supermanifolds.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " {\\Gamma^{k}}_{ij} "
},
{
"math_id": 1,
"text": "K_{kji}"
},
{
"math_id": 2,
"text": "{T^{l}}_{ij}= {\\Gamma^{l}}_{ij} -{\\Gamma^{l}}_{ji} "
},
{
"math_id": 3,
"text": " K_{ijk} = \\tfrac{1}{2} (T_{ijk} + T_{jki} - T_{kij}) "
},
{
"math_id": 4,
"text": "T_{ijk} \\equiv g_{il} {T^{l}}_{jk}"
},
{
"math_id": 5,
"text": " K_{(ij)k} = \\tfrac{1}{2} \\bigl[\\tfrac{1}{2}(T_{ijk}+T_{jik}) + \\tfrac{1}{2}(T_{jki}+T_{ikj}) - \\tfrac{1}{2}(T_{kij}+T_{kji})\\bigr] "
},
{
"math_id": 6,
"text": " = \\tfrac{1}{4} (T_{ijk}+T_{jik}+T_{jki}+T_{ikj}-T_{kij}-T_{kji}) "
},
{
"math_id": 7,
"text": " = 0 "
},
{
"math_id": 8,
"text": " {\\Gamma^{l}}_{ij} =\\bar\\Gamma^{l}{}{}_{ij} + {K^{l}}_{ij},"
},
{
"math_id": 9,
"text": " \\bar\\Gamma^{l}{}{}_{ji} "
},
{
"math_id": 10,
"text": " \\bar\\Gamma^{l}{}{}_{ji} = \\tfrac{1}{2} g^{lk} (\\partial_{i}g_{jk} + \\partial_{j}g_{ki} - \\partial_{k}g_{ij}) "
},
{
"math_id": 11,
"text": "\\omega"
},
{
"math_id": 12,
"text": "\\Theta_\\omega = D\\theta = d\\theta + \\omega \\wedge \\theta"
},
{
"math_id": 13,
"text": "\\theta"
},
{
"math_id": 14,
"text": "\\Sigma_\\omega(X,Y,Z)=\\langle\\theta(Z), \\Theta_\\omega(X,Y)\\rangle +\n\\langle\\theta(Y), \\Theta_\\omega(Z,X)\\rangle \n- \\langle\\theta(X), \\Theta_\\omega(Y,Z)\\rangle\n"
},
{
"math_id": 15,
"text": "\\langle,\\rangle"
},
{
"math_id": 16,
"text": "\\Sigma_\\omega(X,Y,Z)=2\\langle\\theta(Z), \\sigma_\\omega(X)\\theta(Y)\\rangle"
},
{
"math_id": 17,
"text": "\\sigma_\\omega"
},
{
"math_id": 18,
"text": "\\omega+\\sigma_\\omega"
},
{
"math_id": 19,
"text": "\\Theta_{\\omega+\\sigma_\\omega} = 0"
},
{
"math_id": 20,
"text": "d\\theta = - (\\omega +\\sigma_\\omega) \\wedge \\theta"
},
{
"math_id": 21,
"text": "\\nabla_{i}v^{j} = \\partial_{i}v^{j} + {\\Gamma^{j}}_{ki} v^{k},"
},
{
"math_id": 22,
"text": "\\nabla_{i}\\omega_{j} = \\partial_{i}\\omega_{j} - {\\Gamma^{k}}_{ji} \\omega_{k},"
},
{
"math_id": 23,
"text": "\\nabla_{i}g_{jk} = \\partial_{i}g_{jk} - {\\Gamma^{l}}_{ji}g_{lk} - {\\Gamma^{l}}_{ki}g_{jl} = 0,"
},
{
"math_id": 24,
"text": "\\partial_{i}g_{jk} - {\\Gamma^{l}}_{ji}g_{lk} - {\\Gamma^{l}}_{ki}g_{jl} + \\partial_{j}g_{ki} - {\\Gamma^{l}}_{kj}g_{li} - {\\Gamma^{l}}_{ij}g_{kl} - \\partial_{k}g_{ij} + {\\Gamma^{l}}_{ik}g_{lj} + {\\Gamma^{l}}_{jk}g_{il} = 0"
},
{
"math_id": 25,
"text": "\\partial_{i}g_{jk} + \\partial_{j}g_{ki} - \\partial_{k}g_{ij} - \\Gamma_{kji} - \\Gamma_{jki} - \\Gamma_{ikj} - \\Gamma_{kij} + \\Gamma_{jik} + \\Gamma_{ijk} = 0"
},
{
"math_id": 26,
"text": "{T^k}_{ij}= {\\Gamma^k}_{ij} - {\\Gamma^k}_{ji} "
},
{
"math_id": 27,
"text": "\\Gamma_{kij} = T_{kij} + \\Gamma_{kji} "
},
{
"math_id": 28,
"text": "\\nabla_{i}v^{j} = \\partial_{i}v^{j} + {\\Gamma^{j}}_{ki} v^{k}"
},
{
"math_id": 29,
"text": "\\Theta_\\omega = D\\theta"
},
{
"math_id": 30,
"text": "\\partial_{i}g_{jk} + \\partial_{j}g_{ki} - \\partial_{k}g_{ij} - (T_{kji} + \\Gamma_{kij}) - \\Gamma_{jki} - (T_{ikj} + \\Gamma_{ijk}) - \\Gamma_{kij} + (T_{jik} + \\Gamma_{jki}) + \\Gamma_{ijk} = 0"
},
{
"math_id": 31,
"text": "2\\Gamma_{kij} = \\partial_{i}g_{jk} + \\partial_{j}g_{ki} - \\partial_{k}g_{ij} - T_{kji} - T_{ikj} + T_{jik} "
},
{
"math_id": 32,
"text": " K_{kij} = \\tfrac{1}{2} (- T_{kji} - T_{ikj} + T_{jik}) "
},
{
"math_id": 33,
"text": " K_{kij} = \\tfrac{1}{2} (T_{kij} + T_{ijk} - T_{jki}) "
},
{
"math_id": 34,
"text": "2\\Gamma_{kij} = \\partial_{i}g_{jk} + \\partial_{j}g_{ki} - \\partial_{k}g_{ij} + 2 K_{kij} "
},
{
"math_id": 35,
"text": "{\\Gamma^{l}}_{ij} = \\tfrac{1}{2} g^{lk} (\\partial_{i}g_{jk} + \\partial_{j}g_{ki} - \\partial_{k}g_{ij}) + \\tfrac{1}{2} g^{lk} (2 K_{kij}) "
},
{
"math_id": 36,
"text": " \\bar\\Gamma^{l}{}{}_{ij} = \\tfrac{1}{2} g^{lk} (\\partial_{i}g_{jk} + \\partial_{j}g_{ki} - \\partial_{k}g_{ij}) "
}
] |
https://en.wikipedia.org/wiki?curid=1561997
|
15620003
|
Singular spectrum analysis
|
Nonparametric spectral estimation method
In time series analysis, singular spectrum analysis (SSA) is a nonparametric spectral estimation method. It combines elements of classical time series analysis, multivariate statistics, multivariate geometry, dynamical systems and signal processing. Its roots lie in the classical Karhunen (1946)–Loève (1945, 1978) spectral decomposition of time series and random fields and in the Mañé (1981)–Takens (1981) embedding theorem. SSA can be an aid in the decomposition of time series into a sum of components, each having a meaningful interpretation. The name "singular spectrum analysis" relates to the spectrum of eigenvalues in a singular value decomposition of a covariance matrix, and not directly to a frequency domain decomposition.
Brief history.
The origins of SSA and, more generally, of subspace-based methods for signal processing, go back to the eighteenth century (Prony's method). A key development was the formulation of the spectral decomposition of the covariance operator of stochastic processes by Kari Karhunen and Michel Loève in the late 1940s (Loève, 1945; Karhunen, 1947).
Broomhead and King (1986a, b) and Fraedrich (1986) proposed to use SSA and multichannel SSA (M-SSA) in the context of nonlinear dynamics for the purpose of reconstructing the attractor of a system from measured time series. These authors provided an extension and a more robust application of the idea of reconstructing dynamics from a single time series based on the embedding theorem. Several other authors had already applied simple versions of M-SSA to meteorological and ecological data sets (Colebrook, 1978; Barnett and Hasselmann, 1979; Weare and Nasstrom, 1982).
Ghil, Vautard and their colleagues (Vautard and Ghil, 1989; Ghil and Vautard, 1991; Vautard et al., 1992; Ghil et al., 2002) noticed the analogy between the trajectory matrix of Broomhead and King, on the one hand, and the Karhunen–Loève decomposition (Principal component analysis in the time domain), on the other. Thus, SSA can be used as a time-and-frequency domain method for time series analysis — independently from attractor reconstruction and including cases in which the latter may fail. The survey paper of Ghil et al. (2002) is the basis of the Methodology section of this article. A crucial result of the work of these authors is that SSA can robustly recover the "skeleton" of an attractor, including in the presence of noise. This skeleton is formed by the least unstable periodic orbits, which can be identified in the eigenvalue spectra of SSA and M-SSA. The identification and detailed description of these orbits can provide highly useful pointers to the underlying nonlinear dynamics.
The so-called ‘Caterpillar’ methodology is a version of SSA that was developed in the former Soviet Union, independently of the mainstream SSA work in the West. This methodology became known in the rest of the world more recently (Danilov and Zhigljavsky, Eds., 1997; Golyandina et al., 2001; Zhigljavsky, Ed., 2010; Golyandina and Zhigljavsky, 2013; Golyandina et al., 2018). ‘Caterpillar-SSA’ emphasizes the concept of separability, a concept that leads, for example, to specific recommendations concerning the choice of SSA parameters. This method is thoroughly described later in this article.
Methodology.
In practice, SSA is a nonparametric spectral estimation method based on embedding a time series formula_0 in a vector space of dimension formula_1. SSA proceeds by diagonalizing the formula_2 lag-covariance matrix formula_3 of formula_4 to obtain spectral information on the time series, assumed to be stationary in the weak sense. The matrix formula_3 can be estimated directly from the data as a Toeplitz matrix with constant diagonals (Vautard and Ghil, 1989), i.e., its entries formula_5 depend only on the lag formula_6:
formula_7
An alternative way to compute formula_3 is by using the formula_8 "trajectory matrix" formula_9 that is formed by formula_1 lag-shifted copies of formula_10, which are formula_11 long; then
formula_12
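The two estimates of the lag-covariance matrix can be compared directly; the following Python sketch (with an arbitrary toy series) computes both the Toeplitz estimate and the trajectory-matrix estimate, which are close but not identical for a finite series:
<syntaxhighlight lang="python">
# Two estimates of the M x M lag-covariance matrix of a series X(t):
# the Toeplitz (Vautard-Ghil) estimate with constant diagonals, and the
# trajectory-matrix (Broomhead-King) estimate C = D^T D / N'.
import numpy as np

rng = np.random.default_rng(0)
N, M = 1000, 40
t = np.arange(N)
X = np.sin(2 * np.pi * t / 37.0) + 0.5 * rng.normal(size=N)   # toy series

# Toeplitz estimate: entries depend only on the lag |i - j|.
lags = np.arange(M)
acov = np.array([np.mean(X[:N - k] * X[k:]) for k in lags])
C_toeplitz = acov[np.abs(lags[:, None] - lags[None, :])]

# Trajectory-matrix estimate: D has M columns, each an N'-long lag-shifted copy.
Np = N - M + 1
D = np.column_stack([X[j:j + Np] for j in range(M)])
C_traj = D.T @ D / Np

print(np.max(np.abs(C_toeplitz - C_traj)))   # close, though not identical
</syntaxhighlight>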
The formula_1 eigenvectors formula_13 of the lag-covariance matrix formula_14 are called temporal empirical orthogonal functions (EOFs). The eigenvalues formula_15 of formula_16 account for the partial variance in the
direction formula_13 and the sum of the eigenvalues, i.e., the trace of
formula_16, gives the total variance of the original time series
formula_4. The name of the method derives from the singular values formula_17 of formula_18
Decomposition and reconstruction.
Projecting the time series onto each EOF yields the corresponding
temporal principal components (PCs) formula_19:
formula_20
An oscillatory mode is characterized by a pair of
nearly equal SSA eigenvalues and associated PCs that are in approximate phase quadrature (Ghil et al., 2002). Such a pair can represent efficiently a nonlinear, anharmonic oscillation. This is due to the fact that a single pair of data-adaptive SSA eigenmodes often will capture better the basic periodicity of an oscillatory mode than methods with fixed basis functions, such as the sines and cosines used in the Fourier transform.
The window width formula_1 determines the longest periodicity captured by SSA. Signal-to-noise separation can be obtained by merely inspecting the slope break in a "scree diagram" of eigenvalues formula_15 or singular values formula_17 vs. formula_21. The point formula_22 at which this break occurs should not be confused with a "dimension" formula_23 of the underlying deterministic dynamics (Vautard and Ghil, 1989).
A Monte-Carlo test (Allen and Smith, 1996; Allen and Robertson, 1996; Groth and Ghil, 2015) can be applied to ascertain the statistical significance of the oscillatory pairs detected by SSA. The entire time series or parts of it that correspond to trends, oscillatory modes or noise can be reconstructed by using linear combinations of the PCs and EOFs, which provide the reconstructed components (RCs) formula_24:
formula_25
here formula_26 is the set of EOFs on which the reconstruction is based. The values of the normalization factor formula_27, as well as of the lower and upper bound of summation formula_28 and formula_29, differ between the central part of the time series and the vicinity of its endpoints (Ghil et al., 2002).
Multivariate extension.
Multi-channel SSA (or M-SSA) is a natural extension of SSA to an formula_30-channel time series of vectors or maps with formula_31 data points formula_32. In the meteorological literature, extended EOF (EEOF) analysis is often assumed to be synonymous with M-SSA. The two methods are both extensions of classical principal component analysis (PCA) but they differ in emphasis: EEOF analysis typically utilizes a number formula_30 of spatial channels much greater than the number formula_1 of temporal lags, thus limiting the temporal and spectral information. In M-SSA, on the other hand, one usually chooses formula_33. Often M-SSA is applied to a few leading PCs of the spatial data, with formula_1 chosen large enough to extract detailed temporal and spectral information from the multivariate time series (Ghil et al., 2002). However, Groth and Ghil (2015) have demonstrated possible negative effects of this variance compression on the detection rate of weak signals when the number formula_30 of retained PCs becomes too small. This practice can further affect negatively the judicious reconstruction of the spatio-temporal patterns of such weak signals, and Groth et al. (2016) recommend retaining a maximum number of PCs, i.e., formula_34.
Groth and Ghil (2011) have demonstrated that a classical M-SSA analysis suffers from a degeneracy problem, namely the EOFs do not separate well between distinct oscillations when the corresponding eigenvalues are similar in size. This problem is a shortcoming of principal component analysis in general, not just of M-SSA in particular. In order to reduce mixture effects and to improve the physical interpretation, Groth and Ghil (2011) have proposed a subsequent VARIMAX rotation of the spatio-temporal EOFs (ST-EOFs) of the M-SSA. To avoid a loss of spectral properties (Plaut and Vautard 1994), they have introduced a slight modification of the common VARIMAX rotation that does take the spatio-temporal structure of ST-EOFs into account. Alternatively, a closed matrix formulation of the algorithm for the simultaneous rotation of the EOFs by iterative SVD decompositions has been proposed (Portes and Aguirre, 2016).
M-SSA has two forecasting approaches known as recurrent and vector. The discrepancies between these two approaches are attributable to the organization of the single trajectory matrix formula_35 of each series into the block trajectory matrix in the multivariate case. Two trajectory matrices can be organized as either vertical (VMSSA) or horizontal (HMSSA) as was recently introduced in Hassani and Mahmoudvand (2013), and it was shown that these constructions lead to better forecasts. Accordingly, we have four different forecasting algorithms that can be exploited in this version of MSSA (Hassani and Mahmoudvand, 2013).
Prediction.
In this subsection, we focus on phenomena that exhibit a significant oscillatory component: repetition increases understanding and hence confidence in a prediction method that is closely connected with such understanding.
Singular spectrum analysis (SSA) and the maximum entropy method (MEM) have been combined to predict a variety of phenomena in meteorology, oceanography and climate dynamics (Ghil et al., 2002, and references therein). First, the “noise” is filtered out by projecting the time series onto a subset of leading EOFs obtained by SSA; the selected subset should include statistically significant, oscillatory modes. Experience shows that this approach works best when the partial variance associated with the pairs of RCs that capture these modes is large (Ghil and Jiang, 1998).
The prefiltered RCs are then extrapolated by least-square fitting to an autoregressive model formula_36, whose coefficients give the MEM spectrum of the remaining “signal”. Finally, the extended RCs are used in the SSA reconstruction process to produce the forecast values. The reason why this approach – via SSA prefiltering, AR extrapolation of the RCs, and SSA reconstruction – works better than the customary AR-based prediction is explained by the fact that the individual RCs are narrow-band signals, unlike the original, noisy time series formula_4 (Penland et al., 1991; Keppenne and Ghil, 1993). In fact, the optimal order "p" obtained for the individual RCs is considerably lower than the one given by the standard Akaike information criterion (AIC) or similar ones.
Spatio-temporal gap filling.
The gap-filling version of SSA can be used to analyze data sets that are unevenly sampled or contain missing data (Kondrashov and Ghil, 2006; Kondrashov et al. 2010). For a univariate time series, the SSA gap filling procedure utilizes temporal correlations to fill in the missing points. For a multivariate data set, gap filling by M-SSA takes advantage of both spatial and temporal correlations. In either case: (i) estimates of missing data points are produced iteratively, and are then used to compute a self-consistent lag-covariance matrix formula_3 and its EOFs formula_13; and (ii) cross-validation is used to optimize the window width formula_1 and the number of leading SSA modes to fill the gaps with the iteratively estimated "signal," while the noise is discarded.
As a model-free tool.
The areas where SSA can be applied are very broad: climatology, marine science, geophysics, engineering, image processing, medicine, econometrics among them. Hence different modifications of SSA have been proposed and different methodologies of SSA are used in practical applications such as trend extraction, periodicity detection, seasonal adjustment, smoothing, noise reduction (Golyandina, et al, 2001).
Basic SSA.
SSA can be used as a model-free technique so that it can be applied to arbitrary time series including non-stationary time series. The basic aim of SSA is to decompose the time series into the sum of interpretable components such as trend, periodic components and noise with no a-priori assumptions about the parametric form of these components.
Consider a real-valued time series formula_37 of length formula_31. Let formula_30 formula_38 be some integer called the "window length" and formula_39.
Main algorithm.
1st step: Embedding.
Form the "trajectory matrix" of the series formula_40, which is the formula_41 matrix
formula_42
where formula_43 are "lagged vectors" of size formula_30. The matrix formula_44 is a Hankel matrix which means that formula_44 has equal elements formula_45 on the anti-diagonals formula_46.
2nd step: Singular Value Decomposition (SVD).
Perform the singular value decomposition (SVD) of the trajectory matrix formula_44. Set formula_47 and denote by formula_48 the "eigenvalues" of formula_49 taken in the decreasing order of magnitude (formula_50) and by formula_51 the orthonormal system of the "eigenvectors" of the matrix formula_49 corresponding to these eigenvalues.
Set formula_52 (note that formula_53 for a typical real-life series) and formula_54 formula_55. In this notation, the SVD of the trajectory matrix formula_44 can be written as
formula_56
where
formula_57
are matrices having rank 1; these are called "elementary matrices". The collection formula_58 will be called the formula_59th "eigentriple" (abbreviated as ET) of the SVD. Vectors formula_60 are the left singular vectors of the matrix formula_44, numbers formula_61 are the singular values and provide the singular spectrum of formula_44; this gives the name to SSA. Vectors formula_62 are called vectors of principal components (PCs).
3rd step: Eigentriple grouping.
Partition the set of indices formula_63 into formula_64 disjoint subsets formula_65.
Let formula_66. Then the resultant matrix formula_67 corresponding to the group formula_68 is defined as formula_69. The resultant matrices are computed for the groups formula_70 and the grouped SVD expansion of formula_44 can now be written as
formula_71
4th step: Diagonal averaging.
Each matrix formula_72 of the grouped decomposition is hankelized and then the obtained Hankel matrix is transformed into a new series of length formula_31 using the one-to-one correspondence between Hankel matrices and time series.
Diagonal averaging applied to a resultant matrix formula_73 produces a "reconstructed series" formula_74. In this way, the initial series formula_75 is decomposed into a sum of formula_64 reconstructed subseries:
formula_76
This decomposition is the main result of the SSA algorithm. The decomposition is meaningful if each reconstructed
subseries could be classified as a part of either trend or some periodic component or noise.
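The four steps above can be condensed into a short, illustrative Python implementation (a sketch rather than a reference implementation; the grouping is supplied by the user, and the toy example simply groups the two leading eigentriples as the periodic component):
<syntaxhighlight lang="python">
# Basic SSA in four steps: embedding, SVD, grouping, diagonal averaging.
import numpy as np

def ssa_reconstruct(x, L, groups):
    """Return one reconstructed series per group of eigentriple indices."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    K = N - L + 1
    # 1) Embedding: L x K trajectory (Hankel) matrix, X[i, j] = x[i + j].
    X = np.column_stack([x[j:j + L] for j in range(K)])
    # 2) SVD of the trajectory matrix.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    out = []
    for group in groups:
        # 3) Grouping: sum the selected elementary matrices.
        XI = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in group)
        # 4) Diagonal averaging (Hankelization) back to a series of length N.
        rec = np.zeros(N)
        cnt = np.zeros(N)
        for i in range(L):
            for j in range(K):
                rec[i + j] += XI[i, j]
                cnt[i + j] += 1
        out.append(rec / cnt)
    return out

# Toy example: the eigentriple pair (0, 1) captures the noisy sine wave.
rng = np.random.default_rng(0)
t = np.arange(400)
x = np.sin(2 * np.pi * t / 30.0) + 0.3 * rng.normal(size=400)
periodic, residual = ssa_reconstruct(x, L=100, groups=[[0, 1], range(2, 100)])
</syntaxhighlight>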
Theory of SSA separability.
The two main questions which the theory of SSA attempts to answer are: (a) what time series components can be separated by SSA, and (b) how to choose the window length formula_30 and make proper grouping for extraction of a desirable component. Many theoretical results can be found in Golyandina et al. (2001, Ch. 1 and 6).
Trend (which is defined as a slowly varying component of the time series), periodic components and noise are asymptotically separable as formula_77. In practice formula_31 is fixed and one is interested in approximate separability between time series components. A number of indicators of approximate separability can be used, see Golyandina et al. (2001, Ch. 1). The window length formula_30 determines the resolution of the method: larger values of formula_30 provide more refined decomposition into elementary components and therefore better separability. The window length formula_30 determines the longest periodicity captured by SSA. Trends can be extracted by grouping of eigentriples with slowly varying eigenvectors. A sinusoid with frequency smaller than 0.5 produces two approximately equal eigenvalues and two sine-wave eigenvectors with the same frequencies and formula_78-shifted phases.
Separation of two time series components can be considered as extraction of one component in the presence of perturbation by the other component. SSA perturbation theory is developed in Nekrutkin (2010) and Hassani et al. (2011).
Forecasting by SSA.
If for some series formula_40 the SVD step in Basic SSA gives formula_79, then this series is called "time series of rank formula_80" (Golyandina et al., 2001, Ch.5). The subspace spanned by the formula_80 leading eigenvectors is called signal subspace. This subspace is used for estimating the signal parameters in signal processing, e.g. ESPRIT for high-resolution frequency estimation. Also, this subspace determines the linear homogeneous recurrence relation (LRR) governing the series, which can be used for forecasting. Continuation of the series by the LRR is similar to forward linear prediction in signal processing.
Let the series be governed by the minimal LRR formula_81. Let us choose formula_82, formula_83 be the eigenvectors (left singular vectors of the formula_30-trajectory matrix), which are provided by the SVD step of SSA. Then this series is governed by an LRR formula_84, where formula_85 are expressed through formula_83 (Golyandina et al., 2001, Ch.5), and can be continued by the same LRR.
This provides the basis for SSA recurrent and vector forecasting algorithms (Golyandina et al., 2001, Ch.2). In practice, the signal is corrupted by a perturbation, e.g., by noise, and its subspace is estimated by SSA approximately. Thus, SSA forecasting can be applied for forecasting of a time series component that is approximately governed by an LRR and is approximately separated from the residual.
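A compact sketch of the recurrent forecasting idea (notation and helper names are illustrative): the coefficients of the LRR are obtained from the r leading left singular vectors, and the series is then continued step by step:
<syntaxhighlight lang="python">
# Recurrent SSA forecasting: the r leading left singular vectors U_1..U_r
# give LRR coefficients R = (sum_i pi_i * U_i^head) / (1 - nu^2), where pi_i
# is the last component of U_i, U_i^head its first L-1 components, and
# nu^2 = sum_i pi_i^2; the series is then continued by the LRR.
import numpy as np

def ssa_forecast(x, L, r, steps):
    x = np.asarray(x, dtype=float)
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[j:j + L] for j in range(K)])   # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    P = U[:, :r]                                          # estimated signal subspace
    pi = P[-1, :]                                         # last components
    nu2 = float(pi @ pi)
    R = (P[:-1, :] @ pi) / (1.0 - nu2)                    # LRR coefficients
    y = list(x)
    for _ in range(steps):
        y.append(float(R @ np.asarray(y[-(L - 1):])))     # one recurrent step
    return np.asarray(y[N:])

# Toy example: a noiseless sinusoid is governed by an LRR of order 2 (r = 2).
t = np.arange(200)
x = np.sin(2 * np.pi * t / 25.0)
print(ssa_forecast(x, L=50, r=2, steps=5))
print(np.sin(2 * np.pi * np.arange(200, 205) / 25.0))     # nearly identical
</syntaxhighlight>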
Multivariate extension.
Multi-channel, multivariate SSA (or M-SSA) is a natural extension of SSA for analyzing multivariate time series, where the lengths of the different univariate series do not have to be the same. The trajectory matrix of a multi-channel time series consists of linked trajectory matrices of the separate time series. The rest of the algorithm is the same as in the univariate case. Systems of series can be forecast analogously to the SSA recurrent and vector algorithms (Golyandina and Stepanov, 2005). MSSA has many applications. It is especially popular in analyzing and forecasting economic and financial time series with short and long series lengths (Patterson et al., 2011, Hassani et al., 2012, Hassani and Mahmoudvand, 2013).
Another multivariate extension is 2D-SSA, which can be applied to two-dimensional data such as digital images (Golyandina and Usevich, 2010). The analogue of the trajectory matrix is constructed by moving 2D windows of size formula_86.
MSSA and causality.
A question that frequently arises in time series analysis is whether one economic variable can
help in predicting another economic variable. One way to address this question was proposed by
Granger (1969), in which he formalized the causality concept. A comprehensive causality test based on MSSA has recently been introduced for causality measurement. The test is based on the forecasting accuracy and predictability of the direction of change of the MSSA algorithms (Hassani et al., 2011 and Hassani et al., 2012).
MSSA and EMH.
The MSSA forecasting results can be used in examining the efficient-market hypothesis controversy (EMH).
The EMH suggests that the information contained in the price series of an asset is reflected “instantly, fully, and perpetually” in the asset’s current price. Since the price series and the information contained in it are available to all market participants, no one can benefit by attempting to take advantage of the information contained in the price history of an asset by trading in the markets. This is evaluated using two series with different series length in a multivariate system in SSA analysis (Hassani et al. 2010).
MSSA, SSA and business cycles.
Business cycles play a key role in macroeconomics, and are of interest to a variety of players in the economy, including central banks, policy-makers, and financial intermediaries. MSSA-based methods for tracking business cycles have recently been introduced, and have been shown to allow for a reliable assessment of the cyclical position of the economy in real time (de Carvalho et al., 2012 and de Carvalho and Rua, 2017).
MSSA, SSA and unit root.
SSA's applicability to any kind of stationary or deterministically trending series has been extended to the case of a series with a stochastic trend, also known as a series with a unit root. In Hassani and Thomakos (2010) and Thomakos (2010) the basic theory on the properties and application of SSA in the case of series with a unit root is given, along with several examples. It is shown that SSA in such series produces a special kind of filter, whose form and spectral properties are derived, and that forecasting the single reconstructed component reduces to a moving average. SSA in unit roots thus provides an 'optimizing' non-parametric framework for smoothing series with a unit root. This line of work is also extended to the case of two series, both of which have a unit root but are cointegrated. The application of SSA in this bivariate framework produces a smoothed series of the common root component.
Gap-filling.
The gap-filling versions of SSA can be used to analyze data sets that are unevenly sampled or contain missing data (Schoellhamer, 2001; Golyandina and Osipov, 2007).
Schoellhamer (2001) shows that the straightforward idea to formally calculate approximate inner products omitting unknown terms is workable for long stationary time series.
Golyandina and Osipov (2007) uses the idea of filling in missing entries in vectors taken from the given subspace. The recurrent and vector SSA forecasting can be considered as particular cases of filling in algorithms described in the paper.
Detection of structural changes.
SSA can be effectively used as a non-parametric method of time series monitoring and change detection. To do that, SSA performs the subspace tracking in the following way. SSA is applied sequentially to the initial parts of the series, constructs the corresponding signal subspaces and checks the distances between these subspaces and the lagged vectors formed from the few most recent observations. If these distances become too large, a structural change is suspected to have occurred in the series (Golyandina et al., 2001, Ch.3; Moskvina and Zhigljavsky, 2003).
In this way, SSA could be used for change detection not only in trends but also in the variability of the series, in the mechanism that determines dependence between different series and even in the noise structure. The method has proved to be useful in different engineering problems (e.g. Mohammad and Nishida (2011) in robotics), and has been extended to the multivariate case with corresponding analysis of detection delay and false positive rate.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\{X(t) : t=1,\\ldots,N\\}"
},
{
"math_id": 1,
"text": "M"
},
{
"math_id": 2,
"text": "M\\times M"
},
{
"math_id": 3,
"text": "{\\textbf C}_X"
},
{
"math_id": 4,
"text": "X(t)"
},
{
"math_id": 5,
"text": "c_{ij}"
},
{
"math_id": 6,
"text": "|i-j|"
},
{
"math_id": 7,
"text": "\nc_{ij} = \\frac{1}{N-|i-j|} \\sum_{t=1}^{N-|i-j|} X(t) X(t+|i-j|).\n"
},
{
"math_id": 8,
"text": "N' \\times M"
},
{
"math_id": 9,
"text": "{\\textbf D}"
},
{
"math_id": 10,
"text": "{\\it X(t)}"
},
{
"math_id": 11,
"text": "N' =N-M+1"
},
{
"math_id": 12,
"text": "\n{\\textbf C}_X = \\frac{1}{N'} {\\textbf D}^{\\rm t} {\\textbf D}. \n"
},
{
"math_id": 13,
"text": "{\\textbf E}_k"
},
{
"math_id": 14,
"text": "{\\textbf C}_ X"
},
{
"math_id": 15,
"text": "\\lambda_k"
},
{
"math_id": 16,
"text": "{\\textbf C}_{X}"
},
{
"math_id": 17,
"text": "\\lambda^{1/2}_k"
},
{
"math_id": 18,
"text": "{\\textbf C}_{X}."
},
{
"math_id": 19,
"text": "{\\textbf A}_k"
},
{
"math_id": 20,
"text": "\n A_k(t) = \\sum_{j=1}^{M} X(t+j-1) E_k(j).\n"
},
{
"math_id": 21,
"text": "k"
},
{
"math_id": 22,
"text": "k^* = S"
},
{
"math_id": 23,
"text": "D"
},
{
"math_id": 24,
"text": "{\\textbf R}_K"
},
{
"math_id": 25,
"text": "\nR_{ K}(t) = \\frac{1}{M_t} \\sum_{k\\in {\\textit K}} \\sum_{j={L_t}}^{U_t}\nA_k(t-j+1)E_k(j);\n"
},
{
"math_id": 26,
"text": "K"
},
{
"math_id": 27,
"text": "M_t"
},
{
"math_id": 28,
"text": "L_t"
},
{
"math_id": 29,
"text": "U_t"
},
{
"math_id": 30,
"text": "L"
},
{
"math_id": 31,
"text": "N"
},
{
"math_id": 32,
"text": "\\{X_{l}(t): l=1,\\dots, L; t=1,\\dots, N\\}"
},
{
"math_id": 33,
"text": "L \\leq M"
},
{
"math_id": 34,
"text": "L=N"
},
{
"math_id": 35,
"text": "{\\textbf X}"
},
{
"math_id": 36,
"text": "AR[p]"
},
{
"math_id": 37,
"text": "\\mathbb{X}=(x_1,\\ldots,x_{N})"
},
{
"math_id": 38,
"text": "\\ (1<L < N)"
},
{
"math_id": 39,
"text": "K=N-L+1"
},
{
"math_id": 40,
"text": "\\mathbb{X}"
},
{
"math_id": 41,
"text": "L\\!\\times\\! K"
},
{
"math_id": 42,
"text": "\n\\mathbf{X}=[X_1:\\ldots:X_K]=(x_{ij})_{i,j=1}^{L,K}=\n\\begin{bmatrix}\nx_1&x_2&x_3&\\ldots&x_{K}\\\\\nx_2&x_3&x_4&\\ldots&x_{K+1}\\\\\nx_3&x_4&x_5&\\ldots&x_{K+2}\\\\\n\\vdots&\\vdots&\\vdots&\\ddots&\\vdots\\\\\nx_{L}&x_{L+1}&x_{L+2}&\\ldots&x_{N}\\\\\n\\end{bmatrix}\n"
},
{
"math_id": 43,
"text": "\nX_i=(x_{i},\\ldots,x_{i+L-1})^\\mathrm{T} \\; \\quad (1\\leq i\\leq K)\n"
},
{
"math_id": 44,
"text": "\\mathbf{X}"
},
{
"math_id": 45,
"text": "x_{ij}"
},
{
"math_id": 46,
"text": "i+j =\\,{\\rm const}"
},
{
"math_id": 47,
"text": "\\mathbf{S}=\\mathbf{X} \\mathbf{X}^\\mathrm{T}"
},
{
"math_id": 48,
"text": "\\lambda_1, \\ldots,\\lambda_L"
},
{
"math_id": 49,
"text": "\\mathbf{S}"
},
{
"math_id": 50,
"text": "\\lambda_1\\geq \\ldots \\geq \\lambda_L\\geq 0"
},
{
"math_id": 51,
"text": "U_1,\\ldots,U_L"
},
{
"math_id": 52,
"text": "d= \\mathop{\\mathrm{rank}} \\mathbf{X} =\\max\\{i,\\ \\mbox{such that}\\ \\lambda_i >0\\}"
},
{
"math_id": 53,
"text": "d=L"
},
{
"math_id": 54,
"text": "V_i=\\mathbf{X}^\\mathrm{T} U_i/\\sqrt{\\lambda_i}"
},
{
"math_id": 55,
"text": "(i=1,\\ldots,d)"
},
{
"math_id": 56,
"text": "\n\\mathbf{X} = \\mathbf{X}_1 + \\ldots + \\mathbf{X}_d,\n"
},
{
"math_id": 57,
"text": "\\mathbf{X}_i=\\sqrt{\\lambda_i}U_i V_i^\\mathrm{T}"
},
{
"math_id": 58,
"text": "(\\sqrt{\\lambda_i},U_i,V_i)"
},
{
"math_id": 59,
"text": "i"
},
{
"math_id": 60,
"text": "U_i"
},
{
"math_id": 61,
"text": "\\sqrt{\\lambda_i}"
},
{
"math_id": 62,
"text": "\\sqrt{\\lambda_i}V_i=\\mathbf{X}^\\mathrm{T} U_i"
},
{
"math_id": 63,
"text": "\\{1,\\ldots,d\\}"
},
{
"math_id": 64,
"text": "m"
},
{
"math_id": 65,
"text": "I_1,\\ldots,I_m"
},
{
"math_id": 66,
"text": "I=\\{i_1,\\ldots,i_p\\}"
},
{
"math_id": 67,
"text": "\\mathbf{X}_I"
},
{
"math_id": 68,
"text": "I"
},
{
"math_id": 69,
"text": "\\mathbf{X}_I=\\mathbf{X}_{i_1}+\\ldots+\\mathbf{X}_{i_p}"
},
{
"math_id": 70,
"text": "I=I_1, \\ldots, I_m"
},
{
"math_id": 71,
"text": "\n\\mathbf{X}=\\mathbf{X}_{I_1}+\\ldots+\\mathbf{X}_{I_m}.\n"
},
{
"math_id": 72,
"text": "\\mathbf{X}_{I_j}"
},
{
"math_id": 73,
"text": "\\mathbf{X}_{I_k}"
},
{
"math_id": 74,
"text": "\\widetilde{\\mathbb{X}}^{(k)}=(\\widetilde{x}^{(k)}_1,\\ldots,\\widetilde{x}^{(k)}_N)"
},
{
"math_id": 75,
"text": "x_1,\\ldots,x_N"
},
{
"math_id": 76,
"text": "\n x_n = \\sum\\limits_{k=1}^m \\widetilde{x}^{(k)}_n \\ \\ (n=1,2, \\ldots, N).\n"
},
{
"math_id": 77,
"text": "N\\rightarrow \\infty"
},
{
"math_id": 78,
"text": "\\pi/2"
},
{
"math_id": 79,
"text": "d<L"
},
{
"math_id": 80,
"text": "d"
},
{
"math_id": 81,
"text": "x_{n}=\\sum_{k=1}^d b_k x_{n-k}"
},
{
"math_id": 82,
"text": "L>d"
},
{
"math_id": 83,
"text": "U_1,\\ldots,U_d"
},
{
"math_id": 84,
"text": "x_{n}=\\sum_{k=1}^{L-1} a_k x_{n-k}"
},
{
"math_id": 85,
"text": "(a_{L-1},\\ldots,a_1)^\\mathrm{T}"
},
{
"math_id": 86,
"text": "L_x \\times L_y"
},
{
"math_id": 87,
"text": "x_n=s_n+e_n"
},
{
"math_id": 88,
"text": "s_n=\\sum_{k=1}^r a_k s_{n-k}"
},
{
"math_id": 89,
"text": "e_n"
},
{
"math_id": 90,
"text": "x_n=\\sum_{k=1}^r a_k x_{n-k}+ e_n"
},
{
"math_id": 91,
"text": "k/N"
},
{
"math_id": 92,
"text": "s_{n}=\\sum_{k=1}^r a_k s_{n-k}"
},
{
"math_id": 93,
"text": "s_n=\\sum_k C_k \\rho_k^n e^{i 2\\pi \\omega_k n}"
},
{
"math_id": 94,
"text": "\\omega_k"
},
{
"math_id": 95,
"text": "\\rho_k"
},
{
"math_id": 96,
"text": "C_k"
},
{
"math_id": 97,
"text": "n"
},
{
"math_id": 98,
"text": "r"
},
{
"math_id": 99,
"text": "\\mathop{\\mathrm{span}}(U_1,\\ldots,U_r)"
},
{
"math_id": 100,
"text": "U_i=(u_1, \\ldots, u_L)^\\mathrm{T}"
},
{
"math_id": 101,
"text": "2L-1"
},
{
"math_id": 102,
"text": "\\widetilde{x}_s"
},
{
"math_id": 103,
"text": "L\\le s\\le K"
}
] |
https://en.wikipedia.org/wiki?curid=15620003
|
1562127
|
Shekel function
|
Function used as a performance test problem for optimization algorithms
The Shekel function, also known as Shekel's foxholes, is a multidimensional, multimodal, continuous, deterministic function commonly used as a test function for testing optimization techniques.
The mathematical form of a function in formula_0 dimensions with formula_1 maxima is:
formula_2
or, similarly,
formula_3
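A short illustrative implementation of the function above is given below; the particular matrix a and vector c in the example are the values commonly quoted for the five-maximum, four-dimensional test instance and should be treated as an assumed parameter choice (for minimisation benchmarks, the negative of this sum is typically used).

```python
import numpy as np

def shekel(x, a, c):
    """Evaluate the Shekel function at point x.

    x : array of shape (n,)    -- evaluation point
    a : array of shape (m, n)  -- the m "foxhole" locations a_i
    c : array of shape (m,)    -- the coefficients c_i
    """
    x, a, c = np.asarray(x, float), np.asarray(a, float), np.asarray(c, float)
    sq_dists = ((x - a) ** 2).sum(axis=1)   # inner sum over j for each i
    return float(np.sum(1.0 / (c + sq_dists)))

# Commonly quoted 4-dimensional, 5-maximum instance (illustrative parameters):
a = np.array([[4, 4, 4, 4], [1, 1, 1, 1], [8, 8, 8, 8],
              [6, 6, 6, 6], [3, 7, 3, 7]])
c = np.array([0.1, 0.2, 0.2, 0.4, 0.4])
print(shekel([4, 4, 4, 4], a, c))           # near a maximum of the sum form
```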
Global minima.
Numerically certified global minima and the corresponding solutions were obtained using interval methods for up to formula_4.
Further reading.
Shekel, J. 1971. "Test Functions for Multimodal Search Techniques." "Fifth Annual Princeton Conference on Information Science and Systems".
|
[
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "m"
},
{
"math_id": 2,
"text": "\nf(\\vec{x}) = \\sum_{i = 1}^{m} \\; \\left( c_{i} + \\sum\\limits_{j = 1}^{n} (x_{j} - a_{ji})^2 \\right)^{-1}\n"
},
{
"math_id": 3,
"text": "\nf(x_1,x_2,...,x_{n-1},x_n) = \\sum_{i = 1}^{m} \\; \\left( c_{i} + \\sum\\limits_{j = 1}^{n} (x_{j} - a_{ij})^2 \\right)^{-1}\n"
},
{
"math_id": 4,
"text": "n = 10"
}
] |
https://en.wikipedia.org/wiki?curid=1562127
|
15623882
|
Four factor formula
|
Formula used to calculate nuclear chain reaction growth rate
The four-factor formula, also known as Fermi's four-factor formula, is used in nuclear engineering to determine the multiplication of a nuclear chain reaction in an infinite medium.
The symbols are defined as:
Multiplication.
The multiplication factor, k, is defined as (see Nuclear chain reaction):
formula_15
In an infinite medium, neutrons cannot leak out of the system and the multiplication factor becomes the infinite multiplication factor, formula_16, which is approximated by the four-factor formula.
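The symbol definitions are not reproduced in this extract; as a rough illustration only, the sketch below assumes the standard textbook form of the four-factor formula, k-infinity = η ε p f, with the reproduction factor η, fast fission factor ε, resonance escape probability p and thermal utilization factor f, and uses purely illustrative numbers.

```python
def k_infinity(eta, epsilon, p, f):
    """Infinite-medium multiplication factor via the four-factor formula.

    eta     -- reproduction factor (neutrons produced per thermal neutron
               absorbed in fuel)
    epsilon -- fast fission factor
    p       -- resonance escape probability
    f       -- thermal utilization factor
    The names follow the standard textbook formulation; the numbers below
    are illustrative only.
    """
    return eta * epsilon * p * f

# Example with illustrative values for a thermal reactor lattice:
print(k_infinity(eta=2.02, epsilon=1.03, p=0.71, f=0.79))  # about 1.17
```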
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\nu"
},
{
"math_id": 1,
"text": "\\nu_f"
},
{
"math_id": 2,
"text": "\\nu_t"
},
{
"math_id": 3,
"text": "\\sigma_f^F"
},
{
"math_id": 4,
"text": "\\sigma_a^F"
},
{
"math_id": 5,
"text": "\\Sigma_a^F"
},
{
"math_id": 6,
"text": "\\Sigma_a"
},
{
"math_id": 7,
"text": "N_i"
},
{
"math_id": 8,
"text": "I_{r,A,i}"
},
{
"math_id": 9,
"text": "I_{r,A,i} = \\int_{E_{th}}^{E_0} dE' \\frac{\\Sigma_p^{mod}}{\\Sigma_t(E')} \\frac{\\sigma_a^i(E')}{E'}"
},
{
"math_id": 10,
"text": "\\overline{\\xi}"
},
{
"math_id": 11,
"text": "u_f"
},
{
"math_id": 12,
"text": "P_{FAF}"
},
{
"math_id": 13,
"text": "P_{TAF}"
},
{
"math_id": 14,
"text": "P_{TNL}"
},
{
"math_id": 15,
"text": "k = \\frac{\\mbox{neutron population following nth generation}}{\\mbox{neutron population during nth generation}}"
},
{
"math_id": 16,
"text": "k = k_{\\infty}"
}
] |
https://en.wikipedia.org/wiki?curid=15623882
|
156259
|
Price discrimination
|
Microeconomic pricing strategy to maximise firm profits
Price discrimination is a microeconomic pricing strategy where identical or largely similar goods or services are sold at different prices by the same provider in different market segments. Price discrimination is distinguished from product differentiation by the more substantial difference in production cost for the differently priced products involved in the latter strategy. Price discrimination essentially relies on the variation in the customers' willingness to pay and in the elasticity of their demand. For price discrimination to succeed, a firm must have market power, such as a dominant market share, product uniqueness, sole pricing power, etc.
All prices under price discrimination are higher than the equilibrium price in a perfectly competitive market. However, some prices under price discrimination may be lower than the price charged by a single-price monopolist. Price discrimination is utilized by the monopolist to recapture some deadweight loss. This pricing strategy enables firms to capture additional consumer surplus and maximize their profits while benefiting some consumers at lower prices. Price discrimination can take many forms and is prevalent in many industries, from education and telecommunications to healthcare.
The term differential pricing is also used to describe the practice of charging different prices to different buyers for the same quality and quantity of a product, but it can also refer to a combination of price differentiation and product differentiation. Other terms used to refer to price discrimination include "equity pricing", "preferential pricing", "dual pricing", "tiered pricing", and "surveillance pricing". Within the broader domain of price differentiation, a commonly accepted classification dating to the 1920s is:
Theoretical basis.
In a theoretical market with perfect information, perfect substitutes, and no transaction costs or prohibition on secondary exchange (or re-selling) to prevent arbitrage, price discrimination can only be a feature of monopoly and oligopoly markets, where market power can be exercised (see 'Price discrimination and monopoly power' below for a more in-depth explanation). Without market power, when the price is set above the market equilibrium, consumers will move to buy from other producers selling at the market equilibrium. Moreover, when the seller tries to sell the same good at differentiated prices, the buyer at the lower price can arbitrage by selling to the consumer buying at the higher price, with a small discount from the higher price.
Price discrimination requires market segmentation and some means to discourage discount customers from becoming resellers and, by extension, competitors. This usually entails using one or more means of preventing any resale: keeping the different price groups separate, making price comparisons difficult, or restricting pricing information. The boundary set up by the marketer to keep segments separate is referred to as a "rate fence" (a rule that allows consumers to segment themselves based on their needs, behaviour and willingness to pay). Price discrimination is thus very common in services where resale is not possible; an example is student discounts at museums: in theory, students may get lower prices than the rest of the population for a certain product or service, and they cannot later become resellers, since the discount applies only to them and a student identification card must be shown when making the purchase. Another example of price discrimination is intellectual property, enforced by law and by technology. In the market for DVDs, laws require DVD players to be designed and produced with hardware or software that prevents inexpensive copying or playing of content purchased legally elsewhere in the world at a lower price. In the US, the Digital Millennium Copyright Act has provisions to outlaw the circumvention of such devices to protect the enhanced monopoly profits that copyright holders can obtain from price discrimination against higher-price market segments.
Price discrimination exploits differences in the customers' willingness to pay, in order to capture as much consumer surplus as possible. By understanding the elasticity of customers' demand, a business can use its market power to identify their willingness to pay. Different people pay different prices for the same product when price discrimination exists in the market. When a company recognizes a consumer with a lower willingness to pay, it can use price discrimination to maximize the firm's profit.
Price discrimination and market power.
Degrees of market power.
Market power refers to the ability of a firm to manipulate the price without losing shares (sales) in the market. Some factors which affect the market power of a firm are listed below:
The degree of market power can usually be divided into 4 categories (listed in the table below in order of increasing market power):
Since price discrimination depends on a firm's market power, it is generally monopolies that use price discrimination; however, oligopolies can also use price discrimination when the risk of arbitrage and of consumers moving to other competitors is low.
Price discrimination in oligopolies.
An oligopoly forms when a small group of businesses dominates an industry. When the dominant companies in an oligopoly compete on price, a motive for inter-temporal price discrimination appears in the oligopoly market. Price discrimination can be facilitated by inventory controls in oligopoly.
Whilst oligopolies hold more market power than firms in perfectly competitive markets, the use of price discrimination can lead to lower profits for oligopolies as they compete to hold greater shares of the market by lowering prices. For instance, when oligopolies use third-degree price discrimination to offer a lower price to consumers with high price elasticity (lower disposable income), they compete with other firms to capture the market until a lower profit is retained. Hence, oligopolies may be dissuaded from using price discrimination.
Types of price discrimination.
First degree (Perfect price discrimination).
Exercising first degree (or perfect or primary) price discrimination requires the monopoly seller of a good or service to know the absolute maximum price (or reservation price) that every consumer is willing to pay. By knowing the reservation price, the seller is able to sell the good or service to each consumer at the maximum price they are willing to pay (provided it is greater than or equal to the marginal cost), and thus transform the consumer surplus into seller revenue. As a result, the profit is equal to the sum of consumer surplus and producer surplus. First-degree price discrimination is the most profitable as it obtains all of the consumer surplus and each consumer buys the good at the highest price they are willing to pay. The marginal consumer is the one whose reservation price equals the marginal cost of the product, meaning that the social surplus consists entirely of producer surplus (no consumer surplus). If the seller engages in first degree price discrimination, then they will produce more product than they would with no price discrimination. Hence first degree price discrimination can eliminate the deadweight loss that occurs in monopolistic markets. Examples of first degree price discrimination can be observed in markets where consumers bid for tenders, though, in this case, the practice of collusive tendering could reduce market efficiency.
Second degree (Quantity discount).
In second-degree price discrimination, the price of the same good varies according to the quantity demanded. It usually comes in the form of a quantity discount, which reflects the law of diminishing marginal utility. The law of diminishing marginal utility stipulates that a consumer's utility may decrease (diminish) with each successive unit. For example, the marginal utility received from a ride at a theme park may gradually diminish each time the same ride is taken. By offering a quantity discount for a larger quantity purchased, the seller is able to capture some of the consumer surplus, but not all of it. This is because diminishing marginal utility may mean the consumer would not be willing to purchase an additional unit without a discount, since the marginal utility received from the good or service is no longer greater than the price. However, by offering a discount the seller can capture some of the consumer surplus by encouraging the purchase of an additional unit at a discounted price. This is particularly widespread in sales to industrial customers, where bulk buyers enjoy discounts.
Mobile phone plans and subscription packages are other common instances of second-degree price discrimination. Consumers usually perceive a one-year subscription as more cost-effective than a monthly one. Whether or not they need such a long-term subscription, they are more likely to accept and pay for the cost-effective option, and the producer consequently sees an increase in sales and profit. Second-degree price discrimination, also known as non-linear pricing, benefits consumers by allowing them to purchase at a cheaper price, rather than the normal price, when they buy more.
Third degree (Market segregation).
Third-degree price discrimination means charging different prices to different groups of consumers based on their elasticities of demand, with the group whose demand is less elastic being charged the higher price. For example, rail and tube (subway) travelers can be subdivided into commuters and casual travelers, and cinema goers can be subdivided into adults and children, with some theatres also offering discounts to full-time students and seniors. Splitting the market into peak and off-peak use of service is very common and occurs with gas, electricity, and telephone supply, as well as gym membership and parking charges.
In order to offer different prices for different groups of people in the aggregate market, the business has to use additional information to identify its consumers. The businesses must set prices according to the consumers' willingness to buy. Consequently, they will be involved in third-degree price discrimination. With third-degree price discrimination, the firms try to generate sales by identifying different market segments, such as domestic and industrial users, with different price elasticities. Markets must be kept separate by time, physical distance, and nature of use. For example, the Microsoft Office Schools edition is available for a lower price to educational institutions than to other users. The markets must not overlap, so that consumers who purchase at the lower price in the elastic sub-market cannot resell at a higher price in the inelastic sub-market.
Two-part tariff.
The two-part tariff is another form of price discrimination, in which the producer charges an initial fee and a secondary fee for the use of the product. This pricing strategy yields a result similar to second-degree price discrimination. In addition, the two-part tariff is desirable for welfare because the monopolistic markup can be eliminated. However, under a discriminatory two-part tariff, unlike uniform two-part tariff pricing, an upstream monopolist can set higher unit wholesale prices for the downstream firms. As a result, discriminatory two-part tariffs for wholesale prices can harm social welfare.
An example of two-part tariff pricing is in the market for shaving razors. The customer pays an initial cost for the razor and then pays again for the replacement blades. This pricing strategy works because it shifts the demand curve to the right: having already paid for the initial blade holder, the customer will continue to buy the blades, which are cheaper than buying disposable razors.
Combination.
These types are not mutually exclusive. Thus a company may vary pricing by location, but then offer bulk discounts as well. Airlines use several different types of price discrimination, including:
User-controlled price discrimination.
While the conventional theory of price discrimination generally assumes that prices are set by the seller, there is a variant form in which prices are set by the buyer, such as in the form of pay what you want pricing. Such user-controlled price discrimination exploits similar ability to adapt to varying demand curves or individual price sensitivities, and may avoid the negative perceptions of price discrimination as imposed by a seller.
In the matching markets, the platforms will internalize the impacts in revenue to create a cross-side effects. In return, this cross-side effect will differentiate price discrimination in matching intermediation from the standard markets.
Modern taxonomy.
The first/second/third degree taxonomy of price discrimination is due to Pigou ("Economics of Welfare", 3rd edition, 1929). However, these categories are not mutually exclusive or exhaustive. Ivan Png ("Managerial Economics", 1998: 301–315) suggests an alternative taxonomy:
Complete discrimination: where the seller prices each unit at a different price, so that each user purchases up to the point where the user's marginal benefit equals the marginal cost of the item;
Direct segmentation: where the seller can condition price on some attribute (like age or gender) that "directly" segments the buyers;
Indirect segmentation: where the seller relies on some proxy (e.g., package size, usage quantity, coupon) to structure a choice that "indirectly" segments the buyers;
Uniform pricing: where the seller sets the same price for each unit of the product.
The hierarchy of complete/direct/indirect/uniform pricing is in decreasing order of profitability and information requirement. Complete price discrimination is the most profitable and requires the seller to have the most information about buyers. Direct segmentation is next in both profitability and information requirement, followed by indirect segmentation. Finally, uniform pricing is the least profitable and requires the seller to have the least information about buyers.
Explanation.
The purpose of price discrimination is generally to capture the market's consumer surplus. This surplus arises because, in a market with a single clearing price, some customers (the very low price elasticity segment) would have been prepared to pay more than the market price. Price discrimination transfers some of this surplus from the consumer to the seller. It is a way of increasing monopoly profit. In a perfectly competitive market, manufacturers make normal profit, but not monopoly profit, so they cannot engage in price discrimination.
It can be argued that strictly, a consumer surplus need not exist, for example where fixed costs or economies of scale mean that the marginal cost of adding more consumers is less than the marginal profit from selling more product. This means that charging some consumers less than an even share of costs can be beneficial. An example is a high-speed internet connection shared by two consumers in a single building; if one is willing to pay less than half the cost of connecting the building, and the other willing to make up the rest but not to pay the entire cost, then price discrimination can allow the purchase to take place. However, this will cost the consumers as much or more than if they pooled their money to pay a non-discriminating price. If the consumer is considered to be the building, then a consumer surplus goes to the inhabitants.
It can be proved mathematically that a firm facing a downward sloping demand curve that is convex to the origin will always obtain higher revenues under price discrimination than under a single price strategy. This can also be shown geometrically.
In the top diagram, a single price formula_0 is available to all customers. The amount of revenue is represented by area formula_1. The consumer surplus is the area above line segment formula_2 but below the demand curve formula_3.
With price discrimination, (the bottom diagram), the demand curve is divided into two segments (formula_4 and formula_5). A higher price formula_6 is charged to the low elasticity segment, and a lower price formula_7 is charged to the high elasticity segment. The total revenue from the first segment is equal to the area formula_8. The total revenue from the second segment is equal to the area formula_9. The sum of these areas will always be greater than the area without discrimination assuming the demand curve resembles a rectangular hyperbola with unitary elasticity. The more prices that are introduced, the greater the sum of the revenue areas, and the more of the consumer surplus is captured by the producer.
The above requires both first and second degree price discrimination: the right segment corresponds partly to different people than the left segment, partly to the same people, willing to buy more if the product is cheaper.
It is very useful for the price discriminator to determine the optimum prices in each market segment. This is done in the next diagram where each segment is considered as a separate market with its own demand curve. As usual, the profit maximizing output (Qt) is determined by the intersection of the marginal cost curve (MC) with the marginal revenue curve for the total market (MRt).
The firm decides what amount of the total output to sell in each market by looking at the intersection of marginal cost with marginal revenue (profit maximization). This output is then divided between the two markets, at the equilibrium marginal revenue level. Therefore, the optimum outputs are formula_10 and formula_11. From the demand curve in each market, the profit-maximizing prices of formula_12 and formula_13 can be determined.
The marginal revenue in both markets at the optimal output levels must be equal, otherwise the firm could profit from transferring output over to whichever market is offering higher marginal revenue.
Given that Market 1 has a price elasticity of demand of formula_14 and Market 2 of formula_15, the optimal pricing ratio in Market 1 versus Market 2 is formula_16.
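As a numerical illustration of this condition (the marginal cost and elasticity values below are assumptions, not taken from the text), equating marginal revenue to marginal cost in each market gives P_i(1 + 1/E_i) = MC, from which both the optimal prices and the ratio formula_16 follow:

```python
def optimal_prices(mc, e1, e2):
    """Profit-maximising prices under third-degree price discrimination.

    Equating marginal revenue in each market to marginal cost gives
    P_i * (1 + 1/E_i) = MC, so P_i = MC / (1 + 1/E_i).  Elasticities must
    be less than -1 (elastic demand) for a finite positive price.
    """
    p1 = mc / (1 + 1 / e1)
    p2 = mc / (1 + 1 / e2)
    return p1, p2

# Illustrative numbers: marginal cost 10, Market 1 less elastic than Market 2.
p1, p2 = optimal_prices(mc=10, e1=-2.0, e2=-4.0)
print(p1, p2, p1 / p2)   # 20.0, 13.33..., ratio = (1 + 1/E2)/(1 + 1/E1) = 1.5
```

The less elastic market receives the higher price, consistent with the discussion above.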
The price in a perfectly competitive market will always be lower than any price under price discrimination (including in special cases like the internet connection example above, assuming that the perfectly competitive market allows consumers to pool their resources). In a market with perfect competition, no price discrimination is possible, and the average total cost (ATC) curve will be identical to the marginal cost curve (MC). The price will be the intersection of this ATC/MC curve and the demand line (Dt). The consumer thus buys the product at the cheapest price at which any manufacturer can produce any quantity.
Price discrimination is a sign that the market is imperfect, the seller has some monopoly power, and that prices and seller profits are higher than they would be in a perfectly competitive market.
Advantages and disadvantages of price discrimination.
Advantages of price discrimination.
True price discrimination occurs when exactly the same product is sold at multiple prices. It benefits only the seller, compared to a competitive market. It benefits some buyers at a (greater) cost to others, causing a net loss to consumers, compared to a single-price monopoly. For congestion pricing, which can benefit the buyer and is not price discrimination, see counterexamples below.
Examples.
Retail price discrimination.
Manufacturers may sell their products to similarly situated retailers at different prices based solely on the volume of products purchased. Sometimes, firms investigate consumers' purchase histories, which reveal the customers' unobserved willingness to pay. Each customer has a purchasing score which indicates his or her preferences; consequently, the firm is able to set the price for the individual customer at the point that minimizes the consumer surplus. Often, consumers are not aware of the ways to manipulate that score; a consumer who wants to do so can reduce demand in order to lower the average equilibrium price, which undermines the firm's price-discrimination strategy. It is an instance of third-degree price discrimination.
Travel industry.
Airlines and other travel companies regularly use differentiated pricing to sell travel products and services to different market segments. This is done by assigning capacity to various booking classes with different prices and fare restrictions. These restrictions ensure that market segments buy within their designated booking class range. For example, schedule-sensitive business passengers willing to pay $300 for a seat from city A to city B cannot purchase a $150 ticket because the $150 booking class has restrictions, such as a Saturday-night stay or a 15-day advance purchase, that discourage or prevent sales to business passengers. However, "the seat" is not always the same product. A business person may be willing to pay $300 for a seat on a high-demand morning flight with full refundability and the ability to upgrade to first class for a nominal fee. On the same flight, price-sensitive passengers may not be willing to pay $300 but are willing to fly on a lower-demand flight or via a connection city and forgo refundability.
An airline may also apply differential pricing to "the same seat" over time by discounting the price for early or late bookings and weekend purchases. This is part of an airline's strategy to segment price-sensitive leisure travelers from price-inelastic business travelers. This could present an arbitrage opportunity in the absence of restrictions on reselling, but passenger name changes are typically prevented or financially penalized.
An airline may also apply directional price discrimination by charging different roundtrip fares based on passenger origins. For example, passengers originating from City A, with a per capita income $30,000 higher than City B, may pay $5400–$12900 more than those from City B. This is due to airlines segmenting passenger price sensitivity based on the income of route endpoints. Since airlines often fly multi-leg flights and no-show rates vary by segment, competition for seats takes into account the spatial dynamics of the product. Someone trying to fly A-B is competing with people trying to fly A-C through city B on the same aircraft. Airlines use yield management technology to determine how many seats to allot for A-B, B-C, and A-B-C passengers at varying fares, demands, and no-show rates.
With the rise of the Internet and low fare airlines, airfare pricing transparency has increased. Passengers can easily compare fares across flights and airlines, putting pressure on airlines to lower fares. In the recession following the September 11, 2001 attacks, business travelers made it clear they would not buy air travel at rates high enough to subsidize lower fares for non-business travelers. This prediction has come true as many business travelers now buy economy class airfares for business travel.
Finally, there are sometimes group discounts on rail tickets and passes (second-degree price discrimination).
Coupons.
The use of coupons in retail is an attempt to distinguish customers by their reserve price. The assumption is that people who go through the trouble of collecting coupons have greater price sensitivity than those who do not. Thus, making coupons available enables, for instance, breakfast cereal makers to charge higher prices to price-insensitive customers, while still making some profit off customers who are more price-sensitive.
Another example can also be seen in how grocery store coupons were collected before the existence of digital coupons. Grocery store coupons were usually available in the free newspapers or magazines placed at the entrance of the stores. As coupon collection takes time, customers with a high value of time will not find it worthwhile to spend 20 minutes to save only $5. Meanwhile, customers with a low value of time will be satisfied by paying $5 less for their purchase, as they tend to be more price-sensitive. It is an instance of third-degree price discrimination.
Premium pricing.
For certain products, premium products are priced at a level (compared to "regular" or "economy" products) that is well beyond their marginal cost of production. For example, a coffee chain may price regular coffee at $1, but "premium" coffee at $2.50 (where the respective costs of production may be $0.90 and $1.25). Economists such as Tim Harford in "The Undercover Economist" have argued that this is a form of price discrimination: by providing a choice between a regular and premium product, consumers are being asked to reveal their degree of price sensitivity (or willingness to pay) for comparable products. Similar techniques are used in pricing business class airline tickets and premium alcoholic drinks, for example. These are examples of third-degree price discrimination.
This effect can lead to (seemingly) perverse incentives for the producer. If, for example, potential business class customers will pay a large price differential only if economy class seats are uncomfortable while economy class customers are more sensitive to price than comfort, airlines may have substantial incentives to purposely make economy seating uncomfortable. In the example of coffee, a restaurant may gain more economic profit by making poor quality regular coffee—more profit is gained from up-selling to premium customers than is lost from customers who refuse to purchase inexpensive but poor quality coffee. In such cases, the net social utility should also account for the "lost" utility to consumers of the regular product, although determining the magnitude of this foregone utility may not be feasible.
Segmentation by age group, student status, ethnicity and citizenship.
Many movie theaters, amusement parks, tourist attractions, and other places have different admission prices per market segment: typical groupings are Youth/Child, Student, Adult, Senior Citizen, Local and Foreigner. Each of these groups typically have a much different demand curve. Children, people living on student wages, and people living on retirement generally have much less disposable income. Foreigners may be perceived as being more wealthy than locals and therefore being capable of paying more for goods and services – sometimes this can be even 35 times as much. Market stall-holders and individual public transport providers may also insist on higher prices for their goods and services when dealing with foreigners (sometimes called the "White Man Tax"). Some goods – such as housing – may be offered at cheaper prices for certain ethnic groups.
Discounts for members of certain occupations.
Some businesses may offer reduced prices to members of certain occupations, such as school teachers (see below), police and military personnel. In addition to increased sales to the target group, businesses benefit from the resulting positive publicity, leading to increased sales to the general public.
Incentives for industrial buyers.
Many methods exist to incentivize wholesale or industrial buyers. These may be quite targeted, as they are designed to generate specific activity, such as buying more frequently, buying more regularly, buying in bigger quantities, buying new products with established ones, and so on. They may also be designed to reduce the administrative and finance costs of processing each transaction. Thus, there are bulk discounts, special pricing for long-term commitments, non-peak discounts, discounts on high-demand goods to incentivize buying lower-demand goods, rebates, and many others. This can help the relations between the firms involved. These are examples of second-degree price discrimination.
Gender-based examples.
Gender-based price discrimination is the practice of offering identical or similar services and products to men and women at different prices when the cost of producing the products and services is the same. In the United States, gender-based price discrimination has been a source of debate. In 1992, the New York City Department of Consumer Affairs ("DCA") conducted an investigation of "price bias against women in the marketplace". The DCA's investigation concluded that women paid more than men at used car dealers, dry cleaners, and hair salons. The DCA's research on gender pricing in New York City brought national attention to gender-based price discrimination and the financial impact it has on women.
With consumer products, differential pricing is usually not based explicitly on the actual gender of the purchaser, but is achieved implicitly by the use of differential packaging, labelling, or colour schemes designed to appeal to male or female consumers. In many cases, where the product is marketed to make an attractive gift, the gender of the purchaser may be different from that of the end user.
In 1995, California Assembly's Office of Research studied the issue of gender-based price discrimination of services and estimated that women effectively paid an annual "gender tax" of approximately $1,351.00 for the same services as men. It was also estimated that women, over the course of their lives, spend thousands of dollars more than men to purchase similar products. For example, prior to the enactment of the Patient Protection and Affordable Care Act ("Affordable Care Act"), health insurance companies charged women higher premiums for individual health insurance policies than men. Under the Affordable Care Act, health insurance companies are now required to offer the same premium price to all applicants of the same age and geographical locale without regard to gender. However, there is no federal law banning gender-based price discrimination in the sale of products. Instead, several cities and states have passed legislation prohibiting gender-based price discrimination on products and services.
In Europe, motor insurance premiums have historically been higher for men than for women, a practice that the insurance industry attempts to justify on the basis of different levels of risk. The EU has banned this practice; however, there is evidence that it is being replaced by "proxy discrimination", that is, discrimination on the basis of factors that are strongly correlated with gender: for example, charging construction workers more than midwives.
In the Chinese retail automobile market, researchers found that male buyers pay less than female buyers for cars with the same characteristics. The research also documented price discrimination between locals and non-locals; even so, local men receive a discount $221.63 larger than local women do, and non-local men receive a discount $330.19 larger than non-local women do. The discount represents approximately 10% of the average personal budget, based on per capita GDP for 2018.
International price discrimination.
Pharmaceutical companies may charge customers living in wealthier countries a much higher price than for identical drugs in poorer nations, as is the case with the sale of antiretroviral drugs in Africa. Since the purchasing power of African consumers is much lower, sales would be extremely limited without price discrimination. The ability of pharmaceutical companies to maintain price differences between countries is often either reinforced or hindered by national drugs laws and regulations, or the lack thereof.
Even online prices for non-material goods, which do not have to be shipped, may change according to the geographic location of the buyer, as with music streaming services such as Spotify and Apple Music. Users in lower-income countries benefit from price discrimination by paying lower subscription fees than those in higher-income countries. The researchers also found that the cross-national price differences actually raise the revenue of those companies by about 6% while reducing world users' welfare by about 1%.
Academic pricing.
Companies will often offer discounted goods and software to students and faculty at school and university levels. These may be labeled as academic versions, but perform the same as the full price retail software. Some academic software may have differing licenses than retail versions, usually disallowing their use in activities for profit or expiring the license after a given number of months. This also has the characteristics of an "initial offer" – that is, the profits from an academic customer may come partly in the form of future non-academic sales due to vendor lock-in.
Sliding scale fees.
Sliding scale fees are when different customers are charged different prices based on their income, which is used as a proxy for their willingness or ability to pay. For example, some nonprofit law firms charge on a sliding scale based on income and family size. Thus the clients paying a higher price at the top of the fee scale help subsidize the clients at the bottom of the scale. This differential pricing enables the nonprofit to serve a broader segment of the market than they could if they only set one price.
Weddings.
Goods and services for weddings are sometimes priced at a higher rate than identical goods for normal customers. The wedding venues and services are usually priced differently depending on the wedding date. For instance, if the wedding is held during the peak seasons (school holidays or festive seasons), the price will be higher than in the off-season wedding months.
Obstetric service.
The welfare consequences of price discrimination were assessed by testing the differences in mean prices paid by patients from three income groups: low, middle and high. The results suggest that two different forms of price discrimination for obstetric services occurred in the two hospitals studied. First, there was price discrimination according to income, with the poorer users benefiting from a higher discount rate than richer ones. Secondly, there was price discrimination according to social status, with three high-status occupational groups (doctors, senior government officials, and large businessmen) having the highest probability of receiving some level of discount.
Pharmaceutical industry.
Price discrimination is common in the pharmaceutical industry. Drug-makers charge more for drugs in wealthier countries. For example, drug prices in the United States are some of the highest in the world. Europeans, on average, pay only 56% of what Americans pay for the same prescription drugs.
Textbooks.
Price discrimination is also prevalent within the textbook publishing industry. Prices for textbooks are much higher in the United States despite the fact that they are produced in the country. Copyright protection laws increase the price of textbooks. Also, textbooks are mandatory in the United States while schools in other countries see them as study aids.
Two necessary conditions for price discrimination.
There are two conditions that must be met if a price discrimination scheme is to work. First, the firm must be able to identify market segments by their price elasticity of demand, and second, the firm must be able to enforce the scheme. For example, airlines routinely engage in price discrimination by charging high prices for customers with relatively inelastic demand (business travelers) and discount prices for tourists, who have relatively elastic demand. The airlines enforce the scheme by enforcing a no-resale policy on the tickets, preventing a tourist from buying a ticket at a discounted price and selling it to a business traveler (arbitrage). Airlines must also prevent business travelers from directly buying discount tickets. Airlines accomplish this by imposing advance ticketing requirements or minimum stay requirements, conditions that would be difficult for the average business traveler to meet.
Concession and student discounts.
Firms often use third-degree price discrimination for concession and student segments of the market. By offering a perceived discount to market segments which generally have less disposable income, and hence are more price-sensitive, the firm is able to capture the revenue from those with higher price sensitivity whilst also charging higher prices and capturing the consumer surplus of the segments with less price sensitivity.
Counterexamples.
Some pricing patterns appear to be price discrimination but are not.
Congestion pricing.
Price discrimination only happens when the "same" product is sold at more than one price. Congestion pricing is not price discrimination. Peak and off-peak fares on a train are not the same product; some people have to travel during rush hour, and travelling off-peak is not equivalent to them.
Some companies have high fixed costs (like a train company, which owns a railway and rolling stock, or a restaurant, which has to pay for premises and equipment). If these fixed costs permit the company to additionally provide less-preferred products (like mid-morning meals or off-peak rail travel) at little additional cost, it can profit both seller and buyer to offer them at lower prices. Providing more product from the same fixed costs increases both producer and consumer surplus. This is not technically price discrimination (unlike, say, giving menus with higher prices to richer-looking customers while the poorer-looking ones get an ordinary menu).
If different prices are charged for products that only some consumers will see as equivalent, the differential pricing can be used to manage demand. For instance, airlines can use price discrimination to encourage people to travel at unpopular times (early in the morning). This helps avoid over-crowding and helps to spread out demand. The airline gets better use out of planes and airports, and can thus charge less (or profit more) than if it only flew peak hours.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "(P)"
},
{
"math_id": 1,
"text": "P,A,Q,O"
},
{
"math_id": 2,
"text": "P,A"
},
{
"math_id": 3,
"text": "(D)"
},
{
"math_id": 4,
"text": "D1"
},
{
"math_id": 5,
"text": "D2"
},
{
"math_id": 6,
"text": "(P1)"
},
{
"math_id": 7,
"text": "(P2)"
},
{
"math_id": 8,
"text": "P1,B,Q1,O"
},
{
"math_id": 9,
"text": "E,C,Q2,Q1"
},
{
"math_id": 10,
"text": "Q_a"
},
{
"math_id": 11,
"text": "Q_b"
},
{
"math_id": 12,
"text": "P_a"
},
{
"math_id": 13,
"text": "P_b"
},
{
"math_id": 14,
"text": "E_1"
},
{
"math_id": 15,
"text": "E_2"
},
{
"math_id": 16,
"text": "P_1/P_2 = [1+1/E_2]/[1+1/E_1]"
}
] |
https://en.wikipedia.org/wiki?curid=156259
|
15626235
|
Hilbert number
|
A positive integer of the form (4n + 1)
In number theory, a branch of mathematics, a Hilbert number is a positive integer of the form 4"n" + 1. The Hilbert numbers were named after David Hilbert.
The sequence of Hilbert numbers begins 1, 5, 9, 13, 17, ... (sequence in the OEIS).
Hilbert primes.
A Hilbert prime is a Hilbert number that is not divisible by a smaller Hilbert number (other than 1). The sequence of Hilbert primes begins
5, 9, 13, 17, 21, 29, 33, 37, 41, 49, ... (sequence in the OEIS).
A Hilbert prime is not necessarily a prime number; for example, 21 is a composite number since 21 = 3 ⋅ 7. However, 21 is a Hilbert prime since neither 3 nor 7 (the only factors of 21 other than 1 and itself) are Hilbert numbers. It follows from multiplication modulo 4 that a Hilbert prime is either a prime number of the form 4"n" + 1 (called a Pythagorean prime), or a semiprime of the form (4"a" + 3) ⋅ (4"b" + 3).
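A short sketch that reproduces the sequence above by trial division restricted to Hilbert numbers (only divisors of the form 4n + 1 need to be tested):

```python
def hilbert_numbers(limit):
    """Hilbert numbers 1, 5, 9, ... up to `limit`."""
    return list(range(1, limit + 1, 4))

def is_hilbert_prime(h):
    """A Hilbert number > 1 with no Hilbert divisor other than 1 and itself."""
    if h == 1:
        return False
    d = 5
    while d * d <= h:
        if h % d == 0:
            return False
        d += 4          # only divisors of the form 4n + 1 need checking
    return True

print([h for h in hilbert_numbers(50) if is_hilbert_prime(h)])
# [5, 9, 13, 17, 21, 29, 33, 37, 41, 49]
```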
|
[
{
"math_id": 0,
"text": "a_1=1,d=4"
},
{
"math_id": 1,
"text": "a_n=a_{n-1}+4"
}
] |
https://en.wikipedia.org/wiki?curid=15626235
|
15627340
|
Wi-Fi positioning system
|
Geolocation system
Wi-Fi positioning system (WPS, WiPS or WFPS) is a geolocation system that uses the characteristics of nearby Wi‑Fi access points to discover where a device is located.
It is used where satellite navigation such as GPS is inadequate due to various causes including multipath and signal blockage indoors, or where acquiring a satellite fix would take too long. Such systems include assisted GPS, urban positioning services through hotspot databases, and indoor positioning systems. Wi-Fi positioning takes advantage of the rapid growth in the early 21st century of wireless access points in urban areas.
The most common technique for positioning using wireless access points is based on a rough proxy for the strength of the received signal ("received signal strength indication", or "RSSI") and the method of "fingerprinting". Typically a wireless access point is identified by its SSID and MAC address, and these data are compared to a database of supposed locations of access points so identified. The accuracy depends on the accuracy of the database (e.g. if an access point has moved its entry is inaccurate), and the precision depends on the number of discovered nearby access points with (accurate) entries in the database and the precisions of those entries. The access point location database gets filled by correlating mobile device location data (determined by other systems, such as Galileo or GPS) with Wi‑Fi access point MAC addresses. The possible signal fluctuations that may occur can increase errors and inaccuracies in the path of the user. To minimize fluctuations in the received signal, there are certain techniques that can be applied to filter the noise.
In the case of low precision, some techniques have been proposed to merge the Wi-Fi traces with other data sources such as geographical information and time constraints (i.e., time geography).
Motivation and applications.
Accurate indoor localization is becoming more important for Wi‑Fi–based devices due to the increased use of augmented reality, social networking, health care monitoring, personal tracking, inventory control and other indoor location-aware applications.
In wireless security, it is an important method used to locate and map rogue access points.
The popularity and low price of Wi-Fi network interface cards is an attractive incentive to use Wi-Fi as the basis for a localization system and significant research has been done in this area in the past 15 years.
Problem statement and basic concepts.
The problem of Wi‑Fi–based indoor localization of a device is that of determining the position of client devices with respect to access points. Many techniques exist to accomplish this, and these may be classified based on the four different criteria they use: "received signal strength indication" ("RSSI"), "fingerprinting", "angle of arrival" ("AoA") and "time of flight" ("ToF").
In most cases the first step to determine a device's position is to determine the distance between the target client device and a few access points. With the known distances between the target device and access points, trilateration algorithms may be used to determine the relative position of the target device, using the known position of access points as a reference. Alternatively, the angles of arriving signals at a target client device can be employed to determine the device's location based on triangulation algorithms.
A combination of these techniques may be used to improve the precision of a system.
Techniques.
Signal strength.
RSSI localization techniques are based on measuring rough relative signal strength at a client device from several different access points, and then combining this information with a propagation model to determine the distance between the client device and the access points. Trilateration (sometimes called multilateration) techniques can be used to calculate the estimated client device position relative to the expected position of access points.
Though this is one of the cheapest and easiest methods to implement, its disadvantage is that it does not provide very good precision (a median error of 2–4 m), because RSSI measurements tend to fluctuate with changes in the environment and multipath fading.
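A minimal sketch of this pipeline is shown below; the log-distance path-loss parameters (reference RSSI at 1 m and path-loss exponent) and the access-point coordinates are illustrative assumptions, and the trilateration step uses a simple linearised least-squares solution.

```python
import numpy as np

def rssi_to_distance(rssi, rssi_at_1m=-40.0, path_loss_exp=3.0):
    """Log-distance path-loss model; the parameters are illustrative assumptions."""
    return 10 ** ((rssi_at_1m - rssi) / (10 * path_loss_exp))

def trilaterate(ap_positions, distances):
    """Linearised least-squares position estimate from three or more APs."""
    p = np.asarray(ap_positions, float)
    d = np.asarray(distances, float)
    # Subtract the last circle equation from the others to linearise.
    A = 2 * (p[:-1] - p[-1])
    b = (d[-1] ** 2 - d[:-1] ** 2
         + (p[:-1] ** 2).sum(axis=1) - (p[-1] ** 2).sum())
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Illustrative usage: three APs at known positions, three measured RSSI values.
aps = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dists = [rssi_to_distance(r) for r in (-58, -62, -66)]
print(trilaterate(aps, dists))
```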
Cisco Systems uses RSSI to locate devices through its access points. Access points collect the location data and update the location on the Cisco cloud called "Cisco DNA Spaces".
Monte Carlo sampling.
Monte Carlo sampling is a statistical technique used in indoor Wi-Fi mapping to estimate the location of wireless nodes. The process involves creating wireless signal strength maps using a two-step parametric and measurement-driven ray-tracing approach. This accounts for the absorption and reflection characteristics of various obstacles in the indoor environment.
The location estimates are then computed using Bayesian filtering on sample sets derived by Monte Carlo sampling. This method has been found to provide good location estimates of users with sub-room precision using received signal strength indication (RSSI) readings from a single access point.
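A generic Bayesian-filtering step over Monte Carlo samples might look like the sketch below; it is not the specific algorithm of the work described above, and the signal-strength map function, noise level and example numbers are assumptions.

```python
import numpy as np

def particle_filter_update(particles, weights, rssi_obs, rssi_map, sigma=4.0):
    """One Bayesian-filtering step over Monte Carlo samples (particles).

    particles : (N, 2) candidate positions
    weights   : (N,) current weights
    rssi_obs  : observed RSSI (dBm) from one access point
    rssi_map  : function mapping a position to the predicted RSSI there
                (e.g. from a ray-traced signal-strength map); assumed given
    sigma     : assumed RSSI measurement noise (dB)
    """
    predicted = np.array([rssi_map(p) for p in particles])
    likelihood = np.exp(-0.5 * ((rssi_obs - predicted) / sigma) ** 2)
    weights = weights * likelihood
    weights /= weights.sum()
    estimate = (weights[:, None] * particles).sum(axis=0)   # posterior mean
    return weights, estimate

# Illustrative usage with a toy log-distance map centred on an AP at (5, 5):
rng = np.random.default_rng(0)
parts = rng.uniform(0, 10, size=(500, 2))
w = np.full(500, 1 / 500)
toy_map = lambda pos: -40 - 30 * np.log10(1 + np.linalg.norm(pos - np.array([5.0, 5.0])))
w, est = particle_filter_update(parts, w, rssi_obs=-55.0, rssi_map=toy_map)
print(est)
```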
Fingerprinting.
Traditional fingerprinting is also RSSI-based, but it simply relies on the recording of the signal strength from several access points in range and storing this information in a database along with the known coordinates of the client device in an offline phase. This information can be deterministic or probabilistic. During the online tracking phase, the current RSSI vector at an unknown location is compared to those stored in the fingerprint and the closest match is returned as the estimated user location. Such systems may provide a median accuracy of 0.6m and tail accuracy of 1.3m.
Its main disadvantage is that any changes to the environment, such as adding or removing furniture or buildings, may change the "fingerprint" that corresponds to each location, requiring an update to the fingerprint database. However, integration with other sensors such as cameras can be used in order to deal with a changing environment.
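A minimal k-nearest-neighbour sketch of the online matching phase follows; the fingerprint database and RSSI values are purely illustrative.

```python
import numpy as np

def knn_fingerprint_locate(rssi_vector, fingerprints, k=3):
    """Estimate a position by k-nearest-neighbour matching of RSSI vectors.

    fingerprints : list of (rssi_vector, (x, y)) pairs collected offline
    rssi_vector  : online measurement, same AP order as the fingerprints
    """
    obs = np.asarray(rssi_vector, float)
    dists = [np.linalg.norm(obs - np.asarray(fp, float)) for fp, _ in fingerprints]
    nearest = np.argsort(dists)[:k]
    coords = np.array([fingerprints[i][1] for i in nearest], float)
    return coords.mean(axis=0)        # average the k best-matching positions

# Illustrative offline database of three fingerprinted locations:
db = [((-50, -60, -70), (0.0, 0.0)),
      ((-55, -58, -68), (2.0, 0.0)),
      ((-65, -52, -60), (4.0, 3.0))]
print(knn_fingerprint_locate((-54, -59, -69), db, k=2))
```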
Angle of arrival.
With the advent of MIMO Wi-Fi interfaces, which use multiple antennas, it is possible to estimate the AoA of the multipath signals received at the antenna arrays in the access points, and apply triangulation to calculate the location of client devices. SpotFi, ArrayTrack and LTEye are proposed solutions which employ this kind of technique.
Typical computation of the AoA is done with the MUSIC algorithm. Assuming an antenna array of formula_0 antennas equally spaced by a distance of formula_1 and a signal arriving at the antenna array through formula_2 propagation paths, an additional distance of formula_3 is traveled by the signal to reach the second antenna of the array.
Considering that the formula_4-th propagation path arrives with angle formula_5 with respect to the normal of the antenna array of the access point, formula_6 is the attenuation experienced at any antenna of the array. The attenuation is the same in every antenna, except for a phase shift which changes for every antenna due to the extra distance traveled by the signal. This means that the signal arrives with an additional phase of
formula_7
at the second antenna and
formula_8
at the formula_9-th antenna.
Therefore, the following complex exponential can be used as a simplified representation of the phase shifts experienced by each antenna as a function of the AoA of the propagation path:
formula_10
The AoA can then be expressed as the vector formula_11 of received signals due to the formula_4-th propagation path, where formula_12 is the steering vector and given by:formula_13There is one steering vector for each propagation path, and the steering matrix formula_14 (of dimensions formula_15) is then defined as:formula_16and the received signal vector formula_17 is:formula_18where formula_19 is the vector complex attenuations along the formula_2 paths. OFDM transmits data over multiple different sub carriers, so the measured received signals formula_17 corresponding to each sub carrier form the matrix formula_20 expressed as:formula_21The matrix formula_20 is given by the channel state information (CSI) matrix which can be extracted from modern wireless cards with special tools such as the Linux 802.11n CSI Tool.
This is where the MUSIC algorithm is applied: first the eigenvectors of formula_22 (where formula_23 is the conjugate transpose of formula_20) are computed, and the vectors corresponding to eigenvalue zero are used to calculate the steering vectors and the matrix formula_14. The AoAs can then be deduced from this matrix and used to estimate the position of the client device through triangulation.
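A compact sketch of this MUSIC step is given below; it assumes the CSI matrix, antenna spacing, carrier frequency and number of propagation paths are known, and omits the refinements used by systems such as SpotFi.

```python
import numpy as np

def music_spectrum(X, d, f, thetas, num_paths):
    """MUSIC pseudospectrum over candidate angles of arrival.

    X         : M x S complex matrix of received signals (rows: antennas,
                columns: e.g. OFDM subcarriers), as in the CSI matrix above
    d         : antenna spacing in metres, f : carrier frequency in Hz
    thetas    : candidate AoAs in radians
    num_paths : assumed number of propagation paths (must be < M)
    """
    c = 3e8
    M = X.shape[0]
    R = X @ X.conj().T                       # M x M correlation matrix
    eigvals, eigvecs = np.linalg.eigh(R)     # eigenvalues in ascending order
    En = eigvecs[:, : M - num_paths]         # noise subspace (smallest eigenvalues)
    spectrum = []
    for theta in thetas:
        # Steering vector a(theta) with phase shifts across the array.
        a = np.exp(-1j * 2 * np.pi * d * np.sin(theta) * f / c * np.arange(M))
        denom = np.linalg.norm(En.conj().T @ a) ** 2
        spectrum.append(1.0 / denom)
    return np.array(spectrum)                # peaks indicate likely AoAs
```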
Though this technique is usually more accurate than others, it may require special hardware in order to be deployed, such as an array of six to eight antennas or rotating antennas. SpotFi proposes the use of a superresolution algorithm which takes advantage of the number of measurements taken by each of the antennas of the Wi-Fi cards with only three antennas, and also incorporates ToF-based localization to improve its accuracy.
Time of flight.
The time-of-flight (ToF) localization approach uses timestamps provided by the wireless interfaces to calculate the ToF of signals, and then uses this information to estimate the distance and relative position of a client device with respect to access points. The granularity of such time measurements is in the order of nanoseconds, and systems which use this technique have reported localization errors in the order of 2 m. Typical applications for this technology are tagging and locating assets in buildings, for which room-level accuracy (~3 m) is usually enough.
The time measurements taken at the wireless interfaces are based on the fact that RF waves travel close to the speed of light, which remains nearly constant in most propagation media in indoor environments. Therefore, the signal propagation speed (and consequently the ToF) is not affected so much by the environment as the RSSI measurements are.
Unlike traditional ToF-based echo techniques, such as those used in RADAR systems, Wi-Fi echo techniques use regular data and acknowledgement communication frames to measure the ToF.
As in the RSSI approach, the ToF is used only to estimate the distance between the client device and access points. Then a trilateration technique can be used to calculate the estimated position of the device relative to the access points. The greatest challenges in the ToF approach consist in dealing with clock synchronization issues, noise, sampling artifacts and multipath channel effects. Some techniques use mathematical approaches to remove the need for clock synchronization.
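As a rough illustration of this last step, the sketch below performs least-squares trilateration from already-estimated distances, assuming the ToF-derived distances and the access point coordinates are known. The function name and the linearisation are illustrative choices, not a description of any particular system.
<syntaxhighlight lang="python">
import numpy as np

def trilaterate(ap_positions, distances):
    """Estimate a 2-D position from distances to known access points.

    ap_positions : (N, 2) array of access point coordinates (N >= 3)
    distances    : (N,) array of ToF-derived distance estimates

    Subtracting the first sphere equation from the others linearises the
    problem, which is then solved in the least-squares sense.
    """
    p = np.asarray(ap_positions, dtype=float)
    r = np.asarray(distances, dtype=float)
    A = 2 * (p[1:] - p[0])
    b = (r[0] ** 2 - r[1:] ** 2
         + np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2))
    est, *_ = np.linalg.lstsq(A, b, rcond=None)
    return est

# Example with three access points and noisy distance estimates:
# trilaterate([[0, 0], [10, 0], [0, 10]], [7.1, 7.0, 7.2])
</syntaxhighlight>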
More recently, the Wi-Fi Round Trip Time standard has provided fine ToF ranging capabilities to Wi‑Fi.
Privacy concerns.
Citing the privacy concerns arising from WPS, Google suggested a unified opt-out approach in which the owner of an access point deliberately excludes it from taking part in WPS-based location determination. Appending "_nomap" to a wireless access point's SSID excludes it from Google's WPS database. Mozilla honors _nomap as a method of opting out of its location service.
Public Wi-Fi location databases.
A number of public Wi-Fi location databases are available (only active projects):
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "M"
},
{
"math_id": 1,
"text": "d"
},
{
"math_id": 2,
"text": "L"
},
{
"math_id": 3,
"text": "d \\sin\\theta"
},
{
"math_id": 4,
"text": "k"
},
{
"math_id": 5,
"text": "\\theta_k"
},
{
"math_id": 6,
"text": "\\gamma_k"
},
{
"math_id": 7,
"text": "-2\\pi \\cdot d \\cdot \\sin(\\theta) \\cdot (f/c) \\cdot (2-1)"
},
{
"math_id": 8,
"text": "-2\\pi \\cdot d \\cdot \\sin(\\theta) \\cdot (f/c) \\cdot (m-1)"
},
{
"math_id": 9,
"text": "m"
},
{
"math_id": 10,
"text": "\\phi(\\theta_k) = \\exp(-j\\cdot 2\\pi\\cdot d\\cdot \\sin(\\theta_k)\\cdot f/c)"
},
{
"math_id": 11,
"text": "\\vec a(\\theta_k)\\gamma_k"
},
{
"math_id": 12,
"text": "\\vec a(\\theta_k)"
},
{
"math_id": 13,
"text": " \\vec a(\\theta_k) = [ 1,\\ \\phi(\\theta_k),\\ \\dots,\\ \\phi(\\theta_k)^{M-1}]^T"
},
{
"math_id": 14,
"text": "\\mathbf{A}"
},
{
"math_id": 15,
"text": "M \\cdot L"
},
{
"math_id": 16,
"text": "\\mathbf{A} = [\\vec a(\\theta_1), \\dots, \\vec a(\\theta_L)]"
},
{
"math_id": 17,
"text": "\\vec x"
},
{
"math_id": 18,
"text": "\\vec x = \\mathbf{A}\\vec \\Gamma"
},
{
"math_id": 19,
"text": "\\vec \\Gamma = [\\vec \\gamma_1 \\dots \\vec \\gamma_L]"
},
{
"math_id": 20,
"text": "\\mathbf{X}"
},
{
"math_id": 21,
"text": "\\mathbf{X} = [\\vec x_1 \\dots \\vec x_L] = \\mathbf{A} [\\vec \\Gamma_1 \\dots \\vec \\Gamma_L] = \\mathbf{AF}"
},
{
"math_id": 22,
"text": "\\mathbf{X}\\mathbf{X}^H"
},
{
"math_id": 23,
"text": "\\mathbf{X}^H"
}
] |
https://en.wikipedia.org/wiki?curid=15627340
|
1563701
|
Stag hunt
|
Conflict between safety and cooperation
In game theory, the stag hunt, sometimes referred to as the assurance game, trust dilemma or common interest game, describes a conflict between safety and social cooperation. The stag hunt problem originated with philosopher Jean-Jacques Rousseau in his "Discourse on Inequality". In the most common account of this dilemma, which is quite different from Rousseau's, two hunters must decide separately, and without the other knowing, whether to hunt a stag or a hare. However, both hunters know the only way to successfully hunt a stag is with the other's help. One hunter can catch a hare alone with less effort and less time, but it is worth far less than a stag and has much less meat. But both hunters would be better off if both choose the more ambitious and more rewarding goal of getting the stag, giving up some autonomy in exchange for the other hunter's cooperation and added might. This situation is often seen as a useful analogy for many kinds of social cooperation, such as international agreements on climate change.
The stag hunt differs from the prisoner's dilemma in that there are two pure-strategy Nash equilibria: one where both players cooperate, and one where both players defect. In the Prisoner's Dilemma, in contrast, despite the fact that both players cooperating is Pareto efficient, the only pure Nash equilibrium is when both players choose to defect.
An example of the payoff matrix for the stag hunt is pictured in Figure 2.
Formal definition.
Formally, a stag hunt is a game with two pure strategy Nash equilibria—one that is risk dominant and another that is payoff dominant. The payoff matrix in Figure 1 illustrates a generic stag hunt, where formula_0. Often, games with a similar structure but without a risk dominant Nash equilibrium are called assurance games. For instance, take "a" = 10, "b" = 5, "c" = 0, and "d" = 2: while (Hare, Hare) remains a Nash equilibrium, it is no longer risk dominant. Nonetheless many would call this game a stag hunt.
In addition to the pure strategy Nash equilibria there is one mixed strategy Nash equilibrium. This equilibrium depends on the payoffs, but the risk dominance condition places a bound on the mixed strategy Nash equilibrium: no payoffs (that satisfy the above conditions, including risk dominance) can generate a mixed strategy equilibrium where Stag is played with a probability lower than one half. The best response correspondences are pictured here.
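The bound follows directly from the indifference condition. The short Python sketch below (an illustration, not part of the original text) computes the probability of Stag in the symmetric mixed equilibrium, assuming the common convention that "a", "c", "b", "d" are the row player's payoffs for (Stag, Stag), (Stag, Hare), (Hare, Stag) and (Hare, Hare) respectively.
<syntaxhighlight lang="python">
from fractions import Fraction

def mixed_equilibrium_stag_prob(a, b, c, d):
    """Probability of playing Stag in the symmetric mixed Nash equilibrium
    of a generic stag hunt with a > b >= d > c (row payoffs assumed to be
    (Stag,Stag)=a, (Stag,Hare)=c, (Hare,Stag)=b, (Hare,Hare)=d).

    A player is indifferent when the opponent plays Stag with probability p:
        p*a + (1-p)*c == p*b + (1-p)*d   =>   p = (d-c) / ((a-b) + (d-c)).
    """
    return Fraction(d - c, (a - b) + (d - c))

# Stag hunt in which (Hare, Hare) is risk dominant (d - c >= a - b):
print(mixed_equilibrium_stag_prob(3, 2, 0, 2))   # 2/3 -> at least one half
# Assurance-game payoffs from the text (Hare is no longer risk dominant):
print(mixed_equilibrium_stag_prob(10, 5, 0, 2))  # 2/7
</syntaxhighlight>
Under risk dominance of (Hare, Hare), "d" − "c" ≥ "a" − "b", so the computed probability of Stag is always at least one half.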
The stag hunt and social cooperation.
Although most authors focus on the prisoner's dilemma as the game that best represents the problem of social cooperation, some authors believe that the stag hunt represents an equally (or more) interesting context in which to study cooperation and its problems (for an overview see ).
There is a substantial relationship between the stag hunt and the prisoner's dilemma. In biology many circumstances that have been described as prisoner's dilemma might also be interpreted as a stag hunt, depending on how fitness is calculated.
It is also the case that some human interactions that seem like prisoner's dilemmas may in fact be stag hunts. For example, suppose we have a prisoner's dilemma as pictured in Figure 3. The payoff matrix would need adjusting if players who defect against cooperators might be punished for their defection. For instance, if the expected punishment is −2, then the imposition of this punishment turns the above prisoner's dilemma into the stag hunt given at the introduction.
Examples of the stag hunt.
The original stag hunt dilemma is as follows: a group of hunters have tracked a large stag, and found it to follow a certain path. If all the hunters work together, they can kill the stag and all eat. If they are discovered, or do not cooperate, the stag will flee, and all will go hungry.
The hunters hide and wait along a path. An hour goes by, with no sign of the stag. Two, three, four hours pass, with no trace. A day passes. The stag may not pass every day, but the hunters are reasonably certain that it will come. However, a hare is seen by all hunters moving along the path.
If a hunter leaps out and kills the hare, he will eat, but the trap laid for the stag will be wasted and the other hunters will starve. There is no certainty that the stag will arrive; the hare is present. The dilemma is that if one hunter waits, he risks one of his fellows killing the hare for himself, sacrificing everyone else. This makes the risk twofold; the risk that the stag does not appear, and the risk that another hunter takes the kill.
In addition to the example suggested by Rousseau, David Hume provides a series of examples that are stag hunts. One example addresses two individuals who must row a boat. If both choose to row they can successfully move the boat. However, if one doesn't, the other wastes his effort. Hume's second example involves two neighbors wishing to drain a meadow. If they both work to drain it they will be successful, but if either fails to do his part the meadow will not be drained.
Several animal behaviors have been described as stag hunts. One is the coordination of slime molds. In times of stress, individual unicellular protists will aggregate to form one large body. Here if they all act together they can successfully reproduce, but success depends on the cooperation of many individual protozoa. Another example is the hunting practices of orcas (known as carousel feeding). Orcas cooperatively corral large schools of fish to the surface and stun them by hitting them with their tails. Since this requires that the fish have no way to escape, it requires the cooperation of many orcas.
Author James Cambias describes a solution to the game as the basis for an extraterrestrial civilization in his 2014 science fiction book "A Darkling Sea". Carol M. Rose argues that the stag hunt theory is useful in 'law and humanities' theory. In international law, countries are the participants in a stag hunt. They can, for example, work together to improve good corporate governance.
A stag hunt with pre-play communication.
Robert Aumann proposed: "Let us now change the scenario by permitting pre-play communication. On the face of it, it seems that the players can then 'agree' to play (c,c); though the agreement is not enforceable, it removes each player's doubt about the other one playing c". Aumann concluded that in this game "agreement has no effect, one way or the other." His argument is: "The information that such an agreement conveys is not that the players will keep it (since it is not binding), but that each wants the other to keep it." In this game "each player always prefers the other to play c, no matter what he himself plays. Therefore, an agreement to play (c,c) conveys no information about what the players will do, and cannot be considered self-enforcing." Weiss and Agassi wrote about this argument: "This we deem somewhat incorrect since it is an oversight of the agreement that may change the mutual expectations of players that the result of the game depends on... Aumann’s assertion that there is no a priori reason to expect agreement to lead to cooperation requires completion; at times, but only at times, there is a posteriori reason for that... How a given player will behave in a given game, thus, depends on the culture within which the game takes place".
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "a>b\\ge d>c"
}
] |
https://en.wikipedia.org/wiki?curid=1563701
|
15640469
|
Dickson polynomial
|
In mathematics, the Dickson polynomials, denoted "Dn"("x","α"), form a polynomial sequence introduced by L. E. Dickson (1897). They were rediscovered by Brewer in his study of Brewer sums and have at times, although rarely, been referred to as Brewer polynomials.
Over the complex numbers, Dickson polynomials are essentially equivalent to Chebyshev polynomials with a change of variable, and, in fact, Dickson polynomials are sometimes called Chebyshev polynomials.
Dickson polynomials are generally studied over finite fields, where they sometimes may not be equivalent to Chebyshev polynomials. One of the main reasons for interest in them is that for fixed "α", they give many examples of "permutation polynomials"; polynomials acting as permutations of finite fields.
Definition.
First kind.
For integer "n" > 0 and α in a commutative ring R with identity (often chosen to be the finite field F"q"
GF("q")) the Dickson polynomials (of the first kind) over R are given by
formula_0
The first few Dickson polynomials are
formula_1
They may also be generated by the recurrence relation for "n" ≥ 2,
formula_2
with the initial conditions "D"0("x","α")
2 and "D"1("x","α")
"x".
The coefficients are given at several places in the OEIS with minute differences for the first two terms.
Second kind.
The Dickson polynomials of the second kind, "En"("x","α"), are defined by
formula_3
They have not been studied much, and have properties similar to those of Dickson polynomials of the first kind.
The first few Dickson polynomials of the second kind are
formula_4
They may also be generated by the recurrence relation for "n" ≥ 2,
formula_5
with the initial conditions "E"0("x","α")
1 and "E"1("x","α")
"x".
The coefficients are also given in the OEIS.
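Both kinds are easy to evaluate numerically from these recurrences. The following Python sketch (illustrative names; it uses only the recurrences above and the functional equation quoted in the Properties section below) computes "Dn" and "En" and spot-checks the functional equation.
<syntaxhighlight lang="python">
def dickson_D(n, x, alpha):
    """Evaluate the Dickson polynomial of the first kind D_n(x, alpha)
    via the recurrence D_n = x*D_{n-1} - alpha*D_{n-2}, D_0 = 2, D_1 = x."""
    d_prev, d_curr = 2, x
    if n == 0:
        return d_prev
    for _ in range(n - 1):
        d_prev, d_curr = d_curr, x * d_curr - alpha * d_prev
    return d_curr

def dickson_E(n, x, alpha):
    """Evaluate the Dickson polynomial of the second kind E_n(x, alpha)
    via the same recurrence with E_0 = 1, E_1 = x."""
    e_prev, e_curr = 1, x
    if n == 0:
        return e_prev
    for _ in range(n - 1):
        e_prev, e_curr = e_curr, x * e_curr - alpha * e_prev
    return e_curr

# Functional-equation check: D_n(u + alpha/u, alpha) == u**n + (alpha/u)**n
u, alpha, n = 3.0, 2.0, 5
assert abs(dickson_D(n, u + alpha / u, alpha) - (u ** n + (alpha / u) ** n)) < 1e-9
</syntaxhighlight>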
Properties.
The "Dn" are the unique monic polynomials satisfying the functional equation
formula_6
where "α" ∈ F"q" and "u" ≠ 0 ∈ F"q"2.
They also satisfy a composition rule,
formula_7
The "En" also satisfy a functional equation
formula_8
for "y" ≠ 0, "y"2 ≠ "α", with "α" ∈ F"q" and "y" ∈ F"q"2.
The Dickson polynomial "y"
"Dn" is a solution of the ordinary differential equation
formula_9
and the Dickson polynomial "y"
"En" is a solution of the differential equation
formula_10
Their ordinary generating functions are
formula_11
Links to other polynomials.
By the recurrence relation above, Dickson polynomials are Lucas sequences. Specifically, for "α" = −1, the Dickson polynomials of the first kind are the Lucas polynomials, and the Dickson polynomials of the second kind are the Fibonacci polynomials (up to an index shift).
By the composition rule above, when α is idempotent, composition of Dickson polynomials of the first kind is commutative.
The Dickson polynomials with parameter "α" = 0 give monomials:
formula_12
The Dickson polynomials with parameter "α" = 1 are related to Chebyshev polynomials "Tn"("x") = cos ("n" arccos "x") of the first kind by
formula_13
Permutation polynomials and Dickson polynomials.
A permutation polynomial (for a given finite field) is one that acts as a permutation of the elements of the finite field.
The Dickson polynomial "Dn"("x", α) (considered as a function of "x" with α fixed) is a permutation polynomial for the field with "q" elements if and only if "n" is coprime to "q"2 − 1.
Fried proved that any integral polynomial that is a permutation polynomial for infinitely many prime fields is a composition of Dickson polynomials and linear polynomials (with rational coefficients). This assertion has become known as Schur's conjecture, although in fact Schur did not make this conjecture. Since Fried's paper contained numerous errors, a corrected account was given by Turnwald, and subsequently Müller gave a simpler proof along the lines of an argument due to Schur.
Further, it was proved that any permutation polynomial over the finite field F"q" whose degree is simultaneously coprime to "q" and less than "q"1/4 must be a composition of Dickson polynomials and linear polynomials.
Generalization.
Dickson polynomials of both kinds over finite fields can be thought of as initial members of a sequence of generalized Dickson polynomials referred to as Dickson polynomials of the ("k" + 1)th kind. Specifically, for "α" ≠ 0 ∈ F"q" with "q" = "pe" for some prime p and any integers "n" ≥ 0 and 0 ≤ "k" < "p", the nth Dickson polynomial of the ("k" + 1)th kind over F"q", denoted by "D""n","k"("x","α"), is defined by
formula_14
and
formula_15
"D""n",0("x","α")
"Dn"("x","α") and "D""n",1("x","α")
"En"("x","α"), showing that this definition unifies and generalizes the original polynomials of Dickson.
The significant properties of the Dickson polynomials also generalize:
formula_16
with the initial conditions "D"0,"k"("x","α")
2 − "k" and "D"1,"k"("x","α")
"x".
formula_17
where "y" ≠ 0, "y"2 ≠ "α".
formula_18
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "D_n(x,\\alpha)=\\sum_{i=0}^{\\left\\lfloor \\frac{n}{2} \\right\\rfloor}\\frac{n}{n-i} \\binom{n-i}{i} (-\\alpha)^i x^{n-2i} \\,."
},
{
"math_id": 1,
"text": "\\begin{align}\nD_1(x,\\alpha) &= x \\\\\nD_2(x,\\alpha) &= x^2 - 2\\alpha \\\\\nD_3(x,\\alpha) &= x^3 - 3x\\alpha \\\\\nD_4(x,\\alpha) &= x^4 - 4x^2\\alpha + 2\\alpha^2 \\\\\nD_5(x,\\alpha) &= x^5 - 5x^3\\alpha + 5x\\alpha^2 \\,.\n\\end{align}"
},
{
"math_id": 2,
"text": "D_n(x,\\alpha) = xD_{n-1}(x,\\alpha)-\\alpha D_{n-2}(x,\\alpha) \\,,"
},
{
"math_id": 3,
"text": "E_n(x,\\alpha)=\\sum_{i=0}^{\\left\\lfloor \\frac{n}{2} \\right\\rfloor}\\binom{n-i}{i} (-\\alpha)^i x^{n-2i}. "
},
{
"math_id": 4,
"text": "\\begin{align}\nE_0(x,\\alpha) &= 1 \\\\\nE_1(x,\\alpha) &= x \\\\\nE_2(x,\\alpha) &= x^2 - \\alpha \\\\\nE_3(x,\\alpha) &= x^3 - 2x\\alpha \\\\\nE_4(x,\\alpha) &= x^4 - 3x^2\\alpha + \\alpha^2 \\,.\n\\end{align}"
},
{
"math_id": 5,
"text": "E_n(x,\\alpha) = xE_{n-1}(x,\\alpha)-\\alpha E_{n-2}(x,\\alpha) \\,,"
},
{
"math_id": 6,
"text": "D_n\\left(u + \\frac{\\alpha}{u},\\alpha\\right) = u^n + \\left(\\frac{\\alpha}{u}\\right)^n, "
},
{
"math_id": 7,
"text": "D_{mn}(x,\\alpha) = D_m\\bigl(D_n(x,\\alpha),\\alpha^n\\bigr) \\,= D_n\\bigl(D_m(x,\\alpha),\\alpha^m\\bigr) \\, . "
},
{
"math_id": 8,
"text": "E_n\\left(y + \\frac{\\alpha}{y}, \\alpha\\right) = \\frac{y^{n+1} - \\left(\\frac{\\alpha}{y}\\right)^{n+1}}{y - \\frac{\\alpha}{y}} \\,,"
},
{
"math_id": 9,
"text": "\\left(x^2-4\\alpha\\right)y'' + xy' - n^2y=0 \\,, "
},
{
"math_id": 10,
"text": "\\left(x^2-4\\alpha\\right)y'' + 3xy' - n(n+2)y=0 \\,. "
},
{
"math_id": 11,
"text": "\\begin{align}\n\\sum_n D_n(x,\\alpha)z^n &= \\frac{2-xz}{1-xz+\\alpha z^2} \\\\\n\\sum_n E_n(x,\\alpha)z^n &= \\frac{1}{1-xz+\\alpha z^2} \\,.\n\\end{align}"
},
{
"math_id": 12,
"text": "D_n(x,0) = x^n \\, . "
},
{
"math_id": 13,
"text": "D_n(2x, 1) = 2T_n(x) \\,."
},
{
"math_id": 14,
"text": "D_{0,k}(x,\\alpha) = 2 - k"
},
{
"math_id": 15,
"text": "D_{n,k}(x,\\alpha)=\\sum_{i=0}^{\\left\\lfloor \\frac{n}{2} \\right\\rfloor}\\frac{n - ki}{n-i}\\binom{n-i}{i} (-\\alpha)^i x^{n-2i} \\,. "
},
{
"math_id": 16,
"text": "D_{n,k}(x,\\alpha) = xD_{n-1,k}(x,\\alpha)-\\alpha D_{n-2,k}(x,\\alpha)\\,,"
},
{
"math_id": 17,
"text": "\nD_{n,k}\\left(y + \\alpha y^{-1}, \\alpha\\right) = \\frac{y^{2n} +k\\alpha y^{2n-2} + \\cdots +k\\alpha^{n-1}y^2 + \\alpha^n}{y^n} = \\frac{y^{2n} + {\\alpha}^n}{y^n} + \\left(\\frac{k\\alpha}{y^n} \\right) \\frac{y^{2n} - {\\alpha}^{n-1}y^2}{y^2 - \\alpha} \\,,"
},
{
"math_id": 18,
"text": "\\sum_{n=0}^{\\infty} D_{n,k}(x,\\alpha)z^n = \\frac{2 - k + (k-1)xz}{1 - xz + \\alpha z^2} \\,."
}
] |
https://en.wikipedia.org/wiki?curid=15640469
|
15641015
|
Permutation polynomial
|
In mathematics, a permutation polynomial (for a given ring) is a polynomial that acts as a permutation of the elements of the ring, i.e. the map formula_0 is a bijection. In case the ring is a finite field, the Dickson polynomials, which are closely related to the Chebyshev polynomials, provide examples. Over a finite field, every function, so in particular every permutation of the elements of that field, can be written as a polynomial function.
In the case of finite rings Z/"n"Z, such polynomials have also been studied and applied in the interleaver component of error detection and correction algorithms.
Single variable permutation polynomials over finite fields.
Let F"q" = GF("q") be the finite field of characteristic p, that is, the field having q elements where "q" = "p""e" for some prime p. A polynomial f with coefficients in F"q" (symbolically written as "f" ∈ F"q"["x"]) is a "permutation polynomial" of F"q" if the function from F"q" to itself defined by formula_1 is a permutation of F"q".
Due to the finiteness of F"q", this definition can be expressed in several equivalent ways:
A characterization of which polynomials are permutation polynomials is given by Hermite's criterion: "f" ∈ F"q"["x"] is a permutation polynomial of F"q" if and only if the following two conditions hold:
If "f"("x") is a permutation polynomial defined over the finite field GF("q"), then so is "g"("x") = "a" "f"("x" + "b") + "c" for all "a" ≠ 0, "b" and c in GF("q"). The permutation polynomial "g"("x") is in normalized form if "a", "b" and c are chosen so that "g"("x") is monic, "g"(0) = 0 and (provided the characteristic p does not divide the degree n of the polynomial) the coefficient of "x""n"−1 is 0.
There are many open questions concerning permutation polynomials defined over finite fields.
Small degree.
Hermite's criterion is computationally intensive and can be difficult to use in making theoretical conclusions. However, Dickson was able to use it to find all permutation polynomials of degree at most five over all finite fields. These results are:
A list of all monic permutation polynomials of degree six in normalized form is also available in the literature.
Some classes of permutation polynomials.
Beyond the above examples, the following list, while not exhaustive, contains almost all of the known major classes of permutation polynomials over finite fields.
The Dickson polynomials "D""n"("x", "a"), given by formula_4, form one such class; they can also be obtained from the recursion
formula_5
with the initial conditions formula_6 and formula_7.
The first few Dickson polynomials are formula_8, formula_9, formula_10, and formula_11.
If "a" ≠ 0 and "n" > 1 then "D""n"("x", "a") permutes GF("q") if and only if ("n", "q"2 − 1) = 1. If "a" = 0 then "D""n"("x", 0) = "x""n" and the previous result holds.
The linearized polynomials that are permutation polynomials over GF("q""r") form a group under the operation of composition modulo formula_14, which is known as the Betti-Mathieu group, isomorphic to the general linear group GL("r", F"q").
Exceptional polynomials.
An exceptional polynomial over GF("q") is a polynomial in F"q"["x"] which is a permutation polynomial on GF("q""m") for infinitely many m.
A permutation polynomial over GF("q") of degree at most "q"1/4 is exceptional over GF("q").
Every permutation of GF("q") is induced by an exceptional polynomial.
If a polynomial with integer coefficients (i.e., in ℤ["x"]) is a permutation polynomial over GF("p") for infinitely many primes p, then it is the composition of linear and Dickson polynomials. (See Schur's conjecture below).
Geometric examples.
In finite geometry coordinate descriptions of certain point sets can provide examples of permutation polynomials of higher degree. In particular, the points forming an oval in a finite projective plane, PG(2,"q") with "q" a power of 2, can be coordinatized in such a way that the relationship between the coordinates is given by an "o-polynomial", which is a special type of permutation polynomial over the finite field GF("q").
Computational complexity.
The problem of testing whether a given polynomial over a finite field is a permutation polynomial can be solved in polynomial time.
Permutation polynomials in several variables over finite fields.
A polynomial formula_17 is a permutation polynomial in n variables over formula_18 if the equation formula_19 has exactly formula_20 solutions in formula_21 for each formula_22.
Quadratic permutation polynomials (QPP) over finite rings.
For the finite ring Z/"n"Z one can construct quadratic permutation polynomials. Actually it is possible if and only if "n" is divisible by "p"2 for some prime number "p". The construction is surprisingly simple, nevertheless it can produce permutations with certain good properties. That is why it has been used in the interleaver component of turbo codes in 3GPP Long Term Evolution mobile telecommunication standard (see 3GPP technical specification 36.212 e.g. page 14 in version 8.8.0).
Simple examples.
Consider formula_23 for the ring Z/4Z.
One sees: formula_24; formula_25; formula_26; formula_27,
so the polynomial defines the permutation
formula_28
Consider the same polynomial formula_23 for the other ring Z/8Z.
One sees: formula_24; formula_25; formula_26; formula_29; formula_30; formula_31; formula_32; formula_33, so the polynomial defines the permutation
formula_34
Rings Z/"pk"Z.
Consider formula_35 for the ring Z/"pk"Z.
Lemma: for "k"=1 (i.e. Z/"p"Z) such polynomial defines a permutation only in the case "a"=0 and "b" not equal to zero. So the polynomial is not quadratic, but linear.
Lemma: for "k">1, "p">2 (Z/"pk"Z) such polynomial defines a permutation if and only if formula_36 and formula_37.
Rings Z/"n"Z.
Consider formula_38, where "pt" are prime numbers.
Lemma: any polynomial formula_39 defines a permutation for the ring Z/"n"Z if and only if all the polynomials formula_40 define permutations for all the rings formula_41, where formula_42 are the remainders of formula_43 modulo formula_44.
As a corollary, one can construct plenty of quadratic permutation polynomials using the following simple construction.
Consider formula_45 and assume that "k"1 > 1.
Consider formula_46 such that formula_47 but formula_48; assume that formula_49 for "i" > 1, and that formula_50 for all "i" = 1, ..., "l".
(For example, one can take formula_51 and formula_52).
Then such polynomial defines a permutation.
To see this, observe that for all primes "pi", "i" > 1, the reduction of this quadratic polynomial modulo "pi" is actually a linear polynomial and hence defines a permutation for trivial reasons. For the first prime, the lemma discussed previously shows that it defines a permutation.
For example, consider Z/12Z and polynomial formula_53.
It defines a permutation
formula_54
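The examples above are easy to verify by brute force. The Python sketch below (illustrative only, small moduli) checks the examples from the text, together with one modulus that is square-free and therefore, by the criterion above, admits no quadratic permutation polynomial.
<syntaxhighlight lang="python">
def is_permutation_poly(coeffs, n):
    """Brute-force check that the polynomial with the given coefficients
    (constant term first) permutes the ring Z/nZ."""
    values = {sum(c * pow(x, i, n) for i, c in enumerate(coeffs)) % n
              for x in range(n)}
    return len(values) == n

# The examples from the text:
print(is_permutation_poly([0, 1, 2], 4))    # 2x^2 + x over Z/4Z  -> True
print(is_permutation_poly([0, 1, 2], 8))    # 2x^2 + x over Z/8Z  -> True
print(is_permutation_poly([0, 1, 6], 12))   # 6x^2 + x over Z/12Z -> True
# Square-free modulus (no p^2 divides 6), so no quadratic works:
print(is_permutation_poly([0, 1, 3], 6))    # 3x^2 + x over Z/6Z  -> False
</syntaxhighlight>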
Higher degree polynomials over finite rings.
A polynomial "g"("x") for the ring Z/"pkZ is a permutation polynomial if and only if it permutes the finite field Z/"pZ and formula_55 for all "x" in Z/"pk"Z, where "g"′("x") is the formal derivative of "g"("x").
Schur's conjecture.
Let "K" be an algebraic number field with "R" the ring of integers. The term "Schur's conjecture" refers to the assertion that, if a polynomial "f" defined over "K" is a permutation polynomial on "R"/"P" for infinitely many prime ideals "P", then "f" is the composition of Dickson polynomials, degree-one polynomials, and polynomials of the form "x""k". In fact, Schur did not make any conjecture in this direction. The notion that he did is due to Fried, who gave a flawed proof of a false version of the result. Correct proofs have been given by Turnwald and Müller.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "x \\mapsto g(x)"
},
{
"math_id": 1,
"text": "c \\mapsto f(c)"
},
{
"math_id": 2,
"text": " c \\mapsto f(c)"
},
{
"math_id": 3,
"text": "t \\not \\equiv 0 \\!\\pmod p"
},
{
"math_id": 4,
"text": "D_n(x,a)=\\sum_{j=0}^{\\lfloor n/2\\rfloor}\\frac{n}{n-j} \\binom{n-j}{j} (-a)^j x^{n-2j}. "
},
{
"math_id": 5,
"text": "D_n(x,a) = xD_{n-1}(x,a)-a D_{n-2}(x,a), "
},
{
"math_id": 6,
"text": "D_0(x,a) = 2"
},
{
"math_id": 7,
"text": "D_1(x,a) = x"
},
{
"math_id": 8,
"text": " D_2(x,a) = x^2 - 2a "
},
{
"math_id": 9,
"text": " D_3(x,a) = x^3 - 3ax"
},
{
"math_id": 10,
"text": " D_4(x,a) = x^4 - 4ax^2 + 2a^2 "
},
{
"math_id": 11,
"text": " D_5(x,a) = x^5 - 5ax^3 + 5a^2 x."
},
{
"math_id": 12,
"text": "L(x) = \\sum_{s=0}^{r-1} \\alpha_s x^{q^s},"
},
{
"math_id": 13,
"text": " \\det\\left ( \\alpha_{i-j}^{q^j} \\right ) \\neq 0 \\quad (i, j= 0,1,\\ldots,r-1)."
},
{
"math_id": 14,
"text": "x^{q^r} - x"
},
{
"math_id": 15,
"text": " x^{(q + m - 1)/m} + ax "
},
{
"math_id": 16,
"text": " x^r \\left(x^d - a\\right)^{\\left(p^n - 1\\right)/d}"
},
{
"math_id": 17,
"text": "f \\in \\mathbb{F}_q[x_1,\\ldots,x_n]"
},
{
"math_id": 18,
"text": "\\mathbb{F}_q"
},
{
"math_id": 19,
"text": "f(x_1,\\ldots,x_n) = \\alpha"
},
{
"math_id": 20,
"text": "q^{n-1}"
},
{
"math_id": 21,
"text": "\\mathbb{F}_q^n"
},
{
"math_id": 22,
"text": "\\alpha \\in \\mathbb{F}_q"
},
{
"math_id": 23,
"text": " g(x) = 2x^2+x "
},
{
"math_id": 24,
"text": " g(0) = 0"
},
{
"math_id": 25,
"text": " g(1) = 3"
},
{
"math_id": 26,
"text": " g(2) = 2"
},
{
"math_id": 27,
"text": " g(3) = 1 "
},
{
"math_id": 28,
"text": "\\begin{pmatrix}\n0 &1 & 2 & 3 \\\\\n0 &3 & 2 & 1\n\\end{pmatrix} ."
},
{
"math_id": 29,
"text": " g(3) = 5"
},
{
"math_id": 30,
"text": " g(4) = 4"
},
{
"math_id": 31,
"text": " g(5) = 7"
},
{
"math_id": 32,
"text": " g(6) = 6"
},
{
"math_id": 33,
"text": " g(7) = 1"
},
{
"math_id": 34,
"text": "\\begin{pmatrix}\n0 &1 & 2 & 3 & 4 & 5 & 6 & 7 \\\\\n0 &3 & 2 & 5 & 4 & 7 & 6 & 1\n\\end{pmatrix} ."
},
{
"math_id": 35,
"text": " g(x) = ax^2+bx+c "
},
{
"math_id": 36,
"text": "a \\equiv 0 \\pmod p"
},
{
"math_id": 37,
"text": "b \\not \\equiv 0 \\pmod p"
},
{
"math_id": 38,
"text": "n=p_1^{k_1}p_2^{k_2}...p_l^{k_l}"
},
{
"math_id": 39,
"text": " g(x) = a_0+ \\sum_{0 < i \\leq M} a_i x^i "
},
{
"math_id": 40,
"text": " g_{p_t}(x) = a_{0,p_t}+ \\sum_{0 < i \\leq M} a_{i,p_t} x^i "
},
{
"math_id": 41,
"text": "Z/p_t^{k_t}Z"
},
{
"math_id": 42,
"text": "a_{j,p_t}"
},
{
"math_id": 43,
"text": "a_{j}"
},
{
"math_id": 44,
"text": "p_t^{k_t}"
},
{
"math_id": 45,
"text": "n = p_1^{k_1} p_2^{k_2} \\dots p_l^{k_l}"
},
{
"math_id": 46,
"text": "ax^2+bx"
},
{
"math_id": 47,
"text": " a= 0 \\bmod p_1"
},
{
"math_id": 48,
"text": " a\\ne 0 \\bmod p_1^{k_1}"
},
{
"math_id": 49,
"text": " a = 0 \\bmod p_i^{k_i}"
},
{
"math_id": 50,
"text": "b\\ne 0 \\bmod p_i"
},
{
"math_id": 51,
"text": " a=p_1 p_2^{k_2}...p_l^{k_l} "
},
{
"math_id": 52,
"text": "b=1"
},
{
"math_id": 53,
"text": "6x^2+x"
},
{
"math_id": 54,
"text": "\\begin{pmatrix}\n0 &1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & \\cdots \\\\\n0 &7 & 2 & 9 & 4 & 11 & 6 & 1 & 8 & \\cdots \n\\end{pmatrix} ."
},
{
"math_id": 55,
"text": "g'(x) \\ne 0 \\bmod p"
}
] |
https://en.wikipedia.org/wiki?curid=15641015
|
15641067
|
Super-recursive algorithm
|
Generalization of ordinary algorithms that compute more than Turing machines
In computability theory, super-recursive algorithms are posited as a generalization of hypercomputation: hypothetical algorithms that are more powerful, that is, compute more than Turing machines.
The term was introduced by Mark Burgin, whose book "Super-recursive algorithms" develops their theory and presents several mathematical models.
Burgin argues that super-recursive algorithms can be used to disprove the Church–Turing thesis. This point of view has been criticized within the mathematical community and is not widely accepted.
Definition.
Burgin (2005: 13) uses the term recursive algorithms for algorithms that can be implemented on Turing machines, and uses the word "algorithm" in a more general sense. Then a super-recursive class of algorithms is "a class of algorithms in which it is possible to compute functions not computable by any Turing machine" (Burgin 2005: 107).
Super-recursive algorithms are also related to algorithmic schemes, another novel concept from Burgin, which are more general than super-recursive algorithms. Burgin argues (2005: 115) that it is necessary to make a clear distinction between super-recursive algorithms and those algorithmic schemes that are not algorithms. Under this distinction, some types of hypercomputation are obtained by super-recursive algorithms.
Relation to the Church–Turing thesis.
The Church–Turing thesis in recursion theory relies on a particular definition of the term "algorithm". Based on his personal definitions that are more general than the one commonly used in recursion theory, Burgin argues that super-recursive algorithms disprove the Church–Turing thesis. He furthermore claims to prove that super-recursive algorithms could hypothetically provide even greater efficiency gains than using quantum algorithms.
Burgin's interpretation of super-recursive algorithms has encountered opposition in the mathematical community. One critic is logician Martin Davis, who argues that Burgin's claims have been well understood "for decades". Davis states,
"The present criticism is not about the mathematical discussion of these matters but only about the misleading claims regarding physical systems of the present and future."(Davis 2006: 128)
Davis disputes Burgin's claims that sets at level formula_0 of the arithmetical hierarchy can be called computable, saying
"It is generally understood that for a computational result to be useful one must be able to at least recognize that it is indeed the result sought." (Davis 2006: 128)
|
[
{
"math_id": 0,
"text": "\\Delta^0_2"
}
] |
https://en.wikipedia.org/wiki?curid=15641067
|
156411
|
Galois connection
|
Particular correspondence between two partially ordered sets
In mathematics, especially in order theory, a Galois connection is a particular correspondence (typically) between two partially ordered sets (posets). Galois connections find applications in various mathematical theories. They generalize the fundamental theorem of Galois theory about the correspondence between subgroups and subfields, discovered by the French mathematician Évariste Galois.
A Galois connection can also be defined on preordered sets or classes; this article presents the common case of posets.
The literature contains two closely related notions of "Galois connection". In this article, we will refer to them as (monotone) Galois connections and antitone Galois connections.
A Galois connection is rather weak compared to an order isomorphism between the involved posets, but every Galois connection gives rise to an isomorphism of certain sub-posets, as will be explained below.
The term Galois correspondence is sometimes used to mean a bijective "Galois connection"; this is simply an order isomorphism (or dual order isomorphism, depending on whether we take monotone or antitone Galois connections).
Definitions.
(Monotone) Galois connection.
Let ("A", ≤) and ("B", ≤) be two partially ordered sets. A "monotone Galois connection" between these posets consists of two monotone functions: "F" : "A" → "B" and "G" : "B" → "A", such that for all a in A and b in B, we have
"F"("a") ≤ "b" if and only if "a" ≤ "G"("b").
In this situation, F is called the lower adjoint of G and G is called the upper adjoint of "F". Mnemonically, the upper/lower terminology refers to where the function application appears relative to ≤. The term "adjoint" refers to the fact that monotone Galois connections are special cases of pairs of adjoint functors in category theory as discussed further below. Other terminology encountered here is left adjoint (respectively right adjoint) for the lower (respectively upper) adjoint.
An essential property of a Galois connection is that an upper/lower adjoint of a Galois connection "uniquely" determines the other:
"F"("a") is the least element with "a" ≤ "G"(~"b"), and
"G"("b") is the largest element with "F"(~"a") ≤ "b".
A consequence of this is that if F or G is bijective then each is the inverse of the other, i.e. "F" = "G" −1.
Given a Galois connection with lower adjoint F and upper adjoint G, we can consider the compositions "GF" : "A" → "A", known as the associated closure operator, and "FG" : "B" → "B", known as the associated kernel operator. Both are monotone and idempotent, and we have "a" ≤ "GF"("a") for all a in A and "FG"("b") ≤ "b" for all b in B.
A Galois insertion of B into A is a Galois connection in which the kernel operator FG is the identity on B, and hence G is an order isomorphism of B onto the set of closed elements GF[A] of A.
Antitone Galois connection.
The above definition is common in many applications today, and prominent in lattice and domain theory. However the original notion in Galois theory is slightly different. In this alternative definition, a Galois connection is a pair of "antitone", i.e. order-reversing, functions "F" : "A" → "B" and "G" : "B" → "A" between two posets A and B, such that
"b" ≤ "F"("a") if and only if "a" ≤ "G"("b").
The symmetry of F and G in this version erases the distinction between upper and lower, and the two functions are then called polarities rather than adjoints. Each polarity uniquely determines the other, since
"F"("a") is the largest element b with "a" ≤ "G"("b"), and
"G"("b") is the largest element a with "b" ≤ "F"("a").
The compositions "GF" : "A" → "A" and "FG" : "B" → "B" are the associated closure operators; they are monotone idempotent maps with the property "a" ≤ "GF"("a") for all a in A and "b" ≤ "FG"("b") for all b in B.
The implications of the two definitions of Galois connections are very similar, since an antitone Galois connection between A and B is just a monotone Galois connection between A and the order dual "B"op of B. All of the below statements on Galois connections can thus easily be converted into statements about antitone Galois connections.
Examples.
Bijections.
A pair of functions formula_0 and formula_1 that are each other's inverses forms a (trivial) Galois connection, as follows. Because the equality relation is reflexive, transitive and antisymmetric, it is, trivially, a partial order, making formula_2 and formula_3 partially ordered sets. Since formula_4 if and only if formula_5 we have a Galois connection.
Monotone Galois connections.
Floor; ceiling.
A monotone Galois connection between formula_6 the set of integers and formula_7 the set of real numbers, each with its usual ordering, is given by the usual embedding function of the integers into the reals and the floor function truncating a real number to the greatest integer less than or equal to it. The embedding of integers is customarily done implicitly, but to show the Galois connection we make it explicit. So let formula_8 denote the embedding function, with formula_9 while formula_10 denotes the floor function, so formula_11 The equivalence formula_12 then translates to
formula_13
This is valid because the variable formula_14 is restricted to the integers. The well-known properties of the floor function, such as formula_15 can be derived by elementary reasoning from this Galois connection.
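The adjunction can be spot-checked numerically; the following Python snippet (purely illustrative) samples random integers and reals and verifies the defining equivalence with "F" the inclusion of the integers into the reals:
<syntaxhighlight lang="python">
import math, random

# Spot-check the adjunction F(n) <= x  <=>  n <= floor(x)
# on random samples, where F embeds the integers into the reals.
for _ in range(10_000):
    n = random.randint(-100, 100)
    x = random.uniform(-100, 100)
    assert (n <= x) == (n <= math.floor(x))
</syntaxhighlight>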
The dual orderings give another monotone Galois connection, now with the ceiling function:
formula_16
Power set; implication and conjunction.
For an order-theoretic example, let U be some set, and let A and B both be the power set of U, ordered by inclusion. Pick a fixed subset L of U. Then the maps F and G, where "F"("M") = "L" ∩ "M" and "G"("N") = "N" ∪ ("U" \ "L"), form a monotone Galois connection, with F being the lower adjoint. A similar Galois connection whose lower adjoint is given by the meet (infimum) operation can be found in any Heyting algebra. Especially, it is present in any Boolean algebra, where the two mappings can be described by "F"("x") = ("a" ∧ "x") and "G"("y") = ("y" ∨ ¬"a") = ("a" ⇒ "y"). In logical terms: "implication from a" is the upper adjoint of "conjunction with a".
Lattices.
Further interesting examples for Galois connections are described in the article on completeness properties. Roughly speaking, it turns out that the usual functions ∨ and ∧ are lower and upper adjoints to the diagonal map "X" → "X" × "X". The least and greatest elements of a partial order are given by lower and upper adjoints to the unique function "X" → {1}. Going further, even complete lattices can be characterized by the existence of suitable adjoints. These considerations give some impression of the ubiquity of Galois connections in order theory.
Transitive group actions.
Let G act transitively on X and pick some point x in X. Consider
formula_17
the set of blocks containing x. Further, let formula_18 consist of the subgroups of G containing the stabilizer of x.
Then, the correspondence formula_19:
formula_20
is a monotone, one-to-one Galois connection. As a corollary, one can establish that doubly transitive actions have no blocks other than the trivial ones (singletons or the whole of X): this follows from the stabilizers being maximal in G in that case. See Doubly transitive group for further discussion.
Image and inverse image.
If "f" : "X" → "Y" is a function, then for any subset M of X we can form the image "F"("M"&hairsp;)
"f" "M"
{ "f" ("m") | "m" ∈ "M"} and for any subset N of Y we can form the inverse image "G"("N"&hairsp;)
"f" −1"N"
{"x" ∈ "X" | "f" ("x") ∈ "N"}. Then F and G form a monotone Galois connection between the power set of X and the power set of Y, both ordered by inclusion ⊆. There is a further adjoint pair in this situation: for a subset M of X, define "H"("M")
{"y" ∈ "Y" | "f" −1{"y"} ⊆ "M"}. Then G and H form a monotone Galois connection between the power set of Y and the power set of X. In the first Galois connection, G is the upper adjoint, while in the second Galois connection it serves as the lower adjoint.
In the case of a quotient map between algebraic objects (such as groups), this connection is called the lattice theorem: subgroups of G connect to subgroups of "G"/"N", and the closure operator on subgroups of G is given by "H" ↦ "HN".
Span and closure.
Pick some mathematical object X that has an underlying set, for instance a group, ring, vector space, etc. For any subset S of X, let "F"("S") be the smallest subobject of X that contains S, i.e. the subgroup, subring or subspace generated by S. For any subobject U of X, let "G"("U") be the underlying set of U. (We can even take X to be a topological space, let "F"("S") be the closure of S, and take as "subobjects of X" the closed subsets of X.) Now F and G form a monotone Galois connection between subsets of X and subobjects of X, if both are ordered by inclusion. F is the lower adjoint.
Syntax and semantics.
A very general comment of William Lawvere is that "syntax and semantics" are adjoint: take A to be the set of all logical theories (axiomatizations) reverse ordered by strength, and B the power set of the set of all mathematical structures. For a theory "T" ∈ "A", let Mod("T") be the set of all structures that satisfy the axioms T; for a set of mathematical structures "S" ∈ "B", let Th("S") be the minimum of the axiomatizations that approximate S (in first-order logic, this is the set of sentences that are true in all structures in S). We can then say that S is a subset of Mod("T") if and only if Th("S") logically entails T: the "semantics functor" Mod and the "syntax functor" Th form a monotone Galois connection, with semantics being the upper adjoint.
Antitone Galois connections.
Galois theory.
The motivating example comes from Galois theory: suppose "L"/"K" is a field extension. Let A be the set of all subfields of L that contain K, ordered by inclusion ⊆. If E is such a subfield, write Gal("L"/"E") for the group of field automorphisms of L that hold E fixed. Let B be the set of subgroups of Gal("L"/"K"), ordered by inclusion ⊆. For such a subgroup G, define Fix("G") to be the field consisting of all elements of L that are held fixed by all elements of G. Then the maps "E" ↦ Gal("L"/"E") and "G" ↦ Fix("G") form an antitone Galois connection.
Algebraic topology: covering spaces.
Analogously, given a path-connected topological space X, there is an antitone Galois connection between subgroups of the fundamental group "π"1("X") and path-connected covering spaces of X. In particular, if X is semi-locally simply connected, then for every subgroup G of "π"1("X"), there is a covering space with G as its fundamental group.
Linear algebra: annihilators and orthogonal complements.
Given an inner product space V, we can form the orthogonal complement "F"("X") of any subspace X of V. This yields an antitone Galois connection between the set of subspaces of V and itself, ordered by inclusion; both polarities are equal to F.
Given a vector space V and a subset X of V we can define its annihilator "F"("X"), consisting of all elements of the dual space "V"∗ of V that vanish on X. Similarly, given a subset Y of "V"∗, we define its annihilator "G"("Y") = { "x" ∈ "V" | "φ"("x") = 0 ∀"φ" ∈ "Y" }. This gives an antitone Galois connection between the subsets of V and the subsets of "V"∗.
Algebraic geometry.
In algebraic geometry, the relation between sets of polynomials and their zero sets is an antitone Galois connection.
Fix a natural number n and a field K and let A be the set of all subsets of the polynomial ring "K"["X"1, ..., "Xn"] ordered by inclusion ⊆, and let B be the set of all subsets of "K""n" ordered by inclusion ⊆. If S is a set of polynomials, define the variety of zeros as
formula_21
the set of common zeros of the polynomials in S. If U is a subset of "K""n", define "I"("U") as the ideal of polynomials vanishing on U, that is
formula_22
Then V and "I" form an antitone Galois connection.
The closure on "K"&hairsp;"n" is the closure in the Zariski topology, and if the field K is algebraically closed, then the closure on the polynomial ring is the radical of ideal generated by S.
More generally, given a commutative ring R (not necessarily a polynomial ring), there is an antitone Galois connection between radical ideals in the ring and Zariski closed subsets of the affine variety Spec("R").
More generally, there is an antitone Galois connection between ideals in the ring and subschemes of the corresponding affine variety.
Connections on power sets arising from binary relations.
Suppose X and Y are arbitrary sets and a binary relation R over X and Y is given. For any subset M of X, we define "F"("M") = { "y" ∈ "Y" | "mRy" ∀"m" ∈ "M" }. Similarly, for any subset N of Y, define "G"("N") = { "x" ∈ "X" | "xRn" ∀"n" ∈ "N" }. Then F and G yield an antitone Galois connection between the power sets of X and Y, both ordered by inclusion ⊆.
Up to isomorphism "all" antitone Galois connections between power sets arise in this way. This follows from the "Basic Theorem on Concept Lattices". Theory and applications of Galois connections arising from binary relations are studied in formal concept analysis. That field uses Galois connections for mathematical data analysis. Many algorithms for Galois connections can be found in the respective literature, e.g., in.
The general concept lattice in its primitive version incorporates both the monotone and antitone Galois connections to furnish its upper and lower bounds of nodes for the concept lattice, respectively.
Properties.
In the following, we consider a (monotone) Galois connection "f" = ( "f" ∗, "f"∗), where "f" ∗ : "A" → "B" is the lower adjoint as introduced above. Some helpful and instructive basic properties can be obtained immediately. By the defining property of Galois connections, "f" ∗("x") ≤ "f" ∗("x") is equivalent to "x" ≤ "f"∗( "f" ∗("x")), for all x in A. By a similar reasoning (or just by applying the duality principle for order theory), one finds that "f" ∗( "f"∗("y")) ≤ "y", for all y in B. These properties can be described by saying the composite "f" ∗∘ "f"∗ is "deflationary", while "f"∗∘ "f" ∗ is "inflationary" (or "extensive").
Now consider "x", "y" ∈ "A" such that "x" ≤ "y". Then using the above one obtains "x" ≤ "f"∗( "f" ∗("y")). Applying the basic property of Galois connections, one can now conclude that "f" ∗("x") ≤ "f" ∗("y"). But this just shows that "f" ∗ preserves the order of any two elements, i.e. it is monotone. Again, a similar reasoning yields monotonicity of "f"∗. Thus monotonicity does not have to be included in the definition explicitly. However, mentioning monotonicity helps to avoid confusion about the two alternative notions of Galois connections.
Another basic property of Galois connections is the fact that "f"∗( "f" ∗( "f"∗("x"))) = "f"∗("x"), for all x in B. Clearly we find that
"f"∗( "f" ∗( "f"∗("x"))) ≥ "f"∗("x").
because "f"∗∘ "f" ∗ is inflationary as shown above. On the other hand, since "f" ∗∘ "f"∗ is deflationary, while "f"∗ is monotonic, one finds that
"f"∗( "f" ∗( "f"∗("x"))) ≤ "f"∗("x").
This shows the desired equality. Furthermore, we can use this property to conclude that
"f" ∗( "f"∗( "f" ∗( "f"∗("x"))))
"f" ∗( "f"∗("x"))
and
"f"∗( "f" ∗( "f"∗( "f" ∗("x"))))
"f"∗( "f" ∗("x"))
i.e., "f" ∗∘ "f"∗ and "f"∗∘ "f" ∗ are idempotent.
It can be shown (see Blyth or Erné for proofs) that a function "f" is a lower (respectively upper) adjoint if and only if "f" is a residuated mapping (respectively residual mapping). Therefore, the notion of residuated mapping and monotone Galois connection are essentially the same.
Closure operators and Galois connections.
The above findings can be summarized as follows: for a Galois connection, the composite "f"∗∘ "f" ∗ is monotone (being the composite of monotone functions), inflationary, and idempotent. This states that "f"∗∘ "f" ∗ is in fact a closure operator on A. Dually, "f" ∗∘ "f"∗ is monotone, deflationary, and idempotent. Such mappings are sometimes called kernel operators. In the context of frames and locales, the composite "f"∗∘ "f" ∗ is called the nucleus induced by "f" . Nuclei induce frame homomorphisms; a subset of a locale is called a sublocale if it is given by a nucleus.
Conversely, any closure operator c on some poset A gives rise to the Galois connection with lower adjoint "f" ∗ being just the corestriction of c to the image of c (i.e. as a surjective mapping the closure system "c"("A")). The upper adjoint "f"∗ is then given by the inclusion of "c"("A") into A, that maps each closed element to itself, considered as an element of A. In this way, closure operators and Galois connections are seen to be closely related, each specifying an instance of the other. Similar conclusions hold true for kernel operators.
The above considerations also show that closed elements of A (elements x with "f"∗( "f" ∗("x")) = "x") are mapped to elements within the range of the kernel operator "f" ∗∘ "f"∗, and vice versa.
Existence and uniqueness of Galois connections.
Another important property of Galois connections is that lower adjoints preserve all suprema that exist within their domain. Dually, upper adjoints preserve all existing infima. From these properties, one can also conclude monotonicity of the adjoints immediately. The adjoint functor theorem for order theory states that the converse implication is also valid in certain cases: especially, any mapping between complete lattices that preserves all suprema is the lower adjoint of a Galois connection.
In this situation, an important feature of Galois connections is that one adjoint uniquely determines the other. Hence one can strengthen the above statement to guarantee that any supremum-preserving map between complete lattices is the lower adjoint of a unique Galois connection. The main property to derive this uniqueness is the following: For every x in A, "f" ∗("x") is the least element y of B such that "x" ≤ "f"∗("y"). Dually, for every y in B, "f"∗("y") is the greatest x in A such that "f" ∗("x") ≤ "y". The existence of a certain Galois connection now implies the existence of the respective least or greatest elements, no matter whether the corresponding posets satisfy any completeness properties. Thus, when one adjoint of a Galois connection is given, the other adjoint can be defined via this same property.
On the other hand, some monotone function "f" is a lower adjoint if and only if each set of the form { "x" ∈ "A" | "f"("x") ≤ "b" }, for b in B, contains a greatest element. Again, this can be dualized for the upper adjoint.
Galois connections as morphisms.
Galois connections also provide an interesting class of mappings between posets which can be used to obtain categories of posets. Especially, it is possible to compose Galois connections: given Galois connections ( "f" ∗, "f"∗) between posets A and B and ("g"∗, "g"∗) between B and C, the composite ("g"∗ ∘ "f" ∗, "f"∗ ∘ "g"∗) is also a Galois connection. When considering categories of complete lattices, this can be simplified to considering just mappings preserving all suprema (or, alternatively, infima). Mapping complete lattices to their duals, these categories display auto-duality, which is quite fundamental for obtaining other duality theorems. More special kinds of morphisms that induce adjoint mappings in the other direction are the morphisms usually considered for frames (or locales).
Connection to category theory.
Every partially ordered set can be viewed as a category in a natural way: there is a unique morphism from "x" to "y" if and only if "x" ≤ "y". A monotone Galois connection is then nothing but a pair of adjoint functors between two categories that arise from partially ordered sets. In this context, the upper adjoint is the "right adjoint" while the lower adjoint is the "left adjoint". However, this terminology is avoided for Galois connections, since there was a time when posets were transformed into categories in a dual fashion, i.e. with morphisms pointing in the opposite direction. This led to a complementary notation concerning left and right adjoints, which today is ambiguous.
Applications in the theory of programming.
Galois connections may be used to describe many forms of abstraction in the theory of abstract interpretation of programming languages.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
"The following books and survey articles include Galois connections using the monotone definition:"
"Some publications using the original (antitone) definition:"
|
[
{
"math_id": 0,
"text": "f:X\\to Y"
},
{
"math_id": 1,
"text": "g:Y\\to X,"
},
{
"math_id": 2,
"text": "(X,=)"
},
{
"math_id": 3,
"text": "(Y,=)"
},
{
"math_id": 4,
"text": "f(x)=y"
},
{
"math_id": 5,
"text": "x=g(y),"
},
{
"math_id": 6,
"text": "\\Z,"
},
{
"math_id": 7,
"text": "\\R,"
},
{
"math_id": 8,
"text": "F:\\Z\\to\\R"
},
{
"math_id": 9,
"text": "F(n)=n\\in\\R,"
},
{
"math_id": 10,
"text": "G:\\R\\to\\Z"
},
{
"math_id": 11,
"text": "G(x)=\\lfloor x\\rfloor."
},
{
"math_id": 12,
"text": "F(n)\\leq x ~\\Leftrightarrow~ n\\leq G(x)"
},
{
"math_id": 13,
"text": "n\\leq x ~\\Leftrightarrow~ n\\leq\\lfloor x\\rfloor."
},
{
"math_id": 14,
"text": "n"
},
{
"math_id": 15,
"text": "\\lfloor x+n\\rfloor=\\lfloor x\\rfloor+n,"
},
{
"math_id": 16,
"text": "x\\leq n ~\\Leftrightarrow~ \\lceil x\\rceil\\leq n."
},
{
"math_id": 17,
"text": "\\mathcal{B} = \\{B \\subseteq X : x \\in B; \\forall g \\in G, gB = B \\ \\mathrm{or} \\ gB \\cap B = \\emptyset\\},"
},
{
"math_id": 18,
"text": "\\mathcal{G}"
},
{
"math_id": 19,
"text": "\\mathcal{B} \\to \\mathcal{G}"
},
{
"math_id": 20,
"text": " B \\mapsto H_B = \\{g \\in G : gx \\in B\\}"
},
{
"math_id": 21,
"text": "V(S) = \\{x \\in K^n : f(x) = 0 \\mbox{ for all } f \\in S\\},"
},
{
"math_id": 22,
"text": "I(U) = \\{f \\in K[X_1,\\dots,X_n] : f(x) = 0 \\mbox{ for all } x \\in U\\}."
}
] |
https://en.wikipedia.org/wiki?curid=156411
|
1564194
|
Universal instantiation
|
Rule of inference in predicate logic
In predicate logic, universal instantiation (UI; also called universal specification or universal elimination, and sometimes confused with "dictum de omni") is a valid rule of inference from a truth about each member of a class of individuals to the truth about a particular individual of that class. It is generally given as a quantification rule for the universal quantifier but it can also be encoded in an axiom schema. It is one of the basic principles used in quantification theory.
Example: "All dogs are mammals. Fido is a dog. Therefore Fido is a mammal."
Formally, the rule as an axiom schema is given as
formula_0
for every formula "A" and every term "t", where formula_1 is the result of substituting "t" for each "free" occurrence of "x" in "A". formula_2 is an instance of formula_3
And as a rule of inference it is
from formula_4 infer formula_5
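In a proof assistant, the rule corresponds to applying a universally quantified hypothesis to a term; a minimal Lean sketch (the names are illustrative):
<syntaxhighlight lang="lean">
-- Universal instantiation: from h : ∀ x, P x and a term t, conclude P t.
example (α : Type) (P : α → Prop) (h : ∀ x, P x) (t : α) : P t :=
  h t
</syntaxhighlight>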
Irving Copi noted that universal instantiation "...follows from variants of rules for 'natural deduction', which were devised independently by Gerhard Gentzen and Stanisław Jaśkowski in 1934."
Quine.
According to Willard Van Orman Quine, universal instantiation and existential generalization are two aspects of a single principle, for instead of saying that "∀"x" "x" = "x"" implies "Socrates = Socrates", we could as well say that the denial "Socrates ≠ Socrates" implies "∃"x" "x" ≠ "x"". The principle embodied in these two operations is the link between quantifications and the singular statements that are related to them as instances. Yet it is a principle only by courtesy. It holds only in the case where a term names and, furthermore, occurs referentially.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\forall x \\, A \\Rightarrow A\\{x \\mapsto t\\},"
},
{
"math_id": 1,
"text": "A\\{x \\mapsto t\\}"
},
{
"math_id": 2,
"text": "\\, A\\{x \\mapsto t\\}"
},
{
"math_id": 3,
"text": "\\forall x \\, A."
},
{
"math_id": 4,
"text": "\\vdash \\forall x A"
},
{
"math_id": 5,
"text": "\\vdash A \\{ x \\mapsto t \\} ."
}
] |
https://en.wikipedia.org/wiki?curid=1564194
|
1564226
|
Aufbau principle
|
Principle of atomic physics
In atomic physics and quantum chemistry, the Aufbau principle (, from ), also called the Aufbau rule, states that in the ground state of an atom or ion, electrons first fill subshells of the lowest available energy, then fill subshells of higher energy. For example, the 1s subshell is filled before the 2s subshell is occupied. In this way, the electrons of an atom or ion form the most stable electron configuration possible. An example is the configuration 1s2 2s2 2p6 3s2 3p3 for the phosphorus atom, meaning that the 1s subshell has 2 electrons, and so on.
The configuration is often abbreviated by writing only the valence electrons explicitly, while the core electrons are replaced by the symbol for the last previous noble gas in the periodic table, placed in square brackets. For phosphorus, the last previous noble gas is neon, so the configuration is abbreviated to [Ne] 3s2 3p3, where [Ne] signifies the core electrons whose configuration in phosphorus is identical to that of neon.
Electron behavior is elaborated by other principles of atomic physics, such as Hund's rule and the Pauli exclusion principle. Hund's rule asserts that if multiple orbitals of the same energy are available, electrons will occupy different orbitals singly and with the same spin before any are occupied doubly. If double occupation does occur, the Pauli exclusion principle requires that electrons that occupy the same orbital must have different spins (+<templatestyles src="Fraction/styles.css" />1⁄2 and −<templatestyles src="Fraction/styles.css" />1⁄2).
Passing from one element to another of the next higher atomic number, one proton and one electron are added each time to the neutral atom.
The maximum number of electrons in any shell is 2"n"2, where "n" is the principal quantum number.
The maximum number of electrons in a subshell is equal to 2(2l + 1), where the azimuthal quantum number l is equal to 0, 1, 2, and 3 for s, p, d, and f subshells, so that the maximum numbers of electrons are 2, 6, 10, and 14 respectively. In the ground state, the electronic configuration can be built up by placing electrons in the lowest available subshell until the total number of electrons added is equal to the atomic number. Thus subshells are filled in the order of increasing energy, using two general rules to help predict electronic configurations: electrons are assigned to subshells in order of increasing value of "n" + l, and for subshells with the same value of "n" + l, electrons are assigned first to the subshell with the lower value of "n".
A version of the aufbau principle known as the nuclear shell model is used to predict the configuration of protons and neutrons in an atomic nucleus.
Madelung energy ordering rule.
In neutral atoms, the approximate order in which subshells are filled is given by the "n" + l rule, also known as the Madelung rule or the Klechkowski rule.
Here "n" represents the principal quantum number and l the azimuthal quantum number; the values l = 0, 1, 2, 3 correspond to the s, p, d, and f subshells, respectively. Subshells with a lower "n" + l value are filled before those with higher "n" + l values. In the many cases of equal "n" + l values, the subshell with a lower "n" value is filled first. The subshell ordering by this rule is 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p, 8s, 5g, ... For example, thallium ("Z" = 81) has the ground-state configuration 1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6 5s2 4d10 5p6 6s2 4f14 5d10 6p1 or in condensed form, [Xe] 6s2 4f14 5d10 6p1.
Other authors write the subshells outside of the noble gas core in order of increasing "n", or if equal, increasing "n" + "l", such as Tl ("Z" = 81) [Xe]4f14 5d10 6s2 6p1. They do so to emphasize that if this atom is ionized, electrons leave approximately in the order 6p, 6s, 5d, 4f, etc. On a related note, writing configurations in this way emphasizes the outermost electrons and their involvement in chemical bonding.
In general, subshells with the same "n" + l value have similar energies, but the s-orbitals (with l = 0) are exceptional: their energy levels are appreciably far from those of their "n" + l group and are closer to those of the next "n" + l group. This is why the periodic table is usually drawn to begin with the s-block elements.
The Madelung energy ordering rule applies only to neutral atoms in their ground state. There are twenty elements (eleven in the d-block and nine in the f-block) for which the Madelung rule predicts an electron configuration that differs from that determined experimentally, although the Madelung-predicted electron configurations are at least close to the ground state even in those cases.
One inorganic chemistry textbook describes the Madelung rule as essentially an approximate empirical rule although with some theoretical justification, based on the Thomas–Fermi model of the atom as a many-electron quantum-mechanical system.
Exceptions in the d-block.
The valence d-subshell "borrows" one electron (in the case of palladium two electrons) from the valence s-subshell.
For example, in copper 29Cu, according to the Madelung rule, the 4s subshell ("n" + l = 4 + 0 = 4) is occupied before the 3d subshell ("n" + l = 3 + 2 = 5). The rule then predicts the electron configuration 1s2 2s2 2p6 3s2 3p6 3d9 4s2, abbreviated [Ar] 3d9 4s2 where [Ar] denotes the configuration of argon, the preceding noble gas. However, the measured electron configuration of the copper atom is [Ar] 3d10 4s1. By filling the 3d subshell, copper can be in a lower energy state.
A special exception is lawrencium 103Lr, where the 6d electron predicted by the Madelung rule is replaced by a 7p electron: the rule predicts [Rn] 5f14 6d1 7s2, but the measured configuration is [Rn] 5f14 7s2 7p1.
Exceptions in the f-block.
The valence d-subshell often "borrows" one electron (in the case of thorium two electrons) from the valence f-subshell. For example, in uranium 92U, according to the Madelung rule, the 5f subshell ("n" + l = 5 + 3 = 8) is occupied before the 6d subshell ("n" + l = 6 + 2 = 8). The rule then predicts the electron configuration [Rn] 5f4 7s2 where [Rn] denotes the configuration of radon, the preceding noble gas. However, the measured electron configuration of the uranium atom is [Rn] 5f3 6d1 7s2.
All these exceptions are not very relevant for chemistry, as the energy differences are quite small and the presence of a nearby atom can change the preferred configuration. The periodic table ignores them and follows idealised configurations. They occur as the result of interelectronic repulsion effects; when atoms are positively ionised, most of the anomalies vanish.
The above exceptions are predicted to be the only ones until element 120, where the 8s shell is completed. Element 121, starting the g-block, should be an exception in which the expected 5g electron is transferred to 8p (similarly to lawrencium). After this, sources do not agree on the predicted configurations, but due to very strong relativistic effects there are not expected to be many more elements that show the expected configuration from Madelung's rule beyond 120. The general idea that after the two 8s elements, there come regions of chemical activity of 5g, followed by 6f, followed by 7d, and then 8p, does however mostly seem to hold true, except that relativity "splits" the 8p shell into a stabilized part (8p1/2, which acts like an extra covering shell together with 8s and is slowly drowned into the core across the 5g and 6f series) and a destabilized part (8p3/2, which has nearly the same energy as 9p1/2), and that the 8s shell gets replaced by the 9s shell as the covering s-shell for the 7d elements.
History.
The aufbau principle in the new quantum theory.
The principle takes its name from German, "", "building-up principle", rather than being named for a scientist. It was formulated by Niels Bohr in the early 1920s. This was an early application of quantum mechanics to the properties of electrons and explained chemical properties in physical terms. Each added electron is subject to the electric field created by the positive charge of the atomic nucleus "and" the negative charge of other electrons that are bound to the nucleus. Although in hydrogen there is no energy difference between subshells with the same principal quantum number "n", this is not true for the outer electrons of other atoms.
In the old quantum theory prior to quantum mechanics, electrons were supposed to occupy classical elliptical orbits. The orbits with the highest angular momentum are "circular orbits" outside the inner electrons, but orbits with low angular momentum (s- and p-subshells) have high eccentricity, so that they get closer to the nucleus and feel on average a less strongly screened nuclear charge.
Wolfgang Pauli's model of the atom, including the effects of electron spin, provided a more complete explanation of the empirical aufbau rules.
The "n" + "l" energy ordering rule.
A periodic table in which each row corresponds to one value of "n" + l (where the values of "n" and l correspond to the principal and azimuthal quantum numbers respectively) was suggested by Charles Janet in 1928, and in 1930 he made explicit the quantum basis of this pattern, based on knowledge of atomic ground states determined by the analysis of atomic spectra. This table came to be referred to as the left-step table. Janet "adjusted" some of the actual "n" + l values of the elements, since they did not accord with his energy ordering rule, and he considered that the discrepancies involved must have arisen from measurement errors. As it happens, the actual values were correct and the "n" + l energy ordering rule turned out to be an approximation rather than a perfect fit, although for all elements that are exceptions the regularised configuration is a low-energy excited state, well within reach of chemical bond energies.
In 1936, the German physicist Erwin Madelung proposed this as an empirical rule for the order of filling atomic subshells, and most English-language sources therefore refer to the Madelung rule. Madelung may have been aware of this pattern as early as 1926. The Russian-American engineer Vladimir Karapetoff was the first to publish the rule in 1930, though Janet also published an illustration of it the same year.
In 1945, American chemist William Wiswesser proposed that the subshells are filled in order of increasing values of the function
formula_0
This formula correctly predicts both the first and second parts of the Madelung rule (the second part being that for two subshells with the same value of "n" + l, the one with the smaller value of "n" fills first). Wiswesser argued for this formula based on the pattern of both angular and radial nodes, the concept now known as orbital penetration, and the influence of the core electrons on the valence orbitals.
In 1961 the Russian agricultural chemist V.M. Klechkowski proposed a theoretical explanation for the importance of the sum "n" + l, based on the Thomas–Fermi model of the atom. Many French- and Russian-language sources therefore refer to the Klechkowski rule.
The full Madelung rule was derived from a similar potential in 1971 by Yury N. Demkov and Valentin N. Ostrovsky. They considered the potential
formula_1
where formula_2 and formula_3 are constant parameters; this approaches a Coulomb potential for small formula_4. When formula_3 satisfies the condition
formula_5,
where formula_6, the zero-energy solutions to the Schrödinger equation for this potential can be described analytically with Gegenbauer polynomials. As formula_3 passes through each of these values, a manifold containing all states with that value of formula_7 arises at zero energy and then becomes bound, recovering the Madelung order. Applying perturbation theory shows that states with smaller formula_8 have lower energy, and that the s-orbitals (with formula_9) have their energies approaching those of the next formula_10 group.
In recent years it has been noted that the order of filling subshells in neutral atoms does not always correspond to the order of adding or removing electrons for a given atom. For example, in the fourth row of the periodic table, the Madelung rule indicates that the 4s subshell is occupied before the 3d. Therefore, the neutral atom ground state configuration for K is [Ar] 4s1, Ca is [Ar] 4s2, Sc is [Ar] 4s2 3d1 and so on. However, if a scandium atom is ionized by removing electrons (only), the configurations differ: Sc is [Ar] 4s2 3d1, Sc+ is [Ar] 4s1 3d1, and Sc2+ is [Ar] 3d1. The subshell energies and their order depend on the nuclear charge; 4s is lower than 3d as per the Madelung rule in K with 19 protons, but 3d is lower in Sc2+ with 21 protons. In addition to there being ample experimental evidence to support this view, it makes the explanation of the order of ionization of electrons in this and other transition metals more intelligible, given that 4s electrons are invariably preferentially ionized. Generally the Madelung rule should only be used for neutral atoms; however, even for neutral atoms there are exceptions in the d-block and f-block (as shown above).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "W(n,l) = n + l - \\frac{l}{l + 1}. "
},
{
"math_id": 1,
"text": "U_{1/2}(r) = -\\frac{2v}{rR(r+R)^2}"
},
{
"math_id": 2,
"text": "R"
},
{
"math_id": 3,
"text": "v"
},
{
"math_id": 4,
"text": "r"
},
{
"math_id": 5,
"text": "v=v_N=\\frac{1}{4}R^2 N(N+1)"
},
{
"math_id": 6,
"text": "N=n+l"
},
{
"math_id": 7,
"text": "N"
},
{
"math_id": 8,
"text": "n"
},
{
"math_id": 9,
"text": "l=0"
},
{
"math_id": 10,
"text": "n+l"
}
] |
https://en.wikipedia.org/wiki?curid=1564226
|
1564380
|
Disjoint union (topology)
|
In general topology and related areas of mathematics, the disjoint union (also called the direct sum, free union, free sum, topological sum, or coproduct) of a family of topological spaces is a space formed by equipping the disjoint union of the underlying sets with a natural topology called the disjoint union topology. Roughly speaking, in the disjoint union the given spaces are considered as part of a single new space where each looks as it would alone and they are isolated from each other.
The name "coproduct" originates from the fact that the disjoint union is the categorical dual of the product space construction.
Definition.
Let {"X""i" : "i" ∈ "I"} be a family of topological spaces indexed by "I". Let
formula_0
be the disjoint union of the underlying sets. For each "i" in "I", let
formula_1
be the canonical injection (defined by formula_2). The disjoint union topology on "X" is defined as the finest topology on "X" for which all the canonical injections formula_3 are continuous (i.e.: it is the final topology on "X" induced by the canonical injections).
Explicitly, the disjoint union topology can be described as follows. A subset "U" of "X" is open in "X" if and only if its preimage formula_4 is open in "X""i" for each "i" ∈ "I". Yet another formulation is that a subset "V" of "X" is open relative to "X" iff its intersection with "Xi" is open relative to "Xi" for each "i".
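As a concrete finite illustration (an addition, not part of the article; the two small topologies below are chosen arbitrarily), points of the disjoint union can be encoded as tagged pairs ("i", "x"), and a set is open exactly when each of its preimages under the canonical injections is open.
# Tiny illustration of the disjoint union topology on two finite spaces.
X = {0: {"a", "b"}, 1: {"p", "q"}}
open_sets = {0: [set(), {"a"}, {"a", "b"}],          # a Sierpinski-like topology on X_0
             1: [set(), {"p"}, {"q"}, {"p", "q"}]}   # the discrete topology on X_1

def is_open(U):
    # U is open iff its preimage under each canonical injection is open in X_i.
    return all({x for (i, x) in U if i == j} in open_sets[j] for j in X)

print(is_open({(0, "a"), (1, "q")}))   # True: the preimages {"a"} and {"q"} are open
print(is_open({(0, "b")}))             # False: {"b"} is not open in X_0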
Properties.
The disjoint union space "X", together with the canonical injections, can be characterized by the following universal property: If "Y" is a topological space, and "fi" : "Xi" → "Y" is a continuous map for each "i" ∈ "I", then there exists "precisely one" continuous map "f" : "X" → "Y" such that the following set of diagrams commute:
This shows that the disjoint union is the coproduct in the category of topological spaces. It follows from the above universal property that a map "f" : "X" → "Y" is continuous iff "fi" = "f" o φ"i" is continuous for all "i" in "I".
In addition to being continuous, the canonical injections φ"i" : "X""i" → "X" are open and closed maps. It follows that the injections are topological embeddings so that each "X""i" may be canonically thought of as a subspace of "X".
Examples.
If each "X""i" is homeomorphic to a fixed space "A", then the disjoint union "X" is homeomorphic to the product space "A" × "I" where "I" has the discrete topology.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X = \\coprod_i X_i"
},
{
"math_id": 1,
"text": "\\varphi_i : X_i \\to X\\,"
},
{
"math_id": 2,
"text": "\\varphi_i(x)=(x,i)"
},
{
"math_id": 3,
"text": "\\varphi_i"
},
{
"math_id": 4,
"text": "\\varphi_i^{-1}(U)"
}
] |
https://en.wikipedia.org/wiki?curid=1564380
|
1564394
|
Electromagnetic shielding
|
Using conductive or magnetic materials to reduce electromagnetic field intensity
In electrical engineering, electromagnetic shielding is the practice of reducing or redirecting the electromagnetic field (EMF) in a space with barriers made of conductive or magnetic materials. It is typically applied to enclosures, for isolating electrical devices from their surroundings, and to cables to isolate wires from the environment through which the cable runs. Electromagnetic shielding that blocks radio frequency (RF) electromagnetic radiation is also known as RF shielding.
EMF shielding serves to minimize electromagnetic interference. The shielding can reduce the coupling of radio waves, electromagnetic fields, and electrostatic fields. A conductive enclosure used to block electrostatic fields is also known as a "Faraday cage". The amount of reduction depends very much upon the material used, its thickness, the size of the shielded volume, the frequency of the fields of interest, and the size, shape, and orientation of any apertures in the shield relative to the incident electromagnetic field.
Materials used.
Typical materials used for electromagnetic shielding include thin layer of metal, sheet metal, metal screen, and metal foam. Common sheet metals for shielding include copper, brass, nickel, silver, steel, and tin. Shielding effectiveness, that is, how well a shield reflects or absorbs/suppresses electromagnetic radiation, is affected by the physical properties of the metal. These may include conductivity, solderability, permeability, thickness, and weight. A metal's properties are an important consideration in material selection. For example, electrically dominant waves are reflected by highly conductive metals like copper, silver, and brass, while magnetically dominant waves are absorbed/suppressed by a less conductive metal such as steel or stainless steel. Further, any holes in the shield or mesh must be significantly smaller than the wavelength of the radiation that is being kept out, or the enclosure will not effectively approximate an unbroken conducting surface.
Another commonly used shielding method, especially with electronic goods housed in plastic enclosures, is to coat the inside of the enclosure with a metallic ink or similar material. The ink consists of a carrier material loaded with a suitable metal, typically copper or nickel, in the form of very small particulates. It is sprayed on to the enclosure and, once dry, produces a continuous conductive layer of metal, which can be electrically connected to the chassis ground of the equipment, thus providing effective shielding.
Electromagnetic shielding is the process of lowering the electromagnetic field in an area by barricading it with conductive or magnetic material. Copper is used for radio frequency (RF) shielding because it absorbs radio and other electromagnetic waves. Properly designed and constructed RF shielding enclosures satisfy most RF shielding needs, from computer and electrical switching rooms to hospital CAT-scan and MRI facilities.
EMI (Electromagnetic Interference) shielding is of great research interest and several new types of nanocomposites made of ferrites, polymers, and 2D materials are being developed to obtain more efficient RF/microwave-absorbing materials (MAMs). EMI shielding is often achieved by electroless plating of copper as most popular plastics are non-conductive or by special conductive paint.
Example of applications.
One example is a shielded cable, which has electromagnetic shielding in the form of a wire mesh surrounding an inner core conductor. The shielding impedes the escape of any signal from the core conductor, and also prevents signals from being added to the core conductor.
Some cables have two separate coaxial screens, one connected at both ends, the other at one end only, to maximize shielding of both electromagnetic and electrostatic fields.
The door of a microwave oven has a screen built into the window. From the perspective of microwaves (with wavelengths of 12 cm) this screen completes the Faraday cage formed by the oven's metal housing. Visible light, with wavelengths ranging between 400 nm and 700 nm, passes easily through the screen holes.
RF shielding is also used to prevent access to data stored on RFID chips embedded in various devices, such as biometric passports.
NATO specifies electromagnetic shielding for computers and keyboards to prevent passive monitoring of keyboard emissions that would allow passwords to be captured; consumer keyboards do not offer this protection primarily because of the prohibitive cost.
RF shielding is also used to protect medical and laboratory equipment to provide protection against interfering signals, including AM, FM, TV, emergency services, dispatch, pagers, ESMR, cellular, and PCS. It can also be used to protect the equipment at the AM, FM or TV broadcast facilities.
Another example of the practical use of electromagnetic shielding would be defense applications. As technology improves, so does the susceptibility to various types of nefarious electromagnetic interference. The idea of encasing a cable inside a grounded conductive barrier can provide mitigation to these risks.
How it works.
Electromagnetic radiation consists of coupled electric and magnetic fields. The electric field produces forces on the charge carriers (i.e., electrons) within the conductor. As soon as an electric field is applied to the surface of an ideal conductor, it induces a current that causes displacement of charge inside the conductor that cancels the applied field inside, at which point the current stops. See Faraday cage for more explanation.
Similarly, "varying" magnetic fields generate eddy currents that act to cancel the applied magnetic field. (The conductor does not respond to static magnetic fields unless the conductor is moving relative to the magnetic field.) The result is that electromagnetic radiation is reflected from the surface of the conductor: internal fields stay inside, and external fields stay outside.
Several factors serve to limit the shielding capability of real RF shields. One is that, due to the electrical resistance of the conductor, the excited field does not completely cancel the incident field. Also, most conductors exhibit a ferromagnetic response to low-frequency magnetic fields, so that such fields are not fully attenuated by the conductor. Any holes in the shield force current to flow around them, so that fields passing through the holes do not excite opposing electromagnetic fields. These effects reduce the field-reflecting capability of the shield.
In the case of high-frequency electromagnetic radiation, the above-mentioned adjustments take a non-negligible amount of time, yet any such radiation energy, insofar as it is not reflected, is absorbed within the outer "skin" of the conductor (unless the shield is extremely thin), so in this case there is no electromagnetic field inside either. This is one aspect of a greater phenomenon called the skin effect. A measure of the depth to which radiation can penetrate the shield is the so-called skin depth.
Magnetic shielding.
Equipment sometimes requires isolation from external magnetic fields. For static or slowly varying magnetic fields (below about 100 kHz) the Faraday shielding described above is ineffective. In these cases shields made of high magnetic permeability metal alloys can be used, such as sheets of permalloy and mu-metal or with nanocrystalline grain structure ferromagnetic metal coatings. These materials do not block the magnetic field, as with electric shielding, but rather draw the field into themselves, providing a path for the magnetic field lines around the shielded volume. The best shape for magnetic shields is thus a closed container surrounding the shielded volume. The effectiveness of this type of shielding depends on the material's permeability, which generally drops off at both very low magnetic field strengths and high field strengths where the material becomes saturated. Therefore, to achieve low residual fields, magnetic shields often consist of several enclosures, one inside the other, each of which successively reduces the field inside it. Entry holes within shielding surfaces may degrade their performance significantly.
Because of the above limitations of passive shielding, an alternative used with static or low-frequency fields is active shielding, in which a field created by electromagnets cancels the ambient field within a volume. Solenoids and Helmholtz coils are types of coils that can be used for this purpose, as well as more complex wire patterns designed using methods adapted from those used in coil design for magnetic resonance imaging. Active shields may also be designed accounting for the electromagnetic coupling with passive shields, referred to as "hybrid" shielding, so that there is broadband shielding from the passive shield and additional cancellation of specific components using the active system.
Additionally, superconducting materials can expel magnetic fields via the Meissner effect.
Mathematical model.
Suppose that we have a spherical shell of a (linear and isotropic) diamagnetic material with relative permeability formula_0, with inner radius formula_1 and outer radius formula_2. We then put this object in a constant magnetic field:
formula_3
Since there are no currents in this problem except for possible bound currents on the boundaries of the diamagnetic material, then we can define a magnetic scalar potential that satisfies Laplace's equation:
formula_4
where
formula_5
In this particular problem there is azimuthal symmetry so we can write down that the solution to Laplace's equation in spherical coordinates is:
formula_6
After matching the boundary conditions
formula_7
at the boundaries (where formula_8 is a unit vector that is normal to the surface pointing from side 1 to side 2), then we find that the magnetic field inside the cavity in the spherical shell is:
formula_9
where formula_10 is an attenuation coefficient that depends on the thickness of the diamagnetic material and the magnetic permeability of the material:
formula_11
This coefficient describes the effectiveness of this material in shielding the external magnetic field from the cavity that it surrounds. Notice that this coefficient appropriately goes to 1 (no shielding) in the limit that formula_12. In the limit that formula_13 this coefficient goes to 0 (perfect shielding). When formula_14, then the attenuation coefficient takes on the simpler form:
formula_15
which shows that the magnetic field decreases like formula_16.
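A quick numerical check of these expressions (an illustration with assumed parameter values, not taken from the article: a mu-metal-like shell with formula_0 = 80,000, inner radius 9 cm and outer radius 10 cm):
# Attenuation coefficient eta for a high-permeability spherical shell.
def eta(mu_r, a, b):
    return 9 * mu_r / ((2 * mu_r + 1) * (mu_r + 2) - 2 * (a / b) ** 3 * (mu_r - 1) ** 2)

mu_r, a, b = 80_000, 0.09, 0.10
print(eta(mu_r, a, b))                        # exact expression, roughly 2.1e-4
print(9 / (2 * (1 - a**3 / b**3) * mu_r))     # high-permeability approximation, nearly identical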
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mu_\\text{r}"
},
{
"math_id": 1,
"text": "a"
},
{
"math_id": 2,
"text": "b"
},
{
"math_id": 3,
"text": "\\mathbf{H}_0 = H_0 \\hat\\mathbf{z} = H_0 \\cos(\\theta) \\hat\\mathbf{r} - H_0 \\sin(\\theta) \\hat\\boldsymbol{\\theta}"
},
{
"math_id": 4,
"text": "\\begin{align} \\mathbf{H} &= -\\nabla \\Phi_{M} \\\\ \\nabla^{2} \\Phi_{M} &= 0 \\end{align}"
},
{
"math_id": 5,
"text": "\\mathbf{B} = \\mu_\\text{r}\\mathbf{H}"
},
{
"math_id": 6,
"text": "\\Phi_{M} = \\sum_{\\ell=0}^\\infty \\left(A_{\\ell}r^{\\ell}+\\frac{B_{\\ell}}{r^{\\ell+1}}\\right) P_{\\ell}(\\cos\\theta)"
},
{
"math_id": 7,
"text": "\\begin{align}\\left(\\mathbf{H}_2 - \\mathbf{H}_1\\right)\\times\\hat\\mathbf{n}&=0\\\\\\left(\\mathbf{B}_2 - \\mathbf{B}_1\\right) \\cdot \\hat\\mathbf{n} &=0 \\end{align}"
},
{
"math_id": 8,
"text": "\\hat{n}"
},
{
"math_id": 9,
"text": "\\mathbf{H}_\\text{in}=\\eta\\mathbf{H}_{0}"
},
{
"math_id": 10,
"text": "\\eta"
},
{
"math_id": 11,
"text": "\\eta = \\frac{9\\mu_\\text{r}}{\\left(2\\mu_\\text{r} + 1\\right) \\left(\\mu_\\text{r} + 2\\right) - 2\\left(\\frac{a}{b}\\right)^3 \\left(\\mu_\\text{r} - 1\\right)^2}"
},
{
"math_id": 12,
"text": "\\mu_\\text{r} \\to 1"
},
{
"math_id": 13,
"text": "\\mu_\\text{r} \\to \\infty"
},
{
"math_id": 14,
"text": "\\mu_\\text{r} \\gg 1"
},
{
"math_id": 15,
"text": "\\eta = \\frac{9}{2 \\left(1 - \\frac{a^3}{b^3}\\right) \\mu_\\text{r}}"
},
{
"math_id": 16,
"text": "\\mu_\\text{r}^{-1}"
}
] |
https://en.wikipedia.org/wiki?curid=1564394
|
1564406
|
Cramer–Shoup cryptosystem
|
Asymmetric key encryption algorithm
The Cramer–Shoup system is an asymmetric key encryption algorithm, and was the first efficient scheme proven to be secure against adaptive chosen ciphertext attack using standard cryptographic assumptions. Its security is based on the computational intractability (widely assumed, but not proved) of the Decisional Diffie–Hellman assumption. Developed by Ronald Cramer and Victor Shoup in 1998, it is an extension of the ElGamal cryptosystem. In contrast to ElGamal, which is extremely malleable, Cramer–Shoup adds other elements to ensure non-malleability even against a resourceful attacker. This non-malleability is achieved through the use of a universal one-way hash function and additional computations, resulting in a ciphertext which is twice as large as in ElGamal.
Adaptive chosen ciphertext attacks.
The definition of security achieved by Cramer–Shoup is formally termed "indistinguishability under adaptive chosen ciphertext attack" (IND-CCA2). This security definition is currently the strongest definition known for a public key cryptosystem: it assumes that the attacker has access to a decryption oracle which will decrypt any ciphertext using the scheme's secret decryption key. The "adaptive" component of the security definition means that the attacker has access to this decryption oracle both before and after he observes a specific target ciphertext to attack (though he is prohibited from using the oracle to simply decrypt this target ciphertext). The weaker notion of security against non-adaptive chosen ciphertext attacks (IND-CCA1) only allows the attacker to access the decryption oracle before observing the target ciphertext.
Though it was well known that many widely used cryptosystems were insecure against such an attacker, for many years system designers considered the attack to be impractical and of largely theoretical interest. This began to change during the late 1990s, particularly when Daniel Bleichenbacher demonstrated a practical adaptive chosen ciphertext attack against SSL servers using a form of RSA encryption.
Cramer–Shoup was not the first encryption scheme to provide security against adaptive chosen ciphertext attack. Naor–Yung, Rackoff–Simon, and Dolev–Dwork–Naor proposed provably secure conversions from standard (IND-CPA) schemes into IND-CCA1 and IND-CCA2 schemes. These techniques are secure under a standard set of cryptographic assumptions (without random oracles), however they rely on complex zero-knowledge proof techniques, and are inefficient in terms of computational cost and ciphertext size. A variety of other approaches, including Bellare/Rogaway's OAEP and Fujisaki–Okamoto achieve efficient constructions using a mathematical abstraction known as a random oracle. Unfortunately, to implement these schemes in practice requires the substitution of some practical function (e.g., a cryptographic hash function) in place of the random oracle. A growing body of evidence suggests the insecurity of this approach, although no practical attacks have been demonstrated against deployed schemes.
The cryptosystem.
Cramer–Shoup consists of three algorithms: the key generator, the encryption algorithm, and the decryption algorithm.
Key generation.
Alice generates an efficient description of a cyclic group formula_0 of prime order formula_1, with two distinct, random generators formula_2, and chooses a universal one-way hash function "H". She picks five random values formula_3 from formula_4 and computes formula_5. Her public key is formula_6, together with the group description formula_7 and "H"; her secret key is formula_8.
Encryption.
To encrypt a message formula_9 to Alice under her public key formula_10, Bob chooses a random formula_11 from formula_4, calculates formula_12, formula_13, formula_14, and formula_15, and sends the ciphertext formula_16 to Alice.
Decryption.
To decrypt a ciphertext formula_16 with Alice's secret key formula_8, she computes formula_17 and checks that formula_18. If this check fails, decryption is aborted and the ciphertext is rejected; otherwise, she outputs the plaintext formula_19.
The decryption stage correctly decrypts any properly-formed ciphertext, since
formula_20, and formula_21
If the space of possible messages is larger than the size of formula_0, then Cramer–Shoup may be used in a hybrid cryptosystem to improve efficiency on long messages.
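The scheme is compact enough to sketch directly. The following toy Python implementation (an illustration only: the group is a tiny Schnorr-type group, far too small to be secure, and SHA-256 reduced modulo "q" stands in for the universal one-way hash) follows the key generation, encryption and decryption steps described above.
import hashlib
import random

# Toy parameters (far too small for real security): p = 2q + 1 with p and q prime.
p, q = 2039, 1019
g1, g2 = 4, 9                        # two generators of the order-q subgroup mod p

def H(u1, u2, e):                    # stand-in for the universal one-way hash, mapped into Z_q
    return int.from_bytes(hashlib.sha256(f"{u1},{u2},{e}".encode()).digest(), "big") % q

# Key generation: secret (x1, x2, y1, y2, z); public (c, d, h) plus the group description.
x1, x2, y1, y2, z = (random.randrange(q) for _ in range(5))
c = pow(g1, x1, p) * pow(g2, x2, p) % p
d = pow(g1, y1, p) * pow(g2, y2, p) % p
h = pow(g1, z, p)

def encrypt(m):
    k = random.randrange(q)
    u1, u2 = pow(g1, k, p), pow(g2, k, p)
    e = pow(h, k, p) * m % p
    a = H(u1, u2, e)
    v = pow(c, k, p) * pow(d, k * a, p) % p
    return u1, u2, e, v

def decrypt(u1, u2, e, v):
    a = H(u1, u2, e)
    check = pow(u1, x1, p) * pow(u2, x2, p) * pow(pow(u1, y1, p) * pow(u2, y2, p), a, p) % p
    if check != v:
        raise ValueError("invalid ciphertext")   # reject malformed ciphertexts
    return e * pow(pow(u1, z, p), -1, p) % p     # m = e / u1^z

m = 25                                           # a message encoded as a subgroup element
assert decrypt(*encrypt(m)) == m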
|
[
{
"math_id": 0,
"text": "G"
},
{
"math_id": 1,
"text": " q "
},
{
"math_id": 2,
"text": "g_1, g_2"
},
{
"math_id": 3,
"text": "({x}_{1}, {x}_{2}, {y}_{1}, {y}_{2}, z)"
},
{
"math_id": 4,
"text": "\\{0, \\ldots, q-1\\}"
},
{
"math_id": 5,
"text": "c = {g}_{1}^{x_1} g_{2}^{x_2}, d = {g}_{1}^{y_1} g_{2}^{y_2}, h = {g}_{1}^{z}"
},
{
"math_id": 6,
"text": "(c, d, h)"
},
{
"math_id": 7,
"text": "G, q, g_1, g_2"
},
{
"math_id": 8,
"text": "(x_1, x_2, y_1, y_2, z)"
},
{
"math_id": 9,
"text": "m"
},
{
"math_id": 10,
"text": "(G,q,g_1,g_2,c,d,h)"
},
{
"math_id": 11,
"text": "k"
},
{
"math_id": 12,
"text": "u_1 = {g}_{1}^{k}, u_2 = {g}_{2}^{k}"
},
{
"math_id": 13,
"text": "e = h^k m"
},
{
"math_id": 14,
"text": "\\alpha = H(u_1, u_2, e)"
},
{
"math_id": 15,
"text": "v = c^k d^{k\\alpha}"
},
{
"math_id": 16,
"text": "(u_1, u_2, e, v)"
},
{
"math_id": 17,
"text": "\\alpha = H(u_1, u_2, e) \\,"
},
{
"math_id": 18,
"text": "{u}_{1}^{x_1} u_{2}^{x_2} ({u}_{1}^{y_1} u_{2}^{y_2})^{\\alpha} = v \\,"
},
{
"math_id": 19,
"text": "m = e / ({u}_{1}^{z}) \\,"
},
{
"math_id": 20,
"text": " {u}_{1}^{z} = {g}_{1}^{k z} = h^k \\,"
},
{
"math_id": 21,
"text": "m = e / h^k. \\,"
}
] |
https://en.wikipedia.org/wiki?curid=1564406
|
15646344
|
Chernoff's distribution
|
Probability distribution of random variable
In probability theory, Chernoff's distribution, named after Herman Chernoff, is the probability distribution of the random variable
formula_0
where "W" is a "two-sided" Wiener process (or two-sided "Brownian motion") satisfying "W"(0) = 0.
If
formula_1
then "V"(0, "c") has density
formula_2
where "g""c" has Fourier transform given by
formula_3
and where Ai is the Airy function. Thus "f""c" is symmetric about 0, and the density of "Z" is "f""Z" = "f"1. Groeneboom (1989) shows that
formula_4
where formula_5 is the largest zero of the Airy function Ai and where formula_6. In the same paper, Groeneboom also gives an analysis of the process formula_7. The connection with the statistical problem of estimating a monotone density is discussed in Groeneboom (1985). Chernoff's distribution is now known to appear in a wide range of monotone problems including isotonic regression.
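A crude way to get a feel for the distribution (an illustration, not from the article; the grid spacing and truncation width are assumptions) is to discretize the two-sided Wiener process and locate the maximizer of the parabolically drifted path numerically.
# Monte Carlo sketch: sample Z = argmax_s (W(s) - s^2) on a finite grid.
import numpy as np

def sample_Z(rng, half_width=5.0, dt=1e-3):
    s = np.arange(dt, half_width, dt)
    right = np.cumsum(rng.normal(0.0, np.sqrt(dt), s.size))   # W on (0, half_width)
    left = np.cumsum(rng.normal(0.0, np.sqrt(dt), s.size))    # independent branch on (-half_width, 0)
    grid = np.concatenate([-s[::-1], [0.0], s])
    W = np.concatenate([left[::-1], [0.0], right])            # two-sided path with W(0) = 0
    return grid[np.argmax(W - grid**2)]

rng = np.random.default_rng(0)
samples = [sample_Z(rng) for _ in range(2000)]
print(np.mean(samples), np.std(samples))    # mean near 0, since the density is symmetric about 0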
The Chernoff distribution should not be confused with the Chernoff geometric distribution (called the Chernoff point in information geometry) induced by the Chernoff information.
History.
Groeneboom, Lalley and Temme state that the first investigation of this distribution was probably by Chernoff in 1964, who studied the behavior of a certain estimator of a mode. In his paper, Chernoff characterized the distribution through an analytic representation through the heat equation with suitable boundary conditions. Initial attempts at approximating Chernoff's distribution via solving the heat equation, however, did not achieve satisfactory precision due to the nature of the boundary conditions. The computation of the distribution is addressed, for example, in Groeneboom and Wellner (2001).
The connection of Chernoff's distribution with Airy functions was also found independently by Daniels and Skyrme and Temme, as cited in Groeneboom, Lalley and Temme. These two papers, along with Groeneboom (1989), were all written in 1984.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " Z =\\underset{s \\in \\mathbf{R}}{\\operatorname{argmax}}\\ (W(s) - s^2), "
},
{
"math_id": 1,
"text": " V(a,c) = \\underset{s \\in \\mathbf{R}}{\\operatorname{argmax}} \\ (W(s) - c(s-a)^2), "
},
{
"math_id": 2,
"text": " f_c(t) = \\frac{1}{2} g_c(t) g_c(-t) "
},
{
"math_id": 3,
"text": " \\hat{g}_c (s) = \\frac{(2/c)^{1/3}}{\\operatorname{Ai} (i (2c^2)^{-1/3} s)}, \\ \\ \\ s \\in \\mathbf{R} "
},
{
"math_id": 4,
"text": " f_Z (z) \\sim \\frac{1}{2} \\frac{4^{4/3} |z|}{\\operatorname{Ai}' (\\tilde{a}_1)} \\exp \\left( - \\frac{2}{3} |z|^3 + 2^{1/3} \\tilde{a}_1 |z| \\right)\n\\text{ as }z \\rightarrow \\infty\n"
},
{
"math_id": 5,
"text": "\\tilde{a}_1 \\approx -2.3381"
},
{
"math_id": 6,
"text": "\\operatorname{Ai}' (\\tilde{a}_1 ) \\approx 0.7022"
},
{
"math_id": 7,
"text": "\\{V(a,1): a \\in \\mathbf{R}\\}"
}
] |
https://en.wikipedia.org/wiki?curid=15646344
|
15646748
|
Weighted-average life
|
In finance, the weighted-average life (WAL) of an amortizing loan or amortizing bond, also called average life, is the weighted average of the times of the "principal repayments": it's the average time until a dollar of principal is repaid.
In a formula,
formula_0
where formula_1 is the total principal, formula_2 is the principal repayment included in payment formula_3 (hence formula_4 is the fraction of the total principal repaid in payment formula_3), and formula_5 is the time in years from the calculation date to payment formula_3.
If desired, formula_5 can be expanded as formula_6 for a monthly bond, where formula_7 is the fraction of a month between settlement date and first cash flow date.
WAL of classes of loans.
In loans that allow prepayment, the WAL cannot be computed from the amortization schedule alone; one must also make assumptions about the prepayment and default behavior, and the quoted WAL will be an estimate. The WAL is usually computed from a single cash-flow sequence. Occasionally, a simulated average life may be computed from multiple cash-flow scenarios, such as those from an option-adjusted spread model.
Related concepts.
WAL should not be confused with related but distinct concepts, notably bond duration (see below).
Applications.
WAL is a measure that can be useful in credit risk analysis on fixed income securities, bearing in mind that the main credit risk of a loan is the risk of loss of principal. All else equal, a bond with principal outstanding longer (i.e., longer WAL) has greater credit risk than a bond with shorter WAL. In particular, WAL is often used as the basis for yield comparisons in I-spread calculations.
WAL should not be used to estimate a bond's price-sensitivity to interest-rate fluctuations, as WAL includes only the principal cash flows, omitting the interest payments. Instead, one should use bond duration, which incorporates "all" the cash flows.
Examples.
The WAL of a bullet loan (non-amortizing) is exactly the tenor, as the principal is repaid precisely at maturity.
On a 30-year amortizing loan, paying equal amounts monthly, the WAL depends on the annual interest rate; the corresponding monthly payments per $100,000 principal balance can be calculated via an amortization calculator and the formulas below relating amortized payments, total interest, and WAL.
Note that as the interest rate increases, WAL increases, since the principal payments become increasingly back-loaded. WAL is independent of the principal balance, though payments and total interest are proportional to principal.
For a coupon of 0%, where the principal amortizes linearly, the WAL is exactly half the tenor plus half a payment period, because principal is repaid in arrears (at the "end" of the period). So for a 30-year 0% loan, paying monthly, the WAL is formula_8 years.
Total Interest.
WAL allows one to easily compute the total interest payments, given by:
formula_9
where "r" is the annual interest rate and "P" is the initial principal.
This can be understood intuitively as: "The average dollar of principal is outstanding for the WAL, hence the interest on the average dollar is formula_10, and now one multiplies by the principal to get total interest payments."
Proof.
More rigorously, one can derive the result as follows. To ease exposition, assume that payments are monthly, so periodic interest rate is annual interest rate divided by 12, and time formula_11 (time in years is period number in months, over 12).
Then:
formula_12
Total interest is
formula_13
where formula_14 is the principal outstanding at the "beginning" of period "i" (it's the principal on which the "i" interest payment is based). The statement reduces to showing that formula_15. Both of these quantities are the time-weighted total principal of the bond (in periods), and they are simply different ways of slicing it: the formula_16 sum counts how "long" each dollar of principal is outstanding (it slices "horizontally"), while the formula_14 counts how much principal is outstanding "at each point in time" (it slices "vertically").
Working backwards, formula_17, and so forth: the principal outstanding when "k" periods remain is exactly the sum of the next "k" principal payments. The principal paid off by the last ("n"th) principal payment is outstanding for all "n" periods, while the principal paid off by the second to last (("n" − 1)th) principal payment is outstanding for "n" − 1 periods, and so forth. Using this, the sums can be re-arranged to be equal.
For instance, if the principal amortized as $100, $80, $50 (with paydowns of $20, $30, $50), then the sum would on the one hand be formula_18, and on the other hand would be formula_19. This is demonstrated by writing the amortization schedule as a table broken up into principal repayments, where each column is a formula_14 and each row is formula_16.
Computing WAL from amortized payment.
The above can be reversed: given the terms (principal, tenor, rate) and amortized payment "A", one can compute the WAL without knowing the amortization schedule. The total payments are formula_20 and the total interest payments are formula_21, so the WAL is:
formula_22
Similarly, the total interest as percentage of principal is given by formula_10:
formula_23
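A short Python sketch (an illustration with assumed loan terms: a 30-year monthly loan at 5% on $100,000) that computes the WAL both directly from the amortization schedule and from the closed form above, confirming that the two agree:
# WAL of a level-payment amortizing loan, computed two ways.
def amortized_payment(P, annual_rate, n_months):
    r = annual_rate / 12
    return P * r / (1 - (1 + r) ** -n_months)

def wal_from_schedule(P, annual_rate, n_months):
    r = annual_rate / 12
    A = amortized_payment(P, annual_rate, n_months)
    balance, weighted = P, 0.0
    for i in range(1, n_months + 1):
        principal_i = A - balance * r          # principal repaid in month i
        weighted += principal_i * (i / 12)     # time in years, weighted by principal
        balance -= principal_i
    return weighted / P

P, rate, n = 100_000, 0.05, 360
A = amortized_payment(P, rate, n)
print(wal_from_schedule(P, rate, n))           # direct computation from the schedule
print((A * n - P) / (P * rate))                # closed form WAL = (An - P) / (Pr)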
Notes and references.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\text{WAL} = \\sum_{i=1}^n \\frac {P_i}{P} t_i,"
},
{
"math_id": 1,
"text": "P"
},
{
"math_id": 2,
"text": "P_i"
},
{
"math_id": 3,
"text": "i"
},
{
"math_id": 4,
"text": "\\frac{P_i}{P}"
},
{
"math_id": 5,
"text": "t_i"
},
{
"math_id": 6,
"text": "\\frac{1}{12}(i+\\alpha-1)"
},
{
"math_id": 7,
"text": "\\alpha"
},
{
"math_id": 8,
"text": "15 + 1/24 \\approx 15.04"
},
{
"math_id": 9,
"text": "\\text{WAL} \\times r \\times P,"
},
{
"math_id": 10,
"text": "\\text{WAL} \\times r"
},
{
"math_id": 11,
"text": "t_i = i/12"
},
{
"math_id": 12,
"text": "\\begin{align}\n\\text{WAL} &= \\sum_{i=1}^n \\frac {P_i}{P} t_i\\\\\n\\text{WAL} \\times P &= \\sum_{i=1}^n P_i t_i\n &&= \\sum_{i=1}^n P_i \\frac{i}{12}\\\\\n\\text{WAL} \\times P \\times r &= \\sum_{i=1}^n iP_i \\frac{r}{12}\n &&= \\frac{r}{12} \\sum_{i=1}^n iP_i\n\\end{align}\n"
},
{
"math_id": 13,
"text": "\\sum_{i=1}^n Q_i \\frac{r}{12} = \\frac{r}{12}\\sum_{i=1}^n Q_i,"
},
{
"math_id": 14,
"text": "Q_i"
},
{
"math_id": 15,
"text": "\\sum_{i=1}^n iP_i=\\sum_{i=1}^n Q_i"
},
{
"math_id": 16,
"text": "iP_i"
},
{
"math_id": 17,
"text": "Q_n=P_n, Q_{n-1}=P_n+P_{n-1}"
},
{
"math_id": 18,
"text": "20+2\\cdot 30 + 3\\cdot 50=230"
},
{
"math_id": 19,
"text": "100+80+50=230"
},
{
"math_id": 20,
"text": "An"
},
{
"math_id": 21,
"text": "An-P"
},
{
"math_id": 22,
"text": "\\text{WAL} = \\frac{An-P}{Pr}"
},
{
"math_id": 23,
"text": "\\text{WAL} \\times r = \\frac{An-P}{P}"
}
] |
https://en.wikipedia.org/wiki?curid=15646748
|
15646906
|
Digital differential analyzer
|
A digital differential analyzer (DDA), also sometimes called a digital integrating computer, is a digital implementation of a differential analyzer. The integrators in a DDA are implemented as accumulators, with the numeric result converted back to a pulse rate by the overflow of the accumulator.
The primary advantages of a DDA over the conventional analog differential analyzer are greater precision of the results and the lack of drift/noise/slip/lash in the calculations. The precision is only limited by register size and the resulting accumulated rounding/truncation errors of repeated addition. Digital electronics inherently lacks the temperature sensitive drift and noise level issues of analog electronics and the slippage and "lash" issues of mechanical analog systems.
For problems that can be expressed as differential equations, a hardware DDA can solve them much faster than a general purpose computer (using similar technology). However reprogramming a hardware DDA to solve a different problem (or fix a bug) is much harder than reprogramming a general purpose computer. Many DDAs were hardwired for one problem only and could not be reprogrammed without redesigning them.
History.
One of the inspirations for ENIAC was the mechanical analog Bush differential analyzer. It influenced both the architecture and programming method chosen. However, although ENIAC, as originally configured, could have been programmed as a DDA (the "numerical integrator" in Electronic Numerical Integrator And Computer), there is no evidence that it ever actually was. The theory of DDAs was not developed until 1949, one year after ENIAC had been reconfigured as a stored program computer.
The first DDA built was the Magnetic Drum Digital Differential Analyzer of 1950.
Theory.
The basic DDA integrator, shown in the figure, implements numerical rectangular integration via the following equations:
formula_0
formula_1
Where Δx causes y to be added to (or subtracted from) S, Δy causes y to be incremented (or decremented), and ΔS is caused by an overflow (or underflow) of the S accumulator. Both registers and the three Δ signals are signed values. Initial conditions for the problem can be loaded into both y and S prior to beginning integration.
This produces an integrator approximating the following equation:
formula_2
where "K" is a scaling constant determined by the precision (size) of the registers as follows:
formula_3
where "radix" is the numeric base used (typically 2) in the registers and "n" is the number of places in the registers.
If Δy is eliminated, making y a constant, then the DDA integrator reduces to a device called a rate multiplier, where the pulse rate ΔS is proportional to the product of y and Δx by the following equation:
formula_4
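The accumulator behaviour is easy to simulate. The following Python sketch (an illustration, not from the article; the register width and pulse trains are arbitrary) implements the rectangular-rule integrator above and, with Δy held at zero, acts as a rate multiplier.
# Rectangular-rule DDA integrator with an n-bit S accumulator; overflow emits dS pulses.
def dda_integrator(dx_pulses, dy_pulses, n_bits=8, y0=0, s0=0):
    modulus = 2 ** n_bits                  # K = 1 / radix**n with radix = 2
    y, s, out = y0, s0, []
    for dx, dy in zip(dx_pulses, dy_pulses):
        y += dy                            # each dy pulse increments (or decrements) y
        s += y * dx                        # each dx pulse adds y to the S accumulator
        pulse, s = divmod(s, modulus)      # overflow of S is the output pulse dS
        out.append(pulse)
    return out

# Rate-multiplier behaviour: with y fixed at 64 and 256 dx pulses, K * y * sum(dx) = 64 pulses.
print(sum(dda_integrator([1] * 256, [0] * 256, y0=64)))   # 64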
Error sources.
There are two sources of error that limit the accuracy of DDAs: rounding/truncation errors arising from the finite precision of the registers, and approximation errors inherent in the numerical integration algorithm (rectangular integration).
Both of these error sources are cumulative, due to the repeated-addition nature of DDAs. Therefore, a longer problem time results in a larger inaccuracy in the resulting solution.
The effect of rounding/truncation errors can be reduced by using larger registers. However, as this reduces the scaling constant "K", it also increases problem time; it therefore may not significantly improve accuracy, and in real-time DDA-based systems it may be unacceptable.
The effect of approximation errors can be reduced by using a more accurate numerical integration algorithm than rectangular integration (e.g., trapezoidal integration) in the DDA integrators.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "y^* = y \\pm \\sum \\Delta y"
},
{
"math_id": 1,
"text": "S^* = S \\pm y^* \\sum \\Delta x"
},
{
"math_id": 2,
"text": "\\Delta S = K \\int \\Delta y \\Delta x"
},
{
"math_id": 3,
"text": "K = {1 \\over {\\text{radix}}^n}"
},
{
"math_id": 4,
"text": "\\Delta S = K y \\Delta x"
}
] |
https://en.wikipedia.org/wiki?curid=15646906
|
15650931
|
Rudolf Kohlrausch
|
German physicist
Rudolf Hermann Arndt Kohlrausch (November 6, 1809 in Göttingen – March 8, 1858 in Erlangen) was a German physicist.
Biography.
He was a native of Göttingen, the son of the Royal Hanoverian director general of schools Friedrich Kohlrausch. He was a high-school teacher of mathematics and physics successively at Lüneburg, Rinteln, Kassel and Marburg. In 1853 he became an associate professor at the University of Marburg, and four years later, a full professor of physics at the University of Erlangen.
Research.
In 1854 Kohlrausch introduced the study of relaxation phenomena, and used the stretched exponential function to explain relaxation effects of a discharging Leyden jar (capacitor). In an 1855 experiment (published in 1857) with Wilhelm Weber (1804–1891), he demonstrated that the ratio of electrostatic to electromagnetic units produced a number similar to the value of the speed of light, a constant which they named formula_0. Kirchhoff recognized that this ratio is equal to formula_1 times the speed of light. This finding was instrumental in Maxwell's conjecture that light is an electromagnetic wave.
Family.
He was the father of physicist Friedrich Kohlrausch.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "c"
},
{
"math_id": 1,
"text": "\\sqrt 2"
}
] |
https://en.wikipedia.org/wiki?curid=15650931
|
1565267
|
De Bruijn sequence
|
Cycle through all length-k sequences
In combinatorial mathematics, a de Bruijn sequence of order "n" on a size-"k" alphabet "A" is a cyclic sequence in which every possible length-"n" string on "A" occurs exactly once as a substring (i.e., as a "contiguous" subsequence). Such a sequence is denoted by "B"("k", "n") and has length "k""n", which is also the number of distinct strings of length "n" on "A". Each of these distinct strings, when taken as a substring of "B"("k", "n"), must start at a different position, because substrings starting at the same position are not distinct. Therefore, "B"("k", "n") must have "at least" "k""n" symbols. And since "B"("k", "n") has "exactly" "k""n" symbols, de Bruijn sequences are optimally short with respect to the property of containing every string of length "n" at least once.
The number of distinct de Bruijn sequences "B"("k", "n") is
formula_0
The sequences are named after the Dutch mathematician Nicolaas Govert de Bruijn, who wrote about them in 1946. As he later wrote, the existence of de Bruijn sequences for each order together with the above properties were first proved, for the case of alphabets with two elements, by Camille Flye Sainte-Marie (1894). The generalization to larger alphabets is due to Tatyana van Aardenne-Ehrenfest and de Bruijn (1951). Automata for recognizing these sequences are denoted as de Bruijn automata.
In most applications, "A" = {0,1}.
History.
The earliest known example of a de Bruijn sequence comes from Sanskrit prosody where, since the work of Pingala, each possible three-syllable pattern of long and short syllables is given a name, such as 'y' for short–long–long and 'm' for long–long–long. To remember these names, the mnemonic "yamātārājabhānasalagām" is used, in which each three-syllable pattern occurs starting at its name: 'yamātā' has a short–long–long pattern, 'mātārā' has a long–long–long pattern, and so on, until 'salagām' which has a short–short–long pattern. This mnemonic, equivalent to a de Bruijn sequence on binary 3-tuples, is of unknown antiquity, but is at least as old as Charles Philip Brown's 1869 book on Sanskrit prosody that mentions it and considers it "an ancient line, written by Pāṇini".
In 1894, A. de Rivière raised the question in an issue of the French problem journal "L'Intermédiaire des Mathématiciens", of the existence of a circular arrangement of zeroes and ones of size formula_1 that contains all formula_1 binary sequences of length formula_2. The problem was solved (in the affirmative), along with the count of formula_3 distinct solutions, by Camille Flye Sainte-Marie in the same year. This was largely forgotten, and the existence of such cycles for a general alphabet size in place of 2, together with an algorithm for constructing them, was established only later. Finally, when in 1944 Kees Posthumus conjectured the count formula_3 for binary sequences, de Bruijn proved the conjecture in 1946, through which the problem became well-known.
Karl Popper independently describes these objects in his "The Logic of Scientific Discovery" (1934), calling them "shortest random-like sequences".
Construction.
The de Bruijn sequences can be constructed by taking a Hamiltonian path of an "n"-dimensional de Bruijn graph over "k" symbols (or equivalently, an Eulerian cycle of an ("n" − 1)-dimensional de Bruijn graph).
An alternative construction involves concatenating together, in lexicographic order, all the Lyndon words whose length divides "n".
An inverse Burrows–Wheeler transform can be used to generate the required Lyndon words in lexicographic order.
de Bruijn sequences can also be constructed using shift registers or via finite fields.
Example using de Bruijn graph.
Goal: to construct a "B"(2, 4) de Bruijn sequence of length 24 = 16 using Eulerian ("n" − 1 = 4 − 1 = 3) 3-D de Bruijn graph cycle.
Each edge in this 3-dimensional de Bruijn graph corresponds to a sequence of four digits: the three digits that label the vertex that the edge is leaving followed by the one that labels the edge. If one traverses the edge labeled 1 from 000, one arrives at 001, thereby indicating the presence of the subsequence 0001 in the de Bruijn sequence. To traverse each edge exactly once is to use each of the 16 four-digit sequences exactly once.
For example, suppose we follow the following Eulerian path through these vertices:
000, 000, 001, 011, 111, 111, 110, 101, 011,
110, 100, 001, 010, 101, 010, 100, 000.
These are the output sequences of length "n":
0 0 0 0
_ 0 0 0 1
_ _ 0 0 1 1
This corresponds to the following de Bruijn sequence:
0 0 0 0 1 1 1 1 0 1 1 0 0 1 0 1
The eight vertices appear in the sequence in the following way:
{0 0 0 0} 1 1 1 1 0 1 1 0 0 1 0 1
0 {0 0 0 1} 1 1 1 0 1 1 0 0 1 0 1
0 0 {0 0 1 1} 1 1 0 1 1 0 0 1 0 1
0 0 0 {0 1 1 1} 1 0 1 1 0 0 1 0 1
0 0 0 0 {1 1 1 1} 0 1 1 0 0 1 0 1
0 0 0 0 1 {1 1 1 0} 1 1 0 0 1 0 1
0 0 0 0 1 1 {1 1 0 1} 1 0 0 1 0 1
0 0 0 0 1 1 1 {1 0 1 1} 0 0 1 0 1
0 0 0 0 1 1 1 1 {0 1 1 0} 0 1 0 1
0 0 0 0 1 1 1 1 0 {1 1 0 0} 1 0 1
0 0 0 0 1 1 1 1 0 1 {1 0 0 1} 0 1
0 0 0 0 1 1 1 1 0 1 1 {0 0 1 0} 1
0} 0 0 0 1 1 1 1 0 1 1 0 0 {1 0 1 ...
... 0 0} 0 0 1 1 1 1 0 1 1 0 0 1 {0 1 ...
... 0 0 0} 0 1 1 1 1 0 1 1 0 0 1 0 {1 ...
...and then we return to the starting point. Each of the eight 3-digit sequences (corresponding to the eight vertices) appears exactly twice, and each of the sixteen 4-digit sequences (corresponding to the 16 edges) appears exactly once.
Example using inverse Burrows—Wheeler transform.
Mathematically, an inverse Burrows—Wheeler transform on a word w generates a multi-set of equivalence classes consisting of strings and their rotations. These equivalence classes of strings each contain a Lyndon word as a unique minimum element, so the inverse Burrows—Wheeler transform can be considered to generate a set of Lyndon words. It can be shown that if we perform the inverse Burrows—Wheeler transform on a word w consisting of the size-"k" alphabet repeated "k""n"−1 times (so that it will produce a word the same length as the desired de Bruijn sequence), then the result will be the set of all Lyndon words whose length divides "n". It follows that arranging these Lyndon words in lexicographic order will yield a de Bruijn sequence "B"("k","n"), and that this will be the first de Bruijn sequence in lexicographic order. The following method can be used to perform the inverse Burrows—Wheeler transform, using its "standard permutation":
For example, to construct the smallest "B"(2,4) de Bruijn sequence of length 24 = 16, repeat the alphabet (ab) 8 times, yielding "w" = abababababababab. Sort the characters in "w", yielding "w′" = aaaaaaaabbbbbbbb. Position "w′" above "w" as shown, and map each element in "w′" to the corresponding element in "w" by drawing a line. Number the columns as shown so we can read the cycles of the permutation:
Starting from the left, the Standard Permutation notation cycles are: (1) (2 3 5 9) (4 7 13 10) (6 11) (8 15 14 12) (16). (Standard Permutation)
Then, replacing each number by the corresponding letter in "w′" from that column yields: (a)(aaab)(aabb)(ab)(abbb)(b).
These are all of the Lyndon words whose length divides 4, in lexicographic order, so dropping the parentheses gives "B"(2,4) = aaaabaabbababbbb.
Algorithm.
The following Python code calculates a de Bruijn sequence, given "k" and "n", based on an algorithm from Frank Ruskey's "Combinatorial Generation".
from typing import Iterable, Union

def de_bruijn(k: Union[Iterable[str], int], n: int) -> str:
    """de Bruijn sequence for alphabet k
    and subsequences of length n.
    """
    # Two kinds of alphabet input: an integer expands
    # to a list of integers as the alphabet...
    if isinstance(k, int):
        alphabet = list(map(str, range(k)))
    else:
        # ...while any other iterable is used as the alphabet directly.
        alphabet = k
        k = len(k)

    a = [0] * k * n
    sequence = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                sequence.extend(a[1 : p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return "".join(alphabet[i] for i in sequence)

print(de_bruijn(2, 3))
print(de_bruijn("abcd", 2))
which prints
00010111
aabacadbbcbdccdd
Note that these sequences are understood to "wrap around" in a cycle. For example, the first sequence contains 110 and 100 in this fashion.
Uses.
de Bruijn cycles are of general use in neuroscience and psychology experiments that examine the effect of stimulus order upon neural systems, and can be specially crafted for use with functional magnetic resonance imaging.
Angle detection.
The symbols of a de Bruijn sequence written around a circular object (such as a wheel of a robot) can be used to identify its angle by examining the "n" consecutive symbols facing a fixed point. This angle-encoding problem is known as the "rotating drum problem". Gray codes can be used as similar rotary positional encoding mechanisms, a method commonly found in rotary encoders.
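A small sketch (an illustration using the "B"(2, 4) sequence constructed in the example above; the sensor reading is hypothetical): a lookup table from each length-4 window to its position decodes the angle in constant time.
# Decode a position on a wheel labelled with the B(2, 4) sequence from the example above.
seq = "0000111101100101"
window_to_position = {(seq + seq[:3])[i:i + 4]: i for i in range(len(seq))}  # wrap around
print(window_to_position["1101"])   # a sensor reading of 1101 corresponds to position 6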
Finding least- or most-significant set bit in a word.
A de Bruijn sequence can be used to quickly find the index of the least significant set bit ("right-most 1") or the most significant set bit ("left-most 1") in a word using bitwise operations and multiplication. The following example uses a de Bruijn sequence to determine the index of the least significant set bit (equivalent to counting the number of trailing '0' bits) in a 32 bit unsigned integer:
uint8_t lowestBitIndex(uint32_t v)
{
    static const uint8_t BitPositionLookup[32] = // hash table
    {
        0, 1, 28, 2, 29, 14, 24, 3, 30, 22, 20, 15, 25, 17, 4, 8,
        31, 27, 13, 23, 21, 19, 16, 7, 26, 12, 18, 6, 11, 5, 10, 9
    };
    return BitPositionLookup[((uint32_t)((v & -v) * 0x077CB531U)) >> 27];
}
The lowestBitIndex function returns the index of the least-significant set bit in "v", or zero if "v" has no set bits. The constant 0x077CB531U in the expression is the "B" (2, 5) sequence 0000 0111 0111 1100 1011 0101 0011 0001 (spaces added for clarity). The operation (v & -v) zeros all bits except the least-significant bit set, resulting in a new value which is a power of 2. This power of 2 is multiplied (arithmetic modulo 232) by the de Bruijn sequence, thus producing a 32-bit product in which the bit sequence of the 5 MSBs is unique for each power of 2. The 5 MSBs are shifted into the LSB positions to produce a hash code in the range [0, 31], which is then used as an index into hash table BitPositionLookup. The selected hash table value is the bit index of the least significant set bit in "v".
The following example determines the index of the most significant bit set in a 32 bit unsigned integer:
uint32_t keepHighestBit(uint32_t n)
{
    n |= (n >> 1);
    n |= (n >> 2);
    n |= (n >> 4);
    n |= (n >> 8);
    n |= (n >> 16);
    return n - (n >> 1);
}

uint8_t highestBitIndex(uint32_t v)
{
    static const uint8_t BitPositionLookup[32] = { // hash table
        0, 1, 16, 2, 29, 17, 3, 22, 30, 20, 18, 11, 13, 4, 7, 23,
        31, 15, 28, 21, 19, 10, 12, 6, 14, 27, 9, 5, 26, 8, 25, 24,
    };
    return BitPositionLookup[(keepHighestBit(v) * 0x06EB14F9U) >> 27];
}
In the above example an alternative de Bruijn sequence (0x06EB14F9U) is used, with a corresponding reordering of the array values. The choice of this particular de Bruijn sequence is arbitrary, but the hash table values must be ordered to match the chosen de Bruijn sequence. The keepHighestBit function zeros all bits except the most-significant set bit, resulting in a value which is a power of 2, which is then processed as in the previous example.
Brute-force attacks on locks.
A de Bruijn sequence can be used to shorten a brute-force attack on a PIN-like code lock that does not have an "enter" key and accepts the last "n" digits entered. For example, a digital door lock with a 4-digit code (each digit having 10 possibilities, from 0 to 9) would have "B"(10, 4) solutions, with length 10^4 = 10,000. Therefore, only at most 10,000 + 3 = 10,003 (as the solutions are cyclic) presses are needed to open the lock, whereas trying all codes separately would require 4 × 10,000 = 40,000 presses.
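As a quick check, these counts can be reproduced with the de_bruijn function given earlier (a sketch, assuming that function is available):

code = de_bruijn(10, 4)        # digits 0-9, windows of length 4
print(len(code))               # 10000: one window per possible 4-digit code
print(len(code) + 3)           # 10003 presses suffice, repeating 3 digits for the wrap-around
print(4 * 10**4)               # 40000 presses to try every code separately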
f-fold de Bruijn sequences.
An f-fold n-ary de Bruijn sequence is an extension of the notion of an "n"-ary de Bruijn sequence, such that the sequence of length formula_4 contains every possible subsequence of length "n" exactly "f" times. For example, for formula_5 the cyclic sequences 11100010 and 11101000 are two-fold binary de Bruijn sequences. The number of two-fold de Bruijn sequences formula_6 for formula_7 is formula_8; the other known numbers are formula_9, formula_10, and formula_11.
de Bruijn torus.
A de Bruijn torus is a toroidal array with the property that every "k"-ary "m"-by-"n" matrix occurs exactly once.
Such a pattern can be used for two-dimensional positional encoding in a fashion analogous to that described above for rotary encoding. Position can be determined by examining the "m"-by-"n" matrix directly adjacent to the sensor, and calculating its position on the de Bruijn torus.
de Bruijn decoding.
Computing the position of a particular unique tuple or matrix in a de Bruijn sequence or torus is known as the "de Bruijn decoding problem". Efficient decoding algorithms exist for special, recursively constructed sequences and extend to the two-dimensional case. de Bruijn decoding is of interest, e.g., in cases where large sequences or tori are used for positional encoding.
Notes.
|
[
{
"math_id": 0,
"text": "\\dfrac{\\left(k!\\right)^{k^{n-1}}}{k^n}."
},
{
"math_id": 1,
"text": "2^n"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "2^{2^{n-1} - n}"
},
{
"math_id": 4,
"text": "fk^n"
},
{
"math_id": 5,
"text": "n=2"
},
{
"math_id": 6,
"text": "N_n"
},
{
"math_id": 7,
"text": "n=1"
},
{
"math_id": 8,
"text": "N_1=2"
},
{
"math_id": 9,
"text": "N_2=5"
},
{
"math_id": 10,
"text": "N_3=72"
},
{
"math_id": 11,
"text": "N_4=43768"
}
] |
https://en.wikipedia.org/wiki?curid=1565267
|
15652764
|
Non-linear least squares
|
Approximation method in statistics
Non-linear least squares is the form of least squares analysis used to fit a set of "m" observations with a model that is non-linear in "n" unknown parameters ("m" ≥ "n"). It is used in some forms of nonlinear regression. The basis of the method is to approximate the model by a linear one and to refine the parameters by successive iterations. There are many similarities to linear least squares, but also some significant differences. In economic theory, the non-linear least squares method is applied in (i) the probit regression, (ii) threshold regression, (iii) smooth regression, (iv) logistic link regression, (v) Box–Cox transformed regressors (formula_0).
Theory.
Consider a set of formula_1 data points, formula_2 and a curve (model function) formula_3 that in addition to the variable formula_4 also depends on formula_5 parameters, formula_6 with formula_7 It is desired to find the vector formula_8 of parameters such that the curve fits best the given data in the least squares sense, that is, the sum of squares
formula_9
is minimized, where the residuals (in-sample prediction errors) "ri" are given by
formula_10
for formula_11
The minimum value of S occurs when the gradient is zero. Since the model contains n parameters there are n gradient equations:
formula_12
In a nonlinear system, the derivatives formula_13 are functions of both the independent variable and the parameters, so in general these gradient equations do not have a closed solution. Instead, initial values must be chosen for the parameters. Then, the parameters are refined iteratively, that is, the values are obtained by successive approximation,
formula_14
Here, k is an iteration number and the vector of increments, formula_15 is known as the shift vector. At each iteration the model is linearized by approximation to a first-order Taylor polynomial expansion about formula_16
formula_17
The Jacobian matrix, J, is a function of constants, the independent variable "and" the parameters, so it changes from one iteration to the next. Thus, in terms of the linearized model,
formula_18
and the residuals are given by
formula_19
formula_20
Substituting these expressions into the gradient equations, they become
formula_21
which, on rearrangement, become n simultaneous linear equations, the normal equations
formula_22
The normal equations are written in matrix notation as
formula_23
These equations form the basis for the Gauss–Newton algorithm for a non-linear least squares problem.
Note the sign convention in the definition of the Jacobian matrix in terms of the derivatives. Formulas linear in formula_24 may appear with a factor of formula_25 in other articles or in the literature.
Extension by weights.
When the observations are not equally reliable, a weighted sum of squares may be minimized,
formula_26
Each element of the diagonal weight matrix W should, ideally, be equal to the reciprocal of the error variance of the measurement.
The normal equations are then, more generally,
formula_27
Geometrical interpretation.
In linear least squares the objective function, S, is a quadratic function of the parameters.
formula_28
When there is only one parameter the graph of S with respect to that parameter will be a parabola. With two or more parameters the contours of S with respect to any pair of parameters will be concentric ellipses (assuming that the normal equations matrix formula_29 is positive definite). The minimum parameter values are to be found at the centre of the ellipses. Geometrically, the general objective function can be described as an elliptical paraboloid.
In NLLSQ the objective function is quadratic with respect to the parameters only in a region close to its minimum value, where the truncated Taylor series is a good approximation to the model.
formula_30
The more the parameter values differ from their optimal values, the more the contours deviate from elliptical shape. A consequence of this is that initial parameter estimates should be as close as practicable to their (unknown!) optimal values. It also explains how divergence can come about as the Gauss–Newton algorithm is convergent only when the objective function is approximately quadratic in the parameters.
Computation.
Initial parameter estimates.
Some problems of ill-conditioning and divergence can be corrected by finding initial parameter estimates that are near to the optimal values. A good way to do this is by computer simulation. Both the observed and calculated data are displayed on a screen. The parameters of the model are adjusted by hand until the agreement between observed and calculated data is reasonably good. Although this will be a subjective judgment, it is sufficient to find a good starting point for the non-linear refinement. Initial parameter estimates can be created using transformations or linearizations. Better still, evolutionary algorithms such as the Stochastic Funnel Algorithm can lead to the convex basin of attraction that surrounds the optimal parameter estimates. Hybrid algorithms that use randomization and elitism, followed by Newton methods, have been shown to be useful and computationally efficient.
Solution.
Any method among the ones described below can be applied to find a solution.
Convergence criteria.
The common sense criterion for convergence is that the sum of squares does not increase from one iteration to the next. However this criterion is often difficult to implement in practice, for various reasons. A useful convergence criterion is
formula_31
The value 0.0001 is somewhat arbitrary and may need to be changed. In particular it may need to be increased when experimental errors are large. An alternative criterion is
formula_32
Again, the numerical value is somewhat arbitrary; 0.001 is equivalent to specifying that each parameter should be refined to 0.1% precision. This is reasonable when it is less than the largest relative standard deviation on the parameters.
Calculation of the Jacobian by numerical approximation.
There are models for which it is either very difficult or even impossible to derive analytical expressions for the elements of the Jacobian. Then, the numerical approximation
formula_33
is obtained by calculation of formula_34 for formula_35 and formula_36. The size of the increment, formula_37, should be chosen so that the numerical derivative is not subject to approximation error by being too large, or to round-off error by being too small.
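A minimal forward-difference sketch in Python (the function and variable names here are ours, not from any particular library):

import numpy as np

def numerical_jacobian(model, x, beta, rel_step=1e-6):
    """Forward-difference approximation of J[i, j] = d model(x[i], beta) / d beta[j]."""
    beta = np.asarray(beta, dtype=float)
    f0 = np.asarray(model(x, beta), dtype=float)
    J = np.empty((f0.size, beta.size))
    for j in range(beta.size):
        step = rel_step * max(abs(beta[j]), 1.0)   # scale the increment to the parameter
        shifted = beta.copy()
        shifted[j] += step
        J[:, j] = (np.asarray(model(x, shifted)) - f0) / step
    return J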
Parameter errors, confidence limits, residuals etc..
Some information is given in the corresponding section on the Weighted least squares page.
Multiple minima.
Multiple minima can occur in a variety of circumstances, some of which are the following. A parameter may be raised to a power of two or more: for example, when fitting data to a Lorentzian curve formula_38, where formula_39 is the height, formula_40 the position of the maximum and formula_41 the half-width at half maximum, the half-width values formula_42 and formula_43 give the same optimal value of the objective function. Two parameters may be interchangeable without changing the value of the model, as when the model contains the product of two parameters, since formula_44 gives the same value as formula_45. A parameter may also occur in a trigonometric function, such as formula_46, which takes identical values at formula_47.
Not all multiple minima have equal values of the objective function. False minima, also known as local minima, occur when the objective function value is greater than its value at the so-called global minimum. To be certain that the minimum found is the global minimum, the refinement should be started with widely differing initial values of the parameters. When the same minimum is found regardless of starting point, it is likely to be the global minimum.
When multiple minima exist there is an important consequence: the objective function will have a maximum value somewhere between two minima. The normal equations matrix is not positive definite at a maximum in the objective function, as the gradient is zero and no unique direction of descent exists. Refinement from a point (a set of parameter values) close to a maximum will be ill-conditioned and should be avoided as a starting point. For example, when fitting a Lorentzian the normal equations matrix is not positive definite when the half-width of the band is zero.
Transformation to a linear model.
A non-linear model can sometimes be transformed into a linear one. Such an approximation is, for instance, often applicable in the vicinity of the best estimator, and it is one of the basic assumptions in most iterative minimization algorithms.
When a linear approximation is valid, the model can be used directly for inference with generalized least squares, where the equations of the Linear Template Fit apply.
Another example of a linear approximation would be, when the model is a simple exponential function,
formula_48
it can be transformed into a linear model by taking logarithms.
formula_49
Graphically this corresponds to working on a semi-log plot. The sum of squares becomes
formula_50
This procedure should be avoided unless the errors are multiplicative and log-normally distributed because it can give misleading results. This comes from the fact that whatever the experimental errors on "y" might be, the errors on log "y" are different. Therefore, when the transformed sum of squares is minimized different results will be obtained both for the parameter values and their calculated standard deviations. However, with multiplicative errors that are log-normally distributed, this procedure gives unbiased and consistent parameter estimates.
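For illustration, the log-transformed fit takes only a few lines (a sketch with synthetic data whose noise is multiplicative and log-normal, so the transformation is appropriate here; the numerical values are ours):

import numpy as np

x = np.linspace(0.0, 4.0, 30)
rng = np.random.default_rng(1)
y = 2.5 * np.exp(0.8 * x) * rng.lognormal(sigma=0.05, size=x.size)

beta_hat, log_alpha_hat = np.polyfit(x, np.log(y), 1)   # ordinary linear least squares on log y
alpha_hat = np.exp(log_alpha_hat)
print(alpha_hat, beta_hat)                              # close to 2.5 and 0.8 for this noise model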
Another example is furnished by Michaelis–Menten kinetics, used to determine two parameters formula_51 and formula_52:
formula_53
The Lineweaver–Burk plot
formula_54
of formula_55 against formula_56 is linear in the parameters formula_57 and formula_58, but very sensitive to data error and strongly biased toward fitting the data in a particular range of the independent variable formula_59.
Algorithms.
Gauss–Newton method.
The normal equations
formula_60
may be solved for formula_61 by Cholesky decomposition, as described in linear least squares. The parameters are updated iteratively
formula_62
where "k" is an iteration number. While this method may be adequate for simple models, it will fail if divergence occurs. Therefore, protection against divergence is essential.
Shift-cutting.
If divergence occurs, a simple expedient is to reduce the length of the shift vector, formula_61, by a fraction, "f"
formula_63
For example, the length of the shift vector may be successively halved until the new value of the objective function is less than its value at the last iteration. The fraction, "f", could be optimized by a line search. As each trial value of "f" requires the objective function to be re-calculated, it is not worth optimizing its value too stringently.
When using shift-cutting, the direction of the shift vector remains unchanged. This limits the applicability of the method to situations where the direction of the shift vector is not very different from what it would be if the objective function were approximately quadratic in the parameters, formula_64
Marquardt parameter.
If divergence occurs and the direction of the shift vector is so far from its "ideal" direction that shift-cutting is not very effective, that is, the fraction, "f" required to avoid divergence is very small, the direction must be changed. This can be achieved by using the Marquardt parameter. In this method the normal equations are modified
formula_65
where formula_66 is the Marquardt parameter and I is an identity matrix. Increasing the value of formula_66 has the effect of changing both the direction and the length of the shift vector. The shift vector is rotated towards the direction of steepest descent when
formula_67
formula_68 is the steepest descent vector. So, when formula_66 becomes very large, the shift vector becomes a small fraction of the steepest descent vector.
Various strategies have been proposed for the determination of the Marquardt parameter. As with shift-cutting, it is wasteful to optimize this parameter too stringently. Rather, once a value has been found that brings about a reduction in the value of the objective function, that value of the parameter is carried to the next iteration, reduced if possible, or increased if need be. When reducing the value of the Marquardt parameter, there is a cut-off value below which it is safe to set it to zero, that is, to continue with the unmodified Gauss–Newton method. The cut-off value may be set equal to the smallest singular value of the Jacobian. A bound for this value is given by formula_69 where tr is the trace function.
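A sketch of the modified normal equations with a simple accept/reject rule for formula_66 (unit weights are assumed, the factor-of-ten update used here is one common choice rather than the only one, and the names are ours):

import numpy as np

def marquardt(model, jacobian, x, y, beta0, lam=1e-3, n_iter=50):
    beta = np.asarray(beta0, dtype=float)
    S = np.sum((y - model(x, beta)) ** 2)
    for _ in range(n_iter):
        J = jacobian(x, beta)
        dy = y - model(x, beta)
        A = J.T @ J + lam * np.eye(J.shape[1])      # (J^T J + lambda I) dbeta = J^T dy
        trial = beta + np.linalg.solve(A, J.T @ dy)
        S_trial = np.sum((y - model(x, trial)) ** 2)
        if S_trial < S:               # accepted: keep the step and relax toward Gauss-Newton
            beta, S, lam = trial, S_trial, lam / 10
        else:                         # rejected: rotate further toward steepest descent
            lam *= 10
    return beta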
QR decomposition.
The minimum in the sum of squares can be found by a method that does not involve forming the normal equations. The residuals with the linearized model can be written as
formula_70
The Jacobian is subjected to an orthogonal decomposition; the QR decomposition will serve to illustrate the process.
formula_71
where Q is an orthogonal formula_72 matrix and R is an formula_73 matrix which is partitioned into an formula_74 block, formula_75, and a formula_76 zero block. formula_75 is upper triangular.
formula_77
The residual vector is left-multiplied by formula_78.
formula_79
This has no effect on the sum of squares, since formula_80, because Q is orthogonal.
The minimum value of "S" is attained when the upper block is zero. Therefore, the shift vector is found by solving
formula_81
These equations are easily solved as R is upper triangular.
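With NumPy, the reduced QR factorization returns Q with m rows and n columns together with the n-by-n upper-triangular block formula_75, so the shift vector can be computed without forming the normal equations (a sketch, unit weights assumed):

import numpy as np

def qr_step(J, dy):
    Q, Rn = np.linalg.qr(J)                 # reduced QR: Q is m-by-n, Rn is n-by-n upper triangular
    return np.linalg.solve(Rn, Q.T @ dy)    # solve Rn dbeta = (Q^T dy)_n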
Singular value decomposition.
A variant of the method of orthogonal decomposition involves singular value decomposition, in which R is diagonalized by further orthogonal transformations.
formula_82
where formula_83 is orthogonal, formula_84 is a diagonal matrix of singular values and formula_85 is the orthogonal matrix of the eigenvectors of formula_86 or equivalently the right singular vectors of formula_87. In this case the shift vector is given by
formula_88
The relative simplicity of this expression is very useful in theoretical analysis of non-linear least squares. The application of singular value decomposition is discussed in detail in Lawson and Hanson.
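The corresponding step with a thin SVD is equally short (a sketch; in practice small singular values would be truncated to control ill-conditioning, and the cutoff used here is an arbitrary choice):

import numpy as np

def svd_step(J, dy, rcond=1e-12):
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    keep = s > rcond * s[0]                          # drop negligible singular values
    return Vt[keep].T @ ((U[:, keep].T @ dy) / s[keep])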
Gradient methods.
There are many examples in the scientific literature where different methods have been used for non-linear data-fitting problems.
Direct search methods.
Direct search methods depend on evaluations of the objective function at a variety of parameter values and do not use derivatives at all. They offer alternatives to the use of numerical derivatives in the Gauss–Newton method and gradient methods.
More detailed descriptions of these, and other, methods are available in "Numerical Recipes", together with computer code in various languages.
References.
|
[
{
"math_id": 0,
"text": "m(x,\\theta_i) = \\theta_1 + \\theta_2 x^{(\\theta_{3})}"
},
{
"math_id": 1,
"text": "m"
},
{
"math_id": 2,
"text": "(x_1, y_1), (x_2, y_2), \\dots, (x_m, y_m),"
},
{
"math_id": 3,
"text": "\\hat{y} = f(x, \\boldsymbol \\beta),"
},
{
"math_id": 4,
"text": "x"
},
{
"math_id": 5,
"text": "n"
},
{
"math_id": 6,
"text": "\\boldsymbol \\beta = (\\beta_1, \\beta_2, \\dots, \\beta_n),"
},
{
"math_id": 7,
"text": "m\\ge n."
},
{
"math_id": 8,
"text": "\\boldsymbol \\beta"
},
{
"math_id": 9,
"text": "S = \\sum_{i=1}^{m} r_i^2"
},
{
"math_id": 10,
"text": "r_i = y_i - f(x_i, \\boldsymbol \\beta) "
},
{
"math_id": 11,
"text": "i=1, 2,\\dots, m."
},
{
"math_id": 12,
"text": "\\frac{\\partial S}{\\partial \\beta_j} = 2 \\sum_i r_i\\frac{\\partial r_i}{\\partial \\beta_j} = 0 \\quad (j=1,\\ldots,n)."
},
{
"math_id": 13,
"text": "\\frac{\\partial r_i}{\\partial \\beta_j}"
},
{
"math_id": 14,
"text": "\\beta_j \\approx \\beta_j^{k+1} =\\beta^k_j+\\Delta \\beta_j. "
},
{
"math_id": 15,
"text": "\\Delta \\boldsymbol \\beta"
},
{
"math_id": 16,
"text": " \\boldsymbol \\beta^k"
},
{
"math_id": 17,
"text": "f(x_i,\\boldsymbol \\beta)\\approx f(x_i,\\boldsymbol \\beta^k) +\\sum_j \\frac{\\partial f(x_i,\\boldsymbol \\beta^k)}{\\partial \\beta_j} \\left(\\beta_j -\\beta^{k}_j \\right) = f(x_i,\\boldsymbol \\beta^k) +\\sum_j J_{ij} \\,\\Delta\\beta_j. "
},
{
"math_id": 18,
"text": "\\frac{\\partial r_i}{\\partial \\beta_j} = -J_{ij}"
},
{
"math_id": 19,
"text": "\\Delta y_i = y_i- f(x_i,\\boldsymbol \\beta^k),"
},
{
"math_id": 20,
"text": "r_i = y_i - f(x_i, \\boldsymbol \\beta) = \\left(y_i- f(x_i,\\boldsymbol \\beta^k)\\right)+ \\left(f(x_i,\\boldsymbol \\beta^k) - f(x_i, \\boldsymbol \\beta)\\right)\\approx\\Delta y_i- \\sum_{s=1}^{n} J_{is} \\Delta \\beta_s ."
},
{
"math_id": 21,
"text": "-2\\sum_{i=1}^{m} J_{ij} \\left( \\Delta y_i - \\sum_{s=1}^{n} J_{is}\\ \\Delta \\beta_s \\right) = 0,"
},
{
"math_id": 22,
"text": "\\sum_{i=1}^{m} \\sum_{s=1}^{n} J_{ij}J_{is}\\ \\Delta \\beta_s=\\sum_{i=1}^{m} J_{ij}\\ \\Delta y_i \\qquad (j=1,\\dots,n)."
},
{
"math_id": 23,
"text": "\\left(\\mathbf{J}^\\mathsf{T}\\mathbf{J}\\right) \\Delta \\boldsymbol \\beta = \\mathbf{J}^\\mathsf{T}\\ \\Delta \\mathbf{y}."
},
{
"math_id": 24,
"text": "J"
},
{
"math_id": 25,
"text": "-1"
},
{
"math_id": 26,
"text": "S = \\sum_{i=1}^m W_{ii} r_i^2."
},
{
"math_id": 27,
"text": "\\left(\\mathbf{J}^\\mathsf{T}\\mathbf{WJ}\\right) \\Delta \\boldsymbol \\beta = \\mathbf{J}^\\mathsf{T}\\mathbf{W}\\ \\Delta \\mathbf{y}."
},
{
"math_id": 28,
"text": "S = \\sum_i W_{ii} \\left(y_i - \\sum_j X_{ij}\\beta_j \\right)^2"
},
{
"math_id": 29,
"text": "\\mathbf{X}^\\mathsf{T}\\mathbf{WX}"
},
{
"math_id": 30,
"text": "S \\approx \\sum_i W_{ii} \\left(y_i - \\sum_j J_{ij}\\beta_j \\right)^2"
},
{
"math_id": 31,
"text": "\\left|\\frac{S^k-S^{k+1}}{S^k}\\right| < 0.0001."
},
{
"math_id": 32,
"text": "\\left|\\frac{\\Delta \\beta_j}{\\beta_j}\\right| < 0.001, \\qquad j=1,\\dots,n."
},
{
"math_id": 33,
"text": "\\frac{\\partial f(x_i, \\boldsymbol \\beta)}{\\partial \\beta_j} \\approx \\frac{\\delta f(x_i, \\boldsymbol \\beta)}{\\delta \\beta_j}"
},
{
"math_id": 34,
"text": "f(x_i, \\boldsymbol \\beta)"
},
{
"math_id": 35,
"text": "\\beta_j"
},
{
"math_id": 36,
"text": "\\beta_j+\\delta \\beta_j"
},
{
"math_id": 37,
"text": "\\delta \\beta_j"
},
{
"math_id": 38,
"text": "f(x_i, \\boldsymbol \\beta) = \\frac{\\alpha}{1 + \\left(\\frac{\\gamma-x_i}{\\beta} \\right)^2}"
},
{
"math_id": 39,
"text": "\\alpha"
},
{
"math_id": 40,
"text": "\\gamma"
},
{
"math_id": 41,
"text": "\\beta"
},
{
"math_id": 42,
"text": "\\hat \\beta"
},
{
"math_id": 43,
"text": "-\\hat \\beta"
},
{
"math_id": 44,
"text": "\\alpha \\beta"
},
{
"math_id": 45,
"text": "\\beta \\alpha"
},
{
"math_id": 46,
"text": "\\sin \\beta"
},
{
"math_id": 47,
"text": "\\hat \\beta + 2n \\pi"
},
{
"math_id": 48,
"text": "f(x_i,\\boldsymbol \\beta)= \\alpha e^{\\beta x_i}"
},
{
"math_id": 49,
"text": "\\log f(x_i,\\boldsymbol \\beta) = \\log \\alpha + \\beta x_i"
},
{
"math_id": 50,
"text": "S = \\sum_i (\\log y_i-\\log \\alpha - \\beta x_i)^2."
},
{
"math_id": 51,
"text": "V_{\\max}"
},
{
"math_id": 52,
"text": "K_m"
},
{
"math_id": 53,
"text": " v = \\frac{V_{\\max}[S]}{K_{m} + [S]}."
},
{
"math_id": 54,
"text": " \\frac{1}{v} = \\frac{1}{V_\\max} + \\frac{K_m}{V_{\\max}[S]}"
},
{
"math_id": 55,
"text": "\\frac{1}{v}"
},
{
"math_id": 56,
"text": "\\frac{1}{[S]}"
},
{
"math_id": 57,
"text": "\\frac{1}{V_\\max}"
},
{
"math_id": 58,
"text": "\\frac{K_m}{V_\\max}"
},
{
"math_id": 59,
"text": "[S]"
},
{
"math_id": 60,
"text": "\\left( \\mathbf{J}^\\mathsf{T}\\mathbf{WJ} \\right)\\Delta \\boldsymbol\\beta = \\left( \\mathbf{J}^\\mathsf{T}\\mathbf{W} \\right) \\Delta \\mathbf{y}"
},
{
"math_id": 61,
"text": "\\Delta \\boldsymbol\\beta"
},
{
"math_id": 62,
"text": "\\boldsymbol\\beta^{k+1} = \\boldsymbol\\beta^k + \\Delta \\boldsymbol\\beta"
},
{
"math_id": 63,
"text": "\\boldsymbol\\beta^{k+1} = \\boldsymbol\\beta^k+f\\ \\Delta \\boldsymbol\\beta."
},
{
"math_id": 64,
"text": "\\boldsymbol\\beta^k."
},
{
"math_id": 65,
"text": "\\left( \\mathbf{J}^\\mathsf{T} \\mathbf{WJ} + \\lambda \\mathbf{I} \\right) \\Delta \\boldsymbol \\beta = \\left( \\mathbf{J}^\\mathsf{T} \\mathbf{W} \\right) \\Delta \\mathbf{y}"
},
{
"math_id": 66,
"text": "\\lambda"
},
{
"math_id": 67,
"text": "\\lambda \\mathbf{I} \\gg \\mathbf{J}^\\mathsf{T}\\mathbf{WJ}, \\ {\\Delta \\boldsymbol \\beta} \\approx \\frac 1 \\lambda \\mathbf{J}^\\mathsf{T}\\mathbf{W}\\ \\Delta \\mathbf{y}."
},
{
"math_id": 68,
"text": "\\mathbf{J}^\\mathsf{T}\\mathbf{W}\\, \\Delta \\mathbf{y}"
},
{
"math_id": 69,
"text": "1 / \\operatorname{tr} \\left(\\mathbf{J}^\\mathsf{T}\\mathbf{W J}\\right)^{-1}"
},
{
"math_id": 70,
"text": "\\mathbf{r} = \\Delta \\mathbf{y} - \\mathbf{J}\\, \\Delta\\boldsymbol\\beta."
},
{
"math_id": 71,
"text": "\\mathbf{J} = \\mathbf{QR}"
},
{
"math_id": 72,
"text": "m \\times m"
},
{
"math_id": 73,
"text": "m \\times n"
},
{
"math_id": 74,
"text": "n \\times n"
},
{
"math_id": 75,
"text": "\\mathbf{R}_n"
},
{
"math_id": 76,
"text": "(m-n) \\times n"
},
{
"math_id": 77,
"text": "\\mathbf{R}= \\begin{bmatrix}\n\\mathbf{R}_n \\\\\n\\mathbf{0}\n\\end{bmatrix}"
},
{
"math_id": 78,
"text": "\\mathbf Q^\\mathsf{T}"
},
{
"math_id": 79,
"text": "\\mathbf{Q}^\\mathsf{T} \\mathbf{r} = \\mathbf{Q}^\\mathsf{T}\\ \\Delta \\mathbf{y} - \\mathbf{R}\\ \\Delta\\boldsymbol\\beta = \\begin{bmatrix}\n\\left(\\mathbf{Q}^\\mathsf{T}\\ \\Delta \\mathbf{y} - \\mathbf{R}\\ \\Delta\\boldsymbol\\beta \\right)_n \\\\\n\\left(\\mathbf{Q}^\\mathsf{T}\\ \\Delta \\mathbf{y} \\right)_{m-n}\n\\end{bmatrix}"
},
{
"math_id": 80,
"text": "S = \\mathbf{r}^\\mathsf{T} \\mathbf{Q} \\mathbf{Q}^\\mathsf{T} \\mathbf{r} = \\mathbf{r}^\\mathsf{T} \\mathbf{r}"
},
{
"math_id": 81,
"text": "\\mathbf{R}_n\\ \\Delta\\boldsymbol\\beta = \\left(\\mathbf{Q}^\\mathsf{T}\\ \\Delta \\mathbf{y} \\right)_n. "
},
{
"math_id": 82,
"text": "\\mathbf{J} = \\mathbf{U} \\boldsymbol\\Sigma \\mathbf{V}^\\mathsf{T} "
},
{
"math_id": 83,
"text": "\\mathbf U"
},
{
"math_id": 84,
"text": "\\boldsymbol\\Sigma "
},
{
"math_id": 85,
"text": "\\mathbf V"
},
{
"math_id": 86,
"text": "\\mathbf {J}^\\mathsf{T}\\mathbf{J}"
},
{
"math_id": 87,
"text": "\\mathbf{J}"
},
{
"math_id": 88,
"text": "\\Delta\\boldsymbol\\beta = \\mathbf{V} \\boldsymbol\\Sigma^{-1} \\left( \\mathbf{U}^\\mathsf{T}\\ \\Delta \\mathbf{y} \\right)_n. "
},
{
"math_id": 89,
"text": "f(x_i, \\boldsymbol \\beta) = f^k(x_i, \\boldsymbol \\beta) +\\sum_j J_{ij} \\, \\Delta \\beta_j + \\frac{1}{2}\\sum_j\\sum_k \\Delta\\beta_j \\, \\Delta\\beta_k \\,H_{jk_{(i)}},\\ H_{jk_{(i)}} = \\frac{\\partial^2 f(x_i, \\boldsymbol \\beta)}{\\partial \\beta_j \\, \\partial \\beta_k }. "
}
] |
https://en.wikipedia.org/wiki?curid=15652764
|
156533
|
Chebyshev's inequality
|
Bound on probability of a random variable being far from its mean
In probability theory, Chebyshev's inequality (also called the Bienaymé–Chebyshev inequality) provides an upper bound on the probability of deviation of a random variable (with finite variance) from its mean. More specifically, the probability that a random variable deviates from its mean by more than formula_0 is at most formula_1, where formula_2 is any positive constant and formula_3 is the standard deviation (the square root of the variance).
The rule is often called Chebyshev's theorem, about the range of standard deviations around the mean, in statistics. The inequality has great utility because it can be applied to any probability distribution in which the mean and variance are defined. For example, it can be used to prove the weak law of large numbers.
Its practical usage is similar to the 68–95–99.7 rule, which applies only to normal distributions. Chebyshev's inequality is more general, stating that a minimum of just 75% of values must lie within two standard deviations of the mean and 88.89% within three standard deviations for a broad range of different probability distributions.
The term "Chebyshev's inequality" may also refer to Markov's inequality, especially in the context of analysis. They are closely related, and some authors refer to Markov's inequality as "Chebyshev's First Inequality," and the similar one referred to on this page as "Chebyshev's Second Inequality."
Chebyshev's inequality is tight in the sense that for each chosen positive constant, there exists a random variable such that the inequality is in fact an equality.
History.
The theorem is named after Russian mathematician Pafnuty Chebyshev, although it was first formulated by his friend and colleague Irénée-Jules Bienaymé. The theorem was first proved by Bienaymé in 1853 and more generally proved by Chebyshev in 1867. His student Andrey Markov provided another proof in his 1884 Ph.D. thesis.
Statement.
Chebyshev's inequality is usually stated for random variables, but can be generalized to a statement about measure spaces.
Probabilistic statement.
Let "X" (integrable) be a random variable with finite non-zero variance "σ"2 (and thus finite expected value "μ"). Then for any real number "k" > 0,
formula_4
Only the case formula_5 is useful. When formula_6 the right-hand side satisfies formula_7, so the inequality is trivial, as all probabilities are ≤ 1.
As an example, using formula_8 shows that the probability that values lie outside the interval formula_9 does not exceed formula_10. Equivalently, it implies that the probability of values lying within the interval (i.e. its "coverage") is "at least" formula_10.
Because it can be applied to completely arbitrary distributions provided they have a known finite mean and variance, the inequality generally gives a poor bound compared to what might be deduced if more aspects are known about the distribution involved.
Measure-theoretic statement.
Let ("X", Σ, μ) be a measure space, and let "f" be an extended real-valued measurable function defined on "X". Then for any real number "t" > 0 and 0 < "p" < ∞,
formula_11
More generally, if "g" is an extended real-valued measurable function, nonnegative and nondecreasing, with formula_12 then:
formula_13
This statement follows from the Markov inequality, formula_14, with formula_15 and formula_16, since in this case formula_17.
The previous statement then follows by defining formula_18 as formula_19 if formula_20 and formula_21 otherwise.
Example.
Suppose we randomly select a journal article from a source with an average of 1000 words per article, with a standard deviation of 200 words. We can then infer that the probability that it has between 600 and 1400 words (i.e. within formula_22 standard deviations of the mean) must be at least 75%, because there is no more than formula_23 chance to be outside that range, by Chebyshev's inequality. But if we additionally know that the distribution is normal, we can say there is a 75% chance the word count is between 770 and 1230 (which is an even tighter bound).
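The comparison with the normal distribution in this example can be reproduced numerically (a sketch; the use of scipy.stats is our choice, not part of the example):

from scipy.stats import norm

mu, sigma, k = 1000, 200, 2
print(1 - 1 / k**2)                     # Chebyshev: at least 0.75 within mu +/- 2 sigma
z = norm.ppf(0.5 + 0.75 / 2)            # central 75% of a normal lies within about 1.15 sigma
print(mu - z * sigma, mu + z * sigma)   # approximately 770 and 1230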
Sharpness of bounds.
As shown in the example above, the theorem typically provides rather loose bounds. However, these bounds cannot in general (remaining true for arbitrary distributions) be improved upon. The bounds are sharp for the following example: for any "k" ≥ 1,
formula_24
For this distribution, the mean "μ" = 0 and the standard deviation "σ" = 1/"k", so
formula_25
Chebyshev's inequality is an equality for precisely those distributions that are a linear transformation of this example.
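A direct numerical check of this extremal distribution (a sketch; the value of k is arbitrary):

k = 3.0
p = 1 / (2 * k**2)
values, probs = [-1.0, 0.0, 1.0], [p, 1 - 1 / k**2, p]

mean = sum(v * q for v, q in zip(values, probs))               # 0
var = sum((v - mean) ** 2 * q for v, q in zip(values, probs))  # 1/k**2, so sigma = 1/k
sigma = var ** 0.5
tail = sum(q for v, q in zip(values, probs)
           if abs(v - mean) >= k * sigma - 1e-12)              # small tolerance for rounding
print(tail, 1 / k**2)                                          # both equal 1/9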
Proof.
Markov's inequality states that for any real-valued random variable "Y" and any positive number "a", we have formula_26. One way to prove Chebyshev's inequality is to apply Markov's inequality to the random variable formula_27 with formula_28:
formula_29
It can also be proved directly using conditional expectation:
formula_30
Chebyshev's inequality then follows by dividing by "k"2"σ"2.
This proof also shows why the bounds are quite loose in typical cases: the conditional expectation on the event where |"X" − "μ"| < "kσ" is thrown away, and the lower bound of "k"2"σ"2 on the event |"X" − "μ"| ≥ "kσ" can be quite poor.
Chebyshev's inequality can also be obtained directly from a simple comparison of areas, starting from the representation of an expected value as the difference of two improper Riemann integrals (last formula in the definition of expected value for arbitrary real-valued random variables).
Extensions.
Several extensions of Chebyshev's inequality have been developed.
Selberg's inequality.
Selberg derived a generalization to arbitrary intervals. Suppose "X" is a random variable with mean "μ" and variance "σ""2". Selberg's inequality states that if formula_31,
formula_32
When formula_33, this reduces to Chebyshev's inequality. These are known to be the best possible bounds.
Finite-dimensional vector.
Chebyshev's inequality naturally extends to the multivariate setting, where one has "n" random variables Xi with mean μi and variance "σ"i2. Then the following inequality holds.
formula_34
This is known as the Birnbaum–Raymond–Zuckerman inequality after the authors who proved it for two dimensions. This result can be rewritten in terms of vectors "X" = ("X"1, "X"2, ...) with mean "μ" = ("μ"1, "μ"2, ...), standard deviation "σ" = ("σ"1, "σ"2, ...), in the Euclidean norm || ⋅ ||.
formula_35
One can also get a similar infinite-dimensional Chebyshev's inequality. A second related inequality has also been derived by Chen. Let n be the dimension of the stochastic vector X and let E("X") be the mean of X. Let S be the covariance matrix and "k" > 0. Then
formula_36
where "Y"T is the transpose of Y.
The inequality can be written in terms of the Mahalanobis distance as
formula_37
where the Mahalanobis distance based on S is defined by
formula_38
Navarro proved that these bounds are sharp, that is, they are the best possible bounds for those regions when only the mean and the covariance matrix of "X" are known.
Stellato et al. showed that this multivariate version of the Chebyshev inequality can be easily derived analytically as a special case of Vandenberghe et al. where the bound is computed by solving a semidefinite program (SDP).
Known correlation.
If the variables are independent this inequality can be sharpened.
formula_39
Berge derived an inequality for two correlated variables "X"1, "X"2. Let ρ be the correlation coefficient between "X"1 and "X"2 and let "σ""i"2 be the variance of Xi. Then
formula_40
This result can be sharpened to having different bounds for the two random variables and having asymmetric bounds, as in Selberg's inequality.
Olkin and Pratt derived an inequality for n correlated variables.
formula_41
where the sum is taken over the "n" variables and
formula_42
where ρij is the correlation between Xi and Xj.
Olkin and Pratt's inequality was subsequently generalised by Godwin.
Higher moments.
Mitzenmacher and Upfal note that by applying Markov's inequality to the nonnegative variable formula_43, one can get a family of tail bounds
formula_44
For "n" = 2 we obtain Chebyshev's inequality. For "k" ≥ 1, "n" > 4 and assuming that the "n"th moment exists, this bound is tighter than Chebyshev's inequality. This strategy, called the method of moments, is often used to prove tail bounds.
Exponential moment.
A related inequality sometimes known as the exponential Chebyshev's inequality is the inequality
formula_45
Let "K"("t") be the cumulant generating function,
formula_46
Taking the Legendre–Fenchel transformation of "K"("t") and using the exponential Chebyshev's inequality we have
formula_47
This inequality may be used to obtain exponential inequalities for unbounded variables.
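For instance, for a standard normal variable the cumulant generating function is K(t) = t^2/2, and optimizing the exponent symbolically recovers the familiar Gaussian tail bound (a SymPy sketch; the choice of distribution is ours):

import sympy as sp

t, eps = sp.symbols("t epsilon", positive=True)
K = t**2 / 2                                   # cumulant generating function of N(0, 1)
exponent = t * eps - K                         # quantity maximised in the Legendre-Fenchel transform
t_star = sp.solve(sp.diff(exponent, t), t)[0]  # optimal t equals epsilon
rate = sp.simplify(exponent.subs(t, t_star))   # epsilon**2 / 2
print(t_star, rate)                            # so P(X >= eps) <= exp(-eps**2 / 2)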
Bounded variables.
If P("x") has finite support based on the interval ["a", "b"], let "M"
max(|"a"|, |"b"|) where |"x"| is the absolute value of x. If the mean of P("x") is zero then for all "k" > 0
formula_48
The second of these inequalities with "r" = 2 is the Chebyshev bound. The first provides a lower bound for the value of P("x").
Finite samples.
Univariate case.
Saw "et al" extended Chebyshev's inequality to cases where the population mean and variance are not known and may not exist, but the sample mean and sample standard deviation from "N" samples are to be employed to bound the expected value of a new drawing from the same distribution. The following simpler version of this inequality is given by Kabán.
formula_49
where "X" is a random variable which we have sampled "N" times, "m" is the sample mean, "k" is a constant and "s" is the sample standard deviation.
This inequality holds even when the population moments do not exist, and when the sample is only weakly exchangeably distributed; this criterion is met for randomised sampling. A table of values for the Saw–Yang–Mo inequality for finite sample sizes ("N" < 100) has been determined by Konijn. The table allows the calculation of various confidence intervals for the mean, based on multiples, C, of the standard error of the mean as calculated from the sample. For example, Konijn shows that for "N" = 59, the 95 percent confidence interval for the mean "m" is ("m" − "Cs", "m" + "Cs") where "C" = 4.447 × 1.006 = 4.47 (this is 2.28 times larger than the value found on the assumption of normality showing the loss on precision resulting from ignorance of the precise nature of the distribution).
An equivalent inequality can be derived in terms of the sample mean instead,
formula_50
For fixed "N" and large "m" the Saw–Yang–Mo inequality is approximately
formula_51
Beasley "et al" have suggested a modification of this inequality
formula_52
In empirical testing this modification is conservative but appears to have low statistical power. Its theoretical basis currently remains unexplored.
Dependence on sample size.
The bounds these inequalities give on a finite sample are less tight than those the Chebyshev inequality gives for a distribution. To illustrate this let the sample size "N" = 100 and let "k" = 3. Chebyshev's inequality states that at most approximately 11.11% of the distribution will lie at least three standard deviations away from the mean. Kabán's version of the inequality for a finite sample states that at most approximately 12.05% of the sample lies outside these limits. The dependence of the confidence intervals on sample size is further illustrated below.
For "N" = 10, the 95% confidence interval is approximately ±13.5789 standard deviations.
For "N" = 100 the 95% confidence interval is approximately ±4.9595 standard deviations; the 99% confidence interval is approximately ±140.0 standard deviations.
For "N" = 500 the 95% confidence interval is approximately ±4.5574 standard deviations; the 99% confidence interval is approximately ±11.1620 standard deviations.
For "N" = 1000 the 95% and 99% confidence intervals are approximately ±4.5141 and approximately ±10.5330 standard deviations respectively.
The Chebyshev inequality for the distribution gives 95% and 99% confidence intervals of approximately ±4.472 standard deviations and ±10 standard deviations respectively.
Samuelson's inequality.
Although Chebyshev's inequality is the best possible bound for an arbitrary distribution, this is not necessarily true for finite samples. Samuelson's inequality states that all values of a sample must lie within √"N" − 1 sample standard deviations of the mean.
By comparison, Chebyshev's inequality states that all but a "1/N" fraction of the sample will lie within √"N" standard deviations of the mean. Since there are "N" samples, this means that no samples will lie outside √"N" standard deviations of the mean, which is worse than Samuelson's inequality. However, the benefit of Chebyshev's inequality is that it can be applied more generally to get confidence bounds for ranges of standard deviations that do not depend on the number of samples.
Semivariances.
An alternative method of obtaining sharper bounds is through the use of semivariances (partial variances). The upper ("σ"+2) and lower ("σ"−2) semivariances are defined as
formula_53
formula_54
where "m" is the arithmetic mean of the sample and "n" is the number of elements in the sample.
The variance of the sample is the sum of the two semivariances:
formula_55
In terms of the lower semivariance Chebyshev's inequality can be written
formula_56
Putting
formula_57
Chebyshev's inequality can now be written
formula_58
A similar result can also be derived for the upper semivariance.
If we put
formula_59
Chebyshev's inequality can be written
formula_60
Because "σ"u2 ≤ "σ"2, use of the semivariance sharpens the original inequality.
If the distribution is known to be symmetric, then
formula_61
and
formula_62
This result agrees with that derived using standardised variables.
Multivariate case.
Stellato et al. simplified the notation and extended the empirical Chebyshev inequality from Saw et al. to the multivariate case. Let formula_63 be a random variable and let formula_64. We draw formula_65 iid samples of formula_66 denoted as formula_67. Based on the first formula_68 samples, we define the empirical mean as formula_69 and the unbiased empirical covariance as formula_70. If formula_71 is nonsingular, then for all formula_72,
formula_73
Remarks.
In the univariate case, i.e. formula_74, this inequality corresponds to the one from Saw et al. Moreover, the right-hand side can be simplified by upper bounding the floor function by its argument
formula_75
As formula_76, the right-hand side tends to formula_77 which corresponds to the multivariate Chebyshev inequality over ellipsoids shaped according to formula_78 and centered in formula_79.
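A numerical sketch of the bound in formula_73 (the sample data and dimensions are ours; the empirical covariance follows the 1/N definition given above):

import numpy as np

rng = np.random.default_rng(0)
N, d, lam = 200, 3, 3.0
xi = rng.normal(size=(N, d))                   # the first N samples of the random vector

mu_N = xi.mean(axis=0)
Sigma_N = (xi - mu_N).T @ (xi - mu_N) / N      # empirical covariance as defined above

new = rng.normal(size=d)                       # the (N+1)-th draw
stat = (new - mu_N) @ np.linalg.inv(Sigma_N) @ (new - mu_N)

bound = min(1.0, np.floor(d * (N + 1) * (N**2 - 1 + N * lam**2) / (N**2 * lam**2)) / (N + 1))
print(stat >= lam**2, bound)                   # the event on the left has probability at most bound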
Sharpened bounds.
Chebyshev's inequality is important because of its applicability to any distribution. As a result of its generality it may not (and usually does not) provide as sharp a bound as alternative methods that can be used if the distribution of the random variable is known. To improve the sharpness of the bounds provided by Chebyshev's inequality a number of methods have been developed; for a review see eg.
Cantelli's inequality.
Cantelli's inequality due to Francesco Paolo Cantelli states that for a real random variable ("X") with mean ("μ") and variance ("σ"2)
formula_80
where "a" ≥ 0.
This inequality can be used to prove a one tailed variant of Chebyshev's inequality with "k" > 0
formula_81
The bound on the one tailed variant is known to be sharp. To see this consider the random variable "X" that takes the values
formula_82 with probability formula_83
formula_84 with probability formula_85
Then E("X") = 0 and E("X"2) = "σ"2 and P("X" < 1) = 1 / (1 + "σ"2).
An application: distance between the mean and the median.
The one-sided variant can be used to prove the proposition that for probability distributions having an expected value and a median, the mean and the median can never differ from each other by more than one standard deviation. To express this in symbols let "μ", "ν", and "σ" be respectively the mean, the median, and the standard deviation. Then
formula_86
There is no need to assume that the variance is finite because this inequality is trivially true if the variance is infinite.
The proof is as follows. Setting "k" = 1 in the statement for the one-sided inequality gives:
formula_87
Changing the sign of "X" and of "μ", we get
formula_88
As the median is by definition any real number "m" that satisfies the inequalities
formula_89
this implies that the median lies within one standard deviation of the mean. A proof using Jensen's inequality also exists.
Bhattacharyya's inequality.
Bhattacharyya extended Cantelli's inequality using the third and fourth moments of the distribution.
Let formula_90 and formula_91 be the variance. Let formula_92 and formula_93.
If formula_94 then
formula_95
The necessity of formula_94 may require formula_2 to be reasonably large.
In the case formula_96 this simplifies to
formula_97
Since formula_98 for formula_2 close to 1, this bound improves slightly over Cantelli's bound formula_99 as formula_100.
wins a factor 2 over Chebyshev's inequality.
Gauss's inequality.
In 1823 Gauss showed that for a distribution with a unique mode at zero,
formula_101
formula_102
Vysochanskij–Petunin inequality.
The Vysochanskij–Petunin inequality generalizes Gauss's inequality, which only holds for deviation from the mode of a unimodal distribution, to deviation from the mean, or more generally, any center. If "X" is a unimodal distribution with mean "μ" and variance "σ"2, then the inequality states that
formula_103
formula_104
For symmetrical unimodal distributions, the median and the mode are equal, so both the Vysochanskij–Petunin inequality and Gauss's inequality apply to the same center. Further, for symmetrical distributions, one-sided bounds can be obtained by noticing that
formula_105
The additional fraction of formula_106 present in these tail bounds leads to better confidence intervals than Chebyshev's inequality. For example, for any symmetrical unimodal distribution, the Vysochanskij–Petunin inequality states that 4/(9 × 3²) = 4/81 ≈ 4.9% of the distribution lies outside 3 standard deviations of the mode.
Bounds for specific distributions.
DasGupta has shown that if the distribution is known to be normal
formula_107
From DasGupta's inequality it follows that for a normal distribution at least 95% lies within approximately 2.582 standard deviations of the mean. This is less sharp than the true figure (approximately 1.96 standard deviations of the mean).
Related inequalities.
Several other related inequalities are also known.
Paley–Zygmund inequality.
The Paley–Zygmund inequality gives a lower bound on tail probabilities, as opposed to Chebyshev's inequality which gives an upper bound. Applying it to the square of a random variable, we get
formula_108
Haldane's transformation.
One use of Chebyshev's inequality in applications is to create confidence intervals for variates with an unknown distribution. Haldane noted, using an equation derived by Kendall, that if a variate ("x") has a zero mean, unit variance and both finite skewness ("γ") and kurtosis ("κ") then the variate can be converted to a normally distributed standard score ("z"):
formula_109
This transformation may be useful as an alternative to Chebyshev's inequality or as an adjunct to it for deriving confidence intervals for variates with unknown distributions.
While this transformation may be useful for moderately skewed and/or kurtotic distributions, it performs poorly when the distribution is markedly skewed and/or kurtotic.
He, Zhang and Zhang's inequality.
For any collection of n non-negative independent random variables Xi with expectation 1
formula_110
Integral Chebyshev inequality.
There is a second (less well known) inequality also named after Chebyshev
If "f", "g" : ["a", "b"] → R are two monotonic functions of the same monotonicity, then
formula_111
If "f" and "g" are of opposite monotonicity, then the above inequality works in the reverse way.
This inequality is related to Jensen's inequality, Kantorovich's inequality, the Hermite–Hadamard inequality and Walter's conjecture.
Other inequalities.
There are also a number of other inequalities associated with Chebyshev.
Notes.
The Environmental Protection Agency has suggested best practices for the use of Chebyshev's inequality for estimating confidence intervals.
References.
|
[
{
"math_id": 0,
"text": "k\\sigma"
},
{
"math_id": 1,
"text": "1/k^2"
},
{
"math_id": 2,
"text": "k"
},
{
"math_id": 3,
"text": "\\sigma"
},
{
"math_id": 4,
"text": "\n \\Pr(|X-\\mu|\\geq k\\sigma) \\leq \\frac{1}{k^2}.\n "
},
{
"math_id": 5,
"text": "k > 1"
},
{
"math_id": 6,
"text": "k \\leq 1"
},
{
"math_id": 7,
"text": " \\frac{1}{k^2} \\geq 1 "
},
{
"math_id": 8,
"text": "k = \\sqrt{2}"
},
{
"math_id": 9,
"text": "(\\mu - \\sqrt{2}\\sigma, \\mu + \\sqrt{2}\\sigma)"
},
{
"math_id": 10,
"text": "\\frac{1}{2}"
},
{
"math_id": 11,
"text": "\\mu(\\{x\\in X\\,:\\,\\,|f(x)|\\geq t\\}) \\leq {1\\over t^p} \\int_{X} |f|^p \\, d\\mu."
},
{
"math_id": 12,
"text": "g(t) \\neq 0"
},
{
"math_id": 13,
"text": "\\mu(\\{x\\in X\\,:\\,\\,f(x)\\geq t\\}) \\leq {1\\over g(t)} \\int_X g\\circ f\\, d\\mu."
},
{
"math_id": 14,
"text": "\n\\mu(\\{x\\in X:|F(x)|\\geq \\varepsilon\\}) \\leq\\frac1\\varepsilon \\int_X|F|d\\mu\n"
},
{
"math_id": 15,
"text": "F=g\\circ f"
},
{
"math_id": 16,
"text": "\\varepsilon=g(t)"
},
{
"math_id": 17,
"text": "\\mu(\\{x\\in X\\,:\\,\\,g\\circ f(x)\\geq g(t)\\}) =\\mu(\\{x\\in X\\,:\\,\\,f(x)\\geq t\\}) "
},
{
"math_id": 18,
"text": "g(x)"
},
{
"math_id": 19,
"text": "|x|^p"
},
{
"math_id": 20,
"text": "x\\ge t"
},
{
"math_id": 21,
"text": "0"
},
{
"math_id": 22,
"text": "k=2"
},
{
"math_id": 23,
"text": "1/k^2 = 1/4"
},
{
"math_id": 24,
"text": "\n X = \\begin{cases}\n -1, & \\text{with probability }\\frac{1}{2k^2} \\\\\n 0, & \\text{with probability }1 - \\frac{1}{k^2} \\\\\n 1, & \\text{with probability }\\frac{1}{2k^2}\n \\end{cases}\n "
},
{
"math_id": 25,
"text": "\n \\Pr(|X-\\mu| \\ge k\\sigma) = \\Pr(|X| \\ge 1) = \\frac{1}{k^2}.\n "
},
{
"math_id": 26,
"text": "\\Pr(|Y| \\geq a) \\leq \\mathbb{E}[|Y|]/a"
},
{
"math_id": 27,
"text": "Y = (X - \\mu)^2"
},
{
"math_id": 28,
"text": "a = (k \\sigma)^2"
},
{
"math_id": 29,
"text": " \\Pr(|X - \\mu| \\geq k\\sigma) = \\Pr((X - \\mu)^2 \\geq k^2\\sigma^2) \\leq \\frac{\\mathbb{E}[(X - \\mu)^2]}{k^2\\sigma^2} = \\frac{\\sigma^2}{k^2\\sigma^2} = \\frac{1}{k^2}. "
},
{
"math_id": 30,
"text": "\\begin{align}\n\\sigma^2&=\\mathbb{E}[(X-\\mu)^2]\\\\[5pt]\n&=\\mathbb{E}[(X-\\mu)^2\\mid k\\sigma\\leq |X-\\mu|]\\Pr[k\\sigma\\leq|X-\\mu|]+\\mathbb{E}[(X-\\mu)^2\\mid k\\sigma>|X-\\mu|]\\Pr[k\\sigma>|X-\\mu|] \\\\[5pt]\n&\\geq(k\\sigma)^2\\Pr[k\\sigma\\leq|X-\\mu|]+0\\cdot\\Pr[k\\sigma>|X-\\mu|] \\\\[5pt]\n&=k^2\\sigma^2\\Pr[k\\sigma\\leq|X-\\mu|]\n\\end{align}"
},
{
"math_id": 31,
"text": "\\beta \\geq \\alpha \\geq 0"
},
{
"math_id": 32,
"text": " \\Pr( X \\in [\\mu - \\alpha, \\mu + \\beta] ) \\ge \\begin{cases}\\frac{ \\alpha^2 }{\\alpha^2 + \\sigma^2} &\\text{if } \\alpha(\\beta-\\alpha) \\geq 2\\sigma^2 \\\\ \\frac{4\\alpha\\beta - 4\\sigma^2}{(\\alpha + \\beta)^2} &\\text{if } 2\\alpha\\beta \\geq 2\\sigma^2 \\geq \\alpha(\\beta - \\alpha) \\\\ 0 & \\sigma^2 \\geq \\alpha\\beta\\end{cases} "
},
{
"math_id": 33,
"text": "\\alpha = \\beta"
},
{
"math_id": 34,
"text": "\\Pr\\left(\\sum_{i=1}^n (X_i - \\mu_i)^2 \\ge k^2 \\sum_{i=1}^n \\sigma_i^2 \\right) \\le \\frac{1}{k^2} "
},
{
"math_id": 35,
"text": " \\Pr(\\| X - \\mu \\| \\ge k \\| \\sigma \\|) \\le \\frac{ 1 } { k^2 }. "
},
{
"math_id": 36,
"text": " \\Pr \\left( ( X - \\operatorname{E}(X) )^T S^{-1} (X - \\operatorname{E}(X)) < k \\right) \\ge 1 - \\frac{n}{k} "
},
{
"math_id": 37,
"text": " \\Pr \\left( d^2_S(X,\\operatorname{E}(X)) < k \\right) \\ge 1 - \\frac{n}{k} "
},
{
"math_id": 38,
"text": " d_S(x,y) =\\sqrt{ (x -y)^T S^{-1} (x -y) } "
},
{
"math_id": 39,
"text": "\\Pr\\left (\\bigcap_{i = 1}^n \\frac{|X_i - \\mu_i|}{\\sigma_i} \\le k_i \\right ) \\ge \\prod_{i=1}^n \\left (1 - \\frac{1}{k_i^2} \\right)"
},
{
"math_id": 40,
"text": " \\Pr\\left( \\bigcap_{ i = 1}^2 \\left[ \\frac{ | X_i - \\mu_i | } { \\sigma_i } < k \\right] \\right) \\ge 1 - \\frac{ 1 + \\sqrt{ 1 - \\rho^2 } } { k^2 }."
},
{
"math_id": 41,
"text": " \\Pr\\left(\\bigcap_{i = 1 }^n \\frac{|X_i - \\mu_i|}{\\sigma_i} < k_i \\right) \\ge 1 - \\frac{1}{n^2} \\left(\\sqrt{u} + \\sqrt{n-1} \\sqrt{n \\sum_i \\frac 1 { k_i^2} - u} \\right)^2 "
},
{
"math_id": 42,
"text": " u = \\sum_{i=1}^n \\frac{1}{ k_i^2} + 2\\sum_{i=1}^n \\sum_{j<i} \\frac{\\rho_{ij}}{k_i k_j} "
},
{
"math_id": 43,
"text": "| X - \\operatorname{E}(X) |^n"
},
{
"math_id": 44,
"text": " \\Pr\\left(| X - \\operatorname{E}(X) | \\ge k \\operatorname{E}(|X - \\operatorname{E}(X) |^n )^{ \\frac{1}{n} }\\right) \\le \\frac{1 } {k^n}, \\qquad k >0, n \\geq 2."
},
{
"math_id": 45,
"text": " \\Pr(X \\ge \\varepsilon) \\le e^{ -t \\varepsilon }\\operatorname{E}\\left (e^{ t X } \\right), \\qquad t > 0."
},
{
"math_id": 46,
"text": " K( t ) = \\log \\left(\\operatorname{E}\\left( e^{ t x } \\right) \\right). "
},
{
"math_id": 47,
"text": "-\\log( \\Pr (X \\ge \\varepsilon )) \\ge \\sup_t( t \\varepsilon - K( t ) ). "
},
{
"math_id": 48,
"text": "\\frac{\\operatorname{E}(|X|^r ) - k^r }{M^r} \\le \\Pr( | X | \\ge k ) \\le \\frac{\\operatorname{E}(| X |^r ) }{ k^r }."
},
{
"math_id": 49,
"text": "\\Pr( | X - m | \\ge ks ) \\le \\frac 1 {N + 1} \\left\\lfloor \\frac {N+1} N \\left(\\frac{N - 1}{k^2} + 1 \\right) \\right\\rfloor"
},
{
"math_id": 50,
"text": "\\Pr( | X - m | \\ge km ) \\le \\frac{N - 1} N \\frac 1 {k^2} \\frac{s^2}{m^2} + \\frac 1 N."
},
{
"math_id": 51,
"text": " \\Pr( | X - m | \\ge ks ) \\le \\frac 1 {N + 1}. "
},
{
"math_id": 52,
"text": " \\Pr( | X - m | \\ge ks ) \\le \\frac 1 {k^2( N + 1 )}. "
},
{
"math_id": 53,
"text": " \\sigma_+^2 = \\frac { \\sum_{x>m} (x - m)^2 } { n - 1 } ,"
},
{
"math_id": 54,
"text": " \\sigma_-^2 = \\frac { \\sum_{x<m} (m - x)^2 } { n - 1 }, "
},
{
"math_id": 55,
"text": " \\sigma^2 = \\sigma_+^2 + \\sigma_-^2. "
},
{
"math_id": 56,
"text": " \\Pr(x \\le m - a \\sigma_-) \\le \\frac { 1 } { a^2 }."
},
{
"math_id": 57,
"text": " a = \\frac{ k \\sigma } { \\sigma_- }. "
},
{
"math_id": 58,
"text": " \\Pr(x \\le m - k \\sigma) \\le \\frac { 1 } { k^2 } \\frac { \\sigma_-^2 } { \\sigma^2 }."
},
{
"math_id": 59,
"text": " \\sigma_u^2 = \\max(\\sigma_-^2, \\sigma_+^2) , "
},
{
"math_id": 60,
"text": " \\Pr(| x \\le m - k \\sigma |) \\le \\frac 1 {k^2} \\frac { \\sigma_u^2 } { \\sigma^2 } ."
},
{
"math_id": 61,
"text": " \\sigma_+^2 = \\sigma_-^2 = \\frac{ 1 } { 2 } \\sigma^2 "
},
{
"math_id": 62,
"text": " \\Pr(x \\le m - k \\sigma) \\le \\frac 1 {2k^2} ."
},
{
"math_id": 63,
"text": "\\xi \\in \\mathbb{R}^{n_\\xi}"
},
{
"math_id": 64,
"text": "N \\in \\mathbb{Z}_{\\geq n_\\xi}"
},
{
"math_id": 65,
"text": "N+1"
},
{
"math_id": 66,
"text": "\\xi"
},
{
"math_id": 67,
"text": "\\xi^{(1)},\\dots,\\xi^{(N)},\\xi^{(N+1)} \\in \\mathbb{R}^{n_\\xi}"
},
{
"math_id": 68,
"text": "N"
},
{
"math_id": 69,
"text": "\\mu_N = \\frac 1 N \\sum_{i=1}^N \\xi^{(i)}"
},
{
"math_id": 70,
"text": "\\Sigma_N = \\frac 1 N \\sum_{i=1}^N (\\xi^{(i)} - \\mu_{N})(\\xi^{(i)} - \\mu_N)^\\top"
},
{
"math_id": 71,
"text": "\\Sigma_N"
},
{
"math_id": 72,
"text": "\\lambda \\in \\mathbb{R}_{\\geq 0} "
},
{
"math_id": 73,
"text": "\n\\begin{align}\n& P^{N+1} \\left((\\xi^{(N+1)} - \\mu_N)^\\top \\Sigma_N^{-1}(\\xi^{(N+1)} - \\mu_N) \\geq \\lambda^2\\right) \\\\[8pt]\n\\leq {} & \\min\\left\\{1, \\frac 1 {N+1} \\left\\lfloor \\frac{n_\\xi(N+1)(N^2 - 1 + N\\lambda^2)}{N^2\\lambda^2}\\right\\rfloor\\right\\}.\n\\end{align}\n"
},
{
"math_id": 74,
"text": "n_\\xi = 1"
},
{
"math_id": 75,
"text": "P^{N+1}\\left((\\xi^{(N+1)} - \\mu_N)^\\top \\Sigma_N^{-1}(\\xi^{(N+1)} - \\mu_N) \\geq \\lambda^2\\right) \\leq \\min\\left\\{1, \\frac{n_\\xi(N^2 - 1 + N\\lambda^2)}{N^2\\lambda^2}\\right\\}."
},
{
"math_id": 76,
"text": "N \\to \\infty"
},
{
"math_id": 77,
"text": "\\min \\left\\{1, \\frac{n_\\xi}{\\lambda^2}\\right\\}"
},
{
"math_id": 78,
"text": "\\Sigma"
},
{
"math_id": 79,
"text": "\\mu"
},
{
"math_id": 80,
"text": " \\Pr(X - \\mu \\ge a) \\le \\frac{\\sigma^2}{ \\sigma^2 + a^2 } "
},
{
"math_id": 81,
"text": " \\Pr(X - \\mu \\geq k \\sigma) \\leq \\frac{ 1 }{ 1 + k^2 }. "
},
{
"math_id": 82,
"text": " X = 1 "
},
{
"math_id": 83,
"text": " \\frac{ \\sigma^2 } { 1 + \\sigma^2 }"
},
{
"math_id": 84,
"text": " X = - \\sigma^2 "
},
{
"math_id": 85,
"text": " \\frac{ 1 } { 1 + \\sigma^2 }."
},
{
"math_id": 86,
"text": " \\left | \\mu - \\nu \\right | \\leq \\sigma. "
},
{
"math_id": 87,
"text": "\\Pr(X - \\mu \\geq \\sigma) \\leq \\frac{ 1 }{ 2 } \\implies \\Pr(X \\geq \\mu + \\sigma) \\leq \\frac{ 1 }{ 2 }. "
},
{
"math_id": 88,
"text": "\\Pr(X \\leq \\mu - \\sigma) \\leq \\frac{ 1 }{ 2 }. "
},
{
"math_id": 89,
"text": "\\Pr(X\\leq m) \\geq \\frac{1}{2}\\text{ and }\\Pr(X\\geq m) \\geq \\frac{1}{2}"
},
{
"math_id": 90,
"text": "\\mu = 0"
},
{
"math_id": 91,
"text": "\\sigma^2"
},
{
"math_id": 92,
"text": "\\gamma = E[X^3] / \\sigma^3"
},
{
"math_id": 93,
"text": "\\kappa = E[X^4]/\\sigma^4"
},
{
"math_id": 94,
"text": "k^2 - k \\gamma - 1 > 0"
},
{
"math_id": 95,
"text": " \\Pr(X > k\\sigma) \\le \\frac{ \\kappa - \\gamma^2 - 1 }{ (\\kappa - \\gamma^2 - 1) (1 + k^2) + (k^2 - k\\gamma - 1) }."
},
{
"math_id": 96,
"text": "E[X^3]=0"
},
{
"math_id": 97,
"text": "\\Pr(X > k\\sigma) \\le \\frac{\\kappa-1}{\\kappa \\left(k^2+1\\right)-2}\n\\quad \\text{for } k > 1.\n"
},
{
"math_id": 98,
"text": "\\frac{\\kappa-1}{\\kappa \\left(k^2+1\\right)-2} = \\frac{1}{2}-\\frac{\\kappa (k-1)}{2 (\\kappa-1)}+O\\left((k-1)^2\\right)"
},
{
"math_id": 99,
"text": "\\frac{1}{2}-\\frac{k-1}{2}+O\\left((k-1)^2\\right)"
},
{
"math_id": 100,
"text": "\\kappa > 1"
},
{
"math_id": 101,
"text": " \\Pr( | X | \\ge k ) \\le \\frac{ 4 \\operatorname{ E }( X^2 ) } { 9k^2 } \\quad\\text{if} \\quad k^2 \\ge \\frac{ 4 } { 3 } \\operatorname{E} (X^2) ,"
},
{
"math_id": 102,
"text": " \\Pr( | X | \\ge k ) \\le 1 - \\frac{ k } { \\sqrt{3} \\operatorname{ E }( X^2 ) } \\quad \\text{if} \\quad k^2 \\le \\frac{ 4 } { 3 } \\operatorname{ E }( X^2 ). "
},
{
"math_id": 103,
"text": " \\Pr( | X - \\mu | \\ge k \\sigma ) \\le \\frac{ 4 }{ 9k^2 } \\quad \\text{if} \\quad k \\ge \\sqrt{8/3} = 1.633."
},
{
"math_id": 104,
"text": " \\Pr( | X - \\mu | \\ge k \\sigma ) \\le \\frac{ 4 }{ 3k^2 } - \\frac13 \\quad \\text{if} \\quad k \\le \\sqrt{8/3}."
},
{
"math_id": 105,
"text": " \\Pr( X - \\mu \\ge k \\sigma ) = \\Pr( X - \\mu \\le -k \\sigma ) = \\frac{1}{2} \\Pr( |X - \\mu| \\ge k \\sigma )."
},
{
"math_id": 106,
"text": "4/9"
},
{
"math_id": 107,
"text": " \\Pr( | X - \\mu | \\ge k \\sigma ) \\le \\frac{ 1 }{ 3 k^2 } ."
},
{
"math_id": 108,
"text": " \\Pr( | Z | > \\theta \\sqrt{E[Z^2]} ) \\ge \\frac{ ( 1 - \\theta^2 )^2 E[Z^2]^2 }{E[Z^4]}."
},
{
"math_id": 109,
"text": " z = x - \\frac{\\gamma}{6} (x^2 - 1) + \\frac{ x }{ 72 } [ 2 \\gamma^2 (4 x^2 - 7) - 3 \\kappa (x^2 - 3) ] + \\cdots "
},
{
"math_id": 110,
"text": " \\Pr\\left ( \\frac{\\sum_{i=1}^n X_i }{n} - 1 \\ge \\frac{1}{n} \\right) \\le \\frac{ 7 }{ 8 }. "
},
{
"math_id": 111,
"text": " \\frac{ 1 }{ b - a } \\int_a^b \\! f(x) g(x) \\,dx \\ge \\left[ \\frac{ 1 }{ b - a } \\int_a^b \\! f(x) \\,dx \\right] \\left[ \\frac{ 1 }{ b - a } \\int_a^b \\! g(x) \\,dx \\right] ."
}
] |
https://en.wikipedia.org/wiki?curid=156533