id | title | text | formulas | url |
---|---|---|---|---|
7225 | Chemical affinity | In chemical physics and physical chemistry, chemical affinity is the electronic property by which dissimilar chemical species are capable of forming chemical compounds. Chemical affinity can also refer to the tendency of an atom or compound to combine by chemical reaction with atoms or compounds of unlike composition.
History.
Early theories.
The idea of "affinity" is extremely old. Many attempts have been made at identifying its origins. The majority of such attempts, however, except in a general manner, end in futility since "affinities" lie at the basis of all magic, thereby pre-dating science. Physical chemistry, however, was one of the first branches of science to study and formulate a "theory of affinity". The name "affinitas" was first used in the sense of chemical relation by German philosopher Albertus Magnus near the year 1250. Later, those as Robert Boyle, John Mayow, Johann Glauber, Isaac Newton, and Georg Stahl put forward ideas on elective affinity in attempts to explain how heat is evolved during combustion reactions.
The term "affinity" has been used figuratively since c. 1600 in discussions of structural relationships in chemistry, philology, etc., and reference to "natural attraction" is from 1616. "Chemical affinity", historically, has referred to the "force" that causes chemical reactions. as well as, more generally, and earlier, the ″tendency to combine″ of any pair of substances. The broad definition, used generally throughout history, is that chemical affinity is that whereby substances enter into or resist decomposition.
The modern term chemical affinity is a somewhat modified variation of its eighteenth-century precursor "elective affinity" or elective attractions, a term that was used by the 18th century chemistry lecturer William Cullen. Whether Cullen coined the phrase is not clear, but his usage seems to predate most others, although it rapidly became widespread across Europe, and was used in particular by the Swedish chemist Torbern Olof Bergman throughout his book (1775). Affinity theories were used in one way or another by most chemists from around the middle of the 18th century into the 19th century to explain and organise the different combinations into which substances could enter and from which they could be retrieved. Antoine Lavoisier, in his famed 1789 "Traité Élémentaire de Chimie (Elements of Chemistry)", refers to Bergman's work and discusses the concept of elective affinities or attractions.
According to chemistry historian Henry Leicester, the influential 1923 textbook "Thermodynamics and the Free Energy of Chemical Reactions" by Gilbert N. Lewis and Merle Randall led to the replacement of the term "affinity" by the term "free energy" in much of the English-speaking world.
According to Prigogine, the term was introduced and developed by Théophile de Donder.
Johann Wolfgang von Goethe used the concept in his novel "Elective Affinities" (1809).
Visual representations.
The affinity concept was very closely linked to the visual representation of substances on a table. The first-ever "affinity table", which was based on displacement reactions, was published in 1718 by the French chemist Étienne François Geoffroy. Geoffroy's name is best known in connection with these tables of "affinities" ("tables des rapports"), which were first presented to the French Academy of Sciences in 1718 and 1720.
During the 18th century many versions of the table were proposed with leading chemists like Torbern Bergman in Sweden and Joseph Black in Scotland adapting it to accommodate new chemical discoveries. All the tables were essentially lists, prepared by collating observations on the actions of substances one upon another, showing the varying degrees of affinity exhibited by analogous bodies for different reagents.
Crucially, the table was the central graphic tool used to teach chemistry to students and its visual arrangement was often combined with other kinds of diagrams. Joseph Black, for example, used the table in combination with chiastic and circlet diagrams to visualise the core principles of chemical affinity. Affinity tables were used throughout Europe until the early 19th century when they were displaced by affinity concepts introduced by Claude Berthollet.
Modern conceptions.
In chemical physics and physical chemistry, chemical affinity is the electronic property by which dissimilar chemical species are capable of forming chemical compounds. Chemical affinity can also refer to the tendency of an atom or compound to combine by chemical reaction with atoms or compounds of unlike composition.
In modern terms, we relate affinity to the phenomenon whereby certain atoms or molecules have the tendency to aggregate or bond. For example, in the 1919 book "Chemistry of Human Life" physician George W. Carey states that, "Health depends on a proper amount of iron phosphate Fe3(PO4)2 in the blood, for the molecules of this salt have chemical affinity for oxygen and carry it to all parts of the organism." In this antiquated context, chemical affinity is sometimes found synonymous with the term "magnetic attraction". Many writings, up until about 1925, also refer to a "law of chemical affinity".
Ilya Prigogine summarized the concept of affinity, saying, "All chemical reactions drive the system to a state of equilibrium in which the "affinities" of the reactions vanish."
Thermodynamics.
The present IUPAC definition is that affinity "A" is the negative partial derivative of Gibbs free energy "G" with respect to extent of reaction "ξ" at constant pressure and temperature. That is,
formula_0
It follows that affinity is positive for spontaneous reactions.
In 1923, the Belgian mathematician and physicist Théophile de Donder derived a relation between affinity and the Gibbs free energy of a chemical reaction. Through a series of derivations, de Donder showed that if we consider a mixture of chemical species with the possibility of chemical reaction, it can be proven that the following relation holds:
formula_1
With the writings of Théophile de Donder as precedent, Ilya Prigogine and Defay in "Chemical Thermodynamics" (1954) defined chemical affinity as the rate of change of the uncompensated heat of reaction "Q"' as the reaction progress variable or reaction extent "ξ" grows infinitesimally:
formula_2
This definition is useful for quantifying the factors responsible both for the state of equilibrium systems (where "A" = 0), and for changes of state of non-equilibrium systems (where "A" ≠ 0).
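As a small illustrative sketch of the definitions above (not taken from the sources; the reaction and the numerical chemical potentials below are invented purely for illustration), the affinity of a reaction mixture can be computed as A = −Δ_rG = −Σ ν_i μ_i, and its sign read off as in the statement that affinity is positive for spontaneous reactions:

```python
# Illustrative sketch only: affinity A = -Delta_r G = -sum(nu_i * mu_i)
# for a hypothetical reaction aA + bB -> cC. The coefficients and chemical
# potentials are invented numbers, not data for any real reaction.

def affinity(nu, mu):
    """A = -Delta_r G, with nu_i negative for reactants and positive for products."""
    return -sum(n * m for n, m in zip(nu, mu))

nu = [-1.0, -2.0, 1.0]        # stoichiometric coefficients of A, B, C
mu = [-50e3, -10e3, -95e3]    # chemical potentials in J/mol (illustrative)

A = affinity(nu, mu)
print(A)                      # 25000.0 J/mol
print("forward reaction spontaneous" if A > 0 else "not spontaneous forward")
```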
| [
{
"math_id": 0,
"text": "A = -\\left(\\frac{\\partial G}{\\partial \\xi}\\right)_{P,T}."
},
{
"math_id": 1,
"text": " A = -\\Delta_rG. \\,"
},
{
"math_id": 2,
"text": "A = \\frac{{\\mathrm d}Q'}{{\\mathrm d}\\xi}. \\, "
}
]
| https://en.wikipedia.org/wiki?curid=7225 |
722503 | Linear system | Physical system satisfying the superposition principle
In systems theory, a linear system is a mathematical model of a system based on the use of a linear operator.
Linear systems typically exhibit features and properties that are much simpler than the nonlinear case.
As a mathematical abstraction or idealization, linear systems find important applications in automatic control theory, signal processing, and telecommunications. For example, the propagation medium for wireless communication systems can often be
modeled by linear systems.
Definition.
A general deterministic system can be described by an operator, "H", that maps an input, "x"("t"), as a function of t to an output, "y"("t"), a type of black box description.
A system is linear if and only if it satisfies the superposition principle, or equivalently both the additivity and homogeneity properties, without restrictions (that is, for all inputs, all scaling constants and all time.)
The superposition principle means that a linear combination of inputs to the system produces a linear combination of the individual zero-state outputs (that is, outputs setting the initial conditions to zero) corresponding to the individual inputs.
In a system that satisfies the homogeneity property, scaling the input always results in scaling the zero-state response by the same factor. In a system that satisfies the additivity property, adding two inputs always results in adding the corresponding two zero-state responses due to the individual inputs.
Mathematically, for a continuous-time system, given two arbitrary inputs
formula_2
as well as their respective zero-state outputs
formula_3
then a linear system must satisfy
formula_4
for any scalar values α and β, for any input signals "x"1("t") and "x"2("t"), and for all time t.
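As a quick numerical illustration of this condition (a sketch only; the particular operator, a discrete approximation of differentiation, is chosen merely as an example of a linear system):

```python
import numpy as np

# Check additivity and homogeneity for an example system H(x) = dx/dt,
# approximated on a time grid with np.gradient (a linear operation).
t = np.linspace(0.0, 10.0, 2001)

def H(x):
    return np.gradient(x, t)

x1 = np.sin(2 * t)
x2 = np.exp(-t) * np.cos(5 * t)
alpha, beta = 2.5, -0.7

lhs = H(alpha * x1 + beta * x2)
rhs = alpha * H(x1) + beta * H(x2)
print(np.max(np.abs(lhs - rhs)))   # ~1e-15: superposition holds up to rounding
```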
The system is then defined by the equation "H"("x"("t")) = "y"("t"), where "y"("t") is some arbitrary function of time, and "x"("t") is the system state. Given "y"("t") and "H", the system can be solved for "x"("t").
The behavior of the resulting system subjected to a complex input can be described as a sum of responses to simpler inputs. In nonlinear systems, there is no such relation.
This mathematical property makes the solution of modelling equations simpler than many nonlinear systems.
For time-invariant systems this is the basis of the impulse response or the frequency response methods (see LTI system theory), which describe a general input function "x"("t") in terms of unit impulses or frequency components.
Typical differential equations of linear time-invariant systems are well adapted to analysis using the Laplace transform in the continuous case, and the Z-transform in the discrete case (especially in computer implementations).
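For instance, a continuous-time LTI system given as a rational transfer function can be analysed directly in the Laplace domain with standard tools; the sketch below uses SciPy on an arbitrarily chosen first-order system H(s) = 1/(s + 1):

```python
from scipy import signal

# Example first-order LTI system H(s) = 1 / (s + 1), chosen only for illustration.
sys = signal.TransferFunction([1.0], [1.0, 1.0])

t, y_step = signal.step(sys)        # time-domain step response
w, mag, phase = signal.bode(sys)    # frequency response: magnitude (dB), phase (deg)

print(y_step[-1])   # close to the DC gain H(0) = 1
print(mag[0])       # near 0 dB at low frequency, consistent with H(0) = 1
```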
Another perspective is that solutions to linear systems comprise a system of functions which act like vectors in the geometric sense.
A common use of linear models is to describe a nonlinear system by linearization. This is usually done for mathematical convenience.
The previous definition of a linear system is applicable to SISO (single-input single-output) systems. For MIMO (multiple-input multiple-output) systems, input and output signal vectors (formula_5, formula_6, formula_7, formula_8) are considered instead of input and output signals (formula_0, formula_1, formula_9, formula_10.)
This definition of a linear system is analogous to the definition of a linear differential equation in calculus, and a linear transformation in linear algebra.
Examples.
A simple harmonic oscillator obeys the differential equation:
formula_11
If formula_12
then "H" is a linear operator. Letting we can rewrite the differential equation as which shows that a simple harmonic oscillator is a linear system.
Other examples of linear systems include those described by formula_13, formula_14, formula_15, and any system described by ordinary linear differential equations. Systems described by formula_16, formula_17, formula_18, formula_19, formula_20, formula_21, formula_22, and a system with odd-symmetry output consisting of a linear region and a saturation (constant) region, are non-linear because they don't always satisfy the superposition principle.
The output versus input graph of a linear system need not be a straight line through the origin. For example, consider a system described by formula_14 (such as a constant-capacitance capacitor or a constant-inductance inductor). It is linear because it satisfies the superposition principle. However, when the input is a sinusoid, the output is also a sinusoid, and so its output-input plot is an ellipse centered at the origin rather than a straight line passing through the origin.
Also, the output of a linear system can contain harmonics (and have a smaller fundamental frequency than the input) even when the input is a sinusoid. For example, consider a system described by formula_23. It is linear because it satisfies the superposition principle. However, when the input is a sinusoid of the form formula_24, using product-to-sum trigonometric identities it can be easily shown that the output is formula_25, that is, the output doesn't consist only of sinusoids of same frequency as the input (3 rad/s), but instead also of sinusoids of frequencies 2 rad/s and 4 rad/s; furthermore, taking the least common multiple of the fundamental period of the sinusoids of the output, it can be shown the fundamental angular frequency of the output is 1 rad/s, which is different than that of the input.
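The claim in the last example is easy to verify numerically; the following sketch evaluates both expressions for the output on a grid and confirms that they coincide:

```python
import numpy as np

# Verify: (1.5 + cos t) * cos(3t) == 1.5*cos(3t) + 0.5*cos(2t) + 0.5*cos(4t)
t = np.linspace(0.0, 20.0 * np.pi, 100001)
x = np.cos(3 * t)

y_system = (1.5 + np.cos(t)) * x
y_claimed = 1.5 * np.cos(3 * t) + 0.5 * np.cos(2 * t) + 0.5 * np.cos(4 * t)

print(np.max(np.abs(y_system - y_claimed)))   # ~1e-15: the two agree
```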
Time-varying impulse response.
The time-varying impulse response "h"("t"2, "t"1) of a linear system is defined as the response of the system at time "t" = "t"2 to a single impulse applied at time In other words, if the input "x"("t") to a linear system is
formula_26
where δ("t") represents the Dirac delta function, and the corresponding response "y"("t") of the system is
formula_27
then the function "h"("t"2, "t"1) is the time-varying impulse response of the system. Since the system cannot respond before the input is applied the following causality condition must be satisfied:
formula_28
The convolution integral.
The output of any general continuous-time linear system is related to the input by an integral which may be written over a doubly infinite range because of the causality condition:
formula_29
If the properties of the system do not depend on the time at which it is operated then it is said to be time-invariant and h is a function only of the time difference "τ" = "t" − "t' " which is zero for "τ" < 0 (namely "t" < "t' "). By redefinition of h it is then possible to write the input-output relation equivalently in any of the ways,
formula_30
Linear time-invariant systems are most commonly characterized by the Laplace transform of the impulse response function called the "transfer function" which is:
formula_31
In applications this is usually a rational algebraic function of s. Because "h"("t") is zero for negative t, the integral may equally be written over the doubly infinite range, and putting "s" = "iω" gives the formula for the "frequency response function":
formula_32
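As a sketch of these relations, the frequency response of an example causal impulse response h(t) = e^(−t) (chosen because its transform 1/(1 + iω) is known in closed form) can be approximated by numerical integration:

```python
import numpy as np

# Approximate H(i*omega) = integral_0^inf h(t) exp(-i*omega*t) dt
# for the example impulse response h(t) = exp(-t), t >= 0.
dt = 1e-4
t = np.arange(0.0, 50.0, dt)
h = np.exp(-t)

def freq_response(omega):
    return np.sum(h * np.exp(-1j * omega * t)) * dt   # Riemann-sum approximation

for omega in (0.0, 1.0, 10.0):
    exact = 1.0 / (1.0 + 1j * omega)                  # analytic transform at s = i*omega
    print(omega, abs(freq_response(omega) - exact))   # small numerical error
```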
Discrete-time systems.
The output of any discrete time linear system is related to the input by the time-varying convolution sum:
formula_33
or equivalently for a time-invariant system on redefining "h",
formula_34
where formula_35 represents the lag time between the stimulus at time "m" and the response at time "n".
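For a time-invariant system this sum is an ordinary discrete convolution; the sketch below evaluates it with NumPy for an example impulse response (a three-tap moving average, chosen for illustration) and checks it against the sum written out directly:

```python
import numpy as np

# y[n] = sum_k h[k] x[n-k] for an example causal FIR impulse response h.
h = np.array([1 / 3, 1 / 3, 1 / 3])                 # 3-point moving average
x = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0])   # arbitrary input

y = np.convolve(h, x)                               # time-invariant convolution sum
y_direct = [sum(h[k] * x[n - k] for k in range(len(h)) if 0 <= n - k < len(x))
            for n in range(len(x) + len(h) - 1)]

print(y)
print(np.allclose(y, y_direct))                     # True
```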
| [
{
"math_id": 0,
"text": "x_1(t)"
},
{
"math_id": 1,
"text": "x_2(t)"
},
{
"math_id": 2,
"text": "\\begin{align} x_1(t) \\\\ x_2(t) \\end{align}"
},
{
"math_id": 3,
"text": "\\begin{align}\ny_1(t) &= H \\left \\{ x_1(t) \\right \\} \\\\\ny_2(t) &= H \\left \\{ x_2(t) \\right \\}\n\\end{align} "
},
{
"math_id": 4,
"text": "\\alpha y_1(t) + \\beta y_2(t) = H \\left \\{ \\alpha x_1(t) + \\beta x_2(t) \\right \\} "
},
{
"math_id": 5,
"text": "{\\mathbf x}_1(t)"
},
{
"math_id": 6,
"text": "{\\mathbf x}_2(t)"
},
{
"math_id": 7,
"text": "{\\mathbf y}_1(t)"
},
{
"math_id": 8,
"text": "{\\mathbf y}_2(t)"
},
{
"math_id": 9,
"text": "y_1(t)"
},
{
"math_id": 10,
"text": "y_2(t)"
},
{
"math_id": 11,
"text": "m \\frac{d^2(x)}{dt^2} = -kx."
},
{
"math_id": 12,
"text": "H(x(t)) = m \\frac{d^2(x(t))}{dt^2} + kx(t),"
},
{
"math_id": 13,
"text": "y(t) = k \\, x(t)"
},
{
"math_id": 14,
"text": "y(t) = k \\, \\frac{\\mathrm dx(t)}{\\mathrm dt}"
},
{
"math_id": 15,
"text": "y(t) = k \\, \\int_{-\\infty}^{t}x(\\tau) \\mathrm d\\tau"
},
{
"math_id": 16,
"text": "y(t) = k"
},
{
"math_id": 17,
"text": "y(t) = k \\, x(t) + k_0"
},
{
"math_id": 18,
"text": "y(t) = \\sin{[x(t)]}"
},
{
"math_id": 19,
"text": "y(t) = \\cos{[x(t)]}"
},
{
"math_id": 20,
"text": "y(t) = x^2(t)"
},
{
"math_id": 21,
"text": "y(t) = \\sqrt{x(t)}"
},
{
"math_id": 22,
"text": "y(t) = |x(t)|"
},
{
"math_id": 23,
"text": "y(t) = (1.5 + \\cos{(t)}) \\, x(t)"
},
{
"math_id": 24,
"text": "x(t) = \\cos{(3t)}"
},
{
"math_id": 25,
"text": "y(t) = 1.5 \\cos{(3t)} + 0.5 \\cos{(2t)} + 0.5 \\cos{(4t)}"
},
{
"math_id": 26,
"text": "x(t) = \\delta(t - t_1)"
},
{
"math_id": 27,
"text": "y(t=t_2) = h(t_2, t_1)"
},
{
"math_id": 28,
"text": " h(t_2, t_1) = 0, t_2 < t_1"
},
{
"math_id": 29,
"text": " y(t) = \\int_{-\\infty}^{t} h(t,t') x(t')dt' = \\int_{-\\infty}^{\\infty} h(t,t') x(t') dt' "
},
{
"math_id": 30,
"text": " y(t) = \\int_{-\\infty}^{t} h(t-t') x(t') dt' = \\int_{-\\infty}^{\\infty} h(t-t') x(t') dt' = \\int_{-\\infty}^{\\infty} h(\\tau) x(t-\\tau) d \\tau = \\int_{0}^{\\infty} h(\\tau) x(t-\\tau) d \\tau "
},
{
"math_id": 31,
"text": "H(s) =\\int_0^\\infty h(t) e^{-st}\\, dt."
},
{
"math_id": 32,
"text": " H(i\\omega) = \\int_{-\\infty}^{\\infty} h(t) e^{-i\\omega t} dt "
},
{
"math_id": 33,
"text": " y[n] = \\sum_{m =-\\infty}^{n} { h[n,m] x[m] } = \\sum_{m =-\\infty}^{\\infty} { h[n,m] x[m] }"
},
{
"math_id": 34,
"text": " y[n] = \\sum_{k =0}^{\\infty} { h[k] x[n-k] } = \\sum_{k =-\\infty}^{\\infty} { h[k] x[n-k] }"
},
{
"math_id": 35,
"text": " k = n-m "
}
]
| https://en.wikipedia.org/wiki?curid=722503 |
72251176 | Plane-based geometric algebra | Application of Clifford algebra
Plane-based geometric algebra is an application of Clifford algebra to modelling planes, lines, points, and rigid transformations. Generally this is with the goal of solving applied problems involving these elements and their intersections, projections, and their angle from one another in 3D space. Originally growing out of research on spin groups, it was developed with applications to robotics in mind. It has since been applied to machine learning, rigid body dynamics, and computer science, especially computer graphics. It is usually combined with a "duality" operation into a system known as "Projective Geometric Algebra", see below.
Plane-based geometric algebra takes "planar reflections" as basic elements, and constructs all other transformations and geometric objects out of them. Formally: it identifies planar reflections with the "grade-1" elements of a Clifford Algebra, that is, elements that are written with a single subscript such as "formula_0". With some rare exceptions described below, the algebra is almost always Cl3,0,1(R), meaning it has three basis grade-1 elements whose square is formula_1 and a single basis element whose square is formula_2.
Plane-based GA subsumes a large number of algebraic constructions applied in engineering, including the axis–angle representation of rotations, the quaternion and dual quaternion representations of rotations and translations, the Plücker representation of lines, the point normal representation of planes, and the homogeneous representation of points. Dual Quaternions then allow the screw, twist and wrench model of classical mechanics to be constructed.
The plane-based approach to geometry may be contrasted with the approach that uses the cross product, in which points, translations, rotation axes, and plane normals are all modelled as "vectors". However, the use of vectors in advanced engineering problems often requires subtle distinctions between different kinds of vector, including Gibbs vectors, pseudovectors and contravariant vectors. The latter two of these, in plane-based GA, map to the concepts of "rotation axis" and "point", with the distinction between them being made clear by the notation: rotation axes such as formula_3 (two lower indices) are always notated differently than points such as formula_4 (three lower indices).
All objects considered below are still "vectors" in the technical sense that they are "elements of vector spaces"; however they are "not" (generally) vectors in the sense that one could meaningfully take their cross product - so it is not informative to visualize them as arrows. Therefore to avoid conflict over different algebraic and visual connotations coming from the word 'vector', this article avoids use of the word.
Construction.
Plane-based geometric algebra starts with planes and then constructs lines and points by taking "intersections" of planes. Its canonical basis consists of the plane such that formula_5, which is labelled formula_0; the formula_6 plane, which is labelled formula_7; and the formula_8 plane, formula_9.
Other planes may be obtained as weighted sums of the basis planes. For example, formula_10 would be the plane midway between the y- and z-plane. In general, combining two geometric objects in plane-based GA in this way always gives a weighted average of them – combining points will give a point between them, as will combining lines, and indeed rotations.
An operation that is as fundamental as addition is the "geometric product". For example:
formula_11Here we take formula_0, which is a planar reflection in the formula_5 plane, and formula_12, which is a 180-degree rotation around the x-axis. Their geometric product is formula_4, which is a point reflection in the origin, because that is the transformation that results from a 180-degree rotation followed by a planar reflection in a plane orthogonal to the rotation's axis.
For any pair of elements formula_13 and formula_14, their geometric product formula_13formula_14 is the transformation formula_14 followed by the transformation formula_13. Note that transform "composition" is "not" transform "application"; for example formula_15 is "not" "formula_12 transformed by formula_0", it is instead the transform formula_12 followed by the transform formula_0. Transform application is implemented with the "sandwich product", see below.
This geometric interpretation is usually combined with the following assertion:
formula_16
The geometric interpretation of the first three defining equations is that if we perform the same planar reflection twice we get back to where we started; e.g. any grade-1 element (plane) multiplied by itself results in the identity function, "formula_1". The statement that formula_17 is more subtle.
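These relations can be played with in code. The following is a minimal, hedged sketch (not a library; the data representation and sign conventions are ad hoc choices for illustration) of the geometric product on basis blades of Cl3,0,1(R); it reproduces the defining relations just stated and the earlier product formula_11:

```python
from collections import defaultdict

# Minimal sketch of Cl(3,0,1): e0 squares to 0; e1, e2, e3 square to +1.
# A multivector is a dict {blade: coefficient}, where a blade is a tuple of
# strictly increasing indices, e.g. (1, 2, 3) for e123 and () for the scalar 1.
SQUARE = {0: 0, 1: 1, 2: 1, 3: 1}

def blade_product(a, b):
    """Multiply basis blades a and b; return (sign, blade), sign possibly 0."""
    idx = list(a) + list(b)
    sign, changed = 1, True
    while changed:                      # bubble sort; each swap of distinct
        changed = False                 # indices flips the sign (anticommutation)
        for i in range(len(idx) - 1):
            if idx[i] > idx[i + 1]:
                idx[i], idx[i + 1] = idx[i + 1], idx[i]
                sign, changed = -sign, True
    out, i = [], 0
    while i < len(idx):                 # contract equal neighbours: e_i e_i = SQUARE[i]
        if i + 1 < len(idx) and idx[i] == idx[i + 1]:
            sign *= SQUARE[idx[i]]
            i += 2
        else:
            out.append(idx[i])
            i += 1
    return sign, tuple(out)

def gp(x, y):
    """Geometric product of two multivectors."""
    result = defaultdict(float)
    for a, ca in x.items():
        for b, cb in y.items():
            s, blade = blade_product(a, b)
            if s:
                result[blade] += s * ca * cb
    return {k: v for k, v in result.items() if v}

e1, e23, e0 = {(1,): 1.0}, {(2, 3): 1.0}, {(0,): 1.0}
print(gp(e1, e1))    # {(): 1.0}        -- a planar reflection squares to 1
print(gp(e0, e0))    # {}               -- e0 squares to 0
print(gp(e1, e23))   # {(1, 2, 3): 1.0} -- e1 e23 = e123, as above
```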
Elements at infinity.
The algebraic element formula_18 represents the plane at infinity. It behaves differently from any other plane – intuitively, it can be "approached but never reached". In 3 dimensions, formula_18 can be visualized as the sky. Lying in it are the points called "vanishing points", or alternatively "ideal points", or "points at infinity". Parallel lines such as metal rails on a railway line meet one another at such points.
Lines at infinity also exist; the horizon line is an example of such a line. For an observer standing on a plane, all planes parallel to the plane they stand on meet one another at the horizon line. Algebraically, if we take formula_7 to be the ground, then formula_19 will be a plane parallel to the ground (displaced 5 meters from it). These two parallel planes meet one another at the line-at-infinity formula_20.
Most lines, for example formula_12, can act as axes for rotations; in fact they can be treated as imaginary quaternions. But lines that lie in the plane-at-infinity formula_18, such as the line formula_21, cannot act as axes for a "rotation". Instead, these are axes for translations, and instead of having an algebra resembling complex numbers or quaternions, their algebraic behaviour is the same as the dual numbers, since they square to 0. Combining the three basis lines-through-the-origin formula_12, formula_3, formula_22, which square to formula_23, with the three basis lines at infinity formula_24, formula_25, formula_21 gives the necessary elements for (Plücker) coordinates of lines.
Derivation of other operations from the geometric product.
There are several useful products that can be extracted from the geometric product, similar to how the dot product and cross product were extracted from the quaternion product. These include:
For example, recall that formula_0 is a plane, as is formula_43. Their geometric product is their "reflection composition" – a reflection in formula_0 followed by a reflection in formula_43, which results in the dual quaternion formula_44. But this may be more than is desired; if we wish to take only the intersection line of the two planes, we simply need to look at just the "grade-2 part" of this result, e.g. the part with two lower indices formula_45. The information needed to specify that the intersection line is contained inside the transform composition of the two planes, because a reflection in a pair of planes will result in a rotation around their intersection line.
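Continuing the sketch above (reusing its gp function and blade representation; blades are stored with ascending indices, so e10 appears as −e01), the grade-2 part of the product of the two planes is obtained by keeping only the blades with two indices:

```python
def grade_part(x, k):
    """Keep only the grade-k part of a multivector (blades with k indices)."""
    return {blade: c for blade, c in x.items() if len(blade) == k}

plane_a = {(1,): 1.0}                        # e1
plane_b = {(1,): 1.0, (2,): 1.0, (0,): 1.0}  # e1 + e2 + e0

motor = gp(plane_a, plane_b)   # reflection composition: 1 + e12 + e10
print(motor)                   # {(): 1.0, (1, 2): 1.0, (0, 1): -1.0}  (-e01 = +e10)
print(grade_part(motor, 2))    # the intersection line e12 + e10, up to this sign convention
```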
Interpretation as algebra of reflections.
The algebra of all distance-preserving transformations (essentially, rigid transformations and reflections) in 3D is called the Euclidean Group, formula_46. By the Cartan–Dieudonné theorem, any element of it can be written as a series of reflections in planes.
In plane-based GA, essentially all geometric objects can be thought of as a transformation. Planes such as formula_0 are planar reflections, points such as formula_4 are point reflections, and lines such as formula_22 are line reflections - which in 3D are the same thing as 180-degree rotations. The identity transform is the unique object that is constructed out of zero reflections. All of these are elements of formula_46.
Some elements of formula_46, for example rotations by any angle that is "not" 180 degrees, do not have a single specific geometric object which is used to visualize them; nevertheless, they can always be thought of as being made up of reflections, and can always be represented as a linear combination of some elements of objects in plane-based geometric algebra. For example, formula_47 is a slight rotation about the formula_22 axis, and it can be written as a geometric product (a transform composition) of formula_0 and formula_48, both of which are planar reflections intersecting at the line formula_22.
In fact, any rotation can be written as a composition of two planar reflections that pass through its axis; thus it can be called a "2-reflection". Rotoreflections, glide reflections, and point reflections can also always be written as compositions of 3 planar reflections and so are called 3-reflections. The upper limit of this for 3D is a screw motion, which is a 4-reflection. For this reason, when considering screw motions, it is necessary to use the grade-4 element of 3D plane-based GA, formula_49, which is the highest-grade element.
Geometric interpretation of geometric product as "cancelling out" reflections.
A reflection in a plane followed by a reflection in the "same" plane results in no change. The algebraic interpretation for this geometry is that grade-1 elements such as formula_0 square to 1. This simple fact can be used to give a geometric interpretation for the general behaviour of the geometric product as a device that solves geometric problems by "cancelling mirrors".
To give an example of the usefulness of this, suppose we wish to find a plane orthogonal to a certain line "L" in 3D and passing through a certain point "P". "L" is a 2-reflection and formula_30 is a 3-reflection, so taking their geometric product "PL" in some sense produces a 5-reflection; however, as in the picture below, two of these reflections cancel, leaving a 3-reflection (sometimes known as a rotoreflection). In the plane-based geometric algebra notation, this rotoreflection can be thought of as a planar reflection "added to" a point reflection. The plane part of this rotoreflection is the plane that is orthogonal to the line "L" and the original point "P". A similar procedure can be used to find the line orthogonal to a plane and passing through a point, or the intersection of a line and a plane, or the intersection line of a plane with another plane.
Rotations and translations as "even subalgebra".
Rotations and translations are transformations that preserve "distances" and "handedness" (chirality), e.g. when they are applied to sets of objects, the relative distances between those objects does not change; nor does their handedness, which is to say that a right-handed glove will not turn into a left-handed glove. All transformations in 3D euclidean plane-based geometric algebra preserve distances, but reflections, rotoreflections, and transflections do not preserve handedness.
Rotations and translations "do" preserve handedness, which in 3D Plane-based GA implies that they can be written as a composition of an "even number" of reflections. A rotations can thought of as a reflection in a plane followed by a reflection in another plane which is "not" parallel to the first (the quaternions, which are set in the context of PGA above). If the planes were parallel, composing their reflections would give a translation.
Rotations and translations are both special cases of "screw motions", e.g. a rotation around a line in space followed by a translation directed along the same line. This group is usually called SE(3), the group of Special (handedness-preserving) Euclidean (distance-preserving) transformations in 3 dimensions. This group has two commonly-used representations that allow them to be used in algebra and computation, one being the 4×4 matrices of real numbers, and the other being the "Dual Quaternions". The Dual Quaternion representation (like the usual quaternions) is actually a "double cover" of SE(3). Since the Dual Quaternions are closed under multiplication and addition and are made from an even number of basis elements, they are called the "even subalgebra" of 3D euclidean (plane-based) geometric algebra. The word 'spinor' is sometimes used to describe this subalgebra.
Describing rigid transformations using planes was a major goal in the work of Camille Jordan and Michel Chasles, since it allows the treatment to be dimension-independent.
Generalizations.
Inversive Geometry.
Inversive geometry is the study of geometric objects and behaviours generated by inversions in circles and spheres. Reflections in planes are a special case of inversions in spheres, because a plane is a sphere with infinite radius. Since plane-based geometric algebra is generated by composition of reflections, it is a special case of inversive geometry. Inversive geometry itself can be performed with the larger system known as Conformal Geometric Algebra (CGA), of which Plane-based GA is a subalgebra.
CGA is also usually applied to 3D space, and is able to model general spheres, circles, and conformal (angle-preserving) transformations, which include the transformations seen on the Poincare disk. It can be difficult to see the connection between PGA and CGA, since CGA is often "point based", although some authors take a plane-based approach to CGA which makes the notations for Plane-based GA and CGA identical.
Projective Geometric Algebra.
Plane-based geometric algebra is able to represent all Euclidean transformations, but in practice it is almost always combined with a "dual" operation of some kind to create the larger system known as "Projective Geometric Algebra", PGA. Duality, as in other Clifford and Grassmann algebras, allows a definition of the regressive product. This is extremely useful for engineering applications - in plane-based GA, the regressive product can "join" a point to another point to obtain a line, and can join a point and a line to obtain a plane. It has the further convenience that if any two elements (points, lines, or planes) have "norm" (see above) equal to formula_1, the norm of their regressive product is equal to the distance between them. The join of several points is also known as their affine hull.
Variants of duality and terminology.
There is variation across authors as to the precise definition given for dual that is used to define the regressive product in PGA. No matter which definition is given, the regressive product functions to give completely identical outputs; for this reason, precise discussion of the dual is usually not included in introductory material on projective geometric algebra. There are, however, significant conceptual and philosophical differences between the different definitions of duality.
Projective geometric algebra of non-euclidean geometries and Classical Lie Groups in 3 dimensions.
To a first approximation, the physical world is euclidean, i.e. most transformations are "rigid"; Projective Geometric Algebra is therefore usually based on Cl3,0,1(R), since rigid transformations can be modelled in this algebra. However, it is possible to model other spaces by slightly varying the algebra.
In these systems, the points, planes, and lines have the same coordinates that they have in plane-based GA. But transformations like rotations and reflections will have very different effects on the geometry. In all cases below, the algebra is a double cover of the group of reflections, rotations, and rotoreflections in the space.
All formulae from the euclidean case carry over to these other geometries – the meet still functions as a way of taking the intersection of objects; the geometric product still functions as a way of composing transformations; and in the hyperbolic case the inner product becomes able to measure hyperbolic angle.
All three even subalgebras are classical Lie groups (after taking the quotient by scalars). The associated Lie algebra for each group is the grade 2 elements of the Clifford algebra, "not" taking the quotient by scalars.
| [
{
"math_id": 0,
"text": "\\boldsymbol{e}_{1}"
},
{
"math_id": 1,
"text": "1"
},
{
"math_id": 2,
"text": "0"
},
{
"math_id": 3,
"text": "\\boldsymbol{e}_{13}"
},
{
"math_id": 4,
"text": "\\boldsymbol{e}_{123}"
},
{
"math_id": 5,
"text": "x = 0"
},
{
"math_id": 6,
"text": "y = 0"
},
{
"math_id": 7,
"text": "\\boldsymbol{e}_{2}"
},
{
"math_id": 8,
"text": "z = 0"
},
{
"math_id": 9,
"text": "\\boldsymbol{e}_{3}"
},
{
"math_id": 10,
"text": "\\boldsymbol{e}_{2} + \\boldsymbol{e}_{3}"
},
{
"math_id": 11,
"text": "\\boldsymbol{e}_{1} \\boldsymbol{e}_{23} = \\boldsymbol{e}_{123}"
},
{
"math_id": 12,
"text": "\\boldsymbol{e}_{23}"
},
{
"math_id": 13,
"text": "A"
},
{
"math_id": 14,
"text": "B"
},
{
"math_id": 15,
"text": "\\boldsymbol{e}_{1} \\boldsymbol{e}_{23}"
},
{
"math_id": 16,
"text": "\\boldsymbol{e}_{1}\\boldsymbol{e}_{1} = 1 \\qquad \\boldsymbol{e}_{2}\\boldsymbol{e}_{2} = 1\\qquad \\boldsymbol{e}_{3}\\boldsymbol{e}_{3} = 1\n\\qquad \\boldsymbol{e}_{0}\\boldsymbol{e}_{0} = 0"
},
{
"math_id": 17,
"text": "\\boldsymbol{e}_{0}\\boldsymbol{e}_{0} = 0"
},
{
"math_id": 18,
"text": "\\boldsymbol{e}_{0}"
},
{
"math_id": 19,
"text": "\\boldsymbol{e}_{2} + 5 \\boldsymbol{e}_{0}"
},
{
"math_id": 20,
"text": "\\boldsymbol{e}_{02}"
},
{
"math_id": 21,
"text": "\\boldsymbol{e}_{30}"
},
{
"math_id": 22,
"text": "\\boldsymbol{e}_{12}"
},
{
"math_id": 23,
"text": "-1"
},
{
"math_id": 24,
"text": "\\boldsymbol{e}_{10}"
},
{
"math_id": 25,
"text": "\\boldsymbol{e}_{20 }"
},
{
"math_id": 26,
"text": "A"
},
{
"math_id": 27,
"text": "B \\tilde{A}"
},
{
"math_id": 28,
"text": "\\tilde{A}"
},
{
"math_id": 29,
"text": "\\wedge"
},
{
"math_id": 30,
"text": "P"
},
{
"math_id": 31,
"text": "L"
},
{
"math_id": 32,
"text": "P \\wedge L"
},
{
"math_id": 33,
"text": "\\cdot"
},
{
"math_id": 34,
"text": "(A \\cdot B) \\tilde{B}"
},
{
"math_id": 35,
"text": "\\sqrt{A \\tilde{A}}"
},
{
"math_id": 36,
"text": "\\lVert A \\rVert"
},
{
"math_id": 37,
"text": "\\arccos( \\lVert A \\cdot B \\rVert )"
},
{
"math_id": 38,
"text": "\\lVert A \\rVert = \\lVert B \\rVert = 1"
},
{
"math_id": 39,
"text": "T"
},
{
"math_id": 40,
"text": "TA\\tilde{T}"
},
{
"math_id": 41,
"text": "\\times"
},
{
"math_id": 42,
"text": "\\frac{1}{2} (AB-BA)"
},
{
"math_id": 43,
"text": "\\boldsymbol{e}_{1} + \\boldsymbol{e}_{2} + \\boldsymbol{e}_{0}"
},
{
"math_id": 44,
"text": "1 + \\boldsymbol{e}_{12} + \\boldsymbol{e}_{10}"
},
{
"math_id": 45,
"text": "\\boldsymbol{e}_{12} + \\boldsymbol{e}_{10}"
},
{
"math_id": 46,
"text": "E(3)"
},
{
"math_id": 47,
"text": "0.8+0.6\\boldsymbol{e}_{12}"
},
{
"math_id": 48,
"text": "0.8\\boldsymbol{e}_{1}+0.6\\boldsymbol{e}_{2}"
},
{
"math_id": 49,
"text": "\\boldsymbol{e}_{0123}"
}
]
| https://en.wikipedia.org/wiki?curid=72251176 |
72252525 | S2S (mathematics) | In mathematics, S2S is the monadic second order theory with two successors. It is one of the most expressive natural decidable theories known, with many decidable theories interpretable in S2S. Its decidability was proved by Rabin in 1969.
Basic properties.
The first order objects of S2S are finite binary strings. The second order objects are arbitrary sets (or unary predicates) of finite binary strings. S2S has functions "s"→"s"0 and "s"→"s"1 on strings, and predicate "s"∈"S" (equivalently, "S"("s")) meaning string "s" belongs to set "S".
Some properties and conventions:
"Weakenings of S2S:" Weak S2S (WS2S) requires all sets to be finite (note that finiteness is expressible in S2S using Kőnig's lemma). S1S can be obtained by requiring that '1' does not appear in strings, and WS1S also requires finiteness. Even WS1S can interpret Presburger arithmetic with a predicate for powers of 2, as sets can be used to represent unbounded binary numbers with definable addition.
Decision complexity
S2S is decidable, and each of S2S, S1S, WS2S, WS1S has a nonelementary decision complexity corresponding to a linearly growing stack of exponentials. For the lower bound, it suffices to consider Σ11 WS1S sentences. A single second order quantifier can be used to propose an arithmetic (or other) computation, which can be verified using first order quantifiers if we can test which numbers are equal. For this, if we appropriately encode numbers 1.."m", we can encode a number with binary representation "i"1"i"2..."i""m" as "i"1 1 "i"2 2 ... "i""m" "m", preceded by a guard. By merging testing of guards and reusing variable names, the number of bits is linear in the number of exponentials. For the upper bound, using the decision procedure (below), sentences with "k"-fold quantifier alternation can be decided in time corresponding to "k"+"O"(1)-fold exponentiation of the sentence length (with uniform constants).
Axiomatization
WS2S can be axiomatized through certain basic properties plus induction schema.
S2S can be partially axiomatized by:
(4) is the comprehension schema over formulas φ, which always holds for second order logic. As usual, if φ has free variables not shown, we take the universal closure of the axiom. If equality is primitive for predicates, one also adds extensionality "S"="T" ⇔ ∀"s" ("S"("s") ⇔ "T"("s")). Since we have comprehension, induction can be a single statement rather than a schema.
The analogous axiomatization of S1S is complete. However, for S2S, completeness is open (as of 2021). While S1S has uniformization, there is no S2S definable (even allowing parameters) choice function that given a non-empty set "S" returns an element of "S", and comprehension schemas are commonly augmented with various forms of the axiom of choice. However, (1)-(4) is complete when extended with a determinacy schema for certain parity games.
S2S can also be axiomatized by Π13 sentences (using the prefix relation on strings as a primitive). However, it is not finitely axiomatizable, nor can it be axiomatized by Σ13 sentences even if we add induction schema and a finite set of other sentences (this follows from its connection to Π12-CA0).
Theories related to S2S.
For every finite "k", the monadic second order (MSO) theory of countable graphs with treewidth ≤"k" (and a corresponding tree decomposition) is interpretable in S2S (see Courcelle's theorem). For example, the MSO theory of trees (as graphs) or of series-parallel graphs is decidable. Here (i.e. for bounded tree width), we can also interpret the finiteness quantifier for a set of vertices (or edges), and also count vertices (or edges) in a set modulo a fixed integer. Allowing uncountable graphs does not change the theory. Also, for comparison, S1S can interpret connected graphs of bounded pathwidth.
By contrast, for every set of graphs of unbounded treewidth, its existential (i.e. Σ11) MSO theory is undecidable if we allow predicates on both vertices and edges. Thus, in a sense, decidability of S2S is the best possible. Graphs with unbounded treewidth have large grid minors, which can be used to simulate a Turing machine.
By reduction to S2S, the MSO theory of countable orders is decidable, as is the MSO theory of countable trees with their Kleene–Brouwer orders. However, the MSO theory of (formula_0, <) is undecidable. The MSO theory of ordinals <ω2 is decidable; decidability for ω2 is independent of ZFC (assuming Con(ZFC + weakly compact cardinal)). Also, an ordinal is definable using monadic second order logic on ordinals iff it can be obtained from definable regular cardinals by ordinal addition and multiplication.
S2S is useful for decidability of certain modal logics, with Kripke semantics naturally leading to trees.
S2S+U (or just S1S+U) is undecidable if U is the unbounding quantifier — U"X" Φ("X") iff Φ("X") holds for some arbitrarily large finite "X". However, WS2S+U, even with quantification over infinite paths, is decidable, even with S2S subformulas that do not contain U.
Formula complexity.
A set of binary strings is definable in S2S iff it is regular (i.e. forms a regular language). In S1S, a (unary) predicate on sets is (parameter-free) definable iff it is an ω-regular language. For S2S, for formulas that use their free variables only on strings not containing a 1, the expressiveness is the same as for S1S.
For every S2S formula φ("S"1...,"S""k"), (with "k" free variables) and finite tree of binary strings "T", φ("S"1∩T...,"S""k"∩T) can be computed in time linear in |"T"| (see Courcelle's theorem), but as noted above, the overhead can be iterated exponential in the formula size (more precisely, the time is formula_1).
For S1S, every formula is equivalent to a Δ11 formula, and to a boolean combination of Π02 arithmetic formulas. Moreover, every S1S formula is equivalent to acceptance by a corresponding ω-automaton of the parameters of the formula. The automaton can be a deterministic parity automaton: A parity automaton has an integer priority for each state, and accepts iff the highest priority seen infinitely often is odd (alternatively, even).
For S2S, using tree automata (below), every formula is equivalent to a Δ12 formula. Moreover, every S2S formula is equivalent to a formula with just four quantifiers, ∃"S"∀"T"∃"s"∀"t" ... (assuming that our formalization has both the prefix relation and the successor functions). For S1S, three quantifiers (∃"S"∀"s"∃"t") suffice, and for WS2S and WS1S, two quantifiers (∃"S"∀"t") suffice; the prefix relation is not needed here for WS2S and WS1S.
However, with free second order variables, not every S2S formula can be expressed in second order arithmetic through just Π11 transfinite recursion (see reverse mathematics). RCA0 + (schema) {τ: τ is a true S2S sentence} is equivalent to (schema) {τ: τ is a Π13 sentence provable in Π12-CA0 }. Over a base theory, the schemas are equivalent to (schema over "k") ∀"S"⊆ω ∃α1<...<α"k" "L"α1("S") ≺Σ1 ... ≺Σ1 "L"α"k"(S) where "L" is the constructible universe (see also large countable ordinal). Due to limited induction, Π12-CA0 does not prove that all true (under the standard decision procedure) Π13 S2S statements are actually true even though each such sentence is provable in Π12-CA0.
Moreover, given sets of binary strings "S" and "T", the following are equivalent:
(1) "T" is S2S definable from some set of binary strings polynomial time computable from "S".
(2) "T" can be computed from the set of winning positions for some game whose payoff is a finite boolean combination of Π02("S") sets.
(4) "T" is in the least β-model (i.e. an ω-model whose set-theoretic counterpart is transitive) containing "S" and satisfying all Π13 consequences of in Π12-CA0.
<templatestyles src="Template:Hidden begin/styles.css"/>Proof sketch:
Models of S1S and S2S.
In addition to the standard model (which is the unique MSO model for S1S and S2S), there are other models for S1S and S2S, which use some rather than all subsets of the domain (see Henkin semantics).
For every "S"⊆ω, sets recursive in "S" form an elementary submodel of the standard S1S model, and same for every non-empty collection of subsets of ω closed under Turing join and Turing reducibility.
This follows from relative recursiveness of S1S definable sets plus uniformization:
- φ("s") (as a function of "s") can be computed from the parameters of φ and the values of φ("s"′) for a finite set of "s"′ (with its size bounded by the number of states in a deterministic automaton for φ).
- A witness for ∃"S" φ("S") can be obtained by choosing "k" and a finite fragment of "S"′ of "S", and repeatedly extending "S"′ such that the highest priority during each extension is "k" and that the extension can be completed into "S" satisfying φ without hitting priorities above "k" (these are permitted only for the initial "S"′). Also, by using lexicographically least shortest choices, there is an S1S formula φ' such that φ'⇒φ and ∃"S" φ("S") ⇔∃!"S" φ'("S") (i.e. uniformization; φ may have free variables not shown; φ' depends only on the formula φ).
The minimal model of S2S consists of all regular languages on binary strings. It is an elementary submodel of the standard model, so if an S2S parameter-free definable set of trees is non-empty, then it includes a regular tree. A regular language can also be treated as a regular {0,1}-labeled complete infinite binary tree (identified with predicates on strings). A labeled tree is regular if it can be obtained by unrolling a vertex-labeled finite directed graph with an initial vertex; a (directed) cycle in the graph reachable from the initial vertex gives an infinite tree. With this interpretation and encoding of regular trees, every true S2S sentence may already be provable in elementary function arithmetic. It is non-regular trees that may require nonpredicative comprehension for determinacy (below). There are nonregular (i.e. containing nonregular languages) models of S1S (and presumably S2S) (both with and without standard first order part) with a computable satisfaction relation. However, the set of recursive sets of strings does not form a model of S2S due to failure of comprehension and determinacy.
Decidability of S2S.
The proof of decidability is by showing that every formula is equivalent to acceptance by a nondeterministic tree automaton (see tree automaton and infinite-tree automaton). An infinite tree automaton starts at the root and moves up the tree, and accepts iff every tree branch accepts. A nondeterministic tree automaton accepts iff player 1 has a winning strategy, where player 1 chooses an allowed (for the current state and input) pair of new states ("p"0,"p"1), while player 2 chooses the branch, with the transition to p0 if 0 is chosen and p1 otherwise. For a co-nondeterministic automaton, all choices are by player 2, while for deterministic, (p0,p1) is fixed by the state and input; and for a game automaton, the two players play a finite game to set the branch and the state. Acceptance on a branch is based on states seen infinitely often on the branch; parity automata are sufficiently general here.
For converting the formulas to automata, the base case is easy, and nondeterminism gives closure under existential quantifiers, so we only need closure under complementation. Using positional determinacy of parity games (which is where we need impredicative comprehension), non-existence of player 1 winning strategy gives a player 2 winning strategy "S", with a co-nondeterministic tree automaton verifying its soundness. The automaton can then be made deterministic (which is where we get an exponential increase in the number of states), and thus existence of "S" corresponds to acceptance by a non-deterministic automaton.
"Determinacy:" Provably in ZFC, Borel games are determined, and the determinacy proof for boolean combinations of Π02 formulas (with arbitrary real parameters) also gives a strategy here that depends only on the current state and the position in the tree. The proof is by induction on the number of priorities. Assume that there are "k" priorities, with the highest priority being "k", and that "k" has the right parity for player 2. For each position (tree position + state) assign the least ordinal α (if any) such that player 1 has a winning strategy with all entered (after one or more steps) priority "k" positions (if any) having labels <α. Player 1 can win if the initial position is labeled: Each time a priority "k" state is reached, the ordinal is decreased, and moreover in between the decreases, player 1 can use a strategy for "k"-1 priorities. Player 2 can win if the position is unlabeled: By the determinacy for "k"-1 priorities, player 2 has a strategy that wins or enters an unlabeled priority "k" state, in which case player 2 can again use that strategy. To make the strategy positional (by induction on "k"), when playing the auxiliary game, if two chosen positional strategies lead to the same position, continue with the strategy with the lower α, or for the same α (or for player 2) lower initial position (so we can switch a strategy finitely many times).
"Automata determinization:" For determinization of co-nondeterministic tree automata, it suffices to consider ω-automata, treating branch choice as input, determinizing the automaton, and using it for the deterministic tree automaton. Note that this does not work for "nondeterministic" tree automata as the determinization for going left (i.e. "s"→"s"0) can depend on the contents of the right branch; by contrast to nondeterminism, deterministic tree automata cannot even accept precisely nonempty sets. To determinize a nondeterministic ω-automaton "M" (for co-nondeterministic, take the complement, noting that deterministic parity automata are closed under complements), we can use a "Safra tree" with each node storing a set of possible states of "M", and node creation and deletion based on reaching high priority states. For details, see or.
"Decidability of acceptance:" Acceptance by a nondeterministic parity automaton of the empty tree corresponds to a parity game on a finite graph "G". Using the above positional (also called memoryless) determinacy, this can be simulated by a finite game that ends when we reach a loop, with the winning condition based on the highest priority state in the loop. A clever optimization gives a quasipolynomial time algorithm, which is polynomial time when the number of priorities is small enough (which occurs commonly in practice).
"Theory of trees:" For decidability of MSO logic on trees (i.e. graphs that are trees), even with finiteness and modular counting quantifiers for first order objects, we can embed countable trees into the complete binary tree and use the decidability of S2S. For example, for a node "s", we can represent its children by "s"1, "s"01, "s"001, and so on. For uncountable trees, we can use Shelah-Stup theorem (below). We can also add a predicate for a set first order objects having cardinality ω1, and the predicate for cardinality ω2, and so on for infinite regular cardinals. Graphs of bounded tree width are interpretable using trees, and without predicates over edges this also applies to graphs of bounded clique width.
Combining S2S with other decidable theories.
"Tree extensions of monadic theories:" By Shelah-Stup theorem, if a monadic relational model "M" is decidable, then so is its tree counterpart. For example, (modulo choice of formalization) S2S is the tree counterpart of {0,1}. In the tree counterpart, the first order objects are finite sequences of elements of "M" ordered by extension, and an "M"-relation "P""i" is mapped to "P""i"'("vd"1...,"vd"k) ⇔ "P""i"("d"1...,"d"k) with "P""i"' false otherwise ("d""j"∈"M", and "v" is a (possibly empty) sequence of elements of "M"). The proof is similar to the S2S decidability proof. At each step, a (nondeterministic) automaton gets a tuple of "M" objects (possibly second order) as input, and an "M" formula determines which state transitions are permitted. Player 1 (as above) chooses a mapping child⇒state that is permitted by the formula (given the current state), and player 2 chooses the child (of the node) to continue. To witness rejection by a non-deterministic automaton, for each (node, state) pick a set of (child, state) pairs such that for every choice, at least one of the pairs is hit, and such that all the resulting paths lead to rejection.
"Combining a monadic theory with a first order theory:" Feferman–Vaught theorem extends/applies as follows. If "M" is an MSO model and "N" is a first order model, then "M" remains decidable relative to a (Theory("M"), Theory("N")) oracle even if "M" is augmented with all functions "M"→"N" where "M" is identified with its first objects, and for each "s"∈"M" we use a disjoint copy of "N", with the language modified accordingly. For example, if "N" is (formula_0,0,+,⋅), we can state ∀(function "f") ∀"s" ∃"r"∈"N""s" "f"("s") +"N""s" "r" = 0"N""s". If "M" is S2S (or more generally, the tree counterpart of some monadic model), the automata can now use "N"-formulas, and thereby convert "f":"M"→"N""k" into a tuple of "M" sets. Disjointness is necessary as otherwise for every infinite "N" with equality, the extended S2S or just WS1S is undecidable. Also, for a (possibly incomplete) theory "T", the theory "T""M" of "M"-products of "T" is decidable relative to a (Theory("M"), "T") oracle, where a model of "T""M" uses an arbitrary disjoint model "N""s" of "T" for each "s"∈"M" (as above, "M" is an MSO model; Theory("N""s") may depend on "s"). The proof is by induction on formula complexity. Let "v""s" be the list of free "N""s" variables, including "f"("s") if function "f" is free. By induction, one shows that "v""s" is only used through a finite set of "N"-formulas with |"v""s"| free variables. Thus, we can quantify over all possible outcomes by using "N" (or "T") to answer what is possible, and given a list possibilities (or constraints), formulate a corresponding sentence in "M".
"Coding into extensions of S2S:" Every decidable predicate on strings can be encoded (with linear time encoding and decoding) for decidability of S2S (even with the extensions above) together with the encoded predicate. Proof: Given a nondeterministic infinite tree automaton, we can partition the set of finite binary labeled trees (having labels over which the automaton can operate) into finitely many classes such that if a complete infinite binary tree can be composed of same-class trees, acceptance depends only on the class and the initial state (i.e. state the automaton enters the tree). (Note a rough similarity with the pumping lemma.) For example (for a parity automaton), assign trees to the same class if they have the same predicate that given initial_state and set "Q" of (state, highest_priority_reached) pairs returns whether player 1 (i.e. nondeterminism) can simultaneously force all branches to correspond to elements of "Q". Now, for each "k", pick a finite set of trees (suitable for coding) that belong to the same class for automata 1-"k", with the choice of class consistent across "k". To encode a predicate, encode some bits using "k"=1, then more bits using "k"=2, and so on.
| [
{
"math_id": 0,
"text": "\\mathbb{R}"
},
{
"math_id": 1,
"text": "O(|T|k)+2_{O(|\\phi|)}^2"
}
]
| https://en.wikipedia.org/wiki?curid=72252525 |
722549 | Triakis octahedron | Catalan solid with 24 faces
In geometry, a triakis octahedron (or trigonal trisoctahedron or kisoctahedron) is an Archimedean dual solid, or a Catalan solid. Its dual is the truncated cube.
It can be seen as an octahedron with triangular pyramids added to each face; that is, it is the Kleetope of the octahedron. It is also sometimes called a "trisoctahedron", or, more fully, "trigonal trisoctahedron". Both names reflect that it has three triangular faces for every face of an octahedron. The "tetragonal trisoctahedron" is another name for the deltoidal icositetrahedron, a different polyhedron with three quadrilateral faces for every face of an octahedron.
This convex polyhedron is topologically similar to the concave stellated octahedron. They have the same face connectivity, but the vertices are at different relative distances from the center.
If its shorter edges have length of 1, its surface area and volume are:
formula_0
Cartesian coordinates.
Let "α" = √2 − 1, then the 14 points (±"α", ±"α", ±"α") and (±1, 0, 0), (0, ±1, 0) and (0, 0, ±1) are the vertices of a triakis octahedron centered at the origin.
The length of the long edges equals √2, and that of the short edges 2(√2 − 1) ≈ 0.828.
The faces are isosceles triangles with one obtuse and two acute angles. The obtuse angle equals arccos(1/4 − √2/2) ≈ 117.2° and the acute ones equal arccos(1/2 + √2/4) ≈ 31.4°.
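These values can be checked numerically; the sketch below (using NumPy and SciPy, with the coordinates as given above) rescales the solid so that its short edge is 1, compares the convex hull's surface area and volume with the closed forms quoted earlier, and recomputes the face angles:

```python
import numpy as np
from itertools import product
from scipy.spatial import ConvexHull

# Vertices from the text: (±α, ±α, ±α) with α = √2 − 1, plus the six unit axis points.
alpha = np.sqrt(2) - 1
verts = [alpha * np.array(p) for p in product((1.0, -1.0), repeat=3)]
verts += [s * np.eye(3)[i] for s in (1.0, -1.0) for i in range(3)]
verts = np.array(verts)

short = np.linalg.norm(verts[0] - np.array([1.0, 0.0, 0.0]))   # apex to axis vertex
print(short, 2 * (np.sqrt(2) - 1))                             # both ≈ 0.8284

hull = ConvexHull(verts / short)              # rescale so the short edge has length 1
print(hull.area,   3 * np.sqrt(7 + 4 * np.sqrt(2)))   # surface area ≈ 10.6734
print(hull.volume, (3 + 2 * np.sqrt(2)) / 2)          # volume ≈ 2.9142

L, s = np.sqrt(2), short                      # long and short edges (unscaled)
print(np.degrees(np.arccos((2 * s**2 - L**2) / (2 * s**2))))   # obtuse angle ≈ 117.2°
print(np.degrees(np.arccos(L / (2 * s))))                      # acute angles ≈ 31.4°
```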
Orthogonal projections.
The "triakis octahedron" has three symmetry positions, two located on vertices, and one mid-edge:
Related polyhedra.
The triakis octahedron is one of a family of duals to the uniform polyhedra related to the cube and regular octahedron.
The triakis octahedron is a part of a sequence of polyhedra and tilings, extending into the hyperbolic plane. These face-transitive figures have (*"n"32) reflectional symmetry.
The triakis octahedron is also a part of a sequence of polyhedra and tilings, extending into the hyperbolic plane. These face-transitive figures have (*"n"42) reflectional symmetry.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{align} A &= 3\\sqrt{7+4\\sqrt{2}} \\\\ V &= \\frac{3+2\\sqrt{2}}{2} \\end{align}"
}
]
| https://en.wikipedia.org/wiki?curid=722549 |
722554 | Tetrakis hexahedron | Catalan solid with 24 faces
In geometry, a tetrakis hexahedron (also known as a tetrahexahedron, hextetrahedron, tetrakis cube, and kiscube) is a Catalan solid. Its dual is the truncated octahedron, an Archimedean solid.
It can be called a disdyakis hexahedron or hexakis tetrahedron as the dual of an omnitruncated tetrahedron, and as the barycentric subdivision of a tetrahedron.
Cartesian coordinates.
Cartesian coordinates for the 14 vertices of a tetrakis hexahedron centered at the origin, are the points
formula_0
The length of the shorter edges of this tetrakis hexahedron equals 3/2 and that of the longer edges equals 2. The faces are acute isosceles triangles. The larger angle of these equals formula_1 and the two smaller ones equal formula_2.
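As an illustration (not part of the source), the edge lengths and face angles quoted above can be recomputed directly from these 14 vertices:

```python
# Recompute tetrakis hexahedron edge lengths and face angles from the vertices
# (+-3/2, 0, 0), (0, +-3/2, 0), (0, 0, +-3/2) and (+-1, +-1, +-1).
import itertools, math

apexes = [tuple(s * 1.5 if j == i else 0.0 for j in range(3))
          for i in range(3) for s in (1, -1)]                 # 6 pyramid apexes
cube = list(itertools.product((1.0, -1.0), repeat=3))         # 8 cube vertices

short_edge = min(math.dist(a, c) for a in apexes for c in cube)                 # 3/2
long_edge = min(math.dist(a, b) for a, b in itertools.combinations(cube, 2))    # 2
larger = math.acos((2 * short_edge**2 - long_edge**2) / (2 * short_edge**2))    # arccos(1/9)
smaller = math.acos(long_edge / (2 * short_edge))                               # arccos(2/3)
print(short_edge, long_edge)
print(math.degrees(larger), math.degrees(smaller))   # ~83.62 and ~48.19 degrees
```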
Orthogonal projections.
The "tetrakis hexahedron", dual of the truncated octahedron has 3 symmetry positions, two located on vertices and one mid-edge.
Uses.
Naturally occurring (crystal) formations of tetrahexahedra are observed in copper and fluorite systems.
Polyhedral dice shaped like the tetrakis hexahedron are occasionally used by gamers.
A 24-cell viewed under a vertex-first perspective projection has a surface topology of a tetrakis hexahedron and the geometric proportions of the rhombic dodecahedron, with the rhombic faces divided into two triangles.
The tetrakis hexahedron appears as one of the simplest examples in building theory. Consider the Riemannian symmetric space associated to the group "SL"4(R). Its Tits boundary has the structure of a spherical building whose apartments are 2-dimensional spheres. The partition of this sphere into spherical simplices (chambers) can be obtained by taking the radial projection of a tetrakis hexahedron.
Symmetry.
With Td, [3,3] (*332) tetrahedral symmetry, the triangular faces represent the 24 fundamental domains of tetrahedral symmetry. This polyhedron can be constructed from 6 great circles on a sphere. It can also be seen by a cube with its square faces triangulated by their vertices and face centers and a tetrahedron with its faces divided by vertices, mid-edges, and a central point.
The edges of the spherical tetrakis hexahedron belong to six great circles, which correspond to mirror planes in tetrahedral symmetry. They can be grouped into three pairs of orthogonal circles (which typically intersect on one coordinate axis each). In the images below these square hosohedra are colored red, green and blue.
Dimensions.
If we denote the edge length of the base cube by a, the height of each pyramid summit above the cube is a/4. The inclination of each triangular face of the pyramid versus the cube face is formula_3 (sequence in the OEIS). One edge of the isosceles triangles has length a, the other two have length 3a/4, which follows by applying the Pythagorean theorem to height and base length. This yields an altitude of formula_4 in the triangle (OEIS: ). Its area is formula_5 and the internal angles are formula_6 and the remaining angle formula_7
The volume of the pyramid is a³/12; so the total volume of the six pyramids and the cube in the hexahedron is 3a³/2.
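A quick numerical cross-check of these dimensions (an illustration only, assuming SciPy is available): for the coordinates given earlier the base cube has edge a = 2, so the convex hull should have volume 3a³/2 = 12 and the pyramid height should be a/4 = 1/2.

```python
# Convex-hull volume of the 14 tetrakis hexahedron vertices given above.
import itertools
import numpy as np
from scipy.spatial import ConvexHull

apexes = [[1.5 * s if j == i else 0.0 for j in range(3)] for i in range(3) for s in (1, -1)]
cube = [list(v) for v in itertools.product((1.0, -1.0), repeat=3)]
a = 2.0                                   # cube edge length for these coordinates
print(ConvexHull(np.array(apexes + cube)).volume, 3 * a**3 / 2)   # 12.0 and 12.0
print(1.5 - 1.0, a / 4)                   # pyramid height above the cube face: 0.5 and 0.5
```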
Kleetope.
It can be seen as a cube with square pyramids covering each square face; that is, it is the Kleetope of the cube. A non-convex form of this shape, with equilateral triangle faces, has the same surface geometry as the regular octahedron, and a paper octahedron model can be re-folded into this shape. This form of the tetrakis hexahedron was illustrated by Leonardo da Vinci in Luca Pacioli's "Divina proportione" (1509).
This non-convex form of the tetrakis hexahedron can be folded along the square faces of the inner cube as a net for a four-dimensional cubic pyramid.
Related polyhedra and tilings.
It is a polyhedron in a sequence defined by the face configuration V4.6.2"n". This group is special for having an even number of edges per vertex and for forming bisecting planes through the polyhedra and infinite lines in the plane, continuing into the hyperbolic plane for any "n" ≥ 7.
With an even number of faces at every vertex, these polyhedra and tilings can be shown by alternating two colors so all adjacent faces have different colors.
Each face on these domains also corresponds to the fundamental domain of a symmetry group with order 2,3,"n" mirrors at each triangle face vertex.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n \\bigl( {\\pm\\tfrac32}, 0, 0 \\bigr),\\ \n \\bigl( 0, \\pm\\tfrac32, 0 \\bigr),\\ \n \\bigl( 0, 0, \\pm\\tfrac32 \\bigr),\\ \n \\bigl({\\pm1}, \\pm1, \\pm1 \\bigr).\n"
},
{
"math_id": 1,
"text": "\\arccos\\tfrac19 \\approx 83.62^{\\circ}"
},
{
"math_id": 2,
"text": "\\arccos\\tfrac23 \\approx 48.19^{\\circ}"
},
{
"math_id": 3,
"text": "\\arctan\\tfrac{1}{2} \\approx 26.565^\\circ"
},
{
"math_id": 4,
"text": "\\tfrac{\\sqrt 5 a}{4}"
},
{
"math_id": 5,
"text": "\\tfrac{\\sqrt 5 a^2}{8},"
},
{
"math_id": 6,
"text": "\\arccos\\tfrac{2}{3} \\approx 48.1897^\\circ"
},
{
"math_id": 7,
"text": "180^\\circ - 2\\arccos\\tfrac{2}{3} \\approx 83.6206^\\circ."
}
]
| https://en.wikipedia.org/wiki?curid=722554 |
72270275 | Uncertainty effect | The uncertainty effect, also known as direct risk aversion, is a phenomenon from economics and psychology which suggests that individuals may be prone to expressing such an extreme distaste for risk that they ascribe a lower value to a risky prospect (e.g., a lottery for which outcomes and their corresponding probabilities are known) than its worst possible realization.
For example, in the original work on the uncertainty effect by Uri Gneezy, John A. List, and George Wu (2006), individuals were willing to pay $38 for a $50 gift card, but were only willing to pay $28 for a lottery ticket that would yield a $50 or $100 gift card with equal probability.
This effect is considered to be a violation of "internality" (i.e., the proposition that the value of a risky prospect must lie somewhere between the value of that prospect’s best and worst possible realizations) which is central to prospect theory, expected utility theory, and other models of risky choice. Additionally, it has been proposed as an explanation for a host of naturalistic behaviors which cannot be explained by dominant models of risky choice, such as the popularity of insurance/extended warranties for consumer products.
Origins.
Research on the uncertainty effect was first formally conducted by Uri Gneezy, John A. List, and George Wu in the early 2000s, though it follows in the footsteps of a large body of work devoted to understanding decision making under risk. As their starting point, Gneezy, List, and Wu noted that most models of risky choice assume that when presented with a risky prospect individuals engage in a balancing exercise of sorts in which they compare the best possible outcomes they might realize to the worst possible outcomes they might realize (e.g., in a gamble that gives a 50-50 chance to win $500 or $1,000, individuals might compare these two outcomes to one another). Within this type of schema, individuals are also expected to weight the value (or utility) of each of these discrete outcomes in accordance with the probability that each will occur.
While expected utility theory and prospect theory differ in terms of how outcomes are evaluated and weighted, they both nonetheless rely upon what Gneezy, List, and Wu term the "internality axiom." This axiom specifically posits that the value of some risky prospect must lie between the value of that prospect's best and worst possible outcomes. Formally, for some risky prospect formula_0 which offers formula_1 probability of earning formula_2 and formula_3 probability of earning formula_4 (where formula_2 is strictly greater than formula_4), individuals' elicited values for formula_2, formula_4, and formula_5 should satisfy the following inequality: formula_6.
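As a small illustration (the helper function below is hypothetical, and the dollar values are the willingness-to-pay figures quoted earlier), the internality condition reduces to a direct inequality check:

```python
# Internality requires V(worst) <= V(lottery) <= V(best).
def violates_internality(v_worst, v_lottery, v_best):
    return not (v_worst <= v_lottery <= v_best)

v_worst = 38    # WTP for the $50 gift card (the lottery's worst possible outcome)
v_lottery = 28  # WTP for the 50/50 lottery over $50 and $100 gift cards
v_best = 38     # WTP for the $100 card was not quoted above; any value >= 38 gives the same verdict
print(violates_internality(v_worst, v_lottery, v_best))   # True -> an uncertainty-effect pattern
```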
In a series of studies conducted by Gneezy, List, and Wu, and in follow-up work conducted by Uri Simonsohn (among others), individuals were repeatedly shown to violate this assumption, as in the gift card example described above.
Within this body of work, the uncertainty effect was also shown to extend to choice and to consideration of delayed outcomes; it was also shown not to be a consequence of poorly comprehending the lottery.
Among other explanations, it has been proposed that the uncertainty effect might arise as a consequence of individuals experiencing some form of disutility from risk.
Implications.
In his follow-up work on the uncertainty effect (or, as he termed it, direct risk aversion), Simonsohn suggested that it might provide an explanation for certain types of responses to risk that cannot be explained by prospect theory and expected utility theory. One notable example is the widespread popularity of insurance for small-stakes and/or low-probability risks – e.g., warranties for consumer electronics, low-deductible insurance policies, and so on; dominant theories of risky choice do not predict that such products should be popular, and Simonsohn asserted that the uncertainty effect might help to explain why.
Critiques and alternative explanations.
In the years after Gneezy, List, and Wu published their findings, several other scholars asserted that the uncertainty effect was simply a consequence of individuals misunderstanding the lottery utilized in initial tests of the uncertainty effect. Such claims were partially refuted by Simonsohn, whose 2009 paper utilized revised lottery instructions, as well as several other successful replications of the uncertainty effect which were published in subsequent years.
Notably, however, in later work with Robert Mislavsky, Simonsohn suggested that the uncertainty effect might be a consequence of aversion to "weird" transaction features as opposed to some form of disutility from risk. These and other alternative explanations are briefly summarized below.
Aversion to lotteries.
In work published in 2013, Yang Yang, Joachim Vosgerau, and George Loewenstein suggested that the uncertainty effect might in fact be understood as a framing effect. Specifically, they posited that the anomalies associated with the uncertainty effect might not arise as a consequence of distaste for/disutility from risk, but rather, as a consequence of the fact that in most experiments which successfully replicated the uncertainty effect certain outcomes were contrasted to risky prospects described as lotteries, gambles, and the like. As such, they posited that the effect might instead be described as an aversion to lotteries, or – as they term it – an aversion to "bad deals."
Aversion to "weird transactions".
Although Simonsohn initially proposed that the uncertainty effect might reflect a distaste for uncertainty, in later work he and colleague Robert Mislavsky instead explored the idea that adding "weird" features to a transaction might give rise to patterns which appeared consistent with the uncertainty effect. For example, they noted that internality violations may arise as a consequence of being averse to the notion of purchasing a coin flip or other gamble in order to obtain a gift card, rather than the uncertainty represented by the coin flip itself. In their work, Mislavsky and Simonsohn systematically explored this notion, and suggest that the aversion to weird transactions may help to provide a more parsimonious explanation for certain failures to replicate the uncertainty effect. | [
{
"math_id": 0,
"text": "L = (x, p, y)"
},
{
"math_id": 1,
"text": "p"
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "1-p"
},
{
"math_id": 4,
"text": "y"
},
{
"math_id": 5,
"text": "L"
},
{
"math_id": 6,
"text": "V(x) \\geq V(L) \\geq V(y)"
}
]
| https://en.wikipedia.org/wiki?curid=72270275 |
72280238 | Dick effect | Effect that limits performance of advanced atomic clocks
The Dick effect (hereinafter "the effect") is an important limitation to frequency stability for modern atomic clocks such as atomic fountains and optical lattice clocks. It is an aliasing effect: high-frequency noise in a required "local oscillator" (LO) is aliased (heterodyned) to near zero frequency by a periodic interrogation process that locks the frequency of the LO to that of the atoms. The noise mimics and adds to the clock's inherent statistical instability, which is determined by the number of atoms or photons available. In so doing, the effect degrades the stability of the atomic clock and places new and stringent demands on LO performance.
For any given interrogation protocol, the effect can be calculated using a quantum-mechanical "sensitivity function", together with the spectral properties of the LO noise. This calculational methodology, introduced by G. John Dick, is now widely used in the design of advanced microwave and optical frequency standards, as well as in the development of methodologies for atomic-wave interferometry, frequency standard comparison, and other areas of measurement science.
Background.
General.
Frequency stability.
The "frequency stability" of an atomic clock is usually characterized by the Allan deviation formula_0, a measure of the expected statistical variation of fractional frequency as a function of averaging time formula_1. Generally, short-term fluctuations (frequency or phase noise) in the clock output require averaging for an extended period of time in order to achieve high performance. This "stability" is not the same as the "accuracy" of the clock, which estimates the expected difference of the average frequency from some absolute standard.
Excellent frequency stability is crucial to a clock's usability: Even though it might have excellent accuracy, a clock with poor frequency stability may require averaging for a week or more for a single high precision test or comparison. Such a clock would not be as useful as one with a higher stability; one that could accomplish the test in hours instead of days.
Stability and operation of atomic clocks.
Instability in the output from an atomic clock due to imperfect feedback between atoms and LO was previously well understood. This instability is of a short-term nature and typically does not impact the utility of the clock. The effect, on the other hand gives rise to frequency noise which has the same character as (and is typically much larger than) that due to the fundamental photon– or atom–counting limitation for atomic clocks.
With the exception of hydrogen and ammonia (hydrogen maser, ammonia maser), the atoms or ions in atomic clocks do not provide a usable output signal. Instead, an electronic or optical "local oscillator" (LO) provides the required output. The LO typically provides excellent short-term stability; long-term stability being achieved by correcting its frequency variability by feedback from the atoms.
In advanced frequency standards the atomic interrogation process is usually sequential in nature: After state-preparation, the atoms' internal clocks are allowed to oscillate in the presence of a signal from the LO for a period of time. At the end of this period, the atoms are interrogated by an optical signal to determine whether (and how much) the state has changed. This information is used to correct the frequency of the LO. Repeated again and again, this enables continuous operation with stability much higher than that of the LO itself. In fact, such feedback was previously thought to allow the stability of the LO output to approach the statistical limit for the atoms for long measuring times.
The effect.
The effect is an additional source of instability that disrupts this happy picture. It arises from an interaction between phase noise in the LO and periodic variations in feedback gain that result from the interrogation procedure. The temporal variations in feedback gain alias (or heterodyne) LO noise at frequencies associated with the interrogation period to near zero frequency, and this results in an instability (Allan deviation) that improves only slowly with increasing measuring time. The increased instability limits the utility of the atomic clock and results in stringent requirements on performance (and associated expense) for the required LO: Not only must it provide excellent stability (so that its output can be improved by feedback to the ultra-high stability of the atoms); it must now also have excellent (low) phase noise.
A simple, but incomplete, analysis of the effect may be found by observing that any variation in LO frequency or phase during a "dead time" required to prepare atoms for the next interrogation is completely undetected, and so will not be corrected. However, this approach does not take into account the quantum-mechanical response of the atoms while they are exposed to pulses of signal from the LO. This is an additional time-dependent response, calculated in analysis of the effect by means of a "sensitivity function".
Quantitative.
The graphs here show predictions of the effect for a trapped-ion frequency standard using a quartz LO. In addition to excellent stability, quartz oscillators have very well defined noise characteristics: Their frequency fluctuations are characterized as "flicker frequency" over a very wide range of frequency and time. Flicker frequency noise corresponds to a constant Allan deviation as shown for the quartz LO in the graphs here.
The "expected" curve on the plot shows how stability of the LO is improved by feedback from the atoms. As measuring time is increased (for times longer than an "attack time") the stability steadily increases, approaching the inherent stability of the atoms for times longer than about 10,000 seconds. The "actual" curve shows how the stability is impacted by the effect. Instead of approaching the inherent stability of the atoms, the stability of the LO output now approaches a line with a much higher value. The slope of this line is identical to that of the atomic limitation (minus one half on a log-log plot) with a value that is comparable to that of the LO, measured at the cycle time, as indicated by the small blue (downwards) arrow. The value (the length of the blue arrow) depends on the details of the atomic interrogation protocol, and can be calculated using the "sensitivity function" methodology.
The second graph here indicates how various performance aspects of the LO impact achievable stability for the atomic clock. The dependence labeled "Previously Analyzed LO Impact" shows the stability improving on that of the LO with an approximately formula_2 dependence for times longer than an "attack time" for the feedback loop. For increasing values of the measuring time formula_3, the stability approaches the limiting formula_4 dependence due to statistical variation in the numbers of atoms and photons available for each measurement.
The effect, on the other hand, causes the available stability of the frequency standard to show a counter-intuitive dependence on high-frequency LO phase noise. Here stability of the LO at times less than the "Cycle Time" is shown to influence stability of the atomic standard over its entire range of operation. Furthermore, it often prevents the clock from ever approaching the stability inherent in the atomic system.
History.
Within a few years of the publication of two papers laying out an analysis of LO aliasing, the methodology was experimentally verified, generally adopted by the "Time and Frequency" community, and applied to the design of many advanced frequency standards. It was also clarified by Lemonde et al. (1998) with a derivation of the "sensitivity function" that used a more conventional quantum-mechanical approach, and was generalized by Santarelli et al. (1996) so as to apply to interrogation protocols without "even" time symmetry.
Where performance limits for atomic clocks were previously characterized by accuracy and by the photon– or atom–counting limitation to stability, the effect was now a third part of the picture. This early stage culminated in 1998 in the publication of four papers
in a "Special Issue on the Dick effect"
for the journal "IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control".
Impact.
Perhaps the most significant consequence of the Dick analysis is due to its presentation of a mathematical framework that enabled researchers to accurately calculate the effect based on the methodology and technology used for many very different atomic clocks. Since the effect is generally the most significant limitation to stability for advanced frequency standards, a great deal of work since that time has focused on amelioration strategies. Additionally, the effect methodology and the "sensitivity function" have enabled significant progress in a number of technical areas.
Atomic fountain clocks, for example, typically operate by tossing a ball of laser-cooled atoms upward through a microwave cavity that acts to start the clock in each individual atom. As the atoms return downward, they again traverse this same cavity, where they receive a second microwave pulse that stops their clocks. The ball then falls through an optical interrogation apparatus below the cavity that "reads out" the phase difference between the microwaves (the LO) and the atoms that developed during their time of flight. This is repeated again and again; a sequential process that gives rise to the effect.
On a smaller scale, the stability of Rb vapor clocks using a "pulsed optically pumped" (POP) technique has improved to such an extent that the effect due to a quartz USO LO has become the limiting performance factor. Developments in a combined USO-DRO (dielectric resonator oscillator) LO technology now enable improved performance.
Methodology.
Introduction.
Modern atomic frequency standards or clocks typically comprise a local oscillator (LO), an atomic system that is periodically interrogated by the LO, and a feedback loop to correct the frequency errors in the LO based on the results of that interrogation; thus locking the frequency of the LO to that of the atomic system. The effect describes a process that makes for imperfect locking, one that depends on details of the atomic interrogation protocol. Two steps are required in order to calculate this newly recognized impact of LO noise on the frequency stability of the "locked local oscillator" (LLO) that provides useful output for the frequency standard. These are: first, calculation of the "sensitivity function" formula_5 for the interrogation protocol used; and second, combination of its Fourier components at the harmonics formula_10 of the interrogation cycle (formula_9 a positive integer) with the LO frequency noise spectrum formula_8 to obtain the aliased white frequency noise level formula_11 of the LLO.
In contrast to other examples of photonic excitation of atoms or ions, this excitation process is slow, taking milliseconds or even seconds to accomplish on account of the extremely high Q factors involved. Instead of a photon striking a solid and ejecting an electron, this is a process in which a coherent EM field consisting of many photons (slowly) drives each atom or ion in a cloud from its ground state into a mixed state, typically one with equal amplitudes in the ground and excited states.
Functional forms for formula_5 during the time that the atoms are being exposed to interrogating microwave or optical fields can be calculated using a "fictitious spin" model for the quantum mechanical state-transition process or by using an algebraic approach. These forms, in combination with constant values (typically zero or unity) during times when no signal is applied, allow the sensitivity function to be well defined over the entire interrogation cycle.
The discriminating power for both Ramsey Interrogation and Rabi Interrogation of atomic systems had been previously calculated, based on the quantum-mechanical response of an atom or ion to a slightly detuned drive signal. These previously calculated values are now seen to correspond to a time average of the sensitivity function formula_6, taken over the interrogation cycle.
Calculation of the sensitivity function.
"The concepts and results of calculations presented below can be found in the first papers describing the effect".
Each interrogation cycle in an atomic clock typically begins with preparation of the atoms or ions in their ground states. Let P be the probability that any one will be found in its excited state after an interrogation. The amplitude and time of the interrogating signal are typically adjusted so that tuning the LO exactly to the atomic frequency will give formula_12, that is, all of the atoms or ions being in their excited state. P is determined for each measurement by then exposing the system to a different signal that will generate fluorescence only for the (e.g.) excited state atoms or ions.
In order to obtain effective feedback using periodic measurements of P, the protocol must be arranged so that P has a sensitivity to frequency variations. The sensitivity to frequency variation formula_13 can then be defined as
formula_14
where formula_15 is the interrogation time, so that the value of formula_13 characterizes the sensitivity of a measurement of P to a variation in frequency of the LO. Since P is maximized (at formula_12) when the LO is exactly tuned to the atomic transition frequency, the value of formula_13 would be zero for that case. Thus, for example, in a frequency standard using Rabi Interrogation, the LO is initially detuned so that formula_16, and when instability of the LO frequency causes a subsequent measurement of P to return a value different from this, the feedback loop adjusts the LO frequency to bring it back.
Experimentalists use various protocols to mitigate temporal variations in atomic number, light intensity, etc., and so to allow P to be accurately determined, but these are not discussed further here.
The sensitivity of P to variation of the LO frequency for Rabi Interrogation has been previously calculated, and found to have a value of formula_17 when the LO frequency has been offset by a frequency formula_18 to give formula_19. This is achieved when formula_20 is detuned so that formula_21.
A time-dependent form for the sensitivity of P to frequency variation can now be introduced, defining formula_7 as:
formula_22,
where formula_23 is the change in the probability of excitation when a phase step formula_24 is introduced into the interrogating signal at time formula_25. Integrating both sides of the equation shows that the effect on the probability P of a frequency that varies during the excitation process, formula_26 can be written:
formula_27.
This shows formula_7 to be a "sensitivity function"; representing the time dependence for the effect of frequency variations on the final excitation probability.
The "sensitivity function" for the case of Rabi Interrogation is shown to be given by:
formula_28
where formula_29,
formula_30,
formula_31,
and where formula_20 is detuned to half-signal
amplitude formula_21.
Taking the time average of this functional form for formula_7, gives
formula_32,
exactly as previously referenced for formula_33: This shows formula_7 to be a proper generalization of the previously used sensitivity formula_34.
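The quoted average can be reproduced numerically; the sketch below (an illustration using NumPy, not from the source) evaluates the Rabi sensitivity function given above at the half-signal detuning and averages it over the interrogation time.

```python
# Time average of the Rabi sensitivity function g(t) at half-signal detuning.
import numpy as np

delta = 0.798685                          # Delta_half, the detuning giving half signal amplitude
omega = np.pi * np.sqrt(1 + delta**2)
x = np.linspace(0.0, 1.0, 200001)         # x = t / t_i over one interrogation
o1, o2 = omega * x, omega * (1.0 - x)
g = delta / (1 + delta**2) ** 1.5 * (np.sin(o1) * (1 - np.cos(o2))
                                     + np.sin(o2) * (1 - np.cos(o1)))
print(g.mean())                           # ~0.60386, matching the value quoted above
```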
Forms for the "sensitivity function" for the case of Ramsey Interrogation with a formula_35 phase step between two interrogation pulses (instead of a frequency offset) are somewhat simpler, and are given by:
formula_36
where formula_37 is the pulse time, formula_38 is the interrogation time and formula_39 is the cycle time.
Calculation of the limitation to frequency standard stability.
The operation of a pulse-mode atomic clock can be broken into functional elements as shown in the block diagram here (for a complete analysis see Greenhall ). Here, the LO is represented by its own block and the interrogated atomic system by the other four blocks. The time dependence of the atomic interrogation process is effected here by the Modulator, in which the time-dependent frequency error formula_41 is multiplied by a time-dependent gain formula_7 as calculated in the previous section.
The signal input to the integrator is proportional to the frequency error formula_42, and this allows it to correct slow frequency errors and drift in the local oscillator.
To understand the action in the block diagram, consider the values formula_43 and formula_5 to be made up of their average values plus the deviations from the average. The value of formula_44 (with averages taken over one cycle, formula_40) gives rise to proper feedback operation, locking the frequency of the local oscillator to that of the discriminator, formula_45. Additionally, high frequency components of formula_46 are smoothed by integration and sampling, giving rise to the already known short-term stability limit. However, the term formula_47, while generating additional high-frequency noise, also gives rise to very low frequency variations. This is the aliasing effect that causes the loop to improperly correct the local oscillator and which results in additional low frequency variation in the output of the frequency standard.
Following the methodology of (Dick, 1987) and (Santarelli et al., 1996), the Fourier components of the "sensitivity function" are:
formula_48,
formula_49,
formula_50 ,
and formula_51,
where formula_39 is the cycle time. The "locked local oscillator" provides the useful output signal from any passive (non-maser) frequency standard. A lower limit to its White frequency noise formula_52 is then shown to be dependent on the frequency noise of the LO at all frequencies formula_53 with a value given by
formula_54,
where formula_39 is the cycle time (the time between successive measurements of the atomic system).
The Allan variance for an oscillator with White frequency noise is given by formula_55, so that the stability limit due to the effect is given by
formula_56.
For Ramsey interrogation with very short interrogation pulses, this becomes
formula_57,
where formula_38 is the interrogation time. For the case of an LO with Flicker frequency noise where formula_58 is independent of formula_1, and where the duty factor formula_59 has typical values formula_60, the Allan deviation can be approximated as
formula_61.
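The formulas above can be evaluated numerically. The sketch below (illustrative parameter values, assuming NumPy; not from the source) builds the Ramsey sensitivity function, computes its Fourier coefficients, sums the Dick-limit series for an LO with a flicker-frequency floor, and compares the result with the closed-form approximation just given; rough agreement is expected for duty factors in the quoted range.

```python
# Dick-limited Allan deviation for Ramsey interrogation with a flicker-floor LO.
import numpy as np

t_c, t_i, t_p = 1.0, 0.6, 0.01    # cycle, interrogation and pulse times (s) -- example values
sigma_lo = 1e-13                  # assumed LO flicker floor (Allan deviation)
tau = 1.0                         # averaging time (s)

t = np.linspace(0.0, t_c, 100001)
g = np.zeros_like(t)
rise = (t > 0) & (t < t_p)
fall = (t > t_i - t_p) & (t < t_i)
g[rise] = np.sin(np.pi * t[rise] / (2 * t_p))
g[(t >= t_p) & (t <= t_i - t_p)] = 1.0
g[fall] = np.sin(np.pi * (t_i - t[fall]) / (2 * t_p))

dt = t[1] - t[0]
g0 = g.sum() * dt                                 # g_0
h_minus1 = sigma_lo**2 / (2 * np.log(2))          # flicker FM: S_y(f) = h_minus1 / f
total = 0.0
for n in range(1, 501):
    gc = np.sum(g * np.cos(2 * np.pi * n * t / t_c)) * dt
    gs = np.sum(g * np.sin(2 * np.pi * n * t / t_c)) * dt
    total += (gc**2 + gs**2) * h_minus1 * t_c / n        # g_n^2 * S_y^LO(n / t_c)
sigma_dick = np.sqrt(total / (tau * g0**2))

d = t_i / t_c
approx = (sigma_lo / np.sqrt(2 * np.log(2))
          * abs(np.sin(np.pi * d) / (np.pi * d)) * np.sqrt(t_c / tau))
print(sigma_dick, approx)         # comparable values; the second is the approximation above
```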
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sigma_y(\\tau)"
},
{
"math_id": 1,
"text": "\\tau"
},
{
"math_id": 2,
"text": " 1 / \\tau "
},
{
"math_id": 3,
"text": " \\tau "
},
{
"math_id": 4,
"text": " 1 / \\sqrt{\\tau} "
},
{
"math_id": 5,
"text": " g(t) "
},
{
"math_id": 6,
"text": "\\overline{g(t)}"
},
{
"math_id": 7,
"text": "g(t)"
},
{
"math_id": 8,
"text": "S_y^{LO}(f)"
},
{
"math_id": 9,
"text": "n"
},
{
"math_id": 10,
"text": "f_n=n/{t_c}"
},
{
"math_id": 11,
"text": "S_y^{LLO}(0)"
},
{
"math_id": 12,
"text": " P=1 "
},
{
"math_id": 13,
"text": " g "
},
{
"math_id": 14,
"text": "{{d P} \\over {d \\nu}} = \\pi\\, g\\, t_i "
},
{
"math_id": 15,
"text": " t_i "
},
{
"math_id": 16,
"text": " P= 0.5 "
},
{
"math_id": 17,
"text": "g_{Rabi} \\approx 0.60386 "
},
{
"math_id": 18,
"text": "\\delta\\nu"
},
{
"math_id": 19,
"text": "P=0.5"
},
{
"math_id": 20,
"text": "\\Delta \\equiv 2\\, \\delta\\nu\\, t_i "
},
{
"math_id": 21,
"text": "\\Delta = \\Delta_{half} \\approx 0.798685 "
},
{
"math_id": 22,
"text": "g(t) =2 \\lim_{\\delta\\phi \\to 0} {\\delta P(\\delta\\phi, t) \\over \\delta\\phi} "
},
{
"math_id": 23,
"text": " \\delta P(\\delta\\phi, t) "
},
{
"math_id": 24,
"text": " \\delta\\phi "
},
{
"math_id": 25,
"text": " t "
},
{
"math_id": 26,
"text": " \\delta\\omega(t)"
},
{
"math_id": 27,
"text": "\\Delta P = {1 \\over {2 t_c}} \\int_0^ {t_c} {g(t) \\delta\\omega(t) dt} "
},
{
"math_id": 28,
"text": "g(t)={{\\Delta} \\over {{(1+\\Delta^2)}^{3/2}}} \\left[\\sin (\\Omega_1(t)) \\left(1-\\cos(\\Omega_2(t)\\right)+\\sin(\\Omega_2(t)) \\left(1-\\cos(\\Omega_1(t)\\right)\\right]"
},
{
"math_id": 29,
"text": "\\Omega_1(t) = \\Omega\\cdot \\left({{t}\\over {t_i}}\\right) "
},
{
"math_id": 30,
"text": "\\Omega_2(t) = \\Omega\\cdot \\left(1-\\left({t \\over {t_i}}\\right)\\right) "
},
{
"math_id": 31,
"text": "\\Omega = \\Omega (\\Delta) = \\pi \\sqrt{1 + {\\Delta}^2} "
},
{
"math_id": 32,
"text": "{1 \\over t_i}{\\int_0^{t_i} g(t)\\,dt} \\approx 0.60386 "
},
{
"math_id": 33,
"text": "g_{Rabi}"
},
{
"math_id": 34,
"text": "g"
},
{
"math_id": 35,
"text": "\\pi/2"
},
{
"math_id": 36,
"text": "\ng(t) = \\begin{cases}\\sin\\left ( \\pi {{t}\\over {2\\, t_p}}\\right ) &&& ( 0 &< &t& < &t_p) \\\\\n1 &&& ( t_p &< &t& < &t_i - t_p) \\\\\n\\sin\\left ( \\pi {{t_i - t}\\over {2\\, t_p}}\\right ) &&& ( t_i-t_p &< &t& < &t_i) \\\\\n0 &&& ( t_i &< &t& < &t_c) \\\\\n\\end{cases}"
},
{
"math_id": 37,
"text": "t_p"
},
{
"math_id": 38,
"text": "t_i"
},
{
"math_id": 39,
"text": "t_c"
},
{
"math_id": 40,
"text": " t_c "
},
{
"math_id": 41,
"text": "\\delta \\nu (t)"
},
{
"math_id": 42,
"text": " \\delta \\nu "
},
{
"math_id": 43,
"text": "\\delta \\nu (t) "
},
{
"math_id": 44,
"text": "\\overline{\\delta \\nu (t)} \\cdot \\overline{g(t)}"
},
{
"math_id": 45,
"text": " \\nu_0 "
},
{
"math_id": 46,
"text": "(\\delta \\nu (t) - \\overline{\\delta \\nu (t)}) \\cdot \\overline{g(t)}"
},
{
"math_id": 47,
"text": "(\\delta \\nu (t) - \\overline{\\delta \\nu (t)}) \\cdot (g(t) - \\overline{g(t)})"
},
{
"math_id": 48,
"text": "g_{n,cos} = \\int_0^ {t_c} {g(t) \\cos \\left ( {{2 \\pi n t} \\over {t_c}}\\right ) dt}"
},
{
"math_id": 49,
"text": "g_{n,sin} = \\int_0^ {t_c} {g(t) \\sin\\left ( {{2 \\pi n t} \\over {t_c}}\\right ) dt}"
},
{
"math_id": 50,
"text": "g_n = \\sqrt{g_{n,cos}^2 + g_{n,sin}^2 } "
},
{
"math_id": 51,
"text": "g_0 = \\int_0^{t_c} {g(t) dt}"
},
{
"math_id": 52,
"text": "S_y^{LLO} (0)"
},
{
"math_id": 53,
"text": "\\left( {S_y^{LO} (f)} \\right) "
},
{
"math_id": 54,
"text": "S_y^{LLO} (0) = { 2 \\over {g_0^2}} \\cdot {\\sum_{n=1}^\\infty {g_n^2 \\, S_y^{LO} \\left ( {n \\over t_c} \\right ) }}"
},
{
"math_id": 55,
"text": "\\sigma^2_{y}(\\tau) = {S_y(0) \\over {2 \\tau} } "
},
{
"math_id": 56,
"text": "\\sigma^2_{y,Dick}(\\tau) = { 1 \\over {\\tau g_0^2}} \\cdot {\\sum_{n=1}^\\infty {g_n^2 \\, S_y^{LO}\\left({n \\over t_c}\\right)}}"
},
{
"math_id": 57,
"text": "\\sigma^2_{y,Dick}(\\tau) = { 1 \\over {\\tau}}{t_c^2 \\over t_i^2} \\cdot {\\sum_{n=1}^\\infty {{\\sin^2(\\pi n \\, t_i / t_c) \\over {\\pi^2} n^2} \\, S_y^{LO}\\left({n \\over t_c}\\right)}}"
},
{
"math_id": 58,
"text": "\\sigma_y^{LO}(\\tau)"
},
{
"math_id": 59,
"text": "d=t_i/t_c"
},
{
"math_id": 60,
"text": "0.4<d<0.7"
},
{
"math_id": 61,
"text": "\\sigma_{y,Dick}(\\tau) \\approx {\\sigma_y^{LO} \\over \\sqrt{2\\ln(2)}} \\cdot \\left|{{sin(\\pi d) \\over {\\pi d}}}\\right| \\cdot \\sqrt{t_c \\over{\\tau}} "
}
]
| https://en.wikipedia.org/wiki?curid=72280238 |
7228413 | Watermarking attack | Attack on disk encryption methods
In cryptography, a watermarking attack is an attack on disk encryption methods where the presence of a specially crafted piece of data can be detected by an attacker without knowing the encryption key.
Problem description.
Disk encryption suites generally operate on data in 512-byte sectors which are individually encrypted and decrypted. These 512-byte sectors alone can use any block cipher mode of operation (typically CBC), but since arbitrary sectors in the middle of the disk need to be accessible individually, they cannot depend on the contents of their preceding/succeeding sectors. Thus, with CBC, each sector has to have its own initialization vector (IV). If these IVs are predictable by an attacker (and the filesystem reliably starts file content at the same offset to the start of each sector, and files are likely to be largely contiguous), then there is a chosen plaintext attack which can reveal the existence of encrypted data.
The problem is analogous to that of using block ciphers in the electronic codebook (ECB) mode, but instead of whole blocks, only the first block in different sectors are identical. The problem can be relatively easily eliminated by making the IVs unpredictable with, for example, ESSIV.
Alternatively, one can use modes of operation specifically designed for disk encryption (see disk encryption theory). This weakness affected many disk encryption programs, including older versions of BestCrypt as well as the now-deprecated cryptoloop.
To carry out the attack, a specially crafted plaintext file is created for encryption in the system under attack, to "NOP-out" the IV
such that the first ciphertext block in two or more sectors is identical. This requires that the input to the cipher (the plaintext formula_0 XORed with the initialization vector formula_1) be the same for each block; i.e., formula_2. Thus, we must choose plaintexts formula_3 such that formula_4.
The ciphertext block patterns generated in this way give away the existence of the file, without any need for the disk to be decrypted first.
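The following toy sketch (an illustration only; the keyed hash stands in for a real block cipher) shows why such plaintexts produce identical first ciphertext blocks under CBC with predictable IVs: equal cipher inputs always give equal outputs, regardless of the key.

```python
# Toy demonstration of the watermarking condition P1 XOR P2 = IV1 XOR IV2.
import hashlib, os

KEY = os.urandom(16)

def fake_block_encrypt(block16):
    # Stand-in for a 16-byte block cipher: any fixed keyed deterministic map
    # exhibits the effect, since equal cipher inputs give equal outputs.
    return hashlib.sha256(KEY + block16).digest()[:16]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

iv1 = (1).to_bytes(16, "big")        # predictable per-sector IVs (e.g. sector numbers)
iv2 = (2).to_bytes(16, "big")
p1 = os.urandom(16)                  # arbitrary first plaintext block for sector 1
p2 = xor(p1, xor(iv1, iv2))          # chosen so that p2 XOR iv2 == p1 XOR iv1

c1 = fake_block_encrypt(xor(p1, iv1))
c2 = fake_block_encrypt(xor(p2, iv2))
print(c1 == c2)                      # True: the watermark is visible without the key
```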
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\scriptstyle P"
},
{
"math_id": 1,
"text": "\\scriptstyle IV"
},
{
"math_id": 2,
"text": "\\scriptstyle P_1 \\,\\oplus\\, IV_1 \\;=\\; P_2 \\,\\oplus\\, IV_2"
},
{
"math_id": 3,
"text": "\\scriptstyle P_1,\\, P_2"
},
{
"math_id": 4,
"text": "\\scriptstyle P_1 \\,\\oplus\\, P_2 \\;=\\; IV_1 \\,\\oplus\\, IV_2"
}
]
| https://en.wikipedia.org/wiki?curid=7228413 |
722873 | Triakis icosahedron | Catalan solid with 60 faces
In geometry, the triakis icosahedron is an Archimedean dual solid, or a Catalan solid, with 60 isosceles triangle faces. Its dual is the truncated dodecahedron. It has also been called the kisicosahedron. It was first depicted, in a non-convex form with equilateral triangle faces, by Leonardo da Vinci in Luca Pacioli's "Divina proportione", where it was named the "icosahedron elevatum". The capsid of the Hepatitis A virus has the shape of a triakis icosahedron.
As a Kleetope.
The triakis icosahedron can be formed by gluing triangular pyramids to each face of a regular icosahedron. Depending on the height of these pyramids relative to their base, the result can be either convex or non-convex. This construction, of gluing pyramids to each face, is an instance of a general construction called the Kleetope; the triakis icosahedron is the Kleetope of the icosahedron. This interpretation is also expressed in the name, triakis, which is used for the Kleetopes of polyhedra with triangular faces.
When depicted in Leonardo's form, with equilateral triangle faces, it is an example of a non-convex deltahedron, one of the few known deltahedra that are isohedral (meaning that all faces are symmetric to each other). In another of the non-convex forms of the triakis icosahedron, the three triangles adjacent to each pyramid are coplanar, and can be thought of as instead forming the visible parts of a convex hexagon, in a self-intersecting polyhedron with 20 hexagonal faces that has been called the small triambic icosahedron. Alternatively, for the same form of the triakis icosahedron, the triples of coplanar isosceles triangles form the faces of the first stellation of the icosahedron. Yet another non-convex form, with golden isosceles triangle faces, forms the outer shell of the great stellated dodecahedron, a Kepler–Poinsot polyhedron with twelve pentagram faces.
Each edge of the triakis icosahedron has endpoints of total degree at least 13. By Kotzig's theorem, this is the most possible for any polyhedron. The same total degree is obtained from the Kleetope of any polyhedron with minimum degree five, but the triakis icosahedron is the simplest example of this construction. Although this Kleetope has isosceles triangle faces, iterating the Kleetope construction on it produces convex polyhedra with triangular faces that cannot all be isosceles.
As a Catalan solid.
The triakis icosahedron is a Catalan solid, the dual polyhedron of the truncated dodecahedron. The truncated dodecahedron is an Archimedean solid, with faces that are regular decagons and equilateral triangles, and with all edges having unit length; its vertices lie on a common sphere, the circumsphere of the truncated dodecahedron. The polar reciprocation of this solid through this sphere is a convex form of the triakis icosahedron, with all faces tangent to the same sphere, now an inscribed sphere, with coordinates and dimensions that can be calculated as follows.
Let formula_0 denote the golden ratio. The short edges of this form of the triakis icosahedron have length
<templatestyles src="Block indent/styles.css"/>formula_1,
and the long edges have length
<templatestyles src="Block indent/styles.css"/>formula_0 + 2 ≈ 3.618.
Its faces are isosceles triangles with one obtuse angle of
<templatestyles src="Block indent/styles.css"/>formula_2
and two acute angles of
<templatestyles src="Block indent/styles.css"/>formula_3.
As a Catalan solid, its dihedral angles are all equal, formula_4 160°36'45.188". One possible set of 32 Cartesian coordinates for the vertices of the triakis icosahedron centered at the origin (scaled differently than the one above) can be generated by combining the vertices of two appropriately scaled Platonic solids, the regular icosahedron and a regular dodecahedron: the 12 icosahedron vertices formula_5, together with the 20 vertices of a dodecahedron scaled so that they lie at distance formula_6 from the origin, namely formula_7 and formula_8.
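As a numerical illustration (not part of the source), the combined vertex set just described can be generated and checked in a few lines; the minimum distances recover the long (icosahedron) edges and the short (pyramid) edges, and the final line rescales to the edge convention used earlier, recovering a long edge of about 3.618.

```python
# Build the 32 vertices: a unit-circumradius icosahedron plus a scaled dodecahedron.
import itertools, math

phi = (1 + math.sqrt(5)) / 2

def cyclic_sign_combos(p):
    # all cyclic permutations of p, with every sign choice on the nonzero entries
    out = set()
    for c in (p, (p[2], p[0], p[1]), (p[1], p[2], p[0])):
        for signs in itertools.product((1, -1), repeat=3):
            out.add(tuple(s * v for s, v in zip(signs, c)))
    return out

r = math.sqrt(phi**2 + 1)
icosa = [tuple(c / r for c in v) for v in cyclic_sign_combos((0.0, 1.0, phi))]         # 12 vertices
k = math.sqrt(25 + 2 * math.sqrt(5)) / 11
dodeca = [tuple(k * c for c in v) for v in cyclic_sign_combos((0.0, phi, 1 / phi))]
dodeca += [tuple(k * c for c in v) for v in itertools.product((1.0, -1.0), repeat=3)]  # 20 vertices

long_edge = min(math.dist(a, b) for a, b in itertools.combinations(icosa, 2))
short_edge = min(math.dist(d, v) for d in dodeca for v in icosa)
print(len(icosa), len(dodeca))                        # 12 and 20
print(long_edge, short_edge)                          # ~1.0515 and ~0.6100
print(long_edge / short_edge * (5 * phi + 15) / 11)   # long edge when the short edge is (5*phi+15)/11
```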
Symmetry.
In any of its standard convex or non-convex forms, the triakis icosahedron has the same symmetries as a regular icosahedron.
The three types of symmetry axes of the icosahedron, through two opposite vertices, edge midpoints, and face centroids, become respectively axes through opposite pairs of degree-ten vertices of the triakis icosahedron, through opposite midpoints of edges between degree-ten vertices, and through opposite pairs of degree-three vertices.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\varphi"
},
{
"math_id": 1,
"text": "\\frac{5\\varphi+15}{11}\\approx 2.099"
},
{
"math_id": 2,
"text": "\\cos^{-1}\\frac{-3\\varphi}{10}\\approx 119^{\\circ}"
},
{
"math_id": 3,
"text": "\\cos^{-1}\\frac{\\varphi+7}{10}\\approx 30.5^{\\circ}"
},
{
"math_id": 4,
"text": "\\cos^{-1}\\frac{\\varphi^2(1+2\\varphi(2+\\varphi))}{\\sqrt{(1+5\\varphi^4)(1+\\varphi^2(2+\\varphi)^2)}}\\approx "
},
{
"math_id": 5,
"text": "\\frac{(0, \\pm 1, \\pm \\varphi)}{\\sqrt{\\varphi^2 + 1}} , \\frac{(\\pm 1, \\pm \\varphi, 0)}{\\sqrt{\\varphi^2 + 1}} , \\frac{(\\pm \\varphi, 0, \\pm 1)}{\\sqrt{\\varphi^2 + 1}}."
},
{
"math_id": 6,
"text": "\\frac{2+\\varphi}{3+2\\varphi}\\sqrt {\\frac{3}{2-1/\\varphi}}=\\frac{1}{11}\\sqrt {75 + 6\\sqrt{5}}\\approx 0.8548,"
},
{
"math_id": 7,
"text": "(\\pm 1, \\pm 1,\\pm 1)\\frac{\\sqrt {25 + 2\\sqrt{5}}}{11}"
},
{
"math_id": 8,
"text": "(0, \\pm \\varphi, \\pm \\frac{1}{\\varphi})\\frac{\\sqrt {25 + 2\\sqrt{5}}}{11} , \n(\\pm \\frac{1}{\\varphi}, 0 , \\pm \\varphi)\\frac{\\sqrt {25 + 2\\sqrt{5}}}{11} , \n(\\pm \\varphi, \\pm \\frac{1}{\\varphi},0)\\frac{\\sqrt {25 + 2\\sqrt{5}}}{11}."
}
]
| https://en.wikipedia.org/wiki?curid=722873 |
72287736 | Landau kernel | The Landau kernel is named after the German number theorist Edmund Landau. The kernel is a summability kernel defined as:
formula_0where the coefficients formula_1 are defined as follows
formula_2
Visualisation.
Using integration by parts, one can show that:
formula_3
Hence, this implies that the Landau Kernel can be defined as follows: formula_4
Plotting this function for different values of "n" reveals that as "n" goes to infinity, formula_5 approaches the Dirac delta function, as seen in the image.
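As a quick numerical illustration (plain Python, not part of the source), the closed form for formula_1 can be checked against direct numerical integration, which also confirms that each formula_5 integrates to 1 over [−1, 1].

```python
# Compare the closed form for c_n with direct numerical integration of (1 - t^2)^n.
import math

def c_closed(n):
    return math.factorial(n)**2 * 2**(2 * n + 1) / (math.factorial(2 * n) * (2 * n + 1))

def c_numeric(n, steps=100000):
    h = 2.0 / steps                      # midpoint rule on [-1, 1]
    return h * sum((1.0 - (-1.0 + (i + 0.5) * h)**2)**n for i in range(steps))

for n in (1, 5, 20):
    print(n, c_closed(n), c_numeric(n))  # agreement means each L_n integrates to exactly 1
```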
Properties.
Some general properties of the Landau kernel are that it is nonnegative and continuous on formula_6. These properties are made more concrete in the following section.
Dirac sequences.
<templatestyles src="Math_theorem/styles.css" />
Definition: Dirac Sequence — A Dirac Sequence is a sequence {formula_7} of functions formula_8 that satisfies the following properties:
formula_9
formula_10
formula_11
The third property means that the area under the graph of the function formula_12 becomes increasingly concentrated close to the origin as "n" approaches infinity. This definition leads us to the following theorem.
<templatestyles src="Math_theorem/styles.css" />
Theorem — The sequence of Landau Kernels is a Dirac sequence
Proof: We prove the third property only. In order to do so, we introduce the following lemma:
<templatestyles src="Math_theorem/styles.css" />
Lemma — The coefficients satisfy the following relationship, formula_13
Proof of the Lemma:
Using the definition of the coefficients above and noting that the integrand is even, we may write formula_14, completing the proof of the lemma. A corollary of this lemma is the following:
<templatestyles src="Math_theorem/styles.css" />
Corollary — For all positive, real formula_15 formula_16
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "L_n (t) = \\begin{cases}\n \\frac{(1-t^2)^n}{c_n} & \\text{if } -1 \\leq t \\leq 1\\\\\n 0 & \\text{otherwise} \n \\end{cases}"
},
{
"math_id": 1,
"text": "c_n"
},
{
"math_id": 2,
"text": "c_n = \\int_{-1}^1 (1-t^2)^n \\, dt"
},
{
"math_id": 3,
"text": "c_n = \\frac{(n!)^2 \\, 2^{2n+1}}{(2n)! (2n+1)}. "
},
{
"math_id": 4,
"text": "L_n (t) = \\begin{cases}\n\n(1-t^2)^n \\frac{(2n)! (2n+1)}{(n!)^2 \\, 2^{2n+1}} & \\text{for t} \\in [-1,1]\\\\\n0 & \\text{elsewhere}\n\\end{cases}\n\n"
},
{
"math_id": 5,
"text": "L_n(t)\n"
},
{
"math_id": 6,
"text": "\\mathbb{R}"
},
{
"math_id": 7,
"text": " K_n(t) "
},
{
"math_id": 8,
"text": " K_n(t) \\colon \\mathbb{R} \\to \\mathbb{R}"
},
{
"math_id": 9,
"text": " K_n(t) \\geq 0, \\, \\, \\forall t \\in \\mathbb{R} \\text{ and } \\forall n \\in \\mathbb{Z} "
},
{
"math_id": 10,
"text": " \\int_{-\\infty}^{\\infty} K_n (t) \\, dt =1, \\, \\forall n "
},
{
"math_id": 11,
"text": " \\forall \\epsilon >0, \\, \\forall \\delta >0, \\, \\exists N \\in \\mathbb{Z}_+ \\text{ such that } \\forall n \\geq N : \\int_{\\mathbb{R} \\setminus [-\\delta,\\delta]}K_n(t) \\, dt= \\int_{-\\infty}^{-\\delta} K_n (t) \\, dt + \\int_{\\delta}^{\\infty} K_n (t) \\, dt < \\epsilon "
},
{
"math_id": 12,
"text": "y = K_n(t)"
},
{
"math_id": 13,
"text": " c_n \\geq \\frac{2}{n+1}"
},
{
"math_id": 14,
"text": "\\frac{c_n}{2} = \\int_{0}^1 (1-t^2)^n \\, dt = \\int_{0}^1 (1-t)^n(1+t)^n \\, dt \\geq \\int_{0}^1 (1-t)^n \\, dt = \\frac{1}{1+n}"
},
{
"math_id": 15,
"text": " \\delta : "
},
{
"math_id": 16,
"text": " \\int_{\\mathbb{R} \\setminus [-\\delta,\\delta]}K_n(t) \\, dt \\leq \\frac{2}{c_n} \\int_{\\delta}^1 (1-t^2)^n \\, dt \\leq (n+1)(1-r^2)^n "
}
]
| https://en.wikipedia.org/wiki?curid=72287736 |
72293009 | O-Octadecylhydroxylamine | <templatestyles src="Chembox/styles.css"/>
Chemical compound
"O"-Octadecylhydroxylamine (ODHA) is a white solid organic compound with the formula . ODHA is a noncanonical lipid, which contains a saturated alkyl tail and an aminooxy headgroup. This noncanonical lipid can be site selectively appended to the N-terminal of desired biopolymers such as peptides. ODHA drives the supramolecular assembly of modified protein, presumably through the hydrophobic collapse of ODHA chains.
Preparation.
ODHA is prepared from the reaction between 2-(octadecyloxy)isoindoline-1,3-dione and hydrazine hydrate.
Reaction.
ODHA modification.
A pH-responsive oxime bond is used to install an ODHA-type synthetic lipid (octadecylhydroxylamine) in place of the N-terminal serine residue in the N-myristoylation PTM. N-terminal myristoylation is a post-translational modification carried out by the enzyme N-myristoyltransferase. Generally, the 14-carbon myristoyl lipid is added to the N-terminus of proteins. The lipid is attached to the protein via a stable amide bond. However, the ODHA lipid is attached to the protein via an oxime bond, due to the structure of the non-canonical lipid. The reaction is chemical, compared to the enzymatic NMT reaction. Self-assembly is driven by the hydrophobic nature of the attached lipid, and disassembly is controlled by oxime degradation in an acidic environment. The reaction between the lipid and the oxidized protein is bimolecular, which means it follows second-order rate kinetics, since it depends on both the oxidized protein (ELP) and the lipid (ODHA).
formula_0
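For illustration only (every number below is a made-up placeholder rather than a measured value), the second-order rate law above can be integrated with a simple explicit Euler step:

```python
# d[ODHA-ELP]/dt = K [ELP][ODHA]; both reactants are consumed as the product forms.
K = 0.5                              # hypothetical rate constant, 1/(mM*h)
elp, odha, product = 1.0, 2.0, 0.0   # hypothetical initial concentrations, mM
dt = 0.001                           # time step, h
for _ in range(int(24 / dt)):        # 24 h of reaction
    rate = K * elp * odha
    elp -= rate * dt
    odha -= rate * dt
    product += rate * dt
print(elp, odha, product)    # the limiting reactant (ELP) is nearly exhausted; product -> ~1 mM
```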
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "{d[ODHA-ELP] \\over dt}=K[ELP][ODHA] "
}
]
| https://en.wikipedia.org/wiki?curid=72293009 |
723031 | Orbital decay | Process that leads to gradual decrease of the distance between two orbiting bodies
Orbital decay is a gradual decrease of the distance between two orbiting bodies at their closest approach (the periapsis) over many orbital periods. These orbiting bodies can be a planet and its satellite, a star and any object orbiting it, or components of any binary system. If left unchecked, the decay eventually results in termination of the orbit when the smaller object strikes the surface of the primary; or for objects where the primary has an atmosphere, the smaller object burns, explodes, or otherwise breaks up in the larger object's atmosphere; or for objects where the primary is a star, ends with incineration by the star's radiation (such as for comets). Collisions of stellar-mass objects are usually accompanied by effects such as gamma-ray bursts and detectable gravitational waves.
Orbital decay is caused by one or more mechanisms which absorb energy from the orbital motion, such as fluid friction, gravitational anomalies, or electromagnetic effects. For bodies in low Earth orbit, the most significant effect is atmospheric drag.
Due to atmospheric drag, the lowest altitude above the Earth at which an object in a circular orbit can complete at least one full revolution without propulsion is approximately 150 km (93 mi) while the lowest perigee of an elliptical revolution is approximately 90 km (56 mi).
Modeling.
Simplified model.
A simplified decay model for a near-circular two-body orbit about a central body (or planet) with an atmosphere, in terms of the rate of change of the orbital altitude, is given below.
formula_0
where R is the distance of the spacecraft from the planet's origin, αo is the sum of all accelerations projected on the along-track direction of the spacecraft (or parallel to the spacecraft velocity vector), and T is the Keplerian period. Note that αo is often a function of R due to variations in atmospheric density with altitude, and T is a function of R by virtue of Kepler's laws of planetary motion.
If only atmospheric drag is considered, one can approximate drag deceleration αo as a function of orbit radius R using the drag equation below:
formula_1
formula_2 is the mass density of the atmosphere which is a function of the radius R from the origin,
formula_3 is the orbital velocity,
formula_4 is the drag reference area,
formula_5 is the mass of the satellite, and
formula_6 is the dimensionless drag coefficient related to the satellite geometry, and accounting for skin friction and form drag (~2.2 for cube satellites).
The orbit decay model has been tested against ~1 year of actual GPS measurements of VELOX-C1, where the mean decay measured via GPS was 2.566 km across Dec 2015 to Nov 2016, and the orbit decay model predicted a decay of 2.444 km, which amounted to a 5% deviation.
An open-source Python based software, ORBITM (ORBIT Maintenance and Propulsion Sizing), is available freely on GitHub for Python users using the above model.
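To make the model concrete, the sketch below (a minimal illustration, not the ORBITM code; the satellite and atmosphere numbers are assumed placeholders) integrates the decay equation with an exponential density profile:

```python
# Integrate dR/dt = alpha_o * T / pi for a near-circular orbit with exponential atmosphere.
import math

GM = 3.986004418e14          # Earth's gravitational parameter, m^3/s^2
R_E = 6371e3                 # mean Earth radius, m
Cd, A, m = 2.2, 0.01, 1.0    # drag coefficient, area (m^2), mass (kg): cubesat-like assumption
rho0, h0, H = 1e-11, 400e3, 60e3   # assumed density (kg/m^3) at 400 km, and scale height (m)

def rho(R):
    return rho0 * math.exp(-((R - R_E) - h0) / H)

R = R_E + 400e3              # start in a ~400 km circular orbit
t, dt = 0.0, 60.0            # time and step, s
while R > R_E + 150e3:       # stop near the ~150 km limit quoted earlier
    v = math.sqrt(GM / R)                       # circular orbital speed
    alpha = 0.5 * rho(R) * v**2 * Cd * A / m    # magnitude of the drag deceleration
    T = 2 * math.pi * math.sqrt(R**3 / GM)      # Keplerian period
    R -= alpha * T / math.pi * dt               # R decreases, since drag opposes the motion
    t += dt
print(t / 86400.0, "days to decay from 400 km to 150 km (with these assumptions)")
```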
Proof of simplified model.
By the conservation of mechanical energy, the energy of the orbit is simply the sum of kinetic and gravitational potential energies, in an unperturbed two-body orbit. By substituting the vis-viva equation into the kinetic energy component, the orbital energy of a circular orbit is given by:
formula_7
Where G is the gravitational constant, ME is the mass of the central body and m is the mass of the orbiting satellite. We take the derivative of the orbital energy with respect to the radius.
formula_8
The total decelerating force, which is usually atmospheric drag for low Earth orbits, exerted on a satellite of constant mass m is given by some force F. The rate of loss of orbital energy is simply the rate at which the external force does negative work on the satellite as the satellite traverses an infinitesimal circular arc-length ds, spanned by some infinitesimal angle dθ and angular rate ω.
formula_9
The angular rate ω is also known as the Mean motion, where for a two-body circular orbit of radius R, it is expressed as:
formula_10
and...
formula_11
Substituting ω into the rate of change of orbital energy above, and expressing the external drag or decay force in terms of the deceleration αo, the orbital energy rate of change with respect to time can be expressed as:
formula_12
Having an equation for the rate of change of orbital energy with respect to both radial distance and time allows us to find the rate of change of the radial distance with respect to time as per below.
formula_13
formula_14
formula_15
The assumptions used in this derivation above are that the orbit stays very nearly circular throughout the decay process, so that the equations for orbital energy are more or less that of a circular orbit's case. This is often true for orbits that begin as circular, as drag forces are considered "re-circularizing", since drag magnitudes at the periapsis (lower altitude) is expectedly greater than that of the apoapsis, which has the effect of reducing the mean eccentricity.
Sources of decay.
Atmospheric drag.
Atmospheric drag at orbital altitude is caused by frequent collisions of gas molecules with the satellite.
It is the major cause of orbital decay for satellites in low Earth orbit. It results in the reduction in the altitude of a satellite's orbit. For the case of Earth, atmospheric drag resulting in satellite re-entry can be described by the following sequence:
lower altitude → denser atmosphere → increased drag → increased heat → usually burns on re-entry
Orbital decay thus involves a positive feedback effect, where the more the orbit decays, the lower its altitude drops, and the lower the altitude, the faster the decay. Decay is also particularly sensitive to external factors of the space environment such as solar activity, which are not very predictable. During solar maxima the Earth's atmosphere causes significant drag up to altitudes much higher than during solar minima.
Atmospheric drag exerts a significant effect at the altitudes of space stations, Space Shuttles and other crewed Earth-orbit spacecraft, and satellites with relatively high "low Earth orbits" such as the Hubble Space Telescope. Space stations typically require a regular altitude boost to counteract orbital decay (see also orbital station-keeping). Uncontrolled orbital decay brought down the Skylab space station, and (relatively) controlled orbital decay was used to de-orbit the Mir space station.
Reboosts for the Hubble Space Telescope are less frequent due to its much higher altitude. However, orbital decay is also a limiting factor to the length of time the Hubble can go without a maintenance rendezvous, the most recent having been performed successfully by STS-125, with Space Shuttle "Atlantis" in 2009. Newer space telescopes are in much higher orbits, or in some cases in solar orbit, so orbital boosting may not be needed.
Tidal effects.
An orbit can also decay by negative tidal acceleration when the orbiting body is large enough to raise a significant tidal bulge on the body it is orbiting and is either in a retrograde orbit or is below the synchronous orbit. This saps angular momentum from the orbiting body and transfers it to the primary's rotation, lowering the orbit's altitude.
Examples of satellites undergoing tidal orbital decay are Mars' moon Phobos, Neptune's moon Triton, and the extrasolar planet TrES-3b.
Light and thermal radiation.
Small objects in the Solar System also experience an orbital decay due to the forces applied by asymmetric radiation pressure. Ideally, energy absorbed would equal blackbody energy emitted at any given point, resulting in no net force. However, the Yarkovsky effect is the phenomenon that, because absorption and radiation of heat are not instantaneous, objects which are not tidally locked absorb sunlight energy on surfaces exposed to the Sun, but those surfaces do not re-emit much of that energy until after the object has rotated, so that the emission is parallel to the object's orbit. This results in a very small acceleration parallel to the orbital path, yet one which can be significant for small objects over millions of years. The Poynting-Robertson effect is a force opposing the object's velocity caused by asymmetric incidence of light, i.e., aberration of light. For an object with prograde rotation, these two effects will apply opposing, but generally unequal, forces.
Gravitational radiation.
Gravitational radiation is another mechanism of orbital decay. It is negligible for orbits of planets and planetary satellites (when considering their orbital motion on time scales of centuries, decades, and less), but is noticeable for systems of compact objects, as seen in observations of neutron star orbits. All orbiting bodies radiate gravitational energy, hence no orbit is infinitely stable.
Electromagnetic drag.
A satellite using an electrodynamic tether, moving through the Earth's magnetic field, experiences a drag force that can eventually deorbit it.
Stellar collision.
A stellar collision is the coming together of the two stars of a binary system as they lose orbital energy and approach each other. Several things can cause the loss of energy, including tidal forces, mass transfer, and gravitational radiation. The stars describe a spiral path as they approach each other. This sometimes results in a merger of the two stars or in the creation of a black hole. In the latter case, the last several revolutions of the stars around each other take only a few seconds.
Mass concentration.
While not a direct cause of orbital decay, uneven mass distributions (known as "mascons") of the body being orbited can perturb orbits over time, and extreme distributions can cause orbits to be highly unstable. The resulting unstable orbit can evolve into one in which a direct cause of orbital decay takes effect.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\frac{dR}{dt}=\\frac{\\alpha_o(R) \\cdot T(R)}{\\pi} "
},
{
"math_id": 1,
"text": "\\alpha_o\\, =\\, \\tfrac12\\, \\rho(R)\\, v^2\\, c_{\\rm d}\\, \\frac{A}{m}"
},
{
"math_id": 2,
"text": "\\rho(R)"
},
{
"math_id": 3,
"text": "v"
},
{
"math_id": 4,
"text": "A"
},
{
"math_id": 5,
"text": "m"
},
{
"math_id": 6,
"text": "c_{\\rm d}"
},
{
"math_id": 7,
"text": " U = KE + GPE = -\\frac{G M_E m}{2R} "
},
{
"math_id": 8,
"text": " \\frac{dU}{dR} = \\frac{G M_E m}{2R^2} "
},
{
"math_id": 9,
"text": " \\frac{dU}{dt}=\\frac{F \\cdot ds}{dt}=\\frac{F \\cdot R \\cdot d\\theta}{dt}=F \\cdot R \\cdot \\omega "
},
{
"math_id": 10,
"text": " \\omega = \\frac{2\\pi}{T} = \\sqrt{\\frac{G M_E}{R^3}} =\\frac{F \\cdot R \\cdot d\\theta}{dt}=F \\cdot R \\cdot \\omega "
},
{
"math_id": 11,
"text": " F = m \\cdot \\alpha_o "
},
{
"math_id": 12,
"text": " \\frac{dU}{dt}= m \\cdot \\alpha_o \\cdot \\sqrt{\\frac{G M_E}{R}}"
},
{
"math_id": 13,
"text": " \\frac{dR}{dt} = \\left( \\left( \\frac{dU}{dR} \\right)^{-1} \\cdot \\frac{dU}{dt} \\right) "
},
{
"math_id": 14,
"text": " = 2\\alpha_o \\cdot \\sqrt{\\frac{R^3}{G M_E}} "
},
{
"math_id": 15,
"text": " = \\frac{\\alpha_o \\cdot T}{\\pi} "
}
]
| https://en.wikipedia.org/wiki?curid=723031 |
723043 | Family of sets | Any collection of sets, or subsets of a set
In set theory and related branches of mathematics, a family (or collection) can mean, depending upon the context, any of the following: set, indexed set, multiset, or class. A collection formula_0 of subsets of a given set formula_1 is called a family of subsets of formula_1, or a family of sets over formula_2 More generally, a collection of any sets whatsoever is called a family of sets, set family, or a set system. Additionally, a family of sets may be defined as a function from a set formula_3, known as the index set, to formula_0, in which case the sets of the family are indexed by members of formula_3. In some contexts, a family of sets may be allowed to contain repeated copies of any given member, and in other contexts it may form a proper class.
A finite family of subsets of a finite set formula_1 is also called a "hypergraph". The subject of extremal set theory concerns the largest and smallest examples of families of sets satisfying certain restrictions.
Examples.
The set of all subsets of a given set formula_1 is called the power set of formula_1 and is denoted by formula_4 The power set formula_5 of a given set formula_1 is a family of sets over formula_2
A subset of formula_1 having formula_6 elements is called a formula_6-subset of formula_2
The formula_6-subsets formula_7 of a set formula_1 form a family of sets.
Let formula_8 An example of a family of sets over formula_1 (in the multiset sense) is given by formula_9 where formula_10 and formula_11
The class formula_12 of all ordinal numbers is a "large" family of sets. That is, it is not itself a set but instead a proper class.
Properties.
Any family of subsets of a set formula_1 is itself a subset of the power set formula_5 if it has no repeated members.
Any family of sets without repetitions is a subclass of the proper class of all sets (the universe).
Hall's marriage theorem, due to Philip Hall, gives necessary and sufficient conditions for a finite family of non-empty sets (repetitions allowed) to have a system of distinct representatives.
If formula_13 is any family of sets then formula_14 denotes the union of all sets in formula_15 where in particular, formula_16
Any family formula_13 of sets is a family over formula_17 and also a family over any superset of formula_18
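As a small illustration, the example family given above can be written out in Python; a list is used so that the repeated member can be kept, matching the multiset sense, and the variable names are of course only for this sketch.

```python
# The example family F = {A1, A2, A3, A4} over S = {a, b, c, 1, 2} from above.
# A list of frozensets is used so that the repeated member A2 = A3 is kept.
S = {"a", "b", "c", 1, 2}
A1 = frozenset({"a", "b", "c"})
A2 = frozenset({1, 2})
A3 = frozenset({1, 2})        # repeated copy of A2
A4 = frozenset({"a", "b", 1})
F = [A1, A2, A3, A4]

# Every member is a subset of S, so F is a family of subsets of S.
assert all(A <= S for A in F)

# The union of all sets in F; F is also a family over this union
# and over any superset of it.
union_F = frozenset().union(*F)
assert union_F == frozenset(S)

# Without repetitions the family becomes a subset of the power set of S.
distinct_members = set(F)
print(len(F), len(distinct_members))   # 4 members, 3 distinct sets
```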
Related concepts.
Certain types of objects from other areas of mathematics are equivalent to families of sets, in that they can be described purely as a collection of sets of objects of some type:
Covers and topologies.
A family of sets is said to cover a set formula_20 if every point of formula_20 belongs to some member of the family.
A subfamily of a cover of formula_20 that is also a cover of formula_20 is called a subcover.
A family is called a point-finite collection if every point of formula_20 lies in only finitely many members of the family. If every point of a cover lies in exactly one member, the cover is a partition of formula_24
When formula_20 is a topological space, a cover whose members are all open sets is called an open cover.
A family is called locally finite if each point in the space has a neighborhood that intersects only finitely many members of the family.
A σ-locally finite or countably locally finite collection is a family that is the union of countably many locally finite families.
A cover formula_13 is said to refine another (coarser) cover formula_25 if every member of formula_13 is contained in some member of formula_26 A star refinement is a particular type of refinement.
Special types of set families.
A Sperner family is a set family in which none of the sets contains any of the others. Sperner's theorem bounds the maximum size of a Sperner family.
A Helly family is a set family such that any minimal subfamily with empty intersection has bounded size. Helly's theorem states that convex sets in Euclidean spaces of bounded dimension form Helly families.
An abstract simplicial complex is a set family formula_0 (consisting of finite sets) that is downward closed; that is, every subset of a set in formula_0 is also in formula_27
A matroid is an abstract simplicial complex with an additional property called the "augmentation property".
Every filter is a family of sets.
A convexity space is a set family closed under arbitrary intersections and unions of chains (with respect to the inclusion relation).
Other examples of set families are independence systems, greedoids, antimatroids, and bornological spaces.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "F"
},
{
"math_id": 1,
"text": "S"
},
{
"math_id": 2,
"text": "S."
},
{
"math_id": 3,
"text": "I"
},
{
"math_id": 4,
"text": "\\wp(S)."
},
{
"math_id": 5,
"text": "\\wp(S)"
},
{
"math_id": 6,
"text": "k"
},
{
"math_id": 7,
"text": "S^{(k)}"
},
{
"math_id": 8,
"text": "S = \\{a, b, c, 1, 2\\}."
},
{
"math_id": 9,
"text": "F = \\left\\{A_1, A_2, A_3, A_4\\right\\},"
},
{
"math_id": 10,
"text": "A_1 = \\{a, b, c\\}, A_2 = \\{1, 2\\}, A_3 = \\{1, 2\\},"
},
{
"math_id": 11,
"text": "A_4 = \\{a, b, 1\\}."
},
{
"math_id": 12,
"text": "\\operatorname{Ord}"
},
{
"math_id": 13,
"text": "\\mathcal{F}"
},
{
"math_id": 14,
"text": "\\cup \\mathcal{F} := {\\textstyle \\bigcup\\limits_{F \\in \\mathcal{F}}} F"
},
{
"math_id": 15,
"text": "\\mathcal{F},"
},
{
"math_id": 16,
"text": "\\cup \\varnothing = \\varnothing."
},
{
"math_id": 17,
"text": "\\cup \\mathcal{F}"
},
{
"math_id": 18,
"text": "\\cup \\mathcal{F}."
},
{
"math_id": 19,
"text": "(X, \\tau)"
},
{
"math_id": 20,
"text": "X"
},
{
"math_id": 21,
"text": "\\tau"
},
{
"math_id": 22,
"text": "X,"
},
{
"math_id": 23,
"text": "\\varnothing"
},
{
"math_id": 24,
"text": "X."
},
{
"math_id": 25,
"text": "\\mathcal{C}"
},
{
"math_id": 26,
"text": "\\mathcal{C}."
},
{
"math_id": 27,
"text": "F."
}
]
| https://en.wikipedia.org/wiki?curid=723043 |
72307852 | Gilbert–Pollack conjecture | Unsolved problem in graph theory
In mathematics, the Gilbert–Pollak conjecture is an unproven conjecture on the ratio of lengths of Steiner trees and Euclidean minimum spanning trees for the same point sets in the Euclidean plane. It was proposed by Edgar Gilbert and Henry O. Pollak in 1968.
Statement.
<templatestyles src="Unsolved/styles.css" />
Unsolved problem in computer science:
Is the Steiner ratio of the Euclidean plane equal to formula_0?
For a set of points in the plane, the shortest network of line segments that connects the points, having only the given points as endpoints, is the Euclidean minimum spanning tree of the set. It may be possible to construct a shorter network by using additional endpoints, not present in the given point set. These additional points are called Steiner points and the shortest network that can be constructed using them is called a Steiner minimum tree. The Steiner ratio is the supremum, over all point sets, of the ratio of lengths of the Euclidean minimum spanning tree to the Steiner minimum tree. Because the Steiner minimum tree is shorter, this ratio is always greater than one.
A lower bound on the Steiner ratio is provided by three points at the vertices of an equilateral triangle of unit side length. For these three points, the Euclidean minimum spanning tree uses two edges of the triangle, with total length two. The Steiner minimum tree connects all three points to a Steiner point at the centroid of the triangle, with the smaller total length formula_1. Because of this example, the Steiner ratio must be at least formula_2.
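The numbers in this example are easy to check directly; the short Python sketch below computes both tree lengths for the unit equilateral triangle and recovers the ratio formula_2.

```python
import math

# Vertices of an equilateral triangle with unit side length
pts = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Euclidean minimum spanning tree: two of the three unit-length sides
mst_length = dist(pts[0], pts[1]) + dist(pts[0], pts[2])

# Steiner minimum tree: all three points joined to the centroid
centroid = (sum(p[0] for p in pts) / 3, sum(p[1] for p in pts) / 3)
steiner_length = sum(dist(p, centroid) for p in pts)

ratio = mst_length / steiner_length
print(f"MST length     = {mst_length:.6f}")       # 2.0
print(f"Steiner length = {steiner_length:.6f}")   # sqrt(3) = 1.732051
print(f"ratio          = {ratio:.6f}")            # 2/sqrt(3) = 1.154701
assert math.isclose(ratio, 2 / math.sqrt(3))
```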
The Gilbert–Pollak conjecture states that this example is the worst case for the Steiner ratio, and that this ratio equals formula_0. That is, for every finite point set in the Euclidean plane, the Euclidean minimum spanning tree can be no longer than formula_0 times the length of the Steiner minimum tree.
Attempted proof.
The conjecture is famous for a claimed proof by Ding-Zhu Du and Frank Kwang-Ming Hwang, which later turned out to have a serious gap.
Based on the flawed Du–Hwang result, J. Hyam Rubinstein and Jia F. Weng concluded that the Steiner ratio is also formula_0 for a 2-dimensional sphere of constant curvature; because of the gap in the underlying result, however, the result of Rubinstein and Weng is now also considered unproven.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "2/\\sqrt 3"
},
{
"math_id": 1,
"text": "\\sqrt 3"
},
{
"math_id": 2,
"text": "2/\\sqrt 3\\approx 1.155"
}
]
| https://en.wikipedia.org/wiki?curid=72307852 |
723105 | Abstract simplicial complex | Mathematical object
In combinatorics, an abstract simplicial complex (ASC), often called an abstract complex or just a complex, is a family of sets that is closed under taking subsets, i.e., every subset of a set in the family is also in the family. It is a purely combinatorial description of the geometric notion of a simplicial complex. For example, in a 2-dimensional simplicial complex, the sets in the family are the triangles (sets of size 3), their edges (sets of size 2), and their vertices (sets of size 1).
In the context of matroids and greedoids, abstract simplicial complexes are also called independence systems.
An abstract simplex can be studied algebraically by forming its Stanley–Reisner ring; this sets up a powerful relation between combinatorics and commutative algebra.
Definitions.
A collection Δ of non-empty finite subsets of a set "S" is called a set-family.
A set-family Δ is called an abstract simplicial complex if, for every set X in Δ, and every non-empty subset "Y" ⊆ "X", the set Y also belongs to Δ.
The finite sets that belong to Δ are called faces of the complex, and a face Y is said to belong to another face X if "Y" ⊆ "X", so the definition of an abstract simplicial complex can be restated as saying that every face of a face of a complex Δ is itself a face of Δ. The vertex set of Δ is defined as "V"(Δ) = ∪Δ, the union of all faces of Δ. The elements of the vertex set are called the vertices of the complex. For every vertex "v" of Δ, the set {"v"} is a face of the complex, and every face of the complex is a finite subset of the vertex set.
The maximal faces of Δ (i.e., faces that are not subsets of any other faces) are called facets of the complex. The dimension of a face X in Δ is defined as dim("X") = |"X"| − 1: faces consisting of a single element are zero-dimensional, faces consisting of two elements are one-dimensional, etc. The dimension of the complex dim(Δ) is defined as the largest dimension of any of its faces, or infinity if there is no finite bound on the dimension of the faces.
The complex Δ is said to be finite if it has finitely many faces, or equivalently if its vertex set is finite. Also, Δ is said to be pure if it is finite-dimensional (but not necessarily finite) and every facet has the same dimension. In other words, Δ is pure if dim(Δ) is finite and every face is contained in a facet of dimension dim(Δ).
One-dimensional abstract simplicial complexes are mathematically equivalent to simple undirected graphs: the vertex set of the complex can be viewed as the vertex set of a graph, and the two-element facets of the complex correspond to undirected edges of a graph. In this view, one-element facets of a complex correspond to isolated vertices that do not have any incident edges.
A subcomplex of Δ is an abstract simplicial complex "L" such that every face of "L" belongs to Δ; that is, "L" ⊆ Δ and "L" is an abstract simplicial complex. A subcomplex that consists of all of the subsets of a single face of Δ is often called a simplex of Δ. (However, some authors use the term "simplex" for a face or, rather ambiguously, for both a face and the subcomplex associated with a face, by analogy with the non-abstract (geometric) simplicial complex terminology. To avoid ambiguity, we do not use in this article the term "simplex" for a face in the context of abstract complexes).
The "d"-skeleton of Δ is the subcomplex of Δ consisting of all of the faces of Δ that have dimension at most "d". In particular, the 1-skeleton is called the underlying graph of Δ. The 0-skeleton of Δ can be identified with its vertex set, although formally it is not quite the same thing (the vertex set is a single set of all of the vertices, while the 0-skeleton is a family of single-element sets).
The link of a face Y in Δ, often denoted Δ/"Y" or lkΔ("Y"), is the subcomplex of Δ defined by
formula_0
Note that the link of the empty set is Δ itself.
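The definitions above are straightforward to express computationally. The following Python sketch builds a small complex (the one generated by the facets {1,2,3}, {2,4} and {3,4}, an illustrative choice) and computes its vertex set, facets, dimension, 1-skeleton and the link of a vertex.

```python
from itertools import combinations

def closure(facets):
    """All non-empty subsets of the given facets: the complex they generate."""
    faces = set()
    for F in facets:
        for k in range(1, len(F) + 1):
            faces.update(frozenset(c) for c in combinations(F, k))
    return faces

# Illustrative complex: a filled triangle {1,2,3} plus the edges {2,4} and {3,4}
delta = closure([{1, 2, 3}, {2, 4}, {3, 4}])

# Downward closedness, the defining property of an abstract simplicial complex
assert all(frozenset(c) in delta
           for X in delta
           for k in range(1, len(X) + 1)
           for c in combinations(X, k))

vertex_set = frozenset().union(*delta)                        # V(Delta)
facets = {X for X in delta if not any(X < Y for Y in delta)}  # maximal faces
dim = max(len(X) for X in delta) - 1                          # dim(Delta)
skeleton_1 = {X for X in delta if len(X) - 1 <= 1}            # 1-skeleton
link_of_3 = {X for X in delta
             if not (X & {3}) and (X | {3}) in delta}         # link of the face {3}

print("vertices  :", sorted(vertex_set))             # [1, 2, 3, 4]
print("facets    :", sorted(map(sorted, facets)))    # [[1, 2, 3], [2, 4], [3, 4]]
print("dimension :", dim)                            # 2, and the complex is not pure
print("link of 3 :", sorted(map(sorted, link_of_3))) # [[1], [1, 2], [2], [4]]
```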
Simplicial maps.
Given two abstract simplicial complexes, Δ and Γ, a simplicial map is a function "f" that maps the vertices of Δ to the vertices of Γ and that has the property that for any face X of Δ, the image "f" ("X") is a face of Γ. There is a category SCpx with abstract simplicial complexes as objects and simplicial maps as morphisms. This is equivalent to a suitable category defined using non-abstract simplicial complexes.
Moreover, the categorical point of view allows us to tighten the relation between the underlying set "S" of an abstract simplicial complex Δ and the vertex set "V"(Δ) ⊆ "S" of Δ: for the purposes of defining a category of abstract simplicial complexes, the elements of "S" not lying in "V"(Δ) are irrelevant. More precisely, SCpx is equivalent to the category where:
Geometric realization.
We can associate to any abstract simplicial complex (ASC) "K" a topological space formula_1, called its geometric realization. There are several ways to define formula_1.
Geometric definition.
Every geometric simplicial complex (GSC) determines an ASC: the vertices of the ASC are the vertices of the GSC, and the faces of the ASC are the vertex-sets of the faces of the GSC. For example, consider a GSC with 4 vertices {1,2,3,4}, where the maximal faces are the triangle between {1,2,3} and the lines between {2,4} and {3,4}. Then, the corresponding ASC contains the sets {1,2,3}, {2,4}, {3,4}, and all their subsets. We say that the GSC is the geometric realization of the ASC.
Every ASC has a geometric realization. This is easy to see for a finite ASC. Let formula_2. Identify the vertices in formula_3 with the vertices of an ("N-1")-dimensional simplex in formula_4. Construct the GSC {conv(F): F is a face in K}. Clearly, the ASC associated with this GSC is identical to "K", so we have indeed constructed a geometric realization of "K". In fact, an ASC can be realized using far fewer dimensions. If an ASC is "d"-dimensional (that is, the maximum cardinality of a simplex in it is "d"+1), then it has a geometric realization in formula_5, but might not have a geometric realization in formula_6. The special case "d" = 1 corresponds to the well-known fact that any graph can be plotted in formula_7 with the edges drawn as straight lines that do not intersect each other except in common vertices, but that not every graph can be plotted in formula_8 in this way.
If "K" is the standard combinatorial "n"-simplex, then formula_1 can be naturally identified with Δ"n".
Every two geometric realizations of the same ASC, even in Euclidean spaces of different dimensions, are homeomorphic. Therefore, given an ASC "K", one can speak of "the" geometric realization of "K".
Topological definition.
The construction goes as follows. First, define formula_1 as a subset of formula_9 consisting of functions formula_10 satisfying the two conditions:
formula_11
formula_12
Now think of the set of elements of formula_9 with finite support as the direct limit of formula_13 where "A" ranges over finite subsets of "S", and give that direct limit the induced topology. Now give formula_1 the subspace topology.
Categorical definition.
Alternatively, let formula_14 denote the category whose objects are the faces of K and whose morphisms are inclusions. Next choose a total order on the vertex set of K and define a functor "F" from formula_14 to the category of topological spaces as follows. For any face "X" in "K" of dimension "n", let "F"("X") = Δ"n" be the standard "n"-simplex. The order on the vertex set then specifies a unique bijection between the elements of X and vertices of Δ"n", ordered in the usual way "e"0 < "e"1 < ... < "en". If "Y" ⊆ "X" is a face of dimension "m" < "n", then this bijection specifies a unique "m"-dimensional face of Δ"n". Define "F"("Y") → "F"("X") to be the unique affine linear embedding of Δ"m" as that distinguished face of Δ"n", such that the map on vertices is order-preserving.
We can then define the geometric realization formula_1 as the colimit of the functor "F". More specifically formula_1 is the quotient space of the disjoint union
formula_15
by the equivalence relation that identifies a point "y" ∈ "F"("Y") with its image under the map "F"("Y") → "F"("X"), for every inclusion "Y" ⊆ "X".
Examples.
1. Let "V" be a finite set of cardinality "n" + 1. The combinatorial "n"-simplex with vertex-set "V" is an ASC whose faces are all nonempty subsets of "V" (i.e., it is the power set of "V"). If "V"
"S"
{0, 1, ..., "n"}, then this ASC is called the standard combinatorial "n"-simplex.
2. Let "G" be an undirected graph. The clique complex of "G is an ASC whose faces are all cliques (complete subgraphs) of "G". The independence complex of "G is an ASC whose faces are all independent sets of "G" (it is the clique complex of the complement graph of G). Clique complexes are the prototypical example of flag complexes. A flag complex is a complex "K" with the property that every set, all of whose 2-element subsets are faces of "K", is itself a face of "K".
3. Let "H" be a hypergraph. A matching in "H" is a set of edges of "H", in which every two edges are disjoint. The matching complex of "H" is an ASC whose faces are all matchings in "H". It is the independence complex of the line graph of "H".
4. Let "P" be a partially ordered set (poset). The order complex of "P" is an ASC whose faces are all finite chains in "P". Its homology groups and other topological invariants contain important information about the poset "P".
5. Let "M" be a metric space and "δ" a real number. The Vietoris–Rips complex is an ASC whose faces are the finite subsets of "M" with diameter at most "δ". It has applications in homology theory, hyperbolic groups, image processing, and mobile ad hoc networking. It is another example of a flag complex.
6. Let formula_16 be a square-free monomial ideal in a polynomial ring formula_17 (that is, an ideal generated by products of subsets of variables). Then the exponent vectors of those square-free monomials of formula_18 that are not in formula_16 determine an abstract simplicial complex via the map formula_19. In fact, there is a bijection between (non-empty) abstract simplicial complexes on "n" vertices and square-free monomial ideals in "S". If formula_20 is the square-free ideal corresponding to the simplicial complex formula_21 then the quotient formula_22 is known as the Stanley–Reisner ring of formula_23.
7. For any open covering "C" of a topological space, the nerve complex of "C" is an abstract simplicial complex containing the sub-families of "C" with a non-empty intersection.
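As a small illustration of example 2, the sketch below builds the clique complex of a 5-vertex graph (an assumed toy graph, a triangle joined to a path) and checks the flag property.

```python
from itertools import combinations

# Illustrative graph: a triangle on {1,2,3} plus the path 3-4-5
vertices = {1, 2, 3, 4, 5}
edges = {frozenset(e) for e in [(1, 2), (1, 3), (2, 3), (3, 4), (4, 5)]}

def is_clique(s):
    return all(frozenset(p) in edges for p in combinations(s, 2))

# Clique complex: all non-empty cliques of the graph
clique_complex = {frozenset(s)
                  for k in range(1, len(vertices) + 1)
                  for s in combinations(vertices, k)
                  if is_clique(s)}

# It is downward closed, hence an abstract simplicial complex
assert all(frozenset(c) in clique_complex
           for X in clique_complex
           for k in range(1, len(X) + 1)
           for c in combinations(X, k))

# Flag property: a set all of whose 2-element subsets are faces is itself a face
for k in range(2, len(vertices) + 1):
    for s in combinations(vertices, k):
        if all(frozenset(p) in clique_complex for p in combinations(s, 2)):
            assert frozenset(s) in clique_complex

print(sorted(map(sorted, clique_complex)))
# 11 faces: the five vertices, the five edges, and the triangle [1, 2, 3]
```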
Enumeration.
The number of abstract simplicial complexes on up to "n" labeled elements (that is, on a set "S" of size "n") is one less than the "n"th Dedekind number. These numbers grow very rapidly, and are known only for "n" ≤ 9; the resulting counts are (starting with "n" = 0):
1, 2, 5, 19, 167, 7580, 7828353, 2414682040997, 56130437228687557907787, 286386577668298411128469151667598498812365 (sequence in the OEIS). This corresponds to the number of non-empty antichains of subsets of an "n"-set.
The number of abstract simplicial complexes whose vertices are exactly "n" labeled elements is given by the sequence "1, 2, 9, 114, 6894, 7785062, 2414627396434, 56130437209370320359966, 286386577668298410623295216696338374471993" (sequence in the OEIS), starting at "n" = 1. This corresponds to the number of antichain covers of a labeled "n"-set; there is a clear bijection between antichain covers of an "n"-set and simplicial complexes on "n" elements described in terms of their maximal faces.
The number of abstract simplicial complexes on exactly "n" unlabeled elements is given by the sequence "1, 2, 5, 20, 180, 16143, 489996795, 1392195548399980210" (sequence in the OEIS), starting at "n" = 1.
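These counts can be cross-checked by brute force for very small "n": the sketch below simply tests every family of non-empty subsets of an "n"-set for downward closure and reproduces 1, 2, 5, 19, 167 for "n" = 0, ..., 4 (larger values are far out of reach for this naive approach).

```python
from itertools import combinations

def num_complexes(n):
    """Count downward-closed families of non-empty subsets of {1,...,n},
    the empty family included."""
    ground = range(1, n + 1)
    nonempty = [frozenset(c) for k in range(1, n + 1)
                for c in combinations(ground, k)]
    count = 0
    # Enumerate every subfamily of the 2^n - 1 non-empty subsets
    for mask in range(1 << len(nonempty)):
        family = [s for i, s in enumerate(nonempty) if mask >> i & 1]
        chosen = set(family)
        closed = all(frozenset(c) in chosen
                     for X in family
                     for k in range(1, len(X) + 1)
                     for c in combinations(X, k))
        count += closed
    return count

# Expected: 1, 2, 5, 19, 167 -- one less than the Dedekind numbers 2, 3, 6, 20, 168
print([num_complexes(n) for n in range(5)])   # n = 4 already takes several seconds
```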
Computational problems.
The simplicial complex recognition problem is: given a finite ASC, decide whether its geometric realization is homeomorphic to a given geometric object. This problem is undecidable for any "d"-dimensional manifolds for "d" ≥ 5.
Relation to other concepts.
An abstract simplicial complex with an additional property called the augmentation property or the exchange property yields a matroid. The following expression shows the relations between the terms:
HYPERGRAPHS = SET-FAMILIES ⊃ INDEPENDENCE-SYSTEMS = ABSTRACT-SIMPLICIAL-COMPLEXES ⊃ MATROIDS.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\Delta/Y := \\{ X\\in \\Delta \\mid X\\cap Y = \\varnothing,\\, X\\cup Y \\in \\Delta \\}. "
},
{
"math_id": 1,
"text": "|K|"
},
{
"math_id": 2,
"text": "N := |V(K)|"
},
{
"math_id": 3,
"text": "V(K)"
},
{
"math_id": 4,
"text": "\\R^N"
},
{
"math_id": 5,
"text": "\\R^{2d+1}"
},
{
"math_id": 6,
"text": "\\R^{2d}"
},
{
"math_id": 7,
"text": "\\R^{3}"
},
{
"math_id": 8,
"text": "\\R^{2}"
},
{
"math_id": 9,
"text": "[0, 1]^S"
},
{
"math_id": 10,
"text": "t\\colon S\\to [0, 1]"
},
{
"math_id": 11,
"text": "\\{s\\in S:t_s>0\\}\\in K"
},
{
"math_id": 12,
"text": "\\sum_{s\\in S}t_s=1"
},
{
"math_id": 13,
"text": "[0, 1]^A"
},
{
"math_id": 14,
"text": "\\mathcal{K}"
},
{
"math_id": 15,
"text": "\\coprod_{X \\in K}{F(X)}"
},
{
"math_id": 16,
"text": "I"
},
{
"math_id": 17,
"text": "S = K[x_1, \\dots, x_n]"
},
{
"math_id": 18,
"text": "S"
},
{
"math_id": 19,
"text": "\\mathbf{a}\\in \\{0,1\\}^n \\mapsto \\{i \\in [n] : a_i = 1\\}"
},
{
"math_id": 20,
"text": "I_{\\Delta}"
},
{
"math_id": 21,
"text": "\\Delta"
},
{
"math_id": 22,
"text": "S/I_{\\Delta}"
},
{
"math_id": 23,
"text": "{\\Delta}"
}
]
| https://en.wikipedia.org/wiki?curid=723105 |
723125 | Incidence structure | Abstract mathematical system of two types of objects and a relation between them
In mathematics, an incidence structure is an abstract system consisting of two types of objects and a single relationship between these types of objects. Consider the points and lines of the Euclidean plane as the two types of objects and ignore all the properties of this geometry except for the relation of which points are on which lines for all points and lines. What is left is the incidence structure of the Euclidean plane.
Incidence structures are most often considered in the geometrical context where they are abstracted from, and hence generalize, planes (such as affine, projective, and Möbius planes), but the concept is very broad and not limited to geometric settings. Even in a geometric setting, incidence structures are not limited to just points and lines; higher-dimensional objects (planes, solids, n-spaces, conics, etc.) can be used. The study of finite structures is sometimes called finite geometry.
Formal definition and terminology.
An incidence structure is a triple ("P", "L", "I") where P is a set whose elements are called "points", L is a distinct set whose elements are called "lines" and "I" ⊆ "P" × "L" is the incidence relation. The elements of I are called flags. If ("p", "l") is in I then one may say that point p "lies on" line l or that the line l "passes through" point p. A more "symmetric" terminology, to reflect the symmetric nature of this relation, is that "p is "incident" with l" or that "l is incident with p" and uses the notation "p" I "l" synonymously with ("p", "l") ∈ "I".
In some common situations L may be a set of subsets of P in which case incidence I will be containment ("p" I "l" if and only if p is a member of l). Incidence structures of this type are called "set-theoretic". This is not always the case; for example, if P is a set of vectors and L a set of square matrices, we may define
formula_0
This example also shows that while the geometric language of points and lines is used, the object types need not be these geometric objects.
Examples.
An incidence structure is "uniform" if each line is incident with the same number of points. Each of these examples, except the second, is uniform with three points per line.
Graphs.
Any graph (which need not be simple; loops and multiple edges are allowed) is a uniform incidence structure with two points per line. For these examples, the vertices of the graph form the point set, the edges of the graph form the line set, and incidence means that a vertex is an endpoint of an edge.
Linear spaces.
Incidence structures are seldom studied in their full generality; it is typical to study incidence structures that satisfy some additional axioms. For instance, a "partial linear space" is an incidence structure in which any two distinct points are incident with at most one common line, and every line is incident with at least two points.
If the first of these axioms is strengthened to require that any two distinct points are incident with exactly one common line,
the incidence structure is called a "linear space".
Nets.
A more specialized example is a k-net. This is an incidence structure in which the lines fall into k parallel classes, so that two lines in the same parallel class have no common points, but two lines in different classes have exactly one common point, and each point belongs to exactly one line from each parallel class. An example of a k-net is the set of points of an affine plane together with k parallel classes of affine lines.
Dual structure.
If we interchange the role of "points" and "lines" in
formula_1
we obtain the "dual structure",
formula_2
where "I"∗ is the converse relation of I. It follows immediately from the definition that:
formula_3
This is an abstract version of projective duality.
A structure C that is isomorphic to its dual "C"∗ is called "self-dual". The Fano plane above is a self-dual incidence structure.
Other terminology.
The concept of an incidence structure is very simple and has arisen in several disciplines, each introducing its own vocabulary and specifying the types of questions that are typically asked about these structures. Incidence structures use a geometric terminology, but in graph theoretic terms they are called hypergraphs and in design theoretic terms they are called block designs. They are also known as a "set system" or family of sets in a general context.
Hypergraphs.
Each hypergraph or set system can be regarded as an incidence
structure in which the universal set plays the role of "points", the corresponding family of sets plays the role of "lines" and the incidence relation is set membership "∈". Conversely, every incidence structure can be viewed as a hypergraph by identifying the lines with the sets of points that are incident with them.
Block designs.
A (general) block design is a set X together with a family F of subsets of X (repeated subsets are allowed). Normally a block design is required to satisfy numerical regularity conditions. As an incidence structure, X is the set of points and F is the set of lines, usually called "blocks" in this context (repeated blocks must have distinct names, so F is actually a set and not a multiset). If all the subsets in F have the same size, the block design is called "uniform". If each element of X appears in the same number of subsets, the block design is said to be "regular". The dual of a uniform design is a regular design and vice versa.
Example: Fano plane.
Consider the block design/hypergraph given by:
formula_4
This incidence structure is called the Fano plane. As a block design it is both uniform and regular.
In the labeling given, the lines are precisely the subsets of the points that consist of three points whose labels add up to zero using nim addition. Alternatively, each number, when written in binary, can be identified with a non-zero vector of length three over the binary field. Three vectors that generate a subspace form a line; in this case, that is equivalent to their vector sum being the zero vector.
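The nim-addition description is easy to verify computationally: the sketch below regenerates the seven lines as the 3-element subsets of {1, ..., 7} whose labels XOR to zero and checks that every pair of points lies on exactly one of them.

```python
from itertools import combinations

points = range(1, 8)

# Lines of the Fano plane: 3-subsets whose labels have nim-sum (XOR) zero
lines = [set(t) for t in combinations(points, 3) if t[0] ^ t[1] ^ t[2] == 0]

expected = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6},
            {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]

assert len(lines) == 7 and all(l in expected for l in lines)

# Every pair of distinct points is on exactly one line
for p, q in combinations(points, 2):
    assert sum(1 for l in lines if p in l and q in l) == 1

print(lines)
```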
Representations.
Incidence structures may be represented in many ways. If the sets P and L are finite these representations can compactly encode all the relevant information concerning the structure.
Incidence matrix.
The incidence matrix of a (finite) incidence structure is a (0,1) matrix that has its rows indexed by the points {pi} and columns indexed by the lines {"lj"} where the ij-th entry is a 1 if "pi" I "lj" and 0 otherwise. An incidence matrix is not uniquely determined since it depends upon the arbitrary ordering of the points and the lines.
The non-uniform incidence structure pictured above (example number 2) is given by:
formula_5
An incidence matrix for this structure is:
formula_6
which corresponds to the incidence table: the point "A" lies on lines "o" and "p"; "B" on "p" and "q"; "C" on "l"; "D" on "n"; "E" on "l"; and "P" on "l", "m", "n", "o" and "q".
If an incidence structure C has an incidence matrix M, then the dual structure "C"∗ has the transpose matrix MT as its incidence matrix (and is defined by that matrix).
An incidence structure is self-dual if there exists an ordering of the points and lines so that the incidence matrix constructed with that ordering is a symmetric matrix.
With the labels as given in example number 1 above and with points ordered "A", "B", "C", "D", "G", "F", "E" and lines ordered "l", "p", "n", "s", "r", "m", "q", the Fano plane has the incidence matrix:
formula_7
Since this is a symmetric matrix, the Fano plane is a self-dual incidence structure.
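The claimed properties of this matrix can be checked mechanically; the short sketch below verifies that the matrix printed above is symmetric, that every row and column contains exactly three 1s, and that any two rows share exactly one common column, as required for the Fano plane.

```python
# The incidence matrix given above (rows: points, columns: lines)
M = [
    [1, 1, 1, 0, 0, 0, 0],
    [1, 0, 0, 1, 1, 0, 0],
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 1, 0, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 1, 0, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
]
n = len(M)

# Symmetry: the matrix equals its transpose, so the structure is self-dual
assert all(M[i][j] == M[j][i] for i in range(n) for j in range(n))

# Three points on every line and three lines through every point
assert all(sum(row) == 3 for row in M)
assert all(sum(M[i][j] for i in range(n)) == 3 for j in range(n))

# Any two distinct points lie on exactly one common line
for i in range(n):
    for k in range(i + 1, n):
        assert sum(M[i][j] * M[k][j] for j in range(n)) == 1

print("symmetric Fano incidence matrix: the structure is self-dual")
```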
Pictorial representations.
An incidence figure (that is, a depiction of an incidence structure), is constructed by representing the points by dots in a plane and having some visual means of joining the dots to correspond to lines. The dots may be placed in any manner, there are no restrictions on distances between points or any relationships between points. In an incidence structure there is no concept of a point being between two other points; the order of points on a line is undefined. Compare this with ordered geometry, which does have a notion of betweenness. The same statements can be made about the depictions of the lines. In particular, lines need not be depicted by "straight line segments" (see examples 1, 3 and 4 above). As with the pictorial representation of graphs, the crossing of two "lines" at any place other than a dot has no meaning in terms of the incidence structure; it is only an accident of the representation. These incidence figures may at times resemble graphs, but they aren't graphs unless the incidence structure is a graph.
Realizability.
Incidence structures can be modelled by points and curves in the Euclidean plane with the usual geometric meaning of incidence. Some incidence structures admit representation by points and (straight) lines. Structures that can be represented in this way are called "realizable". If no ambient space is mentioned then the Euclidean plane is assumed. The Fano plane (example 1 above) is not realizable since it needs at least one curve. The Möbius–Kantor configuration (example 4 above) is not realizable in the Euclidean plane, but it is realizable in the complex plane. On the other hand, examples 2 and 5 above are realizable and the incidence figures given there demonstrate this. Steinitz (1894) has shown that "n"3 configurations (incidence structures with "n" points and "n" lines, three points per line and three lines through each point) are either realizable or require the use of only one curved line in their representations. The Fano plane is the unique (73) configuration and the Möbius–Kantor configuration is the unique (83) configuration.
Incidence graph (Levi graph).
Each incidence structure C corresponds to a bipartite graph called the Levi graph or incidence graph of the structure. As any bipartite graph is two-colorable, the Levi graph can be given a black and white vertex coloring, where black vertices correspond to points and white vertices correspond to lines of C. The edges of this graph correspond to the flags (incident point/line pairs) of the incidence structure. The original Levi graph was the incidence graph of the generalized quadrangle of order two (example 3 above), but the term has been extended by H.S.M. Coxeter to refer to an incidence graph of any incidence structure.
Levi graph examples.
The Levi graph of the Fano plane is the Heawood graph. Since the Heawood graph is connected and vertex-transitive, there exists an automorphism (such as the one defined by a reflection about the vertical axis in the figure of the Heawood graph) interchanging black and white vertices. This, in turn, implies that the Fano plane is self-dual.
The specific representation, on the left, of the Levi graph of the Möbius–Kantor configuration (example 4 above) illustrates that a rotation about the center (either clockwise or counterclockwise) of the diagram interchanges the blue and red vertices and maps edges to edges. That is to say that there exists a color-interchanging automorphism of this graph. Consequently, the incidence structure known as the Möbius–Kantor configuration is self-dual.
Generalization.
It is possible to generalize the notion of an incidence structure to include more than two types of objects. A structure with k types of objects is called an "incidence structure of rank" k or a "rank" k "geometry". Formally these are defined as ("k" + 1)-tuples "S" = ("P"1, "P"2, ..., "P"k, "I") with "P""i" ∩ "P""j" = ∅ and
formula_8
The Levi graph for these structures is defined as a multipartite graph with vertices corresponding to each type being colored the same.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "I = \\{(v, M) : \\vec v \\text{ is an eigenvector of matrix } M \\}."
},
{
"math_id": 1,
"text": "C = (P, L, I)"
},
{
"math_id": 2,
"text": "C^* = (L, P, I^*)"
},
{
"math_id": 3,
"text": "C^{**} = C"
},
{
"math_id": 4,
"text": "\\begin{align}\nP &= \\{1,2,3,4,5,6,7\\}, \\\\[2pt]\nL &= \\left\\{ \\begin{array}{ll}\n \\{1,2,3\\}, & \\{1,4,5\\}, \\\\\n \\{1,6,7\\}, & \\{2,4,6\\}, \\\\\n \\{2,5,7\\}, & \\{3,4,7\\}, \\\\\n \\{3,5,6\\} \\end{array} \\right\\}.\n\\end{align}"
},
{
"math_id": 5,
"text": "\\begin{align}\nP &= \\{ A, B, C, D, E, P \\} \\\\[2pt]\nL &= \\left\\{ \\begin{array}{ll}\n l = \\{C, P, E \\}, & m = \\{ P \\}, \\\\\n n = \\{ P, D \\}, & o = \\{ P, A \\}, \\\\\n p = \\{ A, B \\}, & q = \\{ P, B \\} \\end{array} \\right\\}\n\\end{align}"
},
{
"math_id": 6,
"text": " \\begin{pmatrix} \n0 & 0 & 0 & 1 & 1 & 0 \\\\\n0 & 0 & 0 & 0 & 1 & 1 \\\\\n1 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 1 & 0 & 0 & 0 \\\\\n1 & 0 & 0 & 0 & 0 & 0 \\\\\n1 & 1 & 1 & 1 & 0 & 1\n\\end{pmatrix} "
},
{
"math_id": 7,
"text": " \\begin{pmatrix} \n1 & 1 & 1 & 0 & 0 & 0 & 0\\\\\n1 & 0 & 0 & 1 & 1 & 0 & 0 \\\\\n1 & 0 & 0 & 0 & 0 & 1 & 1 \\\\\n0 & 1 & 0 & 1 & 0 & 1 & 0 \\\\\n0 & 1 & 0 & 0 & 1 & 0 & 1 \\\\\n0 & 0 & 1 & 1 & 0 & 0 & 1 \\\\\n0 & 0 & 1 & 0 & 1 & 1 & 0\n\\end{pmatrix} . "
},
{
"math_id": 8,
"text": "I \\subseteq \\bigcup_{i < j} P_i \\times P_j."
}
]
| https://en.wikipedia.org/wiki?curid=723125 |
723196 | Pentakis dodecahedron | Catalan solid with 60 faces
In geometry, a pentakis dodecahedron or kisdodecahedron is a polyhedron created by attaching a pentagonal pyramid to each face of a regular dodecahedron; that is, it is the Kleetope of the dodecahedron. Specifically, the term typically refers to a particular Catalan solid, namely the dual of a truncated icosahedron.
Cartesian coordinates.
Let formula_0 be the golden ratio. The 12 points given by formula_1 and cyclic permutations of these coordinates are the vertices of a regular icosahedron. Its dual regular dodecahedron, whose edges intersect those of the icosahedron at right angles, has as vertices the points formula_2 together with the points formula_3 and cyclic permutations of these coordinates. Multiplying all coordinates of the icosahedron by a factor of formula_4 gives a slightly smaller icosahedron. The 12 vertices of this icosahedron, together with the vertices of the dodecahedron, are the vertices of a pentakis dodecahedron centered at the origin. The length of its long edges equals formula_5. Its faces are acute isosceles triangles with one angle of formula_6 and two of formula_7. The length ratio between the long and short edges of these triangles equals formula_8.
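The construction above can be checked numerically. In the sketch below, one apex of the pentakis dodecahedron is built over the matching pentagonal face of the dodecahedron, and the long-edge length formula_5, the edge ratio formula_8 and the apex angle formula_6 are recovered; selecting the five face vertices by their dot product with the apex direction is an implementation choice for this sketch, not part of the construction described above.

```python
import math
from itertools import product

phi = (1 + math.sqrt(5)) / 2
k = (3 * phi + 12) / 19                      # scaling factor for the icosahedron

# Dodecahedron vertices: (+-1,+-1,+-1) and cyclic permutations of (+-phi, +-1/phi, 0)
dodeca = [v for v in product((-1.0, 1.0), repeat=3)]
for sa, sb in product((-1, 1), repeat=2):
    a, b = sa * phi, sb / phi
    dodeca += [(a, b, 0.0), (0.0, a, b), (b, 0.0, a)]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# One icosahedron vertex, rescaled by k: the apex of one pentagonal pyramid
apex_dir = (0.0, 1.0, phi)
apex = tuple(k * c for c in apex_dir)

# The five dodecahedron vertices of the face below this apex
face = sorted(dodeca, key=lambda v: dot(v, apex_dir), reverse=True)[:5]

# Long edges: sides of that pentagonal face; short edges: apex to the face vertices
long_edge = min(math.dist(p, q) for i, p in enumerate(face) for q in face[i + 1:])
short_edge = math.dist(apex, face[0])

apex_angle = math.degrees(math.acos(
    (2 * short_edge**2 - long_edge**2) / (2 * short_edge**2)))

print(f"long edge  = {long_edge:.6f}   (2/phi      = {2 / phi:.6f})")
print(f"edge ratio = {long_edge / short_edge:.6f}   ((5-phi)/3 = {(5 - phi) / 3:.6f})")
print(f"apex angle = {apex_angle:.4f} degrees  (about 68.6187)")
```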
Chemistry.
<br>The "pentakis dodecahedron" in a model of buckminsterfullerene: each (spherical) surface segment represents a carbon atom, and if all are replaced with planar faces, a pentakis dodecahedron is produced. Equivalently, a truncated icosahedron is a model of buckminsterfullerene, with each vertex representing a carbon atom.
Biology.
The "pentakis dodecahedron" is also a model of some icosahedrally symmetric viruses, such as Adeno-associated virus. These have 60 symmetry related capsid proteins, which combine to make the 60 symmetrical faces of a "pentakis dodecahedron".
Orthogonal projections.
The pentakis dodecahedron has three symmetry positions, two on vertices, and one on a midedge:
Concave pentakis dodecahedron.
A concave pentakis dodecahedron replaces the pentagonal faces of a dodecahedron with "inverted" pyramids.
Related polyhedra.
The faces of a regular dodecahedron may be replaced (or augmented with) any regular pentagonal pyramid to produce what is in general referred to as an elevated dodecahedron. For example, if pentagonal pyramids with equilateral triangles are used, the result is a non-convex deltahedron. Any such elevated dodecahedron has the same combinatorial structure as a pentakis dodecahedron, i.e., the same Schlegel diagram.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\phi"
},
{
"math_id": 1,
"text": "(0, \\pm 1, \\pm \\phi)"
},
{
"math_id": 2,
"text": "(\\pm 1, \\pm 1, \\pm 1)"
},
{
"math_id": 3,
"text": "(\\pm\\phi, \\pm 1/\\phi, 0)"
},
{
"math_id": 4,
"text": "(3\\phi+12)/19\\approx 0.887\\,057\\,998\\,22"
},
{
"math_id": 5,
"text": "2/\\phi"
},
{
"math_id": 6,
"text": "\\arccos((-8+9\\phi)/18)\\approx 68.618\\,720\\,931\\,19^{\\circ}"
},
{
"math_id": 7,
"text": "\\arccos((5-\\phi)/6)\\approx 55.690\\,639\\,534\\,41^{\\circ}"
},
{
"math_id": 8,
"text": "(5-\\phi)/3\\approx 1.127\\,322\\,003\\,75"
}
]
| https://en.wikipedia.org/wiki?curid=723196 |
72320451 | Harmonic tensors | Mathematical objects more general than vectors
In this article, spherical functions are replaced by polynomials that have been well known in electrostatics since the time of Maxwell and that are associated with multipole moments. In physics, dipole and quadrupole moments typically appear because fundamental physical concepts are associated precisely with them.
Dipole and quadrupole moments are:
formula_0,
formula_1,
where formula_2 is the density of charge (or of some other quantity).
The octupole moment
formula_3
is used rather seldom. As a rule, high-rank moments are calculated with the help of spherical functions.
Spherical functions are convenient in scattering problems, while polynomials are preferable in calculations with differential operators. The properties of the tensors considered here, including high-rank moments, largely repeat the features of solid spherical functions but have their own specifics.
Using invariant polynomial tensors in Cartesian coordinates, as shown in a number of recent studies, is preferable and simplifies the fundamental scheme of calculations; spherical coordinates are not involved here. The rules for using harmonic symmetric tensors, demonstrated below, follow directly from their properties. These rules are naturally reflected in the theory of special functions but are not always obvious, even though the group properties are general.
In any case, recall the main property of harmonic tensors: the trace over any pair of indices vanishes.
The properties selected here not only make analytic calculations more compact and reduce 'the number of factorials' but also allow some fundamental questions of theoretical physics to be formulated correctly.
General properties.
Four properties of the symmetric tensor formula_4 lead to its use in physics.
A. Tensor is homogeneous polynomial:
formula_5,
where formula_6 is the number of indices, i.e., tensor rank;
B. Tensor is symmetric with respect to indices;
C. Tensor is harmonic, i.e., it is a solution of the Laplace equation:
formula_7;
D. Trace over any two indices vanishes:
formula_8,
where the symbol formula_9 denotes the remaining formula_10 indices after setting formula_11.
The components of the tensor are solid spherical functions. The tensor can be divided by the factor formula_12 to obtain components in the form of spherical functions.
Multipole tensors in electrostatics.
The multipole potentials arise when the potential of a point charge is expanded in powers of the coordinates formula_13 of the radius vector formula_14 ('Maxwell poles'). For the potential
formula_15,
there is the well-known formula:
formula_16,
where the following notation is used. For the formula_17th tensor power of the radius vector
formula_18,
and for a symmetric harmonic tensor of rank formula_17,
formula_19.
The tensor is a homogeneous harmonic polynomial with the general properties described above. Contraction over any two indices (when the two corresponding gradients combine into the formula_20 operator) vanishes. If the tensor is divided by formula_21, then a "multipole harmonic tensor" arises,
formula_22,
which is also a homogeneous harmonic function with homogeneity degree formula_23.
From the formula for the potential it follows that
formula_24,
which allows a ladder operator to be constructed.
Theorem on power-law equivalent moments in electrostatics.
There is an obvious property of contraction,
formula_25,
which gives rise to a theorem that greatly simplifies the calculation of moments in theoretical physics.
"Theorem"
Let formula_26 be a distribution of charge. When calculating a multipole potential,
power-law moments can be used instead of harmonic tensors (or instead of spherical functions ):
formula_27.
This is an advantage in comparison with the use of spherical functions.
"Example 1."
For the quadrupole moment, instead of the integral
formula_28,
one can use the 'short' integral
formula_29.
The moments are different, but the potentials are equal to each other.
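This can be checked numerically for an arbitrary discrete charge distribution: taking the quadrupole tensor in its standard traceless form M^(2)_ik = 3x_i x_k − r²δ_ik, the 'short' power-law moment and the harmonic moment give the same quadrupole potential, because their difference is a pure trace term. The charges and the field point in the sketch below are random test data, not values from this article.

```python
import random

random.seed(1)

def M2(v):
    """Traceless quadrupole tensor M2_ik = 3 x_i x_k - r^2 delta_ik."""
    r2 = sum(c * c for c in v)
    return [[3 * v[i] * v[k] - (r2 if i == k else 0.0) for k in range(3)]
            for i in range(3)]

# Random discrete charge distribution: charges q_a at positions r_a (test data)
charges = [(random.uniform(-1, 1), [random.uniform(-1, 1) for _ in range(3)])
           for _ in range(20)]

# 'Short' power-law moment 3 * sum(q x_i x_k) and harmonic moment sum(q M2_ik(r0))
short_moment = [[3 * sum(q * r0[i] * r0[k] for q, r0 in charges) for k in range(3)]
                for i in range(3)]
harmonic_moment = [[0.0] * 3 for _ in range(3)]
for q, r0 in charges:
    m = M2(r0)
    for i in range(3):
        for k in range(3):
            harmonic_moment[i][k] += q * m[i][k]

# Quadrupole term of the potential at a field point r:
# contract the moment with M2_ik(r) / (3!! * 2! * r^5)
r = [5.0, -7.0, 11.0]
r5 = sum(c * c for c in r) ** 2.5
Mr = M2(r)

def quad_potential(Q):
    return sum(Q[i][k] * Mr[i][k] for i in range(3) for k in range(3)) / (3 * 2 * r5)

print(quad_potential(short_moment))      # the two printed values coincide
print(quad_potential(harmonic_moment))   # (up to rounding), although the moments differ
```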
Formula for a harmonic tensor.
A formula for the tensor can be obtained using a ladder operator or, alternatively, by using the Laplace operator; a similar approach is known in the theory of special functions. The first term in the formula, as is easy to see from the expansion of a point-charge potential, is equal to
formula_30.
The remaining terms can be obtained by repeatedly applying the Laplace operator and multiplying by an even power of the modulus formula_31. The coefficients are easy to determine by substituting the expansion into the Laplace equation. The resulting formula is:
formula_32
This form is useful for applying differential operators of quantum mechanics and electrostatics to it. The differentiation generates products of Kronecker symbols.
"Example 2"
formula_33,
formula_34,
formula_35.
The last equality can be verified by contracting with formula_36. It is convenient
to write the differentiation formula in terms of the symmetrization operation.
A symbol for it has been proposed, defined with the help of a sum taken over all independent
permutations of indices:
formula_37.
As a result, the following formula is obtained:
formula_38
where the symbol formula_39 is used for a tensor power of the Kronecker symbol formula_40 and the conventional symbol [..] is used for the two subscripts that are interchanged under symmetrization.
One can now find the relation between the tensor and solid spherical functions. Two unit vectors are needed: the vector formula_41 directed along the formula_42-axis and the complex vector formula_43.
Contraction with their powers gives the required relation
formula_44,
where formula_45 is a Legendre polynomial.
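A symbolic check of these properties for rank l = 3 is sketched below using sympy; the explicit component formula used, 15x_i x_k x_m − 3r²(δ_ik x_m + δ_km x_i + δ_im x_k), is the standard traceless form consistent with Example 2 above, and the last assertion verifies the Legendre-polynomial relation along the z-axis.

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
X = [x, y, z]
r2 = x**2 + y**2 + z**2
r = sp.sqrt(r2)

def delta(i, k):
    return 1 if i == k else 0

def M3(i, k, m):
    """Rank-3 harmonic tensor: 15 x_i x_k x_m - 3 r^2 (d_ik x_m + d_km x_i + d_im x_k)."""
    return (15 * X[i] * X[k] * X[m]
            - 3 * r2 * (delta(i, k) * X[m] + delta(k, m) * X[i] + delta(i, m) * X[k]))

# Harmonicity: every component satisfies the Laplace equation
assert all(sp.simplify(sum(sp.diff(M3(i, k, m), v, 2) for v in X)) == 0
           for i in range(3) for k in range(3) for m in range(3))

# Vanishing trace over any pair of indices
assert all(sp.simplify(sum(M3(j, j, m) for j in range(3))) == 0 for m in range(3))

# Contraction with the third power of the unit vector along z gives 3! r^3 P_3(z/r)
lhs = M3(2, 2, 2)
rhs = sp.factorial(3) * r**3 * sp.legendre(3, z / r)
assert sp.simplify(lhs - rhs) == 0

print("rank-3 tensor: harmonic, traceless, and equal to 3! r^3 P_3(z/r) along the z-axis")
```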
Special contractions.
In perturbation theory, it is necessary to expand the source in terms of spherical functions. If the source is a polynomial, for example when calculating the Stark effect, then the integrals are standard but cumbersome. When calculating with the help of invariant tensors, the expansion coefficients are simplified, and there is then no need for integrals. It suffices to calculate contractions that lower the rank of the tensors under consideration.
Instead of integrals, the operation of calculating the trace formula_46
of a tensor over two indices is used. The following rank reduction formula is useful:
formula_47,
where the symbol [m] denotes the remaining (l-2) indices.
If the brackets contain several factors with the Kronecker delta, the following relation
holds:
formula_48.
Calculating the trace reduces the number of the Kronecker symbols by one, and the rank of the harmonic tensor on the right-hand side of the equation decreases by two. Repeating the calculation of the trace k times eliminates all the Kronecker symbols:
formula_49.
Harmonic 4D tensors.
The Laplace equation in four-dimensional (4D) space has its own specifics: the potential of a point charge in 4D space is equal to formula_50.
From the expansion of the point-charge potential formula_51 in powers of formula_52, the 4D multipole potential arises:
formula_53.
The harmonic tensor in the numerator has a structure similar to that of the 3D harmonic tensor. Its contraction with respect to any two indices must vanish. The dipole and quadrupole 4D tensors, as follows from here, are expressed as
formula_54,
formula_55,
The leading term of the expansion, as can be seen, is equal to
formula_56
The method described for the 3D tensor gives the relations
formula_57
formula_58
Four-dimensional tensors are structurally simpler than 3D tensors.
Decomposition of polynomials in terms of harmonic functions.
Applying the contraction rules allows the tensor powers of a vector to be decomposed in terms of harmonic tensors.
In perturbation theory, even the third approximation is often considered good. Here, the decomposition of the tensor power up to rank l = 6 is presented:
formula_59, formula_60,
formula_61, formula_62,
formula_63, formula_64,
formula_65, formula_66 ,
formula_67, formula_68:formula_69.
To derive the formulas, it is useful to calculate the contraction with respect to two indices, i.e., the trace. The formula for formula_70 then implies the formula for formula_71. When applying the trace, it is convenient to use the rules of the previous section. In particular, the last term of the relations for even values of formula_6 has the form
formula_72.
Also useful is the frequently occurring contraction over all indices,
formula_73
which arises when normalizing the states.
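As a consistency check of the rank-4 line above, the sketch below verifies numerically, at a random point, the identity 105 x_i x_k x_l x_m = M^(4) + 5r²⟨⟨M^(2)δ⟩⟩ + 7r⁴⟨⟨δ⊗2⟩⟩, assuming the double brackets denote the plain sum over independent placements of the indices (six terms for a mixed product such as ⟨⟨M^(2)δ⟩⟩, three terms for ⟨⟨δ⊗2⟩⟩) and building M^(4) from the general formula of the previous sections; these normalization conventions are assumptions of the sketch.

```python
import random
from itertools import combinations

random.seed(0)
X = [random.uniform(-2, 2) for _ in range(3)]     # a random test point (assumed)
r2 = sum(c * c for c in X)

def d(i, k):
    return 1.0 if i == k else 0.0

def M2(i, k):
    return 3 * X[i] * X[k] - r2 * d(i, k)

def pair_partitions(idx):
    """The 3 ways to split four indices into two unordered pairs (for <<d d>>)."""
    a, b, c, e = idx
    return [((a, b), (c, e)), ((a, c), (b, e)), ((a, e), (b, c))]

def pair_choices(idx):
    """The 6 ways to choose which two of the four index slots carry the first factor."""
    out = []
    for p in combinations(range(4), 2):
        rest = tuple(q for q in range(4) if q not in p)
        out.append(((idx[p[0]], idx[p[1]]), (idx[rest[0]], idx[rest[1]])))
    return out

def M4(idx):
    """Rank-4 harmonic tensor: 7!! x^4 - 5!! r^2 <<d x x>> + 3!! r^4 <<d d>>."""
    i, k, l, m = idx
    t1 = 105 * X[i] * X[k] * X[l] * X[m]
    t2 = 15 * r2 * sum(d(*p) * X[q[0]] * X[q[1]] for p, q in pair_choices(idx))
    t3 = 3 * r2**2 * sum(d(*p) * d(*q) for p, q in pair_partitions(idx))
    return t1 - t2 + t3

# Check 105 x_i x_k x_l x_m = M4 + 5 r^2 <<M2 d>> + 7 r^4 <<d d>> for all index values
for idx in [(i, k, l, m) for i in range(3) for k in range(3)
            for l in range(3) for m in range(3)]:
    lhs = 105 * X[idx[0]] * X[idx[1]] * X[idx[2]] * X[idx[3]]
    rhs = (M4(idx)
           + 5 * r2 * sum(M2(*p) * d(*q) for p, q in pair_choices(idx))
           + 7 * r2**2 * sum(d(*p) * d(*q) for p, q in pair_partitions(idx)))
    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))

print("rank-4 decomposition verified at a random point")
```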
Decomposition of polynomials in 4D space.
The decomposition of tensor powers of a vector is also compact in four dimensions:
formula_74, formula_75,
formula_76, formula_77,
formula_78, formula_79,
formula_80, formula_81 ,
formula_82, formula_83:formula_84.
When using the tensor notation with indices suppressed, the last equality becomes
formula_82, formula_85.
The decomposition of higher powers is no more difficult, using contractions over two indices.
Ladder operator.
Ladder operators are useful for representing eigenfunctions in a compact form.
They are a basis for constructing coherent states. The operators considered here are, in many respects, close to the 'creation' and 'annihilation' operators of an oscillator.
Efimov's operator formula_86, which increases the rank by one, can be obtained from the expansion of the point-charge potential:
formula_87.
Straightforward differentiation on the left-hand side of the equation yields a vector operator acting on a harmonic tensor:
formula_88,
formula_89
where the operator
formula_90
multiplies a homogeneous polynomial by its degree of homogeneity formula_91.
In particular,
formula_92,
formula_93.
As a result of an formula_17-fold application to unity, the harmonic tensor arises:
formula_94,
written here in different forms.
The relation of this tensor to the angular momentum operator formula_95 formula_96 is as follows:
formula_97.
Some useful properties of the operator in vector form are given below. The scalar product
formula_98
yields a vanishing trace over any two indices. The scalar product of vectors formula_99
and formula_100 is
formula_101,
formula_102,
and, hence, the contraction of the tensor with the vector formula_103 can be expressed as
formula_104,
where formula_105 is a number.
The commutator in the scalar product on the sphere is equal to unity:
formula_106.
To calculate the divergence of a tensor, a useful formula is
formula_107,
whence
formula_108
(formula_105 on the right-hand side is a number).
Four-dimensional ladder operator.
The raising operator in 4D space
formula_109
has largely similar properties. The main formula for it is
formula_110
where formula_111 is a 4D vector, formula_112,
formula_113,
and the formula_114 operator multiplies a homogeneous polynomial by its degree. Separating the formula_115 variable is convenient for physical problems:
formula_116.
In particular,
formula_117,
formula_118.
The scalar product of the ladder operator formula_119 and formula_120 is as simple as in 3D space:
formula_121.
The scalar product of formula_119 and formula_122 is
formula_123.
The ladder operator is now associated with the angular momentum operator and an additional operator of rotations in 4D space, formula_124. Together they form a Lie algebra, as the angular momentum and Laplace–Runge–Lenz operators do.
Operator formula_124 has the simple form
formula_125.
Separately for the 3D formula_126-component and the fourth coordinate formula_127
of the raising operator, the formulas are
formula_128,
formula_129. | [
{
"math_id": 0,
"text": " \\int x_i\\rho (\\mathbf{x})dV "
},
{
"math_id": 1,
"text": " \\int (3x_i x_k-\\delta_{ik})\\rho (\\mathbf{x})dV "
},
{
"math_id": 2,
"text": " \\rho (\\mathbf{x}) "
},
{
"math_id": 3,
"text": " \\int 3(5x_i x_k x_l- x_l\\delta_{ik}-x_k\\delta_{il}-x_i\\delta_{lk})\\rho (\\mathbf{x})dV "
},
{
"math_id": 4,
"text": "\\mathbf{M}_{i...k} "
},
{
"math_id": 5,
"text": "\\mathbf{M}_{i...k}(k\\mathbf{x})=k^l\\mathbf{M}_{i...k}(\\mathbf{x}) "
},
{
"math_id": 6,
"text": " l "
},
{
"math_id": 7,
"text": "\\Delta \\mathbf{M}_{i...k}(\\mathbf{ x })=0 "
},
{
"math_id": 8,
"text": "\\mathbf{M}_{ii[...]}( \\mathbf{x } )=0 "
},
{
"math_id": 9,
"text": "[...] "
},
{
"math_id": 10,
"text": "(l-2) "
},
{
"math_id": 11,
"text": "i=i "
},
{
"math_id": 12,
"text": "r^l "
},
{
"math_id": 13,
"text": "x_{oi } "
},
{
"math_id": 14,
"text": "\\mathbf{r }_{o} "
},
{
"math_id": 15,
"text": "\\frac{1 }{\\left|\\mathbf{r}-\\mathbf{r}_{o }\\right| }"
},
{
"math_id": 16,
"text": "\\frac{1 }{\\left|\\mathbf{r}-\\mathbf{r}_{o }\\right| }=\n\\sum_l(-1)^l\\frac{(\\mathbf{r}_0\\nabla )^l }{ l!}\\frac{ 1}{ r }=\n\\sum_l\\frac { x_{0i }...x_{0k} }{l!r^{ 2l+1 } } \\mathbf{M}_{i...k }^{(l)} (\\mathbf{r} ) = \n\\sum_l\\frac{ \\mathbf{r}_0^{\\otimes l} \\mathbf{M}^{(l)}_{[i]} }{l! r^{2l+1 } } "
},
{
"math_id": 17,
"text": " l"
},
{
"math_id": 18,
"text": "x_{0i}...x_ {0k}=\\mathbf{r}_0^{\\otimes l} "
},
{
"math_id": 19,
"text": " \\mathbf{M}^{( l ) }_{i...k }(\\mathbf{r } )= \\mathbf{M}^{( l ) }_{[i] }(\\mathbf{r } )"
},
{
"math_id": 20,
"text": "\\Delta "
},
{
"math_id": 21,
"text": "r^{2l+1 } "
},
{
"math_id": 22,
"text": " \\frac{\\mathbf{M}^{( l ) }_{i...k }(\\mathbf{r })}{r^{2l+1 } } =\\frac{ \\mathbf{M}^{( l ) }_{[i] }(\\mathbf{r }) }{r^{2l+1 } }"
},
{
"math_id": 23,
"text": "-(l+1) "
},
{
"math_id": 24,
"text": " \\frac{\\mathbf{M}^{( l+1 ) }_{im...k }(\\mathbf{r })}{r^{2l+3 } } = -\\nabla _i \\frac{\\mathbf{M}^ {(l)} _{m...k }(\\mathbf{r })}{r^{2l+1}} "
},
{
"math_id": 25,
"text": " (2l-1)!! ( \\mathbf{ r}_0^{\\otimes l } ,\\mathbf{M }^{( l )}_{[i] }(\\mathbf{r })) =\n( \\mathbf{M }^{( l )}_{[i] }(\\mathbf{r }_0) , \\mathbf{M }^{( l )}_{[i] }(\\mathbf{r } ) ) "
},
{
"math_id": 26,
"text": " \\rho (x) "
},
{
"math_id": 27,
"text": " (\\int\\rho(\\mathbf{r}_0 ) \\mathbf{ r}_0^{\\otimes l }dV_{\\mathbf{r}_0 } ,\\frac{\\mathbf{M }^{( l )}_{[i] }(\\mathbf{r })}{l!r^{ 2l+1} }) =\n( \\int \\rho (\\mathbf{r}_0 )\\mathbf{M }^{( l )}_{[i] }(\\mathbf{r }_0)dV_{\\mathbf{r}_0 },\\frac{ \\mathbf{M }^{( l )}_{[i] }(\\mathbf{r } )}{(2l-1)!!l!r^{2l+1} } ) "
},
{
"math_id": 28,
"text": "\\int \\rho(\\mathbf{r}) (3x_i x_k-r^2 \\delta_{ik})dV "
},
{
"math_id": 29,
"text": "3\\int \\rho(\\mathbf{r}) x_i x_k dV "
},
{
"math_id": 30,
"text": "\\mathbf{M}^ {(l ) }_{[i] } ( \\mathbf{ r } ) = (2l-1)!!x_{i_1}...x_{i_l }+...=(2l-1)!!\\mathbf{r}^{\\otimes l }+... "
},
{
"math_id": 31,
"text": "r "
},
{
"math_id": 32,
"text": "\\mathbf{M}^ {(l ) }_{[i] } ( \\mathbf{ r } ) = (2l-1)!!\\mathbf{r}^ {\\otimes l }-\n \\frac{(2l-3)!!}{1!2^1 }r^2\\Delta\\mathbf{r}^ {\\otimes l }+ \\frac{(2l-5)!!}{2!2^2 }r^4\\Delta^2\\mathbf{r}^ {\\otimes l }- \\frac{(2l-7)!!}{3!2^3 }r^6\\Delta^3\\mathbf{r}^ {\\otimes l }+... "
},
{
"math_id": 33,
"text": " \\Delta x_i x_k=2\\delta_{ik } "
},
{
"math_id": 34,
"text": " \\Delta x_i x_k x_m=2(\\delta_{ik }x_m+ \\delta_{km}x_i+\\delta_{im }x_k) "
},
{
"math_id": 35,
"text": " \\Delta\\Delta x_i x_k x_m x_n =8(\\delta_{ik }\\delta_{mn }+ \\delta_{km}\\delta_{in }+\\delta_{im }\\delta_{kn }) "
},
{
"math_id": 36,
"text": "i=k "
},
{
"math_id": 37,
"text": " \\frac{\\Delta^k\\mathbf{r}^{\\otimes l }}{k!2^k }=\\left\\langle \\left\\langle \\delta_{[..]}^{\\otimes (l-2k)} \\mathbf{ r}^{\\otimes (l-2k) } \\right\\rangle \\right\\rangle "
},
{
"math_id": 38,
"text": " \\mathbf{M}^{(l)}_{ [i]}(\\mathbf{r})= (2l-1)!!\\mathbf{ r}^{\\otimes l}- (2l-3)!!r^2 \\left\\langle \\left\\langle \\delta_{[..]}^{\\otimes 1} \\mathbf{ r}^{\\otimes (l-2) } \\right\\rangle \\right\\rangle +\n(2l-5)!!r^4\\left\\langle \\left\\langle \\delta_{[..]}^{\\otimes 2} \\mathbf{ r}^{\\otimes (l-4) } \\right\\rangle \\right\\rangle-... "
},
{
"math_id": 39,
"text": "\\otimes k "
},
{
"math_id": 40,
"text": "\\delta_{im}"
},
{
"math_id": 41,
"text": " \\mathbf{ n}_z"
},
{
"math_id": 42,
"text": "z"
},
{
"math_id": 43,
"text": " \\mathbf{ n}_x \\pm i\\mathbf{n}_y=\\mathbf{n}_{\\pm} "
},
{
"math_id": 44,
"text": " (\\mathbf{M}^{(l)}_{ [i]} (\\mathbf{r}),\\mathbf{n}^{\\otimes (l-m)}_z \\mathbf{n}^{\\otimes m}_{\\pm } ) = (l-m)!(x+iy)^m r^{(l-m)}\\frac{d^m }{dt^m}P_l(t) \\mid_{t=\\frac{z }{ r }} "
},
{
"math_id": 45,
"text": "P_l(t) "
},
{
"math_id": 46,
"text": "\\hat{ T }r "
},
{
"math_id": 47,
"text": "\\hat{T}r \\left\\langle\\left\\langle \\delta_{ik} \\mathbf{M }^{(l)}_{ik[m] } \\right\\rangle\\right\\rangle = (2l+3)\\mathbf{M}^{(l-2)}_{[m]} "
},
{
"math_id": 48,
"text": "\\hat{T}r \\left\\langle\\left\\langle \\delta_{i_1 p_1}...\\delta_{i_k p_k } \\mathbf{M }^{(l)}_{i_1p_1[m] } \\right\\rangle\\right\\rangle = (2l+2k+1)\\left\\langle\\left\\langle \\delta_{i_2 p_2 }...\\delta_{i_kp_k } \\mathbf{M}^{(l-2)}_{[m]}\\right\\rangle\\right\\rangle "
},
{
"math_id": 49,
"text": "\\hat{T}r_1...\\hat{T}r_k \\left\\langle\\left\\langle \\delta_{i_1 p_1}...\\delta_{i_k p_k } \\mathbf{M }^{(l)}_{[m] } \\right\\rangle\\right\\rangle = (2l+2k+1)!!\\left\\langle\\left\\langle \\mathbf{M}^{(l-2k)}_{[m]}\\right\\rangle\\right\\rangle "
},
{
"math_id": 50,
"text": " \\frac{ 1 } { r^2} "
},
{
"math_id": 51,
"text": " \\frac{ 1 } { {(\\mathbf {r}-\\mathbf{r}_0)}^2} "
},
{
"math_id": 52,
"text": " \\mathbf{r}_0^{\\otimes n } "
},
{
"math_id": 53,
"text": "\\frac{\\mathfrak{ M}^{(n)}_{i...k } }{ r^{ 2n+2} } = { (-1) }^n \\nabla_i...\\nabla_k\\frac{1}{ r^2} "
},
{
"math_id": 54,
"text": "\\mathfrak{ M}^{(1)}_{i }(\\mathbf{r } ) = 2x_i "
},
{
"math_id": 55,
"text": "\\mathfrak{ M}^{(2)}_{ik }(\\mathbf{r } ) = 2(4x_i x_k-\\delta_{ ik }) "
},
{
"math_id": 56,
"text": "\\mathfrak{ M}^{(n)}_{[i] }(\\mathbf{r } ) =(2n)!!x_{ i_1 } ...x_ {i_n } -...=(2n)!!\\mathbf{ r}^{\\otimes n }-... "
},
{
"math_id": 57,
"text": "\\mathfrak{M}^ {(n ) }_{[i] } ( \\mathbf{ r } ) = (2n)!!\\mathbf{r}^ {\\otimes n }-\n \\frac{(2n-2)!!}{1!2^1 }r^2\\Delta\\mathbf{r}^ {\\otimes n }+ \\frac{(2n-4)!!}{2!2^2 }r^4\\Delta^2\\mathbf{r}^ {\\otimes n }- \\frac{(2n-6)!!}{3!2^3 }r^6\\Delta^3\\mathbf{r}^ {\\otimes n }+... "
},
{
"math_id": 58,
"text": " \\mathfrak{M}^{(l)}_{ [i]}(\\mathbf{r})= (2n)!!\\mathbf{ r}^{\\otimes n}- (2n-2)!!r^2 \\left\\langle \\left\\langle \\delta_{[..]}^{\\otimes 1} \\mathbf{ r}^{\\otimes (n-2) } \\right\\rangle \\right\\rangle +\n(2n-4)!!r^4\\left\\langle \\left\\langle \\delta_{[..]}^{\\otimes 2} \\mathbf{ r}^{\\otimes (n-4) } \\right\\rangle \\right\\rangle-... "
},
{
"math_id": 59,
"text": "\\bullet\\ l=2 "
},
{
"math_id": 60,
"text": "\\qquad 3x_ix_k=\\mathbf{M }^{(2)}_{ik} + r^2\\delta_{ik } "
},
{
"math_id": 61,
"text": "\\bullet\\ l=3 "
},
{
"math_id": 62,
"text": "\\qquad 5!!x_ix_kx_m=\\mathbf{M }^{(3)}_{ikm} + 3r^2\\left\\langle\\left\\langle\\delta_{ik}x_m \\right\\rangle\\right\\rangle "
},
{
"math_id": 63,
"text": "\\bullet\\ l=4 "
},
{
"math_id": 64,
"text": "\\qquad 7!!x_ix_kx_lx_m=\\mathbf{M }^{(4)}_{iklm} + 5r^2\\left\\langle\\left\\langle\\mathbf{M}^{(2)}_{ik } \\delta_{lm} \\right\\rangle\\right\\rangle +\n7r^4\\left\\langle\\left\\langle\\delta_{ik } \\delta_{lm} \\right\\rangle\\right\\rangle"
},
{
"math_id": 65,
"text": "\\bullet\\ l=5 "
},
{
"math_id": 66,
"text": "\\qquad 9!!x_ix_kx_lx_mx_n=\\mathbf{M }^{(5)}_{iklmn} + 7r^2\\left\\langle\\left\\langle\\mathbf{M}^{(3)}_{ikl } \\delta_{mn} \\right\\rangle\\right\\rangle +\n 27r^4\\left\\langle\\left\\langle\\delta_{ik } \\delta_{lm}x_n \\right\\rangle\\right\\rangle"
},
{
"math_id": 67,
"text": "\\bullet\\ l=6 "
},
{
"math_id": 68,
"text": "\\qquad 11!!x_ix_kx_lx_mx_nx_p="
},
{
"math_id": 69,
"text": "\\qquad\\qquad\\qquad=\\mathbf{M }^{(6)}_{iklmnp} + 9r^2\\left\\langle\\left\\langle\\mathbf{M}^{(4)}_{iklm } \\delta_{np} \\right\\rangle\\right\\rangle +\n 55r^4\\left\\langle\\left\\langle\\mathbf{M }^{(2)}_{ik}\\delta_{lm } \\delta_{np} \\right\\rangle\\right\\rangle \n + 99r^6\\left\\langle\\left\\langle\\delta_{ik } \\delta_{lm}\\delta_{np } \\right\\rangle\\right\\rangle "
},
{
"math_id": 70,
"text": " l=6 "
},
{
"math_id": 71,
"text": " l=4 "
},
{
"math_id": 72,
"text": " \\frac{2l-1)!!}{(l+1)!!} "
},
{
"math_id": 73,
"text": " (\\mathbf{ M }^{(l ) }_{[i] },\\mathbf{ M }^{(l ) }_{[i] })= (\\mathbf{ M }^{(l ) }_{i...k }(\\mathbf{x } ),\\mathbf{ M }^{(l ) }_{i...k }(\\mathbf{x } ))= \\frac{(2l)! }{ 2^l } r^{ 2l } "
},
{
"math_id": 74,
"text": "\\bullet\\ n=2 "
},
{
"math_id": 75,
"text": "\\qquad 4!!x_ix_k=\\mathfrak{M }^{(2)}_{ik} + 2r^2\\delta_{ik } "
},
{
"math_id": 76,
"text": "\\bullet\\ n=3 "
},
{
"math_id": 77,
"text": "\\qquad 6!!x_ix_kx_m=\\mathfrak {M }^{(3)}_{ikm} + 8r^2\\left\\langle\\left\\langle\\delta_{ik}x_m \\right\\rangle\\right\\rangle "
},
{
"math_id": 78,
"text": "\\bullet\\ n=4 "
},
{
"math_id": 79,
"text": "\\qquad 8!!x_ix_kx_lx_m=\\mathfrak{M }^{(4)}_{iklm} + 6r^2\\left\\langle\\left\\langle\\mathfrak{M}^{(2)}_{ik } \\delta_{lm} \\right\\rangle\\right\\rangle +\n16r^4\\left\\langle\\left\\langle\\delta_{ik } \\delta_{lm} \\right\\rangle\\right\\rangle"
},
{
"math_id": 80,
"text": "\\bullet\\ n=5 "
},
{
"math_id": 81,
"text": "\\qquad 10!!x_ix_kx_lx_mx_n=\\mathfrak{M }^{(5)}_{iklmn} + 8r^2\\left\\langle\\left\\langle\\mathfrak{M}^{(3)}_{ikl } \\delta_{mn} \\right\\rangle\\right\\rangle +\n 80r^4\\left\\langle\\left\\langle\\delta_{ik } \\delta_{lm}x_n \\right\\rangle\\right\\rangle"
},
{
"math_id": 82,
"text": "\\bullet\\ n=6 "
},
{
"math_id": 83,
"text": "\\qquad 12!!x_ix_kx_lx_mx_nx_p="
},
{
"math_id": 84,
"text": "\\qquad\\qquad\\qquad=\\mathfrak{M }^{(6)}_{iklmnp} + 10r^2\\left\\langle\\left\\langle\\mathfrak {M}^{(4)}_{iklm } \\delta_{np} \\right\\rangle\\right\\rangle +\n 72r^4\\left\\langle\\left\\langle\\mathfrak{M }^{(2)}_{ik}\\delta_{lm } \\delta_{np} \\right\\rangle\\right\\rangle \n + 240r^6\\left\\langle\\left\\langle\\delta_{ik } \\delta_{lm}\\delta_{np } \\right\\rangle\\right\\rangle "
},
{
"math_id": 85,
"text": "\\qquad 12!!\\mathbf{x}^{\\otimes(6 ) }=\\mathfrak{M }^{(6)}_{[i]} + 10r^2\\left\\langle\\left\\langle\\mathfrak \\mathfrak{M}^{(4)}_{[i] } \\delta_{[..]} \\right\\rangle\\right\\rangle +\n 72r^4\\left\\langle\\left\\langle\\mathfrak{M }^{(2)}_{[i]}\\delta_{[..] }^{\\otimes 2} \\right\\rangle\\right\\rangle \n + 240r^6\\left\\langle\\left\\langle\\delta_{[..] }^{\\otimes 3 } \\right\\rangle\\right\\rangle "
},
{
"math_id": 86,
"text": " \\mathbf {\\hat D } "
},
{
"math_id": 87,
"text": " \\mathbf{ \\nabla}_i\\frac {\\mathbf{ M}^{(l )}_{k...m }(\\mathbf{r} ) } { r^{ 2l+1 } } =- \\frac {\\mathbf{ M}^{(l+1)}_{ik...m }(\\mathbf{r} ) } { r^{ 2l+3 } }"
},
{
"math_id": 88,
"text": " \\hat D_i M^{ (l ) }_{k...m }(\\mathbf r)= M^{ (l+1 ) }_{ik...m }(\\mathbf r) "
},
{
"math_id": 89,
"text": "\\mathbf{ \\hat D} =(2\\hat l-1)\\mathbf{ r }-r^2 \\mathbf{\\nabla}, "
},
{
"math_id": 90,
"text": "\\hat l= (\\mathbf{r}\\mathbf{\\nabla} )"
},
{
"math_id": 91,
"text": "l"
},
{
"math_id": 92,
"text": " \\hat D_i x_k=3x_ix_k-r^2\\delta{ik} "
},
{
"math_id": 93,
"text": " \\hat D_i \\hat D_k\\hat D_m 1 =3[5x_ix_kx_m-r^2(\\delta_{ik}x_m +\\delta_{km}x_i +\\delta_{im}x_k)]"
},
{
"math_id": 94,
"text": " \\hat D_i\\hat D_k...\\hat D_m \\mathbf{1}= M^{(l)} _{ik...m } =\\mathbf{D}^{ \\otimes l }_{[i] }=\n\\mathbf{M}^{( l )}_{[i]} "
},
{
"math_id": 95,
"text": "\\hat\\mathbf{L}"
},
{
"math_id": 96,
"text": "(\\hbar=1 ) "
},
{
"math_id": 97,
"text": "\\mathbf{\\hat{D} }=\\hat{ l}\\mathbf{r}+i[\\mathbf{r }\\times \\mathbf{\\hat{ L} } ] "
},
{
"math_id": 98,
"text": "\\hat D_i \\hat D_i=r^2 \\Delta "
},
{
"math_id": 99,
"text": "\\hat\\mathbf{ D} "
},
{
"math_id": 100,
"text": "\\mathbf{ x } "
},
{
"math_id": 101,
"text": "\\mathbf{x }\\hat\\mathbf{D } =r^2(\\hat l+1 ) "
},
{
"math_id": 102,
"text": "\\hat\\mathbf{D }\\mathbf{x } =r^2 \\hat l "
},
{
"math_id": 103,
"text": "\\mathbf{ x} "
},
{
"math_id": 104,
"text": "x_iM^{(l)}_{ik...m }=2lr^2 M^{(l-1)}_{k...m } "
},
{
"math_id": 105,
"text": "l "
},
{
"math_id": 106,
"text": "\\mathbf {x} \\mathbf{D} - \\mathbf{D}\\mathbf {x} =r^2 "
},
{
"math_id": 107,
"text": " \\mathbf{\\nabla}\\hat \\mathbf{D} = (\\hat l+1 )(2 \\hat l+3 )"
},
{
"math_id": 108,
"text": "\\nabla_i \\hat M^{(l)}_{ik...m }=l (2l+1 )M^{(l-1)}_{(k...m )} "
},
{
"math_id": 109,
"text": " \\hat \\mathfrak{D}_i \\mathfrak {M}^{(n)}_{k...m }(\\mathbf{ r },\\tau)= \\mathfrak {M}^{(n+1)}_{ik...m }(\\mathbf{ r },\\tau) =\\mathfrak {M}^{(n+1)}_{ik...m }(\\mathbf{y })"
},
{
"math_id": 110,
"text": "\\hat \\mathfrak{D }=2\\hat n\\mathbf{ y }-\\left |\\mathbf {y } \\right |^2 \\mathbf{\\nabla}_{\\mathbf{y}}, "
},
{
"math_id": 111,
"text": " y_i "
},
{
"math_id": 112,
"text": " i=1,2,3,4 "
},
{
"math_id": 113,
"text": " \\mathbf{y}=(\\mathbf{r },\\tau ),\\quad \\left|\\mathbf{ y}\\right|^2=\\rho^2_{\\mathbf{y}}=\\mathbf{r}^2+\\tau^2 "
},
{
"math_id": 114,
"text": " \\hat n"
},
{
"math_id": 115,
"text": " \\tau"
},
{
"math_id": 116,
"text": "\\hat n=(\\mathbf{r}\\mathbf{\\nabla}_\\mathbf{ r } +\\tau\\frac{\\partial } {\\partial \\tau})=\\mathbf{ y }\\mathbf{\\nabla}_{\\mathbf{y}} "
},
{
"math_id": 117,
"text": "\\hat \\mathfrak{D}_i\\mathbf 1=2y_i,\\quad\\hat \\mathfrak{D}_iy_k=2(4y_iy_k -\\rho_{\\mathbf{y}}^2 \\delta_{ ik } )"
},
{
"math_id": 118,
"text": " \\hat \\mathfrak{D}_i \\hat \\mathfrak{D}_k\\hat \\mathfrak{D}_m=4!![6x_ix_kx_m-\\rho^2_{\\mathbf{y}} (\\delta_{ik}x_m+ \\delta_{km}x_i+ \\delta_{im}x_k) ] "
},
{
"math_id": 119,
"text": "\\hat \\mathfrak{D} "
},
{
"math_id": 120,
"text": "y"
},
{
"math_id": 121,
"text": "\\mathbf{y} \\hat \\mathfrak{D}=\\hat n_{\\mathbf{y}}\\rho^2_{\\mathbf{y}},\\quad \\hat \\mathfrak{D}\\mathbf{y}=(\\hat n-2)\\rho^2_{\\mathbf{y}} "
},
{
"math_id": 122,
"text": " \\mathbf{ \\nabla } "
},
{
"math_id": 123,
"text": "\\mathbf{\\nabla} \\hat \\mathfrak{D}=2(\\hat n+2)^2"
},
{
"math_id": 124,
"text": " \\hat \\mathbf{A } "
},
{
"math_id": 125,
"text": " \\mathbf{\\hat A } = i(\\tau \\mathbf{\\nabla}_{\\mathbf{r } }-\\mathbf{ r } \\frac{\\partial}{\\partial\\tau } ) "
},
{
"math_id": 126,
"text": "\\mathbf{ r } "
},
{
"math_id": 127,
"text": " \\tau "
},
{
"math_id": 128,
"text": "\\hat\\mathfrak D_{\\mathbf{r}} = (\\hat n+1 )\\mathbf{r}+i[\\mathbf{r}\\times\\hat\\mathbf{L}]+i\\tau \\hat \\mathbf{A } "
},
{
"math_id": 129,
"text": "\\hat\\mathfrak D_{\\tau} = (\\hat n+1 )\\tau+i(\\mathbf{r}\\hat\\mathbf{A}) "
}
]
| https://en.wikipedia.org/wiki?curid=72320451 |
72321172 | Tetrahalodiboranes | Class of diboron compounds
Tetrahalodiboranes are a class of diboron compounds with the formula <chem>B2X4</chem> (X = F, Cl, Br, I). These compounds were first discovered in the 1920s, but, after some interest in the middle of the 20th century, were largely ignored in research. Compared to other diboron compounds, tetrahalodiboranes are fairly unstable and historically have been difficult to prepare; thus, their use in synthetic chemistry is largely unexplored, and research on tetrahalodiboranes has stemmed from fundamental interest in their reactivity. Recently, there has been a resurgence in interest in tetrahalodiboranes, particularly in diboron tetrafluoride as a reagent to promote doping of silicon with <chem>B+</chem> for use in semiconductor devices.
Structure.
Because the perpendicular and planar geometries of tetrahalodiboranes are generally very close in energy, the energetic difference between these two structures has been the most investigated aspect of the geometry of these molecules. As it turns out, the difference depends on the identity of the halide in the compound. <chem>B2F4</chem> adopts a planar geometry (formula_0 symmetry) both as a solid and in the gas phase. The barrier to rotation, however, is small (only 0.42 kcal/mol). <chem>B2Cl4
</chem>, however, adopts a planar geometry when crystallized but favors the perpendicular geometry (formula_1 symmetry) in the gas phase. Computations of the relative stability of the two conformers indicate that the formula_1 geometry is ~2 kcal/mol lower in energy; the planar geometry in the solid phase is thought to be due to packing effects. Continuing this trend, computational modeling and experimental results agree that <chem>B2Br4</chem> and <chem>B2I4</chem> favor the perpendicular formula_1 geometry.
Synthesis.
The first synthesis of a tetrahalodiborane was reported by Stock et al. in 1925, where the authors reduced <chem>BCl3</chem> to form <chem>B2Cl4</chem> by running a current between zinc electrodes immersed in liquid <chem>BCl3</chem>. Later work explored gas-phase syntheses of <chem>B2Cl4</chem> using gaseous <chem>BCl3</chem> and mercury electrodes. Early characterization of <chem>B2Cl4</chem> reported it to be a colorless, pyrophoric liquid that decomposes at temperatures above 0 °C. <chem>B2F4</chem> was not synthesized until 1958, when Finch and Schlesinger reported the successful reaction of <chem>B2Cl4</chem> with antimony trifluoride to form <chem>B2F4</chem>. Unlike <chem>B2Cl4</chem>, <chem>B2F4</chem> is stable at room temperature. The heavier tetrahalodiboranes, <chem>B2Br4</chem> and <chem>B2I4</chem>, were first reported in 1949 by Schlesinger et al. and Schumb, respectively. <chem>B2Br4</chem> was first accessed by reacting <chem>B2Cl4</chem> and <chem>BBr3</chem>. <chem>B2I4</chem> was first synthesized using electrodeless radiofrequency discharge to reduce <chem>BI3
</chem>. <chem>B2Br4</chem> is stable at temperatures below -40 °C, while <chem>B2I4</chem> is stable below 0 °C. <chem>B2I4</chem> also degrades when exposed to light. Decomposition of <chem>B2I4</chem> at elevated temperatures yields <chem>BI3</chem> and a black solid found to be a mixture of and .
The initial interest in tetrahalodiboranes was largely fundamental, and more applied consideration of tetrahalodiboranes was largely limited by the difficulty of synthesis and the low stability of isolated compounds. Recent improvements in the synthesis of tetrahalodiboranes have yielded more convenient solution-phase syntheses of <chem>B2F4</chem>, <chem>B2Cl4</chem>, <chem>B2Br4</chem>, and <chem>B2I4</chem>. The solution-phase synthesis of <chem>B2Br4</chem> first reported by Noth et al. in 1981 has not been improved upon. To form <chem>B2Br4</chem> in solution, <chem>B2(OMe)4</chem> is treated with <chem>BBr3
</chem>. Other tetrahalodiboranes can be accessed from <chem>B2Br4</chem> in the solution phase by reacting with <chem>SbF3</chem>, <chem>GaCl3
</chem>, or <chem>BI3</chem> to form <chem>B2F4</chem>, <chem>B2Cl4</chem>, or <chem>B2I4</chem>, respectively. These improvements in synthetic methods have opened the door for exploring potential applications of tetrahalodiboranes; while this interest has been fairly limited thus far, researchers have begun to explore the use of tetrahalodiboranes as synthetic building blocks and in industrial applications. Notably, recent publications on tetrahalodiboranes have appeared largely in the patent literature, discussing the use of <chem>B2F4</chem> to replace <chem>BF3</chem> as a feed chemical to dope semiconductors with boron ions.
Reactivity.
Lewis base adduct formation.
The boron atoms in tetrahalodiboranes are highly Lewis acidic and readily form adducts with neutral Lewis bases. Though the formation of these complexes is usually energetically favorable, early attempts to form these Lewis acid-base complexes were hindered by the lability of the halogen substituents; prior to the 1992 publication of three additional phosphane-tetrahalodiborane(4) adducts, only three Lewis acid-Lewis base adducts had been reported. Other work has described many more Lewis acid-Lewis base adducts that form readily, but also outlines how stability challenges with tetrahalodiboranes persist even in stabilized complexes. Examples of these unstable tetrahalodiborane-Lewis base adducts include the bis-diethyl ether adduct formed with <chem>B2Cl4</chem> or <chem>B2F4</chem>, the bis-adduct of <chem>B2Cl4</chem> and either <chem>SH2
</chem> or <chem>PH3</chem>, and adducts formed by <chem>B2Cl4</chem> or <chem>B2F4</chem> and weak phosphine donors such as <chem>PCl3</chem> or <chem>PBr3
</chem>.
There are, however, some adducts that are stable beyond room temperature. <chem>B2Cl4</chem> and <chem>B2F4</chem> both form stable mono- and bis-adducts with aprotic nitrogen donors. The first of these stable Lewis base-tetrahalodiborane adducts was published in 2012 by Braunschweig et al., showing that <chem>B2Br4(IDip)2</chem> (where <chem>IDip</chem> = 1,3-bis(2,6-diisopropylphenyl)-imidazole-2-ylidene) is stable at ambient temperature. <chem>B2Br4(IDip)2</chem> could then be reduced to form <chem>B2Br2(IDip)2</chem>, a stable diborene, and could ultimately be reduced further to form <chem>B2(IDip)2</chem>, the first isolable diboryne. Since this discovery, the Braunschweig group has published a number of other stable tetrahalodiborane adducts, including some monoadducts and some asymmetrical bis-adducts. These adducts are typically characterized using <chem>^{11}B</chem> NMR.
Reaction with transition metals.
There has been some investigation of the reactivity of tetrahalodiboranes with transition metals. Norman et al. reported reactivity of <chem>B2F4</chem> with <chem>PtL2</chem> to form <chem>cis-[Pt(BF2)2L2]</chem> where . Because the boron-halide bonds in are substantially less reactive than in heavier tetrahalodiboranes, it is unsurprising that reactivity with Pt occurs at the B-B bond.
Because it was expected that heavier tetrahalodiboranes might have more reactive boron-halide bonds, the reactivity of <chem>B2I4</chem> with electron rich <chem>Pt(PCy3)2</chem> was explored. The greater lability of the <chem>B-I
</chem> bond relative to the <chem>B-F
</chem> bond in <chem>B2F4</chem> allowed for the formation of a diplatinum complex with boryl ligands and a bridging <chem>[B2I4]</chem>.
Reaction with boriranylideneboranes.
In 2001, Seibert et al. showed that three boriranylideneboranes first synthesized by Berndt et al. in the 1980s could be reacted with tetrahalodiboranes to yield interesting boron-containing compounds shown in the scheme below. In the first reaction, both the chlorine- and fluorine-containing compounds were synthesized in good yields, but the fluorine compound was noticeably less stable. All compounds shown below were characterized by 11B, 1H and 13C NMR.
Addition to unsaturated hydrocarbons.
Tetrahalodiboranes can add to unsaturated hydrocarbons. Schlesinger et al. published 1,2-additions of <chem>B2Cl4</chem> to ethylene and acetylene. Later work explored the reactivity of <chem>B2Cl4</chem> with other alkenes, alkynes and dienes and showed that <chem>B2F4</chem> can react similarly. <chem>B2Br4</chem> can also add to alkenes. In 2015, Brown et al. used electronic structure calculations to provide mechanistic information on some of these (uncatalyzed) boron additions. Most interestingly, the authors were able to explain the stereospecificity of the reaction of <chem>B2Cl4</chem> with 1,2-disubstituted alkenes.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "D_{2h}"
},
{
"math_id": 1,
"text": "D_{2d}"
}
]
| https://en.wikipedia.org/wiki?curid=72321172 |
72323059 | Structural identifiability | Dynamical system property
In the area of system identification, a dynamical system is structurally identifiable if it is possible to infer its unknown parameters by measuring its output over time. This problem arises in many branches of applied mathematics, since dynamical systems (such as the ones described by ordinary differential equations) are commonly utilized to model physical processes, and these models contain unknown parameters that are typically estimated using experimental data.
However, in certain cases, the model structure may not permit a unique solution for this estimation problem, even when the data is continuous and free from noise. To avoid potential issues, it is recommended to verify the uniqueness of the solution in advance, prior to conducting any actual experiments. The lack of structural identifiability implies that there are multiple solutions for the problem of system identification, and the impossibility of distinguishing between these solutions suggests that the system has poor forecasting power as a model. On the other hand, control systems have been proposed with the goal of rendering the closed-loop system unidentifiable, decreasing its susceptibility to covert attacks targeting cyber-physical systems.
Examples.
Linear time-invariant system.
"Source"
Consider a linear time-invariant system with the following state-space representation:
formula_0
and with initial conditions given by formula_1 and formula_2. The solution of the output formula_3 is
formula_4
which implies that the parameters formula_5 and formula_6 are not structurally identifiable. For instance, the parameters formula_7 generate the same output as the parameters formula_8.
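This can be checked numerically. The following sketch is a minimal illustration using NumPy (the sampling times are arbitrary); it evaluates the output above for both parameter sets and confirms that the trajectories coincide exactly:

```python
# Minimal sketch: the two parameter sets quoted above yield the same output
# y(t) = theta2*theta3*exp(-theta1*t)*(exp(theta1*t) - 1) = theta2*theta3*(1 - exp(-theta1*t)),
# so measuring y(t) cannot separate theta2 from theta3.
import numpy as np

def output(t, theta1, theta2, theta3):
    return theta2 * theta3 * np.exp(-theta1 * t) * (np.exp(theta1 * t) - 1)

t = np.linspace(0.0, 10.0, 101)          # arbitrary, noise-free sampling times
y_a = output(t, theta1=1.0, theta2=1.0, theta3=1.0)
y_b = output(t, theta1=1.0, theta2=2.0, theta3=0.5)

print(np.allclose(y_a, y_b))             # True: only the product theta2*theta3 is identifiable
```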
Non-linear system.
"Source"
A model of a possible glucose homeostasis mechanism is given by the differential equations
formula_9
where ("c", "s"i, "p", "α", "γ") are parameters of the system, and the states are the plasma glucose concentration "G", the plasma insulin concentration "I", and the beta-cell functional mass "β." It is possible to show that the parameters "p" and "s"i are not structurally identifiable: any numerical choice of parameters "p" and "s"i that have the same product "psi" are indistinguishable.
Practical identifiability.
Structural identifiability is assessed by analyzing the dynamical equations of the system, and does not take into account possible noises in the measurement of the output. In contrast, "practical non-identifiability" also takes noises into account.
Other related notions.
The notion of structurally identifiable is closely related to observability, which refers to the capacity of inferring the state of the system by measuring the trajectories of the system output. It is also closely related to data informativity, which refers to the proper selection of inputs that enables the inference of the unknown parameters.
The (lack of) structural identifiability is also important in the context of dynamical compensation of physiological control systems. These systems should ensure a precise dynamical response despite variations in certain parameters. In other words, while in the field of systems identification, unidentifiability is considered a negative property, in the context of dynamical compensation, unidentifiability becomes a desirable property.
Identifiability also appears in the context of inverse optimal control. Here, one assumes that the data come from a solution of an optimal control problem with unknown parameters in the objective function. In this setting, identifiability refers to the possibility of inferring the parameters present in the objective function from the measured data.
Software.
There exist many software packages that can be used for analyzing the identifiability of a system, including non-linear systems:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{align}\n\\dot{x}_1(t) &=-\\theta_1 x_1, \\\\\n\\dot{x}_2(t) &=\\theta_1 x_1, \\\\\ny(t) &= \\theta_2 x_2,\n\\end{align}"
},
{
"math_id": 1,
"text": "x_1(0) = \\theta_3"
},
{
"math_id": 2,
"text": "x_2(0) = 0"
},
{
"math_id": 3,
"text": "y"
},
{
"math_id": 4,
"text": "y(t)= \\theta_2 \\theta_3 e^{-\\theta_1 t} \\left( e^{\\theta_1 t}-1 \\right),"
},
{
"math_id": 5,
"text": "\\theta_2"
},
{
"math_id": 6,
"text": "\\theta_3"
},
{
"math_id": 7,
"text": "\\theta_1 = 1, \\theta_2 = 1, \\theta_3 = 1"
},
{
"math_id": 8,
"text": "\\theta_1 = 1, \\theta_2 = 2, \\theta_3 = 0.5"
},
{
"math_id": 9,
"text": "\\begin{aligned}\n& \\dot{G}=u(0)+u-(c+s_\\mathrm{i} \\, I) G, \\\\\n& \\dot{\\beta}=\\beta \\left(\\frac{1.4583 \\cdot 10^{-5}}{1+\\left(\\frac{8.4}{G}\\right)^{1.7}}-\\frac{1.7361 \\cdot 10^{-5}}{1+\\left(\\frac{G}{8.4}\\right)^{8.5}}\\right), \\\\\n& \\dot{I}=p \\, \\beta \\, \\frac{G^2}{\\alpha^2+G^2}-\\gamma \\, I,\n\\end{aligned}"
}
]
| https://en.wikipedia.org/wiki?curid=72323059 |
72325 | Decay energy | Energy change of a nucleus after radioactive decay
The decay energy is the energy change of a nucleus having undergone a radioactive decay. Radioactive decay is the process in which an unstable atomic nucleus loses energy by emitting ionizing particles and radiation. This decay, or loss of energy, results in an atom of one type (called the parent nuclide) transforming to an atom of a different type (called the daughter nuclide).
Decay calculation.
The energy difference of the reactants is often written as "Q":
formula_0
formula_1
Decay energy is usually quoted in terms of the energy units MeV (million electronvolts) or keV (thousand electronvolts):
formula_2
Types of radioactive decay include
The decay energy corresponds to the mass difference "Δm" between the parent and the daughter atom and particles; this mass difference is released as the energy of radiation "E". If "A" is the radioactive activity, i.e. the number of transforming atoms per time, and "M" the molar mass, then the radiation power "P" is:
formula_3
or
formula_4
or
formula_5
Example: 60Co decays into 60Ni. The mass difference "Δm" is 0.003 u. The radiated energy is approximately 2.8 MeV. The molar mass is 59.93 g/mol. The half-life "T" of 5.27 years corresponds to the activity "A" = "N" ln(2) / "T", where "N" is the number of atoms per mole and "T" is the half-life. Taking care of the units, the radiation power for 60Co is 17.9 W/g.
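The numbers in this example can be checked with a short calculation; the sketch below is a minimal illustration using the rounded figures quoted above:

```python
# Minimal sketch: specific radiated power of 60Co from P = E * A / M,
# using the rounded figures quoted in the example above.
import math

N_A  = 6.022e23                      # Avogadro constant, atoms per mole
MeV  = 1.602e-13                     # joules per MeV
M    = 59.93                         # molar mass of 60Co, g/mol
E    = 2.8 * MeV                     # energy released per decay (approximate)
T    = 5.27 * 365.25 * 86400         # half-life, seconds

A_per_gram = (N_A / M) * math.log(2) / T     # decays per second per gram
P = E * A_per_gram                           # watts per gram

print(f"{A_per_gram:.2e} Bq/g, {P:.1f} W/g")
# Gives roughly 19 W/g, of the same order as the 17.9 W/g quoted above; the exact
# figure depends on how much of the beta-decay energy (part of which is carried
# away by antineutrinos) is counted as radiated power.
```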
Radiation power in "W/g" for several isotopes:
60Co: 17.9
238Pu: 0.57
137Cs: 0.6
241Am: 0.1
210Po: 140 (T = 138 d)
90Sr: 0.9
226Ra: 0.02
For use in radioisotope thermoelectric generators (RTGs), high decay energy combined with a long half-life is desirable. To reduce the cost and weight of radiation shielding, sources that do not emit strong gamma radiation are preferred. This list gives an indication why, despite its enormous cost, 238Pu with its roughly eighty-year half-life and low gamma emissions has become the RTG nuclide of choice. 90Sr performs worse than 238Pu on almost all measures: it is shorter-lived, a beta emitter rather than an easily shielded alpha emitter, and it releases significant gamma radiation when its daughter nuclide 90Y decays. However, because it is a high-yield product of nuclear fission and easy to chemically extract from other fission products, strontium titanate based RTGs were in widespread use for remote locations during much of the 20th century. Cobalt-60, while widely used for purposes such as food irradiation, is not a practicable RTG isotope, as most of its decay energy is released by gamma rays, requiring substantial shielding. Furthermore, its five-year half-life is too short for many applications.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Q = \\left( \\text{Kinetic energy} \\right)_\\text{after} - \\left( \\text{Kinetic energy} \\right)_\\text{before},"
},
{
"math_id": 1,
"text": "Q = \\left(\\text{Rest mass} \\right)_{\\text{before}} c^2 - \\left( \\text{Rest mass} \\right )_\\text{after} c^2 ."
},
{
"math_id": 2,
"text": "Q \\text{ [MeV]} = -931.5 \\Delta M \\text{ [Da]},~~(\\text{where }\\Delta M = \\Sigma M_\\text{products} - \\Sigma M_\\text{reactants})."
},
{
"math_id": 3,
"text": "P = \\Delta{m} \\left( \\frac{A}{M} \\right)."
},
{
"math_id": 4,
"text": "P = E \\left( \\frac{A}{M} \\right)."
},
{
"math_id": 5,
"text": "P = Q A."
}
]
| https://en.wikipedia.org/wiki?curid=72325 |
723297 | Centered square number | Centered figurate number that gives the number of dots in a square with a dot in the center
In elementary number theory, a centered square number is a centered figurate number that gives the number of dots in a square with a dot in the center and all other dots surrounding the center dot in successive square layers. That is, each centered square number equals the number of dots within a given city block distance of the center dot on a regular square lattice. While centered square numbers, like figurate numbers in general, have few if any direct practical applications, they are sometimes studied in recreational mathematics for their elegant geometric and arithmetic properties.
The figures for the first four centered square numbers are shown below:
Each centered square number is the sum of successive squares. Example: as shown in the following figure of Floyd's triangle, 25 is a centered square number, and is the sum of the square 16 (yellow rhombus formed by shearing a square) and of the next smaller square, 9 (sum of two blue triangles):
Relationships with other figurate numbers.
Let "C""k","n" generally represent the "n"th centered "k"-gonal number. The "n"th centered square number is given by the formula:
formula_0
That is, the "n"th centered square number is the sum of the "n"th and the ("n" – 1)th square numbers. The following pattern demonstrates this formula:
The formula can also be expressed as:
formula_1
That is, the "n"th centered square number is half of the "n"th odd square number plus 1, as illustrated below:
Like all centered polygonal numbers, centered square numbers can also be expressed in terms of triangular numbers:
formula_2
where
formula_3
is the "n"th triangular number. This can be easily seen by removing the center dot and dividing the rest of the figure into four triangles, as below:
The difference between two consecutive octahedral numbers is a centered square number (Conway and Guy, p.50).
Another way the centered square numbers can be expressed is:
formula_4
where
formula_5
Yet another way the centered square numbers can be expressed is in terms of the centered triangular numbers:
formula_6
where
formula_7
List of centered square numbers.
The first centered square numbers ("C"4,"n" < 4500) are:
1, 5, 13, 25, 41, 61, 85, 113, 145, 181, 221, 265, 313, 365, 421, 481, 545, 613, 685, 761, 841, 925, 1013, 1105, 1201, 1301, 1405, 1513, 1625, 1741, 1861, 1985, 2113, 2245, 2381, 2521, 2665, 2813, 2965, 3121, 3281, 3445, 3613, 3785, 3961, 4141, 4325, … (sequence in the OEIS).
Properties.
All centered square numbers are odd, and in base 10 one can notice that the ones digit follows the repeating pattern 1-5-3-5-1.
All centered square numbers and their divisors have a remainder of 1 when divided by 4. Hence all centered square numbers and their divisors end with digit 1 or 5 in base 6, 8, and 12.
Every centered square number except 1 is the hypotenuse of a Pythagorean triple (3-4-5, 5-12-13, 7-24-25, ...). This is exactly the sequence of Pythagorean triples where the two longest sides differ by 1. (Example: 5² + 12² = 13².)
This is not to be confused with the relationship ("n" – 1)² + "n"² = "C"4,"n". (Example: 2² + 3² = 13.)
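These properties are easy to verify computationally; the sketch below is a minimal illustration that generates the first few centered square numbers and checks the Pythagorean triples whose two longest sides differ by 1:

```python
# Minimal sketch: generate centered square numbers C(n) = n^2 + (n-1)^2 and
# check that each one after 1 is the hypotenuse of a Pythagorean triple whose
# two longest sides differ by 1.
def centered_square(n: int) -> int:
    return n * n + (n - 1) * (n - 1)

print([centered_square(n) for n in range(1, 11)])
# [1, 5, 13, 25, 41, 61, 85, 113, 145, 181]

for n in range(2, 8):
    a, b, c = 2 * n - 1, 2 * n * (n - 1), centered_square(n)
    assert a * a + b * b == c * c and c - b == 1
    print(f"{a}-{b}-{c}")        # 3-4-5, 5-12-13, 7-24-25, 9-40-41, ...
```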
Generating function.
The generating function that gives the centered square numbers is:
formula_8 | [
{
"math_id": 0,
"text": "C_{4,n} = n^2 + (n - 1)^2."
},
{
"math_id": 1,
"text": "C_{4,n} = \\frac{(2n-1)^2 + 1}{2}."
},
{
"math_id": 2,
"text": "C_{4,n} = 1 + 4\\ T_{n-1} = 1 + 2{n(n-1)},"
},
{
"math_id": 3,
"text": "T_n = \\frac{n(n+1)}{2} = \\binom{n+1}{2}"
},
{
"math_id": 4,
"text": "C_{4,n} = 1 + 4 \\dim (SO(n)),"
},
{
"math_id": 5,
"text": "\\dim (SO(n)) = \\frac{n(n-1)}{2}."
},
{
"math_id": 6,
"text": "C_{4,n} = \\frac{4C_{3,n}-1}{3},"
},
{
"math_id": 7,
"text": "C_{3,n} = 1 + 3\\frac{n(n-1)}{2}."
},
{
"math_id": 8,
"text": "\\frac{(x+1)^2}{(1-x)^3}= 1+5x+13x^2+25x^3+41x^4+~...~. "
}
]
| https://en.wikipedia.org/wiki?curid=723297 |
72331198 | The Erdős Distance Problem | Geometry book
The Erdős Distance Problem is a monograph on the Erdős distinct distances problem in discrete geometry: how can one place formula_0 points into formula_1-dimensional Euclidean space so that the pairs of points determine the smallest possible set of distinct distances? It was written by Julia Garibaldi, Alex Iosevich, and Steven Senger, and published in 2011 by the American Mathematical Society as volume 56 of the Student Mathematical Library. The Basic Library List Committee of the Mathematical Association of America has suggested its inclusion in undergraduate mathematics libraries.
Topics.
"The Erdős Distance Problem" consists of twelve chapters and three appendices.
After an introductory chapter describing the formulation of the problem by Paul Erdős and Erdős's proof that the number of distances is always at least proportional to formula_2, the next six chapters cover the two-dimensional version of the problem. They build on each other to describe successive improvements to the known results on the problem, reaching a lower bound proportional to formula_3 in Chapter 7. These results connect the problem to other topics including the Cauchy–Schwarz inequality, the crossing number inequality, the Szemerédi–Trotter theorem on incidences between points and lines, and methods from information theory.
Subsequent chapters discuss variations of the problem: higher dimensions, other metric spaces for the plane, the number of distinct inner products between vectors, and analogous problems in spaces whose coordinates come from a finite field instead of the real numbers.
Audience and reception.
Although the book is largely self-contained, it assumes a level of mathematical sophistication aimed at advanced university-level mathematics students. Exercises are included, making it possible to use it as a textbook for a specialized course. Reviewer Michael Weiss suggests that the book is less successful than its authors hoped at reaching "readers at different levels of mathematical experience": the density of some of its material, needed to cover that material thoroughly, is incompatible with accessibility to beginning mathematicians. Weiss also complains about some minor mathematical errors in the book, which however do not interfere with its overall content.
Much of the book's content, on the two-dimensional version of the problem, was made obsolete soon after its publication by new results of Larry Guth and Nets Katz, who proved that the number of distances in this case must be near-linear. Nevertheless, reviewer William Gasarch argues that this outcome should make the book more interesting to readers, not less, because it helps explain the barriers that Guth and Katz had to overcome in proving their result. Additionally, the techniques that the book describes have many uses in discrete geometry.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "d"
},
{
"math_id": 2,
"text": "\\sqrt[d]{n}"
},
{
"math_id": 3,
"text": "n^{44/51}"
}
]
| https://en.wikipedia.org/wiki?curid=72331198 |
723339 | Deltoidal icositetrahedron | Catalan solid with 24 kite faces
In geometry, the deltoidal icositetrahedron (or trapezoidal icositetrahedron, tetragonal icosikaitetrahedron, tetragonal trisoctahedron, strombic icositetrahedron) is a Catalan solid. Its 24 faces are congruent kites. The deltoidal icositetrahedron, whose dual is the (uniform) rhombicuboctahedron, is closely related to the pseudo-deltoidal icositetrahedron, whose dual is the pseudorhombicuboctahedron; the two solids should not be confused with each other.
Cartesian coordinates.
In the image above, the long body diagonals are those between opposite red vertices and between opposite blue vertices, and the short body diagonals are those between opposite yellow vertices.<br>Cartesian coordinates for the vertices of the deltoidal icositetrahedron centered at the origin and with long body diagonal length 2 are:
formula_1
formula_3
formula_5
For example, the point with coordinates formula_6 is the intersection of the plane with equation formula_7 and of the line with system of equations formula_8
A deltoidal icositetrahedron has three regular-octagon equators, lying in three orthogonal planes.
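The kite edge lengths reported below can be recovered directly from these coordinates; the sketch below is a minimal illustration that builds the 26 vertices and extracts the two smallest distinct vertex-to-vertex distances, which are the short and long edges of the kites:

```python
# Minimal sketch: build the 26 vertices listed above (long body diagonal = 2)
# and recover the kite edge lengths as the two smallest distinct distances.
import itertools, math

a = math.sqrt(2) / 2                 # coordinate value of the 12 "blue" vertices
b = (2 * math.sqrt(2) + 1) / 7       # coordinate value of the 8 "yellow" vertices

red    = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
blue   = [(sx * a, sy * a, 0) for sx in (1, -1) for sy in (1, -1)] \
       + [(sx * a, 0, sz * a) for sx in (1, -1) for sz in (1, -1)] \
       + [(0, sy * a, sz * a) for sy in (1, -1) for sz in (1, -1)]
yellow = [(sx * b, sy * b, sz * b) for sx in (1, -1) for sy in (1, -1) for sz in (1, -1)]
verts  = red + blue + yellow         # 6 + 12 + 8 = 26 vertices

dists = sorted({round(math.dist(p, q), 9)
                for p, q in itertools.combinations(verts, 2)})
edge_short, edge_long = dists[0], dists[1]
print(edge_short, edge_long, edge_long / edge_short)
# ~0.591980, ~0.765367, ~1.292893 = 2 - 1/sqrt(2), matching the values below
```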
Dimensions and angles.
Dimensions.
The deltoidal icositetrahedron with long body diagonal length "D" = 2 has:
formula_9
formula_10
formula_11
formula_12
formula_13 is the distance from the center to any face plane; it may be calculated by normalizing the equation of plane above, replacing ("x", "y", "z") with (0, 0, 0), and taking the absolute value of the result.
A deltoidal icositetrahedron has its long and short edges in the ratio:
formula_14
The deltoidal icositetrahedron with short edge length formula_15 has:
formula_16
formula_17
Angles.
For a deltoidal icositetrahedron, each kite face has:
formula_18
formula_19
Side lengths.
In a deltoidal icositetrahedron, each face is a kite-shaped quadrilateral. The short and long sides of these kites are in the ratio (4 + √2)/7 ≈ 0.773459:1, the reciprocal of the long-to-short ratio 2 − 1/√2 given above.
For example, if the two short sides, which meet at the obtuse angle, are taken to have length 1/√2 ≈ 0.707107, then the two long sides have length approximately 0.914214.
Occurrences in nature and culture.
The deltoidal icositetrahedron is a crystal habit often formed by the mineral analcime and occasionally garnet. The shape is often called a trapezohedron in mineral contexts, although in solid geometry the name trapezohedron has another meaning.
In Guardians of the Galaxy Vol. 3, the device containing the files about the experiments carried out on Rocket Raccoon has the shape of a deltoidal icositetrahedron.
Orthogonal projections.
The "deltoidal icositetrahedron" has three symmetry positions, all centered on vertices:
Related polyhedra.
The deltoidal icositetrahedron's projection onto a cube divides its squares into quadrants. The projection onto a regular octahedron divides its equilateral triangles into kite faces. In Conway polyhedron notation this represents an "ortho" operation to a cube or octahedron.
The deltoidal icositetrahedron is closely related to the disdyakis dodecahedron. The main difference is that the latter also has edges between the vertices on 3- and 4-fold symmetry axes.
Dyakis dodecahedron.
A variant with pyritohedral symmetry is called a dyakis dodecahedron or diploid. It is common in crystallography.<br>A dyakis dodecahedron can be created by enlarging 24 of the 48 faces of a disdyakis dodecahedron. A tetartoid can be created by enlarging 12 of the 24 faces of a dyakis dodecahedron.
Stellation.
The great triakis octahedron is a stellation of the deltoidal icositetrahedron.
Related polyhedra and tilings.
The deltoidal icositetrahedron is a member of a family of duals to the uniform polyhedra related to the cube and regular octahedron.
When projected onto a sphere (see right), it can be seen that the edges make up the edges of a cube and regular octahedron arranged in their dual positions. It can also be seen that the 3- and 4-fold corners can be made to have the same distance to the center. In that case the resulting icositetrahedron will no longer have a rhombicuboctahedron for a dual, since the centers of the square and triangle faces of a rhombicuboctahedron are at different distances from its center.
This polyhedron is a term of a sequence of topologically related deltoidal polyhedra with face configuration V3.4."n".4; this sequence continues with tilings of the Euclidean and hyperbolic planes. These face-transitive figures have (*"n"32) reflectional symmetry.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "4"
},
{
"math_id": 1,
"text": "\\left( \\pm 1 , 0 , 0 \\right) , \\left( 0 , \\pm 1 , 0 \\right) , \\left( 0 , 0 , \\pm 1 \\right) ;"
},
{
"math_id": 2,
"text": "2"
},
{
"math_id": 3,
"text": "\\left( 0 , \\pm \\frac{\\sqrt{2}}{2} , \\pm \\frac{\\sqrt{2}}{2} \\right) , \\left( \\pm \\frac{\\sqrt{2}}{2} , 0 , \\pm \\frac{\\sqrt{2}}{2} \\right) , \\left( \\pm \\frac{\\sqrt{2}}{2} , \\pm \\frac{\\sqrt{2}}{2} , 0 \\right) ;"
},
{
"math_id": 4,
"text": "3"
},
{
"math_id": 5,
"text": "\\left( \\pm \\frac{2 \\sqrt{2} + 1}{7} , \\pm \\frac{2 \\sqrt{2} + 1}{7} , \\pm \\frac{2 \\sqrt{2} + 1}{7} \\right) ."
},
{
"math_id": 6,
"text": "\\left( \\frac{2 \\sqrt{2} + 1}{7} , \\frac{2 \\sqrt{2} + 1}{7} , \\frac{2 \\sqrt{2} + 1}{7} \\right)"
},
{
"math_id": 7,
"text": "\\left( \\sqrt{2} - 1 \\right) x + \\left( \\sqrt{2} - 1 \\right) y + 1 \\left( z - 1 \\right) = 0"
},
{
"math_id": 8,
"text": "x = y = z\\,."
},
{
"math_id": 9,
"text": "d = \\frac{2 \\sqrt{3} \\left( 2 \\sqrt{2} + 1 \\right) }{7} \\approx 1.894\\,580 ;"
},
{
"math_id": 10,
"text": "S = \\sqrt{2 - \\sqrt{2}} \\approx 0.765\\,367 ;"
},
{
"math_id": 11,
"text": "s = \\frac{ \\sqrt{20 - 2 \\sqrt{2}} }{7} \\approx 0.591\\,980 ;"
},
{
"math_id": 12,
"text": "r = \\sqrt{ \\frac{7 + 4 \\sqrt{2}}{17} } \\approx 0.862\\,856 ."
},
{
"math_id": 13,
"text": "r"
},
{
"math_id": 14,
"text": "\\frac{ S }{ s } = 2 - \\frac{1}{ \\sqrt{2} } \\approx 1.292\\,893 ."
},
{
"math_id": 15,
"text": "s"
},
{
"math_id": 16,
"text": "A = 6 \\sqrt{29 - 2 \\sqrt{2}}\\,s ^{2} ;"
},
{
"math_id": 17,
"text": "V = \\sqrt{122 + 71 \\sqrt{2}}\\,s ^{3} ."
},
{
"math_id": 18,
"text": "\\arccos \\left( \\frac{1}{2} - \\frac{ \\sqrt{2} }{4} \\right) \\approx 81.578\\,942^{\\circ} ;"
},
{
"math_id": 19,
"text": "\\arccos \\left( - \\frac{1}{4} - \\frac{ \\sqrt{2} }{8} \\right) \\approx 115.263\\,174^{\\circ} ."
}
]
| https://en.wikipedia.org/wiki?curid=723339 |
72337079 | Geometric Origami | Book on the mathematics of paper folding
Geometric Origami is a book on the mathematics of paper folding, focusing on the ability to simulate and extend classical straightedge and compass constructions using origami. It was written by Austrian mathematician Robert Geretschläger and published by Arbelos Publishing (Shipley, UK) in 2008. The Basic Library List Committee of the Mathematical Association of America has suggested its inclusion in undergraduate mathematics libraries.
Topics.
The book is divided into two main parts. The first part is more theoretical. It outlines the Huzita–Hatori axioms for mathematical origami, and proves that they are capable of simulating any straightedge and compass construction. It goes on to show that, in this mathematical model, origami is strictly more powerful than straightedge and compass: with origami, it is possible to solve any cubic equation or quartic equation. In particular, origami methods can be used to trisect angles, and for doubling the cube, two problems that have been proven to have no exact solution using only straightedge and compass.
The second part of the book focuses on folding instructions for constructing regular polygons using origami, and on finding the largest copy of a given regular polygon that can be constructed within a given square sheet of origami paper. With straightedge and compass, it is only possible to exactly construct regular formula_0-gons for which formula_0 is a product of a power of two with distinct Fermat primes (powers of two plus one): this allows formula_0 to be 3, 5, 6, 8, 10, 12, etc. These are called the constructible polygons. With a construction system that can trisect angles, such as mathematical origami, more numbers of sides are possible, using Pierpont primes in place of Fermat primes, including formula_0-gons for formula_0 equal to 7, 13, 14, 17, 19, etc. "Geometric Origami" provides explicit folding instructions for 15 different regular polygons, including those with 3, 5, 6, 7, 8, 9, 10, 12, 13, 17, and 19 sides. Additionally, it discusses approximate constructions for polygons that cannot be constructed exactly in this way.
Audience and reception.
This book is quite technical, aimed more at mathematicians than at amateur origami enthusiasts looking for folding instructions for origami artworks. However, it may be of interest to origami designers, looking for methods to incorporate folding patterns for regular polygons into their designs. Origamist David Raynor suggests that its methods could also be useful in constructing templates from which to cut out clean unfolded pieces of paper in the shape of the regular polygons that it discusses, for use in origami models that use these polygons as a starting shape instead of the traditional square paper.
"Geometric Origami" may also be useful as teaching material for university-level geometry and abstract algebra, or for undergraduate research projects extending those subjects, although reviewer Mary Fortune cautions that "there is much preliminary material to be covered" before a student would be ready for such a project. Reviewer Georg Gunther summarizes the book as "a delightful addition to a wonderful corner of mathematics where art and geometry meet", recommending it as a reference for "anyone with a working knowledge of elementary geometry, algebra, and the geometry of complex numbers".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n"
}
]
| https://en.wikipedia.org/wiki?curid=72337079 |
72340245 | Method of virtual quanta | Mathematical method
The method of virtual quanta is a method used to calculate radiation produced by interactions of electromagnetic particles, particularly in the case of bremsstrahlung. It can also be applied in the context of gravitational radiation, and more recently to other field theories. The method was developed by Carl Friedrich von Weizsäcker and Evan James Williams in 1934.
Background.
In problems of collision between charged particles or systems, the incident particle is often travelling at relativistic speeds when impacting the struck system, producing the field of a moving charge as follows:
formula_0
formula_1
formula_2
where formula_3 indicates the component of the electric field in the direction of travel of the particle, formula_4 indicates the E-field in the direction perpendicular to formula_3 and in the plane of the collision, formula_5 is the impact parameter, formula_6 is the Lorentz factor, formula_7 the charge and formula_8 the velocity of the incident particle.
In the ultrarelativistic limit, formula_4 and formula_9 have the form of a pulse of radiation travelling in the formula_10 direction. This creates the virtual radiation pulse (virtual quanta) denoted by formula_11. Moreover, an additional magnetic field may be added in order to turn formula_3 into a radiation pulse travelling along formula_12, denoted formula_13. This virtual magnetic field will turn out to be much smaller than formula_9, hence its contribution to the motion of particles is minimal.
By taking this point of view, the problem of the collision can be treated as a scattering of radiation. Similar analogies can be made for other processes (e.g. the ionisation of an atom by a fast electron can be treated as photoexcitation).
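The pulse-like character of these fields can be illustrated numerically; the sketch below is a minimal illustration in arbitrary units, with hypothetical values of the charge, velocity and impact parameter, evaluating the expressions above for several Lorentz factors:

```python
# Minimal sketch (arbitrary units): fields of a fast charge at impact parameter b.
# The transverse field E_2 peaks at q*gamma/b^2 and lasts roughly b/(gamma*v),
# so for large gamma it increasingly resembles a short pulse of radiation.
import numpy as np

def fields(t, q=1.0, v=1.0, b=1.0, gamma=10.0):
    denom = (b ** 2 + (gamma * v * t) ** 2) ** 1.5
    E1 = -q * gamma * v * t / denom      # component along the direction of travel
    E2 = q * gamma * b / denom           # transverse component, in the collision plane
    return E1, E2

t = np.linspace(-5.0, 5.0, 2001)
for gamma in (2.0, 10.0, 50.0):
    E1, E2 = fields(t, gamma=gamma)
    width = t[E2 > 0.5 * E2.max()]       # rough full width at half maximum of the pulse
    print(f"gamma={gamma:>5}: peak E2 = {E2.max():.1f}, FWHM ~ {width[-1] - width[0]:.3f}")
```

As the Lorentz factor grows, the peak of the transverse pulse increases while its duration shrinks, which is what allows the field to be replaced by an equivalent flux of radiation (virtual quanta).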
Bremsstrahlung.
In the case of bremsstrahlung, the problem becomes one of the scattering of the virtual quanta in the nuclear Coulomb potential. This is a standard problem and the cross section of the scattering is known as the Thomson cross section:
formula_14
The differential radiation cross section per unit frequency is hence:
formula_15
where formula_16 is the frequency spectrum of virtual quanta produced by the incident particle over all possible impact parameters.
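As a quick consistency check, integrating the Thomson differential cross section above over all solid angles recovers the familiar total cross section (8π/3)(z²e²/mc²)²; the sketch below verifies the angular integral numerically, with the prefactor set to one:

```python
# Minimal sketch: angular integral of the Thomson differential cross section,
# (1/2)*(1 + cos^2(theta)), over all solid angles; the prefactor (z^2 e^2 / m c^2)^2
# is set to one so only the angular factor is checked.
import numpy as np
from scipy.integrate import quad

integrand = lambda theta: 0.5 * (1.0 + np.cos(theta) ** 2) * 2.0 * np.pi * np.sin(theta)
total, _ = quad(integrand, 0.0, np.pi)

print(total, 8.0 * np.pi / 3.0)   # both ~8.3776, i.e. the angular factor equals 8*pi/3
```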
Other applications.
Synchrotron radiation.
In the rest frame of the charged accelerating particle, the emission of synchrotron radiation can be treated as a Thomson scattering problem. This enables the introduction of various corrections into the classical calculation of the power lost by particles while accelerated, such as the quantum correction through the Klein-Nishina formula.
Gravitational radiation.
When transforming the gravitational field described by the Schwarzschild metric into the rest frame of a passing, relativistic test particle, an effect similar to the relativistic transformation of electric and magnetic fields is observed, and virtual pulses of gravitational radiation are produced. Through this, the cross section of close gravitational encounters and radiative power loss caused by such collisions can be calculated.
References.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " E_1 = -\\frac{q \\gamma v t}{(b^2 + \\gamma^2 v^2 t^2)^{\\frac{3}{2}}} "
},
{
"math_id": 1,
"text": " E_2 = \\frac{q \\gamma b}{(b^2 + \\gamma^2 v^2 t^2)^{\\frac{3}{2}}} "
},
{
"math_id": 2,
"text": " B_3 = \\frac{v}{c} E_2 "
},
{
"math_id": 3,
"text": " E_1 "
},
{
"math_id": 4,
"text": " E_2 "
},
{
"math_id": 5,
"text": " b "
},
{
"math_id": 6,
"text": " \\gamma "
},
{
"math_id": 7,
"text": "q"
},
{
"math_id": 8,
"text": "v"
},
{
"math_id": 9,
"text": " B_3 "
},
{
"math_id": 10,
"text": " \\overrightarrow{e_1} "
},
{
"math_id": 11,
"text": " P_1 "
},
{
"math_id": 12,
"text": " \\overrightarrow{e_2} "
},
{
"math_id": 13,
"text": " P_2 "
},
{
"math_id": 14,
"text": " \\frac{d\\sigma}{d\\Omega} = \\frac{1}{2}(\\frac{z^2 e^2}{mc^2})^2(1+\\cos^2{\\theta}) "
},
{
"math_id": 15,
"text": " \\frac{d\\chi}{d\\omega d\\Omega} = \\frac{1}{2}(\\frac{z^2 e^2}{mc^2})^2(1+\\cos^2{\\theta})\\frac{dI}{d\\omega} "
},
{
"math_id": 16,
"text": " \\frac{dI}{d\\omega} "
}
]
| https://en.wikipedia.org/wiki?curid=72340245 |
72344861 | Source attribution | Epidemiology method
<templatestyles src="Template:TOC_right/styles.css" />
In the field of epidemiology, source attribution refers to a category of methods with the objective of reconstructing the transmission of an infectious disease from a specific source, such as a population, individual, or location. For example, source attribution methods may be used to trace the origin of a new pathogen that recently crossed from another host species into humans, or from one geographic region to another. It may be used to determine the common source of an outbreak of a foodborne infectious disease, such as a contaminated water supply. Finally, source attribution may be used to estimate the probability that an infection was transmitted from one specific individual to another, "i.e.", "who infected whom".
Source attribution can play an important role in public health surveillance and management of infectious disease outbreaks. In practice, it tends to be a problem of statistical inference, because transmission events are seldom observed directly and may have occurred in the distant past. Thus, there is an unavoidable level of uncertainty when reconstructing transmission events from residual evidence, such as the spatial distribution of the disease. As a result, source attribution models often employ Bayesian methods that can accommodate substantial uncertainty in model parameters.
Molecular source attribution is a subfield of source attribution that uses the molecular characteristics of the pathogen — most often its nucleic acid genome — to reconstruct transmission events. Many infectious diseases are routinely detected or characterized through genetic sequencing, which can be faster than culturing isolates in a reference laboratory and can identify specific strains of the pathogen at substantially higher precision than laboratory assays, such as antibody-based assays or drug susceptibility tests. On the other hand, analyzing the genetic (or whole genome) sequence data requires specialized computational methods to fit models of transmission. Consequently, molecular source attribution is a highly interdisciplinary area of molecular epidemiology that incorporates concepts and skills from mathematical statistics and modeling, microbiology, public health and computational biology.
There are generally two ways that molecular data are used for source attribution. First, infections can be categorized into different "subtypes" that each corresponds to a unique molecular variety, or a cluster of similar varieties. Source attribution can then be inferred from the similarity of subtypes. Individual infections that belong to the same subtype are more likely to be related epidemiologically, including direct source-recipient transmission, because they have not substantially evolved away from their common ancestor. Similarly, we assume the true source population will have frequencies of subtypes that are more similar to the recipient population, relative to other potential sources. Second, molecular (genetic) sequences from different infections can be directly compared to reconstruct a phylogenetic tree, which represents how they are related by common ancestors. The resulting phylogeny can approximate the transmission history, and a variety of methods have been developed to adjust for confounding factors.
Due to the associated stigma and the criminalization of transmission for specific infectious diseases, molecular source attribution at the level of individuals can be a controversial use of data that was originally collected in a healthcare setting, with potentially severe legal consequences for individuals who become identified as putative sources. In these contexts, the development and application of molecular source attribution techniques may involve trade-offs between public health responsibilities and individual rights to data privacy.
Microbial subtyping.
Microbial subtyping or strain typing is the use of laboratory methods to assign microbial samples to subtypes, which are predefined classifications based on distinct characteristics.
The assignment of specimens to subtypes can provide a basis of source attribution, since we assume that a pathogen undergoes minimal change when transmitted to an uninfected host.
Therefore, infections of the same subtype are implied to be epidemiologically related, "i.e.," linked by one or more recent transmission events.
The assumption that the pathogen is unchanged when transmitted is generally reasonable if the rate of evolution for the pathogen is slower than the rate of transmission, such that few mutations are observed on an epidemiological time scale.
For example, suppose host A is infected by a pathogen that we have categorized as subtype 1.
They are more likely to have been infected by host B, who also carries the subtype 1 pathogen, than host C who carries the subtype 2 pathogen (Figure 1).
In other words, transmission from host B is a more parsimonious explanation if there is a relatively small probability that the pathogen population in host C evolved from subtype 1 to subtype 2 after transmission to host A.
Today it is more common to use genetic sequencing to characterize the microbial sample at the level of its nucleotide sequence by sequencing the whole genome or proportions thereof.
However, other molecular methods such as restriction fragment length polymorphism
have historically played an important role in microbial subtyping before genetic sequencing became an affordable and ubiquitous technology in reference laboratories.
Sequence-based typing methods confer an advantage over other laboratory methods (such as serotyping or pulsed-field gel electrophoresis)
because there is an enormous number of potential subtypes that can be resolved at the level of the genetic sequence.
Consider the above example again; however, this time host A carries the same infection subtype as many other hosts.
In this case we would have no information to differentiate between these hosts as the potential source of host A's infection.
Our ability to identify potential sources, therefore, depends on having a sufficient number of different subtypes.
However, defining too many subtypes in the population makes it likely that every individual carries a unique subtype, especially for rapidly-evolving pathogens that can accumulate high levels of genetic diversity in a relatively short period of time.
Hence, there exists an intermediate level of subtype resolution that confers the greatest amount of information for source attribution.
When source attribution is considered for a pathogen with high diversity, such that most specimens have unique genetic sequences, it is useful to group multiple unique sequences with a clustering method.
Single and multi-locus typing.
Before whole-genome sequencing was cost-effective, targeting a specific part of the pathogen genome (a.k.a. single-locus typing) was an important step to facilitate microbial subtyping.
For example, the 16S ribosomal RNA gene is a standard target for identifying bacteria, in part because it is present across all known species and contains a mixture of conserved and variable regions.
Within a pathogen species, sequencing targets tended to be selected on the basis of their length, ubiquity and exposure to diversifying selection, which may be dictated by the function of the gene product for expressed regions.
For example, so-called "housekeeping" or core genes have indispensable biological functions, such as copying genetic material or building proteins.
These genes are often preferred candidates for microbial subtyping because they are less likely to be absent from a given genome.
Gene presence/absence is particularly relevant for bacteria where genetic material is frequently exchanged through horizontal gene transfer.
Targeting multiple regions (loci) of the pathogen genome confers greater precision to distinguish between lineages, since the chance to observe informative genetic differences between infections is increased.
This approach is referred to as multi-locus sequence typing (MLST).
Similar to single-locus typing, MLST requires the selection of specific loci to target for sequencing.
Moreover, for subtyping to be consistent across laboratories a reference database must be maintained that maps sequences from single or multiple loci to a fixed notation of allele numbers or designations.
Whole genome sequencing.
Although single- and multiple-locus subtyping is still predominantly used for molecular epidemiology, ongoing improvements in sequencing technologies and computing power continue to lower the barrier to whole-genome sequencing.
Next-generation sequencing (NGS) technologies provide cost-effective methods to generate whole genome sequences from a given sample by individually amplifying and sequencing templates in parallel using customized technologies such as sequencing-by-synthesis.
Shotgun sequencing applications of NGS generate full-length genome sequences by shearing the nucleic acid extracted from the sample into small fragments that are converted into a sequencing library; the genome sequence is then reconstituted from these sequence fragments (short reads) using a "de novo" sequence assembler program.
Alternatively, short reads can be mapped to a reference genome sequence that has been converted into an index for efficient lookup of exact substring matches.
This approach can be faster than "de novo" assembly, but relies on having a reference genome that is sufficiently similar to the genome sequence of the sample.
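As an aside, the following minimal Python sketch illustrates the exact-lookup idea behind reference-based mapping by building a k-mer index of a made-up reference and seeding a read with its leading k-mer; real read mappers rely on far more sophisticated compressed indexes and tolerate mismatches.

```python
# Toy illustration of reference indexing for exact substring lookup;
# real read mappers use compressed full-text indexes and allow mismatches.

def build_kmer_index(reference, k):
    """Map every k-mer in the reference to its start positions."""
    index = {}
    for i in range(len(reference) - k + 1):
        index.setdefault(reference[i:i + k], []).append(i)
    return index

def candidate_positions(read, index, k):
    """Return reference positions where the read's first k-mer matches exactly."""
    return index.get(read[:k], [])

reference = "ACGTACGTTAGCCGATTACAGGATCCGTTAACGGT"
index = build_kmer_index(reference, k=5)
print(candidate_positions("CGATTACAG", index, k=5))  # [12]
```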
While NGS makes it feasible to simultaneously generate full-length genome sequences from hundreds of pathogen samples in a single run, it introduces a number of other challenges.
For instance, NGS platforms tend to have higher sequencing error rates than conventional sequencing, and regions of the genome with long stretches of repetitive sequence can be difficult to reassemble.
Whole genome sequencing (WGS) can confer a significant advantage for source attribution over single- or multiple-locus subtyping.
Sequencing the entire genome is the maximal extent of multi-locus typing, in that all possible loci are covered.
Having whole genome sequences will tend to make one-to-one subtyping (Figure 1) less useful, since for rapidly evolving pathogens most genomes will differ from all others by at least one mutation.
Consequently, applications of WGS for source attribution at a population level will likely have to cluster similar genomes together.
The breadth of coverage offered by WGS is more advantageous for the epidemiology of bacterial pathogens than viruses.
Bacterial genomes tend to be longer, ranging from about 10⁶ to 10⁷ base pairs, whereas virus genomes seldom exceed 10⁶ base pairs.
In addition, bacteria tend to evolve at a slower rate than viruses, so mutations tend to be distributed more sparsely throughout a bacterial genome.
For example, WGS data revealed differences between isolates of "Burkholderia pseudomallei" from Australia and Cambodia that had otherwise appeared to be identical by multi-locus subtyping due to convergent evolution.
WGS has also been utilized in several recent studies to resolve transmission networks of "Mycobacterium tuberculosis" in greater detail, because isolates with identical multi-locus subtypes ("e.g.", MIRU-VNTR profiles targeting 24 loci) were frequently separated by large numbers of nucleotide differences in the full genome sequence, comprising roughly 4.3 million nucleotides encoding over 4,000 genes.
Genetic clustering.
When applied to genetic sequences, a clustering method is a set of rules for assigning the sequences to a smaller number of clusters such that members of the same cluster are more genetically similar to each other than sequences in other clusters. Put another way, a clustering method defines a partition on the set of genetic sequences using some similarity measure. Clustering is inherently subjective and there are usually no formal guidelines for setting the clustering criteria. Consequently, cluster definitions can vary substantially from one study to the next. In addition, clustering is an intuitive process that can be accomplished by a wide variety of approaches; because of this flexibility, numerous different methods of genetic clustering have been described in the literature.
Genetic clustering provides a way of dealing with sequences from rapidly evolving pathogens, or whole genome sequences from pathogens with less divergence.
In either case, there can be an enormous number of distinct genetic sequences in the data set.
If each subtype must correspond to a unique sequence variant, "i.e.", when subtypes are defined on a one-to-one basis (Figure 1), then one could potentially have to track an unwieldy number of microbial subtypes for these pathogens.
The number of subtypes can be greatly reduced by expanding the definition of microbial subtypes from individually unique sequence variants to clusters of similar sequences.
For example, pairwise distance clustering is a nonparametric approach in which clusters are assembled from pairs of sequences that fall within a threshold distance of each other.
The distance between sequences is computed by a genetic distance measure (a mathematical formula that maps two sequences to a non-negative real number) that quantifies the evolutionary divergence between the sequences under some model of molecular evolution.
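For illustration, the following is a minimal sketch of threshold-based pairwise distance clustering in Python, using a simple p-distance on made-up aligned sequences; real analyses would typically use a model-based distance (such as TN93) and a carefully calibrated threshold.

```python
# Toy example of pairwise distance clustering: sequences whose pairwise
# distance falls at or below a threshold are merged into the same cluster
# (single-linkage behaviour, implemented with a small union-find).

def p_distance(a, b):
    """Proportion of differing sites between two aligned sequences."""
    return sum(1 for x, y in zip(a, b) if x != y) / len(a)

def cluster(seqs, threshold):
    parent = list(range(len(seqs)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(seqs)):
        for j in range(i + 1, len(seqs)):
            if p_distance(seqs[i], seqs[j]) <= threshold:
                parent[find(i)] = find(j)

    clusters = {}
    for i in range(len(seqs)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

seqs = ["ACGTACGT", "ACGTACGA", "TTTTACGA", "TTTTACGT"]
print(cluster(seqs, threshold=0.15))  # [[0, 1], [2, 3]]
```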
Frequency-based attribution.
When the potential sources are populations, not individuals, then we are comparing the frequencies of subtypes in the respective populations.
The most likely source population should have a subtype frequency distribution that is the most similar to that of the recipient (human) population.
Methods that employ this approach have been referred to as "frequency-based" or "frequency-matching" models.
These subtypes are not necessarily derived from molecular data; for instance, these methods were originally applied to microbial strains defined by non-genetic antigenic or resistance profiling.
For example, the "Dutch model"
was originally developed to estimate the most likely source of a number of foodborne illnesses due to "Salmonella" by comparing the relative frequencies of bacterial subtypes (based on phage typing) in different commercial livestock populations (including poultry, swine and cattle) through routine surveillance programs.
For a given subtype, the expected number of human cases attributed to each source is proportional to the relative frequencies of that subtype among sources:
formula_0
where formula_1 is the proportion of (non-human) cases in the formula_2-th source population associated with subtype formula_3, and formula_4 is the number of cases of subtype formula_3 in the recipient (human) population.
For instance, if the frequencies of subtype X among three potential sources were 0.8, 0.5 and 0.1, respectively, then the expected number of cases (out of a total of 100) from the second source is formula_5.
This simple formula is a maximum likelihood estimator when the total force of infection from each source into the human population is uniform, "e.g.", the sources have equal population sizes.
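As a worked illustration, the following minimal Python sketch applies the proportional attribution formula to the made-up frequencies from the example above.

```python
# Toy computation for the "Dutch model": the expected number of human cases
# of subtype i attributed to source j is n_i * p_ij / sum_j(p_ij).

def attribute(p_i, n_i):
    """p_i: frequencies of subtype i in each candidate source; n_i: human cases of subtype i."""
    total = sum(p_i)
    return [n_i * p / total for p in p_i]

# Frequencies of subtype X in three sources, and 100 human cases of subtype X.
print(attribute([0.8, 0.5, 0.1], 100))  # [57.14..., 35.71..., 7.14...]
```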
Subsequently, this model was extended by Hald and colleagues
to account for variation among sources and subtypes using Bayesian inference methods.
This extension, typically referred to as the Hald model, has become a standard model in source attribution for food-borne illnesses.
The observed number of cases of each subtype in the human population was assumed to be a Poisson-distributed outcome with mean formula_6 for the "i"-th subtype, after adjusting for cases related to travel and outbreaks:
formula_7
where formula_8 is the marginal effect of the "i"-th subtype ("e.g.", elevated infectiousness of a bacterial variant), formula_9 is the observed total amount (mass) of the "j"-th food source, formula_10 is the marginal effect of the "j"-th food source, and formula_1 is the same observed case proportion as the original "Dutch" model.
This model is visualized in Figure 2.
Bayesian inference.
The addition of a large number of parameters to the "Dutch" model by Hald and colleagues yielded a more realistic model. However, it was too complex to solve for exact maximum likelihood estimates, in contrast to the original model. Many of the parameters could not be directly measured, such as the relative transmission risk associated with a specific food source. Consequently, Hald and colleagues adopted a Bayesian approach to estimate the model parameters. A similar approach has also been used to reconstruct the contribution of different environmental and livestock reservoirs of the bacterium "Campylobacter jejuni" to an outbreak of food poisoning in England,
where the migration of different subtypes among reservoirs was jointly estimated by Bayesian methods.
Although Bayesian inference is discussed extensively elsewhere, it plays an important role in computationally-intensive methods of source attribution, so we provide a brief description here.
In the context of Bayesian inference every parameter is described by a probability distribution that represents our "belief" about its true value.
Thus, the statistical principle that underlies Bayesian inference ("i.e.," Bayes' rule) can be expressed in terms of the model parameters (formula_11) and the data (formula_12):
formula_13
where formula_14, formula_15 and formula_16 are known as the posterior, sampling (likelihood), and prior distributions, respectively.
A simple way to think about Bayesian inference is that our prior belief about the parameters is "updated" once we have seen the data.
As a result, our posterior belief becomes a compromise between our prior belief and the data.
To update our belief, we need to have a sampling distribution or model that describes the probability of different outcomes of an experiment.
We also require a prior distribution that represents our belief in a statistical form.
While modern computation allows almost any probability distribution to be used, the uniform distribution is commonly used because it assigns the same probability to every value within some range.
After incorporating new information from the data, our updated belief about the model parameters is represented by the posterior distribution.
This use of distributions to represent our belief distinguishes Bayesian inference from maximum likelihood, which results in a single combination of parameter values as a point estimate.
Hald and colleagues used uniform prior distributions for many of their parameters to express the prior belief that the true value fell within a continuous range with specific upper and lower limits.
They constrained some parameters to take the same numerical value as others.
For example, the effects of domestic and imported supplies of the same food source were linked in this manner.
This assumption expressed a strong belief that a given food source carried the same transmission risk irrespective of its origin, and simplified the model so that it was more feasible to fit the data.
Other parameters were set to a fixed reference value to further simplify the model.
Hald and colleagues employed a Poisson model to describe the probability of observing the number (formula_17) of rare transmission events that occur at a rate formula_18. As described above, the rate of cases due to a specific bacterial subtype was the sum of transmission rates across all potential sources. The Hald model was more realistic than the "Dutch" model because it allowed transmission rates to vary between subtypes and food sources. However, it was not feasible to directly measure these different rates — these parameters needed to be estimated from the data.
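As a concrete illustration of this machinery (and not of the actual Hald implementation), the following minimal Python sketch uses a Metropolis sampler to estimate a single Poisson rate under a uniform prior; the case counts and prior bounds are made up.

```python
# Minimal Metropolis sampler for one Poisson rate under a Uniform(0, 50) prior,
# a toy stand-in for the much richer Hald model with its many rate parameters.
import math
import random

counts = [12, 15, 9, 14, 11]  # hypothetical case counts for one subtype

def log_posterior(lam):
    if not (0.0 < lam < 50.0):  # uniform prior bounds
        return float("-inf")
    # Poisson log-likelihood up to an additive constant
    return sum(y * math.log(lam) - lam for y in counts)

lam, samples = 10.0, []
for step in range(20000):
    proposal = lam + random.gauss(0.0, 1.0)  # symmetric random-walk proposal
    if math.log(random.random()) < log_posterior(proposal) - log_posterior(lam):
        lam = proposal
    if step >= 5000:  # discard burn-in
        samples.append(lam)

print(sum(samples) / len(samples))  # posterior mean, close to 12.4 here
```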
Comparative methods.
Instead of comparing the frequencies of subtypes to reconstruct the transmission of pathogens between populations, many source attribution methods compare the pathogen sequences at the level of individual hosts.
One way of comparing sequences is to calculate some measure of genetic distance or similarity, a concept that we introduced earlier on the topic of pooling sequences into composite subtypes.
For example, infections that are grouped into clusters are assumed to be related through one or more recent and rapid transmission events.
Short genetic distances imply that limited time has passed for mutations to accumulate in lineages descending from their common ancestor.
Consequently, these clusters are often referred to as "transmission clusters".
Other studies have used genetic distances that exceed some threshold to rule out host individuals as potential sources of transmission.
Although this application of clustering is related to source attribution, it is not possible to infer the direction of transmission solely from the genetic distance between infections.
Furthermore, the genetic distance separating infections is not solely determined by the rate of transmission; for example, it is strongly influenced by how infections are sampled from the population.
Sequences can also be compared in the context of their shared evolutionary history.
A phylogenetic tree or phylogeny is a hypothesis about the common ancestry of species or populations. In the context of molecular epidemiology, phylogenies are used to relate infections in different hosts and are usually reconstructed from genetic sequences of each pathogen population. To reconstruct the phylogeny, the sequences must cover the same parts of the pathogen genome; for example, sequences that represent multiple copies of the same gene from different infections. It is this residual similarity (homology) between diverging populations that implies recent common ancestry. A molecular phylogeny comprises "tips" or "leaves" that represent different genetic sequences that are connected by branches to a series of common ancestors that eventually converge to a "root". The composition of the ancestral sequence at the root, the order of branching events, and the relative amount of change along each branch are all quantities that must be extrapolated from the observed sequences at the tips.
There are multiple approaches to reconstruct a phylogenetic tree from genetic sequence variation.
For example, distance-based methods use a hierarchical clustering method to build up a tree based on the observed genetic distances.
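For instance, the following minimal Python sketch applies average-linkage (UPGMA-style) hierarchical clustering to a made-up matrix of pairwise genetic distances; practical analyses more often use methods such as neighbor-joining on model-corrected distances.

```python
# Toy distance-based tree building: UPGMA-style average-linkage clustering
# of a small matrix of pairwise genetic distances.
import numpy as np
from scipy.cluster.hierarchy import average
from scipy.spatial.distance import squareform

dist = np.array([
    [0.00, 0.02, 0.10, 0.11],
    [0.02, 0.00, 0.09, 0.12],
    [0.10, 0.09, 0.00, 0.03],
    [0.11, 0.12, 0.03, 0.00],
])

# SciPy expects a condensed (upper-triangular) distance vector.
linkage = average(squareform(dist))
print(linkage)  # the merge order and heights define the tree topology and depths
```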
Phylogenetic uncertainty.
A common simplifying assumption in phylogenetic investigations is that the phylogenetic tree reconstructed from the data is the "true" tree — that is, an accurate representation of the common ancestry relating the sampled infections.
For instance, a single tree is often used as the input for comparative methods to detect the signature of natural selection in protein-coding sequences.
On the other hand, if the phylogeny is handled as an uncertain estimate derived from the data (including the sequence alignment), then the analysis becomes a hierarchical model in which the problem of phylogenetic reconstruction is nested within the problem of estimating the other model parameters that are conditional on the phylogeny (Figure 3).
Sampling both the phylogeny and other model parameters from their joint posterior distribution using methods such as Markov chain Monte Carlo (MCMC) should confer more accurate parameter estimates.
However, the greatly expanded model space also makes it more difficult for MCMC samples to converge to the posterior distribution.
Such hierarchical methods are often implemented in the software package BEAST2
(Bayesian Evolutionary Analysis by Sampling Trees), which provides generic routines for MCMC sampling from tree space, and calculates the likelihood of a time-scaled phylogenetic tree given sequence data and sample collection dates.
There are a number of sources of phylogenetic uncertainty.
For instance, the common ancestry of lineages can be difficult to reconstruct if there has been little to no evolution along the respective branches.
This can occur when the rate of evolution is substantially slower than the time scale of transmission, such that mutations are unlikely to accumulate between the start of one infection and its transmission to the next host ("i.e.", the generation time).
It can also arise when existing divergence is not captured due to incomplete sequencing of the respective genomes.
Furthermore, reconstructing the common ancestry of lineages is progressively more uncertain as we move deeper into the tree, forcing us to extrapolate the ancestral states at greater distances from the observed data.
Alignment.
Reconstructing phylogenies from molecular sequences generally requires a multiple sequence alignment, a table in which homologous residues in different sequences occupy the same position.
Although alignments are often treated as observed data known without ambiguity, the process of aligning sequences is also uncertain and can become more difficult with the rapid accumulation of sequence insertions and deletions among diverging pathogen lineages.
While there are Bayesian methods that address uncertainty in alignment by joint sampling of the alignment along with the phylogeny,
this approach is computationally complex and is seldom used in the context of source attribution.
Furthermore, sequences are themselves uncertain estimates of the genetic composition of individual pathogens or infecting populations, and next-generation sequencing technologies tend to have substantially higher error rates than conventional Sanger sequencing,
and analysis pipelines must be carefully validated to reduce the effects of sample cross-contamination and adapter contamination.
Recombination.
Genetic recombination is the exchange of genetic material between individual genomes.
For pathogens, recombination can occur when a cell is infected by multiple copies of the pathogen.
If some hosts were infected multiple times by two or more divergent variants from different sources ("i.e.," superinfection), then recombination can produce mosaic genomes that complicate the reconstruction of an accurate phylogeny.
In other words, different segments of a recombinant genome may be related to other genomes through discordant phylogenies in such a way that cannot be accurately represented by a single tree.
In practice, it is common to screen for recombinant sequences and discard them before reconstructing a phylogeny from an alignment that is assumed to be free of recombination.
Inferring transmission history from the phylogeny.
The basic premise in applying phylogenetics to source attribution is that the shape of the phylogenetic tree approximates the transmission history,
which can also be represented by a tree where each split into two branches represents the transmission of an infection from one host to another.
In conjunction with reconstructing the transmission tree from other sources of information, such as contact tracing, reconstructing a phylogenetic tree can serve as a useful additional source of information, especially when genetic sequences are already available.
Because of the visual and conceptual similarity between phylogenetic and transmission trees, it is a common assumption that the branching points (splits) of the phylogeny represent transmission events.
However, this assumption will often be inaccurate.
A transmission event may have occurred at any point along the two branches that separate one sampled infection from the other in the virus phylogeny (Figure 3A).
The transmission tree only constrains the shape of the phylogenetic tree.
Thus, even if we can reconstruct the phylogenetic tree without error, there are several reasons why it will not be an accurate representation of the transmission tree, including incomplete sampling, pathogen evolution within hosts, and secondary infection of the same host.
Incomplete sampling.
Equating the phylogenetic tree with the transmission history implicitly assumes that genetic sequences have been obtained from every infected host in the epidemic. In practice, only a fraction of infected hosts are represented in the sequence data. The existence of an unknown and inevitably substantial number of unsampled infected hosts is a major challenge for source attribution. Even if the phylogenetic tree indicates that two infections are more closely related to each other than to any other sampled infection, one cannot rule out the existence of one or more unsampled hosts who are intermediate links in the "transmission chain" separating the known hosts (Figure 3B). Similarly, an unsampled infection may have been the source population for both observed infections at the tips of the tree (Figure 3C). By itself, the phylogenetic tree does not explicitly discriminate among these alternative transmission scenarios.
Evolution within hosts.
The shape of the phylogenetic tree may diverge from the underlying transmission history because of the evolution of diverse populations of the pathogen within each host. Individual copies of the pathogen genome that are transmitted to the next host are, by definition, no longer in the source population. A split exists in the phylogenetic tree that represents the common ancestor between the transmitted lineages and the other lineages that have remained and persisted in the source population. If we follow both sets of lineages back in time, the time of the transmission event is the "most recent" possible time that they could converge to a common ancestor. Put another way, this event represents one extreme of a continuous range where the common ancestor is located further back in time.
This process is often modelled by Kingman's coalescent, which describes the number of generations we expect to follow randomly selected lineages back in time until we encounter a common ancestor.
The expected time until two lineages converge to a common ancestor, known as a coalescence event, is proportional to the effective population size, which determines the number of possible ancestors.
Put another way, two randomly selected people in a large city are less likely to have a great-grandparent in common than two people in a small rural community.
Longer coalescence times in large, diverse within-host pathogen populations are a significant challenge for source attribution, because they uncouple the virus phylogeny from the transmission tree.
For example, if a host has transmitted their infection to two others, then there can be as many as three sets of lineages whose ancestry can be traced in the source population in that host (Figure 3D).
As a result, there is some chance that the branching order in the virus phylogeny implies a different order of transmission events if we interpret the phylogeny as equivalent to a transmission tree.
For example, in Figure 3D hosts 1 and 3 are more closely related in the transmission history, but not in the phylogeny.
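To illustrate the effect of effective population size on coalescence times, the following minimal Python simulation lets a pair of lineages coalesce in each generation with probability 1/Ne (a toy Wright-Fisher approximation; all numbers are arbitrary).

```python
# Toy simulation of the time (in generations) until two random lineages
# coalesce in a population of effective size Ne; the mean waiting time is ~Ne.
import random

def pairwise_coalescence_time(ne):
    t = 1
    while random.random() >= 1.0 / ne:  # coalesce with probability 1/Ne per generation
        t += 1
    return t

for ne in (10, 100, 1000):
    times = [pairwise_coalescence_time(ne) for _ in range(5000)]
    print(ne, sum(times) / len(times))  # the mean is close to Ne
```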
Clearance and secondary infection.
Many infections can be spontaneously cleared by the host's immune system. If a host that has cleared a previously diagnosed infection becomes re-infected from another source, then it is possible for the same host to be represented by different infections in the phylogenetic and transmission trees, respectively. In addition, some individuals may become infected from multiple different sources. For example, roughly one-third of infections by hepatitis C virus are spontaneously cleared within the first six months of infection.
This previous exposure, however, does not confer immunity to re-infection by the same virus.
In addition, co-infection by multiple strains of hepatitis C virus that persist simultaneously within the same host can occur relatively frequently (with reported frequencies ranging from 14% to 39%) in populations with a high rate of transmission, such as people who inject drugs using shared equipment.
The persistence of strains from additional exposures may be missed by conventional genetic sequencing techniques if they are present at low frequencies within the host, necessitating the use of "next-generation" sequencing technologies. For these reasons, the epidemiological linkage of hepatitis C virus infections through genetic similarity may be a transient phenomenon, leading some investigators to recommend using multiple virus sequences sampled from different time points of each infection for molecular epidemiology applications.
Ancestral host-state reconstruction.
Ancestral reconstruction is the application of a model of evolution to a phylogenetic tree to reconstruct character states, such as nucleotide sequences or phenotypes, at the different ancestral nodes of the tree down to the root.
In the context of source attribution, ancestral reconstruction is frequently used to estimate the geographic location of pathogen lineages as they are carried from one region to another by their hosts.
Drawing this analogy between character evolution and the spatial migration of individuals or populations is known as phylogeography,
where the geographic location of an ancestral population is reconstructed from the current locations of its sampled descendants under some model of migration.
Migration models generally fall into two categories of discrete-state and continuous-state models.
Discrete-state or island migration models assume that a given lineage is in one of a finite number of locations, and that it changes location at a constant rate over time according to a continuous-time Markov process, analogous to the models used for molecular evolution.
Ancestral reconstruction with a discrete-state migration model has also been utilized to reconstruct the early spread of HIV-1 in association with development of transport networks and increasing population density in central Africa.
Discrete models can also be applied to the population-level source attribution of zoonotic transmissions by reconstructing different host species as ancestral character states.
For example, a discrete trait model of evolution was used to reconstruct the ancestral host species in a phylogeny relating "Staphylococcus aureus" specimens from humans and domesticated animals.
Similarly, Faria and colleagues
analyzed the cross-species transmission of rabies virus as a discrete diffusion process along the virus phylogeny, with rates influenced by the evolutionary relatedness and geographic range overlap of the respective host species.
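As a minimal illustration of a discrete-state migration model, the following Python sketch computes the probability that a lineage starting in one location is found in each of three locations after time t, using a hypothetical migration rate matrix.

```python
# Toy discrete-state ("island") migration model: given a rate matrix Q for
# movement among three locations, the location probabilities after time t
# are given by the matrix exponential expm(Q * t).
import numpy as np
from scipy.linalg import expm

# Hypothetical migration rates among locations A, B and C (rows sum to zero).
Q = np.array([
    [-0.30,  0.20,  0.10],
    [ 0.05, -0.15,  0.10],
    [ 0.10,  0.10, -0.20],
])

for t in (1.0, 10.0):
    P = expm(Q * t)
    print(t, np.round(P[0], 3))  # probabilities of A -> {A, B, C} after time t
```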
Continuous-state migration models are more similar to models of Brownian motion in that a lineage may occupy any point within a defined space.
Although continuous models can be more realistic than discrete migration models, they may also be more challenging to fit to data.
Taken literally, a continuous model requires precise geolocation data for every infection sampled from the population.
In many applications, however, these metadata are not available; for example, some studies approximate the true spatial distribution of sampled infections by the centroids of their respective regions.
This can become problematic if the regions vary substantially in area, and host populations are seldom uniformly distributed within regions.
Paraphyly.
Paraphyly is a term that originates from the study of cladistics, an evolutionary approach to systematics that groups organisms on the basis of their common ancestry. A group of infections is paraphyletic if the group includes the most recent common ancestor, but does not include all its descendants. In other words, one group is "nested" within an ancestral group. For example, birds are descended from a common ancestor that in turn shares a common ancestor with all reptiles; thus, birds are nested within the phylogeny of reptiles, making the latter a paraphyletic group. Thus, paraphyly is evidence of evolutionary precedence: the ancestor of all birds was a reptile. In the context of source attribution, paraphyly can be used as evidence that one infection preceded another. It does not provide evidence that the infection was directly transmitted from one individual to another, in part because of incomplete sampling.
The application of paraphyly for source attribution requires that the phylogenetic tree relates multiple copies of the pathogen from both the putative source and recipient hosts. To elaborate, phylogenetic trees relating different infections are often reconstructed from population-based sequences (direct sequencing of the PCR amplification product), where each sequence represents the consensus of the individual pathogen genomes sampled from the infected host. If copies of the pathogen genome are sequenced individually by limiting dilution protocols or next-generation sequencing, then one can reconstruct a tree that represents the genealogy of individual pathogen lineages, rather than the phylogeny of pathogen populations.
If sequences from host B form a monophyletic clade (in which members are the complete set of descendants from a common ancestor) that is nested within a paraphyletic group of sequences from host A, then the tree is consistent with the direction of transmission having originated from host A.
Directionality does not imply that host A directly transmitted their infection to host B, because the pathogen may have been transmitted through an unknown number of intermediate unsampled hosts before establishing an infection in host B.
Node support.
The statistical confidence in directionality of transmission from a given tree is usually quantified by the support value associated with the node that is ancestral to the nested monophyletic clade. The support of node "X" is the estimated probability that if we repeated the phylogenetic reconstruction on an equivalent data set, the new tree would contain exactly the same clade consisting exclusively of all descendants of node "X" in the original tree. In other words, it quantifies the reproducibility of that node given the data. It should not be interpreted as the probability that the clade below node "X" appears in the "true" tree.
There are generally three approaches to estimating node support:
1. "Bootstrapping." Felsenstein adapted the concept of nonparametric bootstrapping to the problem of phylogenetic reconstruction by maximum likelihood.
Bootstrapping provides a way to characterize the sampling variation associated with the data without having to collect additional, equivalent samples. To start, one generates a new data set by sampling an equivalent number of nucleotide or amino acid positions at random with replacement from the multiple sequence alignment – this new data set is referred to as a "bootstrap sample". A tree is reconstructed from the bootstrap sample using the same method as the original tree. Since we are sampling sets of homologous characters (columns) from the alignment, the information on the evolutionary history contained at that position is intact. We record the presence or absence of clades from the original tree in the new tree, and then repeat the entire process until a target number of replicate trees have been processed. The frequency at which a given clade is observed in the bootstrap sample of trees quantifies the reproducibility of that node in the original tree.
Non-parametric bootstrapping is a time-consuming process that scales linearly with the number of replicates, since every bootstrap sample is processed by the same method as the original tree, and post-processing steps are required to enumerate clades. The precision of estimating the node support values increases with the number of bootstrap replicates. For instance, it is not possible to obtain a node support of 99% if fewer than 100 bootstrap samples have been processed. Consequently, it is now more common to use faster approximate methods to estimate the support values associated with different nodes of the tree (for instance, see approximate likelihood-ratio testing below). A minimal sketch of the column-resampling step is given after this list.
2. "Bayesian sampling." Instead of using bootstrapping to resample the data, one can quantify node support by examining the uncertainty in reconstructing the phylogeny from the given data. Bayesian sampling methods such as Markov chain Monte Carlo (see Hald model) are designed to generate a random sample of parameters from the posterior distribution given the model and data. In this case, the tree is a collection of parameters. A Bayesian estimate of node support can be extracted from this sample of trees by counting the number of trees in which the monophyletic clade that descends from that specific node appears.
Bayesian sampling is computationally demanding because the space of all possible trees is enormous, making convergence difficult or not feasible to attain for large data sets.
3. "Approximate likelihood-ratio testing." Unlike Bayesian sampling, this method is performed on a single estimate of the tree based on maximum likelihood, where the likelihood is the probability of the observed data given the tree and model of evolution. The likelihood ratio test (LRT) is a method for selecting between two models or hypotheses, where the ratio of their likelihoods is a test statistic that is mapped to a null distribution to assess statistical significance. In this application, the alternative hypothesis is that a branch in the reconstructed tree has a length of zero, which would imply that the descendant clade cannot be distinguished from its background.
This makes the LRT a localized analysis: it evaluates the support of a node when the rest of the tree is assumed to be true. On the other hand, this narrow scope makes the approximate LRT method computationally efficient in comparison to Bayesian sampling and bootstrap sampling. In addition to the LRT method, there are several other methods for fast approximation of bootstrap support and this remains an active area of research.
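Returning to the bootstrap approach (item 1 above), the following minimal Python sketch generates a single bootstrap replicate by resampling alignment columns with replacement; the alignment is made up, and the tree-building step that would follow each replicate is omitted.

```python
# Toy example of one bootstrap replicate of a multiple sequence alignment,
# produced by resampling columns with replacement; in a full analysis each
# replicate would then be passed to the same tree-building method.
import random

alignment = {
    "hostA": "ACGTACGTTA",
    "hostB": "ACGTACGATA",
    "hostC": "ACGAACGATT",
}

def bootstrap_alignment(aln):
    length = len(next(iter(aln.values())))
    cols = [random.randrange(length) for _ in range(length)]  # sample with replacement
    return {name: "".join(seq[c] for c in cols) for name, seq in aln.items()}

replicate = bootstrap_alignment(alignment)
for name, seq in replicate.items():
    print(name, seq)
```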
Background sequences.
The interpretation of monophyletic and paraphyletic clades is contingent on whether a sufficient number of infections have been sampled from the host population. Sequences from one host can only become paraphyletic relative to sequences from a second host if the tree contains additional sequences from at least one other host in the population. As noted above, there may be unsampled host individuals in a "transmission chain" connecting the putative source to the recipient host (Figure 3B). The incorporation of background sequences from additional hosts in the population is similar to the problem of rooting a phylogeny using an outgroup, where the root represents the earliest point in time in the tree. The location of this "root" in the section of the tree relating the sequences from the two hosts determines which host is interpreted to be the potential source.
There are no formal guidelines for selecting background sequences. Typically, one incorporates sequences that were collected in the same geographic region as the two hosts under investigation. These local sequences are sometimes augmented with additional sequences that are retrieved from public databases based on their genetic similarity ("e.g.", BLAST), which were not necessarily collected from the same region. Generally, the background data comprise consensus (bulk) sequences where each host is represented by a single sequence, unlike the putative source and recipient hosts from whom multiple clonal sequences have been sampled. Because clonal sequencing is more labor-intensive, such data are usually not available to use as background sequences. The incorporation of different types of sequences (clonal and bulk) into the same phylogeny may bias the interpretation of results, because it is not possible for sequences to be nested within the consensus sequence from a single background host.
Phylodynamic methods.
In general, phylodynamics is a subdiscipline of molecular epidemiology and phylogenetics that concerns the reconstruction of epidemiological processes, such as the rapid expansion of an epidemic or the emergence of herd immunity in the host population, from the shape of the phylogenetic tree relating infections sampled from the population.
A phylodynamic method uses tree shape as the primary data source to parameterize models representing the biological processes that influenced the evolutionary relationships among the observed infections. This process should not be confused with fitting models of evolution (such as a nucleotide substitution model or molecular clock model) to reconstruct the shape of the tree from the observed characteristics of related populations (infections), which originates from the field of phylogenetics. The relatively rapid evolution of viruses and bacteria makes it feasible to reconstruct the recent dynamics of an epidemic from the shape of the phylogeny reconstructed from infections sampled in the present.
The use of phylodynamic methods for source attribution involves reconstruction of the transmission tree, which cannot be directly observed, from its residual effect on the shape of the phylogenetic tree. Although there are established methods for reconstructing phylogenetic trees from the genetic divergence among pathogen populations sampled from different host individuals, there are several reasons why the phylogeny may be a poor approximation of the transmission tree (Figure 3). In this context, phylodynamic methods attempt to reconcile the discordance between the phylogeny and the transmission tree by modeling one or more of the processes responsible for this discordance, and fitting these models to the data (Figure 4).
Given the complexity of phylodynamic models, these methods predominantly use Bayesian inference to sample transmission trees from the posterior distribution, where the transmission tree is an explicit model of "who infected whom". Although these methods can estimate the probability of a direct transmission from one individual to another, this probability is conditional on how well the model (selected from a number of possible models) approximates reality.
Below we describe models that have been implemented to incorporate, but not eliminate, the additional uncertainty caused by the various assumptions required when using the phylogenetic tree as an approximation of the transmission history.
Demographic and transmission models.
A basic simplifying assumption is that every infection in the epidemic is represented by at least one genetic sequence in the data set
(complete sampling). Although complete sampling may be feasible in circumstances such as an outbreak of disease transmission among farms in a defined geographic region,
it is generally not possible to rule out unsampled sources in other contexts. This is especially true for infectious diseases that are stigmatized and/or associated with marginalized populations,
that have a long asymptomatic period,
or in the context of a generalized epidemic where disease prevalence may substantially exceed the local capacity for sample collection and genetic sequencing.
Several methods attempt to address the presence of unsampled hosts by modeling the growth of the epidemic over time, which predicts the total number of infected hosts at any given time. Put another way, the probability that an infection was transmitted from an unsampled source is determined in part by the total size of the infected population at the time of transmission. These models of epidemic growth are sometimes referred to as demographic models because some are derived from population growth models such as the exponential and logistic growth models. Alternatively, the number of infections can be modeled by a compartmental model that describes the rate that individual hosts switch from susceptible to infected states, and can be extended to incorporate additional states such as recovery from infection or different stages of infection.
An important distinction between population growth and compartmental models is that the number of uninfected susceptible hosts is tracked explicitly in the latter.
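As a minimal illustration of a compartmental model, the following Python sketch integrates a simple susceptible-infected-recovered (SIR) system with an Euler scheme; all parameter values are arbitrary.

```python
# Toy SIR compartmental model integrated with a simple Euler scheme;
# beta is the transmission rate and gamma the recovery rate.
beta, gamma = 0.3, 0.1
S, I, R = 9990.0, 10.0, 0.0
N = S + I + R
dt = 0.1

for step in range(int(200 / dt)):  # simulate 200 time units
    new_infections = beta * S * I / N * dt
    new_recoveries = gamma * I * dt
    S, I, R = S - new_infections, I + new_infections - new_recoveries, R + new_recoveries

print(round(S), round(I), round(R))  # final sizes of the three compartments
```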
A phylodynamic analysis attempts to parameterize the growth model either by using the phylogeny as a direct proxy of the transmission tree, or by accounting for the discordance between these trees due to within-host diversity using a population genetic model, such as the coalescent (Figure 4).
Bayesian methods make it feasible to supplement this task with other data sources, such as the reported case incidence and/or prevalence over time.
The transmission process can be mapped to the size of the infected population using either a coalescent (reverse-time) model or a forward-time model such as birth-death or branching processes.
Thus, the coalescent model has two different applications in phylodynamics.
First, it can be used to address the confounding effect of diverse pathogen populations within hosts, by explicitly modeling the common ancestry of individual pathogens.
Second, the coalescent can be adapted to model the spread of infections back in time,
drawing an analogy between the common ancestry of individuals within hosts and the transmission of infections among hosts.
This parallel has also been explored by phylodynamic models based on the structured coalescent,
where the population can be partitioned into two or more subpopulations (demes).
Each deme represents an infected host individual.
Due to limited migration of pathogen lineages between demes, two pathogen lineages sampled at random are more likely to share a recent common ancestor if they belong to the same deme.
Birth-death models describe the proliferation of infections forward in time, where a "birth" event represents the transmission of an infection to an uninfected susceptible host, and a "death" event can represent either the diagnosis and treatment of an infection, or its spontaneous clearance by the host.
This class of models was originally formulated to describe the proliferation of species through speciation and extinction.
Similarly, branching processes model the growth of an epidemic forward in time where the number of transmissions from each infected host ("offspring") is described by a discrete probability distribution over non-negative integers, such as the negative binomial distribution.
Branching process models tend to use the simplifying assumption that this offspring distribution remains constant over time, making this class of models more appropriate for the initial stage of an epidemic where most of the population is uninfected.
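The following minimal Python simulation sketches such a branching process with a negative binomial offspring distribution, parameterized by a mean (R0) and a dispersion parameter (k); the values are illustrative only.

```python
# Toy branching process: each infected host transmits to a negative-binomially
# distributed number of new hosts, generation by generation.
import numpy as np

rng = np.random.default_rng(1)
R0, k = 1.5, 0.3  # mean offspring number and dispersion

cases = 1
for generation in range(8):
    # NumPy's parameterization: n = k and p = k / (k + R0) give mean R0.
    offspring = rng.negative_binomial(k, k / (k + R0), size=cases)
    cases = int(offspring.sum())
    print(generation + 1, cases)
    if cases == 0:  # epidemic went extinct
        break
```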
Within-host diversity.
As noted above, the diversification of pathogen populations within each host results in a discordance between the shapes of the pathogen phylogeny and the transmission tree. Phylodynamic methods that treat the phylogeny as equivalent to the transmission tree assume implicitly that the population within each host is small enough to be approximated by a single lineage.
If the within-host population is diverse, then the time of a transmission event will tend to underestimate the time since two lineages split from their common ancestor (Figure 3A); this phenomenon is analogous to incomplete lineage sorting, which causes gene trees to differ from the species tree.
The resulting discordance between the phylogenetic and transmission trees makes it more difficult to reconstruct the latter from the observed data. Moreover, the effect of within-host diversity becomes even greater if there are incomplete transmission bottlenecks — where a new infection is established by more than one lineage transmitted from the source population — because the common ancestor of pathogen lineages may be located in previous hosts further back in time.
Controversies.
Source attribution is an inherently controversial application of molecular epidemiology because it identifies a specific population or individual as being responsible for the onward transmission of an infectious disease.
Because molecular source attribution increasingly requires the specialized and computationally-intensive analysis of complex data, the underlying model assumptions and level of uncertainty in these analyses are often not made accessible to principal stakeholders, including the key affected populations and community advocates.
Molecular forensics and HIV-1 transmission.
Outside of a public health context, the concept of source attribution has significant legal and ethical implications for people living with HIV to potentially become prosecuted for transmitting their infection to another person. The transmission of HIV-1 without disclosing one's infection status is a criminally prosecutable offense in many countries,
including the United States. For example, defendants in HIV transmission cases in Canada have been charged with aggravated sexual assault, with a "maximum penalty of life imprisonment and mandatory lifetime registration as a sex offender".
Molecular source attribution methods have been utilized as forensic evidence in such criminal cases.
Forensic applications of phylogenetic clustering.
One of the earliest and best-known examples of an HIV-1 transmission case was the investigation of the so-called "Florida dentist", where an HIV-positive dentist was accused of transmitting his infection to a patient.
Although genetic clustering — specifically, clustering in the context of a phylogeny — was applied to these data to demonstrate that HIV-1 particles sampled from the dentist were genetically similar to those sampled from the patient,
clustering alone is not sufficient for source attribution.
Clusters can only provide evidence that infections are unlikely to be epidemiologically linked because they are too dissimilar relative to other infections in the population.
For example, similar phylogenetic methods were used in a subsequent case to demonstrate that the HIV-1 sequence obtained from the patient was far more similar to the sequence from their sexual partner than the sequence from a third party under investigation.
Clustering provides no information on the directionality of transmission ("e.g.", whether the infection was transmitted from individual A to individual B, or from B to A; Figure 3), nor can it rule out the possibility that one or more other unknown persons (from whom no virus sequences have been obtained) were involved in the transmission history.
Despite these known limitations of clustering, statements on the genetic similarity of infections continue to appear in court cases.
On the other hand, clustering can have population-level benefits by enabling public health agencies to rapidly detect elevated rates of transmission in a population, and thereby optimize the allocation of prevention efforts.
The expansion of public health applications of clustering
has raised concerns among people living with HIV that this use of personal health data might also expose them to a greater risk of criminal prosecution for transmission.
Forensic applications of paraphyly methods.
Source attribution methods based on paraphyly have been used in the prosecution of individuals for HIV-1 transmission. One of the earliest examples was published in 2002, where a physician was accused of intentionally injecting blood from one patient ("P") who was HIV-1 positive into another patient ("V") who had previously been in a relationship with the physician.
This study used maximum likelihood methods to reconstruct a phylogenetic tree relating HIV-1 sequences from both patients. Paraphyly of sequences from "P", implying either direct or indirect transmission to "V", was reported for the phylogeny reconstructed from RT sequences (Figure 5). However, a second tree reconstructed from the more diverse HIV-1 envelope ("env") sequences from the same group was inconclusive on the direction of transmission: it showed only that the "env" sequences from patients "P" and "V" clustered into two respective monophyletic groups that were jointly distinct from the background.
The use of paraphyly for source attribution was stimulated by the advent of next-generation sequencing, which made it more cost-effective to rapidly sequence large numbers of individual viruses from multiple host individuals. More recent work
has also developed a formalized framework for interpreting the distribution of sequences in the phylogeny as being consistent with a direction of transmission. Several studies have since applied this framework to re-analyze or develop forensic evidence for HIV transmission cases in Serbia,
Taiwan,
China
and Portugal
The growing number of such studies has led to controversy on the ethical and legal implications of this type of phylogenetic analysis for HIV-1.
The accuracy of classifying a group of sequences in a phylogeny into monophyletic or paraphyletic groups is highly contingent on the accuracy of tree reconstruction.
As described above (see Paraphyly), our statistical confidence of a specific clade in the tree is quantified by the estimated probability that the same clade would be obtained if the tree reconstruction was repeated on an equivalent data set.
This support value is not the probability that the clade appears in the "true" tree because this quantity is conditional on the data at hand; however, it is often misinterpreted this way.
If the branch separating a nested monophyletic clade of sequences from host A from the paraphyletic group of sequences from host B has a low support value, then the conventional procedure would be to remove that branch from the tree.
This would have the result of collapsing the monophyletic and paraphyletic clades so that the tree is inconclusive about either direction of transmission.
However, this procedure has not been consistently used in source attribution investigations.
For example, the trees displayed in the 2020 study in Taiwan
do not support transmission from the defendant to the plaintiff when branches with low support (<80%) are collapsed.
Moreover, the result can vary with the region of the virus genome targeted for sequencing.
The use of paraphyly to infer the direction of transmission was recently evaluated on a prospective cohort of HIV serodiscordant couples (where one partner was HIV positive at the start of the study).
Applying the paraphyly method to next-generation sequence data generated from samples obtained from 33 pairs where the HIV negative partner became infected over the course of the study, the authors found that the direction of transmission was incorrectly reconstructed in about 13% to 21% of cases, depending on which sequences were analyzed.
However, a follow-up study involving many of the same authors
used a more comprehensive sequencing method to cover the full virus genome in depth from all host individuals, lowering the percentage of misclassified cases to 3.1%.
Forensic applications of phylodynamics.
A common feature of both clustering and paraphyly methods is that neither approach explicitly tests the hypothesis that an infection was directly transmitted from a specific source population or individual to the recipient.
Phylodynamic methods attempt to overcome the discordance between the pathogen phylogeny and the underlying transmission history by modeling the processes that contribute to this discordance, such as the evolution of pathogen populations within each host.
The development of phylodynamic methods for source attribution has been a rapidly expanding area, with a large number of published studies and associated software released since 2014 (see Software).
Because these methods have tended to be applied to other infectious diseases including influenza A virus,
foot-and-mouth disease virus
and "Mycobacterium tuberculosis",
they have so far avoided the ethical issues of stigma and criminalization associated with HIV-1.
However, applications of phylodynamic source attribution to HIV-1 have begun to appear in the literature.
For example, in a study based in Alberta, Canada, the investigators used a phylodynamic method (TransPhylo) to reconstruct transmission events among patients receiving treatment at their clinic from HIV-1 sequence data.
Although the program TransPhylo attempts, by default, to estimate the proportion of infections that are unsampled, the investigators fixed this proportion to 1%.
By so doing, their analysis carried the unrealistic assumption that nearly every person living with HIV-1 in their regional epidemic (comprising at least 1,800 people) was represented in their data set of 139 sequences.
2010 cholera outbreak in Haiti.
In the aftermath of a magnitude 7.0 earthquake that struck Haiti in 2010, there was a large-scale outbreak of cholera, a gastrointestinal infection caused by the bacterium "Vibrio cholerae".
Nearly 800,000 Haitians became infected and nearly 10,000 died in one of the most significant outbreaks of cholera in modern history.
Initial microbial subtyping using pulsed-field gel electrophoresis indicated that the outbreak was most genetically similar to cholera strains sampled in South Asia.
To map the plausible source of infection more comprehensively, cholera strains from South Asia and South America were compared to the strains sampled from the Haitian outbreak.
Whole genome sequences taken from cases in Haiti shared more sites in common with sequences from South Asia ("i.e.", Nepal and Bangladesh) than with those from geographic areas closer to Haiti.
Direct comparisons were also made between the cholera strains taken from three Nepalese soldiers and three Haitian locals, which were nearly identical in genome sequence, forming a phylogenetic cluster.
Based on the evidence gathered by phylogenetic source attribution studies, the role of Nepalese soldiers who were part of the United Nations Stabilization Mission to Haiti (MINUSTAH) in this outbreak was officially recognized by the United Nations in 2016.
2019/2020 novel coronavirus outbreak.
In December 2019, an outbreak of 27 cases of viral pneumonia was reported in association with a seafood market in Wuhan, China. Known respiratory viruses including influenza A virus, respiratory syncytial virus and SARS coronavirus were soon ruled out by laboratory testing. On January 10, 2020, the genome sequence of the novel coronavirus, most closely related to bat SARS-coronaviruses, was released into the public domain. Despite unprecedented quarantine measures, the virus (eventually named SARS-CoV-2) spread to other countries including the United States, with global prevalence exceeding 556 million confirmed cases as of July 15, 2022.
This outbreak spurred an unprecedented level of epidemiological and genomic data sharing and real-time analysis, which was often communicated by social media prior to peer review.
Much of this knowledge translation was mediated through the open-source project Nextstrain
that performs phylogenetic analyses on pathogen sequence data as they become available on public and access-restricted databases, and uses the results to update web documents in real time.
On March 4, 2020, Nextstrain developers released a phylogeny in which a SARS-CoV-2 genome that was isolated from a German patient occupied an ancestral position relative to a monophyletic clade of sequences sampled from Europe and Mexico.
Users of the Twitter social media platform soon commented on the related post from Nextstrain that onward transmission from the German individual seemed to have "led directly to some fraction of the widespread outbreak circulating in Europe today".
These comments were soon followed by criticism from other users that attributing the outbreak in Europe to the German patient as the source individual was drawing conclusions about the directionality of transmission from an incompletely sampled tree.
In other words, the tree was reconstructed from a highly incomplete sample of cases from the ongoing outbreak, and the addition of other sequences had a substantial probability of modifying the inferred relationship between the German sequence and the clade in question.
Nevertheless, the interpretation attributing the European outbreak to a German source propagated through social media, causing some users to call on Germany to apologize.
Software.
There are numerous computational tools for source attribution that have been published, particularly for phylodynamic methods.
Table 1 provides a non-exhaustive listing of some of the software in the public domain.
Several of these programs are implemented within the Bayesian software package BEAST,
including SCOTTI, BadTrIP, and beastlier.
This listing does not include clustering methods, which are not designed for the purpose of source attribution, but may be used to develop microbial subtype definitions; clustering methods have previously been reviewed in the molecular epidemiology literature.
Sources.
This article incorporates text from a free content work. Licensed under CC BY 4.0. Text taken from ""Molecular source attribution"", Chao E, Chato C, Vender R, Olabode AS, Ferreira RC, Poon AFY (2022), "PLOS Computational Biology".
References.
| [
{
"math_id": 0,
"text": "\\lambda_{ij}=\\frac{p_{ij}}{\\sum_{j}p_{ij}}n_i"
},
{
"math_id": 1,
"text": "p_{ij}"
},
{
"math_id": 2,
"text": "j"
},
{
"math_id": 3,
"text": "i"
},
{
"math_id": 4,
"text": "n_i"
},
{
"math_id": 5,
"text": "0.5/(0.8+0.5+0.1)\\times 100=35.7"
},
{
"math_id": 6,
"text": "\\lambda_i"
},
{
"math_id": 7,
"text": "\\lambda_i = \\sum_i \\lambda_{ij} = \\sum_i q_i M_j a_j p_{ij}"
},
{
"math_id": 8,
"text": "q_i"
},
{
"math_id": 9,
"text": "M_j"
},
{
"math_id": 10,
"text": "a_j"
},
{
"math_id": 11,
"text": "\\theta"
},
{
"math_id": 12,
"text": "D"
},
{
"math_id": 13,
"text": "P(\\theta \\mid D) \\propto P(D \\mid \\theta)P(\\theta)"
},
{
"math_id": 14,
"text": "P(\\theta \\mid D)"
},
{
"math_id": 15,
"text": "P(D \\mid \\theta)"
},
{
"math_id": 16,
"text": "P(\\theta)"
},
{
"math_id": 17,
"text": "Y"
},
{
"math_id": 18,
"text": "\\lambda"
}
]
| https://en.wikipedia.org/wiki?curid=72344861 |
72346236 | From Zero to Infinity | Number theory book
From Zero to Infinity: What Makes Numbers Interesting is a book in popular mathematics and number theory by Constance Reid. It was originally published in 1955 by the Thomas Y. Crowell Company. The fourth edition was published in 1992 by the Mathematical Association of America in their MAA Spectrum series. A K Peters published a fifth "Fiftieth anniversary edition" in 2006.
Background.
Reid was not herself a professional mathematician, but came from a mathematical family that included her sister Julia Robinson and brother-in-law Raphael M. Robinson. She had worked as a schoolteacher, but by the time of the publication of "From Zero to Infinity" she was a "housewife and free-lance writer". She became known for her many books about mathematics and mathematicians, aimed at a popular audience, of which this was the first.
Reid's interest in number theory was sparked by her sister's use of computers to discover Mersenne primes. She published an article on a closely related topic, perfect numbers, in "Scientific American" in 1953, and wrote this book soon afterward. Her intended title was "What Makes Numbers Interesting"; the title "From Zero to Infinity" was a change made by the publisher.
Topics.
The twelve chapters of "From Zero to Infinity" are numbered by the ten decimal digits, formula_0 (Euler's number, approximately 2.71828), and formula_1, the smallest infinite cardinal number. Each chapter's topic is in some way related to its chapter number, with a generally increasing level of sophistication as the book progresses.
The first edition included only chapters 0 through 9. The chapter on infinite sets was added in the second edition, replacing a section on the interesting number paradox. Later editions of the book were "thoroughly updated" by Reid; in particular, the fifth edition includes updates on the search for Mersenne primes and the proof of Fermat's Last Theorem, and restores an index that had been dropped from earlier editions.
Audience and reception.
"From Zero to Infinity" has been written to be accessible both to students and non-mathematical adults, requiring only high-school level mathematics as background. Short sets of "quiz questions" at the end chapter could be helpful in sparking classroom discussions, making this useful as supplementary material for secondary-school mathematics courses.
In reviewing the fourth edition, mathematician David Singmaster describes it as "one of the classic works of mathematical popularisation since its initial appearance", and "a delightful introduction to what mathematics is about". Reviewer Lynn Godshall calls it "a highly-readable history of numbers", "easily understood by both educators and their students alike". Murray Siegel describes it as a must have for "the library of every mathematics teacher, and university faculty who prepare students to teach mathematics".
Singmaster complains only about two pieces of mathematics in the book: the assertion in chapter 4 that the Egyptians were familiar with the 3-4-5 right triangle (still the subject of considerable scholarly debate) and the omission from chapter 7 of any discussion of why classifying constructible polygons can be reduced to the case of prime numbers of sides. Siegel points out another small error, on algebraic factorization, but suggests that finding it could make another useful exercise for students.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "e"
},
{
"math_id": 1,
"text": "\\aleph_0"
},
{
"math_id": 2,
"text": "e^{i\\pi}=1"
}
]
| https://en.wikipedia.org/wiki?curid=72346236 |
72353274 | Simplicial complex recognition problem | Computational problem in algebraic topology
The simplicial complex recognition problem is a computational problem in algebraic topology. Given a simplicial complex, the problem is to decide whether it is homeomorphic to another fixed simplicial complex. The problem is undecidable for complexes of dimension 5 or more.
Background.
An abstract simplicial complex (ASC) is a family of sets that is closed under taking subsets (every subset of a set in the family is also a set in the family). Every abstract simplicial complex has a unique geometric realization in a Euclidean space as a geometric simplicial complex (GSC), where each set with "k" elements in the ASC is mapped to a ("k"-1)-dimensional simplex in the GSC. Thus, an ASC provides a finite representation of a geometric object. Given an ASC, one can ask several questions regarding the topology of the GSC it represents.
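The defining closure condition is simple to check by machine. The following Python sketch (an illustration only; the function names are hypothetical rather than taken from the literature on this problem) tests whether a finite family of sets is closed under taking subsets and reports the dimension of its geometric realization.
<syntaxhighlight lang="python">
def is_simplicial_complex(family):
    """Check that a finite family of sets is closed under taking subsets.

    It suffices to check the faces obtained by deleting a single element:
    if every such face (ignoring the empty face) is present, induction
    gives closure under all nonempty subsets.
    """
    fam = {frozenset(s) for s in family}
    for simplex in fam:
        for v in simplex:
            face = simplex - {v}
            if face and face not in fam:
                return False
    return True


def dimension(family):
    """Dimension of the geometric realization: largest set size minus one."""
    return max(len(s) for s in family) - 1


# A hollow triangle: three vertices and three edges, but no 2-dimensional face.
triangle_boundary = [{1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}]
print(is_simplicial_complex(triangle_boundary))  # True
print(dimension(triangle_boundary))              # 1
</syntaxhighlight>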
Homeomorphism problem.
The homeomorphism problem is: given two finite simplicial complexes representing smooth manifolds, decide whether they are homeomorphic. The problem is undecidable for complexes of dimension 5 or more; the same is true if "homeomorphic" is replaced with "piecewise-linear homeomorphic".
Recognition problem.
The recognition problem is a sub-problem of the homeomorphism problem, in which one simplicial complex is given as a fixed parameter. Given another simplicial complex as an input, the problem is to decide whether it is homeomorphic to the given fixed complex.
Manifold problem.
The manifold problem is: given a finite simplicial complex, is it homeomorphic to a manifold? The problem is undecidable; the proof is by reduction from the word problem for groups.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S^3"
},
{
"math_id": 1,
"text": "S^d"
},
{
"math_id": 2,
"text": "S^4"
}
]
| https://en.wikipedia.org/wiki?curid=72353274 |
72359343 | Bias in the introduction of variation | Theory in the domain of evolutionary biology
Bias in the introduction of variation ("arrival bias") is a theory in the domain of evolutionary biology that asserts biases in the introduction of heritable variation are reflected in the outcome of evolution. It is relevant to topics in molecular evolution, evo-devo, and self-organization. In the context of this theory, "introduction" ("origination") is a technical term for events that shift an allele frequency upward from zero ("mutation" is the genetic process that converts one allele to another, whereas "introduction" is the population genetic process that adds to the set of alleles in a population with non-zero frequencies).
Formal models demonstrate that when an evolutionary process depends on introduction events, mutational and developmental biases in the generation of variation may influence the course of evolution by a first come, first served effect, so that evolution reflects the arrival of the likelier, not just the survival of the fitter.
Whereas mutational explanations for evolutionary patterns are typically assumed to imply or require neutral evolution, the theory of arrival biases distinctively predicts the possibility of "mutation-biased adaptation".
Direct evidence for the theory comes from laboratory studies showing that adaptive changes are systematically enriched for mutationally likely types of changes.
Retrospective analyses of natural cases of adaptation also provide support for the theory.
This theory is notable as an example of contemporary structuralist thinking, contrasting with a classical functionalist view in which the course of evolution is determined by natural selection (see ).
History.
The theory of biases in the introduction process as a cause of orientation or direction in evolution has been explained as the convergence of two threads. The first, from theoretical population genetics, is the explicit recognition by theoreticians (toward the end of the 20th century) that a correct treatment of evolutionary dynamics requires a rate-dependent process of introduction (origination) missing from classical treatments of evolution as a process of shifting frequencies of available alleles. This recognition is evident in the emergence of "origin-fixation" models that depict evolution as a 2-step process of origination and fixation (by drift or selection), with a rate specified by multiplying a rate of introduction (based on the mutation rate) with a probability of fixation (based on the fitness effect). Origin-fixation models appeared in the midst of the molecular revolution, a half-century after the origins of theoretical population genetics: they were soon widely applied in neutral models for rates and patterns of molecular evolution; their use in models of molecular adaptation was popularized in the 1990s; by 2014 they were described as a major branch of formal theory.
The second thread is a long history of attempts to establish the thesis that mutation and development exert a dispositional influence on evolution by presenting options for subsequent functional evaluation, i.e., acting in a manner that is logically prior to selection.
Many evolutionary thinkers have proposed some form of this idea.
In the early 20th century, authors such as Eimer and Cope held that development constrains or channels evolution so strongly that the effect of selection is of secondary importance.
Early geneticists such as Morgan and Punnett proposed that common parallelisms (e.g., involving melanism or albinism) may reflect mutationally likely changes.
Expanding on Vavilov's (1922) exploration of this theme, Spurway (1949)
wrote that "the mutation spectrum of a group may be more important than many of its morphological or physiological features."
Similar thinking featured in the emergence of evo-devo, e.g., Alberch (1980) suggests that "in evolution, selection may decide the winner of a given game but development non-randomly defines the players" (p. 665) (see also ).
Thomson (1985),
reviewing multiple volumes addressing the new developmentalist thinking (a book by Raff and Kaufman (1983) and conference volumes edited by Bonner (1982) and Goodwin, et al. (1983)),
wrote that "The whole thrust of the developmentalist approach to evolution is to explore the possibility that asymmetries in the introduction of variation at the focal level of individual phenotypes, arising from the inherent properties of developing systems, constitutes a powerful source of causation in evolutionary change" (p. 222). Likewise, the paleontologists Elisabeth Vrba and Niles Eldredge summarized this new developmentalist thinking by saying that "bias in the introduction of phenotypic variation may be more important to directional phenotypic evolution than sorting by selection."
However, the notion of a developmental influence on evolution was rejected by Mayr and others such as Maynard Smith ("If we are to understand evolution, we must remember that it is a process which occurs in populations, not in individuals.")
and Bruce Wallace ("problems concerned with the orderly development of the individual are unrelated to those of the evolution of organisms through time"),
as being inconsistent with accepted concepts of causation. This conflict between evo-devo and neo-Darwinism is the focus of a book-length treatment by philosopher Ron Amundson (see also Scholl and Pigliucci, 2015 ).
In the theory of evolution as shifting gene frequencies that prevailed at the time, evolutionary causes are "forces" that act as mass pressures (i.e., the aggregate effects of countless individual events) shifting allele frequencies (see Ch. 4 of ), thus development did not qualify as an evolutionary cause. A widely cited 1985 commentary on "developmental constraints" advocated the importance of developmental influences, but did not anchor this claim with a theory of causation, a deficiency noted by critics, e.g., Reeve and Sherman (1993) defended the adaptationist program (against the developmentalists and the famous critique of adaptationism by Gould and Lewontin), arguing that the "developmental constraints" argument simply restates the idea that development shapes variation, without explaining how such preferences prevail against the pressure of selection.
Mayr (1994)
insisted that developmentalist thinking was "hopelessly mixed up" because development is a proximate cause and not an evolutionary one.
In this way, developmentalist thinking was received in the 1980s and 1990s as speculation without a rigorous grounding in causal theories, an attitude that persists (e.g., Lynch, 2007 ).
In response to these rebukes, developmentalists concluded that population genetics "cannot provide a complete account of evolutionary causation":
instead, a dry statistical account of changes in gene frequencies from population genetics must be supplemented with a wet biological account of changes in developmental-genetic organization (called "lineage explanation" in ).
The beliefs that (1) developmental biology was never integrated into the Modern Synthesis and (2) population genetics must be supplemented with alternative narratives of developmental causation, are now widely repeated in the evo-devo literature and are given explicitly as motivations for reform via an Extended Evolutionary Synthesis.
The proposal to recognize the introduction process formally as an evolutionary cause provides a different resolution to this conflict. Under this proposal, the key to understanding the structuralist thesis of the developmental biologists was "a previously missing population-genetic theory for the consequences of biases in introduction". The authors criticized classical reasoning for framing the efficacy of variational tendencies as a question of evolution by mutation pressure, i.e., the transformation of populations by recurrent mutation. They argued that, if generative biases are important, this cannot be because they out-compete selection as forces under the shifting-gene-frequencies theory, but because they act "prior" to selection, via introduction. Thus the theory of arrival biases proposes that the generative dispositions of a developmental-genetic system (i.e., its tendencies to respond to genetic perturbation in preferential ways) shape evolution by mediating biases in introduction. The theory, which applies to both mutational and developmental biases, addresses how such preferences can be effective in shaping the course of evolution even while strong selection is at work.
Systematic evidence for predicted effects of introduction biases first began to appear from experimental studies of adaptation in bacteria and viruses.
Since 2017, this support has widened to include systematic quantitative results from laboratory adaptation, and similar but less extensive results from the retrospective analysis of natural adaptations traced to the molecular level (see below).
The empirical case that biases in mutation shape adaptation is considered to be established for practical purposes such as evolutionary forecasting (e.g., ).
However, the implications of the theory have not been tested critically in regard to morphological and behavioral traits in animals and plants that are the traditional targets of evolutionary theorizing (see Ch. 9 of ). Thus, the relevance of the theory to molecular adaptation has been established, but the significance for evo-devo remains unclear.
The theory sometimes appears associated with calls for reform from advocates of evo-devo (e.g.,), though it has not yet appeared in textbooks or in broad treatments of challenges in evolutionary biology (e.g.,).
Simple model.
The kind of dual causation proposed by the theory has been explained with the analogy of "Climbing Mount Probable." Imagine a robot on a rugged mountain landscape, climbing by a stochastic 2-step process of proposal and acceptance. In the proposal step, the robot reaches out with its limbs to sample various hand-holds, and in the acceptance step, the robot commits and shifts its position. If the acceptance step is biased to favor higher hand-holds, the climber will ascend. But one also may imagine a bias in the proposal step, e.g., the robot may sample more hand-holds on the left than on the right. Then the dual proposal-acceptance process will show both an upward bias due to a bias in acceptance, and a leftward bias due to a bias in proposal.
If the landscape is rugged, the ascent will end on a local peak that (due to the proposal bias) will tend to be to the left of the starting point. On a perfectly smooth landscape, the climber will simply spiral to the left until the single global peak is reached.
In either case, the trajectory of the climber is subject to a dual bias. These two biases are not pressures competing to determine an allele frequency: they act at different steps, along non-identical dimensions.
The dual effect predicted by the theory was demonstrated originally with a population-genetic model of a 1-step adaptive walk with 2 options, i.e., the climber faces two upward choices, one with a higher selection coefficient and the other with a higher mutation rate. A key feature of the model is that neither of the alternatives is present in the initial population: they must be introduced. In simulated adaptation under this model, the population frequently reaches fixation for the mutationally favored allele, even though it is not the most fit option. The form of the model is agnostic with respect to whether the biases are mutational or developmental. Subsequent theoretical work (below) has generalized on the theory of one-step walks, and also considered longer-term adaptive walks on complex fitness landscapes. The general implication for parallel evolution is that biases in introduction may contribute strongly to parallelism. The general implication for the directionality and repeatability of adaptive walks is simply that some paths are more evolutionarily favorable due to being mutationally favorable. The general implication for the long-term predictability of outcomes, e.g., particular phenotypes, is that some phenotypes are more findable than others due to mutational effects, and such effects may strongly shape the distribution of evolved phenotypes.
The application of the theory to problems in evo-devo and self-organization relies formally on the concept of a genotype-phenotype (GP) map. The genetic code, for example, is a GP map that induces asymmetries in mutationally accessible phenotypes. Consider evolution from the Met (amino acid) phenotype encoded by the ATG (codon) genotype. A phenotypic shift from Met to Val requires an ATG to GTG mutation; a shift from Met to Leu can occur by 2 different mutations (ATG to CTG or TTG); a shift from Met to Ile can occur by 3 different mutations (to ATT, ATC, or ATA). If each type of genetic mutation has the same rate, i.e., with no mutation bias "per se", the GP map induces 3 different rates of introduction of the alternative phenotypes Val, Leu and Ile. Due to this bias in introduction, evolution from Met to Ile is favored, and this is not due to a mutational bias (in the sense of a bias reflecting the mechanisms of mutagenesis), but rather an asymmetric mapping of phenotypes to mutationally accessible genotypes.
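As a concrete illustration of the counting argument above, the short Python sketch below (a toy calculation, not code from any of the cited studies) enumerates the nine single-nucleotide neighbors of the ATG codon under the standard genetic code and tallies the encoded amino acids; assuming equal rates for every nucleotide change, the Ile phenotype is introduced at three times the rate of the Val phenotype.
<syntaxhighlight lang="python">
from collections import Counter

# Amino acids encoded by the nine single-nucleotide neighbors of ATG (Met)
# under the standard genetic code.
CODE = {
    "TTG": "Leu", "CTG": "Leu", "GTG": "Val",   # first-position changes
    "AAG": "Lys", "ACG": "Thr", "AGG": "Arg",   # second-position changes
    "ATA": "Ile", "ATC": "Ile", "ATT": "Ile",   # third-position changes
}

def neighbors(codon):
    """Yield every codon that differs from the given codon at exactly one site."""
    for i, old in enumerate(codon):
        for new in "ACGT":
            if new != old:
                yield codon[:i] + new + codon[i + 1:]

# With equal rates for all nucleotide changes, the relative rate of introduction
# of each alternative phenotype is simply a count of mutational paths.
print(Counter(CODE[c] for c in neighbors("ATG")))
# Counter({'Ile': 3, 'Leu': 2, 'Val': 1, 'Lys': 1, 'Thr': 1, 'Arg': 1})
</syntaxhighlight>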
Results of theoretical modeling.
One-step adaptive walks.
As noted above, in the simplest case of the "Climbing Mount Probable" effect, one may consider a climber facing just two fixed choices: up and to the left, or up and to the right. This case is modeled using simulations by, and is given a more complete treatment by.
In general, the limiting behavior of evolution as the supply of new mutations becomes arbitrarily small, i.e., as formula_0, is called "origin-fixation" dynamics.
The origin-fixation approximation for choosing between the left and right options formula_1 and formula_2 (respectively) in the Yampolsky-Stoltzfus model is given by the following:
\frac{P_L}{P_R} = \frac{\mu_L \, \pi(s_L)}{\mu_R \, \pi(s_R)} \approx \frac{\mu_L}{\mu_R} \cdot \frac{s_L}{s_R} \qquad (1)
where formula_3 (or formula_4) and formula_5 (or formula_6) are the mutation rate and selection coefficient for the left (or right) alternative, and assuming that the probability of fixation formula_7. In the Yampolsky-Stoltzfus model, this approximation is good for formula_8.
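The two-option model and the approximation above can be illustrated with a minimal Python sketch (a simplified stand-in for the original population-genetic simulations; the parameter values are invented for illustration): new alternatives are proposed in proportion to their mutation rates, each new mutant fixes with probability of roughly 2s, and the frequency of the mutationally favored outcome approaches the prediction of Eqn (1).
<syntaxhighlight lang="python">
import random

rng = random.Random(1)

def one_step_walk(mu_L, s_L, mu_R, s_R):
    """Origin-fixation sketch of the two-option model: alternatives arise in
    proportion to their mutation rates, and each new mutant fixes with
    probability ~2s; return whichever alternative fixes first."""
    while True:
        # introduction step: which alternative arises next?
        if rng.random() < mu_L / (mu_L + mu_R):
            option, s = "left", s_L
        else:
            option, s = "right", s_R
        # acceptance step: fixation of a new beneficial mutant
        if rng.random() < 2 * s:
            return option

# left option: 2-fold higher mutation rate; right option: 1.5-fold larger benefit
reps = 100_000
left_wins = sum(one_step_walk(2e-6, 0.010, 1e-6, 0.015) == "left" for _ in range(reps))
print(left_wins / reps)                                  # simulated outcome
print((2e-6 * 0.010) / (2e-6 * 0.010 + 1e-6 * 0.015))    # prediction of Eqn (1), ~0.57
</syntaxhighlight>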
For 1-step walks under origin-fixation conditions, the behavior given by Eqn (1) generalizes from 2 to many alternatives. For instance, Cano, et al. (2022)
consider a model gene with many different beneficial mutations and find that, under low mutation supply, mutation bias has a proportional effect on the spectrum of adaptive changes.
When formula_10 is not very small, different beneficial alleles may be present simultaneously, competing and slowing down adaptation, an effect known as clonal interference. Clonal interference reduces the effect of mutation bias in models of evolution in finite genetic spaces: alleles favored by mutation still tend to arrive sooner, but before they reach fixation, later-arising alleles that are more beneficial can out-compete them, enhancing the effect of fitness differences. Under the most extreme condition when all possible beneficial alleles are reliably present in a large population, the most fit allele wins deterministically and there is no room for an effect of mutation bias. Stated differently, when all the beneficial alleles are present and selection determines the winner, the chance of success is 1 for the most fit allele, and 0 for all other alleles. Thus, in a gene model with a finite set of beneficial mutations, the influence of mutation bias is expected to be strongest when formula_11 but to fall off as formula_10 becomes large.
The influence of mutation under varying degrees of clonal interference can be quantified precisely using the regression method of Cano, et al. (2022). Suppose that the expected number of changes of a given class of mutational changes defined by starting and ending states is directly proportional to the product of (1) the frequency formula_12 of the starting state and (2) the mutation rate formula_13 raised to the power of formula_9, that is,
E[n_i] = c \, f_i \, \mu_i^{\beta}.
Taking the logarithm of this equation gives
\log E[n_i] = \log c + \log f_i + \beta \log \mu_i,
where formula_14 is the logarithm of the constant of proportionality. Thus, when formula_9 is unknown, it may be estimated as the coefficient for the regression of log(counts) on log(expected counts). Simulations of a gene model (figure at right from ) show a range from formula_15 under high mutation supply to formula_16 when the mutation supply is low. While this approach was developed to assess how the mutation spectrum influenced adaptive missense changes (defined by a starting codon and an ending amino acid), the equation reflects a generic framework applicable to any mutationally defined classes of change.
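A simplified version of this estimation can be written in a few lines of Python (the data below are invented for illustration, and an ordinary least-squares fit is used here in place of the binomial regression described in the original method).
<syntaxhighlight lang="python">
import numpy as np

# Toy data: per-class mutation rates, starting-state frequencies, and observed
# counts of adaptive changes (all values invented for illustration).
mu     = np.array([1e-9, 5e-9, 2e-8, 4e-8, 1e-7, 3e-7])
freq   = np.array([0.20, 0.15, 0.25, 0.10, 0.20, 0.10])
counts = np.array([1,    4,    25,   20,   100,  150])

# Model: E[count] = c * freq * mu**beta
#   =>   log(count) - log(freq) = log(c) + beta * log(mu)
y = np.log(counts) - np.log(freq)
X = np.column_stack([np.ones_like(mu), np.log(mu)])
(log_c, beta), *_ = np.linalg.lstsq(X, y, rcond=None)
print(round(float(beta), 2))  # close to 1 for these data, i.e., a proportional influence
</syntaxhighlight>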
Note that these considerations apply to finite genetic spaces. In an infinite genetic space, clonal interference still slows down the rate of adaptation due to competition, but it does not prevent an effect of mutation bias because there are always mutationally favored alternative alleles among the most-fit class of alleles.
Contribution of mutation to parallelism.
In general, if there is some set of possible steps each with a probability formula_17, then the chance of parallelism is given by summing the squares, formula_18. It follows from the definition of the variance formula_19 or the coefficient of variation formula_20, that (see Box 2 of or Ch. 8 of )
\sum_i p_i^2 = \frac{1}{n} + n \operatorname{Var}(p) = \frac{1 + C_V^2}{n},
where n is the number of possible steps and p_i is the probability of the i-th step.
That is, parallelism is increased by anything that decreases the number of choices or increases the heterogeneity in their chances (as measured by formula_19 or formula_20). This result validates the intuition of Shull
that "It strains one's faith in the laws of chance to imagine that identical changes should crop out again and again if the possibilities are endless and the probabilities equal" (p. 448). To the extent that heterogeneity in formula_17 reflects heterogeneity in mutational chances, mutation contributes to parallelism.
In particular, for the case of origin-fixation dynamics, each value of formula_17 is a product of a mutational origin term and a fixation term, so that heterogeneity in either contributes similarly to the chances of parallelism, and it is possible to partition effects of mutation and selection in accounting for the repeatability of evolution. Under origin-fixation conditions, and assuming formula_21, it follows that
P(\text{parallel}) \approx \frac{(1 + C_s^2)(1 + C_\mu^2)}{n},
where formula_22 and formula_23 are coefficients of variation for vectors of selection coefficients and mutation rates, respectively.
Numeric examples in Box 2 of suggest that mutation sometimes contributes more to parallelism than selection, although the authors note that formula_24 in the denominator above confounds effects of mutation and selection in a hidden way (because, in practice, formula_24 reflects the set of paths that are sufficiently favored by selection and sufficiently mutationally likely to be observed).
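The relationship between repeatability, the number of options, and their heterogeneity is easy to check numerically, as in the following Python sketch (the step probabilities are arbitrary values drawn at random for illustration).
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary probabilities for n possible evolutionary steps; the heterogeneity
# could come from mutation rates, selection coefficients, or both.
p = rng.lognormal(size=20)
p /= p.sum()

chance_parallel = np.sum(p ** 2)     # chance that two replicates take the same step
n = len(p)
cv = p.std() / p.mean()              # coefficient of variation of the step probabilities
print(np.isclose(chance_parallel, (1 + cv ** 2) / n))  # True
</syntaxhighlight>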
Longer-term effects: trends, navigability, and findability.
For a systematic view of long-term effects of evolution in discrete genotypic space, consider the perspectives below, focusing on the influence of a mutation spectrum (characteristic for some evolving system) on various ways of defining the chances of evolution (following the treatment by ).
Theoretical results relating to each of these perspectives are available.
For instance, in a simulation of adaptive walks of protein-coding genes in the context of an abstract NK landscape, the effect of a GC-AT mutation bias is to alter the protein sequence composition in a manner qualitatively consistent with the analogy of Climbing Mount Probable (above). Each adaptive walk begins with a random sequence and ends on some local peak; the direction of the walk and the final peak depend on the mutation bias. For instance, adaptive walks under a mutation bias toward GC result in proteins that have more of the amino acids with GC-rich codons (Gly, Ala, Arg, Pro), and likewise, adaptive walks under AT bias result in proteins with more of the amino acids with AT-rich codons (Phe, Tyr, Met, Ile, Asn, Lys). On a rough landscape, the initial effect is similar, but the adaptive walks are shorter. That is, the mutation bias imposes a preference (on the adaptive walks) for steps, paths, and local peaks that are enriched in outcomes favored by the mutation bias. This illustrates the concept of a directional trend in which the system moves cumulatively in a particular direction along an axis of composition.
The influence of transition-transversion bias has been explored using empirical fitness landscapes for transcription factor binding sites.
Each landscape is based on generating thousands of different 8-nucleotide fragments and measuring how well they bind to a particular transcription factor. Each peak on each landscape is accessible by some set of paths made of steps that are nucleotide changes, each one being either a transition or a transversion. Among all possible genetic changes, the ratio of transitions to transversions is 1:2. However, the collection of paths leading to a given peak (on a given empirical landscape) has a specific transition-transversion composition that may differ from 1:2. Likewise, any evolving system has a particular transition-transversion bias in mutation. The more closely the mutation bias (of the evolving system) matches the composition bias (of the landscape), the more likely that the evolving system will find the peak.
Thus, for a given evolving system with its characteristic transition-transversion bias, some landscapes are more navigable than others. Navigability is maximized when the mutation bias of the evolving system matches the composition bias of the landscape.
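For reference, the 1:2 null ratio of transitions to transversions used here (and again in the evidence discussed later) can be obtained by simple enumeration, as in this Python sketch (a generic classification, not code from the cited study).
<syntaxhighlight lang="python">
PURINES, PYRIMIDINES = {"A", "G"}, {"C", "T"}

def is_transition(old, new):
    """Transitions exchange purine<->purine or pyrimidine<->pyrimidine;
    every purine<->pyrimidine exchange is a transversion."""
    return {old, new} <= PURINES or {old, new} <= PYRIMIDINES

# Of the 12 possible single-nucleotide changes, 4 are transitions and 8 are
# transversions, giving the 1:2 null expectation.
changes = [(a, b) for a in "ACGT" for b in "ACGT" if a != b]
n_transitions = sum(is_transition(a, b) for a, b in changes)
print(n_transitions, len(changes) - n_transitions)  # 4 8
</syntaxhighlight>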
Finally, rather than organizing genotypes by "fitness" (in terms of peaks, upward paths, and collections of paths leading to a peak), we can organize genotypes by "phenotype" using a genotype–phenotype map. A given phenotype identifies a network in genotype-space including all of the genotypes with that phenotype. An evolving system may diffuse neutrally within the network of genotypes with the same phenotype, but conversions between phenotypes are assumed to be non-neutral. Each phenotypically defined network has a findability that is, as a first approximation, a function of the number of genotypes in the network.
For instance, using the canonical genetic code as a genotype-phenotype map, the phenotype Leucine has 6 codons whereas Tryptophan has 1: Leucine is more findable because there are more mutational paths from non-Leucine genotypes. This idea can be applied to the way that RNA folds (considered as phenotypes) map to RNA sequences. For instance, evolutionary simulations show that the RNA folds with more sequences are more findable, and this is due to the way that they are over-sampled by mutation. A similar point has been made in regard to substructures of regulatory networks (see also ).
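The codon-counting argument can be made concrete with a small Python sketch (a toy calculation under the standard genetic code that counts single-nucleotide paths and ignores rate differences among them): the six-codon Leucine network is reachable by four times as many mutational paths from outside the network as the single Tryptophan codon.
<syntaxhighlight lang="python">
LEU = {"TTA", "TTG", "CTT", "CTC", "CTA", "CTG"}   # 6 codons
TRP = {"TGG"}                                      # 1 codon

def inbound_paths(target):
    """Count single-nucleotide changes leading from codons outside the target
    set into it: a crude proxy for the findability of the phenotype."""
    count = 0
    for codon in target:
        for i, old in enumerate(codon):
            for new in "ACGT":
                if new != old and codon[:i] + new + codon[i + 1:] not in target:
                    count += 1
    return count

print(inbound_paths(LEU), inbound_paths(TRP))  # 36 9
</syntaxhighlight>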
The above results apply, as before, to finite spaces. In infinite spaces, the set of remaining beneficial mutations to be explored is infinite and includes an infinite supply of mutationally favored and mutationally disfavored options. Therefore evolution in infinite spaces can continue forever in the mutationally favored direction with no diminution of the mutational effect that applies in the short-term, e.g., one could consider Eqn (1) for such an infinite space. The model of Gomez, et al (2020) allows unlimited adaptation via two traits, one with a higher rate of beneficial mutation, and the other with larger selective benefits. In this model, mutation bias continues to be important in long-term evolution even when mutation supply is very high.
Distinctive implications.
The theory of biases in the introduction process as a cause of orientation or direction in evolution may be contrasted with other theories that have been used by evolutionary biologists to reason about the role of variation in evolution, such as evolution by mutation pressure, evolution along genetic lines of least resistance, and developmental constraint (each treated in a later section).
Relative to these theories, the theory of arrival biases has distinctive implications, some of which are supported empirically as described below, e.g., the most frequent outcome of an adaptive process such as the emergence of antibiotic resistance is not necessarily the most beneficial, but is often a moderately beneficial outcome favored by a high rate of mutational origin. Likewise, the theory implies that evolution can have directions that are not adaptive, or tendencies that are not optimal, an implication one commentator on Arthur's book
found "disturbing".
This theory is defined, not by any particular problem, taxon, level of organization, or field of study, but by a mechanism defined at the level of population genetics, namely the ability of biases in introduction to impose biases on evolution.
Some implications are as follows (see ).
Effects do not require neutrality or high mutation rates. In contrast to the theory of evolution by mutation pressure explored (and rejected) by Haldane and others, variational dispositions under the theory of arrival biases do not depend on neutral evolution and do not require high mutation rates.
Graduated biases can have graduated effects. In contrast to what is implied by the language of "constraints" or "limits" employed in historic appeals to internal sources of direction in evolution, the theory of arrival biases is not deterministic and does not require an absolute distinction between possible and impossible forms. Instead, the theory is probabilistic, and graduated biases can have graduated effects.
Regime-dependency with regard to population genetics. Under the theory, variation biases do not have a guaranteed effect independent of the details of population-genetics. The influence of mutation biases reaches a maximum (proportional influence) under origin-fixation conditions and can disappear almost entirely under high levels of mutation supply.
Parity in fixation biases and origination biases (under limiting conditions). In classical neo-Darwinian thinking, selection governs and shapes evolution, whereas variation plays a passive role of supplying materials. By contrast, under limiting origin-fixation conditions, the theory of arrival biases establishes a condition of parity such that (for instance) a 2-fold bias in fixation and a 2-fold bias in introduction both have the same 2-fold effect on the chances of evolution.
Generality with regard to sources of variational bias. In the evolutionary literature, mutation biases, developmental "constraints", and self-organization in the sense of findability are all treated as separate topics. Under the theory of arrival biases, these are all manifestations of the same kind of population-genetic mechanism in which biases in the introduction of variants imposes biases on evolution. Any short-term bias is either a mutational bias in the sense of a difference in rates for two fully specified genotypic conversions, or it can be treated as a scheme of differential phenotypic aggregation over genotypes (see Box 2 of ).
In addition to these direct implications, some more sophisticated or indirect implications have emerged in the literature.
Non-causal associations induced by mutation and selection. Due to a dual dependence on mutation and selection, the distribution of adaptive changes may show non-causal associations of mutation rates and selection coefficients, somewhat akin to Berkson's paradox, as suggested in Ch. 8 of and developed in more detail by Gitschlag, et al. (2023).
Conditions for composition and decomposition of causes. Under limiting origin-fixation conditions, the chances of evolution reflect two factors multiplied together, representing biases in introduction and biases in fixation, as in Eqn (1). Thus, conditions exist under which it is possible to quantify and directly compare the dispositional influences of mutation and selection. This approach has already been used in a few empirical arguments addressed below. (Box 2 of ).
Biased depletion of the spectrum of beneficial mutations. In any case of a system adapting via mutation and selection, there is some set of possible beneficial mutations, characterized by a distribution of selection coefficients and mutation rates. As adaptation occurs in a mutation-biased manner, this spectrum of possible beneficial mutations is depleted in a biased way. The theory for this depletion is relevant to experimental work showing that "shifts in mutation spectra enhance access to beneficial mutations".
That is, the experimentally observed favorability of shifts in mutation spectra depends on a pattern of biased depletion of beneficial mutations that is itself a sign of mutation-biased adaptation.
Evidence.
Evidence for the theory has been summarized recently, e.g., Gomez, et al. (2020) present a table listing 8 different studies providing evidence of an effect of mutation bias on adaptation, and Ch. 9 of "Mutation, Randomness and Evolution" is devoted to empirical support for the theory (see also ).
Biases in introduction are expected to influence evolution whether neutral or adaptive, but an effect on neutral evolution is not considered intuitively surprising or controversial, and so is not given much attention. Instead, accounts of evidence focus on mutation-biased adaptation, because this highlights how predictions of the theory clash with the classical conception of mutation as a weak pressure easily overcome by selection, per the "opposing pressures" argument of Fisher and Haldane.
Direct evidence of causation under controlled conditions.
Direct evidence that the spectrum of mutation shapes the spectrum of adaptive changes comes from studies that manipulate the mutation spectrum directly. In one study, resistance to cefotaxime was evolved repeatedly, using 3 strains of "E. coli" with different mutation spectra: wild-type, "mutH" and "mutT". The spectrum of resistance mutations among the evolved strains showed the same patterns of spontaneous mutations as the parental strains. Specifically, the formula_27 transversions favored by "mutT" (left block of bars) are highly enriched among resistant isolates from "mutT" parents (blue in the accompanying figure), and likewise, the resistant strains from "mutH" parents (red) tend to have the nucleotide transition mutations favored by "mutH" (center block of bars). Thus, changing the mutation spectrum changes the spectrum of adaptive changes in a corresponding manner.
Another study showed that the AR2 strain of "P. fluorescens" adapted to the loss of motility overwhelmingly (> 95% of the time) by one specific change, an A289C change in the "ntrB" gene, while the Pf0-2x strain adapted via diverse changes in several genes. The pattern in AR2 derivatives was traced to a mutational hotspot. Because the hotspot behavior was associated mainly with synonymous differences between the two strains, the experimenters were able to use genetic engineering to remove the hotspot from AR2 and add it to Pf0-2x, without changing the encoded amino acid sequence. This reversed the qualitative pattern of outcomes, so that the modified AR2 (engineered to remove the hotspot) adapted via diverse changes, while the modified Pf0-2x with the engineered hotspot adapted via the A289C change 80% of the time.
Graduated effects.
A different use of available evidence is to focus on the idea of graduated effects, which distinguishes the theory of arrival biases from the intuitive notion of "constraints" or "limits" on possible forms. In particular, one may set aside the dramatic effects associated with hotspots and mutator alleles, and consider the effects of ordinary quantitative biases in nucleotide mutations. A number of studies have established that modest several-fold biases in mutation can have a several-fold effect on evolution, and some studies indicate a roughly proportional relation between mutation rates and the chances of an adaptive change.
For instance, Sackman, et al (2017) studied parallel evolution in 4 related bacteriophages. In each case, they adapted 20 cultures in parallel, then sequenced a sample of the adapted culture to identify causative mutations. The results showed a strong preference for nucleotide transitions, 29:5 for paths (white vs. black bars in the figure at right) and 74:6 for events.
In a study of resistance to Rifampicin in "Pseudomonas aeruginosa", MacLean, et al (2010)
measured selection coefficients and frequency of evolution for 35 resistance mutations in the "rpoB" (RNA polymerase) gene, and reported mutation rates for 11 of these. The mutation rates vary over a 30-fold range. The frequency with which a resistant variant appears in the set of 284 replicate cultures correlates strongly and roughly linearly with the mutation rate (figure at right). This is not explained by a correlation between selection coefficients and mutation rates, which are not correlated (see Ch. 9 of ).
As explained above, the influence of the mutation spectrum on the spectrum of adaptive changes can be captured in a single parameter formula_9, defined as a coefficient of binomial regression of observed counts on the expected counts from a mutational model. Based on theoretical considerations, expected values of formula_9 range from 0 (no influence) to 1 (proportional influence). This method was applied by Cano, et al. to 3 large data sets of adaptive changes, comparing a model based on independent measures of the mutation spectrum with adaptive changes previously identified in studies of (1) clinical antibiotic resistance of "Mycobacterium tuberculosis", (2) laboratory adaptation in "E. coli", and (3) laboratory adaptation of the yeast "Saccharomyces cerevisiae" to environmental stress. In each case, formula_16, indicating a roughly proportional influence of mutation bias. The authors report that this is not just due to the influence of transition-transversion bias, because formula_16 applies both to transition-transversion bias and to the other aspects of the nucleotide mutation spectrum.
Scope of applicability.
A final use of available evidence is to consider the range of natural conditions under which the theory may be relevant. Whereas laboratory studies can be used to establish causation and assess effect-sizes, they do not provide direct guidance to where the theory applies in nature. As noted in, most studies of adaptation do not include a genetic analysis that identifies specific mutations, and in the rare cases in which an attempt is made to identify causative mutations, the results typically implicate only very small numbers of changes that are subject to questions of interpretation.
Therefore, the strategy followed in key studies
has been to focus on trusted cases of adaptation in which the proposed functional effects of putative adaptive mutations have been verified using techniques of genetics.
Payne, et al looked for an effect of transition bias among causative mutations for antibiotic resistance in clinically identified strains of "Mycobacterium tuberculosis", which exhibits a strong mutation bias toward nucleotide transitions.
They compared the observed transition-transversion ratio to the 1:2 null expectation under the absence of mutation bias. Using two different curated databases, they found transition:transversion ratios of
1755:1020 and 1771:900, i.e., enrichments 3.4-fold and 3.9-fold over the null, respectively.
They also took advantage of the special case of Met-to-Ile replacements, which can take place by 1 transition (ATG to ATA) or 2 different transversions (ATG to ATT or ATC). This 1:2 ratio of possibilities again represents a null expectation for effects independent of mutation bias. In fact, the mutations in resistant isolates have transition-transversion ratios of 88:49 and 96:39 (for the 2 datasets), i.e., 3.6-fold and 4.9-fold above null expectations. This result cannot be due to selection at the amino acid level, because the changes are all Met to Ile. The significance of this result is not that mutation bias only works when the options are selectively indistinguishable: instead, the lesson is that the bias toward nucleotide transitions is roughly 4-fold both for the Met-to-Ile case, and for amino-acid-changing substitutions generally.
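The fold enrichments quoted above follow directly from the observed counts and the 1:2 null expectation, as the following short Python calculation shows.
<syntaxhighlight lang="python">
def fold_enrichment(transitions, transversions, null_ratio=0.5):
    """Enrichment of transitions relative to the 1:2 transition:transversion
    null expectation among possible nucleotide changes."""
    return (transitions / transversions) / null_ratio

# (transitions, transversions) for the two databases, then the Met-to-Ile subsets
for ti, tv in [(1755, 1020), (1771, 900), (88, 49), (96, 39)]:
    print(round(fold_enrichment(ti, tv), 1))   # 3.4, 3.9, 3.6, 4.9
</syntaxhighlight>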
A much broader taxonomic scope is implicated in a meta-analysis of published studies of parallel adaptation in nature. In this study, the authors curated a data set covering 10 published cases of parallel adaptation traced to the molecular level, including well known cases involving spectral tuning, resistance to natural toxins such as cardiac glycosides
and tetrodotoxin,
foregut fermentation, and so on.
The results shown below (table) indicate a transition-transversion ratio of 132:99, a 2.7-fold enrichment relative to the null expectation of 1:2 (the ratio of paths, which is less sensitive to extreme values, is 27:28, a 2-fold enrichment).
Thus, this study shows that a bias toward transitions is observed in well known cases of parallel adaptation in diverse taxa, including animals and plants.
Finally, Storz, et al. analyzed changes in hemoglobin affinity associated with altitude adaptation in birds. Specifically, they studied the effect of CpG bias, an enhanced mutation rate at CpG sites due to effects of cytosine methylation on damage and repair, found widely in mammals and birds.
They assembled a data set consisting of 35 matched pairs of high- and low-altitude bird species. In each case, hemoglobins were evaluated for functional differences resulting in a higher oxygen affinity in the high-altitude species. The changes in affinity plausibly linked to adaptation implicated 10 different paths found a total of 22 different times. Six of the 10 paths involved CpG mutations, whereas only 1 would be expected by chance; and 10 out of 22 events involved CpG mutations, whereas only 2 would be expected by chance (both differences were significant). This enrichment of mutationally-likely genetic changes supports the theory of arrival biases and provides further evidence that predictable effects of mutation bias are important for understanding adaptation in nature.
Context in evolutionary thinking.
The theory of arrival bias has been described as a cross-cutting theory
because it proposes a causal grounding (in population genetics) for diverse kinds of pre-existing claims for which a causal grounding is either unknown or mis-specified.
The context for applying the theory is illustrated in this figure (right). On the left are details of mutation and development that are responsible for tendencies in the generation of variation (varigenesis), i.e., tendencies prior to selection or drift. On the right are observable evolutionary patterns that might possibly be explained by these tendencies. The green arrow is some theory— the theory of arrival biases or some alternative theory— that specifies conditions of a cause-effect relationship linking variational tendencies to evolutionary tendencies. To apply a theory in this context is to generate evolutionary hypotheses or explanations that appeal to the internal details of mutation and development (on the left) to account for evolutionary patterns (on the right) via the conditions of causation specified by the theory. For instance, Darwin's comment that the laws of variation "bear no relation" to the structures built by selection would suggest that there are no conditions under which the internal details on the left account for the patterns on the right. The other theories all suggest that variational tendencies may influence evolution under some conditions. For instance, the theory of mutation pressure applies when mutation rates are high and unopposed by selection, thus it has a limited range of applications. The theory of evolutionary quantitative genetics can be applied very broadly to the evolution of quantitative characters, but the theory (as developed so far) does not suggest that mutation biases will have much impact.
By contrast, the theory of arrival bias might apply broadly, and allows for a strong role for variational tendencies in shaping evolutionary tendencies.
Late arrival and non-obviousness.
Though it seems intuitively obvious today, the theory did not emerge formally until 2001, e.g., as noted above, population geneticists did not propose the theory in the 1980s to answer an evo-devo challenge that literally called for recognizing biases in the introduction of variation. This late emergence has been attributed to a "blind spot" due to multiple factors, including a tradition of verbal arguments that minimize the role of mutation, a tendency to associate causation with processes that shift frequencies of variants rather than processes that create variants, and a formal argument from population genetics that doesn't extend to evolution from new mutations.
Specifically, when Haldane and Fisher asked if tendencies of mutation could influence evolution, they framed this as a matter of the efficacy of mutation pressure (below), concluding that, because mutation rates are low, mutation is a weak force, only important in the special case of abnormally high mutation rates unopposed by selection. Their conception of evolutionary causation was modeled on selection, which operates by shifting frequencies of available alleles, and so they treated recurrent mutation in the same way. Their conclusion is correct for the case of evolution from standing variation.
More generally, in Modern Synthesis thinking, evolution was assumed to follow from a short-term process of shifting frequencies of available alleles.
In this process, mutation is typically unimportant except when the focus is on low-frequency alleles maintained by deleterious mutation pressure (see Population genetics: Mutation), e.g., Edwards (1977) addressed theoretical population genetics without considering mutation at all; Lewontin (1974) stated that "There is virtually no qualitative or gross quantitative conclusion about the genetic structure of populations in deterministic theory that is sensitive to small values of migration, or any that depends on mutation rates" (p. 267).
The Haldane-Fisher "opposing pressures" argument was used repeatedly by leading thinkers to reject structuralist or internalist thinking (examples in or Ch. 6 of ), e.g., Fisher (1930) stated that “The whole group of theories which ascribe to hypothetical physiological mechanisms, controlling the occurrence of mutations, a power of directing the course of evolution, must be set aside, once the blending theory of inheritance is abandoned." Seventy years later Gould (2002), citing Fisher (1930), wrote that “Since orthogenesis can only operate when mutation pressure becomes high enough to act as an agent of evolutionary change, empirical data on low mutation rates sound the death-knell of internalism.” (p. 510)
In this way, arguments from population genetics were used to reject, rather than support, speculative claims about the role of variational tendencies. The flaw in the Haldane-Fisher argument, pointed out in, is that it treats mutation only as a pressure on frequencies of existing alleles, not as a cause of the origin of new alleles. When alleles relevant to the outcome of evolution are absent initially, biases in introduction can impose strong biases on the outcome. Thus, the late appearance of this theory sheds light on how closely Modern Synthesis thinking was tied to the assumption of standing variation,
and to the forces theory. These commitments continue to echo in contemporary sources, e.g., in a US white paper endorsed by SSE, SMBE, ASN, ESA and other relevant professional societies, Futuyma, et al (2001)
state, as fact, that evolution is shifting gene frequencies, identifying the main causes of "evolution" (so defined) as selection and drift (figure).
However, toward the end of the 20th century, theoreticians began to note that long-term dynamics depend on events of mutational introduction not covered in classical theory.
In the recent literature, the assumption of evolution from standing variation is only rarely made explicit; more commonly, evolution from standing variation is presented as an option to be considered together with evolution from new mutations.
Relevance to contemporary issues.
Parallelism and predictability. The application of the theory to parallelism is addressed above. The tendency for particular outcomes to recur in evolution is not merely a function of selection, but also reflects biases in introduction due to differential accessibility by mutation (or, for the case of phenotypes, by mutation and altered development). Recent reviews on prediction apply the theory to the role of mutation biases in contributing to the repeatability of evolution.
Partitioning causal responsibility for patterns to mutation and selection. In the case of origin-fixation dynamics, evolutionary dispositions can be attributed to a combination of mutation and selection, and it is possible, in principle, to untangle these contributions, as in an analysis of regulatory vs. structural effects in evolution or patterns of amino acid replacement in protein evolution.
Evo-devo, GP maps, and findability. The application of the theory of arrival biases to development and phenotypes is mediated by the concept of a genotype-phenotype map. A simple example of a bias induced by a GP map is shown at right. An evolving system diffuses neutrally within the genotypic network for its phenotype, and may occasionally jump to another phenotype. From the starting network of genotypes encoding phenotype P0, there are mutations leading to genotypic networks for P1 and P2. However, the number of mutations leading from the starting network to P2 is 4 times higher than the number leading to P1, illustrating the idea that, for a given developmental-genetic system, some phenotypes are more mutationally accessible. This is not the same thing as a mutation bias "per se" (an asymmetry caused by the details of mutagenesis), but it can have the same effect in a population-genetic model. In this case, if all mutations happen at the same rate, the total rate of mutational introduction of P2 is 4 times higher than for P1: this bias can be mapped to the Yampolsky-Stoltzfus model and would have the same implications as a 4-fold mutation bias.
For short-term evolution, what matters is the distribution of immediately mutationally accessible phenotypes. In long-term evolution, however, one may expect two different effects that can be explained by the figure at right, after Fig. 4 of Fontana (2002).
The networks show the genotypes that map to 3 different phenotypes, P0, P1 and P2. Over time, a system may diffuse neutrally among different genotypes with the same phenotypes. Rarely a jump from one phenotype to another may occur. In the short term, evolution depends only on what is immediately accessible from a given point in genotype space. In the medium-term, evolution depends on the accessibility of alternative phenotype networks, relative to the starting network, e.g., starting with P0, P2 is twice as accessible as P1, even though P1 and P2 have the same number of genotypes. In the long-term, what matters is the total findability of a phenotype from all other phenotypes, which (as a first approximation) is a matter of the number of genotypes (and more precisely is a matter of the total surface area of the network accessible to other high-fitness phenotypes). In this case, P0 is more findable than P1 and P2 because it has twice as many genotypes. The variational bias toward more numerous phenotypes is called "phenotype bias" by Dingle, et al (2022) (see also ).
This effect of findability is the formal basis for empirical and theoretical arguments in studies of the findability of regulatory network motifs or RNA fold families. In the figure at right, Dingle, et al. (2022) present evidence of a striking tendency for the most common RNA folds in nature to match the folds most widely distributed in sequence space.
Role in broad appeals for reform.
The theory of arrival biases, proposed in 2001, appears in several subsequent appeals for reform, relative to the neo-Darwinian view of the Modern Synthesis. Per Arthur, it is part of a developmentalist approach to evolution that emphasizes the internal organizing effects of "developmental reprogramming" on variation. In a different framing per, the efficacy of arrival biases undermines the historic commitment of theoreticians to viewing evolution as a process of shifting gene frequencies in an abundant gene pool, dominated by mass-action forces, and is part of a larger movement (beginning during the molecular revolution) away from the neo-Darwinism of the Modern Synthesis and towards a version of mutationism grounded in population genetics. The theory also has been invoked in the literature of the Extended Evolutionary Synthesis under the heading of Developmental bias.
Distinction from other theories of mutational effects.
The theory of arrival biases focuses on a kind of population-genetic causation linking intrinsic generative biases acting prior to selection with predictable evolutionary tendencies. It is distinct from other ideas that lack the same focus on causation, on intrinsic biases, or on the introduction process.
Evolution by mutation pressure.
In classic sources, evolution by "mutation pressure" means the mass transformation of a population by mutational conversion, as in Wilson and Bossert (1971, p. 42). The general assessment of this theory, following Haldane (1932) and Fisher (1930), is that evolution by mutation pressure is implausible because it requires high mutation rates unopposed by selection. Kimura argued even more pessimistically that transformation by mutation pressure would take so long that it can be ignored for practical purposes. Nevertheless, later empirical and theoretical work showed that the theory can be valuable in cases such as the loss of a complex trait encoded by many loci, e.g., loss of sporulation in experimental populations of "B. subtilis", a case in which the mutation rate for loss of the trait was estimated as an unusually high value, formula_28.
Thus, the theory of mutation pressure and the theory of arrival biases both depict ways for the process of mutation to be an important influence, but they focus on different modes of causation: influencing either the fixation process (mutation pressure) or the introduction process (arrival bias). The effectiveness of mutational tendencies via these two modes is completely different, e.g., only the mutation pressure theory relies on high mutation rates unopposed by selection.
Evolution along genetic lines of least resistance.
Evolutionary quantitative genetics, the body of theory that focuses on highly polygenic quantitative traits, makes a particular prediction about mutational effects that has some empirical support. In the standard theory for a set of quantitative traits, the standing variation is represented by a formula_29 matrix of variances and covariances, which depends (in a complex way) on mutational input represented by an formula_30 matrix. Phenotypic divergence will tend to be aligned (in phenotype space) with the dimension of greatest variation, formula_31, and this predicted effect of standing variation has been seen repeatedly. This effect (explained more fully in Developmental bias) is called adaptation "along genetic lines of least resistance" and could be re-stated (with variation in a positive role) as adaptation along lines of maximal variational fuel. When divergence also aligns with formula_30, this suggests that mutational variability shapes divergence, but this circumstantial correlation has other interpretations and is not taken as dispositive evidence.
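The alignment effect invoked here is the standard quantitative-genetic prediction from the multivariate breeder's equation; the following numpy sketch (with an invented two-trait G matrix) shows how the response to selection is deflected toward the leading eigenvector of G, the genetic line of least resistance.
<syntaxhighlight lang="python">
import numpy as np

# Invented additive-genetic covariance matrix G for two traits.
G = np.array([[1.0, 0.6],
              [0.6, 0.5]])

# g_max, the "genetic line of least resistance", is the leading eigenvector of G.
eigvals, eigvecs = np.linalg.eigh(G)
g_max = eigvecs[:, np.argmax(eigvals)]

# Multivariate breeder's equation: response = G @ (selection gradient), so the
# response is pulled toward g_max even though selection here acts only on trait 2.
beta = np.array([0.0, 1.0])
response = G @ beta
alignment = abs(response @ g_max) / np.linalg.norm(response)
print(np.round(response, 2), round(float(alignment), 2))  # [0.6 0.5] and ~0.99
</syntaxhighlight>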
The use of mutation bias in the sense of an asymmetric effect on trait means is not part of the standard framework. When mutation bias is included in models of a single quantitative trait under stabilizing selection, the result is a small displacement from the optimal value.
Thus, models in evolutionary quantitative genetics focus on a different kind of problem, and there is no simple translation between (for instance) effects of formula_30 and effects of biases in introduction.
Mutational contingency.
Evolutionary explanations have often relied on a paradigm of "equilibrium explanation", in which outcomes are explained by appeal to what is selectively optimal, without regard to history or details of process.
However, attention has focused in recent decades on the idea of "contingency", i.e., the idea that the outcome of evolution cannot be explained as the predictable or predefined endpoint of a deterministic process, but takes some path that cannot be predicted easily, or can only be predicted by knowing details of the starting conditions and the subsequent dynamics. "Mutational" contingency refers to cases in which an event of evolution is associated distinctively with a particular mutation or a mutational hotspot, in the sense that the evolutionary change would not have happened in the observed manner if the distinctive mutation had not occurred in the manner inferred.
This notion differs from the theory of biases in the introduction process because it is an explanatory concept (rather than a mechanism), applied in idiographic explanations, i.e., explanations of one-off events (token events).
The theory of biases in the introduction process is a theory of general causation: the result of successfully applying the theory is to assign, not a token explanation, but a general explanation like "this pattern in which formula_32 happens more often than formula_33 is caused by a bias in introduction due to the higher chance of the mutational-developmental conversion formula_32."
Developmental constraints (developmental bias).
The concept of "constraint" is fraught.
Green and Jones (2016) argue that evolutionary biologists use it as a flexible explanatory concept rather than as a way to refer to a specific causal theory, i.e., a constraint is a factor with some limiting influence that makes it predictive, even if the causal basis of this influence is unclear.
A simple notion of developmental constraint is that some phenotypic forms are not observed, due to being impossible (or at least very difficult) to generate developmentally, e.g., centipedes with an even number of leg-bearing segments. That is, constraint is an explanation for the non-existence of phenotypes based on a variational effect (absence), within a paradigm focused on accounting for patterns of phenotype existence.
Other references to "constraint" imply graduated differences rather than the absolute difference between possible and impossible forms, e.g.. Whereas the effectiveness of absolute biases does not require a special causal theory (because a developmentally impossible form is an evolutionarily impossible form), the idea of graduated biases prompts questions of causation, due to the conflict with the classic Haldane-Fisher "opposing pressures" argument, which holds that mere variational tendencies are ineffectual because mutation rates are small. The seminal "developmental constraints" paper by Maynard Smith, et al. (1985) noted this issue without providing a solution. Advocates of "constraint" were criticized for failing to provide a mechanism. This is the issue that Yampolsky and Stoltzfus sought to remedy.
Nevertheless, the theory of arrival biases cannot be easily mapped to the concept of "constraint" due to the latter being used widely as a synonym for "factor". In the evo-devo literature, the term "constraint" is increasingly replaced with references to developmental bias. However, the concept of developmental bias is often associated with some idea of facilitated variation or evolvability, whereas the theory of arrival biases is only about the population-genetic consequences of arbitrary biases in the generation of variation.
Facilitated variation, evolvability, and directed mutation.
The theory of arrival biases does not require or imply facilitated variation or directed mutation and is not by itself a theory of the evolution of evolvability. The population-genetic models used to illustrate the theory, and the empirical cases invoked in support of the theory, focus on the effects of different forms of mutation bias, where the bias is always relative to some dimension other than fitness, e.g., transition-transversion bias, CpG bias, or the asymmetry of two traits with different mutabilities. That is, the theory does not assume that biases are beneficial with respect to fitness, and it does not propose that mutation somehow contributes to adaptedness separately from the effect of selection. In fact, many models illustrate the efficacy of arrival biases by focusing on a case where the most mutationally favored outcomes are not the most fit options, as in the original Yampolsky-Stoltzfus model, where one choice has a higher mutation rate but a smaller fitness benefit, and the other has a higher fitness benefit but a smaller mutation rate. The theory assumes neither that mutationally favored outcomes are more fit, nor that they are less fit.
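The contrast between mutational favorability and fitness in the Yampolsky-Stoltzfus scenario can be illustrated with a small numerical sketch. The following Python snippet is not from the original paper; the population size, mutation rates and selection coefficients are illustrative assumptions, and the fixation probability of a beneficial mutation is approximated as 2"s".
```python
# A minimal origin-fixation sketch (not the authors' code).  Option B is
# mutationally favored but confers a smaller benefit; option C is mutationally
# disfavored but more fit.  All parameter values are illustrative assumptions.
import random

N = 1000                  # population size
mu_B, s_B = 1e-6, 0.01    # higher mutation rate, smaller selective benefit
mu_C, s_C = 1e-7, 0.02    # lower mutation rate, larger selective benefit

# Origin-fixation rate for each option: rate = N * mu * pi(s), with pi(s) ~ 2s.
rate_B = N * mu_B * 2 * s_B
rate_C = N * mu_C * 2 * s_C
print("predicted bias in outcomes, B:C =", rate_B / rate_C)  # (mu_B*s_B)/(mu_C*s_C) = 5

# Stochastic check: treat the two processes as competing exponential waiting
# times and record which outcome is realized first in each replicate.
rng = random.Random(1)
wins = {"B": 0, "C": 0}
for _ in range(100_000):
    wins["B" if rng.expovariate(rate_B) < rng.expovariate(rate_C) else "C"] += 1
print("simulated outcome counts:", wins)  # roughly 5 B for every 1 C
```
Even though selection alone would favor C, the mutationally favored option B accounts for roughly five of every six realized outcomes, matching the product of the biases in mutation rate and fixation probability.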
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mu N \\rightarrow 0"
},
{
"math_id": 1,
"text": "j"
},
{
"math_id": 2,
"text": "k"
},
{
"math_id": 3,
"text": "\\mu_{ij}"
},
{
"math_id": 4,
"text": "\\mu_{ik}"
},
{
"math_id": 5,
"text": "s_{ij}"
},
{
"math_id": 6,
"text": "s_{ik}"
},
{
"math_id": 7,
"text": "\\pi_{s} \\approx 2s"
},
{
"math_id": 8,
"text": "\\mu N < 1/10"
},
{
"math_id": 9,
"text": "\\beta"
},
{
"math_id": 10,
"text": "\\mu N"
},
{
"math_id": 11,
"text": "\\mu N << 1"
},
{
"math_id": 12,
"text": "f(c)"
},
{
"math_id": 13,
"text": "\\mu(c,a)"
},
{
"math_id": 14,
"text": "\\alpha"
},
{
"math_id": 15,
"text": "\\beta \\approx 0"
},
{
"math_id": 16,
"text": "\\beta \\approx 1"
},
{
"math_id": 17,
"text": "p_i"
},
{
"math_id": 18,
"text": "P_{para} = \\sum_i p_i^2"
},
{
"math_id": 19,
"text": "V"
},
{
"math_id": 20,
"text": "c_v"
},
{
"math_id": 21,
"text": "\\pi(s, N) \\approx 2s"
},
{
"math_id": 22,
"text": "c_v(\\boldsymbol{s})"
},
{
"math_id": 23,
"text": "c_v(\\boldsymbol{\\mu})"
},
{
"math_id": 24,
"text": "n"
},
{
"math_id": 25,
"text": "X"
},
{
"math_id": 26,
"text": "\\bf{G}"
},
{
"math_id": 27,
"text": "A:T \\rightarrow C:G"
},
{
"math_id": 28,
"text": "\\mu = 0.003"
},
{
"math_id": 29,
"text": "G"
},
{
"math_id": 30,
"text": "M"
},
{
"math_id": 31,
"text": "g_{max}"
},
{
"math_id": 32,
"text": "A \\rightarrow B"
},
{
"math_id": 33,
"text": "A \\rightarrow C"
}
]
| https://en.wikipedia.org/wiki?curid=72359343 |
723774 | FIFA Women's World Ranking | Global sports team ranking list
The FIFA Women's World Ranking is a ranking system for women's national teams in association football (commonly known as football or soccer) published by the international governing body FIFA. As of August 2024, the United States women's national team (USWNT) is ranked number one.
The rankings were introduced in 2003, with the first rankings published on 16 July of that year. FIFA attempts to assess the strength of internationally active women's national teams at any given time based on their past game results, with the most successful teams being ranked highest. As of August 2024, the ranking includes 194 national teams.
The ranking has more than informative value, as it is often used to seed member associations into different pots in international tournaments.
Specifics of the ranking system.
The first two points result from the FIFA Women's World Rankings system being based on the Elo rating system adjusted for football; in 2018, FIFA modified the men's ranking system to be similarly based on an Elo system after continued criticism. FIFA considers the ratings of teams with fewer than 5 matches provisional and lists them at the end of the ranking. In addition, any team that plays no matches for 4 years becomes unranked; this inactivity limit was previously 18 months, but was extended in early 2021 (after the COVID-19 pandemic stifled a significant amount of international play).
Ranking schedule.
Rankings are generally published four times a year. The next update is scheduled for 19 December 2024.
Leaders.
As of the 16 August 2024 rankings release, the United States is the number one ranked team. The United States holds the record for the longest consecutive period leading the rankings: nearly seven years, from March 2008 to December 2014.
Before the 2023 World Cup, the United States and Germany had been the only two teams to lead the women's rankings, and these two teams also had held the top two spots in all but six releases, when Germany was ranked third (only Norway, Brazil, England and Sweden had reached second during this time).
Ranking calculations.
The rankings are based on the following formulae:
formula_0
formula_1
formula_2
Where:
* "R"aft and "R"bef are a team's ratings after and before the match, respectively;
* "K" is a weighting factor for the match;
* "S"act and "S"exp are the actual and expected results of the match;
* "O"bef is the opposing team's rating before the match;
* "H" is the home-advantage adjustment (see below) and "c" is a scaling constant ("c" = 200).
The average rating across all teams is about 1300 points. The top nations usually exceed 2000 points. In order to be ranked, a team must have played at least 5 matches against officially ranked teams and must not have exceeded the inactivity limit described above. Even if teams are not officially ranked, their points rating is kept constant until they play their next match.
Actual result of the match.
The main component of the actual result is whether the team wins, loses, or draws, but goal difference is also taken into account.
If the match results in a winner and loser, the loser is awarded a percentage given by the accompanying table, with the result always less than or equal to 20% (for goal differences greater than zero). The result is based on the goal difference and the number of goals they scored. The remaining percentage points are awarded to the winner. For example, a 2–1 match has the result awarded 84%–16% respectively, a 4–3 match has the result awarded 82%–18%, and an 8–3 match has the result awarded 96.2%–3.8%. As such, it is possible for a team to lose points even if they win a match, assuming they did not "win by enough".
If the match ends in a draw, the teams are awarded the same result, but the number depends on the goals scored, so the results will not necessarily add up to 100%. For example, a 0–0 draw earns both teams 47% each, a 1–1 draw earns 50% each, and a 4–4 draw earns 52.5% each.
"Source"
Neutral ground or Home vs. Away.
Historically, home teams earn 66% of the points available to them, with away teams earning the other 34%. To account for this, when two teams are not playing on neutral ground, the home team has its formula_3 inflated by 100 points for the purposes of calculation. That is, if two equally ranked teams play at one team's home ground, the home team is expected to win at the same rate as a team playing on neutral ground with a 100-point advantage. This 100-point difference corresponds to a 64%–36% advantage in terms of expected result. The scaling factor remains the same ("c"=200).
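A minimal numerical sketch of the update implied by the formulae above is shown below. This is not FIFA's published code; the importance weighting "K" is not specified in this excerpt, so the value used here is an illustrative placeholder.
```python
# Minimal sketch of the Elo-style update described above (not FIFA's own code).
# The match-importance weighting K is an assumed placeholder value.

def expected_result(r_before, o_before, home_advantage=0.0, c=200.0):
    """S_exp = 1 / (1 + 10**(-x/2)), with x = (R_bef - O_bef + H) / c."""
    x = (r_before - o_before + home_advantage) / c
    return 1.0 / (1.0 + 10.0 ** (-x / 2.0))

def updated_rating(r_before, o_before, s_actual, k=30.0, home_advantage=0.0):
    """R_aft = R_bef + K * (S_act - S_exp)."""
    return r_before + k * (s_actual - expected_result(r_before, o_before, home_advantage))

# Two equally rated teams (1800 points each); the home side wins 2-1, a result
# worth S_act = 0.84 according to the goal-difference scoring described above.
print(expected_result(1800, 1800, home_advantage=100))                 # ~0.64
print(updated_rating(1800, 1800, s_actual=0.84, home_advantage=100))   # ~1806
```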
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "R_{aft} = R_{bef} + K (S_{act} - S_{exp})"
},
{
"math_id": 1,
"text": "S_{exp} = \\frac{1}{1 + 10^{-x/2}}"
},
{
"math_id": 2,
"text": "x = \\frac{R_{bef} - O_{bef} \\pm H}{c}"
},
{
"math_id": 3,
"text": "R_{bef}"
}
]
| https://en.wikipedia.org/wiki?curid=723774 |
72389547 | Hoàng Hiệp Phạm | Vietnamese mathematician
Pham Hoang Hiep is a Vietnamese mathematician known for his work in complex analysis. He is a professor at the Vietnam Academy of Science and Technology and director of the International Centre for Mathematical Research and Training. He was awarded a young-scientist prize in 2015 and the 2019 ICTP Ramanujan Prize.
Research and career.
Pham Hoang Hiep graduated from Hanoi National University of Education in 2004 and obtained his PhD at Umeå University in 2008. He obtained a doctorate in science at Aix-Marseille University in 2013. He is known for being the youngest full professor in Vietnam. He is on the editorial board of Acta Mathematica Vietnamica.
Pham has worked on plurisubharmonic functions and (with Jean-Pierre Demailly) found a lower bound on the log canonical threshold of such a function. If formula_0 is such a function, then Pham and Demailly found a sharp inequality for the largest constant c such that formula_1 is integrable in a neighbourhood of a singularity. Pham later also worked on the "weighted log canonical threshold", which pertains to the integrability properties of formula_2 for a fixed holomorphic function f.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\phi"
},
{
"math_id": 1,
"text": "e^{-2 c \\phi}"
},
{
"math_id": 2,
"text": " f^2 e^{-2 c \\phi} "
}
]
| https://en.wikipedia.org/wiki?curid=72389547 |
72400883 | Inverse gamma function | Inverse of the gamma function
In mathematics, the inverse gamma function formula_0 is the inverse function of the gamma function. In other words, formula_1 whenever formula_2. For example, formula_3. Usually, the inverse gamma function refers to the principal branch with domain on the real interval formula_4 and image on the real interval formula_5, where formula_6 is the minimum value of the gamma function on the positive real axis and formula_7 is the location of that minimum.
Definition.
The inverse gamma function may be defined by the following integral representation
formula_8
where formula_9 is a Borel measure such that formula_10 and formula_11 and formula_12 are real numbers with formula_13.
Approximation.
To compute the branches of the inverse gamma function one can first compute the Taylor series of formula_14 near formula_15. The series can then be truncated and inverted, which yields successively better approximations to formula_0. For instance, we have the quadratic approximation:
formula_16
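As a brief numerical illustration (a sketch, not from the source), the constants in this approximation can be computed with standard special-function routines: formula_15 is the root of the digamma function, and the approximation can then be checked at "x" = 1, whose exact inverse is 2 since Γ(2) = 1.
```python
# Minimal sketch (not from the source): evaluate the quadratic approximation to
# the principal branch of the inverse gamma function near its left endpoint.
import numpy as np
from scipy.optimize import brentq
from scipy.special import digamma, gamma, polygamma

alpha = brentq(digamma, 1.0, 2.0)   # root of the digamma function: location of the minimum (~1.4616)
beta = gamma(alpha)                 # minimum value of the gamma function (~0.8856)

def inverse_gamma_quadratic(x):
    """Gamma^{-1}(x) ~ alpha + sqrt(2*(x - Gamma(alpha)) / (Psi(1, alpha)*Gamma(alpha)))."""
    return alpha + np.sqrt(2.0 * (x - beta) / (polygamma(1, alpha) * beta))

print(inverse_gamma_quadratic(1.0))   # ~1.98; the exact value is 2, since Gamma(2) = 1
```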
The inverse gamma function also has the following asymptotic formula
formula_17
where formula_18 is the Lambert W function. The formula is found by inverting the Stirling approximation, and so can also be expanded into an asymptotic series.
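A numerical sketch of this approach (illustrative, not from the source) is to take the Lambert-W expression as an initial guess and refine it with Newton's method, using the identity Γ'("y") = Γ("y")ψ("y"); this works well for arguments well above the minimum of the gamma function.
```python
# Minimal sketch (not from the source): principal branch of the inverse gamma
# function for moderately large x, using the Lambert-W asymptotic formula as an
# initial guess and Newton's method for refinement.
import numpy as np
from scipy.special import digamma, gamma, lambertw

def inverse_gamma(x, iterations=6):
    L = np.log(x / np.sqrt(2.0 * np.pi))
    y = 0.5 + L / lambertw(L / np.e).real   # asymptotic initial guess
    for _ in range(iterations):             # Newton step, using Gamma'(y) = Gamma(y)*digamma(y)
        g = gamma(y)
        y -= (g - x) / (g * digamma(y))
    return y

print(inverse_gamma(24.0))    # ~5, since Gamma(5) = 24
print(inverse_gamma(720.0))   # ~7, since Gamma(7) = 720
```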
Series expansion.
To obtain a series expansion of the inverse gamma function one can first compute the series expansion of the reciprocal gamma function formula_19 near the poles at the negative integers, and then invert the series.
Setting formula_20 then yields, for the "n"th branch formula_21 of the inverse gamma function (formula_22)
formula_23
where formula_24 is the polygamma function.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Gamma^{-1}(x)"
},
{
"math_id": 1,
"text": "y = \\Gamma^{-1}(x)"
},
{
"math_id": 2,
"text": "\\Gamma(y)=x"
},
{
"math_id": 3,
"text": "\\Gamma^{-1}(24)=5"
},
{
"math_id": 4,
"text": "\\left[\\beta, +\\infty\\right)"
},
{
"math_id": 5,
"text": "\\left[\\alpha, +\\infty\\right)"
},
{
"math_id": 6,
"text": "\\beta = 0.8856031\\ldots"
},
{
"math_id": 7,
"text": "\\alpha = \\Gamma^{-1}(\\beta) = 1.4616321\\ldots"
},
{
"math_id": 8,
"text": "\\Gamma^{-1}(x)=a+bx+\\int_{-\\infty}^{\\Gamma(\\alpha)}\\left(\\frac{1}{x-t}-\\frac{t}{t^{2}-1}\\right)d\\mu(t)\\,,"
},
{
"math_id": 9,
"text": "\\mu (t)"
},
{
"math_id": 10,
"text": "\\int_{-\\infty}^{\\Gamma\\left(\\alpha\\right)}\\left(\\frac{1}{t^{2}+1}\\right)d\\mu(t)<\\infty \\,,"
},
{
"math_id": 11,
"text": "a"
},
{
"math_id": 12,
"text": "b"
},
{
"math_id": 13,
"text": "b \\geqq 0"
},
{
"math_id": 14,
"text": "\\Gamma(x)"
},
{
"math_id": 15,
"text": "\\alpha"
},
{
"math_id": 16,
"text": " \\Gamma^{-1}\\left(x\\right)\\approx\\alpha+\\sqrt{\\frac{2\\left(x-\\Gamma\\left(\\alpha\\right)\\right)}{\\Psi\\left(1,\\ \\alpha\\right)\\Gamma\\left(\\alpha\\right)}}."
},
{
"math_id": 17,
"text": " \\Gamma^{-1}(x)\\sim\\frac{1}{2}+\\frac{\\ln\\left(\\frac{x}{\\sqrt{2\\pi}}\\right)}{W_{0}\\left(e^{-1}\\ln\\left(\\frac{x}{\\sqrt{2\\pi}}\\right)\\right)}\\,,"
},
{
"math_id": 18,
"text": "W_0(x)"
},
{
"math_id": 19,
"text": "\\frac{1}{\\Gamma(x)}"
},
{
"math_id": 20,
"text": "z=\\frac{1}{x}"
},
{
"math_id": 21,
"text": "\\Gamma_{n}^{-1}(z)"
},
{
"math_id": 22,
"text": "n\\ge 0"
},
{
"math_id": 23,
"text": " \\Gamma_{n}^{-1}(z)=-n+\\frac{\\left(-1\\right)^{n}}{n!z}+\\frac{\\psi^{(0)}\\left(n+1\\right)}{\\left(n!z\\right)^2}+\\frac{\\left(-1\\right)^{n}\\left(\\pi^{2}+9\\psi^{(0)}\\left(n+1\\right)^{2}-3\\psi^{(1)}\\left(n+1\\right)\\right)}{6\\left(n!z\\right)^3}+O\\left(\\frac{1}{z^{4}}\\right)\\,,"
},
{
"math_id": 24,
"text": "\\psi^{(n)}(x)"
}
]
| https://en.wikipedia.org/wiki?curid=72400883 |
72402042 | Anchialine system | Landlocked body of water with underground connection to the sea
An anchialine system (, from Greek "ankhialos", "near the sea") is a landlocked body of water with a subterranean connection to the ocean. Depending on its formation, these systems can exist in one of two primary forms: pools or caves. The primary differentiating characteristic between pools and caves is the availability of light; cave systems are generally aphotic while pools are euphotic. The difference in light availability has a large influence on the biology of a given system. Anchialine systems are a feature of coastal aquifers which are density stratified, with water near the surface being fresh or brackish, and saline water intruding from the coast at depth. Depending on the site, it is sometimes possible to access the deeper saline water directly in the anchialine pool, or sometimes it may be accessible by cave diving.
Anchialine systems are extremely common worldwide especially along neotropical coastlines where the geology and aquifer systems are relatively young, and there is minimal soil development. Such conditions occur notably where the bedrock is limestone or recently formed volcanic lava. Many anchialine systems are found on the coastlines of the island of Hawaii, the Yucatán Peninsula, South Australia, the Canary Islands, Christmas Island, and other karst and volcanic systems.
Geology.
Karst landscape formation.
Anchialine systems may occur in karst landscapes, regions with bedrock composed of soluble sedimentary rock, such as limestone, dolomite, marble, gypsum, or halite. Subterranean voids form in karst landscapes through the dissolution of bedrock by rainwater, which becomes mildly acidic by equilibrating with carbon dioxide from the atmosphere and soil as it percolates, resulting in carbonic acid, a weak acid. The acidic water reacts with the soluble sedimentary rock causing the rock to dissolve and create voids. Over time, these voids widen and deepen, resulting in caves, sinkholes, subterranean pools, and springs. The processes to form these karst morphological features occur on long geological timescales; caverns can be several hundred thousand to millions of years old. Since the caverns which house karst anchialine systems form through the dissolution of bedrock via water percolation, current karst anchialine systems developed around the last glacial maximum, approximately 20,000 years ago when the sea level was ~120 meters lower than today. Evidence of this can be seen in speleothems (stalactites and stalagmites), a terrestrial cave formation observed at 24 meters water depth in anchialine pools in Bermuda and 122 meters water depth in a blue hole in Belize. The marine transgression after the last glacial maximum caused saline groundwater to intrude into karst caverns resulting in anchialine systems. In some anchialine systems, lenses of freshwater overlay the saltwater environment. This is caused by the accumulation of freshwater from meteoric or phreatic sources above the intruded saltwater or the vertical displacement of freshwater from intruding saltwater. Horizontal white “bathtub ring” stains are observed in submerged sections of Green Bay Cave, Bermuda, indicating paleo-transition zones between freshwater and saltwater at a lower sea level.
Volcanic formation.
Anchialine systems are also commonly found in coastal mafic volcanic environments such as the Canary Islands, Galapagos Islands, Samoa, and Hawaii. Lava tubes are the primary mechanism that creates anchialine systems in these volcanic environments. Lava tubes occur during eruptions of fluid-flowing basaltic pahoehoe lava. As lava flows downhill, the atmosphere and cooler surfaces come in contact with the exterior of the flow, causing it to solidify and create a conduit through which the interior liquid lava continues flowing. If the solid conduit empties of liquid lava, the result is a lava tube. Lava tubes flow towards lower elevations and typically stop upon reaching the ocean; however, lava tubes can extend along the seafloor or form from submarine eruptions creating anchialine habitats. Saltwater intruded into many coastal lava tubes during the marine transgression after the last glacial maximum creating many volcanic anchialine pools observed today. Volcanic anchialine systems typically can develop more rapidly than karst systems; on the order of thousands to tens of thousands of years due to their rapid formation at or near the Earth's surface, making them vulnerable to erosional processes.
Tectonic faulting formation.
Tectonic faulting in coastal areas is a less common formation process for anchialine systems. In volcanic and seismically active areas, faults in coastal environments can be intruded by saline groundwater, resulting in anchialine systems. Submerged coastal tectonic faults caused by volcanic activity are observed in Iceland and in the Galapagos Islands, where they are known as "grietas," which translates to "cracks." Faulted anchialine systems can also form from tectonic uplift processes in coastal regions. The Ras Muhammad Crack area, at the southern tip of the Sinai Peninsula, is an anchialine pool created by an earthquake in 1968 from the uplift of a fossil reef. The earthquake resulted in a fault opening approximately 150 meters from the coastline, which filled with saline groundwater, creating an anchialine pool with water depths of up to 14 meters. Deep anchialine pools created by faulting from the uplift of a reef limestone block are also seen on the island of Niue in the Central Pacific.
Hydrological processes.
Hydrological processes can describe how the water moves between the pool and the surrounding environment. Collectively, these processes change the salinity and the vertical density profile, which sets the conditions for the ecological communities to develop. Although each anchialine system is unique, a box model simplifies the hydrology processes included in each system.
Box model.
To predict mean salinity of an anchialine pool, the pool can be treated as a well-mixed box. Various sources (sinks) add (remove) water and alter the salinity. Below lists several important saline sources and sinks of the pool.
The ratio between the evaporation and the water exchange with the surroundings, formula_0, indicates whether or not the box reaches an equilibrium state.
formula_1
For example, when the evaporation (E or S/D) removes freshwater faster than the influx replaces it, the salinity becomes higher than that of the ambient ocean. If formula_2, the salinity is close to the open-ocean salinity because the salt inflow balances the evaporation. If formula_3, the pool is metahaline (~40 psu). If formula_4, the pool is hypersaline (60~80 psu).
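As a small illustrative sketch (not from the source), the qualitative classification above can be written as a function of the ratio formula_0; the regime boundaries are the approximate values quoted in the text.
```python
# Minimal sketch (illustrative, not from the source): classify the mean salinity
# regime of a well-mixed anchialine pool from the evaporation-to-exchange ratio
# PS, using the approximate thresholds quoted above.
def salinity_regime(ps):
    if ps > 2:
        return "hypersaline (~60-80 psu)"
    if ps > 1:
        return "metahaline (~40 psu)"
    return "close to open-ocean salinity"

for ps in (1.0, 1.5, 2.5):
    print(ps, "->", salinity_regime(ps))
```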
Stratification.
The box model gives an estimate of the saline environment but does not imply the strength of the halocline. The depth of the seawater intake should be considered for the vertical salinity structure. In a pool containing fresh or brackish water, if the denser seawater flushes near the surface, it reduces stratification. However, in the same scenario in a polyhaline pool, the seawater forms a freshwater lens at the top, reinforcing the stratification and potentially creating a hypoxic environment depending on oxygen reaction rates.
Biogeochemistry.
Water chemistry of anchialine systems is directly related to the amount of connectivity to the adjacent marine and freshwater inputs, and to evaporative losses. Major nutrient compositions (carbon, nitrate, phosphate, and silicate) from the ocean and groundwater sources determine the biogeochemical cycles in an anchialine system. These cycles are affected by the hydrological processes of anchialine systems, which vary based on the type, size, and relative inputs of marine and freshwater into the system. Deeper anchialine systems, such as larger pools that resemble lakes, may become highly salinity-stratified with depth. The surface consists of brackish, oxygen-rich waters, followed by a distinct pycnocline and chemocline, below which the water has higher salinity and decreased dissolved oxygen (anoxic) concentrations. This stratification and the available nutrient resources establish redox gradients with depth, which can support a variety of stratified communities of micro-organisms and biogeochemical cycles.
Redox conditions.
In deeper stratified systems water below the chemocline can be associated with an increase in dissolved hydrogen sulfide, phosphate, and ammonium, and a decrease in particulate organic carbon. The physical and chemical stratification determines which microbial metabolic pathways can occur and creates a vertical stratification of redox processes as oxygen decreases with depth. Oxygen-rich surface waters have a positive reduction potential (Eh), meaning there are oxidizing conditions for aerobic respiration. The chemocline layer has a negative Eh (reducing conditions) and low nutrient availability from the respiration above, so chemosynthetic bacteria reduce nitrate or sulfate for respiration. The productivity in the surface and chemocline layer creates turbid water, below which both oxygen and light levels are low but dissolved inorganic nutrient levels are high creating communities of other reducing microorganisms.
Physical nutrient cycling.
Highly stratified anchialine systems, by definition, have little turbulent mixing from wind or water movements. Instead, it is suggested that advection of nutrients back into the surface water is caused by the rain of particulate matter below the chemocline displacing water upwards and by the vertical movement of mobile organisms. The introduction of nutrients and organic matter from terrestrial runoff into the surface waters also adds to the nutrient cycling in anchialine systems.
Biology.
Ecology.
Anchialine systems have a highly specialized collection of organisms with distinctive adaptations. The species that occupy a given system are strongly determined by the presence or absence of light (pools or caves). A broad diversity of algae and bacteria can be found in anchialine systems; however, only a few species dominate a given habitat at a time. Systems closer to the coastline tend to have more influence from marine phytoplankton and zooplankton as they are advected in through the groundwater. Systems further inland are more dominated by freshwater algae and terrestrial deposits but exhibit increasingly restricted diversity within algal communities. Due to the ephemeral nature of many anchialine systems and their limited distribution across the planet, many of their inhabitants are either well adapted to tolerate a broad range of salinity and hypoxic conditions or are introduced through tides from neighboring marine habitats. Species that occupy these habitats are generalists or opportunists, as they exploit conditions intolerable for most other species.
Crustaceans.
Crustaceans are by far the most abundant taxa in anchialine systems. Crustacean biodiversity includes Copepoda, Amphipoda, Decapoda, Ascothoracida, and a variety of water fleas.
Non-crustacean invertebrates.
Dominant non-crustacean invertebrate groups within anchialine systems include sponges and other filter feeders (most common in Blue Holes), which thrive in moderate-flow systems where the structure acts to compress the water and make particulate organic matter less dilute, improving filter-feeding efficacy; this is often seen in the hydrodynamic 'pumping' of Blue Holes. Other groups include Turbellaria (flatworms) and Gastropoda (snails and other mollusks), along with smaller non-crustacean invertebrates such as chaetognaths (voracious planktonic predators).
Anchialine pools.
Hypogeal shrimps have been observed at high population densities in anchialine pools, upwards of hundreds of individuals per square meter. Many of the shrimp species present in these systems migrate into and out of pools with the tide through the connection at the water table. It is hypothesized that they enter pools during flood tides to feed and retreat to cover with ebb tides. There is a range of fish species that can be found in anchialine pools, and their presence usually indicates lower populations of hypogeal shrimp and an absence of epigeal shrimp. In Hawaii, the pools are home to the ʻōpaeʻula (Hawaiian shrimp, "Halocaridina rubra").
Anchialine pools are considered an ecosystem that combines elements from brackish surface water bodies, subterranean systems, and terrestrial landscapes, and they are usually well lit. Algal primary producers inhabit the water column and benthos, while the diversity and productivity are often influenced by geological age and connectivity to the sea. Ecological studies of anchialine pools frequently identify regionally rare and endemic species, while primary producers in these systems are typically algae and bacteria. In pools found in Western Hawaii, cyanobacterial mats are dominant; these are a common feature among shallow anchialine pools. Found on the substratum, these yellow-orange mats may precipitate minerals that contribute to the overall sedimentation of a pool. Generally, anchialine pools tend to be deeper and saltier the closer they are to the shoreline. There is also a high degree of endemism associated with these environments, with over 400 endemic species being described in the last 25 years. Thus, when these habitats are degraded or destroyed, it often leads to the extinction of multiple species. The porosity of the substratum can speed up or slow down this process, with more porous substrata reducing sedimentation due to increased hydrologic connectivity with the water table; this can exert a large control on the species that can survive in anchialine pools.
Anchialine caves.
Deep within anchialine cave systems the lack of energy from solar radiation prevents photosynthesis. These dark cave systems are often classified as allochthonous, detritus-based systems because the dominant input of organic matter comes from sources outside the system. In other words, the cave systems ultimately rely on solar radiation for most of their organic matter, but it is formed elsewhere. New research into the chemoautotrophy of caves, however, may be changing this paradigm, with a greater dependence on sulfate-reducing microbes and methanogens. In both cases, the accumulation of particulate matter is largely found at the halocline interface between 2 and 0 PSU. The concentration of organic particles is also seen at saline boundaries in other estuarine systems, with elevated concentrations of particles at the estuarine turbidity maximum.
Fauna that reside strictly within the aphotic zone of anchialine caves typically exhibit adaptations associated with low light and food, and are often classified as stygofauna. Anchialine systems are classically restricted in terms of fluxes (water, nutrients, organisms) in and out of the system. Many of the organisms in anchialine caves lack pigmentation; they have evolved to save energy by not developing chromatophores. Another adaptation to the lack of solar radiation is that many of these organisms have no eyes, a very energy-intensive organ they no longer need. Stygofauna are, however, quite different from deep-sea organisms, most of which have kept their eyes and specialized them to see bioluminescence and possibly Cherenkov radiation in their otherwise dark environments. There are no known bioluminescent stygobites to date, despite this adaptation's popularity in other dark systems.
Outside of light availability, there are a wide variety of geochemical parameters that affect the biology and ecology within these systems. Possibly the most notable and universal in these systems is the strong halocline. While some anchialine systems are entirely salt water (i.e. blue holes), other more inland systems (i.e. cenotes) often have a freshwater lens that can extend hundreds of feet deep or for miles underground until they meet the ocean interface. The halocline not only acts as a physical barrier in density but as a niche-partitioning factor that segregates these systems into stenohaline and euryhaline organisms, with the latter having the competitive advantage of being able to move between these two niches. In many low-latitude locations where the majority of these systems are found, the temperature of the intruding seawater is much warmer than that of the phreatic freshwater. Because of the discrepancy between warmer seawater and cooler groundwater, temperatures of the anchialine system may also increase with depth and penetration, which has implications for growth and respiration rates.
Exploitation and conservation.
The diversity of unusual and rare species found in anchialine systems has attracted tourists and recreational divers from across the globe. Tourism generated from the anchialine systems in Bermuda plays an important role in the economy. The Palau lakes are famous for their jellyfish populations and have even had an IMAX feature film, 'The Living Sea', made about them.
However, tourism and direct exploitation of anchialine systems have resulted in the degradation of their environmental health. Approximately 90% of Hawaii's anchialine habitats have been degraded or lost due to development and the introduction of exotic species. Hawaii's anchialine systems are currently one of the most threatened habitats in the archipelago. Pollution from tourism has endangered crustaceans in Šipun cave in Cavtat. Some anchialine systems are exploited for limestone for use in construction. This mining results in the collapse and destruction of anchialine caves. Ha Long Bay marine lakes have been exploited by residents of surrounding boat villages for fisheries and aquaculture. Anchialine pools are also intentionally filled for development purposes. Tidal currents have been shown to sweep trash into unexplored areas of Blue Holes in the Bahamas. Some caves in Bermuda, the Canary Islands, and Mallorca are used as wishing wells, which increases the concentration of copper and is thought to have caused the decline of the squat lobster, "Munidopsis polymorpha." Cave divers also have unintended negative impacts on these habitats by using flashlights that enable fish such as "Astyanax fasciatus" to feed on otherwise inaccessible prey. Additionally, cave diving can negatively alter water chemistry in normally hypoxic cave environments by introducing oxygen.
Due to the high endemism in these environments and their limited global distribution, many species in anchialine systems are at risk of extinction. Twenty-five species in Bermuda are on the IUCN Red List, and other species are on the Mexican list of threatened and endangered species in the Yucatán. Alien or introduced species also pose a significant threat to the ecological health of anchialine systems. These species can be introduced intentionally for the purpose of harvest or recreation, or unintentionally from equipment on recreational divers. In Vietnam, green sea turtles were introduced into anchialine pools for practices related to animistic rites and consumption. Exotic species introduction is a primary driver of anchialine habitat degradation in Hawaii.
There has been policy and management action to protect the health of these environments. In Hawaii, the Waikoloa Anchialine Preservation Area Program (WAPPA) monitors the water quality of coastal environments, including anchialine pools. There has been little evidence yet to suggest that the fauna of these pools are sensitive to water-quality changes; however, they may be more threatened by the increase in pool exploitation for recreational purposes due to increased accessibility from tourism development. There are also conservation efforts in Maui and the Sinai Peninsula to protect anchialine habitats in those areas.
Ongoing research.
Cave diving.
The primary way in which people study and explore the subterranean sections of anchialine systems is through cave diving. Using highly specialized techniques, divers navigate the sprawling overhead environment to form detailed maps of the underground aquifers, collect a variety of biologic, geologic, or chemical samples, and track hydrologic flow. Advances in cave diving technology, such as DPVs and rebreathers, facilitate data collection further into cave systems with lower environmental impact.
Climate change.
The complicated geometry of anchialine systems limits the understanding of hydrologic processes involved, requiring many studies to estimate or model the processes thought to be contributing to the physical and chemical properties of the system. More recent studies look at categorizing changes in biodiversity and physical characteristics of anchialine systems under changing climate conditions. It is currently an area of active research to predict how climate change induced sea level rise may affect the formation and health of anchialine systems in the near future.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "PS\n"
},
{
"math_id": 1,
"text": "PS = \\frac{1}{F} + \\frac{E}{SE+EP-RE}+\\frac{S}{D}."
},
{
"math_id": 2,
"text": "PS \\sim 1"
},
{
"math_id": 3,
"text": "2>PS>1"
},
{
"math_id": 4,
"text": "PS >2"
}
]
| https://en.wikipedia.org/wiki?curid=72402042 |
72402900 | Ramón Iribarren | Spanish civil engineer
Ramón Iribarren Cavanilles (15 April 1900 – 21 February 1967) was a Spanish civil engineer and professor of ports at the School of Civil Engineering (, "ETSICCP") in Madrid. He was chairman of the Spanish delegation to the Permanent International Association of Navigation Congresses (PIANC) and was elected as an academic at the Spanish Royal Academy of Sciences, although he did not take up the latter position. He made notable contributions in the field of coastal engineering, including methods for the calculation of breakwater stability and research which led to the development of the Iribarren number.
He undertook detailed research at several ports in the Bay of Biscay which were subject to extreme waves and frequent storms, and this underpinned much of his early research work. Iribarren recognised that many of the ports in the Bay of Biscay were insufficiently protected from severe wave and storm conditions, which had resulted in a number of shipwrecks and threatened the economic viability of the local fishing community, with whom he enjoyed a close relationship.
In the 1930s, much port and harbour infrastructure design in Spain relied on simply replicating methods used on previous projects, with the guiding principles for the design of new harbour and coastal projects often relying solely on a simple analysis of whether previous construction methods had been successful or not. Iribarren was dissatisfied with such a wholly empirical approach, which he considered did not take into account the effects of location-specific issues such as wave and sediment behaviour, and having identified this as a problem, he spent a number of years developing scientific and mathematical approaches which could be applied to specific cases, based on extensive research and an understanding of wave behaviour and coastal dynamics, in which he made extensive use of observation and photography.
He was instrumental in the development of a research facility for coastal engineering, the first of its kind in Spain. His work achieved international prominence and remains highly relevant, being subject to ongoing development and underpinning several contemporary design methods used in coastal engineering and coastal protection works.
Life and career.
Education and early work.
Iribarren was born in Irún in 1900, the son of Plácido José Iribarren Aldaz, a wealthy businessman with properties in Cuba, and Teresa Cavanilles Sanz. The eldest of three brothers, he initially studied at the school in his hometown, where he excelled as a student of mathematics. After completing a baccalaureate at the high school in San Sebastián, he left for Madrid to study exact sciences, but changed course in 1921 and began studying civil engineering, graduating in 1927 as the best-placed student on the course. Upon graduation, he initially worked for the Ministry of Public Works at the regional Catalonian roads department in Girona.
The Gipuzkoan ports, Mutriku and the Iribarren number.
Iribarren was transferred from Girona to his home province of Gipuzkoa in 1929, where he was appointed Chief Engineer of the Gipuzkoan Ports Group at the Ministry of Public Works, with an office in San Sebastián. In this role he was responsible for the ports of Deba, Isla de los Faisanes, Getaria, Mutriku, San Sebastián and Zumaia, along with overseeing the design and execution of several port and harbour projects. The role provided Iribarren with the opportunity to make detailed observations of the Gipuzkoan coastline, which informed his theories and research output. He undertook research into several aspects of breakwater and wave behaviour at each of the ports under his control, as well as the general Gipuzkoan coastline and Bay of Biscay.
Iribarren undertook extensive research at the Port of Mutriku, where he was responsible for the design and construction of a breakwater to the outer harbour in 1932. The works mitigated the approach and entry difficulties for shipping at the outer harbour area, but Iribarren observed that the existing vertical sea walls of the inner harbour were still causing significant wave reflection, leading to dangerous berthing conditions for ships once inside the mouth of the harbour. Despite initial opposition from the local fishing community, he was successful in implementing a sloping breakwater at the inner harbour in 1936, which ended the problems caused by reflection and made safe berthing of ships possible.
The work at Mutriku provided Iribarren with the opportunity to develop his fundamental theories around refraction, allowing him the time and environment in which to research and observe his theoretical approximations of wave direction and wave characteristics from available depth contours. He published papers on his work at Mutriku in 1932 and 1936, and this work led to the development of a dimensionless parameter for waves breaking on a slope, which was further developed by Jurjen Battjes in 1974 and is known as the "Iribarren number" or "Iribarren parameter".
<templatestyles src="Template:Blockquote/styles.css" />The importance of this parameter for so many aspects of waves breaking on slopes appears to justify that it be given a special name. In the author's opinion it is appropriate to call it the "Iribarren number" (denoted by "Ir"), in honor of the man who introduced it and who has made many other valuable contributions to our knowledge of water waves.
Works at the Bidasoa River.
In 1934, the City Council of Hondarribia approached Iribarren to investigate problems related to sediment transport and erosion at the Hondarribia Bar at the mouth of the Bidasoa River on the Spanish border with France, and proposed the construction of a breakwater. A budget of 3,000 pesetas was approved in order to construct a small trial section of breakwater. Recognising the complicated nature of the interaction between wave behaviour and sediment, and the need to design an effective solution, Iribarren spent a number of years studying the waves and coastal morphodynamics in Hondarribia to understand the relevant boundary conditions and prepare an effective design. He published his findings in 1941, and although his plans were supported by the Ministry of Public Works, they were met with opposition from the City Council and the project was not approved.
Meanwhile, Iribarren was approached by the French authorities to prepare a design for similar works across the river in the town of Hendaye. After completing a design in 1945, he supervised the construction of the Hendaye breakwater which commenced in October 1946. The project was a major success and in 1949, seeing the results of Iribarren's work in Hendaye, the City Council in Hondarribia approved the construction of a breakwater to his design. Iribarren supervised construction which commenced on 7 September 1949, with the works completed in 1955 at a cost of 18 million pesetas. He made changes through an iterative design process as construction progressed, with the final breakwater being 1,100 metres in length, 40 metres wide at the base and using 300,000 tonnes of armourstone from a quarry in Jaizkibel. The project was a success, solving the erosion problems, increasing navigation safety and creating a large recreational beach.
Professorship, establishment of the Ports Laboratory and international work.
Iribarren was appointed as professor at the ETSICCP in 1939, filling the vacancy left by the death of Eduardo de Castro Pascual during the Spanish Civil War in 1937. Iribarren promoted the idea of establishing a Spanish centre for the study of coastal engineering and harbour works, modelled on research facilities in universities such as Technische Universität Berlin and ETH Zurich. This was achieved in 1948 with the creation of the Ports Laboratory in Madrid, with Iribarren as Director. In 1957 the laboratory became part of the Centro de Estudios y Experimentación de Obras Públicas.
He was involved in a number of notable Spanish and international civil engineering projects throughout his career. Notable projects included San Sebastián Airport, the breakwater at the port of Palma de Mallorca, major works at the Port of Cadiz, the port of Melilla in 1944, the canalisation of the Untxin, the oil terminal of Luanda in 1956 and coastal engineering works in the Gulf of Sirte, Cartagena de Indias, and Venezuela. Between 1960 and 1961, he was commissioned by the Government of Spain to work alongside a French delegation in Paris to undertake studies for railway and port infrastructure at Villa Cisneros to transport iron ore mined in the Spanish Sahara.
The Port of Palma de Mallorca and the "Wave Diagram Method".
Iribarren's approach to the study of wave behaviour for the works at the outer breakwater of the Port of Palma de Mallorca was used as the basis for several harbour projects across Spain after he published his "wave diagram method" (method of wave plans) in 1941. Building on research which he had commenced at Mutriku in 1932, the work was subsequently translated and published in English, Portuguese and French. Iribarren noted that the orientation of the Port of Palma meant that it would only be exposed to storms whose direction varied from Southwest to Southeast. He therefore studied these two extreme storms and their midpoint (the south), and designed the breakwater accordingly.
He used a similar approach on many other projects, including improvements at Hondarribia and A Coruña. Iribarren noted that his method was an approximation, albeit one which represented a significant advance on previous design techniques. Unlike existing approaches, his method was grounded in the principles of using the results of fundamental research to devise solutions to a practical problem.
Iribarren's approach was not to design by intuition or simplified empirical comparisons with previous projects, as was the case in Spain up until the 1930s, but rather to research and determine the nature of wave propagation towards a specific coastline and assess wave characteristics and bathymetry, along with detailed analysis of the shape and orientation of the coastline or harbour under consideration.
He used as a starting point the existing theory of trochoidal waves, assuming circular orbital motion for liquid molecules in a body of water agitated by swell at infinite depth, and elliptical motion for those at reduced depths. Iribarren took into account shoaling, and the modifications which waves undergo approaching the coast as they enter shallow water, which he defined as a point where water depth formula_0 is equal to or less than half the original wavelength, formula_1.
Iribarren noted that detailed observation and the production of graphical records of wave and sediment behaviour were necessary to correlate, and if necessary modify, the theoretical approximations used in his method, as he had himself done at Palma de Mallorca. He continuously refined and developed his methods and the associated mathematics. By 1954, with further iterations and adjustments made and the method successfully implemented on a number of projects across Spain and internationally, he considered that the "wave diagram method" was sufficiently developed for use in most practical cases.
Iribarren's formula for the design of breakwaters.
Iribarren had studied under Eduardo de Castro Pascual, who in 1933 proposed a formula for the design of breakwaters which he had developed based on earlier work by Briones. The De Castro Pascual-Briones (1933) formula is:
formula_3
in which:
* formula_4 is the weight of the armourstone;
* formula_5 is the slope of the armourstone (or, more precisely, the cosine of the angle the slope makes with the horizontal when the sine is taken as unity);
* formula_6 is the density of the stone relative to that of water, and;
* formula_7 is the wave height.
De Castro's formula implies that for formula_8, formula_4 should be zero; which means that when the slope is almost horizontal, even very light armour units can be used.
If formula_5 is less than formula_9, formula_4 has an imaginary value, indicating that for very steep slopes, a breakwater or dike cannot be successfully constructed no matter how large the armourstone is. If δ is given a negative value, which is unacceptable, it results in an inadmissible value for formula_4.
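Solved for the armourstone weight, the de Castro Pascual-Briones formula can be evaluated directly. The short sketch below is illustrative only (it is not taken from the original publication); the input values are assumptions chosen simply to show the behaviour of the formula, including the restriction that "T" must exceed 2/δ.
```python
# Minimal sketch (not from the source): the de Castro Pascual-Briones (1933)
# formula solved for the armourstone weight P.  Input values are illustrative.
import math

def de_castro_weight(A, T, delta):
    """P = 704*A**3*delta / ((delta-1)**3 * (T+1)**2 * sqrt(T - 2/delta))."""
    if T <= 2.0 / delta:
        raise ValueError("slope too steep: T must exceed 2/delta for a real solution")
    return 704.0 * A**3 * delta / ((delta - 1.0)**3 * (T + 1.0)**2 * math.sqrt(T - 2.0 / delta))

# Example: wave height A = 4, slope factor T = 2 (cotangent of the slope angle),
# stone of relative density 2.65.  The result is in the units of the original formula.
print(de_castro_weight(A=4.0, T=2.0, delta=2.65))
```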
With de Castro Pascual's encouragement, Iribarren began to develop and research the formula further. Iribarren developed his own formula for the stability of breakwater slopes under wave attack, publishing a paper on the subject in 1938. However, the political situation in Spain and international attitudes to scientific co-operation with the Franco dictatorship restricted the dissemination of Iribarren's work, which led to more common international adoption of a similar method which had been developed by Robert Y. Hudson at the USACE Waterways Experiment Station (WES) in Vicksburg, Mississippi, known as Hudson's equation. Iribarren's 1938 paper included the following formula, which calculated the weight of the armourstone or wave-dissipating concrete block required:
formula_10
Where:
* formula_4 represents the weight of the stones in tonnes.
* formula_7 is the total wave height that impacts the breakwater, measured in metres.
* formula_2 is the relative density of the stone or block material.
* formula_11 is the angle (measured from the horizontal) of the breakwater slope.
* formula_12 is a coefficient, with 0.015 used for natural rubble mound breakwaters and 0.019 for artificial block breakwaters.
In this formula, the mass of the armourstone is proportional to the cube of the wave height, suggesting that a doubling of wave height necessitates an eightfold increase in stone weight. This relationship, while seeming substantial, is logical as the linear dimension of the stone (with constant density and slope) is proportional to wave height.
From the formula, Iribarren deduced that natural rubble offers advantages over its artificial counterpart, and he highlighted the significance of the material's density in breakwater design, whilst noting that the decision between natural and artificial materials often rests upon the availability of suitable quarry materials and an economic evaluation of the costs associated with each type for the given design profile.
Notably, the derived formula only permits slopes up to the natural limit of formula_13. Slopes exceeding this would yield a negative value for formula_4, which is impractical. At the limit of formula_13, even with a minimal wave height (formula_7), the formula indicates infinite values for formula_4. This implies that any size of stone on a breakwater with its natural slope could theoretically be dislodged by the slightest wave. Additionally, when formula_13 and formula_14 in the formula, formula_4 becomes zero, logically indicating that if waves are negligible, a natural slope breakwater consisting of stones of any size is in perfect equilibrium.
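The 1938 formula is straightforward to evaluate numerically. The sketch below is illustrative (not Iribarren's own computation); the wave height, densities and slope are assumed values, and it compares the natural-rubble and artificial-block coefficients quoted above.
```python
# Minimal sketch (not from the source): Iribarren's 1938 formula for the required
# armourstone or block weight P (tonnes), with A the wave height in metres, d the
# relative density of the material, alpha the slope angle and N the coefficient.
import math

def iribarren_1938_weight(A, d, alpha_deg, N):
    a = math.radians(alpha_deg)
    return N * A**3 * d / ((math.cos(a) - math.sin(a))**3 * (d - 1.0)**3)

alpha = math.degrees(math.atan(0.5))   # a 1-in-2 slope, about 26.6 degrees
print(iribarren_1938_weight(A=4.0, d=2.65, alpha_deg=alpha, N=0.015))   # natural rubble, ~6 t
print(iribarren_1938_weight(A=4.0, d=2.30, alpha_deg=alpha, N=0.019))   # concrete blocks, ~14 t
```
With these assumed inputs the formula calls for quarry stone of roughly 6 tonnes but concrete blocks of roughly 14 tonnes, reflecting the sensitivity to material density that Iribarren highlighted.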
Iribarren continued to develop and refine this formula and his work on breakwater stability over the years, and in 1965 he published an improved version of the formula based on his ongoing research:
formula_15
This improvement includes a friction coefficient, formula_16, and was a result of research and experiments by Iribarren and his colleagues during the period between the first (1938) and final (1965) publication of the Iribarren formula. He had undertaken wave flume experiments and research into factors including the type of wave breaking, wave run-up and run-down, friction and interlocking between units, and the types of failure possible in breakwaters in the intervening period. Iribarren proposed that the force of a wave can be approximated by:
formula_17
By then analysing the balance of forces for the rock or artificial element in question, the required minimum stone weight for stability could be determined both during wave run-up and wave run-down. The stability formula is then applied as follows:
for breakwater stability during wave run-up:
formula_18
for stability during wave run-down:
formula_19
in which:
"W" = stone weight in kilogrammes
"H" = Wave height at the toe of the structure
Δ = relative density of the stone (= ("ρs" - "ρw")/"ρw"), where "ρs" is the density of the rock or armour unit and "ρw" is the density of the water
"d"n50 = nominal stone diameter
α = slope gradient
"N" = a stability number
"μ" = a friction factor
g = the acceleration due to gravity.
Iribarren suggested that "μ" is around 2.4, "N" for wave run-down is 0.43, and for run-up it is around 0.85.
The key difference between the Iribarren formula and the de Castro Pascual-Briones formula is the consideration of friction in the slope factor by Iribarren, with the de Castro Pascual-Briones slope factor including only a density term. For breakwater slopes steeper than 1:2, the formulae of Hudson and Iribarren produce similar results, but for more gentle slopes the Hudson formula is inaccurate and indicates that stability becomes infinite, which is invalid. Hudson's formula relied on a formula_20 relationship between the wave height and the slope angle, formula_11, whilst Iribarren demonstrated that a formula_21 relation is correct for gentle slopes.
Iribarren presented his final publication on the subject for debate at the PIANC Conference of 1965 in Stockholm; however, Hudson did not attend this conference and there was consequently no public discussion between the authors. Modern design relies on the Van der Meer formula or similar variants. Despite Iribarren's demonstration of its shortcomings, the Hudson formula continued to be used widely in its original 1959 iteration until the 1970s. A modified version of the Hudson formula is still commonly used for concrete breakwater elements, but for rock armour structures, it is valid only for situations with a permeable core and steep waves. It is also not possible to estimate the degree of damage on a breakwater during a storm with the Hudson formula.
International recognition and publication of "Obras Maritimas".
Iribarren obtained a level of international recognition as chair of the Spanish delegation to PIANC, and in addition to his address at the 1965 event, he presented his research work at the PIANC international congresses in Lisbon, Rome and London (congresses XVII to XIX). Beginning in the late 1940s, he was invited to the United States by the engineering schools of The University of California, Berkeley and Massachusetts Institute of Technology, where he delivered several lectures.
He presented his research to the Beach Erosion Board of the United States Army Corps of Engineers, a body which subsequently organised the translation and publication of much of the research work Iribarren undertook with his long-term collaborator and fellow Spanish engineer, with whom Iribarren also collaborated on a two-volume engineering textbook entitled "Maritime works: Waves and dikes" ("Obras Marítimas"), which was first published in 1954, with a second edition in 1964.
Personal life.
Iribarren married Maria Hiriart, a French national, in 1939. He was the eldest of three brothers, one of whom, Luis Iribarren Cavanilles (19 February 1902 – 4 May 1984), was a dentist who served as manager of the Spain national football team in four matches between 1953 and 1954, and played football for both Real Unión and Real Sociedad Gimnástica Española. His second brother, José Iribarren Cavanilles, was the municipal architect in Irún.
In February 1967, Iribarren died as the result of a fire whilst driving in a Fiat 1500 on the main Valencia-Madrid motorway, near Vallecas. An inscribed watch, gifted to him by a federation of fishermen in Gipuzkoa, was used to identify him.
Legacy and recognition.
Iribarren had a highly theoretical approach grounded in detailed observation and assisted by experiment, and his work continues to underpin several coastal engineering design methods. His findings have been further developed by modern research, including contemporary design methods such as the van der Meer formula, which expands Iribarren's methods to include allowance for irregular waves and the influence of storm duration.
He was honoured by the governments of Spain and France with the awards of Civil Order of Alfonso X, the Wise in 1959, The Order of Civil Merit, Chevalier (Knight) of the Legion of Honour and was elected as a member of the École navale. He was named an adopted son of Hondarribia for his work on the Bidasoa breakwater and the associated beach nourishment works there.
A bronze bust of Iribarren by the Spanish sculptor José Pérez Pérez "Peresejo" stands at the location of the Bidasoa works, erected there in 1969. A bust of Iribarren is also displayed in the building in Madrid. A street in Irún (), and a promenade in Hondarribia (), are named after Iribarren. In 2017, a conference was held at the Institute of Engineering of Spain () to commemorate the fiftieth anniversary of Iribarren's passing.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "H"
},
{
"math_id": 1,
"text": "L_o"
},
{
"math_id": 2,
"text": "d"
},
{
"math_id": 3,
"text": " P(T+1)^2 \\sqrt{T - \\frac{2}{\\delta}} = 704A^3 \\cdot \\frac{\\delta}{(\\delta - 1)^3} "
},
{
"math_id": 4,
"text": "P"
},
{
"math_id": 5,
"text": "T"
},
{
"math_id": 6,
"text": "\\delta"
},
{
"math_id": 7,
"text": "A"
},
{
"math_id": 8,
"text": "T = \\infty"
},
{
"math_id": 9,
"text": "\\frac{2}{\\delta}"
},
{
"math_id": 10,
"text": "P = \\frac{N \\times A^3 \\times d}{{(\\cos(\\alpha) - \\sin(\\alpha))^3 \\times (d - 1)^3}}"
},
{
"math_id": 11,
"text": "\\alpha"
},
{
"math_id": 12,
"text": "N"
},
{
"math_id": 13,
"text": "\\alpha = 45^\\circ"
},
{
"math_id": 14,
"text": "A = 0"
},
{
"math_id": 15,
"text": "P = \\frac{N \\times A^3 \\times d}{{(f \\times \\cos \\alpha - \\sin \\alpha)^3 \\times (d - 1)^3}}"
},
{
"math_id": 16,
"text": "f"
},
{
"math_id": 17,
"text": "F_{wave}=\\rho_w g d_{n50}^2H"
},
{
"math_id": 18,
"text": "W\\geq\\frac{N\\rho_s gH^3}{\\Delta^3(\\mu\\cos\\alpha-\\sin\\alpha)^3}"
},
{
"math_id": 19,
"text": "W\\geq\\frac{N\\rho_s gH^3}{\\Delta^3(\\mu\\cos\\alpha+\\sin\\alpha)^3}"
},
{
"math_id": 20,
"text": "cot\\alpha"
},
{
"math_id": 21,
"text": "cos \\alpha"
}
]
| https://en.wikipedia.org/wiki?curid=72402900 |
72405181 | Mahesh Kakde | Mathematician
Mahesh Ramesh Kakde (born 1983) is a mathematician working in algebraic number theory.
Biography.
Mahesh Kakde was born in 1983 in Akola, India. He obtained a Bachelor of Mathematics degree at the Indian Statistical Institute in Bangalore in 2004, and a Certificate of Advanced Study in Mathematics at the University of Cambridge in 2005. He completed his PhD under the supervision of John Coates at the University of Cambridge in 2008. He subsequently worked at Princeton University, University College London, and King's College London, before becoming a professor at the Indian Institute of Science in 2019.
Research.
Kakde proved the main conjecture of Iwasawa theory in the totally real "μ" = 0 case. Together with Samit Dasgupta and Kevin Ventullo, he proved the Gross–Stark conjecture. In a joint project with Samit Dasgupta, they proved the Brumer–Stark conjecture away from 2 in 2020, and later over formula_0 in 2023. Generalising these methods, they also gave a solution to Hilbert's 12th problem for totally real fields. Their methods were subsequently used by Johnston and Nickel to prove the equivariant Iwasawa main conjecture for abelian extensions without the "μ" = 0 hypothesis.
Awards.
In 2019, Kakde was awarded a Swarnajayanti Fellowship.
Together with Samit Dasgupta, Kakde was one of the invited speakers at the International Congress of Mathematicians 2022, where they gave a joint talk on their work on the Brumer–Stark conjecture.
In 2022, Kakde received the Infosys Prize for his contributions to algebraic number theory. In his congratulatory message, Jury Chair Chandrashekhar Khare noted that "[Kakde’s] work on the main conjecture of non-commutative Iwasawa theory, on the Gross-Stark conjecture and on the Brumer-Stark conjecture has had a big impact on the field of algebraic number theory. His work makes important progress towards a p-adic analytic analog of Hilbert’s 12th problem on construction of abelian extensions of number fields."
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb Z"
}
]
| https://en.wikipedia.org/wiki?curid=72405181 |
72406363 | Gurzadyan theorem | In cosmology, Gurzadyan theorem, proved by Vahe Gurzadyan, states the most general functional form for the force satisfying the condition of identity of the gravity of the sphere and of a point mass located in the sphere's center. This theorem thus refers to the first statement of Isaac Newton’s shell theorem (the identity mentioned above) but not the second one, namely, the absence of gravitational force inside a shell.
The theorem and its importance for cosmology have been discussed in several papers, as well as in the article on the shell theorem.
The formula and the cosmological constant.
The formula for the force derived there has the form
formula_0
where formula_1 and formula_2 are constants. The first term is the familiar law of universal gravitation, the second one corresponds to the cosmological constant term in general relativity and McCrea-Milne cosmology.
The field is then force-free only at the center of a shell, but the confinement (oscillator) term does not change the initial formula_3 symmetry of the Newtonian field. This field is also the only one that shares with the Newtonian field the property of closed orbits at any negative value of energy, i.e. the coincidence of the period of variation of the radius vector with that of its revolution by formula_4 (resonance principle).
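As a numerical sketch, the modified force law implies a finite zero-force radius r = (3GM/(Λc²))^(1/3) at which the two terms balance; the mass and the value of Λ used below are illustrative assumptions, not quantities fixed by the theorem.
<syntaxhighlight lang="python">
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
Lam = 1.1e-52          # m^-2, assumed (observational) value of the cosmological constant
M_sun = 1.989e30       # kg

def force_per_unit_mass(r, M):
    """F/m = -G*M/r^2 + Lambda*c^2*r/3 (negative = attraction, positive = repulsion)."""
    return -G * M / r**2 + Lam * c**2 * r / 3.0

def zero_force_radius(M):
    """Radius at which the Newtonian and cosmological-constant terms balance."""
    return (3.0 * G * M / (Lam * c**2)) ** (1.0 / 3.0)

M_cluster = 1e15 * M_sun                      # illustrative cluster-scale mass
r0 = zero_force_radius(M_cluster)
print(f"zero-force radius: {r0:.3e} m  (~{r0 / 3.086e22:.1f} Mpc)")
print(f"F/m at 0.5*r0: {force_per_unit_mass(0.5 * r0, M_cluster):.3e} m/s^2 (attractive)")
print(f"F/m at 2.0*r0: {force_per_unit_mass(2.0 * r0, M_cluster):.3e} m/s^2 (repulsive)")
</syntaxhighlight>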
Consequences: cosmological constant as a physical constant.
Einstein regarded the cosmological constant as a universal constant, introducing it to define the static cosmological model. Einstein stated: “I should have initially set formula_5 in Newton's sense. But the new considerations speak for a non-zero formula_6, which strives to bring about a non-zero mean density formula_7 of matter.” This theorem resolves the contradiction between a “non-zero formula_6” and Newton's law.
From this theorem, the cosmological constant formula_8 emerges as an additional constant of gravity along with Newton's gravitational constant formula_9. The cosmological constant is then dimension-independent and matter-uncoupled, and hence can be considered even more universal than Newton's gravitational constant.
For formula_8 joining the set of fundamental constants formula_10 (Newton's gravitational constant, the speed of light and the Planck constant), the dimensions are
formula_11
and a dimensionless quantity emerges for the 4-constant set formula_12
formula_13
where formula_14 is a real number. Note that no dimensionless quantity can be constructed from the 3 constants formula_15.
Within a numerical factor, formula_16, this coincides with the information (or entropy) of the de Sitter event horizon
formula_17
and the Bekenstein Bound
formula_18
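For orientation, the magnitude of this quantity for formula_16 can be evaluated directly; the value of formula_8 used below is an assumed observational value, not part of the theorem.
<syntaxhighlight lang="python">
import math

c = 2.998e8        # m/s
G = 6.674e-11      # m^3 kg^-1 s^-2
hbar = 1.055e-34   # J s
Lam = 1.1e-52      # m^-2, assumed observational value of the cosmological constant

I = c**3 / (Lam * G * hbar)      # the dimensionless quantity with a = 1
I_dS = 3 * math.pi * I           # de Sitter horizon information/entropy
print(f"I    ~ {I:.2e}")
print(f"I_dS ~ {I_dS:.2e}")      # of order 10^122
</syntaxhighlight>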
Rescaling of physical constants.
Within conformal cyclic cosmology, this theorem implies that, in each aeon with an initial value of formula_8, the values of the 3 physical constants are eligible for rescaling, fulfilling the dimensionless ratio of invariants with respect to the conformal transformation
formula_19
Then the ratio yields
formula_20
for all physical quantities in Planck (initial) and de Sitter (final) eras of the aeons, remaining invariant under conformal transformations.
Inhomogeneous Fredholm equation.
This theorem, in the context of nonlocal effects in a system of gravitating particles, leads to the inhomogeneous Dirichlet boundary problem for the Poisson equation
formula_21
where formula_22 is the radius of the region,
formula_23.
Its solution can be expressed in terms of the double layer potential, which leads to an inhomogeneous nonlinear Hammerstein integral equation for the gravitational potential
formula_24
formula_25
This leads to a linear inhomogeneous 2nd kind Fredholm equation
formula_26
formula_27
formula_28
Its solution can be expressed in terms of the resolvent formula_29 of the integral kernel and the non-linear (repulsive) term
formula_30
formula_31
Observational indications.
The dynamics of groups and clusters of galaxies have been claimed to fit the theorem.
The possibility of two Hubble flows, a local one determined by that formula and a global one described by the Friedmannian cosmological equations, has also been proposed.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " F = -\\frac{G M m}{r^2} + \\frac{\\Lambda c^2 m r}{3}, "
},
{
"math_id": 1,
"text": "G"
},
{
"math_id": 2,
"text": "\\Lambda"
},
{
"math_id": 3,
"text": " O(4) "
},
{
"math_id": 4,
"text": " 2\\pi "
},
{
"math_id": 5,
"text": " \\lambda = 0 "
},
{
"math_id": 6,
"text": " \\lambda "
},
{
"math_id": 7,
"text": " \\rho_0 "
},
{
"math_id": 8,
"text": " \\Lambda "
},
{
"math_id": 9,
"text": " G "
},
{
"math_id": 10,
"text": " (G, c, \\hbar) "
},
{
"math_id": 11,
"text": " [c]=LT^{-1},\\quad [G]=M^{-1}L^3T^{-2},\\quad [\\hbar]=ML^{2}T^{-1},\\quad [\\Lambda]=L^{-2}, "
},
{
"math_id": 12,
"text": "(G, \\Lambda, c, \\hbar)"
},
{
"math_id": 13,
"text": " I=\\frac{c^{3a}}{\\Lambda^a G^a \\hbar^a}, "
},
{
"math_id": 14,
"text": " a "
},
{
"math_id": 15,
"text": " G, c, \\hbar "
},
{
"math_id": 16,
"text": " a=1 "
},
{
"math_id": 17,
"text": " I_{dS}= 3 \\pi \\frac {c^3}{\\Lambda G \\hbar}, "
},
{
"math_id": 18,
"text": " I_{BB} = \\frac {3 \\pi c^3}{\\Lambda G \\hbar ln 2}. "
},
{
"math_id": 19,
"text": " \\tilde{g}_{\\mu\\nu}=\\Omega^2 g_{\\mu\\nu}, "
},
{
"math_id": 20,
"text": " \\frac{Q_{dS}}{Q_p}=m (\\frac{c^3}{\\hbar G \\Lambda})^n = m I^n, \\quad m,n \\in \\mathbb{R}, "
},
{
"math_id": 21,
"text": " \\Delta \\Phi({\\bf x}) = AN G_3 S_3^2\n\\bigg(\\int_{y\\in [0,\\infty ]}\\exp \\big(-y^2/(2\\theta) \\big)y^2 dy\n\\bigg)\\cdot \\exp(-\\Phi/\\theta)-\\frac{c^2\\Lambda}{2},"
},
{
"math_id": 22,
"text": "R_\\Omega"
},
{
"math_id": 23,
"text": "A,\\theta,R_\\Omega \\in {\\mathbb R}^1"
},
{
"math_id": 24,
"text": " U({\\bf x})=\\widetilde{\\lambda} \\widehat{\\mathfrak G}({U})+\n\\alpha(\\theta,\\Lambda){\\bf x}^2,~~\n\\widehat{\\mathfrak G}({U})\\equiv \\int_{\\Omega'} {\\mathcal K}(|{\\bf x}-{\\bf x}'|)\n\\exp\\big(-{U}({\\bf x}')\\big)d{{\\bf x}'},"
},
{
"math_id": 25,
"text": "\n U( {\\bf x})\\equiv (\\Phi ({\\bf x})-C_0)/\\theta,~~\n\\widetilde{\\lambda} \\equiv \\widetilde{\\lambda_{II}}(\\theta)\n\\equiv \\frac{\\lambda_{I}}{\\theta}\\exp(-C_0/\\theta),~~\n\\alpha(\\theta,\\Lambda)= -\\frac{\\Lambda c^2}{12\\theta}."
},
{
"math_id": 26,
"text": "\n\\phi({\\bf x})=\\lambda^{(0)} \\int_{\\Omega'}{\\mathcal K}(|{\\bf x}-{\\bf x}'|)\n \\phi({\\bf x}')d{\\bf x}' +\\widehat{\\beta}({\\bf x}),\n"
},
{
"math_id": 27,
"text": " \n\\widehat{\\beta}({\\bf x})\\equiv -\\lambda^{(0)}\\int_{\\Omega'}\n {\\mathcal K}(|{\\bf x}-{\\bf x}'|)\\alpha |{\\bf x}'|^2 d{\\bf x}' -\\alpha |{\\bf x}|^2,\n"
},
{
"math_id": 28,
"text": "\n{U}({\\bf x})={U}_0-\\phi ({\\bf x}), ~~|{\\phi }|\\ll {U}_0;~~~\n \\lambda^{(0)}\\equiv-\\widetilde{\\lambda}\\exp(-U_0). "
},
{
"math_id": 29,
"text": "\\Gamma"
},
{
"math_id": 30,
"text": " \\phi ({\\bf x}) = -\\widehat\\beta ({\\bf x}) +\\lambda^{(0)}\\sum_{\\bf n}\n\\langle -\\widehat\\beta ({\\bf x}),\\phi_{\\bf n} \\rangle \\phi_{\\bf n}\\lambda_{\\bf n}^{-1} + (\\lambda^{(0)})^2\\sum_{\\bf n}\\langle -\\widehat\\beta ({\\bf x}),\\phi_{\\bf n} \\rangle\n\\phi_{\\bf n}\\lambda_{\\bf n}^{-1}(\\lambda_{\\bf n}-\\lambda^{(0)})^{-1} =\n"
},
{
"math_id": 31,
"text": "\n -\\widehat\\beta ({\\bf x}) +\n\\lambda^{(0)} \\int_\\Omega\n\\underbrace{\n\\bigg( {\\mathcal K}({\\bf x},{\\bf x}') +\n\\lambda^{(0)}\n\\sum_{\\bf n} \\phi_{\\bf n}({\\bf x}) \\phi_{\\bf n}({\\bf x}')\n\\lambda_{\\bf n}^{-1}(\\lambda_{\\bf n}-\\lambda^{(0)})^{-1}\n\\bigg)}_{\\Gamma ({\\bf x},{\\bf x}',\\lambda^{(0)})}\n(-\\widehat\\beta ({\\bf x}))d{\\bf x}'. "
}
]
| https://en.wikipedia.org/wiki?curid=72406363 |
72408357 | Backward stochastic differential equation | Stochastsic differential equations with terminal condition
A backward stochastic differential equation (BSDE) is a stochastic differential equation with a terminal condition in which the solution is required to be adapted with respect to an underlying filtration. BSDEs naturally arise in various applications such as stochastic control, mathematical finance, and nonlinear Feynman-Kac formulae.
Background.
Backward stochastic differential equations were introduced by Jean-Michel Bismut in 1973 in the linear case and by Étienne Pardoux and Shige Peng in 1990 in the nonlinear case.
Mathematical framework.
Fix a terminal time formula_0 and a probability space formula_1. Let formula_2 be a Brownian motion with natural filtration formula_3. A backward stochastic differential equation is an integral equation of the type
Y_t = ξ + ∫_t^T f(s, Y_s, Z_s) ds − ∫_t^T Z_s dB_s,  0 ≤ t ≤ T,     (1)
where formula_4 is called the generator of the BSDE, the terminal condition formula_5 is an formula_6-measurable random variable, and the solution formula_7 consists of stochastic processes formula_8 and formula_9 which are adapted to the filtration formula_3.
Example.
In the case formula_10, the BSDE (1) reduces to
Y_t = ξ − ∫_t^T Z_s dB_s,  0 ≤ t ≤ T.     (2)
If formula_11, then it follows from the martingale representation theorem, that there exists a unique stochastic process formula_12 such that formula_13 and formula_14 satisfy the BSDE (2).
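As a sketch of this special case, take the hypothetical terminal condition ξ = B_T² (chosen only for illustration, not taken from the article); then Y_t = B_t² + (T − t) and Z_t = 2B_t in closed form, and a regression-based Monte Carlo estimate of the conditional expectation recovers the Y-component.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
T, t, n_paths = 1.0, 0.4, 200_000

# Simulate B_t, then B_T = B_t + an independent increment over [t, T].
B_t = rng.normal(0.0, np.sqrt(t), n_paths)
B_T = B_t + rng.normal(0.0, np.sqrt(T - t), n_paths)
xi = B_T**2                       # illustrative terminal condition

# Estimate E[xi | F_t] by regressing xi on B_t (a quadratic suffices for this example).
coeffs = np.polyfit(B_t, xi, deg=2)
Y_t_estimate = np.polyval(coeffs, 1.0)       # estimate of Y_t on the event {B_t = 1}
Y_t_exact = 1.0**2 + (T - t)                 # closed form B_t^2 + (T - t)
print(f"Monte Carlo estimate of Y_t at B_t=1: {Y_t_estimate:.4f}")
print(f"closed form:                          {Y_t_exact:.4f}")
</syntaxhighlight>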
Numerical Method.
The deep backward stochastic differential equation method is a numerical method that combines deep learning with backward stochastic differential equations (BSDEs). It is particularly useful for solving high-dimensional problems in financial mathematics. By leveraging the powerful function approximation capabilities of deep neural networks, deep BSDE addresses the computational challenges faced by traditional numerical methods in high-dimensional settings. Specifically, traditional methods like finite difference methods or Monte Carlo simulations often struggle with the curse of dimensionality, where computational cost increases exponentially with the number of dimensions. Deep BSDE methods, however, employ deep neural networks to approximate solutions of high-dimensional partial differential equations (PDEs), effectively reducing the computational burden.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "T>0"
},
{
"math_id": 1,
"text": "(\\Omega,\\mathcal{F},\\mathbb{P})"
},
{
"math_id": 2,
"text": "(B_t)_{t\\in [0,T]}"
},
{
"math_id": 3,
"text": "(\\mathcal{F}_t)_{t\\in [0,T]}"
},
{
"math_id": 4,
"text": "f:[0,T]\\times\\mathbb{R}\\times\\mathbb{R}\\to\\mathbb{R}"
},
{
"math_id": 5,
"text": "\\xi"
},
{
"math_id": 6,
"text": "\\mathcal{F}_T"
},
{
"math_id": 7,
"text": "(Y_t,Z_t)_{t\\in[0,T]}"
},
{
"math_id": 8,
"text": "(Y_t)_{t\\in[0,T]}"
},
{
"math_id": 9,
"text": "(Z_t)_{t\\in[0,T]}"
},
{
"math_id": 10,
"text": "f\\equiv 0"
},
{
"math_id": 11,
"text": "\\xi\\in L^2(\\Omega,\\mathbb{P})"
},
{
"math_id": 12,
"text": "(Z_t)_{t\\in [0,T]}"
},
{
"math_id": 13,
"text": "Y_t = \\mathbb{E} [ \\xi | \\mathcal{F}_t ]"
},
{
"math_id": 14,
"text": "Z_t"
}
]
| https://en.wikipedia.org/wiki?curid=72408357 |
72416325 | Sasikanth Manipatruni | American electrical engineer (born 1984)
Sasikanth Manipatruni is an American engineer and inventor in the fields of Computer engineering, Integrated circuit technology, Materials Engineering and semiconductor device fabrication. Manipatruni contributed to developments in silicon photonics, spintronics and quantum materials.
Manipatruni is a co-author of 50 research papers and ~400 patents (cited about 7,500 times) in the areas of electro-optic modulators, cavity optomechanics, nanophotonics and optical interconnects, spintronics, and new logic devices for the extension of Moore's law. His work has appeared in Nature, Nature Physics, Nature Communications, Science Advances and Physical Review Letters.
Early life and education.
Manipatruni received a bachelor's degree in Electrical Engineering and Physics from IIT Delhi in 2005 where he graduated with the institute silver medal. He also completed research under the Kishore Vaigyanik Protsahan Yojana at Indian Institute of Science working at Inter-University Centre for Astronomy and Astrophysics and in optimal control at Swiss Federal Institute of Technology at Zurich.
Research career.
Manipatruni received his Ph.D. in Electrical Engineering with minor in applied engineering physics from Cornell University. The title of his thesis was "Scaling silicon nanophotonic interconnects : silicon electrooptic modulators, slowlight & optomechanical devices". His thesis advisors were Michal Lipson and Alexander Gaeta at Cornell University. He has co-authored academic research with Michal Lipson, Alexander Gaeta, Keren Bergman, Ramamoorthy Ramesh, Lane W. Martin, Naresh Shanbhag, Jian-Ping Wang, Paul McEuen, Christopher J. Hardy, Felix Casanova, Ehsan Afshari, Alyssa Apsel, Jacob T. Robinson, spanning Condensed matter physics, Electronics and devices, Photonics, Circuit theory, Computer architecture and hardware for Artificial intelligence areas.
Silicon optical links.
Manipatruni's PhD thesis focused on developing the then-nascent field of silicon photonics by progressively scaling the speed of electro-optic modulation from 1 GHz to 12.5 Gbit/s, 18 Gbit/s and 50 Gbit/s on a single physical optical channel driven by a silicon photonic component. The significance of silicon for optical uses can be understood as follows: nearly 95% of modern integrated circuit technology is based on silicon semiconductors, which offer high productivity in semiconductor device fabrication due to the use of large single-crystal wafers and extraordinary control of the quality of the interfaces. However, photonic integrated circuits are still mostly manufactured using III-V and II-VI compound semiconductor materials, whose engineering lags the silicon industry by several decades (judged by the number of wafers and devices produced per year). By showing that silicon can be used as a material to turn a light signal on and off, silicon electro-optic modulators allow the high-quality engineering developed for the electronics industry to be adopted by the photonics/optics industry; this is the foundational argument used by silicon electro-optics researchers. This work was closely paralleled at leading industrial research groups at Intel, IBM and Luxtera during 2005–2010, with industry adopting and improving various methods developed at academic research labs. Manipatruni's work showed that it is practically possible to develop free-carrier injection modulators (in contrast to carrier-depletion modulators) that reach high-speed modulation by engineering the injection of free carriers via pre-amplification and back-to-back connected injection-mode devices.
In combination with Keren Bergman at Columbia University, the micro-ring modulator research led to the demonstration of a number of firsts in long-distance uses of silicon photonics utilizing silicon-based injection-mode electro-optic modulators, including the first demonstration of long-haul transmission using silicon microring modulators, the first error-free transmission of microring-modulated BPSK, the first demonstration of 80 km long-haul transmission of 12.5 Gb/s data using a silicon microring resonator electro-optic modulator, and the first experimental bit-error-rate validation of a 12.5 Gb/s silicon modulator enabling photonic networks-on-chip. These academic results have been applied in products widely deployed at Cisco and Intel.
Application for computing and medical imaging.
Manipatruni, Lipson and collaborators at Intel have projected a roadmap that requires the use of silicon micro-ring modulators to meet the bandwidth, linear bandwidth density (bandwidth per cross-section length) and area bandwidth density (bandwidth per area) of on-die communication links. While originally considered thermally unstable, by the early 2020s micro-ring modulators had received wide adoption for computing needs at Intel, Ayar Labs and GlobalFoundries, and in varied optical interconnect usages.
The optimal energy of an on-die optical link is written as formula_0 where formula_1 is the optimal detector voltage (maintaining the bit error rate), formula_2 the detector capacitance, formula_3 the modulator drive voltage, formula_4 the electro-optic volume of the optical cavity being stabilized, the refractive index change per carrier concentration, and the spectral sensitivity of the device to refractive index change, formula_5 the change in optical transmission, Ptune the power needed to keep the resonator operational, and B the bandwidth of the link at the frequency F of the data being serialized.
While working at General Electric's GE Global Research facility, Manipatruni and Christopher J. Hardy applied integrated photonic links to magnetic resonance imaging to improve the signal collection rate from MRI machines via the signal collection coils. Optical transduction of the MRI signals can allow significantly larger signal-collection arrays within the MRI system, increasing signal throughput, reducing the time needed to collect an image, and reducing the weight of the coils and the cost of MRI imaging.
Cavity optomechanics and optical radiation pressure.
In 2009, Manipatruni made the first proposal that optical radiation pressure leads to non-reciprocity in microcavity optomechanics in the classical electromagnetic domain, without the use of magnetic isolators. In classical Newtonian optics, it was understood that light rays must be able to retrace their path through a given combination of optical media. However, once the momentum of light inside a movable medium is taken into account, this need not be true in all cases. This work proposed that the breaking of reciprocity (i.e., the properties of the medium can differ for forward- and backward-moving light) is observable in microscale optomechanical systems due to their small mass, low mechanical losses and the high amplification of light afforded by long confinement times.
Later work has established the breaking of reciprocity in a number of nanophotonic conditions including time modulation and parametric effects in cavities. Manipatruni and Lipson have also applied the nascent devices in silicon photonics to optical synchronization and generation of non-classical beams of light using optical non-linearities.
Memory and spintronic devices.
Manipatruni worked on spintronics for the development of logic computing devices for computational nodes beyond the existing limits of silicon-based transistors. He developed an extended modified nodal analysis that uses vector circuit theory for spin-based currents and voltages, allowing spin components to be used inside the VLSI design flows widely used in industry. The circuit modeling is based on theoretical work by Supriyo Datta and Gerrit E. W. Bauer. Manipatruni's spin circuit models were extensively applied to the development of spin logic circuits, spin interconnects and domain wall interconnects, and to benchmarking logic and memory devices utilizing spin and magnetic circuits.
In 2011, building on the discovery by Robert Buhrman, Daniel Ralph and Ioan Miron of the spin Hall effect and spin–orbit interaction in heavy (period-6) transition metals, Manipatruni proposed an integrated spin-Hall-effect memory (later named spin-orbit memory to capture the complex interplay of the interface and bulk components of spin-current generation) combined with modern fin field-effect transistors, to address the growing difficulty with embedded static random-access memory in modern semiconductor process technology. SOT-MRAM for SRAM replacement spurred significant research and development, leading to successful demonstrations of SOT-MRAM combined with fin field-effect transistors in 22 nm and 14 nm processes at various foundries.
Working with Jian-Ping Wang, Manipatruni and collaborators were able to show evidence of a 4th elemental ferro-magnet. Given the rarity of ferro-magnetic materials in elemental form at room temperature, use of a less rare element can help with the adoption of permanent magnet based driven systems for electric vehicles.
Computational logic devices and quantum materials.
In 2016, Manipatruni and collaborators proposed a number of changes to new logic device development by identifying the core criteria for logic devices intended for use beyond the 2 nm process. The continued slowdown of Moore's law, as evidenced by the slowdown of voltage scaling and lithographic node scaling and by the increasing cost per wafer and complexity of the fabs, indicated that Moore's law as it existed in the 2000–2010 era had changed to a less aggressive scaling paradigm.
Manipatruni proposed that spintronic and multiferroic systems are leading candidates for achieving attojoule-class logic gates for computing, thereby enabling the continuation of Moore's law for transistor scaling. However, shifting the materials focus of computing towards oxides and topological materials requires a holistic approach addressing energy, stochasticity and complexity.
The Manipatruni-Nikonov-Young figure of merit for computational quantum materials is defined as the ratio of formula_6, the energy to switch a device at room temperature, to formula_7, the energy of thermodynamic stability of the material relative to the vacuum energy, where formula_8 denotes the reversal of the order parameter, such as the ferro-electric polarization or magnetization of the material:
formula_9
This ratio is universally optimal for a ferro-electric material and compared favorably to spintronic and CMOS switching elements such as MOS transistors and BJTs. The framework (adopted by SIA decadal plan) describes a unified computing framework that uses physical scaling (physics-based improvement in device energy and density), mathematical scaling (using information theoretic improvements to allow higher error rate as devices scale to thermodynamic limits) and complexity scaling (architectural scaling that moves from distinct memory & logic units to AI based architectures). Combining Shannon inspired computing allows the physical stochastic errors inherent in highly scaled devices to be mitigated by information theoretic techniques.
Ian A. Young, Nikonov, and Manipatruni have provided a list of 10 outstanding problems in quantum materials as they pertain to computational devices. These problems have been subsequently addressed in numerous research works leading to various improved device properties for a future computer technology Beyond CMOS. The top problems listed as milestones and challenges for logic are as follows:
Problems of magnetic/ferro-electric/multiferroic switching
Magneto-electric spin-orbit logic is a design using this methodology for a new logical component that couples magneto-electric effect and spin orbit effects. Compared to CMOS, MESO circuits could potentially require less energy for switching, lower operating voltage, and a higher integration density.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E_{switch_Optical_Link}>\\hbar\\omega.V_{receive}.C_{d}.10^{L*\\alpha/10}/(\\eta_{L}\\eta_{D}\\eta_{M}\\eta_{C}e)\n+(V_{m}\\Theta\\Delta.T)/((dn/d\\rho)(dT/dn))+(2/B)P_{tune}\\Delta\\lambda+E_{SD}*(B/(2F_{clock}))"
},
{
"math_id": 1,
"text": "V_{receive}"
},
{
"math_id": 2,
"text": "C_{d}"
},
{
"math_id": 3,
"text": "V_{m}"
},
{
"math_id": 4,
"text": "\\Theta, dn/d\\rho, dT/dn "
},
{
"math_id": 5,
"text": "\\Delta.T"
},
{
"math_id": 6,
"text": "E_{switching}"
},
{
"math_id": 7,
"text": "E(\\pm\\theta)"
},
{
"math_id": 8,
"text": "\\pm\\theta "
},
{
"math_id": 9,
"text": "\\lambda =E_{switching}/E(\\pm\\theta)"
}
]
| https://en.wikipedia.org/wiki?curid=72416325 |
72418333 | Woodall's conjecture | <templatestyles src="Unsolved/styles.css" />
Unsolved problem in mathematics:
Does the minimum number of edges in a dicut of a directed graph always equal the maximum number of disjoint dijoins?
In the mathematics of directed graphs, Woodall's conjecture is an unproven relationship between dicuts and dijoins. It was posed by Douglas Woodall in 1976.
Statement.
A dicut is a partition of the vertices into two subsets such that all edges that cross the partition do so in the same direction. A dijoin is a subset of edges that, when contracted, produces a strongly connected graph; equivalently, it is a subset of edges that includes at least one edge from each dicut.
If the minimum number of edges in a dicut is formula_0, then there can be at most formula_0 disjoint dijoins in the graph, because each one must include a different edge from the smallest dicut. Woodall's conjecture states that, in this case, it is always possible to find formula_0 disjoint dijoins. That is, according to the conjecture, in any directed graph the minimum number of edges in a dicut equals the maximum number of disjoint dijoins that can be found in the graph (a packing of dijoins).
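For very small digraphs both quantities can be computed by brute force. The sketch below is a naive enumeration (exponential in the number of vertices and edges, and assuming the digraph has at least one dicut); on the tiny example given it finds as many pairwise disjoint dijoins as the minimum dicut size.
<syntaxhighlight lang="python">
from itertools import product

def dicuts(vertices, edges):
    """Each dicut is the set of edges leaving a vertex set S that no edge enters."""
    vs = sorted(vertices)
    cuts = set()
    for mask in range(1, 2 ** len(vs) - 1):
        S = {v for i, v in enumerate(vs) if mask >> i & 1}
        entering = [e for e in edges if e[0] not in S and e[1] in S]
        leaving = [e for e in edges if e[0] in S and e[1] not in S]
        if not entering and leaving:
            cuts.add(frozenset(leaving))
    return cuts

def max_disjoint_dijoins(vertices, edges):
    """Brute force: largest k for which k pairwise disjoint dijoins exist."""
    cuts = dicuts(vertices, edges)            # assumes at least one dicut exists
    k_max = min(len(c) for c in cuts)         # upper bound given by the smallest dicut
    for k in range(k_max, 0, -1):
        for assignment in product(range(k + 1), repeat=len(edges)):
            classes = [set() for _ in range(k)]
            for e, cls in zip(edges, assignment):
                if cls < k:                   # class k means "edge unused"
                    classes[cls].add(e)
            if all(cls & cut for cls in classes for cut in cuts):
                return k, k_max
    return 0, k_max

# Two directed paths 1->2->4 and 1->3->4 sharing their endpoints.
V = {1, 2, 3, 4}
E = [(1, 2), (1, 3), (2, 4), (3, 4)]
k, min_dicut = max_disjoint_dijoins(V, E)
print(f"minimum dicut size = {min_dicut}, pairwise disjoint dijoins found = {k}")
</syntaxhighlight>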
Partial results.
It is a folklore result that the conjecture is true for directed graphs whose minimum dicut has two edges. Any instance of the problem can be reduced to a directed acyclic graph by taking the condensation of the instance, a graph formed by contracting each strongly connected component to a single vertex. Another class of graphs for which the conjecture has been proven true is the directed acyclic graphs in which every source vertex (a vertex without incoming edges) has a path to every sink vertex (a vertex without outgoing edges).
Related results.
A fractional weighted version of the conjecture, posed by Jack Edmonds and Rick Giles, was refuted by Alexander Schrijver. In the other direction, the Lucchesi–Younger theorem states that the minimum size of a dijoin equals the maximum number of disjoint dicuts that can be found in a given graph.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "k"
}
]
| https://en.wikipedia.org/wiki?curid=72418333 |
72440309 | Littrow expansion | Littrow expansion and its counterpart Littrow compression are optical effects associated with slitless imaging spectrographs. These effects are named after austrian physicist Otto von Littrow.
In a slitless imaging spectrograph, light is focused with a conventional optical system, which includes a transmission or reflection grating as in a conventional spectrograph. This disperses the light, according to wavelength, in one direction; but no slit is interposed into the beam. For pointlike objects (such as distant stars) this results in a spectrum on the focal plane of the instrument for each imaged object. For distributed objects with emission-line spectra (such as the Sun in extreme ultraviolet), it results in an image of the object at each wavelength of interest, overlapping on the focal plane, as in a spectroheliograph.
Description.
The Littrow expansion/compression effect is an anamorphic distortion of single-wavelength image on the focal plane of the instrument, due to a geometric effect surrounding reflection or transmission at the grating. In particular, the angle of incidence formula_0 and reflection formula_1 from a flat mirror, measured from the direction normal to the mirror, have the relation
formula_2
which implies
formula_3
so that an image encoded in the angle of collimated light is reversed but not distorted by the reflection.
In a spectrograph, the angle of reflection in the dispersed direction depends in a more complicated way on the angle of incidence:
formula_4
where formula_5 is an integer and represents spectral order, formula_6 is the wavelength of interest, and formula_7 is the line spacing of the grating. Because the sine function (and its inverse) are nonlinear, this in general means that
formula_8
for most values of formula_5 and formula_9, yielding anamorphic distortion of the spectral image at each wavelength. When the magnitude is larger, images are expanded in the spectral direction; when the magnitude is smaller, they are compressed.
For the special case where
formula_10
the reflected ray exits the grating exactly back along the incident ray, and formula_11; this is the Littrow configuration, and the specific angle for which this configuration holds is the Littrow angle. This configuration preserves the image aspect ratio in the reflected beam. All other incidence angles yield either Littrow expansion or Littrow compression of the collimated image.
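The anamorphic factor and the Littrow angle can be evaluated directly from the grating equation above; the wavelength, groove spacing and diffraction order used below are illustrative assumptions.
<syntaxhighlight lang="python">
import numpy as np

wavelength = 500e-9      # m, illustrative assumption
D = 1e-6                 # m, groove spacing (1000 lines/mm), illustrative assumption
n = -1                   # diffraction order

def theta_r(theta_i):
    """Diffracted angle from the grating equation, angles in radians."""
    return -np.arcsin(np.sin(theta_i) + n * wavelength / D)

def anamorphic_factor(theta_i, h=1e-6):
    """Numerical derivative d(theta_r)/d(theta_i): a value of -1 means mirror-like,
    undistorted imaging; other values give Littrow expansion or compression."""
    return (theta_r(theta_i + h) - theta_r(theta_i - h)) / (2 * h)

theta_littrow = np.arcsin(-n * wavelength / (2 * D))   # from n*lambda/D = -2*sin(theta_i)
for deg in (10.0, np.degrees(theta_littrow), 40.0):
    t = np.radians(deg)
    print(f"theta_i = {deg:5.1f} deg   d(theta_r)/d(theta_i) = {anamorphic_factor(t):+.3f}")
</syntaxhighlight>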
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\theta_i"
},
{
"math_id": 1,
"text": "\\theta_r"
},
{
"math_id": 2,
"text": "\n \\theta_r = -\\theta_i,\n"
},
{
"math_id": 3,
"text": "\n \\frac{d\\theta_r}{d\\theta_i} = -1,\n"
},
{
"math_id": 4,
"text": "\n \\theta_r = -\\arcsin\\big( \\sin(\\theta_i) + n \\lambda / D \\big),\n"
},
{
"math_id": 5,
"text": "n"
},
{
"math_id": 6,
"text": "\\lambda"
},
{
"math_id": 7,
"text": "D"
},
{
"math_id": 8,
"text": "\n \\frac{d\\theta_r}{d\\theta_i} \\ne -1\n"
},
{
"math_id": 9,
"text": "\\lambda/D"
},
{
"math_id": 10,
"text": "\n n \\lambda / D = - 2 \\sin(\\theta_i),\n"
},
{
"math_id": 11,
"text": "d\\theta_r/d\\theta_i = 1"
}
]
| https://en.wikipedia.org/wiki?curid=72440309 |
72441060 | Jacobi bound problem | The Jacobi bound problem concerns the veracity of Jacobi's inequality which is an inequality on the absolute dimension of a differential algebraic variety in terms of its defining equations.
This is one of Kolchin's Problems.
The inequality is the differential algebraic analog of Bézout's theorem in affine space.
Although the problem was first formulated by Jacobi, in 1936 Joseph Ritt recognized it as non-rigorously posed, in that Jacobi did not have a rigorous notion of absolute dimension (Jacobi and Ritt used the term "order", which Ritt was the first to define rigorously, using the notion of transcendence degree).
Intuitively, the absolute dimension is the number of constants of integration required to specify a solution of a system of ordinary differential equations.
A mathematical proof of the inequality has been open since 1936.
Statement.
Let formula_0 be a differential field of characteristic zero and consider formula_1 a differential algebraic variety determined by the vanishing of differential polynomials formula_2.
If formula_3 is an irreducible component of formula_1 of finite absolute dimension then
formula_4
In the above display, formula_5 is the Jacobi number.
It is defined to be
formula_6.
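The combinatorial part of this definition, the maximum over permutations of a sum of orders, is an assignment-type maximum; the sketch below computes it by brute force for a small, purely illustrative order matrix.
<syntaxhighlight lang="python">
from itertools import permutations

def jacobi_number(orders):
    """orders[i][j] = order of u_j in the variable x_i; returns the maximum over
    permutations sigma of sum_i orders[i][sigma(i)] (brute force, fine for small n)."""
    n = len(orders)
    return max(sum(orders[i][sigma[i]] for i in range(n))
               for sigma in permutations(range(n)))

# Illustrative 3x3 order matrix, not taken from a specific system of equations.
orders = [
    [2, 0, 1],
    [1, 3, 0],
    [0, 1, 2],
]
print(jacobi_number(orders))   # 2 + 3 + 2 = 7
</syntaxhighlight>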
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " (K,\\partial) "
},
{
"math_id": 1,
"text": " \\Gamma "
},
{
"math_id": 2,
"text": " u_1,\\ldots,u_n \\in K[x_1,\\ldots,x_n]_{\\partial} "
},
{
"math_id": 3,
"text": " \\Gamma_1 "
},
{
"math_id": 4,
"text": " a(\\Gamma_1) \\leq J(u_1,u_2,\\ldots,u_n). "
},
{
"math_id": 5,
"text": " J(u_1,u_2,\\ldots,u_n) "
},
{
"math_id": 6,
"text": " \\max_{\\sigma \\in S_n} \\sum_{i=1}^n \\operatorname{ord}_{x_i}^{\\partial}(u_{\\sigma(i)}) "
}
]
| https://en.wikipedia.org/wiki?curid=72441060 |
724577 | Blood urea nitrogen | Blood test
Blood urea nitrogen (BUN) is a medical test that measures the amount of urea nitrogen found in blood. The liver produces urea in the urea cycle as a waste product of the digestion of protein. Normal human adult blood should contain 7 to 18 mg/dL (2.5 to 6.4 mmol/L) of urea nitrogen. Individual laboratories may have different reference ranges, as they may use different assays. The test is used to detect kidney problems. It is not considered as reliable as creatinine or BUN-to-creatinine ratio blood studies.
Interpretation.
BUN is an indication of kidney health. The normal range is 2.1–7.1 mmol/L or 6–20 mg/dL.
The main causes of an increase in BUN are: high-protein diet, decrease in glomerular filtration rate (GFR) (suggestive of kidney failure), decrease in blood volume (hypovolemia), congestive heart failure, gastrointestinal hemorrhage, fever, rapid cell destruction from infections, athletic activity, excessive muscle breakdown, and increased catabolism.
Hypothyroidism can cause both decreased GFR and hypovolemia, but BUN-to-creatinine ratio has been found to be lowered in hypothyroidism and raised in hyperthyroidism.
The main causes of a decrease in BUN are malnutrition (low-protein diet), severe liver disease, anabolic state, and syndrome of inappropriate antidiuretic hormone.
Another rare cause of a decreased BUN is ornithine transcarbamylase deficiency, which is a genetic disorder inherited in an X-linked recessive pattern. OTC deficiency is also accompanied by hyperammonemia and high orotic acid levels.
Units.
BUN is usually reported in mg/dL in some countries (e.g. United States, Mexico, Italy, Austria, and Germany). Elsewhere, the concentration of urea is reported in SI units as mmol/L.
formula_0 represents the mass of nitrogen within urea per unit volume, not the mass of whole urea. Each molecule of urea has two nitrogen atoms, each having molar mass 14 g/mol. To convert from mg/dL of blood urea nitrogen to mmol/L of urea:
formula_1
Note that the molar concentrations of urea and urea nitrogen are equal, because both molecular nitrogen (N2) and urea have two nitrogen atoms.
Convert BUN to urea in mg/dL by using the following formula:
formula_2
where 60 is the molecular weight of urea and 14*2 the molecular weight of the two nitrogen atoms in urea.
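The two conversions above can be written as small helper functions; this is a direct transcription of the formulas, not a clinical tool.
<syntaxhighlight lang="python">
def bun_mgdl_to_urea_mmoll(bun_mg_dl: float) -> float:
    """BUN in mg/dL to urea in mmol/L: multiply by 10/(14*2), about 0.357."""
    return bun_mg_dl * 10.0 / 28.0

def bun_mgdl_to_urea_mgdl(bun_mg_dl: float) -> float:
    """BUN in mg/dL to urea in mg/dL: multiply by 60/(14*2), about 2.14."""
    return bun_mg_dl * 60.0 / 28.0

for bun in (7, 18):
    print(f"BUN {bun} mg/dL -> urea {bun_mgdl_to_urea_mmoll(bun):.1f} mmol/L, "
          f"{bun_mgdl_to_urea_mgdl(bun):.0f} mg/dL")
</syntaxhighlight>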
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "BUN_{mg/dL}"
},
{
"math_id": 1,
"text": "Urea_{mmol/L} = BUN_{mmol/L} = BUN_{mg/dL} * \\frac{10_{dL/L}}{14*2} = BUN_{mg/dL} * 0.3571"
},
{
"math_id": 2,
"text": "Urea_{mg/dL} = BUN_{mg/dL} * \\frac{60}{14*2} = BUN_{mg/dL} * 2.14 "
}
]
| https://en.wikipedia.org/wiki?curid=724577 |
72460002 | Transition-rate matrix | Matrix describing continuous-time Markov chains
In probability theory, a transition-rate matrix (also known as a Q-matrix, intensity matrix, or infinitesimal generator matrix) is an array of numbers describing the instantaneous rate at which a continuous-time Markov chain transitions between states.
In a transition-rate matrix formula_0 (sometimes written formula_1), the element formula_2 (for formula_3) denotes the rate of departing from state formula_4 and arriving in state formula_5. The off-diagonal rates satisfy formula_6, and the diagonal elements formula_7 are defined such that
formula_8,
and therefore the rows of the matrix sum to zero.
Up to a global sign, a large class of examples of such matrices is provided by the Laplacian of a directed, weighted graph. The vertices of the graph correspond to the Markov chain's states.
Properties.
The transition-rate matrix has the following properties: there is at least one eigenvalue equal to zero, and every other eigenvalue formula_9 satisfies formula_10; every left eigenvector formula_11 associated with a nonzero eigenvalue satisfies formula_12; and the matrix is the derivative at zero of the transition-probability function of the chain, formula_13.
Example.
An M/M/1 queue, a model which counts the number of jobs in a queueing system with arrivals at rate λ and services at rate μ, has transition-rate matrix
formula_14
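A truncated version of this matrix is easy to build and sanity-check numerically; the rates, the truncation level and the stationary-distribution check below are illustrative choices (for λ < μ the untruncated queue has P(empty) = 1 − λ/μ).
<syntaxhighlight lang="python">
import numpy as np

def mm1_rate_matrix(lam, mu, n_states):
    """Transition-rate matrix of an M/M/1 queue truncated to states 0..n_states-1."""
    Q = np.zeros((n_states, n_states))
    for i in range(n_states - 1):
        Q[i, i + 1] = lam          # arrival
    for i in range(1, n_states):
        Q[i, i - 1] = mu           # service completion
    np.fill_diagonal(Q, -Q.sum(axis=1))   # rows must sum to zero
    return Q

lam, mu, n = 1.0, 2.0, 50
Q = mm1_rate_matrix(lam, mu, n)
print("row sums all zero:", np.allclose(Q.sum(axis=1), 0.0))

# stationary distribution: solve pi Q = 0 with sum(pi) = 1
A = np.vstack([Q.T, np.ones(n)])
b = np.append(np.zeros(n), 1.0)
pi = np.linalg.lstsq(A, b, rcond=None)[0]
print("P(empty queue) ~", pi[0], " (untruncated exact value:", 1 - lam / mu, ")")
</syntaxhighlight>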
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Q"
},
{
"math_id": 1,
"text": "A"
},
{
"math_id": 2,
"text": "q_{ij}"
},
{
"math_id": 3,
"text": "i \\neq j"
},
{
"math_id": 4,
"text": "i"
},
{
"math_id": 5,
"text": "j"
},
{
"math_id": 6,
"text": "q_{ij} \\geq 0"
},
{
"math_id": 7,
"text": "q_{ii}"
},
{
"math_id": 8,
"text": "q_{ii} = -\\sum_{j\\neq i} q_{ij}"
},
{
"math_id": 9,
"text": "\\lambda"
},
{
"math_id": 10,
"text": " 0 > \\mathrm{Re}\\{\\lambda\\} \\geq 2 \\min_i q_{ii}"
},
{
"math_id": 11,
"text": "v"
},
{
"math_id": 12,
"text": "\\sum_{i}v_{i} = 0"
},
{
"math_id": 13,
"text": "Q=P'(0)"
},
{
"math_id": 14,
"text": "Q=\\begin{pmatrix}\n-\\lambda & \\lambda \\\\\n\\mu & -(\\mu+\\lambda) & \\lambda \\\\\n&\\mu & -(\\mu+\\lambda) & \\lambda \\\\\n&&\\mu & -(\\mu+\\lambda) & \\ddots &\\\\\n&&&\\ddots&\\ddots\n\\end{pmatrix}."
}
]
| https://en.wikipedia.org/wiki?curid=72460002 |
724752 | Disdyakis dodecahedron | Geometric shape with 48 faces
In geometry, a disdyakis dodecahedron (also hexoctahedron, hexakis octahedron, octakis cube, octakis hexahedron, or kisrhombic dodecahedron) is a Catalan solid with 48 faces and is the dual of the Archimedean truncated cuboctahedron. As such it is face-transitive but with irregular face polygons. It resembles an augmented rhombic dodecahedron. Replacing each face of the rhombic dodecahedron with a flat pyramid creates a polyhedron that looks almost like the disdyakis dodecahedron, and is topologically equivalent to it.
More formally, the disdyakis dodecahedron is the Kleetope of the rhombic dodecahedron, and the barycentric subdivision of the cube or of the regular octahedron. The net of the rhombic dodecahedral pyramid also shares the same topology.
Symmetry.
It has Oh octahedral symmetry. Its collective edges represent the reflection planes of the symmetry. It can also be seen in the corner and mid-edge triangulation of the regular cube and octahedron, and rhombic dodecahedron.
The edges of a spherical disdyakis dodecahedron belong to 9 great circles. Three of them form a spherical octahedron (gray in the images below). The remaining six form three square hosohedra (red, green and blue in the images below). They all correspond to mirror planes - the former in dihedral [2,2], and the latter in tetrahedral [3,3] symmetry.
Cartesian coordinates.
Let formula_0.
Then the Cartesian coordinates for the vertices of a disdyakis dodecahedron centered at the origin are:
● permutations of (±a, 0, 0)
● permutations of (±b, ±b, 0)
● (±c, ±c, ±c)
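A quick way to check this vertex description is to generate the coordinates and count them; the short sketch below reproduces the expected 6 + 12 + 8 = 26 vertices.
<syntaxhighlight lang="python">
from itertools import product
from math import sqrt

a = 1 / (1 + 2 * sqrt(2))
b = 1 / (2 + 3 * sqrt(2))
c = 1 / (3 + 3 * sqrt(2))

patterns = [
    (a, 0.0, 0.0), (0.0, a, 0.0), (0.0, 0.0, a),   # permutations of (+-a, 0, 0)
    (b, b, 0.0), (b, 0.0, b), (0.0, b, b),         # permutations of (+-b, +-b, 0)
    (c, c, c),                                     # (+-c, +-c, +-c)
]
verts = set()
for p in patterns:
    for s in product((1, -1), repeat=3):
        verts.add(tuple(si * xi for si, xi in zip(s, p)))   # signs on zeros deduplicate

print(len(verts))   # 6 + 12 + 8 = 26 vertices
</syntaxhighlight>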
Dimensions.
If its smallest edges have length "a", its surface area and volume are
formula_1
The faces are scalene triangles. Their angles are formula_2, formula_3 and formula_4.
Orthogonal projections.
The truncated cuboctahedron and its dual, the "disdyakis dodecahedron" can be drawn in a number of symmetric orthogonal projective orientations. Between a polyhedron and its dual, vertices and faces are swapped in positions, and edges are perpendicular.
Related polyhedra and tilings.
The disdyakis dodecahedron is one of a family of duals to the uniform polyhedra related to the cube and regular octahedron.
It is a polyhedron in a sequence defined by the face configuration V4.6.2"n". This group is special for having an even number of edges per vertex; the edges form bisecting planes through the polyhedra and infinite lines in the plane, and the sequence continues into the hyperbolic plane for any "n" ≥ 7.
With an even number of faces at every vertex, these polyhedra and tilings can be shown by alternating two colors so all adjacent faces have different colors.
Each face on these domains also corresponds to the fundamental domain of a symmetry group with order 2,3,"n" mirrors at each triangle face vertex.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " ~ a = \\frac{1}{1 + 2 \\sqrt{2}} ~ {\\color{Gray} \\approx 0.261}, ~~ b = \\frac{1}{2 + 3 \\sqrt{2}} ~ {\\color{Gray} \\approx 0.160}, ~~ c = \\frac{1}{3 + 3 \\sqrt{2}} ~ {\\color{Gray} \\approx 0.138}"
},
{
"math_id": 1,
"text": "\\begin{align} A &= \\tfrac67\\sqrt{783+436\\sqrt 2}\\,a^2 \\\\ V &= \\tfrac17\\sqrt{3\\left(2194+1513\\sqrt 2\\right)}a^3\\end{align}"
},
{
"math_id": 2,
"text": "\\arccos\\biggl(\\frac{1}{6}-\\frac{1}{12}\\sqrt{2}\\biggr) ~{\\color{Gray}\\approx 87.201^{\\circ}}"
},
{
"math_id": 3,
"text": "\\arccos\\biggl(\\frac{3}{4}-\\frac{1}{8}\\sqrt{2}\\biggr) ~{\\color{Gray}\\approx 55.024^{\\circ}}"
},
{
"math_id": 4,
"text": "\\arccos\\biggl(\\frac{1}{12}+\\frac{1}{2}\\sqrt{2}\\biggr) ~{\\color{Gray}\\approx 37.773^{\\circ}}"
}
]
| https://en.wikipedia.org/wiki?curid=724752 |
724753 | Deltoidal hexecontahedron | Catalan polyhedron
In geometry, a deltoidal hexecontahedron (also sometimes called a "trapezoidal hexecontahedron", a "strombic hexecontahedron", or a "tetragonal hexacontahedron") is a Catalan solid which is the dual polyhedron of the rhombicosidodecahedron, an Archimedean solid. It is one of six Catalan solids to not have a Hamiltonian path among its vertices.
It is topologically identical to the nonconvex rhombic hexecontahedron.
Lengths and angles.
The 60 faces are deltoids or kites. The short and long edges of each kite are in the ratio 1 : (7 + √5)/6 ≈ 1:1.539344663...
The angle between two short edges in a single face is arccos((−5 − 2√5)/20) ≈ 118.2686774705°. The opposite angle, between long edges, is arccos((−5 + 9√5)/40) ≈ 67.783011547435°. The other two angles of each face, between a short and a long edge each, are both equal to arccos((5 − 2√5)/10) ≈ 86.97415549104°.
The dihedral angle between any pair of adjacent faces is arccos(−(19 + 8√5)/41) ≈ 154.12136312578°.
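These values can be checked numerically with a few lines of arithmetic; in particular, the four face angles of each kite sum to 360°.
<syntaxhighlight lang="python">
from math import acos, degrees, sqrt

s5 = sqrt(5)
short_short = degrees(acos((-5 - 2 * s5) / 20))   # angle between the two short edges
long_long   = degrees(acos((-5 + 9 * s5) / 40))   # angle between the two long edges
mixed       = degrees(acos((5 - 2 * s5) / 10))    # each of the two remaining angles
dihedral    = degrees(acos(-(19 + 8 * s5) / 41))

print(f"{short_short:.10f}  {long_long:.10f}  {mixed:.10f}")
print("kite angle sum:", short_short + long_long + 2 * mixed)   # 360
print(f"dihedral: {dihedral:.10f}")
</syntaxhighlight>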
Topology.
Topologically, the "deltoidal hexecontahedron" is identical to the nonconvex rhombic hexecontahedron. The deltoidal hexecontahedron can be derived from a dodecahedron (or icosahedron) by pushing the face centers, edge centers and vertices out to different radii from the body center. The radii are chosen so that the resulting shape has planar kite faces each such that vertices go to degree-3 corners, faces to degree-five corners, and edge centers to degree-four points.
Cartesian coordinates.
The 62 vertices of the deltoidal hexecontahedron fall in three sets centered on the origin:
These hulls are visualized in the figure below:
Orthogonal projections.
The "deltoidal hexecontahedron" has 3 symmetry positions located on the 3 types of vertices:
Variations.
The "deltoidal hexecontahedron" can be constructed from either the regular icosahedron or regular dodecahedron by adding vertices mid-edge, and mid-face, and creating new edges from each edge center to the face centers. Conway polyhedron notation would give these as oI, and oD, ortho-icosahedron, and ortho-dodecahedron. These geometric variations exist as a continuum along one degree of freedom.
Related polyhedra and tilings.
When projected onto a sphere (see right), it can be seen that the edges make up the edges of an icosahedron and dodecahedron arranged in their dual positions.
This tiling is topologically related as a part of sequence of deltoidal polyhedra with face figure (V3.4."n".4), and continues as tilings of the hyperbolic plane. These face-transitive figures have (*"n"32) reflectional symmetry.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{3}{11}\\sqrt {15 - \\frac{6}{\\sqrt{5}}}\\approx 0.9571"
},
{
"math_id": 1,
"text": "3\\sqrt {1-\\frac{2}{\\sqrt{5}}}\\approx0.9748"
}
]
| https://en.wikipedia.org/wiki?curid=724753 |
72477621 | Diversity (mathematics) | Generalization of metric spaces
In mathematics, a diversity is a generalization of the concept of metric space. The concept was introduced in 2012 by Bryant and Tupper,
who call diversities "a form of multi-way metric". The concept finds application in nonlinear analysis.
Given a set formula_0, let formula_1 be the set of finite subsets of formula_0.
A diversity is a pair formula_2 consisting of a set formula_0 and a function formula_3 satisfying
(D1) formula_4, with formula_5 if and only if formula_6
and
(D2) if formula_7 then formula_8.
Bryant and Tupper observe that these axioms imply monotonicity; that is, if formula_9, then formula_10. They state that the term "diversity" comes from the appearance of a special case of their definition in work on phylogenetic and ecological diversities. They give the following examples:
Diameter diversity.
Let formula_11 be a metric space. Setting formula_12 for all formula_13 defines a diversity.
L1 diversity.
For all finite formula_14 if we define formula_15 then formula_16 is a diversity.
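The two examples above translate directly into code; the sketch below implements the diameter and L1 diversities on finite point sets in R^n and randomly spot-checks the triangle-like axiom (D2).
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

def diam_diversity(A):
    """delta(A) = maximum pairwise Euclidean distance (0 for a singleton)."""
    A = np.atleast_2d(np.asarray(A, dtype=float))
    if len(A) <= 1:
        return 0.0
    diffs = A[:, None, :] - A[None, :, :]
    return float(np.sqrt((diffs ** 2).sum(-1)).max())

def l1_diversity(A):
    """delta(A) = sum over coordinates of the coordinate-wise range."""
    A = np.atleast_2d(np.asarray(A, dtype=float))
    if len(A) <= 1:
        return 0.0
    return float((A.max(axis=0) - A.min(axis=0)).sum())

def check_d2(delta, trials=1000, dim=3):
    """Random spot-check of (D2): delta(A u C) <= delta(A u B) + delta(B u C), B nonempty."""
    for _ in range(trials):
        A, B, C = (rng.normal(size=(rng.integers(1, 4), dim)) for _ in range(3))
        lhs = delta(np.vstack([A, C]))
        rhs = delta(np.vstack([A, B])) + delta(np.vstack([B, C]))
        if lhs > rhs + 1e-9:
            return False
    return True

print(check_d2(diam_diversity), check_d2(l1_diversity))   # True True
</syntaxhighlight>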
Phylogenetic diversity.
If "T" is a phylogenetic tree with taxon set "X". For each finite formula_17, define
formula_18 as the length of the smallest subtree of "T" connecting taxa in "A". Then formula_19 is a (phylogenetic) diversity.
Steiner diversity.
Let formula_20 be a metric space. For each finite formula_17, let formula_18 denote
the minimum length of a Steiner tree within "X" connecting elements in "A". Then formula_2 is a
diversity.
Truncated diversity.
Let formula_2 be a diversity. For all formula_13 define
formula_21. Then if formula_22, formula_23 is a diversity.
Clique diversity.
If formula_24 is a graph, and formula_18 is defined for any finite "A" as the size of the largest clique of "A", then formula_2 is a diversity.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": " \\wp_\\mbox{fin}(X)"
},
{
"math_id": 2,
"text": "(X,\\delta)"
},
{
"math_id": 3,
"text": "\\delta \\colon \\wp_\\mbox{fin}(X) \\to \\mathbb{R}"
},
{
"math_id": 4,
"text": "\\delta(A)\\geq 0"
},
{
"math_id": 5,
"text": "\\delta(A)=0"
},
{
"math_id": 6,
"text": "\\left|A\\right|\\leq 1"
},
{
"math_id": 7,
"text": " B\\neq\\emptyset"
},
{
"math_id": 8,
"text": "\\delta(A\\cup C)\\leq\\delta(A\\cup B) + \\delta(B \\cup C)"
},
{
"math_id": 9,
"text": "A\\subseteq B"
},
{
"math_id": 10,
"text": "\\delta(A)\\leq\\delta(B)"
},
{
"math_id": 11,
"text": "(X,d)"
},
{
"math_id": 12,
"text": "\\delta(A)=\\max_{a,b\\in A} d(a,b)=\\operatorname{diam}(A)"
},
{
"math_id": 13,
"text": "A\\in\\wp_\\mbox{fin}(X)"
},
{
"math_id": 14,
"text": "A\\subseteq\\mathbb{R}^n"
},
{
"math_id": 15,
"text": "\\delta(A)=\\sum_i\\max_{a,b}\\left\\{\\left| a_i-b_i\\right|\\colon a,b\\in A\\right\\}"
},
{
"math_id": 16,
"text": "(\\mathbb{R}^n,\\delta)"
},
{
"math_id": 17,
"text": "A\\subseteq X"
},
{
"math_id": 18,
"text": "\\delta(A)"
},
{
"math_id": 19,
"text": "(X, \\delta)"
},
{
"math_id": 20,
"text": "(X, d)"
},
{
"math_id": 21,
"text": "\\delta^{(k)}(A) = \\max\\left\\{\\delta(B)\\colon |B|\\leq k, B\\subseteq A\\right\\}"
},
{
"math_id": 22,
"text": "k\\geq 2"
},
{
"math_id": 23,
"text": "(X,\\delta^{(k)})"
},
{
"math_id": 24,
"text": "(X,E)"
}
]
| https://en.wikipedia.org/wiki?curid=72477621 |
724804 | Prime (symbol) | Typographical symbol
The prime symbol ′, double prime symbol ″, triple prime symbol ‴, and quadruple prime symbol ⁗ are used to designate units and for other purposes in mathematics, science, linguistics and music.
Although the characters differ little in appearance from those of the apostrophe and single and double quotation marks, the uses of the prime symbol are quite different. While an apostrophe is now often used in place of the prime, and a double quote in place of the double prime (due to the lack of prime symbols on everyday writing keyboards), such substitutions are not considered appropriate in formal materials or in typesetting.
Designation of units.
The prime symbol ′ is commonly used to represent feet (ft), and the double prime ″ is used to represent inches (in). The triple prime ‴, as used in watchmaking, represents a ligne (<templatestyles src="Fraction/styles.css" />1⁄12 of a "French" inch, or "pouce", about 2.26 mm).
Primes are also used for angles. The prime symbol ′ is used for arcminutes (<templatestyles src="Fraction/styles.css" />1⁄60 of a degree), and the double prime ″ for arcseconds (<templatestyles src="Fraction/styles.css" />1⁄60 of an arcminute). As an angular measurement, 3° 5′ 30″ means 3 degrees, 5 arcminutes and 30 arcseconds. In historical astronomical works, the triple prime was used to denote "thirds" (<templatestyles src="Fraction/styles.css" />1⁄60 of an arcsecond) and a quadruple prime ⁗ "fourths" (<templatestyles src="Fraction/styles.css" />1⁄60 of a third of arc), but modern usage has replaced this with decimal fractions of an arcsecond.
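As a small arithmetic illustration, a degrees-arcminutes-arcseconds reading such as 3° 5′ 30″ converts to decimal degrees as follows.
<syntaxhighlight lang="python">
def dms_to_degrees(degrees_part, arcminutes, arcseconds):
    """3 deg 5' 30" -> decimal degrees; 1' = 1/60 of a degree, 1" = 1/60 of an arcminute."""
    return degrees_part + arcminutes / 60 + arcseconds / 3600

print(dms_to_degrees(3, 5, 30))   # 3.0916666...
</syntaxhighlight>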
Primes are sometimes used to indicate minutes, and double primes to indicate seconds of time, as in the John Cage composition "4′33″" (spoken as "four thirty-three"), a composition that lasts exactly 4 minutes 33 seconds. This notation only applies to duration, and is seldom used for durations longer than 60 minutes.
Use in mathematics, statistics, and science.
In mathematics, the prime is generally used to generate more variable names for similar things without resorting to subscripts, with "x′" generally meaning something related to (or derived from) "x". For example, if a point is represented by the Cartesian coordinates ("x", "y"), then that point rotated, translated or reflected might be represented as ("x′", "y′").
Usually, the meaning of "x′" is defined when it is first used, but sometimes, its meaning is assumed to be understood:
The prime is said to "decorate" the letter to which it applies. The same convention is adopted in functional programming, particularly in Haskell.
In geometry, geography and astronomy, prime and double prime are used as abbreviations for minute and second of arc (and thus latitude, longitude, elevation and right ascension).
In physics, the prime is used to denote variables after an event. For example, "v"A′ would indicate the velocity of object A after an event. It is also commonly used in relativity: the event at (x, y, z, t) in frame "S" has coordinates (x′, y′, z′, t′) in frame "S′".
In chemistry, it is used to distinguish between different functional groups connected to an atom in a molecule, such as R and R′, representing different alkyl groups in an organic compound. The carbonyl carbon in proteins is denoted as C′, which distinguishes it from the other backbone carbon, the alpha carbon, which is denoted as Cα. In physical chemistry, it is used to distinguish between the lower state and the upper state of a quantum number during a transition. For example, "J" ′ denotes the upper state of the quantum number "J" while "J" ″ denotes the lower state of the quantum number "J".
In molecular biology, the prime is used to denote the positions of carbon on a ring of deoxyribose or ribose. The prime distinguishes places on these two chemicals, rather than places on other parts of DNA or RNA, like phosphate groups or nucleic acids. Thus, when indicating the direction of movement of an enzyme along a string of DNA, biologists will say that it moves from the 5′ end to the 3′ end, because these carbons are on the ends of the DNA molecule. The chemistry of this reaction demands that the 3′ end be extended by DNA synthesis. The prime can also be used to indicate which position a molecule is attached to.
Use in linguistics.
The prime can be used in the transliteration of some languages, such as Slavic languages, to denote palatalization. Prime and double prime are used to transliterate Cyrillic yeri (the soft sign, ь) and yer (the hard sign, ъ). However, in ISO 9, the corresponding modifier letters are used instead.
Originally, X-bar theory used a bar over syntactic units to indicate bar-levels in syntactic structure, generally rendered as an overbar. While easy to write, the bar notation proved difficult to typeset, leading to the adoption of the prime symbol to indicate a bar. (Despite the lack of bar, the unit would still be read as "X bar", as opposed to "X prime".) With contemporary development of typesetting software such as LaTeX, typesetting bars is considerably simpler; nevertheless, both prime and bar markups are accepted usages.
Some X-bar notations use a double prime (standing in for a double-bar) to indicate a phrasal level, indicated in most notations by "XP".
Use in music.
The prime symbol is used in combination with lower case letters in the Helmholtz pitch notation system to distinguish notes in different octaves from middle C upwards. Thus c represents the ⟨C⟩ below middle C, c′ represents middle C, c″ represents the ⟨C⟩ in the octave above middle C, and c‴ the ⟨C⟩ in the octave two octaves above middle C. A combination of upper case letters and sub-prime symbols is used to represent notes in lower octaves. Thus C represents the ⟨C⟩ below the bass stave, while C ͵ represents the ⟨C⟩ in the octave below that.
In some musical scores, the double prime ″ is used to indicate a length of time in seconds. It is used over a fermata 𝄐 denoting a long note or rest.
Computer encodings.
Unicode and HTML representations of the prime and related symbols are as follows.
The "modifier letter prime" and "modifier letter double prime" characters are intended for linguistic purposes, such as the indication of stress or the transliteration of certain Cyrillic characters.
In contexts where the character set used does not include the prime or double prime character (e.g., in an online discussion context where only ASCII or ISO 8859-1 [ISO Latin 1] is expected), they are often approximated by the ASCII apostrophe (U+0027) or quotation mark (U+0022), respectively.
LaTeX provides an oversized prime symbol, formula_0, which, when used in super- or sub-scripts, renders appropriately; e.g., codice_0 appears as formula_1. An apostrophe is a shortcut for a superscript prime; e.g., a function name followed by an apostrophe appears as formula_2.
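As an illustration of the markup described above, the following LaTeX fragment is a minimal sketch (not taken from any particular package documentation) contrasting the explicit \prime command with the apostrophe shorthand:

```latex
% Explicit superscript prime versus the apostrophe shorthand (both render as f'(x)):
$f^{\prime}(x)$
$f'(x)$
% Repeated apostrophes give double and triple primes:
$f''(x)$, \quad $f'''(x)$
```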
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
External links.
| [
{
"math_id": 0,
"text": "\\prime"
},
{
"math_id": 1,
"text": "f_\\prime^\\prime"
},
{
"math_id": 2,
"text": "f'\\,\\!"
}
]
| https://en.wikipedia.org/wiki?curid=724804 |
72482500 | Subdivision (simplicial complex) | A subdivision (also called refinement) of a simplicial complex is another simplicial complex in which, intuitively, one or more simplices of the original complex have been partitioned into smaller simplices. The most commonly used subdivision is the barycentric subdivision, but the term is more general. The subdivision is defined in slightly different ways in different contexts.
In geometric simplicial complexes.
Let "K" be a geometric simplicial complex (GSC). A subdivision of "K" is a GSC "L" such that: 153
As an example, let "K" be a GSC containing a single triangle {A,B,C} (with all its faces and vertices). Let "D" be a point on the face AB. Let "L" be the complex containing the two triangles {A,D,C} and {B,D,C} (with all their faces and vertices). Then "L" is a subdivision of "K", since the two triangles {A,D,C} and {B,D,C} are both contained in {A,B,C}, and similarly the faces {A,D}, {D,B} are contained in the face {A,B}, and the face {D,C} is contained in {A,B,C}.
Subdivision by starring.
One way to obtain a subdivision of "K" is to pick an arbitrary point "x" in |"K"|, remove each simplex "s" in "K" that contains "x", and replace it with the closure of the following set of simplices: formula_0, where formula_1 is the join of the point "x" and the face "t". This process is called starring at "x".
A stellar subdivision is a subdivision obtained by sequentially starring at different points.
A derived subdivision is a subdivision obtained by an inductive process: each simplex of "K" is starred at a point chosen in its interior, once all of its proper faces have been subdivided.
The barycentric subdivision is a derived subdivision where the points used for starring are always barycenters of simplices. For example, if D, E, F, G are the barycenters of {A,B}, {A,C}, {B,C}, {A,B,C} respectively, then the first barycentric subdivision of {A,B,C} is the closure of {A,D,G}, {B,D,G}, {A,E,G}, {C,E,G}, {B,F,G}, {C,F,G}.
Iterated subdivisions can be used to attain arbitrarily fine triangulations of a given polyhedron.
In abstract simplicial complexes.
Let "K" be an abstract simplicial complex (ASC). The face poset of "K" is a poset made of all nonempty simplices of "K", ordered by inclusion (which is a partial order). For example, the face-poset of the closure of {A,B,C} is the poset with the following chains:
The order complex of a poset "P" is an ASC whose vertices are the elements of "P" and whose simplices are the chains of "P".
The first barycentric subdivision of an ASC "K" is the order complex of its face poset. The order complex of the above poset is the closure of the six simplices {{A},{A,B},{A,B,C}}, {{B},{A,B},{A,B,C}}, {{A},{A,C},{A,B,C}}, {{C},{A,C},{A,B,C}}, {{B},{B,C},{A,B,C}}, {{C},{B,C},{A,B,C}}.
Note that this ASC is isomorphic to the ASC {A,D,G}, {B,D,G}, {A,E,G}, {C,E,G}, {B,F,G}, {C,F,G}, with the assignment: A={A}, B={B}, C={C}, D={A,B}, E={A,C}, F={B,C}, G={A,B,C}.
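The construction above is concrete enough to compute directly. The following Python sketch (the names and helper functions are illustrative, not from any library) builds the face poset of the closed triangle {A,B,C} and takes its order complex, recovering the six maximal simplices listed above:

```python
from itertools import combinations

def closure(maximal_simplices):
    """All nonempty faces of the given simplices: an abstract simplicial complex."""
    faces = set()
    for s in maximal_simplices:
        for k in range(1, len(s) + 1):
            faces.update(frozenset(c) for c in combinations(s, k))
    return faces

def order_complex_of_face_poset(faces):
    """Barycentric subdivision: vertices are the faces, simplices are chains under inclusion."""
    faces = sorted(faces, key=len)
    chains = [[f] for f in faces]                 # every single face is a chain
    simplices = {frozenset(c) for c in chains}
    while chains:
        longer = []
        for chain in chains:
            for f in faces:
                if chain[-1] < f:                 # proper inclusion extends the chain
                    longer.append(chain + [f])
        simplices.update(frozenset(c) for c in longer)
        chains = longer
    return simplices

K = closure([frozenset("ABC")])                   # the closed triangle
sd = order_complex_of_face_poset(K)
print(len(K))                                     # 7 nonempty faces of {A,B,C}
print(sum(1 for s in sd if len(s) == 3))          # 6 maximal simplices, as listed above
```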
The geometric realization of the subdivision of "K" is always homeomorphic to the geometric realization of "K".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\{ x\\star t | t ~\\text{ is a face of } s \\text{ and } x\\not\\in t \\}"
},
{
"math_id": 1,
"text": "x\\star t"
}
]
| https://en.wikipedia.org/wiki?curid=72482500 |
72484242 | Van den Berg–Kesten inequality | Inequality in probability theory
In probability theory, the van den Berg–Kesten (BK) inequality or van den Berg–Kesten–Reimer (BKR) inequality states that the probability for two random events to both happen, and at the same time one can find "disjoint certificates" to show that they both happen, is at most the product of their individual probabilities. The special case for two monotone events (the notion as used in the FKG inequality) was first proved by van den Berg and Kesten in 1985, who also conjectured that the inequality holds in general, not requiring monotonicity. Reimer later proved this conjecture. The inequality is applied to probability spaces with a product structure, such as in percolation problems.
Statement.
Let formula_0 be probability spaces, each of finitely many elements. The inequality applies to spaces of the form formula_1, equipped with the product measure, so that each element formula_2 is given the probability
formula_3
For two events formula_4, their "disjoint occurrence" formula_5 is defined as the event consisting of configurations formula_6 whose memberships in formula_7 and in formula_8 can be verified on disjoint subsets of indices. Formally, formula_9 if there exist subsets formula_10 such that formula_11 every formula_12 that agrees with formula_6 on formula_13 (that is, formula_14) belongs to formula_15 and every formula_16 that agrees with formula_6 on formula_17 belongs to formula_18
The inequality asserts that:
formula_19
for every pair of events formula_7 and formula_18
Examples.
Coin tosses.
If formula_20 corresponds to tossing a fair coin formula_21 times, then each formula_22 consists of the two possible outcomes, heads or tails, with equal probability. Consider the event formula_7 that there are 3 consecutive heads, and the event formula_8 that there are at least 5 heads in total. Then formula_23 would be the following event: there are 3 consecutive heads, and discarding those there are another 5 heads remaining. This event has probability at most formula_24 which is to say the probability of getting formula_7 in 10 tosses and getting formula_8 in another 10 tosses, independently of each other.
Numerically, formula_25 formula_26 and their disjoint occurrence would imply at least 8 heads, so formula_27
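Because the sample space here is small, these figures can be checked by brute force. The following Python sketch (written for this illustration, not taken from any source) enumerates all 2¹⁰ sequences and verifies the inequality for these two events:

```python
from itertools import product

n = 10
outcomes = list(product("HT", repeat=n))                 # all 1024 equally likely sequences

def event_A(seq):                                        # three consecutive heads somewhere
    return "HHH" in "".join(seq)

def event_B(seq):                                        # at least five heads in total
    return seq.count("H") >= 5

def disjoint_occurrence(seq):
    # Some run of three consecutive heads, plus at least five further heads
    # among the remaining (disjoint) positions.
    s = "".join(seq)
    return any(s[i:i+3] == "HHH" and (s[:i] + s[i+3:]).count("H") >= 5
               for i in range(n - 2))

def prob(event):
    return sum(map(event, outcomes)) / len(outcomes)

print(prob(event_A))              # 520/1024 ~ 0.5078
print(prob(event_B))              # 638/1024 ~ 0.6230
print(prob(disjoint_occurrence))  # at most 56/1024 ~ 0.0547 (it forces eight heads or more)
assert prob(disjoint_occurrence) <= prob(event_A) * prob(event_B)
```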
Percolation.
In (Bernoulli) bond percolation of a graph, the formula_28's are indexed by edges. Each edge is kept (or "open") with some probability formula_29 or otherwise removed (or "closed"), independently of other edges, and one studies questions about the connectivity of the remaining graph, for example the event formula_30 that there is a path between two vertices formula_31 and formula_32 using only open edges. For events of this form, the disjoint occurrence formula_23 is the event that there exist two open paths not sharing any edges (corresponding to the subsets formula_13 and formula_17 in the definition), with the first providing the connection required by formula_15 and the second that required by formula_18
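For small graphs the disjoint occurrence of two connection events can be checked numerically, since two edge-disjoint open paths are exactly what Menger's theorem counts. The sketch below is a Monte Carlo illustration only; the graph, probability and trial count are arbitrary choices, and networkx's edge_disjoint_paths helper is assumed to be available:

```python
import random
import networkx as nx

def open_subgraph(G, p, rng):
    """Bernoulli bond percolation: keep each edge independently with probability p."""
    H = nx.Graph()
    H.add_nodes_from(G)
    H.add_edges_from(e for e in G.edges if rng.random() < p)
    return H

def estimate(G, u, v, p, trials=2000, seed=0):
    rng = random.Random(seed)
    connected = disjointly = 0
    for _ in range(trials):
        H = open_subgraph(G, p, rng)
        if nx.has_path(H, u, v):
            connected += 1
            # Disjoint occurrence of the event {u <-> v} with itself:
            # at least two edge-disjoint open paths from u to v.
            if len(list(nx.edge_disjoint_paths(H, u, v))) >= 2:
                disjointly += 1
    return connected / trials, disjointly / trials

G = nx.grid_2d_graph(6, 6)                       # a small square lattice
p_conn, p_disj = estimate(G, (0, 0), (5, 5), p=0.6)
print(p_disj, "<=", p_conn ** 2)                 # BK bound, up to sampling noise
```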
The inequality can be used to prove a version of the exponential decay phenomenon in the subcritical regime, namely that on the integer lattice graph formula_33 for formula_34 with the critical probability suitably defined, the radius of the connected component containing the origin obeys a distribution with exponentially small tails:
formula_35
for some constant formula_36 depending on formula_37 Here formula_38 consists of the vertices formula_6 that satisfy formula_39
Extensions.
Multiple events.
When there are three or more events, the operator formula_40 may not be associative, because given a subset of indices formula_41 on which formula_42 can be verified, it might not be possible to split formula_41 into a disjoint union formula_43 such that formula_13 witnesses formula_44 and formula_17 witnesses formula_45. For example, there exists an event formula_46 such that formula_47
Nonetheless, one can define the formula_48-ary BKR operation of events formula_49 as the set of configurations formula_6 for which there are pairwise disjoint subsets of indices formula_50 such that formula_51 witnesses the membership of formula_6 in formula_52
formula_53
whence
formula_54
by repeated use of the original BK inequality. This inequality was one factor used to analyse the winner statistics from the Florida Lottery and to identify what "Mathematics Magazine" referred to as "implausibly lucky" individuals, later confirmed by enforcement investigations to have involved law violations.
Spaces of larger cardinality.
When formula_28 is allowed to be infinite, measure theoretic issues arise. For formula_55 and formula_56 the Lebesgue measure, there are measurable subsets formula_57 such that formula_23 is non-measurable (so formula_58 in the inequality is not defined), but the following theorem still holds:
If formula_59 are Lebesgue measurable, then there is some Borel set formula_60 such that formula_61 and formula_62
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Omega_1, \\Omega_2, \\ldots, \\Omega_n"
},
{
"math_id": 1,
"text": "\\Omega = \\Omega_1 \\times \\Omega_2 \\times \\cdots \\times \\Omega_n"
},
{
"math_id": 2,
"text": "x = (x_1, \\ldots, x_n) \\in \\Omega"
},
{
"math_id": 3,
"text": " \\mathbb P(\\{x\\}) = \\mathbb P_1(\\{x_1\\}) \\cdots \\mathbb P_n(\\{x_n\\})."
},
{
"math_id": 4,
"text": "A, B\\subseteq \\Omega"
},
{
"math_id": 5,
"text": "A \\mathbin{\\square} B"
},
{
"math_id": 6,
"text": "x"
},
{
"math_id": 7,
"text": "A"
},
{
"math_id": 8,
"text": "B"
},
{
"math_id": 9,
"text": "x \\in A \\mathbin{\\square} B"
},
{
"math_id": 10,
"text": "I, J \\subseteq [n]"
},
{
"math_id": 11,
"text": "I \\cap J = \\varnothing,"
},
{
"math_id": 12,
"text": "y"
},
{
"math_id": 13,
"text": "I"
},
{
"math_id": 14,
"text": "y_i = x_i\\ \\forall i \\in I"
},
{
"math_id": 15,
"text": "A,"
},
{
"math_id": 16,
"text": "z"
},
{
"math_id": 17,
"text": "J"
},
{
"math_id": 18,
"text": "B."
},
{
"math_id": 19,
"text": " \\mathbb P (A \\mathbin{\\square} B) \\le \\mathbb P (A) \\mathbb P (B)"
},
{
"math_id": 20,
"text": "\\Omega"
},
{
"math_id": 21,
"text": "n = 10"
},
{
"math_id": 22,
"text": "\\Omega_i = \\{ H, T\\}"
},
{
"math_id": 23,
"text": "A \\mathbin \\square B"
},
{
"math_id": 24,
"text": " \\mathbb P ( A) \\mathbb P ( B),"
},
{
"math_id": 25,
"text": "\\mathbb P ( A) = 520/1024 \\approx 0.5078,"
},
{
"math_id": 26,
"text": "\\mathbb P ( B) = 638/1024 \\approx 0.6230,"
},
{
"math_id": 27,
"text": "\\mathbb P ( A\\mathbin \\square B) \\le \\mathbb P(\\text{8 heads or more}) = 56/1024 \\approx 0.0547."
},
{
"math_id": 28,
"text": "\\Omega_i"
},
{
"math_id": 29,
"text": "p,"
},
{
"math_id": 30,
"text": "u \\leftrightarrow v "
},
{
"math_id": 31,
"text": "u"
},
{
"math_id": 32,
"text": "v"
},
{
"math_id": 33,
"text": "\\mathbb Z^d,"
},
{
"math_id": 34,
"text": " p < p_\\mathrm c"
},
{
"math_id": 35,
"text": "\\mathbb P( 0 \\leftrightarrow \\partial [-r, r]^d) \\le \\exp(- c r) "
},
{
"math_id": 36,
"text": "c > 0"
},
{
"math_id": 37,
"text": "p."
},
{
"math_id": 38,
"text": "\\partial [-r, r]^d"
},
{
"math_id": 39,
"text": "\\max_{1 \\le i \\le d} |x_i| = r."
},
{
"math_id": 40,
"text": "\\square"
},
{
"math_id": 41,
"text": "K"
},
{
"math_id": 42,
"text": "x \\in A \\mathbin \\square B"
},
{
"math_id": 43,
"text": "I \\sqcup J"
},
{
"math_id": 44,
"text": "x \\in A"
},
{
"math_id": 45,
"text": "x \\in B"
},
{
"math_id": 46,
"text": "A \\subseteq \\{0, 1\\}^6"
},
{
"math_id": 47,
"text": "\\left((A \\mathbin \\square A) \\mathbin \\square A\\right) \\mathbin \\square A \\neq (A \\mathbin \\square A) \\mathbin \\square (A \\mathbin \\square A)."
},
{
"math_id": 48,
"text": "k"
},
{
"math_id": 49,
"text": "A_1, A_2, \\ldots, A_k"
},
{
"math_id": 50,
"text": "I_i \\subseteq [n]"
},
{
"math_id": 51,
"text": "I_i"
},
{
"math_id": 52,
"text": "A_i."
},
{
"math_id": 53,
"text": " A_1 \\mathbin \\square A_2 \\mathbin \\square A_3 \\mathbin \\square \\cdots \\mathbin \\square A_k \\subseteq \\left( \\cdots \\left((A_1 \\mathbin \\square A_2) \\mathbin \\square A_3 \\right) \\mathbin \\square \\cdots \\right) \\mathbin \\square A_k,"
},
{
"math_id": 54,
"text": "\\begin{align} \n\\mathbb P( A_1 \\mathbin \\square A_2 \\mathbin \\square A_3 \\mathbin \\square \\cdots \\mathbin \\square A_k) &\\le \\mathbb P\\left(\n \\left( \\cdots \\left((A_1 \\mathbin \\square A_2) \\mathbin \\square A_3 \\right) \\mathbin \\square \\cdots \\right) \\mathbin \\square A_k\\right) \\\\\n&\\le \\mathbb P( A_1) \\mathbb P( A_2) \\cdots \\mathbb P( A_k)\n\\end{align}\n"
},
{
"math_id": 55,
"text": "\\Omega = [0, 1]^n"
},
{
"math_id": 56,
"text": "\\mathbb P"
},
{
"math_id": 57,
"text": "A, B \\subseteq \\Omega"
},
{
"math_id": 58,
"text": "\\mathbb P(A \\mathbin \\square B)"
},
{
"math_id": 59,
"text": "A, B \\subseteq [0, 1]^n"
},
{
"math_id": 60,
"text": "C"
},
{
"math_id": 61,
"text": "A \\mathbin \\square B \\subseteq C,"
},
{
"math_id": 62,
"text": "\\mathbb P(C) \\le \\mathbb P(A) \\mathbb P(B)."
}
]
| https://en.wikipedia.org/wiki?curid=72484242 |
7248770 | Engine efficiency | Work done divided by heat provided
Engine efficiency of thermal engines is the relationship between the total energy contained in the fuel, and the amount of energy used to perform useful work. There are two classifications of thermal engines: internal combustion engines (gasoline, diesel and gas turbine engines) and external combustion engines (steam piston, steam turbine and Stirling-cycle engines).
Each of these engines has thermal efficiency characteristics that are unique to it.
Engine efficiency, transmission design, and tire design all contribute to a vehicle's fuel efficiency.
Mathematical definition.
The efficiency of an engine is defined as ratio of the useful work done to the heat provided.
formula_0
where, formula_1 is the heat absorbed and formula_2 is the work done.
Please note that the term work done relates to the power delivered at the clutch or at the driveshaft.
This means the friction and other losses are subtracted from the work done by thermodynamic expansion. Thus an engine not delivering any work to the outside environment has zero efficiency.
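A minimal numerical sketch of the definition above follows; the heat figures are hypothetical, chosen only for illustration:

```python
def engine_efficiency(heat_absorbed, heat_rejected):
    """eta = (Q1 - Q2) / Q1: useful work delivered divided by heat absorbed."""
    return (heat_absorbed - heat_rejected) / heat_absorbed

# Example: 100 kJ of heat absorbed from the fuel, 70 kJ rejected to exhaust and coolant.
print(engine_efficiency(100.0, 70.0))   # 0.30, i.e. 30% of the fuel energy becomes useful work
```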
Compression ratio.
The efficiency of internal combustion engines depends on several factors, the most important of which is the expansion ratio. For any heat engine the work which can be extracted from it is proportional to the difference between the starting pressure and the ending pressure during the expansion phase. Hence, increasing the starting pressure is an effective way to increase the work extracted (decreasing the ending pressure, as is done with steam turbines by exhausting into a vacuum, is likewise effective).
The compression ratio (calculated purely from the geometry of the mechanical parts) of a typical gasoline (petrol) engine is 10:1 (premium fuel) or 9:1 (regular fuel), with some engines reaching a ratio of 12:1 or more. The greater the expansion ratio, the more efficient the engine, in principle, and higher compression/expansion-ratio conventional engines in principle need gasoline with a higher octane value, though this simplistic analysis is complicated by the difference between actual and geometric compression ratios. A high octane value inhibits the fuel's tendency to burn nearly instantaneously (known as "detonation" or "knock") at high compression/high heat conditions. However, in engines that utilize compression rather than spark ignition, by means of very high compression ratios (14–25:1), such as the diesel engine or Bourke engine, high octane fuel is not necessary. In fact, lower-octane fuels, typically rated by cetane number, are preferable in these applications because they are more easily ignited under compression.
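A rough way to see the effect of the compression (expansion) ratio is the ideal air-standard Otto-cycle relation η = 1 − r^(1−γ), which is not derived in this article and ignores the real-world losses discussed below; the sketch simply tabulates it for a few ratios:

```python
def ideal_otto_efficiency(r, gamma=1.4):
    """Air-standard Otto-cycle thermal efficiency for compression ratio r."""
    return 1.0 - r ** (1.0 - gamma)

for r in (9, 10, 12, 14):
    print(r, round(ideal_otto_efficiency(r), 3))
# 9:1 -> ~0.585, 10:1 -> ~0.602, 12:1 -> ~0.630, 14:1 -> ~0.652.
# Real engines fall well short of these ideal figures because of friction,
# heat loss, finite combustion time and part-throttle operation.
```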
Under part throttle conditions (i.e. when the throttle is less than fully open), the "effective" compression ratio is less than when the engine is operating at full throttle, due to the simple fact that the incoming fuel-air mixture is being restricted and cannot fill the chamber to full atmospheric pressure. The engine efficiency is less than when the engine is operating at full throttle. One solution to this issue is to shift the load in a multi-cylinder engine from some of the cylinders (by deactivating them) to the remaining cylinders so that they may operate under higher individual loads and with correspondingly higher effective compression ratios. This technique is known as variable displacement.
Most petrol (gasoline, Otto cycle) and diesel (Diesel cycle) engines have an expansion ratio equal to the compression ratio. Some engines, which use the Atkinson cycle or the Miller cycle achieve increased efficiency by having an expansion ratio larger than the compression ratio.
Diesel engines have a compression/expansion ratio between 14:1 and 25:1. In this case the general rule of higher efficiency from higher compression does not apply because diesels with compression ratios over 20:1 are indirect injection diesels (as opposed to direct injection). These use a prechamber to make possible the high RPM operation required in automobiles/cars and light trucks. The thermal and gas dynamic losses from the prechamber result in direct injection diesels (despite their lower compression / expansion ratio) being more efficient.
Friction.
An engine has many moving parts that produce friction. Some of these friction forces remain constant (as long as the applied load is constant); some of these friction losses increase as engine speed increases, such as piston side forces and connecting bearing forces (due to increased inertia forces from the oscillating piston). A few friction forces decrease at higher speed, such as the friction force on the cam's lobes used to operate the inlet and outlet valves (the valves' inertia at high speed tends to pull the cam follower away from the cam lobe). Along with friction forces, an operating engine has "pumping losses", which is the work required to move air into and out of the cylinders. This pumping loss is minimal at low speed, but increases approximately as the square of the speed, until at rated power an engine is using about 20% of total power production to overcome friction and pumping losses.
Oxygen.
Air is approximately 21% oxygen. If there is not enough oxygen for proper combustion, the fuel will not burn completely and will produce less energy. An excessively rich fuel to air ratio will increase unburnt hydrocarbon pollutants from the engine. If all of the oxygen is consumed because there is too much fuel, the engine's power is reduced.
As combustion temperature tends to increase with leaner fuel air mixtures, unburnt hydrocarbon pollutants must be balanced against higher levels of pollutants such as nitrogen oxides (NOx), which are created at higher combustion temperatures. This is sometimes mitigated by introducing fuel upstream of the combustion chamber to cool down the incoming air through evaporative cooling. This can increase the total charge entering the cylinder (as cooler air will be more dense), resulting in more power but also higher levels of hydrocarbon pollutants and lower levels of nitrogen oxide pollutants. With direct injection this effect is not as dramatic but it can cool down the combustion chamber enough to reduce certain pollutants such as nitrogen oxides (NOx), while raising others such as partially decomposed hydrocarbons.
The air-fuel mix is drawn into an engine because the downward motion of the pistons induces a partial vacuum. A compressor can additionally be used to force a larger charge (forced induction) into the cylinder to produce more power. The compressor is either mechanically driven supercharging or exhaust driven turbocharging. Either way, forced induction increases the air pressure exterior to the cylinder inlet port.
There are other methods to increase the amount of oxygen available inside the engine; one of them, is to inject nitrous oxide, (N2O) to the mixture, and some engines use nitromethane, a fuel that provides the oxygen itself it needs to burn. Because of that, the mixture could be 1 part of fuel and 3 parts of air; thus, it is possible to burn more fuel inside the engine, and get higher power outputs.
Internal combustion engines.
Reciprocating engines.
Reciprocating engines at idle have low thermal efficiency because the only usable work being drawn off the engine is from the generator.
At low speeds, gasoline engines suffer efficiency losses at small throttle openings from the high turbulence and frictional (head) loss when the incoming air must fight its way around the nearly closed throttle (pump loss); diesel engines do not suffer this loss because the incoming air is not throttled, but they suffer "compression loss" because the whole charge is compressed while only a small power output is produced.
At high speeds, efficiency in both types of engine is reduced by pumping and mechanical frictional losses, and by the shorter period within which combustion has to take place. High speeds also result in more drag.
Gasoline (petrol) engines.
Modern gasoline engines have a maximum thermal efficiency of more than 50%, but road-legal cars achieve only about 20% to 40% overall when the engine is used to power the car. Many engines would be capable of running at higher thermal efficiency but at the cost of higher wear and emissions. In other words, even when the engine is operating at its point of maximum thermal efficiency, of the total heat energy released by the gasoline consumed, about 60-80% is emitted as heat without being turned into useful work, i.e. turning the crankshaft. Approximately half of this rejected heat is carried away by the exhaust gases, and half passes through the cylinder walls or cylinder head into the engine cooling system, and is passed to the atmosphere via the cooling system radiator. Some of the work generated is also lost as friction, noise, air turbulence, and work used to turn engine equipment and appliances such as water and oil pumps and the electrical generator, leaving only about 20-40% of the energy released by the fuel consumed available to move the vehicle.
A gasoline engine burns a mix of gasoline and air, consisting of a range of about twelve to eighteen parts (by weight) of air to one part of fuel (by weight). A mixture with a 14.7:1 air/fuel ratio is stoichiometric, that is when burned, 100% of the fuel and the oxygen are consumed. Mixtures with slightly less fuel, called lean burn are more efficient. The combustion is a reaction which uses the oxygen content of the air to combine with the fuel, which is a mixture of several hydrocarbons, resulting in water vapor, carbon dioxide, and sometimes carbon monoxide and partially burned hydrocarbons. In addition, at high temperatures the oxygen tends to combine with nitrogen, forming oxides of nitrogen (usually referred to as "NOx", since the number of oxygen atoms in the compound can vary, thus the "X" subscript). This mixture, along with the unused nitrogen and other trace atmospheric elements, is what is found in the exhaust.
The most efficient cycle is the Atkinson cycle, but most gasoline engine makers use the Otto cycle for higher power and torque. Some engine designs, such as Mazda's Skyactiv-G and some hybrid engines designed by Toyota, utilize the Atkinson and Otto cycles together with an electric motor/generator and a traction storage battery. The hybrid drivetrain can achieve effective efficiencies of close to 40%.
Diesel engines.
Engines using the Diesel cycle are usually more efficient, although the Diesel cycle itself is less efficient at equal compression ratios. Since diesel engines use much higher compression ratios (the heat of compression is used to ignite the slow-burning diesel fuel), that higher ratio more than compensates for air pumping losses within the engine.
Modern turbo-diesel engines use electronically controlled common-rail fuel injection to increase efficiency. With the help of geometrically variable turbo-charging system (albeit more maintenance) this also increases the engines' torque at low engine speeds (1,200–1,800 rpm). Low speed diesel engines like the MAN S80ME-C7 have achieved an overall energy conversion efficiency of 54.4%, which is the highest conversion of fuel into power by any single-cycle internal or external combustion engine. Engines in large diesel trucks, buses, and newer diesel cars can achieve peak efficiencies around 45%.
Gas turbine.
The gas turbine is most efficient at maximum power output in the same way reciprocating engines are most efficient at maximum load. The difference is that at lower rotational speed the pressure of the compressed air drops and thus thermal and fuel efficiency drop dramatically. Efficiency declines steadily with reduced power output and is very poor in the low power range.
General Motors at one time manufactured a bus powered by a gas turbine, but due to the rise of crude oil prices in the 1970s this concept was abandoned. Rover, Chrysler, and Toyota also built prototypes of turbine-powered cars. Chrysler built a short prototype series of them for real-world evaluation. Driving comfort was good, but overall fuel economy was poor for the reasons mentioned above. This is also why gas turbines can be used for permanent and peak-power electric plants. In this application they are only run at or close to full power, where they are efficient, or shut down when not needed.
Gas turbines do have an advantage in power density – gas turbines are used as the engines in heavy armored vehicles and armored tanks and in power generators in jet fighters.
One other factor negatively affecting the gas turbine efficiency is the ambient air temperature. With increasing temperature, intake air becomes less dense and therefore the gas turbine experiences power loss proportional to the increase in ambient air temperature.
Latest generation gas turbine engines have achieved an efficiency of 46% in simple cycle and 61% when used in combined cycle.
See also: Steam engine#Efficiency
See also: Timeline of steam power
External combustion engines.
Steam engine.
Piston engine.
Steam engines and turbines operate on the Rankine cycle which has a maximum Carnot efficiency of 63% for practical engines, with steam turbine power plants able to achieve efficiency in the mid 40% range.
The efficiency of steam engines is primarily related to the steam temperature and pressure and the number of stages or "expansions". Steam engine efficiency improved as the operating principles were discovered, which led to the development of the science of thermodynamics. See graph:Steam Engine Efficiency
In earliest steam engines the boiler was considered part of the engine. Today they are considered separate, so it is necessary to know whether stated efficiency is overall, which includes the boiler, or just of the engine.
Comparisons of efficiency and power of the early steam engines is difficult for several reasons: 1) there was no standard weight for a bushel of coal, which could be anywhere from 82 to 96 pounds (37 to 44 kg). 2) There was no standard heating value for coal, and probably no way to measure heating value. The coals had much higher heating value than today's steam coals, with 13,500 BTU/pound (31 megajoules/kg) sometimes mentioned. 3) Efficiency was reported as "duty", meaning how many foot pounds (or newton-metres) of work lifting water were produced, but the mechanical pumping efficiency is not known.
The first piston steam engine, developed by Thomas Newcomen around 1710, was slightly over one half percent (0.5%) efficient. It operated with steam at near atmospheric pressure drawn into the cylinder by the load, then condensed by a spray of cold water into the steam filled cylinder, causing a partial vacuum in the cylinder and the pressure of the atmosphere to drive the piston down. Using the cylinder as the vessel in which to condense the steam also cooled the cylinder, so that some of the heat in the incoming steam on the next cycle was lost in warming the cylinder, reducing the thermal efficiency. Improvements made by John Smeaton to the Newcomen engine increased the efficiency to over 1%.
James Watt made several improvements to the Newcomen engine, the most significant of which was the external condenser, which prevented the cooling water from cooling the cylinder. Watt's engine operated with steam at slightly above atmospheric pressure. Watt's improvements increased efficiency by a factor of over 2.5.
The lack of general mechanical ability, including skilled mechanics, machine tools, and manufacturing methods, limited the efficiency of actual engines and their design until about 1840.
Higher-pressured engines were developed by Oliver Evans and Richard Trevithick, working independently. These engines were not very efficient but had high power-to-weight ratio, allowing them to be used for powering locomotives and boats.
The centrifugal governor, which had first been used by Watt to maintain a constant speed, worked by throttling the inlet steam, which lowered the pressure, resulting in a loss of efficiency on the high (above atmospheric) pressure engines. Later control methods reduced or eliminated this pressure loss.
The improved valving mechanism of the Corliss steam engine (Patented. 1849) was better able to adjust speed with varying load and increased efficiency by about 30%. The Corliss engine had separate valves and headers for the inlet and exhaust steam so the hot feed steam never contacted the cooler exhaust ports and valving. The valves were quick acting, which reduced the amount of throttling of the steam and resulted in faster response. Instead of operating a throttling valve, the governor was used to adjust the valve timing to give a variable steam cut-off. The variable cut-off was responsible for a major portion of the efficiency increase of the Corliss engine.
Others before Corliss had at least part of this idea, including Zachariah Allen, who patented variable cut-off, but lack of demand, increased cost and complexity and poorly developed machining technology delayed introduction until Corliss.
The Porter-Allen high-speed engine (ca. 1862) operated at from three to five times the speed of other similar-sized engines. The higher speed minimized the amount of condensation in the cylinder, resulting in increased efficiency.
Compound engines gave further improvements in efficiency. By the 1870s triple-expansion engines were being used on ships. Compound engines allowed ships to carry less coal, leaving more capacity for freight. Compound engines were used on some locomotives but were not widely adopted because of their mechanical complexity.
A very well-designed and built steam locomotive used to get around 7-8% efficiency in its heyday. The most efficient reciprocating steam engine design (per stage) was the uniflow engine, but by the time it appeared steam was being displaced by diesel engines, which were even more efficient and had the advantages of requiring less labor (for coal handling and oiling), being a more dense fuel, and displaced less cargo.
<templatestyles src="Template:Blockquote/styles.css" />Using statistics collected during the early 1940s, the Santa Fe Railroad measured the efficiency of their fleet of steam locomotives in comparison with the FT units that they were just putting into service in significant numbers. They determined that the cost of a ton of oil fuel used in steam engines was $5.04 and yielded 20.37 train miles system wide on average. Diesel fuel cost $11.61 but produced 133.13 train miles per ton. In effect, diesels ran six times as far as steamers utilizing fuel that cost only twice as much. This was due to the much better thermal efficiency of diesel engines compared to steam. Presumably the trains used as a milage standard were 4,000 ton freight consists which was the normal tannage l (sic) at that time.
Steam turbine.
The steam turbine is the most efficient steam engine and for this reason is universally used for electrical generation. Steam expansion in a turbine is nearly continuous, which makes a turbine comparable to a very large number of expansion stages. Steam power stations operating at the critical point have efficiencies in the low 40% range. Turbines produce direct rotary motion and are far more compact and weigh far less than reciprocating engines and can be controlled to within a very constant speed. As is the case with the gas turbine, the steam turbine works most efficiently at full power, and poorly at slower speeds. For this reason, despite their high power to weight ratio, steam turbines have been primarily used in applications where they can be run at a constant speed. In AC electrical generation maintaining an extremely constant turbine speed is necessary to maintain the correct frequency.
Stirling engines.
The Stirling engine has the highest theoretical efficiency of any thermal engine but it has a low output power to weight ratio, therefore Stirling engines of practical output tend to be large. The size effect of the Stirling engine is due to its reliance on the expansion of a gas with an increase in temperature and practical limits on the working temperature of engine components. For an ideal gas, increasing its absolute temperature for a given volume, only increases its pressure proportionally, therefore, where the low pressure of the Stirling engine is atmospheric, its practical pressure difference is constrained by temperature limits and is typically not more than a couple of atmospheres, making the piston pressures of the Stirling engine very low, hence relatively large piston areas are required to obtain useful output power.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\eta = \\frac{ \\mathrm{work\\ done} } {\\mathrm{heat\\ absorbed}} = \\frac{ Q_1-Q_2 }{ Q_1}"
},
{
"math_id": 1,
"text": "Q_1"
},
{
"math_id": 2,
"text": "Q_1-Q_2"
}
]
| https://en.wikipedia.org/wiki?curid=7248770 |
72489376 | Beta-tungsten | Metastable phase of tungsten
Beta-tungsten (β-W) is a metastable phase of tungsten widely observed in tungsten thin films. While the commonly existing stable alpha-tungsten (α-W) has a body-centered cubic (A2) structure, β-W adopts the topologically close-packed A15 structure containing eight atoms per unit cell, and it irreversibly transforms to the stable α phase through thermal annealing of up to 650 °C. It has been found that β-W possesses the giant spin Hall effect, wherein the applied charge current generates a transverse spin current, and this leads to potential applications in magnetoresistive random access memory devices.
History.
β-W was first observed by Hartmann et al. in 1931 as part of the dendritic metallic deposit formed on the cathode after electrolysis of phosphate melts below 650 °C. In the early stages of research into β-W, oxygen was commonly found to promote the formation of the β-W structure, so there was a long-standing debate over whether β-W is a phase of single-element tungsten or a tungsten suboxide. Since the 1950s, however, substantial experimental evidence has shown that the oxygen in β-W thin films is in a zero-valence state, and thus the structure is a true allotrope of tungsten.
While the initial interest in β-W thin films was driven by its superconducting properties at low temperatures, the discovery of the giant spin Hall effect in β-W thin films by Buhrman et al. in 2012 has generated new interest in the material for potential applications in spintronic magnetic random access memories and spin-logic devices.
Structure.
β-W has a cubic A15 structure with space group formula_0, which belongs to the Frank–Kasper phases family. Each unit cell contains eight tungsten atoms. The structure can be seen as a cubic lattice with one atom at each corner, one atom in the center, and two atoms on each face. There are two inequivalent tungsten sites, corresponding to Wyckoff positions formula_1 and formula_2, respectively. On the first site, Wyckoff position formula_1, each tungsten atom is bonded to twelve equivalent W atoms to form a mixture of edge- and face-sharing WW12 cuboctahedra. On the second site, Wyckoff position formula_2, each tungsten atom is bonded to fourteen neighboring tungsten atoms, and there is a spread of W–W bond lengths ranging from 2.54 to 3.12 Å. The experimentally measured lattice parameter of β-W is 5.036 Å, while the DFT-calculated value is 5.09 Å.
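For reference, the A15 arrangement described above can be written out explicitly. The fractional coordinates below are the standard Wyckoff positions of the A15 (Cr3Si-type) structure; combining them with the lattice parameter quoted in the text is an illustrative sketch, not a refined crystal structure:

```python
import numpy as np

a = 5.036                                    # experimental lattice parameter of beta-W, in angstroms

wyckoff_2a = [(0.0, 0.0, 0.0), (0.5, 0.5, 0.5)]
wyckoff_6c = [(0.25, 0.0, 0.5), (0.75, 0.0, 0.5),
              (0.5, 0.25, 0.0), (0.5, 0.75, 0.0),
              (0.0, 0.5, 0.25), (0.0, 0.5, 0.75)]

fractional = np.array(wyckoff_2a + wyckoff_6c)
cartesian = fractional * a                   # Cartesian coordinates for the cubic cell
print(len(cartesian))                        # 8 tungsten atoms per unit cell
print(a / 2)                                 # ~2.52 A: spacing along the 6c chains, close to the
                                             # shortest W-W bond length quoted above
```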
Properties.
Two key properties of β-W have been well-established: the high electrical resistivity and the giant spin Hall effect.
Although the exact value depends on the preparation conditions, β-W has an electrical resistivity at least five to ten times higher than that of α-W (5.3 μΩ·cm), and this high resistivity remains almost unchanged over a temperature range of 5 to 380 K, making β-W a potential thin-film resistor while α-W is a thin-film conductor.
Thin films of β-W display a giant spin Hall effect with a spin Hall angle of 0.30 ± 0.02 and a spin-diffusion length of around 3.5 nm. In contrast, α-W exhibits a much smaller spin Hall angle of less than 0.07 and a comparable spin-diffusion length. In the spin Hall effect, the application of a longitudinal electric current through a nonmagnetic material generates a transverse spin current due to the spin–orbit interaction, and the spin Hall angle is defined as the ratio of the transverse spin current density and the longitudinal electric current density. The spin Hall angle of β-W is large enough to generate spin torques capable of flipping or setting the magnetization of adjacent magnetic layers into precession by means of the spin Hall effect.
Preparation.
While there have been some reports of preparing β-W by chemical methods such as hydrogen reduction, almost all of the β-W reported in the past thirty years has been prepared through sputter deposition, an atom-by-atom physical vapor deposition (PVD) technique. In sputter deposition, a tungsten target is bombarded with ionized gas molecules (usually Ar), causing tungsten atoms to be "sputtered" off into the plasma. These vaporized atoms are then deposited when they condense as a thin film on the substrate to be coated. The formation of β-W through sputter deposition depends on the base pressure, Ar pressure, substrate temperature, impurity gas, deposition rate, film thickness, substrate type, etc. It has been widely observed that an oxygen or nitrogen gas flow can assist and is necessary for the formation of β-W, but recently there have also been reports of preparing β-W without introducing any impurity gas during deposition.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Pm\\bar{3}n"
},
{
"math_id": 1,
"text": "2a"
},
{
"math_id": 2,
"text": "6c"
}
]
| https://en.wikipedia.org/wiki?curid=72489376 |
724954 | Donald C. Spencer | American mathematician
Donald Clayton Spencer (April 25, 1912 – December 23, 2001) was an American mathematician, known for work on deformation theory of structures arising in differential geometry, and on several complex variables from the point of view of partial differential equations. He was born in Boulder, Colorado, and educated at the University of Colorado and MIT.
Career.
He wrote a Ph.D. in diophantine approximation under J. E. Littlewood and G.H. Hardy at the University of Cambridge, completed in 1939. He had positions at MIT and Stanford before his appointment in 1950 at Princeton University. There he was involved in a series of collaborative works with Kunihiko Kodaira on the deformation of complex structures, which had some influence on the theory of complex manifolds and algebraic geometry, and the conception of moduli spaces.
He also was led to formulate the "d-bar Neumann problem", for the operator formula_0 (see complex differential form) in PDE theory, to extend Hodge theory and the "n"-dimensional Cauchy–Riemann equations to the non-compact case. This is used to show existence theorems for holomorphic functions.
He later worked on pseudogroups and their deformation theory, based on a fresh approach to overdetermined systems of PDEs (bypassing the Cartan–Kähler ideas based on differential forms by making an intensive use of jets). Formulated at the level of various chain complexes, this gives rise to what is now called Spencer cohomology, a subtle and difficult theory both of formal and of analytical structure. This is a kind of Koszul complex theory, taken up by numerous mathematicians during the 1960s. In particular a theory for Lie equations formulated by Malgrange emerged, giving a very broad formulation of the notion of "integrability".
Legacy.
After his death, a mountain peak outside Silverton, Colorado was named in his honor.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\bar{\\partial}"
}
]
| https://en.wikipedia.org/wiki?curid=724954 |
72497805 | Aperiodic crystal | Crystal type lacking 3D periodicity
Aperiodic crystals lack three-dimensional translational symmetry but still exhibit three-dimensional long-range order. In other words, they are periodic crystals in higher dimensions. They are classified into three different categories: incommensurate modulated structures, incommensurate composite structures, and quasicrystals.
The diffraction patterns of aperiodic crystals contain two sets of peaks, which include "main reflections" and "satellite reflections". Main reflections are usually stronger in intensity and span a lattice defined by three-dimensional reciprocal lattice vectors formula_0. Satellite reflections are weaker in intensity and are known as "lattice ghosts". These reflections do not correspond to any lattice points in physical space and cannot be indexed with the original three vectors. To understand aperiodic crystal structures, one must use the superspace approach. In materials science, "superspace" or higher-dimensional space refers to the concept of describing the structures and properties of materials in terms of dimensions beyond the three dimensions of physical space. This may involve using mathematical models to describe the behavior of atoms or molecules in a material in four, five, or even higher dimensions.
History.
The history of aperiodic crystals can be traced back to the early 20th century, when the science of crystallography was in its infancy. At that time, it was generally accepted that the ground state of matter was always an ideal crystal with three-dimensional space group symmetry, or lattice periodicity. However, in the late 1900s, a number of developments in the field of crystallography challenged this belief. Researchers began to focus on the scattering of X-rays and other particles beyond just the Bragg peaks, which allowed them to better understand the effects of defects and finite size on the structure of crystals, as well as the presence of additional spots in diffraction patterns due to periodic variations in the crystal structure. These findings showed that the ground state of matter was not always an ideal crystal, and that other, more complex structures could also exist. These structures were later classified as aperiodic crystals, and their study has continued to be an active area of research in the field of crystallography.
Mathematics of the superspace approach.
The fundamental property of an aperiodic crystal can be understood as a three-dimensional physical space, where the atoms are positioned, plus the additional dimensions of a second subspace.
Superspace.
Dimensionalities of aperiodic crystals: formula_1, formula_2, and formula_3.
The "formula_4" represents the dimensions of the first subspace, which is also called the "external space" (formula_5) or "parallel space" (formula_6).
The "formula_7" represents the additional dimension of the second subspace, which is also called "internal space" ("formula_8) or "perpendicular space" (formula_9). It is perpendicular to the first subspace.
In summary, superspace is the direct sum of two subspaces. With the superspace approach, we can now describe a three-dimensional aperiodic structure as a higher dimensional periodic structure.
Peak indexing.
To index all Bragg peaks, both main and satellite reflections, additional lattice vectors must be introduced: formula_11, formula_12, or formula_13, for the formula_1, formula_2, and formula_3 cases respectively.
With respect to the three reciprocal lattice vectors formula_14 that span the main reflections, the fourth vector formula_15 can be expressed by
formula_16
Here formula_15 is the modulation wave vector, which represents the direction and wavelength of the modulation wave through the crystal structure.
If at least one of the formula_17 values is an irrational number, then the structure is considered to be "incommensurately modulated".
With the superspace approach, we can project the diffraction pattern from a higher-dimensional space to three-dimensional space.
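The indexing rule can be made concrete with a short sketch. The reciprocal basis and modulation components below are invented illustration values, not data for any real crystal:

```python
import numpy as np

# Hypothetical orthorhombic reciprocal basis (reciprocal angstroms).
a_star = np.array([0.20, 0.00, 0.00])
b_star = np.array([0.00, 0.15, 0.00])
c_star = np.array([0.00, 0.00, 0.10])

# Modulation wave vector q = sigma1 a* + sigma2 b* + sigma3 c*,
# with an irrational component, so the modulation is incommensurate.
sigma = (0.0, 0.0, 2 ** 0.5 / 4)
q = sigma[0] * a_star + sigma[1] * b_star + sigma[2] * c_star

def peak(h, k, l, m):
    """(3+1)d indexing: s = h a* + k b* + l c* + m q."""
    return h * a_star + k * b_star + l * c_star + m * q

print(peak(1, 2, 0, 0))   # a main reflection (m = 0)
print(peak(1, 2, 0, 1))   # its first-order satellite (m = 1)
```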
Example.
Biphenyl.
The biphenyl molecule is a simple organic molecular compound consisting of two phenyl rings bonded by a central C-C single bond, which exhibits a modulated molecular crystal structure. Two competing factors are important for the molecule's conformation. One is steric hindrance of ortho-hydrogen, which leads to the repulsion between electrons and causes torsion of the molecule. As a result, the conformation of the molecule is non-planar, which often occurs when biphenyl is in the gas phase. The other factor is the formula_18-electron effect which favors coplanarity of the two planes. This is often the case when biphenyl is at room temperature.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(a^*, b^*, c^*)"
},
{
"math_id": 1,
"text": "3 + 1d"
},
{
"math_id": 2,
"text": "3 + 2d"
},
{
"math_id": 3,
"text": "3 + 3d"
},
{
"math_id": 4,
"text": "3"
},
{
"math_id": 5,
"text": "V_E\n"
},
{
"math_id": 6,
"text": "V^{II}"
},
{
"math_id": 7,
"text": "d"
},
{
"math_id": 8,
"text": "V_I\n"
},
{
"math_id": 9,
"text": "V^\\perp"
},
{
"math_id": 10,
"text": "V = V_E \\oplus V_I"
},
{
"math_id": 11,
"text": "s(3+1) = ha^* + kb^* + lc^* + mq"
},
{
"math_id": 12,
"text": "s(3+2) = ha^* + kb^* + lc^* + mq_1 + nq_2"
},
{
"math_id": 13,
"text": "s(3+3) = ha^* + kb^* + lc^* + mq_1 + nq_2 + pq_3"
},
{
"math_id": 14,
"text": "(a*, b*, c*)"
},
{
"math_id": 15,
"text": "q"
},
{
"math_id": 16,
"text": "q = \\sigma_1a^* + \\sigma_2b^* + \\sigma_3c^* "
},
{
"math_id": 17,
"text": "\\sigma"
},
{
"math_id": 18,
"text": "\\pi"
}
]
| https://en.wikipedia.org/wiki?curid=72497805 |
72500875 | Hexagonal ferrite | Hexaferrite
Hexagonal ferrites or hexaferrites are a family of ferrites with hexagonal crystal structure. The most common member is BaFe12O19, also called barium ferrite, BaM, etc. BaM is a strong room-temperature ferrimagnetic material with high anisotropy along the "c" axis. All the hexaferrite members are constructed by stacking a few building blocks in a certain order.
Basic building blocks.
S block.
The S block is very common in hexaferrites, which has a chemical formula of MeS6O82+. MeS are smaller metal cations, for example, Fe and other transition metals or noble metals. The S block is essentially a slab cut along the formula_0 plane of an AB2O4 spinel. Each S block has one A layer and one B layer. The A layer features MeS-centered tetrahedron and MeS-centered octahedron, while the B layer is made up of edge-sharing MeS-centered octahedron. Both A and B layers have the same chemical formula of MeS3O42+.
R block.
The R block has a chemical formula of MeLMeS6O112-. MeL are larger metal cations, for example, alkaline earth metals (Ba, Sr), rare earth metals, Pb, etc. The point group symmetry of the R block is formula_1. The large metal cations are located in the middle layer of the three hexagonally packed layers. This block is also composed of face-sharing MeS-centered octahedra and MeS-centered trigonal bipyramids.
T block.
The T block has a chemical formula of MeL2MeS8O142-. The point group symmetry of the T block is formula_2. One T block consists of 4 oxygen layers with the two MeL atoms substituting two oxygen atoms in the middle two layers. In one T block, there are both MeS-centered octahedra and MeS-centered tetrahedra.
Family members.
M-type ferrite.
M-type ferrite is made up of alternating S and R blocks in the sequence of SRS*R*. (* denotes rotating that layer around the "c" axis by 180°.) The chemical formula of M-type ferrite is MeLMeS12O19. Common examples are BaFe12O19, SrFe12O19. It exhibits formula_3 space group symmetry. For BaFe12O19, "a" = 5.89 Å and "c" = 23.18 Å. M-ferrite is a very robust ferrimagnetic material, thus widely used as fridge magnets, card strips, magnets in speakers, magnetic material in linear tape-open.
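As a consistency check of the block description, the following Python sketch (with MeS taken to be Fe and MeL to be Ba, purely for illustration) adds up one S block and one R block and recovers the M-type formula unit:

```python
from collections import Counter

S_block = Counter({"Fe": 6, "O": 8})             # MeS6O8 (charge +2)
R_block = Counter({"Ba": 1, "Fe": 6, "O": 11})   # MeL MeS6 O11 (charge -2)

def stack(*blocks):
    total = Counter()
    for block in blocks:
        total.update(block)
    return dict(total)

# One SR pair of the SRS*R* stacking; the starred blocks are 180-degree rotated
# copies, so the full unit cell contains two such formula units.
print(stack(S_block, R_block))   # {'Fe': 12, 'O': 19, 'Ba': 1} -> BaFe12O19
```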
W-type ferrite.
W-type ferrite, like the M-type, consists of S and R blocks, but the stacking order and the number of blocks are different. The stacking sequence in a W-ferrite is SSRS*S*R* and its chemical formula is MeLMeS18O27. It exhibits formula_3 space group symmetry. One example of W-type ferrite is BaFe18O27, with "a" = 5.88 Å and "c" = 32.85 Å.
R-type ferrite.
R-type ferrite has a chemical formula of MeLMeS6O11 with a space group of formula_3. Unlike other hexaferrites, R-type ferrite doesn't have an S block. Instead, it only has single B layers extracted from the S block. The stacking sequence is BRB*R*.
Y-type ferrite.
Y-type ferrite has a chemical formula of MeL2MeS14O22 with a space group of formula_4. One example is Ba2Co2Fe12O22 with "a" = 5.86 Å and "c" = 43.5 Å. Y-type ferrite is built up with S and T blocks with an order of 3(ST) in one unit cell. There is no horizontal mirror plane in a Y-type ferrite.
Z-type ferrite.
Z-type ferrite has a chemical formula of MeL3MeS26O41 with a space group of formula_3. It has a complicated stacking of SRSTS*R*S*T* in one unit cell. Some Z-type members may have sophisticated magnetic properties along different directions. One example is Ba3Co2Fe24O41 with "a" = 5.88 Å and "c" = 52.3 Å.
X-type ferrite.
X-type ferrite has a chemical formula of MeL2MeS30O46 with a space group of formula_4. The stacking order is 3(SRS*S*R*) in one unit cell. One example is Sr2Co2Fe28O46 with "c" = 83.74 Å. | [
{
"math_id": 0,
"text": "(111)"
},
{
"math_id": 1,
"text": "\\bar{6}m2"
},
{
"math_id": 2,
"text": "3m"
},
{
"math_id": 3,
"text": "P6_3/mmc"
},
{
"math_id": 4,
"text": "R\\bar{3}m"
}
]
| https://en.wikipedia.org/wiki?curid=72500875 |
72505162 | Rate-limiting step (biochemistry) | In biochemistry, a rate-limiting step is a step that controls the rate of a series of biochemical reactions. The statement is, however, a misunderstanding of how a sequence of enzyme catalyzed reaction steps operate. Rather than a single step controlling the rate, it has been discovered that multiple steps control the rate. Moreover, each controlling step controls the rate to varying degrees.
Blackman (1905) stated as an axiom: "when a process is conditioned as to its rapidity by a number of separate factors, the rate of the process is limited by the pace of the slowest factor." This implies that it should be possible, by studying the behavior of a complicated system such as a metabolic pathway, to characterize a single factor or reaction (namely the slowest), which plays the role of a master or rate-limiting step. In other words, the study of flux control can be simplified to the study of a single enzyme since, by definition, there can only be one 'rate-limiting' step. Since its conception, the 'rate-limiting' step has played a significant role in suggesting how metabolic pathways are controlled. Unfortunately, the notion of a 'rate-limiting' step is erroneous, at least under steady-state conditions. Modern biochemistry textbooks have begun to play down the concept. For example, the seventh edition of "Lehninger Principles of Biochemistry" explicitly states: "It has now become clear that, in most pathways, the control of flux is distributed among several enzymes, and the extent to which each contributes to the control varies with metabolic circumstances". However, the concept is still incorrectly used in research articles.
Historical perspective.
From the 1920s to the 1950s, there were a number of authors who discussed the concept of rate-limiting steps, also known as master reactions. Several authors have stated that the concept of the 'rate-limiting' step is incorrect. Burton (1936) was one of the first to point out that: "In the steady state of reaction chains, the principle of the master reaction has no application". Hearon (1952) made a more general mathematical analysis and developed strict rules for the prediction of mastery in a linear sequence of enzyme-catalysed reactions. Webb (1963) was highly critical of the concept of the rate-limiting step and of its blind application to solving problems of regulation in metabolism. Waley (1964) made a simple but illuminating analysis of simple linear chains. He showed that provided the intermediate concentrations were low compared to the formula_0 values of the enzymes, the following expression was valid:
formula_1
where formula_2 equals the pathway flux, and formula_3 and formula_4 are functions of the rate constants and intermediate metabolite concentrations. The formula_5 terms are proportional to the limiting rate formula_6 values of the enzymes. The first point to note from the above equation is that the pathway flux is a function of all the enzymes; there is no need for there to be a 'rate-limiting' step. If, however, all the terms formula_7, from formula_8 to formula_9, are small relative to formula_10, then the first enzyme will contribute the most to determining the flux and therefore could be termed the 'rate-limiting' step.
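The distributed nature of control follows directly from the expression above. The sketch below evaluates it for a three-enzyme chain with hypothetical, equal coefficients; doubling any single enzyme activity raises the flux by only 20%, so no single step is "the" rate-limiting one:

```python
def pathway_flux(e, coeffs, Q=1.0):
    """Waley-type expression for a linear chain: 1/F = (1/Q) * sum(coeff_i / e_i)."""
    return Q / sum(c / ei for c, ei in zip(coeffs, e))

coeffs = [1.0, 1.0, 1.0]      # the R, ..., X, ..., Z terms (hypothetical values)
e = [1.0, 1.0, 1.0]           # enzyme activities, proportional to their limiting rates

base = pathway_flux(e, coeffs)
for i in range(len(e)):
    perturbed = list(e)
    perturbed[i] *= 2.0       # double one enzyme at a time
    print(i + 1, round(pathway_flux(perturbed, coeffs) / base, 3))   # 1.2 for every enzyme
```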
Modern perspective.
The modern perspective is that rate-limitingness should be quantitative and that it is distributed through a pathway to varying degrees. This idea was first considered by Higgins in the late 1950s as part of his PhD thesis, where he introduced the quantitative measure he called the 'reflection coefficient', which described the relative change of one variable with respect to another for small perturbations. In his Ph.D. thesis, Higgins describes many properties of the reflection coefficients, and in later work three groups (Savageau; Heinrich and Rapoport; and Jim Burns, in his 1971 thesis and subsequent publications) independently and simultaneously developed this work into what is now called metabolic control analysis or, in the specific form developed by Savageau, biochemical systems theory.
The variations in terminology between the different papers on metabolic control analysis were later harmonized by general agreement. | [
{
"math_id": 0,
"text": "K_\\mathrm{m}"
},
{
"math_id": 1,
"text": "\n\\frac{1}{F} = \\frac{1}{Q} \\left(\\frac{R}{e_1} + \\ldots \\frac{X}{e_i} + \\ldots + \\frac{Z}{e_n} \\right) \n"
},
{
"math_id": 2,
"text": "F"
},
{
"math_id": 3,
"text": "Q, R, \\ldots, X, \\ldots"
},
{
"math_id": 4,
"text": "Z"
},
{
"math_id": 5,
"text": "e_i"
},
{
"math_id": 6,
"text": "V"
},
{
"math_id": 7,
"text": "X / e_i"
},
{
"math_id": 8,
"text": "S/e_2"
},
{
"math_id": 9,
"text": "Z/e_n"
},
{
"math_id": 10,
"text": "R/e_1"
}
]
| https://en.wikipedia.org/wiki?curid=72505162 |
7250518 | Pappus graph | Bipartite, 3-regular undirected graph
In the mathematical field of graph theory, the Pappus graph is a bipartite, 3-regular, undirected graph with 18 vertices and 27 edges, formed as the Levi graph of the Pappus configuration. It is named after Pappus of Alexandria, an ancient Greek mathematician who is believed to have discovered the "hexagon theorem" describing the Pappus configuration. All the cubic, distance-regular graphs are known; the Pappus graph is one of the 13 such graphs.
The Pappus graph has rectilinear crossing number 5, and is the smallest cubic graph with that crossing number (sequence in the OEIS). It has girth 6, diameter 4, radius 4, chromatic number 2, chromatic index 3 and is both 3-vertex-connected and 3-edge-connected. It has book thickness 3 and queue number 2.
The Pappus graph has a chromatic polynomial equal to:
formula_0
The name "Pappus graph" has also been used to refer to a related nine-vertex graph, with a vertex for each point of the Pappus configuration and an edge for every pair of points on the same line; this nine-vertex graph is 6-regular, is the complement graph of the union of three disjoint triangle graphs, and is the complete tripartite graph K3,3,3. The first Pappus graph can be embedded in the torus to form a self-Petrie dual regular map with nine hexagonal faces; the second, to form a regular map with 18 triangular faces. The two regular toroidal maps are dual to each other.
Algebraic properties.
The automorphism group of the Pappus graph is a group of order 216. It acts transitively on the vertices, on the edges and on the arcs of the graph. Therefore the Pappus graph is a symmetric graph. It has automorphisms that take any vertex to any other vertex and any edge to any other edge. According to the "Foster census", the Pappus graph, referenced as F018A, is the only cubic symmetric graph on 18 vertices.
The characteristic polynomial of the Pappus graph is formula_1. It is the only graph with this characteristic polynomial, making it a graph determined by its spectrum.
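Several of the invariants quoted above can be checked directly. The sketch below assumes the NetworkX and NumPy packages are available, and uses NetworkX's built-in small-graph generator for the Pappus graph; it verifies the order, size, regularity, bipartiteness, diameter, radius, and the spectrum predicted by the characteristic polynomial.

```python
# Numerical check of several invariants of the Pappus graph.
import networkx as nx
import numpy as np

G = nx.pappus_graph()

print(G.number_of_nodes(), G.number_of_edges())           # 18 27
print(set(dict(G.degree()).values()))                     # {3}: the graph is cubic
print(nx.is_bipartite(G), nx.diameter(G), nx.radius(G))   # True 4 4

# The characteristic polynomial (x-3) x^4 (x+3) (x^2-3)^6 predicts eigenvalues
# -3, -sqrt(3) (multiplicity 6), 0 (multiplicity 4), sqrt(3) (multiplicity 6), 3.
A = nx.to_numpy_array(G)
print(np.round(np.sort(np.linalg.eigvalsh(A)), 6))
```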
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(x-1)x(x^{16} - 26x^{15} + 325x^{14} - 2600x^{13} + 14950x^{12} - 65762x^{11} + 229852x^{10} - 653966x^9 + 1537363x^8 - 3008720x^7 + 4904386x^6 - 6609926x^5 + 7238770x^4 - 6236975x^3 + 3989074x^2 - 1690406x + 356509)"
},
{
"math_id": 1,
"text": "(x-3) x^4 (x+3) (x^2-3)^6"
}
]
| https://en.wikipedia.org/wiki?curid=7250518 |
725111 | Disdyakis triacontahedron | Catalan solid with 120 faces
In geometry, a disdyakis triacontahedron, hexakis icosahedron, decakis dodecahedron or kisrhombic triacontahedron is a Catalan solid with 120 faces and the dual to the Archimedean truncated icosidodecahedron. As such it is face-uniform but with irregular face polygons. It slightly resembles an inflated rhombic triacontahedron: if one replaces each face of the rhombic triacontahedron with a single vertex and four triangles in a regular fashion, one ends up with a disdyakis triacontahedron. That is, the disdyakis triacontahedron is the Kleetope of the rhombic triacontahedron. It is also the barycentric subdivision of the regular dodecahedron and icosahedron. It has the most faces among the Archimedean and Catalan solids, with the snub dodecahedron, with 92 faces, in second place.
If the bipyramids, the gyroelongated bipyramids, and the trapezohedra are excluded, the disdyakis triacontahedron has the most faces of any strictly convex polyhedron in which every face has the same shape.
Projected into a sphere, the edges of a disdyakis triacontahedron define 15 great circles. Buckminster Fuller used these 15 great circles, along with 10 and 6 others in two other polyhedra to define his 31 great circles of the spherical icosahedron.
Geometry.
Being a Catalan solid with triangular faces, the disdyakis triacontahedron's three face angles formula_0 and common dihedral angle formula_1 must obey the following constraints analogous to other Catalan solids:
formula_2
formula_3
formula_4
formula_5
The above four equations are solved simultaneously to get the following face angles and dihedral angle:
formula_6
formula_7
formula_8
formula_9
where formula_10 is the golden ratio.
As with all Catalan solids, the dihedral angles at all edges are the same, even though the edges may be of different lengths.
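The angle values quoted above are easy to confirm numerically. The short sketch below (NumPy assumed) evaluates the closed-form expressions given in the text and checks the four Catalan-solid constraints.

```python
# Numerical sanity check of the face and dihedral angles of the
# disdyakis triacontahedron, using only the closed forms given above.
import numpy as np

phi = (np.sqrt(5) + 1) / 2                       # golden ratio

a4  = np.arccos((7 - 4 * phi) / 30)              # angle at the 4-valent vertices
a6  = np.arccos((17 - 4 * phi) / 20)             # angle at the 6-valent vertices
a10 = np.arccos((2 + 5 * phi) / 12)              # angle at the 10-valent vertices
theta = np.arccos(-(155 + 48 * phi) / 241)       # common dihedral angle

print(np.degrees([a4, a6, a10, theta]))          # ~ 88.99, 58.24, 32.77, 164.89

assert np.isclose(a4 + a6 + a10, np.pi)          # face angles of a triangle sum to pi
for alpha, p in [(a4, 4), (a6, 6), (a10, 10)]:   # the three Catalan constraints
    assert np.isclose(np.sin(theta / 2), np.cos(np.pi / p) / np.cos(alpha / 2))
```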
Cartesian coordinates.
The 62 vertices of a disdyakis triacontahedron are given by:
where
formula_16,
formula_17, and
formula_18 is the golden ratio.
In the above coordinates, the first 12 vertices form a regular icosahedron, the next 20 vertices (those with "R") form a regular dodecahedron, and the last 30 vertices (those with "S") form an icosidodecahedron.
Normalizing all vertices to the unit sphere gives a "spherical" disdyakis triacontahedron, shown in the adjacent figure. This figure also depicts the 120 transformations associated with the full icosahedral group "Ih".
Symmetry.
The edges of the polyhedron projected onto a sphere form 15 great circles, and represent all 15 mirror planes of reflective "Ih" icosahedral symmetry. Combining pairs of light and dark triangles define the fundamental domains of the nonreflective ("I") icosahedral symmetry. The edges of a compound of five octahedra also represent the 10 mirror planes of icosahedral symmetry.
Orthogonal projections.
The disdyakis triacontahedron has three types of vertices which can be centered in an orthogonal projection:
Uses.
The "disdyakis triacontahedron", as a regular dodecahedron with pentagons divided into 10 triangles each, is considered the "holy grail" for combination puzzles like the Rubik's cube. Such a puzzle currently has no satisfactory mechanism. It is the most significant unsolved problem in mechanical puzzles, often called the "big chop" problem.
This shape was used to make 120-sided dice using 3D printing.
Since 2016, the Dice Lab has used the disdyakis triacontahedron to mass-market an injection-moulded 120-sided die. It is claimed that 120 is the largest possible number of faces on a fair die, aside from infinite families (such as right regular prisms, bipyramids, and trapezohedra) that would be impractical in reality due to the tendency to roll for a long time.
A disdyakis triacontahedron projected onto a sphere is used as the logo for Brilliant, a website containing a series of lessons on STEM-related topics.
Related polyhedra and tilings.
It is topologically related to a sequence of polyhedra defined by the face configuration "V4.6.2n". This group is special for having an even number of edges at every vertex and forming bisecting planes through the polyhedra and infinite lines in the plane, continuing into the hyperbolic plane for any "n" ≥ 7.
With an even number of faces at every vertex, these polyhedra and tilings can be shown by alternating two colors so all adjacent faces have different colors.
Each face on these domains also corresponds to the fundamental domain of a symmetry group with order 2,3,"n" mirrors at each triangle face vertex. This is *"n"32 in orbifold notation, and ["n",3] in Coxeter notation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\alpha_4, \\alpha_6, \\alpha_{10}"
},
{
"math_id": 1,
"text": "\\theta"
},
{
"math_id": 2,
"text": "\\sin(\\theta/2) = \\cos(\\pi/4) / \\cos(\\alpha_4/2)"
},
{
"math_id": 3,
"text": "\\sin(\\theta/2) = \\cos(\\pi/6) / \\cos(\\alpha_6/2)"
},
{
"math_id": 4,
"text": "\\sin(\\theta/2) = \\cos(\\pi/10) / \\cos(\\alpha_{10}/2)"
},
{
"math_id": 5,
"text": "\\alpha_4 + \\alpha_6 + \\alpha_{10} = \\pi"
},
{
"math_id": 6,
"text": "\\alpha_4 = \\arccos \\left(\\frac{7-4\\phi}{30} \\right) \\approx 88.992^{\\circ}"
},
{
"math_id": 7,
"text": "\\alpha_6 = \\arccos \\left( \\frac{17-4\\phi}{20} \\right) \\approx 58.238^{\\circ}"
},
{
"math_id": 8,
"text": "\\alpha_{10} = \\arccos \\left( \\frac{2+5\\phi}{12} \\right) \\approx 32.770^{\\circ}"
},
{
"math_id": 9,
"text": "\\theta = \\arccos \\left( -\\frac{155 + 48\\phi}{241} \\right) \\approx 164.888^{\\circ}"
},
{
"math_id": 10,
"text": "\\phi = \\frac{\\sqrt{5}+1}{2} \\approx 1.618"
},
{
"math_id": 11,
"text": "\\left(0, \\frac{\\pm 1}{\\sqrt{\\phi+2}} , \\frac{\\pm \\phi}{\\sqrt{\\phi+2}} \\right)"
},
{
"math_id": 12,
"text": "\\left(\\pm R, \\pm R, \\pm R\\right)"
},
{
"math_id": 13,
"text": "\\left(0, \\pm R\\phi, \\pm \\frac{R}{\\phi}\\right)"
},
{
"math_id": 14,
"text": "\\left(\\pm S, 0, 0\\right)"
},
{
"math_id": 15,
"text": "\\left(\\pm \\frac{S\\phi}{2}, \\pm\\frac{S}{2}, \\pm\\frac{S}{2\\phi}\\right)"
},
{
"math_id": 16,
"text": "R = \\frac{5}{3\\phi\\sqrt{\\phi+2}} = \\frac{\\sqrt{25 - 10\\sqrt{5}}}{3} \\approx 0.5415328270548438"
},
{
"math_id": 17,
"text": "S = \\frac{(7\\phi - 6) \\sqrt{\\phi+2}}{11} = \\frac{(2\\sqrt{5} - 3) \\sqrt{25 + 10\\sqrt{5}}}{11} \\approx 0.9210096876986302"
},
{
"math_id": 18,
"text": "\\phi = \\frac{\\sqrt{5} + 1}{2} \\approx 1.618"
}
]
| https://en.wikipedia.org/wiki?curid=725111 |
725126 | Pentagonal icositetrahedron | Catalan polyhedron
In geometry, a pentagonal icositetrahedron or pentagonal icosikaitetrahedron is a Catalan solid which is the dual of the snub cube. In crystallography it is also called a gyroid.
It has two distinct forms, which are mirror images (or "enantiomorphs") of each other.
Construction.
The pentagonal icositetrahedron can be constructed from a snub cube without taking the dual. Square pyramids are added to the six square faces of the snub cube, and triangular pyramids are added to the eight triangular faces that do not share an edge with a square. The pyramid heights are adjusted to make them coplanar with the other 24 triangular faces of the snub cube. The result is the pentagonal icositetrahedron.
Cartesian coordinates.
Denote the tribonacci constant by formula_0. (See snub cube for a geometric explanation of the tribonacci constant.) Then Cartesian coordinates for the 38 vertices of a pentagonal icositetrahedron centered at the origin are as follows:
The convex hulls for these vertices scaled by formula_1 result in a unit circumradius octahedron centered at the origin, a unit cube centered at the origin scaled to formula_2, and an irregular chiral snub cube scaled to formula_3, as visualized in the figure below:
Geometry.
The pentagonal faces have four angles of formula_4 and one angle of formula_5. The pentagon has three short edges of unit length each, and two long edges of length formula_6. The acute angle is between the two long edges. The dihedral angle equals formula_7.
If its dual snub cube has unit edge length, its surface area and volume are:
formula_8
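The metric data above can be reproduced numerically from the tribonacci constant alone. The sketch below (NumPy assumed) computes the constant as the real root of x³ = x² + x + 1 and evaluates the stated angles, edge length, surface area and volume.

```python
# Numerical check of the pentagonal icositetrahedron data quoted above.
import numpy as np

t = float(np.max(np.roots([1, -1, -1, -1]).real))   # tribonacci constant ~ 1.8392867552

obtuse   = np.degrees(np.arccos((1 - t) / 2))        # the four equal angles ~ 114.812
acute    = np.degrees(np.arccos(2 - t))              # the acute angle ~ 80.752
long_e   = (t + 1) / 2                               # long edge ~ 1.41964 (short edges = 1)
dihedral = np.degrees(np.arccos(-1 / (t**2 - 2)))    # ~ 136.309

assert np.isclose(4 * obtuse + acute, 540.0)         # pentagon angle sum

# Surface area and volume when the dual snub cube has unit edge length.
A = 3 * np.sqrt(22 * (5 * t - 1) / (4 * t - 3))      # ~ 19.2999
V = np.sqrt(11 * (t - 4) / (2 * (20 * t - 37)))      # ~ 7.4474
print(t, obtuse, acute, long_e, dihedral, A, V)
```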
Orthogonal projections.
The "pentagonal icositetrahedron" has three symmetry positions, two centered on vertices, and one on midedge.
Variations.
Isohedral variations with the same chiral octahedral symmetry can be constructed with pentagonal faces having 3 edge lengths.
The variation shown can be constructed by adding pyramids to 6 square faces and 8 triangular faces of a snub cube such that the new triangular faces, with 3 coplanar triangles merged, form identical pentagonal faces.
Related polyhedra and tilings.
This polyhedron is topologically related as a part of a sequence of polyhedra and tilings of pentagons with face configurations (V3.3.3.3."n"). (The sequence progresses into tilings of the hyperbolic plane for any "n".) These face-transitive figures have (n32) rotational symmetry.
The "pentagonal icositetrahedron " is second in a series of dual snub polyhedra and tilings with face configuration V3.3.4.3."n".
The pentagonal icositetrahedron is one of a family of duals to the uniform polyhedra related to the cube and regular octahedron.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "t\\approx 1.839\\,286\\,755\\,21"
},
{
"math_id": 1,
"text": "t^{-3}"
},
{
"math_id": 2,
"text": "R\\approx0.9416969935"
},
{
"math_id": 3,
"text": "R"
},
{
"math_id": 4,
"text": "\\arccos((1-t)/2)\\approx 114.812\\,074\\,477\\,90^{\\circ}"
},
{
"math_id": 5,
"text": "\\arccos(2-t)\\approx 80.751\\,702\\,088\\,39^{\\circ}"
},
{
"math_id": 6,
"text": "(t+1)/2\\approx 1.419\\,643\\,377\\,607\\,08"
},
{
"math_id": 7,
"text": "\\arccos(-1/(t^2-2))\\approx 136.309\\,232\\,892\\,32^{\\circ}"
},
{
"math_id": 8,
"text": "\\begin{align} A &= 3\\sqrt{\\frac{22(5t-1)}{4t-3}} &&\\approx 19.299\\,94 \\\\ V &= \\sqrt{\\frac{11(t-4)}{2(20t-37)}} &&\\approx 7.4474 \\end{align}"
}
]
| https://en.wikipedia.org/wiki?curid=725126 |
7251905 | Principle of covariance | In physics, the principle of covariance emphasizes the formulation of physical laws using only those physical quantities the measurements of which the observers in different frames of reference could unambiguously correlate.
Mathematically, the physical quantities must transform "covariantly", that is, under a certain representation of the group of coordinate transformations between admissible frames of reference of the physical theory. This group is referred to as the covariance group.
The principle of covariance does not require invariance of the physical laws under the group of admissible transformations although in most cases the equations are actually invariant. However, in the theory of weak interactions, the equations are not invariant under reflections (but are, of course, still covariant).
Covariance in Newtonian mechanics.
In Newtonian mechanics the admissible frames of reference are inertial frames with relative velocities much smaller than the speed of light. Time is then absolute and the transformations between admissible frames of references are Galilean transformations which (together with rotations, translations, and reflections) form the Galilean group. The covariant physical quantities are Euclidean scalars, vectors, and tensors. An example of a covariant equation is Newton's second law,
formula_0
where the covariant quantities are the mass formula_1 of a moving body (scalar), the velocity formula_2 of the body (vector), the force formula_3 acting on the body, and the invariant time formula_4.
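A small numerical sketch (NumPy assumed; all numbers are arbitrary) can illustrate this covariance: under a Galilean boost the coordinates change as x' = x − ut, but the acceleration, and hence the form of Newton's second law, does not.

```python
# Newton's second law keeps the same form in two frames related by a Galilean boost.
import numpy as np

m = 2.0                                    # mass
u = np.array([5.0, 0.0, 0.0])              # relative velocity of the primed frame
F = np.array([0.0, 3.0, 0.0])              # constant applied force
t = np.linspace(0.0, 1.0, 1001)

# Trajectory in the unprimed frame: x(t) = x0 + v0 t + (F / 2m) t^2
x0, v0 = np.zeros(3), np.array([1.0, 0.0, 0.0])
x = x0 + np.outer(t, v0) + 0.5 * np.outer(t**2, F / m)
xp = x - np.outer(t, u)                    # Galilean-transformed coordinates

def acceleration(traj, t):
    """Finite-difference second derivative along the trajectory."""
    return np.gradient(np.gradient(traj, t, axis=0), t, axis=0)

a, ap = acceleration(x, t), acceleration(xp, t)
interior = slice(2, -2)                    # avoid the one-sided endpoint formulas
print(np.allclose(m * a[interior], F), np.allclose(m * ap[interior], F))  # True True
```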
Covariance in special relativity.
In special relativity the admissible frames of reference are all inertial frames. The transformations between frames are the Lorentz transformations which (together with the rotations, translations, and reflections) form the Poincaré group. The covariant quantities are four-scalars, four-vectors etc., of the Minkowski space (and also more complicated objects like bispinors and others). An example of a covariant equation is the Lorentz force equation of motion of a charged particle in an electromagnetic field (a generalization of Newton's second law)
formula_5
where formula_1 and formula_6 are the mass and charge of the particle (invariant 4-scalars); formula_7 is the invariant interval (4-scalar); formula_8 is the 4-velocity (4-vector); and formula_9 is the electromagnetic field strength tensor (4-tensor).
Covariance in general relativity.
In general relativity, the admissible frames of reference are all reference frames. The transformations between frames are all arbitrary (invertible and differentiable) coordinate transformations. The covariant quantities are scalar fields, vector fields, tensor fields etc., defined on spacetime considered as a manifold. Main example of covariant equation is the Einstein field equations.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\nm\\frac{d\\vec{v}}{dt}=\\vec{F},\n"
},
{
"math_id": 1,
"text": "m"
},
{
"math_id": 2,
"text": "\\vec{v}"
},
{
"math_id": 3,
"text": "\\vec{F}"
},
{
"math_id": 4,
"text": "t"
},
{
"math_id": 5,
"text": "\nm\\frac{du^a}{ds}=qF^{ab}u_b,\n"
},
{
"math_id": 6,
"text": "q"
},
{
"math_id": 7,
"text": "ds"
},
{
"math_id": 8,
"text": "u^a"
},
{
"math_id": 9,
"text": "F^{ab}"
}
]
| https://en.wikipedia.org/wiki?curid=7251905 |
72520168 | Hooley's delta function | Mathematical function
In mathematics, Hooley's delta function (formula_0), also called the Erdős–Hooley delta function, gives the maximum number of divisors of formula_1 in the interval formula_2 over all formula_3, where formula_4 is Euler's number. The first few terms of this sequence are
formula_5 (sequence in the OEIS).
History.
The sequence was first introduced by Paul Erdős in 1974, then studied by Christopher Hooley in 1979.
In 2023, Dimitris Koukoulopoulos and Terence Tao proved that the sum of the first formula_1 terms satisfies formula_6 for formula_7. In particular, the average order of formula_0 to formula_8 is formula_9 for any formula_10.
Later in 2023 Kevin Ford, Koukoulopoulos, and Tao proved the lower bound formula_11, where formula_12, fixed formula_13, and formula_7.
Usage.
This function measures the tendency of divisors of a number to cluster.
The growth of this sequence is limited by formula_14 where formula_15 is the number of divisors of formula_16.
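The first values can be reproduced by a brute-force Python computation directly from the definition (standard library only); it uses the observation that the optimal interval can be taken to start just below a divisor, so it suffices to count divisors in [d, e·d) for each divisor d.

```python
# Brute-force values of Hooley's delta function for n = 1..24.
from math import e

def hooley_delta(n):
    divs = [d for d in range(1, n + 1) if n % d == 0]
    return max(sum(1 for q in divs if d <= q < e * d) for d in divs)

print([hooley_delta(n) for n in range(1, 25)])
# [1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 3, 1, 2, 2, 2, 1, 2, 1, 3, 2, 2, 1, 4]
```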
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\Delta(n) "
},
{
"math_id": 1,
"text": " n "
},
{
"math_id": 2,
"text": " [u, eu] "
},
{
"math_id": 3,
"text": " u "
},
{
"math_id": 4,
"text": " e "
},
{
"math_id": 5,
"text": "1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 3, 1, 2, 2, 2, 1, 2, 1, 3, 2, 2, 1, 4 "
},
{
"math_id": 6,
"text": " \\textstyle \\sum_{k=1}^n \\Delta(k) \\ll n (\\log \\log n)^{11/4} "
},
{
"math_id": 7,
"text": "n \\ge 100"
},
{
"math_id": 8,
"text": " k "
},
{
"math_id": 9,
"text": " O((\\log n)^k) "
},
{
"math_id": 10,
"text": " k > 0 "
},
{
"math_id": 11,
"text": " \\textstyle \\sum_{k=1}^n \\Delta(k) \\gg n (\\log \\log n)^{1+\\eta-\\epsilon} "
},
{
"math_id": 12,
"text": "\\eta=0.3533227\\ldots"
},
{
"math_id": 13,
"text": "\\epsilon"
},
{
"math_id": 14,
"text": "\\Delta(mn) \\leq \\Delta(n) d(m)"
},
{
"math_id": 15,
"text": "d(n)"
},
{
"math_id": 16,
"text": "n"
}
]
| https://en.wikipedia.org/wiki?curid=72520168 |
7252026 | Cremona group | In algebraic geometry, the Cremona group, introduced by Cremona (1863, 1865), is the group of birational automorphisms of the formula_0-dimensional projective space over a field formula_1. It is denoted by formula_2
or formula_3 or formula_4.
The Cremona group is naturally identified with the automorphism group formula_5 of the field of the rational functions in formula_0 indeterminates over formula_1, or in other words a pure transcendental extension of formula_1, with transcendence degree formula_0.
The projective general linear group of order formula_6, of projective transformations, is contained in the Cremona group of order formula_0. The two are equal only when formula_7 or formula_8, in which case both the numerator and the denominator of a transformation must be linear.
The Cremona group in 2 dimensions.
In two dimensions, Max Noether and Guido Castelnuovo showed that the complex Cremona group is generated by the standard quadratic transformation together with formula_9, though there was some controversy about whether their proofs were correct; a complete set of relations for these generators was later given. The structure of this group is still not well understood, though there has been a lot of work on finding elements or subgroups of it.
The Cremona group in higher dimensions.
There is little known about the structure of the Cremona group in three dimensions and higher, though many elements of it have been described. showed that it is (linearly) connected, answering a question of . There is no easy analogue of the Noether–Castelnuovo theorem, as showed that the Cremona group in dimension at least 3 is not generated by its elements of degree bounded by any fixed integer.
De Jonquières groups.
A De Jonquières group is a subgroup of a Cremona group of the following form. Pick a transcendence basis
formula_10 for a field extension of formula_1. Then a De Jonquières group is the subgroup of automorphisms of formula_11 mapping the subfield formula_12 into itself for some formula_13. It has a normal subgroup given by the Cremona group of automorphisms of formula_14 over the field formula_15, and the quotient group is the Cremona group of formula_15 over the field formula_1. It can also be regarded as the group of birational automorphisms of the fiber bundle formula_16.
When formula_17 and formula_18 the De Jonquières group is the group of Cremona transformations fixing a pencil of lines through a given point, and is the semidirect product of
formula_19 and formula_20. | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "k"
},
{
"math_id": 2,
"text": "Cr(\\mathbb{P}^n(k))"
},
{
"math_id": 3,
"text": "Bir(\\mathbb{P}^n(k))"
},
{
"math_id": 4,
"text": "Cr_n(k)"
},
{
"math_id": 5,
"text": "\\mathrm{Aut}_k(k(x_1, ..., x_n)) "
},
{
"math_id": 6,
"text": "n+1"
},
{
"math_id": 7,
"text": "n=0"
},
{
"math_id": 8,
"text": "n=1"
},
{
"math_id": 9,
"text": "\\mathrm{PGL}(3,k)"
},
{
"math_id": 10,
"text": "x_1, ..., x_n"
},
{
"math_id": 11,
"text": "k(x_1, ...,x_n)"
},
{
"math_id": 12,
"text": "k(x_1, ...,x_r)"
},
{
"math_id": 13,
"text": "r\\leq n"
},
{
"math_id": 14,
"text": "k(x_1, ..., x_n)"
},
{
"math_id": 15,
"text": "k(x_1, ..., x_r)"
},
{
"math_id": 16,
"text": "\\mathbb{P}^r\\times \\mathbb{P}^{n-r} \\to \\mathbb{P}^r"
},
{
"math_id": 17,
"text": " n=2"
},
{
"math_id": 18,
"text": " r=1"
},
{
"math_id": 19,
"text": "\\mathrm{PGL}_2(k)"
},
{
"math_id": 20,
"text": "\\mathrm{PGL}_2(k(t))"
}
]
| https://en.wikipedia.org/wiki?curid=7252026 |
7252030 | Semisimple operator | Linear operator
In mathematics, a linear operator "T : V → V" on a vector space "V" is semisimple if every "T"-invariant subspace has a complementary "T"-invariant subspace. If "T" is a semisimple linear operator on "V," then "V" is a semisimple representation of "T". Equivalently, a linear operator is semisimple if its minimal polynomial is a product of distinct irreducible polynomials.
A linear operator on a finite-dimensional vector space over an algebraically closed field is semisimple if and only if it is diagonalizable.
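This criterion is easy to illustrate computationally. The sketch below (SymPy assumed; the matrices are invented for demonstration) checks diagonalizability for a semisimple and a non-semisimple operator, and also splits an operator into commuting semisimple and nilpotent parts, anticipating the Jordan–Chevalley decomposition mentioned next.

```python
# Semisimplicity as diagonalizability, plus a semisimple + nilpotent splitting.
import sympy as sp

S = sp.Matrix([[2, 1], [1, 2]])       # eigenvalues 1 and 3: semisimple
N = sp.Matrix([[1, 1], [0, 1]])       # a nontrivial Jordan block: not semisimple
print(S.is_diagonalizable(), N.is_diagonalizable())     # True False

# Build an operator with a known nontrivial Jordan structure, then split it.
Q  = sp.Matrix([[1, 1, 0], [0, 1, 1], [1, 0, 1]])       # any invertible matrix
J0 = sp.Matrix([[2, 1, 0], [0, 2, 0], [0, 0, 3]])       # a 2x2 Jordan block plus eigenvalue 3
X  = Q * J0 * Q.inv()

P, J = X.jordan_form()                                  # X = P J P^-1
s = P * sp.diag(*[J[i, i] for i in range(J.rows)]) * P.inv()   # semisimple part
n = X - s                                               # nilpotent part
print(n**3)                                             # zero matrix
print(s*n - n*s)                                        # zero matrix: s and n commute
```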
Over a perfect field, the Jordan–Chevalley decomposition expresses an endomorphism formula_0 as a sum of a semisimple endomorphism "s" and a nilpotent endomorphism "n" such that both "s" and "n" are polynomials in "x". | [
{
"math_id": 0,
"text": "x : V \\to V"
}
]
| https://en.wikipedia.org/wiki?curid=7252030 |
7252198 | Real algebraic geometry | In mathematics, real algebraic geometry is the sub-branch of algebraic geometry studying real algebraic sets, i.e. real-number solutions to algebraic equations with real-number coefficients, and mappings between them (in particular real polynomial mappings).
Semialgebraic geometry is the study of semialgebraic sets, i.e. real-number solutions to algebraic inequalities with real-number coefficients, and mappings between them. The most natural mappings between semialgebraic sets are semialgebraic mappings, i.e., mappings whose graphs are semialgebraic sets.
Terminology.
Nowadays the words 'semialgebraic geometry' and 'real algebraic geometry' are used as synonyms, because real algebraic sets cannot be studied seriously without the use of semialgebraic sets. For example, a projection of a real algebraic set along a coordinate axis need not be a real algebraic set, but it is always a semialgebraic set: this is the Tarski–Seidenberg theorem. Related fields are o-minimal theory and real analytic geometry.
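The projection statement above can be made concrete with a small numerical sketch (NumPy assumed; the sample points are arbitrary): projecting the real circle onto a coordinate axis yields a set described by a polynomial inequality rather than a polynomial equation.

```python
# The projection of the real algebraic set x^2 + y^2 = 1 onto the x-axis is the
# set of x for which y^2 = 1 - x^2 has a real solution, i.e. where 1 - x^2 >= 0.
# A real algebraic subset of the line is either finite or the whole line, so the
# interval [-1, 1] is semialgebraic but not algebraic.
import numpy as np

xs = np.linspace(-2.0, 2.0, 9)
in_projection = (1.0 - xs**2) >= 0.0       # the quantifier over y eliminated by hand
print(list(zip(xs.round(2), in_projection)))
```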
Examples: Real plane curves are examples of real algebraic sets and polyhedra are examples of semialgebraic sets. Real algebraic functions and Nash functions are examples of semialgebraic mappings. Piecewise polynomial mappings (see the Pierce–Birkhoff conjecture) are also semialgebraic mappings.
Computational real algebraic geometry is concerned with the algorithmic aspects of real algebraic (and semialgebraic) geometry. The main algorithm is cylindrical algebraic decomposition. It is used to cut semialgebraic sets into nice pieces and to compute their projections.
Real algebra is the part of algebra which is relevant to real algebraic (and semialgebraic) geometry. It is mostly concerned with the study of ordered fields and ordered rings (in particular real closed fields) and their applications to the study of positive polynomials and sums-of-squares of polynomials. (See Hilbert's 17th problem and Krivine's Positivstellensatz.) The relation of real algebra to real algebraic geometry is similar to the relation of commutative algebra to complex algebraic geometry. Related fields are the theory of moment problems, convex optimization, the theory of quadratic forms, valuation theory and model theory.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\R^n"
},
{
"math_id": 1,
"text": "S^n"
},
{
"math_id": 2,
"text": "\\R^{n+1}"
},
{
"math_id": 3,
"text": "S^3"
},
{
"math_id": 4,
"text": "\\mathbb{CP}^3"
}
]
| https://en.wikipedia.org/wiki?curid=7252198 |
7252626 | Constant amplitude zero autocorrelation waveform | In signal processing, a Constant Amplitude Zero AutoCorrelation waveform (CAZAC) is a periodic complex-valued signal with modulus one and out-of-phase periodic (cyclic) autocorrelations equal to zero. CAZAC sequences find application in wireless communication systems, for example in 3GPP Long Term Evolution for synchronization of mobile phones with base stations. Zadoff–Chu sequences are well-known CAZAC sequences with special properties.
Example CAZAC Sequence.
For a CAZAC sequence of length formula_0, where formula_1 is relatively prime to formula_0, the formula_2th symbol formula_3 is given by:
Even N.
formula_4
Odd N.
formula_5
Power Spectrum of CAZAC Sequence.
The power spectrum of a CAZAC sequence is flat.
If we have a CAZAC sequence, the time-domain autocorrelation is an impulse:
formula_6
The discrete Fourier transform of the autocorrelation is flat:
formula_7
The power spectrum is related to the autocorrelation by
formula_8
As a result, the power spectrum is also flat:
formula_9
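These properties are easy to confirm numerically. The sketch below (NumPy assumed; the choices N = 16 and M = 3 are arbitrary, with M coprime to the even length N) builds the sequence from the even-N formula above and checks constant modulus, impulsive cyclic autocorrelation and a flat power spectrum; up to the normalisation by N used in the text, these match the stated identities.

```python
# Numerical check of the defining CAZAC properties.
import numpy as np

N, M = 16, 3
k = np.arange(N)
u = np.exp(1j * M * np.pi * k**2 / N)                  # the even-N formula above

print(np.allclose(np.abs(u), 1.0))                     # constant amplitude

# Cyclic autocorrelation r[tau] = sum_k u[k] * conj(u[(k + tau) mod N])
r = np.array([np.sum(u * np.conj(np.roll(u, -tau))) for tau in range(N)])
print(np.allclose(r[1:], 0.0), np.isclose(r[0], N))    # impulse: zero off-peak

# Flat power spectrum: every DFT bin carries the same power.
P = np.abs(np.fft.fft(u)) ** 2
print(np.allclose(P, P[0]))                            # True
```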
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": "M"
},
{
"math_id": 2,
"text": "k"
},
{
"math_id": 3,
"text": "u_k"
},
{
"math_id": 4,
"text": "u_k = \\exp \\left(j \\frac{M \\pi k^2}{N} \\right)"
},
{
"math_id": 5,
"text": "u_k = \\exp \\left(j \\frac{M \\pi k (k+1)}{N} \\right)"
},
{
"math_id": 6,
"text": "r(\\tau)=\\delta(n)"
},
{
"math_id": 7,
"text": "R(f) = 1/N"
},
{
"math_id": 8,
"text": "R(f) = \\left| X(f) \\right|^2"
},
{
"math_id": 9,
"text": "\\left| X(f) \\right|^2 = 1/N"
}
]
| https://en.wikipedia.org/wiki?curid=7252626 |
725272 | Differential structure | Mathematical structure
In mathematics, an "n"-dimensional differential structure (or differentiable structure) on a set "M" makes "M" into an "n"-dimensional differential manifold, which is a topological manifold with some additional structure that allows for differential calculus on the manifold. If "M" is already a topological manifold, it is required that the new topology be identical to the existing one.
Definition.
For a natural number "n" and some "k" which may be a non-negative integer or infinity, an "n"-dimensional "C""k" differential structure is defined using a "C""k"-atlas, which is a set of bijections called charts between subsets of "M" (whose union is the whole of "M") and open subsets of formula_0:
formula_1
which are "C""k"-compatible (in the sense defined below):
Each chart allows a subset of the manifold to be viewed as an open subset of formula_0, but the usefulness of this depends on how much the charts agree when their domains overlap.
Consider two charts:
formula_2
formula_3
The intersection of their domains is
formula_4
whose images under the two charts are
formula_5
formula_6
The transition map between the two charts translates between their images on their shared domain:
formula_7
formula_8
Two charts formula_9 are "C""k"-compatible if
formula_10
are open, and the transition maps
formula_11
have continuous partial derivatives of order "k". If "k" = 0, we only require that the transition maps are continuous; consequently a "C"0-atlas is simply another way to define a topological manifold. If "k" = ∞, derivatives of all orders must be continuous. A family of "C""k"-compatible charts covering the whole manifold is a "C""k"-atlas defining a "C""k" differential manifold. Two atlases are "C""k"-equivalent if the union of their sets of charts forms a "C""k"-atlas. In particular, a "C""k"-atlas that is "C"0-compatible with a "C"0-atlas that defines a topological manifold is said to determine a "C""k" differential structure on the topological manifold. The "C""k" equivalence classes of such atlases are the distinct "C""k" differential structures of the manifold. Each distinct differential structure is determined by a unique maximal atlas, which is simply the union of all atlases in the equivalence class.
For simplification of language, without any loss of precision, one might just call a maximal "C""k"−atlas on a given set a "C""k"−manifold. This maximal atlas then uniquely determines both the topology and the underlying set, the latter being the union of the domains of all charts, and the former having the set of all these domains as a basis.
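As a concrete illustration of the compatibility condition above (SymPy assumed), the two stereographic projections of the unit circle form a C∞-atlas: on the overlap of their domains the transition map is u ↦ 1/u, which has continuous derivatives of every order away from u = 0.

```python
# A concrete C-infinity atlas on the unit circle: the two stereographic charts.
import sympy as sp

theta = sp.symbols('theta', real=True)
x, y = sp.cos(theta), sp.sin(theta)      # a point of the circle x^2 + y^2 = 1

u = x / (1 - y)                          # chart away from the north pole (0, 1)
v = x / (1 + y)                          # chart away from the south pole (0, -1)

print(sp.simplify(u * v))                # 1, i.e. the transition map is v = 1/u

t = sp.symbols('t', nonzero=True)
transition = 1 / t
print([sp.diff(transition, t, k) for k in range(1, 4)])   # derivatives of all orders exist
```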
Existence and uniqueness theorems.
For any integer "k" > 0 and any "n"−dimensional "C""k"−manifold, the maximal atlas contains a "C"∞−atlas on the same underlying set by a theorem due to Hassler Whitney. It has also been shown that any maximal "C""k"−atlas contains some number of "distinct" maximal "C"∞−atlases whenever "n" > 0, although for any pair of these "distinct" "C"∞−atlases there exists a "C"∞−diffeomorphism identifying the two. It follows that there is only one class of smooth structures (modulo pairwise smooth diffeomorphism) over any topological manifold which admits a differentiable structure, i.e. the "C"∞−structures in a "C""k"−manifold. A bit loosely, one might express this by saying that the smooth structure is (essentially) unique. The case for "k" = 0 is different. Namely, there exist topological manifolds which admit no "C"1−structure, a result proved by , and later explained in the context of Donaldson's theorem (compare Hilbert's fifth problem).
Smooth structures on an orientable manifold are usually counted modulo orientation-preserving smooth homeomorphisms. There then arises the question whether orientation-reversing diffeomorphisms exist. There is an "essentially unique" smooth structure for any topological manifold of dimension smaller than 4. For compact manifolds of dimension greater than 4, there is a finite number of "smooth types", i.e. equivalence classes of pairwise smoothly diffeomorphic smooth structures. In the case of R"n" with "n" ≠ 4, the number of these types is one, whereas for "n" = 4, there are uncountably many such types. One refers to these by exotic R4.
Differential structures on spheres of dimension 1 to 20.
The following table lists the number of smooth types of the topological "m"−sphere "S""m" for the values of the dimension "m" from 1 up to 20. Spheres with a smooth, i.e. "C"∞−differential structure not smoothly diffeomorphic to the usual one are known as exotic spheres.
It is not currently known how many smooth types the topological 4-sphere "S"4 has, except that there is at least one. There may be one, a finite number, or an infinite number. The claim that there is just one is known as the "smooth" Poincaré conjecture (see "Generalized Poincaré conjecture"). Most mathematicians believe that this conjecture is false, i.e. that "S"4 has more than one smooth type. The problem is connected with the existence of more than one smooth type of the topological 4-disk (or 4-ball).
Differential structures on topological manifolds.
As mentioned above, in dimensions smaller than 4, there is only one differential structure for each topological manifold. That was proved by Tibor Radó for dimensions 1 and 2, and by Edwin E. Moise in dimension 3. By using obstruction theory, Robion Kirby and Laurent C. Siebenmann were able to show that the number of PL structures for compact topological manifolds of dimension greater than 4 is finite. John Milnor, Michel Kervaire, and Morris Hirsch proved that the number of smooth structures on a compact PL manifold is finite and agrees with the number of differential structures on the sphere for the same dimension (see the book by Asselmeyer-Maluga and Brans, chapter 7). By combining these results, the number of smooth structures on a compact topological manifold of dimension not equal to 4 is finite.
Dimension 4 is more complicated. For compact manifolds, results depend on the complexity of the manifold as measured by the second Betti number "b"2. For large Betti numbers "b"2 > 18 in a simply connected 4-manifold, one can use a surgery along a knot or link to produce a new differential structure. With the help of this procedure one can produce countably infinitely many differential structures. But even for simple spaces such as formula_12, no construction of other differential structures is known. For non-compact 4-manifolds there are many examples like formula_13 having uncountably many differential structures.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{R}^{n}"
},
{
"math_id": 1,
"text": "\\varphi_{i}:M\\supset W_{i}\\rightarrow U_{i}\\subset\\mathbb{R}^{n}"
},
{
"math_id": 2,
"text": "\\varphi_{i}:W_{i}\\rightarrow U_{i},"
},
{
"math_id": 3,
"text": "\\varphi_{j}:W_{j}\\rightarrow U_{j}."
},
{
"math_id": 4,
"text": "W_{ij}=W_{i}\\cap W_{j}"
},
{
"math_id": 5,
"text": "U_{ij}=\\varphi_{i}\\left(W_{ij}\\right),"
},
{
"math_id": 6,
"text": "U_{ji}=\\varphi_{j}\\left(W_{ij}\\right)."
},
{
"math_id": 7,
"text": "\\varphi_{ij}:U_{ij}\\rightarrow U_{ji}"
},
{
"math_id": 8,
"text": "\\varphi_{ij}(x)=\\varphi_{j}\\left(\\varphi_{i}^{-1}\\left(x\\right)\\right)."
},
{
"math_id": 9,
"text": "\\varphi_{i},\\,\\varphi_{j}"
},
{
"math_id": 10,
"text": "U_{ij},\\, U_{ji}"
},
{
"math_id": 11,
"text": "\\varphi_{ij},\\,\\varphi_{ji}"
},
{
"math_id": 12,
"text": "S^4, {\\mathbb C}P^2,..."
},
{
"math_id": 13,
"text": "{\\mathbb R}^4,S^3\\times {\\mathbb R},M^4\\smallsetminus\\{*\\},..."
}
]
| https://en.wikipedia.org/wiki?curid=725272 |
72529648 | Atkinson dithering | Image dithering algorithm by Bill Atkinson
Atkinson dithering is a variant of Floyd-Steinberg dithering designed by Bill Atkinson at Apple Computer, and used in the original Macintosh computer.
Implementation.
The algorithm achieves dithering using error diffusion, meaning it pushes (adds) the residual quantization error of a pixel onto its neighboring pixels, to be dealt with later. It spreads the debt out according to the distribution (shown as a map of the neighboring pixels):
formula_0
The pixel indicated with a star (*) indicates the pixel currently being scanned, and the blank pixels are the previously scanned pixels.
The algorithm scans the image from left to right, top to bottom, quantizing pixel values one by one. Each time the quantization error is transferred to the neighboring pixels, while not affecting the pixels that already have been quantized. Hence, if a number of pixels have been rounded downwards, it becomes more likely that the next pixel is rounded upwards, such that on average, the quantization error is reduced.
Unlike Floyd-Steinberg dithering, only 3/4 of the error is diffused outward. This leads to a more localized dither, at the cost of lower performance on near-white and near-black areas, but the increase in contrast on those areas may be regarded as more visually desirable for some purposes.
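A minimal NumPy sketch of the procedure described above is given below; the function name, the threshold of 128 and the test image are illustrative choices only. It scans left to right, top to bottom, and pushes one eighth of each pixel's quantization error to the six neighbours shown in the map.

```python
# Minimal Atkinson dithering of a greyscale image with values in 0..255.
import numpy as np

# Offsets (dy, dx) that each receive 1/8 of the quantization error,
# matching the distribution map shown earlier.
NEIGHBOURS = [(0, 1), (0, 2), (1, -1), (1, 0), (1, 1), (2, 0)]

def atkinson_dither(image, threshold=128):
    """Dither a greyscale image to pure black and white."""
    img = image.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for row in range(h):
        for col in range(w):
            old = img[row, col]
            new = 255 if old >= threshold else 0
            out[row, col] = new
            err = (old - new) / 8.0          # 2/8 of the error is simply dropped
            for dy, dx in NEIGHBOURS:
                r, c = row + dy, col + dx
                if 0 <= r < h and 0 <= c < w:
                    img[r, c] += err
            # pixels already quantized are never revisited, as described above
    return out

ramp = np.tile(np.linspace(0, 255, 64), (32, 1))     # horizontal grey ramp
print(np.unique(atkinson_dither(ramp)))              # [  0 255]
```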
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{bmatrix}\n & & * & \\frac{\\displaystyle 1}{\\displaystyle 8} & \\frac{\\displaystyle 1}{\\displaystyle 8} \\\\\n \\ldots & \\frac{\\displaystyle 1}{\\displaystyle 8} & \\frac{\\displaystyle 1}{\\displaystyle 8} & \\frac{\\displaystyle 1}{\\displaystyle 8} & \\ldots \\\\\n \\ldots & & \\frac{\\displaystyle 1}{\\displaystyle 8} & & \\ldots \\\\\n\\end{bmatrix}"
}
]
| https://en.wikipedia.org/wiki?curid=72529648 |
72529865 | Moreau envelope | The Moreau envelope (or the Moreau-Yosida regularization) formula_0 of a proper lower semi-continuous convex function formula_1 is a smoothed version of formula_1. It was proposed by Jean-Jacques Moreau in 1965.
The Moreau envelope has important applications in mathematical optimization: minimizing over formula_0 and minimizing over formula_1 are equivalent problems in the sense that the sets of minimizers of formula_1 and formula_0 are the same. However, first-order optimization algorithms can be directly applied to formula_0, since formula_1 may be non-differentiable while formula_0 is always continuously differentiable. Indeed, many proximal gradient methods can be interpreted as a gradient descent method over formula_0.
Definition.
The Moreau envelope of a proper lower semi-continuous convex function formula_1 from a Hilbert space formula_2 to formula_3 is defined as
formula_4
Given a parameter formula_5, the Moreau envelope of formula_6 is also called the Moreau envelope of formula_1 with parameter formula_7.
Properties.
The gradient of the Moreau envelope is given by formula_9. By defining the sequence formula_10 and using the above identity, we can interpret the proximal operator as a gradient descent algorithm over the Moreau envelope.
By Fenchel duality, the Moreau envelope can equivalently be written as formula_11 where formula_12 denotes the convex conjugate of formula_1.
Since the subdifferential of a proper, convex, lower semicontinuous function on a Hilbert space is inverse to the subdifferential of its convex conjugate, we can conclude that if formula_13 is the maximizer of the above expression, then formula_14 is the minimizer in the primal formulation and vice versa.
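For a one-dimensional illustration (NumPy assumed), take formula_1 to be the absolute value function: its proximal operator is soft-thresholding, its Moreau envelope is the Huber function, and the gradient identity above can be checked against finite differences. The function names and the value of the parameter are illustrative choices.

```python
# Moreau envelope of f(x) = |x| and a numerical check of the gradient identity.
import numpy as np

lam = 0.5

def prox_abs(v, lam):
    """Proximal operator of lam * |.| (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def moreau_abs(v, lam):
    """M_{lam f}(v) = f(p) + ||p - v||^2 / (2 lam) with p = prox(v)."""
    p = prox_abs(v, lam)
    return np.abs(p) + (p - v) ** 2 / (2 * lam)

v = np.linspace(-3.0, 3.0, 2001)
M = moreau_abs(v, lam)                      # smooth even though |x| is not

grad_numeric = np.gradient(M, v)
grad_formula = (v - prox_abs(v, lam)) / lam
print(np.max(np.abs(grad_numeric - grad_formula)))   # small (finite-difference error only)
```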
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M_f"
},
{
"math_id": 1,
"text": "f"
},
{
"math_id": 2,
"text": "\\mathcal{X}"
},
{
"math_id": 3,
"text": "(-\\infty,+\\infty]"
},
{
"math_id": 4,
"text": "M_{\\lambda f}(v) = \\inf_{x\\in\\mathcal{X}} \\left(f(x) + \\frac{1}{2\\lambda} \\|x - v\\|_2^2\\right)."
},
{
"math_id": 5,
"text": "\\lambda \\in \\mathbb{R}"
},
{
"math_id": 6,
"text": "\\lambda f"
},
{
"math_id": 7,
"text": "\\lambda"
},
{
"math_id": 8,
"text": "(1/2)\\| \\cdot \\|^2_2"
},
{
"math_id": 9,
"text": "\\nabla M_{\\lambda f}(x) = \\frac{1}{\\lambda} (x - \\mathrm{prox}_{\\lambda f}(x))"
},
{
"math_id": 10,
"text": "x_{k+1} = \\mathrm{prox}_{\\lambda f}(x_k)"
},
{
"math_id": 11,
"text": "M_{\\lambda f}(v) = \\max_{p \\in \\mathcal X} \\left( \\langle p, v \\rangle - \\frac{\\lambda}{2} \\| p \\|^2 - f^*(p)\\right),"
},
{
"math_id": 12,
"text": "f^*"
},
{
"math_id": 13,
"text": "p_0 \\in \\mathcal X"
},
{
"math_id": 14,
"text": "x_0 := v - \\lambda p_0"
}
]
| https://en.wikipedia.org/wiki?curid=72529865 |
725331 | Borel–Weil–Bott theorem | Basic result in the representation theory of Lie groups
In mathematics, the Borel–Weil–Bott theorem is a basic result in the representation theory of Lie groups, showing how a family of representations can be obtained from holomorphic sections of certain complex vector bundles, and, more generally, from higher sheaf cohomology groups associated to such bundles. It is built on the earlier Borel–Weil theorem of Armand Borel and André Weil, dealing just with the space of sections (the zeroth cohomology group), the extension to higher cohomology groups being provided by Raoul Bott. One can equivalently, through Serre's GAGA, view this as a result in complex algebraic geometry in the Zariski topology.
Formulation.
Let G be a semisimple Lie group or algebraic group over formula_0, and fix a maximal torus T along with a Borel subgroup B which contains T. Let λ be an integral weight of T; λ defines in a natural way a one-dimensional representation "C""λ" of B, by pulling back the representation on "T" = "B"/"U", where U is the unipotent radical of B. Since we can think of the projection map "G" → "G"/"B" as a principal B-bundle, for each "C""λ" we get an associated fiber bundle "L"−λ on "G"/"B" (note the sign), which is obviously a line bundle. Identifying "L""λ" with its sheaf of holomorphic sections, we consider the sheaf cohomology groups formula_1. Since G acts on the total space of the bundle formula_2 by bundle automorphisms, this action naturally gives a G-module structure on these groups; and the Borel–Weil–Bott theorem gives an explicit description of these groups as G-modules.
We first need to describe the Weyl group action centered at formula_3. For any integral weight λ and w in the Weyl group W, we set formula_4, where ρ denotes the half-sum of positive roots of G. It is straightforward to check that this defines a group action, although this action is "not" linear, unlike the usual Weyl group action. Also, a weight μ is said to be "dominant" if formula_5 for all simple roots α. Let ℓ denote the length function on W.
Given an integral weight λ, one of two cases occurs: either (1) there is a non-identity formula_6 with formula_8, or (2) there is a unique formula_6 such that formula_9 is dominant.
The theorem states that in the first case, we have
formula_10 for all i;
and in the second case, we have
formula_10 for all formula_11, while
formula_12 is the dual of the irreducible highest-weight representation of G with highest weight formula_13.
It is worth noting that case (1) above occurs if and only if formula_14 for some positive root β. Also, we obtain the classical Borel–Weil theorem as a special case of this theorem by taking λ to be dominant and w to be the identity element formula_15.
Example.
For example, consider "G" = SL2(C), for which "G"/"B" is the Riemann sphere, an integral weight is specified simply by an integer n, and "ρ" = 1. The line bundle "L""n" is formula_16, whose sections are the homogeneous polynomials of degree n (i.e. the "binary forms"). As a representation of G, the sections can be written as Sym"n"(C2)*, and is canonically isomorphic to Sym"n"(C2).
This gives us at a stroke the representation theory of formula_17: formula_18 is the standard representation, and formula_19 is its nth symmetric power. We even have a unified description of the action of the Lie algebra, derived from its realization as vector fields on the Riemann sphere: if H, X, Y are the standard generators of formula_17, then
formula_20
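These formulas can be verified symbolically. The short SymPy sketch below (the degree n = 4 is an arbitrary choice) checks that the monomials are weight vectors of the stated weights and that the three operators satisfy the commutation relations of formula_17.

```python
# Symbolic check that H, X, Y act as claimed on binary forms of degree n.
import sympy as sp

x, y = sp.symbols('x y')
H = lambda f: x * sp.diff(f, x) - y * sp.diff(f, y)
X = lambda f: x * sp.diff(f, y)
Y = lambda f: y * sp.diff(f, x)

n = 4
for i in range(n + 1):
    m = x**i * y**(n - i)
    assert sp.expand(H(m) - (2 * i - n) * m) == 0          # weight 2i - n

a = sp.symbols('a0:5')                                      # a generic degree-4 form
f = sum(a[i] * x**i * y**(n - i) for i in range(n + 1))
assert sp.expand(H(X(f)) - X(H(f)) - 2 * X(f)) == 0         # [H, X] = 2X
assert sp.expand(H(Y(f)) - Y(H(f)) + 2 * Y(f)) == 0         # [H, Y] = -2Y
assert sp.expand(X(Y(f)) - Y(X(f)) - H(f)) == 0             # [X, Y] = H
print("sl2 relations hold on binary forms of degree", n)
```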
Positive characteristic.
One also has a weaker form of this theorem in positive characteristic. Namely, let G be a semisimple algebraic group over an algebraically closed field of characteristic formula_21. Then it remains true that formula_10 for all i if λ is a weight such that formula_7 is non-dominant for all formula_6 as long as λ is "close to zero". This is known as the Kempf vanishing theorem. However, the other statements of the theorem do not remain valid in this setting.
More explicitly, let λ be a dominant integral weight; then it is still true that formula_10 for all formula_22, but it is no longer true that this G-module is simple in general, although it does contain the unique highest weight module of highest weight λ as a G-submodule. If λ is an arbitrary integral weight, it is in fact a large unsolved problem in representation theory to describe the cohomology modules formula_1 in general. Unlike over formula_23, Mumford gave an example showing that it need not be the case for a fixed λ that these modules are all zero except in a single degree i.
Borel–Weil theorem.
The Borel–Weil theorem provides a concrete model for irreducible representations of compact Lie groups and irreducible holomorphic representations of complex semisimple Lie groups. These representations are realized in the spaces of global sections of holomorphic line bundles on the flag manifold of the group. The Borel–Weil–Bott theorem is its generalization to higher cohomology spaces. The theorem dates back to the early 1950s and can be found in and .
Statement of the theorem.
The theorem can be stated either for a complex semisimple Lie group "G" or for its compact form "K". Let "G" be a connected complex semisimple Lie group, "B" a Borel subgroup of "G", and "X" = "G"/"B" the flag variety. In this scenario, "X" is a complex manifold and a nonsingular algebraic variety. The flag variety can also be described as a compact homogeneous space "K"/"T", where "T" = "K" ∩ "B" is a (compact) Cartan subgroup of "K". An integral weight "λ" determines a holomorphic line bundle "L""λ" on "X" and the group "G" acts on its space of global sections,
formula_24
The Borel–Weil theorem states that if "λ" is a "dominant" integral weight then this representation is a "holomorphic" irreducible highest weight representation of "G" with highest weight "λ". Its restriction to "K" is an irreducible unitary representation of "K" with highest weight "λ", and each irreducible unitary representation of "K" is obtained in this way for a unique value of "λ". (A holomorphic representation of a complex Lie group is one for which the corresponding Lie algebra representation is "complex" linear.)
Concrete description.
The weight "λ" gives rise to a character (one-dimensional representation) of the Borel subgroup "B", which is denoted "χ""λ". Holomorphic sections of the holomorphic line bundle "L""λ" over "G"/"B" may be described more concretely as holomorphic maps
formula_25
for all "g" ∈ "G" and "b" ∈ "B".
The action of "G" on these sections is given by
formula_26
for "g", "h" ∈ "G".
Example.
Let "G" be the complex special linear group SL(2, C), with a Borel subgroup consisting of upper triangular matrices with determinant one. Integral weights for "G" may be identified with integers, with dominant weights corresponding to nonnegative integers, and the corresponding characters "χ""n" of "B" have the form
formula_27
The flag variety "G"/"B" may be identified with the complex projective line CP1 with homogeneous coordinates "X", "Y" and the space of the global sections of the line bundle "L""n" is identified with the space of homogeneous polynomials of degree "n" on C"2". For "n" ≥ 0, this space has dimension "n" + 1 and forms an irreducible representation under the standard action of "G" on the polynomial algebra C["X", "Y"]. Weight vectors are given by monomials
formula_28
of weights 2"i" − "n", and the highest weight vector "X""n" has weight "n".
Further reading.
"This article incorporates material from Borel–Bott–Weil theorem on PlanetMath, which is licensed under the ." | [
{
"math_id": 0,
"text": "\\mathbb C"
},
{
"math_id": 1,
"text": "H^i( G/B, \\, L_\\lambda )"
},
{
"math_id": 2,
"text": "L_\\lambda"
},
{
"math_id": 3,
"text": " - \\rho "
},
{
"math_id": 4,
"text": "w*\\lambda := w( \\lambda + \\rho ) - \\rho \\,"
},
{
"math_id": 5,
"text": "\\mu( \\alpha^\\vee ) \\geq 0"
},
{
"math_id": 6,
"text": "w \\in W"
},
{
"math_id": 7,
"text": "w*\\lambda"
},
{
"math_id": 8,
"text": "w * \\lambda = \\lambda"
},
{
"math_id": 9,
"text": "w * \\lambda"
},
{
"math_id": 10,
"text": "H^i( G/B, \\, L_\\lambda ) = 0"
},
{
"math_id": 11,
"text": "i \\neq \\ell(w)"
},
{
"math_id": 12,
"text": "H^{ \\ell(w) }( G/B, \\, L_\\lambda )"
},
{
"math_id": 13,
"text": " w * \\lambda"
},
{
"math_id": 14,
"text": "(\\lambda+\\rho)( \\beta^\\vee ) = 0"
},
{
"math_id": 15,
"text": "e \\in W"
},
{
"math_id": 16,
"text": "{\\mathcal O}(n)"
},
{
"math_id": 17,
"text": "\\mathfrak{sl}_2(\\mathbf{C})"
},
{
"math_id": 18,
"text": "\\Gamma({\\mathcal O}(1))"
},
{
"math_id": 19,
"text": "\\Gamma({\\mathcal O}(n))"
},
{
"math_id": 20,
"text": "\n\\begin{align}\nH & = x\\frac{\\partial}{\\partial x}-y\\frac{\\partial}{\\partial y}, \\\\[5pt]\nX & = x\\frac{\\partial}{\\partial y}, \\\\[5pt]\nY & = y\\frac{\\partial}{\\partial x}.\n\\end{align}\n"
},
{
"math_id": 21,
"text": "p > 0"
},
{
"math_id": 22,
"text": "i > 0"
},
{
"math_id": 23,
"text": "\\mathbb{C}"
},
{
"math_id": 24,
"text": "\\Gamma(G/B,L_\\lambda).\\ "
},
{
"math_id": 25,
"text": " f: G\\to \\mathbb{C}_{\\lambda}: f(gb)=\\chi_{\\lambda}(b^{-1})f(g)"
},
{
"math_id": 26,
"text": "g\\cdot f(h)=f(g^{-1}h)"
},
{
"math_id": 27,
"text": " \\chi_n\n\\begin{pmatrix}\na & b\\\\\n0 & a^{-1}\n\\end{pmatrix}=a^n.\n"
},
{
"math_id": 28,
"text": " X^i Y^{n-i}, \\quad 0\\leq i\\leq n "
}
]
| https://en.wikipedia.org/wiki?curid=725331 |
72536 | Thermal conduction | Process by which heat is transferred within an object
Thermal conduction is the diffusion of thermal energy (heat) within one material or between materials in contact. The higher temperature object has molecules with more kinetic energy; collisions between molecules distribute this kinetic energy until an object has the same kinetic energy throughout. Thermal conductivity, frequently represented by k, is a property that relates the rate of heat flow per unit area through a material to the temperature gradient across it. Essentially, it is a value that accounts for any property of the material that could change the way it conducts heat. Heat spontaneously flows along a temperature gradient (i.e. from a hotter body to a colder body). For example, heat is conducted from the hotplate of an electric stove to the bottom of a saucepan in contact with it. In the absence of an opposing external driving energy source, within a body or between bodies, temperature differences decay over time, and thermal equilibrium is approached, temperature becoming more uniform.
Every process involving heat transfer takes place by only three methods: conduction, convection, and radiation.
Overview.
A region with greater thermal energy (heat) corresponds with greater molecular agitation. Thus when a hot object touches a cooler surface, the highly agitated molecules from the hot object bump the calm molecules of the cooler surface, transferring the microscopic kinetic energy and causing the colder part or object to heat up. Mathematically, thermal conduction works just like diffusion: the conducted power grows as the temperature difference or the cross-sectional area increases, and shrinks as the length of the conduction path increases. Thermal conduction (power) = κAΔT/ℓ, where κ is the material's thermal conductivity, A is the cross-sectional area through which the heat flows, ΔT is the temperature difference, and ℓ is the distance between the hot and cold ends.
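As a worked example of this relation (the numbers, including the conductivity value, are merely illustrative):

```python
# Steady conduction through a 10 mm thick, 0.5 m^2 glass pane separating
# 20 degC from 0 degC air; k ~ 1.0 W/(m K) is an illustrative figure for glass.

def conduction_power(k, area, delta_T, thickness):
    """Steady-state conducted power P = k * A * dT / L, in watts."""
    return k * area * delta_T / thickness

print(conduction_power(k=1.0, area=0.5, delta_T=20.0, thickness=0.010))  # 1000.0 W
```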
Conduction is the main mode of heat transfer for solid materials because the strong inter-molecular forces allow the vibrations of particles to be easily transmitted, in comparison to liquids and gases. Liquids have weaker inter-molecular forces and more space between the particles, which makes the vibrations of particles harder to transmit. Gases have even more space, and therefore infrequent particle collisions. This makes liquids and gases poor conductors of heat.
Thermal contact conductance is the study of heat conduction between solid bodies in contact. A temperature drop is often observed at the interface between the two surfaces in contact. This phenomenon is said to be a result of a thermal contact resistance existing between the contacting surfaces. Interfacial thermal resistance is a measure of an interface's resistance to thermal flow. This thermal resistance differs from contact resistance, as it exists even at atomically perfect interfaces. Understanding the thermal resistance at the interface between two materials is of primary significance in the study of its thermal properties. Interfaces often contribute significantly to the observed properties of the materials.
The inter-molecular transfer of energy could be primarily by elastic impact, as in fluids, or by free-electron diffusion, as in metals, or phonon vibration, as in insulators. In insulators, the heat flux is carried almost entirely by phonon vibrations.
Metals (e.g., copper, platinum, gold, etc.) are usually good conductors of thermal energy. This is due to the way that metals bond chemically: metallic bonds (as opposed to covalent or ionic bonds) have free-moving electrons that transfer thermal energy rapidly through the metal. The "electron fluid" of a conductive metallic solid conducts most of the heat flux through the solid. Phonon flux is still present but carries less of the energy. Electrons also conduct electric current through conductive solids, and the thermal and electrical conductivities of most metals have about the same ratio. A good electrical conductor, such as copper, also conducts heat well. Thermoelectricity is caused by the interaction of heat flux and electric current. Heat conduction within a solid is directly analogous to diffusion of particles within a fluid, in the situation where there are no fluid currents.
In gases, heat transfer occurs through collisions of gas molecules with one another. In the absence of convection, which relates to a moving fluid or gas phase, thermal conduction through a gas phase is highly dependent on the composition and pressure of this phase, and in particular, the mean free path of gas molecules relative to the size of the gas gap, as given by the Knudsen number formula_0.
To quantify the ease with which a particular medium conducts, engineers employ the thermal conductivity, also known as the conductivity constant or conduction coefficient, "k". In thermal conductivity, "k" is defined as "the quantity of heat, "Q", transmitted in time ("t") through a thickness ("L"), in a direction normal to a surface of area ("A"), due to a temperature difference (Δ"T") [...]". Thermal conductivity is a material "property" that is primarily dependent on the medium's phase, temperature, density, and molecular bonding. Thermal effusivity is a quantity derived from conductivity, which is a measure of its ability to exchange thermal energy with its surroundings.
Steady-state conduction.
Steady-state conduction is the form of conduction that happens when the temperature difference(s) driving the conduction are constant, so that (after an equilibration time), the spatial distribution of temperatures (temperature field) in the conducting object does not change any further. Thus, all partial derivatives of temperature with respect to space may either be zero or have nonzero values, but all derivatives of temperature at any point with respect to time are uniformly zero. In steady-state conduction, the amount of heat entering any region of an object is equal to the amount of heat coming out (if this were not so, the temperature would be rising or falling, as thermal energy was tapped or trapped in a region).
For example, a bar may be cold at one end and hot at the other, but after a state of steady-state conduction is reached, the spatial gradient of temperatures along the bar does not change any further, as time proceeds. Instead, the temperature remains constant at any given cross-section of the rod normal to the direction of heat transfer, and this temperature varies linearly in space in the case where there is no heat generation in the rod.
In steady-state conduction, all the laws of direct current electrical conduction can be applied to "heat currents". In such cases, it is possible to take "thermal resistances" as the analog to electrical resistances. In such cases, temperature plays the role of voltage, and heat transferred per unit time (heat power) is the analog of electric current. Steady-state systems can be modeled by networks of such thermal resistances in series and parallel, in exact analogy to electrical networks of resistors. See purely resistive thermal circuits for an example of such a network.
Transient conduction.
During any period in which the temperature changes in time at any place within an object, the mode of thermal energy flow is termed "transient conduction." Another term is "non-steady-state" conduction, referring to the time-dependence of temperature fields in an object. Non-steady-state situations appear after an imposed change in temperature at a boundary of an object. They may also occur with temperature changes inside an object, as a result of a new source or sink of heat suddenly introduced within an object, causing temperatures near the source or sink to change in time.
When a new perturbation of temperature of this type happens, temperatures within the system change in time toward a new equilibrium with the new conditions, provided that these do not change. After equilibrium, heat flow into the system once again equals the heat flow out, and temperatures at each point inside the system no longer change. Once this happens, transient conduction is ended, although steady-state conduction may continue if heat flow continues.
If changes in the external temperature or in internal heat generation are too rapid for the equilibrium of temperatures in space to be established, then the system never reaches a state of unchanging temperature distribution in time, and the system remains in a transient state.
An example of a new source of heat "turning on" within an object, causing transient conduction, is an engine starting in an automobile. In this case, the transient thermal conduction phase for the entire machine is over, and the steady-state phase appears, as soon as the engine reaches steady-state operating temperature. In this state of steady-state equilibrium, temperatures vary greatly from the engine cylinders to other parts of the automobile, but at no point in space within the automobile does temperature increase or decrease. After establishing this state, the transient conduction phase of heat transfer is over.
New external conditions also cause this process: for example, the copper bar in the example steady-state conduction experiences transient conduction as soon as one end is subjected to a different temperature from the other. Over time, the field of temperatures inside the bar reaches a new steady-state, in which a constant temperature gradient along the bar is finally set up, and this gradient then stays constant in time. Typically, such a new steady-state gradient is approached exponentially with time after a new temperature-or-heat source or sink, has been introduced. When a "transient conduction" phase is over, heat flow may continue at high power, so long as temperatures do not change.
An example of transient conduction that does not end with steady-state conduction, but rather no conduction, occurs when a hot copper ball is dropped into oil at a low temperature. Here, the temperature field within the object begins to change as a function of time, as the heat is removed from the metal, and the interest lies in analyzing this spatial change of temperature within the object over time until all gradients disappear entirely (the ball has reached the same temperature as the oil). Mathematically, this condition is also approached exponentially; in theory, it takes infinite time, but in practice, it is over, for all intents and purposes, in a much shorter period. At the end of this process with no heat sink but the internal parts of the ball (which are finite), there is no steady-state heat conduction to reach. Such a state never occurs in this situation, but rather the end of the process is when there is no heat conduction at all.
The analysis of non-steady-state conduction systems is more complex than that of steady-state systems. If the conducting body has a simple shape, then exact analytical mathematical expressions and solutions may be possible (see heat equation for the analytical approach). However, for complicated shapes with varying thermal conductivities within the shape (i.e., most complex objects, mechanisms or machines in engineering), the application of approximate theories and/or numerical analysis by computer is often required. One popular graphical method involves the use of Heisler charts.
Occasionally, transient conduction problems may be considerably simplified if regions of the object being heated or cooled can be identified for which thermal conductivity is very much greater than that for heat paths leading into the region. In this case, the region with high conductivity can often be treated in the lumped capacitance model, as a "lump" of material with a simple thermal capacitance consisting of its aggregate heat capacity. Such regions warm or cool, but show no significant temperature "variation" across their extent during the process (as compared to the rest of the system). This is due to their far higher conductance. During transient conduction, therefore, the temperature across such conductive regions changes uniformly in space, and as a simple exponential in time. Examples of such systems are those that follow Newton's law of cooling during transient cooling (or the reverse during heating). The equivalent thermal circuit consists of a simple capacitor in series with a resistor. In such cases, the remainder of the system, with a high thermal resistance (comparatively low conductivity), plays the role of the resistor in the circuit.
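As an illustration, the following is a minimal sketch (in Python) of the lumped capacitance model just described: a single exponential decay of the lump's temperature toward the surroundings, governed by the time constant ρCpV/(hA). All numerical values (a small copper sphere cooling in air, and the heat transfer coefficient) are hypothetical, chosen only so the example runs.

```python
import math

def lumped_capacitance_temperature(t, T_initial, T_ambient, h, A, rho, cp, V):
    """Temperature of a 'lump' at time t under Newton's law of cooling.

    T(t) = T_ambient + (T_initial - T_ambient) * exp(-h*A*t / (rho*cp*V))
    Valid when the Biot number is small (roughly below 0.1), so internal
    temperature gradients are negligible.
    """
    tau = rho * cp * V / (h * A)          # thermal time constant, in seconds
    return T_ambient + (T_initial - T_ambient) * math.exp(-t / tau)

# Illustrative (hypothetical) values: a small copper sphere cooling in air.
r = 0.01                                   # radius, m
V = 4.0 / 3.0 * math.pi * r**3             # volume, m^3
A = 4.0 * math.pi * r**2                   # surface area, m^2
rho, cp, h = 8960.0, 385.0, 20.0           # kg/m^3, J/(kg K), W/(m^2 K)

for t in (0, 60, 300, 900):
    T = lumped_capacitance_temperature(t, T_initial=100.0, T_ambient=25.0,
                                       h=h, A=A, rho=rho, cp=cp, V=V)
    print(f"t = {t:4d} s  ->  T = {T:6.2f} °C")
```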
Relativistic conduction.
The theory of relativistic heat conduction is a model that is compatible with the theory of special relativity. For most of the last century, it was recognized that the Fourier equation is in contradiction with the theory of relativity because it admits an infinite speed of propagation of heat signals. For example, according to the Fourier equation, a pulse of heat at the origin would be felt at infinity instantaneously. The speed of information propagation is faster than the speed of light in vacuum, which is physically inadmissible within the framework of relativity.
Quantum conduction.
Second sound is a quantum mechanical phenomenon in which heat transfer occurs by wave-like motion, rather than by the more usual mechanism of diffusion. Heat takes the place of pressure in normal sound waves. This leads to a very high thermal conductivity. It is known as "second sound" because the wave motion of heat is similar to the propagation of sound in air; this wave-like mode of heat transfer is sometimes referred to as quantum conduction.
Fourier's law.
The law of heat conduction, also known as Fourier's law (compare Fourier's heat equation), states that the rate of heat transfer through a material is proportional to the negative gradient in the temperature and to the area, at right angles to that gradient, through which the heat flows. We can state this law in two equivalent forms: the integral form, in which we look at the amount of energy flowing into or out of a body as a whole, and the differential form, in which we look at the flow rates or fluxes of energy locally.
Newton's law of cooling is a discrete analogue of Fourier's law, while Ohm's law is the electrical analogue of Fourier's law and Fick's laws of diffusion are its chemical analogue.
Differential form.
The differential form of Fourier's law of thermal conduction shows that the local heat flux density formula_1 is equal to the product of thermal conductivity formula_2 and the negative local temperature gradient formula_3. The heat flux density is the amount of energy that flows through a unit area per unit time.
formula_4
where (including the SI units)
The thermal conductivity formula_2 is often treated as a constant, though this is not always true. While the thermal conductivity of a material generally varies with temperature, the variation can be small over a significant range of temperatures for some common materials. In anisotropic materials, the thermal conductivity typically varies with orientation; in this case formula_2 is represented by a second-order tensor. In non-uniform materials, formula_2 varies with spatial location.
For many simple applications, Fourier's law is used in its one-dimensional form, for example, in the "x" direction:
formula_6
In an isotropic medium, Fourier's law leads to the heat equation
formula_7
with a fundamental solution famously known as the heat kernel.
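The following is a minimal sketch, not a production solver, of the one-dimensional heat equation using an explicit finite-difference scheme. The bar length, grid, diffusivity and boundary temperatures are illustrative assumptions; the time step is chosen to respect the usual explicit-scheme stability condition αΔt/Δx² ≤ 1/2.

```python
import numpy as np

# Explicit finite-difference sketch of the 1-D heat equation
#   dT/dt = alpha * d^2T/dx^2
# on a bar with fixed end temperatures (Dirichlet boundaries).

alpha = 1.11e-4          # thermal diffusivity, m^2/s (roughly that of copper)
L, nx = 0.5, 51          # bar length (m) and number of grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha # time step satisfying the stability limit (< 0.5)

T = np.full(nx, 20.0)    # initial uniform temperature, °C
T[0], T[-1] = 100.0, 0.0 # imposed boundary temperatures

for _ in range(20000):   # march in time toward the steady linear profile
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])

print(T[::10])           # approaches a constant gradient along the bar
```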
Integral form.
By integrating the differential form over the material's total surface formula_8, we arrive at the integral form of Fourier's law:
formula_9 formula_10formula_11formula_12 formula_9 formula_13
where (including the SI units):
The above differential equation, when integrated for a homogeneous material of 1-D geometry between two endpoints at constant temperature, gives the heat flow rate as
formula_17
where
One can define the (macroscopic) thermal resistance of the 1-D homogeneous material:
formula_22
With a simple 1-D steady heat conduction equation which is analogous to Ohm's law for a simple electric resistance:
formula_23
This law forms the basis for the derivation of the heat equation.
Conductance.
Writing
formula_24
where U is the conductance, in W/(m2 K).
Fourier's law can also be stated as:
formula_25
The reciprocal of conductance is resistance, formula_26 is given by:
formula_27
Resistance is additive when several conducting layers lie between the hot and cool regions, because "A" and "Q" are the same for all layers. In a multilayer partition, the total conductance is related to the conductance of its layers by:
formula_28 or equivalently formula_29
So, when dealing with a multilayer partition, the following formula is usually used:
formula_30
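A minimal sketch of the multilayer formula above, treating the layers as thermal resistances in series. The wall construction and material conductivities are hypothetical values chosen only for illustration.

```python
def heat_flow_multilayer(area, dT, layers):
    """Steady heat flow rate (W) through layers in series.

    layers: list of (thickness_m, conductivity_W_per_mK) tuples.
    Q/t = A * dT / sum(dx_i / k_i), the series-resistance form of Fourier's law.
    """
    total_resistance = sum(dx / k for dx, k in layers)
    return area * dT / total_resistance

# Hypothetical wall: brick + insulation + plasterboard, 10 m^2, 20 K across it.
layers = [(0.10, 0.72), (0.05, 0.04), (0.012, 0.17)]
print(f"{heat_flow_multilayer(10.0, 20.0, layers):.1f} W")
```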
For heat conduction from one fluid to another through a barrier, it is sometimes important to consider the conductance of the thin film of fluid that remains stationary next to the barrier. This thin film of fluid is difficult to quantify because its characteristics depend upon complex conditions of turbulence and viscosity—but when dealing with thin high-conductance barriers it can sometimes be quite significant.
Intensive-property representation.
The previous conductance equations, written in terms of extensive properties, can be reformulated in terms of intensive properties. Ideally, the formulae for conductance should produce a quantity with dimensions independent of distance, like Ohm's law for electrical resistance, formula_31, and conductance, formula_32.
From the electrical formula: formula_33, where "ρ" is resistivity, "x" is length, and "A" is cross-sectional area, we have formula_34, where "G" is conductance, "k" is conductivity, "x" is length, and "A" is cross-sectional area.
For heat,
formula_35
where U is the conductance.
Fourier's law can also be stated as:
formula_36
analogous to Ohm's law, formula_37 or formula_38
The reciprocal of conductance is resistance, "R", given by:
formula_39
analogous to Ohm's law, formula_40
The rules for combining resistances and conductances (in series and parallel) are the same for both heat flow and electric current.
Cylindrical shells.
Conduction through cylindrical shells (e.g. pipes) can be calculated from the internal radius, formula_41, the external radius, formula_42, the length, formula_43, and the temperature difference between the inner and outer wall, formula_44.
The surface area of the cylinder is formula_45
When Fourier's equation is applied:
formula_46
and rearranged:
formula_47
then the rate of heat transfer is:
formula_48
the thermal resistance is:
formula_49
and formula_50, where formula_51. It is important to note that this is the log-mean radius.
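A short sketch of the cylindrical-shell results above. The pipe dimensions, insulation conductivity and wall temperatures are assumed values for illustration only.

```python
import math

def cylindrical_shell_heat_rate(k, length, r1, r2, T1, T2):
    """Radial conduction rate through a cylindrical shell (e.g. a pipe wall):
    Q_dot = 2*pi*k*l*(T1 - T2) / ln(r2/r1).
    """
    return 2.0 * math.pi * k * length * (T1 - T2) / math.log(r2 / r1)

def cylindrical_shell_resistance(k, length, r1, r2):
    """Thermal resistance of the shell: R = ln(r2/r1) / (2*pi*k*l)."""
    return math.log(r2 / r1) / (2.0 * math.pi * k * length)

# Hypothetical insulated pipe: insulation conductivity 0.05 W/(m K), 10 m long.
print(cylindrical_shell_heat_rate(k=0.05, length=10.0, r1=0.05, r2=0.10,
                                  T1=150.0, T2=30.0))
print(cylindrical_shell_resistance(k=0.05, length=10.0, r1=0.05, r2=0.10))
```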
Spherical.
The conduction through a spherical shell with internal radius, formula_41, and external radius, formula_42, can be calculated in a similar manner as for a cylindrical shell.
The surface area of the sphere is: formula_52
Solving in a similar manner as for a cylindrical shell (see above) produces:
formula_53
Transient thermal conduction.
Interface heat transfer.
The heat transfer at an interface is considered a transient heat flow. To analyze this problem, the Biot number is important for understanding how the system behaves. The Biot number is determined by:
formula_54
The heat transfer coefficient formula_55 is introduced in this formula and is measured in formula_56. If the system has a Biot number of less than 0.1, the material behaves according to Newtonian cooling, i.e. with a negligible temperature gradient within the body. If the Biot number is greater than 0.1, internal temperature gradients are significant and the temperature field is described by a series solution. The temperature profile in terms of time can be derived from the equation
formula_57
which becomes
formula_58
The heat transfer coefficient, "h", is measured in formula_59, and represents the transfer of heat at an interface between two materials. This value is different at every interface and is an important concept in understanding heat flow at an interface.
The series solution can be analyzed with a nomogram. A nomogram has a relative temperature as the "y" coordinate and the Fourier number, which is calculated by
formula_60
The Biot number increases as the Fourier number decreases. There are five steps to determine a temperature profile in terms of time.
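A brief sketch computing the Biot and Fourier numbers and applying the 0.1 threshold described above. The plate thickness, heat transfer coefficient and material properties are hypothetical values used only to make the example concrete.

```python
def biot_number(h, L, k):
    """Bi = h*L/k: ratio of surface convection to internal conduction."""
    return h * L / k

def fourier_number(alpha, t, L):
    """Fo = alpha*t / L^2: dimensionless time for transient conduction."""
    return alpha * t / L**2

# Hypothetical steel plate, characteristic length L = 0.01 m, quenched in oil.
h, k, alpha = 400.0, 45.0, 1.2e-5      # W/(m^2 K), W/(m K), m^2/s
Bi = biot_number(h, 0.01, k)
Fo = fourier_number(alpha, 10.0, 0.01)
regime = "lumped (Newtonian) cooling" if Bi < 0.1 else "series solution needed"
print(f"Bi = {Bi:.3f} -> {regime}")
print(f"Fo = {Fo:.1f}")
```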
Applications.
Splat cooling.
Splat cooling is a method for quenching small droplets of molten materials by rapid contact with a cold surface. The particles undergo a characteristic cooling process, with the heat profile at formula_62 for initial temperature as the maximum at formula_63 and formula_64 at formula_65 and formula_66, and the heat profile at formula_67 for formula_68 as the boundary conditions. Splat cooling rapidly ends in a steady state temperature, and is similar in form to the Gaussian diffusion equation. The temperature profile, with respect to the position and time of this type of cooling, varies with:
formula_69
Splat cooling is a fundamental concept that has been adapted for practical use in the form of thermal spraying. The thermal diffusivity coefficient, represented as formula_70, can be written as formula_71. This varies according to the material.
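A minimal sketch evaluating the splat-cooling temperature profile given above. The droplet thickness, initial temperature and thermal diffusivity are purely illustrative assumptions.

```python
import math

def splat_cooling_excess(x, t, T_i, dX, alpha):
    """Temperature excess T(x,t) - T_i for the profile quoted above:
    T(x,t) - T_i = T_i*dX / (2*sqrt(pi*alpha*t)) * exp(-x**2 / (4*alpha*t)).
    """
    return (T_i * dX / (2.0 * math.sqrt(math.pi * alpha * t))
            * math.exp(-x**2 / (4.0 * alpha * t)))

# Illustrative numbers only: a thin molten layer (dX = 50 µm) on a cold substrate.
for t in (1e-6, 1e-5, 1e-4):
    print(t, splat_cooling_excess(x=1e-4, t=t, T_i=1500.0, dX=5e-5, alpha=1e-5))
```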
Metal quenching.
Metal quenching is a transient heat transfer process in terms of the time temperature transformation (TTT). It is possible to manipulate the cooling process to adjust the phase of a suitable material. For example, appropriate quenching of steel can convert a desirable proportion of its content of austenite to martensite, creating a very hard and strong product. To achieve this, it is necessary to quench at the "nose" (or eutectic) of the TTT diagram. Since materials differ in their Biot numbers, the time it takes for the material to quench, or the Fourier number, varies in practice. In steel, the quenching temperature range is generally from 600 °C to 200 °C. To control the quenching time and to select suitable quenching media, it is necessary to determine the Fourier number from the desired quenching time, the relative temperature drop, and the relevant Biot number. Usually, the correct figures are read from a standard nomogram. By calculating the heat transfer coefficient from this Biot number, one can find a liquid medium suitable for the application.
Zeroth law of thermodynamics.
One statement of the so-called zeroth law of thermodynamics is directly focused on the idea of conduction of heat. Bailyn (1994) writes that "the zeroth law may be stated: All diathermal walls are equivalent".
A diathermal wall is a physical connection between two bodies that allows the passage of heat between them. Bailyn is referring to diathermal walls that exclusively connect two bodies, especially conductive walls.
This statement of the "zeroth law" belongs to an idealized theoretical discourse, and actual physical walls may have peculiarities that do not conform to its generality.
For example, the material of the wall must not undergo a phase transition, such as evaporation or fusion, at the temperature at which it must conduct heat. But when only thermal equilibrium is considered and time is not urgent, so that the conductivity of the material does not matter too much, one suitable heat conductor is as good as another. Conversely, another aspect of the zeroth law is that, subject again to suitable restrictions, a given diathermal wall is indifferent to the nature of the heat bath to which it is connected. For example, the glass bulb of a thermometer acts as a diathermal wall whether exposed to a gas or a liquid, provided that they do not corrode or melt it.
These differences are among the defining characteristics of heat transfer. In a sense, they are symmetries of heat transfer.
Instruments.
Thermal conductivity analyzer.
Thermal conduction property of any gas under standard conditions of pressure and temperature is a fixed quantity. This property of a known reference gas or known reference gas mixtures can, therefore, be used for certain sensory applications, such as the thermal conductivity analyzer.
The operation of this instrument is based in principle on a Wheatstone bridge containing four filaments whose resistances are matched. Whenever a certain gas is passed over such a network of filaments, their resistance changes because the thermal conductivity of the surrounding gas alters how the filaments dissipate heat, thereby changing the net voltage output from the Wheatstone bridge. This voltage output is then correlated with a database to identify the gas sample.
Gas sensor.
The principle of thermal conductivity of gases can also be used to measure the concentration of a gas in a binary mixture of gases.
Working: if the same gas is present around all the Wheatstone bridge filaments, then the same temperature is maintained in all the filaments and hence the same resistances are also maintained, resulting in a balanced Wheatstone bridge. However, if a dissimilar gas sample (or gas mixture) is passed over one set of two filaments and the reference gas over the other set of two filaments, then the Wheatstone bridge becomes unbalanced, and the resulting net voltage output of the circuit is correlated with a database to identify the constituents of the sample gas.
Using this technique, many unknown gas samples can be identified by comparing their thermal conductivity with that of a reference gas of known thermal conductivity. The most commonly used reference gas is nitrogen, as the thermal conductivities of most common gases (except hydrogen and helium) are similar to that of nitrogen.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "K_n"
},
{
"math_id": 1,
"text": "\\mathbf{q}"
},
{
"math_id": 2,
"text": "k"
},
{
"math_id": 3,
"text": "-\\nabla T"
},
{
"math_id": 4,
"text": "\\mathbf{q} = - k \\nabla T,"
},
{
"math_id": 5,
"text": "\\nabla T"
},
{
"math_id": 6,
"text": "q_x = - k \\frac{dT}{dx}."
},
{
"math_id": 7,
"text": "\\frac{\\partial T}{\\partial t} = \\alpha\\left(\\frac{\\partial^2T}{\\partial x^2} + \\frac{\\partial^2T}{\\partial y^2} + \\frac{\\partial^2T}{\\partial z^2}\\right)"
},
{
"math_id": 8,
"text": "S"
},
{
"math_id": 9,
"text": "\\scriptstyle S"
},
{
"math_id": 10,
"text": "\\mathbf q \\cdot \\mathrm{d}\\mathbf{S}"
},
{
"math_id": 11,
"text": "{}={}"
},
{
"math_id": 12,
"text": "-k"
},
{
"math_id": 13,
"text": "\\nabla T \\cdot \\mathrm{d}\\mathbf{S}"
},
{
"math_id": 14,
"text": "\\dot{Q}="
},
{
"math_id": 15,
"text": "Q"
},
{
"math_id": 16,
"text": "\\mathrm{d}\\mathbf{S}"
},
{
"math_id": 17,
"text": "Q = - k \\frac{A\\Delta t}{L} \\Delta T,"
},
{
"math_id": 18,
"text": "\\Delta t"
},
{
"math_id": 19,
"text": "A"
},
{
"math_id": 20,
"text": "\\Delta T"
},
{
"math_id": 21,
"text": "L"
},
{
"math_id": 22,
"text": "R = \\frac 1 k \\frac L A "
},
{
"math_id": 23,
"text": "\\Delta T= R \\, \\dot{Q} "
},
{
"math_id": 24,
"text": " U = \\frac{k}{\\Delta x}, "
},
{
"math_id": 25,
"text": " \\frac{\\Delta Q}{\\Delta t} = U A\\, (-\\Delta T)."
},
{
"math_id": 26,
"text": "\\big. R"
},
{
"math_id": 27,
"text": " R = \\frac{1}{U} = \\frac{\\Delta x}{k} = \\frac{A\\, (-\\Delta T)}{\\frac{\\Delta Q}{\\Delta t}}."
},
{
"math_id": 28,
"text": " R = R_1+ R_2 + R_3 + \\cdots"
},
{
"math_id": 29,
"text": "\\frac{1}{U} = \\frac{1}{U_1} + \\frac{1}{U_2} + \\frac{1}{U_3} + \\cdots"
},
{
"math_id": 30,
"text": " \\frac{\\Delta Q}{\\Delta t} = \\frac{A\\,(-\\Delta T)}{\\frac{\\Delta x_1}{k_1} + \\frac{\\Delta x_2}{k_2} + \\frac{\\Delta x_3}{k_3}+ \\cdots}."
},
{
"math_id": 31,
"text": "R = V/I\\,\\!"
},
{
"math_id": 32,
"text": " G = I/V \\,\\!"
},
{
"math_id": 33,
"text": "R = \\rho x / A "
},
{
"math_id": 34,
"text": "G = k A / x \\,\\!"
},
{
"math_id": 35,
"text": " U = \\frac{k A} {\\Delta x}, "
},
{
"math_id": 36,
"text": " \\dot{Q} = U \\, \\Delta T, "
},
{
"math_id": 37,
"text": " I = V/R "
},
{
"math_id": 38,
"text": " I = V G ."
},
{
"math_id": 39,
"text": " R = \\frac{\\Delta T}{\\dot{Q}}, "
},
{
"math_id": 40,
"text": " R = V/I ."
},
{
"math_id": 41,
"text": "r_1"
},
{
"math_id": 42,
"text": "r_2"
},
{
"math_id": 43,
"text": "\\ell"
},
{
"math_id": 44,
"text": "T_2 - T_1"
},
{
"math_id": 45,
"text": "A_r = 2 \\pi r \\ell"
},
{
"math_id": 46,
"text": "\\dot{Q} = -k A_r \\frac{dT}{dr} = -2 k \\pi r \\ell \\frac{dT}{dr}"
},
{
"math_id": 47,
"text": "\\dot{Q} \\int_{r_1}^{r_2} \\frac{1}{r} \\, dr = -2 k \\pi \\ell \\int_{T_1}^{T_2} dT"
},
{
"math_id": 48,
"text": "\\dot{Q} = 2 k \\pi \\ell \\frac{T_1 - T_2}{\\ln (r_2 /r_1)}"
},
{
"math_id": 49,
"text": "R_c = \\frac{\\Delta T}{\\dot{Q}}= \\frac{\\ln (r_2 /r_1)}{2 \\pi k \\ell}"
},
{
"math_id": 50,
"text": "\\dot{Q} = 2 \\pi k \\ell r_m \\frac{T_1-T_2}{r_2-r_1}"
},
{
"math_id": 51,
"text": "r_m = \\frac{r_2-r_1}{\\ln (r_2 /r_1)}"
},
{
"math_id": 52,
"text": "A = 4\\pi r^2."
},
{
"math_id": 53,
"text": "\\dot{Q} = 4 k \\pi \\frac{T_1 - T_2}{1/{r_1}-1/{r_2}} = 4 k \\pi \\frac{(T_1 - T_2) r_1 r_2}{r_2-r_1}"
},
{
"math_id": 54,
"text": " \\textit{Bi} = \\frac{hL}{k} "
},
{
"math_id": 55,
"text": "h"
},
{
"math_id": 56,
"text": " \\mathrm{\\frac{J}{m^{2} s K}} "
},
{
"math_id": 57,
"text": "q = -h \\, \\Delta T, "
},
{
"math_id": 58,
"text": " \\frac{T-T_f}{T_i - T_f} = \\exp \\left ( \\frac{-hAt}{\\rho C_p V} \\right ). "
},
{
"math_id": 59,
"text": " \\mathrm{\\frac{W}{m^2 K}} "
},
{
"math_id": 60,
"text": "\\textit{Fo}= \\frac{\\alpha t}{L^2}. "
},
{
"math_id": 61,
"text": "T_i"
},
{
"math_id": 62,
"text": "t=0"
},
{
"math_id": 63,
"text": "x=0"
},
{
"math_id": 64,
"text": "T = 0"
},
{
"math_id": 65,
"text": "x = -\\infin"
},
{
"math_id": 66,
"text": " x = \\infin "
},
{
"math_id": 67,
"text": "t=\\infin "
},
{
"math_id": 68,
"text": "-\\infin \\le x \\le \\infin"
},
{
"math_id": 69,
"text": " T(x,t) - T_i = \\frac{T_i \\Delta X}{2 \\sqrt{\\pi \\alpha t}} \\exp \\left ( -\\frac{x^2}{4 \\alpha t} \\right ) "
},
{
"math_id": 70,
"text": "\\alpha"
},
{
"math_id": 71,
"text": " \\alpha =\\frac{k}{\\rho C_p} "
}
]
| https://en.wikipedia.org/wiki?curid=72536 |
72540 | Newton (unit) | Unit of force in physics
<templatestyles src="Template:Infobox/styles-images.css" />
The newton (symbol: N) is the unit of force in the International System of Units (SI). It is defined as formula_0, the force which gives a mass of 1 kilogram an acceleration of 1 metre per second squared.
It is named after Isaac Newton in recognition of his work on classical mechanics, specifically his second law of motion.
Definition.
A newton is defined as formula_1 (it is a named derived unit defined in terms of the SI base units). One newton is, therefore, the force needed to accelerate one kilogram of mass at the rate of one metre per second squared in the direction of the applied force.
The units "metre per second squared" can be understood as measuring a rate of change in velocity per unit of time, i.e. an increase in velocity by 1 metre per second every second.
In 1946, the General Conference on Weights and Measures (CGPM) Resolution 2 standardized the unit of force in the MKS system of units to be the amount needed to accelerate 1 kilogram of mass at the rate of 1 metre per second squared. In 1948, the 9th CGPM Resolution 7 adopted the name "newton" for this force. The MKS system then became the blueprint for today's SI system of units. The newton thus became the standard unit of force in the Système international d'unités (SI), or International System of Units.
The newton is named after Isaac Newton. As with every SI unit named for a person, its symbol starts with an upper case letter (N), but when written in full, it follows the rules for capitalisation of a common noun; i.e., "newton" becomes capitalised at the beginning of a sentence and in titles but is otherwise in lower case.
The connection to Newton comes from Newton's second law of motion, which states that the force exerted on an object is directly proportional to the acceleration hence acquired by that object, thus:
formula_2
where formula_3 represents the mass of the object undergoing an acceleration formula_4. When using the SI unit of mass, the kilogram (formula_5), and SI units for distance metre (formula_6), and time, second (formula_7) we arrive at the SI definition of the newton:
formula_8
Examples.
At average gravity on Earth (conventionally, formula_9), a kilogram mass exerts a force of about 9.8 newtons.
formula_10
formula_11 (where 62 kg is the world average adult mass).
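A small Python sketch reproducing the two worked examples above (force equals mass times standard gravity); the masses are the ones quoted in the article, and the function name is only illustrative.

```python
g = 9.80665                      # standard gravity, m/s^2

def weight_newtons(mass_kg):
    """Force in newtons exerted by a mass under standard gravity (F = m*g)."""
    return mass_kg * g

print(weight_newtons(0.200))     # about 1.961 N, the 200 g example above
print(weight_newtons(62))        # about 608 N, roughly an average adult
```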
Kilonewtons.
Large forces may be expressed in kilonewtons (kN), where 1 kN = 1000 N. For example, the tractive effort of a Class Y steam train locomotive and the thrust of an F100 jet engine are both around 130 kN.
Climbing ropes are tested by assuming a human can withstand a fall that creates 12 kN of force. The ropes must not break when tested against 5 such falls.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "1\\ \\text{kg}\\cdot \\text{m/s}^2 "
},
{
"math_id": 1,
"text": "\\mathrm{1\\ kg {\\cdot} m/s^2}"
},
{
"math_id": 2,
"text": "F = ma,"
},
{
"math_id": 3,
"text": "m"
},
{
"math_id": 4,
"text": "a"
},
{
"math_id": 5,
"text": "\\text{kg}"
},
{
"math_id": 6,
"text": "\\text{m}"
},
{
"math_id": 7,
"text": "\\text{s}"
},
{
"math_id": 8,
"text": "\\mathrm{1\\ kg {\\cdot} m/s^2}."
},
{
"math_id": 9,
"text": "g={9.80665}\\ \\text{m/s}^2 "
},
{
"math_id": 10,
"text": "0.200 \\text{ kg} \\times 9.80665 \\text{ m/s}^2 = 1.961\\text { N}. "
},
{
"math_id": 11,
"text": "62\\text { kg} \\times 9.80665 \\text{ m/s}^2=608\\text{ N} "
}
]
| https://en.wikipedia.org/wiki?curid=72540 |
7254186 | Bistatic radar | Radio wave detection and transmission system defined by its separation
Bistatic radar is a radar system comprising a transmitter and receiver that are separated by a distance comparable to the expected target distance. Conversely, a conventional radar in which the transmitter and receiver are co-located is called a monostatic radar.
A system containing multiple spatially diverse monostatic or bistatic radar components with a shared area of coverage is called "multistatic radar".
Many long-range air-to-air and surface-to-air missile systems use semi-active radar homing, which is a form of bistatic radar.
Types.
Pseudo-monostatic radars.
Some radar systems may have separate transmit and receive antennas, but if the angle subtended between transmitter, target and receiver (the bistatic angle) is close to zero, then they would still be regarded as monostatic or pseudo-monostatic. For example, some very long range HF radar systems may have a transmitter and receiver which are separated by a few tens of kilometres for electrical isolation, but as the expected target range is of the order 1000–3500 km, they are not considered to be truly bistatic and are referred to as pseudo-monostatic.
Forward scatter radars.
In some configurations, bistatic radars may be designed to operate in a fence-like configuration, detecting targets which pass between the transmitter and receiver, with the bistatic angle near 180 degrees. This is a special case of bistatic radar, known as a forward scatter radar, after the mechanism by which the transmitted energy is scattered by the target. In forward scatter, the scattering can be modeled using Babinet's principle and is a potential countermeasure to stealth aircraft as the radar cross section (RCS) is determined solely by the silhouette of the aircraft seen by the transmitter, and is unaffected by stealth coatings or shapings. The RCS in this mode is calculated as σ=4πA²/λ², where A is the silhouette area and λ is the radar wavelength. However, target location and tracking are very challenging in forward scatter radars, as the information content in measurements of range, bearing and Doppler becomes very low (all these parameters tend to zero, regardless of the location of the target in the fence).
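A one-function sketch of the forward-scatter RCS formula above; the silhouette area and wavelength are assumed values chosen only for illustration.

```python
import math

def forward_scatter_rcs(silhouette_area_m2, wavelength_m):
    """Forward-scatter radar cross section: sigma = 4*pi*A^2 / lambda^2."""
    return 4.0 * math.pi * silhouette_area_m2**2 / wavelength_m**2

# Hypothetical example: 10 m^2 silhouette seen by a radar with 0.23 m wavelength.
print(forward_scatter_rcs(10.0, 0.23))   # roughly 2.4e4 m^2
```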
Multistatic radar.
A multistatic radar system is one in which there are at least three components - for example, one receiver and two transmitters, or two receivers and one transmitter, or multiple receivers and multiple transmitters. It is a generalisation of the bistatic radar system, with one or more receivers processing returns from one or more geographically separated transmitters.
Passive radar.
A bistatic or multistatic radar that exploits non-radar transmitters of opportunity is termed a passive coherent location system or passive covert radar.
Any radar which does not send out an active electromagnetic pulse is known as a passive radar. Passive coherent location, also known as PCL, is a special type of passive radar that exploits transmitters of opportunity, especially commercial signals in the environment.
Advantages and disadvantages.
The principal advantages of bistatic and multistatic radar include:
The principal disadvantages of bistatic and multistatic radar include:
Geometry.
Angle.
The bistatic angle is the angle subtended between the transmitter, target and receiver in a bistatic radar. When it is exactly zero the radar is a monostatic radar, when it is close to zero the radar is pseudo-monostatic, and when it is close to 180 degrees the radar is a forward scatter radar. Elsewhere, the radar is simply described as a bistatic radar. The bistatic angle is an important factor in determining the radar cross section of the target.
Range.
Bistatic range refers to the basic measurement of range made by a radar or sonar system with separated transmitter and receiver. The receiver measures the time difference of arrival of the signal from the transmitter directly, and via reflection from the target. This defines an ellipse of constant bistatic range, called an iso-range contour, on which the target lies, with foci centred on the transmitter and receiver. If the target is at range "Rrx" from the receiver and range "Rtx" from the transmitter, and the receiver and transmitter are a distance "L" apart, then the bistatic range is "Rrx"+"Rtx"-"L". Motion of the target causes a rate of change of bistatic range, which results in bistatic Doppler shift.
Generally speaking, constant bistatic range points draw an ellipsoid with the transmitter and receiver positions as the focal points. The bistatic iso-range contours are where the ground slices the ellipsoid. When the ground is flat, this intercept forms an ellipse. Note that except when the two platforms have equal altitude, these ellipses are not centered on the specular point.
Doppler shift.
Bistatic Doppler shift is a specific example of the Doppler effect that is observed by a radar or sonar system with a separated transmitter and receiver. The Doppler shift is due to the component of motion of the object in the direction of the transmitter, plus the component of motion of the object in the direction of the receiver. Equivalently, it can be considered as proportional to the bistatic range rate.
In a bistatic radar with wavelength "λ", where the distance between transmitter and target is "Rtx" and distance between receiver and target is "Rrx", the received bistatic Doppler frequency shift is calculated as:
formula_0
Note that objects moving along the line connecting the transmitter and receiver will always have 0 Hz Doppler shift, as will objects moving around an ellipse of constant bistatic range.
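A short sketch of the bistatic range and bistatic Doppler relations above; the geometry, closing speed and wavelength are hypothetical numbers used only to make the example concrete.

```python
def bistatic_range(R_tx, R_rx, L):
    """Bistatic range R_tx + R_rx - L, which defines the iso-range ellipse."""
    return R_tx + R_rx - L

def bistatic_doppler(range_sum_rate, wavelength):
    """Doppler shift f = -(1/lambda) * d(R_tx + R_rx)/dt."""
    return -range_sum_rate / wavelength

# Hypothetical geometry: 50 km baseline, target 40 km from the transmitter and
# 30 km from the receiver, total path shrinking at 150 m/s, wavelength 0.1 m.
print(bistatic_range(40e3, 30e3, 50e3))      # 20 km bistatic range
print(bistatic_doppler(-150.0, 0.1))         # +1500 Hz Doppler shift
```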
Imaging.
Bistatic imaging is a radar imaging technique using bistatic radar (two radar instruments, with one emitting and one receiving). The result is a more detailed image than would have been rendered with just one radar instrument. Bistatic imaging can be useful in differentiating between ice and rock on the surface of a remote target, such as the moon, due to the different ways that radar reflects off these objects—with ice, the radar instruments would detect "volume scattering", and with rock, the more traditional surface scattering would be detected.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f = -\\frac{1}{\\lambda}\\frac{d}{dt}(R_{tx}+R_{rx})"
}
]
| https://en.wikipedia.org/wiki?curid=7254186 |
72543079 | Brownian snake | Stochastic Markov process
A Brownian snake is a stochastic Markov process on the space of stopped paths. It has been extensively studied, and was in particular successfully used as a representation of superprocesses.
Informally, superprocesses are the scaling limit of branching processes, except each particle splits and dies at infinite rates. The Brownian snake is a stochastic object that enables the representation of the genealogy of a superprocess, providing a link between super-Brownian motion and Brownian trees. In other words, even though infinitely many particles are constantly born, we can still keep track of individual trajectories in space, or of when two given present-day particles have split from a common ancestor in the past.
History.
The Brownian snake approach was originally developed by Jean-François Le Gall. It has since been applied in fragmentation theory, partial differential equations, and planar maps.
The simplest setting.
Let formula_0 be the space of càdlàg functions from formula_1 to formula_2, equipped with a metric formula_3 compatible with the Skorokhod topology. We define a stopped path as a couple formula_4 where formula_5 and formula_6 are such that formula_7. In other words, formula_8 is constant after formula_9.
Now, we consider a jump process formula_10 with states formula_11 and jump rate formula_12, such that formula_13. We set: formula_14 and then formula_15 to be the process "reflected on 0".
In words, formula_16 increases with speed 1, until formula_17 jumps, in which case it decreases with speed 1, and so on. We define the stopping time formula_18 to be the formula_12-th hitting time of 0 by formula_19.
We now define a stochastic process formula_20 on the set of stopped paths as follows:
See animation for an illustration. We call this process a snake and formula_30 the head of the snake. This process is not yet the Brownian snake, but a good introduction. The path is erased when the snake head moves backwards, and is created anew when it moves forward.
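As a minimal illustration, the sketch below simulates only the lifetime process (the head of the snake) under the definition above: the jump process flips sign at rate N with exponential holding times, its integral moves with slope ±1, and the reflected process is its absolute value. The path-valued part of the snake is not simulated, and the parameters are illustrative.

```python
import random

def simulate_lifetime_process(N, horizon, seed=0):
    """Sketch of the lifetime process: the integral of a +1/-1 jump process
    that flips sign at rate N (exponential holding times), reflected at 0
    by taking the absolute value.  Values are recorded at the jump times;
    between recorded points the un-reflected integral is linear.
    """
    rng = random.Random(seed)
    t, beta_hat, J = 0.0, 0.0, +1
    times, values = [0.0], [0.0]
    while t < horizon:
        hold = rng.expovariate(N)      # exponential holding time at rate N
        t += hold
        beta_hat += J * hold           # the integral moves with slope J
        J = -J                         # the jump process flips sign
        times.append(t)
        values.append(abs(beta_hat))   # reflection on 0
    return times, values

ts, bs = simulate_lifetime_process(N=100, horizon=5.0)
print(len(ts), max(bs))
```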
Duality with a branching Brownian motion.
We now consider a measure-valued branching process formula_31 starting with formula_12 particles, such that each particle dies with rate formula_12, and upon its death gives birth to two offspring with probability formula_32.
On the other hand, we may define from our process formula_33 a measure-valued random process formula_34 as follows: note that for any formula_35, there will almost surely be finitely many times formula_36 such that formula_37. We then set for any measurable function formula_38:
formula_39
Then formula_40 and formula_41 are equal in distribution.
The Brownian snake.
We take the limit of the previous system as formula_42. In this setting, the head of the snake keeps jittering. In fact, the process formula_30 tends towards a reflected Brownian motion formula_43. The definitions are no longer valid for a number of reasons, in particular because formula_43 is almost surely never monotonic on any interval.
However, we may define a probability formula_44 on stopped paths such that:
We may also define formula_50 to be the distribution of formula_51 if formula_52. Finally, define the transition semigroup on the set of stopped paths:
formula_53
A stochastic process with this semigroup is called a Brownian snake.
We may again find a duality between this process and a branching process. Here the branching process will be a super-Brownian motion formula_54 with branching mechanism formula_55, started on a Dirac in 0.
However, unlike the previous case, we must be more careful in the definition of the process formula_41. Indeed, for formula_35 we cannot just list the times formula_56 such that formula_57. Instead we use the local time formula_58 associated with formula_43: we first define the stopping time formula_59. Then we define for any measurable formula_38: formula_60 Then, as before, we obtain that formula_40 and formula_41 are equal in distribution. See the animation for the construction of the branching process from the Brownian snake.
Animation: construction of the branching process associated with the Brownian snake.
Generalisation.
The previous example can be generalized in many ways:
Link with genealogy and the Brownian tree.
The Brownian snake can be seen as a way to represent the genealogy of a superprocess, the same way a Galton-Watson tree may encode the hidden genealogy of a Galton–Watson process. Indeed, for two points of the Brownian snake, their common ancestor will be the infimum of the snake's head position between them.
If we take a Brownian snake and construct a real tree from it, we obtain a Brownian tree.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "D(\\R_+,\\R)"
},
{
"math_id": 1,
"text": "\\R_+"
},
{
"math_id": 2,
"text": "\\R"
},
{
"math_id": 3,
"text": "d"
},
{
"math_id": 4,
"text": "(w,z)"
},
{
"math_id": 5,
"text": "w\\in D(\\R_+,\\R)"
},
{
"math_id": 6,
"text": "z\\in \\R_+"
},
{
"math_id": 7,
"text": "w(t)=w(t\\wedge z)"
},
{
"math_id": 8,
"text": "w"
},
{
"math_id": 9,
"text": "z"
},
{
"math_id": 10,
"text": "(J_s^N)_{s\\geq 0}"
},
{
"math_id": 11,
"text": "\\{+1,-1\\}"
},
{
"math_id": 12,
"text": "N"
},
{
"math_id": 13,
"text": "J^N_0 = +1 "
},
{
"math_id": 14,
"text": "\\hat{\\beta}^N_s := \\int_0^s J^N_{s'}ds'"
},
{
"math_id": 15,
"text": "\\beta^N_s := |\\hat{\\beta}_s^N|"
},
{
"math_id": 16,
"text": "\\beta^N_s"
},
{
"math_id": 17,
"text": "J^N_s"
},
{
"math_id": 18,
"text": "\\sigma_N"
},
{
"math_id": 19,
"text": "\\beta^N"
},
{
"math_id": 20,
"text": "(\\eta^N_s,\\beta^N_s)_{s\\in \\R_+}"
},
{
"math_id": 21,
"text": "\\eta_0^N = 0"
},
{
"math_id": 22,
"text": "J^N_s = +1"
},
{
"math_id": 23,
"text": "s\\in [s_1,s_2]"
},
{
"math_id": 24,
"text": "\\eta_{s_1}(t)=\\eta_{s_2}(t)"
},
{
"math_id": 25,
"text": "t\\leq \\beta_{s_1} "
},
{
"math_id": 26,
"text": "\\Big(\\eta_{s_2}(t-\\beta_{s_1})-\\eta_{s_1}(\\beta_{s_1})\\Big)_{0\\leq t\\leq \\beta_{s_2}-\\beta_{s_1}} "
},
{
"math_id": 27,
"text": "\\eta_{s_1}"
},
{
"math_id": 28,
"text": "J_s^N = -1"
},
{
"math_id": 29,
"text": "t\\leq \\beta_{s_2}"
},
{
"math_id": 30,
"text": "\\beta_s^N"
},
{
"math_id": 31,
"text": "(X^N_t)_{t\\geq 0}"
},
{
"math_id": 32,
"text": "1/2"
},
{
"math_id": 33,
"text": "(\\eta_s^N,\\beta_s^N)_{0\\leq s\\leq \\sigma^N}"
},
{
"math_id": 34,
"text": "\\hat{X}_t"
},
{
"math_id": 35,
"text": "t\\in \\R_+"
},
{
"math_id": 36,
"text": "s_1,s_2,\\dots,s_n\\in [0,\\sigma^N]"
},
{
"math_id": 37,
"text": "\\beta_{s_i}=t"
},
{
"math_id": 38,
"text": "f"
},
{
"math_id": 39,
"text": "\\hat{X}^N_t(f):= \\sum\\limits_{i=1}^nf(\\eta^N_s(t))"
},
{
"math_id": 40,
"text": "X"
},
{
"math_id": 41,
"text": "\\hat{X}"
},
{
"math_id": 42,
"text": "N\\to \\infty"
},
{
"math_id": 43,
"text": "\\beta_s"
},
{
"math_id": 44,
"text": "R_{a,b}((u,y),d(w,z))"
},
{
"math_id": 45,
"text": "R_{a,b}"
},
{
"math_id": 46,
"text": "z=b"
},
{
"math_id": 47,
"text": "w(t)=u(t)"
},
{
"math_id": 48,
"text": "0\\leq t\\leq a"
},
{
"math_id": 49,
"text": "(w(a+t))_{0\\leq t\\leq b-a}"
},
{
"math_id": 50,
"text": "\\gamma_s^y(da,db)"
},
{
"math_id": 51,
"text": "(\\inf_{0\\leq r\\leq s}\\beta_r,\\beta_s)"
},
{
"math_id": 52,
"text": "\\beta_0=y"
},
{
"math_id": 53,
"text": "Q_s((u,y),d(w,z)) = \\int \\gamma_s^y(da,db)R_{a,b}((u,y),d(w,z))"
},
{
"math_id": 54,
"text": "(X_t)_{t\\in \\R_+}"
},
{
"math_id": 55,
"text": "\\phi(z)=z^2"
},
{
"math_id": 56,
"text": "s_1,s_2,\\dots"
},
{
"math_id": 57,
"text": "\\beta_s=t"
},
{
"math_id": 58,
"text": "l_s(t)"
},
{
"math_id": 59,
"text": "\\sigma = \\inf\\{s\\geq 0, l_s(0)\\geq u\\}"
},
{
"math_id": 60,
"text": "\\hat{X}_t(f):= \\int_0^\\sigma f(\\eta_s(t))dl_s(t)"
},
{
"math_id": 61,
"text": "D(\\R_+,E)"
},
{
"math_id": 62,
"text": "(E,d)"
}
]
| https://en.wikipedia.org/wiki?curid=72543079 |
725441 | Atwood machine | Classroom demonstration used to illustrate principles of classical mechanics
The Atwood machine (or Atwood's machine) was invented in 1784 by the English mathematician George Atwood as a laboratory experiment to verify the mechanical laws of motion with constant acceleration. Atwood's machine is a common classroom demonstration used to illustrate principles of classical mechanics.
The ideal Atwood machine consists of two objects of mass "m"1 and "m"2, connected by an inextensible massless string over an ideal massless pulley.
Both masses experience uniform acceleration. When "m"1 = "m"2, the machine is in neutral equilibrium regardless of the position of the weights.
Equation for constant acceleration.
An equation for the acceleration can be derived by analyzing forces.
Assuming a massless, inextensible string and an ideal massless pulley, the only forces to consider are: tension force (T), and the weight of the two masses ("W"1 and "W"2). To find an acceleration, consider the forces affecting each individual mass.
Using Newton's second law (with a sign convention of formula_0), we can derive a system of equations for the acceleration (a).
As a sign convention, assume that "a" is positive when downward for formula_1 and upward for formula_2. Weight of formula_1 and formula_2 is simply formula_3 and formula_4 respectively.
Forces affecting m1: formula_5
Forces affecting m2: formula_6
and adding the two previous equations yields formula_7
and the concluding formula for acceleration formula_8
The Atwood machine is sometimes used to illustrate the Lagrangian method of deriving equations of motion.
Equation for tension.
It can be useful to know an equation for the tension in the string. To evaluate tension, substitute the equation for acceleration in either of the two force equations.
formula_9
For example, substituting into formula_10, results in
formula_11 where formula_12 is the harmonic mean of the two masses. The numerical value of formula_13 is closer to the smaller of the two masses.
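A short sketch of the ideal-machine formulas above; the two masses are arbitrary example values.

```python
g = 9.80665   # standard gravity, m/s^2

def atwood_ideal(m1, m2):
    """Acceleration and string tension for an ideal Atwood machine.

    a = g*(m1 - m2)/(m1 + m2)
    T = 2*g*m1*m2/(m1 + m2)   (g times the harmonic mean of the masses)
    """
    a = g * (m1 - m2) / (m1 + m2)
    T = 2.0 * g * m1 * m2 / (m1 + m2)
    return a, T

a, T = atwood_ideal(3.0, 2.0)          # hypothetical masses in kg
print(f"a = {a:.3f} m/s^2, T = {T:.2f} N")
```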
Equations for a pulley with inertia and friction.
For very small mass differences between "m"1 and "m"2, the rotational inertia I of the pulley of radius r cannot be neglected. The angular acceleration of the pulley is given by the no-slip condition:
formula_14
where formula_15 is the angular acceleration. The net torque is then:
formula_16
Combining with Newton's second law for the hanging masses, and solving for "T"1, "T"2, and a, we get:
Acceleration:
formula_17
Tension in string segment nearest "m"1:
formula_18
Tension in string segment nearest "m"2:
formula_19
Should bearing friction be negligible (but not the inertia of the pulley nor the traction of the string on the pulley rim), these equations simplify as the following results:
Acceleration: formula_20
Tension in string segment nearest "m"1:
formula_21
Tension in string segment nearest "m"2:
formula_22
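A short sketch of the equations above for a pulley with rotational inertia and optional bearing friction; the masses, pulley dimensions and friction torque are assumed example values.

```python
g = 9.80665   # standard gravity, m/s^2

def atwood_with_pulley(m1, m2, I, r, tau_friction=0.0):
    """Acceleration and the two string tensions when the pulley has
    rotational inertia I (kg m^2), radius r (m) and bearing friction
    torque tau_friction (N m), following the formulas above.
    """
    denom = m1 + m2 + I / r**2
    a = (g * (m1 - m2) - tau_friction / r) / denom
    T1 = m1 * g * (2.0 * m2 + I / r**2 + tau_friction / (r * g)) / denom
    T2 = m2 * g * (2.0 * m1 + I / r**2 + tau_friction / (r * g)) / denom
    return a, T1, T2

# Hypothetical values: 3 kg and 2 kg masses, a 0.5 kg uniform-disc pulley of radius 0.1 m.
I_disc = 0.5 * 0.5 * 0.1**2           # (1/2) M r^2 for a uniform disc
print(atwood_with_pulley(3.0, 2.0, I_disc, 0.1))
```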
Practical implementations.
Atwood's original illustrations show the main pulley's axle resting on the rims of another four wheels, to minimize friction forces from the bearings. Many historical implementations of the machine follow this design.
An elevator with a counterbalance approximates an ideal Atwood machine and thereby relieves the driving motor from the load of holding the elevator cab — it has to overcome only weight difference and inertia of the two masses. The same principle is used for funicular railways with two connected railway cars on inclined tracks, and for the elevators on the Eiffel Tower which counterbalance each other. Ski lifts are another example, where the gondolas move on a closed (continuous) pulley system up and down the mountain. The ski lift is similar to the counter-weighted elevator, but with a constraining force provided by the cable in the vertical dimension thereby achieving work in both the horizontal and vertical dimensions. Boat lifts are another type of counter-weighted elevator system approximating an Atwood machine. | [
{
"math_id": 0,
"text": "m_1 > m_2"
},
{
"math_id": 1,
"text": "m_1"
},
{
"math_id": 2,
"text": "m_2"
},
{
"math_id": 3,
"text": "W_1 = m_1 g"
},
{
"math_id": 4,
"text": "W_2 = m_2 g"
},
{
"math_id": 5,
"text": " m_1 g - T = m_1 a"
},
{
"math_id": 6,
"text": " T - m_2 g = m_2 a"
},
{
"math_id": 7,
"text": " m_1 g - m_2 g = m_1 a + m_2 a,"
},
{
"math_id": 8,
"text": "a = g \\frac{m_1 - m_2}{m_1 + m_2}"
},
{
"math_id": 9,
"text": "a = g{m_1-m_2 \\over m_1 + m_2}"
},
{
"math_id": 10,
"text": "m_1 a = m_1 g-T"
},
{
"math_id": 11,
"text": "T={2 g m_1 m_2 \\over m_1 + m_2}={2g \\over 1/m_1 + 1/m_2} = m_h \\, g"
},
{
"math_id": 12,
"text": "m_h = \\frac{2 m_1 m_2}{m_1 + m_2}"
},
{
"math_id": 13,
"text": "m_h"
},
{
"math_id": 14,
"text": " \\alpha = \\frac{a}{ r},"
},
{
"math_id": 15,
"text": " \\alpha"
},
{
"math_id": 16,
"text": "\\tau_{\\mathrm{net}}=\\left(T_1 - T_2 \\right)r - \\tau_{\\mathrm{friction}} = I \\alpha "
},
{
"math_id": 17,
"text": " a = {{g (m_1 - m_2) - {\\tau_{\\mathrm{friction}} \\over r}} \\over {m_1 + m_2 + {{I} \\over {r^2}}}}"
},
{
"math_id": 18,
"text": " T_1 = {{m_1 g \\left(2 m_2 + \\frac{I}{r^2} + \\frac{\\tau_{\\mathrm{friction}}}{r g} \\right)} \\over {m_1 + m_2 + \\frac{I}{r^2}}}"
},
{
"math_id": 19,
"text": " T_2 = {{m_2 g \\left(2 m_1 + \\frac{I}{r^2} + \\frac{\\tau_{\\mathrm{friction}}}{r g}\\right)} \\over {m_1 + m_2 + \\frac{I}{r^2}}}"
},
{
"math_id": 20,
"text": " a = {{g \\left(m_1 - m_2\\right)} \\over {m_1 + m_2 + \\frac{I}{r^2}}}"
},
{
"math_id": 21,
"text": " T_1 = {{m_1 g \\left(2 m_2 + \\frac{I}{r^2}\\right)} \\over {m_1 + m_2 + \\frac{I}{r^2}}}"
},
{
"math_id": 22,
"text": " T_2 = {{m_2 g \\left(2 m_1 + \\frac{I}{r^2}\\right)} \\over {m_1 + m_2 + \\frac{I}{r^2}}}"
}
]
| https://en.wikipedia.org/wiki?curid=725441 |
72546779 | Polar semiotics | Concept in the science of signs
Polar semiotics (or Polar semiology) is a concept in the field of semiotics, which is the science of signs.
The most basic concept of polar semiotics can be traced in the thought of Roman Jakobson, when he conceptualized binary opposition as a relationship that necessarily implies some other relationship of conjunction and disjunction. A simple example is the binary symmetry between polar qualities that belong to a same category, such as high / low, in coordination with other types of categories, for example the presence or absence of a pitch. With further development, this same idea is represented in the so-called Greimasian square, attributed to Algirdas Julius Greimas, and which is an adaptation of Aristotle’s old logical square, used by classical philosophers such as Descartes and Spinoza, among others, to try to support empirical demonstrations. As Chandler (2017) states: “There is an apparently inbuilt dualism in our attempts to understand our perception and cognition of the world. We even see the world as a thing apart from us: the modern polarity of subject and object that causes the world to retreat forever into a veil of illusion.”.
The concept introduced into biosemiotics.
The adaptation of the aforementioned concept is due to Thomas Sebeok, who used it to imply that there are systems and dynamics of opposite symmetry that are at the same time complementary in manifold ecological processes and ecological niches, as Jakob von Uexküll had described them under the concept of Umwelt:
<templatestyles src="Template:Blockquote/styles.css" />" In the web of nature, plants are, above all, producers [...] The polar opposites of plants are the funghi, nature’s decomposers."
Sebeok suggests that this notion goes beyond mere subjectivity, as the association of oppositions and complements might seem in the RYB color model, used, for example, to understand the colorimetric relationships between flowers and pollinators. In fact, as Sebeok puts it, “the sign is bifaced” (1976: 117; see also Spinks, 1991: 29). The sign is, therefore, an instrument for cutting and producing symmetry that generates perspective and feeds the perception of externalized world through a self-conscious perceiver. Notice, also, that the concept of symmetry here employed, may also involve a manifold potential of asymmetry or simple antisymmetry, multiple antisymmetry, and permutational symmetry (see, for example, the conceptualization of binarism and asymmetry as conceived by Kotov & Kull, 2011:183).
Formalization in Category theory.
Until the first two decades of the 21st century, the concept of polar semiotics was loosely linked to the broader notion of category. Formalization of polar semiotics in the mathematical field of Category theory is due to Gabriel Pareyon (“Philosophical Sketches”, 2020), where the semiotic ‘pole’ is interpreted as the singularity of a function, which is neither removable nor essential to the function (as in fact ‘pole’ is defined in Mathematical analysis). Vectors that contribute to the definition of the semiotic set and scope of signification in a corresponding category of signs emanate or are traced from this polar singularity. The theoretical context to bring semiotics to the field of mathematics is based on Peirce’s semiotics. In this case, polar semiotics constitutes a useful tool in computational science, to characterize sign systems even in the so-called natural language and artistic language, as systems of categories submerged in contexts of the objects of the category of functors that submerge them, as it is postulated by the Yoneda lemma. Pareyon’s formalization of a generalized cohomology : formula_0 among any kind of subgroups (formula_1) operating within a same topological space (formula_2), where formula_3 stands for group, formula_1 for “symbolic system” (i.e. "language"), and formula_2 also intends the “semiotic continuum” as a self-coherent map, surpasses the pseudo-problem of simple binarism as (un)translatability of a code, hitherto understood by the structuralist tradition, as criticized by Lorusso (2015) and Lenninger (2018):
"The crucial point in the description of the notion of code here is the claim that it must not be interpreted as a one-to-one information key and cannot rest with the description of being traced via bi-polar categories, as in Levi-Strauss’ (1979) oppositional pairs or Greimas’ (1987) semes."
The referred formalization of polar semiotics allows, consequently, a morphism to build a one-to-one codal coherence for an "intersemiotic translation" (i.e. the conversion of a sign system within its semiotic regime, into another system within another, distinct, semiotic regime), as described by Jakobson, and constitutes a theoretical generalized framework for ekphrasis in its widest semiotic scope. Although the concept of ekphrasis usually is constrained within the field of arts, this framework extends semiotics competence to a crossover theorization among the arts and the sciences.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " G = \\langle s|T\\rangle "
},
{
"math_id": 1,
"text": "s"
},
{
"math_id": 2,
"text": "T"
},
{
"math_id": 3,
"text": "G"
}
]
| https://en.wikipedia.org/wiki?curid=72546779 |
7257476 | Happiness economics | Study of happiness and quality of life
The economics of happiness or happiness economics is the theoretical, qualitative and quantitative study of happiness and quality of life, including positive and negative affects, well-being, life satisfaction and related concepts – typically tying economics more closely than usual with other social sciences, like sociology and psychology, as well as physical health. It typically treats subjective happiness-related measures, as well as more objective quality of life indices, rather than wealth, income or profit, as something to be maximized.
The field has grown substantially since the late 20th century, for example by the development of methods, surveys and indices to measure happiness and related concepts, as well as quality of life. Happiness findings have been described as a challenge to the theory and practice of economics. Nevertheless, furthering gross national happiness, as well as a specified index to measure it, was adopted explicitly in the Constitution of Bhutan in 2008 to guide its economic governance.
Subject classifications.
The subject may be categorized in various ways, depending on specificity, intersection, and cross-classification. For example, within the "Journal of Economic Literature" classification codes, it has been categorized under:
Metrology.
Given its very nature, reported happiness is subjective. It is difficult to compare one person's happiness with another's. It can be especially difficult to compare happiness across cultures. However, many happiness economists believe they have solved this comparison problem. Cross-sections of large data samples across nations and time demonstrate consistent patterns in the determinants of happiness.
Happiness is typically measured using subjective measures – e.g. self-reported surveys – and/or objective measures. One concern has always been the accuracy and reliability of people's responses to happiness surveys. Objective measures such as lifespan, income, and education are often used as well as or instead of subjectively reported happiness, though this assumes that they generally produce happiness, which while plausible may not necessarily be the case. The terms quality of life or well-being are often used to encompass these more objective measures.
Micro-econometric happiness equations have the standard form: formula_0. In this equation formula_1 is the reported well-being of individual formula_2 at time formula_3, and formula_4 is a vector of known variables, which include socio-demographic and socioeconomic characteristics.
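As an illustrative sketch only, the standard-form equation above can be fitted by ordinary least squares. The code below uses synthetic (made-up) data and two invented regressors (log income and an employment dummy); real studies rely on large survey panels and many more socio-demographic controls.

```python
import numpy as np

# Minimal sketch of a micro-econometric happiness equation
#   W_it = alpha + beta' x_it + e_it
# fitted by ordinary least squares on synthetic data.

rng = np.random.default_rng(0)
n = 1000
log_income = rng.normal(10.0, 1.0, n)          # hypothetical regressor
employed = rng.integers(0, 2, n)               # hypothetical 0/1 regressor
W = 3.0 + 0.4 * log_income + 0.8 * employed + rng.normal(0.0, 1.0, n)

X = np.column_stack([np.ones(n), log_income, employed])
beta_hat, *_ = np.linalg.lstsq(X, W, rcond=None)
print(dict(zip(["alpha", "log_income", "employed"], beta_hat.round(3))))
```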
Macro-econometric happiness has been gauged by some as Gross National Happiness, following Sicco Mansholt's 1972 introduction of the measure, and by others as a Genuine Wealth index. Anielski in 2008 wrote a reference definition on how to measure five types of capital: (1) human; (2) social; (3) natural; (4) built; and (5) financial.
Happiness, well-being, or satisfaction with life, was seen as unmeasurable in classical and neo-classical economics. Van Praag was the first person who organized large surveys in order to explicitly measure welfare derived from income. He did this with the Income Evaluation Question (IEQ). This approach is called the Leyden School. It is named after the Dutch university where this approach was developed. Other researchers included Arie Kapteyn and Aldi Hagenaars.
Some scientists claim that happiness can be measured both subjectively and objectively by observing the joy center of the brain lit up with advanced imaging, although this raises philosophical issues, for example about whether this can be treated as more reliable than reported subjective happiness.
Determinants.
GDP and GNP.
Typically national financial measures, such as gross domestic product (GDP) and gross national product (GNP), have been used as a measure of successful policy. There is a significant association between GDP and happiness, with citizens in wealthier nations being happier than those in poorer nations. In 2002, researchers argued that this relationship extends only to an average GDP per capita of about $15,000. In the 2000s, several studies have obtained the opposite result, so this Easterlin paradox is controversial.
Individual income.
Historically, economists have said that well-being is a simple function of income. However, it has been found that once wealth reaches a subsistence level, its effectiveness as a generator of well-being is greatly diminished. Happiness economists hope to change the way governments view well-being and how to most effectively govern and allocate resources given this paradox.
In 2010, Daniel Kahneman and Angus Deaton found that higher earners generally reported better life satisfaction, but people's day-to-day emotional well-being only rose with earnings until a threshold annual household pre-tax income of $75,000. This particular study by Kahneman and Deaton showed that experienced happiness rises with income only up to about $75,000. Experienced happiness is the happiness received on a daily basis: "the frequency and intensity of experiences of joy, fascination, anxiety, sadness, anger, and affection that make one's life pleasant or unpleasant." The other finding from Kahneman and Deaton is that there is no evidence supporting a maximum income for what is called reflective happiness. This is supported by the use of the Cantril Ladder, which revealed that there is a direct relationship between income and reflective happiness. From this one can conclude, up to a point, that money does buy happiness.
Other factors have been suggested as making people happier than money. A short term course of psychological therapy is 32 times more cost effective at increasing happiness than simply increasing income.
Scholars at the University of Virginia, University of British Columbia and Harvard University released a study in 2011 after examining numerous academic papers in response to an apparent contradiction: "When asked to take stock of their lives, people with more money report being a good deal more satisfied. But when asked how happy they are at the moment, people with more money are barely different than those with less." The study included the following eight general recommendations:
In their "Unhappy Cities" paper, Edward Glaeser, Joshua Gottlieb and Oren Ziv examined the self-reported subjective well-being of people living in American metropolitan areas, particularly in relation to the notion that "individuals make trade-offs among competing objectives, including but not limited to happiness." The researchers findings revealed that people living in metropolitan areas where lower levels of happiness are reported are receiving higher real wages, and they suggest in their conclusion that "humans are quite understandably willing to sacrifice both happiness and life satisfaction if the price is right."
Social security.
Ruut Veenhoven claimed that social security payments do not seem to add to happiness. This may be due to the fact that non-self-earned income (e.g., from a lottery) does not add to happiness in general either. Happiness may be the mind's reward for a useful action. However, Johan Norberg of CIS, a free enterprise economy think tank, presents a hypothesis that as people who think that they themselves control their lives are happier, paternalist institutions may decrease happiness.
An alternative perspective focuses on the role of the welfare state as an institution that improves quality of life not only by increasing the extent to which basic human needs are met, but also by promoting greater control of one's life by limiting the degree to which individuals find themselves at the mercy of impersonal market forces that are indifferent to the fate of individuals. This is the argument suggested by the U.S. political scientist Benjamin Radcliff, who has presented a series of papers in peer-reviewed scholarly journals demonstrating that a more generous welfare state contributes to higher levels of life satisfaction, and does so to rich and poor alike.
Employment.
Generally, the well-being of those who are employed is higher than those who are unemployed. Employment itself may not increase subjective well-being, but facilitates activities that do (such as supporting a family, philanthropy, and education). While work does increase well-being through providing income, income level is not as indicative of subjective well-being as other benefits related to employment. Feelings of autonomy and mastery, found in higher levels in the employed than unemployed, are stronger predictors of subjective well-being than wealth.
When personal preference and the amount of time spent working do not align, both men and women experience a decrease in subjective well-being. The negative effect of working more or working less than preferred has been found across multiple studies, most finding that working more than preferred (over-employed) is more detrimental, but some found that working less (under-employed) is more detrimental. Most individuals' levels of subjective well-being returned to "normal" (level previous to time mismatch) within one year. Levels remained lower only when individuals worked more hours than preferred for a period of two years or more, which may indicate that it is more detrimental to be over-employed than under-employed in the long-term.
Employment status effects are not confined to the individual. Being unemployed can have detrimental effects on a spouse's subjective well-being, compared to being employed or not working (and not looking for work). Partner life satisfaction is inversely related to the number of hours their partner is underemployed. When both partners are underemployed, the life-satisfaction of men is more greatly diminished than women. However, just being in a relationship reduces the impact unemployment has on the subjective well-being of an individual. On a broad scale, high rates of unemployment negatively affect the subjective well-being of the employed.
Becoming self-employed can increase subjective well-being, given the right conditions. Those who leave work to become self-employed report greater life satisfaction than those who work for others or become self-employed after unemployment; this effect increases over time. Those who are self-employed and have employees of their own report higher life-satisfaction than those who are self-employed without employees, and women who are self-employed without employees report a higher life satisfaction than men in the same condition.
The effects of retirement on subjective well-being vary depending on personal and cultural factors. Subjective well-being can remain stable for those who retire from work voluntarily, but declines for those who are involuntarily retired. In countries with an average social norm to work, the well-being of men increases after retirement, and the well-being of retired women is at the same level as women who are homemakers or work outside the home. In countries with a strong social norm to work, retirement negatively impacts the well-being of men and women.
Relationships and children.
In the 1970s, women typically reported higher subjective well-being than did men. By 2009, declines in reported female happiness had eroded the gender gap.
In rich societies, where a rise in income does not equate to an increase in levels of subjective well-being, personal relationships are the determining factors of happiness.
Glaeser, Gottlieb and Ziv suggest in their conclusion that the happiness trade-offs that individuals seem willing to make aligns with the tendency of parents to report less happiness, as they sacrifice their personal well-being for the "price" of having children.
Freedom and control.
There is a significant correlation between feeling in control of one's own life and happiness levels.
A study conducted at the University of Zurich suggested that democracy and federalism bring well-being to individuals. It concluded that the more direct political participation possibilities are available to citizens, the higher their subjective well-being. Two reasons were given for this finding. First, a more active role for citizens enables better monitoring of professional politicians, which leads to greater satisfaction with government output. Second, the ability of citizens to get involved in and have control over the political process independently increases well-being.
American psychologist Barry Schwartz argues in his book "The Paradox of Choice" that too many consumer and lifestyle choices can produce anxiety and unhappiness due to analysis paralysis and raised expectations of satisfaction.
Religious diversity.
National cross-sectional data suggest an inverse relationship between religious diversity and happiness, possibly by facilitating more bonding (and less bridging) social capital.
Happiness and leisure.
Much of the research regarding happiness and leisure relies on subjective well-being (SWB) as an appropriate measure of happiness. Research has demonstrated a wide variety of contributing and resulting factors in the relationship between leisure and happiness. These include psychological mechanisms, and the types and characteristics of leisure activities that result in the greatest levels of subjective happiness. Specifically, leisure may trigger five core psychological mechanisms including detachment-recovery from work, autonomy in leisure, mastery of leisure activities, meaning-making in leisure activities, and social affiliation in leisure (DRAMMA). Leisure activities that are physical, relational, and performed outdoors are correlated with greater feelings of satisfaction with free time. Research across 33 different countries shows that individuals who feel they strengthen social relationships and work on personal development during leisure time are happier than others. Furthermore, shopping, reading books, attending cultural events, getting together with relatives, listening to music and attending sporting events are associated with higher levels of happiness. Spending time on the internet or watching TV is not associated with higher levels of happiness as compared to these other activities.
Research has shown that culture influences how we measure happiness and leisure. While SWB is a commonly used measure of happiness in North America and Europe, this may not be the case internationally. Quality of life (QOL) may be a better measure of happiness and leisure in Asian countries, especially Korea. Countries such as China and Japan may require a different measurement of happiness, as societal differences may influence the concept of happiness (i.e. economic variables, cultural practices, and social networks) beyond what QOL is able to measure. There seem to be some differences in leisure preference cross-culturally. Within the Croatian culture, family related leisure activities may enhance SWB across a large spectrum of ages ranging from adolescent to older adults, in both women and men. Active socializing and visiting cultural events are also associated with high levels of SWB across varying age and gender. Italians seem to prefer social conceptions of leisure as opposed to individualistic conceptions. Although different groups of individuals may prefer varying types and amount of leisure activity, this variability is likely due to the differing motivations and goals that an individual intends to fulfill with their leisure time.
Research suggests that specific leisure interventions enhance feelings of SWB. This is both a top-down and bottom-up effect, in that leisure satisfaction causally affects SWB, and SWB causally affects leisure satisfaction. This bi-directional effect is stronger in retired individuals than in working individuals. Furthermore, it appears that satisfaction with our leisure at least partially explains the relationship between our engagement in leisure and our SWB. Broadly speaking, researchers classify leisure into active (e.g. volunteering, socializing, sports and fitness) and passive leisure (e.g. watching television and listening to the radio). Among older adults, passive leisure activities and personal leisure activities (e.g. sleeping, eating, and bathing) correlate with higher levels of SWB and feelings of relaxation than active leisure activities. Thus, although significant evidence has demonstrated that active leisure is associated with higher levels of SWB, or happiness, this may not be the case with older populations.
Both regular and irregular involvement in sports leisure can result in heightened SWB. Serious, or systematic involvement in certain leisure activities, such as taekwondo, correlates with personal growth and a sense of happiness. Additionally, more irregular (e.g. seasonal) sports activities, such as skiing, are also correlated with high SWB. Furthermore, the relationship between pleasure and skiing is thought to be caused in part by a sense of flow and involvement with the activity. Leisure activities, such as meeting with friends, participating in sports, and going on vacation trips, positively correlate with life satisfaction. It may also be true that going on a vacation makes our lives seem better, but does not necessarily make us happier in the long term. Research regarding vacationing or taking a holiday trip is mixed. Although the reported effects are mostly small, some evidence points to higher levels of SWB, or happiness, after taking a holiday.
Economic security.
Poverty alleviation is associated with happier populations. According to a recent systematic review of the economic literature on life satisfaction, volatile or high inflation is bad for a population's well-being, particularly for those with a right-wing political orientation. This suggests that the impact of disruptions to economic security is partly mediated or modified by beliefs about economic security.
Political stability.
A VoxEU analysis of the economic determinants of happiness found that life satisfaction explains the largest share of an existing government's vote share, followed by economic growth, which itself explains six times as much as employment and twice as much as inflation.
Economic freedom.
Individualistic societies have happier populations. Institutions of economic freedom are associated with increased wealth inequality, but this does not necessarily contribute to decreases in aggregate well-being or subjective well-being at the population level. In fact, income inequality has been found to enhance global well-being.
There is some debate over whether living among poor neighbours makes one happier, and whether living among rich neighbours dulls the happiness that comes from wealth. This is purported to work by way of an upward or downward comparison effect (keeping up with the Joneses). In the United States, the balance of evidence favours the hypothesis that living in poor neighbourhoods makes one less happy and that living in rich neighbourhoods actually makes one happier. While social status matters, a balance of factors such as amenities, safe areas and well-maintained housing turns the tide in favour of the argument that richer neighbours are happier neighbours.
Democracy.
"The right to participate in the political process, measured by the extent of direct democratic rights across regions, is strongly correlated with subjective well-being (Frey and Stutzer, 2002) ... a potential mechanism that explains this relationship is the perception of procedural fairness and social mobility." Institutions and well-being, democracy and federalism are associated with a happier population. Correspondingly, political engagement and activism have associated health benefits.
On the other hand, some non-democratic countries such as China and Saudi Arabia top the Ipsos list of countries where the citizenry is most happy with their government's direction. That suggests that voting preferences may not translate well into overall satisfaction with the government's direction. In any case, both of these measures reflect revealed preference and domain-specific satisfaction rather than overall subjective well-being.
Economic development.
Historically, economists thought economic growth was unrelated to population-level well-being, a phenomenon labelled the Easterlin paradox. More robust research has since identified a link between economic development and the well-being of the population.
A 2017 meta-analysis shows that the impact of infrastructure expenditure on economic growth varies considerably, so one cannot assume that an infrastructure project will yield welfare benefits. The paper does not investigate or elaborate on any modifiable variables that might predict the value of a project. However, according to a 2013 meta-analysis, government spending on roads and primary industries offers the best value for transport spending.
Discount rates of 7% ± 3% per annum are typically applied to public infrastructure projects in Australia. Smaller real discount rates are used internationally to calculate the social return on investment by governments.
Alternative approach: economic consequences of happiness.
While mainstream happiness economics has focused on identifying the determinants of happiness, an alternative approach in the discipline examines instead the economic consequences of happiness. Happiness may act as a determinant of economic outcomes: it increases productivity, predicts one's future income and affects labour market performance. There is a growing number of studies supporting the so-called "happy-productive worker" thesis. The positive and causal impact of happiness on an individual's productivity has been established in experimental studies.
Related studies.
The Satisfaction with Life Index is an attempt to show the average self-reported happiness in different nations. This is an example of a recent trend to use direct measures of happiness, such as surveys asking people how happy they are, as an alternative to traditional measures of policy success such as GDP or GNP. Some studies suggest that happiness can be measured effectively. In November 2008, the Inter-American Development Bank (IDB) published a major study on happiness economics in Latin America and the Caribbean.
There are also several examples of measures that include self-reported happiness as one variable. Happy Life Years, a concept introduced by Dutch sociologist Ruut Veenhoven, combines self-reported happiness with life expectancy. The Happy Planet Index combines it with life expectancy and ecological footprint.
Gross National Happiness (GNH) is a concept introduced by the King of Bhutan in 1972 as an alternative to GDP. Several countries have already developed or are in the process of developing such an index. Bhutan's index has led that country to limit the amount of deforestation it will allow and to require all tourists to its nation to spend US$200.
After the military coup of 2006, Thailand also instituted an index. The stated promise of the new Prime Minister Surayud Chulanont is to make the Thai people not only richer but happier as well. Much like GDP results, Thailand releases monthly GNH data. The Thai GNH index is based on a 1–10 scale with 10 being the happiest. As of 13 May 2007, the Thai GNH measured 5.1 points. The index uses poll data from the population surveying various satisfaction factors such as security, public utilities, good governance, trade, social justice, allocation of resources, education and community problems.
Australia, China, France and the United Kingdom are also coming up with indexes to measure national happiness. The UK began to measure national wellbeing in 2012. North Korea also announced an international Happiness Index in 2011 through Korean Central Television. North Korea itself came in second, behind #1 China. Canada released the Canadian Index of Wellbeing (CIW) in 2011 to track changes in wellbeing. The CIW has adopted the following working definition of wellbeing: "The presence of the highest possible quality of life in its full breadth of expression, focused on but not necessarily exclusive to good living standards, robust health, a sustainable environment, vital communities, an educated populace, balanced time use, high levels of democratic participation, and access to and participation in leisure and culture."
Ecuador's and Bolivia's new constitutions state the indigenous concept of "good life" ("buen vivir" in Spanish, "sumak kawsay" in Quichua, and "suma qamaña" in Aymara) as the goal of sustainable development.
Neoclassical economics.
Neoclassical economics, like classical economics, is not subsumed under the term "happiness economics", although the original goal was to increase the happiness of the people. Classical and neoclassical economics are stages in the development of welfare economics and are characterized by mathematical modeling. Happiness economics represents a radical break with this tradition. The measurement of subjective happiness or life satisfaction by means of survey research across nations and time (in addition to objective measures like lifespan, wealth, and security) marks the beginning of happiness economics.
Criticism.
Some have suggested that establishing happiness as a metric is only meant to serve political goals. Recently there has been concern that happiness research could be used to advance authoritarian aims. As a result, some participants at a happiness conference in Rome have suggested that happiness research should not be used as a matter of public policy but rather used to inform individuals.
Even at the individual level, there is debate over how much effect external forces can have on happiness. Less than 3% of an individual's level of happiness comes from external sources such as employment, education level, marital status, and socioeconomic status. Relatedly, four of the Big Five personality traits are substantially associated with life satisfaction, while openness to experience is not. Having a high internal locus of control leads to higher reported levels of happiness.
Even when happiness can be affected by external sources, it is subject to strong hedonic adaptation: specific events such as an increase in income, disability, unemployment, and loss (bereavement) have only short-term (about a year) effects on a person's overall happiness, and after a while happiness may return to levels similar to those of unaffected peers.
What has the most influence over happiness are internal factors such as genetics, personality traits, and internal locus of control. It is theorized that 50% of the variation in happiness levels comes from genetic sources, a contribution known as the genetic set point. The genetic set point is assumed to be stable over time, fixed, and immune to influence or control. This goes along with findings that well-being surveys have a naturally positive baseline.
With such strong internal forces on happiness, it is hard to affect a person's happiness externally. This in turn lends support to the idea that establishing a happiness metric serves mainly political ends and has little other use. Supporting this further, a country's aggregate level of SWB is believed to account for more variance in government vote share than standard macroeconomic variables, such as income and employment.
Technical issues.
According to Bond and Lang (2018), results are skewed by the fact that respondents have to "round" their true happiness to a scale of, for example, 3 or 7 alternatives (e.g., very happy, pretty happy, not too happy). This "rounding error" may make a less happy group appear happier on average. This would not be the case if the happiness of both groups were normally distributed with the same variance, but that is usually not so, based on their results. Under some not-implausible log-normal assumptions about the underlying scale, typical results can be reversed.
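The sensitivity that Bond and Lang describe can be illustrated with a small simulation. The sketch below is not taken from their paper; the two latent distributions and the cut-points are illustrative assumptions, chosen only to show that one admissible monotone rescaling of the same ordinal answers reverses the ranking of the group means.
```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative latent-happiness draws for two groups (assumed parameters,
# not Bond and Lang's): group B has the higher mean, group A the larger spread.
a = rng.normal(loc=0.0, scale=1.5, size=1_000_000)
b = rng.normal(loc=0.3, scale=0.5, size=1_000_000)
print(f"latent means:    A={a.mean():.3f}  B={b.mean():.3f}")          # B above A

# Survey answers reveal only the ordering of responses, so the latent scale
# could equally well be exp(h) instead of h.  Under that equally admissible
# scale the ranking of group means reverses, because A's larger variance
# dominates the log-normal mean.
print(f"exp-scale means: A={np.exp(a).mean():.3f}  B={np.exp(b).mean():.3f}")  # A above B

# Coarse three-point answers (cut-points are arbitrary assumptions) are
# identical whichever scale is "true", so the data cannot settle the ranking.
cuts = np.array([-0.5, 0.5])
print("same 3-point answers under either scale:",
      np.array_equal(np.digitize(a, cuts), np.digitize(np.exp(a), np.exp(cuts))))
```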
They also show that the "reporting function" seems to be different for different groups and even for the same individual at different times. For example, when a person becomes disabled, they soon start to lower their threshold for a given answer (e.g., "pretty happy"). That is, they give a higher answer than they would have given at the same happiness state before becoming disabled.
References and notes.
<templatestyles src="Reflist/styles.css" />
Bibliography.
Books
Articles | [
{
"math_id": 0,
"text": "W_{it} = \\alpha + \\beta{x_{it}} + \\epsilon_{it}"
},
{
"math_id": 1,
"text": "W"
},
{
"math_id": 2,
"text": "i"
},
{
"math_id": 3,
"text": "t"
},
{
"math_id": 4,
"text": "x"
}
]
| https://en.wikipedia.org/wiki?curid=7257476 |
72576 | Axial precession | Change of rotational axis in an astronomical body
In astronomy, axial precession is a gravity-induced, slow, and continuous change in the orientation of an astronomical body's rotational axis. In the absence of precession, the astronomical body's orbit would show axial parallelism. In particular, axial precession can refer to the gradual shift in the orientation of Earth's axis of rotation in a cycle of approximately 26,000 years. This is similar to the precession of a spinning top, with the axis tracing out a pair of cones joined at their apices. The term "precession" typically refers only to this largest part of the motion; other changes in the alignment of Earth's axis—nutation and polar motion—are much smaller in magnitude.
Earth's precession was historically called the precession of the equinoxes, because the equinoxes moved westward along the ecliptic relative to the fixed stars, opposite to the yearly motion of the Sun along the ecliptic. Historically,
the discovery of the precession of the equinoxes is usually attributed in the West to the 2nd-century-BC astronomer Hipparchus. With improvements in the ability to calculate the gravitational force between planets during the first half of the nineteenth century, it was recognized that the ecliptic itself moved slightly, which was named planetary precession, as early as 1863, while the dominant component was named lunisolar precession. Their combination was named general precession, instead of precession of the equinoxes.
Lunisolar precession is caused by the gravitational forces of the Moon and Sun on Earth's equatorial bulge, causing Earth's axis to move with respect to inertial space. Planetary precession (an advance) is due to the small angle between the gravitational force of the other planets on Earth and its orbital plane (the ecliptic), causing the plane of the ecliptic to shift slightly relative to inertial space. Lunisolar precession is about 500 times greater than planetary precession. In addition to the Moon and Sun, the other planets also cause a small movement of Earth's axis in inertial space, making the contrast in the terms lunisolar versus planetary misleading, so in 2006 the International Astronomical Union recommended that the dominant component be renamed the precession of the equator, and the minor component be renamed precession of the ecliptic, but their combination is still named general precession. Many references to the old terms exist in publications predating the change.
Nomenclature.
The term "Precession" is derived from the Latin "praecedere" ("to precede, to come before or earlier"). The stars viewed from Earth are seen to proceed from east to west daily, due to the Earth's diurnal motion, and yearly, due to the Earth's revolution around the Sun. At the same time the stars can be observed to anticipate slightly such motion, at the rate of approximately 50 arc seconds per year, a phenomenon known as the "precession of the equinoxes".
In describing this motion astronomers generally have shortened the term to simply "precession". In describing the "cause" of the motion physicists have also used the term "precession", which has led to some confusion between the observable phenomenon and its cause, which matters because in astronomy, some precessions are real and others are apparent. This issue is further obfuscated by the fact that many astronomers are physicists or astrophysicists.
The term "precession" used in astronomy generally describes the observable precession of the equinox (the stars moving retrograde across the sky), whereas the term "precession" as used in physics, generally describes a mechanical process.
Effects.
The precession of the Earth's axis has a number of observable effects. First, the positions of the south and north celestial poles appear to move in circles against the space-fixed backdrop of stars, completing one circuit in approximately 26,000 years. Thus, while today the star Polaris lies approximately at the north celestial pole, this will change over time, and other stars will become the "north star". In approximately 3,200 years, the star Gamma Cephei in the Cepheus constellation will succeed Polaris for this position. The south celestial pole currently lacks a bright star to mark its position, but over time precession also will cause bright stars to become South Stars. As the celestial poles shift, there is a corresponding gradual shift in the apparent orientation of the whole star field, as viewed from a particular position on Earth.
Secondly, the position of the Earth in its orbit around the Sun at the solstices, equinoxes, or other time defined relative to the seasons, slowly changes. For example, suppose that the Earth's orbital position is marked at the summer solstice, when the Earth's axial tilt is pointing directly toward the Sun. One full orbit later, when the Sun has returned to the same apparent position relative to the background stars, the Earth's axial tilt is not now directly toward the Sun: because of the effects of precession, it is a little way "beyond" this. In other words, the solstice occurred a little "earlier" in the orbit. Thus, the tropical year, measuring the cycle of seasons (for example, the time from solstice to solstice, or equinox to equinox), is about 20 minutes shorter than the sidereal year, which is measured by the Sun's apparent position relative to the stars. After about 26 000 years the difference amounts to a full year, so the positions of the seasons relative to the orbit are "back where they started". (Other effects also slowly change the shape and orientation of the Earth's orbit, and these, in combination with precession, create various cycles of differing periods; see also Milankovitch cycles. The magnitude of the Earth's tilt, as opposed to merely its orientation, also changes slowly over time, but this effect is not attributed directly to precession.)
For identical reasons, the apparent position of the Sun relative to the backdrop of the stars at some seasonally fixed time slowly regresses a full 360° through all twelve traditional constellations of the zodiac, at the rate of about 50.3 seconds of arc per year, or 1 degree every 71.6 years.
At present, the rate of precession corresponds to a period of 25,772 years, so the tropical year is shorter than the sidereal year by 1,224.5 seconds (20 min 24.5 sec ≈ (365.24219 × 86400) / 25772).
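These figures follow directly from the quoted period and year length; a minimal numerical check (using only values stated in this section) is:
```python
# Difference between the sidereal and tropical year implied by the precession period.
tropical_days = 365.24219        # days in the tropical year, as quoted above
period_years = 25772             # precession period in years

diff_seconds = tropical_days * 86400 / period_years
print(f"{diff_seconds:.1f} s  =  {diff_seconds / 60:.1f} min")          # ≈ 1224.5 s ≈ 20.4 min

# Equivalent statements of the precession rate:
arcsec_per_year = 360 * 3600 / period_years
print(f"{arcsec_per_year:.1f}\"/yr, i.e. 1 degree every {3600 / arcsec_per_year:.1f} years")
```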
The rate itself varies somewhat with time (see Values below), so one cannot say that in exactly 25,772 years the Earth's axis will be back to where it is now.
For further details, see Changing pole stars and Polar shift and equinoxes shift, below.
History.
Hellenistic world.
Hipparchus.
The discovery of precession usually is attributed to Hipparchus (190–120 BC) of Rhodes or Nicaea, a Greek astronomer. According to Ptolemy's "Almagest", Hipparchus measured the longitude of Spica and other bright stars. Comparing his measurements with data from his predecessors, Timocharis (320–260 BC) and Aristillus (~280 BC), he concluded that Spica had moved 2° relative to the autumnal equinox. He also compared the lengths of the tropical year (the time it takes the Sun to return to an equinox) and the sidereal year (the time it takes the Sun to return to a fixed star), and found a slight discrepancy. Hipparchus concluded that the equinoxes were moving ("precessing") through the zodiac, and that the rate of precession was not less than 1° in a century, in other words, completing a full cycle in no more than 36,000 years.
Virtually all of the writings of Hipparchus are lost, including his work on precession. They are mentioned by Ptolemy, who explains precession as the rotation of the celestial sphere around a motionless Earth. It is reasonable to presume that Hipparchus, similarly to Ptolemy, thought of precession in geocentric terms as a motion of the heavens, rather than of the Earth.
Ptolemy.
The first astronomer known to have continued Hipparchus's work on precession is Ptolemy in the second century AD. Ptolemy measured the longitudes of Regulus, Spica, and other bright stars with a variation of Hipparchus's lunar method that did not require eclipses. Before sunset, he measured the longitudinal arc separating the Moon from the Sun. Then, after sunset, he measured the arc from the Moon to the star. He used Hipparchus's model to calculate the Sun's longitude, and made corrections for the Moon's motion and its parallax. Ptolemy compared his own observations with those made by Hipparchus, Menelaus of Alexandria, Timocharis, and Agrippa. He found that between Hipparchus's time and his own (about 265 years), the stars had moved 2°40', or 1° in 100 years (36" per year; the rate accepted today is about 50" per year or 1° in 72 years). It is possible, however, that Ptolemy simply trusted Hipparchus' figure instead of making his own measurements. He also confirmed that precession affected all fixed stars, not just those near the ecliptic, and his cycle had the same period of 36,000 years as that of Hipparchus.
Other authors.
Most ancient authors did not mention precession and, perhaps, did not know of it. For instance, Proclus rejected precession, while Theon of Alexandria, a commentator on Ptolemy in the fourth century, accepted Ptolemy's explanation. Theon also reports an alternate theory:
"According to certain opinions ancient astrologers believe that from a certain epoch the solstitial signs have a motion of 8° in the order of the signs, after which they go back the same amount. ..." (Dreyer 1958, p. 204)
Instead of proceeding through the entire sequence of the zodiac, the equinoxes "trepidated" back and forth over an arc of 8°. The theory of trepidation is presented by Theon as an alternative to precession.
Alternative discovery theories.
Babylonians.
Various assertions have been made that other cultures discovered precession independently of Hipparchus. According to Al-Battani, the Chaldean astronomers had distinguished the tropical and sidereal year so that by approximately 330 BC, they would have been in a position to describe precession, if inaccurately, but such claims generally are regarded as unsupported.
Maya.
Archaeologist Susan Milbrath has speculated that the Mesoamerican Long Count calendar of "30,000 years involving the Pleiades...may have been an effort to calculate the precession of the equinox." This view is held by few other professional scholars of Maya civilization.
Ancient Egyptians.
Similarly, it is claimed the precession of the equinoxes was known in Ancient Egypt, prior to the time of Hipparchus (the Ptolemaic period). These claims remain controversial. Ancient Egyptians kept accurate calendars and recorded dates on temple walls, so it would be a simple matter for them to plot the "rough" precession rate.
The Dendera Zodiac, a star-map inside the Hathor temple at Dendera, allegedly records the precession of the equinoxes. In any case, if the ancient Egyptians knew of precession, their knowledge is not recorded as such in any of their surviving astronomical texts.
Michael Rice, a popular writer on Ancient Egypt, has written that Ancient Egyptians must have observed the precession, and suggested that this awareness had profound effects on their culture. Rice noted that Egyptians re-oriented temples in response to the precession of associated stars.
India.
Before 1200, India had two theories of trepidation, one with a rate and another without a rate, and several related models of precession. Each had minor changes or corrections by various commentators. The dominant of the three was the trepidation described by the most respected Indian astronomical treatise, the "Surya Siddhanta" (3:9–12), composed c. 400 but revised during the next few centuries. It used a sidereal epoch, or ayanamsa, that is still used by all Indian calendars, varying over the ecliptic longitude of 19°11′ to 23°51′, depending on the group consulted. This epoch causes the roughly 30 Indian calendar years to begin 23–28 days after the modern March equinox. The March equinox of the "Surya Siddhanta" librated 27° in both directions from the sidereal epoch. Thus the equinox moved 54° in one direction and then back 54° in the other direction. This cycle took 7200 years to complete at a rate of 54″/year. The equinox coincided with the epoch at the beginning of the "Kali Yuga" in −3101 and again 3,600 years later in 499. The direction changed from prograde to retrograde midway between these years at −1301 when it reached its maximum deviation of 27°, and would have remained retrograde, the same direction as modern precession, for 3600 years until 2299.
Another trepidation was described by Varāhamihira (c. 550). His trepidation consisted of an arc of 46°40′ in one direction and a return to the starting point. Half of this arc, 23°20′, was identified with the Sun's maximum declination on either side of the equator at the solstices. But no period was specified, thus no annual rate can be ascertained.
Several authors have described precession to be near 200,000 revolutions in a Kalpa of 4,320,000,000 years, which would be a rate of 200,000 × 360 × 3600 / 4,320,000,000 = 60″/year. They probably deviated from an even 200,000 revolutions to make the accumulated precession zero near 500. Visnucandra (c. 550–600) mentions 189,411 revolutions in a Kalpa or 56.8″/year. Bhaskara I (c. 600–680) mentions [1]94,110 revolutions in a Kalpa or 58.2″/year. Bhāskara II (c. 1150) mentions 199,699 revolutions in a Kalpa or 59.9″/year.
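The quoted rates follow from a straightforward conversion, since one revolution is 360 × 3600 arcseconds spread over a Kalpa of 4,320,000,000 years; a quick check of the figures above:
```python
KALPA_YEARS = 4_320_000_000
ARCSEC_PER_REV = 360 * 3600

def rate(revolutions_per_kalpa):
    """Precession rate in arcseconds per year for a given revolution count per Kalpa."""
    return revolutions_per_kalpa * ARCSEC_PER_REV / KALPA_YEARS

for revs in (200_000, 189_411, 194_110, 199_699):
    print(f"{revs:>7,} rev/Kalpa -> {rate(revs):.1f}\"/yr")
# 200,000 -> 60.0   189,411 -> 56.8   194,110 -> 58.2   199,699 -> 59.9
```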
Chinese astronomy.
Yu Xi (fourth century AD) was the first Chinese astronomer to mention precession. He estimated the rate of precession as 1° in 50 years.
Middle Ages and Renaissance.
In medieval Islamic astronomy, precession was known based on Ptolemy's "Almagest", and by observations that refined the value.
Al-Battani, in his work "Zij Al-Sabi", mentions Hipparchus's calculation of precession and Ptolemy's value of 1 degree per 100 solar years, and says that he measured precession himself and found it to be one degree per 66 solar years.
Subsequently, Al-Sufi, in his "Book of Fixed Stars", mentions the same values, noting that Ptolemy's value for precession is 1 degree per 100 solar years. He then quotes a different value from "Zij Al Mumtahan", which was done during Al-Ma'mun's reign, of 1 degree for every 66 solar years. He also quotes the aforementioned "Zij Al-Sabi" of Al-Battani as adjusting coordinates for stars by 11 degrees and 10 minutes of arc to account for the difference between Al-Battani's time and Ptolemy's.
Later, the "Zij-i Ilkhani", compiled at the Maragheh observatory, sets the precession of the equinoxes at 51 arc seconds per annum, which is very close to the modern value of 50.2 arc seconds.
In the Middle Ages, Islamic and Latin Christian astronomers treated "trepidation" as a motion of the fixed stars to be "added to" precession. This theory is commonly attributed to the Arab astronomer Thabit ibn Qurra, but the attribution has been contested in modern times. Nicolaus Copernicus published a different account of trepidation in "De revolutionibus orbium coelestium" (1543). This work makes the first definite reference to precession as the result of a motion of the Earth's axis. Copernicus characterized precession as the third motion of the Earth.
Modern period.
Over a century later, Isaac Newton in "Philosophiae Naturalis Principia Mathematica" (1687) explained precession as a consequence of gravitation. However, Newton's original precession equations did not work, and were revised considerably by Jean le Rond d'Alembert and subsequent scientists.
Hipparchus's discovery.
Hipparchus gave an account of his discovery in "On the Displacement of the Solsticial and Equinoctial Points" (described in "Almagest" III.1 and VII.2). He measured the ecliptic longitude of the star Spica during lunar eclipses and found that it was about 6° west of the autumnal equinox. By comparing his own measurements with those of Timocharis of Alexandria (a contemporary of Euclid, who worked with Aristillus early in the 3rd century BC), he found that Spica's longitude had decreased by about 2° in the meantime (exact years are not mentioned in "Almagest"). Also in VII.2, Ptolemy gives more precise observations of two stars, including Spica, and concludes that in each case a 2° 40' change occurred between 128 BC and AD 139. Hence, 1° per century or one full cycle in 36,000 years, that is, the precessional period of Hipparchus as reported by Ptolemy; cf. page 328 in Toomer's translation of Almagest, 1998 edition. He also noticed this motion in other stars. He speculated that only the stars near the zodiac shifted over time. Ptolemy called this his "first hypothesis" ("Almagest" VII.1), but did not report any later hypothesis Hipparchus might have devised. Hipparchus apparently limited his speculations, because he had only a few older observations, which were not very reliable.
Because the equinoctial points are not marked in the sky, Hipparchus needed the Moon as a reference point; he used a lunar eclipse to measure the position of a star. Hipparchus already had developed a way to calculate the longitude of the Sun at any moment. A lunar eclipse happens during Full moon, when the Moon is at opposition, precisely 180° from the Sun. Hipparchus is thought to have measured the longitudinal arc separating Spica from the Moon. To this value, he added the calculated longitude of the Sun, plus 180° for the longitude of the Moon. He did the same procedure with Timocharis' data. Observations such as these eclipses, incidentally, are the main source of data about when Hipparchus worked, since other biographical information about him is minimal. The lunar eclipses he observed, for instance, took place on 21 April 146 BC, and 21 March 135 BC.
Hipparchus also studied precession in "On the Length of the Year". Two kinds of year are relevant to understanding his work. The tropical year is the length of time that the Sun, as viewed from the Earth, takes to return to the same position along the ecliptic (its path among the stars on the celestial sphere). The sidereal year is the length of time that the Sun takes to return to the same position with respect to the stars of the celestial sphere. Precession causes the stars to change their longitude slightly each year, so the sidereal year is longer than the tropical year. Using observations of the equinoxes and solstices, Hipparchus found that the length of the tropical year was 365+1/4−1/300 days, or 365.24667 days (Evans 1998, p. 209). Comparing this with the length of the sidereal year, he calculated that the rate of precession was not less than 1° in a century. From this information, it is possible to calculate that his value for the sidereal year was 365+1/4+1/144 days. By giving a minimum rate, he may have been allowing for errors in observation.
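A modern reconstruction of this arithmetic (a sketch, not Hipparchus's own procedure) shows how the two year lengths imply a precession rate of roughly 1° per century:
```python
from fractions import Fraction

tropical = 365 + Fraction(1, 4) - Fraction(1, 300)   # ≈ 365.24667 days
sidereal = 365 + Fraction(1, 4) + Fraction(1, 144)   # ≈ 365.25694 days

# Arc by which the equinox regresses each year: the day difference converted
# to degrees at the Sun's mean motion of 360° per sidereal year.
arc_deg = float((sidereal - tropical) * 360 / sidereal)
print(f"{arc_deg * 3600:.1f}\" per year, i.e. 1 degree in about {1 / arc_deg:.0f} years")
# ≈ 36.5" per year, consistent with "not less than 1° in a century"
```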
To approximate his tropical year, Hipparchus created his own lunisolar calendar by modifying those of Meton and Callippus in "On Intercalary Months and Days" (now lost), as described by Ptolemy in the "Almagest" III.1. The Babylonian calendar used a cycle of 235 lunar months in 19 years since 499 BC (with only three exceptions before 380 BC), but it did not use a specified number of days. The Metonic cycle (432 BC) assigned 6,940 days to these 19 years producing an average year of 365+1/4+1/76 or 365.26316 days. The Callippic cycle (330 BC) dropped one day from four Metonic cycles (76 years) for an average year of 365+1/4 or 365.25 days. Hipparchus dropped one more day from four Callippic cycles (304 years), creating the Hipparchic cycle with an average year of 365+1/4−1/304 or 365.24671 days, which was close to his tropical year of 365+1/4−1/300 or 365.24667 days.
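The average year lengths quoted for the three cycles follow from their day and year counts; the Callippic and Hipparchic day counts below are derived from the stated construction of dropping one day per four cycles:
```python
metonic_days, metonic_years = 6_940, 19
callippic_days, callippic_years = 4 * metonic_days - 1, 4 * metonic_years        # 27,759 d / 76 yr
hipparchic_days, hipparchic_years = 4 * callippic_days - 1, 4 * callippic_years  # 111,035 d / 304 yr

for name, days, years in (("Metonic", metonic_days, metonic_years),
                          ("Callippic", callippic_days, callippic_years),
                          ("Hipparchic", hipparchic_days, hipparchic_years)):
    print(f"{name:10s} {days}/{years} = {days / years:.5f} days")
# Metonic 365.26316, Callippic 365.25000, Hipparchic 365.24671
```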
Hipparchus's mathematical signatures are found in the Antikythera Mechanism, an ancient astronomical computer of the second century BC. The mechanism is based on a solar year, the Metonic Cycle, which is the period the Moon reappears in the same place in the sky with the same phase (full Moon appears at the same position in the sky approximately in 19 years), the Callipic cycle (which is four Metonic cycles and more accurate), the Saros cycle, and the Exeligmos cycles (three Saros cycles for the accurate eclipse prediction). Study of the Antikythera Mechanism showed that the ancients used very accurate calendars based on all the aspects of solar and lunar motion in the sky. In fact, the Lunar Mechanism which is part of the Antikythera Mechanism depicts the motion of the Moon and its phase, for a given time, using a train of four gears with a pin and slot device which gives a variable lunar velocity that is very close to Kepler's second law. That is, it takes into account the fast motion of the Moon at perigee and slower motion at apogee.
Changing pole stars.
A consequence of the precession is a changing pole star. Currently Polaris is extremely well suited to mark the position of the north celestial pole, as Polaris is a moderately bright star with a visual magnitude of 2.1 (variable), and is located about one degree from the pole, with no stars of similar brightness too close.
The previous pole star was Kochab (Beta Ursae Minoris, β UMi, β Ursae Minoris), the brightest star in the bowl of the "Little Dipper", located 16 degrees from Polaris. It held that role from 1500 BC to AD 500. It was not quite as accurate in its day as Polaris is today. Today, Kochab and its neighbor Pherkad are referred to as the "Guardians of the Pole" (meaning Polaris).
On the other hand, Thuban in the constellation Draco, which was the pole star in 3000 BC, is much less conspicuous at magnitude 3.67 (one-fifth as bright as Polaris); today it is invisible in light-polluted urban skies.
When Polaris becomes the north star again around 27,800, it will then be farther away from the pole than it is now due to its proper motion, while in 23,600 BC it came closer to the pole.
It is more difficult to find the south celestial pole in the sky at this moment, as that area is a particularly bland portion of the sky. The nominal south pole star is Sigma Octantis, which with magnitude 5.5 is barely visible to the naked eye even under ideal conditions. That will change from the 80th to the 90th centuries, however, when the south celestial pole travels through the False Cross.
This situation also is seen on a star map. The orientation of the south pole is moving toward the Southern Cross constellation. For the last 2,000 years or so, the Southern Cross has pointed to the south celestial pole. As a consequence, the constellation is difficult to view from subtropical northern latitudes, unlike in the time of the ancient Greeks. The Southern Cross can be viewed from as far north as Miami (about 25° N), but only during the winter/early spring.
Polar shift and equinoxes shift.
The images at right attempt to explain the relation between the precession of the Earth's axis and the shift in the equinoxes. These images show the position of the Earth's axis on the "celestial sphere", a fictitious sphere which places the stars according to their position as seen from Earth, regardless of their actual distance. The first image shows the celestial sphere from the outside, with the constellations in mirror image. The second image shows the perspective of a near-Earth position as seen through a very wide angle lens (from which the apparent distortion arises).
The rotation axis of the Earth describes, over a period of 25,700 years, a small blue circle among the stars near the top of the diagram, centered on the ecliptic north pole (the blue letter E) and with an angular radius of about 23.4°, an angle known as the "obliquity of the ecliptic". The direction of precession is opposite to the daily rotation of the Earth on its axis. The brown axis was the Earth's rotation axis 5,000 years ago, when it pointed to the star Thuban. The yellow axis, pointing to Polaris, marks the axis now.
The equinoxes occur where the celestial equator intersects the ecliptic (red line), that is, where the Earth's axis is perpendicular to the line connecting the centers of the Sun and Earth. The term "equinox" here refers to a point on the celestial sphere so defined, rather than the moment in time when the Sun is overhead at the Equator (though the two meanings are related). When the axis "precesses" from one orientation to another, the equatorial plane of the Earth (indicated by the circular grid around the equator) moves. The celestial equator is just the Earth's equator projected onto the celestial sphere, so it moves as the Earth's equatorial plane moves, and the intersection with the ecliptic moves with it. The positions of the poles and equator "on Earth" do not change, only the orientation of the Earth against the fixed stars.
As seen from the brown grid, 5,000 years ago, the March equinox was close to the star Aldebaran in Taurus. Now, as seen from the yellow grid, it has shifted (indicated by the red arrow) to somewhere in the constellation of Pisces.
Still pictures like these are only first approximations, as they do not take into account the variable speed of the precession, the variable obliquity of the ecliptic, the planetary precession (which is a slow rotation of the ecliptic plane itself, presently around an axis located on the plane, with longitude 174.8764°) and the proper motions of the stars.
The precessional eras of each constellation, often known as "Great Months", are given, approximately, in the table below:
Cause.
The precession of the equinoxes is caused by the gravitational forces of the Sun and the Moon, and to a lesser extent other bodies, on the Earth. It was first explained by Sir Isaac Newton.
Axial precession is similar to the precession of a spinning top. In both cases, the applied force is due to gravity. For a spinning top, this force tends to be almost parallel to the rotation axis initially and increases as the top slows down. For a gyroscope on a stand it can approach 90 degrees. For the Earth, however, the applied forces of the Sun and the Moon are closer to perpendicular to the axis of rotation.
The Earth is not a perfect sphere but an oblate spheroid, with an equatorial diameter about 43 kilometers larger than its polar diameter. Because of the Earth's axial tilt, during most of the year the half of this bulge that is closest to the Sun is off-center, either to the north or to the south, and the far half is off-center on the opposite side. The gravitational pull on the closer half is stronger, since gravity decreases with the square of distance, so this creates a small torque on the Earth as the Sun pulls harder on one side of the Earth than the other. The axis of this torque is roughly perpendicular to the axis of the Earth's rotation so the axis of rotation precesses. If the Earth were a perfect sphere, there would be no precession.
This average torque is perpendicular to the direction in which the rotation axis is tilted away from the ecliptic pole, so that it does not change the axial tilt itself. The magnitude of the torque from the Sun (or the Moon) varies with the angle between the Earth's spin axis direction and that of the gravitational attraction. It approaches zero when they are perpendicular. For example, this happens at the equinoxes in the case of the interaction with the Sun. This can be seen to be the case because at that time the near and far points are aligned with the gravitational attraction, so there is no torque due to the difference in gravitational attraction.
Although the above explanation involved the Sun, the same explanation holds true for any object moving around the Earth, along or close to the ecliptic, notably, the Moon. The combined action of the Sun and the Moon is called the lunisolar precession. In addition to the steady progressive motion (resulting in a full circle in about 25,700 years) the Sun and Moon also cause small periodic variations, due to their changing positions. These oscillations, in both precessional speed and axial tilt, are known as the nutation. The most important term has a period of 18.6 years and an amplitude of 9.2 arcseconds.
In addition to lunisolar precession, the actions of the other planets of the Solar System cause the whole ecliptic to rotate slowly around an axis which has an ecliptic longitude of about 174° measured on the instantaneous ecliptic. This so-called planetary precession shift amounts to a rotation of the ecliptic plane of 0.47 seconds of arc per year (more than a hundred times smaller than lunisolar precession). The sum of the two precessions is known as the general precession.
Equations.
The tidal force on Earth due to a perturbing body (Sun, Moon or planet) is expressed by Newton's law of universal gravitation, whereby the gravitational force of the perturbing body on the side of Earth nearest is said to be greater than the gravitational force on the far side by an amount proportional to the difference in the cubes of the distances between the near and far sides. If the gravitational force of the perturbing body acting on the mass of the Earth as a point mass at the center of Earth (which provides the centripetal force causing the orbital motion) is subtracted from the gravitational force of the perturbing body everywhere on the surface of Earth, what remains may be regarded as the tidal force. This gives the paradoxical notion of a force acting away from the satellite but in reality it is simply a lesser force toward that body due to the gradient in the gravitational field. For precession, this tidal force can be grouped into two forces which only act on the equatorial bulge outside of a mean spherical radius. This couple can be decomposed into two pairs of components, one pair parallel to Earth's equatorial plane toward and away from the perturbing body which cancel each other out, and another pair parallel to Earth's rotational axis, both toward the ecliptic plane. The latter pair of forces creates the following torque vector on Earth's equatorial bulge:
formula_0
where
"GM", standard gravitational parameter of the perturbing body
"r", geocentric distance to the perturbing body
"C", moment of inertia around Earth's axis of rotation
"A", moment of inertia around any equatorial diameter of Earth
"C" − "A", moment of inertia of Earth's equatorial bulge ("C" > "A")
"δ", declination of the perturbing body (north or south of equator)
"α", right ascension of the perturbing body (east from March equinox).
The three unit vectors of the torque at the center of the Earth (top to bottom) are x on a line within the ecliptic plane (the intersection of Earth's equatorial plane with the ecliptic plane) directed toward the March equinox, y on a line in the ecliptic plane directed toward the summer solstice (90° east of x), and z on a line directed toward the north pole of the ecliptic.
The value of the three sinusoidal terms in the direction of x (sin"δ" cos"δ" sin"α") for the Sun is a sine squared waveform varying from zero at the equinoxes (0°, 180°) to 0.36495 at the solstices (90°, 270°). The value in the direction of y (sin"δ" cos"δ" (−cos"α")) for the Sun is a sine wave varying from zero at the four equinoxes and solstices to ±0.19364 (slightly more than half of the sine squared peak) halfway between each equinox and solstice with peaks slightly skewed toward the equinoxes (43.37°(−), 136.63°(+), 223.37°(−), 316.63°(+)). Both solar waveforms have about the same peak-to-peak amplitude and the same period, half of a revolution or half of a year. The value in the direction of z is zero.
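A short numerical sketch can reproduce the main solar figures. It assumes the Sun moves uniformly in ecliptic longitude on a circular orbit with an obliquity of about 23.439° (assumptions not restated in this paragraph), so it checks the x-term peak and the averages rather than the finer details of the quoted waveforms:
```python
import numpy as np

EPS = np.radians(23.43929)                                  # assumed obliquity of the ecliptic
lam = np.linspace(0, 2 * np.pi, 100_000, endpoint=False)    # Sun's ecliptic longitude over one year

sin_d = np.sin(EPS) * np.sin(lam)                # sin(declination)
cos_d = np.sqrt(1 - sin_d**2)
sin_a = np.cos(EPS) * np.sin(lam) / cos_d        # sin(right ascension)
cos_a = np.cos(lam) / cos_d                      # cos(right ascension)

x_term = sin_d * cos_d * sin_a                   # the component that drives precession
y_term = sin_d * cos_d * (-cos_a)

print(f"x-term peak    : {x_term.max():.5f}")    # ≈ 0.36495 = sin(eps)*cos(eps), at the solstices
print(f"x-term average : {x_term.mean():.5f}")   # ≈ 0.18247, half the peak (mean of sin² is 1/2)
print(f"y-term average : {y_term.mean():.6f}")   # ≈ 0, so it contributes nothing to precession
```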
The average torque of the sine wave in the direction of y is zero for the Sun or Moon, so this component of the torque does not affect precession. The average torque of the sine squared waveform in the direction of x for the Sun or Moon is:
formula_1
where
formula_2, semimajor axis of Earth's (Sun's) orbit or Moon's orbit
"e", eccentricity of Earth's (Sun's) orbit or Moon's orbit
and 1/2 accounts for the average of the sine squared waveform, formula_3 accounts for the average distance cubed of the Sun or Moon from Earth over the entire elliptical orbit, and ε (the angle between the equatorial plane and the ecliptic plane) is the maximum value of "δ" for the Sun and the average maximum value for the Moon over an entire 18.6 year cycle.
Precession is:
formula_4
where "ω" is Earth's angular velocity and "Cω" is Earth's angular momentum. Thus the first order component of precession due to the Sun is:
formula_5
whereas that due to the Moon is:
formula_6
where "i" is the angle between the plane of the Moon's orbit and the ecliptic plane. In these two equations, the Sun's parameters are within square brackets labeled S, the Moon's parameters are within square brackets labeled L, and the Earth's parameters are within square brackets labeled E. The term formula_7 accounts for the inclination of the Moon's orbit relative to the ecliptic. The term ("C" − "A")/"C" is Earth's dynamical ellipticity or flattening, which is adjusted to the observed precession because Earth's internal structure is not known with sufficient detail. If Earth were homogeneous the term would equal its third eccentricity squared,
formula_8
where a is the equatorial radius and c is the polar radius, so "e"2 = 0.003358481.
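The quoted value can be checked directly; the radii below are inserted as assumptions (rounded standard Earth radii), since the numerical values are not reproduced in this excerpt:
```python
a_eq = 6_378_137.0     # assumed equatorial radius in metres
c_pol = 6_356_752.0    # assumed polar radius in metres (rounded)

e3_sq = (a_eq**2 - c_pol**2) / (a_eq**2 + c_pol**2)   # third eccentricity squared
print(f"{e3_sq:.9f}")                                  # ≈ 0.003358481
```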
Applicable parameters for J2000.0 rounded to seven significant digits (excluding leading 1) are:
which yield
"dψS/dt" = 2.450183×10−12 /s
"dψL/dt" = 5.334529×10−12 /s
both of which must be converted to ″/a (arcseconds/annum) by the number of arcseconds in 2π radians (1.296×106″/2π) and the number of seconds in one annum (a Julian year) (3.15576×107s/a):
"dψS/dt" = 15.948788″/a vs 15.948870″/a from Williams
"dψL/dt" = 34.723638″/a vs 34.457698″/a from Williams.
The solar equation is a good representation of precession due to the Sun because Earth's orbit is close to an ellipse, being only slightly perturbed by the other planets. The lunar equation is not as good a representation of precession due to the Moon because the Moon's orbit is greatly distorted by the Sun and neither the radius nor the eccentricity is constant over the year.
Values.
Simon Newcomb's calculation at the end of the 19th century for general precession ("p") in longitude gave a value of 5,025.64 arcseconds per tropical century, and was the generally accepted value until artificial satellites delivered more accurate observations and electronic computers allowed more elaborate models to be calculated. Jay Henry Lieske developed an updated theory in 1976, where "p" equals 5,029.0966 arcseconds (or 1.3969713 degrees) per Julian century. Modern techniques such as VLBI and LLR allowed further refinements, and the International Astronomical Union adopted a new constant value in 2000, and new computation methods and polynomial expressions in 2003 and 2006; the accumulated precession is:
"pA" = 5,028.796195 "T" + 1.1054348 "T"2 + higher order terms, in arcseconds, with "T", the time in Julian centuries (that is, 36,525 days) since the epoch of 2000.
The rate of precession is the derivative of that:
"p" = 5,028.796195 + 2.2108696 "T" + higher order terms.
The constant term of this speed (5,028.796195 arcseconds per century in above equation) corresponds to one full precession circle in 25,771.57534 years (one full circle of 360 degrees divided by 50.28796195 arcseconds per year) although some other sources put the value at 25771.4 years, leaving a small uncertainty.
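The correspondence between the rate and the quoted period can be checked by evaluating the rate polynomial (truncated here after the linear term) at T = 0:
```python
def precession_rate(T):
    """General-precession rate in arcseconds per Julian century, using only the
    constant and linear terms quoted above (T in Julian centuries from J2000.0)."""
    return 5028.796195 + 2.2108696 * T

rate_now = precession_rate(0) / 100                 # arcseconds per year at J2000.0
print(f"rate   : {rate_now:.8f}\"/yr")
print(f"period : {360 * 3600 / rate_now:.2f} yr")   # ≈ 25,771.58 yr
```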
The precession rate is not a constant, but is (at the moment) slowly increasing over time, as indicated by the linear (and higher order) terms in "T". In any case it must be stressed that this formula is only valid over a "limited time period". It is a polynomial expression centred on the J2000 datum, empirically fitted to observational data, not on a deterministic model of the Solar System. It is clear that if "T" gets large enough (far in the future or far in the past), the "T"² term will dominate and "p" will go to very large values. In reality, more elaborate calculations on the numerical model of the Solar System show that the precessional rate has a period of about 41,000 years, the same as the obliquity of the ecliptic. That is,
"p" = "A" + "BT" + "CT"2 + …
is an approximation of
"p" = "a" + "b" sin (2π"T"/"P"), where "P" is the 41,000-year period.
Theoretical models may calculate the constants (coefficients) corresponding to the higher powers of "T", but since it is impossible for a polynomial to match a periodic function over all numbers, the difference in all such approximations will grow without bound as "T" increases. Sufficient accuracy can be obtained over a limited time span by fitting a high enough order polynomial to observation data, rather than a necessarily imperfect dynamic numerical model. For present flight trajectory calculations of artificial satellites and spacecraft, the polynomial method gives better accuracy. In that respect, the International Astronomical Union has chosen the best-developed available theory. For up to a few centuries into the past and future, none of the formulas used diverge very much. For up to a few thousand years in the past and the future, most agree to some accuracy. For eras farther out, discrepancies become too large – the exact rate and period of precession may not be computed using these polynomials even for a single whole precession period.
The precession of Earth's axis is a very slow effect, but at the level of accuracy at which astronomers work, it does need to be taken into account on a daily basis. Although the precession and the tilt of Earth's axis (the obliquity of the ecliptic) are calculated from the same theory and are thus related one to the other, the two movements act independently of each other, moving in opposite directions.
Precession rate exhibits a secular decrease due to tidal dissipation from 59"/a to 45"/a (a = annum = Julian year) during the 500 million year period centered on the present. After short-term fluctuations (tens of thousands of years) are averaged out, the long-term trend can be approximated by the following polynomials for negative and positive time from the present in "/a, where "T" is in billions of Julian years (Ga):
"p"− = 50.475838 − 26.368583 "T" + 21.890862 "T"2
"p"+ = 50.475838 − 27.000654 "T" + 15.603265 "T"2
This gives an average cycle length now of 25,676 years.
Precession will be greater than "p"+ by the small amount of +0.135052"/a between +30 Ma and +130 Ma. The jump to this excess over "p"+ will occur in only 20 Ma beginning now because the secular decrease in precession is beginning to cross a resonance in Earth's orbit caused by the other planets.
According to W. R. Ward, in about 1,500 million years, when the distance of the Moon, which is continuously increasing from tidal effects, has increased from the current 60.3 to approximately 66.5 Earth radii, resonances from planetary effects will push precession to 49,000 years at first, and then, when the Moon reaches 68 Earth radii in about 2,000 million years, to 69,000 years. This will be associated with wild swings in the obliquity of the ecliptic as well. Ward, however, used the abnormally large modern value for tidal dissipation. Using the 620-million year average provided by tidal rhythmites of about half the modern value, these resonances will not be reached until about 3,000 and 4,000 million years, respectively. However, due to the gradually increasing luminosity of the Sun, the oceans of the Earth will have vaporized before that time (about 2,100 million years from now).
References.
Bibliography.
| [
{
"math_id": 0,
"text": "\\overrightarrow{T} = \\frac{3GM}{r^3}(C - A) \\sin\\delta \\cos\\delta \\begin{pmatrix}\\sin\\alpha \\\\ -\\cos\\alpha \\\\ 0\\end{pmatrix}"
},
{
"math_id": 1,
"text": "T_x = \\frac{3}{2}\\frac{GM}{a^3 \\left(1 - e^2\\right)^\\frac{3}{2}}(C - A) \\sin\\epsilon \\cos\\epsilon"
},
{
"math_id": 2,
"text": "a"
},
{
"math_id": 3,
"text": "a^3 \\left(1 - e^2\\right)^\\frac{3}{2}"
},
{
"math_id": 4,
"text": "\\frac{d\\psi}{dt} = \\frac{T_x}{C\\omega\\sin\\epsilon}"
},
{
"math_id": 5,
"text": "\\frac{d\\psi_S}{dt} = \\frac{3}{2}\\left[\\frac{GM}{a^3 \\left(1 - e^2\\right)^\\frac{3}{2}}\\right]_S \\left[\\frac{C - A}{C}\\frac{\\cos\\epsilon}{\\omega}\\right]_E"
},
{
"math_id": 6,
"text": "\\frac{d\\psi_L}{dt} = \\frac{3}{2}\\left[\\frac{GM\\left(1 - 1.5\\sin^2 i\\right)}{a^3 \\left(1 - e^2\\right)^\\frac{3}{2}}\\right]_L \\left[\\frac{C - A}{C}\\frac{\\cos\\epsilon}{\\omega}\\right]_E"
},
{
"math_id": 7,
"text": "\\left(1 - 1.5\\sin^2 i\\right)"
},
{
"math_id": 8,
"text": "e''^2 = \\frac{\\mathrm{a}^2 - \\mathrm{c}^2}{\\mathrm{a}^2 + \\mathrm{c}^2}"
}
]
| https://en.wikipedia.org/wiki?curid=72576 |
725821 | Precalculus | Course designed to prepare students for calculus
In mathematics education, precalculus is a course, or a set of courses, that includes algebra and trigonometry at a level which is designed to prepare students for the study of calculus, thus the name precalculus. Schools often distinguish between algebra and trigonometry as two separate parts of the coursework.
Concept.
For students to succeed at finding the derivatives and antiderivatives with calculus, they will need facility with algebraic expressions, particularly in modification and transformation of such expressions. Leonhard Euler wrote the first precalculus book in 1748 called "Introductio in analysin infinitorum" (Latin: Introduction to the Analysis of the Infinite), which "was meant as a survey of concepts and methods in analysis and analytic geometry preliminary to the study of differential and integral calculus." He began with the fundamental concepts of variables and functions. His innovation is noted for its use of exponentiation to introduce the transcendental functions. The general logarithm, to an arbitrary positive base, Euler presents as the inverse of an exponential function.
Then the natural logarithm is obtained by taking as base "the number for which the hyperbolic logarithm is one", sometimes called Euler's number, and written formula_0. This appropriation of the significant number from Grégoire de Saint-Vincent’s calculus suffices to establish the natural logarithm. This part of precalculus prepares the student for integration of the monomial formula_1 in the instance of formula_2.
Today's precalculus text computes formula_0 as the limit formula_3. An exposition on compound interest in financial mathematics may motivate this limit. Another difference in the modern text is avoidance of complex numbers, except as they may arise as roots of a quadratic equation with a negative discriminant, or in Euler's formula as application of trigonometry. Euler used not only complex numbers but also infinite series in his precalculus. Today's course may cover arithmetic and geometric sequences and series, but not the application by Saint-Vincent to gain his hyperbolic logarithm, which Euler used to finesse his precalculus.
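A minimal numerical sketch of the compound-interest limit mentioned above (the chosen values of "n" are arbitrary):

```python
# The sequence (1 + 1/n)^n approaches Euler's number e = 2.718281828...
for n in (1, 10, 100, 10_000, 1_000_000):
    print(n, (1 + 1 / n) ** n)
```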
Variable content.
Precalculus prepares students for calculus somewhat differently from the way that pre-algebra prepares students for algebra. While pre-algebra often has extensive coverage of basic algebraic concepts, precalculus courses might include only small amounts of calculus concepts, if any, and often involve covering algebraic topics that might not have been given attention in earlier algebra courses. Some precalculus courses might differ from others in terms of content. For example, an honors-level course might spend more time on conic sections, Euclidean vectors, and other topics needed for calculus, used in fields such as medicine or engineering. A college preparatory/regular class might focus on topics used in business-related careers, such as matrices, or power functions.
A standard course considers functions, function composition, and inverse functions, often in connection with sets and real numbers. In particular, polynomials and rational functions are developed. Algebraic skills are exercised with trigonometric functions and trigonometric identities. The binomial theorem, polar coordinates, parametric equations, and the limits of sequences and series are other common topics of precalculus. Sometimes the mathematical induction method of proof for propositions dependent upon a natural number may be demonstrated, but generally coursework involves exercises rather than theory.
References.
| [
{
"math_id": 0,
"text": "e"
},
{
"math_id": 1,
"text": "x^p"
},
{
"math_id": 2,
"text": "p = -1"
},
{
"math_id": 3,
"text": "e = \\lim_{n \\rightarrow \\infty} \\left(1 + \\frac{1}{n}\\right)^{n}"
}
]
| https://en.wikipedia.org/wiki?curid=725821 |
726014 | Filtration (mathematics) | In mathematics, a filtration formula_0 is an indexed family formula_1 of subobjects of a given algebraic structure formula_2, with the index formula_3 running over some totally ordered index set formula_4, subject to the condition that
if formula_5 in formula_4, then formula_6.
If the index formula_3 is the time parameter of some stochastic process, then the filtration can be interpreted as representing all historical but not future information available about the stochastic process, with the algebraic structure formula_7 gaining in complexity with time. Hence, a process that is adapted to a filtration formula_0 is also called non-anticipating, because it cannot "see into the future".
Sometimes, as in a filtered algebra, there is instead the requirement that the formula_7 be subalgebras with respect to some operations (say, vector addition), while with respect to other operations (say, multiplication) they are only required to satisfy formula_8, where the index set is the natural numbers; this is by analogy with a graded algebra.
Sometimes, filtrations are supposed to satisfy the additional requirement that the union of the formula_7 be the whole formula_2, or (in more general cases, when the notion of union does not make sense) that the canonical homomorphism from the direct limit of the formula_7 to formula_2 is an isomorphism. Whether this requirement is assumed or not usually depends on the author of the text and is often explicitly stated. This article does "not" impose this requirement.
There is also the notion of a descending filtration, which is required to satisfy formula_9 in lieu of formula_10 (and, occasionally, formula_11 instead of formula_12). Again, it depends on the context how exactly the word "filtration" is to be understood. Descending filtrations are not to be confused with the dual notion of cofiltrations (which consist of quotient objects rather than subobjects).
Filtrations are widely used in abstract algebra, homological algebra (where they are related in an important way to spectral sequences), and in measure theory and probability theory for nested sequences of σ-algebras. In functional analysis and numerical analysis, other terminology is usually used, such as scale of spaces or nested spaces.
Examples.
Sets.
Farey Sequence
Algebra.
Algebras.
See: Filtered algebra
Groups.
In algebra, filtrations are ordinarily indexed by formula_13, the set of natural numbers. A "filtration" of a group formula_14, is then a nested sequence formula_15 of normal subgroups of formula_14 (that is, for any formula_16 we have formula_17). Note that this use of the word "filtration" corresponds to our "descending filtration".
Given a group formula_14 and a filtration formula_15, there is a natural way to define a topology on formula_14, said to be "associated" to the filtration. A basis for this topology is the set of all cosets of subgroups appearing in the filtration, that is, a subset of formula_14 is defined to be open if it is a union of sets of the form formula_18, where formula_19 and formula_16 is a natural number.
The topology associated to a filtration on a group formula_14 makes formula_14 into a topological group.
The topology associated to a filtration formula_15 on a group formula_14 is Hausdorff if and only if formula_20.
If two filtrations formula_15 and formula_21 are defined on a group formula_14, then the identity map from formula_14 to formula_14, where the first copy of formula_14 is given the formula_15-topology and the second the formula_21-topology, is continuous if and only if for any formula_16 there is an formula_22 such that formula_23, that is, if and only if the identity map is continuous at 1. In particular, the two filtrations define the same topology if and only if for any subgroup appearing in one there is a smaller or equal one appearing in the other.
Rings and modules: descending filtrations.
Given a ring formula_24 and an formula_24-module formula_25, a "descending filtration" of formula_25 is a decreasing sequence of submodules formula_26. This is therefore a special case of the notion for groups, with the additional condition that the subgroups be submodules. The associated topology is defined as for groups.
An important special case is known as the formula_4-adic topology (or formula_27-adic, etc.): Let formula_24 be a commutative ring, and formula_4 an ideal of formula_24. Given an formula_24-module formula_25, the sequence formula_28 of submodules of formula_25 forms a filtration of formula_25 (the "formula_4-adic filtration"). The "formula_4-adic topology" on formula_25 is then the topology associated to this filtration. If formula_25 is just the ring formula_24 itself, we have defined the "formula_4-adic topology" on formula_24.
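As a concrete illustration (a standard example sketched here, with the prime 5 chosen arbitrarily, rather than an example taken from the text), the formula_4-adic filtration of the integers by powers of a prime ideal is a decreasing chain of submodules, and membership at each level is decided by the p-adic valuation:

```python
# Sketch of the I-adic filtration for R = M = Z and I = (5): the n-th level
# I^n M = 5^n Z consists of the nonzero integers whose 5-adic valuation is >= n
# (0 is omitted here only to keep the valuation well defined).
def valuation(m, p=5):
    """Largest n such that p**n divides m (m must be nonzero)."""
    n = 0
    while m % p == 0:
        m //= p
        n += 1
    return n

levels = [{m for m in range(-200, 201) if m != 0 and valuation(m) >= n}
          for n in range(4)]
print(all(levels[n + 1] <= levels[n] for n in range(3)))   # True: a decreasing chain
```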
When formula_24 is given the formula_4-adic topology, formula_24 becomes a topological ring. If an formula_24-module formula_25 is then given the formula_4-adic topology, it becomes a topological formula_24-module, relative to the topology given on formula_24.
Rings and modules: ascending filtrations.
Given a ring formula_24 and an formula_24-module formula_25, an "ascending filtration" of formula_25 is an increasing sequence of submodules formula_26. In particular, if formula_24 is a field, then an ascending filtration of the formula_24-vector space formula_25 is an increasing sequence of vector subspaces of formula_25. Flags are one important class of such filtrations.
Sets.
A maximal filtration of a set is equivalent to an ordering (a permutation) of the set. For instance, the filtration formula_29 corresponds to the ordering formula_30. From the point of view of the field with one element, an ordering on a set corresponds to a maximal flag (a filtration on a vector space), considering a set to be a vector space over the field with one element.
Measure theory.
In measure theory, in particular in martingale theory and the theory of stochastic processes, a filtration is an increasing sequence of formula_31-algebras on a measurable space. That is, given a measurable space formula_32, a filtration is a sequence of formula_31-algebras formula_33 with formula_34 where each formula_35 is a non-negative real number and
formula_36
The exact range of the "times" "formula_35" will usually depend on context: the set of values for formula_35 might be discrete or continuous, bounded or unbounded. For example,
formula_37
Similarly, a filtered probability space (also known as a stochastic basis) formula_38, is a probability space equipped with the filtration formula_39 of its formula_31-algebra formula_0. A filtered probability space is said to satisfy the "usual conditions" if it is complete (i.e., formula_40 contains all formula_41-null sets) and right-continuous (i.e. formula_42 for all times formula_35).
It is also useful (in the case of an unbounded index set) to define formula_43 as the formula_31-algebra generated by the infinite union of the formula_44's, which is contained in formula_0:
formula_45
A "σ"-algebra defines the set of events that can be measured, which in a probability context is equivalent to events that can be discriminated, or "questions that can be answered at time formula_35". Therefore, a filtration is often used to represent the change in the set of events that can be measured, through gain or loss of information. A typical example is in mathematical finance, where a filtration represents the information available up to and including each time formula_35, and is more and more precise (the set of measurable events is staying the same or increasing) as more information from the evolution of the stock price becomes available.
Relation to stopping times: stopping time sigma-algebras.
Let formula_38 be a filtered probability space. A random variable formula_46 is a stopping time with respect to the filtration formula_47, if formula_48 for all formula_49.
The "stopping time" formula_31-algebra is now defined as
formula_50.
It is not difficult to show that formula_51 is indeed a formula_31-algebra.
The set formula_51 encodes information up to the "random" time formula_52 in the sense that, if the filtered probability space is interpreted as a random experiment, the maximum information that can be found out about it from arbitrarily often repeating the experiment until the random time formula_52 is formula_51. In particular, if the underlying probability space is finite (i.e. formula_0 is finite), the minimal sets of formula_51 (with respect to set inclusion) are given by the union over all formula_49 of the sets of minimal sets of formula_44 that lie in formula_53.
It can be shown that formula_52 is formula_51-measurable. However, simple examples show that, in general, formula_54. If formula_55 and formula_56 are stopping times on formula_38, and formula_57 almost surely, then formula_58
References.
| [
{
"math_id": 0,
"text": "\\mathcal{F}"
},
{
"math_id": 1,
"text": "(S_i)_{i \\in I}"
},
{
"math_id": 2,
"text": "S"
},
{
"math_id": 3,
"text": "i"
},
{
"math_id": 4,
"text": "I"
},
{
"math_id": 5,
"text": "i\\leq j"
},
{
"math_id": 6,
"text": "S_i\\subseteq S_j"
},
{
"math_id": 7,
"text": "S_i"
},
{
"math_id": 8,
"text": "S_i \\cdot S_j \\subseteq S_{i+j}"
},
{
"math_id": 9,
"text": "S_i \\supseteq S_j"
},
{
"math_id": 10,
"text": "S_i \\subseteq S_j"
},
{
"math_id": 11,
"text": "\\bigcap_{i\\in I} S_i=0"
},
{
"math_id": 12,
"text": "\\bigcup_{i\\in I} S_i=S"
},
{
"math_id": 13,
"text": "\\mathbb{N}"
},
{
"math_id": 14,
"text": "G"
},
{
"math_id": 15,
"text": "G_n"
},
{
"math_id": 16,
"text": "n"
},
{
"math_id": 17,
"text": "G_{n+1}\\subseteq G_n"
},
{
"math_id": 18,
"text": "aG_n"
},
{
"math_id": 19,
"text": "a\\in G"
},
{
"math_id": 20,
"text": "\\bigcap G_n=\\{1\\}"
},
{
"math_id": 21,
"text": "G'_n"
},
{
"math_id": 22,
"text": "m"
},
{
"math_id": 23,
"text": "G_m\\subseteq G'_n"
},
{
"math_id": 24,
"text": "R"
},
{
"math_id": 25,
"text": "M"
},
{
"math_id": 26,
"text": "M_n"
},
{
"math_id": 27,
"text": "J"
},
{
"math_id": 28,
"text": "I^n M"
},
{
"math_id": 29,
"text": "\\{0\\} \\subseteq \\{0,1\\} \\subseteq \\{0,1,2\\}"
},
{
"math_id": 30,
"text": "(0,1,2)"
},
{
"math_id": 31,
"text": "\\sigma"
},
{
"math_id": 32,
"text": "(\\Omega, \\mathcal{F})"
},
{
"math_id": 33,
"text": "\\{ \\mathcal{F}_{t} \\}_{t \\geq 0}"
},
{
"math_id": 34,
"text": "\\mathcal{F}_{t} \\subseteq \\mathcal{F}"
},
{
"math_id": 35,
"text": "t"
},
{
"math_id": 36,
"text": "t_{1} \\leq t_{2} \\implies \\mathcal{F}_{t_{1}} \\subseteq \\mathcal{F}_{t_{2}}."
},
{
"math_id": 37,
"text": "t \\in \\{ 0, 1, \\dots, N \\}, \\mathbb{N}_{0}, [0, T] \\mbox{ or } [0, + \\infty)."
},
{
"math_id": 38,
"text": "\\left(\\Omega, \\mathcal{F}, \\left\\{\\mathcal{F}_{t}\\right\\}_{t\\geq 0}, \\mathbb{P}\\right)"
},
{
"math_id": 39,
"text": "\\left\\{\\mathcal{F}_t\\right\\}_{t\\geq 0}"
},
{
"math_id": 40,
"text": "\\mathcal{F}_0"
},
{
"math_id": 41,
"text": "\\mathbb{P}"
},
{
"math_id": 42,
"text": "\\mathcal{F}_t = \\mathcal{F}_{t+} := \\bigcap_{s > t} \\mathcal{F}_s"
},
{
"math_id": 43,
"text": "\\mathcal{F}_{\\infty}"
},
{
"math_id": 44,
"text": "\\mathcal{F}_{t}"
},
{
"math_id": 45,
"text": "\\mathcal{F}_{\\infty} = \\sigma\\left(\\bigcup_{t \\geq 0} \\mathcal{F}_{t}\\right) \\subseteq \\mathcal{F}."
},
{
"math_id": 46,
"text": "\\tau : \\Omega \\rightarrow [0, \\infty]"
},
{
"math_id": 47,
"text": "\\left\\{\\mathcal{F}_{t}\\right\\}_{t\\geq 0}"
},
{
"math_id": 48,
"text": "\\{\\tau \\leq t\\} \\in \\mathcal{F}_t"
},
{
"math_id": 49,
"text": "t\\geq 0"
},
{
"math_id": 50,
"text": "\\mathcal{F}_{\\tau} := \\{A\\in\\mathcal{F} \\vert \\forall t\\geq 0 \\colon A\\cap\\{\\tau \\leq t\\}\\in\\mathcal{F}_t\\}"
},
{
"math_id": 51,
"text": "\\mathcal{F}_{\\tau}"
},
{
"math_id": 52,
"text": "\\tau"
},
{
"math_id": 53,
"text": "\\{\\tau = t\\} "
},
{
"math_id": 54,
"text": "\\sigma(\\tau) \\neq \\mathcal{F}_{\\tau}"
},
{
"math_id": 55,
"text": "\\tau_ 1"
},
{
"math_id": 56,
"text": "\\tau_ 2"
},
{
"math_id": 57,
"text": "\\tau_1 \\leq \\tau_2"
},
{
"math_id": 58,
"text": "\\mathcal{F}_{\\tau_1} \\subseteq \\mathcal{F}_{\\tau_2}."
}
]
| https://en.wikipedia.org/wiki?curid=726014 |
726049 | Pharmacodynamics | Branch of pharmacology
Pharmacodynamics (PD) is the study of the biochemical and physiologic effects of drugs (especially pharmaceutical drugs). The effects can include those manifested within animals (including humans), microorganisms, or combinations of organisms (for example, infection).
Pharmacodynamics and pharmacokinetics are the main branches of pharmacology, being itself a topic of biology interested in the study of the interactions of both endogenous and exogenous chemical substances with living organisms.
In particular, pharmacodynamics is the study of how a drug affects an organism, whereas pharmacokinetics is the study of how the organism affects the drug. Both together influence dosing, benefit, and adverse effects. Pharmacodynamics is sometimes abbreviated as PD and pharmacokinetics as PK, especially in combined reference (for example, when speaking of PK/PD models).
Pharmacodynamics places particular emphasis on dose–response relationships, that is, the relationships between drug concentration and effect. One dominant example is drug-receptor interactions as modeled by
<chem>L + R <=> LR </chem>
where "L", "R", and "LR" represent ligand (drug), receptor, and ligand-receptor complex concentrations, respectively. This equation represents a simplified model of reaction dynamics that can be studied mathematically through tools such as free energy maps.
IUPAC definition
Pharmacodynamics: Study of pharmacological actions on living systems, including the reactions with and binding to cell constituents, and the biochemical and physiological consequences of these actions.
Basics.
There are four principal protein targets with which drugs can interact: receptors, ion channels, enzymes, and carrier (transporter) molecules.
"NMBD = neuromuscular blocking drugs; NMDA = N-methyl-d-aspartate; EGF = epidermal growth factor."
Effects on the body.
The majority of drugs either mimic or inhibit normal physiological and biochemical processes (or inhibit pathological processes), or inhibit the vital processes of parasites and microbial organisms. Seven main types of drug action are commonly distinguished.
Desired activity.
The desired activity of a drug is mainly due to successful targeting of one of the following:
General anesthetics were once thought to work by disordering the neural membranes, thereby altering the Na+ influx. Antacids and chelating agents combine chemically in the body. Enzyme-substrate binding is a way to alter the production or metabolism of key endogenous chemicals, for example aspirin irreversibly inhibits the enzyme prostaglandin synthetase (cyclooxygenase) thereby preventing inflammatory response. Colchicine, a drug for gout, interferes with the function of the structural protein tubulin, while digitalis, a drug still used in heart failure, inhibits the activity of the carrier molecule, Na-K-ATPase pump. The widest class of drugs act as ligands that bind to receptors that determine cellular effects. Upon drug binding, receptors can elicit their normal action (agonist), blocked action (antagonist), or even action opposite to normal (inverse agonist).
In principle, a pharmacologist would aim for a target plasma concentration of the drug for a desired level of response. In reality, there are many factors affecting this goal. Pharmacokinetic factors determine peak concentrations, and concentrations cannot be maintained with absolute consistency because of metabolic breakdown and excretory clearance. Genetic factors may exist which would alter metabolism or drug action itself, and a patient's immediate status may also affect indicated dosage.
Undesirable effects.
Undesirable effects of a drug include:
Therapeutic window.
The therapeutic window is the range between the amount of a medication that gives an effect (effective dose) and the amount that gives more adverse effects than desired effects. For instance, a medication with a narrow therapeutic window must be administered with care and control, e.g. by frequently measuring the blood concentration of the drug, since it can easily lose its effect or produce adverse effects.
Duration of action.
The "duration of action" of a drug is the length of time that particular drug is effective. Duration of action is a function of several parameters including plasma half-life, the time to equilibrate between plasma and target compartments, and the off rate of the drug from its biological target.
Recreational drug use.
In the context of recreational psychoactive drug use, duration refers to the length of time over which the subjective effects of a psychoactive substance manifest themselves.
Duration can be broken down into 6 parts: (1) total duration (2) onset (3) come up (4) peak (5) offset and (6) after effects. Depending upon the substance consumed, each of these occurs in a separate and continuous fashion.
Total.
The total duration of a substance can be defined as the amount of time it takes for the effects of a substance to completely wear off into sobriety, starting from the moment the substance is first administered.
Onset.
The onset phase can be defined as the period until the very first changes in perception (i.e. "first alerts") are able to be detected.
Come up.
The "come up" phase can be defined as the period between the first noticeable changes in perception and the point of highest subjective intensity. This is colloquially known as "coming up."
Peak.
The peak phase can be defined as period of time in which the intensity of the substance's effects are at its height.
Offset.
The offset phase can be defined as the amount of time in between the conclusion of the peak and shifting into a sober state. This is colloquially referred to as "coming down."
After effects.
The after effects can be defined as any residual effects which may remain after the experience has reached its conclusion. After effects depend on the substance and usage. This is colloquially known as a "hangover" for negative after effects of substances, such as alcohol, cocaine, and MDMA or an "afterglow" for describing a typically positive, pleasant effect, typically found in substances such as cannabis, LSD in low to high doses, and ketamine.
Receptor binding and effect.
The binding of ligands (drugs) to receptors is governed by the "law of mass action", which relates the large-scale equilibrium state to the rates of the numerous underlying molecular processes. The rates of formation and dissociation can be used to determine the equilibrium concentration of bound receptors. The "equilibrium dissociation constant" is defined by:
<chem>L + R <=> LR </chem> formula_0
where "L"=ligand, "R"=receptor, square brackets [] denote concentration. The fraction of bound receptors is
formula_1
Where formula_2 is the fraction of receptor bound by the ligand.
This expression is one way to consider the effect of a drug, in which the response is related to the fraction of bound receptors (see: Hill equation). The fraction of bound receptors is known as occupancy. The relationship between occupancy and pharmacological response is usually non-linear. This explains the so-called "receptor reserve" phenomenon i.e. the concentration producing 50% occupancy is typically higher than the concentration producing 50% of maximum response. More precisely, receptor reserve refers to a phenomenon whereby stimulation of only a fraction of the whole receptor population apparently elicits the maximal effect achievable in a particular tissue.
The simplest interpretation of receptor reserve is that it is a model that states there are excess receptors on the cell surface than what is necessary for full effect. Taking a more sophisticated approach, receptor reserve is an integrative measure of the response-inducing capacity of an agonist (in some receptor models it is termed intrinsic efficacy or intrinsic activity) and of the signal amplification capacity of the corresponding receptor (and its downstream signaling pathways). Thus, the existence (and magnitude) of receptor reserve depends on the agonist (efficacy), tissue (signal amplification ability) and measured effect (pathways activated to cause signal amplification). As receptor reserve is very sensitive to agonist's intrinsic efficacy, it is usually defined only for full (high-efficacy) agonists.
Often the response is determined as a function of log["L"] to consider many orders of magnitude of concentration. However, there is no biological or physical theory that relates effects to the log of concentration. It is just convenient for graphing purposes. It is useful to note that 50% of the receptors are bound when ["L"]="Kd" .
The graph shown represents the conc-response for two hypothetical receptor agonists, plotted in a semi-log fashion. The curve toward the left represents a higher potency (potency arrow does not indicate direction of increase) since lower concentrations are needed for a given response. The effect increases as a function of concentration.
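A brief numerical sketch of the occupancy relation discussed above (the dissociation constants and the concentration grid below are hypothetical values chosen for illustration, not data from any particular drug):

```python
# Fractional receptor occupancy p_LR = [L] / ([L] + Kd) for two hypothetical
# ligands; the lower-Kd (more potent) ligand reaches 50% occupancy at a lower
# concentration, and occupancy is exactly 0.5 when [L] equals Kd.
import numpy as np

def occupancy(L, Kd):
    """Fraction of receptors bound at free ligand concentration L (molar)."""
    return L / (L + Kd)

L = np.logspace(-10, -4, 7)          # 0.1 nM up to 100 uM
for Kd in (1e-8, 1e-6):              # hypothetical dissociation constants
    print(f"Kd = {Kd:.0e} M:", np.round(occupancy(L, Kd), 3))
print(occupancy(1e-8, 1e-8))         # 0.5
```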
Multicellular pharmacodynamics.
The concept of pharmacodynamics has been expanded to include Multicellular Pharmacodynamics (MCPD). MCPD is the study of the static and dynamic properties and relationships between a set of drugs and a dynamic and diverse multicellular four-dimensional organization. It is the study of the workings of a drug on a minimal multicellular system (mMCS), both "in vivo" and "in silico". Networked Multicellular Pharmacodynamics (Net-MCPD) further extends the concept of MCPD to model regulatory genomic networks together with signal transduction pathways, as part of a complex of interacting components in the cell.
Toxicodynamics.
Pharmacokinetics and pharmacodynamics are termed toxicokinetics and toxicodynamics in the field of ecotoxicology. Here, the focus is on toxic effects on a wide range of organisms. The corresponding models are called toxicokinetic-toxicodynamic models.
References.
| [
{
"math_id": 0,
"text": "K_d = \\frac{[L][R]}{[L R]}"
},
{
"math_id": 1,
"text": "{p}_{LR} = \\frac{[L R]}{[R]+[L R]} =\\frac{1}{1+\\frac{K_d}{[L]}}"
},
{
"math_id": 2,
"text": "{p}_{LR}"
}
]
| https://en.wikipedia.org/wiki?curid=726049 |
7260876 | Ehrling's lemma | In mathematics, Ehrling's lemma, also known as Lions' lemma, is a result concerning Banach spaces. It is often used in functional analysis to demonstrate the equivalence of certain norms on Sobolev spaces. It was named after Gunnar Ehrling.
Statement of the lemma.
Let ("X", ||⋅||"X"), ("Y", ||⋅||"Y") and ("Z", ||⋅||"Z") be three Banach spaces. Assume that:
Then, for every "ε" > 0, there exists a constant "C"("ε") such that, for all "x" ∈ "X",
formula_0
Corollary (equivalent norms for Sobolev spaces).
Let Ω ⊂ R"n" be open and bounded, and let "k" ∈ N. Suppose that the Sobolev space "H""k"(Ω) is compactly embedded in "H""k"−1(Ω). Then the following two norms on "H""k"(Ω) are equivalent:
formula_1
and
formula_2
For the subspace of "H""k"(Ω) consisting of those Sobolev functions with zero trace (those that are "zero on the boundary" of Ω), the "L"2 norm of "u" can be left out to yield another equivalent norm.
References.
Notes.
| [
{
"math_id": 0,
"text": "\\| x \\|_{Y} \\leq \\varepsilon \\| x \\|_{X} + C(\\varepsilon) \\| x \\|_{Z}"
},
{
"math_id": 1,
"text": "\\| \\cdot \\| : H^{k} (\\Omega) \\to \\mathbf{R}: u \\mapsto \\| u \\| := \\sqrt{\\sum_{| \\alpha | \\leq k} \\| \\mathrm{D}^{\\alpha} u \\|_{L^{2} (\\Omega)}^{2}}"
},
{
"math_id": 2,
"text": "\\| \\cdot \\|' : H^{k} (\\Omega) \\to \\mathbf{R}: u \\mapsto \\| u \\|' := \\sqrt{\\| u \\|_{L^{2} (\\Omega)}^{2} + \\sum_{| \\alpha | = k} \\| \\mathrm{D}^{\\alpha} u \\|_{L^{2} (\\Omega)}^{2}}."
}
]
| https://en.wikipedia.org/wiki?curid=7260876 |
72612923 | Doi-Hopf module | Concept in Hopf algebra and weak Hopf algebra
In the theory of quantum groups, Hopf algebras and weak Hopf algebras, the Doi-Hopf module is a crucial construction that has many applications. It is named after the Japanese mathematician Yukio Doi (土井 幸雄) and the German mathematician Heinz Hopf. The concept was introduced by Doi in his 1992 paper "Unifying Hopf modules".
Doi-Hopf module.
A right Doi-Hopf datum is a triple formula_0 with formula_1 a Hopf algebra, formula_2 a left formula_1-comodule algebra, and formula_3 a right formula_1-module coalgebra. A left-right Doi-Hopf formula_0-module formula_4 is a left formula_2-module and a right formula_3-comodule via formula_5 such that formula_6 for all formula_7. The subscripts follow the Sweedler notation.
A left Doi-Hopf datum is a triple formula_0 with formula_1 a Hopf algebra, formula_2 a right formula_1-comodule algebra, and formula_3 a left formula_1-module coalgebra. A Doi-Hopf module can be defined similarly.
Doi-Hopf module in weak Hopf algebra.
The generalization of Doi-Hopf modules to the weak Hopf algebra case was given by Gabriella Böhm in 2000.
References.
| [
{
"math_id": 0,
"text": "(H,A,C)"
},
{
"math_id": 1,
"text": "H"
},
{
"math_id": 2,
"text": "A"
},
{
"math_id": 3,
"text": "C"
},
{
"math_id": 4,
"text": "M"
},
{
"math_id": 5,
"text": "\\beta: M\\to M\\otimes C"
},
{
"math_id": 6,
"text": "\\beta(am)=\\sum a_{(0)}m_{[0]}\\otimes a_{(1)}\\rightharpoonup m_{[1]}"
},
{
"math_id": 7,
"text": "a\\in A,m\\in M"
}
]
| https://en.wikipedia.org/wiki?curid=72612923 |
72614183 | Permutationally invariant quantum state tomography | Efficient reconstruction of quantum states based on measurements
Permutationally invariant quantum state tomography (PI quantum state tomography) is a method for the partial determination of the state of a quantum system consisting of many subsystems.
In general, the number of parameters needed to describe the quantum mechanical state of a system consisting of formula_0 subsystems is increasing exponentially with formula_1 For instance, for an formula_0-qubit system, formula_2 real parameters are needed to describe the state vector of a pure state, or
formula_3 real parameters are needed to describe the density matrix of a mixed state. "Quantum state tomography" is a method to determine all these parameters from a series of measurements on many independent and identically prepared systems. Thus, in the case of full quantum state tomography, the number of measurements needed scales exponentially with the number of particles or qubits.
For large systems, the determination of the entire quantum state is no longer possible in practice and one is interested in methods that determine only a subset of the parameters necessary to characterize the quantum state that still contains important information about the state. Permutationally invariant quantum tomography is such a method. PI quantum tomography only measures formula_4 the "permutationally invariant part" of the density matrix. For the procedure, it is sufficient to carry out "local measurements" on the subsystems. If the state is close to being permutationally invariant, which is the case in many practical situations, then formula_5 is close to the density matrix of the system.
Even if the state is not permutationally invariant, formula_5 can still be used for entanglement detection and computing relevant operator expectations values. Thus, the procedure does not assume the permutationally invariance of the quantum state. The number of independent real parameters of formula_5 for formula_0 qubits scales as formula_6 The number of local measurement settings scales as formula_7 Thus, permutationally invariant quantum tomography is considered manageable even for large formula_0. In other words, permutationally invariant quantum tomography is considered "scalable".
The method can be used, for example, for the reconstruction of the density matrices of systems with more than 10 particles, for photonic systems, for trapped cold ions, or for systems of cold atoms.
The permutationally invariant part of the density matrix.
PI state tomography reconstructs the permutationally invariant part of the density matrix, which is defined as the equal mixture of the quantum states obtained after permuting the particles in all the possible ways
formula_8
where formula_9 denotes the "k"th permutation. For instance, for formula_10 we have two permutations. formula_11 leaves the order of the two particles unchanged. formula_12 exchanges the two particles. In general, for formula_0 particles, we have formula_13 permutations.
It is easy to see that formula_14 is the density matrix that is obtained if the order of the particles is not taken into account. This corresponds to an experiment in which a subset of formula_15 particles is randomly selected from a larger ensemble. The state of this smaller group is of course permutationally invariant.
The number of degrees of freedom of formula_14 scales polynomially with the number of particles. For a system of formula_0 qubits (spin-formula_16 particles) the number of real degrees of freedom is
formula_17
The measurements needed to determine the permutationally invariant part of the density matrix.
To determine these degrees of freedom,
formula_18
"local measurement settings" are needed. Here, a local measurement settings means that the operator formula_19 is to be measured on each particle. By repeating the measurement and collecting enough data, all two-point, three-point and higher order correlations can be determined.
Efficient determination of a physical state.
So far we have discussed that the number of measurements scales polynomially with the number of qubits.
However, for using the method in practice, the entire tomographic procedure must be scalable. Thus, we need to store the state in the computer in a scalable way. Clearly, the straightforward way of storing the formula_0-qubit state in a formula_20 density matrix is not scalable. However, formula_5 is a blockdiagonal matrix due to its permutational invariance and thus it can be stored much more efficiently.
Moreover, it is well known that, due to statistical fluctuations and systematic errors, the density matrix obtained from the measured data by linear inversion is not positive semidefinite and has some negative eigenvalues. An important step in a typical tomography is fitting a physical, i.e., positive semidefinite, density matrix to the tomographic data. This step often represents a bottleneck in the overall process in full state tomography. However, PI tomography, as we have just discussed, allows the density matrix to be stored much more efficiently, which in turn allows an efficient fitting using convex optimization; this also guarantees that the solution is a global optimum.
Characteristics of the method.
PI tomography is commonly used in experiments involving permutationally invariant states. If the density matrix formula_5 obtained by PI tomography is entangled, then the density matrix of the system, formula_21, is also entangled. For this reason, the usual methods for entanglement verification, such as entanglement witnesses or the Peres-Horodecki criterion, can be applied to formula_5. Remarkably, the entanglement detection carried out in this way does not assume that the quantum system itself is permutationally invariant.
Moreover, the expectation value of any permutationally invariant operator is the same for formula_21 and for formula_22 Very relevant examples of such operators are projectors to symmetric states, such as the Greenberger–Horne–Zeilinger state, the W state and symmetric Dicke states. Thus, we can obtain the fidelity with respect to the above-mentioned quantum states as the expectation value of the corresponding projectors in the state formula_22
Links to other approaches.
There are other approaches for tomography that need fewer measurements than full quantum state tomography. As we have discussed, PI tomography is typically most useful for quantum states that are close to being permutationally invariant. Compressed sensing is especially suited for low rank states. Matrix product state tomography is most suitable for, e.g., cluster states and ground states of spin models. Permutationally invariant tomography can be combined with compressed sensing. In this case, the number of local measurement settings needed can even be smaller than for permutationally invariant tomography.
Experiments.
Permutationally invariant tomography has been tested experimentally for a four-qubit symmetric Dicke state, and also for a six-qubit symmetric Dicke in photons, and has been compared to full state tomography and compressed sensing. A simulation of permutationally invariant tomography shows that reconstruction of a positive semidefinite density matrix of 20 qubits from measured data is possible in a few minutes on a standard computer. The hybrid method combining permutationally invariant tomography and compressed sensing has also been tested. | [
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": "N."
},
{
"math_id": 2,
"text": "2^N-2"
},
{
"math_id": 3,
"text": "2^{2N} -1 "
},
{
"math_id": 4,
"text": "\\varrho_{\\rm PI},"
},
{
"math_id": 5,
"text": "\\varrho_{\\rm PI}"
},
{
"math_id": 6,
"text": "\\sim N^3."
},
{
"math_id": 7,
"text": "\\sim N^2. "
},
{
"math_id": 8,
"text": "\\varrho_{\\rm PI}=\\frac{1}{N!} \\sum_k \\Pi_k \\varrho \\Pi_k^\\dagger,"
},
{
"math_id": 9,
"text": "\\Pi_k"
},
{
"math_id": 10,
"text": "N=2"
},
{
"math_id": 11,
"text": "\\Pi_1"
},
{
"math_id": 12,
"text": "\\Pi_2"
},
{
"math_id": 13,
"text": "N!"
},
{
"math_id": 14,
"text": " \\varrho_{\\rm PI} "
},
{
"math_id": 15,
"text": " N "
},
{
"math_id": 16,
"text": " 1/2"
},
{
"math_id": 17,
"text": "\\binom{N+3}{N}-1=\\frac{1}6(N^3+6N^2+11N)."
},
{
"math_id": 18,
"text": "\\binom{N+2}{N}=\\frac{(N+2)(N+1)}2=\\frac1 2 (N^2+3N+2)"
},
{
"math_id": 19,
"text": "A_j"
},
{
"math_id": 20,
"text": "2^N\\times2^N"
},
{
"math_id": 21,
"text": "\\varrho"
},
{
"math_id": 22,
"text": "\\varrho_{\\rm PI}."
}
]
| https://en.wikipedia.org/wiki?curid=72614183 |
72622754 | Ytterbium(II) iodide | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Ytterbium(II) iodide is an iodide of ytterbium, with the chemical formula of YbI2. It is a yellow solid.
Preparation.
Ytterbium(II) iodide can be prepared by heating ytterbium(III) iodide:
formula_0
It can also be prepared by reacting metallic ytterbium with 1,2-diiodoethane in tetrahydrofuran:
formula_1
Although the reaction takes place at room temperature, due to the sensitivity of the reagents it is necessary to work under anhydrous conditions and under inert gas. Otherwise, if oxygen is present, rapid oxidation to ytterbium(III) takes place. This can be recognized visually by the color of the solution changing from green to yellow.
Properties and uses.
Ytterbium(II) iodide is a yellow solid that is very sensitive to air and moisture and is rapidly oxidized to ytterbium(III). It reacts with water to produce hydrogen gas and basic iodides, and reacts violently with acids. Ytterbium(II) iodide sinters at 0.01 Torr from about 780 °C and gives a viscous melt at about 920 °C. It begins to disproportionate into ytterbium and ytterbium(III) iodide. At around 800 °C, a yellow sublimate of ytterbium(II) iodide is observed on the glass walls; this partly obscures the disproportionation. The melting point can therefore only be determined imprecisely.
Like samarium(II) iodide (SmI2), ytterbium(II) iodide is a reagent used in organic chemical reactions.
References.
| [
{
"math_id": 0,
"text": "\\mathrm{2\\ YbI_3\\ \\xrightarrow []{\\Delta\\ T}\\ 2\\ YbI_2\\ +\\ I_2}"
},
{
"math_id": 1,
"text": "\\mathrm{Yb\\ +\\ ICH_2CH_2I\\ \\xrightarrow []{THF} \\ YbI_2\\ +\\ H_2C=CH_2}"
}
]
| https://en.wikipedia.org/wiki?curid=72622754 |
72623626 | Universal Taylor series | A universal Taylor series is a formal power series formula_0, such that for every continuous function formula_1 on formula_2, if formula_3, then there exists an increasing sequence formula_4 of positive integers such thatformula_5In other words, the set of partial sums of formula_0 is dense (in sup-norm) in formula_6, the set of continuous functions on formula_2 that is zero at origin.
Statements and proofs.
Fekete proved that a universal Taylor series exists.
<templatestyles src="Math_proof/styles.css" />Proof
Let formula_7 be the sequence in which each rational-coefficient polynomial with zero constant coefficient appears countably infinitely many times (use the diagonal enumeration). By the Weierstrass approximation theorem, it is dense in formula_6. Thus it suffices to approximate this sequence. We construct the power series iteratively as a sequence of polynomials formula_8, such that formula_9 agree on the first formula_10 coefficients, and formula_11.
To start, let formula_12. To construct formula_13, replace each formula_14 in formula_15 by a close enough approximation with lowest degree formula_16, using the lemma below. Now add this to formula_17 to obtain formula_13.
<templatestyles src="Math_theorem/styles.css" />
Lemma — The function formula_18 can be approximated to arbitrary precision by a polynomial whose lowest degree is arbitrarily high. That is, formula_19 polynomial formula_20 such that formula_21.
<templatestyles src="Math_proof/styles.css" />Proof of lemma
The function formula_22 is the uniform limit of its Taylor expansion, which starts with degree 3. Also, formula_23. Thus to formula_24-approximate formula_25 using a polynomial with lowest degree 3, we do so for formula_26 with formula_27 by truncating its Taylor expansion. Now iterate this construction by plugging in the lowest-degree-3 approximation into the Taylor expansion of formula_26, obtaining an approximation of lowest degree 9, 27, 81... | [
{
"math_id": 0,
"text": "\\sum_{n=1}^\\infty a_n x^n"
},
{
"math_id": 1,
"text": "h"
},
{
"math_id": 2,
"text": "[-1,1]"
},
{
"math_id": 3,
"text": "h(0)=0"
},
{
"math_id": 4,
"text": "\\left(\\lambda_n\\right)"
},
{
"math_id": 5,
"text": "\n\\lim_{n\\to\\infty}\\left\\|\\sum_{k=1}^{\\lambda_n} a_k x^k-h(x)\\right\\| = 0\n"
},
{
"math_id": 6,
"text": "C[-1,1]_0"
},
{
"math_id": 7,
"text": "f_1, f_2, ..."
},
{
"math_id": 8,
"text": "p_1, p_2, ..."
},
{
"math_id": 9,
"text": "p_n, p_{n+1}"
},
{
"math_id": 10,
"text": "n"
},
{
"math_id": 11,
"text": "\\|f_n - p_n \\|_\\infty \\leq 1/n"
},
{
"math_id": 12,
"text": "p_1 = f_1"
},
{
"math_id": 13,
"text": "p_{n+1}"
},
{
"math_id": 14,
"text": "x"
},
{
"math_id": 15,
"text": "f_{n+1} - p_n"
},
{
"math_id": 16,
"text": "\\geq n+1"
},
{
"math_id": 17,
"text": "p_n"
},
{
"math_id": 18,
"text": "f(x) = x"
},
{
"math_id": 19,
"text": "\\forall \\epsilon > 0, n \\in \\{1, 2, ...\\}\\exists "
},
{
"math_id": 20,
"text": "p(x) = a_nx^n + \\cdots + a_N x^N,"
},
{
"math_id": 21,
"text": "\\|f-p\\|_\\infty \\leq \\epsilon"
},
{
"math_id": 22,
"text": "g(x) = x - c\\tanh (x/c)"
},
{
"math_id": 23,
"text": "\\|f-g\\|_\\infty < c"
},
{
"math_id": 24,
"text": "\\epsilon"
},
{
"math_id": 25,
"text": "f(x) =x"
},
{
"math_id": 26,
"text": "g(x)"
},
{
"math_id": 27,
"text": "c < \\epsilon/2"
}
]
| https://en.wikipedia.org/wiki?curid=72623626 |
7262872 | London equations | Electromagnetic equations describing superconductors
The London equations, developed by brothers Fritz and Heinz London in 1935, are constitutive relations for a superconductor relating its superconducting current to electromagnetic fields in and around it. Whereas Ohm's law is the simplest constitutive relation for an ordinary conductor, the London equations are the simplest meaningful description of superconducting phenomena, and form the genesis of almost any modern introductory text on the subject. A major triumph of the equations is their ability to explain the Meissner effect, wherein a material exponentially expels all internal magnetic fields as it crosses the superconducting threshold.
Description.
There are two London equations when expressed in terms of measurable fields:
formula_0
Here formula_1 is the (superconducting) current density, E and B are respectively the electric and magnetic fields within the superconductor,
formula_2
is the charge of an electron or proton,
formula_3
is electron mass, and
formula_4
is a phenomenological constant loosely associated with a number density of superconducting carriers.
The two equations can be combined into a single "London Equation"
in terms of a specific vector potential formula_5 which has been gauge fixed to the "London gauge", giving:
formula_6
In the London gauge, the vector potential obeys the following requirements, ensuring that it can be interpreted as a current density: formula_7 formula_8 in the bulk of the superconductor, and formula_9 where formula_10 is the normal vector at the surface of the superconductor.
The first requirement, also known as Coulomb gauge condition, leads to the constant superconducting electron density formula_11 as expected from the continuity equation. The second requirement is consistent with the fact that supercurrent flows near the surface. The third requirement ensures no accumulation of superconducting electrons on the surface. These requirements do away with all gauge freedom and uniquely determine the vector potential. One can also write the London equation in terms of an arbitrary gauge formula_12 by simply defining formula_13, where formula_14 is a scalar function and formula_15 is the change in gauge which shifts the arbitrary gauge to the London gauge.
The vector potential expression holds for magnetic fields that vary slowly in space.
London penetration depth.
If the second of London's equations is manipulated by applying Ampere's law,
formula_16,
then it can be turned into the Helmholtz equation for magnetic field:
formula_17
where the inverse of the laplacian eigenvalue:
formula_18
is the characteristic length scale, formula_19, over which external magnetic fields are exponentially suppressed; it is called the London penetration depth, and typical values are from 50 to 500 nm.
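As a rough order-of-magnitude check (the carrier densities below are assumed illustrative values rather than figures from the text), inserting standard constants into the expression formula_18 indeed gives penetration depths in this range:

```python
# London penetration depth lambda_s = sqrt(m / (mu_0 * n_s * e^2))
import math

m_e = 9.109e-31        # electron mass, kg
q = 1.602e-19          # elementary charge, C
mu_0 = 4e-7 * math.pi  # vacuum permeability, H/m

for n_s in (1e27, 1e28):   # assumed superconducting carrier densities, m^-3
    lam = math.sqrt(m_e / (mu_0 * n_s * q ** 2))
    print(f"n_s = {n_s:.0e} m^-3  ->  lambda = {lam * 1e9:.0f} nm")
```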
For example, consider a superconductor within free space where the magnetic field outside the superconductor is a constant value pointed parallel to the superconducting boundary plane in the "z" direction. If "x" measures the perpendicular distance into the superconductor from its boundary, then the solution inside the superconductor may be shown to be
formula_20
From here the physical meaning of the London penetration depth can perhaps most easily be discerned.
Rationale.
Original arguments.
While it is important to note that the above equations cannot be formally derived,
the Londons did follow a certain intuitive logic in the formulation of their theory. Substances across a stunningly wide range of composition behave roughly according to Ohm's law, which states that current is proportional to electric field. However, such a linear relationship is impossible in a superconductor for, almost by definition, the electrons in a superconductor flow with no resistance whatsoever. To this end, the London brothers imagined electrons as if they were free electrons under the influence of a uniform external electric field. According to the Lorentz force law
formula_21
these electrons should encounter a uniform force, and thus they should in fact accelerate uniformly. Assume that the electrons in the superconductor are now driven by an electric field; then, according to the definition of current density formula_22 we should have
formula_23
This is the first London equation. To obtain the second equation, take the curl of the first London equation and apply Faraday's law,
formula_24,
to obtain
formula_25
As it currently stands, this equation permits both constant and exponentially decaying solutions. The Londons recognized from the Meissner effect that constant nonzero solutions were nonphysical, and thus postulated that not only was the time derivative of the above expression equal to zero, but also that the expression in the parentheses must be identically zero:
formula_26
This results in the second London equation and formula_27 (up to a gauge transformation which is fixed by choosing "London gauge") since the magnetic field is defined through formula_28
Additionally, according to Ampere's law formula_29 , one may derive that: formula_30
On the other hand, since formula_31, we have formula_32, which implies that the spatial distribution of the magnetic field obeys:
formula_17
with penetration depth formula_33. In one dimension, such a Helmholtz equation has solutions of the form formula_20
Inside the superconductor formula_34, the magnetic field decays exponentially, which explains the Meissner effect well. With the magnetic field distribution, we can use Ampere's law formula_29 again to see that the supercurrent formula_35 also flows near the surface of the superconductor, as expected from the requirement that formula_35 be interpretable as a physical current.
While the above rationale holds for a superconductor, one may also argue in the same way for a perfect conductor. However, one important fact that distinguishes a superconductor from a perfect conductor is that a perfect conductor does not exhibit the Meissner effect for formula_36. In fact, the postulate formula_26 does not hold for a perfect conductor. Instead, the time derivative must be kept and cannot be simply removed. This results in the fact that the time derivative of the formula_37 field (instead of the formula_37 field itself) obeys:
formula_38
For formula_36, deep inside a perfect conductor we have formula_39 rather than formula_40 as in a superconductor. Consequently, whether the magnetic flux inside a perfect conductor vanishes depends on the initial condition (whether it is zero-field cooled or not).
Canonical momentum arguments.
It is also possible to justify the London equations by other means.
Current density is defined according to the equation
formula_41
Taking this expression from a classical description to a quantum mechanical one, we must replace values formula_42 and formula_43 by the expectation values of their operators. The velocity operator
formula_44
is defined by dividing the gauge-invariant, kinematic momentum operator by the particle mass "m". Note we are using formula_45 as the electron charge.
We may then make this replacement in the equation above. However, an important assumption from the microscopic theory of superconductivity is that the superconducting state of a system is the ground state, and according to a theorem of Bloch's,
in such a state the canonical momentum p is zero. This leaves
formula_46
which is the London equation according to the second formulation above.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{\\partial \\mathbf{j}_{\\rm s}}{\\partial t} = \\frac{n_{\\rm s} e^2}{m}\\mathbf{E}, \\qquad \\mathbf{\\nabla}\\times\\mathbf{j}_{\\rm s} =-\\frac{n_{\\rm s} e^2}{m }\\mathbf{B}. "
},
{
"math_id": 1,
"text": "{\\mathbf{j}}_{\\rm s}"
},
{
"math_id": 2,
"text": "e\\,"
},
{
"math_id": 3,
"text": "m\\,"
},
{
"math_id": 4,
"text": "n_{\\rm s}\\,"
},
{
"math_id": 5,
"text": "\\mathbf{A}_{\\rm s}"
},
{
"math_id": 6,
"text": "\\mathbf{j}_{s} =-\\frac{n_{\\rm s}e^2}{m}\\mathbf{A}_{\\rm s}. "
},
{
"math_id": 7,
"text": " \\nabla\\cdot \\mathbf{A}_{\\rm s} = 0,"
},
{
"math_id": 8,
"text": "\\mathbf{A}_{\\rm s} = 0 "
},
{
"math_id": 9,
"text": "\\mathbf{A}_{\\rm s} \\cdot \\hat{\\mathbf{n}} = 0,"
},
{
"math_id": 10,
"text": "\\hat{\\mathbf{n}}"
},
{
"math_id": 11,
"text": "\\dot \\rho_{\\rm s} = 0"
},
{
"math_id": 12,
"text": "\\mathbf{A}"
},
{
"math_id": 13,
"text": "\\mathbf{A}_{\\rm s} = (\\mathbf{A} + \\nabla \\phi)"
},
{
"math_id": 14,
"text": "\\phi"
},
{
"math_id": 15,
"text": "\\nabla \\phi"
},
{
"math_id": 16,
"text": "\\nabla \\times \\mathbf{B} = \\mu_0 \\mathbf{j}"
},
{
"math_id": 17,
"text": "\\nabla^2 \\mathbf{B} = \\frac{1}{\\lambda_{\\rm s}^2}\\mathbf{B} "
},
{
"math_id": 18,
"text": "\\lambda_{\\rm s} \\equiv \\sqrt{\\frac{m}{\\mu_0 n_{\\rm s} e^2}}"
},
{
"math_id": 19,
"text": "\\lambda_{\\rm s}"
},
{
"math_id": 20,
"text": "B_z(x) = B_0 e^{-x / \\lambda_{\\rm s}}. \\,"
},
{
"math_id": 21,
"text": "\\mathbf{F}=m \\dot{\\mathbf{v}}=-e\\mathbf{E} - e\\mathbf{v} \\times \\mathbf{B}"
},
{
"math_id": 22,
"text": "\\mathbf{j}_{\\rm s} = -n_{\\rm s} e \\mathbf{v}_{\\rm s} "
},
{
"math_id": 23,
"text": "\\frac{\\partial \\mathbf{j}_{s}}{\\partial t} = -n_{\\rm s} e \\frac{\\partial \\mathbf{v}}{\\partial t } =\\frac{n_{\\rm s} e^2}{m}\\mathbf{E} "
},
{
"math_id": 24,
"text": "\\nabla \\times \\mathbf{E} = -\\frac{\\partial \\mathbf{B}}{\\partial t}"
},
{
"math_id": 25,
"text": " \\frac{\\partial}{\\partial t}\\left( \\nabla \\times \\mathbf{j}_{\\rm s} + \\frac{n_{\\rm s} e^2}{m} \\mathbf{B} \\right) = 0."
},
{
"math_id": 26,
"text": " \\nabla \\times \\mathbf{j}_{\\rm s} + \\frac{n_{\\rm s} e^2}{m} \\mathbf{B} = 0"
},
{
"math_id": 27,
"text": "\\mathbf{j}_{s} =-\\frac{n_{\\rm s}e^2}{m}\\mathbf{A}_{\\rm s} "
},
{
"math_id": 28,
"text": " B=\\nabla \\times A_{\\rm s}."
},
{
"math_id": 29,
"text": "\\nabla \\times \\mathbf{B} = \\mu_0 \\mathbf{j}_{\\rm s}"
},
{
"math_id": 30,
"text": "\\nabla \\times (\\nabla\\times \\mathbf{B}) =\\nabla \\times \\mu_0 \\mathbf{j}_{\\rm s} =-\\frac{\\mu_0 n_{\\rm s} e^2}{m} \\mathbf{B}. "
},
{
"math_id": 31,
"text": "\\nabla \\cdot \\mathbf{B} = 0 "
},
{
"math_id": 32,
"text": "\\nabla \\times (\\nabla\\times \\mathbf{B}) = -\\nabla^2\\mathbf{B} "
},
{
"math_id": 33,
"text": "\\lambda_{\\rm s}=\\sqrt{\\frac{m}{\\mu_0 n_{\\rm s} e^2}}"
},
{
"math_id": 34,
"text": "(x>0)"
},
{
"math_id": 35,
"text": "\\mathbf{j}_{\\rm s}"
},
{
"math_id": 36,
"text": "T<T_c"
},
{
"math_id": 37,
"text": " \\mathbf{B}"
},
{
"math_id": 38,
"text": "\\nabla^2 \\frac{\\partial \\mathbf{B}}{\\partial t} = \\frac{1}{\\lambda_{\\rm s}^2}\\frac{\\partial \\mathbf{B}}{\\partial t}. "
},
{
"math_id": 39,
"text": "\\dot \\mathbf{B} = 0 "
},
{
"math_id": 40,
"text": "\\mathbf{B}=0 "
},
{
"math_id": 41,
"text": "\\mathbf{j}_{\\rm s} =- n_{\\rm s} e \\mathbf{v}_{\\rm s}."
},
{
"math_id": 42,
"text": "\\mathbf{j}_{\\rm s} "
},
{
"math_id": 43,
"text": "\\mathbf{v}_{\\rm s} "
},
{
"math_id": 44,
"text": "\\mathbf{v}_{\\rm s} = \\frac{1}{m} \\left( \\mathbf{p} + e \\mathbf{A}_{\\rm s} \\right) "
},
{
"math_id": 45,
"text": "-e"
},
{
"math_id": 46,
"text": "\\mathbf{j} =-\\frac{n_{\\rm s}e^2}{m}\\mathbf{A}_{\\rm s}, "
}
]
| https://en.wikipedia.org/wiki?curid=7262872 |
726335 | Compact-open topology | In mathematics, the compact-open topology is a topology defined on the set of continuous maps between two topological spaces. The compact-open topology is one of the commonly used topologies on function spaces, and is applied in homotopy theory and functional analysis. It was introduced by Ralph Fox in 1945.
If the codomain of the functions under consideration has a uniform structure or a metric structure then the compact-open topology is the "topology of uniform convergence on compact sets." That is to say, a sequence of functions converges in the compact-open topology precisely when it converges uniformly on every compact subset of the domain.
Definition.
Let X and Y be two topological spaces, and let "C"("X", "Y") denote the set of all continuous maps between X and Y. Given a compact subset K of X and an open subset U of Y, let "V"("K", "U") denote the set of all functions "f" ∈ "C"("X", "Y") such that "f" ("K") ⊆ "U". In other words, formula_0. Then the collection of all such "V"("K", "U") is a subbase for the compact-open topology on "C"("X", "Y"). (This collection does not always form a base for a topology on "C"("X", "Y").)
When working in the category of compactly generated spaces, it is common to modify this definition by restricting to the subbase formed from those K that are the image of a compact Hausdorff space. Of course, if X is compactly generated and Hausdorff, this definition coincides with the previous one. However, the modified definition is crucial if one wants the convenient category of compactly generated weak Hausdorff spaces to be Cartesian closed, among other useful properties. The confusion between this definition and the one above is caused by differing usage of the word compact.
If X is locally compact, then the functor formula_1 on the category of topological spaces always has a right adjoint formula_2. The topology this adjoint places on the function set coincides with the compact-open topology, and the adjunction may be used to define that topology uniquely. The modification of the definition for compactly generated spaces may be viewed as taking the adjoint of the product in the category of compactly generated spaces instead of the category of topological spaces, which ensures that the right adjoint always exists.
Properties.
"f" ("x"), is continuous. This can be seen as a special case of the above where X is a one-point space.
sup{"d"( "f" ("x"), "g"("x")) : "x" in "X"}, for "f" , "g" in "C"("X", "Y").
Applications.
The compact open topology can be used to topologize the following sets:
In addition, there is a homotopy equivalence between the spaces formula_8. These topological spaces, formula_9, are useful in homotopy theory because they can be used to form a topological space that serves as a model for the homotopy type of the "set" of homotopy classes of maps
formula_10
This is because formula_11 is the set of path components in formula_9, that is, there is an isomorphism of sets
formula_12
where formula_13 is the homotopy equivalence relation.
Fréchet differentiable functions.
Let X and Y be two Banach spaces defined over the same field, and let "C m"("U", "Y") denote the set of all m-continuously Fréchet-differentiable functions from the open subset "U" ⊆ "X" to Y. The compact-open topology is the initial topology induced by the seminorms
formula_14
where "D"0 "f" ("x")
"f" ("x"), for each compact subset "K" ⊆ "U".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V(K, U) = C(K, U) \\times_{C(K, Y)} C(X, Y)"
},
{
"math_id": 1,
"text": " X \\times - "
},
{
"math_id": 2,
"text": " Hom(X, -) "
},
{
"math_id": 3,
"text": "\\Omega(X,x_0) = \\{ f: I \\to X : f(0) = f(1) = x_0 \\}"
},
{
"math_id": 4,
"text": "X"
},
{
"math_id": 5,
"text": "x_0"
},
{
"math_id": 6,
"text": "E(X, x_0, x_1) = \\{ f: I \\to X : f(0) = x_0 \\text{ and } f(1) = x_1 \\}"
},
{
"math_id": 7,
"text": "E(X, x_0) = \\{ f: I \\to X : f(0) = x_0 \\}"
},
{
"math_id": 8,
"text": "C(\\Sigma X, Y) \\cong C(X, \\Omega Y)"
},
{
"math_id": 9,
"text": "C(X,Y)"
},
{
"math_id": 10,
"text": "\\pi(X,Y) = \\{[f]: X \\to Y | f \\text{ is a homotopy class} \\}."
},
{
"math_id": 11,
"text": "\\pi(X,Y)"
},
{
"math_id": 12,
"text": "\\pi(X,Y) \\to C(I, C(X, Y))/\\sim"
},
{
"math_id": 13,
"text": "\\sim"
},
{
"math_id": 14,
"text": "p_{K}(f) = \\sup \\left\\{ \\left\\| D^j f(x) \\right\\| \\ : \\ x \\in K, 0 \\leq j \\leq m \\right\\}"
}
]
| https://en.wikipedia.org/wiki?curid=726335 |
72634 | Polyomino | Geometric shapes formed from squares
A polyomino is a plane geometric figure formed by joining one or more equal squares edge to edge. It is a polyform whose cells are squares. It may be regarded as a finite subset of the regular square tiling.
Polyominoes have been used in popular puzzles since at least 1907, and the enumeration of pentominoes is dated to antiquity. Many results with the pieces of 1 to 6 squares were first published in "Fairy Chess Review" between the years 1937 and 1957, under the name of "dissection problems." The name "polyomino" was invented by Solomon W. Golomb in 1953, and it was popularized by Martin Gardner in a November 1960 "Mathematical Games" column in "Scientific American".
Related to polyominoes are polyiamonds, formed from equilateral triangles; polyhexes, formed from regular hexagons; and other plane polyforms. Polyominoes have been generalized to higher dimensions by joining cubes to form polycubes, or hypercubes to form polyhypercubes.
In statistical physics, the study of polyominoes and their higher-dimensional analogs (which are often referred to as lattice animals in this literature) is applied to problems in physics and chemistry. Polyominoes have been used as models of branched polymers and of percolation clusters.
Like many puzzles in recreational mathematics, polyominoes raise many combinatorial problems. The most basic is enumerating polyominoes of a given size. No formula has been found except for special classes of polyominoes. A number of estimates are known, and there are algorithms for calculating them.
Polyominoes with holes are inconvenient for some purposes, such as tiling problems. In some contexts polyominoes with holes are excluded, allowing only simply connected polyominoes.
<templatestyles src="Template:TOC limit/styles.css" />
Enumeration of polyominoes.
Free, one-sided, and fixed polyominoes.
There are three common ways of distinguishing polyominoes for enumeration: "free" polyominoes are considered distinct as long as none is a rigid transformation (translation, rotation, reflection or glide reflection) of another; "one-sided" polyominoes are distinct as long as none is a translation or rotation of another (pieces that may not be flipped over); and "fixed" polyominoes are distinct as long as none is a translation of another (pieces that may be neither flipped nor rotated).
The following table shows the numbers of polyominoes of various types with "n" cells.
Fixed polyominoes were enumerated in 2004 up to "n" = 56 by Iwan Jensen, and in 2024 up to "n" = 70 by Gill Barequet and Gil Ben-Shachar.
Free polyominoes were enumerated in 2007 up to "n" = 28 by Tomás Oliveira e Silva, in 2012 up to "n" = 45 by Toshihiro Shirakawa, and in 2023 up to "n" = 50 by John Mason.
The above OEIS sequences, with the exception of A001419, include the count of 1 for the number of null-polyominoes; a null-polyomino is one that is formed of zero squares.
Symmetries of polyominoes.
The dihedral group "D"4 is the group of symmetries (symmetry group) of a square. This group contains four rotations and four reflections. It is generated by alternating reflections about the "x"-axis and about a diagonal. One free polyomino corresponds to at most 8 fixed polyominoes, which are its images under the symmetries of "D"4. However, those images are not necessarily distinct: the more symmetry a free polyomino has, the fewer distinct fixed counterparts it has. Therefore, a free polyomino that is invariant under some or all non-trivial symmetries of "D"4 may correspond to only 4, 2 or 1 fixed polyominoes. Mathematically, free polyominoes are equivalence classes of fixed polyominoes under the group "D"4.
Polyominoes have the following possible symmetries; the least number of squares needed in a polyomino with that symmetry is given in each case:
In the same way, the number of one-sided polyominoes depends on polyomino symmetry as follows:
The following table shows the numbers of polyominoes with "n" squares, sorted by symmetry groups.
Algorithms for enumeration of fixed polyominoes.
Inductive algorithms.
Each polyomino of size "n"+1 can be obtained by adding a square to a polyomino of size "n". This leads to algorithms for generating polyominoes inductively.
Most simply, given a list of polyominoes of size "n", squares may be added next to each polyomino in each possible position, and the resulting polyomino of size "n"+1 added to the list if not a duplicate of one already found; refinements in ordering the enumeration and marking adjacent squares that should not be considered reduce the number of cases that need to be checked for duplicates. This method may be used to enumerate either free or fixed polyominoes.
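A minimal Python sketch of this set-based inductive method (function and variable names here are illustrative, not taken from the literature) might look as follows; it normalises each polyomino by translation only, and therefore counts fixed polyominoes:

```python
def fixed_polyominoes(n):
    """Count fixed polyominoes of sizes 1..n by the simple inductive method:
    grow every k-omino by one adjacent square, translate to a canonical
    position, and deduplicate."""
    def normalise(cells):
        mx = min(x for x, _ in cells)
        my = min(y for _, y in cells)
        return frozenset((x - mx, y - my) for x, y in cells)

    current = {frozenset({(0, 0)})}   # the single monomino
    counts = [1]                      # counts[k-1] = number of fixed k-ominoes
    for _ in range(n - 1):
        bigger = set()
        for poly in current:
            for (x, y) in poly:
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    cell = (x + dx, y + dy)
                    if cell not in poly:
                        bigger.add(normalise(poly | {cell}))
        current = bigger
        counts.append(len(current))
    return counts

print(fixed_polyominoes(7))   # [1, 2, 6, 19, 63, 216, 760]
```

Because every ("n"+1)-omino contains at least one removable square, growing every "n"-omino by one adjacent cell and deduplicating is guaranteed to generate them all, at the cost of storing the full list at each size.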
A more sophisticated method, described by Redelmeier, has been used by many authors as a way of not only counting polyominoes (without requiring that all polyominoes of size "n" be stored in order to enumerate those of size "n"+1), but also proving upper bounds on their number. The basic idea is that we begin with a single square, and from there, recursively add squares. Depending on the details, it may count each "n"-omino "n" times, once starting from each of its "n" squares, or may be arranged to count each once only.
The simplest implementation involves adding one square at a time. Beginning with an initial square, number the adjacent squares, clockwise from the top, 1, 2, 3, and 4. Now pick a number between 1 and 4, and add a square at that location. Number the unnumbered adjacent squares, starting with 5. Then, pick a number larger than the previously picked number, and add that square. Continue picking a number larger than the number of the current square, adding that square, and then numbering the new adjacent squares. When "n" squares have been created, an "n"-omino has been created.
This method ensures that each fixed polyomino is counted exactly "n" times, once for each starting square. It can be optimized so that it counts each polyomino only once, rather than "n" times. Starting with the initial square, declare it to be the lower-left square of the polyomino. Simply do not number any square that is on a lower row, or left of the square on the same row. This is the version described by Redelmeier.
If one wishes to count free polyominoes instead, then one may check for symmetries after creating each "n"-omino. However, it is faster to generate symmetric polyominoes separately (by a variation of this method) and so determine the number of free polyominoes by Burnside's lemma.
Transfer-matrix method.
The most modern algorithm for enumerating the fixed polyominoes was discovered by Iwan Jensen. An improvement on Andrew Conway's method, it is exponentially faster than the previous methods (however, its running time is still exponential in "n").
Both Conway's and Jensen's versions of the transfer-matrix method involve counting the number of polyominoes that have a certain width. Computing the number for all widths gives the total number of polyominoes. The basic idea behind the method is that possible beginning rows are considered, and then to determine the minimum number of squares needed to complete the polyomino of the given width. Combined with the use of generating functions, this technique is able to count many polyominoes at once, thus enabling it to run many times faster than methods that have to generate every polyomino.
Although it has excellent running time, the tradeoff is that this algorithm uses exponential amounts of memory (many gigabytes of memory are needed for "n" above 50), is much harder to program than the other methods, and can't currently be used to count free polyominoes.
Asymptotic growth of the number of polyominoes.
Fixed polyominoes.
Theoretical arguments and numerical calculations support the estimate for the number of fixed polyominoes of size n
formula_0
where "λ" = 4.0626 and "c" = 0.3169. However, this result is not proven and the values of "λ" and "c" are only estimates.
The known theoretical results are not nearly as specific as this estimate. It has been proven that
formula_1
exists. In other words, "An" grows exponentially. The best known lower bound for "λ", found in 2016, is 4.00253. The best known upper bound is "λ" < 4.5252.
To establish a lower bound, a simple but highly effective method is concatenation of polyominoes. Define the upper-right square to be the rightmost square in the uppermost row of the polyomino. Define the bottom-left square similarly. Then, the upper-right square of any polyomino of size "n" can be attached to the bottom-left square of any polyomino of size "m" to produce a unique ("n"+"m")-omino. This proves "AnAm" ≤ "A""n"+"m". Using this equation, one can show "λ" ≥ ("An")1/"n" for all "n". Refinements of this procedure combined with data for "An" produce the lower bound given above.
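As a rough numerical illustration of the bound "λ" ≥ ("An")1/"n" (a sketch using the published counts of fixed polyominoes, OEIS A001168, for "n" up to 10):

```python
# Crude lower bounds lambda >= A_n**(1/n) from the first few fixed-polyomino counts
A = [1, 2, 6, 19, 63, 216, 760, 2725, 9910, 36446]   # A_1 .. A_10 (OEIS A001168)
for n, a in enumerate(A, start=1):
    print(f"n = {n:2d}:  lambda >= {a ** (1 / n):.4f}")
# The per-n bound grows slowly (about 2.86 at n = 10); the published bound of
# 4.00253 combines refinements of the concatenation argument with counts for
# much larger n.
```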
The upper bound is attained by generalizing the inductive method of enumerating polyominoes. Instead of adding one square at a time, one adds a cluster of squares at a time. This is often described as adding "twigs". By proving that every "n"-omino is a sequence of twigs, and by proving limits on the combinations of possible twigs, one obtains an upper bound on the number of "n"-ominoes. For example, in the algorithm outlined above, at each step we must choose a larger number, and at most three new numbers are added (since at most three unnumbered squares are adjacent to any numbered square). This can be used to obtain an upper bound of 6.75. Using 2.8 million twigs, Klarner and Rivest obtained an upper bound of 4.65, which was subsequently improved by Barequet and Shalah to 4.5252.
Free polyominoes.
Approximations for the number of fixed polyominoes and free polyominoes are related in a simple way. A free polyomino with no symmetries (rotation or reflection) corresponds to 8 distinct fixed polyominoes, and for large "n", most "n"-ominoes have no symmetries. Therefore, the number of fixed "n"-ominoes is approximately 8 times the number of free "n"-ominoes. Moreover, this approximation is exponentially more accurate as "n" increases.
Special classes of polyominoes.
Exact formulas are known for enumerating polyominoes of special classes, such as the class of "convex" polyominoes and the class of "directed" polyominoes.
The definition of a "convex" polyomino is different from the usual definition of convexity, but is similar to the definition used for the orthogonal convex hull. A polyomino is said to be "vertically" or "column convex" if its intersection with any vertical line is convex (in other words, each column has no holes). Similarly, a polyomino is said to be "horizontally" or "row convex" if its intersection with any horizontal line is convex. A polyomino is said to be "convex" if it is row and column convex.
A polyomino is said to be "directed" if it contains a square, known as the "root", such that every other square can be reached by movements of up or right one square, without leaving the polyomino.
Directed polyominoes, column (or row) convex polyominoes, and convex polyominoes have been effectively enumerated by area "n", as well as by some other parameters such as perimeter, using generating functions.
A polyomino is equable if its area equals its perimeter. An equable polyomino must be made from an even number of squares; every even number greater than 15 is possible. For instance, the 16-omino in the form of a 4 × 4 square and the 18-omino in the form of a 3 × 6 rectangle are both equable. For polyominoes with 15 squares or fewer, the perimeter always exceeds the area.
Tiling with polyominoes.
In recreational mathematics, challenges are often posed for tiling a prescribed region, or the entire plane, with polyominoes, and related problems are investigated in mathematics and computer science.
Tiling regions with sets of polyominoes.
Puzzles commonly ask for tiling a given region with a given set of polyominoes, such as the 12 pentominoes. Golomb's and Gardner's books have many examples. A typical puzzle is to tile a 6×10 rectangle with the twelve pentominoes; the 2339 solutions to this were found in 1960. Where multiple copies of the polyominoes in the set are allowed, Golomb defines a hierarchy of different regions that a set may be able to tile, such as rectangles, strips, and the whole plane, and shows that whether polyominoes from a given set can tile the plane is undecidable, by mapping sets of Wang tiles to sets of polyominoes.
Because the general problem of tiling regions of the plane with sets of polyominoes is NP-complete, tiling with more than a few pieces rapidly becomes intractable and so the aid of a computer is required. The traditional approach to tiling finite regions of the plane uses a technique in computer science called backtracking.
In Jigsaw Sudokus a square grid is tiled with polyomino-shaped regions (sequence in the OEIS).
Tiling regions with copies of a single polyomino.
Another class of problems asks whether copies of a given polyomino can tile a rectangle, and if so, what rectangles they can tile. These problems have been extensively studied for particular polyominoes, and tables of results for individual polyominoes are available. Klarner and Göbel showed that for any polyomino there is a finite set of "prime" rectangles it tiles, such that all other rectangles it tiles can be tiled by those prime rectangles. Kamenetsky and Cooke showed how various disjoint (called "holey") polyominoes can tile rectangles.
Beyond rectangles, Golomb gave his hierarchy for single polyominoes: a polyomino may tile a rectangle, a half strip, a bent strip, an enlarged copy of itself, a quadrant, a strip, a half plane, the whole plane, certain combinations, or none of these. There are certain implications among these, both obvious (for example, if a polyomino tiles the half plane then it tiles the whole plane) and less so (for example, if a polyomino tiles an enlarged copy of itself, then it tiles the quadrant). Polyominoes of size up to 6 are characterized in this hierarchy (with the status of one hexomino, later found to tile a rectangle, unresolved at that time).
In 2001 Cristopher Moore and John Michael Robson showed that the problem of tiling one polyomino with copies of another is NP-complete.
Tiling the plane with copies of a single polyomino.
Tiling the plane with copies of a single polyomino has also been much discussed. It was noted in 1965 that all polyominoes up to hexominoes and all but four heptominoes tile the plane. It was then established by David Bird that all but 26 octominoes tile the plane. Rawsthorne found that all but 235 polyominoes of size 9 tile, and such results have been extended to higher area by Rhoads (to size 14) and others. Polyominoes tiling the plane have been classified by the symmetries of their tilings and by the number of aspects (orientations) in which the tiles appear in them.
The study of which polyominoes can tile the plane has been facilitated using the Conway criterion: except for two nonominoes, all tiling polyominoes up to size 9 form a patch of at least one tile satisfying it, with higher-size exceptions more frequent.
Several polyominoes can tile larger copies of themselves, and repeating this process recursively gives a rep-tile tiling of the plane. For instance, for every positive integer n, it is possible to combine "n"2 copies of the L-tromino, L-tetromino, or P-pentomino into a single larger shape similar to the smaller polyomino from which it was formed.
Tiling a common figure with various polyominoes.
The "compatibility problem" is to take two or more polyominoes and find a figure that can be tiled with each. Polyomino compatibility has been widely studied since the 1990s. Jorge Luis Mireles and Giovanni Resta have published websites of systematic results, and Livio Zucca shows results for some complicated cases like three different pentominoes. The general problem can be hard. The first compatibility figure for the L and X pentominoes was published in 2005 and had 80 tiles of each kind. Many pairs of polyominoes have been proved incompatible by systematic exhaustion. No algorithm is known for deciding whether two arbitrary polyominoes are compatible.
Polyominoes in puzzles and games.
In addition to the tiling problems described above, there are recreational mathematics puzzles that require folding a polyomino to create other shapes. Gardner proposed several simple games with a set of free pentominoes and a chessboard. Some variants of the Sudoku puzzle use nonomino-shaped regions on the grid. The video game "Tetris" is based on the seven one-sided tetrominoes (spelled "Tetriminos" in the game), and the board game "Blokus" uses all of the free polyominoes up to pentominoes.
Etymology.
The word "polyomino" and the names of the various sizes of polyomino are all back-formations from the word "domino", a common game piece consisting of two squares. The name "domino" for the game piece is believed to come from the spotted masquerade garment "domino", from Latin "dominus". Despite this word origin, in naming polyominoes, the first letter "d-" of "domino" is fancifully interpreted as a version of the prefix "di-" meaning "two", and replaced by other numerical prefixes.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A_n \\sim \\frac{c\\lambda^n}{n}"
},
{
"math_id": 1,
"text": "\\lim_{n\\rightarrow \\infty} (A_n)^\\frac{1}{n} = \\lambda"
}
]
| https://en.wikipedia.org/wiki?curid=72634 |
72635333 | Nicolson–Ross–Weir method | Measurement technique in microwave engineering
Nicolson–Ross–Weir method is a measurement technique for determination of complex permittivities and permeabilities of material samples for microwave frequencies. The method is based on insertion of a material sample with a known thickness inside a waveguide, such as a coaxial cable or a rectangular waveguide, after which the dispersion data is extracted from the resulting scattering parameters. The method is named after A. M. Nicolson and G. F. Ross, and W. B. Weir, who developed the approach in 1970 and 1974, respectively.
The technique is one of the most common procedures for material characterization in microwave engineering.
Method.
The method uses scattering parameters of a material sample embedded in a waveguide, namely formula_0 and formula_1, to calculate permittivity and permeability data. formula_0 and formula_1 correspond to the cumulative reflection and transmission coefficients of the sample, referenced to the respective sample ends; these parameters account for the multiple internal reflections inside the sample, which is considered to have a thickness of formula_2. The reflection coefficient of the bulk sample is:
formula_3
where
formula_4
The sign of the root for the reflection coefficient is chosen appropriately to ensure its passivity (formula_5). Similarly, the transmission coefficient of the bulk sample can be written as:
formula_6
Thus, the effective permeability (formula_7) and permittivity (formula_8) of the material can be written as:
formula_9
formula_10
where
formula_11
and formula_12, formula_13 and formula_14 are the free-space wavelength, the guide wavelength of the empty waveguide, and the cutoff wavelength of the waveguide, respectively.
The constitutive relation for formula_15 admits an infinite number of solutions due to the branches of the complex logarithm. The ambiguity regarding its result can be resolved by taking the group delay into account.
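A compact numerical sketch of the extraction could look like the following (assuming the S-parameters have already been de-embedded to the sample faces; the function and variable names are illustrative, not from the original papers, and the principal branch of the complex logarithm is used):

```python
import numpy as np

def nrw_extract(s11, s21, d, lambda_0, lambda_c=np.inf):
    """Nicolson-Ross-Weir extraction of complex permittivity and permeability.

    s11, s21 : complex S-parameters referenced to the sample faces
    d        : sample thickness (m)
    lambda_0 : free-space wavelength (m)
    lambda_c : cutoff wavelength of the waveguide (m); np.inf for coaxial/TEM lines
    """
    X = (1 - (s21**2 - s11**2)) / (2 * s11)
    gamma = X + np.sqrt(X**2 - 1 + 0j)
    if abs(gamma) > 1:                       # enforce passivity |gamma| <= 1
        gamma = X - np.sqrt(X**2 - 1 + 0j)
    T = (s11 + s21 - gamma) / (1 - (s11 + s21) * gamma)

    # Principal branch of ln(1/T); electrically thick samples may require a
    # different branch, e.g. selected by comparison with the measured group delay.
    inv_Lambda_sq = -(np.log(1 / T) / (2 * np.pi * d)) ** 2
    inv_Lambda = np.sqrt(inv_Lambda_sq + 0j)

    lambda_0g = lambda_0 / np.sqrt(1 - (lambda_0 / lambda_c) ** 2)  # guide wavelength
    mu = lambda_0g * inv_Lambda * (1 + gamma) / (1 - gamma)
    eps = lambda_0**2 * (inv_Lambda_sq + 1 / lambda_c**2) / mu
    return eps, mu
```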
Limitations and extensions.
In the case of low material loss, the Nicolson–Ross–Weir method is known to be unstable for sample thicknesses at integer multiples of one half wavelength, due to a resonance phenomenon. Improvements over the standard algorithm have been presented in the engineering literature to alleviate this effect. Furthermore, completely filling a waveguide with sample material may pose a particular challenge: gaps left while filling the waveguide section excite higher-order modes, which may yield errors in the scattering parameter results. In such cases, more advanced methods based on rigorous modal analysis of partially filled waveguides, or optimization methods, can be used. A modification of the method for single-port measurements has also been reported.
In addition to homogeneous materials, extensions of the method have been developed to obtain the constitutive parameters of isotropic and bianisotropic metamaterials.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S_{11}"
},
{
"math_id": 1,
"text": "S_{21}"
},
{
"math_id": 2,
"text": "d"
},
{
"math_id": 3,
"text": "\\Gamma=X \\pm \\sqrt{X^2-1}"
},
{
"math_id": 4,
"text": "X=\\frac{1-(S_{21}^2-S_{11}^2)}{2 S_{11}}"
},
{
"math_id": 5,
"text": "|\\Gamma| \\leq 1"
},
{
"math_id": 6,
"text": "T=\\frac{S_{11}+S_{21}-\\Gamma}{1-(S_{11}+S_{21})\\Gamma}"
},
{
"math_id": 7,
"text": "\\mu^*"
},
{
"math_id": 8,
"text": "\\varepsilon^*"
},
{
"math_id": 9,
"text": "\\mu^*=\\frac{\\lambda_{0g}}{\\Lambda} \\left( \\frac{1+\\Gamma}{1-\\Gamma} \\right)"
},
{
"math_id": 10,
"text": "\\varepsilon^*=\\frac{\\lambda^2_0 \\left( \\frac{1}{\\Lambda^2}+\\frac{1}{\\lambda_c^2} \\right)}{\\mu^*}"
},
{
"math_id": 11,
"text": "\\frac{1}{\\Lambda^2}=-\\left[ \\frac{1}{2\\pi d} ln\\frac{1}{T} \\right]^2 "
},
{
"math_id": 12,
"text": "\\lambda_0"
},
{
"math_id": 13,
"text": "\\lambda_{0g}"
},
{
"math_id": 14,
"text": "\\lambda_{c}"
},
{
"math_id": 15,
"text": "\\Lambda"
}
]
| https://en.wikipedia.org/wiki?curid=72635333 |
72637308 | Soft graviton theorem | Physics theorem
In physics, the soft graviton theorem, first formulated by Steven Weinberg in 1965, allows calculation of the S-matrix, used in calculating the outcome of collisions between particles, when low-energy (soft) gravitons come into play.
Specifically, consider a collision in which "n" incoming particles give rise to "m" outgoing particles, the outcome of which is described by a certain "S" matrix. If one or more soft gravitons are added to the "n" + "m" particles, the resulting "S" matrix (call it "S"') differs from the initial "S" only by a factor that depends on the momenta involved but not otherwise on the type of particles to which the gravitons couple.
The theorem also holds by putting photons in place of gravitons, thus obtaining a corresponding soft photon theorem.
The theorem is used in the context of attempts to formulate a theory of quantum gravity in the form of a perturbative quantum theory, that is, as an approximation of a possible, as yet unknown, exact theory of quantum gravity.
In 2014 Andrew Strominger and Freddy Cachazo extended the soft graviton theorem, which at leading order is gauge invariant under translations, to the subleading term of the series, obtaining gauge invariance under rotations (implying global angular momentum conservation), and connected this to the gravitational spin memory effect.
Formulation.
Given particles whose interaction is described by a certain initial "S" matrix, by adding a soft graviton (i.e., whose energy is negligible compared to the energy of the other particles) that couples to one of the incoming or outgoing particles, the resulting "S"' matrix is, leaving off some kinematic factors,
formula_0 ,
where "p" is the momentum of the particle interacting with the graviton, "ϵμν" is the graviton polarization, "pG" is the momentum of the graviton, "ε" is an infinitesimal real quantity which helps to shape the integration contour, and the factor "η" is equal to 1 for outgoing particles and -1 for incoming particles.
The formula comes from a power series and the last term with the big O indicates that terms of higher order are not considered. Although the series differs depending on the spin of the particle coupling to the graviton, the lowest-order term shown above is the same for all spins.
In the case of multiple soft gravitons involved, the factor in front of "S" is the sum of the factors due to each individual graviton.
If a soft photon (whose energy is negligible compared to the energy of the other particles) is added instead of the graviton, the resulting matrix "S"' is
formula_1 ,
with the same parameters as before but with "pγ" momentum of the photon, "ϵ" is its polarization, and "q" the charge of the particle coupled to the photon.
As for the graviton, in case of more photons, a sum over all the terms occurs.
Subleading order expansion.
The expansion of the formula to the subleading term of the series for the graviton was calculated by Andrew Strominger and Freddy Cachazo:
formula_2,
where formula_3represents the angular momentum of the particle interacting with the graviton.
This formula is gauge invariant under rotation and is connected to the gravitational spin memory effect. | [
{
"math_id": 0,
"text": "{\\cal S}' = \\sqrt{8\\pi G} \\frac{\\eta p^\\mu p^\\nu \\epsilon_{\\mu\\nu}}{p \\cdot p_G - i \\eta \\varepsilon}{\\cal S} + O(p_G^0)"
},
{
"math_id": 1,
"text": "{\\cal S}' = \\frac{\\eta q p \\cdot \\epsilon}{p \\cdot p_\\gamma - i \\eta \\varepsilon} {\\cal S} + O(p_\\gamma^0)"
},
{
"math_id": 2,
"text": "{\\cal S}' = \\sqrt{8\\pi G} \\frac{\\eta p^\\mu p^\\nu \\epsilon_{\\mu\\nu}}{p \\cdot p_G - i \\eta \\varepsilon}{\\cal S}-i\\sqrt{8\\pi G} \\frac{\\eta p^\\mu ({p_G}_\\rho J^{\\rho\\nu}) \\epsilon_{\\mu\\nu}}{p \\cdot p_G - i \\eta \\varepsilon}{\\cal S} + O(p_G^1)"
},
{
"math_id": 3,
"text": "J^{\\rho\\nu}"
}
]
| https://en.wikipedia.org/wiki?curid=72637308 |
72645041 | Core-compact space | In general topology and related branches of mathematics, a core-compact topological space formula_0 is a topological space whose partially ordered set of open subsets is a continuous poset. Equivalently, formula_0 is core-compact if it is exponentiable in the category Top of topological spaces. Expanding the definition of an exponential object, this means that for any formula_1, the set of continuous functions formula_2 has a topology such that function application is a unique continuous function from formula_3 to formula_1, which is given by the Compact-open topology and is the most general way to define it.
Another equivalent concrete definition is that every neighborhood formula_4 of a point formula_5 contains a neighborhood formula_6 of formula_5 whose closure in formula_4 is compact. As a result, every (weakly) locally compact space is core-compact, and every Hausdorff (or more generally, sober) core-compact space is locally compact, so the definition is a slight weakening of the definition of a locally compact space in the non-Hausdorff case.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "Y"
},
{
"math_id": 2,
"text": "\\mathcal{C}(X,Y)"
},
{
"math_id": 3,
"text": "X \\times \\mathcal{C}(X, Y)"
},
{
"math_id": 4,
"text": "U"
},
{
"math_id": 5,
"text": "x"
},
{
"math_id": 6,
"text": "V"
}
]
| https://en.wikipedia.org/wiki?curid=72645041 |
72646521 | Levitated optomechanics | Field of physics relating to optics and quantum mechanics)
Levitated optomechanics is a field of mesoscopic physics which deals with the mechanical motion of mesoscopic particles that are optically, electrically, or magnetically levitated. Through the use of levitation, it is possible to decouple the particle's mechanical motion exceptionally well from the environment. This in turn enables the study of high-mass quantum physics, out-of-equilibrium and nano-thermodynamics, and provides the basis for precise sensing applications.
Motivation.
In order to use mechanical oscillators in the regime of quantum physics or for sensing applications, low damping of the oscillator's motion and thus high quality factors are desirable. In nano and micromechanics, the Q-factor of a system is often limited by its suspension, which usually demands filigree structures. Nevertheless, the maximally achievable Q-factor usually correlates with the system's size, requiring large systems for achieving high Q-factors.
Particle levitation in external fields can alleviate this constraint. This is one of the reasons why the field of levitated optomechanics has become attractive for research on the foundations in physics and for high-precision applications.
Physical basics.
The interaction between a dielectric particle with polarizability formula_0 and an electric field formula_1 is given by the gradient force formula_2. When a particle is trapped and optically levitated in the focus of a Gaussian laser beam, the force can be approximated to first order by formula_3 with formula_4, i.e. a harmonic oscillator with frequency formula_5, where formula_6 is the particle's mass. Including passive damping, active external feedback and coupling results in the Langevin equations of motion:
formula_7
Here formula_8 is the total damping rate, which usually has two dominant contributions: collisions with atoms or molecules of the background gas, and photon shot noise, which becomes dominant below pressures on the order of 10⁻⁶ mbar.
The coupling term allows to model any coupling to an external heat bath.
The external feedback is usually used to cool and control the particle motion.
The approximation of a classical harmonic oscillator holds true until one reaches the regime of quantum mechanics, where the quantum harmonic oscillator is the superior approximation and the quantization of the energy levels becomes apparent. The QHO has a ground state of lowest energy where both position and velocity have a minimal variance, determined by the Heisenberg uncertainty principle.
Such quantum states are interesting starting conditions for preparing non-Gaussian quantum states, quantum enhanced sensing, matter-wave interferometry or the realization of entanglement in many-particle systems.
Methods of cooling.
Parametric feedback cooling and cold damping.
The idea of feedback cooling is to apply a position and/or velocity dependent force on the particle in a way which produces a negative feedback loop.
One way to achieve that is by adding a feedback term which is proportional to the particle's velocity (formula_9). Since that mechanism provides damping, which cools down the mechanical motion without introducing additional fluctuations, it is referred to as "cold damping". The first experiment employing this type of cooling was done in 1977 by Arthur Ashkin, who received the 2018 Nobel Prize in Physics for his pioneering work on trapping with optical tweezers.
Instead of applying a linear feedback signal, one can also combine position and velocity via formula_10 to get a signal with twice the frequency of the particle's oscillation. This way the stiffness of the trap increases when the particle moves out of the trap and decreases when the particle is moving back.
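For illustration, a minimal simulation sketch of cold damping is given below (semi-implicit Euler integration of the classical Langevin equation above, with a velocity-proportional feedback force; the parameter values are purely illustrative and are not taken from any specific experiment):

```python
import numpy as np

# Illustrative parameters only; not taken from any specific experiment.
omega = 2 * np.pi * 120e3        # trap (oscillation) frequency, rad/s
gamma = 2 * np.pi * 10.0         # gas damping rate Gamma_CM, rad/s
g_fb = 2 * np.pi * 1e3           # cold-damping feedback gain, rad/s
m = 3e-18                        # particle mass, kg
kB, T_gas = 1.380649e-23, 300.0  # Boltzmann constant, bath temperature
dt, steps = 2e-9, 2_000_000      # time step and number of steps

# Thermal force noise consistent with gamma and T_gas (fluctuation-dissipation);
# photon shot noise / measurement backaction is ignored in this sketch.
sigma_F = np.sqrt(2 * m * gamma * kB * T_gas / dt)

rng = np.random.default_rng(0)
q, v = 0.0, 0.0
energy = np.empty(steps)
for i in range(steps):
    F_th = sigma_F * rng.standard_normal()
    # Langevin equation with an added velocity-proportional (cold damping) force
    a = -gamma * v - omega**2 * q + F_th / m - g_fb * v
    v += a * dt                   # semi-implicit Euler update
    q += v * dt
    energy[i] = 0.5 * m * (v**2 + omega**2 * q**2)

T_eff = energy[steps // 2:].mean() / kB   # effective mode temperature after settling
print(f"T_eff ≈ {T_eff:.1f} K, expected ≈ {T_gas * gamma / (gamma + g_fb):.1f} K")
```

The feedback adds damping without adding noise, so the centre-of-mass mode settles at an effective temperature reduced roughly by the ratio of gas damping to total damping.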
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\alpha"
},
{
"math_id": 1,
"text": "\\vec{E}"
},
{
"math_id": 2,
"text": "\\vec{F}_{grad}=-\\alpha\\vec{\\nabla}\\vec{E}^2/2"
},
{
"math_id": 3,
"text": "F_{grad,q}=-k_qq"
},
{
"math_id": 4,
"text": "q\\in\\{x,y,z\\}"
},
{
"math_id": 5,
"text": "\\omega_q=\\sqrt{k_q/M}"
},
{
"math_id": 6,
"text": "M"
},
{
"math_id": 7,
"text": "\\ddot{q}(t)=-\\underbrace{\\Gamma_{CM}\\dot{q}(t)}_{\\text{damping}}-\\underbrace{\\omega_q^2q(t)}_{\\text{restoring force}}+\\underbrace{\\frac{\\sqrt{2\\pi S_{ff}}}{M}}_{\\text{coupling}}+\\underbrace{u_{fb}(t)}_{\\text{feedback}}"
},
{
"math_id": 8,
"text": "\\Gamma_{CM}"
},
{
"math_id": 9,
"text": "u_{fb}(t)\\propto\\dot{q}(t)"
},
{
"math_id": 10,
"text": "u_{fb}\\propto q(t)\\dot{q}(t)"
}
]
| https://en.wikipedia.org/wiki?curid=72646521 |
72649767 | Water clarity | How deeply visible light penetrates through water
Water clarity is a descriptive term for how deeply visible light penetrates through water. In addition to light penetration, the term water clarity is also often used to describe underwater visibility. Water clarity is one way that humans measure water quality, along with oxygen concentration and the presence or absence of pollutants and algal blooms.
Water clarity governs the health of underwater ecosystems because it impacts the amount of light reaching the plants and animals living underwater. For plants, light is needed for photosynthesis. The clarity of the underwater environment determines the depth ranges where aquatic plants can live. Water clarity also impacts how well visual animals like fish can see their prey. Clarity affects the aquatic plants and animals living in all kinds of water bodies, including rivers, ponds, lakes, reservoirs, estuaries, coastal lagoons, and the open ocean.
Water clarity also affects how humans interact with water, from recreation and property values to mapping, defense, and security. Water clarity influences human perceptions of water quality, recreational safety, aesthetic appeal, and overall environmental health. Tourists visiting the Great Barrier Reef were willing to pay to improve the water clarity conditions for recreational satisfaction. Water clarity also influences waterfront property values. In the United States, a 1% improvement in water clarity increased property values by up to 10%. Water clarity is needed to visualize targets underwater, either from above or in water. These applications include mapping and military operations. To map shallow-water features such as oyster reefs and seagrass beds, the water must be clear enough for those features to be visible to a drone, airplane, or satellite. Water clarity is also needed to detect underwater objects such as submarines using visible light.
Water clarity measurements.
Water clarity is measured using multiple techniques. These measurements include: Secchi depth, light attenuation, turbidity, beam attenuation, absorption by colored dissolved organic matter, the concentration of chlorophyll-a pigment, and the concentration of total suspended solids. Clear water generally has a deep Secchi depth, low light attenuation (deeper light penetration), low turbidity, low beam attenuation, and low concentrations of dissolved substances, chlorophyll-a, and/or total suspended solids. More turbid water generally has a shallow Secchi depth, high light attenuation (less light penetration to depth), high turbidity, high beam attenuation, and high concentrations of dissolved substances, chlorophyll-a, and/or total suspended solids.
Overall general metrics.
Secchi depth.
Secchi depth is the depth at which a disk is no longer visible to the human eye. This measurement was created in 1865 and represents one of the oldest oceanographic methods. To measure Secchi depth, a white or black-and-white disk is mounted on a pole or line and lowered slowly down in the water. The depth at which the disk is no longer visible is taken as a measure of the transparency of the water. Secchi depth is most useful as a measure of transparency or underwater visibility.
Light attenuation.
The light attenuation coefficient – often shortened to “light attenuation” – describes the decrease in solar irradiance with depth. To calculate this coefficient, light energy is measured at a series of depths from the surface to the depth of 1% illumination. Then, the exponential decline in light is calculated using Beer’s Law with the equation:
formula_0
where "k" is the light attenuation coefficient, "I""z" is the intensity of light at depth "z", and "I"0 is the intensity of light at the ocean surface.
Solving for "k", this becomes:
formula_1
This measurement can be done for specific colors of light or more broadly for all visible light. The light attenuation coefficient of photosynthetically active radiation (PAR) refers to the decrease in all visible light (400-700 nm) with depth. Light attenuation can be measured as the decrease in downwelling light (Kd) or the decrease in scalar light (Ko) with depth. Light attenuation is most useful as a measure of the total underwater light energy available to plants, such as phytoplankton and submerged aquatic vegetation.
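For example, the attenuation coefficient can be estimated from a vertical profile of irradiance readings, either from a single depth using the rearranged equation or from a log-linear fit over several depths (the readings below are hypothetical, not measurements from any particular water body):

```python
import numpy as np

# Hypothetical PAR readings (W m^-2) at several depths (m)
depths = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
irradiance = np.array([1500.0, 1012.0, 683.0, 311.0, 64.5])

# Single-depth estimate: k = ln(I0 / Iz) / z
k_single = np.log(irradiance[0] / irradiance[-1]) / depths[-1]

# Log-linear fit over all depths: ln(Iz) = ln(I0) - k z
slope, intercept = np.polyfit(depths, np.log(irradiance), 1)
k_fit = -slope

print(f"k (single depth) = {k_single:.3f} m^-1")
print(f"k (fitted)       = {k_fit:.3f} m^-1")
# Depth of 1% surface light (often taken as the bottom of the photic zone):
print(f"1% light depth  ≈ {np.log(100) / k_fit:.1f} m")
```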
Turbidity.
Turbidity is a measure of the cloudiness of water based on light scattering by particles at a 90-degree angle to the detector. A turbidity sensor is placed in water with a light source and a detector at a 90-degree angle to one another. The light source is usually red or near-infrared light (600-900 nm). Turbidity sensors are also called turbidimeters or nephelometers. In more turbid water, more particles are present in the water, and more light scattering by particles is picked up by the detector. Turbidity is most useful for long-term monitoring because these sensors are often low cost and sturdy enough for long deployments underwater.
Beam attenuation.
Beam attenuation is measured with a device called a transmissometer that has a light source at one end and a detector at the other end, in one plane. The amount of light transmitted to the detector through the water is the beam transmission, and the amount of light lost is the beam attenuation. Beam attenuation is essentially the opposite of light transmission. Clearer water with a low beam attenuation coefficient will have high light transmission, and more turbid water with a high beam attenuation coefficient will have low light transmission. Beam attenuation is used as a proxy for particulate organic carbon in oligotrophic waters like the open ocean.
Concentration-based metrics.
Colored dissolved organic matter (CDOM) absorption.
Colored dissolved organic matter (CDOM) absorbs light, making the water appear darker or tea-colored. Absorption by CDOM is one measure of water clarity. Clarity can still be quite high in terms of visibility with high amounts of CDOM in the water, but the color of the water will be altered to yellow or brown, and the water will appear darker than water with low CDOM concentrations. CDOM absorbs blue light more strongly than other colors, shifting the color of the water toward the yellow and red part of the visible light spectrum as the water gets darker. For example, in lakes with high CDOM concentrations, the bottom of the lake may be clearly visible to the human eye, but a white surface in the same lake water may appear yellow or brown.
Total suspended solids (TSS) concentration.
Total suspended solids (TSS) concentration is the concentration (dry weight mass per unit volume of water) of all the material in water that is caught on a filter, usually a filter with about a 0.7 micrometer pore size. This includes all the particles suspended in water, such as mineral particles (silt, clay), organic detritus, and phytoplankton cells. Clear water bodies have low TSS concentrations. Other names for TSS include total suspended matter (TSM) and suspended particulate matter (SPM). The term suspended sediment concentration (SSC) refers to the mineral component of TSS but is sometimes used interchangeably with TSS. If desired, the concentrations of volatile (organic) and fixed (inorganic) suspended solids can be separated out using the loss-on-ignition method by burning the filter in a muffle furnace to burn off organic matter, leaving behind ash including mineral particles and inorganic components of phytoplankton cells, with TSS = volatile suspended solids + fixed suspended solids.
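A small worked sketch of the associated bookkeeping is shown below (the masses and volume are hypothetical, the function name is illustrative, and drying/ignition procedures follow standard laboratory protocols not detailed here):

```python
def suspended_solids(filter_mass_g, dried_mass_g, ignited_mass_g, volume_L):
    """Return (TSS, volatile, fixed) concentrations in mg/L from filter masses.

    filter_mass_g  : clean, pre-weighed filter
    dried_mass_g   : filter + residue after drying
    ignited_mass_g : filter + ash after ignition in a muffle furnace
    """
    tss = (dried_mass_g - filter_mass_g) * 1000.0 / volume_L        # mg/L
    fixed = (ignited_mass_g - filter_mass_g) * 1000.0 / volume_L    # mineral/ash fraction
    volatile = tss - fixed                                          # organic fraction lost on ignition
    return tss, volatile, fixed

# Hypothetical example: 0.5 L sample, 12.4 mg of dried residue, 4.9 mg of ash
print(suspended_solids(0.10000, 0.11240, 0.10490, 0.5))   # ≈ (24.8, 15.0, 9.8) mg/L
```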
Chlorophyll-a concentration.
Chlorophyll-a concentration is sometimes used to measure water clarity, especially when suspended sediments and colored dissolved organic matter concentrations are low. Chlorophyll-a concentration is a proxy for phytoplankton biomass, which is one way to quantify how turbid the water is due to biological primary production. Chlorophyll-a concentration is most useful for research on primary production, the contribution of phytoplankton to light attenuation, and harmful algal blooms. Chlorophyll-a concentration is also useful for long-term monitoring because these sensors are often low cost and sturdy enough for long deployments underwater.
Case studies.
High water clarity.
The clearest waters occur in oligotrophic ocean regions such as the South Pacific Gyre, tropical coastal waters, glacially-formed lakes with low sediment inputs, and lakes with some kind of natural filtration occurring at the inflow point. Blue Lake in New Zealand holds the record for the highest water clarity of any lake, with a Secchi depth of 230 to 260 feet. Blue Lake is fed by an underground passage from a nearby lake, which acts as a natural filter. Some other very clear water bodies are Lake Tahoe between California and Nevada in the United States, Lake Baikal in Russia, and Crater Lake in Oregon in the United States.
In tropical coastal waters, the water is clear thanks to low nutrient inputs, low primary production, and coral reefs acting as a natural buffer that keep sediments from getting resuspended.
The clearest recorded water on Earth is either Blue Lake, New Zealand or the Weddell Sea near Antarctica, both of which claim Secchi depths of 80 meters (230 to 260 feet).
Low water clarity.
Very low water clarity can be found where high loads of suspended sediments are transported from land. Some examples are estuaries where rivers with high loads of sediments empty into the ocean. One example is the Río de la Plata, an estuary in South America between Uruguay and Argentina where the Uruguay River and the Parana River empty into the Atlantic ocean. The Río de la Plata shows long-term mean TSS concentrations between 20 and 100 grams per cubic meter, higher than most estuaries. Another example is the gulf coast of North America where the Mississippi River meets the Gulf of Mexico. Turbid water from snowmelt and rain washes high loads of sediment downstream each spring, creating a sediment plume and making the water clarity very low. Water bodies can also experience low water clarity after extreme events like volcanic eruptions. After the eruption of Mount St. Helens, the water of Spirit Lake, Washington was darkened by decaying trees in the lake and had a Secchi depth of only 1 to 2 centimeters.
Water clarity vs. water quality.
Water clarity is more specific than water quality. The term “water clarity” more strictly describes the amount of light that passes through water or an object’s visibility in water. The term “water quality” more broadly refers to many characteristics of water, including temperature, dissolved oxygen, the amount of nutrients, or the presence of algal blooms. How clear the water appears is only one component of water quality.
An underwater ecosystem can have high water clarity yet low water quality, and vice versa. Scientists have observed that many lakes are becoming less clear while also recovering from acid rain. This phenomenon has been seen in the northeastern United States and northern Europe. In the past, some lakes were ecologically bare, yet clear, while acidity was high. In recent years, as acidity is reduced and watersheds become more forested, many lakes are less clear but also ecologically healthier with higher concentrations of dissolved organic carbon and more natural water chemistry.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "{I_z \\over I_0}= e^{-kz}"
},
{
"math_id": 1,
"text": "k = {ln{I_z \\over I_0} \\over - z} "
}
]
| https://en.wikipedia.org/wiki?curid=72649767 |
726508 | Liar's dice | Class of dice games
Liar's dice is a class of dice games for two or more players requiring the ability to deceive and to detect an opponent's deception. In "single hand" liar's dice games, each player has a set of dice, all players roll once, and the bids relate to the dice each player can see (their hand) plus all the concealed dice (the other players' hands). In "common hand" games, there is one set of dice which is passed from player to player. The bids relate to the dice as they are in front of the bidder after selected dice have been re-rolled. Originating during the 15th century, the game subsequently spread to Latin American and European countries. In 1993, a variant, "Call My Bluff", won the Spiel des Jahres.
Background.
"Liar's dice" originated as a bluffing board game titled Dudo during the 15th century from the Inca Empire, and subsequently spread to Latin American countries. The game later spread to European countries via Spanish conquistadors. In the 1970s, numerous commercial versions of the game were released.
Single hand.
Five dice are used per player with dice cups used for concealment.
Each round, each player rolls a "hand" of dice under their cup and looks at their hand while keeping it concealed from the other players. The first player begins bidding, announcing any face value and the minimum number of dice that the player believes are showing that value, under all of the cups in the game. Ones are often wild, always counting as the face of the current bid.
Turns rotate among the players in a clockwise order. Each player has two choices during their turn: to make a higher bid, or challenge the previous bid—typically with a call of "liar". Raising the bid means either increasing the quantity, or the face value, or both, according to the specific bidding rules used. There are many variants of allowed and disallowed bids; common bidding variants, given a previous bid of an arbitrary quantity and face value, include:
If the current player challenges the previous bid, all dice are revealed. If the bid is valid (at least as many of the face value and any wild aces are showing as were bid), the bidder wins. Otherwise, the challenger wins. The player who loses a round loses one of their dice. The last player to still retain a die (or dice) is the winner. The loser of the last round starts the bidding on the next round. If the loser of the last round was eliminated, the next player starts the new round.
Dice odds.
For a given number of unknown dice "n", the probability that "exactly" a certain quantity "q" of any face value are showing, "P(q)", is
formula_0
Where "C(n,q)" is the number of unique subsets of "q" dice out of the set of "n" unknown dice. In other words, the number of dice with any particular face value follows the binomial distribution formula_1.
For the same n, the probability "P'(q)" that "at least q" dice are showing a given face is the sum of "P(x)" for all "x" such that "q ≤ x ≤ n", or
formula_2
These equations can be used to calculate and chart the probability of exactly "q" and at least "q" for any or multiple "n". For most purposes, it is sufficient to know a few basic facts of dice probability, such as that the expected quantity of any given face value among "n" unknown dice is "n"/6 (or "n"/3 when ones are wild and count toward the bid).
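For example, a short script can tabulate these probabilities directly from the binomial formula (using "p" = 1/6 for a plain face, or 1/3 when ones are wild and count toward the bid):

```python
from math import comb

def prob_exactly(q, n, p=1/6):
    """P(q): exactly q of n unknown dice show the bid face."""
    return comb(n, q) * p**q * (1 - p)**(n - q)

def prob_at_least(q, n, p=1/6):
    """P'(q): at least q of n unknown dice show the bid face."""
    return sum(prob_exactly(x, n, p) for x in range(q, n + 1))

# With 20 unknown dice and ones wild (p = 1/3), how believable are various bids?
n = 20
for q in range(5, 10):
    print(f"at least {q} of {n}: {prob_at_least(q, n, p=1/3):.3f}")
```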
Common hand.
The "Common hand" version is for two players. The first caller is determined at random. Both players then roll their dice at the same time, and examine their hands. Hands are called in style similar to poker, and the game may be played with poker dice:
One player calls their hand. The other player may either call a higher-ranking hand, call the bluff, or re-roll some or all of their dice. When a bluff is called, the accused bluffer reveals their dice and the winner is determined.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\ P(q) = C(n,q) \\cdot (1/6)^q \\cdot (5/6)^{n-q}"
},
{
"math_id": 1,
"text": "B(n,\\tfrac{1}{6})"
},
{
"math_id": 2,
"text": "\\ P'(q) = \\sum_{x=q}^n C(n,x) \\cdot (1/6)^x \\cdot (5/6)^{n-x}"
}
]
| https://en.wikipedia.org/wiki?curid=726508 |
72654987 | Heterojunction solar cell | A solar cell architecture
Heterojunction solar cells (HJT), variously known as Silicon heterojunctions (SHJ) or Heterojunction with Intrinsic Thin Layer (HIT), are a family of photovoltaic cell technologies based on a heterojunction formed between semiconductors with dissimilar band gaps. They are a hybrid technology, combining aspects of conventional crystalline solar cells with thin-film solar cells.
Silicon heterojunction-based solar panels are commercially mass-produced for residential and utility markets. As of 2023, Silicon heterojunction architecture has the highest cell efficiency for commercial-sized silicon solar cells. In 2022–2024, SHJ cells are expected to overtake Aluminium Back surface field (Al-BSF) solar cells in market share to become the second-most adopted commercial solar cell technology after PERC/TOPCon (Passivated Emitter Rear Cell/Tunnel Oxide Passivated Contact), increasing to nearly 20% by 2032.
Solar cells operate by absorbing light, exciting the absorber. This creates electron–hole pairs that must be separated into electrons (negative charge carriers) and holes (positive charge carriers) by asymmetry in the solar cell, provided through chemical gradients or electric fields in semiconducting junctions. After splitting, the carriers travel to opposing terminals of the solar cell that have carrier-discriminating properties (known as selective contacts). For solar cells to operate efficiently, surfaces and interfaces require passivation to prevent electrons and holes from being trapped at surface defects, which would otherwise increase the probability of mutual annihilation of the carriers (recombination).
SHJ cells generally consist of an active crystalline silicon absorber substrate which is passivated by a thin layer of hydrogenated intrinsic amorphous silicon (denoted as a-Si:H; the "buffer layer"), and overlayers of appropriately doped amorphous or nanocrystalline silicon selective contacts. The selective contact material and the absorber have different band gaps, forming the carrier-separating heterojunctions that are analogous to the p-n junction of traditional solar cells. The high efficiency of heterojunction solar cells is owed mostly to the excellent passivation qualities of the buffer layers, particularly with respect to separating the highly recombination-active metallic contacts from the absorber. Due to their symmetrical structure, SHJ modules commonly have a bifaciality factor over 90%.
As the thin layers are usually temperature sensitive, heterojunction cells are constrained to a low-temperature manufacturing process. This presents challenges for electrode metallisation, as the typical silver paste screen printing method requires firing at up to 800 °C; well above the upper tolerance for most buffer layer materials. As a result, the electrodes are composed of a low curing temperature silver paste, or uncommonly a silver-coated copper paste or electroplated copper.
History.
The heterojunction structure, and the ability of amorphous silicon layers to effectively passivate crystalline silicon has been well documented since the 1970s. Heterojunction solar cells using amorphous and crystalline silicon were developed with a conversion efficiency of more than 12% in 1983. Sanyo Electric Co. (now a subsidiary of Panasonic Group) filed several patents pertaining to heterojunction devices including a-Si and μc-Si intrinsic layers in the early 1990s, trademarked "heterojunction with intrinsic thin-layer" (HIT). The inclusion of the intrinsic layer significantly increased efficiency over doped a-Si heterojunction solar cells through reduced density of trapping states, and reduced dark tunnelling leakage currents.
Research and development of SHJ solar cells was suppressed until the expiry of Sanyo-issued patents in 2011, allowing various companies to develop SHJ technology for commercialisation. In 2014, HIT cells with conversion efficiencies exceeding 25% were developed by Panasonic, which was then the highest for non-concentrated crystalline silicon cells. This record was broken more recently in 2018 by Kaneka corporation, which produced 26.7% efficient large area interdigitated back contact (IBC) SHJ solar cells, and again in 2022 and 2023 by LONGi with 26.81% and 27.09% efficiency respectively. As of 2023, this is the highest recorded efficiency for non-concentrated crystalline silicon solar cells. Heterojunction modules have been fabricated with efficiency up to 23.89%. In 2023, SHJ combined with Perovskite in monolithic tandem cells also recorded the highest non-concentrated Two-junction cell efficiency at 33.9%.
SHJ solar cells are now mass-produced on the gigawatt scale. In 2022, projects planned for the establishment or expansion of SHJ production lines totaled approximately 350 GW/year of additional capacity. Over 24 (mostly Chinese) manufacturers are beginning or augmenting their heterojunction production capacity, such as Huasun, Risen, Jingang (Golden Glass), LONGi, Meyer Burger and many more.
Utility scale projects.
In early 2022, a 150 MW heterojunction solar farm was completed by Bulgarian EPC company Inercom near the village of Apriltsi in Pazardzhik Province, Bulgaria—the largest HJT solar farm at the time, according to a press release by module supplier Huasun. In 2023, the same supplier announced a further 1.5 GW supply deal of HJT modules to Inercom.
Advantages.
Performance.
Efficiency and voltage.
SHJ has the highest efficiency amongst crystalline silicon solar cells in both laboratory (world record efficiency) and commercial production (average efficiency). In 2023, the average efficiency for commercial SHJ cells was 25.0%, compared with 24.9% for "n"-type TOPCon and 23.3% for "p"-type PERC. The high efficiency is owed mostly to very high open-circuit voltages—consistently over 700 mV—as a result of excellent surface passivation. Since 2023, SHJ bottom cells in Perovskite tandems have also held the highest non-concentrated two-junction cell efficiency, at 33.9%. Due to their superior surface passivation, heterojunction cells generally have a lower diode saturation current density than other silicon solar cells (such as TOPCon), allowing for very high fill factor and voltage, and hence record high efficiency.
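The link between the diode saturation current density and the open-circuit voltage can be illustrated with the ideal single-diode relation, where formula_0 ≈ (kT/q)·ln(formula_1/J0 + 1). The sketch below is a minimal illustration with assumed, datasheet-style current densities rather than values from this article; real cells sit somewhat below this ideal estimate because of additional recombination and resistive losses.
<syntaxhighlight lang="python">
import math

# Ideal single-diode estimate of open-circuit voltage:
#   V_oc ~ (kT/q) * ln(J_sc / J_0 + 1)
# The J_sc and J_0 values below are illustrative assumptions only.

k_B = 1.380649e-23       # Boltzmann constant, J/K
q = 1.602176634e-19      # elementary charge, C
T = 298.15               # cell temperature, K
V_T = k_B * T / q        # thermal voltage, ~25.7 mV

J_SC = 40e-3             # short-circuit current density, A/cm^2 (40 mA/cm^2)

for label, J_0 in [("well-passivated SHJ-like cell (J0 = 2 fA/cm^2)", 2e-15),
                   ("less well-passivated cell (J0 = 50 fA/cm^2)", 50e-15)]:
    V_oc = V_T * math.log(J_SC / J_0 + 1.0)
    print(f"{label}: V_oc ~ {V_oc * 1000:.0f} mV")
</syntaxhighlight>
In this idealised picture, every order-of-magnitude reduction in the saturation current density raises the open-circuit voltage by roughly 60 mV at room temperature, which is why well-passivated contacts translate directly into high voltages.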
Bifaciality.
Bifaciality refers to the ability of a solar cell to accept light from the front or rear surface. The collection of light from the rear surface can significantly improve energy yields in deployed solar arrays. SHJ cells can be manufactured with a conductive ARC on both sides, allowing a bifaciality factor above 90%, compared to ~70% for PERC cells with rear grid. Bifacial solar modules are expected to significantly increase their market share over monofacial modules to 85% by 2032.
Lifespan.
By virtue of their high bifaciality, silicon heterojunction modules can exploit more of the advantages of glass–glass module designs compared to other cell technologies. Glass–glass modules using EPE encapsulant are particularly effective in preventing water ingress, which is a significant cause of performance degradation in PV modules. When used with the appropriate module encapsulant, a glass–glass SHJ module is generally expected to have an operational lifespan of over 30 years, significantly longer than that of a module with a glass–polymer foil backsheet (the module technology with the highest market share as of 2023). Glass–glass modules are heavier than glass–backsheet modules; however, due to improvements in tempered glass technologies and module designs, the glass thickness (and hence weight) is expected to reduce, with the mainstream tending from 3.2 mm towards 2 mm or less in the 2030s. As a result, glass–glass modules are expected to become the dominant PV technology in the mid 2020s according to ITRPV (2023).
For example, utility scale 680 W heterojunction modules warranted to retain 93% of their initial performance after 30 years were announced by Enel in 2022.
Temperature coefficient.
The temperature coefficient refers to how the output power of a solar module changes with temperature. Typically, solar modules see a reduction in output power and efficiency at elevated temperatures. From lab testing and supplier datasheet surveys, modules fabricated with SHJ cells consistently measure an equal or lower temperature coefficient (i.e. the decrease in efficiency is less severe) compared with Al-BSF, PERC, PERT and hybrid PERT/rear-heterojunction solar cells. This applies to a range of parameters, including open-circuit voltage, maximum power point power, short circuit current and fill factor. The temperature sensitivity of solar cells has been inversely correlated to high open-circuit voltages compared to the absorber band gap potential, as noted by Martin Green in 1982; "As the open-circuit voltage of silicon solar cells continues to improve, one resulting advantage, not widely appreciated, is reduced temperature sensitivity of device performance". Thus the low temperature sensitivity of SHJ cells has been attributed to high formula_0 from well passivated contacts.
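As a rough numerical illustration of why a smaller temperature coefficient matters for energy yield, the sketch below applies the usual linear datasheet derating model; the coefficients, nameplate power and operating temperature are typical, assumed values rather than figures from this article.
<syntaxhighlight lang="python">
# Linear datasheet model for power at an elevated cell temperature:
#   P(T) = P_STC * (1 + gamma * (T - 25 C))
# The temperature coefficients gamma below are assumed, typical values.

def power_at_temperature(p_stc_w, gamma_per_degc, cell_temp_c):
    """Maximum power derated linearly from standard test conditions (25 C)."""
    return p_stc_w * (1.0 + gamma_per_degc * (cell_temp_c - 25.0))

P_STC = 680.0    # module nameplate power, W
T_CELL = 65.0    # hot operating cell temperature, C

for label, gamma in [("SHJ-like  (-0.26 %/C)", -0.0026),
                     ("PERC-like (-0.35 %/C)", -0.0035)]:
    p = power_at_temperature(P_STC, gamma, T_CELL)
    print(f"{label}: {p:.0f} W at {T_CELL:.0f} C")
</syntaxhighlight>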
Manufacturing.
Energy consumption.
SHJ production lines fundamentally do not use high temperature equipment such as diffusion or metal paste curing furnaces, and on average have a lower power consumption per watt of fabricated cells. According to "China PV Industry Development Roadmap", in 2022, the average electricity consumption of "n"-type Heterojunction cell lines was 47,000 kWh/MW, whereas "p"-type PERC production lines consumed about 53,000 kWh/MW and for "n"-type TOPCon, about 56,000 kWh/MW. It is estimated that by 2030, the power consumption of "n"-type Heterojunction, p-type PERC and "n"-type TOPCon cell production lines will drop to 34,000 kWh/MW, 35,000 kWh/MW and 42,000 kWh/MW respectively. A 2014 study estimated the energy payback time of a SHJ module to be 1.5 years, compared to 1.8 years for a regular monocrystalline module; this figure was estimated to drop to 0.94 years vs. 1.2 years respectively for a prospective module in 2020 assuming 25% efficiency. Similarly, the life-cycle CO2-equivalent emissions per kWh for 2020 SHJ modules is estimated to be 20 grams vs 25 grams for a regular monocrystalline module.
Silicon consumption.
Crystalline silicon wafers used in solar cells typically have a thickness between 130 and 180 μm. The mass of consumed silicon wafer comprises a significant proportion of the cost of the solar module, and as such reducing the wafer thickness has potential to achieve significant cost reduction. Fewer photons are absorbed in thinner silicon. However, as long as surface recombination is effectively suppressed, thinner wafers can maintain—or even improve upon—very high open-circuit voltages. That is, the increase in open-circuit voltage may compensate for losses in short-circuit current. They do so fundamentally, as a greater proportion of recombination occurs in the bulk of the substrate if surfaces are well passivated, therefore reducing the thickness reduces the quantity of bulk defects. As SHJ cells have excellent surface passivation, reduction in their wafer thickness is more feasible than with other crystalline silicon solar cell technologies. As such, high efficiencies have been reported over a large range of wafer thicknesses, with the minimum on the order of 50 μm. On commercial-grade "n"-type substrates, the optimum thickness is estimated to be 40–60 μm. This advantage is not seen in technologies with non-passivated contacts or poor surface recombination such as PERC, in which the optimum thickness is greater than 100 μm.
Disadvantages.
Cost.
Operational expenditure.
SHJ modules are estimated to be approximately 3-4 ¢/Wp more expensive than PERC modules (both assuming Chinese manufacturing; sources cite 2018 benchmark). The majority of the increased operational expenditure is due to differences in metallisation technology, which was estimated to be responsible for about 1.8 ¢/Wp of that difference. The cost of PECVD for a-Si and sputtering for TCO layers were also significant contributors to cost increases. Other factors include higher cost of "n"-type wafers, as well as surface preparation.
Capital expenditure.
In 2020, the CapEx cost for SHJ was much higher than PERC. The major cost (up to 50%) of establishing a SHJ production line is attributed to the PECVD equipment. However, SHJ production line CapEx has been trending downward, mostly due to the reduction in PECVD tool price from about USD 125 million before 2018 to USD 30–40 million at the end of 2020. As of 2021, the CapEx of SHJ production lines in Europe was still significantly greater than in China. Higher tool throughput also reduces the CapEx cost per gigawatt. In 2019, leading PECVD equipment capacity was below 3000 wafers/hour (manufactured by Meyer Burger, INDEOtec and Archers Suzhou Systems), with newer PECVD tools (such as those manufactured by Maxwell and GS Solar) increasing capacity to 5000–8000 wafers/hour.
Manufacturing.
Reliance on "n"-type silicon.
Although high efficiency SHJ cells can be manufactured using a "p"-type silicon substrate, the low temperature constraint on SHJ production makes the process of gettering (management of contamination defects) impossible and bulk hydrogenation cannot reliably passivate excessive defects. For the same concentration of contaminant transition metal defects, "n"-type wafers have a higher minority carrier lifetime due to the smaller capture cross section of holes (the minority charge carrier) compared to electrons. Similarly, the capture cross section ratio of electrons to holes is large for surface states (eg. silicon dangling bonds) and therefore well passivated surfaces are easier to achieve on n-type wafers. For these reasons, "n"-type wafers are strongly preferred for manufacturing, as inconvenient steps for improving bulk lifetimes are cut out and the risk of developing light-induced degradation is reduced. However, the cost of "n"-type wafers is usually cited to be about 8–10% higher than "p"-type.
The higher price of "n"-type wafers is attributed to the smaller segregation coefficient of phosphorus in silicon whilst growing of doped monocrystalline ingots. This results in a problematic variation in resistivity across the length of the ingot, and thus only about 75% of the volume meets the resistivity tolerance as required by PV manufacturers. Furthermore, "n"-type ingots grown in crucibles that have been reused many times (rechargeable Czochralski; RCz) are less likely to be acceptable.
Surface preparation and texturing.
One of the first steps in manufacturing crystalline silicon solar cells includes texturing and cleaning the surface of the silicon wafer substrate. For monocrystalline wafers, this involves an anisotropic wet chemical etch using a mixture of an alkaline solution (usually potassium hydroxide or metal ion-free tetramethylammonium hydroxide) and an organic additive to increase etching anisotropy (traditionally isopropyl alcohol, but now proprietary additives are used). The etch forms the light-trapping pyramidal texture that improves the output current of the finished solar cell. Due to stringent requirements for surface cleanliness for SHJ compared to PERC, the texturing and cleaning process is relatively more complex and consumes more chemicals. Some of these surface treatment steps include RCA cleaning, sulfuric acid/peroxide mixtures to remove organics, removal of metal ions using hydrochloric acid, and nitric acid oxidative cleaning and etch-backs. Recent developments in research have found that oxidative cleaning with ozonated water may help improve process efficiency and reduce waste, with the possibility of completely replacing RCA cleaning whilst maintaining the same surface quality.
Silver paste screen printing.
The vast majority of solar cells are manufactured with screen-printed paste electrodes. SHJ cells are constrained to a low-temperature process and thus cannot use traditional furnace-fired silver paste for their electrodes, such as what is used in PERC, TOPCon and Al-BSF cells. The low-temperature paste composition compromises several factors in the performance and economics of SHJ, such as high silver consumption and lower grid conductivity. Furthermore, the screen printing process of low-temperature silver paste onto SHJ cells also generally has a significantly lower throughput compared to PERC screen printing lines, as manufacturers often use a lower printing and flooding velocity to achieve a high quality grid. Terawatt-scale solar is anticipated to consume a significant fraction of global silver demand unless alternatives are developed. Emerging technologies that may reduce silver consumption for SHJ include silver-coated copper paste, silver nanoparticle ink, and electroplated copper.
Technological maturity.
SHJ production lines consist mostly of new equipment. Therefore, SHJ experiences difficulties competing with TOPCon production, as existing PERC production lines can be retrofitted for TOPCon relatively easily. A report by Wood Mackenzie (Dec 2022) predicts that TOPCon will be favoured over SHJ for new module production in the United States in light of the Inflation Reduction Act for this reason, citing a preferable balance between high efficiency and capital expenditure.
Structure.
A "front-junction" heterojunction solar cell is composed of a "p–i–n–i–n"-doped stack of silicon layers; the middle being an "n"-type crystalline silicon wafer and the others being amorphous thin layers. Then, overlayers of a transparent conducting oxide (TCO) antireflection coating and metal grid are used for light and current collection. Due to the high bifaciality of the SHJ structure, the similar "n–i–n–i–p" "rear-junction" configuration is also used by manufacturers and may have advantages depending on the process. In particular, rear-junction configurations are preferred in manufacturing as they allow for a greater proportion of lateral electron transport to transpire in the absorber rather than the front TCO. Therefore, the sheet resistance of the front side is lowered and restrictions on TCO process parameters are relaxed, leading to efficiency and cost benefits.
Absorber.
The substrate, in which electron-hole pairs are formed, is usually "n"-type monocrystalline silicon doped with phosphorus. In industrial production of high efficiency SHJ solar cells, high quality "n"-type Czochralski silicon is required because the low-temperature process cannot provide the benefits of gettering and bulk hydrogenation. Photons absorbed outside the substrate do not contribute to photocurrent and constitute losses in quantum efficiency.
Buffer and carrier selection.
Buffer Layers.
Intrinsic amorphous silicon is deposited onto both sides of the substrate using PECVD from a mixture of silane (SiH4) and hydrogen (H2), forming the heterojunction and passivating the surface. Although intrinsic buffer layers are effectively non-conductive, charge carriers can diffuse through as the thickness is typically less than 10 nm. The buffer layer must be sufficiently thick to provide adequate passivation, but thin enough not to significantly impede carrier transport or absorb light. It is advantageous for the passivating layer to have a higher band gap to minimise parasitic absorption of photons, as the absorption coefficient depends partly on the band gap. Despite similarities between the buffer layer structure and Metal–Insulator–Semiconductor (MIS) solar cells, SHJ cells do not necessarily rely on quantum tunnelling for carrier transport through the low-conductivity buffer layer; carrier diffusion is also an important transport mechanism.
Window Layers.
The selective contacts (also referred to as the "window layers") are then similarly formed by deposition of the "p-" and "n-"type highly doped amorphous silicon layers. Examples of dopant gases include phosphine (PH3) for "n"-type and trimethylborane (B(CH3)3) or diborane (B2H6) for "p"-type. Due to its defective nature, doped amorphous silicon (as opposed to intrinsic) cannot provide passivation to crystalline silicon; similarly epitaxial growth of any such a-Si layer causes severe detriment to passivation quality and cell efficiency and must be prevented during deposition.
Nanocrystalline window layer.
Recent developments in SHJ efficiency have been made by deposition of "n"-type nanocrystalline silicon oxide (nc-SiOx:H) films instead of "n"-type amorphous silicon for the electron contact. The material commonly referred to as "nanocrystalline silicon oxide" is actually a two-phase material composed of nanoscale silicon crystals embedded in an amorphous silicon oxide matrix. The silicon oxide has a higher band gap and is more optically transparent than amorphous silicon, whereas the columnar nanocrystals enhance vertical carrier transport and increase conductivity, thus leading to increased short circuit current density formula_1 and decreased contact resistance. The material band gap can be tuned with varying levels of carbon dioxide during PECVD. The replacement of amorphous silicon with nanocrystalline silicon/silicon oxide has already been integrated by some manufacturers on the "n"-type (electron) contact, with the "p"-type (hole) contact to follow in the near future. An optimised nanocrystalline hole contact was instrumental in producing the 26.81% power conversion efficiency world record of Lin "et al." (2023).
Antireflection coating and conductive oxide.
The dual purpose antireflection coating (ARC) and carrier transport layer, usually composed of Indium tin oxide (ITO), is sputtered onto both sides over the selective contacts. Indium tin oxide is a transparent conducting oxide (TCO) which enhances lateral conductivity of the contact surfaces without significantly impeding light transmission. This is necessary because the amorphous layers have a relatively high resistance despite their high doping levels, and so the TCO allows carriers to be transported from the selective contact to the metal electrodes.
For destructive interference antireflection properties, the TCO is deposited to the thickness required for optimum light capture at the peak of the solar spectrum (around 550 nm). The optimum thickness for a single-layer ARC is given by:
formula_2
where formula_3 is the layer thickness, formula_4 is the desired wavelength of minimum reflection and formula_5 is the material's refractive index.
Depending on the refractive index of the ITO (typically ~1.9 at 550 nm), the optimum layer thickness is usually 70–80 nm. Due to thin-film interference, the ITO (a dull grey-black ceramic material) appears a vibrant blue colour at this thickness.
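A minimal sketch of the quarter-wavelength calculation above; the ITO refractive index (~1.9 at 550 nm) is a typical literature value assumed here rather than a figure taken from this article.
<syntaxhighlight lang="python">
# Quarter-wavelength single-layer antireflection coating: d = lambda / (4 * n)

def arc_thickness_nm(wavelength_nm, refractive_index):
    """Optimum ARC thickness for minimum reflection at the given wavelength."""
    return wavelength_nm / (4.0 * refractive_index)

WAVELENGTH_NM = 550.0   # near the peak of the solar spectrum
N_ITO = 1.9             # assumed typical refractive index of ITO at 550 nm

print(f"Optimum ITO thickness: {arc_thickness_nm(WAVELENGTH_NM, N_ITO):.0f} nm")
# ~72 nm, consistent with the 70-80 nm range quoted above.
</syntaxhighlight>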
Alternative materials.
Due to the scarcity of indium, alternative TCOs such as aluminium-doped zinc oxide (AZO) are being researched for use in SHJ cells. AZO has a much higher chemical sensitivity than ITO, which presents challenges for certain metallisation methods that require etching (such as nickel seed layer etch-backs), and it typically has a poorer interface contact to both "p"- and "n"-type amorphous layers. AZO may also have long-term stability issues when cells are used in modules, which may require capping layers such as SiOx.
Undoped tin oxide (SnOx) has also been used successfully to produce indium-free TCOs on SHJ cells with an efficiency of 24.91%.
Enhancement of the optical and electronic properties of indium oxide based TCOs has been achieved through co-doping with cerium and hydrogen, which results in high electron mobility. Such films can be grown at temperatures sufficiently low to be compatible with the heat-sensitive SHJ production process. Indium oxide doped with cerium oxide, tantalum oxide and titanium oxide has also resulted in favourable electronic properties. The process is tunable through the introduction of water vapour into the sputtering chamber, in which hydroxyl radicals in the plasma are believed to terminate oxygen vacancies in the TCO film, leading to enhanced electron mobility and lower sheet resistance. However, stability and contact resistance must be considered when using this method in SHJ cells.
Double-antireflection coating.
Through evaporation, a double-antireflection coating of magnesium fluoride (MgF2) or aluminium oxide (Al2O3) may be used to further reduce surface reflections; however, this step is not currently employed in industrial production. Capping layers on AZO, such as SiOx, can also act as a double AR coating. Such techniques were used to produce SHJ cells with world record power conversion efficiencies.
Role of work function.
The TCO layer for SHJ cells should ideally have a high work function (ie. the energy difference between the Fermi level and the Vacuum level) to prevent formation of a parasitic Schottky barrier at the interface between the TCO and the "p"-type amorphous layer. This can be partially alleviated by increasing the doping of the "p"-type layer, which decreases the barrier width and improves open-circuit voltage (formula_0) and fill factor (formula_6). However increased doping increases junction recombination, diminishing formula_0 gains. Depositing a higher work function TCO such as tungsten oxide (WOx) or tuning the deposition parameters of ITO can also reduce the barrier height; typically the latter is used due to the preferable optical properties of ITO.
Metallisation.
Metal electrodes are required to contact the solar cell so that electricity can be extracted from it. The TCO alone is not conductive enough to serve this purpose. The electrodes on a bifacial solar cell are composed of a grid pattern on the front side and the rear side, whereas non-bifacial cells can have the entire rear side coated in metal. Interdigitated back contact cells have metal only on the rear. In the case of front grids, the grid geometry is optimised such to provide a low resistance contact to all areas of the solar cell surface without excessively shading it from sunlight.
Printed paste.
Heterojunction solar cells are typically metallised (i.e. the metal contacts are fabricated) using one of two distinct methods. Screen-printing of silver paste is common in industry, as with traditional solar cells, and holds a market share of over 98%; however, low-temperature silver paste is required for SHJ cells. These pastes consist of silver particles combined with a polymer which crosslinks at a curing temperature of about 200 °C. They suffer major drawbacks including low grid conductivity, high silver consumption and the associated volatile production costs, and poor adhesion to the front surface. In addition to their significantly higher cost, the resistivity of low-temperature silver pastes has been estimated to be 4–6 times higher than that of standard silver paste. To compensate for the lowered conductivity, low-temperature silver pastes also consume more silver than conventional silver pastes; however, silver consumption is trending downward as the development of screen-printing technology reduces finger linewidths. Improvements in the composition of low-temperature pastes are expected to further reduce silver consumption, such as through screen-printable silver-coated copper paste. Such pastes perform comparably to conventional low-temperature pastes, with up to 30% reduction in silver consumption. Silver-coated copper pastes are becoming an increasingly dominant metallisation technology amongst Chinese SHJ manufacturers, with a 50% market share expected from 2024 to 2025 and continued growth towards 2030.
A non-contact method of paste printing, Laser Pattern Transfer Printing, can be used to fabricate narrow fingers with a 1:1 aspect ratio. Paste is pressed into a grating, and an infrared laser is used to heat the paste from behind. The vaporising solvent expels the paste from the mold and onto the solar cell substrate. As contact screen printing exerts high forces on the cell, this alternative technique can reduce cell breakage, in particular for very thin wafers.
Printed ink.
Silver nanoparticle ink can be deposited onto a SHJ solar cell using inkjet printing, or through contact deposition with a hollow glass capillary. Inkjet deposition has been reported to reduce silver consumption from 200 mg per cell to less than 10 mg per cell compared with traditional silver paste screen printing. Further reductions are possible with capillary deposition (known as "FlexTrail" as the capillary is flexible and trails across the wafer surface) leading to as little as 3 mg of silver deposited. Such a large reduction in silver has implications for the grid design to compensate for lower conduction, namely using a busbar-less design.
Electroplated.
A potentially silver-free alternative to printed electrodes uses electroplated copper. The conductivity of electroplated copper is similar to that of bulk copper. This has potential to increase the SHJ cell current density through decreasing grid resistance. Improved feature geometry can also be achieved. However industrial production is challenging as electroplating requires selective patterning using a sacrificial inkjet-printed or photolithographically-derived mask. As a result, electroplated SHJ cells are not currently manufactured commercially. Copper plated directly to the ITO also suffers from adhesion issues. Therefore, it is usually necessary to first deposit a thin (~1μm) seed layer of nickel through sputtering or electrodeposition. Alternatively, an indium seed layer can be developed in-situ through selective cathodic reduction of the doped indium oxide. Nickel and ITO layers also act as a diffusion barrier against copper into the cell, which is a deep-level impurity that causes severe degradation. A capping layer of silver or tin is generally also required to prevent corrosion of the copper fingers, especially in EVA-encapsulated modules.
Like all conventional solar cells, a heterojunction solar cell is a diode and conducts current in only one direction. Therefore, for metallisation of the "n"-type side, the solar cell must generate its own plating current through illumination, rather than using an external power supply. This process is known as light-induced plating (LIP), as opposed to field-induced plating (FIP) for the "p"-type side. Alternatively, an electroless process may be used, which avoids the need for electrical contact to the solar cell (a complication in manufacturing). However, electroless plating is much slower than electroplating and may take hours rather than minutes to reach a suitable thickness.
Interconnection.
SHJ temperature sensitivity has further implications for cell interconnection when manufacturing SHJ-based solar panels. High temperatures involved in soldering must be carefully controlled to avoid degradation of the cell passivation. Low temperature pastes have also suffered from weak adhesion to interconnecting wires or ribbons, which have consequences for module durability. Optimisation of these pastes and infrared soldering parameters, as well as careful selection of solder alloys, has led to increased success of interconnection processes on standard industrial equipment.
Multi-junction.
Heterojunction–Perovskite tandem structures have been fabricated, with some research groups reporting a power conversion efficiency exceeding the 29.43% Shockley–Queisser limit for crystalline silicon. This feat has been achieved in both monolithic and 4-terminal cell configurations. In such devices, to reduce thermalisation losses, the wide bandgap Perovskite top cell absorbs high energy photons whilst the SHJ bottom cell absorbs lower energy photons. In a bifacial configuration, the bottom cell can also accept light from the rear surface.
In 2017, tandem solar cells using a SHJ bottom cell and Group III–V semiconductor top cells were fabricated with power conversion efficiencies of 32.8% and 35.9% for 2- and 3-junction non-monolithic stacks respectively.
In November 2023, the efficiency record for SHJ tandems was set at 33.9% by LONGi using a Perovskite top cell in a monolithic configuration. This is the highest efficiency recorded for a non-concentrated two-junction solar cell.
Alternative heterojunction materials.
Aside from the typical c-Si/a-Si:H structure, various groups have successfully produced passivated contact silicon heterojunction solar cells using novel semiconducting materials, such as between c-Si/SiOx, c-Si/MoOx and c-Si/poly-Si or c-Si/SiOx/poly-Si (POLO; polycrystalline silicon on oxide).
Hybrid inorganic–organic heterojunction solar cells have been produced using "n"-type silicon coated with polyaniline emeraldine base. Heterojunction solar cells have also been produced on multicrystalline silicon absorber substrates.
Interdigitated Back Contact.
Heterojunction solar cells are compatible with IBC technology, i.e. the cell metallisation is entirely on the back surface; a heterojunction IBC cell is often abbreviated as HBC. The HBC structure has several advantages over conventional SHJ cells; the major advantage is the elimination of shading from the front grid, which improves light capture and hence short circuit current density formula_1. Compared to PERC, conventional SHJ cells often suffer from poor formula_1, with values rarely exceeding 40 mA/cm2, as some light is parasitically absorbed in the front amorphous silicon layers due to their high absorption coefficient. By removing the need for the front metal contact, as well as the front amorphous silicon contact, formula_1 can be recovered. As such, HBC cells have potential for high efficiencies; notably, a long-standing world-record heterojunction cell fabricated by Kaneka employed an HBC structure, with 26.7% efficiency and a formula_1 of 42.65 mA/cm2. Despite HBC's high efficiency, double-sided cells are mainstream in industrial production due to their relatively simple manufacturing process. However, HBC cells may find specialised applications such as in vehicle-integrated PV systems where there are significant area constraints.
HBC cells are fabricated by localised doping of the rear side, in an alternating pattern of "p-" and "n-"type areas in an interdigitated pattern. The front side does not require a specific doping profile.
Loss mechanisms.
A well-designed silicon heterojunction module has an expected nominal lifespan of more than 30 years, with an expected average performance ratio of 75%. Failure, power losses and degradation of SHJ cells and modules can be categorised by the affected parameter (e.g. open-circuit voltage, short-circuit current and fill factor). formula_0 losses are generally attributed to a reduction in passivation quality or to the introduction of defects, causing increased recombination. formula_1 losses are generally attributed to optical losses, in which less light is captured by the absorber (such as through shading or damage to module structures). formula_6 losses are generally attributed to passivation loss, increases in series resistance or decreases in shunt resistance.
VOC losses.
Defects are sites at which charge carriers can inadvertently become "trapped", making them more likely to recombine through the Shockley–Read–Hall mechanism (SRH recombination). They are most likely to exist at interfaces (surface recombination), at crystal grain boundaries and dislocations, or at impurities. To prevent losses in efficiency, defects must be passivated (i.e. rendered chemically and electrically inactive). Generally this occurs through bonding of the defect interface with interstitial hydrogen. In SHJ cells, hydrogenated intrinsic amorphous silicon is very effective at passivating defects at the absorber surface.
Understanding the behaviour of defects, and how they interact with hydrogen over time and in manufacturing processes, is crucial for maintaining the stability and performance of SHJ solar cells.
Light-induced Degradation.
The behaviour of light-sensitive defect passivation in amorphous silicon networks has been a topic of study since the discovery of the Staebler–Wronski effect in 1977. Staebler and Wronski found a gradual decrease in photoconductivity and dark conductivity of amorphous silicon thin films upon exposure to light for several hours. This effect is reversible upon dark annealing at temperatures above 150 °C and is a common example of reversible Light-induced Degradation (LID) in hydrogenated amorphous silicon devices. The introduction of new band gap states, causing a decrease in the carrier lifetime, was proposed to be the mechanism behind the degradation. Subsequent studies have explored the role of hydrogen migration and metastable hydrogen-trapping defects in the Staebler–Wronski effect.
Amongst many variables, the kinetics and extent of the Staebler–Wronski effect is dependent on crystallite grain size in the thin film and the light soaking illuminance.
Some amorphous silicon devices can also exhibit the opposite effect under light-induced degradation, such as the increase in formula_0 observed in amorphous silicon solar cells and notably SHJ solar cells upon light soaking. Kobayashi "et al." (2016) proposes that this is due to the shifting of the Fermi level of the intrinsic buffer layer closer to the band edges when in contact with the doped amorphous silicon selective contacts, noting that a similar reversal of the Staebler–Wronski effect was observed by Scuto "et al." (2015) when hydrogenated a-Si photovoltaic devices were light-soaked under reverse bias.
Deliberate annealing of heterojunction cells in an industrial post-processing step can improve lifetimes and decrease surface recombination velocity. It has been suggested that thermal annealing causes interstitial hydrogen to diffuse closer to the heterointerface, allowing greater saturation of dangling bond defects. Such a process may be enhanced using illumination during annealing, however this can cause degradation before the improvement in carrier lifetimes is achieved, and thus requires careful optimisation in a commercial setting. Illuminated annealing at high temperatures is instrumental in the Advanced Hydrogenation Process (AHP), an inline technique for defect mitigation developed by UNSW.
The boron–oxygen complex LID defect is a pervasive problem with the efficiency and stability of cheap "p"-type wafers and a major reason that "n"-type is preferred for SHJ substrates. Stabilising wafers against B–O LID using the Advanced Hydrogenation Process has had variable success and reliability issues. Therefore, gallium has been proposed as an economically feasible alternative "p"-type dopant for use in SHJ absorbers. Gallium-doped cells have potential for higher stability and lower defect density than boron-doped cells, with research groups achieving formula_0 exceeding 730 mV on gallium-doped "p"-type SHJ. However, gallium has a lower segregation coefficient than boron in Cz-grown silicon ingots, and therefore suffers a similar problem to "n"-type doping in that less of the ingot length is usable.
FF losses.
Fill factor refers to how well the solar cell performs at its maximum power point compared to open- or short-circuit conditions.
Fill factor in high-efficiency solar cells is affected by several key factors: series resistance, bulk carrier lifetime, saturation current density, wafer resistivity and wafer thickness. These factors in turn affect the formula_0 and the diode ideality factor. To achieve a fill factor over 86%, a high efficiency heterojunction cell must have a very high shunt resistance, negligible series resistance, high quality bulk silicon with a very long minority carrier lifetime (~15 ms), and excellent passivation (saturation current density below 0.8 fA/cm2).
The diode ideality factor will approach 2/3 when the bulk wafer lifetimes increase, implying that Auger recombination becomes the dominant mechanism when bulk defect density is very low. An ideality factor of less than 1 will enable fill factors greater than 86%, as long as bulk lifetimes are high. Very high lifetimes are easier to achieve when the wafer thickness is reduced. At sufficiently high lifetimes, it is also advantageous to decrease the bulk doping concentration (increase the wafer resistivity formula_7 > 0.3 Ω·cm) such that the wafer is under high injection conditions (the number of generated carriers is high compared to the dopant concentration).
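One way to see how the ideality factor feeds into the attainable fill factor is Green's empirical expression for the ideal fill factor, FF0 = (v - ln(v + 0.72)) / (v + 1), with v = formula_0 / (n·kT/q). This expression and the inputs below are standard, illustrative assumptions rather than material taken from this article, and the calculation ignores series and shunt resistance losses.
<syntaxhighlight lang="python">
import math

# Green's empirical upper bound on fill factor (series/shunt losses ignored):
#   FF_0 = (v - ln(v + 0.72)) / (v + 1),  with  v = V_oc / (n * kT/q)
# The open-circuit voltage and ideality factors below are illustrative.

V_T = 0.02569  # thermal voltage kT/q at 25 C, volts

def ideal_fill_factor(v_oc, n_ideality, v_t=V_T):
    v = v_oc / (n_ideality * v_t)
    return (v - math.log(v + 0.72)) / (v + 1.0)

V_OC = 0.750  # volts, representative of a high-quality SHJ cell

for n in (1.0, 2.0 / 3.0):
    print(f"ideality factor n = {n:.2f}: FF_0 ~ {ideal_fill_factor(V_OC, n):.3f}")

# An Auger-limited ideality factor below 1 pushes the attainable fill factor
# above the 86 % figure discussed in the text.
</syntaxhighlight>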
Module degradation.
Solar modules are exposed to various stressors when deployed in outdoor installations, including moisture, thermal cycling and ultraviolet light. Solar modules may be expected to be in service for decades, and these factors can reduce module lifespan if unaccounted for. The mechanisms of degradation include efficiency loss in the cell itself from cracking, gradual corrosion or defect activation; delamination of the module layers; UV degradation of the cell or lamination; encapsulant embrittlement or discolouration; and failure of the metal conductors (fingers, busbars and tabbing). Some significant design considerations for module longevity are in encapsulant choice, with significant reductions in the module's levelised cost of electricity (LCOE) for encapsulants with fewer adverse effects on module efficiency.
Potential-induced Degradation.
Potential-induced degradation (PID) refers to degradation caused by high voltage stress in solar modules. It is one of the primary mechanisms of solar module degradation. Strings of modules in series can accumulate up to 1000 V in a photovoltaic system, and such a potential difference can be present over a small distance between the solar cells and a grounded module frame, causing leakage currents. PID is primarily an electrochemical process causing corrosion and ion migration in a solar module and cells, facilitated by moisture ingress and surface contamination. Sodium ions, which are suspected to leach from soda-lime glass, are particularly problematic and can cause degradation in the presence of moisture (even without high electric potential). This leads to reduction in the efficiency and lifespan of a PV system.
PID has been observed in all types of crystalline silicon solar cells, as well as thin-film solar cells, CIGS cells and CdTe cells. In research, PID can be replicated in accelerated aging tests by applying high bias voltages to a sample module, especially in an environmental chamber. In SHJ cells, PID is mostly characterised by the reduction in formula_1 caused by optical losses, and unlike the PID observed in other module technologies, the PID is mostly irreversible in SHJ modules with only a small recovery from applying the opposite bias. This indicates that some component of the PID occurs through a different mechanism in SHJ modules. It has been suggested that optical losses are caused by indium metal precipitating in the TCO. Degraded modules have also measured high concentrations of sodium ions deeper in the cell, which is consistent with PID caused from negative bias.
Encapsulant hydrolysis.
Encapsulants are thermoplastic materials used to encase solar cells in modules for stability. In the lamination process, the cells are sandwiched between encapsulant films, which are then melted. Traditionally, the cheap copolymer ethylene-vinyl acetate (EVA) has been used as the encapsulant in crystalline silicon modules. After long duration exposure to moisture, EVA can hydrolyse and leach acetic acid, with the potential to corrode the metal terminals or surface of a solar cell.
Non-bifacial modules are composed of a textured glass front and UV-stabilised polymer (commonly polyvinyl fluoride) backsheet, whereas bifacial modules are more likely to be glass–glass. The polymer backsheet, despite being more permeable to moisture ingress than glass–glass modules (which facilitates hydrolysis of EVA), is allegedly "breathable" to acetic acid and does not allow it to build up. As SHJ-based modules are more likely to be bifacial glass–glass, the risk of acetic acid buildup is claimed to be greater; however manufacturers have found the impermeability of glass–glass modules is generally sufficient to prevent EVA degradation, allowing modules to pass accelerated aging tests. Some studies have also found that glass–glass construction reduces the extent of degradation in EVA-encapsulated modules against glass–backsheet.
Additionally, ITO used in SHJ cells may be susceptible to acetic acid etching, causing formula_0 loss. Despite the higher cost, acetate-free and low water vapour permeability encapsulants such as polyolefin elastomers (POE) or thermoplastic olefins (TPO) show reduced degradation after damp-heat testing in comparison to EVA. It has been estimated that using POE or TPO over EVA can reduce the LCOE by nearly 3% as a result of improved module longevity.
Encapsulant-free module designs have also been developed with potential for reduced long term degradation and CO2 footprint. However reflection losses may arise from the lack of optical coupling between the front glass and the cell that encapsulant provides.
Encapsulant delamination.
POE has higher resistance to water ingress compared to EVA, and hence prevents PID and other moisture-related issues. However, the lamination time is longer, and the adhesion between POE and the cell or glass is inferior to EVA. Delamination of encapsulant from poor adhesion can cause failure of the module. Therefore, POE is increasingly used as the centre layer in a three-layer coextruded polymer encapsulant with EVA, known as EPE (EVA–POE–EVA) which entails the benefits of both polymers.
UV stability.
UV light can cause degradation of module encapsulants and backsheets, causing discolouration, embrittlement and delamination that reduces module lifespan and performance. Hot carriers generated by UV absorption can also cause oxidation of such materials. Furthermore, in high efficiency solar cells including heterojunction, UV causes changes in passivation that may decrease module performance. Studies involving extended UV light soaking of heterojunction modules indicate they are more susceptible to UV damage than PERC or PERT modules, where significant losses in fill factor and open-circuit voltages were observed. The proposed mechanism is the redistribution of hydrogen away from the passivated surface interfaces and into the amorphous layers.
UV cut-off encapsulant films have been used to protect SHJ cells from UV degradation; however, the UV energy blocked by such films is not used by the solar cells. In 2023, encapsulant films containing UV down-converting phosphors such as europium/dysprosium-doped strontium magnesium silicate (Sr2-xMgSi2O7-x: Eu2+, Dy3+) were introduced for heterojunction solar cell applications, such as in EPE encapsulants. Such materials not only protect against UV degradation but also deliver optical gains from the visible photons they generate. Such films are being investigated for commercial use by Chinese heterojunction encapsulant manufacturers, where tests of 60-cell modules saw power increases of 5 watts (approximately 1.5%) using the UV-converting film.
Glossary.
The following is a glossary of terms associated with heterojunction solar cells.
<templatestyles src="Glossary/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V_{OC}"
},
{
"math_id": 1,
"text": "J_{SC}"
},
{
"math_id": 2,
"text": "d=\\frac{\\lambda}{4\\eta}"
},
{
"math_id": 3,
"text": "d"
},
{
"math_id": 4,
"text": "\\lambda"
},
{
"math_id": 5,
"text": "\\eta"
},
{
"math_id": 6,
"text": "FF"
},
{
"math_id": 7,
"text": "\\rho"
}
]
| https://en.wikipedia.org/wiki?curid=72654987 |
726587 | Information cascade | Behavioral phenomenon
An information cascade or informational cascade is a phenomenon described in behavioral economics and network theory in which a number of people make the same decision in a sequential fashion. It is similar to, but distinct from herd behavior.
An information cascade is generally accepted as a two-step process. First, for a cascade to begin, an individual must encounter a scenario with a decision, typically a binary one. Second, outside factors can influence this decision, such as the individual observing others' choices and the apparent outcomes.
The two-step process of an informational cascade can be broken down into five basic components:
Social perspectives of cascades, which suggest that agents may act irrationally (e.g., against what they think is optimal) when social pressures are great, exist as complements to the concept of information cascades. More often the problem is that the concept of an information cascade is confused with ideas that do not match the two key conditions of the process, such as social proof, information diffusion, and social influence. Indeed, the term information cascade has even been used to refer to such processes.
Basic model.
This section provides some basic examples of information cascades, as originally described by Bikhchandani et al. (1992). The basic model has since been developed in a variety of directions to examine its robustness and better understand its implications.
Qualitative example.
Information cascades occur when external information obtained from previous participants in an event overrides one's own private signal, irrespective of the correctness of the former over the latter. The experiment conducted by Anderson is a useful example of this process. The experiment consisted of two urns labeled A and B. Urn A contains two balls labeled "a" and one labeled "b". Urn B contains one ball labeled "a" and two labeled "b". The urn from which a ball must be drawn during each run is determined randomly and with equal probabilities (by the throw of a die). The contents of the chosen urn are emptied into a neutral container. The participants are then asked in random order to draw a ball from this container. This entire process may be termed a "run", and a number of such runs are performed.
Each time a participant draws a ball, he is to decide which urn it came from. His decision is then announced for the benefit of the remaining participants in the room. Thus, the (n+1)th participant has information about the decisions made by all the n participants preceding him, as well as his private signal, which is the label on the ball that he draws during his turn. The experimenters observed that an information cascade occurred in 41 of 56 such runs. This means that, in the runs where the cascade occurred, at least one participant gave precedence to earlier decisions over his own private signal. It is possible for such an occurrence to produce the wrong result; this phenomenon is known as a "reverse cascade".
Quantitative description.
A person's signal telling them to accept is denoted as H (a high signal, where high signifies he should accept), and a signal telling them not to accept is L (a low signal). The model assumes that when the correct decision is to accept, individuals will be more likely to see an H, and conversely, when the correct decision is to reject, individuals are more likely to see an L signal. This is essentially a conditional probability – the probability of H when the correct action is to accept, or formula_0. Similarly formula_1 is the probability that an agent gets an L signal when the correct action is reject. If these likelihoods are represented by "q", then "q" > 0.5. This is summarized in the table below.
The first agent determines whether or not to accept solely based on his own signal. As the model assumes that all agents act rationally, the action (accept or reject) the agent feels is more likely is the action he will choose to take. This decision can be explained using Bayes' rule:
formula_2
If the agent receives an H signal, then the likelihood of accepting is obtained by calculating formula_3. The equation says that, by virtue of the fact that "q" > 0.5, the first agent, acting only on his private signal, will always increase his estimate of "p" with an H signal. Similarly, it can be shown that an agent will always decrease his expectation of "p" when he receives a low signal. Recall that, if the value V of accepting is equal to the value of rejecting, an agent will accept if he believes "p" > 0.5, and reject otherwise. Because this agent started out with the assumption that both accepting and rejecting are equally viable options ("p" = 0.5), the observation of an H signal allows him to conclude that accepting is the rational choice.
The second agent then considers both the first agent's decision and his own signal, again in a rational fashion. In general, the "n"th agent considers the decisions of the previous "n"-1 agents, and his own signal. He makes a decision based on Bayesian reasoning to determine the most rational choice.
formula_4
where "a" is the number of accepts in the previous set plus the agent's own signal, and "b" is the number of rejects. Thus, "a" + "b" = "n". The decision is based on how the value on the right hand side of the equation compares with "p".
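The sequential updating described above can be simulated directly. The sketch below follows the simplified counting rule stated in the text (previous public decisions plus the agent's own signal are tallied as "a" accepts and "b" rejects), uses an arbitrary signal quality "q" = 0.6, and breaks posterior ties with the agent's own signal; it is an illustration of the mechanism, not a reproduction of any particular experiment.
<syntaxhighlight lang="python">
import random

# Sequential-decision sketch of the basic cascade model described above.
# Each agent tallies the previous public decisions plus their own private
# signal as 'a' accepts and 'b' rejects (the simplified counting rule in the
# text), computes the posterior, and accepts when it exceeds 1/2.

def posterior_accept(a, b, p=0.5, q=0.6):
    """P(accept is correct | a accept-type observations, b reject-type ones)."""
    num = p * q**a * (1 - q)**b
    den = num + (1 - p) * (1 - q)**a * q**b
    return num / den

def run_cascade(n_agents=20, q=0.6, accept_is_correct=True, seed=1):
    rng = random.Random(seed)
    decisions = []  # 1 = accept, 0 = reject, in order of play
    for _ in range(n_agents):
        # Private signal: H with probability q if 'accept' is correct, else 1 - q.
        own_high = rng.random() < (q if accept_is_correct else 1 - q)
        a = sum(decisions) + (1 if own_high else 0)
        b = len(decisions) + 1 - a
        post = posterior_accept(a, b, q=q)
        # Accept if the posterior favours accepting; follow the own signal on a tie.
        decisions.append(1 if (post > 0.5 or (post == 0.5 and own_high)) else 0)
    return decisions

print(run_cascade())  # after a few agents, the decisions typically lock in
</syntaxhighlight>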
Explicit model assumptions.
The original model makes several assumptions about human behavior and the world in which humans act, some of which are relaxed in later versions or in alternate definitions of similar problems, such as the diffusion of innovations.
Responding.
A literature exists that examines how individuals or firms might respond to the existence of informational cascades when they have products to sell but where buyers are unsure of the quality of those products. Curtis Taylor (1999) shows that when selling a house the seller might wish to start with high prices, as failure to sell with low prices is indicative of low quality and might start a cascade on not buying, while failure to sell with high prices could be construed as meaning the house is just over-priced, and prices can then be reduced to get a sale. Daniel Sgroi (2002) shows that firms might use "guinea pigs" who are given the opportunity to buy early to kick-start an informational cascade through their early and public purchasing decisions, and work by David Gill and Daniel Sgroi (2008) shows that early public tests might have a similar effect (in particular, passing a "tough test" which is biased against the seller can instigate a cascade all by itself). Bose "et al." have examined how prices set by a monopolist might evolve in the presence of potential cascade behavior where the monopolist and consumers are unsure of a product's quality.
Examples and fields of application.
Information cascades occur in situations where seeing many people make the same choice provides evidence that outweighs one's own judgment. That is, one thinks: "It's more likely that I'm wrong than that all those other people are wrong. Therefore, I will do as they do."
In what has been termed a reputational cascade, late responders sometimes go along with the decisions of early responders, not just because the late responders think the early responders are right, but also because they perceive their reputation will be damaged if they dissent from the early responders.
Market cascades.
Information cascades have become one of the topics of behavioral economics, as they are often seen in financial markets where they can feed speculation and create cumulative and excessive price moves, either for the whole market (market bubble) or a specific asset, like a stock that becomes overly popular among investors.
Marketers also use the idea of cascades to attempt to get a buying cascade started for a new product. If they can induce an initial set of people to adopt the new product, then those who make purchasing decisions later on may also adopt the product even if it is no better than, or perhaps even worse than, competing products. This is most effective if these later consumers are able to observe the adoption decisions, but not how satisfied the early customers actually were with the choice. This is consistent with the idea that cascades arise naturally when people can see what others do but not what they know.
An example is Hollywood movies. If test screenings suggest a big-budget movie might be a flop, studios often decide to spend more on initial marketing rather than less, with the aim of making as much money as possible on the opening weekend, before word gets around that it's a turkey.
Information cascades are usually considered by economists:
Social networks and social media.
Dotey et al. state that information flows in the form of cascades on social networks. According to the authors, analysis of the virality of information cascades on a social network may lead to many useful applications, such as determining the most influential individuals within a network. This information can be used for "maximizing market effectiveness" or "influencing public opinion". Various structural and temporal features of a network affect cascade virality. Additionally, these models are widely exploited in the study of rumor spread in social networks, in order to investigate it and reduce its influence in online social networks.
In contrast to work on information cascades in social networks, the social influence model of belief spread argues that people have some notion of the private beliefs of those in their network. The social influence model, then, relaxes the assumption of information cascades that people are acting only on observable actions taken by others. In addition, the social influence model focuses on embedding people within a social network, as opposed to a queue. Finally, the social influence model relaxes the assumption of the information cascade model that people will either complete an action or not, by allowing for a continuous scale of the "strength" of an agent's belief that an action should be completed.
Information cascades can also restructure the social networks that they pass through. For example, while there is a constant low level of churn in social ties on Twitter—in any given month, about 9% of all social connections change—there is often a spike in follow and unfollow activity following an information cascade, such as the sharing of a viral tweet. As the tweet-sharing cascade passes through the network, users adjust their social ties, particularly those connected to the original author of the viral tweet: the author of a viral tweet will see both a sudden loss in previous followers and a sudden increase in new followers.
As a part of this cascade-driven reorganization process, information cascades can also create assortative social networks, where people tend to be connected to others who are similar in some characteristic. Tweet cascades increase the similarity between connected users, as users lose ties to more dissimilar users and add new ties to similar users. Information cascades created by news coverage in the media may also foster political polarization by sorting social networks along political lines: Twitter users who follow and share more polarized news coverage tend to lose social ties to users of the opposite ideology.
Empirical studies.
In addition to the examples above, information cascades have been shown to exist in several empirical studies. Perhaps the best example is the urn experiment given above. Participants stood in a line behind an urn which contained balls of different colors. Sequentially, participants would pick a ball out of the urn, look at it, and then place it back into the urn. Each participant would then voice their opinion of which color of ball (red or blue) is in the majority in the urn, for the rest of the participants to hear. Participants receive a monetary reward if they guess correctly, giving them an incentive to decide rationally.
Other examples include
Legal aspects.
The negative effects of informational cascades sometimes become a legal concern and laws have been enacted to neutralize them. Ward Farnsworth, a law professor, analyzed the legal aspects of informational cascades and gave several examples in his book "The Legal Analyst": in many military courts, the officers voting to decide a case vote in reverse rank order (the officer of the lowest rank votes first), and he suggested it may be done so the lower-ranked officers would not be tempted by the cascade to vote with the more senior officers, who are believed to have more accurate judgement; another example is that countries such as Israel and France have laws that prohibit polling days or weeks before elections to prevent the effect of informational cascade that may influence the election results.
Globalization.
One informational cascade study compared thought processes between Greek and German organic farmers, suggesting discrepancies based upon cultural and socioeconomic differences. Cascades have also been extrapolated to ideas such as financial volatility and monetary policy. In 2004, Helmut Wagner and Wolfram Berger suggested cascades as an analytical vehicle to examine changes to the financial market as it became more globalized. Wagner and Berger noticed structural changes to the framework of understanding financial markets due to globalization, giving rise to volatility in capital flow and spawning uncertainty which affected central banks. Additionally, information cascades are useful in understanding the origins of terrorist tactics: when the attack by Black September occurred in 1972, it was hard not to see the similarities between its tactics and those of the Baader-Meinhof group (also known as the Red Army Faction [RAF]). All of these examples portray how the process of cascades has been put into use. Understanding the framework of cascades, and how information passes through transnational and multinational organizations, is therefore important in an increasingly globalized society. Summing up all of these points, cascades, as a general term, encompass a spectrum of different concepts, and information cascades have been the underlying thread in how information is transferred, overwritten, and understood across cultures spanning a multitude of different countries.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P[H|A]"
},
{
"math_id": 1,
"text": "P[L|R]"
},
{
"math_id": 2,
"text": "\\begin{align}\n P\\left(A|H\\right) &= \\frac{P\\left(A\\right) P\\left(H|A\\right)}{P\\left(H\\right)} \\\\\n &= \\frac{P\\left(A\\right) P\\left(H|A\\right)}{P\\left(A\\right) P\\left(H|A\\right) + P\\left(R\\right) P\\left(H|R\\right)} \\\\\n &= \\frac{pq}{pq + \\left(1 - p\\right)\\left(1 - q\\right)} \\\\\n &> p\n\\end{align}"
},
{
"math_id": 3,
"text": "P[A|H]"
},
{
"math_id": 4,
"text": "P (A | \\text{Previous}, \\text{Personal signal}) = \\frac{pq^a (1 - q)^b}{p q^a (1 - q)^b + (1 - p)(1 - q)^a q^b}"
}
]
| https://en.wikipedia.org/wiki?curid=726587 |
7266141 | Strongly positive bilinear form | Functional analysis form
A bilinear form "a"(•,•) whose arguments are elements of a normed vector space "V" is a strongly positive bilinear form if and only if there exists a constant "c" > 0 such that
formula_0
for all formula_1 where formula_2 is the norm on "V". | [
{
"math_id": 0,
"text": " a(u,u) \\geq c \\cdot \\|u\\|^2 "
},
{
"math_id": 1,
"text": "u\\in V"
},
{
"math_id": 2,
"text": "\\|\\cdot\\|"
}
]
| https://en.wikipedia.org/wiki?curid=7266141 |
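As a hedged numerical illustration of the definition above (not part of the article), the sketch below checks strong positivity for the finite-dimensional bilinear form a(u, v) = u^T A v with a symmetric positive-definite matrix A, where the smallest eigenvalue of A can serve as the constant c; NumPy is assumed to be available.

```python
# Minimal check that a(u, v) = u^T A v with A symmetric positive definite is strongly
# positive: a(u, u) >= c * ||u||^2 with c = smallest eigenvalue of A.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 0.5 * np.eye(4)      # symmetric positive definite by construction
c = np.linalg.eigvalsh(A).min()    # candidate constant c > 0

for _ in range(1000):
    u = rng.standard_normal(4)
    # a(u, u) = u^T A u, ||u||^2 = u^T u; small tolerance for floating-point rounding.
    assert u @ A @ u >= c * (u @ u) - 1e-9

print("strong positivity holds with c =", c)
```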
72669698 | Cauchy wavelet | Cauchy wavelet
In mathematics, Cauchy wavelets are a family of continuous wavelets, used in the continuous wavelet transform.
Definition.
The Cauchy wavelet of order formula_0 is defined as:
formula_1
where formula_2 and formula_3.
Therefore, its Fourier transform is defined as
formula_4.
Sometimes it is defined as a function with its Fourier transform
formula_5
where formula_6, formula_7 for almost every formula_8, and formula_9 for all formula_8.
In previous research on the Cauchy wavelet, it was also defined as
formula_10
If the Cauchy wavelet is defined in this way, we can observe that its Fourier transform satisfies
formula_11
Moreover, the maximum of the Fourier transform of the Cauchy wavelet of order formula_0 occurs at formula_12, and the Fourier transform is positive only for formula_13. This means that:
(1) when formula_0 is low, convolution with the Cauchy wavelet acts as a low-pass filter, and when formula_0 is high it acts as a high-pass filter,
since the wavelet transform is the convolution with the mother wavelet, and by the convolution theorem this convolution equals the multiplication of the Fourier transform of the mother wavelet with the Fourier transform of the function.
And,
(2) the design of the Cauchy wavelet transform is tied to the analysis of the analytic signal,
since the analytic signal is in bijection with the real signal and contains only positive frequencies (a real signal has conjugate-symmetric positive and negative frequency components), i.e.
formula_14
where formula_15 is a real signal (formula_16, for all formula_17)
The bijection between the analytic signal and the real signal is given by
formula_18
formula_19
where formula_20 is the analytic signal corresponding to the real signal formula_15, and formula_21 is the Hilbert transform of formula_15.
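The spectral claims above are easy to check numerically. The sketch below (assuming NumPy) evaluates the Fourier-domain form of the Cauchy wavelet, ξ^p e^{-ξ} for ξ ≥ 0, and confirms that it vanishes for negative frequencies and peaks at ξ = p, so increasing p shifts the passband toward higher frequencies.

```python
# Numerical check of the Fourier-domain Cauchy wavelet: xi^p * exp(-xi) for xi >= 0.
import numpy as np

def cauchy_wavelet_ft(xi, p):
    xi = np.asarray(xi, dtype=float)
    return np.where(xi >= 0, xi**p * np.exp(-xi), 0.0)

xi = np.linspace(0, 50, 200001)
for p in (1, 4, 16):
    values = cauchy_wavelet_ft(xi, p)
    peak = xi[np.argmax(values)]
    print(f"p = {p:2d}: peak at xi = {peak:.3f} (theory: xi = p)")
    assert abs(peak - p) < 1e-2             # maximum occurs at xi = p
    assert cauchy_wavelet_ft(-1.0, p) == 0  # no response at negative frequencies
```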
Unicity of the reconstruction.
Phase retrieval problem.
A phase retrieval problem consists in reconstructing an unknown complex function formula_22 from a set of phaseless linear measurements. More precisely, let formula_23 be a vector space over formula_24 whose vectors are complex functions, and let formula_25 be a set of linear forms from formula_23 to formula_24. We are given the set of all formula_26 for some unknown formula_27, and we want to determine formula_22.
This problem can be studied from three different viewpoints:
(1) Is formula_22 uniquely determined by formula_26 (up to a global phase)?
(2) If the answer to the previous question is positive, is the inverse map formula_28 “stable”? For example, is it continuous? Uniformly Lipschitz?
(3) In practice, is there an efficient algorithm which recovers formula_22 from formula_26?
The most well-known example of a phase retrieval problem is the case where the formula_29 represent the Fourier coefficients:
for example:
formula_30, for formula_31,
where formula_22 is a complex-valued function on formula_32.
Then formula_22 can be reconstructed from formula_33 as
formula_34.
and in fact we have Parseval's identity
formula_35.
where formula_36, i.e. the norm on formula_37.
Hence, in this example, the index set formula_38 is the set of integers formula_39, the vector space formula_23 is formula_37, and the linear form formula_40 is the Fourier coefficient. Furthermore, the absolute values of the Fourier coefficients formula_41 determine only the norm of formula_22 in formula_37, not formula_22 itself.
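A small numerical illustration (assuming NumPy) of the point just made: from the moduli of the Fourier coefficients alone one recovers the norm of f via Parseval's identity, but two functions whose coefficients differ only in phase cannot be told apart. The test functions below are arbitrary.

```python
# Sketch: |Fourier coefficients| determine the L^2 norm (Parseval) but not the function.
import numpy as np

N = 4096
t = np.linspace(-np.pi, np.pi, N, endpoint=False)

f = np.cos(3 * t) + 0.5 * np.sin(7 * t)
g = np.cos(3 * t + 1.0) + 0.5 * np.sin(7 * t - 0.3)  # same |coefficients|, different phases

def fourier_coeffs(x):
    # L_n(x) = (1/2pi) * integral of x(t) e^{-i n t} dt, approximated (up to a phase
    # convention coming from the t-grid offset) by the discrete Fourier transform.
    return np.fft.fft(x) / N

cf, cg = fourier_coeffs(f), fourier_coeffs(g)

norm_f_direct = np.mean(np.abs(f) ** 2)    # (1/2pi) * integral of |f|^2, discretized
norm_f_parseval = np.sum(np.abs(cf) ** 2)  # sum of |L_n(f)|^2
print(norm_f_direct, norm_f_parseval)      # agree to numerical precision

print(np.allclose(np.sort(np.abs(cf)), np.sort(np.abs(cg))))  # True: same moduli
print(np.allclose(f, g))                                      # False: different functions
```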
Unicity Theorem of the reconstruction.
Firstly, we define the Cauchy wavelet transform as:
formula_42.
Then, the theorem is as follows.
Theorem. Fix formula_43, let formula_44 be two different numbers, and let the Cauchy wavelet transform be defined as above. If two real-valued functions formula_45 satisfy
formula_46, formula_47 and
formula_48, formula_47,
then there is an formula_49 such that formula_50.
formula_50 implies that
formula_51 and
formula_52.
Hence, we get the relation
formula_53
and formula_54.
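The algebraic identity just derived, f = cos(α)·g − sin(α)·g_H, can be verified directly; the sketch below (assuming NumPy and SciPy) builds a test signal g, forms f as the real part of e^{jα}g₊ via the Hilbert transform, and checks the relation. This only illustrates the algebra of the conclusion, not the wavelet-transform hypothesis of the theorem.

```python
# Numerical check of f = cos(a) g - sin(a) g_H when f_+ = e^{j a} g_+.
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0, 1, 2048, endpoint=False)
g = np.sin(2 * np.pi * 40 * t) + 0.3 * np.cos(2 * np.pi * 90 * t)  # a test real signal

g_plus = hilbert(g)          # analytic signal g_+ = g + j g_H
g_H = g_plus.imag            # Hilbert transform of g

alpha = 0.7                  # arbitrary global phase
f = np.real(np.exp(1j * alpha) * g_plus)

print(np.allclose(f, np.cos(alpha) * g - np.sin(alpha) * g_H))  # True
```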
Returning to the phase retrieval problem: in the Cauchy wavelet transform case, the index set formula_55 is formula_56 with formula_57 and formula_44, the vector space formula_23 is formula_58, and the linear form formula_59 is defined as formula_60. Hence, formula_61 determines the two-dimensional subspace formula_62 of formula_58.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p"
},
{
"math_id": 1,
"text": "\\psi_p(t) = \\frac{\\Gamma(p+1)}{2\\pi}\\left ( \\frac{j}{t + j} \\right ) ^{p+1}"
},
{
"math_id": 2,
"text": "p > 0"
},
{
"math_id": 3,
"text": "j = \\sqrt{-1}"
},
{
"math_id": 4,
"text": "\\hat{\\psi_p}(\\xi) = \\xi^{p}e^{-\\xi}I_{[\\xi \\geq 0]}"
},
{
"math_id": 5,
"text": "\\hat{\\psi_p}(\\xi) = \\rho(\\xi)\\xi^{p}e^{-\\xi}I_{[\\xi \\geq 0]}"
},
{
"math_id": 6,
"text": "\\rho(\\xi) \\in L^{\\infty}(\\mathbb{R})"
},
{
"math_id": 7,
"text": "\\rho(\\xi) = \\rho(a\\xi)"
},
{
"math_id": 8,
"text": "\\xi \\in \\mathbb{R}"
},
{
"math_id": 9,
"text": "\\rho(\\xi) \\neq 0 "
},
{
"math_id": 10,
"text": "\\psi_p(t) = (\\frac{j}{t + j})^{p+1}"
},
{
"math_id": 11,
"text": "\\int_{-\\infty}^{\\infty} \\hat{\\psi_p}(\\xi) \\,d\\xi = \\int_{0}^{\\infty} \\frac{2\\pi}{\\Gamma(p+1)} \\xi^{p}e^{-\\xi} \\,d\\xi = 2\\pi"
},
{
"math_id": 12,
"text": "\\xi = p"
},
{
"math_id": 13,
"text": "\\xi > 0"
},
{
"math_id": 14,
"text": "\\overline{FT\\{x\\}(-\\xi)} = FT\\{x\\}(\\xi)"
},
{
"math_id": 15,
"text": "x(t)"
},
{
"math_id": 16,
"text": "x(t) \\in \\mathbb{R}"
},
{
"math_id": 17,
"text": "t \\in \\mathbb{R}"
},
{
"math_id": 18,
"text": "x_{+}(t) = x(t) + jx_H(t)"
},
{
"math_id": 19,
"text": "x(t) = Re\\{x_{+}(t)\\}"
},
{
"math_id": 20,
"text": "x_{+}(t)"
},
{
"math_id": 21,
"text": "x_H(t)"
},
{
"math_id": 22,
"text": " f "
},
{
"math_id": 23,
"text": " V "
},
{
"math_id": 24,
"text": " \\mathbb{C} "
},
{
"math_id": 25,
"text": " \\{L_i\\}_{i \\in I} "
},
{
"math_id": 26,
"text": " \\{|L_i(f)|\\}_{i \\in I} "
},
{
"math_id": 27,
"text": " f \\in V"
},
{
"math_id": 28,
"text": " \\{|L_i(f)|\\}_{i \\in I} \\implies f "
},
{
"math_id": 29,
"text": " L_i "
},
{
"math_id": 30,
"text": " L_n(f) = \\frac{1}{2\\pi} \\int_{-\\pi}^{\\pi} f(t)e^{-jnt} \\,dt "
},
{
"math_id": 31,
"text": "n \\in \\mathbb{Z}"
},
{
"math_id": 32,
"text": " [-\\pi, \\pi] "
},
{
"math_id": 33,
"text": " L_n(f) "
},
{
"math_id": 34,
"text": " f(t) = \\sum_{n=-\\infty}^\\infty L_n(f)e^{jnt} "
},
{
"math_id": 35,
"text": " ||f||^2 = \\sum_{n=-\\infty}^\\infty |L_n(f)|^2 "
},
{
"math_id": 36,
"text": " ||f||^2 = \\frac{1}{2\\pi} \\int_{-\\pi}^{\\pi} |f(t)|^2 \\,dt "
},
{
"math_id": 37,
"text": " L^2([-\\pi, \\pi]) "
},
{
"math_id": 38,
"text": " I"
},
{
"math_id": 39,
"text": " \\mathbb{Z}"
},
{
"math_id": 40,
"text": " L_n "
},
{
"math_id": 41,
"text": " \\{|L_n(f)|\\}_{n \\in \\mathbb{Z}} "
},
{
"math_id": 42,
"text": " W_{\\psi_p}[x(t)](a, b) = \\frac{1}{b} \\int_{-\\infty}^{\\infty} x(t) \\overline{\\psi_p(\\frac{t-a}{b})} \\,dt "
},
{
"math_id": 43,
"text": " p > 0 "
},
{
"math_id": 44,
"text": " b_1, b_2 > 0 "
},
{
"math_id": 45,
"text": " f, g \\in L^2(\\mathbb{R}) "
},
{
"math_id": 46,
"text": " |W_{\\psi_p}[f(t)](a, b_1)| = |W_{\\psi_p}[g(t)](a, b_1)| "
},
{
"math_id": 47,
"text": " \\forall a \\in \\mathbb{R} "
},
{
"math_id": 48,
"text": " |W_{\\psi_p}[f(t)](a, b_2)| = |W_{\\psi_p}[g(t)](a, b_2)| "
},
{
"math_id": 49,
"text": " \\alpha \\in \\mathbb{R} "
},
{
"math_id": 50,
"text": " f_{+}(t) = e^{j\\alpha}g_{+}(t) "
},
{
"math_id": 51,
"text": " Re\\{f_{+}(t)\\} = Re\\{e^{j\\alpha}g_{+}(t)\\} \\implies f(t) = \\cos{\\alpha} g(t) - \\sin{\\alpha} g_H(t) "
},
{
"math_id": 52,
"text": " Im\\{f_{+}(t)\\} = Im\\{e^{j\\alpha}g_{+}(t)\\} \\implies f_H(t) = \\sin{\\alpha} g(t) + \\cos{\\alpha} g_H(t) "
},
{
"math_id": 53,
"text": " f(t) = (\\cos{\\alpha}-\\sin{\\alpha}\\tan{\\alpha}) g(t) - \\tan{\\alpha} f_H(t) "
},
{
"math_id": 54,
"text": " f(t), g_H(t) \\in span\\{f_H(t), g(t)\\} = span\\{f(t), f_H(t)\\} = span\\{g(t), g_H(t)\\} "
},
{
"math_id": 55,
"text": " I "
},
{
"math_id": 56,
"text": " \\mathbb{R} \\times \\{b_1, b_2\\} "
},
{
"math_id": 57,
"text": " b_1 \\neq b_2 "
},
{
"math_id": 58,
"text": " L^2(\\mathbb{R}) "
},
{
"math_id": 59,
"text": " L_{(a, b)} "
},
{
"math_id": 60,
"text": " L_{(a, b)}(f) = W_{\\psi_p}[f(t)](a, b) "
},
{
"math_id": 61,
"text": " \\{|L_{(a, b)}(f)|\\}_{a, b \\in \\mathbb{R} \\times \\{b_1, b_2\\}} "
},
{
"math_id": 62,
"text": " span\\{f,f_H\\} "
}
]
| https://en.wikipedia.org/wiki?curid=72669698 |
726748 | Black-body radiation | Thermal electromagnetic radiation
Black-body radiation is the thermal electromagnetic radiation within, or surrounding, a body in thermodynamic equilibrium with its environment, emitted by a black body (an idealized opaque, non-reflective body). It has a specific, continuous spectrum of wavelengths, inversely related to intensity, that depend only on the body's temperature, which is assumed, for the sake of calculations and theory, to be uniform and constant.
A perfectly insulated enclosure which is in thermal equilibrium internally contains blackbody radiation, and will emit it through a hole made in its wall, provided the hole is small enough to have a negligible effect upon the equilibrium. The thermal radiation spontaneously emitted by many ordinary objects can be approximated as blackbody radiation.
Of particular importance, although planets and stars (including the Earth and Sun) are neither in thermal equilibrium with their surroundings nor perfect black bodies, blackbody radiation is still a good first approximation for the energy they emit.
The term "black body" was introduced by Gustav Kirchhoff in 1860. Blackbody radiation is also called thermal radiation, "cavity radiation", "complete radiation" or "temperature radiation".
Theory.
Spectrum.
Black-body radiation has a characteristic, continuous frequency spectrum that depends only on the body's temperature, called the Planck spectrum or Planck's law. The spectrum is peaked at a characteristic frequency that shifts to higher frequencies with increasing temperature, and at room temperature most of the emission is in the infrared region of the electromagnetic spectrum. As the temperature increases past about 500 degrees Celsius, black bodies start to emit significant amounts of visible light. Viewed in the dark by the human eye, the first faint glow appears as a "ghostly" grey (the visible light is actually red, but low intensity light activates only the eye's grey-level sensors). With rising temperature, the glow becomes visible even when there is some background surrounding light: first as a dull red, then yellow, and eventually a "dazzling bluish-white" as the temperature rises. When the body appears white, it is emitting a substantial fraction of its energy as ultraviolet radiation. The Sun, with an effective temperature of approximately 5800 K, is an approximate black body with an emission spectrum peaked in the central, yellow-green part of the visible spectrum, but with significant power in the ultraviolet as well.
Blackbody radiation provides insight into the thermodynamic equilibrium state of cavity radiation.
Black body.
All normal (baryonic) matter emits electromagnetic radiation when it has a temperature above absolute zero. The radiation represents a conversion of a body's internal energy into electromagnetic energy, and is therefore called thermal radiation. It is a spontaneous process of radiative distribution of entropy.
Conversely, all normal matter absorbs electromagnetic radiation to some degree. An object that absorbs all radiation falling on it, at all wavelengths, is called a black body. When a black body is at a uniform temperature, its emission has a characteristic frequency distribution that depends on the temperature. Its emission is called blackbody radiation.
The concept of the black body is an idealization, as perfect black bodies do not exist in nature. However, graphite and lamp black, with emissivities greater than 0.95, are good approximations to a black material. Experimentally, blackbody radiation may be established best as the ultimately stable steady state equilibrium radiation in a cavity in a rigid body, at a uniform temperature, that is entirely opaque and is only partly reflective. A closed box with walls of graphite at a constant temperature with a small hole on one side produces a good approximation to ideal blackbody radiation emanating from the opening.
Blackbody radiation has the unique absolutely stable distribution of radiative intensity that can persist in thermodynamic equilibrium in a cavity. In equilibrium, for each frequency, the intensity of radiation which is emitted and reflected from a body relative to other frequencies (that is, the net amount of radiation leaving its surface, called the "spectral radiance") is determined solely by the equilibrium temperature and does not depend upon the shape, material or structure of the body. For a black body (a perfect absorber) there is no reflected radiation, and so the spectral radiance is entirely due to emission. In addition, a black body is a diffuse emitter (its emission is independent of direction).
Blackbody radiation becomes a visible glow of light if the temperature of the object is high enough. The Draper point is the temperature at which all solids glow a dim red, about . At , a small opening in the wall of a large uniformly heated opaque-walled cavity (such as an oven), viewed from outside, looks red; at , it looks white. No matter how the oven is constructed, or of what material, as long as it is built so that almost all light entering is absorbed by its walls, it will contain a good approximation to blackbody radiation. The spectrum, and therefore color, of the light that comes out will be a function of the cavity temperature alone. A graph of the spectral radiation intensity plotted versus frequency (or wavelength) is called the "blackbody curve". Different curves are obtained by varying the temperature.
When the body is black, the absorption is obvious: the amount of light absorbed is all the light that hits the surface. For a black body much bigger than the wavelength, the light energy absorbed at any wavelength "λ" per unit time is strictly proportional to the blackbody curve. This means that the blackbody curve is the amount of light energy emitted by a black body, which justifies the name. This is the condition for the applicability of Kirchhoff's law of thermal radiation: the blackbody curve is characteristic of thermal light, which depends only on the temperature of the walls of the cavity, provided that the walls of the cavity are completely opaque and are not very reflective, and that the cavity is in thermodynamic equilibrium. When the black body is small, so that its size is comparable to the wavelength of light, the absorption is modified, because a small object is not an efficient absorber of light of long wavelength, but the principle of strict equality of emission and absorption is always upheld in a condition of thermodynamic equilibrium.
In the laboratory, blackbody radiation is approximated by the radiation from a small hole in a large cavity, a hohlraum, in an entirely opaque body that is only partly reflective, that is maintained at a constant temperature. (This technique leads to the alternative term "cavity radiation".) Any light entering the hole would have to reflect off the walls of the cavity multiple times before it escaped, in which process it is nearly certain to be absorbed. Absorption occurs regardless of the wavelength of the radiation entering (as long as it is small compared to the hole). The hole, then, is a close approximation of a theoretical black body and, if the cavity is heated, the spectrum of the hole's radiation (that is, the amount of light emitted from the hole at each wavelength) will be continuous, and will depend only on the temperature and the fact that the walls are opaque and at least partly absorptive, but not on the particular material of which they are built nor on the material in the cavity (compare with emission spectrum).
The radiance or observed intensity is not a function of direction. Therefore, a black body is a perfect Lambertian radiator.
Real objects never behave as full-ideal black bodies, and instead the emitted radiation at a given frequency is a fraction of what the ideal emission would be. The emissivity of a material specifies how well a real body radiates energy as compared with a black body. This emissivity depends on factors such as temperature, emission angle, and wavelength. However, it is typical in engineering to assume that a surface's spectral emissivity and absorptivity do not depend on wavelength so that the emissivity is a constant. This is known as the "gray body" assumption.
With non-black surfaces, the deviations from ideal blackbody behavior are determined by both the surface structure, such as roughness or granularity, and the chemical composition. On a "per wavelength" basis, real objects in states of local thermodynamic equilibrium still follow Kirchhoff's Law: emissivity equals absorptivity, so that an object that does not absorb all incident light will also emit less radiation than an ideal black body; the incomplete absorption can be due to some of the incident light being transmitted through the body or to some of it being reflected at the surface of the body.
In astronomy, objects such as stars are frequently regarded as black bodies, though this is often a poor approximation. An almost perfect blackbody spectrum is exhibited by the cosmic microwave background radiation. Hawking radiation is the hypothetical blackbody radiation emitted by black holes, at a temperature that depends on the mass, charge, and spin of the hole. If this prediction is correct, black holes will very gradually shrink and evaporate over time as they lose mass by the emission of photons and other particles.
A black body radiates energy at all frequencies, but its intensity rapidly tends to zero at high frequencies (short wavelengths). For example, a black body at room temperature () with one square meter of surface area will emit a photon in the visible range (390–750 nm) at an average rate of one photon every 41 seconds, meaning that, for most practical purposes, such a black body does not emit in the visible range.
The study of the laws of black bodies and the failure of classical physics to describe them helped establish the foundations of quantum mechanics.
Further explanation.
According to the Classical Theory of Radiation, if each Fourier mode of the equilibrium radiation (in an otherwise empty cavity with perfectly reflective walls) is considered as a degree of freedom capable of exchanging energy, then, according to the equipartition theorem of classical physics, there would be an equal amount of energy in each mode. Since there are an infinite number of modes, this would imply infinite heat capacity, as well as a nonphysical spectrum of emitted radiation that grows without bound with increasing frequency, a problem known as the ultraviolet catastrophe.
In the longer wavelengths this deviation is not so noticeable, as formula_0 and formula_1 are very small. In the shorter wavelengths of the ultraviolet range, however, classical theory predicts the energy emitted tends to infinity, hence the ultraviolet catastrophe. The theory even predicted that all bodies would emit most of their energy in the ultraviolet range, clearly contradicted by the experimental data which showed a different peak wavelength at different temperatures (see also Wien's law).
Instead, in the quantum treatment of this problem, the energy of each mode is quantized, attenuating the spectrum at high frequency in agreement with experimental observation and resolving the catastrophe. The modes that required more energy than the thermal energy of the substance itself were rarely excited, and because of quantization, modes having infinitesimally little energy were excluded.
Thus for shorter wavelengths very few modes (having energy more than formula_0) were allowed, supporting the data that the energy emitted is reduced for wavelengths less than the wavelength of the observed peak of emission.
Notice that there are two factors responsible for the shape of the graph, which can be seen as working opposite to one another. Firstly, shorter wavelengths have a larger number of modes associated with them. This accounts for the increase in spectral radiance as one moves from the longest wavelengths towards the peak at relatively shorter wavelengths. Secondly, though, at shorter wavelengths more energy is needed to reach the threshold level to occupy each mode: the more energy needed to excite the mode, the lower the probability that this mode will be occupied. As the wavelength decreases, the probability of exciting the mode becomes exceedingly small, leading to fewer of these modes being occupied: this accounts for the decrease in spectral radiance at very short wavelengths, left of the peak. Combined, they give the characteristic graph.
Calculating the blackbody curve was a major challenge in theoretical physics during the late nineteenth century. The problem was solved in 1901 by Max Planck in the formalism now known as Planck's law of blackbody radiation. By making changes to Wien's radiation law (not to be confused with Wien's displacement law) consistent with thermodynamics and electromagnetism, he found a mathematical expression fitting the experimental data satisfactorily. Planck had to assume that the energy of the oscillators in the cavity was quantized, which is to say that it existed in integer multiples of some quantity. Einstein built on this idea and proposed the quantization of electromagnetic radiation itself in 1905 to explain the photoelectric effect. These theoretical advances eventually resulted in the superseding of classical electromagnetism by quantum electrodynamics. These quanta were called photons and the blackbody cavity was thought of as containing a gas of photons. In addition, it led to the development of quantum probability distributions, called Fermi–Dirac statistics and Bose–Einstein statistics, each applicable to a different class of particles, fermions and bosons.
The wavelength at which the radiation is strongest is given by Wien's displacement law, and the overall power emitted per unit area is given by the Stefan–Boltzmann law. So, as temperature increases, the glow color changes from red to yellow to white to blue. Even as the peak wavelength moves into the ultraviolet, enough radiation continues to be emitted in the blue wavelengths that the body will continue to appear blue. It will never become invisible—indeed, the radiation of visible light increases monotonically with temperature. The Stefan–Boltzmann law also says that the total radiant heat energy emitted from a surface is proportional to the fourth power of its absolute temperature. The law was formulated by Josef Stefan in 1879 and later derived by Ludwig Boltzmann. The formula "E" = "σT"4 is given, where "E" is the radiant heat emitted from a unit of area per unit time, "T" is the absolute temperature, and "σ" = 5.670373 × 10−8 W m−2 K−4 is the Stefan–Boltzmann constant.
Equations.
Planck's law of blackbody radiation.
Planck's law states that
formula_2
where
For a black body surface, the spectral radiance density (defined per unit of area normal to the propagation) is independent of the angle formula_4 of emission with respect to the normal. However, this means that, following Lambert's cosine law, formula_5 is the radiance density per unit area of emitting surface, since the surface area involved in generating the radiance is increased by a factor formula_6 with respect to an area normal to the propagation direction. At oblique angles, the solid angles involved become smaller, resulting in lower aggregate intensities.
The emitted energy flux density or irradiance formula_7 is related to the photon flux density formula_8 through
formula_9
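As a sketch of how Planck's law is evaluated in practice (assuming NumPy; the physical constants are standard SI values typed in by hand), the function below returns the spectral radiance B_ν(T):

```python
# Spectral radiance of a black body: B_nu(T) = (2 h nu^3 / c^2) / (exp(h nu / k T) - 1), SI units.
import numpy as np

h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s
k = 1.380649e-23     # Boltzmann constant, J/K

def planck_nu(nu, T):
    """Spectral radiance in W / (m^2 sr Hz) at frequency nu (Hz) and temperature T (K)."""
    return (2.0 * h * nu**3 / c**2) / np.expm1(h * nu / (k * T))

nu = np.logspace(11, 16, 500)          # 0.1 THz to 10 PHz, a representative frequency grid
print(planck_nu(5e14, 5800.0))         # radiance of a roughly solar-temperature body near visible light
```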
Wien's displacement law.
Wien's displacement law shows how the spectrum of blackbody radiation at any temperature is related to the spectrum at any other temperature. If we know the shape of the spectrum at one temperature, we can calculate the shape at any other temperature. Spectral intensity can be expressed as a function of wavelength or of frequency.
A consequence of Wien's displacement law is that the wavelength at which the intensity "per unit wavelength" of the radiation produced by a black body has a local maximum or peak, formula_10, is a function only of the temperature:
formula_11
where the constant "b", known as Wien's displacement constant, is equal to formula_12 . formula_13 is the Lambert W function. So formula_10 is approximately 2898 μm·K/"T", with the temperature given in kelvins. At a typical room temperature of 293 K (20 °C), the maximum intensity is at .
Planck's law was also stated above as a function of frequency. The intensity maximum for this is given by
formula_14
In unitless form, the maximum occurs when formula_15, where formula_16. The approximate numerical solution is formula_17. At a typical room temperature of 293 K (20 °C), the maximum intensity is for formula_3 = 17 THz.
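A short sketch (plain Python; Wien's constant and the unitless root x ≈ 2.82 are the values stated above) that reproduces the peak wavelength and peak frequency for the temperatures mentioned in the text:

```python
# Wien's displacement law: peak wavelength and peak frequency as functions of temperature.
h = 6.62607015e-34        # Planck constant, J s
k = 1.380649e-23          # Boltzmann constant, J/K
b = 2.897771955e-3        # Wien's displacement constant, m K

def peak_wavelength(T):
    return b / T                      # metres

def peak_frequency(T):
    return 2.821439 * k * T / h       # Hz; x = h nu / (k T) = 2.82... solves e^x (1 - x/3) = 1

for T in (293.0, 5800.0):
    print(f"T = {T:6.0f} K: lambda_peak = {peak_wavelength(T) * 1e6:.2f} um, "
          f"nu_peak = {peak_frequency(T) / 1e12:.1f} THz")
```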
Stefan–Boltzmann law.
By integrating formula_18 over the frequency the radiance formula_19 (units: power / [area × solid angle] ) is
formula_20
by using formula_21 with formula_22 and with formula_23 being the Stefan–Boltzmann constant.
As a side note, at a distance d, the intensity formula_24 per area formula_25 of radiating surface is given by the useful expression
formula_26
when the receiving surface is perpendicular to the radiation.
By subsequently integrating formula_19 over the solid angle formula_27, with the azimuthal angle running from 0 to formula_28 and the polar angle formula_4 from 0 to formula_29, we arrive at the Stefan–Boltzmann law: the power "j"* emitted per unit area of the surface of a black body is directly proportional to the fourth power of its absolute temperature:
formula_30
We used
formula_31
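A numerical cross-check of the result above (assuming NumPy and SciPy): integrating π·B_ν(T) over frequency should reproduce σT⁴, and the constant assembled from h, c, and k should match the familiar value of σ.

```python
# Check j* = sigma * T^4 by integrating pi * B_nu(T) over frequency.
import numpy as np
from scipy.integrate import quad

h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23
sigma = 2 * np.pi**5 * k**4 / (15 * c**2 * h**3)       # Stefan-Boltzmann constant

def planck_nu(nu, T):
    return (2.0 * h * nu**3 / c**2) / np.expm1(h * nu / (k * T))

T = 300.0
# Integrate from 1 Hz (to avoid the 0/0 limit at nu = 0) out past the spectrum's tail.
j_numeric, _ = quad(lambda nu: np.pi * planck_nu(nu, T), 1.0, 1e15,
                    points=[1e12, 1e13, 1e14], limit=200)

print(sigma)                      # roughly 5.670e-08 W m^-2 K^-4
print(j_numeric, sigma * T**4)    # both roughly 459 W/m^2 at 300 K
```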
Applications.
Human-body emission.
The human body radiates energy as infrared light. The net power radiated is the difference between the power emitted and the power absorbed:
formula_32
Applying the Stefan–Boltzmann law,
formula_33
where A and T are the body surface area and temperature, formula_34 is the emissivity, and "T"0 is the ambient temperature.
The total surface area of an adult is about , and the mid- and far-infrared emissivity of skin and most clothing is near unity, as it is for most nonmetallic surfaces. Skin temperature is about 33 °C, but clothing reduces the surface temperature to about 28 °C when the ambient temperature is 20 °C. Hence, the net radiative heat loss is about
formula_35
The total energy radiated in one day is about 8 MJ, or 2000 kcal (food calories). Basal metabolic rate for a 40-year-old male is about 35 kcal/(m2·h), which is equivalent to 1700 kcal per day, assuming the same 2 m2 area. However, the mean metabolic rate of sedentary adults is about 50% to 70% greater than their basal rate.
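The arithmetic behind these figures fits in a few lines; the sketch below (plain Python) uses the approximate surface area, near-unity emissivity, and surface/ambient temperatures quoted above, so the output is only an order-of-magnitude estimate.

```python
# Net radiative loss from a clothed person, P = A * sigma * eps * (T^4 - T0^4), using the figures above.
sigma = 5.670374419e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
A = 2.0                     # body surface area, m^2 (approximate)
eps = 0.98                  # emissivity of skin/clothing, near unity (assumed)
T_surface = 28.0 + 273.15   # clothed surface temperature, K
T_ambient = 20.0 + 273.15   # ambient temperature, K

P_net = A * sigma * eps * (T_surface**4 - T_ambient**4)
print(f"net radiated power = {P_net:.0f} W")             # about 93 W, the order of the ~100 W quoted above
print(f"energy per day = {P_net * 86400 / 1e6:.1f} MJ")  # roughly 8 MJ
```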
There are other important thermal loss mechanisms, including convection and evaporation. Conduction is negligible – the Nusselt number is much greater than unity. Evaporation by perspiration is only required if radiation and convection are insufficient to maintain a steady-state temperature (but evaporation from the lungs occurs regardless). Free-convection rates are comparable, albeit somewhat lower, than radiative rates. Thus, radiation accounts for about two-thirds of thermal energy loss in cool, still air. Given the approximate nature of many of the assumptions, this can only be taken as a crude estimate. Ambient air motion, causing forced convection, or evaporation reduces the relative importance of radiation as a thermal-loss mechanism.
Application of Wien's law to human-body emission results in a peak wavelength of
formula_36
For this reason, thermal imaging devices for human subjects are most sensitive in the 7–14 micrometer range.
Temperature relation between a planet and its star.
The blackbody law may be used to estimate the temperature of a planet orbiting the Sun.
The temperature of a planet depends on several factors:
The analysis only considers the Sun's heat for a planet in a Solar System.
The Stefan–Boltzmann law gives the total power (energy/second) that the Sun emits:
where
The Sun emits that power equally in all directions. Because of this, the planet is hit with only a tiny fraction of it. The power from the Sun that strikes the planet (at the top of the atmosphere) is:
where
Because of its high temperature, the Sun emits to a large extent in the ultraviolet and visible (UV-Vis) frequency range. In this frequency range, the planet reflects a fraction formula_37 of this energy where formula_37 is the albedo or reflectance of the planet in the UV-Vis range. In other words, the planet absorbs a fraction formula_38 of the Sun's light, and reflects the rest. The power absorbed by the planet and its atmosphere is then:
Even though the planet absorbs sunlight only over a circular cross-sectional area formula_39, it emits in all directions over its spherical surface area formula_40. If the planet were a perfect black body, it would emit according to the Stefan–Boltzmann law
where formula_41 is the temperature of the planet. This temperature, calculated for the case of the planet acting as a black body by setting formula_42, is known as the effective temperature. The actual temperature of the planet will likely be different, depending on its surface and atmospheric properties. Ignoring the atmosphere and greenhouse effect, the planet, since it is at a much lower temperature than the Sun, emits mostly in the infrared (IR) portion of the spectrum. In this frequency range, it emits formula_43 of the radiation that a black body would emit where formula_43 is the average emissivity in the IR range. The power emitted by the planet is then:
For a body in radiative exchange equilibrium with its surroundings, the rate at which it emits radiant energy is equal to the rate at which it absorbs it:
Substituting the expressions for solar and planet power in equations 1–6 and simplifying yields the estimated temperature of the planet, ignoring greenhouse effect, "T"P:
In other words, given the assumptions made, the temperature of a planet depends only on the surface temperature of the Sun, the radius of the Sun, the distance between the planet and the Sun, the albedo and the IR emissivity of the planet.
Notice that a gray (flat spectrum) ball where formula_44 comes to the same temperature as a black body, no matter how dark or light gray it is.
Effective temperature of Earth.
Substituting the measured values for the Sun and Earth yields:
With the average emissivity formula_49 set to unity, the effective temperature of the Earth is:
formula_50
or −18.8 °C.
This is the temperature of the Earth if it radiated as a perfect black body in the infrared, assuming an unchanging albedo and ignoring greenhouse effects (which can raise the surface temperature of a body above what it would be if it were a perfect black body in all spectrums). The Earth in fact radiates not quite as a perfect black body in the infrared which will raise the estimated temperature a few degrees above the effective temperature. If we wish to estimate what the temperature of the Earth would be if it had no atmosphere, then we could take the albedo and emissivity of the Moon as a good estimate. The albedo and emissivity of the Moon are about 0.1054 and 0.95 respectively, yielding an estimated temperature of about 1.36 °C.
Estimates of the Earth's average albedo vary in the range 0.3–0.4, resulting in different estimated effective temperatures. Estimates are often based on the solar constant (total insolation power density) rather than the temperature, size, and distance of the Sun. For example, using 0.4 for albedo, and an insolation of 1400 W m−2, one obtains an effective temperature of about 245 K.
Similarly using albedo 0.3 and solar constant of 1372 W m−2, one obtains an effective temperature of 255 K.
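The effective-temperature estimate above can be reproduced directly from the planet-temperature formula; the sketch below (plain Python) uses the quoted solar temperature, radius, distance, and albedo with the IR emissivity set to unity, and lands within about a kelvin of the 254–255 K figures (small differences come from rounding of the inputs).

```python
# Effective temperature of a planet:
# T_p = T_s * sqrt(R_s / (2 D)) * ((1 - albedo) / emissivity) ** 0.25
T_sun = 5772.0          # K
R_sun = 6.957e8         # m
D = 1.496e11            # Earth-Sun distance, m
albedo = 0.309
emissivity = 1.0        # assumed grey-body emissivity in the infrared

T_earth = T_sun * (R_sun / (2 * D)) ** 0.5 * ((1 - albedo) / emissivity) ** 0.25
print(f"effective temperature = {T_earth:.1f} K ({T_earth - 273.15:.1f} deg C)")
# Equivalent route via the solar constant S: T = (S * (1 - albedo) / (4 * sigma)) ** 0.25
```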
Cosmology.
The cosmic microwave background radiation observed today is the most perfect blackbody radiation ever observed in nature, with a temperature of about 2.7 K. It is a "snapshot" of the radiation at the time of decoupling between matter and radiation in the early universe. Prior to this time, most matter in the universe was in the form of an ionized plasma in thermal, though not full thermodynamic, equilibrium with radiation.
According to Kondepudi and Prigogine, at very high temperatures (above 1010 K; such temperatures existed in the very early universe), where the thermal motion separates protons and neutrons in spite of the strong nuclear forces, electron-positron pairs appear and disappear spontaneously and are in thermal equilibrium with electromagnetic radiation. These particles form a part of the black body spectrum, in addition to the electromagnetic radiation.
A black body at room temperature () radiates mostly in the infrared spectrum, which cannot be perceived by the human eye, but can be sensed by some reptiles. As the object increases in temperature to about , the emission spectrum gets stronger and extends into the human visual range, and the object appears dull red. As its temperature increases further, it emits more and more orange, yellow, green, and then blue light (and ultimately beyond violet, ultraviolet).
Light bulb.
Tungsten filament lights have a continuous black body spectrum with a cooler colour temperature, around , which also emits considerable energy in the infrared range. Modern-day fluorescent and LED lights, which are more efficient, do not have a continuous black body emission spectrum; rather, they emit directly or use combinations of phosphors that emit multiple narrow spectra.
History.
In his first memoir, Augustin-Jean Fresnel (1788–1827) responded to a view he extracted from a French translation of Isaac Newton's "Optics". He says that Newton imagined particles of light traversing space uninhibited by the caloric medium filling it, and refutes this view (never actually held by Newton) by saying that a black body under illumination would increase indefinitely in heat.
Balfour Stewart.
In 1858, Balfour Stewart described his experiments on the thermal radiative emissive and absorptive powers of polished plates of various substances, compared with the powers of lamp-black surfaces, at the same temperature. Stewart chose lamp-black surfaces as his reference because of various previous experimental findings, especially those of Pierre Prevost and of John Leslie. He wrote, "Lamp-black, which absorbs all the rays that fall upon it, and therefore possesses the greatest possible absorbing power, will possess also the greatest possible radiating power." Stewart's statement assumed a general principle: that there exists a body or surface that has the greatest possible absorbing and radiative power for every wavelength and equilibrium temperature.
Stewart was concerned with selective thermal radiation, which he investigated using plates which selectively radiated and absorbed different wavelengths. He discussed the experiments in terms of rays which could be reflected and refracted, and which obeyed the Stokes-Helmholtz reciprocity principle. His research did not consider that properties of rays are dependent on wavelength, and he did not use tools such as prisms or diffraction gratings. His work was quantitative within these constraints. He made his measurements in a room temperature environment, and quickly so as to catch his bodies in a condition near the thermal equilibrium in which they had been prepared.
Gustav Kirchhoff.
In 1859, Gustav Robert Kirchhoff reported the coincidence of the wavelengths of spectrally resolved lines of absorption and emission of visible light. Importantly for thermal physics, he also observed that bright lines or dark lines were apparent depending on the temperature difference between emitter and absorber.
Kirchhoff then went on to consider some bodies that emit and absorb heat radiation, in an opaque enclosure or cavity, in equilibrium at a temperature "T".
Here is used a notation different from Kirchhoff's. Here, the emitting power "E"("T", "i") denotes a dimensioned quantity, the total radiation emitted by a body labeled by index "i" at temperature "T". The total absorption ratio "a"("T", "i") of that body is dimensionless, the ratio of absorbed to incident radiation in the cavity at temperature "T" . (In contrast with Balfour Stewart's, Kirchhoff's definition of his absorption ratio did not refer in particular to a lamp-black surface as the source of the incident radiation.) Thus the ratio "E"("T", "i") / "a"("T", "i") of emitting power to absorptivity is a dimensioned quantity, with the dimensions of emitting power, because "a"("T", "i") is dimensionless. Also here the wavelength-specific emitting power of the body at temperature "T" is denoted by "E"("λ", "T", "i") and the wavelength-specific absorption ratio by "a"("λ", "T", "i") . Again, the ratio "E"("λ", "T", "i") / "a"("λ", "T", "i") of emitting power to absorptivity is a dimensioned quantity, with the dimensions of emitting power.
In a second report made in 1859, Kirchhoff announced a new general principle or law for which he offered a theoretical and mathematical proof, though he did not offer quantitative measurements of radiation powers. His theoretical proof was and still is considered by some writers to be invalid. His principle, however, has endured: it was that for heat rays of the same wavelength, in equilibrium at a given temperature, the wavelength-specific ratio of emitting power to absorptivity has one and the same common value for all bodies that emit and absorb at that wavelength. In symbols, the law stated that the wavelength-specific ratio "E"("λ", "T", "i") / "a"("λ", "T", "i") has one and the same value for all bodies. In this report there was no mention of black bodies.
In 1860, still not knowing of Stewart's measurements for selected qualities of radiation, Kirchhoff pointed out that it was long established experimentally that for total heat radiation emitted and absorbed by a body in equilibrium, the dimensioned total radiation ratio "E"("T", "i") / "a"("T", "i") has one and the same value common to all bodies. Again without measurements of radiative powers or other new experimental data, Kirchhoff then offered a fresh theoretical proof of his new principle of the universality of the value of the wavelength-specific ratio "E"("λ", "T", "i") / "a"("λ", "T", "i") at thermal equilibrium. His fresh theoretical proof was and still is considered by some writers to be invalid.
But more importantly, it relied on a new theoretical postulate of "perfectly black bodies," which is the reason why one speaks of Kirchhoff's law. Such black bodies showed complete absorption in their infinitely thin most superficial surface. They correspond to Balfour Stewart's reference bodies, with internal radiation, coated with lamp-black. They were not the more realistic perfectly black bodies later considered by Planck. Planck's black bodies radiated and absorbed only by the material in their interiors; their interfaces with contiguous media were only mathematical surfaces, capable neither of absorption nor emission, but only of reflecting and transmitting with refraction.
Kirchhoff's proof considered an arbitrary non-ideal body labeled "i" as well as various perfect black bodies labeled BB. It required that the bodies be kept in a cavity in thermal equilibrium at temperature "T". His proof intended to show that the ratio "E"("λ", "T", "i") / "a"("λ", "T", "i") was independent of the nature "i" of the non-ideal body, however partly transparent or partly reflective it was.
His proof first argued that for wavelength "λ" and at temperature "T", at thermal equilibrium, all perfectly black bodies of the same size and shape have the one and the same common value of emissive power "E"("λ", "T", BB), with the dimensions of power. His proof noted that the dimensionless wavelength-specific absorptivity "a"("λ", "T", BB) of a perfectly black body is by definition exactly 1. Then for a perfectly black body, the wavelength-specific ratio of emissive power to absorptivity "E"("λ", "T", BB) / "a"("λ", "T", BB) is again just "E"("λ", "T", BB), with the dimensions of power. Kirchhoff considered thermal equilibrium with the arbitrary non-ideal body, and with a perfectly black body of the same size and shape, in place in his cavity in equilibrium at temperature "T". He argued that the flows of heat radiation must be the same in each case. Thus he argued that at thermal equilibrium the ratio "E"("λ", "T", "i") / "a"("λ", "T", "i") was equal to "E"("λ", "T", BB), which may now be denoted "B""λ" ("λ", "T"). "B""λ" ("λ", "T") is a continuous function, dependent only on "λ" at fixed temperature "T", and an increasing function of "T" at fixed wavelength "λ". It vanishes at low temperatures for visible wavelengths, which does not depend on the nature "i" of the arbitrary non-ideal body (Geometrical factors, taken into detailed account by Kirchhoff, have been ignored in the foregoing).
Thus Kirchhoff's law of thermal radiation can be stated: "For any material at all, radiating and absorbing in thermodynamic equilibrium at any given temperature T, for every wavelength λ, the ratio of emissive power to absorptivity has one universal value, which is characteristic of a perfect black body, and is an emissive power which we here represent by Bλ (λ, T)." (For our notation "B""λ" ("λ", "T"), Kirchhoff's original notation was simply "e".)
Kirchhoff announced that the determination of the function "B""λ" ("λ", "T") was a problem of the highest importance, though he recognized that there would be experimental difficulties to be overcome. He supposed that like other functions that do not depend on the properties of individual bodies, it would be a simple function. Occasionally by historians that function "B""λ" ("λ", "T") has been called "Kirchhoff's (emission, universal) function," though its precise mathematical form would not be known for another forty years, till it was discovered by Planck in 1900. The theoretical proof for Kirchhoff's universality principle was worked on and debated by various physicists over the same time, and later. Kirchhoff stated later in 1860 that his theoretical proof was better than Balfour Stewart's, and in some respects it was so. Kirchhoff's 1860 paper did not mention the second law of thermodynamics, and of course did not mention the concept of entropy which had not at that time been established. In a more considered account in a book in 1862, Kirchhoff mentioned the connection of his law with Carnot's principle, which is a form of the second law.
According to Helge Kragh, "Quantum theory owes its origin to the study of thermal radiation, in particular to the "blackbody" radiation that Robert Kirchhoff had first defined in 1859–1860."
Doppler effect.
The relativistic Doppler effect causes a shift in the frequency "f" of light originating from a source that is moving in relation to the observer, so that the wave is observed to have frequency "f"':
formula_51
where "v" is the velocity of the source in the observer's rest frame, "θ" is the angle between the velocity vector and the observer-source direction measured in the reference frame of the source, and "c" is the speed of light. This can be simplified for the special cases of objects moving directly towards ("θ" = π) or away ("θ" = 0) from the observer, and for speeds much less than "c".
Through Planck's law the temperature spectrum of a black body is proportionally related to the frequency of light and one may substitute the temperature ("T") for the frequency in this equation.
For the case of a source moving directly towards or away from the observer, this reduces to
formula_52
Here "v" > 0 indicates a receding source, and "v" < 0 indicates an approaching source.
This is an important effect in astronomy, where the velocities of stars and galaxies can reach significant fractions of "c". An example is found in the cosmic microwave background radiation, which exhibits a dipole anisotropy from the Earth's motion relative to this blackbody radiation field.
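For purely radial motion, the temperature shift is easy to evaluate; the sketch below (plain Python) applies the formula above to the cosmic microwave background, using a commonly quoted ~370 km/s for the solar system's speed relative to the CMB (that speed is an assumption here, not taken from this article).

```python
# Relativistic Doppler shift of a blackbody temperature for purely radial motion.
import math

c = 2.99792458e8   # speed of light, m/s

def doppler_temperature(T, v):
    """Observed temperature; v > 0 for a receding source, v < 0 for an approaching one."""
    return T * math.sqrt((c - v) / (c + v))

T_cmb = 2.725      # K
v = 370e3          # m/s, approximate solar-system speed relative to the CMB (assumed)
print(doppler_temperature(T_cmb, -v) - T_cmb)   # about +3.4 mK warmer toward the motion
print(doppler_temperature(T_cmb, +v) - T_cmb)   # about -3.4 mK cooler in the opposite direction
```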
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "h \\nu"
},
{
"math_id": 1,
"text": "nh \\nu"
},
{
"math_id": 2,
"text": "B_\\nu(T) = \\frac{2h\\nu^3}{c^2}\\frac{1}{e^{h\\nu/kT} - 1},"
},
{
"math_id": 3,
"text": "\\nu"
},
{
"math_id": 4,
"text": "\\theta"
},
{
"math_id": 5,
"text": " B_\\nu(T) \\cos \\theta"
},
{
"math_id": 6,
"text": " 1/\\cos \\theta"
},
{
"math_id": 7,
"text": "B_\\nu(T,E)"
},
{
"math_id": 8,
"text": "b_\\nu(T,E)"
},
{
"math_id": 9,
"text": "B_\\nu(T,E) = Eb_\\nu(T,E)"
},
{
"math_id": 10,
"text": "\\lambda_\\text{peak}"
},
{
"math_id": 11,
"text": "\\lambda_\\text{peak} = \\frac{b}{T},"
},
{
"math_id": 12,
"text": "\\frac{hc}{k} \\frac{1}{5 + W_0(-5e^{-5})} \\approx "
},
{
"math_id": 13,
"text": "W_0"
},
{
"math_id": 14,
"text": "\\nu_\\text{peak} = T \\times 5.879... \\times 10^{10} \\ \\mathrm{Hz}/\\mathrm{K}."
},
{
"math_id": 15,
"text": "e^x(1-x/3) = 1"
},
{
"math_id": 16,
"text": "x = h\\nu / kT"
},
{
"math_id": 17,
"text": "x \\approx 2.82"
},
{
"math_id": 18,
"text": "B_\\nu(T)\\cos(\\theta)"
},
{
"math_id": 19,
"text": "L"
},
{
"math_id": 20,
"text": " L = \\frac{2\\pi^5}{15} \\frac{k^4 T^4}{c^2 h^3} \\frac{1}{\\pi} = \\sigma T^4 \\frac{\\cos(\\theta)}{\\pi}"
},
{
"math_id": 21,
"text": "\\int_0^\\infty dx\\, \\frac{x^3}{e^x - 1} = \\frac{\\pi^4}{15}"
},
{
"math_id": 22,
"text": "x \\equiv \\frac{h\\nu}{k T} "
},
{
"math_id": 23,
"text": " \\sigma \\equiv \\frac{2\\pi^5}{15} \\frac{k^4}{c^2h^3} = 5.670373 \\times 10^{-8} \\mathrm{\\frac{W}{m^2 K^4}}"
},
{
"math_id": 24,
"text": "dI"
},
{
"math_id": 25,
"text": "dA"
},
{
"math_id": 26,
"text": " dI = \\sigma T^4 \\frac{\\cos\\theta}{\\pi d^2} dA"
},
{
"math_id": 27,
"text": "\\Omega"
},
{
"math_id": 28,
"text": "2\\pi"
},
{
"math_id": 29,
"text": "\\pi/2"
},
{
"math_id": 30,
"text": "j^\\star = \\sigma T^4,"
},
{
"math_id": 31,
"text": "\\int \\cos\\theta\\, d\\Omega = \\int_0^{2\\pi} \\int_0^{\\pi/2} \\cos\\theta\\sin\\theta \\,d\\theta\\,d\\phi= \\pi."
},
{
"math_id": 32,
"text": "P_\\text{net} = P_\\text{emit} - P_\\text{absorb}."
},
{
"math_id": 33,
"text": "P_\\text{net} = A \\sigma \\varepsilon \\left( T^4 - T_0^4 \\right),"
},
{
"math_id": 34,
"text": "\\varepsilon"
},
{
"math_id": 35,
"text": "P_\\text{net} = P_\\text{emit} - P_\\text{absorb} = \\mathrm{100 ~ W}."
},
{
"math_id": 36,
"text": "\\lambda_\\text{peak} = \\mathrm{\\frac{2.898 \\times 10^{-3} ~ K \\cdot m}{305 ~ K}} = \\mathrm{9.50~\\mu m}."
},
{
"math_id": 37,
"text": "\\alpha"
},
{
"math_id": 38,
"text": "1-\\alpha"
},
{
"math_id": 39,
"text": "\\pi R^2"
},
{
"math_id": 40,
"text": "4 \\pi R^2"
},
{
"math_id": 41,
"text": "T_{\\rm E} "
},
{
"math_id": 42,
"text": "P_{\\rm abs} = P_{\\rm emt\\,bb}"
},
{
"math_id": 43,
"text": "\\overline{\\epsilon}"
},
{
"math_id": 44,
"text": " (1 - \\alpha) = \\overline{\\varepsilon} "
},
{
"math_id": 45,
"text": "T_{\\rm S} = 5772 \\ \\mathrm{K},"
},
{
"math_id": 46,
"text": "R_{\\rm S} = 6.957 \\times 10^8 \\ \\mathrm{m},"
},
{
"math_id": 47,
"text": "D = 1.496 \\times 10^{11} \\ \\mathrm{m},"
},
{
"math_id": 48,
"text": "\\alpha = 0.309 \\ "
},
{
"math_id": 49,
"text": " \\overline{\\varepsilon} "
},
{
"math_id": 50,
"text": "T_{\\rm E} = 254.356\\ \\mathrm{K}"
},
{
"math_id": 51,
"text": "f' = f \\frac{1 - \\frac{v}{c} \\cos \\theta}{\\sqrt{1-v^2/c^2}}, "
},
{
"math_id": 52,
"text": "T' = T \\sqrt{\\frac{c-v}{c+v}}."
}
]
| https://en.wikipedia.org/wiki?curid=726748 |