id | title | text | formulas | url
---|---|---|---|---|
1114864 | Critical focus | Photograph area
In a photograph, the area of critical focus is the portion of the picture that is optically in focus. This is distinct from depth of field, which describes apparent sharpness.
Reducing the size of the aperture will increase the depth of field, but the plane of critical focus will not change. Depth of field extends away from the plane of critical focus.
The image is only critically in focus within a plane.
The formula that describes the relationship between plane of sharpness, lens and film is
formula_0,
where formula_1 is the film to lens distance, formula_2 is the distance from the lens to the plane of critical focus (in the object space), and formula_3 is the focal length of the lens.
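As a rough numerical illustration (not part of the article), the relation can be rearranged to locate the plane of critical focus from the focal length and the film-to-lens distance; the function name and sample values below are assumptions chosen only for the example.

```python
# Illustrative sketch: solving the thin-lens relation 1/I + 1/O = 1/F for the
# plane of critical focus O, given the focal length F and the film-to-lens
# distance I. Units are arbitrary but must be consistent.

def critical_focus_distance(film_to_lens: float, focal_length: float) -> float:
    """Return O from 1/I + 1/O = 1/F, i.e. O = 1 / (1/F - 1/I)."""
    return 1.0 / (1.0 / focal_length - 1.0 / film_to_lens)

if __name__ == "__main__":
    F = 50.0   # focal length, e.g. millimetres (assumed example value)
    I = 52.0   # film-to-lens distance slightly greater than F
    O = critical_focus_distance(I, F)
    print(f"Plane of critical focus at O = {O:.1f} (same units as F and I)")
    # Check the relation: 1/I + 1/O should equal 1/F
    assert abs(1.0 / I + 1.0 / O - 1.0 / F) < 1e-12
```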
'Critical Focus' is also the title of a regular column by Brian J. Ford in the American magazine "The Microscope".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "1/I+1/O = 1/F"
},
{
"math_id": 1,
"text": "I"
},
{
"math_id": 2,
"text": "O"
},
{
"math_id": 3,
"text": "F"
}
] | https://en.wikipedia.org/wiki?curid=1114864 |
11148881 | Schlick's approximation | In 3D computer graphics, Schlick’s approximation, named after Christophe Schlick, is a formula for approximating the contribution of the Fresnel factor in the specular reflection of light from a non-conducting interface (surface) between two media.
According to Schlick’s model, the specular reflection coefficient "R" can be approximated by:
formula_0 where formula_1
where formula_2 is the angle between the direction from which the incident light is coming and the normal of the interface between the two media, hence formula_3. Here formula_4 are the indices of refraction of the two media at the interface, and formula_5 is the reflection coefficient for light incoming parallel to the normal (i.e., the value of the Fresnel term when formula_6, where reflection is minimal). In computer graphics, one of the media is usually air, meaning that formula_7 can be approximated as 1 to good accuracy.
In microfacet models it is assumed that there is always a perfect reflection, but the normal changes according to a certain distribution, resulting in a non-perfect overall reflection. When using Schlick’s approximation, the normal in the above computation is replaced by the halfway vector. Either the viewing or light direction can be used as the second vector.
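A minimal sketch of the approximation described above, assuming an air-to-glass interface; the function names and sample angles are illustrative, not taken from the article.

```python
# Sketch of Schlick's approximation; names and example values are illustrative.
import math

def schlick_r0(n1: float, n2: float) -> float:
    """Reflectance at normal incidence: R0 = ((n1 - n2) / (n1 + n2))**2."""
    return ((n1 - n2) / (n1 + n2)) ** 2

def schlick_reflectance(cos_theta: float, r0: float) -> float:
    """R(theta) = R0 + (1 - R0) * (1 - cos(theta))**5."""
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

if __name__ == "__main__":
    r0 = schlick_r0(1.0, 1.5)            # air to glass: R0 is about 0.04
    for deg in (0, 30, 60, 80):
        # cos(theta) = dot(N, V) for unit vectors; in a microfacet model the
        # normal N is replaced by the halfway vector, as noted above.
        c = math.cos(math.radians(deg))
        print(f"theta = {deg:2d} deg  R = {schlick_reflectance(c, r0):.4f}")
```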
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " R(\\theta) = R_0 + (1 - R_0)(1 - \\cos \\theta)^5 "
},
{
"math_id": 1,
"text": " R_0 = \\left(\\frac{n_1-n_2}{n_1+n_2}\\right)^2"
},
{
"math_id": 2,
"text": "\\theta"
},
{
"math_id": 3,
"text": "\\cos\\theta=(N\\cdot V)"
},
{
"math_id": 4,
"text": "n_1,\\,n_2"
},
{
"math_id": 5,
"text": "R_0"
},
{
"math_id": 6,
"text": "\\theta = 0"
},
{
"math_id": 7,
"text": "n_1"
}
] | https://en.wikipedia.org/wiki?curid=11148881 |
11149 | Fresnel equations | Equations of light transmission and reflection
The Fresnel equations (or Fresnel coefficients) describe the reflection and transmission of light (or electromagnetic radiation in general) when incident on an interface between different optical media. They were deduced by French engineer and physicist Augustin-Jean Fresnel, who was the first to understand that light is a transverse wave, at a time when no one realized that the waves were electric and magnetic fields. For the first time, polarization could be understood quantitatively, as Fresnel's equations correctly predicted the differing behaviour of waves of the "s" and "p" polarizations incident upon a material interface.
Overview.
When light strikes the interface between a medium with refractive index "n"1 and a second medium with refractive index "n"2, both reflection and refraction of the light may occur. The Fresnel equations give the ratio of the "reflected" wave's electric field to the incident wave's electric field, and the ratio of the "transmitted" wave's electric field to the incident wave's electric field, for each of two components of polarization. (The "magnetic" fields can also be related using similar coefficients.) These ratios are generally complex, describing not only the relative amplitudes but also the phase shifts at the interface.
The equations assume the interface between the media is flat and that the media are homogeneous and isotropic. The incident light is assumed to be a plane wave, which is sufficient to solve any problem since any incident light field can be decomposed into plane waves and polarizations.
S and P polarizations.
There are two sets of Fresnel coefficients for two different linear polarization components of the incident wave. Since any polarization state can be resolved into a combination of two orthogonal linear polarizations, this is sufficient for any problem. Likewise, unpolarized (or "randomly polarized") light has an equal amount of power in each of two linear polarizations.
The s polarization refers to polarization of a wave's electric field "normal" to the plane of incidence (the z direction in the derivation below); then the magnetic field is "in" the plane of incidence. The p polarization refers to polarization of the electric field "in" the plane of incidence (the xy plane in the derivation below); then the magnetic field is "normal" to the plane of incidence. The names "s" and "p" for the polarization components refer to German "senkrecht" (perpendicular or normal) and "parallel" (parallel to the plane of incidence).
Although the reflection and transmission are dependent on polarization, at normal incidence ("θ" = 0) there is no distinction between them so all polarization states are governed by a single set of Fresnel coefficients (and another special case is mentioned below in which that is true).
Configuration.
In the diagram on the right, an incident plane wave in the direction of the ray IO strikes the interface between two media of refractive indices "n"1 and "n"2 at point O. Part of the wave is reflected in the direction OR, and part refracted in the direction OT. The angles that the incident, reflected and refracted rays make to the normal of the interface are given as "θ"i, "θ"r and "θ"t, respectively.
The relationship between these angles is given by the law of reflection:
formula_0
and Snell's law:
formula_1
The behavior of light striking the interface is explained by considering the electric and magnetic fields that constitute an electromagnetic wave, and the laws of electromagnetism, as shown below. The ratios of the waves' electric field (or magnetic field) amplitudes are obtained, but in practice one is more often interested in formulae which determine "power" coefficients, since power (or irradiance) is what can be directly measured at optical frequencies. The power of a wave is generally proportional to the square of the electric (or magnetic) field amplitude.
Power (intensity) reflection and transmission coefficients.
We call the fraction of the incident power that is reflected from the interface the "reflectance" (or reflectivity, or power reflection coefficient) "R", and the fraction that is refracted into the second medium is called the "transmittance" (or transmissivity, or power transmission coefficient) "T". Note that these are what would be measured right "at" each side of an interface and do not account for attenuation of a wave in an absorbing medium "following" transmission or reflection.
The reflectance for s-polarized light is
formula_2
while the reflectance for p-polarized light is
formula_3
where "Z"1 and "Z"2 are the wave impedances of media 1 and 2, respectively.
We assume that the media are non-magnetic (i.e., "μ"1 = "μ"2 = "μ"0), which is typically a good approximation at optical frequencies (and for transparent media at other frequencies). Then the wave impedances are determined solely by the refractive indices "n"1 and "n"2:
formula_4
where "Z"0 is the impedance of free space and "i" = 1, 2. Making this substitution, we obtain equations using the refractive indices:
formula_5
formula_6
The second form of each equation is derived from the first by eliminating "θ"t using Snell's law and trigonometric identities.
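As a hedged numerical sketch (not part of the article), the second forms above can be evaluated directly; the function name and the air-to-glass sample values are assumptions for illustration.

```python
# Sketch: power reflectances R_s and R_p from the index forms above, with
# cos(theta_t) eliminated via Snell's law. Illustrative only.
import cmath
from math import radians

def fresnel_reflectances(n1: float, n2: float, theta_i: float):
    """Return (R_s, R_p) for incidence angle theta_i in radians, going n1 -> n2."""
    cos_i = cmath.cos(theta_i)
    # cos(theta_t) from Snell's law; it becomes imaginary beyond the critical
    # angle (when n1 > n2), in which case |r| = 1 and all power is reflected.
    cos_t = cmath.sqrt(1 - (n1 / n2 * cmath.sin(theta_i)) ** 2)
    R_s = abs((n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)) ** 2
    R_p = abs((n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i)) ** 2
    return R_s, R_p

if __name__ == "__main__":
    for deg in (0, 45, 56.3, 80):
        Rs, Rp = fresnel_reflectances(1.0, 1.5, radians(deg))
        print(f"theta_i = {deg:5.1f} deg   R_s = {Rs:.4f}   R_p = {Rp:.4f}")
```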
As a consequence of conservation of energy, one can find the transmitted power (or more correctly, irradiance: power per unit area) simply as the portion of the incident power that isn't reflected:
formula_7
and
formula_8
Note that all such intensities are measured in terms of a wave's irradiance in the direction normal to the interface; this is also what is measured in typical experiments. That number could be obtained from irradiances "in the direction of an incident or reflected wave" (given by the magnitude of a wave's Poynting vector) multiplied by cos "θ" for a wave at an angle "θ" to the normal direction (or equivalently, taking the dot product of the Poynting vector with the unit vector normal to the interface). This complication can be ignored in the case of the reflection coefficient, since cos "θ"i = cos "θ"r , so that the ratio of reflected to incident irradiance in the wave's direction is the same as in the direction normal to the interface.
Although these relationships describe the basic physics, in many practical applications one is concerned with "natural light" that can be described as unpolarized. That means that there is an equal amount of power in the "s" and "p" polarizations, so that the "effective" reflectivity of the material is just the average of the two reflectivities:
formula_9
For low-precision applications involving unpolarized light, such as computer graphics, rather than rigorously computing the effective reflection coefficient for each angle, Schlick's approximation is often used.
Special cases.
Normal incidence.
For the case of normal incidence, "θ"i = "θ"t = 0, and there is no distinction between s and p polarization. Thus, the reflectance simplifies to
formula_10
For common glass ("n"2 ≈ 1.5) surrounded by air ("n"1 = 1), the power reflectance at normal incidence can be seen to be about 4%, or 8% accounting for both sides of a glass pane.
Brewster's angle.
At a dielectric interface from "n"1 to "n"2, there is a particular angle of incidence at which "R"p goes to zero and a p-polarised incident wave is purely refracted, thus all reflected light is s-polarised. This angle is known as Brewster's angle, and is around 56° for "n"1 = 1 and "n"2 = 1.5 (typical glass).
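A quick check of the quoted value, assuming the standard relation tan "θ"B = "n"2/"n"1 for Brewster's angle:

```python
# Sketch: Brewster's angle for n1 = 1 (air) and n2 = 1.5 (glass).
import math
theta_B = math.degrees(math.atan(1.5 / 1.0))
print(f"Brewster's angle: {theta_B:.1f} degrees")   # about 56 degrees
```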
Total internal reflection.
When light travelling in a denser medium strikes the surface of a less dense medium (i.e., "n"1 > "n"2), beyond a particular incidence angle known as the "critical angle", all light is reflected and "R"s = "R"p = 1. This phenomenon, known as total internal reflection, occurs at incidence angles for which Snell's law predicts that the sine of the angle of refraction would exceed unity (whereas in fact sin "θ" ≤ 1 for all real "θ"). For glass with "n" = 1.5 surrounded by air, the critical angle is approximately 42°.
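A quick check of the quoted value, assuming the standard relation sin "θ"c = "n"2/"n"1 for the critical angle:

```python
# Sketch: critical angle for glass (n1 = 1.5) against air (n2 = 1).
import math
theta_c = math.degrees(math.asin(1.0 / 1.5))
print(f"Critical angle: {theta_c:.1f} degrees")   # about 42 degrees
```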
45° incidence.
Reflection at 45° incidence is very commonly used for making 90° turns. For the case of light traversing from a less dense medium into a denser one at 45° incidence ("θ" = 45°), it follows algebraically from the above equations that "R"p equals the square of "R"s:
formula_11
This can be used to either verify the consistency of the measurements of "R"s and "R"p, or to derive one of them when the other is known. This relationship is only valid for the simple case of a single plane interface between two homogeneous materials, not for films on substrates, where a more complex analysis is required.
Measurements of "R"s and "R"p at 45° can be used to estimate the reflectivity at normal incidence. The "average of averages" obtained by calculating first the arithmetic as well as the geometric average of "R"s and "R"p, and then averaging these two averages again arithmetically, gives a value for "R"0 with an error of less than about 3% for most common optical materials. This is useful because measurements at normal incidence can be difficult to achieve in an experimental setup since the incoming beam and the detector will obstruct each other. However, since the dependence of "R"s and "R"p on the angle of incidence for angles below 10° is very small, a measurement at about 5° will usually be a good approximation for normal incidence, while allowing for a separation of the incoming and reflected beam.
Complex amplitude reflection and transmission coefficients.
The above equations relating powers (which could be measured with a photometer for instance) are derived from the Fresnel equations which solve the physical problem in terms of electromagnetic field complex amplitudes, i.e., considering phase shifts in addition to their amplitudes. Those underlying equations supply generally complex-valued ratios of those EM fields and may take several different forms, depending on the formalism used. The complex amplitude coefficients for reflection and transmission are usually represented by lower case "r" and "t" (whereas the power coefficients are capitalized). As before, we are assuming the magnetic permeability, "µ" of both media to be equal to the permeability of free space "µ"0 as is essentially true of all dielectrics at optical frequencies.
In the following equations and graphs, we adopt the following conventions. For "s" polarization, the reflection coefficient "r" is defined as the ratio of the reflected wave's complex electric field amplitude to that of the incident wave, whereas for "p" polarization "r" is the ratio of the waves' complex "magnetic" field amplitudes (or equivalently, the "negative" of the ratio of their electric field amplitudes). The transmission coefficient "t" is the ratio of the transmitted wave's complex electric field amplitude to that of the incident wave, for either polarization. The coefficients "r" and "t" are generally different between the "s" and "p" polarizations, and even at normal incidence (where the designations "s" and "p" do not even apply!) the sign of "r" is reversed depending on whether the wave is considered to be "s" or "p" polarized, an artifact of the adopted sign convention (see graph for an air-glass interface at 0° incidence).
The equations consider a plane wave incident on a plane interface at angle of incidence formula_12, a wave reflected at angle formula_13, and a wave transmitted at angle formula_14. In the case of an interface into an absorbing material (where "n" is complex) or total internal reflection, the angle of transmission does not generally evaluate to a real number. In that case, however, meaningful results can be obtained using formulations of these relationships in which trigonometric functions and geometric angles are avoided; the inhomogeneous waves launched into the second medium cannot be described using a single propagation angle.
Using this convention,
formula_15
One can see that "t"s = "r"s + 1 and "t"p = "r"p + 1. One can write very similar equations applying to the ratio of the waves' magnetic fields, but comparison of the electric fields is more conventional.
Because the reflected and incident waves propagate in the same medium and make the same angle with the normal to the surface, the power reflection coefficient "R" is just the squared magnitude of "r":
formula_16
On the other hand, calculation of the power transmission coefficient T is less straightforward, since the light travels in different directions in the two media. What's more, the wave impedances in the two media differ; power (irradiance) is given by the square of the electric field amplitude "divided by" the characteristic impedance of the medium (or by the square of the magnetic field "multiplied by" the characteristic impedance). This results in:
formula_17
using the above definition of "t". The introduced factor of "n"2/"n"1 is the reciprocal of the ratio of the media's wave impedances. The cos("θ") factors adjust the waves' powers so they are reckoned "in the direction" normal to the interface, for both the incident and transmitted waves, so that full power transmission corresponds to "T" = 1.
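As a sketch (not part of the article), energy conservation "R" + "T" = 1 can be checked numerically from the amplitude coefficients above together with this factor, for a lossless non-magnetic interface below the critical angle; the sample indices are illustrative.

```python
# Sketch: check R + T = 1 from the amplitude coefficients r, t and the factor
# (n2 cos(theta_t)) / (n1 cos(theta_i)) quoted above. Lossless media assumed.
import math

def fresnel_amplitudes(n1, n2, theta_i):
    cos_i = math.cos(theta_i)
    cos_t = math.sqrt(1 - (n1 / n2 * math.sin(theta_i)) ** 2)
    r_s = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
    t_s = 2 * n1 * cos_i / (n1 * cos_i + n2 * cos_t)
    r_p = (n2 * cos_i - n1 * cos_t) / (n2 * cos_i + n1 * cos_t)
    t_p = 2 * n1 * cos_i / (n2 * cos_i + n1 * cos_t)
    return r_s, t_s, r_p, t_p, cos_i, cos_t

n1, n2 = 1.0, 1.5
for deg in (10, 30, 60):
    r_s, t_s, r_p, t_p, cos_i, cos_t = fresnel_amplitudes(n1, n2, math.radians(deg))
    factor = (n2 * cos_t) / (n1 * cos_i)
    assert abs(r_s ** 2 + factor * t_s ** 2 - 1) < 1e-12
    assert abs(r_p ** 2 + factor * t_p ** 2 - 1) < 1e-12
print("R + T = 1 holds for both polarizations at the sampled angles")
```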
In the case of total internal reflection where the power transmission T is zero, t nevertheless describes the electric field (including its phase) just beyond the interface. This is an evanescent field which does not propagate as a wave (thus "T" = 0) but has nonzero values very close to the interface. The phase shift of the reflected wave on total internal reflection can similarly be obtained from the phase angles of "r"p and "r"s (whose magnitudes are unity in this case). These phase shifts are different for "s" and "p" waves, which is the well-known principle by which total internal reflection is used to effect polarization transformations.
Alternative forms.
In the above formula for "r"s, if we put formula_18 (Snell's law) and multiply the numerator and denominator by (1/"n"1) sin "θ"t, we obtain
formula_19
If we do likewise with the formula for "r"p, the result is easily shown to be equivalent to
formula_20
These formulas are known respectively as "Fresnel's sine law" and "Fresnel's tangent law". Although at normal incidence these expressions reduce to 0/0, one can see that they yield the correct results in the limit as "θ"i → 0.
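A brief numerical check (illustrative, not from the article) that the sine and tangent laws agree with the index forms of "r"s and "r"p given above:

```python
# Sketch: compare the sine/tangent laws with the index forms of r_s and r_p
# for an arbitrary sample case (n1 = 1, n2 = 1.5, 30 degree incidence).
import math

n1, n2 = 1.0, 1.5
ti = math.radians(30.0)
tt = math.asin(n1 / n2 * math.sin(ti))          # Snell's law

r_s_index = (n1 * math.cos(ti) - n2 * math.cos(tt)) / (n1 * math.cos(ti) + n2 * math.cos(tt))
r_p_index = (n2 * math.cos(ti) - n1 * math.cos(tt)) / (n2 * math.cos(ti) + n1 * math.cos(tt))
r_s_sine = -math.sin(ti - tt) / math.sin(ti + tt)
r_p_tangent = math.tan(ti - tt) / math.tan(ti + tt)

assert abs(r_s_index - r_s_sine) < 1e-12
assert abs(r_p_index - r_p_tangent) < 1e-12
print("sine and tangent laws agree with the index forms")
```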
Multiple surfaces.
When light makes multiple reflections between two or more parallel surfaces, the multiple beams of light generally interfere with one another, resulting in net transmission and reflection amplitudes that depend on the light's wavelength. The interference, however, is seen only when the surfaces are at distances comparable to or smaller than the light's coherence length, which for ordinary white light is a few micrometers; it can be much larger for light from a laser.
An example of interference between reflections is the iridescent colours seen in a soap bubble or in thin oil films on water. Applications include Fabry–Pérot interferometers, antireflection coatings, and optical filters. A quantitative analysis of these effects is based on the Fresnel equations, but with additional calculations to account for interference.
The transfer-matrix method, or the recursive Rouard method can be used to solve multiple-surface problems.
History.
In 1808, Étienne-Louis Malus discovered that when a ray of light was reflected off a non-metallic surface at the appropriate angle, it behaved like "one" of the two rays emerging from a doubly-refractive calcite crystal. He later coined the term "polarization" to describe this behavior. In 1815, the dependence of the polarizing angle on the refractive index was determined experimentally by David Brewster. But the "reason" for that dependence was such a deep mystery that in late 1817, Thomas Young was moved to write:
<templatestyles src="Template:Blockquote/styles.css" />
In 1821, however, Augustin-Jean Fresnel derived results equivalent to his sine and tangent laws (above), by modeling light waves as transverse elastic waves with vibrations perpendicular to what had previously been called the plane of polarization. Fresnel promptly confirmed by experiment that the equations correctly predicted the direction of polarization of the reflected beam when the incident beam was polarized at 45° to the plane of incidence, for light incident from air onto glass or water; in particular, the equations gave the correct polarization at Brewster's angle. The experimental confirmation was reported in a "postscript" to the work in which Fresnel first revealed his theory that light waves, including "unpolarized" waves, were "purely" transverse.
Details of Fresnel's derivation, including the modern forms of the sine law and tangent law, were given later, in a memoir read to the French Academy of Sciences in January 1823. That derivation combined conservation of energy with continuity of the "tangential" vibration at the interface, but failed to allow for any condition on the "normal" component of vibration. The first derivation from "electromagnetic" principles was given by Hendrik Lorentz in 1875.
In the same memoir of January 1823, Fresnel found that for angles of incidence greater than the critical angle, his formulas for the reflection coefficients ("r"s and "r"p) gave complex values with unit magnitudes. Noting that the magnitude, as usual, represented the ratio of peak amplitudes, he guessed that the argument represented the phase shift, and verified the hypothesis experimentally. The verification involved
Thus he finally had a quantitative theory for what we now call the "Fresnel rhomb" — a device that he had been using in experiments, in one form or another, since 1817 (see "Fresnel rhomb § History").
The success of the complex reflection coefficient inspired James MacCullagh and Augustin-Louis Cauchy, beginning in 1836, to analyze reflection from metals by using the Fresnel equations with a complex refractive index.
Four weeks before he presented his completed theory of total internal reflection and the rhomb, Fresnel submitted a memoir in which he introduced the needed terms "linear polarization", "circular polarization", and "elliptical polarization", and in which he explained optical rotation as a species of birefringence: linearly-polarized light can be resolved into two circularly-polarized components rotating in opposite directions, and if these propagate at different speeds, the phase difference between them — hence the orientation of their linearly-polarized resultant — will vary continuously with distance.
Thus Fresnel's interpretation of the complex values of his reflection coefficients marked the confluence of several streams of his research and, arguably, the essential completion of his reconstruction of physical optics on the transverse-wave hypothesis (see "Augustin-Jean Fresnel").
Derivation.
Here we systematically derive the above relations from electromagnetic premises.
Material parameters.
In order to compute meaningful Fresnel coefficients, we must assume that the medium is (approximately) linear and homogeneous. If the medium is also isotropic, the four field vectors E, B, D, H are related by
formula_21
where "ϵ" and "μ" are scalars, known respectively as the (electric) "permittivity" and the (magnetic) "permeability" of the medium. For a vacuum, these have the values "ϵ"0 and "μ"0, respectively. Hence we define the "relative" permittivity (or dielectric constant) "ϵ"rel
"ϵ"/"ϵ"0 , and the "relative" permeability "μ"rel
"μ"/"μ"0.
In optics it is common to assume that the medium is non-magnetic, so that "μ"rel = 1. For ferromagnetic materials at radio/microwave frequencies, larger values of "μ"rel must be taken into account. But, for optically transparent media, and for all other materials at optical frequencies (except possible metamaterials), "μ"rel is indeed very close to 1; that is, "μ" ≈ "μ"0.
In optics, one usually knows the refractive index "n" of the medium, which is the ratio of the speed of light in a vacuum (c) to the speed of light in the medium. In the analysis of partial reflection and transmission, one is also interested in the electromagnetic wave impedance Z, which is the ratio of the amplitude of E to the amplitude of H. It is therefore desirable to express "n" and Z in terms of "ϵ" and "μ", and thence to relate Z to "n". The last-mentioned relation, however, will make it convenient to derive the reflection coefficients in terms of the wave "admittance" Y, which is the reciprocal of the wave impedance Z.
In the case of "uniform plane sinusoidal" waves, the wave impedance or admittance is known as the "intrinsic" impedance or admittance of the medium. This case is the one for which the Fresnel coefficients are to be derived.
Electromagnetic plane waves.
In a uniform plane sinusoidal electromagnetic wave, the electric field E has the form
where Ek is the (constant) complex amplitude vector, "i" is the imaginary unit, k is the wave vector (whose magnitude k is the angular wavenumber), r is the position vector, "ω" is the angular frequency, "t" is time, and it is understood that the "real part" of the expression is the physical field. The value of the expression is unchanged if the position r varies in a direction normal to k; hence k "is normal to the wavefronts".
To advance the phase by the angle "ϕ", we replace "ωt" by "ωt" + "ϕ" (that is, we replace −"ωt" by −"ωt" − "ϕ"), with the result that the (complex) field is multiplied by "e−iϕ". So a phase "advance" is equivalent to multiplication by a complex constant with a "negative" argument. This becomes more obvious when the field (1) is factored as Ek "e""i"k⋅r"e""−iωt", where the last factor contains the time-dependence. That factor also implies that differentiation w.r.t. time corresponds to multiplication by "−iω".
If "ℓ" is the component of r in the direction of k , the field (1) can be written Ek "e""i"("kℓ"−"ωt"). If the argument of "e""i"(⋯) is to be constant, "ℓ" must increase at the velocity formula_22 known as the "phase velocity" ("v"p). This in turn is equal to formula_23. Solving for k gives
As usual, we drop the time-dependent factor "e"−"iωt", which is understood to multiply every complex field quantity. The electric field for a uniform plane sine wave will then be represented by the location-dependent "phasor"
For fields of that form, Faraday's law and the Maxwell-Ampère law respectively reduce to
formula_24
Putting B = "μ"H and D = "ϵ"E, as above, we can eliminate B and D to obtain equations in only E and H:
formula_25
If the material parameters "ϵ" and "μ" are real (as in a lossless dielectric), these equations show that k , E , H form a "right-handed orthogonal triad", so that the same equations apply to the magnitudes of the respective vectors. Taking the magnitude equations and substituting from (2), we obtain
formula_26
where H and E are the magnitudes of H and E. Multiplying the last two equations gives
Dividing (or cross-multiplying) the same two equations gives "H" = "YE", where
This is the "intrinsic admittance".
From (4) we obtain the phase velocity formula_27. For a vacuum this reduces to formula_28. Dividing the second result by the first gives
formula_29
For a "non-magnetic" medium (the usual case), this becomes &NoBreak;}&NoBreak;.
Wave vectors.
In Cartesian coordinates ("x", "y","z"), let the region "y" < 0 have refractive index "n"1 , intrinsic admittance "Y"1 , etc., and let the region "y" > 0 have refractive index "n"2 , intrinsic admittance "Y"2 , etc. Then the "xz" plane is the interface, and the "y" axis is normal to the interface (see diagram). Let i and j (in bold roman type) be the unit vectors in the "x" and "y" directions, respectively. Let the plane of incidence be the "xy" plane (the plane of the page), with the angle of incidence "θ"i measured from j towards i. Let the angle of refraction, measured in the same sense, be "θ"t , where the subscript "t" stands for "transmitted" (reserving "r" for "reflected").
In the absence of Doppler shifts, "ω" does not change on reflection or refraction. Hence, by (2), the magnitude of the wave vector is proportional to the refractive index.
So, for a given "ω", if we "redefine" k as the magnitude of the wave vector in the "reference" medium (for which "n"
1), then the wave vector has magnitude "n"1"k" in the first medium (region "y" < 0 in the diagram) and magnitude "n"2"k" in the second medium. From the magnitudes and the geometry, we find that the wave vectors are
formula_34
where the last step uses Snell's law. The corresponding dot products in the phasor form (3) are
Hence:
The "s" components.
For the "s" polarization, the E field is parallel to the "z" axis and may therefore be described by its component in the "z" direction. Let the reflection and transmission coefficients be "r"s and "t"s , respectively. Then, if the incident E field is taken to have unit amplitude, the phasor form (3) of its "z"-component is
and the reflected and transmitted fields, in the same form, are
Under the sign convention used in this article, a positive reflection or transmission coefficient is one that preserves the direction of the "transverse" field, meaning (in this context) the field normal to the plane of incidence. For the "s" polarization, that means the E field. If the incident, reflected, and transmitted E fields (in the above equations) are in the "z"-direction ("out of the page"), then the respective H fields are in the directions of the red arrows, since k , E , H form a right-handed orthogonal triad. The H fields may therefore be described by their components in the directions of those arrows, denoted by "H"i , "H"r, "H"t . Then, since "H" = "YE",
At the interface, by the usual interface conditions for electromagnetic fields, the tangential components of the E and H fields must be continuous; that is,
When we substitute from equations (8) to (10) and then from (7), the exponential factors cancel out, so that the interface conditions reduce to the simultaneous equations
which are easily solved for "r"s and "t"s, yielding
and
At "normal incidence" ("θi
θt
" 0), indicated by an additional subscript 0, these results become
and
At "grazing incidence" ("θ"i → 90°), we have , hence "r"s → −1 and "t"s → 0.
The "p" components.
For the "p" polarization, the incident, reflected, and transmitted E fields are parallel to the red arrows and may therefore be described by their components in the directions of those arrows. Let those components be "E"i , "E"r, "E"t (redefining the symbols for the new context). Let the reflection and transmission coefficients be "r"p and "t"p. Then, if the incident E field is taken to have unit amplitude, we have
If the E fields are in the directions of the red arrows, then, in order for k , E , H to form a right-handed orthogonal triad, the respective H fields must be in the "−z" direction ("into the page") and may therefore be described by their components in that direction. This is consistent with the adopted sign convention, namely that a positive reflection or transmission coefficient is one that preserves the direction of the transverse field (the H field in the case of the "p" polarization). The agreement of the "other" field with the red arrows reveals an alternative definition of the sign convention: that a positive reflection or transmission coefficient is one for which the field vector in the plane of incidence points towards the same medium before and after reflection or transmission.
So, for the incident, reflected, and transmitted H fields, let the respective components in the "−z" direction be "H"i , "H"r, "H"t . Then, since "H = YE",
At the interface, the tangential components of the E and H fields must be continuous; that is,
When we substitute from equations (17) and (18) and then from (7), the exponential factors again cancel out, so that the interface conditions reduce to
Solving for "r"p and "t"p, we find
and
At "normal incidence" ("θi
θt
" 0) indicated by an additional subscript 0, these results become
and
At "grazing incidence" ("θ"i → 90°), we again have , hence "r"p → −1 and "t"p → 0.
Comparing (23) and (24) with (15) and (16), we see that at "normal" incidence, under the adopted sign convention, the transmission coefficients for the two polarizations are equal, whereas the reflection coefficients have equal magnitudes but opposite signs. While this clash of signs is a disadvantage of the convention, the attendant advantage is that the signs agree at "grazing" incidence.
Power ratios (reflectivity and transmissivity).
The "Poynting vector" for a wave is a vector whose component in any direction is the "irradiance" (power per unit area) of that wave on a surface perpendicular to that direction. For a plane sinusoidal wave the Poynting vector is , where E and H are due "only" to the wave in question, and the asterisk denotes complex conjugation. Inside a lossless dielectric (the usual case), E and H are in phase, and at right angles to each other and to the wave vector k ; so, for s polarization, using the z and xy components of E and H respectively (or for p polarization, using the xy and -z components of E and H), the irradiance in the direction of k is given simply by "EH"/2 , which is "E"2/"2Z" in a medium of intrinsic impedance "Z"
1/"Y". To compute the irradiance in the direction normal to the interface, as we shall require in the definition of the power transmission coefficient, we could use only the x component (rather than the full xy component) of H or E or, equivalently, simply multiply "EH"/2 by the proper geometric factor, obtaining .
From equations (13) and (21), taking squared magnitudes, we find that the "reflectivity" (ratio of reflected power to incident power) is
for the s polarization, and
for the p polarization. Note that when comparing the powers of two such waves in the same medium and with the same cos "θ", the impedance and geometric factors mentioned above are identical and cancel out. But in computing the power "transmission" (below), these factors must be taken into account.
The simplest way to obtain the power transmission coefficient ("transmissivity", the ratio of transmitted power to incident power "in the direction normal to the interface", i.e. the y direction) is to use "R" + "T" = 1 (conservation of energy). In this way we find
for the s polarization, and
for the p polarization.
In the case of an interface between two lossless media (for which ϵ and μ are "real" and positive), one can obtain these results directly using the squared magnitudes of the amplitude transmission coefficients that we found earlier in equations (14) and (22). But, for given amplitude (as noted above), the component of the Poynting vector in the y direction is proportional to the geometric factor cos "θ" and inversely proportional to the wave impedance Z. Applying these corrections to each wave, we obtain two ratios multiplying the square of the amplitude transmission coefficient:
for the s polarization, and
for the p polarization. The last two equations apply only to lossless dielectrics, and only at incidence angles smaller than the critical angle (beyond which, of course, "T" = 0).
For unpolarized light:
formula_35
formula_36
where formula_37.
Equal refractive indices.
From equations (4) and (5), we see that two dissimilar media will have the same refractive index, but different admittances, if the ratio of their permeabilities is the inverse of the ratio of their permittivities. In that unusual situation we have "θ"t = "θ"i (that is, the transmitted ray is undeviated), so that the cosines in equations (13), (14), (21), (22), and (25) to (28) cancel out, and all the reflection and transmission ratios become independent of the angle of incidence; in other words, the ratios for normal incidence become applicable to all angles of incidence. When extended to spherical reflection or scattering, this results in the Kerker effect for Mie scattering.
Non-magnetic media.
Since the Fresnel equations were developed for optics, they are usually given for non-magnetic materials. Dividing (4) by (5) yields
formula_38
For non-magnetic media we can substitute the vacuum permeability "μ"0 for "μ", so that
formula_39
that is, the admittances are simply proportional to the corresponding refractive indices. When we make these substitutions in equations (13) to (16) and equations (21) to (26), the factor "cμ"0 cancels out. For the amplitude coefficients we obtain:
For the case of normal incidence these reduce to:
The power reflection coefficients become:
The power transmissions can then be found from "T" = 1 − "R".
Brewster's angle.
For equal permeabilities (e.g., non-magnetic media), if "θ"i and "θ"t are "complementary", we can substitute sin "θ"t for cos "θ"i, and sin "θ"i for cos "θ"t, so that the numerator in equation (31) becomes "n"2sin "θ"t − "n"1sin "θ"i, which is zero (by Snell's law). Hence "r"p = 0 and only the s-polarized component is reflected. This is what happens at the Brewster angle. Substituting cos "θ"i for sin "θ"t in Snell's law, we readily obtain
for Brewster's angle.
Equal permittivities.
Although it is not encountered in practice, the equations can also apply to the case of two media with a common permittivity but different refractive indices due to different permeabilities. From equations (4) and (5), if "ϵ" is fixed instead of "μ", then Y becomes "inversely" proportional to n, with the result that the subscripts 1 and 2 in equations (29) to (38) are interchanged (due to the additional step of multiplying the numerator and denominator by "n"1"n"2). Hence, in (29) and (31), the expressions for "r"s and "r"p in terms of refractive indices will be interchanged, so that Brewster's angle (39) will give "r"s = 0 instead of "r"p = 0, and any beam reflected at that angle will be p-polarized instead of s-polarized. Similarly, Fresnel's sine law will apply to the p polarization instead of the s polarization, and his tangent law to the s polarization instead of the p polarization.
This switch of polarizations has an analog in the old mechanical theory of light waves (see "§ History", above). One could predict reflection coefficients that agreed with observation by supposing (like Fresnel) that different refractive indices were due to different "densities" and that the vibrations were "normal" to what was then called the plane of polarization, or by supposing (like MacCullagh and Neumann) that different refractive indices were due to different "elasticities" and that the vibrations were "parallel" to that plane. Thus the condition of equal permittivities and unequal permeabilities, although not realistic, is of some historical interest.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\theta_\\mathrm{i} = \\theta_\\mathrm{r},"
},
{
"math_id": 1,
"text": "n_1 \\sin \\theta_\\mathrm{i} = n_2 \\sin \\theta_\\mathrm{t}."
},
{
"math_id": 2,
"text": "\n R_\\mathrm{s} = \\left|\\frac{Z_2 \\cos \\theta_\\mathrm{i} - Z_1 \\cos \\theta_\\mathrm{t}}{Z_2 \\cos \\theta_\\mathrm{i} + Z_1 \\cos \\theta_\\mathrm{t}}\\right|^2,\n"
},
{
"math_id": 3,
"text": "\n R_\\mathrm{p} = \\left|\\frac{Z_2 \\cos \\theta_\\mathrm{t} - Z_1 \\cos \\theta_\\mathrm{i}}{Z_2 \\cos \\theta_\\mathrm{t} + Z_1 \\cos \\theta_\\mathrm{i}}\\right|^2,\n"
},
{
"math_id": 4,
"text": "Z_i = \\frac{Z_0}{n_i}\\,,"
},
{
"math_id": 5,
"text": "\n R_\\mathrm{s} = \\left|\\frac{n_1 \\cos \\theta_\\mathrm{i} - n_2 \\cos \\theta_\\mathrm{t}}{n_1 \\cos \\theta_\\mathrm{i} + n_2 \\cos \\theta_\\mathrm{t}}\\right|^2\n = \\left|\\frac\n {n_1 \\cos \\theta_{\\mathrm{i}} - n_2 \\sqrt{1 - \\left(\\frac{n_1}{n_2} \\sin \\theta_{\\mathrm{i}}\\right)^2}}\n {n_1 \\cos \\theta_{\\mathrm{i}} + n_2 \\sqrt{1 - \\left(\\frac{n_1}{n_2} \\sin \\theta_{\\mathrm{i}}\\right)^2}}\n \\right|^2\\!,\n"
},
{
"math_id": 6,
"text": "\n R_\\mathrm{p} = \\left|\\frac{n_1 \\cos \\theta_\\mathrm{t} - n_2 \\cos \\theta_\\mathrm{i}}{n_1 \\cos \\theta_\\mathrm{t} + n_2 \\cos \\theta_\\mathrm{i}}\\right|^2\n = \\left|\\frac\n {n_1 \\sqrt{1 - \\left(\\frac{n_1}{n_2} \\sin \\theta_\\mathrm{i}\\right)^2} - n_2 \\cos \\theta_\\mathrm{i}}\n {n_1 \\sqrt{1 - \\left(\\frac{n_1}{n_2} \\sin \\theta_\\mathrm{i}\\right)^2} + n_2 \\cos \\theta_\\mathrm{i}}\n \\right|^2\\!.\n"
},
{
"math_id": 7,
"text": "T_\\mathrm{s} = 1 - R_\\mathrm{s}"
},
{
"math_id": 8,
"text": "T_\\mathrm{p} = 1 - R_\\mathrm{p}"
},
{
"math_id": 9,
"text": "R_\\mathrm{eff} = \\frac{1}{2}\\left(R_\\mathrm{s} + R_\\mathrm{p}\\right)."
},
{
"math_id": 10,
"text": "\nR_0 = \\left|\\frac{n_1 - n_2 }{n_1 + n_2 }\\right|^2\\,.\n"
},
{
"math_id": 11,
"text": " R_\\text{p} = R_\\text{s}^2 "
},
{
"math_id": 12,
"text": " \\theta_\\mathrm{i}"
},
{
"math_id": 13,
"text": " \\theta_\\mathrm{r} = \\theta_\\mathrm{i} "
},
{
"math_id": 14,
"text": " \\theta_\\mathrm{t}"
},
{
"math_id": 15,
"text": "\\begin{align}\n r_\\text{s} &= \\frac{ n_1 \\cos \\theta_\\text{i} - n_2 \\cos \\theta_\\text{t}}{n_1 \\cos \\theta_\\text{i} + n_2 \\cos \\theta_\\text{t}}, \\\\[3pt]\n t_\\text{s} &= \\frac{2 n_1 \\cos \\theta_\\text{i}} {n_1 \\cos \\theta_\\text{i} + n_2 \\cos \\theta_\\text{t}}, \\\\[3pt]\n r_\\text{p} &= \\frac{ n_2 \\cos \\theta_\\text{i} - n_1 \\cos \\theta_\\text{t}}{n_2 \\cos \\theta_\\text{i} + n_1 \\cos \\theta_\\text{t}}, \\\\[3pt]\n t_\\text{p} &= \\frac{2 n_1 \\cos \\theta_\\text{i}} {n_2 \\cos \\theta_\\text{i} + n_1 \\cos \\theta_\\text{t}}.\n\\end{align}"
},
{
"math_id": 16,
"text": "R = |r|^2."
},
{
"math_id": 17,
"text": "T = \\frac{n_2 \\cos \\theta_\\text{t}}{n_1 \\cos \\theta_\\text{i}} |t|^2"
},
{
"math_id": 18,
"text": "n_2=n_1\\sin\\theta_\\text{i}/\\sin\\theta_\\text{t}"
},
{
"math_id": 19,
"text": "r_\\text{s}=-\\frac{\\sin(\\theta_\\text{i}-\\theta_\\text{t})}{\\sin(\\theta_\\text{i}+\\theta_\\text{t})}."
},
{
"math_id": 20,
"text": "r_\\text{p}=\\frac{\\tan(\\theta_\\text{i}-\\theta_\\text{t})}{\\tan(\\theta_\\text{i}+\\theta_\\text{t})}.\n"
},
{
"math_id": 21,
"text": "\\begin{align}\n\\mathbf{D} &= \\epsilon \\mathbf{E} \\\\\n\\mathbf{B} &= \\mu \\mathbf{H}\\,,\n\\end{align}\n"
},
{
"math_id": 22,
"text": "\\omega/k\\,,\\,"
},
{
"math_id": 23,
"text": "c/n"
},
{
"math_id": 24,
"text": "\\begin{align}\n \\omega\\mathbf{B} &= \\mathbf{k}\\times\\mathbf{E}\\\\\n \\omega\\mathbf{D} &= -\\mathbf{k}\\times\\mathbf{H}\\,.\n\\end{align}"
},
{
"math_id": 25,
"text": "\\begin{align}\n \\omega\\mu\\mathbf{H} &= \\mathbf{k}\\times\\mathbf{E}\\\\\n \\omega\\epsilon\\mathbf{E} &= -\\mathbf{k}\\times\\mathbf{H}\\,.\n\\end{align}"
},
{
"math_id": 26,
"text": "\\begin{align}\n \\mu cH &= nE\\\\\n \\epsilon cE &= nH\\,,\n\\end{align}"
},
{
"math_id": 27,
"text": "c/n=1\\big/\\!\\sqrt{\\mu\\epsilon\\,}"
},
{
"math_id": 28,
"text": "c=1\\big/\\!\\sqrt{\\mu_0\\epsilon_0}"
},
{
"math_id": 29,
"text": "n=\\sqrt{\\mu_{\\text{rel}}\\epsilon_{\\text{rel}}}\\,."
},
{
"math_id": 30,
"text": "Z=\\sqrt{\\mu/\\epsilon}"
},
{
"math_id": 31,
"text": "Z_0=\\sqrt{\\mu_0/\\epsilon_0}\\,\\approx 377\\,\\Omega\\,,"
},
{
"math_id": 32,
"text": "Z/Z_0=\\sqrt{\\mu_{\\text{rel}}/\\epsilon_{\\text{rel}}}"
},
{
"math_id": 33,
"text": "Z=Z_0\\big/\\!\\sqrt{\\epsilon_{\\text{rel}}}=Z_0/n."
},
{
"math_id": 34,
"text": "\\begin{align}\n \\mathbf{k}_\\text{i} &= n_1 k(\\mathbf{i}\\sin\\theta_\\text{i} + \\mathbf{j}\\cos\\theta_\\text{i})\\\\[.5ex]\n \\mathbf{k}_\\text{r} &= n_1 k(\\mathbf{i}\\sin\\theta_\\text{i} - \\mathbf{j}\\cos\\theta_\\text{i})\\\\[.5ex]\n \\mathbf{k}_\\text{t} &= n_2 k(\\mathbf{i}\\sin\\theta_\\text{t} + \\mathbf{j}\\cos\\theta_\\text{t})\\\\\n &= k(\\mathbf{i}\\,n_1\\sin\\theta_\\text{i} + \\mathbf{j}\\,n_2\\cos\\theta_\\text{t})\\,,\n\\end{align}"
},
{
"math_id": 35,
"text": "T={1 \\over 2}(T_s+T_p)"
},
{
"math_id": 36,
"text": "R={1 \\over 2}(R_s+R_p)"
},
{
"math_id": 37,
"text": "R+T=1"
},
{
"math_id": 38,
"text": "Y=\\frac{n}{\\,c\\mu\\,}\\,."
},
{
"math_id": 39,
"text": "Y_1=\\frac{n_1}{\\,c\\mu_0} ~~;~~~ Y_2=\\frac{n_2}{\\,c\\mu_0}\\,;"
}
] | https://en.wikipedia.org/wiki?curid=11149 |
1114931 | Exotic sphere | Smooth manifold that is homeomorphic but not diffeomorphic to a sphere
In an area of mathematics called differential topology, an exotic sphere is a differentiable manifold "M" that is homeomorphic but not diffeomorphic to the standard Euclidean "n"-sphere. That is, "M" is a sphere from the point of view of all its topological properties, but carrying a smooth structure that is not the familiar one (hence the name "exotic").
The first exotic spheres were constructed by John Milnor (1956) in dimension formula_0 as formula_1-bundles over formula_2. He showed that there are at least 7 differentiable structures on the 7-sphere. In any dimension, Milnor showed that the diffeomorphism classes of oriented exotic spheres form the non-trivial elements of an abelian monoid under connected sum, which is a finite abelian group if the dimension is not 4. The classification of exotic spheres by Michel Kervaire and Milnor (1963) showed that the oriented exotic 7-spheres are the non-trivial elements of a cyclic group of order 28 under the operation of connected sum.
More generally, in any dimension "n ≠ 4", there is a finite Abelian group whose elements are the equivalence classes of smooth structures on "S"n, where two structures are considered equivalent if there is an orientation preserving diffeomorphism carrying one structure onto the other. The group operation is defined by [x] + [y] = [x + y], where x and y are arbitrary representatives of their equivalence classes, and "x + y" denotes the smooth structure on the smooth "S"n that is the connected sum of x and y. It is necessary to show that such a definition does not depend on the choices made; indeed this can be shown.
Introduction.
The unit "n"-sphere, formula_3, is the set of all ("n"+1)-tuples formula_4 of real numbers, such that the sum formula_5. For instance, formula_6 is a circle, while formula_7 is the surface of an ordinary ball of radius one in 3 dimensions. Topologists consider a space "X" to be an "n"-sphere if there is a homeomorphism between them, i.e. every point in "X" may be assigned to exactly one point in the unit "n"-sphere by a continuous bijection with continuous inverse. For example, a point "x" on an "n"-sphere of radius "r" can be matched homeomorphically with a point on the unit "n"-sphere by multiplying its distance from the origin by formula_8. Similarly, an "n"-cube of any radius is homeomorphic to an "n"-sphere.
In differential topology, two smooth manifolds are considered smoothly equivalent if there exists a diffeomorphism from one to the other, which is a homeomorphism between them, with the additional condition that it be smooth — that is, it should have derivatives of all orders at all its points — and its inverse homeomorphism must also be smooth. To calculate derivatives, one needs to have local coordinate systems defined consistently in "X". Mathematicians (including Milnor himself) were surprised in 1956 when Milnor showed that consistent local coordinate systems could be set up on the 7-sphere in two different ways that were equivalent in the continuous sense, but not in the differentiable sense. Milnor and others set about trying to discover how many such exotic spheres could exist in each dimension and to understand how they relate to each other. No exotic structures are possible on the 1-, 2-, 3-, 5-, 6-, 12-, 56- or 61-sphere. Some higher-dimensional spheres have only two possible differentiable structures, others have thousands. Whether exotic 4-spheres exist, and if so how many, is an unsolved problem.
Classification.
The monoid of smooth structures on "n"-spheres is the collection of oriented smooth "n"-manifolds which are homeomorphic to the "n"-sphere, taken up to orientation-preserving diffeomorphism. The monoid operation is the connected sum. Provided formula_9, this monoid is a group and is isomorphic to the group formula_10 of "h"-cobordism classes of oriented homotopy "n"-spheres, which is finite and abelian. In dimension 4 almost nothing is known about the monoid of smooth spheres, beyond the facts that it is finite or countably infinite, and abelian, though it is suspected to be infinite; see the section on Gluck twists. All homotopy "n"-spheres are homeomorphic to the "n"-sphere by the generalized Poincaré conjecture, proved by Stephen Smale in dimensions bigger than 4, Michael Freedman in dimension 4, and Grigori Perelman in dimension 3. In dimension 3, Edwin E. Moise proved that every topological manifold has an essentially unique smooth structure (see Moise's theorem), so the monoid of smooth structures on the 3-sphere is trivial.
Parallelizable manifolds.
The group formula_10 has a cyclic subgroup
formula_11
represented by "n"-spheres that bound parallelizable manifolds. The structures of formula_11 and the quotient
formula_12
are described separately in the paper (Kervaire & Milnor 1963), which was influential in the development of surgery theory. In fact, these calculations can be formulated in a modern language in terms of the surgery exact sequence as indicated here.
The group formula_11 is a cyclic group, and is trivial or order 2 except in case formula_13, in which case it can be large, with its order related to the Bernoulli numbers. It is trivial if "n" is even. If "n" is 1 mod 4 it has order 1 or 2; in particular it has order 1 if "n" is 1, 5, 13, 29, or 61, and William Browder (1969) proved that it has order 2 if formula_14 mod 4 is not of the form formula_15. It follows from the now almost completely resolved Kervaire invariant problem that it has order 2 for all "n" bigger than 126; the case formula_16 is still open. The order of formula_17 for formula_18 is
formula_19
where "B" is the numerator of formula_20, and formula_21 is a Bernoulli number. (The formula in the topological literature differs slightly because topologists use a different convention for naming Bernoulli numbers; this article uses the number theorists' convention.)
Map between quotients.
The quotient group formula_12 has a description in terms of stable homotopy groups of spheres modulo the image of the J-homomorphism; it is either equal to the quotient or index 2. More precisely there is an injective map
formula_22
where formula_23 is the "n"th stable homotopy group of spheres, and "J" is the image of the "J"-homomorphism. As with formula_11, the image of "J" is a cyclic group, and is trivial or order 2 except in case formula_13, in which case it can be large, with its order related to the Bernoulli numbers. The quotient group formula_24 is the "hard" part of the stable homotopy groups of spheres, and accordingly formula_12 is the hard part of the exotic spheres, but almost completely reduces to computing homotopy groups of spheres. The map is either an isomorphism (the image is the whole group), or an injective map with index 2. The latter is the case if and only if there exists an "n"-dimensional framed manifold with Kervaire invariant 1, which is known as the Kervaire invariant problem. Thus a factor of 2 in the classification of exotic spheres depends on the Kervaire invariant problem.
The Kervaire invariant problem is almost completely solved, with only the case formula_25 remaining open, although Zhouli Xu (in collaboration with Weinan Lin and Guozhen Wang) announced during a seminar at Princeton University on May 30, 2024, that the final case of dimension 126 has been settled and that there exist manifolds of Kervaire invariant 1 in dimension 126. Previous work had shown that such manifolds could only exist in dimensions formula_26, and that there are no such manifolds in dimension formula_27 and above. Manifolds with Kervaire invariant 1 have been constructed in dimensions 2, 6, 14, and 30. While it is known that there are manifolds of Kervaire invariant 1 in dimension 62, no such manifold has yet been constructed; the same is true in dimension 126.
Order of Θn.
The order of the group formula_10 is given in this table (sequence in the OEIS); the entry for formula_28 was originally given wrong by a factor of 2 (see the correction in volume III, p. 97, of Milnor's collected works).
Note that for dim formula_29 (that is, "n" = 7, 11, 15, 19), the values of formula_30 are formula_31, formula_32, formula_33, and formula_34, respectively. Further entries in this table can be computed from the information above together with the table of stable homotopy groups of spheres.
By computations of stable homotopy groups of spheres, it has been proved that the sphere "S"61 has a unique smooth structure, and that it is the last odd-dimensional sphere with this property – the only ones are "S"1, "S"3, "S"5, and "S"61.
Explicit examples of exotic spheres.
<templatestyles src="Template:Quote_box/styles.css" />
When I came upon such an example in the mid-50s, I was very puzzled and didn't know what to make of it. At first, I thought I'd found a counterexample to the generalized Poincaré conjecture in dimension seven. But careful study showed that the manifold really was homeomorphic to formula_35. Thus, there exists a differentiable structure on formula_35 not diffeomorphic to the standard one.
John Milnor (2009, p.12)
Milnor's construction.
One of the first examples of an exotic sphere found by was the following. Let formula_36 be the unit ball in formula_37, and let formula_1 be its boundary—a 3-sphere which we identify with the group of unit quaternions. Now take two copies of formula_38, each with boundary formula_39, and glue them together by identifying formula_40 in the first boundary with formula_41 in the second boundary. The resulting manifold has a natural smooth structure and is homeomorphic to formula_35, but is not diffeomorphic to formula_35. Milnor showed that it is not the boundary of any smooth 8-manifold with vanishing 4th Betti number, and has no orientation-reversing diffeomorphism to itself; either of these properties implies that it is not a standard 7-sphere. Milnor showed that this manifold has a Morse function with just two critical points, both non-degenerate, which implies that it is topologically a sphere.
Brieskorn spheres.
As shown by Egbert Brieskorn (1966, 1966b), the intersection of the complex manifold of points in formula_42 satisfying
formula_43
with a small sphere around the origin for formula_44 gives all 28 possible smooth structures on the oriented 7-sphere. Similar manifolds are called Brieskorn spheres.
Twisted spheres.
Given an (orientation-preserving) diffeomorphism formula_45, gluing the boundaries of two copies of the standard disk formula_46 together by "f" yields a manifold called a "twisted sphere" (with "twist" "f"). It is homotopy equivalent to the standard "n"-sphere because the gluing map is homotopic to the identity (being an orientation-preserving diffeomorphism, hence degree 1), but not in general diffeomorphic to the standard sphere.
Setting formula_47 to be the group of twisted "n"-spheres (under connect sum), one obtains the exact sequence
formula_48
For formula_49, every exotic "n"-sphere is diffeomorphic to a twisted sphere, a result proven by Stephen Smale which can be seen as a consequence of the "h"-cobordism theorem. (In contrast, in the piecewise linear setting the left-most map is onto via radial extension: every piecewise-linear-twisted sphere is standard.) The group formula_47 of twisted spheres is always isomorphic to the group formula_10. The notations are different because it was not known at first that they were the same for formula_50 or 4; for example, the case formula_50 is equivalent to the Poincaré conjecture.
In 1970 Jean Cerf proved the pseudoisotopy theorem which implies that formula_51 is the trivial group provided formula_52, and so formula_53 provided formula_52.
Applications.
If "M" is a piecewise linear manifold then the problem of finding the compatible smooth structures on "M" depends on knowledge of the groups Γ"k" = Θ"k". More precisely, the obstructions to the existence of any smooth structure lie in the groups H"k+1"("M", Γ"k") for various values of "k", while if such a smooth structure exists then all such smooth structures can be classified using the groups H"k"("M", Γ"k").
In particular the groups Γ"k" vanish if "k" < 7, so all PL manifolds of dimension at most 7 have a smooth structure, which is essentially unique if the manifold has dimension at most 6.
The following finite abelian groups are essentially the same:
4-dimensional exotic spheres and Gluck twists.
In 4 dimensions it is not known whether there are any exotic smooth structures on the 4-sphere. The statement that they do not exist is known as the "smooth Poincaré conjecture", and is discussed by Michael Freedman, Robert Gompf, and Scott Morrison et al. (2010) who say that it is believed to be false.
Some candidates proposed for exotic 4-spheres are the Cappell–Shaneson spheres (Sylvain Cappell and Julius Shaneson (1976)) and those derived by Gluck twists. Gluck twist spheres are constructed by cutting out a tubular neighborhood of a 2-sphere "S" in "S"4 and gluing it back in using a diffeomorphism of its boundary "S"2×"S"1. The result is always homeomorphic to "S"4. Many cases over the years were ruled out as possible counterexamples to the smooth 4-dimensional Poincaré conjecture, for example by Cameron Gordon (1976), José Montesinos (1983), Steven P. Plotnick (1984), and Selman Akbulut (2010), among others.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n = 7"
},
{
"math_id": 1,
"text": "S^3"
},
{
"math_id": 2,
"text": "S^4"
},
{
"math_id": 3,
"text": "S^n"
},
{
"math_id": 4,
"text": "(x_1, x_2, \\ldots , x_{n+1})"
},
{
"math_id": 5,
"text": "x_1^2 + x_2^2 + \\cdots + x_{n+1}^2 = 1"
},
{
"math_id": 6,
"text": "S^1"
},
{
"math_id": 7,
"text": "S^2"
},
{
"math_id": 8,
"text": "1/r"
},
{
"math_id": 9,
"text": "n\\ne 4"
},
{
"math_id": 10,
"text": "\\Theta_n"
},
{
"math_id": 11,
"text": "bP_{n+1}"
},
{
"math_id": 12,
"text": "\\Theta_n/bP_{n+1}"
},
{
"math_id": 13,
"text": "n = 4k+3"
},
{
"math_id": 14,
"text": "n = 1"
},
{
"math_id": 15,
"text": "2^k - 3"
},
{
"math_id": 16,
"text": "n = 126"
},
{
"math_id": 17,
"text": "bP_{4k}"
},
{
"math_id": 18,
"text": "k\\ge 2"
},
{
"math_id": 19,
"text": "2^{2k-2}(2^{2k-1}-1)B,"
},
{
"math_id": 20,
"text": "4B_{2k}/k"
},
{
"math_id": 21,
"text": "B_{2k}"
},
{
"math_id": 22,
"text": "\\Theta_n/bP_{n+1}\\to \\pi_n^S/J,"
},
{
"math_id": 23,
"text": "\\pi_n^S"
},
{
"math_id": 24,
"text": "\\pi_n^S/J"
},
{
"math_id": 25,
"text": "n=126"
},
{
"math_id": 26,
"text": "n=2^j-2"
},
{
"math_id": 27,
"text": "254=2^8-2"
},
{
"math_id": 28,
"text": "n = 19"
},
{
"math_id": 29,
"text": "n = 4k - 1"
},
{
"math_id": 30,
"text": "\\theta_n"
},
{
"math_id": 31,
"text": "28 = 2^2(2^3-1)"
},
{
"math_id": 32,
"text": "992 = 2^5(2^5 - 1)"
},
{
"math_id": 33,
"text": "16256 = 2^7(2^7 - 1) "
},
{
"math_id": 34,
"text": "523264 = 2^{10}(2^9 - 1) "
},
{
"math_id": 35,
"text": "S^7"
},
{
"math_id": 36,
"text": "B^4"
},
{
"math_id": 37,
"text": "\\R^4"
},
{
"math_id": 38,
"text": "B^4 \\times S^3"
},
{
"math_id": 39,
"text": "S^3 \\times S^3"
},
{
"math_id": 40,
"text": "(a,b)"
},
{
"math_id": 41,
"text": "(a,a^2ba^{-1})"
},
{
"math_id": 42,
"text": "\\Complex^5"
},
{
"math_id": 43,
"text": "a^2 + b^2 + c^2 + d^3 + e^{6k-1} = 0\\ "
},
{
"math_id": 44,
"text": "k = 1, 2, \\ldots, 28"
},
{
"math_id": 45,
"text": "f\\colon S^{n-1} \\to S^{n-1}"
},
{
"math_id": 46,
"text": "D^n"
},
{
"math_id": 47,
"text": "\\Gamma_n"
},
{
"math_id": 48,
"text": "\\pi_0\\operatorname{Diff}^+(D^n) \\to \\pi_0\\operatorname{Diff}^+(S^{n-1}) \\to \\Gamma_n \\to 0."
},
{
"math_id": 49,
"text": "n>5"
},
{
"math_id": 50,
"text": "n = 3"
},
{
"math_id": 51,
"text": "\\pi_0 \\operatorname{Diff}^+(D^n)"
},
{
"math_id": 52,
"text": "n \\geq 6"
},
{
"math_id": 53,
"text": "\\Gamma_n \\simeq \\pi_0 \\operatorname{Diff}^+(S^{n-1})"
}
] | https://en.wikipedia.org/wiki?curid=1114931 |
11149717 | Q-Pochhammer symbol | Concept in combinatorics (part of mathematics)
In the mathematical field of combinatorics, the "q"-Pochhammer symbol, also called the "q"-shifted factorial, is the product
formula_0
with formula_1
It is a "q"-analog of the Pochhammer symbol formula_2, in the sense that
formula_3
The "q"-Pochhammer symbol is a major building block in the construction of "q"-analogs; for instance, in the theory of basic hypergeometric series, it plays the role that the ordinary Pochhammer symbol plays in the theory of generalized hypergeometric series.
Unlike the ordinary Pochhammer symbol, the "q"-Pochhammer symbol can be extended to an infinite product:
formula_4
This is an analytic function of "q" in the interior of the unit disk, and can also be considered as a formal power series in "q". The special case
formula_5
is known as Euler's function, and is important in combinatorics, number theory, and the theory of modular forms.
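As a concrete illustration of these definitions, the following Python sketch (the function names are illustrative, not from any standard library) evaluates the finite product directly and approximates the infinite product by truncation, which is reasonable for |q| < 1 since the factors approach 1:

def q_pochhammer(a, q, n):
    # finite product (a; q)_n = (1 - a)(1 - a q) ... (1 - a q^(n-1))
    result = 1.0
    for k in range(n):
        result *= 1.0 - a * q**k
    return result

def q_pochhammer_inf(a, q, terms=500):
    # truncated approximation to (a; q)_infinity, reasonable for |q| < 1
    return q_pochhammer(a, q, terms)

print(q_pochhammer(0.5, 0.3, 3))    # (1 - 0.5)(1 - 0.15)(1 - 0.045) = 0.405875
print(q_pochhammer_inf(0.5, 0.3))   # approximates the infinite product (0.5; 0.3)_infinity
print(q_pochhammer_inf(0.3, 0.3))   # approximates Euler's function phi(0.3)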
Identities.
The finite product can be expressed in terms of the infinite product:
formula_6
which extends the definition to negative integers "n". Thus, for nonnegative "n", one has
formula_7
and
formula_8
Alternatively,
formula_9
which is useful for some of the generating functions of partition functions.
The "q"-Pochhammer symbol is the subject of a number of "q"-series identities, particularly the infinite series expansions
formula_10
and
formula_11
which are both special cases of the "q"-binomial theorem:
formula_12
Fridrikh Karpelevich found the following identity (see Olshanetsky and Rogov (1995) for the proof):
formula_13
Combinatorial interpretation.
The "q"-Pochhammer symbol is closely related to the enumerative combinatorics of partitions. The coefficient of formula_14 in
formula_15
is the number of partitions of "m" into at most "n" parts.
Since, by conjugation of partitions, this is the same as the number of partitions of "m" into parts of size at most "n", by identification of generating series we obtain the identity
formula_16
as in the above section.
We also have that the coefficient of formula_14 in
formula_17
is the number of partitions of "m" into "n" or "n"-1 distinct parts.
By removing a triangular partition with "n" − 1 parts from such a partition, we are left with an arbitrary partition with at most "n" parts. This gives a weight-preserving bijection between the set of partitions into "n" or "n" − 1 distinct parts and the set of pairs consisting of a triangular partition having "n" − 1 parts and a partition with at most "n" parts. By identifying generating series, this leads to the identity
formula_18
also described in the above section.
The reciprocal of the function formula_19 similarly arises as the generating function for the partition function, formula_20, which is also expanded by the second two q-series expansions given below:
formula_21
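The first of these expansions can be checked numerically by expanding the infinite product as a power series; the short Python sketch below (a naive illustration rather than an efficient algorithm) multiplies out the factors 1/(1 − q^k) and recovers the familiar partition numbers 1, 1, 2, 3, 5, 7, 11, ...:

def partition_numbers(max_n):
    # Coefficients of 1/(q; q)_infinity = prod_{k >= 1} 1/(1 - q^k),
    # truncated at degree max_n; coeffs[m] ends up equal to p(m).
    coeffs = [0] * (max_n + 1)
    coeffs[0] = 1
    for k in range(1, max_n + 1):
        # multiplying by 1/(1 - q^k) adds, for each degree m, the coefficient at m - k
        for m in range(k, max_n + 1):
            coeffs[m] += coeffs[m - k]
    return coeffs

print(partition_numbers(10))   # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]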
The "q"-binomial theorem itself can also be handled by a slightly more involved combinatorial argument of a similar flavor (see also the expansions given in the next subsection).
Similarly,
formula_22
Multiple arguments convention.
Since identities involving "q"-Pochhammer symbols so frequently involve products of many symbols, the standard convention is to write a product as a single symbol of multiple arguments:
formula_23
"q"-series.
A "q"-series is a series in which the coefficients are functions of "q", typically expressions of formula_24. Early results are due to Euler, Gauss, and Cauchy. The systematic study begins with Eduard Heine (1843).
Relationship to other "q"-functions.
The "q"-analog of "n", also known as the q"-bracket or q"-number of "n", is defined to be
formula_25
From this one can define the "q"-analog of the factorial, the "q"-factorial, as
formula_26
These numbers are analogues in the sense that
formula_27
and so also
formula_28
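Both limits are easy to verify numerically; the following sketch (purely illustrative) evaluates the q-bracket and the q-factorial for values of q approaching 1:

def q_bracket(n, q):
    # [n]_q = (1 - q^n) / (1 - q) = 1 + q + ... + q^(n-1)
    return (1.0 - q**n) / (1.0 - q)

def q_factorial(n, q):
    # [n]!_q = [1]_q [2]_q ... [n]_q
    result = 1.0
    for k in range(1, n + 1):
        result *= q_bracket(k, q)
    return result

for q in (0.5, 0.9, 0.99, 0.999):
    print(q, q_bracket(5, q), q_factorial(5, q))
# as q -> 1 the two columns approach 5 and 5! = 120 respectively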
The limit value "n"! counts permutations of an "n"-element set "S". Equivalently, it counts the number of sequences of nested sets formula_29 such that formula_30 contains exactly "i" elements. By comparison, when "q" is a prime power and "V" is an "n"-dimensional vector space over the field with "q" elements, the "q"-analogue formula_31 is the number of complete flags in "V", that is, it is the number of sequences formula_32 of subspaces such that formula_33 has dimension "i". The preceding considerations suggest that one can regard a sequence of nested sets as a flag over a conjectural field with one element.
A product of negative integer "q"-brackets can be expressed in terms of the "q"-factorial as
formula_34
From the "q"-factorials, one can move on to define the "q"-binomial coefficients, also known as the Gaussian binomial coefficients, as
formula_35
where it is easy to see that the triangle of these coefficients is symmetric in the sense that
formula_36
for all formula_37. One can check that
formula_38
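The recurrence also gives a convenient way to compute Gaussian binomial coefficients as polynomials in q. The sketch below (illustrative only) stores each coefficient as a list of integer coefficients and applies the second form of the recurrence:

def q_binomial(n, k):
    # Gaussian binomial [n choose k]_q as the coefficient list
    # [c_0, c_1, c_2, ...] of the polynomial c_0 + c_1 q + c_2 q^2 + ...,
    # built from the q-Pascal rule [n, k]_q = [n-1, k-1]_q + q^k [n-1, k]_q.
    if k < 0 or k > n:
        return [0]
    if k == 0 or k == n:
        return [1]
    left = q_binomial(n - 1, k - 1)
    right = [0] * k + q_binomial(n - 1, k)   # multiplication by q^k shifts degrees up by k
    size = max(len(left), len(right))
    left += [0] * (size - len(left))
    right += [0] * (size - len(right))
    return [left[i] + right[i] for i in range(size)]

print(q_binomial(4, 2))   # [1, 1, 2, 1, 1], i.e. 1 + q + 2q^2 + q^3 + q^4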
One can also see from the previous recurrence relations that the next variants of the formula_39-binomial theorem are expanded in terms of these coefficients as follows:
formula_40
One may further define the "q"-multinomial coefficients
formula_41
where the arguments formula_42 are nonnegative integers that satisfy formula_43. The coefficient above counts the number of flags
formula_44
of subspaces in an "n"-dimensional vector space over the field with "q" elements such that formula_45.
The limit formula_46 gives the usual multinomial coefficient formula_47, which counts words of length "n" in the "m" different symbols formula_48 such that each formula_49 appears formula_50 times.
One also obtains a "q"-analog of the gamma function, called the q-gamma function, and defined as
formula_51
This converges to the usual gamma function as "q" approaches 1 from inside the unit disc. Note that
formula_52
for any "x" and
formula_53
for non-negative integer values of "n". Alternatively, this may be taken as an extension of the "q"-factorial function to the real number system.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(a;q)_n = \\prod_{k=0}^{n-1} (1-aq^k)=(1-a)(1-aq)(1-aq^2)\\cdots(1-aq^{n-1}),"
},
{
"math_id": 1,
"text": "(a;q)_0 = 1."
},
{
"math_id": 2,
"text": "(x)_n = x(x+1)\\dots(x+n-1)"
},
{
"math_id": 3,
"text": "\\lim_{q\\to1} \\frac{(q^x;q)_n}{(1-q)^n} = (x)_n."
},
{
"math_id": 4,
"text": "(a;q)_\\infty = \\prod_{k=0}^{\\infty} (1-aq^k)."
},
{
"math_id": 5,
"text": "\\phi(q) = (q;q)_\\infty=\\prod_{k=1}^\\infty (1-q^k)"
},
{
"math_id": 6,
"text": "(a;q)_n = \\frac{(a;q)_\\infty} {(aq^n;q)_\\infty}, "
},
{
"math_id": 7,
"text": "(a;q)_{-n} = \\frac{1}{(aq^{-n};q)_n}=\\prod_{k=1}^n \\frac{1}{(1-a/q^k)}"
},
{
"math_id": 8,
"text": "(a;q)_{-n} = \\frac{(-q/a)^n q^{n(n-1)/2}} {(q/a;q)_n}."
},
{
"math_id": 9,
"text": "\\prod_{k=n}^\\infty (1-aq^k)=(aq^n;q)_\\infty = \\frac{(a;q)_\\infty} {(a;q)_n}, "
},
{
"math_id": 10,
"text": "(x;q)_\\infty = \\sum_{n=0}^\\infty \\frac{(-1)^n q^{n(n-1)/2}}{(q;q)_n} x^n"
},
{
"math_id": 11,
"text": "\\frac{1}{(x;q)_\\infty}=\\sum_{n=0}^\\infty \\frac{x^n}{(q;q)_n},"
},
{
"math_id": 12,
"text": "\\frac{(ax;q)_\\infty}{(x;q)_\\infty} = \\sum_{n=0}^\\infty \\frac{(a;q)_n}{(q;q)_n} x^n."
},
{
"math_id": 13,
"text": "\\frac{(q;q)_{\\infty}}{(z;q)_{\\infty}}=\\sum_{n=0}^{\\infty}\\frac{(-1)^{n}q^{n(n+1)/2}}{(q;q)_n(1-zq^{-n})}, \\ |z|<1."
},
{
"math_id": 14,
"text": "q^m a^n"
},
{
"math_id": 15,
"text": "(a;q)_\\infty^{-1} = \\prod_{k=0}^{\\infty} (1-aq^k)^{-1}"
},
{
"math_id": 16,
"text": "(a;q)_\\infty^{-1} = \\sum_{k=0}^\\infty \\left(\\prod_{j=1}^k \\frac{1}{1-q^j} \\right) a^k\n = \\sum_{k=0}^\\infty \\frac{a^k}{(q;q)_k}"
},
{
"math_id": 17,
"text": "(-a;q)_\\infty = \\prod_{k=0}^{\\infty} (1+aq^k)"
},
{
"math_id": 18,
"text": "(-a;q)_\\infty = \\prod_{k=0}^\\infty (1+aq^k)\n = \\sum_{k=0}^\\infty \\left(q^{k\\choose 2} \\prod_{j=1}^k \\frac{1}{1-q^j}\\right) a^k\n = \\sum_{k=0}^\\infty \\frac{q^{k\\choose 2}}{(q;q)_k} a^k"
},
{
"math_id": 19,
"text": "(q)_{\\infty} := (q; q)_{\\infty}"
},
{
"math_id": 20,
"text": "p(n)"
},
{
"math_id": 21,
"text": "\\frac{1}{(q; q)_{\\infty}} = \\sum_{n \\geq 0} p(n) q^n = \\sum_{n \\geq 0} \\frac{q^n}{(q; q)_n} = \\sum_{n \\geq 0} \\frac{q^{n^2}}{(q; q)_n^2}. "
},
{
"math_id": 22,
"text": "(q; q)_{\\infty} = 1 - \\sum_{n \\geq 0} q^{n+1}(q; q)_n = \\sum_{n \\geq 0} q^{\\frac{n(n+1)}{2}}\\frac{(-1)^n}{(q; q)_n}."
},
{
"math_id": 23,
"text": "(a_1,a_2,\\ldots,a_m;q)_n = (a_1;q)_n (a_2;q)_n \\ldots (a_m;q)_n."
},
{
"math_id": 24,
"text": "(a; q)_{n}"
},
{
"math_id": 25,
"text": "[n]_q=\\frac{1-q^n}{1-q}."
},
{
"math_id": 26,
"text": "\n\\begin{align}\n\\left[n\\right]!_q & = \\prod_{k=1}^n [k]_q = [1]_q \\cdot [2]_q \\cdots [n-1]_q \\cdot [n]_q \\\\\n& = \\frac{1-q}{1-q} \\frac{1-q^2}{1-q} \\cdots \\frac{1-q^{n-1}}{1-q} \\frac{1-q^n}{1-q} \\\\\n& = 1 \\cdot (1+q)\\cdots (1+q+\\cdots + q^{n-2}) \\cdot (1+q+\\cdots + q^{n-1}) \\\\\n& = \\frac{(q;q)_n}{(1-q)^n} \\\\\n\\end{align}\n"
},
{
"math_id": 27,
"text": "\\lim_{q\\rightarrow 1}[n]_q = n,"
},
{
"math_id": 28,
"text": "\\lim_{q\\rightarrow 1}[n]!_q = n!."
},
{
"math_id": 29,
"text": "E_1 \\subset E_2 \\subset \\cdots \\subset E_n = S"
},
{
"math_id": 30,
"text": "E_i"
},
{
"math_id": 31,
"text": "[n]!_q"
},
{
"math_id": 32,
"text": "V_1 \\subset V_2 \\subset \\cdots \\subset V_n = V"
},
{
"math_id": 33,
"text": "V_i"
},
{
"math_id": 34,
"text": "\\prod_{k=1}^n [-k]_q = \\frac{(-1)^n\\,[n]!_q}{q^{n(n+1)/2}}"
},
{
"math_id": 35,
"text": "\n\\begin{bmatrix}\nn\\\\\nk\n\\end{bmatrix}_q\n=\n\\frac{[n]!_q}{[n-k]!_q [k]!_q}, \n"
},
{
"math_id": 36,
"text": "\\begin{bmatrix} n \\\\ m \\end{bmatrix}_q = \\begin{bmatrix} n \\\\ n-m \\end{bmatrix}_q"
},
{
"math_id": 37,
"text": "0 \\leq m \\leq n"
},
{
"math_id": 38,
"text": "\n\\begin{align}\n\\begin{bmatrix}\nn+1\\\\\nk\n\\end{bmatrix}_q\n & =\n\\begin{bmatrix}\nn\\\\\nk\n\\end{bmatrix}_q\n+\nq^{n-k+1}\n\\begin{bmatrix}\nn\\\\\nk-1\n\\end{bmatrix}_q \\\\ \n & = \n\\begin{bmatrix} n \\\\ k-1 \\end{bmatrix}_q + q^k \\begin{bmatrix} n \\\\ k \\end{bmatrix}_q.\n\\end{align}\n"
},
{
"math_id": 39,
"text": "q"
},
{
"math_id": 40,
"text": "\n\\begin{align} \n(z; q)_n & = \\sum_{j=0}^n \\begin{bmatrix} n \\\\ j \\end{bmatrix}_q (-z)^j q^{\\binom{j}{2}} = (1-z)(1-qz) \\cdots (1-z q^{n-1}) \\\\ \n(-q; q)_n & = \\sum_{j=0}^n \\begin{bmatrix} n \\\\ j \\end{bmatrix}_{q^2} q^j \\\\ \n(q; q^2)_n & = \\sum_{j=0}^{2n} \\begin{bmatrix} 2n \\\\ j \\end{bmatrix}_q (-1)^j \\\\ \n\\frac{1}{(z; q)_{m+1}} & = \\sum_{n \\geq 0} \\begin{bmatrix} n+m \\\\ n \\end{bmatrix}_q z^n. \n\\end{align}\n"
},
{
"math_id": 41,
"text": "\n\\begin{bmatrix}\nn\\\\\nk_1, \\ldots ,k_m\n\\end{bmatrix}_q\n=\n\\frac{[n]!_q}{[k_1]!_q \\cdots [k_m]!_q}, \n"
},
{
"math_id": 42,
"text": "k_1, \\ldots, k_m"
},
{
"math_id": 43,
"text": "\n\\sum_{i=1}^m k_i = n \n"
},
{
"math_id": 44,
"text": "\nV_1 \\subset \\dots \\subset V_m\n"
},
{
"math_id": 45,
"text": "\n\\dim V_i = \\sum_{j=1}^i k_j\n"
},
{
"math_id": 46,
"text": "q\\to 1"
},
{
"math_id": 47,
"text": "{n\\choose k_1,\\dots ,k_m}"
},
{
"math_id": 48,
"text": "\\{s_1,\\dots,s_m\\}"
},
{
"math_id": 49,
"text": "s_i"
},
{
"math_id": 50,
"text": "k_i"
},
{
"math_id": 51,
"text": "\\Gamma_q(x)=\\frac{(1-q)^{1-x} (q;q)_\\infty}{(q^x;q)_\\infty}"
},
{
"math_id": 52,
"text": "\\Gamma_q(x+1)=[x]_q\\Gamma_q(x)"
},
{
"math_id": 53,
"text": "\\Gamma_q(n+1)=[n]!_q"
}
] | https://en.wikipedia.org/wiki?curid=11149717 |
1115052 | Kullback–Leibler divergence | Mathematical statistics distance measure
In mathematical statistics, the Kullback–Leibler (KL) divergence (also called relative entropy and I-divergence), denoted formula_0, is a type of statistical distance: a measure of how one probability distribution P is different from a second, reference probability distribution Q. Mathematically, it is defined as
formula_1
A simple interpretation of the KL divergence of P from Q is the expected excess surprise from using Q as a model instead of P when the actual distribution is P. While it is a measure of how different two distributions are, and in some sense is thus a "distance", it is not actually a metric, which is the most familiar and formal type of distance. In particular, it is not symmetric in the two distributions (in contrast to variation of information), and does not satisfy the triangle inequality. Instead, in terms of information geometry, it is a type of divergence, a generalization of squared distance, and for certain classes of distributions (notably an exponential family), it satisfies a generalized Pythagorean theorem (which applies to squared distances).
Relative entropy is always a non-negative real number, with value 0 if and only if the two distributions in question are identical. It has diverse applications, both theoretical, such as characterizing the relative (Shannon) entropy in information systems, randomness in continuous time-series, and information gain when comparing statistical models of inference; and practical, such as applied statistics, fluid mechanics, neuroscience, bioinformatics, and machine learning.
<templatestyles src="Template:TOC limit/styles.css" />
Introduction and context.
Consider two probability distributions P and Q. Usually, P represents the data, the observations, or a measured probability distribution. Distribution Q represents instead a theory, a model, a description or an approximation of P. The Kullback–Leibler divergence formula_0 is then interpreted as the average difference of the number of bits required for encoding samples of P using a code optimized for Q rather than one optimized for P. Note that the roles of P and Q can be reversed in some situations where that is easier to compute, such as with the expectation–maximization algorithm (EM) and evidence lower bound (ELBO) computations.
Etymology.
The relative entropy was introduced by Solomon Kullback and Richard Leibler in 1951 as "the mean information for discrimination between formula_2 and formula_3 per observation from formula_4", where one is comparing two probability measures formula_5, and formula_6 are the hypotheses that one is selecting from measure formula_5 (respectively). They denoted this by formula_7, and defined the "'divergence' between formula_4 and formula_8" as the symmetrized quantity formula_9, which had already been defined and used by Harold Jeffreys in 1948. In Kullback's 1959 book "Information Theory and Statistics", the symmetrized form is again referred to as the "divergence", and the relative entropies in each direction are referred to as "directed divergences" between two distributions; Kullback preferred the term discrimination information. The term "divergence" is in contrast to a distance (metric), since the symmetrized divergence does not satisfy the triangle inequality. Numerous references to earlier uses of the symmetrized divergence and to other statistical distances are given there. The asymmetric "directed divergence" has come to be known as the Kullback–Leibler divergence, while the symmetrized "divergence" is now referred to as the Jeffreys divergence.
Definition.
For discrete probability distributions P and Q defined on the same sample space, formula_10 the relative entropy from Q to P is defined to be
formula_11
which is equivalent to
formula_12
In other words, it is the expectation of the logarithmic difference between the probabilities P and Q, where the expectation is taken using the probabilities P.
Relative entropy is only defined in this way if, for all x, formula_13 implies formula_14 (absolute continuity). Otherwise, it is often defined as formula_15, but the value formula_15 is possible even if formula_16 everywhere, provided that formula_17 is infinite in extent. Analogous comments apply to the continuous and general measure cases defined below.
Whenever formula_18 is zero the contribution of the corresponding term is interpreted as zero because
formula_19
For distributions P and Q of a continuous random variable, relative entropy is defined to be the integral
formula_20
where p and q denote the probability densities of P and Q.
More generally, if P and Q are probability measures on a measurable space formula_10 and P is absolutely continuous with respect to Q, then the relative entropy from Q to P is defined as
formula_21
where formula_22 is the Radon–Nikodym derivative of P with respect to Q, i.e. the unique Q almost everywhere defined function r on formula_17 such that formula_23 which exists because P is absolutely continuous with respect to Q. Also we assume the expression on the right-hand side exists. Equivalently (by the chain rule), this can be written as
formula_24
which is the entropy of P relative to Q. Continuing in this case, if formula_25 is any measure on formula_26 for which densities p and q with formula_27 and formula_28 exist (meaning that P and Q are both absolutely continuous with respect to formula_29), then the relative entropy from Q to P is given as
formula_30
Note that such a measure formula_25 for which densities can be defined always exists, since one can take formula_31 although in practice it will usually be one that arises naturally in the context, like the counting measure for discrete distributions, or the Lebesgue measure or a convenient variant thereof (such as Gaussian measure, the uniform measure on the sphere, or Haar measure on a Lie group) for continuous distributions.
The logarithms in these formulae are usually taken to base 2 if information is measured in units of bits, or to base e if information is measured in nats. Most formulas involving relative entropy hold regardless of the base of the logarithm.
Various conventions exist for referring to formula_32 in words. Often it is referred to as the divergence "between" P and Q, but this fails to convey the fundamental asymmetry in the relation. Sometimes, as in this article, it may be described as the divergence of P "from" Q or as the divergence "from" Q "to" P. This reflects the asymmetry in Bayesian inference, which starts "from" a prior Q and updates "to" the posterior P. Another common way to refer to formula_32 is as the relative entropy of P "with respect to" Q or the information gain from P over Q.
Basic example.
Kullback gives the following example (Table 2.1, Example 2.1). Let P and Q be the distributions shown in the table and figure. P is the distribution on the left side of the figure, a binomial distribution with formula_33 and formula_34. Q is the distribution on the right side of the figure, a discrete uniform distribution with the three possible outcomes formula_35 0, 1, 2 (i.e. formula_36), each with probability formula_37.
Relative entropies formula_0 and formula_38 are calculated as follows. This example uses the natural log with base e, designated ln to get results in nats (see units of information):
formula_39
formula_40
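These values can be reproduced with a few lines of code; the following Python sketch (illustrative, using natural logarithms so that the results are in nats) implements the discrete definition directly:

import math

def kl_divergence(p, q):
    # D_KL(P || Q) = sum_x P(x) * ln(P(x) / Q(x)); terms with P(x) = 0 contribute 0
    return sum(px * math.log(px / qx) for px, qx in zip(p, q) if px > 0)

P = [9/25, 12/25, 4/25]   # binomial with N = 2, p = 0.4
Q = [1/3, 1/3, 1/3]       # uniform on {0, 1, 2}

print(kl_divergence(P, Q))   # approximately 0.0852996 nats
print(kl_divergence(Q, P))   # approximately 0.097455 nats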
Interpretations.
Statistics.
In the field of statistics, the Neyman–Pearson lemma states that the most powerful way to distinguish between the two distributions P and Q based on an observation Y (drawn from one of them) is through the log of the ratio of their likelihoods: formula_41. The KL divergence is the expected value of this statistic if Y is actually drawn from P. Kullback motivated the statistic as an expected log likelihood ratio.
Coding.
In the context of coding theory, formula_0 can be constructed by measuring the expected number of extra bits required to code samples from P using a code optimized for Q rather than the code optimized for P.
Inference.
In the context of machine learning, formula_0 is often called the information gain achieved if P would be used instead of Q which is currently used. By analogy with information theory, it is called the "relative entropy" of P with respect to Q.
Expressed in the language of Bayesian inference, formula_0 is a measure of the information gained by revising one's beliefs from the prior probability distribution Q to the posterior probability distribution P. In other words, it is the amount of information lost when Q is used to approximate P.
Information geometry.
In applications, P typically represents the "true" distribution of data, observations, or a precisely calculated theoretical distribution, while Q typically represents a theory, model, description, or approximation of P. In order to find a distribution Q that is closest to P, we can minimize the KL divergence and compute an information projection.
While it is a statistical distance, it is not a metric, the most familiar type of distance, but instead it is a divergence. While metrics are symmetric and generalize "linear" distance, satisfying the triangle inequality, divergences are asymmetric and generalize "squared" distance, in some cases satisfying a generalized Pythagorean theorem. In general formula_0 does not equal formula_38, and the asymmetry is an important part of the geometry. The infinitesimal form of relative entropy, specifically its Hessian, gives a metric tensor that equals the Fisher information metric; see . Relative entropy satisfies a generalized Pythagorean theorem for exponential families (geometrically interpreted as dually flat manifolds), and this allows one to minimize relative entropy by geometric means, for example by information projection and in maximum likelihood estimation.
The relative entropy is the Bregman divergence generated by the negative entropy, but it is also of the form of an f-divergence. For probabilities over a finite alphabet, it is unique in being a member of both of these classes of statistical divergences.
Finance (game theory).
Consider a growth-optimizing investor in a fair game with mutually exclusive outcomes
(e.g. a “horse race” in which the official odds add up to one).
The rate of return expected by such an investor is equal to the relative entropy
between the investor's believed probabilities and the official odds.
This is a special case of a much more general connection between financial returns and divergence measures.
Financial risks are connected to formula_42 via information geometry. Investors' views, the prevailing market view, and risky scenarios form triangles on the relevant manifold of probability distributions. The shape of the triangles determines key financial risks (both qualitatively and quantitatively). For instance, obtuse triangles in which investors' views and risk scenarios appear on “opposite sides” relative to the market describe negative risks, acute triangles describe positive exposure, and the right-angled situation in the middle corresponds to zero risk.
Motivation.
In information theory, the Kraft–McMillan theorem establishes that any directly decodable coding scheme for coding a message to identify one value formula_43 out of a set of possibilities X can be seen as representing an implicit probability distribution formula_44 over X, where formula_45 is the length of the code for formula_43 in bits. Therefore, relative entropy can be interpreted as the expected extra message-length per datum that must be communicated if a code that is optimal for a given (wrong) distribution Q is used, compared to using a code based on the true distribution P: it is the "excess" entropy.
formula_46
where formula_47 is the cross entropy of P and Q, and formula_48 is the entropy of P (which is the same as the cross-entropy of P with itself).
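The decomposition into cross entropy and entropy is easy to check numerically; a minimal sketch (assuming base-2 logarithms, so that all quantities are in bits):

import math

def entropy(p):
    # H(P) = -sum_x P(x) log2 P(x)
    return -sum(px * math.log2(px) for px in p if px > 0)

def cross_entropy(p, q):
    # H(P, Q) = -sum_x P(x) log2 Q(x)
    return -sum(px * math.log2(qx) for px, qx in zip(p, q) if px > 0)

def kl_divergence(p, q):
    # the "excess" number of bits per symbol from coding P with a code built for Q
    return cross_entropy(p, q) - entropy(p)

P = [0.5, 0.25, 0.25]
Q = [0.25, 0.25, 0.5]
print(entropy(P), cross_entropy(P, Q), kl_divergence(P, Q))   # 1.5, 1.75, 0.25 bits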
The relative entropy formula_0 can be thought of geometrically as a statistical distance, a measure of how far the distribution Q is from the distribution P. Geometrically it is a divergence: an asymmetric, generalized form of squared distance. The cross-entropy formula_49 is itself such a measurement (formally a loss function), but it cannot be thought of as a distance, since formula_50 is not zero. This can be fixed by subtracting formula_51 to make formula_0 agree more closely with our notion of distance, as the "excess" loss. The resulting function is asymmetric, and while this can be symmetrized (see ), the asymmetric form is more useful. See for more on the geometric interpretation.
Relative entropy relates to "rate function" in the theory of large deviations.
Arthur Hobson proved that relative entropy is the only measure of difference between probability distributions that satisfies some desired properties, which are the canonical extension to those appearing in a commonly used characterization of entropy. Consequently, mutual information is the only measure of mutual dependence that obeys certain related conditions, since it can be defined in terms of Kullback–Leibler divergence.
Properties.
In particular, if formula_55 and formula_56, then formula_57 formula_25-almost everywhere. The entropy formula_48 thus sets a minimum value for the cross-entropy formula_58, the expected number of bits required when using a code based on Q rather than P; and the Kullback–Leibler divergence therefore represents the expected number of extra bits that must be transmitted to identify a value x drawn from X, if a code is used corresponding to the probability distribution Q, rather than the "true" distribution P.
<templatestyles src="Template:Hidden begin/styles.css"/>[Proof]
Denote formula_83 and note that formula_84. The first derivative of formula_85 may be derived and evaluated as follows
formula_86
Further derivatives may be derived and evaluated as follows
formula_87
Hence solving for formula_0 via the Taylor expansion of formula_85 about formula_88 evaluated at formula_89 yields
formula_90
formula_81 a.s. is a sufficient condition for convergence of the series by the following absolute convergence argument
formula_91
formula_81 a.s. is also a necessary condition for convergence of the series by the following proof by contradiction. Assume that formula_92 with measure strictly greater than formula_88. It then follows that there must exist some values formula_93, formula_94, and formula_95 such that formula_96 and formula_97 with measure formula_98. The previous proof of sufficiency demonstrated that the measure formula_99 component of the series where formula_81 is bounded, so we need only concern ourselves with the behavior of the measure formula_98 component of the series where formula_96. The absolute value of the formula_100th term of this component of the series is then lower bounded by formula_101, which is unbounded as formula_102, so the series diverges.
Duality formula for variational inference.
The following result, due to Donsker and Varadhan, is known as Donsker and Varadhan's variational formula.
<templatestyles src="Math_theorem/styles.css" />
Theorem [Duality Formula for Variational Inference] —
Let formula_103 be a set endowed with an appropriate formula_104-field formula_105, and two probability measures P and Q, which formulate two probability spaces formula_106 and formula_107, with formula_108. (formula_108 indicates that Q is absolutely continuous with respect to P.) Let h be a real-valued integrable random variable on formula_106. Then the following equality holds
formula_109
Further, the supremum on the right-hand side is attained if and only if it holds
formula_110
almost surely with respect to probability measure P, where formula_111 denotes the Radon-Nikodym derivative of Q with respect to P .
<templatestyles src="Math_proof/styles.css" />Proof
For a short proof assuming integrability of formula_112 with respect to P, let formula_113 have P-density formula_114, i.e. formula_115 Then
formula_116
Therefore,
formula_117
where the last inequality follows from formula_118, for which equality occurs if and only if formula_119. The conclusion follows.
For an alternative proof using measure theory, see the references.
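For a finite sample space the duality formula can be verified directly: the supremum is attained at the exponentially tilted measure described in the theorem. The sketch below (illustrative only) compares the left-hand side with the value of the objective at that maximizer:

import math

P = [0.2, 0.5, 0.3]
h = [1.0, -0.5, 2.0]

# left-hand side: log E_P[exp h]
Z = sum(p * math.exp(hx) for p, hx in zip(P, h))
lhs = math.log(Z)

# maximizer: Q(x) proportional to exp(h(x)) P(x)
Q = [p * math.exp(hx) / Z for p, hx in zip(P, h)]

# objective at Q: E_Q[h] - D_KL(Q || P)
expectation = sum(q * hx for q, hx in zip(Q, h))
kl = sum(q * math.log(q / p) for q, p in zip(Q, P) if q > 0)
print(lhs, expectation - kl)   # the two numbers agree up to rounding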
Examples.
Multivariate normal distributions.
Suppose that we have two multivariate normal distributions, with means formula_120 and with (non-singular) covariance matrices formula_121 If the two distributions have the same dimension, k, then the relative entropy between the distributions is as follows:
formula_122
The logarithm in the last term must be taken to base e since all terms apart from the last are base-e logarithms of expressions that are either factors of the density function or otherwise arise naturally. The equation therefore gives a result measured in nats. Dividing the entire expression above by formula_123 yields the divergence in bits.
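The closed form above can be evaluated directly; the following NumPy sketch (illustrative, returning the result in nats, with the first argument playing the role of P and the second that of Q) uses the standard expression in terms of a trace, a quadratic form and a log-determinant:

import numpy as np

def kl_mvn(mu0, S0, mu1, S1):
    # D_KL( N(mu0, S0) || N(mu1, S1) ) in nats, using the closed form
    # 0.5 * ( tr(S1^-1 S0) + (mu1 - mu0)^T S1^-1 (mu1 - mu0) - k + ln(det S1 / det S0) )
    k = mu0.shape[0]
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    term_trace = np.trace(S1_inv @ S0)
    term_quad = diff @ S1_inv @ diff
    term_logdet = np.log(np.linalg.det(S1) / np.linalg.det(S0))
    return 0.5 * (term_trace + term_quad - k + term_logdet)

mu0, S0 = np.array([0.0, 0.0]), np.eye(2)
mu1, S1 = np.array([1.0, 0.0]), 2.0 * np.eye(2)
print(kl_mvn(mu0, S0, mu1, S1))   # 0.5 * (1 + 0.5 - 2 + ln 4), about 0.443 nats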
In a numerical implementation, it is helpful to express the result in terms of the Cholesky decompositions formula_124 such that formula_125 and formula_126. Then with M and y solutions to the triangular linear systems formula_127, and formula_128,
formula_129
A special case, and a common quantity in variational inference, is the relative entropy between a diagonal multivariate normal, and a standard normal distribution (with zero mean and unit variance):
formula_130
For two univariate normal distributions p and q the above simplifies to
formula_131
In the case of co-centered normal distributions with formula_132, this simplifies to:
formula_133
Uniform distributions.
Consider two uniform distributions, with the support of formula_134 enclosed within formula_135 (formula_136). Then the information gain is:
formula_137
Intuitively, the information gain to a k times narrower uniform distribution contains formula_138 bits. This connects with the use of bits in computing, where formula_139 bits would be needed to identify one element of a k long stream.
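A minimal check of the closed form (illustrative): for P uniform on an interval of width 0.5 nested inside the support of Q, uniform on an interval of width 1, the information gain is log 2 nats, i.e. exactly one bit:

import math

width_p, width_q = 0.5, 1.0                  # widths of the supports of P and Q
print(math.log(width_q / width_p))           # about 0.693 nats
print(math.log2(width_q / width_p))          # exactly 1.0 bit (k = 2 times narrower)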
Relation to metrics.
While relative entropy is a statistical distance, it is not a metric on the space of probability distributions, but instead it is a divergence. While metrics are symmetric and generalize "linear" distance, satisfying the triangle inequality, divergences are asymmetric in general and generalize "squared" distance, in some cases satisfying a generalized Pythagorean theorem. In general formula_0 does not equal formula_38, and while this can be symmetrized (see ), the asymmetry is an important part of the geometry.
It generates a topology on the space of probability distributions. More concretely, if formula_140 is a sequence of distributions such that
formula_141,
then it is said that
formula_142.
Pinsker's inequality entails that
formula_143,
where the latter stands for the usual convergence in total variation.
Fisher information metric.
Relative entropy is directly related to the Fisher information metric. This can be made explicit as follows. Assume that the probability distributions P and Q are both parameterized by some (possibly multi-dimensional) parameter formula_144. Consider then two close by values of formula_145 and formula_146 so that the parameter formula_144 differs by only a small amount from the parameter value formula_147. Specifically, up to first order one has (using the Einstein summation convention)
formula_148
with formula_149 a small change of formula_144 in the j direction, and formula_150 the corresponding rate of change in the probability distribution. Since relative entropy has an absolute minimum 0 for formula_54, i.e. formula_151, it changes only to "second" order in the small parameters formula_152. More formally, as for any minimum, the first derivatives of the divergence vanish
formula_153
and by the Taylor expansion one has up to second order
formula_154
where the Hessian matrix of the divergence
formula_155
must be positive semidefinite. Letting formula_147 vary (and dropping the subindex 0) the Hessian formula_156 defines a (possibly degenerate) Riemannian metric on the θ parameter space, called the Fisher information metric.
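The second-order behaviour can be seen numerically in a one-parameter family. For a Bernoulli distribution the Fisher information is 1/(theta(1 − theta)), and the sketch below (illustrative) compares the exact divergence with the quadratic approximation for a small change of the parameter:

import math

def kl_bernoulli(p, q):
    # D_KL( Bernoulli(p) || Bernoulli(q) ) in nats
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

theta0 = 0.3
delta = 1e-3
fisher = 1.0 / (theta0 * (1.0 - theta0))   # Fisher information of Bernoulli(theta0)

exact = kl_bernoulli(theta0 + delta, theta0)
approx = 0.5 * fisher * delta**2
print(exact, approx)   # both are about 2.38e-6, agreeing to second order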
Fisher information metric theorem.
When formula_157 satisfies the following regularity conditions:
formula_158 exist,
formula_159
where ξ is independent of ρ
formula_160
then:
formula_161
Variation of information.
Another information-theoretic metric is variation of information, which is roughly a symmetrization of conditional entropy. It is a metric on the set of partitions of a discrete probability space.
Relation to other quantities of information theory.
Many of the other quantities of information theory can be interpreted as applications of relative entropy to specific cases.
Self-information.
The self-information, also known as the information content of a signal, random variable, or event is defined as the negative logarithm of the probability of the given outcome occurring.
When applied to a discrete random variable, the self-information can be represented as
formula_162
is the relative entropy of the probability distribution formula_163 from a Kronecker delta representing certainty that formula_164 — i.e. the number of extra bits that must be transmitted to identify i if only the probability distribution formula_163 is available to the receiver, not the fact that formula_164.
Mutual information.
The mutual information,
formula_165
is the relative entropy of the joint probability distribution formula_166 from the product formula_167 of the two marginal probability distributions — i.e. the expected number of extra bits that must be transmitted to identify X and Y if they are coded using only their marginal distributions instead of the joint distribution. Equivalently, if the joint probability formula_166 "is" known, it is the expected number of extra bits that must on average be sent to identify Y if the value of X is not already known to the receiver.
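This relationship can be checked on a small discrete example; the sketch below (illustrative) computes the mutual information of a 2×2 joint distribution as the relative entropy between the joint and the product of its marginals, in bits:

import math

# joint distribution P(X, Y) as a 2x2 table
joint = [[0.4, 0.1],
         [0.1, 0.4]]

px = [sum(row) for row in joint]                                # marginal of X
py = [sum(joint[i][j] for i in range(2)) for j in range(2)]     # marginal of Y

mi = sum(joint[i][j] * math.log2(joint[i][j] / (px[i] * py[j]))
         for i in range(2) for j in range(2) if joint[i][j] > 0)
print(mi)   # mutual information I(X; Y) in bits, about 0.278 for this table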
Shannon entropy.
The Shannon entropy,
formula_168
is the number of bits which would have to be transmitted to identify X from N equally likely possibilities, "less" the relative entropy of the uniform distribution on the random variates of X, formula_169, from the true distribution formula_170 — i.e. "less" the expected number of bits saved, which would have had to be sent if the value of X were coded according to the uniform distribution formula_169 rather than the true distribution formula_170. This definition of Shannon entropy forms the basis of E.T. Jaynes's alternative generalization to continuous distributions, the limiting density of discrete points (as opposed to the usual differential entropy), which defines the continuous entropy as
formula_171
which is equivalent to:
formula_172
Conditional entropy.
The conditional entropy,
formula_173
is the number of bits which would have to be transmitted to identify X from N equally likely possibilities, "less" the relative entropy of the product distribution formula_174 from the true joint distribution formula_166 — i.e. "less" the expected number of bits saved which would have had to be sent if the value of X were coded according to the uniform distribution formula_169 rather than the conditional distribution formula_175 of X given Y.
Cross entropy.
When we have a set of possible events, coming from the distribution p, we can encode them (with a lossless data compression) using entropy encoding. This compresses the data by replacing each fixed-length input symbol with a corresponding unique, variable-length, prefix-free code (e.g.: the events (A, B, C) with probabilities p = (1/2, 1/4, 1/4) can be encoded as the bits (0, 10, 11)). If we know the distribution p in advance, we can devise an encoding that would be optimal (e.g.: using Huffman coding), meaning that the messages we encode will have the shortest length on average (assuming the encoded events are sampled from p), which will be equal to Shannon's entropy of p (denoted as formula_176). However, if we use a different probability distribution (q) when creating the entropy encoding scheme, then a larger number of bits will be used (on average) to identify an event from a set of possibilities. This new (larger) number is measured by the cross entropy between p and q.
The cross entropy between two probability distributions (p and q) measures the average number of bits needed to identify an event from a set of possibilities, if a coding scheme is used based on a given probability distribution q, rather than the "true" distribution p. The cross entropy for two distributions p and q over the same probability space is thus defined as follows.
formula_177
For explicit derivation of this, see the Motivation section above.
Under this scenario, relative entropies (kl-divergence) can be interpreted as the extra number of bits, on average, that are needed (beyond formula_176) for encoding the events because of using q for constructing the encoding scheme instead of p.
Bayesian updating.
In Bayesian statistics, relative entropy can be used as a measure of the information gain in moving from a prior distribution to a posterior distribution: formula_178. If some new fact formula_179 is discovered, it can be used to update the posterior distribution for X from formula_180 to a new posterior distribution formula_181 using Bayes' theorem:
formula_182
This distribution has a new entropy:
formula_183
which may be less than or greater than the original entropy formula_184. However, from the standpoint of the new probability distribution one can estimate that to have used the original code based on formula_180 instead of a new code based on formula_185 would have added an expected number of bits:
formula_186
to the message length. This therefore represents the amount of useful information, or information gain, about X, that has been learned by discovering formula_179.
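A small worked example (illustrative, not taken from the sources) shows the computation: starting from a uniform prior over two hypotheses, an observation updates the distribution by Bayes' theorem, and the information gain is the relative entropy of the posterior from the prior:

import math

def kl_divergence_bits(p, q):
    return sum(px * math.log2(px / qx) for px, qx in zip(p, q) if px > 0)

prior = [0.5, 0.5]         # p(x) for two hypotheses
likelihood = [0.9, 0.2]    # p(y | x) for the observed fact y

evidence = sum(l * p for l, p in zip(likelihood, prior))
posterior = [l * p / evidence for l, p in zip(likelihood, prior)]

print(posterior)                             # [0.818..., 0.181...]
print(kl_divergence_bits(posterior, prior))  # information gain in bits, about 0.32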
If a further piece of data, formula_187, subsequently comes in, the probability distribution for x can be updated further, to give a new best guess formula_188. If one reinvestigates the information gain for using formula_189 rather than formula_190, it turns out that it may be either greater or less than previously estimated:
formula_191 may be ≤ or > than formula_192
and so the combined information gain does "not" obey the triangle inequality:
formula_193 may be <, = or > than formula_194
All one can say is that on "average", averaging using formula_195, the two sides will average out.
Bayesian experimental design.
A common goal in Bayesian experimental design is to maximise the expected relative entropy between the prior and the posterior. When posteriors are approximated to be Gaussian distributions, a design maximising the expected relative entropy is called Bayes d-optimal.
Discrimination information.
Relative entropy formula_196 can also be interpreted as the expected discrimination information for formula_2 over formula_197: the mean information per sample for discriminating in favor of a hypothesis formula_2 against a hypothesis formula_197, when hypothesis formula_2 is true. Another name for this quantity, given to it by I. J. Good, is the expected weight of evidence for formula_2 over formula_197 to be expected from each sample.
The expected weight of evidence for formula_2 over formula_197 is not the same as the information gain expected per sample about the probability distribution formula_198 of the hypotheses,
formula_199
Either of the two quantities can be used as a utility function in Bayesian experimental design, to choose an optimal next question to investigate: but they will in general lead to rather different experimental strategies.
On the entropy scale of "information gain" there is very little difference between near certainty and absolute certainty—coding according to a near certainty requires hardly any more bits than coding according to an absolute certainty. On the other hand, on the logit scale implied by weight of evidence, the difference between the two is enormous – infinite perhaps; this might reflect the difference between being almost sure (on a probabilistic level) that, say, the Riemann hypothesis is correct, compared to being certain that it is correct because one has a mathematical proof. These two different scales of loss function for uncertainty are "both" useful, according to how well each reflects the particular circumstances of the problem in question.
Principle of minimum discrimination information.
The idea of relative entropy as discrimination information led Kullback to propose the Principle of <templatestyles src="Template:Visible anchor/styles.css" />Minimum Discrimination Information (MDI): given new facts, a new distribution f should be chosen which is as hard to discriminate from the original distribution formula_200 as possible; so that the new data produces as small an information gain formula_201 as possible.
For example, if one had a prior distribution formula_202 over x and a, and subsequently learnt the true distribution of a was formula_203, then the relative entropy between the new joint distribution for x and a, formula_204, and the earlier prior distribution would be:
formula_205
i.e. the sum of the relative entropy of formula_206 the prior distribution for a from the updated distribution formula_203, plus the expected value (using the probability distribution formula_203) of the relative entropy of the prior conditional distribution formula_207 from the new conditional distribution formula_208. (Note that often the later expected value is called the "conditional relative entropy" (or "conditional Kullback–Leibler divergence") and denoted by formula_209) This is minimized if formula_210 over the whole support of formula_203; and we note that this result incorporates Bayes' theorem, if the new distribution formula_203 is in fact a δ function representing certainty that a has one particular value.
MDI can be seen as an extension of Laplace's Principle of Insufficient Reason, and the Principle of Maximum Entropy of E.T. Jaynes. In particular, it is the natural extension of the principle of maximum entropy from discrete to continuous distributions, for which Shannon entropy ceases to be so useful (see "differential entropy"), but the relative entropy continues to be just as relevant.
In the engineering literature, MDI is sometimes called the Principle of Minimum Cross-Entropy (MCE) or Minxent for short. Minimising relative entropy from m to p with respect to m is equivalent to minimizing the cross-entropy of p and m, since
formula_211
which is appropriate if one is trying to choose an adequate approximation to p. However, this is just as often "not" the task one is trying to achieve. Instead, just as often it is m that is some fixed prior reference measure, and p that one is attempting to optimise by minimising formula_212 subject to some constraint. This has led to some ambiguity in the literature, with some authors attempting to resolve the inconsistency by redefining cross-entropy to be formula_212, rather than formula_213 .
Relationship to available work.
Surprisals add where probabilities multiply. The surprisal for an event of probability p is defined as formula_215. If k is formula_216 then surprisal is in formula_217 nats, bits, or formula_218 so that, for instance, there are N bits of surprisal for landing all "heads" on a toss of N coins.
Best-guess states (e.g. for atoms in a gas) are inferred by maximizing the "average surprisal" S (entropy) for a given set of control parameters (like pressure P or volume V). This constrained entropy maximization, both classically and quantum mechanically, minimizes Gibbs availability in entropy units formula_219 where Z is a constrained multiplicity or partition function.
When temperature T is fixed, free energy (formula_220) is also minimized. Thus if formula_221 and number of molecules N are constant, the Helmholtz free energy formula_222 (where U is energy and S is entropy) is minimized as a system "equilibrates." If T and P are held constant (say during processes in your body), the Gibbs free energy formula_223 is minimized instead. The change in free energy under these conditions is a measure of available work that might be done in the process. Thus available work for an ideal gas at constant temperature formula_214 and pressure formula_224 is formula_225 where formula_226 and formula_227 (see also Gibbs inequality).
More generally the work available relative to some ambient is obtained by multiplying ambient temperature formula_214 by relative entropy or "net surprisal" formula_228 defined as the average value of formula_229 where formula_230 is the probability of a given state under ambient conditions. For instance, the work available in equilibrating a monatomic ideal gas to ambient values of formula_231 and formula_214 is thus formula_232, where relative entropy
formula_233
The resulting contours of constant relative entropy, shown at right for a mole of Argon at standard temperature and pressure, for example put limits on the conversion of hot to cold as in flame-powered air-conditioning or in the unpowered device to convert boiling-water to ice-water discussed here. Thus relative entropy measures thermodynamic availability in bits.
Quantum information theory.
For density matrices P and Q on a Hilbert space, the quantum relative entropy from Q to P is defined to be
formula_234
In quantum information science the minimum of formula_235 over all separable states Q can also be used as a measure of entanglement in the state P.
Relationship between models and reality.
Just as relative entropy of "actual from ambient" measures thermodynamic availability, relative entropy of "reality from a model" is also useful even if the only clues we have about reality are some experimental measurements. In the former case relative entropy describes "distance to equilibrium" or (when multiplied by ambient temperature) the amount of "available work", while in the latter case it tells you about surprises that reality has up its sleeve or, in other words, "how much the model has yet to learn".
Although this tool for evaluating models against systems that are accessible experimentally may be applied in any field, its application to selecting a statistical model via Akaike information criterion are particularly well described in papers and a book by Burnham and Anderson. In a nutshell the relative entropy of reality from a model may be estimated, to within a constant additive term, by a function of the deviations observed between data and the model's predictions (like the mean squared deviation) . Estimates of such divergence for models that share the same additive term can in turn be used to select among models.
When trying to fit parametrized models to data there are various estimators which attempt to minimize relative entropy, such as maximum likelihood and maximum spacing estimators.
Symmetrised divergence.
Kullback and Leibler also considered the symmetrized function:
formula_236
which they referred to as the "divergence", though today the "KL divergence" refers to the asymmetric function (see for the evolution of the term). This function is symmetric and nonnegative, and had already been defined and used by Harold Jeffreys in 1948; it is accordingly called the Jeffreys divergence.
This quantity has sometimes been used for feature selection in classification problems, where P and Q are the conditional pdfs of a feature under two different classes. In the Banking and Finance industries, this quantity is referred to as Population Stability Index (PSI), and is used to assess distributional shifts in model features through time.
An alternative is given via the formula_237-divergence,
formula_238
which can be interpreted as the expected information gain about X from discovering which probability distribution X is drawn from, P or Q, if they currently have probabilities formula_237 and formula_239 respectively.
The value formula_240 gives the Jensen–Shannon divergence, defined by
formula_241
where M is the average of the two distributions,
formula_242
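Both symmetrizations are easy to compute from the one-sided divergence; the sketch below (illustrative, in nats) evaluates the Jeffreys divergence and the Jensen–Shannon divergence for the same pair of distributions used in the basic example above:

import math

def kl(p, q):
    return sum(px * math.log(px / qx) for px, qx in zip(p, q) if px > 0)

P = [9/25, 12/25, 4/25]
Q = [1/3, 1/3, 1/3]
M = [(px + qx) / 2 for px, qx in zip(P, Q)]   # mixture midpoint

jeffreys = kl(P, Q) + kl(Q, P)                       # symmetrized divergence
jensen_shannon = 0.5 * kl(P, M) + 0.5 * kl(Q, M)     # the lambda = 1/2 case
print(jeffreys, jensen_shannon)   # about 0.1828 and 0.0225 nats respectively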
We can also interpret formula_243 as the capacity of a noisy information channel with two inputs giving the output distributions P and Q. The Jensen–Shannon divergence, like all f-divergences, is "locally" proportional to the Fisher information metric. It is similar to the Hellinger metric (in the sense that it induces the same affine connection on a statistical manifold).
Furthermore, the Jensen–Shannon divergence can be generalized using abstract statistical M-mixtures relying on an abstract mean M.
Relationship to other probability-distance measures.
There are many other important measures of probability distance. Some of these are particularly connected with relative entropy. For example:
Other notable measures of distance include the Hellinger distance, "histogram intersection", "Chi-squared statistic", "quadratic form distance", "match distance", "Kolmogorov–Smirnov distance", and "earth mover's distance".
Data differencing.
Just as "absolute" entropy serves as theoretical background for data "compression", "relative" entropy serves as theoretical background for data "differencing" – the absolute entropy of a set of data in this sense being the data required to reconstruct it (minimum compressed size), while the relative entropy of a target set of data, given a source set of data, is the data required to reconstruct the target "given" the source (minimum size of a patch).
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "D_\\text{KL}(P \\parallel Q)"
},
{
"math_id": 1,
"text": "D_\\text{KL}(P \\parallel Q) = \\sum_{ x \\in \\mathcal{X} } P(x)\\ \\log\\left(\\frac{\\ P(x)\\ }{ Q(x) }\\right)."
},
{
"math_id": 2,
"text": "H_1"
},
{
"math_id": 3,
"text": "H_2"
},
{
"math_id": 4,
"text": "\\mu_1"
},
{
"math_id": 5,
"text": "\\mu_1, \\mu_2"
},
{
"math_id": 6,
"text": "H_1, H_2"
},
{
"math_id": 7,
"text": "I(1:2)"
},
{
"math_id": 8,
"text": "\\mu_2"
},
{
"math_id": 9,
"text": "J(1,2) = I(1:2) + I(2:1)"
},
{
"math_id": 10,
"text": "\\ \\mathcal{X}\\ ,"
},
{
"math_id": 11,
"text": " D_\\text{KL}(P \\parallel Q) = \\sum_{ x \\in \\mathcal{X} } P(x)\\ \\log\\left(\\frac{\\ P(x)\\ }{ Q(x) }\\right)\\ ,"
},
{
"math_id": 12,
"text": " D_\\text{KL}(P \\parallel Q) = -\\sum_{ x \\in \\mathcal{X} } P(x)\\ \\log\\left(\\frac{\\ Q(x)\\ }{P(x)}\\right) ~."
},
{
"math_id": 13,
"text": "\\ Q(x) = 0\\ "
},
{
"math_id": 14,
"text": "\\ P(x) = 0\\ "
},
{
"math_id": 15,
"text": "\\ +\\infty\\ "
},
{
"math_id": 16,
"text": "\\ Q(x) \\ne 0\\ "
},
{
"math_id": 17,
"text": "\\ \\mathcal{X}\\ "
},
{
"math_id": 18,
"text": "\\ P(x)\\ "
},
{
"math_id": 19,
"text": " \\lim_{x \\to 0^{+}} x \\log(x) = 0 ~."
},
{
"math_id": 20,
"text": " D_\\text{KL}(P \\parallel Q) = \\int_{-\\infty}^\\infty p(x)\\ \\log\\left(\\frac{p(x)}{q(x)}\\right)\\ \\mathrm{d}\\ \\!x\\ ,"
},
{
"math_id": 21,
"text": "D_\\text{KL}(P \\parallel Q) = \\int_{ x \\in \\mathcal{X} }\\ \\log\\left(\\frac{P(\\mathrm{d}\\ \\!x)}{Q(\\mathrm{d}\\ \\!x)}\\right)\\ P(\\mathrm{d}\\ \\!x)\\ ,"
},
{
"math_id": 22,
"text": "\\ \\frac{\\ P( \\mathrm{d}\\ \\!x )\\ }{Q( \\mathrm{d}\\ \\!x ) \\ }"
},
{
"math_id": 23,
"text": "\\ P(\\mathrm{d}\\ \\!x) = r(x)Q(\\mathrm{d}\\ \\!x)\\ "
},
{
"math_id": 24,
"text": " D_\\text{KL}(P \\parallel Q) = \\int_{ x \\in \\mathcal{X} } \\frac{P(\\mathrm{d}\\ \\!x)}{Q(\\mathrm{d}\\ \\!x)}\\ \\log\\left(\\frac{P(\\mathrm{d}\\ \\!x)}{Q(\\mathrm{d}\\ \\!x)}\\right)\\ Q(\\mathrm{d}\\ \\!x)\\ ,"
},
{
"math_id": 25,
"text": "\\mu"
},
{
"math_id": 26,
"text": "\\mathcal{X}"
},
{
"math_id": 27,
"text": "\\ P(\\mathrm{d}\\ \\!x) = p(x)\\mu(\\mathrm{d}\\ \\!x)\\ "
},
{
"math_id": 28,
"text": "\\ Q(\\mathrm{d}\\ \\!x) = q(x) \\mu(\\mathrm{d}\\ \\!x)\\ "
},
{
"math_id": 29,
"text": "\\ \\mu\\ "
},
{
"math_id": 30,
"text": " D_\\text{KL}(P \\parallel Q) = \\int_{x \\in \\mathcal{X}} p(x)\\ \\log\\left( \\frac{\\ p(x)\\ } {q(x)} \\right)\\ \\mu(\\mathrm{d}\\ \\!x) ~."
},
{
"math_id": 31,
"text": "\\ \\mu = \\frac{1}{2} \\left( P + Q \\right)\\ "
},
{
"math_id": 32,
"text": "\\ D_\\text{KL}(P \\parallel Q)\\ "
},
{
"math_id": 33,
"text": "N = 2"
},
{
"math_id": 34,
"text": "p = 0.4"
},
{
"math_id": 35,
"text": "x ="
},
{
"math_id": 36,
"text": "\\mathcal{X}=\\{0,1,2\\}"
},
{
"math_id": 37,
"text": "p = 1/3"
},
{
"math_id": 38,
"text": "D_\\text{KL}(Q \\parallel P)"
},
{
"math_id": 39,
"text": "\\begin{align}\nD_\\text{KL}(P \\parallel Q) &= \\sum_{x\\in\\mathcal{X}} P(x) \\ln\\left(\\frac{P(x)}{Q(x)}\\right) \\\\\n &= \\frac{9}{25} \\ln\\left(\\frac{9/25}{1/3}\\right)\n+ \\frac{12}{25} \\ln\\left(\\frac{12/25}{1/3}\\right)\n+ \\frac{4}{25} \\ln\\left(\\frac{4/25}{1/3}\\right) \\\\\n& = \\frac{1}{25} \\left(32 \\ln(2) + 55 \\ln(3) - 50 \\ln(5) \\right) \\approx 0.0852996,\n\\end{align}"
},
{
"math_id": 40,
"text": "\\begin{align}\nD_\\text{KL}(Q \\parallel P) &= \\sum_{x\\in\\mathcal{X}} Q(x) \\ln\\left(\\frac{Q(x)}{P(x)}\\right) \\\\\n &= \\frac{1}{3} \\ln\\left(\\frac{1/3}{9/25}\\right)\n+ \\frac{1}{3} \\ln\\left(\\frac{1/3}{12/25}\\right)\n+ \\frac{1}{3} \\ln\\left(\\frac{1/3}{4/25}\\right) \\\\\n &= \\frac{1}{3} \\left(-4 \\ln(2) - 6 \\ln(3) + 6 \\ln(5) \\right) \\approx 0.097455.\n\\end{align}"
},
{
"math_id": 41,
"text": "\\log P(Y) - \\log Q(Y)"
},
{
"math_id": 42,
"text": "D_ \\text{KL} "
},
{
"math_id": 43,
"text": "x_i"
},
{
"math_id": 44,
"text": "q(x_i)=2^{-\\ell_i}"
},
{
"math_id": 45,
"text": "\\ell_i"
},
{
"math_id": 46,
"text": "\\begin{align}\nD_\\text{KL}(P\\parallel Q) &= \\sum_{x\\in\\mathcal{X}} p(x) \\log \\frac{1}{q(x)} - \\sum_{x\\in\\mathcal{X}} p(x) \\log \\frac{1}{p(x)} \\\\[5pt]\n &= \\Eta(P, Q) - \\Eta(P)\n\\end{align}"
},
{
"math_id": 47,
"text": "\\Eta(P,Q)"
},
{
"math_id": 48,
"text": "\\Eta(P)"
},
{
"math_id": 49,
"text": "H(P,Q)"
},
{
"math_id": 50,
"text": "H(P,P)=:H(P)"
},
{
"math_id": 51,
"text": "H(P)"
},
{
"math_id": 52,
"text": "D_\\text{KL}(P \\parallel Q) \\geq 0,"
},
{
"math_id": 53,
"text": "D_\\text{KL}(P\\parallel Q)"
},
{
"math_id": 54,
"text": "P = Q"
},
{
"math_id": 55,
"text": "P(dx) = p(x) \\mu(dx)"
},
{
"math_id": 56,
"text": "Q(dx) = q(x)\\mu(dx)"
},
{
"math_id": 57,
"text": "p(x) = q(x)"
},
{
"math_id": 58,
"text": "\\Eta(P, Q)"
},
{
"math_id": 59,
"text": "y(x)"
},
{
"math_id": 60,
"text": "P(dx) = p(x) \\, dx = \\tilde{p}(y) \\, dy = \\tilde{p}(y(x)) |\\tfrac{dy}{dx}(x)| \\,dx "
},
{
"math_id": 61,
"text": "Q(dx)= q(x)\\, dx = \\tilde{q}(y) \\, dy = \\tilde{q}(y) |\\tfrac{dy}{dx}(x)| dx"
},
{
"math_id": 62,
"text": "|\\tfrac{dy}{dx}(x)|"
},
{
"math_id": 63,
"text": "\\begin{align}\n D_\\text{KL}(P \\parallel Q)\n &= \\int_{x_a}^{x_b} p(x) \\log\\left(\\frac{p(x)}{q(x)}\\right)\\, dx \\\\[6pt]\n &= \\int_{x_a}^{x_b} \\tilde{p}(y(x)) |\\frac{dy}{dx}(x)| \\log\\left(\\frac{\\tilde{p}(y(x))\\, |\\frac{dy}{dx}(x)|}{\\tilde{q}(y(x))\\, |\\frac{dy}{dx}(x)|}\\right)\\, dx \\\\\n &= \\int_{y_a}^{y_b} \\tilde{p}(y)\\log\\left(\\frac{\\tilde{p}(y)}{\\tilde{q}(y)}\\right)\\, dy\n\\end{align}"
},
{
"math_id": 64,
"text": "y_a = y(x_a)"
},
{
"math_id": 65,
"text": "y_b = y(x_b)"
},
{
"math_id": 66,
"text": "p(x)"
},
{
"math_id": 67,
"text": "q(x)"
},
{
"math_id": 68,
"text": "P(dx) = p(x) \\, dx"
},
{
"math_id": 69,
"text": "P_1, P_2"
},
{
"math_id": 70,
"text": "P(dx, dy) = P_1(dx)P_2(dy)"
},
{
"math_id": 71,
"text": "Q(dx, dy) = Q_1(dx)Q_2(dy)"
},
{
"math_id": 72,
"text": "Q_1, Q_2"
},
{
"math_id": 73,
"text": " D_\\text{KL}(P \\parallel Q) = D_\\text{KL}(P_1 \\parallel Q_1) + D_\\text{KL}(P_2 \\parallel Q_2)."
},
{
"math_id": 74,
"text": " D_\\text{KL}(P \\parallel Q)"
},
{
"math_id": 75,
"text": "(P,Q)"
},
{
"math_id": 76,
"text": "(P_1,Q_1)"
},
{
"math_id": 77,
"text": "(P_2,Q_2)"
},
{
"math_id": 78,
"text": "D_\\text{KL}(\\lambda P_1 + (1 - \\lambda) P_2 \\parallel \\lambda Q_1 + (1 - \\lambda) Q_2) \\le \n\\lambda D_\\text{KL}(P_1 \\parallel Q_1) + (1 - \\lambda)D_\\text{KL}(P_2 \\parallel Q_2) \\text{ for } 0 \\le \\lambda \\le 1."
},
{
"math_id": 79,
"text": "P=Q"
},
{
"math_id": 80,
"text": "D_\\text{KL}(P \\parallel Q) = \\sum_{n=2}^\\infty \\frac {1}{n(n-1)} \\sum_{x \\in \\mathcal{X}} \\frac{(Q(x) - P(x))^n}{Q(x)^{n-1}} "
},
{
"math_id": 81,
"text": "P \\leq 2Q"
},
{
"math_id": 82,
"text": "Q"
},
{
"math_id": 83,
"text": "f(\\alpha) := D_\\text{KL}((1-\\alpha) Q + \\alpha P \\parallel Q)"
},
{
"math_id": 84,
"text": "D_\\text{KL}(P \\parallel Q) = f(1)"
},
{
"math_id": 85,
"text": "f"
},
{
"math_id": 86,
"text": "\n\\begin{align}\nf'(\\alpha) &= \\sum_{x \\in \\mathcal{X}} (P(x) - Q(x)) \\left(\\log\\left(\\frac{(1-\\alpha) Q(x) + \\alpha P(x)}{Q(x)}\\right) + 1\\right)\n\\\\ &= \\sum_{x \\in \\mathcal{X}} (P(x) - Q(x)) \\log\\left(\\frac{(1-\\alpha) Q(x) + \\alpha P(x)}{Q(x)}\\right)\n\\\\ f'(0) &= 0\n\\end{align}\n"
},
{
"math_id": 87,
"text": "\n\\begin{align}\nf''(\\alpha) &= \\sum_{x \\in \\mathcal{X}} \\frac{(P(x) - Q(x))^2}{(1-\\alpha) Q(x) + \\alpha P(x)}\n\\\\ f''(0) &= \\sum_{x \\in \\mathcal{X}} \\frac{(P(x) - Q(x))^2}{Q(x)}\n\\\\ f^{(n)}(\\alpha) &= (-1)^n (n-2)!\\sum_{x \\in \\mathcal{X}} \\frac{(P(x) - Q(x))^n}{\\left((1-\\alpha) Q(x) + \\alpha P(x)\\right)^{n-1}}\n\\\\ f^{(n)}(0) &= (-1)^n (n-2)!\\sum_{x \\in \\mathcal{X}} \\frac{(P(x) - Q(x))^n}{Q(x)^{n-1}}\n\\end{align}\n"
},
{
"math_id": 88,
"text": "0"
},
{
"math_id": 89,
"text": "\\alpha=1"
},
{
"math_id": 90,
"text": "\n\\begin{align}\nD_\\text{KL}(P \\parallel Q) &= \\sum_{n=0}^\\infty \\frac{f^{(n)}(0)}{n!}\n\\\\ &= \\sum_{n=2}^\\infty \\frac{1}{n(n-1)} \\sum_{x \\in \\mathcal{X}} \\frac{(Q(x) - P(x))^n}{Q(x)^{n-1}}\n\\end{align}\n"
},
{
"math_id": 91,
"text": "\n\\begin{align}\n\\sum_{n=2}^\\infty \\left\\vert\\frac{1}{n(n-1)} \\sum_{x \\in \\mathcal{X}} \\frac{(Q(x) - P(x))^n}{Q(x)^{n-1}}\\right\\vert &= \\sum_{n=2}^\\infty \\frac{1}{n(n-1)} \\sum_{x \\in \\mathcal{X}} \\left\\vert Q(x) - P(x) \\right\\vert \\left\\vert 1 - \\frac{P(x)}{Q(x)} \\right\\vert^{n-1}\n\\\\ &\\leq \\sum_{n=2}^\\infty \\frac{1}{n(n-1)} \\sum_{x \\in \\mathcal{X}} \\left\\vert Q(x) - P(x) \\right\\vert\n\\\\ &\\leq \\sum_{n=2}^\\infty \\frac{1}{n(n-1)}\n\\\\ &= 1\n\\end{align}\n"
},
{
"math_id": 92,
"text": "P > 2Q"
},
{
"math_id": 93,
"text": "\\epsilon > 0"
},
{
"math_id": 94,
"text": "\\rho > 0"
},
{
"math_id": 95,
"text": "U < \\infty"
},
{
"math_id": 96,
"text": "P \\geq 2Q + \\epsilon"
},
{
"math_id": 97,
"text": "Q \\leq U"
},
{
"math_id": 98,
"text": "\\rho"
},
{
"math_id": 99,
"text": "1-\\rho"
},
{
"math_id": 100,
"text": "n"
},
{
"math_id": 101,
"text": "\\frac1{n(n-1)} \\rho \\left(1 + \\frac{\\epsilon}{U}\\right)^n"
},
{
"math_id": 102,
"text": "n \\to \\infty"
},
{
"math_id": 103,
"text": "\\Theta"
},
{
"math_id": 104,
"text": "\\sigma"
},
{
"math_id": 105,
"text": "\\mathcal{F}"
},
{
"math_id": 106,
"text": "(\\Theta,\\mathcal{F},P)"
},
{
"math_id": 107,
"text": "(\\Theta,\\mathcal{F},Q)"
},
{
"math_id": 108,
"text": "Q \\ll P"
},
{
"math_id": 109,
"text": " \\log E_P[\\exp h] = \\text{sup}_{Q \\ll P} \\{ E_Q[h] - D_\\text{KL}(Q \\parallel P)\\}."
},
{
"math_id": 110,
"text": " \\frac{Q(d\\theta)}{P(d\\theta)} = \\frac{\\exp h(\\theta)}{E_P[\\exp h]},"
},
{
"math_id": 111,
"text": "\\frac{Q(d\\theta)}{P(d\\theta)}"
},
{
"math_id": 112,
"text": "\\exp(h)"
},
{
"math_id": 113,
"text": " Q^* "
},
{
"math_id": 114,
"text": "\\frac{\\exp h(\\theta)}{E_P[\\exp h]}"
},
{
"math_id": 115,
"text": "Q^*(d\\theta) = \\frac{\\exp h(\\theta)}{E_P[\\exp h]} P(d\\theta)"
},
{
"math_id": 116,
"text": " D_\\text{KL}(Q \\parallel Q^*) - D_\\text{KL}(Q \\parallel P) = -E_Q[h] + \\log E_P[\\exp h]."
},
{
"math_id": 117,
"text": " E_Q[h] - D_\\text{KL}(Q \\parallel P) = \\log E_P[\\exp h] - D_\\text{KL}(Q \\parallel Q^*) \\le \\log E_P[\\exp h],"
},
{
"math_id": 118,
"text": "D_\\text{KL}(Q \\parallel Q^*) \\ge 0"
},
{
"math_id": 119,
"text": "Q=Q^*"
},
{
"math_id": 120,
"text": "\\mu_0, \\mu_1"
},
{
"math_id": 121,
"text": "\\Sigma_0, \\Sigma_1."
},
{
"math_id": 122,
"text": "\n D_\\text{KL}\\left(\\mathcal{N}_0 \\parallel \\mathcal{N}_1\\right) =\n \\frac{1}{2}\\left(\n \\operatorname{tr}\\left(\\Sigma_1^{-1}\\Sigma_0\\right) - k +\n \\left(\\mu_1 - \\mu_0\\right)^\\mathsf{T} \\Sigma_1^{-1}\\left(\\mu_1 - \\mu_0\\right) +\n \\ln\\left(\\frac{\\det\\Sigma_1}{\\det\\Sigma_0}\\right)\n \\right).\n"
},
{
"math_id": 123,
"text": "\\ln(2)"
},
{
"math_id": 124,
"text": "L_0, L_1"
},
{
"math_id": 125,
"text": "\\Sigma_0 = L_0L_0^T"
},
{
"math_id": 126,
"text": "\\Sigma_1 = L_1L_1^T"
},
{
"math_id": 127,
"text": "L_1 M = L_0"
},
{
"math_id": 128,
"text": "L_1 y = \\mu_1 - \\mu_0"
},
{
"math_id": 129,
"text": "\n D_\\text{KL}\\left(\\mathcal{N}_0 \\parallel \\mathcal{N}_1\\right) =\n \\frac{1}{2}\\left(\n \\sum_{i,j=1}^k (M_{ij})^2 - k +\n |y|^2 +\n 2\\sum_{i=1}^k \\ln \\frac{(L_1)_{ii}}{(L_0)_{ii}}\n \\right).\n"
},
{
"math_id": 130,
"text": "\n D_\\text{KL}\\left(\n \\mathcal{N}\\left(\\left(\\mu_1, \\ldots, \\mu_k\\right)^\\mathsf{T}, \\operatorname{diag} \\left(\\sigma_1^2, \\ldots, \\sigma_k^2\\right)\\right) \\parallel\n \\mathcal{N}\\left(\\mathbf{0}, \\mathbf{I}\\right)\n \\right) =\n {1 \\over 2} \\sum_{i=1}^k \\left(\\sigma_i^2 + \\mu_i^2 - 1 - \\ln\\left(\\sigma_i^2\\right)\\right).\n"
},
{
"math_id": 131,
"text": "\n D_\\text{KL}\\left(\\mathcal{p} \\parallel \\mathcal{q}\\right) = \\log \\frac{\\sigma_1}{\\sigma_0} + \\frac{\\sigma_0^2 + (\\mu_0-\\mu_1)^2}{2\\sigma_1^2} - \\frac{1}{2}\n"
},
{
"math_id": 132,
"text": "k=\\sigma_1/\\sigma_0"
},
{
"math_id": 133,
"text": "\n D_\\text{KL}\\left(\\mathcal{p} \\parallel \\mathcal{q}\\right) = \\log_2 k + (k^{-2}-1)/2/\\ln(2) \\mathrm{bits}\n"
},
{
"math_id": 134,
"text": "p=[A,B]"
},
{
"math_id": 135,
"text": "q=[C,D]"
},
{
"math_id": 136,
"text": "C\\le A<B\\le D"
},
{
"math_id": 137,
"text": "\n D_\\text{KL}\\left(\\mathcal{p} \\parallel \\mathcal{q}\\right) = \\log \\frac{D-C}{B-A}\n"
},
{
"math_id": 138,
"text": "\\log_2 k"
},
{
"math_id": 139,
"text": "\\log_2k"
},
{
"math_id": 140,
"text": "\\{P_1,P_2,\\ldots\\}"
},
{
"math_id": 141,
"text": "\\lim_{n \\to \\infty} D_\\text{KL}(P_n\\parallel Q) = 0"
},
{
"math_id": 142,
"text": "P_n \\xrightarrow{D} Q"
},
{
"math_id": 143,
"text": "P_n \\xrightarrow{D} P \\Rightarrow P_n \\xrightarrow{TV} P"
},
{
"math_id": 144,
"text": "\\theta"
},
{
"math_id": 145,
"text": "P = P(\\theta)"
},
{
"math_id": 146,
"text": "Q = P(\\theta_0)"
},
{
"math_id": 147,
"text": "\\theta_0"
},
{
"math_id": 148,
"text": "P(\\theta) = P(\\theta_0) + \\Delta\\theta_j \\, P_j(\\theta_0) + \\cdots"
},
{
"math_id": 149,
"text": "\\Delta\\theta_j = (\\theta - \\theta_0)_j"
},
{
"math_id": 150,
"text": "P_j\\left(\\theta_0\\right) = \\frac{\\partial P}{\\partial \\theta_j}(\\theta_0)"
},
{
"math_id": 151,
"text": " \\theta = \\theta_0 "
},
{
"math_id": 152,
"text": "\\Delta\\theta_j"
},
{
"math_id": 153,
"text": "\\left.\\frac{\\partial}{\\partial\\theta_j}\\right|_{\\theta = \\theta_0} D_\\text{KL}(P(\\theta) \\parallel P(\\theta_0)) = 0,"
},
{
"math_id": 154,
"text": "D_\\text{KL}(P(\\theta) \\parallel P(\\theta_0)) = \\frac{1}{2} \\, \\Delta\\theta_j \\, \\Delta\\theta_k \\, g_{jk}(\\theta_0) + \\cdots"
},
{
"math_id": 155,
"text": "g_{jk}(\\theta_0) = \\left.\\frac{\\partial^2}{\\partial\\theta_j\\, \\partial\\theta_k} \\right|_{\\theta = \\theta_0} D_\\text{KL}(P(\\theta) \\parallel P(\\theta_0))"
},
{
"math_id": 156,
"text": "g_{jk}(\\theta)"
},
{
"math_id": 157,
"text": "p_{(x, \\rho)}"
},
{
"math_id": 158,
"text": "\\frac{\\partial \\log(p)}{\\partial \\rho}, \\frac{\\partial^2 \\log(p)}{\\partial \\rho^2}, \\frac{\\partial^3 \\log(p)}{\\partial \\rho^3}"
},
{
"math_id": 159,
"text": "\\begin{align}\n \\left|\\frac{\\partial p}{\\partial \\rho}\\right| &< F(x): \\int_{x=0}^\\infty F(x)\\,dx < \\infty, \\\\ \n \\left|\\frac{\\partial^2 p}{\\partial \\rho^2}\\right| &< G(x): \\int_{x=0}^\\infty G(x)\\,dx < \\infty \\\\ \n \\left|\\frac{\\partial^3 \\log(p)}{\\partial \\rho^3}\\right| &< H(x): \\int_{x=0}^\\infty p(x, 0)H(x)\\,dx < \\xi < \\infty\n\\end{align}"
},
{
"math_id": 160,
"text": "\n \\left.\\int_{x=0}^\\infty \\frac{\\partial p(x, \\rho)}{\\partial \\rho}\\right|_{\\rho=0}\\, dx =\n \\left.\\int_{x=0}^\\infty \\frac{\\partial^2 p(x, \\rho)}{\\partial \\rho^2}\\right|_{\\rho=0}\\, dx = 0\n"
},
{
"math_id": 161,
"text": "\\mathcal{D}(p(x, 0) \\parallel p(x, \\rho)) = \\frac{c\\rho^2}{2} + \\mathcal{O}\\left(\\rho^3\\right) \\text{ as } \\rho \\to 0."
},
{
"math_id": 162,
"text": "\\operatorname \\operatorname{I}(m) = D_\\text{KL}\\left(\\delta_\\text{im} \\parallel \\{p_i\\}\\right),"
},
{
"math_id": 163,
"text": "P(i)"
},
{
"math_id": 164,
"text": "i = m"
},
{
"math_id": 165,
"text": "\\begin{align}\n \\operatorname{I}(X; Y)\n &= D_\\text{KL}(P(X, Y) \\parallel P(X)P(Y)) \\\\[5pt]\n &= \\operatorname{E}_X \\{D_\\text{KL}(P(Y \\mid X) \\parallel P(Y))\\} \\\\[5pt]\n &= \\operatorname{E}_Y \\{D_\\text{KL}(P(X \\mid Y) \\parallel P(X))\\}\n\\end{align}"
},
{
"math_id": 166,
"text": "P(X,Y)"
},
{
"math_id": 167,
"text": "P(X)P(Y)"
},
{
"math_id": 168,
"text": "\\begin{align}\n \\Eta(X) &= \\operatorname{E}\\left[\\operatorname{I}_X(x)\\right] \\\\\n &= \\log(N) - D_\\text{KL}\\left(p_X(x) \\parallel P_U(X)\\right)\n\\end{align}"
},
{
"math_id": 169,
"text": "P_U(X)"
},
{
"math_id": 170,
"text": "P(X)"
},
{
"math_id": 171,
"text": "\\lim_{N \\rightarrow \\infty} H_{N}(X) = \\log(N) - \\int p(x)\\log\\frac{p(x)}{m(x)}\\,dx,"
},
{
"math_id": 172,
"text": "\\log(N) - D_\\text{KL}(p(x)||m(x))"
},
{
"math_id": 173,
"text": "\\begin{align}\n \\Eta(X \\mid Y)\n &= \\log(N) - D_\\text{KL}(P(X, Y) \\parallel P_U(X) P(Y)) \\\\[5pt]\n &= \\log(N) - D_\\text{KL}(P(X, Y) \\parallel P(X) P(Y)) - D_\\text{KL}(P(X) \\parallel P_U(X)) \\\\[5pt]\n &= \\Eta(X) - \\operatorname{I}(X; Y) \\\\[5pt]\n &= \\log(N) - \\operatorname{E}_Y \\left[D_\\text{KL}\\left(P\\left(X \\mid Y\\right) \\parallel P_U(X)\\right)\\right]\n\\end{align}"
},
{
"math_id": 174,
"text": "P_U(X) P(Y)"
},
{
"math_id": 175,
"text": "P(X|Y)"
},
{
"math_id": 176,
"text": "\\Eta(p)"
},
{
"math_id": 177,
"text": "\\Eta(p, q) = \\operatorname{E}_p[-\\log(q)] = \\Eta(p) + D_\\text{KL}(p \\parallel q)."
},
{
"math_id": 178,
"text": "p(x) \\to p(x\\mid I)"
},
{
"math_id": 179,
"text": "Y = y"
},
{
"math_id": 180,
"text": "p(x\\mid I)"
},
{
"math_id": 181,
"text": "p(x\\mid y,I)"
},
{
"math_id": 182,
"text": "p(x \\mid y, I) = \\frac{p(y \\mid x, I) p(x \\mid I)}{p(y \\mid I)}"
},
{
"math_id": 183,
"text": "\\Eta\\big(p(x \\mid y, I)\\big) = -\\sum_x p(x \\mid y, I) \\log p(x \\mid y, I),"
},
{
"math_id": 184,
"text": "\\Eta(p(x\\mid I))"
},
{
"math_id": 185,
"text": "p(x\\mid y, I)"
},
{
"math_id": 186,
"text": " D_\\text{KL}\\big(p(x \\mid y, I) \\parallel p(x \\mid I) \\big) = \\sum_x p(x \\mid y, I) \\log\\left(\\frac{p(x \\mid y, I)}{p(x \\mid I)}\\right) "
},
{
"math_id": 187,
"text": "Y_2 = y_2"
},
{
"math_id": 188,
"text": "p(x \\mid y_1, y_2, I)"
},
{
"math_id": 189,
"text": "p(x \\mid y_1,I)"
},
{
"math_id": 190,
"text": "p(x \\mid I)"
},
{
"math_id": 191,
"text": "\\sum_x p(x \\mid y_1, y_2, I) \\log\\left(\\frac{p(x \\mid y_1, y_2, I)}{p(x \\mid I)}\\right)"
},
{
"math_id": 192,
"text": "\\displaystyle \\sum_x p(x \\mid y_1, I) \\log\\left(\\frac{p(x \\mid y_1, I)}{p(x \\mid I)}\\right)"
},
{
"math_id": 193,
"text": "D_\\text{KL} \\big( p(x \\mid y_1, y_2, I) \\parallel p(x \\mid I) \\big)"
},
{
"math_id": 194,
"text": "D_\\text{KL}\\big( p(x \\mid y_1, y_2, I) \\parallel p(x \\mid y_1, I)\\big) + D_\\text{KL}\\big(p(x \\mid y_1, I) \\parallel p(x \\mid I)\\big)"
},
{
"math_id": 195,
"text": "p(y_2 \\mid y_1, x, I)"
},
{
"math_id": 196,
"text": "D_\\text{KL}\\bigl(p(x \\mid H_1) \\parallel p(x \\mid H_0)\\bigr)"
},
{
"math_id": 197,
"text": "H_0"
},
{
"math_id": 198,
"text": "p(H)"
},
{
"math_id": 199,
"text": "D_\\text{KL}(p(x \\mid H_1) \\parallel p(x \\mid H_0)) \\neq IG = D_\\text{KL}(p(H \\mid x) \\parallel p(H \\mid I))."
},
{
"math_id": 200,
"text": "f_0"
},
{
"math_id": 201,
"text": "D_\\text{KL}(f \\parallel f_0)"
},
{
"math_id": 202,
"text": "p(x,a)"
},
{
"math_id": 203,
"text": "u(a)"
},
{
"math_id": 204,
"text": "q(x\\mid a)u(a)"
},
{
"math_id": 205,
"text": "D_\\text{KL}(q(x \\mid a)u(a) \\parallel p(x, a)) = \\operatorname{E}_{u(a)}\\left\\{D_\\text{KL}(q(x \\mid a) \\parallel p(x \\mid a))\\right\\} + D_\\text{KL}(u(a) \\parallel p(a)),"
},
{
"math_id": 206,
"text": "p(a)"
},
{
"math_id": 207,
"text": "p(x\\mid a)"
},
{
"math_id": 208,
"text": "q(x\\mid a)"
},
{
"math_id": 209,
"text": "D_\\text{KL}(q(x\\mid a) \\parallel p(x\\mid a))"
},
{
"math_id": 210,
"text": "q(x\\mid a)=p(x\\mid a)"
},
{
"math_id": 211,
"text": "\\Eta(p, m) = \\Eta(p) + D_\\text{KL}(p \\parallel m),"
},
{
"math_id": 212,
"text": "D_\\text{KL}(p \\parallel m)"
},
{
"math_id": 213,
"text": "\\Eta(p,m)"
},
{
"math_id": 214,
"text": "T_o"
},
{
"math_id": 215,
"text": "s = k \\ln(1 / p)"
},
{
"math_id": 216,
"text": "\\left\\{1, 1/\\ln 2, 1.38 \\times 10^{-23}\\right\\}"
},
{
"math_id": 217,
"text": "\\{"
},
{
"math_id": 218,
"text": "J/K\\}"
},
{
"math_id": 219,
"text": "A \\equiv -k\\ln(Z)"
},
{
"math_id": 220,
"text": "T \\times A"
},
{
"math_id": 221,
"text": "T, V"
},
{
"math_id": 222,
"text": "F \\equiv U - TS"
},
{
"math_id": 223,
"text": "G = U + PV - TS"
},
{
"math_id": 224,
"text": "P_o"
},
{
"math_id": 225,
"text": "W = \\Delta G = NkT_o\\Theta(V/V_o)"
},
{
"math_id": 226,
"text": "V_o = NkT_o/P_o"
},
{
"math_id": 227,
"text": "\\Theta(x) = x - 1 - \\ln x \\ge 0"
},
{
"math_id": 228,
"text": "\\Delta I \\ge 0,"
},
{
"math_id": 229,
"text": "k\\ln(p/p_o)"
},
{
"math_id": 230,
"text": "p_o"
},
{
"math_id": 231,
"text": "V_o"
},
{
"math_id": 232,
"text": "W = T_o \\Delta I"
},
{
"math_id": 233,
"text": "\\Delta I = Nk\\left[\\Theta\\left(\\frac{V}{V_o}\\right) + \\frac{3}{2}\\Theta\\left(\\frac{T}{T_o}\\right)\\right]."
},
{
"math_id": 234,
"text": "D_\\text{KL}(P \\parallel Q) = \\operatorname{Tr}(P(\\log(P) - \\log(Q)))."
},
{
"math_id": 235,
"text": " D_\\text{KL}(P\\parallel Q)"
},
{
"math_id": 236,
"text": "D_\\text{KL}(P \\parallel Q) + D_\\text{KL}(Q \\parallel P)"
},
{
"math_id": 237,
"text": "\\lambda"
},
{
"math_id": 238,
"text": "D_\\lambda(P \\parallel Q) = \\lambda D_\\text{KL}(P \\parallel \\lambda P + (1 - \\lambda)Q) + (1 - \\lambda) D_\\text{KL}(Q \\parallel \\lambda P + (1 - \\lambda)Q),"
},
{
"math_id": 239,
"text": "1-\\lambda"
},
{
"math_id": 240,
"text": "\\lambda = 0.5"
},
{
"math_id": 241,
"text": "D_\\text{JS} = \\frac{1}{2} D_\\text{KL} (P \\parallel M) + \\frac{1}{2} D_\\text{KL}(Q \\parallel M)"
},
{
"math_id": 242,
"text": "M = \\frac{1}{2}(P + Q)."
},
{
"math_id": 243,
"text": "D_\\text{JS}"
},
{
"math_id": 244,
"text": "\\delta(p,q)"
},
{
"math_id": 245,
"text": "\\delta(P, Q) \\le \\sqrt{\\frac{1}{2} D_\\text{KL}(P \\parallel Q)}."
},
{
"math_id": 246,
"text": "D_{\\mathrm{KL}}(P\\parallel Q)>2"
},
{
"math_id": 247,
"text": "\\delta(P,Q) \\le \\sqrt{1-e^{ -D_{\\mathrm{KL}}(P\\parallel Q) }}."
},
{
"math_id": 248,
"text": "\\alpha"
}
] | https://en.wikipedia.org/wiki?curid=1115052 |
1115085 | Magnetic circuit | Closed loop path containing a magnetic flux
A magnetic circuit is made up of one or more closed loop paths containing a magnetic flux. The flux is usually generated by permanent magnets or electromagnets and confined to the path by magnetic cores consisting of ferromagnetic materials like iron, although there may be air gaps or other materials in the path. Magnetic circuits are employed to efficiently channel magnetic fields in many devices such as electric motors, generators, transformers, relays, lifting electromagnets, SQUIDs, galvanometers, and magnetic recording heads.
The relation between magnetic flux, magnetomotive force, and magnetic reluctance in an unsaturated magnetic circuit can be described by Hopkinson's law, which bears a superficial resemblance to Ohm's law in electrical circuits, resulting in a one-to-one correspondence between properties of a magnetic circuit and an analogous electric circuit. Using this concept the magnetic fields of complex devices such as transformers can be quickly solved using the methods and techniques developed for electrical circuits.
Some examples of magnetic circuits are:
Magnetomotive force (MMF).
Similar to the way that electromotive force (EMF) drives a current of electrical charge in electrical circuits, magnetomotive force (MMF) 'drives' magnetic flux through magnetic circuits. The term 'magnetomotive force', though, is a misnomer since it is not a force nor is anything moving. It is perhaps better to call it simply MMF. In analogy to the definition of EMF, the magnetomotive force formula_0 around a closed loop is defined as:
formula_1
The MMF represents the potential that a hypothetical magnetic charge would gain by completing the loop. The magnetic flux that is driven is not a current of magnetic charge; it merely has the same relationship to MMF that electric current has to EMF. (See microscopic origins of reluctance below for a further description.)
The unit of magnetomotive force is the ampere-turn (At), represented by a steady, direct electric current of one ampere flowing in a single-turn loop of electrically conducting material in a vacuum. The gilbert (Gb), established by the IEC in 1930, is the CGS unit of magnetomotive force and is a slightly smaller unit than the ampere-turn. The unit is named after William Gilbert (1544–1603), an English physician and natural philosopher.
formula_2
The magnetomotive force can often be quickly calculated using Ampère's law. For example, the magnetomotive force formula_0 of a long coil is:
formula_3
where "N" is the number of turns and "I" is the current in the coil. In practice this equation is used for the MMF of real inductors with "N" being the winding number of the inducting coil.
Magnetic flux.
An applied MMF 'drives' magnetic flux through the magnetic components of the system. The magnetic flux through a magnetic component is proportional to the number of magnetic field lines that pass through the cross sectional area of that component. This is the "net" number, i.e. the number passing through in one direction, minus the number passing through in the other direction. The direction of the magnetic field vector B is by definition from the south to the north pole of a magnet inside the magnet; outside the field lines go from north to south.
The flux through an element of area perpendicular to the direction of magnetic field is given by the product of the magnetic field and the area element. More generally, magnetic flux Φ is defined by a scalar product of the magnetic field and the area element vector. Quantitatively, the magnetic flux through a surface "S" is defined as the integral of the magnetic field over the area of the surface
formula_4
For a magnetic component the area S used to calculate the magnetic flux Φ is usually chosen to be the cross-sectional area of the component.
The SI unit of magnetic flux is the weber (in derived units: volt-seconds), and the unit of magnetic flux density (or "magnetic induction", B) is the weber per square meter, or tesla.
Circuit models.
The most common way of representing a magnetic circuit is the resistance–reluctance model, which draws an analogy between electrical and magnetic circuits. This model is good for systems that contain only magnetic components, but for modelling a system that contains both electrical and magnetic parts it has serious drawbacks. It does not properly model power and energy flow between the electrical and magnetic domains. This is because electrical resistance will dissipate energy whereas magnetic reluctance stores it and returns it later. An alternative model that correctly models energy flow is the gyrator–capacitor model.
Resistance–reluctance model.
The resistance–reluctance model for magnetic circuits is a lumped-element model that makes electrical resistance analogous to magnetic reluctance.
Hopkinson's law.
In electrical circuits, Ohm's law is an empirical relation between the EMF formula_5 applied across an element and the current formula_6 it generates through that element. It is written as:
formula_7
where "R" is the electrical resistance of that material. There is a counterpart to Ohm's law used in magnetic circuits. This law is often called Hopkinson's law, after John Hopkinson, but was actually formulated earlier by Henry Augustus Rowland in 1873. It states that
formula_8
where formula_0 is the magnetomotive force (MMF) across a magnetic element, formula_9 is the magnetic flux through the magnetic element, and formula_10 is the magnetic reluctance of that element. (It will be shown later that this relationship is due to the empirical relationship between the H-field and the magnetic field B, B = "μH", where "μ" is the permeability of the material). Like Ohm's law, Hopkinson's law can be interpreted either as an empirical equation that works for some materials, or it may serve as a definition of reluctance.
Hopkinson's law is not a correct analogy with Ohm's law in terms of modelling power and energy flow. In particular, there is no power dissipation associated with a magnetic reluctance in the same way as there is a dissipation in an electrical resistance. The magnetic resistance that is a true analogy of electrical resistance in this respect is defined as the ratio of magnetomotive force and the rate of change of magnetic flux. Here rate of change of magnetic flux is standing in for electric current and the Ohm's law analogy becomes,
formula_11
where formula_12 is the magnetic resistance. This relationship is part of an electrical-magnetic analogy called the gyrator-capacitor model and is intended to overcome the drawbacks of the reluctance model. The gyrator-capacitor model is, in turn, part of a wider group of compatible analogies used to model systems across multiple energy domains.
Reluctance.
Magnetic reluctance, or magnetic resistance, is analogous to resistance in an electrical circuit (although it does not dissipate magnetic energy). In likeness to the way an electric field causes an electric current to follow the path of least resistance, a magnetic field causes magnetic flux to follow the path of least magnetic reluctance. It is a scalar, extensive quantity, akin to electrical resistance.
The total reluctance is equal to the ratio of the MMF in a passive magnetic circuit and the magnetic flux in this circuit. In an AC field, the reluctance is the ratio of the amplitude values for a sinusoidal MMF and magnetic flux. (see phasors)
The definition can be expressed as:
formula_13
where formula_10 is the reluctance in ampere-turns per weber (a unit that is equivalent to turns per henry).
Magnetic flux always forms a closed loop, as described by Maxwell's equations, but the path of the loop depends on the reluctance of the surrounding materials. It is concentrated around the path of least reluctance. Air and vacuum have high reluctance, while easily magnetized materials such as soft iron have low reluctance. The concentration of flux in low-reluctance materials forms strong temporary poles and causes mechanical forces that tend to move the materials towards regions of higher flux, so it is always an attractive force (pull).
The inverse of reluctance is called "permeance".
formula_14
Its SI derived unit is the henry (the same as the unit of inductance, although the two concepts are distinct).
Permeability and conductivity.
The reluctance of a magnetically uniform magnetic circuit element can be calculated as:
formula_15
where "l" is the length of the element, formula_16 is the permeability of the material (formula_17 is its relative permeability and formula_18 is the permeability of free space), and "A" is the cross-sectional area of the element.
This is similar to the equation for electrical resistance in materials, with permeability being analogous to conductivity; the reciprocal of the permeability is known as magnetic reluctivity and is analogous to resistivity. Longer, thinner geometries with low permeabilities lead to higher reluctance. Low reluctance, like low resistance in electric circuits, is generally preferred.
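As an illustrative numerical sketch (the dimensions and relative permeability below are assumed values, not taken from any particular device), the reluctance formula can be applied to an iron core in series with a small air gap:

```python
# Minimal sketch: reluctance of a uniform core section and of an air gap,
# using R = l / (mu * A).  All numbers are illustrative assumptions.
from math import pi

MU0 = 4 * pi * 1e-7          # permeability of free space, H/m

def reluctance(length_m, area_m2, mu_r=1.0):
    """Reluctance of a magnetically uniform element, in ampere-turns per weber."""
    return length_m / (mu_r * MU0 * area_m2)

core = reluctance(length_m=0.20, area_m2=4e-4, mu_r=2000)   # 20 cm iron path
gap = reluctance(length_m=0.001, area_m2=4e-4)              # 1 mm air gap
print(f"core: {core:.3e} At/Wb, gap: {gap:.3e} At/Wb, series total: {core + gap:.3e}")
# With these assumed numbers, even the 1 mm gap contributes about ten times
# the reluctance of the 20 cm iron path.
```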
Summary of analogy.
The following table summarizes the mathematical analogy between electrical circuit theory and magnetic circuit theory. This is a mathematical analogy and not a physical one. Objects in the same row have the same mathematical role; the physics of the two theories is very different. For example, current is the flow of electrical charge, while magnetic flux is not the flow of any quantity.
Limitations of the analogy.
The resistance–reluctance model has limitations. Electric and magnetic circuits are only superficially similar because of the similarity between Hopkinson's law and Ohm's law. Magnetic circuits have significant differences that need to be taken into account in their construction:
Circuit laws.
Magnetic circuits obey other laws that are similar to electrical circuit laws. For example, the total reluctance formula_20 of reluctances formula_21 in series is:
formula_22
This also follows from Ampère's law and is analogous to Kirchhoff's voltage law for adding resistances in series. Also, the sum of magnetic fluxes formula_23 into any node is always zero:
formula_24
This follows from Gauss's law and is analogous to Kirchhoff's current law for analyzing electrical circuits.
Together, the three laws above form a complete system for analysing magnetic circuits, in a manner similar to electric circuits. Comparing the two types of circuits shows that:
Magnetic circuits can be solved for the flux in each branch by application of the magnetic equivalent of Kirchhoff's voltage law (KVL) for pure source/resistance circuits. Specifically, whereas KVL states that the voltage excitation applied to a loop is equal to the sum of the voltage drops (resistance times current) around the loop, the magnetic analogue states that the magnetomotive force (achieved from ampere-turn excitation) is equal to the sum of MMF drops (product of flux and reluctance) across the rest of the loop. (If there are multiple loops, the current in each branch can be solved through a matrix equation—much as a matrix solution for mesh circuit branch currents is obtained in loop analysis—after which the individual branch currents are obtained by adding and/or subtracting the constituent loop currents as indicated by the adopted sign convention and loop orientations.) Per Ampère's law, the excitation is the product of the current and the number of complete loops made and is measured in ampere-turns. Stated more generally:
formula_25
By Stokes's theorem, the closed line integral of "H"·d"l" around a contour is equal to the open surface integral of curl H·dA across the surface bounded by the closed contour. Since, from Maxwell's equations, curl H = J, the closed line integral of H·dl evaluates to the total current passing through the surface. This is equal to the excitation, "NI", which also measures current passing through the surface, thereby verifying that the net current flow through a surface is zero ampere-turns in a closed system that conserves energy.
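The three laws can be combined in a short numerical sketch; all quantities below (turns, current, and reluctances) are assumed values for illustration only:

```python
# Toy solve of a series magnetic loop, then a two-branch flux split.
# Reluctances roughly match the core/air-gap sketch above; N and I are assumed.
N, I = 500, 2.0                      # turns and coil current
mmf = N * I                          # MMF = N*I ampere-turns (Ampere's law)

R_core, R_gap = 2.0e5, 2.0e6         # At/Wb, illustrative values
flux = mmf / (R_core + R_gap)        # Hopkinson's law with series reluctances
print(f"loop flux: {flux:.2e} Wb")

# Node law: the flux entering a junction equals the flux leaving, so two
# parallel return legs share the flux inversely to their reluctances.
R1, R2 = 5e5, 1.5e6
flux1, flux2 = flux * R2 / (R1 + R2), flux * R1 / (R1 + R2)
assert abs(flux - (flux1 + flux2)) < 1e-15
```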
More complex magnetic systems, where the flux is not confined to a simple loop, must be analysed from first principles by using Maxwell's equations.
Applications.
Reluctance can also be applied to variable reluctance (magnetic) pickups.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal{F}"
},
{
"math_id": 1,
"text": "\\mathcal{F} = \\oint \\mathbf{H} \\cdot \\mathrm{d}\\mathbf{l}."
},
{
"math_id": 2,
"text": "\\begin{align}\n 1\\;\\text{Gb} &= \\frac{10}{4\\pi}\\;\\text{At} \\\\[2pt]\n &\\approx 0.795775\\;\\text{At}\n\\end{align}"
},
{
"math_id": 3,
"text": "\\mathcal{F} = N I"
},
{
"math_id": 4,
"text": "\\Phi_m = \\iint_S \\mathbf{B} \\cdot \\mathrm{d}\\mathbf S."
},
{
"math_id": 5,
"text": "\\mathcal{E}"
},
{
"math_id": 6,
"text": "I"
},
{
"math_id": 7,
"text": "\\mathcal{E} = IR."
},
{
"math_id": 8,
"text": "\\mathcal{F}=\\Phi \\mathcal{R}."
},
{
"math_id": 9,
"text": "\\Phi"
},
{
"math_id": 10,
"text": "\\mathcal{R}"
},
{
"math_id": 11,
"text": "\\mathcal{F} = \\frac {d \\Phi}{dt} R_\\mathrm{m},"
},
{
"math_id": 12,
"text": "R_\\mathrm{m}"
},
{
"math_id": 13,
"text": "\\mathcal{R} = \\frac{\\mathcal{F}}{\\Phi},"
},
{
"math_id": 14,
"text": "\\mathcal{P} = \\frac{1}{\\mathcal{R}}."
},
{
"math_id": 15,
"text": "\\mathcal{R} = \\frac{l}{\\mu A}."
},
{
"math_id": 16,
"text": "\\mu = \\mu_r\\mu_0"
},
{
"math_id": 17,
"text": "\\mu_\\mathrm{r}"
},
{
"math_id": 18,
"text": "\\mu_0"
},
{
"math_id": 19,
"text": "\\mathcal{R}_\\mathrm{m}"
},
{
"math_id": 20,
"text": "\\mathcal{R}_\\mathrm{T}"
},
{
"math_id": 21,
"text": "\\mathcal{R}_1,\\ \\mathcal{R}_2,\\ \\ldots"
},
{
"math_id": 22,
"text": "\\mathcal{R}_\\mathrm{T} = \\mathcal{R}_1 + \\mathcal{R}_2 + \\dotsm"
},
{
"math_id": 23,
"text": "\\Phi_1,\\ \\Phi_2,\\ \\ldots"
},
{
"math_id": 24,
"text": "\\Phi_1 + \\Phi_2 + \\dotsm = 0."
},
{
"math_id": 25,
"text": "F = NI = \\oint \\mathbf{H} \\cdot \\mathrm{d}\\mathbf{l}."
}
] | https://en.wikipedia.org/wiki?curid=1115085 |
11151120 | Impulse invariance | Impulse invariance is a technique for designing discrete-time infinite-impulse-response (IIR) filters from continuous-time filters in which the impulse response of the continuous-time system is sampled to produce the impulse response of the discrete-time system. The frequency response of the discrete-time system will be a sum of shifted copies of the frequency response of the continuous-time system; if the continuous-time system is approximately band-limited to a frequency less than the Nyquist frequency of the sampling, then the frequency response of the discrete-time system will be approximately equal to it for frequencies below the Nyquist frequency.
Discussion.
The continuous-time system's impulse response, formula_0, is sampled with sampling period formula_1 to produce the discrete-time system's impulse response, formula_2.
formula_3
Thus, the frequency responses of the two systems are related by
formula_4
If the continuous time filter is approximately band-limited (i.e. formula_5 when formula_6), then the frequency response of the discrete-time system will be approximately the continuous-time system's frequency response for frequencies below π radians per sample (below the Nyquist frequency 1/(2"T") Hz):
formula_7 for formula_8
Comparison to the bilinear transform.
Note that aliasing will occur, including aliasing below the Nyquist frequency to the extent that the continuous-time filter's response is nonzero above that frequency. The bilinear transform is an alternative to impulse invariance that uses a different mapping that maps the continuous-time system's frequency response, out to infinite frequency, into the range of frequencies up to the Nyquist frequency in the discrete-time case, as opposed to mapping frequencies linearly with circular overlap as impulse invariance does.
Effect on poles in system function.
If the continuous-time system function has poles at formula_9, then it can be written in partial fraction expansion as
formula_10
Thus, using the inverse Laplace transform, the impulse response is
formula_11
The corresponding discrete-time system's impulse response is then defined as the following
formula_12
formula_13
Performing a z-transform on the discrete-time impulse response produces the following discrete-time system function
formula_14
Thus the poles from the continuous-time system function are translated to poles at z = eskT. The zeros, if any, are not so simply mapped.
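A minimal numerical sketch of this pole mapping is given below; the poles, residues, and sampling period are arbitrary assumed values, and the code simply checks that sampling the continuous-time impulse response and running the mapped-pole difference equation give the same sequence:

```python
# Sketch of impulse-invariant design for H_c(s) = sum_k A_k / (s - s_k).
# The poles, residues and sampling period below are illustrative assumptions.
import numpy as np

T = 0.01                                       # sampling period
s_poles = np.array([-2.0 + 30j, -2.0 - 30j])   # continuous-time poles
A = np.array([0.5 - 0.1j, 0.5 + 0.1j])         # partial-fraction residues
p = np.exp(s_poles * T)                        # mapped poles z_k = exp(s_k T)

# Sampled impulse response h[n] = T * sum_k A_k exp(s_k n T)
n = np.arange(200)
h = np.real(T * sum(res * np.exp(s * n * T) for res, s in zip(A, s_poles)))

# The same sequence from the discrete-time system function
#   H(z) = sum_k T*A_k / (1 - p_k z^{-1}), expanded into a difference equation
b = [np.real(T * (A[0] + A[1])), np.real(-T * (A[0] * p[1] + A[1] * p[0]))]
a = [1.0, np.real(-(p[0] + p[1])), np.real(p[0] * p[1])]
y = np.zeros_like(h)
for k in range(len(h)):
    x_k, x_k1 = (1.0 if k == 0 else 0.0), (1.0 if k == 1 else 0.0)   # unit impulse input
    y[k] = (b[0] * x_k + b[1] * x_k1
            - (a[1] * y[k - 1] if k >= 1 else 0.0)
            - (a[2] * y[k - 2] if k >= 2 else 0.0))
assert np.allclose(y, h)                       # pole mapping reproduces the samples
assert np.all(np.abs(p) < 1)                   # stable s-poles map inside the unit circle
```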
Poles and zeros.
If the system function has zeros as well as poles, they can be mapped the same way, but the result is no longer an impulse invariance result: the discrete-time impulse response is not equal simply to samples of the continuous-time impulse response. This method is known as the matched Z-transform method, or pole–zero mapping.
Stability and causality.
Since poles in the continuous-time system at "s" = "sk" transform to poles in the discrete-time system at z = exp("skT"), poles in the left half of the "s"-plane map to inside the unit circle in the "z"-plane; so if the continuous-time filter is causal and stable, then the discrete-time filter will be causal and stable as well.
Corrected formula.
When a causal continuous-time impulse response has a discontinuity at formula_15, the expressions above are not consistent.
This is because formula_16 has different right and left limits, and should really only contribute their average, half its right value formula_17, to formula_18.
Making this correction gives
formula_19
formula_20
Performing a z-transform on the discrete-time impulse response produces the following discrete-time system function
formula_21
The second sum is zero for filters without a discontinuity, which is why ignoring it is often safe.
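As a small assumed example of the correction, consider a single-pole filter whose impulse response jumps from 0 to 1 at the origin:

```python
# The half-sample correction for H_c(s) = 1 / (s + a): its impulse response
# exp(-a t) u(t) jumps from 0 to 1 at t = 0.  'a' and T are assumed values.
import numpy as np

a, T = 3.0, 0.1
n = np.arange(50)

h_plain = T * np.exp(-a * n * T)      # naive sampling: h[0] = T
h_corr = h_plain.copy()
h_corr[0] -= 0.5 * T * 1.0            # subtract (T/2) * h_c(0+) at n = 0 only

print(h_plain[0], h_corr[0])          # 0.1 versus 0.05
# In closed form the corrected filter is H(z) = T / (1 - exp(-a*T) z^{-1}) - T/2.
```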
References.
<templatestyles src="Reflist/styles.css" />
Other sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "h_c(t)"
},
{
"math_id": 1,
"text": "T"
},
{
"math_id": 2,
"text": "h[n]"
},
{
"math_id": 3,
"text": "h[n]=Th_c(nT)\\,"
},
{
"math_id": 4,
"text": "H(e^{j\\omega}) = \\frac{1}{T} \\sum_{k=-\\infty}^\\infty{ TH_c\\left(j\\frac{\\omega}{T} + j\\frac{2{\\pi}}{T}k\\right)}\\,"
},
{
"math_id": 5,
"text": "H_c(j\\Omega) < \\delta"
},
{
"math_id": 6,
"text": "|\\Omega| \\ge \\pi/T"
},
{
"math_id": 7,
"text": "H(e^{j\\omega}) = H_c(j\\omega/T)\\,"
},
{
"math_id": 8,
"text": "|\\omega| \\le \\pi\\,"
},
{
"math_id": 9,
"text": "s = s_k"
},
{
"math_id": 10,
"text": "H_c(s) = \\sum_{k=1}^N{\\frac{A_k}{s-s_k}}\\,"
},
{
"math_id": 11,
"text": "h_c(t) = \\begin{cases}\n \\sum_{k=1}^N{A_ke^{s_kt}}, & t \\ge 0 \\\\\n 0, & \\mbox{otherwise}\n\\end{cases}"
},
{
"math_id": 12,
"text": "h[n] = Th_c(nT)\\,"
},
{
"math_id": 13,
"text": "h[n] = T \\sum_{k=1}^N{A_ke^{s_knT}u[n]}\\,"
},
{
"math_id": 14,
"text": "H(z) = T \\sum_{k=1}^N{\\frac{A_k}{1-e^{s_kT}z^{-1}}}\\,"
},
{
"math_id": 15,
"text": "t=0"
},
{
"math_id": 16,
"text": "h_c (0)"
},
{
"math_id": 17,
"text": "h_c (0_+)"
},
{
"math_id": 18,
"text": "h[0]"
},
{
"math_id": 19,
"text": "h[n] = T \\left( h_c(nT) - \\frac{1}{2} h_c(0_+)\\delta [n] \\right) \\,"
},
{
"math_id": 20,
"text": "h[n] = T \\sum_{k=1}^N{A_ke^{s_knT}} \\left( u[n] - \\frac{1}{2} \\delta[n] \\right) \\,"
},
{
"math_id": 21,
"text": "H(z) = T \\sum_{k=1}^N{\\frac{A_k}{1-e^{s_kT}z^{-1}} - \\frac{T}{2} \\sum_{k=1}^N A_k}."
}
] | https://en.wikipedia.org/wiki?curid=11151120 |
11151209 | VALBOND | In molecular mechanics, VALBOND is a method for computing the angle bending energy that is based on valence bond theory. It is based on "orbital strength functions", which are maximized when the hybrid orbitals on the atom are orthogonal. The hybridization of the bonding orbitals are obtained from empirical formulas based on Bent's rule, which relates the preference towards p character with electronegativity.
The VALBOND functions are suitable for describing the energy of bond angle distortion not only around the equilibrium angles, but also at very large distortions. This represents an advantage over the simpler harmonic oscillator approximation used by many force fields, and allows the VALBOND method to handle hypervalent molecules and transition metal complexes. The VALBOND energy term has been combined with force fields such as CHARMM and UFF to provide a complete functional form that includes also bond stretching, torsions, and non-bonded interactions.
Functional form.
Non-hypervalent molecules.
For an angle α between normal (non-hypervalent) bonds involving an sp"m"d"n" hybrid orbital, the energy contribution is
formula_0,
where "k" is an empirical scaling factor that depends on the elements involved in the bond, "Smax", the "maximum strength function", is
formula_1
and "S(α)" is the strength function
formula_2
which depends on the "nonorthogonality integral" Δ:
formula_3
The energy contribution is added twice, once per each of the bonding orbitals involved in the angle (which may have different hybridizations and different values for "k").
For non-hypervalent p-block atoms, the hybridization value "n" is zero (no d-orbital contribution), and "m" is obtained as %p/(1 - %p), where %p is the p character of the orbital obtained from
formula_4
where the sum over "j" includes all ligands, lone pairs, and radicals on the atom, "np" is the "gross hybridization" (for example, for an "sp2" atom, "np" = 2). The weight "wti" depends on the two elements involved in the bond (or just one for lone pair or radicals), and represents the preference for p character of different elements. The values of the weights are empirical, but can be rationalized in terms of Bent's rule.
Hypervalent molecules.
For hypervalent molecules, the energy is represented as a combination of VALBOND configurations, which are akin to resonance structures that place three-center four-electron bonds (3c4e) in different ways. For example, ClF3 is represented as having one "normal" two-center bond and one 3c4e bond. There are three different configurations for ClF3, each one using a different Cl-F bond as the two-center bond. For more complicated systems the number of combinations increases rapidly; SF6 has 45 configurations.
formula_5
where the sum is over all configurations "j", and the coefficient "cj" is defined by the function
formula_6
where "hype" refers to the 3c4e bonds. This function ensures that the configurations where the 3c4e bonds are linear are favored.
The energy terms are modified by multiplying them by a bond order factor, BOF, which is the product of the formal bond orders of the two bonds involved in the angle (for 3c4e bonds, the bond order is 0.5). For 3c4e bonds, the energy is calculated as
formula_7
where Δ is again the non-orthogonality function, but here the angle α is offset by 180 degrees (π radians).
Finally, to ensure that the axial vs equatorial preference of different ligands in hypervalent compounds is reproduced, an "offset energy" term is subtracted. It has the form
formula_8
where the EN terms depend on the electronegativity difference between the ligand and the central atom as follows:
formula_9
where "ss" is 1 if the electronegativity difference is positive and 2 if it is negative.
For p-block hypervalent molecules, d orbitals are not used, so "n" = 0. The p contribution "m" is estimated from ab initio quantum chemistry methods and a natural bond orbital (NBO) analysis.
Extension.
More recent extensions, available in the CHARMM suite of codes, include the trans-influence (or trans effect) within VALBOND-TRANS and the possibility to run reactive molecular dynamics with "Multi-state VALBOND".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E(\\alpha ) = k(S^{max} - S(\\alpha))"
},
{
"math_id": 1,
"text": "S^{max} = \\sqrt{\\frac{1}{1+m+n}} (1 + \\sqrt{3m} + \\sqrt{5n})"
},
{
"math_id": 2,
"text": "S(\\alpha ) = S^{max} \\sqrt{1 - \\frac{1- \\sqrt{1 - \\Delta ^2}}{2}}"
},
{
"math_id": 3,
"text": "\\Delta = \\frac{1}{1+m+n} \\left [ 1+m \\cos \\alpha + \\frac{n}{2}(3 \\cos ^2 \\alpha -1) \\right ]"
},
{
"math_id": 4,
"text": "\\%p_i = \\frac{n_p wt_i}{\\sum_{j} wt_j}"
},
{
"math_id": 5,
"text": "E_{tot} = \\sum_j c_j E_j"
},
{
"math_id": 6,
"text": "c_j = \\frac{\\displaystyle \\prod_{i=1}^{hype} \\cos^2 \\alpha_i}{\\displaystyle \\sum_{j=1}^{config} \\prod_{i=1}^{hype} \\cos^2 \\alpha_i}"
},
{
"math_id": 7,
"text": "E(\\alpha) = BOF \\times k_{\\alpha} [1-\\Delta(\\alpha + \\pi)^2]"
},
{
"math_id": 8,
"text": "E_{offset} = \\sum_{i=1}^{config} c_i \\sum_{j=1}^{hype} \\frac{EN_{ija} + EN_{ijb}}{2}"
},
{
"math_id": 9,
"text": "EN_{ija} = 30 \\times (en_{lig} - en_{c.a.}) \\times ss"
}
] | https://en.wikipedia.org/wiki?curid=11151209 |
11151490 | Conway base 13 function | Counterexample to the converse of the intermediate value theorem
The Conway base 13 function is a function created by British mathematician John H. Conway as a counterexample to the converse of the intermediate value theorem. In other words, it is a function that satisfies a particular intermediate-value property — on any interval formula_0, the function formula_1 takes every value between formula_2 and formula_3 — but is not continuous.
In 2018, a much simpler function with the property that every open set is mapped onto the full real line was published by Aksel Bergfeldt on the mathematics StackExchange. This function is also nowhere continuous.
Purpose.
The Conway base 13 function was created as part of a "produce" activity: in this case, the challenge was to produce a simple-to-understand function which takes on every real value in every interval, that is, it is an everywhere surjective function. It is thus discontinuous at every point.
Definition.
The Conway base-13 function is a function formula_5 defined as follows. Write the argument formula_4 value as a tridecimal (a "decimal" in base 13) using 13 symbols as "digits": 0, 1, ..., 9, A, B, C; there should be no trailing C recurring. There may be a leading sign, and somewhere there will be a tridecimal point to separate the integer part from the fractional part; these should both be ignored in the sequel. These "digits" can be thought of as having the values 0 to 12 respectively; Conway originally used the digits "+", "−" and "." instead of A, B, C, and underlined all of the base-13 "digits" to clearly distinguish them from the usual base-10 digits and symbols.
Define the value of formula_1 as follows: if from some point onward, the tridecimal expansion of formula_4 is of the form formula_6, where all the digits formula_7 and formula_8 are in formula_9, then formula_10. Similarly, if from some point onward the expansion is of the form formula_11 then formula_12. In all other cases, formula_13.
For example:
formula_14
formula_15
formula_16
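A sketch of how the definition can be evaluated on a finite digit string is given below; the function name and the convention of passing the digits with the sign and tridecimal point already stripped are assumptions made only for this illustration:

```python
# Sketch: evaluating the base-13 function on a finite string of base-13
# "digits" (sign and tridecimal point already stripped).  With finitely many
# digits, "from some point onwards" reduces to inspecting the tail after the
# last A or B.
def conway_base13(digits: str) -> float:
    last = max(digits.rfind("A"), digits.rfind("B"))
    if last == -1:
        return 0.0
    sign = 1.0 if digits[last] == "A" else -1.0
    tail = digits[last + 1:]                      # contains no further A or B
    if tail.count("C") != 1:
        return 0.0
    int_part, frac_part = tail.split("C")
    if not int_part or not all(c in "0123456789" for c in int_part + frac_part):
        return 0.0
    return sign * float(int_part + "." + (frac_part or "0"))

# The three examples above:
print(conway_base13("12345A3C14159"))   # 3.14159
print(conway_base13("B1C234"))          # -1.234
print(conway_base13("1C234A567"))       # 0.0
```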
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(a,b)"
},
{
"math_id": 1,
"text": "f"
},
{
"math_id": 2,
"text": "f(a)"
},
{
"math_id": 3,
"text": "f(b)"
},
{
"math_id": 4,
"text": "x"
},
{
"math_id": 5,
"text": "f: \\Reals \\to \\Reals"
},
{
"math_id": 6,
"text": "A x_1 x_2 \\dots x_n C y_1 y_2 \\dots"
},
{
"math_id": 7,
"text": "x_i"
},
{
"math_id": 8,
"text": "y_j"
},
{
"math_id": 9,
"text": "\\{0, \\dots, 9\\},"
},
{
"math_id": 10,
"text": "f(x) = x_1 \\dots x_n . y_1 y_2 \\dots"
},
{
"math_id": 11,
"text": "B x_1 x_2 \\dots x_n C y_1 y_2 \\dots,"
},
{
"math_id": 12,
"text": "f(x) = -x_1 \\dots x_n . y_1 y_2 \\dots."
},
{
"math_id": 13,
"text": "f(x) = 0."
},
{
"math_id": 14,
"text": "f(\\mathrm{12345A3C14.159} \\dots_{13}) = f(\\mathrm{A3C14.159} \\dots_{13}) = 3.14159 \\dots,"
},
{
"math_id": 15,
"text": "f(\\mathrm{B1C234}_{13}) = -1.234,"
},
{
"math_id": 16,
"text": "f(\\mathrm{1C234A567}_{13}) = 0."
},
{
"math_id": 17,
"text": "f(b)."
},
{
"math_id": 18,
"text": "\\mathbb{R}^2"
},
{
"math_id": 19,
"text": "\\hat{r}"
},
{
"math_id": 20,
"text": "f(\\hat{r}) = r."
},
{
"math_id": 21,
"text": "\\hat{r},"
},
{
"math_id": 22,
"text": "c,"
},
{
"math_id": 23,
"text": "c'"
},
{
"math_id": 24,
"text": "(a, b)."
},
{
"math_id": 25,
"text": "f(c') = r."
}
] | https://en.wikipedia.org/wiki?curid=11151490 |
1115177 | Collateralized mortgage obligation | Type of debt security backed by mortgages
A collateralized mortgage obligation (CMO) is a type of complex debt security that repackages and directs the payments of principal and interest from a collateral pool to different types and maturities of securities, thereby meeting investor needs.
CMOs were first created in 1983 by the investment banks Salomon Brothers and First Boston for the U.S. mortgage liquidity provider Freddie Mac. The Salomon Brothers team was led by Lewis Ranieri and the First Boston team by Laurence D. Fink (although Dexter Senft also later received an industry award for his contribution).
Legally, a CMO is a debt security issued by an abstraction—a special purpose entity—and is not a debt owed by the institution creating and operating the entity. The entity is the legal owner of a set of mortgages, called a "pool". Investors in a CMO buy bonds issued by the entity, and they receive payments from the income generated by the mortgages according to a defined set of rules. With regard to terminology, the mortgages themselves are termed "collateral", 'classes' refers to groups of mortgages issued to borrowers of roughly similar credit worthiness, "tranches" are specified fractions or slices, metaphorically speaking, of a pool of mortgages and the income they produce that are combined into an individual security, while the "structure" is the set of rules that dictates how the income received from the collateral will be distributed. The legal entity, collateral, and structure are collectively referred to as the "deal". Unlike traditional mortgage pass-through securities, CMOs feature different payment streams and risks, depending on investor preferences. For tax purposes, CMOs are generally structured as Real Estate Mortgage Investment Conduits, which avoid the potential for "double-taxation".
Investors in CMOs include banks, hedge funds, insurance companies, pension funds, mutual funds, government agencies, and most recently central banks. This article focuses primarily on CMO bonds as traded in the United States.
The term "collateralized mortgage obligation" technically refers to a security issued by a specific type of legal entity dealing in residential mortgages, but investors also frequently refer to deals put together using other types of entities such as real estate mortgage investment conduits as CMOs.
Purpose.
The most basic way a mortgage loan can be transformed into a bond suitable for purchase by an investor would simply be to "split it". For example, a $300,000 30 year mortgage with an interest rate of 6.5% could be split into 300 1000-dollar bonds. These bonds would have a 30-year amortization, and an interest rate of 6.00% for example (with the remaining 0.50% going to the servicing company to send out the monthly bills and perform servicing work). However, this format of bond has various problems for various investors
Salomon Brothers and First Boston created the CMO concept to address these issues. A CMO is essentially a way to create many different kinds of bonds from the same mortgage loan so as to please many different kinds of investors. For example:
Whenever a group of mortgages is split into different classes of bonds, the risk does not disappear. Rather, it is reallocated among the different classes. Some classes receive less risk of a particular type; other classes more risk of that type. How much the risk is reduced or increased for each class depends on how the classes are structured.
Credit protection.
CMOs are most often backed by mortgage loans, which are originated by thrifts (savings and loans), mortgage companies, and the consumer lending units of large commercial banks. Loans meeting certain size and credit criteria can be insured against losses resulting from borrower delinquencies and defaults by any of the Government Sponsored Enterprises (GSEs) (Freddie Mac, Fannie Mae, or Ginnie Mae). GSE guaranteed loans can serve as collateral for "Agency CMOs", which are subject to interest rate risk but not credit risk. Loans not meeting these criteria are referred to as "Non-Conforming", and can serve as collateral for "private label mortgage bonds", which are also called "whole loan CMOs". Whole loan CMOs are subject to both credit risk and interest rate risk. Issuers of whole loan CMOs generally structure their deals to reduce the credit risk of certain classes of bonds ("Senior Bonds") by utilizing various forms of credit protection in the structure of the deal.
Credit tranching.
The most common form of credit protection is called credit tranching. In the simplest case, credit tranching means that any credit losses will be absorbed by the most junior class of bondholders until the principal value of their investment reaches zero. If this occurs, the next class of bonds (more senior) absorb credit losses, and so forth, until finally the senior bonds begin to experience losses. More frequently, a deal is embedded with certain "triggers" related to quantities of delinquencies or defaults in the loans backing the mortgage pool. If a balance of delinquent loans reaches a certain threshold, interest and principal that would be used to pay junior bondholders is instead directed to pay off the principal balance of senior bondholders, shortening the life of the senior bonds.
Overcollateralization.
In CMOs backed by loans of lower credit quality, such as subprime mortgage loans, the issuer will sell a quantity of bonds whose principal value is less than the value of the underlying pool of mortgages. Because of the excess collateral, investors in the CMO will not experience losses until defaults on the underlying loans reach a certain level.
If the "overcollateralization" turns into "undercollateralization" (the assumptions of the default rate were inadequate), then the CMO defaults. CMOs have contributed to the subprime mortgage crisis.
Excess spread.
Another way to enhance credit protection is to issue bonds that pay a lower interest rate than the underlying mortgages. For example, if the weighted average interest rate of the mortgage pool is 7%, the CMO issuer could choose to issue bonds that pay a 5% coupon. The additional interest, referred to as "excess spread", is placed into a "spread account" until some or all of the bonds in the deal mature. If some of the mortgage loans go delinquent or default, funds from the excess spread account can be used to pay the bondholders. Excess spread is a very effective mechanism for protecting bondholders from defaults that occur late in the life of the deal because by that time the funds in the excess spread account will be sufficient to cover almost any losses.
Prepayment tranching.
The principal (and associated coupon) stream for CMO collateral can be structured to allocate prepayment risk. Investors in CMOs wish to be protected from prepayment risk as well as credit risk. Prepayment risk is the risk that the term of the security will vary according to differing rates of repayment of principal by borrowers (repayments from refinancings, sales, curtailments, or foreclosures). If principal is prepaid faster than expected (for example, if mortgage rates fall and borrowers refinance), then the overall term of the mortgage collateral will shorten, and the principal returned at par will cause a loss for premium priced collateral. This prepayment risk cannot be removed, but can be reallocated between CMO tranches so that some tranches have some protection against this risk, whereas other tranches will absorb more of this risk. To facilitate this allocation of prepayment risk, CMOs are structured such that prepayments are allocated between bonds using a fixed set of rules. The most common schemes for prepayment tranching are described below.
Sequential tranching (or "by time").
All of the available principal payments go to the first sequential tranche, until its balance is decremented to zero, then to the second, and so on. There are several reasons that this type of tranching would be done.
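A minimal sketch of the sequential principal waterfall is shown below, using assumed toy balances and collections rather than data from any actual deal:

```python
# Toy sequential-pay waterfall: each period's principal collections fill
# tranche A first, then B, then C.  Balances and collections are assumed.
tranches = {"A": 300.0, "B": 200.0, "C": 100.0}          # original balances, $mm
collections = [60.0, 80.0, 120.0, 90.0, 150.0, 100.0]    # principal received per period, $mm

for period, cash in enumerate(collections, start=1):
    for name in tranches:                 # dict preserves order: A, then B, then C
        paid = min(cash, tranches[name])
        tranches[name] -= paid
        cash -= paid
        if cash == 0.0:
            break
    print(period, {k: round(v, 1) for k, v in tranches.items()})
# Tranche A retires first and has the shortest weighted-average life; C, paid
# last, has the longest life and absorbs most of the timing uncertainty.
```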
Parallel tranching.
This simply means tranches that pay down "pro rata". The coupons on the tranches would be set so that in aggregate the tranches pay the same amount of interest as the underlying mortgages. The tranches could be either fixed rate or floating rate. If they have floating coupons, they would have a formula that make their total interest equal to the collateral interest. For example, with collateral that pays a coupon of 8%, you could have two tranches that each have half of the principal, one being a floater that pays LIBOR with a cap of 16%, the other being an inverse floater that pays a coupon of 16% minus LIBOR.
Z bonds.
This type of tranche supports other tranches by not receiving an interest payment. The interest payment that would have accrued to the Z tranche is used to pay off the principal of other bonds, and the principal of the Z tranche increases. The Z tranche starts receiving interest and principal payments only after the other tranches in the CMO have been fully paid. This type of tranche is often used to customize sequential tranches, or VADM tranches.
Schedule bonds (also called PAC or TAC bonds).
This type of tranching has a bond (often called a PAC or TAC bond) which has even less uncertainty than a sequential bond by receiving prepayments according to a defined schedule. The schedule is maintained by using support bonds (also called companion bonds) that absorb the excess prepayments.
Very accurately defined maturity (VADM) bonds.
Very accurately defined maturity (VADM) bonds are similar to PAC bonds in that they protect against both extension and contraction risk, but their payments are supported in a different way. Instead of a support bond, they are supported by the accretion of a Z bond. This means a VADM tranche will receive its scheduled payments even if no prepayments are made on the underlying. The VADM bond concept was named after its inventor, Vadim Khazatsky, a trader at Salomon Brothers, highlighting its innovative approach to managing prepayment risks associated with mortgage-backed securities.
Non-accelerating senior (NAS).
NAS bonds are designed to protect investors from volatility and negative convexity resulting from prepayments. NAS tranches of bonds are fully protected from prepayments for a specified period, after which time prepayments are allocated to the tranche using a specified step down formula. For example, an NAS bond might be protected from prepayments for five years, and then would receive 10% of the prepayments for the first month, then 20%, and so on. Recently, issuers have added features to accelerate the proportion of prepayments flowing to the NAS class of bond in order to create shorter bonds and reduce extension risk. NAS tranches are usually found in deals that also contain short sequentials, Z-bonds, and credit subordination.
A NAS tranche receives principal payments according to a schedule which shows for a given month the share of pro rata principal that must be distributed to the NAS tranche.
NASquential.
NASquentials were introduced in mid-2005 and represented an innovative structural twist, combining the standard NAS (Non-Accelerated Senior) and Sequential structures. Similar to a sequential structure, the NASquentials are tranched sequentially, however, each tranche has a NAS-like hard lockout date associated with it. Unlike with a NAS, no shifting interest mechanism is employed after the initial lockout date. The resulting bonds offer superior stability versus regular sequentials, and yield pickup versus PACs. The support-like cashflows falling out on the other side of NASquentials are sometimes referred to as RUSquentials (Relatively Unstable Sequentials).
Coupon tranching.
The coupon stream from the mortgage collateral can also be restructured (analogous to the way the principal stream is structured). This coupon stream allocation is performed after prepayment tranching is complete. If the coupon tranching is done on the collateral without any prepayment tranching, then the resulting tranches are called 'strips'. The benefit is that the resulting CMO tranches can be targeted to very different sets of investors. In general, coupon tranching will produce a pair (or set) of complementary CMO tranches.
IO/discount fixed rate pair.
A fixed rate CMO tranche can be further restructured into an Interest Only (IO) tranche and a discount coupon fixed rate tranche. An IO pays a coupon based only on a notional principal; it receives no principal payments from amortization or prepayments. Notional principal does not have any cash flows but shadows the principal changes of the original tranche, and it is this principal off which the coupon is calculated. For example, a $100mm PAC tranche off 6% collateral with a 6% coupon ('6 off 6' or '6-squared') can be cut into a $100mm PAC tranche with a 5% coupon (and hence a lower dollar price) called a '5 off 6', and a PAC IO tranche with a notional principal of $16.666667mm and paying a 6% coupon. Note that the resulting notional principal of the IO is less than the original principal. Using the example, the IO is created by taking 1% of coupon off the 6% original coupon, giving an IO of 1% coupon off $100mm notional principal, but this is by convention 'normalized' to a 6% coupon (as the collateral was originally 6% coupon) by reducing the notional principal to $16.666667mm ($100mm / 6).
PO/premium fixed rate pair.
Similarly, if a fixed rate CMO tranche coupon is desired to be increased, then principal can be removed to form a Principal Only (PO) class and a premium fixed rate tranche. A PO pays no coupon, but receives principal payments from amortization and prepayments. For example, a $100mm sequential (SEQ) tranche off 6% collateral with a 6% coupon ('6 off 6') can be cut into a $92.307692mm SEQ tranche with a 6.5% coupon (and hence a higher dollar price) called a '6.5 off 6', and a SEQ PO tranche with a principal of $7.692308mm and paying no coupon. The principal of the premium SEQ is calculated as (6 / 6.5) * $100mm; the principal of the PO is the remaining balance of the $100mm.
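The re-scalings in the two examples above reduce to simple proportions, which the following sketch checks using only the figures already quoted:

```python
# Quick arithmetic check of the two coupon-restructuring examples above.
collateral_principal = 100.0      # $mm
collateral_coupon = 6.0           # %

# IO / discount pair: strip 1% of coupon, quote the IO at the 6% collateral coupon
io_notional = collateral_principal * 1.0 / collateral_coupon
print(io_notional)                              # 16.666... $mm

# PO / premium pair: raise the coupon to 6.5% on a smaller paying principal
premium_principal = collateral_principal * collateral_coupon / 6.5
po_principal = collateral_principal - premium_principal
print(premium_principal, po_principal)          # 92.307..., 7.692... $mm

# Interest is conserved in both splits (units: % times $mm):
assert abs(5.0 * 100.0 + 6.0 * io_notional - 6.0 * 100.0) < 1e-9      # IO/discount
assert abs(6.5 * premium_principal - 6.0 * 100.0) < 1e-9              # PO/premium
```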
IO/PO pair.
The simplest coupon tranching is to allocate the coupon stream to an IO, and the principal stream to a PO. This is generally only done on the whole collateral without any prepayment tranching, and generates strip IOs and strip POs. In particular FNMA and FHLMC both have extensive strip IO/PO programs (aka Trusts IO/PO or SMBS) which generate very large, liquid strip IO/PO deals at regular intervals.
Floater/inverse pair.
The construction of CMO floaters is the most effective means of getting additional market liquidity for CMOs. CMO floaters have a coupon that moves in line with a given index (usually 1 month LIBOR) plus a spread, and are thus seen as a relatively safe investment even though the term of the security may change. One feature of CMO floaters that is somewhat unusual is that they have a coupon cap, usually set well out of the money (e.g. 8% when LIBOR is 5%). In creating a CMO floater, a CMO inverse is generated. The CMO inverse is a more complicated instrument to hedge and analyse, and is usually sold to sophisticated investors.
The construction of a floater/inverse can be seen in two stages. The first stage is to synthetically raise the effective coupon to the target floater cap, in the same way as done for the PO/Premium fixed rate pair. As an example using $100mm 6% collateral, targeting an 8% cap, we generate $25mm of PO and $75mm of '8 off 6'. The next stage is to cut up the premium coupon into a floater and inverse coupon, where the floater is a linear function of the index, with unit slope and a given offset or spread. In the example, the 8% coupon of the '8 off 6' is cut into a floater coupon of:
formula_0
The inverse formula is simply the difference of the original premium fixed rate coupon less the floater formula. In the example:
formula_1
The floater coupon is allocated to the premium fixed rate tranche principal, in the example the $75mm '8 off 6', giving the floater tranche of '$75mm 8% cap + 40bps LIBOR SEQ floater'. The floater will pay LIBOR + 0.40% each month on an original balance of $75mm, subject to a coupon cap of 8%.
The inverse coupon is to be allocated to the PO principal, but has been generated off the notional principal of the premium fixed rate tranche (in the example the PO principal is $25mm but the inverse coupon is notionalized off $75mm). Therefore, the inverse coupon is 're-notionalized' to the smaller principal amount, in the example by multiplying the coupon by ($75mm / $25mm) = 3. Therefore, the resulting coupon is:
formula_2
In the example the inverse generated is a '$25mm 3 times levered 7.6 strike LIBOR SEQ inverse'.
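A minimal sketch (illustrative Python, assuming the simplified one-period arithmetic of the example above) of the two-stage floater/inverse construction, including the re-notionalization of the inverse coupon onto the PO principal:

```python
def floater_inverse(collateral_principal, collateral_coupon, cap, spread, libor):
    """Two-stage floater/inverse construction as sketched in the text.

    Stage 1: strip principal into a PO so the remaining premium tranche
    carries the cap as its coupon.  Stage 2: split the premium coupon into a
    capped floater (index + spread) and a leveraged inverse coupon that is
    re-notionalized onto the PO principal.
    """
    premium_principal = collateral_principal * collateral_coupon / cap  # $75mm in the example
    po_principal = collateral_principal - premium_principal             # $25mm
    leverage = premium_principal / po_principal                         # 3x

    floater_coupon = min(libor + spread, cap)                           # capped at 8%
    inverse_coupon = max(leverage * (cap - spread - libor), 0.0)        # 22.8% - 3*LIBOR, floored at 0
    return floater_coupon, inverse_coupon, premium_principal, po_principal


# Example: $100mm of 6% collateral, 8% cap, 40 bp spread, LIBOR at 5%
print(floater_inverse(100e6, 0.06, 0.08, 0.004, 0.05))   # floater 5.4%, inverse 7.8%
```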
Other structures.
Other structures include Inverse IOs, TTIBs, Digital TTIBs/Superfloaters, and 'mountain' bonds. A special class of IO/POs generated in non-agency deals are WAC IOs and WAC POs, which are used to build a fixed pass through rate on a deal.
Attributes of IOs and POs.
Interest only (IO).
An interest only (IO) tranche may be carved from collateral securities to receive just the interest payments from a pool of mortgages. IO holders are only entitled to the actual amount of interest paid, as it is paid, on a pool of mortgage collateral. Since mortgages allow for prepayment, there is no assurance how much interest will actually be paid. Once an underlying debt is paid off, that debt's future stream of interest is terminated and the IO expires with no terminal value. Therefore, IO securities are annuity-like securities, but the amount and timing of payments are uncertain, as payments are based on the total interest payments paid on all the underlying mortgages in the collateral pool.
Generally speaking, mortgage prepayments tend to slow down as general interest rates increase. Therefore, IO prices generally increase as interest rates increase and decrease as interest rates decrease (i.e. negative duration). If mortgage prepayments increase, or the market's expectations of future prepayments increase (i.e. higher expected PSA speed), the expected aggregate dollar amount of interest payments [and therefore the market price of the IO tranche] would generally be expected to decrease. By contrast, if mortgage prepayments decrease, or the market's expectations of future prepayments decrease (i.e. lower expected PSA speed), the expected dollar amount of interest payments [and therefore the market price of the IO tranche] would generally be expected to increase.
Therefore, IOs have investor demand due to their expected negative effective duration, as they can be used as a hedge against conventional fixed-income securities in a portfolio. Additionally, since investors are only buying a portion of the overall cash flows and are not entitled to any payments of principal (the PO tranche), the cost of the IO tranche may be significantly lower than that of the PO tranche. While IO and PO tranches have different risk characteristics, neither the IO nor the PO represents a leveraged position in the underlying collateral pool.
Principal only (PO).
A principal only (PO) strip may be carved from collateral securities to receive just the principal portion of a payment. Since the IO tranche has negative duration, a PO typically has more effective duration than its collateral. One may think of this in two ways: 1. The increased effective duration must balance the matching IO's negative effective duration to equal the collateral's effective duration, or 2. Bonds with lower coupons usually have higher effective durations, and a PO has no [zero] coupon. POs have investor demand as hedges against IO-type streams (e.g. mortgage servicing rights).
References.
Footnotes.
<templatestyles src="Reflist/styles.css" />
Works cited.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "1 \\times \\text{LIBOR} + 0.40\\%"
},
{
"math_id": 1,
"text": "8\\% - \\left(1\\times\\text{LIBOR} + 0.40\\%\\right) = 7.60\\% - 1\\times\\text{LIBOR}"
},
{
"math_id": 2,
"text": "3\\times \\left(7.60\\% - 1\\times\\text{LIBOR}\\right) = 22.8\\% - 3\\times\\text{LIBOR}"
}
] | https://en.wikipedia.org/wiki?curid=1115177 |
11153041 | Saint-Venant's principle | Saint-Venant's principle, named after Adhémar Jean Claude Barré de Saint-Venant, a French elasticity theorist, may be expressed as follows:
<templatestyles src="Template:Blockquote/styles.css" />... the difference between the effects of two different but statically equivalent loads becomes very small at sufficiently large distances from load.
The original statement was published in French by Saint-Venant in 1855. Although this informal statement of the principle is well known among structural and mechanical engineers, more recent mathematical literature gives a rigorous interpretation in the context of partial differential equations. An early such interpretation was made by Richard von Mises in 1945.
The Saint-Venant's principle allows elasticians to replace complicated stress distributions or weak boundary conditions with ones that are easier to solve, as long as that boundary is geometrically short. Quite analogous to the electrostatics, where the product of the distance and electric field due to the "i"-th moment of the load (with 0th being the net charge, 1st the dipole, 2nd the quadrupole) decays as formula_0 over space, Saint-Venant's principle states that high order moment of mechanical load (moment with order higher than torque) decays so fast that they never need to be considered for regions far from the short boundary. Therefore, the Saint-Venant's principle can be regarded as a statement on the asymptotic behavior of the Green's function by a point-load. | [
{
"math_id": 0,
"text": "1/r^{2+i} "
}
] | https://en.wikipedia.org/wiki?curid=11153041 |
11153189 | Casus irreducibilis | When a polynomial's solution cannot be expressed in radicals without complex numbers
In algebra, "casus irreducibilis" (from Latin: 'the irreducible case') is one of the cases that may arise in solving polynomials of degree 3 or higher with integer coefficients algebraically (as opposed to numerically), i.e., by obtaining roots that are expressed with radicals. It shows that many algebraic numbers are real-valued but cannot be expressed in radicals without introducing complex numbers. The most notable occurrence of "casus irreducibilis" is in the case of cubic polynomials that have three real roots, which was proven by Pierre Wantzel in 1843.
One can see whether a given cubic polynomial is in the so-called "casus irreducibilis" by looking at the discriminant, via Cardano's formula.
The three cases of the discriminant.
Let
formula_0
be a cubic equation with formula_1. Then the discriminant is given by
formula_2
It appears in the algebraic solution and is the square of the product
formula_3
of the formula_4 differences of the 3 roots formula_5.
The algebraically solved cases are distinguished as follows:
If "D" < 0, then formula_6, and there is one real root and two non-real complex conjugate roots; in this case the radicands in Cardano's formula are real, so the real root is expressible by real radicals.
If "D" = 0, then formula_7 and there are three real roots; two of them are equal. Whether "D" = 0 can be found out by the Euclidean algorithm, and if so, the roots by the quadratic formula. Moreover, all roots are real and expressible by real radicals. All the cubic polynomials with zero discriminant are reducible.
If "D" > 0, then formula_8 and there are three distinct real roots; if, in addition, the polynomial is irreducible, this is the "casus irreducibilis" and the roots cannot be expressed by radicals with only real radicands.
Formal statement and proof.
More generally, suppose that "F" is a formally real field, and that "p"("x") ∈ "F"["x"] is a cubic polynomial, irreducible over "F", but having three real roots (roots in the real closure of "F"). Then "casus irreducibilis" states that it is impossible to express a solution of "p"("x") = 0 by radicals with radicands ∈ "F".
To prove this, note that the discriminant "D" is positive. Form the field extension "F"(√"D") = "F"(∆). Since this is "F" or a quadratic extension of "F" (depending on whether or not "D" is a square in "F"), "p"("x") remains irreducible in it. Consequently, the Galois group of "p"("x") over "F"(√"D") is the cyclic group "C"3. Suppose that "p"("x") = 0 can be solved by real radicals. Then "p"("x") can be split by a tower of cyclic extensions
formula_9
At the final step of the tower, "p"("x") is irreducible in the penultimate field "K", but splits in "K"(3√"α") for some "α". But this is a cyclic field extension, and so must contain a conjugate of 3√"α" and therefore a primitive 3rd root of unity.
However, there are no primitive 3rd roots of unity in a real closed field. Suppose that ω is a primitive 3rd root of unity. Then, by the axioms defining an ordered field, ω and ω2 are both positive, because otherwise their cube (=1) would be negative. But if ω2>ω, then cubing both sides gives 1>1, a contradiction; similarly if ω>ω2.
Solution in non-real radicals.
Cardano's solution.
The equation "ax"3 + "bx"2 + "cx" + "d"
0 can be depressed to a monic trinomial by dividing by formula_10 and substituting "x"
"t" − (the Tschirnhaus transformation), giving the equation "t"3 + "pt" + "q"
0 where
formula_11
formula_12
Then regardless of the number of real roots, by Cardano's solution the three roots are given by
formula_13
where formula_14 ("k"=1, 2, 3) is a cube root of 1 (formula_15, formula_16, and formula_17, where "i" is the imaginary unit). Here if the radicands under the cube roots are non-real, the cube roots expressed by radicals are defined to be any pair of complex conjugate cube roots, while if they are real these cube roots are defined to be the real cube roots.
"Casus irreducibilis" occurs when none of the roots are rational and when all three roots are distinct and real; the case of three distinct real roots occurs if and only if + < 0, in which case Cardano's formula involves first taking the square root of a negative number, which is imaginary, and then taking the cube root of a complex number (the cube root cannot itself be placed in the form "α" + "βi" with specifically given expressions in real radicals for "α" and "β", since doing so would require independently solving the original cubic). Even in the reducible case in which one of three real roots is rational and hence can be factored out by polynomial long division, Cardano's formula (unnecessarily in this case) expresses that root (and the others) in terms of non-real radicals.
Example.
The cubic equation
formula_18
is irreducible, because if it could be factored there would be a linear factor giving a rational solution, while none of the possible roots given by the rational root test are actually roots. Since its discriminant is positive, it has three real roots, so it is an example of "casus irreducibilis." These roots can be expressed as
formula_19
for formula_20. The solutions are in radicals and involve the cube roots of complex conjugate numbers.
Trigonometric solution in terms of real quantities.
While "casus irreducibilis" cannot be solved in radicals in terms of real quantities, it "can" be solved trigonometrically in terms of real quantities. Specifically, the depressed monic cubic equation formula_21 is solved by
formula_22
These solutions are in terms of real quantities if and only if formula_23 — i.e., if and only if there are three real roots. The formula involves starting with an angle whose cosine is known, trisecting the angle by multiplying it by 1/3, and taking the cosine of the resulting angle and adjusting for scale.
Although cosine and its inverse function (arccosine) are transcendental functions, this solution is algebraic in the sense that formula_24 is an algebraic function, equivalent to angle trisection.
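A brief sketch (illustrative Python, assuming the depressed form and the substitution "x" = "t" − "b"/(3"a") given earlier) of this trigonometric solution, applied to the example cubic 2"x"3 − 9"x"2 − 6"x" + 3 = 0 from the previous section:

```python
import math

def depressed_real_roots(p, q):
    """Three real roots of t^3 + p*t + q = 0 when q^2/4 + p^3/27 < 0."""
    assert q * q / 4 + p ** 3 / 27 < 0, "formula requires three distinct real roots"
    m = 2 * math.sqrt(-p / 3)
    phi = math.acos(3 * q / (2 * p) * math.sqrt(-3 / p))
    return [m * math.cos(phi / 3 - 2 * math.pi * k / 3) for k in range(3)]

a, b, c, d = 2.0, -9.0, -6.0, 3.0                  # the example cubic above
p = (3 * a * c - b * b) / (3 * a * a)
q = (2 * b ** 3 - 9 * a * b * c + 27 * a * a * d) / (27 * a ** 3)
for t in depressed_real_roots(p, q):
    x = t - b / (3 * a)                            # undo the Tschirnhaus substitution
    print(x, a * x ** 3 + b * x ** 2 + c * x + d)  # residuals should be ~0
```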
Relation to angle trisection.
The distinction between the reducible and irreducible cubic cases with three real roots is related to the issue of whether or not an angle is trisectible by the classical means of compass and unmarked straightedge. For any angle "θ", one-third of this angle has a cosine that is one of the three solutions to
formula_25
Likewise, has a sine that is one of the three real solutions to
formula_26
In either case, if the rational root test reveals a rational solution, x or y minus that root can be factored out of the polynomial on the left side, leaving a quadratic that can be solved for the remaining two roots in terms of a square root; then all of these roots are classically constructible since they are expressible in no higher than square roots, so in particular cos("θ"/3) or sin("θ"/3) is constructible and so is the associated angle "θ"/3. On the other hand, if the rational root test shows that there is no rational root, then "casus irreducibilis" applies, cos("θ"/3) or sin("θ"/3) is not constructible, the angle "θ"/3 is not constructible, and the angle "θ" is not classically trisectible.
As an example, while a 180° angle can be trisected into three 60° angles, a 60° angle cannot be trisected with only compass and straightedge. Using triple-angle formulae one can see that cos(60°) = 4"x"3 − 3"x" where "x" = cos(20°). Rearranging gives 8"x"3 − 6"x" − 1 = 0, which fails the rational root test as none of the rational numbers suggested by the theorem is actually a root. Therefore, the minimal polynomial of cos(20°) has degree 3, whereas the degree of the minimal polynomial of any constructible number must be a power of two.
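A small check (illustrative Python) of the rational root test for 8"x"3 − 6"x" − 1, whose only candidate rational roots are ±1, ±1/2, ±1/4 and ±1/8, together with a numerical confirmation that cos(20°) is a root:

```python
import math
from fractions import Fraction

def poly(x):
    return 8 * x ** 3 - 6 * x - 1

candidates = [Fraction(s, q) for q in (1, 2, 4, 8) for s in (1, -1)]
print(any(poly(x) == 0 for x in candidates))   # False: no rational root, so the cubic is irreducible over Q
print(poly(math.cos(math.radians(20))))        # ~0: cos(20 deg) satisfies the equation
```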
Expressing cos(20°) in radicals results in
formula_27
which involves taking the cube root of complex numbers. Note the similarity to "e""iπ"/3 = (1 + "i"√3)/2 and "e""−iπ"/3 = (1 − "i"√3)/2.
The connection between rational roots and trisectability can also be extended to some cases where the sine and cosine of the given angle is irrational. Consider as an example the case where the given angle "θ" is a vertex angle of a regular pentagon, a polygon that can be constructed classically. For this angle "5θ/3" is 180°, and standard trigonometric identities then give
formula_28
thus
formula_29
The cosine of the trisected angle is rendered as a rational expression in terms of the cosine of the given angle, so the vertex angle of a regular pentagon can be trisected (mechanically, by simply drawing a diagonal).
Generalization.
"Casus irreducibilis" can be generalized to higher degree polynomials as follows. Let "p" ∈ "F"["x"] be an irreducible polynomial which splits in a formally real extension "R" of "F" (i.e., "p" has only real roots). Assume that "p" has a root in formula_30 which is an extension of "F" by radicals. Then the degree of "p" is a power of 2, and its splitting field is an iterated quadratic extension of "F".
Thus for any irreducible polynomial whose degree is not a power of 2 and which has all roots real, no root can be expressed purely in terms of real radicals, i.e. it is a "casus irreducibilis" in the (16th century) sense of this article. Moreover, if the polynomial degree "is" a power of 2 and the roots are all real, then if there is a root that can be expressed in real radicals it can be expressed in terms of square roots and no higher-degree roots, as can the other roots, and so the roots are classically constructible.
"Casus irreducibilis" for quintic polynomials is discussed by Dummit.
Relation to angle pentasection (quintisection) and higher.
The distinction between the reducible and irreducible quintic cases with five real roots is related to the issue of whether or not an angle with rational cosine or rational sine is pentasectible (able to be split into five equal parts) by the classical means of compass and unmarked straightedge. For any angle "θ", one-fifth of this angle has a cosine that is one of the five real roots of the equation
formula_31
Likewise, has a sine that is one of the five real roots of the equation
formula_32
In either case, if the rational root test yields a rational root "x"1, then the quintic is reducible since it can be written as a factor ("x" − "x"1) times a quartic polynomial. But if the test shows that there is no rational root, then the polynomial may be irreducible, in which case "casus irreducibilis" applies, cos("θ"/5) and sin("θ"/5) are not constructible, the angle "θ"/5 is not constructible, and the angle "θ" is not classically pentasectible. An example of this is when one attempts to construct a 25-gon (icosipentagon) with compass and straightedge. While a pentagon is relatively easy to construct, a 25-gon requires an angle pentasector as the minimal polynomial for cos(14.4°) has degree 10:
formula_33
Thus,
formula_34
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "ax^3+bx^2+cx+d=0"
},
{
"math_id": 1,
"text": "a\\ne0"
},
{
"math_id": 2,
"text": "D := \\bigl((x_1-x_2)(x_1-x_3)(x_2-x_3)\\bigr)^2 = 18abcd - 4ac^3 - 27a^2d^2 + b^2c^2 -4b^3d~."
},
{
"math_id": 3,
"text": "\\Delta := \\prod_{j<k}(x_j-x_k) = (x_1-x_2)(x_1-x_3)(x_2-x_3) \\qquad \\qquad \\bigl(\\!= \\pm\\sqrt{D}\\bigr)"
},
{
"math_id": 4,
"text": "\\tbinom32 = 3"
},
{
"math_id": 5,
"text": "x_1,x_2,x_3"
},
{
"math_id": 6,
"text": "\\Delta\\in i\\R^\\times"
},
{
"math_id": 7,
"text": "\\Delta=0"
},
{
"math_id": 8,
"text": "\\Delta\\in\\R^\\times"
},
{
"math_id": 9,
"text": " F\\sub F(\\sqrt{D})\\sub F(\\sqrt{D}, \\sqrt[p_1]{\\alpha_1}) \\sub\\cdots \\sub K\\sub K(\\sqrt[3]{\\alpha})"
},
{
"math_id": 10,
"text": "a"
},
{
"math_id": 11,
"text": "p=\\frac{3ac-b^2}{3a^2}"
},
{
"math_id": 12,
"text": "q=\\frac{2b^3-9abc+27a^2d}{27a^3}."
},
{
"math_id": 13,
"text": " t_k = \\omega_k \\sqrt[3]{-{q\\over 2}+ \\sqrt{{q^{2}\\over 4}+{p^{3}\\over 27}}} + \\omega_k^2 \\sqrt[3]{-{q\\over 2}- \\sqrt{{q^{2}\\over 4}+{p^{3}\\over 27}}}"
},
{
"math_id": 14,
"text": " \\omega_k"
},
{
"math_id": 15,
"text": "\\omega_1 = 1"
},
{
"math_id": 16,
"text": "\\omega_2 = -\\frac{1}{2} + \\frac{\\sqrt{3}}{2}i"
},
{
"math_id": 17,
"text": "\\omega_3 = -\\frac{1}{2} - \\frac{\\sqrt{3}}{2}i"
},
{
"math_id": 18,
"text": "2x^3-9x^2-6x+3=0"
},
{
"math_id": 19,
"text": "t_k=\\frac{3-\\omega_k\\sqrt[3]{39-26i}-\\omega_k^2\\sqrt[3]{39+26i}}{2}"
},
{
"math_id": 20,
"text": "k\\in\\left\\{1, 2, 3\\right\\}"
},
{
"math_id": 21,
"text": "t^3+pt+q=0 "
},
{
"math_id": 22,
"text": "t_k=2\\sqrt{-\\frac{p}{3}}\\cos\\left[\\frac{1}{3}\\arccos\\left(\\frac{3q}{2p}\\sqrt{\\frac{-3}{p}}\\right)-k\\frac{2\\pi}{3}\\right] \\quad \\text{for} \\quad k=0,1,2 \\,."
},
{
"math_id": 23,
"text": "{q^{2}\\over 4}+{p^{3}\\over 27} < 0"
},
{
"math_id": 24,
"text": "\\cos\\left[\\arccos\\left(x\\right)/3\\right]"
},
{
"math_id": 25,
"text": "4x^3-3x-\\cos(\\theta)=0."
},
{
"math_id": 26,
"text": "4y^3-3y+\\sin(\\theta)=0."
},
{
"math_id": 27,
"text": "\\cos\\left(\\frac{\\pi}{9}\\right)=\\frac{\\sqrt[3]{1-i\\sqrt{3}}+\\sqrt[3]{1+i\\sqrt{3}}}{2\\sqrt[3]{2}}"
},
{
"math_id": 28,
"text": " \\cos(\\theta)+\\cos(\\theta/3) = 2\\cos(\\theta/3)\\cos(2\\theta/3)\n=-2\\cos(\\theta/3)\\cos(\\theta)"
},
{
"math_id": 29,
"text": " \\cos(\\theta/3) = -\\cos(\\theta)/(1+2\\cos(\\theta))."
},
{
"math_id": 30,
"text": "K\\subseteq R"
},
{
"math_id": 31,
"text": "16x^5-20x^3+5x-\\cos(\\theta)=0."
},
{
"math_id": 32,
"text": "16y^5-20y^3+5y-\\sin(\\theta)=0."
},
{
"math_id": 33,
"text": "\\begin{align}\n\\cos\\left(\\frac{2\\pi}{5}\\right) &= \\frac{\\sqrt{5}-1}{4} \\\\\n16x^5-20x^3+5x+\\frac{1-\\sqrt{5}}{4} &= 0 \\qquad\\qquad x=\\cos\\left(\\frac{2\\pi}{25}\\right) \\\\\n4\\left(16x^5-20x^3+5x+\\frac{1-\\sqrt{5}}{4}\\right)\\left(16x^5-20x^3+5x+\\frac{1+\\sqrt{5}}{4}\\right) &= 0 \\\\\n4\\left(16x^5-20x^3+5x\\right)^2+2\\left(16x^5-20x^3+5x\\right)-1 &= 0 \\\\\n1024x^{10}-2560x^8+2240 x^6+32x^5-800 x^4-40x^3+100x^2+10x-1 &= 0.\n\\end{align}"
},
{
"math_id": 34,
"text": "\\begin{align}\ne^{2\\pi i/5} &= \\frac{-1+\\sqrt{5}}{4}+\\frac{\\sqrt{10+2\\sqrt{5}}}{4}i \\\\\ne^{-2\\pi i/5} &= \\frac{-1+\\sqrt{5}}{4}-\\frac{\\sqrt{10+2\\sqrt{5}}}{4}i \\\\\n\\cos\\left(\\frac{2\\pi}{25}\\right) &= \\frac{\\sqrt[5]{-1+\\sqrt{5}-i\\sqrt{10+2\\sqrt{5}}}+\\sqrt[5]{-1+\\sqrt{5}+i\\sqrt{10+2\\sqrt{5}}}}{2\\sqrt[5]{4}}.\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=11153189 |
1115483 | 2-8-2 | Locomotive wheel arrangement
Under the Whyte notation for the classification of steam locomotives, 2-8-2 represents the wheel arrangement of two leading wheels on one axle, usually in a leading truck, eight powered and coupled driving wheels on four axles and two trailing wheels on one axle, usually in a trailing truck. This configuration of steam locomotive is most often referred to as a Mikado, frequently shortened to Mike.
It was also at times referred to on some railroads in the United States as the McAdoo Mikado and, during World War II, the MacArthur.
The notation 2-8-2T indicates a tank locomotive of this wheel arrangement, the "T" suffix indicating a locomotive on which the water is carried in tanks mounted on the engine rather than in an attached tender.
<templatestyles src="Template:TOC limit/styles.css" />
Overview.
The 2-8-2 wheel arrangement allowed the locomotive's firebox to be placed behind instead of above the driving wheels, thereby allowing a larger firebox that could be both wide and deep. This supported a greater rate of combustion and thus a greater capacity for steam generation, allowing for more power at higher speeds. Allied with the larger driving wheel diameter which was possible when they did not impinge on the firebox, it meant that the 2-8-2 was capable of higher speeds than a 2-8-0 with a heavy train. These locomotives did not suffer from the imbalance of reciprocating parts as much as did the 2-6-2 or the 2-10-2, because the center of gravity was between the second and third drivers instead of above the centre driver.
The first 2-8-2 locomotive was built in 1884. It was originally named "Calumet" by Angus Sinclair, in reference to the 2-8-2 engines built for the Chicago & Calumet Terminal Railway (C&CT). However, this name did not take hold.
The wheel arrangement name "Mikado" originated from a group of 2-8-2 locomotives that were built by Baldwin Locomotive Works for the gauge Nippon Railway of Japan in 1897. In the 19th century, the Emperor of Japan was often referred to as "the Mikado" in English. The Gilbert and Sullivan opera, "The Mikado", set in Japan, had premiered in 1885 and achieved great popularity in both Britain and America.
The 2-8-2 was one of the more common configurations in the first half of the 20th century, before dieselisation. Between 1917 and 1944, nearly 2,200 of this type were constructed by Baldwin, the American Locomotive Company (ALCO) and the Lima Locomotive Works, based on designs by the United States Railroad Administration (USRA). It was also known as the "McAdoo Mikado" in the United States, after William Gibbs McAdoo who was appointed as Director General of Railroads when the United States commenced hostilities during the latter part of the First World War and the USRA was established. Of all of the USRA designs, the Mikado proved to be the most popular. The total American production was about 14,000, of which 9,500 were for local customers and the rest exported.
"Mikado" remained the type name until the attack on Pearl Harbor in 1941. Seeking a more American name, "MacArthur", after General Douglas MacArthur, came into use to describe the locomotive type in the United States. After the war, the type name "Mikado" again became the most common for that locomotive type.
Usage.
Locomotives of this wheel arrangement saw service on all six populated continents. The 2-8-2 type was particularly popular in North America, but was also used extensively in Continental Europe and elsewhere.
Argentina.
broad gauge.
The Buenos Aires and Pacific Railway bought eighteen 2-8-2T locomotives in three batches of six as their 701 class. The first two batches came from North British Locomotive Company in 1908 and 1912, the third from Henschel & Son in 1913.
The BA&P also bought eight 2-8-2 tender locomotives from Beyer, Peacock & Company in 1928 as their 3001 class.
The Central Argentine Railway (FCCA) bought fifteen 2-8-2T locomotives as their class C7 in 1912; they were built by Robert Stephenson & Company with works numbers 3506 to 3520.
The FCCA also bought sixty 2-8-2 locomotives: twenty class CS8A from Beyer, Peacock & Company in 1926, and another twenty in 1928 from Robert Stephenson & Company. The final twenty to class CS9A were supplied by Vulcan Foundry in 1930. Both classes were cross-compound locomotives with one high-pressure cylinder with a bore of and one low-pressure cylinder with a bore of , with a stroke of . The earlier class had coupled wheels with a diameter of , whereas on the later class they were .
Standard gauge.
The East Argentine Railway bought four 2-8-2 locomotives from Baldwin Locomotive Works in 1924. As class X they were numbered 70 to 74; they became General Urquiza Railway 701 to 704 in the 1948 nationalisation. Baldwin had classified them as 12-30-<templatestyles src="Fraction/styles.css" />1⁄4-E.
Metre gauge.
The Province of Buenos Aires Railway bought a single 2-8-2 locomotive from Hanomag of Germany in 1910. Numbered 251 and classified as class E, it was the only 2-8-2 on that railway's system.
The Central Northern Railway (FCCN) bought seven classes of 2-8-2 locomotives totalling 134 locomotives. The first 100 were all bought in 1911: Fifteen from Borsig (class C7, numbered 700–714), 25 from Henschel & Sohn (class C8, 715–739), 10 from Hanomag (class C9, 740–749) and 50 from North British Locomotive (class C10, 750–799). The next 25 came from Baldwin Locomotive Works in 1920; they were Baldwin class 12-30-<templatestyles src="Fraction/styles.css" />1⁄4-E, 55 to 79, FCCN class C11, numbered 7000–7024. The last nine new locomotives were built by Henschel between 1928 and 1930 (class C13, numbers 7025–7033, and class C13A, number 7034). In addition the FCCN rebuilt 20 4-8-0 locomotives of classes C6 and C7 into 2-8-2s between 1938 and 1940.
The Córdoba Central Railway (FCCC) bought 31 locomotives in four classes. The first was a solitary locomotive, numbered 800, class C6A built by Alco's Brooks Works in 1910. It was nearly a decade before they bought any more with a dozen class C9A locomotives, numbered 1451 to 1462, coming from Montreal Locomotive Works, half in 1919 and half in 1920. MLW delivered another 15 Mikados later that same year; as class C10A they were numbered 1463 to 1477. FCCC's final three came from Baldwin Locomotive Works in 1925, they were Baldwin class 12-26-<templatestyles src="Fraction/styles.css" />1⁄4-E; FCCC numbered them 1501 to 1503, class C11A. When the FCCC was taken over by the FCCN in 1939, their new owner changed the classification by adding 20 to the FCCC's old classification; the locomotives kept their old numbers, except for FCCC 800 which became FCCN 1400.
750 mm gauge.
On the Ferrocarriles Patagónicos, 75 locomotives were bought in 1922. Fifty were built by Henschel & Sohn, numbered 101 to 150 and class 75H; 25 were built by Baldwin, numbered 1 to 25, class 75B with Baldwin classifying then as 12-18-<templatestyles src="Fraction/styles.css" />1⁄4-E.
Australia.
One of the world's first 2-8-2T designs was the South Maitland Railways 10 Class, first delivered in 1911 by Beyer, Peacock, with deliveries continuing sporadically until 1925, by which time the class totalled 14 locomotives.
The requirement for locomotives that could be converted from to without major re-engineering led to the introduction of Mikado locomotives by the Victorian Railways (VR) in the 1920s. Whereas previous 2-8-0 Consolidation type locomotives featured long, narrow fireboxes between the frames that made gauge conversion impractical, the N class light lines and X class heavy goods locomotives both featured wide fireboxes positioned behind the coupled wheels and above the frames.
The South Australian Railways (SAR) employed four distinct classes of 2-8-2 locomotive, the locally designed 700 and 710 class, the 740 class that was originally built for China by Clyde Engineering and purchased by the SAR after the order was cancelled in the wake of the Chinese Communist Revolution, and the 750 class, a group of ten surplus VR N class locomotives.
To assist with the postwar rebuilding of Australian railways, American-designed Mikado locomotives were also introduced after the Second World War, such as the Baldwin-built New South Wales Government Railways (NSWGR) D59 class and the Queensland Rail (QR) AC16 class.
A Mikado was also the last new class of mainline steam locomotive to be introduced in Australia, the V class heavy freight locomotive of the Western Australian Government Railways (WAGR) of 1955.
Austria.
The 4-cylinder compound class 470, developed in 1914 by Karl Gölsdorf, was built for express trains on mountain lines. From 1927, some of these locomotives were rebuilt to two-cylinder superheated steam locomotives and designated class 670. They were reclassified to class 39 from 1938 and remained in service until 1957.
Belgian Congo.
In 1917, 24 Mikado type steam locomotives were built for the Compagnie du chemin de fer du bas-Congo au Katanga (BCK), a new line from the Northern Rhodesian border to Port Francqui in the Belgian Congo. Since the line was just being completed at the time, the full complement of locomotives were not required immediately and four, possibly six, of them were temporarily leased to the South African Railways to alleviate a wartime shortage of locomotives. In South Africa, they were known as the Katanga Mikado. Six more of these engines were leased to the Beira and Mashonaland and Rhodesia Railways (BMR), which operated between Umtali in Southern Rhodesia and Beira in Mozambique. The locomotives were all forwarded to the Belgian Congo after the war, where they were numbered in the BCK range from 201 to 224.
Canada.
Canadian National (CN) operated a few classes of Mikado locomotives.
Canadian Pacific (CP) used Mikado locomotives for passenger and freight trains throughout Canada. Most worked in the Rocky Mountains, where the standard 4-6-2 Pacifics and 4-6-4 Hudsons could not provide enough traction to handle the steep mountain grades.
The Temiskaming & Northern Ontario (renamed Ontario Northland Railway in 1946) operated seventeen Mikados, all ordered from Canadian Locomotive Company in three batches, the first six in 1916, second batch of four in 1921, and the final seven in 1923 to 1925. They were scrapped between 1955 and 1957 when the Ontario Northland was completely dieselized, except for three wrecked and scrapped in the 1940s. The Temiskaming & Northern Ontario operated its Mikados on both freight and passenger service, and were fitted with smoke deflectors. In 1946 65 out of 199 Canadian Pacific N2 2-8-0's were rebuilt and converted to Class P1n 2-8-2's . However all were scrapped around 1955 and 1958 . No P1n 2-8-2's were preserved however CP no . 5468 is preserved
CP no. 5468 is preserved on display in Revelstoke, British Columbia, and CP no. 5361, a Class P2e, is preserved at Depew, New York.
China.
Some local industries still actively use Mikados in freight service. The last regular Mikado passenger service ended on 20 November 2015 in Baiyin. A few Chinese-made locomotives have found their way into the United States, including Class SY no. 3025, built in 1989, which operated as New Haven no. 3025, in honor of Class J1 nos. 3001-3024, on the Valley Railroad in Connecticut. The locomotive now operates on the Belvidere & Delaware as no. 142, a number originally carried by a New York, Susquehanna & Western Railway locomotive. It and two other Chinese 2-8-2s are currently in the United States.
Finland.
Finland's sixteen 1,524 mm gauge Class Pr1 were 2-8-2T passenger locomotives for use on local trains. They were nicknamed "Paikku", which means 'local'. The Class Pr1 was operational from 1924 to 1972. Numbered 761 to 776, they were built by Hanomag in Germany and also by the Finnish locomotive builders Tampella and Lokomo. The last one, no. 776, is preserved at the Finnish Railway Museum.
The Finnish Class Tr1 (or R1) tender locomotive was built by Tampella, Lokomo and the German locomotive builder Arnold Jung from 1940 and remained in service until 1975. They were numbered from 1030 to 1096 and were nicknamed "Risto", after Finnish President Risto Ryti. Nos. 1030, 1033, 1037, 1047, 1051, 1055, 1057, 1060, 1067, 1071, 1074, 1077, 1082, 1087, 1088, 1092, 1093, 1094, 1095 and 1096 are preserved.
France.
France used a fairly large number of 2-8-2s in both tender and tank configurations, designated 141 class from the French classification system of wheel arrangements.
Tender locomotives.
Of the pre-nationalisation railway companies that existed before the formation of the SNCF, the Chemins de fer de Paris à Lyon et à la Méditerranée (PLM) had the most Mikados. Their first twelve were initially numbered from 1001 to 1012 and later renumbered to 141.A.1 to 141.A.12. The PLM's second series, numbered from 1013 to 1129 and later renumbered 141.B.1 to 141.B.117, were built by Baldwin Locomotive Works in the United States. Their third and largest class was numbered from 141.C.1 to 141.C.680. Of these latter locomotives, those fitted with feedwater heaters bore the class letter D. The PLM also rebuilt forty-four 141.C and 141.D class locomotives to 141.E class. The SNCF modified the PLM numbers by adding the regional prefix digit "5".
The PLM's 141.A class Mikados were copied by the Chemins de fer du Nord, who had fifty, numbered from 4.1101 to 4.1150, which became 2-141.A.1 to 2-141.A.50 on the SNCF.
The Chemins de fer de l'État also had a class of 250 Mikados, numbered from 141-001 to 141-250. These later became the 141.B class on the SNCF and were renumbered 3-141.B.1 to 3-141.B.250. After modifications, the 141.B class locomotives became the 141.C class, as well as one 141.D class (no. 141.D.136) and one 141.E class (no. 141.E.113). No. 3-141.C.100 has been preserved and designated a Monument historique.
The most powerful French Mikado was the SNCF 141.P class. At about , these engines were among the most efficient steam locomotives in the world, thanks to their compound design. They could burn 30% less fuel and use 40% less water than their 141.R class counterparts, but could not compete when it came to reliability. Every locomotive of this 318-strong class has been scrapped.
The most numerous steam locomotive class France had, was the American and Canadian-built 141.R class. Of the 1,340 locomotives ordered, however, only 1,323 entered service since sixteen engines were lost at sea during a storm off the coast of Newfoundland while being shipped to France, while one more was lost in Marseille harbour. They were praised for being easy to maintain and proved to be very reliable, which may account for the fact that they remained in service until the very end of the steam era in 1975. Twelve of these locomotives have been preserved.
Tank locomotives.
The Chemins de fer d'Alsace et de Lorraine had a class of forty 2-8-2T locomotives, the T 14 class, later numbered SNCF 1-141.TA.501 to 1-141.TA.540. They were identical to Germany's Prussian T 14 class locomotive and were built between 1914 and 1918. (Also see )
The Chemins de fer de l'Est had two Mikado classes. The first was numbered from 4401 to 4512, later renumbered 141.401 to 141.512 and finally SNCF 1-141.TB.401 to 1-141.TB.512. The other was numbered from 141.701 to 141.742 and later SNCF 1-141.TC.701 to 1-141.TC.742.
The Chemin de Fer du Nord also had two 2-8-2T classes. The first, consisting of only two locomotives, was numbered 4.1201 and 4.1202, later renumbered 4.1701 and 4.1702 and finally SNCF 2-141.TB.1 and 2-141.TB.2. The second, with 72 locomotives, was numbered from 4.1201 to 4.1272 and later SNCF 2-141.TC.1 to 2-141.TC.72.
The Chemins de Fer de l'État also had two Mikado classes. The first, numbered from 42-001 to 42-020, later became the SNCF 141.TC class and were renumbered 3-141.TC.1 to 3-141.TC.20. The second, numbered from 42-101 to 42-140, later became the SNCF 141TD class and were renumbered 3-141.TD.1 to 3-141.TD.141. They were copies of the 141.700 series of the "Chemins de fer de l'Est".
The Compagnie du chemin de fer de Paris à Orléans (PO) also had two classes. The first was numbered from 5301 to 5490 and later SNCF 4-141.TA.301 to 4-141.TA.490. The second was numbered from 5616 to 5740 and later SNCF 4-141.TB.616 to 4-141.TB.740.
Germany.
German 2-8-2 tender locomotives were built in both passenger and freight versions.
Both standard gauge and narrow gauge 1D1 2-8-2 tank locomotive classes were used in Germany.
India.
Broad gauge.
On the broad gauge, the Class XD was the first 2-8-2 in India to be built in quantity. Introduced in 1927, 78 were built before the Second World War by Vulcan Foundry, North British Locomotive Company (NBL), Armstrong Whitworth and Škoda Works. Production resumed after the war, and 110 were built by NBL in 1945 and 1946, while Vulcan Foundry built the last six in 1948.
There was also a Class XE that was built by William Beardmore & Company and Vulcan Foundry. Wartime designs included the Class AWD and Class AWE, built by American company Baldwin Locomotive Works, and the Class X-Dominion (later Class CWD) built as part of Canada's Mutual Aid program by two Canadian companies, the Canadian Locomotive Company and Montreal Locomotive Works.
After the war, a new design was produced and placed in production in 1950. The Class WG was the main post-war broad gauge freight locomotive type of the Indian Railways (IR). The first order of 200 was split evenly between NBL and Chittaranjan Locomotive Works (CLW). Apart from Indian manufacture, examples were also built in England, Scotland, Germany, Austria, the United States, Japan and Italy. By the time production ceased in 1970, 2,450 Class WG locomotives had been built.
Metre gauge.
After World War I, an Indian Railway Standards (IRS) 2-8-2 class became the main heavy freight locomotive on the metre gauge. While two versions were designed, the Class YD with a 10-ton axle load and the Class YE with a 12-ton axle load, none of the latter class was built.
During World War II, many of the war-time United States Army Transportation Corps class S118 locomotives were sent to India and 33 more were ordered after the war.
The post World War II Mikado design was the Class YG, of which 1,074 were built between 1949 and 1972, with nearly half of them being manufactured in India.
Narrow gauges.
Two narrow track gauges were in use in India. The 2 ft 6 in (762 mm) gauge was the more widely used, while the 2 ft (610 mm) gauge was used by the Darjeeling Himalayan Railway and the Scindia State Railway. Mikado type locomotives were used by the following:
The standard narrow gauge 2-8-2 locomotive was the ZE class, with 65 engines built by five companies between 1928 and 1954. Nasmyth, Wilson built ten in 1928, Hanomag built sixteen in 1931, Corpet-Louvet built twelve in 1950, KraussMaffei built fifteen in 1952 and another ten in 1954, and Kawasaki Heavy Industries built ten in 1954. In 1957 and 1958, six ZD class locomotives were also built by Nippon Sharyo in Japan.
Indonesia.
Before 1945, the Dutch East Indies railway administration, the "Staatsspoorwegen" (SS), received two types of locomotives with a 2-8-2 wheel arrangement. First, it received ten 1,050 mm gauge SS Class 1500 tender engines of 1920 from Hartmann, previously intended for the Hejaz Railway but diverted to Java prior to the First World War, with the driving wheels adjusted to 1,067 mm gauge. After delivery they presented a difficulty: their axle load of 13 tons was much heavier than the 11 tons permitted on bridges and mountainous lines. Hence, for safety reasons, the SS 1500s were only allowed to haul light freight trains on flat lines. Second, it received 24 2-8-2T locomotives from Hanomag and Werkspoor in 1921-22, later classified as SS Class 1400, which were the tank version of the 2-8-0 SS Class 900 (DKA D50). The SS Class 1400 was initially intended as a heavyweight shunter but, due to the Great Depression, the SS had to store some of its large locomotives, so the SS 1400s were used instead to haul express trains on the Bogor–Sukabumi line.
This decision was made by the SS management because the SS 1400s were tough, with a power output of up to 1,171 hp. In addition, the SS Class 1400 was compact, making it suitable for work on mountainous lines. After the Japanese occupation and Indonesian independence, both classes were renumbered D51 (SS 1500) and D14 (SS 1400) following the Japanese numbering scheme. According to a 1970s report, one of the D51s (D51 01) was sighted at the Klakah depot at Lumajang, East Java, while most of her sisters worked the Surabaya–Kroya southern line. Out of ten units, only D51 06 is preserved, at the Ambarawa Railway Museum. In the 1970s, the number of D14 locomotives continued to dwindle as they were replaced by diesel locomotives, and of the 24 units only the Hanomag-built D14 10 is preserved. D14 10 had previously been a static display at Taman Mini Indonesia Indah before it was brought to the Pengok workshop to be conserved and converted from oil to wood firing. It was finally restored in November 2019 and is used today to haul excursion trains in Surakarta, Central Java, alongside C12 18, which hauls the train named "Sepur Kluthuk Jaladara".
After Indonesian independence in 1945, the government of Indonesia nationalized all of the Dutch-owned railway companies, including the SS, whose name was later changed to "Djawatan Kereta Api" (DKA), the railway department of the Republic of Indonesia. Shortly after, in 1951-1952, the DKA bought 100 brand-new Mikado steam locomotives from Krupp of Germany. These locomotives, designated the D52 class, were the most modern steam locomotives in Indonesia at that time, imposing in appearance and equipped with electric lighting. The design was similar to the Class 41 locomotive of the Deutsche Reichsbahn.
In Java, the D52 locomotives were placed in passenger service, but were occasionally also used on freight trains. Some people even idolized the D52 because of its dependability in taking passengers anywhere, as on the Rapih Dhoho train from Madiun to Kertosono. The D52 was a mainstay for this train until the end of steam operation in Indonesia.
In contrast to the Java-based units, Sumatra-based D52 locomotives were used for hauling freight trains, mainly coal trains from the Tanjung Enim coal mine, now owned by the PT Bukit Asam mining company, to the coal dumping sites at Kertapati and Tarahan.
The D52 locomotives were initially coal-fired but, from mid-1956, 28 locomotives, numbers D52002 to D52029, were converted to oil burners. The work was done in stages over five years by the locomotive repair shop at Madiun.
One locomotive of this class was written off after a boiler explosion near Linggapura station, caused by a steam pipe failure, which killed its driver. The only one of the original 100 locomotives that survived into the 21st century is D52099, which was displayed at the Transport Museum in Taman Mini Indonesia Indah. Later, D52099 was moved to Purwosari station along with D14 10; while D14 10 was successfully restored to working order, D52099 remains at the station awaiting restoration.
Italy.
Italian railways relied primarily on 2-6-2s for fast passenger services, while heavy passenger service was assigned to 2-8-0s of the classes 744 and 745. Although Mikado types had little opportunity for development in Italy, Ferrovie dello Stato Italiane (FS) commissioned the 2-8-2 class 746 for heavy passenger service on the Adriatic route. To serve local branches and mountain lines where tank locomotives were more suitable, FS derived the new class 940 from the 2-8-0 class 740, with the same dimensions but adding a rear Bissel truck to support the coal bunker behind the cab to make it a 2-8-2.
Japan.
The Japanese Government Railways (JGR) built the Class D50, Class D51, and Class D52 Mikado tender locomotives for use on the 1,067 mm gauge lines on the Japanese mainland and in its former colonies. Among those, the D51 was the most popular, with a total of 1,115 units produced, the most of any single class of locomotive in Japan. A few of the D51s remain in operation for excursion services, with many preserved nationwide.
New Zealand.
Only one 2-8-2 locomotive ever operated on New Zealand's national rail network, and it was not even ordered by the New Zealand Railways Department, who ran almost the entire network. The locomotive was ordered in 1901 from Baldwin Locomotive Works by the Wellington & Manawatu Railway Company (WMR) for use on their main line's steep section between Wellington and Paekakariki. It entered service on 10 June 1902 as the WMR's no. 17. At the time, it was the most powerful locomotive in New Zealand and successfully performed its intended tasks.
When the WMR was incorporated into the national network in 1908, the Railways Department reclassified no. 17 as the solitary member of the BC class, no. BC 463, and the locomotive continued to operate on the Wellington-Paekakariki line until it was withdrawn on 31 March 1927.
Philippines.
According to Iowa State University professor Jonathan Smith, the Mikado was the most popular wheel arrangement for freight-purpose tender locomotives on the Manila Railroad. 67 units of the type were delivered between 1927 and 1951, distributed among four classes.
The first 2-8-2 steam locomotive was the Baldwin-built Manila Railroad 250 class introduced in 1928. It was the freight version of the 4-6-2 "Pacific"-type 140 class built for passenger rail services in Luzon. More classes were ordered after the war: the United States Army Transportation Corps class S118, locally referred to as the Manila Railroad 800 class "USA", of which 45 units were ordered in 1944. These were numbered 851 to 895, with three locomotives receiving names: no. 865 "Huckleberry Finn", no. 866 "Tom Sawyer" and no. 867 "Hanibella". Two more locomotives were ordered in 1948 from the War Assets Administration and were numbered the 630 class. These were locally assembled at the MRR workshop in Caloocan. Lastly, 10 JNR Class D51 locomotives were ordered from Nippon Sharyo in 1951 and were numbered the 300 class according to the Brotherhood of Locomotive Engineers and Trainmen.
All of these locomotives were decommissioned in 1956 and were scrapped afterwards.
Poland.
Between 1932 and 1939, Polish industry supplied the PKP with 98 Mikados of class Pt31 to its own design (a further 12 were built under German occupation). After World War II an additional 180 of the improved class Pt47 were built up to 1951. Both classes were used to haul heavy (600 ton) long-distance passenger trains on main lines and were the most powerful passenger locomotives in Poland. Their driving wheel diameter was 1.85 m, power output 2,000 hp and top speed 110 km/h.
191 TKt48 2-8-2 tank locomotives were delivered to the PKP between 1950 and 1957, with an additional two built for industry and six exported to Albania. They were used on suburban passenger trains and on goods trains in lower mountain areas.
South Africa.
Only six Mikado locomotive classes saw service in South Africa, five on Cape gauge and one on narrow gauge. The type was rare, with only two of these classes built in quantity.
Cape gauge.
During 1887, designs for a 2-8-2 Mikado type tank-and-tender locomotive were prepared by the Natal Government Railways. The single locomotive was built in the Durban workshops and entered service in 1888, named "Havelock", but was soon rebuilt to a 4-6-2 Pacific configuration. The engine "Havelock" was the first locomotive to be designed and built in South Africa and also the first to have eight-coupled wheels.
In 1903, the Cape Government Railways (CGR) placed two Cape Class 9 2-8-2 locomotives in service, designed by H.M. Beatty, Locomotive Superintendent of the CGR from 1896 to 1910, and built by Kitson & Company. They had bar frames, Stephenson's link motion valve gear and used saturated steam. In comparison with the Cape Class 8 2-8-0 locomotive of 1901, however, it was found that their maintenance costs were much higher without any advantage in terms of efficiency. As a result, no more of the type were ordered. In 1912, when these locomotives were assimilated into the South African Railways (SAR), they were classified as Class Experimental 4.
In 1904, the Central South African Railways (CSAR) placed 36 Class 11 Mikados in service. Built by the North British Locomotive Company (NBL), it was designed by P.A. Hyde, Chief Locomotive Superintendent of the CSAR from 1902 to 1904, for goods train service on the Witwatersrand. It was superheated, with a Belpaire firebox, Walschaerts valve gear and plate frame. The Class 11 designation was retained when the CSAR was amalgamated into the SAR in 1912.
In 1906, the CGR placed a single experimental 2-8-2 in service, designed by H.M. Beatty and built by Kitson. It was a larger version of the Cape Class 9 in all respects, also with a bar frame, Stephenson's link motion valve gear and using saturated steam. The locomotive was not classified and was simply referred to as "the Mikado". On the CGR it was exceeded in size only by the Kitson-Meyer 0-6-0+0-6-0 of 1904. At the time, it was considered as a big advance in motive power, but the design was never repeated and the Cape Mikado remained unique. In 1912, it was classified as Class Experimental 5 on the SAR.
In 1917, the South African Railways placed at least four, possibly six, Mikado type steam locomotives in service. They had been built for the "Chemins de Fer du Bas Congo á Katanga" in the Belgian Congo and were obtained on temporary lease, to alleviate the critical shortage of locomotives as a result of the First World War's disruption of locomotive production in Europe and the United Kingdom. The Katanga Mikados, as the locomotives were known on the SAR, were all forwarded to the Belgian Congo after the war.
Narrow gauge.
Between 1931 and 1958, 21 narrow gauge Class NG15 Mikados, developed from the Class Hd and Class NG5 of South West Africa (SWA), were acquired for the Otavi Railway in SWA. Designed by the SAR, it was built by Henschel & Son and Société Franco-Belge. A major improvement on the earlier locomotives was the use of a Krauss-Helmholtz bogie, with the leading pair of driving wheels linked to the leading pony truck. The leading driving wheels had a limited amount of side play while the axle still remained parallel to the other three driving axles at all times, thus allowing the locomotive to negotiate sharper curves than its two predecessors. When the SWA narrow gauge line was regauged to Cape gauge in 1960, all these locomotives were transferred to the Eastern Cape for further service on the Langkloof narrow gauge line from Port Elizabeth to Avontuur. Here they were nicknamed the "Kalahari".
South West Africa (Namibia).
Two very similar Mikado classes saw service on the narrow gauge Otavi Railway in South West Africa (SWA).
In 1912, the German administration in Deutsch-Südwest-Afrika acquired three locomotives for use on the line from Swakopmund to Karibib. They were built by Henschel & Son and were designated Class Hd. The locomotives were superheated, with Heusinger valve gear, piston valves and outside plate frames. Since they did not have separate bogie trucks, the leading and trailing carrying wheels were arranged as radial axles to allow for sideways motion of the wheels with respect to the locomotive frame. After the First World War, they were taken onto the roster of the South African Railways (SAR) and later reclassified as Class NG5 along with the similar locomotives of 1922.
In 1922, the SAR placed six Class NG5 locomotives in service on the Otavi branch in SWA, also built by Henschel. They were built to the same design as the Class Hd, but had a different coupled wheel suspension arrangement, different boilers and slide valves. In service, they were operated in a common pool with the Class Hd locomotives until they were all withdrawn from service when the SWA system was regauged to Cape gauge in 1960.
Soviet Union.
At the end of the Second World War, several 1,067 mm gauge Japanese Class D51 2-8-2 locomotives were left behind on Russia's Sakhalin island, formerly Karafuto, by retreating Japanese forces. In addition, two Class D51 wrecks were abandoned to the north of the city. Until 1979, the serviceable Japanese locomotives were used on the island by the Soviet Railways.
One was then plinthed outside the Yuzhno-Sakhalinsk railway station, and another is still in running condition and is kept at the Yuzhno-Sakhalinsk railway station.
The Sakhalin Railway has a connection with the mainland via a train ferry operating between Kholmsk on the island and Vanino on the mainland. The Japanese 1,067 mm gauge still remains in use on the island, although conversion to the Russian gauge began in 2004.
Spain.
The network of Spain used one Mikado tank locomotive and two versions of tender locomotives.
The Spanish manufacturer MTM delivered six 2-8-2T locomotives to the Madrid-Caceres-Portugal line in 1925. A project at MTM in 1942 to build a large 2-8-2 was never realised.
The first tender version was built by two American companies in 1917, fifteen by Brooks Locomotive Works and forty by Schenectady Locomotive Works. They were numbered from 4501 to 4555 and were a slightly smaller version of the USRA Light Mikado. The locomotives served well in the Norte system, where they were nicknamed "Chalecos".
In 1953, RENFE (acronym of "Red Nacional de los Ferrocarriles Españoles"), the nationalised railway company, acquired twenty-five locomotives of the second tender version from North British Locomotive Company (NBL) of Glasgow. Spanish builders MTM, MACOSA and Euskalduna and the American Babcock & Wilcox built 213 more between 1953 and 1960, with only minor detail differences such as double chimneys, Llubera sanders, ACFI feedwater heaters and oil-burning. Their empty weight was and they had diameter coupled wheels. They performed well in both freight and passenger service and lasted until the official end of steam in common service in 1975.
One Norte and eighteen RENFE locomotives are preserved, three of them in good working condition.
Thailand (Siam).
The first Mikado locomotives of the Royal State Railways of Siam (RSR), the predecessor of the State Railway of Thailand (SRT), were acquired from 1923 as standard locomotives for express and mixed trains, to supersede the E-Class locomotives which had been commissioned between 1915 and 1921. The first Siamese Mikado class was built by Baldwin Locomotive Works in 1923, Nasmyth, Wilson & Company in 1924 and Batignolles-Châtillon, France in 1925.
However, it was not until the first batch of eight of Thailand's second class of 2-8-2 locomotives, numbers 351 to 358, was imported from Japan in 1936 that Mikado locomotives really became successful in Thailand. The RSR imported more standard Mikado locomotives to meet railway as well as military demands between 1938 and 1945.
After the Second World War, in 1946, the RSR imported fifty used United States Army Transportation Corps class S118 locomotives, the so-called MacArthur locomotives. Another eighteen new engines of the same class were purchased around 1948-1949 to meet the post-war demand.
The last type of Mikado steam locomotives for Thailand were seventy engines imported by SRT from Japan between 1949 and 1951, numbered 901 to 970. Of these, only Mikado no. 953 is still serviceable, and runs passenger trains on special occasions.
United Kingdom.
The 2-8-2 wheel arrangement was rarely, but successfully, used on British rails. Nigel Gresley of the London & North Eastern Railway (LNER) designed two Mikado types of note:
The Great Western Railway (GWR) operated a class of 54 2-8-2T engines that had been rebuilt from 2-8-0T locomotives by Charles Collett, chief mechanical engineer of the GWR. As early as 1906, the chief mechanical engineer at the time, George Churchward, planned a class of Mikado tank engines to handle heavy coal trains in South Wales. The plan was abandoned, however, as it was feared they would be unable to handle the sharp curves present on Welsh mineral branches. Instead, Churchward designed the 4200 Class of 2-8-0 tank engines, of which nearly 200 were built.
In the 1930s, coal traffic declined with the result that many of these engines stood idle, since their limited operating range prevented them from being allocated to other mainline duties. Collett, as Churchward's successor, decided to rebuild some of the 4200 Class engines as 2-8-2Ts. The addition of a trailing axle increased the engine's operating range by allowing an increased coal and water storage capacity. Altogether 54 locomotives were modified in this manner. The 7200 Class tank engines, as they were known, remained in service until the end of steam in Britain in the early 1960s.
The designer of the BR Standard Class 9F locomotive as well as the rest of the BR standard classes as Chief Mechanical Engineer of British Railways, Robert Riddles, originally designed the aforementioned locomotive to be a 2-8-2 using the boiler from one of the 4-6-2 passenger locomotive standard classes. However, he later decided to use a 2-10-0 wheel arrangement with a new boiler design, as it offered more tractive effort and better weight distribution.
United States.
The 2-8-2 saw great success in the United States, mostly as a freight locomotive. In the 1910s it largely replaced the 2-8-0 Consolidation as the main heavy freight locomotive type. Its tractive effort was similar to that of the best 2-8-0s, but a developing requirement for higher speed freight trains drove the shift to the 2-8-2 wheel arrangement.
The Mikado type was, in turn, ousted from the top-flight trains by larger freight locomotive wheel arrangements such as the 2-8-4, 2-10-2, 2-10-4 and articulated locomotives, but no successor type became ubiquitous and the Mike remained the most common road freight locomotive with most railroads until the end of steam. More than 14,000 were built in the United States, about 9,500 of these for North American service, constituting about one-fifth of all locomotives in service there at the time. The heaviest Mikados were the Great Northern's class O-8, with an axle load of .
Almost all North American railroads rostered the type, notable exceptions being the Richmond, Fredericksburg & Potomac, the Boston & Maine, the Delaware & Hudson, the Western Maryland, the Cotton Belt and the Norfolk & Western. The largest users included the New York Central with 715 locomotives, the Baltimore & Ohio with 610, the Pennsylvania Railroad with 579, the Illinois Central with 565, the Milwaukee Road with 500, the Southern with 435, and the Chicago, Burlington & Quincy with 388.
A number of North American 2-8-2s have been preserved as either static display pieces, or steam excursion stars. These include Baltimore and Ohio 4500, Nickel Plate Road 587, Grand Trunk Western 4070, Southern Railway 4501, Grand Canyon Railway 4960, Spokane, Portland and Seattle 539, Southern Pacific 745, Tremont and Gulf 30, Duluth and Northern Minnesota 14, Soo Line 1003, McCloud Railway 18, McCloud Railway 19, Denver and Rio Grande Western 463, Pennsylvania Railroad 520, and California Western 45.
Yugoslavia.
Borsig built 2-8-2s were delivered to the railway of the Kingdom of Yugoslavia in 1930. These became the JDZ class 06, of which a few remain in the former Yugoslav nations.
References.
<templatestyles src="Reflist/styles.css" />
External links.
Media related to at Wikimedia Commons | [
{
"math_id": 0,
"text": "\\textstyle \\mathfrak{H}"
},
{
"math_id": 1,
"text": "\\textstyle \\mathfrak{V}"
}
] | https://en.wikipedia.org/wiki?curid=1115483 |
11158400 | Quantum instrument | Mathematical abstraction of quantum measurement
In quantum physics, a quantum instrument is a mathematical description of a quantum measurement, capturing both the classical and quantum outputs. It can be equivalently understood as a quantum channel that takes as input a quantum system and has as its output two systems: a classical system containing the outcome of the measurement and a quantum system containing the post-measurement state.
Definition.
Let formula_0 be a countable set describing the outcomes of a quantum measurement, and let formula_1 denote a collection of trace-non-increasing completely positive maps, such that the sum of all formula_2 is trace-preserving, i.e. formula_3 for all positive operators formula_4
Now for describing a measurement by an instrument formula_5, the maps formula_2 are used to model the mapping from an input state formula_6 to the output state of a measurement conditioned on a classical measurement outcome formula_7. Therefore, the probability that a specific measurement outcome formula_7 occurs on a state formula_6 is given by
formula_8
The state after a measurement with the specific outcome formula_7 is given by
formula_9
If the measurement outcomes are recorded in a classical register, whose states are modeled by a set of orthonormal projections formula_10 , then the action of an instrument formula_5 is given by a quantum channel formula_11 with
formula_12
Here formula_13 and formula_14 are the Hilbert spaces corresponding to the input and the output systems of the instrument.
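As a concrete illustration (a minimal sketch, not taken from the references; the Kraus operators and function names are chosen here for exposition), the computational-basis measurement of a single qubit can be written as a two-outcome instrument in Python/NumPy, with outcome probabilities and post-measurement states computed exactly as in the formulas above:
import numpy as np
# Each map E_x is given by a single Kraus operator K_x, so E_x(rho) = K_x rho K_x^dagger.
K = [np.array([[1, 0], [0, 0]], dtype=complex),   # outcome 0: projector |0><0|
     np.array([[0, 0], [0, 1]], dtype=complex)]   # outcome 1: projector |1><1|
def apply_instrument(rho):
    """Return a list of (p(x|rho), rho_x) pairs, one per outcome x."""
    outcomes = []
    for K_x in K:
        E_x_rho = K_x @ rho @ K_x.conj().T        # E_x(rho)
        p = np.trace(E_x_rho).real                # p(x|rho) = tr(E_x(rho))
        rho_x = E_x_rho / p if p > 0 else None    # post-measurement state
        outcomes.append((p, rho_x))
    return outcomes
# Example: measuring the state |+><+| gives each outcome with probability 1/2.
plus = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)
print([round(p, 3) for p, _ in apply_instrument(plus)])   # [0.5, 0.5]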
Reductions and inductions.
Just as a completely positive trace preserving (CPTP) map can always be considered as the reduction of unitary evolution on a system with an initially unentangled auxiliary, quantum instruments are the reductions of a projective measurement together with a conditional unitary, and they reduce to CPTP maps and POVMs when the measurement outcomes and the state evolution, respectively, are ignored. In John Smolin's terminology, this is an example of "going to the Church of the Larger Hilbert space".
As a reduction of projective measurement and conditional unitary.
Any quantum instrument on a system formula_15 can be modeled as a projective measurement on formula_15 and (jointly) an uncorrelated auxiliary formula_16 followed by a unitary "conditional" on the measurement outcome. Let formula_17 (with formula_18 and formula_19) be the normalized initial state of formula_16, let formula_20 (with formula_21 and formula_22) be a projective measurement on formula_23, and let formula_24 (with formula_25) be unitaries on formula_23. Then one can check that
formula_26
defines a quantum instrument. Furthermore, one can also check that any choice of quantum instrument formula_27 can be obtained with this construction for some choice of formula_17 and formula_24.
In this sense, a quantum instrument can be thought of as the "reduction" of a projective measurement combined with a conditional unitary.
Reduction to CPTP map.
Any quantum instrument formula_27 immediately induces a CPTP map, i.e., a quantum channel:
formula_28
This can be thought of as the overall effect of the measurement on the quantum system if the measurement outcome is thrown away.
Reduction to POVM.
Any quantum instrument formula_27 immediately induces a positive operator-valued measurement (POVM):
formula_29
where formula_30 are any choice of Kraus operators for formula_31,
formula_32
The Kraus operators formula_30 are not uniquely determined by the CP maps formula_31, but the above definition of the POVM elements formula_33 is the same for any choice. The POVM can be thought of as the measurement of the quantum system if the information about how the system is affected by the measurement is thrown away.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\nX\n"
},
{
"math_id": 1,
"text": "\n\\{\\mathcal{E}_x \\}_{x\\in X}\n"
},
{
"math_id": 2,
"text": "\n\\mathcal{E}_x\n"
},
{
"math_id": 3,
"text": "\n\\operatorname{tr}\\left(\\sum_x\\mathcal{E}_x(\\rho)\\right)=\\operatorname{tr}(\\rho)\n"
},
{
"math_id": 4,
"text": "\n\\rho."
},
{
"math_id": 5,
"text": "\n\\mathcal{I}\n"
},
{
"math_id": 6,
"text": "\n\\rho\n"
},
{
"math_id": 7,
"text": "\nx\n"
},
{
"math_id": 8,
"text": "\np(x|\\rho)=\\operatorname{tr}(\\mathcal{E}_x(\\rho)).\n"
},
{
"math_id": 9,
"text": "\n\\rho_x=\\frac{\\mathcal{E}_x(\\rho)}{\\operatorname{tr}(\\mathcal{E}_x(\\rho))}.\n"
},
{
"math_id": 10,
"text": "\n|x\\rangle\\langle x| \\in \\mathcal{B}(\\mathbb{C}^{|X|})\n"
},
{
"math_id": 11,
"text": "\n\\mathcal{I}:\\mathcal{B}(\\mathcal{H}_1) \\rightarrow \\mathcal{B}(\\mathcal{H}_2)\\otimes \\mathcal{B}(\\mathbb{C}^{|X|})\n"
},
{
"math_id": 12,
"text": "\n\\mathcal{I}(\\rho):=\n\\sum_x \\mathcal{E}_x\n( \\rho)\\otimes \\vert x \\rangle \\langle x|.\n"
},
{
"math_id": 13,
"text": "\n\\mathcal{H}_1\n"
},
{
"math_id": 14,
"text": "\n\\mathcal{H}_2 \\otimes \\mathbb{C}^{|X|}\n"
},
{
"math_id": 15,
"text": "\\mathcal{S}"
},
{
"math_id": 16,
"text": "\\mathcal{A}"
},
{
"math_id": 17,
"text": "\\eta"
},
{
"math_id": 18,
"text": "\\eta > 0"
},
{
"math_id": 19,
"text": "\\mathrm{Tr} \\, \\eta =1"
},
{
"math_id": 20,
"text": "\\{\\Pi_i\\}"
},
{
"math_id": 21,
"text": "\\Pi_i = \\Pi_i^\\dagger = \\Pi_i^2"
},
{
"math_id": 22,
"text": "\\Pi_i \\Pi_j = \\delta_{ij} \\Pi_i"
},
{
"math_id": 23,
"text": "\\mathcal{SA}"
},
{
"math_id": 24,
"text": "\\{U_i\\}"
},
{
"math_id": 25,
"text": "U_i^\\dagger = U_i^{-1}"
},
{
"math_id": 26,
"text": "\\mathcal{E}_i (\\rho) := \\mathrm{Tr}_{\\mathcal{A}}\\left(U_i\\Pi_i(\\rho\\otimes\\eta)\\Pi_i U_i^\\dagger\\right)"
},
{
"math_id": 27,
"text": "\\{\\mathcal{E}_i\\}"
},
{
"math_id": 28,
"text": "\\mathcal{E} (\\rho) := \\sum_i \\mathcal{E}_i(\\rho)."
},
{
"math_id": 29,
"text": "M_i := \\sum_a K_a^{(i)\\dagger} K_a^{(i)}"
},
{
"math_id": 30,
"text": "K_a^{(i)}"
},
{
"math_id": 31,
"text": "\\mathcal{E}_i"
},
{
"math_id": 32,
"text": "\\mathcal{E}_i (\\rho) = \\sum_a K_a^{(i)}\\rho K_a^{(i)\\dagger}."
},
{
"math_id": 33,
"text": "M_i"
}
] | https://en.wikipedia.org/wiki?curid=11158400 |
11158443 | Johannes Martin Bijvoet | Dutch chemist and crystallographer
Johannes Martin Bijvoet (23 January 1892, Amsterdam – 4 March 1980, Winterswijk) was a Dutch chemist and crystallographer at the van 't Hoff Laboratory at Utrecht University. He is famous for devising a method of establishing the absolute configuration of molecules. In 1946, he became member of the Royal Netherlands Academy of Arts and Sciences.
The concept of tetrahedrally bound carbon in organic compounds stems back to the work by van 't Hoff and Le Bel in 1874. At this time, it was impossible to assign the absolute configuration of a molecule by means other than referring to the projection formula established by Fischer, who had used glyceraldehyde as the prototype and assigned randomly its absolute configuration.
In 1949 Bijvoet outlined his principle, which relies on the anomalous dispersion of X-ray radiation. Instead of the normally observed elastic scattering of X-rays when they hit an atom, which generates a scattered wave of the same energy but with a shift in phase, X-ray radiation near the absorption edge of an atom creates a partial ionisation process. Some new X-ray radiation is generated from the inner electron shells of the atoms. The X-ray radiation already being scattered is interfered with by the new radiation, both amplitude and phase being altered. These additional contributions to the scattering may be written as a real part formula_0"f"' and an imaginary one, formula_0"f"". Whereas the real part is either positive or negative, the imaginary is always positive, resulting in an addition to the phase angle.
In 1951, using an X-ray tube with a zirconium target, Bijvoet and his coworkers Peerdeman and van Bommel achieved the first experimental determination of the absolute configuration of sodium rubidium tartrate. In this compound, rubidium atoms were the ones close to the absorption edge. In their later publication in "Nature", entitled "Determination of the absolute configuration of optically active compounds by means of X-rays", the authors conclude that:
"The result is that Emil Fisher's "convention", which assigned the configuration of FIG. 2 to the dextrorotatory acid "appears to answer the reality"."
thus confirming the preceding decades of stereochemical assignments. The determination of absolute configuration is nowadays achieved using "soft" X-ray radiation, most often generated with a copper target (which generates X-rays with a characteristic wavelength of 154 pm). Shorter wavelengths make the observable differences in measured intensities smaller, thereby making the distinction of absolute configuration more difficult. The measurement of absolute configuration is also facilitated by the presence of atoms heavier than oxygen.
X-ray diffraction is still considered the ultimate proof of absolute structure, but other techniques such as circular dichroism spectroscopy are often used as faster alternatives.
Bijvoet Centre.
The Bijvoet Centre for Biomolecular Research at Utrecht University, which was founded in 1988, was named after him. The Bijvoet Centre performs research on the relation between the structure and function of biomolecules, including proteins and lipids, which play a role in biological processes such as regulation, interaction and recognition. The Bijvoet Centre maintains advanced infrastructures for the analysis of proteins using NMR, electron microscopy, X-ray crystallography and mass spectrometry.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Delta"
}
] | https://en.wikipedia.org/wiki?curid=11158443 |
1116115 | General Leibniz rule | Generalization of the product rule in calculus
In calculus, the general Leibniz rule, named after Gottfried Wilhelm Leibniz, generalizes the product rule (which is also known as "Leibniz's rule"). It states that if formula_0 and formula_1 are n-times differentiable functions, then the product formula_2 is also n-times differentiable and its n-th derivative is given by
formula_3
where formula_4 is the binomial coefficient and formula_5 denotes the "j"th derivative of "f" (and in particular formula_6).
The rule can be proven by using the product rule and mathematical induction.
Second derivative.
If, for example, "n" = 2, the rule gives an expression for the second derivative of a product of two functions:
formula_7
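The rule is also easy to check symbolically; the following short SymPy sketch (an illustration only, with arbitrarily chosen "f", "g" and "n") compares the right-hand side of the formula with a direct computation of the "n"-th derivative:
from sympy import symbols, binomial, diff, exp, sin, simplify
x = symbols('x')
f = exp(2*x)   # arbitrary smooth test functions
g = sin(x)
n = 4
leibniz = sum(binomial(n, k) * diff(f, x, n - k) * diff(g, x, k) for k in range(n + 1))
direct = diff(f * g, x, n)
print(simplify(leibniz - direct))   # prints 0, as the general Leibniz rule predicts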
More than two factors.
The formula can be generalized to the product of "m" differentiable functions "f"1, ..., "f""m".
formula_8
where the sum extends over all "m"-tuples ("k"1, ..., "k""m") of non-negative integers with formula_9 and
formula_10
are the multinomial coefficients. This is akin to the multinomial formula from algebra.
Proof.
The proof of the general Leibniz rule proceeds by induction. Let formula_0 and formula_1 be formula_11-times differentiable functions. The base case when formula_12 claims that:
formula_13
which is the usual product rule and is known to be true. Next, assume that the statement holds for a fixed formula_14 that is, that
formula_15
Then,
formula_16
And so the statement holds for formula_17, and the proof is complete.
Multivariable calculus.
With the multi-index notation for partial derivatives of functions of several variables, the Leibniz rule states more generally:
formula_18
This formula can be used to derive a formula that computes the symbol of the composition of differential operators. In fact, let "P" and "Q" be differential operators (with coefficients that are differentiable sufficiently many times) and formula_19 Since "R" is also a differential operator, the symbol of "R" is given by:
formula_20
A direct computation now gives:
formula_21
This formula is usually known as the Leibniz formula. It is used to define the composition in the space of symbols, thereby inducing the ring structure.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f"
},
{
"math_id": 1,
"text": "g"
},
{
"math_id": 2,
"text": "fg"
},
{
"math_id": 3,
"text": "(fg)^{(n)}=\\sum_{k=0}^n {n \\choose k} f^{(n-k)} g^{(k)},"
},
{
"math_id": 4,
"text": "{n \\choose k}={n!\\over k! (n-k)!}"
},
{
"math_id": 5,
"text": "f^{(j)}"
},
{
"math_id": 6,
"text": "f^{(0)}= f"
},
{
"math_id": 7,
"text": "(fg)''(x)=\\sum\\limits_{k=0}^{2}{\\binom{2}{k} f^{(2-k)}(x)g^{(k)}(x)}=f''(x)g(x)+2f'(x)g'(x)+f(x)g''(x)."
},
{
"math_id": 8,
"text": "\\left(f_1 f_2 \\cdots f_m\\right)^{(n)}=\\sum_{k_1+k_2+\\cdots+k_m=n} {n \\choose k_1, k_2, \\ldots, k_m}\n \\prod_{1\\le t\\le m}f_{t}^{(k_{t})}\\,,"
},
{
"math_id": 9,
"text": "\\sum_{t=1}^m k_t=n,"
},
{
"math_id": 10,
"text": " {n \\choose k_1, k_2, \\ldots, k_m} = \\frac{n!}{k_1!\\, k_2! \\cdots k_m!}"
},
{
"math_id": 11,
"text": "n"
},
{
"math_id": 12,
"text": "n=1"
},
{
"math_id": 13,
"text": " (fg)' = f'g + fg',"
},
{
"math_id": 14,
"text": "n \\geq 1,"
},
{
"math_id": 15,
"text": " (fg)^{(n)}=\\sum_{k=0}^n\\binom{n}{k} f^{(n-k)}g^{(k)}. "
},
{
"math_id": 16,
"text": "\\begin{align}\n (fg)^{(n+1)} &= \\left[ \\sum_{k=0}^n \\binom{n}{k} f^{(n-k)} g^{(k)} \\right]' \\\\\n &= \\sum_{k=0}^n \\binom{n}{k} f^{(n+1-k)} g^{(k)} + \\sum_{k=0}^n \\binom{n}{k} f^{(n-k)} g^{(k+1)} \\\\\n &= \\sum_{k=0}^n \\binom{n}{k} f^{(n+1-k)} g^{(k)} + \\sum_{k=1}^{n+1} \\binom{n}{k-1} f^{(n+1-k)} g^{(k)} \\\\\n &= \\binom{n}{0} f^{(n+1)} g^{(0)} + \\sum_{k=1}^{n} \\binom{n}{k} f^{(n+1-k)} g^{(k)} + \\sum_{k=1}^n \\binom{n}{k-1} f^{(n+1-k)} g^{(k)} + \\binom{n}{n} f^{(0)} g^{(n+1)} \\\\\n &= \\binom{n+1}{0} f^{(n+1)} g^{(0)} + \\left( \\sum_{k=1}^n \\left[\\binom{n}{k-1} + \\binom{n}{k} \\right]f^{(n+1-k)} g^{(k)} \\right) + \\binom{n+1}{n+1} f^{(0)} g^{(n+1)} \\\\\n &= \\binom{n+1}{0} f^{(n+1)} g^{(0)} + \\sum_{k=1}^n \\binom{n+1}{k} f^{(n+1-k)} g^{(k)} + \\binom{n+1}{n+1}f^{(0)} g^{(n+1)} \\\\\n &= \\sum_{k=0}^{n+1} \\binom{n+1}{k} f^{(n+1-k)} g^{(k)} .\n \\end{align}"
},
{
"math_id": 17,
"text": "n + 1"
},
{
"math_id": 18,
"text": "\\partial^\\alpha (fg) = \\sum_{ \\beta\\,:\\,\\beta \\le \\alpha } {\\alpha \\choose \\beta} (\\partial^{\\beta} f) (\\partial^{\\alpha - \\beta} g)."
},
{
"math_id": 19,
"text": "R = P \\circ Q."
},
{
"math_id": 20,
"text": "R(x, \\xi) = e^{-{\\langle x, \\xi \\rangle}} R (e^{\\langle x, \\xi \\rangle})."
},
{
"math_id": 21,
"text": "R(x, \\xi) = \\sum_\\alpha {1 \\over \\alpha!} \\left({\\partial \\over \\partial \\xi}\\right)^\\alpha P(x, \\xi) \\left({\\partial \\over \\partial x}\\right)^\\alpha Q(x, \\xi)."
}
] | https://en.wikipedia.org/wiki?curid=1116115 |
1116512 | International Standard Musical Work Code | Unique identifier for musical works
International Standard Musical Work Code (ISWC) is a unique identifier for musical works, similar to ISBN for books. It is adopted as international standard ISO 15707. The ISO subcommittee with responsibility for the standard is TC 46/SC 9.
Format.
Each code is composed of three parts: a prefix element (one letter), a nine-digit work identifier whose digits are denoted formula_0 below, and a single check digit formula_1.
Currently, the only prefix defined is "T", indicating Musical works. However, additional prefixes may be defined in the future to expand the available range of identifiers and/or expand the system to additional types of works.
Computation of the check digit.
With
formula_2
formula_3
Example: T-034.524.680-C.
formula_4
formula_5
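For illustration, the check-digit computation can be written in a few lines of Python (the function name and argument format are chosen here for exposition and are not part of the standard):
def iswc_check_digit(digits):
    """Check digit for a nine-digit work identifier given as a string, e.g. "034524680"."""
    s = 1 + sum(i * int(d) for i, d in enumerate(digits, start=1))
    return (10 - (s % 10)) % 10
print(iswc_check_digit("034524680"))   # prints 1, matching the example above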
ISWC identifiers are commonly written in the form "T-123.456.789-C". The grouping is for ease of reading only; the numbers do not incorporate any information about the work's region, author, publisher, etc. Rather, they are simply issued in sequence. These separators are not required, and no other separators are allowed.
The first ISWC was assigned in 1995, for the song "Dancing Queen" by ABBA; the code is T-000.000.001-0.
Usage.
To register an ISWC, the following minimal information must be supplied:
Note: an ISWC identifies works, not recordings. ISRC can be used to identify recordings. Nor does it identify individual publications (e.g. issues of a recording on physical media, sheet music, broadcast at a particular frequency/modulation/time/location...)
Its primary purpose is in collecting society administration, and to clearly identify works in legal contracts. It would also be useful in library cataloguing.
Due to the fact that a musical work can have multiple authors, it is inevitable that, on rare occasions, a duplicate ISWC might exist and might not be detected immediately. Because of the existing business practices among collecting societies, it is not possible to simply declare an ISWC as obsolete. In such cases, as soon as they are identified, the system will deal with duplicate registrations by linking such registration records in the ISWC database and its related products. | [
{
"math_id": 0,
"text": "d_i"
},
{
"math_id": 1,
"text": "C"
},
{
"math_id": 2,
"text": "S = 1 + \\sum_{i=1}^{i=9}id_i"
},
{
"math_id": 3,
"text": "C = (10 - (S \\mod 10)) \\mod 10"
},
{
"math_id": 4,
"text": "S=179"
},
{
"math_id": 5,
"text": "C=1"
}
] | https://en.wikipedia.org/wiki?curid=1116512 |
11167326 | Littelmann path model | In mathematics, the Littelmann path model is a combinatorial device due to Peter Littelmann for computing multiplicities "without overcounting" in the representation theory of symmetrisable Kac–Moody algebras. Its most important application is to complex semisimple Lie algebras or equivalently compact semisimple Lie groups, the case described in this article. Multiplicities in irreducible representations, tensor products and branching rules can be calculated using a coloured directed graph, with labels given by the simple roots of the Lie algebra.
Developed as a bridge between the theory of crystal bases arising from the work of Kashiwara and Lusztig on quantum groups and the standard monomial theory of C. S. Seshadri and Lakshmibai, Littelmann's path model associates to each irreducible representation a rational vector space with basis given by paths from the origin to a weight as well as a pair of root operators acting on paths for each simple root. This gives a direct way of recovering the algebraic and combinatorial structures previously discovered by Kashiwara and Lusztig using quantum groups.
Background and motivation.
Some of the basic questions in the representation theory of complex semisimple Lie algebras or compact semisimple Lie groups going back to Hermann Weyl include:
Answers to these questions were first provided by Hermann Weyl and Richard Brauer as consequences of explicit character formulas, followed by later combinatorial formulas of Hans Freudenthal, Robert Steinberg and Bertram Kostant; see . An unsatisfactory feature of these formulas is that they involved alternating sums for quantities that were known a priori to be non-negative. Littelmann's method expresses these multiplicities as sums of non-negative integers "without overcounting". His work generalizes classical results based on Young tableaux for the general linear Lie algebra formula_3"n" or the special linear Lie algebra formula_4"n":
Attempts at finding similar algorithms without overcounting for the other classical Lie algebras had only been partially successful.
Littelmann's contribution was to give a unified combinatorial model that applied to all symmetrizable Kac–Moody algebras and provided explicit subtraction-free combinatorial formulas for weight multiplicities, tensor product rules and branching rules. He accomplished this by introducing the vector space "V" over Q generated by the weight lattice of a Cartan subalgebra; on the vector space of piecewise-linear paths in "V" connecting the origin to a weight, he defined a pair of "root operators" for each simple root of formula_2.
The combinatorial data could be encoded in a coloured directed graph, with labels given by the simple roots.
Littelmann's main motivation was to reconcile two different aspects of representation theory:
Although differently defined, the crystal basis, its root operators and crystal graph were later shown to be equivalent to Littelmann's path model and graph; see . In the case of complex semisimple Lie algebras, there is a simplified self-contained account in relying only on the properties of root systems; this approach is followed here.
Definitions.
Let "P" be the weight lattice in the dual of a Cartan subalgebra of the semisimple Lie algebra formula_2.
A Littelmann path is a piecewise-linear mapping
formula_6
such that π(0) = 0 and π(1) is a weight.
Let ("H" α) be the basis of formula_7 consisting of "coroot" vectors, dual to basis of formula_7* formed by simple roots (α). For fixed α and a path π, the function formula_8 has a minimum value "M".
Define non-decreasing self-mappings "l" and "r" of [0,1] formula_9 Q by
formula_10
Thus "l"("t") = 0 until the last time that "h"("s") = "M" and "r"("t") = 1 after the first time that "h"("s") = "M".
Define new paths πl and πr by
formula_11
The root operators "e"α and "f"α are defined on a basis vector [π] by
The key feature here is that the paths form a basis for the root operators like that of a monomial representation: when a root operator is applied to the basis element for a path, the result is either 0 or the basis element for another path.
Properties.
Let formula_14 be the algebra generated by the root operators. Let π("t") be a path lying wholly within the positive Weyl chamber defined by the simple roots. Using results on the path model of C. S. Seshadri and Lakshmibai, Littelmann showed that
There is also an action of the Weyl group on paths [π]. If α is a simple root and "k" = "h"(1), with "h" as above, then the corresponding reflection "s"α acts as follows: it fixes [π] if "k" = 0; if "k" > 0, it acts as the "k"-th power of "f"α; and if "k" < 0, it acts as the (−"k")-th power of "e"α.
If π is a path lying wholly inside the positive Weyl chamber, the Littelmann graph formula_15 is defined to be the coloured, directed graph having as vertices the non-zero paths obtained by successively applying the operators "f"α to π. There is a directed arrow from one path to another labelled by the simple root α, if the target path is obtained from the source path by applying "f"α.
The Littelmann graph therefore only depends on λ. Kashiwara and Joseph proved that it coincides with the "crystal graph" defined by Kashiwara in the theory of crystal bases.
Applications.
Character formula.
If π(1) = λ, the multiplicity of the weight μ in "L"(λ) is the number of vertices σ in the Littelmann graph formula_16 with σ(1) = μ.
Generalized Littlewood–Richardson rule.
Let π and σ be paths in the positive Weyl chamber with π(1) = λ and σ(1) = μ. Then
formula_17
where τ ranges over paths in formula_18 such that π formula_19 τ lies entirely in the positive Weyl chamber and
the "concatenation" π formula_19 τ (t) is defined as π(2"t") for "t" ≤ 1/2 and π(1) + τ( 2"t" – 1) for "t" ≥ 1/2.
Branching rule.
If formula_1 is the Levi component of a parabolic subalgebra of formula_2 with weight lattice "P"1 formula_20 "P" then
formula_21
where the sum ranges over all paths σ in formula_15 which lie wholly in the positive Weyl chamber for formula_1.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\otimes "
},
{
"math_id": 1,
"text": "\\mathfrak{g}_1"
},
{
"math_id": 2,
"text": "\\mathfrak{g}"
},
{
"math_id": 3,
"text": "\\mathfrak{gl}"
},
{
"math_id": 4,
"text": "\\mathfrak{sl}"
},
{
"math_id": 5,
"text": "\\oplus"
},
{
"math_id": 6,
"text": "\\pi:[0,1]\\cap \\mathbf{Q} \\rightarrow P\\otimes_{\\mathbf{Z}}\\mathbf{Q}"
},
{
"math_id": 7,
"text": "\\mathfrak{h}"
},
{
"math_id": 8,
"text": "h(t)= (\\pi(t), H_\\alpha)"
},
{
"math_id": 9,
"text": "\\cap"
},
{
"math_id": 10,
"text": " l(t) = \\min_{t\\le s\\le 1} (1,h(s)-M),\\,\\,\\,\\,\\,\\, r(t) = 1 - \\min_{0\\le s\\le t} (1,h(s)-M)."
},
{
"math_id": 11,
"text": "\\pi_r(t)= \\pi(t) + r(t) \\alpha,\\,\\,\\,\\,\\,\\, \\pi_l(t) = \\pi(t) - l(t)\\alpha"
},
{
"math_id": 12,
"text": "\\displaystyle{ e_\\alpha [\\pi] = [\\pi_r]} "
},
{
"math_id": 13,
"text": " \\displaystyle{f_\\alpha [\\pi] = [\\pi_l]} "
},
{
"math_id": 14,
"text": "\\mathcal{A}"
},
{
"math_id": 15,
"text": "\\mathcal{G}_\\pi"
},
{
"math_id": 16,
"text": " \\mathcal{G}_\\pi "
},
{
"math_id": 17,
"text": " L(\\lambda) \\otimes L(\\mu) = \\bigoplus_\\eta L(\\lambda + \\tau(1)),"
},
{
"math_id": 18,
"text": "\\mathcal{G}_\\sigma"
},
{
"math_id": 19,
"text": "\\star"
},
{
"math_id": 20,
"text": "\\supset "
},
{
"math_id": 21,
"text": " L(\\lambda)|_{\\mathfrak{g}_1} = \\bigoplus_{\\sigma} L_{\\mathfrak{g}_1}(\\sigma(1)),"
}
] | https://en.wikipedia.org/wiki?curid=11167326 |
11167471 | Boxicity | Smallest dimension where a graph can be represented as an intersection graph of boxes
In graph theory, boxicity is a graph invariant, introduced by Fred S. Roberts in 1969.
The boxicity of a graph is the minimum dimension in which a given graph can be represented as an intersection graph of axis-parallel boxes. That is, there must exist a one-to-one correspondence between the vertices of the graph and a set of boxes, such that two boxes intersect if and only if there is an edge connecting the corresponding vertices.
Examples.
The figure shows a graph with six vertices, and a representation of this graph as an intersection graph of rectangles (two-dimensional boxes). This graph cannot be represented as an intersection graph of boxes in any lower dimension, so its boxicity is two.
showed that the graph with 2"n" vertices formed by removing a perfect matching from a complete graph on 2"n" vertices has boxicity exactly "n": each pair of disconnected vertices must be represented by boxes that are separated in a different dimension than each other pair. A box representation of this graph with dimension exactly "n" can be found by thickening each of the 2"n" facets of an "n"-dimensional hypercube into a box. Because of these results, this graph has been called the "Roberts graph", although it is better known as the cocktail party graph and it can also be understood as the Turán graph "T"(2"n","n").
Relation to other graph classes.
A graph has boxicity at most one if and only if it is an interval graph; the boxicity of an arbitrary graph "G" is the minimum number of interval graphs on the same set of vertices such that the intersection of the edges sets of the interval graphs is "G". Every outerplanar graph has boxicity at most two, and every planar graph has boxicity at most three.
If a bipartite graph has boxicity two, it can be represented as an intersection graph of axis-parallel line segments in the plane.
proved that the boxicity of a bipartite graph "G" is within a factor of 2 of the order dimension of the height-two partially ordered set associated to "G" as follows: the set of minimal elements corresponds to one partite set of "G", the set of maximal elements corresponds to the second partite set of "G", and two elements are comparable if the corresponding vertices are adjacent in "G". Equivalently, the order dimension of a height-two partially ordered set "P" is within a factor of 2 of the boxicity of the comparability graph of "P" (which is bipartite, since "P" has height two).
Algorithmic results.
Many graph problems can be solved or approximated more efficiently for graphs with bounded boxicity than they can for other graphs; for instance, the maximum clique problem can be solved in polynomial time for graphs with bounded boxicity. For some other graph problems, an efficient solution or approximation can be found if a low-dimensional box representation is known. However, finding such a representation may be difficult:
it is NP-complete to test whether the boxicity of a given graph is at most some given value "K", even for "K" = 2.
describe algorithms for finding representations of arbitrary graphs as intersection graphs of boxes, with a dimension that is within a logarithmic factor of the maximum degree of the graph; this result provides an upper bound on the graph's boxicity.
Despite being hard for its natural parameter, boxicity is fixed-parameter tractable when parameterized by the vertex cover number of the input graph.
Bounds.
If a graph "G" graph has "m" edges, then:
formula_0.
If a graph "G" is "k"-degenerate (with formula_1) and has "n" vertices, then "G" has boxicity formula_2.
If a graph "G" has no complete graph on "t" vertices as a minor, then formula_3 while there are graphs with no complete graph on "t" vertices as a minor, and with boxicity formula_4. In particular, any graph "G" hax boxicity formula_5, where formula_6 denotes the Colin de Verdière invariant of "G".
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "box(G) = O(\\sqrt{m \\cdot \\log(m)})"
},
{
"math_id": 1,
"text": "k\\ge 2"
},
{
"math_id": 2,
"text": "box(G) \\le (k+2) \\lceil 2e \\log n \\rceil"
},
{
"math_id": 3,
"text": "box(G) = O(t^2 \\log t)"
},
{
"math_id": 4,
"text": "\\Omega(t \\sqrt{\\log t})"
},
{
"math_id": 5,
"text": "box(G) = O(\\mu(G)^2 \\log \\mu(G))"
},
{
"math_id": 6,
"text": "\\mu(G)"
}
] | https://en.wikipedia.org/wiki?curid=11167471 |
11167824 | Saint-Venant's compatibility condition | In the mathematical theory of elasticity, Saint-Venant's compatibility condition defines the relationship between the strain formula_0 and a displacement field formula_1 by
formula_2
where formula_3. Barré de Saint-Venant derived the compatibility condition for an arbitrary symmetric second-rank tensor field to be of this form; this has since been generalized to higher-rank symmetric tensor fields on spaces of dimension formula_4.
Rank 2 tensor fields.
For a symmetric rank 2 tensor field formula_5 in n-dimensional Euclidean space (formula_6) the integrability condition takes the form of the vanishing of the Saint-Venant's tensor formula_7 defined by
formula_8
The result that, on a simply connected domain W=0 implies that strain is the symmetric derivative of some vector field, was first described by Barré de Saint-Venant in 1864 and proved rigorously by Beltrami in 1886. For non-simply connected domains there are finite dimensional spaces of symmetric tensors with vanishing Saint-Venant's tensor that are not the symmetric derivative of a vector field. The situation is analogous to de Rham cohomology
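The two-dimensional case is small enough to check symbolically. The following SymPy sketch (an illustration with an arbitrarily chosen displacement field, not a standard routine) forms the strain as the symmetric derivative of a displacement field and verifies that every component of the Saint-Venant tensor vanishes:
import sympy as sp
x1, x2 = sp.symbols('x1 x2')
X = (x1, x2)
# An arbitrary smooth displacement field u = (u_1, u_2)
u = (x1**3 * x2**2, sp.sin(x1 * x2))
# Strain from the symmetric derivative: F_ij = (du_i/dx_j + du_j/dx_i) / 2
F = [[(sp.diff(u[i], X[j]) + sp.diff(u[j], X[i])) / 2 for j in range(2)] for i in range(2)]
# Saint-Venant tensor W_ijkl = F_ij,kl + F_kl,ij - F_il,jk - F_jk,il
def W(i, j, k, l):
    return (sp.diff(F[i][j], X[k], X[l]) + sp.diff(F[k][l], X[i], X[j])
            - sp.diff(F[i][l], X[j], X[k]) - sp.diff(F[j][k], X[i], X[l]))
print(all(sp.simplify(W(i, j, k, l)) == 0
          for i in range(2) for j in range(2) for k in range(2) for l in range(2)))
# prints True: a strain field that is a symmetric derivative always satisfies W = 0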
The Saint-Venant tensor formula_9 is closely related to the Riemann curvature tensor formula_10. Indeed, the first variation of formula_11 about the Euclidean metric with a perturbation in the metric formula_5 is precisely formula_9. Consequently, the number of independent components of formula_9 is the same as that of formula_11, namely formula_12 in dimension "n". In particular, for formula_13, formula_9 has only one independent component, whereas for formula_14 there are six.
In its simplest form of course the components of formula_5 must be assumed twice continuously differentiable, but more recent work proves the result in a much more general case.
The relation between Saint-Venant's compatibility condition and Poincaré's lemma can be understood more clearly using a reduced form of formula_9 the Kröner tensor
formula_15
where formula_16 is the permutation symbol. For formula_14, formula_17 is a symmetric rank 2 tensor field. The vanishing of formula_17 is equivalent to the vanishing of formula_9, and this also shows that there are six independent components for the important case of three dimensions. While this still involves two derivatives rather than the one in the Poincaré lemma, it is possible to reduce to a problem involving first derivatives by introducing more variables, and it has been shown that the resulting 'elasticity complex' is equivalent to the de Rham complex.
In differential geometry the symmetrized derivative of a vector field appears also as the Lie derivative of the metric tensor "g" with respect to the vector field.
formula_18
where indices following a semicolon indicate covariant differentiation. The vanishing of formula_19 is thus the integrability condition for local existence of formula_20 in the Euclidean case. As noted above this coincides with the vanishing of the linearization of the Riemann curvature tensor about the Euclidean metric.
Generalization to higher rank tensors.
Saint-Venant's compatibility condition can be thought of as an analogue, for symmetric tensor fields, of Poincaré's lemma for skew-symmetric tensor fields (differential forms). The result can be generalized to higher rank symmetric tensor fields. Let F be a symmetric rank-k tensor field on an open set in n-dimensional Euclidean space, then the symmetric derivative is the rank k+1 tensor field defined by
formula_21
where we use the classical notation that indices following a comma indicate differentiation and groups of indices enclosed in brackets indicate symmetrization over those indices. The Saint-Venant tensor formula_9 of a symmetric rank-k tensor field formula_22 is defined by
formula_23
with
formula_24
On a simply connected domain in Euclidean space formula_25 implies that formula_26 for some rank k-1 symmetric tensor field formula_5. | [
{
"math_id": 0,
"text": "\\varepsilon"
},
{
"math_id": 1,
"text": "\\ u"
},
{
"math_id": 2,
"text": "\\epsilon_{ij} = \\frac{1}{2} \\left( \\frac{\\partial u_i}{\\partial x_j} + \\frac{\\partial u_j}{\\partial x_i} \\right)"
},
{
"math_id": 3,
"text": "1\\le i,j \\le 3"
},
{
"math_id": 4,
"text": "n\\ge 2 "
},
{
"math_id": 5,
"text": "F"
},
{
"math_id": 6,
"text": "n \\ge 2"
},
{
"math_id": 7,
"text": "W(F)"
},
{
"math_id": 8,
"text": "W_{ijkl} = \\frac{\\partial^2 F_{ij}}{\\partial x_k \\partial x_l} + \n\\frac{\\partial^2 F_{kl}}{\\partial x_i \\partial x_j} - \\frac{\\partial^2 F_{il}}{\\partial x_j \\partial x_k} -\\frac{\\partial^2 F_{jk}}{\\partial x_i \\partial x_l}\n"
},
{
"math_id": 9,
"text": "W"
},
{
"math_id": 10,
"text": "R_{ijkl}"
},
{
"math_id": 11,
"text": "R"
},
{
"math_id": 12,
"text": "\\frac{n^2 (n^2-1)}{12}"
},
{
"math_id": 13,
"text": "n=2"
},
{
"math_id": 14,
"text": "n=3"
},
{
"math_id": 15,
"text": "\nK_{i_1...i_{n-2}j_1...j_{n-2}} = \\epsilon_{i_1...i_{n-2}kl}\\epsilon_{j_1...j_{n-2}mp}F_{lm,kp}\n"
},
{
"math_id": 16,
"text": " \\epsilon"
},
{
"math_id": 17,
"text": "K"
},
{
"math_id": 18,
"text": " T_{ij}=(\\mathcal L_U g)_{ij} = U_{i;j}+U_{j;i}\n"
},
{
"math_id": 19,
"text": "W(T)"
},
{
"math_id": 20,
"text": "U"
},
{
"math_id": 21,
"text": " (dF)_{i_1... i_k i_{k+1}} = F_{(i_1... i_k,i_{k+1})}"
},
{
"math_id": 22,
"text": "T"
},
{
"math_id": 23,
"text": " W_{i_1..i_k j_1...j_k}=V_{(i_1..i_k)(j_1...j_k)}"
},
{
"math_id": 24,
"text": " V_{i_1..i_k j_1...j_k} = \\sum\\limits_{p=0}^{k} (-1)^p {k \\choose p} T_{i_1..i_{k-p}j_1...j_p,j_{p+1}...j_k i_{k-p+1}...i_k } "
},
{
"math_id": 25,
"text": "W=0"
},
{
"math_id": 26,
"text": " T = dF"
}
] | https://en.wikipedia.org/wiki?curid=11167824 |
11168195 | Existentially closed model | In model theory, a branch of mathematical logic, the notion of an existentially closed model (or existentially complete model) of a theory generalizes the notions of algebraically closed fields (for the theory of fields), real closed fields (for the theory of ordered fields), existentially closed groups (for the theory of groups), and dense linear orders without endpoints (for the theory of linear orders).
Definition.
A substructure "M" of a structure "N" is said to be existentially closed in (or existentially complete in) formula_0 if for every quantifier-free formula φ("x"1,…,"x""n","y"1,…,"y""n") and all elements "b"1,…,"b""n" of "M" such that φ("x"1,…,"x""n","b"1,…,"b""n") is realized in "N", then φ("x"1,…,"x""n","b"1,…,"b""n") is also realized in "M". In other words: If there is a tuple "a"1,…,"a""n" in "N" such that φ("a"1,…,"a""n","b"1,…,"b""n") holds in "N", then such a tuple also exists in "M". This notion is often denoted formula_1.
A model "M" of a theory "T" is called existentially closed in "T" if it is existentially closed in every superstructure "N" that is itself a model of "T". More generally, a structure "M" is called existentially closed in a class "K" of structures (in which it is contained as a member) if "M" is existentially closed in every superstructure "N" that is itself a member of "K".
The existential closure in "K" of a member "M" of "K", when it exists, is, up to isomorphism, the least existentially closed superstructure of "M". More precisely, it is any extensionally closed superstructure "M"∗ of "M" such that for every existentially closed superstructure "N" of "M", "M"∗ is isomorphic to a substructure of "N" via an isomorphism that is the identity on "M".
Examples.
Let "σ" = (+,×,0,1) be the signature of fields, i.e. + and × are binary function symbols and 0 and 1 are constant symbols. Let "K" be the class of structures of signature "σ" that are fields. If "A" is a subfield of "B", then "A" is existentially closed in "B" if and only if every system of polynomials over "A" that has a solution in "B" also has a solution in "A". It follows that the existentially closed members of "K" are exactly the algebraically closed fields.
Similarly in the class of ordered fields, the existentially closed structures are the real closed fields. In the class of linear orders, the existentially closed structures are those that are dense without endpoints, while the existential closure of any countable (including empty) linear order is, up to isomorphism, the countable dense total order without endpoints, namely the order type of the rationals. | [
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": "M \\prec_1 N"
}
] | https://en.wikipedia.org/wiki?curid=11168195 |
11171275 | Illness rate | The illness rate is calculated by comparing employee illness-related absences against planned working time, within a specific period. Illness-related absence times and planned working times are calculated in days.
Interpretation.
A high illness rate may be interpreted as an indicator of a heavy workload, bad working conditions, dangerous working environment, low employee satisfaction, and so on. As a simple key figure it can be used for planning purposes, for example, to shift resources from one area into an area with a high Illness Rate. An analysis of the illness reasons or causes must include other factors as well. For example, a high overtime rate combined with a high number of accidents may indicate the reasons for an increase of the illness rate.
Calculation Formula.
formula_0
Direction of Improvement.
One will usually try to minimize the illness rate.
Industry and Country Relevance.
The illness rate is generic for all industries and countries. | [
{
"math_id": 0,
"text": " \\textstyle{\\mbox{Illness rate } = \\frac{\\sum{\\mbox{Illness-related Absence Times in Days}}}{\\sum{\\mbox{Planned Working Times in Days}}}} "
}
] | https://en.wikipedia.org/wiki?curid=11171275 |
11171340 | Overtime rate | Overtime rate is a calculation of hours worked by a worker that exceed those hours defined for a standard workweek. This rate can have different meanings in different countries and jurisdictions, depending on how that jurisdiction's labor law defines overtime. In many jurisdictions, additional pay is mandated for certain classes of workers when this set number of hours is exceeded. In others, there is no concept of a standard workweek or analogous time period, and no additional pay for exceeding a set number of hours within that week.
The overtime rate is the ratio of employee overtime hours to regular hours in a specific time period. Even if the work is planned or scheduled, it can still be considered overtime if it exceeds what is considered the standard workweek in that jurisdiction.
A high overtime rate is a good indicator of a temporary or permanent high workload, and can be a contentious issue in labor-management relations. It could result in a higher illness rate, lower safety rate, higher labor costs, and lower productivity.
United States.
In the United States a standard workweek is considered to be 40 hours. Most waged employees or so-called non-exempt workers under U.S. federal labor and tax law must be paid at a wage rate of 150% of their regular hourly rate for hours that exceed 40 in a week. The start of the pay week can be defined by the employer, and need not be a standard calendar week start (e.g., Sunday midnight). Many employees, especially shift workers in the U.S., have some amount of overtime built into their schedules so that 24/7 coverage can be obtained.
Formula.
formula_0
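As a purely illustrative sketch (the function name and numbers are invented for the example), the calculation is straightforward in code:
def overtime_rate(overtime_hours, regular_hours):
    """Overtime rate, as a percentage of the defined regular hours."""
    return sum(overtime_hours) / sum(regular_hours) * 100
# Example: 12 overtime hours against 80 defined regular hours over two weeks
print(overtime_rate([8, 4], [40, 40]))   # 15.0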
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\textstyle{\\mbox{Overtime Rate } = \\frac{\\sum{\\mbox{Overtime Hours}}}{\\sum{\\mbox{Regular Hours (defined)}}}} \\times 100 \\%"
}
] | https://en.wikipedia.org/wiki?curid=11171340 |
11174017 | Fred Galvin | American mathematician
Frederick William Galvin is a mathematician, currently a professor at the University of Kansas. His research interests include set theory and combinatorics.
His notable combinatorial work includes the proof of the Dinitz conjecture. In set theory, he proved with András Hajnal that if ℵω1 is a strong limit cardinal, then
formula_0
holds. The research on extending this result led Saharon Shelah to the invention of PCF theory. Galvin gave an elementary proof of the Baumgartner–Hajnal theorem formula_1 (formula_2). The original proof by Baumgartner and Hajnal used forcing and absoluteness. Galvin and Shelah also proved the square bracket partition relations formula_3 and formula_4. Galvin also proved the partition relation formula_5 where η denotes the order type of the set of rational numbers.
Galvin and Karel Prikry proved that every Borel set is Ramsey. Galvin and Komjáth showed that the axiom of choice is equivalent to the statement that every graph has a chromatic number.
Galvin received his Ph.D. in 1967 from the University of Minnesota.
He invented Doublemove Chess in 1957, and Push Chess in 1967. | [
{
"math_id": 0,
"text": "2^{\\aleph_{\\omega_1}}<\\aleph_{(2^{\\aleph_1})^+}"
},
{
"math_id": 1,
"text": "\\omega_1\\to(\\alpha)^2_k"
},
{
"math_id": 2,
"text": "\\alpha<\\omega_1, k<\\omega"
},
{
"math_id": 3,
"text": "\\aleph_1\\not\\to[\\aleph_1]^2_4"
},
{
"math_id": 4,
"text": "2^{\\aleph_0}\\not\\to[2^{\\aleph_0}]^2_{\\aleph_0}"
},
{
"math_id": 5,
"text": "\\eta\\to[\\eta]^2_3"
}
] | https://en.wikipedia.org/wiki?curid=11174017 |
11174336 | In-place matrix transposition | In-place matrix transposition, also called in-situ matrix transposition, is the problem of transposing an "N"×"M" matrix in-place in computer memory, ideally with "O"(1) (bounded) additional storage, or at most with additional storage much less than "NM". Typically, the matrix is assumed to be stored in row-major or column-major order (i.e., contiguous rows or columns, respectively, arranged consecutively).
Performing an in-place transpose (in-situ transpose) is most difficult when "N" ≠ "M", i.e. for a non-square (rectangular) matrix, where it involves a complex permutation of the data elements, with many cycles of length greater than 2. In contrast, for a square matrix ("N" = "M"), all of the cycles are of length 1 or 2, and the transpose can be achieved by a simple loop to swap the upper triangle of the matrix with the lower triangle. Further complications arise if one wishes to maximize memory locality in order to improve cache line utilization or to operate out-of-core (where the matrix does not fit into main memory), since transposes inherently involve non-consecutive memory accesses.
The problem of non-square in-place transposition has been studied since at least the late 1950s, and several algorithms are known, including several which attempt to optimize locality for cache, out-of-core, or similar memory-related contexts.
Background.
On a computer, one can often avoid explicitly transposing a matrix in memory by simply accessing the same data in a different order. For example, software libraries for linear algebra, such as BLAS, typically provide options to specify that certain matrices are to be interpreted in transposed order to avoid data movement.
However, there remain a number of circumstances in which it is necessary or desirable to physically reorder a matrix in memory to its transposed ordering. For example, with a matrix stored in row-major order, the rows of the matrix are contiguous in memory and the columns are discontiguous. If repeated operations need to be performed on the columns, for example in a fast Fourier transform algorithm (e.g. Frigo & Johnson, 2005), transposing the matrix in memory (to make the columns contiguous) may improve performance by increasing memory locality. Since these situations normally coincide with the case of very large matrices (which exceed the cache size), performing the transposition in-place with minimal additional storage becomes desirable.
Also, as a purely mathematical problem, in-place transposition involves a number of interesting number theory puzzles that have been worked out over the course of several decades.
Example.
For example, consider the 2×4 matrix:
formula_0
In row-major format, this would be stored in computer memory as the sequence (11, 12, 13, 14, 21, 22, 23, 24), i.e. the two rows stored consecutively. If we transpose this, we obtain the 4×2 matrix:
formula_1
which is stored in computer memory as the sequence (11, 21, 12, 22, 13, 23, 14, 24).
If we number the storage locations 0 to 7, from left to right, then this permutation consists of four cycles:
(0), (1 2 4), (3 6 5), (7)
That is, the value in position 0 goes to position 0 (a cycle of length 1, no data motion). Next, the value in position 1 (in the original storage: 11, 12, 13, 14, 21, 22, 23, 24) goes to position 2 (in the transposed storage 11, 21, 12, 22, 13, 23, 14, 24), while the value in position 2 (11, 12, 13, 14, 21, 22, 23, 24) goes to position 4 (11, 21, 12, 22, 13, 23, 14, 24), and position 4 (11, 12, 13, 14, 21, 22, 23, 24) goes back to position 1 (11, 21, 12, 22, 13, 23, 14, 24). Similarly for the values in position 7 and positions (3 6 5).
Properties of the permutation.
In the following, we assume that the "N"×"M" matrix is stored in row-major order with zero-based indices. This means that the ("n","m") element, for "n" = 0...,"N"−1 and "m" = 0...,"M"−1, is stored at an address "a" = "Mn" + "m" (plus some offset in memory, which we ignore). In the transposed "M"×"N" matrix, the corresponding ("m","n") element is stored at the address "a' " = "Nm" + "n", again in row-major order. We define the "transposition permutation" to be the function "a' " = "P"("a") such that:
formula_2 for all formula_3
This defines a permutation on the numbers formula_4.
It turns out that one can define simple formulas for "P" and its inverse (Cate & Twigg, 1977). First:
formula_5
where "mod" is the modulo operation.
Second, the inverse permutation is given by:
formula_6
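These formulas translate directly into code. The following Python sketch (illustrative; the function names are chosen here) computes the transposition permutation and its inverse for an "N"×"M" row-major array, and reproduces the cycles of the 2×4 example above:
def P(a, N, M):
    """New address, in the transposed array, of the element stored at address a."""
    return M * N - 1 if a == M * N - 1 else (N * a) % (M * N - 1)
def P_inv(a, N, M):
    """Original address of the element that ends up at address a."""
    return M * N - 1 if a == M * N - 1 else (M * a) % (M * N - 1)
print([P(a, 2, 4) for a in range(8)])                 # [0, 2, 4, 6, 1, 3, 5, 7]
print([P_inv(P(a, 2, 4), 2, 4) for a in range(8)])    # [0, 1, 2, 3, 4, 5, 6, 7]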
As proved by Cate & Twigg (1977), the number of fixed points (cycles of length 1) of the permutation is precisely 1 + gcd("N"−1,"M"−1), where gcd is the greatest common divisor. For example, with "N" = "M" the number of fixed points is simply "N" (the diagonal of the matrix). If "N" − 1 and "M" − 1 are coprime, on the other hand, the only two fixed points are the upper-left and lower-right corners of the matrix.
The number of cycles of any length "k">1 is given by (Cate & Twigg, 1977):
formula_7
where μ is the Möbius function and the sum is over the divisors "d" of "k".
Furthermore, the cycle containing "a"=1 (i.e. the second element of the first row of the matrix) is always a cycle of maximum length "L", and the lengths "k" of all other cycles must be divisors of "L" (Cate & Twigg, 1977).
For a given cycle "C", every element formula_8 has the same greatest common divisor formula_9.
This theorem is useful in searching for cycles of the permutation, since an efficient search can look only at multiples of divisors of "MN"−1 (Brenner, 1973).
Laflin & Brebner (1970) pointed out that the cycles often come in pairs, which is exploited by several algorithms that permute pairs of cycles at a time. In particular, let "s" be the smallest element of some cycle "C" of length "k". It follows that "MN"−1−"s" is also an element of a cycle of length "k" (possibly the same cycle).
Algorithms.
The following briefly summarizes the published algorithms to perform in-place matrix transposition. Source code implementing some of these algorithms can be found in the references, below.
Accessor transpose.
Because physically transposing a matrix is computationally expensive, the access path may be transposed instead of moving values in memory. It is trivial to perform this operation for CPU access, as the access paths of iterators must simply be exchanged; however, hardware acceleration may require that the data still be physically realigned.
Square matrices.
For a square "N"×"N" matrix "A""n","m" = "A"("n","m"), in-place transposition is easy because all of the cycles have length 1 (the diagonals "A""n","n") or length 2 (the upper triangle is swapped with the lower triangle). Pseudocode to accomplish this (assuming zero-based array indices) is:
for n = 0 to N - 1
for m = n + 1 to N - 1
swap A(n,m) with A(m,n)
This type of implementation, while simple, can exhibit poor performance due to poor cache-line utilization, especially when "N" is a power of two (due to cache-line conflicts in a CPU cache with limited associativity). The reason for this is that, as "m" is incremented in the inner loop, the memory address corresponding to "A"("n","m") or "A"("m","n") jumps discontiguously by "N" in memory (depending on whether the array is in column-major or row-major format, respectively). That is, the algorithm does not exploit locality of reference.
One solution to improve the cache utilization is to "block" the algorithm to operate on several numbers at once, in blocks given by the cache-line size; unfortunately, this means that the algorithm depends on the size of the cache line (it is "cache-aware"), and on a modern computer with multiple levels of cache it requires multiple levels of machine-dependent blocking. Instead, it has been suggested (Frigo "et al.", 1999) that better performance can be obtained by a recursive algorithm: divide the matrix into four submatrices of roughly equal size, transposing the two submatrices along the diagonal recursively and transposing and swapping the two submatrices above and below the diagonal. (When "N" is sufficiently small, the simple algorithm above is used as a base case, as naively recurring all the way down to "N"=1 would have excessive function-call overhead.) This is a cache-oblivious algorithm, in the sense that it can exploit the cache line without the cache-line size being an explicit parameter.
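A compact way to express the recursive scheme is sketched below (illustrative Python on a list-of-lists matrix; the cutoff constant is arbitrary, and for brevity the off-diagonal blocks are swapped with a plain loop, whereas a fully cache-oblivious version would recurse on those blocks as well):
def transpose_square(A, r0=0, c0=0, size=None, base=16):
    """In-place transpose of the size-by-size diagonal block of A with top-left corner (r0, c0)."""
    if size is None:
        size = len(A)
    if size <= base:                      # base case: simple element-wise swaps
        for n in range(size):
            for m in range(n + 1, size):
                A[r0 + n][c0 + m], A[r0 + m][c0 + n] = A[r0 + m][c0 + n], A[r0 + n][c0 + m]
        return
    h = size // 2
    transpose_square(A, r0, c0, h, base)                  # upper-left block
    transpose_square(A, r0 + h, c0 + h, size - h, base)   # lower-right block
    for n in range(h):                                    # swap and transpose the off-diagonal blocks
        for m in range(size - h):
            A[r0 + n][c0 + h + m], A[r0 + h + m][c0 + n] = A[r0 + h + m][c0 + n], A[r0 + n][c0 + h + m]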
Non-square matrices: Following the cycles.
For non-square matrices, the algorithms are more complex. Many of the algorithms prior to 1980 could be described as "follow-the-cycles" algorithms. That is, they loop over the cycles, moving the data from one location to the next in the cycle. In pseudocode form:
for each length>1 cycle "C" of the permutation
pick a starting address "s" in "C"
let "D" = data at "s"
let "x" = predecessor of "s" in the cycle
while "x" ≠ "s"
move data from "x" to successor of "x"
let "x" = predecessor of "x"
move data from "D" to successor of "s"
The differences between the algorithms lie mainly in how they locate the cycles, how they find the starting addresses in each cycle, and how they ensure that each cycle is moved exactly once. Typically, as discussed above, the cycles are moved in pairs, since "s" and "MN"−1−"s" are in cycles of the same length (possibly the same cycle). Sometimes, a small scratch array, typically of length "M"+"N" (e.g. Brenner, 1973; Cate & Twigg, 1977) is used to keep track of a subset of locations in the array that have been visited, to accelerate the algorithm.
In order to determine whether a given cycle has been moved already, the simplest scheme would be to use "O"("MN") auxiliary storage, one bit per element, to indicate whether a given element has been moved. To use only "O"("M"+"N") or even "O"(log "MN") auxiliary storage, more-complex algorithms are required, and the known algorithms have a worst-case linearithmic computational cost of "O"("MN" log "MN") at best, as first proved by Knuth (Fich "et al.", 1995; Gustavson & Swirszcz, 2007).
Such algorithms are designed to move each data element exactly once. However, they also involve a considerable amount of arithmetic to compute the cycles, and require heavily non-consecutive memory accesses since the adjacent elements of the cycles differ by multiplicative factors of "N", as discussed above.
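For illustration, a direct Python version of this scheme, using the simple one-flag-per-element bookkeeping mentioned above rather than the more economical methods, might look as follows (names are chosen here for exposition):
def transpose_in_place(a, N, M):
    """Transpose an N-by-M matrix stored row-major in the flat list a, in place."""
    size = M * N
    visited = bytearray(size)             # one flag per element (the O(MN) bookkeeping scheme)
    for start in range(size):
        if visited[start] or start == 0 or start == size - 1:
            continue                      # already moved, or a fixed point of the permutation
        value, pos = a[start], start
        while True:                       # follow the cycle containing `start`
            nxt = (N * pos) % (size - 1)  # new address of the value currently being carried
            a[nxt], value = value, a[nxt]
            visited[nxt] = 1
            pos = nxt
            if pos == start:
                break
a = [11, 12, 13, 14, 21, 22, 23, 24]      # the 2x4 example above, in row-major order
transpose_in_place(a, 2, 4)
print(a)                                  # [11, 21, 12, 22, 13, 23, 14, 24]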
Improving memory locality at the cost of greater total data movement.
Several algorithms have been designed to achieve greater memory locality at the cost of greater data movement, as well as slightly greater storage requirements. That is, they may move each data element more than once, but they involve more consecutive memory access (greater spatial locality), which can improve performance on modern CPUs that rely on caches, as well as on SIMD architectures optimized for processing consecutive data blocks. The oldest context in which the spatial locality of transposition seems to have been studied is for out-of-core operation (by Alltop, 1975), where the matrix is too large to fit into main memory ("core").
For example, if "d" = gcd("N","M") is not small, one can perform the transposition using a small amount ("NM"/"d") of additional storage, with at most three passes over the array (Alltop, 1975; Dow, 1995). Two of the passes involve a sequence of separate, small transpositions (which can be performed efficiently out of place using a small buffer) and one involves an in-place "d"×"d" square transposition of formula_10 blocks (which is efficient since the blocks being moved are large and consecutive, and the cycles are of length at most 2). This is further simplified if N is a multiple of M (or vice versa), since only one of the two out-of-place passes is required.
Another algorithm for non-coprime dimensions, involving multiple subsidiary transpositions, was described by Catanzaro et al. (2014). For the case where "N" and "M" differ only slightly, Dow (1995) describes another algorithm requiring only a small amount of additional storage, involving a min("N", "M") ⋅ min("N", "M") square transpose preceded or followed by a small out-of-place transpose. Frigo & Johnson (2005) describe the adaptation of these algorithms to use cache-oblivious techniques for general-purpose CPUs relying on cache lines to exploit spatial locality.
Work on out-of-core matrix transposition, where the matrix does not fit in main memory and must be stored largely on a hard disk, has focused largely on the "N" = "M" square-matrix case, with some exceptions (e.g. Alltop, 1975). Reviews of out-of-core algorithms, especially as applied to parallel computing, can be found in e.g. Suh & Prasanna (2002) and Krishnamoorth et al. (2004).
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{bmatrix} 11 & 12 & 13 & 14 \\\\ 21 & 22 & 23 & 24\\end{bmatrix}."
},
{
"math_id": 1,
"text": "\\begin{bmatrix} 11 & 21 \\\\ 12 & 22 \\\\ 13 & 23 \\\\ 14 & 24\\end{bmatrix}"
},
{
"math_id": 2,
"text": "Nm + n = P(Mn + m) \\,"
},
{
"math_id": 3,
"text": "(n,m) \\in [0,N-1]\\times[0,M-1] \\,."
},
{
"math_id": 4,
"text": "a = 0,\\ldots,MN-1"
},
{
"math_id": 5,
"text": "P(a) = \\begin{cases}\nMN - 1 & \\text{if } a = MN - 1, \\\\\nNa \\bmod (MN - 1) & \\text{otherwise},\n\\end{cases}\n"
},
{
"math_id": 6,
"text": "P^{-1}(a') = \\begin{cases}\nMN - 1 & \\text{if } a' = MN - 1, \\\\\nMa' \\bmod (MN - 1) & \\text{otherwise}.\n\\end{cases}\n"
},
{
"math_id": 7,
"text": "\\frac{1}{k} \\sum_{d | k} \\mu(k/d) \\gcd(N^d - 1, MN - 1) ,"
},
{
"math_id": 8,
"text": "x \\in C"
},
{
"math_id": 9,
"text": "d = \\gcd(x, MN - 1)"
},
{
"math_id": 10,
"text": "NM/d^2"
}
] | https://en.wikipedia.org/wiki?curid=11174336 |
11176492 | John Brillhart | American mathematician (1930–2022)
John David Brillhart (November 13, 1930 – May 21, 2022) was a mathematician who worked in number theory at the University of Arizona.
Early life and education.
Brillhart was born on November 13, 1930, in Berkeley, California.
He studied at the University of California, Berkeley, where he received his A.B. in 1953, his M.A. in 1966, and his Ph.D. in 1967. His doctoral thesis in mathematics was supervised by D. H. Lehmer, with assistance from Leonard Carlitz.
Before becoming a mathematician, he served in the United States Army.
Career.
Brillhart joined the faculty at the University of Arizona in 1967 and retired in 2001. He advised two Ph.D. students.
Research.
Brillhart worked in integer factorization. His joint work with Michael A. Morrison in 1975 describes how to implement the continued fraction factorization method originally developed by Lehmer and Ralph Ernest Powers in 1931. One consequence was the first factorization of the Fermat number formula_0. Their ideas were influential in the development of the quadratic sieve by Carl Pomerance.
Brillhart was a member of the Cunningham Project, which factors Mersenne, Fermat, and related numbers. He was also a founding member and financial contributor to the Number Theory Foundation started by John L. Selfridge.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "F^7 = 2^{2^7}+1"
}
] | https://en.wikipedia.org/wiki?curid=11176492 |
11176721 | Alfred George Greenhill | British mathematician
Sir Alfred George Greenhill (29 November 1847 in London – 10 February 1927 in London), was a British mathematician.
George Greenhill was educated at Christ's Hospital School and from there he went to St John's College, Cambridge in 1866. In 1876, Greenhill was appointed professor of mathematics at the Royal Military Academy (RMA) at Woolwich, London, UK. He held this chair until his retirement in 1908, when he was knighted.
His 1892 textbook on applications of elliptic functions is of acknowledged excellence. He was one of the world's leading experts on applications of elliptic integrals in electromagnetic theory.
He was a Plenary Speaker of the ICM in 1904 at Heidelberg (where he also gave a section talk) and an Invited Speaker of the ICM in 1908 at Rome, in 1920 at Strasbourg, and in 1924 at Toronto.
Greenhill formula.
In 1879 Greenhill calculated complicated twist rate formulas for rifled artillery by approximating the projectile as an elongated ellipsoid of rotation in incompressible fluid (which, as he couldn't have known back then, assumes subsonic flight). Later, English ballistician F. W. Jones simplified it for typical bullet lengths into a rule of thumb for calculating the optimal twist rate for lead-core bullets. This shortcut uses the bullet's length, needing no allowances for weight or nose shape. The eponymous Greenhill formula, still used today, is:
formula_0
where:
"C" = 150 (use 180 for muzzle velocities higher than about 840 m/s)
"D" = the bullet's diameter in inches
"L" = the bullet's length in inches
"SG" = the specific gravity of the bullet material (10.9 for lead-core bullets, which makes the square-root term equal to 1)
The original value of C was 150, which yields a twist rate in inches per turn, when given the diameter D and the length L of the bullet in inches. This works to velocities of about 840 m/s (2800 ft/s); above those velocities, a C of 180 should be used. For instance, with a velocity of 600 m/s (2000 ft/s), a diameter of 0.5 in and a length of 1.5 in, the Greenhill formula gives 150 × 0.5²/1.5 = 25, which means 1 turn in 25 in.
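A small Python sketch of this rule of thumb (the function name is an assumption made for the example):
def greenhill_twist(diameter_in, length_in, specific_gravity=10.9, c=150.0):
    """Greenhill rule of thumb: optimal twist, in inches per turn, for a lead-core bullet."""
    return c * diameter_in ** 2 / length_in * (specific_gravity / 10.9) ** 0.5
With the example values above, greenhill_twist(0.5, 1.5) returns 25.0, i.e. one turn in 25 inches; c=180 would be passed for muzzle velocities above about 840 m/s.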
Recently, the Greenhill formula has been supplemented by the Miller twist rule.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{Twist} = \\frac{C D^2}{L} \\times \\sqrt{\\frac{SG}{10.9}}"
}
] | https://en.wikipedia.org/wiki?curid=11176721 |
1117829 | Type-2 Gumbel distribution | Probability distribution
In probability theory, the Type-2 Gumbel probability density function is
formula_0
for
formula_1.
For formula_2 the mean is infinite. For formula_3 the variance is infinite.
The cumulative distribution function is
formula_4
The moments formula_5 exist for formula_6.
The distribution is named after Emil Julius Gumbel (1891 – 1966).
Generating random variates.
Given a random variate "U" drawn from the uniform distribution in the interval (0, 1), then the variate
formula_7
has a Type-2 Gumbel distribution with parameters formula_8 and formula_9. This is obtained by applying the inverse transform sampling method.
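A minimal Python sketch of this draw (the function name is an assumption made for the example):
import math
import random

def type2_gumbel_variate(a, b):
    """Draw one Type-2 Gumbel(a, b) variate by inverse transform sampling."""
    u = random.random()
    while u == 0.0:                  # avoid log(0); this branch has negligible probability
        u = random.random()
    return (-math.log(u) / b) ** (-1.0 / a)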
Related distributions.
Based on The GNU Scientific Library, used under GFDL. | [
{
"math_id": 0,
"text": "f(x|a,b) = a b x^{-a-1} e^{-b x^{-a}}\\,"
},
{
"math_id": 1,
"text": "0 < x < \\infty"
},
{
"math_id": 2,
"text": "0<a\\le 1"
},
{
"math_id": 3,
"text": "0<a\\le 2"
},
{
"math_id": 4,
"text": "F(x|a,b) = e^{-b x^{-a}}\\,"
},
{
"math_id": 5,
"text": " E[X^k] \\,"
},
{
"math_id": 6,
"text": "k < a\\,"
},
{
"math_id": 7,
"text": "X=(-\\ln U/b)^{-1/a},"
},
{
"math_id": 8,
"text": "a"
},
{
"math_id": 9,
"text": "b"
},
{
"math_id": 10,
"text": "b=\\lambda^{-k}"
},
{
"math_id": 11,
"text": "a=-k"
}
] | https://en.wikipedia.org/wiki?curid=1117829 |
1117833 | Dirichlet distribution | Probability distribution
In probability and statistics, the Dirichlet distribution (after Peter Gustav Lejeune Dirichlet), often denoted formula_4, is a family of continuous multivariate probability distributions parameterized by a vector formula_5 of positive reals. It is a multivariate generalization of the beta distribution, hence its alternative name of multivariate beta distribution (MBD). Dirichlet distributions are commonly used as prior distributions in Bayesian statistics, and in fact, the Dirichlet distribution is the conjugate prior of the categorical distribution and multinomial distribution.
The infinite-dimensional generalization of the Dirichlet distribution is the "Dirichlet process".
Definitions.
Probability density function.
The Dirichlet distribution of order "K" ≥ 2 with parameters "α"1, ..., "α""K" > 0 has a probability density function with respect to Lebesgue measure on the Euclidean space R"K-1" given by
formula_6
where formula_7 belong to the standard formula_8 simplex, or in other words: formula_9
The normalizing constant is the multivariate beta function, which can be expressed in terms of the gamma function:
formula_10
Support.
The support of the Dirichlet distribution is the set of "K"-dimensional vectors formula_11 whose entries are real numbers in the interval [0,1] such that formula_12, i.e. the sum of the coordinates is equal to 1. These can be viewed as the probabilities of a "K"-way categorical event. Another way to express this is that the domain of the Dirichlet distribution is itself a set of probability distributions, specifically the set of "K"-dimensional discrete distributions. The technical term for the set of points in the support of a "K"-dimensional Dirichlet distribution is the open standard ("K" − 1)-simplex, which is a generalization of a triangle, embedded in the next-higher dimension. For example, with "K" = 3, the support is an equilateral triangle embedded in a downward-angle fashion in three-dimensional space, with vertices at (1,0,0), (0,1,0) and (0,0,1), i.e. touching each of the coordinate axes at a point 1 unit away from the origin.
Special cases.
A common special case is the symmetric Dirichlet distribution, where all of the elements making up the parameter vector formula_5 have the same value. The symmetric case might be useful, for example, when a Dirichlet prior over components is called for, but there is no prior knowledge favoring one component over another. Since all elements of the parameter vector have the same value, the symmetric Dirichlet distribution can be parametrized by a single scalar value "α", called the concentration parameter. In terms of "α," the density function has the form
formula_13
When "α"=1, the symmetric Dirichlet distribution is equivalent to a uniform distribution over the open standard ("K" − 1)-simplex, i.e. it is uniform over all points in its support. This particular distribution is known as the flat Dirichlet distribution. Values of the concentration parameter above 1 prefer variates that are dense, evenly distributed distributions, i.e. all the values within a single sample are similar to each other. Values of the concentration parameter below 1 prefer sparse distributions, i.e. most of the values within a single sample will be close to 0, and the vast majority of the mass will be concentrated in a few of the values.
More generally, the parameter vector is sometimes written as the product formula_14 of a (scalar) concentration parameter "α" and a (vector) base measure formula_15 where formula_16 lies within the ("K" − 1)-simplex (i.e.: its coordinates formula_17 sum to one). The concentration parameter in this case is larger by a factor of "K" than the concentration parameter for a symmetric Dirichlet distribution described above. This construction ties in with concept of a base measure when discussing Dirichlet processes and is often used in the topic modelling literature.
<templatestyles src="Citation/styles.css"/>^ If we define the concentration parameter as the sum of the Dirichlet parameters for each dimension, the Dirichlet distribution with concentration parameter "K", the dimension of the distribution, is the uniform distribution on the ("K" − 1)-simplex.
Properties.
Moments.
Let formula_18.
Let
formula_19
Then
formula_20
formula_21
Furthermore, if formula_22
formula_23
The covariance matrix is thus singular.
More generally, moments of Dirichlet-distributed random variables can be expressed in the following way. For formula_24, denote by formula_25 its formula_3-th Hadamard power. Then,
formula_26
where the sum is over non-negative integers formula_27 with formula_28, and formula_29 is the cycle index polynomial of the Symmetric group of degree formula_30.
The multivariate analogue formula_31 for vectors formula_32 can be expressed in terms of a color pattern of the exponents formula_33 in the sense of Pólya enumeration theorem.
Particular cases include the simple computation
formula_34
Mode.
The mode of the distribution is the vector ("x"1, ..., "xK") with
formula_35
Marginal distributions.
The marginal distributions are beta distributions:
formula_36
Conjugate to categorical or multinomial.
The Dirichlet distribution is the conjugate prior distribution of the categorical distribution (a generic discrete probability distribution with a given number of possible outcomes) and multinomial distribution (the distribution over observed counts of each possible category in a set of categorically distributed observations). This means that if a data point has either a categorical or multinomial distribution, and the prior distribution of the distribution's parameter (the vector of probabilities that generates the data point) is distributed as a Dirichlet, then the posterior distribution of the parameter is also a Dirichlet. Intuitively, in such a case, starting from what we know about the parameter prior to observing the data point, we then can update our knowledge based on the data point and end up with a new distribution of the same form as the old one. This means that we can successively update our knowledge of a parameter by incorporating new observations one at a time, without running into mathematical difficulties.
Formally, this can be expressed as follows. Given a model
formula_37
then the following holds:
formula_38
This relationship is used in Bayesian statistics to estimate the underlying parameter p of a categorical distribution given a collection of "N" samples. Intuitively, we can view the hyperprior vector α as pseudocounts, i.e. as representing the number of observations in each category that we have already seen. Then we simply add in the counts for all the new observations (the vector c) in order to derive the posterior distribution.
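For instance, a minimal Python sketch of this pseudocount update (the numbers are illustrative assumptions):
alpha = [1.0, 1.0, 1.0]     # prior: the flat Dirichlet over three categories
counts = [5, 0, 2]          # observed counts c for each category
posterior = [a + c for a, c in zip(alpha, counts)]   # parameters of the posterior Dir(6, 1, 3)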
In Bayesian mixture models and other hierarchical Bayesian models with mixture components, Dirichlet distributions are commonly used as the prior distributions for the categorical variables appearing in the models. See the section on applications below for more information.
Relation to Dirichlet-multinomial distribution.
In a model where a Dirichlet prior distribution is placed over a set of categorical-valued observations, the marginal joint distribution of the observations (i.e. the joint distribution of the observations, with the prior parameter marginalized out) is a Dirichlet-multinomial distribution. This distribution plays an important role in hierarchical Bayesian models, because when doing inference over such models using methods such as Gibbs sampling or variational Bayes, Dirichlet prior distributions are often marginalized out. See the article on this distribution for more details.
Entropy.
If "X" is a formula_4 random variable, the differential entropy of "X" (in nat units) is
formula_39
where formula_0 is the digamma function.
The following formula for formula_40 can be used to derive the differential entropy above. Since the functions formula_41 are the sufficient statistics of the Dirichlet distribution, the exponential family differential identities can be used to get an analytic expression for the expectation of formula_41 (see equation (2.62) in ) and its associated covariance matrix:
formula_42
and
formula_43
where formula_0 is the digamma function, formula_44 is the trigamma function, and formula_1 is the Kronecker delta.
The spectrum of Rényi information for values other than formula_45 is given by
formula_46
and the information entropy is the limit as formula_47 goes to 1.
Another related interesting measure is the entropy of a discrete categorical (one-of-K binary) vector formula_48 with probability-mass distribution formula_49, i.e., formula_50. The conditional information entropy of formula_48, given formula_49 is
formula_51
This function of formula_49 is a scalar random variable. If formula_49 has a symmetric Dirichlet distribution with all formula_52, the expected value of the entropy (in nat units) is
formula_53
Aggregation.
If
formula_54
then, if the random variables with subscripts "i" and "j" are dropped from the vector and replaced by their sum,
formula_55
This aggregation property may be used to derive the marginal distribution of formula_56 mentioned above.
Neutrality.
If formula_18, then the vector "X" is said to be "neutral" in the sense that "XK" is independent of formula_57 where
formula_58
and similarly for removing any of formula_59. Observe that any permutation of "X" is also neutral (a property not possessed by samples drawn from a generalized Dirichlet distribution).
Combining this with the property of aggregation it follows that "X""j" + ... + "X""K" is independent of formula_60. In fact it is true, further, for the Dirichlet distribution, that for formula_61, the pair formula_62, and the two vectors formula_60 and formula_63, viewed as triple of normalised random vectors, are mutually independent. The analogous result is true for partition of the indices {1,2...,"K"} into any other pair of non-singleton subsets.
Characteristic function.
The characteristic function of the Dirichlet distribution is a confluent form of the Lauricella hypergeometric series. It is given by Phillips as
formula_64
where
formula_65
The sum is over non-negative integers formula_66 and formula_67. Phillips goes on to state that this form is "inconvenient for numerical calculation" and gives an alternative in terms of a complex path integral:
formula_68
where "L" denotes any path in the complex plane originating at formula_69, encircling in the positive direction all the singularities of the integrand and returning to formula_69.
Inequality.
Probability density function formula_70 plays a key role in a multifunctional inequality which implies various bounds for the Dirichlet distribution.
Related distributions.
For "K" independently distributed Gamma distributions:
formula_71
we have:
formula_72
formula_73
Although the "Xi"s are not independent from one another, they can be seen to be generated from a set of "K" independent gamma random variables. Unfortunately, since the sum "V" is lost in forming "X" (in fact it can be shown that "V" is stochastically independent of "X"), it is not possible to recover the original gamma random variables from these values alone. Nevertheless, because independent random variables are simpler to work with, this reparametrization can still be useful for proofs about properties of the Dirichlet distribution.
Conjugate prior of the Dirichlet distribution.
Because the Dirichlet distribution is an exponential family distribution it has a conjugate prior.
The conjugate prior is of the form:
formula_74
Here formula_75 is a "K"-dimensional real vector and formula_76 is a scalar parameter. The domain of formula_77 is restricted to the set of parameters for which the above unnormalized density function can be normalized. The (necessary and sufficient) condition is:
formula_78
The conjugation property can be expressed as
if ["prior": formula_79] and ["observation": formula_80] then ["posterior": formula_81].
In the published literature there is no practical algorithm to efficiently generate samples from formula_82.
Occurrence and applications.
Bayesian models.
Dirichlet distributions are most commonly used as the prior distribution of categorical variables or multinomial variables in Bayesian mixture models and other hierarchical Bayesian models. (In many fields, such as in natural language processing, categorical variables are often imprecisely called "multinomial variables". Such a usage is unlikely to cause confusion, just as when Bernoulli distributions and binomial distributions are commonly conflated.)
Inference over hierarchical Bayesian models is often done using Gibbs sampling, and in such a case, instances of the Dirichlet distribution are typically marginalized out of the model by integrating out the Dirichlet random variable. This causes the various categorical variables drawn from the same Dirichlet random variable to become correlated, and the joint distribution over them assumes a Dirichlet-multinomial distribution, conditioned on the hyperparameters of the Dirichlet distribution (the concentration parameters). One of the reasons for doing this is that Gibbs sampling of the Dirichlet-multinomial distribution is extremely easy; see that article for more information.
Intuitive interpretations of the parameters.
The concentration parameter.
Dirichlet distributions are very often used as prior distributions in Bayesian inference. The simplest and perhaps most common type of Dirichlet prior is the symmetric Dirichlet distribution, where all parameters are equal. This corresponds to the case where you have no prior information to favor one component over any other. As described above, the single value "α" to which all parameters are set is called the concentration parameter. If the sample space of the Dirichlet distribution is interpreted as a discrete probability distribution, then intuitively the concentration parameter can be thought of as determining how "concentrated" the probability mass of a sample from the Dirichlet distribution is: with a value much less than 1, the mass will be highly concentrated in a few components and all the rest will have almost no mass, while with a value much greater than 1, the mass will be dispersed almost equally among all the components. See the article on the concentration parameter for further discussion.
String cutting.
One example use of the Dirichlet distribution is if one wanted to cut strings (each of initial length 1.0) into "K" pieces with different lengths, where each piece had a designated average length, but allowing some variation in the relative sizes of the pieces. Recall that formula_19 The formula_83 values specify the mean lengths of the cut pieces of string resulting from the distribution. The variance around this mean varies inversely with formula_2.
Pólya's urn.
Consider an urn containing balls of "K" different colors. Initially, the urn contains "α"1 balls of color 1, "α"2 balls of color 2, and so on. Now perform "N" draws from the urn, where after each draw, the ball is placed back into the urn with an additional ball of the same color. In the limit as "N" approaches infinity, the proportions of different colored balls in the urn will be distributed as Dir("α"1, ..., "α""K").
For a formal proof, note that the proportions of the different colored balls form a bounded [0,1]"K"-valued martingale, hence by the martingale convergence theorem, these proportions converge almost surely and in mean to a limiting random vector. To see that this limiting vector has the above Dirichlet distribution, check that all mixed moments agree.
Each draw from the urn modifies the probability of drawing a ball of any one color from the urn in the future. This modification diminishes with the number of draws, since the relative effect of adding a new ball to the urn diminishes as the urn accumulates increasing numbers of balls.
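A small Python simulation of the urn scheme (the initial ball counts and the number of draws are illustrative assumptions); for large "N" the resulting proportions behave approximately like a single draw from the corresponding Dirichlet distribution:
import random

alpha = [2, 3, 5]                        # initial number of balls of each color
counts = list(alpha)
for _ in range(100000):                  # N draws with reinforcement
    color = random.choices(range(len(counts)), weights=counts)[0]
    counts[color] += 1                   # return the ball plus one more of the same color
proportions = [c / sum(counts) for c in counts]   # approximately one draw from Dir(2, 3, 5)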
Random variate generation.
From gamma distribution.
With a source of Gamma-distributed random variates, one can easily sample a random vector formula_84 from the "K"-dimensional Dirichlet distribution with parameters formula_85 . First, draw "K" independent random samples formula_86 from Gamma distributions each with density
formula_87
and then set
formula_88
<templatestyles src="Template:Hidden begin/styles.css"/>[Proof]
The joint distribution of the independently sampled gamma variates, formula_89, is given by the product:
formula_90
Next, one uses a change of variables, parametrising formula_91 in terms of formula_92 and formula_93, and performs a change of variables from formula_94 such that formula_95. The new variables satisfy formula_96 and likewise formula_97. One must then use the change of variables formula, formula_98, in which formula_99 is the transformation Jacobian. Writing y explicitly as a function of x, one obtains
formula_100
The Jacobian now looks like
formula_101
The determinant can be evaluated by noting that it remains unchanged if multiples of a row are added to another row, and adding each of the first K-1 rows to the bottom row to obtain
formula_102
which can be expanded about the bottom row to obtain the determinant value formula_103. Substituting for x in the joint pdf and including the Jacobian determinant, one obtains:
formula_104
where formula_105. The right-hand side can be recognized as the product of a Dirichlet pdf for the formula_106 and a gamma pdf for formula_107. The product form shows the Dirichlet and gamma variables are independent, so the latter can be integrated out by simply omitting it, to obtain:
formula_108
Which is equivalent to
formula_109 with support formula_110
Below is example Python code to draw the sample:
import random

params = [a1, a2, ..., ak]                              # concentration parameters (placeholders)
sample = [random.gammavariate(a, 1) for a in params]    # K independent Gamma(alpha_i, 1) draws
total = sum(sample)
sample = [v / total for v in sample]                    # normalize to obtain the Dirichlet sample
This formulation is correct regardless of how the Gamma distributions are parameterized (shape/scale vs. shape/rate) because they are equivalent when scale and rate equal 1.0.
From marginal beta distributions.
A less efficient algorithm relies on the univariate marginal and conditional distributions being beta and proceeds as follows. Simulate formula_111 from
formula_112
Then simulate formula_113 in order, as follows. For formula_114, simulate formula_115 from
formula_116
and let
formula_117
Finally, set
formula_118
This iterative procedure corresponds closely to the "string cutting" intuition described above.
Below is example Python code to draw the sample:
import random

params = [a1, a2, ..., ak]                               # concentration parameters (placeholders)
xs = [random.betavariate(params[0], sum(params[1:]))]    # first coordinate ~ Beta(alpha_1, alpha_2 + ... + alpha_K)
for j in range(1, len(params) - 1):
    phi = random.betavariate(params[j], sum(params[j + 1 :]))
    xs.append((1 - sum(xs)) * phi)                       # scale by the length of string still remaining
xs.append(1 - sum(xs))                                   # the last coordinate takes whatever is left
When each alpha is 1.
When "α"1 = ... = "α""K" = 1, a sample from the distribution can be found by randomly drawing a set of "K" − 1 values independently and uniformly from the interval [0, 1], adding the values 0 and 1 to the set to make it have "K" + 1 values, sorting the set, and computing the difference between each pair of order-adjacent values, to give "x"1, ..., "x""K".
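Below is example Python code to draw the sample in this case:
import random

k = 5                                                # the dimension K (here 5 for illustration)
points = sorted([0.0, 1.0] + [random.random() for _ in range(k - 1)])
xs = [points[i + 1] - points[i] for i in range(k)]   # the K spacings sum to 1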
When each alpha is 1/2 and relationship to the hypersphere.
When "α"1 = ... = "α""K" = 1/2, a sample from the distribution can be found by randomly drawing K values independently from the standard normal distribution, squaring these values, and normalizing them by dividing by their sum, to give "x"1, ..., "x""K".
A point ("x"1, ..., "x""K") can be drawn uniformly at random from the ("K"−1)-dimensional hypersphere (which is the surface of a K-dimensional hyperball) via a similar procedure. Randomly draw K values independently from the standard normal distribution and normalize these coordinate values by dividing each by the constant that is the square root of the sum of their squares.
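Similarly, example Python code for the "α"1 = ... = "α""K" = 1/2 case:
import random

k = 5                                                    # the dimension K (here 5 for illustration)
sq = [random.gauss(0.0, 1.0) ** 2 for _ in range(k)]     # squared standard normals
total = sum(sq)
xs = [v / total for v in sq]                             # a draw from Dir(1/2, ..., 1/2)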
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\psi"
},
{
"math_id": 1,
"text": "\\delta_{ij}"
},
{
"math_id": 2,
"text": "\\alpha_0"
},
{
"math_id": 3,
"text": "i"
},
{
"math_id": 4,
"text": "\\operatorname{Dir}(\\boldsymbol\\alpha)"
},
{
"math_id": 5,
"text": "\\boldsymbol\\alpha"
},
{
"math_id": 6,
"text": "f \\left(x_1,\\ldots, x_{K}; \\alpha_1,\\ldots, \\alpha_K \\right) = \\frac{1}{\\mathrm{B}(\\boldsymbol\\alpha)} \\prod_{i=1}^K x_i^{\\alpha_i - 1}"
},
{
"math_id": 7,
"text": "\\{x_k\\}_{k=1}^{k=K}"
},
{
"math_id": 8,
"text": "K-1"
},
{
"math_id": 9,
"text": "\\sum_{i=1}^{K} x_i = 1 \\mbox{ and } x_i \\in \\left[0,1\\right] \\mbox{ for all } i \\in \\{1,\\dots,K\\}"
},
{
"math_id": 10,
"text": "\\mathrm{B}(\\boldsymbol\\alpha) = \\frac{\\prod\\limits_{i=1}^K \\Gamma(\\alpha_i)}{\\Gamma\\left(\\sum\\limits_{i=1}^K \\alpha_i\\right)},\\qquad\\boldsymbol{\\alpha}=(\\alpha_1,\\ldots,\\alpha_K)."
},
{
"math_id": 11,
"text": "\\boldsymbol x"
},
{
"math_id": 12,
"text": "\\|\\boldsymbol x\\|_1 = 1"
},
{
"math_id": 13,
"text": "f(x_1,\\dots, x_{K}; \\alpha) = \\frac{\\Gamma(\\alpha K)}{\\Gamma(\\alpha)^K} \\prod_{i=1}^K x_i^{\\alpha - 1}."
},
{
"math_id": 14,
"text": "\\alpha \\boldsymbol n"
},
{
"math_id": 15,
"text": "\\boldsymbol n=(n_1,\\dots,n_K)"
},
{
"math_id": 16,
"text": "\\boldsymbol n"
},
{
"math_id": 17,
"text": "n_i"
},
{
"math_id": 18,
"text": "X = (X_1, \\ldots, X_K)\\sim\\operatorname{Dir}(\\boldsymbol\\alpha)"
},
{
"math_id": 19,
"text": "\\alpha_0 = \\sum_{i=1}^K \\alpha_i."
},
{
"math_id": 20,
"text": " \\operatorname{E}[X_i] = \\frac{\\alpha_i}{\\alpha_0},"
},
{
"math_id": 21,
"text": "\\operatorname{Var}[X_i] = \\frac{\\alpha_i (\\alpha_0-\\alpha_i)}{\\alpha_0^2 (\\alpha_0+1)}."
},
{
"math_id": 22,
"text": " i\\neq j"
},
{
"math_id": 23,
"text": "\\operatorname{Cov}[X_i,X_j] = \\frac{- \\alpha_i \\alpha_j}{\\alpha_0^2 (\\alpha_0+1)}."
},
{
"math_id": 24,
"text": " \\boldsymbol{t}=(t_1,\\dotsc,t_K) \\in \\mathbb{R}^K"
},
{
"math_id": 25,
"text": "\\boldsymbol{t}^{\\circ i} = (t_1^i,\\dotsc,t_K^i)"
},
{
"math_id": 26,
"text": "\\operatorname{E}\\left[ (\\boldsymbol{t} \\cdot \\boldsymbol{X})^n \\right] = \\frac{n! \\, \\Gamma ( \\alpha_0 )}{\\Gamma (\\alpha_0+n)} \\sum \\frac{{t_1}^{k_1} \\cdots {t_K}^{k_K}}{k_1! \\cdots k_K!} \\prod_{i=1}^K \\frac{\\Gamma(\\alpha_i + k_i)}{\\Gamma(\\alpha_i)} = \\frac{n! \\, \\Gamma ( \\alpha_0 )}{\\Gamma (\\alpha_0+n)} Z_n(\\boldsymbol{t}^{\\circ 1} \\cdot \\boldsymbol{\\alpha}, \\cdots, \\boldsymbol{t}^{\\circ n} \\cdot \\boldsymbol{\\alpha}),"
},
{
"math_id": 27,
"text": "k_1,\\ldots,k_K"
},
{
"math_id": 28,
"text": "n=k_1+\\cdots+k_K"
},
{
"math_id": 29,
"text": "Z_n"
},
{
"math_id": 30,
"text": "n"
},
{
"math_id": 31,
"text": "\\operatorname{E}\\left[ (\\boldsymbol{t}_1 \\cdot \\boldsymbol{X})^{n_1} \\cdots (\\boldsymbol{t}_q \\cdot \\boldsymbol{X})^{n_q} \\right]"
},
{
"math_id": 32,
"text": "\\boldsymbol{t}_1, \\dotsc, \\boldsymbol{t}_q \\in \\mathbb{R}^K"
},
{
"math_id": 33,
"text": "n_1, \\dotsc, n_q"
},
{
"math_id": 34,
"text": "\\operatorname{E}\\left[\\prod_{i=1}^K X_i^{\\beta_i}\\right] = \\frac{B\\left(\\boldsymbol{\\alpha} + \\boldsymbol{\\beta}\\right)}{B\\left(\\boldsymbol{\\alpha}\\right)} = \\frac{\\Gamma\\left(\\sum\\limits_{i=1}^K \\alpha_{i}\\right)}{\\Gamma\\left[\\sum\\limits_{i=1}^K (\\alpha_i+\\beta_i)\\right]}\\times\\prod_{i=1}^K \\frac{\\Gamma(\\alpha_i+\\beta_i)}{\\Gamma(\\alpha_i)}."
},
{
"math_id": 35,
"text": " x_i = \\frac{\\alpha_i - 1}{\\alpha_0 - K}, \\qquad \\alpha_i > 1. "
},
{
"math_id": 36,
"text": "X_i \\sim \\operatorname{Beta} (\\alpha_i, \\alpha_0 - \\alpha_i). "
},
{
"math_id": 37,
"text": "\\begin{array}{rcccl}\n\\boldsymbol\\alpha &=& \\left(\\alpha_1, \\ldots, \\alpha_K \\right) &=& \\text{concentration hyperparameter} \\\\\n\\mathbf{p}\\mid\\boldsymbol\\alpha &=& \\left(p_1, \\ldots, p_K \\right ) &\\sim& \\operatorname{Dir}(K, \\boldsymbol\\alpha) \\\\\n\\mathbb{X}\\mid\\mathbf{p} &=& \\left(\\mathbf{x}_1, \\ldots, \\mathbf{x}_K \\right ) &\\sim& \\operatorname{Cat}(K,\\mathbf{p})\n\\end{array}"
},
{
"math_id": 38,
"text": "\\begin{array}{rcccl}\n\\mathbf{c} &=& \\left(c_1, \\ldots, c_K \\right ) &=& \\text{number of occurrences of category }i \\\\\n\\mathbf{p} \\mid \\mathbb{X},\\boldsymbol\\alpha &\\sim& \\operatorname{Dir}(K,\\mathbf{c}+\\boldsymbol\\alpha) &=& \\operatorname{Dir} \\left (K,c_1+\\alpha_1,\\ldots,c_K+\\alpha_K \\right)\n\\end{array}"
},
{
"math_id": 39,
"text": " h(\\boldsymbol X) = \\operatorname{E}[- \\ln f(\\boldsymbol X)] = \\ln \\operatorname{B}(\\boldsymbol\\alpha) + (\\alpha_0-K)\\psi(\\alpha_0) - \\sum_{j=1}^K (\\alpha_j-1)\\psi(\\alpha_j) "
},
{
"math_id": 40,
"text": " \\operatorname{E}[\\ln(X_i)]"
},
{
"math_id": 41,
"text": "\\ln(X_i)"
},
{
"math_id": 42,
"text": " \\operatorname{E}[\\ln(X_i)] = \\psi(\\alpha_i)-\\psi(\\alpha_0)"
},
{
"math_id": 43,
"text": " \\operatorname{Cov}[\\ln(X_i),\\ln(X_j)] = \\psi'(\\alpha_i) \\delta_{ij} - \\psi'(\\alpha_0)"
},
{
"math_id": 44,
"text": "\\psi'"
},
{
"math_id": 45,
"text": " \\lambda = 1"
},
{
"math_id": 46,
"text": "F_R(\\lambda) = (1-\\lambda)^{-1} \\left( - \\lambda \\log \\mathrm{B}(\\boldsymbol\\alpha) + \\sum_{i=1}^K \\log \\Gamma(\\lambda(\\alpha_i - 1) + 1) - \\log \\Gamma(\\lambda (\\alpha_0 - K) + K ) \\right) "
},
{
"math_id": 47,
"text": "\\lambda"
},
{
"math_id": 48,
"text": "\\boldsymbol Z "
},
{
"math_id": 49,
"text": "\\boldsymbol X "
},
{
"math_id": 50,
"text": " P(Z_i=1, Z_{j\\ne i} = 0 | \\boldsymbol X) = X_i "
},
{
"math_id": 51,
"text": " S(\\boldsymbol X) = H(\\boldsymbol Z | \\boldsymbol X) = \\operatorname{E}_{\\boldsymbol Z}[- \\log P(\\boldsymbol Z | \\boldsymbol X ) ] = \\sum_{i=1}^K - X_i \\log X_i "
},
{
"math_id": 52,
"text": "\\alpha_i = \\alpha"
},
{
"math_id": 53,
"text": " \\operatorname{E}[S(\\boldsymbol X)] = \\sum_{i=1}^K \\operatorname{E}[- X_i \\ln X_i] = \\psi(K\\alpha + 1) - \\psi(\\alpha + 1) "
},
{
"math_id": 54,
"text": "X = (X_1, \\ldots, X_K)\\sim\\operatorname{Dir}(\\alpha_1,\\ldots,\\alpha_K)"
},
{
"math_id": 55,
"text": "X' = (X_1, \\ldots, X_i + X_j, \\ldots, X_K)\\sim\\operatorname{Dir} (\\alpha_1, \\ldots, \\alpha_i + \\alpha_j, \\ldots, \\alpha_K)."
},
{
"math_id": 56,
"text": "X_i"
},
{
"math_id": 57,
"text": "X^{(-K)}"
},
{
"math_id": 58,
"text": "X^{(-K)}=\\left(\\frac{X_1}{1-X_K},\\frac{X_2}{1-X_K},\\ldots,\\frac{X_{K-1}}{1-X_K} \\right),"
},
{
"math_id": 59,
"text": "X_2,\\ldots,X_{K-1}"
},
{
"math_id": 60,
"text": "\\left(\\frac{X_1}{X_1+\\cdots +X_{j-1}},\\frac{X_2}{X_1+\\cdots +X_{j-1}},\\ldots,\\frac{X_{j-1}}{X_1+\\cdots +X_{j-1}} \\right)"
},
{
"math_id": 61,
"text": "3\\le j\\le K-1"
},
{
"math_id": 62,
"text": "\\left(X_1+\\cdots +X_{j-1}, X_j+\\cdots +X_K\\right)"
},
{
"math_id": 63,
"text": "\\left(\\frac{X_j}{X_j+\\cdots +X_K},\\frac{X_{j+1}}{X_j+\\cdots +X_K},\\ldots,\\frac{X_K}{X_j+\\cdots +X_K} \\right)"
},
{
"math_id": 64,
"text": "\nCF\\left(s_1,\\ldots,s_{K-1}\\right) = \\operatorname{E}\\left(e^{i\\left(s_1X_1+\\cdots+s_{K-1}X_{K-1} \\right)} \\right)= \\Psi^{\\left[K-1\\right]} (\\alpha_1,\\ldots,\\alpha_{K-1};\\alpha_0;is_1,\\ldots, is_{K-1})\n"
},
{
"math_id": 65,
"text": "\n\\Psi^{[m]} (a_1,\\ldots,a_m;c;z_1,\\ldots z_m) = \\sum\\frac{(a_1)_{k_1} \\cdots (a_m)_{k_m} \\, z_1^{k_1} \\cdots z_m^{k_m}}{(c)_k\\,k_1!\\cdots k_m!}.\n"
},
{
"math_id": 66,
"text": "k_1,\\ldots,k_m"
},
{
"math_id": 67,
"text": "k=k_1+\\cdots+k_m"
},
{
"math_id": 68,
"text": "\n\\Psi^{[m]} = \\frac{\\Gamma(c)}{2\\pi i}\\int_L e^t\\,t^{a_1+\\cdots+a_m-c}\\,\\prod_{j=1}^m (t-z_j)^{-a_j} \\, dt"
},
{
"math_id": 69,
"text": "-\\infty"
},
{
"math_id": 70,
"text": "f \\left(x_1,\\ldots, x_{K-1}; \\alpha_1,\\ldots, \\alpha_K \\right)"
},
{
"math_id": 71,
"text": " Y_1 \\sim \\operatorname{Gamma}(\\alpha_1, \\theta), \\ldots, Y_K \\sim \\operatorname{Gamma}(\\alpha_K, \\theta)"
},
{
"math_id": 72,
"text": "V=\\sum_{i=1}^K Y_i\\sim\\operatorname{Gamma} \\left(\\alpha_0, \\theta \\right ),"
},
{
"math_id": 73,
"text": "X = (X_1, \\ldots, X_K) = \\left(\\frac{Y_1}{V}, \\ldots, \\frac{Y_K}{V} \\right)\\sim \\operatorname{Dir}\\left (\\alpha_1, \\ldots, \\alpha_K \\right)."
},
{
"math_id": 74,
"text": "\\operatorname{CD}(\\boldsymbol\\alpha \\mid \\boldsymbol{v},\\eta) \\propto \\left(\\frac{1}{\\operatorname{B}(\\boldsymbol\\alpha)}\\right)^\\eta \\exp\\left(-\\sum_k v_k \\alpha_k\\right)."
},
{
"math_id": 75,
"text": "\\boldsymbol{v}"
},
{
"math_id": 76,
"text": "\\eta"
},
{
"math_id": 77,
"text": "(\\boldsymbol{v},\\eta)"
},
{
"math_id": 78,
"text": "\n\\forall k\\;\\;v_k>0\\;\\;\\;\\;\\text{ and } \\;\\;\\;\\;\\eta>-1 \\;\\;\\;\\;\\text{ and } \\;\\;\\;\\;(\\eta\\leq0\\;\\;\\;\\;\\text{ or }\\;\\;\\;\\;\\sum_k \\exp-\\frac{v_k} \\eta < 1)\n"
},
{
"math_id": 79,
"text": "\\boldsymbol{\\alpha}\\sim\\operatorname{CD}(\\cdot \\mid \\boldsymbol{v},\\eta)"
},
{
"math_id": 80,
"text": "\\boldsymbol{x}\\mid\\boldsymbol{\\alpha}\\sim\\operatorname{Dirichlet}(\\cdot \\mid \\boldsymbol{\\alpha})"
},
{
"math_id": 81,
"text": "\\boldsymbol{\\alpha}\\mid\\boldsymbol{x}\\sim\\operatorname{CD}(\\cdot \\mid \\boldsymbol{v}-\\log \\boldsymbol{x}, \\eta+1)"
},
{
"math_id": 82,
"text": "\\operatorname{CD}(\\boldsymbol{\\alpha} \\mid \\boldsymbol{v},\\eta)"
},
{
"math_id": 83,
"text": "\\alpha_i/\\alpha_0"
},
{
"math_id": 84,
"text": "x=(x_1, \\ldots, x_K)"
},
{
"math_id": 85,
"text": "(\\alpha_1, \\ldots, \\alpha_K)"
},
{
"math_id": 86,
"text": "y_1, \\ldots, y_K"
},
{
"math_id": 87,
"text": " \\operatorname{Gamma}(\\alpha_i, 1) = \\frac{y_i^{\\alpha_i-1} \\; e^{-y_i}}{\\Gamma (\\alpha_i)}, \\!"
},
{
"math_id": 88,
"text": "x_i = \\frac{y_i}{\\sum_{j=1}^K y_j}."
},
{
"math_id": 89,
"text": "\\{y_{i}\\}"
},
{
"math_id": 90,
"text": " e^{-\\sum_{i}y_{i}} \\prod _{i=1}^{K} \\frac{y_{i}^{\\alpha _{i}-1}}{\\Gamma (\\alpha _{i})} "
},
{
"math_id": 91,
"text": " \\{y_{i}\\}"
},
{
"math_id": 92,
"text": " y_{1}, y_{2}, \\ldots , y_{K-1} "
},
{
"math_id": 93,
"text": " \\sum _{i=1}^{K}y_{i}"
},
{
"math_id": 94,
"text": " y \\to x "
},
{
"math_id": 95,
"text": "\\bar x = \\textstyle\\sum_{i=1}^{K}y_{i}, x_{1} = \\frac{y_{1}}{\\bar x}, x_{2} = \\frac{y_{2}}{\\bar x}, \\ldots , x_{K-1} = \\frac{y_{K-1}}{\\bar x}"
},
{
"math_id": 96,
"text": "0 \\leq x_{1}, x_{2}, \\ldots , x_{k-1} \\leq 1 "
},
{
"math_id": 97,
"text": "0 \\leq \\textstyle\\sum _{i=1}^{K-1}x_{i} \\leq 1 "
},
{
"math_id": 98,
"text": " P(x) = P(y(x))\\bigg|\\frac{\\partial y}{\\partial x}\\bigg| "
},
{
"math_id": 99,
"text": "\\bigg|\\frac{\\partial y}{\\partial x}\\bigg|"
},
{
"math_id": 100,
"text": "y_{1} = \\bar xx_{1}, y_{2} = \\bar xx_{2} \\ldots y_{K-1} = \\bar xx_{K-1}, y_{K} = \\bar x(1-\\textstyle\\sum_{i=1}^{K-1}x_{i}) "
},
{
"math_id": 101,
"text": "\\begin{vmatrix}\\bar x & 0 & \\ldots & x_{1} \\\\ 0 & \\bar x & \\ldots & x_{2} \\\\ \\vdots & \\vdots & \\ddots & \\vdots \\\\ -\\bar x & -\\bar x & \\ldots & 1-\\sum_{i=1}^{K-1}x_{i} \\end{vmatrix}"
},
{
"math_id": 102,
"text": "\\begin{vmatrix}\\bar x & 0 & \\ldots & x_{1} \\\\ 0 & \\bar x & \\ldots & x_{2} \\\\ \\vdots & \\vdots & \\ddots & \\vdots \\\\ 0 & 0 & \\ldots & 1 \\end{vmatrix} "
},
{
"math_id": 103,
"text": "\\bar x^{K-1}"
},
{
"math_id": 104,
"text": "\n\\begin{align}\n&\\frac{\\left[\\prod _{i=1}^{K-1}(\\bar xx_{i})^{\\alpha _{i}-1} \\right] \\left[\\bar x(1-\\sum_{i=1}^{K-1}x_{i})\\right]^{\\alpha_{K}-1}}{\\prod _{i=1}^{K}\\Gamma (\\alpha _{i})}\\bar x^{K-1}e^{-\\bar x} \\\\\n=&\\frac{\\Gamma(\\bar\\alpha)\\left[\\prod _{i=1}^{K-1}(x_{i})^{\\alpha _{i}-1} \\right] \\left[1-\\sum_{i=1}^{K-1}x_{i}\\right]^{\\alpha_{K}-1}}{\\prod _{i=1}^{K}\\Gamma (\\alpha _{i})}\\times\\frac{\\bar x^{\\bar\\alpha_i-1}e^{-\\bar x}}{\\Gamma(\\bar\\alpha)}\n\\end{align}\n"
},
{
"math_id": 105,
"text": "\\bar\\alpha=\\textstyle\\sum_{i=1}^K\\alpha_i"
},
{
"math_id": 106,
"text": "x_i"
},
{
"math_id": 107,
"text": "\\bar x"
},
{
"math_id": 108,
"text": " x_{1}, x_{2}, \\ldots, x_{K-1} \\sim \\frac{(1-\\sum_{i=1}^{K-1}x_{i})^{\\alpha _{K}-1}\\prod _{i=1}^{K-1}x_{i}^{\\alpha _{i} -1}}{B(\\boldsymbol{\\alpha})} "
},
{
"math_id": 109,
"text": " \\frac{\\prod _{i=1}^{K} x_{i}^{\\alpha_{i}-1}}{B(\\boldsymbol{\\alpha})} "
},
{
"math_id": 110,
"text": " \\sum_{i=1}^{K}x_{i}=1 "
},
{
"math_id": 111,
"text": "x_1"
},
{
"math_id": 112,
"text": "\\textrm{Beta}\\left(\\alpha_1, \\sum_{i=2}^K \\alpha_i \\right)"
},
{
"math_id": 113,
"text": "x_2, \\ldots, x_{K-1}"
},
{
"math_id": 114,
"text": "j=2, \\ldots, K-1"
},
{
"math_id": 115,
"text": "\\phi_j"
},
{
"math_id": 116,
"text": "\\textrm{Beta} \\left(\\alpha_j, \\sum_{i=j+1}^K \\alpha_i \\right ),"
},
{
"math_id": 117,
"text": "x_j= \\left(1-\\sum_{i=1}^{j-1} x_i \\right )\\phi_j."
},
{
"math_id": 118,
"text": "x_K=1-\\sum_{i=1}^{K-1} x_i."
}
] | https://en.wikipedia.org/wiki?curid=1117833 |
1117859 | Landau distribution | In probability theory, the Landau distribution is a probability distribution named after Lev Landau.
Because of the distribution's "fat" tail, the moments of the distribution, such as mean or variance, are undefined. The distribution is a particular case of stable distribution.
Definition.
The probability density function, as written originally by Landau, is defined by the complex integral:
formula_1
where "a" is an arbitrary positive real number, meaning that the integration path can be any line parallel to the imaginary axis that intersects the positive real semi-axis, and formula_2 refers to the natural logarithm.
In other words, it is the inverse Laplace transform (written as a Bromwich integral) of the function formula_3.
The following real integral is equivalent to the above:
formula_4
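In practice the density has to be evaluated numerically; a minimal sketch based on this real integral, assuming SciPy is available (the quadrature settings are illustrative and the sketch is only reasonable for moderate values of "x"):
import math
from scipy.integrate import quad

def landau_pdf(x):
    """Numerically evaluate the standard Landau density from the real integral above."""
    def integrand(t):
        return math.exp(-t * math.log(t) - x * t) * math.sin(math.pi * t) if t > 0 else 0.0
    value, _ = quad(integrand, 0.0, float("inf"), limit=200)
    return value / math.pi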
The full family of Landau distributions is obtained by extending the original distribution to a location-scale family of stable distributions with parameters formula_5 and formula_6, with characteristic function:
formula_7
where formula_8 and formula_0, which yields a density function:
formula_9
Taking formula_10 and formula_11 we get the original form of formula_12 above.
Properties.
If formula_13, then for any real constant "m", formula_14, and for any "a" > 0, formula_15.
If formula_16 and formula_17 are independent, then formula_18.
These properties can all be derived from the characteristic function.
Together they imply that the Landau distributions are closed under affine transformations.
Approximations.
In the "standard" case formula_10 and formula_19, the pdf can be approximated using Lindhard theory which says:
formula_20
where formula_21 is Euler's constant.
A similar approximation of formula_22 for formula_10 and formula_23 is:
formula_24
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mu\\in(-\\infty,\\infty)"
},
{
"math_id": 1,
"text": "p(x) = \\frac{1}{2 \\pi i} \\int_{a-i\\infty}^{a+i\\infty} e^{s \\log(s) + x s}\\, ds , "
},
{
"math_id": 2,
"text": "\\log"
},
{
"math_id": 3,
"text": "s^s"
},
{
"math_id": 4,
"text": "p(x) = \\frac{1}{\\pi} \\int_0^\\infty e^{-t \\log(t) - x t} \\sin(\\pi t)\\, dt."
},
{
"math_id": 5,
"text": "\\alpha=1"
},
{
"math_id": 6,
"text": "\\beta=1"
},
{
"math_id": 7,
"text": "\\varphi(t;\\mu,c)=\\exp\\left(it\\mu -\\tfrac{2ict}{\\pi}\\log|t|-c|t|\\right)"
},
{
"math_id": 8,
"text": "c\\in(0,\\infty)"
},
{
"math_id": 9,
"text": "p(x;\\mu,c) = \\frac{1}{\\pi c}\\int_{0}^{\\infty} e^{-t}\\cos\\left(t\\left(\\frac{x-\\mu}{c}\\right)+\\frac{2t}{\\pi}\\log\\left(\\frac{t}{c}\\right)\\right)\\, dt , "
},
{
"math_id": 10,
"text": "\\mu=0"
},
{
"math_id": 11,
"text": "c=\\frac{\\pi}{2}"
},
{
"math_id": 12,
"text": "p(x)"
},
{
"math_id": 13,
"text": "X \\sim \\textrm{Landau}(\\mu,c)\\, "
},
{
"math_id": 14,
"text": " X + m \\sim \\textrm{Landau}(\\mu + m ,c) \\,"
},
{
"math_id": 15,
"text": " aX \\sim \\textrm{Landau}(a\\mu-\\tfrac{2ac\\log(a)}{\\pi}, ac) \\,"
},
{
"math_id": 16,
"text": "X \\sim \\textrm{Landau}(\\mu_1, c_1)"
},
{
"math_id": 17,
"text": "Y \\sim \\textrm{Landau}(\\mu_2, c_2) \\,"
},
{
"math_id": 18,
"text": " X+Y \\sim \\textrm{Landau}(\\mu_1+\\mu_2, c_1+c_2)"
},
{
"math_id": 19,
"text": "c=\\pi/2"
},
{
"math_id": 20,
"text": "p(x+\\log(x)-1+\\gamma) \\approx \\frac{\\exp(-1/x)}{x(1+x)},"
},
{
"math_id": 21,
"text": "\\gamma"
},
{
"math_id": 22,
"text": "p(x;\\mu,c)"
},
{
"math_id": 23,
"text": "c=1"
},
{
"math_id": 24,
"text": "p(x) \\approx \\frac{1}{\\sqrt{2\\pi}}\\exp\\left(-\\frac{x + e^{-x}}{2}\\right)."
},
{
"math_id": 25,
"text": "\\alpha"
},
{
"math_id": 26,
"text": "\\beta"
}
] | https://en.wikipedia.org/wiki?curid=1117859 |
1117869 | Stable distribution | Distribution of variables which satisfies a stability property under linear combinations
In probability theory, a distribution is said to be stable if a linear combination of two independent random variables with this distribution has the same distribution, up to location and scale parameters. A random variable is said to be stable if its distribution is stable. The stable distribution family is also sometimes referred to as the Lévy alpha-stable distribution, after Paul Lévy, the first mathematician to have studied it.
Of the four parameters defining the family, most attention has been focused on the stability parameter, formula_0 (see panel). Stable distributions have formula_6, with the upper bound corresponding to the normal distribution, and formula_7 to the Cauchy distribution. The distributions have undefined variance for formula_8, and undefined mean for formula_9. The importance of stable probability distributions is that they are "attractors" for properly normed sums of independent and identically distributed (iid) random variables. The normal distribution defines a family of stable distributions. By the classical central limit theorem the properly normed sum of a set of random variables, each with finite variance, will tend toward a normal distribution as the number of variables increases. Without the finite variance assumption, the limit may be a stable distribution that is not normal. Mandelbrot referred to such distributions as "stable Paretian distributions", after Vilfredo Pareto. In particular, he referred to those maximally skewed in the positive direction with formula_10 as "Pareto–Lévy distributions", which he regarded as better descriptions of stock and commodity prices than normal distributions.
Definition.
A non-degenerate distribution is a stable distribution if it satisfies the following property:
<templatestyles src="Block indent/styles.css"/> Let "X"1 and "X"2 be independent realizations of a random variable "X". Then "X" is said to be stable if for any constants "a" > 0 and "b" > 0 the random variable "aX"1 + "bX"2 has the same distribution as "cX" + "d" for some constants "c" > 0 and "d". The distribution is said to be "strictly stable" if this holds with "d" = 0.
Since the normal distribution, the Cauchy distribution, and the Lévy distribution all have the above property, it follows that they are special cases of stable distributions.
Such distributions form a four-parameter family of continuous probability distributions parametrized by location and scale parameters "μ" and "c", respectively, and two shape parameters formula_1 and formula_0, roughly corresponding to measures of asymmetry and concentration, respectively (see the figures).
The characteristic function formula_11 of any probability distribution is the Fourier transform of its probability density function formula_12. The density function is therefore the inverse Fourier transform of the characteristic function:
formula_13
Although the probability density function for a general stable distribution cannot be written analytically, the general characteristic function can be expressed analytically. A random variable "X" is called stable if its characteristic function can be written as
formula_14
where sgn("t") is just the sign of t and
formula_15
"μ" ∈ R is a shift parameter, formula_16, called the "skewness parameter", is a measure of asymmetry. Notice that in this context the usual skewness is not well defined, as for formula_8 the distribution does not admit 2nd or higher moments, and the usual skewness definition is the 3rd central moment.
The reason this gives a stable distribution is that the characteristic function for the sum of two independent random variables equals the product of the two corresponding characteristic functions. Adding two random variables from a stable distribution gives something with the same values of formula_0 and formula_1, but possibly different values of "μ" and "c".
Not every function is the characteristic function of a legitimate probability distribution (that is, one whose cumulative distribution function is real and goes from 0 to 1 without decreasing), but the characteristic functions given above will be legitimate so long as the parameters are in their ranges. The value of the characteristic function at some value "t" is the complex conjugate of its value at −"t" as it should be so that the probability distribution function will be real.
In the simplest case formula_4, the characteristic function is just a stretched exponential function; the distribution is symmetric about "μ" and is referred to as a (Lévy) symmetric alpha-stable distribution, often abbreviated "SαS".
When formula_2 and formula_17, the distribution is supported on ["μ", ∞).
The parameter "c" > 0 is a scale factor which is a measure of the width of the distribution while formula_0 is the exponent or index of the distribution and specifies the asymptotic behavior of the distribution.
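As a hedged numerical illustration of the definition above, the characteristic function can be evaluated directly; the sketch below assumes the standard form of this ("1") parametrization, in which the factor multiplying "i" "β" sgn("t") equals tan(π"α"/2) for "α" ≠ 1 and −(2/π)log|"t"| for "α" = 1, and the function name is an assumption made for the example:
import cmath
import math

def stable_char_fn(t, alpha, beta, c, mu):
    """Characteristic function of a stable law in the '1' parametrization (assumed standard form)."""
    if t == 0:
        return 1.0 + 0.0j
    if alpha != 1:
        phi = math.tan(math.pi * alpha / 2)
    else:
        phi = -(2 / math.pi) * math.log(abs(t))
    sgn = 1.0 if t > 0 else -1.0
    return cmath.exp(1j * t * mu - abs(c * t) ** alpha * (1 - 1j * beta * sgn * phi))
For instance, stable_char_fn(t, 2, 0, c, 0) equals exp(−"c"2"t"2), the Gaussian case, while stable_char_fn(t, 1, 0, c, 0) equals exp(−"c"|"t"|), the Cauchy case.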
Parametrizations.
The parametrization of stable distributions is not unique. Nolan tabulates 11 parametrizations seen in the literature and gives conversion formulas. The two most commonly used parametrizations are the one above (Nolan's "1") and the one immediately below (Nolan's "0").
The parametrization above is easiest to use for theoretical work, but its probability density is not continuous in the parameters at formula_18. A continuous parametrization, better for numerical work, is
formula_19
where:
formula_20
The ranges of formula_0 and formula_1 are the same as before, "γ" (like "c") should be positive, and "δ" (like "μ") should be real.
In either parametrization one can make a linear transformation of the random variable to get a random variable whose density is formula_21. In the first parametrization, this is done by defining the new variable:
formula_22
For the second parametrization, simply use
formula_23
independent of formula_0. In the first parametrization, if the mean exists (that is, formula_3) then it is equal to "μ", whereas in the second parametrization when the mean exists it is equal to formula_24
The distribution.
A stable distribution is therefore specified by the above four parameters. It can be shown that any non-degenerate stable distribution has a smooth (infinitely differentiable) density function. If formula_25 denotes the density of "X" and "Y" is the sum of independent copies of "X":
formula_26
then "Y" has the density formula_27 with
formula_28
The asymptotic behavior is described, for formula_8, by:
formula_29
where Γ is the Gamma function (except that when formula_30 and formula_31, the tail does not vanish to the left or right, resp., of "μ", although the above expression is 0). This "heavy tail" behavior causes the variance of stable distributions to be infinite for all formula_32. This property is illustrated in the log–log plots below.
When formula_5, the distribution is Gaussian (see below), with tails asymptotic to exp(−"x"2/4"c"2)/(2"c"√π).
One-sided stable distribution and stable count distribution.
When formula_2 and formula_17, the distribution is supported on ["μ", ∞). This family is called one-sided stable distribution. Its standard distribution ("μ" = 0) is defined as
formula_33, where formula_34
Let formula_35, its characteristic function is formula_36. Thus the integral form of its PDF is (note: formula_37)
formula_38
The double-sine integral is more effective for very small formula_39.
Consider the Lévy sum formula_40 where formula_41, then "Y" has the density formula_42 where formula_43. Set formula_44 to arrive at the stable count distribution. Its standard distribution is defined as
formula_45
The stable count distribution is the conjugate prior of the one-sided stable distribution. Its location-scale family is defined as
formula_46, formula_47
It is also a one-sided distribution supported on formula_48. The location parameter formula_49 is the cut-off location, while formula_50 defines its scale.
When formula_51, formula_52 is the Lévy distribution which is an inverse gamma distribution. Thus formula_53 is a shifted gamma distribution of shape 3/2 and scale formula_54,
formula_55
Its mean is formula_56 and its standard deviation is formula_57. It is hypothesized that VIX is distributed like formula_58 with formula_59 and formula_60 (See Section 7 of ). Thus the stable count distribution is the first-order marginal distribution of a volatility process. In this context, formula_49 is called the "floor volatility".
Another approach to derive the stable count distribution is to use the Laplace transform of the one-sided stable distribution, (Section 2.4 of )
formula_61
Let formula_62, and one can decompose the integral on the left hand side as a product distribution of a standard Laplace distribution and a standard stable count distribution,
formula_63
This is called the "lambda decomposition" (See Section 4 of ) since the right hand side was named as "symmetric lambda distribution" in Lihn's former works. However, it has several more popular names such as "exponential power distribution", or the "generalized error/normal distribution", often referred to when formula_3.
The n-th moment of formula_64 is the formula_65-th moment of formula_66, and all positive moments are finite.
Properties.
Stable distributions are closed under convolution for a fixed value of formula_0. Since convolution is equivalent to multiplication of the Fourier-transformed function, it follows that the product of two stable characteristic functions with the same formula_0 will yield another such characteristic function. The product of two stable characteristic functions is given by:
formula_67
Since Φ is not a function of the "μ", "c" or formula_1 variables it follows that these parameters for the convolved function are given by:
formula_68
In each case, it can be shown that the resulting parameters lie within the required intervals for a stable distribution.
The Generalized Central Limit Theorem.
The Generalized Central Limit Theorem (GCLT) was an effort of multiple mathematicians (Bernstein, Lindeberg, Lévy, Feller, Kolmogorov, and others) over the period from 1920 to 1937.
The first published complete proof (in French) of the GCLT was in 1937 by Paul Lévy.
An English language version of the complete proof of the GCLT is available in the translation of Gnedenko and Kolmogorov's 1954 book.
The statement of the GLCT is as follows:
"A non-degenerate random variable" "Z" "is α-stable for some 0 < α ≤ 2 if and only if there is an independent, identically distributed sequence of random variables" "X"1, "X"2, "X"3, ... "and constants" "a""n" > 0, "b""n" ∈ ℝ "with"
"a""n" ("X"1 + ... + "X""n") − "b""n" → "Z."
"Here → means the sequence of random variable sums converges in distribution; i.e., the corresponding distributions satisfy" "F""n"("y") → "F"("y") "at all continuity points of" "F."
In other words, if sums of independent, identically distributed random variables converge in distribution to some "Z", then "Z" must be a stable distribution.
Special cases.
There is no general analytic solution for the form of "f"("x"). There are, however, three special cases which can be expressed in terms of elementary functions, as can be seen by inspection of the characteristic function:
For "α" = 2 the distribution reduces to a Gaussian distribution with variance "σ"2 = 2"c"2 and mean "μ"; the skewness parameter "β" then has no effect.
For "α" = 1 and "β" = 0 the distribution reduces to a Cauchy distribution with scale parameter "c" and shift parameter "μ".
For "α" = 1/2 and "β" = 1 the distribution reduces to a Lévy distribution with scale parameter "c" and shift parameter "μ".
Note that the above three distributions are also connected, in the following way: A standard Cauchy random variable can be viewed as a mixture of Gaussian random variables (all with mean zero), with the variance being drawn from a standard Lévy distribution. And in fact this is a special case of a more general theorem (See p. 59 of ) which allows any symmetric alpha-stable distribution to be viewed in this way (with the alpha parameter of the mixture distribution equal to twice the alpha parameter of the mixing distribution—and the beta parameter of the mixing distribution always equal to one).
A general closed form expression for stable PDFs with rational values of formula_0 is available in terms of Meijer G-functions. Fox H-Functions can also be used to express the stable probability density functions. For simple rational numbers, the closed form expression is often in terms of less complicated special functions. Several closed form expressions having rather simple expressions in terms of special functions are available. In the table below, PDFs expressible by elementary functions are indicated by an "E" and those that are expressible by special functions are indicated by an "s".
Some of the special cases are known by particular names: for α = 1 and β = 1 the distribution is the Landau distribution, which has a specific usage in physics under this name, and for α = 3/2 and β = 0 it reduces to the Holtsmark distribution.
Also, in the limit as "c" approaches zero or as α approaches zero the distribution will approach a Dirac delta function "δ"("x" − "μ").
Series representation.
The stable distribution can be restated as the real part of a simpler integral:
formula_72
Expressing the second exponential as a Taylor series, this leads to:
formula_73
where formula_74. Reversing the order of integration and summation, and carrying out the integration yields:
formula_75
which will be valid for "x" ≠ "μ" and will converge for appropriate values of the parameters. (Note that the "n" = 0 term which yields a delta function in "x" − "μ" has therefore been dropped.) Expressing the first exponential as a series will yield another series in positive powers of "x" − "μ" which is generally less useful.
For one-sided stable distribution, the above series expansion needs to be modified, since formula_76 and formula_77. There is no real part to sum. Instead, the integral of the characteristic function should be carried out on the negative axis, which yields:
formula_78
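As a sanity check of this one-sided series (an illustrative sketch added here; the closed form used for comparison is the α = 1/2 Lévy density in this normalization, 1/(2√π) x^(−3/2) e^(−1/(4x))), the partial sums can be compared with the exact density at a few points:

```python
import math

def levy_series(x, alpha=0.5, n_terms=60):
    """One-sided stable PDF via the series expansion given above."""
    total = 0.0
    for n in range(1, n_terms + 1):
        total += (-math.sin(n * (alpha + 1) * math.pi) / math.factorial(n)
                  * x ** (-(alpha * n + 1)) * math.gamma(alpha * n + 1))
    return total / math.pi

def levy_exact(x):
    """Closed-form alpha = 1/2 density in the same normalization."""
    return x ** (-1.5) * math.exp(-1.0 / (4.0 * x)) / (2.0 * math.sqrt(math.pi))

for x in (0.5, 1.0, 2.0, 5.0):
    print(x, levy_series(x), levy_exact(x))
# The two columns agree closely for moderate x; for very small x the
# alternating terms cancel badly in floating point and more care is needed.
```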
Parameter estimation.
In addition to the existing tests for normality and subsequent parameter estimation, a general method which relies on the quantiles was developed by McCulloch; it works for both symmetric and skew stable distributions with stability parameter formula_79.
Simulation of stable variates.
There are no analytic expressions for the inverse formula_80 nor for the CDF formula_81 itself, so the inversion method cannot be used to generate stable-distributed variates. Other standard approaches like the rejection method would require tedious computations. An elegant and efficient solution was proposed by Chambers, Mallows and Stuck (CMS), who noticed that a certain integral formula yielded the following algorithm: generate a random variable formula_82 uniformly distributed on formula_83 and an independent exponential random variable formula_84 with mean 1; then, for formula_85, compute
formula_86
and for α = 1 compute
formula_87
where
formula_88
This algorithm yields a random variable formula_89. For a detailed proof see.
To simulate a stable random variable for all admissible values of the parameters formula_0, formula_90, formula_1 and formula_91 use the following property: If formula_92 then
formula_93
is formula_94. For formula_5 (and formula_4) the CMS method reduces to the well-known Box–Muller transform for generating Gaussian random variables. While other approaches have been proposed in the literature, including application of Bergström and LePage series expansions, the CMS method is regarded as the fastest and the most accurate.
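A direct transcription of the recipe above into code might look as follows (an illustrative sketch rather than a reference implementation; the function and variable names are arbitrary, and no care is taken over numerical edge cases near α = 1 or u = ±π/2):

```python
import math
import random

def stable_rvs(alpha, beta, c=1.0, mu=0.0, rng=random):
    """Draw one variate following the CMS formulas given above, then rescale."""
    u = rng.uniform(-math.pi / 2, math.pi / 2)   # U ~ Uniform(-pi/2, pi/2)
    w = rng.expovariate(1.0)                     # W ~ Exponential(1)
    if alpha != 1:
        zeta = -beta * math.tan(math.pi * alpha / 2)
        xi = math.atan(-zeta) / alpha
        x = ((1 + zeta ** 2) ** (1 / (2 * alpha))
             * math.sin(alpha * (u + xi)) / math.cos(u) ** (1 / alpha)
             * (math.cos(u - alpha * (u + xi)) / w) ** ((1 - alpha) / alpha))
        return c * x + mu
    xi = math.pi / 2
    x = ((math.pi / 2 + beta * u) * math.tan(u)
         - beta * math.log((math.pi / 2) * w * math.cos(u) / (math.pi / 2 + beta * u))) / xi
    return c * x + (2 / math.pi) * beta * c * math.log(c) + mu

# Quick sanity check: alpha = 2, beta = 0 should give a Gaussian with variance 2*c^2.
samples = [stable_rvs(2.0, 0.0) for _ in range(200_000)]
print(sum(samples) / len(samples))                 # ~ 0
print(sum(s * s for s in samples) / len(samples))  # ~ 2
```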
Applications.
Stable distributions owe their importance in both theory and practice to the generalization of the central limit theorem to random variables without second (and possibly first) order moments and the accompanying self-similarity of the stable family. It was the seeming departure from normality along with the demand for a self-similar model for financial data (i.e. the shape of the distribution for yearly asset price changes should resemble that of the constituent daily or monthly price changes) that led Benoît Mandelbrot to propose that cotton prices follow an alpha-stable distribution with formula_0 equal to 1.7. Lévy distributions are frequently found in analysis of critical behavior and financial data.
They are also found in spectroscopy as a general expression for a quasistatically pressure broadened spectral line.
The Lévy distribution of solar flare waiting time events (time between flare events) was demonstrated for CGRO BATSE hard x-ray solar flares in December 2001. Analysis of the Lévy statistical signature revealed that two different memory signatures were evident; one related to the solar cycle and the second whose origin appears to be associated with a localized or combination of localized solar active region effects.
Other analytic cases.
A number of cases of analytically expressible stable distributions are known. Let the stable distribution be expressed by formula_95, then:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\alpha"
},
{
"math_id": 1,
"text": "\\beta"
},
{
"math_id": 2,
"text": "\\alpha < 1"
},
{
"math_id": 3,
"text": "\\alpha > 1"
},
{
"math_id": 4,
"text": "\\beta = 0"
},
{
"math_id": 5,
"text": "\\alpha = 2"
},
{
"math_id": 6,
"text": "0 < \\alpha \\leq 2"
},
{
"math_id": 7,
"text": "\\alpha=1"
},
{
"math_id": 8,
"text": "\\alpha < 2"
},
{
"math_id": 9,
"text": "\\alpha \\leq 1"
},
{
"math_id": 10,
"text": "1 < \\alpha < 2"
},
{
"math_id": 11,
"text": "\\varphi(t) "
},
{
"math_id": 12,
"text": "f(x) "
},
{
"math_id": 13,
"text": " \\varphi(t) = \\int_{- \\infty}^\\infty f(x)e^{ ixt}\\,dx. "
},
{
"math_id": 14,
"text": " \\varphi(t; \\alpha, \\beta, c, \\mu) = \\exp \\left ( i t \\mu - |c t|^\\alpha \\left ( 1 - i \\beta \\sgn(t) \\Phi \\right ) \\right ) "
},
{
"math_id": 15,
"text": " \\Phi = \\begin{cases}\n\\tan \\left (\\frac{\\pi \\alpha}{2} \\right) & \\alpha \\neq 1 \\\\\n- \\frac{2}{\\pi}\\log|t| & \\alpha = 1\n\\end{cases} "
},
{
"math_id": 16,
"text": "\\beta \\in [-1,1]"
},
{
"math_id": 17,
"text": "\\beta = 1"
},
{
"math_id": 18,
"text": "\\alpha =1"
},
{
"math_id": 19,
"text": " \\varphi(t; \\alpha, \\beta, \\gamma, \\delta) = \\exp \\left (i t \\delta - |\\gamma t|^\\alpha \\left (1 - i \\beta \\sgn(t) \\Phi \\right ) \\right ) "
},
{
"math_id": 20,
"text": " \\Phi = \\begin{cases} \\left ( |\\gamma t|^{1 - \\alpha} - 1 \\right ) \\tan \\left (\\tfrac{\\pi \\alpha}{2} \\right ) & \\alpha \\neq 1 \\\\ - \\frac{2}{\\pi} \\log|\\gamma t| & \\alpha = 1 \\end{cases} "
},
{
"math_id": 21,
"text": " f(y; \\alpha, \\beta, 1, 0) "
},
{
"math_id": 22,
"text": " y = \\begin{cases} \\frac{x - \\mu}\\gamma & \\alpha \\neq 1 \\\\ \\frac{x - \\mu}\\gamma - \\beta\\frac 2\\pi\\ln\\gamma & \\alpha = 1 \\end{cases} "
},
{
"math_id": 23,
"text": " y = \\frac{x-\\delta}\\gamma "
},
{
"math_id": 24,
"text": " \\delta - \\beta \\gamma \\tan \\left (\\tfrac{\\pi\\alpha}{2} \\right)."
},
{
"math_id": 25,
"text": " f(x; \\alpha, \\beta, c, \\mu) "
},
{
"math_id": 26,
"text": " Y = \\sum_{i = 1}^N k_i (X_i - \\mu)"
},
{
"math_id": 27,
"text": " \\tfrac{1}{s} f(y / s; \\alpha, \\beta, c, 0) "
},
{
"math_id": 28,
"text": " s = \\left(\\sum_{i = 1}^N |k_i|^\\alpha \\right )^{\\frac{1}{\\alpha}} "
},
{
"math_id": 29,
"text": " f(x) \\sim \\frac{1}{|x|^{1 + \\alpha}} \\left (c^\\alpha (1 + \\sgn(x) \\beta) \\sin \\left (\\frac{\\pi \\alpha}{2} \\right ) \\frac{\\Gamma(\\alpha + 1) }{\\pi} \\right ) "
},
{
"math_id": 30,
"text": "\\alpha \\geq 1"
},
{
"math_id": 31,
"text": "\\beta = \\pm 1"
},
{
"math_id": 32,
"text": "\\alpha <2"
},
{
"math_id": 33,
"text": "L_\\alpha(x) = f\\left(x;\\alpha,1,\\cos\\left(\\frac{\\alpha\\pi}{2}\\right)^{1/\\alpha},0\\right)"
},
{
"math_id": 34,
"text": "\\alpha < 1."
},
{
"math_id": 35,
"text": "q = \\exp(-i\\alpha\\pi/2)"
},
{
"math_id": 36,
"text": " \\varphi(t;\\alpha) = \\exp\\left (- q|t|^\\alpha \\right ) "
},
{
"math_id": 37,
"text": "\\operatorname{Im}(q)<0"
},
{
"math_id": 38,
"text": " \\begin{align}\nL_\\alpha(x)\n& = \\frac{1}{\\pi}\\Re\\left[ \\int_{-\\infty}^\\infty e^{itx}e^{-q|t|^\\alpha}\\,dt\\right]\n\\\\ & = \\frac{2}{\\pi} \\int_0^\\infty e^{-\\operatorname{Re}(q)\\,t^\\alpha}\n \\sin(tx)\\sin(-\\operatorname{Im}(q)\\,t^\\alpha) \\,dt, \\text{ or }\n\\\\ & = \\frac{2}{\\pi} \\int_0^\\infty e^{-\\text{Re}(q)\\,t^\\alpha}\n \\cos(tx)\\cos(\\operatorname{Im}(q)\\,t^\\alpha) \\,dt .\n\\end{align}"
},
{
"math_id": 39,
"text": " x"
},
{
"math_id": 40,
"text": "Y = \\sum_{i=1}^N X_i"
},
{
"math_id": 41,
"text": "X_i \\sim L_\\alpha(x)"
},
{
"math_id": 42,
"text": "\\frac{1}{\\nu} L_\\alpha \\left(\\frac{x}{\\nu}\\right)"
},
{
"math_id": 43,
"text": "\\nu = N^{1/\\alpha}"
},
{
"math_id": 44,
"text": "x = 1"
},
{
"math_id": 45,
"text": "\\mathfrak{N}_\\alpha(\\nu)=\\frac \\alpha {\\Gamma\\left(\\frac{1}{\\alpha}\\right)} \\frac1\\nu L_\\alpha \\left(\\frac{1}{\\nu} \\right), \\text{ where } \\nu > 0 \\text{ and } \\alpha < 1."
},
{
"math_id": 46,
"text": "\\mathfrak{N}_\\alpha(\\nu;\\nu_0,\\theta) = \\frac \\alpha {\\Gamma(\\frac{1}{\\alpha})} \\frac{1}{\\nu-\\nu_0} L_\\alpha \\left(\\frac{\\theta}{\\nu-\\nu_0}\\right), \\text{ where } \\nu > \\nu_0"
},
{
"math_id": 47,
"text": "\\theta > 0, \\text{ and } \\alpha < 1."
},
{
"math_id": 48,
"text": "[\\nu_0,\\infty)"
},
{
"math_id": 49,
"text": "\\nu_0"
},
{
"math_id": 50,
"text": "\\theta"
},
{
"math_id": 51,
"text": "\\alpha = \\frac{1}{2}"
},
{
"math_id": 52,
"text": "L_{\\frac{1}{2}}(x)"
},
{
"math_id": 53,
"text": "\\mathfrak{N}_{\\frac{1}{2}}(\\nu; \\nu_0, \\theta)"
},
{
"math_id": 54,
"text": "4\\theta"
},
{
"math_id": 55,
"text": "\\mathfrak{N}_{\\frac{1}{2}}(\\nu;\\nu_0,\\theta) = \\frac{1}{4\\sqrt{\\pi}\\theta^{3/2}}\n(\\nu-\\nu_0)^{1/2} e^{-\\frac{\\nu-\\nu_0}{4\\theta}}, \\text{ where } \\nu > \\nu_0, \\qquad \\theta > 0."
},
{
"math_id": 56,
"text": "\\nu_0 + 6\\theta"
},
{
"math_id": 57,
"text": "\\sqrt{24}\\theta"
},
{
"math_id": 58,
"text": "\\mathfrak{N}_{\\frac{1}{2}}(\\nu;\\nu_0,\\theta)"
},
{
"math_id": 59,
"text": "\\nu_0 = 10.4"
},
{
"math_id": 60,
"text": "\\theta = 1.6"
},
{
"math_id": 61,
"text": "\\int_0^\\infty e^{-z x} L_\\alpha(x) dx = e^{-z^\\alpha}, \\text{ where } > \\alpha<1. "
},
{
"math_id": 62,
"text": "x = 1 / \\nu"
},
{
"math_id": 63,
"text": "\\int_0^\\infty \\frac{1}{\\nu} \\left ( \\frac{1}{2} e^{-\\frac{|z|}{\\nu} }\\right )\n\\left (\\frac{\\alpha}{\\Gamma(\\frac{1}{\\alpha})} \\frac{1}{\\nu} L_\\alpha \\left(\\frac{1}{\\nu}\\right) \\right ) \\, d\\nu\n= \\frac{1}{2} \\frac{\\alpha}{\\Gamma(\\frac{1}{\\alpha})} e^{-|z|^\\alpha}, \\text{ where } \\alpha<1. "
},
{
"math_id": 64,
"text": "\\mathfrak{N}_\\alpha(\\nu)"
},
{
"math_id": 65,
"text": "-(n + 1)"
},
{
"math_id": 66,
"text": "L_\\alpha(x)"
},
{
"math_id": 67,
"text": "\\exp\\left (it\\mu_1+it\\mu_2 - |c_1 t|^\\alpha - |c_2 t|^\\alpha +i\\beta_1|c_1 t|^\\alpha\\sgn(t)\\Phi + i\\beta_2|c_2 t|^\\alpha\\sgn(t)\\Phi \\right )"
},
{
"math_id": 68,
"text": "\\begin{align}\n\\mu &=\\mu_1+\\mu_2 \\\\\nc &= \\left (c_1^\\alpha+c_2^\\alpha \\right )^{\\frac{1}{\\alpha}} \\\\[6pt]\n\\beta &= \\frac{\\beta_1 c_1^\\alpha+\\beta_2c_2^\\alpha}{c_1^\\alpha+c_2^\\alpha}\n\\end{align}"
},
{
"math_id": 69,
"text": "\\alpha = 1"
},
{
"math_id": 70,
"text": "\\alpha = 1/2"
},
{
"math_id": 71,
"text": "\\alpha = 3/2"
},
{
"math_id": 72,
"text": "f(x;\\alpha,\\beta,c,\\mu)=\\frac{1}{\\pi}\\Re\\left[ \\int_0^\\infty e^{it(x-\\mu)}e^{-(ct)^\\alpha(1-i\\beta\\Phi)}\\,dt\\right]."
},
{
"math_id": 73,
"text": "f(x;\\alpha,\\beta,c,\\mu)=\\frac{1}{\\pi}\\Re\\left[ \\int_0^\\infty e^{it(x-\\mu)}\\sum_{n=0}^\\infty\\frac{(-qt^\\alpha)^n}{n!}\\,dt\\right]"
},
{
"math_id": 74,
"text": "q=c^\\alpha(1-i\\beta\\Phi)"
},
{
"math_id": 75,
"text": "f(x;\\alpha,\\beta,c,\\mu)=\\frac{1}{\\pi}\\Re\\left[ \\sum_{n=1}^\\infty\\frac{(-q)^n}{n!}\\left(\\frac{i}{x-\\mu}\\right)^{\\alpha n+1}\\Gamma(\\alpha n+1)\\right]"
},
{
"math_id": 76,
"text": "q=\\exp(-i\\alpha\\pi/2)"
},
{
"math_id": 77,
"text": "q i^{\\alpha}=1"
},
{
"math_id": 78,
"text": "\\begin{align}\nL_\\alpha(x) & =\n\\frac{1}{\\pi}\\Re\\left[ \\sum_{n=1}^\\infty\\frac{(-q)^n}{n!}\\left(\\frac{-i}{x}\\right)^{\\alpha n+1}\\Gamma(\\alpha n+1)\\right]\n\\\\ & =\n\\frac{1}{\\pi}\\sum_{n=1}^\\infty\\frac{-\\sin(n(\\alpha+1)\\pi)}{n!}\\left(\\frac{1}{x}\\right)^{\\alpha n+1}\\Gamma(\\alpha n+1)\n\\end{align} "
},
{
"math_id": 79,
"text": "0.5 < \\alpha \\leq 2"
},
{
"math_id": 80,
"text": "F^{-1}(x)"
},
{
"math_id": 81,
"text": "F(x)"
},
{
"math_id": 82,
"text": "U"
},
{
"math_id": 83,
"text": "\\left (-\\tfrac{\\pi}{2},\\tfrac{\\pi}{2} \\right )"
},
{
"math_id": 84,
"text": "W"
},
{
"math_id": 85,
"text": "\\alpha\\ne 1"
},
{
"math_id": 86,
"text": "X = \\left (1+\\zeta^2 \\right )^\\frac{1}{2\\alpha} \\frac{\\sin ( \\alpha(U+\\xi)) }{ (\\cos(U))^{\\frac{1}{\\alpha}}} \\left (\\frac{\\cos (U - \\alpha(U+\\xi)) }{W} \\right )^\\frac{1-\\alpha}{\\alpha},"
},
{
"math_id": 87,
"text": "X = \\frac{1}{\\xi}\\left\\{\\left(\\frac{\\pi}{2}+\\beta U \\right)\\tan U- \\beta\\log\\left(\\frac{\\frac{\\pi}{2} W\\cos U}{\\frac{\\pi}{2}+\\beta U}\\right)\\right\\},"
},
{
"math_id": 88,
"text": "\\zeta = -\\beta\\tan\\frac{\\pi\\alpha}{2}, \\qquad \\xi =\\begin{cases}\n\\frac{1}{\\alpha} \\arctan(-\\zeta) & \\alpha \\ne 1 \\\\\n\\frac{\\pi}{2} & \\alpha=1\n\\end{cases}"
},
{
"math_id": 89,
"text": "X\\sim S_\\alpha(\\beta,1,0)"
},
{
"math_id": 90,
"text": "c"
},
{
"math_id": 91,
"text": "\\mu"
},
{
"math_id": 92,
"text": "X \\sim S_\\alpha(\\beta,1,0)"
},
{
"math_id": 93,
"text": "Y = \\begin{cases}\nc X+\\mu & \\alpha \\ne 1 \\\\\nc X+\\frac{2}{\\pi}\\beta c\\log c + \\mu & \\alpha = 1\n\\end{cases}"
},
{
"math_id": 94,
"text": "S_\\alpha(\\beta,c,\\mu)"
},
{
"math_id": 95,
"text": "f(x;\\alpha,\\beta,c,\\mu)"
},
{
"math_id": 96,
"text": "f(x;1,0,1,0)."
},
{
"math_id": 97,
"text": "f(x;\\tfrac{1}{2},1,1,0)."
},
{
"math_id": 98,
"text": "f(x;2,0,1,0)."
},
{
"math_id": 99,
"text": "S_{\\mu,\\nu}(z)"
},
{
"math_id": 100,
"text": " f \\left (x;\\tfrac{1}{3},0,1,0\\right ) = \\Re\\left ( \\frac{2e^{- \\frac{i \\pi}{4}}}{3 \\sqrt{3} \\pi} \\frac{1}{\\sqrt{x^3}} S_{0,\\frac{1}{3}} \\left (\\frac{2e^{\\frac{i \\pi}{4}}}{3 \\sqrt{3}} \\frac{1}{\\sqrt{x}} \\right) \\right )"
},
{
"math_id": 101,
"text": "S(x)"
},
{
"math_id": 102,
"text": "C(x)"
},
{
"math_id": 103,
"text": "f\\left (x;\\tfrac{1}{2},0,1,0\\right ) = \\frac{1}{{\\sqrt{2\\pi|x|^3}}}\\left (\\sin\\left(\\tfrac{1}{4|x|}\\right) \\left [\\frac{1}{2} - S\\left (\\tfrac{1}{\\sqrt{2\\pi|x|}}\\right )\\right ]+\\cos\\left(\\tfrac{1}{4|x|} \\right) \\left [\\frac{1}{2}-C\\left (\\tfrac{1}{\\sqrt{2\\pi|x|}}\\right )\\right ]\\right )"
},
{
"math_id": 104,
"text": "K_v(x)"
},
{
"math_id": 105,
"text": "f\\left (x;\\tfrac{1}{3},1,1,0\\right ) = \\frac{1}{\\pi} \\frac{2\\sqrt{2}}{3^{\\frac{7}{4}}} \\frac{1}{\\sqrt{x^3}} K_{\\frac{1}{3}}\\left (\\frac{4\\sqrt{2}}{3^{\\frac{9}{4}}} \\frac{1}{\\sqrt{x}} \\right )"
},
{
"math_id": 106,
"text": "{}_mF_n"
},
{
"math_id": 107,
"text": "\\begin{align}\n f\\left (x;\\tfrac{4}{3},0,1,0\\right ) &= \\frac{3^{\\frac{5}{4}}}{4 \\sqrt{2 \\pi}} \\frac{\\Gamma \\left (\\tfrac{7}{12} \\right ) \\Gamma \\left (\\tfrac{11}{12} \\right )}{\\Gamma\\left (\\tfrac{6}{12} \\right ) \\Gamma \\left (\\tfrac{8}{12} \\right )} {}_2F_2 \\left ( \\tfrac{7}{12}, \\tfrac{11}{12}; \\tfrac{6}{12}, \\tfrac{8}{12}; \\tfrac{3^3 x^4}{4^4} \\right ) - \\frac{3^{\\frac{11}{4}}x^3}{4^3 \\sqrt{2 \\pi}} \\frac{\\Gamma \\left (\\tfrac{13}{12} \\right ) \\Gamma \\left (\\tfrac{17}{12} \\right )}{\\Gamma \\left (\\tfrac{18}{12} \\right ) \\Gamma \\left (\\tfrac{15}{12} \\right )} {}_2F_2 \\left ( \\tfrac{13}{12}, \\tfrac{17}{12}; \\tfrac{18}{12}, \\tfrac{15}{12}; \\tfrac{3^3 x^4}{4^4} \\right ) \\\\[6pt]\nf\\left (x;\\tfrac{3}{2},0,1,0\\right ) &= \\frac{\\Gamma \\left(\\tfrac{5}{3} \\right)}{\\pi} {}_2F_3 \\left ( \\tfrac{5}{12}, \\tfrac{11}{12}; \\tfrac{1}{3}, \\tfrac{1}{2}, \\tfrac{5}{6}; - \\tfrac{2^2 x^6}{3^6} \\right )\n- \\frac{x^2}{3 \\pi} {}_3F_4 \\left ( \\tfrac{3}{4}, 1, \\tfrac{5}{4}; \\tfrac{2}{3}, \\tfrac{5}{6}, \\tfrac{7}{6}, \\tfrac{4}{3}; - \\tfrac{2^2 x^6}{3^6} \\right ) + \\frac{7 x^4\\Gamma \\left(\\tfrac{4}{3} \\right)}{3^4 \\pi ^ 2} {}_2F_3 \\left ( \\tfrac{13}{12}, \\tfrac{19}{12}; \\tfrac{7}{6}, \\tfrac{3}{2}, \\tfrac{5}{3}; -\\tfrac{2^2 x^6}{3^6} \\right)\n\\end{align}"
},
{
"math_id": 108,
"text": "W_{k,\\mu}(z)"
},
{
"math_id": 109,
"text": "\\begin{align}\nf\\left (x;\\tfrac{2}{3},0,1,0\\right ) &= \\frac{\\sqrt{3}}{6\\sqrt{\\pi}|x|} \\exp\\left (\\tfrac{2}{27}x^{-2}\\right ) W_{-\\frac{1}{2},\\frac{1}{6}}\\left (\\tfrac{4}{27}x^{-2}\\right ) \\\\[8pt]\nf\\left (x;\\tfrac{2}{3},1,1,0\\right ) &= \\frac{\\sqrt{3}}{\\sqrt{\\pi}|x|} \\exp\\left (-\\tfrac{16}{27}x^{-2}\\right ) W_{\\frac{1}{2},\\frac{1}{6}} \\left (\\tfrac{32}{27}x^{-2}\\right ) \\\\[8pt]\nf\\left (x;\\tfrac{3}{2},1,1,0\\right ) &= \\begin{cases} \\frac{\\sqrt{3}}{\\sqrt{\\pi}|x|} \\exp\\left (\\frac{1}{27}x^3\\right ) W_{\\frac{1}{2},\\frac{1}{6}}\\left (- \\frac{2}{27}x^3\\right ) & x<0\\\\ {} \\\\ \\frac{\\sqrt{3}}{6\\sqrt{\\pi}|x|} \\exp\\left (\\frac{1}{27}x^3\\right ) W_{-\\frac{1}{2},\\frac{1}{6}}\\left (\\frac{2}{27}x^3\\right ) & x \\geq 0 \\end{cases}\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=1117869 |
11180 | Functional analysis | Area of mathematics
Functional analysis is a branch of mathematical analysis, the core of which is formed by the study of vector spaces endowed with some kind of limit-related structure (for example, inner product, norm, or topology) and the linear functions defined on these spaces and suitably respecting these structures. The historical roots of functional analysis lie in the study of spaces of functions and the formulation of properties of transformations of functions such as the Fourier transform as transformations defining, for example, continuous or unitary operators between function spaces. This point of view turned out to be particularly useful for the study of differential and integral equations.
The usage of the word "functional" as a noun goes back to the calculus of variations, implying a function whose argument is a function. The term was first used in Hadamard's 1910 book on that subject. However, the general concept of a functional had previously been introduced in 1887 by the Italian mathematician and physicist Vito Volterra. The theory of nonlinear functionals was continued by students of Hadamard, in particular Fréchet and Lévy. Hadamard also founded the modern school of linear functional analysis further developed by Riesz and the group of Polish mathematicians around Stefan Banach.
In modern introductory texts on functional analysis, the subject is seen as the study of vector spaces endowed with a topology, in particular infinite-dimensional spaces. In contrast, linear algebra deals mostly with finite-dimensional spaces, and does not use topology. An important part of functional analysis is the extension of the theories of measure, integration, and probability to infinite dimensional spaces, also known as infinite dimensional analysis.
Normed vector spaces.
The basic and historically first class of spaces studied in functional analysis are complete normed vector spaces over the real or complex numbers. Such spaces are called Banach spaces. An important example is a Hilbert space, where the norm arises from an inner product. These spaces are of fundamental importance in many areas, including the mathematical formulation of quantum mechanics, machine learning, partial differential equations, and Fourier analysis.
More generally, functional analysis includes the study of Fréchet spaces and other topological vector spaces not endowed with a norm.
An important object of study in functional analysis are the continuous linear operators defined on Banach and Hilbert spaces. These lead naturally to the definition of C*-algebras and other operator algebras.
Hilbert spaces.
Hilbert spaces can be completely classified: there is a unique Hilbert space up to isomorphism for every cardinality of the orthonormal basis. Finite-dimensional Hilbert spaces are fully understood in linear algebra, and infinite-dimensional separable Hilbert spaces are isomorphic to formula_0. Since separability is important for applications, functional analysis of Hilbert spaces mostly deals with this space. One of the open problems in functional analysis is to prove that every bounded linear operator on a Hilbert space has a proper invariant subspace. Many special cases of this invariant subspace problem have already been proven.
Banach spaces.
General Banach spaces are more complicated than Hilbert spaces and cannot be classified in such a simple manner. In particular, many Banach spaces lack a notion analogous to an orthonormal basis.
Examples of Banach spaces are formula_1-spaces for any real number formula_2. Given also a measure formula_3 on set formula_4, then formula_5, sometimes also denoted formula_6 or formula_7, has as its vectors equivalence classes formula_8 of measurable functions whose absolute value's formula_9-th power has finite integral; that is, functions formula_10 for which one has
formula_11
If formula_3 is the counting measure, then the integral may be replaced by a sum. That is, we require
formula_12
Then it is not necessary to deal with equivalence classes, and the space is denoted formula_13, written more simply formula_14 in the case when formula_4 is the set of non-negative integers.
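As a small concrete illustration (an aside added here, not part of the original text), the sequence 1/n lies in formula_14 for p = 2 but not for p = 1, which can be seen from the behaviour of the partial sums of Σ|1/n|^p. The snippet below assumes NumPy:

```python
import numpy as np

# Partial sums of sum_n |x_n|^p for x_n = 1/n.  For p = 2 the sums approach
# pi^2/6, so (1/n) lies in ell^2; for p = 1 they grow without bound, so the
# same sequence is not in ell^1.
n = np.arange(1, 1_000_001)
x = 1.0 / n

for p in (1, 2):
    print(p, np.sum(np.abs(x) ** p))
# p = 1 prints roughly ln(10^6) + 0.577 ~ 14.4 and keeps growing as more
# terms are added; p = 2 prints ~ 1.6449, close to pi^2/6.
```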
In Banach spaces, a large part of the study involves the dual space: the space of all continuous linear maps from the space into its underlying field, so-called functionals. A Banach space can be canonically identified with a subspace of its bidual, which is the dual of its dual space. The corresponding map is an isometry but in general not onto. A general Banach space and its bidual need not even be isometrically isomorphic in any way, contrary to the finite-dimensional situation. This is explained in the dual space article.
Also, the notion of derivative can be extended to arbitrary functions between Banach spaces. See, for instance, the Fréchet derivative article.
Major and foundational results.
There are four major theorems which are sometimes called the four pillars of functional analysis: the Hahn–Banach theorem, the open mapping theorem, the closed graph theorem and the uniform boundedness principle, each of which is discussed below.
Important results of functional analysis include:
Uniform boundedness principle.
The uniform boundedness principle or Banach–Steinhaus theorem is one of the fundamental results in functional analysis. Together with the Hahn–Banach theorem and the open mapping theorem, it is considered one of the cornerstones of the field. In its basic form, it asserts that for a family of continuous linear operators (and thus bounded operators) whose domain is a Banach space, pointwise boundedness is equivalent to uniform boundedness in operator norm.
The theorem was first published in 1927 by Stefan Banach and Hugo Steinhaus but it was also proven independently by Hans Hahn.
<templatestyles src="Math_theorem/styles.css" />
Theorem (Uniform Boundedness Principle) — Let formula_4 be a Banach space and formula_15 be a normed vector space. Suppose that formula_16 is a collection of continuous linear operators from formula_4 to formula_15. If for all formula_17 in formula_4 one has
formula_18
then
formula_19
Spectral theorem.
There are many theorems known as the spectral theorem, but one in particular has many applications in functional analysis.
<templatestyles src="Math_theorem/styles.css" />
Spectral theorem — Let formula_20 be a bounded self-adjoint operator on a Hilbert space formula_21. Then there is a measure space formula_22 and a real-valued essentially bounded measurable function formula_10 on formula_4 and a unitary operator formula_23 such that
formula_24
where "T" is the multiplication operator:
formula_25
and formula_26.
This is the beginning of the vast research area of functional analysis called operator theory; see also the spectral measure.
There is also an analogous spectral theorem for bounded normal operators on Hilbert spaces. The only difference in the conclusion is that now formula_10 may be complex-valued.
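A finite-dimensional analogue may make the statement more concrete (an illustration added here, not part of the theorem itself): for a real symmetric matrix, the eigendecomposition provides the unitary change of basis under which the operator acts as multiplication by its eigenvalues. The snippet below assumes NumPy:

```python
import numpy as np

# Finite-dimensional analogue: for a real symmetric matrix A, the spectral
# theorem gives an orthogonal U and a real vector f with U.T @ A @ U = diag(f),
# i.e. A is unitarily equivalent to multiplication by f.
rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                      # a self-adjoint (symmetric) operator on R^4

f, U = np.linalg.eigh(A)               # eigenvalues f and orthonormal eigenvectors U
print(np.allclose(U.T @ A @ U, np.diag(f)))   # True: U* A U is multiplication by f
print(np.allclose(U @ np.diag(f) @ U.T, A))   # True: A is reconstructed
```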
Hahn–Banach theorem.
The Hahn–Banach theorem is a central tool in functional analysis. It allows the extension of bounded linear functionals defined on a subspace of some vector space to the whole space, and it also shows that there are "enough" continuous linear functionals defined on every normed vector space to make the study of the dual space "interesting".
<templatestyles src="Math_theorem/styles.css" />
Hahn–Banach theorem — If formula_27 is a sublinear function, and formula_28 is a linear functional on a linear subspace formula_29 which is dominated by formula_9 on formula_30; that is,
formula_31
then there exists a linear extension formula_32 of formula_33 to the whole space formula_34 which is dominated by formula_9 on formula_34; that is, there exists a linear functional formula_35 such that
formula_36
Open mapping theorem.
The open mapping theorem, also known as the Banach–Schauder theorem (named after Stefan Banach and Juliusz Schauder), is a fundamental result which states that if a continuous linear operator between Banach spaces is surjective then it is an open map. More precisely,
<templatestyles src="Math_theorem/styles.css" />
Open mapping theorem — If formula_4 and formula_15 are Banach spaces and formula_37 is a surjective continuous linear operator, then formula_20 is an open map (that is, if formula_30 is an open set in formula_4, then formula_38 is open in formula_15).
The proof uses the Baire category theorem, and completeness of both formula_4 and formula_15 is essential to the theorem. The statement of the theorem is no longer true if either space is just assumed to be a normed space, but is true if formula_4 and formula_15 are taken to be Fréchet spaces.
Closed graph theorem.
<templatestyles src="Math_theorem/styles.css" />
Closed graph theorem — If formula_4 and formula_15 are Banach spaces and formula_39 is a linear map from formula_4 to formula_15, then the graph of formula_39 is closed if and only if formula_39 is continuous.
Foundations of mathematics considerations.
Most spaces considered in functional analysis have infinite dimension. To show the existence of a vector space basis for such spaces may require Zorn's lemma. However, a somewhat different concept, the Schauder basis, is usually more relevant in functional analysis. Many theorems require the Hahn–Banach theorem, usually proved using the axiom of choice, although the strictly weaker Boolean prime ideal theorem suffices. The Baire category theorem, needed to prove many important theorems, also requires a form of axiom of choice.
Points of view.
Functional analysis includes the following tendencies:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\ell^{\\,2}(\\aleph_0)\\,"
},
{
"math_id": 1,
"text": "L^p"
},
{
"math_id": 2,
"text": "p\\geq1"
},
{
"math_id": 3,
"text": "\\mu"
},
{
"math_id": 4,
"text": "X"
},
{
"math_id": 5,
"text": "L^p(X)"
},
{
"math_id": 6,
"text": "L^p(X,\\mu)"
},
{
"math_id": 7,
"text": "L^p(\\mu)"
},
{
"math_id": 8,
"text": "[\\,f\\,]"
},
{
"math_id": 9,
"text": "p"
},
{
"math_id": 10,
"text": "f"
},
{
"math_id": 11,
"text": "\\int_{X}\\left|f(x)\\right|^p\\,d\\mu(x) < \\infty."
},
{
"math_id": 12,
"text": "\\sum_{x\\in X}\\left|f(x)\\right|^p < \\infty ."
},
{
"math_id": 13,
"text": "\\ell^p(X)"
},
{
"math_id": 14,
"text": "\\ell^p"
},
{
"math_id": 15,
"text": "Y"
},
{
"math_id": 16,
"text": "F"
},
{
"math_id": 17,
"text": "x"
},
{
"math_id": 18,
"text": "\\sup\\nolimits_{T \\in F} \\|T(x)\\|_Y < \\infty, "
},
{
"math_id": 19,
"text": "\\sup\\nolimits_{T \\in F} \\|T\\|_{B(X,Y)} < \\infty."
},
{
"math_id": 20,
"text": "A"
},
{
"math_id": 21,
"text": "H"
},
{
"math_id": 22,
"text": "(X,\\Sigma,\\mu)"
},
{
"math_id": 23,
"text": "U:H\\to L^2_\\mu(X)"
},
{
"math_id": 24,
"text": " U^* T U = A "
},
{
"math_id": 25,
"text": " [T \\varphi](x) = f(x) \\varphi(x). "
},
{
"math_id": 26,
"text": "\\|T\\| = \\|f\\|_\\infty"
},
{
"math_id": 27,
"text": "p:V\\to\\mathbb{R}"
},
{
"math_id": 28,
"text": "\\varphi:U\\to\\mathbb{R}"
},
{
"math_id": 29,
"text": "U\\subseteq V"
},
{
"math_id": 30,
"text": "U"
},
{
"math_id": 31,
"text": "\\varphi(x) \\leq p(x)\\qquad\\forall x \\in U"
},
{
"math_id": 32,
"text": "\\psi:V\\to\\mathbb{R}"
},
{
"math_id": 33,
"text": "\\varphi"
},
{
"math_id": 34,
"text": "V"
},
{
"math_id": 35,
"text": "\\psi"
},
{
"math_id": 36,
"text": "\\begin{align}\n\\psi(x) &= \\varphi(x) &\\forall x\\in U, \\\\\n\\psi(x) &\\le p(x) &\\forall x\\in V.\n\\end{align}"
},
{
"math_id": 37,
"text": "A:X\\to Y"
},
{
"math_id": 38,
"text": "A(U)"
},
{
"math_id": 39,
"text": "T"
}
] | https://en.wikipedia.org/wiki?curid=11180 |
1118171 | Flatness problem | Cosmological fine-tuning problem
The flatness problem (also known as the oldness problem) is a cosmological fine-tuning problem within the Big Bang model of the universe. Such problems arise from the observation that some of the initial conditions of the universe appear to be fine-tuned to very 'special' values, and that small deviations from these values would have extreme effects on the appearance of the universe at the current time.
In the case of the flatness problem, the parameter which appears fine-tuned is the density of matter and energy in the universe. This value affects the curvature of space-time, with a very specific critical value being required for a flat universe. The current density of the universe is observed to be very close to this critical value. Since any departure of the total density from the critical value would increase rapidly over cosmic time, the early universe must have had a density even closer to the critical density, departing from it by one part in 10⁶² or less. This leads cosmologists to question how the initial density came to be so closely fine-tuned to this 'special' value.
The problem was first mentioned by Robert Dicke in 1969. The most commonly accepted solution among cosmologists is cosmic inflation, the idea that the universe went through a brief period of extremely rapid expansion in the first fraction of a second after the Big Bang; along with the monopole problem and the horizon problem, the flatness problem is one of the three primary motivations for inflationary theory.
Energy density and the Friedmann equation.
According to Einstein's field equations of general relativity, the structure of spacetime is affected by the presence of matter and energy. On small scales space appears flat – as does the surface of the Earth if one looks at a small area. On large scales however, space is bent by the gravitational effect of matter. Since relativity indicates that matter and energy are equivalent, this effect is also produced by the presence of energy (such as light and other electromagnetic radiation) in addition to matter. The amount of bending (or curvature) of the universe depends on the density of matter/energy present.
This relationship can be expressed by the first Friedmann equation. In a universe without a cosmological constant, this is:
formula_0
Here formula_1 is the Hubble parameter, a measure of the rate at which the universe is expanding. formula_2 is the total density of mass and energy in the universe, formula_3 is the scale factor (essentially the 'size' of the universe), and formula_4 is the curvature parameter — that is, a measure of how curved spacetime is. A positive, zero or negative value of formula_4 corresponds respectively to a closed, flat or open universe. The constants formula_5 and formula_6 are Newton's gravitational constant and the speed of light, respectively.
Cosmologists often simplify this equation by defining a critical density, formula_7. For a given value of formula_1, this is defined as the density required for a flat universe, i.e. formula_8. Thus the above equation implies
formula_9.
Since the constant formula_5 is known and the expansion rate formula_1 can be measured by observing the speed at which distant galaxies are receding from us,
formula_7 can be determined. Its value is currently around 10⁻²⁶ kg m⁻³. The ratio of the actual density to this critical value is called Ω, and its difference from 1 determines the geometry of the universe: Ω > 1 corresponds to a greater-than-critical density, formula_10, and hence a closed universe. Ω < 1 gives a low-density open universe, and Ω equal to exactly 1 gives a flat universe.
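A rough order-of-magnitude check of that figure (an illustrative calculation with an assumed Hubble parameter of about 70 km/s/Mpc, not a value taken from the text above) is straightforward:

```python
import math

# Critical density rho_c = 3 H^2 / (8 pi G) for H0 ~ 70 km/s/Mpc.
G = 6.674e-11                # gravitational constant, m^3 kg^-1 s^-2
Mpc = 3.0857e22              # one megaparsec in metres
H0 = 70e3 / Mpc              # 70 km/s/Mpc expressed in s^-1

rho_c = 3 * H0 ** 2 / (8 * math.pi * G)
print(rho_c)                 # ~ 9e-27 kg m^-3, i.e. around 10^-26 as quoted above
```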
The Friedmann equation,
formula_11
can be re-arranged into
formula_12
which after factoring formula_13, and using formula_14, leads to
formula_15
The right hand side of the last expression above contains constants only and therefore the left hand side must remain constant throughout the evolution of the universe.
As the universe expands the scale factor formula_3 increases, but the density formula_2 decreases as matter (or energy) becomes spread out. For the standard model of the universe which contains mainly matter and radiation for most of its history, formula_2 decreases more quickly than formula_16 increases, and so the factor formula_13 will decrease. Since the time of the Planck era, shortly after the Big Bang, this term has decreased by a factor of around formula_17 and so formula_18 must have increased by a similar amount to retain the constant value of their product.
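The scaling behind that factor can be sketched as follows (a toy illustration with assumed expansion factors, not precise cosmological numbers): since the product above is constant, |Ω⁻¹ − 1| grows like a² while radiation dominates (ρ ∝ a⁻⁴) and like a while matter dominates (ρ ∝ a⁻³).

```python
# With (1/Omega - 1) * rho * a^2 constant and rho ~ a**(-exponent),
# rho * a^2 scales as a**(2 - exponent), so the deviation scales inversely.
def deviation_growth(a_ratio, exponent):
    """Factor by which |1/Omega - 1| grows when a grows by a_ratio."""
    return a_ratio ** (exponent - 2)

print(deviation_growth(1e10, 4))   # radiation-dominated: grows ~ a^2 -> 1e20
print(deviation_growth(1e4, 3))    # matter-dominated:    grows ~ a   -> 1e4
```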
Current value of Ω.
Measurement.
The value of Ω at the present time is denoted Ω0. This value can be deduced by measuring the curvature of spacetime (since Ω = 1, i.e. formula_19, corresponds to the density for which the curvature "k" = 0). The curvature can be inferred from a number of observations.
One such observation is that of anisotropies (that is, variations with direction - see below) in the Cosmic Microwave Background (CMB) radiation. The CMB is electromagnetic radiation which fills the universe, left over from an early stage in its history when it was filled with photons and a hot, dense plasma. This plasma cooled as the universe expanded, and when it cooled enough to form stable atoms it no longer absorbed the photons. The photons present at that stage have been propagating ever since, growing fainter and less energetic as they spread through the ever-expanding universe.
The temperature of this radiation is almost the same at all points on the sky, but there is a slight variation (around one part in 100,000) between the temperature received from different directions. The angular scale of these fluctuations - the typical angle between a hot patch and a cold patch on the sky - depends on the curvature of the universe which in turn depends on its density as described above. Thus, measurements of this angular scale allow an estimation of Ω0.
Another probe of Ω0 is the frequency of Type-Ia supernovae at different distances from Earth. These supernovae, the explosions of degenerate white dwarf stars, are a type of standard candle; this means that the processes governing their intrinsic brightness are well understood so that a measure of "apparent" brightness when seen from Earth can be used to derive accurate distance measures for them (the apparent brightness decreasing in proportion to the square of the distance - see luminosity distance). Comparing this distance to the redshift of the supernovae gives a measure of the rate at which the universe has been expanding at different points in history. Since the expansion rate evolves differently over time in cosmologies with different total densities, Ω0 can be inferred from the supernovae data.
Data from the Wilkinson Microwave Anisotropy Probe (WMAP, measuring CMB anisotropies) combined with that from the Sloan Digital Sky Survey and observations of type-Ia supernovae constrain Ω0 to be 1 within 1%. In other words, the term |Ω − 1| is currently less than 0.01, and therefore must have been less than 10⁻⁶² at the Planck era. The cosmological parameters measured by the Planck spacecraft mission reaffirmed the previous results from WMAP.
Implication.
This tiny value is the crux of the flatness problem. If the initial density of the universe could take any value, it would seem extremely surprising to find it so 'finely tuned' to the critical value formula_7. Indeed, a very small departure of Ω from 1 in the early universe would have been magnified during billions of years of expansion to create a current density very far from critical. In the case of an overdensity (formula_10) this would lead to a universe so dense it would cease expanding and collapse into a Big Crunch (an opposite to the Big Bang in which all matter and energy falls back into an extremely dense state) in a few years or less; in the case of an underdensity (formula_20) it would expand so quickly and become so sparse it would soon seem essentially empty, and gravity would not be strong enough by comparison to cause matter to collapse and form galaxies resulting in a big freeze. In either case the universe would contain no complex structures such as galaxies, stars, planets and any form of life.
This problem with the Big Bang model was first pointed out by Robert Dicke in 1969, and it motivated a search for some reason the density should take such a specific value.
Solutions to the problem.
Some cosmologists agreed with Dicke that the flatness problem was a serious one, in need of a fundamental reason for the closeness of the density to criticality. But there was also a school of thought which denied that there was a problem to solve, arguing instead that since the universe must have some density it may as well have one close to formula_21 as far from it, and that speculating on a reason for any particular value was "beyond the domain of science". That, however, is a minority viewpoint, even among those sceptical of the existence of the flatness problem. Several cosmologists have argued that, for a variety of reasons, the flatness problem is based on a misunderstanding.
Anthropic principle.
One solution to the problem is to invoke the anthropic principle, which states that humans should take into account the conditions necessary for them to exist when speculating about causes of the universe's properties. If two types of universe seem equally likely but only one is suitable for the evolution of intelligent life, the anthropic principle suggests that finding ourselves in that universe is no surprise: if the other universe had existed instead, there would be no observers to notice the fact.
The principle can be applied to solve the flatness problem in two somewhat different ways. The first (an application of the 'strong anthropic principle') was suggested by C. B. Collins and Stephen Hawking, who in 1973 considered the existence of an infinite number of universes such that every possible combination of initial properties was held by some universe. In such a situation, they argued, only those universes with exactly the correct density for forming galaxies and stars would give rise to intelligent observers such as humans: therefore, the fact that we observe Ω to be so close to 1 would be "simply a reflection of our own existence."
An alternative approach, which makes use of the 'weak anthropic principle', is to suppose that the universe is infinite in size, but with the density varying in different places (i.e. an inhomogeneous universe). Thus some regions will be over-dense (Ω > 1) and some under-dense (Ω < 1). These regions may be extremely far apart - perhaps so far that light has not had time to travel from one to another during the age of the universe (that is, they lie outside one another's cosmological horizons). Therefore, each region would behave essentially as a separate universe: if we happened to live in a large patch of almost-critical density we would have no way of knowing of the existence of far-off under- or over-dense patches since no light or other signal has reached us from them. An appeal to the anthropic principle can then be made, arguing that intelligent life would only arise in those patches with Ω very close to 1, and that therefore our living in such a patch is unsurprising.
This latter argument makes use of a version of the anthropic principle which is 'weaker' in the sense that it requires no speculation on multiple universes, or on the probabilities of various different universes existing instead of the current one. It requires only a single universe which is infinite - or merely large enough that many disconnected patches can form - and that the density varies in different regions (which is certainly the case on smaller scales, giving rise to galactic clusters and voids).
However, the anthropic principle has been criticised by many scientists. For example, in 1979 Bernard Carr and Martin Rees argued that the principle “is entirely post hoc: it has not yet been used to predict any feature of the Universe.” Others have objected to its philosophical basis, with Ernan McMullin writing in 1994 that "the weak Anthropic principle is trivial ... and the strong Anthropic principle is indefensible." Since many physicists and philosophers of science do not consider the principle to be compatible with the scientific method, another explanation for the flatness problem was needed.
Inflation.
The standard solution to the flatness problem invokes cosmic inflation, a process whereby the universe expands exponentially quickly (i.e. formula_3 grows as formula_22 with time formula_23, for some constant formula_24) during a short period in its early history. The theory of inflation was first proposed in 1979, and published in 1981, by Alan Guth. His two main motivations for doing so were the flatness problem and the horizon problem, another fine-tuning problem of physical cosmology. However, “In December, 1980 when Guth was developing his inflation model, he was not trying to solve either the flatness or horizon problems. Indeed, at that time, he knew nothing of the horizon problem and had never quantitatively calculated the flatness problem. He was a particle physicist trying to solve the magnetic monopole problem.”
The proposed cause of inflation is a field which permeates space and drives the expansion. The field contains a certain energy density, but unlike the density of the matter or radiation present in the late universe, which decrease over time, the density of the inflationary field remains roughly constant as space expands. Therefore, the term formula_13 increases extremely rapidly as the scale factor formula_3 grows exponentially. Recalling the Friedmann Equation
formula_25,
and the fact that the right-hand side of this expression is constant, the term formula_26 must therefore decrease with time.
Thus if formula_26 initially takes any arbitrary value, a period of inflation can force it down towards 0 and leave it extremely small - around formula_27 as required above, for example. Subsequent evolution of the universe will cause the value to grow, bringing it to the currently observed value of around 0.01. Thus the sensitive dependence on the initial value of Ω has been removed: a large and therefore 'unsurprising' starting value need not become amplified and lead to a very curved universe with no opportunity to form galaxies and other structures.
This success in solving the flatness problem is considered one of the major motivations for inflationary theory.
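The amount of inflation needed can be estimated with a short calculation (an illustrative sketch, using the ~10⁻⁶² figure quoted earlier): with ρ roughly constant, |Ω⁻¹ − 1| falls as a⁻², i.e. as e^(−2N) after N e-folds of expansion.

```python
import math

# Suppressing an order-one initial deviation down to the ~1e-62 level quoted
# above requires roughly N e-folds, where exp(-2*N) = 1e-62.
target = 1e-62
N = -math.log(target) / 2
print(N)          # ~ 71 e-folds
```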
Post inflation.
Although inflationary theory is regarded as having had much success, and the evidence for it is compelling, it is not universally accepted: cosmologists recognize that there are still gaps in the theory and are open to the possibility that future observations will disprove it. In particular, in the absence of any firm evidence for what the field driving inflation should be, many different versions of the theory have been proposed. Many of these contain parameters or initial conditions which themselves require fine-tuning in much the way that the early density does without inflation.
For these reasons, work is still being done on alternative solutions to the flatness problem. These have included non-standard interpretations of the effect of dark energy and gravity, particle production in an oscillating universe, and use of a Bayesian statistical approach to argue that the problem is non-existent. The latter argument, suggested for example by Evrard and Coles, maintains that the idea that Ω being close to 1 is 'unlikely' is based on assumptions about the likely distribution of the parameter which are not necessarily justified. Despite this ongoing work, inflation remains by far the dominant explanation for the flatness problem. The question arises, however, whether it is still the dominant explanation because it is the best explanation, or because the community is unaware of progress on this problem. In particular, in addition to the idea that Ω is not a suitable parameter in this context, other arguments against the flatness problem have been presented: if the universe collapses in the future, then the flatness problem "exists", but only for a relatively short time, so a typical observer would not expect to measure Ω appreciably different from 1; in the case of a universe which expands forever with a positive cosmological constant, fine-tuning is needed not to achieve a (nearly) flat universe, but rather to avoid it.
Einstein–Cartan theory.
The flatness problem is naturally solved by the Einstein–Cartan–Sciama–Kibble theory of gravity, without an exotic form of matter required in inflationary theory. This theory extends general relativity by removing a constraint of the symmetry of the affine connection and regarding its antisymmetric part, the torsion tensor, as a dynamical variable. It has no free parameters. Including torsion gives the correct conservation law for the total (orbital plus intrinsic) angular momentum of matter in the presence of gravity. The minimal coupling between torsion and Dirac spinors obeying the nonlinear Dirac equation generates a spin-spin interaction which is significant in fermionic matter at extremely high densities. Such an interaction averts the unphysical big bang singularity, replacing it with a bounce at a finite minimum scale factor, before which the Universe was contracting. The rapid expansion immediately after the big bounce explains why the present Universe at largest scales appears spatially flat, homogeneous and isotropic. As the density of the Universe decreases, the effects of torsion weaken and the Universe smoothly enters the radiation-dominated era.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "H^2 = \\frac{8 \\pi G}{3} \\rho - \\frac{kc^2}{a^2}"
},
{
"math_id": 1,
"text": "H"
},
{
"math_id": 2,
"text": "\\rho"
},
{
"math_id": 3,
"text": "a"
},
{
"math_id": 4,
"text": "k"
},
{
"math_id": 5,
"text": "G"
},
{
"math_id": 6,
"text": "c"
},
{
"math_id": 7,
"text": "\\rho_c"
},
{
"math_id": 8,
"text": "k = 0"
},
{
"math_id": 9,
"text": "\\rho_c = \\frac{3H^2}{8\\pi G}"
},
{
"math_id": 10,
"text": "\\rho > \\rho_c"
},
{
"math_id": 11,
"text": "\\frac{3a^2}{8\\pi G}H^2 = \\rho a^2 - \\frac{3kc^2}{8 \\pi G},"
},
{
"math_id": 12,
"text": "\\rho_c a^2 - \\rho a^2 = - \\frac{3kc^2}{8 \\pi G},"
},
{
"math_id": 13,
"text": "\\rho a^2"
},
{
"math_id": 14,
"text": "\\Omega=\\rho/\\rho_c"
},
{
"math_id": 15,
"text": "(\\Omega^{-1} - 1)\\rho a^2 = \\frac{-3kc^2}{8 \\pi G}."
},
{
"math_id": 16,
"text": "a^2"
},
{
"math_id": 17,
"text": "10^{60},"
},
{
"math_id": 18,
"text": "(\\Omega^{-1} - 1)"
},
{
"math_id": 19,
"text": "\\rho=\\rho_c"
},
{
"math_id": 20,
"text": "\\rho < \\rho_c"
},
{
"math_id": 21,
"text": "\\rho_{c}"
},
{
"math_id": 22,
"text": "e^{\\lambda t}"
},
{
"math_id": 23,
"text": "t"
},
{
"math_id": 24,
"text": "\\lambda"
},
{
"math_id": 25,
"text": "(\\Omega^{-1} - 1)\\rho a^2 = \\frac{-3kc^2}{8\\pi G}"
},
{
"math_id": 26,
"text": " | \\Omega^{-1} - 1 | "
},
{
"math_id": 27,
"text": "10^{-62}"
}
] | https://en.wikipedia.org/wiki?curid=1118171 |
1118396 | Chicago Pile-1 | World's first human-made nuclear reactor
Chicago Pile-1 (CP-1) was the world's first artificial nuclear reactor. On 2 December 1942, the first human-made self-sustaining nuclear chain reaction was initiated in CP-1 during an experiment led by Enrico Fermi. The secret development of the reactor was the first major technical achievement for the Manhattan Project, the Allied effort to create nuclear weapons during World War II. Developed by the Metallurgical Laboratory at the University of Chicago, CP-1 was built under the west viewing stands of the original Stagg Field. Although the project's civilian and military leaders had misgivings about the possibility of a disastrous runaway reaction, they trusted Fermi's safety calculations and decided they could carry out the experiment in a densely populated area. Fermi described the reactor as "a crude pile of black bricks and wooden timbers".
After a series of attempts, the successful reactor was assembled in November 1942 by a team of about 30 that, in addition to Fermi, included scientists Leo Szilard (who had previously formulated an idea for a non-fission chain reaction), Leona Woods, Herbert L. Anderson, Walter Zinn, Martin D. Whitaker, and George Weil. The reactor used natural uranium. This required a very large amount of material in order to reach criticality, along with graphite used as a neutron moderator. The reactor contained 45,000 ultra-pure graphite blocks and was fueled by uranium metal and uranium oxide. Unlike most subsequent nuclear reactors, it had no radiation shielding or cooling system as it operated at very low power – about one-half watt.
The pursuit of a reactor had been touched off by concern that Nazi Germany had a substantial scientific lead. The success of Chicago Pile-1 in producing the chain reaction provided the first vivid demonstration of the feasibility of the military use of nuclear energy by the Allies, as well as the reality of the danger that Nazi Germany could succeed in producing nuclear weapons. Previously, estimates of critical masses had been crude calculations, leading to order-of-magnitude uncertainties about the size of a hypothetical bomb. The successful use of graphite as a moderator paved the way for progress in the Allied effort, whereas the German program languished partly because of the belief that scarce and expensive heavy water would have to be used for that purpose. The Germans had failed to account for the importance of boron and cadmium impurities in the graphite samples on which they ran their test of its usability as a moderator, while Leo Szilard and Enrico Fermi had asked suppliers about the most common contaminations of graphite after a first failed test. They consequently ensured that the next test would be run with graphite entirely devoid of them. As it turned out, both boron and cadmium were strong neutron poisons.
In 1943, CP-1 was moved to Site A, a wartime research facility near Chicago, where it was reconfigured to become Chicago Pile-2 (CP-2). There, it was operated for research until 1954, when it was dismantled and buried. The stands at Stagg Field were demolished in August 1957 and a memorial quadrangle now marks the experiment site's location, which is now a National Historic Landmark and a Chicago Landmark.
Origins.
The idea of a chemical chain reaction was first suggested in 1913 by the German chemist Max Bodenstein for a situation in which two molecules react to form not just the final reaction products, but also some unstable molecules that can further react with the original substances to cause more to react. The concept of a nuclear chain reaction was first hypothesized by the Hungarian scientist Leo Szilard on 12 September 1933. Szilard realized that if a nuclear reaction produced neutrons or dineutrons, which then caused further nuclear reactions, the process might be self-perpetuating. Szilard proposed using mixtures of lighter known isotopes which produced neutrons in copious amounts, and also entertained the possibility of using uranium as a fuel. He filed a patent for his idea of a simple nuclear reactor the following year. The discovery of nuclear fission by German chemists Otto Hahn and Fritz Strassmann in 1938, and its theoretical explanation (and naming) by their collaborators Lise Meitner and Otto Frisch, opened up the possibility of creating a nuclear chain reaction with uranium, but initial experiments were unsuccessful.
In order for a chain reaction to occur, fissioning uranium atoms had to emit additional neutrons to keep the reaction going. At Columbia University in New York, Italian physicist Enrico Fermi collaborated with Americans John Dunning, Herbert L. Anderson, Eugene T. Booth, G. Norris Glasoe, and Francis G. Slack to conduct the first nuclear fission experiment in the United States on 25 January 1939. Subsequent work confirmed that fast neutrons were indeed produced by fission. Szilard obtained permission from the head of the Physics Department at Columbia, George B. Pegram, to use a laboratory for three months, and he persuaded Walter Zinn to become his collaborator. They conducted a simple experiment on the seventh floor of Pupin Hall at Columbia, using a radium-beryllium source to bombard uranium with neutrons. They discovered significant neutron multiplication in natural uranium, proving that a chain reaction might be possible.
Fermi and Szilard still believed that enormous quantities of uranium would be required for an atomic bomb, and therefore concentrated on producing a controlled chain reaction. Fermi urged Alfred O. C. Nier to separate uranium isotopes for determination of the fissile component, and, on 29 February 1940, Nier separated the first uranium-235 sample, which, after being mailed to Dunning at Columbia, was confirmed to be the isolated fissile material. When he was working in Rome, Fermi had discovered that collisions between neutrons and neutron moderators can slow the neutrons down, and thereby make them more likely to be captured by uranium nuclei, causing the uranium to fission. Szilard suggested to Fermi that they use carbon in the form of graphite as a moderator. As a back-up plan, he considered heavy water. This contained deuterium, which would not absorb neutrons like ordinary hydrogen, and was a better neutron moderator than carbon; but heavy water was expensive and difficult to produce, and several tons of it might be needed. Fermi estimated that a fissioning uranium nucleus produced 1.73 neutrons on average. It was enough, but a careful design was called for to minimize losses. (Today the average number of neutrons emitted per fissioning uranium-235 nucleus is known to be about 2.4).
Szilard estimated he would need many tons of graphite and uranium. In December 1940, Fermi and Szilard met with Herbert G. MacPherson and Victor C. Hamister at National Carbon to discuss the possible existence of impurities in graphite, and the procurement of graphite of a purity that had never been produced commercially. National Carbon, a chemical company, had taken the then unusual step of hiring MacPherson, a physicist, to research carbon arc lamps, a major commercial use for graphite at that time. Because of his work studying the spectroscopy of the carbon arc, MacPherson knew that the major relevant contaminant was boron, both because of its concentration and its affinity for absorbing neutrons, confirming a suspicion of Szilard's. More importantly, MacPherson and Hamister believed that techniques for producing graphite of a sufficient purity could be developed. Had Fermi and Szilard not consulted MacPherson and Hamister, they might have concluded, incorrectly, as the Germans did, that graphite was unsuitable for use as a neutron moderator.
Over the next two years, MacPherson, Hamister and Lauchlin M. Currie developed thermal purification techniques for the large-scale production of low-boron-content graphite. The resulting product was designated AGOT graphite ("Acheson Graphite Ordinary Temperature") by National Carbon. With a neutron absorption cross section of 4.97 mbarns, the AGOT graphite is considered the first true nuclear-grade graphite. By November 1942 National Carbon had shipped large quantities of AGOT graphite to the University of Chicago, where it became the primary source of the graphite used in the construction of Chicago Pile-1.
Government support.
Szilard drafted a confidential letter to the President, Franklin D. Roosevelt, warning of a German nuclear weapon project, explaining the possibility of nuclear weapons, and encouraging the development of a program that could result in their creation. With the help of Eugene Wigner and Edward Teller, he approached his old friend and collaborator Albert Einstein in August 1939, and convinced him to sign the letter, lending his prestige to the proposal. The Einstein–Szilard letter resulted in the establishment of research into nuclear fission by the U.S. government. An Advisory Committee on Uranium was formed under Lyman J. Briggs, a scientist and the director of the National Bureau of Standards. Its first meeting on 21 October 1939 was attended by Szilard, Teller, and Wigner. The scientists persuaded the Army and Navy to provide $6,000 for Szilard to purchase supplies for experiments—in particular, more graphite.
In April 1941, the National Defense Research Committee (NDRC) created a special project headed by Arthur Compton, a Nobel-Prize-winning physics professor at the University of Chicago, to report on the uranium program. Compton's report, submitted in May 1941, foresaw the prospects of developing radiological weapons, nuclear propulsion for ships, and nuclear weapons using uranium-235 or the recently discovered plutonium. In October he wrote another report on the practicality of an atomic bomb. For this report, he worked with Fermi on calculations of the critical mass of uranium-235. He also discussed the prospects for uranium enrichment with Harold Urey.
Niels Bohr and John Wheeler had theorized that heavy isotopes with odd atomic mass numbers were fissile. If so, then plutonium-239 was likely to be. In May 1941, Emilio Segrè and Glenn Seaborg produced 28 μg of plutonium-239 in the cyclotron at the University of California, and found that it had 1.7 times the thermal neutron capture cross section of uranium-235. At the time only such minute quantities of plutonium-239 had been produced, in cyclotrons, and it was not possible to produce a sufficiently large quantity that way. Compton discussed with Wigner how plutonium might be produced in a nuclear reactor, and with Robert Serber about how that plutonium might be separated from uranium. His report, submitted in November, stated that a bomb was feasible.
The final draft of Compton's November 1941 report made no mention of plutonium, but after discussing the latest research with Ernest Lawrence, Compton became convinced that a plutonium bomb was also feasible. In December, Compton was placed in charge of the plutonium project. Its objectives were to produce reactors to convert uranium to plutonium, to find ways to chemically separate the plutonium from the uranium, and to design and build an atomic bomb. It fell to Compton to decide which of the different types of reactor designs the scientists should pursue, even though a successful reactor had not yet been built. He proposed a schedule to achieve a controlled nuclear chain reaction by January 1943, and to have an atomic bomb by January 1945.
Development.
In a nuclear reactor, criticality is achieved when the rate of neutron production is equal to the rate of neutron losses, including both neutron absorption and neutron leakage. When a uranium-235 atom undergoes fission, it releases an average of 2.4 neutrons. In the simplest case of an unreflected, homogeneous, spherical reactor, the critical radius was calculated to be approximately:
<templatestyles src="Template:Blockquote/styles.css" />formula_0,
where "M" is the average distance that a neutron travels before it is absorbed, and "k" is the average neutron multiplication factor. The neutrons in succeeding reactions will be amplified by a factor "k", the second generation of fission events will produce "k2", the third "k3" and so on. In order for a self-sustaining nuclear chain reaction to occur, "k" must be at least 3 or 4 percent greater than 1. In other words, "k" must be greater than 1 without crossing the prompt critical threshold that would result in a rapid, exponential increase in the number of fission events.
Fermi christened his apparatus a "pile". Emilio Segrè later recalled that:<templatestyles src="Template:Blockquote/styles.css" />I thought for a while that this term was used to refer to a source of nuclear energy in analogy with Volta's use of the Italian term "pila" to denote his own great invention of a source of electrical energy. I was disillusioned by Fermi himself, who told me that he simply used the common English word "pile" as synonymous with "heap". To my surprise, Fermi never seemed to have thought of the relationship between his "pile" and Volta's.
Another grant, this time of $40,000, was obtained from the S-1 Uranium Committee to purchase more materials, and in August 1941 Fermi began to plan the building of a sub-critical assembly to test with a smaller structure whether a larger one would work. The so-called exponential pile he proposed to build was long, wide and high. This was too large to fit in the Pupin Physics Laboratories. Fermi recalled that:<templatestyles src="Template:Blockquote/styles.css" />We went to Dean Pegram, who was then the man who could carry out magic around the University, and we explained to him that we needed a big room. He scouted around the campus and we went with him to dark corridors and under various heating pipes and so on, to visit possible sites for this experiment and eventually a big room was discovered in Schermerhorn Hall.
The pile was built in September 1941 from graphite blocks and tinplate iron cans of uranium oxide. The cans were cubes. When filled with uranium oxide, each weighed about . There were 288 cans in all, and each was surrounded by graphite blocks so the whole would form a cubic lattice structure. A radium-beryllium neutron source was positioned near the bottom. The uranium oxide was heated to remove moisture, and packed into the cans while still hot on a shaking table. The cans were then soldered shut. For a workforce, Pegram secured the services of Columbia's football team. It was the custom at the time for football players to perform odd jobs around the university. They were able to manipulate the heavy cans with ease. The final result was a disappointing "k" of 0.87.
Compton felt that having teams at Columbia University, Princeton University, the University of Chicago and the University of California was creating too much duplication and not enough collaboration, and he resolved to concentrate the work in one location. Nobody wanted to move, and everybody argued in favor of their own location. In January 1942, soon after the United States entered World War II, Compton decided on his own location, the University of Chicago, where he knew he had the unstinting support of university administration. Chicago also had a central location, and scientists, technicians and facilities were more readily available in the Midwest, where war work had not yet taken them away. In contrast, Columbia University was engaged in uranium enrichment efforts under Harold Urey and John Dunning, and was hesitant to add a third secret project.
Before leaving for Chicago, Fermi's team made one last attempt to build a working pile at Columbia. Since the cans had absorbed neutrons, they were dispensed with. Instead, the uranium oxide, heated to to dry it out, was pressed into cylindrical holes long and in diameter drilled into the graphite. The entire pile was then canned by soldering sheet metal around it, and the contents heated above the boiling point of water to remove moisture. The result was a "k" of 0.918.
Choice of site.
In Chicago, Samuel K. Allison had found a suitable location long, wide and high, sunk slightly below ground level, in a space under the stands at Stagg Field originally built as a rackets court. Stagg Field had been largely unused since the University of Chicago had given up playing American football in 1939, but the rackets courts under West Stands were still used for playing squash and handball. Leona Woods and Anthony L. Turkevich played squash there in 1940. Since it was intended for strenuous exercise, the area was unheated, and very cold in the winter. The nearby North Stands had a pair of ice skating rinks on the ground floor, which although they were unrefrigerated, seldom melted in winter. Allison used the rackets court area to construct a experimental pile before Fermi's group arrived in 1942.
The United States Army Corps of Engineers assumed control of the nuclear weapons program in June 1942, and Compton's Metallurgical Laboratory became part of what came to be called the Manhattan Project. Brigadier General Leslie R. Groves, Jr. became director of the Manhattan Project on 23 September 1942. He visited the Metallurgical Laboratory for the first time on 5 October. Between 15 September and 15 November 1942, groups under Herbert Anderson and Walter Zinn constructed 16 experimental piles under the Stagg Field stands.
Fermi designed a new pile, which would be spherical to maximize "k", which was predicted to be around 1.04, thereby achieving criticality. Leona Woods was detailed to build boron trifluoride neutron detectors as soon as she completed her doctoral thesis. She also helped Anderson locate the required large number of timbers at lumber yards in Chicago's south side. Shipments of high-purity graphite arrived, mainly from National Carbon, and high-purity uranium dioxide from Mallinckrodt in St Louis, which was now producing a month. Metallic uranium also began arriving in larger quantities, the product of newly developed techniques.
On 25 June, the Army and the Office of Scientific Research and Development (OSRD) had selected a site in the Argonne Forest near Chicago for a plutonium pilot plant; this became known as "Site A". were leased from Cook County in August, but by September it was apparent that the proposed facilities would be too extensive for the site, and it was decided to build the pilot plant elsewhere. The subcritical piles posed little danger, but Groves felt that it would be prudent to locate a critical pile—a fully functional nuclear reactor—at a more remote site. A building at Argonne to house Fermi's experimental pile was commenced, with its completion scheduled for 20 October. Due to industrial disputes, construction fell behind schedule, and it became clear the materials for Fermi's new pile would be on hand before the new structure was completed. In early November, Fermi came to Compton with a proposal to build the experimental pile under the stands at Stagg Field.
The risk of building an operational reactor running at criticality in a populated area was a significant issue, as there was a danger of a catastrophic nuclear meltdown blanketing one of the United States' major urban areas in radioactive fission products. But the physics of the system suggested that the pile could be safely shut down even in the event of a runaway reaction. When a fuel atom undergoes fission, it releases neutrons that strike other fuel atoms in a chain reaction. The time between absorbing the neutron and undergoing fission is measured in nanoseconds. Szilard had noted that this reaction leaves behind fission products that may also release neutrons, but do so over much longer periods, from microseconds to as long as minutes. In a slow reaction like the one in a pile where the fission products build up, these neutrons account for about three percent of the total neutron flux.
Fermi argued that by using the delayed neutrons, and by carefully controlling the reaction rates as the power is ramped up, a pile can reach criticality at fission rates slightly below that of a chain reaction relying solely on the prompt neutrons from the fission reactions. Since the rate of release of these neutrons depends on fission events taking place some time earlier, there is a delay between any power spikes and the later criticality event. This time gives the operators leeway; if a spike in the prompt neutron flux is seen, they have several minutes before this causes a runaway reaction. If a neutron absorber, or neutron poison, is injected at any time during this period, the reactor will shut down. Consequently, the reaction can be controlled with electromechanical control systems such as control rods. Compton felt this delay was enough to provide a critical margin of safety, and allowed Fermi to build Chicago Pile-1 at Stagg Field.
Compton later explained that:<templatestyles src="Template:Blockquote/styles.css" />As a responsible officer of the University of Chicago, according to every rule of organizational protocol, I should have taken the matter to my superior. But this would have been unfair. President Hutchins was in no position to make an independent judgment of the hazards involved. Based on considerations of the University's welfare, the only answer he could have given would have been—no. And this answer would have been wrong.
Compton informed Groves of his decision at the 14 November meeting of the S-1 Executive Committee. Although Groves "had serious misgivings about the wisdom of Compton's suggestion", he did not interfere. James B. Conant, the chairman of the NDRC, was reported to have turned white. But because of the urgency and their confidence in Fermi's calculations, no one objected.
Construction.
Chicago Pile-1 was encased within a balloon so that the air inside could be replaced by carbon dioxide. Anderson had a dark gray balloon manufactured by Goodyear Tire and Rubber Company. A cube-shaped balloon was somewhat unusual, but the Manhattan Project's AAA priority rating ensured prompt delivery with no questions asked. A block and tackle was used to haul it into place, with the top secured to the ceiling and three sides to the walls. The remaining side, the one facing the balcony from which Fermi directed the operation, was furled like an awning. A circle was drawn on the floor, and the stacking of graphite blocks began on the morning of 16 November 1942. The first layer placed was made up entirely of graphite blocks, with no uranium. Layers without uranium were alternated with two layers containing uranium, so the uranium was enclosed in graphite. Unlike later reactors, it had no radiation shielding or cooling system, as it was only intended to be operated at very low power.
The work was carried out in twelve-hour shifts, with a day shift under Zinn and a night shift under Anderson. For a work force they hired thirty high school dropouts who were eager to earn a bit of money before being drafted into the military. They machined 45,000 graphite blocks enclosing 19,000 pieces of uranium metal and uranium oxide. The graphite arrived from the manufacturers in bars of various lengths. They were cut into standard lengths of , each weighing . A lathe was used to drill holes in the blocks for the control rods and the uranium. A hydraulic press was used to shape the uranium oxide into "pseudospheres", cylinders with rounded ends. Drill bits had to be sharpened after each 60 holes, which worked out to be about once an hour. Graphite dust soon filled the air and made the floor slippery.
Another group, under Volney C. Wilson, was responsible for instrumentation. They also fabricated the control rods, which were cadmium sheets nailed to flat wooden strips, cadmium being a potent neutron absorber, and the scram line, a manila rope that when cut would drop a control rod into the pile and stop the reaction. Richard Fox, who made the control-rod mechanism for the pile, remarked that the manual speed control that the operator had over the rods was simply a variable resistor, controlling an electric motor that would spool the clothesline wire over a pulley that also had two lead weights attached to ensure it would fail-safe and return to its zero position when released.
About two layers were laid per shift. Woods' boron trifluoride neutron counter was inserted at the 15th layer. Thereafter, readings were taken at the end of each shift. Fermi divided the square of the radius of the pile by the intensity of the radioactivity to obtain a metric that counted down to one as the pile approached criticality. At the 15th layer, it was 390; at the 19th it was 320; at the 25th it was 270 and by the 36th it was only 149. The original design was for a spherical pile, but as work proceeded, it became clear that this would not be necessary. The new graphite was purer, and of very pure metallic uranium began to arrive from the Ames Project at Iowa State University, where Harley Wilhelm and his team had developed a new process to produce uranium metal. Westinghouse Lamp Plant supplied , which it produced in a rush with a makeshift process.
The metallic uranium cylinders, known as "Spedding's eggs", were dropped in the holes in the graphite in lieu of the uranium oxide pseudospheres. The process of filling the balloon with carbon dioxide would not be necessary, and twenty layers could be dispensed with. According to Fermi's new calculations, the countdown would reach 1 between the 56th and 57th layers. The resulting pile was therefore flatter on the top than on the bottom. Anderson called a halt after the 57th layer was placed. When completed, the wooden frame supported an elliptical-shaped structure, high, wide at the ends and across the middle. It contained of uranium metal, of uranium oxide and of graphite, at an estimated cost of $2.7 million.
First nuclear chain reaction.
The next day, 2 December 1942, everybody assembled for the experiment. There were 49 scientists present. Although most of the S-1 Executive Committee was in Chicago, only Crawford Greenewalt was present, at Compton's invitation. Other dignitaries present included Szilard, Wigner and Spedding. Fermi, Compton, Anderson and Zinn gathered around the controls on the balcony, which was originally intended as a viewing platform. Samuel Allison stood ready with a bucket of concentrated cadmium nitrate, which he was to throw over the pile in the event of an emergency. The startup began at 09:54. Walter Zinn removed the zip, the emergency control rod, and secured it. Norman Hilberry stood ready with an axe to cut the scram line, which would allow the zip to fall under the influence of gravity. While Leona Woods called out the count from the boron trifluoride detector in a loud voice, George Weil, the only one on the floor, withdrew all but one of the control rods. At 10:37 Fermi ordered Weil to remove all but of the last control rod. Weil withdrew it at a time, with measurements being taken at each step.
The process was abruptly halted by the automatic control rod reinserting itself, due to its trip level being set too low. At 11:25, Fermi ordered the control rods reinserted. He then announced that it was lunch time.
The experiment resumed at 14:00. Weil worked the final control rod while Fermi carefully monitored the neutron activity. Fermi announced that the pile had gone critical (reached a self-sustaining reaction) at 15:25. Fermi switched the scale on the recorder to accommodate the rapidly increasing electric current from the boron trifluoride detector. He wanted to test the control circuits, but after 28 minutes, the alarm bells went off to notify everyone that the neutron flux had passed the preset safety level, and he ordered Zinn to release the zip. The reaction rapidly halted. The pile had run for about 4.5 minutes at about 0.5 watts. Wigner opened a bottle of Chianti, which they drank from paper cups.
Compton notified Conant by telephone. The conversation was in an impromptu code:
<templatestyles src="Template:Blockquote/styles.css" />
Later operation.
On 12 December 1942, CP-1's power output was increased to 200 W, enough to power a light bulb. Lacking shielding of any kind, it was a radiation hazard for everyone in the vicinity, and further testing was continued at 0.5 W. Operation was terminated on 28 February 1943, and the pile was dismantled and moved to Site A in the Argonne Forest, now known as Red Gate Woods. There the original materials were used to build Chicago Pile-2 (CP-2). Instead of being spherical, the new reactor was built in a cube-like shape, about tall with a base approximately square. It was surrounded by concrete walls thick that acted as radiation shielding, with overhead protection from of lead and of wood. More uranium was used, so it contained of uranium and of graphite. No cooling system was provided as it only ran at a few kilowatts. CP-2 became operational in March 1943, with a "k" of 1.055. During the war Walter Zinn allowed CP-2 to be run around the clock, and its design was suitable for conducting experiments.
CP-2 was joined by Chicago Pile-3, the first heavy water reactor, which went critical on 15 May 1944.
The reactors were used to undertake research related to weapons, such as investigations of the properties of tritium. Wartime experiments included measuring the neutron absorption cross-section of elements and compounds. Albert Wattenberg recalled that about 10 elements were studied each month, and 75 over the course of a year. An accident involving radium and beryllium powder caused a dangerous drop in his white blood cell count that lasted for three years. As the dangers of things such as inhaling uranium oxide became more apparent, experiments were conducted on the effects of radioactive substances on laboratory test animals.
Though the design was held secret for a decade, Szilard and Fermi jointly patented it as the "neutronic reactor", U.S. Patent No. 2,708,656, with an initial filing date of 19 December 1944.
The Red Gate Woods later became the original site of Argonne National Laboratory, which replaced the Metallurgical Laboratory on 1 July 1946, with Zinn as its first director. CP-2 and CP-3 operated for ten years before they outlived their usefulness, and Zinn ordered them shut down on 15 May 1954. Their remaining usable fuel was transferred to Chicago Pile-5 at the Argonne National Laboratory's new site in DuPage County, and the CP-2 and CP-3 reactors were dismantled in 1955 and 1956. Some of the graphite blocks from CP-1/CP-2 were reused in the reflector of the TREAT reactor. High-level nuclear waste such as fuel and heavy water were shipped to Oak Ridge, Tennessee, for disposal. The rest was encased in concrete and buried in a trench in what is now known as the Site A/Plot M Disposal Site. It is marked by a commemorative boulder.
By the 1970s there was increased public concern about the levels of radioactivity at the site, which was used for recreation by local residents. Surveys conducted in the 1980s found strontium-90 in the soil at Plot M, trace amounts of tritium in nearby wells, and plutonium, technetium, caesium, and uranium in the area. In 1994, the United States Department of Energy and the Argonne National Laboratory yielded to public pressure and earmarked $24.7 million and $3.4 million respectively to rehabilitate the site. As part of the cleanup, of radioactive waste was removed and sent to the Hanford Site for disposal. By 2002, the Illinois Department of Public Health had determined that the remaining materials posed no danger to public health.
Significance and commemoration.
The successful test of CP-1 not only proved that a nuclear reactor was feasible, it demonstrated that the "k" factor was larger than originally thought. This removed the objections to the use of air or water as a coolant rather than expensive helium. It also meant that there was greater latitude in the choice of materials for coolant pipes and control mechanisms. Wigner now pressed ahead with his design for a water-cooled production reactor. There remained concerns about the ability of a graphite-moderated reactor to produce plutonium on an industrial scale, and for this reason the Manhattan Project continued the development of heavy water production facilities. An air-cooled reactor, the X-10 Graphite Reactor, was built at the Clinton Engineer Works in Oak Ridge as part of a plutonium semiworks, followed by larger water-cooled production reactors at the Hanford Site in Washington state. Enough plutonium was produced for an atomic bomb by July 1945, and for two more in August.
A commemorative plaque was unveiled at Stagg Field on 2 December 1952, the occasion of the tenth anniversary of CP-1 going critical. It read as follows:<templatestyles src="Template:Blockquote/styles.css" /> The plaque was saved when the West Stands were demolished in August 1957. The site of CP-1 was designated as a National Historic Landmark on 18 February 1965. When the National Register of Historic Places was created in 1966, it was immediately added to that as well. The site was also named a Chicago Landmark on 27 October 1971.
Today the site of the old Stagg Field is occupied by the university's Regenstein Library, which was opened in 1970, and the Joe and Rika Mansueto Library, which was opened in 2011. A Henry Moore sculpture, "Nuclear Energy", stands in a small quadrangle outside the Regenstein Library on the former site of the west viewing stands' rackets court. It was dedicated on 2 December 1967, to commemorate the 25th anniversary of CP-1 going critical. The commemorative plaques from 1952, 1965 and 1967 are nearby. A graphite block from CP-1 can be seen at the Bradbury Science Museum in Los Alamos, New Mexico; another is on display at the Museum of Science and Industry in Chicago. On 2 December 2017, the 75th anniversary, the Massachusetts Institute of Technology, in restoring a research graphite pile similar in design to Chicago Pile-1, ceremonially inserted the final uranium slugs.
Notes.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "R_{crit} \\approx \\frac{\\pi M}{\\sqrt{k - 1}}"
}
] | https://en.wikipedia.org/wiki?curid=1118396 |
11184711 | Mean reciprocal rank | Search quality measure in information retrieval
The mean reciprocal rank is a statistical measure for evaluating any process that produces a list of possible responses to a sample of queries, ordered by probability of correctness. The reciprocal rank of a query response is the multiplicative inverse of the rank of the first correct answer: 1 for first place, 1⁄2 for second place, 1⁄3 for third place and so on. The mean reciprocal rank is the average of the reciprocal ranks of results for a sample of queries Q:
formula_0
where formula_1 refers to the rank position of the "first" relevant document for the "i"-th query.
The reciprocal value of the mean reciprocal rank corresponds to the harmonic mean of the ranks.
Example.
Suppose we have three queries for a system that tries to translate English words to their plurals. In each case, the system makes three guesses, with the first one being the one it thinks is most likely correct. Suppose further that the correct plural appears third in the first query's list of guesses, second in the second query's, and first in the third query's, giving reciprocal ranks of 1⁄3, 1⁄2 and 1.
Given those three samples, we could calculate the mean reciprocal rank as formula_2, or approximately 0.61.
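A minimal Python sketch of this calculation; the helper function is our own and simply implements the formula above, with queries that return no relevant answer contributing a reciprocal rank of 0.

```python
def mean_reciprocal_rank(ranks):
    """ranks: rank of the first relevant result for each query (None if none is relevant)."""
    return sum(0.0 if r is None else 1.0 / r for r in ranks) / len(ranks)

# Ranks of the first correct plural in the three example queries: 3rd, 2nd and 1st.
print(mean_reciprocal_rank([3, 2, 1]))  # 0.611... = 11/18
```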
If none of the proposed results are correct, the reciprocal rank is 0. Note that only the rank of the first relevant answer is considered, and possible further relevant answers are ignored. If users are also interested in further relevant items, mean average precision is a potential alternative metric.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\text{MRR} = \\frac{1}{|Q|} \\sum_{i=1}^{|Q|} \\frac{1}{\\text{rank}_i}. \\!"
},
{
"math_id": 1,
"text": " \\text{rank}_i"
},
{
"math_id": 2,
"text": " (1/3 + 1/2 + 1) / 3 = 11/18"
}
] | https://en.wikipedia.org/wiki?curid=11184711 |
11185796 | Beverton–Holt model | Discrete-time population model
The Beverton–Holt model is a classic discrete-time population model which gives the expected number "n" "t"+1 (or density) of individuals in generation "t" + 1 as a function of the number of individuals in the previous generation,
formula_0
Here "R"0 is interpreted as the proliferation rate per generation and "K" = ("R"0 − 1) "M" is the carrying capacity of the environment. The Beverton–Holt model was introduced in the context of fisheries by Beverton & Holt (1957). Subsequent work has derived the model under other assumptions such as contest competition (Brännström & Sumpter 2005), within-year resource limited competition (Geritz & Kisdi 2004) or even as the outcome of a source-sink Malthusian patches linked by density-dependent dispersal (Bravo de la Parra et al. 2013). The Beverton–Holt model can be generalized to include scramble competition (see the Ricker model, the Hassell model and the Maynard Smith–Slatkin model). It is also possible to include a parameter reflecting the spatial clustering of individuals (see Brännström & Sumpter 2005).
Despite being nonlinear, the model can be solved explicitly, since it is in fact an inhomogeneous linear equation in 1/"n".
The solution is
formula_1
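A short Python sketch, with arbitrary illustrative parameter values, that iterates the recurrence and checks it against the explicit solution:

```python
def beverton_holt(n0, R0, M, generations):
    """Iterate n_{t+1} = R0 * n_t / (1 + n_t / M)."""
    n = n0
    trajectory = [n]
    for _ in range(generations):
        n = R0 * n / (1 + n / M)
        trajectory.append(n)
    return trajectory

def closed_form(n0, R0, M, t):
    """Explicit solution, with carrying capacity K = (R0 - 1) * M."""
    K = (R0 - 1) * M
    return K * n0 / (n0 + (K - n0) * R0 ** (-t))

R0, M, n0 = 2.0, 100.0, 5.0          # illustrative values only
trajectory = beverton_holt(n0, R0, M, 10)
print(trajectory[10], closed_form(n0, R0, M, 10))   # the two agree
```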
Because of this structure, the model can be considered as the discrete-time analogue of the continuous-time logistic equation for population growth introduced by Verhulst; for comparison, the logistic equation is
formula_2
and its solution is
formula_3 | [
{
"math_id": 0,
"text": "n_{t+1} = \\frac{R_0 n_t}{1+ n_t/M}. "
},
{
"math_id": 1,
"text": "\nn_t = \\frac{K n_0}{n_0 + (K - n_0) R_0^{-t}}.\n"
},
{
"math_id": 2,
"text": "\\frac{dN}{dt} = rN \\left( 1 - \\frac{N}{K} \\right),"
},
{
"math_id": 3,
"text": "\nN(t) = \\frac{K N(0)}{N(0) + (K - N(0)) e^{-rt}}.\n"
}
] | https://en.wikipedia.org/wiki?curid=11185796 |
11186496 | Self-avoiding walk | A sequence of moves on a lattice that does not visit the same point more than once
In mathematics, a self-avoiding walk (SAW) is a sequence of moves on a lattice (a lattice path) that does not visit the same point more than once. This is a special case of the graph theoretical notion of a path. A self-avoiding polygon (SAP) is a closed self-avoiding walk on a lattice. Very little is known rigorously about the self-avoiding walk from a mathematical perspective, although physicists have provided numerous conjectures that are believed to be true and are strongly supported by numerical simulations.
In computational physics, a self-avoiding walk is a chain-like path in R2 or R3 with a certain number of nodes, typically a fixed step length and has the property that it doesn't cross itself or another walk. A system of SAWs satisfies the so-called excluded volume condition. In higher dimensions, the SAW is believed to behave much like the ordinary random walk.
SAWs and SAPs play a central role in the modeling of the topological and knot-theoretic behavior of thread- and loop-like molecules such as proteins. Indeed, SAWs may have first been introduced by the chemist Paul Flory in order to model the real-life behavior of chain-like entities such as solvents and polymers, whose physical volume prohibits multiple occupation of the same spatial point.
SAWs are fractals. For example, in "d" = 2 the fractal dimension is 4/3, for "d" = 3 it is close to 5/3 while for "d" ≥ 4 the fractal dimension is 2. The dimension is called the upper critical dimension above which excluded volume is negligible. A SAW that does not satisfy the excluded volume condition was recently studied to model explicit surface geometry resulting from expansion of a SAW.
The properties of SAWs cannot be calculated analytically, so numerical simulations are employed. The pivot algorithm is a common method for Markov chain Monte Carlo simulations for the uniform measure on n-step self-avoiding walks. The pivot algorithm works by taking a self-avoiding walk and randomly choosing a point on this walk, and then applying symmetrical transformations (rotations and reflections) on the walk after the nth step to create a new walk.
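The following Python sketch shows one pivot move on the square lattice. It is only a minimal illustration: the choice of allowed symmetries and pivot sites varies between implementations, but the essential accept/reject step (keep the proposed walk only if it is still self-avoiding) is shown.

```python
import random

# The seven non-trivial symmetries of the square lattice (rotations by 90, 180 and
# 270 degrees, and four reflections), written as 2x2 integer matrices (a, b, c, d).
SYMMETRIES = [
    (0, -1, 1, 0), (-1, 0, 0, -1), (0, 1, -1, 0),                 # rotations
    (1, 0, 0, -1), (-1, 0, 0, 1), (0, 1, 1, 0), (0, -1, -1, 0),   # reflections
]

def pivot_step(walk):
    """Attempt one pivot move; return the new walk if self-avoiding, else the old one."""
    i = random.randrange(1, len(walk) - 1)      # choose an interior pivot site
    a, b, c, d = random.choice(SYMMETRIES)
    px, py = walk[i]
    new_tail = [(px + a * (x - px) + b * (y - py),
                 py + c * (x - px) + d * (y - py)) for x, y in walk[i + 1:]]
    new_walk = walk[:i + 1] + new_tail
    return new_walk if len(set(new_walk)) == len(new_walk) else walk

walk = [(x, 0) for x in range(50)]              # a straight rod is a valid starting SAW
for _ in range(10000):
    walk = pivot_step(walk)
```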
Calculating the number of self-avoiding walks in any given lattice is a common computational problem. There is currently no known formula, although there are rigorous methods of approximation.
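For small walk lengths, the counts can be obtained by direct enumeration, as in the following (exponential-time) Python sketch for the square lattice.

```python
def count_saws(n, pos=(0, 0), visited=None):
    """Count the n-step self-avoiding walks on the square lattice starting at the origin."""
    if visited is None:
        visited = {pos}
    if n == 0:
        return 1
    x, y = pos
    total = 0
    for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if nxt not in visited:
            visited.add(nxt)
            total += count_saws(n - 1, nxt, visited)
            visited.remove(nxt)
    return total

print([count_saws(n) for n in range(1, 8)])   # [4, 12, 36, 100, 284, 780, 2172]
```

The quantity "c""n"1/"n" computed from such counts approaches the connective constant of the lattice as "n" grows, although brute-force enumeration quickly becomes infeasible.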
Universality.
One of the phenomena associated with self-avoiding walks and statistical physics models in general is the notion of universality, that is, independence of macroscopic observables from microscopic details, such as the choice of the lattice. One important quantity that appears in conjectures for universal laws is the connective constant, defined as follows. Let cn denote the number of n-step self-avoiding walks. Since every ("n" + "m")-step self avoiding walk can be decomposed into an n-step self-avoiding walk and an m-step self-avoiding walk, it follows that "c""n"+"m" ≤ "cncm". Therefore, the sequence {log "cn"} is subadditive and we can apply Fekete's lemma to show that the following limit exists:
formula_0
μ is called the connective constant, since cn depends on the particular lattice chosen for the walk so does μ. The exact value of μ is only known for the hexagonal lattice, where it is equal to:
formula_1
For other lattices, μ has only been approximated numerically, and is believed not to even be an algebraic number. It is conjectured that
formula_2
as "n" → ∞, where μ depends on the lattice, but the power law correction formula_3 does not; in other words, this law is believed to be universal.
On networks.
Self-avoiding walks have also been studied in the context of network theory. In this context, it is customary to treat the SAW as a dynamical process, such that in every time-step a walker randomly hops between neighboring nodes of the network. The walk ends when the walker reaches a dead-end state, such that it can no longer progress to newly un-visited nodes. It was recently found that on Erdős–Rényi networks, the distribution of path lengths of such dynamically grown SAWs can be calculated analytically, and follows the Gompertz distribution. For arbitrary networks, the distribution of path lengths of the walk, the degree distribution of the non-visited network and the first-hitting-time distribution to a node can be obtained by solving a set of coupled recurrence equations.
Limits.
Consider the uniform measure on n-step self-avoiding walks in the full plane. It is currently unknown whether the limit of the uniform measure as "n" → ∞ induces a measure on infinite full-plane walks. However, Harry Kesten has shown that such a measure exists for self-avoiding walks in the half-plane. One important question involving self-avoiding walks is the existence and conformal invariance of the scaling limit, that is, the limit as the length of the walk goes to infinity and the mesh of the lattice goes to zero. The scaling limit of the self-avoiding walk is conjectured to be described by Schramm–Loewner evolution with parameter "κ" = 8/3.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\mu = \\lim_{n \\to \\infty} c_n^{\\frac{1}{n}}."
},
{
"math_id": 1,
"text": "\\sqrt{2 + \\sqrt{2}}."
},
{
"math_id": 2,
"text": "c_n \\approx \\mu^n n^{\\frac{11}{32}}"
},
{
"math_id": 3,
"text": "n^{\\frac{11}{32}}"
}
] | https://en.wikipedia.org/wiki?curid=11186496 |
1118789 | Linearization | Finding linear approximation of function at given point
In mathematics, linearization is finding the linear approximation to a function at a given point. The linear approximation of a function is the first order Taylor expansion around the point of interest. In the study of dynamical systems, linearization is a method for assessing the local stability of an equilibrium point of a system of nonlinear differential equations or discrete dynamical systems. This method is used in fields such as engineering, physics, economics, and ecology.
Linearization of a function.
Linearizations of a function are lines—usually lines that can be used for purposes of calculation. Linearization is an effective method for approximating the output of a function formula_0 at any formula_1 based on the value and slope of the function at formula_2, given that formula_3 is differentiable on formula_4 (or formula_5) and that formula_6 is close to formula_7. In short, linearization approximates the output of a function near formula_1.
For example, formula_8. However, what would be a good approximation of formula_9?
For any given function formula_0, formula_3 can be approximated if it is near a known differentiable point. The most basic requisite is that formula_10, where formula_11 is the linearization of formula_3 at formula_1. The point-slope form of an equation forms an equation of a line, given a point formula_12 and slope formula_13. The general form of this equation is: formula_14.
Using the point formula_15, formula_11 becomes formula_16. Because differentiable functions are locally linear, the best slope to substitute in would be the slope of the line tangent to formula_3 at formula_1.
While the concept of local linearity applies the most to points arbitrarily close to formula_1, those relatively close work relatively well for linear approximations. The slope formula_13 should be, most accurately, the slope of the tangent line at formula_1.
Visually, the accompanying diagram shows the tangent line of formula_3 at formula_17. At formula_18, where formula_19 is any small positive or negative value, formula_18 is very nearly the value of the tangent line at the point formula_20.
The final equation for the linearization of a function at formula_1 is:
formula_21
For formula_1, formula_22. The derivative of formula_3 is formula_23, and the slope of formula_3 at formula_6 is formula_24.
Example.
To find formula_25, we can use the fact that formula_8. The linearization of formula_26 at formula_1 is formula_27, because the function formula_28 defines the slope of the function formula_26 at formula_17. Substituting in formula_29, the linearization at 4 is formula_30. In this case formula_31, so formula_25 is approximately formula_32. The true value is close to 2.00024998, so the linearization approximation has a relative error of less than 1 millionth of a percent.
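The same computation can be written as a short Python check of the approximation and its relative error:

```python
from math import sqrt

a, x = 4.0, 4.001
linear = sqrt(a) + (x - a) / (2 * sqrt(a))      # L(x) = f(a) + f'(a) * (x - a)
print(linear, sqrt(x), abs(linear - sqrt(x)) / sqrt(x))
# 2.00025  2.00024998...  ~7.8e-09
```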
Linearization of a multivariable function.
The equation for the linearization of a function formula_33 at a point formula_34 is:
formula_35
The general equation for the linearization of a multivariable function formula_36 at a point formula_37 is:
formula_38
where formula_39 is the vector of variables, formula_40 is the gradient, and formula_37 is the linearization point of interest
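A small Python sketch of the multivariable case, using an illustrative function of our own choosing, f(x, y) = x exp(y), linearized at the point (1, 0):

```python
from math import exp

def f(x, y):
    return x * exp(y)

a, b = 1.0, 0.0                          # linearization point
fx, fy = exp(b), a * exp(b)              # partial derivatives evaluated at (a, b)

def L(x, y):
    """First-order (tangent plane) approximation of f around (a, b)."""
    return f(a, b) + fx * (x - a) + fy * (y - b)

print(L(1.1, 0.05), f(1.1, 0.05))        # 1.15 vs approximately 1.1564
```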
Uses of linearization.
Linearization makes it possible to use tools for studying linear systems to analyze the behavior of a nonlinear function near a given point. The linearization of a function is the first order term of its Taylor expansion around the point of interest. For a system defined by the equation
formula_41,
the linearized system can be written as
formula_42
where formula_43 is the point of interest and formula_44 is the formula_39-Jacobian of formula_45 evaluated at formula_43.
Stability analysis.
In stability analysis of autonomous systems, one can use the eigenvalues of the Jacobian matrix evaluated at a hyperbolic equilibrium point to determine the nature of that equilibrium. This is the content of the linearization theorem. For time-varying systems, the linearization requires additional justification.
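As a sketch of how this is used in practice, the following Python example (using NumPy, with an illustrative system of our own choosing, the damped pendulum x' = y, y' = -sin(x) - 0.5y) evaluates the Jacobian at the equilibrium (0, 0) and inspects its eigenvalues; eigenvalues with negative real parts indicate an asymptotically stable hyperbolic equilibrium.

```python
import numpy as np

# Damped pendulum: x' = y, y' = -sin(x) - 0.5*y, with an equilibrium at (0, 0).
def jacobian(x, y):
    return np.array([[0.0, 1.0],
                     [-np.cos(x), -0.5]])

eigenvalues = np.linalg.eigvals(jacobian(0.0, 0.0))
print(eigenvalues)                                  # complex pair with negative real part
print(all(ev.real < 0 for ev in eigenvalues))       # True -> asymptotically stable
```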
Microeconomics.
In microeconomics, decision rules may be approximated under the state-space approach to linearization. Under this approach, the Euler equations of the utility maximization problem are linearized around the stationary steady state. A unique solution to the resulting system of dynamic equations then is found.
Optimization.
In mathematical optimization, cost functions and non-linear components within them can be linearized in order to apply a linear solving method such as the Simplex algorithm. The optimized result is then reached much more efficiently, and the solution found is a deterministic global optimum.
Multiphysics.
In multiphysics systems—systems involving multiple physical fields that interact with one another—linearization with respect to each of the physical fields may be performed. This linearization of the system with respect to each of the fields results in a linearized monolithic equation system that can be solved using monolithic iterative solution procedures such as the Newton–Raphson method. Examples of this include MRI scanner systems, which result in a system of electromagnetic, mechanical and acoustic fields.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "y = f(x)"
},
{
"math_id": 1,
"text": "x = a"
},
{
"math_id": 2,
"text": "x = b"
},
{
"math_id": 3,
"text": "f(x)"
},
{
"math_id": 4,
"text": "[a, b]"
},
{
"math_id": 5,
"text": "[b, a]"
},
{
"math_id": 6,
"text": "a"
},
{
"math_id": 7,
"text": "b"
},
{
"math_id": 8,
"text": "\\sqrt{4} = 2"
},
{
"math_id": 9,
"text": "\\sqrt{4.001} = \\sqrt{4 + .001}"
},
{
"math_id": 10,
"text": "L_a(a) = f(a)"
},
{
"math_id": 11,
"text": "L_a(x)"
},
{
"math_id": 12,
"text": "(H, K)"
},
{
"math_id": 13,
"text": "M"
},
{
"math_id": 14,
"text": "y - K = M(x - H)"
},
{
"math_id": 15,
"text": "(a, f(a))"
},
{
"math_id": 16,
"text": "y = f(a) + M(x - a)"
},
{
"math_id": 17,
"text": "x"
},
{
"math_id": 18,
"text": "f(x+h)"
},
{
"math_id": 19,
"text": "h"
},
{
"math_id": 20,
"text": "(x+h, L(x+h))"
},
{
"math_id": 21,
"text": "y = (f(a) + f'(a)(x - a))"
},
{
"math_id": 22,
"text": "f(a) = f(x)"
},
{
"math_id": 23,
"text": "f'(x)"
},
{
"math_id": 24,
"text": "f'(a)"
},
{
"math_id": 25,
"text": "\\sqrt{4.001}"
},
{
"math_id": 26,
"text": "f(x) = \\sqrt{x}"
},
{
"math_id": 27,
"text": "y = \\sqrt{a} + \\frac{1}{2 \\sqrt{a}}(x - a)"
},
{
"math_id": 28,
"text": "f'(x) = \\frac{1}{2 \\sqrt{x}}"
},
{
"math_id": 29,
"text": "a = 4"
},
{
"math_id": 30,
"text": "y = 2 + \\frac{x-4}{4}"
},
{
"math_id": 31,
"text": "x = 4.001"
},
{
"math_id": 32,
"text": "2 + \\frac{4.001-4}{4} = 2.00025"
},
{
"math_id": 33,
"text": "f(x,y)"
},
{
"math_id": 34,
"text": "p(a,b)"
},
{
"math_id": 35,
"text": " f(x,y) \\approx f(a,b) + \\left. {\\frac{{\\partial f(x,y)}}{{\\partial x}}} \\right|_{a,b} (x - a) + \\left. {\\frac{{\\partial f(x,y)}}{{\\partial y}}} \\right|_{a,b} (y - b)"
},
{
"math_id": 36,
"text": "f(\\mathbf{x})"
},
{
"math_id": 37,
"text": "\\mathbf{p}"
},
{
"math_id": 38,
"text": "f({\\mathbf{x}}) \\approx f({\\mathbf{p}}) + \\left. {\\nabla f} \\right|_{\\mathbf{p}} \\cdot ({\\mathbf{x}} - {\\mathbf{p}})"
},
{
"math_id": 39,
"text": "\\mathbf{x}"
},
{
"math_id": 40,
"text": "{\\nabla f}"
},
{
"math_id": 41,
"text": "\\frac{d\\mathbf{x}}{dt} = \\mathbf{F}(\\mathbf{x},t)"
},
{
"math_id": 42,
"text": "\\frac{d\\mathbf{x}}{dt} \\approx \\mathbf{F}(\\mathbf{x_0},t) + D\\mathbf{F}(\\mathbf{x_0},t) \\cdot (\\mathbf{x} - \\mathbf{x_0})"
},
{
"math_id": 43,
"text": "\\mathbf{x_0}"
},
{
"math_id": 44,
"text": "D\\mathbf{F}(\\mathbf{x_0},t)"
},
{
"math_id": 45,
"text": "\\mathbf{F}(\\mathbf{x},t)"
}
] | https://en.wikipedia.org/wiki?curid=1118789 |
1118832 | Arbitrarily large | In mathematics, the phrases arbitrarily large, arbitrarily small and arbitrarily long are used in statements to make clear the fact that an object is large, small, or long with little limitation or restraint, respectively. The use of "arbitrarily" often occurs in the context of real numbers (and its subsets thereof), though its meaning can differ from that of "sufficiently" and "infinitely".
Examples.
The statement
"formula_0 is non-negative for arbitrarily large "formula_1"."
is a shorthand for:
"For every real number "formula_2", formula_0 is non-negative for some value of "formula_1" greater than "formula_2"."
In the common parlance, the term "arbitrarily long" is often used in the context of sequence of numbers. For example, to say that there are "arbitrarily long arithmetic progressions of prime numbers" does not mean that there exists any infinitely long arithmetic progression of prime numbers (there is not), nor that there exists any particular arithmetic progression of prime numbers that is in some sense "arbitrarily long". Rather, the phrase is used to refer to the fact that no matter how large a number "formula_2" is, there exists some arithmetic progression of prime numbers of length at least "formula_2".
Similar to arbitrarily large, one can also define the phrase "formula_3 holds for arbitrarily small real numbers", as follows:
formula_4
In other words:
However small a number, there will be a number "formula_1" smaller than it such that formula_3 holds.
Arbitrarily large vs. sufficiently large vs. infinitely large.
While similar, "arbitrarily large" is not equivalent to "sufficiently large". For instance, while it is true that prime numbers can be arbitrarily large (since there are infinitely many of them due to Euclid's theorem), it is not true that all sufficiently large numbers are prime.
As another example, the statement "formula_0 is non-negative for arbitrarily large "formula_1"." could be rewritten as:
formula_5
However, using "sufficiently large", the same phrase becomes:
formula_6
Furthermore, "arbitrarily large" also does not mean "infinitely large". For example, although prime numbers can be arbitrarily large, an infinitely large prime number does not exist—since all prime numbers (as well as all other integers) are finite.
In some cases, phrases such as "the proposition formula_3 is true for arbitrarily large "formula_1"" are used primarily for emphasis, as in "formula_3 is true for all "formula_1", no matter how large "formula_1" is." In these cases, the phrase "arbitrarily large" does not have the meaning indicated above (i.e., "however large a number, there will be "some" larger number for which formula_3 still holds."). Instead, the usage in this case is in fact logically synonymous with "all". | [
{
"math_id": 0,
"text": "f(x)"
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "P(x)"
},
{
"math_id": 4,
"text": "\\forall \\epsilon \\in \\mathbb{R}_{+},\\, \\exists x \\in \\mathbb{R} : |x|<\\epsilon \\land P(x) "
},
{
"math_id": 5,
"text": "\\forall n \\in \\mathbb{R} \\mbox{, } \\exists x \\in \\mathbb{R} \\mbox{ such that } x > n \\land f(x) \\ge 0"
},
{
"math_id": 6,
"text": "\\exists n \\in \\mathbb{R} \\mbox{ such that } \\forall x \\in \\mathbb{R} \\mbox{, } x > n \\Rightarrow f(x) \\ge 0"
}
] | https://en.wikipedia.org/wiki?curid=1118832 |
1119231 | Logarithmic distribution | Discrete probability distribution
In probability and statistics, the logarithmic distribution (also known as the logarithmic series distribution or the log-series distribution) is a discrete probability distribution derived from the Maclaurin series expansion
formula_0
From this we obtain the identity
formula_1
This leads directly to the probability mass function of a Log("p")-distributed random variable:
formula_2
for "k" ≥ 1, and where 0 < "p" < 1. Because of the identity above, the distribution is properly normalized.
The cumulative distribution function is
formula_3
where "B" is the incomplete beta function.
A Poisson compounded with Log("p")-distributed random variables has a negative binomial distribution. In other words, if "N" is a random variable with a Poisson distribution, and "X""i", "i" = 1, 2, 3, ... is an infinite sequence of independent identically distributed random variables each having a Log("p") distribution, then
formula_4
has a negative binomial distribution. In this way, the negative binomial distribution is seen to be a compound Poisson distribution.
R. A. Fisher described the logarithmic distribution in a paper that used it to model relative species abundance. | [
{
"math_id": 0,
"text": "\n -\\ln(1-p) = p + \\frac{p^2}{2} + \\frac{p^3}{3} + \\cdots.\n"
},
{
"math_id": 1,
"text": "\\sum_{k=1}^{\\infty} \\frac{-1}{\\ln(1-p)} \\; \\frac{p^k}{k} = 1. "
},
{
"math_id": 2,
"text": " f(k) = \\frac{-1}{\\ln(1-p)} \\; \\frac{p^k}{k}"
},
{
"math_id": 3,
"text": " F(k) = 1 + \\frac{\\Beta(p; k+1,0)}{\\ln(1-p)}"
},
{
"math_id": 4,
"text": "\\sum_{i=1}^N X_i"
}
] | https://en.wikipedia.org/wiki?curid=1119231 |
1119342 | Dynamic nuclear polarization | Dynamic nuclear polarization (DNP) results from transferring spin polarization from electrons to nuclei, thereby aligning the nuclear spins to the extent that electron spins are aligned. Note that the alignment of electron spins at a given magnetic field and temperature is described by the Boltzmann distribution under the thermal equilibrium. It is also possible that those electrons are aligned to a higher degree of order by other preparations of electron spin order such as: chemical reactions (leading to chemical-induced DNP, CIDNP), optical pumping and spin injection. DNP is considered one of several techniques for hyperpolarization. DNP can also be induced using unpaired electrons produced by radiation damage in solids.
When electron spin polarization deviates from its thermal equilibrium value, polarization transfers between electrons and nuclei can occur spontaneously through electron-nuclear cross relaxation or spin-state mixing among electrons and nuclei. For example, the polarization transfer is spontaneous after a homolysis chemical reaction. On the other hand, when the electron spin system is in a thermal equilibrium, the polarization transfer requires continuous microwave irradiation at a frequency close to the corresponding electron paramagnetic resonance (EPR) frequency. In particular, mechanisms for the microwave-driven DNP processes are categorized into the Overhauser effect (OE), the solid-effect (SE), the cross-effect (CE) and thermal-mixing (TM).
The first DNP experiments were performed in the early 1950s at low magnetic fields but until recently the technique was of limited applicability for high-frequency, high-field NMR spectroscopy, because of the lack of microwave (or terahertz) sources operating at the appropriate frequency. Today such sources are available as turn-key instruments, making DNP a valuable and indispensable method especially in the field of structure determination by high-resolution solid-state NMR spectroscopy.
Mechanisms.
Overhauser effect.
DNP was first realized using the concept of the Overhauser effect, which is the perturbation of nuclear spin level populations observed in metals and free radicals when electron spin transitions are saturated by microwave irradiation. This effect relies on stochastic interactions between an electron and a nucleus. The word "dynamic" was initially meant to highlight the time-dependent and random interactions in this polarization transfer process.
The DNP phenomenon was theoretically predicted by Albert Overhauser in 1953 and initially drew some criticism from Norman Ramsey, Felix Bloch and other renowned physicists of the time on the grounds of being "thermodynamically improbable". The experimental confirmation by Carver and Slichter as well as an apologetic letter from Ramsey both reached Overhauser in the same year.
The so-called electron-nucleus cross-relaxation, which is responsible for the DNP phenomenon is caused by rotational and translational modulation of the electron-nucleus hyperfine coupling. The theory of this process is based essentially on the second-order time-dependent perturbation theory solution of the von Neumann equation for the spin density matrix.
While the Overhauser effect relies on time-dependent electron-nuclear interactions, the remaining polarizing mechanisms rely on time-independent electron-nuclear and electron-electron interactions.
Solid effect.
The simplest spin system exhibiting the SE DNP mechanism is an electron-nucleus spin pair. The Hamiltonian of the system can be written as:
formula_0
These terms refer respectively to the electron and nucleus Zeeman interaction with the external magnetic field, and the hyperfine interaction. S and I are the electron and nuclear spin operators in the Zeeman basis (spin 1⁄2 considered for simplicity), "ω"e and "ω"n are the electron and nuclear Larmor frequencies, and "A" and "B" are the secular and pseudo-secular parts of the hyperfine interaction. For simplicity we will only consider the case of |"A"|,|"B"| ≪ |"ω"n|. In such a case "A" has little effect on the evolution of the spin system. During DNP, MW irradiation is applied at a frequency "ω"MW and intensity "ω"1, resulting in a rotating frame Hamiltonian given by
formula_1where formula_2
The MW irradiation can excite the electron single quantum transitions ("allowed transitions") when "ω"MW is close to "ω"e, resulting in a loss of the electron polarization. In addition, due to the small state mixing caused by the B term of the hyperfine interaction, it is possible to irradiate on the electron-nucleus zero quantum or double quantum ("forbidden") transitions around "ω"MW = "ω"e ± "ω"n, resulting in polarization transfer between the electrons and the nuclei. The effective MW irradiation on these transitions is approximately given by "Bω"1/2"ω"n.
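A small numerical sketch (using NumPy, with arbitrary illustrative parameter values satisfying |"A"|,|"B"| ≪ "ω"n ≪ "ω"e) builds this two-spin Hamiltonian and diagonalizes it; the small off-diagonal entries of the eigenvector matrix correspond to the state mixing produced by the "B" term, which is what makes the "forbidden" zero and double quantum transitions weakly allowed.

```python
import numpy as np

# Spin-1/2 operators and the identity.
sz = np.array([[0.5, 0.0], [0.0, -0.5]])
sx = np.array([[0.0, 0.5], [0.5, 0.0]])
I2 = np.eye(2)

# Hypothetical parameters in arbitrary angular-frequency units: |A|, |B| << w_n << w_e.
w_e, w_n, A, B = 1000.0, 10.0, 0.1, 0.1

Sz, Iz = np.kron(sz, I2), np.kron(I2, sz)      # electron (S) and nucleus (I) operators
SzIz, SzIx = np.kron(sz, sz), np.kron(sz, sx)  # hyperfine terms

H0 = w_e * Sz + w_n * Iz + A * SzIz + B * SzIx
energies, states = np.linalg.eigh(H0)
print(np.round(states, 4))   # small off-diagonal entries: nuclear state mixing from the B term
```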
Static sample case.
In a simple picture of an electron-nucleus two-spin system, the solid effect occurs when a transition involving an electron-nucleus mutual flip (called zero quantum or double quantum) is excited by microwave irradiation, in the presence of relaxation. This kind of transition is in general weakly allowed, meaning that the transition moment for the above microwave excitation results from a second-order effect of the electron-nuclear interactions and thus requires stronger microwave power to be significant, and its intensity is decreased by an increase of the external magnetic field B0. As a result, the DNP enhancement from the solid effect scales as B0⁻² when all the relaxation parameters are kept constant. Once this transition is excited and the relaxation is acting, the magnetization is spread over the "bulk" nuclei (the major part of the detected nuclei in an NMR experiment) via the nuclear dipole network.
This polarizing mechanism is optimal when the exciting microwave frequency shifts up or down by the nuclear Larmor frequency from the electron Larmor frequency in the discussed two-spin system. The direction of frequency shifts corresponds to the sign of DNP enhancements.
Solid effect exist in most cases but is more easily observed if the linewidth of the EPR spectrum of involved unpaired electrons is smaller than the nuclear Larmor frequency of the corresponding nuclei.
Magic angle spinning case.
In the case of magic angle spinning DNP (MAS-DNP), the mechanism is different but to understand it, a two spins system can still be used. The polarization process of the nucleus still occurs when the microwave irradiation excites the double quantum or zero quantum transition, but due to the fact that the sample is spinning, this condition is only met for a short time at each rotor cycle (which makes it periodical). The DNP process in that case happens step by step and not continuously as in the static case.
Cross effect.
Static case.
The cross effect requires two unpaired electrons as the source of high polarization. Without special conditions, such a three-spin system can only generate a solid-effect type of polarization. However, when the resonance frequency of each electron is separated by the nuclear Larmor frequency, and when the two electrons are dipolar coupled, another mechanism occurs: the cross-effect. In that case, the DNP process is the result of irradiation of an allowed transition (called single quantum); as a result, the required strength of microwave irradiation is lower than that in the solid effect. In practice, the correct EPR frequency separation is accomplished through random orientation of paramagnetic species with g-anisotropy. Since the "frequency" distance between the two electrons should be equal to the Larmor frequency of the targeted nucleus, the cross-effect can only occur if the inhomogeneously broadened EPR lineshape has a linewidth broader than the nuclear Larmor frequency. Therefore, as this linewidth is proportional to external magnetic field B0, the overall DNP efficiency (or the enhancement of nuclear polarization) scales as B0⁻¹. This remains true as long as the relaxation times remain constant. Usually going to higher field leads to longer nuclear relaxation times and this may partially compensate for the line broadening reduction.
In practice, in a glassy sample, the probability of having two dipolarly coupled electrons separated by the Larmor frequency is very scarce. Nonetheless, this mechanism is so efficient that it can be experimentally observed alone or in addition to the solid-effect.
Magic angle spinning case.
As in the static case, the MAS-DNP mechanism of cross effect is deeply modified due to the time dependent energy level. By taking a simple three spin system, it has been demonstrated that the cross-effect mechanism is different in the Static and MAS case. The cross effect is the result of very fast multi-step process involving EPR single quantum transition, electron dipolar anti-crossing and cross effect degeneracy conditions.
In the most simple case the MAS-DNP mechanism can be explained by the combination of a single quantum transition followed by the cross-effect degeneracy condition, or by the electron-dipolar anti-crossing followed by the cross-effect degeneracy condition.
This in turn change dramatically the CE dependence over the static magnetic field which does not scale like B0−1 and makes it much more efficient than the solid effect.
Thermal mixing.
Thermal mixing is an energy exchange phenomenon between the electron spin ensemble and the nuclear spin, which can be thought of as using multiple electron spins to provide hyper-nuclear polarization. Note that the electron spin ensemble acts as a whole because of stronger inter-electron interactions. The strong interactions lead to a homogeneously broadened EPR lineshape of the involved paramagnetic species. The linewidth is optimized for polarization transfer from electrons to nuclei when it is close to the nuclear Larmor frequency. The optimization is related to an embedded three-spin (electron-electron-nucleus) process that mutually flips the coupled three spins under the energy conservation (mainly) of the Zeeman interactions. Due to the inhomogeneous component of the associated EPR lineshape, the DNP enhancement by this mechanism also scales as B0⁻¹.
DNP-NMR enhancement curves.
Many types of solid materials can exhibit more than one mechanism for DNP. Some examples are carbonaceous materials such as bituminous coal and charcoal (wood or cellulose heated at high temperatures above their decomposition point, which leaves a residual solid char). To separate out the mechanisms of DNP and to characterize the electron-nuclear interactions occurring in such solids, a DNP enhancement curve can be made. A typical enhancement curve is obtained by measuring the maximum intensity of the NMR FID of the 1H nuclei, for example, in the presence of continuous microwave irradiation as a function of the microwave frequency offset.
Carbonaceous materials such as cellulose char contain large numbers of stable free electrons delocalized in large polycyclic aromatic hydrocarbons. Such electrons can give large polarization enhancements to nearby protons via proton-proton spin-diffusion, provided they are not so close together that the electron-nuclear dipolar interaction broadens the proton resonance beyond detection. For small isolated clusters, the free electrons are fixed and give rise to solid-state enhancements (SS). The maximal proton solid-state enhancement is observed at microwave offsets of ω ≈ ωe ± ωH, where ωe and ωH are the electron and nuclear Larmor frequencies, respectively. For larger and more densely concentrated aromatic clusters, the free electrons can undergo rapid electron exchange interactions. These electrons give rise to an Overhauser enhancement centered at a microwave offset of ωe – ωH = 0. The cellulose char also exhibits electrons undergoing thermal mixing effects (TM). While the enhancement curve reveals the types of electron-nuclear spin interactions in a material, it is not quantitative and the relative abundance of the different types of nuclei cannot be determined directly from the curve.
DNP-NMR.
DNP can be performed to enhance the NMR signals but also to introduce an inherent spatial dependence: the magnetization enhancement takes place in the vicinity of the irradiated electrons and propagates throughout the sample. Spatial selectivity can finally be obtained using magnetic resonance imaging (MRI) techniques, so that signals from similar parts can be separated based on their location in the sample.
DNP has triggered enthusiasm in the NMR community because it can enhance sensitivity in solid-state NMR. In DNP, a large electronic spin polarization is transferred onto the nuclear spins of interest using a microwave source. There are two main DNP approaches for solids. If the material does not contain suitable unpaired electrons, exogenous DNP is applied: the material is impregnated with a solution containing a specific radical. When possible, endogenous DNP is performed using the electrons in transition metal ions (metal-ion dynamic nuclear polarization, MIDNP) or conduction electrons. The experiments usually need to be performed at low temperatures with magic angle spinning. It is important to note that DNP has so far only been performed ex situ, as it usually requires low temperatures to slow electronic relaxation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "H_0=\\omega_eS_z+\\omega_{\\rm n}I_z+AS_zI_z+B\\ S_zI_x"
},
{
"math_id": 1,
"text": "H=\\Delta\\omega_e\\;S_z+\\omega_{\\rm n}I_z+AS_zI_z+B\\ S_zI_x+\\omega_1 S_x"
},
{
"math_id": 2,
"text": "\\Delta\\omega_e=\\omega_e-\\omega_{\\rm MW}"
}
] | https://en.wikipedia.org/wiki?curid=1119342 |
1119623 | Yule–Simon distribution | Discrete probability distribution
In probability and statistics, the Yule–Simon distribution is a discrete probability distribution named after Udny Yule and Herbert A. Simon. Simon originally called it the Yule distribution.
The probability mass function (pmf) of the Yule–Simon ("ρ") distribution is
formula_0
for integer formula_1 and real formula_2, where formula_3 is the beta function. Equivalently the pmf can be written in terms of the rising factorial as
formula_4
where formula_5 is the gamma function. Thus, if formula_6 is an integer,
formula_7
The parameter formula_6 can be estimated using a fixed point algorithm.
The probability mass function "f" has the property that for sufficiently large "k" we have
formula_8
This means that the tail of the Yule–Simon distribution is a realization of Zipf's law: formula_9 can be used to model, for example, the relative frequency of the formula_10th most frequent word in a large collection of text, which according to Zipf's law is inversely proportional to a (typically small) power of formula_10.
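As a minimal numerical illustration of the pmf and of this power-law tail, the following Python sketch evaluates formula_0 directly (assuming SciPy is available; the function name is chosen here for illustration only):

```python
# Illustrative sketch: evaluating the Yule–Simon pmf f(k; rho) = rho * B(k, rho + 1)
# and comparing its tail with the power law rho * Gamma(rho + 1) / k**(rho + 1).
from math import gamma
from scipy.special import beta

def yule_simon_pmf(k, rho):
    """Probability mass at integer k >= 1 for the Yule–Simon(rho) distribution."""
    return rho * beta(k, rho + 1)

rho = 2.0
print(sum(yule_simon_pmf(k, rho) for k in range(1, 100_000)))  # close to 1
k = 1_000
print(yule_simon_pmf(k, rho), rho * gamma(rho + 1) / k ** (rho + 1))  # nearly equal
```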
Occurrence.
The Yule–Simon distribution arose originally as the limiting distribution of a particular model studied by Udny Yule in 1925 to analyze the growth in the number of species per genus in some higher taxa of biotic organisms. The Yule model makes use of two related Yule processes, where a Yule process is defined as a continuous time birth process which starts with one or more individuals. Yule proved that when time goes to infinity, the limit distribution of the number of species in a genus selected uniformly at random has a specific form and exhibits a power-law behavior in its tail. Thirty years later, the Nobel laureate Herbert A. Simon proposed a time-discrete preferential attachment model to describe the appearance of new words in a large piece of a text. Interestingly enough, the limit distribution of the number of occurrences of each word, when the number of words diverges, coincides with that of the number of species belonging to the randomly chosen genus in the Yule model, for a specific choice of the parameters. This fact explains the designation Yule–Simon distribution that is commonly assigned to that limit distribution. In the context of random graphs, the Barabási–Albert model also exhibits an asymptotic degree distribution that equals the Yule–Simon distribution for a specific choice of the parameters, and still presents power-law characteristics for more general choices of the parameters. The same happens also for other preferential attachment random graph models.
The preferential attachment process can also be studied as an urn process in which balls are added to a growing number of urns, each ball being allocated to an urn with probability linear in the number (of balls) the urn already contains.
The distribution also arises as a compound distribution, in which the parameter of a geometric distribution is treated as a function of a random variable having an exponential distribution. Specifically, assume that formula_11 follows an exponential distribution with scale formula_12 or rate formula_6:
formula_13
with density
formula_14
Then a Yule–Simon distributed variable "K" has the following geometric distribution conditional on "W":
formula_15
The pmf of a geometric distribution is
formula_16
for formula_17. The Yule–Simon pmf is then the following exponential-geometric compound distribution:
formula_18
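This compound construction also gives a simple way to simulate the distribution. The following sketch is illustrative only (it assumes NumPy; variable names are not from the literature): it draws formula_11 from the exponential distribution and then formula_10 from the geometric distribution with success probability formula_17-independent parameter exp(−W).

```python
# Illustrative sketch: sampling Yule–Simon(rho) variates via the
# exponential–geometric compound representation described above.
import numpy as np

rng = np.random.default_rng(0)
rho, n = 2.0, 200_000

w = rng.exponential(scale=1.0 / rho, size=n)   # W ~ Exponential(rate rho)
k = rng.geometric(p=np.exp(-w))                # K | W ~ Geometric(exp(-W)) on {1, 2, ...}

# The empirical mass at 1 should match f(1; rho) = rho * B(1, rho + 1) = rho / (rho + 1).
print((k == 1).mean(), rho / (rho + 1))
```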
The maximum likelihood estimator for the parameter formula_19 given the observations formula_20 is the solution to the fixed point equation
formula_21
where formula_22 are the rate and shape parameters of the gamma distribution prior on formula_23.
This algorithm is derived by Garcia by directly optimizing the likelihood. Roberts and Roberts
generalize the algorithm to Bayesian settings with the compound geometric formulation described above. Additionally, Roberts and Roberts use the Expectation Maximisation (EM) framework to show convergence of the fixed point algorithm, and they derive the sub-linearity of its convergence rate. They also use the EM formulation to give two alternative derivations of the standard error of the estimator from the fixed point equation. The variance of the formula_24 estimator is
formula_25
the standard error is the square root of the quantity of this estimate divided by N.
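A direct implementation of this fixed-point iteration is short. The following sketch is illustrative only; it uses the non-informative choice formula_22 mentioned above, and the helper name is not taken from the cited works.

```python
# Illustrative sketch of the fixed-point iteration
#   rho <- (N + a - 1) / (b + sum_i sum_{j=1..k_i} 1 / (rho + j))
# with the default a = 1, b = 0 discussed above.
import numpy as np

def fit_yule_simon(ks, a=1.0, b=0.0, rho0=1.0, n_iter=200):
    ks = np.asarray(ks, dtype=int)
    rho = rho0
    for _ in range(n_iter):
        denom = b + sum(np.sum(1.0 / (rho + np.arange(1, k + 1))) for k in ks)
        rho = (len(ks) + a - 1.0) / denom
    return rho

print(fit_yule_simon([1, 1, 2, 1, 3, 1, 1, 5, 1, 2]))  # rough estimate from a tiny sample
```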
Generalizations.
The two-parameter generalization of the original Yule distribution replaces the beta function with an incomplete beta function. The probability mass function of the generalized Yule–Simon("ρ", "α") distribution is defined as
formula_26
with formula_27. For formula_28 the ordinary Yule–Simon("ρ") distribution is obtained as a special case. The use of the incomplete beta function has the effect of introducing an exponential cutoff in the upper tail. | [
{
"math_id": 0,
"text": "f(k;\\rho) = \\rho\\operatorname{B}(k, \\rho+1),"
},
{
"math_id": 1,
"text": "k \\geq 1"
},
{
"math_id": 2,
"text": "\\rho > 0"
},
{
"math_id": 3,
"text": "\\operatorname{B}"
},
{
"math_id": 4,
"text": "\n f(k;\\rho) = \\frac{\\rho\\Gamma(\\rho+1)}{(k+\\rho)^{\\underline{\\rho+1}}},\n"
},
{
"math_id": 5,
"text": "\\Gamma"
},
{
"math_id": 6,
"text": "\\rho"
},
{
"math_id": 7,
"text": "\n f(k;\\rho) = \\frac{\\rho\\,\\rho!\\,(k-1)!}{(k+\\rho)!}.\n"
},
{
"math_id": 8,
"text": "\n f(k;\\rho)\n \\approx \\frac{\\rho\\Gamma(\\rho+1)}{k^{\\rho+1}}\n \\propto \\frac 1 {k^{\\rho+1}}.\n"
},
{
"math_id": 9,
"text": "f(k;\\rho)"
},
{
"math_id": 10,
"text": "k"
},
{
"math_id": 11,
"text": "W"
},
{
"math_id": 12,
"text": "1/\\rho"
},
{
"math_id": 13,
"text": "W \\sim \\operatorname{Exponential}(\\rho),"
},
{
"math_id": 14,
"text": "h(w;\\rho) = \\rho \\exp(-\\rho w)."
},
{
"math_id": 15,
"text": "K \\sim \\operatorname{Geometric}(\\exp(-W))."
},
{
"math_id": 16,
"text": "g(k; p) = p (1-p)^{k-1}"
},
{
"math_id": 17,
"text": "k\\in\\{1,2,\\dotsc\\}"
},
{
"math_id": 18,
"text": "f(k;\\rho)\n = \\int_0^\\infty g(k;\\exp(-w)) h(w;\\rho)\\,dw.\n"
},
{
"math_id": 19,
"text": "\\rho "
},
{
"math_id": 20,
"text": "k_1,k_2,k_3,\\dots,k_N"
},
{
"math_id": 21,
"text": "\n \\rho^{(t+1)} = \\frac{N+a-1}{b+\\sum_{i=1}^N\\sum_{j=1}^{k_i}\\frac{1}{\\rho^{(t)} + j}},\n"
},
{
"math_id": 22,
"text": " b=0, a=1"
},
{
"math_id": 23,
"text": " \\rho "
},
{
"math_id": 24,
"text": " \\lambda "
},
{
"math_id": 25,
"text": "\n\\operatorname{Var}(\\hat{\\lambda}) = \\frac{1}{\\frac{N}{\\hat{\\lambda}^2} - \\sum_{i=1}^N\\sum_{j=1}^{k_i}\\frac{1}{(\\hat{\\lambda} + j)^2}},\n"
},
{
"math_id": 26,
"text": "\n f(k;\\rho,\\alpha) = \\frac \\rho {1-\\alpha^\\rho} \\;\n \\mathrm{B}_{1-\\alpha}(k, \\rho+1),\n \\,"
},
{
"math_id": 27,
"text": "0 \\leq \\alpha < 1"
},
{
"math_id": 28,
"text": "\\alpha = 0"
}
] | https://en.wikipedia.org/wiki?curid=1119623 |
11196859 | Urocanase | Urocanase (also known as imidazolonepropionate hydrolase or urocanate hydratase) is the enzyme (EC 4.2.1.49) that catalyzes the second step in the degradation of histidine, the hydration of urocanate into imidazolonepropionate.
Urocanase is coded for by the UROC1 gene, located on chromosome 3 in humans. The protein itself is composed of 676 amino acids, which fold to produce a final product consisting of two identical subunits, making the enzyme a homodimer.
To catalyze the hydration of urocanate in the catabolic pathway of L-histidine, the enzyme utilizes its two NAD+ (nicotinamide adenine dinucleotide) groups. The NAD+ groups act as electrophiles, attaching to the top carbon of the urocanate, which leads to a sigmatropic rearrangement of the urocanate molecule. This rearrangement allows for the addition of a water molecule, converting the urocanate into 4,5-dihydro-4-oxo-5-imidazolepropanoate.
urocanate + H2O formula_0 4,5-dihydro-4-oxo-5-imidazolepropanoate
Inherited deficiency of urocanase leads to elevated levels of urocanic acid in the urine, a condition known as urocanic aciduria.
Urocanase is found in some bacteria (gene hutU), in the liver of many vertebrates, and has also been found in the plant "Trifolium repens" (white clover). Urocanase is a protein of about 60 kDa; it binds tightly to NAD+ and uses it as an electrophilic cofactor. A conserved cysteine has been found to be important for the catalytic mechanism and could be involved in the binding of the NAD+.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] | https://en.wikipedia.org/wiki?curid=11196859 |
11197065 | Foster graph | Bipartite 3-regular graph with 90 vertices and 135 edges
In the mathematical field of graph theory, the Foster graph is a bipartite 3-regular graph with 90 vertices and 135 edges.
The Foster graph is Hamiltonian and has chromatic number 2, chromatic index 3, radius 8, diameter 8 and girth 10. It is also a 3-vertex-connected and 3-edge-connected graph. It has queue number 2 and the upper bound on the book thickness is 4.
All the cubic distance-regular graphs are known. The Foster graph is one of the 13 such graphs. It is the unique distance-transitive graph with intersection array {3,2,2,2,2,1,1,1;1,1,1,1,2,2,2,3}. It can be constructed as the incidence graph of the partial linear space which is the unique triple cover with no 8-gons of the generalized quadrangle "GQ"(2,2). It is named after R. M. Foster, whose "Foster census" of cubic symmetric graphs included this graph.
The bipartite half of the Foster graph is a distance-regular graph and a locally linear graph. It is one of a finite number of such graphs with degree six.
Algebraic properties.
The automorphism group of the Foster graph is a group of order 4320. It acts transitively on the vertices, on the edges and on the arcs of the graph. Therefore, the Foster graph is a symmetric graph. It has automorphisms that take any vertex to any other vertex and any edge to any other edge. According to the "Foster census", the Foster graph, referenced as F90A, is the only cubic symmetric graph on 90 vertices.
The characteristic polynomial of the Foster graph is equal to formula_0.
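These properties can be checked numerically. The following sketch is illustrative only: it assumes the commonly cited LCF notation [17, −9, 37, −37, 9, −17]15 for the Foster graph (not stated in this article) and uses NetworkX and NumPy.

```python
# Illustrative sketch: building the Foster graph from an assumed LCF notation
# and checking regularity, bipartiteness, radius/diameter and the spectrum.
import networkx as nx
import numpy as np

G = nx.LCF_graph(90, [17, -9, 37, -37, 9, -17], 15)   # assumed LCF description

print(G.number_of_nodes(), G.number_of_edges())        # 90, 135
print(set(dict(G.degree()).values()))                  # {3}: cubic
print(nx.is_bipartite(G), nx.radius(G), nx.diameter(G))

eigs = np.round(np.linalg.eigvalsh(nx.to_numpy_array(G)), 6)
print(sorted(set(eigs)))   # +-3, +-sqrt(6), +-2, +-1, 0, as in the characteristic polynomial
```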
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(x-3) (x-2)^9 (x-1)^{18} x^{10} (x+1)^{18} (x+2)^9 (x+3) (x^2-6)^{12}"
}
] | https://en.wikipedia.org/wiki?curid=11197065 |
11197155 | Partial linear space | A partial linear space (also semilinear or near-linear space) is a basic incidence structure in the field of incidence geometry, that carries slightly less structure than a linear space.
The notion is equivalent to that of a linear hypergraph.
Definition.
Let formula_0 be an incidence structure, for which the elements of formula_1 are called "points" and the elements of formula_2 are called "lines". "S" is a partial linear space if the following axioms hold: every line is incident with at least two points, and any pair of distinct points is incident with at most one line.
If there is a unique line incident with every pair of distinct points, then we get a linear space.
Properties.
The De Bruijn–Erdős theorem shows that in any finite linear space formula_3 which is not a single point or a single line, we have formula_4. | [
{
"math_id": 0,
"text": "S=({\\mathcal P},{\\mathcal L}, \\textbf{I}) "
},
{
"math_id": 1,
"text": "{\\mathcal P}"
},
{
"math_id": 2,
"text": "{\\mathcal L}"
},
{
"math_id": 3,
"text": "S=({\\mathcal P},{\\mathcal L}, \\textbf{I})"
},
{
"math_id": 4,
"text": "|\\mathcal{P}| \\leq |\\mathcal{L}|"
}
] | https://en.wikipedia.org/wiki?curid=11197155 |
11199199 | Trace operator | In mathematics, the trace operator extends the notion of the restriction of a function to the boundary of its domain to "generalized" functions in a Sobolev space. This is particularly important for the study of partial differential equations with prescribed boundary conditions (boundary value problems), where weak solutions may not be regular enough to satisfy the boundary conditions in the classical sense of functions.
Motivation.
On a bounded, smooth domain formula_0, consider the problem of solving Poisson's equation with inhomogeneous Dirichlet boundary conditions:
formula_1
with given functions formula_2 and formula_3 with regularity discussed in the application section below. The weak solution formula_4 of this equation must satisfy
formula_5 for all formula_6.
The formula_7-regularity of formula_8 is sufficient for the well-definedness of this integral equation. It is not apparent, however, in which sense formula_8 can satisfy the boundary condition formula_9 on formula_10: by definition, formula_11 is an equivalence class of functions which can have arbitrary values on formula_10 since this is a null set with respect to the n-dimensional Lebesgue measure.
If formula_12 there holds formula_13 by Sobolev's embedding theorem, such that formula_8 can satisfy the boundary condition in the classical sense, i.e. the restriction of formula_8 to formula_10 agrees with the function formula_3 (more precisely: there exists a representative of formula_8 in formula_14 with this property). For formula_0 with formula_15 such an embedding does not exist and the trace operator formula_16 presented here must be used to give meaning to formula_17. Then formula_4 with formula_18 is called a weak solution to the boundary value problem if the integral equation above is satisfied. For the definition of the trace operator to be reasonable, there must hold formula_19 for sufficiently regular formula_8.
Trace theorem.
The trace operator can be defined for functions in the Sobolev spaces formula_20 with formula_21, see the section below for possible extensions of the trace to other spaces. Let formula_0 for formula_22 be a bounded domain with Lipschitz boundary. Then there exists a bounded linear trace operator
formula_23
such that formula_16 extends the classical trace, i.e.
formula_24 for all formula_25.
The continuity of formula_16 implies that
formula_26 for all formula_27
with a constant depending only on formula_28 and formula_29. The function formula_30 is called the trace of formula_8 and is often simply denoted by formula_17. Other common symbols for formula_16 include formula_31 and formula_32.
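The content of this inequality can be illustrated with a crude numerical experiment. The following sketch is illustrative only (it is not a proof and the discretization is naive): it samples a smooth function on the unit square and compares the boundary L2 norm of its classical trace with its H1 norm.

```python
# Illustrative sketch: for a smooth u on the unit square, compare the boundary
# L^2 norm of the (classical) trace with the H^1 norm, as in ||Tu|| <= C ||u||_{H^1}.
import numpy as np

n = 400
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")

u = np.sin(3 * X) * np.exp(Y)            # a smooth test function
ux, uy = np.gradient(u, h, h)

h1_norm = np.sqrt(np.sum(u**2 + ux**2 + uy**2) * h * h)
boundary = np.concatenate([u[0, :], u[-1, :], u[:, 0], u[:, -1]])
trace_norm = np.sqrt(np.sum(boundary**2) * h)

print(trace_norm, h1_norm, trace_norm / h1_norm)   # the ratio stays bounded
```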
Construction.
This paragraph follows Evans, where more details can be found, and assumes that formula_29 has a formula_33-boundary. A proof (of a stronger version) of the trace theorem for Lipschitz domains can be found in Gagliardo. On a formula_33-domain, the trace operator can be defined as continuous linear extension of the operator
formula_34
to the space formula_35. By density of formula_36 in formula_35 such an extension is possible if formula_16 is continuous with respect to the formula_35-norm. The proof of this, i.e. that there exists formula_37 (depending on formula_29 and formula_28) such that
formula_38 for all formula_39
is the central ingredient in the construction of the trace operator. A local variant of this estimate for formula_40-functions is first proven for a locally flat boundary using the divergence theorem. By transformation, a general formula_33-boundary can be locally straightened to reduce to this case, where the formula_33-regularity of the transformation requires that the local estimate holds for formula_40-functions.
With this continuity of the trace operator in formula_36 an extension to formula_35 exists by abstract arguments and formula_41 for formula_27 can be characterized as follows. Let formula_42 be a sequence approximating formula_27 by density. By the proven continuity of formula_16 in formula_36 the sequence formula_43 is a Cauchy sequence in formula_44 and formula_45 with limit taken in formula_44.
The extension property formula_19 holds for formula_46 by construction, but for any formula_25 there exists a sequence formula_42 which converges uniformly on formula_47 to formula_8, verifying the extension property on the larger set formula_48.
The case p = ∞.
If formula_29 is bounded and has a formula_33-boundary then by Morrey's inequality there exists a continuous embedding formula_49, where formula_50 denotes the space of Lipschitz continuous functions. In particular, any function formula_51 has a classical trace formula_52 and there holds
formula_53
Functions with trace zero.
The Sobolev spaces formula_54 for formula_21 are defined as the closure of the set of compactly supported test functions formula_55 with respect to the formula_35-norm. The following alternative characterization holds:
formula_56
where formula_57 is the kernel of formula_16, i.e. formula_58 is the subspace of functions in formula_35 with trace zero.
Image of the trace operator.
For p > 1.
The trace operator is not surjective onto formula_44 if formula_59, i.e. not every function in formula_44 is the trace of a function in formula_35. As elaborated below the image consists of functions which satisfy an formula_60-version of Hölder continuity.
Abstract characterization.
An abstract characterization of the image of formula_16 can be derived as follows. By the isomorphism theorems there holds
formula_61
where formula_62 denotes the quotient space of the Banach space formula_63 by the subspace formula_64 and the last identity follows from the characterization of formula_58 from above. Equipping the quotient space with the quotient norm defined by
formula_65
the trace operator formula_16 is then a surjective, bounded linear operator
formula_66.
Characterization using Sobolev–Slobodeckij spaces.
A more concrete representation of the image of formula_16 can be given using Sobolev-Slobodeckij spaces which generalize the concept of Hölder continuous functions to the formula_60-setting. Since formula_10 is a "(n-1)"-dimensional Lipschitz manifold embedded into formula_67 an explicit characterization of these spaces is technically involved. For simplicity consider first a planar domain formula_68. For formula_69 define the (possibly infinite) norm
formula_70
which generalizes the Hölder condition formula_71. Then
formula_72
equipped with the previous norm is a Banach space (a general definition of formula_73 for non-integer formula_74 can be found in the article for Sobolev-Slobodeckij spaces). For the "(n-1)"-dimensional Lipschitz manifold formula_10 define formula_75 by locally straightening formula_10 and proceeding as in the definition of formula_76.
The space formula_75 can then be identified as the image of the trace operator and there holds that
formula_77
is a surjective, bounded linear operator.
For p = 1.
For formula_78 the image of the trace operator is formula_79 and there holds that
formula_80
is a surjective, bounded linear operator.
Right-inverse: trace extension operator.
The trace operator is not injective since multiple functions in formula_35 can have the same trace (or equivalently, formula_81). The trace operator has however a well-behaved right-inverse, which extends a function defined on the boundary to the whole domain. Specifically, for formula_82 there exists a bounded, linear trace extension operator
formula_83,
using the Sobolev-Slobodeckij characterization of the trace operator's image from the previous section, such that
formula_84 for all formula_85
and, by continuity, there exists formula_37 with
formula_86.
What is notable is not the mere existence but the linearity and continuity of the right inverse. This trace extension operator must not be confused with the whole-space extension operators formula_87, which play a fundamental role in the theory of Sobolev spaces.
Extension to other spaces.
Higher derivatives.
Many of the previous results can be extended to formula_88 with higher differentiability formula_89 if the domain is sufficiently regular. Let formula_90 denote the exterior unit normal field on formula_10.
Since formula_17 can encode differentiability properties in tangential directions only, the normal derivative formula_91 is of additional interest for the trace theory for formula_92. Similar arguments apply to higher-order derivatives for formula_93.
Let formula_82 and formula_0 be a bounded domain with formula_94-boundary. Then there exists a surjective, bounded linear higher-order trace operator
formula_95
with Sobolev-Slobodeckij spaces formula_96 for non-integer formula_74 defined on formula_10 through transformation to the planar case formula_97 for formula_68, whose definition is elaborated in the article on Sobolev-Slobodeckij spaces. The operator formula_98 extends the classical normal traces in the sense that
formula_99 for all formula_100
Furthermore, there exists a bounded, linear right-inverse of formula_98, a higher-order trace extension operator
formula_101.
Finally, the spaces formula_102, the completion of formula_55 in the formula_88-norm, can be characterized as the kernel of formula_98, i.e.
formula_103.
Less regular spaces.
No trace in "Lp".
There is no sensible extension of the concept of traces to formula_104 for formula_21 since any bounded linear operator which extends the classical trace must be zero on the space of test functions formula_55, which is a dense subset of formula_104, implying that such an operator would be zero everywhere.
Generalized normal trace.
Let formula_105 denote the distributional divergence of a vector field formula_106. For formula_82 and bounded Lipschitz domain formula_0 define
formula_107
which is a Banach space with norm
formula_108.
Let formula_90 denote the exterior unit normal field on formula_10. Then there exists a bounded linear operator
formula_109,
where formula_110 is the conjugate exponent to formula_28 and formula_111 denotes the continuous dual space to a Banach space formula_63, such that formula_112 extends the normal trace formula_113 for formula_114 in the sense that
formula_115.
The value of the normal trace operator formula_116 for formula_117 is defined by application of the divergence theorem to the vector field formula_118 where formula_119 is the trace extension operator from above.
"Application." Any weak solution formula_4 to formula_120 in a bounded Lipschitz domain formula_0 has a normal derivative in the sense of formula_121. This follows as formula_122 since formula_123 and formula_124. This result is notable since in Lipschitz domains in general formula_125, such that formula_126 may not lie in the domain of the trace operator formula_16.
Application.
The theorems presented above allow a closer investigation of the boundary value problem
formula_1
on a Lipschitz domain formula_0 from the motivation. Since only the Hilbert space case formula_127 is investigated here, the notation formula_7 is used to denote formula_128 etc. As stated in the motivation, a weak solution formula_4 to this equation must satisfy formula_18 and
formula_5 for all formula_6,
where the right-hand side must be interpreted for formula_129 as a duality product with the value formula_130.
Existence and uniqueness of weak solutions.
The characterization of the range of formula_16 implies that for formula_18 to hold the regularity formula_131 is necessary. This regularity is also sufficient for the existence of a weak solution, which can be seen as follows. By the trace extension theorem there exists formula_132 such that formula_133. Defining formula_134 by formula_135 we have that formula_136 and thus formula_137 by the characterization of formula_138 as space of trace zero. The function formula_137 then satisfies the integral equation
formula_139 for all formula_6.
Thus the problem with inhomogeneous boundary values for formula_8 could be reduced to a problem with homogeneous boundary values for formula_134, a technique which can be applied to any linear differential equation. By the Riesz representation theorem there exists a unique solution formula_134 to this problem. By uniqueness of the decomposition formula_140, this is equivalent to the existence of a unique weak solution formula_8 to the inhomogeneous boundary value problem.
Continuous dependence on the data.
It remains to investigate the dependence of formula_8 on formula_2 and formula_3. Let formula_141 denote constants independent of formula_2 and formula_3. By continuous dependence of formula_134 on the right-hand side of its integral equation, there holds
formula_142
and thus, using that formula_143 and formula_144 by continuity of the trace extension operator, it follows that
formula_145
and the solution map
formula_146
is therefore continuous. | [
{
"math_id": 0,
"text": "\\Omega \\subset \\mathbb R^n"
},
{
"math_id": 1,
"text": "\\begin{alignat}{2}\n-\\Delta u &= f &\\quad&\\text{in } \\Omega,\\\\\nu &= g &&\\text{on } \\partial \\Omega\n\\end{alignat}"
},
{
"math_id": 2,
"text": "f"
},
{
"math_id": 3,
"text": "g"
},
{
"math_id": 4,
"text": "u \\in H^1(\\Omega)"
},
{
"math_id": 5,
"text": "\\int_\\Omega \\nabla u \\cdot \\nabla \\varphi \\,\\mathrm dx = \\int_\\Omega f \\varphi \\,\\mathrm dx"
},
{
"math_id": 6,
"text": "\\varphi \\in H^1_0(\\Omega)"
},
{
"math_id": 7,
"text": "H^1(\\Omega)"
},
{
"math_id": 8,
"text": "u"
},
{
"math_id": 9,
"text": "u = g"
},
{
"math_id": 10,
"text": "\\partial \\Omega"
},
{
"math_id": 11,
"text": "u \\in H^1(\\Omega) \\subset L^2(\\Omega)"
},
{
"math_id": 12,
"text": "\\Omega \\subset \\mathbb R^1"
},
{
"math_id": 13,
"text": "H^1(\\Omega) \\hookrightarrow C^0(\\bar \\Omega)"
},
{
"math_id": 14,
"text": "C(\\bar \\Omega)"
},
{
"math_id": 15,
"text": "n > 1"
},
{
"math_id": 16,
"text": "T"
},
{
"math_id": 17,
"text": "u |_{\\partial \\Omega}"
},
{
"math_id": 18,
"text": "T u = g"
},
{
"math_id": 19,
"text": "T u = u |_{\\partial \\Omega}"
},
{
"math_id": 20,
"text": "W^{1,p}(\\Omega)"
},
{
"math_id": 21,
"text": "1 \\leq p < \\infty"
},
{
"math_id": 22,
"text": "n \\in \\mathbb N"
},
{
"math_id": 23,
"text": "T\\colon W^{1, p}(\\Omega) \\to L^p(\\partial \\Omega)"
},
{
"math_id": 24,
"text": "T u = u |_{\\partial \\Omega}"
},
{
"math_id": 25,
"text": "u \\in W^{1, p}(\\Omega) \\cap C(\\bar \\Omega)"
},
{
"math_id": 26,
"text": "\\| T u \\|_{L^p(\\partial \\Omega)} \\leq C \\| u \\|_{W^{1,p}(\\Omega)}"
},
{
"math_id": 27,
"text": "u \\in W^{1, p}(\\Omega)"
},
{
"math_id": 28,
"text": "p"
},
{
"math_id": 29,
"text": "\\Omega"
},
{
"math_id": 30,
"text": "T u"
},
{
"math_id": 31,
"text": "tr"
},
{
"math_id": 32,
"text": "\\gamma"
},
{
"math_id": 33,
"text": "C^1"
},
{
"math_id": 34,
"text": "T:C^\\infty(\\bar \\Omega)\\to L^p(\\partial \\Omega)"
},
{
"math_id": 35,
"text": "W^{1, p}(\\Omega)"
},
{
"math_id": 36,
"text": "C^\\infty(\\bar \\Omega)"
},
{
"math_id": 37,
"text": "C > 0"
},
{
"math_id": 38,
"text": "\\|Tu\\|_{L^{p}(\\partial \\Omega)}\\le C \\|u\\|_{W^{1, p}(\\Omega)}"
},
{
"math_id": 39,
"text": "u \\in C^\\infty(\\bar \\Omega)."
},
{
"math_id": 40,
"text": "C^1(\\bar \\Omega)"
},
{
"math_id": 41,
"text": "Tu"
},
{
"math_id": 42,
"text": "u_k \\in C^\\infty(\\bar \\Omega)"
},
{
"math_id": 43,
"text": "u_k |_{\\partial \\Omega}"
},
{
"math_id": 44,
"text": "L^p(\\partial \\Omega)"
},
{
"math_id": 45,
"text": "T u = \\lim_{k \\to \\infty} u_k |_{\\partial \\Omega}"
},
{
"math_id": 46,
"text": "u \\in C^{\\infty}(\\bar \\Omega)"
},
{
"math_id": 47,
"text": "\\bar \\Omega"
},
{
"math_id": 48,
"text": "W^{1, p}(\\Omega) \\cap C(\\bar \\Omega)"
},
{
"math_id": 49,
"text": "W^{1, \\infty}(\\Omega) \\hookrightarrow C^{0, 1}(\\Omega)"
},
{
"math_id": 50,
"text": "C^{0, 1}(\\Omega)"
},
{
"math_id": 51,
"text": "u \\in W^{1, \\infty}(\\Omega)"
},
{
"math_id": 52,
"text": "u |_{\\partial \\Omega} \\in C(\\partial \\Omega)"
},
{
"math_id": 53,
"text": "\\| u |_{\\partial \\Omega} \\|_{C(\\partial \\Omega)} \\leq \\| u \\|_{C^{0, 1}(\\Omega)} \\leq C \\| u \\|_{W^{1, \\infty}(\\Omega)}."
},
{
"math_id": 54,
"text": "W^{1,p}_0(\\Omega)"
},
{
"math_id": 55,
"text": "C^\\infty_c(\\Omega)"
},
{
"math_id": 56,
"text": "W^{1, p}_0(\\Omega) = \\{ u \\in W^{1, p}(\\Omega) \\mid T u = 0 \\} = \\ker(T\\colon W^{1, p}(\\Omega) \\to L^p(\\partial \\Omega)),"
},
{
"math_id": 57,
"text": "\\ker(T)"
},
{
"math_id": 58,
"text": "W^{1, p}_0(\\Omega)"
},
{
"math_id": 59,
"text": "p > 1"
},
{
"math_id": 60,
"text": "L^p"
},
{
"math_id": 61,
"text": "T(W^{1,p}(\\Omega)) \\cong W^{1, p}(\\Omega) / \\ker(T\\colon W^{1, p}(\\Omega) \\to L^p(\\partial \\Omega)) = W^{1, p}(\\Omega) / W^{1, p}_0(\\Omega)"
},
{
"math_id": 62,
"text": "X / N"
},
{
"math_id": 63,
"text": "X"
},
{
"math_id": 64,
"text": "N \\subset X"
},
{
"math_id": 65,
"text": "\\|u\\|_{W^{1, p}(\\Omega) / W^{1, p}_0(\\Omega)} = \\inf_{u_0 \\in W^{1, p}_0(\\Omega)} \\|u - u_0\\|_{W^{1, p}(\\Omega)}"
},
{
"math_id": 66,
"text": "T\\colon W^{1, p}(\\Omega) \\to W^{1, p}(\\Omega) / W^{1, p}_0(\\Omega) "
},
{
"math_id": 67,
"text": "\\mathbb R^n"
},
{
"math_id": 68,
"text": "\\Omega' \\subset \\mathbb R^{n-1}"
},
{
"math_id": 69,
"text": "v \\in L^p(\\Omega')"
},
{
"math_id": 70,
"text": "\\| v \\|_{W^{1-1/p, p}(\\Omega')} = \\left( \\|v\\|_{L^p(\\Omega')}^p + \\int_{\\Omega' \\times \\Omega'} \\frac{ | v(x) - v(y) |^p }{|x - y|^{(1 - 1/p) p + (n-1)}}\\,\\mathrm d(x, y) \\right)^{1/p} "
},
{
"math_id": 71,
"text": "| v(x) - v(y) | \\leq C | x - y|^{1-1/p}"
},
{
"math_id": 72,
"text": "W^{1-1/p, p}(\\Omega') = \\left\\{ v \\in L^p(\\Omega') \\;\\mid\\; \\| v \\|_{W^{1-1/p, p}(\\Omega')} < \\infty \\right\\}"
},
{
"math_id": 73,
"text": "W^{s,p}(\\Omega')"
},
{
"math_id": 74,
"text": "s > 0"
},
{
"math_id": 75,
"text": "W^{1-1/p, p}(\\partial \\Omega)"
},
{
"math_id": 76,
"text": "W^{1-1/p, p}(\\Omega')"
},
{
"math_id": 77,
"text": "T\\colon W^{1, p}(\\Omega) \\to W^{1 - 1/p, p}(\\partial \\Omega)"
},
{
"math_id": 78,
"text": "p = 1"
},
{
"math_id": 79,
"text": "L^1(\\partial \\Omega)"
},
{
"math_id": 80,
"text": "T\\colon W^{1, 1}(\\Omega) \\to L^1(\\partial \\Omega)"
},
{
"math_id": 81,
"text": "W^{1, p}_0(\\Omega) \\neq 0"
},
{
"math_id": 82,
"text": "1 < p < \\infty"
},
{
"math_id": 83,
"text": "E\\colon W^{1-1/p, p}(\\partial \\Omega) \\to W^{1, p}(\\Omega)"
},
{
"math_id": 84,
"text": "T (E v) = v"
},
{
"math_id": 85,
"text": "v \\in W^{1-1/p, p}(\\partial \\Omega)"
},
{
"math_id": 86,
"text": "\\| E v \\|_{W^{1, p}(\\Omega)} \\leq C \\| v \\|_{W^{1-1/p, p}(\\partial \\Omega)}"
},
{
"math_id": 87,
"text": "W^{1, p}(\\Omega) \\to W^{1, p}(\\mathbb R^n)"
},
{
"math_id": 88,
"text": "W^{m, p}(\\Omega)"
},
{
"math_id": 89,
"text": "m = 2, 3, \\ldots"
},
{
"math_id": 90,
"text": "N"
},
{
"math_id": 91,
"text": "\\partial_N u |_{\\partial \\Omega}"
},
{
"math_id": 92,
"text": "m = 2"
},
{
"math_id": 93,
"text": "m > 2"
},
{
"math_id": 94,
"text": "C^{m, 1}"
},
{
"math_id": 95,
"text": "T_m\\colon W^{m, p}(\\Omega) \\to \\prod_{l = 0}^{m-1} W^{m-l-1/p,p}(\\partial \\Omega)"
},
{
"math_id": 96,
"text": "W^{s, p}(\\partial \\Omega)"
},
{
"math_id": 97,
"text": "W^{s, p}(\\Omega')"
},
{
"math_id": 98,
"text": "T_m"
},
{
"math_id": 99,
"text": "T_m u = \\left(u |_{\\partial \\Omega}, \\partial_N u |_{\\partial \\Omega}, \\ldots, \\partial_N^{m-1} u |_{\\partial \\Omega}\\right)"
},
{
"math_id": 100,
"text": "u \\in W^{m, p}(\\Omega) \\cap C^{m-1}(\\bar \\Omega)."
},
{
"math_id": 101,
"text": "E_m\\colon \\prod_{l = 0}^{m-1} W^{m-l-1/p,p}(\\partial \\Omega) \\to W^{m, p}(\\Omega)"
},
{
"math_id": 102,
"text": "W^{m, p}_0(\\Omega)"
},
{
"math_id": 103,
"text": "W^{m, p}_0(\\Omega) = \\{ u \\in W^{m, p}(\\Omega) \\mid T_m u = 0 \\}"
},
{
"math_id": 104,
"text": "L^p(\\Omega)"
},
{
"math_id": 105,
"text": "\\operatorname{div} v"
},
{
"math_id": 106,
"text": "v"
},
{
"math_id": 107,
"text": "E_p(\\Omega) = \\{ v \\in (L^p(\\Omega))^n \\mid \\operatorname{div} v \\in L^p(\\Omega) \\}"
},
{
"math_id": 108,
"text": "\\| v \\|_{E_p(\\Omega)} = \\left( \\| v \\|_{L^p(\\Omega)}^p + \\| \\operatorname{div} v \\|_{L^p(\\Omega)}^p \\right)^{1/p}"
},
{
"math_id": 109,
"text": "T_N\\colon E_p(\\Omega) \\to (W^{1-1/q, q}(\\partial \\Omega))'"
},
{
"math_id": 110,
"text": "q = p / (p-1)"
},
{
"math_id": 111,
"text": "X'"
},
{
"math_id": 112,
"text": "T_N"
},
{
"math_id": 113,
"text": "(v \\cdot N) |_{\\partial \\Omega}"
},
{
"math_id": 114,
"text": "v \\in (C^\\infty(\\bar \\Omega))^n"
},
{
"math_id": 115,
"text": "T_N v = \\bigl\\{ \\varphi \\in W^{1 - 1/q, q}(\\partial \\Omega) \\mapsto \\int_{\\partial \\Omega} \\varphi v \\cdot N \\,\\mathrm{d} S \\bigr\\}"
},
{
"math_id": 116,
"text": "(T_N v)(\\varphi)"
},
{
"math_id": 117,
"text": "\\varphi \\in W^{1-1/q,q}(\\partial \\Omega)"
},
{
"math_id": 118,
"text": "w = E \\varphi \\, v"
},
{
"math_id": 119,
"text": "E"
},
{
"math_id": 120,
"text": "- \\Delta u = f \\in L^2(\\Omega)"
},
{
"math_id": 121,
"text": "T_N \\nabla u \\in (W^{1/2,2}(\\partial \\Omega))^*"
},
{
"math_id": 122,
"text": "\\nabla u \\in E_2(\\Omega)"
},
{
"math_id": 123,
"text": "\\nabla u \\in L^2(\\Omega)"
},
{
"math_id": 124,
"text": "\\operatorname{div}(\\nabla u) = \\Delta u = - f \\in L^2(\\Omega)"
},
{
"math_id": 125,
"text": "u \\not\\in H^2(\\Omega)"
},
{
"math_id": 126,
"text": "\\nabla u"
},
{
"math_id": 127,
"text": "p = 2"
},
{
"math_id": 128,
"text": "W^{1,2}(\\Omega)"
},
{
"math_id": 129,
"text": "f \\in H^{-1}(\\Omega) = (H^1_0(\\Omega))'"
},
{
"math_id": 130,
"text": "f(\\varphi)"
},
{
"math_id": 131,
"text": "g \\in H^{1/2}(\\partial \\Omega)"
},
{
"math_id": 132,
"text": "Eg \\in H^1(\\Omega)"
},
{
"math_id": 133,
"text": "T(Eg) = g"
},
{
"math_id": 134,
"text": "u_0"
},
{
"math_id": 135,
"text": "u_0 = u - Eg"
},
{
"math_id": 136,
"text": "T u_0 = Tu - T(Eg) = 0"
},
{
"math_id": 137,
"text": "u_0 \\in H^1_0(\\Omega)"
},
{
"math_id": 138,
"text": "H^1_0(\\Omega)"
},
{
"math_id": 139,
"text": "\\int_\\Omega \\nabla u_0 \\cdot \\nabla \\varphi \\,\\mathrm dx = \\int_\\Omega \\nabla (u - Eg) \\cdot \\nabla \\varphi \\, \\mathrm dx = \\int_\\Omega f \\varphi \\,\\mathrm dx - \\int_\\Omega \\nabla Eg \\cdot \\nabla \\varphi \\,\\mathrm dx"
},
{
"math_id": 140,
"text": "u = u_0 + Eg"
},
{
"math_id": 141,
"text": "c_1, c_2, \\ldots > 0"
},
{
"math_id": 142,
"text": "\\| u_0 \\|_{H^1_0(\\Omega)} \\leq c_1 \\left( \\|f\\|_{H^{-1}(\\Omega)} + \\|Eg\\|_{H^1(\\Omega)} \\right)"
},
{
"math_id": 143,
"text": "\\| u_0 \\|_{H^1(\\Omega)} \\leq c_2 \\| u_0 \\|_{H^1_0(\\Omega)}"
},
{
"math_id": 144,
"text": "\\| E g \\|_{H^1(\\Omega)} \\leq c_3 \\| g \\|_{H^{1/2}(\\Omega)}"
},
{
"math_id": 145,
"text": "\\begin{align}\\| u \\|_{H^1(\\Omega)} &\\leq \\| u_0 \\|_{H^1(\\Omega)} + \\| Eg \\|_{H^1(\\Omega)} \\leq c_1 c_2 \\|f\\|_{H^{-1}(\\Omega)} + (c_3+c_1 c_2) \\|Eg\\|_{H^1(\\Omega)} \\\\\n\t&\\leq c_4 \\left(\\|f\\|_{H^{-1}(\\Omega)} + \\|g\\|_{H^{1/2}(\\partial \\Omega)} \\right)\\end{align}"
},
{
"math_id": 146,
"text": "H^{-1}(\\Omega) \\times H^{1/2}(\\partial \\Omega) \\ni (f, g) \\mapsto u \\in H^1(\\Omega)"
}
] | https://en.wikipedia.org/wiki?curid=11199199 |
1120085 | Distributed knowledge | All the knowledge that a community of agents possesses and might apply to solving a problem
In multi-agent system research, distributed knowledge is all the knowledge that a community of agents possesses and might apply in solving a problem. Distributed knowledge is approximately what "a wise man knows", or what someone who had complete knowledge of what each member of the community knows would know. Distributed knowledge might also be called the aggregate knowledge of a community, as it represents all the knowledge that a community might bring to bear to solve a problem. Other related phrasings include cumulative knowledge, collective knowledge or pooled knowledge. Distributed knowledge is the union of all the knowledge of individuals in a community of agents.
Distributed knowledge differs from the concept of Wisdom of the crowd, in that the latter is concerned with opinions, not knowledge.
Wisdom of the crowd is the emergent opinion arising from multiple actors. It is not the union of all the knowledge of these actors, it does not necessarily include the contribution of all the actors, it does not refer to all the knowledge of these actors, and typically broadly includes opinions and guesswork.
Wisdom of the crowd is a concept useful in the context of social sciences, rather than in the more formal multi-agent systems or Knowledge-based systems research.
Example.
The logicians Alice and Bob are sitting in their dark office wondering whether or not it is raining outside. Now, none of them actually knows, but Alice knows something about her friend Carol, namely that Carol wears her red coat only if it is raining. Bob does not know this, but he just saw Carol, and noticed that she was wearing her red coat. Even though none of them knows whether or not it is raining, it is "distributed knowledge" amongst them that it is raining. If either one of them tells the other what they know, it will be clear to the other that it is raining.
If we denote by formula_0 that Carol wears a red coat and with formula_1 that if Carol wears a red coat, it is raining, we have
formula_2
Directly translated: Bob knows that Carol wears a red coat, and Alice knows that if Carol wears a red coat it is raining, so together they know that it is raining.
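The possible-worlds reading of this example can be made concrete with a few lines of code. In the sketch below (illustrative only; the names are not standard), a world is a pair (red coat, raining), each agent knows a proposition when it holds in every world that agent considers possible, and distributed knowledge corresponds to truth in the intersection of those sets of worlds.

```python
# Illustrative sketch: distributed knowledge as truth in the intersection of the
# agents' sets of worlds considered possible.
from itertools import product

worlds = set(product([True, False], repeat=2))          # (red_coat, raining)

alice = {w for w in worlds if (not w[0]) or w[1]}       # knows: red coat -> raining
bob = {w for w in worlds if w[0]}                       # knows: Carol wears a red coat

def knows(possible_worlds, prop):
    return all(prop(w) for w in possible_worlds)

raining = lambda w: w[1]
print(knows(alice, raining), knows(bob, raining))       # False False: neither knows alone
print(knows(alice & bob, raining))                      # True: it is distributed knowledge
```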
Distributed knowledge is related to the concept of wisdom of the crowd. It reflects the fact that "no one of us is smarter than all of us."
{
"math_id": 0,
"text": "\\varphi"
},
{
"math_id": 1,
"text": "\\varphi \\Rightarrow \\psi"
},
{
"math_id": 2,
"text": "(K_b\\varphi \\land K_a(\\varphi \\Rightarrow \\psi)) \\Rightarrow D_{a,b}\\psi"
}
] | https://en.wikipedia.org/wiki?curid=1120085 |
11201354 | Alveolar–arterial gradient | Respiratory parameter for differential diagnosis of hypoxemia
The Alveolar–arterial gradient (A-aO2, or A–a gradient), is a measure of the difference between the alveolar concentration (A) of oxygen and the arterial (a) concentration of oxygen. It is a useful parameter for narrowing the differential diagnosis of hypoxemia.
The A–a gradient helps to assess the integrity of the alveolar capillary unit. For example, in high altitude, the arterial oxygen PaO2 is low but only because the alveolar oxygen (PAO2) is also low. However, in states of ventilation perfusion mismatch, such as pulmonary embolism or right-to-left shunt, oxygen is not effectively transferred from the alveoli to the blood which results in an elevated A-a gradient.
In a perfect system, no A-a gradient would exist: oxygen would diffuse and equalize across the capillary membrane, and the pressures in the arterial system and alveoli would be effectively equal (resulting in an A-a gradient of zero). However even though the partial pressure of oxygen is about equilibrated between the pulmonary capillaries and the alveolar gas, this equilibrium is not maintained as blood travels further through pulmonary circulation. As a rule, PAO2 is always higher than PaO2 by at least 5–10 mmHg, even in a healthy person with normal ventilation and perfusion. This gradient exists due to both physiological right-to-left shunting and a physiological V/Q mismatch caused by gravity-dependent differences in perfusion to various zones of the lungs. The bronchial vessels deliver nutrients and oxygen to certain lung tissues, and some of this spent, deoxygenated venous blood drains into the highly oxygenated pulmonary veins, causing a right-to-left shunt. Further, the effects of gravity alter the flow of both blood and air through various heights of the lung. In the upright lung, both perfusion and ventilation are greatest at the base, but the gradient of perfusion is steeper than that of ventilation so V/Q ratio is higher at the apex than at the base. This means that blood flowing through capillaries at the base of the lung is not fully oxygenated.
Equation.
The equation for calculating the A–a gradient is:
formula_0
Where:
formula_1
In its expanded form, the A–a gradient can be calculated by:
formula_2
On room air ( FiO2 = 0.21, or 21% ), at sea level ( Patm = 760 mmHg ) assuming 100% humidity in the alveoli (PH2O = 47 mmHg), a simplified version of the equation is:
formula_3
Values and Clinical Significance.
The A–a gradient is useful in determining the source of hypoxemia. The measurement helps isolate the location of the problem as either intrapulmonary (within the lungs) or extrapulmonary (elsewhere in the body).
A normal A–a gradient for a young adult non-smoker breathing air is between 5 and 10 mmHg. Normally, the A–a gradient increases with age. For every decade a person has lived, their A–a gradient is expected to increase by 1 mmHg. A conservative estimate of the normal A–a gradient is "[age in years + 10] / 4". Thus, a 40-year-old should have an A–a gradient of around 12.5 mmHg. Comparing a patient's calculated A–a gradient with this expected value helps assess whether their hypoxia is due to dysfunction of the alveolar-capillary unit, in which case the gradient will be elevated, or to another cause, in which case the gradient will be at or below the expected value.
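The simplified room-air formula and the age-based rule of thumb are easy to combine in a quick calculation. The following sketch is illustrative only; the function names and the example blood-gas values are assumptions, not clinical guidance.

```python
# Illustrative sketch: A–a gradient at sea level on room air, plus the (age + 10) / 4 rule.
def a_a_gradient(pao2, paco2, fio2=0.21, patm=760.0, ph2o=47.0, rq=0.8):
    """All pressures in mmHg; the alveolar PO2 comes from the alveolar gas equation."""
    alveolar_po2 = fio2 * (patm - ph2o) - paco2 / rq
    return alveolar_po2 - pao2

def expected_normal_gradient(age_years):
    return (age_years + 10) / 4

# Example: a 40-year-old on room air with PaO2 = 95 mmHg and PaCO2 = 40 mmHg.
print(a_a_gradient(95, 40))             # about 4.7 mmHg
print(expected_normal_gradient(40))     # 12.5 mmHg, so this gradient is not elevated
```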
An abnormally increased A–a gradient suggests a defect in diffusion, V/Q mismatch, or right-to-left shunt.
The A-a gradient has clinical utility in patients with hypoxemia of undetermined etiology. The A-a gradient can be broken down categorically as either elevated or normal. Causes of hypoxemia will fall into one of these categories. To better understand which etiologies of hypoxemia fall into which category, we can use a simple analogy. Think of oxygen's journey through the body like a river. The respiratory system will serve as the first part of the river. Then imagine a waterfall from that point leading to the second part of the river. The waterfall represents the alveolar and capillary walls, and the second part of the river represents the arterial system. The river empties into a lake, which can represent end-organ perfusion. The A-a gradient helps to determine where there is flow obstruction.
For example, consider hypoventilation. Patients can exhibit hypoventilation for a variety of reasons; some include CNS depression, neuromuscular diseases such as myasthenia gravis, poor chest elasticity as seen in kyphoscoliosis or patients with vertebral fractures, and many others. Patients with poor ventilation lack oxygen tension throughout their arterial system in addition to the respiratory system. Thus, the river will have decreased flow throughout both parts. Since both the "A" and the "a" decrease in concert, the gradient between the two will remain in normal limits (even though both values will decrease). Thus patients with hypoxemia due to hypoventilation will have an A-a gradient within normal limits.
Now let us consider pneumonia. Patients with pneumonia have a physical barrier within the alveoli, which limits the diffusion of oxygen into the capillaries. However, these patients can ventilate (unlike the patient with hypoventilation), which will result in a well-oxygenated respiratory tract (A) with poor diffusion of oxygen across the alveolar-capillary unit and thus lower oxygen levels in the arterial blood (a). The obstruction, in this case, would occur at the waterfall in our example, limiting the flow of water only through the second part of the river. Thus patients with hypoxemia due to pneumonia will have an inappropriately elevated A-a gradient (due to normal "A" and low "a").
Applying this analogy to different causes of hypoxemia should help reason out whether to expect an elevated or normal A-a gradient. As a general rule of thumb, any pathology of the alveolar-capillary unit will result in a high A-a gradient. The table below has the different disease states that cause hypoxemia.
Because the A–a gradient is approximated as (150 − 5/4(PCO2)) – PaO2 at sea level and on room air (0.21 × (760 − 47) = 149.7 mmHg for the alveolar oxygen partial pressure, after accounting for the water vapor), the direct mathematical cause of a large value is that the blood has a low PaO2, a low PaCO2, or both. CO2 is very easily exchanged in the lungs and a low PaCO2 directly correlates with high minute ventilation; therefore a low arterial PaCO2 indicates that extra respiratory effort is being used to oxygenate the blood. A low PaO2 indicates that the patient's current minute ventilation (whether high or normal) is not enough to allow adequate oxygen diffusion into the blood. Therefore, the A–a gradient essentially demonstrates a high respiratory effort (low arterial PaCO2) relative to the achieved level of oxygenation (arterial PaO2). A high A–a gradient could indicate a patient breathing hard to achieve normal oxygenation, a patient breathing normally and attaining low oxygenation, or a patient breathing hard and still failing to achieve normal oxygenation.
If lack of oxygenation is proportional to low respiratory effort, then the A–a gradient is not increased; a healthy person who hypoventilates would have hypoxia, but a normal A–a gradient. At an extreme, high CO2 levels from hypoventilation can mask an existing high A–a gradient. This mathematical artifact makes A–a gradient more clinically useful in the setting of hyperventilation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{A–a Gradient}=P_A\\ce{O2}-P_a\\ce{O2}"
},
{
"math_id": 1,
"text": "P_A\\ce{O2}=F_i\\ce{O2}(P_\\ce{atm}-P_\\ce{H2O})-\\frac{P_a\\ce{CO2}}{0.8}"
},
{
"math_id": 2,
"text": "\\text{A–a Gradient}=\\left(F_i\\ce{O2}(P_\\text{atm}-P_\\ce{H2O})-\\frac{P_a\\ce{CO2}}{0.8}\\right)-P_a\\ce{O2}"
},
{
"math_id": 3,
"text": "\\text{A–a Gradient}=\\begin{cases}\n\\left(150\\text{ mm}\\ce{Hg}-\\frac{5}{4}(P_a\\ce{CO2})\\right)-P_a\\ce{O2}\\quad\\text{or}\\\\\n\\left(20\\text{ kPa}-\\frac{5}{4}(P_a\\ce{CO2})\\right)-P_a\\ce{O2}\n\\end{cases}"
}
] | https://en.wikipedia.org/wiki?curid=11201354 |
11205365 | Berger's sphere | In the mathematical field of Riemannian geometry, the Berger spheres form a special class of examples of Riemannian manifolds diffeomorphic to the 3-sphere. They are named for Marcel Berger who introduced them in 1962.
Geometry of the Berger spheres.
The Lie group SU(2) is diffeomorphic to the 3-sphere. Its Lie algebra is a three-dimensional real vector space spanned by
formula_0
which are complex multiples of the Pauli matrices. It is direct to check that the commutators are given by ["u"1, "u"2] = 2"u"3 and ["u"1, "u"3] = −2"u"2 and ["u"2, "u"3] = 2"u"1. Any positive-definite inner product on the Lie algebra determines a left-invariant Riemannian metric on the Lie group. A Berger sphere is a metric so obtained by making the inner product on the Lie algebra have matrix
formula_1
relative to the basis "u"1, "u"2, "u"3. Here t is a positive number to be freely chosen; each choice produces a different Berger sphere. If it were chosen negative, a Lorentzian metric would instead be produced. Using the Koszul formula it is direct to compute the Levi-Civita connection:
formula_2
The curvature operator has eigenvalues "t", "t", 4 − 3"t". The left-invariant Berger metric is also right-invariant if and only if "t" = 1.
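The commutation relations quoted above can be verified symbolically. The following SymPy sketch is illustrative only.

```python
# Illustrative sketch: verifying [u1, u2] = 2 u3, [u1, u3] = -2 u2, [u2, u3] = 2 u1.
from sympy import Matrix, I

u1 = Matrix([[0, I], [I, 0]])
u2 = Matrix([[0, -1], [1, 0]])
u3 = Matrix([[I, 0], [0, -I]])

def comm(a, b):
    return a * b - b * a

print(comm(u1, u2) == 2 * u3)    # True
print(comm(u1, u3) == -2 * u2)   # True
print(comm(u2, u3) == 2 * u1)    # True
```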
The left-invariant vector field on SU(2) corresponding to "u"1 (or to any other particular element of the Lie algebra) is tangent to the circular fibers of a Hopf fibration SU(2) → S2. As such, the Berger metrics can also be constructed via the Hopf fibration, by scaling the tangent directions to the fibers. Unlike the above construction, which is based on a Lie group structure on the 3-sphere, this version of the construction can be extended to the more general Hopf fibrations "S"2"n" + 1 → CP"n" of odd-dimensional spheres over the complex projective spaces, using the Fubini–Study metric.
Significance.
A well-known inequality of Wilhelm Klingenberg says that for any smooth Riemannian metric on a closed orientable manifold of even dimension, if the sectional curvature is positive then the injectivity radius is greater than or equal to π/√K, where K is the maximum of the sectional curvature. The Berger spheres show that this does not hold if the assumption of even-dimensionality is removed.
Likewise, another estimate of Klingenberg says that for any smooth Riemannian metric on a closed simply-connected manifold, if the sectional curvatures are all in the interval (1/4, 1], then the injectivity radius is greater than or equal to π. The Berger spheres show that the assumption on sectional curvature cannot be removed.
Any compact Riemannian manifold can be scaled to produce a metric of small volume, diameter, and injectivity radius but large curvature. The Berger spheres illustrate the alternative phenomena of small volume and injectivity radius but without small diameter or large curvature. They show that the 3-sphere is a collapsing manifold: it admits a sequence of Riemannian metrics with uniformly bounded curvature but injectivity radius converging to zero. This sequence of Riemannian manifolds converges in the Gromov–Hausdorff metric to a "two"-dimensional sphere of constant curvature 4.
Generalizations.
Berger–Cheeger perturbations.
The Hopf fibration S3 → S2 is a principal bundle with structure group U(1). Furthermore, relative to the standard Riemannian metric on S3, the unit-length vector field along the fibers of the bundle forms a Killing vector field. This is to say that U(1) acts by isometries.
In greater generality, consider a Lie group G acting by isometries on a Riemannian manifold ("M", "g"). In this generality (unlike for the specific case of the Hopf fibration), different orbits of the group action might have different dimensionality. For this reason, scaling the tangent directions to the group orbits by constant factors, as for the Berger spheres, would produce discontinuities in the metric. The "Berger–Cheeger perturbations" modify the scaling to address this, in the following way.
Given a right-invariant Riemannian metric h on G, the product manifold "G" × "M" can be given the Riemannian metric "th" ⊕ "g". The left action of G on this product by "x"⋅("y", "m") = ("y" "x"−1, "xm") acts freely by isometries, and so there is a naturally induced Riemannian metric on the quotient space, which is naturally diffeomorphic to M.
Canonical variation of a Riemannian submersion.
The Hopf fibration S3 → S2 is a Riemannian submersion relative to the standard Riemannian metrics on S3 and S2. For any Riemannian submersion "f": "M" → "B", the "canonical variation" scales the vertical part of the metric by a constant factor. The Berger spheres are thus the total space of the canonical variation of the Hopf fibration. Some of the geometry of the Berger spheres generalizes to this setting. For instance, if a Riemannian submersion has totally geodesic fibers then the canonical variation also has totally geodesic fibers.
References.
<templatestyles src="Reflist/styles.css" />
Sources
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "u_1 = \\begin{pmatrix}\n 0 & i \\\\\n i & 0\n \\end{pmatrix}, \\quad\n u_2 = \\begin{pmatrix}\n 0 & -1 \\\\\n 1 & 0\n \\end{pmatrix}, \\quad\n u_3 = \\begin{pmatrix}\n i & 0 \\\\\n 0 & -i\n \\end{pmatrix}~,\n"
},
{
"math_id": 1,
"text": "\\begin{pmatrix}t&0&0\\\\ 0&1&0\\\\ 0&0&1\\end{pmatrix}"
},
{
"math_id": 2,
"text": "\\begin{align}\n\\nabla_{u_1}u_2 &= (2-t)u_3&\\nabla_{u_2}u_1&=-tu_3\\\\ \\nabla_{u_2}u_3&=u_1& \\nabla_{u_3}u_2&=-u_1\\\\ \\nabla_{u_3}u_1&=tu_2&\\nabla_{u_1}u_3&=(t-2)y.\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=11205365 |
11205600 | Projectional radiography | Formation of 2D images using X-rays
Projectional radiography, also known as conventional radiography, is a form of radiography and medical imaging that produces two-dimensional images by X-ray radiation. The image acquisition is generally performed by radiographers, and the images are often examined by radiologists. Both the procedure and any resultant images are often simply called 'X-ray'. Plain radiography or roentgenography generally refers to projectional radiography (without the use of more advanced techniques such as computed tomography that can generate 3D-images). "Plain radiography" can also refer to radiography without a radiocontrast agent or radiography that generates single static images, as contrasted to fluoroscopy, which are technically also projectional.
Equipment.
X-ray generator.
Projectional radiographs generally use X-rays created by X-ray generators, which generate X-rays from X-ray tubes.
Grid.
An anti-scatter grid may be placed between the patient and the detector to reduce the quantity of scattered x-rays that reach the detector. This improves the contrast resolution of the image, but also increases radiation exposure for the patient.
Detector.
Detectors can be divided into two major categories: imaging detectors (such as photographic plates and X-ray film (photographic film), now mostly replaced by various digitizing devices like image plates or flat panel detectors) and dose measurement devices (such as ionization chambers, Geiger counters, and dosimeters used to measure the local radiation exposure, dose, and/or dose rate, for example, for verifying that radiation protection equipment and procedures are effective on an ongoing basis).
Shielding.
Lead is the main material used by radiography personnel for shielding against scattered X-rays.
Image properties.
Projectional radiography relies on the characteristics of X-ray radiation ("quantity" and "quality" of the beam) and knowledge of how it interacts with human tissue to create diagnostic images. X-rays are a form of ionizing radiation, meaning it has sufficient energy to potentially remove electrons from an atom, thus giving it a charge and making it an ion.
X-ray attenuation.
When an exposure is made, X-ray radiation exits the tube as what is known as the "primary beam". When the primary beam passes through the body, some of the radiation is absorbed in a process known as attenuation. Anatomy that is denser has a higher rate of attenuation than anatomy that is less dense, so bone will absorb more X-rays than soft tissue. What remains of the primary beam after attenuation is known as the "remnant beam". The remnant beam is responsible for exposing the image receptor. Areas on the image receptor that receive the most radiation (portions of the remnant beam experiencing the least attenuation) will be more heavily exposed, and therefore will be processed as being darker. Conversely, areas on the image receptor that receive the least radiation (portions of the remnant beam experiencing the most attenuation) will be less exposed and will be processed as being lighter. This is why bone, which is very dense, appears 'white' on radiographs, and the lungs, which contain mostly air and are the least dense, show up as 'black'.
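Quantitatively, the surviving fraction of a monoenergetic beam falls off exponentially with thickness (the Beer-Lambert law, which the text above describes only qualitatively). The sketch below is illustrative only, and the attenuation coefficients are rough assumed values rather than reference data.

```python
# Illustrative sketch: exponential attenuation I = I0 * exp(-mu * x); denser tissue
# (larger mu) leaves a smaller remnant beam and therefore a lighter area on the image.
import math

def remnant_fraction(mu_per_cm, thickness_cm):
    return math.exp(-mu_per_cm * thickness_cm)

print(remnant_fraction(0.5, 2.0))   # bone-like coefficient (assumed): ~0.37 remains
print(remnant_fraction(0.2, 2.0))   # soft-tissue-like coefficient (assumed): ~0.67 remains
```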
Density.
Radiographic density is the measure of overall darkening of the image. Density is a logarithmic unit that describes the ratio between light hitting the film and light being transmitted through the film; for example, a density of 2.0 means that only 1% of the incident light is transmitted through that area of the film. A higher radiographic density represents more opaque areas of the film, and lower density more transparent areas of the film.
With digital imaging, however, density may be referred to as "brightness." The brightness of the radiograph in digital imaging is determined by computer software and the monitor on which the image is being viewed.
Contrast.
Contrast is defined as the difference in radiographic density between adjacent portions of the image, that is, the range between black and white on the final radiograph. High contrast, or short-scale contrast, means there is little gray on the radiograph and few gray shades between black and white. Low contrast, or long-scale contrast, means there is much gray on the radiograph and many gray shades between black and white.
Closely related to radiographic contrast is the concept of exposure latitude. Exposure latitude is the range of exposures over which the recording medium (image receptor) will respond with a diagnostically useful density; in other words, this is the "flexibility" or "leeway" that a radiographer has when setting his/her exposure factors. Images having short-scale contrast will have a narrow exposure latitude. Images having long-scale contrast will have a wide exposure latitude; that is, the radiographer will be able to utilize a broader range of technical factors to produce a diagnostic-quality image.
Contrast is determined by the kilovoltage (kV; energy/quality/penetrability) of the x-ray beam and the tissue composition of the body part being radiographed. Selection of look-up tables (LUT) in digital imaging also affects contrast.
Generally speaking, high contrast is necessary for body parts in which bony anatomy is of clinical interest (extremities, bony thorax, etc.). When soft tissue is of interest (e.g., the abdomen or chest), lower contrast is preferable in order to accurately demonstrate all of the soft tissue tones in these areas.
Geometric magnification.
Geometric magnification results from the detector being farther away from the X-ray source than the object. In this regard, the "source-detector distance" or SDD is a measurement of the distance between the generator and the detector. Alternative names are "source"/"focus" to "detector"/"image-receptor"/"film" (latter used when using X-ray film) distance (SID, FID or FRD).
The "estimated radiographic magnification factor" ("ERMF") is the ratio of the "source-detector distance" (SDD) over the "source-object distance" (SOD). The size of the object is given as:<br>
formula_0,
<br>where Sizeprojection is the size of the projection that the object forms on the detector. On lumbar and chest radiographs, it is anticipated that ERMF is between 1.05 and 1.40. Because of the uncertainty of the true size of objects seen on projectional radiography, their sizes are often compared to other structures within the body, such as dimensions of the vertebrae, or empirically by clinical experience.
The "source-detector distance" (SDD) is roughly related to the "source-object distance" (SOD) and the "object-detector distance" (ODD) by the equation SOD + ODD = SDD.
Geometric unsharpness.
Geometric unsharpness is caused by the X-ray generator not creating X-rays from a single point but rather from an area, as can be measured as the "focal spot size". Geometric unsharpness increases with the focal spot size and with the "estimated radiographic magnification factor" ("ERMF"); for a given focal spot size it is proportional to the ratio of the object-detector distance to the source-object distance, that is, to (ERMF − 1).
Geometric distortion.
Organs will have different relative distances to the detector depending on which direction the X-rays come from. For example, chest radiographs are preferably taken with X-rays coming from behind (called a "posteroanterior" or "PA" radiograph). However, in case the patient cannot stand, the radiograph often needs to be taken with the patient lying in a supine position (called a "bedside" radiograph) with the X-rays coming from above ("anteroposterior" or "AP"), and geometric magnification will then cause, for example, the heart to appear larger than it actually is because it is further away from the detector.
Scatter.
In addition to using an anti-scatter grid, increasing the ODD alone can improve image contrast by decreasing the amount of scattered radiation that reaches the receptor. However, this needs to be weighed against increased geometric unsharpness if the SDD is not also proportionally increased.
Imaging variations by target tissue.
Projection radiography uses X-rays in different amounts and strengths depending on what body part is being imaged:
Projectional radiography terminology.
NOTE: The simplified word 'view' is often used to describe a radiographic projection.
Plain radiography generally refers to projectional radiography (without the use of more advanced techniques such as computed tomography). Plain radiography can also refer to radiography without a radiocontrast agent or radiography that generates single static images, as contrasted to fluoroscopy.
By target organ or structure.
Breasts.
Projectional radiography of the breasts is called mammography. This has been used mostly on women to screen for breast cancer, but is also used to view male breasts, and used in conjunction with a radiologist or a surgeon to localise suspicious tissues before a biopsy or a lumpectomy. Breast implants designed to enlarge the breasts reduce the viewing ability of mammography, and require more time for imaging as more views need to be taken. This is because the material used in the implant is very dense compared to breast tissue, and looks white (clear) on the film. The radiation used for mammography tends to be softer (has a lower photon energy) than that used for the harder tissues. Often a tube with a molybdenum anode is used with about 30 000 volts (30 kV), giving a range of X-ray energies of about 15-30 keV. Many of these photons are "characteristic radiation" of a specific energy determined by the atomic structure of the target material (Mo-K radiation).
Chest.
Chest radiographs are used to diagnose many conditions involving the chest wall, including its bones, and also structures contained within the thoracic cavity including the lungs, heart, and great vessels. Conditions commonly identified by chest radiography include pneumonia, pneumothorax, interstitial lung disease, heart failure, bone fracture and hiatal hernia. Typically an erect postero-anterior (PA) projection is preferred. Chest radiographs are also used to screen for job-related lung disease in industries such as mining where workers are exposed to dust.
For some conditions of the chest, radiography is good for screening but poor for diagnosis. When a condition is suspected based on chest radiography, additional imaging of the chest can be obtained to definitively diagnose the condition or to provide evidence in favor of the diagnosis suggested by initial chest radiography. Unless a fractured rib is suspected of being displaced, and therefore likely to cause damage to the lungs and other tissue structures, an X-ray of the chest is not necessary as it will not alter patient management.
Abdomen.
In children, abdominal radiography is indicated in the acute setting in suspected bowel obstruction, gastrointestinal perforation, foreign body in the alimentary tract, suspected abdominal mass and intussusception (latter as part of the differential diagnosis). Yet, CT scan is the best alternative for diagnosing intra-abdominal injury in children. For acute abdominal pain in adults, an abdominal X-ray has a low sensitivity and accuracy in general. Computed tomography provides an overall better surgical strategy planning, and possibly less unnecessary laparotomies. Abdominal X-ray is therefore not recommended for adults presenting in the emergency department with acute abdominal pain.
The standard abdominal X-ray protocol is usually a single anteroposterior projection in supine position. A "Kidneys, Ureters, and Bladder" projection (KUB) is an anteroposterior abdominal projection that covers the levels of the urinary system, but does not necessarily include the diaphragm.
Axial skeleton.
Head.
In case of trauma, the standard UK protocol is to have a CT scan of the skull instead of projectional radiography. A skeletal survey including the skull can be indicated in, for example, multiple myeloma.
*Cervical spine: The standard projections in the UK are "AP and Lateral. Peg projection with trauma only. Obliques and Flexion and Extension on special request". In the US, five or six projections are common; a Lateral, two 45 degree obliques, an AP axial (Cephalad), an AP "Open Mouth" for C1-C2, and Cervicothoracic Lateral (Swimmer's) to better visualize C7-T1 if necessary. Special projections include a Lateral with Flexion and Extension of the cervical spine, an Axial for C1-C2 (Fuchs or Judd method), and an AP Axial (Caudad) for articular pillars.
*Thoracic Spine - "AP and Lateral" in the UK. In the US, an AP and Lateral are basic projections. Obliques 20 degrees from lateral may be ordered to better visualize the zygapophysial joint.
*Lumbar Spine - "AP and Lateral +/- L5/S1 view in the UK, with obliques and Flexion and Extension requests being rare". In the US, basic projections include an AP, two Obliques, a Lateral, and a Lateral L5-S1 spot to better visualize the L5-S1 interspace. Special projections are AP Right and Left bending, and Laterals with Flexion and Extension.
*Pelvis - "AP only in the UK, with SIJ projections (prone) on special request".
*Sacrum and Coccyx: In the US, if both bones are to be examined separate cephalad and caudad AP axial projections are obtained for the sacrum and coccyx respectively as well as a single Lateral of both bones.
Ribs.
*Anterior area of interest - a PA chest X-ray, a PA projection of the ribs, and a 45 degree Anterior Oblique with the non-interest side closest to the image receptor.
*Posterior area of interest - a PA chest X-ray, an AP projection of the ribs, and a 45 degree Posterior Oblique with the side of interest closest to the image receptor.
*Sternoclavicular Joints - Are usually ordered as a single PA and a Right and Left 15 degree Right Anterior Obliques in the US.
Shoulders.
These include:
The body has to be rotated about 30 to 45 degrees towards the shoulder to be imaged, and the standing or sitting patient lets the arm hang. This method reveals the joint gap and the vertical alignment towards the socket.
The arm should be abducted 80 to 100 degrees. This method reveals:
The lateral contour of the shoulder should be positioned in front of the film in a way that the longitudinal axis of the scapula continues parallel to the path of the rays. This method reveals:
This projection has a low tolerance for errors and accordingly needs proper execution. The Y-projection can be traced back to Wijnblath's cavitas-en-face projection, published in 1933.
In the UK, the standard projections of the shoulder are AP and Lateral Scapula or Axillary Projection.
Extremities.
A projectional radiograph of an extremity confers an effective dose of approximately 0.001 mSv, comparable to a background radiation equivalent time of 3 hours.
The standard projection protocols in the UK are:
*The "Lauenstein projection" a form of examination of the hip joint emphasizing the relationship of the femur to the acetabulum. The knee of the affected leg is flexed, and the thigh is drawn up to nearly a right angle. This is also called the frog-leg position.
Applications include X-ray of hip dysplasia.
Certain suspected conditions require specific projections. For example, skeletal signs of rickets are seen predominantly at sites of rapid growth, including the proximal humerus, distal radius, distal femur and both the proximal and the distal tibia. Therefore, a skeletal survey for rickets can be accomplished with anteroposterior radiographs of the knees, wrists, and ankles.
General disease mimics.
Radiological disease mimics are visual artifacts, normal anatomic structures or harmless variants that may simulate diseases or abnormalities. In projectional radiography, general disease mimics include jewelry, clothes and skin folds. In general medicine a disease mimic shows symptoms and/or signs like those of another.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Size_{object} = \\frac{Size_{projection}}{ERMF}"
}
] | https://en.wikipedia.org/wiki?curid=11205600 |
11207569 | Automorphic factor | In mathematics, an automorphic factor is a certain type of analytic function, defined on subgroups of SL(2,R), appearing in the theory of modular forms. The general case, for general groups, is reviewed in the article 'factor of automorphy'.
Definition.
An "automorphic factor of weight k" is a function
formula_0
satisfying the four properties given below. Here, the notation formula_1 and formula_2 refer to the upper half-plane and the complex plane, respectively. The notation formula_3 is a subgroup of SL(2,R), such as, for example, a Fuchsian group. An element formula_4 is a 2×2 matrix
formula_5
with "a", "b", "c", "d" real numbers, satisfying "ad"−"bc"=1.
An automorphic factor must satisfy the following:
*For a fixed formula_6, formula_7 is a holomorphic function of formula_8.
*For all formula_8 and formula_6, one has formula_9.
*For all formula_8 and formula_10, the cocycle relation formula_11 holds; here formula_12 denotes the image of formula_13 under the fractional linear (Möbius) transformation determined by formula_14.
*If formula_15, then for all formula_8 and formula_6, one has formula_16.
Properties.
Every automorphic factor may be written as
formula_17
with
formula_18
The function formula_19 is called a multiplier system. Clearly,
formula_20,
while, if formula_15, then
formula_21
which equals formula_22 when "k" is an integer. For example, if formula_15 and "k" is an even integer, the trivial multiplier system "υ" ≡ 1 is admissible, and formula_17 then reduces to the classical factor of automorphy used for modular forms of even integer weight. | [
{
"math_id": 0,
"text": "\\nu : \\Gamma \\times \\mathbb{H} \\to \\Complex"
},
{
"math_id": 1,
"text": "\\mathbb{H}"
},
{
"math_id": 2,
"text": "\\Complex"
},
{
"math_id": 3,
"text": "\\Gamma"
},
{
"math_id": 4,
"text": "\\gamma \\in \\Gamma"
},
{
"math_id": 5,
"text": "\\gamma = \\begin{bmatrix}a&b \\\\c & d\\end{bmatrix}"
},
{
"math_id": 6,
"text": "\\gamma\\in\\Gamma"
},
{
"math_id": 7,
"text": "\\nu(\\gamma,z)"
},
{
"math_id": 8,
"text": "z\\in\\mathbb{H}"
},
{
"math_id": 9,
"text": "\\vert\\nu(\\gamma,z)\\vert = \\vert cz + d\\vert^k"
},
{
"math_id": 10,
"text": "\\gamma,\\delta \\in \\Gamma"
},
{
"math_id": 11,
"text": "\\nu(\\gamma\\delta, z) = \\nu(\\gamma,\\delta z)\\nu(\\delta,z)"
},
{
"math_id": 12,
"text": "\\delta z"
},
{
"math_id": 13,
"text": "z"
},
{
"math_id": 14,
"text": "\\delta"
},
{
"math_id": 15,
"text": "-I\\in\\Gamma"
},
{
"math_id": 16,
"text": "\\nu(-\\gamma,z) = \\nu(\\gamma,z)"
},
{
"math_id": 17,
"text": "\\nu(\\gamma, z)=\\upsilon(\\gamma) (cz+d)^k"
},
{
"math_id": 18,
"text": "\\vert\\upsilon(\\gamma)\\vert = 1"
},
{
"math_id": 19,
"text": "\\upsilon:\\Gamma\\to S^1"
},
{
"math_id": 20,
"text": "\\upsilon(I)=1"
},
{
"math_id": 21,
"text": "\\upsilon(-I)=e^{-i\\pi k}"
},
{
"math_id": 22,
"text": "(-1)^k"
}
] | https://en.wikipedia.org/wiki?curid=11207569 |
11207764 | Glycerol (data page) | Chemical data page
This page provides supplementary chemical data on glycerol.
Material Safety Data Sheet.
The handling of this chemical may incur notable safety precautions. It is highly recommended that you seek the Material Safety Datasheet (MSDS) for this chemical from a reliable source and follow its directions.
Vapor pressure of liquid.
Table data obtained from "CRC Handbook of Chemistry and Physics", 44th ed.
Natural logarithm (loge) of the glycerol vapor pressure, calculated using the formula formula_0formula_1 with coefficients A=-2.125867E+01, B=-1.672626E+04, C=1.655099E+02, and D=1.100480E-05 obtained from CHERIC.
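For illustration, the correlation can be evaluated directly. In the short Python sketch below the temperature is assumed to be in kelvin; this assumption is consistent with the correlation returning roughly atmospheric pressure near glycerol's normal boiling point of about 290 °C, but it is not stated explicitly in the table source.

    import math

    # Coefficients of the correlation ln(P_kPa) = A*ln(T) + B/T + C + D*T**2
    A = -2.125867e+01
    B = -1.672626e+04
    C = 1.655099e+02
    D = 1.100480e-05

    def glycerol_vapor_pressure_kpa(temperature_k):
        """Vapor pressure of glycerol in kPa from the correlation quoted above."""
        ln_p = A * math.log(temperature_k) + B / temperature_k + C + D * temperature_k ** 2
        return math.exp(ln_p)

    # Near the normal boiling point (~290 °C = 563 K) this returns roughly 100 kPa.
    print(glycerol_vapor_pressure_kpa(563.15))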
Freezing point of aqueous solutions.
Table data obtained from "Lange's Handbook of Chemistry", 10th ed. Specific gravity is at 15 °C, referenced to water at 15 °C.
See details on: Freezing Points of Glycerine-Water Solutions Dow Chemical
or Freezing Points of Glycerol and Its Aqueous Solutions.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\scriptstyle \\log_e P_{kPa} = "
},
{
"math_id": 1,
"text": "\\scriptstyle A \\times ln(T) + B/T + C + D \\times T^2"
}
] | https://en.wikipedia.org/wiki?curid=11207764 |
11210523 | Gaussian network model | The Gaussian network model (GNM) is a representation of a biological macromolecule as an elastic mass-and-spring network to study, understand, and characterize the mechanical aspects of its long-time large-scale dynamics. The model has a wide range of applications from small proteins such as enzymes composed of a single domain, to large macromolecular assemblies such as a ribosome or a viral capsid. Protein domain dynamics plays key roles in a multitude of molecular recognition and cell signalling processes.
Protein domains, connected by intrinsically disordered flexible linker domains, induce long-range allostery via protein domain dynamics.
The resultant dynamic modes cannot be generally predicted from static structures of either the entire protein or individual domains.
The Gaussian network model is a minimalist, coarse-grained approach to study biological molecules. In the model, proteins are represented by nodes corresponding to α-carbons of the amino acid residues. Similarly, DNA and RNA structures are represented with one to three nodes for each nucleotide. The model uses the harmonic approximation to model interactions. This coarse-grained representation makes the calculations computationally inexpensive.
At the molecular level, many biological phenomena, such as catalytic activity of an enzyme, occur within the range of nano- to millisecond timescales. All atom simulation techniques, such as molecular dynamics simulations, rarely reach microsecond trajectory length, depending on the size of the system and accessible computational resources. Normal mode analysis in the context of GNM, or elastic network (EN) models in general, provides insights on the longer-scale functional dynamic behaviors of macromolecules. Here, the model captures native state functional motions of a biomolecule at the cost of atomic detail. The inference obtained from this model is complementary to atomic detail simulation techniques.
Another model for protein dynamics based on elastic mass-and-spring networks is the Anisotropic Network Model.
Gaussian network model theory.
The Gaussian network model was proposed by Bahar, Atilgan, Haliloglu and Erman in 1997. The GNM is often analyzed using normal mode analysis, which offers an analytical formulation and unique solution for each structure. The GNM normal mode analysis differs from other normal mode analyses in that it is exclusively based on inter-residue contact topology, influenced by the theory of elasticity of Flory and the Rouse model and does not take the three-dimensional directionality of motions into account.
Representation of structure as an elastic network.
Figure 2 shows a schematic view of elastic network studied in GNM. Metal beads represent the nodes in this Gaussian network (residues of a protein) and springs represent the connections between the nodes (covalent and non-covalent interactions between residues). For nodes i and j, equilibrium position vectors, R0i and R0j, equilibrium distance vector, R0ij, instantaneous fluctuation vectors, ΔRi and ΔRj, and instantaneous distance vector, Rij, are shown in Figure 2. Instantaneous position vectors of these nodes are defined by Ri and Rj. The difference between equilibrium position vector and instantaneous position vector of residue i gives the instantaneous fluctuation vector, ΔRi = Ri - R0i. Hence, the instantaneous fluctuation vector between nodes i and j is expressed as ΔRij = ΔRj - ΔRi = Rij - R0ij.
Potential of the Gaussian network.
The potential energy of the network in terms of ΔRi is
formula_0
where γ is a force constant uniform for all springs and Γij is the ijth element of the Kirchhoff (or connectivity) matrix of inter-residue contacts, Γ, defined by
formula_1
"r"c is a cutoff distance for spatial interactions and taken to be 7 Å for amino acid pairs (represented by their α-carbons).
Expressing the X, Y and Z components of the fluctuation vectors ΔRi as ΔXT = [ΔX1 ΔX2 ... ΔXN], ΔYT = [ΔY1 ΔY2 ... ΔYN], and ΔZT = [ΔZ1 ΔZ2 ... ΔZN], above equation simplifies to
formula_2
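As an illustration of this definition, the Kirchhoff matrix can be assembled directly from a set of α-carbon coordinates. The following minimal NumPy sketch is illustrative only (it is not code from any particular GNM package) and uses the 7 Å cutoff mentioned above.

    import numpy as np

    def kirchhoff_matrix(coords, cutoff=7.0):
        """Build the GNM Kirchhoff (connectivity) matrix from an (N, 3) array
        of alpha-carbon coordinates, using a distance cutoff in angstroms."""
        coords = np.asarray(coords, dtype=float)
        # Pairwise inter-residue distances
        diff = coords[:, None, :] - coords[None, :, :]
        dist = np.sqrt((diff ** 2).sum(axis=-1))
        # Off-diagonal elements: -1 for contacts within the cutoff, 0 otherwise
        gamma = -(dist <= cutoff).astype(float)
        np.fill_diagonal(gamma, 0.0)
        # Diagonal elements: minus the sum of the off-diagonal entries in each row
        np.fill_diagonal(gamma, -gamma.sum(axis=1))
        return gamma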
Statistical mechanics foundations.
In the GNM, the probability distribution of all fluctuations, "P"(ΔR) is "isotropic"
formula_3
and "Gaussian"
formula_4
where "k""B" is the Boltzmann constant and "T" is the absolute temperature. "p"(ΔY) and "p"(ΔZ) are expressed similarly.
N-dimensional Gaussian probability density function with random variable vector x, mean vector μ and covariance matrix Σ is
formula_5
formula_6 normalizes the distribution and |Σ| is the determinant of the covariance matrix.
Similar to Gaussian distribution, normalized distribution for ΔXT = [ΔX1 ΔX2 ... ΔXN] around the equilibrium positions can be expressed as
formula_7
The normalization constant, also the partition function "Z"X, is given by
formula_8
where formula_9 is the covariance matrix in this case. "Z"Y and "Z"Z are expressed similarly. This formulation requires inversion of the Kirchhoff matrix. In the GNM, the determinant of the Kirchhoff matrix is zero, hence calculation of its inverse requires eigenvalue decomposition. Γ−1 is constructed using the N-1 non-zero eigenvalues and associated eigenvectors. Expressions for "p"(ΔY) and "p"(ΔZ) are similar to that of "p"(ΔX). The probability distribution of all fluctuations in GNM becomes
formula_10
For this mass and spring system, the normalization constant in the preceding expression is the overall GNM partition function, "Z"GNM,
formula_11
Expectation values of fluctuations and correlations.
The expectation values of residue fluctuations, <ΔRi2> (also called mean-square fluctuations, MSFs), and their cross-correlations, <ΔRi · ΔRj> can be organized as the diagonal and off-diagonal terms, respectively, of a covariance matrix. Based on statistical mechanics, the covariance matrix for ΔX is given by
formula_12
The last equality is obtained by inserting the above p(ΔX) and taking the (generalized Gaussian) integral. Since,
formula_13
<ΔRi2> and <ΔRi · ΔRj> follows
formula_14
formula_15
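In practice these expectation values are obtained from the pseudo-inverse of the Kirchhoff matrix, built from its N-1 non-zero modes as described above. The following minimal NumPy sketch is illustrative only; the prefactor kBT/γ is left as a user-supplied constant.

    import numpy as np

    def gnm_covariance(gamma_matrix, kT_over_gamma=1.0):
        """Return the matrix of <ΔRi · ΔRj> values, computed as
        3*(kB*T/gamma) times the pseudo-inverse of the Kirchhoff matrix."""
        eigvals, eigvecs = np.linalg.eigh(gamma_matrix)
        # Discard the single zero eigenvalue (the trivial translational mode)
        nonzero = eigvals > 1e-8
        pseudo_inverse = (eigvecs[:, nonzero] / eigvals[nonzero]) @ eigvecs[:, nonzero].T
        return 3.0 * kT_over_gamma * pseudo_inverse

    # The mean-square fluctuations <ΔRi^2> are the diagonal elements, e.g.:
    # msf = np.diag(gnm_covariance(kirchhoff_matrix(coords)))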
Mode decomposition.
The GNM normal modes are found by diagonalization of the Kirchhoff matrix, Γ = UΛU"T". Here, U is a unitary matrix, U"T" = U−1, of the eigenvectors ui of Γ and Λ is the diagonal matrix of eigenvalues λi. The frequency and shape of a mode is represented by its eigenvalue and eigenvector, respectively. Since the Kirchhoff matrix is positive semi-definite, the first eigenvalue, λ1, is zero and the corresponding eigenvector has all its elements equal to 1/√N. This shows that the network model is translationally invariant.
Cross-correlations between residue fluctuations can be written as a sum over the N-1 nonzero modes as
formula_16
It follows that [ΔRi · ΔRj]k, the contribution of an individual mode k, is expressed as
formula_17
where [uk]i is the ith element of uk.
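A small companion sketch (again purely illustrative) extracts the slowest non-trivial mode, i.e. the non-zero mode with the smallest eigenvalue, whose shape is the quantity usually examined when relating GNM modes to function.

    import numpy as np

    def slowest_mode(gamma_matrix):
        """Return (eigenvalue, eigenvector) of the slowest non-trivial GNM mode,
        i.e. the mode with the smallest non-zero eigenvalue of the Kirchhoff matrix."""
        eigvals, eigvecs = np.linalg.eigh(gamma_matrix)   # eigenvalues in ascending order
        nonzero = np.where(eigvals > 1e-8)[0]
        k = nonzero[0]
        return eigvals[k], eigvecs[:, k]

    # The squared entries of this eigenvector, weighted by 1/lambda, give the
    # per-residue contribution of the slowest mode to the mean-square fluctuations.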
Influence of local packing density.
By definition, a diagonal element of the Kirchhoff matrix, Γii, is equal to the degree of the corresponding node in the GNM, which represents that residue's coordination number. This number is a measure of the local packing density around a given residue. The influence of local packing density can be assessed by series expansion of the Γ−1 matrix. Γ can be written as a sum of two matrices, Γ = D + O, containing the diagonal elements and off-diagonal elements of Γ, respectively.
Γ−1 = (D + O)−1 = [ D (I + D−1O) ]−1 = (I + D−1O)−1D−1 = (I - D−1O + ...)D−1 = D−1 - D−1O D−1 + ...
This expression shows that local packing density makes a significant contribution to the expected fluctuations of residues. The terms that follow the inverse of the diagonal matrix are the contributions of positional correlations to the expected fluctuations.
GNM applications.
Equilibrium fluctuations.
Equilibrium fluctuations of biological molecules can be experimentally measured. In X-ray crystallography the B-factor (also called Debye-Waller or temperature factor) of each atom is a measure of its mean-square fluctuation near its equilibrium position in the native structure. In NMR experiments, this measure can be obtained by calculating root-mean-square differences between different models.
In many applications and publications, including the original articles, it has been shown that expected residue fluctuations obtained by the GNM are in good agreement with the experimentally measured native state fluctuations. The relation between B-factors, for example, and expected residue fluctuations obtained from GNM is as follows
formula_18
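Given mean-square fluctuations computed as in the sketches above, the comparison with crystallographic B-factors is a simple rescaling. A minimal illustration (the remark about fitting γ afterwards describes common practice rather than anything prescribed by the model itself):

    import numpy as np

    def theoretical_b_factors(msf):
        """Convert GNM mean-square fluctuations <ΔRi·ΔRi> into B-factors
        via B_i = (8 * pi**2 / 3) * <ΔRi·ΔRi>."""
        return (8.0 * np.pi ** 2 / 3.0) * np.asarray(msf)

    # In practice the force constant gamma is often chosen afterwards so that the
    # computed B-factors best fit the experimental ones, e.g. in a least-squares sense.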
Figure 3 shows an example of GNM calculation for the catalytic domain of the protein Cdc25B, a cell division cycle dual-specificity phosphatase.
Physical meanings of slow and fast modes.
Diagonalization of the Kirchhoff matrix decomposes the conformational motions into a spectrum of collective modes. The expected values of fluctuations and cross-correlations are obtained from linear combinations of fluctuations along these normal modes. The contribution of each mode is scaled with the inverse of that mode's frequency. Hence, slow (low frequency) modes contribute most to the expected fluctuations. Along the few slowest modes, motions are shown to be collective and global and potentially relevant to functionality of the biomolecules. Fast (high frequency) modes, on the other hand, describe uncorrelated motions not inducing notable changes in the structure. GNM-based methods do not provide real dynamics but only an approximation based on the combination and interpolation of normal modes. Their applicability strongly depends on how collective the motion is.
Other specific applications.
There are several major areas in which the Gaussian network model and other elastic network models have proved to be useful. These include:
Web servers.
In practice, two kinds of calculations can be performed.
The first kind (the GNM per se) makes use of the Kirchhoff matrix. The second kind (more specifically called either the Elastic Network Model or the Anisotropic Network Model) makes use of the Hessian matrix associated to the corresponding set of harmonic springs. Both kinds of models can be used online, using the following servers.
References.
Primary sources.
<templatestyles src="Refbegin/styles.css" />
Specific citations.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V_{GNM} = \\frac{\\gamma}{2}\\left[ \\sum_{i,j}^{N} (\\Delta R_j-\\Delta R_i)^2 \\right]= \n \\frac{\\gamma}{2}\\left[ \\sum_{i,j}^{N} \\Delta R_i \\Gamma_{ij} \\Delta R_j\\right]"
},
{
"math_id": 1,
"text": "\\Gamma_{ij} = \\left\\{\\begin{matrix} \n-1, & \\mbox{if } i \\ne j & \\mbox{and }R_{ij} \\le r_c \\\\ \n0, & \\mbox{if } i \\ne j & \\mbox{and }R_{ij} > r_c \\\\\n-\\sum_{j,j \\ne i}^{N} \\Gamma_{ij}, & \\mbox{if } i = j \\end{matrix}\\right."
},
{
"math_id": 2,
"text": "V_{GNM} = \\frac{\\gamma}{2} [\\Delta X^T\\Gamma \\Delta X + \\Delta Y^T\\Gamma \\Delta Y + \\Delta Z^T\\Gamma \\Delta Z]"
},
{
"math_id": 3,
"text": "P(\\Delta R)=P(\\Delta X,\\Delta Y,\\Delta Z)=p(\\Delta X)p(\\Delta Y)p(\\Delta Z)"
},
{
"math_id": 4,
"text": "p(\\Delta X)\\propto \\exp\\left\\{ -\\frac{\\gamma}{2 k_B T} \\Delta X^T\\Gamma \\Delta X \\right\\}=\\exp\\left\\{ -\\frac{1}{2} \\left(\\Delta X^T\\left( \\frac{k_B T}{\\gamma} \\Gamma^{-1} \\right)^{-1} \\Delta X \\right) \\right\\}"
},
{
"math_id": 5,
"text": "W(x,\\mu ,\\Sigma ) = \\frac{1}{\\sqrt{(2\\pi)^N |\\Sigma|}} \\exp\\left\\{ -\\frac{1}{2} (x - \\mu)^T \\Sigma^{-1} (x - \\mu) \\right\\}"
},
{
"math_id": 6,
"text": "\\sqrt{(2\\pi)^N |\\Sigma|}"
},
{
"math_id": 7,
"text": "p(\\Delta X ) = \\frac{1}{\\sqrt{(2\\pi)^N \\frac{k_B T}{\\gamma} |\\Gamma^{-1}|}} \\exp\\left\\{ -\\frac{1}{2} \\left(\\Delta X^T\\left( \\frac{k_B T}{\\gamma} \\Gamma^{-1} \\right)^{-1} \\Delta X \\right) \\right\\}"
},
{
"math_id": 8,
"text": "Z_X = \\int_0^\\infty \\exp\\left\\{ -\\frac{1}{2} \\left(\\Delta X^T\\left( \\frac{k_B T}{\\gamma} \\Gamma^{-1} \\right)^{-1} \\Delta X \\right) \\right\\}d\\Delta X"
},
{
"math_id": 9,
"text": "\\frac{k_B T}{\\gamma} \\Gamma^{-1}"
},
{
"math_id": 10,
"text": "P(\\Delta R) = p(\\Delta X) p(\\Delta Y) p(\\Delta Z)=\\frac{1}{{Z_X}{Z_Y}{Z_Z}} \\exp\\left\\{ -\\frac{3}{2} \\left(\\Delta X^T\\left( \\frac{k_B T}{\\gamma} \\Gamma^{-1} \\right)^{-1} \\Delta X \\right) \\right\\}"
},
{
"math_id": 11,
"text": "Z_{GNM} = {Z_X}{Z_Y}{Z_Z} = {(2\\pi)^{3N/2} \\Biggl|{\\frac{k_{B}T}{\\gamma}{\\Gamma^{-1}}}\\Biggr|}^{3/2}"
},
{
"math_id": 12,
"text": "<\\Delta X \\cdot \\Delta X^T > = \\int \\Delta X \\cdot \\Delta X^T p(\\Delta X)d\\Delta X=\\frac{k_B T}{\\gamma}\\Gamma^{-1} "
},
{
"math_id": 13,
"text": "<\\Delta X \\cdot \\Delta X^T > = <\\Delta Y \\cdot \\Delta Y^T > = <\\Delta Z \\cdot \\Delta Z^T > =\\frac{1}{3} <\\Delta R \\cdot \\Delta R^T >"
},
{
"math_id": 14,
"text": "<\\Delta R_i^2 > = \\frac{3 k_B T}{\\gamma}(\\Gamma^{-1})_{ii}"
},
{
"math_id": 15,
"text": "<\\Delta R_i \\cdot \\Delta R_j > = \\frac{3 k_B T}{\\gamma}(\\Gamma^{-1})_{ij}"
},
{
"math_id": 16,
"text": "<\\Delta R_i \\cdot \\Delta R_j> = \\frac{3 k_B T}{\\gamma}[U\\Lambda^{-1}U^T]_{ij}=\\frac{3 k_B T}{\\gamma}\\sum_{k=1}^{N-1}\\lambda_k^{-1} [u_k u_k^T]_{ij}"
},
{
"math_id": 17,
"text": "[\\Delta R_i \\cdot \\Delta R_j]_k = \\frac{3 k_B T}{\\gamma}\\lambda_k^{-1} [u_k]_i [u_k]_j"
},
{
"math_id": 18,
"text": "B_i = \\frac{8\\pi^2}{3}< \\Delta R_{i} \\cdot \\Delta R_{i} > = \\frac{8\\pi^2 k_B T}{\\gamma}(\\Gamma^{-1})_{ii}"
}
] | https://en.wikipedia.org/wiki?curid=11210523 |
11214626 | Cant deficiency | When a rail vehicle's speed on a curved rail is high enough to begin tipping over
In railway engineering, cant deficiency is defined in the context of travel of a rail vehicle at constant speed on a constant-radius curve. Cant itself refers to the superelevation of the curve, that is, the difference between the elevations of the outside and inside rails. "Cant deficiency" is present when a rail vehicle's speed on the curve is greater than the speed at which the components of wheel-to-rail force are normal to the plane of the track. In that case, the resultant force (the aggregate of gravitational and centrifugal force) bears more heavily on the outside rail than on the inside rail, creating lateral acceleration toward the outside of the curve (which could lead to tipping or derailing). In order to reduce cant deficiency, the speed can be reduced or the superelevation can be increased. The amount of cant deficiency is expressed in terms of the superelevation that would have to be added in order to bring the resultant force into balance between the two rails.
Conversely, there is said to be "cant excess" if the resultant force bears more heavily on the inside rail than the outside rail, for instance when a train travels at low speed through a curve with high superelevation.
Forces.
The forces that bear on the vehicle in this context are illustrated in the following figure.
A vehicle's motion at speed v along a circular path embodies centripetal acceleration of magnitude v²/R toward the center of the circle, the curvature of that path being 1/R where R is the radius of the circle. This centripetal acceleration is produced by horizontal forces applied by the rails to the wheels of the vehicle, directed toward the center, and having sum equal to Mv²/R where M is the mass of the vehicle.
The net horizontal force producing the centripetal acceleration is generally separated into components that are respectively in the plane of the superelevated (i.e., banked) track and normal thereto.
The component normal to the track acts together with the much larger component of gravitational force normal to the track and is generally neglected. It can slightly increase the vertical load seen by the vehicle suspension but it does not create lateral acceleration as perceived by passengers and does not cause lateral deflection of the vehicle suspension.
The track is superelevated so that the component of the acceleration of gravity in the plane of the track will provide some fraction of the horizontal acceleration in the plane of the track due to the circular motion. Referring to the figure above, it can be seen that the components of gravitational and centripetal acceleration in the plane of the track will be equal when the following balance equation is satisfied, where α is the bank angle.
formula_0
For a given curve radius and bank angle (i.e., superelevation) the speed V that satisfies the balance equation is called the balancing speed and is given by
formula_1
For reasons that will be mentioned below, passenger vehicles usually traverse a curve at a speed higher than the balance speed. The amount by which the actual speed exceeds the balance speed is conveniently expressed via the so-called cant deficiency, i.e., by the amount by which the superelevation would need to be increased to raise the balance speed to the speed at which the vehicles actually travel. Letting GE denote the rail gauge from low rail gauge side corner to high rail field side corner, letting SE denote the actual superelevation, and letting "V"act denote the actual speed, it follows from the definition that the cant deficiency, CD, is given by the formula
formula_2
Example.
Taking an example, a curve with curvature 1.0 degree per 100 ft chord (radius 1,746.40 m = 5,729.65 ft), GE = 1511.3 mm (59.5 inches), and SE = 152.4 mm (6.0 inches) will have
formula_3
If a vehicle traverses that curve at a speed of 55.880 m/s (= 201.17 km/h = 125 mph), then the cant deficiency will be
formula_4
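The same calculation is easy to script. The following Python sketch simply re-implements the two formulas above and reproduces the worked example; the numerical inputs are the illustrative values already given, not values prescribed by any standard.

    import math

    def balancing_speed(radius_m, superelevation_mm, gauge_mm, g=9.80665):
        """Speed (m/s) at which the track-plane components of gravity and the
        centripetal acceleration balance: V_bal = sqrt(R * g * tan(alpha))."""
        alpha = math.asin(superelevation_mm / gauge_mm)
        return math.sqrt(radius_m * g * math.tan(alpha))

    def cant_deficiency_mm(radius_m, superelevation_mm, gauge_mm, speed_ms, g=9.80665):
        """Cant deficiency in mm: CD = GE / sqrt(1 + R^2 g^2 / V^4) - SE."""
        return gauge_mm / math.sqrt(1.0 + (radius_m * g) ** 2 / speed_ms ** 4) - superelevation_mm

    # Worked example from the text: R = 1746.4 m, SE = 152.4 mm, GE = 1511.3 mm
    print(balancing_speed(1746.4, 152.4, 1511.3))             # ~41.66 m/s (~150 km/h)
    print(cant_deficiency_mm(1746.4, 152.4, 1511.3, 55.880))  # ~118.7 mm (~4.7 in)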
On routes that carry freight traffic in cars with the maximum allowed axle loads it will be desirable to set superelevations so that the balancing speed of each curve is close to the speed at which most such traffic runs. This is to lessen the tendency of heavy wheel loads to crush the head of either rail.
Limit values.
Allowed CD is set below the value that would be allowed based on safety in order to reduce wheel and rail wear and to reduce the rate of degradation of geometry of ballasted track. Choice of design CD will be less constrained by passenger comfort in the case of vehicles that have tilting capability. One historical approach to determining safe cant deficiency was the requirement that the projection to the plane of the track of the resultant of the inertial and gravitational forces acting on a vehicle fall within the middle third of the track gauge. Contemporary engineering studies would likely use vehicle motion simulation including cross wind conditions to determine margins relative to derailment and rollover.
If the superelevation determined for a dedicated passenger route curve on regulatory and safety bases is below it may be desirable to increase the superelevation and reduce the cant deficiency. However, if on such a curve some trains regularly travel at low speeds, then raising the superelevation may be inadvisable for passenger comfort reasons.
On a mixed traffic route owned by a freight rail company, freight considerations are likely to prevail. On a mixed traffic route owned by a passenger rail company some kind of compromise may be needed.
Cant deficiency is generally looked at with respect to ideal track geometry. As geometry of real track is never perfect it may be desirable to supplement the static considerations laid out above with simulations of vehicle motion over measured geometries of actual tracks. Simulations are also desirable for understanding vehicle behaviour traversing spirals, turnouts, and other track segments where curvature changes with distance by design. Where simulations or measurements show non-ideal behaviour traversing traditional linear spirals, results can be improved by using advanced spirals. Good track geometry including advanced spirals is likely to foster passenger acceptance of higher CD values.
United States.
For passenger traffic superelevations and authorized speeds can be set so that trains run with as much cant deficiency as is allowed, based on safety, on relevant regulations and on passenger comfort. As of 2007 the US Federal Railroad Administration regulations limit CD to for tilting passenger vehicles, for conventional vehicles. This FRA regulation is based on AAR standards based on a single study in the 1950s on a rail line in Connecticut. In Germany, where axle loads are typically lower than those in the USA, tilting trains are allowed to operate with CD in some cases. CD above can be considered too uncomfortable for passengers (e.g. things on tables might slide off), except for tilting trains.
The FRA issued new information on cant deficiency in 2009 under FRA-2009-0036-0003. Due to the circumstances outlined, the federal regulations on cant deficiency were amended such that any rail vehicle may operate with up to 3 inches of cant deficiency and any vehicle that is to be operated above this number must be approved by the FRA for such operations. Approval is governed by conditions outlined in CFR chapter 49 section 213.329 part (d) and based on the idea that the car cannot unload the inside wheel on a curve by more than 60% of static loading.
Europe.
France and Germany allow trains on conventional lines to operate at up to cant deficiency. TGV has limit on cant deficiency of .
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "{v^2\\over R} \\cos \\alpha = g \\sin \\alpha"
},
{
"math_id": 1,
"text": "V_\\text{bal} = \\sqrt{Rg\\tan\\alpha}"
},
{
"math_id": 2,
"text": "\\mathrm{CD} = \\frac{ \\mathrm{GE} }{ \\sqrt{1 + \\frac{R^2 g^2}{V_\\text{act}^4} } } - \\mathrm{SE}"
},
{
"math_id": 3,
"text": "\\begin{align}\nV_\\text{bal} &= \\sqrt{1746.4 \\cdot 9.80665 \\cdot \\tan( \\arcsin( 152.4 / 1511.3 ) )} \\\\\n&= 41.6638 \\text{ m/s} = 149.99 \\text{ km/h} = 93.20 \\text{ mph}\n\\end{align}"
},
{
"math_id": 4,
"text": "\\begin{align}\n\\mathrm{CD} &= \\frac{1511.3}{\\sqrt{1 + (1746.4^2 \\cdot 9.8066^2 / 55.8^4)}}- 152.4 \\\\\n&= 118.7 \\text{ mm } (= 4.67 \\text{ in})\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=11214626 |
1121587 | Schur multiplier | In mathematical group theory, the Schur multiplier or Schur multiplicator is the second homology group formula_0 of a group "G". It was introduced by Issai Schur (1904) in his work on projective representations.
Examples and properties.
The Schur multiplier formula_1 of a finite group "G" is a finite abelian group whose exponent divides the order of "G". If a Sylow "p"-subgroup of "G" is cyclic for some "p", then the order of formula_1 is not divisible by "p". In particular, if all Sylow "p"-subgroups of "G" are cyclic, then formula_1 is trivial.
For instance, the Schur multiplier of the nonabelian group of order 6 is the trivial group since every Sylow subgroup is cyclic. The Schur multiplier of the elementary abelian group of order 16 is an elementary abelian group of order 64 (namely its exterior square), showing that the multiplier can be strictly larger than the group itself. The Schur multiplier of the quaternion group is trivial, but the Schur multiplier of dihedral 2-groups has order 2.
The Schur multipliers of the finite simple groups are given at the list of finite simple groups. The covering groups of the alternating and symmetric groups are of considerable recent interest.
Relation to projective representations.
Schur's original motivation for studying the multiplier was to classify projective representations of a group, and the modern formulation of his definition is the second cohomology group formula_2. A projective representation is much like a group representation except that instead of a homomorphism into the general linear group formula_3, one takes a homomorphism into the projective general linear group formula_4. In other words, a projective representation is a representation modulo the center.
Schur (1904, 1907) showed that every finite group "G" has associated to it at least one finite group "C", called a Schur cover, with the property that every projective representation of "G" can be lifted to an ordinary representation of "C". The Schur cover is also known as a covering group or Darstellungsgruppe. The Schur covers of the finite simple groups are known, and each is an example of a quasisimple group. The Schur cover of a perfect group is uniquely determined up to isomorphism, but the Schur cover of a general finite group is only determined up to isoclinism.
Relation to central extensions.
The study of such covering groups led naturally to the study of central and stem extensions.
A central extension of a group "G" is an extension
formula_5
where formula_6 is a subgroup of the center of "C".
A stem extension of a group "G" is an extension
formula_5
where formula_7 is a subgroup of the intersection of the center of "C" and the derived subgroup of "C"; this is more restrictive than central.
If the group "G" is finite and one considers only stem extensions, then there is a largest size for such a group "C", and for every "C" of that size the subgroup "K" is isomorphic to the Schur multiplier of "G". If the finite group "G" is moreover perfect, then "C" is unique up to isomorphism and is itself perfect. Such "C" are often called universal perfect central extensions of "G", or covering group (as it is a discrete analog of the universal covering space in topology). If the finite group "G" is not perfect, then its Schur covering groups (all such "C" of maximal order) are only isoclinic.
It is also called more briefly a universal central extension, but note that there is no largest central extension, as the direct product of "G" and an abelian group form a central extension of "G" of arbitrary size.
Stem extensions have the nice property that any lift of a generating set of "G" is a generating set of "C". If the group "G" is presented in terms of a free group "F" on a set of generators, and a normal subgroup "R" generated by a set of relations on the generators, so that formula_8, then the covering group itself can be presented in terms of "F" but with a smaller normal subgroup "S", that is, formula_9. Since the relations of "G" specify elements of "K" when considered as part of "C", one must have formula_10.
In fact if "G" is perfect, this is all that is needed: "C" ≅ ["F","F"]/["F","R"] and M("G") ≅ "K" ≅ "R"/["F","R"]. Because of this simplicity, expositions such as handle the perfect case first. The general case for the Schur multiplier is similar but ensures the extension is a stem extension by restricting to the derived subgroup of "F": M("G") ≅ ("R" ∩ ["F", "F"])/["F", "R"]. These are all slightly later results of Schur, who also gave a number of useful criteria for calculating them more explicitly.
Relation to efficient presentations.
In combinatorial group theory, a group often originates from a presentation. One important theme in this area of mathematics is to study presentations with as few relations as possible, such as one relator groups like Baumslag–Solitar groups. These groups are infinite groups with two generators and one relation, and an old result of Schreier shows that in any presentation with more generators than relations, the resulting group is infinite. The borderline case is thus quite interesting: finite groups with the same number of generators as relations are said to have a deficiency zero. For a group to have deficiency zero, the group must have a trivial Schur multiplier because the minimum number of generators of the Schur multiplier is always less than or equal to the difference between the number of relations and the number of generators, which is the negative deficiency. An "efficient group" is one where the Schur multiplier requires this number of generators.
A fairly recent topic of research is to find efficient presentations for all finite simple groups with trivial Schur multipliers. Such presentations are in some sense nice because they are usually short, but they are difficult to find and to work with because they are ill-suited to standard methods such as coset enumeration.
Relation to topology.
In topology, groups can often be described as finitely presented groups and a fundamental question is to calculate their integral homology formula_11. In particular, the second homology plays a special role and this led Heinz Hopf to find an effective method for calculating it. His method is also known as Hopf's integral homology formula and is identical to Schur's formula for the Schur multiplier of a finite group:
formula_12
where formula_8 and "F" is a free group. The same formula also holds when "G" is a perfect group.
The recognition that these formulas were the same led Samuel Eilenberg and Saunders Mac Lane to the creation of cohomology of groups. In general,
formula_13
where the star denotes the algebraic dual group. Moreover, when "G" is finite, there is an unnatural isomorphism
formula_14
The Hopf formula for formula_15 has been generalised to higher dimensions. For one approach and references see the paper by Everaert, Gran and Van der Linden listed below.
A perfect group is one whose first integral homology vanishes. A superperfect group is one whose first two integral homology groups vanish. The Schur covers of finite perfect groups are superperfect. An acyclic group is a group all of whose reduced integral homology vanishes.
Applications.
The second algebraic K-group K2("R") of a commutative ring "R" can be identified with the second homology group "H"2("E"("R"), Z) of the group "E"("R") of (infinite) elementary matrices with entries in "R".
See also.
The references from Clair Miller give another view of the Schur Multiplier as the kernel of a morphism κ: G ∧ G → G induced by the commutator map.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "H_2(G, \\Z)"
},
{
"math_id": 1,
"text": "\\operatorname{M}(G)"
},
{
"math_id": 2,
"text": "H^2(G, \\Complex^{\\times})"
},
{
"math_id": 3,
"text": "\\operatorname{GL}(n, \\Complex)"
},
{
"math_id": 4,
"text": "\\operatorname{PGL}(n, \\Complex)"
},
{
"math_id": 5,
"text": "1 \\to K\\to C\\to G\\to 1"
},
{
"math_id": 6,
"text": "K\\le Z(C)"
},
{
"math_id": 7,
"text": "K\\le Z(C)\\cap C'"
},
{
"math_id": 8,
"text": "G \\cong F/R"
},
{
"math_id": 9,
"text": "C\\cong F/S"
},
{
"math_id": 10,
"text": "S \\le [F,R]"
},
{
"math_id": 11,
"text": "H_n(G, \\Z)"
},
{
"math_id": 12,
"text": " H_2(G, \\Z) \\cong (R \\cap [F, F])/[F, R]"
},
{
"math_id": 13,
"text": "H_2(G, \\Z) \\cong \\bigl( H^2(G, \\Complex^{\\times}) \\bigr)^* "
},
{
"math_id": 14,
"text": "\\bigl( H^2(G, \\Complex^{\\times}) \\bigr)^* \\cong H^2(G, \\Complex^{\\times})."
},
{
"math_id": 15,
"text": "H_2(G)"
}
] | https://en.wikipedia.org/wiki?curid=1121587 |
11216402 | Schouten–Nijenhuis bracket | In differential geometry, the Schouten–Nijenhuis bracket, also known as the Schouten bracket, is a type of graded Lie bracket defined on multivector fields on a smooth manifold extending the Lie bracket of vector fields.
There are two different versions, both rather confusingly called by the same name. The most common version is defined on alternating multivector fields and makes them into a Gerstenhaber algebra, but there is also another version defined on symmetric multivector fields, which is more or less the same as the Poisson bracket on the cotangent bundle. It was invented by Jan Arnoldus Schouten (1940, 1953) and its properties were investigated by his student Albert Nijenhuis (1955). It is related to but not the same as the Nijenhuis–Richardson bracket and the Frölicher–Nijenhuis bracket.
Definition and properties.
An alternating multivector field is a section of the exterior algebra "formula_0" over the tangent bundle of a manifold "formula_1". The alternating multivector fields form a graded supercommutative ring with the product of "formula_2" and "formula_3" written as "formula_4" (some authors use "formula_5"). This is dual to the usual algebra of differential forms "formula_6" by the pairing on homogeneous elements:
formula_7
The degree of a multivector "formula_8" in formula_9 is defined to be "formula_10".
The skew symmetric Schouten–Nijenhuis bracket is the unique extension of the Lie bracket of vector fields to a graded bracket on the space of alternating multivector fields that makes the alternating multivector fields into a Gerstenhaber algebra. It is given in terms of the Lie bracket of vector fields by
formula_11
for vector fields "formula_12", "formula_13" and
formula_14
for vector fields formula_12 and a smooth function formula_15, where formula_16 is the common interior product operator. It has the following properties:
*formula_17 (the product is associative);
*formula_18 (the product is supercommutative);
*formula_19 (the product has degree 0);
*formula_20 (the bracket has degree −1);
*formula_21 (the Poisson identity, a graded Leibniz rule);
*formula_22 (graded antisymmetry);
*formula_23 (the graded Jacobi identity);
*if formula_15 and formula_24 are both functions (multivectors of degree 0), then formula_25;
*if formula_2 is a vector field, then formula_26, the usual Lie derivative of the multivector field formula_3 along formula_2.
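For example, applying the defining formula with m = 2 and n = 1 to a decomposable bivector field a1a2 and a vector field b gives [a1a2, b] = [a1, b]a2 − [a2, b]a1, a signed Leibniz-type expansion in terms of ordinary Lie brackets of vector fields.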
The Schouten–Nijenhuis bracket makes the multivector fields into a Lie superalgebra if the grading is changed to the one of opposite parity (so that the even and odd subspaces are switched), though with this new grading it is no longer a supercommutative ring. Accordingly, the Jacobi identity may also be expressed in the symmetrical form
formula_27
Generalizations.
There is a common generalization of the Schouten–Nijenhuis bracket for alternating multivector fields and the Frölicher–Nijenhuis bracket due to Vinogradov (1990).
A version of the Schouten–Nijenhuis bracket can also be defined for symmetric multivector fields in a similar way. The symmetric multivector fields can be identified with functions on the cotangent space "formula_28" of "formula_1" that are polynomial in the fiber, and under this identification the symmetric Schouten–Nijenhuis bracket corresponds to the Poisson bracket of functions on the symplectic manifold "formula_28". There is a common generalization of the Schouten–Nijenhuis bracket for symmetric multivector fields and the Frölicher–Nijenhuis bracket due to Dubois-Violette and Peter W. Michor (1995). | [
{
"math_id": 0,
"text": "\\wedge^\\bullet TM"
},
{
"math_id": 1,
"text": "M"
},
{
"math_id": 2,
"text": "a"
},
{
"math_id": 3,
"text": "b"
},
{
"math_id": 4,
"text": "ab"
},
{
"math_id": 5,
"text": "a \\wedge b"
},
{
"math_id": 6,
"text": "\\Omega^\\bullet (M)"
},
{
"math_id": 7,
"text": "\\omega(a_1a_2 \\dots a_p)=\\left\\{\n\\begin{matrix}\n\\omega(a_1,\\dots,a_p)&(\\omega\\in \\Omega^pM)\\\\\n0&(\\omega\\not\\in\\Omega^pM)\n\\end{matrix}\\right.\n"
},
{
"math_id": 8,
"text": "A"
},
{
"math_id": 9,
"text": "\\Lambda^p TM"
},
{
"math_id": 10,
"text": "|A|=p"
},
{
"math_id": 11,
"text": "[a_1\\cdots a_m,b_1\\cdots b_n]=\\sum_{i,j}(-1)^{i+j}[a_i,b_j]a_1\\cdots a_{i-1}a_{i+1}\\cdots a_mb_1\\cdots b_{j-1}b_{j+1}\\cdots b_n"
},
{
"math_id": 12,
"text": "a_i"
},
{
"math_id": 13,
"text": "b_j"
},
{
"math_id": 14,
"text": "[f,a_1\\cdots a_m] = -\\iota_{df}(a_1 \\cdots a_m)"
},
{
"math_id": 15,
"text": "f"
},
{
"math_id": 16,
"text": "\\iota_{df}"
},
{
"math_id": 17,
"text": "(ab)c=a(bc)"
},
{
"math_id": 18,
"text": "ab = (-1)^{|a||b|} ba"
},
{
"math_id": 19,
"text": "|ab| = |a|+|b|"
},
{
"math_id": 20,
"text": "|[a,b]|=|a|+|b|-1"
},
{
"math_id": 21,
"text": "[a,bc] = [a,b]c + (-1)^{|b| (|a|-1) } b [a,c]"
},
{
"math_id": 22,
"text": "[a,b] = - (-1)^{(|a| -1) (|b| -1)} [b,a]"
},
{
"math_id": 23,
"text": "[[a,b],c] = [a,[b,c]] - (-1)^{(|a|-1) (|b|-1) } [b,[a,c]]"
},
{
"math_id": 24,
"text": "g"
},
{
"math_id": 25,
"text": "[f,g]=0"
},
{
"math_id": 26,
"text": "[a,b] = L_a b"
},
{
"math_id": 27,
"text": "(-1)^{(|a|-1)(|c|-1)}[a,[b,c]]+(-1)^{(|b|-1)(|a|-1)}[b,[c,a]]+(-1)^{(|c|-1)(|b|-1)}[c,[a,b]] = 0.\\,"
},
{
"math_id": 28,
"text": "T^*M"
}
] | https://en.wikipedia.org/wiki?curid=11216402 |
11217018 | A-weighting | Frequency response curves used in sound pressure level measurement
A-weighting is a form of frequency weighting and the most commonly used of a family of curves defined in the International standard IEC 61672:2003 and various national standards relating to the measurement of sound pressure level. A-weighting is applied to instrument-measured sound levels in an effort to account for the relative loudness perceived by the human ear, as the ear is less sensitive to low audio frequencies. It is employed by arithmetically adding a table of values, listed by octave or third-octave bands, to the measured sound pressure levels in dB. The resulting octave band measurements are usually added (logarithmic method) to provide a single A-weighted value describing the sound; the units are written as dB(A). Other weighting sets of values – B, C, D and now Z – are discussed below.
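As an illustration of this procedure, the short Python sketch below adds A-weighting corrections to a set of measured octave-band levels and then combines the bands logarithmically into a single dB(A) figure. The band levels are invented example numbers, and the corrections used are the usual rounded octave-band values of the A-curve.

    import math

    # Rounded octave-band A-weighting corrections in dB (band centre frequency in Hz)
    A_WEIGHTING_DB = {63: -26.2, 125: -16.1, 250: -8.6, 500: -3.2,
                      1000: 0.0, 2000: 1.2, 4000: 1.0, 8000: -1.1}

    def a_weighted_level(band_levels_db):
        """Combine unweighted octave-band sound pressure levels (dB) into a single
        A-weighted level in dB(A): add the correction to each band, then sum the
        bands logarithmically (energetically)."""
        weighted = [level + A_WEIGHTING_DB[f] for f, level in band_levels_db.items()]
        return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in weighted))

    # Hypothetical octave-band measurements of a low-frequency-heavy noise
    example = {63: 75.0, 125: 72.0, 250: 70.0, 500: 68.0,
               1000: 66.0, 2000: 63.0, 4000: 60.0, 8000: 55.0}
    print(round(a_weighted_level(example), 1))   # single-figure level in dB(A)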
The curves were originally defined for use at different average sound levels, but A-weighting, though originally intended only for the measurement of low-level sounds (around 40 phon), is now commonly used for the measurement of environmental noise and industrial noise, as well as when assessing potential hearing damage and other noise health effects at all sound levels; indeed, the use of A-frequency-weighting is now mandated for all these measurements, because decades of field experience have shown a very good correlation with occupational deafness in the frequency range of human speech. It is also used when measuring low-level noise in audio equipment, especially in the United States. In Britain, Europe and many other parts of the world, broadcasters and audio engineers more often use the ITU-R 468 noise weighting, which was developed in the 1960s based on research by the BBC and other organizations. This research showed that our ears respond differently to random noise, and the equal-loudness curves on which the A, B and C weightings were based are really only valid for pure single tones.
History.
A-weighting began with work by Fletcher and Munson which resulted in their publication, in 1933, of a set of equal-loudness contours. Three years later these curves were used in the first American standard for sound level meters. This ANSI standard, later revised as ANSI S1.4-1981, incorporated B-weighting as well as the A-weighting curve, recognising the unsuitability of the latter for anything other than low-level measurements. But B-weighting has since fallen into disuse. Later work, first by Zwicker and then by Schomer, attempted to overcome the difficulty posed by different levels, and work by the BBC resulted in the CCIR-468 weighting, currently maintained as ITU-R 468 noise weighting, which gives more representative readings on noise as opposed to pure tones.
Deficiencies.
A-weighting is valid to represent the sensitivity of the human ear as a function of the frequency of pure tones. The A-weighting was based on the 40-phon Fletcher–Munson curves, which represented an early determination of the equal-loudness contour for human hearing. However, because decades of field experience have shown a very good correlation between the A scale and occupational deafness in the frequency range of human speech, this scale is employed in many jurisdictions to evaluate the risks of occupational deafness and other auditory problems related to signals or speech intelligibility in noisy environments.
Because of perceived discrepancies between early and more recent determinations, the International Organization for Standardization (ISO) revised its standard curves as defined in ISO 226, in response to the recommendations of a study coordinated by the Research Institute of Electrical Communication, Tohoku University, Japan. The study produced new curves by combining the results of several studies, by researchers in Japan, Germany, Denmark, UK, and USA. (Japan was the greatest contributor with about 40% of the data.) This resulted in the acceptance of a new set of curves standardized as ISO 226:2003 (subsequently revised again in 2023 with changes to the ISO 226 equal loudness contours of less than 0.5 dB over the 20-90 phon range). The report comments on the large differences between the combined study results and the original Fletcher–Munson equal loudness contours, as well as the later Robinson-Dadson contours that formed the basis for the first version of ISO 226, published in 1987. Subsequent research has demonstrated that A-weighting is in closer agreement with the updated 60-phon contour incorporated into ISO 226:2003 than with the 40-phon Fletcher-Munson contour, which challenges the common misapprehension that A-weighting represents loudness only for quiet sounds.
Nevertheless, A-weighting would be a closer match to the equal loudness curves if it fell more steeply above 10 kHz, and it is conceivable that this compromise may have arisen because steep filters were more difficult to construct in the early days of electronics. Nowadays, no such limitation need exist, as demonstrated by the ITU-R 468 curve. If A-weighting is used without further band-limiting it is possible to obtain different readings on different instruments when ultrasonic, or near ultrasonic noise is present. Accurate measurements therefore require a 20 kHz low-pass filter to be combined with the A-weighting curve in modern instruments. This is defined in IEC 61012 as AU weighting and while very desirable, is rarely fitted to commercial sound level meters.
B-, C-, D-, G- and Z-weightings.
A-frequency-weighting is mandated by the international standard IEC 61672 to be fitted to all sound level meters; these frequency-weightings are approximations to the equal loudness contours given in ISO 226. The old B- and D-frequency-weightings have fallen into disuse, but many sound level meters provide for C frequency-weighting, and its fitting is mandated — at least for testing purposes — to precision (Class one) sound level meters. D-frequency-weighting was specifically designed for use when measuring high-level aircraft noise in accordance with the IEC 537 measurement standard. The large peak in the D-weighting curve is not a feature of the equal-loudness contours, but reflects the fact that humans hear random noise differently from pure tones, an effect that is particularly pronounced around 6 kHz. This is because individual neurons from different regions of the cochlea in the inner ear respond to narrow bands of frequencies, but the higher frequency neurons integrate a wider band and hence signal a louder sound when presented with noise containing many frequencies than for a single pure tone of the same pressure level.
Following changes to the ISO standard, D-frequency-weighting by itself should now only be used for non-bypass-type jet engines, which are found only on military aircraft and not on commercial aircraft. For this reason, A-frequency-weighting is now mandated for light civilian aircraft measurements, while a more accurate loudness-corrected weighting EPNdB is required for certification of large transport aircraft. D-weighting is the basis for the measurement underlying EPNdB.
Z- or ZERO frequency-weighting was introduced in the International Standard IEC 61672 in 2003 and was intended to replace the "Flat" or "Linear" frequency weighting often fitted by manufacturers. This change was needed as each sound level meter manufacturer could choose their own low and high frequency cut-offs (–3 dB) points, resulting in different readings, especially when peak sound level was being measured. It is a flat frequency response between 10 Hz and 20 kHz ±1.5 dB. In addition, the C-frequency-weighting, with –3 dB points at 31.5 Hz and 8 kHz, did not have a sufficient bandpass to allow the sensibly correct measurement of true peak noise (Lpk).
G-weighting is used for measurements in the infrasound range from 8 Hz to about 40 Hz.
B- and D-frequency-weightings are no longer described in the body of the standard IEC 61672:2003, but their frequency responses can be found in the older IEC 60651, although that has been formally withdrawn by the International Electrotechnical Commission in favour of IEC 61672:2003. The frequency weighting tolerances in IEC 61672 have been tightened over those in the earlier standards IEC 179 and IEC 60651 and thus instruments complying with the earlier specifications should no longer be used for legally required measurements.
Environmental and other noise measurements.
A-weighted decibels are abbreviated dB(A) or dBA. When acoustic (calibrated microphone) measurements are being referred to, then the units used will be dB SPL referenced to
20 micropascals = 0 dB SPL.
The A-weighting curve has been widely adopted for environmental noise measurement, and is standard in many sound level meters. The A-weighting system is used in any measurement of environmental noise (examples of which include roadway noise, rail noise, aircraft noise). A-weighting is also in common use for assessing potential hearing damage caused by loud noise, including noise dose measurements at work. A noise level of more than 85 dB(A) each day increases the risk factor for hearing damage.
A-weighted sound power levels "L"WA are increasingly found on sales literature for domestic appliances such as refrigerators, freezers and computer fans.
With some simplifications, the sound pressure level (SPL) expected at a given distance, as measured with a sound level meter, can be calculated from the sound power level.
In Europe, the A-weighted noise level is used for instance for normalizing the noise of tires on cars.
Noise exposure for visitors of venues with loud music is usually also expressed in dB(A), although the presence of high levels of low frequency noise does not justify this.
Audio reproduction and broadcasting equipment.
Although the A-weighting curve, in widespread use for noise measurement, is said to have been based on the 40-phon Fletcher-Munson curve, research in the 1960s demonstrated that determinations of equal-loudness made using pure tones are not directly relevant to our perception of noise. This is because the cochlea in our inner ear analyses sounds in terms of spectral content, each hair cell responding to a narrow band of frequencies known as a critical band. The high-frequency bands are wider in absolute terms than the low-frequency bands, and therefore 'collect' proportionately more power from a noise source. However, when more than one critical band is stimulated, the outputs of the various bands are summed by the brain to produce an impression of loudness. For these reasons equal-loudness curves derived using noise bands show an upwards tilt above 1 kHz and a downward tilt below 1 kHz when compared to the curves derived using pure tones.
This enhanced sensitivity to noise in the region of 6 kHz became particularly apparent in the late 1960s with the introduction of compact cassette recorders and Dolby-B noise reduction. A-weighted noise measurements were found to give misleading results because they did not give sufficient prominence to the 6 kHz region where the noise reduction was having its greatest effect, and did not sufficiently attenuate noise around 10 kHz and above (a particular example is the 19 kHz pilot tone on FM radio systems which, though usually inaudible, is not sufficiently attenuated by A-weighting). As a result, one piece of equipment could measure worse than another and yet sound better, because of differing spectral content.
ITU-R 468 noise weighting was therefore developed to more accurately reflect the subjective loudness of all types of noise, as opposed to tones. This curve came out of work done by the BBC Research Department, was standardised by the CCIR and later adopted by many other standards bodies (IEC, BSI), and, as of 2006, is maintained by the ITU. It became widely used in Europe, especially in broadcasting, and was adopted by Dolby Laboratories, who realised its superior validity for their purposes when measuring noise on film soundtracks and compact cassette systems. Its advantages over A-weighting are less accepted in the US, where the use of A-weighting still predominates. It is used by broadcasters in Britain, Europe, and former countries of the British Empire such as Australia and South Africa.
Function realisation of some common weightings.
The standard defines weightings (formula_0) in dB units by tables with tolerance limits (to allow a variety of implementations). Additionally, the standard describes weighting functions formula_1 to calculate the weightings. The weighting function formula_1 is applied to the amplitude spectrum (not the intensity spectrum) of the unweighted sound level. The offsets ensure the normalisation to 0 dB at 1000 Hz. Appropriate weighting functions are:
formula_2
formula_3
formula_4
formula_5
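As an illustration, the closed-form expression for formula_2 above can be evaluated directly; the following Python sketch (function names are illustrative) reproduces the usual A-weighting values, for example about −19.1 dB at 100 Hz and 0 dB at 1 kHz.
<syntaxhighlight lang="python">
import math

def R_A(f):
    """Amplitude weighting R_A(f) from the closed-form expression above."""
    f2 = f * f
    num = 12194.0 ** 2 * f2 ** 2
    den = ((f2 + 20.6 ** 2)
           * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
           * (f2 + 12194.0 ** 2))
    return num / den

def A_weighting_db(f):
    """A(f) in dB, normalised so that A(1000) = 0 dB."""
    return 20.0 * math.log10(R_A(f)) - 20.0 * math.log10(R_A(1000.0))

print(round(A_weighting_db(1000.0), 2))  # 0.0
print(round(A_weighting_db(100.0), 1))   # about -19.1
</syntaxhighlight>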
Transfer function equivalent.
The gain curves can be realised by the following s-domain transfer functions. They are not defined in this way though, being defined by tables of values with tolerances in the standards documents, thus allowing different realisations:
formula_6
"k"A ≈ 7.39705 × 109
formula_7
"k"B ≈ 5.99185 × 109
formula_8
"k"C ≈ 5.91797 × 109
formula_9
"k"D ≈ 91104.32
The "k"-values are constants that are used to normalize the function to a gain of 1 (0 dB). The values listed above normalize the functions to 0 dB at 1 kHz, as they are typically used. (This normalization is shown in the image.)
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A(f), C(f)"
},
{
"math_id": 1,
"text": "R_X(f)"
},
{
"math_id": 2,
"text": "\\begin{align}\n R_A(f) &= {12194^2 f^4 \\over \\left(f^2 + 20.6^2\\right)\\ \\sqrt{\\left(f^2 + 107.7^2\\right)\\left(f^2 + 737.9^2\\right)}\\ \\left(f^2 + 12194^2\\right)}\\ ,\\\\[3pt]\n A(f) &= 20\\log_{10}\\left(R_A(f)\\right) - 20\\log_{10}\\left(R_A(1000)\\right) \\\\\n &\\approx 20\\log_{10}\\left(R_A(f)\\right) + 2.00\n\\end{align}"
},
{
"math_id": 3,
"text": "\\begin{align}\n R_B(f) &= {12194^2 f^3\\over \\left(f^2 + 20.6^2\\right)\\ \\sqrt{\\left(f^2 + 158.5^2\\right)} \\ \\left(f^2 + 12194^2\\right)}\\ ,\\\\[3pt]\n B(f) &= 20\\log_{10}\\left(R_B(f)\\right) - 20\\log_{10}\\left(R_B(1000)\\right) \\\\\n &\\approx 20\\log_{10}\\left(R_B(f)\\right) + 0.17\n\\end{align}"
},
{
"math_id": 4,
"text": "\\begin{align}\n R_C(f) &= {12194^2 f^2 \\over \\left(f^2 + 20.6^2\\right)\\ \\left(f^2 + 12194^2\\right)}\\ ,\\\\[3pt]\n C(f) &= 20\\log_{10}\\left(R_C(f)\\right) - 20\\log_{10}\\left(R_C(1000)\\right) \\\\[3pt]\n &\\approx 20\\log_{10}\\left(R_C(f)\\right) + 0.06\n\\end{align}"
},
{
"math_id": 5,
"text": "\\begin{align}\n h(f) &= \\frac{\\left(1037918.48 - f^2\\right)^2 + 1080768.16\\,f^2}{\\left(9837328 - f^2\\right)^2 + 11723776\\,f^2} \\\\[3pt]\n R_D(f) &= \\frac{f}{6.8966888496476 \\cdot 10^{-5}} \\sqrt{\\frac{h(f)}{\\left(f^2 + 79919.29\\right)\\left(f^2 + 1345600\\right)}} \\\\\n D(f) &= 20\\log_{10}\\left(R_D(f)\\right).\n\\end{align}"
},
{
"math_id": 6,
"text": "H_\\text{A}(s) \\approx {k_\\text{A} \\cdot s^4 \\over (s + 129.4)^2\\quad(s + 676.7)\\quad (s + 4636)\\quad (s + 76617)^2}"
},
{
"math_id": 7,
"text": "H_\\text{B}(s) \\approx {k_\\text{B} \\cdot s^3\\over(s + 129.4)^2\\quad (s + 995.9)\\quad (s + 76617)^2}"
},
{
"math_id": 8,
"text": "H_\\text{C}(s) \\approx {k_\\text{C} \\cdot s^2\\over(s + 129.4)^2\\quad (s + 76617)^2}"
},
{
"math_id": 9,
"text": "H_\\text{D}(s) \\approx {k_\\text{D} \\cdot s \\cdot \\left(s^2 + 6532 s + 4.0975 \\times 10^7\\right)\\over(s + 1776.3)\\quad (s + 7288.5)\\quad \\left(s^2 + 21514 s + 3.8836 \\times 10^8\\right)}"
}
] | https://en.wikipedia.org/wiki?curid=11217018 |
11219399 | Quasi-birth–death process | In queueing models, a discipline within the mathematical theory of probability, the quasi-birth–death process describes a generalisation of the birth–death process. As with the birth-death process it moves up and down between levels one at a time, but the time between these transitions can have a more complicated distribution encoded in the blocks.
Discrete time.
The stochastic matrix describing the Markov chain has block structure
formula_0
where each of "A"0, "A"1 and "A"2 are matrices and "A"*0, "A"*1 and "A"*2 are irregular matrices for the first and second levels.
Continuous time.
The transition rate matrix for a quasi-birth-death process has a tridiagonal block structure
formula_1
where each of "B"00, "B"01, "B"10, "A"0, "A"1 and "A"2 are matrices. The process can be viewed as a two dimensional chain where the block structure are called "levels" and the intra-block structure "phases". When describing the process by both level and phase it is a continuous-time Markov chain, but when considering levels only it is a semi-Markov process (as transition times are then not exponentially distributed).
Usually the blocks have finitely many phases, but models like the Jackson network can be considered as quasi-birth-death processes with infinitely (but countably) many phases.
Stationary distribution.
The stationary distribution of a quasi-birth-death process can be computed using the matrix geometric method.
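As a rough illustration of the matrix geometric method, levels in the repeating portion satisfy π"k"+1 = π"k" "R", where "R" is the minimal nonnegative solution of "A"2 + "RA"1 + "R"2"A"0 = 0 in the block notation above. The Python sketch below uses the simplest fixed-point iteration and assumes the blocks are given as NumPy arrays; practical implementations usually use faster schemes (e.g. logarithmic reduction), and the boundary levels must still be obtained from the boundary equations together with normalisation.
<syntaxhighlight lang="python">
import numpy as np

def solve_R(A0, A1, A2, tol=1e-12, max_iter=100_000):
    """Minimal nonnegative solution R of A2 + R A1 + R^2 A0 = 0 (continuous-time QBD),
    computed with the elementary fixed-point iteration R <- -(A2 + R^2 A0) A1^{-1}."""
    A1_inv = np.linalg.inv(A1)
    R = np.zeros_like(A1, dtype=float)
    for _ in range(max_iter):
        R_new = -(A2 + R @ R @ A0) @ A1_inv
        if np.max(np.abs(R_new - R)) < tol:
            return R_new
        R = R_new
    return R

# Levels beyond the boundary then follow the matrix-geometric form pi_{k+1} = pi_k @ R.
</syntaxhighlight>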
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P=\\begin{pmatrix}\nA_1^\\ast & A_2^\\ast \\\\\nA_0^\\ast & A_1 & A_2 \\\\\n& A_0 & A_1 & A_2 \\\\\n&& A_0 & A_1 & A_2 \\\\\n&&& \\ddots & \\ddots & \\ddots\n\\end{pmatrix}"
},
{
"math_id": 1,
"text": "Q=\\begin{pmatrix}\nB_{00} & B_{01} \\\\\nB_{10} & A_1 & A_2 \\\\\n& A_0 & A_1 & A_2 \\\\\n&& A_0 & A_1 & A_2 \\\\\n&&& A_0 & A_1 & A_2 \\\\\n&&&& \\ddots & \\ddots & \\ddots\n\\end{pmatrix}"
}
] | https://en.wikipedia.org/wiki?curid=11219399 |
11219603 | Bracket (mathematics) | Brackets as used in mathematical notation
In mathematics, brackets of various typographical forms, such as parentheses ( ), square brackets [ ], braces { } and angle brackets ⟨ ⟩, are frequently used in mathematical notation. Generally, such bracketing denotes some form of grouping: in evaluating an expression containing a bracketed sub-expression, the operators in the sub-expression take precedence over those surrounding it. Sometimes, for the clarity of reading, different kinds of brackets are used to express the same meaning of precedence in a single expression with deep nesting of sub-expressions.
Historically, other notations, such as the vinculum, were similarly used for grouping. In present-day use, these notations all have specific meanings. The earliest use of brackets to indicate aggregation (i.e. grouping) was suggested in 1608 by Christopher Clavius, and in 1629 by Albert Girard.
Symbols for representing angle brackets.
A variety of different symbols are used to represent angle brackets. In e-mail and other ASCII text, it is common to use the less-than (codice_0) and greater-than (codice_1) signs to represent angle brackets, because ASCII does not include angle brackets.
Unicode has pairs of dedicated characters; other than less-than and greater-than symbols, these include:
In LaTeX the markup is codice_2 and codice_3: formula_0.
Non-mathematical angled brackets include:
There are additional dingbats with increased line thickness, as well as a number of angle quotation marks and deprecated characters.
Algebra.
In elementary algebra, parentheses ( ) are used to specify the order of operations. Terms inside the bracket are evaluated first; hence 2×(3 + 4) is 14, 20 ÷ (5(1 + 1)) is 2 and (2×3) + 4 is 10. This notation is extended to cover more general algebra involving variables: for example ("x" + "y") × ("x" − "y"). Square brackets are also often used in place of a second set of parentheses when they are nested—so as to provide a visual distinction.
In mathematical expressions in general, parentheses are also used to indicate grouping (i.e., which parts belong together) when needed to avoid ambiguities and improve clarity. For example, in the formula formula_1, used in the definition of composition of two natural transformations, the parentheses around formula_2 serve to indicate that the indexing by "formula_3" is applied to the composition formula_2, and not just its last component formula_4.
Functions.
The arguments to a function are frequently surrounded by brackets: formula_5. With some standard functions, when there is little chance of ambiguity, it is common to omit the parentheses around the argument altogether (e.g., formula_6). Note that this is never done with a general function formula_7, in which case the parentheses are always included.
Coordinates and vectors.
In the Cartesian coordinate system, brackets are used to specify the coordinates of a point. For example, (2,3) denotes the point with "x"-coordinate 2 and "y"-coordinate 3.
The inner product of two vectors is commonly written as formula_8, but the notation ("a", "b") is also used.
Intervals.
Both parentheses, ( ), and square brackets, [ ], can also be used to denote an interval. The notation formula_9 is used to indicate an interval from a to c that is inclusive of formula_10—but exclusive of formula_11. That is, formula_12 would be the set of all real numbers between 5 and 12, including 5 but not 12. Here, the numbers may come as close as they like to 12, including 11.999 and so forth (with any finite number of 9s), but 12.0 is not included.
In some European countries, the notation formula_13 is also used for this, and wherever comma is used as decimal separator, semicolon might be used as a separator to avoid ambiguity (e.g., formula_14).
The endpoint adjoining the square bracket is known as "closed", while the endpoint adjoining the parenthesis is known as "open". If both types of brackets are the same, the entire interval may be referred to as "closed" or "open" as appropriate. Whenever infinity or negative infinity is used as an endpoint (in the case of intervals on the real number line), it is always considered "open" and adjoined to a parenthesis. The endpoint can be closed when considering intervals on the extended real number line.
A common convention in discrete mathematics is to define formula_15 as the set of positive integers less than or equal to formula_16. That is, formula_17 would correspond to the set formula_18.
Sets and groups.
Braces { } are used to identify the elements of a set. For example, {"a","b","c"} denotes a set of three elements "a", "b" and "c".
Angle brackets ⟨ ⟩ are used in group theory and commutative algebra to specify group presentations, and to denote the subgroup or ideal generated by a collection of elements.
Matrices.
An explicitly given matrix is commonly written between large round or square brackets:
formula_19
Derivatives.
The notation
formula_20
stands for the "n"-th derivative of function "f", applied to argument "x". So, for example, if formula_21, then formula_22. This is to be contrasted with formula_23, the "n"-fold application of "f" to argument "x".
Falling and rising factorial.
The notation formula_24 is used to denote the "falling factorial", an "n"-th degree polynomial defined by
formula_25
Alternatively, the same notation may be encountered as representing the "rising factorial", also called "Pochhammer symbol". Another notation for the same is formula_26. It can be defined by
formula_27
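For a quick numerical check of the two conventions, a small Python sketch (function names are illustrative):
<syntaxhighlight lang="python">
def falling_factorial(x, n):
    """(x)_n = x (x - 1) ... (x - n + 1)"""
    result = 1
    for k in range(n):
        result *= x - k
    return result

def rising_factorial(x, n):
    """x^(n) = x (x + 1) ... (x + n - 1), the Pochhammer symbol"""
    result = 1
    for k in range(n):
        result *= x + k
    return result

print(falling_factorial(7, 3))  # 7 * 6 * 5 = 210
print(rising_factorial(7, 3))   # 7 * 8 * 9 = 504
</syntaxhighlight>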
Quantum mechanics.
In quantum mechanics, angle brackets are also used as part of Dirac's formalism, bra–ket notation, to denote vectors from the dual spaces of the bra formula_28 and the ket formula_29.
In statistical mechanics, angle brackets denote ensemble or time average.
Polynomial rings.
Square brackets are used to contain the variable(s) in polynomial rings. For example, formula_30 is the ring of polynomials with real number coefficients and variable formula_31.
Subring generated by an element or collection of elements.
If A is a subring of a ring B, and b is an element of B, then "A"["b"] denotes the subring of B generated by A and b. This subring consists of all the elements that can be obtained, starting from the elements of A and b, by repeated addition and multiplication; equivalently, it is the smallest subring of B that contains A and b. For example, formula_32 is the smallest subring of C containing all the integers and formula_33; it consists of all numbers of the form formula_34, where m and n are arbitrary integers. Another example: formula_35 is the subring of Q consisting of all rational numbers whose denominator is a power of 2.
More generally, if A is a subring of a ring B, and formula_36, then formula_37 denotes the subring of B generated by A and formula_36. Even more generally, if S is a subset of B, then "A"["S"] is the subring of B generated by A and S.
Lie bracket and commutator.
In group theory and ring theory, square brackets are used to denote the commutator. In group theory, the commutator ["g","h"] is commonly defined as "g"−1"h"−1"gh". In ring theory, the commutator ["a","b"] is defined as "ab" − "ba". Furthermore, braces may be used to denote the anticommutator: {"a","b"} is defined as "ab" + "ba".
The Lie bracket of a Lie algebra is a binary operation denoted by formula_38. By using the commutator as a Lie bracket, every associative algebra can be turned into a Lie algebra. There are many different forms of Lie bracket, in particular the Lie derivative and the Jacobi–Lie bracket.
Floor/ceiling functions and fractional part.
The floor and ceiling functions are usually typeset with left and right square brackets where only the lower (for floor function) or upper (for ceiling function) horizontal bars are displayed, as in ⌊π⌋ = 3 or ⌈π⌉ = 4. However, square brackets, as in [π] = 3, are sometimes used to denote the floor function, which rounds a real number down to the next integer. Conversely, some authors use outwards pointing square brackets to denote the ceiling function, as in ]π[ = 4.
Braces, as in {π} < 1/7, may denote the fractional part of a real number. | [
{
"math_id": 0,
"text": "\\langle\\ \\rangle"
},
{
"math_id": 1,
"text": "(\\varepsilon \\eta)_X = \\varepsilon_{Cod \\, \\eta_X}\\eta_X"
},
{
"math_id": 2,
"text": "\\varepsilon \\eta"
},
{
"math_id": 3,
"text": "X"
},
{
"math_id": 4,
"text": "\\eta"
},
{
"math_id": 5,
"text": "f(x) "
},
{
"math_id": 6,
"text": "\\sin x"
},
{
"math_id": 7,
"text": "f "
},
{
"math_id": 8,
"text": " \\langle a, b\\rangle"
},
{
"math_id": 9,
"text": "[a, c)"
},
{
"math_id": 10,
"text": "a"
},
{
"math_id": 11,
"text": "c"
},
{
"math_id": 12,
"text": "[5, 12)"
},
{
"math_id": 13,
"text": "[5,12["
},
{
"math_id": 14,
"text": "(0 ; 1)"
},
{
"math_id": 15,
"text": "[n]"
},
{
"math_id": 16,
"text": "n"
},
{
"math_id": 17,
"text": "[5]"
},
{
"math_id": 18,
"text": "\\{1,2,3,4,5\\}"
},
{
"math_id": 19,
"text": "\\begin{pmatrix}\n1 & -1 \\\\\n2 & 3 \\end{pmatrix}\n\\quad\\quad\\begin{bmatrix}\nc & d \\end{bmatrix}\n"
},
{
"math_id": 20,
"text": "f^{(n)}(x)"
},
{
"math_id": 21,
"text": "f(x) = \\exp(\\lambda x)"
},
{
"math_id": 22,
"text": "f^{(n)}(x) = \\lambda^n\\exp(\\lambda x)"
},
{
"math_id": 23,
"text": "f^n(x) = f(f(\\ldots(f(x))\\ldots))"
},
{
"math_id": 24,
"text": "(x)_n"
},
{
"math_id": 25,
"text": "(x)_n=x(x-1)(x-2)\\cdots(x-n+1)=\\frac{x!}{(x-n)!}."
},
{
"math_id": 26,
"text": "x^{(n)}"
},
{
"math_id": 27,
"text": "x^{(n)}=x(x+1)(x+2)\\cdots(x+n-1)=\\frac{(x+n-1)!}{(x-1)!}."
},
{
"math_id": 28,
"text": "\\left\\langle A\\right|"
},
{
"math_id": 29,
"text": "\\left|B\\right\\rangle"
},
{
"math_id": 30,
"text": "\\mathbb{R}[x]"
},
{
"math_id": 31,
"text": "x"
},
{
"math_id": 32,
"text": "\\mathbf{Z}[\\sqrt{-2}]"
},
{
"math_id": 33,
"text": "\\sqrt{-2}"
},
{
"math_id": 34,
"text": "m+n\\sqrt{-2}"
},
{
"math_id": 35,
"text": "\\mathbf{Z}[1/2]"
},
{
"math_id": 36,
"text": "b_1,\\ldots,b_n \\in B"
},
{
"math_id": 37,
"text": "A[b_1,\\ldots,b_n]"
},
{
"math_id": 38,
"text": "[\\cdot,\\cdot]:\\mathfrak{g}\\times\\mathfrak{g}\\to\\mathfrak{g}"
}
] | https://en.wikipedia.org/wiki?curid=11219603 |
1122242 | Dedekind sum | In mathematics, Dedekind sums are certain sums of products of a sawtooth function, and are given by a function "D" of three integer variables. Dedekind introduced them to express the functional equation of the Dedekind eta function. They have subsequently been much studied in number theory, and have occurred in some problems of topology. Dedekind sums have a large number of functional equations; this article lists only a small fraction of these.
Dedekind sums were introduced by Richard Dedekind in a commentary on fragment XXVIII of Bernhard Riemann's collected papers.
Definition.
Define the sawtooth function formula_0 as
formula_1
We then let
formula_2
be defined by
formula_3
the terms on the right being the Dedekind sums. For the case "a" = 1, one often writes
"s"("b", "c") = "D"(1, "b"; "c").
Simple formulae.
Note that "D" is symmetric in "a" and "b", and hence
formula_4
and that, by the oddness of (( )),
"D"(−"a", "b"; "c") = −"D"("a", "b"; "c"),
"D"("a", "b"; −"c") = "D"("a", "b"; "c").
By the periodicity of "D" in its first two arguments, the third argument being the length of the period for both,
"D"("a", "b"; "c") = "D"("a"+"kc", "b"+"lc"; "c"), for all integers "k","l".
If "d" is a positive integer, then
"D"("ad", "bd"; "cd") = "dD"("a", "b"; "c"),
"D"("ad", "bd"; "c") = "D"("a", "b"; "c"), if ("d", "c") = 1,
"D"("ad", "b"; "cd") = "D"("a", "b"; "c"), if ("d", "b") = 1.
There is a proof for the last equality making use of
formula_5
Furthermore, "az" = 1 (mod "c") implies "D"("a", "b"; "c") = "D"(1, "bz"; "c").
Alternative forms.
If "b" and "c" are coprime, we may write "s"("b", "c") as
formula_6
where the sum extends over the "c"-th roots of unity other than 1, i.e. over all formula_7 such that formula_8 and formula_9.
If "b", "c" > 0 are coprime, then
formula_10
Reciprocity law.
If "b" and "c" are coprime positive integers then
formula_11
Rewriting this as
formula_12
it follows that the number 6"c" "s"("b","c") is an integer.
If "k" = (3, "c") then
formula_13
and
formula_14
A relation that is prominent in the theory of the Dedekind eta function is the following. Let "q" = 3, 5, 7 or 13 and let "n" = 24/("q" − 1). Then given integers "a", "b", "c", "d" with "ad" − "bc" = 1 (thus belonging to the modular group), with "c" chosen so that "c" = "kq" for some integer "k" > 0, define
formula_15
Then "n"δ is an even integer.
Rademacher's generalization of the reciprocity law.
Hans Rademacher found the following generalization of the reciprocity law for Dedekind sums: If "a", "b", and "c" are pairwise coprime positive integers, then
formula_16
Hence, the above triple sum vanishes if and only if ("a", "b", "c") is a Markov triple, i.e. a solution of the Markov equation
formula_17 | [
{
"math_id": 0,
"text": "(\\!( \\, )\\!) : \\mathbb{R} \\rightarrow \\mathbb{R}"
},
{
"math_id": 1,
"text": "(\\!(x)\\!)=\\begin{cases}\nx-\\lfloor x\\rfloor - 1/2, &\\mbox{if }x\\in\\mathbb{R}\\setminus\\mathbb{Z};\\\\\n0,&\\mbox{if }x\\in\\mathbb{Z}.\n\\end{cases}"
},
{
"math_id": 2,
"text": "D: \\mathbb{Z}^2\\times (\\mathbb{Z}-\\{0\\})\\to \\mathbb{R}"
},
{
"math_id": 3,
"text": "D(a,b;c)=\\sum_{n=1}^{c-1} \\left(\\!\\!\\left( \\frac{an}{c} \\right)\\!\\!\\right) \\! \\left(\\!\\!\\left( \\frac{bn}{c} \\right)\\!\\!\\right),"
},
{
"math_id": 4,
"text": "D(a,b;c)=D(b,a;c),"
},
{
"math_id": 5,
"text": "\\sum_{n=1}^{c-1} \\left( \\!\\!\\left( \\frac{n+x}{c} \\right) \\!\\!\\right)= (\\!( x )\\!),\\qquad\\forall x\\in\\mathbb{R}."
},
{
"math_id": 6,
"text": "s(b,c)=\\frac{-1}{c} \\sum_\\omega\n\\frac{1} { (1-\\omega^b) (1-\\omega ) } \n+\\frac{1}{4} - \\frac{1}{4c},"
},
{
"math_id": 7,
"text": "\\omega"
},
{
"math_id": 8,
"text": "\\omega^c=1"
},
{
"math_id": 9,
"text": "\\omega\\not=1"
},
{
"math_id": 10,
"text": "s(b,c)=\\frac{1}{4c}\\sum_{n=1}^{c-1} \n\\cot \\left(\\frac{\\pi n}{c}\\right)\n\\cot \\left(\\frac{\\pi nb}{c}\\right).\n"
},
{
"math_id": 11,
"text": "s(b,c)+s(c,b) =\\frac{1}{12}\\left(\\frac{b}{c}+\\frac{1}{bc}+\\frac{c}{b}\\right)-\\frac{1}{4}."
},
{
"math_id": 12,
"text": "12bc \\left( s(b,c) + s(c,b) \\right) = b^2 + c^2 - 3bc + 1,"
},
{
"math_id": 13,
"text": "12bc\\, s(c,b)=0 \\mod kc"
},
{
"math_id": 14,
"text": "12bc\\, s(b,c)=b^2+1 \\mod kc."
},
{
"math_id": 15,
"text": "\\delta = s(a,c) - \\frac{a+d}{12c} - s(a,k) + \\frac{a+d}{12k}"
},
{
"math_id": 16,
"text": "D(a,b;c)+D(b,c;a)+D(c,a;b)=\\frac{1}{12}\\frac{a^2+b^2+c^2}{abc}-\\frac{1}{4}."
},
{
"math_id": 17,
"text": "a^2+b^2+c^2=3abc."
}
] | https://en.wikipedia.org/wiki?curid=1122242 |
11224683 | Key selection vector | A Key Selection Vector (KSV) is a numerical identifier associated with a Device Key Set which is distributed by a Licensor or its designee to Adopters and is used to support authentication of Licensed Products and Revocation as part of the HDCP copy protection system. The KSV is used to generate confidential keys, specifically used in the Restricted Authentication process of HDCP. Restricted Authentication is an AKE method for devices with limited computing resources. This method is used by copying devices of any kind (such as DV recorders or D-VHS recorders) and devices communicating with them for authenticating protected content. The restricted authentication protocol uses asymmetric key management and common key cryptography, and relies on the use of shared secrets and hash functions to respond to a random challenge.
Restricted Authentication Protocol.
The goal of Restricted Authentication is for a device to prove that it holds a secret shared with other devices. One device authenticates another by issuing a random challenge for which the response is generated by combining the shared secrets and multiple hashes. Formally, a Key Selection Vector is a 40-bit vector containing 20 ones and 20 zeros, and is used to specify the random challenge. The Device Key Set is a collection of 40 56-bit values, and is the set of shared secrets for this protocol.
During the authentication process, both parties (a transmitter and a receiver) exchange their KSVs. Then each device adds (unsigned addition modulo formula_0) its own device secret keys according to a KSV received from another device. If a particular bit in the KSV is set to 1, then the corresponding secret key is used in the addition and otherwise it is ignored. For each set of keys a special key called a KSV (Key Selection Vector) is created. Each KSV has exactly 20 bits set to 0 and 20 bits set to 1. Keys and KSVs are generated in such a way that during this process both devices get the same 56 bit number as a result. That number is later used in the encryption process.
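A minimal sketch of this key-selection-and-addition step is shown below (Python, illustrative names only — not a working HDCP implementation). It assumes the licensor-generated Device Key Set is available as a list of forty 56-bit integers, and relies on the fact that key generation by the licensor guarantees that both devices arrive at the same 56-bit value.
<syntaxhighlight lang="python">
MASK_56 = (1 << 56) - 1  # unsigned addition modulo 2**56

def shared_secret(device_keys, peer_ksv):
    """Add the device keys selected by the set bits of the other device's KSV.

    device_keys: list of forty 56-bit integers (the Device Key Set)
    peer_ksv:    40-bit integer with exactly 20 bits set (the other device's KSV)
    """
    assert len(device_keys) == 40
    assert bin(peer_ksv).count("1") == 20
    total = 0
    for i in range(40):
        if (peer_ksv >> i) & 1:            # bit i of the KSV selects key i
            total = (total + device_keys[i]) & MASK_56
    return total
</syntaxhighlight>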
Uniqueness and Revocation of KSVs.
Since valid keys can become compromised (hacked, for instance through reverse engineering hardware), the HDCP scheme includes a mechanism to revoke keys. The KSV values are unique to each key set and, therefore to each device. The HDCP system can then compare these values to a revocation list, and authentication fails if either the transmitter or receiver appears on the revocation list. Updates to the revocation list arrive with new media and are automatically integrated into a device's revocation list. This means that damage can be limited if a key set is exposed or copied.
This revocation process does not affect other devices, even if the devices are of the same make and model. KSV values are similar to serial numbers in this sense. As an example of how this system works, if two customers were to buy the same model of television on the same day at the same store, and the first customer hacked their television, the first customer's key could be revoked without affecting the ability of the other customer's television to play content.
Attacks on Restricted Authentication.
If an attacker can find 40 linearly independent vectors (formula_1)keys ... (formula_2)keys (i.e., the vectors generated by adding together a device's Device Key Set based on a KSV), then they can completely break the HDCP system for all devices using a given Device Key Set. At this point, they can extract the secret key array for any number of KSVs, which allows them to access the shared secrets used in the HDCP authentication protocol. Since the keys generated from the KSVs are produced linearly in the given system (i.e. getting a key from a KSV can be viewed as matrix multiplication), someone could determine the Device Key Set matrix from any 40-50 different systems: formula_1 ... formula_3, and the associated KSVs (this is public information from the protocol).
In other cases where the extracted keys are not linearly independent, it is still possible to create a new XKey for a new Xksv that is within the span of the (formula_4)KSVs (by taking linear combinations) for which the private keys have been found. There will be, however, no guarantee of them satisfying the required property that a KSV must have: 20 ones and 20 zeros.
Setting up the Equations.
Assuming there are 40 (formula_4) KSVs that are linearly independent (and naming Xkeys the matrix of the keys in the Device Key Set), this gives a set of n linear equations on 40 unknowns –
[Xkeys] * (A1)ksv = [(A1)keys] * Xksv
[Xkeys] * (A2)ksv = [(A2)keys] * Xksv
...
[Xkeys] * (A40)ksv = [(A40)keys] * Xksv
By having knowledge of all the KSVs, and assuming the secret key vectors (formula_4)keys are known, the above equations can be used to find the secret keys needed to produce a new derived key from an arbitrary new KSV. If the space spanned by the (formula_4)KSVs doesn't span the full 40-dimensional space, this may still be acceptable: either the KSVs were not designed to span the full space, or only a small number of extra keys are needed to find a set of vectors spanning it. Each additional device has low odds of being linearly dependent with the existing set (roughly 1/2^[40 − dimensionality of spanned space]). This analysis of probabilities of linear dependence is similar to the analysis of Simon's Algorithm.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "2^{56}"
},
{
"math_id": 1,
"text": "A_1"
},
{
"math_id": 2,
"text": "A_{40}"
},
{
"math_id": 3,
"text": "A_n"
},
{
"math_id": 4,
"text": "A_i"
}
] | https://en.wikipedia.org/wiki?curid=11224683 |
11227519 | Symmetric rank-one | The Symmetric Rank 1 (SR1) method is a quasi-Newton method to update the second derivative (Hessian)
based on the derivatives (gradients) calculated at two points. It is a generalization of the secant method to a multidimensional problem.
This update maintains the "symmetry" of the matrix but does "not" guarantee that the update be "positive definite".
The sequence of Hessian approximations generated by the SR1 method converges to the true Hessian under mild conditions, in theory; in practice, the approximate Hessians generated by the SR1 method show faster progress towards the true Hessian than do popular alternatives (BFGS or DFP), in preliminary numerical experiments. The SR1 method has computational advantages for sparse or partially separable problems.
A twice continuously differentiable function formula_0 has a gradient (formula_1) and Hessian matrix formula_2. The function formula_3 has an expansion as a Taylor series at formula_4, which can be truncated
formula_5;
its gradient has a Taylor-series approximation also
formula_6,
which is used to update formula_2. The above secant-equation need not have a unique solution formula_2.
The SR1 formula computes (via an update of rank 1) the symmetric solution that is closest to the current approximate-value formula_7:
formula_8,
where
formula_9.
The corresponding update to the approximate inverse-Hessian formula_10 is
formula_11.
One might wonder why positive-definiteness is not preserved — after all, a rank-1 update of the form formula_12 is positive-definite if formula_7 is. The explanation is that the update might be of the form formula_13 instead because the denominator can be negative, and in that case there are no guarantees about positive-definiteness.
The SR1 formula has been rediscovered a number of times. Since the denominator can vanish, some authors have suggested that the update be applied only if
formula_14,
where formula_15 is a small number, e.g. formula_16.
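A minimal NumPy sketch of one SR1 update with this safeguard (names illustrative):
<syntaxhighlight lang="python">
import numpy as np

def sr1_update(B, dx, y, r=1e-8):
    """Return the SR1-updated Hessian approximation B_{k+1}.

    B  : current symmetric approximation (n x n)
    dx : step x_{k+1} - x_k
    y  : gradient difference grad f(x_{k+1}) - grad f(x_k)
    r  : small threshold; the update is skipped when the denominator is too small
    """
    v = y - B @ dx
    denom = v @ dx
    if abs(denom) < r * np.linalg.norm(dx) * np.linalg.norm(v):
        return B  # skip the update to avoid a vanishing denominator
    return B + np.outer(v, v) / denom
</syntaxhighlight>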
Limited Memory.
The SR1 update maintains a dense matrix, which can be prohibitive for large problems. Similar to the
L-BFGS method, a limited-memory SR1 (L-SR1) algorithm also exists. Instead
of storing the full Hessian approximation, an L-SR1 method only stores the formula_17 most recent
pairs formula_18, where formula_19 and formula_17 is an integer much smaller
than the problem size (formula_20). The limited-memory matrix is
formula_21
formula_22
formula_23
formula_24
Since the update can be indefinite, the L-SR1 algorithm is
suitable for a trust-region strategy. Because of the limited-memory matrix,
the trust-region L-SR1 algorithm scales linearly with the problem size, just like L-BFGS.
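The compact representation above can be assembled directly from the stored pairs. The following sketch (illustrative, with S and Y holding the m stored steps and gradient differences as columns) builds the dense n × n matrix only for exposition; trust-region L-SR1 implementations work with the factors J and N instead of forming B explicitly.
<syntaxhighlight lang="python">
import numpy as np

def lsr1_matrix(B0, S, Y):
    """Limited-memory SR1 matrix B_k = B0 + J N^{-1} J^T (see the formulas above)."""
    J = Y - B0 @ S                      # J_k = Y_k - B0 S_k
    StY = S.T @ Y
    D = np.diag(np.diag(StY))           # D_k: diagonal of S^T Y
    L = np.tril(StY, k=-1)              # L_k: strictly lower-triangular part of S^T Y
    N = D + L + L.T - S.T @ B0 @ S      # N_k
    return B0 + J @ np.linalg.solve(N, J.T)
</syntaxhighlight>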
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x \\mapsto f(x)"
},
{
"math_id": 1,
"text": "\\nabla f"
},
{
"math_id": 2,
"text": "B"
},
{
"math_id": 3,
"text": "f"
},
{
"math_id": 4,
"text": "x_0"
},
{
"math_id": 5,
"text": "f(x_0+\\Delta x) \\approx f(x_0)+\\nabla f(x_0)^T \\Delta x+\\frac{1}{2} \\Delta x^T {B} \\Delta x "
},
{
"math_id": 6,
"text": "\\nabla f(x_0+\\Delta x) \\approx \\nabla f(x_0)+B \\Delta x"
},
{
"math_id": 7,
"text": "B_k"
},
{
"math_id": 8,
"text": "B_{k+1}=B_{k}+\\frac {(y_k-B_k \\Delta x_k) (y_k-B_k \\Delta x_k)^T}{(y_k-B_k \\Delta x_k)^T \\Delta x_k}"
},
{
"math_id": 9,
"text": "y_k=\\nabla f(x_k+\\Delta x_k)-\\nabla f(x_k)"
},
{
"math_id": 10,
"text": "H_k=B_k^{-1}"
},
{
"math_id": 11,
"text": "H_{k+1}=H_{k}+\\frac {(\\Delta x_k-H_k y_k)(\\Delta x_k-H_k y_k)^T}{(\\Delta x_k-H_k y_k)^T y_k}"
},
{
"math_id": 12,
"text": "B_{k+1} = B_k + vv^T"
},
{
"math_id": 13,
"text": "B_{k+1} = B_k - vv^T"
},
{
"math_id": 14,
"text": "|\\Delta x_k^T (y_k-B_k \\Delta x_k)|\\geq r \\|\\Delta x_k\\|\\cdot \\|y_k-B_k \\Delta x_k\\| "
},
{
"math_id": 15,
"text": "r\\in(0,1)"
},
{
"math_id": 16,
"text": "10^{-8}"
},
{
"math_id": 17,
"text": "m"
},
{
"math_id": 18,
"text": " \\{(s_i, y_i) \\}_{i=k-m}^{k-1} "
},
{
"math_id": 19,
"text": "\\Delta x_i := s_i "
},
{
"math_id": 20,
"text": "m \\ll n "
},
{
"math_id": 21,
"text": "\nB_k = B_0 + J_k N^{-1}_k J^T_k, \\quad J_k = Y_k-B_0 S_k, \\quad N_k =\nD_k+L_k+L^T_k-S^T_k B_0 S_k \n"
},
{
"math_id": 22,
"text": " S_k = \\begin{bmatrix} s_{k-m} & s_{k-m+1} & \\ldots & s_{k-1} \\end{bmatrix}, "
},
{
"math_id": 23,
"text": " Y_k = \\begin{bmatrix} y_{k-m} & y_{k-m+1} & \\ldots & y_{k-1} \\end{bmatrix}, "
},
{
"math_id": 24,
"text": " \\big(L_k\\big)_{ij} = s^T_{i-1}y_{j-1}, \\quad D_k = s^T_{i-1}y_{i-1}, \\quad k-m \\le i \\le k-1 "
}
] | https://en.wikipedia.org/wiki?curid=11227519 |
1122854 | Equilibrium constant | Chemical property
The equilibrium constant of a chemical reaction is the value of its reaction quotient at chemical equilibrium, a state approached by a dynamic chemical system after sufficient time has elapsed at which its composition has no measurable tendency towards further change. For a given set of reaction conditions, the equilibrium constant is independent of the initial analytical concentrations of the reactant and product species in the mixture. Thus, given the initial composition of a system, known equilibrium constant values can be used to determine the composition of the system at equilibrium. However, reaction parameters like temperature, solvent, and ionic strength may all influence the value of the equilibrium constant.
A knowledge of equilibrium constants is essential for the understanding of many chemical systems, as well as biochemical processes such as oxygen transport by hemoglobin in blood and acid–base homeostasis in the human body.
Stability constants, formation constants, binding constants, association constants and dissociation constants are all types of equilibrium constants.
Basic definitions and properties.
For a system undergoing a reversible reaction described by the general chemical equation
formula_0
a thermodynamic equilibrium constant, denoted by formula_1, is defined to be the value of the reaction quotient "Qt" when forward and reverse reactions occur at the same rate. At chemical equilibrium, the chemical composition of the mixture does not change with time, and the Gibbs free energy change formula_2 for the reaction is zero. If the composition of a mixture at equilibrium is changed by addition of some reagent, a new equilibrium position will be reached, given enough time. An equilibrium constant is related to the composition of the mixture at equilibrium by
formula_3
formula_4
where {X} denotes the thermodynamic activity of reagent X at equilibrium, [X] the numerical value of the corresponding concentration in moles per liter, and γ the corresponding activity coefficient. If X is a gas, instead of [X] the numerical value of the partial pressure formula_5 in bar is used. If it can be assumed that the quotient of activity coefficients, formula_6, is constant over a range of experimental conditions, such as pH, then an equilibrium constant can be derived as a quotient of concentrations.
formula_7
An equilibrium constant is related to the standard Gibbs free energy change of reaction formula_8 by
formula_9
where "R" is the universal gas constant, "T" is the absolute temperature (in kelvins), and ln is the natural logarithm. This expression implies that formula_1 must be a pure number and cannot have a dimension, since logarithms can only be taken of pure numbers. formula_10 must also be a pure number. On the other hand, the reaction quotient at equilibrium
formula_11
does have the dimension of concentration raised to some power (see Dimensionality, below). Such reaction quotients are often referred to, in the biochemical literature, as equilibrium constants.
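The relation between formula_1 and the standard Gibbs free energy change is often used numerically in both directions; a small Python sketch (values at 298.15 K, names illustrative):
<syntaxhighlight lang="python">
import math

R = 8.314462618   # gas constant, J mol^-1 K^-1
T = 298.15        # temperature, K

def delta_G_standard(K):
    """Standard Gibbs free energy change of reaction (J/mol) from K."""
    return -R * T * math.log(K)

def K_from_delta_G(dG):
    """Equilibrium constant from the standard Gibbs free energy change (J/mol)."""
    return math.exp(-dG / (R * T))

print(delta_G_standard(10.0))      # about -5708 J/mol, i.e. -5.7 kJ/mol
print(K_from_delta_G(-20_000.0))   # about 3.2e3
</syntaxhighlight>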
For an equilibrium mixture of gases, an equilibrium constant can be defined in terms of partial pressure or fugacity.
An equilibrium constant is related to the forward and backward rate constants, "k"f and "k"r of the reactions involved in reaching equilibrium:
formula_12
Types of equilibrium constants.
Cumulative and stepwise formation constants.
A cumulative or overall constant, given the symbol "β", is the constant for the formation of a complex from reagents. For example, the cumulative constant for the formation of ML2 is given by
M + 2 L ⇌ ML2; [ML2] = "β"12[M][L]2
The stepwise constant, "K", for the formation of the same complex from ML and L is given by
ML + L ⇌ ML2; [ML2] = "K"[ML][L] = "Kβ"11[M][L]2
It follows that
"β"12 = "Kβ"11
A cumulative constant can always be expressed as the product of stepwise constants. There is no agreed notation for stepwise constants, though a symbol such as "K" is sometimes found in the literature. It is best always to define each stability constant by reference to an equilibrium expression.
Competition method.
A particular use of a stepwise constant is in the determination of stability constant values outside the normal range for a given method. For example, EDTA complexes of many metals are outside the range for the potentiometric method. The stability constants for those complexes were determined by competition with a weaker ligand.
ML + L′ ⇌ ML′ + L formula_13
The formation constant of [Pd(CN)4]2− was determined by the competition method.
Association and dissociation constants.
In organic chemistry and biochemistry it is customary to use p"K"a values for acid dissociation equilibria.
formula_14
where "log" denotes a logarithm to base 10 or common logarithm, and "K"diss is a stepwise acid dissociation constant. For bases, the base association constant, p"K"b is used. For any given acid or base the two constants are related by p"K"a + p"K"b = p"K"w, so p"K"a can always be used in calculations.
On the other hand, stability constants for metal complexes, and binding constants for host–guest complexes are generally expressed as association constants. When considering equilibria such as
M + HL ⇌ ML + H
it is customary to use association constants for both ML and HL. Also, in generalized computer programs dealing with equilibrium constants it is general practice to use cumulative constants rather than stepwise constants and to omit ionic charges from equilibrium expressions. For example, if NTA, nitrilotriacetic acid, N(CH2CO2H)3 is designated as H3L and forms complexes ML and MHL with a metal ion M, the following expressions would apply for the dissociation constants.
formula_15
The cumulative association constants can be expressed as
formula_16
Note how the subscripts define the stoichiometry of the equilibrium product.
Micro-constants.
When two or more sites in an asymmetrical molecule may be involved in an equilibrium reaction there is more than one possible equilibrium constant. For example, the molecule L-DOPA has two non-equivalent hydroxyl groups which may be deprotonated. Denoting L-DOPA as LH2, the following diagram shows all the species that may be formed (X = CH2CH(NH2)CO2H).
The concentration of the species LH is equal to the sum of the concentrations of the two micro-species with the same chemical formula, labelled L1H and L2H. The constant "K"2 is for a reaction with these two micro-species as products, so that [LH] = [L1H] + [L2H] appears in the numerator, and it follows that this macro-constant is equal to the sum of the two micro-constants for the component reactions.
"K"2 = "k"21 + "k"22
However, the constant "K"1 is for a reaction with these two micro-species as reactants, and [LH] = [L1H] + [L2H] in the denominator, so that in this case
1/"K"1 =1/ "k"11 + 1/"k"12,
and therefore "K"1 ="k"11 "k"12 / ("k"11 + "k"12).
Thus, in this example there are four micro-constants whose values are subject to two constraints; in consequence, only the two macro-constant values, for K1 and K2, can be derived from experimental data.
Micro-constant values can, in principle, be determined using a spectroscopic technique, such as infrared spectroscopy, where each micro-species gives a different signal. Methods which have been used to estimate micro-constant values include
Although the value of a micro-constant cannot be determined from experimental data, site occupancy, which is proportional to the micro-constant value, can be very important for biological activity. Therefore, various methods have been developed for estimating micro-constant values. For example, the isomerization constant for L-DOPA has been estimated to have a value of 0.9, so the micro-species L1H and L2H have almost equal concentrations at all pH values.
pH considerations (Brønsted constants).
pH is defined in terms of the activity of the hydrogen ion
pH = −log10 {H+}
In the approximation of ideal behaviour, activity is replaced by concentration. Because pH is measured by means of a glass electrode, a mixed equilibrium constant, also known as a Brønsted constant, may result.
HL ⇌ L + H; formula_17
It all depends on whether the electrode is calibrated by reference to solutions of known activity or known concentration. In the latter case the equilibrium constant would be a concentration quotient. If the electrode is calibrated in terms of known hydrogen ion concentrations it would be better to write p[H] rather than pH, but this suggestion is not generally adopted.
Hydrolysis constants.
In aqueous solution the concentration of the hydroxide ion is related to the concentration of the hydrogen ion by
<chem>\mathit{K}_W =[H][OH]</chem>
<chem>[OH]=\mathit{K}_W[H]^{-1}</chem>
The first step in metal ion hydrolysis can be expressed in two different ways
formula_18
It follows that "β"* = "KK"W. Hydrolysis constants are usually reported in the "β"* form and therefore often have values much less than 1. For example, if log "K" = 4 and log KW = −14, log "β"* = 4 + (−14) = −10 so that "β*" = 10−10. In general when the hydrolysis product contains "n" hydroxide groups log "β"* = log "K" + "n" log "K"W
Conditional constants.
Conditional constants, also known as apparent constants, are concentration quotients which are not true equilibrium constants but can be derived from them. A very common instance is where pH is fixed at a particular value. For example, in the case of iron(III) interacting with EDTA, a conditional constant could be defined by
formula_19
This conditional constant will vary with pH. It has a maximum at a certain pH. That is the pH where the ligand sequesters the metal most effectively.
In biochemistry equilibrium constants are often measured at a pH fixed by means of a buffer solution. Such constants are, by definition, conditional and different values may be obtained when using different buffers.
Gas-phase equilibria.
For equilibria in a gas phase, fugacity, "f", is used in place of activity. However, fugacity has the dimension of pressure, so it must be divided by a standard pressure, "p"° (usually 1 bar), in order to produce the dimensionless quantity "f"/"p"°. An equilibrium constant is expressed in terms of this dimensionless quantity. For example, for the equilibrium 2NO2 ⇌ N2O4,
formula_20
Fugacity is related to partial pressure, "formula_21", by a dimensionless fugacity coefficient "ϕ": "formula_22". Thus, for the example,
formula_23
Usually the standard pressure is omitted from such expressions. Expressions for equilibrium constants in the gas phase then resemble the expression for solution equilibria with fugacity coefficient in place of activity coefficient and partial pressure in place of concentration.
formula_24
Thermodynamic basis for equilibrium constant expressions.
Thermodynamic equilibrium is characterized by the free energy for the whole (closed) system being a minimum. For systems at constant temperature and pressure the Gibbs free energy is minimum. The slope of the reaction free energy with respect to the extent of reaction, "ξ", is zero when the free energy is at its minimum value.
formula_25
The free energy change, d"G"r, can be expressed as a weighted sum of change in amount times the chemical potential, the partial molar free energy of the species. The chemical potential, "μi", of the "i"th species in a chemical reaction is the partial derivative of the free energy with respect to the number of moles of that species, "N"i
formula_26
A general chemical equilibrium can be written as
formula_27
where "nj" are the stoichiometric coefficients of the reactants in the equilibrium equation, and "mj" are the coefficients of the products. At equilibrium
formula_28
The chemical potential, "μi", of the "i"th species can be calculated in terms of its activity, "ai".
formula_29
"μ" is the standard chemical potential of the species, "R" is the gas constant and "T" is the temperature. Setting the sum for the reactants "j" to be equal to the sum for the products, "k", so that "δG"r(Eq) = 0
formula_30
Rearranging the terms,
formula_31
formula_32
This relates the standard Gibbs free energy change, Δ"G"o to an equilibrium constant, "K", the reaction quotient of activity values at equilibrium.
formula_33
formula_34
Equivalence of thermodynamic and kinetic expressions for equilibrium constants.
At equilibrium the rate of the forward reaction is equal to the backward reaction rate. A simple reaction, such as ester hydrolysis
<chem>AB + H2O <=> AH + B(OH)</chem>
has reaction rates given by expressions
formula_35
formula_36
According to Guldberg and Waage, equilibrium is attained when the forward and backward reaction rates are equal to each other. In these circumstances, an equilibrium constant is defined to be equal to the ratio of the forward and backward reaction rate constants
formula_37.
The concentration of water may be taken to be constant, resulting in the simpler expression
formula_38.
This particular concentration quotient, formula_39, has the dimension of concentration, but the thermodynamic equilibrium constant, K, is always dimensionless.
Unknown activity coefficient values.
It is very rare for activity coefficient values to have been determined experimentally for a system at equilibrium. There are three options for dealing with the situation where activity coefficient values are not known from experimental measurements.
Dimensionality.
An equilibrium constant is related to the standard Gibbs free energy of reaction change, formula_40, for the reaction by the expression
formula_41
Therefore, "K", must be a dimensionless number from which a logarithm can be derived. In the case of a simple equilibrium
<chem>A + B <=> AB, </chem>
the thermodynamic equilibrium constant is defined in terms of the activities, {AB}, {A} and {B}, of the species in equilibrium with each other:
formula_42
Now, each activity term can be expressed as a product of a concentration formula_43 and a corresponding activity coefficient, formula_44. Therefore,
formula_45
When formula_6, the quotient of activity coefficients, is set equal to 1, we get
formula_46
"K" then appears to have the dimension of 1/concentration. This is what usually happens in practice when an equilibrium constant is calculated as a quotient of concentration values. This can be avoided by dividing each concentration by its standard-state value (usually mol/L or bar), which is standard practice in chemistry.
The assumption underlying this practice is that the quotient of activities is constant under the conditions in which the equilibrium constant value is determined. These conditions are usually achieved by keeping the reaction temperature constant and by using a medium of relatively high ionic strength as the solvent. It is not unusual, particularly in texts relating to biochemical equilibria, to see an equilibrium constant value quoted with a dimension. The justification for this practice is that the concentration scale used may be either mol dm−3 or mmol dm−3, so that the concentration unit has to be stated in order to avoid there being any ambiguity.
"Note". When the concentration values are measured on the mole fraction scale all concentrations and activity coefficients are dimensionless quantities.
In general equilibria between two reagents can be expressed as
<chem>{\mathit{p}A} + \mathit{q}B <=> A_\mathit{p}B_\mathit{q} , </chem>
in which case the equilibrium constant is defined, in terms of numerical concentration values, as
formula_47
The apparent dimension of this "K" value is concentration1−p−q; this may be written as M(1−p−q) or mM(1−p−q), where the symbol M signifies a molar concentration (1M = 1 mol dm−3). The apparent dimension of a dissociation constant is the reciprocal of the apparent dimension of the corresponding association constant, and "vice versa".
When discussing the thermodynamics of chemical equilibria it is necessary to take dimensionality into account. There are two possible approaches.
In both approaches the numerical value of the stability constant is unchanged. The first is more useful for practical purposes; in fact, the unit of the concentration quotient is often attached to a published stability constant value in the biochemical literature. The second approach is consistent with the standard exposition of Debye–Hückel theory, where formula_52, "etc". are taken to be pure numbers.
Water as both reactant and solvent.
For reactions in aqueous solution, such as an acid dissociation reaction
AH + H2O ⇌ A− + H3O+
the concentration of water may be taken as being constant and the formation of the hydronium ion is implicit.
AH ⇌ A− + H+
Water concentration is omitted from expressions defining equilibrium constants, except when solutions are very concentrated.
formula_53 ("K" defined as a dissociation constant)
Similar considerations apply to metal ion hydrolysis reactions.
Enthalpy and entropy: temperature dependence.
If both the equilibrium constant, formula_54 and the standard enthalpy change, formula_55, for a reaction have been determined experimentally, the standard entropy change for the reaction is easily derived. Since formula_56 and formula_57
formula_58
To a first approximation the standard enthalpy change is independent of temperature. Using this approximation, definite integration of the van 't Hoff equation
formula_59
gives
formula_60
This equation can be used to calculate the value of log K at a temperature, T2, knowing the value at temperature T1.
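A small numerical sketch of this calculation (Python, assuming a constant standard enthalpy change given in J/mol; names illustrative):
<syntaxhighlight lang="python">
import math

R = 8.314462618  # gas constant, J mol^-1 K^-1

def log10_K_at_T2(log10_K1, T1, T2, delta_H_std):
    """Integrated van 't Hoff equation with constant delta_H_std:
    log K2 = log K1 - (delta_H_std / (ln(10) * R)) * (1/T2 - 1/T1)."""
    return log10_K1 - delta_H_std / (math.log(10) * R) * (1.0 / T2 - 1.0 / T1)

# Exothermic example: delta_H = -50 kJ/mol, log K = 4.00 at 298.15 K
print(round(log10_K_at_T2(4.00, 298.15, 323.15, -50_000.0), 2))  # about 3.32: K falls on warming
</syntaxhighlight>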
The van 't Hoff equation also shows that, for an exothermic reaction (formula_61), when temperature increases "K" decreases and when temperature decreases "K" increases, in accordance with Le Chatelier's principle. The reverse applies when the reaction is endothermic.
When "K" has been determined at more than two temperatures, a straight line fitting procedure may be applied to a plot of formula_62 against formula_63 to obtain a value for formula_64. Error propagation theory can be used to show that, with this procedure, the error on the calculated formula_64 value is much greater than the error on individual log K values. Consequently, K needs to be determined to high precision when using this method. For example, with a silver ion-selective electrode each log K value was determined with a precision of ca. 0.001 and the method was applied successfully.
Standard thermodynamic arguments can be used to show that, more generally, enthalpy will change with temperature.
formula_65
where "C""p" is the heat capacity at constant pressure.
A more complex formulation.
The calculation of "K" at a particular temperature from a known "K" at another given temperature can be approached as follows if standard thermodynamic properties are available. The effect of temperature on equilibrium constant is equivalent to the effect of temperature on Gibbs energy because:
formula_66
where Δr"G"o is the reaction standard Gibbs energy, which is the sum of the standard Gibbs energies of the reaction products minus the sum of standard Gibbs energies of reactants.
Here, the term "standard" denotes the ideal behaviour (i.e., an infinite dilution) and a hypothetical standard concentration (typically 1 mol/kg). It does not imply any particular temperature or pressure because, although contrary to IUPAC recommendation, it is more convenient when describing aqueous systems over wide temperature and pressure ranges.
The standard Gibbs energy (for each species or for the entire reaction) can be represented (from the basic definitions) as:
formula_67
In the above equation, the effect of temperature on Gibbs energy (and thus on the equilibrium constant) is ascribed entirely to heat capacity. To evaluate the integrals in this equation, the form of the dependence of heat capacity on temperature needs to be known.
If the standard molar heat capacity "C" can be approximated by some analytic function of temperature (e.g. the Shomate equation), then the integrals involved in calculating other parameters may be solved to yield analytic expressions for them. For example, using approximations of the following forms: for pure substances (solids, gas, liquid),
formula_68
and for ionic species,
formula_69
then the integrals can be evaluated and the following final form is obtained:
formula_70
The constants "A", "B", "C", "a", "b" and the absolute entropy, "S̆", required for evaluation of "C"("T"), as well as the values of "G"298 K and "S"298 K for many species are tabulated in the literature.
Pressure dependence.
The pressure dependence of the equilibrium constant is usually weak in the range of pressures normally encountered in industry, and therefore, it is usually neglected in practice. This is true for condensed reactants/products (i.e., when reactants and products are solids or liquids) as well as gaseous ones.
For a gaseous-reaction example, one may consider the well-studied reaction of hydrogen with nitrogen to produce ammonia:
N2 + 3 H2 ⇌ 2 NH3
If the pressure is increased by the addition of an inert gas, then neither the composition at equilibrium nor the equilibrium constant is appreciably affected (because the partial pressures remain constant, assuming an ideal-gas behaviour of all gases involved). However, the composition at equilibrium will depend appreciably on pressure when:
In the example reaction above, the number of moles changes from 4 to 2, and an increase of pressure by system compression will result in appreciably more ammonia in the equilibrium mixture. In the general case of a gaseous reaction:
"α" A + "β" B ⇌ "σ" S + "τ" T
the change of mixture composition with pressure can be quantified using:
formula_71
where "p" denote the partial pressures and "X" the mole fractions of the components, "P" is the total system pressure, "Kp" is the equilibrium constant expressed in terms of partial pressures and "KX" is the equilibrium constant expressed in terms of mole fractions.
The above change in composition is in accordance with Le Chatelier's principle and does not involve any change of the equilibrium constant with the total system pressure. Indeed, for ideal-gas reactions "Kp" is independent of pressure.
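For the ammonia example above, a short sketch (with a hypothetical value of "Kp" and assumed ideal-gas behaviour) shows how "KX" scales with total pressure while "Kp" stays fixed:
<syntaxhighlight lang="python">
Kp = 6.0e-3                # hypothetical equilibrium constant in terms of partial pressures
d_nu = 2 - (1 + 3)         # sigma + tau - alpha - beta for N2 + 3 H2 <=> 2 NH3

for P in (1.0, 10.0, 100.0):        # total pressure, same units as used for Kp
    KX = Kp * P ** (-d_nu)          # from Kp = KX * P**d_nu
    print(P, KX)
# KX grows as P**2, so compression shifts the mole fractions toward ammonia,
# even though Kp itself is unchanged.
</syntaxhighlight>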
In a condensed phase, the pressure dependence of the equilibrium constant is associated with the reaction volume. For reaction:
"α" A + "β" B ⇌ "σ" S + "τ" T
the reaction volume is:
formula_72
where "V̄" denotes a partial molar volume of a reactant or a product.
For the above reaction, one can expect the change of the reaction equilibrium constant (based either on mole-fraction or molal-concentration scale) with pressure at constant temperature to be:
formula_73
The matter is complicated as partial molar volume is itself dependent on pressure.
Effect of isotopic substitution.
Isotopic substitution can lead to changes in the values of equilibrium constants, especially if hydrogen is replaced by deuterium (or tritium). This "equilibrium isotope effect" is analogous to the kinetic isotope effect on rate constants, and is primarily due to the change in zero-point vibrational energy of H–X bonds due to the change in mass upon isotopic substitution. The zero-point energy is inversely proportional to the square root of the mass of the vibrating hydrogen atom, and will therefore be smaller for a D–X bond than for an H–X bond.
An example is a hydrogen atom abstraction reaction R' + H–R ⇌ R'–H + R with equilibrium constant KH, where R' and R are organic radicals such that R' forms a stronger bond to hydrogen than does R. The decrease in zero-point energy due to deuterium substitution will then be more important for R'–H than for R–H, and R'–D will be stabilized more than R–D, so that the equilibrium constant KD for R' + D–R ⇌ R'–D + R is greater than KH. This is summarized in the rule "the heavier atom favors the stronger bond".
Similar effects occur in solution for acid dissociation constants (Ka) which describe the transfer of H+ or D+ from a weak aqueous acid to a solvent molecule: HA + H2O ⇌ H3O+ + A− or DA + D2O ⇌ D3O+ + A−. The deuterated acid is studied in heavy water, since if it were dissolved in ordinary water the deuterium would rapidly exchange with hydrogen in the solvent.
The product species H3O+ (or D3O+) is a stronger acid than the solute acid, so that it dissociates more easily, and its H–O (or D–O) bond is weaker than the H–A (or D–A) bond of the solute acid. The decrease in zero-point energy due to isotopic substitution is therefore less important in D3O+ than in DA so that KD < KH, and the deuterated acid in D2O is weaker than the non-deuterated acid in H2O. In many cases the difference of logarithmic constants pKD – pKH is about 0.6, so that the pD corresponding to 50% dissociation of the deuterated acid is about 0.6 units higher than the pH for 50% dissociation of the non-deuterated acid.
For similar reasons the self-ionization of heavy water is less than that of ordinary water at the same temperature.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\alpha\\,\\mathrm{A} + \\beta\\,\\mathrm{B} + \\cdots \\rightleftharpoons \\rho\\,\\mathrm{R} + \\sigma\\,\\mathrm{S} + \\cdots"
},
{
"math_id": 1,
"text": "K^\\ominus"
},
{
"math_id": 2,
"text": "\\Delta G"
},
{
"math_id": 3,
"text": "K^\\ominus = \\frac{{\\mathrm{\\{R\\}}^\\rho \\mathrm{\\{S\\}}^\\sigma...}}{{\\mathrm{\\{A\\}}^\\alpha \\mathrm{\\{B\\}}^\\beta...}}\n= \\frac{{{[\\mathrm{R}]}}^\\rho {{[\\mathrm{S}]}}^\\sigma ... } {{{[\\mathrm{A}]}}^\\alpha {{[\\mathrm{B}]}}^\\beta ...} \\times \\Gamma,\n"
},
{
"math_id": 4,
"text": "\\Gamma= \\frac{\\gamma_R^\\rho \\gamma_S^\\sigma...}{\\gamma_A^\\alpha \\gamma_B^\\beta...},"
},
{
"math_id": 5,
"text": "P_X"
},
{
"math_id": 6,
"text": "\\Gamma"
},
{
"math_id": 7,
"text": "K_c = K^\\ominus/\\Gamma = \\frac{[\\mathrm{R}]^\\rho [\\mathrm{S}]^\\sigma ...}{[\\mathrm{A}]^\\alpha [\\mathrm{B}]^\\beta ...}."
},
{
"math_id": 8,
"text": "\\Delta G^\\ominus"
},
{
"math_id": 9,
"text": "\\Delta G^\\ominus = -RT \\ln K^\\ominus,"
},
{
"math_id": 10,
"text": "K_c"
},
{
"math_id": 11,
"text": "\\frac{[\\mathrm{R}]^\\rho [\\mathrm{S}]^\\sigma ...}{[\\mathrm{A}]^\\alpha [\\mathrm{B}]^\\beta ...} (eq)"
},
{
"math_id": 12,
"text": "K^\\ominus = \\frac{k_\\text{f}}{k_\\text{r}}."
},
{
"math_id": 13,
"text": "[\\mathrm{ML}']=K\\frac{[\\mathrm{ML}][\\mathrm{L}']}{[\\mathrm{L}]} = K \\frac{\\beta_\\mathrm{ML}[\\mathrm{M}][\\mathrm{L}][\\mathrm{L}']}{[\\mathrm{L}]}= K \\beta_\\mathrm{ML}[\\mathrm{M}][\\mathrm{L}']; \\quad \\beta_{\\mathrm{ML}'}=K\\beta_\\mathrm{ML}"
},
{
"math_id": 14,
"text": "\\mathrm{p}K_\\mathrm{a}=-\\log K_{\\mathrm{diss}} = \\log \\left(\\frac{1}{K_\\mathrm{diss}}\\right)\\,"
},
{
"math_id": 15,
"text": "\\begin{array}{ll}\n\\ce{H3L <=> {H2L} + H}; & \\ce{p}K_1=-\\log \\left(\\frac{[\\ce{H2L}][\\ce{H}]} {[\\ce{H3L}]} \\right)\\\\\n\\ce{H2L <=> {HL} + H}; & \\ce{p}K_2=-\\log \\left(\\frac{[\\ce{HL}][\\ce{H}]} {[\\ce{H2L}]} \\right)\\\\\n\\ce{HL <=> {L} + H}; & \\ce{p}K_3=-\\log \\left(\\frac{[\\ce{L}][\\ce{H}]} {[\\ce{HL}]} \\right)\n\\end{array}"
},
{
"math_id": 16,
"text": "\\begin{array}{ll}\n\\ce{{L} + H <=> HL}; & \\log \\beta_{011} =\\log \\left(\\frac{[\\ce{HL}]}{[\\ce{L}][\\ce{H}]} \\right)=\\ce{p}K_3 \\\\\n\\ce{{L} + 2H <=> H2L}; & \\log \\beta_{012} =\\log \\left(\\frac{[\\ce{H2L}]}{[\\ce{L}][\\ce{H}]^2} \\right)=\\ce{p}K_3+\\ce{p}K_2 \\\\\n\\ce{{L} + 3H <=> H3L}; & \\log \\beta_{013} =\\log \\left(\\frac{[\\ce{H3L}]}{[\\ce{L}][\\ce{H}]^3} \\right)=\\ce{p}K_3+\\ce{p}K_2+\\ce{p}K_1 \\\\\n\\ce{{M} + L <=> ML}; & \\log \\beta_{110} =\\log \\left(\\frac{[\\ce{ML}]}{[\\ce{M}][\\ce{L}]} \\right)\\\\\n\\ce{{M} + {L} + H <=> MLH}; & \\log \\beta_{111} =\\log \\left(\\frac{[\\ce{MLH}]}{[\\ce{M}][\\ce{L}][\\ce{H}]} \\right)\n\\end{array}"
},
{
"math_id": 17,
"text": "\\mathrm{p}K =-\\log \\left(\\frac{[\\mathrm{L}]\\{\\mathrm{H}\\}}{[\\mathrm{HL}]} \\right) "
},
{
"math_id": 18,
"text": "\\begin{cases}\n\\ce{M(H2O) <=> {M(OH)} + H}; &[\\ce{M(OH)}]=\\beta^*[\\ce{M}][\\ce{H}]^{-1} \\\\\n\\ce{{M} + OH <=> M(OH)}; &[\\ce{M(OH)}]=K[\\ce{M}][\\ce{OH}]=K K_\\ce{W}[\\ce{M}][\\ce{H}]^{-1}\n\\end{cases}"
},
{
"math_id": 19,
"text": "K_{\\mathrm{cond}}=\\frac{[\\mbox{Total Fe bound to EDTA}]}{[\\mbox{Total Fe not bound to EDTA}]\\times [\\mbox{Total EDTA not bound to Fe}] }"
},
{
"math_id": 20,
"text": "\\frac{f_\\mathrm{N_2O_4}}{p^\\ominus} = K \\left(\\frac{f_\\mathrm{NO_2}}{p^\\ominus}\\right)^2"
},
{
"math_id": 21,
"text": "p_X"
},
{
"math_id": 22,
"text": "f_X = \\phi_X p_X"
},
{
"math_id": 23,
"text": "K=\\frac{\\phi_\\mathrm{N_2O_4} p_\\mathrm{N_2O_4}/{p^\\ominus}}{\\left(\\phi_\\mathrm{NO_2}p_\\mathrm{NO_2}/{p^\\ominus}\\right)^2}"
},
{
"math_id": 24,
"text": "K=\\frac{\\phi_\\mathrm{N_2O_4} p_\\mathrm{N_2O_4}}{\\left(\\phi_\\mathrm{NO_2}p_\\mathrm{NO_2}\\right)^2}"
},
{
"math_id": 25,
"text": "\\left(\\frac{\\partial G}{\\partial \\xi }\\right)_{T,P}=0"
},
{
"math_id": 26,
"text": "\\mu_i=\\left(\\frac{\\partial G}{\\partial N_i}\\right)_{T,P}"
},
{
"math_id": 27,
"text": "\\sum_j n_j \\mathrm{Reactant}_j \\rightleftharpoons \\sum_k m_k \\mathrm{Product}_k"
},
{
"math_id": 28,
"text": "\\sum_k m_k \\mu_k = \\sum_j n_j \\mu_j "
},
{
"math_id": 29,
"text": "\\mu_i = \\mu_i^\\ominus + RT \\ln a_i"
},
{
"math_id": 30,
"text": "\\sum_j n_j(\\mu_j^\\ominus +RT\\ln a_j)=\\sum_k m_k(\\mu_k^\\ominus +RT\\ln a_k) "
},
{
"math_id": 31,
"text": "\\sum_k m_k\\mu_k^\\ominus-\\sum_j n_j\\mu_j^\\ominus =-RT \\left(\\sum_k \\ln {a_k}^{m_k}-\\sum_j \\ln {a_j}^{n_j}\\right)"
},
{
"math_id": 32,
"text": "\\Delta G^\\ominus = -RT \\ln K."
},
{
"math_id": 33,
"text": "\\Delta G^\\ominus = \\sum_k m_k\\mu_k^\\ominus-\\sum_j n_j\\mu_j^\\ominus"
},
{
"math_id": 34,
"text": "\\ln K= \\sum_k \\ln {a_k}^{m_k}-\\sum_j \\ln {a_j}^{n_j};\nK=\\frac{\\prod_k {a_k}^{m_k}}{\\prod_j {a_j}^{n_j}} \\equiv \\frac{{\\{\\mathrm{R}\\}} ^\\rho {\\{\\mathrm{S}\\}}^\\sigma ... } {{\\{\\mathrm{A}\\}}^\\alpha {\\{\\mathrm{B}\\}}^\\beta ...}\n"
},
{
"math_id": 35,
"text": "\\text{forward rate} = k_f\\ce{[AB][H2O]}"
},
{
"math_id": 36,
"text": "\\text{backward rate} = k_b\\ce{[AH][B(OH)]}\n"
},
{
"math_id": 37,
"text": "K=\\frac{k_f}{k_b}=\\frac\\ce{[AH][B(OH)]}\\ce{[AB][H2O]}"
},
{
"math_id": 38,
"text": "K^c=\\frac\\ce{[AH][B(OH)]}\\ce{[AB]}"
},
{
"math_id": 39,
"text": "K^c"
},
{
"math_id": 40,
"text": "\\Delta_R G^\\ominus "
},
{
"math_id": 41,
"text": "\\Delta_R G^\\ominus = \\left( \\frac{\\partial G}{\\partial \\xi } \\right)_{P,T} = -RT \\ln K ."
},
{
"math_id": 42,
"text": "K = \\frac{\\{AB\\}}{\\{A\\}\\{B\\}} ."
},
{
"math_id": 43,
"text": "[X]"
},
{
"math_id": 44,
"text": "\\gamma(X)"
},
{
"math_id": 45,
"text": "K = \\frac{[AB]}{[A][B]}\\times\\frac{\\gamma(AB)}{\\gamma(A)\\gamma(B)} = \\frac{[AB]}{[A][B]}\\times \\Gamma ."
},
{
"math_id": 46,
"text": "K = \\frac{[AB]}{[A][B]} ."
},
{
"math_id": 47,
"text": "K = \\frac{[\\ce{A}_p\\ce{B}_q]}{[\\ce A]^p[\\ce B]^q} ."
},
{
"math_id": 48,
"text": "\\frac{K}{\\Gamma}"
},
{
"math_id": 49,
"text": "\\frac{[X]}{[X^0]}"
},
{
"math_id": 50,
"text": "[X^0]"
},
{
"math_id": 51,
"text": "\\gamma(X^0)"
},
{
"math_id": 52,
"text": "\\gamma(AB)"
},
{
"math_id": 53,
"text": "K=\\frac{[A][H]}{[AH]}"
},
{
"math_id": 54,
"text": "K"
},
{
"math_id": 55,
"text": "\\Delta H^\\ominus"
},
{
"math_id": 56,
"text": "\\Delta G = \\Delta H - T \\Delta S"
},
{
"math_id": 57,
"text": "\\Delta G = -RT \\ln K"
},
{
"math_id": 58,
"text": "\\Delta S^\\ominus = \\frac{\\Delta H^\\ominus + RT \\ln K}{T}"
},
{
"math_id": 59,
"text": "\\Delta H^\\ominus=-R\\frac{d \\ln K}{d(1/T)}\\ "
},
{
"math_id": 60,
"text": "\\ln K_2 = \\ln K_1 -\\frac{\\Delta H^\\ominus}{R} \\left(\\frac{1}{T_2}-\\frac{1}{T_1}\\right)"
},
{
"math_id": 61,
"text": " \\Delta H <0"
},
{
"math_id": 62,
"text": "\\ln K"
},
{
"math_id": 63,
"text": "1/T"
},
{
"math_id": 64,
"text": "\\Delta H^\\ominus "
},
{
"math_id": 65,
"text": "\\left(\\frac{\\partial H}{\\partial T} \\right)_p=C_p"
},
{
"math_id": 66,
"text": "\\ln K = {{-\\Delta_\\mathrm{r} G^\\ominus} \\over {RT}}"
},
{
"math_id": 67,
"text": "G_{T_2}^\\ominus = G_{T_1}^\\ominus-S_{T_1}^\\ominus(T_2-T_1)-T_2 \\int^{T_2}_{T_1} {{C_p^\\ominus} \\over {T}}\\,dT + \\int^{T_2}_{T_1} C_p^\\ominus\\,dT"
},
{
"math_id": 68,
"text": "C_p^\\ominus \\approx A + BT + CT^{-2}"
},
{
"math_id": 69,
"text": "C_p^\\ominus \\approx (4.186a+b\\breve{S}^\\ominus_{T_1}) {{(T_2-T_1)} \\over {\\ln\\left(\\frac{T_2}{T_1}\\right)}}"
},
{
"math_id": 70,
"text": "G_{T_2}^\\ominus \\approx G_{T_1}^\\ominus + (C_p^\\ominus - S_{T_1}^\\ominus)(T_2-T_1) - T_2 \\ln\\left(\\frac{T_2}{T_1}\\right)C_p^\\ominus"
},
{
"math_id": 71,
"text": "K_p = \\frac{{p_\\mathrm{S}}^\\sigma {p_\\mathrm{T}}^\\tau} {{p_\\mathrm{A}}^\\alpha {p_\\mathrm{B}}^\\beta} = \\frac{{X_\\mathrm{S}}^\\sigma {X_\\mathrm{T}}^\\tau} {{X_\\mathrm{A}}^\\alpha {X_\\mathrm{B}}^\\beta} P^{\\sigma+\\tau-\\alpha-\\beta} = K_X P^{\\sigma+\\tau-\\alpha-\\beta}"
},
{
"math_id": 72,
"text": "\\Delta \\bar{V} = \\sigma \\bar{V}_\\mathrm{S} + \\tau \\bar{V}_\\mathrm{T} - \\alpha \\bar{V}_\\mathrm{A} - \\beta \\bar{V}_\\mathrm{B} "
},
{
"math_id": 73,
"text": " \\left(\\frac{\\partial \\ln K_X}{\\partial P} \\right)_T = \\frac{-\\Delta \\bar{V}} {RT}. "
}
] | https://en.wikipedia.org/wiki?curid=1122854 |
11230759 | Retarded potential | Type of potential in electrodynamics
In electrodynamics, the retarded potentials are the electromagnetic potentials for the electromagnetic field generated by time-varying electric current or charge distributions in the past. The fields propagate at the speed of light "c", so the delay of the fields connecting cause and effect at earlier and later times is an important factor: the signal takes a finite time to propagate from a point in the charge or current distribution (the point of cause) to another point in space (where the effect is measured).
In the Lorenz gauge.
The starting point is Maxwell's equations in the potential formulation using the Lorenz gauge:
formula_0
where φ(r, "t") is the electric potential and A(r, "t") is the magnetic vector potential, for an arbitrary source of charge density ρ(r, "t") and current density J(r, "t"), and formula_1 is the D'Alembert operator. Solving these gives the retarded potentials below (all in SI units).
For time-dependent fields.
For time-dependent fields, the retarded potentials are:
formula_2
formula_3
where r is a point in space, "t" is time,
formula_4
is the retarded time, and d3r' is the integration measure using r'.
From φ(r, t) and A(r, "t"), the fields E(r, "t") and B(r, "t") can be calculated using the definitions of the potentials:
formula_5
and this leads to Jefimenko's equations. The corresponding advanced potentials have an identical form, except the advanced time
formula_6
replaces the retarded time.
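As a rough numerical sketch of the retarded scalar potential (the charge density below is an arbitrary, hypothetical localized distribution, and the integral is approximated by a crude sum over a grid):
<syntaxhighlight lang="python">
import numpy as np

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
c = 2.99792458e8          # speed of light, m/s

def rho(r_src, t):
    # hypothetical localized, oscillating charge density [C/m^3]
    return 1e-9 * np.exp(-np.dot(r_src, r_src) / 1e-4) * np.cos(2 * np.pi * 1e8 * t)

def retarded_phi(r_obs, t, half_width=0.05, n=21):
    # phi(r, t) = (1 / 4 pi eps0) * integral of rho(r', t_r) / |r - r'| d^3 r'
    xs = np.linspace(-half_width, half_width, n)
    dV = (xs[1] - xs[0]) ** 3
    total = 0.0
    for x in xs:
        for y in xs:
            for z in xs:
                r_src = np.array([x, y, z])
                R = np.linalg.norm(r_obs - r_src)
                if R == 0.0:
                    continue               # skip the singular point of this crude grid
                t_r = t - R / c            # retarded time
                total += rho(r_src, t_r) / R * dV
    return total / (4 * np.pi * eps0)

print(retarded_phi(np.array([1.0, 0.0, 0.0]), t=5e-9))
</syntaxhighlight>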
In comparison with static potentials for time-independent fields.
In the case the fields are time-independent (electrostatic and magnetostatic fields), the time derivatives in the formula_1 operators of the fields are zero, and Maxwell's equations reduce to
formula_7
where ∇2 is the Laplacian, which take the form of Poisson's equation in four components (one for φ and three for A), and the solutions are:
formula_8
formula_9
These also follow directly from the retarded potentials.
In the Coulomb gauge.
In the Coulomb gauge, Maxwell's equations are
formula_10
formula_11
although the solutions contrast the above, since A is a retarded potential yet φ changes "instantly", given by:
formula_12
formula_13
This presents an advantage and a disadvantage of the Coulomb gauge: φ is easily calculable from the charge distribution ρ, but A is not so easily calculable from the current distribution J. However, provided we require that the potentials vanish at infinity, they can be expressed neatly in terms of fields:
formula_14
formula_15
In linearized gravity.
The retarded potential in linearized general relativity is closely analogous to the electromagnetic case. The trace-reversed tensor formula_16 plays the role of the four-vector potential, the harmonic gauge formula_17 replaces the electromagnetic Lorenz gauge, the field equations are formula_18, and the retarded-wave solution is
formula_19
Using SI units, the expression must be divided by formula_20, as can be confirmed by dimensional analysis.
Occurrence and application.
A many-body theory which includes an average of retarded and "advanced" Liénard–Wiechert potentials is the Wheeler–Feynman absorber theory also known as the Wheeler–Feynman time-symmetric theory.
Example.
The potential of charge with uniform speed on a straight line has inversion in a point that is in the recent position. The potential is not changed in the direction of movement.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\Box \\varphi = \\dfrac{\\rho}{\\epsilon_0} \\,,\\quad \\Box \\mathbf{A} = \\mu_0\\mathbf{J}"
},
{
"math_id": 1,
"text": "\\Box"
},
{
"math_id": 2,
"text": " \\mathrm\\varphi (\\mathbf r , t) = \\frac{1}{4\\pi\\epsilon_0}\\int \\frac{\\rho (\\mathbf r' , t_r)}{|\\mathbf r - \\mathbf r'|}\\, \\mathrm{d}^3\\mathbf r'"
},
{
"math_id": 3,
"text": "\\mathbf A (\\mathbf r , t) = \\frac{\\mu_0}{4\\pi}\\int \\frac{\\mathbf J (\\mathbf r' , t_r)}{|\\mathbf r - \\mathbf r'|}\\, \\mathrm{d}^3\\mathbf r'\\,."
},
{
"math_id": 4,
"text": "t_r = t-\\frac{|\\mathbf r - \\mathbf r'|}{c}"
},
{
"math_id": 5,
"text": "-\\mathbf{E} = \\nabla\\varphi +\\frac{\\partial\\mathbf{A}}{\\partial t}\\,,\\quad \\mathbf{B}=\\nabla\\times\\mathbf A\\,."
},
{
"math_id": 6,
"text": "t_a = t+\\frac{|\\mathbf r - \\mathbf r'|}{c}"
},
{
"math_id": 7,
"text": " \\nabla^2 \\varphi =-\\dfrac{\\rho}{\\epsilon_0}\\,,\\quad \\nabla^2 \\mathbf{A} =- \\mu_0 \\mathbf{J}\\,,"
},
{
"math_id": 8,
"text": " \\mathrm\\varphi (\\mathbf{r}) = \\frac{1}{4\\pi\\epsilon_0}\\int \\frac{\\rho (\\mathbf r' )}{|\\mathbf r - \\mathbf r'|}\\, \\mathrm{d}^3\\mathbf r'"
},
{
"math_id": 9,
"text": "\\mathbf A (\\mathbf{r}) = \\frac{\\mu_0}{4\\pi}\\int \\frac{\\mathbf J (\\mathbf r' )}{|\\mathbf r - \\mathbf r'|}\\, \\mathrm{d}^3\\mathbf r'\\,."
},
{
"math_id": 10,
"text": " \\nabla^2 \\varphi =-\\dfrac{\\rho}{\\epsilon_0}"
},
{
"math_id": 11,
"text": " \\nabla^2 \\mathbf{A} - \\dfrac{1}{c^2}\\dfrac{\\partial^2 \\mathbf{A}}{\\partial t^2}=- \\mu_0 \\mathbf{J} +\\dfrac{1}{c^2}\\nabla\\left(\\dfrac{\\partial \\varphi}{\\partial t}\\right)\\,,"
},
{
"math_id": 12,
"text": "\\varphi(\\mathbf{r}, t) = \\dfrac{1}{4\\pi\\epsilon_0}\\int \\dfrac{\\rho(\\mathbf{r}',t)}{|\\mathbf r - \\mathbf r'|}\\mathrm{d}^3\\mathbf{r}'"
},
{
"math_id": 13,
"text": " \\mathbf{A}(\\mathbf{r},t) = \\dfrac{1}{4\\pi \\varepsilon_0} \\nabla\\times\\int \\mathrm{d}^3\\mathbf{r'} \\int_0^{|\\mathbf{r}-\\mathbf{r}'|/c} \\mathrm{d}t_r \\dfrac{ t_r \\mathbf{J}(\\mathbf{r'}, t-t_r)}{|\\mathbf{r}-\\mathbf{r}'|^3}\\times (\\mathbf{r}-\\mathbf{r}') \\,."
},
{
"math_id": 14,
"text": "\\varphi(\\mathbf{r}, t) = \\dfrac{1}{4\\pi}\\int \\dfrac{\\nabla \\cdot \\mathbf{E}(\\mathbf{r}',t)}{|\\mathbf r - \\mathbf r'|}\\mathrm{d}^3\\mathbf{r}'"
},
{
"math_id": 15,
"text": " \\mathbf{A}(\\mathbf{r},t) = \\dfrac{1}{4\\pi}\\int \\dfrac{\\nabla \\times \\mathbf{B}(\\mathbf{r}',t)}{|\\mathbf r - \\mathbf r'|}\\mathrm{d}^3\\mathbf{r}'"
},
{
"math_id": 16,
"text": "\\tilde h_{\\mu\\nu} = h_{\\mu\\nu} - \\frac 1 2 \\eta_{\\mu\\nu} h"
},
{
"math_id": 17,
"text": "\\tilde h^{\\mu\\nu}{}_{,\\mu} = 0"
},
{
"math_id": 18,
"text": "\\Box \\tilde h_{\\mu\\nu} = -16\\pi G T_{\\mu\\nu}"
},
{
"math_id": 19,
"text": "\\tilde h_{\\mu\\nu}(\\mathbf r, t) = 4 G \\int \\frac{T_{\\mu\\nu}(\\mathbf r', t_r)}{|\\mathbf r - \\mathbf r'|} \\mathrm d^3 \\mathbf r'."
},
{
"math_id": 20,
"text": "c^4"
}
] | https://en.wikipedia.org/wiki?curid=11230759 |
1123083 | Abbe sine condition | Design rule for optical systems
In optics, the Abbe sine condition is a condition that must be fulfilled by a lens or other optical system in order for it to produce sharp images of off-axis as well as on-axis objects. It was formulated by Ernst Abbe in the context of microscopes.
The Abbe sine condition says that
the sine of the object-space angle formula_0 should be proportional to the sine of the image space angle formula_1
Furthermore, the ratio equals the magnification of the system. In mathematical terms this is:
formula_2
where the variables formula_3 are the angles (relative to the optic axis) of any two rays as they leave the object, and formula_4 are the angles of the same rays where they reach the image plane (say, the film plane of a camera). For example, (formula_5 might represent a paraxial ray (i.e., a ray nearly parallel with the optic axis), and formula_6 might represent a marginal ray (i.e., a ray with the largest angle admitted by the system aperture). An optical imaging system for which this is true for all rays is said to obey the Abbe sine condition.
The Abbe sine condition can be derived by Fermat's principle.
A thin lens satisfies formula_7 instead, which means that it does not satisfy the Abbe sine condition at large angles. The difference is on the order of formula_8, which corresponds to the coma aberration.
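A small numerical comparison of the two prescriptions (with an assumed magnification) illustrates that they agree for paraxial rays and depart at third order in the angle, the scaling of the coma aberration just mentioned:
<syntaxhighlight lang="python">
import math

M = 4.0                                       # assumed magnification
for a_o in (0.01, 0.1, 0.3):                  # object-space ray angles, radians
    a_i_sine = math.asin(math.sin(a_o) / M)   # image angle if the sine condition holds
    a_i_tan = math.atan(math.tan(a_o) / M)    # image angle for an ideal thin lens
    print(a_o, a_i_sine - a_i_tan)            # difference grows roughly as a_o**3
</syntaxhighlight>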
Magnification and the Abbe sine condition.
Using the framework of Fourier optics, we may easily explain the significance of the Abbe sine condition. Say an object in the object plane of an optical system has a transmittance function of the form, "T"("x"o,"y"o). We may express this transmittance function in terms of its Fourier transform as
formula_9
where formula_10 is the exponential function, and formula_11 is the imaginary unit.
Now, assume for simplicity that the system has no image distortion, so that the image plane coordinates are linearly related to the object plane coordinates via the relation
formula_12
where M is the system magnification. The object plane transmittance above can now be re-written in a slightly modified form:
formula_13
where the various terms have been simply multiplied and divided in the exponent by M, the system magnification. The relations above may now be substituted into this expression, writing the object plane coordinates in terms of the image plane coordinates, to obtain
formula_14
At this point another coordinate transformation can be proposed (i.e., the Abbe sine condition) relating the object plane wavenumber spectrum to the image plane wavenumber spectrum as
formula_15
to obtain the final equation for the image plane field in terms of image plane coordinates and image plane wavenumbers as:
formula_16
From Fourier optics, it is known that the wavenumbers can be expressed in terms of the spherical coordinate system as
formula_17
If a spectral component is considered for which formula_18, then the coordinate transformation between object and image plane wavenumbers takes the form
formula_19
This is another way of writing the Abbe sine condition, which simply reflects the classical "uncertainty principle" for Fourier transform pairs, namely that as the spatial extent of any function is expanded (by the magnification factor, M), the spectral extent contracts by the same factor, M, so that the "space-bandwidth product" remains constant.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\alpha_\\mathrm{o}"
},
{
"math_id": 1,
"text": "\\alpha_\\mathrm{i}"
},
{
"math_id": 2,
"text": "\\frac{\\sin \\alpha_\\mathrm{o}}{\\sin \\alpha_\\mathrm{i}} = \\frac{\\sin \\beta_\\mathrm{o}}{\\sin \\beta_\\mathrm{i}} = |M|"
},
{
"math_id": 3,
"text": "(\\alpha_\\mathrm{o}, \\beta_\\mathrm{o})"
},
{
"math_id": 4,
"text": "(\\alpha_\\mathrm{i}, \\beta_\\mathrm{i})"
},
{
"math_id": 5,
"text": "\\alpha_\\mathrm{o}, \\alpha_\\mathrm{i})"
},
{
"math_id": 6,
"text": "(\\beta_\\mathrm{o}, \\beta_\\mathrm{i})"
},
{
"math_id": 7,
"text": "\\frac{\\tan \\alpha_\\mathrm{o}}{\\tan \\alpha_\\mathrm{i}} = \\frac{\\tan \\beta_\\mathrm{o}}{\\tan \\beta_\\mathrm{i}} = |M|"
},
{
"math_id": 8,
"text": "\\alpha_o^3"
},
{
"math_id": 9,
"text": "T(x_\\mathrm{o},y_\\mathrm{o}) = \\iint T(k_x,k_y) \\exp\\left({j(k_x x_\\mathrm{o} + k_y y_\\mathrm{o})}\\right) \\,dk_x\\,dk_y\\,,"
},
{
"math_id": 10,
"text": "\\exp(z) = e^z"
},
{
"math_id": 11,
"text": "j = \\sqrt{-1}"
},
{
"math_id": 12,
"text": "\\begin{align}\nx_\\mathrm{i} &= M x_\\mathrm{o} \\\\\ny_\\mathrm{i} &= M y_\\mathrm{o} \\,,\n\\end{align}"
},
{
"math_id": 13,
"text": "T(x_\\mathrm{o},y_\\mathrm{o}) = \\iint T(k_x,k_y) \\exp\\left({j\\left({k_x\\over M} Mx_\\mathrm{o} + {k_y\\over M} My_\\mathrm{o}\\right)}\\right) \\,dk_x\\,dk_y"
},
{
"math_id": 14,
"text": "T(x_\\mathrm{i},y_\\mathrm{i}) = \\iint T(k_x,k_y) \\exp\\left({j\\left({k_x\\over M} x_\\mathrm{i} + {k_y\\over M} y_\\mathrm{i}\\right)}\\right) \\,dk_x\\,dk_y\\,."
},
{
"math_id": 15,
"text": "\n\\begin{align}\nk^\\mathrm{i}_x &= \\frac{k_x}{M} \\\\\nk^\\mathrm{i}_y &= \\frac{k_y}{M}\n\\end{align}\n"
},
{
"math_id": 16,
"text": "T(x_\\mathrm{i},y_\\mathrm{i}) = M^2 \\iint T\\left(M k^\\mathrm{i}_x, M k^\\mathrm{i}_y\\right) \\exp\\left({j\\left(k^\\mathrm{i}_x x_\\mathrm{i} + k^\\mathrm{i}_y y_\\mathrm{i}\\right)}\\right) \\,dk^\\mathrm{i}_x \\,dk^\\mathrm{i}_y"
},
{
"math_id": 17,
"text": "\n\\begin{align}\nk_x &= k \\sin \\theta \\cos \\varphi \\\\\nk_y &= k \\sin \\theta \\sin \\varphi \\,.\n\\end{align}\n"
},
{
"math_id": 18,
"text": "\\varphi =0"
},
{
"math_id": 19,
"text": "k^\\mathrm{i} \\sin \\theta^\\mathrm{i} = k \\frac{\\sin \\theta}{M}\\,."
}
] | https://en.wikipedia.org/wiki?curid=1123083 |
11230975 | Resonance fluorescence | Quantum electromechanical process
Resonance fluorescence is the process in which a two-level atom system interacts with the quantum electromagnetic field if the field is driven at a frequency near to the natural frequency of the atom.
General theory.
Typically the photon-containing electromagnetic field is applied to the two-level atom through the use of a monochromatic laser. A two-level atom is a specific type of two-state system in which the atom can be found in one of two possible states: with its electron in the ground state or in the excited state. In many experiments an atom of lithium is used because it can be closely modeled as a two-level atom, since the excited states of its single outer electron are separated by large enough energy gaps to significantly reduce the possibility of the electron jumping to a higher excited state. This allows for easier frequency tuning of the applied laser, as frequencies further off resonance can be used while still driving the electron to jump to only the first excited state. Once the atom is excited, it will release a photon with the same energy as the energy difference between the excited and ground state. The mechanism for this release is the spontaneous decay of the atom. The emitted photon is released in an arbitrary direction. While the transition between two specific energy levels is the dominant mechanism in resonance fluorescence, experimentally other transitions will play a very small role and thus must be taken into account when analyzing results. The other transitions will lead to emission of a photon of a different atomic transition with much lower energy, which will lead to "dark" periods of resonance fluorescence.
The dynamics of the electromagnetic field of the monochromatic laser can be derived by first treating the two-level atom as a spin-1/2 system with two energy eigenstates which have energy separation of ħω0. The dynamics of the atom can then be described by the three rotation operators, formula_0, formula_1, formula_2, acting upon the Bloch sphere. Thus the energy of the system is described entirely through an electric dipole interaction between the atom and field, with the resulting Hamiltonian being described by
formula_3 .
After quantizing the electromagnetic field, the Heisenberg Equation as well as Maxwell's equations can then be used to find the resulting equations of motion for formula_2 as well as for formula_4, the annihilation operator of the field,
formula_5
formula_6,
where formula_7 and formula_8 are frequency parameters used to simplify equations.
Now that the dynamics of the field with respect to the states of the atom has been described, the mechanism through which photons are released from the atom as the electron falls from the excited state to the ground state, Spontaneous Emission, can be examined. Spontaneous emission is when an excited electron arbitrarily decays to the ground state emitting a photon. As the electromagnetic field is coupled to the state of the atom, and the atom can only absorb a single photon before having to decay, the most basic case then is if the field only contains a single photon. Thus spontaneous decay occurs when the excited state of the atom emits a photon back into the vacuum Fock state of the field formula_9.
During this process the decay of the expectation values of the above operators follow the following relations
formula_10,
formula_11.
So the atom decays exponentially and the atomic dipole moment shall oscillate. The dipole moment oscillates due to the Lamb shift, which is a shift in the energy levels of the atom due to fluctuations of the field.
It is imperative, however, to look at fluorescence in the presence of a field with many photons, as this is a much more general case. This is the case in which the atom goes through many excitation cycles. In this case the exciting field emitted from the laser is in the form of coherent states formula_12. This allows for the operators which comprise the field to act on the coherent state and thus be replaced with eigenvalues. Thus we can simplify the equations by allowing operators to be turned into constants. The field can then be described much more classically than a quantized field normally would be able to. As a result, we are able to find the expectation value of the electric field for the retarded time.
formula_13,
where formula_14 is the angle between formula_15 and formula_16.
There are two general types of excitations produced by fields. The first is one that dies out as formula_17, while the other eventually reaches a constant amplitude, thus formula_18.
Here formula_19 is a real normalization constant, formula_20 is a real phase factor, and formula_21 is a unit vector which indicates the direction of the excitation.
Thus as formula_22, then
formula_23.
As formula_24 is the Rabi frequency, we can see that this is analogous to the rotation of a spin state around the Bloch sphere from an interferometer. Thus the dynamics of a two-level atom can be accurately modeled by a photon in an interferometer.
It is also possible to model as an atom and a field, and it will, in fact, retain more properties of the system such as lamb shift, but the basic dynamics of resonance fluorescence can be modeled as a spin-1/2 particle.
Resonance fluorescence in the Weak Field.
There are several limits that can be analyzed to make the study of resonance fluorescence easier. The first of these is the set of approximations associated with the weak field limit, where the square modulus of the Rabi frequency of the field that is coupled to the two-level atom is much smaller than the rate of spontaneous emission of the atom. This means that the difference in population between the excited state of the atom and the ground state of the atom is approximately independent of time.
If we also take the limit in which the time period is much larger than the time for spontaneous decay, the coherences of the light can be modeled as
formula_25, where formula_26 is the Rabi frequency of the driving field and formula_27 is the spontaneous decay rate of the atom. Thus it is clear that when an electric field is applied to the atom, the dipole of the atom oscillates according to driving frequency and not the natural frequency of the atom. If we also look at the positive frequency component of the electric field,
formula_28
we can see that the emitted field is the same as the absorbed field other than the difference in direction, resulting in the spectrum of the emitted field being the same as that of the absorbed field. The result is that the two-level atom behaves exactly as a driven oscillator and continues scattering photons so long as the driving field remains coupled to the atom.
The weak field approximation is also used in approaching two-time correlation functions. In the weak-field limit, the correlation function formula_29 can be calculated much more easily as only the first three terms must be kept. Thus the correlation function becomes
formula_30 as formula_22.
From the above equation we can see that as formula_31 the correlation function will no longer depend on time, but rather that it will depend on formula_32. The system will eventually reach a quasi-stationary state as formula_22
It is also clear that there are terms in the equation that go to zero as formula_33. These are the result of the Markovian processes of the quantum fluctuations of the system.
We see that in the weak field approximation as well as formula_34, the coupled system will reach a quasi-steady state where the quantum fluctuations become negligible.
Resonance fluorescence in the Strong Field.
The strong field limit is the exact opposite of the weak field limit, in that the square modulus of the Rabi frequency of the electromagnetic field is much larger than the rate of spontaneous emission of the two-level atom. When a strong field is applied to the atom, a single peak is no longer observed in the radiation spectrum of the fluorescent light. Instead, other peaks begin appearing on either side of the original peak. These are known as sidebands. The sidebands are a result of the Rabi oscillations of the field causing a modulation in the dipole moment of the atom. This causes a splitting in the degeneracy of certain eigenstates of the Hamiltonian; specifically, formula_35 and formula_36 are split into doublets. This is known as dynamic Stark splitting and is the cause of the Mollow triplet, which is a characteristic energy spectrum found in resonance fluorescence.
An interesting phenomenon arises in the Mollow triplet, where both of the sideband peaks have a width different from that of the central peak. If the Rabi frequency is allowed to become much larger than the rate of spontaneous decay of the atom, we can see that in the strong field limit formula_37 will become formula_38.
From this equation it is clear where the differences in width of the peaks in the Mollow triplet arise from as the central peak has a width of formula_39 and the sideband peaks have a width of formula_40 where formula_27 is the rate of spontaneous emission for the atom. Unfortunately this cannot be used to calculate a steady state solution as formula_41 and formula_42 in a steady state solution. Thus the spectrum would vanish in a steady state solution, which is not the actual case.
The solution that does allow for a steady state solution must take the form of a two-time correlation function as opposed to the above one-time correlation function. This solution appears as
formula_43.
Since this correlation function includes the steady state limits of the density matrix, where formula_44 and formula_45, and the spectrum is nonzero, it is clear to see that the Mollow triplet remains the spectrum for the fluoresced light even in a steady state solution.
General two-time correlation functions and spectral density.
The study of correlation functions is critical to the study of quantum optics as the Fourier transform of the correlation function is the energy spectral density. Thus the two-time correlation function is a useful tool in the calculation of the energy spectrum for a given system. We take the parameter formula_32 to be the difference between the two times in which the function is calculated. While correlation functions can more easily be described using limits of the strength of the field and limits placed on the time of the system, they can be found more generally as well. For resonance fluorescence, the most important correlation functions are
formula_46,
formula_47,
formula_48,
where
formula_49,
formula_50,
formula_51.
Two-time correlation functions are generally shown to be independent of formula_52, and instead rely on formula_32 as formula_22. These functions can be used to find the spectral density formula_53 by computing the transform
formula_54,
where K is a constant. The spectral density can be viewed as the rate of photon emission of photons of frequency formula_55 at the given time formula_52, which is useful in determining the power output of a system at a given time.
The correlation function associated with the spectral density of resonance fluorescence is reliant on the electric field. Thus once the constant K has been determined, the result is equivalent to
formula_56
This is related to the intensity by formula_57
In the weak field limit when formula_58 the power spectrum can be determined to be
formula_59.
In the strong field limit, the power spectrum is slightly more complicated and found to be
formula_60.
From these two functions it is easy to see that in the weak field limit a single peak appears at formula_61 in the spectral density due to the delta function, while in the strong field limit a Mollow triplet forms with sideband peaks at formula_62, and appropriate peak width of formula_39 for the central peak and formula_40 for the sideband peaks.
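The strong-field spectrum above is a sum of three Lorentzians and can be evaluated directly; the sketch below uses arbitrary, hypothetical parameter values with the Rabi frequency much larger than the decay rate:
<syntaxhighlight lang="python">
import numpy as np

Gamma = 1.0        # spontaneous emission rate (sets the frequency unit)
Omega_R = 10.0     # Rabi frequency, chosen >> Gamma (strong-field limit)
omega0 = 0.0       # atomic transition frequency in this frame
I0 = 1.0           # intensity prefactor

omega = np.linspace(omega0 - 2 * Omega_R, omega0 + 2 * Omega_R, 4001)
S = (I0 / (8 * np.pi)) * (
    (3 * Gamma / 4) / ((omega - Omega_R - omega0) ** 2 + (3 * Gamma / 4) ** 2)    # sideband at omega0 + Omega_R
    + Gamma / ((omega - omega0) ** 2 + (Gamma / 2) ** 2)                          # central peak at omega0
    + (3 * Gamma / 4) / ((omega + Omega_R - omega0) ** 2 + (3 * Gamma / 4) ** 2)  # sideband at omega0 - Omega_R
)
print(omega[np.argmax(S)])   # the strongest peak sits at the atomic frequency omega0
</syntaxhighlight>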
Photon Anti-bunching.
Photon anti-bunching is the process in resonance fluorescence through which the rate at which photons are emitted by a two-level atom is limited. A two-level atom is only capable of absorbing a photon from the driving electromagnetic field after a certain period of time has passed. This time period is modeled as a probability distribution formula_63 where formula_64 as formula_65. As the atom cannot absorb a photon, it is unable to emit one and thus there is a restriction on the spectral density. This is illustrated by the second order correlation function
formula_66.
From the above equation it is clear that formula_67 and thus formula_68 resulting in the relation that describes photon antibunching
formula_69.
This shows that the power cannot be anything other than zero for formula_70. In the weak field approximation formula_71 can only increase monotonically as formula_32 increases, however in the strong field approximation formula_71 oscillates as it increases. These oscillations die off as formula_33.
The physical idea behind photon anti-bunching is that while the atom itself is ready to be excited as soon as it releases its previous photon, the electromagnetic field created by the laser takes time to excite the atom.
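The second order correlation function above can likewise be evaluated numerically; in the sketch below, the oscillation frequency appearing in the formula is given an assumed value of the order of the Rabi frequency:
<syntaxhighlight lang="python">
import numpy as np

Gamma = 1.0    # spontaneous emission rate
mu = 5.0       # assumed oscillation frequency in g2 (of the order of the Rabi frequency)

tau = np.linspace(0.0, 10.0, 1001)
g2 = 1.0 - (np.cos(mu * tau) + (3 * Gamma / (4 * mu)) * np.sin(mu * tau)) * np.exp(-3 * Gamma * tau / 4)

print(g2[0])    # g2(0) = 0: two photons are never emitted at the same instant
print(g2[-1])   # g2 -> 1 as tau grows, once the oscillations have damped out
</syntaxhighlight>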
Double Resonance.
Double resonance is the phenomenon in which an additional magnetic field is applied to a two-level atom in addition to the typical electromagnetic field used to drive resonance fluorescence. This lifts the spin degeneracy of the Zeeman energy levels, splitting them along the energies associated with the respective available spin levels, allowing not only for resonance to be achieved around the typical excited state, but, if a second driving electromagnetic field associated with the Larmor frequency is applied, for a second resonance to be achieved around the energy state associated with formula_72 and the states associated with formula_73. Thus resonance is achievable not only about the possible energy levels of a two-level atom, but also about the sub-levels created by lifting the degeneracy of a level. If the applied magnetic field is tuned properly, the polarization of resonance fluorescence can be used to describe the composition of the excited state. Thus double resonance can be used to find the Landé factor, which is used to describe the magnetic moment of the electron within the two-level atom.
Resonance fluorescence of a single artificial atom.
Any two state system can be modeled as a two-level atom. This leads to many systems being described as an "Artificial Atom". For instance a superconducting loop which can create a magnetic flux passing through it can act as an artificial atom as the current can induce a magnetic flux in either direction through the loop depending on whether the current is clockwise or counterclockwise.
The Hamiltonian for this system is described as formula_74, where formula_75.
This models the dipole interaction of the atom with a 1-D electromagnetic wave.
It is easy to see that this is truly analogous to a real two-level atom due to the fact that the fluorescence appears in the spectrum as the Mollow triplet, precisely like a true two-level atom.
These artificial atoms are often used to explore the phenomena of quantum coherence. This allows for the study of squeezed light, which is known for creating more precise measurements. It is difficult to explore the resonance fluorescence of squeezed light in a typical two-level atom, as all modes of the electromagnetic field must be squeezed, which cannot easily be accomplished. In an artificial atom, the number of possible modes of the field is significantly limited, allowing for easier study of squeezed light. In 2016, D. M. Toyli et al. performed an experiment in which two superconducting parametric amplifiers were used to generate squeezed light and then detect resonance fluorescence in artificial atoms from the squeezed light. Their results agreed strongly with the theory describing the phenomena. The implication of this study is that it allows resonance fluorescence to assist in qubit readout for squeezed light. The qubit used in the study was an aluminum transmon circuit that was then coupled to a 3-D aluminum cavity. Extra silicon chips were introduced to the cavity to assist in the tuning of resonance to that of the cavity. The majority of the detuning that did occur was a result of the degeneration of the qubit over time.
Resonance fluorescence from a Semiconductor Quantum Dot.
A quantum dot is a semiconductor nano-particle that is often used in quantum optical systems. This includes their ability to be placed in optical microcavities where they can act as two-level systems. In this process, quantum dots are placed in cavities which allow for the discretization of the possible energy states of the quantum dot coupled with the vacuum field. The vacuum field is then replaced by an excitation field and resonance fluorescence is observed. Current technology only allows for population of the dot in an excited state (not necessarily always the same), and relaxation of the quantum dot back to its ground state. Direct excitation followed by ground state collection was not achieved until recently. This is mainly due to the fact that as a result of the size of quantum dots, defects and contaminants create fluorescence of their own apart from the quantum dot. This desired manipulation has been achieved by quantum dots by themselves through a number of techniques including four-wave mixing and differential reflectivity, however no techniques had shown it to occur in cavities until 2007. Resonance fluorescence has been seen in a single self-assembled quantum dot as presented by Muller among others in 2007.
In the experiment they used quantum dots that were grown between two mirrors in the cavity. Thus the quantum dot was not placed in the cavity, but instead created in it. They then coupled a strong in-plane polarized tunable continuous-wave laser to the quantum dot and were able to observe resonance fluorescence from the quantum dot. In addition to the excitation of the quantum dot that was achieved, they were also able to collect the photon that was emitted with a micro-PL setup. This allows for resonant coherent control of the ground state of the quantum dot while also collecting the photons emitted from the fluorescence.
Coupling photons to a molecule.
In 2007, G. Wrigge, I. Gerhardt, J. Hwang, G. Zumofen, and V. Sandoghdar developed an efficient method to observe resonance fluorescence for an entire molecule as opposed to its typical observation in a single atom.
Instead of coupling the electric field to a single atom, they were able to replicate two-level systems in dye molecules embedded in solids.
They used a tunable dye laser to excite the dye molecules in their sample. Because they could only have one source at a time, the proportion of shot noise to actual data was much higher than normal. The sample which they excited was a Shpol'skii matrix which they had doped with the dye they wished to use, dibenzanthanthrene. To improve the accuracy of the results, single-molecule fluorescence-excitation spectroscopy was used. The resonance was measured through the interference between the laser beam and the photons that were scattered from the molecule. Thus the laser was passed over the sample, resulting in several photons being scattered back, allowing for the measurement of the interference in the electromagnetic field that resulted. The improvement to this technique was that they used solid-immersion lens technology. This is a lens that has a much higher numerical aperture than normal lenses, as it is filled with a material that has a large refractive index. The technique used to measure the resonance fluorescence in this system was originally designed to locate individual molecules within substances.
Implications of Resonance fluorescence.
The largest implications of resonance fluorescence are for future technologies. Resonance fluorescence is used primarily in the coherent control of atoms. By coupling a two-level atom, such as a quantum dot, to an electric field in the form of a laser, one can effectively create a qubit. The qubit states correspond to the excited and the ground state of the two-level atom. Manipulation of the electromagnetic field allows for effective control of the dynamics of the atom. These can then be used to create quantum computers. The largest barriers that still stand in the way of this being achievable are failures in truly controlling the atom. For instance, true control of spontaneous decay and decoherence of the field pose large problems that must be overcome before two-level atoms can truly be used as qubits.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\hat{R_{i}}(t)"
},
{
"math_id": 1,
"text": "\\hat{R_{j}}(t)"
},
{
"math_id": 2,
"text": "\\hat{R_{k}}(t)"
},
{
"math_id": 3,
"text": "\\hat{H} = \\frac{1}{2}\\int(\\epsilon_{0}\\hat{\\vec{E}}^{2}(\\vec{r},t)+\\frac{1}{\\mu_{0}}\\hat{\\vec{B}}^{2}(\\vec{r},t))d^{3}x + \\hbar \\omega_{0} \\hat{R_{k}}(t) + 2\\omega_{0}\\vec{\\mu}\\cdot\\hat{\\vec{A}}(0,t)\\hat{R_{j}}(t)"
},
{
"math_id": 4,
"text": "\\hat{b}(t)"
},
{
"math_id": 5,
"text": " \\dot{\\hat{R}}_{k}(t) = -2\\beta(\\hat{R}_{k}(t) + \\frac{1}{2}) - (\\omega_{0}/\\hbar)\n\\{[\\hat{b}(t) +\\hat{b}^{\\dagger}(t)]\\vec{\\mu}\\cdot\\hat{\\vec{A}}^{(+)}_{free}(\\vec{r},t) + H.c.\\} "
},
{
"math_id": 6,
"text": " \\dot{\\hat{b}}(t)=(-i\\omega_{0}-\\beta+i\\gamma)\\hat{b}(t)-(\\beta + i \\gamma)\\hat{b}^{\\dagger}(t)+2(\\omega_{0}/\\hbar)[\\hat{R}_{k}(t)\\vec{\\mu}\\cdot\\hat{\\vec{A}}^{(+)}_{free}(0,t) + H.c.] "
},
{
"math_id": 7,
"text": "\\beta"
},
{
"math_id": 8,
"text": "\\gamma"
},
{
"math_id": 9,
"text": "|e\\rangle\\otimes|\\{0\\}\\rangle \\Rightarrow |g\\rangle\\otimes|\\{1\\}\\rangle"
},
{
"math_id": 10,
"text": " \\langle \\hat{R}_{k}(t)\\rangle + \\frac{1}{2} = [\\langle\\hat{R}_{k}(0)\\rangle + \\frac{1}{2}]e^{-2\\beta t} "
},
{
"math_id": 11,
"text": " \\langle\\hat{b}_{s}(t)\\rangle = \\langle \\hat{b}_{s}(0)\\rangle e^{(-\\beta + i\\gamma)t}"
},
{
"math_id": 12,
"text": "|\\{v\\}\\rangle"
},
{
"math_id": 13,
"text": " \\langle \\hat{\\vec{E}}^{(-)}(\\vec{r},t)\\cdot \\hat{\\vec{E}}^{(+)}(\\vec{r},t)\\rangle = \\left( \\frac{\\omega_{0}^{2}}{4\\pi\\epsilon_{0}c^{2}}\\right)^{2} \\left(\\frac{\\mu^{2}}{r^{2}} - \\frac{(\\vec{\\mu}\\cdot \\vec{r})^{2}}{r^{4}}\\right) \\times \\langle \\hat{b}_{s}^{\\dagger}\\left( t-\\frac{r}{c} \\right) \\hat{b}_{s}\\left(t - \\frac{r}{c}\\right)\\rangle = \\left(\\frac{\\omega_{0}^{2} \\mu \\sin\\psi}{4\\pi\\epsilon_{0}c^{2}r}\\right)^{2}[\\langle\\hat{R}_{k}\\left(t - \\frac{r}{c}\\right)\\rangle + \\frac{1}{2}] "
},
{
"math_id": 14,
"text": "\\psi"
},
{
"math_id": 15,
"text": " \\hat{\\mu} "
},
{
"math_id": 16,
"text": " \\hat{r} "
},
{
"math_id": 17,
"text": " V = 0, t \\Rightarrow \\infty "
},
{
"math_id": 18,
"text": " \\hat{V}(t) = \\hat{\\epsilon}\\alpha e^{i(\\omega_{0}-\\omega_{1})t+i\\phi}"
},
{
"math_id": 19,
"text": " \\alpha "
},
{
"math_id": 20,
"text": " \\phi "
},
{
"math_id": 21,
"text": " \\hat{\\epsilon} "
},
{
"math_id": 22,
"text": " t \\Rightarrow \\infty "
},
{
"math_id": 23,
"text": " \\langle \\hat{R}_{k}(t)\\rangle + 1/2 \\Rightarrow \\frac{\\frac{1}{4}\\Omega^{2}}{\\frac{1}{2}\\Omega^{2} + \\beta^{2} + (\\gamma + \\omega_{1} - \\omega_{0})^{2}} "
},
{
"math_id": 24,
"text": " \\Omega "
},
{
"math_id": 25,
"text": " \\rho_{ab}(t) = \\frac{-i(\\Omega_{R}/2)e^{-i\\nu t}}{i(\\omega - \\nu) + \\Gamma/2} [\\rho_{aa}(0)-\\rho_{bb}(0)] "
},
{
"math_id": 26,
"text": " \\Omega_{R} "
},
{
"math_id": 27,
"text": " \\Gamma "
},
{
"math_id": 28,
"text": " \\langle \\vec{E}^{(+)}(\\vec{r},t)\\rangle = \\frac{\\omega^{2}\\mu \\sin\\psi}{4\\pi\\epsilon_{0}c^{2}|\\vec{r}|} \\hat{x} \\langle \\sigma_{-}(t-\\frac{|\\vec{r}|}{c})\\rangle "
},
{
"math_id": 29,
"text": " \\langle \\hat{b}_{s}^{\\dagger}(t)\\hat{b}_{s}(t+\\tau) \\rangle "
},
{
"math_id": 30,
"text": " \\langle \\hat{b}_{s}^{\\dagger}(t)\\hat{b}_{s}(t+\\tau) \\rangle = \\frac{1}{4} \\frac{\\Omega^{2}e^{i(\\omega_{0}-\\omega_{1})\\tau}}{\\beta^{2}(1+\\theta^{2})} \\left (1-\\frac{\\Omega^{2}}{\\frac{1}{2}\\Omega^{2} + \\beta^{2}(1+\\theta^{2})} \\right) + \\frac{\\Omega^{4}e^{-\\beta|\\tau|} e^{i(\\omega_{0}-\\omega_{1})\\tau}}{8\\beta^{4}\\theta(1+\\theta^{2})^{2}} \\times [\\sin(\\beta\\theta|\\tau|) + \\theta \\cos(\\beta\\theta\\tau)] "
},
{
"math_id": 31,
"text": " t \\Rightarrow \\infty"
},
{
"math_id": 32,
"text": "\\tau"
},
{
"math_id": 33,
"text": " \\tau \\Rightarrow \\infty "
},
{
"math_id": 34,
"text": " t \\Rightarrow \\infty , \\tau \\Rightarrow \\infty "
},
{
"math_id": 35,
"text": " |e\\rangle \\otimes |\\{n\\}\\rangle "
},
{
"math_id": 36,
"text": " |g\\rangle \\otimes |\\{n+1\\}\\rangle "
},
{
"math_id": 37,
"text": " \\langle \\sigma_{-}(t)\\rangle e^{i\\omega t}"
},
{
"math_id": 38,
"text": " \\langle \\sigma_{-}(t)\\rangle e^{i\\omega t} = \\frac{1}{4} \\{[2\\rho_{++}(0)-1]e^{-\\frac{\\Gamma}{2}t} - [\\rho_{+-}(0)e^{-i\\Omega_{R}t-\\frac{3\\Gamma}{4}t}-c.c]\\} "
},
{
"math_id": 39,
"text": " \\frac{\\Gamma}{2} "
},
{
"math_id": 40,
"text": " \\frac{3\\Gamma}{4} "
},
{
"math_id": 41,
"text": " \\rho_{++}(0) \\Rightarrow \\frac{1}{2} "
},
{
"math_id": 42,
"text": " \\rho_{+-}(0) \\Rightarrow 0 "
},
{
"math_id": 43,
"text": " \\langle \\sigma_{+}(0)\\sigma_{-}(\\tau)\\rangle = \\frac{1}{4} \\left( e^{-\\frac{\\Gamma}{2}\\tau} + \\frac{1}{2}e^{-\\frac{3\\Gamma}{4}\\tau}e^{-i\\Omega_{R}\\tau}+\\frac{1}{2}e^{-\\frac{3\\Gamma}{4}\\tau}e^{i\\Omega_{R}\\tau} \\right) e^{-i\\omega\\tau} "
},
{
"math_id": 44,
"text": " \\rho_{++}^{s.s} \\Rightarrow \\frac{1}{2} "
},
{
"math_id": 45,
"text": " \\rho_{+-}^{s.s} \\Rightarrow 0 "
},
{
"math_id": 46,
"text": " \\langle \\hat{b}_{s}^{\\dagger}(t)\\hat{b}_{s}(t+\\tau)\\rangle e^{i(\\omega_{1}-\\omega_{0})\\tau} \\equiv g(t,\\tau) "
},
{
"math_id": 47,
"text": " \\langle \\hat{b}_{s}^{\\dagger}(t)\\hat{b}_{s}(t+\\tau)\\rangle e^{i(\\omega_{1}-\\omega_{0})(2t + \\tau} e^{2i\\phi} \\equiv f(t,\\tau) "
},
{
"math_id": 48,
"text": " \\langle \\hat{b}_{s}^{\\dagger}(t)\\hat{R}_{k}(t+\\tau)\\rangle e^{i(\\omega_{1}-\\omega_{0})t}e^{i\\phi} \\equiv g(t,\\tau) "
},
{
"math_id": 49,
"text": " g(t,\\tau) = [\\langle \\hat{R}_{k}(t)\\rangle + \\frac{1}{2}]e^{-\\beta(1-i\\theta)\\tau} + \\Omega \\int\\limits_{0}^{\\tau} dt' h(t,t')e^{\\beta(1-i\\theta)(t'-\\tau)} "
},
{
"math_id": 50,
"text": " f(t,\\tau) = \\Omega \\int\\limits_{0}^{\\tau}dt' h(t,t')e^{\\beta(1+i\\theta)(t'-\\tau)} "
},
{
"math_id": 51,
"text": " h(t,\\tau) = -\\frac{1}{2}\\langle\\hat{b}^{\\dagger}_{s}(t)\\rangle e^{i(\\omega_{0} - \\omega_{1})t}e^{i\\phi} - \\frac{1}{2}\\Omega \\int\\limits_{0}^{\\tau}dt'[f(t,t')+g(t,t')]e^{2\\beta(t'-\\tau)} "
},
{
"math_id": 52,
"text": "t"
},
{
"math_id": 53,
"text": " S(t,\\omega) "
},
{
"math_id": 54,
"text": " S (t,\\omega) = K \\int\\limits_{0}^{\\infty}d\\tau g(t-\\tau,\\tau)e^{i(\\omega - \\omega_{1})\\tau} + c.c "
},
{
"math_id": 55,
"text": " \\omega "
},
{
"math_id": 56,
"text": " S(\\vec{r},\\omega_{0}) = \\frac{1}{\\pi} Re \\int\\limits_{0}^{\\infty}d\\tau\\langle E^{(-)}(\\vec{r},t)E^{(+)}(\\vec{r},t+\\tau)\\rangle e^{i\\omega_{0}\\tau} "
},
{
"math_id": 57,
"text": " \\langle E^{(-)}(\\vec{r},t)E^{(+)}(\\vec{r},t+\\tau)\\rangle = I_{0}(\\vec{r})\\langle \\sigma_{+}(t)\\sigma_{-}(t+\\tau)\\rangle "
},
{
"math_id": 58,
"text": " \\Omega_{R} \\ll \\frac{\\Gamma}{4} "
},
{
"math_id": 59,
"text": " S(\\vec{r},\\omega_{0}) = I_{0}(\\vec{r}) \\left(\\frac{\\Omega_{R}}{\\Gamma}\\right)^{2} \\delta(\\omega-\\omega_{0}) "
},
{
"math_id": 60,
"text": " S(\\vec{r},\\omega_{0}) = \\frac{I_{0}(\\vec{r})}{8\\pi}\\left[\\frac{3\\Gamma/4}{(\\omega-\\Omega_{R}-\\omega_{0})^{2} + (3\\Gamma/4)^{2}} + \\frac{\\Gamma}{(\\omega-\\omega_{0})^{2} + (\\Gamma/2)^{2}} + \\frac{3\\Gamma/4}{(\\omega + \\Omega_{R}-\\omega_{0})^{2} + (3\\Gamma/4)^{2}} \\right] "
},
{
"math_id": 61,
"text": "\\omega_{0}"
},
{
"math_id": 62,
"text": " \\omega = \\omega_{0} \\pm \\Omega_{R} "
},
{
"math_id": 63,
"text": " p(\\tau) "
},
{
"math_id": 64,
"text": " p(\\tau) \\Rightarrow 0 "
},
{
"math_id": 65,
"text": " \\tau \\Rightarrow 0 "
},
{
"math_id": 66,
"text": " g^{(2)}(\\tau) = 1 - \\left( cos\\mu\\tau + \\frac{3\\Gamma}{4\\mu}sin\\mu\\tau \\right) e^{-3\\Gamma\\tau/4} "
},
{
"math_id": 67,
"text": " g^{(2)}(0)=0 "
},
{
"math_id": 68,
"text": " g^{(2)}(\\tau)>0 "
},
{
"math_id": 69,
"text": " g^{(2)}(\\tau) > g^{(2)}(0) "
},
{
"math_id": 70,
"text": "\\tau = 0"
},
{
"math_id": 71,
"text": "g^{(2)}(\\tau)"
},
{
"math_id": 72,
"text": " m_{B} = 0 "
},
{
"math_id": 73,
"text": " m_{b} = \\pm 1 "
},
{
"math_id": 74,
"text": " \\hat{H} = \\hbar \\sqrt{\\omega^{2}_{0} + \\epsilon^{2}}\\frac{\\hat{\\sigma}_{z}}{2} "
},
{
"math_id": 75,
"text": " \\hbar \\epsilon = 2I_{p}\\delta\\Phi "
}
] | https://en.wikipedia.org/wiki?curid=11230975 |
11231265 | Davidon–Fletcher–Powell formula | The Davidon–Fletcher–Powell formula (or DFP; named after William C. Davidon, Roger Fletcher, and Michael J. D. Powell) finds the solution to the secant equation that is closest to the current estimate and satisfies the curvature condition. It was the first quasi-Newton method to generalize the secant method to a multidimensional problem. This update maintains the symmetry and positive definiteness of the Hessian matrix.
Given a function formula_0, its gradient (formula_1), and positive-definite Hessian matrix formula_2, the Taylor series is
formula_3
and the Taylor series of the gradient itself (secant equation)
formula_4
is used to update formula_2.
The DFP formula finds a solution that is symmetric, positive-definite and closest to the current approximate value of formula_5:
formula_6
where
formula_7
formula_8
and formula_5 is a symmetric and positive-definite matrix.
The corresponding update to the inverse Hessian approximation formula_9 is given by
formula_10
formula_2 is assumed to be positive-definite, and the vectors formula_11 and formula_12 must satisfy the curvature condition
formula_13
The DFP formula is quite effective, but it was soon superseded by the Broyden–Fletcher–Goldfarb–Shanno formula, which is its dual (interchanging the roles of "y" and "s").
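A minimal NumPy sketch of one DFP step, updating both the Hessian approximation and its inverse, is given below; the quadratic test function is hypothetical and only checks that both updates satisfy the secant equation:
<syntaxhighlight lang="python">
import numpy as np

def dfp_update(B, H, s, y):
    """One DFP update of the Hessian approximation B and of its inverse H.

    s is the step x_{k+1} - x_k, y is the gradient difference; the
    curvature condition s.y > 0 is assumed to hold.
    """
    s = s.reshape(-1, 1)
    y = y.reshape(-1, 1)
    gamma = 1.0 / (y.T @ s).item()
    I = np.eye(len(s))
    B_new = (I - gamma * y @ s.T) @ B @ (I - gamma * s @ y.T) + gamma * (y @ y.T)
    H_new = H - (H @ y @ y.T @ H) / (y.T @ H @ y).item() + (s @ s.T) / (y.T @ s).item()
    return B_new, H_new

# Check on a quadratic f(x) = 0.5 x.T A x, whose true Hessian is A (so y = A s).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
x0, x1 = np.array([1.0, 1.0]), np.array([0.2, 0.5])
s, y = x1 - x0, A @ (x1 - x0)
B1, H1 = dfp_update(np.eye(2), np.eye(2), s, y)
print(np.allclose(B1 @ s, y), np.allclose(H1 @ y, s))   # both satisfy the secant equation
</syntaxhighlight>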
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f(x)"
},
{
"math_id": 1,
"text": "\\nabla f"
},
{
"math_id": 2,
"text": "B"
},
{
"math_id": 3,
"text": "f(x_k+s_k) = f(x_k) + \\nabla f(x_k)^T s_k + \\frac{1}{2} s^T_k {B} s_k + \\dots,"
},
{
"math_id": 4,
"text": "\\nabla f(x_k+s_k) = \\nabla f(x_k) + B s_k + \\dots"
},
{
"math_id": 5,
"text": "B_k"
},
{
"math_id": 6,
"text": "B_{k+1}=\n(I - \\gamma_k y_k s_k^T) B_k (I - \\gamma_k s_k y_k^T) + \\gamma_k y_k y_k^T,"
},
{
"math_id": 7,
"text": "y_k = \\nabla f(x_k+s_k) - \\nabla f(x_k),"
},
{
"math_id": 8,
"text": "\\gamma_k = \\frac{1}{y_k^T s_k},"
},
{
"math_id": 9,
"text": "H_k = B_k^{-1}"
},
{
"math_id": 10,
"text": "H_{k+1} = H_k - \\frac{H_k y_k y_k^T H_k}{y_k^T H_k y_k} + \\frac{s_k s_k^T}{y_k^{T} s_k}."
},
{
"math_id": 11,
"text": "s_k^T"
},
{
"math_id": 12,
"text": "y"
},
{
"math_id": 13,
"text": "s_k^T y_k = s_k^T B s_k > 0."
}
] | https://en.wikipedia.org/wiki?curid=11231265 |
11233252 | Multimodal logic | A multimodal logic is a modal logic that has more than one primitive modal operator. They find substantial applications in theoretical computer science.
Overview.
A modal logic with "n" primitive unary modal operators formula_0 is called an "n"-modal logic. Given these operators and negation, one can always add formula_1 modal operators defined as formula_2 if and only if formula_3, to give a classical multimodal logic if it is in addition stable under necessitation (or "possibilization", therefore) of both members of provable equivalences.
Perhaps the first substantive example of a two-modal logic is Arthur Prior's tense logic, with two modalities, F and P, corresponding to "sometime in the future" and "sometime in the past". A logic with infinitely many modalities is dynamic logic, introduced by Vaughan Pratt in 1976 and having a separate modal operator for every regular expression. A version of temporal logic introduced in 1977 and intended for program verification has two modalities, corresponding to dynamic logic's ["A"] and ["A"*] modalities for a single program "A", understood as the whole universe taking one step forwards in time. The term "multimodal logic" itself was not introduced until 1980. Another example of a multimodal logic is the Hennessy–Milner logic, itself a fragment of the more expressive modal μ-calculus, which is also a fixed-point logic.
Multimodal logic can be used also to formalize a kind of knowledge representation: the motivation of epistemic logic is allowing several agents (they are regarded as subjects capable of forming beliefs, knowledge); and managing the belief or knowledge of each agent, so that epistemic assertions can be formed about them. The modal operator formula_4 must be capable of bookkeeping the cognition of each agent, thus formula_5 must be indexed on the set of the agents. The motivation is that formula_6 should assert "The subject "i" has knowledge about formula_7 being true". But it can be used also for formalizing "the subject "i" believes formula_7". For formalization of meaning based on the possible world semantics approach, a multimodal generalization of Kripke semantics can be used: instead of a single "common" accessibility relation, there is a series of them indexed on the set of agents.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Box_i, i\\in \\{1,\\ldots, n\\}"
},
{
"math_id": 1,
"text": "\\Diamond_i"
},
{
"math_id": 2,
"text": "\\Diamond_i P"
},
{
"math_id": 3,
"text": "\\lnot \\Box_i \\lnot P"
},
{
"math_id": 4,
"text": "\\Box"
},
{
"math_id": 5,
"text": "\\Box_i"
},
{
"math_id": 6,
"text": "\\Box_i \\alpha"
},
{
"math_id": 7,
"text": "\\alpha"
}
] | https://en.wikipedia.org/wiki?curid=11233252 |
11233493 | Population vector | In neuroscience, a population vector is the sum of the preferred directions of a population of neurons, weighted by the respective spike counts.
The formula for computing the (normalized) population vector, formula_0, takes the following form:
formula_1
Where formula_2 is the activity of cell formula_3, and formula_4 is the preferred input for cell formula_3.
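A minimal NumPy sketch of this computation (illustrative only; the function name and array shapes are assumptions, not part of the original formulation):
import numpy as np
def population_vector(rates, preferred_dirs):
    # rates: array of shape (n_cells,) holding the activities m_j
    # preferred_dirs: array of shape (n_cells, d) holding the preferred inputs F_j
    rates = np.asarray(rates, dtype=float)
    preferred_dirs = np.asarray(preferred_dirs, dtype=float)
    return rates @ preferred_dirs / rates.sum()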
Note that the vector formula_0 encodes the input direction, formula_4, in terms of the activation of a population of neurons. | [
{
"math_id": 0,
"text": "F"
},
{
"math_id": 1,
"text": "F = \\frac{\\sum_j m_j F_j}{\\sum_j m_j}"
},
{
"math_id": 2,
"text": "m_j"
},
{
"math_id": 3,
"text": "j"
},
{
"math_id": 4,
"text": "F_j"
}
] | https://en.wikipedia.org/wiki?curid=11233493 |
1123470 | Gyroelongated square pyramid | 10th Johnson solid (13 faces)
In geometry, the gyroelongated square pyramid is the Johnson solid that can be constructed by attaching an equilateral square pyramid to a square antiprism. It occurs in chemistry; for example, the square antiprismatic molecular geometry.
Construction.
The gyroelongated square pyramid is composite, since it can be constructed by attaching one equilateral square pyramid to a square antiprism, a process known as gyroelongation. This construction covers one of the antiprism's two square faces and replaces it with four equilateral triangles, so that the resulting polyhedron has twelve equilateral triangles and one square as faces. A convex polyhedron in which all of the faces are regular polygons is a Johnson solid, and the gyroelongated square pyramid is one of them, enumerated as formula_1, the tenth Johnson solid.
Properties.
The surface area of a gyroelongated square pyramid with edge length formula_2 is:
formula_3
the area of twelve equilateral triangles and a square. Its volume:
formula_4
can be obtained by slicing it into an equilateral square pyramid and a square antiprism, and then adding their volumes.
It has the same three-dimensional symmetry group as the square pyramid, the cyclic group formula_0 of order eight. Its dihedral angles can be derived from the angles of an equilateral square pyramid and a square antiprism in the following way: the dihedral angle between two adjacent triangles of its pyramidal part is that of an equilateral square pyramid, formula_5; the dihedral angle between two adjacent triangles of its antiprismatic part is that of a square antiprism, formula_6; the dihedral angle between a triangle and the square base is that of a square antiprism between a triangle and a square, formula_7; and the dihedral angle between two triangles on the edge where the pyramid is attached to the antiprism is approximately formula_8, the sum of the antiprism's triangle-to-square angle formula_7 and the pyramid's triangle-to-base angle formula_9.
Applications.
In stereochemistry, the capped square antiprismatic molecular geometry can be described as the atom cluster of the gyroelongated square pyramid. An example is [LaCl(H2O)7]24+, a lanthanum(III) complex with a La–La bond.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " C_{4v} "
},
{
"math_id": 1,
"text": " J_{10} "
},
{
"math_id": 2,
"text": " a "
},
{
"math_id": 3,
"text": " \\left(1 + 3\\sqrt{3}\\right)a^2 \\approx 6.196a^2, "
},
{
"math_id": 4,
"text": " \\frac{\\sqrt{2} + 2\\sqrt{4 + 3\\sqrt{2}}}{6}a^3 \\approx 1.193a^3, "
},
{
"math_id": 5,
"text": " 109.47^\\circ "
},
{
"math_id": 6,
"text": " 127.55^\\circ "
},
{
"math_id": 7,
"text": " 103.83^\\circ "
},
{
"math_id": 8,
"text": " 158.57^\\circ"
},
{
"math_id": 9,
"text": " 54.74^\\circ "
}
] | https://en.wikipedia.org/wiki?curid=1123470 |
1123547 | Elongated pentagonal pyramid | 9th Johnson solid (11 faces)
In geometry, the elongated pentagonal pyramid is one of the Johnson solids ("J"9). As the name suggests, it can be constructed by elongating a pentagonal pyramid ("J"2) by attaching a pentagonal prism to its base.
A Johnson solid is one of 92 strictly convex polyhedra that is composed of regular polygon faces but are not uniform polyhedra (that is, they are not Platonic solids, Archimedean solids, prisms, or antiprisms). They were named by Norman Johnson, who first listed these polyhedra in 1966.
Formulae.
The following formulae for the height (formula_0), surface area (formula_1) and volume (formula_2) can be used if all faces are regular, with edge length formula_3:
formula_4
formula_5
formula_6
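A quick numerical check of these formulae (an illustrative Python snippet; the function name is an assumption):
import math
def elongated_pentagonal_pyramid_metrics(L=1.0):
    # Height, surface area and volume of the elongated pentagonal pyramid with edge length L.
    H = L * (1 + math.sqrt((5 - math.sqrt(5)) / 10))
    A = L**2 * (20 + 5 * math.sqrt(3) + math.sqrt(25 + 10 * math.sqrt(5))) / 4
    V = L**3 * (5 + math.sqrt(5) + 6 * math.sqrt(25 + 10 * math.sqrt(5))) / 24
    return H, A, V
For L = 1 this returns approximately (1.5257, 8.8855, 2.0220), matching the decimal values quoted above.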
Dual polyhedron.
The dual of the elongated pentagonal pyramid has 11 faces: 5 triangular, 1 pentagonal and 5 trapezoidal. It is topologically identical to the Johnson solid.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "H"
},
{
"math_id": 1,
"text": "A"
},
{
"math_id": 2,
"text": "V"
},
{
"math_id": 3,
"text": "L"
},
{
"math_id": 4,
"text": "H = L\\cdot \\left( 1 + \\sqrt{\\frac{5 - \\sqrt{5}}{10}}\\right) \\approx L\\cdot 1.525731112"
},
{
"math_id": 5,
"text": "A = L^2 \\cdot \\frac{20 + 5\\sqrt{3} + \\sqrt{25 + 10\\sqrt{5}}}{4} \\approx L^2\\cdot 8.88554091"
},
{
"math_id": 6,
"text": "V = L^3 \\cdot \\left( \\frac{5 + \\sqrt{5} + 6\\sqrt{25 + 10\\sqrt{5}}}{24} \\right) \\approx L^3\\cdot 2.021980233"
}
] | https://en.wikipedia.org/wiki?curid=1123547 |
1123549 | Gyroelongated pentagonal pyramid | 11th Johnson solid (16 faces)
In geometry, the gyroelongated pentagonal pyramid is a polyhedron constructed by attaching a pentagonal antiprism to the base of a pentagonal pyramid. An alternative name is diminished icosahedron because it can be constructed by removing a pentagonal pyramid from a regular icosahedron.
Construction.
The gyroelongated pentagonal pyramid can be constructed from a pentagonal antiprism by attaching a pentagonal pyramid onto one of its pentagonal faces. This pyramid covers that pentagonal face, so the resulting polyhedron has 15 equilateral triangles and 1 regular pentagon as its faces. Another way to construct it starts from the regular icosahedron by cutting off a pentagonal pyramid, a process known as diminishment; for this reason, it is also called the "diminished icosahedron". Because the resulting polyhedron is convex and its faces are regular polygons, the gyroelongated pentagonal pyramid is a Johnson solid, enumerated as the 11th Johnson solid formula_1.
Properties.
The surface area of a gyroelongated pentagonal pyramid formula_2 can be obtained by summing the areas of 15 equilateral triangles and 1 regular pentagon. Its volume formula_3 can be ascertained either by slicing it into a pentagonal antiprism and a pentagonal pyramid and adding their volumes, or by subtracting the volume of a pentagonal pyramid from that of a regular icosahedron. With edge length formula_4, they are:
formula_5
It has the same three-dimensional symmetry group as the pentagonal pyramid: the cyclic group formula_0 of order 10. Its dihedral angles can be obtained from the angles of a pentagonal antiprism and a pentagonal pyramid: its dihedral angle between a triangle and the pentagon is that of a pentagonal antiprism, 100.8°, and its dihedral angle between two adjacent triangles is that of a pentagonal pyramid, 138.2°.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " C_{5 \\mathrm{v}} "
},
{
"math_id": 1,
"text": " J_{11} "
},
{
"math_id": 2,
"text": " A "
},
{
"math_id": 3,
"text": " V "
},
{
"math_id": 4,
"text": " a "
},
{
"math_id": 5,
"text": " \\begin{align}\n A &= \\frac{15 \\sqrt{3} + \\sqrt{5(5 + 2\\sqrt{5})}}{4}a^2 \\approx 8.215a^2, \\\\\n V &= \\frac{25 + 9\\sqrt{5}}{24}a^3 \\approx 1.880a^3.\n\\end{align} "
}
] | https://en.wikipedia.org/wiki?curid=1123549 |
1123559 | Tridiminished icosahedron | 63rd Johnson solid
In geometry, the tridiminished icosahedron is a Johnson solid that is constructed by removing three pentagonal pyramids from a regular icosahedron.
Construction.
The tridiminished icosahedron can be constructed by removing three regular pentagonal pyramids from a regular icosahedron. Such a construction leaves five equilateral triangles and three regular pentagons as faces. Since all of its faces are regular polygons and the resulting polyhedron remains convex, the tridiminished icosahedron is a Johnson solid, enumerated as the sixty-third Johnson solid formula_0. This construction is similar to that of other Johnson solids, such as the gyroelongated pentagonal pyramid and the metabidiminished icosahedron.
The tridiminished icosahedron is a non-composite polyhedron, meaning it is a convex polyhedron that cannot be separated by a plane into two or more smaller regular-faced polyhedra.
Properties.
The surface area of a tridiminished icosahedron formula_1 is the sum of the areas of all its polygonal faces: five equilateral triangles and three regular pentagons. Its volume formula_2 can be ascertained by subtracting the volume of three pentagonal pyramids from the volume of a regular icosahedron. Given that formula_3 is the edge length of a tridiminished icosahedron, they are:
formula_4
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " J_{63} "
},
{
"math_id": 1,
"text": " A "
},
{
"math_id": 2,
"text": " V "
},
{
"math_id": 3,
"text": " a "
},
{
"math_id": 4,
"text": " \\begin{align}\n A &= \\frac{5 \\sqrt{3}+3 \\sqrt{5 \\left(5+2 \\sqrt{5}\\right)}}{4} a^2 &\\approx 7.3265a^2, \\\\\n V &= \\frac{15 + 7 \\sqrt{5}}{24}a^3 &\\approx 1.2772a^3.\n\\end{align} "
}
] | https://en.wikipedia.org/wiki?curid=1123559 |
1123698 | End-to-end delay | Time taken for a packet to travel across a network from source to destination
End-to-end delay or one-way delay (OWD) refers to the time taken for a packet to be transmitted across a network from source to destination. It is a common term in IP network monitoring, and differs from round-trip time (RTT) in that only the path in one direction, from source to destination, is measured.
Measurement.
The ping utility measures the RTT, that is, the time to go and come back to a host. Half the RTT is often used as an approximation of OWD but this assumes that the forward and back paths are the same in terms of congestion, number of hops, or quality of service (QoS). This is not always a good assumption. To avoid such problems, the OWD may be measured directly.
Direct.
OWDs may be measured between two points "A" and "B" of an IP network through the use of synchronized clocks; "A" records a timestamp on the packet and sends it to "B", which notes the receiving time and calculates the OWD as their difference. The transmitted packets need to be identified at source and destination in order to avoid packet loss or packet reordering. However, this method suffers several limitations, such as requiring intensive cooperation between both parties, and the accuracy of the measured delay is subject to the synchronization precision.
The Minimum-Pairs Protocol is an example by which several cooperating entities, "A", "B", and "C", could measure OWDs between one of them and a fourth less cooperative one (e.g., between "B" and "X").
Estimate.
Transmission between two network nodes may be asymmetric, and the forward and reverse delays are not equal. Half the RTT value is the average of the forward and reverse delays and so may be sometimes used as an approximation to the end-to-end delay. The accuracy of such an estimate depends on the nature of delay distribution in both directions. As delays in both directions become more symmetric, the accuracy increases.
The probability mass function (PMF) of absolute error, "E", between the smaller of the forward and reverse OWDs and their average (i.e., RTT/2) can be expressed as a function of the network delay distribution as follows:
formula_0
where "a" and "b" are the forward and reverse edges, and "fy"("z") is the PMF of delay of edge "z" (that is, "fy"("z") = Pr{delay on edge "z" = "y"}).
Delay components.
End-to-end delay in networks comes from several sources including transmission delay, propagation delay, processing delay and queuing delay.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Pr(E = x) = \\begin{cases}\\displaystyle\\sum_{i=0}^\\infty f_i(a).f_i(b), &x=0,\\\\\\displaystyle\\sum_{i=0}^\\infty f_i(a).f_{2x+i}(b) + \\sum_{i=0}^\\infty f_i(b).f_{2x+i}(a), &x>0,\\end{cases}"
}
] | https://en.wikipedia.org/wiki?curid=1123698 |
1123994 | LAN Manager | Microsoft network operating system
LAN Manager was a network operating system (NOS) available from multiple vendors and developed by Microsoft in cooperation with 3Com Corporation. It was designed to succeed 3Com's 3+Share network server software which ran atop a heavily modified version of MS-DOS.
History.
The LAN Manager OS/2 operating system was co-developed by IBM and Microsoft, using the Server Message Block (SMB) protocol. It originally used SMB atop either the NetBIOS Frames (NBF) protocol or a specialized version of the Xerox Network Systems (XNS) protocol. These legacy protocols had been inherited from previous products such as MS-Net for MS-DOS, Xenix-NET for MS-Xenix, and the afore-mentioned 3+Share. A version of LAN Manager for Unix-based systems called LAN Manager/X was also available. LAN Manager/X was the basis for Digital Equipment Corporation's Pathworks product for OpenVMS, Ultrix and Tru64.
In 1990, Microsoft announced LAN Manager 2.0 with a host of improvements, including support for TCP/IP as a transport protocol for SMB, using NetBIOS over TCP/IP (NBT). The last version of LAN Manager, 2.2, which included an MS-OS/2 1.31 base operating system, remained Microsoft's strategic server system until the release of Windows NT Advanced Server in 1993.
Versions.
Many vendors shipped licensed versions, including:
Password hashing algorithm.
The LM hash is computed as follows:
1. The user's password is restricted to a maximum of fourteen characters.
2. The password is converted to uppercase.
3. The password is encoded in the system OEM code page and null-padded to 14 bytes.
4. The fixed-length password is split into two 7-byte halves.
5. Each 7-byte half is used to create a DES key, by spreading its 56 bits over 8 bytes with parity bits.
6. Each of the two keys is used to DES-encrypt the constant ASCII string "KGS!@#$%", yielding two 8-byte ciphertext values.
7. The two ciphertext values are concatenated to form the 16-byte LM hash.
Security weaknesses.
LAN Manager authentication uses a particularly weak method of hashing a user's password known as the LM hash algorithm, stemming from the mid-1980s when viruses transmitted by floppy disks were the major concern. Although it is based on DES, a well-studied block cipher, the LM hash has several weaknesses in its design.
This makes such hashes crackable in a matter of seconds using rainbow tables, or in a few minutes using brute force. Starting with Windows NT, it was replaced by NTLM, which is still vulnerable to rainbow tables, and brute force attacks unless long, unpredictable passwords are used, see password cracking. NTLM is used for logon with local accounts except on domain controllers since Windows Vista and later versions no longer maintain the LM hash by default. Kerberos is used in Active Directory Environments.
The major weaknesses of the LAN Manager authentication protocol are:
1. Password length is limited to a maximum of 14 characters.
2. Passwords are not case sensitive: they are converted to uppercase before hashing, which shrinks the effective character set to 69 printable characters.
3. The password is split into two 7-character halves that are hashed independently, so each half can be attacked separately. This reduces a brute-force attack to formula_0 combinations instead of formula_1, and a password of seven characters or fewer is immediately recognizable because the second half of the hash is a known constant.
4. The hash contains no salt, so identical passwords always produce identical hashes, and precomputed attacks such as rainbow tables are feasible.
Workarounds.
To address the security weaknesses inherent in LM encryption and authentication schemes, Microsoft introduced the NTLMv1 protocol in 1993 with Windows NT 3.1. For hashing, NTLM uses Unicode support, replacing codice_4 by codice_5, which does not require any padding or truncating that would simplify the key. On the negative side, the same DES algorithm was used with only 56-bit encryption for the subsequent authentication steps, and there is still no salting. Furthermore, Windows machines were for many years configured by default to send and accept responses derived from both the LM hash and the NTLM hash, so the use of the NTLM hash provided no additional security while the weaker hash was still present. It also took time for artificial restrictions on password length in management tools such as User Manager to be lifted.
While LAN Manager is considered obsolete and current Windows operating systems use the stronger NTLMv2 or Kerberos authentication methods, Windows systems before Windows Vista/Windows Server 2008 enabled the LAN Manager hash by default for backward compatibility with legacy LAN Manager and Windows ME or earlier clients, or legacy NetBIOS-enabled applications. It has for many years been considered good security practice to disable the compromised LM and NTLMv1 authentication protocols where they aren't needed.
Starting with Windows Vista and Windows Server 2008, Microsoft disabled the LM hash by default; the feature can be enabled for local accounts via a security policy setting, and for Active Directory accounts by applying the same setting via domain Group Policy. The same method can be used to turn the feature off in Windows 2000, Windows XP and NT. Users can also prevent a LM hash from being generated for their own password by using a password at least fifteen characters in length. However, NTLM hashes have in turn become vulnerable in recent years to various attacks that effectively make them as weak today as LanMan hashes were back in 1998.
Reasons for continued use of LM hash.
Many legacy third party SMB implementations have taken considerable time to add support for the stronger protocols that Microsoft has created to replace LM hashing because the open source communities supporting these libraries first had to reverse engineer the newer protocols—Samba took 5 years to add NTLMv2 support, while JCIFS took 10 years.
Poor patching regimes subsequent to software releases supporting the feature becoming available have contributed to some organisations continuing to use LM Hashing in their environments, even though the protocol is easily disabled in Active Directory itself.
Lastly, prior to the release of Windows Vista, many unattended build processes still used a DOS boot disk (instead of Windows PE) to start the installation of Windows using WINNT.EXE, something that requires LM hashing to be enabled for the legacy LAN Manager networking stack to work.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "2\\times69^{7} \\approx 2^{44}"
},
{
"math_id": 1,
"text": "69^{14} \\approx 2^{86}"
}
] | https://en.wikipedia.org/wiki?curid=1123994 |
1124019 | Accounting method (computer science) | Method of amortized analysis
In the field of analysis of algorithms in computer science, the accounting method is a method of amortized analysis based on accounting. The accounting method often gives a more intuitive account of the amortized cost of an operation than either aggregate analysis or the potential method. Note, however, that this does not guarantee such analysis will be immediately obvious; often, choosing the correct parameters for the accounting method requires as much knowledge of the problem and the complexity bounds one is attempting to prove as the other two methods.
The accounting method is most naturally suited for proving an O(1) bound on time. The method as explained here is for proving such a bound.
The method.
A set of elementary operations which will be used in the algorithm is chosen and their costs are arbitrarily set to 1. The fact that the costs of these operations may differ in reality presents no difficulty in principle. What is important is that each elementary operation has a constant cost.
Each aggregate operation is assigned a "payment". The payment is intended to cover the cost of elementary operations needed to complete this particular operation, with some of the payment left over, placed in a pool to be used later.
The difficulty with problems that require amortized analysis is that, in general, some of the operations will require greater than constant cost. This means that no constant payment will be enough to cover the worst case cost of an operation, in and of itself. With proper selection of payment, however, this is no longer a difficulty; the expensive operations will only occur when there is sufficient payment in the pool to cover their costs.
Examples.
A few examples will help to illustrate the use of the accounting method.
Table expansion.
It is often necessary to create a table before it is known how much space is needed. One possible strategy is to double the size of the table when it is full. Here we will use the accounting method to show that the amortized cost of an insertion operation in such a table is O(1).
Before looking at the procedure in detail, we need some definitions. Let T be a table, E an element to insert, num(T) the number of elements in T, and size(T) the allocated size of T. We assume the existence of operations create_table(n), which creates an empty table of size n, for now assumed to be free, and elementary_insert(T,E), which inserts element E into a table T that already has space allocated, with a cost of 1.
The following pseudocode illustrates the table insertion procedure:
function table_insert(T, E)
if num(T) = size(T)
U := create_table(2 × size(T))
for each F in T
elementary_insert(U, F)
T := U
elementary_insert(T, E)
Without amortized analysis, the best bound we can show for n insert operations is O(n) — this is due to the loop at line 4 that performs num(T) elementary insertions.
For analysis using the accounting method, we assign a payment of 3 to each table insertion. Although the reason for this is not clear now, it will become clear during the course of the analysis.
Assume that initially the table is empty with size(T) = m. The first m insertions therefore do not require reallocation and only have cost 1 (for the elementary insert). Therefore, when num(T) = m, the pool has (3 - 1)×m = 2m.
Inserting element m + 1 requires reallocation of the table. Creating the new table on line 3 is free (for now). The loop on line 4 requires m elementary insertions, for a cost of m. Including the insertion on the last line, the total cost for this operation is m + 1. After this operation, the pool therefore has 2m + 3 - (m + 1) = m + 2.
Next, we add another m - 1 elements to the table. At this point the pool has m + 2 + 2×(m - 1) = 3m. Inserting an additional element (that is, element 2m + 1) can be seen to have cost 2m + 1 and a payment of 3. After this operation, the pool has 3m + 3 - (2m + 1) = m + 2. Note that this is the same amount as after inserting element m + 1. In fact, we can show that this will be the case for any number of reallocations.
It can now be made clear why the payment for an insertion is 3. 1 pays for the first insertion of the element, 1 pays for moving the element the next time the table is expanded, and 1 pays for moving an older element the next time the table is expanded. Intuitively, this explains why an element's contribution never "runs out" regardless of how many times the table is expanded: since the table is always doubled, the newest half always covers the cost of moving the oldest half.
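The argument can be checked mechanically. The following Python sketch (illustrative only, with table creation still treated as free) simulates the insertions and asserts that the pool never becomes negative:
def simulate_table_doubling(n_inserts, payment=3, initial_size=4):
    size, num, pool = initial_size, 0, 0
    for _ in range(n_inserts):
        pool += payment                  # collect the payment for this insertion
        if num == size:                  # table full: copy num elements to a new table
            pool -= num
            size *= 2
        pool -= 1                        # elementary insertion of the new element
        num += 1
        assert pool >= 0, "payment too small to cover the amortized cost"
    return pool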
We initially assumed that creating a table was free. In reality, creating a table of size n may be as expensive as O(n). Let us say that the cost of creating a table of size n is n. Does this new cost present a difficulty? Not really; it turns out we use the same method to show the amortized O(1) bounds. All we have to do is change the payment.
When a new table is created, there is an old table with m entries. The new table will be of size 2m. As long as the entries currently in the table have added enough to the pool to pay for creating the new table, we will be all right.
We cannot expect the first formula_0 entries to help pay for the new table. Those entries already paid for the current table. We must then rely on the last formula_0 entries to pay the cost formula_1. This means we must add formula_2 to the payment for each entry, for a total payment of 3 + 4 = 7. | [
{
"math_id": 0,
"text": "\\frac{m}{2}"
},
{
"math_id": 1,
"text": "2m"
},
{
"math_id": 2,
"text": "\\frac{2m}{m/2} = 4"
}
] | https://en.wikipedia.org/wiki?curid=1124019 |
1124109 | Glider (Conway's Game of Life) | Moving pattern of five live cells in Conway's Game of Life
The glider is a pattern that travels across the board in Conway's Game of Life. It was first discovered by Richard K. Guy in 1969, while John Conway's group was attempting to track the evolution of the R-pentomino. Gliders are the smallest spaceships, and they travel diagonally at a speed of one cell every four generations, or formula_0. The glider is often produced from randomly generated starting configurations.
The name comes from the fact that, after two steps, the glider pattern repeats its configuration with a glide reflection symmetry. After four steps and two glide reflections, it returns to its original orientation. John Conway remarked that he wished he hadn't called it the glider. The game was developed before the widespread use of interactive computers, and after seeing it animated, he feels the glider looks more like an ant walking across the plane.
Importance.
Gliders are important to the Game of Life because they are easily produced, can be collided with each other to form more complicated objects, and can be used to transmit information over long distances. Instances of this second advantage are called glider syntheses. For instance, eight gliders can be positioned so that they collide to form a Gosper glider gun. Glider collisions designed to result in certain patterns are also called glider syntheses.
Patterns such as blocks, beehives, blinkers, traffic lights, even the uncommon Eater, can be synthesized with just two gliders. It takes three gliders to build the three other basic spaceships, and even the pentadecathlon oscillator.
Some patterns require a very large number (sometimes hundreds) of glider collisions; these include some oscillators, exotic spaceships, puffer trains, guns, and so on. Whether the fact that an exotic pattern can be constructed from gliders means that it can also occur naturally is still a matter of conjecture.
Gliders can also be collided with other patterns with interesting results. For example, if two gliders are shot at a block in just the right way, the block moves closer to the source of the gliders. If three gliders are shot in just the right way, the block moves farther away. This "sliding block memory" can be used to simulate a counter, which would be modified by firing gliders at it. It is possible to construct logic gates such as "AND", "OR" and "NOT" using gliders. One may also build a pattern that acts like a finite state machine connected to two counters. This has the same computational power as a universal Turing machine, so, using the glider, the Game of Life is theoretically as powerful as any computer with unlimited memory and no time constraints: it is Turing complete.
Hacker emblem.
Eric S. Raymond has proposed the glider as an emblem to represent the hacker subculture, as the Game of Life appeals to hackers, and the concept of the glider was "born at almost the same time as the Internet and Unix". The emblem is in use in various places within the subculture.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "c/4"
}
] | https://en.wikipedia.org/wiki?curid=1124109 |
1124466 | Uranium–thorium dating | Radiometric dating method
Uranium–thorium dating, also called thorium-230 dating, uranium-series disequilibrium dating or uranium-series dating, is a radiometric dating technique established in the 1960s which has been used since the 1970s to determine the age of calcium carbonate materials such as speleothem or coral. Unlike other commonly used radiometric dating techniques such as rubidium–strontium or uranium–lead dating, the uranium-thorium technique does not measure accumulation of a stable end-member decay product. Instead, it calculates an age from the degree to which secular equilibrium has been restored between the radioactive isotope thorium-230 and its radioactive parent uranium-234 within a sample.
Background.
Thorium is not soluble in natural water under conditions found at or near the surface of the earth, so materials grown in or from this water do not usually contain thorium. In contrast, uranium is soluble to some extent in all natural water, so any material that precipitates or is grown from such water also contains trace uranium, typically at levels of between a few parts per billion and few parts per million by weight. As time passes after such material has formed, uranium-234 in the sample with a half-life of 245,000 years decays to thorium-230. Thorium-230 is itself radioactive with a half-life of 75,000 years, so instead of accumulating indefinitely (as for instance is the case for the uranium–lead system), thorium-230 instead approaches secular equilibrium with its radioactive parent uranium-234. At secular equilibrium, the number of thorium-230 decays per year within a sample is equal to the number of thorium-230 produced, which also equals the number of uranium-234 decays per year in the same sample.
History.
In 1908, John Joly, a professor of geology at Trinity College Dublin, found higher radium contents in deep sediments than in those of the continental shelf, and suspected that detrital sediments scavenged radium out of seawater. Piggot and Urry found in 1942, that radium excess corresponded with an excess of thorium. It took another 20 years until the technique was applied to terrestrial carbonates (speleothems and travertines). In the late 1980s, the method was refined by mass spectrometry, with significant contributions from Larry Edwards. After Viktor Viktorovich Cherdyntsev's landmark book about uranium-234 had been translated into English, U-Th dating came to widespread research attention in Western geology.
Methods.
U-series dating is a family of methods which can be applied to different materials over different time ranges.
Each method is named after the isotopes measured to obtain the date, mostly a daughter and its parent. Eight methods are
listed in the table below.
The 234U/238U method is based on the fact that 234U is dissolved preferentially over 238U because when a 238U atom decays by emitting an alpha ray the daughter atom is displaced from its normal position in the crystal by atomic recoil.
This produces a 234Th atom which quickly becomes a 234U atom. Once the uranium is deposited, the ratio of 234U to 238U goes back down to its secular equilibrium (at which the radioactivities of the two are equal), with the distance from equilibrium decreasing by a factor of 2 every 245,000 years.
A material balance gives, for some unknown constant A, these expressions for activity ratios (assuming that the 230Th starts at zero):
formula_0
formula_1
We can solve the first equation for A in terms of the unknown age, t:
formula_2
Putting this into the second equation gives us an equation to be solved for t:
formula_3
Unfortunately there is no closed-form expression for the age, t, but it is easily found using equation solving algorithms.
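For illustration, a simple bisection search in Python recovers t from measured activity ratios (a sketch using the rounded half-lives from the equations above; the function name and bracket are assumptions, and it presumes the age lies within the dating range; real analyses use precise decay constants):
def age_from_ratios(r234_238, r230_238, t_lo=1.0, t_hi=500000.0):
    T234, T230 = 245000.0, 75000.0          # half-lives in years, as used above
    c = (r234_238 - 1.0) / (1.0 - T230 / T234)
    def f(t):
        return 1.0 + c - 2.0 ** (-t / T230) - c * 2.0 ** (t / T234 - t / T230) - r230_238
    for _ in range(100):                     # bisection on the age equation
        t_mid = 0.5 * (t_lo + t_hi)
        if f(t_lo) * f(t_mid) <= 0.0:
            t_hi = t_mid
        else:
            t_lo = t_mid
    return 0.5 * (t_lo + t_hi)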
Dating limits.
Uranium–thorium dating has an upper age limit of somewhat over 500,000 years, defined by the half-life of thorium-230, the precision with which one can measure the thorium-230/uranium-234 ratio in a sample, and the accuracy to which one knows the half-lives of thorium-230 and uranium-234. Using this technique to calculate an age, the ratio of uranium-234 to its parent isotope uranium-238 must also be measured.
Precision.
U-Th dating yields the most accurate results if applied to precipitated calcium carbonate, that is in stalagmites, travertines, and lacustrine limestones. Bone and shell are less reliable. Mass spectrometry can achieve a precision of ±1%. Conventional alpha counting's precision is ±5%. Mass spectrometry also uses smaller samples.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "^{234}\\text{U}/^{238}\\text{U}=1+A\\times 2^{-t/245000}"
},
{
"math_id": 1,
"text": "^{230}\\text{Th}/^{238}\\text{U}=1+\\frac A{1-75000/245000}\\times 2^{-t/245000}-\\left(1+\\frac A{1-75000/245000}\\right)\\times 2^{-t/75000}"
},
{
"math_id": 2,
"text": "A=(^{234}\\text{U}/^{238}\\text{U}-1)\\times 2^{t/245000}"
},
{
"math_id": 3,
"text": "^{230}\\text{Th}/^{238}\\text{U}=1+\\frac{^{234}\\text{U}/^{238}\\text{U}-1}{1-75000/245000}-2^{-t/75000}-\\frac{^{234}\\text{U}/^{238}\\text{U}-1}{1-75000/245000}\\times 2^{t/245000-t/75000}"
}
] | https://en.wikipedia.org/wiki?curid=1124466 |
1124695 | Kripke–Platek set theory | System of mathematical set theory
The Kripke–Platek set theory (KP), pronounced , is an axiomatic set theory developed by Saul Kripke and Richard Platek.
The theory can be thought of as roughly the predicative part of ZFC and is considerably weaker than it.
Axioms.
In its formulation, a Δ0 formula is one all of whose quantifiers are bounded. This means any quantification is the form formula_0 or formula_1 (See the Lévy hierarchy.)
The axioms of KP are:
Axiom of extensionality: Two sets are the same if and only if they have the same elements.
Axiom of induction: φ("a") being a formula, if for all sets "x" the assumption that φ("y") holds for all elements "y" of "x" entails that φ("x") holds, then φ("x") holds for all sets "x".
Axiom of empty set: There exists a set with no members, called the empty set and denoted {}.
Axiom of pairing: If "x", "y" are sets, then so is {"x", "y"}.
Axiom of union: For any set "x", there is a set "y" such that the elements of "y" are precisely the elements of the elements of "x".
Axiom of Δ0-separation: Given any set and any Δ0 formula φ("x"), there is a subset of the original set containing precisely those elements "x" for which φ("x") holds.
Axiom of Δ0-collection: Given any Δ0 formula φ("x", "y"), if for every set "x" there exists a set "y" such that φ("x", "y") holds, then for all sets "u" there exists a set "v" such that for every "x" in "u" there is a "y" in "v" such that φ("x", "y") holds.
Some but not all authors include an axiom of infinity, which asserts the existence of an infinite set.
KP with infinity is denoted by KPω. These axioms lead to close connections between KP, generalized recursion theory, and the theory of admissible ordinals.
KP can be studied as a constructive set theory by dropping the law of excluded middle, without changing any axioms.
Empty set.
If any set formula_2 is postulated to exist, such as in the axiom of infinity, then the axiom of empty set is redundant because it is equal to the subset formula_3. Furthermore, the existence of a member in the universe of discourse, i.e., ∃x(x=x), is implied in certain formulations of first-order logic, in which case the axiom of empty set follows from the axiom of Δ0-separation, and is thus redundant.
Comparison with Zermelo-Fraenkel set theory.
As noted, the above are weaker than ZFC as they exclude the power set axiom, choice, and sometimes infinity. Also the axioms of separation and collection here are weaker than the corresponding axioms in ZFC because the formulas φ used in these are limited to bounded quantifiers only.
The axiom of induction in the context of KP is stronger than the usual axiom of regularity, which amounts to applying induction to the complement of a set (the class of all sets not in the given set).
Theorems.
Admissible sets.
The ordinal "α" is an admissible ordinal if and only if "α" is a limit ordinal and there does not exist a "γ" < "α" for which there is a Σ1(L"α") mapping from "γ" onto "α". If "M" is a standard model of KP, then the set of ordinals in "M" is an admissible ordinal.
Cartesian products exist.
Theorem:
If "A" and "B" are sets, then there is a set "A"×"B" which consists of all ordered pairs ("a", "b") of elements "a" of "A" and "b" of "B".
Proof:
The singleton set with member "a", written {"a"}, is the same as the unordered pair {"a", "a"}, by the axiom of extensionality.
The singleton, the set {"a", "b"}, and then also the ordered pair
formula_8
all exist by pairing.
A possible Δ0-formula formula_9 expressing that "p" stands for the pair ("a", "b") is given by the lengthy
formula_10
formula_11
formula_12
What follows are two steps of collection of sets, followed by a restriction through separation. All results are also expressed using set builder notation.
Firstly, given formula_13 and collecting with respect to formula_14, some superset of formula_15 exists by collection.
The Δ0-formula
formula_16
grants that just formula_17 itself exists by separation.
If formula_18 ought to stand for this collection of pairs formula_17, then a Δ0-formula characterizing it is
formula_19
Given formula_14 and collecting with respect to formula_20, some superset of formula_21 exists by collection.
Putting formula_22 in front of that last formula, one finds that the set formula_21 itself exists by separation.
Finally, the desired
formula_23
exists by union.
Q.E.D.
Metalogic.
The consistency strength of KPω is given by the Bachmann–Howard ordinal. KP fails to prove some common theorems in set theory, such as the Mostowski collapse lemma. | [
{
"math_id": 0,
"text": "\\forall u \\in v"
},
{
"math_id": 1,
"text": "\\exist u \\in v."
},
{
"math_id": 2,
"text": "c"
},
{
"math_id": 3,
"text": "\\{x\\in c\\mid x\\neq x\\}"
},
{
"math_id": 4,
"text": " A\\, "
},
{
"math_id": 5,
"text": "\\langle A,\\in \\rangle"
},
{
"math_id": 6,
"text": "\\alpha"
},
{
"math_id": 7,
"text": "L_\\alpha"
},
{
"math_id": 8,
"text": "(a,b) := \\{ \\{a\\}, \\{a,b\\} \\} "
},
{
"math_id": 9,
"text": "\\psi (a, b, p)"
},
{
"math_id": 10,
"text": "\\exist r \\in p\\, \\big(a \\in r\\, \\land\\, \\forall x \\in r\\, (x = a) \\big)"
},
{
"math_id": 11,
"text": "\\land\\, \\exist s \\in p\\, \\big(a \\in s \\,\\land\\, b \\in s\\, \\land\\, \\forall x \\in s\\, (x = a \\,\\lor\\, x = b) \\big)"
},
{
"math_id": 12,
"text": "\\land\\, \\forall t \\in p\\, \\Big(\\big(a \\in t\\, \\land\\, \\forall x \\in t\\, (x = a)\\big)\\, \\lor\\, \\big(a \\in t \\land b \\in t \\land \\forall x \\in t\\, (x = a \\,\\lor\\, x = b)\\big)\\Big)."
},
{
"math_id": 13,
"text": "b"
},
{
"math_id": 14,
"text": "A"
},
{
"math_id": 15,
"text": "A\\times\\{b\\} = \\{(a,b)\\mid a\\in A\\}"
},
{
"math_id": 16,
"text": "\\exist a \\in A \\,\\psi (a, b, p)"
},
{
"math_id": 17,
"text": "A\\times\\{b\\}"
},
{
"math_id": 18,
"text": "P"
},
{
"math_id": 19,
"text": "\\forall a \\in A\\, \\exist p \\in P\\, \\psi (a, b, p)\\, \\land\\, \\forall p \\in P\\, \\exist a \\in A\\, \\psi (a, b, p) \\,."
},
{
"math_id": 20,
"text": "B"
},
{
"math_id": 21,
"text": "\\{A\\times \\{b\\} \\mid b\\in B\\}"
},
{
"math_id": 22,
"text": "\\exist b \\in B"
},
{
"math_id": 23,
"text": "A\\times B := \\bigcup \\{A\\times \\{b\\} \\mid b\\in B\\}"
}
] | https://en.wikipedia.org/wiki?curid=1124695 |
11247993 | Hyperpyramid | A hyperpyramid is a generalisation of the normal pyramid to "n" dimensions.
In the case of the pyramid one connects all vertices of the base, a polygon in a plane, to a point outside the plane, which is the peak. The pyramid's height is the distance of the peak from the plane. This construction gets generalised to "n" dimensions. The base becomes a ("n" − 1)-polytope in a ("n" − 1)-dimensional hyperplane. A point called apex is located outside the hyperplane and gets connected to all the vertices of the polytope and the distance of the apex from the hyperplane is called height. This construct is called a "n"-dimensional hyperpyramid.
A normal triangle is a 2-dimensional hyperpyramid, the triangular pyramid is a 3-dimensional hyperpyramid and the pentachoron or tetrahedral pyramid is a 4-dimensional hyperpyramid with a tetrahedron as base.
The "n"-dimensional volume of a "n"-dimensional hyperpyramid can be computed as follows:
formula_0
Here formula_1 denotes the "n"-dimensional volume of the hyperpyramid, "A" the ("n" − 1)-dimensional volume of the base and "h" the height, that is the distance between the apex and the ("n" − 1)-dimensional hyperplane containing the base "A". For "n" = 2, 3 the formula above yields the standard formulas for the area of a triangle and the volume of a pyramid. | [
{
"math_id": 0,
"text": "V_n=\\frac{A \\cdot h}{n}"
},
{
"math_id": 1,
"text": "V_n"
}
] | https://en.wikipedia.org/wiki?curid=11247993 |
11248623 | Low-energy electron microscopy | Low-energy electron microscopy, or LEEM, is an analytical surface science technique used to image atomically clean surfaces, atom-surface interactions, and thin (crystalline) films. In LEEM, high-energy electrons (15-20 keV) are emitted from an electron gun, focused using a set of condenser optics, and sent through a magnetic beam deflector (usually 60˚ or 90˚). The “fast” electrons travel through an objective lens and begin decelerating to low energies (1-100 eV) near the sample surface because the sample is held at a potential near that of the gun. The low-energy electrons are now termed “surface-sensitive” and the near-surface sampling depth can be varied by tuning the energy of the incident electrons (difference between the sample and gun potentials minus the work functions of the sample and system). The low-energy elastically backscattered electrons travel back through the objective lens, reaccelerate to the gun voltage (because the objective lens is grounded), and pass through the beam separator again. However, now the electrons travel away from the condenser optics and into the projector lenses. Imaging of the back focal plane of the objective lens into the object plane of the projector lens (using an intermediate lens) produces a diffraction pattern (low-energy electron diffraction, LEED) at the imaging plane and recorded in a number of different ways. The intensity distribution of the diffraction pattern will depend on the periodicity at the sample surface and is a direct result of the wave nature of the electrons. One can produce individual images of the diffraction pattern spot intensities by turning off the intermediate lens and inserting a contrast aperture in the back focal plane of the objective lens (or, in state-of-the-art instruments, in the center of the separator, as chosen by the excitation of the objective lens), thus allowing for real-time observations of dynamic processes at surfaces. Such phenomena include (but are not limited to): tomography, phase transitions, adsorption, reaction, segregation, thin film growth, etching, strain relief, sublimation, and magnetic microstructure. These investigations are only possible because of the accessibility of the sample; allowing for a wide variety of in situ studies over a wide temperature range. LEEM was invented by Ernst Bauer in 1962; however, not fully developed (by Ernst Bauer and Wolfgang Telieps) until 1985.
Introduction.
LEEM differs from conventional electron microscopes in four main ways:
Surface diffraction.
Kinematic or elastic backscattering occurs when low energy (1-100 eV) electrons impinge on a clean, well-ordered crystalline specimen. It is assumed that each electron undergoes only one scattering event, and incident electron beam is described as a plane wave with the wavelength:
formula_0
Inverse space is used to describe the periodicity of the lattice and the interaction of the plane wave with the sample surface. In inverse (or "k-space") space, the wave vector of the incident and scattered waves are formula_1 and formula_2, respectively,
and constructive interference occurs at the Laue condition:
formula_3
where (h,k,l) is a set of integers and
formula_4
is a vector of the reciprocal lattice.
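For example, the electron wavelength for a given landing energy follows directly from the formula above (an illustrative Python snippet):
import math
def electron_wavelength_angstrom(energy_ev):
    # Non-relativistic de Broglie wavelength, lambda[Å] = sqrt(150 / E[eV]).
    return math.sqrt(150.0 / energy_ev)
A 10 eV electron thus has a wavelength of about 3.9 Å, comparable to typical interatomic spacings, which is what makes surface diffraction possible.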
Experimental setup.
A typical LEEM setup consists of electron gun, used to generate electrons by way of thermionic or field emission from a source tip. In thermionic emission, electrons escape a source tip (usually made of LaB6) by resistive heating and application of an electric field to effectively lower the energy needed for electrons to escape the surface. Once sufficient thermal vibrational energy is attained electrons may overcome this electrostatic energy barrier, allowing them to travel into vacuum and accelerate down the lens column to the gun potential (because the lenses are at ground). In field emission, rather than heating the tip to vibrationally excite electrons from the surface, the source tip (usually tungsten) is sharpened to a small point such that when large electric fields are applied, they concentrate at the tip, lowering the barrier to escape the surface as well as making tunneling of electrons from the tip to vacuum level more feasible.
Condenser/illumination optics are used to focus electrons leaving the electron gun and manipulate and/or translate the illumination electron beam. Electromagnetic quadrupole electron lenses are used, the number of which depends on how much resolution and focusing flexibility the designer wishes. However, the ultimate resolution of LEEM is usually determined by that of the objective lens.
Illumination beam aperture allows researchers to control the area of the specimen which is illuminated (LEEM's version of electron microscopy's “selected area diffraction”, termed microdiffraction) and is located in the beam separator on the illumination side.
Magnetic beam separator is needed to resolve the illuminating and imaging beam (while in turn spatially separating the optics for each). There has been much development on the technology of electron beam separators; the early separators introduced distortion in either the image or diffraction plane. However, IBM recently developed a hybrid prism array/nested quadratic field design, focusing the electron beams both in and out of the plane of the beampath, allowing for deflection and transfer of the image and diffraction planes without distortion or energy dispersion.
Electrostatic immersion objective lens is used to form a real image of the sample by way of a 2/3-magnification virtual image behind the sample. The uniformity of the electrostatic field between the objective lens and specimen, limited by spherical and chromatic aberrations larger than those of any other lenses, ultimately determines the overall performance of the instrument.
Contrast aperture is located in the center on the projector lens side of the beam separator. In most electron microscopies, the contrast aperture is introduced into the back focal plane of the objective lens (where the actual diffraction plane lies). However, this is not true in LEEM, because dark-field imaging (imaging of nonspecular beams) would not be possible: the aperture would have to move laterally and would intercept the incident beam for large shifts. Therefore, researchers adjust the excitation of the objective lens so as to produce an image of the diffraction pattern in the middle of the beam separator and choose the desired spot intensity to image using a contrast aperture inserted there. This aperture allows scientists to image diffraction intensities that may be of particular interest (dark field).
Projection optics are employed to magnify the image or diffraction pattern and project it onto the imaging plate or screen.
Imaging plate or screen is used to record the electron intensity so that it can be viewed. This can be done in many different ways, including phosphorescent screens, imaging plates, and CCDs, among others.
Specialized imaging techniques.
Low energy electron diffraction (LEED).
After a parallel beam of low-energy electrons interacts with a specimen, the electrons form a diffraction or LEED pattern which depends on periodicity present at the surface and is a direct result of the wave nature of an electron. It is important to point out that in regular LEED the entire sample surface is being illuminated by a parallel beam of electrons, and thus the diffraction pattern will contain information about the entire surface.
LEED performed in a LEEM instrument (sometimes referred to as Very Low-Energy Electron Diffraction (VLEED), due to the even lower electron energies), limits the area illuminated to the beam spot, typically in the order of square micrometers.
The diffraction pattern is formed in the back focal plane of the objective lens, imaged into the object plane of the projective lens (using an intermediate lens), and the final pattern appears on the phosphorescent screen, photographic plate or CCD.
As the reflected electrons are bent away from the electron source by the prism, the specular reflected electrons can be measured, even starting from zero landing energy, as no shadow of the source is visible on the screen (which prevents this in regular LEED instruments).
It is worth noting that the spacing of diffracted beams does not increase with kinetic energy as for conventional LEED systems. This is due to the imaged electrons being accelerated to the high energy of the imaging column and are therefore imaged with a constant size of K-space regardless of the incident electron energy.
Microdiffraction.
Microdiffraction is conceptually exactly like LEED. However, unlike in a LEED experiment where the sampled surface area is some square millimeters, one inserts the illumination and the beam aperture into the beam path while imaging a surface and thus reduces the size of the sampled surface area. The chosen area ranges from a fraction of a square micrometer to square micrometers. If the surface is not homogeneous, a diffraction pattern obtained from LEED experiment appears convoluted and is therefore hard to analyze. In a microdiffraction experiment researchers may focus on a particular island, terrace, domain and so on, and retrieve a diffraction pattern composed solely of a single surface feature, making the technique extremely useful.
Bright field imaging.
Bright Field imaging uses the specular, reflected, (0,0) beam to form an image. Also known as phase or interference contrast imaging, bright field imaging makes particular use of the wave nature of the electron to generate vertical diffraction contrast, making steps on the surface visible.
Dark field imaging.
In dark field imaging (also termed diffraction contrast imaging) researchers choose a desired diffraction spot and use a contrast aperture to pass only those electrons that contribute to that particular spot. In the image planes after the contrast aperture it is then possible to observe where the electrons originate from in real space. This technique allows scientists to study on which areas of a specimen a structure with a certain lattice vector (periodicity) exists.
Spectroscopy.
Both (micro-)diffraction as well as bright field and dark field imaging can be performed as a function of the electron landing energy, measuring a diffraction pattern or an image for a range of energies. This way of measuring (often called LEEM-IV) yields spectra for each diffraction spot or sample position. In its simplest form, this spectrum gives a "fingerprint" of the surface, enabling the identification of different surface structures.
A particular application of bright field spectroscopy is the counting of the exact number of layers in layered materials such as (few layer) graphene, hexagonal boron nitride and some transition metal dichalcogenides.
Photoemission electron microscopy (PEEM).
In photoemission electron microscopy (PEEM), upon exposure to electromagnetic radiation (photons), secondary electrons are excited from the surface and imaged. PEEM was first developed in the early 1930s, using ultraviolet (UV) light to induce photoemission of (secondary) electrons. However, since then, this technique has made many advances, the most important of which was the pairing of PEEM with a synchrotron light source, providing tunable, linear polarized, left and right circularized radiation in the soft x-ray range. Such application allows scientist to retrieve topographical, elemental, chemical, and magnetic contrast of surfaces.
LEEM instruments are often equipped with light sources to perform PEEM imaging. This helps in system alignment and enables collection LEEM, PEEM and ARPES data of a single sample in a single instrument.
Mirror electron microscopy (MEM).
In mirror electron microscopy, electrons are slowed in the retarding field of the condenser lens to the limit of the instrument and are thus only allowed to interact with the “near-surface” region of the sample. It is very complicated to understand exactly where the contrast variations come from, but the important point is that height variations at the surface of the region change the properties of the retarding field, thereby influencing the reflected (specular) beam. No LEED pattern is formed, because no scattering events have taken place, and therefore the reflected intensity is high.
Low-energy electron holography.
Low-energy electron holography is realized with electrons with kinetic energies in the range 30 - 250 eV. The source of the coherent electron beam is a sharp metal tip and the electrons are extracted by field emission. The wave transmitted through the sample propagates to the detector where the interference pattern is acquired, formed by superposition of the scattered wave with the non-scattered (reference) wave, constituting an in-line hologram. The structure of the object (macromolecule) is then reconstructed from the hologram by numerical methods. Low-energy electron holography has successfully been applied for imaging of individual biological molecules, including: purple protein membrane, DNA molecules, phthalocyaninato polysiloxane molecules, the tobacco mosaic virus, a bacteriophage, ferritin and individual proteins (bovine serum albumin, cytochrome C and hemoglobin). The resolution achieved by low-energy electron holography is about 0.7 - 1 nm.
Reflectivity contrast imaging.
The elastic backscattering of low energy electrons from surfaces is strong. The reflectivity coefficients of surfaces depend strongly on the energy of incident electrons and the nuclear charge, in a non-monotonic fashion. Therefore, contrast can be maximized by varying the energy of the electrons incident at the surface.
Spin-polarized LEEM (SPLEEM).
SPLEEM uses spin-polarized illumination electrons to image the magnetic structure of a surface by way of spin-spin coupling of the incident electrons with that of the surface.
Other.
Other advanced techniques include:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\begin{align}\n\\lambda = \\frac{h}{\\sqrt{2mE}}, \\qquad \\lambda[\\textrm{A}]=\\sqrt{\\frac{150}{E[\\textrm{eV}]}}\n\\end{align}\n"
},
{
"math_id": 1,
"text": "\\textbf{k}_0=2\\pi/\\lambda_0"
},
{
"math_id": 2,
"text": "\\begin{align}\\textbf{k}=2\\pi/\\lambda\\end{align}"
},
{
"math_id": 3,
"text": "\\textbf{k}-\\textbf{k}_0 = \\textbf{G}_\\textrm{hkl}"
},
{
"math_id": 4,
"text": " \\textbf{G}_\\textrm{hkl} = h\\textbf{a}^*+k\\textbf{b}^*+l\\textbf{c}^*"
}
] | https://en.wikipedia.org/wiki?curid=11248623 |
11249626 | Statistical model specification | Part of the process of building a statistical model
In statistics, model specification is part of the process of building a statistical model: specification consists of selecting an appropriate functional form for the model and choosing which variables to include. For example, given personal income formula_0 together with years of schooling formula_1 and on-the-job experience formula_2, we might specify a functional relationship formula_3 as follows:
formula_4
where formula_5 is the unexplained error term that is supposed to comprise independent and identically distributed Gaussian variables.
The statistician Sir David Cox has said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis".
Specification error and bias.
Specification error occurs when the functional form or the choice of independent variables poorly represents relevant aspects of the true data-generating process. In particular, bias (the expected value of the difference of an estimated parameter and the true underlying value) occurs if an independent variable is correlated with the errors inherent in the underlying process. There are several different possible causes of specification error; some are listed below.
An inappropriate functional form may be employed.
A variable omitted from the model may have a relationship with both the dependent variable and one or more of the independent variables (causing omitted-variable bias).
An irrelevant variable may be included in the model (this does not bias the other estimates, but it costs precision and invites overfitting).
The dependent variable may be part of a system of simultaneous equations (giving simultaneity bias).
Additionally, measurement errors may affect the independent variables: while this is not a specification error, it can create statistical bias.
Note that all models will have some specification error. Indeed, in statistics there is a common aphorism that "all models are wrong". In the words of Burnham & Anderson, "Modeling is an art as well as a science and is directed toward finding a good approximating model ... as the basis for statistical inference".
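A small simulation illustrates omitted-variable specification error (a hypothetical data-generating process chosen purely for illustration, not taken from the text):
import numpy as np
rng = np.random.default_rng(0)
n = 1000
x = rng.uniform(0, 10, n)
y = 1.0 + 2.0 * x + 0.5 * x**2 + rng.normal(0, 1, n)   # true process includes x**2
X_bad = np.column_stack([np.ones(n), x])                # misspecified: omits x**2
X_good = np.column_stack([np.ones(n), x, x**2])
beta_bad, *_ = np.linalg.lstsq(X_bad, y, rcond=None)    # slope is biased (about 7, not 2)
beta_good, *_ = np.linalg.lstsq(X_good, y, rcond=None)  # recovers roughly (1, 2, 0.5)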
Detection of misspecification.
The Ramsey RESET test can help test for specification error in regression analysis.
In the example given above relating personal income to schooling and job experience, if the assumptions of the model are correct, then the least squares estimates of the parameters formula_6 and formula_7 will be efficient and unbiased. Hence specification diagnostics usually involve testing the first to fourth moments of the residuals.
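One simple way to carry out such diagnostics is to compute the sample moments of the fitted residuals directly; marked skewness or excess kurtosis casts doubt on the Gaussian error assumption. A minimal sketch (the residual array below is only a stand-in for the residuals of an actual fitted model):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Residuals from a (hypothetical) fitted regression; replace with your own.
residuals = rng.normal(0.0, 0.3, size=500)

print("mean     :", np.mean(residuals))          # first moment, ~0 for a correct specification
print("variance :", np.var(residuals, ddof=1))   # second moment
print("skewness :", stats.skew(residuals))       # third moment, ~0 for Gaussian errors
print("kurtosis :", stats.kurtosis(residuals))   # excess kurtosis, ~0 for Gaussian errors
```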
Model building.
Building a model involves finding a set of relationships to represent the process that is generating the data. This requires avoiding all the sources of misspecification mentioned above.
One approach is to start with a model in general form that relies on a theoretical understanding of the data-generating process. Then the model can be fit to the data and checked for the various sources of misspecification, in a task called "statistical model validation". Theoretical understanding can then guide the modification of the model in such a way as to retain theoretical validity while removing the sources of misspecification. But if it proves impossible to find a theoretically acceptable specification that fits the data, the theoretical model may have to be rejected and replaced with another one.
A quotation from Karl Popper is apposite here: "Whenever a theory appears to you as the only possible one, take this as a sign that you have neither understood the theory nor the problem which it was intended to solve".
Another approach to model building is to specify several different models as candidates, and then compare those candidate models to each other. The purpose of the comparison is to determine which candidate model is most appropriate for statistical inference. Common criteria for comparing models include the following: "R"2, Bayes factor, and the likelihood-ratio test together with its generalization relative likelihood. For more on this topic, see "statistical model selection".
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "y"
},
{
"math_id": 1,
"text": "s"
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "y = f(s,x)"
},
{
"math_id": 4,
"text": "\n\\ln y = \\ln y_0 + \\rho s + \\beta_1 x + \\beta_2 x^2 + \\varepsilon\n"
},
{
"math_id": 5,
"text": "\\varepsilon"
},
{
"math_id": 6,
"text": "\\rho"
},
{
"math_id": 7,
"text": "\\beta"
}
] | https://en.wikipedia.org/wiki?curid=11249626 |
11253084 | Atomic form factor | Measure of the scattering amplitude of a wave by an isolated atom
In physics, the atomic form factor, or atomic scattering factor, is a measure of the scattering amplitude of a wave by an isolated atom. The atomic form factor depends on the type of scattering, which in turn depends on the nature of the incident radiation, typically X-ray, electron or neutron. The common feature of all form factors is that they involve a Fourier transform of a spatial density distribution of the scattering object from real space to momentum space (also known as reciprocal space). For an object with spatial density distribution, formula_0, the form factor, formula_1, is defined as
formula_2,
where formula_0 is the spatial density of the scatterer about its center of mass (formula_3), and formula_4 is the momentum transfer. As a result of the nature of the Fourier transform, the broader the distribution of the scatterer formula_5 in real space formula_6, the narrower the distribution of formula_7 in formula_4; i.e., the faster the decay of the form factor.
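For a spherically symmetric density the defining Fourier integral reduces to a one-dimensional radial integral, f(Q) = 4π ∫ ρ(r) r² sin(Qr)/(Qr) dr, and the reciprocity between widths in real space and in Q can be checked numerically. A minimal sketch with two Gaussian densities of different, arbitrarily chosen widths:

```python
import numpy as np

def radial_form_factor(rho, q_values, r_max=20.0, n=4000):
    """f(Q) = 4*pi * integral of rho(r) * r^2 * sin(Q r)/(Q r) dr (spherical symmetry)."""
    r = np.linspace(1e-6, r_max, n)
    dr = r[1] - r[0]
    results = []
    for q in q_values:
        integrand = rho(r) * r**2 * np.sinc(q * r / np.pi)   # np.sinc(x) = sin(pi x)/(pi x)
        results.append(4.0 * np.pi * np.sum(integrand) * dr)
    return np.array(results)

q = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
compact  = lambda r: np.exp(-r**2 / (2 * 0.5**2))   # narrow density in real space
extended = lambda r: np.exp(-r**2 / (2 * 2.0**2))   # broad density in real space

f_compact = radial_form_factor(compact, q)
f_extended = radial_form_factor(extended, q)
print("f(Q)/f(0), compact scatterer :", np.round(f_compact / f_compact[0], 4))
print("f(Q)/f(0), extended scatterer:", np.round(f_extended / f_extended[0], 4))
# The spatially broader density gives the faster-decaying form factor, as stated above.
```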
For crystals, atomic form factors are used to calculate the structure factor for a given Bragg peak of a crystal.
X-ray form factors.
X-rays are scattered by the electron cloud of the atom and hence the scattering amplitude of X-rays increases with the atomic number, formula_8, of the atoms in a sample. As a result, X-rays are not very sensitive to light atoms, such as hydrogen and helium, and there is very little contrast between elements adjacent to each other in the periodic table. For X-ray scattering, formula_9 in the above equation is the electron charge density about the nucleus, and the form factor is the Fourier transform of this quantity. The assumption of a spherical distribution is usually good enough for X-ray crystallography.
In general the X-ray form factor is complex but the imaginary components only become large near an absorption edge. Anomalous X-ray scattering makes use of the variation of the form factor close to an absorption edge to vary the scattering power of specific atoms in the sample by changing the energy of the incident x-rays hence enabling the extraction of more detailed structural information.
Atomic form factor patterns are often represented as a function of the magnitude of the "scattering vector" formula_10. Herein formula_11 is the wavenumber and formula_12 is the scattering angle between the incident x-ray beam and the detector measuring the scattered intensity, while formula_13 is the wavelength of the X-rays. One interpretation of the scattering vector is that it is the "resolution" or "yardstick" with which the sample is observed. In the range of scattering vectors between formula_14 Å−1, the atomic form factor is well approximated by a sum of Gaussians of the form
formula_15
where the values of ai, bi, and c are tabulated in standard crystallographic tables.
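Once the tabulated coefficients for a given element are inserted, the parameterization is straightforward to evaluate. A minimal sketch (the coefficients below are placeholders chosen only to show the structure, not real tabulated values):

```python
import numpy as np

def form_factor(q, a, b, c):
    """Sum-of-Gaussians fit: f(Q) = sum_i a_i * exp(-b_i * (Q / (4*pi))**2) + c."""
    q = np.asarray(q, dtype=float)
    s2 = (q / (4.0 * np.pi)) ** 2
    return sum(ai * np.exp(-bi * s2) for ai, bi in zip(a, b)) + c

# Placeholder coefficients (four Gaussians plus a constant), NOT real tabulated values.
a = [2.0, 1.5, 1.0, 0.5]
b = [12.0, 4.0, 0.5, 40.0]
c = 0.3

for q in (0.0, 5.0, 15.0, 25.0):          # scattering vector magnitude in inverse angstroms
    print(f"Q = {q:4.1f} 1/A  ->  f(Q) = {form_factor(q, a, b, c):.3f}")
```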
Electron form factor.
The relevant distribution, formula_9 is the potential distribution of the atom, and the electron form factor is the Fourier transform of this. The electron form factors are normally calculated from X-ray form factors using the Mott–Bethe formula. This formula takes into account both elastic electron-cloud scattering and elastic nuclear scattering.
Neutron form factor.
There are two distinct scattering interactions of neutrons by atoms. Both are used in the investigation of the structure and dynamics of condensed matter: they are termed nuclear (sometimes also termed chemical) and magnetic scattering.
Nuclear scattering.
Nuclear scattering of the free neutron by the nucleus is mediated by the strong nuclear force. The wavelength of thermal (several ångströms) and cold neutrons (up to tens of Angstroms) typically used for such investigations is 4-5 orders of magnitude larger than the dimension of the nucleus (femtometres). The free neutrons in a beam travel in a plane wave; for those that undergo nuclear scattering from a nucleus, the nucleus acts as a secondary point source, and radiates scattered neutrons as a spherical wave. (Although a quantum phenomenon, this can be visualized in simple classical terms by the Huygens–Fresnel principle.) In this case formula_9 is the spatial density distribution of the nucleus, which is an infinitesimal point (delta function), with respect to the neutron wavelength. The delta function forms part of the Fermi pseudopotential, by which the free neutron and the nuclei interact. The Fourier transform of a delta function is unity; therefore, it is commonly said that neutrons "do not have a form factor;" i.e., the scattered amplitude, formula_16, is independent of formula_17.
Since the interaction is nuclear, each isotope has a different scattering amplitude. This Fourier transform is scaled by the amplitude of the spherical wave, which has dimensions of length. Hence, the amplitude of scattering that characterizes the interaction of a neutron with a given isotope is termed the scattering length, "b". Neutron scattering lengths vary erratically between neighbouring elements in the periodic table and between isotopes of the same element. They may only be determined experimentally, since the theory of nuclear forces is not adequate to calculate or predict "b" from other properties of the nucleus.
Magnetic scattering.
Although neutral, neutrons also have a nuclear spin. They are a composite fermion and hence have an associated magnetic moment. In neutron scattering from condensed matter, magnetic scattering refers to the interaction of this moment with the magnetic moments arising from unpaired electrons in the outer orbitals of certain atoms. It is the spatial distribution of these unpaired electrons about the nucleus that is formula_9 for magnetic scattering.
Since these orbitals are typically of a comparable size to the wavelength of the free neutrons, the resulting form factor resembles that of the X-ray form factor. However, this neutron-magnetic scattering is only from the outer electrons, rather than being heavily weighted by the core electrons, which is the case for X-ray scattering. Hence, in strong contrast to the case for nuclear scattering, the scattering object for magnetic scattering is far from a point source; it is still more diffuse than the effective size of the source for X-ray scattering, and the resulting Fourier transform (the magnetic form factor) decays more rapidly than the X-ray form factor. Also, in contrast to nuclear scattering, the magnetic form factor is not isotope dependent, but is dependent on the oxidation state of the atom.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rho(\\mathbf{r})"
},
{
"math_id": 1,
"text": "f(\\mathbf{Q})"
},
{
"math_id": 2,
"text": "f(\\mathbf{Q})=\\int \\rho(\\mathbf{r}) e^{i\\mathbf{Q} \\cdot \\mathbf{r}}\\mathrm{d}^3\\mathbf{r}"
},
{
"math_id": 3,
"text": "\\mathbf{r}=0"
},
{
"math_id": 4,
"text": "\\mathbf{Q}"
},
{
"math_id": 5,
"text": "\\rho"
},
{
"math_id": 6,
"text": "\\mathbf{r}"
},
{
"math_id": 7,
"text": "f"
},
{
"math_id": 8,
"text": "Z"
},
{
"math_id": 9,
"text": "\\rho(r)"
},
{
"math_id": 10,
"text": "Q = 2k \\sin (\\theta )"
},
{
"math_id": 11,
"text": "k = 2\\pi / \\lambda"
},
{
"math_id": 12,
"text": "2\\theta"
},
{
"math_id": 13,
"text": "\\lambda"
},
{
"math_id": 14,
"text": "0 < Q < 25"
},
{
"math_id": 15,
"text": "\nf(Q) = \\sum_{i=1}^{4} a_i \\exp\\left(-b_i \\left(\\frac{Q}{4 \\pi}\\right)^2\\right) + c\n"
},
{
"math_id": 16,
"text": "b"
},
{
"math_id": 17,
"text": "Q"
}
] | https://en.wikipedia.org/wiki?curid=11253084 |
1125342 | Continuous-time Markov chain | Probability concept
A continuous-time Markov chain (CTMC) is a continuous stochastic process in which, for each state, the process will change state according to an exponential random variable and then move to a different state as specified by the probabilities of a stochastic matrix. An equivalent formulation describes the process as changing state according to the least value of a set of exponential random variables, one for each possible state it can move to, with the parameters determined by the current state.
An example of a CTMC with three states formula_0 is as follows: the process makes a transition after the amount of time specified by the holding time—an exponential random variable formula_1, where "i" is its current state. Each random variable is independent and such that formula_2, formula_3 and formula_4. When a transition is to be made, the process moves according to the jump chain, a discrete-time Markov chain with stochastic matrix:
formula_5
Equivalently, by the property of competing exponentials, this CTMC changes state from state "i" according to the minimum of two random variables, which are independent and such that formula_6 for formula_7 where the parameters are given by the Q-matrix formula_8
formula_9
Each non-diagonal entry formula_10 can be computed as the probability that the jump chain moves from state "i" to state "j", divided by the expected holding time of state "i". The diagonal entries are chosen so that each row sums to 0.
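This construction, exponential holding times alternating with draws from the jump chain, can be simulated directly. A minimal sketch for the three-state example above (the simulation horizon is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)

holding_rate = np.array([6.0, 12.0, 18.0])      # Exp(rate) holding time in each state
jump = np.array([[0,   1/2, 1/2],               # jump-chain transition probabilities
                 [1/3, 0,   2/3],
                 [5/6, 1/6, 0  ]])

def simulate(t_end, state=0):
    """Simulate the CTMC up to time t_end; return the total time spent in each state."""
    time_in_state = np.zeros(3)
    t = 0.0
    while t < t_end:
        dwell = rng.exponential(1.0 / holding_rate[state])
        dwell = min(dwell, t_end - t)
        time_in_state[state] += dwell
        t += dwell
        state = rng.choice(3, p=jump[state])
    return time_in_state

occupancy = simulate(t_end=10_000.0)
print("long-run fraction of time in each state:", np.round(occupancy / occupancy.sum(), 3))
```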
A CTMC satisfies the Markov property, that its behavior depends only on its current state and not on its past behavior, due to the memorylessness of the exponential distribution and of discrete-time Markov chains.
Definition.
Let formula_11 be a probability space, let formula_12 be a countable nonempty set, and let formula_13 (formula_14 for "time"). Equip formula_12 with the discrete metric, so that we can make sense of right continuity of functions formula_15. A continuous-time Markov chain is defined by:
Note that the row sums of formula_17 are 0: formula_27 or more succinctly, formula_28. This situation contrasts with the situation for discrete-time Markov chains, where all row sums of the transition matrix equal unity.
Now, let formula_29 such that formula_30 is formula_31-measurable. There are three equivalent ways to define formula_32 being "Markov with initial distribution formula_16 and rate matrix formula_17": via transition probabilities, via the jump chain and holding times, or via an infinitesimal description; these are developed in the subsections below.
As a prelude to a transition-probability definition, we first motivate the definition of a "regular" rate matrix. We will use the transition-rate matrix formula_17 to specify the dynamics of the Markov chain by means of generating a collection of "transition matrices" formula_33 on formula_12 (formula_34), via the following theorem.
<templatestyles src="Math_theorem/styles.css" />
Theorem: Existence of solution to Kolmogorov backward equations. — There exists formula_35 such that for all formula_36 the entry formula_37 is differentiable and formula_38 satisfies the Kolmogorov backward equations:
We say formula_17 is "regular" to mean that we do have uniqueness for the above system, i.e., that there exists exactly one solution. We say formula_17 is "irregular" to mean formula_17 is not regular. If formula_12 is finite, then there is exactly one solution, namely formula_39 and hence formula_17 is regular. Otherwise, formula_12 is infinite, and there exist irregular transition-rate matrices on formula_12. If formula_17 is regular, then for the unique solution formula_38, for each formula_40, formula_33 will be a stochastic matrix. We will assume formula_17 is regular from the beginning of the following subsection up through the end of this section, even though it is conventional to not include this assumption. (Note for the expert: thus we are not defining continuous-time Markov chains in general but only "non-explosive" continuous-time Markov chains.)
Transition-probability definition.
Let formula_38 be the (unique) solution of the system (0). (Uniqueness guaranteed by our assumption that formula_17 is regular.) We say formula_32 is "Markov with initial distribution formula_16 and rate matrix formula_17" to mean: for any nonnegative integer formula_41, for all formula_42 such that formula_43 for all formula_44
Using induction and the fact that formula_45 we can show the equivalence of the above statement containing (1) and the following statement: for all formula_46 and for any nonnegative integer formula_41, for all formula_42 such that formula_43 for all formula_47 such that formula_48 (it follows that formula_49),
It follows from continuity of the functions formula_37 (formula_36) that the trajectory formula_50 is almost surely right continuous (with respect to the discrete metric on formula_12): there exists a formula_51-null set formula_52 such that formula_53.
Jump-chain/holding-time definition.
Sequences associated to a right-continuous function.
Let formula_54 be right continuous (when we equip formula_12 with the discrete metric). Define
formula_55
let
formula_56
be the "holding-time sequence" associated to formula_57, choose formula_58 and let
formula_59
be "the "state sequence"" associated to formula_57.
Definition of the jump matrix Π.
The jump matrix formula_60, alternatively written formula_61 if we wish to emphasize the dependence on formula_17, is the matrix
formula_62
where formula_63 is the "zero set" of the function formula_64
Jump-chain/holding-time property.
We say formula_32 is "Markov with initial distribution formula_16 and rate matrix formula_17" to mean: the trajectories of formula_32 are almost surely right continuous, let formula_57 be a modification of formula_32 to have (everywhere) right-continuous trajectories, formula_65 almost surely (note to experts: this condition says formula_32 is non-explosive), the state sequence formula_66 is a discrete-time Markov chain with initial distribution formula_16 (jump-chain property) and transition matrix formula_67 and formula_68 (holding-time property).
Infinitesimal definition.
We say formula_32 is "Markov with initial distribution formula_16 and rate matrix formula_17" to mean: for all formula_20 formula_69 and for all formula_70, for all formula_71 and for small strictly positive values of formula_72, the following holds for all formula_40 such that formula_73:
formula_74,
where the term formula_75 is formula_76 if formula_77 and otherwise formula_78, and the little-o term formula_79 depends in a certain way on formula_80.
The above equation shows that formula_10 can be seen as measuring how quickly the transition from formula_81 to formula_82 happens for formula_7, and how quickly the transition away from formula_81 happens for formula_83.
Properties.
Communicating classes.
Communicating classes, transience, recurrence and positive and null recurrence are defined identically as for discrete-time Markov chains.
Transient behaviour.
Write P("t") for the matrix with entries "p""ij" = P("X""t" = "j" | "X"0 = "i"). Then the matrix P("t") satisfies the forward equation, a first-order differential equation
formula_84,
where the prime denotes differentiation with respect to "t". The solution to this equation is given by a matrix exponential
formula_85.
Consider a simple case such as a CTMC on the state space {1,2}. The general "Q" matrix for such a process is the following 2 × 2 matrix with "α","β" > 0
formula_86
The above relation for forward matrix can be solved explicitly in this case to give
formula_87.
Computing direct solutions is complicated in larger matrices. The fact that "Q" is the generator for a semigroup of matrices
formula_88
is used.
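The closed-form two-state solution can be checked against a generic numerical evaluation of the matrix exponential. A minimal sketch (the rate values α, β and the time t are arbitrary):

```python
import numpy as np
from scipy.linalg import expm

alpha, beta, t = 2.0, 1.0, 0.7          # arbitrary rates and time
Q = np.array([[-alpha, alpha],
              [ beta, -beta]])

P_numeric = expm(t * Q)                  # P(t) = exp(tQ)

s = alpha + beta
decay = np.exp(-s * t)
P_closed = np.array([[beta/s + alpha/s*decay, alpha/s - alpha/s*decay],
                     [beta/s - beta/s*decay,  alpha/s + beta/s*decay]])

print(np.allclose(P_numeric, P_closed))  # True: both give the same transition matrix
```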
Stationary distribution.
The stationary distribution for an irreducible recurrent CTMC is the probability distribution to which the process converges for large values of "t". Observe that for the two-state process considered earlier with P("t") given by
formula_87,
as "t" → ∞ the distribution tends to
formula_89.
Observe that each row has the same distribution as this does not depend on starting state. The row vector "π" may be found by solving
formula_90
with the constraint
formula_91.
Example 1.
The image to the right describes a continuous-time Markov chain with state-space {Bull market, Bear market, Stagnant market} and transition-rate matrix
formula_92
The stationary distribution of this chain can be found by solving formula_93, subject to the constraint that elements must sum to 1 to obtain
formula_94
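This stationary distribution can be reproduced numerically by solving formula_93 together with the normalization constraint. A minimal sketch:

```python
import numpy as np

Q = np.array([[-0.025, 0.02,  0.005],
              [ 0.3,  -0.5,   0.2  ],
              [ 0.02,  0.4,  -0.42 ]])

# Solve pi Q = 0 with sum(pi) = 1: stack the normalization condition onto the system.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(pi, 3))   # approximately [0.885 0.071 0.044]
```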
Example 2.
The image to the right describes a continuous-time Markov chain modeling Pac-Man with state-space {1,2,3,4,5,6,7,8,9}. The player controls Pac-Man through a maze, eating pac-dots. Meanwhile, he is being hunted by ghosts. For convenience, the maze shall be a small 3x3-grid and the ghosts move randomly in horizontal and vertical directions. A secret passageway between states 2 and 8 can be used in both directions. Entries with rate zero are removed in the following transition-rate matrix:
formula_95
This Markov chain is irreducible, because the ghosts can fly from every state to every state in a finite amount of time. Due to the secret passageway, the Markov chain is also aperiodic, because the ghosts can move from any state to any state both in an even and in an odd number of state transitions. Therefore, a unique stationary distribution exists and can be found by solving formula_93, subject to the constraint that elements must sum to 1. The solution of this linear equation subject to the constraint is formula_96
The central state and the border states 2 and 8 of the adjacent secret passageway are visited most and the corner states are visited least.
Time reversal.
For a CTMC "X""t", the time-reversed process is defined to be formula_97. By Kelly's lemma this process has the same stationary distribution as the forward process.
A chain is said to be reversible if the reversed process is the same as the forward process. Kolmogorov's criterion states that the necessary and sufficient condition for a process to be reversible is that the product of transition rates around a closed loop must be the same in both directions.
Embedded Markov chain.
One method of finding the stationary probability distribution, π, of an ergodic continuous-time Markov chain, "Q", is by first finding its embedded Markov chain (EMC). Strictly speaking, the EMC is a regular discrete-time Markov chain. Each element of the one-step transition probability matrix of the EMC, "S", is denoted by "s""ij", and represents the conditional probability of transitioning from state "i" into state "j". These conditional probabilities may be found by
formula_98
From this, "S" may be written as
formula_99
where "I" is the identity matrix and diag("Q") is the diagonal matrix formed by selecting the main diagonal from the matrix "Q" and setting all other elements to zero.
To find the stationary probability distribution vector, we must next find formula_100 such that
formula_101
with formula_100 being a row vector, such that all elements in formula_100 are greater than 0 and formula_102 = 1. From this, π may be found as
formula_103
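The embedded-chain route can also be carried out numerically. A minimal sketch, reusing the transition-rate matrix from Example 1 above:

```python
import numpy as np

Q = np.array([[-0.025, 0.02,  0.005],
              [ 0.3,  -0.5,   0.2  ],
              [ 0.02,  0.4,  -0.42 ]])

# One-step transition matrix of the embedded Markov chain: S = I - diag(Q)^{-1} Q.
S = np.eye(3) - np.linalg.inv(np.diag(np.diag(Q))) @ Q

# phi is the stationary row vector of S (left eigenvector for eigenvalue 1, normalized).
eigvals, eigvecs = np.linalg.eig(S.T)
phi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
phi = phi / phi.sum()

# Convert back to the stationary distribution of the continuous-time chain.
w = -phi @ np.linalg.inv(np.diag(np.diag(Q)))
pi = w / np.abs(w).sum()
print(np.round(pi, 3))   # matches the result of solving pi Q = 0 directly
```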
Another discrete-time process that may be derived from a continuous-time Markov chain is a δ-skeleton—the (discrete-time) Markov chain formed by observing "X"("t") at intervals of δ units of time. The random variables "X"(0), "X"(δ), "X"(2δ), ... give the sequence of states visited by the δ-skeleton.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" />
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\{0,1, 2\\}"
},
{
"math_id": 1,
"text": "E_i"
},
{
"math_id": 2,
"text": "E_0\\sim \\text{Exp}(6)"
},
{
"math_id": 3,
"text": "E_1\\sim \\text{Exp}(12)"
},
{
"math_id": 4,
"text": "E_2\\sim \\text{Exp}(18)"
},
{
"math_id": 5,
"text": "\\begin{bmatrix}\n0 & \\frac{1}{2} & \\frac{1}{2} \\\\\n\\frac{1}{3} & 0 & \\frac{2}{3} \\\\\n\\frac{5}{6} & \\frac{1}{6} & 0\n\\end{bmatrix}."
},
{
"math_id": 6,
"text": "E_{i,j}\\sim \\text{Exp}(q_{i,j})"
},
{
"math_id": 7,
"text": "i\\neq j"
},
{
"math_id": 8,
"text": "Q=(q_{i,j})"
},
{
"math_id": 9,
"text": "\\begin{bmatrix}\n-6 & 3 & 3 \\\\\n4 & -12 & 8 \\\\\n15 & 3 & -18\n\\end{bmatrix}."
},
{
"math_id": 10,
"text": "q_{i,j}"
},
{
"math_id": 11,
"text": "(\\Omega,{\\cal A},\\Pr)"
},
{
"math_id": 12,
"text": "S"
},
{
"math_id": 13,
"text": "T=\\mathbb R_{\\ge0}"
},
{
"math_id": 14,
"text": "T"
},
{
"math_id": 15,
"text": "\\mathbb R_{\\ge0}\\to S"
},
{
"math_id": 16,
"text": "\\lambda"
},
{
"math_id": 17,
"text": "Q"
},
{
"math_id": 18,
"text": "Q:S^2\\to\\mathbb R"
},
{
"math_id": 19,
"text": "i,j\\in S, 0\\le q_{i,j}"
},
{
"math_id": 20,
"text": "i\\in S,"
},
{
"math_id": 21,
"text": "\\sum_{j\\in I:j\\ne i}q_{i,j}=-q_{i,i}."
},
{
"math_id": 22,
"text": "I"
},
{
"math_id": 23,
"text": "+\\infty"
},
{
"math_id": 24,
"text": "-q_{i,i}"
},
{
"math_id": 25,
"text": "Q:S^2\\to\\mathbb R\\cup\\{-\\infty\\}"
},
{
"math_id": 26,
"text": "\\operatorname{range}Q\\subseteq\\mathbb R"
},
{
"math_id": 27,
"text": "\\forall i\\in S~\\sum_{j\\in I}q_{i,j}=0,"
},
{
"math_id": 28,
"text": "Q\\cdot1=0"
},
{
"math_id": 29,
"text": "X:T\\to S^\\Omega"
},
{
"math_id": 30,
"text": "\\forall t\\in T~X(t)"
},
{
"math_id": 31,
"text": "({\\cal A},{\\cal P}(S))"
},
{
"math_id": 32,
"text": "X"
},
{
"math_id": 33,
"text": "P(t)"
},
{
"math_id": 34,
"text": "t\\in\\mathbb R_{\\ge0}"
},
{
"math_id": 35,
"text": "P\\in([0,1]^{S\\times S})^T"
},
{
"math_id": 36,
"text": "i,j\\in S"
},
{
"math_id": 37,
"text": "(P(t)_{i,j})_{t\\in T}"
},
{
"math_id": 38,
"text": "P"
},
{
"math_id": 39,
"text": "P=(e^{tQ})_{t\\in T},"
},
{
"math_id": 40,
"text": "t\\in T"
},
{
"math_id": 41,
"text": "n\\ge0"
},
{
"math_id": 42,
"text": "t_0,\\dots,t_{n+1}\\in T"
},
{
"math_id": 43,
"text": "t_0<\\dots<t_{n+1},"
},
{
"math_id": 44,
"text": "i_0,\\dots,i_{n+1}\\in I,"
},
{
"math_id": 45,
"text": "\\forall A,B\\in{\\cal A}~~\\Pr(B)\\ne0\\rightarrow\\Pr(A\\cap B)=\\Pr(A\\mid B)\\Pr(B),"
},
{
"math_id": 46,
"text": "i\\in I,~\\Pr(X_0=i)=\\lambda_i"
},
{
"math_id": 47,
"text": "i_0,\\dots,i_{n+1}\\in I"
},
{
"math_id": 48,
"text": "0<\\Pr(X_0=i_0,\\dots,X_{t_n}=i_n)"
},
{
"math_id": 49,
"text": "0<\\Pr(X_{t_n}=i_n)"
},
{
"math_id": 50,
"text": "(X_t(\\omega))_{t\\in T}"
},
{
"math_id": 51,
"text": "\\Pr"
},
{
"math_id": 52,
"text": "N"
},
{
"math_id": 53,
"text": "\\{\\omega\\in\\Omega:(X_t(\\omega))_{t\\in T}\\text{ is right continuous}\\}\\subseteq N"
},
{
"math_id": 54,
"text": "f:T\\to S"
},
{
"math_id": 55,
"text": "h=h(f)=(\\inf\\{u\\in(0,+\\infty):f(t+u)\\ne f(t)\\})_{t\\in T})\\cup\\{+\\infty,0\\},"
},
{
"math_id": 56,
"text": "H=H(f)=(h^{\\circ n}0)_{n\\in\\mathbb Z_{\\ge0}}"
},
{
"math_id": 57,
"text": "f"
},
{
"math_id": 58,
"text": "s\\in S,"
},
{
"math_id": 59,
"text": "y=y(f)=\\left(\\begin{cases}f(\\sum_{k\\in n}H_k)&\\text{ if }\\sum_{k\\in n}H_k<+\\infty,\\\\s&\\text{ else}\\end{cases}\\right)_{n\\in\\omega}"
},
{
"math_id": 60,
"text": "\\Pi"
},
{
"math_id": 61,
"text": "\\Pi(Q)"
},
{
"math_id": 62,
"text": "\\Pi=([i=j])_{i\\in Z,j\\in S}\\cup\\bigcup_{i\\in S\\setminus Z}(\\{((i,j),(-Q_{i,i})^{-1}Q_{i,j}):j\\in S\\setminus\\{i\\}\\}\\cup\\{((i,i),0)\\}),"
},
{
"math_id": 63,
"text": "Z=Z(Q)=\\{k\\in S:q_{k,k}=0\\}"
},
{
"math_id": 64,
"text": "(q_{k,k})_{k\\in S}."
},
{
"math_id": 65,
"text": "\\sum_{n\\in\\mathbb Z_{\\ge0}}H(f(\\omega))_n=+\\infty"
},
{
"math_id": 66,
"text": "y(f(\\omega))"
},
{
"math_id": 67,
"text": "\\Pi(Q),"
},
{
"math_id": 68,
"text": "\\forall n\\in\\mathbb Z_{\\ge0}~\\forall B\\in{\\cal B}(\\mathbb R_{\\ge0})~\\Pr(H_n(f)\\in B)=\\operatorname{Exp}(-q_{Y_n,Y_n})(B)"
},
{
"math_id": 69,
"text": "\\Pr(X(0)=i)=\\lambda_i"
},
{
"math_id": 70,
"text": "i,j"
},
{
"math_id": 71,
"text": "t"
},
{
"math_id": 72,
"text": "h"
},
{
"math_id": 73,
"text": "0<\\Pr(X(t)=i)"
},
{
"math_id": 74,
"text": "\\Pr(X(t+h) = j \\mid X(t) = i) = [i=j] + q_{i,j}h + o(h)"
},
{
"math_id": 75,
"text": "[i=j]"
},
{
"math_id": 76,
"text": "1"
},
{
"math_id": 77,
"text": "i=j"
},
{
"math_id": 78,
"text": "0"
},
{
"math_id": 79,
"text": "o(h)"
},
{
"math_id": 80,
"text": "i,j,h"
},
{
"math_id": 81,
"text": "i"
},
{
"math_id": 82,
"text": "j"
},
{
"math_id": 83,
"text": "i= j"
},
{
"math_id": 84,
"text": "P'(t) = P(t) Q"
},
{
"math_id": 85,
"text": "P(t) = e^{tQ}"
},
{
"math_id": 86,
"text": "Q = \\begin{pmatrix} -\\alpha & \\alpha \\\\ \\beta & -\\beta \\end{pmatrix}."
},
{
"math_id": 87,
"text": "P(t) = \\begin{pmatrix}\n\\frac{\\beta}{\\alpha+\\beta} + \\frac{\\alpha}{\\alpha+\\beta}e^{-(\\alpha+\\beta)t} &\n\\frac{\\alpha}{\\alpha+\\beta} - \\frac{\\alpha}{\\alpha+\\beta}e^{-(\\alpha+\\beta)t} \\\\\n\\frac{\\beta}{\\alpha+\\beta} - \\frac{\\beta}{\\alpha+\\beta}e^{-(\\alpha+\\beta)t} &\n\\frac{\\alpha}{\\alpha+\\beta} + \\frac{\\beta}{\\alpha+\\beta}e^{-(\\alpha+\\beta)t}\n\\end{pmatrix}"
},
{
"math_id": 88,
"text": "P(t+s) = e^{(t+s)Q} = e^{tQ} e^{sQ} = P(t) P(s)"
},
{
"math_id": 89,
"text": "P_\\pi = \\begin{pmatrix}\n\\frac{\\beta}{\\alpha+\\beta} &\n\\frac{\\alpha}{\\alpha+\\beta} \\\\\n\\frac{\\beta}{\\alpha+\\beta} &\n\\frac{\\alpha}{\\alpha+\\beta}\n\\end{pmatrix}"
},
{
"math_id": 90,
"text": "\\pi Q = 0"
},
{
"math_id": 91,
"text": "\\sum_{i \\in S} \\pi_i = 1"
},
{
"math_id": 92,
"text": "Q=\\begin{pmatrix}\n-0.025 & 0.02 & 0.005 \\\\\n0.3 & -0.5 & 0.2 \\\\\n0.02 & 0.4 & -0.42\n\\end{pmatrix}."
},
{
"math_id": 93,
"text": "\\pi Q=0"
},
{
"math_id": 94,
"text": "\\pi = \\begin{pmatrix}0.885 & 0.071 & 0.044 \\end{pmatrix}."
},
{
"math_id": 95,
"text": "Q=\\begin{pmatrix} \n-1 & \\frac{1}{2} & & \\frac{1}{2}\\\\\n\\frac{1}{4} & -1 & \\frac{1}{4} & & \\frac{1}{4}&&&\\frac{1}{4}\\\\\n& \\frac{1}{2} & -1 & & & \\frac{1}{2}\\\\\n\\frac{1}{3} & & & -1 & \\frac{1}{3} & & \\frac{1}{3}\\\\\n& \\frac{1}{4} & & \\frac{1}{4} & -1 & \\frac{1}{4} & & \\frac{1}{4}\\\\\n& & \\frac{1}{3} & & \\frac{1}{3} & -1& & & \\frac{1}{3}\\\\\n& & & \\frac{1}{2} & & & -1 & \\frac{1}{2}\\\\\n& \\frac{1}{4} & && \\frac{1}{4} & & \\frac{1}{4} & -1 & \\frac{1}{4}\\\\\n& & & & & \\frac{1}{2} & & \\frac{1}{2} & -1\\end{pmatrix}"
},
{
"math_id": 96,
"text": "\\pi=(7.7,15.4,7.7,11.5,15.4,11.5,7.7,15.4,7.7)\\%."
},
{
"math_id": 97,
"text": "\\hat X_t = X_{T-t}"
},
{
"math_id": 98,
"text": "\ns_{ij} = \\begin{cases}\n\\frac{q_{ij}}{\\sum_{k \\neq i} q_{ik}} & \\text{if } i \\neq j,\\\\\n0 & \\text{otherwise}.\n\\end{cases}\n"
},
{
"math_id": 99,
"text": "S = I - \\left( \\operatorname{diag}(Q) \\right)^{-1} Q"
},
{
"math_id": 100,
"text": "\\varphi"
},
{
"math_id": 101,
"text": "\\varphi S = \\varphi, "
},
{
"math_id": 102,
"text": "\\|\\varphi\\|_1"
},
{
"math_id": 103,
"text": "\\pi = {-\\varphi (\\operatorname{diag}(Q))^{-1} \\over \\left\\| \\varphi (\\operatorname{diag}(Q))^{-1} \\right\\|_1}."
}
] | https://en.wikipedia.org/wiki?curid=1125342 |
11254442 | List of women in mathematics | This is a list of women who have made noteworthy contributions to or achievements in mathematics. These include mathematical research, mathematics education, the history and philosophy of mathematics, public outreach, and mathematics contests.
<templatestyles src="Hlist/styles.css"/>
* See also* References* External links
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p"
}
] | https://en.wikipedia.org/wiki?curid=11254442 |
11254565 | Synchronization (alternating current) | Process in power generation
In an alternating current (AC) electric power system, synchronization is the process of matching the frequency, phase and voltage of a generator or other source to an electrical grid in order to transfer power. If two unconnected segments of a grid are to be connected to each other, they cannot safely exchange AC power until they are synchronized.
A direct current (DC) generator can be connected to a power network simply by adjusting its open-circuit terminal voltage to match the network's voltage, by either adjusting its speed or its field excitation. The exact engine speed is not critical. However, an AC generator must additionally match its timing (frequency and phase) to the network voltage, which requires both speed and excitation to be systematically controlled for synchronization. This extra complexity was one of the arguments against AC operation during the war of currents in the 1880s. In modern grids, synchronization of generators is carried out by automatic systems.
Conditions.
There are five conditions that must be met before the synchronization process takes place. The source (generator or sub-network) must have equal root-mean-square voltage, frequency, phase sequence, phase angle, and waveform to that of the system to which it is being synchronized.
Waveform and phase sequence are fixed by the construction of the generator and its connections to the system. During installation of a generator, careful checks are made to ensure the generator terminals and all control wiring is correct so that the order of phases (phase sequence) matches the system. Connecting a generator with the wrong phase sequence will result in large, possibly damaging, currents as the system voltages are opposite to those of the generator terminal voltages.
The voltage, frequency and phase angle must be controlled each time a generator is to be connected to a grid.
Generating units for connection to a power grid have an inherent droop speed control that allows them to share load proportional to their rating. Some generator units, especially in isolated systems, operate with isochronous frequency control, maintaining constant system frequency independent of load.
Process.
The sequence of events is similar for manual or automatic synchronization. The generator is brought up to approximate synchronous speed by supplying more energy to its shaft - for example, opening the valves on a steam turbine, opening the gates on a hydraulic turbine, or increasing the fuel rack setting on a diesel engine. The field of the generator is energized and the voltage at the terminals of the generator is observed and compared with the system. The voltage magnitude must be the same as the system voltage.
If one machine is slightly out of phase it will pull into step with the others but, if the phase difference is large, there will be heavy cross-currents which can cause voltage fluctuations and, in extreme cases, damage to the machines.
Synchronizing lamps.
Formerly, three incandescent light bulbs were connected between the generator terminals and the system terminals (or more generally, to the terminals of instrument transformers connected to generator and system). As the generator speed changes, the lights will flicker at the beat frequency proportional to the difference between generator frequency and system frequency. When the voltage at the generator is opposite to the system voltage (either ahead or behind in phase), the lamps will be bright. When the voltage at the generator matches the system voltage, the lights will be dark. At that instant, the circuit breaker connecting the generator to the system may be closed and the generator will then stay in synchronism with the system.
An alternative technique used a similar scheme to the above except that the connections of two of the lamps were swapped either at the generator terminals or the system terminals. In this scheme, when the generator was in synchronism with the system, one lamp would be dark, but the two with the swapped connections would be of equal brightness. Synchronizing on "dark" lamps was preferred over "bright" lamps because it was easier to discern the minimum brightness. However, a lamp burnout could give a false-positive for successful synchronization.
Synchroscope.
Another manual method of synchronization relies on observing an instrument called a "synchroscope", which displays the relative frequencies of system and generator. The pointer of the synchroscope will indicate "fast" or "slow" speed of the generator with respect to the system. To minimize the transient current when the generator circuit breaker is closed, usual practice is to initiate the close as the needle slowly approaches the in-phase point. An error of a few electrical degrees between system and generator will result in a momentary inrush and abrupt speed change of the generator.
Synchronizing relays.
Synchronizing relays allow unattended synchronization of a machine with a system. Today these are digital microprocessor instruments, but in the past electromechanical relay systems were applied. A synchronizing relay is useful to remove human reaction time from the process, or when a human is not available such as at a remote controlled generating plant. Synchroscopes or lamps are sometimes installed as a supplement to automatic relays, for possible manual use or for monitoring the generating unit.
Sometimes as a precaution against out-of-step connection of a machine to a system, a "synchro check" relay is installed that prevents closing the generator circuit breaker unless the machine is within a few electrical degrees of being in-phase with the system. Synchro check relays are also applied in places where several sources of supply may be connected and where it is important that out-of-step sources are not accidentally paralleled.
Synchronous operation.
While the generator is synchronized, the frequency of the system will change depending on load and the average characteristics of all the generating units connected to the grid. Large changes in system frequency can cause the generator to fall out of synchronism with the system. Protective devices on the generator will operate to disconnect it automatically.
Synchronous speeds.
Synchronous speeds for synchronous motors and alternators depend on the number of poles on the machine and the frequency of the supply.
The relationship between the supply frequency, "f", the number of poles, "p", and the synchronous speed (speed of rotating field), "ns" is given by:
formula_0.
In the following table, frequencies are shown in hertz (Hz) and rotational speeds in revolutions per minute (rpm):
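The relation is easy to evaluate for common grid frequencies and pole counts. A minimal sketch (the listed pole counts are just typical machine configurations):

```python
# Synchronous speed in rpm: n_s = 120 * f / p.
def synchronous_speed_rpm(frequency_hz: float, poles: int) -> float:
    return 120.0 * frequency_hz / poles

for poles in (2, 4, 6, 8, 12):
    speeds = [synchronous_speed_rpm(f, poles) for f in (50.0, 60.0)]
    print(f"{poles:2d} poles:  {speeds[0]:6.0f} rpm at 50 Hz,  {speeds[1]:6.0f} rpm at 60 Hz")
```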
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f = \\frac{pn_s}{120\\ }"
}
] | https://en.wikipedia.org/wiki?curid=11254565 |
1125883 | Markov decision process | Mathematical model
Markov decision process (MDP), also called a stochastic dynamic program or stochastic control problem, is a model for sequential decision making when outcomes are uncertain.
Originating from operations research in the 1950s, MDPs have since gained recognition in a variety of fields, including robotics, ecology, economics, healthcare, telecommunications and artificial intelligence.
Background.
The name "Markov decision process" comes from its connection to Markov chains, a mathematical concept developed by the Russian mathematician Andrey Markov. A Markov chain is a sequence of states where the probability of moving to the next state depends only on the current state and not on the sequence of events that preceded it. This property is known as the Markov property or memorylessness.
An MDP builds on the idea of a Markov chain but adds the element of decision-making. In an MDP, an agent makes decisions that influence the transitions between states. Each decision (or action) taken in a particular state leads to a probability distribution over the next possible states, similar to a Markov chain. However, unlike a simple Markov chain, in an MDP, the agent can actively choose actions to optimize a certain objective (usually maximizing some cumulative reward).
The "Markov" in "Markov decision process" refers to the underlying structure of state transitions that still follow the Markov property. The process is called a "decision process" because it involves making decisions that influence these state transitions, extending the concept of a Markov chain into the realm of decision-making under uncertainty.
Definition.
A Markov decision process is a 4-tuple formula_0, where: formula_1 is a set of states called the "state space"; formula_2 is a set of actions called the "action space" (alternatively, formula_3 is the set of actions available from state formula_4); formula_5 is the probability that action formula_6 in state formula_4 at time formula_7 will lead to state formula_8 at time formula_9, defined formally by formula_10 for every measurable set formula_11 (for a discrete state space this reduces to formula_12, while for formula_13 the integral is usually taken with respect to the Lebesgue measure); and formula_14 is the immediate reward (or expected immediate reward) received after transitioning from state formula_4 to state formula_8 due to action formula_6.
A policy function formula_15 is a (potentially probabilistic) mapping from state space (formula_1) to action space (formula_2).
Optimization objective.
The goal in a Markov decision process is to find a good "policy" for the decision maker: a function formula_15 that specifies the action formula_16 that the decision maker will choose when in state formula_4. Once a Markov decision process is combined with a policy in this way, this fixes the action for each state and the resulting combination behaves like a Markov chain (since the action chosen in state formula_4 is completely determined by formula_16).
The objective is to choose a policy formula_15 that will maximize some cumulative function of the random rewards, typically the expected discounted sum over a potentially infinite horizon:
formula_17 (where we choose formula_18, i.e. actions given by the policy). And the expectation is taken over formula_19
where formula_20 is the discount factor satisfying formula_21, which is usually close to formula_22 (for example, formula_23 for some discount rate formula_24). A lower discount factor motivates the decision maker to favor taking actions early, rather than postpone them indefinitely.
Another possible, but strictly related, objective that is commonly used is the formula_25step return. This time, instead of using a discount factor formula_20, the agent is interested only in the first formula_26 steps of the process, with each reward having the same weight.
formula_27 (where we choose formula_18, i.e. actions given by the policy). And the expectation is taken over formula_19
where formula_28 is the time horizon. Compared to the previous objective, the latter one is more used in Learning Theory.
A policy that maximizes the function above is called an "<dfn>optimal policy</dfn>" and is usually denoted formula_29. A particular MDP may have multiple distinct optimal policies. Because of the Markov property, it can be shown that the optimal policy is a function of the current state, as assumed above.
Simulator models.
In many cases, it is difficult to represent the transition probability distributions, formula_5, explicitly. In such cases, a simulator can be used to model the MDP implicitly by providing samples from the transition distributions. One common form of implicit MDP model is an episodic environment simulator that can be started from an initial state and yields a subsequent state and reward every time it receives an action input. In this manner, trajectories of states, actions, and rewards, often called "<dfn>episodes</dfn>" may be produced.
Another form of simulator is a "<dfn>generative model</dfn>", a single step simulator that can generate samples of the next state and reward given any state and action. (Note that this is a different meaning from the term generative model in the context of statistical classification.) In algorithms that are expressed using pseudocode, formula_30 is often used to represent a generative model. For example, the expression formula_31 might denote the action of sampling from the generative model where formula_4 and formula_6 are the current state and action, and formula_8 and formula_24 are the new state and reward. Compared to an episodic simulator, a generative model has the advantage that it can yield data from any state, not only those encountered in a trajectory.
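For a small finite MDP specified by explicit arrays, a generative model in this sense is simply a function that samples a successor state and returns it together with the associated reward. A minimal sketch (the transition probabilities and rewards are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-state, 2-action MDP: P[a, s, s'] and R[a, s, s'].
P = np.array([[[0.7, 0.3], [0.4, 0.6]],
              [[0.2, 0.8], [0.9, 0.1]]])
R = np.array([[[ 1.0, 0.0], [0.0,  2.0]],
              [[-1.0, 3.0], [0.5,  0.0]]])

def G(s, a):
    """Generative model: sample s' ~ P_a(s, .) and return (s', R_a(s, s'))."""
    s_next = rng.choice(2, p=P[a, s])
    return s_next, R[a, s, s_next]

s_next, r = G(s=0, a=1)
print("sampled next state:", s_next, " reward:", r)
```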
These model classes form a hierarchy of information content: an explicit model trivially yields a generative model through sampling from the distributions, and repeated application of a generative model yields an episodic simulator. In the opposite direction, it is only possible to learn approximate models through regression. The type of model available for a particular MDP plays a significant role in determining which solution algorithms are appropriate. For example, the dynamic programming algorithms described in the next section require an explicit model, and Monte Carlo tree search requires a generative model (or an episodic simulator that can be copied at any state), whereas most reinforcement learning algorithms require only an episodic simulator.
Example.
An example of MDP is the Pole-Balancing model, which comes from classic control theory.
In this example, we have the state space formula_32, consisting of the pole angle, its angular velocity, and the position and velocity of the cart, and the action space formula_33, corresponding to pushing the cart to the left or to the right; a reward is earned for every time step in which the pole remains balanced.
Algorithms.
Solutions for MDPs with finite state and action spaces may be found through a variety of methods such as dynamic programming. The algorithms in this section apply to MDPs with finite state and action spaces and explicitly given transition probabilities and reward functions, but the basic concepts may be extended to handle other problem classes, for example using function approximation.
The standard family of algorithms to calculate optimal policies for finite state and action MDPs requires storage for two arrays indexed by state: "value" formula_34, which contains real values, and "policy" formula_15, which contains actions. At the end of the algorithm, formula_15 will contain the solution and formula_35 will contain the discounted sum of the rewards to be earned (on average) by following that solution from state formula_4.
The algorithm has two steps, (1) a value update and (2) a policy update, which are repeated in some order for all the states until no further changes take place. Both recursively update a new estimation of the optimal policy and state value using an older estimation of those values.
formula_36
formula_37
Their order depends on the variant of the algorithm; one can also do them for all states at once or state by state, and more often to some states than others. As long as no state is permanently excluded from either of the steps, the algorithm will eventually arrive at the correct solution.
Notable variants.
Value iteration.
In value iteration, which is also called backward induction,
the formula_15 function is not used; instead, the value of formula_16 is calculated within formula_35 whenever it is needed. Substituting the calculation of formula_16 into the calculation of formula_35 gives the combined step:
formula_38
where formula_39 is the iteration number. Value iteration starts at formula_40 and formula_41 as a guess of the value function. It then iterates, repeatedly computing formula_42 for all states formula_4, until formula_34 converges with the left-hand side equal to the right-hand side (which is the "Bellman equation" for this problem). Lloyd Shapley's 1953 paper on stochastic games included as a special case the value iteration method for MDPs, but this was recognized only later on.
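A minimal sketch of value iteration for a small MDP given as explicit arrays (all numerical values are arbitrary illustrative choices):

```python
import numpy as np

# Illustrative 2-state, 2-action MDP: P[a, s, s'], R[a, s, s'], discount gamma.
P = np.array([[[0.7, 0.3], [0.4, 0.6]],
              [[0.2, 0.8], [0.9, 0.1]]])
R = np.array([[[ 1.0, 0.0], [0.0,  2.0]],
              [[-1.0, 3.0], [0.5,  0.0]]])
gamma = 0.9

V = np.zeros(2)                                # initial guess V_0
for _ in range(1000):
    # Q[a, s] = sum_{s'} P_a(s, s') * (R_a(s, s') + gamma * V(s'))
    Q = np.einsum('asn,asn->as', P, R + gamma * V[None, None, :])
    V_new = Q.max(axis=0)                      # combined value-update step
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

policy = Q.argmax(axis=0)
print("optimal values :", np.round(V, 3))
print("optimal policy :", policy)
```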
Policy iteration.
In policy iteration, step one is performed once, and then step two is performed once, then both are repeated until the policy converges. Then step one is again performed once and so on. (Policy iteration was invented by Howard to optimize Sears catalogue mailing, which he had been optimizing using value iteration.)
Instead of repeating step two to convergence, it may be formulated and solved as a set of linear equations. These equations are merely obtained by making formula_43 in the step two equation. Thus, repeating step two to convergence can be interpreted as solving the linear equations by relaxation.
This variant has the advantage that there is a definite stopping condition: when the array formula_15 does not change in the course of applying step 1 to all states, the algorithm is completed.
Policy iteration is usually slower than value iteration for a large number of possible states.
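A minimal sketch of policy iteration for the same kind of explicitly given MDP, with the evaluation step solved as a linear system rather than by repeated sweeps (the arrays repeat the illustrative values used above):

```python
import numpy as np

P = np.array([[[0.7, 0.3], [0.4, 0.6]],
              [[0.2, 0.8], [0.9, 0.1]]])
R = np.array([[[ 1.0, 0.0], [0.0,  2.0]],
              [[-1.0, 3.0], [0.5,  0.0]]])
gamma, n_states = 0.9, 2

policy = np.zeros(n_states, dtype=int)          # arbitrary initial policy
while True:
    # Policy evaluation: solve (I - gamma * P_pi) V = r_pi as a linear system.
    P_pi = P[policy, np.arange(n_states)]                        # shape (s, s')
    r_pi = np.sum(P_pi * R[policy, np.arange(n_states)], axis=1)
    V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)

    # Policy improvement: act greedily with respect to the current V.
    Q = np.einsum('asn,asn->as', P, R + gamma * V[None, None, :])
    new_policy = Q.argmax(axis=0)
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy

print("optimal policy :", policy)
print("optimal values :", np.round(V, 3))
```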
Modified policy iteration.
In modified policy iteration, step one is performed once, and then step two is repeated several times. Then step one is again performed once and so on.
Prioritized sweeping.
In this variant, the steps are preferentially applied to states which are in some way important – whether based on the algorithm (there were large changes in formula_34 or formula_15 around those states recently) or based on use (those states are near the starting state, or otherwise of interest to the person or program using the algorithm).
Computational complexity.
Algorithms for finding optimal policies with time complexity polynomial in the size of the problem representation exist for finite MDPs. Thus, decision problems based on MDPs are in computational complexity class P. However, due to the curse of dimensionality, the size of the problem representation is often exponential in the number of state and action variables, limiting exact solution techniques to problems that have a compact representation. In practice, online planning techniques such as Monte Carlo tree search can find useful solutions in larger problems, and, in theory, it is possible to construct online planning algorithms that can find an arbitrarily near-optimal policy with no computational complexity dependence on the size of the state space.
Extensions and generalizations.
A Markov decision process is a stochastic game with only one player.
Partial observability.
The solution above assumes that the state formula_4 is known when action is to be taken; otherwise formula_16 cannot be calculated. When this assumption is not true, the problem is called a partially observable Markov decision process or POMDP.
Constrained Markov decision processes.
Constrained Markov decision processes (CMDPS) are extensions to Markov decision process (MDPs). There are three fundamental differences between MDPs and CMDPs.
The method of Lagrange multipliers applies to CMDPs.
Many Lagrangian-based algorithms have been developed.
There are a number of applications for CMDPs. It has recently been used in motion planning scenarios in robotics.
Continuous-time Markov decision process.
In discrete-time Markov Decision Processes, decisions are made at discrete time intervals. However, for continuous-time Markov decision processes, decisions can be made at any time the decision maker chooses. In comparison to discrete-time Markov decision processes, continuous-time Markov decision processes can better model the decision-making process for a system that has continuous dynamics, i.e., the system dynamics is defined by ordinary differential equations (ODEs). These kinds of applications arise in queueing systems, epidemic processes, and population processes.
Like the discrete-time Markov decision processes, in continuous-time Markov decision processes the agent aims at finding the optimal "policy" which could maximize the expected cumulated reward. The only difference with the standard case stays in the fact that, due to the continuous nature of the time variable, the sum is replaced by an integral:
formula_44
where formula_45
Discrete space: Linear programming formulation.
If the state space and action space are finite, we can use linear programming to find the optimal policy, which was one of the earliest approaches applied. Here we only consider the ergodic model, which means our continuous-time MDP becomes an ergodic continuous-time Markov chain under a stationary policy. Under this assumption, although the decision maker can make a decision at any time in the current state, there is no benefit in taking more than one action. It is better to take an action only at the time when the system is transitioning from the current state to another state. Under some conditions (for details, see Corollary 3.14 of "Continuous-Time Markov Decision Processes"), if our optimal value function formula_46 is independent of state formula_39, we will have the following inequality:
formula_47
If there exists a function formula_48, then formula_49 will be the smallest formula_50 satisfying the above equation. In order to find formula_49, we could use the following linear programming model:
formula_51
formula_52
formula_53 is a feasible solution to the D-LP if formula_53 is nonnegative and satisfies the constraints in the D-LP problem. A feasible solution formula_54 to the D-LP is said to be an optimal solution if
formula_55
for all feasible solution formula_53 to the D-LP. Once we have found the optimal solution formula_54, we can use it to establish the optimal policies.
Continuous space: Hamilton–Jacobi–Bellman equation.
In continuous-time MDP, if the state space and action space are continuous, the optimal criterion could be found by solving the Hamilton–Jacobi–Bellman (HJB) partial differential equation. In order to discuss the HJB equation, we need to reformulate our problem:
formula_56
formula_57 is the terminal reward function, formula_58 is the system state vector, formula_59 is the system control vector we try to find. formula_60 shows how the state vector changes over time. The Hamilton–Jacobi–Bellman equation is as follows:
formula_61
We could solve the equation to find the optimal control formula_59, which could give us the optimal value function formula_46
Reinforcement learning.
Reinforcement learning is an interdisciplinary area of machine learning and optimal control that has, as main objective, finding an approximately optimal policy for MDPs where transition probabilities and rewards are unknown.
Reinforcement learning can solve Markov decision processes without explicit specification of the transition probabilities which are instead needed to perform policy iteration. In this setting, transition probabilities and rewards must be learned from experience, i.e. by letting an agent interact with the MDP for a given number of steps. Both on a theoretical and on a practical level, effort is put in maximizing the sample efficiency, i.e. minimizing the number of samples needed to learn a policy whose performance is formula_62close to the optimal one (due to the stochastic nature of the process, learning the optimal policy with a finite number of samples is, in general, impossible).
Reinforcement Learning for discrete MDPs.
For the purpose of this section, it is useful to define a further function, which corresponds to taking the action formula_6 and then continuing optimally (or according to whatever policy one currently has):
formula_63
While this function is also unknown, experience during learning is based on formula_64 pairs (together with the outcome formula_8; that is, "I was in state formula_4 and I tried doing formula_6 and formula_8 happened"). Thus, one has an array formula_65 and uses experience to update it directly. This is known as Q-learning.
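A minimal sketch of the resulting tabular Q-learning update, with the environment accessed only through sampled transitions (the MDP arrays and the learning-rate and exploration parameters are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative environment, known to the learner only through sampling.
P = np.array([[[0.7, 0.3], [0.4, 0.6]],
              [[0.2, 0.8], [0.9, 0.1]]])
R = np.array([[[ 1.0, 0.0], [0.0,  2.0]],
              [[-1.0, 3.0], [0.5,  0.0]]])

def step(s, a):
    s_next = rng.choice(2, p=P[a, s])
    return s_next, R[a, s, s_next]

gamma, alpha, epsilon = 0.9, 0.1, 0.1
Q = np.zeros((2, 2))                         # Q[s, a]
s = 0
for _ in range(50_000):
    a = rng.integers(2) if rng.random() < epsilon else int(Q[s].argmax())
    s_next, r = step(s, a)
    # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a').
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

print("learned Q-values:\n", np.round(Q, 2))
print("greedy policy   :", Q.argmax(axis=1))
```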
Other scopes.
Learning automata.
Another application of MDPs in machine learning theory is called learning automata. This is also one type of reinforcement learning if the environment is stochastic. Learning automata were first surveyed in detail by Narendra and Thathachar (1974), who described them explicitly as finite-state automata. Similar to reinforcement learning, a learning automata algorithm also has the advantage of solving the problem when the probabilities or rewards are unknown. The difference between learning automata and Q-learning is that the former technique omits the memory of Q-values, but updates the action probability directly to find the learning result. Learning automata is a learning scheme with a rigorous proof of convergence.
In learning automata theory, a stochastic automaton consists of:
The states of such an automaton correspond to the states of a "discrete-state discrete-parameter Markov process". At each time step "t" = 0,1,2,3..., the automaton reads an input from its environment, updates P("t") to P("t" + 1) by "A", randomly chooses a successor state according to the probabilities P("t" + 1) and outputs the corresponding action. The automaton's environment, in turn, reads the action and sends the next input to the automaton.
Category theoretic interpretation.
Other than the rewards, a Markov decision process formula_66 can be understood in terms of Category theory. Namely, let formula_67 denote the free monoid with generating set "A". Let Dist denote the Kleisli category of the Giry monad. Then a functor formula_68 encodes both the set "S" of states and the probability function "P".
In this way, Markov decision processes could be generalized from monoids (categories with one object) to arbitrary categories. One can call the result formula_69 a "context-dependent Markov decision process", because moving from one object to another in formula_70 changes the set of available actions and the set of possible states.
Alternative notations.
The terminology and notation for MDPs are not entirely settled. There are two main streams — one focuses on maximization problems from contexts like economics, using the terms action, reward, value, and calling the discount factor β or γ, while the other focuses on minimization problems from engineering and navigation, using the terms control, cost, cost-to-go, and calling the discount factor α. In addition, the notation for the transition probability varies.
In addition, transition probability is sometimes written formula_71, formula_72 or, rarely, formula_73
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(S, A, P_a, R_a)"
},
{
"math_id": 1,
"text": "S"
},
{
"math_id": 2,
"text": "A"
},
{
"math_id": 3,
"text": "A_s"
},
{
"math_id": 4,
"text": "s"
},
{
"math_id": 5,
"text": "P_a(s, s')"
},
{
"math_id": 6,
"text": "a"
},
{
"math_id": 7,
"text": "t"
},
{
"math_id": 8,
"text": "s'"
},
{
"math_id": 9,
"text": "t+1"
},
{
"math_id": 10,
"text": "\\Pr(s_{t+1}\\in S' \\mid s_t = s, a_t=a)=\\int_{S'} P_a(s, s')ds',"
},
{
"math_id": 11,
"text": "S'\\subseteq S"
},
{
"math_id": 12,
"text": "P_a(s, s')= \\Pr(s_{t+1}=s' \\mid s_t = s, a_t=a)"
},
{
"math_id": 13,
"text": "S\\subseteq \\mathbb R^d"
},
{
"math_id": 14,
"text": "R_a(s, s')"
},
{
"math_id": 15,
"text": "\\pi"
},
{
"math_id": 16,
"text": "\\pi(s)"
},
{
"math_id": 17,
"text": "E\\left[\\sum^{\\infty}_{t=0} {\\gamma^t R_{a_t} (s_t, s_{t+1})}\\right] "
},
{
"math_id": 18,
"text": "a_t = \\pi(s_t)"
},
{
"math_id": 19,
"text": "s_{t+1} \\sim P_{a_t}(s_t,s_{t+1})"
},
{
"math_id": 20,
"text": "\\ \\gamma \\ "
},
{
"math_id": 21,
"text": "0 \\le\\ \\gamma\\ \\le\\ 1"
},
{
"math_id": 22,
"text": "1"
},
{
"math_id": 23,
"text": " \\gamma = 1/(1+r) "
},
{
"math_id": 24,
"text": "r"
},
{
"math_id": 25,
"text": "H-"
},
{
"math_id": 26,
"text": "H"
},
{
"math_id": 27,
"text": "E\\left[\\sum^{H-1}_{t=0} {R_{a_t} (s_t, s_{t+1})}\\right] "
},
{
"math_id": 28,
"text": "\\ H \\ "
},
{
"math_id": 29,
"text": "\\pi^*"
},
{
"math_id": 30,
"text": "G"
},
{
"math_id": 31,
"text": "s', r \\gets G(s, a)"
},
{
"math_id": 32,
"text": "(\\theta,\\dot \\theta, x, \\dot x )\\subset \\mathbb R^4"
},
{
"math_id": 33,
"text": "\\{-1,1\\}"
},
{
"math_id": 34,
"text": "V"
},
{
"math_id": 35,
"text": "V(s)"
},
{
"math_id": 36,
"text": " V(s) := \\sum_{s'} P_{\\pi(s)} (s,s') \\left( R_{\\pi(s)} (s,s') + \\gamma V(s') \\right) "
},
{
"math_id": 37,
"text": " \\pi(s) := \\operatorname{argmax}_a \\left\\{ \\sum_{s'} P_{a}(s, s') \\left( R_{a}(s,s') + \\gamma V(s') \\right) \\right\\} "
},
{
"math_id": 38,
"text": " V_{i+1}(s) := \\max_a \\left\\{ \\sum_{s'} P_a(s,s') \\left( R_a(s,s') + \\gamma V_i(s') \\right) \\right\\}, "
},
{
"math_id": 39,
"text": "i"
},
{
"math_id": 40,
"text": "i = 0"
},
{
"math_id": 41,
"text": "V_0"
},
{
"math_id": 42,
"text": "V_{i+1}"
},
{
"math_id": 43,
"text": "s = s'"
},
{
"math_id": 44,
"text": "\\max \\operatorname{E}_\\pi\\left[\\left. \\int_0^\\infty\\gamma^t r(s(t),\\pi(s(t))) \\, dt \\;\\right| s_0 \\right]"
},
{
"math_id": 45,
"text": "0\\leq\\gamma< 1."
},
{
"math_id": 46,
"text": "V^*"
},
{
"math_id": 47,
"text": "g\\geq R(i,a)+\\sum_{j\\in S}q(j\\mid i,a)h(j) \\quad \\forall i \\in S \\text{ and } a \\in A(i)"
},
{
"math_id": 48,
"text": "h"
},
{
"math_id": 49,
"text": "\\bar V^*"
},
{
"math_id": 50,
"text": "g"
},
{
"math_id": 51,
"text": "\n\\begin{align}\n\\text{Minimize}\\quad &g\\\\\n\\text{s.t} \\quad & g-\\sum_{j \\in S}q(j\\mid i,a)h(j)\\geq R(i,a)\\,\\,\n\\forall i\\in S,\\,a\\in A(i)\n\\end{align}\n"
},
{
"math_id": 52,
"text": "\n\\begin{align}\n\\text{Maximize} &\\sum_{i\\in S}\\sum_{a\\in A(i)}R(i,a)y(i,a)\\\\\n\\text{s.t.} &\\sum_{i\\in S}\\sum_{a\\in A(i)} q(j\\mid i,a)y(i,a)=0 \\quad\n\\forall j\\in S,\\\\\n& \\sum_{i\\in S}\\sum_{a\\in A(i)}y(i,a)=1,\\\\\n& y(i,a)\\geq 0 \\qquad \\forall a\\in A(i) \\text{ and } \\forall i\\in S\n\\end{align}\n"
},
{
"math_id": 53,
"text": "y(i,a)"
},
{
"math_id": 54,
"text": "y^*(i,a)"
},
{
"math_id": 55,
"text": "\n\\begin{align}\n\\sum_{i\\in S}\\sum_{a\\in A(i)}R(i,a)y^*(i,a) \\geq \\sum_{i\\in S} \\sum_{a\\in A(i)} R(i,a) y(i,a)\n\\end{align}\n"
},
{
"math_id": 56,
"text": "\\begin{align} V(s(0),0)= {} & \\max_{a(t)=\\pi(s(t))}\\int_0^T r(s(t),a(t)) \\, dt+D[s(T)] \\\\\n\\text{s.t.}\\quad & \\frac{dx(t)}{dt}=f[t,s(t),a(t)]\n\\end{align}\n"
},
{
"math_id": 57,
"text": "D(\\cdot)"
},
{
"math_id": 58,
"text": "s(t)"
},
{
"math_id": 59,
"text": "a(t)"
},
{
"math_id": 60,
"text": "f(\\cdot)"
},
{
"math_id": 61,
"text": "0=\\max_u ( r(t,s,a) +\\frac{\\partial V(t,s)}{\\partial x}f(t,s,a)) "
},
{
"math_id": 62,
"text": "\\varepsilon-"
},
{
"math_id": 63,
"text": "\\ Q(s,a) = \\sum_{s'} P_a(s,s') (R_a(s,s') + \\gamma V(s')).\\ "
},
{
"math_id": 64,
"text": "(s, a)"
},
{
"math_id": 65,
"text": "Q"
},
{
"math_id": 66,
"text": "(S,A,P)"
},
{
"math_id": 67,
"text": "\\mathcal{A}"
},
{
"math_id": 68,
"text": "\\mathcal{A}\\to\\mathbf{Dist}"
},
{
"math_id": 69,
"text": "(\\mathcal{C}, F:\\mathcal{C}\\to \\mathbf{Dist})"
},
{
"math_id": 70,
"text": "\\mathcal{C}"
},
{
"math_id": 71,
"text": "\\Pr(s,a,s')"
},
{
"math_id": 72,
"text": "\\Pr(s'\\mid s,a)"
},
{
"math_id": 73,
"text": "p_{s's}(a)."
}
] | https://en.wikipedia.org/wiki?curid=1125883 |
1126162 | Chlorophyll a | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Chlorophyll "a" is a specific form of chlorophyll used in oxygenic photosynthesis. It absorbs most energy from wavelengths of violet-blue and orange-red light, and it is a poor absorber of green and near-green portions of the spectrum. Chlorophyll does not reflect light but chlorophyll-containing tissues appear green because green light is diffusively reflected by structures like cell walls. This photosynthetic pigment is essential for photosynthesis in eukaryotes, cyanobacteria and prochlorophytes because of its role as primary electron donor in the electron transport chain. Chlorophyll "a" also transfers resonance energy in the antenna complex, ending in the reaction center where specific chlorophylls P680 and P700 are located.
Distribution of chlorophyll "a".
Chlorophyll "a" is essential for most photosynthetic organisms to release chemical energy but is not the only pigment that can be used for photosynthesis. All oxygenic photosynthetic organisms use chlorophyll "a", but differ in accessory pigments like chlorophyll "b". Chlorophyll "a" can also be found in very small quantities in the green sulfur bacteria, an anaerobic photoautotroph. These organisms use bacteriochlorophyll and some chlorophyll "a" but do not produce oxygen. Anoxygenic photosynthesis is the term applied to this process, unlike oxygenic photosynthesis where oxygen is produced during the light reactions of photosynthesis.
Molecular structure.
The molecular structure of chlorophyll "a" consists of a chlorin ring, whose four nitrogen atoms surround a central magnesium atom, and has several other attached side chains and a hydrocarbon tail formed by a phytol ester.
Chlorin ring.
Chlorophyll "a" contains a magnesium ion encased in a large ring structure known as a chlorin. The chlorin ring is a heterocyclic compound derived from pyrrole. Four nitrogen atoms from the chlorin surround and bind the magnesium atom. The magnesium center uniquely defines the structure as a chlorophyll molecule. The porphyrin ring of bacteriochlorophyll is saturated, and lacking alternation of double and single bonds causing variation in absorption of light.
Side chains.
Side chains are attached to the chlorin ring of the various chlorophyll molecules. Different side chains characterize each type of chlorophyll molecule and alter its absorption spectrum of light.
For instance, the only difference between chlorophyll "a" and chlorophyll "b" is that chlorophyll "b" has an aldehyde instead of a methyl group at the C-7 position.
Hydrocarbon tail.
The phytol ester of chlorophyll "a" (R in the diagram) is a long hydrophobic tail which anchors the molecule to other hydrophobic proteins in the thylakoid membrane of the chloroplast. Once detached from the porphyrin ring, phytol becomes the precursor of two biomarkers, pristane and phytane, which are important in the study of geochemistry and the determination of petroleum sources.
Biosynthesis.
The Chlorophyll "a" biosynthetic pathway utilizes a variety of enzymes. In most plants, chlorophyll is derived from glutamate and is synthesised along a branched pathway that is shared with heme and siroheme.
The initial steps incorporate glutamic acid into 5-aminolevulinic acid (ALA); two molecules of ALA are then reduced to porphobilinogen (PBG), and four molecules of PBG are coupled, forming protoporphyrin IX.
Chlorophyll synthase is the enzyme that completes the biosynthesis of chlorophyll "a" by catalysing the reaction EC 2.5.1.62
chlorophyllide "a" + phytyl diphosphate formula_0 chlorophyll "a" + diphosphate
This forms an ester of the carboxylic acid group in chlorophyllide "a" with the 20-carbon diterpene alcohol phytol.
Reactions of photosynthesis.
Absorbance of light.
Light spectrum.
Chlorophyll "a" absorbs light within the violet, blue and red wavelengths. Accessory photosynthetic pigments broaden the spectrum of light absorbed, increasing the range of wavelengths that can be used in photosynthesis. The addition of chlorophyll "b" next to chlorophyll "a" extends the absorption spectrum. In low light conditions, plants produce a greater ratio of chlorophyll "b" to chlorophyll "a" molecules, increasing photosynthetic yield.
Light gathering.
Absorption of light by photosynthetic pigments converts photons into chemical energy. Light energy radiating onto the chloroplast strikes the pigments in the thylakoid membrane and excites their electrons. Since the chlorophyll "a" molecules only capture certain wavelengths, organisms may use accessory pigments to capture a wider range of light energy. The captured light energy is then transferred from one pigment to the next as resonance energy, until it reaches the special chlorophyll "a" molecules in the reaction center. These special chlorophyll "a" molecules are located in both photosystem II and photosystem I. They are known as P680 for photosystem II and P700 for photosystem I. P680 and P700 are the primary electron donors to the electron transport chain. The two systems differ in their redox potentials for one-electron oxidation: the Em for P700 is approximately 500 mV, while the Em for P680 is approximately 1,100–1,200 mV.
Primary electron donation.
Chlorophyll "a" is very important in the energy phase of photosynthesis. Two electrons need to be passed to an electron acceptor for the process of photosynthesis to proceed. Within the reaction centers of both photosystems there are a pair of chlorophyll "a" molecules that pass electrons on to the transport chain through redox reactions.
Ocean.
The concentration of chlorophyll "a" is used as an index of phytoplankton biomass. In the ocean, phytoplankton all contain the chlorophyll pigment, which has a greenish color.
Phytoplankton are microscopic organisms that live in watery environments, and changes in the amount of phytoplankton indicate changes in the productivity of the ocean. Phytoplankton can be affected indirectly by climatic factors, such as changes in water temperatures and surface winds.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] | https://en.wikipedia.org/wiki?curid=1126162 |
11264182 | Storage effect | Ecological mechanism enabling species to coexist
The storage effect is a coexistence mechanism proposed in the ecological theory of species coexistence, which tries to explain how such a wide variety of similar species are able to coexist within the same ecological community or guild. The storage effect was originally proposed in the 1980s to explain coexistence in diverse communities of coral reef fish, however it has since been generalized to cover a variety of ecological communities. The theory proposes one way for multiple species to coexist: in a changing environment, no species can be the best under all conditions. Instead, each species must have a unique response to varying environmental conditions, and a way of buffering against the effects of bad years. The storage effect gets its name because each population "stores" the gains in good years or microhabitats (patches) to help it survive population losses in bad years or patches. One strength of this theory is that, unlike most coexistence mechanisms, the storage effect can be measured and quantified, with units of per-capita growth rate (offspring per adult per generation).
The storage effect can be caused by both temporal and spatial variation. The temporal storage effect (often referred to as simply "the storage effect") occurs when species benefit from changes in year-to-year environmental patterns, while the spatial storage effect occurs when species benefit from variation in microhabitats across a landscape.
The concept.
For the storage effect to operate, it requires variation (i.e. fluctuations) in the environment and thus can be termed a "fluctuation-dependent mechanism". This variation can come from a wide range of factors, including resource availability, temperature, and predation levels. However, for the storage effect to function, this variation must change the birth, survival, or recruitment rate of species from year to year (or patch to patch).
For competing species within the same community to coexist, they have to meet one fundamental requirement: the impact of competition from a species on itself must exceed its competitive impact on other species. In other words, intraspecific competition must exceed interspecific competition. For example, jackrabbits living in the same area compete for food and nesting grounds. Such competition within the same species is called intraspecific competition, which limits the growth of the species itself. Members from different species can also compete. For instance, jackrabbits and cottontail rabbits also compete for food and nesting grounds. Competition between different species is called interspecific competition, which limits the growth of other species. Stable coexistence occurs when any one species in the community limits its own growth more strongly than the growth of others.
The storage effect mixes three essential ingredients to assemble a community of competing species that fulfill the requirement. They are 1) correlation between the quality of an environment and the amount of competition experienced by a population in that environment (i.e. covariance between environment and competition), 2) differences in species response to the same environment (i.e. species-specific environmental responses), and 3) the ability of a population to diminish the impact of competition under worsening environment (i.e. buffered population growth). Each ingredient is described in detail below with an explanation why the combination of the three leads to species coexistence.
Covariance between environment and competition.
The growth of a population can be strongly influenced by the environment it experiences. An environment consists of not only physical elements such as resource abundance, temperature, and level of physical disturbance, but also biological elements such as the abundance of natural enemies and mutualists. Usually organisms reproduce more in a favorable environment (i.e. either during a good year, or within a good patch), build up their population densities, and lead themselves to a high level of competition due to this increasing crowding. Such a trend means that higher quality environments usually correlate with a higher strength of competition experienced by the organisms in those environments. In short, a better environment results in stronger competition. In statistics, such correlation means that there will be a non-zero covariance between the change of population density in response to the environment and that to the competition. That is why the first ingredient is called "covariance between environment and competition".
Species-specific environmental responses.
Covariance between environment and competition suggests that organisms experience the strongest competition under their optimal environmental conditions because their populations grow most rapidly in those conditions. In nature, we often find that different species from the same community respond to the same conditions in distinctive manners. For example, plant species have different preferred levels of light and water availability, which affect their germination and physical growth rates. Such differences in their response to the environment, which is called "species-specific environmental response," means no two species from a community will have the same best environment in a given year or a given patch. As a result, when a species is under its optimal environmental conditions and thus experiencing the strongest intraspecific competition, other species from the same community only experience the strongest interspecific competition coming from that species, but not the strongest intraspecific competition coming from themselves.
Buffered population growth.
A population can decline when environmental conditions worsen and when competition intensifies. If a species cannot limit the impact of competition in a hostile environment, its population will crash, and it will become locally extinct. Marvelously, in nature organisms are often able to slow down the rate of population decline in a hostile environment by alleviating the impact of competition. In doing so, they are able to set up a lower limit on the rate of their population decline. This phenomenon is called "buffered population growth", which occurs under a variety of situations. Under the temporal storage effect, it can be accomplished by the adults of a species having long life spans, which are relatively unaffected by environmental stressors. For example, an adult tree is unlikely to be killed by a few weeks of drought or a single night of freezing temperatures, whereas a seedling may not survive these conditions. Even if all seedlings are killed by bad environmental conditions, the long-lived adults are able to keep the overall population from crashing. Moreover, the adults usually adopt strategies such as dormancy or hibernation under a hostile environment, which make them less sensitive to competition, and allows them to buffer against the double blades of the hostile environment and competition from their rivals. For a different example, buffered population growth is attained by annual plants with a persistent seed bank. Thanks to these long-lived seeds, the entire population cannot be destroyed by a single bad year. Moreover, the seeds stay dormant under unfavorable environmental conditions, avoiding direct competition with rivals who are favored by the same environment, and thus diminish the impact of competition in bad years. There are some temporal situations in which buffered population growth is not expected to occur. Namely, when multiple generations do not overlap (such as Labord's chameleon) or when adults have a high mortality rate (such as many aquatic insects, or some populations of the Eastern Fence Lizard), buffered growth does not occur. Under the spatial storage effect, buffered population growth is generally automatic, because the effects of a detrimental microhabitat will only be experienced by individuals in that area, rather than the population as a whole.
Outcome.
The combined effect of (1) covariance between environment and competition, and (2) species-specific response to the environment decouple the strongest intraspecific and interspecific competition experienced by a species. Intraspecific competition is strongest when a species is favored by the environment, whereas interspecific competition is strongest when its rivals are favored. After this decoupling, buffered population growth limits the impact of interspecific competition when a species is not favored by the environment. As a consequence, the impact of intraspecific competition on the species favored by a particular environment exceeds that of the interspecific competition on species less favored by that environment. We see that the fundamental requirement for species coexistence is fulfilled and thus storage effect is able to maintain stable coexistence in a community of competing species.
For species to coexist in a community, all species must be able to recover from low density. Not surprisingly, being a coexistence mechanism, the storage effect helps species when they become rare. It does so by making the abundant species’ effect on itself greater than its effect on the rare species. The difference between species’ response to environmental conditions means that a rare species’ optimal environment is not the same as its competitors. Under these conditions, the rare species will experience low levels of interspecific competition. Because the rare species itself is rare, it will experience little impact from intraspecific competition as well, even at its highest possible levels of intraspecific competition. Free from the impact of competition, the rare species is able to make gains in these good years or patches. Moreover, thanks to the buffered population growth, the rare species is able to survive the bad years or patches by "storing" the gains from the good years/patches. As a result, the population of any rare species is able to grow due to the storage effect.
One natural outcome from the covariance between environment and competition is that species with very low densities will have more fluctuation in their recruitment rates than species with normal densities. This occurs because in good environments, species with high densities will often experience a large amount of crowding by members of the same species, thus limiting the benefits of good years/patches, and making good years/patches more similar to bad years/patches. Low-density species are rarely able to cause crowding, thus allowing significantly increased fitness in good years/patches. Since the fluctuation in recruitment rate is an indicator of covariance between environment and competition, and since species-specific environmental response and buffered population growth can normally be assumed in nature, finding much stronger fluctuation in recruitment rates in rare and low-density species provides a strong indication that the storage effect is operating within a community.
Mathematical formulation.
The storage effect is not a model for population growth (such as the Lotka–Volterra equation) itself, but is an effect that appears in non-additive models of population growth. Thus, the equations shown below will work for any arbitrary model of population growth, but will only be as accurate as the original model. The derivation below is taken from Chesson 1994. It is a derivation of the temporal storage effect, but is very similar to the spatial storage effect.
The fitness of an individual, as well as expected growth rate, can be measured in terms of the average number of offspring it will leave during its lifetime. This parameter, "r"("t"), is a function of both environmental factors, "e"("t"), and how much the organism must compete with other individuals (both of its own species, and different species), "c"("t"). Thus,
formula_0
where "g" is an arbitrary function for growth rate. Throughout the article, subscripts are occasionally used to represent functions of a particular species (e.g. "r" "j"("t") is the fitness of species "j"). It is assumed that there must be some values "e*" and "c*", such that "g"("e"*, "c"*) = 0, representing a zero-population growth equilibrium. These values need not be unique, but for every "e*", there is a unique "c*". For ease of calculation, standard parameters "E"("t") and "C"("t") are defined, such that
formula_1
formula_2
Both "E" and "C" represent the effect of deviations in environmental response from equilibrium. "E" represents the effect that varying environmental conditions (e.g. rainfall patterns, temperature, food availability, etc.) have on fitness, in the absence of abnormal competitive effects. For the storage effect to occur, the environmental response for each species must be unique (i.e. "E" "j"("t") ≠ "E" "i"("t") when "j" ≠ "i"). "C"("t") represents how much average fitness is lowered as a result of competition. For example, if there is more rain during a given year, "E"("t") will likely increase. If more plants begin to bloom, and thus compete for that rain, then "C"("t") will increase as well. Because "e*" and "c*" are not unique, "E"("t") and "C"("t") are not unique, and thus one should choose them as conveniently as possible. Under most conditions (see Chesson 1994), "r"("t") can be approximated as
formula_3
where
formula_4
"γ" represents the nonadditivity of growth rates. If "γ" = 0 (known as additivity) it means that the impact of competition on fitness does not change with the environment. If "γ" > 0 (superadditivity), it means that the adverse effects of competition during a bad year are relatively worse than during a good year. In other words, a population suffers more from competition in bad years than in good years. If "γ" < 0 (subadditivity, or buffered population growth), it means that the harm done by competition during a bad year is relatively minor when compared to a good year. In other words, the population is able to diminish the impact of competition as the environment worsens. As stated above, for the storage effect to contribute to species coexistence, we must have buffered population growth (i.e. it must be the case that "γ" < 0).
The long-term average of the above equation is
formula_5
which, under environments with sufficient variation relative to mean effects, can be approximated as
formula_6
For any effect to act as a coexistence mechanism, it must boost the average fitness of individuals when their species is at below-normal population density. Otherwise, a species at low density (known as an "invader") will continue to dwindle, and this negative feedback will cause its extinction. When a species is at equilibrium (known as a "resident"), its average long-term fitness must be 0. For a species to recover from low density, its average fitness must be greater than 0. For the remainder of the text, we refer to functions of the invader with the subscript "i", and to the resident with the subscript "r".
A long-term average growth rate of an invader is often written as
formula_7
where,
formula_8
formula_9
and, ΔI, the storage effect,
formula_10
where
formula_11
In this equation, "q""ir" tells us how much the competition experienced by "r" affects the competition experienced by "i".
The biological meaning of the storage effect is expressed in the mathematical form of ΔI. The first term of the expression is covariance between environment and competition (Cov("E" "C")), scaled by a factor representing buffered population growth ("γ"). The difference between the first term and the second term represents the difference in species responses to the environment between the invader and the sum of the residents, scaled by the effect each resident has on the invader ("q""ir").
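As a rough numerical illustration, the covariance terms in ΔI can be estimated directly from simulated time series. The series, the coefficients, and the values of "γ" and "q""ir" below are all made-up assumptions for the sketch; they are not taken from Chesson 1994.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10_000                                   # number of years simulated

# Illustrative environmental responses: invader i and a single resident r
# respond differently to the same underlying environment (made-up numbers).
env = rng.normal(size=T)
E_i = 0.5 * env + 0.5 * rng.normal(size=T)   # species-specific responses
E_r = 0.5 * env - 0.5 * rng.normal(size=T)

# Competition tracks the resident's good years (covariance between
# environment and competition); the rare invader contributes little.
C_i = C_r = 0.8 * E_r + 0.1 * rng.normal(size=T)

gamma_i = gamma_r = -1.0                     # buffered growth (subadditivity)
q_ir = 1.0                                   # effect of resident competition on the invader

def cov(a, b):
    return float(np.mean((a - a.mean()) * (b - b.mean())))

delta_I = gamma_i * cov(E_i, C_i) - q_ir * gamma_r * cov(E_r, C_r)
print(f"storage effect  dI = {delta_I:.3f}")   # positive: helps the invader recover
```

Because the invader's environmental response is only weakly correlated with the competition it experiences, while the resident's response is strongly correlated, the buffered (negative) "γ" turns that difference into a positive ΔI.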
Predation.
Recent work has extended what is known about the storage effect to include apparent competition (i.e., competition mediated through a shared predator).
These models showed that generalist predators can undermine the benefits of the storage effect that arise from competition. This occurs because generalist predators depress population levels by eating individuals. When this happens, there are fewer individuals competing for resources. As a result, relatively abundant species are less constrained by competition for resources in favorable years (i.e., the covariance between environment and competition is weakened), and therefore the storage effect from competition is weakened. This conclusion follows the general trend that the introduction of a generalist predator will often weaken other competition-based coexistence mechanisms, which can result in competitive exclusion.
Additionally, certain types of predators can produce a storage effect from predation. This effect has been shown for frequency-dependent predators, who are more likely to attack prey that are abundant, and for generalist pathogens, who cause outbreaks when prey are abundant. When prey species are especially numerous and active, frequency-dependent predators become more active, and pathogens outbreaks become more severe (i.e., there was a positive covariance between the environment and predation, analogous to the covariance between the environment and competition). As a result, abundant species are limited during their best years by high predation – an effect that is analogous to the storage effect from competition.
Empirical studies.
The first empirical study that tested the requirements of the storage effect was done by Pake and Venable, who looked at three desert annual plants. They experimentally manipulated density and water availability over a two-year period, and found that fitness and germination rates varied greatly from year to year, and over different environmental conditions. This shows that each species has a unique environmental response, and implied that likely there is a covariance between environment and competition. This, combined with the buffered population growth that is a product of a long-lived seed bank, showed that a temporal storage effect was probably an important factor in mediating coexistence. This study was also important, because it showed that variation in germination conditions could be a major factor promoting species coexistence.
The first attempt made at quantifying the temporal storage effect was by Carla Cáceres in 1997. Using 30 years of water-column data from Oneida Lake, New York, she studied the effect the storage effect had on two species of plankton (Daphnia galeata mendotae and "D. pulicaria"). These species of plankton lay diapausing eggs which, much like the seeds of annual plants, lie dormant in the sediment for many years before hatching. Cáceres found that the sizes of reproductive bouts were fairly uncorrelated between the two species. She also found that, in the absence of the storage effect, D. galeata mendotae would have gone extinct. She was unable to measure certain important parameters (such as the rate of egg predation), but found that her results were robust to a wide range of estimates.
The first test of the spatial storage effect was done by Sears and Chesson in the desert area east of Portal, Arizona. Using a common neighbor-removal experiment, they examined whether coexistence between two annual plants, Erodium cicutarium and Phacelia popeii, was due to the spatial storage effect or resource partitioning. The storage effect was quantified in terms of number of inflorescences (a proxy for fitness) instead of actual population growth rate. They found that E. cicutarium was able to outcompete P. popeii in many situations, and in the absence of the storage effect, would likely competitively exclude P. popeii. However, they found a very strong difference in the covariance between environment and competition, which showed that some of the most favorable areas for P. popeii (the rare species), were unfavorable to E. cicutarium (the common species). This suggests that P. popeii is able to avoid strong interspecific competition in some good patches, and that this may be enough to compensate for losses in areas favorable to E. cicutarium.
Colleen Kelly and colleagues have used congeneric species pairs to examine storage dynamics where species similarity is a natural outcome of relatedness and not dependent on researcher-based estimates. Initial studies were of 12 species of trees coexisting in a tropical deciduous forest at the Chamela Biological Station in Jalisco, Mexico. For each of the 12 species they examined age structure (calculated from size and species-specific growth rate), and found that recruitment of young trees varies from year to year. Grouping the species into 6 congeneric pairs, the locally rarer species of each pair consistently had a more irregular age distribution than the more common species. This finding strongly suggests that between closely competing tree species, the rarer species experiences stronger recruitment fluctuation than the commoner species. Such a difference in recruitment fluctuation, combined with evidence of greater competitive ability in the rarer species of each pair, indicates a difference in covariance between the environment and competition between rare and common species. Since species-specific environmental response and buffered population growth can be naturally assumed, their finding strongly suggests that the storage effect operates in this tropical deciduous forest so as to maintain the coexistence between different tree species. Further work with these species has shown that the storage dynamic is a pairwise, competitive relationship between congeneric species pairs, possibly extending as successively nested pairs within a genus.
Angert and colleagues demonstrated the temporal storage effect occurring in the desert annual plant community on Tumamoc Hill, Arizona. Previous studies had shown the annual plants in that community exhibited a trade-off between growth rate (a proxy for competitive ability) and water use efficiency (a proxy for drought tolerance). As a result, some plants grew better during wet years, while others grew better during dry years. This, combined with variation in germination rates, produced an overall community average storage effect of 0.103. In other words, the storage effect is expected to help the population of any species at low density to increase, on average, by 10.3% each generation, until it recovers from low density.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "r(t) = g(e(t), c(t)) \\, "
},
{
"math_id": 1,
"text": "E(t) = g(e(t), c^*) \\, "
},
{
"math_id": 2,
"text": "C(t) = -g(e^*, c(t)) \\, "
},
{
"math_id": 3,
"text": "r(t) = E(t) - C(t) + \\gamma E(t)C(t) \\, "
},
{
"math_id": 4,
"text": "\\gamma = \\frac{\\partial^2r}{\\partial E \\, \\partial C}"
},
{
"math_id": 5,
"text": "\\bar{r} = \\bar{E} - \\bar{C} + \\gamma (\\bar{E}\\bar{C} + \\text{Cov}(E(t),C(t))) \\, "
},
{
"math_id": 6,
"text": "\\bar{r} \\approx \\bar{E} - \\bar{C} + \\gamma \\text{Cov}(E(t),C(t))"
},
{
"math_id": 7,
"text": "\\bar{r_i} = \\Delta E - \\Delta C + \\Delta I \\,"
},
{
"math_id": 8,
"text": "\\Delta E = \\bar{E_i} - \\sum_{i\\neq r} q_{ir}\\bar{E_r} \\, "
},
{
"math_id": 9,
"text": "\\Delta C = \\bar{C_i} - \\sum_{i\\neq r} q_{ir}\\bar{C_r} \\, "
},
{
"math_id": 10,
"text": "\\Delta I = \\gamma_i \\text{Cov}(E_i,C_i) - \\sum_{i\\neq r} q_{ir}\\gamma_r \\text{Cov}(E_r, C_r) \\, "
},
{
"math_id": 11,
"text": "q_{ir} = \\frac{\\partial C_i}{\\partial C_r}"
}
] | https://en.wikipedia.org/wiki?curid=11264182 |
11264285 | Quantum graph | In mathematics and physics, a quantum graph is a linear, network-shaped structure of vertices connected on edges (i.e., a graph) in which each edge is given a length and where a differential (or pseudo-differential) equation is posed on each edge. An example would be a power network consisting of power lines (edges) connected at transformer stations (vertices); the differential equations would then describe the voltage along each of the lines, with boundary conditions for each edge provided at the adjacent vertices ensuring that the current added over all edges adds to zero at each vertex.
Quantum graphs were first studied by Linus Pauling as models of free electrons in organic molecules in the 1930s. They also arise in a variety of mathematical contexts, e.g. as model systems in quantum chaos, in the study of waveguides, in photonic crystals and in Anderson localization, or as the limit of shrinking thin wires. Quantum graphs have become prominent models in mesoscopic physics used to obtain a theoretical understanding of nanotechnology. Another, simpler notion of quantum graphs was introduced by Freedman et al.
Aside from actually solving the differential equations posed on a quantum graph for purposes of concrete applications, typical questions that arise are those of controllability (what inputs have to be provided to bring the system into a desired state, for example providing sufficient power to all houses on a power network) and identifiability (how and where one has to measure something to obtain a complete picture of the state of the system, for example measuring the pressure of a water pipe network to determine whether or not there is a leaking pipe).
Metric graphs.
A metric graph
is a graph consisting of a set formula_0 of vertices and
a set formula_1 of edges where each edge formula_2 has been associated
with an interval formula_3 so that formula_4 is the coordinate on the
interval, the vertex formula_5 corresponds to formula_6 and
formula_7 to formula_8 or vice versa. The choice of which vertex lies at zero is
arbitrary with the alternative corresponding to a change of coordinate on the
edge.
The graph has a natural metric: for two
points formula_9 on the graph, formula_10 is
the shortest distance between them
where distance is measured along the edges of the graph.
Open graphs: in the combinatorial graph model
edges always join pairs of vertices; however, in a quantum graph one may also
consider semi-infinite edges. These are edges associated with the interval
formula_11 attached to a single vertex at formula_6.
A graph with one or more
such open edges is referred to as an open graph.
Quantum graphs.
Quantum graphs are metric graphs equipped with a differential
(or pseudo-differential) operator acting on functions on the graph.
A function formula_12 on a metric graph is defined as the formula_13-tuple of functions
formula_14 on the intervals.
The Hilbert space of the graph is formula_15
where the inner product of two functions is
formula_16
formula_17 may be infinite in the case of an open edge. The simplest example of an operator on a metric graph is the Laplace operator. The operator on an edge is formula_18 where formula_4 is the coordinate on the edge. To make the operator self-adjoint a suitable domain must be specified. This is typically achieved by taking the Sobolev space formula_19 of functions on the edges of the graph and specifying matching conditions at the vertices.
The trivial example of matching conditions that make the operator self-adjoint is the Dirichlet boundary conditions, formula_20 for every edge. An eigenfunction on a finite edge may be written as
formula_21
for integer formula_22. If the graph is closed with no infinite edges and the
lengths of the edges of the graph are rationally independent
then an eigenfunction is supported on a single graph edge
and the eigenvalues are formula_23. The Dirichlet conditions
don't allow interaction between the intervals so the spectrum is the same as
that of the set of disconnected edges.
More interesting self-adjoint matching conditions that allow interaction between edges are the Neumann or natural matching conditions. A function formula_12 in the domain of the operator is continuous everywhere on the graph and the sum of the outgoing derivatives at a vertex is zero,
formula_24
where formula_25 if the vertex formula_26 is at formula_27 and formula_28 if formula_26 is at formula_29.
The properties of other operators on metric graphs have also been studied. These include Schrödinger-type operators of the form
formula_30
where formula_31 is a "magnetic vector potential" on the edge and formula_32 is a scalar potential.
Theorems.
All self-adjoint matching conditions of the Laplace operator on a graph can be classified according to a scheme of Kostrykin and Schrader. In practice, it is often more convenient to adopt a formalism introduced by Kuchment, which automatically yields an operator in variational form.
Let formula_26 be a vertex with formula_33 edges emanating from it. For simplicity we choose the coordinates on the edges so that formula_26 lies at formula_6 for each edge meeting at formula_26. For a function formula_12 on the graph let
formula_34
Matching conditions at formula_26 can be specified by a pair of matrices
formula_35 and formula_36 through the linear equation,
formula_37
The matching conditions define a self-adjoint operator if
formula_38 has the maximal rank formula_33 and formula_39
The spectrum of the Laplace operator on a finite graph can be conveniently described
using a scattering matrix approach introduced by Kottos and Smilansky.
The eigenvalue problem on an edge is,
formula_40
So a solution on the edge can be written as a linear combination of plane waves.
formula_41
where, in a time-dependent Schrödinger equation, formula_42 is the coefficient
of the outgoing plane wave at formula_43 and formula_44 is the coefficient of the incoming
plane wave at formula_43.
The matching conditions at formula_26 define a scattering matrix
formula_45
The scattering matrix relates the vectors of incoming and outgoing plane-wave
coefficients at formula_26, formula_46.
For self-adjoint matching conditions formula_47 is unitary. An element
formula_48 of formula_47 is a complex transition amplitude
from a directed edge formula_49
to the edge formula_50 which in general depends on formula_51.
However, for a large class of matching conditions
the S-matrix is independent of formula_51.
With Neumann matching conditions for example
formula_52
Substituting in the equation for formula_47
produces formula_51-independent transition amplitudes
formula_53
where formula_54 is the Kronecker delta function that is one if formula_55 and
zero otherwise. From the transition amplitudes we may define a
formula_56 matrix
formula_57
formula_58 is called the bond scattering matrix and
can be thought of as a quantum evolution operator on the graph. It is
unitary and acts on the vector of formula_59 plane-wave coefficients for the
graph where formula_60 is the coefficient of
the plane wave traveling from formula_61 to formula_26.
The phase formula_62 is the phase acquired by the plane wave
when propagating from vertex formula_61 to vertex formula_26.
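As a quick numerical check, the Neumann matrices "A" and "B" shown above and the resulting vertex scattering matrix can be constructed explicitly. The NumPy sketch below is illustrative only; the vertex degree and the wavenumber are arbitrary choices.

```python
import numpy as np

def neumann_AB(d):
    """A, B matrices for Neumann (natural) matching at a degree-d vertex:
    continuity of f plus vanishing sum of outgoing derivatives."""
    A = np.zeros((d, d))
    B = np.zeros((d, d))
    for i in range(d - 1):
        A[i, i], A[i, i + 1] = 1.0, -1.0
    B[d - 1, :] = 1.0
    return A, B

def vertex_S(A, B, k):
    """Vertex scattering matrix S(k) = -(A + ikB)^{-1} (A - ikB)."""
    return -np.linalg.solve(A + 1j * k * B, A - 1j * k * B)

d, k = 4, 1.7                      # arbitrary vertex degree and wavenumber
A, B = neumann_AB(d)

# Self-adjointness: (A, B) has maximal rank d and A B* = B A*.
assert np.linalg.matrix_rank(np.hstack([A, B])) == d
assert np.allclose(A @ B.conj().T, B @ A.conj().T)

S = vertex_S(A, B, k)
assert np.allclose(S.conj().T @ S, np.eye(d))     # S is unitary
assert np.allclose(S, 2.0 / d - np.eye(d))        # entries are 2/d minus the Kronecker delta
print(np.round(S.real, 3))
```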
Quantization condition: An eigenfunction on the graph
can be defined through its associated formula_59 plane-wave coefficients.
As the eigenfunction is stationary under the quantum evolution, a quantization
condition for the graph can be written using the evolution operator.
formula_63
Eigenvalues formula_64 occur at values of formula_51 where the matrix formula_65 has an
eigenvalue one. We will order the spectrum with
formula_66.
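The quantization condition can be tested numerically on the simplest possible example: a single edge of length "L" with Neumann conditions at both endpoints, whose Laplace eigenvalues are the squares of k_n = nπ/L. The sketch below builds the bond scattering matrix for Neumann matching conditions and scans for wavenumbers where the quantization condition holds; it is an illustration, and the grid spacing and tolerance are arbitrary choices.

```python
import numpy as np

def bond_scattering_matrix(edges, lengths, k):
    """Bond scattering matrix for Neumann matching conditions:
    entry (outgoing bond, incoming bond) = (2/d_v - delta) * exp(i k L_incoming),
    where the two directed bonds meet at vertex v and delta marks back-scattering.
    (Transposing this convention does not change det(U - I).)"""
    bonds = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    L = {b: lengths[i % len(edges)] for i, b in enumerate(bonds)}
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    n = len(bonds)
    U = np.zeros((n, n), dtype=complex)
    for a, (u, v) in enumerate(bonds):            # incoming bond u -> v
        for b, (l, m) in enumerate(bonds):        # candidate outgoing bond l -> m
            if l == v:                            # the two bonds meet at vertex v
                sigma = 2.0 / deg[v] - (1.0 if u == m else 0.0)
                U[b, a] = sigma * np.exp(1j * k * L[(u, v)])
    return U

# Single edge of length 1 with Neumann conditions at both ends:
# the exact eigenvalues correspond to k_n = n*pi.
ks = np.linspace(0.1, 10.0, 20_000)
dets = np.array([abs(np.linalg.det(bond_scattering_matrix([(0, 1)], [1.0], k) - np.eye(2)))
                 for k in ks])
idx = [i for i in range(1, len(ks) - 1)
       if dets[i] <= dets[i - 1] and dets[i] <= dets[i + 1] and dets[i] < 1e-2]
print(ks[idx] / np.pi)        # approximately [1. 2. 3.]
```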
The first trace formula for a graph was derived by Roth (1983).
In 1997 Kottos and Smilansky used the quantization condition above to obtain
the following trace formula for the Laplace operator on a graph when the
transition amplitudes are independent of formula_51.
The trace formula links the spectrum with periodic orbits on the graph.
formula_67
formula_68 is called the density of states. The right-hand side of the trace
formula is made up of two terms: the Weyl
term formula_69,
which gives the mean density of states, and the oscillating part, which is a sum
over all periodic orbits formula_70 on the graph.
formula_71 is the length of the orbit and
formula_72 is
the total length of the graph. For an orbit generated by repeating a
shorter primitive orbit, formula_73 counts the number of repetitions.
formula_74 is
the product of the transition amplitudes at the vertices of the graph around
the orbit.
Applications.
Quantum graphs were first employed in the 1930s
to model the spectrum of free electrons in organic molecules like
naphthalene. As a first approximation the
atoms are taken to be vertices while the
σ-electrons form bonds that fix a frame
in the shape of the molecule on which the free electrons are confined.
A similar problem appears when considering quantum waveguides. These
are mesoscopic systems - systems built with a width on the scale of
nanometers. A quantum waveguide can be thought of as a fattened graph
where the edges
are thin tubes. The spectrum of the Laplace operator on this domain
converges to the spectrum of the Laplace operator on the graph
under certain conditions. Understanding mesoscopic systems plays an
important role in the field of nanotechnology.
In 1997 Kottos and Smilansky proposed quantum graphs as a model to study
quantum chaos, the quantum mechanics of systems that
are classically chaotic. Classical motion on the graph can be defined as
a probabilistic Markov chain where the probability of scattering
from edge formula_75 to edge formula_12 is given by the absolute value of the
quantum transition amplitude squared, formula_76. For almost all
finite connected
quantum graphs the probabilistic dynamics is ergodic and mixing,
in other words chaotic.
Quantum graphs embedded in two or three dimensions appear in the study
of photonic crystals. In two dimensions a simple model of
a photonic crystal consists of polygonal cells of a dense dielectric with
narrow interfaces between the cells filled with air. Studying
dielectric modes that stay mostly in the dielectric gives rise to a
pseudo-differential operator on the graph that follows the narrow interfaces.
Periodic quantum graphs like the lattice in formula_77 are common models of
periodic systems, and quantum graphs have been applied
to study the phenomenon of Anderson localization, where localized
states occur at the edge of spectral bands in the presence of disorder.
{
"math_id": 0,
"text": "V"
},
{
"math_id": 1,
"text": "E"
},
{
"math_id": 2,
"text": "e=(v_1,v_2)\\in E"
},
{
"math_id": 3,
"text": "[0,L_e]"
},
{
"math_id": 4,
"text": "x_e"
},
{
"math_id": 5,
"text": "v_1"
},
{
"math_id": 6,
"text": "x_e=0"
},
{
"math_id": 7,
"text": "v_2"
},
{
"math_id": 8,
"text": "x_e=L_e"
},
{
"math_id": 9,
"text": "x,y"
},
{
"math_id": 10,
"text": "\\rho(x,y)"
},
{
"math_id": 11,
"text": "[0,\\infty)"
},
{
"math_id": 12,
"text": "f"
},
{
"math_id": 13,
"text": "|E|"
},
{
"math_id": 14,
"text": "f_e(x_e)"
},
{
"math_id": 15,
"text": "\\bigoplus_{e\\in E} L^2([0,L_e])"
},
{
"math_id": 16,
"text": "\\langle f,g \\rangle = \\sum_{e\\in E} \\int_{0}^{L_e} f_e^{*}(x_e)g_e(x_e) \\, dx_e,"
},
{
"math_id": 17,
"text": "L_e"
},
{
"math_id": 18,
"text": "-\\frac{\\textrm{d}^2}{\\textrm{d} x_e^2}"
},
{
"math_id": 19,
"text": "H^2"
},
{
"math_id": 20,
"text": "f_e(0)=f_e(L_e)=0"
},
{
"math_id": 21,
"text": "f_e(x_e) = \\sin \\left( \\frac{n \\pi x_e}{L_e} \\right)"
},
{
"math_id": 22,
"text": "n"
},
{
"math_id": 23,
"text": "\\frac{n^2\\pi^2}{L_e^2}"
},
{
"math_id": 24,
"text": "\\sum_{e\\sim v} f'(v) = 0 \\ ,"
},
{
"math_id": 25,
"text": "f'(v)=f'(0)"
},
{
"math_id": 26,
"text": "v"
},
{
"math_id": 27,
"text": "x=0"
},
{
"math_id": 28,
"text": "f'(v)=-f'(L_e)"
},
{
"math_id": 29,
"text": "x=L_e"
},
{
"math_id": 30,
"text": "\\left( i \\frac{\\textrm{d}}{\\textrm{d} x_e} + A_e(x_e) \\right)^2 + V_e(x_e) \\ ,"
},
{
"math_id": 31,
"text": "A_e"
},
{
"math_id": 32,
"text": "V_e"
},
{
"math_id": 33,
"text": "d"
},
{
"math_id": 34,
"text": "\\mathbf{f}=(f_{e_1}(0),f_{e_2}(0),\\dots,f_{e_{d}}(0))^T , \\qquad \\mathbf{f}'=(f'_{e_1}(0),f'_{e_2}(0),\\dots,f'_{e_{d}}(0))^T."
},
{
"math_id": 35,
"text": "A"
},
{
"math_id": 36,
"text": "B"
},
{
"math_id": 37,
"text": "A \\mathbf{f} +B \\mathbf{f}'=\\mathbf{0}. "
},
{
"math_id": 38,
"text": "(A, B)"
},
{
"math_id": 39,
"text": "AB^{*}=BA^{*}."
},
{
"math_id": 40,
"text": "-\\frac{d^2}{dx_e^2} f_e(x_e)=k^2 f_e(x_e).\\,"
},
{
"math_id": 41,
"text": "f_e(x_e) = c_e \\textrm{e}^{i k x_e} + \\hat{c}_e \\textrm{e}^{-i k x_e}.\\,"
},
{
"math_id": 42,
"text": "c"
},
{
"math_id": 43,
"text": "0"
},
{
"math_id": 44,
"text": "\\hat{c}"
},
{
"math_id": 45,
"text": "S(k)=-(A+i kB)^{-1}(A-ikB).\\,"
},
{
"math_id": 46,
"text": "\\mathbf{c}=S(k)\\hat{\\mathbf{c}}"
},
{
"math_id": 47,
"text": "S"
},
{
"math_id": 48,
"text": "\\sigma_{(uv)(vw)}"
},
{
"math_id": 49,
"text": "(uv)"
},
{
"math_id": 50,
"text": "(vw)"
},
{
"math_id": 51,
"text": "k"
},
{
"math_id": 52,
"text": "\nA=\\left( \\begin{array}{ccccc}\n1& -1 & 0 & 0 & \\dots \\\\\n0 & 1 & -1 & 0 & \\dots \\\\\n& & \\ddots & \\ddots & \\\\\n0& \\dots & 0 & 1 & -1 \\\\\n0 &\\dots & 0 & 0& 0 \\\\\n\\end{array} \\right) , \\quad B=\\left( \\begin{array}{cccc}\n0& 0 & \\dots & 0 \\\\\n\\vdots & \\vdots & & \\vdots \\\\\n0& 0 & \\dots & 0 \\\\\n1 &1 & \\dots & 1 \\\\\n\\end{array} \\right).\n"
},
{
"math_id": 53,
"text": "\\sigma_{(uv)(vw)}=\\frac{2}{d}-\\delta_{uw}.\\,"
},
{
"math_id": 54,
"text": "\\delta_{uw}"
},
{
"math_id": 55,
"text": "u=w"
},
{
"math_id": 56,
"text": "2|E|\\times 2|E|"
},
{
"math_id": 57,
"text": "U_{(uv)(lm)}(k)= \\delta_{vl} \\sigma_{(uv)(vm)}(k) \\textrm{e}^{i kL_{(uv)}}.\\,"
},
{
"math_id": 58,
"text": "U"
},
{
"math_id": 59,
"text": "2|E|"
},
{
"math_id": 60,
"text": "c_{(uv)}"
},
{
"math_id": 61,
"text": "u"
},
{
"math_id": 62,
"text": "\\textrm{e}^{i kL_{(uv)}}"
},
{
"math_id": 63,
"text": "|U(k)-I|=0.\\,"
},
{
"math_id": 64,
"text": "k_j"
},
{
"math_id": 65,
"text": "U(k)"
},
{
"math_id": 66,
"text": "0\\leqslant k_0 \\leqslant k_1 \\leqslant \\dots"
},
{
"math_id": 67,
"text": "d(k):=\\sum_{j=0}^{\\infty} \\delta(k-k_j)=\\frac{L}{\\pi}+\\frac{1}{\\pi} \n\\sum_p \\frac{L_p}{r_p} A_p \\cos(kL_p)."
},
{
"math_id": 68,
"text": "d(k)"
},
{
"math_id": 69,
"text": "\\frac{L}{\\pi}"
},
{
"math_id": 70,
"text": "p=(e_1,e_2,\\dots,e_n)"
},
{
"math_id": 71,
"text": "L_p=\\sum_{e\\in p} L_e"
},
{
"math_id": 72,
"text": "L=\\sum_{e\\in E}L_e"
},
{
"math_id": 73,
"text": "r_p"
},
{
"math_id": 74,
"text": "A_p=\\sigma_{e_1 e_2} \\sigma_{e_2 e_3} \\dots \\sigma_{e_n e_1}"
},
{
"math_id": 75,
"text": "e"
},
{
"math_id": 76,
"text": "|\\sigma_{ef}|^2"
},
{
"math_id": 77,
"text": "{\\mathbb R}^2"
}
] | https://en.wikipedia.org/wiki?curid=11264285 |
1126536 | Optimization problem | Problem of finding the best feasible solution
In mathematics, engineering, computer science and economics, an optimization problem is the problem of finding the "best" solution from all feasible solutions.
Optimization problems can be divided into two categories, depending on whether the variables are continuous or discrete:
Continuous optimization problem.
The "standard form" of a continuous optimization problem is
formula_0
where
f(x) is the objective function to be minimized over the variable x,
g_i(x) ≤ 0 are called "inequality constraints",
h_j(x) = 0 are called "equality constraints", and
"m" ≥ 0 and "p" ≥ 0.
If "m" = "p" = 0, the problem is an unconstrained optimization problem. By convention, the standard form defines a minimization problem. A maximization problem can be treated by negating the objective function.
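As an illustration of the standard form, the following sketch solves a small made-up instance with one inequality and one equality constraint using SciPy. Note that SciPy's "ineq" convention requires fun(x) ≥ 0, which is the negation of the g_i(x) ≤ 0 convention above.

```python
from scipy.optimize import minimize

# Standard-form instance (made-up for illustration):
#   minimize   f(x) = (x0 - 1)^2 + (x1 - 2)^2
#   subject to g(x) = x0 + x1 - 2 <= 0   (inequality constraint)
#              h(x) = x0 - x1     =  0   (equality constraint)
res = minimize(
    fun=lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2,
    x0=[0.0, 0.0],
    constraints=[
        {"type": "ineq", "fun": lambda x: -(x[0] + x[1] - 2)},  # SciPy wants fun(x) >= 0
        {"type": "eq",   "fun": lambda x: x[0] - x[1]},
    ],
)
print(res.x)        # approximately [1, 1]
```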
Combinatorial optimization problem.
Formally, a combinatorial optimization problem A is a quadruple ("I", "f", "m", "g"), where "I" is a set of instances; given an instance "x", "f"("x") is the set of feasible solutions; given an instance "x" and a feasible solution "y" of "x", "m"("x", "y") denotes the measure of "y", which is usually a positive real; and "g" is the goal function, either min or max.
The goal is then to find for some instance x an "optimal solution", that is, a feasible solution y with
formula_1
For each combinatorial optimization problem, there is a corresponding decision problem that asks whether there is a feasible solution for some particular measure "m"0. For example, if there is a graph G which contains vertices u and v, an optimization problem might be "find a path from u to v that uses the fewest edges". This problem might have an answer of, say, 4. A corresponding decision problem would be "is there a path from u to v that uses 10 or fewer edges?" This problem can be answered with a simple 'yes' or 'no'.
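A minimal sketch of this correspondence, using a made-up graph: the optimization version returns the minimum number of edges, while the decision version only answers yes or no for a given bound.

```python
from collections import deque

def fewest_edges(graph, u, v):
    """Optimization version: the minimum number of edges on a path u -> v
    (breadth-first search), or None if no path exists."""
    dist = {u: 0}
    queue = deque([u])
    while queue:
        node = queue.popleft()
        if node == v:
            return dist[node]
        for nxt in graph.get(node, []):
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
    return None

def path_with_at_most(graph, u, v, k):
    """Decision version: is there a path from u to v using at most k edges?"""
    d = fewest_edges(graph, u, v)
    return d is not None and d <= k

# A small made-up graph for illustration.
G = {"u": ["a"], "a": ["b"], "b": ["c"], "c": ["v"], "v": []}
print(fewest_edges(G, "u", "v"))             # 4  (the optimization answer)
print(path_with_at_most(G, "u", "v", 10))    # True (the yes/no decision answer)
```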
In the field of approximation algorithms, algorithms are designed to find near-optimal solutions to hard problems. The usual decision version is then an inadequate definition of the problem since it only specifies acceptable solutions. Even though we could introduce suitable decision problems, the problem is more naturally characterized as an optimization problem.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{align}\n&\\underset{x}{\\operatorname{minimize}}& & f(x) \\\\\n&\\operatorname{subject\\;to}\n& &g_i(x) \\leq 0, \\quad i = 1,\\dots,m \\\\\n&&&h_j(x) = 0, \\quad j = 1, \\dots,p \n\\end{align}"
},
{
"math_id": 1,
"text": "m(x, y) = g\\left\\{ m(x, y') : y' \\in f(x) \\right\\}."
}
] | https://en.wikipedia.org/wiki?curid=1126536 |
1126638 | Invariant (mathematics) | Property that is not changed by mathematical transformations
In mathematics, an invariant is a property of a mathematical object (or a class of mathematical objects) which remains unchanged after operations or transformations of a certain type are applied to the objects. The particular class of objects and type of transformations are usually indicated by the context in which the term is used. For example, the area of a triangle is an invariant with respect to isometries of the Euclidean plane. The phrases "invariant under" and "invariant to" a transformation are both used. More generally, an invariant with respect to an equivalence relation is a property that is constant on each equivalence class.
Invariants are used in diverse areas of mathematics such as geometry, topology, algebra and discrete mathematics. Some important classes of transformations are defined by an invariant they leave unchanged. For example, conformal maps are defined as transformations of the plane that preserve angles. The discovery of invariants is an important step in the process of classifying mathematical objects.
Examples.
A simple example of invariance is expressed in our ability to count. For a finite set of objects of any kind, there is a number to which we always arrive, regardless of the order in which we count the objects in the set. The quantity—a cardinal number—is associated with the set, and is invariant under the process of counting.
An identity is an equation that remains true for all values of its variables. There are also inequalities that remain true when the values of their variables change.
The distance between two points on a number line is not changed by adding the same quantity to both numbers. On the other hand, multiplication does not have this same property, as distance is not invariant under multiplication.
Angles and ratios of distances are invariant under scalings, rotations, translations and reflections. These transformations produce similar shapes, which is the basis of trigonometry. In contrast, angles and ratios are not invariant under non-uniform scaling (such as stretching). The sum of a triangle's interior angles (180°) is invariant under all the above operations. As another example, all circles are similar: they can be transformed into each other and the ratio of the circumference to the diameter is invariant (denoted by the Greek letter π (pi)).
Some more complicated examples:
MU puzzle.
The MU puzzle is a good example of a logical problem where determining an invariant is of use for an impossibility proof. The puzzle asks one to start with the word MI and transform it into the word MU, using in each step one of the following transformation rules:
1. If a string ends in I, a U may be appended (xI → xIU).
2. The string after the leading M may be doubled (Mx → Mxx).
3. Any three consecutive I's may be replaced by a single U (xIIIy → xUy).
4. Any two consecutive U's may be removed (xUUy → xy).
An example derivation (with the number after each arrow indicating the applied rule) is
MI →2 MII →2 MIIII →3 MUI →2 MUIUI →1 MUIUIU →2 MUIUIUUIUIU →4 MUIUIIUIU → ...
In light of this, one might wonder whether it is possible to convert MI into MU, using only these four transformation rules. One could spend many hours applying these transformation rules to strings. However, it might be quicker to find a property that is invariant to all rules (that is, not changed by any of them), and that demonstrates that getting to MU is impossible. By looking at the puzzle from a logical standpoint, one might realize that the only way to get rid of any I's is to have three consecutive I's in the string. This makes the following invariant interesting to consider:
"The number of I's in the string is not a multiple of 3".
This is an invariant to the problem, if for each of the transformation rules the following holds: if the invariant held before applying the rule, it will also hold after applying it. Looking at the net effect of applying the rules on the number of I's and U's, one can see this actually is the case for all rules: rule 1 leaves the number of I's unchanged and adds one U; rule 2 doubles the number of I's (and of U's), and doubling a number that is not a multiple of three never produces a multiple of three; rule 3 removes three I's and adds one U, which does not change the number of I's modulo three; and rule 4 leaves the number of I's unchanged.
This shows that the invariant holds for each of the possible transformation rules, which means that whichever rule one picks, at whatever state, if the number of I's was not a multiple of three before applying the rule, then it will not be afterwards either.
Given that there is a single I in the starting string MI, and one is not a multiple of three, one can then conclude that it is impossible to go from MI to MU (as the number of I's will never be a multiple of three).
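The argument is easy to check mechanically. The sketch below applies the four rules in a bounded breadth-first search from MI and asserts the invariant at every level; the length bound and search depth are arbitrary choices made only to keep the search small.

```python
def successors(s):
    """All strings reachable from s in one step of the four MIU rules."""
    out = set()
    if s.endswith("I"):
        out.add(s + "U")                          # rule 1: xI -> xIU
    out.add("M" + s[1:] * 2)                      # rule 2: Mx -> Mxx
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])      # rule 3: xIIIy -> xUy
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])            # rule 4: xUUy -> xy
    return out

def invariant(s):
    return s.count("I") % 3 != 0                  # number of I's is not a multiple of 3

# Bounded breadth-first search from MI: the invariant holds for every
# reachable string, so MU (zero I's) can never be produced.
frontier, seen = {"MI"}, {"MI"}
for _ in range(6):                                # a few levels is enough to illustrate
    frontier = {t for s in frontier for t in successors(s) if len(t) <= 30}
    frontier -= seen
    seen |= frontier
    assert all(invariant(s) for s in frontier)

print("MU" in seen, len(seen))                    # False, plus the number of strings explored
```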
Invariant set.
A subset "S" of the domain "U" of a mapping "T": "U" → "U" is an invariant set under the mapping when formula_4 Note that the elements of "S" are not fixed, even though the set "S" is fixed in the power set of "U". (Some authors use the terminology "setwise invariant," vs. "pointwise invariant," to distinguish between these cases.)
For example, a circle is an invariant subset of the plane under a rotation about the circle's center. Further, a conical surface is invariant as a set under a homothety of space.
An invariant set of an operation "T" is also said to be stable under "T". For example, the normal subgroups that are so important in group theory are those subgroups that are stable under the inner automorphisms of the ambient group.
In linear algebra, if a linear transformation "T" has an eigenvector v, then the line through 0 and v is an invariant set under "T", in which case the eigenvectors span an invariant subspace which is stable under "T".
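A small numerical sketch of this statement, with a made-up matrix: every point on the line through 0 and an eigenvector v is mapped by "T" back onto that line.

```python
import numpy as np

T = np.array([[2.0, 1.0],
              [1.0, 2.0]])                     # a made-up linear transformation
eigvals, eigvecs = np.linalg.eig(T)
v = eigvecs[:, 0]                              # an eigenvector of T

# The line {t*v : t real} is an invariant set: T maps each of its points
# back onto the line (the image stays collinear with v).
for t in np.linspace(-2.0, 2.0, 9):
    image = T @ (t * v)
    assert abs(image[0] * v[1] - image[1] * v[0]) < 1e-12   # image is parallel to v
print("the line through 0 and v is invariant under T")
```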
When "T" is a screw displacement, the screw axis is an invariant line, though if the pitch is non-zero, "T" has no fixed points.
In probability theory and ergodic theory, invariant sets are usually defined via the stronger property formula_5 When the map formula_6 is measurable, invariant sets form a sigma-algebra, the invariant sigma-algebra.
Formal statement.
The notion of invariance is formalized in three different ways in mathematics: via group actions, presentations, and deformation.
Unchanged under group action.
Firstly, if one has a group "G" acting on a mathematical object (or set of objects) "X," then one may ask which points "x" are unchanged, "invariant" under the group action, or under an element "g" of the group.
Frequently one will have a group acting on a set "X", which leaves one to determine which objects in an "associated" set "F"("X") are invariant. For example, rotation in the plane about a point leaves the point about which it rotates invariant, while translation in the plane does not leave any points invariant, but does leave all lines parallel to the direction of translation invariant as lines. Formally, define the set of lines in the plane "P" as "L"("P"); then a rigid motion of the plane takes lines to lines – the group of rigid motions acts on the set of lines – and one may ask which lines are unchanged by an action.
More importantly, one may define a "function" on a set, such as "radius of a circle in the plane", and then ask if this function is invariant under a group action, such as rigid motions.
Dual to the notion of invariants are "coinvariants," also known as "orbits," which formalizes the notion of congruence: objects which can be taken to each other by a group action. For example, under the group of rigid motions of the plane, the perimeter of a triangle is an invariant, while the set of triangles congruent to a given triangle is a coinvariant.
These are connected as follows: invariants are constant on coinvariants (for example, congruent triangles have the same perimeter), while two objects which agree in the value of one invariant may or may not be congruent (for example, two triangles with the same perimeter need not be congruent). In classification problems, one might seek to find a complete set of invariants, such that if two objects have the same values for this set of invariants, then they are congruent.
For example, triangles such that all three sides are equal are congruent under rigid motions, via SSS congruence, and thus the lengths of all three sides form a complete set of invariants for triangles. The three angle measures of a triangle are also invariant under rigid motions, but do not form a complete set as incongruent triangles can share the same angle measures. However, if one allows scaling in addition to rigid motions, then the AAA similarity criterion shows that this is a complete set of invariants.
Independent of presentation.
Secondly, a function may be defined in terms of some presentation or decomposition of a mathematical object; for instance, the Euler characteristic of a cell complex is defined as the alternating sum of the number of cells in each dimension. One may forget the cell complex structure and look only at the underlying topological space (the manifold) – as different cell complexes give the same underlying manifold, one may ask if the function is "independent" of choice of "presentation," in which case it is an "intrinsically" defined invariant. This is the case for the Euler characteristic, and a general method for defining and computing invariants is to define them for a given presentation, and then show that they are independent of the choice of presentation. Note that there is no notion of a group action in this sense.
The most common examples are:
Unchanged under perturbation.
Thirdly, if one is studying an object which varies in a family, as is common in algebraic geometry and differential geometry, one may ask if the property is unchanged under perturbation (for example, if an object is constant on families or invariant under change of metric).
Invariants in computer science.
In computer science, an invariant is a logical assertion that is always held to be true during a certain phase of execution of a computer program. For example, a loop invariant is a condition that is true at the beginning and the end of every iteration of a loop.
Invariants are especially useful when reasoning about the correctness of a computer program. The theory of optimizing compilers, the methodology of design by contract, and formal methods for determining program correctness, all rely heavily on invariants.
Programmers often use assertions in their code to make invariants explicit. Some object oriented programming languages have a special syntax for specifying class invariants.
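As a small illustration (a hand-written sketch, not drawn from any particular codebase), the following C fragment makes a loop invariant explicit with the standard assert macro:
#include <assert.h>
#include <stdio.h>

/* Sums the first n natural numbers; the loop invariant
   sum == i*(i-1)/2 holds at the start of every iteration. */
static int sum_to(int n) {
    int sum = 0;
    for (int i = 1; i <= n; i++) {
        assert(sum == i * (i - 1) / 2);   /* loop invariant */
        sum += i;
    }
    assert(sum == n * (n + 1) / 2);       /* postcondition */
    return sum;
}

int main(void) {
    printf("%d\n", sum_to(10));   /* prints 55 */
    return 0;
}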
Automatic invariant detection in imperative programs.
Abstract interpretation tools can compute simple invariants of given imperative computer programs. The kind of properties that can be found depend on the abstract domains used. Typical example properties are single integer variable ranges like codice_0, relations between several variables like codice_1, and modulus information like codice_2. Academic research prototypes also consider simple properties of pointer structures.
More sophisticated invariants generally have to be provided manually.
In particular, when verifying an imperative program using the Hoare calculus, a loop invariant has to be provided manually for each loop in the program, which is one of the reasons that this approach is generally impractical for most programs.
In the context of the above MU puzzle example, there is currently no general automated tool that can detect that a derivation from MI to MU is impossible using only the rules 1–4. However, once the abstraction from the string to the number of its "I"s has been made by hand, leading, for example, to the following C program, an abstract interpretation tool will be able to detect that codice_3 cannot be 0, and hence the "while"-loop will never terminate.
void MUPuzzle(void) {
    volatile int RandomRule;                // nondeterministic choice of rule
    int ICount = 1, UCount = 0;             // counts of I's and U's, starting from "MI"
    while (ICount % 3 != 0)                 // non-terminating loop
        switch (RandomRule) {
        case 1:                UCount += 1; break;   // xI  -> xIU
        case 2: ICount *= 2;   UCount *= 2; break;   // Mx  -> Mxx
        case 3: ICount -= 3;   UCount += 1; break;   // III -> U
        case 4:                UCount -= 2; break;   // UU  -> (nothing)
        }                                   // computed invariant: ICount % 3 == 1 || ICount % 3 == 2
}
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\int_M K\\,d\\mu"
},
{
"math_id": 1,
"text": "K"
},
{
"math_id": 2,
"text": "(M,g)"
},
{
"math_id": 3,
"text": "g"
},
{
"math_id": 4,
"text": "x \\in S \\implies T(x) \\in S."
},
{
"math_id": 5,
"text": "x \\in S \\Leftrightarrow T(x) \\in S."
},
{
"math_id": 6,
"text": "T"
}
] | https://en.wikipedia.org/wiki?curid=1126638 |
11269476 | Lie algebra bundle | Concept in topology (mathematics)
In mathematics, a weak Lie algebra bundle
formula_0
is a vector bundle formula_1 over a base space "X" together with a morphism
formula_2
which induces a Lie algebra structure on each fibre formula_3.
A Lie algebra bundle formula_4 is a vector bundle in which
each fibre is a Lie algebra and for every "x" in "X", there is an open set formula_5 containing "x", a Lie algebra "L" and a homeomorphism
formula_6
such that
formula_7
is a Lie algebra isomorphism.
Any Lie algebra bundle is a weak Lie algebra bundle, but the converse need not be true in general.
As an example of a weak Lie algebra bundle that is not a strong Lie algebra bundle, consider the total space formula_8 over the real line formula_9. Let [..] denote the Lie bracket of formula_10 and deform it by the real parameter as:
formula_11
for formula_12 and formula_13. Each fibre is a Lie algebra (the deformed bracket inherits bilinearity, antisymmetry and the Jacobi identity from the bracket of formula_10), but the fibre over the point 0 is abelian, while the fibres over the points "x" ≠ 0 are isomorphic to formula_10. Since the fibres are not mutually isomorphic, the bundle cannot be locally trivial, so it is a weak Lie algebra bundle that is not a Lie algebra bundle.
Lie's third theorem states that every bundle of Lie algebras can locally be integrated to a bundle of Lie groups; in general, however, the total space of such a bundle may fail to be Hausdorff globally. If all fibres of a real Lie algebra bundle over a topological space are mutually isomorphic as Lie algebras, then it is a locally trivial Lie algebra bundle. This result was proved by showing that the real orbit of a real point under an algebraic group is open in the real part of its complex orbit. If, moreover, the base space is Hausdorff and the fibres of the total space are isomorphic as Lie algebras, then there exists a Hausdorff Lie group bundle over the same base space whose Lie algebra bundle is isomorphic to the given Lie algebra bundle. In particular, every semisimple Lie algebra bundle is locally trivial, and hence there exists a Hausdorff Lie group bundle over the same base space whose Lie algebra bundle is isomorphic to the given one.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\xi=(\\xi, p, X, \\theta)\\,"
},
{
"math_id": 1,
"text": "\\xi\\,"
},
{
"math_id": 2,
"text": " \\theta : \\xi \\otimes \\xi \\rightarrow \\xi "
},
{
"math_id": 3,
"text": " \\xi_x\\, "
},
{
"math_id": 4,
"text": " \\xi=(\\xi, p, X)\\,"
},
{
"math_id": 5,
"text": " U "
},
{
"math_id": 6,
"text": " \\phi:U\\times L\\to p^{-1}(U)\\,"
},
{
"math_id": 7,
"text": " \\phi_x:x\\times L \\rightarrow p^{-1}(x)\\,"
},
{
"math_id": 8,
"text": "\\mathfrak{so}(3)\\times\\mathbb{R}"
},
{
"math_id": 9,
"text": "\\mathbb{R}"
},
{
"math_id": 10,
"text": "\\mathfrak{so}(3)"
},
{
"math_id": 11,
"text": "[X,Y]_x = x\\cdot[X,Y]"
},
{
"math_id": 12,
"text": "X,Y\\in\\mathfrak{so}(3)"
},
{
"math_id": 13,
"text": "x\\in\\mathbb{R}"
}
] | https://en.wikipedia.org/wiki?curid=11269476 |
11269780 | Matrix determinant lemma | In linear algebra
In mathematics, in particular linear algebra, the matrix determinant lemma computes the determinant of the sum of an invertible matrix A and the dyadic product, u vT, of a column vector u and a row vector vT.
Statement.
Suppose A is an invertible square matrix and u, v are column vectors. Then the matrix determinant lemma states that
formula_0
Here, uvT is the outer product of two vectors u and v.
The theorem can also be stated in terms of the adjugate matrix of A:
formula_1
in which case it applies whether or not the square matrix A is invertible.
Proof.
First the proof of the special case A = I follows from the equality:
formula_2
The determinant of the left hand side is the product of the determinants of the three matrices. Since the first and third matrix are triangular matrices with unit diagonal, their determinants are just 1. The determinant of the middle matrix is our desired value. The determinant of the right hand side is simply (1 + vTu). So we have the result:
formula_3
Then the general case can be found as:
formula_4
Application.
If the determinant and inverse of A are already known, the formula provides a numerically cheap way to compute the determinant of A corrected by the matrix uvT. The computation is relatively cheap because the determinant of A + uvT does not have to be computed from scratch (which in general is expensive). Using unit vectors for u and/or v, individual columns, rows or elements of A may be manipulated and a correspondingly updated determinant computed relatively cheaply in this way.
When the matrix determinant lemma is used in conjunction with the Sherman–Morrison formula, both the inverse and determinant may be conveniently updated together.
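As a quick numerical illustration of the lemma and of this updating idea, the following C program (the 3×3 matrix and the vectors are arbitrary values chosen for demonstration) compares det(A + uvT) computed from scratch with the value (1 + vTA−1u) det(A) obtained from the formula:
#include <stdio.h>

/* Determinant of a 3x3 matrix (rule of Sarrus). */
static double det3(const double A[3][3]) {
    return A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
         - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
         + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]);
}

int main(void) {
    /* An invertible matrix A and the rank-one update u v^T. */
    double A[3][3] = {{4, 1, 0}, {1, 3, 1}, {0, 1, 2}};
    double u[3] = {1, 2, 0};
    double v[3] = {0, 1, 3};

    /* Left-hand side: det(A + u v^T), computed from scratch. */
    double B[3][3];
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            B[i][j] = A[i][j] + u[i] * v[j];
    double lhs = det3(B);

    /* Right-hand side: (1 + v^T A^-1 u) det(A).  Since A^-1 = adj(A)/det(A),
       we use v^T A^-1 u = (v^T adj(A) u) / det(A). */
    double detA = det3(A);
    double adj[3][3];
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            /* adj[i][j] is the cofactor C_ji (note the transpose in the adjugate);
               for a 3x3 matrix it can be written with cyclic index arithmetic. */
            int r0 = (j + 1) % 3, r1 = (j + 2) % 3;
            int c0 = (i + 1) % 3, c1 = (i + 2) % 3;
            adj[i][j] = A[r0][c0] * A[r1][c1] - A[r0][c1] * A[r1][c0];
        }
    double vAdju = 0.0;
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            vAdju += v[i] * adj[i][j] * u[j];
    double rhs = (1.0 + vAdju / detA) * detA;   /* = det(A) + v^T adj(A) u */

    printf("det(A + u v^T)          = %g\n", lhs);   /* 11 for this example */
    printf("(1 + v^T A^-1 u) det(A) = %g\n", rhs);   /* 11 as well */
    return 0;
}
Both quantities come out equal (11 for these particular values); only det(A) and one rank-one correction term are needed on the right-hand side, which is the point of the update formula.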
Generalization.
Suppose A is an invertible "n"-by-"n" matrix and U, V are "n"-by-"m" matrices. Then
formula_5
In the special case formula_6 this is the Weinstein–Aronszajn identity.
Given additionally an invertible "m"-by-"m" matrix W, the relationship can also be expressed as
formula_7
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\det\\left(\\mathbf{A} + \\mathbf{uv}^\\textsf{T}\\right) = \\left(1 + \\mathbf{v}^\\textsf{T}\\mathbf{A}^{-1}\\mathbf{u}\\right)\\,\\det\\left(\\mathbf{A}\\right)\\,."
},
{
"math_id": 1,
"text": "\\det\\left(\\mathbf{A} + \\mathbf{uv}^\\textsf{T}\\right) = \\det\\left(\\mathbf{A}\\right) + \\mathbf{v}^\\textsf{T}\\mathrm{adj}\\left(\\mathbf{A}\\right)\\mathbf{u}\\,,"
},
{
"math_id": 2,
"text": "\n \\begin{pmatrix} \\mathbf{I} & 0 \\\\ \\mathbf{v}^\\textsf{T} & 1 \\end{pmatrix}\n \\begin{pmatrix} \\mathbf{I} + \\mathbf{uv}^\\textsf{T} & \\mathbf{u} \\\\ 0 & 1 \\end{pmatrix}\n \\begin{pmatrix} \\mathbf{I} & 0 \\\\ -\\mathbf{v}^\\textsf{T} & 1 \\end{pmatrix} =\n \\begin{pmatrix} \\mathbf{I} & \\mathbf{u} \\\\ 0 & 1 + \\mathbf{v}^\\textsf{T}\\mathbf{u} \\end{pmatrix}.\n"
},
{
"math_id": 3,
"text": "\\det\\left(\\mathbf{I} + \\mathbf{uv}^\\textsf{T}\\right) = \\left(1 + \\mathbf{v}^\\textsf{T}\\mathbf{u}\\right)."
},
{
"math_id": 4,
"text": "\\begin{align}\n \\det\\left(\\mathbf{A} + \\mathbf{uv}^\\textsf{T}\\right)\n &= \\det\\left(\\mathbf{A}\\right) \\det\\left(\\mathbf{I} + \\left(\\mathbf{A}^{-1}\\mathbf{u}\\right)\\mathbf{v}^\\textsf{T}\\right)\\\\\n &= \\det\\left(\\mathbf{A}\\right) \\left(1 + \\mathbf{v}^\\textsf{T} \\left(\\mathbf{A}^{-1}\\mathbf{u}\\right)\\right).\n\\end{align}"
},
{
"math_id": 5,
"text": "\\det\\left(\\mathbf{A} + \\mathbf{UV}^\\textsf{T}\\right) = \\det\\left(\\mathbf{I_m} + \\mathbf{V}^\\textsf{T}\\mathbf{A}^{-1}\\mathbf{U}\\right)\\det(\\mathbf{A})."
},
{
"math_id": 6,
"text": "\\mathbf{A}=\\mathbf{I_n}"
},
{
"math_id": 7,
"text": "\\det\\left(\\mathbf{A} + \\mathbf{UWV}^\\textsf{T}\\right) = \\det\\left(\\mathbf{W}^{-1} + \\mathbf{V}^\\textsf{T}\\mathbf{A}^{-1}\\mathbf{U}\\right)\\det\\left(\\mathbf{W}\\right)\\det\\left(\\mathbf{A}\\right)."
}
] | https://en.wikipedia.org/wiki?curid=11269780 |
1127107 | Verdoorn's law | Economic relationship between growth in output and growth in productivity
Verdoorn's law is named after Dutch economist Petrus Johannes Verdoorn. It states that in the long run productivity generally grows proportionally to the square root of output. In economics, this law pertains to the relationship between the growth of output and the growth of productivity. According to the law, faster growth in output increases productivity due to increasing returns. Verdoorn argued that "in the long run a change in the volume of production, say about 10 per cent, tends to be associated with an average increase in labor productivity of 4.5 per cent." A Verdoorn coefficient close to 0.5 (0.484) has also been found in subsequent estimations of the law.
Description.
Verdoorn's law describes a simple long-run relation between productivity and output growth, whose coefficients were empirically estimated in 1949 by the Dutch economist. The relation takes the following form: formula_0
where p is the labor productivity growth, Q the output growth (value-added), b is the Verdoorn coefficient and a is the exogenous productivity growth rate.
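The square-root statement above is just the integrated form of this growth-rate relation. As a rough sketch (assuming the exogenous term "a" is negligible and "b" = 1/2; the symbols "P" and "Y" for the productivity and output levels are introduced here purely for illustration):
\[ p = \frac{\dot P}{P} = \frac{1}{2}\,\frac{\dot Y}{Y} = \frac{1}{2}\,Q \quad\Longrightarrow\quad \ln P = \tfrac{1}{2}\ln Y + \text{const} \quad\Longrightarrow\quad P \propto \sqrt{Y}. \]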
Verdoorn's law differs from "the usual hypothesis […] that the growth of productivity is mainly to be explained by the progress of knowledge in science and technology", as it typically is in neoclassical models of growth (notably the Solow model). Verdoorn's law is usually associated with cumulative causation models of growth, in which demand rather than supply determine the pace of accumulation.
Nicholas Kaldor and Anthony Thirlwall developed models of export-led growth based on Verdoorn's law. For a given country, an expansion of the export sector may cause specialisation in the production of export products, which increases the productivity level and the level of skills in the export sector. This may then lead to a reallocation of resources from the less efficient non-trade sector to the more productive export sector, lower prices for traded goods and higher competitiveness. This productivity change may then lead to expanded exports and to output growth.
Thirlwall shows that for several countries the rate of growth never exceeds the ratio of the rate of growth of exports to the income elasticity of demand for imports. This implies that growth is limited by the balance-of-payments equilibrium. This result is known as Thirlwall's Law.
Verdoorn's law is sometimes called the Kaldor–Verdoorn law or effect.
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "p=a+ bQ"
}
] | https://en.wikipedia.org/wiki?curid=1127107 |
1127427 | Pisano period | Period of the Fibonacci sequence modulo an integer
In number theory, the "n"th Pisano period, written as "π"("n"), is the period with which the sequence of Fibonacci numbers taken modulo "n" repeats. Pisano periods are named after Leonardo Pisano, better known as Fibonacci. The periodicity of the Fibonacci numbers taken modulo "n" was noted by Joseph Louis Lagrange in 1774.
Definition.
The Fibonacci numbers are the numbers in the integer sequence:
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368, ... (sequence in the OEIS)
defined by the recurrence relation
formula_0
formula_1
formula_2
For any integer "n", the sequence of Fibonacci numbers "Fi" taken modulo "n" is periodic.
The Pisano period, denoted "π"("n"), is the length of the period of this sequence. For example, the sequence of Fibonacci numbers modulo 3 begins:
0, 1, 1, 2, 0, 2, 2, 1, 0, 1, 1, 2, 0, 2, 2, 1, 0, 1, 1, 2, 0, 2, 2, 1, 0, ... (sequence in the OEIS)
This sequence has period 8, so "π"(3) = 8.
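The period can be computed directly by iterating the Fibonacci recurrence modulo "n" until the initial pair (0, 1) reappears. A minimal C sketch (written for illustration; function and variable names are arbitrary):
#include <stdio.h>

/* Pisano period pi(n): length of the period of the Fibonacci numbers mod n, for n >= 2. */
static unsigned long pisano(unsigned long n) {
    unsigned long a = 0, b = 1, period = 0;   /* (a, b) = (F_i, F_{i+1}) mod n */
    do {
        unsigned long c = (a + b) % n;
        a = b;
        b = c;
        period++;
    } while (!(a == 0 && b == 1));            /* stop when the pair (0, 1) recurs */
    return period;
}

int main(void) {
    for (unsigned long n = 2; n <= 12; n++)
        printf("pi(%lu) = %lu\n", n, pisano(n));   /* e.g. pi(3) = 8, pi(10) = 60 */
    return 0;
}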
Properties.
With the exceptions of "π"(1) = 1 and "π"(2) = 3, the Pisano period "π"("n") is always even.
A proof of this can be given by observing that "π"("n") is equal to the order of the Fibonacci matrix
formula_3
in the general linear group formula_4 of invertible 2 by 2 matrices in the finite ring formula_5 of integers modulo "n". Since Q has determinant −1, the determinant of Q"π"("n") is (−1)"π"("n"), and since this must equal 1 in formula_5, either "n" ≤ 2 or "π"("n") is even.
Since formula_0 and formula_1 we have that formula_6 divides formula_7 and formula_8.
If "m" and "n" are coprime, then "π"("mn") is the least common multiple of "π"("m") and "π"("n"), by the Chinese remainder theorem. For example, "π"(3) = 8 and "π"(4) = 6 imply "π"(12) = 24. Thus the study of Pisano periods may be reduced to that of Pisano periods of prime powers "q" = "p""k", for "k" ≥ 1.
If "p" is prime, "π"("p""k") divides "p""k"–1 "π"("p"). It is unknown if
formula_9
for every prime "p" and integer "k" > 1. Any prime "p" providing a counterexample would necessarily be a Wall–Sun–Sun prime, and conversely every Wall–Sun–Sun prime "p" gives a counterexample (set "k" = 2).
So the study of Pisano periods may be further reduced to that of Pisano periods of primes. In this regard, two primes are anomalous. The prime 2 has an odd Pisano period, and the prime 5 has a period that is relatively much larger than the Pisano period of any other prime. The periods of powers of these primes are as follows: "π"(2"k") = 3 · 2"k"–1, and "π"(5"k") = 4 · 5"k".
From these it follows that if "n" = 2 · 5"k" then "π"("n") = 6"n".
The remaining primes all lie in the residue classes formula_10 or formula_11. If "p" is a prime different from 2 and 5, then the modulo "p" analogue of Binet's formula implies that "π"("p") is the multiplicative order of a root of "x"2 − "x" − 1 modulo "p". If formula_10, these roots belong to formula_12 (by quadratic reciprocity). Thus their order, "π"("p") is a divisor of "p" − 1. For example, "π"(11) = 11 − 1 = 10 and "π"(29) = (29 − 1)/2 = 14.
If formula_13 the roots modulo "p" of "x"2 − "x" − 1 do not belong to formula_14 (by quadratic reciprocity again), and belong to the finite field
formula_15
As the Frobenius automorphism formula_16 exchanges these roots, it follows that, denoting them by "r" and "s", we have "r" "p" = "s", and thus "r" "p"+1 = –1. That is "r" 2("p"+1) = 1, and the Pisano period, which is the order of "r", is the quotient of 2("p"+1) by an odd divisor. This quotient is always a multiple of 4. The first examples of such a "p", for which "π"("p") is smaller than 2("p"+1), are "π"(47) = 2(47 + 1)/3 = 32, "π"(107) = 2(107 + 1)/3 = 72 and "π"(113) = 2(113 + 1)/3 = 76. (See the table below)
It follows from above results, that if "n" = "p""k" is an odd prime power such that "π"("n") > "n", then "π"("n")/4 is an integer that is not greater than "n". The multiplicative property of Pisano periods imply thus that
"π"("n") ≤ 6"n", with equality if and only if "n" = 2 · 5"r", for "r" ≥ 1.
The first examples are "π"(10) = 60 and "π"(50) = 300. If "n" is not of the form 2 · 5"r", then "π"("n") ≤ 4"n".
Tables.
The first twelve Pisano periods (sequence in the OEIS) and their cycles (with spaces before the zeros for readability) are (using hexadecimal cyphers A and B for ten and eleven, respectively):
The first 144 Pisano periods are shown in the following table:
Pisano periods of Fibonacci numbers.
If "n" = "F"(2"k") ("k" ≥ 2), then π("n") = 4"k"; if "n" = "F"(2"k" + 1) ("k" ≥ 2), then π("n") = 8"k" + 4. That is, if the modulo base is a Fibonacci number (≥ 3) with an even index, the period is twice the index and the cycle has two zeros. If the base is a Fibonacci number (≥ 5) with an odd index, the period is four times the index and the cycle has four zeros.
Pisano periods of Lucas numbers.
If "n" = "L"(2"k") ("k" ≥ 1), then π("n") = 8"k"; if "n" = "L"(2"k" + 1) ("k" ≥ 1), then π("n") = 4"k" + 2. That is, if the modulo base is a Lucas number (≥ 3) with an even index, the period is four times the index. If the base is a Lucas number (≥ 4) with an odd index, the period is twice the index.
For even "k", the cycle has two zeros. For odd "k", the cycle has only one zero, and the second half of the cycle, which is of course equal to the part on the left of 0, consists of alternatingly numbers "F"(2"m" + 1) and "n" − "F"(2"m"), with "m" decreasing.
Number of zeros in the cycle.
The number of occurrences of 0 per cycle is 1, 2, or 4. Let "p" be the number after the first 0 after the combination 0, 1. Let the distance between the 0s be "q".
For generalized Fibonacci sequences (satisfying the same recurrence relation, but with other initial values, e.g. the Lucas numbers) the number of occurrences of 0 per cycle is 0, 1, 2, or 4.
The ratio of the Pisano period of "n" and the number of zeros modulo "n" in the cycle gives the "rank of apparition" or "Fibonacci entry point" of "n". That is, smallest index "k" such that "n" divides "F"("k"). They are:
1, 3, 4, 6, 5, 12, 8, 6, 12, 15, 10, 12, 7, 24, 20, 12, 9, 12, 18, 30, 8, 30, 24, 12, 25, 21, 36, 24, 14, 60, 30, 24, 20, 9, 40, 12, 19, 18, 28, 30, 20, 24, 44, 30, 60, 24, 16, 12, ... (sequence in the OEIS)
In Renault's paper the number of zeros is called the "order" of "F" mod "m", denoted formula_17, and the "rank of apparition" is called the "rank" and denoted formula_18.
According to Wall's conjecture, formula_19. If formula_20 has prime factorization formula_21 then formula_22.
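The rank of apparition can be computed in the same brute-force style as the period itself. In the following C sketch (illustrative only), rank_of_apparition(m) returns the smallest "k" with "m" dividing "F"("k"), and the ratio pisano(m)/rank_of_apparition(m) — the number of zeros per cycle — comes out as 1, 2 or 4:
#include <stdio.h>

/* Length of the period of the Fibonacci sequence mod m (m >= 2). */
static unsigned long pisano(unsigned long m) {
    unsigned long a = 0, b = 1, period = 0;
    do {
        unsigned long c = (a + b) % m;
        a = b; b = c; period++;
    } while (!(a == 0 && b == 1));
    return period;
}

/* Rank of apparition: smallest k >= 1 with F(k) divisible by m (m >= 2). */
static unsigned long rank_of_apparition(unsigned long m) {
    unsigned long a = 0, b = 1, k = 0;        /* a = F(k) mod m */
    do {
        unsigned long c = (a + b) % m;
        a = b; b = c; k++;
    } while (a != 0);
    return k;
}

int main(void) {
    for (unsigned long m = 2; m <= 12; m++) {
        unsigned long p = pisano(m), r = rank_of_apparition(m);
        printf("m = %2lu  pi = %3lu  rank = %2lu  zeros per cycle = %lu\n",
               m, p, r, p / r);
    }
    return 0;
}
For example, the sketch reports rank 3 for m = 2 (since F(3) = 2) and rank 5 for m = 5 (since F(5) = 5), matching the sequence above.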
Generalizations.
The Pisano periods of Lucas numbers are
1, 3, 8, 6, 4, 24, 16, 12, 24, 12, 10, 24, 28, 48, 8, 24, 36, 24, 18, 12, 16, 30, 48, 24, 20, 84, 72, 48, 14, 24, 30, 48, 40, 36, 16, 24, 76, 18, 56, 12, 40, 48, 88, 30, 24, 48, 32, ... (sequence in the OEIS)
The Pisano periods of Pell numbers (or 2-Fibonacci numbers) are
1, 2, 8, 4, 12, 8, 6, 8, 24, 12, 24, 8, 28, 6, 24, 16, 16, 24, 40, 12, 24, 24, 22, 8, 60, 28, 72, 12, 20, 24, 30, 32, 24, 16, 12, 24, 76, 40, 56, 24, 10, 24, 88, 24, 24, 22, 46, 16, ... (sequence in the OEIS)
The Pisano periods of 3-Fibonacci numbers are
1, 3, 2, 6, 12, 6, 16, 12, 6, 12, 8, 6, 52, 48, 12, 24, 16, 6, 40, 12, 16, 24, 22, 12, 60, 156, 18, 48, 28, 12, 64, 48, 8, 48, 48, 6, 76, 120, 52, 12, 28, 48, 42, 24, 12, 66, 96, 24, ... (sequence in the OEIS)
The Pisano periods of Jacobsthal numbers (or (1,2)-Fibonacci numbers) are
1, 1, 6, 2, 4, 6, 6, 2, 18, 4, 10, 6, 12, 6, 12, 2, 8, 18, 18, 4, 6, 10, 22, 6, 20, 12, 54, 6, 28, 12, 10, 2, 30, 8, 12, 18, 36, 18, 12, 4, 20, 6, 14, 10, 36, 22, 46, 6, ... (sequence in the OEIS)
The Pisano periods of (1,3)-Fibonacci numbers are
1, 3, 1, 6, 24, 3, 24, 6, 3, 24, 120, 6, 156, 24, 24, 12, 16, 3, 90, 24, 24, 120, 22, 6, 120, 156, 9, 24, 28, 24, 240, 24, 120, 48, 24, 6, 171, 90, 156, 24, 336, 24, 42, 120, 24, 66, 736, 12, ... (sequence in the OEIS)
The Pisano periods of Tribonacci numbers (or 3-step Fibonacci numbers) are
1, 4, 13, 8, 31, 52, 48, 16, 39, 124, 110, 104, 168, 48, 403, 32, 96, 156, 360, 248, 624, 220, 553, 208, 155, 168, 117, 48, 140, 1612, 331, 64, 1430, 96, 1488, 312, 469, 360, 2184, 496, 560, 624, 308, 440, 1209, 2212, 46, 416, ... (sequence in the OEIS)
The Pisano periods of Tetranacci numbers (or 4-step Fibonacci numbers) are
1, 5, 26, 10, 312, 130, 342, 20, 78, 1560, 120, 130, 84, 1710, 312, 40, 4912, 390, 6858, 1560, 4446, 120, 12166, 260, 1560, 420, 234, 1710, 280, 1560, 61568, 80, 1560, 24560, 17784, 390, 1368, 34290, 1092, 1560, 240, 22230, 162800, 120, 312, 60830, 103822, 520, ... (sequence in the OEIS)
See also generalizations of Fibonacci numbers.
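The same brute-force approach extends to these generalizations: only the recurrence changes. A C sketch for the "k"-Fibonacci case (again purely illustrative):
#include <stdio.h>

/* Pisano period of the k-Fibonacci sequence F_k(i) = k*F_k(i-1) + F_k(i-2)
   taken modulo n (n >= 2); k = 1 gives the ordinary Pisano periods. */
static unsigned long pisano_k(unsigned long k, unsigned long n) {
    unsigned long a = 0, b = 1, period = 0;
    do {
        unsigned long c = (k * b + a) % n;
        a = b; b = c; period++;
    } while (!(a == 0 && b == 1));
    return period;
}

int main(void) {
    /* k = 1: Fibonacci numbers, k = 2: Pell numbers, k = 3: 3-Fibonacci numbers. */
    for (unsigned long k = 1; k <= 3; k++) {
        printf("k = %lu:", k);
        for (unsigned long n = 2; n <= 10; n++)
            printf(" %lu", pisano_k(k, n));
        printf("\n");
    }
    return 0;
}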
Number theory.
Pisano periods can be analyzed using algebraic number theory.
Let formula_23 be the "n"-th Pisano period of the "k"-Fibonacci sequence "F""k"("n") ("k" can be any natural number; these sequences are defined as "F""k"(0) = 0, "F""k"(1) = 1, and for any natural number "n" > 1, "F""k"("n") = "kF""k"("n"−1) + "F""k"("n"−2)). If "m" and "n" are coprime, then formula_24, by the Chinese remainder theorem: two numbers are congruent modulo "mn" if and only if they are congruent modulo "m" and modulo "n", assuming the latter are coprime. For example, formula_25 and formula_26 so formula_27 Thus it suffices to compute Pisano periods for prime powers formula_28 (Usually, formula_29, unless "p" is a "k"-Wall–Sun–Sun prime, or "k"-Fibonacci–Wieferich prime, that is, "p"2 divides "F""k"("p" − 1) or "F""k"("p" + 1), where "F""k" is the "k"-Fibonacci sequence; for example, 241 is a 3-Wall–Sun–Sun prime, since 241² divides "F"3(242).)
For prime numbers "p", these can be analyzed by using Binet's formula:
formula_30 where formula_31 is the "k"th metallic mean
formula_32
If "k"2 + 4 is a quadratic residue modulo "p" (where "p" > 2 and "p" does not divide "k"2 + 4), then formula_33 and formula_34 can be expressed as integers modulo "p", and thus Binet's formula can be expressed over integers modulo "p", and thus the Pisano period divides the totient formula_35, since any power (such as formula_36) has period dividing formula_37 as this is the order of the group of units modulo "p".
For "k" = 1, this first occurs for "p" = 11, where 42 = 16 ≡ 5 (mod 11) and 2 · 6 = 12 ≡ 1 (mod 11) and 4 · 3 = 12 ≡ 1 (mod 11) so 4 = √5, 6 = 1/2 and 1/√5 = 3, yielding "φ" = (1 + 4) · 6 = 30 ≡ 8 (mod 11) and the congruence
formula_38
Another example, which shows that the period can properly divide "p" − 1, is "π"1(29) = 14.
If "k"2 + 4 is not a quadratic residue modulo "p", then Binet's formula is instead defined over the quadratic extension field formula_39, which has "p"2 elements and whose group of units thus has order "p"2 − 1, and thus the Pisano period divides "p"2 − 1. For example, for "p" = 3 one has "π"1(3) = 8 which equals 32 − 1 = 8; for "p" = 7, one has "π"1(7) = 16, which properly divides 72 − 1 = 48.
This analysis fails for "p" = 2 and "p" is a divisor of the squarefree part of "k"2 + 4, since in these cases are zero divisors, so one must be careful in interpreting 1/2 or formula_40. For "p" = 2, "k"2 + 4 is congruent to 1 mod 2 (for "k" odd), but the Pisano period is not "p" − 1 = 1, but rather 3 (in fact, this is also 3 for even "k"). For "p" divides the squarefree part of "k"2 + 4, the Pisano period is "π""k"("k"2 + 4) = "p"2 − "p" = "p"("p" − 1), which does not divide "p" − 1 or "p"2 − 1.
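These divisibility statements are easy to check empirically. The following C sketch (an illustrative check, not part of any proof) computes "π"("p") for the primes below 200 other than 2 and 5 and verifies that it divides "p" − 1 when "p" ≡ ±1 (mod 10) and divides 2("p" + 1) otherwise:
#include <stdio.h>

static int is_prime(unsigned long p) {
    if (p < 2) return 0;
    for (unsigned long d = 2; d * d <= p; d++)
        if (p % d == 0) return 0;
    return 1;
}

static unsigned long pisano(unsigned long n) {
    unsigned long a = 0, b = 1, period = 0;
    do {
        unsigned long c = (a + b) % n;
        a = b; b = c; period++;
    } while (!(a == 0 && b == 1));
    return period;
}

int main(void) {
    /* For primes p other than 2 and 5, pi(p) divides p - 1 when p = +-1 (mod 10),
       and divides 2(p + 1) when p = +-3 (mod 10). */
    for (unsigned long p = 3; p <= 200; p++) {
        if (!is_prime(p) || p == 5) continue;
        unsigned long pi = pisano(p);
        unsigned long r = p % 10;
        unsigned long bound = (r == 1 || r == 9) ? p - 1 : 2 * (p + 1);
        printf("p = %3lu  pi(p) = %3lu  divides %3lu: %s\n",
               p, pi, bound, bound % pi == 0 ? "yes" : "NO");
    }
    return 0;
}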
Fibonacci integer sequences modulo "n".
One can consider Fibonacci integer sequences and take them modulo "n", or put differently, consider Fibonacci sequences in the ring Z/"n"Z. The period is a divisor of π("n"). The number of occurrences of 0 per cycle is 0, 1, 2, or 4. If "n" is not a prime the cycles include those that are multiples of the cycles for the divisors. For example, for "n" = 10 the extra cycles include those for "n" = 2 multiplied by 5, and for "n" = 5 multiplied by 2.
Table of the extra cycles: (the original Fibonacci cycles are excluded) (using X and E for ten and eleven, respectively)
Number of Fibonacci integer cycles mod "n" are:
1, 2, 2, 4, 3, 4, 4, 8, 5, 6, 14, 10, 7, 8, 12, 16, 9, 16, 22, 16, 29, 28, 12, 30, 13, 14, 14, 22, 63, 24, 34, 32, 39, 34, 30, 58, 19, 86, 32, 52, 43, 58, 22, 78, 39, 46, 70, 102, ... (sequence in the OEIS)
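These counts can be reproduced by regarding the step (a, b) ↦ (b, a + b) as an invertible map on pairs of residues modulo "n" and counting its orbits; the count includes the all-zero sequence and the Fibonacci cycle itself. A C sketch (illustrative only; it simply marks visited pairs):
#include <stdio.h>
#include <string.h>

#define NMAX 16

/* Number of distinct cycles of the map (a, b) -> (b, (a + b) mod n)
   on all n*n pairs of residues; the Fibonacci cycle is one of them. */
static int count_cycles(int n) {
    static unsigned char visited[NMAX * NMAX];
    memset(visited, 0, sizeof visited);
    int cycles = 0;
    for (int a0 = 0; a0 < n; a0++)
        for (int b0 = 0; b0 < n; b0++) {
            if (visited[a0 * n + b0]) continue;
            cycles++;
            int a = a0, b = b0;
            do {                              /* walk the orbit and mark it */
                visited[a * n + b] = 1;
                int c = (a + b) % n;
                a = b; b = c;
            } while (!(a == a0 && b == b0));
        }
    return cycles;
}

int main(void) {
    for (int n = 1; n <= 12; n++)
        printf("n = %2d  cycles = %d\n", n, count_cycles(n));   /* 1, 2, 2, 4, 3, ... */
    return 0;
}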
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "F_0 = 0"
},
{
"math_id": 1,
"text": "F_1 = 1"
},
{
"math_id": 2,
"text": "F_i = F_{i-1} + F_{i-2}."
},
{
"math_id": 3,
"text": "\n\\mathbf Q = \\begin{bmatrix} 1 & 1\\\\1 & 0 \\end{bmatrix} \n"
},
{
"math_id": 4,
"text": "\\text{GL}_2(\\mathbb{Z}_n)"
},
{
"math_id": 5,
"text": "\\mathbb{Z}_n"
},
{
"math_id": 6,
"text": "n"
},
{
"math_id": 7,
"text": "F_{\\pi(n)}"
},
{
"math_id": 8,
"text": "(F_{\\pi(n)+1} - 1)"
},
{
"math_id": 9,
"text": "\n\\pi(p^k) = p^{k-1}\\pi(p)\n"
},
{
"math_id": 10,
"text": "p \\equiv \\pm 1 \\pmod {10}"
},
{
"math_id": 11,
"text": "p \\equiv \\pm 3 \\pmod {10}"
},
{
"math_id": 12,
"text": "\\mathbb{F}_{p} = \\mathbb{Z}/p\\mathbb{Z}"
},
{
"math_id": 13,
"text": "p \\equiv \\pm 3 \\pmod {10},"
},
{
"math_id": 14,
"text": "\\mathbb{F}_{p}"
},
{
"math_id": 15,
"text": "\n\\mathbb{F}_{p}[x]/(x^2 - x - 1).\n"
},
{
"math_id": 16,
"text": "x \\mapsto x^p"
},
{
"math_id": 17,
"text": "\\omega(m)"
},
{
"math_id": 18,
"text": "\\alpha(m)"
},
{
"math_id": 19,
"text": "\\alpha(p^e) = p^{e-1} \\alpha(p)"
},
{
"math_id": 20,
"text": "m"
},
{
"math_id": 21,
"text": "m = p_1^{e_1} p_2^{e_2} \\dots p_n^{e_n}"
},
{
"math_id": 22,
"text": "\\alpha(m) = \\operatorname{lcm}(\\alpha(p_1^{e_1}), \\alpha(p_2^{e_2}), \\dots, \\alpha(p_n^{e_n}))"
},
{
"math_id": 23,
"text": "\\pi_k(n)"
},
{
"math_id": 24,
"text": "\\pi_k(m\\cdot n) = \\mathrm{lcm}(\\pi_k(m),\\pi_k(n))"
},
{
"math_id": 25,
"text": "\\pi_1(3)=8"
},
{
"math_id": 26,
"text": "\\pi_1(4)=6,"
},
{
"math_id": 27,
"text": "\\pi_1(12=3\\cdot 4) = \\mathrm{lcm}(\\pi_1(3),\\pi_1(4))= \\mathrm{lcm}(8,6)=24."
},
{
"math_id": 28,
"text": "q=p^n."
},
{
"math_id": 29,
"text": "\\pi_k(p^n) = p^{n-1}\\cdot \\pi_k(p)"
},
{
"math_id": 30,
"text": "F_k\\left(n\\right) = {{\\varphi_k^n-(k-\\varphi_k)^n} \\over {\\sqrt {k^2+4}}}={{\\varphi_k^{n}-(-1/\\varphi_k)^{n}} \\over {\\sqrt {k^2+4}}},\\,"
},
{
"math_id": 31,
"text": "\\varphi_k"
},
{
"math_id": 32,
"text": "\\varphi_k = \\frac{k + \\sqrt{k^2+4}}{2}."
},
{
"math_id": 33,
"text": "\\sqrt{k^2+4}, 1/2,"
},
{
"math_id": 34,
"text": "k/\\sqrt{k^2+4}"
},
{
"math_id": 35,
"text": "\\phi(p)=p-1"
},
{
"math_id": 36,
"text": "\\varphi_k^n"
},
{
"math_id": 37,
"text": "\\phi(p),"
},
{
"math_id": 38,
"text": "F_1\\left(n\\right) \\equiv 3\\cdot \\left(8^n - 4^n\\right) \\pmod{11}."
},
{
"math_id": 39,
"text": "(\\mathbb{Z}/p)[\\sqrt{k^2+4}]"
},
{
"math_id": 40,
"text": "\\sqrt{k^2+4}"
}
] | https://en.wikipedia.org/wiki?curid=1127427 |
11274357 | Carbamoyl phosphate synthase II | Enzyme
Carbamoyl phosphate synthetase (glutamine-hydrolysing) (EC 6.3.5.5) is an enzyme that catalyzes the reactions that produce carbamoyl phosphate in the cytosol (as opposed to type I, which functions in the mitochondria). Its systematic name is "hydrogen-carbonate:L-glutamine amido-ligase (ADP-forming, carbamate-phosphorylating)".
In pyrimidine biosynthesis, it serves as the rate-limiting enzyme and catalyzes the following reaction:
2 ATP + L-glutamine + HCO3− + H2O formula_0 2 ADP + phosphate + L-glutamate + carbamoyl phosphate (overall reaction)
(1a) L-glutamine + H2O formula_0 L-glutamate + NH3
(1b) 2 ATP + HCO3− + NH3 formula_0 2 ADP + phosphate + carbamoyl phosphate
It is activated by ATP and PRPP, and inhibited by UTP (uridine triphosphate).
Neither CPSI nor CPSII requires biotin as a coenzyme, as seen with most carboxylation reactions.
It is one of the four functional enzymatic domains coded by the "CAD" gene. It is classified under EC 6.3.5.5.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] | https://en.wikipedia.org/wiki?curid=11274357 |
1127460 | Sylvester's law of inertia | Theorem of matrix algebra of invariance properties under basis transformations
Sylvester's law of inertia is a theorem in matrix algebra about certain properties of the coefficient matrix of a real quadratic form that remain invariant under a change of basis. Namely, if formula_0 is a symmetric matrix, then for any invertible matrix formula_1, the number of positive, negative and zero eigenvalues (called the inertia of the matrix) of formula_2 is constant. This result is particularly useful when formula_3 is diagonal, as the inertia of a diagonal matrix can easily be obtained by looking at the sign of its diagonal elements.
This property is named after James Joseph Sylvester who published its proof in 1852.
Statement.
Let formula_0 be a symmetric square matrix of order formula_4 with real entries. Any non-singular matrix formula_1 of the same size is said to transform formula_0 into another symmetric matrix "B" = "SAS"T, also of order "n", where formula_5 is the transpose of "S". It is also said that matrices formula_0 and formula_6 are congruent. If formula_0 is the coefficient matrix of some quadratic form of R"n", then formula_6 is the matrix for the same form after the change of basis defined by "S".
A symmetric matrix formula_0 can always be transformed in this way into a diagonal matrix
formula_3 which has only entries 0, +1, −1 along the diagonal. Sylvester's law of inertia states that the number of diagonal entries of each kind is an invariant of "A", i.e. it does not depend on the matrix formula_1 used.
The number of +1s, denoted "n"+, is called the positive index of inertia of "A", and the number of −1s, denoted "n"−, is called the negative index of inertia. The number of 0s, denoted "n"0, is the dimension of the null space of "A", known as the nullity of "A". These numbers satisfy an obvious relation
formula_7
The difference, "n"+ − "n"−, is usually called the signature of "A". (However, some authors use that term for the triple
formula_8 consisting of the nullity and the positive and negative indices of inertia of "A"; for a non-degenerate form of a given dimension these are equivalent data, but in general the triple yields more data.)
If the matrix formula_0 has the property that every principal upper left formula_9 minor formula_10 is non-zero then the negative index of inertia is equal to the number of sign changes in the sequence
formula_11
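As a small worked check of this criterion, the following C program (the 3×3 symmetric matrix is an arbitrary example chosen so that all leading principal minors are non-zero) counts the sign changes in the sequence Δ0 = 1, Δ1, Δ2, Δ3 and reads off the inertia:
#include <stdio.h>

int main(void) {
    /* A symmetric matrix with nonzero leading principal minors. */
    double A[3][3] = {{1, 2, 0},
                      {2, 1, 0},
                      {0, 0, 3}};

    /* Leading principal minors Delta_1, Delta_2, Delta_3. */
    double d1 = A[0][0];
    double d2 = A[0][0] * A[1][1] - A[0][1] * A[1][0];
    double d3 = A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
              - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
              + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]);

    double seq[4] = {1.0, d1, d2, d3};   /* Delta_0 = 1 by convention */
    int sign_changes = 0;
    for (int i = 0; i < 3; i++)
        if (seq[i] * seq[i + 1] < 0.0)
            sign_changes++;

    int n_minus = sign_changes;          /* negative index of inertia */
    int n_plus  = 3 - n_minus;           /* all minors nonzero, so the nullity is 0 */
    printf("minors: %g %g %g\n", d1, d2, d3);                    /* 1 -3 -9 */
    printf("inertia: n+ = %d, n- = %d, n0 = 0\n", n_plus, n_minus);  /* 2, 1 */
    return 0;
}
For this matrix the minors are 1, −3, −9, giving one sign change, hence "n"− = 1 and "n"+ = 2; this agrees with the eigenvalues 3, −1, 3 of the matrix.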
Statement in terms of eigenvalues.
The law can also be stated as follows: two symmetric square matrices of the same size have the same number of positive, negative and zero eigenvalues if and only if they are congruent ("B" = "SAS"T, for some non-singular "S").
The positive and negative indices of a symmetric matrix formula_0 are also the number of positive and negative eigenvalues of "A". Any symmetric real matrix formula_0 has an eigendecomposition of the form formula_12 where formula_13 is a diagonal matrix containing the eigenvalues of "A", and formula_14 is an orthonormal square matrix containing the eigenvectors. The matrix formula_13 can be written formula_15 where formula_3 is diagonal with entries 0, +1, −1, and formula_16 is diagonal with "W""ii" = √|"E""ii"|. The matrix formula_17 transforms formula_3 to "A".
Law of inertia for quadratic forms.
In the context of quadratic forms, a real quadratic form formula_14 in formula_4 variables (or on an formula_4-dimensional real vector space) can by a suitable change of basis (by non-singular linear transformation from formula_18 to "y") be brought to the diagonal form
formula_19
with each formula_20. Sylvester's law of inertia states that the number of coefficients of a given sign is an invariant of "Q", i.e., does not depend on a particular choice of diagonalizing basis. Expressed geometrically, the law of inertia says that all maximal subspaces on which the restriction of the quadratic form is positive definite (respectively, negative definite) have the same dimension. These dimensions are the positive and negative indices of inertia.
Generalizations.
Sylvester's law of inertia is also valid if formula_0 and formula_6 have complex entries. In this case, it is said that formula_0 and formula_6 are formula_21-congruent if and only if there exists a non-singular complex matrix formula_1 such that "B" = "SAS"*, where formula_21 denotes the conjugate transpose. In the complex scenario, a way to state Sylvester's law of inertia is that if formula_0 and formula_6 are Hermitian matrices, then formula_0 and formula_6 are formula_21-congruent if and only if they have the same inertia, the definition of which is still valid as the eigenvalues of Hermitian matrices are always real numbers.
Ostrowski proved a quantitative generalization of Sylvester's law of inertia: if formula_0 and formula_6 are formula_21-congruent with "B" = "SAS"*, then their eigenvalues formula_22 are related by
formula_23
where formula_24 are positive real numbers bounded below and above by the smallest and largest eigenvalues of "SS"*.
A theorem due to Ikramov generalizes the law of inertia to any normal matrices formula_0 and "B": If formula_0 and formula_6 are normal matrices, then formula_0 and formula_6 are congruent if and only if they have the same number of eigenvalues on each open ray from the origin in the complex plane.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "S"
},
{
"math_id": 2,
"text": "D=SAS^\\mathrm{T}"
},
{
"math_id": 3,
"text": "D"
},
{
"math_id": 4,
"text": "n"
},
{
"math_id": 5,
"text": "S^\\mathrm{T}"
},
{
"math_id": 6,
"text": "B"
},
{
"math_id": 7,
"text": " n_0+n_{+}+n_{-}=n."
},
{
"math_id": 8,
"text": "(n_0,n_+,n_-)"
},
{
"math_id": 9,
"text": "k\\times k"
},
{
"math_id": 10,
"text": "\\Delta_k"
},
{
"math_id": 11,
"text": " \\Delta_0=1, \\Delta_1, \\ldots, \\Delta_n=\\det A. "
},
{
"math_id": 12,
"text": "QEQ^\\mathrm{T}"
},
{
"math_id": 13,
"text": "E"
},
{
"math_id": 14,
"text": "Q"
},
{
"math_id": 15,
"text": "E=WDW^\\mathrm{T}"
},
{
"math_id": 16,
"text": "W"
},
{
"math_id": 17,
"text": "S=QW"
},
{
"math_id": 18,
"text": "x"
},
{
"math_id": 19,
"text": " Q(x_1,x_2,\\ldots,x_n)=\\sum_{i=1}^n a_i x_i^2 "
},
{
"math_id": 20,
"text": "a_i \\in \\{ 0,1,-1\\}"
},
{
"math_id": 21,
"text": "*"
},
{
"math_id": 22,
"text": "\\lambda_i"
},
{
"math_id": 23,
"text": "\\lambda_{i} (B) = \\theta_{i} \\lambda_{i}(A), \\quad i =1,\\ldots,n"
},
{
"math_id": 24,
"text": "\\theta_i"
}
] | https://en.wikipedia.org/wiki?curid=1127460 |
11277623 | Harish-Chandra character | In mathematics, the Harish-Chandra character, named after Harish-Chandra, of a representation of a semisimple Lie group "G" on a Hilbert space "H" is a distribution on the group "G" that is analogous to the character of a finite-dimensional representation of a compact group.
Definition.
Suppose that π is an irreducible unitary representation of "G" on a Hilbert space "H".
If "f" is a compactly supported smooth function on the group "G", then the operator on "H"
formula_0
is of trace class, and the distribution
formula_1
is called the character (or global character or Harish-Chandra character) of the representation.
The character Θπ is a distribution on "G" that is invariant under conjugation, and is an eigendistribution of the center of
the universal enveloping algebra of "G", in other words an invariant eigendistribution, with eigenvalue the infinitesimal character of the representation π.
Harish-Chandra's regularity theorem states that any invariant eigendistribution, and in particular any character of an irreducible unitary representation on a Hilbert space, is given by a locally integrable function. | [
{
"math_id": 0,
"text": "\\pi(f) = \\int_Gf(x)\\pi(x)\\,dx"
},
{
"math_id": 1,
"text": "\\Theta_\\pi:f\\mapsto \\operatorname{Tr}(\\pi(f))"
}
] | https://en.wikipedia.org/wiki?curid=11277623 |