id | title | text | formulas | url
---|---|---|---|---|
1231708 | Ramanujan tau function | The Ramanujan tau function, studied by Ramanujan (1916), is the function formula_0 defined by the following identity:
formula_1
where "q"
exp(2"πiz") with Im "z" > 0, formula_2 is the Euler function, η is the Dedekind eta function, and the function Δ("z") is a holomorphic cusp form of weight 12 and level 1, known as the discriminant modular form (some authors, notably Apostol, write formula_3 instead of formula_4). It appears in connection to an "error term" involved in counting the number of ways of expressing an integer as a sum of 24 squares. A formula due to Ian G. Macdonald was given in .
Values.
The first few values of the tau function are given in the following table (sequence in the OEIS):
Calculating this function on an odd square number (i.e. a centered octagonal number) yields an odd number, whereas for any other number the function yields an even number.
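Since the table of values is not reproduced here, the first few values can be recovered directly from the defining identity above by expanding q∏(1 − "q""n")24 as a power series. The following Python sketch is illustrative only (the function name tau_coefficients and the truncation bound N are not from the article):

```python
# Sketch (not from the article): compute tau(n) by expanding the defining
# identity  sum tau(n) q^n = q * prod_{k>=1} (1 - q^k)^24  as a power series.

def tau_coefficients(N):
    """Return a list c with c[n] = tau(n) for 1 <= n <= N (c[0] unused)."""
    # Start with the series q: coefficient 1 at degree 1.
    series = [0] * (N + 1)
    series[1] = 1
    # Multiply by (1 - q^k)^24 for each k, truncating at degree N.
    for k in range(1, N + 1):
        for _ in range(24):
            new = series[:]
            for n in range(k, N + 1):
                new[n] -= series[n - k]
            series = new
    return series

taus = tau_coefficients(10)
print(taus[1:])  # 1, -24, 252, -1472, 4830, -6048, -16744, 84480, -113643, -115920
```

The output reproduces the well-known initial values, and the parity observation above can be checked on it: in this range τ("n") is odd exactly at "n" = 1 and "n" = 9, the odd squares.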
Ramanujan's conjectures.
Ramanujan (1916) observed, but did not prove, the following three properties of "τ"("n"):
"τ"("mn") = "τ"("m")"τ"("n") if gcd("m","n") = 1 (meaning that "τ"("n") is a multiplicative function)
"τ"("p""r"+1) = "τ"("p")"τ"("p""r") − "p"11 "τ"("p""r" − 1) for "p" prime and "r" > 0
|"τ"("p")| ≤ 2"p"11/2 for all primes "p".
The first two properties were proved by Mordell (1917) and the third one, called the Ramanujan conjecture, was proved by Deligne in 1974 as a consequence of his proof of the Weil conjectures (specifically, he deduced it by applying them to a Kuga-Sato variety).
Congruences for the tau function.
For "k" ∈ formula_5 and "n" ∈ formula_5>0, the Divisor function "σ""k"("n") is the sum of the kth powers of the divisors of n. The tau function satisfies several congruence relations; many of them can be expressed in terms of "σ""k"("n"). Here are some:
For "p" ≠ 23 prime, we have
Explicit formula.
In 1975 Douglas Niebur proved an explicit formula for the Ramanujan tau function:
formula_16
where σ("n") is the sum of the positive divisors of n.
Conjectures on "τ"("n").
Suppose that f is a weight-k integer newform and the Fourier coefficients "a"("n") are integers. Consider the problem:
Given that f does not have complex multiplication, do almost all primes p have the property that "a"("p") ≢ 0 (mod "p")?
Indeed, most primes should have this property, and hence they are called "ordinary". Despite the big advances by Deligne and Serre on Galois representations, which determine "a"("n") (mod "p") for n coprime to p, it is unclear how to compute "a"("p") (mod "p"). The only theorem in this regard is Elkies' famous result for modular elliptic curves, which guarantees that there are infinitely many primes p such that "a"("p") = 0, which thus are congruent to 0 modulo "p". There are no known examples of non-CM f with weight greater than 2 for which "a"("p") ≢ 0 (mod "p") for infinitely many primes p (although it should be true for almost all p). There are also no known examples with "a"("p") ≡ 0 (mod "p") for infinitely many p. Some researchers had begun to doubt whether "a"("p") ≡ 0 (mod "p") for infinitely many p. As evidence, many provided Ramanujan's "τ"("p") (case of weight 12). The only solutions up to 1010 to the equation "τ"("p") ≡ 0 (mod "p") are 2, 3, 5, 7, 2411, and (sequence in the OEIS).
Lehmer conjectured that "τ"("n") ≠ 0 for all n, an assertion sometimes known as Lehmer's conjecture. Lehmer verified the conjecture for n up to (Apostol 1997, p. 22). The following table summarizes progress on finding successively larger values of N for which this condition holds for all "n" ≤ "N".
Ramanujan's "L"-function.
Ramanujan's "L"-function is defined by
formula_17
if formula_18 and by analytic continuation otherwise. It satisfies the functional equation
formula_19
and has the Euler product
formula_20
Ramanujan conjectured that all nontrivial zeros of formula_21 have real part equal to formula_22.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\tau : \\mathbb{N} \\rarr\\mathbb{Z}"
},
{
"math_id": 1,
"text": "\\sum_{n\\geq 1}\\tau(n)q^n=q\\prod_{n\\geq 1}\\left(1-q^n\\right)^{24} = q\\phi(q)^{24} = \\eta(z)^{24}=\\Delta(z),"
},
{
"math_id": 2,
"text": "\\phi"
},
{
"math_id": 3,
"text": "\\Delta/(2\\pi)^{12}"
},
{
"math_id": 4,
"text": "\\Delta"
},
{
"math_id": 5,
"text": "\\mathbb{Z}"
},
{
"math_id": 6,
"text": "\\tau(n)\\equiv\\sigma_{11}(n)\\ \\bmod\\ 2^{11}\\text{ for }n\\equiv 1\\ \\bmod\\ 8"
},
{
"math_id": 7,
"text": "\\tau(n)\\equiv 1217 \\sigma_{11}(n)\\ \\bmod\\ 2^{13}\\text{ for } n\\equiv 3\\ \\bmod\\ 8"
},
{
"math_id": 8,
"text": "\\tau(n)\\equiv 1537 \\sigma_{11}(n)\\ \\bmod\\ 2^{12}\\text{ for }n\\equiv 5\\ \\bmod\\ 8"
},
{
"math_id": 9,
"text": "\\tau(n)\\equiv 705 \\sigma_{11}(n)\\ \\bmod\\ 2^{14}\\text{ for }n\\equiv 7\\ \\bmod\\ 8"
},
{
"math_id": 10,
"text": "\\tau(n)\\equiv n^{-610}\\sigma_{1231}(n)\\ \\bmod\\ 3^{6}\\text{ for }n\\equiv 1\\ \\bmod\\ 3"
},
{
"math_id": 11,
"text": "\\tau(n)\\equiv n^{-610}\\sigma_{1231}(n)\\ \\bmod\\ 3^{7}\\text{ for }n\\equiv 2\\ \\bmod\\ 3"
},
{
"math_id": 12,
"text": "\\tau(n)\\equiv n^{-30}\\sigma_{71}(n)\\ \\bmod\\ 5^{3}\\text{ for }n\\not\\equiv 0\\ \\bmod\\ 5"
},
{
"math_id": 13,
"text": "\\tau(n)\\equiv n\\sigma_{9}(n)\\ \\bmod\\ 7"
},
{
"math_id": 14,
"text": "\\tau(n)\\equiv n\\sigma_{9}(n)\\ \\bmod\\ 7^2\\text{ for }n\\equiv 3,5,6\\ \\bmod\\ 7"
},
{
"math_id": 15,
"text": "\\tau(n)\\equiv\\sigma_{11}(n)\\ \\bmod\\ 691."
},
{
"math_id": 16,
"text": "\\tau(n)=n^4\\sigma(n)-24\\sum_{i=1}^{n-1}i^2(35i^2-52in+18n^2)\\sigma(i)\\sigma(n-i)."
},
{
"math_id": 17,
"text": "L(s)=\\sum_{n\\ge 1}\\frac{\\tau (n)}{n^s}"
},
{
"math_id": 18,
"text": "\\Re s>6"
},
{
"math_id": 19,
"text": "\\frac{L(s)\\Gamma (s)}{(2\\pi)^s}=\\frac{L(12-s)\\Gamma(12-s)}{(2\\pi)^{12-s}},\\quad s\\notin\\mathbb{Z}_0^-, \\,12-s\\notin\\mathbb{Z}_0^{-}"
},
{
"math_id": 20,
"text": "L(s)=\\prod_{p\\,\\text{prime}}\\frac{1}{1-\\tau (p)p^{-s}+p^{11-2s}},\\quad \\Re s>7."
},
{
"math_id": 21,
"text": "L"
},
{
"math_id": 22,
"text": "6"
}
] | https://en.wikipedia.org/wiki?curid=1231708 |
1231733 | Airglow | Faint emission of light by a planetary atmosphere
Airglow (also called nightglow) is a faint emission of light by a planetary atmosphere. In the case of Earth's atmosphere, this optical phenomenon causes the night sky never to be completely dark, even after the effects of starlight and diffused sunlight from the far side are removed. This phenomenon originates with self-illuminated gases and has no relationship with Earth's magnetism or sunspot activity.
History.
The airglow phenomenon was first identified in 1868 by Swedish physicist Anders Ångström. Since then, it has been studied in the laboratory, and various chemical reactions have been observed to emit electromagnetic energy as part of the process. Scientists have identified some of those processes that would be present in Earth's atmosphere, and astronomers have verified that such emissions are present. Simon Newcomb was the first person to scientifically study and describe airglow, in 1901.
Airglow existed in pre-industrial society and was known to the ancient Greeks. "Aristotle and Pliny described the phenomena of "Chasmata", which can be identified in part as auroras, and in part as bright airglow nights."
Description.
Airglow is caused by various processes in the upper atmosphere of Earth, such as the recombination of atoms which were photoionized by the Sun during the day, luminescence caused by cosmic rays striking the upper atmosphere, and chemiluminescence caused mainly by oxygen and nitrogen reacting with hydroxyl free radicals at heights of a few hundred kilometres. It is not noticeable during the daytime due to the glare and scattering of sunlight.
Even at the best ground-based observatories, airglow limits the photosensitivity of optical telescopes. Partly for this reason, space telescopes like Hubble can observe much fainter objects than current ground-based telescopes at visible wavelengths.
Airglow at night may be bright enough for a ground observer to notice and appears generally bluish. Although airglow emission is fairly uniform across the atmosphere, it appears brightest at about 10° above the observer's horizon, since the lower one looks, the greater the mass of atmosphere one is looking through. Very low down, however, atmospheric extinction reduces the apparent brightness of the airglow.
One airglow mechanism is when an atom of nitrogen combines with an atom of oxygen to form a molecule of nitric oxide (NO). In the process, a photon is emitted. This photon may have any of several different wavelengths characteristic of nitric oxide molecules. The free atoms are available for this process, because molecules of nitrogen (N2) and oxygen (O2) are dissociated by solar energy in the upper reaches of the atmosphere and may encounter each other to form NO. Other chemicals that can create air glow in the atmosphere are hydroxyl (OH), atomic oxygen (O), sodium (Na), and lithium (Li).
The sky brightness is typically measured in units of apparent magnitude per square arcsecond of sky.
Calculation.
In order to calculate the relative intensity of airglow, we need to convert apparent magnitudes into fluxes of photons; this clearly depends on the spectrum of the source, but we will ignore that initially. At visible wavelengths, we need the parameter "S"0("V"), the power per square centimetre of aperture and per micrometre of wavelength produced by a zeroth-magnitude star, to convert apparent magnitudes into fluxes. If we take the example of a "V" = 28 star observed through a normal "V" band filter (bandpass "B", frequency ν), the number of photons we receive per square centimeter of telescope aperture per second from the source is "N"s:
formula_0
(where "h" is the Planck constant; "hν" is the energy of a single photon of frequency "ν").
At "V" band, the emission from airglow is "V"
22 per square arc-second at a high-altitude observatory on a moonless night; in excellent seeing conditions, the image of a star will be about 0.7 arc-second across with an area of 0.4 square arc-second, and so the emission from airglow over the area of the image corresponds to about "V"
23. This gives the number of photons from airglow, "N"a:
formula_1
The signal-to-noise for an ideal ground-based observation with a telescope of area "A" (ignoring losses and detector noise), arising from Poisson statistics, is only:
formula_2
If we assume a 10 m diameter ideal ground-based telescope and an unresolved star: every second, over a patch the size of the seeing-enlarged image of the star, 35 photons arrive from the star and 3500 from air-glow. So, over an hour, roughly 13 million photons arrive from the air-glow, and approximately 130,000 arrive from the source; so the "S"/"N" ratio is about:
formula_3
We can compare this with "real" answers from exposure time calculators. For an 8 m unit Very Large Telescope telescope, according to the FORS exposure time calculator, 40 hours of observing time are needed to reach "V" = 28, while the 2.4 m Hubble only takes 4 hours according to the ACS exposure time calculator. A hypothetical 8 m Hubble telescope would take about 30 minutes.
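The back-of-the-envelope number behind these comparisons can be reproduced with a few lines of code. The sketch below (not part of the original article) simply redoes the Poisson signal-to-noise estimate from the rates quoted above, 35 photons/s from the star and 3500 photons/s from the airglow over the seeing disc of a 10 m telescope:

```python
# Sketch (not from the article): the ideal-telescope S/N estimate from the text.
import math

star_rate = 35.0     # photons per second from the V = 28 star (whole aperture)
sky_rate = 3500.0    # photons per second from airglow over the same patch
t = 3600.0           # one hour of integration, in seconds

N_star = star_rate * t              # about 1.3e5 photons
N_sky = sky_rate * t                # about 1.3e7 photons
snr = N_star / math.sqrt(N_star + N_sky)
print(f"S/N = {snr:.1f}")           # about 35, in line with the estimate above
```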
It should be clear from this calculation that reducing the view field size can make fainter objects more detectable against the airglow; unfortunately, adaptive optics techniques that reduce the diameter of the view field of an Earth-based telescope by an order of magnitude only as yet work in the infrared, where the sky is much brighter. A space telescope isn't restricted by the view field, since it is not affected by airglow.
Induced airglow.
Scientific experiments have been conducted to induce airglow by directing high-power radio emissions at the Earth's ionosphere. These radiowaves interact with the ionosphere to induce faint but visible optical light at specific wavelengths under certain conditions.
The effect is also observable in the radio frequency band, using ionosondes.
Experimental observation.
SwissCube-1 is a Swiss satellite operated by Ecole Polytechnique Fédérale de Lausanne. The spacecraft is a single unit CubeSat, which was designed to conduct research into airglow within the Earth's atmosphere and to develop technology for future spacecraft. Though SwissCube-1 is rather small (10 cm × 10 cm × 10 cm) and weighs less than 1 kg, it carries a small telescope for obtaining images of the airglow. The first SwissCube-1 image came down on 18 February 2011 and was quite black with some thermal noise on it. The first airglow image came down on 3 March 2011. This image has been converted to the human optical range (green) from its near-infrared measurement. This image provides a measurement of the intensity of the airglow phenomenon in the near-infrared. The range measured is from 500 to 61400 photons, with a resolution of 500 photons.
Observation of airglow on other planets.
The "Venus Express" spacecraft contains an infrared sensor which has detected near-IR emissions from the upper atmosphere of Venus. The emissions come from nitric oxide (NO) and from molecular oxygen.
Scientists had previously determined in laboratory testing that during NO production, ultraviolet emissions and near-IR emissions were produced. The UV radiation had been detected in the atmosphere, but until this mission, the atmosphere-produced near-IR emissions were only theoretical.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "N_\\text{s} = 10^{-28/2.5}\\times\\frac{S_0(V) \\times B}{h\\nu}"
},
{
"math_id": 1,
"text": "N_\\text{a} = 10^{-23/2.5}\\times\\frac{S_0(V) \\times B}{h\\nu}"
},
{
"math_id": 2,
"text": "S/N = \\sqrt{A}\\times\\frac{N_\\text{s}}{\\sqrt{N_\\text{s}+N_\\text{a}}}"
},
{
"math_id": 3,
"text": "\\frac{1.3 \\times 10^5}{\\sqrt{1.3 \\times 10^7}} \\approx 36."
}
] | https://en.wikipedia.org/wiki?curid=1231733 |
1231797 | Extinction (astronomy) | Interstellar absorption and scattering of light
In astronomy, extinction is the absorption and scattering of electromagnetic radiation by dust and gas between an emitting astronomical object and the observer. Interstellar extinction was first documented as such in 1930 by Robert Julius Trumpler. However, its effects had been noted in 1847 by Friedrich Georg Wilhelm von Struve, and its effect on the colors of stars had been observed by a number of individuals who did not connect it with the general presence of galactic dust. For stars lying near the plane of the Milky Way which are within a few thousand parsecs of the Earth, extinction in the visual band of frequencies (photometric system) is roughly 1.8 magnitudes per kiloparsec.
For Earth-bound observers, extinction arises both from the interstellar medium and the Earth's atmosphere; it may also arise from circumstellar dust around an observed object. Strong extinction in Earth's atmosphere of some wavelength regions (such as X-ray, ultraviolet, and infrared) is overcome by the use of space-based observatories. Since blue light is much more strongly attenuated than red light, extinction causes objects to appear redder than expected; this phenomenon is called interstellar reddening.
Interstellar reddening.
Interstellar reddening is a phenomenon associated with interstellar extinction where the spectrum of electromagnetic radiation from a radiation source changes characteristics from that which the object originally emitted. Reddening occurs due to the light scattering off dust and other matter in the interstellar medium. Interstellar reddening is a different phenomenon from redshift, which is the proportional frequency shifts of spectra without distortion. Reddening preferentially removes shorter wavelength photons from a radiated spectrum while leaving behind the longer wavelength photons, leaving the spectroscopic lines unchanged.
In most photometric systems, filters (passbands) are used from which readings of magnitude of light may take account of latitude and humidity among terrestrial factors. Interstellar reddening equates to the "color excess", defined as the difference between an object's observed color index and its intrinsic color index (sometimes referred to as its normal color index). The latter is the theoretical value which it would have if unaffected by extinction. In the first system, the UBV photometric system devised in the 1950s and its most closely related successors, the object's color excess formula_0 is related to the object's B−V color (calibrated blue minus calibrated visible) by:
formula_1
For an A0-type main sequence star (these have median wavelength and heat among the main sequence) the color indices are calibrated at 0 based on an intrinsic reading of such a star (± exactly 0.02 depending on which spectral point, i.e. precise passband within the abbreviated color name is in question, see color index). At least two and up to five measured passbands in magnitude are then compared by subtraction: U, B, V, I, or R during which the color excess from extinction is calculated and deducted. The name of the four sub-indices (R minus I etc.) and order of the subtraction of recalibrated magnitudes is from right to immediate left within this sequence.
General characteristics.
Interstellar reddening occurs because interstellar dust absorbs and scatters blue light waves more than red light waves, making stars appear redder than they are. This is similar to the effect seen when dust particles in the atmosphere of Earth contribute to red sunsets.
Broadly speaking, interstellar extinction is strongest at short wavelengths, generally observed by using techniques from spectroscopy. Extinction results in a change in the shape of an observed spectrum. Superimposed on this general shape are absorption features (wavelength bands where the intensity is lowered) that have a variety of origins and can give clues as to the chemical composition of the interstellar material, e.g. dust grains. Known absorption features include the 2175 Å bump, the diffuse interstellar bands, the 3.1 μm water ice feature, and the 10 and 18 μm silicate features.
In the solar neighborhood, the rate of interstellar extinction in the Johnson–Cousins V-band (visual filter) averaged at a wavelength of 540 nm is usually taken to be 0.7–1.0 mag/kpc, simply an average due to the "clumpiness" of interstellar dust. In general, however, this means that a star will have its brightness reduced by about a factor of 2 in the V-band viewed from a good night sky vantage point on Earth for every kiloparsec (3,260 light years) it is farther away from us.
The amount of extinction can be significantly higher than this in specific directions. For example, some regions of the Galactic Center are awash with obvious intervening dark dust from our spiral arm (and perhaps others) and themselves in a bulge of dense matter, causing more than 30 magnitudes of extinction in the optical, meaning that less than 1 optical photon in 1012 passes through. This results in the zone of avoidance, where our view of the extra-galactic sky is severely hampered, and background galaxies, such as Dwingeloo 1, were only discovered recently through observations in radio and infrared.
The general shape of the ultraviolet through near-infrared (0.125 to 3.5 μm) extinction curve (plotting extinction in magnitude against wavelength, often inverted) looking from our vantage point at other objects in the Milky Way, is fairly well characterized by the stand-alone parameter of relative visibility (of such visible light) R(V) (which is different along different lines of sight), but there are known deviations from this characterization. Extending the extinction law into the mid-infrared wavelength range is difficult due to the lack of suitable targets and various contributions by absorption features.
R(V) compares aggregate and particular extinctions. It is A(V)/E(B−V). Restated, it is the total extinction, A(V) divided by the selective total extinction (A(B)−A(V)) of those two wavelengths (bands). A(B) and A(V) are the "total extinction" at the B and V filter bands. Another measure used in the literature is the "absolute extinction" A(λ)/A(V) at wavelength λ, comparing the total extinction at that wavelength to that at the V band.
R(V) is known to be correlated with the average size of the dust grains causing the extinction. For the Milky Way Galaxy, the typical value for R(V) is 3.1, but is found to vary considerably across different lines of sight. As a result, when computing cosmic distances it can be advantageous to move to star data from the near-infrared (of which the filter or passband Ks is quite standard) where the variations and amount of extinction are significantly less, and similar ratios as to R(Ks): 0.49±0.02 and 0.528±0.015 were found respectively by independent groups. Those two more modern findings differ substantially relative to the commonly referenced historical value ≈0.7.
The relationship between the total extinction, A(V) (measured in magnitudes), and the column density of neutral hydrogen atoms, NH (usually measured in cm−2), shows how the gas and dust in the interstellar medium are related. From studies using ultraviolet spectroscopy of reddened stars and X-ray scattering halos in the Milky Way, Predehl and Schmitt found the relationship between NH and A(V) to be approximately:
formula_2
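For a concrete sense of these relations, the following sketch (not from the article) combines the typical Milky Way value R(V) = 3.1 with the Predehl and Schmitt ratio above; the example colour excess of 0.3 mag is arbitrary:

```python
# Sketch (not from the article): relating E(B-V), A(V) and the hydrogen
# column density N_H with R(V) = 3.1 and the ratio quoted above.

R_V = 3.1              # typical Milky Way value of A(V)/E(B-V)
NH_PER_AV = 1.8e21     # atoms cm^-2 per magnitude of A(V)

def A_V_from_EBV(ebv):
    """Total V-band extinction in magnitudes from a colour excess."""
    return R_V * ebv

def NH_from_AV(a_v):
    """Estimated neutral-hydrogen column density (cm^-2) from A(V)."""
    return NH_PER_AV * a_v

ebv = 0.3              # example colour excess, in magnitudes
a_v = A_V_from_EBV(ebv)
print(f"A(V) = {a_v:.2f} mag, N_H = {NH_from_AV(a_v):.2e} cm^-2")
```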
Astronomers have determined the three-dimensional distribution of extinction in the "solar circle" (our region of our galaxy), using visible and near-infrared stellar observations and a model of distribution of stars. The dust causing extinction mainly lies along the spiral arms, as observed in other spiral galaxies.
Measuring extinction towards an object.
To measure the extinction curve for a star, the star's spectrum is compared to the observed spectrum of a similar star known not to be affected by extinction (unreddened). It is also possible to use a theoretical spectrum instead of the observed spectrum for the comparison, but this is less common. In the case of emission nebulae, it is common to look at the ratio of two emission lines which should not be affected by the temperature and density in the nebula. For example, the ratio of hydrogen-alpha to hydrogen-beta emission is always around 2.85 under a wide range of conditions prevailing in nebulae. A ratio other than 2.85 must therefore be due to extinction, and the amount of extinction can thus be calculated.
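A hedged sketch of this procedure for emission nebulae is shown below. The intrinsic ratio of 2.85 is the value quoted above; the extinction-curve values k(Hβ) ≈ 3.61 and k(Hα) ≈ 2.53 are assumed here as typical of an R(V) = 3.1 Milky Way curve and are not taken from this article:

```python
# Sketch (not from the article): estimating reddening from the Balmer decrement.
# K_HBETA and K_HALPHA are assumed extinction-curve values, not from the text.
import math

K_HBETA, K_HALPHA = 3.61, 2.53
INTRINSIC_RATIO = 2.85

def ebv_from_balmer(observed_ratio):
    """Colour excess E(B-V) implied by an observed H-alpha/H-beta flux ratio."""
    return 2.5 / (K_HBETA - K_HALPHA) * math.log10(observed_ratio / INTRINSIC_RATIO)

print(f"E(B-V) = {ebv_from_balmer(4.0):.2f} mag")  # a ratio of 4.0 implies about 0.34 mag
```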
The 2175-angstrom feature.
One prominent feature in measured extinction curves of many objects within the Milky Way is a broad 'bump' at about 2175 Å, well into the ultraviolet region of the electromagnetic spectrum. This feature was first observed in the 1960s, but its origin is still not well understood. Several models have been presented to account for this bump which include graphitic grains with a mixture of PAH molecules. Investigations of interstellar grains embedded in interplanetary dust particles (IDP) observed this feature and identified the carrier with organic carbon and amorphous silicates present in the grains.
Extinction curves of other galaxies.
The form of the standard extinction curve depends on the composition of the ISM, which varies from galaxy to galaxy. In the Local Group, the best-determined extinction curves are those of the Milky Way, the Small Magellanic Cloud (SMC) and the Large Magellanic Cloud (LMC).
In the LMC, there is significant variation in the characteristics of the ultraviolet extinction with a weaker 2175 Å bump and stronger far-UV extinction in the region associated with the LMC2 supershell (near the 30 Doradus starbursting region) than seen elsewhere in the LMC and in the Milky Way. In the SMC, more extreme variation is seen with no 2175 Å bump and very strong far-UV extinction in the star forming Bar and fairly normal ultraviolet extinction seen in the more quiescent Wing.
This gives clues as to the composition of the ISM in the various galaxies. Previously, the different average extinction curves in the Milky Way, LMC, and SMC were thought to be the result of the different metallicities of the three galaxies: the LMC's metallicity is about 40% of that of the Milky Way, while the SMC's is about 10%. Finding extinction curves in both the LMC and SMC which are similar to those found in the Milky Way and finding extinction curves in the Milky Way that look more like those found in the LMC2 supershell of the LMC and in the SMC Bar has given rise to a new interpretation. The variations in the curves seen in the Magellanic Clouds and Milky Way may instead be caused by processing of the dust grains by nearby star formation. This interpretation is supported by work in starburst galaxies (which are undergoing intense star formation episodes) which shows that their dust lacks the 2175 Å bump.
Atmospheric extinction.
Atmospheric extinction gives the rising or setting Sun an orange hue and varies with location and altitude. Astronomical observatories generally are able to characterise the local extinction curve very accurately, to allow observations to be corrected for the effect. Nevertheless, the atmosphere is completely opaque to many wavelengths requiring the use of satellites to make observations.
This extinction has three main components: Rayleigh scattering by air molecules, scattering by particulates, and molecular absorption. Molecular absorption is often referred to as telluric absorption, as it is caused by the Earth ("telluric" is a synonym for "terrestrial"). The most important sources of telluric absorption are molecular oxygen and ozone, which strongly absorb radiation near ultraviolet, and water, which strongly absorbs infrared.
The amount of such extinction is lowest at the observer's zenith and highest near the horizon. A given star, preferably at solar opposition, reaches its greatest celestial altitude and optimal time for observation when the star is near the local meridian around solar midnight and if the star has a favorable declination ("i.e.", similar to the observer's latitude); thus, the seasonal time due to axial tilt is key. Extinction is approximated by multiplying the standard atmospheric extinction curve (plotted against each wavelength) by the mean air mass calculated over the duration of the observation. A dry atmosphere reduces infrared extinction significantly.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E_{B-V}"
},
{
"math_id": 1,
"text": "E_{B-V} = (B-V)_{\\textrm{observed}} - (B-V)_{\\textrm{intrinsic}}\\,"
},
{
"math_id": 2,
"text": "\\frac{N_H}{A(V)} \\approx 1.8 \\times 10^{21}~\\mbox{atoms}~\\mbox{cm}^{-2}~\\mbox{mag}^{-1}"
}
] | https://en.wikipedia.org/wiki?curid=1231797 |
1232419 | Weighted geometric mean | In statistics, the weighted geometric mean is a generalization of the geometric mean using the weighted arithmetic mean.
Given a sample formula_0 and weights formula_1, it is calculated as:
formula_2
The second form above illustrates that the logarithm of the geometric mean is the weighted arithmetic mean of the logarithms of the individual values.
If all the weights are equal, the weighted geometric mean simplifies to the ordinary unweighted geometric mean.
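A minimal sketch (not part of the article) computing the weighted geometric mean through the exp/log form above; the sample values are arbitrary:

```python
# Sketch (not from the article): weighted geometric mean as the exponential of
# the weighted arithmetic mean of the logarithms.
import math

def weighted_geometric_mean(x, w):
    total_w = sum(w)
    return math.exp(sum(wi * math.log(xi) for xi, wi in zip(x, w)) / total_w)

x = [2.0, 8.0, 4.0]
w = [1.0, 1.0, 2.0]
print(weighted_geometric_mean(x, w))          # 4.0, i.e. (2 * 8 * 4**2) ** (1/4)
print(weighted_geometric_mean(x, [1, 1, 1]))  # 4.0, the ordinary geometric mean of 2, 8, 4
```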
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x=(x_1,x_2\\dots,x_n)"
},
{
"math_id": 1,
"text": "w=(w_1, w_2,\\dots,w_n)"
},
{
"math_id": 2,
"text": " \\bar{x} = \\left(\\prod_{i=1}^n x_i^{w_i}\\right)^{1 / \\sum_{i=1}^n w_i} = \\quad \\exp \\left( \\frac{\\sum_{i=1}^n w_i \\ln x_i}{\\sum_{i=1}^n w_i \\quad} \\right) "
}
] | https://en.wikipedia.org/wiki?curid=1232419 |
12324910 | Boolean conjunctive query | In the theory of relational databases, a Boolean conjunctive query is a conjunctive query without distinguished predicates, i.e., a query in the form formula_0, where each formula_1 is a relation symbol and each formula_2 is a tuple of variables and constants; the number of elements in formula_2 is equal to the arity of formula_1. Such a query evaluates to either true or false depending on whether the relations in the database contain the appropriate tuples of values, i.e. the conjunction is valid according to the facts in the database.
As an example, if a database schema contains the relation symbols Father (binary, who's the father of whom) and Employed (unary, who is employed), a conjunctive query could be formula_3. This query evaluates to true if there exists an individual x who is a child of Mark and employed. In other words, this query expresses the question: "does Mark have an employed child?" | [
{
"math_id": 0,
"text": "R_1(t_1) \\wedge \\cdots \\wedge R_n(t_n)"
},
{
"math_id": 1,
"text": "R_i"
},
{
"math_id": 2,
"text": "t_i"
},
{
"math_id": 3,
"text": "Father(\\text{Mark}, x) \\wedge Employed(x)"
}
] | https://en.wikipedia.org/wiki?curid=12324910 |
1232743 | Universal science | Universal science (; ) is a branch of metaphysics, dedicated to the study of the underlying principles of all science. Instead of viewing knowledge as being separated into branches, Universalists view all knowledge as being part of a single category. Universal science is related to, but distinct from universal language.
Precursors.
Logic and rationalism lie at the foundation of the ideas of universal science. In a broad sense, logic is the study of reasoning. Although there were individuals that implicitly utilized logical methods prior to Aristotle, it is generally agreed he was the originator of modern systems of logic. The Organon, Aristotle's books on logic, details this system. In Categories, Aristotle separates everything into 10 "categories": substance, quantity, quality, relation, place, time, position, state, action, and passion. In De Interpretatione, Aristotle studied propositions, detailing what he determined were the most basic propositions and the relationships between them. The Organon had several other books, which further detailed the process of constructing arguments, deducing logical consequences, and even contained the foundations of the modern scientific method.
The most immediate predecessor to universal science is the system of formal logic, which is the study of the abstract notions of propositions and arguments, usually utilizing symbols to represent these structures. Formal logic differs from previous systems of logic by looking exclusively at the structure of an argument, instead of at the specific aspects of each statement. Thus, while the statements "Jeff is shorter than Jeremy and Jeremy is shorter than Aidan, so Jeff is shorter than Aidan" and "Every triangle has fewer sides than every rectangle and every rectangle has fewer sides than every pentagon, so every triangle has fewer sides than every pentagon" deal with different specific information, they are both equivalent in formal logic to the expression
formula_0.
By abstracting away from the specifics of each statement and argument, formal logic allows the overarching structure of logic to be studied. This viewpoint inspired later logicians to seek out a set of minimal size containing all of the requisite knowledge from which everything else could be derived and is the fundamental idea behind universal science.
Llull.
Ramon Llull was a 13th century Catalan philosopher, mystic, and poet. He is best known for creating an "art of finding truth" with the intention of unifying all knowledge. Llull sought to unify philosophy, theology, and mysticism through a single universal model to understand reality.
Llull compiled his thoughts into his work Ars Magna, which had several versions; the most thorough and complete version is the Ars Generalis Ultima, which he wrote several years before his death. The Ars Generalis Ultima consisted of several books, which explained the Ars, his universal system to understand all of reality. The books included the principles, definitions, and questions, along with ways to combine these things, which Llull thought could serve as the basis from which reality could be studied. Since he was primarily focused upon faith and Christianity, the content of these books was also mainly concerned with religious ideas and concepts. In fact, the Ars contained figures and diagrams representing ideas from Christianity, Islam, and Judaism to serve as a tool to aid philosophers from each of the three religions to discuss ideas in a logical manner.
Leibniz.
Gottfried Wilhelm Leibniz was a 17th century German philosopher, mathematician, political adviser, metaphysician, and logician, distinguished for achievements including the independent creation of the mathematical field of calculus.
Leibniz entered the University of Leipzig in 1661, which is where he first studied the teachings of many famous scientists and philosophers, such as Rene Descartes, Galileo Galilei, Francis Bacon, and Thomas Hobbes. These individuals, together with Aristotle, influenced Leibniz's future philosophical ideas, with one major idea being the reconciliation of the ideas of modern philosophers with the thoughts of Aristotle, already demonstrating Leibniz's interest in unification.
Unification played a major role in one of Leibniz's early works, Dissertatio de arte Combinatoria. Written in 1666, De arte Combinatoria was a mathematical and philosophical text that served as the basis for Leibniz's future goal for a universal science. The text starts by analyzing several mathematical problems in combinatorics, the study of ways in which objects can be arranged. While the mathematics in the text was not revolutionary, the main impact came from the ideas Leibniz derived following the mathematics. Taking major influence from Ramon Llull's ideas in his Ars Magna, Leibniz argued that the solution to these combinatorial problems served as a base for all logic and reasoning, since all of human knowledge could be viewed as different permutations of some base set.
Leibniz's ideas about unifying human knowledge culminated in his Characteristica universalis, which was a proposed language that would allow for logical statements and arguments to become symbolic calculations. Leibniz aimed to construct "the alphabet of human thought," which was the collection of all of the "primitives" from which all human thought could be derived through the processes described in de arte Combinatoria.
Modern Influences.
Although it has never been constructed, the ideas behind Leibniz's universal science have permeated the thoughts of many modern mathematicians and philosophers. George Boole, a 19th century English mathematician, expanded upon the ideas of Leibniz. He is responsible for the modern system of symbolic logic aptly called Boolean algebra. Boole's logical system, and thus also Leibniz's logical system, served as the foundation for modern computers and electronic circuitry.
The fundamental ideas of universal science can also be seen in the modern axiomatic system of mathematics, which constructs mathematical theories as consequences of a set of axioms. In this case, axioms are the primitive elements from which all further propositions can be derived. Hilbert's Program was an attempt by German mathematician David Hilbert to axiomatize all of mathematics in the above manner, and additionally to prove that these axiomatic systems are consistent. Kurt Gödel was an Austrian mathematician and logician, who furthered the investigations in logic and the foundations of mathematics begun by Hilbert and Russell in the early 20th century. Gödel is most famous for his incompleteness theorems, which encompass two theorems about provability and completeness of logical systems. In his first theorem, Gödel asserts that any formal system that includes arithmetic will have a statement which cannot be proven nor disproven within the system. His second theorem stated that a formal system additionally cannot prove that it is consistent, using methods only from that system. Thus, Gödel essentially refuted Hilbert's Program, along with aspects of universal science.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\forall x \\in X, y \\in Y, z \\in Z, \\quad x < y \\wedge y < z \\implies x < z"
}
] | https://en.wikipedia.org/wiki?curid=1232743 |
1232841 | SKI combinator calculus | Simple Turing complete logic
The SKI combinator calculus is a combinatory logic system and a computational system. It can be thought of as a computer programming language, though it is not convenient for writing software. Instead, it is important in the mathematical theory of algorithms because it is an extremely simple Turing complete language. It can be likened to a reduced version of the untyped lambda calculus. It was introduced by Moses Schönfinkel and Haskell Curry.
All operations in lambda calculus can be encoded via abstraction elimination into the SKI calculus as binary trees whose leaves are one of the three symbols S, K, and I (called "combinators").
Notation.
Although the most formal representation of the objects in this system requires binary trees, for simpler typesetting they are often represented as parenthesized expressions, as a shorthand for the tree they represent. Any subtrees may be parenthesized, but often only the right-side subtrees are parenthesized, with left associativity implied for any unparenthesized applications. For example, ISK means ((IS)K). Using this notation, a tree whose left subtree is the tree KS and whose right subtree is the tree SK can be written as KS(SK). If more explicitness is desired, the implied parentheses can be included as well: ((KS)(SK)).
Informal description.
Informally, and using programming language jargon, a tree ("xy") can be thought of as a function "x" applied to an argument "y". When evaluated ("i.e.", when the function is "applied" to the argument), the tree "returns a value", "i.e.", transforms into another tree. The "function", "argument" and the "value" are either combinators or binary trees. If they are binary trees, they may be thought of as functions too, if needed.
The evaluation operation is defined as follows:
("x", "y", and "z" represent expressions made from the functions S, K, and I, and set values):
I returns its argument:
I"x" = "x"
K, when applied to any argument "x", yields a one-argument constant function K"x", which, when applied to any argument "y", returns "x":
K"xy" = "x"
S is a substitution operator. It takes three arguments and then returns the first argument applied to the third, which is then applied to the result of the second argument applied to the third. More clearly:
S"xyz" = "xz"("yz")
Example computation: SKSK evaluates to KK(SK) by the S-rule. Then if we evaluate KK(SK), we get K by the K-rule. As no further rule can be applied, the computation halts here.
For all trees "x" and all trees "y", SK"xy" will always evaluate to "y" in two steps, K"y"("xy") = "y", so the ultimate result of evaluating SK"xy" will always equal the result of evaluating "y". We say that SK"x" and I are "functionally equivalent" for any "x" because they always yield the same result when applied to any "y".
From these definitions it can be shown that SKI calculus is not the minimum system that can fully perform the computations of lambda calculus, as all occurrences of I in any expression can be replaced by (SKK) or (SKS) or (SK "x") for any "x", and the resulting expression will yield the same result. So the "I" is merely syntactic sugar. Since I is optional, the system is also referred as SK calculus or SK combinator calculus.
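The evaluation rules above are easy to mechanize. The following Python sketch is not part of the article; the representation of terms as nested pairs and the names step and normalize are arbitrary choices. It performs leftmost reductions until no rule applies and reproduces the SKSK example:

```python
# Sketch (not from the article): a tiny evaluator for SKI terms. Terms are
# either the strings "S", "K", "I" or pairs (f, x) meaning "f applied to x".

def step(t):
    """Perform one leftmost reduction if possible; return (new_term, changed)."""
    if isinstance(t, str):
        return t, False
    f, x = t
    if f == "I":                      # I y -> y
        return x, True
    if isinstance(f, tuple):
        g, y = f
        if g == "K":                  # K y x -> y
            return y, True
        if isinstance(g, tuple) and g[0] == "S":   # S g1 y x -> g1 x (y x)
            return ((g[1], x), (y, x)), True
    # otherwise reduce inside, left subterm first
    f2, changed = step(f)
    if changed:
        return (f2, x), True
    x2, changed = step(x)
    return (f, x2), changed

def normalize(t, limit=1000):
    for _ in range(limit):
        t, changed = step(t)
        if not changed:
            return t
    raise RuntimeError("no normal form within the step limit")

# S K S K  ->  K K (S K)  ->  K   (the example computation above)
term = ((("S", "K"), "S"), "K")
print(normalize(term))   # 'K'
```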
It is possible to define a complete system using only one (improper) combinator. An example is Chris Barker's iota combinator, which can be expressed in terms of S and K as follows:
ι"x" = "x"SK
It is possible to reconstruct S, K, and I from the iota combinator. Applying ι to itself gives ιι = ιSK = SSKK = SK(KK) which is functionally equivalent to I. K can be constructed by applying ι twice to I (which is equivalent to application of ι to itself): ι(ι(ιι)) = ι(ιιSK) = ι(ISK) = ι(SK) = SKSK = K. Applying ι one more time gives ι(ι(ι(ιι))) = ιK = KSK = S.
The simplest possible term forming a basis is X = λx λy λz. x z (y (λ_.z)), which satisfies X (X X) (X (X X) X X X X X) = K, and X (X (X X (X X (X X))(X (X (X X (X X)))))) X X = S.
Formal definition.
The terms and derivations in this system can also be more formally defined:
Terms:
The set "T" of terms is defined recursively by the following rules.
Derivations:
A derivation is a finite sequence of terms defined recursively by the following rules (where α and ι are words over the alphabet {S, K, I, (, )} while β, γ and δ are terms):
Assuming a sequence is a valid derivation to begin with, it can be extended using these rules. All derivations of length 1 are valid derivations.
SKI expressions.
Self-application and recursion.
SII is an expression that takes an argument and applies that argument to itself:
SIIα = Iα(Iα) = αα
This is also known as U combinator, U"x" = "xx". One interesting property of it is that its self-application is irreducible:
SII(SII) = I(SII)(I(SII)) = SII(I(SII)) = SII(SII)
Or, using the equation as its definition directly, we immediately get U U = U U.
Another thing is that it allows one to write a function that applies one thing to the self application of another thing:
(S(Kα)(SII))β = Kαβ(SIIβ) = α(Iβ(Iβ)) = α(ββ)
or it can be seen as defining yet another combinator directly, H"xy" = "x"("yy").
This function can be used to achieve recursion. If β is the function that applies α to the self application of something else,
β = Hα = S(Kα)(SII)
then the self-application of this β is the fixed point of that α:
SIIβ = ββ = α(ββ) = α(α(ββ)) = formula_0
Or, directly again from the derived definition, Hα(Hα) = α(Hα(Hα)).
If α expresses a "computational step" computed by αρν for some ρ and ν, that assumes ρν' expresses "the rest of the computation" (for some ν' that α will "compute" from ν), then its fixed point ββ expresses the whole recursive computation, since using "the same function" ββ for the "rest of computation" call (with ββν = α(ββ)ν) is the very definition of recursion: ρν' = ββν' = α(ββ)ν' = ... . α will have to employ some kind of conditional to stop at some "base case" and not make the recursive call then, to avoid divergence.
This can be formalized, with
β = Hα = S(Kα)(SII) = S(KS)Kα(SII) = S(S(KS)K)(K(SII)) α
as
Yα = SIIβ = SII(Hα) = S(K(SII))H α = S(K(SII))(S(S(KS)K)(K(SII))) α
which gives us one possible encoding of the Y combinator.
This becomes much shorter with the use of the B and C combinators, as the equivalent
Yα = S(KU)(SB(KU))α = U(BαU) = BU(CBU)α
or directly, as
Hαβ = α(ββ) = BαUβ = CBUαβ
Yα = U(Hα) = BU(CBU)α
And with a pseudo-Haskell syntax it becomes the exceptionally short Y = U . (. U).
The reversal expression.
S(K(SI))K reverses the following two terms:
S(K(SI))Kαβ →
K(SI)α(Kα)β →
SI(Kα)β →
Iβ(Kαβ) →
Iβα →
βα
Boolean logic.
SKI combinator calculus can also implement Boolean logic in the form of an "if-then-else" structure. An "if-then-else" structure consists of a Boolean expression that is either true (T) or false (F) and two arguments, such that:
T"xy" = "x"
and
F"xy" = "y"
The key is in defining the two Boolean expressions. The first works just like one of our basic combinators:
T = K
K"xy" = "x"
The second is also fairly simple:
F = SK
SKxy" = Ky(xy)" = y
Once true and false are defined, all Boolean logic can be implemented in terms of "if-then-else" structures.
Boolean NOT (which returns the opposite of a given Boolean) works the same as the "if-then-else" structure, with F and T as the second and third values, so it can be implemented as a postfix operation:
NOT = (F)(T) = (SK)(K)
If this is put in an "if-then-else" structure, it can be shown that this has the expected result
(T)NOT = T(F)(T) = F
(F)NOT = F(F)(T) = T
Boolean OR (which returns T if either of the two Boolean values surrounding it is T) works the same as an "if-then-else" structure with T as the second value, so it can be implemented as an infix operation:
OR = T = K
If this is put in an "if-then-else" structure, it can be shown that this has the expected result:
(T)OR(T) = T(T)(T) = T
(T)OR(F) = T(T)(F) = T
(F)OR(T) = F(T)(T) = T
(F)OR(F) = F(T)(F) = F
Boolean AND (which returns T if both of the two Boolean values surrounding it are T) works the same as an "if-then-else" structure with F as the third value, so it can be implemented as a postfix operation:
AND = F = SK
If this is put in an "if-then-else" structure, it can be shown that this has the expected result:
(T)(T)AND = T(T)(F) = T
(T)(F)AND = T(F)(F) = F
(F)(T)AND = F(T)(F) = F
(F)(F)AND = F(F)(F) = F
Because this defines T, F, NOT (as a postfix operator), OR (as an infix operator), and AND (as a postfix operator) in terms of SKI notation, this proves that the SKI system can fully express Boolean logic.
As the SKI calculus is complete, it is also possible to express NOT, OR and AND as prefix operators:
NOT = S(SI(KF))(KT) (as S(SI(KF))(KT)"x" = SI(KF)"x"(KT"x") = I"x"(KF"x")T = "x"FT)
OR = SI(KT) (as SI(KT)"x""y" = I"x"(KT"x")"y" = "x"T"y")
AND = SS(K(KF)) (as SS(K(KF))"x""y" = S"x"(K(KF)"x")"y" = "xy"(KF"y") = "xy"F)
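These definitions can be spot-checked by encoding S, K and I as curried functions in an ordinary programming language. The sketch below (not from the article) does this in Python and evaluates the prefix NOT, OR and AND on all combinations of T and F; the helper as_bool is just a way to read the result back as a native Boolean:

```python
# Sketch (not from the article): S, K, I as curried Python functions and a
# check of the prefix Boolean operators defined above.

S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x
I = lambda x: x

T = K        # T x y = x
F = S(K)     # F x y = SK x y = K y (x y) = y

NOT = S(S(I)(K(F)))(K(T))   # prefix NOT = S(SI(KF))(KT)
OR  = S(I)(K(T))            # prefix OR  = SI(KT)
AND = S(S)(K(K(F)))         # prefix AND = SS(K(KF))

def as_bool(b):
    # Apply the Boolean to two distinguishable combinators and see which returns.
    return b(K)(I) is K

print(as_bool(NOT(T)), as_bool(NOT(F)))                        # False True
print([as_bool(OR(x)(y))  for x in (T, F) for y in (T, F)])    # [True, True, True, False]
print([as_bool(AND(x)(y)) for x in (T, F) for y in (T, F)])    # [True, False, False, False]
```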
Connection to intuitionistic logic.
The combinators K and S correspond to two well-known axioms of sentential logic:
AK: "A" → ("B" → "A"),
AS: ("A" → ("B" → "C")) → (("A" → "B") → ("A" → "C")).
Function application corresponds to the rule modus ponens:
MP: from A and "A" → "B", infer B.
The axioms AK and AS, and the rule MP are complete for the implicational fragment of intuitionistic logic. In order for combinatory logic to have as a model:
This connection between the types of combinators and the corresponding logical axioms is an instance of the Curry–Howard isomorphism.
Examples of reduction.
There may be multiple ways to do a reduction. All are equivalent, as long as the order of operations is not broken.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\ldots"
},
{
"math_id": 1,
"text": "\\textrm{SKI(KIS)}"
},
{
"math_id": 2,
"text": "\\textrm{SKI(KIS)} \\Rightarrow \\textrm{K(KIS)(I(KIS))} \\Rightarrow \\textrm{KIS} \\Rightarrow \\textrm{I}"
},
{
"math_id": 3,
"text": "\\textrm{SKI(KIS)} \\Rightarrow \\textrm{SKII} \\Rightarrow \\textrm{KI(II)} \\Rightarrow \\textrm{KII} \\Rightarrow \\textrm{I}"
},
{
"math_id": 4,
"text": "\\textrm{KS(I(SKSI))}"
},
{
"math_id": 5,
"text": "\\textrm{KS(I(SKSI))} \\Rightarrow \\textrm{KS(I(KI(SI)))} \\Rightarrow \\textrm{KS(I(I))} \\Rightarrow \\textrm{KS(II)} \\Rightarrow \\textrm{KSI} \\Rightarrow \\textrm{S}"
},
{
"math_id": 6,
"text": "\\textrm{KS(I(SKSI))} \\Rightarrow \\textrm{S}"
},
{
"math_id": 7,
"text": "\\textrm{SKIK} \\Rightarrow \\textrm{KK(IK)} \\Rightarrow \\textrm{KKK} \\Rightarrow \\textrm{K}"
}
] | https://en.wikipedia.org/wiki?curid=1232841 |
12328822 | Attenuation coefficient | Light or sound absorption in a substance
The linear attenuation coefficient, attenuation coefficient, or narrow-beam attenuation coefficient characterizes how easily a volume of material can be penetrated by a beam of light, sound, particles, or other energy or matter. A coefficient value that is large represents a beam becoming 'attenuated' as it passes through a given medium, while a small value represents that the medium had little effect on loss. The (derived) SI unit of attenuation coefficient is the reciprocal metre (m−1). Extinction coefficient is another term for this quantity, often used in meteorology and climatology. Most commonly, the quantity measures the exponential decay of intensity, that is, the value of downward "e"-folding distance of the original intensity as the energy of the intensity passes through a unit ("e.g." one meter) thickness of material, so that an attenuation coefficient of 1 m−1 means that after passing through 1 metre, the radiation will be reduced by a factor of "e", and for material with a coefficient of 2 m−1, it will be reduced twice by "e", or "e"2. Other measures may use a different factor than "e", such as the "decadic attenuation coefficient" below. The broad-beam attenuation coefficient counts forward-scattered radiation as transmitted rather than attenuated, and is more applicable to radiation shielding.
The "mass attenuation coefficient" is the attenuation coefficient normalized by the density of the material.
Overview.
The attenuation coefficient describes the extent to which the radiant flux of a beam is reduced as it passes through a specific material. It is used in the context of:
The attenuation coefficient is called the "extinction coefficient" in the context of
A small attenuation coefficient indicates that the material in question is relatively transparent, while a larger value indicates greater degrees of opacity. The attenuation coefficient is dependent upon the type of material and the energy of the radiation. Generally, for electromagnetic radiation, the higher the energy of the incident photons and the less dense the material in question, the lower the corresponding attenuation coefficient will be.
Mathematical definitions.
Attenuation coefficient.
The attenuation coefficient of a volume, denoted "μ", is defined as
formula_0
where
Spectral hemispherical attenuation coefficient.
The spectral hemispherical attenuation coefficient in frequency and spectral hemispherical attenuation coefficient in wavelength of a volume, denoted "μ"ν and "μ"λ respectively, are defined as:
formula_1
formula_2
where
Directional attenuation coefficient.
The directional attenuation coefficient of a volume, denoted "μ"Ω, is defined as
formula_3
where "L"e,Ω is the radiance.
Spectral directional attenuation coefficient.
The spectral directional attenuation coefficient in frequency and spectral directional attenuation coefficient in wavelength of a volume, denoted "μ"Ω,ν and "μ"Ω,λ respectively, are defined as
formula_4
where
Absorption and scattering coefficients.
When a narrow (collimated) beam passes through a volume, the beam will lose intensity due to two processes: absorption and scattering. Absorption indicates energy that is lost from the beam, while scattering indicates light that is redirected in a (random) direction, and hence is no longer in the beam, but still present, resulting in diffuse light.
The absorption coefficient of a volume, denoted "μ"a, and the scattering coefficient of a volume, denoted "μ"s, are defined the same way as the attenuation coefficient.
The attenuation coefficient of a volume is the sum of absorption coefficient and scattering coefficients:
formula_5
Just looking at the narrow beam itself, the two processes cannot be distinguished. However, if a detector is set up to measure beam leaving in different directions, or conversely using a non-narrow beam, one can measure how much of the lost radiant flux was scattered, and how much was absorbed.
In this context, the "absorption coefficient" measures how quickly the beam would lose radiant flux due to the absorption "alone", while "attenuation coefficient" measures the "total" loss of narrow-beam intensity, including scattering as well. "Narrow-beam attenuation coefficient" always unambiguously refers to the latter. The attenuation coefficient is at least as large as the absorption coefficient; they are equal in the idealized case of no scattering.
Mass attenuation, absorption, and scattering coefficients.
The mass attenuation coefficient, mass absorption coefficient, and mass scattering coefficient are defined as
formula_6
where "ρ""m" is the mass density.
Napierian and decadic attenuation coefficients.
Decibels.
Engineering applications often express attenuation in the logarithmic units of decibels, or "dB", where 10 dB represents attenuation by a factor of 10. The units for attenuation coefficient are thus dB/m (or, in general, dB per unit distance). Note that in logarithmic units such as dB, the attenuation is a linear function of distance, rather than exponential. This has the advantage that the result of multiple attenuation layers can be found by simply adding up the dB loss for each individual passage. However, if intensity is desired, the logarithms must be converted back into linear units by using an exponential: formula_7
Naperian attenuation.
The decadic attenuation coefficient or decadic narrow beam attenuation coefficient, denoted "μ"10, is defined as
formula_8
Just as the usual attenuation coefficient measures the number of "e"-fold reductions that occur over a unit length of material, this coefficient measures how many 10-fold reductions occur: a decadic coefficient of 1 m−1 means 1 m of material reduces the radiation once by a factor of 10.
"μ" is sometimes called Napierian attenuation coefficient or Napierian narrow beam attenuation coefficient rather than just simply "attenuation coefficient". The terms "decadic" and "Napierian" come from the base used for the exponential in the Beer–Lambert law for a material sample, in which the two attenuation coefficients take part:
formula_9
where
In case of "uniform" attenuation, these relations become
formula_10
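A small sketch (not part of the article) of these uniform-attenuation relations, converting between the Napierian coefficient "μ", the decadic coefficient "μ"10 and a loss expressed in dB per metre; the numerical values are arbitrary examples:

```python
# Sketch (not from the article): uniform attenuation, T = exp(-mu*l) = 10^(-mu10*l).
import math

def transmittance(mu, length):
    """T = exp(-mu * l) for a uniform Napierian coefficient mu (1/m)."""
    return math.exp(-mu * length)

def decadic(mu):
    """mu_10 = mu / ln(10)."""
    return mu / math.log(10)

def db_per_metre(mu):
    """Loss in dB/m: 10 * mu_10, since attenuation in dB is 10*log10(1/T)."""
    return 10 * decadic(mu)

mu = 2.0    # example Napierian coefficient, 1/m
l = 1.5     # path length, m
print(f"T = {transmittance(mu, l):.4f}")
print(f"mu_10 = {decadic(mu):.3f} /m, loss = {db_per_metre(mu):.2f} dB/m")
```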
Cases of "non-uniform" attenuation occur in atmospheric science applications and radiation shielding theory for instance.
The (Napierian) attenuation coefficient and the decadic attenuation coefficient of a material sample are related to the number densities and the amount concentrations of its "N" attenuating species as
formula_11
where
by definition of attenuation cross section and molar attenuation coefficient.
Attenuation cross section and molar attenuation coefficient are related by
formula_12
and number density and amount concentration by
formula_13
where "N"A is the Avogadro constant.
The half-value layer (HVL) is the thickness of a layer of material required to reduce the radiant flux of the transmitted radiation to half its incident magnitude. The half-value layer is about 69% (ln 2) of the penetration depth. Engineers use these equations to predict how much shielding thickness is required to attenuate radiation to acceptable or regulatory limits.
Attenuation coefficient is also inversely related to mean free path. Moreover, it is very closely related to the attenuation cross section.
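The half-value layer and shielding-thickness statements above amount to inverting the exponential law. A short illustrative sketch (not from the article; the coefficient value is made up) is:

```python
# Sketch (not from the article): half-value layer and required shielding thickness.
import math

def half_value_layer(mu):
    """Thickness that halves the transmitted flux: HVL = ln(2) / mu."""
    return math.log(2) / mu

def thickness_for(target_fraction, mu):
    """Thickness l with exp(-mu*l) equal to the target transmitted fraction."""
    return -math.log(target_fraction) / mu

mu = 0.7   # example attenuation coefficient in 1/cm (illustrative value only)
print(f"HVL = {half_value_layer(mu):.2f} cm")
print(f"thickness for 1% transmission: {thickness_for(0.01, mu):.1f} cm")
```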
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mu = -\\frac{1}{\\Phi_\\mathrm{e}} \\frac{\\mathrm{d}\\Phi_\\mathrm{e}}{\\mathrm{d}z},"
},
{
"math_id": 1,
"text": "\\mu_\\nu = -\\frac{1}{\\Phi_{\\mathrm{e},\\nu}} \\frac{\\mathrm{d}\\Phi_{\\mathrm{e},\\nu}}{\\mathrm{d}z},"
},
{
"math_id": 2,
"text": "\\mu_\\lambda = -\\frac{1}{\\Phi_{\\mathrm{e},\\lambda}} \\frac{\\mathrm{d}\\Phi_{\\mathrm{e},\\lambda}}{\\mathrm{d}z},"
},
{
"math_id": 3,
"text": "\\mu_\\Omega = -\\frac{1}{L_{\\mathrm{e},\\Omega}} \\frac{\\mathrm{d}L_{\\mathrm{e},\\Omega}}{\\mathrm{d}z},"
},
{
"math_id": 4,
"text": "\\begin{align}\n \\mu_{\\Omega,\\nu} &= -\\frac{1}{L_{\\mathrm{e},\\Omega,\\nu}} \\frac{\\mathrm{d}L_{\\mathrm{e},\\Omega,\\nu}}{\\mathrm{d}z}, \\\\\n \\mu_{\\Omega,\\lambda} &= -\\frac{1}{L_{\\mathrm{e},\\Omega,\\lambda}} \\frac{\\mathrm{d}L_{\\mathrm{e},\\Omega,\\lambda}}{\\mathrm{d}z},\n\\end{align}"
},
{
"math_id": 5,
"text": "\\begin{align}\n \\mu &= \\mu_\\mathrm{a} + \\mu_\\mathrm{s}, \\\\\n \\mu_\\nu &= \\mu_{\\mathrm{a},\\nu} + \\mu_{\\mathrm{s},\\nu}, \\\\\n \\mu_\\lambda &= \\mu_{\\mathrm{a},\\lambda} + \\mu_{\\mathrm{s},\\lambda}, \\\\\n \\mu_\\Omega &= \\mu_{\\mathrm{a},\\Omega} + \\mu_{\\mathrm{s},\\Omega}, \\\\\n \\mu_{\\Omega,\\nu} &= \\mu_{\\mathrm{a},\\Omega,\\nu} + \\mu_{\\mathrm{s},\\Omega,\\nu}, \\\\\n \\mu_{\\Omega,\\lambda} &= \\mu_{\\mathrm{a},\\Omega,\\lambda} + \\mu_{\\mathrm{s},\\Omega,\\lambda}.\n\\end{align}"
},
{
"math_id": 6,
"text": "\\frac{\\mu}{\\rho_m},\\quad \\frac{\\mu_\\mathrm{a}}{\\rho_m},\\quad \\frac{\\mu_\\mathrm{s}}{\\rho_m},"
},
{
"math_id": 7,
"text": "I = I_o 10^{-(dB/10)}."
},
{
"math_id": 8,
"text": "\\mu_{10} = \\frac{\\mu}{\\ln 10}."
},
{
"math_id": 9,
"text": "T = e^{-\\int_0^\\ell \\mu(z)\\mathrm{d}z} = 10^{-\\int_0^\\ell \\mu_{10}(z)\\mathrm{d}z},"
},
{
"math_id": 10,
"text": "T = e^{-\\mu\\ell} = 10^{-\\mu_{10}\\ell}."
},
{
"math_id": 11,
"text": "\\begin{align}\n \\mu(z) &= \\sum_{i = 1}^N \\mu_i(z) = \\sum_{i = 1}^N \\sigma_i n_i(z), \\\\\n \\mu_{10}(z) &= \\sum_{i = 1}^N \\mu_{10,i}(z) = \\sum_{i = 1}^N \\varepsilon_i c_i(z),\n\\end{align}"
},
{
"math_id": 12,
"text": "\\varepsilon_i = \\frac{N_\\text{A}}{\\ln{10}}\\,\\sigma_i,"
},
{
"math_id": 13,
"text": "c_i = \\frac{n_i}{N_\\text{A}},"
}
] | https://en.wikipedia.org/wiki?curid=12328822 |
12328965 | Lissajous knot | Knot defined by parametric equations defining Lissajous curves
In knot theory, a Lissajous knot is a knot defined by parametric equations of the form
formula_0
where formula_1, formula_2, and formula_3 are integers and the phase shifts formula_4, formula_5, and formula_6 may be any real numbers.
The projection of a Lissajous knot onto any of the three coordinate planes is a Lissajous curve, and many of the properties of these knots are closely related to properties of Lissajous curves.
Replacing the cosine function in the parametrization by a triangle wave transforms every Lissajous
knot isotopically into a billiard curve inside a cube, the simplest case of so-called "billiard knots".
Billiard knots can also be studied in other domains, for instance in a cylinder or in a (flat) solid torus (Lissajous-toric knot).
Form.
Because a knot cannot be self-intersecting, the three integers formula_7 must be pairwise relatively prime, and none of the quantities
formula_8
may be an integer multiple of pi. Moreover, by making a substitution of the form formula_9, one may assume that any of the three phase shifts formula_4, formula_5, formula_6 is equal to zero.
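The constraints above are easy to check numerically. The following Python sketch samples one hypothetical Lissajous curve; the frequencies and phase shifts are arbitrary values chosen only to satisfy the stated conditions, not parameters taken from the knot tables discussed below.
```python
from math import gcd, cos, pi

# Hypothetical frequencies and phase shifts, chosen only to satisfy the
# constraints above -- they are not taken from the knot tables cited below.
nx, ny, nz = 3, 4, 7
phx, phy, phz = 0.7, 0.2, 0.0            # phi_z = 0 after the shift t' = t + c

# The frequencies must be pairwise relatively prime.
assert gcd(nx, ny) == gcd(ny, nz) == gcd(nx, nz) == 1

# None of the three combinations below may be an integer multiple of pi.
def multiple_of_pi(x, tol=1e-9):
    return abs(x / pi - round(x / pi)) < tol

combos = [nx * phy - ny * phx, ny * phz - nz * phy, nz * phx - nx * phz]
assert not any(multiple_of_pi(q) for q in combos)

# Sample the closed curve over one full period t in [0, 2*pi).
points = [(cos(nx * t + phx), cos(ny * t + phy), cos(nz * t + phz))
          for t in (2 * pi * k / 2000 for k in range(2000))]
print(points[0], points[500])
```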
Examples.
Here are some examples of Lissajous knots, all of which have formula_10:
There are infinitely many different Lissajous knots, and other examples with 10 or fewer crossings include the 74 knot, the 815 knot, the 101 knot, the 1035 knot, the 1058 knot, and the composite knot 52* # 52, as well as the 916 knot, 1076 knot, the 1099 knot, the 10122 knot, the 10144 knot, the granny knot, and the composite knot 52 # 52. In addition, it is known that every twist knot with Arf invariant zero is a Lissajous knot.
Symmetry.
Lissajous knots are highly symmetric, though the type of symmetry depends on whether or not the numbers formula_1, formula_2, and formula_3 are all odd.
Odd case.
If formula_1, formula_2, and formula_3 are all odd, then the point reflection across the origin formula_11 is a symmetry of the Lissajous knot which preserves the knot orientation.
In general, a knot that has an orientation-preserving point reflection symmetry is known as strongly plus amphicheiral. This is a fairly rare property: only seven or eight prime knots with twelve or fewer crossings are strongly plus amphicheiral (1099, 10123, 12a427, 12a1019, 12a1105, 12a1202, 12n706). Since this is so rare, "most" prime Lissajous knots lie in the even case.
Even case.
If one of the frequencies (say formula_1) is even, then the 180° rotation around the "x"-axis formula_12 is a symmetry of the Lissajous knot. In general, a knot that has a symmetry of this type is called 2-periodic, so every even Lissajous knot must be 2-periodic.
Consequences.
The symmetry of a Lissajous knot puts severe constraints on the Alexander polynomial. In the odd case, the Alexander
polynomial of the Lissajous knot must be a perfect square. In the even case, the Alexander polynomial must be a perfect square modulo 2. In addition, the Arf invariant of a Lissajous knot must be zero. It follows that:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x = \\cos(n_x t + \\phi_x),\\qquad y = \\cos(n_y t + \\phi_y), \\qquad z = \\cos(n_z t + \\phi_z),"
},
{
"math_id": 1,
"text": "n_x"
},
{
"math_id": 2,
"text": "n_y"
},
{
"math_id": 3,
"text": "n_z"
},
{
"math_id": 4,
"text": "\\phi_x"
},
{
"math_id": 5,
"text": "\\phi_y"
},
{
"math_id": 6,
"text": "\\phi_z"
},
{
"math_id": 7,
"text": "n_x, n_y, n_z"
},
{
"math_id": 8,
"text": "n_x \\phi_y - n_y \\phi_x,\\quad n_y \\phi_z - n_z \\phi_y,\\quad n_z \\phi_x - n_x \\phi_z"
},
{
"math_id": 9,
"text": "t' = t+c"
},
{
"math_id": 10,
"text": "\\phi_z=0"
},
{
"math_id": 11,
"text": "(x,y,z)\\mapsto (-x,-y,-z)"
},
{
"math_id": 12,
"text": "(x,y,z)\\mapsto (x,-y,-z)"
}
] | https://en.wikipedia.org/wiki?curid=12328965 |
1232940 | Micellar electrokinetic chromatography | Chromatography technique
Micellar electrokinetic chromatography (MEKC) is a chromatography technique used in analytical chemistry. It is a modification of capillary electrophoresis (CE), extending its functionality to neutral analytes, where the samples are separated by differential partitioning between micelles (pseudo-stationary phase) and a surrounding aqueous buffer solution (mobile phase).
The basic set-up and detection methods used for MEKC are the same as those used in CE. The difference is that the solution contains a surfactant at a concentration that is greater than the critical micelle concentration (CMC). Above this concentration, surfactant monomers are in equilibrium with micelles.
In most applications, MEKC is performed in open capillaries under alkaline conditions to generate a strong electroosmotic flow. Sodium dodecyl sulfate (SDS) is the most commonly used surfactant in MEKC applications. The anionic character of the sulfate groups of SDS causes the surfactant and micelles to have electrophoretic mobility that is counter to the direction of the strong electroosmotic flow. As a result, the surfactant monomers and micelles migrate quite slowly, though their net movement is still toward the cathode. During a MEKC separation, analytes distribute themselves between the hydrophobic interior of the micelle and hydrophilic buffer solution as shown in "figure 1".
Analytes that are insoluble in the interior of micelles should migrate at the electroosmotic flow velocity, formula_0, and be detected at the retention time of the buffer, formula_1. Analytes that solubilize completely within the micelles (analytes that are highly hydrophobic) should migrate at the micelle velocity, formula_2, and elute at the final elution time, formula_3.
Theory.
The micelle velocity is defined by:
formula_4
where formula_5 is the electrophoretic velocity of a micelle.
The retention time of a given sample should depend on the capacity factor, formula_6:
formula_7
where formula_8 is the total number of moles of solute in the micelle and formula_9 is the total moles in the aqueous phase. The retention time of a solute should then be within the range:
formula_10
Charged analytes have a more complex interaction in the capillary because they exhibit electrophoretic mobility, engage in electrostatic interactions with the micelle, and participate in hydrophobic partitioning.
The fraction of the sample in the aqueous phase, formula_11, is given by:
formula_12
where formula_13 is the migration velocity of the solute. The value formula_11 can also be expressed in terms of the capacity factor:
formula_14
Using the relationship between velocity, tube length from the injection end to the detector cell (formula_15), and retention time, formula_16, formula_17 and formula_18, a relationship between the capacity factor and retention times can be formulated:
formula_19
The extra term enclosed in parentheses accounts for the partial mobility of the hydrophobic phase in MEKC. This equation resembles an expression derived for formula_6 in conventional packed bed chromatography:
formula_20
A rearrangement of the previous equation can be used to write an expression for the retention factor:
formula_21
From this equation it can be seen that all analytes that partition strongly into the micellar phase (where formula_6 is essentially ∞) migrate at the same time, formula_3. In conventional chromatography, separation of similar compounds can be improved by gradient elution. In MEKC, however, techniques must be used to extend the elution range to separate strongly retained analytes.
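As a numerical illustration of the last two expressions, the Python sketch below computes the capacity factor from a set of invented migration times (chosen only so that the retention time lies between the buffer and micelle markers) and then recovers the retention time from it.
```python
# Invented migration times in minutes, chosen only so that t_M <= t_r <= t_c;
# they are not measurements from a real separation.
t_M = 2.0     # retention time of an unretained (buffer) marker
t_c = 10.0    # elution time of the micelle marker
t_r = 6.0     # observed retention time of the analyte

# Capacity factor from the rearranged expression above
k = (t_r - t_M) / (t_M * (1.0 - t_r / t_c))

# Back-calculate the retention time from k to confirm the rearrangement
t_r_check = (1.0 + k) / (1.0 + (t_M / t_c) * k) * t_M

print(f"capacity factor = {k:.2f}, reconstructed t_r = {t_r_check:.2f} min")
assert abs(t_r_check - t_r) < 1e-9
```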
Elution ranges can be extended by several techniques including the use of organic modifiers, cyclodextrins, and mixed micelle systems. Short-chain alcohols or acetonitrile can be used as organic modifiers that decrease formula_1 and formula_6 to improve the resolution of analytes that co-elute with the micellar phase. These agents, however, may alter the level of the EOF. Cyclodextrins are cyclic polysaccharides that form inclusion complexes that can cause competitive hydrophobic partitioning of the analyte. Since analyte-cyclodextrin complexes are neutral, they will migrate toward the cathode at a higher velocity than that of the negatively charged micelles. Mixed micelle systems, such as the one formed by combining SDS with the non-ionic surfactant Brij-35, can also be used to alter the selectivity of MEKC.
Applications.
The simplicity and efficiency of MEKC have made it an attractive technique for a variety of applications. Further improvements can be made to the selectivity of MEKC by adding chiral selectors or chiral surfactants to the system. Unfortunately, this technique is not suitable for protein analysis because proteins are generally too large to partition into a surfactant micelle and tend to bind to surfactant monomers to form SDS-protein complexes.
Recent applications of MEKC include the analysis of uncharged pesticides, of essential and branched-chain amino acids in nutraceutical products, and of the hydrocarbon and alcohol contents of the marjoram herb.
MEKC has also been targeted for its potential to be used in combinatorial chemical analysis. The advent of combinatorial chemistry has enabled medicinal chemists to synthesize and identify large numbers of potential drugs in relatively short periods of time. Small sample and solvent requirements and the high resolving power of MEKC have enabled this technique to be used to quickly analyze a large number of compounds with good resolution.
Traditional methods of analysis, like high-performance liquid chromatography (HPLC), can be used to identify the purity of a combinatorial library, but assays need to be rapid with good resolution for all components to provide useful information for the chemist. The introduction of surfactant to traditional capillary electrophoresis instrumentation has dramatically expanded the scope of analytes that can be separated by capillary electrophoresis.
MEKC can also be used in routine quality control of antibiotics in pharmaceuticals or feedstuffs.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "u_o"
},
{
"math_id": 1,
"text": "t_M"
},
{
"math_id": 2,
"text": "u_c"
},
{
"math_id": 3,
"text": "t_c"
},
{
"math_id": 4,
"text": "u_c= u_p+u_o"
},
{
"math_id": 5,
"text": "u_p"
},
{
"math_id": 6,
"text": "k^1"
},
{
"math_id": 7,
"text": "k^1=\\frac{n_c}{n_w}"
},
{
"math_id": 8,
"text": "n_c"
},
{
"math_id": 9,
"text": "n_w"
},
{
"math_id": 10,
"text": " t_M\\le t_r\\le t_c "
},
{
"math_id": 11,
"text": "R"
},
{
"math_id": 12,
"text": " R= \\frac{u_s-u_c}{u_o-u_c}"
},
{
"math_id": 13,
"text": "u_s"
},
{
"math_id": 14,
"text": " R=\\frac{1}{1+k^1}"
},
{
"math_id": 15,
"text": "L"
},
{
"math_id": 16,
"text": "u_{o}= L/t_M"
},
{
"math_id": 17,
"text": "u_{c}= L/t_{c}"
},
{
"math_id": 18,
"text": "u_s = L/t_r"
},
{
"math_id": 19,
"text": " k^1=\\frac{t_r-t_M}{t_M(1-(t_r/t_c))}"
},
{
"math_id": 20,
"text": " k=\\frac{t_r-t_M}{t_M}"
},
{
"math_id": 21,
"text": "t_r =\\left ( \\frac{1+k^1}{1+(t_M/t_c)k^1} \\right )t_M"
}
] | https://en.wikipedia.org/wiki?curid=1232940 |
1232957 | Isocitrate dehydrogenase | Class of enzymes
Isocitrate dehydrogenase (IDH) (EC 1.1.1.42) and (EC 1.1.1.41) is an enzyme that catalyzes the oxidative decarboxylation of isocitrate, producing alpha-ketoglutarate (α-ketoglutarate) and CO2. This is a two-step process, which involves oxidation of isocitrate (a secondary alcohol) to oxalosuccinate (a ketone), followed by the decarboxylation of the carboxyl group beta to the ketone, forming alpha-ketoglutarate. In humans, IDH exists in three isoforms: IDH3 catalyzes the third step of the citric acid cycle while converting NAD+ to NADH in the mitochondria. The isoforms IDH1 and IDH2 catalyze the same reaction outside the context of the citric acid cycle and use NADP+ as a cofactor instead of NAD+. They localize to the cytosol as well as the mitochondrion and peroxisome.
Structure.
The NAD-IDH is composed of 3 subunits, is allosterically regulated, and requires an integrated Mg2+ or Mn2+ ion. The closest homologue that has a known structure is the "E. coli" NADP-dependent IDH, which has only 2 subunits and a 13% identity and 29% similarity based on the amino acid sequences, making it dissimilar to human IDH and not suitable for close comparison. All the known NADP-IDHs are homodimers.
Most isocitrate dehydrogenases are dimers, specifically homodimers (two identical monomer subunits forming one dimeric unit). In a comparison of "C. glutamicum" and "E. coli", whose enzymes are a monomer and a dimer respectively, both enzymes were found to "efficiently catalyze identical reactions." However, the "C. glutamicum" enzyme was recorded as having ten times the activity of the "E. coli" enzyme and seven times the affinity/specificity for NADP. "C. glutamicum" favored NADP+ over NAD+. In terms of thermal stability, both enzymes had a similar melting temperature (Tm) of about 55 °C to 60 °C. However, the monomeric "C. glutamicum" enzyme showed more consistent stability at higher temperatures, as expected. The dimeric "E. coli" enzyme showed stability at a higher temperature than normal due to the interactions between its two monomeric subunits.
The structure of "Mycobacterium tuberculosis" (Mtb) ICDH-1 bound with NADPH and Mn(2+) bound has been solved by X-ray crystallography. It is a homodimer in which each subunit has a Rossmann fold, and a common top domain of interlocking β sheets. Mtb ICDH-1 is most structurally similar to the R132H mutant human ICDH found in CNS WHO grade 4 astrocytomas, formerly classified as glioblastomas. Similar to human R132H ICDH, Mtb ICDH-1 also catalyzes the formation of α-hydroxyglutarate.
Regulation.
The IDH step of the citric acid cycle is often (but not always) an irreversible reaction due to its large negative change in free energy. It must therefore be carefully regulated to avoid depletion of isocitrate (and therefore an accumulation of alpha-ketoglutarate). The reaction is stimulated by the simple mechanisms of substrate availability (isocitrate, NAD+ or NADP+, Mg2+/Mn2+), product inhibition by NADH (or NADPH outside the citric acid cycle) and alpha-ketoglutarate, and competitive feedback inhibition by ATP. A conserved ncRNA upstream of the "icd" gene, which codes for NADP+-dependent isocitrate dehydrogenase (IDH), has been reported in bacterial genomes; because its characteristics resemble those of known regulatory motifs called riboswitches, the icd-II ncRNA motif has been proposed as a strong candidate riboswitch.
Catalytic mechanisms.
Isocitrate dehydrogenase catalyzes the chemical reactions:
Isocitrate + NAD+ formula_0 2-oxoglutarate + CO2 + NADH + H+
Isocitrate + NADP+ formula_0 2-oxoglutarate + CO2 + NADPH + H+
The overall free energy for this reaction is -8.4 kJ/mol.
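For a rough sense of scale, the sketch below converts that standard free-energy change into an equilibrium constant via ΔG° = −RT ln K; the temperature is an assumed 25 °C, so the result is only an order-of-magnitude illustration, not a statement about intracellular conditions.
```python
import math

# Assumed temperature of 25 degrees C; this is an order-of-magnitude
# illustration, not a statement about conditions inside the cell.
R = 8.314        # gas constant, J/(mol*K)
T = 298.15       # temperature, K
dG = -8.4e3      # standard free-energy change, J/mol

K = math.exp(-dG / (R * T))     # from Delta G = -R T ln K
print(f"equilibrium constant K is about {K:.0f}")   # roughly 30
```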
Steps.
Within the citric acid cycle, isocitrate, produced from the isomerization of citrate, undergoes both oxidation and decarboxylation. The enzyme isocitrate dehydrogenase (IDH) holds isocitrate within its active site using the surrounding amino acids, including arginine, tyrosine, asparagine, serine, threonine, and aspartic acid.
In the provided figure, the first box shows the overall isocitrate dehydrogenase reaction. The necessary reactants for this enzyme mechanism are isocitrate, NAD+/NADP+, and Mn2+ or Mg2+. The products of the reaction are alpha-ketoglutarate, carbon dioxide, and NADH + H+/NADPH + H+. Water molecules help to deprotonate the oxygen atoms of isocitrate.
The second box in the figure illustrates step 1 of the reaction, which is the oxidation of the alpha-carbon (C2 here, also called alpha-C). In this process, the alcohol group of the alpha-carbon is deprotonated and the resulting lone pair of electrons forms a ketone group on that carbon. NAD+/NADP+ acts as an electron-accepting cofactor and collects the resulting hydride from C2. The oxidation of the alpha carbon introduces a molecular arrangement where electrons (in the next step) will flow from the nearby carboxyl group and push the electrons of the double bonded oxygen up onto the oxygen atom itself, which collects a proton from a nearby lysine.
The third box illustrates step 2, which is the decarboxylation of oxalosuccinate. In this step, the carboxyl group oxygen is deprotonated by a nearby tyrosine, and those electrons flow down to C2. Carbon dioxide, the leaving group, detaches from the beta carbon of isocitrate (C3) and the electrons flow to the ketone oxygen attached to the alpha carbon, granting a negative charge to the associated oxygen atom and forming an alpha-beta unsaturated double bond between carbons 2 and 3.
The fourth and final box illustrates step 3, which is the saturation of the alpha-beta unsaturated double bond that formed in the previous step. The negatively charged oxygen (attached to the alpha-carbon) donates its electrons, reforming the ketone double bond and pushing another lone pair (the one that forms the double bond between the alpha and beta carbons) "off" the molecule. This lone pair, in turn, picks up a proton from the nearby tyrosine. This reaction results in the formation of alpha-ketoglutarate, NADH + H+/NADPH + H+, and CO2.
Detailed mechanism.
Two aspartate amino acid residues (below left) are interacting with two adjacent water molecules (w6 and w8) in the Mn2+ isocitrate porcine IDH complex to deprotonate the alcohol off the alpha-carbon atom. The oxidation of the alpha-C also takes place in this picture where NAD+ accepts a hydride resulting in oxalosuccinate. Along with the sp3 to sp2 stereochemical change around the alpha-C, there is a ketone group that is formed from the alcohol group. The formation of this ketone double bond allows for resonance to take place as electrons coming down from the leaving carboxylate group move towards the ketone.
The decarboxylation of oxalosuccinate (below center) is a key step in the formation of alpha-ketoglutarate. In this reaction, the lone pair on the adjacent tyrosine hydroxyl abstracts the proton from the carboxyl group. This carboxyl group is also referred to as the beta subunit of the isocitrate molecule. The deprotonation of the carboxyl group causes the lone pair of electrons to move down, forming carbon dioxide that separates from oxalosuccinate. The electrons continue to move towards the alpha carbon, pushing the double-bond electrons (making the ketone) up to abstract a proton from an adjacent lysine residue. An alpha-beta unsaturated double bond results between carbons 2 and 3. In the picture, the green ion represents either Mg2+ or Mn2+, a cofactor necessary for this reaction to occur. The metal ion forms a small complex through ionic interactions with the oxygen atoms on the fourth and fifth carbons (also known as the gamma subunit of isocitrate).
After the carbon dioxide is split from the oxalosuccinate in the decarboxylation step (below right), the enol tautomerizes to the keto form. The formation of the ketone double bond begins with the deprotonation of the oxygen on the alpha carbon (C2) by the same lysine that protonated the oxygen in the first place. The lone pair of electrons moves down, displacing the lone pair that was forming the double bond. This lone pair of electrons abstracts a proton from the tyrosine that deprotonated the carboxyl group in the decarboxylation step. The Lys and Tyr residues are the same as in the previous step because they help hold the isocitrate molecule in the active site of the enzyme. These two residues can form hydrogen bonds back and forth as long as they are close enough to the substrate.
As stated above, the isocitrate dehydrogenase enzyme produces alpha-ketoglutarate, carbon dioxide, and NADH + H+/NADPH + H+. Three changes occur over the course of the reaction: the oxidation of carbon 2, the decarboxylation (loss of carbon dioxide) at carbon 3, and the formation of a ketone group with a stereochemical change from sp3 to sp2.
Active site.
The Isocitrate Dehydrogenase (IDH) enzyme structure in "Escherichia coli" was the first IDH ortholog structure to be elucidated and understood. Since then, the "Escherichia coli" IDH structure has been used by most researchers to make comparisons to other isocitrate dehydrogenase enzymes. There is much detailed knowledge about this bacterial enzyme, and it has been found that most isocitrate dehydrogenases are similar in structure and therefore also in function. This similarity of structure and function gives reason to believe that the structures are conserved as well as the amino acids. Therefore, the active sites amongst most prokaryotic isocitrate dehydrogenase enzymes should be conserved as well, which is observed throughout many studies done on prokaryotic enzymes. Eukaryotic isocitrate dehydrogenase enzymes, on the other hand, have not yet been characterized as thoroughly.
Each dimer of IDH has two active sites. Each active site binds a NAD+/NADP+ molecule and a divalent metal ion (Mg2+,Mn2+). In general, each active site has a conserved sequence of amino acids for each specific binding site. In "Desulfotalea psychrophila" ("Dp"IDH) and porcine ("Pc"IDH) there are three substrates bound to the active site.
Clinical significance.
Specific mutations in the isocitrate dehydrogenase gene IDH1 have been found in several tumor types, notably brain tumors including astrocytoma and oligodendroglioma. Patients whose tumor had an IDH1 mutation had longer survival compared to patients whose tumor had an IDH1 wild type. Furthermore, mutations of IDH2 and IDH1 were found in up to 20% of cytogenetically normal acute myeloid leukemia (AML). These mutations are known to produce D-2-hydroxyglutarate from alpha-ketoglutarate. D-2-hydroxyglutarate accumulates to very high concentrations which inhibits the function of enzymes that are dependent on alpha-ketoglutarate. This leads to a hypermethylated state of DNA and histones, which results in different gene expression that can activate oncogenes and inactivate tumor-suppressor genes. Ultimately, this may lead to the types of cancer described above. Somatic mosaic mutations of this gene have also been found associated to Ollier disease and Maffucci syndrome. However, recent studies have also shown that D-2-hydroxyglutarate may be converted back into alpha-ketoglutarate either enzymatically or non-enzymatically. Further studies are required to fully understand the roles of IDH1 mutation (and D-2-hydroxyglutarate) in cancer. Recent research highlighted cancer-causing mutations in isocitrate dehydrogenase which may cause accumulation of the metabolite D-2-hydroxyglutarate (D-2HG). Notarangelo et al. showed that such high concentrations of D-2HG could act as a direct inhibitor of lactate dehydrogenase in mouse T cells. Inhibition of this metabolic enzyme altered glucose metabolism in the T cells and inhibited their proliferation, cytokine production, and ability to kill target cells.
Isozymes.
The following is a list of human isocitrate dehydrogenase isozymes:
NADP+ dependent.
Each NADP+-dependent isozyme functions as a homodimer:
NAD+ dependent.
The isocitrate dehydrogenase 3 isozyme is a heterotetramer that is composed of two alpha subunits, one beta subunit, and one gamma subunit:
References.
<templatestyles src="Reflist/styles.css" />
External links.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] | https://en.wikipedia.org/wiki?curid=1232957 |
12332 | Gustav Kirchhoff | German physicist (1824–1887)
Gustav Robert Kirchhoff (; 12 March 1824 – 17 October 1887) was a German physicist and mathematician who contributed to the fundamental understanding of electrical circuits, spectroscopy, and the emission of black-body radiation by heated objects.
He coined the term black-body radiation in 1860.
Several different sets of concepts are named "Kirchhoff's laws" after him, which include Kirchhoff's circuit laws, Kirchhoff's law of thermal radiation, and Kirchhoff's law of thermochemistry.
The Bunsen–Kirchhoff Award for spectroscopy is named after Kirchhoff and his colleague, Robert Bunsen.
Life and work.
Gustav Kirchhoff was born on 12 March 1824 in Königsberg, Prussia, the son of Friedrich Kirchhoff, a lawyer, and Johanna Henriette Wittke. His family were Lutherans in the Evangelical Church of Prussia. He graduated from the Albertus University of Königsberg in 1847 where he attended the mathematico-physical seminar directed by Carl Gustav Jacob Jacobi, Franz Ernst Neumann and Friedrich Julius Richelot. In the same year, he moved to Berlin, where he stayed until he received a professorship at Breslau. Later, in 1857, he married Clara Richelot, the daughter of his mathematics professor Richelot. The couple had five children. Clara died in 1869. He married Luise Brömmel in 1872.
Kirchhoff formulated his circuit laws, which are now ubiquitous in electrical engineering, in 1845, while he was still a student. He completed this study as a seminar exercise; it later became his doctoral dissertation. He was called to the University of Heidelberg in 1854, where he collaborated in spectroscopic work with Robert Bunsen. In 1857, he calculated that an electric signal in a resistanceless wire travels along the wire at the speed of light. He proposed his law of thermal radiation in 1859, and gave a proof in 1861. Together Kirchhoff and Bunsen invented the spectroscope, which Kirchhoff used to pioneer the identification of the elements in the Sun, showing in 1859 that the Sun contains sodium. He and Bunsen discovered caesium and rubidium in 1861. At Heidelberg he ran a mathematico-physical seminar, modelled on Franz Ernst Neumann's, with the mathematician Leo Koenigsberger. Among those who attended this seminar were Arthur Schuster and Sofia Kovalevskaya.
He contributed greatly to the field of spectroscopy by formalizing three laws that describe the spectral composition of light emitted by incandescent objects, building substantially on the discoveries of David Alter and Anders Jonas Ångström. In 1862, he was awarded the Rumford Medal for his researches on the fixed lines of the solar spectrum, and on the inversion of the bright lines in the spectra of artificial light. In 1875 Kirchhoff accepted the first chair dedicated specifically to theoretical physics at Berlin.
He also contributed to optics, carefully solving the wave equation to provide a solid foundation for Huygens' principle (and correct it in the process).
In 1864, he was elected as a member of the American Philosophical Society.
In 1884, he became foreign member of the Royal Netherlands Academy of Arts and Sciences.
Kirchhoff died in 1887, and was buried in the St Matthäus Kirchhof Cemetery in Schöneberg, Berlin (just a few meters from the graves of the Brothers Grimm). Leopold Kronecker is buried in the same cemetery.
Kirchhoff's circuit laws.
Kirchhoff's first law is that the algebraic sum of currents in a network of conductors meeting at a point (or node) is zero. The second law is that, around any closed loop in a circuit, the directed sum of the voltages is zero.
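Both laws can be checked numerically on a small example. The Python sketch below solves a hypothetical one-node circuit (a source feeding one series resistor and two parallel resistors, with arbitrary component values) and verifies that the branch currents at the node and the directed voltages around a loop each sum to zero.
```python
# Hypothetical circuit with arbitrary values: a 10 V source feeds R1, and
# R2 and R3 run from the far end of R1 (node A) to ground.
V, R1, R2, R3 = 10.0, 100.0, 200.0, 300.0

# Node equation (first law): current in through R1 equals current out
# through R2 and R3:  (V - Va)/R1 = Va/R2 + Va/R3
Va = (V / R1) / (1.0 / R1 + 1.0 / R2 + 1.0 / R3)

i1 = (V - Va) / R1            # current entering node A through R1
i2, i3 = Va / R2, Va / R3     # currents leaving node A through R2 and R3

assert abs(i1 - (i2 + i3)) < 1e-12      # first law: node currents balance
assert abs(V - i1 * R1 - Va) < 1e-12    # second law: loop voltages sum to zero
print(f"node voltage = {Va:.3f} V; currents = {i1:.4f}, {i2:.4f}, {i3:.4f} A")
```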
Kirchhoff's three laws of spectroscopy.
Kirchhoff did not know about the existence of energy levels in atoms. The existence of discrete spectral lines had been known since Fraunhofer discovered them in 1814, and Johann Balmer described the discrete mathematical pattern the lines form in 1885. Joseph Larmor explained the splitting of spectral lines in a magnetic field, known as the Zeeman effect, by the oscillation of electrons. But these discrete spectral lines were not explained as electron transitions until the Bohr model of the atom in 1913, which helped lead to quantum mechanics.
Kirchhoff's law of thermal radiation.
It was Kirchhoff's law of thermal radiation in which he proposed an unknown universal law for radiation that led Max Planck to the discovery of the quantum of action leading to quantum mechanics.
Kirchhoff's law of thermochemistry.
Kirchhoff showed in 1858 that, in thermochemistry, the variation of the heat of a chemical reaction is given by the difference in heat capacity between products and reactants:
formula_0.
Integration of this equation permits the evaluation of the heat of reaction at one temperature from measurements at another temperature.
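Under the common simplifying assumption that the heat-capacity difference is constant over the temperature interval, the integration reduces to a single linear term. The Python sketch below shows this with invented numbers; neither the reaction enthalpy nor the ΔCp value refers to a specific reaction.
```python
# Invented example values -- not data for a specific reaction.
dH_T1 = -92.0e3      # heat of reaction at T1, J/mol
dCp = -35.0          # heat-capacity difference (products - reactants), J/(mol*K)
T1, T2 = 298.15, 500.0

# Integrated Kirchhoff's law with constant dCp:
#   Delta H(T2) = Delta H(T1) + Delta Cp * (T2 - T1)
dH_T2 = dH_T1 + dCp * (T2 - T1)
print(f"Delta H at {T2} K is about {dH_T2 / 1e3:.1f} kJ/mol")
```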
Kirchhoff's theorem in graph theory.
Kirchhoff also worked in the mathematical field of graph theory, in which he proved Kirchhoff's matrix tree theorem.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\left(\\frac{\\partial \\Delta H}{\\partial T}\\right)_p = \\Delta C_p"
}
] | https://en.wikipedia.org/wiki?curid=12332 |
1233278 | Capillary electrophoresis | Method of separating chemical or biological samples
Capillary electrophoresis (CE) is a family of electrokinetic separation methods performed in submillimeter diameter capillaries and in micro- and nanofluidic channels. Very often, CE refers to capillary zone electrophoresis (CZE), but other electrophoretic techniques including capillary gel electrophoresis (CGE), capillary isoelectric focusing (CIEF), capillary isotachophoresis and micellar electrokinetic chromatography (MEKC) belong also to this class of methods. In CE methods, analytes migrate through electrolyte solutions under the influence of an electric field. Analytes can be separated according to ionic mobility and/or partitioning into an alternate phase via non-covalent interactions. Additionally, analytes may be concentrated or "focused" by means of gradients in conductivity and pH.
Instrumentation.
The instrumentation needed to perform capillary electrophoresis is relatively simple. A basic schematic of a capillary electrophoresis system is shown in "figure 1". The system's main components are a sample vial, source and destination vials, a capillary, electrodes, a high voltage power supply, a detector, and a data output and handling device. The source vial, destination vial and capillary are filled with an electrolyte such as an aqueous buffer solution. To introduce the sample, the capillary inlet is placed into a vial containing the sample. Sample is introduced into the capillary via capillary action, pressure, siphoning, or electrokinetically, and the capillary is then returned to the source vial. The migration of the analytes is initiated by an electric field that is applied between the source and destination vials and is supplied to the electrodes by the high-voltage power supply. In the most common mode of CE, all ions, positive or negative, are pulled through the capillary in the same direction by electroosmotic flow. The analytes separate as they migrate due to their electrophoretic mobility, and are detected near the outlet end of the capillary. The output of the detector is sent to a data output and handling device such as an integrator or computer. The data is then displayed as an electropherogram, which reports detector response as a function of time. Separated chemical compounds appear as peaks with different migration times in an electropherogram. The technique is often attributed to James W. Jorgensen and Krynn DeArman Lukacs, who first demonstrated the capabilities of this technique. Capillary electrophoresis was first combined with mass spectrometry by Richard D. Smith and coworkers, and provides extremely high sensitivity for the analysis of very small sample sizes. Despite the very small sample sizes (typically only a few nanoliters of liquid are introduced into the capillary), high sensitivity and sharp peaks are achieved in part due to injection strategies that result in a concentration of analytes into a narrow zone near the inlet of the capillary. This is achieved in either pressure or electrokinetic injections simply by suspending the sample in a buffer of lower conductivity ("e.g." lower salt concentration) than the running buffer. A process called field-amplified sample stacking (a form of isotachophoresis) results in concentration of analyte in a narrow zone at the boundary between the low-conductivity sample and the higher-conductivity running buffer.
To achieve greater sample throughput, instruments with arrays of capillaries are used to analyze many samples simultaneously. Such capillary array electrophoresis (CAE) instruments with 16 or 96 capillaries are used for medium- to high-throughput capillary DNA sequencing, and the inlet ends of the capillaries are arrayed spatially to accept samples directly from SBS-standard footprint 96-well plates. Certain aspects of the instrumentation (such as detection) are necessarily more complex than for a single-capillary system, but the fundamental principles of design and operation are similar to those shown in Figure 1.
Detection.
Separation by capillary electrophoresis can be detected by several detection devices. The majority of commercial systems use UV or UV-Vis absorbance as their primary mode of detection. In these systems, a section of the capillary itself is used as the detection cell. The use of on-tube detection enables detection of separated analytes with no loss of resolution. In general, capillaries used in capillary electrophoresis are coated with a polymer (frequently polyimide or Teflon) for increased flexibility. The portion of the capillary used for UV detection, however, must be optically transparent. For polyimide-coated capillaries, a segment of the coating is typically burned or scraped off to provide a bare window several millimeters long. This bare section of capillary can break easily, and capillaries with transparent coatings are available to increase the stability of the cell window. The path length of the detection cell in capillary electrophoresis (~ 50 micrometers) is far less than that of a traditional UV cell (~ 1 cm). According to the Beer-Lambert law, the sensitivity of the detector is proportional to the path length of the cell. To improve the sensitivity, the path length can be increased, though this results in a loss of resolution. The capillary tube itself can be expanded at the detection point, creating a "bubble cell" with a longer path length or additional tubing can be added at the detection point as shown in "figure 2". Both of these methods, however, will decrease the resolution of the separation. This decrease is almost unnoticeable if a smooth aneurysm is produced in the wall of a capillary by heating and pressurization, as plug flow can be preserved. This invention by Gary Gordon, US Patent 5061361, typically triples the absorbance path length. When used with a UV absorbance detector, the wider cross-section of the analyte in the cell allows for an illuminating beam twice as large, which reduces shot noise by a factor of two. Together these two factors increase the sensitivity of Agilent Technologies's Bubble Cell CE Detector six times over that of one using a straight capillary. This cell and its manufacture are described on page 62 of the June 1995 issue of the "Hewlett-Packard Journal".
Fluorescence detection can also be used in capillary electrophoresis for samples that naturally fluoresce or are chemically modified to contain fluorescent tags. This mode of detection offers high sensitivity and improved selectivity for these samples, but cannot be utilized for samples that do not fluoresce. Numerous labeling strategies are used to create fluorescent derivatives or conjugates of non-fluorescent molecules, including proteins and DNA. The set-up for fluorescence detection in a capillary electrophoresis system can be complicated. The method requires that the light beam be focused on the capillary, which can be difficult for many light sources. Laser-induced fluorescence has been used in CE systems with detection limits as low as 10−18 to 10−21 mol. The sensitivity of the technique is attributed to the high intensity of the incident light and the ability to accurately focus the light on the capillary. Multi-color fluorescence detection can be achieved by including multiple dichroic mirrors and bandpass filters to separate the fluorescence emission amongst multiple detectors ("e.g.," photomultiplier tubes), or by using a prism or grating to project spectrally resolved fluorescence emission onto a position-sensitive detector such as a CCD array. CE systems with 4- and 5-color LIF detection systems are used routinely for capillary DNA sequencing and genotyping ("DNA fingerprinting") applications.
In order to obtain the identity of sample components, capillary electrophoresis can be directly coupled with mass spectrometers or surface-enhanced Raman spectroscopy (SERS). In most systems, the capillary outlet is introduced into an ion source that utilizes electrospray ionization (ESI). The resulting ions are then analyzed by the mass spectrometer. This setup requires volatile buffer solutions, which will affect the range of separation modes that can be employed and the degree of resolution that can be achieved.
The measurement and analysis are mostly done with specialized software.
For CE-SERS, capillary electrophoresis eluants can be deposited onto a SERS-active substrate. Analyte retention times can be translated into spatial distance by moving the SERS-active substrate at a constant rate during capillary electrophoresis. This allows the subsequent spectroscopic technique to be applied to specific eluants for identification with high sensitivity. SERS-active substrates can be chosen that do not interfere with the spectrum of the analytes.
Modes of separation.
The separation of compounds by capillary electrophoresis is dependent on the differential migration of analytes in an applied electric field. The electrophoretic migration velocity (formula_0) of an analyte toward the electrode of opposite charge is:
formula_1
The electrophoretic mobility can be determined experimentally from the migration time and the field strength:
formula_2
where formula_3 is the distance from the inlet to the detection point, formula_4 is the time required for the analyte to reach the detection point (migration time), formula_5 is the applied voltage, and formula_6 is the total length of the capillary. Since only charged species are affected by the electric field, neutral analytes are poorly separated by capillary electrophoresis.
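The experimental expression for the mobility translates directly into a short calculation. In the Python sketch below, the capillary lengths, applied voltage and migration time are hypothetical values chosen only to give numbers of a typical order of magnitude.
```python
# Hypothetical instrument settings, chosen only to give typical orders of
# magnitude -- not recommended operating conditions.
L = 0.50        # length from inlet to the detection window, m
L_t = 0.60      # total capillary length, m
V = 25e3        # applied voltage, V
t_r = 300.0     # migration time of the analyte, s

E = V / L_t                  # field strength, V/m
u_p = L / t_r                # migration velocity, m/s
mu_p = u_p / E               # apparent mobility = (L/t_r)(L_t/V), m^2/(V*s)

print(f"E = {E:.0f} V/m, u = {u_p:.2e} m/s, mobility = {mu_p:.2e} m^2/(V*s)")
```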
The velocity of migration of an analyte in capillary electrophoresis will also depend upon the rate of electroosmotic flow (EOF) of the buffer solution. In a typical system, the electroosmotic flow is directed toward the negatively charged cathode so that the buffer flows through the capillary from the source vial to the destination vial. Separated by differing electrophoretic mobilities, analytes migrate toward the electrode of opposite charge. As a result, negatively charged analytes are attracted to the positively charged anode, counter to the EOF, while positively charged analytes are attracted to the cathode, in agreement with the EOF as depicted in "figure 3".
The velocity of the electroosmotic flow, formula_7 can be written as:
formula_8
where formula_9 is the electroosmotic mobility, which is defined as:
formula_10
where formula_11 is the zeta potential of the capillary wall, and formula_12 is the relative permittivity of the buffer solution. Experimentally, the electroosmotic mobility can be determined by measuring the retention time of a neutral analyte. The velocity (formula_13) of an analyte in an electric field can then be defined as:
formula_14
Since the electroosmotic flow of the buffer solution is generally greater than that of the electrophoretic mobility of the analytes, all analytes are carried along with the buffer solution toward the cathode. Even small, triply charged anions can be redirected to the cathode by the relatively powerful EOF of the buffer solution. Negatively charged analytes are retained longer in the capillary due to their conflicting electrophoretic mobilities. The order of migration seen by the detector is shown in "figure 3": small multiply charged cations migrate quickly and small multiply charged anions are retained strongly.
Electroosmotic flow is observed when an electric field is applied to a solution in a capillary that has fixed charges on its interior wall. Charge is accumulated on the inner surface of a capillary when a buffer solution is placed inside the capillary. In a fused-silica capillary, silanol (Si-OH) groups attached to the interior wall of the capillary are ionized to negatively charged silanoate (Si-O−) groups at pH values greater than three. The ionization of the capillary wall can be enhanced by first running a basic solution, such as NaOH or KOH through the capillary prior to introducing the buffer solution. Attracted to the negatively charged silanoate groups, the positively charged cations of the buffer solution will form two inner layers of cations (called the diffuse double layer or the electrical double layer) on the capillary wall as shown in "figure 4". The first layer is referred to as the fixed layer because it is held tightly to the silanoate groups. The outer layer, called the mobile layer, is farther from the silanoate groups. The mobile cation layer is pulled in the direction of the negatively charged cathode when an electric field is applied. Since these cations are solvated, the bulk buffer solution migrates with the mobile layer, causing the electroosmotic flow of the buffer solution. Other capillaries including Teflon capillaries also exhibit electroosmotic flow. The EOF of these capillaries is probably the result of adsorption of the electrically charged ions of the buffer onto the capillary walls. The rate of EOF is dependent on the field strength and the charge density of the capillary wall. The wall's charge density is proportional to the pH of the buffer solution. The electroosmotic flow will increase with pH until all of the available silanols lining the wall of the capillary are fully ionized.
In certain situations where strong electroosmotic flow toward the cathode is undesirable, the inner surface of the capillary can be coated with polymers, surfactants, or small molecules to reduce electroosmosis to very low levels, restoring the normal direction of migration (anions toward the anode, cations toward the cathode). CE instrumentation typically includes power supplies with reversible polarity, allowing the same instrument to be used in "normal" mode (with EOF and detection near the cathodic end of the capillary) and "reverse" mode (with EOF suppressed or reversed, and detection near the anodic end of the capillary). One of the most common approaches to suppressing EOF, reported by Stellan Hjertén in 1985, is to create a covalently attached layer of linear polyacrylamide. The silica surface of the capillary is first modified with a silane reagent bearing a polymerizable vinyl group ("e.g." 3-methacryloxypropyltrimethoxysilane), followed by introduction of acrylamide monomer and a free radical initiator. The acrylamide is polymerized "in situ", forming long linear chains, some of which are covalently attached to the wall-bound silane reagent. Numerous other strategies for covalent modification of capillary surfaces exist. Dynamic or adsorbed coatings (which can include polymers or small molecules) are also common. For example, in capillary sequencing of DNA, the sieving polymer (typically polydimethylacrylamide) suppresses electroosmotic flow to very low levels. Besides modulating electroosmotic flow, capillary wall coatings can also serve the purpose of reducing interactions between "sticky" analytes (such as proteins) and the capillary wall. Such wall-analyte interactions, if severe, manifest as reduced peak efficiency, asymmetric (tailing) peaks, or even complete loss of analyte to the capillary wall.
Efficiency and resolution.
The number of theoretical plates, or separation efficiency, in capillary electrophoresis is given by:
formula_15
where formula_16 is the number of theoretical plates, formula_17 is the apparent mobility in the separation medium and formula_18 is the diffusion coefficient of the analyte. According to this equation, the efficiency of separation is only limited by diffusion and is proportional to the strength of the electric field, although practical considerations limit the strength of the electric field to several hundred volts per centimeter. Application of very high potentials (>20-30 kV) may lead to arcing or breakdown of the capillary. Further, application of strong electric fields leads to resistive heating (Joule heating) of the buffer in the capillary. At sufficiently high field strengths, this heating is strong enough that radial temperature gradients can develop within the capillary. Since electrophoretic mobility of ions is generally temperature-dependent (due to both temperature-dependent ionization and solvent viscosity effects), a non-uniform temperature profile results in variation of electrophoretic mobility across the capillary, and a loss of resolution. The onset of significant Joule heating can be determined by constructing an "Ohm's Law plot", wherein the current through the capillary is measured as a function of applied potential. At low fields, the current is proportional to the applied potential (Ohm's Law), whereas at higher fields the current deviates from the straight line as heating results in decreased resistance of the buffer. The best resolution is typically obtained at the maximum field strength for which Joule heating is insignificant ("i.e." near the boundary between the linear and nonlinear regimes of the Ohm's Law plot). Generally capillaries of smaller inner diameter support use of higher field strengths, due to improved heat dissipation and smaller thermal gradients relative to larger capillaries, but with the drawbacks of lower sensitivity in absorbance detection due to shorter path length, and greater difficulty in introducing buffer and sample into the capillary (small capillaries require greater pressure and/or longer times to force fluids through the capillary).
The efficiency of capillary electrophoresis separations is typically much higher than the efficiency of other separation techniques like HPLC. Unlike HPLC, in capillary electrophoresis there is no mass transfer between phases. In addition, the flow profile in EOF-driven systems is flat, rather than the rounded laminar flow profile characteristic of the pressure-driven flow in chromatography columns as shown in "figure 5". As a result, EOF does not significantly contribute to band broadening as in pressure-driven chromatography. Capillary electrophoresis separations can have several hundred thousand theoretical plates.
The resolution (formula_19) of capillary electrophoresis separations can be written as:
formula_20
According to this equation, maximum resolution is reached when the electrophoretic and electroosmotic mobilities are similar in magnitude and opposite in sign. In addition, it can be seen that high resolution requires lower velocity and, correspondingly, increased analysis time.
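The two expressions above can be combined into a quick order-of-magnitude estimate. In the Python sketch below, the mobilities, voltage and diffusion coefficient are assumed, representative-looking values rather than measurements, so the plate number and resolution it prints are illustrative only.
```python
# Assumed, representative-looking values -- not measurements.
V = 25e3          # applied voltage, V
mu_p = 4.0e-8     # electrophoretic mobility of the first analyte, m^2/(V*s)
d_mu = 2.0e-9     # mobility difference between the two analytes, m^2/(V*s)
mu_o = 5.0e-8     # electroosmotic mobility, m^2/(V*s)
D_m = 1.0e-9      # diffusion coefficient, m^2/s

N = mu_p * V / (2.0 * D_m)                     # theoretical plates
Rs = 0.25 * d_mu * N**0.5 / (mu_p + mu_o)      # resolution

print(f"N is about {N:.1e} plates, Rs is about {Rs:.2f}")
```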
Besides diffusion and Joule heating (discussed above), factors that may decrease the resolution in capillary electrophoresis from the theoretical limits in the above equation include, but are not limited to, the finite widths of the injection plug and detection window; interactions between the analyte and the capillary wall; instrumental non-idealities such as a slight difference in height of the fluid reservoirs leading to siphoning; irregularities in the electric field due to, "e.g.," imperfectly cut capillary ends; depletion of buffering capacity in the reservoirs; and electrodispersion (when an analyte has higher conductivity than the background electrolyte). Identifying and minimizing the numerous sources of band broadening is key to successful method development in capillary electrophoresis, with the objective of approaching as close as possible to the ideal of diffusion-limited resolution.
Applications.
Capillary electrophoresis may be used for the simultaneous determination of the ions NH4+, Na+, K+, Mg2+ and Ca2+ in saliva.
One of the main applications of CE in forensic science is the development of methods for amplification and detection of DNA fragments using polymerase chain reaction (PCR), which has led to rapid and dramatic advances in forensic DNA analysis. DNA separations are carried out in thin fused-silica CE capillaries (about 50 μm inner diameter) filled with a sieving buffer. These capillaries dissipate heat very effectively, permitting much higher electric field strengths to be used than in slab gel electrophoresis. Therefore, separations in capillaries are rapid and efficient. Additionally, the capillaries can be easily refilled and changed for efficient and automated injections. Detection occurs via fluorescence through a window etched in the capillary. Both single-capillary and capillary-array instruments are available, with array systems capable of running 16 or more samples simultaneously for increased throughput.
A major use of CE by forensic biologists is typing of STR from biological samples to generate a profile from highly polymorphic genetic markers which differ between individuals. Other emerging uses for CE include the detection of specific mRNA fragments to help identify the biological fluid or tissue origin of a forensic sample.
Another application of CE in forensics is ink analysis, where the analysis of inkjet printing inks is becoming more necessary due to increasingly frequent counterfeiting of documents printed by inkjet printers. The chemical composition of inks provides very important information in cases of fraudulent documents and counterfeit banknotes. Micellar electrokinetic capillary chromatography (MECC) has been developed and applied to the analysis of inks extracted from paper. Owing to its high resolving power toward inks containing several chemically similar substances, differences between inks from the same manufacturer can also be distinguished. This makes it suitable for evaluating the origin of documents based on the chemical composition of inks. It is worth noting that because the same cartridge may be compatible with different printer models, the differentiation of inks on the basis of their MECC electrophoretic profiles is more reliable for determining the ink cartridge of origin (its producer and cartridge number) than for determining the printer model of origin.
A specialized type of CE, affinity capillary electrophoresis (ACE), utilizes intermolecular binding interactions to understand protein-ligand interactions. Pharmaceutical companies use ACE for a multitude of reasons, one of the main ones being the determination of association/binding constants for drugs and ligands or for drugs and certain vehicle systems like micelles. It is a widely used technique because of its simplicity, rapid results, and low analyte usage. The use of ACE can provide specific details of the binding, separation, and detection of analytes and has proven to be highly practical for studies in the life sciences. Aptamer-based affinity capillary electrophoresis is utilized for the analysis and modification of specific affinity reagents. Modified aptamers ideally exhibit high binding affinity, specificity, and nuclease resistance. Ren et al. incorporated modified nucleotides in aptamers to introduce new conformational features and high-affinity interactions from the hydrophobic and polar interactions between IL-1α and the aptamer. Huang et al. used ACE to investigate protein-protein interactions using aptamers. An α-thrombin-binding aptamer was labeled with 6-carboxyfluorescein for use as a selective fluorescent probe and was studied to elucidate information on binding sites for protein-protein and protein-DNA interactions.
Capillary electrophoresis (CE) has become an important, cost-effective approach to DNA sequencing that provides high-throughput, high-accuracy sequencing information. Woolley and Mathies used a CE chip to sequence DNA fragments with 97% accuracy and a speed of 150 bases in 540 seconds. They used a 4-color labeling and detection format to collect fluorescence data. Fluorescence is used to follow the abundance of each base (A, T, C and G), and the resulting peaks in the electropherogram are used to determine the sequence of the DNA.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "u_p"
},
{
"math_id": 1,
"text": " u_p = \\mu_p E \\,"
},
{
"math_id": 2,
"text": "\\mu_p = \\left ( \\frac{L}{t_r} \\right )\\left ( \\frac{L_t}{V} \\right )"
},
{
"math_id": 3,
"text": "L"
},
{
"math_id": 4,
"text": "t_r"
},
{
"math_id": 5,
"text": "V"
},
{
"math_id": 6,
"text": "L_t"
},
{
"math_id": 7,
"text": "u_o"
},
{
"math_id": 8,
"text": " u_o= \\mu_o E "
},
{
"math_id": 9,
"text": "\\mu_o"
},
{
"math_id": 10,
"text": "\\mu_o= \\frac{\\epsilon \\zeta}{\\eta}"
},
{
"math_id": 11,
"text": "\\zeta"
},
{
"math_id": 12,
"text": "\\epsilon"
},
{
"math_id": 13,
"text": "u"
},
{
"math_id": 14,
"text": " u_p + u_o = (\\mu_p +\\mu_o) E"
},
{
"math_id": 15,
"text": " N=\\frac{\\mu V}{2 D_m}"
},
{
"math_id": 16,
"text": "N"
},
{
"math_id": 17,
"text": "\\mu"
},
{
"math_id": 18,
"text": "D_m"
},
{
"math_id": 19,
"text": "R_s"
},
{
"math_id": 20,
"text": "R_s = \\frac{1}{4}\\left ( \\frac{\\triangle \\mu_p \\sqrt{N} }{\\mu_p +\\mu_o} \\right )"
}
] | https://en.wikipedia.org/wiki?curid=1233278 |
12335752 | Energy minimization | In the field of computational chemistry, energy minimization (also called energy optimization, geometry minimization, or geometry optimization) is the process of finding an arrangement in space of a collection of atoms where, according to some computational model of chemical bonding, the net inter-atomic force on each atom is acceptably close to zero and the position on the potential energy surface (PES) is a stationary point (described later). The collection of atoms might be a single molecule, an ion, a condensed phase, a transition state or even a collection of any of these. The computational model of chemical bonding might, for example, be quantum mechanics.
As an example, when optimizing the geometry of a water molecule, one aims to obtain the hydrogen-oxygen bond lengths and the hydrogen-oxygen-hydrogen bond angle which minimize the forces that would otherwise be pulling atoms together or pushing them apart.
The motivation for performing a geometry optimization is the physical significance of the obtained structure: optimized structures often correspond to a substance as it is found in nature and the geometry of such a structure can be used in a variety of experimental and theoretical investigations in the fields of chemical structure, thermodynamics, chemical kinetics, spectroscopy and others.
Typically, but not always, the process seeks to find the geometry of a particular arrangement of the atoms that represents a local or global energy minimum. Instead of searching for global energy minimum, it might be desirable to optimize to a transition state, that is, a saddle point on the potential energy surface. Additionally, certain coordinates (such as a chemical bond length) might be fixed during the optimization.
Molecular geometry and mathematical interpretation.
The geometry of a set of atoms can be described by a vector of the atoms' positions. This could be the set of the Cartesian coordinates of the atoms or, when considering molecules, might be so called "internal coordinates" formed from a set of bond lengths, bond angles and dihedral angles.
Given a set of atoms and a vector, r, describing the atoms' positions, one can introduce the concept of the energy as a function of the positions, "E"(r). Geometry optimization is then a mathematical optimization problem, in which it is desired to find the value of r for which "E"(r) is at a local minimum, that is, the derivative of the energy with respect to the position of the atoms, ∂"E"/∂r, is the zero vector and the second derivative matrix of the system, formula_0, also known as the Hessian matrix, which describes the curvature of the PES at r, has all positive eigenvalues (is positive definite).
A special case of a geometry optimization is a search for the geometry of a transition state; this is discussed below.
The computational model that provides an approximate "E"(r) could be based on quantum mechanics (using either density functional theory or semi-empirical methods), force fields, or a combination of those in case of QM/MM. Using this computational model and an initial guess (or ansatz) of the correct geometry, an iterative optimization procedure is followed: the energy and forces are evaluated for the current geometry, the atomic positions are updated so as to reduce the forces, and the cycle is repeated until the forces fall below a chosen convergence threshold.
Practical aspects of optimization.
As described above, some method such as quantum mechanics can be used to calculate the energy, "E"(r), the gradient of the PES, that is, the derivative of the energy with respect to the position of the atoms, ∂"E"/∂r and the second derivative matrix of the system, ∂∂"E"/∂"r"i∂"r"j, also known as the Hessian matrix, which describes the curvature of the PES at r.
An optimization algorithm can use some or all of "E"(r), ∂"E"/∂r and ∂²"E"/∂"r"i∂"r"j to try to minimize the forces and this could in theory be any method such as gradient descent, conjugate gradient or Newton's method, but in practice, algorithms which use knowledge of the PES curvature, that is the Hessian matrix, are found to be superior. For most systems of practical interest, however, it may be prohibitively expensive to compute the second derivative matrix, and it is instead estimated from successive values of the gradient, as is typical in a quasi-Newton optimization.
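As a rough illustration of such a procedure (not tied to any particular chemistry package), the following sketch applies a quasi-Newton (BFGS) optimizer, which builds up an approximate Hessian from successive gradients, to a toy Lennard-Jones cluster. The potential, parameters and starting geometry are invented for demonstration only.

```python
# Hypothetical illustration: quasi-Newton (BFGS) geometry optimization of a
# three-atom Lennard-Jones cluster.  All parameters are invented for the example.
import numpy as np
from scipy.optimize import minimize

def lj_energy(flat_coords, epsilon=1.0, sigma=1.0):
    """Total Lennard-Jones energy; flat_coords holds the Cartesian positions."""
    r = flat_coords.reshape(-1, 3)
    energy = 0.0
    for i in range(len(r)):
        for j in range(i + 1, len(r)):
            d = np.linalg.norm(r[i] - r[j])
            energy += 4.0 * epsilon * ((sigma / d) ** 12 - (sigma / d) ** 6)
    return energy

# Initial guess (ansatz): a slightly distorted triangle.
r0 = np.array([[0.0, 0.0, 0.0],
               [1.3, 0.0, 0.0],
               [0.6, 1.1, 0.0]]).ravel()

# BFGS estimates the Hessian from successive gradients; the gradients
# themselves are obtained here by finite differences of the energy.
result = minimize(lj_energy, r0, method="BFGS")
print("optimized energy:", result.fun)
print("largest residual force component:", np.abs(result.jac).max())
```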
The choice of the coordinate system can be crucial for performing a successful optimization. Cartesian coordinates, for example, are redundant since a non-linear molecule with "N" atoms has 3"N"–6 vibrational degrees of freedom whereas the set of Cartesian coordinates has 3"N" dimensions. Additionally, Cartesian coordinates are highly correlated, that is, the Hessian matrix has many non-diagonal terms that are not close to zero. This can lead to numerical problems in the optimization, because, for example, it is difficult to obtain a good approximation to the Hessian matrix and calculating it precisely is too computationally expensive. However, when the energy is expressed with standard force fields, computationally efficient methods have been developed that can derive the Hessian matrix analytically in Cartesian coordinates, while preserving a computational complexity of the same order as that of gradient computations. Internal coordinates tend to be less correlated but are more difficult to set up, and it can be difficult to describe some systems, such as ones with symmetry or large condensed phases. Many modern computational chemistry software packages contain procedures for the automatic generation of reasonable coordinate systems for optimization.
Degree of freedom restriction.
Some degrees of freedom can be eliminated from an optimization, for example, positions of atoms or bond lengths and angles can be given fixed values. Sometimes these are referred to as being "frozen" degrees of freedom.
Figure 1 depicts a geometry optimization of the atoms in a carbon nanotube in the presence of an external electrostatic field. In this optimization, the atoms on the left have their positions frozen. Their interactions with the other atoms in the system are still calculated, but alteration of the atoms' positions during the optimization is prevented.
Transition state optimization.
Transition state structures can be determined by searching for saddle points on the PES of the chemical species of interest. A first-order saddle point is a position on the PES corresponding to a minimum in all directions except one; a second-order saddle point is a minimum in all directions except two, and so on. Defined mathematically, an "n"th order saddle point is characterized by the following: ∂"E"/∂r = 0 and the Hessian matrix, ∂²"E"/∂"r"i∂"r"j, has exactly "n" negative eigenvalues.
Algorithms to locate transition state geometries fall into two main categories: local methods and semi-global methods. Local methods are suitable when the starting point for the optimization is very close to the true transition state ("very close" will be defined shortly) and semi-global methods find application when it is sought to locate the transition state with very little "a priori" knowledge of its geometry. Some methods, such as the Dimer method (see below), fall into both categories.
Local searches.
A so-called local optimization requires an initial guess of the transition state that is very close to the true transition state. "Very close" typically means that the initial guess must have a corresponding Hessian matrix with one negative eigenvalue, or, if there are several negative eigenvalues, the one corresponding to the reaction coordinate must be the greatest in magnitude. Further, the eigenvector with the most negative eigenvalue must correspond to the reaction coordinate, that is, it must represent the geometric transformation relating to the process whose transition state is sought.
Given the above pre-requisites, a local optimization algorithm can then move "uphill" along the eigenvector with the most negative eigenvalue and "downhill" along all other degrees of freedom, using something similar to a quasi-Newton method.
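The sketch below illustrates one such step in a highly simplified form, assuming the gradient and Hessian at the current geometry are already available and that the Hessian has exactly one negative eigenvalue; the function name and the trust-radius value are assumptions made for the example.

```python
# Simplified, hypothetical transition-state step: walk uphill along the mode
# with the most negative Hessian eigenvalue and downhill along all others.
import numpy as np

def transition_state_step(grad, hessian, trust_radius=0.1):
    eigvals, eigvecs = np.linalg.eigh(hessian)   # ascending eigenvalues
    g_modes = eigvecs.T @ grad                   # gradient in the eigenbasis
    step_modes = np.zeros_like(g_modes)
    for k, (lam, gk) in enumerate(zip(eigvals, g_modes)):
        if k == 0:
            step_modes[k] = +gk / abs(lam)       # uphill along the lowest mode
        else:
            step_modes[k] = -gk / abs(lam)       # Newton-like downhill step
    step = eigvecs @ step_modes
    length = np.linalg.norm(step)
    if length > trust_radius:                    # crude step-size control
        step *= trust_radius / length
    return step
```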
Dimer method.
The dimer method can be used to find possible transition states without knowledge of the final structure or to refine a good guess of a transition structure. The “dimer” is formed by two images very close to each other on the PES. The method works by moving the dimer uphill from the starting position whilst rotating the dimer to find the direction of lowest curvature (ultimately negative).
Activation Relaxation Technique (ART).
The Activation Relaxation Technique (ART) is also an open-ended method to find new transition states or to refine known saddle points on the PES. The method follows the direction of lowest negative curvature (computed using the Lanczos algorithm) on the PES to reach the saddle point, relaxing in the perpendicular hyperplane between each "jump" (activation) in this direction.
Chain-of-state methods.
Chain-of-state methods can be used to find the "approximate" geometry of the transition state based on the geometries of the reactant and product. The generated approximate geometry can then serve as a starting point for refinement via a local search, which was described above.
Chain-of-state methods use a series of vectors, that is points on the PES, connecting the reactant and product of the reaction of interest, rreactant and rproduct, thus discretizing the reaction pathway. Very commonly, these points are referred to as "beads" due to an analogy of a set of beads connected by strings or springs, which connect the reactant and products. The series of beads is often initially created by interpolating between rreactant and rproduct, for example, for a series of "N" + 1 beads, bead "i" might be given by
formula_1
where "i" ∈ 0, 1, ..., "N". Each of the beads r"i" has an energy, "E"(r"i"), and forces, -∂"E"/∂r"i" and these are treated with a constrained optimization process that seeks to get an as accurate as possible representation of the reaction pathway. For this to be achieved, spacing constraints must be applied so that each bead r"i" does not simply get optimized to the reactant and product geometry.
Often this constraint is achieved by projecting out components of the force on each bead r"i", or alternatively the movement of each bead during optimization, that are tangential to the reaction path. For example, if for convenience, it is defined that g"i" = ∂"E"/∂r"i", then the energy gradient at each bead minus the component of the energy gradient that is tangential to the reaction pathway is given by
formula_2
where "I" is the identity matrix and τ"i" is a unit vector representing the reaction path tangent at r"i". By projecting out components of the energy gradient or the optimization step that are parallel to the reaction path, an optimization algorithm significantly reduces the tendency of each of the beads to be optimized directly to a minimum.
Synchronous transit.
The simplest chain-of-state method is the linear synchronous transit (LST) method. It operates by taking interpolated points between the reactant and product geometries and choosing the one with the highest energy for subsequent refinement via a local search. The quadratic synchronous transit (QST) method extends LST by allowing a parabolic reaction path, with optimization of the highest energy point orthogonally to the parabola.
Nudged elastic band.
In Nudged elastic band (NEB) method, the beads along the reaction pathway have simulated spring forces in addition to the chemical forces, -∂"E"/∂r"i", to cause the optimizer to maintain the spacing constraint. Specifically, the force f"i" on each point "i" is given by
formula_3
where
formula_4
is the spring force parallel to the pathway at each point r"i" ("k" is a spring constant and τ"i", as before, is a unit vector representing the reaction path tangent at r"i").
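A minimal sketch of assembling this force for one interior bead, assuming a simple central estimate of the path tangent and an arbitrary spring constant, could read:

```python
# Hypothetical NEB force on an interior bead i, following the expressions above.
import numpy as np

def neb_force(r_prev, r_i, r_next, grad_i, k=1.0):
    tau = r_next - r_prev                    # crude central tangent estimate
    tau = tau / np.linalg.norm(tau)
    f_parallel = k * np.dot((r_next - r_i) - (r_i - r_prev), tau) * tau
    g_perp = grad_i - tau * np.dot(tau, grad_i)
    return f_parallel - g_perp               # f_i = f_parallel - g_perp
```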
In a traditional implementation, the point with the highest energy is used for subsequent refinement in a local search. There are many variations on the NEB method, such as the climbing image NEB, in which the point with the highest energy is pushed upwards during the optimization procedure so as to (hopefully) give a geometry which is even closer to that of the transition state. There have also been extensions to include Gaussian process regression for reducing the number of evaluations. For systems with non-Euclidean (R^2) geometry, like magnetic systems, the method is modified to the geodesic nudged elastic band approach.
String method.
The string method uses splines connecting the points, r"i", to measure and enforce distance constraints between the points and to calculate the tangent at each point. In each step of an optimization procedure, the points might be moved according to the force acting on them perpendicular to the path, and then, if the equidistance constraint between the points is no longer satisfied, the points can be redistributed, using the spline representation of the path to generate new vectors with the required spacing.
Variations on the string method include the growing string method, in which the guess of the pathway is grown in from the end points (that is the reactant and products) as the optimization progresses.
Comparison with other techniques.
Geometry optimization is fundamentally different from a molecular dynamics simulation. The latter simulates the motion of molecules with respect to time, subject to temperature, chemical forces, initial velocities, Brownian motion of a solvent, and so on, via the application of Newton's laws of motion. This means that the trajectories of the atoms which get computed have some physical meaning. Geometry optimization, by contrast, does not produce a "trajectory" with any physical meaning – it is concerned with minimization of the forces acting on each atom in a collection of atoms, and the pathway via which it achieves this lacks meaning. Different optimization algorithms could give the same result for the minimum energy structure, but arrive at it via a different pathway.
| [
{
"math_id": 0,
"text": "\\begin{pmatrix}\n\\frac{\\partial^2 E}{\\partial r_i\\,\\partial r_j}\n\\end{pmatrix}_{ij}"
},
{
"math_id": 1,
"text": "\\mathbf{r}_i = \\frac{i}{N}\\mathbf{r}_\\mathrm{product} + \\left(1 - \\frac{i}{N} \\right)\\mathbf{r}_\\mathrm{reactant}"
},
{
"math_id": 2,
"text": "\\mathbf{g}_i^\\perp = \\mathbf{g}_i - \\mathbf{\\tau}_i(\\mathbf{\\tau}_i\\cdot\\mathbf{g}_i) = \\left( I - \\mathbf{\\tau}_i \\mathbf{\\tau}_i^T \\right)\\mathbf{g}_i"
},
{
"math_id": 3,
"text": "\\mathbf{f}_i = \\mathbf{f}_i^{\\parallel} -\\mathbf{g}_i^{\\perp}"
},
{
"math_id": 4,
"text": "\\mathbf{f}_i^{\\parallel} = k\\left[\\left( \\left(\\mathbf{r}_{i+1} - \\mathbf{r}_i\\right) - \\left(\\mathbf{r}_i - \\mathbf{r}_{i-1}\\right)\\right)\\cdot\\tau_i \\right] \\tau_i"
}
] | https://en.wikipedia.org/wiki?curid=12335752 |
1233773 | Seifert–Weber space | In mathematics, Seifert–Weber space (introduced by Herbert Seifert and Constantin Weber) is a closed hyperbolic 3-manifold. It is also known as Seifert–Weber dodecahedral space and hyperbolic dodecahedral space. It is one of the first discovered examples of closed hyperbolic 3-manifolds.
It is constructed by gluing each face of a dodecahedron to its opposite in a way that produces a closed 3-manifold. There are three ways to do this gluing consistently. Opposite faces are misaligned by 1/10 of a turn, so to match them they must be rotated by 1/10, 3/10 or 5/10 turn; a rotation of 3/10 gives the Seifert–Weber space. Rotation of 1/10 gives the Poincaré homology sphere, and rotation by 5/10 gives 3-dimensional real projective space.
With the 3/10-turn gluing pattern, the edges of the original dodecahedron are glued to each other in groups of five. Thus, in the Seifert–Weber space, each edge is surrounded by five pentagonal faces, and the dihedral angle between these pentagons is 72°. This does not match the 117° dihedral angle of a regular dodecahedron in Euclidean space, but in hyperbolic space there exist regular dodecahedra with any dihedral angle between 60° and 117°, and the hyperbolic dodecahedron with dihedral angle 72° may be used to give the Seifert–Weber space a geometric structure as a hyperbolic manifold.
It is a (finite volume) quotient space of the (non-finite volume) order-5 dodecahedral honeycomb, a regular tessellation of hyperbolic 3-space by dodecahedra with this dihedral angle.
The Seifert–Weber space is a rational homology sphere, and its first homology group is isomorphic to formula_0. William Thurston conjectured that the Seifert–Weber space is not a Haken manifold, that is, it does not contain any incompressible surfaces; the conjecture was later proved with the aid of the computer software Regina. | [
{
"math_id": 0,
"text": "\\mathbb Z_5^3"
}
] | https://en.wikipedia.org/wiki?curid=1233773 |
1234 | Acoustic theory | Theory of sound waves
Acoustic theory is a scientific field that relates to the description of sound waves. It derives from fluid dynamics. See acoustics for the engineering approach.
For sound waves with a disturbance of any magnitude in velocity, pressure, and density, we have
formula_0
In the case that the fluctuations in velocity, density, and pressure are small, we can approximate these as
formula_1
Where formula_2 is the perturbed velocity of the fluid, formula_3 is the pressure of the fluid at rest, formula_4 is the perturbed pressure of the system as a function of space and time, formula_5 is the density of the fluid at rest, and formula_6 is the variation in the density of the fluid over space and time.
In the case that the velocity is irrotational (formula_7), we then have the acoustic wave equation that describes the system:
formula_8
Where we have
formula_9
Derivation for a medium at rest.
Starting with the Continuity Equation and the Euler Equation:
formula_10
If we take small perturbations of a constant pressure and density:
formula_11
Then the equations of the system are
formula_12
Noting that the equilibrium pressures and densities are constant, this simplifies to
formula_13
A Moving Medium.
Starting with
formula_14
We can have these equations work for a moving medium by setting formula_15, where formula_16 is the constant velocity at which the whole fluid is moving before being disturbed (equivalent to a moving observer) and formula_17 is the perturbed fluid velocity.
In this case the equations look very similar:
formula_18
Note that setting formula_19 returns the equations at rest.
Linearized Waves.
Starting with the above given equations of motion for a medium at rest:
formula_13
Let us now take formula_20 to all be small quantities.
In the case that we keep terms to first order, for the continuity equation, we have the formula_21 term going to 0. This similarly applies for the density perturbation times the time derivative of the velocity. Moreover, the spatial components of the material derivative go to 0. We thus have, upon rearranging the equilibrium density:
formula_22
Next, given that our sound wave occurs in an ideal fluid, the motion is adiabatic, and then we can relate the small change in the pressure to the small change in the density by
formula_23
Under this condition, we see that we now have
formula_24
Defining the speed of sound of the system:
formula_25
Everything becomes
formula_26
For Irrotational Fluids.
In the case that the fluid is irrotational, that is formula_27, we can then write formula_28 and thus write our equations of motion as
formula_29
The second equation tells us that
formula_30
And the use of this equation in the continuity equation tells us that
formula_31
This simplifies to
formula_32
Thus the velocity potential formula_33 obeys the wave equation in the limit of small disturbances. The boundary conditions required to solve for the potential come from the fact that the velocity of the fluid must be 0 normal to the fixed surfaces of the system.
Taking the time derivative of this wave equation and multiplying both sides by the unperturbed density, and then using the fact that formula_34, tells us that
formula_35
Similarly, we saw that formula_36. Thus we can multiply the above equation appropriately and see that
formula_37
Thus, the velocity potential, pressure, and density all obey the wave equation. Moreover, we only need to solve one such equation to determine all other three. In particular, we have
formula_38
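As an illustration (not part of the original derivation), the one-dimensional version of this wave equation can be integrated with a simple finite-difference scheme; the pressure perturbation is then recovered from the velocity potential as above. All grid and material parameters below are assumed values, and the zero-normal-velocity condition at fixed surfaces is imposed at the two ends.

```python
# Minimal 1-D finite-difference sketch of (1/c^2) d^2(phi)/dt^2 = d^2(phi)/dx^2.
import numpy as np

c, rho0 = 340.0, 1.2            # assumed speed of sound (m/s) and rest density
nx, dx = 400, 0.01
dt = 0.5 * dx / c               # time step chosen well inside the stability limit

x = np.arange(nx) * dx
phi_old = np.exp(-((x - 2.0) / 0.1) ** 2)   # initial Gaussian pulse in phi
phi = phi_old.copy()                        # zero initial time derivative

for _ in range(500):
    phi_new = np.empty_like(phi)
    phi_new[1:-1] = (2.0 * phi[1:-1] - phi_old[1:-1]
                     + (c * dt / dx) ** 2 * (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]))
    phi_new[0] = phi_new[1]                 # d(phi)/dx = 0: zero normal velocity
    phi_new[-1] = phi_new[-2]
    phi_old, phi = phi, phi_new

p_prime = rho0 * (phi - phi_old) / dt       # p' = rho_0 * d(phi)/dt (backward difference)
print("peak pressure perturbation:", np.abs(p_prime).max())
```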
For a moving medium.
Again, we can derive the small-disturbance limit for sound waves in a moving medium. Again, starting with
formula_18
We can linearize these into
formula_39
For Irrotational Fluids in a Moving Medium.
Given that we saw that
formula_39
If we make the previous assumptions of the fluid being ideal and the velocity being irrotational, then we have
formula_40
Under these assumptions, our linearized sound equations become
formula_41
Importantly, since formula_16 is a constant, we have formula_42, and then the second equation tells us that
formula_43
Or just that
formula_44
Now, when we use this relation with the fact that formula_45, alongside cancelling and rearranging terms, we arrive at
formula_46
We can write this in a familiar form as
formula_47
This differential equation must be solved with the appropriate boundary conditions. Note that setting formula_48 returns us the wave equation. Regardless, upon solving this equation for a moving medium, we then have
formula_49 | [
{
"math_id": 0,
"text": "\n \\begin{align}\n \\frac{\\partial \\rho'}{\\partial t} +\\rho_0\\nabla\\cdot \\mathbf{v} + \\nabla\\cdot(\\rho'\\mathbf{v}) & = 0 \\qquad \\text{(Conservation of Mass)} \\\\\n (\\rho_0+\\rho')\\frac{\\partial \\mathbf{v}}{\\partial t} + (\\rho_0+\\rho')(\\mathbf{v}\\cdot\\nabla)\\mathbf{v} + \\nabla p' & = 0 \\qquad \\text{(Equation of Motion)}\n \\end{align}\n"
},
{
"math_id": 1,
"text": "\n \\begin{align}\n \\frac{\\partial \\rho'}{\\partial t} +\\rho_0\\nabla\\cdot \\mathbf{v} & = 0 \\\\\n \\frac{\\partial \\mathbf{v}}{\\partial t} + \\frac{1}{\\rho_0}\\nabla p'& = 0\n \\end{align}\n"
},
{
"math_id": 2,
"text": "\\mathbf{v}(\\mathbf{x},t)"
},
{
"math_id": 3,
"text": "p_0"
},
{
"math_id": 4,
"text": "p'(\\mathbf{x},t)"
},
{
"math_id": 5,
"text": "\\rho_0"
},
{
"math_id": 6,
"text": "\\rho'(\\mathbf{x}, t)"
},
{
"math_id": 7,
"text": "\\nabla\\times \\mathbf{v} = 0"
},
{
"math_id": 8,
"text": "\n \\frac{1}{c^2}\\frac{\\partial^2 \\phi}{\\partial t^2} - \\nabla^2\\phi = 0\n "
},
{
"math_id": 9,
"text": "\n\\begin{align}\n \\mathbf{v} & = -\\nabla \\phi \\\\\nc^2 & = (\\frac{\\partial p}{\\partial \\rho})_s\\\\\np' & = \\rho_0\\frac{\\partial \\phi}{\\partial t}\\\\\n\\rho' & = \\frac{\\rho_0}{c^2}\\frac{\\partial \\phi}{\\partial t}\n\\end{align}\n "
},
{
"math_id": 10,
"text": "\n \\begin{align}\n \\frac{\\partial \\rho}{\\partial t} +\\nabla\\cdot \\rho\\mathbf{v} & = 0 \\\\\n \\rho\\frac{\\partial \\mathbf{v}}{\\partial t} + \\rho(\\mathbf{v}\\cdot\\nabla)\\mathbf{v} + \\nabla p & = 0\n \\end{align}\n"
},
{
"math_id": 11,
"text": "\n \\begin{align}\n \\rho & = \\rho_0+\\rho' \\\\\n p & = p_0 + p'\n \\end{align}\n"
},
{
"math_id": 12,
"text": "\n \\begin{align}\n \\frac{\\partial}{\\partial t}(\\rho_0+\\rho') +\\nabla\\cdot (\\rho_0+\\rho')\\mathbf{v} & = 0 \\\\\n (\\rho_0+\\rho')\\frac{\\partial \\mathbf{v}}{\\partial t} + (\\rho_0+\\rho')(\\mathbf{v}\\cdot\\nabla)\\mathbf{v} + \\nabla (p_0+p') & = 0\n \\end{align}\n"
},
{
"math_id": 13,
"text": "\n \\begin{align}\n \\frac{\\partial \\rho'}{\\partial t} +\\rho_0\\nabla\\cdot\\mathbf{v}+\\nabla\\cdot \\rho'\\mathbf{v} & = 0 \\\\\n (\\rho_0+\\rho')\\frac{\\partial \\mathbf{v}}{\\partial t} + (\\rho_0+\\rho')(\\mathbf{v}\\cdot\\nabla)\\mathbf{v} + \\nabla p' & = 0\n \\end{align}\n"
},
{
"math_id": 14,
"text": "\n \\begin{align}\n \\frac{\\partial \\rho'}{\\partial t} +\\rho_0\\nabla\\cdot\\mathbf{w}+\\nabla\\cdot \\rho'\\mathbf{w} & = 0 \\\\\n (\\rho_0+\\rho')\\frac{\\partial \\mathbf{w}}{\\partial t} + (\\rho_0+\\rho')(\\mathbf{w}\\cdot\\nabla)\\mathbf{w} + \\nabla p' & = 0\n \\end{align}\n"
},
{
"math_id": 15,
"text": "\\mathbf{w} = \\mathbf{u} + \\mathbf{v}"
},
{
"math_id": 16,
"text": "\\mathbf{u}"
},
{
"math_id": 17,
"text": "\\mathbf{v}"
},
{
"math_id": 18,
"text": "\n \\begin{align}\n \\frac{\\partial \\rho'}{\\partial t} +\\rho_0\\nabla\\cdot\\mathbf{v}+\\mathbf{u}\\cdot\\nabla\\rho' + \\nabla\\cdot \\rho'\\mathbf{v} & = 0 \\\\\n (\\rho_0+\\rho')\\frac{\\partial \\mathbf{v}}{\\partial t} + (\\rho_0+\\rho')(\\mathbf{u}\\cdot\\nabla)\\mathbf{v} + (\\rho_0+\\rho')(\\mathbf{v}\\cdot\\nabla)\\mathbf{v} + \\nabla p' & = 0\n \\end{align}\n"
},
{
"math_id": 19,
"text": "\\mathbf{u} = 0"
},
{
"math_id": 20,
"text": "\\mathbf{v},\\rho',p'"
},
{
"math_id": 21,
"text": "\\rho'\\mathbf{v}"
},
{
"math_id": 22,
"text": "\n \\begin{align}\n \\frac{\\partial \\rho'}{\\partial t} +\\rho_0\\nabla\\cdot \\mathbf{v} & = 0 \\\\\n \\frac{\\partial \\mathbf{v}}{\\partial t} + \\frac{1}{\\rho_0}\\nabla p' & = 0\n \\end{align}\n"
},
{
"math_id": 23,
"text": "\n p' = \\left(\\frac{\\partial p}{\\partial \\rho_{0}}\\right)_{s}\\rho'\n"
},
{
"math_id": 24,
"text": "\n \\begin{align}\n \\frac{\\partial p'}{\\partial t} +\\rho_{0}\\left(\\frac{\\partial p}{\\partial \\rho_0}\\right)_{s}\\nabla\\cdot \\mathbf{v} & = 0 \\\\\n \\frac{\\partial \\mathbf{v}}{\\partial t} + \\frac{1}{\\rho_0}\\nabla p' & = 0\n \\end{align}\n"
},
{
"math_id": 25,
"text": "\nc \\equiv \\sqrt{\\left(\\frac{\\partial p}{\\partial \\rho_{0}}\\right)_{s}}\n"
},
{
"math_id": 26,
"text": "\n \\begin{align}\n \\frac{\\partial p'}{\\partial t} +\\rho_0c^2\\nabla\\cdot \\mathbf{v} & = 0 \\\\\n \\frac{\\partial \\mathbf{v}}{\\partial t} + \\frac{1}{\\rho_0}\\nabla p' & = 0\n \\end{align}\n"
},
{
"math_id": 27,
"text": "\\nabla\\times\\mathbf{v} = 0"
},
{
"math_id": 28,
"text": "\\mathbf{v} = -\\nabla\\phi"
},
{
"math_id": 29,
"text": "\n \\begin{align}\n \\frac{\\partial p'}{\\partial t} -\\rho_0c^2\\nabla^2\\phi & = 0 \\\\\n -\\nabla\\frac{\\partial\\phi}{\\partial t} + \\frac{1}{\\rho_0}\\nabla p' & = 0\n \\end{align}\n"
},
{
"math_id": 30,
"text": "\n p' = \\rho_0 \\frac{\\partial \\phi}{\\partial t}\n"
},
{
"math_id": 31,
"text": "\n \\rho_0\\frac{\\partial^2 \\phi}{\\partial t} -\\rho_0c^2\\nabla^2\\phi = 0\n"
},
{
"math_id": 32,
"text": "\n \\frac{1}{c^2}\\frac{\\partial^2 \\phi}{\\partial t^2} -\\nabla^2\\phi = 0\n"
},
{
"math_id": 33,
"text": "\\phi"
},
{
"math_id": 34,
"text": "p' = \\rho_0 \\frac{\\partial \\phi}{\\partial t}"
},
{
"math_id": 35,
"text": "\n \\frac{1}{c^2}\\frac{\\partial^2 p'}{\\partial t^2} -\\nabla^2p' = 0\n"
},
{
"math_id": 36,
"text": "p' = \\left(\\frac{\\partial p}{\\partial \\rho_{0}}\\right)_{s}\\rho' = c^{2}\\rho'"
},
{
"math_id": 37,
"text": "\n \\frac{1}{c^2}\\frac{\\partial^2 \\rho'}{\\partial t^2} -\\nabla^2\\rho' = 0\n"
},
{
"math_id": 38,
"text": "\n \\begin{align}\n \\mathbf{v} & = -\\nabla \\phi \\\\\n p' & = \\rho_0 \\frac{\\partial \\phi}{\\partial t}\\\\\n\\rho' & = \\frac{\\rho_0}{c^2}\\frac{\\partial\\phi}{\\partial t}\n \\end{align}\n"
},
{
"math_id": 39,
"text": "\n \\begin{align}\n \\frac{\\partial \\rho'}{\\partial t} +\\rho_0\\nabla\\cdot\\mathbf{v}+\\mathbf{u}\\cdot\\nabla\\rho' & = 0 \\\\\n \\frac{\\partial \\mathbf{v}}{\\partial t} + (\\mathbf{u}\\cdot\\nabla)\\mathbf{v} + \\frac{1}{\\rho_0}\\nabla p' & = 0\n \\end{align}\n"
},
{
"math_id": 40,
"text": "\n \\begin{align}\n p' & = \\left(\\frac{\\partial p}{\\partial \\rho_{0}}\\right)_{s}\\rho' = c^{2}\\rho' \\\\\n \\mathbf{v} & = -\\nabla\\phi\n \\end{align}\n"
},
{
"math_id": 41,
"text": "\n \\begin{align}\n \\frac{1}{c^2}\\frac{\\partial p'}{\\partial t} -\\rho_0\\nabla^2\\phi+\\frac{1}{c^2}\\mathbf{u}\\cdot\\nabla p' & = 0 \\\\\n -\\frac{\\partial}{\\partial t}(\\nabla\\phi) - (\\mathbf{u}\\cdot\\nabla)[\\nabla\\phi] + \\frac{1}{\\rho_0}\\nabla p' & = 0\n \\end{align}\n"
},
{
"math_id": 42,
"text": "(\\mathbf{u}\\cdot\\nabla)[\\nabla\\phi] = \\nabla[(\\mathbf{u}\\cdot\\nabla)\\phi]"
},
{
"math_id": 43,
"text": "\n \\frac{1}{\\rho_0} \\nabla p' = \\nabla\\left[\\frac{\\partial\\phi}{\\partial t} + (\\mathbf{u}\\cdot\\nabla)\\phi\\right]\n"
},
{
"math_id": 44,
"text": "\n p' = \\rho_{0}\\left[\\frac{\\partial\\phi}{\\partial t} + (\\mathbf{u}\\cdot\\nabla)\\phi\\right]\n"
},
{
"math_id": 45,
"text": "\\frac{1}{c^2}\\frac{\\partial p'}{\\partial t} -\\rho_0\\nabla^2\\phi+\\frac{1}{c^2}\\mathbf{u}\\cdot\\nabla p' = 0"
},
{
"math_id": 46,
"text": "\n \\frac{1}{c^2}\\frac{\\partial^2 \\phi}{\\partial t^2} - \\nabla^2\\phi + \\frac{1}{c^2}\\frac{\\partial}{\\partial t}[(\\mathbf{u}\\cdot\\nabla)\\phi] + \\frac{1}{c^2}\\frac{\\partial}{\\partial t}(\\mathbf{u}\\cdot\\nabla\\phi) + \\frac{1}{c^2}\\mathbf{u}\\cdot\\nabla[(\\mathbf{u}\\cdot\\nabla)\\phi] = 0\n"
},
{
"math_id": 47,
"text": "\n\\left[\\frac{1}{c^2}\\left(\\frac{\\partial}{\\partial t} + \\mathbf{u}\\cdot\\nabla\\right)^{2} - \\nabla^{2}\\right]\\phi = 0\n"
},
{
"math_id": 48,
"text": "\\mathbf{u}=0"
},
{
"math_id": 49,
"text": "\n\\begin{align}\n \\mathbf{v} & = -\\nabla \\phi \\\\\np' & = \\rho_{0}\\left(\\frac{\\partial}{\\partial t} + \\mathbf{u}\\cdot\\nabla\\right)\\phi\\\\\n\\rho' & = \\frac{\\rho_{0}}{c^{2}}\\left(\\frac{\\partial}{\\partial t} + \\mathbf{u}\\cdot\\nabla\\right)\\phi\n\\end{align}\n "
}
] | https://en.wikipedia.org/wiki?curid=1234 |
1234125 | Kleene fixed-point theorem | Theorem in order theory and lattice theory
In the mathematical areas of order and lattice theory, the Kleene fixed-point theorem, named after American mathematician Stephen Cole Kleene, states the following:
Kleene Fixed-Point Theorem. Suppose formula_0 is a directed-complete partial order (dcpo) with a least element, and let formula_1 be a Scott-continuous (and therefore monotone) function. Then formula_2 has a least fixed point, which is the supremum of the ascending Kleene chain of formula_3
The ascending Kleene chain of "f" is the chain
formula_4
obtained by iterating "f" on the least element ⊥ of "L". Expressed in a formula, the theorem states that
formula_5
where formula_6 denotes the least fixed point.
Although Tarski's fixed point theorem does not consider how fixed points can be computed by iterating "f" from some seed (also, it pertains to monotone functions on complete lattices), this result is often attributed to Alfred Tarski, who proved it for additive functions. Moreover, the Kleene fixed-point theorem can be extended to monotone functions using transfinite iterations.
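As a small illustration, consider the finite powerset lattice ordered by inclusion: the bottom element is the empty set, and a map such as "add the seed vertices together with the successors of the current set" is monotone and Scott-continuous, so iterating it from bottom reaches its least fixed point, here the set of vertices reachable from the seeds. The graph and seed set in the sketch below are invented for the example.

```python
# Iterating a Scott-continuous map from the bottom element of a finite lattice.
def least_fixed_point(f, bottom):
    current = bottom
    while True:
        nxt = f(current)
        if nxt == current:        # the supremum of the Kleene chain is reached
            return current
        current = nxt

graph = {1: {2}, 2: {3}, 3: {3}, 4: {1}}
seeds = frozenset({1})
f = lambda s: frozenset(seeds | {w for v in s for w in graph.get(v, ())})
print(least_fixed_point(f, frozenset()))   # frozenset({1, 2, 3})
```

On a finite lattice the ascending Kleene chain stabilizes after finitely many steps, so the loop above terminates; in a general dcpo the least fixed point is the supremum of the chain but need not equal any finite iterate.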
Proof.
We first have to show that the ascending Kleene chain of formula_2 exists in formula_7. To show that, we prove the following:
Lemma. If formula_7 is a dcpo with a least element, and formula_1 is Scott-continuous, then formula_8
Proof. We use induction:
* Assume n = 0. Then formula_9 since formula_10 is the least element.
* Assume n > 0. Then we have to show that formula_11. By rearranging we get formula_12. By inductive assumption, we know that formula_13 holds, and because f is monotone (property of Scott-continuous functions), the result holds as well.
As a corollary of the Lemma we have the following directed ω-chain:
formula_14
From the definition of a dcpo it follows that formula_15 has a supremum, call it formula_16 What remains now is to show that formula_17 is the least fixed-point.
First, we show that formula_17 is a fixed point, i.e. that formula_18. Because formula_2 is Scott-continuous, formula_19, that is formula_20. Also, since formula_21 and because formula_10 has no influence in determining the supremum we have: formula_22. It follows that formula_18, making formula_17 a fixed-point of formula_2.
The proof that formula_17 is in fact the "least" fixed point can be done by showing that any element in formula_15 is smaller than any fixed-point of formula_2 (because by property of supremum, if all elements of a set formula_23 are smaller than an element of formula_7 then also formula_24 is smaller than that same element of formula_7). This is done by induction: Assume formula_25 is some fixed-point of formula_2. We now prove by induction over formula_26 that formula_27. The base of the induction formula_28 obviously holds: formula_29 since formula_10 is the least element of formula_7. As the induction hypothesis, we may assume that formula_30. We now do the induction step: From the induction hypothesis and the monotonicity of formula_2 (again, implied by the Scott-continuity of formula_2), we may conclude the following: formula_31 Now, by the assumption that formula_25 is a fixed-point of formula_32 we know that formula_33 and from that we get formula_34
| [
{
"math_id": 0,
"text": "(L, \\sqsubseteq)"
},
{
"math_id": 1,
"text": "f: L \\to L"
},
{
"math_id": 2,
"text": "f"
},
{
"math_id": 3,
"text": "f."
},
{
"math_id": 4,
"text": "\\bot \\sqsubseteq f(\\bot) \\sqsubseteq f(f(\\bot)) \\sqsubseteq \\cdots \\sqsubseteq f^n(\\bot) \\sqsubseteq \\cdots"
},
{
"math_id": 5,
"text": "\\textrm{lfp}(f) = \\sup \\left(\\left\\{f^n(\\bot) \\mid n\\in\\mathbb{N}\\right\\}\\right)"
},
{
"math_id": 6,
"text": "\\textrm{lfp}"
},
{
"math_id": 7,
"text": "L"
},
{
"math_id": 8,
"text": "f^n(\\bot) \\sqsubseteq f^{n+1}(\\bot), n \\in \\mathbb{N}_0"
},
{
"math_id": 9,
"text": "f^0(\\bot) = \\bot \\sqsubseteq f^1(\\bot),"
},
{
"math_id": 10,
"text": "\\bot"
},
{
"math_id": 11,
"text": "f^n(\\bot) \\sqsubseteq f^{n+1}(\\bot)"
},
{
"math_id": 12,
"text": "f(f^{n-1}(\\bot)) \\sqsubseteq f(f^n(\\bot))"
},
{
"math_id": 13,
"text": "f^{n-1}(\\bot) \\sqsubseteq f^n(\\bot)"
},
{
"math_id": 14,
"text": "\\mathbb{M} = \\{ \\bot, f(\\bot), f(f(\\bot)), \\ldots\\}."
},
{
"math_id": 15,
"text": "\\mathbb{M}"
},
{
"math_id": 16,
"text": "m."
},
{
"math_id": 17,
"text": "m"
},
{
"math_id": 18,
"text": "f(m) = m"
},
{
"math_id": 19,
"text": "f(\\sup(\\mathbb{M})) = \\sup(f(\\mathbb{M}))"
},
{
"math_id": 20,
"text": "f(m) = \\sup(f(\\mathbb{M}))"
},
{
"math_id": 21,
"text": "\\mathbb{M} = f(\\mathbb{M})\\cup\\{\\bot\\}"
},
{
"math_id": 22,
"text": "\\sup(f(\\mathbb{M})) = \\sup(\\mathbb{M})"
},
{
"math_id": 23,
"text": "D \\subseteq L"
},
{
"math_id": 24,
"text": "\\sup(D)"
},
{
"math_id": 25,
"text": "k"
},
{
"math_id": 26,
"text": "i"
},
{
"math_id": 27,
"text": "\\forall i \\in \\mathbb{N}: f^i(\\bot) \\sqsubseteq k"
},
{
"math_id": 28,
"text": "(i = 0)"
},
{
"math_id": 29,
"text": "f^0(\\bot) = \\bot \\sqsubseteq k,"
},
{
"math_id": 30,
"text": "f^i(\\bot) \\sqsubseteq k"
},
{
"math_id": 31,
"text": "f^i(\\bot) \\sqsubseteq k ~\\implies~ f^{i+1}(\\bot) \\sqsubseteq f(k)."
},
{
"math_id": 32,
"text": "f,"
},
{
"math_id": 33,
"text": "f(k) = k,"
},
{
"math_id": 34,
"text": "f^{i+1}(\\bot) \\sqsubseteq k."
}
] | https://en.wikipedia.org/wiki?curid=1234125 |
12341697 | Speeded up robust features | Robust local feature detector
In computer vision, speeded up robust features (SURF) is a patented local feature detector and descriptor. It can be used for tasks such as object recognition, image registration, classification, or 3D reconstruction. It is partly inspired by the scale-invariant feature transform (SIFT) descriptor. The standard version of SURF is several times faster than SIFT and claimed by its authors to be more robust against different image transformations than SIFT.
To detect interest points, SURF uses an integer approximation of the determinant of Hessian blob detector, which can be computed with 3 integer operations using a precomputed integral image. Its feature descriptor is based on the sum of the Haar wavelet response around the point of interest. These can also be computed with the aid of the integral image.
SURF descriptors have been used to locate and recognize objects, people or faces, to reconstruct 3D scenes, to track objects and to extract points of interest.
SURF was first published by Herbert Bay, Tinne Tuytelaars, and Luc Van Gool, and presented at the 2006 European Conference on Computer Vision. An application of the algorithm is patented in the United States. An "upright" version of SURF (called U-SURF) is not invariant to image rotation and therefore faster to compute and better suited for application where the camera remains more or less horizontal.
The image is transformed into coordinates, using the multi-resolution pyramid technique, to copy the original image with a Pyramidal Gaussian or Laplacian Pyramid shape and obtain an image of the same size but with reduced bandwidth. This achieves a special blurring effect on the original image, called scale space, and ensures that the points of interest are scale invariant.
<templatestyles src="Template:TOC limit/styles.css" />
Algorithm and features.
The SURF algorithm is based on the same principles and steps as SIFT; but details in each step are different. The algorithm has three main parts: interest point detection, local neighborhood description, and matching.
Detection.
SURF uses square-shaped filters as an approximation of Gaussian smoothing. (The SIFT approach uses cascaded filters to detect scale-invariant characteristic points, where the difference of Gaussians (DoG) is calculated on rescaled images progressively.) Filtering the image with a square filter is much faster if the integral image is used:
formula_0
The sum of the original image within a rectangle can be evaluated quickly using the integral image, requiring evaluations at the rectangle's four corners.
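A short illustrative sketch of the integral image and of the constant-time rectangle sum follows; the array contents and the particular rectangle are arbitrary choices for the example.

```python
# Integral image S(x, y) and a rectangle sum using only its four corner values.
import numpy as np

def integral_image(img):
    """S(x, y) = sum of img[0..x, 0..y] (inclusive)."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(S, x0, y0, x1, y1):
    """Sum of the original image over rows x0..x1 and columns y0..y1 (inclusive)."""
    total = S[x1, y1]
    if x0 > 0:
        total -= S[x0 - 1, y1]
    if y0 > 0:
        total -= S[x1, y0 - 1]
    if x0 > 0 and y0 > 0:
        total += S[x0 - 1, y0 - 1]
    return total

img = np.arange(36, dtype=float).reshape(6, 6)
S = integral_image(img)
assert box_sum(S, 1, 1, 3, 4) == img[1:4, 1:5].sum()
```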
SURF uses a blob detector based on the Hessian matrix to find points of interest. The determinant of the Hessian matrix is used as a measure of local change around the point and points are chosen where this determinant is maximal. In contrast to the Hessian-Laplacian detector by Mikolajczyk and Schmid, SURF also uses the determinant of the Hessian for selecting the scale, as is also done by Lindeberg. Given a point p=(x, y) in an image I, the Hessian matrix H(p, σ) at point p and scale σ, is:
formula_1
where formula_2 etc. is the convolution of the second-order derivative of the Gaussian with the image formula_3 at the point formula_4.
The box filter of size 9×9 is an approximation of a Gaussian with σ=1.2 and represents the lowest level (highest spatial resolution) for blob-response maps.
Scale-space representation and location of points of interest.
Interest points can be found at different scales, partly because the search for correspondences often requires comparison images where they are seen at different scales. In other feature detection algorithms, the scale space is usually realized as an image pyramid. Images are repeatedly smoothed with a Gaussian filter, then they are subsampled to get the next higher level of the pyramid. Therefore, several levels ("floors" or "stairs") of the pyramid, with various sizes of the masks, are calculated:
formula_5
The scale space is divided into a number of octaves, where an octave refers to a series of response maps of covering a doubling of scale. In SURF, the lowest level of the scale space is obtained from the output of the 9×9 filters.
Hence, unlike previous methods, scale spaces in SURF are implemented by applying box filters of different sizes. Accordingly, the scale space is analyzed by up-scaling the filter size rather than iteratively reducing the image size. The output of the above 9×9 filter is considered as the initial scale layer at scale "s" =1.2 (corresponding to Gaussian derivatives with "σ" = 1.2). The following layers are obtained by filtering the image with gradually bigger masks, taking into account the discrete nature of integral images and the specific filter structure. This results in filters of size 9×9, 15×15, 21×21, 27×27... Non-maximum suppression in a 3×3×3 neighborhood is applied to localize interest points in the image and over scales. The maxima of the determinant of the Hessian matrix are then interpolated in scale and image space with the method proposed by Brown, et al. Scale space interpolation is especially important in this case, as the difference in scale between the first layers of every octave is relatively large.
Descriptor.
The goal of a descriptor is to provide a unique and robust description of an image feature, e.g., by describing the intensity distribution of the pixels within the neighbourhood of the point of interest. Most descriptors are thus computed in a local manner, hence a description is obtained for every point of interest identified previously.
The dimensionality of the descriptor has direct impact on both its computational complexity and point-matching robustness/accuracy. A short descriptor may be more robust against appearance variations, but may not offer sufficient discrimination and thus give too many false positives.
The first step consists of fixing a reproducible orientation based on information from a circular region around the interest point. Then we construct a square region aligned to the selected orientation, and extract the SURF descriptor from it.
Orientation assignment.
In order to achieve rotational invariance, the orientation of the point of interest needs to be found. The Haar wavelet responses in both x- and y-directions within a circular neighbourhood of radius formula_6 around the point of interest are computed, where formula_7 is the scale at which the point of interest was detected. The obtained responses are weighted by a Gaussian function centered at the point of interest, then plotted as points in a two-dimensional space, with the horizontal response in the abscissa and the vertical response in the ordinate. The dominant orientation is estimated by calculating the sum of all responses within a sliding orientation window of size π/3. The horizontal and vertical responses within the window are summed. The two summed responses then yield a local orientation vector. The longest such vector overall defines the orientation of the point of interest. The size of the sliding window is a parameter that has to be chosen carefully to achieve a desired balance between robustness and angular resolution.
Descriptor based on the sum of Haar wavelet responses.
To describe the region around the point, a square region is extracted, centered on the interest point and oriented along the orientation as selected above. The size of this window is 20s.
The interest region is split into smaller 4x4 square sub-regions, and for each one, the Haar wavelet responses are extracted at 5x5 regularly spaced sample points. The responses are weighted with a Gaussian (to offer more robustness for deformations, noise and translation).
Matching.
By comparing the descriptors obtained from different images, matching pairs can be found.
| [
{
"math_id": 0,
"text": "S(x, y)=\\sum_{i=0}^x \\sum_{j=0}^y I(i,j)"
},
{
"math_id": 1,
"text": "H(p,\\sigma)=\\begin{pmatrix} L_{xx}(p,\\sigma) & L_{xy}(p,\\sigma) \\\\ L_{xy}(p,\\sigma) & L_{yy}(p,\\sigma) \\end{pmatrix}"
},
{
"math_id": 2,
"text": "L_{xx}(p,\\sigma)"
},
{
"math_id": 3,
"text": "I(x,y)"
},
{
"math_id": 4,
"text": " p "
},
{
"math_id": 5,
"text": "\\sigma_\\text{approx} = \\text{current filter size} \\times \\left( \\frac{\\text{base filter scale} }{\\text{base filter size}} \\right)"
},
{
"math_id": 6,
"text": "6s"
},
{
"math_id": 7,
"text": "s"
}
] | https://en.wikipedia.org/wiki?curid=12341697 |
1234251 | PCF theory | PCF theory is the name of a mathematical theory, introduced by Saharon Shelah (1978), that deals with the cofinality of the ultraproducts of ordered sets. It gives strong upper bounds on the cardinalities of power sets of singular cardinals, and has many more applications as well. The abbreviation "PCF" stands for "possible cofinalities".
Main definitions.
If "A" is an infinite set of regular cardinals, "D" is an ultrafilter on "A", then
we let formula_0 denote the cofinality of the ordered set of functions
formula_1 where the ordering is defined as follows:
formula_2 if formula_3.
pcf("A") is the set of cofinalities that occur if we consider all ultrafilters on "A", that is,
formula_4
Main results.
Obviously, pcf("A") consists of regular cardinals. Considering ultrafilters concentrated on elements of "A", we get that
formula_5. Shelah proved that if formula_6, then pcf("A") has a largest element, and there are subsets formula_7 of "A" such that for each ultrafilter "D" on "A", formula_0 is the least element θ of pcf("A") such that formula_8. Consequently, formula_9.
Shelah also proved that if "A" is an interval of regular cardinals (i.e., "A" is the set of all regular cardinals between two cardinals), then pcf("A") is also an interval of regular cardinals and |pcf("A")|<|"A"|+4, where |"A"|+4 denotes the fourth successor cardinal of |"A"|.
This implies the famous inequality
formula_10
assuming that ℵω is strong limit.
If λ is an infinite cardinal, then "J"<λ is the following ideal on "A". "B"∈"J"<λ if formula_11 holds for every ultrafilter "D" with "B"∈"D". Then "J"<λ is the ideal generated by the sets formula_12. There exist "scales", i.e., for every λ∈pcf("A") there is a sequence of length λ of elements of formula_13 which is both increasing and cofinal mod "J"<λ. This implies that the cofinality of formula_1 under pointwise dominance is max(pcf("A")).
Another consequence is that if λ is singular and no regular cardinal less than λ is Jónsson, then also λ+ is not Jónsson. In particular, there is a Jónsson algebra on ℵω+1, which settles an old conjecture.
Unsolved problems.
The most notorious conjecture in pcf theory states that |pcf("A")|=|"A"| holds for every set "A" of regular cardinals with |"A"|<min("A"). This would imply that if ℵω is strong limit, then the sharp bound
formula_14
holds. The analogous bound
formula_15
follows from Chang's conjecture (Magidor) or even from the nonexistence of a Kurepa tree (Shelah).
A weaker, still unsolved conjecture states that if |"A"|<min("A"), then pcf("A") has no inaccessible limit point. This is equivalent to the statement that pcf(pcf("A"))=pcf("A").
Applications.
The theory has found a great deal of applications, besides cardinal arithmetic.
The original survey by Shelah, "Cardinal arithmetic for skeptics", includes the following topics: almost free abelian groups, partition problems, failure of preservation of chain conditions in Boolean algebras under products, existence of Jónsson algebras, existence of entangled linear orders, equivalently narrow Boolean algebras, and the existence of nonisomorphic models equivalent in certain infinitary logics.
In the meantime, many further applications have been found in Set Theory, Model Theory, Algebra and Topology. | [
{
"math_id": 0,
"text": "\\operatorname{cf}\\left(\\prod A/D\\right)"
},
{
"math_id": 1,
"text": "\\prod A"
},
{
"math_id": 2,
"text": "f<g"
},
{
"math_id": 3,
"text": "\\{x\\in A:f(x)<g(x)\\}\\in D"
},
{
"math_id": 4,
"text": "\\operatorname{pcf}(A)=\\left\\{\\operatorname{cf}\\left(\\prod A/D\\right):D\\,\\,\\mbox{is an ultrafilter on}\\,\\,A\\right\\}."
},
{
"math_id": 5,
"text": "A\\subseteq \\operatorname{pcf}(A)"
},
{
"math_id": 6,
"text": "|A|<\\min(A)"
},
{
"math_id": 7,
"text": "\\{B_\\theta:\\theta\\in \\operatorname{pcf}(A)\\}"
},
{
"math_id": 8,
"text": "B_\\theta\\in D"
},
{
"math_id": 9,
"text": "\\left|\\operatorname{pcf}(A)\\right|\\leq2^{|A|}"
},
{
"math_id": 10,
"text": "2^{\\aleph_\\omega}<\\aleph_{\\omega_4}"
},
{
"math_id": 11,
"text": "\\operatorname{cf}\\left(\\prod A/D\\right)<\\lambda"
},
{
"math_id": 12,
"text": "\\{B_\\theta:\\theta\\in \\operatorname{pcf}(A),\\theta<\\lambda\\}"
},
{
"math_id": 13,
"text": "\\prod B_\\lambda"
},
{
"math_id": 14,
"text": "2^{\\aleph_\\omega}<\\aleph_{\\omega_1}"
},
{
"math_id": 15,
"text": "2^{\\aleph_{\\omega_1}}<\\aleph_{\\omega_2}"
}
] | https://en.wikipedia.org/wiki?curid=1234251 |
1234272 | Derek J. de Solla Price | Physicist and science historian (1922–1983)
Derek John de Solla Price (22 January 1922 – 3 September 1983) was a British physicist, historian of science, and information scientist. He was known for his investigation of the Antikythera mechanism, an ancient Greek planetary computer, and for quantitative studies on scientific publications, which led to his being described as the "Herald of scientometrics".
Biography.
Price was born in Leyton, England, to Philip Price, a tailor, and Fanny de Solla, a singer. He began work in 1938 as an assistant in a physics laboratory at the South West Essex Technical College, before studying Physics and Mathematics at the University of London, where he received a Bachelor of Science in 1942. He then worked as an assistant to Harry Lowery carrying out research on hot and molten metals, and working towards a London external Ph.D. in experimental physics, which he obtained in 1946. This work led to several research papers and to a patent for an emissive-correcting optical pyrometer. He then went to the USA on a Commonwealth Fund fellowship, working in Pittsburgh and Princeton, returning to England in 1947. He was married that year to Ellen Hjorth in Copenhagen.
In 1948 Price took a 3-year position as a teacher of applied mathematics at Raffles College, Singapore, which was to become part of the National University of Singapore. There he met C. Northcote Parkinson, the naval historian, who stimulated a love of history in Price that would change the direction of his career. While in Singapore, he formulated his theory on the exponential growth of science. He was looking after the university's complete run of the "Philosophical Transactions of the Royal Society", while Raffles College had its library built. He started reading these, and as he placed the volumes in chronological order he noticed that their yearly height increased exponentially with time. This led to a presentation at the Sixth International Congress of the History of Science in Amsterdam, in 1950.
Returning to England, Price decided to make a career in the history of science, and enrolled for a second Ph.D. at the University of Cambridge, supported by an ICI fellowship. He had initially intended to work on a survey of scientific instruments, but during his studies he discovered "The Equatorie of the Planetis", a Peterhouse manuscript in Cambridge University Library. The manuscript, written in Middle English, describes an Equatorium, an astronomical calculating instrument, and became the basis of the thesis for his PhD, which he obtained in 1954, and also for a book, published the following year. He believed the work to be by Geoffrey Chaucer, who had written "A Treatise on the Astrolabe", but it is now attributed to a St Albans monk called John Westwyk.
Price received a Nuffield Foundation award for research in the History of science, which enabled him to work on scientific instruments during 1955–1956. He first prepared a catalogue of the instrument collection of the British Museum, and then a catalogue of all the ancient astrolabes that he was able to locate.
While working on his Ph.D. in Cambridge, Price met Joseph Needham, the historian of Chinese science. As a result of his work on the Equatorium Price was invited to participate in a project on medieval Chinese astronomical clocks. This led to the book "Heavenly Clockwork" by Needham, Wang Ling and Price, which was published in 1960.
Another interest in ancient technology concerned the Antikythera mechanism. This machine had been retrieved from a wreck off the island of Antikythera in 1900, and its function had remained unknown. Price started working on this in the 1950s, and continued on and off for twenty years using various techniques including gamma radiography. He published two papers on the mechanism, in 1959 and 1974, showing that it was a planetary computer, dating from about 80 BCE. Also, with Joseph Noble, he studied the machinery of the Tower of the Winds in Athens, and showed it to be water-driven clockwork, showing times and seasons.
Around 1950, Price adopted his mother's Sephardic name, "de Solla", as a middle name. He was a "British Atheist ... from a rather well-known Sephardic Jewish family", and although his Danish wife, Ellen, had been christened as a Lutheran, he did not, according to their son Mark, regard their marriage as "mixed", because they were both atheists.
After obtaining his second doctorate, Price found advancement difficult in England. One colleague alleged that Price, who came from a lower-class background, was "not socially house-trained," and he suspected that he was turned down for university positions for personal reasons. Price decided to move to the United States. In 1957 he became a consultant to the Smithsonian Institution, and then a fellow at the Institute for Advanced Study in Princeton, New Jersey. At Princeton he studied ancient astronomy with Otto Neugebauer. In 1959 he joined the Department of History at Yale University initially as a one-year visitor. He would remain at Yale for the rest of his life.
Price gave a series of lectures in Yale in 1959, which formed the basis for a book, "Science since Babylon" (1961). In 1960, a Department of History of Science and Medicine was formed at Yale, largely through the efforts of John Fulton who had been Professor of the History of Medicine since 1951. Price became Professor of the History of Science, and on Fulton's death in 1960 became chairman of the department. In 1962 he became the Avalon Professor of the History of Science.
The quantitative study of science, Scientometrics, and its application to science policy, became the principal focus of Price's work from the 1960s onwards. In 1963 his best-known book "Little Science, Big Science" was published. Early in that year, he met Eugene Garfield, founder of the Science Citation Index (SCI), and formed a lasting collaboration. SCI would provide most of the data for his quantitative work, allowing studies not just of the quantity of scientific publication, but, for example, of the impact of those publications, and of the duration of that impact. In 1965, Price gave the first Science of Science Foundation lecture, entitled "The Scientific Foundations of Science Policy", given at the Royal Institution in London. He argued that as science grew exponentially it presented new challenges to policy-makers, and that they could be helped by the kind of Scientometric work he was carrying out and promoting. Clearly exponential growth cannot continue indefinitely, and the slowing of growth rates will correspond to pressing issues around allocation of resources. He also emphasised the critical importance of communication, referring to the "invisible college", a network of scientific communication that exists outside formal channels. The lecture was reviewed at length in the journal Nature.
Price died of a heart attack at the home of his oldest friend, Anthony Michaelis, in London, during a visit to attend the wedding of his niece. He was survived by his wife, Ellen, and their three children, Linda, Jeffrey, and Mark.
In 1984, Price received, posthumously, the ASIS Research Award for outstanding contributions in the field of information science.
Since 1984, the Derek de Solla Price Memorial Medal is awarded by the International Society for Scientometrics and Informetrics to scientists with outstanding contributions to the fields of quantitative studies of science.
Scientific contributions.
Price's major scientific contributions include his studies of the exponential growth of science and of scientific communication through the "invisible college", the quantitative work on scientific publications that helped found scientometrics, and his analysis of ancient scientific instruments such as the Antikythera mechanism.
| [
{
"math_id": 0,
"text": "\\sqrt{25}=5"
}
] | https://en.wikipedia.org/wiki?curid=1234272 |
12342942 | Anderson orthogonality theorem | Theorem in physics
The Anderson orthogonality theorem is a theorem in physics by the physicist P. W. Anderson.
It relates to the introduction of a magnetic impurity in a metal. When a magnetic impurity is introduced into a metal, the conduction electrons will tend to screen the potential formula_0 that the impurity creates. The N-electron ground states of the system with formula_1, which corresponds to the absence of the impurity, and with formula_2, which corresponds to the introduction of the impurity, are orthogonal in the thermodynamic limit formula_3. | [
{
"math_id": 0,
"text": "V(r)"
},
{
"math_id": 1,
"text": "V(r) = 0"
},
{
"math_id": 2,
"text": "V(r) \\neq 0"
},
{
"math_id": 3,
"text": "N \\to \\infty "
}
] | https://en.wikipedia.org/wiki?curid=12342942 |
1234368 | Fractional quantum Hall effect | Electromagnetic effect in physics
The fractional quantum Hall effect (FQHE) is a physical phenomenon in which the Hall conductance of 2-dimensional (2D) electrons shows precisely quantized plateaus at fractional values of formula_0, where "e" is the electron charge and "h" is the Planck constant. It is a property of a collective state in which electrons bind magnetic flux lines to make new quasiparticles, and excitations have a fractional elementary charge and possibly also fractional statistics. The 1998 Nobel Prize in Physics was awarded to Robert Laughlin, Horst Störmer, and Daniel Tsui "for their discovery of a new form of quantum fluid with fractionally charged excitations".
The microscopic origin of the FQHE is a major research topic in condensed matter physics.
Descriptions.
<templatestyles src="Unsolved/styles.css" />
Unsolved problem in physics:
What mechanism explains the existence of the "ν"=5/2 state in the fractional quantum Hall effect?
The fractional quantum Hall effect (FQHE) is a collective behavior in a 2D system of electrons. In particular magnetic fields, the electron gas condenses into a remarkable liquid state, which is very delicate, requiring high quality material with a low carrier concentration, and extremely low temperatures. As in the integer quantum Hall effect, the Hall resistance undergoes certain quantum Hall transitions to form a series of plateaus. Each particular value of the magnetic field corresponds to a filling factor (the ratio of electrons to magnetic flux quanta)
formula_1
where "p" and "q" are integers with no common factors. Here "q" turns out to be an odd number with the exception of two filling factors 5/2 and 7/2. The principal series of such fractions are
formula_2
and
formula_3
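These two series can be written as "n"/(2"n" + 1) and "n"/(2"n" − 1). The snippet below (not from the original text) lists their first few members together with the corresponding quantized Hall conductance (the filling factor times "e"²/"h"), using the SI values of the electron charge and the Planck constant.

```python
# Illustrative only: the two principal series of filling factors quoted above,
# nu = n/(2n + 1) and nu = n/(2n - 1), and the Hall conductance nu * e^2/h.
from fractions import Fraction

e = 1.602176634e-19   # elementary charge, C
h = 6.62607015e-34    # Planck constant, J s

series_a = [Fraction(n, 2 * n + 1) for n in range(1, 4)]   # 1/3, 2/5, 3/7
series_b = [Fraction(n, 2 * n - 1) for n in range(2, 5)]   # 2/3, 3/5, 4/7
for nu in series_a + series_b:
    print(f"nu = {nu}:  sigma_xy = {float(nu) * e**2 / h:.3e} S")
```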
Fractionally charged quasiparticles are neither bosons nor fermions and exhibit anyonic statistics. The fractional quantum Hall effect continues to be influential in theories about topological order. Certain fractional quantum Hall phases appear to have the right properties for building a topological quantum computer.
History and developments.
The FQHE was experimentally discovered in 1982 by Daniel Tsui and Horst Störmer, in experiments performed on heterostructures made out of gallium arsenide developed by Arthur Gossard.
There were several major steps in the theory of the FQHE.
Tsui, Störmer, and Robert B. Laughlin were awarded the 1998 Nobel Prize in Physics for their work.
Evidence for fractionally-charged quasiparticles.
Experiments have reported results that specifically support the understanding that there are fractionally-charged quasiparticles in an electron gas under FQHE conditions.
In 1995, the fractional charge of Laughlin quasiparticles was measured directly in a quantum antidot electrometer at Stony Brook University, New York. In 1997, two groups of physicists at the Weizmann Institute of Science in Rehovot, Israel, and at the Commissariat à l'énergie atomique laboratory near Paris, detected such quasiparticles carrying an electric current, through measuring quantum shot noise.
Both of these experiments have been confirmed with certainty.
A more recent experiment, measures the quasiparticle charge.
Impact.
The FQH effect shows the limits of Landau's symmetry breaking theory. Previously it was held that the symmetry breaking theory could explain all the important concepts and properties of forms of matter. According to this view, the only thing to be done was to apply the symmetry breaking theory to all different kinds of phases and phase transitions. From this perspective, the FQHE discovered by Tsui, Störmer, and Gossard is notable for contesting old perspectives.
The existence of FQH liquids suggests that there is much more to discover beyond the present symmetry breaking paradigm in condensed matter physics. Different FQH states all have the same symmetry and cannot be described by symmetry breaking theory. The associated fractional charge, fractional statistics, non-Abelian statistics, chiral edge states, etc. demonstrate the power and the fascination of emergence in many-body systems. Thus FQH states represent new states of matter that contain a completely new kind of order: topological order. For example, properties once deemed isotropic for all materials may be anisotropic in 2D planes. The new types of order represented by FQH states greatly enrich our understanding of quantum phases and quantum phase transitions.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "e^2/h"
},
{
"math_id": 1,
"text": "\\nu = p/q,\\ "
},
{
"math_id": 2,
"text": "{1\\over 3}, {2\\over 5}, {3\\over 7}, \\mbox{etc.,} "
},
{
"math_id": 3,
"text": "{2\\over3}, {3\\over 5}, {4\\over 7}, \\mbox{etc.} "
},
{
"math_id": 4,
"text": "1/q"
},
{
"math_id": 5,
"text": "e^*={e\\over q}"
},
{
"math_id": 6,
"text": "\\theta = {\\pi \\over q}"
},
{
"math_id": 7,
"text": " e^{i \\theta}"
},
{
"math_id": 8,
"text": "\\nu = 1/q"
},
{
"math_id": 9,
"text": "\\nu = 2/5"
},
{
"math_id": 10,
"text": "2/7"
},
{
"math_id": 11,
"text": "\\nu = 1/3"
}
] | https://en.wikipedia.org/wiki?curid=1234368 |
12347137 | Sonic logging | Sonic logging is a well logging tool that provides a formation’s interval transit time, designated as formula_0, which is a measure of how fast elastic seismic compressional and shear waves travel through the formations. Geologically, this capacity varies with many factors, including lithology and rock textures, most notably decreasing with an increasing effective porosity and increasing with an increasing effective confining stress. This means that a sonic log can be used to calculate the porosity, confining stress, or pore pressure of a formation if the seismic velocity of the rock matrix, formula_1, and pore fluid, formula_2, are known, which is very useful for hydrocarbon exploration.
Process of sonic logging.
The interval transit time is obtained by measuring the travel time from the piezoelectric transmitter to the receiver, and is normally expressed in microseconds per foot (a measure of slowness, the reciprocal of velocity). To compensate for the variations in the drilling mud thickness, there are actually two receivers, one near and one far. This is because the travel time within the drilling mud will be common to both, so the travel time within the formation is given by:
formula_3 = formula_4;
where formula_5 = travel time to far receiver; formula_6 = travel time to near receiver.
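A minimal sketch of this bookkeeping (the function name, receiver spacing, and numerical values are illustrative assumptions, not part of any particular tool specification) divides the near/far time difference by the receiver spacing to express the result as a slowness:
<syntaxhighlight lang="python">
# Illustrative only: formation slowness from the two-receiver measurement.
# The drilling-mud delay is common to both receivers, so it cancels in the difference.
def interval_transit_time(t_far_us, t_near_us, spacing_ft=1.0):
    """Slowness in microseconds per foot for a given receiver spacing in feet."""
    return (t_far_us - t_near_us) / spacing_ft

# Example: a 120 microsecond difference over 2 ft of receiver spacing is 60 us/ft,
# corresponding to a compressional velocity of about 1e6 / 60 ~ 16,700 ft/s.
dt = interval_transit_time(t_far_us=300.0, t_near_us=180.0, spacing_ft=2.0)
print(dt, 1e6 / dt)
</syntaxhighlight>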
If it is necessary to compensate for tool tilt and variations in the borehole width, then both up-down and down-up arrays can be used and an average can be calculated. Overall this gives a sonic log that can be made up of 1 or 2 pulse generators and 2 or 4 detectors, all located in a single unit called a “sonde”, which is lowered down the well.
An additional way in which the sonic log tool can be altered is increasing or decreasing the separation between the source and receivers. This gives deeper penetration and overcomes the problem of low velocity zones posed by borehole wall damage.
Cycle skipping.
The returning signal is a wavetrain and not a sharp pulse, so the detectors are only activated at a certain signal threshold. Sometimes, both detectors won’t be activated by the same peak (or trough) and the next peak (or trough) wave will activate one of them instead. This type of error is called cycle skipping and is easily identified because the time difference is equal to the time interval between successive pulse cycles.
Calculating porosity.
Many relationships between travel time and porosity have been proposed, the most commonly accepted is the Wyllie time-average equation. The equation basically holds that the total travel time recorded on the log is the sum of the time the sonic wave spends travelling the solid part of the rock, called the rock matrix and the time spent travelling through the fluids in the pores. This equation is empirical and makes no allowance for the structure of the rock matrix or the connectivity of the pore spaces so extra corrections can often be added to it. The Wyllie time-average equation is:
formula_7
where formula_8 = seismic velocity of the formation; formula_9 = seismic velocity of the pore fluid; formula_1 = seismic velocity of the rock matrix; formula_10 = porosity.
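The equation can be rearranged to solve for porosity. The following short sketch (the velocity values are typical illustrative numbers, and the function name is chosen here) does exactly that:
<syntaxhighlight lang="python">
# Illustrative rearrangement of the Wyllie time-average equation for porosity.
def wyllie_porosity(v_log, v_fluid, v_matrix):
    """Solve 1/V = phi/Vf + (1 - phi)/Vmat for phi; velocities in any consistent unit."""
    return (1.0 / v_log - 1.0 / v_matrix) / (1.0 / v_fluid - 1.0 / v_matrix)

# Example: a 3,500 m/s log reading with a 1,500 m/s pore fluid and a
# 5,500 m/s sandstone matrix gives a porosity of roughly 21%.
print(round(wyllie_porosity(3500.0, 1500.0, 5500.0), 3))
</syntaxhighlight>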
Accuracy.
The accuracy of modern compressional and shear sonic logs obtained with wireline logging tools is now well known to be within 2% for boreholes that are less than 14 inches in diameter and within 5% for larger boreholes. Some suggest that the fact that regular- and long-spaced log measurements often conflict means these logs are not accurate. That is actually not true. Quite often there is drilling-induced damage or chemical alteration around the borehole that causes the near-borehole formation to be up to 15% slower than the deeper formation. This "gradient" in slowness can extend over as much as 2–3 feet. The long-spaced measurements (7.5–13.5 ft) always measure the deeper, unaltered formation velocity and should always be used instead of the shorter-offset logs. Discrepancies between seismic data and sonic log data are due to upscaling and anisotropy considerations, which can be handled by using Backus Averaging on sonic log data.
Some suggest that to investigate how the varying size of a borehole has affected a sonic log, the results can be plotted against those of a caliper log. However, this comparison can easily lead to wrong conclusions, because the more compliant formations that are prone to washouts or diameter enlargements also inherently have "slower" velocities.
Calibrated sonic log.
To improve the tie between well data and seismic data a "check-shot" survey is often used to generate a calibrated sonic log. A geophone, or array of geophones is lowered down the borehole, with a seismic source located at the surface. The seismic source is fired with the geophone(s) at a series of different depths, with the interval transit times being recorded. This is often done during the acquisition of a vertical seismic profile.
Use in mineral exploration.
Sonic logs are also used in mineral exploration, especially exploration for iron and potassium.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "{\\Delta} t"
},
{
"math_id": 1,
"text": "V_{mat}"
},
{
"math_id": 2,
"text": "V_l"
},
{
"math_id": 3,
"text": "{{\\Delta}t}"
},
{
"math_id": 4,
"text": "{t_{far}} - {t_{near}}"
},
{
"math_id": 5,
"text": "{t_{far}}"
},
{
"math_id": 6,
"text": "{t_{near}}"
},
{
"math_id": 7,
"text": "\\frac{1}{V} = \\frac{\\phi}{V_f} + \\frac{1 - {\\phi}}{V_{mat}}"
},
{
"math_id": 8,
"text": "V"
},
{
"math_id": 9,
"text": "V_f"
},
{
"math_id": 10,
"text": "{\\phi}"
}
] | https://en.wikipedia.org/wiki?curid=12347137 |
12347279 | Density logging | Density logging is a well logging tool that can provide a continuous record of a formation's bulk density along the length of a borehole. In geology, bulk density is a function of the density of the minerals forming a rock (i.e. matrix) and the fluid enclosed in the pore spaces. This is one of three well logging tools that are commonly used to calculate porosity, the other two being sonic logging and neutron porosity logging.
History & Principle.
The tool was initially developed in the 1950s and became widely utilized across the hydrocarbon industry by the 1960s. In this type of active nuclear tool, a radioactive source and a detector are lowered down the borehole and the source emits medium-energy gamma rays into the formation. The radioactive source is typically a directional Cs-137 source. These gamma rays interact with electrons in the formation and are scattered in an interaction known as Compton scattering. The number of scattered gamma rays that reach the detector, placed at a set distance from the emitter, is related to the formation's electron density, which itself is related to the formation's bulk density (formula_0) via
formula_1
where formula_2 is the atomic number, and formula_3 is the molecular weight of the compound. For most elements formula_4 is about 1/2 (except for hydrogen where this ratio is 1). The electron density (formula_5) in g/cm3 determines the response of the density tool.
General tool design.
The tool itself initially consisted of a radioactive source and a single detector, but this configuration is susceptible to the effects of the drilling fluid. In a similar way to how the sonic logging tool was improved to compensate for borehole effects, density logging now conventionally uses 2 or more detectors. In a 2 detector configuration, the short-spaced detector has a much shallower depth of investigation than the long-spaced detector so it is used to measure the effect that the drilling fluid has on the gamma ray detection. This result is then used to correct the long-spaced detector.
Inferring porosity from bulk density.
Assuming that the measured bulk density (formula_0) only depends on matrix density (formula_6) and fluid density (formula_7), and that these values are known along the wellbore, porosity (formula_8) can be inferred by the formula
formula_9
Common values of matrix density formula_6 (in g/cm3) are about 2.65 for quartz sandstone, 2.71 for limestone (calcite), and 2.87 for dolomite.
This method is the most reliable porosity indicator for sandstones and limestones because their density is well known. On the other hand, the density of clay minerals such as mudstone is highly variable, depending on depositional environment, overburden pressure, type of clay mineral and many other factors. It can vary from 2.1 (montmorillonite) to 2.76 (chlorite) so this tool is not as useful for determining their porosity. A fluid bulk density formula_7 of 1 g/cm3 is appropriate where the water is fresh but highly saline water has a slightly higher density and lower values should be used for hydrocarbon reservoirs, depending on the hydrocarbon density and residual saturation.
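A minimal sketch of this calculation (the matrix and fluid densities below are common illustrative values, not measurements from any particular well) is:
<syntaxhighlight lang="python">
# Illustrative density-porosity calculation; densities in g/cm3.
def density_porosity(rho_bulk, rho_matrix=2.65, rho_fluid=1.0):
    """phi = (rho_matrix - rho_bulk) / (rho_matrix - rho_fluid)."""
    return (rho_matrix - rho_bulk) / (rho_matrix - rho_fluid)

# Example: a bulk density of 2.31 g/cm3 in a fresh-water-filled quartz sandstone
# (matrix density 2.65 g/cm3) corresponds to roughly 21% porosity.
print(round(density_porosity(2.31), 3))
</syntaxhighlight>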
In some applications hydrocarbons are indicated by the presence of abnormally high log porosities.
See also.
Sonic logging
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rho_\\text{bulk}"
},
{
"math_id": 1,
"text": "\\rho_\\text{e} = 2\\rho_\\text{bulk} \\ \\frac{ Z}{A}"
},
{
"math_id": 2,
"text": "Z"
},
{
"math_id": 3,
"text": "A"
},
{
"math_id": 4,
"text": "Z/A"
},
{
"math_id": 5,
"text": "\\rho_\\text{e}"
},
{
"math_id": 6,
"text": "\\rho_\\text{matrix}"
},
{
"math_id": 7,
"text": "\\rho_\\text{fluid}"
},
{
"math_id": 8,
"text": "\\phi"
},
{
"math_id": 9,
"text": "\\phi = \\frac{\\rho_\\text{matrix} - \\rho_\\text{bulk}}{\\rho_\\text{matrix}-\\rho_\\text{fluid}}"
}
] | https://en.wikipedia.org/wiki?curid=12347279 |
123495 | Spectral theorem | Result about when a matrix can be diagonalized
In mathematics, particularly linear algebra and functional analysis, a spectral theorem is a result about when a linear operator or matrix can be diagonalized (that is, represented as a diagonal matrix in some basis). This is extremely useful because computations involving a diagonalizable matrix can often be reduced to much simpler computations involving the corresponding diagonal matrix. The concept of diagonalization is relatively straightforward for operators on finite-dimensional vector spaces but requires some modification for operators on infinite-dimensional spaces. In general, the spectral theorem identifies a class of linear operators that can be modeled by multiplication operators, which are as simple as one can hope to find. In more abstract language, the spectral theorem is a statement about commutative C*-algebras. See also spectral theory for a historical perspective.
Examples of operators to which the spectral theorem applies are self-adjoint operators or more generally normal operators on Hilbert spaces.
The spectral theorem also provides a canonical decomposition, called the spectral decomposition, of the underlying vector space on which the operator acts.
Augustin-Louis Cauchy proved the spectral theorem for symmetric matrices, i.e., that every real, symmetric matrix is diagonalizable. In addition, Cauchy was the first to be systematic about determinants. The spectral theorem as generalized by John von Neumann is today perhaps the most important result of operator theory.
This article mainly focuses on the simplest kind of spectral theorem, that for a self-adjoint operator on a Hilbert space. However, as noted above, the spectral theorem also holds for normal operators on a Hilbert space.
Finite-dimensional case.
Hermitian maps and Hermitian matrices.
We begin by considering a Hermitian matrix on formula_0 (but the following discussion will be adaptable to the more restrictive case of symmetric matrices on formula_1). We consider a Hermitian map "A" on a finite-dimensional complex inner product space "V" endowed with a positive definite sesquilinear inner product formula_2 The Hermitian condition on formula_3 means that for all "x", "y" ∈ "V",
formula_4
An equivalent condition is that "A"* = "A", where "A"* is the Hermitian conjugate of "A". In the case that "A" is identified with a Hermitian matrix, the matrix of "A"* is equal to its conjugate transpose. (If "A" is a real matrix, then this is equivalent to "A"T = "A", that is, "A" is a symmetric matrix.)
This condition implies that all eigenvalues of a Hermitian map are real: To see this, it is enough to apply it to the case when "x" = "y" is an eigenvector. (Recall that an eigenvector of a linear map "A" is a non-zero vector "v" such that "A v" = "λv" for some scalar "λ". The value "λ" is the corresponding eigenvalue. Moreover, the eigenvalues are roots of the characteristic polynomial.)
<templatestyles src="Math_theorem/styles.css" />
Theorem — If "A" is Hermitian on "V", then there exists an orthonormal basis of "V" consisting of eigenvectors of "A". Each eigenvalue of "A" is real.
We provide a sketch of a proof for the case where the underlying field of scalars is the complex numbers.
By the fundamental theorem of algebra, applied to the characteristic polynomial of "A", there is at least one complex eigenvalue "λ"1 and corresponding eigenvector which must by definition be non-zero. Then since
formula_5
we find that "λ"1 is real. Now consider the space formula_6 the orthogonal complement of "v"1. By Hermiticity, formula_7 is an invariant subspace of "A". To see that, consider any formula_8 so that formula_9 by definition of formula_10 To satisfy invariance, we need to check if formula_11 This is true because formula_12 Applying the same argument to formula_7 shows that "A" has at least one real eigenvalue formula_13 and corresponding eigenvector formula_14 This can be used to build another invariant subspace formula_15 Finite induction then finishes the proof.
The matrix representation of "A" in a basis of eigenvectors is diagonal, and by the construction the proof gives a basis of mutually orthogonal eigenvectors; by choosing them to be unit vectors one obtains an orthonormal basis of eigenvectors. "A" can be written as a linear combination of pairwise orthogonal projections, called its spectral decomposition. Let
formula_16
be the eigenspace corresponding to an eigenvalue formula_17 Note that the definition does not depend on any choice of specific eigenvectors. In general, "V" is the orthogonal direct sum of the spaces formula_18 where the formula_19 ranges over the spectrum of formula_20
When the matrix being decomposed is Hermitian, the spectral decomposition is a special case of the Schur decomposition (see the proof in case of normal matrices below).
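As a numerical illustration (the 2 × 2 matrix below is an arbitrary example), the routine numpy.linalg.eigh returns real eigenvalues and an orthonormal eigenbasis, from which the spectral decomposition can be checked directly:
<syntaxhighlight lang="python">
import numpy as np

A = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])        # Hermitian: equal to its conjugate transpose

w, V = np.linalg.eigh(A)                 # eigenvalues are returned as a real array

print(w)                                             # [1. 4.] -- real eigenvalues
print(np.allclose(V.conj().T @ V, np.eye(2)))        # columns of V are orthonormal
print(np.allclose(A, V @ np.diag(w) @ V.conj().T))   # A = V diag(w) V*
</syntaxhighlight>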
Spectral decomposition and the singular value decomposition.
The spectral decomposition is a special case of the singular value decomposition, which states that any matrix formula_21 can be expressed as
formula_22 where formula_23 and formula_24 are unitary matrices and formula_25 is a diagonal matrix. The diagonal entries of formula_26 are uniquely determined by formula_3 and are known as the singular values of formula_20 If formula_3 is Hermitian, then formula_27 and formula_28 which implies formula_29
Normal matrices.
The spectral theorem extends to a more general class of matrices. Let "A" be an operator on a finite-dimensional inner product space. "A" is said to be normal if "A"*"A" = "AA"*.
One can show that "A" is normal if and only if it is unitarily diagonalizable using the Schur decomposition. That is, any matrix can be written as "A" = "UTU"*, where "U" is unitary and "T" is upper triangular.
If "A" is normal, then one sees that Therefore, "T" must be diagonal since a normal upper triangular matrix is diagonal (see normal matrix). The converse is obvious.
In other words, "A" is normal if and only if there exists a unitary matrix "U" such that
formula_30
where "D" is a diagonal matrix. Then, the entries of the diagonal of "D" are the eigenvalues of "A". The column vectors of "U" are the eigenvectors of "A" and they are orthonormal. Unlike the Hermitian case, the entries of "D" need not be real.
Compact self-adjoint operators.
In the more general setting of Hilbert spaces, which may have an infinite dimension, the statement of the spectral theorem for compact self-adjoint operators is virtually the same as in the finite-dimensional case.
<templatestyles src="Math_theorem/styles.css" />
Theorem — Suppose "A" is a compact self-adjoint operator on a (real or complex) Hilbert space "V". Then there is an orthonormal basis of "V" consisting of eigenvectors of "A". Each eigenvalue is real.
As for Hermitian matrices, the key point is to prove the existence of at least one nonzero eigenvector. One cannot rely on determinants to show existence of eigenvalues, but one can use a maximization argument analogous to the variational characterization of eigenvalues.
If the compactness assumption is removed, then it is "not" true that every self-adjoint operator has eigenvectors. For example, the multiplication operator formula_31 on formula_32 which takes each formula_33 to formula_34 is bounded and self-adjoint, but has no eigenvectors. However, its spectrum, suitably defined, is still equal to formula_35, see spectrum of bounded operator.
Bounded self-adjoint operators.
Possible absence of eigenvectors.
The next generalization we consider is that of bounded self-adjoint operators on a Hilbert space. Such operators may have no eigenvectors: for instance let "A" be the operator of multiplication by "t" on formula_32, that is,
formula_36
This operator does not have any eigenvectors "in" formula_32, though it does have eigenvectors in a larger space. Namely the distribution formula_37, where formula_38 is the Dirac delta function, is an eigenvector when construed in an appropriate sense. The Dirac delta function is however not a function in the classical sense and does not lie in the Hilbert space "L"2[0, 1] or any other Banach space. Thus, the delta-functions are "generalized eigenvectors" of formula_39 but not eigenvectors in the usual sense.
Spectral subspaces and projection-valued measures.
In the absence of (true) eigenvectors, one can look for a "spectral subspace" consisting of "almost eigenvectors", i.e., a closed subspace formula_40 of formula_41 associated with a Borel set formula_42 in the spectrum of formula_39. This subspace can be thought of as the closed span of generalized eigenvectors for formula_39 with eigen"values" in formula_43. In the above example, where formula_44 we might consider the subspace of functions supported on a small interval formula_45 inside formula_35. This space is invariant under formula_39 and for any formula_46 in this subspace, formula_47 is very close to formula_48. Each subspace, in turn, is encoded by the associated projection operator, and the collection of all the subspaces is then represented by a projection-valued measure.
One formulation of the spectral theorem expresses the operator "A" as an integral of the coordinate function over the operator's spectrum formula_49 with respect to a projection-valued measure.
formula_50
When the self-adjoint operator in question is compact, this version of the spectral theorem reduces to something similar to the finite-dimensional spectral theorem above, except that the operator is expressed as a finite or countably infinite linear combination of projections, that is, the measure consists only of atoms.
Multiplication operator version.
An alternative formulation of the spectral theorem says that every bounded self-adjoint operator is unitarily equivalent to a multiplication operator. The significance of this result is that multiplication operators are in many ways easy to understand.
<templatestyles src="Math_theorem/styles.css" />
Theorem — Let "A" be a bounded self-adjoint operator on a Hilbert space "H". Then there is a measure space ("X", Σ, "μ") and a real-valued essentially bounded measurable function "f" on "X" and a unitary operator "U":"H" → "L"2("X", "μ") such that
formula_51
where "T" is the multiplication operator:
formula_52
and formula_53.
The spectral theorem is the beginning of the vast research area of functional analysis called operator theory; see also the spectral measure.
There is also an analogous spectral theorem for bounded normal operators on Hilbert spaces. The only difference in the conclusion is that now "f" may be complex-valued.
Direct integrals.
There is also a formulation of the spectral theorem in terms of direct integrals. It is similar to the multiplication-operator formulation, but more canonical.
Let formula_39 be a bounded self-adjoint operator and let formula_54 be the spectrum of formula_39. The direct-integral formulation of the spectral theorem associates two quantities to formula_39. First, a measure formula_55 on formula_54, and second, a family of Hilbert spaces formula_56 We then form the direct integral Hilbert space
formula_57
The elements of this space are functions (or "sections") formula_58 such that formula_59 for all formula_60.
The direct-integral version of the spectral theorem may be expressed as follows:
<templatestyles src="Math_theorem/styles.css" />
Theorem — If formula_39 is a bounded self-adjoint operator, then formula_39 is unitarily equivalent to the "multiplication by formula_60" operator on formula_61
for some measure formula_55 and some family formula_62 of Hilbert spaces. The measure formula_55 is uniquely determined by formula_39 up to measure-theoretic equivalence; that is, any two measures associated with the same formula_39 have the same sets of measure zero. The dimensions of the Hilbert spaces formula_63 are uniquely determined by formula_39 up to a set of formula_55-measure zero.
The spaces formula_63 can be thought of as something like "eigenspaces" for formula_39. Note, however, that unless the one-element set formula_60 has positive measure, the space formula_63 is not actually a subspace of the direct integral. Thus, the formula_63's should be thought of as "generalized eigenspaces"—that is, the elements of formula_63 are "eigenvectors" that do not actually belong to the Hilbert space.
Although both the multiplication-operator and direct integral formulations of the spectral theorem express a self-adjoint operator as unitarily equivalent to a multiplication operator, the direct integral approach is more canonical. First, the set over which the direct integral takes place (the spectrum of the operator) is canonical. Second, the function we are multiplying by is canonical in the direct-integral approach: Simply the function formula_64.
Cyclic vectors and simple spectrum.
A vector formula_46 is called a cyclic vector for formula_39 if the vectors formula_65 span a dense subspace of the Hilbert space. Suppose formula_39 is a bounded self-adjoint operator for which a cyclic vector exists. In that case, there is no distinction between the direct-integral and multiplication-operator formulations of the spectral theorem. Indeed, in that case, there is a measure formula_55 on the spectrum formula_49 of formula_39 such that formula_39 is unitarily equivalent to the "multiplication by formula_60" operator on formula_66. This result represents formula_39 simultaneously as a multiplication operator "and" as a direct integral, since formula_66 is just a direct integral in which each Hilbert space formula_63 is just formula_67.
Not every bounded self-adjoint operator admits a cyclic vector; indeed, by the uniqueness in the direct integral decomposition, this can occur only when all the formula_63's have dimension one. When this happens, we say that formula_39 has "simple spectrum" in the sense of spectral multiplicity theory. That is, a bounded self-adjoint operator that admits a cyclic vector should be thought of as the infinite-dimensional generalization of a self-adjoint matrix with distinct eigenvalues (i.e., each eigenvalue has multiplicity one).
Although not every formula_39 admits a cyclic vector, it is easy to see that we can decompose the Hilbert space as a direct sum of invariant subspaces on which formula_39 has a cyclic vector. This observation is the key to the proofs of the multiplication-operator and direct-integral forms of the spectral theorem.
Functional calculus.
One important application of the spectral theorem (in whatever form) is the idea of defining a functional calculus. That is, given a function formula_68 defined on the spectrum of formula_39, we wish to define an operator formula_69. If formula_68 is simply a positive power, formula_70, then formula_69 is just the formula_71-th power of formula_39, formula_72. The interesting cases are where formula_68 is a nonpolynomial function such as a square root or an exponential. Either of the versions of the spectral theorem provides such a functional calculus. In the direct-integral version, for example, formula_69 acts as the "multiplication by formula_68" operator in the direct integral:
formula_73
That is to say, each space formula_63 in the direct integral is a (generalized) eigenspace for formula_69 with eigenvalue formula_74.
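In the finite-dimensional case this functional calculus can be carried out directly from an eigendecomposition. The following sketch (the matrix and the helper function are illustrative choices) applies a square root and a cube to a positive-definite symmetric matrix and checks the expected identities:
<syntaxhighlight lang="python">
import numpy as np

def apply_function(A, f):
    """Return f(A) for a Hermitian matrix A by applying f to its eigenvalues."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(f(w)) @ V.conj().T

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])               # eigenvalues 1 and 3, both positive

sqrt_A = apply_function(A, np.sqrt)
print(np.allclose(sqrt_A @ sqrt_A, A))                             # square root squares back to A
print(np.allclose(apply_function(A, lambda x: x**3), A @ A @ A))   # cube agrees with matrix powers
</syntaxhighlight>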
Unbounded self-adjoint operators.
Many important linear operators which occur in analysis, such as differential operators, are unbounded. There is also a spectral theorem for self-adjoint operators that applies in these cases. To give an example, every constant-coefficient differential operator is unitarily equivalent to a multiplication operator. Indeed, the unitary operator that implements this equivalence is the Fourier transform; the multiplication operator is a type of Fourier multiplier.
In general, the spectral theorem for self-adjoint operators may take several equivalent forms. Notably, all of the formulations given in the previous section for bounded self-adjoint operators—the projection-valued measure version, the multiplication-operator version, and the direct-integral version—continue to hold for unbounded self-adjoint operators, with small technical modifications to deal with domain issues. Specifically, the only reason the multiplication operator formula_39 on formula_32 is bounded is due to the choice of domain formula_35. The same operator on, e.g., formula_75 would be unbounded.
The notion of "generalized eigenvectors" naturally extends to unbounded self-adjoint operators, as they are characterized as non-normalizable eigenvectors. Contrary to the case of almost eigenvectors, however, the eigenvalues can be real or complex and, even if they are real, do not necessarily belong to the spectrum. Though, for self-adjoint operators there always exist a real subset of "generalized eigenvalues" such that the corresponding set of eigenvectors is complete.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{C}^n"
},
{
"math_id": 1,
"text": "\\mathbb{R}^n"
},
{
"math_id": 2,
"text": "\\ \\langle \\cdot, \\cdot \\rangle ~."
},
{
"math_id": 3,
"text": "\\ A\\ "
},
{
"math_id": 4,
"text": "\\ \\langle\\ A x, y\\ \\rangle = \\langle\\ x, A y\\ \\rangle ~."
},
{
"math_id": 5,
"text": "\\ \\lambda_1\\ \\langle\\ v_1, v_1\\ \\rangle = \\langle\\ A (v_1), v_1\\ \\rangle = \\langle\\ v_1, A(v_1)\\ \\rangle = \\bar\\lambda_1\\ \\langle\\ v_1, v_1\\ \\rangle\\ ,"
},
{
"math_id": 6,
"text": "\\ \\mathcal{K}^{n-1} = \\text{span}\\left(\\ v_1\\ \\right)^\\perp\\ ,"
},
{
"math_id": 7,
"text": "\\ \\mathcal{K}^{n-1}\\ "
},
{
"math_id": 8,
"text": "\\ k \\in \\mathcal{K}^{n-1}"
},
{
"math_id": 9,
"text": "\\ \\langle\\ k, v_1\\ \\rangle = 0\\ "
},
{
"math_id": 10,
"text": "\\mathcal{K}^{n-1} ~."
},
{
"math_id": 11,
"text": "\\ A(k) \\in \\mathcal{K}^{n-1} ~."
},
{
"math_id": 12,
"text": "\\ \\langle\\ A(k), v_1\\ \\rangle = \\langle\\ k, A(v_1)\\ \\rangle = \\langle\\ k, \\lambda_1\\ v_1\\ \\rangle = 0 ~."
},
{
"math_id": 13,
"text": "\\lambda_2"
},
{
"math_id": 14,
"text": "\\ v_2 \\in \\mathcal{K}^{n-1} \\perp v_1 ~."
},
{
"math_id": 15,
"text": "\\ \\mathcal{K}^{n-2} = \\text{span}\\left(\\ \\{v_1, v_2\\}\\ \\right)^\\perp ~."
},
{
"math_id": 16,
"text": "\\ V_{\\lambda} = \\{\\ v \\in V\\ :\\ A\\ v = \\lambda\\ v\\ \\}\\ "
},
{
"math_id": 17,
"text": "\\ \\lambda ~."
},
{
"math_id": 18,
"text": "\\ V_{\\lambda}\\ "
},
{
"math_id": 19,
"text": "\\ \\lambda\\ "
},
{
"math_id": 20,
"text": "\\ A ~."
},
{
"math_id": 21,
"text": "\\ A \\in \\mathbb{C}^{m \\times n}\\ "
},
{
"math_id": 22,
"text": "\\ A = U\\ \\Sigma\\ V^{*}\\ ,"
},
{
"math_id": 23,
"text": "\\ U \\in \\mathbb{C}^{m \\times m}\\ "
},
{
"math_id": 24,
"text": "\\ V \\in \\mathbb{C}^{n \\times n}\\ "
},
{
"math_id": 25,
"text": "\\ \\Sigma \\in \\mathbb{R}^{m \\times n}\\ "
},
{
"math_id": 26,
"text": "\\ \\Sigma\\ "
},
{
"math_id": 27,
"text": "\\ A^* = A\\ "
},
{
"math_id": 28,
"text": "\\ V\\ \\Sigma\\ U^* = U\\ \\Sigma\\ V^*\\ "
},
{
"math_id": 29,
"text": "\\ U = V ~."
},
{
"math_id": 30,
"text": "\\ A = U\\ D\\ U^*\\ ,"
},
{
"math_id": 31,
"text": "M_{x}"
},
{
"math_id": 32,
"text": "L^2([0,1])"
},
{
"math_id": 33,
"text": "\\psi(x) \\in L^2([0,1])"
},
{
"math_id": 34,
"text": "x\\psi(x)"
},
{
"math_id": 35,
"text": "[0,1]"
},
{
"math_id": 36,
"text": " [A \\varphi](t) = t \\varphi(t). "
},
{
"math_id": 37,
"text": "\\varphi(t)=\\delta(t-t_0)"
},
{
"math_id": 38,
"text": "\\delta"
},
{
"math_id": 39,
"text": "A"
},
{
"math_id": 40,
"text": "V_E"
},
{
"math_id": 41,
"text": "H"
},
{
"math_id": 42,
"text": "E \\subset \\sigma(A)"
},
{
"math_id": 43,
"text": "E"
},
{
"math_id": 44,
"text": " [A \\varphi](t) = t \\varphi(t), \\;"
},
{
"math_id": 45,
"text": "[a,a+\\varepsilon]"
},
{
"math_id": 46,
"text": "\\varphi"
},
{
"math_id": 47,
"text": "A\\varphi"
},
{
"math_id": 48,
"text": "a\\varphi"
},
{
"math_id": 49,
"text": "\\sigma(A)"
},
{
"math_id": 50,
"text": " A = \\int_{\\sigma(A)} \\lambda \\, d \\pi (\\lambda)."
},
{
"math_id": 51,
"text": " U^* T U = A,"
},
{
"math_id": 52,
"text": " [T \\varphi](x) = f(x) \\varphi(x),"
},
{
"math_id": 53,
"text": "\\|T\\| = \\|f\\|_\\infty"
},
{
"math_id": 54,
"text": "\\sigma (A)"
},
{
"math_id": 55,
"text": "\\mu"
},
{
"math_id": 56,
"text": "\\{H_{\\lambda}\\},\\,\\,\\lambda\\in\\sigma (A)."
},
{
"math_id": 57,
"text": " \\int_\\mathbf{R}^\\oplus H_{\\lambda}\\, d \\mu(\\lambda). "
},
{
"math_id": 58,
"text": "s(\\lambda),\\,\\,\\lambda\\in\\sigma(A),"
},
{
"math_id": 59,
"text": "s(\\lambda)\\in H_{\\lambda}"
},
{
"math_id": 60,
"text": "\\lambda"
},
{
"math_id": 61,
"text": " \\int_\\mathbf{R}^\\oplus H_{\\lambda}\\, d \\mu(\\lambda) "
},
{
"math_id": 62,
"text": "\\{H_{\\lambda}\\}"
},
{
"math_id": 63,
"text": "H_{\\lambda}"
},
{
"math_id": 64,
"text": "\\lambda\\mapsto\\lambda"
},
{
"math_id": 65,
"text": "\\varphi,A\\varphi,A^2\\varphi,\\ldots"
},
{
"math_id": 66,
"text": "L^2(\\sigma(A),\\mu)"
},
{
"math_id": 67,
"text": "\\mathbb{C}"
},
{
"math_id": 68,
"text": "f"
},
{
"math_id": 69,
"text": "f(A)"
},
{
"math_id": 70,
"text": "f(x) = x^n"
},
{
"math_id": 71,
"text": "n"
},
{
"math_id": 72,
"text": "A^n"
},
{
"math_id": 73,
"text": "[f(A)s](\\lambda) = f(\\lambda) s(\\lambda)."
},
{
"math_id": 74,
"text": "f(\\lambda)"
},
{
"math_id": 75,
"text": "L^2(\\mathbb{R})"
}
] | https://en.wikipedia.org/wiki?curid=123495 |
12350617 | Color–color diagram | Astronomical diagram graphing two colour indices
A color–color diagram is a means of comparing the colors of an astronomical object at different wavelengths. Astronomers typically observe at narrow bands around certain wavelengths, and objects observed will have different brightnesses in each band. The difference in brightness between two bands is referred to as an object's color index, or simply "color". On color–color diagrams, the color defined by two wavelength bands is plotted on the horizontal axis, and the color defined by another brightness difference will be plotted on the vertical axis.
Background.
Although stars are not perfect blackbodies, to first order the spectra of light emitted by stars conform closely to a black-body radiation curve, also referred to sometimes as a thermal radiation curve. The overall shape of a black-body curve is uniquely determined by its temperature, and the wavelength of peak intensity is inversely proportional to temperature, a relation known as Wien's Displacement Law. Thus, observation of a stellar spectrum allows determination of its effective temperature. Obtaining complete spectra for stars through spectrometry is much more involved than simple photometry in a few bands. Thus, by comparing the magnitude of the star in multiple different color indices, the effective temperature of the star can still be determined, as magnitude differences between each color will be unique for that temperature. As such, color-color diagrams can be used as a means of representing the stellar population, much like a Hertzsprung–Russell diagram, and stars of different spectral classes will inhabit different parts of the diagram. This feature leads to applications within various wavelength bands.
In the stellar locus, stars tend to align in a more or less straight feature. If stars were perfect black bodies, the stellar locus would indeed be a pure straight line. The divergences from the straight line are due to the absorption and emission lines in the stellar spectra. These divergences can be more or less evident depending on the filters used: narrow filters with central wavelengths located in regions without lines will produce a response close to that of a black body, and even filters centered on lines can give a reasonably blackbody-like behavior if they are broad enough.
Therefore, in most cases the straight feature of the stellar locus can be described by Ballesteros' formula deduced for pure blackbodies:
formula_0
where A, B, C and D are the magnitudes of the stars measured through filters with central frequencies "ν"a, "ν"b, "ν"c and "ν"d respectively, and k is a constant depending on the central wavelength and width of the filters, given by:
formula_1
Note that the slope of the straight line depends only on the effective wavelength, not on the filter width.
Although this formula cannot be directly used to calibrate data, if one has data well calibrated for two given filters, it can be used to calibrate data in other filters. It can be used to measure the effective wavelength midpoint of an unknown filter too, by using two well known filters. This can be useful to recover information on the filters used for the case of old data, when logs are not conserved and filter information has been lost.
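As a rough numerical check (the four band wavelengths below are arbitrary choices, and the magnitudes are monochromatic with an arbitrary zero point), colors computed from the Planck law for a range of temperatures fall on a nearly straight locus whose slope is close to the frequency ratio appearing in the formula:
<syntaxhighlight lang="python">
import numpy as np

PLANCK, BOLTZ, LIGHT = 6.626e-34, 1.381e-23, 2.998e8
wavelengths_nm = {"a": 445.0, "b": 551.0, "c": 658.0, "d": 806.0}   # assumed band centres
nu = {band: LIGHT / (wl * 1e-9) for band, wl in wavelengths_nm.items()}

def mag(freq, temp):
    """Monochromatic magnitude (arbitrary zero point) of a blackbody at temperature temp."""
    planck_law = freq**3 / np.expm1(PLANCK * freq / (BOLTZ * temp))
    return -2.5 * np.log10(planck_law)

temps = np.linspace(4000.0, 9000.0, 50)
ab = mag(nu["a"], temps) - mag(nu["b"], temps)
cd = mag(nu["c"], temps) - mag(nu["d"], temps)

fitted_slope = np.polyfit(ab, cd, 1)[0]
predicted_slope = (nu["c"] - nu["d"]) / (nu["a"] - nu["b"])
print(round(fitted_slope, 3), round(predicted_slope, 3))   # the two slopes come out close
</syntaxhighlight>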
Applications.
Photometric calibration.
The color-color diagram of stars can be used to directly calibrate or to test colors and magnitudes in optical and infrared imaging data. Such methods take advantage of the fundamental distribution of stellar colors in our galaxy across the vast majority of the sky, and the fact that observed stellar colors (unlike apparent magnitudes) are independent of the distance to the stars. Stellar locus regression (SLR) was a method developed to eliminate the need for standard star observations in photometric calibrations, except highly infrequently (once a year or less) to measure color terms. SLR has been used in a number of research initiatives. The NEWFIRM survey of the NOAO Deep Wide-Field Survey region used it to arrive at more accurate colors than would have otherwise been attainable by traditional calibration methods, and South Pole Telescope used SLR in the measurement of redshifts of galaxy clusters. The blue-tip method is closely related to SLR, but was used mainly to correct Galactic extinction predictions from IRAS data. Other surveys have used the stellar color-color diagram primarily as a calibration diagnostic tool, including The Oxford-Dartmouth Thirty Degree Survey and Sloan Digital Sky Survey (SDSS).
Color outliers.
Analyzing data from large observational surveys, such as the SDSS or 2 Micron All Sky Survey (2MASS), can be challenging due to the huge volume of data produced. For surveys such as these, color-color diagrams have been used to find outliers from the main sequence stellar population. Once these outliers are identified, they can then be studied in more detail. This method has been used to identify ultracool subdwarfs. Unresolved binary stars, which appear photometrically to be points, have been identified by studying color-color outliers in cases where one member is off the main sequence. The stages of the evolution of stars along the asymptotic giant branch from carbon star to planetary nebula appear on distinct regions of color–color diagrams (carbon stars tend to be redder than expected from their temperature due to the formation of carbon compounds in their atmospheres which absorb blue light). Quasars also appear as color-color outliers.
Star formation.
Color–color diagrams are often used in infrared astronomy to study star forming regions. Stars form in clouds of dust. As the star continues to contract, a circumstellar disk of dust is formed, and this dust is heated by the star inside. The dust itself then begins to radiate as a blackbody, though one much cooler than the star. As a result, an excess of infrared radiation is observed for the star. Even without circumstellar dust, regions undergoing star formation exhibit high infrared luminosities compared to stars on the main sequence. Each of these effects is distinct from the reddening of starlight which occurs as a result of scattering off of dust in the interstellar medium.
Color–color diagrams allow for these effects to be isolated. As the color–color relationships of main sequence stars are well known, a theoretical main sequence can be plotted for reference, as is done with the solid black line in the example to the right. Interstellar dust scattering is also well understood, allowing bands to be drawn on a color–color diagram defining the region in which stars reddened by interstellar dust are expected to be observed, indicated on the color–color diagram by dashed lines. The typical axes for infrared color–color diagrams have (H–K) on the horizontal axis and (J–H) on the vertical axis (see infrared astronomy for information on band color designations). On a diagram with these axes, stars which fall to the right of the main sequence and the reddening bands drawn are significantly brighter in the K band than main sequence stars, including main sequence stars which have experienced reddening due to interstellar dust. Of the J, H, and K bands, K is the longest wavelength, so objects which are anomalously bright in the K band are said to exhibit infrared excess. These objects are likely protostellar in nature, with the excess radiation at long wavelengths caused by suppression by the reflection nebula in which the protostars are embedded. Color–color diagrams can be used then as a means of studying stellar formation, as the state of a star in its formation can be roughly determined by looking at its position on the diagram.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "C - D = \\frac{\\nu_\\text{c} - \\nu_\\text{d}}{\\nu_\\text{a} - \\nu_\\text{b}} (A - B) + k, "
},
{
"math_id": 1,
"text": " k = -2.5 \\log_{10} \\left[\n \\left( \\frac{\\nu_\\text{c}}{\\nu_\\text{d}} \\right)^2\n \\left( \\frac{\\Delta_\\text{c}}{\\Delta_\\text{d}} \\right)\n \\left( \\frac{\\nu_\\text{b}}{\\nu_\\text{a}} \\right)^{2\\frac{\\nu_\\text{c} - \\nu_\\text{d}}{\\nu_\\text{a} - \\nu_\\text{b}}}\n \\left( \\frac{\\Delta_\\text{b}}{\\Delta_\\text{a}} \\right)^\\frac{\\nu_\\text{c} - \\nu_\\text{d}}{\\nu_\\text{a} - \\nu_\\text{b}}\n\\right] "
}
] | https://en.wikipedia.org/wiki?curid=12350617 |
1235183 | Spigot algorithm | Algorithm for computing the value of a transcendental number
A spigot algorithm is an algorithm for computing the value of a transcendental number (such as π or "e") that generates the digits of the number sequentially from left to right, providing increasing precision as the algorithm proceeds. Spigot algorithms also aim to minimize the amount of intermediate storage required. The name comes from the sense of the word "spigot" for a tap or valve controlling the flow of a liquid. Spigot algorithms can be contrasted with algorithms that store and process complete numbers to produce successively more accurate approximations to the desired transcendental.
Interest in spigot algorithms was spurred in the early days of computational mathematics by extreme constraints on memory, and such an algorithm for calculating the digits of "e" appeared in a paper by Sale in 1968. In 1970, Abdali presented a more general algorithm to compute the sums of series in which the ratios of successive terms can be expressed as quotients of integer functions of term positions. This algorithm is applicable to many familiar series for trigonometric functions, logarithms, and transcendental numbers because these series satisfy the above condition. The name "spigot algorithm" seems to have been coined by Stanley Rabinowitz and Stan Wagon, whose algorithm for calculating the digits of π is sometimes referred to as ""the" spigot algorithm for π".
The spigot algorithm of Rabinowitz and Wagon is "bounded", in the sense that the number of terms of the infinite series that will be processed must be specified in advance. The term "streaming algorithm" indicates an approach without this restriction. This allows the calculation to run indefinitely varying the amount of intermediate storage as the calculation progresses.
A variant of the spigot approach uses an algorithm which can be used to compute a single arbitrary digit of the transcendental without computing the preceding digits: an example is the Bailey–Borwein–Plouffe formula, a digit extraction algorithm for π which produces base 16 digits. The inevitable truncation of the underlying infinite series of the algorithm means that the accuracy of the result may be limited by the number of terms calculated.
Example.
This example illustrates the working of a spigot algorithm by calculating the binary digits of the natural logarithm of 2 (sequence in the OEIS) using the identity
formula_0
To start calculating binary digits from, as an example, the 8th place, we multiply this identity by 2^7 (since 7 = 8 − 1):
formula_1
We then divide the infinite sum into a "head", in which the exponents of 2 are greater than or equal to zero, and a "tail", in which the exponents of 2 are negative:
formula_2
We are only interested in the fractional part of this value, so we can replace each of the summands in the "head" by
formula_3
Calculating each of these terms and adding them to a running total where we again only keep the fractional part, we have:
We add a few terms in the "tail", noting that the error introduced by truncating the sum is less than the final term:
Adding the "head" and the first few terms of the "tail" together we get:
formula_4
so the 8th to 11th binary digits in the binary expansion of ln(2) are 1, 0, 1, 1. Note that we have not calculated the values of the first seven binary digits – indeed, all information about them has been intentionally discarded by using modular arithmetic in the "head" sum.
The same approach can be used to calculate digits of the binary expansion of ln(2) starting from an arbitrary "n"th position. The number of terms in the "head" sum increases linearly with "n", but the complexity of each term only increases with the logarithm of "n" if an efficient method of modular exponentiation is used. The precision of calculations and intermediate results and the number of terms taken from the "tail" sum are all independent of "n", and only depend on the number of binary digits that are being calculated – single precision arithmetic can be used to calculate around 12 binary digits, regardless of the starting position.
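A short implementation of the procedure just described (the function name, the number of tail terms, and the use of ordinary floating-point arithmetic are choices made here for illustration; they are adequate for modest starting positions) reproduces the digits found above:
<syntaxhighlight lang="python">
def ln2_binary_digits(position, count=4, tail_terms=20):
    """Binary digits of ln(2) starting at the given position, via the head/tail split."""
    # "Head": exponents of 2 are non-negative, so modular exponentiation keeps terms small.
    head = sum(pow(2, position - 1 - k, k) / k for k in range(1, position))
    # "Tail": a rapidly converging remainder; a handful of terms is enough.
    tail = sum(1.0 / (k * 2 ** (k - position + 1))
               for k in range(position, position + tail_terms))
    frac = (head + tail) % 1.0
    digits = []
    for _ in range(count):
        frac *= 2.0
        digits.append(int(frac))
        frac %= 1.0
    return digits

print(ln2_binary_digits(8))   # [1, 0, 1, 1], matching the worked example
</syntaxhighlight>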
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\ln(2)=\\sum_{k=1}^{\\infty}\\frac{1}{k2^k}\\, ."
},
{
"math_id": 1,
"text": "2^7\\ln(2) =2^7\\sum_{k=1}^\\infty \\frac{1}{k2^k}\\, ."
},
{
"math_id": 2,
"text": "2^7\\ln(2) =\\sum_{k=1}^{7}\\frac{2^{7-k}}{k}+\\sum_{k=8}^{\\infty}\\frac{1}{k2^{k-7}}\\, ."
},
{
"math_id": 3,
"text": "\\frac{2^{7-k} } k \\bmod 1 = \\frac{2^{7-k} \\bmod k} k \\, ."
},
{
"math_id": 4,
"text": "2^7\\ln(2)\\bmod1 \\approx \\frac{64}{105}+\\frac{37}{360}=0.10011100 \\ldots_2 + 0.00011010 \\ldots_2 = 0.1011 \\ldots_2 \\, ,"
}
] | https://en.wikipedia.org/wiki?curid=1235183 |
1235271 | Betatron | Cyclic particle accelerator
A betatron is a type of cyclic particle accelerator for electrons. It consists of a torus-shaped vacuum chamber with an electron source. Circling the torus is an iron transformer core with a wire winding around it. The device functions similarly to a transformer, with the electrons in the torus-shaped vacuum chamber as its secondary coil. An alternating current in the primary coils accelerates electrons in the vacuum around a circular path. The betatron was the first machine capable of producing electron beams at energies higher than could be achieved with a simple electron gun, and the first circular accelerator in which particles orbited at a constant radius.
The concept of the betatron had been proposed as early as 1922 by Joseph Slepian. Through the 1920s and 30s a number of theoretical problems related to the device were considered by scientists including Rolf Wideroe, Ernest Walton, and Max Steenbeck. The first working betatron was constructed by Donald Kerst at the University of Illinois Urbana-Champaign in 1940.
History.
After the discovery in the 1800s of Faraday's law of induction, which showed that an electromotive force could be generated by a changing magnetic field, several scientists speculated that this effect could be used to accelerate charged particles to high energies. Joseph Slepian proposed a device in 1922 that would use permanent magnets to steer the beam while it was accelerated by a changing magnetic field. However, he did not pursue the idea past the theoretical stage.
In the late 1920s, Gregory Breit and Merle Tuve at the Bureau of Terrestrial Magnetism constructed a working device that used varying magnetic fields to accelerate electrons. Their device placed two solenoidal magnets next to one another and fired electrons from a gun at the outer edge of the magnetic field. As the field was increased, the electrons were accelerated inward to strike a target at the center of the field, producing X-rays. This device took a step towards the betatron concept by shaping the magnetic field to keep the particles focused in the plane of acceleration.
In 1929, Rolf Wideroe made the next major contribution to the development of the theory by deriving the "Wideroe Condition" for stable orbits. He determined that in order for the orbit radius to remain constant, the field at the radius must be exactly half of the average field over the area of the magnet. This critical calculation allowed for the development of accelerators in which the particles orbited at a constant radius, rather than spiraling inward, as in the case of Breit and Tuve's machine, or outward, as in the case of the cyclotron. Although Wideroe made valuable contributions to the development of the theory of the Betatron, he was unable to build a device in which the electrons orbited more than one and a half times, as his device had no mechanism to keep the beam focused.
Simultaneously with Wideroe's experiments, Ernest Walton analyzed the orbits of electrons in a magnetic field, and determined that it was possible to construct an orbit that was radially focused in the plane of the orbit. Particles in such an orbit which moved a small distance away from the orbital radius would experience a force pushing them back to the correct radius. These oscillations about a stable orbit in a circular accelerator are now referred to as "betatron oscillations".
In 1935 Max Steenbeck applied in Germany for a patent on a device that would combine the radial focusing condition of Walton with the vertical focusing used in Breit and Tuve's machine. He later claimed to have built a working machine, but this claim was disputed.
The first team unequivocally acknowledged to have built a working betatron was led by Donald Kerst at the University of Illinois. The accelerator was completed on July 15, 1940.
Operation principle.
In a betatron, the changing magnetic field from the primary coil accelerates electrons injected into the vacuum torus, causing them to circle around the torus in the same manner as current is induced in the secondary coil of a transformer (Faraday's law).
The stable orbit for the electrons satisfies
formula_0
where
formula_1 is the flux within the area enclosed by the electron orbit,
formula_2 is the radius of the electron orbit, and
formula_3 is the magnetic field at formula_2.
In other words, the magnetic field at the orbit must be half the average magnetic field over its circular cross section:
formula_4
This condition is often called "Widerøe's condition".
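A small numerical restatement of this condition (SI units and the example numbers are assumptions made for illustration) is:
<syntaxhighlight lang="python">
import math

def required_flux(h_orbit, r_orbit):
    """Flux (Wb) enclosed by a stable orbit of radius r_orbit (m) for a field h_orbit (T) at the orbit."""
    return 2.0 * math.pi * r_orbit**2 * h_orbit

def average_field(h_orbit, r_orbit):
    """Average field over the orbit's cross section; by the condition it is twice h_orbit."""
    return required_flux(h_orbit, r_orbit) / (math.pi * r_orbit**2)

print(average_field(0.4, 0.5))   # 0.8 T: twice the 0.4 T field at the orbit
</syntaxhighlight>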
Etymology.
The name "betatron" (a reference to the beta particle, a fast electron) was chosen during a departmental contest. Other proposals were "rheotron", "induction accelerator", "induction electron accelerator", and even "Außerordentlichehochgeschwindigkeitselektronenentwickelndesschwerarbeitsbeigollitron", a suggestion by a German associate, for "Hard working by golly machine for generating extraordinarily high velocity electrons" or perhaps "Extraordinarily high velocity electron generator, high energy by golly-tron."
Applications.
Betatrons were historically employed in particle physics experiments to provide high-energy beams of electrons—up to about 300 MeV. If the electron beam is directed at a metal plate, the betatron can be used as a source of energetic x-rays, which may be used in industrial and medical applications (historically in radiation oncology). A small version of a betatron was also used to provide a source of hard X-rays (by deceleration of the electron beam in a target) for prompt initiation of some experimental nuclear weapons by means of photon-induced fission and photofission in the bomb core.
The Radiation Center, the first private medical center to treat cancer patients with a betatron, was opened by Dr. O. Arthur Stiennon in a suburb of Madison, Wisconsin in the late 1950s.
Limitations.
The maximum energy that a betatron can impart is limited by the strength of the magnetic field due to the saturation of iron and by the practical size of the magnet core. The next generation of accelerators, the synchrotrons, overcame these limitations.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\theta_0 = 2 \\pi r_0^2 H_0,"
},
{
"math_id": 1,
"text": "\\theta_0"
},
{
"math_id": 2,
"text": "r_0"
},
{
"math_id": 3,
"text": "H_0"
},
{
"math_id": 4,
"text": "\\Leftrightarrow H_0 = \\frac{1}{2} \\frac{\\theta_0}{\\pi r_0^2}."
}
] | https://en.wikipedia.org/wiki?curid=1235271 |
12354 | Greatest common divisor | Largest integer that divides given integers
In mathematics, the greatest common divisor (GCD), also known as greatest common factor (GCF), of two or more integers, which are not all zero, is the largest positive integer that divides each of the integers. For two integers "x", "y", the greatest common divisor of "x" and "y" is denoted formula_0. For example, the GCD of 8 and 12 is 4, that is, gcd(8, 12) = 4.
In the name "greatest common divisor", the adjective "greatest" may be replaced by "highest", and the word "divisor" may be replaced by "factor", so that other names include highest common factor, etc. Historically, other names for the same concept have included greatest common measure.
This notion can be extended to polynomials (see "Polynomial greatest common divisor") and other commutative rings (see "" below).
Overview.
Definition.
The "greatest common divisor" (GCD) of integers a and b, at least one of which is nonzero, is the greatest positive integer d such that d is a divisor of both a and b; that is, there are integers e and f such that "a" = "de" and "b" = "df", and d is the largest such integer. The GCD of a and b is generally denoted gcd("a", "b").
When one of "a" and "b" is zero, the GCD is the absolute value of the nonzero integer: gcd("a", 0) = gcd(0, "a") = |"a"|. This case is important as the terminating step of the Euclidean algorithm.
The above definition is unsuitable for defining gcd(0, 0), since there is no greatest integer "n" such that 0 × "n" = 0. However, zero is its own greatest divisor if "greatest" is understood in the context of the divisibility relation, so gcd(0, 0) is commonly defined as 0. This preserves the usual identities for GCD, and in particular Bézout's identity, namely that gcd("a", "b") generates the same ideal as {"a", "b"}. This convention is followed by many computer algebra systems. Nonetheless, some authors leave gcd(0, 0) undefined.
The GCD of a and b is their greatest positive common divisor in the preorder relation of divisibility. This means that the common divisors of a and b are exactly the divisors of their GCD. This is commonly proved by using either Euclid's lemma, the fundamental theorem of arithmetic, or the Euclidean algorithm. This is the meaning of "greatest" that is used for the generalizations of the concept of GCD.
Example.
The number 54 can be expressed as a product of two integers in several different ways:
formula_1
Thus the complete list of "divisors" of 54 is 1, 2, 3, 6, 9, 18, 27, 54.
Similarly, the divisors of 24 are 1, 2, 3, 4, 6, 8, 12, 24.
The numbers that these two lists have "in common" are the "common divisors" of 54 and 24, that is,
formula_2
Of these, the greatest is 6, so it is the "greatest common divisor":
formula_3
Computing all divisors of the two numbers in this way is usually not efficient, especially for large numbers that have many divisors. Much more efficient methods are described in "".
Coprime numbers.
Two numbers are called relatively prime, or coprime, if their greatest common divisor equals 1. For example, 9 and 28 are coprime.
A geometric view.
For example, a 24-by-60 rectangular area can be divided into a grid of: 1-by-1 squares, 2-by-2 squares, 3-by-3 squares, 4-by-4 squares, 6-by-6 squares or 12-by-12 squares. Therefore, 12 is the greatest common divisor of 24 and 60. A 24-by-60 rectangular area can thus be divided into a grid of 12-by-12 squares, with two squares along one edge (24/12 = 2) and five squares along the other (60/12 = 5).
Applications.
Reducing fractions.
The greatest common divisor is useful for reducing fractions to the lowest terms. For example, gcd(42, 56) = 14, therefore,
formula_4
Least common multiple.
The least common multiple of two integers that are not both zero can be computed from their greatest common divisor, by using the relation
formula_5
Calculation.
Using prime factorizations.
Greatest common divisors can be computed by determining the prime factorizations of the two numbers and comparing factors. For example, to compute gcd(48, 180), we find the prime factorizations 48 = 2^4 · 3^1 and 180 = 2^2 · 3^2 · 5^1; the GCD is then 2^min(4,2) · 3^min(1,2) · 5^min(0,1) = 2^2 · 3^1 · 5^0 = 12. The corresponding LCM is then 2^max(4,2) · 3^max(1,2) · 5^max(0,1) = 2^4 · 3^2 · 5^1 = 720.
In practice, this method is only feasible for small numbers, as computing prime factorizations takes too long.
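An illustrative sketch of the method (the naive trial-division factorizer below is itself only practical for small inputs, which is the point being made) is:
<syntaxhighlight lang="python">
from collections import Counter

def factorize(n):
    """Prime factorization by trial division: returns a Counter of prime -> exponent."""
    factors, p = Counter(), 2
    while p * p <= n:
        while n % p == 0:
            factors[p] += 1
            n //= p
        p += 1
    if n > 1:
        factors[n] += 1
    return factors

def gcd_by_factorization(a, b):
    """Multiply each common prime raised to the smaller of its two exponents."""
    fa, fb = factorize(a), factorize(b)
    result = 1
    for p in fa.keys() & fb.keys():
        result *= p ** min(fa[p], fb[p])
    return result

print(gcd_by_factorization(48, 180))   # 12 = 2^min(4,2) * 3^min(1,2) * 5^min(0,1)
</syntaxhighlight>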
Euclid's algorithm.
The method introduced by Euclid for computing greatest common divisors is based on the fact that, given two positive integers a and b such that "a" > "b", the common divisors of a and b are the same as the common divisors of "a" – "b" and b.
So, Euclid's method for computing the greatest common divisor of two positive integers consists of replacing the larger number with the difference of the numbers, and repeating this until the two numbers are equal: that is their greatest common divisor.
For example, to compute gcd(48,18), one proceeds as follows:
formula_6
So gcd(48, 18) = 6.
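A direct transcription of this method (illustrative only; it assumes both inputs are positive integers) is:
<syntaxhighlight lang="python">
def gcd_by_subtraction(a, b):
    """Repeatedly replace the larger number by the difference until the two are equal."""
    while a != b:
        if a > b:
            a -= b
        else:
            b -= a
    return a

print(gcd_by_subtraction(48, 18))   # 6, via (48, 18) -> (30, 18) -> (12, 18) -> (12, 6) -> (6, 6)
</syntaxhighlight>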
This method can be very slow if one number is much larger than the other. So, the variant that follows is generally preferred.
Euclidean algorithm.
A more efficient method is the "Euclidean algorithm", a variant in which the difference of the two numbers a and b is replaced by the "remainder" of the Euclidean division (also called "division with remainder") of a by b.
Denoting this remainder as "a" mod "b", the algorithm replaces ("a", "b") with ("b", "a" mod "b") repeatedly until the pair is ("d", 0), where d is the greatest common divisor.
For example, to compute gcd(48,18), the computation is as follows:
formula_7
This again gives gcd(48, 18) = 6.
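In code, the remainder-based variant is a short loop; Python's standard library also provides math.gcd, which can be used instead of writing it out. A minimal sketch:

```python
def gcd_euclidean(a, b):
    """Replace (a, b) by (b, a mod b) until the second entry becomes 0."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd_euclidean(48, 18))  # 6
```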
Binary GCD algorithm.
The binary GCD algorithm is a variant of Euclid's algorithm that is specially adapted to the binary representation of the numbers, which is used in most computers.
The binary GCD algorithm differs from Euclid's algorithm essentially by dividing by two every even number that is encountered during the computation. Its efficiency results from the fact that, in binary representation, testing parity consists of testing the right-most digit, and dividing by two consists of removing the right-most digit.
The method is as follows, starting with "a" and "b" being the two positive integers whose GCD is sought: while "a" and "b" are both even, divide both by 2 and count these shared halvings in "d"; then, repeatedly, halve whichever of the two numbers is even, and, when both are odd, replace the larger one by half of their difference, until the two numbers are equal.
The first phase determines "d" such that 2^"d" is the highest power of 2 that divides both "a" and "b", and thus also the highest power of 2 dividing their greatest common divisor. None of the steps changes the set of the odd common divisors of "a" and "b". This shows that when the algorithm stops, the result is correct. The algorithm stops eventually, since each step divides at least one of the operands by at least 2. Moreover, the number of divisions by 2, and thus the number of subtractions, is at most the total number of digits.
Example: ("a", "b", "d") = (48, 18, 0) → (24, 9, 1) → (12, 9, 1) → (6, 9, 1) → (3, 9, 1) → (3, 3, 1) ; the original GCD is thus the product 6 of 2"d" = 21 and "a" = "b" = 3.
The binary GCD algorithm is particularly easy to implement and particularly efficient on binary computers. Its computational complexity is
formula_9
The square in this complexity comes from the fact that division by 2 and subtraction take a time that is proportional to the number of bits of the input.
The computational complexity is usually given in terms of the length "n" of the input. Here, this length is "n" = log "a" + log "b", and the complexity is thus
formula_10.
Lehmer's GCD algorithm.
Lehmer's algorithm is based on the observation that the initial quotients produced by Euclid's algorithm can be determined based on only the first few digits; this is useful for numbers that are larger than a computer word. In essence, one extracts initial digits, typically forming one or two computer words, and runs Euclid's algorithm on these smaller numbers, as long as it is guaranteed that the quotients are the same as those that would be obtained with the original numbers. The quotients are collected into a small 2-by-2 transformation matrix (a matrix of single-word integers) to reduce the original numbers. This process is repeated until the numbers are small enough that the binary algorithm (see above) is more efficient.
This algorithm improves speed, because it reduces the number of operations on very large numbers, and can use hardware arithmetic for most operations. In fact, most of the quotients are very small, so a fair number of steps of the Euclidean algorithm can be collected in a 2-by-2 matrix of single-word integers. When Lehmer's algorithm encounters a quotient that is too large, it must fall back to one iteration of Euclidean algorithm, with a Euclidean division of large numbers.
Other methods.
If "a" and "b" are both nonzero, the greatest common divisor of "a" and "b" can be computed by using least common multiple (LCM) of "a" and "b":
formula_11,
but more commonly the LCM is computed from the GCD.
Using Thomae's function "f",
formula_12
which generalizes to "a" and "b" rational numbers or commensurable real numbers.
Keith Slavin has shown that for odd "a" ≥ 1:
formula_13
which is a function that can be evaluated for complex "b". Wolfgang Schramm has shown that
formula_14
is an entire function in the variable "b" for all positive integers "a" where "c""d"("k") is Ramanujan's sum.
Complexity.
The computational complexity of the computation of greatest common divisors has been widely studied. If one uses the Euclidean algorithm and the elementary algorithms for multiplication and division, the computation of the greatest common divisor of two integers of at most n bits is "O"("n"2). This means that the computation of greatest common divisor has, up to a constant factor, the same complexity as the multiplication.
However, if a fast multiplication algorithm is used, one may modify the Euclidean algorithm for improving the complexity, but the computation of a greatest common divisor becomes slower than the multiplication. More precisely, if the multiplication of two integers of "n" bits takes a time of "T"("n"), then the fastest known algorithm for greatest common divisor has a complexity "O"("T"("n") log "n"). This implies that the fastest known algorithm has a complexity of "O"("n" (log "n")2).
Previous complexities are valid for the usual models of computation, specifically multitape Turing machines and random-access machines.
The computation of the greatest common divisors belongs thus to the class of problems solvable in quasilinear time. "A fortiori", the corresponding decision problem belongs to the class P of problems solvable in polynomial time. The GCD problem is not known to be in NC, and so there is no known way to parallelize it efficiently; nor is it known to be P-complete, which would imply that it is unlikely to be possible to efficiently parallelize GCD computation. Shallcross et al. showed that a related problem (EUGCD, determining the remainder sequence arising during the Euclidean algorithm) is NC-equivalent to the problem of integer linear programming with two variables; if either problem is in NC or is P-complete, the other is as well. Since NC contains NL, it is also unknown whether a space-efficient algorithm for computing the GCD exists, even for nondeterministic Turing machines.
Although the problem is not known to be in NC, parallel algorithms asymptotically faster than the Euclidean algorithm exist; the fastest known deterministic algorithm is by Chor and Goldreich, which (in the CRCW-PRAM model) can solve the problem in "O"("n"/log "n") time with "n"1+"ε" processors. Randomized algorithms can solve the problem in "O"((log "n")2) time on formula_15 processors (this is superpolynomial).
The identity gcd("a", "b") · lcm("a", "b") = |"a"·"b"| is often used to compute least common multiples: one first computes the GCD with Euclid's algorithm and then divides the product of the given numbers by their GCD.
Properties.
formula_17 where formula_18 is the "p"-adic valuation. (sequence in the OEIS)
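This identity is easy to verify numerically; the following sketch (using the third-party sympy library for the totient, divisors and factorization) evaluates the gcd-sum directly, via the divisor sum, and via the product formula, and shows that the results agree:

```python
from math import gcd
from fractions import Fraction
from sympy import totient, divisors, factorint

def gcd_sum_three_ways(n):
    """Evaluate the gcd-sum of n via three of the equal expressions above."""
    direct = sum(gcd(k, n) for k in range(1, n + 1))
    divisor_form = sum(d * totient(n // d) for d in divisors(n))
    product_form = Fraction(n)
    for p, e in factorint(n).items():
        product_form *= 1 + e * (1 - Fraction(1, p))
    return direct, divisor_form, product_form

print(gcd_sum_three_ways(12))  # all three expressions give 40
```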
Probabilities and expected value.
In 1972, James E. Nymann showed that "k" integers, chosen independently and uniformly from {1, ..., "n"}, are coprime with probability 1/"ζ"("k") as "n" goes to infinity, where "ζ" refers to the Riemann zeta function. (See coprime for a derivation.) This result was extended in 1987 to show that the probability that "k" random integers have greatest common divisor "d" is "d"−"k"/ζ("k").
Using this information, the expected value of the greatest common divisor function can be seen (informally) to not exist when "k" = 2. In this case the probability that the GCD equals "d" is "d"−2/"ζ"(2), and since "ζ"(2) = π2/6 we have
formula_19
This last summation is the harmonic series, which diverges. However, when "k" ≥ 3, the expected value is well-defined, and by the above argument, it is
formula_20
For "k" = 3, this is approximately equal to 1.3684. For "k" = 4, it is approximately 1.1106.
In commutative rings.
The notion of greatest common divisor can more generally be defined for elements of an arbitrary commutative ring, although in general there need not exist one for every pair of elements.
With this definition, two elements a and b may very well have several greatest common divisors, or none at all. If R is an integral domain, then any two GCDs of a and b must be associate elements, since by definition either one must divide the other. Indeed, if a GCD exists, any one of its associates is a GCD as well.
Existence of a GCD is not assured in arbitrary integral domains. However, if R is a unique factorization domain or any other GCD domain, then any two elements have a GCD. If R is a Euclidean domain in which Euclidean division is given algorithmically (as is the case for instance when "R" = "F"["X"] where F is a field, or when R is the ring of Gaussian integers), then greatest common divisors can be computed using a form of the Euclidean algorithm based on the division procedure.
The following is an example of an integral domain with two elements that do not have a GCD:
formula_21
The elements 2 and 1 + √−3 are two maximal common divisors (that is, any common divisor that is a multiple of 2 is associated to 2, and the same holds for 1 + √−3), but they are not associated, so there is no greatest common divisor of "a" and "b".
Corresponding to the Bézout property we may, in any commutative ring, consider the collection of elements of the form "pa" + "qb", where p and q range over the ring. This is the ideal generated by a and b, and is denoted simply ("a", "b"). In a ring all of whose ideals are principal (a principal ideal domain or PID), this ideal will be identical with the set of multiples of some ring element "d"; then this d is a greatest common divisor of a and "b". But the ideal ("a", "b") can be useful even when there is no greatest common divisor of a and "b". (Indeed, Ernst Kummer used this ideal as a replacement for a GCD in his treatment of Fermat's Last Theorem, although he envisioned it as the set of multiples of some hypothetical, or "ideal", ring element d, whence the ring-theoretic term.)
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\gcd (x,y)"
},
{
"math_id": 1,
"text": " 54 \\times 1 = 27 \\times 2 = 18 \\times 3 = 9 \\times 6."
},
{
"math_id": 2,
"text": " 1, 2, 3, 6. "
},
{
"math_id": 3,
"text": " \\gcd(54,24) = 6. "
},
{
"math_id": 4,
"text": "\\frac{42}{56}=\\frac{3 \\cdot 14 }{ 4 \\cdot 14}=\\frac{3 }{ 4}."
},
{
"math_id": 5,
"text": "\\operatorname{lcm}(a,b)=\\frac{|a\\cdot b|}{\\operatorname{gcd}(a,b)}."
},
{
"math_id": 6,
"text": "\\begin{align}\\gcd(48,18)\\quad&\\to\\quad \\gcd(48-18, 18)= \\gcd(30,18)&&\\to \\quad \\gcd(30-18, 18)= \\gcd(12,18)\\\\\n&\\to \\quad \\gcd(12,18-12)= \\gcd(12,6)&&\\to \\quad \\gcd(12-6,6)= \\gcd(6,6).\\end{align}"
},
{
"math_id": 7,
"text": "\\begin{align}\\gcd(48,18)\\quad&\\to\\quad \\gcd(18, 48\\bmod 18)= \\gcd(18, 12)\\\\\n&\\to \\quad \\gcd(12, 18\\bmod 12)= \\gcd(12,6)\\\\\n&\\to \\quad \\gcd(6,12\\bmod 6)= \\gcd(6,0).\\end{align}"
},
{
"math_id": 8,
"text": "2^d a."
},
{
"math_id": 9,
"text": "O((\\log a + \\log b)^2)."
},
{
"math_id": 10,
"text": "O(n^2)"
},
{
"math_id": 11,
"text": "\\gcd(a,b)=\\frac{|a\\cdot b|}{\\operatorname{lcm}(a,b)}"
},
{
"math_id": 12,
"text": "\\gcd(a,b) = a f\\left(\\frac b a\\right),"
},
{
"math_id": 13,
"text": "\\gcd(a,b)=\\log_2\\prod_{k=0}^{a-1} (1+e^{-2i\\pi k b/a})"
},
{
"math_id": 14,
"text": "\\gcd(a,b)=\\sum\\limits_{k=1}^a \\exp (2\\pi ikb/a) \\cdot \\sum\\limits_{d\\left| a\\right.} \\frac{c_d (k)}{d} "
},
{
"math_id": 15,
"text": "\\exp\\left(O\\left(\\sqrt{n \\log n}\\right)\\right)"
},
{
"math_id": 16,
"text": " \\gcd(a,b) = \\sum_{k|a \\text{ and }k|b} \\varphi(k) ."
},
{
"math_id": 17,
"text": "\\sum_{k=1}^n \\gcd(k,n)\n= \\sum_{d|n} d \\phi \\left( \\frac n d \\right)\n=n\\sum_{d|n}\\frac{\\varphi(d)}{d}\n=n\\prod_{p|n}\\left(1+\\nu_p(n)\\left(1-\\frac{1}{p}\\right)\\right)"
},
{
"math_id": 18,
"text": "\\nu_p(n)"
},
{
"math_id": 19,
"text": "\\mathrm{E}( \\mathrm{2} ) = \\sum_{d=1}^\\infty d \\frac{6}{\\pi^2 d^2} = \\frac{6}{\\pi^2} \\sum_{d=1}^\\infty \\frac{1}{d}."
},
{
"math_id": 20,
"text": " \\mathrm{E}(k) = \\sum_{d=1}^\\infty d^{1-k} \\zeta(k)^{-1} = \\frac{\\zeta(k-1)}{\\zeta(k)}. "
},
{
"math_id": 21,
"text": "R = \\mathbb{Z}\\left[\\sqrt{-3}\\,\\,\\right],\\quad a = 4 = 2\\cdot 2 = \\left(1+\\sqrt{-3}\\,\\,\\right)\\left(1-\\sqrt{-3}\\,\\,\\right),\\quad b = \\left(1+\\sqrt{-3}\\,\\,\\right)\\cdot 2."
}
] | https://en.wikipedia.org/wiki?curid=12354 |
1235913 | Radiation zone | Radiative layer of stars
A radiation zone, or radiative region, is a layer of a star's interior where energy is primarily transported toward the exterior by means of radiative diffusion and thermal conduction, rather than by convection. Energy travels through the radiation zone in the form of electromagnetic radiation as photons.
Matter in a radiation zone is so dense that photons can travel only a short distance before they are absorbed or scattered by another particle, gradually shifting to longer wavelength as they do so. For this reason, it takes an average of 171,000 years for gamma rays from the core of the Sun to leave the radiation zone. Over this range, the temperature of the plasma drops from 15 million K near the core down to 1.5 million K at the base of the convection zone.
Temperature gradient.
In a radiative zone, the temperature gradient—the change in temperature ("T") as a function of radius ("r")—is given by:
formula_0
where "κ"("r") is the opacity, "ρ"("r") is the matter density, "L"("r") is the luminosity, and "σ""B" is the Stefan–Boltzmann constant. Hence the opacity ("κ") and radiation flux ("L") within a given layer of a star are important factors in determining how effective radiative diffusion is at transporting energy. A high opacity or high luminosity can cause a high temperature gradient, which results from a slow flow of energy. Those layers where convection is more effective than radiative diffusion at transporting energy, thereby creating a lower temperature gradient, will become convection zones.
This relation can be derived by integrating Fick's first law over the surface of some radius "r", giving the total outgoing energy flux which is equal to the luminosity by conservation of energy:
formula_1
Where "D" is the photons diffusion coefficient, and "u" is the energy density.
The energy density is related to the temperature by Stefan–Boltzmann law by:
formula_2
Finally, as in the elementary theory of diffusion coefficient in gases, the diffusion coefficient "D" approximately satisfies:
formula_3
where λ is the photon mean free path, which is the reciprocal of the product of the opacity "κ" and the density ρ.
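For illustration, the radiative temperature gradient given at the beginning of this section can be evaluated with a few lines of Python; the numerical inputs below are rough order-of-magnitude placeholders for conditions inside the solar radiation zone, not values taken from a solar model.

```python
import math
from scipy.constants import Stefan_Boltzmann as sigma_B  # W m^-2 K^-4

def radiative_dT_dr(kappa, rho, L, r, T):
    """dT/dr = -3*kappa*rho*L / (64*pi*sigma_B*r**2*T**3), all quantities in SI units."""
    return -3.0 * kappa * rho * L / (64.0 * math.pi * sigma_B * r**2 * T**3)

# Illustrative placeholder values (very roughly mid radiation zone of the Sun):
kappa = 0.1      # opacity, m^2/kg (about 1 cm^2/g)
rho   = 1.0e3    # density, kg/m^3
L     = 3.8e26   # luminosity carried through the shell, W
r     = 3.5e8    # radius, m (about half a solar radius)
T     = 4.0e6    # temperature, K

print(radiative_dT_dr(kappa, rho, L, r, T), "K per metre")
```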
Eddington stellar model.
Eddington assumed the pressure "P" in a star is a combination of an ideal gas pressure and radiation pressure, and that there is a constant ratio, β, of the gas pressure to the total pressure.
Therefore, by the ideal gas law:
formula_4
where "k""B" is Boltzmann constant and μ the mass of a single atom (actually, an ion since matter is ionized; usually a hydrogen ion, i.e. a proton).
While the radiation pressure satisfies:
formula_5
so that "T"4 is proportional to "P" throughout the star.
This gives the polytropic equation (with "n"=3):
formula_6
Using the hydrostatic equilibrium equation, the second equation becomes equivalent to:
formula_7
For energy transmission by radiation only, we may use the equation for the temperature gradient (presented in the previous subsection) for the right-hand side and get
formula_8
Thus the Eddington model is a good approximation in the radiation zone as long as κ"L"/"M" is approximately constant, which is often the case.
Stability against convection.
The radiation zone is stable against formation of convection cells if the density gradient is steep enough that an element moving upwards has its density lowered (due to adiabatic expansion) by less than the drop in density of its surroundings, so that it experiences a net buoyancy force downwards.
The criterion for this is:
formula_9
where "P" is the pressure, ρ the density and formula_10 is the heat capacity ratio.
For a homogeneous ideal gas, this is equivalent to:
formula_11
We can calculate the left-hand side by dividing the equation for the temperature gradient by the equation relating the pressure gradient to the gravity acceleration "g":
formula_12
"M"("r") being the mass within the sphere of radius "r", and is approximately the whole star mass for large enough "r".
This gives the following form of the Schwarzschild criterion for stability against convection:
formula_13
Note that for a non-homogeneous gas this criterion should be replaced by the Ledoux criterion, because the density gradient now also depends on concentration gradients.
For a polytrope solution with "n"=3 (as in the Eddington stellar model for radiation zone), "P" is proportional to "T"4 and the left-hand side is constant and equals 1/4, smaller than the ideal monatomic gas approximation for the right-hand side giving formula_14. This explains the stability of the radiation zone against convection.
However, at a large enough radius, the opacity κ increases due to the decrease in temperature (by Kramers' opacity law), and possibly also due to a smaller degree of ionization in the lower shells of heavy elements ions. This leads to a violation of the stability criterion and to the creation of the convection zone; in the sun, opacity increases by more than a tenfold across the radiation zone, before the transition to the convection zone happens.
Additional situations in which this stability criterion is not met are a high value of formula_15, as in the core of a massive star where energy production is strongly concentrated (this raises the left-hand side of the criterion), and a lower adiabatic index, for example formula_16 instead of formula_14 in zones of partial ionization (this lowers the right-hand side of the criterion).
Main sequence stars.
For main sequence stars (those stars that are generating energy through the thermonuclear fusion of hydrogen at the core), the presence and location of radiative regions depend on the star's mass. Main sequence stars below about 0.3 solar masses are entirely convective, meaning they do not have a radiative zone. From 0.3 to 1.2 solar masses, the region around the stellar core is a radiation zone, separated from the overlying convection zone by the tachocline. The radius of the radiative zone increases monotonically with mass, with stars around 1.2 solar masses being almost entirely radiative. Above 1.2 solar masses, the core region becomes a convection zone and the overlying region is a radiation zone, with the amount of mass within the convective zone increasing with the mass of the star.
The Sun.
In the Sun, the region between the solar core at 0.2 of the Sun's radius and the outer convection zone at 0.71 of the Sun's radius is referred to as the radiation zone, although the core is also a radiative region. The convection zone and the radiation zone are divided by the tachocline, another part of the Sun.
Notes and references.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{\\text{d}T(r)}{\\text{d}r}\\ =\\ -\\frac{3 \\kappa(r) \\rho(r) L(r)}{(4 \\pi r^2)(16 \\sigma_B) T^3(r)}"
},
{
"math_id": 1,
"text": "L = -4\\pi\\,r^2 D\\frac{\\partial u}{\\partial r}"
},
{
"math_id": 2,
"text": "U = \\frac{4}{c} \\, \\sigma_B \\, T^4 "
},
{
"math_id": 3,
"text": " D = \\frac{1}{3}c\\,\\lambda "
},
{
"math_id": 4,
"text": "\\beta P = k_B\\frac{\\rho}{\\mu}T"
},
{
"math_id": 5,
"text": "1-\\beta = \\frac{P_\\text{radiation}}{P} =\\frac{u}{3P} =\\frac{4\\sigma_B}{3c} \\frac{T^4}{P} "
},
{
"math_id": 6,
"text": "P = \\left(\\frac{3c k_B^4}{4\\sigma_B\\mu^4}\\frac{1-\\beta}{\\beta^4}\\right)^{1/3}\\rho^{4/3}"
},
{
"math_id": 7,
"text": "-\\frac{GM\\rho}{r^2} = \\frac{\\text{d}P}{\\text{d}r} = \\frac{16\\sigma_B}{3c(1-\\beta)}T^3\\frac{\\text{d}T}{\\text{d}r}"
},
{
"math_id": 8,
"text": "GM = \\frac{\\kappa L}{4\\pi c (1-\\beta)}"
},
{
"math_id": 9,
"text": "\\frac{\\text{d}\\,\\log\\,\\rho}{\\text{d}\\,\\log\\, P} > \\frac{1}{\\gamma_{ad}}"
},
{
"math_id": 10,
"text": "\\gamma_{ad}"
},
{
"math_id": 11,
"text": "\\frac{\\text{d}\\,\\log\\,T}{\\text{d}\\,\\log\\, P} < 1-\\frac{1}{\\gamma_{ad}}"
},
{
"math_id": 12,
"text": "\\frac{\\text{d}P(r)}{\\text{d}r}\\ =\\ g\\rho \\ = \\ \\frac{G\\,M(r)\\,\\rho(r)}{r^2}"
},
{
"math_id": 13,
"text": "\\frac{3}{64\\pi\\sigma_B\\,G} \\frac{\\kappa\\,L}{M}\\frac{P}{T^4} < 1-\\frac{1}{\\gamma_{ad}}"
},
{
"math_id": 14,
"text": "1-1/\\gamma_{ad}=2/5"
},
{
"math_id": 15,
"text": "L(r)/M(r)"
},
{
"math_id": 16,
"text": "1-1/\\gamma_{ad}=1/6"
}
] | https://en.wikipedia.org/wiki?curid=1235913 |
1235977 | Dirichlet integral | Integral of sin(x)/x from 0 to infinity.
In mathematics, there are several integrals known as the Dirichlet integral, after the German mathematician Peter Gustav Lejeune Dirichlet, one of which is the improper integral of the sinc function over the positive real line:
formula_0
This integral is not absolutely convergent, meaning that formula_1 has an infinite Lebesgue or improper Riemann integral over the positive real line, so the sinc function is not Lebesgue integrable over the positive real line. The sinc function is, however, integrable in the sense of the improper Riemann integral or the generalized Riemann or Henstock–Kurzweil integral. This can be seen by using Dirichlet's test for improper integrals.
It is a good illustration of special techniques for evaluating definite integrals, particularly when it is not useful to directly apply the fundamental theorem of calculus due to the lack of an elementary antiderivative for the integrand, as the sine integral, an antiderivative of the sinc function, is not an elementary function. In this case, the improper definite integral can be determined in several ways: the Laplace transform, double integration, differentiating under the integral sign, contour integration, and the Dirichlet kernel.
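Before turning to the analytic evaluations, the value π/2 can be checked numerically; one convenient route (an illustrative sketch) uses scipy's sine integral function Si, since Si(x) tends to π/2 as x grows.

```python
import numpy as np
from scipy.special import sici  # sici(x) returns the pair (Si(x), Ci(x))

si, _ = sici(1.0e6)       # Si at a large upper limit approximates the improper integral
print(si, np.pi / 2)      # both are close to 1.5707963...
```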
Evaluation.
Laplace transform.
Let formula_2 be a function defined whenever formula_3 Then its Laplace transform is given by
formula_4
if the integral exists.
A property of the Laplace transform useful for evaluating improper integrals is
formula_5
provided formula_6 exists.
In what follows, one needs the result formula_7 which is the Laplace transform of the function formula_8 (see the section 'Differentiating under the integral sign' for a derivation) as well as a version of Abel's theorem (a consequence of the final value theorem for the Laplace transform).
Therefore,
formula_9
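The intermediate result used here, that the Laplace-transformed integral equals π/2 − arctan("s") for "s" > 0, can also be checked numerically for a sample value of "s" (a small sketch; np.sinc is used only to handle the removable singularity of sin("t")/"t" at "t" = 0):

```python
import numpy as np
from scipy.integrate import quad

s = 0.5
integrand = lambda t: np.exp(-s * t) * np.sinc(t / np.pi)  # np.sinc(x) = sin(pi*x)/(pi*x)
numeric, _ = quad(integrand, 0.0, np.inf)
analytic = np.pi / 2 - np.arctan(s)
print(numeric, analytic)  # the two values agree to many digits
```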
Double integration.
Evaluating the Dirichlet integral using the Laplace transform is equivalent to calculating the same double definite integral by changing the order of integration, namely,
formula_10
formula_11
The change of order is justified by the fact that for all formula_12, the integral is absolutely convergent.
Differentiation under the integral sign (Feynman's trick).
First rewrite the integral as a function of the additional variable formula_13 namely, the Laplace transform of formula_14 So let
formula_15
In order to evaluate the Dirichlet integral, we need to determine formula_16 The continuity of formula_17 can be justified by applying the dominated convergence theorem after integration by parts. Differentiate with respect to formula_18 and apply the Leibniz rule for differentiating under the integral sign to obtain
formula_19
Now, using Euler's formula formula_20 one can express the sine function in terms of complex exponentials:
formula_21
Therefore,
formula_22
Integrating with respect to formula_23 gives
formula_24
where formula_25 is a constant of integration to be determined. Since formula_26 formula_27 using the principal value. This means that for formula_12
formula_28
Finally, by continuity at formula_29 we have formula_30 as before.
Complex contour integration.
Consider formula_31
As a function of the complex variable formula_32 it has a simple pole at the origin, which prevents the application of Jordan's lemma, whose other hypotheses are satisfied.
Define then a new function
formula_33
The pole has been moved to the negative imaginary axis, so formula_34 can be integrated along the semicircle formula_35 of radius formula_36 centered at formula_37 extending in the positive imaginary direction, and closed along the real axis. One then takes the limit formula_38
The complex integral is zero by the residue theorem, as there are no poles inside the integration path formula_35:
formula_39
The second term vanishes as formula_36 goes to infinity. As for the first integral, one can use one version of the Sokhotski–Plemelj theorem for integrals over the real line: for a complex-valued function f defined and continuously differentiable on the real line and real constants formula_40 and formula_41 with formula_42 one finds
formula_43
where formula_44 denotes the Cauchy principal value. Back to the above original calculation, one can write
formula_45
By taking the imaginary part on both sides and noting that the function formula_46 is even, we get
formula_47
Finally,
formula_48
Alternatively, choose as the integration contour for formula_17 the union of upper half-plane semicircles of radii formula_49 and formula_36 together with two segments of the real line that connect them. On one hand the contour integral is zero, independently of formula_49 and formula_50 on the other hand, as formula_51 and formula_52 the integral's imaginary part converges to formula_53 (here formula_54 is any branch of logarithm on upper half-plane), leading to formula_55
Dirichlet kernel.
Consider the well-known formula for the Dirichlet kernel: formula_56
It immediately follows that: formula_57
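Both statements are easy to check (an illustrative sketch): the kernel formula can be verified pointwise as below, and the integral then follows because each cosine term in the sum integrates to zero over the interval, leaving only the constant term.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 7
x = rng.uniform(0.1, 1.4, size=5)          # a few sample points in (0, pi/2)
lhs = 1 + 2 * sum(np.cos(2 * k * x) for k in range(1, n + 1))
rhs = np.sin((2 * n + 1) * x) / np.sin(x)
print(np.max(np.abs(lhs - rhs)))           # ~1e-15: the two sides agree
```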
Define
formula_58
Clearly, formula_17 is continuous when formula_59 to see its continuity at 0 apply L'Hopital's Rule:
formula_60
Hence, formula_17 fulfills the requirements of the Riemann-Lebesgue Lemma. This means:
formula_61
We would like to compute:
formula_62
However, we must justify switching the real limit in formula_63 to the integral limit in formula_64 which will follow from showing that the limit does exist.
Using integration by parts, we have:
formula_65
Now, as formula_66 and formula_67 the term on the left converges with no problem. See the list of limits of trigonometric functions. We now show that formula_68 is absolutely integrable, which implies that the limit exists.
First, we seek to bound the integral near the origin. Using the Taylor-series expansion of the cosine about zero,
formula_69
Therefore,
formula_70
Splitting the integral into pieces, we have
formula_71
for some constant formula_72 This shows that the integral is absolutely integrable, which implies the original integral exists, and switching from formula_63 to formula_73 was in fact justified, and the proof is complete.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\int_0^\\infty \\frac{\\sin x}{x} \\,dx = \\frac{\\pi}{2}."
},
{
"math_id": 1,
"text": "\\left| \\frac{\\sin x}{x} \\right|"
},
{
"math_id": 2,
"text": "f(t)"
},
{
"math_id": 3,
"text": "t \\geq 0."
},
{
"math_id": 4,
"text": "\\mathcal{L} \\{f(t)\\} = F(s) = \\int_{0}^{\\infty} e^{-st} f(t) \\,dt,"
},
{
"math_id": 5,
"text": " \\mathcal{L} \\left [ \\frac{f(t)}{t} \\right] = \\int_{s}^{\\infty} F(u) \\, du,\n"
},
{
"math_id": 6,
"text": "\\lim_{t \\to 0} \\frac{f(t)}{t}"
},
{
"math_id": 7,
"text": "\\mathcal{L}\\{\\sin t\\} = \\frac{1}{s^2 + 1},"
},
{
"math_id": 8,
"text": "\\sin t"
},
{
"math_id": 9,
"text": " \n\\begin{align}\n\\int_{0}^{\\infty} \\frac{\\sin t}{t} \\, dt\n&= \\lim_{s \\to 0} \\int_{0}^{\\infty} e^{-st} \\frac{\\sin t}{t} \\, dt\n= \\lim_{s \\to 0} \\mathcal{L} \\left [ \\frac{\\sin t}{t} \\right] \\\\[6pt]\n&= \\lim_{s \\to 0} \\int_{s}^{\\infty} \\frac{du}{u^2 + 1}\n= \\lim_{s \\to 0} \\arctan u \\Biggr|_{s}^{\\infty} \\\\[6pt]\n&= \\lim_{s \\to 0} \\left[ \\frac{\\pi}{2} - \\arctan (s)\\right]\n= \\frac{\\pi}{2}.\n\\end{align} "
},
{
"math_id": 10,
"text": "\n\\left( I_1 = \\int_0^\\infty \\int _0^\\infty e^{-st} \\sin t \\,dt \\,ds \\right) = \\left( I_2 = \\int_0^\\infty \\int _0^\\infty e^{-st} \\sin t \\,ds \\,dt \\right),"
},
{
"math_id": 11,
"text": "\\left( I_1 = \\int_0^\\infty \\frac{1}{s^2 + 1} \\,ds = \\frac{\\pi}{2} \\right) = \\left( I_2 = \\int_0^\\infty \\frac{\\sin t}{t} \\,dt \\right), \\text{ provided } s > 0.\n"
},
{
"math_id": 12,
"text": "s > 0"
},
{
"math_id": 13,
"text": "s,"
},
{
"math_id": 14,
"text": "\\frac{\\sin t} t."
},
{
"math_id": 15,
"text": "f(s)=\\int_0^\\infty e^{-st} \\frac{\\sin t} t \\, dt."
},
{
"math_id": 16,
"text": "f(0)."
},
{
"math_id": 17,
"text": "f"
},
{
"math_id": 18,
"text": "s>0"
},
{
"math_id": 19,
"text": "\n\\begin{align}\n\\frac{df}{ds} & = \\frac{d}{ds}\\int_0^\\infty e^{-st} \\frac{\\sin t}{t} \\, dt = \\int_0^\\infty \\frac{\\partial}{\\partial s}e^{-st}\\frac{\\sin t} t \\, dt \\\\[6pt]\n& = -\\int_0^\\infty e^{-st} \\sin t \\, dt.\n\\end{align}\n"
},
{
"math_id": 20,
"text": "e^{it} = \\cos t + i\\sin t,"
},
{
"math_id": 21,
"text": "\n\\sin t = \\frac{1}{2i} \\left( e^{i t} - e^{-it}\\right).\n"
},
{
"math_id": 22,
"text": "\n\\begin{align}\n\\frac{df}{ds} & = -\\int_0^\\infty e^{-st} \\sin t \\, dt = -\\int_{0}^{\\infty} e^{-st} \\frac{e^{it} - e^{-it}}{2i} dt \\\\[6pt]\n&= -\\frac{1}{2i} \\int_{0}^{\\infty} \\left[ e^{-t(s-i)} - e^{-t(s + i)} \\right] dt \\\\[6pt]\n&= -\\frac{1}{2i} \\left [ \\frac{-1}{s - i} e^{-t (s - i)} - \\frac{-1}{s + i} e^{-t (s + i)}\\right]_0^{\\infty} \\\\[6pt]\n&= -\\frac{1}{2i} \\left[ 0 - \\left( \\frac{-1}{s - i} + \\frac{1}{s + i} \\right) \\right] = -\\frac{1}{2i} \\left( \\frac{1}{s - i} - \\frac{1}{s + i} \\right) \\\\[6pt]\n&= -\\frac{1}{2i} \\left( \\frac{s + i - (s -i)}{s^2 + 1} \\right) = -\\frac{1}{s^2 + 1}.\n\\end{align}\n"
},
{
"math_id": 23,
"text": "s"
},
{
"math_id": 24,
"text": "f(s) = \\int \\frac{-ds}{s^2 + 1} = A - \\arctan s,"
},
{
"math_id": 25,
"text": "A"
},
{
"math_id": 26,
"text": "\\lim_{s \\to \\infty} f(s) = 0,"
},
{
"math_id": 27,
"text": "A = \\lim_{s \\to \\infty} \\arctan s = \\frac{\\pi}{2},"
},
{
"math_id": 28,
"text": "f(s) = \\frac{\\pi}{2} - \\arctan s."
},
{
"math_id": 29,
"text": "s = 0,"
},
{
"math_id": 30,
"text": "f(0) = \\frac{\\pi}{2} - \\arctan(0) = \\frac{\\pi}{2},"
},
{
"math_id": 31,
"text": "f(z) = \\frac{e^{iz}} z."
},
{
"math_id": 32,
"text": "z,"
},
{
"math_id": 33,
"text": "g(z) = \\frac{e^{iz}}{z + i\\varepsilon}."
},
{
"math_id": 34,
"text": "g(z)"
},
{
"math_id": 35,
"text": "\\gamma"
},
{
"math_id": 36,
"text": "R"
},
{
"math_id": 37,
"text": "z = 0"
},
{
"math_id": 38,
"text": "\\varepsilon \\to 0."
},
{
"math_id": 39,
"text": "0 = \\int_\\gamma g(z) \\,dz = \\int_{-R}^R \\frac{e^{ix}}{x + i\\varepsilon} \\, dx + \\int_0^\\pi \\frac{e^{i(Re^{i\\theta} + \\theta)}}{Re^{i\\theta} + i\\varepsilon} iR \\, d\\theta."
},
{
"math_id": 40,
"text": "a"
},
{
"math_id": 41,
"text": "b"
},
{
"math_id": 42,
"text": "a < 0 < b"
},
{
"math_id": 43,
"text": "\\lim_{\\varepsilon \\to 0^+} \\int_a^b \\frac{f(x)}{x \\pm i \\varepsilon} \\,dx = \\mp i \\pi f(0) + \\mathcal{P} \\int_a^b \\frac{f(x)}{x} \\,dx,"
},
{
"math_id": 44,
"text": "\\mathcal{P}"
},
{
"math_id": 45,
"text": "0 = \\mathcal{P} \\int \\frac{e^{ix}}{x} \\, dx - \\pi i."
},
{
"math_id": 46,
"text": "\\sin(x)/x"
},
{
"math_id": 47,
"text": "\\int_{-\\infty}^{+\\infty} \\frac{\\sin(x)}{x} \\,dx = 2 \\int_0^{+\\infty} \\frac{\\sin(x)}{x} \\,dx."
},
{
"math_id": 48,
"text": "\\lim_{\\varepsilon \\to 0} \\int_\\varepsilon^\\infty \\frac{\\sin(x)}{x} \\, dx = \\int_0^\\infty \\frac{\\sin(x)}{x} \\, dx = \\frac \\pi 2."
},
{
"math_id": 49,
"text": "\\varepsilon"
},
{
"math_id": 50,
"text": "R;"
},
{
"math_id": 51,
"text": "\\varepsilon \\to 0"
},
{
"math_id": 52,
"text": "R \\to \\infty"
},
{
"math_id": 53,
"text": "2 I + \\Im\\big(\\ln 0 - \\ln(\\pi i)\\big) = 2I - \\pi"
},
{
"math_id": 54,
"text": "\\ln z"
},
{
"math_id": 55,
"text": "I = \\frac{\\pi}{2}."
},
{
"math_id": 56,
"text": "\nD_n(x) = 1 + 2\\sum_{k=1}^n \\cos(2kx) = \\frac{\\sin[(2n+1)x]}{\\sin(x)}.\n"
},
{
"math_id": 57,
"text": "\n\\int_0^{\\frac{\\pi}{2}} D_n(x)\\, dx = \\frac{\\pi}{2}.\n"
},
{
"math_id": 58,
"text": "f(x) =\n\\begin{cases}\n\\frac{1}{x} - \\frac{1}{\\sin(x)} & x \\neq 0 \\\\[6pt]\n0 & x = 0\n\\end{cases}\n"
},
{
"math_id": 59,
"text": " x \\in (0,\\pi/2] ;"
},
{
"math_id": 60,
"text": "\n\\lim_{x\\to 0} \\frac{\\sin(x) - x}{x\\sin(x)} = \n\\lim_{x\\to 0} \\frac{\\cos(x) - 1}{\\sin(x) + x\\cos(x)} = \n\\lim_{x\\to 0} \\frac{-\\sin(x)}{2\\cos(x) - x\\sin(x)} = 0.\n"
},
{
"math_id": 61,
"text": "\n\\lim_{\\lambda \\to \\infty} \\int_0^{\\pi/2} f(x)\\sin(\\lambda x)dx = 0 \n\\quad\\Longrightarrow\\quad\n\\lim_{\\lambda \\to \\infty} \\int_0^{\\pi/2} \\frac{\\sin(\\lambda x)}{x}dx = \n \\lim_{\\lambda \\to \\infty} \\int_0^{\\pi/2} \\frac{\\sin(\\lambda x)}{\\sin(x)}dx.\n"
},
{
"math_id": 62,
"text": "\n\\begin{align}\n\\int_0^\\infty \\frac{\\sin(t)}{t}dt\n= & \\lim_{\\lambda \\to \\infty} \\int_0^{\\lambda\\frac{\\pi}{2}} \\frac{\\sin(t)}{t}dt \\\\[6pt]\n= & \\lim_{\\lambda \\to \\infty} \\int_0^{\\frac{\\pi}{2}} \\frac{\\sin(\\lambda x)}{x}dx \\\\[6pt]\n= & \\lim_{\\lambda \\to \\infty} \\int_0^{\\frac{\\pi}{2}} \\frac{\\sin(\\lambda x)}{\\sin(x)}dx \\\\[6pt]\n= & \\lim_{n\\to \\infty} \\int_0^{\\frac{\\pi}{2}} \\frac{\\sin((2n+1)x)}{\\sin(x)}dx \\\\[6pt]\n= & \\lim_{n\\to \\infty} \\int_0^{\\frac{\\pi}{2}} D_n(x) dx = \\frac{\\pi}{2}\n\\end{align} "
},
{
"math_id": 63,
"text": "\\lambda"
},
{
"math_id": 64,
"text": "n,"
},
{
"math_id": 65,
"text": "\n\\int_a^b \\frac{\\sin(x)}{x}dx = \n\\int_a^b \\frac{d(1-\\cos(x))}{x}dx = \n\\left. \\frac{1-\\cos(x)}{x}\\right|_a^b + \\int_a^b \\frac{1-\\cos(x)}{x^2}dx\n"
},
{
"math_id": 66,
"text": "a \\to 0"
},
{
"math_id": 67,
"text": " b \\to \\infty"
},
{
"math_id": 68,
"text": " \\int_{-\\infty}^{\\infty} \\frac{1-\\cos(x)}{x^2}dx "
},
{
"math_id": 69,
"text": "\n1 - \\cos(x) = 1 - \\sum_{k\\geq 0}\\frac{{(-1)^{(k+1)}}x^{2k}}{2k!} = \\sum_{k\\geq 1}\\frac{{(-1)^{(k+1)}}x^{2k}}{2k!}.\n"
},
{
"math_id": 70,
"text": "\n\\left|\\frac{1 - \\cos(x)}{x^2}\\right| \n = \\left|-\\sum_{k\\geq 0}\\frac{x^{2k}}{2(k+1)!}\\right|\n\\leq \\sum_{k\\geq 0} \\frac{|x|^k}{k!}\n = e^{|x|}.\n"
},
{
"math_id": 71,
"text": "\n \\int_{-\\infty}^{\\infty}\\left|\\frac{1-\\cos(x)}{x^2}\\right|dx\n\\leq \\int_{-\\infty}^{-\\varepsilon} \\frac{2}{x^2}dx + \n \\int_{-\\varepsilon}^{\\varepsilon} e^{|x|}dx + \n \\int_{\\varepsilon}^{\\infty} \\frac{2}{x^2}dx\n\\leq K,\n"
},
{
"math_id": 72,
"text": "K > 0."
},
{
"math_id": 73,
"text": "n"
}
] | https://en.wikipedia.org/wiki?curid=1235977 |
1236075 | Quartz crystal microbalance | A quartz crystal microbalance (QCM) (also known as "quartz microbalance" (QMB), sometimes also as "quartz crystal nanobalance" (QCN)) measures a mass variation per unit area by measuring the change in frequency of a quartz crystal resonator. The resonance is disturbed by the addition or removal of a small mass due to oxide growth/decay or film deposition at the surface of the acoustic resonator. The QCM can be used under vacuum, in gas phase ("gas sensor", first use described by King) and more recently in liquid environments. It is useful for monitoring the rate of deposition in thin-film deposition systems under vacuum. In liquid, it is highly effective at determining the affinity of molecules (proteins, in particular) to surfaces functionalized with recognition sites. Larger entities such as viruses or polymers are investigated as well. QCM has also been used to investigate interactions between biomolecules. Frequency measurements are easily made to high precision (discussed below); hence, it is easy to measure mass densities down to a level of below 1 μg/cm2. In addition to measuring the frequency, the dissipation factor (equivalent to the resonance bandwidth) is often measured to help analysis. The dissipation factor is the inverse quality factor of the resonance, Q−1 = w/fr (see below); it quantifies the damping in the system and is related to the sample's viscoelastic properties.
General.
Quartz is one member of a family of crystals that experience the piezoelectric effect. The piezoelectric effect has found applications in high power sources, sensors, actuators, frequency standards, motors, etc., and the relationship between applied voltage and mechanical deformation is well known; this allows probing an acoustic resonance by electrical means. Applying alternating current to the quartz crystal will induce oscillations. With an alternating current between the electrodes of a properly cut crystal, a standing shear wave is generated. The Q factor, which is the ratio of frequency and bandwidth, can be as high as 10⁶. Such a narrow resonance leads to highly stable oscillators and a high accuracy in the determination of the resonance frequency. The QCM exploits this ease and precision for sensing. Common equipment allows resolution down to 1 Hz on crystals with a fundamental resonant frequency in the 4 – 6 MHz range. A typical setup for the QCM contains water cooling tubes, the retaining unit, frequency sensing equipment through a microdot feed-through, an oscillation source, and a measurement and recording device.
The frequency of oscillation of the quartz crystal is partially dependent on the thickness of the crystal. During normal operation, all the other influencing variables remain constant; thus a change in thickness correlates directly to a change in frequency. As mass is deposited on the surface of the crystal, the thickness increases; consequently the frequency of oscillation decreases from the initial value. With some simplifying assumptions, this frequency change can be quantified and correlated precisely to the mass change using the Sauerbrey equation. Other techniques for measuring the properties of thin films include ellipsometry, surface plasmon resonance (SPR) spectroscopy, Multi-Parametric Surface Plasmon Resonance and dual polarisation interferometry.
Gravimetric and non-gravimetric QCM.
The classical sensing application of quartz crystal resonators is microgravimetry. Many commercial instruments, some of which are called thickness monitors, are available. These devices exploit the Sauerbrey relation. For thin films, the resonance frequency is usually inversely proportional to the total thickness of the plate. The latter increases when a film is deposited onto the crystal surface. Monolayer sensitivity is easily reached. However, when the film thickness increases, viscoelastic effects come into play. In the late 1980s, it was recognized that the QCM can also be operated in liquids, if proper measures are taken to overcome the consequences of the large damping. Again, viscoelastic effects contribute strongly to the resonance properties.
Today, microweighing is one of several uses of the QCM.
Measurements of viscosity and more general, viscoelastic properties, are of much importance as well. The "non-gravimetric" QCM is by no means an alternative to the conventional QCM. Many researchers, who use quartz resonators for purposes other than gravimetry, have continued to call the quartz crystal resonator "QCM". Actually, the term "balance" makes sense even for non-gravimetric applications if it is understood in the sense of a force balance. At resonance, the force exerted upon the crystal by the sample is balanced by a force originating from the shear gradient inside the crystal. This is the essence of the small-load approximation.
The QCM measures inertial mass, and therefore by operating at a high resonant frequency it can be made very sensitive to small changes in that inertia as material is added to (or removed from) its surface. The sensitivity of gravitational mass measurements is, by comparison, limited by the Earth's gravitational field strength. We normally think of a balance as a way of measuring (or comparing) gravitational mass, as measured by the force that the earth exerts on the body being weighed. A few experiments have demonstrated a direct link between QCM and the SI system by comparing traceable (gravitational mass) weighings with QCM measurements.
Crystalline α–quartz is by far the most important material for thickness-shear resonators. Langasite (La3Ga5SiO14, "LGS") and gallium-orthophosphate (GaPO4) are investigated as alternatives to quartz, mainly (but not only) for use at high temperatures. Such devices are also called "QCM", even though they are not made out of quartz (and may or may not be used for gravimetry).
Surface acoustic wave-based sensors.
The QCM is a member of a wider class of sensing instruments based on acoustic waves at surfaces. Instruments sharing similar principles of operation are shear horizontal surface acoustic wave (SH-SAW) devices, Love-wave devices and torsional resonators. Surface acoustic wave-based devices make use of the fact that the reflectivity of an acoustic wave at the crystal surface depends on the impedance (the stress-to-speed ratio) of the adjacent medium. (Some acoustic sensors for temperature or pressure make use of the fact that the speed of sound inside the crystal depends on temperature, pressure, or bending. These sensors do not exploit surface effects.) In the context of surface-acoustic wave based sensing, the QCM is also termed "bulk acoustic wave resonator (BAW-resonator)" or "thickness-shear resonator". The displacement pattern of an unloaded BAW resonator is a standing shear wave with anti-nodes at the crystal surface. This makes the analysis particularly easy and transparent.
Instrumental.
Resonator crystals.
When the QCM was first developed, natural quartz was harvested, selected for its quality and then cut in the lab. However, most of today's crystals are grown using seed crystals. A seed crystal serves as an anchoring point and template for crystal growth. Grown crystals are subsequently cut and polished into hair-thin discs which support thickness shear resonance in the 1-30 MHz range. The "AT" or "SC" oriented cuts (discussed below) are widely used in applications.
Electromechanical coupling.
The QCM consists of a thin piezoelectric plate with electrodes evaporated onto both sides. Due to the piezo-effect, an AC voltage across the electrodes induces a shear deformation and vice versa. The electromechanical coupling provides a simple way to detect an acoustic resonance by electrical means. Otherwise, it is of minor importance. However, electromechanical coupling can have a slight influence on the resonance frequency via piezoelectric stiffening. This effect can be used for sensing, but is usually avoided. It is essential to have the electric and dielectric boundary conditions well under control. Grounding the front electrode (the electrode in contact with the sample) is one option. A π-network sometimes is employed for the same reason. A π-network is an arrangement of resistors, which almost short-circuit the two electrodes. This makes the device less susceptible to electrical perturbations.
Shear waves decay in liquids and gases.
Most acoustic-wave-based sensors employ shear (transverse) waves. Shear waves decay rapidly in liquid and gaseous environments. Compressional (longitudinal) waves would be radiated into the bulk and potentially be reflected back to the crystal from the opposing cell wall. Such reflections are avoided with transverse waves. The range of penetration of a 5 MHz-shear wave in water is 250 nm. This finite penetration depth renders the QCM surface-specific. Also, liquids and gases have a rather small shear-acoustic impedance and therefore only weakly damp the oscillation. The exceptionally high Q-factors of acoustic resonators are linked to their weak coupling to the environment.
Modes of operation.
Economic ways of driving a QCM make use of oscillator circuits. Oscillator circuits are also widely employed in time and frequency control applications, where the oscillator serves as a clock. Other modes of operation are impedance analysis, QCM-I, and ring-down, QCM-D. In impedance analysis, the electric conductance as a function of driving frequency is determined by means of a network analyzer. By fitting a resonance curve to the conductance curve, one obtains the frequency and bandwidth of the resonance as fit parameters. In ring-down, one measures the voltage between the electrodes after the exciting voltage has suddenly been turned off. The resonator emits a decaying sine wave, where the resonance parameters are extracted from the period of oscillation and the decay rate.
Energy trapping.
To avoid dissipation of vibration energy (damping the oscillation) by the crystal holder, which touches the crystal at the rim, the vibration should be confined to the center of the crystal platelet. This is known as energy trapping.
For crystals with high frequencies (10 MHz and higher), the electrodes at the front and the back of the crystal usually are key-hole shaped, thereby making the resonator thicker in the center than at the rim. The mass of the electrodes confines the displacement field to the center of the crystal disk. QCM crystals with vibration frequencies around 5 or 6 MHz usually have a planoconvex shape; at the rim the crystal is too thin for a standing wave with the resonance frequency.
Thus, in both cases the thickness-shear vibration amplitude is greatest at the center of the disk. This means that the mass-sensitivity is peaked at the center also, with this sensitivity declining smoothly to zero towards the rim (For high-frequency crystals, the amplitude vanishes already somewhat outside the perimeter of the smallest electrode.) The mass-sensitivity is therefore very non-uniform across the crystal surface, and this non-uniformity is a function of the mass-distribution of the metal electrodes (or in the case of non-planar resonators, the quartz crystal thickness itself).
Energy trapping slightly distorts the otherwise planar wave fronts. The deviation from the plane thickness-shear mode entails flexural contribution to the displacement pattern. If the crystal is not operated in vacuum, flexural waves emit compressional waves into the adjacent medium, which is a problem when operating the crystal in a liquid environment. Standing compressional waves form in the liquid between the crystals and the container walls (or the liquid surface); these waves modify both the frequency and the damping of the crystal resonator.
Overtones.
Planar resonators can be operated at a number of overtones, typically indexed by the number of nodal planes parallel to the crystal surfaces. Only odd harmonics can be excited electrically because only these induce charges of opposite sign at the two crystal surfaces. Overtones are to be distinguished from anharmonic side bands (spurious modes), which have nodal planes perpendicular to the plane of the resonator. The best agreement between theory and experiment is reached with planar, optically polished crystals for overtone orders between "n" = 5 and "n" = 13. On low harmonics, energy trapping is insufficient, while on high harmonics, anharmonic side bands interfere with the main resonance.
Amplitude of motion.
The amplitude of lateral displacement rarely exceeds a nanometer. More specifically one has
formula_0
with "u"0 the amplitude of lateral displacement, "n" the overtone order, "d" the piezoelectric strain coefficient, "Q" the quality factor, and "U"el the amplitude of electrical driving. The piezoelectric strain coefficient is given as "d" = 3.1·10‑12 m/V for AT-cut quartz crystals. Due to the small amplitude, stress and strain usually are proportional to each other. The QCM operates in the range of linear acoustics.
Effects of temperature and stress.
The resonance frequency of acoustic resonators depends on temperature, pressure, and bending stress. Temperature-frequency coupling is minimized by employing special crystal cuts. A widely used temperature-compensated cut of quartz is the AT-cut. Careful control of temperature and stress is essential in the operation of the QCM.
AT-cut crystals are singularly rotated Y-axis cuts in which the top and bottom half of the crystal move in opposite directions (thickness shear vibration) during oscillation.
The AT-cut crystal is easily manufactured. However, it has limitations at high and low temperature, as it is easily disrupted by internal stresses caused by temperature gradients in these temperature extremes (relative to room temperature, ~25 °C). These internal stress points produce undesirable frequency shifts in the crystal, decreasing its accuracy. The relationship between temperature and frequency is cubic. The cubic relationship has an inflection point near room temperature. As a consequence the AT-cut quartz crystal is most effective when operating at or near room temperature. For applications which are above room temperature, water cooling is often helpful.
Stress-compensated (SC) crystals are available with a doubly rotated cut that minimizes the frequency changes due to temperature gradients when the system is operating at high temperatures, and reduces the reliance on water cooling. SC-cut crystals have an inflection point of ~92 °C. In addition to their high temperature inflection point, they also have a smoother cubic relationship and are less affected by temperature deviations from the inflection point. However, due to the more difficult manufacturing process, they are more expensive and are not widely commercially available.
Electrochemical QCM.
The QCM can be combined with other surface-analytical instruments. The electrochemical QCM (EQCM) is particularly advanced. Using the EQCM, one determines the ratio of mass deposited at the electrode surface during an electrochemical reaction to the total charge passed through the electrode. This ratio is called the current efficiency.
Quantification of dissipative processes.
For advanced QCMs, such as QCM-I and QCM-D, both the resonance frequency, "f"r, and the bandwidth, "w", are available for analysis. The latter quantifies processes which withdraw energy from the oscillation. These may include damping by the holder and ohmic losses inside the electrode or the crystal. In the literature some parameters other than "w" itself are used to quantify bandwidth. The Q-factor (quality factor) is given by "Q" = "f"r/"w". The “dissipation factor”, "D", is the inverse of the Q-factor: "D" = "Q"−1 = "w"/"f"r. The half-band-half-width, Γ, is Γ = "w"/2. The use of Γ is motivated by a complex formulation of the equations governing the motion of the crystal. A complex resonance frequency is defined as "f"r* = "f"r + iΓ, where the imaginary part, Γ, is half the bandwidth at half maximum. Using a complex notation, one can treat shifts of frequency, Δ"f", and bandwidth, ΔΓ, within the same set of (complex) equations.
The motional resistance of the resonator, "R"1, is also used as a measure of dissipation. "R"1 is an output parameter of some instruments based on advanced oscillator circuits. "R"1 usually is not strictly proportional to the bandwidth (although it should be according to the BvD circuit; see below). Also, in absolute terms, "R"1 – being an electrical quantity and not a frequency – is more severely affected by calibration problems than the bandwidth.
Equivalent circuits.
Modeling of acoustic resonators often occurs with equivalent electrical circuits. Equivalent circuits are algebraically equivalent to the continuum mechanics description and to a description in terms of acoustic reflectivities. They provide for a graphical representation of the resonator's properties and their shifts upon loading. These representations are not just cartoons. They are tools to predict the shift of the resonance parameters in response to the addition of the load.
Equivalent circuits build on the electromechanical analogy. In the same way as the current through a network of resistors can be predicted from their arrangement and the applied voltage, the displacement of a network of mechanical elements can be predicted from the topology of the network and the applied force. The electro-mechanical analogy maps forces onto voltages and speeds onto currents. The ratio of force and speed is termed "mechanical impedance". Note: Here, speed means the time derivative of a displacement, not the speed of sound. There also is an electro-acoustic analogy, within which stresses (rather than forces) are mapped onto voltages. In acoustics, forces are normalized to area. The ratio of stress and speed should not be called "acoustic impedance" (in analogy to the mechanical impedance) because this term is already in use for the material property "Z"ac = ρ"c" with ρ the density and "c" the speed of sound). The ratio of stress and speed at the crystal surface is called load impedance, "Z"L. Synonymous terms are "surface impedance" and "acoustic load." The load impedance is in general not equal to the material constant "Z"ac = ρ"c" = ("G"ρ)1/2. Only for propagating plane waves are the values of "Z"L and "Z"ac the same.
The electro-mechanical analogy provides for mechanical equivalents of a resistor, an inductance, and a capacitance, which are the dashpot (quantified by the drag coefficient, ξp), the point mass (quantified by the mass, "m"p), and the spring (quantified by the spring constant, κp). For a dashpot, the impedance by definition is "Z"m="F" / (d"u"/d"t")=ξm with "F" the force and (d"u"/d"t") the speed). For a point mass undergoing oscillatory motion "u"("t") = "u"0 exp(iω"t") we have "Z"m = iω"m"p. The spring obeys "Z"m =κp/(iω). Piezoelectric coupling is depicted as a transformer. It is characterized by a parameter φ. While φ is dimensionless for usual transformers (the turns ratio), it has the dimension charge/length in the case of electromechanical coupling. The transformer acts as an impedance converter in the sense that a mechanical impedance, "Z"m, appears as an electrical impedance, "Z"el, across the electrical ports. " Z"el is given by "Z"el = φ2 "Z"m. For planar piezoelectric crystals, φ takes the value φ = "Ae"/"d"q, where "A" is the effective area, "e" is the piezoelectric stress coefficient ("e" = 9.65·10−2 C/m2 for AT-cut quartz) and "d"q is the thickness of the plate. The transformer often is not explicitly depicted. Rather, the mechanical elements are directly depicted as electrical elements (capacitor replaces a spring, etc.).
There is a pitfall with the application of the electro-mechanical analogy, which has to do with how networks are drawn. When a spring pulls onto a dashpot, one would usually draw the two elements in series. However, when applying the electro-mechanical analogy, the two elements have to be placed in parallel. For two parallel electrical elements the currents are additive. Since the speeds (= currents) add when placing a spring behind a dashpot, this assembly has to be represented by a parallel network.
The figure on the right shows the Butterworth-van Dyke (BvD) equivalent circuit. The acoustic properties of the crystal are represented by the motional inductance, "L"1, the motional capacitance, "C"1, and the motional resistance "R"1. "Z"L is the load impedance. Note that the load, "Z"L, cannot be determined from a single measurement. It is inferred from the comparison of the loaded and the unloaded state. Some authors use the BvD circuit without the load "Z"L. This circuit is also called “four element network”. The values of "L"1, "C"1, and "R"1 then change their value in the presence of the load (they do not if the element "Z"L is explicitly included).
Small-load approximation.
The BvD circuit predicts the resonance parameters. One can show that the following simple relation holds as long as the frequency shift is much smaller than the frequency itself:
formula_1
"f"f is the frequency of the fundamental. "Z"q is the acoustic impedance of material. For AT-cut quartz, its value is "Z"q = 8.8·106 kg m−2 s−1.
The small-load approximation is central to the interpretation of QCM-data. It holds for arbitrary samples and can be applied in an average sense. Assume that the sample is a complex material, such as a cell culture, a sand pile, a froth, an assembly of spheres or vesicles, or a droplet. If the average stress-to-speed ratio of the sample at the crystal surface (the load impedance, "Z"L) can be calculated in one way or another, a quantitative analysis of the QCM experiment is in reach. Otherwise, the interpretation will have to remain qualitative.
The limits of the small-load approximation are noticed either when the frequency shift is large or when the overtone-dependence of Δ"f" and Δ("w"/2) is analyzed in detail in order to derive the viscoelastic properties of the sample. A more general relation is
formula_2
This equation is implicit in Δ"f"*, and must be solved numerically. Approximate solutions also exist, which go beyond the small-load approximation. The small-load approximation is the first order solution of a perturbation analysis.
The definition of the load impedance implicitly assumes that stress and speed are proportional and that the ratio therefore is independent of speed. This assumption is justified when the crystal is operated in liquids and in air. The laws of linear acoustics then hold. However, when the crystal is in contact with a rough surface, stress can easily become a nonlinear function of strain (and speed) because the stress is transmitted across a finite number of rather small load-bearing asperities. The stress at the points of contact is high, and phenomena like slip, partial slip, yield, etc. set in. These are part of non-linear acoustics. There is a generalization of the small-load equation dealing with this problem. If the stress, σ("t"), is periodic in time and synchronous with the crystal oscillation one has
formula_3
formula_4
Angular brackets denote a time average and σ("t") is the (small) stress exerted by the external surface. The function σ(t) may or may not be harmonic. One can always test for nonlinear behavior by checking for a dependence of the resonance parameters on the driving voltage. If linear acoustics hold, there is no drive level-dependence. Note, however, that quartz crystals have an intrinsic drive level-dependence, which must not be confused with nonlinear interactions between the crystal and the sample.
Viscoelastic modeling.
Assumptions.
For a number of experimental configurations, there are explicit expressions relating the shifts of frequency and bandwidth to the sample properties. The assumptions underlying the equations are the following:
Semi-infinite viscoelastic medium.
For a semi-infinite medium, one has
formula_5
formula_6
η’ and η’’ are the real and the imaginary part of the viscosity, respectively. "Z"ac = ρ"c" =("G" ρ)1/2 is the acoustic impedance of the medium. ρ is the density, "c", the speed of sound, and "G" = i ωη is the shear modulus.
For Newtonian liquids (η’ = const, η’’ = 0), Δ"f" and Δ("w"/2) are equal and opposite. They scale as the square root of the overtone order, "n"1/2. For viscoelastic liquids (η’ = η(ω), η’’≠ 0), the complex viscosity can be obtained as
formula_7
formula_8
Importantly, the QCM only probes the region close to the crystal surface. The shear wave evanescently decays into the liquid. In water the penetration depth is about 250 nm at 5 MHz. Surface roughness, nano-bubbles at the surface, slip, and compressional waves can interfere with the measurement of viscosity. Also, the viscosity determined at MHz frequencies sometimes differs from the low-frequency viscosity. In this respect, torsional resonators (with a frequency around 100 kHz) are closer to application than thickness-shear resonators.
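Conversely, the two relations above can be inverted to estimate the complex viscosity from measured shifts. The minimal sketch below does this for illustrative, water-like values of Δ"f" and Δ("w"/2), so the extracted η’ should come out near 10−3 Pa·s with a negligible η’’.

```python
import numpy as np

# Recovering the complex viscosity from measured frequency and bandwidth shifts
# (relations given above).  The "measured" shifts are illustrative water-like values.
Z_q, f_f, n = 8.8e6, 5e6, 1
f = n * f_f
rho_liq = 1000.0                      # liquid density, kg/m^3 (assumed known)

delta_f, delta_hbw = -717.0, 717.0    # Hz

eta_real = -np.pi * Z_q**2 / (rho_liq * f) * delta_f * delta_hbw / f_f**2
eta_imag = 0.5 * np.pi * Z_q**2 / (rho_liq * f) * (delta_hbw**2 - delta_f**2) / f_f**2

print(f"eta'  = {eta_real:.2e} Pa s")   # ~1e-3 Pa s for a Newtonian liquid
print(f"eta'' = {eta_imag:.2e} Pa s")   # ~0
```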
Inertial loading (Sauerbrey equation).
The frequency shift induced by a thin sample which is rigidly coupled to the crystal (such as a thin film), is described by the Sauerbrey equation. The stress is governed by inertia, which implies σ = -ω2"u"0"m"F, where "u"0 is the amplitude of oscillation and "m"F is the (average) mass per unit area. Inserting this result into the small-load-approximation one finds
formula_9
If the density of the film is known, one can convert from mass per unit area, "m"F, to thickness, "d"F. The thickness thus derived is also called the Sauerbrey thickness to show that it was derived by applying the Sauerbrey equation to the frequency shift.
The shift in bandwidth is zero if the Sauerbrey equation holds. Checking for the bandwidth therefore amounts to checking the applicability of the Sauerbrey equation.
The Sauerbrey equation was first derived by Günter Sauerbrey in 1959 and correlates changes in the oscillation frequency of a piezoelectric crystal with mass deposited on it. He simultaneously developed a method for measuring the resonance frequency and its changes by using the crystal as the frequency-determining component of an oscillator circuit. His method continues to be used as the primary tool in quartz crystal microbalance experiments for conversion of frequency to mass.
Because the film is treated as an extension of thickness, Sauerbrey’s equation only applies to systems in which (a) the deposited mass has the same acoustic properties as the crystal and (b) the frequency change is small (Δ"f" / "f" < 0.05).
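The following sketch turns a measured frequency shift into a Sauerbrey mass and, with an assumed film density, a Sauerbrey thickness. The crystal constants are those quoted above for 5 MHz AT-cut quartz; the frequency shift and the film density are made-up example values.

```python
# Sauerbrey evaluation of a measured frequency shift (rigid thin film).
Z_q = 8.8e6          # kg m^-2 s^-1, AT-cut quartz
f_f = 5e6            # Hz, fundamental frequency
n = 1                # overtone order
delta_f = -100.0     # Hz, measured shift (illustrative)

m_F = -delta_f * Z_q / (2 * n * f_f**2)     # mass per unit area, kg/m^2
rho_F = 1200.0                              # assumed film density, kg/m^3
d_F = m_F / rho_F                           # Sauerbrey thickness, m

print(f"m_F = {m_F * 1e8:.0f} ng/cm^2")     # ~1760 ng/cm^2 for -100 Hz
print(f"d_F = {d_F * 1e9:.1f} nm")
```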
If the change in frequency is greater than 5%, that is, Δ"f" / "f" > 0.05, the Z-match method must be used to determine the change in mass. The formula for the Z-match method is:
formula_10
"k"F is the wave vector inside the film and "d"F its thickness. Inserting "k"F = 2·π·"f" /cF = 2·π·"f"·ρF / "Z"F as well as "d"F = "m"F / ρF yields
formula_11
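The brief sketch below compares the Sauerbrey estimate with the Z-match expression for increasing film thickness. The film density and acoustic impedance are placeholder values; the point is only that the two results coincide for thin films and begin to deviate as "k"F"d"F grows.

```python
import numpy as np

# Sauerbrey vs. Z-match frequency shift for increasing film thickness.
Z_q, f_f, n = 8.8e6, 5e6, 1
f = n * f_f
rho_F = 2000.0        # film density, kg/m^3 (assumed)
Z_F = 2.0e6           # film acoustic impedance, kg m^-2 s^-1 (assumed)

for d_F in (1e-6, 5e-6, 10e-6):                       # film thickness, m
    m_F = rho_F * d_F
    df_sauerbrey = -2 * f * f_f * m_F / Z_q
    df_zmatch = -(f_f / np.pi) * np.arctan((Z_F / Z_q) * np.tan(2 * np.pi * f * m_F / Z_F))
    print(f"d_F = {d_F*1e6:5.1f} um: Sauerbrey {df_sauerbrey:9.0f} Hz, Z-match {df_zmatch:9.0f} Hz")
```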
Viscoelastic film.
For a viscoelastic film, the frequency shift is
formula_12
Here "Z"F is the acoustic impedance of the film ("Z"F = ρF"c"F = (ρF"G"f)1/2)= (ρF/"J"f)1/2), "k"F is the wave vector and "d"F is the film thickness. "J"f is the film's viscoelastic compliance, ρF is the density.
The poles of the tangent ("k"F "d"F = π/2) define the film resonances. At the film resonance, one has "d"F = λ/4. The agreement between experiment and theory is often poor close to the film resonance. Typically, the QCM only works well for film thicknesses much less than a quarter of the wavelength of sound (corresponding to a few micrometres, depending on the softness of the film and the overtone order).
Note that the properties of a film as determined with the QCM are fully specified by two parameters, which are its acoustic impedance, "Z"F = ρF"c"F, and its mass per unit area, "m"F = ρF"d"F. The wave number "k"F = ω/"c"F is not algebraically independent from "Z"F and "m"F. Unless the density of the film is known independently, the QCM can only measure mass per unit area, never the geometric thickness itself.
Viscoelastic film in liquid.
For a film immersed in a liquid environment, the frequency shift is
formula_13
The indices "F" and "Liq" denote the film and the liquid. Here, the reference state is the crystal immersed in liquid (but not covered with a film). For thin films, one can Taylor-expand the above equation to first order in "d"F, yielding
formula_14
Apart from the term in brackets, this equation is equivalent to the Sauerbrey equation. The term in brackets is a viscoelastic correction, dealing with the fact that in liquids, soft layers lead to a smaller Sauerbrey thickness than rigid layers.
Derivation of viscoelastic constants.
The frequency shift depends on the acoustic impedance of the material; the latter in turn depends on the viscoelastic properties of the material. Therefore, in principle, one can derive the complex shear modulus (or equivalently, the complex viscosity). However, there are certain caveats to be kept in mind:
For thin films in liquids, there is an approximate analytical result relating the elastic compliance of the film, "J"F’, to the ratio of Δ("w"/2) and Δ"f". The shear compliance is the inverse of the shear modulus, "G". In the thin-film limit, the ratio of Δ("w"/2) and –Δ"f" is independent of film thickness. It is an intrinsic property of the film. One has
formula_15
For thin films in air an analogous analytical result is
formula_16
Here "J"’’ is the viscous shear compliance.
Interpretation of the Sauerbrey thickness.
The correct interpretation of the frequency shift from QCM experiments in liquids is a challenge. Practitioners often just apply the Sauerbrey equation to their data and term the resulting areal mass (mass per unit area) the "Sauerbrey mass" and the corresponding thickness "Sauerbrey thickness". Even though the Sauerbrey thickness can certainly serve to compare different experiments, it must not be naively identified with the geometric thickness. Worthwhile considerations are the following:
a) The QCM always measures an areal mass density, never a geometric thickness. The conversion from areal mass density to thickness usually requires the physical density as an independent input.
b) It is difficult to infer the viscoelastic correction factor from QCM data. However, if the correction factor differs significantly from unity, it may be expected that it affects the bandwidth Δ("w"/2) and also that it depends on overtone order. If, conversely, such effects are absent (Δ("w"/2) ≪ Δ"f", Sauerbrey thickness the same on all overtone orders), one may assume that (1 − "Z"Liq2/"Z"F2) ≈ 1.
c) Complex samples are often laterally heterogeneous.
d) Complex samples often have fuzzy interfaces. A "fluffy" interface will often lead to a viscoelastic correction and, as a consequence, to a non-zero Δ("w"/2) as well as an overtone-dependent Sauerbrey mass. In the absence of such effects, one may conclude that the outer interface of the film is sharp.
e) When the viscoelastic correction, as discussed in (b), is insignificant, this does by no means imply that the film is not swollen by the solvent. It only means that the (swollen) film is much more rigid than the ambient liquid. QCM data taken on the wet sample alone do not allow inference of the degree of swelling. The amount of swelling can be inferred from the comparison of the wet and the dry thickness. The degree of swelling is also accessible by comparing the acoustic thickness (in the Sauerbrey sense) to the optical thickness as determined by, for example, surface plasmon resonance (SPR) spectroscopy or ellipsometry. Solvent contained in the film usually does contribute to the acoustic thickness (because it takes part in the movement), whereas it does not contribute to the optic thickness (because the electronic polarizability of a solvent molecule does not change when it is located inside a film). The difference in dry and wet mass is shown with QCM-D and MP-SPR for instance in protein adsorption on nanocellulose and in other soft materials.
Point contacts.
The equations concerning viscoelastic properties assume planar layer systems. A frequency shift is also induced when the crystal makes contact with discrete objects across small, load-bearing asperities. Such contacts are often encountered with rough surfaces. It is assumed that the stress–speed ratio may be replaced by an average stress–speed ratio, where the average stress is just the lateral force divided by the active area of the crystal.
Often, the external object is so heavy that it does not take part in the MHz oscillation of the crystal due to inertia. It then rests in place in the laboratory frame. When the crystal surface is laterally displaced, the contact exerts a restoring force upon the crystal surface. The stress is proportional to the number density of the contacts, "N"S, and their average spring constant, κS. The spring constant may be complex (κS* = κS’ + iκS’’), where the imaginary part quantifies a withdrawal of energy from the crystal oscillation (for instance due to viscoelastic effects). For such a situation, the small-load approximation predicts
formula_17
The QCM allows for non-destructive testing of the shear stiffness of multi-asperity contacts.
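As a minimal sketch of how this relation is used, the snippet below inverts it to obtain the areal contact stiffness "N"SκS* from a measured complex frequency shift; the measured values are made-up numbers, chosen with a positive Δ"f" as is typical for stiff contacts.

```python
import numpy as np

# Inverting  Delta f*/f_f = N_S * kappa_S* / (pi * Z_q * omega)
# to obtain the areal contact stiffness N_S * kappa_S*.
Z_q, f_f, n = 8.8e6, 5e6, 1           # AT-cut quartz, 5 MHz fundamental
omega = 2 * np.pi * n * f_f

delta_f, delta_hbw = 120.0, 15.0      # Hz, illustrative measured shifts
df_star = delta_f + 1j * delta_hbw

NS_kappa = np.pi * Z_q * omega * df_star / f_f   # N_S*kappa_S' + i*N_S*kappa_S''
print(NS_kappa)
```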
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "u_0=\\frac 4{\\left( n\\pi \\right) ^2}dQU_{\\mathrm{el}}\n"
},
{
"math_id": 1,
"text": "\\frac{\\Delta f^{*}}{f_f}=\\frac i{\\pi Z_q}Z_L"
},
{
"math_id": 2,
"text": "Z_L=-iZ_q\\tan \\left( \\pi \\frac{\\Delta f}{f_f}\\right)"
},
{
"math_id": 3,
"text": "\\frac{\\Delta f}{f_f}=\\frac 1{\\pi Z_q}\\,\\frac 2{\\omega u_0}\\left\\langle\n\\sigma \\left( t\\right) \\cos \\left( \\omega t\\right) \\right\\rangle _t"
},
{
"math_id": 4,
"text": "\\frac{\\Delta (w/2) }{f_f}=\\frac 1{\\pi Z_q}\\,\\frac 2{\\omega u_0}\\left\\langle\n\\sigma \\left( t\\right) \\sin \\left( \\omega t\\right) \\right\\rangle _t"
},
{
"math_id": 5,
"text": "\\frac{\\Delta f^{*}}{f_f}=\\frac i{\\pi Z_q}\\,\\frac \\sigma {\\dot{u}}=\\frac\ni{\\pi Z_q}Z_{\\mathrm{ac}}=\\frac i{\\pi Z_q}\\sqrt{\\rho i\\omega \\eta }"
},
{
"math_id": 6,
"text": "=\\frac 1{\\pi Z_q}\\,\\frac{-1+i}{\\sqrt{2}}\\sqrt{\\rho \\omega\n\\left( \\eta ^{\\prime }-i\\eta ^{\\prime \\prime }\\right) }=\\frac i{\\pi Z_q} \n\\sqrt{\\rho \\left( G^{\\prime }+iG^{\\prime \\prime }\\right) }"
},
{
"math_id": 7,
"text": "\\eta ^{\\prime }=-\\frac{\\pi Z_q^2}{\\rho _{\\mathrm{Liq}}\\,f}\\,\\frac{\\Delta\nf\\Delta \\left( w/2\\right) }{f_f^2}"
},
{
"math_id": 8,
"text": "\\eta ^{\\prime \\prime }=\\frac 12\\frac{\\pi Z_q^2}{\\rho _{\\mathrm{Liq}}\\,f}\\,\\frac{\\left( \\left( \\Delta \\left( w/2\\right) \\right) ^2-\\Delta f^2\\right) }{f_f^2}"
},
{
"math_id": 9,
"text": "\\frac{\\Delta f^{*}}{f_f}\\approx \\frac i{\\pi Z_q}\\frac{-\\omega ^2u_0m_{\\mathrm{F}}}{i\\omega u_0}=-\\frac{2\\,f}{Z_q}m_{\\mathrm{F}}"
},
{
"math_id": 10,
"text": "\\tan \\left( \\frac{\\pi \\Delta f}{f_f}\\right) =\\frac{-Z_{\\mathrm{F}}}{Z_q}\\tan\n\\left( k_{\\mathrm{F}}d_{\\mathrm{F}}\\right)"
},
{
"math_id": 11,
"text": "\\Delta f=-\\frac{f_f}\\pi \\left( \\arctan \\frac{Z_{\\mathrm{F}}}{Z_q}\\tan \\left( \n\\frac{2\\pi f}{Z_{\\mathrm{F}}}m_{\\mathrm{F}}\\right) \\right)"
},
{
"math_id": 12,
"text": "\\frac{\\Delta f^{*}}{f_f}=\\frac{-1}{\\pi Z_q}Z_{\\mathrm{F}}\\tan \\left( k_{\\mathrm{F}}d_{\\mathrm{F}}\\right)"
},
{
"math_id": 13,
"text": "\\frac{\\Delta f^{*}}{f_f}=\\frac{-Z_{\\mathrm{F}}}{\\pi Z_q}\\frac{Z_{\\mathrm{F}}\\tan \\left(\nk_{\\mathrm{F}}d_{\\mathrm{F}}\\right) -iZ_{\\mathrm{Liq}}}{Z_{\\mathrm{F}}+iZ_{\\mathrm{Liq}}\\tan \\left(\nk_{\\mathrm{F}}d_{\\mathrm{F}}\\right) }"
},
{
"math_id": 14,
"text": "\\frac{\\Delta f^{*}}{f_f}=\\frac{-\\omega m_{\\mathrm{F}}}{\\pi Z_q}\\left( 1-\\frac{Z_{\n\\mathrm{Liq}}^2}{Z_{\\mathrm{F}}^2}\\right)=\\frac{-\\omega m_{\\mathrm{F}}}{\\pi Z_q}\\left( 1-J_{\\mathrm{F}}\\frac{Z_{\\mathrm{Liq}}^2}{\\rho_{\\mathrm{F}}}\\right)"
},
{
"math_id": 15,
"text": "\\frac{\\Delta \\left( \\omega /2\\right) }{-\\Delta f}\\approx \\eta \\omega\nJ_F^{\\,\\prime }"
},
{
"math_id": 16,
"text": "\\Delta \\left( \\omega /2\\right) =\\frac 8{3\\rho _{\\mathrm{F}}Z_q}f_f^{\\,4}m_{ \n\\mathrm{F}}^3n^3\\pi ^2J^{\\prime \\prime }"
},
{
"math_id": 17,
"text": "\\frac{\\Delta f^{*}}{f_f}=\\frac{N_S}{\\pi Z_q}\\frac{\\kappa _S^{*}}\\omega"
}
] | https://en.wikipedia.org/wiki?curid=1236075 |
12363342 | Income splitting | Income splitting is a tax policy of fictionally attributing earned and passive income of one spouse to the other spouse for the purposes of assessing personal income tax (i.e. "splitting" away the income of the greater earner, reducing his/her income for tax measurement purposes), thus reducing tax rates paid by the spouse who earns more and increasing rates paid by a spouse who earns less (or nothing).
Global incidence and ramifications for sovereign debt and fertility rates.
Most Western countries have abolished mandatory fictional income splitting, while in several countries fictional income splitting is optional (if the couple chooses it). A 2009 study of 26 European countries found that: "In France, Liechtenstein, Luxembourg and Portugal, couples are jointly assessed. Ireland and Germany operate joint taxation, respectively, with an option for individual taxation and the right to be individually taxed when this is more advantageous; conversely individual taxation is the default option in Spain and Poland, but the option of joint assessment is offered. Elements of jointness remain in some income tax codes for which the individual is the unit of taxation – the Belgian, Estonian, Greek, Icelandic and Norwegian codes – some of which are minor while others matter. The remaining countries enforce individual income taxation without exceptions".
In 2015, Portugal abolished the mandatory joint taxation of a family, establishing separate taxation for married (or "de facto" unions) taxpayers as the norm, with an option being available for joint taxation.
The International Monetary Fund has called for the countries to abandon the practice of taxing family income instead of individual income, along with other tax practices, such as the method of assessing payroll tax in the United States, which assesses extra taxes, higher tax rates, and reduced benefits to families that have two earners, and provides funded and unfunded subsidies to patriarchal families, which are related to sovereign debt problems in these countries.
In the United States, the spouse to whom the income is fictionally attributed does not pay payroll tax on that "split" earned income, while the benefit of that spouse's lower rate does accrue to the greater earner. The "split" is thus ignored in that context while it is applied in the income tax context. Even though the fictionally earning spouse does not pay payroll tax, the couple draws two sets of Social Security and Medicare benefits.
Declining fertility rates in countries that subsidize patriarchal/maternalist marriage and rebounding fertility rates in countries that shift their policies to recognizing equal parental responsibility are also a factor in many countries abandoning fictional income splitting for tax measurement.
In part because of these concerns, as well as child welfare policies that advocate recognizing both parents having personal responsibility for children in order to support their development without distortion, fictional income splitting is becoming rare globally, and, since 1970, it has been abolished in many countries.
Some countries require joint returns but measure the tax on income individually, while others use only individual returns. Tax laws in these countries generally have regulations preventing the direct transfer of earned income from one spouse to another to reduce taxes. There are often still methods of using income splitting to reduce taxes in these jurisdictions. For those who own their own company, hiring family members will often reduce the overall tax burden by shifting income to lower-income family members.
United States.
In the United States, tax benefits or "marriage bonuses" to married couples with only one breadwinner (or with a breadwinner earning the bulk of the couple's income) have been cited by the Tax Policy Center as one of the debt-ballooning policies of the Bush tax cuts. The Tax Policy Center asserts that these "marriage bonuses" (received by the greater or sole earner in the marriage) and "marriage penalties" (paid by the lesser or non-earner in the marriage) are often subsidized by single people and two-earner marriages or are unfunded and thus contribute to government borrowing.
While its effects on the national debt have increased substantially in recent years, income splitting became required for married persons filing jointly in the United States in 1948. After two successive vetoes by President Harry S. Truman, a GOP-led effort in Congress obtained enough votes to institute the splitting of marital income. Until then, only single filing was permitted. However, couples in community property states such as California had access to "de facto" fictional income splitting, since one-half of the income of one spouse could be fictionally attributed to the other spouse. This led to issues of patriarchal taxpayers in community property states paying lower tax rates than patriarchal taxpayers in common law states and hastened the passage of "de jure" income splitting. While other solutions to this distortion in community property states were available, political activism to establish a male entitlement (or first right) to paid work, and to push women back into unpaid or lower paid work after their substantial economic contributions during World War II, led to the override of Truman's double veto.
Fictional income splitting is strongly opposed by people in two-earner marriages, and especially by those in Shared Earning/Shared Parenting Marriages. U.S. economists Betsey Stevenson and Justin Wolfers are among those who oppose it. The opposition also comes from those who see this type of taxation contributing to problems of child neglect, particularly by fathers, family breakdown, unequal pay for equal work problems for women, poverty in general, and the feminization of poverty, particularly in older women.
Germany.
In Germany, income splitting involves two aspects. First, if married couples file jointly, their total tax liability is determined by twice the tax liability of applying half the total taxable income. Let formula_0 and formula_1 denote each spouse's taxable income. Letting formula_2 denote the tax schedule, the tax due for couples is computed by formula_3. The splitting advantage increases as the two incomes become more unequal. Another consequence is a high marginal tax rate for the secondary earner, as the secondary earner indirectly pays the marginal tax rate of the higher-earning spouse.
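The splitting rule itself is simple to express in code. The sketch below uses a deliberately simplified, hypothetical progressive schedule T("y") (not the actual German tariff) only to show that joint assessment, 2·T(("y"1+"y"2)/2), never exceeds individual assessment and that the gap widens as the two incomes become more unequal.

```python
# Hypothetical progressive schedule T(y) -- NOT the actual German tariff --
# used only to illustrate the splitting rule  tax = 2 * T((y1 + y2) / 2).
def T(y):
    brackets = [(10_000, 0.00), (50_000, 0.25), (float("inf"), 0.42)]
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        tax += rate * max(0.0, min(y, upper) - lower)
        lower = upper
    return tax

def joint_tax(y1, y2):
    return 2 * T((y1 + y2) / 2)     # splitting: twice the tax on half the joint income

y1, y2 = 80_000, 20_000
print(joint_tax(y1, y2))            # joint assessment with splitting
print(T(y1) + T(y2))                # the same incomes assessed individually
```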
The second aspect involves the Withholding tax ("Lohnsteuer") which is paid on employment income. Family taxation implies that married couples may split the total basic exemption ("Grundfreibetrag"). This is done via choosing the appropriate tax bracket ("Steuerklasse"). The higher-earning spouse predominantly opts for "Steuerklasse III" to claim both exemptions, while the lower-earning spouse will be taxed without exemption ("Steuerklasse V").
Both arrangements are widely considered to create an incentive for unequal employment within married couples in Germany, providing one cause for low labor force participation among married women.
Canada.
Income splitting was not a part of Canada's tax system until the 21st century. From the introduction of income tax, Canadian households were almost exclusively deemed to be single income households. In 1962, a Royal Commission on Taxation was initiated under Kenneth Carter by Prime Minister John Diefenbaker to examine and to recommend improvements to the federal tax system. The report declared "that fairness should be the foremost objective of the taxation system; the existing system was not only too complicated and inefficient, but under it the poor paid more than their fair share while the wealthy avoided taxes through various loopholes."
From the Carter commission's report:
"We conclude that the present system is lacking in essential fairness between families in similar circumstances and that attempts to prevent abuses of the system have produced serious anomalies and rigidities. Most of these results are inherent in the concept that each individual is a separate taxable entity. Taxation of the individual in almost total disregard for his inevitably close financial and economic ties with the other members of the basic social unit of which he is ordinarily a member, the family, is in our view another striking instance of the lack of a comprehensive and rational pattern in the present tax system. In keeping with our general theme that the scope of our tax concepts should be broadened and made more consistent in order to achieve equity, we recommend that the family be treated as a tax unit and taxed on a rate schedule applicable to family units. Individuals who are not members of a family unit would continue to be treated as separate tax units and would be taxed on a schedule applicable to individuals."
The 1970 Royal Commission on the Status of Women recommended a system of elective joint taxation to address the issues of both tax fairness between families and concerns regarding disincentives for women's participation in the work force.
Combined family income is used to calculate a family's tax liability as well as to determine a family's eligibility for tax-delivered benefit payments, such as the Canada Child Tax Benefit (CCTB). Households of similar gross incomes are bearing broadly different tax obligations. On an individual basis this is not the case. Households with the same total income are eligible for identical tax-delivered benefit payments but may have significantly different tax liabilities. Further, while bearing the same general costs of everyday life, such as child care, one jointly filing family is unable to experience greater tax relief (available to individually filing parents), due to the requirement that child care expenses be applied to the lower spouse's income.
After enacting income splitting for retired couples in 2006, in 2011 the Conservative Party of Canada led by Stephen Harper won a majority government with a platform promising limited income splitting. The proposed policy would allow families, with children under 18, to split their household income of up to $50,000, once the federal budget was balanced. The Tories estimate that almost 1.8 million families would be able to capitalize on the tax package and they would save an average of $1,300 annually.
A 2013 study by the C.D. Howe Institute concluded that income splitting "does more harm than good," and a 2014 study by the Canadian Centre for Policy Alternatives claims that it would primarily benefit wealthier families.
However, the C.D. Howe Institute study went far beyond the scope of the limited proposal in the Conservative campaign platform by including the consequences of the provinces following suit. It also speculates upon the effects on workforce participation of the lower-earning spouse, which are easily addressed by elective joint taxation such as that recommended by the 1970 Royal Commission on the Status of Women.
In February 2014, a day after introducing the 2014 budget, Finance Minister Jim Flaherty distanced himself from the concept of income splitting, but others within the Cabinet still support the idea.
The 2015 Canadian federal budget proposed measures to allow families to split their income.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "y_1"
},
{
"math_id": 1,
"text": "y_2"
},
{
"math_id": 2,
"text": "T(.)"
},
{
"math_id": 3,
"text": " 2 T\\left( \\frac{y_1 + y_2}{2} \\right) \\leq T(y_1) + T(y_2) "
}
] | https://en.wikipedia.org/wiki?curid=12363342 |
12363573 | Diffusionless transformation | Shift of atomic positions in a crystal structure
A diffusionless transformation, commonly known as displacive transformation, denotes solid-state alterations in crystal structures that do not hinge on the diffusion of atoms across extensive distances. Rather, these transformations manifest as a result of synchronized shifts in atomic positions, wherein atoms undergo displacements of distances smaller than the spacing between adjacent atoms, all while preserving their relative arrangement. An example of such a phenomenon is the martensitic transformation, a notable occurrence observed in the context of steel materials.
The term "martensite" was originally coined to describe the rigid and finely dispersed constituent that emerges in steels subjected to rapid cooling. Subsequent investigations revealed that materials beyond ferrous alloys, such as non-ferrous alloys and ceramics, can also undergo diffusionless transformations. Consequently, the term "martensite" has evolved to encompass the resultant product arising from such transformations in a more inclusive manner. In the context of diffusionless transformations, a cooperative and homogeneous movement occurs, leading to a modification in the crystal structure during a phase change. These movements are small, usually less than their interatomic distances, and the neighbors of an atom remain close.
The systematic movement of large numbers of atoms led some to refer to them as "military" transformations, in contrast to "civilian" diffusion-based phase changes, initially by Frederick Charles Frank and John Wyrill Christian.
The most commonly encountered transformation of this type is the martensitic transformation, which is probably the most studied but is only one subset of non-diffusional transformations. The martensitic transformation in steel represents the most economically significant example of this category of phase transformations. However, an increasing number of alternatives, such as shape memory alloys, are becoming more important as well.
Classification and definitions.
The phenomenon in which atoms or groups of atoms coordinate to displace their neighboring counterparts resulting in structural modification is known as a displacive transformation. The scope of displacive transformations is extensive, encompassing a diverse array of structural changes. As a result, additional classifications have been devised to provide a more nuanced understanding of these transformations.
The first distinction can be drawn between transformations dominated by "lattice-distortive strains" and those where "shuffles" are of greater importance.
Homogeneous lattice-distortive strains, also known as Bain strains, transform one Bravais lattice into a different one. This can be represented by a strain matrix S which transforms one vector, x, into a new vector, y:
formula_0
This is homogeneous, as straight lines are transformed into new straight lines. Examples of such transformations include a cubic lattice increasing in size on all three axes (dilation) or shearing into a monoclinic structure.
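A small numerical illustration (with made-up strain values, not measured lattice parameters): the sketch below applies a diagonal Bain-type strain matrix to the edge vectors of a cubic cell, elongating one axis and contracting the other two, so that the cube becomes a tetragonal cell.

```python
import numpy as np

# Homogeneous lattice-distortive (Bain-type) strain y = S x applied to the
# edge vectors of a cubic cell.  The strain values are illustrative only.
S = np.diag([1.12, 0.94, 0.94])       # elongate one axis, contract the other two

cube_edges = np.eye(3)                # unit edge vectors of the parent cubic cell
product_edges = (S @ cube_edges.T).T  # edge vectors of the distorted (tetragonal) cell

print(product_edges)
print(np.linalg.norm(product_edges, axis=1))   # edge lengths: [1.12, 0.94, 0.94]
```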
Shuffles, aptly named, refer to the minute displacement of atoms within the unit cell. Notably, pure shuffles typically do not induce a modification in the shape of the unit cell; instead, they predominantly impact its symmetry and overall structural configuration.
Phase transformations typically give rise to the formation of an interface delineating the transformed and parent materials. The energy requisite for establishing this new interface is contingent upon its characteristics, specifically how well the two structures interlock. An additional energy consideration arises when the transformation involves a change in shape. In such instances, if the new phase is constrained by the surrounding material, elastic or plastic deformation may occur, introducing a strain energy term. The interplay between these interfacial and strain energy terms significantly influences the kinetics of the transformation and the morphology of the resulting phase. Notably, in shuffle transformations characterized by minimal distortions, interfacial energies tend to predominate, distinguishing them from lattice-distortive transformations where the impact of strain energy is more pronounced.
A subclassification of lattice-distortive displacements can be made by considering the dilutional and shear components of the distortion. In transformations dominated by the shear component, it is possible to find a line in the new phase that is undistorted from the parent phase while all lines are distorted when the dilation is predominant. Shear-dominated transformations can be further classified according to the magnitude of the strain energies involved compared to the innate vibrations of the atoms in the lattice and hence whether the strain energies have a notable influence on the kinetics of the transformation and the morphology of the resulting phase. If the strain energy is a significant factor, then the transformations are dubbed "martensitic", if not the transformation is referred to as "quasi-martensitic".
Iron-carbon martensitic transformation.
The distinction between austenitic and martensitic steels is subtle in nature. Austenite exhibits a face-centered cubic (FCC) unit cell, whereas the transformation to martensite entails a distortion of this cube into a body-centered tetragonal shape (BCT). This transformation occurs due to a displacive process, where interstitial carbon atoms lack the time to diffuse out. Consequently, the unit cell undergoes a slight elongation in one dimension and contraction in the other two. Despite differences in the symmetry of the crystal structures, the chemical bonding between them remains similar.
The iron-carbon martensitic transformation generates an increase in hardness. The martensitic phase of the steel is supersaturated in carbon and thus undergoes solid solution strengthening. Similar to work-hardened steels, defects prevent atoms from sliding past one another in an organized fashion, causing the material to become harder.
Pseudo martensitic transformation.
In addition to displacive transformation and diffusive transformation, a new type of phase transformation that involves a displacive sublattice transition and atomic diffusion was discovered using a high-pressure X-ray diffraction system. The new transformation mechanism has been christened pseudo martensitic transformation.
References.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "y=Sx"
}
] | https://en.wikipedia.org/wiki?curid=12363573 |
1236458 | Bellman equation | Necessary condition for optimality associated with dynamic programming
A Bellman equation, named after Richard E. Bellman, is a necessary condition for optimality associated with the mathematical optimization method known as dynamic programming. It writes the "value" of a decision problem at a certain point in time in terms of the payoff from some initial choices and the "value" of the remaining decision problem that results from those initial choices. This breaks a dynamic optimization problem into a sequence of simpler subproblems, as Bellman's “principle of optimality" prescribes. The equation applies to algebraic structures with a total ordering; for algebraic structures with a partial ordering, the generic Bellman's equation can be used.
The Bellman equation was first applied to engineering control theory and to other topics in applied mathematics, and subsequently became an important tool in economic theory; though the basic concepts of dynamic programming are prefigured in John von Neumann and Oskar Morgenstern's "Theory of Games and Economic Behavior" and Abraham Wald's "sequential analysis". The term "Bellman equation" usually refers to the dynamic programming equation (DPE) associated with discrete-time optimization problems. In continuous-time optimization problems, the analogous equation is a partial differential equation that is called the Hamilton–Jacobi–Bellman equation.
In discrete time any multi-stage optimization problem can be solved by analyzing the appropriate Bellman equation. The appropriate Bellman equation can be found by introducing new state variables (state augmentation). However, the resulting augmented-state multi-stage optimization problem has a higher dimensional state space than the original multi-stage optimization problem - an issue that can potentially render the augmented problem intractable due to the “curse of dimensionality”. Alternatively, it has been shown that if the cost function of the multi-stage optimization problem satisfies a "backward separable" structure, then the appropriate Bellman equation can be found without state augmentation.
Analytical concepts in dynamic programming.
To understand the Bellman equation, several underlying concepts must be understood. First, any optimization problem has some objective: minimizing travel time, minimizing cost, maximizing profits, maximizing utility, etc. The mathematical function that describes this objective is called the "objective function".
Dynamic programming breaks a multi-period planning problem into simpler steps at different points in time. Therefore, it requires keeping track of how the decision situation is evolving over time. The information about the current situation that is needed to make a correct decision is called the "state". For example, to decide how much to consume and spend at each point in time, people would need to know (among other things) their initial wealth. Therefore, wealth formula_0 would be one of their "state variables", but there would probably be others.
The variables chosen at any given point in time are often called the "control variables". For instance, given their current wealth, people might decide how much to consume now. Choosing the control variables now may be equivalent to choosing the next state; more generally, the next state is affected by other factors in addition to the current control. For example, in the simplest case, today's wealth (the state) and consumption (the control) might exactly determine tomorrow's wealth (the new state), though typically other factors will affect tomorrow's wealth too.
The dynamic programming approach describes the optimal plan by finding a rule that tells what the controls should be, given any possible value of the state. For example, if consumption ("c") depends "only" on wealth ("W"), we would seek a rule formula_1 that gives consumption as a function of wealth. Such a rule, determining the controls as a function of the states, is called a "policy function".
Finally, by definition, the optimal decision rule is the one that achieves the best possible value of the objective. For example, if someone chooses consumption, given wealth, in order to maximize happiness (assuming happiness "H" can be represented by a mathematical function, such as a utility function and is something defined by wealth), then each level of wealth will be associated with some highest possible level of happiness, formula_2. The best possible value of the objective, written as a function of the state, is called the "value function".
Bellman showed that a dynamic optimization problem in discrete time can be stated in a recursive, step-by-step form known as backward induction by writing down the relationship between the value function in one period and the value function in the next period. The relationship between these two value functions is called the "Bellman equation". In this approach, the optimal policy in the last time period is specified in advance as a function of the state variable's value at that time, and the resulting optimal value of the objective function is thus expressed in terms of that value of the state variable. Next, the next-to-last period's optimization involves maximizing the sum of that period's period-specific objective function and the optimal value of the future objective function, giving that period's optimal policy contingent upon the value of the state variable as of the next-to-last period decision. This logic continues recursively back in time, until the first period decision rule is derived, as a function of the initial state variable value, by optimizing the sum of the first-period-specific objective function and the value of the second period's value function, which gives the value for all the future periods. Thus, each period's decision is made by explicitly acknowledging that all future decisions will be optimally made.
Derivation.
A dynamic decision problem.
Let formula_3 be the state at time formula_4. For a decision that begins at time 0, we take as given the initial state formula_5. At any time, the set of possible actions depends on the current state; we express this as formula_6, where a particular action formula_7 represents particular values for one or more control variables, and formula_8 is the set of actions available to be taken at state formula_3. It is also assumed that the state changes from formula_9 to a new state formula_10 when action formula_11 is taken, and that the current payoff from taking action formula_11 in state formula_9 is formula_12. Finally, we assume impatience, represented by a discount factor formula_13.
Under these assumptions, an infinite-horizon decision problem takes the following form:
formula_14
subject to the constraints
formula_15
Notice that we have defined notation formula_16 to denote the optimal value that can be obtained by maximizing this objective function subject to the assumed constraints. This function is the "value function". It is a function of the initial state variable formula_5, since the best value obtainable depends on the initial situation.
Bellman's principle of optimality.
The dynamic programming method breaks this decision problem into smaller subproblems. Bellman's "principle of optimality" describes how to do this:
Principle of Optimality: An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision. (See Bellman, 1957, Chap. III.3.)
In computer science, a problem that can be broken apart like this is said to have optimal substructure. In the context of dynamic game theory, this principle is analogous to the concept of subgame perfect equilibrium, although what constitutes an optimal policy in this case is conditioned on the decision-maker's opponents choosing similarly optimal policies from their points of view.
As suggested by the "principle of optimality", we will consider the first decision separately, setting aside all future decisions (we will start afresh from time 1 with the new state formula_17). Collecting the future decisions in brackets on the right, the above infinite-horizon decision problem is equivalent to:
formula_18
subject to the constraints
formula_19
Here we are choosing formula_20, knowing that our choice will cause the time 1 state to be formula_21. That new state will then affect the decision problem from time 1 on. The whole future decision problem appears inside the square brackets on the right.
The Bellman equation.
So far it seems we have only made the problem uglier by separating today's decision from future decisions. But we can simplify by noticing that what is inside the square brackets on the right is "the value" of the time 1 decision problem, starting from state formula_21.
Therefore, the problem can be rewritten as a recursive definition of the value function:
formula_22, subject to the constraints: formula_19
This is the Bellman equation. It may be simplified even further if the time subscripts are dropped and the value of the next state is plugged in:
formula_23
The Bellman equation is classified as a functional equation, because solving it means finding the unknown function formula_24, which is the "value function". Recall that the value function describes the best possible value of the objective, as a function of the state formula_9. By calculating the value function, we will also find the function formula_25 that describes the optimal action as a function of the state; this is called the "policy function".
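When the state and action sets are finite, the Bellman equation can be solved by repeatedly applying its right-hand side as an operator until the value function stops changing (value iteration); the policy function is then read off from the maximizing action. The tiny problem below (states, transitions and payoffs) is a made-up illustration of this structure.

```python
# Value iteration for  V(x) = max_{a in Gamma(x)} { F(x, a) + beta * V(T(x, a)) }
# on a made-up finite problem.
states, actions = range(5), range(3)
beta = 0.9

def Gamma(x):                  # feasible actions in state x (here: all of them)
    return actions

def T(x, a):                   # deterministic state transition
    return min(x + a, 4)

def F(x, a):                   # per-period payoff
    return x - 0.5 * a

V = {x: 0.0 for x in states}
for _ in range(500):           # apply the Bellman operator until (approximate) convergence
    V = {x: max(F(x, a) + beta * V[T(x, a)] for a in Gamma(x)) for x in states}

policy = {x: max(Gamma(x), key=lambda a: F(x, a) + beta * V[T(x, a)]) for x in states}
print(V)
print(policy)
```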
In a stochastic problem.
In the deterministic setting, other techniques besides dynamic programming can be used to tackle the above optimal control problem. However, the Bellman Equation is often the most convenient method of solving "stochastic" optimal control problems.
For a specific example from economics, consider an infinitely-lived consumer with initial wealth endowment formula_26 at period formula_27. They have an instantaneous utility function formula_28, where formula_29 denotes consumption, and they discount next-period utility at a rate of formula_30. Assume that what is not consumed in period formula_4 carries over to the next period with interest rate formula_31. Then the consumer's utility maximization problem is to choose a consumption plan formula_32 that solves
formula_33
subject to
formula_34
and
formula_35
The first constraint is the capital accumulation/law of motion specified by the problem, while the second constraint is a transversality condition that the consumer does not carry debt at the end of their life. The Bellman equation is
formula_36
Alternatively, one can treat the sequence problem directly using, for example, the Hamiltonian equations.
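The consumption problem above can also be solved numerically by value-function iteration on a grid of asset levels. The sketch below does this for log utility, for which the known optimal rule is "c" = (1 − β)"a", so the computed policy can be checked against it; the grid bounds, β and "r" are arbitrary illustrative choices.

```python
import numpy as np

# Value-function iteration for V(a) = max_{0<=c<=a} { log(c) + beta*V((1+r)*(a-c)) }.
beta, r = 0.95, 0.02
grid = np.linspace(1e-3, 10.0, 300)             # asset grid
V = np.zeros_like(grid)

for _ in range(600):                            # iterate the Bellman operator
    V_new = np.empty_like(V)
    for i, a in enumerate(grid):
        c = np.linspace(1e-4, a, 150)           # candidate consumption levels
        a_next = (1 + r) * (a - c)
        V_next = np.interp(a_next, grid, V)     # off-grid values by linear interpolation
        V_new[i] = np.max(np.log(c) + beta * V_next)
    if np.max(np.abs(V_new - V)) < 1e-7:
        break
    V = V_new

# With log utility the optimal policy is c = (1 - beta) * a, independent of r.
a0 = 5.0
c_grid = np.linspace(1e-4, a0, 150)
values = np.log(c_grid) + beta * np.interp((1 + r) * (a0 - c_grid), grid, V)
print(c_grid[np.argmax(values)], (1 - beta) * a0)   # numerical vs. analytical consumption
```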
Now, if the interest rate varies from period to period, the consumer is faced with a stochastic optimization problem. Let the interest rate "r" follow a Markov process with probability transition function formula_37, where formula_38 denotes the probability measure governing the distribution of the interest rate next period if the current interest rate is formula_31. In this model the consumer decides their current period consumption after the current period interest rate is announced.
Rather than simply choosing a single sequence formula_32, the consumer now must choose a sequence formula_32 for each possible realization of a formula_39 in such a way that their lifetime expected utility is maximized:
formula_40
The expectation formula_41 is taken with respect to the appropriate probability measure given by "Q" on the sequences of "r"'s. Because "r" is governed by a Markov process, dynamic programming simplifies the problem significantly. Then the Bellman equation is simply:
formula_42
Under some reasonable assumptions, the resulting optimal policy function "g"("a","r") is measurable.
For a general stochastic sequential optimization problem with Markovian shocks and where the agent is faced with their decision "ex-post", the Bellman equation takes a very similar form
formula_43
Applications in economics.
The first known application of a Bellman equation in economics is due to Martin Beckmann and Richard Muth. Martin Beckmann also wrote extensively on consumption theory using the Bellman equation in 1959. His work influenced Edmund S. Phelps, among others.
A celebrated economic application of a Bellman equation is Robert C. Merton's seminal 1973 article on the intertemporal capital asset pricing model. (See also Merton's portfolio problem). The solution to Merton's theoretical model, one in which investors chose between income today and future income or capital gains, is a form of Bellman's equation. Because economic applications of dynamic programming usually result in a Bellman equation that is a difference equation, economists refer to dynamic programming as a "recursive method" and a subfield of recursive economics is now recognized within economics.
Nancy Stokey, Robert E. Lucas, and Edward Prescott describe stochastic and nonstochastic dynamic programming in considerable detail, and develop theorems for the existence of solutions to problems meeting certain conditions. They also describe many examples of modeling theoretical problems in economics using recursive methods. This book led to dynamic programming being employed to solve a wide range of theoretical problems in economics, including optimal economic growth, resource extraction, principal–agent problems, public finance, business investment, asset pricing, factor supply, and industrial organization. Lars Ljungqvist and Thomas Sargent apply dynamic programming to study a variety of theoretical questions in monetary policy, fiscal policy, taxation, economic growth, search theory, and labor economics. Avinash Dixit and Robert Pindyck showed the value of the method for thinking about capital budgeting. Anderson adapted the technique to business valuation, including privately held businesses.
Using dynamic programming to solve concrete problems is complicated by informational difficulties, such as choosing the unobservable discount rate. There are also computational issues, the main one being the curse of dimensionality arising from the vast number of possible actions and potential state variables that must be considered before an optimal strategy can be selected. For an extensive discussion of computational issues, see Miranda and Fackler, and Meyn 2007.
Example.
In Markov decision processes, a Bellman equation is a recursion for expected rewards. For example, the expected reward for being in a particular state "s" and following some fixed policy formula_44 has the Bellman equation:
formula_45
This equation describes the expected reward for taking the action prescribed by some policy formula_44.
The equation for the optimal policy is referred to as the "Bellman optimality equation":
formula_46
where formula_47 is the optimal policy and formula_48 refers to the value function of the optimal policy. The equation above describes the reward for taking the action giving the highest expected return.
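A compact value-iteration sketch for the Bellman optimality equation on a made-up two-state, two-action MDP; the rewards and transition probabilities are arbitrary, and the loop simply applies the optimality equation until the values settle.

```python
# Value iteration for the Bellman optimality equation
#   V*(s) = max_a { R(s,a) + gamma * sum_s' P(s'|s,a) * V*(s') }
# on a made-up two-state, two-action MDP.
gamma = 0.9
states, actions = [0, 1], [0, 1]
R = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 2.0, (1, 1): 0.0}     # R(s, a)
P = {(0, 0): [1.0, 0.0], (0, 1): [0.2, 0.8],                 # P(s'|s, a)
     (1, 0): [0.9, 0.1], (1, 1): [0.0, 1.0]}

V = [0.0, 0.0]
for _ in range(200):
    V = [max(R[s, a] + gamma * sum(P[s, a][s2] * V[s2] for s2 in states)
             for a in actions) for s in states]

pi = [max(actions, key=lambda a: R[s, a] + gamma * sum(P[s, a][s2] * V[s2] for s2 in states))
      for s in states]
print(V, pi)
```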
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(W)"
},
{
"math_id": 1,
"text": "c(W)"
},
{
"math_id": 2,
"text": "H(W)"
},
{
"math_id": 3,
"text": "x_t"
},
{
"math_id": 4,
"text": "t"
},
{
"math_id": 5,
"text": "x_0"
},
{
"math_id": 6,
"text": " a_{t} \\in \\Gamma (x_t)"
},
{
"math_id": 7,
"text": "a_t"
},
{
"math_id": 8,
"text": "\\Gamma (x_t)"
},
{
"math_id": 9,
"text": "x"
},
{
"math_id": 10,
"text": "T(x,a)"
},
{
"math_id": 11,
"text": "a"
},
{
"math_id": 12,
"text": "F(x,a)"
},
{
"math_id": 13,
"text": "0<\\beta<1"
},
{
"math_id": 14,
"text": " V(x_0) \\; = \\; \\max_{ \\left \\{ a_{t} \\right \\}_{t=0}^{\\infty} } \\sum_{t=0}^{\\infty} \\beta^t F(x_t,a_{t}), "
},
{
"math_id": 15,
"text": " a_{t} \\in \\Gamma (x_t), \\; x_{t+1}=T(x_t,a_t), \\; \\forall t = 0, 1, 2, \\dots "
},
{
"math_id": 16,
"text": "V(x_0)"
},
{
"math_id": 17,
"text": "x_1 "
},
{
"math_id": 18,
"text": " \\max_{ a_0 } \\left \\{ F(x_0,a_0)\n+ \\beta \\left[ \\max_{ \\left \\{ a_{t} \\right \\}_{t=1}^{\\infty} }\n\\sum_{t=1}^{\\infty} \\beta^{t-1} F(x_t,a_{t}):\na_{t} \\in \\Gamma (x_t), \\; x_{t+1}=T(x_t,a_t), \\; \\forall t \\geq 1 \\right] \\right \\}"
},
{
"math_id": 19,
"text": " a_0 \\in \\Gamma (x_0), \\; x_1=T(x_0,a_0). "
},
{
"math_id": 20,
"text": "a_0"
},
{
"math_id": 21,
"text": "x_1=T(x_0,a_0)"
},
{
"math_id": 22,
"text": "V(x_0) = \\max_{ a_0 } \\{ F(x_0,a_0) + \\beta V(x_1) \\} "
},
{
"math_id": 23,
"text": "V(x) = \\max_{a \\in \\Gamma (x) } \\{ F(x,a) + \\beta V(T(x,a)) \\}."
},
{
"math_id": 24,
"text": "V"
},
{
"math_id": 25,
"text": "a(x)"
},
{
"math_id": 26,
"text": "{\\color{Red}a_0}"
},
{
"math_id": 27,
"text": "0"
},
{
"math_id": 28,
"text": "u(c)"
},
{
"math_id": 29,
"text": "c"
},
{
"math_id": 30,
"text": "0< \\beta<1 "
},
{
"math_id": 31,
"text": "r"
},
{
"math_id": 32,
"text": "\\{{\\color{OliveGreen}c_t}\\}"
},
{
"math_id": 33,
"text": "\\max \\sum_{t=0} ^{\\infty} \\beta^t u ({\\color{OliveGreen}c_t})"
},
{
"math_id": 34,
"text": "{\\color{Red}a_{t+1}} = (1 + r) ({\\color{Red}a_t} - {\\color{OliveGreen}c_t}), \\; {\\color{OliveGreen}c_t} \\geq 0,"
},
{
"math_id": 35,
"text": "\\lim_{t \\rightarrow \\infty} {\\color{Red}a_t} \\geq 0."
},
{
"math_id": 36,
"text": "V(a) = \\max_{ 0 \\leq c \\leq a } \\{ u(c) + \\beta V((1+r) (a - c)) \\},"
},
{
"math_id": 37,
"text": "Q(r, d\\mu_r)"
},
{
"math_id": 38,
"text": "d\\mu_r"
},
{
"math_id": 39,
"text": "\\{r_t\\}"
},
{
"math_id": 40,
"text": "\\max_{ \\left \\{ c_{t} \\right \\}_{t=0}^{\\infty} } \\mathbb{E}\\bigg( \\sum_{t=0} ^{\\infty} \\beta^t u ({\\color{OliveGreen}c_t}) \\bigg)."
},
{
"math_id": 41,
"text": "\\mathbb{E}"
},
{
"math_id": 42,
"text": "V(a, r) = \\max_{ 0 \\leq c \\leq a } \\{ u(c) + \\beta \\int V((1+r) (a - c), r') Q(r, d\\mu_r) \\} ."
},
{
"math_id": 43,
"text": "V(x, z) = \\max_{c \\in \\Gamma(x,z)} \\{F(x, c, z) + \\beta \\int V( T(x,c), z') d\\mu_z(z')\\}. "
},
{
"math_id": 44,
"text": "\\pi"
},
{
"math_id": 45,
"text": " V^\\pi(s)= R(s,\\pi(s)) + \\gamma \\sum_{s'} P(s'|s,\\pi(s)) V^\\pi(s').\\ "
},
{
"math_id": 46,
"text": " V^{\\pi*}(s)= \\max_a \\left\\{ {R(s,a) + \\gamma \\sum_{s'} P(s'|s,a) V^{\\pi*}(s')} \\right\\}.\\ "
},
{
"math_id": 47,
"text": "{\\pi*}"
},
{
"math_id": 48,
"text": "V^{\\pi*}"
}
] | https://en.wikipedia.org/wiki?curid=1236458 |
1236542 | Projective hierarchy | Descriptive set theory concept
In the mathematical field of descriptive set theory, a subset formula_0 of a Polish space formula_1 is projective if it is formula_2 for some positive integer formula_3. Here formula_0 is
formula_4 if formula_0 is analytic;
formula_5 if the complement of formula_0, formula_6, is formula_2;
formula_7 if there is a Polish space formula_8 and a formula_5 subset formula_9 such that formula_0 is the projection of formula_10 onto formula_1; that is, formula_11
The choice of the Polish space formula_8 in the third clause above is not very important; it could be replaced in the definition by a fixed uncountable Polish space, say Baire space or Cantor space or the real line.
Relationship to the analytical hierarchy.
There is a close relationship between the relativized analytical hierarchy on subsets of Baire space (denoted by lightface letters formula_12 and formula_13) and the projective hierarchy on subsets of Baire space (denoted by boldface letters formula_14 and formula_15). Not every formula_2 subset of Baire space is formula_16. It is true, however, that if a subset "X" of Baire space is formula_2 then there is a set of natural numbers "A" such that "X" is formula_17. A similar statement holds for formula_5 sets. Thus the sets classified by the projective hierarchy are exactly the sets classified by the relativized version of the analytical hierarchy. This relationship is important in effective descriptive set theory. Stated in terms of definability, a set of reals is projective iff it is definable in the language of second-order arithmetic from some real parameter.
A similar relationship between the projective hierarchy and the relativized analytical hierarchy holds for subsets of Cantor space and, more generally, subsets of any effective Polish space.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "\\boldsymbol{\\Sigma}^1_n"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "\\boldsymbol{\\Sigma}^1_1"
},
{
"math_id": 5,
"text": "\\boldsymbol{\\Pi}^1_n"
},
{
"math_id": 6,
"text": "X\\setminus A"
},
{
"math_id": 7,
"text": "\\boldsymbol{\\Sigma}^1_{n+1}"
},
{
"math_id": 8,
"text": "Y"
},
{
"math_id": 9,
"text": "C\\subseteq X\\times Y"
},
{
"math_id": 10,
"text": "C"
},
{
"math_id": 11,
"text": "A=\\{x\\in X \\mid \\exists y\\in Y : (x,y)\\in C\\}."
},
{
"math_id": 12,
"text": "\\Sigma"
},
{
"math_id": 13,
"text": "\\Pi"
},
{
"math_id": 14,
"text": "\\boldsymbol{\\Sigma}"
},
{
"math_id": 15,
"text": "\\boldsymbol{\\Pi}"
},
{
"math_id": 16,
"text": "\\Sigma^1_n"
},
{
"math_id": 17,
"text": "\\Sigma^{1,A}_n"
}
] | https://en.wikipedia.org/wiki?curid=1236542 |
1236569 | Nagata–Smirnov metrization theorem | Characterizes when a topological space is metrizable
In topology, the Nagata–Smirnov metrization theorem characterizes when a topological space is metrizable. The theorem states that a topological space formula_0 is metrizable if and only if it is regular, Hausdorff and has a countably locally finite (that is, 𝜎-locally finite) basis.
A topological space formula_0 is called a regular space if every non-empty closed subset formula_1 of formula_0 and a point p not contained in formula_1 admit non-overlapping open neighborhoods.
A collection in a space formula_0 is countably locally finite (or 𝜎-locally finite) if it is the union of a countable family of locally finite collections of subsets of formula_2
Unlike Urysohn's metrization theorem, which provides only a sufficient condition for metrizability, this theorem provides both a necessary and sufficient condition for a topological space to be metrizable. The theorem is named after Junichi Nagata and Yuriĭ Mikhaĭlovich Smirnov, whose (independent) proofs were published in 1950 and 1951, respectively.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "C"
},
{
"math_id": 2,
"text": "X."
}
] | https://en.wikipedia.org/wiki?curid=1236569 |
12365909 | Felici's law | In physics, Felici's law states that the net charge through a circuit induced by a changing magnetic field is directly proportional to the difference between the initial and final magnetic flux. The proportionality constant is the electrical conductance formula_0. This law is a predecessor of the modern Faraday's law of induction.
For a circuit with resistance formula_1, the total charge passing through the circuit from time 0 to time "t" is
formula_2,
where formula_3 is the magnetic flux.
History.
Felici's Law is named after Riccardo Felici, an Italian physicist and rector of the University of Pisa between 1851 and 1859. His research was primarily focused on induction and magnetic effects.
Derivation from Faraday's law of induction.
Faraday's law of induction states that the induced electromotive force is,
formula_4
By charge conservation, the charge passing through the circuit is,
formula_5
where "I" is the electric current.
Applications.
Since the charge depends only on the total change in magnetic flux during the time, and not any instantaneous result, we can apply Felici's law to measure the magnetic flux and thus a constant magnetic field. In the flip-coil method, we wind 'n' overlapping loops in a small ring. We first insert the coil perpendicular to the field and then rotate it 180°. Thus the end magnetic flux is exactly opposite the initial flux since the magnetic field is constant. By Felici's law, for area "A" and resistance "R", we can find the magnetic field B
formula_6
Thus, we can calculate the magnetic field based on the measured charge that flows through the coil.
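A few lines suffice to turn a flip-coil charge measurement into a field estimate with this relation; the resistance, turn count, coil area and measured charge below are illustrative numbers.

```python
# Flip-coil estimate of a static magnetic field:  B = R * Q / (2 * n * A).
R = 50.0        # circuit resistance, ohm
n = 200         # number of turns
A = 1.0e-4      # coil area, m^2 (1 cm^2)
Q = 4.0e-5      # integrated charge during the 180-degree flip, C

B = R * Q / (2 * n * A)
print(f"B = {B:.3f} T")    # 0.050 T for these numbers
```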
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "G = \\frac{1}{R}"
},
{
"math_id": 1,
"text": "R"
},
{
"math_id": 2,
"text": "Q(t) = \\frac{1}{R} [\\Phi_B(0) - \\Phi_B(t)]"
},
{
"math_id": 3,
"text": "\\Phi_B(t)"
},
{
"math_id": 4,
"text": "\\mathcal{E} = -\\frac{d\\Phi_B}{dt}"
},
{
"math_id": 5,
"text": "\\begin{align}\nQ(t) &= \\int_0^t I\\,d\\tau\\\\\n&= \\int_0^t \\frac{\\mathcal{E}}{R}\\,d\\tau \\\\\n&= \\frac{1}{R}\\int_0^t -\\frac{d\\Phi_B}{dt} \\,d\\tau \\\\\n&= \\frac{1}{R} [\\Phi(0) - \\Phi(t)]\n\\end{align}"
},
{
"math_id": 6,
"text": "\\begin{align}\nQ &= \\frac{\\Delta \\Phi}{R}\\\\\n&= \\frac{2 \\Phi}{R}\\\\\n&= \\frac{2 B n A}{R}\\\\\nB &= \\frac{R Q}{2 n A}\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=12365909 |
12365918 | Camera matrix | Computer vision geometry concept
In computer vision a camera matrix or (camera) projection matrix is a formula_0 matrix which describes the mapping of a pinhole camera from 3D points in the world to 2D points in an image.
Let formula_1 be a representation of a 3D point in homogeneous coordinates (a 4-dimensional vector), and let formula_2 be a representation of the image of this point in the pinhole camera (a 3-dimensional vector). Then the following relation holds
formula_3
where formula_4 is the camera matrix and the formula_5 sign implies that the left and right hand sides are equal except for a multiplication by a non-zero scalar formula_6:
formula_7
Since the camera matrix formula_4 is involved in the mapping between elements of two projective spaces, it too can be regarded as a projective element. This means that it has only 11 degrees of freedom since any multiplication by a non-zero scalar results in an equivalent camera matrix.
Derivation.
The mapping from the coordinates of a 3D point P to the 2D image coordinates of the point's projection onto the image plane, according to the pinhole camera model, is given by
formula_8
where formula_9 are the 3D coordinates of P relative to a camera centered coordinate system, formula_10 are the resulting image coordinates, and "f" is the camera's focal length for which we assume "f" > 0. Furthermore, we also assume that "x3 > 0".
To derive the camera matrix, the expression above is rewritten in terms of homogeneous coordinates. Instead of the 2D vector formula_11 we consider the projective element (a 3D vector) formula_12 and instead of equality we consider equality up to scaling by a non-zero number, denoted formula_5. First, we write the homogeneous image coordinates as expressions in the usual 3D coordinates.
formula_13
Finally, also the 3D coordinates are expressed in a homogeneous representation formula_1 and this is how the camera matrix appears:
formula_14 or formula_3
where formula_4 is the camera matrix, which here is given by
formula_15,
and the corresponding camera matrix now becomes
formula_16
The last step is a consequence of formula_4 itself being a projective element.
The camera matrix derived here may appear trivial in the sense that it contains very few non-zero elements. This depends to a large extent on the particular coordinate systems which have been chosen for the 3D and 2D points. In practice, however, other forms of camera matrices are common, as will be shown below.
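The following NumPy sketch (an illustration added here, not part of the original article) applies the camera matrix derived above to a homogeneous 3D point and recovers the 2D image coordinates by dividing by the third component. The focal length and point coordinates are arbitrary illustration values.

```python
# Projecting a 3D point with the pinhole camera matrix C derived above.
import numpy as np

f = 0.05  # focal length in metres (hypothetical)

# Camera matrix in the form [[f,0,0,0],[0,f,0,0],[0,0,1,0]], equivalent up to
# scale to the matrix with 1/f in the third row.
C = np.array([[f, 0, 0, 0],
              [0, f, 0, 0],
              [0, 0, 1, 0]])

x = np.array([0.2, -0.1, 2.0, 1.0])   # homogeneous 3D point, x3 > 0

y = C @ x                              # homogeneous image point
y1, y2 = y[:2] / y[2]                  # dehomogenize: divide by third entry
print(y1, y2)                          # 0.005 -0.0025, i.e. (f/x3)*(x1, x2)
```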
Camera position.
The camera matrix formula_4 derived in the previous section has a null space which is spanned by the vector
formula_17
This is also the homogeneous representation of the 3D point which has coordinates (0,0,0), that is, the "camera center" (aka the entrance pupil; the position of the pinhole of a pinhole camera) is at O. This means that the camera center (and only this point) cannot be mapped to a point in the image plane by the camera (or equivalently, it maps to all points on the image as every ray on the image goes through this point).
For any other 3D point with formula_18, the result formula_19 is well-defined and has the form formula_20. This corresponds to a point at infinity in the projective image plane (even though, if the image plane is taken to be a Euclidean plane, no corresponding intersection point exists).
Normalized camera matrix and normalized image coordinates.
The camera matrix derived above can be simplified even further if we assume that "f = 1":
formula_21
where formula_22 here denotes a formula_23 identity matrix. Note that the formula_24 matrix formula_4 here is divided into a concatenation of a formula_23 matrix and a 3-dimensional vector. The camera matrix formula_25 is sometimes referred to as a "canonical form".
So far all points in the 3D world have been represented in a "camera centered" coordinate system, that is, a coordinate system which has its origin at the camera center (the location of the pinhole of a pinhole camera). In practice however, the 3D points may be represented in terms of coordinates relative to an arbitrary coordinate system (X1', X2', X3'). Assuming that the camera coordinate axes (X1, X2, X3) and the axes (X1', X2', X3') are of Euclidean type (orthogonal and isotropic), there is a unique Euclidean 3D transformation (rotation and translation) between the two coordinate systems. In other words, the camera is not necessarily at the origin looking along the "z" axis.
The two operations of rotation and translation of 3D coordinates can be represented as the two formula_26 matrices
formula_27 and formula_28
where formula_29 is a formula_23 rotation matrix and formula_30 is a 3-dimensional translation vector. When the first matrix is multiplied onto the homogeneous representation of a 3D point, the result is the homogeneous representation of the rotated point, and the second matrix performs instead a translation. Performing the two operations in sequence, i.e. first the rotation and then the translation (with translation vector given in the already rotated coordinate system), gives a combined rotation and translation matrix
formula_31
Assuming that formula_29 and formula_30 are precisely the rotation and translations which relate the two coordinate system (X1,X2,X3) and (X1',X2',X3') above, this implies that
formula_32
where formula_33 is the homogeneous representation of the point P in the coordinate system (X1',X2',X3').
Assuming also that the camera matrix is given by formula_25, the mapping from the coordinates in the (X1',X2',X3') system to homogeneous image coordinates becomes
formula_34
Consequently, the camera matrix which relates points in the coordinate system (X1',X2',X3') to image coordinates is
formula_35
a concatenation of a 3D rotation matrix and a 3-dimensional translation vector.
This type of camera matrix is referred to as a "normalized camera matrix", it assumes focal length = 1 and that image coordinates are measured in a coordinate system where the origin is located at the intersection between axis X3 and the image plane and has the same units as the 3D coordinate system. The resulting image coordinates are referred to as "normalized image coordinates".
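A minimal NumPy sketch (an illustration added here, not part of the original article; the rotation angle, translation vector and point coordinates are arbitrary) shows how a normalized camera matrix is assembled from a rotation matrix and a translation vector, and verifies that the camera centre lies in its null space.

```python
# Building a normalized camera matrix C_N = [R | t] and mapping a world point
# to normalized image coordinates.
import numpy as np

theta = np.deg2rad(30)                    # hypothetical rotation about X1
R = np.array([[1, 0, 0],
              [0, np.cos(theta), -np.sin(theta)],
              [0, np.sin(theta),  np.cos(theta)]])
t = np.array([0.1, -0.2, 1.5])            # hypothetical translation

C_N = np.hstack([R, t[:, None]])          # 3x4 normalized camera matrix

xw = np.array([0.3, 0.4, 2.0, 1.0])       # homogeneous world point (primed system)
y = C_N @ xw
print(y[:2] / y[2])                       # normalized image coordinates

# The camera centre n = (-R^{-1} t, 1) lies in the null space of C_N:
n = np.append(-np.linalg.inv(R) @ t, 1.0)
print(np.allclose(C_N @ n, 0))            # True
```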
The camera position.
Again, the null space of the normalized camera matrix, formula_36 described above, is spanned by the 4-dimensional vector
formula_37
This is also, again, the coordinates of the camera center, now relative to the (X1',X2',X3') system. This can be seen by applying first the rotation and then the translation to the 3-dimensional vector formula_38 and the result is the homogeneous representation of 3D coordinates (0,0,0).
This implies that the camera center (in its homogeneous representation) lies in the null space of the camera matrix, provided that it is represented in terms of 3D coordinates relative to the same coordinate system as the camera matrix refers to.
The normalized camera matrix formula_36 can now be written as
formula_39
where formula_38 is the 3D coordinates of the camera relative to the (X1',X2',X3') system.
General camera matrix.
Given the mapping produced by a normalized camera matrix, the resulting normalized image coordinates can be transformed by means of an arbitrary 2D homography. This includes 2D translations and rotations as well as scaling (isotropic and anisotropic) but also general 2D perspective transformations. Such a transformation can be represented as a formula_23 matrix formula_40 which maps the homogeneous normalized image coordinates formula_2 to the homogeneous transformed image coordinates formula_41:
formula_42
Inserting the above expression for the normalized image coordinates in terms of the 3D coordinates gives
formula_43
This produces the most general form of camera matrix
formula_44 | [
{
"math_id": 0,
"text": "3 \\times 4"
},
{
"math_id": 1,
"text": " \\mathbf{x} "
},
{
"math_id": 2,
"text": " \\mathbf{y} "
},
{
"math_id": 3,
"text": " \\mathbf{y} \\sim \\mathbf{C} \\, \\mathbf{x} "
},
{
"math_id": 4,
"text": " \\mathbf{C} "
},
{
"math_id": 5,
"text": "\\, \\sim "
},
{
"math_id": 6,
"text": "k \\neq 0"
},
{
"math_id": 7,
"text": " \\mathbf{y} = k \\, \\mathbf{C} \\, \\mathbf{x} . "
},
{
"math_id": 8,
"text": " \\begin{pmatrix} y_1 \\\\ y_2 \\end{pmatrix} = \\frac{f}{x_3} \\begin{pmatrix} x_1 \\\\ x_2 \\end{pmatrix} "
},
{
"math_id": 9,
"text": " (x_1, x_2, x_3) "
},
{
"math_id": 10,
"text": " (y_1, y_2) "
},
{
"math_id": 11,
"text": " (y_1,y_2) "
},
{
"math_id": 12,
"text": " \\mathbf{y} = (y_1,y_2,1) "
},
{
"math_id": 13,
"text": " \\begin{pmatrix} y_1 \\\\ y_2 \\\\ 1 \\end{pmatrix} = \\begin{pmatrix} \\frac{f}{x_3} x_1 \\\\ \\frac{f}{x_3} x_2 \\\\ 1 \\end{pmatrix} \\sim \\begin{pmatrix} x_1 \\\\ x_2 \\\\ \\frac{x_3}{f} \\end{pmatrix} "
},
{
"math_id": 14,
"text": " \\begin{pmatrix} y_1 \\\\ y_2 \\\\ 1 \\end{pmatrix} \\sim \\begin{pmatrix} 1 & 0 & 0 & 0 \\\\ 0 & 1 & 0 & 0 \\\\ 0 & 0 & \\frac{1}{f} & 0 \\end{pmatrix} \\, \\begin{pmatrix} x_1 \\\\ x_2 \\\\ x_3 \\\\ 1 \\end{pmatrix} "
},
{
"math_id": 15,
"text": " \\mathbf{C} = \\begin{pmatrix} 1 & 0 & 0 & 0 \\\\ 0 & 1 & 0 & 0 \\\\ 0 & 0 & \\frac{1}{f} & 0 \\end{pmatrix} "
},
{
"math_id": 16,
"text": " \\mathbf{C} = \\begin{pmatrix} 1 & 0 & 0 & 0 \\\\ 0 & 1 & 0 & 0 \\\\ 0 & 0 & \\frac{1}{f} & 0 \\end{pmatrix} \\sim \\begin{pmatrix} f & 0 & 0 & 0 \\\\ 0 & f & 0 & 0 \\\\ 0 & 0 & 1 & 0 \\end{pmatrix} "
},
{
"math_id": 17,
"text": " \\mathbf{n} = \\begin{pmatrix} 0 \\\\ 0 \\\\ 0 \\\\ 1 \\end{pmatrix} "
},
{
"math_id": 18,
"text": "x_3 = 0"
},
{
"math_id": 19,
"text": " \\mathbf{y} \\sim\\mathbf{C}\\,\\mathbf{x} "
},
{
"math_id": 20,
"text": " \\mathbf{y} = (y_1\\,y_2\\,0)^\\top "
},
{
"math_id": 21,
"text": " \\mathbf{C}_{0} = \\begin{pmatrix} 1 & 0 & 0 & 0 \\\\ 0 & 1 & 0 & 0 \\\\ 0 & 0 & 1 & 0 \\end{pmatrix} = \\left ( \\begin{array}{c|c} \\mathbf{I} & \\mathbf{0} \\end{array} \\right ) "
},
{
"math_id": 22,
"text": " \\mathbf{I} "
},
{
"math_id": 23,
"text": " 3 \\times 3 "
},
{
"math_id": 24,
"text": " 3 \\times 4 "
},
{
"math_id": 25,
"text": " \\mathbf{C}_{0} "
},
{
"math_id": 26,
"text": " 4 \\times 4 "
},
{
"math_id": 27,
"text": " \\left ( \\begin{array}{c|c} \\mathbf{R} & \\mathbf{0} \\\\ \\hline \\mathbf{0} & 1 \\end{array} \\right ) "
},
{
"math_id": 28,
"text": " \\left ( \\begin{array}{c|c} \\mathbf{I} & \\mathbf{t} \\\\ \\hline \\mathbf{0} & 1 \\end{array} \\right ) "
},
{
"math_id": 29,
"text": " \\mathbf{R} "
},
{
"math_id": 30,
"text": " \\mathbf{t} "
},
{
"math_id": 31,
"text": " \\left ( \\begin{array}{c|c} \\mathbf{R} & \\mathbf{t} \\\\ \\hline \\mathbf{0} & 1 \\end{array} \\right ) "
},
{
"math_id": 32,
"text": " \\mathbf{x} = \\left ( \\begin{array}{c|c} \\mathbf{R} & \\mathbf{t} \\\\ \\hline \\mathbf{0} & 1 \\end{array} \\right ) \\mathbf{x}' "
},
{
"math_id": 33,
"text": " \\mathbf{x}' "
},
{
"math_id": 34,
"text": " \\mathbf{y} \\sim \\mathbf{C}_{0} \\, \\mathbf{x} = \\left ( \\begin{array}{c|c} \\mathbf{I} & \\mathbf{0} \\end{array} \\right ) \\, \\left ( \\begin{array}{c|c} \\mathbf{R} & \\mathbf{t} \\\\ \\hline \\mathbf{0} & 1 \\end{array} \\right ) \\mathbf{x}' = \\left ( \\begin{array}{c|c} \\mathbf{R} & \\mathbf{t} \\end{array} \\right ) \\, \\mathbf{x}' "
},
{
"math_id": 35,
"text": " \\mathbf{C}_{N} = \\left ( \\begin{array}{c|c} \\mathbf{R} & \\mathbf{t} \\end{array} \\right ) "
},
{
"math_id": 36,
"text": " \\mathbf{C}_{N} "
},
{
"math_id": 37,
"text": " \\mathbf{n} = \\begin{pmatrix} -\\mathbf{R}^{-1} \\, \\mathbf{t} \\\\ 1 \\end{pmatrix} = \\begin{pmatrix} \\tilde{\\mathbf{n}} \\\\ 1 \\end{pmatrix}"
},
{
"math_id": 38,
"text": " \\tilde{\\mathbf{n}} "
},
{
"math_id": 39,
"text": " \\mathbf{C}_{N} = \\mathbf{R} \\, \\left ( \\begin{array}{c|c} \\mathbf{I} & \\mathbf{R}^{-1} \\, \\mathbf{t} \\end{array} \\right ) = \\mathbf{R} \\, \\left ( \\begin{array}{c|c} \\mathbf{I} & -\\tilde{\\mathbf{n}} \\end{array} \\right ) "
},
{
"math_id": 40,
"text": " \\mathbf{H} "
},
{
"math_id": 41,
"text": " \\mathbf{y}' "
},
{
"math_id": 42,
"text": " \\mathbf{y}' = \\mathbf{H} \\, \\mathbf{y} "
},
{
"math_id": 43,
"text": " \\mathbf{y}' = \\mathbf{H} \\, \\mathbf{C}_{N} \\, \\mathbf{x}' "
},
{
"math_id": 44,
"text": " \\mathbf{C} = \\mathbf{H} \\, \\mathbf{C}_{N} = \\mathbf{H} \\, \\left ( \\begin{array}{c|c} \\mathbf{R} & \\mathbf{t} \\end{array} \\right ) "
}
] | https://en.wikipedia.org/wiki?curid=12365918 |
1236592 | Axiom of projective determinacy | In mathematical logic, projective determinacy is the special case of the axiom of determinacy applying only to projective sets.
The axiom of projective determinacy, abbreviated PD, states that for any two-player infinite game of perfect information of length ω in which the players play natural numbers, if the victory set (for either player, since the projective sets are closed under complementation) is projective, then one player or the other has a winning strategy.
The axiom is not a theorem of ZFC (assuming ZFC is consistent), but unlike the full axiom of determinacy (AD), which contradicts the axiom of choice, it is not known to be inconsistent with ZFC. PD follows from certain large cardinal axioms, such as the existence of infinitely many Woodin cardinals.
Consequences.
PD implies that all projective sets are Lebesgue measurable (in fact, universally measurable) and have the perfect set property and the property of Baire. It also implies that every projective binary relation may be uniformized by a projective set.
PD implies that for all positive integers formula_0, there is a largest countable formula_1 set.
References.
<templatestyles src="Refbegin/styles.css" />
Citations.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "\\Sigma^1_{2n}"
}
] | https://en.wikipedia.org/wiki?curid=1236592 |
12366193 | Hilbert–Schmidt integral operator | In mathematics, a Hilbert–Schmidt integral operator is a type of integral transform. Specifically, given a domain Ω in "n"-dimensional Euclidean space R"n", then the square-integrable function "k" : Ω × Ω → C belonging to "L"2(Ω×Ω) such that
formula_0
is called a Hilbert–Schmidt kernel and the associated integral operator "T" : "L"2(Ω) → "L"2(Ω) given by
formula_1
is called a Hilbert–Schmidt integral operator.
Then "T" is a Hilbert–Schmidt operator with Hilbert–Schmidt norm
formula_2
Hilbert–Schmidt integral operators are both continuous and compact.
The concept of a Hilbert–Schmidt operator may be extended to any locally compact Hausdorff space. Specifically, let "L"2("X") be a separable Hilbert space and "X" a locally compact Hausdorff space equipped with a positive Borel measure. The initial condition on the kernel "k" on Ω ⊆ Rn can be reinterpreted as demanding that "k" belong to "L"2("X × X"). Then the operator
formula_3
is compact. If
formula_4
then "T" is also self-adjoint and so the spectral theorem applies. This is one of the fundamental constructions of such operators, which often reduces problems about infinite-dimensional vector spaces to questions about well-understood finite-dimensional eigenspaces.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\int_{\\Omega} \\int_{\\Omega} | k(x, y) |^{2} \\,dx \\, dy < \\infty ,"
},
{
"math_id": 1,
"text": "(Tf) (x) = \\int_{\\Omega} k(x, y) f(y) \\, dy, \\quad f \\in L^2(\\Omega),"
},
{
"math_id": 2,
"text": "\\Vert T \\Vert_\\mathrm{HS} = \\Vert k \\Vert_{L^2}."
},
{
"math_id": 3,
"text": "(Tf)(x) = \\int_{X} k(x,y)f(y)\\,dy,"
},
{
"math_id": 4,
"text": "k(x,y) = \\overline{k(y,x)},"
}
] | https://en.wikipedia.org/wiki?curid=12366193 |
1236666 | Pointed space | Topological space with a distinguished point
In mathematics, a pointed space or based space is a topological space with a distinguished point, the basepoint. The distinguished point is simply one particular point, picked out from the space and given a name, such as formula_0 that remains unchanged during subsequent discussion, and is kept track of during all operations.
Maps of pointed spaces (based maps) are continuous maps preserving basepoints, i.e., a map formula_1 between a pointed space formula_2 with basepoint formula_3 and a pointed space formula_4 with basepoint formula_5 is a based map if it is continuous with respect to the topologies of formula_2 and formula_4 and if formula_6 This is usually denoted
formula_7
Pointed spaces are important in algebraic topology, particularly in homotopy theory, where many constructions, such as the fundamental group, depend on a choice of basepoint.
The pointed set concept is less important; it is anyway the case of a pointed discrete space.
Pointed spaces are often taken as a special case of the relative topology, where the subset is a single point. Thus, much of homotopy theory is usually developed on pointed spaces, and then moved to relative topologies in algebraic topology.
Category of pointed spaces.
The class of all pointed spaces forms a category Topformula_8 with basepoint preserving continuous maps as morphisms. Another way to think about this category is as the comma category, (formula_9 Top) where formula_10 is any one point space and Top is the category of topological spaces. (This is also called a coslice category denoted formula_11Top.) Objects in this category are continuous maps formula_12 Such maps can be thought of as picking out a basepoint in formula_13 Morphisms in (formula_9 Top) are morphisms in Top for which the following diagram commutes:
It is easy to see that commutativity of the diagram is equivalent to the condition that formula_1 preserves basepoints.
As a pointed space, formula_10 is a zero object in Topformula_10, while it is only a terminal object in Top.
There is a forgetful functor Topformula_10 formula_14 Top which "forgets" which point is the basepoint. This functor has a left adjoint which assigns to each topological space formula_2 the disjoint union of formula_2 and a one-point space formula_10 whose single element is taken to be the basepoint. | [
{
"math_id": 0,
"text": "x_0,"
},
{
"math_id": 1,
"text": "f"
},
{
"math_id": 2,
"text": "X"
},
{
"math_id": 3,
"text": "x_0"
},
{
"math_id": 4,
"text": "Y"
},
{
"math_id": 5,
"text": "y_0"
},
{
"math_id": 6,
"text": "f\\left(x_0\\right) = y_0."
},
{
"math_id": 7,
"text": "f : \\left(X, x_0\\right) \\to \\left(Y, y_0\\right)."
},
{
"math_id": 8,
"text": "\\bull"
},
{
"math_id": 9,
"text": "\\{ \\bull \\} \\downarrow "
},
{
"math_id": 10,
"text": "\\{ \\bull \\}"
},
{
"math_id": 11,
"text": "\\{ \\bull \\} /"
},
{
"math_id": 12,
"text": "\\{ \\bull \\} \\to X."
},
{
"math_id": 13,
"text": "X."
},
{
"math_id": 14,
"text": "\\to"
},
{
"math_id": 15,
"text": "A \\subseteq X"
},
{
"math_id": 16,
"text": "\\left(X, x_0\\right),"
},
{
"math_id": 17,
"text": "\\left(Y, y_0\\right)"
},
{
"math_id": 18,
"text": "X \\times Y"
},
{
"math_id": 19,
"text": "\\left(x_0, y_0\\right)"
},
{
"math_id": 20,
"text": "\\Sigma X"
},
{
"math_id": 21,
"text": "S^1."
},
{
"math_id": 22,
"text": "\\Omega"
},
{
"math_id": 23,
"text": "\\Omega X"
}
] | https://en.wikipedia.org/wiki?curid=1236666 |
12367835 | Prüfer domain | In mathematics, a Prüfer domain is a type of commutative ring that generalizes Dedekind domains in a non-Noetherian context. These rings possess the nice ideal and module theoretic properties of Dedekind domains, but usually only for finitely generated modules. Prüfer domains are named after the German mathematician Heinz Prüfer.
Examples.
The ring of entire functions on the open complex plane formula_0 forms a Prüfer domain. The ring of integer-valued polynomials with rational coefficients is a Prüfer domain, although the ring formula_1 of integer polynomials is not. While every number ring is a Dedekind domain, their union, the ring of algebraic integers, is a Prüfer domain. Just as a Dedekind domain is locally a discrete valuation ring, a Prüfer domain is locally a valuation ring, so that Prüfer domains act as non-Noetherian analogues of Dedekind domains. Indeed, a domain that is the direct limit of subrings that are Prüfer domains is a Prüfer domain.
Many Prüfer domains are also Bézout domains, that is, not only are finitely generated ideals projective, they are even free (that is, principal). For instance the ring of analytic functions on any non-compact Riemann surface is a Bézout domain , and the ring of algebraic integers is Bézout.
Definitions.
A Prüfer domain is a semihereditary integral domain. Equivalently, a Prüfer domain may be defined as a commutative ring without zero divisors in which every non-zero finitely generated ideal is invertible. Many different characterizations of Prüfer domains are known. Bourbaki lists fourteen of them, has around forty, and open with nine.
As a sample, the following conditions on an integral domain "R" are equivalent to "R" being a Prüfer domain, i.e. every finitely generated ideal of "R" is projective:
formula_5
formula_6
formula_7
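As a concrete sanity check of the first distributivity condition displayed above, one can work in Z, which is a principal ideal domain and hence a Prüfer domain: for principal ideals of Z, the sum of ideals corresponds to the gcd of the generators and the intersection to their lcm. The following Python sketch is an illustration added here, not part of the article.

```python
# Checking I ∩ (J + K) = (I ∩ J) + (I ∩ K) for ideals of Z.
# For principal ideals (a), (b) of Z: sum <-> gcd, intersection <-> lcm.
from math import gcd
from random import randint

def lcm(a, b):
    return a * b // gcd(a, b)

for _ in range(1000):
    i, j, k = (randint(1, 10**6) for _ in range(3))
    lhs = lcm(i, gcd(j, k))            # generator of I ∩ (J + K)
    rhs = gcd(lcm(i, j), lcm(i, k))    # generator of (I ∩ J) + (I ∩ K)
    assert lhs == rhs
print("distributivity holds on all random samples")
```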
Generalizations.
More generally, a Prüfer ring is a commutative ring in which every non-zero finitely generated ideal containing a non-zero-divisor is invertible (that is, projective).
A commutative ring is said to be arithmetical if for every maximal ideal "m" in "R", the localization "R""m" of "R" at "m" is a chain ring. With this definition, a Prüfer domain is an arithmetical domain. In fact, an arithmetical domain is the same thing as a Prüfer domain.
Non-commutative right or left semihereditary domains could also be considered as generalizations of Prüfer domains. | [
{
"math_id": 0,
"text": "C"
},
{
"math_id": 1,
"text": "\\mathbb{Z}[X]"
},
{
"math_id": 2,
"text": "\\ I \\cdot I^{-1} = R"
},
{
"math_id": 3,
"text": "I^{-1} = \\{r\\in q(R): rI\\subseteq R\\}"
},
{
"math_id": 4,
"text": "q(R)"
},
{
"math_id": 5,
"text": " I \\cap (J + K) = (I \\cap J) + (I \\cap K). "
},
{
"math_id": 6,
"text": " I(J \\cap K)=IJ \\cap IK. "
},
{
"math_id": 7,
"text": " (I+J)(I \\cap J) = IJ. "
},
{
"math_id": 8,
"text": "R"
},
{
"math_id": 9,
"text": "n"
},
{
"math_id": 10,
"text": "a"
},
{
"math_id": 11,
"text": "b"
},
{
"math_id": 12,
"text": "(a,b)^n=(a^n,b^n)"
},
{
"math_id": 13,
"text": "K"
},
{
"math_id": 14,
"text": "R[x]"
},
{
"math_id": 15,
"text": "S"
},
{
"math_id": 16,
"text": "R[X]"
}
] | https://en.wikipedia.org/wiki?curid=12367835 |
12368347 | Radio-frequency microelectromechanical system | A radio-frequency microelectromechanical system (RF MEMS) is a microelectromechanical system with electronic components comprising moving sub-millimeter-sized parts that provide radio-frequency (RF) functionality. RF functionality can be implemented using a variety of RF technologies. Besides RF MEMS technology, III-V compound semiconductor (GaAs, GaN, InP, InSb), ferrite, ferroelectric, silicon-based semiconductor (RF CMOS, SiC and SiGe), and vacuum tube technology are available to the RF designer. Each of the RF technologies offers a distinct trade-off between cost, frequency, gain, large-scale integration, lifetime, linearity, noise figure, packaging, power handling, power consumption, reliability, ruggedness, size, supply voltage, switching time and weight.
Components.
There are various types of RF MEMS components, such as CMOS integrable RF MEMS resonators and self-sustained oscillators with small form factor and low phase noise, RF MEMS tunable inductors, and RF MEMS switches, switched capacitors and varactors.
Switches, switched capacitors and varactors.
The components discussed in this article are based on RF MEMS switches, switched capacitors and varactors. These components can be used instead of FET and HEMT switches (FET and HEMT transistors in common gate configuration), and PIN diodes. RF MEMS switches, switched capacitors and varactors are classified by actuation method (electrostatic, electrothermal, magnetostatic, piezoelectric), by axis of deflection (lateral, vertical), by circuit configuration (series, shunt), by clamp configuration (cantilever, fixed-fixed beam), or by contact interface (capacitive, ohmic). Electrostatically actuated RF MEMS components offer low insertion loss and high isolation, linearity, power handling and Q factor, do not consume power, but require a high control voltage and hermetic single-chip packaging (thin film capping, LCP or LTCC packaging) or wafer-level packaging (anodic or glass frit wafer bonding).
RF MEMS switches were pioneered by IBM Research Laboratory, San Jose, CA, Hughes Research Laboratories, Malibu, CA, Northeastern University in cooperation with Analog Devices, Boston, MA, Raytheon, Dallas, TX, and Rockwell Science, Thousand Oaks, CA. A capacitive fixed-fixed beam RF MEMS switch, as shown in Fig. 1(a), is in essence a micro-machined capacitor with a moving top electrode, which is the beam. It is generally connected in shunt with the transmission line and used in X- to W-band (77 GHz and 94 GHz) RF MEMS components. An ohmic cantilever RF MEMS switch, as shown in Fig. 1(b), is capacitive in the up-state, but makes an ohmic contact in the down-state. It is generally connected in series with the transmission line and is used in DC to the Ka-band components.
From an electromechanical perspective, the components behave like a damped mass-spring system, actuated by an electrostatic force. The spring constant is a function of the dimensions of the beam, as well as the Young's modulus, the residual stress and the Poisson ratio of the beam material. The electrostatic force is a function of the capacitance and the bias voltage. Knowledge of the spring constant allows for hand calculation of the pull-in voltage, which is the bias voltage necessary to pull-in the beam, whereas knowledge of the spring constant and the mass allows for hand calculation of the switching time.
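As an illustration of such a hand calculation, the following Python sketch uses the standard parallel-plate approximation for an electrostatic actuator (an assumption introduced here, not a formula quoted from this article); all component values are hypothetical.

```python
# Hand calculation of pull-in voltage and mechanical resonant frequency for an
# electrostatically actuated beam, in the parallel-plate approximation.
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def pull_in_voltage(k, gap, area):
    """V_pi = sqrt(8 k g0^3 / (27 eps0 A)) for a parallel-plate actuator."""
    return math.sqrt(8 * k * gap**3 / (27 * EPS0 * area))

def resonant_frequency(k, mass):
    """Mechanical resonant frequency f0 = sqrt(k/m) / (2 pi)."""
    return math.sqrt(k / mass) / (2 * math.pi)

if __name__ == "__main__":
    k = 10.0               # spring constant, N/m (hypothetical)
    g0 = 3.0e-6            # electrode gap, m (hypothetical)
    A = 100e-6 * 100e-6    # electrode area, m^2 (hypothetical)
    m = 1.0e-10            # beam mass, kg (hypothetical)
    print(f"pull-in voltage    ~ {pull_in_voltage(k, g0, A):.1f} V")
    print(f"resonant frequency ~ {resonant_frequency(k, m)/1e3:.0f} kHz")
```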
From an RF perspective, the components behave like a series RLC circuit with negligible resistance and inductance. The up- and down-state capacitance are in the order of 50 fF and 1.2 pF, which are functional values for millimeter-wave circuit design. Switches typically have a capacitance ratio of 30 or higher, while switched capacitors and varactors have a capacitance ratio of about 1.2 to 10. The loaded Q factor is between 20 and 50 in the X-, Ku- and Ka-band.
RF MEMS switched capacitors are capacitive fixed-fixed beam switches with a low capacitance ratio. RF MEMS varactors are capacitive fixed-fixed beam switches which are biased below pull-in voltage. Other examples of RF MEMS switches are ohmic cantilever switches, and capacitive single pole N throw (SPNT) switches based on the axial gap wobble motor.
Biasing.
RF MEMS components are biased electrostatically using a bipolar NRZ drive voltage, as shown in Fig. 2, in order to avoid dielectric charging and to increase the lifetime of the device. Dielectric charges exert a permanent electrostatic force on the beam. The use of a bipolar NRZ drive voltage instead of a DC drive voltage avoids dielectric charging whereas the electrostatic force exerted on the beam is maintained, because the electrostatic force varies quadratically with the DC drive voltage. Electrostatic biasing implies no current flow, allowing high-resistivity bias lines to be used instead of RF chokes.
Packaging.
RF MEMS components are fragile and require wafer level packaging or single chip packaging which allow for hermetic cavity sealing. A cavity is required to allow movement, whereas hermeticity is required to prevent cancellation of the spring force by the Van der Waals force exerted by water droplets and other contaminants on the beam. RF MEMS switches, switched capacitors and varactors can be packaged using wafer level packaging. Large monolithic RF MEMS filters, phase shifters, and tunable matching networks require single chip packaging.
Wafer-level packaging is implemented before wafer dicing, as shown in Fig. 3(a), and is based on anodic, metal diffusion, metal eutectic, glass frit, polymer adhesive, and silicon fusion wafer bonding. The selection of a wafer-level packaging technique is based on balancing the thermal expansion coefficients of the material layers of the RF MEMS component and those of the substrates to minimize the wafer bow and the residual stress, as well as on alignment and hermeticity requirements. Figures of merit for wafer-level packaging techniques are chip size, hermeticity, processing temperature, (in)tolerance to alignment errors and surface roughness. Anodic and silicon fusion bonding do not require an intermediate layer, but do not tolerate surface roughness. Wafer-level packaging techniques based on a bonding technique with a conductive intermediate layer (conductive split ring) restrict the bandwidth and isolation of the RF MEMS component. The most common wafer-level packaging techniques are based on anodic and glass frit wafer bonding. Wafer-level packaging techniques, enhanced with vertical interconnects, offer the opportunity of three-dimensional integration.
Single-chip packaging, as shown in Fig. 3(b), is implemented after wafer dicing, using pre-fabricated ceramic or organic packages, such as LCP injection molded packages or LTCC packages. Pre-fabricated packages require hermetic cavity sealing through clogging, shedding, soldering or welding. Figures of merit for single-chip packaging techniques are chip size, hermeticity, and processing temperature.
Microfabrication.
An RF MEMS fabrication process is based on surface micromachining techniques, and allows for integration of SiCr or TaN thin film resistors (TFR), metal-air-metal (MAM) capacitors, metal-insulator-metal (MIM) capacitors, and RF MEMS components. An RF MEMS fabrication process can be realized on a variety of wafers: III-V compound semi-insulating, borosilicate glass, fused silica (quartz), LCP, sapphire, and passivated silicon wafers. As shown in Fig. 4, RF MEMS components can be fabricated in class 100 clean rooms using 6 to 8 optical lithography steps with a 5 μm contact alignment error, whereas state-of-the-art MMIC and RFIC fabrication processes require 13 to 25 lithography steps.
As outlined in Fig. 4, the essential microfabrication steps are:
With the exception of the removal of the sacrificial spacer, which requires critical point drying, the fabrication steps are similar to CMOS fabrication process steps. RF MEMS fabrication processes, unlike BST or PZT ferroelectric and MMIC fabrication processes, do not require electron beam lithography, MBE, or MOCVD.
Reliability.
Contact interface degradation poses a reliability issue for ohmic cantilever RF MEMS switches, whereas dielectric charging beam stiction, as shown in Fig. 5(a), and humidity induced beam stiction, as shown in Fig. 5(b), pose a reliability issue for capacitive fixed-fixed beam RF MEMS switches. Stiction is the inability of the beam to release after removal of the drive voltage. A high contact pressure assures a low-ohmic contact or alleviates dielectric charging induced beam stiction. Commercially available ohmic cantilever RF MEMS switches and capacitive fixed-fixed beam RF MEMS switches have demonstrated lifetimes in excess of 100 billion cycles at 100 mW of RF input power. Reliability issues pertaining to high-power operation are discussed in the limiter section.
Applications.
RF MEMS resonators are applied in filters and reference oscillators. RF MEMS switches, switched capacitors and varactors are applied in electronically scanned (sub)arrays (phase shifters) and software-defined radios (reconfigurable antennas, tunable band-pass filters).
Antennas.
Polarization and radiation pattern reconfigurability, and frequency tunability, are usually achieved by incorporation of III-V semiconductor components, such as SPST switches or varactor diodes. However, these components can be readily replaced by RF MEMS switches and varactors in order to take advantage of the low insertion loss and high Q factor offered by RF MEMS technology. In addition, RF MEMS components can be integrated monolithically on low-loss dielectric substrates, such as borosilicate glass, fused silica or LCP, whereas III-V compound semi-insulating and passivated silicon substrates are generally lossier and have a higher dielectric constant. A low loss tangent and low dielectric constant are of importance for the efficiency and the bandwidth of the antenna.
The prior art includes an RF MEMS frequency tunable fractal antenna for the 0.1–6 GHz frequency range, and the actual integration of RF MEMS switches on a self-similar Sierpinski gasket antenna to increase its number of resonant frequencies, extending its range to 8 GHz, 14 GHz and 25 GHz, an RF MEMS radiation pattern reconfigurable spiral antenna for 6 and 10 GHz, an RF MEMS radiation pattern reconfigurable spiral antenna for the 6–7 GHz frequency band based on packaged Radant MEMS SPST-RMSW100 switches, an RF MEMS multiband Sierpinski fractal antenna, again with integrated RF MEMS switches, functioning at different bands from 2.4 to 18 GHz, and a 2-bit Ka-band RF MEMS frequency tunable slot antenna.
The Samsung Omnia W was the first smart phone to include a RF MEMS antenna.
Filters.
RF bandpass filters can be used to increase out-of-band rejection, in case the antenna fails to provide sufficient selectivity. Out-of-band rejection eases the dynamic range requirement on the LNA and the mixer in the light of interference. Off-chip RF bandpass filters based on lumped bulk acoustic wave (BAW), ceramic, SAW, quartz crystal, and FBAR resonators have superseded distributed RF bandpass filters based on transmission line resonators, printed on substrates with low loss tangent, or based on waveguide cavities.
Tunable RF bandpass filters offer a significant size reduction over switched RF bandpass filter banks. They can be implemented using III-V semiconducting varactors, BST or PZT ferroelectric and RF MEMS resonators and switches, switched capacitors and varactors, and YIG ferrites. RF MEMS resonators offer the potential of on-chip integration of high-Q resonators and low-loss bandpass filters. The Q factor of RF MEMS resonators is in the order of 100–1000. RF MEMS switch, switched capacitor and varactor technology, offers the tunable filter designer a compelling trade-off between insertion loss, linearity, power consumption, power handling, size, and switching time.
Phase shifters.
Passive subarrays based on RF MEMS phase shifters may be used to lower the number of T/R modules in an active electronically scanned array. The statement is illustrated with examples in Fig. 6: assume a one-by-eight passive subarray is used for transmit as well as receive, with the following characteristics: f = 38 GHz, Gr = Gt = 10 dBi, BW = 2 GHz, Pt = 4 W. The low loss (6.75 ps/dB) and good power handling (500 mW) of the RF MEMS phase shifters allow an EIRP of 40 W and a Gr/T of 0.036 1/K. EIRP, also referred to as the power-aperture product, is the product of the transmit gain, Gt, and the transmit power, Pt. Gr/T is the quotient of the receive gain and the antenna noise temperature. A high EIRP and Gr/T are a prerequisite for long-range detection. The EIRP and Gr/T are a function of the number of antenna elements per subarray and of the maximum scanning angle. The number of antenna elements per subarray should be chosen in order to optimize the EIRP or the EIRP x Gr/T product, as shown in Fig. 7 and Fig. 8. The radar range equation can be used to calculate the maximum range for which targets can be detected with 10 dB of SNR at the input of the receiver.
formula_0
in which kB is the Boltzmann constant, λ is the free-space wavelength, and σ is the RCS of the target. Range values are tabulated in Table 1 for following targets: a sphere with a radius, a, of 10 cm (σ = π a2), a dihedral corner reflector with facet size, a, of 10 cm (σ = 12 a4/λ2), the rear of a car (σ = 20 m2) and for a non-evasive fighter jet (σ = 400 m2).
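The radar range equation above can be evaluated directly for the example parameters quoted in the text. The following Python sketch is an illustration added here; the printed ranges follow from the stated formula and parameters, not from the article's Table 1.

```python
# Evaluating the radar range equation for the example subarray parameters.
import math

kB = 1.380649e-23          # Boltzmann constant, J/K
c = 2.998e8                # speed of light, m/s

f = 38e9                   # Hz
lam = c / f                # free-space wavelength
EIRP = 40.0                # W (from the text)
Gr_over_T = 0.036          # 1/K (from the text)
BW = 2e9                   # Hz
SNR = 10.0                 # 10 dB expressed as a linear ratio

def radar_range(rcs):
    """Maximum detection range R (m) for a target with radar cross-section rcs (m^2)."""
    num = lam**2 * EIRP * Gr_over_T * rcs
    den = 64 * math.pi**3 * kB * BW * SNR
    return (num / den) ** 0.25

a = 0.1                                    # 10 cm
targets = {
    "sphere (a = 10 cm)": math.pi * a**2,
    "dihedral corner reflector (a = 10 cm)": 12 * a**4 / lam**2,
    "rear of a car": 20.0,
    "fighter jet": 400.0,
}
for name, rcs in targets.items():
    print(f"{name}: R ~ {radar_range(rcs):.0f} m")
```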
RF MEMS phase shifters enable wide-angle passive electronically scanned arrays, such as lens arrays, reflect arrays, subarrays and switched beamforming networks, with high EIRP and high Gr/T. The prior art in passive electronically scanned arrays, includes an X-band continuous transverse stub (CTS) array fed by a line source synthesized by sixteen 5-bit reflect-type RF MEMS phase shifters based on ohmic cantilever RF MEMS switches, an X-band 2-D lens array consisting of parallel-plate waveguides and featuring 25,000 ohmic cantilever RF MEMS switches, and a W-band switched beamforming network based on an RF MEMS SP4T switch and a Rotman lens focal plane scanner.
The usage of true-time-delay TTD phase shifters instead of RF MEMS phase shifters allows UWB radar waveforms with associated high range resolution, and avoids beam squinting or frequency scanning. TTD phase shifters are designed using the switched-line principle or the distributed loaded-line principle. Switched-line TTD phase shifters outperform distributed loaded-line TTD phase shifters in terms of time delay per decibel NF, especially at frequencies up to X-band, but are inherently digital and require low-loss and high-isolation SPNT switches. Distributed loaded-line TTD phase shifters, however, can be realized analogously or digitally, and in smaller form factors, which is important at the subarray level. Analog phase shifters are biased through a single bias line, whereas multibit digital phase shifters require a parallel bus along with complex routing schemes at the subarray level.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "{\\mathrm{R = \\sqrt[4]{\\frac{\\displaystyle {\\mathrm{\\lambda^2 \\, EIRP \\, G_R/T \\, \\sigma}}}{{\\mathrm{\\displaystyle 64 \\, \\pi^3 \\, k_B \\, BW \\, SNR}}}}}}"
}
] | https://en.wikipedia.org/wiki?curid=12368347 |
12374236 | Absoluteness (logic) | Mathematical logic concept
In mathematical logic, a formula is said to be absolute to some class of structures (also called models), if it has the same truth value in each of the members of that class. One can also speak of absoluteness of a formula "between" two structures, if it is absolute to some class which contains both of them. Theorems about absoluteness typically establish relationships between the absoluteness of formulas and their syntactic form.
There are two weaker forms of partial absoluteness. If the truth of a formula in each substructure "N" of a structure "M" follows from its truth in "M", the formula is downward absolute. If the truth of a formula in a structure "N" implies its truth in each structure "M" extending "N", the formula is upward absolute.
Issues of absoluteness are particularly important in set theory and model theory, fields where multiple structures are considered simultaneously. In model theory, several basic results and definitions are motivated by absoluteness. In set theory, the issue of which properties of sets are absolute is well studied. The Shoenfield absoluteness theorem, due to Joseph Shoenfield (1961), establishes the absoluteness of a large class of formulas between a model of set theory and its constructible universe, with important methodological consequences. The absoluteness of large cardinal axioms is also studied, with positive and negative results known.
In model theory.
In model theory, there are several general results and definitions related to absoluteness. A fundamental example of downward absoluteness is that universal sentences (those with only universal quantifiers) that are true in a structure are also true in every substructure of the original structure. Conversely, existential sentences are upward absolute from a structure to any structure containing it.
Two structures are defined to be elementarily equivalent if they agree about the truth value of all sentences in their shared language, that is, if all sentences in their language are absolute between the two structures. A theory is defined to be model complete if whenever "M" and "N" are models of the theory and "M" is a substructure of "N", then "M" is an elementary substructure of "N".
In set theory.
A major part of modern set theory involves the study of different models of ZF and ZFC. It is crucial for the study of such models to know which properties of a set are absolute to different models. It is common to begin with a fixed model of set theory and only consider other transitive models containing the same ordinals as the fixed model.
Certain properties are absolute to all transitive models of set theory, including the following (see Jech (2003 sec. I.12) and Kunen (1980 sec. IV.3)).
Other properties are not absolute:
Failure of absoluteness for countability.
Skolem's paradox is the seeming contradiction that on the one hand, the set of real numbers is uncountable (and this is provable from ZFC, or even from a small finite subsystem ZFC' of ZFC), while on the other hand there are countable transitive models of ZFC' (this is provable in ZFC), and the set of real numbers in such a model will be a countable set. The paradox can be resolved by noting that countability is not absolute to submodels of a particular model of ZFC. It is possible that a set "X" is countable in a model of set theory but uncountable in a submodel containing "X", because the submodel may contain no bijection between "X" and ω, while the definition of countability is the existence of such a bijection. The Löwenheim–Skolem theorem, when applied to ZFC, shows that this situation does occur.
Shoenfield's absoluteness theorem.
Shoenfield's absoluteness theorem shows that formula_0 and formula_1 sentences in the analytical hierarchy are absolute between a model "V" of ZF and the constructible universe "L" of the model, when interpreted as statements about the natural numbers in each model. The theorem can be relativized to allow the sentence to use sets of natural numbers from "V" as parameters, in which case "L" must be replaced by the smallest submodel containing those parameters and all the ordinals. The theorem has corollaries that formula_2 sentences are upward absolute (if such a sentence holds in "L" then it holds in "V") and formula_3 sentences are downward absolute (if they hold in "V" then they hold in "L"). Because any two transitive models of set theory with the same ordinals have the same constructible universe, Shoenfield's theorem shows that two such models must agree about the truth of all formula_0 sentences.
One consequence of Shoenfield's theorem relates to the axiom of choice. Gödel proved that the constructible universe "L" always satisfies ZFC, including the axiom of choice, even when "V" is only assumed to satisfy ZF. Shoenfield's theorem shows that if there is a model of ZF in which a given formula_2 statement "φ" is false, then "φ" is also false in the constructible universe of that model. In contrapositive, this means that if ZFC proves a formula_2 sentence then that sentence is also provable in ZF. The same argument can be applied to any other principle that always holds in the constructible universe, such as the combinatorial principle ◊. Even if these principles are independent of ZF, each of their formula_2 consequences is already provable in ZF. In particular, this includes any of their consequences that can be expressed in the (first-order) language of Peano arithmetic.
Shoenfield's theorem also shows that there are limits to the independence results that can be obtained by forcing. In particular, any sentence of Peano arithmetic is absolute to transitive models of set theory with the same ordinals. Thus it is not possible to use forcing to change the truth value of arithmetical sentences, as forcing does not change the ordinals of the model to which it is applied. Many famous open problems, such as the Riemann hypothesis and the P = NP problem, can be expressed as formula_0 sentences (or sentences of lower complexity), and thus cannot be proven independent of ZFC by forcing.
Large cardinals.
There are certain large cardinals that cannot exist in the constructible universe ("L") of any model of set theory. Nevertheless, the constructible universe contains all the ordinal numbers that the original model of set theory contains. This "paradox" can be resolved by noting that the defining properties of some large cardinals are not absolute to submodels.
One example of such a nonabsolute large cardinal axiom is for measurable cardinals; for an ordinal to be a measurable cardinal there must exist another set (the measure) satisfying certain properties. It can be shown that no such measure is constructible.
References.
Inline citations.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Pi^1_2"
},
{
"math_id": 1,
"text": "\\Sigma^1_2"
},
{
"math_id": 2,
"text": "\\Sigma^1_3"
},
{
"math_id": 3,
"text": "\\Pi^1_3"
}
] | https://en.wikipedia.org/wiki?curid=12374236 |
12374274 | Aliquot sum | Sum of all proper divisors of a natural number
In number theory, the aliquot sum "s"("n") of a positive integer n is the sum of all proper divisors of n, that is, all divisors of n other than n itself.
That is,
formula_0
It can be used to characterize the prime numbers, perfect numbers, sociable numbers, deficient numbers, abundant numbers, and untouchable numbers, and to define the aliquot sequence of a number.
Examples.
For example, the proper divisors of 12 (that is, the positive divisors of 12 that are not equal to 12) are 1, 2, 3, 4, and 6, so the aliquot sum of 12 is 1 + 2 + 3 + 4 + 6 = 16.
The values of "s"("n") for n = 1, 2, 3, ... are:
0, 1, 1, 3, 1, 6, 1, 7, 4, 8, 1, 16, 1, 10, 9, 15, 1, 21, 1, 22, 11, 14, 1, 36, 6, 16, 13, 28, 1, 42, 1, 31, 15, 20, 13, 55, 1, 22, 17, 50, 1, 54, 1, 40, 33, 26, 1, 76, 8, 43, ... (sequence in the OEIS)
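The values above are straightforward to reproduce by trial division. The following Python sketch is an illustration added here, not part of the original article.

```python
# Computing the aliquot sum s(n) by trial division up to sqrt(n).
def aliquot_sum(n: int) -> int:
    """Sum of the proper divisors of n (0 for n = 1)."""
    total = 1 if n > 1 else 0          # 1 divides every n > 1
    d = 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d        # the complementary divisor
        d += 1
    return total

print([aliquot_sum(n) for n in range(1, 13)])
# [0, 1, 1, 3, 1, 6, 1, 7, 4, 8, 1, 16]
```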
Characterization of classes of numbers.
The aliquot sum function can be used to characterize several notable classes of numbers:
The mathematicians noted that one of Erdős' "favorite subjects of investigation" was the aliquot sum function.
Iteration.
Iterating the aliquot sum function produces the aliquot sequence "n", "s"("n"), "s"("s"("n")), … of a nonnegative integer n (in this sequence, we define "s"(0) = 0).
Sociable numbers are numbers whose aliquot sequence is a periodic sequence. Amicable numbers are sociable numbers whose aliquot sequence has period 2.
It remains unknown whether these sequences always end with a prime number, a perfect number, or a periodic sequence of sociable numbers.
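As an illustration (added here, not part of the article), the iteration can be carried out directly, stopping when the sequence reaches 0 or revisits a value; the simple quadratic-time divisor sum below is chosen only for brevity.

```python
# Iterating the aliquot sum to produce the aliquot sequence of n.
def s(n: int) -> int:
    """Aliquot sum by brute force (sufficient for small illustrations)."""
    return sum(d for d in range(1, n) if n % d == 0)

def aliquot_sequence(n: int, max_steps: int = 30):
    seq, seen = [n], {n}
    for _ in range(max_steps):
        n = s(n)
        seq.append(n)
        if n == 0 or n in seen:        # stop at 0 or at a repeated value
            break
        seen.add(n)
    return seq

print(aliquot_sequence(10))    # [10, 8, 7, 1, 0] -- terminates at 0
print(aliquot_sequence(6))     # [6, 6] -- 6 is perfect
print(aliquot_sequence(220))   # [220, 284, 220] -- amicable pair, period 2
```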
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "s(n)=\\sum_{{d|n,} \\atop {d\\ne n}} d \\, ."
}
] | https://en.wikipedia.org/wiki?curid=12374274 |
1237612 | Delta rule | Gradient descent learning rule in machine learning
In machine learning, the delta rule is a gradient descent learning rule for updating the weights of the inputs to artificial neurons in a single-layer neural network. It can be derived as the backpropagation algorithm for a single-layer neural network with mean-square error loss function.
For a neuron formula_0 with activation function formula_1, the delta rule for neuron formula_0's formula_2-th weight formula_3 is given by
formula_4
where
It holds that formula_12 and formula_13.
The delta rule is commonly stated in simplified form for a neuron with a linear activation function as
formula_14
While the delta rule is similar to the perceptron's update rule, the derivation is different. The perceptron uses the Heaviside step function as the activation function formula_15, and that means that formula_16 does not exist at zero, and is equal to zero elsewhere, which makes the direct application of the delta rule impossible.
Derivation of the delta rule.
The delta rule is derived by attempting to minimize the error in the output of the neural network through gradient descent. The error for a neural network with formula_0 outputs can be measured as
formula_17
In this case, we wish to move through "weight space" of the neuron (the space of all possible values of all of the neuron's weights) in proportion to the gradient of the error function with respect to each weight. In order to do that, we calculate the partial derivative of the error with respect to each weight. For the formula_2th weight, this derivative can be written as
formula_18
Because we are only concerning ourselves with the formula_0-th neuron, we can substitute the error formula above while omitting the summation:
formula_19
Next we use the chain rule to split this into two derivatives:
formula_20
To find the left derivative, we simply apply the power rule and the chain rule:
formula_21
To find the right derivative, we again apply the chain rule, this time differentiating with respect to the total input to formula_0, formula_9:
formula_22
Note that the output of the formula_23th neuron, formula_10, is just the neuron's activation function formula_24 applied to the neuron's input formula_9. We can therefore write the derivative of formula_10 with respect to formula_9 simply as formula_24's first derivative:
formula_25
Next we rewrite formula_9 in the last term as the sum over all formula_26 weights of each weight formula_27 times its corresponding input formula_28:
formula_29
Because we are only concerned with the formula_2th weight, the only term of the summation that is relevant is formula_30. Clearly,
formula_31
giving us our final equation for the gradient:
formula_32
As noted above, gradient descent tells us that our change for each weight should be proportional to the gradient. Choosing a proportionality constant formula_5 and eliminating the minus sign to enable us to move the weight in the negative direction of the gradient to minimize error, we arrive at our target equation:
formula_33
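The update rule derived above translates directly into code. The following NumPy sketch is an illustration added here, not part of the original article; it uses a sigmoid activation and arbitrary input, target and learning-rate values.

```python
# One delta-rule weight update for a single neuron with a sigmoid activation.
import numpy as np

def sigmoid(h):
    return 1.0 / (1.0 + np.exp(-h))

def delta_rule_step(w, x, t, alpha=0.5):
    """Return updated weights for one training example (x, t)."""
    h = w @ x                       # total input h_j = sum_i w_ji x_i
    y = sigmoid(h)                  # neuron output y_j = g(h_j)
    g_prime = y * (1.0 - y)         # derivative of the sigmoid at h_j
    return w + alpha * (t - y) * g_prime * x   # w_ji += alpha (t_j - y_j) g'(h_j) x_i

rng = np.random.default_rng(0)
w = rng.normal(size=3)
x = np.array([1.0, 0.5, -1.0])      # input (first entry can act as a bias)
t = 1.0                             # target output

for _ in range(200):
    w = delta_rule_step(w, x, t)
print(sigmoid(w @ x))               # approaches the target 1.0
```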
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "j "
},
{
"math_id": 1,
"text": "g(x) "
},
{
"math_id": 2,
"text": "i "
},
{
"math_id": 3,
"text": "w_{ji} "
},
{
"math_id": 4,
"text": "\\Delta w_{ji} = \\alpha(t_j-y_j) g'(h_j) x_i , "
},
{
"math_id": 5,
"text": "\\alpha "
},
{
"math_id": 6,
"text": "g'"
},
{
"math_id": 7,
"text": "g"
},
{
"math_id": 8,
"text": "t_j "
},
{
"math_id": 9,
"text": "h_j "
},
{
"math_id": 10,
"text": "y_j "
},
{
"math_id": 11,
"text": "x_i "
},
{
"math_id": 12,
"text": "h_j = \\sum_i x_i w_{ji} "
},
{
"math_id": 13,
"text": "y_j=g(h_j) "
},
{
"math_id": 14,
"text": "\\Delta w_{ji} = \\alpha \\left(t_j-y_j\\right) x_i "
},
{
"math_id": 15,
"text": "g(h)"
},
{
"math_id": 16,
"text": "g'(h)"
},
{
"math_id": 17,
"text": "E = \\sum_{j} \\tfrac{1}{2} \\left(t_j-y_j\\right)^2 ."
},
{
"math_id": 18,
"text": "\\frac{\\partial E}{ \\partial w_{ji} } ."
},
{
"math_id": 19,
"text": "\\frac{\\partial E}{ \\partial w_{ji} } = \\frac{ \\partial }{ \\partial w_{ji} } \\left [\\frac{1}{2} \\left( t_j-y_j \\right ) ^2 \\right ] "
},
{
"math_id": 20,
"text": "\\frac{\\partial E}{\\partial w_{ji}} = \\frac{ \\partial \\left ( \\frac{1}{2} \\left( t_j-y_j \\right ) ^2 \\right ) }{ \\partial y_j } \\frac{ \\partial y_j }{ \\partial w_{ji} } "
},
{
"math_id": 21,
"text": "\\frac{\\partial E}{\\partial w_{ji}} = - \\left ( t_j-y_j \\right ) \\frac{ \\partial y_j }{ \\partial w_{ji} } "
},
{
"math_id": 22,
"text": "\\frac{\\partial E}{\\partial w_{ji}} = - \\left ( t_j-y_j \\right ) \\frac{ \\partial y_j }{ \\partial h_j } \\frac{ \\partial h_j }{ \\partial w_{ji} } "
},
{
"math_id": 23,
"text": "j"
},
{
"math_id": 24,
"text": "g "
},
{
"math_id": 25,
"text": "\\frac{\\partial E}{\\partial w_{ji}} = - \\left ( t_j-y_j \\right ) g'(h_j) \\frac{ \\partial h_j }{ \\partial w_{ji} } "
},
{
"math_id": 26,
"text": "k "
},
{
"math_id": 27,
"text": "w_{jk} "
},
{
"math_id": 28,
"text": "x_k "
},
{
"math_id": 29,
"text": "\\frac{\\partial E}{\\partial w_{ji}} = - \\left ( t_j-y_j \\right ) g'(h_j) \\; \\frac{ \\partial}{ \\partial w_{ji} } \\!\\!\\left[ \\sum_{i} x_i w_{ji} \\right] "
},
{
"math_id": 30,
"text": "x_i w_{ji} "
},
{
"math_id": 31,
"text": "\\frac{ \\partial (x_i w_{ji}) }{ \\partial w_{ji} } = x_i. "
},
{
"math_id": 32,
"text": "\\frac{\\partial E}{ \\partial w_{ji} } = - \\left ( t_j-y_j \\right ) g'(h_j) x_i "
},
{
"math_id": 33,
"text": "\\Delta w_{ji}=\\alpha(t_j-y_j) g'(h_j) x_i ."
}
] | https://en.wikipedia.org/wiki?curid=1237612 |
1237700 | Hyperbolic 3-manifold | Manifold of dimension 3 equipped with a hyperbolic metric
In mathematics, more precisely in topology and differential geometry, a hyperbolic 3-manifold is a manifold of dimension 3 equipped with a hyperbolic metric, that is a Riemannian metric which has all its sectional curvatures equal to −1. It is generally required that this metric be also complete: in this case the manifold can be realised as a quotient of the 3-dimensional hyperbolic space by a discrete group of isometries (a Kleinian group).
Hyperbolic 3-manifolds of finite volume have a particular importance in 3-dimensional topology as follows from Thurston's geometrisation conjecture proved by Perelman. The study of Kleinian groups is also an important topic in geometric group theory.
Importance in topology.
Hyperbolic geometry is the richest and least understood of the eight geometries in dimension 3 (for example, for all other geometries it is not hard to give an explicit enumeration of the finite-volume manifolds with this geometry, while this is far from being the case for hyperbolic manifolds). After the proof of the Geometrisation conjecture, understanding the topological properties of hyperbolic 3-manifolds is thus a major goal of 3-dimensional topology. Recent breakthroughs by Kahn–Markovic, Wise, Agol and others have answered most long-standing open questions on the topic, but there are still many less prominent ones which have not been solved.
In dimension 2 almost all closed surfaces are hyperbolic (all but the sphere, projective plane, torus and Klein bottle). In dimension 3 this is far from true: there are many ways to construct infinitely many non-hyperbolic closed manifolds. On the other hand, the heuristic statement that "a generic 3-manifold tends to be hyperbolic" is verified in many contexts. For example, any knot which is not either a satellite knot or a torus knot is hyperbolic. Moreover, almost all Dehn surgeries on a hyperbolic knot yield a hyperbolic manifold. A similar result is true of links (Thurston's hyperbolic Dehn surgery theorem), and since all 3-manifolds are obtained as surgeries on a link in the 3-sphere this gives a more precise sense to the informal statement. Another sense in which "almost all" manifolds are hyperbolic in dimension 3 is that of random models. For example, random Heegaard splittings of genus at least 2 are almost surely hyperbolic (when the complexity of the gluing map goes to infinity).
The relevance of the hyperbolic geometry of a 3-manifold to its topology also comes from the Mostow rigidity theorem, which states that the hyperbolic structure of a hyperbolic 3-manifold of finite volume is uniquely determined by its homotopy type. In particular, geometric invariants such as the volume can be used to define new topological invariants.
Structure.
Manifolds of finite volume.
In this case one important tool to understand the geometry of a manifold is the thick-thin decomposition. It states that a hyperbolic 3-manifold of finite volume has a decomposition into two parts:
Geometrically finite manifolds.
The thick-thin decomposition is valid for all hyperbolic 3-manifolds, though in general the thin part is not as described above. A hyperbolic 3-manifold is said to be geometrically finite if it contains a convex submanifold (its "convex core") onto which it retracts, and whose thick part is compact (note that all manifolds have a convex core, but in general it is not compact). The simplest case is when the manifold does not have "cusps" (i.e. the fundamental group does not contain parabolic elements), in which case the manifold is geometrically finite if and only if it is the quotient of a closed, convex subset of hyperbolic space by a group acting cocompactly on this subset.
Manifolds with finitely generated fundamental group.
This is the larger class of hyperbolic 3-manifolds for which there is a satisfying structure theory. It rests on two theorems:
Construction of hyperbolic 3-manifolds of finite volume.
Hyperbolic polyhedra, reflection groups.
The oldest construction of hyperbolic manifolds, which dates back at least to Poincaré, goes as follows: start with a finite collection of 3-dimensional hyperbolic finite polytopes. Suppose that there is a side-pairing between the 2-dimensional faces of these polyhedra (i.e. each such face is paired with another, distinct, one so that they are isometric to each other as 2-dimensional hyperbolic polygons), and consider the space obtained by gluing the paired faces together (formally this is obtained as a quotient space). It carries a hyperbolic metric which is well-defined outside of the image of the 1-skeletons of the polyhedra. This metric extends to a hyperbolic metric on the whole space if the two following conditions are satisfied:
A notable example of this construction is the Seifert–Weber space which is obtained by gluing opposite faces of a regular dodecahedron.
A variation on this construction is by using hyperbolic Coxeter polytopes (polytopes whose dihedral angles are of the form formula_2). Such a polytope gives rise to a Kleinian reflection group, which is a discrete subgroup of isometries of hyperbolic space. Taking a torsion-free finite-index subgroup one obtains a hyperbolic manifold (which can be recovered by the previous construction, gluing copies of the original Coxeter polytope in a manner prescribed by an appropriate Schreier coset graph).
Gluing ideal tetrahedra and hyperbolic Dehn surgery.
In the previous construction the manifolds obtained are always compact. To obtain manifolds with cusps one has to use polytopes which have ideal vertices (i.e. vertices which lie on the sphere at infinity). In this setting the gluing construction does not always yield a complete manifold. Completeness is detected by a system of equations involving the dihedral angles around the edges adjacent to an ideal vertex, which are commonly called Thurston's gluing equations. In case the gluing is complete the ideal vertices become cusps in the manifold. An example of a noncompact, finite volume hyperbolic manifold obtained in this way is the Gieseking manifold which is constructed by gluing faces of a regular ideal hyperbolic tetrahedron together.
It is also possible to construct a finite-volume, complete hyperbolic manifold when the gluing is not complete. In this case the completion of the metric space obtained is a manifold with a torus boundary and under some (not generic) conditions it is possible to glue a hyperbolic solid torus on each boundary component so that the resulting space has a complete hyperbolic metric. Topologically, the manifold is obtained by hyperbolic Dehn surgery on the complete hyperbolic manifold which would result from a complete gluing.
It is not known whether all hyperbolic 3-manifolds of finite volume can be constructed in this way. In practice however this is how computational software (such as SnapPea or Regina) stores hyperbolic manifolds.
Arithmetic constructions.
The construction of arithmetic Kleinian groups from quaternion algebras gives rise to particularly interesting hyperbolic manifolds. On the other hand, they are in some sense "rare" among hyperbolic 3-manifolds (for example hyperbolic Dehn surgery on a fixed manifold results in a non-arithmetic manifold for almost all parameters).
The hyperbolisation theorem.
In contrast to the explicit constructions above it is possible to deduce the existence of a complete hyperbolic structure on a 3-manifold purely from topological information. This is a consequence of the Geometrisation conjecture and can be stated as follows (a statement sometimes referred to as the "hyperbolisation theorem", which was proven by Thurston in the special case of Haken manifolds):
<templatestyles src="Template:Blockquote/styles.css" />If a compact 3-manifold with toric boundary is irreducible and algebraically atoroidal (meaning that every formula_3-injectively immersed torus is homotopic to a boundary component) then its interior carries a complete hyperbolic metric of finite volume.
A particular case is that of a surface bundle over the circle: such manifolds are always irreducible, and they carry a complete hyperbolic metric if and only if the monodromy is a pseudo-Anosov map.
Another consequence of the Geometrisation conjecture is that any closed 3-manifold which admits a Riemannian metric with negative sectional curvatures admits in fact a Riemannian metric with constant sectional curvature -1. This is not true in higher dimensions.
Virtual properties.
The topological properties of 3-manifolds are sufficiently intricate that in many cases it is interesting to know that a property holds virtually for a class of manifolds, that is for any manifold in the class there exists a finite covering space of the manifold with the property. The virtual properties of hyperbolic 3-manifolds are the objects of a series of conjectures by Waldhausen and Thurston, which were recently all proven by Ian Agol following work of Jeremy Kahn, Vlad Markovic, Frédéric Haglund, Dani Wise and others. The first part of the conjectures was logically related to the virtually Haken conjecture. In order of strength they are:
Another conjecture (also proven by Agol) which implies 1-3 above but a priori has no relation to 4 is the following:
5. (the virtually fibered conjecture) Any hyperbolic 3-manifold of finite volume has a finite cover which is a surface bundle over the circle.
The space of all hyperbolic 3-manifolds.
Geometric convergence.
A sequence of Kleinian groups is said to be "geometrically convergent" if it converges in the Chabauty topology. For the manifolds obtained as quotients this amounts to them being convergent in the pointed Gromov-Hausdorff metric.
Jørgensen–Thurston theory.
The hyperbolic volume can be used to order the space of all hyperbolic manifolds. The set of manifolds corresponding to a given volume is at most finite, and the set of volumes is well-ordered and of order type formula_4. More precisely, Thurston's hyperbolic Dehn surgery theorem implies that a manifold with formula_5 cusps is a limit of a sequence of manifolds with formula_6 cusps for any formula_7, so that the isolated points are volumes of compact manifolds, the manifolds with exactly one cusp are limits of compact manifolds, and so on. Together with results of Jørgensen the theorem also proves that any convergent sequence must be obtained by Dehn surgeries on the limit manifold.
Quasi-Fuchsian groups.
Sequences of quasi-fuchsian surface groups of given genus can converge to a doubly degenerate surface group, as in the double limit theorem.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "4\\pi"
},
{
"math_id": 1,
"text": "2\\pi"
},
{
"math_id": 2,
"text": "\\pi/m, m \\in \\mathbb N"
},
{
"math_id": 3,
"text": "\\pi_1"
},
{
"math_id": 4,
"text": "\\omega^\\omega"
},
{
"math_id": 5,
"text": "m"
},
{
"math_id": 6,
"text": "l"
},
{
"math_id": 7,
"text": "0\\le l < m"
}
] | https://en.wikipedia.org/wiki?curid=1237700 |
1237777 | Polarizability | Tendency of matter subjected to an electric field to acquire an electric dipole moment
Polarizability usually refers to the tendency of matter, when subjected to an electric field, to acquire an electric dipole moment in proportion to that applied field. It is a property of all matter, since matter is made up of elementary particles which carry an electric charge. When subject to an electric field, the negatively charged electrons and positively charged atomic nuclei are subject to opposite forces and undergo charge separation. Polarizability is responsible for a material's dielectric constant and, at high (optical) frequencies, its refractive index.
The polarizability of an atom or molecule is defined as the ratio of its induced dipole moment to the local electric field; in a crystalline solid, one considers the dipole moment per unit cell. Note that the local electric field seen by a molecule is generally different from the macroscopic electric field that would be measured externally. This discrepancy is taken into account by the Clausius–Mossotti relation (below) which connects the bulk behaviour (polarization density due to an external electric field according to the electric susceptibility formula_0) with the molecular polarizability formula_1 due to the local field.
Magnetic polarizability likewise refers to the tendency for a magnetic dipole moment to appear in proportion to an external magnetic field. Electric and magnetic polarizabilities determine the dynamical response of a bound system (such as a molecule or crystal) to external fields, and provide insight into a molecule's internal structure. "Polarizability" should "not" be confused with the intrinsic magnetic or electric dipole moment of an atom, molecule, or bulk substance; these do not depend on the presence of an external field.
Electric polarizability.
Definition.
Electric polarizability is the relative tendency of a charge distribution, like the electron cloud of an atom or molecule, to be distorted from its normal shape by an external electric field.
The polarizability formula_1 in isotropic media is defined as the ratio of the induced dipole moment formula_2 of an atom to the electric field formula_3 that produces this dipole moment.
formula_4
Polarizability has the SI units of C·m2·V−1 = A2·s4·kg−1 while its cgs unit is cm3. Usually it is expressed in cgs units as a so-called polarizability volume, sometimes expressed in Å3 = 10−24 cm3. One can convert from SI units (formula_1) to cgs units (formula_5) as follows:
formula_6 ≃ 8.988×1015 × formula_7
where formula_8, the vacuum permittivity, is ~8.854 × 10−12 (F/m). If the polarizability volume in cgs units is denoted formula_5 the relation can be expressed generally (in SI) as formula_9.
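As an illustration of this conversion, here is a minimal sketch (not from the original article); the hydrogen value in the comment is an approximate literature figure used only for scale:

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def si_to_cgs_volume(alpha_si):
    """Convert a polarizability in SI units (C·m²/V = F·m²) to a
    polarizability volume in cm³: alpha' = 1e6 * alpha / (4*pi*eps0)."""
    return 1e6 * alpha_si / (4 * math.pi * EPS0)

def cgs_volume_to_si(alpha_cgs_cm3):
    """Inverse conversion: alpha = 4*pi*eps0 * alpha' (alpha' converted to m³)."""
    return 4 * math.pi * EPS0 * (alpha_cgs_cm3 * 1e-6)

# Example: a polarizability volume of about 0.67 Å³ = 0.67e-24 cm³
# (roughly the value quoted for atomic hydrogen)
print(cgs_volume_to_si(0.67e-24))  # ≈ 7.5e-41 F·m²
```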
The polarizability of individual particles is related to the average electric susceptibility of the medium by the Clausius–Mossotti relation:
formula_10
where "R" is the molar refractivity, formula_11is the Avogadro constant, formula_12 is the electronic polarizability, "p" is the density of molecules, "M" is the molar mass, and formula_13 is the material's relative permittivity or dielectric constant (or in optics, the square of the refractive index).
Polarizability for anisotropic or non-spherical media cannot in general be represented as a scalar quantity. Defining formula_1 as a scalar implies both that applied electric fields can only induce polarization components parallel to the field and that the formula_14 and formula_15 directions respond in the same way to the applied electric field. For example, an electric field in the formula_16-direction can only produce an formula_16 component in formula_2 and if that same electric field were applied in the formula_17-direction the induced polarization would be the same in magnitude but appear in the formula_17 component of formula_2. Many crystalline materials have directions that are easier to polarize than others and some even become polarized in directions perpendicular to the applied electric field, and the same thing happens with non-spherical bodies. Some molecules and materials with this sort of anisotropy are optically active, or exhibit linear birefringence of light.
Tensor.
To describe anisotropic media a polarizability rank two tensor or formula_18 matrix formula_1 is defined,
formula_19
so that:
formula_20
The elements describing the response parallel to the applied electric field are those along the diagonal. A large value of formula_21 here means that an electric field applied in the formula_16-direction would strongly polarize the material in the formula_17-direction. Explicit expressions for formula_1 have been given for homogeneous anisotropic ellipsoidal bodies.
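A short numerical illustration of this anisotropic response follows (an added sketch; the tensor entries are made up purely for demonstration):

```python
import numpy as np

# Hypothetical anisotropic polarizability tensor (SI units, C·m²/V);
# the off-diagonal alpha_yx entry couples an x-field to a y-polarization.
alpha = np.array([[1.0, 0.0, 0.0],
                  [0.3, 2.0, 0.0],
                  [0.0, 0.0, 1.5]]) * 1e-40

E = np.array([1.0e5, 0.0, 0.0])  # field applied along x, in V/m
p = alpha @ E                    # induced dipole moment, in C·m
print(p)  # the y component is nonzero because alpha_yx != 0
```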
Application in crystallography.
The matrix above can be used with the molar refractivity equation and other data to produce density data for crystallography. Each polarizability measurement along with the refractive index associated with its direction will yield a direction specific density that can be used to develop an accurate three dimensional assessment of molecular stacking in the crystal. This relationship was first observed by Linus Pauling.
Polarizability, a molecular property, is related to the refractive index, a bulk property. In crystalline structures, the interactions between molecules are accounted for by comparing a local field to the macroscopic field. Analyzing a cubic crystal lattice, we can imagine an isotropic spherical region representing the entire sample. Giving the region the radius formula_22, the dipole moment of the region is given by the volume of the sphere times the dipole moment per unit volume formula_23
formula_2 = formula_24 formula_23
We can call our local field formula_25, our macroscopic field formula_3, and the field due to matter within the sphere, formula_26 We can then define the local field as the macroscopic field without the contribution of the internal field:
formula_27
The polarization is proportional to the macroscopic field by formula_28 where formula_29 is the electric permittivity constant and formula_30 is the electric susceptibility. Using this proportionality, we find the local field as formula_31 which can be used in the definition of polarization
formula_32
and simplified with formula_33 to get formula_34. These two expressions for the polarization can be set equal to each other, eliminating the formula_3 term and giving us
formula_35.
We can replace the relative permittivity formula_36 with refractive index formula_37, since formula_38 for a low-pressure gas. The number density can be related to the molecular weight formula_39 and mass density formula_40 through formula_41, adjusting the final form of our equation to include molar refractivity:
formula_42
This equation allows us to relate bulk property (refractive index) to the molecular property (polarizability) as a function of frequency.
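A small numerical sketch of this relation (not part of the original article; the numbers in the comments are rough order-of-magnitude values for a diatomic gas, used only for illustration):

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
N_A = 6.02214076e23       # Avogadro constant, 1/mol

def molar_refractivity(alpha_si):
    """R_M = N_A * alpha / (3 * eps0), in m³ per mole."""
    return N_A * alpha_si / (3 * EPS0)

def refractive_index(alpha_si, number_density):
    """Solve (n² - 1)/(n² + 2) = (N/V) * alpha / (3 * eps0) for n."""
    x = number_density * alpha_si / (3 * EPS0)
    return math.sqrt((1 + 2 * x) / (1 - x))

# Illustration with approximate values for a gas at standard conditions:
alpha = 4 * math.pi * EPS0 * 1.7e-30    # from a polarizability volume ~1.7 Å³
print(refractive_index(alpha, 2.7e25))  # ≈ 1.0003
```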
Tendencies.
Generally, polarizability increases as the volume occupied by electrons increases. In atoms, this occurs because larger atoms have more loosely held electrons in contrast to smaller atoms with tightly bound electrons. Polarizability therefore decreases from left to right across rows of the periodic table and increases down its columns. Likewise, larger molecules are generally more polarizable than smaller ones.
Water is a very polar molecule, but alkanes and other hydrophobic molecules are more polarizable. Water with its permanent dipole is less likely to change shape due to an external electric field. Alkanes are the most polarizable molecules. Although alkenes and arenes are expected to have larger polarizability than alkanes because of their higher reactivity, alkanes are in fact more polarizable. This results from the more electronegative sp2 carbons of alkenes and arenes compared with the less electronegative sp3 carbons of alkanes.
Ground state electron configuration models are often inadequate in studying the polarizability of bonds because dramatic changes in molecular structure occur in a reaction.
Magnetic polarizability.
Magnetic polarizability defined by spin interactions of nucleons is an important parameter of deuterons and hadrons. In particular, measurement of tensor polarizabilities of nucleons yields important information about spin-dependent nuclear forces.
The method of spin amplitudes uses quantum mechanics formalism to more easily describe spin dynamics. Vector and tensor polarization of particle/nuclei with spin S ≥ 1 are specified by the unit polarization vector formula_2 and the polarization tensor "P"′. Additional tensors composed of products of three or more spin matrices are needed only for the exhaustive description of polarization of particles/nuclei with spin .
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\chi = \\varepsilon_{\\mathrm r}-1"
},
{
"math_id": 1,
"text": "\\alpha"
},
{
"math_id": 2,
"text": "\\mathbf{p}"
},
{
"math_id": 3,
"text": "\\mathbf{E}"
},
{
"math_id": 4,
"text": "\\alpha = \\frac{|\\mathbf{p}|}{|\\mathbf{E}|}"
},
{
"math_id": 5,
"text": "\\alpha'"
},
{
"math_id": 6,
"text": "\\alpha' (\\mathrm{cm}^3) = \\frac{10^{6}}{ 4 \\pi \\varepsilon_0 }\\alpha (\\mathrm{C{\\cdot}m^2{\\cdot}V^{-1}}) = \\frac{10^{6}}{ 4 \\pi \\varepsilon_0 }\\alpha (\\mathrm{F{\\cdot}m^2}) "
},
{
"math_id": 7,
"text": "\\alpha (\\mathrm{F{\\cdot}m^2}) "
},
{
"math_id": 8,
"text": "\\varepsilon_0 "
},
{
"math_id": 9,
"text": "\\alpha = 4\\pi\\varepsilon_0 \\alpha' "
},
{
"math_id": 10,
"text": "R={\\displaystyle \\left({\\frac {4\\pi}{3}}\\right)N_\\text{A}\\alpha_{c}=\\left({\\frac {M}{p}}\\right)\\left({\\frac {\\varepsilon_\\mathrm{r}-1}{\\varepsilon_\\mathrm{r}+2}}\\right)}"
},
{
"math_id": 11,
"text": "N_\\text{A}"
},
{
"math_id": 12,
"text": "\\alpha_c"
},
{
"math_id": 13,
"text": "\\varepsilon_{\\mathrm r} = \\epsilon/\\epsilon_0"
},
{
"math_id": 14,
"text": "x, y"
},
{
"math_id": 15,
"text": "z"
},
{
"math_id": 16,
"text": "x"
},
{
"math_id": 17,
"text": "y"
},
{
"math_id": 18,
"text": "3 \\times 3"
},
{
"math_id": 19,
"text": " \\mathbb{\\alpha} = \n\\begin{bmatrix}\n\\alpha_{xx} & \\alpha_{xy} & \\alpha_{xz} \\\\\n\\alpha_{yx} & \\alpha_{yy} & \\alpha_{yz} \\\\\n\\alpha_{zx} & \\alpha_{zy} & \\alpha_{zz} \\\\\n\\end{bmatrix}\n"
},
{
"math_id": 20,
"text": "\n\\mathbf{p} = \\mathbb{\\alpha} \\mathbf{E}\n"
},
{
"math_id": 21,
"text": "\\alpha_{yx}"
},
{
"math_id": 22,
"text": "a"
},
{
"math_id": 23,
"text": "\\mathbf{P}."
},
{
"math_id": 24,
"text": "\\frac{4 \\pi a^3}{3} "
},
{
"math_id": 25,
"text": "\\mathbf{F}"
},
{
"math_id": 26,
"text": "\\mathbf E_{\\mathrm{in}} = \\tfrac{-\\mathbf{P}}{3 \\varepsilon_0}"
},
{
"math_id": 27,
"text": "\\mathbf{F}=\\mathbf{E}-\\mathbf{E}_{\\mathrm{in}}=\\mathbf{E}+\\frac{\\mathbf{P}}{3 \\varepsilon_0}"
},
{
"math_id": 28,
"text": "\\mathbf{P}=\\varepsilon_0(\\varepsilon_r-1)\\mathbf{E}=\\chi_{\\text{e}}\\varepsilon_0\\mathbf{E}"
},
{
"math_id": 29,
"text": "\\varepsilon_0"
},
{
"math_id": 30,
"text": "\\chi_{\\text{e}}"
},
{
"math_id": 31,
"text": "\\mathbf{F}=\\tfrac{1}{3}(\\varepsilon_{\\mathrm r}+2)\\mathbf{E}"
},
{
"math_id": 32,
"text": "\\mathbf{P}=\\frac{N\\alpha}{V}\\mathbf{F}=\\frac{N\\alpha}{3V}(\\varepsilon_{\\mathrm r}+2)\\mathbf{E}"
},
{
"math_id": 33,
"text": "\\varepsilon_{\\mathrm r}=1+\\tfrac{N\\alpha}{\\varepsilon_0V}"
},
{
"math_id": 34,
"text": "\\mathbf{P}=\\varepsilon_0(\\varepsilon_{\\mathrm r}-1)\\mathbf{E}"
},
{
"math_id": 35,
"text": "\\frac{\\varepsilon_{\\mathrm r}-1}{\\varepsilon_{\\mathrm r}+2}=\\frac{N\\alpha}{3\\varepsilon_0V}"
},
{
"math_id": 36,
"text": "\\varepsilon_{\\mathrm r}"
},
{
"math_id": 37,
"text": "n"
},
{
"math_id": 38,
"text": "\\varepsilon_{\\mathrm r}=n^2"
},
{
"math_id": 39,
"text": "M"
},
{
"math_id": 40,
"text": "\\rho"
},
{
"math_id": 41,
"text": "\\tfrac{N}{V}=\\tfrac{N_{\\mathrm A}\\rho}{M}"
},
{
"math_id": 42,
"text": "R_{\\mathrm M} = \\frac{N_{\\mathrm A}\\alpha}{3\\varepsilon_0} = \\left(\\frac{M}{\\rho}\\right) \\frac{n^2-1}{n^2+2}"
}
] | https://en.wikipedia.org/wiki?curid=1237777 |
12380844 | Alliinase | Class of enzyme
In enzymology, an alliin lyase (EC 4.4.1.4) is an enzyme that catalyzes the chemical reaction
an "S"-alkyl--cysteine "S"-oxide formula_0 an alkyl sulfenate + 2-aminoacrylate
Hence, this enzyme has one substrate, "S"-alkyl--cysteine "S"-oxide, and two products, alkyl sulfenate and 2-aminoacrylate.
This enzyme belongs to the family of lyases, specifically the class of carbon-sulfur lyases. The systematic name of this enzyme class is S"-alkyl--cysteine "S"-oxide alkyl-sulfenate-lyase (2-aminoacrylate-forming). Other names in common use include alliinase, cysteine sulfoxide lyase, alkylcysteine sulfoxide lyase, S"-alkylcysteine sulfoxide lyase, -cysteine sulfoxide lyase, "S"-alkyl--cysteine sulfoxide lyase, and alliin alkyl-sulfenate-lyase. It employs one cofactor, pyridoxal phosphate.
Many alliinases contain a novel "N"-terminal epidermal growth factor-like domain (EGF-like domain).
Occurrence.
These enzymes are found in plants of the genus "Allium", such as garlic and onions. Alliinase is responsible for catalyzing chemical reactions that produce the volatile chemicals that give these foods their flavors, odors, and tear-inducing properties. Alliinases are part of the plant's defense against herbivores. Alliinase is normally sequestered within a plant cell, but, when the plant is damaged by a feeding animal, the alliinase is released to catalyze the production of the pungent chemicals. This tends to have a deterrent effect on the animal. The same reaction occurs when onion or garlic is cut with a knife in the kitchen.
Chemistry.
In garlic, an alliinase enzyme acts on the chemical alliin converting it into allicin. The process involves two stages: elimination of 2-propenesulfenic acid from the amino acid unit (with dehydroalanine as a byproduct), and then condensation of two of the sulfenic acid molecules.
Alliin and related substrates found in nature are chiral at the sulfoxide position (usually having the "S" configuration), and alliin itself was the first natural product found to have both carbon- and sulfur-centered stereochemistry. However, the sulfenic acid intermediate is not chiral, and the final product's stereochemistry is not controlled.
There are a range of similar enzymes that can react with the cysteine-derived sulfoxides present in different species. In onions, an isomer of alliin, isoalliin, is converted to 1-propenesulfenic acid. A separate enzyme, the lachrymatory factor synthase or LFS, then converts this chemical to "syn"-propanethial-"S"-oxide, a potent lachrymator. The analogous butyl compound, "syn"-butanethial-"S"-oxide, is found in "Allium siculum" species.
Structural studies.
As of late 2007, 3 structures have been solved for this class of enzymes, using X-ray crystallography. The PDB accession codes are 1LK9, 2HOR, and 2HOX.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] | https://en.wikipedia.org/wiki?curid=12380844 |
1238173 | Weyl transformation | "See also Wigner–Weyl transform, for another definition of the Weyl transform."
In theoretical physics, the Weyl transformation, named after Hermann Weyl, is a local rescaling of the metric tensor:
formula_0
which produces another metric in the same conformal class. A theory or an expression invariant under this transformation is called conformally invariant, or is said to possess Weyl invariance or Weyl symmetry. The Weyl symmetry is an important symmetry in conformal field theory. It is, for example, a symmetry of the Polyakov action. When quantum mechanical effects break the conformal invariance of a theory, it is said to exhibit a conformal anomaly or Weyl anomaly.
The ordinary Levi-Civita connection and associated spin connections are not invariant under Weyl transformations. Weyl connections are a class of affine connections that is invariant as a class under Weyl transformations, although no individual Weyl connection is itself invariant.
Conformal weight.
A quantity formula_1 has conformal weight formula_2 if, under the Weyl transformation, it transforms via
formula_3
Thus conformally weighted quantities belong to certain density bundles; see also conformal dimension. Let formula_4 be the connection one-form associated to the Levi-Civita connection of formula_5. Introduce a connection that depends also on an initial one-form formula_6 via
formula_7
Then formula_8 is covariant and has conformal weight formula_9.
Formulas.
For the transformation
formula_10
We can derive the following formulas
formula_11
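As an added consistency check (not part of the original derivation), one may specialize these formulas to the exponential parametrization f(φ) = e^{2φ}, for which f′/f = 2 and f″f − f′² = 0; they then reduce to the familiar transformation laws for a metric rescaled by e^{2φ}:

```latex
% Specialization of the formulas above to f(\phi) = e^{2\phi}
% (so f'/f = 2 and f'' f - f'^2 = 0):
\Gamma^c_{ab} = \bar{\Gamma}^c_{ab}
              + \delta^c_b \partial_a \phi
              + \delta^c_a \partial_b \phi
              - \bar{g}_{ab} \partial^c \phi ,
\qquad
R = e^{-2\phi}\left[\, \bar{R}
    - 2(D-1)\,\bar{\Box}\phi
    - (D-1)(D-2)\, \partial_c \phi\, \partial^c \phi \,\right].
```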
Note that the Weyl tensor is invariant under a Weyl rescaling. | [
{
"math_id": 0,
"text": "g_{ab}\\rightarrow e^{-2\\omega(x)}g_{ab}"
},
{
"math_id": 1,
"text": "\\varphi"
},
{
"math_id": 2,
"text": "k"
},
{
"math_id": 3,
"text": "\n \\varphi \\to \\varphi e^{k \\omega}.\n"
},
{
"math_id": 4,
"text": "A_\\mu"
},
{
"math_id": 5,
"text": "g"
},
{
"math_id": 6,
"text": "\\partial_\\mu\\omega"
},
{
"math_id": 7,
"text": "\n B_\\mu = A_\\mu + \\partial_\\mu \\omega.\n"
},
{
"math_id": 8,
"text": "D_\\mu \\varphi \\equiv \\partial_\\mu \\varphi + k B_\\mu \\varphi"
},
{
"math_id": 9,
"text": "k - 1"
},
{
"math_id": 10,
"text": " \n g_{ab} = f(\\phi(x)) \\bar{g}_{ab} \n"
},
{
"math_id": 11,
"text": " \n\\begin{align}\n g^{ab} &= \\frac{1}{f(\\phi(x))} \\bar{g}^{ab}\\\\\n \\sqrt{-g} &= \\sqrt{-\\bar{g}} f^{D/2} \\\\\n \\Gamma^c_{ab} &= \\bar{\\Gamma}^c_{ab} + \\frac{f'}{2f} \\left(\\delta^c_b \\partial_a \\phi + \\delta^c_a \\partial_b \\phi - \\bar{g}_{ab} \\partial^c \\phi \\right) \\equiv \\bar{\\Gamma}^c_{ab} + \\gamma^c_{ab} \\\\\n R_{ab} &= \\bar{R}_{ab} + \\frac{f'' f- f^{\\prime 2}}{2f^2} \\left((2-D) \\partial_a \\phi \\partial_b \\phi - \\bar{g}_{ab} \\partial^c \\phi \\partial_c \\phi \\right) + \\frac{f'}{2f} \\left((2-D) \\bar{\\nabla}_a \\partial_b \\phi - \\bar{g}_{ab} \\bar{\\Box} \\phi\\right) + \\frac{1}{4} \\frac{f^{\\prime 2}}{f^2} (D-2) \\left(\\partial_a \\phi \\partial_b \\phi - \\bar{g}_{ab} \\partial_c \\phi \\partial^c \\phi \\right) \\\\\n R &= \\frac{1}{f} \\bar{R} + \\frac{1-D}{f} \\left( \\frac{f''f - f^{\\prime 2}}{f^2} \\partial^c \\phi \\partial_c \\phi + \\frac{f'}{f} \\bar{\\Box} \\phi \\right) + \\frac{1}{4f} \\frac{f^{\\prime 2}}{f^2} (D-2) (1-D) \\partial_c \\phi \\partial^c \\phi \n\\end{align}\n"
}
] | https://en.wikipedia.org/wiki?curid=1238173 |
12382740 | Anscombe transform | Statistical concept
In statistics, the Anscombe transform, named after Francis Anscombe, is a variance-stabilizing transformation that transforms a random variable with a Poisson distribution into one with an approximately standard Gaussian distribution. The Anscombe transform is widely used in photon-limited imaging (astronomy, X-ray) where images naturally follow the Poisson law. The Anscombe transform is usually used to pre-process the data in order to make the standard deviation approximately constant. Then denoising algorithms designed for the framework of additive white Gaussian noise are used; the final estimate is then obtained by applying an inverse Anscombe transformation to the denoised data.
Definition.
For the Poisson distribution the mean formula_0 and variance formula_1 are not independent: formula_2. The Anscombe transform
formula_3
aims at transforming the data so that the variance becomes approximately equal to 1 for a large enough mean; for mean zero, the variance remains zero.
It transforms Poissonian data formula_4 (with mean formula_0) to approximately Gaussian data of mean formula_5
and standard deviation formula_6.
This approximation becomes more accurate as formula_0 increases.
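A quick empirical check of this variance stabilization (an added sketch using NumPy, not from the original article):

```python
import numpy as np

def anscombe(x):
    """Anscombe variance-stabilizing transform for Poisson data."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

rng = np.random.default_rng(0)
for m in (2, 5, 20, 100):
    x = rng.poisson(m, size=200_000)
    print(f"mean {m:>3}: raw std = {x.std():.3f}, "
          f"std after transform = {anscombe(x).std():.3f}")
# The raw standard deviation grows like sqrt(m), while the standard
# deviation of the transformed data approaches 1 as m increases.
```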
For a transformed variable of the form formula_7, the expression for the variance has an additional term formula_8; it is reduced to zero at formula_9, which is exactly the reason why this value was picked.
Inversion.
When the Anscombe transform is used in denoising (i.e. when the goal is to obtain from formula_4 an estimate of formula_0), its inverse transform is also needed
in order to return the variance-stabilized and denoised data formula_10 to the original range.
Applying the algebraic inverse
formula_11
usually introduces undesired bias to the estimate of the mean formula_0, because the forward square-root
transform is not linear. Sometimes using the asymptotically unbiased inverse
formula_12
mitigates the issue of bias, but this is not the case in photon-limited imaging, for which
the exact unbiased inverse given by the implicit mapping
formula_13
should be used. A closed-form approximation of this exact unbiased inverse is
formula_14
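The three inverses discussed above can be written compactly as follows (a sketch, with the closed-form approximation transcribed directly from the expression above):

```python
import numpy as np

def inverse_algebraic(y):
    """Direct algebraic inverse (biased estimate of the mean)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

def inverse_asymptotic(y):
    """Asymptotically unbiased inverse."""
    return (y / 2.0) ** 2 - 1.0 / 8.0

def inverse_exact_unbiased_approx(y):
    """Closed-form approximation of the exact unbiased inverse."""
    return (0.25 * y ** 2
            - 0.125
            + 0.25 * np.sqrt(1.5) / y
            - (11.0 / 8.0) / y ** 2
            + 0.625 * np.sqrt(1.5) / y ** 3)
```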
Alternatives.
There are many other possible variance-stabilizing transformations for the Poisson distribution. Bar-Lev and Enis report a family of such transformations which includes the Anscombe transform. Another member of the family is the Freeman-Tukey transformation
formula_15
A simplified transformation, obtained as the primitive of the reciprocal of the standard deviation of the data, is
formula_16
which, while it is not quite so good at stabilizing the variance, has the advantage of being more easily understood.
Indeed, from the delta method,
formula_17.
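An empirical comparison of the three transformations at a small mean (an added sketch; the printed standard deviations are simulation estimates, not exact values):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.poisson(4, size=500_000)  # Poisson data with mean 4

transforms = {
    "Anscombe":      2 * np.sqrt(x + 3 / 8),
    "Freeman-Tukey": np.sqrt(x + 1) + np.sqrt(x),
    "2*sqrt(x)":     2 * np.sqrt(x),
}
for name, t in transforms.items():
    print(f"{name:14s} standard deviation ≈ {t.std():.3f}")
```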
Generalization.
While the Anscombe transform is appropriate for pure Poisson data, in many applications the data presents also an additive Gaussian component. These cases are treated by a Generalized Anscombe transform and its asymptotically unbiased or exact unbiased inverses.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "m"
},
{
"math_id": 1,
"text": "v"
},
{
"math_id": 2,
"text": "m = v"
},
{
"math_id": 3,
"text": "A:x \\mapsto 2 \\sqrt{x + \\tfrac{3}{8}} \\, "
},
{
"math_id": 4,
"text": "x"
},
{
"math_id": 5,
"text": "2\\sqrt{m + \\tfrac{3}{8}} - \\tfrac{1}{4 \\, m^{1/2}} + O\\left(\\tfrac{1}{m^{3/2}}\\right)"
},
{
"math_id": 6,
"text": " 1 + O\\left(\\tfrac{1}{m^2}\\right)"
},
{
"math_id": 7,
"text": "2 \\sqrt{x + c}"
},
{
"math_id": 8,
"text": "\\frac{\\tfrac{3}{8} -c}{m}"
},
{
"math_id": 9,
"text": "c = \\tfrac{3}{8}"
},
{
"math_id": 10,
"text": "y"
},
{
"math_id": 11,
"text": "A^{-1}:y \\mapsto \\left( \\frac{y}{2} \\right)^2 - \\frac{3}{8} "
},
{
"math_id": 12,
"text": "y \\mapsto \\left( \\frac{y}{2} \\right)^2 - \\frac{1}{8} "
},
{
"math_id": 13,
"text": " \\operatorname{E} \\left[ 2\\sqrt{x+\\tfrac{3}{8}} \\mid m \\right] = 2 \\sum_{x=0}^{+\\infty} \\left( \\sqrt{x+\\tfrac{3}{8}} \\cdot \\frac{m^x e^{-m}}{x!} \\right) \\mapsto m "
},
{
"math_id": 14,
"text": "y \\mapsto \\frac{1}{4} y^2 - \\frac{1}{8} + \\frac{1}{4} \\sqrt{\\frac{3}{2}} y^{-1} - \\frac{11}{8} y^{-2} + \\frac{5}{8} \\sqrt{\\frac{3}{2}} y^{-3}."
},
{
"math_id": 15,
"text": "A:x \\mapsto \\sqrt{x+1}+\\sqrt{x}. \\, "
},
{
"math_id": 16,
"text": "A:x \\mapsto 2\\sqrt{x} \\, "
},
{
"math_id": 17,
"text": " V[2\\sqrt{x}] \\approx \\left(\\frac{d (2\\sqrt{m})}{d m} \\right)^2 V[x] = \\left(\\frac{1}{\\sqrt{m}} \\right)^2 m = 1 "
}
] | https://en.wikipedia.org/wiki?curid=12382740 |
12383591 | Hyperarithmetical theory | In computability theory, hyperarithmetic theory is a generalization of Turing computability. It has close connections with definability in second-order arithmetic and with weak systems of set theory such as Kripke–Platek set theory. It is an important tool in effective descriptive set theory.
The central focus of hyperarithmetic theory is the sets of natural numbers known as hyperarithmetic sets. There are three equivalent ways of defining this class of sets; the study of the relationships between these different definitions is one motivation for the study of hyperarithmetical theory.
Hyperarithmetical sets and definability.
The first definition of the hyperarithmetic sets uses the analytical hierarchy.
A set of natural numbers is classified at level formula_0 of this hierarchy if it is definable by a formula of second-order arithmetic with only existential set quantifiers and no other set quantifiers. A set is classified at level formula_1 of the analytical hierarchy if it is definable by a formula of second-order arithmetic with only universal set quantifiers and no other set quantifiers. A set is formula_2 if it is both formula_0 and formula_1. The hyperarithmetical sets are exactly the formula_2 sets.
Hyperarithmetical sets and iterated Turing jumps: the hyperarithmetical hierarchy.
The definition of hyperarithmetical sets as formula_2 does not directly depend on computability results. A second, equivalent, definition shows that the hyperarithmetical sets can be defined using infinitely iterated Turing jumps. This second definition also shows that the hyperarithmetical sets can be classified into a hierarchy extending the arithmetical hierarchy; the hyperarithmetical sets are exactly the sets that are assigned a rank in this hierarchy.
Each level of the hyperarithmetical hierarchy is indexed by a countable ordinal number, but not all countable ordinals correspond to a level of the hierarchy. The ordinals used by the hierarchy are those with an ordinal notation, which is a concrete, effective description of the ordinal.
An ordinal notation is an effective description of a countable ordinal by a natural number. A system of ordinal notations is required in order to define the hyperarithmetic hierarchy. The fundamental property an ordinal notation must have is that it describes the ordinal in terms of smaller ordinals in an effective way. The following inductive definition is typical; it uses a pairing function formula_3.
This may also be defined by taking effective joins at all levels instead of only notations for limit ordinals.
There are only countably many ordinal notations, since each notation is a natural number; thus there is a countable ordinal that is the supremum of all ordinals that have a notation. This ordinal is known as the Church–Kleene ordinal and is denoted formula_9. Note that this ordinal is still countable, the symbol being only an analogy with the first uncountable ordinal, formula_10. The set of all natural numbers that are ordinal notations is denoted formula_11 and called "Kleene's formula_11".
Ordinal notations are used to define iterated Turing jumps. The sets of natural numbers used to define the hierarchy are formula_12 for each formula_13. formula_12 is sometimes also denoted formula_14, or formula_15 for a notation formula_16 for formula_17. Suppose that "δ" has notation "e". These sets were first defined by Davis (1950) and Mostowski (1951). The set formula_12 is defined using "e" as follows.
Although the construction of formula_12 depends on having a fixed notation for "δ", and each infinite ordinal has many notations, a theorem of Clifford Spector shows that the Turing degree of formula_12 depends only on "δ", not on the particular notation used, and thus formula_12 is well defined up to Turing degree.
The hyperarithmetical hierarchy is defined from these iterated Turing jumps. A set "X" of natural numbers is classified at level "δ" of the hyperarithmetical hierarchy, for formula_13, if "X" is Turing reducible to formula_12. There will always be a least such "δ" if there is any; it is this least "δ" that measures the level of uncomputability of "X".
Hyperarithmetical sets and constructibility.
Let formula_27 denote the formula_28th level of the constructible hierarchy, and let formula_29 be the map from a member of Kleene's O to the ordinal it represents. A subset of formula_30 is hyperarithmetical if and only if it is a member of formula_31. A subset of formula_30 is definable by a formula_1 formula if and only if its image under formula_32 is formula_33-definable on formula_31, where formula_33 is from the Lévy hierarchy of formulae.
Hyperarithmetical sets and recursion in higher types.
A third characterization of the hyperarithmetical sets, due to Kleene, uses higher-type computable functionals. The type-2 functional formula_34 is defined by the following rules:
formula_35 if there is an "i" such that "f"("i") > 0,
formula_36 if there is no "i" such that "f"("i") > 0.
Using a precise definition of computability relative to a type-2 functional, Kleene showed that a set of natural numbers is hyperarithmetical if and only if it is computable relative to formula_37.
Example: the truth set of arithmetic.
Every arithmetical set is hyperarithmetical, but there are many other hyperarithmetical sets. One example of a hyperarithmetical, nonarithmetical set is the set "T" of Gödel numbers of formulas of Peano arithmetic that are true in the standard natural numbers formula_38. The set "T" is Turing equivalent to the set formula_39, and so is not high in the hyperarithmetical hierarchy, although it is not arithmetically definable by Tarski's indefinability theorem.
Fundamental results.
The fundamental results of hyperarithmetic theory show that the three definitions above define the same collection of sets of natural numbers. These equivalences are due to Kleene.
Completeness results are also fundamental to the theory. A set of natural numbers is formula_1 complete if it is at level formula_1 of the analytical hierarchy and every formula_1 set of natural numbers is many-one reducible to it. The definition of a formula_1 complete subset of Baire space (formula_40) is similar. Several sets associated with hyperarithmetic theory are formula_1 complete:
Results known as formula_0 bounding follow from these completeness results. For any formula_0 set "S" of ordinal notations, there is an formula_43 such that every element of "S" is a notation for an ordinal less than formula_28. For any formula_0 subset "T" of Baire space consisting only of characteristic functions of well orderings, there is an formula_43 such that each ordinal represented in "T" is less than formula_28.
Relativized hyperarithmeticity and hyperdegrees.
The definition of formula_11 can be relativized to a set "X" of natural numbers: in the definition of an ordinal notation, the clause for limit ordinals is changed so that the computable enumeration of a sequence of ordinal notations is allowed to use "X" as an oracle. The set of numbers that are ordinal notations relative to "X" is denoted formula_44. The supremum of ordinals represented in formula_44 is denoted formula_45; this is a countable ordinal no smaller than formula_9.
The definition of formula_12 can also be relativized to an arbitrary set formula_46 of natural numbers. The only change in the definition is that formula_47 is defined to be "X" rather than the empty set, so that formula_48 is the Turing jump of "X", and so on. Rather than terminating at formula_9 the hierarchy relative to "X" runs through all ordinals less than formula_45.
The relativized hyperarithmetical hierarchy is used to define hyperarithmetical reducibility. Given sets "X" and "Y", we say formula_49 if and only if there is a formula_50 such that "X" is Turing reducible to formula_51. If formula_49 and formula_52 then the notation formula_53 is used to indicate "X" and "Y" are hyperarithmetically equivalent. This is a coarser equivalence relation than Turing equivalence; for example, every set of natural numbers is hyperarithmetically equivalent to its Turing jump but not Turing equivalent to its Turing jump. The equivalence classes of hyperarithmetical equivalence are known as hyperdegrees.
The function that takes a set "X" to formula_44 is known as the hyperjump by analogy with the Turing jump. Many properties of the hyperjump and hyperdegrees have been established. In particular, it is known that Post's problem for hyperdegrees has a positive answer: for every set "X" of natural numbers there is a set "Y" of natural numbers such that formula_54.
Generalizations.
Hyperarithmetical theory is generalized by "α"-recursion theory, which is the study of definable subsets of admissible ordinals. Hyperarithmetical theory is the special case in which "α" is formula_9.
Citations.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Sigma^1_1"
},
{
"math_id": 1,
"text": "\\Pi^1_1"
},
{
"math_id": 2,
"text": "\\Delta^1_1"
},
{
"math_id": 3,
"text": "\\langle \\cdot , \\cdot\\rangle"
},
{
"math_id": 4,
"text": "\\langle 1, n \\rangle"
},
{
"math_id": 5,
"text": "\\langle 2, e\\rangle"
},
{
"math_id": 6,
"text": "\\phi_e"
},
{
"math_id": 7,
"text": "\\phi_e(n)"
},
{
"math_id": 8,
"text": "\\{ \\lambda_n \\mid n \\in \\mathbb{N}\\}"
},
{
"math_id": 9,
"text": "\\omega^{CK}_1"
},
{
"math_id": 10,
"text": "\\omega_{1}"
},
{
"math_id": 11,
"text": "\\mathcal{O}"
},
{
"math_id": 12,
"text": "0^{(\\delta)}"
},
{
"math_id": 13,
"text": "\\delta < \\omega^{CK}_1"
},
{
"math_id": 14,
"text": "H(\\delta)"
},
{
"math_id": 15,
"text": "H_e"
},
{
"math_id": 16,
"text": "e"
},
{
"math_id": 17,
"text": "\\delta"
},
{
"math_id": 18,
"text": "0^{(\\delta)}= 0"
},
{
"math_id": 19,
"text": "0^{(\\lambda)}"
},
{
"math_id": 20,
"text": "0^{(1)}"
},
{
"math_id": 21,
"text": "0^{(2)}"
},
{
"math_id": 22,
"text": "0'"
},
{
"math_id": 23,
"text": "0''"
},
{
"math_id": 24,
"text": "\\langle \\lambda_n \\mid n \\in \\mathbb{N}\\rangle"
},
{
"math_id": 25,
"text": "0^{(\\delta)} = \\{ \\langle n,i\\rangle \\mid i \\in 0^{(\\lambda_n)}\\}"
},
{
"math_id": 26,
"text": "0^{(\\lambda_n)}"
},
{
"math_id": 27,
"text": "L_\\alpha"
},
{
"math_id": 28,
"text": "\\alpha"
},
{
"math_id": 29,
"text": "n:\\mathcal O\\to\\omega_1^{CK}"
},
{
"math_id": 30,
"text": "\\mathbb N"
},
{
"math_id": 31,
"text": "L_{\\omega_1^{CK}}"
},
{
"math_id": 32,
"text": "n"
},
{
"math_id": 33,
"text": "\\Sigma_1"
},
{
"math_id": 34,
"text": "{}^2E\\colon \\mathbb{N}^{\\mathbb{N}} \\to \\mathbb{N}"
},
{
"math_id": 35,
"text": "{}^2E(f) = 1 \\quad"
},
{
"math_id": 36,
"text": "{}^2E(f) = 0 \\quad"
},
{
"math_id": 37,
"text": "{}^2E"
},
{
"math_id": 38,
"text": "\\mathbb{N}"
},
{
"math_id": 39,
"text": "0^{(\\omega)}"
},
{
"math_id": 40,
"text": "\\mathbb{N}^\\mathbb{N}"
},
{
"math_id": 41,
"text": "\\phi_e(x,y)"
},
{
"math_id": 42,
"text": "\\mathbb{N}^\\mathbb{N} \\cong \\mathbb{N}^{\\mathbb{N}\\times\\mathbb{N}})"
},
{
"math_id": 43,
"text": "\\alpha < \\omega^{CK}_1"
},
{
"math_id": 44,
"text": "\\mathcal{O}^X"
},
{
"math_id": 45,
"text": "\\omega^{X}_1"
},
{
"math_id": 46,
"text": "X"
},
{
"math_id": 47,
"text": "X^{(0)}"
},
{
"math_id": 48,
"text": "X^{(1)} = X'"
},
{
"math_id": 49,
"text": " X \\leq_\\text{HYP} Y"
},
{
"math_id": 50,
"text": "\\delta < \\omega^Y_1"
},
{
"math_id": 51,
"text": "Y^{(\\delta)}"
},
{
"math_id": 52,
"text": " Y \\leq_\\text{HYP} X"
},
{
"math_id": 53,
"text": " X \\equiv_\\text{HYP} Y"
},
{
"math_id": 54,
"text": "X <_\\text{HYP} Y <_\\text{HYP} \\mathcal{O}^X"
}
] | https://en.wikipedia.org/wiki?curid=12383591 |
1238550 | Index of dissimilarity | Demographic measure
The index of dissimilarity is a demographic measure of the evenness with which two groups are distributed across component geographic areas that make up a larger area. A group is evenly distributed when each geographic unit has the same percentage of group members as the total population. The index score can also be interpreted as the percentage of one of the two groups included in the calculation that would have to move to different geographic areas in order to produce a distribution that matches that of the larger area. The index of dissimilarity can be used as a measure of segregation. A score of zero (0%) reflects a fully integrated environment; a score of 1 (100%) reflects full segregation. In terms of black–white segregation, a score of .60 means that 60 percent of blacks would have to exchange places with whites in other units to achieve an even geographic distribution. The index of dissimilarity is invariant to the relative sizes of the two groups.
<templatestyles src="Template:TOC limit/styles.css" />
Basic formula.
The basic formula for the index of dissimilarity is:
formula_0
where (comparing a black and white population, for example):
"ai" = the population of group A in the "i"th area, e.g. census tract
"A" = the total population in group A in the large geographic entity for which the index of dissimilarity is being calculated.
"bi" = the population of group B in the "i"th area
"B" = the total population in group B in the large geographic entity for which the index of dissimilarity is being calculated.
The index of dissimilarity is applicable to any categorical variable (whether demographic or not) and because of its simple properties is useful for input into multidimensional scaling and clustering programs. It has been used extensively in the study of social mobility to compare distributions of origin (or destination) occupational categories.
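A minimal implementation of the basic formula (an added sketch; the two input sequences hold the counts of each group per areal unit, and the example numbers are purely illustrative):

```python
def dissimilarity_index(a, b):
    """Index of dissimilarity: D = 0.5 * sum_i |a_i/A - b_i/B|."""
    A, B = sum(a), sum(b)
    return 0.5 * sum(abs(ai / A - bi / B) for ai, bi in zip(a, b))

# Hypothetical tract counts for two groups:
group_a = [100, 50, 30, 20]
group_b = [10, 40, 60, 90]
print(dissimilarity_index(group_a, group_b))
```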
Numerical Example.
Consider the following distribution of white and black population across neighborhoods.
Linear algebra perspective.
The formula for the Index of Dissimilarity can be made much more compact and meaningful by considering it from the perspective of Linear algebra. Suppose we are studying the distribution of rich and poor people in a city (e.g. London). Suppose our city contains formula_1 blocks:
formula_2
Let's create a vector formula_3 which shows the number of rich people in each block of our city:
formula_4
Similarly, let's create a vector formula_5 which shows the number of poor people in each block of our city:
formula_6
Now, the formula_7-norm of a vector is simply the sum of (the magnitude of) each entry in that vector. That is, for a vector formula_8, we have the formula_7-norm:
formula_9
If we denote formula_10 as the total number of rich people in our city, then a compact way to calculate formula_10 would be to use the formula_7-norm:
formula_11
Similarly, if we denote formula_12 as the total number of poor people in our city, then:
formula_13
When we divide a vector formula_14 by its norm, we get what is called the normalized vector or unit vector formula_15:
formula_16
Let us normalize the rich vector formula_3 and the poor vector formula_5:
formula_17
formula_18
We finally return to the formula for the Index of Dissimilarity (formula_19); it is simply equal to one-half the formula_7-norm of the difference between the vectors formula_20 and formula_21:
Index of Dissimilarity"(in Linear Algebraic notation)"
formula_22
Numerical example.
Consider a city consisting of four blocks of 2 people each. One block consists of 2 rich people. One block consists of 2 poor people. Two blocks consist of 1 rich and 1 poor person. What is the index of dissimilarity for this city?
Firstly, let's find the rich vector formula_3 and poor vector formula_5:
formula_23
formula_24
Next, let's calculate the total number of rich people and poor people in our city:
formula_25
formula_26
Next, let's normalize the rich and poor vectors:
formula_27
formula_28
We can now calculate the difference formula_29:
formula_30
Finally, let's find the index of dissimilarity (formula_19):
formula_31
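The same computation in vector form (an added sketch reproducing the worked example above with NumPy):

```python
import numpy as np

rich = np.array([2, 0, 1, 1])
poor = np.array([0, 2, 1, 1])

r_hat = rich / rich.sum()  # normalized rich vector
p_hat = poor / poor.sum()  # normalized poor vector

D = 0.5 * np.abs(r_hat - p_hat).sum()  # half the L1 norm of the difference
print(D)  # 0.5, matching the worked example
```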
Equivalence between formulae.
We can prove that the Linear Algebraic formula for formula_19 is identical to the basic formula for formula_19. Let's start with the Linear Algebraic formula:
formula_22
Let's replace the normalized vectors formula_3 and formula_5 with:
formula_32
Finally, from the definition of the formula_7-norm, we know that we can replace it with the summation:
formula_33
Thus we prove that the linear algebra formula for the index of dissimilarity is equivalent to the basic formula for it:
formula_34
Zero segregation.
When the Index of Dissimilarity is zero, this means that the community we are studying has zero segregation. For example, if we are studying the segregation of rich and poor people in a city, then if formula_35, it means that:
If we set formula_35 in the linear algebraic formula, we get the necessary condition for having zero segregation:
formula_36
For example, suppose you have a city with 2 blocks. Each block has 4 rich people and 100 poor people:
formula_37
formula_38
Then, the total number of rich people is formula_39, and the total number of poor people is formula_40. Thus:
formula_41
formula_42
Because formula_36, thus this city has zero segregation.
As another example, suppose you have a city with 3 blocks:
formula_43
formula_44
Then, we have formula_45 rich people in our city, and formula_46 poor people. Thus:
formula_47
formula_48
Again, because formula_36, thus this city also has zero segregation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "D = \\frac{1}{2} \\sum_{i=1}^N \\left| \\frac{a_i}{A} - \\frac{b_i}{B} \\right| "
},
{
"math_id": 1,
"text": "N"
},
{
"math_id": 2,
"text": "\\{\\text{block 1}, \\text{block 2}, \\ldots, \\text{block N}\\}"
},
{
"math_id": 3,
"text": "\\mathbf{r}"
},
{
"math_id": 4,
"text": "\\mathbf{r} = [r_1, r_2, \\cdots, r_N]"
},
{
"math_id": 5,
"text": "\\mathbf{p}"
},
{
"math_id": 6,
"text": "\\mathbf{p} = [p_1, p_2, \\cdots, p_N]"
},
{
"math_id": 7,
"text": "L^1"
},
{
"math_id": 8,
"text": "\\mathbf{v} = [v_1, v_2, \\cdots, v_N]"
},
{
"math_id": 9,
"text": "|\\mathbf{v}|_1 = \\sum_{i=1}^{N} |v_i|"
},
{
"math_id": 10,
"text": "R"
},
{
"math_id": 11,
"text": "R = |\\mathbf{r}|_1 = \\sum_{i=1}^{N} |r_i|"
},
{
"math_id": 12,
"text": "P"
},
{
"math_id": 13,
"text": "P = |\\mathbf{p}|_1 = \\sum_{i=1}^{N} |p_i|"
},
{
"math_id": 14,
"text": "\\mathbf{v}"
},
{
"math_id": 15,
"text": "\\hat{\\mathbf{v}}"
},
{
"math_id": 16,
"text": "\\hat{\\mathbf{v}} = \\frac{\\mathbf{v}}{|\\mathbf{v}|_1}"
},
{
"math_id": 17,
"text": "\\hat{\\mathbf{r}} = \\frac{\\mathbf{r}}{|\\mathbf{r}|_1} = \\frac{\\mathbf{r}}{R}"
},
{
"math_id": 18,
"text": "\\hat{\\mathbf{p}} = \\frac{\\mathbf{p}}{|\\mathbf{p}|_1} = \\frac{\\mathbf{p}}{P}"
},
{
"math_id": 19,
"text": "D"
},
{
"math_id": 20,
"text": "\\hat{\\mathbf{r}}"
},
{
"math_id": 21,
"text": "\\hat{\\mathbf{p}}"
},
{
"math_id": 22,
"text": "D = \\frac{1}{2}|\\hat{\\mathbf{r}} - \\hat{\\mathbf{p}}|_1"
},
{
"math_id": 23,
"text": "\\mathbf{r} = [2,0,1,1]"
},
{
"math_id": 24,
"text": "\\mathbf{p} = [0,2,1,1]"
},
{
"math_id": 25,
"text": "R = 2 + 0 + 1 + 1 = 4"
},
{
"math_id": 26,
"text": "P = 0 + 2 + 1 + 1 = 4"
},
{
"math_id": 27,
"text": "\\hat{\\mathbf{r}} = \\frac{\\mathbf{r}}{R} = \\frac{1}{4}[2,0,1,1] = [0.5, 0, 0.25, 0.25]"
},
{
"math_id": 28,
"text": "\\hat{\\mathbf{p}} = \\frac{\\mathbf{p}}{P} = \\frac{1}{4}[0,2,1,1] = [0, 0.5, 0.25, 0.25]"
},
{
"math_id": 29,
"text": "\\hat{\\mathbf{r}} - \\hat{\\mathbf{p}}"
},
{
"math_id": 30,
"text": "\\hat{\\mathbf{r}} - \\hat{\\mathbf{p}} = [0.5, 0, 0.25, 0.25] - [0, 0.5, 0.25, 0.25] = [0.5, -0.5, 0, 0]"
},
{
"math_id": 31,
"text": "D = \\frac{1}{2} |\\hat{\\mathbf{r}} - \\hat{\\mathbf{p}}|_1 = \\frac{1}{2} ( |0.5| + |-0.5| ) = 0.5 "
},
{
"math_id": 32,
"text": "D = \\frac{1}{2} \\left| \\frac{\\mathbf{r}}{R} - \\frac{\\mathbf{p}}{P} \\right|_1"
},
{
"math_id": 33,
"text": "D = \\frac{1}{2} \\sum_{i=1}^{N} |\\frac{r_i}{R} - \\frac{p_i}{P}|"
},
{
"math_id": 34,
"text": "D = \\frac{1}{2}|\\hat{\\mathbf{r}} - \\hat{\\mathbf{p}}|_1 = \\frac{1}{2} \\sum_{i=1}^{N} |\\frac{r_i}{R} - \\frac{p_i}{P}|"
},
{
"math_id": 35,
"text": "D = 0"
},
{
"math_id": 36,
"text": "\\mathbf{\\hat{r}} = \\mathbf{\\hat{p}}"
},
{
"math_id": 37,
"text": "\\mathbf{r} = [4,4]"
},
{
"math_id": 38,
"text": "\\mathbf{p} = [100,100]"
},
{
"math_id": 39,
"text": "R = 4 + 4 = 8"
},
{
"math_id": 40,
"text": "P = 100 + 100 = 200"
},
{
"math_id": 41,
"text": "\\mathbf{\\hat{r}} = [4/8, 4/8] = [0.5, 0.5]"
},
{
"math_id": 42,
"text": "\\mathbf{\\hat{p}} = [100/200, 100/200] = [0.5, 0.5]"
},
{
"math_id": 43,
"text": "\\mathbf{r} = [1,2,3]"
},
{
"math_id": 44,
"text": "\\mathbf{p} = [100,200,300]"
},
{
"math_id": 45,
"text": "R = 1 + 2 + 3= 6"
},
{
"math_id": 46,
"text": "P = 100 + 200 + 300 = 600"
},
{
"math_id": 47,
"text": "\\mathbf{\\hat{r}} = [1/6, 2/6, 3/6]"
},
{
"math_id": 48,
"text": "\\mathbf{\\hat{p}} = [100/600,200/600,300/600] = [1/6,2/6,3/6]"
}
] | https://en.wikipedia.org/wiki?curid=1238550 |
12386 | Golden ratio | Number, approximately 1.618
In mathematics, two quantities are in the golden ratio if their ratio is the same as the ratio of their sum to the larger of the two quantities. Expressed algebraically, for quantities formula_0 and formula_1 with formula_2, formula_0 is in a golden ratio to formula_1 if
<templatestyles src="Block indent/styles.css"/>formula_3
where the Greek letter phi (formula_4 or formula_5) denotes the golden ratio. The constant formula_4 satisfies the quadratic equation formula_6 and is an irrational number with a value of
<templatestyles src="Block indent/styles.css"/>formula_7...
The golden ratio was called the extreme and mean ratio by Euclid, and the divine proportion by Luca Pacioli, and also goes by several other names.
Mathematicians have studied the golden ratio's properties since antiquity. It is the ratio of a regular pentagon's diagonal to its side and thus appears in the construction of the dodecahedron and icosahedron. A golden rectangle—that is, a rectangle with an aspect ratio of formula_4—may be cut into a square and a smaller rectangle with the same aspect ratio. The golden ratio has been used to analyze the proportions of natural objects and artificial systems such as financial markets, in some cases based on dubious fits to data. The golden ratio appears in some patterns in nature, including the spiral arrangement of leaves and other parts of vegetation.
Some 20th-century artists and architects, including Le Corbusier and Salvador Dalí, have proportioned their works to approximate the golden ratio, believing it to be aesthetically pleasing. These uses often appear in the form of a golden rectangle.
<templatestyles src="Template:TOC limit/styles.css" />
Calculation.
Two quantities formula_0 and formula_1 are in the "golden ratio" formula_4 if
<templatestyles src="Block indent/styles.css"/>formula_8
One method for finding a closed form for formula_4 starts with the left fraction. Simplifying the fraction and substituting the reciprocal formula_9,
<templatestyles src="Block indent/styles.css"/>formula_10
Therefore,
<templatestyles src="Block indent/styles.css"/>formula_11
Multiplying by formula_4 gives
<templatestyles src="Block indent/styles.css"/>formula_12
which can be rearranged to
<templatestyles src="Block indent/styles.css"/>formula_13
The quadratic formula yields two solutions:
<templatestyles src="Block indent/styles.css"/>formula_14 and formula_15
Because formula_4 is a ratio between positive quantities, formula_4 is necessarily the positive root. The negative root is in fact the negative inverse formula_16, which shares many properties with the golden ratio.
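A short numerical check of this calculation (an added sketch):

```python
import math

phi = (1 + math.sqrt(5)) / 2     # positive root of x**2 - x - 1 = 0
print(phi)                        # 1.618033988749895
print(phi ** 2 - (phi + 1))       # ≈ 0, since phi² = phi + 1
print(1 / phi - (phi - 1))        # ≈ 0, since 1/phi = phi - 1
print((1 - math.sqrt(5)) / 2)     # negative root ≈ -0.618..., the negative inverse
```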
History.
According to Mario Livio,
<templatestyles src="Template:Blockquote/styles.css" />Some of the greatest mathematical minds of all ages, from Pythagoras and Euclid in ancient Greece, through the medieval Italian mathematician Leonardo of Pisa and the Renaissance astronomer Johannes Kepler, to present-day scientific figures such as Oxford physicist Roger Penrose, have spent endless hours over this simple ratio and its properties. ... Biologists, artists, musicians, historians, architects, psychologists, and even mystics have pondered and debated the basis of its ubiquity and appeal. In fact, it is probably fair to say that the Golden Ratio has inspired thinkers of all disciplines like no other number in the history of mathematics.
Ancient Greek mathematicians first studied the golden ratio because of its frequent appearance in geometry; the division of a line into "extreme and mean ratio" (the golden section) is important in the geometry of regular pentagrams and pentagons. According to one story, 5th-century BC mathematician Hippasus discovered that the golden ratio was neither a whole number nor a fraction (it is irrational), surprising Pythagoreans. Euclid's "Elements" (c. 300 BC) provides several propositions and their proofs employing the golden ratio, and contains its first known definition which proceeds as follows:
<templatestyles src="Template:Blockquote/styles.css" />A straight line is said to have been cut in extreme and mean ratio when, as the whole line is to the greater segment, so is the greater to the lesser.
The golden ratio was studied peripherally over the next millennium. Abu Kamil (c. 850–930) employed it in his geometric calculations of pentagons and decagons; his writings influenced that of Fibonacci (Leonardo of Pisa) (c. 1170–1250), who used the ratio in related geometry problems but did not observe that it was connected to the Fibonacci numbers.
Luca Pacioli named his book "Divina proportione" (1509) after the ratio; the book, largely plagiarized from Piero della Francesca, explored its properties including its appearance in some of the Platonic solids. Leonardo da Vinci, who illustrated Pacioli's book, called the ratio the "sectio aurea" ('golden section'). Though it is often said that Pacioli advocated the golden ratio's application to yield pleasing, harmonious proportions, Livio points out that the interpretation has been traced to an error in 1799, and that Pacioli actually advocated the Vitruvian system of rational proportions. Pacioli also saw Catholic religious significance in the ratio, which led to his work's title. 16th-century mathematicians such as Rafael Bombelli solved geometric problems using the ratio.
German mathematician Simon Jacob (d. 1564) noted that consecutive Fibonacci numbers converge to the golden ratio; this was rediscovered by Johannes Kepler in 1608. The first known decimal approximation of the (inverse) golden ratio was stated as "about formula_17" in 1597 by Michael Maestlin of the University of Tübingen in a letter to Kepler, his former student. The same year, Kepler wrote to Maestlin of the Kepler triangle, which combines the golden ratio with the Pythagorean theorem. Kepler said of these:<templatestyles src="Template:Blockquote/styles.css" />
Eighteenth-century mathematicians Abraham de Moivre, Nicolaus I Bernoulli, and Leonhard Euler used a golden ratio-based formula which finds the value of a Fibonacci number based on its placement in the sequence; in 1843, this was rediscovered by Jacques Philippe Marie Binet, for whom it was named "Binet's formula". Martin Ohm first used the German term "goldener Schnitt" ('golden section') to describe the ratio in 1835. James Sully used the equivalent English term in 1875.
By 1910, inventor Mark Barr began using the Greek letter phi (formula_4) as a symbol for the golden ratio. It has also been represented by tau (formula_18), the first letter of the ancient Greek τομή ('cut' or 'section').
The zome construction system, developed by Steve Baer in the late 1960s, is based on the symmetry system of the icosahedron/dodecahedron, and uses the golden ratio ubiquitously. Between 1973 and 1974, Roger Penrose developed Penrose tiling, a pattern related to the golden ratio both in the ratio of areas of its two rhombic tiles and in their relative frequency within the pattern. This gained in interest after Dan Shechtman's Nobel-winning 1982 discovery of quasicrystals with icosahedral symmetry, which were soon afterward explained through analogies to the Penrose tiling.
Mathematics.
Irrationality.
The golden ratio is an irrational number. Below are two short proofs of irrationality:
Contradiction from an expression in lowest terms.
This is a proof by infinite descent. Recall that:
<templatestyles src="Block indent/styles.css"/>the whole is the longer part plus the shorter part;
the whole is to the longer part as the longer part is to the shorter part.
If we call the whole formula_19 and the longer part formula_20 then the second statement above becomes
<templatestyles src="Block indent/styles.css"/>formula_19 is to formula_21 as formula_21 is to formula_22
To say that the golden ratio formula_4 is rational means that formula_4 is a fraction formula_23 where formula_19 and formula_21 are integers. We may take formula_23 to be in lowest terms and formula_19 and formula_21 to be positive. But if formula_23 is in lowest terms, then the equally valued formula_24 is in still lower terms. That is a contradiction that follows from the assumption that formula_4 is rational.
By irrationality of √5.
Another short proof – perhaps more commonly known – of the irrationality of the golden ratio makes use of the closure of rational numbers under addition and multiplication. If formula_25 is rational, then formula_26 is also rational, which is a contradiction if it is already known that the square roots of all non-square natural numbers are irrational.
Minimal polynomial.
The golden ratio is also an algebraic number and even an algebraic integer. It has minimal polynomial
<templatestyles src="Block indent/styles.css"/>formula_27
This quadratic polynomial has two roots, formula_4 and formula_28
The golden ratio is also closely related to the polynomial
<templatestyles src="Block indent/styles.css"/>formula_29
which has roots formula_30 and formula_31 As the root of a quadratic polynomial, the golden ratio is a constructible number.
Golden ratio conjugate and powers.
The conjugate root to the minimal polynomial formula_32 is
<templatestyles src="Block indent/styles.css"/>formula_33
The absolute value of this quantity (formula_34) corresponds to the length ratio taken in reverse order (shorter segment length over longer segment length, formula_35).
This illustrates the unique property of the golden ratio among positive numbers, that
<templatestyles src="Block indent/styles.css"/>formula_36
or its inverse:
<templatestyles src="Block indent/styles.css"/>formula_37
The conjugate and the defining quadratic polynomial relationship lead to decimal values that have their fractional part in common with formula_4:
<templatestyles src="Block indent/styles.css"/>formula_38
The sequence of powers of formula_4 contains these values formula_39 formula_40 formula_41 formula_42 more generally,
any power of formula_4 is equal to the sum of the two immediately preceding powers:
<templatestyles src="Block indent/styles.css"/>formula_43
As a result, one can easily decompose any power of formula_4 into a multiple of formula_4 and a constant. The multiple and the constant are always adjacent Fibonacci numbers. This leads to another property of the positive powers of formula_4:
If formula_44 then:
<templatestyles src="Block indent/styles.css"/>formula_45
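The Fibonacci decomposition of powers of formula_4 is easy to check numerically. The following short Python sketch (illustrative only; the helper "fib" is an assumption introduced for the example) verifies formula_43 for small exponents:

```python
# Sketch only: verify phi**n == F(n)*phi + F(n-1) for small n (floating point).
phi = (1 + 5 ** 0.5) / 2

def fib(n):
    a, b = 0, 1            # F(0), F(1)
    for _ in range(n):
        a, b = b, a + b
    return a               # F(n)

for n in range(2, 10):
    assert abs(phi ** n - (fib(n) * phi + fib(n - 1))) < 1e-9
print("phi**n = F(n)*phi + F(n-1) holds numerically for n = 2..9")
```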
Continued fraction and square root.
The formula formula_46 can be expanded recursively to obtain a continued fraction for the golden ratio:
<templatestyles src="Block indent/styles.css"/>formula_47
It is in fact the simplest form of a continued fraction, alongside its reciprocal form:
<templatestyles src="Block indent/styles.css"/>formula_48
The convergents of these continued fractions (formula_49 formula_50 formula_51 formula_52 formula_53 formula_54 ... or formula_49 formula_55 formula_56 formula_57 formula_58 formula_59 ...) are ratios of successive Fibonacci numbers. The consistently small terms in its continued fraction explain why the approximants converge so slowly. This makes the golden ratio an extreme case of the Hurwitz inequality for Diophantine approximations, which states that for every irrational formula_60, there are infinitely many distinct fractions formula_61 such that,
formula_62
This means that the constant formula_63 cannot be improved without excluding the golden ratio. It is, in fact, the smallest number that must be excluded to generate closer approximations of such Lagrange numbers.
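As a rough numerical illustration (a sketch with assumed helper code, not a prescribed algorithm), the convergents of this continued fraction are ratios of consecutive Fibonacci numbers, and the scaled error q²·|φ − p/q| tends to 1/√5 ≈ 0.447, which is why the Hurwitz constant cannot be improved:

```python
# Sketch: convergents of [1; 1, 1, ...] and the scaled error q^2 * |phi - p/q|,
# which approaches 1/sqrt(5) ~ 0.447 (the Hurwitz constant).
from math import sqrt

phi = (1 + sqrt(5)) / 2
p, q = 1, 1                        # first convergent 1/1
for _ in range(12):
    print(f"{p}/{q}", q * q * abs(phi - p / q))
    p, q = p + q, p                # next convergent (p + q)/p
print("limit:", 1 / sqrt(5))
```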
A continued square root form for formula_4 can be obtained from formula_64, yielding:
<templatestyles src="Block indent/styles.css"/>formula_65
Relationship to Fibonacci and Lucas numbers.
Fibonacci numbers and Lucas numbers have an intricate relationship with the golden ratio. In the Fibonacci sequence, each number is equal to the sum of the preceding two, starting with the base sequence formula_66:
<templatestyles src="Block indent/styles.css"/>formula_67 formula_68 formula_68 formula_69 formula_70 formula_71 formula_72 formula_73 formula_74 formula_75 formula_76 formula_77 formula_78(OEIS: ).
The sequence of Lucas numbers (not to be confused with the generalized Lucas sequences, of which this is part) is like the Fibonacci sequence in that each term is the sum of the previous two; however, it starts with formula_79:
<templatestyles src="Block indent/styles.css"/>formula_69 formula_68 formula_70 formula_80 formula_81 formula_82 formula_83 formula_84 formula_85 formula_86 formula_87 formula_88 formula_78(OEIS: ).
Notably, the golden ratio is equal to the limit of the ratios of successive terms in both the Fibonacci sequence and the sequence of Lucas numbers:
<templatestyles src="Block indent/styles.css"/>formula_89
In other words, if a Fibonacci or Lucas number is divided by its immediate predecessor in the sequence, the quotient approximates formula_4.
For example, formula_90 and formula_91
These approximations are alternately lower and higher than formula_92 and converge to formula_4 as the Fibonacci and Lucas numbers increase.
Closed-form expressions for the Fibonacci and Lucas sequences that involve the golden ratio are:
<templatestyles src="Block indent/styles.css"/>formula_93
<templatestyles src="Block indent/styles.css"/>formula_94
Combining both formulas above, one obtains a formula for formula_95 that involves both Fibonacci and Lucas numbers:
<templatestyles src="Block indent/styles.css"/>formula_96
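These closed forms can be checked directly. A minimal Python sketch (the helper names and ranges are assumptions made for illustration) compares them with the defining recurrences:

```python
# Sketch: check the closed forms for F(n), L(n) and the combined identity
# phi**n = (L(n) + F(n)*sqrt(5)) / 2 against the recurrences.
from math import sqrt

phi = (1 + sqrt(5)) / 2

def F(n):
    return round((phi ** n - (-phi) ** (-n)) / sqrt(5))

def L(n):
    return round(phi ** n + (-phi) ** (-n))

f, l = [0, 1], [2, 1]
for n in range(2, 20):
    f.append(f[-1] + f[-2])
    l.append(l[-1] + l[-2])

assert all(F(n) == f[n] and L(n) == l[n] for n in range(20))
assert all(abs(phi ** n - (L(n) + F(n) * sqrt(5)) / 2) < 1e-6 for n in range(20))
print("closed forms agree with the recurrences for n = 0..19")
```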
From the relationships between Fibonacci and Lucas numbers one can deduce formula_97 which simplifies to express the limit of the quotient of Lucas numbers by Fibonacci numbers as equal to the square root of five:
<templatestyles src="Block indent/styles.css"/>formula_98
Indeed, much stronger statements are true:
formula_99
formula_100
These values describe formula_4 as a fundamental unit of the algebraic number field formula_101.
Successive powers of the golden ratio obey the Fibonacci recurrence, i.e. formula_102
The reduction to a linear expression can be accomplished in one step by using:
<templatestyles src="Block indent/styles.css"/>formula_103
This identity allows any polynomial in formula_4 to be reduced to a linear expression, as in:
<templatestyles src="Block indent/styles.css"/>formula_104
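The same reduction can be automated by representing numbers of the form formula_201 as integer pairs and multiplying them using formula_6. The following Python sketch (an assumed representation, introduced only for illustration) reproduces the example above:

```python
# Sketch: arithmetic on pairs (a, b) representing a + b*phi, reduced with
# phi**2 = phi + 1, reproducing 3*phi**3 - 5*phi**2 + 4 = 2 + phi.
def mul(x, y):
    a, b = x
    c, d = y
    # (a + b*phi)(c + d*phi) = ac + (ad + bc)*phi + bd*(phi + 1)
    return (a * c + b * d, a * d + b * c + b * d)

def power(x, n):
    out = (1, 0)
    for _ in range(n):
        out = mul(out, x)
    return out

phi = (0, 1)
p2, p3 = power(phi, 2), power(phi, 3)     # (1, 1) and (1, 2)
result = (3 * p3[0] - 5 * p2[0] + 4, 3 * p3[1] - 5 * p2[1])
print(result)                             # (2, 1), i.e. 2 + phi ~ 3.618
```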
Consecutive Fibonacci numbers can also be used to obtain a similar formula for the golden ratio, here by infinite summation:
<templatestyles src="Block indent/styles.css"/>formula_105
In particular, the powers of formula_4 themselves round to Lucas numbers (in order, except for the first two powers, formula_106 and formula_4, which are in reverse order):
<templatestyles src="Block indent/styles.css"/>formula_107
and so forth. The Lucas numbers also directly generate powers of the golden ratio; for formula_108:
<templatestyles src="Block indent/styles.css"/>formula_109
Rooted in their interconnecting relationship with the golden ratio is the fact that the sum of two Fibonacci numbers whose indices differ by two equals a Lucas number, that is formula_110; and, importantly, that formula_111.
Both the Fibonacci sequence and the sequence of Lucas numbers can be used to generate approximate forms of the golden spiral (which is a special form of a logarithmic spiral) using quarter-circles with radii from these sequences, differing only slightly from the "true" golden logarithmic spiral. "Fibonacci spiral" is generally the term used for spirals that approximate golden spirals using Fibonacci number-sequenced squares and quarter-circles.
Geometry.
The golden ratio features prominently in geometry. For example, it is intrinsically involved in the internal symmetry of the pentagon, and extends to form part of the coordinates of the vertices of a regular dodecahedron, as well as those of a 5-cell. It features in the Kepler triangle and Penrose tilings too, as well as in various other polytopes.
Construction.
Dividing by interior division
Dividing by exterior division
Application examples can be seen in the articles Pentagon with a given side length, Decagon with given circumcircle and Decagon with a given side length.
Both of the algorithms displayed above produce geometric constructions that determine two aligned line segments where the ratio of the longer one to the shorter one is the golden ratio.
Golden angle.
When two angles that make a full circle have measures in the golden ratio, the smaller is called the "golden angle", with measure formula_134
<templatestyles src="Block indent/styles.css"/>formula_135
This angle occurs in patterns of plant growth as the optimal spacing of leaf shoots around plant stems so that successive leaves do not block sunlight from the leaves below them.
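Numerically, the golden angle is easy to evaluate; the short sketch below (illustrative only) computes it from both expressions above:

```python
# Sketch: the golden angle g = 2*pi/phi**2, i.e. 360/phi**2 degrees.
from math import sqrt

phi = (1 + sqrt(5)) / 2
print(360 / phi ** 2)        # ~137.5078 degrees (golden angle)
print(360 - 360 / phi)       # the same angle, via the complementary arc ~222.5
```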
Pentagonal symmetry system.
Pentagon and pentagram.
In a regular pentagon the ratio of a diagonal to a side is the golden ratio, while intersecting diagonals section each other in the golden ratio. The golden ratio properties of a regular pentagon can be confirmed by applying Ptolemy's theorem to the quadrilateral formed by removing one of its vertices. If the quadrilateral's long edge and diagonals are formula_136 and short edges are formula_137 then Ptolemy's theorem gives formula_138 Dividing both sides by formula_139 yields (see above),
<templatestyles src="Block indent/styles.css"/>formula_140
The diagonal segments of a pentagon form a pentagram, or five-pointed star polygon, whose geometry is quintessentially described by formula_4. Primarily, each intersection of edges sections other edges in the golden ratio. The ratio of the length of the shorter segment to the segment bounded by the two intersecting edges (that is, a side of the inverted pentagon in the pentagram's center) is formula_92 as the four-color illustration shows.
Pentagonal and pentagrammic geometry permits us to calculate the following values for formula_4:
<templatestyles src="Block indent/styles.css"/>formula_141
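These trigonometric identities can be confirmed numerically; the following Python sketch (an illustration, with the tolerance chosen as an assumption) checks all four:

```python
# Sketch: numerical check of the pentagonal trigonometric expressions for phi.
from math import sin, cos, radians, sqrt

phi = (1 + sqrt(5)) / 2
values = [
    1 + 2 * sin(radians(18)),      # 1 + 2 sin(pi/10)
    1 / (2 * sin(radians(18))),    # (1/2) csc(pi/10)
    2 * cos(radians(36)),          # 2 cos(pi/5)
    2 * sin(radians(54)),          # 2 sin(3*pi/10)
]
assert all(abs(v - phi) < 1e-12 for v in values)
print(values)
```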
Golden triangle and golden gnomon.
The triangle formed by two diagonals and a side of a regular pentagon is called a "golden triangle" or "sublime triangle". It is an acute isosceles triangle with apex angle 36° and base angles 72°. Its two equal sides are in the golden ratio to its base. The triangle formed by two sides and a diagonal of a regular pentagon is called a "golden gnomon". It is an obtuse isosceles triangle with apex angle 108° and base angle 36°. Its base is in the golden ratio to its two equal sides. The pentagon can thus be subdivided into two golden gnomons and a central golden triangle. The five points of a regular pentagram are golden triangles, as are the ten triangles formed by connecting the vertices of a regular decagon to its center point.
Bisecting one of the base angles of the golden triangle subdivides it into a smaller golden triangle and a golden gnomon. Analogously, any acute isosceles triangle can be subdivided into a similar triangle and an obtuse isosceles triangle, but the golden triangle is the only one for which this subdivision is made by the angle bisector, because it is the only isosceles triangle whose base angle is twice its apex angle. The angle bisector of the golden triangle subdivides the side that it meets in the golden ratio, and the areas of the two subdivided pieces are also in the golden ratio.
If the apex angle of the golden gnomon is trisected, the trisector again subdivides it into a smaller golden gnomon and a golden triangle. The trisector subdivides the base in the golden ratio, and the two pieces have areas in the golden ratio. Analogously, any obtuse triangle can be subdivided into a similar triangle and an acute isosceles triangle, but the golden gnomon is the only one for which this subdivision is made by the angle trisector, because it is the only isosceles triangle whose apex angle is three times its base angle.
Penrose tilings.
The golden ratio appears prominently in the "Penrose tiling", a family of aperiodic tilings of the plane developed by Roger Penrose, inspired by Johannes Kepler's remark that pentagrams, decagons, and other shapes could fill gaps that pentagonal shapes alone leave when tiled together. Several variations of this tiling have been studied, all of whose prototiles exhibit the golden ratio:
In triangles and quadrilaterals.
Odom's construction.
George Odom found a construction for formula_4 involving an equilateral triangle: if the line segment joining the midpoints of two sides is extended to intersect the circumcircle, then the two midpoints and the point of intersection with the circle are in golden proportion.
Kepler triangle.
The "Kepler triangle", named after Johannes Kepler, is the unique right triangle with sides in geometric progression:
<templatestyles src="Block indent/styles.css"/>formula_143.
These side lengths are the three Pythagorean means of the two numbers formula_144. The three squares on its sides have areas in the golden geometric progression formula_145.
Among isosceles triangles, the ratio of inradius to side length is maximized for the triangle formed by two reflected copies of the Kepler triangle, sharing the longer of their two legs. The same isosceles triangle maximizes the ratio of the radius of a semicircle on its base to its perimeter.
For a Kepler triangle with smallest side length formula_146, the area and acute internal angles are:
<templatestyles src="Block indent/styles.css"/>formula_147
Golden rectangle.
The golden ratio proportions the adjacent side lengths of a "golden rectangle" in formula_142 ratio. Stacking golden rectangles produces golden rectangles anew, and removing or adding squares from golden rectangles leaves rectangles still proportioned in formula_4 ratio. They can be generated by "golden spirals", through successive Fibonacci and Lucas number-sized squares and quarter circles. They feature prominently in the icosahedron as well as in the dodecahedron (see section below for more detail).
Golden rhombus.
A "golden rhombus" is a rhombus whose diagonals are in proportion to the golden ratio, most commonly formula_142. For a rhombus of such proportions, its acute angle and obtuse angles are:
<templatestyles src="Block indent/styles.css"/>formula_148
The lengths of its short and long diagonals formula_149 and formula_150, in terms of side length formula_0 are:
<templatestyles src="Block indent/styles.css"/>formula_151
Its area, in terms of formula_0 and formula_149:
<templatestyles src="Block indent/styles.css"/>formula_152
Its inradius, in terms of side formula_0:
<templatestyles src="Block indent/styles.css"/>formula_153
Golden rhombi form the faces of the rhombic triacontahedron, the two golden rhombohedra, the Bilinski dodecahedron, and the rhombic hexecontahedron.
Golden spiral.
Logarithmic spirals are self-similar spirals where distances covered per turn are in geometric progression. A logarithmic spiral whose radius increases by a factor of the golden ratio for each quarter-turn is called the golden spiral. These spirals can be approximated by quarter-circles that grow by the golden ratio, or their approximations generated from Fibonacci numbers, often depicted inscribed within a spiraling pattern of squares growing in the same ratio. The exact logarithmic spiral form of the golden spiral can be described by the polar equation with formula_154:
<templatestyles src="Block indent/styles.css"/>formula_155
Not all logarithmic spirals are connected to the golden ratio, and not all spirals that are connected to the golden ratio are the same shape as the golden spiral. For instance, a different logarithmic spiral, encasing a nested sequence of golden isosceles triangles, grows by the golden ratio for each 108° that it turns, instead of the 90° turning angle of the golden spiral. Another variation, called the "better golden spiral", grows by the golden ratio for each half-turn, rather than each quarter-turn.
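As a small illustration (a sketch, with the sampled angles chosen as an assumption), the polar form above makes the quarter-turn growth of the golden spiral explicit:

```python
# Sketch: along r = phi**(2*theta/pi), the radius grows by a factor of phi
# every quarter turn.
from math import pi, sqrt

phi = (1 + sqrt(5)) / 2

def r(theta):
    return phi ** (2 * theta / pi)

for k in range(5):
    theta = k * pi / 2
    print(round(r(theta), 6), round(r(theta + pi / 2) / r(theta), 6))  # ratio = phi
```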
In the dodecahedron and icosahedron.
The regular dodecahedron and its dual polyhedron the icosahedron are Platonic solids whose dimensions are related to the golden ratio. A dodecahedron has formula_156 regular pentagonal faces, whereas an icosahedron has formula_157 equilateral triangles; both have formula_158 edges.
For a dodecahedron of side formula_0, the radius of a circumscribed and inscribed sphere, and midradius are (formula_159 formula_160 and formula_161 respectively):
<templatestyles src="Block indent/styles.css"/>formula_162 formula_163 and formula_164
While for an icosahedron of side formula_0, the radius of a circumscribed and inscribed sphere, and midradius are:
<templatestyles src="Block indent/styles.css"/>formula_165 formula_166 and formula_167
The volume and surface area of the dodecahedron can be expressed in terms of formula_4:
<templatestyles src="Block indent/styles.css"/>formula_168 and formula_169.
As well as for the icosahedron:
<templatestyles src="Block indent/styles.css"/>formula_170 and formula_171
These geometric values can be calculated from their Cartesian coordinates, which also can be given using formulas involving formula_4. The coordinates of the dodecahedron are displayed on the figure above, while those of the icosahedron are the cyclic permutations of:
<templatestyles src="Block indent/styles.css"/>formula_172, formula_173, formula_174
Sets of three golden rectangles intersect perpendicularly inside dodecahedra and icosahedra, forming Borromean rings. In dodecahedra, pairs of opposing vertices in golden rectangles meet the centers of pentagonal faces, and in icosahedra, they meet at its vertices. In all, the three golden rectangles contain formula_156 vertices of the icosahedron, or equivalently, intersect the centers of formula_156 of the dodecahedron's faces.
A cube can be inscribed in a regular dodecahedron, with some of the diagonals of the pentagonal faces of the dodecahedron serving as the cube's edges; therefore, the edge lengths are in the golden ratio. The cube's volume is formula_175 times that of the dodecahedron. In fact, golden rectangles inside a dodecahedron are in golden proportions to an inscribed cube, such that edges of a cube and the long edges of a golden rectangle are themselves in formula_176 ratio. On the other hand, an icosahedron can be inscribed in the octahedron, the dual polyhedron of the cube, such that the icosahedron's formula_156 vertices touch the formula_156 edges of the octahedron at points that divide its edges in the golden ratio.
Other polyhedra are related to the dodecahedron and icosahedron or their symmetries, and therefore have corresponding relations to the golden ratio. These include the compound of five cubes, the compound of five octahedra, the compound of five tetrahedra, the compound of ten tetrahedra, the rhombic triacontahedron, the icosidodecahedron, the truncated icosahedron, the truncated dodecahedron, the rhombicosidodecahedron, the rhombic enneacontahedron, the Kepler–Poinsot polyhedra, and the rhombic hexecontahedron. In four dimensions, the dodecahedron and icosahedron appear as faces of the 120-cell and 600-cell, which again have dimensions related to the golden ratio.
Other properties.
The golden ratio's "decimal expansion" can be calculated via root-finding methods, such as Newton's method or Halley's method, on the equation formula_177 or on formula_178 (to compute formula_63 first). The time needed to compute formula_19 digits of the golden ratio using Newton's method is essentially formula_179, where formula_180 is the time complexity of multiplying two formula_19-digit numbers. This is considerably faster than known algorithms for formula_181 and formula_182. An easily programmed alternative using only integer arithmetic is to calculate two large consecutive Fibonacci numbers and divide them. The ratio of Fibonacci numbers formula_183 and formula_184 each over formula_185 digits, yields over formula_186 significant digits of the golden ratio. The decimal expansion of the golden ratio formula_4 has been calculated to an accuracy of ten trillion (formula_187) digits.
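A minimal integer-arithmetic sketch of the Fibonacci-ratio approach (the chosen index and digit count are assumptions made for the example, not values from the text) is:

```python
# Sketch: digits of phi from the ratio of two large consecutive Fibonacci
# numbers, using only Python integers.
def fib_pair(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a, b                      # (F(n), F(n+1))

digits = 60
f_n, f_n1 = fib_pair(300)
print((f_n1 * 10 ** digits) // f_n)  # floor(phi * 10**60): 1618033988749894848204...
```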
In the complex plane, the fifth roots of unity formula_188 (for an integer formula_189) satisfying formula_190 are the vertices of a pentagon. They do not form a ring of quadratic integers, however the sum of any fifth root of unity and its complex conjugate, formula_191 "is" a quadratic integer, an element of formula_192 Specifically,
<templatestyles src="Block indent/styles.css"/>formula_193
This also holds for the remaining tenth roots of unity satisfying formula_194
<templatestyles src="Block indent/styles.css"/>formula_195
For the gamma function formula_196, the only solutions to the equation formula_197 are formula_198 and formula_199.
When the golden ratio is used as the base of a numeral system (see golden ratio base, sometimes dubbed "phinary" or formula_4"-nary"), quadratic integers in the ring formula_200 – that is, numbers of the form formula_201 for formula_202 – have terminating representations, but rational fractions have non-terminating representations.
The golden ratio also appears in hyperbolic geometry, as the maximum distance from a point on one side of an ideal triangle to the closer of the other two sides: this distance, the side length of the equilateral triangle formed by the points of tangency of a circle inscribed within the ideal triangle, is formula_203
The golden ratio appears in the theory of modular functions as well. For formula_204, let
<templatestyles src="Block indent/styles.css"/>formula_205
Then
<templatestyles src="Block indent/styles.css"/>formula_206
and
<templatestyles src="Block indent/styles.css"/>formula_207
where formula_208 and formula_209 in the continued fraction should be evaluated as formula_210. The function formula_211 is invariant under formula_212, a congruence subgroup of the modular group. Also for positive real numbers formula_213 and formula_214 then
<templatestyles src="Block indent/styles.css"/>formula_215
formula_4 is a Pisot–Vijayaraghavan number.
Applications and observations.
Architecture.
The Swiss architect Le Corbusier, famous for his contributions to the modern international style, centered his design philosophy on systems of harmony and proportion. Le Corbusier's faith in the mathematical order of the universe was closely bound to the golden ratio and the Fibonacci series, which he described as "rhythms apparent to the eye and clear in their relations with one another. And these rhythms are at the very root of human activities. They resound in man by an organic inevitability, the same fine inevitability which causes the tracing out of the Golden Section by children, old men, savages and the learned."
Le Corbusier explicitly used the golden ratio in his Modulor system for the scale of architectural proportion. He saw this system as a continuation of the long tradition of Vitruvius, Leonardo da Vinci's "Vitruvian Man", the work of Leon Battista Alberti, and others who used the proportions of the human body to improve the appearance and function of architecture.
In addition to the golden ratio, Le Corbusier based the system on human measurements, Fibonacci numbers, and the double unit. He took suggestion of the golden ratio in human proportions to an extreme: he sectioned his model human body's height at the navel with the two sections in golden ratio, then subdivided those sections in golden ratio at the knees and throat; he used these golden ratio proportions in the Modulor system. Le Corbusier's 1927 Villa Stein in Garches exemplified the Modulor system's application. The villa's rectangular ground plan, elevation, and inner structure closely approximate golden rectangles.
Another Swiss architect, Mario Botta, bases many of his designs on geometric figures. Several private houses he designed in Switzerland are composed of squares and circles, cubes and cylinders. In a house he designed in Origlio, the golden ratio is the proportion between the central section and the side sections of the house.
Art.
Leonardo da Vinci's illustrations of polyhedra in Pacioli's "Divina proportione" have led some to speculate that he incorporated the golden ratio in his paintings. But the suggestion that his "Mona Lisa", for example, employs golden ratio proportions, is not supported by Leonardo's own writings. Similarly, although Leonardo's "Vitruvian Man" is often shown in connection with the golden ratio, the proportions of the figure do not actually match it, and the text only mentions whole number ratios.
Salvador Dalí, influenced by the works of Matila Ghyka, explicitly used the golden ratio in his masterpiece, "The Sacrament of the Last Supper". The dimensions of the canvas are a golden rectangle. A huge dodecahedron, in perspective so that edges appear in golden ratio to one another, is suspended above and behind Jesus and dominates the composition.
A statistical study on 565 works of art of different great painters, performed in 1999, found that these artists had not used the golden ratio in the size of their canvases. The study concluded that the average ratio of the two sides of the paintings studied is formula_216 with averages for individual artists ranging from formula_217 (Goya) to formula_218 (Bellini). On the other hand, Pablo Tosto listed over 350 works by well-known artists, including more than 100 which have canvases with golden rectangle and formula_219 proportions, and others with proportions like formula_220 formula_70 formula_80 and formula_221
Books and design.
According to Jan Tschichold,
There was a time when deviations from the truly beautiful page proportions formula_222 formula_223 and the Golden Section were rare. Many books produced between 1550 and 1770 show these proportions exactly, to within half a millimeter.
According to some sources, the golden ratio is used in everyday design, for example in the proportions of playing cards, postcards, posters, light switch plates, and widescreen televisions.
Flags.
The aspect ratio (width to height ratio) of the flag of Togo was intended to be the golden ratio, according to its designer.
Music.
Ernő Lendvai analyzes Béla Bartók's works as being based on two opposing systems, that of the golden ratio and the acoustic scale, though other music scholars reject that analysis. French composer Erik Satie used the golden ratio in several of his pieces, including "Sonneries de la Rose+Croix". The golden ratio is also apparent in the organization of the sections in the music of Debussy's "Reflets dans l'eau (Reflections in Water)", from "Images" (1st series, 1905), in which "the sequence of keys is marked out by the intervals 34, 21, 13 and 8, and the main climax sits at the phi position".
The musicologist Roy Howat has observed that the formal boundaries of Debussy's "La Mer" correspond exactly to the golden section. Trezise finds the intrinsic evidence "remarkable", but cautions that no written or reported evidence suggests that Debussy consciously sought such proportions.
Music theorists including Hans Zender and Heinz Bohlen have experimented with the 833 cents scale, a musical scale based on using the golden ratio as its fundamental musical interval. When measured in cents, a logarithmic scale for musical intervals, the golden ratio is approximately 833.09 cents.
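The conversion is a one-line computation; the sketch below (illustrative only) evaluates the golden ratio as an interval in cents:

```python
# Sketch: the golden ratio expressed as a musical interval in cents.
from math import log2, sqrt

phi = (1 + sqrt(5)) / 2
print(1200 * log2(phi))      # ~833.090 cents
```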
Nature.
Johannes Kepler wrote that "the image of man and woman stems from the divine proportion. In my opinion, the propagation of plants and the progenitive acts of animals are in the same ratio".
The psychologist Adolf Zeising noted that the golden ratio appeared in phyllotaxis and argued from these patterns in nature that the golden ratio was a universal law. Zeising wrote in 1854 of a universal orthogenetic law of "striving for beauty and completeness in the realms of both nature and art".
However, some have argued that many apparent manifestations of the golden ratio in nature, especially in regard to animal dimensions, are fictitious.
Physics.
The quasi-one-dimensional Ising ferromagnet <chem display="inline">CoNb2O6</chem> (cobalt niobate) has 8 predicted excitation states (with E8 symmetry); when probed with neutron scattering, the lowest two were found to be in the golden ratio. Specifically, these quantum phase transitions during spin excitation, which occur near absolute zero temperature, showed pairs of kinks in the ordered phase transitioning to spin-flips in the paramagnetic phase, revealing, just below the critical field, spin dynamics with sharp modes at low energies approaching the golden mean.
Optimization.
There is no known general algorithm to arrange a given number of nodes evenly on a sphere, for any of several definitions of even distribution (see, for example, "Thomson problem" or "Tammes problem"). However, a useful approximation results from dividing the sphere into parallel bands of equal surface area and placing one node in each band at longitudes spaced by a golden section of the circle, i.e. formula_224 This method was used to arrange the 1500 mirrors of the student-participatory satellite Starshine-3.
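A compact sketch of this construction (an assumed implementation for illustration; the function name and node count are not prescribed by the text) places each node at the midpoint of an equal-area band, advancing the longitude by the golden angle:

```python
# Sketch: approximately even node placement on a sphere via golden-angle spacing.
from math import pi, sqrt, sin, cos

def golden_spiral_points(n):
    phi = (1 + sqrt(5)) / 2
    golden_angle = 2 * pi * (1 - 1 / phi)        # ~137.5 degrees, in radians
    pts = []
    for k in range(n):
        z = 1 - (2 * k + 1) / n                  # equal-area band midpoints
        r = sqrt(1 - z * z)
        theta = k * golden_angle
        pts.append((r * cos(theta), r * sin(theta), z))
    return pts

print(golden_spiral_points(1500)[:3])            # e.g. 1500 nodes, as on Starshine-3
```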
The golden ratio is a critical element of golden-section search as well.
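A short sketch of golden-section search (the example function and interval are assumptions for illustration) shows how each step shrinks the bracketing interval by a factor of 1/φ:

```python
# Sketch: golden-section search for the minimum of a unimodal function.
from math import sqrt

def golden_section_search(f, a, b, tol=1e-8):
    invphi = (sqrt(5) - 1) / 2          # 1/phi ~ 0.618
    c = b - (b - a) * invphi
    d = a + (b - a) * invphi
    while abs(b - a) > tol:
        if f(c) < f(d):                 # minimum lies in [a, d]
            b, d = d, c
            c = b - (b - a) * invphi
        else:                           # minimum lies in [c, b]
            a, c = c, d
            d = a + (b - a) * invphi
    return (a + b) / 2

print(golden_section_search(lambda x: (x - 2) ** 2, 0, 5))   # ~2.0
```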
Disputed observations.
Examples of disputed observations of the golden ratio include the following:
Egyptian pyramids.
The Great Pyramid of Giza (also known as the Pyramid of Cheops or Khufu) has been analyzed by pyramidologists as having a doubled Kepler triangle as its cross-section. If this theory were true, the golden ratio would describe the ratio of distances from the midpoint of one of the sides of the pyramid to its apex, and from the same midpoint to the center of the pyramid's base. However, imprecision in measurement caused in part by the removal of the outer surface of the pyramid makes it impossible to distinguish this theory from other numerical theories of the proportions of the pyramid, based on pi or on whole-number ratios. The consensus of modern scholars is that this pyramid's proportions are not based on the golden ratio, because such a basis would be inconsistent both with what is known about Egyptian mathematics from the time of construction of the pyramid, and with Egyptian theories of architecture and proportion used in their other works.
The Parthenon.
The Parthenon's façade (c. 432 BC) as well as elements of its façade and elsewhere are said by some to be circumscribed by golden rectangles. Other scholars deny that the Greeks had any aesthetic association with golden ratio. For example, Keith Devlin says, "Certainly, the oft repeated assertion that the Parthenon in Athens is based on the golden ratio is not supported by actual measurements. In fact, the entire story about the Greeks and golden ratio seems to be without foundation." Midhat J. Gazalé affirms that "It was not until Euclid ... that the golden ratio's mathematical properties were studied."
From measurements of 15 temples, 18 monumental tombs, 8 sarcophagi, and 58 grave stelae from the fifth century BC to the second century AD, one researcher concluded that the golden ratio was totally absent from Greek architecture of the classical fifth century BC, and almost absent during the following six centuries.
Later sources like Vitruvius (first century BC) exclusively discuss proportions that can be expressed in whole numbers, i.e. commensurate as opposed to irrational proportions.
Modern art.
The Section d'Or ('Golden Section') was a collective of painters, sculptors, poets and critics associated with Cubism and Orphism. Active from 1911 to around 1914, they adopted the name both to highlight that Cubism represented the continuation of a grand tradition, rather than being an isolated movement, and in homage to the mathematical harmony associated with Georges Seurat. (Several authors have claimed that Seurat employed the golden ratio in his paintings, but Seurat's writings and paintings suggest that he employed simple whole-number ratios and any approximation of the golden ratio was coincidental.) The Cubists observed in its harmonies, geometric structuring of motion and form, "the primacy of idea over nature", "an absolute scientific clarity of conception". However, despite this general interest in mathematical harmony, whether the paintings featured in the celebrated 1912 "Salon de la Section d'Or" exhibition used the golden ratio in any compositions is more difficult to determine. Livio, for example, claims that they did not, and Marcel Duchamp said as much in an interview. On the other hand, an analysis suggests that Juan Gris made use of the golden ratio in composing works that were likely, but not definitively, shown at the exhibition. Art historian Daniel Robbins has argued that in addition to referencing the mathematical term, the exhibition's name also refers to the earlier "Bandeaux d'Or" group, with which Albert Gleizes and other former members of the Abbaye de Créteil had been involved.
Piet Mondrian has been said to have used the golden section extensively in his geometrical paintings, though other experts (including critic Yve-Alain Bois) have discredited these claims.
References.
Explanatory footnotes.
<templatestyles src="Reflist/styles.css" />
Citations.
<templatestyles src="Reflist/styles.css" />
Works cited.
<templatestyles src="Refbegin/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "a"
},
{
"math_id": 1,
"text": "b"
},
{
"math_id": 2,
"text": "a > b > 0"
},
{
"math_id": 3,
"text": " \\frac{a+b}{a} = \\frac{a}{b} = \\varphi"
},
{
"math_id": 4,
"text": "\\varphi"
},
{
"math_id": 5,
"text": "\\phi"
},
{
"math_id": 6,
"text": "\\varphi^2 = \\varphi + 1"
},
{
"math_id": 7,
"text": "\\varphi = \\frac{1+\\sqrt5}{2} = "
},
{
"math_id": 8,
"text": " \\frac{a+b}{a} = \\frac{a}{b} = \\varphi."
},
{
"math_id": 9,
"text": "b/a = 1/\\varphi"
},
{
"math_id": 10,
"text": "\\frac{a+b}{a} = \\frac{a}{a}+\\frac{b}{a} = 1 + \\frac{b}{a} = 1 + \\frac{1}{\\varphi}."
},
{
"math_id": 11,
"text": " 1 + \\frac{1}{\\varphi} = \\varphi. "
},
{
"math_id": 12,
"text": "\\varphi + 1 = \\varphi^2"
},
{
"math_id": 13,
"text": "{\\varphi}^2 - \\varphi - 1 = 0."
},
{
"math_id": 14,
"text": "\\frac{1 + \\sqrt5}{2} = 1.618033\\dots"
},
{
"math_id": 15,
"text": "\\frac{1 - \\sqrt5}{2} = -0.618033\\dots."
},
{
"math_id": 16,
"text": "-\\frac{1}{\\varphi}"
},
{
"math_id": 17,
"text": "0.6180340"
},
{
"math_id": 18,
"text": "\\tau"
},
{
"math_id": 19,
"text": "n"
},
{
"math_id": 20,
"text": "m,"
},
{
"math_id": 21,
"text": "m"
},
{
"math_id": 22,
"text": "n-m."
},
{
"math_id": 23,
"text": "n/m"
},
{
"math_id": 24,
"text": "m/(n-m)"
},
{
"math_id": 25,
"text": "\\varphi = \\tfrac12(1 + \\sqrt5)"
},
{
"math_id": 26,
"text": "2\\varphi - 1 = \\sqrt5"
},
{
"math_id": 27,
"text": "x^2 - x - 1."
},
{
"math_id": 28,
"text": "-\\varphi^{-1}."
},
{
"math_id": 29,
"text": "x^2 + x - 1,"
},
{
"math_id": 30,
"text": "-\\varphi"
},
{
"math_id": 31,
"text": "\\varphi^{-1}."
},
{
"math_id": 32,
"text": "x^2-x-1"
},
{
"math_id": 33,
"text": "-\\frac{1}{\\varphi}=1-\\varphi = \\frac{1 - \\sqrt5}{2} = -0.618033\\dots."
},
{
"math_id": 34,
"text": "0.618\\ldots"
},
{
"math_id": 35,
"text": "b/a"
},
{
"math_id": 36,
"text": "\\frac1\\varphi = \\varphi - 1,"
},
{
"math_id": 37,
"text": "\\frac1{1/\\varphi} = \\frac1\\varphi + 1."
},
{
"math_id": 38,
"text": "\\begin{align}\n\\varphi^2 &= \\varphi + 1 = 2.618033\\dots, \\\\[5mu]\n\\frac1\\varphi &= \\varphi - 1 = 0.618033\\dots.\n\\end{align}"
},
{
"math_id": 39,
"text": "0.618033\\ldots,"
},
{
"math_id": 40,
"text": "1.0,"
},
{
"math_id": 41,
"text": "1.618033\\ldots,"
},
{
"math_id": 42,
"text": "2.618033\\ldots;"
},
{
"math_id": 43,
"text": "\\varphi^n = \\varphi^{n-1} + \\varphi^{n-2} = \\varphi \\cdot \\operatorname{F}_n + \\operatorname{F}_{n-1}."
},
{
"math_id": 44,
"text": " \\lfloor n/2 - 1 \\rfloor = m,"
},
{
"math_id": 45,
"text": "\\begin{align}\n\\varphi^n &= \\varphi^{n-1} + \\varphi^{n-3} + \\cdots + \\varphi^{n-1-2m} + \\varphi^{n-2-2m} \\\\[5mu]\n\\varphi^n - \\varphi^{n-1} &= \\varphi^{n-2}.\n\\end{align}"
},
{
"math_id": 46,
"text": "\\varphi = 1 + 1/\\varphi"
},
{
"math_id": 47,
"text": "\\varphi = [1; 1, 1, 1, \\dots] = 1 + \\cfrac{1}{1 + \\cfrac{1}{1 + \\cfrac{1}{1 + \\ddots}}}"
},
{
"math_id": 48,
"text": "\\varphi^{-1} = [0; 1, 1, 1, \\dots] = 0 + \\cfrac{1}{1 + \\cfrac{1}{1 + \\cfrac{1}{1 + \\ddots}}}"
},
{
"math_id": 49,
"text": "1/1,"
},
{
"math_id": 50,
"text": "2/1,"
},
{
"math_id": 51,
"text": "3/2,"
},
{
"math_id": 52,
"text": "5/3,"
},
{
"math_id": 53,
"text": "8/5,"
},
{
"math_id": 54,
"text": "13/8,"
},
{
"math_id": 55,
"text": "1/2,"
},
{
"math_id": 56,
"text": "2/3,"
},
{
"math_id": 57,
"text": "3/5,"
},
{
"math_id": 58,
"text": "5/8,"
},
{
"math_id": 59,
"text": "8/13,"
},
{
"math_id": 60,
"text": "\\xi"
},
{
"math_id": 61,
"text": "p/q"
},
{
"math_id": 62,
"text": "\\left|\\xi-\\frac{p}{q}\\right|<\\frac{1}{\\sqrt{5}q^2}."
},
{
"math_id": 63,
"text": "\\sqrt{5}"
},
{
"math_id": 64,
"text": "\\varphi^2 = 1 + \\varphi"
},
{
"math_id": 65,
"text": "\\varphi = \\sqrt{1 + \\sqrt{1 + \\sqrt{1 + \\cdots}}}."
},
{
"math_id": 66,
"text": "0,1"
},
{
"math_id": 67,
"text": "0,"
},
{
"math_id": 68,
"text": "1,"
},
{
"math_id": 69,
"text": "2,"
},
{
"math_id": 70,
"text": "3,"
},
{
"math_id": 71,
"text": "5,"
},
{
"math_id": 72,
"text": "8,"
},
{
"math_id": 73,
"text": "13,"
},
{
"math_id": 74,
"text": "21,"
},
{
"math_id": 75,
"text": "34,"
},
{
"math_id": 76,
"text": "55,"
},
{
"math_id": 77,
"text": "89,"
},
{
"math_id": 78,
"text": "\\ldots"
},
{
"math_id": 79,
"text": "2,1"
},
{
"math_id": 80,
"text": "4,"
},
{
"math_id": 81,
"text": "7,"
},
{
"math_id": 82,
"text": "11,"
},
{
"math_id": 83,
"text": "18,"
},
{
"math_id": 84,
"text": "29,"
},
{
"math_id": 85,
"text": "47,"
},
{
"math_id": 86,
"text": "76,"
},
{
"math_id": 87,
"text": "123,"
},
{
"math_id": 88,
"text": "199,"
},
{
"math_id": 89,
"text": "\\lim_{n\\to\\infty}\\frac{F_{n+1}}{F_n}=\\lim_{n\\to\\infty}\\frac{L_{n+1}}{L_n}=\\varphi."
},
{
"math_id": 90,
"text": "\\frac{F_{16}}{F_{15}} = \\frac{987}{610} = 1.6180327\\ldots, "
},
{
"math_id": 91,
"text": "\\frac{L_{16}}{L_{15}} = \\frac{2207}{1364} = 1.6180351\\ldots."
},
{
"math_id": 92,
"text": "\\varphi,"
},
{
"math_id": 93,
"text": "F\\left(n\\right)\n = {{\\varphi^n-(1-\\varphi)^n} \\over {\\sqrt 5}}\n = {{\\varphi^n-(-\\varphi)^{-n}} \\over {\\sqrt 5}},"
},
{
"math_id": 94,
"text": "L\\left(n\\right)\n= \\varphi^n + (- \\varphi)^{-n}= \\left({ 1+ \\sqrt{5} \\over 2}\\right)^n + \\left({ 1- \\sqrt{5} \\over 2}\\right)^n\\, ."
},
{
"math_id": 95,
"text": "\\varphi^n"
},
{
"math_id": 96,
"text": "\\varphi^n = {{L_n + F_n \\sqrt{5}} \\over 2}\\, ."
},
{
"math_id": 97,
"text": "L_{2n} = 5 F_n^2 + 2(-1)^n = L_n^2 - 2(-1)^n,"
},
{
"math_id": 98,
"text": "\\lim_{n\\to\\infty} \\frac{L_n}{F_n}=\\sqrt{5}."
},
{
"math_id": 99,
"text": " \\vert L_n - \\sqrt{5} F_n \\vert = \\frac{2}{\\varphi^n} \\to 0, "
},
{
"math_id": 100,
"text": " (L_{3n}/2)^2 = 5 (F_{3n}/2)^2 + (-1)^n. "
},
{
"math_id": 101,
"text": "\\mathbb{Q}(\\sqrt5)"
},
{
"math_id": 102,
"text": "\\varphi^{n+1} = \\varphi^n + \\varphi^{n-1}."
},
{
"math_id": 103,
"text": "\\varphi^n = F_n \\varphi + F_{n-1}."
},
{
"math_id": 104,
"text": "\\begin{align}\n3\\varphi^3 - 5\\varphi^2 + 4\n&= 3(\\varphi^2 + \\varphi) - 5\\varphi^2 + 4 \\\\[5mu]\n&= 3[(\\varphi + 1) + \\varphi] - 5(\\varphi + 1) + 4 \\\\[5mu]\n&= \\varphi + 2 \\approx 3.618033.\n\\end{align}"
},
{
"math_id": 105,
"text": "\\sum_{n=1}^{\\infty}|F_n\\varphi-F_{n+1}| = \\varphi."
},
{
"math_id": 106,
"text": "\\varphi^0"
},
{
"math_id": 107,
"text": "\\begin{align}\n\\varphi^0 &= 1, \\\\[5mu]\n\\varphi^1 &= 1.618033989\\ldots \\approx 2, \\\\[5mu]\n\\varphi^2 &= 2.618033989\\ldots \\approx 3, \\\\[5mu]\n\\varphi^3 &= 4.236067978\\ldots \\approx 4, \\\\[5mu]\n\\varphi^4 &= 6.854101967\\ldots \\approx 7,\n\\end{align}"
},
{
"math_id": 108,
"text": "n \\ge 2"
},
{
"math_id": 109,
"text": " \\varphi^n = L_n - (- \\varphi)^{-n}."
},
{
"math_id": 110,
"text": "L_n = F_{n-1}+F_{n+1}"
},
{
"math_id": 111,
"text": " L_n = \\frac{F_{2n}}{F_n}"
},
{
"math_id": 112,
"text": "AB,"
},
{
"math_id": 113,
"text": "BC"
},
{
"math_id": 114,
"text": "B,"
},
{
"math_id": 115,
"text": "AB."
},
{
"math_id": 116,
"text": "AC."
},
{
"math_id": 117,
"text": "C"
},
{
"math_id": 118,
"text": "BC."
},
{
"math_id": 119,
"text": "AC"
},
{
"math_id": 120,
"text": "D."
},
{
"math_id": 121,
"text": "A"
},
{
"math_id": 122,
"text": "AD."
},
{
"math_id": 123,
"text": "AB"
},
{
"math_id": 124,
"text": "S."
},
{
"math_id": 125,
"text": "S"
},
{
"math_id": 126,
"text": "AS"
},
{
"math_id": 127,
"text": "SB"
},
{
"math_id": 128,
"text": "SC"
},
{
"math_id": 129,
"text": "AS."
},
{
"math_id": 130,
"text": "M."
},
{
"math_id": 131,
"text": "M"
},
{
"math_id": 132,
"text": "MC"
},
{
"math_id": 133,
"text": "B"
},
{
"math_id": 134,
"text": "g\\colon"
},
{
"math_id": 135,
"text": "\\begin{align}\n\\frac{2\\pi - g}{g} &= \\frac{2\\pi}{2\\pi - g} = \\varphi, \\\\[8mu]\n2\\pi - g &= \\frac{2\\pi}{\\varphi} \\approx 222.5^\\circ, \\\\[8mu]\ng &= \\frac{2\\pi}{\\varphi^2} \\approx 137.5^\\circ.\n\\end{align}"
},
{
"math_id": 136,
"text": "a,"
},
{
"math_id": 137,
"text": "b,"
},
{
"math_id": 138,
"text": "a^2 = b^2 + ab."
},
{
"math_id": 139,
"text": "ab"
},
{
"math_id": 140,
"text": "\\frac ab = \\frac{a + b}{a} = \\varphi."
},
{
"math_id": 141,
"text": "\\begin{align}\n\\varphi &= 1+2\\sin(\\pi/10) = 1 + 2\\sin 18^\\circ, \\\\[5mu]\n\\varphi &= \\tfrac12\\csc(\\pi/10) = \\tfrac12\\csc 18^\\circ, \\\\[5mu]\n\\varphi &= 2\\cos(\\pi/5)=2\\cos 36^\\circ, \\\\[5mu]\n\\varphi &= 2\\sin(3\\pi/10)=2\\sin 54^\\circ.\n\\end{align}"
},
{
"math_id": 142,
"text": "1:\\varphi"
},
{
"math_id": 143,
"text": "1\\mathbin{:}\\sqrt\\varphi\\mathbin{:}\\varphi"
},
{
"math_id": 144,
"text": "\\varphi \\pm 1"
},
{
"math_id": 145,
"text": "1\\mathbin{:}\\varphi\\mathbin{:}\\varphi^2"
},
{
"math_id": 146,
"text": "s"
},
{
"math_id": 147,
"text": "\\begin{align}\nA &= \\tfrac{s^2}{2}\\sqrt\\varphi, \\\\[5mu]\n\\theta &= \\sin^{-1}\\frac{1}{\\varphi}\\approx 38.1727^\\circ, \\\\[5mu]\n\\theta &= \\cos^{-1}\\frac{1}{\\varphi}\\approx 51.8273^\\circ.\n\\end{align}"
},
{
"math_id": 148,
"text": "\\begin{align}\n\\alpha &= 2\\arctan{1\\over\\varphi}\\approx63.43495^\\circ, \\\\[5mu]\n\\beta &= 2\\arctan\\varphi=\\pi-\\arctan2 = \\arctan1+\\arctan3 \\approx116.56505^\\circ.\n\\end{align}"
},
{
"math_id": 149,
"text": "d"
},
{
"math_id": 150,
"text": "D"
},
{
"math_id": 151,
"text": "\\begin{align}\nd &= {2a\\over\\sqrt{2+\\varphi}}=2\\sqrt{{3-\\varphi}\\over5}a\\approx1.05146a, \\\\[5mu]\nD &= 2\\sqrt{{2+\\varphi}\\over5}a\\approx1.70130a.\n\\end{align}"
},
{
"math_id": 152,
"text": "\\begin{align}\nA &= (\\sin(\\arctan2))~a^2 = {2\\over\\sqrt5}~a^2 \\approx 0.89443a^2, \\\\[5mu]\nA &= {{\\varphi}\\over2}d^2\\approx 0.80902d^2.\n\\end{align}"
},
{
"math_id": 153,
"text": "r=\\frac{a}{\\sqrt{5}}."
},
{
"math_id": 154,
"text": "(r,\\theta)"
},
{
"math_id": 155,
"text": "r = \\varphi^{2\\theta/\\pi}."
},
{
"math_id": 156,
"text": "12"
},
{
"math_id": 157,
"text": "20"
},
{
"math_id": 158,
"text": "30"
},
{
"math_id": 159,
"text": "r_u, "
},
{
"math_id": 160,
"text": "r_i,"
},
{
"math_id": 161,
"text": "r_m,"
},
{
"math_id": 162,
"text": "r_u = a\\, \\frac{\\sqrt{3}\\varphi}{2},"
},
{
"math_id": 163,
"text": "r_i = a\\, \\frac{\\varphi^2}{2 \\sqrt{3-\\varphi}},"
},
{
"math_id": 164,
"text": "r_m = a\\, \\frac{\\varphi^2}{2}."
},
{
"math_id": 165,
"text": "r_u = a\\frac{\\sqrt{\\varphi \\sqrt{5}}}{2},"
},
{
"math_id": 166,
"text": "r_i = a\\frac{\\varphi^2}{2 \\sqrt{3}},"
},
{
"math_id": 167,
"text": "r_m = a\\frac{\\varphi}{2}."
},
{
"math_id": 168,
"text": "A_d = \\frac{15\\varphi}{\\sqrt{3-\\varphi}}"
},
{
"math_id": 169,
"text": "V_d = \\frac{5\\varphi^3}{6-2\\varphi}"
},
{
"math_id": 170,
"text": "A_i = 20\\frac{\\varphi^{2}}{2}"
},
{
"math_id": 171,
"text": "V_i = \\frac{5}{6}(1 + \\varphi)."
},
{
"math_id": 172,
"text": "(0,\\pm1,\\pm\\varphi)"
},
{
"math_id": 173,
"text": "(\\pm1,\\pm\\varphi,0)"
},
{
"math_id": 174,
"text": "(\\pm\\varphi,0,\\pm1)."
},
{
"math_id": 175,
"text": "\\tfrac{2}{2+\\varphi}"
},
{
"math_id": 176,
"text": "\\varphi : \\varphi^{2}"
},
{
"math_id": 177,
"text": "x^2-x-1=0"
},
{
"math_id": 178,
"text": "x^2-5=0"
},
{
"math_id": 179,
"text": "O(M(n))"
},
{
"math_id": 180,
"text": "M(n)"
},
{
"math_id": 181,
"text": "\\pi"
},
{
"math_id": 182,
"text": "e"
},
{
"math_id": 183,
"text": "F_{25001}"
},
{
"math_id": 184,
"text": "F_{25000},"
},
{
"math_id": 185,
"text": "5000"
},
{
"math_id": 186,
"text": "10{,}000"
},
{
"math_id": 187,
"text": "1 \\times 10^{13} = 10{,}000{,}000{,}000{,}000"
},
{
"math_id": 188,
"text": "z = e^{2\\pi k i/5}"
},
{
"math_id": 189,
"text": "k"
},
{
"math_id": 190,
"text": "z^5 = 1"
},
{
"math_id": 191,
"text": "z + \\bar z,"
},
{
"math_id": 192,
"text": "\\mathbb{Z}[\\varphi]."
},
{
"math_id": 193,
"text": "\\begin{align}\ne^{0} + e^{-0} &= 2, \\\\[5mu]\ne^{2\\pi i / 5} + e^{-2\\pi i / 5} &= \\varphi^{-1} = -1 + \\varphi, \\\\[5mu]\ne^{4\\pi i / 5} + e^{-4\\pi i / 5} &= -\\varphi.\n\\end{align}"
},
{
"math_id": 194,
"text": "z^{10} = 1,"
},
{
"math_id": 195,
"text": "\\begin{align}\ne^{\\pi i} + e^{-\\pi i} &= -2, \\\\[5mu]\ne^{\\pi i / 5} + e^{-\\pi i / 5} &= \\varphi, \\\\[5mu]\ne^{3\\pi i / 5} + e^{-3\\pi i / 5} &= -\\varphi^{-1} = 1 - \\varphi.\n\\end{align}"
},
{
"math_id": 196,
"text": "\\Gamma"
},
{
"math_id": 197,
"text": "\\Gamma(z-1) = \\Gamma(z+1)"
},
{
"math_id": 198,
"text": "z = \\varphi"
},
{
"math_id": 199,
"text": "z = -\\varphi^{-1}"
},
{
"math_id": 200,
"text": "\\mathbb{Z}[\\varphi]"
},
{
"math_id": 201,
"text": "a + b\\varphi"
},
{
"math_id": 202,
"text": "a, b \\in \\mathbb{Z}"
},
{
"math_id": 203,
"text": "4\\log(\\varphi)."
},
{
"math_id": 204,
"text": "\\left|q\\right|<1"
},
{
"math_id": 205,
"text": "R(q)=\\cfrac{q^{1/5}}{1+\\cfrac{q}{1+\\cfrac{q^2}{1+\\cfrac{q^3}{1+\\ddots}}}}."
},
{
"math_id": 206,
"text": "R(e^{-2\\pi})=\\sqrt{\\varphi\\sqrt5}-\\varphi ,\\quad R(-e^{-\\pi})=\\varphi^{-1}-\\sqrt{2-\\varphi^{-1}}"
},
{
"math_id": 207,
"text": "R(e^{-2\\pi i/\\tau})=\\frac{1-\\varphi R(e^{2\\pi i\\tau})}{\\varphi+R(e^{2\\pi i\\tau})}"
},
{
"math_id": 208,
"text": "\\operatorname{Im}\\tau>0"
},
{
"math_id": 209,
"text": "(e^z)^{1/5}"
},
{
"math_id": 210,
"text": "e^{z/5}"
},
{
"math_id": 211,
"text": "\\tau\\mapsto R(e^{2\\pi i\\tau})"
},
{
"math_id": 212,
"text": "\\Gamma (5)"
},
{
"math_id": 213,
"text": "a, b \\in \\mathbb{R}^+"
},
{
"math_id": 214,
"text": "ab = \\pi^2,"
},
{
"math_id": 215,
"text": "\\begin{align}\n\\Bigl(\\varphi+R{\\bigl(e^{-2a}\\bigr)}\\Bigr)\\Bigl(\\varphi+R{\\bigl(e^{-2b}\\bigr)}\\Bigr)&=\\varphi\\sqrt5, \\\\[5mu]\n\\Bigl(\\varphi^{-1}-R{\\bigl({-e^{-a}}\\bigr)}\\Bigr)\\Bigl(\\varphi^{-1}-R{\\bigl({-e^{-b}}\\bigr)}\\Bigr)&=\\varphi^{-1}\\sqrt5.\n\\end{align}"
},
{
"math_id": 216,
"text": "1.34,"
},
{
"math_id": 217,
"text": "1.04"
},
{
"math_id": 218,
"text": "1.46"
},
{
"math_id": 219,
"text": "\\sqrt5"
},
{
"math_id": 220,
"text": "\\sqrt2,"
},
{
"math_id": 221,
"text": "6."
},
{
"math_id": 222,
"text": "2\\mathbin{:}3,"
},
{
"math_id": 223,
"text": "1\\mathbin{:}\\sqrt3,"
},
{
"math_id": 224,
"text": "360^\\circ/\\varphi \\approx 222.5^\\circ."
},
{
"math_id": 225,
"text": "1.45."
}
] | https://en.wikipedia.org/wiki?curid=12386 |
12387353 | Talbot cavity | A Talbot cavity is an external cavity used for the coherent beam combination of output from laser sets.
It has been used experimentally for semiconductor laser diodes, carbon dioxide lasers, fiber lasers and solid-state disk lasers arranged in an array. In the simplest version, it is constructed with a single mirror at half the Talbot distance from the output facet of the laser array:
formula_0
where formula_1 is the period of the laser lattice and formula_2 is the wavelength of laser emission. The constructive interference images the near field of the array back onto the array itself at the Talbot distance, creating optical feedback. This interference feedback forces the lasers in the array to transverse mode lock. The Fresnel number formula_3 of the formula_4 element laser array phase-locked by a Talbot cavity is given by:
formula_5
Talbot beam combination is highly sensitive to transverse phase distortions even at formula_6 scale.
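As a rough numerical illustration (all values below are assumptions chosen for the example, not figures from the text):

```python
# Sketch: Talbot distance z_T = 2*p**2/lam and Fresnel number F = (N - 1)**2
# for an assumed diode-laser array.
p = 100e-6          # lattice period: 100 micrometres (assumed)
lam = 1.0e-6        # emission wavelength: 1 micrometre (assumed)
N = 10              # number of emitters (assumed)

z_T = 2 * p ** 2 / lam
print(z_T, z_T / 2, (N - 1) ** 2)   # Talbot distance (~2 cm), mirror position, F = 81
```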
Theory developed for Talbot cavities facilitated the development of thin disk diode-pumped solid-state laser arrays.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "z_{_\\text{T}}=\\frac{2p^2}{\\lambda},"
},
{
"math_id": 1,
"text": "p"
},
{
"math_id": 2,
"text": "\\lambda"
},
{
"math_id": 3,
"text": "F"
},
{
"math_id": 4,
"text": "N-"
},
{
"math_id": 5,
"text": "F=(N-1)^2."
},
{
"math_id": 6,
"text": "\\lambda / 10"
}
] | https://en.wikipedia.org/wiki?curid=12387353 |
12388613 | Streaming current | Electrokinetic phenomena
A streaming current and streaming potential are two interrelated electrokinetic phenomena studied in the areas of surface chemistry and electrochemistry. They are an electric current or potential which originates when an electrolyte is driven by a pressure gradient through a channel or porous plug with charged walls.
The first observation of the streaming potential is generally attributed to the German physicist Georg Hermann Quincke in 1859.
Applications.
Streaming currents in well-defined geometries are a sensitive method to characterize the zeta potential of surfaces, which is important in the fields of colloid and interface science. In geology, measurements of related spontaneous potential are used for evaluations of formations. Streaming potential has to be considered in design for flow of poorly conductive fluids (e.g., gasoline lines) because of the danger of buildup of high voltages. The streaming current monitor (SCM) is a fundamental tool for monitoring coagulation in wastewater treatment plants. The degree of coagulation of raw water may be monitored by the use of an SCM to provide a positive feedback control of coagulant injection. As the streaming current of the wastewater increases, more coagulant agent is injected into the stream. The higher levels of coagulant agent cause the small colloidal particles to coagulate and sediment out of the stream. Since less colloid particles are in the wastewater stream, the streaming potential decreases. The SCM recognizes this and subsequently reduces the amount of coagulant agent injected into the wastewater stream. The implementation of SCM feedback control has led to a significant materials cost reduction, one that was not realized until the early 1980s. In addition to monitoring capabilities, the streaming current could, in theory, generate usable electrical power. This process, however, has yet to be applied as typical streaming potential mechanical to electrical efficiencies are around 1%.
Origin.
Adjacent to the channel walls, the charge-neutrality of the liquid is violated due to the presence of the electrical double layer: a thin layer of counterions attracted by the charged surface.
The transport of counterions along with the pressure-driven fluid flow gives rise to a net charge transport: the streaming current. The reverse effect, generating a fluid flow by applying a potential difference, is called electroosmotic flow.
Measurement method.
A typical setup to measure streaming currents consists of two reversible electrodes placed on either side of a fluidic geometry across which a known pressure difference is applied. When both electrodes are held at the same potential, the streaming current is measured directly as the electric current flowing through the electrodes. Alternatively, the electrodes can be left floating, allowing a streaming potential to build up between the two ends of the channel.
A streaming potential is defined as positive when the electric potential is higher on the high pressure end of the flow system than on the low pressure end.
The value of streaming current observed in a capillary is usually related to the zeta potential through the relation:
formula_0.
The conduction current, which is equal in magnitude to the streaming current at steady state, is:
formula_1
At steady state, the streaming potential built up across the flow system is given by:
formula_2
Symbols: "I"str – streaming current; "I"c – conduction current; "U"str – streaming potential; ε"rs" – relative permittivity of the liquid; ε0 – electric permittivity of vacuum; "a" – capillary radius; "L" – capillary length; Δ"P" – pressure difference; η – dynamic viscosity of the liquid; ζ – zeta potential; "K"L – specific conductivity of the bulk liquid.
The equation above is usually referred to as the Helmholtz–Smoluchowski equation.
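As a rough numerical illustration of the Helmholtz–Smoluchowski relation (all parameter values below are assumptions chosen for the example, not values from the text):

```python
# Sketch: streaming potential U_str = eps_rs*eps_0*zeta*dP / (eta*K_L)
# for assumed, water-like parameter values.
eps_0 = 8.854e-12    # vacuum permittivity, F/m
eps_rs = 78.5        # relative permittivity of the liquid (assumed)
zeta = -0.05         # zeta potential, V (assumed)
eta = 1.0e-3         # dynamic viscosity, Pa*s (assumed)
K_L = 0.01           # bulk conductivity, S/m (assumed)
dP = 1.0e5           # pressure difference, Pa (assumed)

print(eps_rs * eps_0 * zeta * dP / (eta * K_L))   # ~ -0.35 V
```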
The above equations assume that: | [
{
"math_id": 0,
"text": "I_{str}=-\\frac{\\epsilon_{rs}\\epsilon_0 a^2 \\pi}{\\eta} \\frac{\\Delta P}{L} \\zeta "
},
{
"math_id": 1,
"text": "I_c= K_L a^2 \\pi \\frac{U_{str}}{L}"
},
{
"math_id": 2,
"text": "U_{str}= \\frac{\\epsilon_{rs}\\epsilon_0 \\zeta}{\\eta K_L} \\Delta P"
},
{
"math_id": 3,
"text": "\\kappa a \\gg 1"
}
] | https://en.wikipedia.org/wiki?curid=12388613 |
12388866 | Von Neumann's theorem | In mathematics, von Neumann's theorem is a result in the operator theory of linear operators on Hilbert spaces.
Statement of the theorem.
Let formula_0 and formula_1 be Hilbert spaces, and let formula_2 be an unbounded operator from formula_0 into formula_3 Suppose that formula_4 is a closed operator and that formula_4 is densely defined, that is, formula_5 is dense in formula_6 Let formula_7 denote the adjoint of formula_8 Then formula_9 is also densely defined, and it is self-adjoint. That is,
formula_10
and the operators on the right- and left-hand sides have the same dense domain in formula_6
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "G"
},
{
"math_id": 1,
"text": "H"
},
{
"math_id": 2,
"text": "T : \\operatorname{dom}(T) \\subseteq G \\to H"
},
{
"math_id": 3,
"text": "H."
},
{
"math_id": 4,
"text": "T"
},
{
"math_id": 5,
"text": "\\operatorname{dom}(T)"
},
{
"math_id": 6,
"text": "G."
},
{
"math_id": 7,
"text": "T^* : \\operatorname{dom}\\left(T^*\\right) \\subseteq H \\to G"
},
{
"math_id": 8,
"text": "T."
},
{
"math_id": 9,
"text": "T^* T"
},
{
"math_id": 10,
"text": "\\left(T^* T\\right)^* = T^* T"
}
] | https://en.wikipedia.org/wiki?curid=12388866 |
1238920 | Permutation automaton | Finite-state machine in automata theory
In automata theory, a permutation automaton, or pure-group automaton, is a deterministic finite automaton such that each input symbol permutes the set of states.
Formally, a deterministic finite automaton A may be defined by the tuple ("Q", Σ, δ, "q""0", "F"),
where "Q" is the set of states of the automaton, Σ is the set of input symbols, δ is the transition function that takes a state "q" and an input symbol "x" to a new state δ("q","x"), "q""0" is the initial state of the automaton, and "F" is the set of accepting states (also: final states) of the automaton. A is a permutation automaton if and only if, for every two distinct states "qi" and "qj" in "Q" and every input symbol x in Σ, δ("qi","x") ≠ δ("qj","x").
A formal language is p-regular (also: a pure-group language) if it is accepted by a permutation automaton. For example, the set of strings of even length forms a p-regular language: it may be accepted by a permutation automaton with two states in which every transition replaces one state by the other.
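The defining condition is straightforward to test in code. The sketch below (an assumed representation of the transition function, not taken from the text) checks that every input symbol acts as a permutation of the state set, using the two-state even-length example:

```python
# Sketch: a DFA is a permutation automaton iff every symbol permutes the states.
def is_permutation_automaton(states, alphabet, delta):
    return all(
        sorted(delta[(q, x)] for q in states) == sorted(states)
        for x in alphabet
    )

states = [0, 1]                      # two-state automaton for even-length strings
alphabet = ['a']
delta = {(0, 'a'): 1, (1, 'a'): 0}   # the single symbol swaps the two states
print(is_permutation_automaton(states, alphabet, delta))   # True
```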
Applications.
The pure-group languages were the first interesting family of regular languages for which the star height problem was proved to be computable.
Another mathematical problem on regular languages is the "separating words problem", which asks for the size of a smallest deterministic finite automaton that distinguishes between two given words of length at most "n" – by accepting one word and rejecting the other. The known upper bound in the general case is formula_0. The problem was later studied for the restriction to permutation automata. In this case, the known upper bound changes to formula_1.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "O(n^{2/5}(\\log n)^{3/5})"
},
{
"math_id": 1,
"text": "O(n^{1/2})"
}
] | https://en.wikipedia.org/wiki?curid=1238920 |
1239237 | Matrix unit | Concept in mathematics
In linear algebra, a matrix unit is a matrix with only one nonzero entry with value 1. The matrix unit with a 1 in the "i"th row and "j"th column is denoted as formula_0. For example, the 3 by 3 matrix unit with "i" = 1 and "j" = 2 is
formula_1A vector unit is a standard unit vector.
A single-entry matrix generalizes the matrix unit for matrices with only one nonzero entry of any value, not necessarily of value 1.
Properties.
The set of "m" by "n" matrix units is a basis of the space of "m" by "n" matrices.
The product of two matrix units of the same square shape formula_2 satisfies the relation
formula_3
where formula_4 is the Kronecker delta.
The group of scalar "n"-by-"n" matrices over a ring "R" is the centralizer of the subset of "n"-by-"n" matrix units in the set of "n"-by-"n" matrices over "R".
The matrix norm (induced by the same two vector norms) of a matrix unit is equal to 1.
When multiplied by another matrix, a matrix unit isolates a specific row or column of that matrix and places it in an arbitrary position. For example, for any 3-by-3 matrix "A":
formula_5
formula_6
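The product rule and the row-isolation property above can be illustrated with a short NumPy sketch (the helper function E is introduced only for this example):

```python
import numpy as np

def E(i, j, n=3):
    """Matrix unit with a 1 in row i, column j (1-based indices)."""
    M = np.zeros((n, n))
    M[i - 1, j - 1] = 1.0
    return M

# Product rule E_ij E_kl = delta_jk E_il:
# E_12 E_23 = E_13 (since j = k = 2), while E_12 E_13 = 0 (since 2 != 1).
assert np.array_equal(E(1, 2) @ E(2, 3), E(1, 3))
assert np.array_equal(E(1, 2) @ E(1, 3), np.zeros((3, 3)))

# E_23 A places row 3 of A into row 2 and zeroes out everything else.
A = np.arange(1, 10).reshape(3, 3)
print(E(2, 3) @ A)
```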
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E_{ij}"
},
{
"math_id": 1,
"text": "E_{12} = \\begin{bmatrix}0 & 1 & 0 \\\\0 & 0 & 0 \\\\ 0 & 0 & 0 \\end{bmatrix}"
},
{
"math_id": 2,
"text": "n \\times n"
},
{
"math_id": 3,
"text": "E_{ij}E_{kl} = \\delta_{jk}E_{il},"
},
{
"math_id": 4,
"text": "\\delta_{jk}"
},
{
"math_id": 5,
"text": "\n E_{23}A = \\left[ \\begin{matrix} 0 & 0& 0 \\\\ a_{31} & a_{32} & a_{33} \\\\ 0 & 0 & 0 \\end{matrix}\\right].\n"
},
{
"math_id": 6,
"text": "\n AE_{23} = \\left[ \\begin{matrix} 0 & 0 & a_{12} \\\\ 0 & 0 & a_{22} \\\\ 0 & 0 & a_{32} \\end{matrix}\\right].\n"
}
] | https://en.wikipedia.org/wiki?curid=1239237 |
12394086 | Self-similar process | Self-similar processes are stochastic processes satisfying a mathematically precise version of the self-similarity property. Several related properties have this name, and some are defined here.
A self-similar phenomenon behaves the same when viewed at different degrees of magnification, or different scales on a dimension.
Because a stochastic process is a collection of random variables indexed by time and taking values in space, its self-similarity properties are defined in terms of how a scaling in time relates to a scaling in space.
Distributional self-similarity.
Definition.
A continuous-time stochastic process formula_1 is called "self-similar" with parameter formula_2 if for all formula_3, the processes formula_4 and formula_5 have the same law.
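As an illustration, standard Brownian motion is self-similar with parameter H = 1/2, so for a fixed t the random variables X_{at} and a^H X_t have the same (normal) distribution. The following Monte Carlo sketch, with arbitrarily chosen values of t and a, compares their sample variances:

```python
import numpy as np

# Illustrative check for standard Brownian motion (H = 1/2): for fixed t,
# Var(X_{at}) = a t should equal a^{2H} Var(X_t) = a t.
rng = np.random.default_rng(1)
t, a, H = 2.0, 4.0, 0.5
n = 200_000

X_t = rng.normal(0.0, np.sqrt(t), n)       # samples of X_t   ~ N(0, t)
X_at = rng.normal(0.0, np.sqrt(a * t), n)  # samples of X_{at} ~ N(0, a t)

print(np.var(X_at))                 # ~ 8.0
print(a ** (2 * H) * np.var(X_t))   # ~ 8.0
```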
Second-order self-similarity.
Definition.
A wide-sense stationary process formula_8 is called "exactly second-order self-similar" with parameter formula_2 if the following hold:
(i) formula_9, where for each formula_10, formula_11
(ii) for all formula_12, the autocorrelation functions formula_13 and formula_14 of formula_15 and formula_16 are equal.
If instead of (ii), the weaker condition
(iii) formula_17 pointwise as formula_18
holds, then formula_15 is called "asymptotically second-order self-similar".
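For intuition, white (i.i.d.) noise satisfies condition (i) with H = 1/2, since averaging over non-overlapping blocks of length m reduces the variance by a factor of m. A small NumPy sketch of this special case:

```python
import numpy as np

# With H = 1/2, condition (i) reduces to Var(X^(m)) = Var(X)/m, which
# white (i.i.d.) noise satisfies.
rng = np.random.default_rng(2)
m, H = 10, 0.5
X = rng.normal(0.0, 1.0, 1_000_000)

# Block averages over non-overlapping blocks of length m.
X_m = X.reshape(-1, m).mean(axis=1)

print(np.var(X_m))                      # ~ 0.1
print(np.var(X) * m ** (2 * (H - 1)))   # ~ 0.1
```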
Connection to long-range dependence.
In the case formula_19, asymptotic self-similarity is equivalent to long-range dependence.
In computer networks, self-similar and long-range dependent traffic characteristics present a fundamentally different set of problems for network analysis and design, and many of the previous assumptions upon which systems have been built are no longer valid in the presence of self-similarity.
Long-range dependence is closely connected to the theory of heavy-tailed distributions. A distribution is said to have a heavy tail if
formula_20
One example of a heavy-tailed distribution is the Pareto distribution. Examples of processes that can be described using heavy-tailed distributions include traffic processes, such as packet inter-arrival times and burst lengths.
References.
<templatestyles src="Reflist/styles.css" />
Sources | [
{
"math_id": 0,
"text": "H=1/2"
},
{
"math_id": 1,
"text": "(X_t)_{t\\ge0}"
},
{
"math_id": 2,
"text": "H>0"
},
{
"math_id": 3,
"text": "a>0"
},
{
"math_id": 4,
"text": "(X_{at})_{t\\ge0}"
},
{
"math_id": 5,
"text": "(a^HX_t)_{t\\ge0}"
},
{
"math_id": 6,
"text": "H\\in(0,1)"
},
{
"math_id": 7,
"text": "H\\in[1/2,\\infty)"
},
{
"math_id": 8,
"text": "(X_n)_{n\\ge0}"
},
{
"math_id": 9,
"text": "\\mathrm{Var}(X^{(m)})=\\mathrm{Var}(X)m^{2(H-1)}"
},
{
"math_id": 10,
"text": "k\\in\\mathbb N_0"
},
{
"math_id": 11,
"text": "X^{(m)}_k = \\frac 1 m \\sum_{i=1}^m X_{(k-1)m + i},"
},
{
"math_id": 12,
"text": "m\\in\\mathbb N^+"
},
{
"math_id": 13,
"text": "r"
},
{
"math_id": 14,
"text": "r^{(m)}"
},
{
"math_id": 15,
"text": "X"
},
{
"math_id": 16,
"text": "X^{(m)}"
},
{
"math_id": 17,
"text": "r^{(m)} \\to r"
},
{
"math_id": 18,
"text": "m\\to\\infty"
},
{
"math_id": 19,
"text": "1/2<H<1"
},
{
"math_id": 20,
"text": "\n\\lim_{x \\to \\infty} e^{\\lambda x}\\Pr[X>x] = \\infty \\quad \\mbox{for all } \\lambda>0.\\,\n"
}
] | https://en.wikipedia.org/wiki?curid=12394086 |
1239457 | ARROW waveguide | In optics, an ARROW (anti-resonant reflecting optical waveguide) is a type of waveguide that uses the principle of thin-film interference to guide light with low loss. It is formed from an anti-resonant Fabry–Pérot reflector. The optical mode is leaky, but relatively low-loss propagation can be achieved by making the Fabry–Pérot reflector of sufficiently high quality or small size.
Principles of Operation.
ARROW relies on the principle of thin-film interference. It is created by forming a Fabry-Perot cavity in the transverse direction, with cladding layers that function as Fabry-Perot etalons. A Fabry-Perot etalon is in resonance when the light in the layer constructively interferes with itself, resulting in high transmission. Anti-resonance occurs when the light in the layer destructively interferes with itself, resulting in no transmission through the etalon.
The refractive indexes of the guiding core (nc) and the cladding layers (nj, ni) are important and are carefully chosen. In order for anti-resonance to occur, nc needs to be smaller than nj. In a typical solid-core ARROW system, as shown in the figure, the waveguide consists of a low refractive index guiding core bounded on the upper surface by air and on the lower surface by higher refractive index antiresonant reflecting cladding layers. The confinement of light on the upper surface of the guiding core is provided by total internal reflection with air, while the confinement on the lower surface is provided by interference created by the antiresonant cladding layers.
The thickness of the antiresonant cladding layer (tj) of an ARROW also needs to be carefully chosen in order to achieve anti-resonance. It can be calculated by the following formula:
formula_0
formula_1 = thickness of the antiresonant cladding layer
formula_2= thickness of the guiding core layer
formula_3 = wavelength
formula_4 = refractive index of antiresonant cladding layer
formula_5 = refractive index of guiding core layer
while formula_6
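The thickness formula above can be evaluated directly. In the following sketch the material and geometry values (a silica-like core, a higher-index nitride-like cladding, and a helium-neon wavelength) are illustrative assumptions, not design recommendations:

```python
import math

def cladding_thickness(wavelength, n_j, n_c, t_c, N=1):
    """t_j = (lambda / 4 n_j)(2N - 1)[1 - n_c^2/n_j^2 + lambda^2/(4 n_j^2 t_c^2)]^(-1/2)"""
    bracket = 1 - (n_c / n_j) ** 2 + wavelength ** 2 / (4 * n_j ** 2 * t_c ** 2)
    return (wavelength / (4 * n_j)) * (2 * N - 1) / math.sqrt(bracket)

# Example (all lengths in micrometers): 632.8 nm light, 4-micron core with
# n_c = 1.46, cladding with n_j = 2.0, first antiresonance order N = 1.
t_j = cladding_thickness(wavelength=0.6328, n_j=2.0, n_c=1.46, t_c=4.0)
print(f"first-order antiresonant cladding thickness: {t_j:.3f} um")
```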
Considerations.
ARROWs can be realized as cylindrical waveguides (2D confinement) or slab waveguides (1D confinement). The latter ARROWs are practically formed by a low index layer, embedded between higher index layers. Note that the refractive indices of these ARROWs are reversed compared to usual waveguides. Light is confined by total internal reflection (TIR) on the inside of the higher index layers, but achieves a large modal overlap with the lower index central volume.
This strong overlap can be made plausible in a simplified picture imagining "rays", as in geometrical optics. Such rays are refracted into a very shallow angle, when entering the low index inner layer. Thus, one can use the metaphor that these rays "stay very long inside" the low index inner layer. Note this is just a metaphor and the explanatory power of ray optics is very limited for the micrometer scales, at which these ARROWs are typically made.
Applications.
ARROWs are often used for guiding light in liquids, particularly in photonic lab-on-a-chip analytical systems (PhLoCs). Conventional waveguides rely on the principle of total internal reflection, which can only occur if the refractive index of the guiding core material is greater than the refractive indexes of its surroundings. However, the materials used to make the guiding core are typically polymer and silicon-based materials, which have higher refractive indexes (n = 1.4-3.5) than that of water (n = 1.33). As a result, a conventional hollow-core waveguide no longer works once it is filled with an aqueous solution, making the PhLoCs useless. An ARROW, on the other hand, can be liquid-filled since it confines light completely by interference, which requires the refractive index of the guiding core to be lower than the refractive index of the surrounding materials. Thus, ARROWs become the ideal building blocks for PhLoCs.
Though ARROWs carry a big advantage over conventional waveguides for building PhLoCs, they are not perfect. The main problem of ARROWs is undesirable light loss. Light loss in ARROWs decreases the signal-to-noise ratio of the PhLoCs. Different versions of ARROWs have been designed and tested in order to overcome this problem.
{
"math_id": 0,
"text": "t_j={\\lambda \\over 4n_j}(2N-1)[1-{n_c^2 \\over n_j^2}+{\\lambda^2 \\over 4n_j^2t_c^2}]^{-1/2}, N = 1, 2, 3,..."
},
{
"math_id": 1,
"text": "t_j"
},
{
"math_id": 2,
"text": "t_c"
},
{
"math_id": 3,
"text": "\\lambda"
},
{
"math_id": 4,
"text": "n_j"
},
{
"math_id": 5,
"text": "n_c"
},
{
"math_id": 6,
"text": "n_j > n_c > n_{air}"
}
] | https://en.wikipedia.org/wiki?curid=1239457 |
1239472 | Eccentricity (mathematics) | Characteristic of conic sections
In mathematics, the eccentricity of a conic section is a non-negative real number that uniquely characterizes its shape.
One can think of the eccentricity as a measure of how much a conic section deviates from being circular. In particular:
Two conic sections with the same eccentricity are similar.
Definitions.
Any conic section can be defined as the locus of points whose distances to a point (the focus) and a line (the directrix) are in a constant ratio. That ratio is called the eccentricity, commonly denoted as e.
The eccentricity can also be defined in terms of the intersection of a plane and a double-napped cone associated with the conic section. If the cone is oriented with its axis vertical, the eccentricity is
formula_1
where β is the angle between the plane and the horizontal and α is the angle between the cone's slant generator and the horizontal. For formula_2 the plane section is a circle, for formula_3 a parabola. (The plane must not meet the vertex of the cone.)
The half-focal separation of an ellipse or hyperbola, denoted c (or sometimes f or e), is the distance between its center and either of its two foci. The eccentricity can be defined as the ratio of the half-focal separation to the semimajor axis a: that is, formula_4 (lacking a center, the half-focal separation for parabolas is not defined). It is worth noting that a parabola can be treated as an ellipse or a hyperbola, but with one focal point at infinity.
Alternative names.
In the case of ellipses and hyperbolas the half-focal separation is sometimes called the linear eccentricity.
Notation.
Three notational conventions are in common use:
Values.
Here, for the ellipse and the hyperbola, a is the length of the semi-major axis and b is the length of the semi-minor axis.
When the conic section is given in the general quadratic form
formula_5
the following formula gives the eccentricity e if the conic section is not a parabola (which has eccentricity equal to 1), not a degenerate hyperbola or degenerate ellipse, and not an imaginary ellipse:
formula_6
where formula_7 if the determinant of the 3×3 matrix
formula_8
is negative or formula_9 if that determinant is positive.
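A short numerical sketch of this formula (using NumPy; the function name is introduced only for illustration), checked against an ellipse and a rectangular hyperbola whose eccentricities are known in closed form:

```python
import numpy as np

def eccentricity(A, B, C, D, E, F):
    """Eccentricity of a non-parabolic, non-degenerate conic Ax^2+Bxy+Cy^2+Dx+Ey+F=0."""
    M = np.array([[A, B / 2, D / 2],
                  [B / 2, C, E / 2],
                  [D / 2, E / 2, F]])
    eta = 1.0 if np.linalg.det(M) < 0 else -1.0
    root = np.sqrt((A - C) ** 2 + B ** 2)
    return np.sqrt(2 * root / (eta * (A + C) + root))

# Ellipse x^2/4 + y^2 = 1, i.e. x^2 + 4y^2 - 4 = 0: a = 2, b = 1, e = sqrt(3)/2.
print(eccentricity(1, 0, 4, 0, 0, -4))   # ~ 0.866
# Rectangular hyperbola xy = 1, i.e. xy - 1 = 0: e = sqrt(2).
print(eccentricity(0, 1, 0, 0, 0, -1))   # ~ 1.414
```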
Ellipses.
The eccentricity of an ellipse is strictly less than 1. When circles (which have eccentricity 0) are counted as ellipses, the eccentricity of an ellipse is greater than or equal to 0; if circles are given a special category and are excluded from the category of ellipses, then the eccentricity of an ellipse is strictly greater than 0.
For any ellipse, let a be the length of its semi-major axis and b be the length of its semi-minor axis. In the coordinate system with origin at the ellipse's center and x-axis aligned with the major axis, points on the ellipse satisfy the equation
formula_10
with foci at coordinates formula_11 for formula_12
We define a number of related additional concepts (only for ellipses):
Other formulae for the eccentricity of an ellipse.
The eccentricity of an ellipse is, most simply, the ratio of the linear eccentricity c (distance between the center of the ellipse and each focus) to the length of the semimajor axis a.
formula_13
The eccentricity is also the ratio of the semimajor axis a to the distance d from the center to the directrix:
formula_14
The eccentricity can be expressed in terms of the flattening f (defined as formula_15 for semimajor axis a and semiminor axis b):
formula_16
Define the maximum and minimum radii formula_17 and formula_18 as the maximum and minimum distances from either focus to the ellipse (that is, the distances from either focus to the two ends of the major axis). Then with semimajor axis a, the eccentricity is given by
formula_19
which is the distance between the foci divided by the length of the major axis.
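The formulas above are equivalent, which can be checked numerically for an arbitrary example ellipse (here a = 5, b = 3, so e = 0.8):

```python
import math

# Cross-check of the equivalent ellipse formulas: e = c/a, e = sqrt(f(2-f)),
# and e = (r_max - r_min)/(r_max + r_min) should all agree.
a, b = 5.0, 3.0
c = math.sqrt(a ** 2 - b ** 2)       # linear eccentricity
f = 1 - b / a                        # flattening
r_max, r_min = a + c, a - c          # max/min distances from a focus

print(c / a)                              # 0.8
print(math.sqrt(f * (2 - f)))             # 0.8
print((r_max - r_min) / (r_max + r_min))  # 0.8
```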
Hyperbolas.
The eccentricity of a hyperbola can be any real number greater than 1, with no upper bound. The eccentricity of a rectangular hyperbola is formula_20.
Quadrics.
The eccentricity of a three-dimensional quadric is the eccentricity of a designated section of it. For example, on a triaxial ellipsoid, the "meridional eccentricity" is that of the ellipse formed by a section containing both the longest and the shortest axes (one of which will be the polar axis), and the "equatorial eccentricity" is the eccentricity of the ellipse formed by a section through the centre, perpendicular to the polar axis (i.e. in the equatorial plane). However, conic sections may also occur on surfaces of higher order.
Celestial mechanics.
In celestial mechanics, for bound orbits in a spherical potential, the definition above is informally generalized. When the apocenter distance is close to the pericenter distance, the orbit is said to have low eccentricity; when they are very different, the orbit is said to be eccentric or to have an eccentricity near unity. This definition coincides with the mathematical definition of eccentricity for ellipses in Keplerian, i.e., formula_21, potentials.
Analogous classifications.
A number of classifications in mathematics use terminology derived from the classification of conic sections by eccentricity:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\infty"
},
{
"math_id": 1,
"text": " e = \\frac{\\sin \\beta}{\\sin \\alpha}, \\ \\ 0<\\alpha<90^\\circ, \\ 0\\le\\beta\\le90^\\circ \\ , "
},
{
"math_id": 2,
"text": "\\beta=0"
},
{
"math_id": 3,
"text": "\\beta=\\alpha"
},
{
"math_id": 4,
"text": " e = \\frac{c}{a} "
},
{
"math_id": 5,
"text": "Ax^2 + Bxy + Cy^2 +Dx + Ey + F = 0,"
},
{
"math_id": 6,
"text": "e=\\sqrt{\\frac{2\\sqrt{(A-C)^2 + B^2}}{\\eta (A+C) + \\sqrt{(A-C)^2 + B^2}}}"
},
{
"math_id": 7,
"text": "\\eta = 1"
},
{
"math_id": 8,
"text": "\\begin{bmatrix}A & B/2 & D/2\\\\B/2 & C & E/2\\\\D/2&E/2&F\\end{bmatrix}"
},
{
"math_id": 9,
"text": "\\eta = -1"
},
{
"math_id": 10,
"text": "\\frac{x^2}{a^2} + \\frac{y^2}{b^2} = 1,"
},
{
"math_id": 11,
"text": "(\\pm c, 0)"
},
{
"math_id": 12,
"text": "c = \\sqrt{a^2 - b^2}."
},
{
"math_id": 13,
"text": "e = \\frac{c}{a}."
},
{
"math_id": 14,
"text": "e = \\frac{a}{d}."
},
{
"math_id": 15,
"text": "f = 1 - b / a"
},
{
"math_id": 16,
"text": "e = \\sqrt{1-(1-f)^2} = \\sqrt{f(2-f)}."
},
{
"math_id": 17,
"text": "r_\\text{max}"
},
{
"math_id": 18,
"text": "r_\\text{min}"
},
{
"math_id": 19,
"text": "e = \\frac{r_\\text{max}-r_\\text{min}}{r_\\text{max}+r_\\text{min}} = \\frac{r_\\text{max}-r_\\text{min}}{2a},"
},
{
"math_id": 20,
"text": "\\sqrt{2}"
},
{
"math_id": 21,
"text": "1/r"
}
] | https://en.wikipedia.org/wiki?curid=1239472 |
12395 | Greenhouse effect | Atmospheric phenomenon causing planetary warming
The greenhouse effect occurs when greenhouse gases in a planet's atmosphere insulate the planet from losing heat to space, raising its surface temperature. Surface heating can happen from an internal heat source as in the case of Jupiter, or from its host star as in the case of the Earth. In the case of Earth, the Sun emits shortwave radiation (sunlight) that passes through greenhouse gases to heat the Earth's surface. In response, the Earth's surface emits longwave radiation that is mostly absorbed by greenhouse gases. The absorption of longwave radiation prevents it from reaching space, reducing the rate at which the Earth can cool off.
Without the greenhouse effect, the Earth's average surface temperature would be about , which is less than Earth's 20th century average of about , or a more recent average of about . In addition to naturally present greenhouse gases, burning of fossil fuels has increased amounts of carbon dioxide and methane in the atmosphere. As a result, global warming of about has occurred since the Industrial Revolution, with the global average surface temperature increasing at a rate of per decade since 1981.
All objects with a temperature above absolute zero emit thermal radiation. The wavelengths of thermal radiation emitted by the Sun and Earth differ because their surface temperatures are different. The Sun has a surface temperature of , so it emits most of its energy as shortwave radiation in near-infrared and visible wavelengths (as sunlight). In contrast, Earth's surface has a much lower temperature, so it emits longwave radiation at mid- and far-infrared wavelengths. A gas is a greenhouse gas if it absorbs longwave radiation. Earth's atmosphere absorbs only 23% of incoming shortwave radiation, but absorbs 90% of the longwave radiation emitted by the surface, thus accumulating energy and warming the Earth's surface.
The existence of the greenhouse effect, while not named as such, was proposed as early as 1824 by Joseph Fourier. The argument and the evidence were further strengthened by Claude Pouillet in 1827 and 1838. In 1856 Eunice Newton Foote demonstrated that the warming effect of the sun is greater for air with water vapour than for dry air, and the effect is even greater with carbon dioxide. The term "greenhouse" was first applied to this phenomenon by Nils Gustaf Ekholm in 1901.
Definition.
The "greenhouse effect" on Earth is defined as: "The infrared radiative effect of all infrared absorbing constituents in the atmosphere. Greenhouse gases (GHGs), clouds, and some aerosols absorb terrestrial radiation emitted by the Earth’s surface and elsewhere in the atmosphere."2232
The "enhanced greenhouse effect" describes the fact that by increasing the concentration of GHGs in the atmosphere (due to human action), the natural greenhouse effect is increased.2232
Terminology.
The term "greenhouse effect" comes from an analogy to greenhouses. Both greenhouses and the "greenhouse effect" work by retaining heat from sunlight, but the way they retain heat differs. Greenhouses retain heat mainly by blocking convection (the movement of air). In contrast, the greenhouse effect retains heat by restricting radiative transfer through the air and reducing the rate at which thermal radiation is emitted into space.
History of discovery and investigation.
The existence of the greenhouse effect, while not named as such, was proposed as early as 1824 by Joseph Fourier. The argument and the evidence were further strengthened by Claude Pouillet in 1827 and 1838. In 1856 Eunice Newton Foote demonstrated that the warming effect of the sun is greater for air with water vapour than for dry air, and the effect is even greater with carbon dioxide. She concluded that "An atmosphere of that gas would give to our earth a high temperature..."
John Tyndall was the first to measure the infrared absorption and emission of various gases and vapors. From 1859 onwards, he showed that the effect was due to a very small proportion of the atmosphere, with the main gases having no effect, and was largely due to water vapor, though small percentages of hydrocarbons and carbon dioxide had a significant effect. The effect was more fully quantified by Svante Arrhenius in 1896, who made the first quantitative prediction of global warming due to a hypothetical doubling of atmospheric carbon dioxide. The term "greenhouse" was first applied to this phenomenon by Nils Gustaf Ekholm in 1901.
Measurement.
Matter emits thermal radiation at a rate that is directly proportional to the fourth power of its temperature. Some of the radiation emitted by the Earth's surface is absorbed by greenhouse gases and clouds. Without this absorption, Earth's surface would have an average temperature of . However, because some of the radiation is absorbed, Earth's average surface temperature is around . Thus, the Earth's greenhouse effect may be measured as a "temperature change" of .
Thermal radiation is characterized by how much energy it carries, typically in watts per square meter (W/m2). Scientists also measure the greenhouse effect based on how much more longwave thermal radiation leaves the Earth's surface than reaches space. Currently, longwave radiation leaves the surface at an average rate of 398 W/m2, but only 239 W/m2 reaches space. Thus, the Earth's greenhouse effect can also be measured as an "energy flow change" of 159 W/m2. The greenhouse effect can be expressed as a fraction (0.40) or percentage (40%) of the longwave thermal radiation that leaves Earth's surface but does not reach space.
Whether the greenhouse effect is expressed as a change in temperature or as a change in longwave thermal radiation, the same effect is being measured.
Role in climate change.
Strengthening of the greenhouse effect through additional greenhouse gases from human activities is known as the "enhanced greenhouse effect".2232 As well as being inferred from measurements by ARGO, CERES and other instruments throughout the 21st century,7–17 this increase in radiative forcing from human activity has been observed directly, and is attributable mainly to increased atmospheric carbon dioxide levels.
CO2 is produced by fossil fuel burning and other activities such as cement production and tropical deforestation. Measurements of CO2 from the Mauna Loa Observatory show that concentrations have increased from about 313 parts per million (ppm) in 1960, passing the 400 ppm milestone in 2013. The current observed amount of CO2 exceeds the geological record maxima (≈300 ppm) from ice core data.
Over the past 800,000 years, ice core data shows that carbon dioxide has varied from values as low as 180 ppm to the pre-industrial level of 270 ppm. Paleoclimatologists consider variations in carbon dioxide concentration to be a fundamental factor influencing climate variations over this time scale.
Energy balance and temperature.
Incoming shortwave radiation.
Hotter matter emits shorter wavelengths of radiation. As a result, the Sun emits shortwave radiation as sunlight while the Earth and its atmosphere emit longwave radiation. Sunlight includes ultraviolet, visible light, and near-infrared radiation.
Sunlight is reflected and absorbed by the Earth and its atmosphere. The atmosphere and clouds reflect about 23% and absorb 23%. The surface reflects 7% and absorbs 48%. Overall, Earth reflects about 30% of the incoming sunlight, and absorbs the rest (240 W/m2).
Outgoing longwave radiation.
The Earth and its atmosphere emit "longwave radiation", also known as "thermal infrared" or "terrestrial radiation". Informally, longwave radiation is sometimes called "thermal radiation". Outgoing longwave radiation (OLR) is the radiation from Earth and its atmosphere that passes through the atmosphere and into space.
The greenhouse effect can be directly seen in graphs of Earth's outgoing longwave radiation as a function of frequency (or wavelength). The area between the curve for longwave radiation emitted by Earth's surface and the curve for outgoing longwave radiation indicates the size of the greenhouse effect.
Different substances are responsible for reducing the radiation energy reaching space at different frequencies; for some frequencies, multiple substances play a role. Carbon dioxide is understood to be responsible for the dip in outgoing radiation (and associated rise in the greenhouse effect) at around 667 cm−1 (equivalent to a wavelength of 15 microns).
Each layer of the atmosphere with greenhouse gases absorbs some of the longwave radiation being radiated upwards from lower layers. It also emits longwave radiation in all directions, both upwards and downwards, in equilibrium with the amount it has absorbed. This results in less radiative heat loss and more warmth below. Increasing the concentration of the gases increases the amount of absorption and emission, thereby causing more heat to be retained at the surface and in the layers below.
Effective temperature.
The power of outgoing longwave radiation emitted by a planet corresponds to the "effective temperature" of the planet. The effective temperature is the temperature that a planet radiating with a uniform temperature (a blackbody) would need to have in order to radiate the same amount of energy.
This concept may be used to compare the amount of longwave radiation emitted to space and the amount of longwave radiation emitted by the surface:
Earth's surface temperature is often reported in terms of the average near-surface air temperature. This is about , a bit lower than the effective surface temperature. This value is warmer than Earth's overall effective temperature.
Energy flux.
Energy flux is the rate of energy flow per unit area. Energy flux is expressed in units of W/m2, which is the number of joules of energy that pass through a square meter each second. Most fluxes quoted in high-level discussions of climate are global values, which means they are the total flow of energy over the entire globe, divided by the surface area of the Earth, .
The fluxes of radiation arriving at and leaving the Earth are important because radiative transfer is the only process capable of exchanging energy between Earth and the rest of the universe.
Radiative balance.
The temperature of a planet depends on the balance between incoming radiation and outgoing radiation. If incoming radiation exceeds outgoing radiation, a planet will warm. If outgoing radiation exceeds incoming radiation, a planet will cool. A planet will tend towards a state of radiative equilibrium, in which the power of outgoing radiation equals the power of absorbed incoming radiation.
Earth's energy imbalance is the amount by which the power of incoming sunlight absorbed by Earth's surface or atmosphere exceeds the power of outgoing longwave radiation emitted to space. Energy imbalance is the fundamental measurement that drives surface temperature. A UN presentation says "The EEI is the most critical number defining the prospects for continued global warming and climate change." One study argues, "The absolute value of EEI represents the most fundamental metric defining the status of global climate change."
Earth's energy imbalance (EEI) was about 0.7 W/m2 as of around 2015, indicating that Earth as a whole is accumulating thermal energy and is in a process of becoming warmer.
Over 90% of the retained energy goes into warming the oceans, with much smaller amounts going into heating the land, atmosphere, and ice.
Day and night cycle.
A simple picture assumes a steady state, but in the real world, the day/night (diurnal) cycle, as well as the seasonal cycle and weather disturbances, complicate matters. Solar heating applies only during daytime. At night the atmosphere cools somewhat, but not greatly because the thermal inertia of the climate system resists changes both day and night, as well as for longer periods. Diurnal temperature changes decrease with height in the atmosphere.
Effect of lapse rate.
Lapse rate.
In the lower portion of the atmosphere, the troposphere, the air temperature decreases (or "lapses") with increasing altitude. The rate at which temperature changes with altitude is called the "lapse rate".
On Earth, the air temperature decreases by about 6.5 °C/km (3.6 °F per 1000 ft), on average, although this varies.
The temperature lapse is caused by convection. Air warmed by the surface rises. As it rises, air expands and cools. Simultaneously, other air descends, compresses, and warms. This process creates a vertical temperature gradient within the atmosphere.
This vertical temperature gradient is essential to the greenhouse effect. If the lapse rate were zero (so that the atmospheric temperature did not vary with altitude and was the same as the surface temperature), then there would be no greenhouse effect (i.e., its value would be zero).
Emission temperature and altitude.
Greenhouse gases make the atmosphere near Earth's surface mostly opaque to longwave radiation. The atmosphere only becomes transparent to longwave radiation at higher altitudes, where the air is less dense, there is less water vapor, and reduced pressure broadening of absorption lines limits the wavelengths that gas molecules can absorb.
For any given wavelength, the longwave radiation that reaches space is emitted by a particular "radiating layer" of the atmosphere. The intensity of the emitted radiation is determined by the weighted average air temperature within that layer. So, for any given wavelength of radiation emitted to space, there is an associated "effective emission temperature" (or brightness temperature).
A given wavelength of radiation may also be said to have an "effective emission altitude", which is a weighted average of the altitudes within the radiating layer.
The effective emission temperature and altitude vary by wavelength (or frequency). This phenomenon may be seen by examining plots of radiation emitted to space.
Greenhouse gases and the lapse rate.
Earth's surface radiates longwave radiation with wavelengths in the range of 4–100 microns. Greenhouse gases that were largely transparent to incoming solar radiation are more absorbent for some wavelengths in this range.
The atmosphere near the Earth's surface is largely opaque to longwave radiation and most heat loss from the surface is by evaporation and convection. However radiative energy losses become increasingly important higher in the atmosphere, largely because of the decreasing concentration of water vapor, an important greenhouse gas.
Rather than thinking of longwave radiation headed to space as coming from the surface itself, it is more realistic to think of this outgoing radiation as being emitted by a layer in the mid-troposphere, which is effectively coupled to the surface by a lapse rate. The difference in temperature between these two locations explains the difference between surface emissions and emissions to space, i.e., it explains the greenhouse effect.
Infrared absorbing constituents in the atmosphere.
Greenhouse gases.
A greenhouse gas (GHG) is a gas which contributes to the trapping of heat by impeding the flow of longwave radiation out of a planet's atmosphere. Greenhouse gases contribute most of the greenhouse effect in Earth's energy budget.
Infrared active gases.
Gases which can absorb and emit longwave radiation are said to be "infrared active" and act as greenhouse gases.
Most gases whose molecules have two different atoms (such as carbon monoxide, CO), and all gases with three or more atoms (including water vapor, H2O, and CO2), are infrared active and act as greenhouse gases. (Technically, this is because when these molecules vibrate, those vibrations modify the molecular dipole moment, or asymmetry in the distribution of electrical charge. See Infrared spectroscopy.)
Gases with only one atom (such as argon, Ar) or with two identical atoms (such as nitrogen, N2, and oxygen, O2) are not infrared active. They are transparent to longwave radiation, and, for practical purposes, do not absorb or emit longwave radiation. (This is because their molecules are symmetrical and so do not have a dipole moment.) Such gases make up more than 99% of the dry atmosphere.
Absorption and emission.
Greenhouse gases absorb and emit longwave radiation within specific ranges of wavelengths (organized as spectral lines or bands).
When greenhouse gases absorb radiation, they distribute the acquired energy to the surrounding air as thermal energy (i.e., kinetic energy of gas molecules). Energy is transferred from greenhouse gas molecules to other molecules via molecular collisions.
Contrary to what is sometimes said, greenhouse gases do not "re-emit" photons after they are absorbed. Because each molecule experiences billions of collisions per second, any energy a greenhouse gas molecule receives by absorbing a photon will be redistributed to other molecules before there is a chance for a new photon to be emitted.
In a separate process, greenhouse gases emit longwave radiation, at a rate determined by the air temperature. This thermal energy is either absorbed by other greenhouse gas molecules or leaves the atmosphere, cooling it.
Radiative effects.
"Effect on air:" Air is warmed by latent heat (buoyant water vapor condensing into water droplets and releasing heat), thermals (warm air rising from below), and by sunlight being absorbed in the atmosphere. Air is cooled radiatively, by greenhouse gases and clouds emitting longwave thermal radiation. Within the troposphere, greenhouse gases typically have a net cooling effect on air, emitting more thermal radiation than they absorb. Warming and cooling of air are well balanced, on average, so that the atmosphere maintains a roughly stable average temperature.
"Effect on surface cooling:" Longwave radiation flows both upward and downward due to absorption and emission in the atmosphere. These canceling energy flows reduce radiative surface cooling (net upward radiative energy flow). Latent heat transport and thermals provide non-radiative surface cooling which partially compensates for this reduction, but there is still a net reduction in surface cooling, for a given surface temperature.
"Effect on TOA energy balance:" Greenhouse gases impact the top-of-atmosphere (TOA) energy budget by reducing the flux of longwave radiation emitted to space, for a given surface temperature. Thus, greenhouse gases alter the energy balance at TOA. This means that the surface temperature needs to be higher (than the planet's "effective temperature", i.e., the temperature associated with emissions to space), in order for the outgoing energy emitted to space to balance the incoming energy from sunlight. It is important to focus on the top-of-atmosphere (TOA) energy budget (rather than the surface energy budget) when reasoning about the warming effect of greenhouse gases.
Clouds and aerosols.
Clouds and aerosols have both cooling effects, associated with reflecting sunlight back to space, and warming effects, associated with trapping thermal radiation.
On average, clouds have a strong net cooling effect. However, the mix of cooling and warming effects varies, depending on detailed characteristics of particular clouds (including their type, height, and optical properties). Thin cirrus clouds can have a net warming effect. Clouds can absorb and emit infrared radiation and thus affect the radiative properties of the atmosphere.
Basic formulas.
Effective temperature.
A given flux of thermal radiation has an associated "effective radiating temperature" or "effective temperature". Effective temperature is the temperature that a black body (a perfect absorber/emitter) would need to be to emit that much thermal radiation. Thus, the overall effective temperature of a planet is given by
formula_0
where OLR is the average flux (power per unit area) of outgoing longwave radiation emitted to space and formula_1 is the Stefan-Boltzmann constant. Similarly, the effective temperature of the surface is given by
formula_2
where SLR is the average flux of longwave radiation emitted by the surface. (OLR is a conventional abbreviation. SLR is used here to denote the flux of surface-emitted longwave radiation, although there is no standard abbreviation for this.)
Metrics for the greenhouse effect.
The IPCC reports the "greenhouse effect", G, as being 159 W m-2, where G is the flux of longwave thermal radiation that leaves the surface minus the flux of outgoing longwave radiation that reaches space:
formula_3
Alternatively, the greenhouse effect can be described using the "normalized greenhouse effect", g̃, defined as
formula_4
The normalized greenhouse effect is "the fraction of the amount of thermal radiation emitted by the surface that does not reach space".
Based on the IPCC numbers, g̃ = 0.40. In other words, 40 percent less thermal radiation reaches space than what leaves the surface.
Sometimes the greenhouse effect is quantified as a temperature difference. This temperature difference is closely related to the quantities above.
When the greenhouse effect is expressed as a temperature difference, formula_5, this refers to the effective temperature associated with thermal radiation emissions from the surface minus the effective temperature associated with emissions to space:
formula_6
formula_7
Informal discussions of the greenhouse effect often compare the actual surface temperature to the temperature that the planet would have if there were no greenhouse gases. However, in formal technical discussions, when the size of the greenhouse effect is quantified as a temperature, this is generally done using the above formula. The formula refers to the effective surface temperature rather than the actual surface temperature, and compares the surface with the top of the atmosphere, rather than comparing reality to a hypothetical situation.
The temperature difference, formula_5, indicates how much warmer a planet's surface is than the planet's overall effective temperature.
Radiative balance.
Earth's top-of-atmosphere (TOA) energy imbalance (EEI) is the amount by which the power of incoming radiation exceeds the power of outgoing radiation:
formula_8
where ASR is the mean flux of absorbed solar radiation. ASR may be expanded as
formula_9
where formula_10 is the albedo (reflectivity) of the planet and MSI is the mean solar irradiance incoming at the top of the atmosphere.
The radiative equilibrium temperature of a planet can be expressed as
formula_11
A planet's temperature will tend to shift towards a state of radiative equilibrium, in which the TOA energy imbalance is zero, i.e., formula_12. When the planet is in radiative equilibrium, the overall effective temperature of the planet is given by
formula_13
Thus, the concept of radiative equilibrium is important because it indicates what effective temperature a planet will tend towards having.
If, in addition to knowing the effective temperature, formula_14, we know the value of the greenhouse effect, then we know the mean (average) surface temperature of the planet.
This is why the quantity known as the greenhouse effect is important: it is one of the few quantities that go into determining the planet's mean surface temperature.
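As an illustration, the formulas above can be evaluated with the round-number fluxes quoted earlier in this article (SLR of about 398 W/m2 and OLR of about 239 W/m2); the printed values are only as precise as those inputs:

```python
# Illustrative evaluation of the formulas above, using the fluxes quoted
# in this article (398 W/m2 leaving the surface, 239 W/m2 reaching space).
sigma = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
SLR, OLR = 398.0, 239.0  # surface-emitted and outgoing longwave fluxes

T_surface_eff = (SLR / sigma) ** 0.25   # effective surface temperature, ~289 K
T_eff = (OLR / sigma) ** 0.25           # overall effective temperature, ~255 K

G = SLR - OLR                   # greenhouse effect as an energy flow, ~159 W/m2
g_norm = G / SLR                # normalized greenhouse effect, ~0.40
dT_GHE = T_surface_eff - T_eff  # greenhouse effect expressed as a temperature

print(G, round(g_norm, 2), round(dT_GHE, 1))
```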
Greenhouse effect and temperature.
Typically, a planet will be close to radiative equilibrium, with the rates of incoming and outgoing energy being well-balanced. Under such conditions, the planet's equilibrium temperature is determined by the mean solar irradiance and the planetary albedo (how much sunlight is reflected back to space instead of being absorbed).
The greenhouse effect measures how much warmer the surface is than the overall effective temperature of the planet. So, the effective surface temperature, formula_15, is, using the definition of formula_5,
formula_16
One could also express the relationship between formula_15 and formula_14 using "G" or "g̃".
So, the principle that a larger greenhouse effect corresponds to a higher surface temperature, if everything else (i.e., the factors that determine formula_14) is held fixed, is true as a matter of definition.
Note that the greenhouse effect influences the temperature of the planet as a whole, in tandem with the planet's tendency to move toward radiative equilibrium.
Misconceptions.
There are sometimes misunderstandings about how the greenhouse effect functions and raises temperatures.
The "surface budget fallacy" is a common error in thinking. It involves thinking that an increased CO2 concentration could only cause warming by increasing the downward thermal radiation to the surface, as a result of making the atmosphere a better emitter. If the atmosphere near the surface is already nearly opaque to thermal radiation, this would mean that increasing CO2 could not lead to higher temperatures. However, it is a mistake to focus on the surface energy budget rather than the top-of-atmosphere energy budget. Regardless of what happens at the surface, increasing the concentration of CO2 tends to reduce the thermal radiation reaching space (OLR), leading to a TOA energy imbalance that leads to warming. Earlier researchers like Callendar (1938) and Plass (1959) focused on the surface budget, but the work of Manabe in the 1960s clarified the importance of the top-of-atmosphere energy budget.
Among those who do not believe in the greenhouse effect, there is a fallacy that the greenhouse effect involves greenhouse gases sending heat from the cool atmosphere to the planet's warm surface, in violation of the Second Law of Thermodynamics. However, this idea reflects a misunderstanding. Radiation heat flow is the "net energy flow" after the flows of radiation in both directions have been taken into account. Radiation heat flow occurs in the direction from the surface to the atmosphere and space, as is to be expected given that the surface is warmer than the atmosphere and space. While greenhouse gases emit thermal radiation downward to the surface, this is part of the normal process of radiation heat transfer. The downward thermal radiation simply reduces the upward thermal radiation net energy flow (radiation heat flow), i.e., it reduces cooling.
Simplified models.
Simplified models are sometimes used to support understanding of how the greenhouse effect comes about and how this affects surface temperature.
Atmospheric layer models.
The greenhouse effect can be seen to occur in a simplified model in which the air is treated as if it is single uniform layer exchanging radiation with the ground and space. Slightly more complex models add additional layers, or introduce convection.
Equivalent emission altitude.
One simplification is to treat all outgoing longwave radiation as being emitted from an altitude where the air temperature equals the overall effective temperature for planetary emissions, formula_14. Some authors have referred to this altitude as the "effective radiating level" (ERL), and suggest that as the CO2 concentration increases, the ERL must rise to maintain the same mass of CO2 above that level.
This approach is less accurate than accounting for variation in radiation wavelength by emission altitude. However, it can be useful in supporting a simplified understanding of the greenhouse effect. For instance, it can be used to explain how the greenhouse effect increases as the concentration of greenhouse gases increase.
Earth's overall equivalent emission altitude has been increasing with a trend of /decade, which is said to be consistent with a global mean surface warming of /decade over the period 1979–2011.
Related effects on Earth.
Negative greenhouse effect.
Scientists have observed that, at times, there is a negative greenhouse effect over parts of Antarctica. In a location where there is a strong temperature inversion, so that the air is warmer than the surface, it is possible for the greenhouse effect to be reversed, so that the presence of greenhouse gases increases the rate of radiative cooling to space. In this case, the rate of thermal radiation emission to space is greater than the rate at which thermal radiation is emitted by the surface. Thus, the local value of the greenhouse effect is negative.
Bodies other than Earth.
In the solar system, apart from the Earth, at least two other planets and a moon also have a greenhouse effect.
Venus.
The greenhouse effect on Venus is particularly large, and it brings the surface temperature to as high as . This is due to its very dense atmosphere which consists of about 97% carbon dioxide.
Although Venus is about 30% closer to the Sun, it absorbs (and is warmed by) "less sunlight" than Earth, because Venus reflects 77% of incident sunlight while Earth reflects around 30%. In the absence of a greenhouse effect, the surface of Venus would be expected to have a temperature of . Thus, contrary to what one might think, being nearer to the Sun is not a reason why Venus is warmer than Earth.
Due to its high pressure, the CO2 in the atmosphere of Venus exhibits "continuum absorption" (absorption over a broad range of wavelengths) and is not limited to absorption within the bands relevant to its absorption on Earth.
A runaway greenhouse effect involving carbon dioxide and water vapor has for many years been hypothesized to have occurred on Venus; this idea is still largely accepted. The planet Venus experienced a runaway greenhouse effect, resulting in an atmosphere which is 96% carbon dioxide, and a surface atmospheric pressure roughly the same as found underwater on Earth. Venus may have had water oceans, but they would have boiled off as the mean surface temperature rose to the current .
Mars.
Mars has about 70 times as much carbon dioxide as Earth, but experiences only a small greenhouse effect, about . The greenhouse effect is small due to the lack of water vapor and the overall thinness of the atmosphere.
The same radiative transfer calculations that predict warming on Earth accurately explain the temperature on Mars, given its atmospheric composition.
Titan.
Saturn's moon Titan has both a greenhouse effect and an anti-greenhouse effect. The presence of nitrogen (N2), methane (CH4), and hydrogen (H2) in the atmosphere contributes to a greenhouse effect, increasing the surface temperature by about 21 K over the expected temperature of the body without these gases.
While the gases N2 and H2 ordinarily do not absorb infrared radiation, these gases absorb thermal radiation on Titan due to pressure-induced collisions, the large mass and thickness of the atmosphere, and the long wavelengths of the thermal radiation from the cold surface.
The existence of a high-altitude haze, which absorbs wavelengths of solar radiation but is transparent to infrared, contributes to an anti-greenhouse effect of approximately 9 K.
The net result of these two effects is a warming of 21 K − 9 K = 12 K, so Titan's surface temperature is 12 K warmer than it would be if there were no atmosphere.
Effect of pressure.
One cannot predict the relative sizes of the greenhouse effects on different bodies simply by comparing the amount of greenhouse gases in their atmospheres. This is because factors other than the quantity of these gases also play a role in determining the size of the greenhouse effect.
Overall atmospheric pressure affects how much thermal radiation each molecule of a greenhouse gas can absorb. High pressure leads to more absorption and low pressure leads to less.
This is due to "pressure broadening" of spectral lines. When the total atmospheric pressure is higher, collisions between molecules occur at a higher rate. Collisions broaden the width of absorption lines, allowing a greenhouse gas to absorb thermal radiation over a broader range of wavelengths.
Each molecule in the air near Earth's surface experiences about 7 billion collisions per second. This rate is lower at higher altitudes, where the pressure and temperature are both lower. This means that greenhouse gases are able to absorb more wavelengths in the lower atmosphere than they can in the upper atmosphere.
On other planets, pressure broadening means that each molecule of a greenhouse gas is more effective at trapping thermal radiation if the total atmospheric pressure is high (as on Venus), and less effective at trapping thermal radiation if the atmospheric pressure is low (as on Mars).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "T_\\mathrm{eff} = (\\mathrm{OLR}/\\sigma)^{1/4}"
},
{
"math_id": 1,
"text": "\\sigma"
},
{
"math_id": 2,
"text": "T_\\mathrm{surface,eff} = (\\mathrm{SLR}/\\sigma)^{1/4}"
},
{
"math_id": 3,
"text": "G = \\mathrm{SLR} - \\mathrm{OLR}\\;."
},
{
"math_id": 4,
"text": "\\tilde g = G/\\mathrm{SLR} = 1 - \\mathrm{OLR}/\\mathrm{SLR}\\;."
},
{
"math_id": 5,
"text": "\\Delta T_\\mathrm{GHE}"
},
{
"math_id": 6,
"text": "\\Delta T_\\mathrm{GHE} = T_\\mathrm{surface,eff} - T_\\mathrm{eff}"
},
{
"math_id": 7,
"text": "\\Delta T_\\mathrm{GHE} = \\left(\\mathrm{SLR}/\\sigma\\right)^{1/4} - \\left(\\mathrm{OLR}/\\sigma\\right)^{1/4}"
},
{
"math_id": 8,
"text": "\\mathrm{EEI} = \\mathrm{ASR} -\\mathrm{OLR}"
},
{
"math_id": 9,
"text": "\\mathrm{ASR} = (1-A) \\,\\mathrm{MSI}"
},
{
"math_id": 10,
"text": "A"
},
{
"math_id": 11,
"text": "T_\\mathrm{radeq} = (\\mathrm{ASR}/\\sigma)^{1/4} = \\left[(1-A)\\,\\mathrm{MSI}/\\sigma \\right]^{1/4} \\;."
},
{
"math_id": 12,
"text": "\\mathrm{EEI} = 0"
},
{
"math_id": 13,
"text": "T_\\mathrm{eff} = T_\\mathrm{radeq}\\;."
},
{
"math_id": 14,
"text": "T_\\mathrm{eff}"
},
{
"math_id": 15,
"text": "T_\\mathrm{surface,eff}"
},
{
"math_id": 16,
"text": "T_\\mathrm{surface,eff} = T_\\mathrm{eff} + \\Delta T_\\mathrm{GHE} \\;."
}
] | https://en.wikipedia.org/wiki?curid=12395 |
1239556 | Arrow–Debreu model | Economic Model
In mathematical economics, the Arrow–Debreu model is a theoretical general equilibrium model. It posits that under certain economic assumptions (convex preferences, perfect competition, and demand independence) there must be a set of prices such that aggregate supplies will equal aggregate demands for every commodity in the economy.
The model is central to the theory of general (economic) equilibrium and it is often used as a general reference for other microeconomic models. It was proposed by Kenneth Arrow, Gérard Debreu in 1954, and Lionel W. McKenzie independently in 1954, with later improvements in 1959.
The A-D model is one of the most general models of competitive economy and is a crucial part of general equilibrium theory, as it can be used to prove the existence of general equilibrium (or Walrasian equilibrium) of an economy. In general, there may be many equilibria.
Arrow (1972) and Debreu (1983) were separately awarded the Nobel Prize in Economics for their development of the model. McKenzie, however, did not receive the award.
Formal statement.
<templatestyles src="Template:Blockquote/styles.css" />The contents of both theorems [fundamental theorems of welfare economics] are old beliefs in economics. Arrow and Debreu have recently treated this question with techniques permitting proofs.
<templatestyles src="Template:Blockquote/styles.css" />This statement is precisely correct; once there were beliefs, now there was knowledge.
But more was at stake. Great scholars change the way that we think about the world, and about what and who we are. The Arrow-Debreu model, as communicated in Theory of Value changed basic thinking, and it quickly became the standard model of price theory. It is the “benchmark” model in Finance, International Trade, Public Finance, Transportation, and even macroeconomics... In rather short order it was no longer “as it is” in Marshall, Hicks, and Samuelson; rather it became “as it is” in Theory of Value.
This section follows the presentation in, which is based on.
Intuitive description of the Arrow–Debreu model.
The Arrow–Debreu model models an economy as a combination of three kinds of agents: the households, the producers, and the market. The households and producers transact with the market, but not with each other directly.
The households possess endowments (bundles of commodities they begin with), which one may think of as "inheritance". For the sake of mathematical clarity, all households are required to sell all their endowment to the market at the beginning. If they wish to retain some of the endowment, they would have to repurchase from the market later. The endowments may be working hours, use of land, tons of corn, etc.
The households possess proportional ownerships of producers, which can be thought of as joint-stock companies. The profit made by producer formula_0 is divided among the households in proportion to how much stock each household holds for the producer formula_0. Ownership is imposed at the beginning, and the households may not sell, buy, create, or discard them.
The households receive a budget, as the sum of income from selling endowments and dividend from producer profits.
The households possess preferences over bundles of commodities, which under the assumptions given, makes them utility maximizers. The households choose the consumption plan with the highest utility that they can afford using their budget.
The producers are capable of transforming bundles of commodities into other bundles of commodities. The producers have no separate utility functions. Instead, they are all purely profit maximizers.
The market is only capable of "choosing" a market price vector, which is a list of prices for each commodity, which every producer and household takes (there is no bargaining behavior—every producer and household is a price taker). The market has no utility or profit. Instead, the market aims to choose a market price vector such that, even though each household and producer is maximizing their own utility and profit, their consumption plans and production plans "harmonize". That is, "the market clears". In other words, the market is playing the role of a "Walrasian auctioneer".
Notation setup.
In general, we write indices of agents as superscripts, and vector coordinate indices as subscripts.
Imposing an artificial restriction.
The functions formula_52 are not necessarily well-defined for all price vectors formula_25. For example, if producer 1 is capable of transforming formula_53 units of commodity 1 into formula_54 units of commodity 2, and we have formula_55, then the producer can create plans with infinite profit, thus formula_56, and formula_57 is undefined.
Consequently, we define "restricted market" to be the same market, except there is a universal upper bound formula_58, such that every producer is required to use a production plan formula_59, and each household is required to use a consumption plan formula_60. Denote the corresponding quantities on the restricted market with a tilde. So for example, formula_61 is the excess demand function on the restricted market.
formula_58 is chosen to be "large enough" for the economy, so that the restriction is not in effect under equilibrium conditions (see next section). In detail, formula_58 is chosen to be large enough such that:
Each requirement is satisfiable.
The two requirements together imply that the restriction is not a real restriction when the production plans and consumption plans are "interior" to the restriction.
These two propositions imply that equilibria for the restricted market are equilibria for the unrestricted market:<templatestyles src="Math_theorem/styles.css" />
Theorem — If formula_25 is an equilibrium price vector for the restricted market, then it is also an equilibrium price vector for the unrestricted market. Furthermore, we have formula_79.
Existence of general equilibrium.
As the last piece of the construction, we define Walras's law:
Walras's law can be interpreted on both sides:
<templatestyles src="Math_theorem/styles.css" />
Theorem — formula_84 satisfies "weak" Walras's law: For all formula_85,
formula_86
and if formula_87, then formula_88 for some formula_89.
<templatestyles src="Math_proof/styles.css" />Proof sketch
If total excess demand value is exactly zero, then every household has spent all their budget. Else, some household is restricted to spend only part of their budget. Therefore, that household's consumption bundle is on the boundary of the restriction, that is, formula_90. We have chosen (in the previous section) formula_58 to be so large that even if all the producers coordinate, they would still fall short of meeting the demand. Consequently there exists some commodity formula_89 such that formula_91
<templatestyles src="Math_theorem/styles.css" />
Theorem — An equilibrium price vector exists for the restricted market, at which point the restricted market satisfies Walras's law.
<templatestyles src="Math_proof/styles.css" />Proof sketch
By definition of equilibrium, if formula_25 is an equilibrium price vector for the restricted market, then at that point, the restricted market satisfies Walras's law.
formula_84 is continuous since all formula_92 are continuous.
Define a function formula_93 on the price simplex, where formula_94 is a fixed positive constant.
By the weak Walras law, this function is well-defined. By Brouwer's fixed-point theorem, it has a fixed point. By the weak Walras law, this fixed point is a market equilibrium.
Note that the above proof does not give an iterative algorithm for finding any equilibrium, as there is no guarantee that the function formula_95 is a contraction. This is unsurprising, as there is no guarantee (without further assumptions) that any market equilibrium is a stable equilibrium.
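The following is a rough computational sketch of the map formula_95 in action on a toy pure-exchange economy. The two Cobb–Douglas households, their endowments, and the step size standing in for formula_94 are illustrative assumptions, and the simple excess demand function below is only a stand-in for formula_84; as noted above, iterating the map is not guaranteed to converge in general, although it happens to converge on this example.
<syntaxhighlight lang="python">
import numpy as np

# Toy pure-exchange economy: 2 goods, 2 Cobb-Douglas households (illustrative only).
alphas = np.array([[0.3, 0.7],   # household 1's expenditure shares
                   [0.6, 0.4]])  # household 2's expenditure shares
endow  = np.array([[1.0, 0.0],   # household 1's endowment
                   [0.0, 1.0]])  # household 2's endowment

def excess_demand(p):
    # Cobb-Douglas demand: spend the share alpha_n of wealth (e . p) on good n.
    demand = sum(a * (e @ p) / p for a, e in zip(alphas, endow))
    return demand - endow.sum(axis=0)

def price_map(p, gamma=0.5):
    q = np.maximum(0.0, p + gamma * excess_demand(p))
    return q / q.sum()            # renormalize to stay on the price simplex

p = np.array([0.5, 0.5])
for _ in range(200):              # fixed-point iteration (no general convergence guarantee)
    p = price_map(p)
print(p, excess_demand(p))        # excess demand is approximately zero at the fixed point
</syntaxhighlight>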
<templatestyles src="Math_theorem/styles.css" />
Corollary — An equilibrium price vector exists for the unrestricted market, at which point the unrestricted market satisfies Walras's law.
The role of convexity.
In 1954, McKenzie and the pair Arrow and Debreu independently proved the existence of general equilibria by invoking the Kakutani fixed-point theorem on the fixed points of a continuous function from a compact, convex set into itself. In the Arrow–Debreu approach, convexity is essential, because such fixed-point theorems are inapplicable to non-convex sets. For example, the rotation of the unit circle by 90 degrees lacks fixed points, although this rotation is a continuous transformation of a compact set into itself; although compact, the unit circle is non-convex. In contrast, the same rotation applied to the convex hull of the unit circle leaves the point "(0,0)" fixed. Notice that the Kakutani theorem does not assert that there exists exactly one fixed point. Reflecting the unit disk across the y-axis leaves a vertical segment fixed, so that this reflection has an infinite number of fixed points.
Non-convexity in large economies.
The assumption of convexity precluded many applications, which were discussed in the "Journal of Political Economy" from 1959 to 1961 by Francis M. Bator, M. J. Farrell, Tjalling Koopmans, and Thomas J. Rothenberg. Ross M. Starr (1969) proved the existence of economic equilibria when some consumer preferences need not be convex. In his paper, Starr proved that a "convexified" economy has general equilibria that are closely approximated by "quasi-equilibria" of the original economy; Starr's proof used the Shapley–Folkman theorem.
Uzawa equivalence theorem.
(Uzawa, 1962) showed that the existence of general equilibrium in an economy characterized by a continuous excess demand function fulfilling Walras's law is equivalent to the Brouwer fixed-point theorem. Thus, the use of Brouwer's fixed-point theorem is essential for showing that the equilibrium exists in general.
Fundamental theorems of welfare economics.
In welfare economics, one possible concern is finding a Pareto-optimal plan for the economy.
Intuitively, one can consider the problem of welfare economics to be the problem faced by a master planner for the whole economy: given starting endowment formula_48 for the entire society, the planner must pick a feasible master plan of production and consumption plans formula_96. The master planner has a wide freedom in choosing the master plan, but any reasonable planner should agree that, if someone's utility can be increased, while everyone else's is not decreased, then it is a better plan. That is, the Pareto ordering should be followed.
Define the Pareto ordering on the set of all plans formula_96 by formula_97 iff formula_98 for all formula_12.
Then, we say that a plan is Pareto-efficient with respect to a starting endowment formula_48, iff it is feasible, and there does not exist another feasible plan that is strictly better in Pareto ordering.
In general, there is a whole continuum of Pareto-efficient plans for each starting endowment formula_48.
With this setup, we have two fundamental theorems of welfare economics:
<templatestyles src="Math_theorem/styles.css" />
First fundamental theorem of welfare economics — Any market equilibrium state is Pareto-efficient.
<templatestyles src="Math_proof/styles.css" />Proof sketch
The price hyperplane separates the attainable productions and the Pareto-better consumptions. That is, the hyperplane formula_99 separates formula_100 and formula_101, where formula_101 is the set of all formula_102, such that formula_103, and formula_104. That is, it is the set of aggregates of all possible consumption plans that are strictly Pareto-better.
The attainable productions are on the lower side of the price hyperplane, while the Pareto-better consumptions are "strictly" on the upper side of the price hyperplane. Thus any Pareto-better plan is not attainable.
<templatestyles src="Math_theorem/styles.css" />
Second fundamental theorem of welfare economics — For any total endowment formula_48, and any Pareto-efficient state achievable using that endowment, there exists a distribution of endowments formula_105 and private ownerships formula_106 of the producers, such that the given state is a market equilibrium state for some price vector formula_107.
Proof idea: any Pareto-optimal consumption plan is separated by a hyperplane from the set of attainable consumption plans. The slope of the hyperplane would be the equilibrium prices. Verify that under such prices, each producer and household would find the given state optimal. Verify that Walras's law holds, and so the expenditures match income plus profit, and so it is possible to provide each household with exactly the necessary budget.
<templatestyles src="Math_proof/styles.css" />Proof
Since the state is attainable, we have formula_47. The equality does not necessarily hold, so we define the set of attainable aggregate consumptions formula_108. Any aggregate consumption bundle in formula_109 is attainable, and any bundle outside it is not.
Find the market price formula_25.
Define formula_101 to be the set of all formula_102, such that formula_103, and formula_104. That is, it is the set of aggregates of all possible consumption plans that are strictly Pareto-better. Since each formula_20 is convex, and each preference is convex, the set formula_101 is also convex.
Now, since the state is Pareto-optimal, the set formula_101 must be unattainable with the given endowment. That is, formula_101 is disjoint from formula_109. Since both sets are convex, there exists a separating hyperplane between them.
Let the hyperplane be defined by formula_110, where formula_111, and formula_112. The sign is chosen such that formula_113 and formula_114.
Claim: formula_115.
Suppose not, then there exists some formula_9 such that formula_116. Then formula_117 if formula_118 is large enough, but we also have formula_119, contradiction.
We have by construction formula_120, and formula_121. Now we claim: formula_122.
For each household formula_123, let formula_23 be the set of consumption plans for formula_123 that are at least as good as formula_22, and formula_124 be the set of consumption plans for formula_123 that are strictly better than formula_22.
By local nonsatiation of formula_19, the closed half-space formula_125 contains formula_23.
By continuity of formula_19, the open half-space formula_126 contains formula_124.
Adding them up, we find that the open half-space formula_127 contains formula_101.
Claim (Walras's law): formula_128
Since the production is attainable, we have formula_129, and since formula_130, we have formula_131.
By construction of the separating hyperplane, we also have formula_132, thus we have an equality.
Claim: at price formula_25, each producer formula_0 maximizes profit at formula_32.
If there exists some production plan formula_133 such that one producer can reach higher profit formula_134, then
formula_135
but then we would have a point in formula_136 on the other side of the separating hyperplane, violating our construction.
Claim: at price formula_25 and budget formula_137, household formula_123 maximizes utility at formula_22.
Otherwise, there exists some formula_138 such that formula_139 and formula_140. Then, consider the aggregate consumption bundle formula_141. It is in formula_101, but also satisfies formula_142. But this contradicts the previous claim that formula_122.
By Walras's law, the aggregate endowment income and profit exactly equals aggregate expenditure. It remains to distribute them such that each household formula_123 obtains exactly formula_137 as its budget. This is trivial.
Here is a greedy algorithm to do it: go through the commodities in a fixed order, assigning the endowment of each to household 1 until household 1's budget is exactly reached, splitting a commodity's endowment between two households if necessary; then continue assigning the remaining endowments to household 2 until its budget is reached, and so on. Distribute the ownership shares of the producers in the same way.
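The following is a minimal computational sketch of this greedy procedure. The budgets and asset values are purely illustrative numbers, and they are assumed to satisfy Walras's law, that is, the total of the budgets equals the total value being distributed.
<syntaxhighlight lang="python">
def greedy_split(budgets, asset_values):
    """Split each asset (an endowment or ownership share, valued at market prices)
    among households so that household i receives exactly budgets[i] in total value.
    Assumes sum(budgets) == sum(asset_values), as Walras's law guarantees."""
    shares = [[0.0] * len(asset_values) for _ in budgets]
    i = 0                                    # current household being filled
    remaining = budgets[0]
    for k, value in enumerate(asset_values):
        left = value
        while left > 1e-12 and i < len(budgets):
            take = min(left, remaining)
            shares[i][k] += take / value     # fraction of asset k given to household i
            left -= take
            remaining -= take
            if remaining <= 1e-12 and i + 1 < len(budgets):
                i += 1
                remaining = budgets[i]
            elif remaining <= 1e-12:
                i += 1                       # all budgets filled
    return shares

# Example: three households; two commodity endowments plus one producer's profit.
print(greedy_split(budgets=[3.0, 1.0, 2.0], asset_values=[4.0, 1.5, 0.5]))
</syntaxhighlight>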
Convexity vs strict convexity.
The assumptions of strict convexity can be relaxed to convexity. This modification changes supply and demand functions from point-valued functions into set-valued functions (or "correspondences"), and the application of Brouwer's fixed-point theorem into Kakutani's fixed-point theorem.
This modification is similar to the generalization of the minimax theorem to the existence of Nash equilibria.
The two fundamental theorems of welfare economics hold without modification.
Equilibrium vs "quasi-equilibrium".
The definition of market equilibrium assumes that every household performs utility maximization, subject to budget constraints. That is, formula_143 The dual problem would be cost minimization subject to utility constraints. That is, formula_144 for some real number formula_145. The duality gap between the two problems is nonnegative, and may be positive. Consequently, some authors study the dual problem and the properties of its "quasi-equilibrium" (or "compensated equilibrium"). Every equilibrium is a quasi-equilibrium, but the converse is not necessarily true.
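The following is a small numerical sketch, not part of the model, of how the two problems relate, using an illustrative Cobb–Douglas-style utility and a brute-force grid search. In this smooth, interior example the two problems select essentially the same bundle; in degenerate cases, for example when the household's wealth equals the minimum possible expenditure, a quasi-equilibrium need not be an equilibrium.
<syntaxhighlight lang="python">
import numpy as np

# One household, two goods, utility u(x) = x1**0.5 * x2**0.5, prices p, budget M.
p, M = np.array([1.0, 2.0]), 4.0
grid = np.linspace(0.01, 4.0, 400)
X = np.array([(a, b) for a in grid for b in grid])
u = X[:, 0] ** 0.5 * X[:, 1] ** 0.5
cost = X @ p

# Primal: maximize utility subject to the budget constraint.
primal = X[cost <= M][np.argmax(u[cost <= M])]
u0 = primal[0] ** 0.5 * primal[1] ** 0.5

# Dual: minimize cost subject to reaching the utility level u0.
feasible = u >= u0 - 1e-9
dual = X[feasible][np.argmin(cost[feasible])]

print(primal, dual)   # the two problems pick (nearly) the same bundle here
</syntaxhighlight>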
Extensions.
Accounting for strategic bargaining.
In the model, all producers and households are "price takers", meaning that they simply transact with the market using the price vector formula_25. In particular, behaviors such as cartels, monopolies, and consumer coalitions are not modelled. Edgeworth's limit theorem shows that under certain stronger assumptions, the households can do no better than to take prices as given in the limit of an infinitely large economy.
Setup.
In detail, we continue with the economic model of households and producers, but we consider a method other than the market economy for designing the production and distribution of commodities. It may be interpreted as a model of a "socialist" economy.
This economy is thus a cooperative game with each household being a player, and we have the following concepts from cooperative game theory:
Since we assumed that any nonempty subset of households may eliminate all other households, while retaining control of the producers, the only states that can be executed are the core states. A state that is not a core state would immediately be objected to by a coalition of households.
We need one more assumption on formula_150, that it is a cone, that is, formula_151 for any formula_152. This assumption rules out two ways for the economy to become trivial.
Main results (Debreu and Scarf, 1963).
<templatestyles src="Math_theorem/styles.css" />
Proposition — Market equilibria are core states.
<templatestyles src="Math_proof/styles.css" />Proof
Define the price hyperplane formula_155. Since it is a supporting hyperplane of formula_150, and formula_150 is a convex cone, the price hyperplane passes through the origin. Thus formula_156.
Since formula_157 is the total profit, and every producer can at least make zero profit (that is, formula_158 ), this means that the profit is exactly zero for every producer. Consequently, every household's budget is exactly from selling endowment.
formula_159
By utility maximization, every household is already doing as much as it could. Consequently, we have formula_160.
In particular, for any coalition formula_161, and any consumption plans formula_138 for its members that are Pareto-better for the coalition, we have
formula_162
and consequently, the point formula_163 lies above the price hyperplane, making it unattainable.
In Debreu and Scarf's paper, they defined a particular way to approach an infinitely large economy, by "replicating households". That is, for any positive integer formula_164, define an economy where, for each household formula_123 of the original economy, there are formula_164 households that have exactly the same consumption possibility set and preference as household formula_123.
Let formula_165 stand for the consumption plan of the formula_118-th replicate of household formula_123. Define a plan to be equitable iff formula_166 for any formula_12 and formula_167.
In general, a state would be quite complex, treating each replicate differently. However, core states are significantly simpler: they are equitable, treating every replicate equally.
<templatestyles src="Math_theorem/styles.css" />
Proposition — Any core state is equitable.
<templatestyles src="Math_proof/styles.css" />Proof
We use the "underdog coalition".
Consider a core state formula_165. Define average distributions formula_168.
It is attainable, so we have formula_169
Suppose there exists any inequality, that is, some formula_170; then by convexity of preferences, we have formula_171, where formula_172 is the index of the worst-treated replicate of household type formula_123.
Now define the "underdog coalition" consisting of the worst-treated household of each type, and they propose to distribute according to formula_173. This is Pareto-better for the coalition, and since formula_174 is conic, we also have formula_175, so the plan is attainable. Contradiction.
Consequently, when studying core states, it is sufficient to consider one consumption plan for each type of households. Now, define formula_176 to be the set of all core states for the economy with formula_164 replicates per household. It is clear that formula_177, so we may define the limit set of core states formula_178.
We have seen that formula_58 contains the set of market equilibria for the original economy. The converse is true under a minor additional assumption:
<templatestyles src="Math_theorem/styles.css" />
(Debreu and Scarf, 1963) — If formula_150 is a polygonal cone, or if every formula_20 has nonempty interior with respect to formula_179, then formula_58 is the set of market equilibria for the original economy.
The assumption that formula_150 is a polygonal cone, or every formula_20 has nonempty interior, is necessary to avoid the technical issue of "quasi-equilibrium". Without the assumption, we can only prove that formula_58 is contained in the set of quasi-equilibria.
Accounting for nonconvexity.
The assumption that production possibility sets are convex is a strong constraint, as it implies that there is no economy of scale. Similarly, we may consider nonconvex consumption possibility sets and nonconvex preferences. In such cases, the supply and demand functions formula_80 may be discontinuous with respect to price vector, thus a general equilibrium may not exist.
However, we may "convexify" the economy, find an equilibrium for it, then by the Shapley–Folkman–Starr theorem, it is an approximate equilibrium for the original economy.
In detail, given any economy satisfying all the assumptions given, except convexity of formula_180 and formula_19, we define the "convexified economy" to be the same economy, except that each production possibility set is convexified, formula_181, each consumption possibility set is convexified, formula_182, and each preference is convexified, so that formula_183 iff formula_184,
where formula_185 denotes the convex hull.
With this, any general equilibrium for the convexified economy is also an approximate equilibrium for the original economy. That is, if formula_186 is an equilibrium price vector for the convexified economy, then formula_187 where formula_188 is the Euclidean distance, and formula_189 is any upper bound on the inner radii of all formula_180 (see the page on the Shapley–Folkman–Starr theorem for the definition of inner radii).
The convexified economy may not satisfy the assumptions. For example, the set formula_190 is closed, but its convex hull is not closed. Imposing the further assumption that the convexified economy also satisfies the assumptions, we find that the original economy always has an approximate equilibrium.
Accounting for time, space, and uncertainty.
The commodities in the Arrow–Debreu model are entirely abstract. Thus, although it is typically represented as a static market, it can be used to model time, space, and uncertainty by splitting one commodity into several, each contingent on a certain time, place, and state of the world. For example, "apples" can be split into "apples in New York in September if oranges are available" and "apples in Chicago in June if oranges are not available".
Given some base commodities, the Arrow–Debreu complete market is a market where there is a separate commodity for every future time, for every place of delivery, for every state of the world under consideration, for every base commodity.
In financial economics the term "Arrow–Debreu" most commonly refers to an Arrow–Debreu security. A canonical Arrow–Debreu security is a security that pays one unit of numeraire if a particular state of the world is reached and zero otherwise (the price of such a security being a so-called "state price"). As such, any derivatives contract whose settlement value is a function of an underlying whose value is uncertain at contract date can be decomposed as a linear combination of Arrow–Debreu securities.
Since the work of Breeden and Litzenberger in 1978, a large number of researchers have used options to extract Arrow–Debreu prices for a variety of applications in financial economics.
Accounting for the existence of money.
<templatestyles src="Template:Blockquote/styles.css" />No theory of money is offered here, and it is assumed that the economy works without the help of a good serving as medium of exchange.
— <templatestyles src="Template:Blockquote/styles.css" />To the pure theorist, at the present juncture the most interesting and challenging aspect of money is that it can find no place in an Arrow–Debreu economy. This circumstance should also be of considerable significance to macroeconomists, but it rarely is.
— Typically, economists consider the functions of money to be as a unit of account, store of value, medium of exchange, and standard of deferred payment. This is however incompatible with the Arrow–Debreu complete market described above. In the complete market, there is only a one-time transaction at the market "at the beginning of time". After that, households and producers merely execute their planned productions, consumptions, and deliveries of commodities until the end of time. Consequently, there is no use for storage of value or medium of exchange. This applies not just to the Arrow–Debreu complete market, but also to models (such as those with markets of contingent commodities and Arrow insurance contracts) that differ in form, but are mathematically equivalent to it.
Computing general equilibria.
Scarf (1967) gave the first algorithm for computing a general equilibrium. See Scarf (2018) and Kubler (2012) for reviews.
Number of equilibria.
Certain economies at certain endowment vectors may have infinitely many equilibrium price vectors. However, "generically", an economy has only finitely many equilibrium price vectors. Here, "generically" means "at all points, except a closed set of Lebesgue measure zero", as in Sard's theorem.
There are many such genericity theorems. One example is the following:
<templatestyles src="Math_theorem/styles.css" />
Genericity — For any strictly positive endowment distribution formula_191, and any strictly positive price vector formula_107, define the excess demand formula_192 as before.
If, on all formula_193, the excess demand function formula_194 is continuously differentiable and formula_195 has rank formula_196,
then for generically any endowment distribution formula_191, there are only finitely many equilibria formula_197.
<templatestyles src="Math_proof/styles.css" />Proof (sketch)
Define the "equilibrium manifold" as the set of solutions to formula_198. By Walras's law, one of the constraints is redundant. By assumptions that formula_195 has rank formula_196, no more constraints are redundant. Thus the equilibrium manifold has dimension formula_199, which is equal to the space of all distributions of strictly positive endowments formula_200.
By continuity of formula_194, the projection is closed. Thus by Sard's theorem, the projection from the equilibrium manifold to formula_200 is critical on only a set of measure 0. It remains to check that the preimage of the projection is generically not just discrete, but also finite.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "j"
},
{
"math_id": 1,
"text": "x \\succeq y"
},
{
"math_id": 2,
"text": "\\forall n, x_n \\geq y_n"
},
{
"math_id": 3,
"text": "\\R^N_+"
},
{
"math_id": 4,
"text": "x"
},
{
"math_id": 5,
"text": "x \\succeq 0"
},
{
"math_id": 6,
"text": "\\R_{++}^N"
},
{
"math_id": 7,
"text": "x \\succ 0"
},
{
"math_id": 8,
"text": "\\Delta_N = \\left\\{x\\in \\R^N: x_1, ..., x_N \\geq 0, \\sum_{n\\in 1:N} x_n = 1\\right\\}"
},
{
"math_id": 9,
"text": "n\\in 1:N"
},
{
"math_id": 10,
"text": "N"
},
{
"math_id": 11,
"text": "p = (p_1, ..., p_N) \\in \\R_{++}^N"
},
{
"math_id": 12,
"text": "i\\in I"
},
{
"math_id": 13,
"text": "r^i\\in \\R^N_+"
},
{
"math_id": 14,
"text": "\\alpha^{i,j} \\geq 0"
},
{
"math_id": 15,
"text": "\\sum_{i\\in I} \\alpha^{i,j} = 1 \\quad \\forall j\\in J "
},
{
"math_id": 16,
"text": "M^i(p) = \\langle p, r^i\\rangle + \\sum_{j\\in J}\\alpha^{i,j}\\Pi^j(p)"
},
{
"math_id": 17,
"text": "M"
},
{
"math_id": 18,
"text": "CPS^i\\subset \\R_+^N"
},
{
"math_id": 19,
"text": "\\succeq^i"
},
{
"math_id": 20,
"text": "CPS^i"
},
{
"math_id": 21,
"text": "u^i: CPS^i \\to [0, 1]"
},
{
"math_id": 22,
"text": "x^i"
},
{
"math_id": 23,
"text": "U_+^i(x^i)"
},
{
"math_id": 24,
"text": "B^i(p) = \\{x^i \\in CPS^i : \\langle p, x^i \\rangle \\leq M^i(p)\\}"
},
{
"math_id": 25,
"text": "p"
},
{
"math_id": 26,
"text": "D^i(p)\\in \\R_+^N"
},
{
"math_id": 27,
"text": "D^i(p) := \\arg\\max_{x^i \\in B^i(p)} u^i(x^i)"
},
{
"math_id": 28,
"text": "p \\in \\R^N_{++}"
},
{
"math_id": 29,
"text": "j\\in J"
},
{
"math_id": 30,
"text": "PPS^j"
},
{
"math_id": 31,
"text": "(-1, 1, 0)"
},
{
"math_id": 32,
"text": "y^j"
},
{
"math_id": 33,
"text": "S^j(p)\\in \\R^N"
},
{
"math_id": 34,
"text": "S^j(p) := \\arg\\max_{y^j\\in PPS^j} \\langle p, y^j\\rangle"
},
{
"math_id": 35,
"text": "\\Pi^j(p) := \\langle p, S^j(p)\\rangle = \\max_{y^j\\in PPS^j} \\langle p, y^j\\rangle"
},
{
"math_id": 36,
"text": "CPS = \\sum_{i\\in I}CPS^i"
},
{
"math_id": 37,
"text": "PPS = \\sum_{j\\in J}PPS^j"
},
{
"math_id": 38,
"text": "r = \\sum_i r^i"
},
{
"math_id": 39,
"text": "D(p) := \\sum_i D^i(p)"
},
{
"math_id": 40,
"text": "S(p) := \\sum_j S^j(p)"
},
{
"math_id": 41,
"text": "Z(p) = D(p) - S(p) - r"
},
{
"math_id": 42,
"text": "(N, I, J, CPS^i, \\succeq^i, PPS^j)"
},
{
"math_id": 43,
"text": "(r^i, \\alpha^{i,j})_{i\\in I, j\\in J}"
},
{
"math_id": 44,
"text": "((p_n)_{n\\in 1:N}, (x^i)_{i\\in I}, (y^j)_{j\\in J})"
},
{
"math_id": 45,
"text": "x^i \\in CPS^i"
},
{
"math_id": 46,
"text": "y^j\\in PPS^j"
},
{
"math_id": 47,
"text": "\\sum_{i\\in I}x^i \\preceq \\sum_{j\\in J}y^j + r"
},
{
"math_id": 48,
"text": "r"
},
{
"math_id": 49,
"text": "PPS_r := \\{y\\in PPS: y+r \\succeq 0\\}"
},
{
"math_id": 50,
"text": "(p, (D^i(p))_{i\\in I}, (S^j(p))_{j\\in J})"
},
{
"math_id": 51,
"text": "Z(p)_n \\begin{cases}\n\t \\leq 0 \\text{ if } p_n = 0 \\\\\n\t = 0 \\text{ if } p_n > 0 \n\t \\end{cases}"
},
{
"math_id": 52,
"text": " D^i(p), S^j(p)"
},
{
"math_id": 53,
"text": "t"
},
{
"math_id": 54,
"text": "\\sqrt{(t+1)^2-1}"
},
{
"math_id": 55,
"text": "p_1 / p_2 < 1"
},
{
"math_id": 56,
"text": "\\Pi^j(p) = +\\infty"
},
{
"math_id": 57,
"text": "S^j(p)"
},
{
"math_id": 58,
"text": "C"
},
{
"math_id": 59,
"text": "\\|y^j\\| \\leq C"
},
{
"math_id": 60,
"text": "\\|x^i\\| \\leq C"
},
{
"math_id": 61,
"text": "\\tilde Z(p)"
},
{
"math_id": 62,
"text": "x \\succeq 0, \\|x\\| = C"
},
{
"math_id": 63,
"text": "(y^j\\in PPS^j)_{j\\in J}"
},
{
"math_id": 64,
"text": "\\sum_{j\\in J} y^j + r \\succeq 0"
},
{
"math_id": 65,
"text": "\\|y^j\\| < C"
},
{
"math_id": 66,
"text": "PPS_r = \\left\\{\\sum_{j\\in J} y^j : y^j \\in PPS^j \\text{ for each } j\\in J, \\text{ and }\\sum_{j\\in J} y^j + r \\succeq 0\\right\\}"
},
{
"math_id": 67,
"text": "PPS_r"
},
{
"math_id": 68,
"text": "r \\succeq 0"
},
{
"math_id": 69,
"text": "PPS_r^j := \\{y^j \\in PPS^j: y^j\\text{ is a part of some attainable production plan under endowment }r\\}"
},
{
"math_id": 70,
"text": "PPS_r^j"
},
{
"math_id": 71,
"text": "j\\in J, r \\succeq 0"
},
{
"math_id": 72,
"text": "\\|\\tilde S^j(p)\\| < C"
},
{
"math_id": 73,
"text": "S^j(p) "
},
{
"math_id": 74,
"text": "\\tilde S^j(p) "
},
{
"math_id": 75,
"text": "S^j(p) = \\tilde S^j(p)"
},
{
"math_id": 76,
"text": "\\|\\tilde D^i(p)\\| < C"
},
{
"math_id": 77,
"text": "D^i(p) "
},
{
"math_id": 78,
"text": "\\tilde D^i(p) "
},
{
"math_id": 79,
"text": "\\tilde D^i(p) = D^i(p), \\tilde S^j(p) = S^j(p)"
},
{
"math_id": 80,
"text": "S^j(p), D^i(p)"
},
{
"math_id": 81,
"text": " \\langle p, Z(p)\\rangle = 0"
},
{
"math_id": 82,
"text": " \\sum_{j\\in J} \\langle p,S^j(p)\\rangle + \\langle p, r\\rangle = \\sum_{i\\in I} \\langle p, D^i(p)\\rangle"
},
{
"math_id": 83,
"text": " \\langle p, \\tilde Z(p)\\rangle = 0"
},
{
"math_id": 84,
"text": "\\tilde Z"
},
{
"math_id": 85,
"text": "p \\in \\R_{++}^N"
},
{
"math_id": 86,
"text": "\\langle p, \\tilde Z(p)\\rangle \\leq 0"
},
{
"math_id": 87,
"text": "\\langle p, \\tilde Z(p)\\rangle < 0"
},
{
"math_id": 88,
"text": "\\tilde Z(p)_n > 0"
},
{
"math_id": 89,
"text": "n"
},
{
"math_id": 90,
"text": "\\|\\tilde D^i(p)\\| = C"
},
{
"math_id": 91,
"text": "\\tilde D^i(p)_n > \\tilde S(p)_n + r_n"
},
{
"math_id": 92,
"text": "\\tilde S^j, \\tilde D^i"
},
{
"math_id": 93,
"text": "f(p) = \\frac{\\max(0, p + \\gamma \\tilde Z(p))}{\\sum_n \\max(0, p_n + \\gamma \\tilde Z(p)_n)}"
},
{
"math_id": 94,
"text": "\\gamma"
},
{
"math_id": 95,
"text": "f"
},
{
"math_id": 96,
"text": "((x^i)_{i\\in I}, (y^j)_{j\\in J})"
},
{
"math_id": 97,
"text": "((x^i)_{i\\in I}, (y^j)_{j\\in J}) \\succeq((x'^i)_{i\\in I}, (y'^j)_{j\\in J})"
},
{
"math_id": 98,
"text": "x^i \\succeq^i x'^i"
},
{
"math_id": 99,
"text": "\\langle p^*, q\\rangle = \\langle p^*, D(p^*)\\rangle"
},
{
"math_id": 100,
"text": "r + PPS_r"
},
{
"math_id": 101,
"text": "U_{++}"
},
{
"math_id": 102,
"text": "\\sum_{i\\in I} x'^i"
},
{
"math_id": 103,
"text": "\\forall i\\in I, x'^i\\in CPS^i, x'^i \\succeq^i x^i"
},
{
"math_id": 104,
"text": "\\exists i\\in I, x'^i \\succ^i x^i"
},
{
"math_id": 105,
"text": "\\{r^i\\}_{i\\in I}"
},
{
"math_id": 106,
"text": "\\{\\alpha^{i,j}\\}_{i\\in I, j\\in J}"
},
{
"math_id": 107,
"text": "p\\in \\R_{++}^N"
},
{
"math_id": 108,
"text": "V := \\{r + y - z: y \\in PPS, z \\succeq 0\\}"
},
{
"math_id": 109,
"text": "V"
},
{
"math_id": 110,
"text": "\\langle p, q\\rangle = c"
},
{
"math_id": 111,
"text": "p\\in \\R^N, p\\neq 0"
},
{
"math_id": 112,
"text": "c= \\sum_{i\\in I}\\langle p, x^i\\rangle"
},
{
"math_id": 113,
"text": "\\langle p, U_{++}\\rangle \\geq c"
},
{
"math_id": 114,
"text": "\\langle p, r+PPS\\rangle \\leq c"
},
{
"math_id": 115,
"text": "p \\succ 0"
},
{
"math_id": 116,
"text": "p_n <0"
},
{
"math_id": 117,
"text": "\\langle p, r + 0 - k e_n\\rangle > c"
},
{
"math_id": 118,
"text": "k"
},
{
"math_id": 119,
"text": "r + 0 - k e_n\\in V"
},
{
"math_id": 120,
"text": "\\langle p, \\sum_{i\\in I}x^i\\rangle = c"
},
{
"math_id": 121,
"text": "\\langle p, V\\rangle \\leq c"
},
{
"math_id": 122,
"text": "\\langle p, U_{++}\\rangle > c"
},
{
"math_id": 123,
"text": "i"
},
{
"math_id": 124,
"text": "U_{++}^i(x^i)"
},
{
"math_id": 125,
"text": "\\langle p, q\\rangle \\geq \\langle p, x^i\\rangle"
},
{
"math_id": 126,
"text": "\\langle p, q\\rangle > \\langle p, x^i\\rangle"
},
{
"math_id": 127,
"text": "\\langle p, q\\rangle > c"
},
{
"math_id": 128,
"text": "\\langle p, r + \\sum_j y^j\\rangle =c =\\langle p, \\sum_i x^i\\rangle"
},
{
"math_id": 129,
"text": "r + \\sum_j y^j \\succeq \\sum_i x^i"
},
{
"math_id": 130,
"text": "p\\succ 0"
},
{
"math_id": 131,
"text": "\\langle p, r + \\sum_j y^j\\rangle \\geq \\langle p, \\sum_i x^i\\rangle"
},
{
"math_id": 132,
"text": "\\langle p, r + \\sum_j y^j\\rangle \\leq c =\\langle p, \\sum_i x^i\\rangle"
},
{
"math_id": 133,
"text": "y'^j"
},
{
"math_id": 134,
"text": "\\langle p, y'^j\\rangle > \\langle p, y^j\\rangle"
},
{
"math_id": 135,
"text": "\\langle p, r\\rangle+ \\sum_{j\\in J}\\langle p, y'^j\\rangle >\\langle p, r\\rangle+ \\sum_{j\\in J}\\langle p, y^j\\rangle = c"
},
{
"math_id": 136,
"text": "r+PPS"
},
{
"math_id": 137,
"text": "\\langle p, x^i\\rangle"
},
{
"math_id": 138,
"text": "x'^i"
},
{
"math_id": 139,
"text": "x'^i \\succ^i x^i"
},
{
"math_id": 140,
"text": "\\langle p, x'^i\\rangle \\leq \\langle p, x^i\\rangle"
},
{
"math_id": 141,
"text": "q' := \\sum_{i'\\in I, i' \\neq i}x^i + x'^i"
},
{
"math_id": 142,
"text": "\\langle p, q'\\rangle \\leq \\sum \\langle p, x^i\\rangle = c"
},
{
"math_id": 143,
"text": "\\begin{cases}\n\\max_{x^i} u^i(x^i) \\\\\n\\langle p, x^i\\rangle \\leq M^i(p)\n\\end{cases}"
},
{
"math_id": 144,
"text": "\\begin{cases}\nu^i(x^i) \\geq u^i_0\\\\\n\\min_{x^i} \\langle p, x^i\\rangle\n\\end{cases}"
},
{
"math_id": 145,
"text": "u^i_0"
},
{
"math_id": 146,
"text": "y^j \\in PPS^j"
},
{
"math_id": 147,
"text": "y\\in PPS"
},
{
"math_id": 148,
"text": "((x_i)_{i\\in I}, y)"
},
{
"math_id": 149,
"text": "x^i \\in CPS^i, y \\in PPS, y\\succeq \\sum_i (x^i- r^i)"
},
{
"math_id": 150,
"text": "PPS"
},
{
"math_id": 151,
"text": "k \\cdot PPS \\subset PPS"
},
{
"math_id": 152,
"text": "k \\geq 0"
},
{
"math_id": 153,
"text": "y\\succ 0"
},
{
"math_id": 154,
"text": "y"
},
{
"math_id": 155,
"text": "\\langle p, q \\rangle = \\langle p, \\sum_j y^j\\rangle"
},
{
"math_id": 156,
"text": "\\langle p, \\sum_j y^j\\rangle = \\langle p, \\sum_i x^i - r^i\\rangle = 0"
},
{
"math_id": 157,
"text": "\\sum_j \\langle p, y^j\\rangle"
},
{
"math_id": 158,
"text": "0 \\in PPS^j"
},
{
"math_id": 159,
"text": "\\langle p, x^i \\rangle = \\langle p, r^i\\rangle"
},
{
"math_id": 160,
"text": "\\langle p, U^i_{++}(x^i)\\rangle > \\langle p, r^i\\rangle"
},
{
"math_id": 161,
"text": "I' \\subset I"
},
{
"math_id": 162,
"text": " \\sum_{i\\in I'} \\langle p, x'^i \\rangle >\\sum_{i\\in I'} \\langle p, r^i \\rangle"
},
{
"math_id": 163,
"text": "\\sum_{i\\in I'} x'^i - r^i"
},
{
"math_id": 164,
"text": "K"
},
{
"math_id": 165,
"text": "x^{i, k}"
},
{
"math_id": 166,
"text": "x^{i, k} \\sim^i x^{i, k'}"
},
{
"math_id": 167,
"text": "k, k'\\in K"
},
{
"math_id": 168,
"text": "\\bar x^{i} := \\frac 1K \\sum_{k\\in K} x^{i,k}"
},
{
"math_id": 169,
"text": "K \\sum_{i} (\\bar x^i - r^i) \\in PPS"
},
{
"math_id": 170,
"text": "x^{i, k} \\succ^i x^{i, k'}"
},
{
"math_id": 171,
"text": "\\bar x^i \\succ^i x^{i, k'}"
},
{
"math_id": 172,
"text": "k'"
},
{
"math_id": 173,
"text": "\\bar x^i"
},
{
"math_id": 174,
"text": "PP"
},
{
"math_id": 175,
"text": "\\sum_i(\\bar x^i - r^i) \\in PPS"
},
{
"math_id": 176,
"text": "C_K"
},
{
"math_id": 177,
"text": "C_1 \\supset C_2 \\supset \\cdots"
},
{
"math_id": 178,
"text": "C := \\cap_{K=1}^\\infty C_K"
},
{
"math_id": 179,
"text": "\\R^N"
},
{
"math_id": 180,
"text": "PPS^j, CPS^i"
},
{
"math_id": 181,
"text": "PPS'^j = \\mathrm{Conv}(PPS^j)"
},
{
"math_id": 182,
"text": "CPS'^i = \\mathrm{Conv}(CPS^i)"
},
{
"math_id": 183,
"text": "x \\succeq'^i y"
},
{
"math_id": 184,
"text": "\\forall z \\in CPS^i, y \\in \\mathrm{Conv}(U_+^i(z)) \\implies x \\in \\mathrm{Conv}(U_+^i(z)) "
},
{
"math_id": 185,
"text": "\\mathrm{Conv}"
},
{
"math_id": 186,
"text": "p^*"
},
{
"math_id": 187,
"text": "\\begin{align}\nd(D'(p^*) - S'(p^*), D(p^*) - S(p^*)) &\\leq N\\sqrt{L} \\\\\nd(r, D(p^*) - S(p^*)) &\\leq N\\sqrt{L}\n\\end{align}"
},
{
"math_id": 188,
"text": "d(\\cdot, \\cdot)"
},
{
"math_id": 189,
"text": "L"
},
{
"math_id": 190,
"text": "\\{(x, 0): x \\geq 0\\}\\cup \\{(x,y): xy = 1, x > 0\\}"
},
{
"math_id": 191,
"text": "r^1, ..., r^I \\in \\R_{++}^N"
},
{
"math_id": 192,
"text": "Z(p, r^1, ..., r^I)"
},
{
"math_id": 193,
"text": "p, r^1, ..., r^I \\in \\R_{++}^N"
},
{
"math_id": 194,
"text": "Z"
},
{
"math_id": 195,
"text": "\\nabla_p Z"
},
{
"math_id": 196,
"text": "(N-1)"
},
{
"math_id": 197,
"text": "p^* \\in \\R_{++}^N"
},
{
"math_id": 198,
"text": "Z=0"
},
{
"math_id": 199,
"text": "N \\times I"
},
{
"math_id": 200,
"text": "\\R_{++}^{N \\times I}"
}
] | https://en.wikipedia.org/wiki?curid=1239556 |
12396 | Group homomorphism | Mathematical function between groups that preserves multiplication structure
In mathematics, given two groups, ("G",∗) and ("H", ·), a group homomorphism from ("G",∗) to ("H", ·) is a function "h" : "G" → "H" such that for all "u" and "v" in "G" it holds that
formula_0
where the group operation on the left side of the equation is that of "G" and on the right side that of "H".
From this property, one can deduce that "h" maps the identity element "eG" of "G" to the identity element "eH" of "H",
formula_1
and it also maps inverses to inverses in the sense that
formula_2
Hence one can say that "h" "is compatible with the group structure".
In areas of mathematics where one considers groups endowed with additional structure, a "homomorphism" sometimes means a map which respects not only the group structure (as above) but also the extra structure. For example, a homomorphism of topological groups is often required to be continuous.
Intuition.
The purpose of defining a group homomorphism is to create functions that preserve the algebraic structure. An equivalent definition of group homomorphism is: The function "h" : "G" → "H" is a group homomorphism if whenever
"a" ∗ "b" = "c" we have "h"("a") ⋅ "h"("b") = "h"("c").
In other words, the group "H" in some sense has a similar algebraic structure as "G" and the homomorphism "h" preserves that.
Image and kernel.
We define the "kernel of h" to be the set of elements in "G" which are mapped to the identity in "H"
formula_3
and the "image of h" to be
formula_4
The kernel and image of a homomorphism can be interpreted as measuring how close it is to being an isomorphism. The first isomorphism theorem states that the image of a group homomorphism, "h"("G") is isomorphic to the quotient group "G"/ker "h".
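Continuing the illustrative sketch above, the kernel and image of a homomorphism between finite groups can be computed directly, and the counting consequence of the first isomorphism theorem can be checked on the same example.
<syntaxhighlight lang="python">
def kernel(h, G, e_H):
    return [g for g in G if h[g] == e_H]

def image(h, G):
    return sorted({h[g] for g in G})

# Same example as above: h : Z_6 -> Z_3, h(x) = x mod 3.
G = range(6)
h = {x: x % 3 for x in G}
ker, im = kernel(h, G, 0), image(h, G)
print(ker, im)                          # [0, 3] and [0, 1, 2]
print(len(G) // len(ker) == len(im))    # |G / ker h| = |h(G)|, as the theorem predicts
</syntaxhighlight>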
The kernel of h is a normal subgroup of "G". Assume formula_5 and show formula_6 for arbitrary formula_7:
formula_8
The image of h is a subgroup of "H".
The homomorphism, "h", is a "group monomorphism"; i.e., "h" is injective (one-to-one) if and only if ker("h")
{"e""G"}. Injection directly gives that there is a unique element in the kernel, and, conversely, a unique element in the kernel gives injection:
formula_9
Category of groups.
If "h" : "G" → "H" and "k" : "H" → "K" are group homomorphisms, then so is "k" ∘ "h" : "G" → "K". This shows that the class of all groups, together with group homomorphisms as morphisms, forms a category (specifically the category of groups).
Homomorphisms of abelian groups.
If "G" and "H" are abelian (i.e., commutative) groups, then the set Hom("G", "H") of all group homomorphisms from "G" to "H" is itself an abelian group: the sum "h" + "k" of two homomorphisms is defined by
("h" + "k")("u") = "h"("u") + "k"("u") for all "u" in "G".
The commutativity of "H" is needed to prove that "h" + "k" is again a group homomorphism.
The addition of homomorphisms is compatible with the composition of homomorphisms in the following sense: if "f" is in Hom("K", "G"), "h", "k" are elements of Hom("G", "H"), and "g" is in Hom("H", "L"), then
("h" + "k") ∘ "f" = ("h" ∘ "f") + ("k" ∘ "f") and "g" ∘ ("h" + "k") = ("g" ∘ "h") + ("g" ∘ "k").
Since the composition is associative, this shows that the set End("G") of all endomorphisms of an abelian group forms a ring, the "endomorphism ring" of "G". For example, the endomorphism ring of the abelian group consisting of the direct sum of "m" copies of Z/"nZ is isomorphic to the ring of "m"-by-"m" matrices with entries in Z/"nZ. The above compatibility also shows that the category of all abelian groups with group homomorphisms forms a preadditive category; the existence of direct sums and well-behaved kernels makes this category the prototypical example of an abelian category. | [
{
"math_id": 0,
"text": " h(u*v) = h(u) \\cdot h(v) "
},
{
"math_id": 1,
"text": " h(e_G) = e_H"
},
{
"math_id": 2,
"text": " h\\left(u^{-1}\\right) = h(u)^{-1}. \\,"
},
{
"math_id": 3,
"text": " \\operatorname{ker}(h) := \\left\\{u \\in G\\colon h(u) = e_{H}\\right\\}."
},
{
"math_id": 4,
"text": " \\operatorname{im}(h) := h(G) \\equiv \\left\\{h(u)\\colon u \\in G\\right\\}."
},
{
"math_id": 5,
"text": "u \\in \\operatorname{im}(h)"
},
{
"math_id": 6,
"text": "g^{-1} \\circ u \\circ g \\in \\operatorname{im}(h)"
},
{
"math_id": 7,
"text": "u, g"
},
{
"math_id": 8,
"text": "\\begin{align}\n h\\left(g^{-1} \\circ u \\circ g\\right) &= h(g)^{-1} \\cdot h(u) \\cdot h(g) \\\\\n &= h(g)^{-1} \\cdot e_H \\cdot h(g) \\\\\n &= h(g)^{-1} \\cdot h(g) = e_H,\n\\end{align}"
},
{
"math_id": 9,
"text": "\\begin{align}\n && h(g_1) &= h(g_2) \\\\\n \\Leftrightarrow && h(g_1) \\cdot h(g_2)^{-1} &= e_H \\\\\n \\Leftrightarrow && h\\left(g_1 \\circ g_2^{-1}\\right) &= e_H,\\ \\operatorname{ker}(h) = \\{e_G\\} \\\\\n \\Rightarrow && g_1 \\circ g_2^{-1} &= e_G \\\\\n \\Leftrightarrow && g_1 &= g_2 \n\\end{align}"
},
{
"math_id": 10,
"text": "\\Phi: (\\mathbb{N}, +) \\rightarrow (\\mathbb{R}, +)"
},
{
"math_id": 11,
"text": "\\Phi(x) = \\sqrt[]{2}x"
},
{
"math_id": 12,
"text": "(\\mathbb{R}^+, *)"
},
{
"math_id": 13,
"text": "(\\mathbb{R}, +)"
},
{
"math_id": 14,
"text": "G"
},
{
"math_id": 15,
"text": "H"
},
{
"math_id": 16,
"text": "\\mathbb{R}^+"
},
{
"math_id": 17,
"text": "f: G \\rightarrow H "
}
] | https://en.wikipedia.org/wiki?curid=12396 |
12397 | Group isomorphism | Bijective group homomorphism
In abstract algebra, a group isomorphism is a function between two groups that sets up a bijection between the elements of the groups in a way that respects the given group operations. If there exists an isomorphism between two groups, then the groups are called isomorphic. From the standpoint of group theory, isomorphic groups have the same properties and need not be distinguished.
Definition and notation.
Given two groups formula_0 and formula_1 a "group isomorphism" from formula_0 to formula_2 is a bijective group homomorphism from formula_3 to formula_4 Spelled out, this means that a group isomorphism is a bijective function formula_5 such that for all formula_6 and formula_7 in formula_3 it holds that
formula_8
The two groups formula_0 and formula_2 are isomorphic if there exists an isomorphism from one to the other. This is written
formula_9
Often shorter and simpler notations can be used. When the relevant group operations are understood, they are omitted and one writes
formula_10
Sometimes one can even simply write formula_11 Whether such a notation is possible without confusion or ambiguity depends on context. For example, the equals sign is not very suitable when the groups are both subgroups of the same group. See also the examples.
Conversely, given a group formula_12 a set formula_13 and a bijection formula_14 we can make formula_15 a group formula_2 by defining
formula_16
If formula_17 and formula_18 then the bijection is an automorphism ("q.v.").
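As an illustrative sketch of this construction, a bijection from a group onto a bare set transports the group operation as in the following example; the particular sets and labels are only assumptions for the illustration.
<syntaxhighlight lang="python">
# Transporting a group structure along a bijection.
# G = (Z_4, +); H = {'a', 'b', 'c', 'd'}; f is an arbitrary bijection G -> H.
f = {0: 'a', 1: 'b', 2: 'c', 3: 'd'}
f_inv = {v: k for k, v in f.items()}

def odot(u, v):
    """The induced operation on H: f(x) (.) f(y) := f(x + y mod 4)."""
    return f[(f_inv[u] + f_inv[v]) % 4]

print(odot('b', 'd'))   # 'a', because 1 + 3 = 0 in Z_4
print(odot('c', 'c'))   # 'a', because 2 + 2 = 0 in Z_4
</syntaxhighlight>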
Intuitively, group theorists view two isomorphic groups as follows: For every element formula_19 of a group formula_20 there exists an element formula_21 of formula_15 such that formula_21 "behaves in the same way" as formula_19 (operates with other elements of the group in the same way as formula_19). For instance, if formula_19 generates formula_20 then so does formula_22 This implies, in particular, that formula_3 and formula_15 are in bijective correspondence. Thus, the definition of an isomorphism is quite natural.
An isomorphism of groups may equivalently be defined as an invertible group homomorphism (the inverse function of a bijective group homomorphism is also a group homomorphism).
Examples.
In this section some notable examples of isomorphic groups are listed.
Some groups can be proven to be isomorphic, relying on the axiom of choice, but the proof does not indicate how to construct a concrete isomorphism. Examples:
Properties.
The kernel of an isomorphism from formula_0 to formula_2 is always {eG}, where eG is the identity of the group formula_0
If formula_0 and formula_2 are isomorphic, then formula_3 is abelian if and only if formula_15 is abelian.
If formula_41 is an isomorphism from formula_0 to formula_1 then for any formula_42 the order of formula_43 equals the order of formula_44
If formula_0 and formula_2 are isomorphic, then formula_0 is a locally finite group if and only if formula_2 is locally finite.
The number of distinct groups (up to isomorphism) of order formula_45 is given by sequence A000001 in the OEIS. The first few numbers are 0, 1, 1, 1 and 2 meaning that 4 is the lowest order with more than one group.
Cyclic groups.
All cyclic groups of a given order are isomorphic to formula_46 where formula_47 denotes addition modulo formula_48
Let formula_3 be a cyclic group and formula_45 be the order of formula_49 Letting formula_50 be a generator of formula_3, formula_3 is then equal to formula_51
We will show that
formula_52
Define
formula_53 so that formula_54
Clearly, formula_55 is bijective. Then
formula_56
which proves that formula_57
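The following is an illustrative computational check of this isomorphism for one concrete cyclic group, the group of nonzero residues modulo 7 generated by 3, which has order 6; the choice of group is only an example.
<syntaxhighlight lang="python">
# The subgroup of (Z/7Z)* generated by x = 3 is cyclic of order 6.
x, p = 3, 7
powers = {pow(x, a, p): a for a in range(6)}   # phi(x^a) = a

def phi(g):
    return powers[g]

# phi(g * h mod 7) == phi(g) + phi(h) mod 6 for all elements g, h:
G = list(powers)
print(all(phi(g * h % p) == (phi(g) + phi(h)) % 6 for g in G for h in G))  # True
</syntaxhighlight>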
Consequences.
From the definition, it follows that any isomorphism formula_5 will map the identity element of formula_3 to the identity element of formula_13
formula_58
that it will map inverses to inverses,
formula_59
and more generally, formula_45th powers to formula_45th powers,
formula_60
and that the inverse map formula_61 is also a group isomorphism.
The relation "being isomorphic" is an equivalence relation. If formula_41 is an isomorphism between two groups formula_3 and formula_13 then everything that is true about formula_3 that is only related to the group structure can be translated via formula_41 into a true ditto statement about formula_13 and vice versa.
Automorphisms.
An isomorphism from a group formula_0 to itself is called an automorphism of the group. Thus it is a bijection formula_62 such that
formula_63
The image under an automorphism of a conjugacy class is always a conjugacy class (the same or another).
The composition of two automorphisms is again an automorphism, and with this operation the set of all automorphisms of a group formula_20 denoted by formula_64 itself forms a group, the "automorphism group" of formula_49
For all abelian groups there is at least the automorphism that replaces the group elements by their inverses. However, in groups where all elements are equal to their inverses this is the trivial automorphism, e.g. in the Klein four-group. For that group all permutations of the three non-identity elements are automorphisms, so the automorphism group is isomorphic to formula_65 (which itself is isomorphic to formula_66).
In formula_67 for a prime number formula_68 one non-identity element can be replaced by any other, with corresponding changes in the other elements. The automorphism group is isomorphic to formula_69 For example, for formula_70 multiplying all elements of formula_71 by 3, modulo 7, is an automorphism of order 6 in the automorphism group, because formula_72 while lower powers do not give 1. Thus this automorphism generates formula_73 There is one more automorphism with this property: multiplying all elements of formula_71 by 5, modulo 7. Therefore, these two correspond to the elements 1 and 5 of formula_74 in that order or conversely.
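An illustrative computation of these orders; the code is only a sketch of the verification described above.
<syntaxhighlight lang="python">
def order_of_multiplier(k, n=7):
    """Order of the automorphism x -> k*x (mod n) of Z_n, i.e. the
    multiplicative order of k modulo n."""
    m, power = 1, k % n
    while power != 1:
        power = (power * k) % n
        m += 1
    return m

for k in range(1, 7):
    print(k, order_of_multiplier(k))
# k = 3 and k = 5 have order 6, so each generates the whole automorphism
# group of Z_7, which is therefore cyclic of order 6.
</syntaxhighlight>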
The automorphism group of formula_75 is isomorphic to formula_76 because only the two elements 1 and 5 generate formula_74 so apart from the identity we can only interchange these.
The automorphism group of formula_77 has order 168, as can be found as follows. All 7 non-identity elements play the same role, so we can choose which plays the role of formula_78 Any of the remaining 6 can be chosen to play the role of (0,1,0). This determines which element corresponds to formula_79 For formula_80 we can choose from 4, which determines the rest. Thus we have formula_81 automorphisms. They correspond to those of the Fano plane, of which the 7 points correspond to the 7 non-identity elements. The lines connecting three points correspond to the group operation: formula_82 and formula_83 on one line means formula_84 formula_85 and formula_86 See also general linear group over finite fields.
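An illustrative brute-force count reproducing this number: the automorphisms correspond to invertible 3-by-3 matrices over the field with two elements.
<syntaxhighlight lang="python">
from itertools import product

def rank_gf2(rows):
    """Rank over GF(2); each row of the matrix is encoded as a bit mask."""
    pivots = {}                      # leading-bit position -> pivot row
    for r in rows:
        for pos, piv in pivots.items():
            if (r >> pos) & 1:
                r ^= piv             # eliminate the bit at an existing pivot position
        if r:
            pivots[r.bit_length() - 1] = r
    return len(pivots)

# An automorphism of Z_2^3 is an invertible 3x3 matrix over the two-element field.
invertible = sum(1 for m in product(range(8), repeat=3) if rank_gf2(m) == 3)
print(invertible)                    # 168 = 7 * 6 * 4
</syntaxhighlight>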
For abelian groups, all non-trivial automorphisms are outer automorphisms.
Non-abelian groups have a non-trivial inner automorphism group, and possibly also outer automorphisms.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(G, *)"
},
{
"math_id": 1,
"text": "(H, \\odot),"
},
{
"math_id": 2,
"text": "(H, \\odot)"
},
{
"math_id": 3,
"text": "G"
},
{
"math_id": 4,
"text": "H."
},
{
"math_id": 5,
"text": "f : G \\to H"
},
{
"math_id": 6,
"text": "u"
},
{
"math_id": 7,
"text": "v"
},
{
"math_id": 8,
"text": "f(u * v) = f(u) \\odot f(v)."
},
{
"math_id": 9,
"text": "(G, *) \\cong (H, \\odot)."
},
{
"math_id": 10,
"text": "G \\cong H."
},
{
"math_id": 11,
"text": "G = H."
},
{
"math_id": 12,
"text": "(G, *),"
},
{
"math_id": 13,
"text": "H,"
},
{
"math_id": 14,
"text": "f : G \\to H,"
},
{
"math_id": 15,
"text": "H"
},
{
"math_id": 16,
"text": "f(u) \\odot f(v) = f(u * v)."
},
{
"math_id": 17,
"text": "H = G"
},
{
"math_id": 18,
"text": "\\odot = *"
},
{
"math_id": 19,
"text": "g"
},
{
"math_id": 20,
"text": "G,"
},
{
"math_id": 21,
"text": "h"
},
{
"math_id": 22,
"text": "h."
},
{
"math_id": 23,
"text": "(\\R, +)"
},
{
"math_id": 24,
"text": "(\\R^+, \\times)"
},
{
"math_id": 25,
"text": "(\\R, +) \\cong (\\R^+, \\times)"
},
{
"math_id": 26,
"text": "f(x) = e^x"
},
{
"math_id": 27,
"text": "\\Z"
},
{
"math_id": 28,
"text": "\\R,"
},
{
"math_id": 29,
"text": "\\R/\\Z"
},
{
"math_id": 30,
"text": "S^1"
},
{
"math_id": 31,
"text": "\\R/\\Z \\cong S^1"
},
{
"math_id": 32,
"text": "\\Z_2 = \\Z/2\\Z"
},
{
"math_id": 33,
"text": "\\Z_2 \\times \\Z_2."
},
{
"math_id": 34,
"text": "\\operatorname{Dih}_2,"
},
{
"math_id": 35,
"text": "n,"
},
{
"math_id": 36,
"text": "\\operatorname{Dih}_{2 n}"
},
{
"math_id": 37,
"text": "\\operatorname{Dih}_n"
},
{
"math_id": 38,
"text": "\\Z_2."
},
{
"math_id": 39,
"text": "(\\Complex, +)"
},
{
"math_id": 40,
"text": "(\\Complex^*, \\cdot)"
},
{
"math_id": 41,
"text": "f"
},
{
"math_id": 42,
"text": "a \\in G,"
},
{
"math_id": 43,
"text": "a"
},
{
"math_id": 44,
"text": "f(a)."
},
{
"math_id": 45,
"text": "n"
},
{
"math_id": 46,
"text": "(\\Z_n, +_n),"
},
{
"math_id": 47,
"text": "+_n"
},
{
"math_id": 48,
"text": "n."
},
{
"math_id": 49,
"text": "G."
},
{
"math_id": 50,
"text": "x"
},
{
"math_id": 51,
"text": "\\langle x \\rangle = \\left\\{e, x, \\ldots, x^{n-1}\\right\\}."
},
{
"math_id": 52,
"text": "G \\cong (\\Z_n, +_n)."
},
{
"math_id": 53,
"text": "\\varphi : G \\to \\Z_n = \\{0, 1, \\ldots, n - 1\\},"
},
{
"math_id": 54,
"text": "\\varphi(x^a) = a."
},
{
"math_id": 55,
"text": "\\varphi"
},
{
"math_id": 56,
"text": "\\varphi(x^a \\cdot x^b) = \\varphi(x^{a+b}) = a + b = \\varphi(x^a) +_n \\varphi(x^b),"
},
{
"math_id": 57,
"text": "G \\cong (\\Z_n, +_n)."
},
{
"math_id": 58,
"text": "f(e_G) = e_H,"
},
{
"math_id": 59,
"text": "f(u^{-1}) = f(u)^{-1} \\quad \\text{ for all } u \\in G,"
},
{
"math_id": 60,
"text": "f(u^n)= f(u)^n \\quad \\text{ for all } u \\in G,"
},
{
"math_id": 61,
"text": "f^{-1} : H \\to G"
},
{
"math_id": 62,
"text": "f : G \\to G"
},
{
"math_id": 63,
"text": "f(u) * f(v) = f(u * v)."
},
{
"math_id": 64,
"text": "\\operatorname{Aut}(G),"
},
{
"math_id": 65,
"text": "S_3"
},
{
"math_id": 66,
"text": "\\operatorname{Dih}_3"
},
{
"math_id": 67,
"text": "\\Z_p"
},
{
"math_id": 68,
"text": "p,"
},
{
"math_id": 69,
"text": "\\Z_{p-1}"
},
{
"math_id": 70,
"text": "n = 7,"
},
{
"math_id": 71,
"text": "\\Z_7"
},
{
"math_id": 72,
"text": "3^6 \\equiv 1 \\pmod 7,"
},
{
"math_id": 73,
"text": "\\Z_6."
},
{
"math_id": 74,
"text": "\\Z_6,"
},
{
"math_id": 75,
"text": "\\Z_6"
},
{
"math_id": 76,
"text": "\\Z_2,"
},
{
"math_id": 77,
"text": "\\Z_2 \\oplus \\Z_2 \\oplus \\Z_2 = \\operatorname{Dih}_2 \\oplus \\Z_2"
},
{
"math_id": 78,
"text": "(1,0,0)."
},
{
"math_id": 79,
"text": "(1,1,0)."
},
{
"math_id": 80,
"text": "(0,0,1)"
},
{
"math_id": 81,
"text": "7 \\times 6 \\times 4 = 168"
},
{
"math_id": 82,
"text": "a, b,"
},
{
"math_id": 83,
"text": "c"
},
{
"math_id": 84,
"text": "a + b = c,"
},
{
"math_id": 85,
"text": "a + c = b,"
},
{
"math_id": 86,
"text": "b + c = a."
}
] | https://en.wikipedia.org/wiki?curid=12397 |
12398202 | Counterpart theory | In philosophy, specifically in the area of metaphysics, counterpart theory is an alternative to standard (Kripkean) possible-worlds semantics for interpreting quantified modal logic. Counterpart theory still presupposes possible worlds, but differs in certain important respects from the Kripkean view. The form of the theory most commonly cited was developed by David Lewis, first in a paper and later in his book "On the Plurality of Worlds".
Differences from the Kripkean view.
Counterpart theory (hereafter "CT"), as formulated by Lewis, requires that individuals exist in only one world. The standard account of possible worlds assumes that a modal statement about an individual (e.g., "it is possible that x is y") means that there is a possible world, W, where the individual x has the property y; in this case there is only one individual, x, at issue. On the contrary, counterpart theory supposes that this statement is really saying that there is a possible world, W, wherein exists an individual that is not x itself, but rather a distinct individual 'x' different from but nonetheless similar to x. So, when I state that I might have been a banker (rather than a philosopher) according to counterpart theory I am saying not that I exist in another possible world where I am a banker, but rather my counterpart does. Nevertheless, this statement about my counterpart is still held to ground the truth of the statement that I might have been a banker. The requirement that any individual exist in only one world is to avoid what Lewis termed the "problem of accidental intrinsics" which (he held) would require a single individual to both have and simultaneously not have particular properties.
The counterpart theoretic formalization of modal discourse also departs from the standard formulation by eschewing use of modality operators (Necessarily, Possibly) in favor of quantifiers that range over worlds and 'counterparts' of individuals in those worlds. Lewis put forth a set of primitive predicates and a number of axioms governing CT and a scheme for translating standard modal claims in the language of quantified modal logic into his CT.
In addition to interpreting modal claims about objects and possible worlds, CT can also be applied to the identity of a single object at different points in time. The view that an object can retain its identity over time is often called endurantism, and it claims that objects are ‘wholly present’ at different moments (see the counterpart relation, below). An opposing view is that any object in time is made up of temporal parts or is perduring.
Lewis' view on possible worlds is sometimes called modal realism.
The basics.
The possibilities that CT is supposed to describe are “ways a world might be” (Lewis 1986:86) or more exactly:
(1) absolutely every way that a world could possibly be is a way that some world is, and
(2) absolutely every way that a part of a world could possibly be is a way that some part of some world is. (Lewis 1986:86.)
Add also the following “principle of recombination,” which Lewis describes this way: “patching together parts of different possible worlds yields another possible world […]. [A]nything can coexist with anything else, […] provided they occupy distinct spatiotemporal positions.” (Lewis 1986:87-88). But these possibilities should be restricted by CT.
The counterpart relation.
The counterpart relation (hereafter C-relation) differs from the notion of identity. Identity is a reflexive, symmetric, and transitive relation. The counterpart relation is only a similarity relation; it needn’t be transitive or symmetric. The C-relation is also known as genidentity (Carnap 1967), I-relation (Lewis 1983), and the unity relation (Perry 1975).
If identity is shared between objects in different possible worlds then the same object can be said to exist in different possible worlds (a "trans-world" object, that is, a series of objects sharing a single identity).
Parthood relation.
An important part of the way Lewis’s worlds deliver possibilities is the use of the parthood relation. This gives some neat formal machinery, mereology. This is an axiomatic system that uses formal logic to describe the relationship between parts and wholes, and between parts within a whole. Especially important, and most reasonable, according to Lewis, is the strongest form that accepts the existence of mereological sums or the thesis of unrestricted mereological composition (Lewis 1986:211-213).
The formal theory.
As a formal theory, counterpart theory can be used to translate sentences into modal quantificational logic. Sentences that seem to be quantifying over possible individuals should be translated into CT. (Explicit primitives and axioms have not yet been stated for the temporal or spatial use of CT.) Let CT be stated in quantificational logic and contain the following primitives:
formula_0 (formula_1 is a possible world)
formula_2 (formula_1 is in possible world formula_3)
formula_4 (formula_1 is actual)
formula_5 (formula_1 is a counterpart of formula_3)
We have the following axioms (taken from Lewis 1968):
A1. formula_6
(Nothing is in anything except a world)
A2. formula_7
(Nothing is in two worlds)
A3. formula_8
(Whatever is a counterpart is in a world)
A4. formula_9
(Whatever has a counterpart is in a world)
A5. formula_10
(Nothing is a counterpart of anything else in its world)
A6. formula_11
(Anything in a world is a counterpart of itself)
A7. formula_12
(Some world contains all and only actual things)
A8. formula_13
(Something is actual)
It is an uncontroversial assumption to assume that the primitives and the axioms A1 through A8 make the standard counterpart system.
R1. formula_14
(Symmetry of the counterpart relation)
R2. formula_15
(Transitivity of the counterpart relation)
R3. formula_16
(Nothing in any world has more than one counterpart in any other world)
R4. formula_17
(No two things in any world have a common counterpart in any other world)
R5. formula_18
(For any two worlds, anything in one is a counterpart of something in the other)
R6. formula_19
(For any two worlds, anything in one has some counterpart in the other)
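The following is an illustrative computational sketch, not part of Lewis's presentation: a small finite model of the primitives above, with two worlds and three individuals chosen arbitrarily, on which the axioms A1 through A8 can be verified by brute force.
<syntaxhighlight lang="python">
from itertools import product

# A toy finite model of counterpart theory (illustrative only).
worlds      = {'w1', 'w2'}
individuals = {'a', 'b', 'c'}
domain      = worlds | individuals

in_world    = {('a', 'w1'), ('b', 'w2'), ('c', 'w2')}   # Ixy: x is in world y
actual      = {'a'}                                     # Ax: the actual things
counterpart = {('a', 'a'), ('b', 'b'), ('c', 'c'), ('b', 'a'), ('a', 'b')}  # Cxy

is_world = lambda x: x in worlds
I = lambda x, y: (x, y) in in_world
C = lambda x, y: (x, y) in counterpart
A = lambda x: x in actual

axioms = {
    "A1 nothing is in anything except a world":
        all(not I(x, y) or is_world(y) for x, y in product(domain, repeat=2)),
    "A2 nothing is in two worlds":
        all(not (I(x, y) and I(x, z)) or y == z
            for x, y, z in product(domain, repeat=3)),
    "A3 whatever is a counterpart is in a world":
        all(not C(x, y) or any(I(x, w) for w in worlds)
            for x, y in product(domain, repeat=2)),
    "A4 whatever has a counterpart is in a world":
        all(not C(x, y) or any(I(y, w) for w in worlds)
            for x, y in product(domain, repeat=2)),
    "A5 nothing is a counterpart of anything else in its world":
        all(not (I(x, w) and I(y, w) and C(x, y)) or x == y
            for x, y, w in product(domain, domain, worlds)),
    "A6 anything in a world is a counterpart of itself":
        all(not I(x, w) or C(x, x) for x, w in product(domain, worlds)),
    "A7 some world contains all and only actual things":
        any(all(I(x, w) == A(x) for x in domain) for w in worlds),
    "A8 something is actual":
        any(A(x) for x in domain),
}
for name, holds in axioms.items():
    print(holds, name)
</syntaxhighlight>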
Motivations for counterpart theory.
CT can be applied to the relationship between identical objects in different worlds or at different times. Depending on the subject, there are different reasons for accepting CT as a description of the relation between different entities.
In possible worlds.
David Lewis defended modal realism. This is the view that a possible world is a concrete, maximal connected spatio-temporal region. The actual world is one of the possible worlds; it is also concrete. Because a single concrete object demands spatio-temporal connectedness, a possible concrete object can only exist in one possible world. Still, we say true things like: It is possible that Hubert Humphrey won the 1968 US presidential election. How is it true? Humphrey has a counterpart in another possible world that wins the 1968 election in that world.
Lewis also argues against three other alternatives that might be compatible with possibilism: overlapping individuals, trans-world individuals, and haecceity.
Some philosophers, such as Peter van Inwagen (1985), see no problem with identity within a world. Lewis seems to share this attitude. He says:
"… like the Holy Roman Empire, it is badly named. […] In the first place we should bear in mind that Trans-World Airlines is an intercontinental, but not as yet an interplanetary carrier. More important, we should not suppose that we have here any problem with "identity".
We never have. Identity is utterly simple and unproblematic. Everything is identical to itself; nothing is ever identical to anything else except itself. There is never any problem about what makes something identical to itself; nothing can ever fail to be. And there is never any problem about what makes two things identical; two things never can be identical.
There might be a problem about how to define identity to someone sufficiently lacking in conceptual resources — we note that it won't suffice to teach him certain rules of inference — but since such unfortunates are rare, even among philosophers, we needn't worry much if their condition is incurable.
We "do" state plenty of genuine problems in terms of identity. But we "needn't" state them so.” (Lewis 1986:192-193)
Overlapping individuals.
An overlapping individual has a part in the actual world and a part in another world. Because identity is not problematic, we get overlapping individuals by having overlapping worlds. Two worlds overlap if they share a common part. But some properties of overlapping objects are, for Lewis, troublesome (Lewis 1986:199-210).
The problem is with an object’s accidental intrinsic properties, such as shape and weight, which supervene on its parts. Humphrey could have the property of having six fingers on his left hand. How does he do that? It can’t be true that Humphrey has both the property of having six fingers on his left hand and the property of having five fingers on his left hand. What we might say is that he has five fingers "at this" world and six fingers "at that" world. But how should these modifiers be understood?
According to McDaniel (2004), if Lewis is right, the defender of overlapping individuals has to accept genuine contradictions or defend the view that every object has all its properties essentially.
How can you be one year older than you are? One way is to say that there is a possible world in which you yourself exist and are one year older. Another way is for you to have a counterpart in that possible world, who has the property of being one year older than you.
Trans-world individuals.
Take Humphrey: if he is a trans-world individual he is the mereological sum of all of the possible Humphreys in the different worlds. He is like a road that goes through different regions. There are parts that overlap, but we can also say that there is a northern part that is connected to the southern part and that the road is the mereological sum of these parts. The same thing with Humphrey. One part of him is in one world, another part in another world.
"It is possible for something to exist if it is possible for the whole to exist. That is, if there is a world at which the whole of it exists. That is, if there is a world such that quantifying only over parts of that world, the whole of it exists. That is, if the whole of it is among the parts of some world. That is, if it is part of some world – and hence not a trans-world individual. Parts of worlds are "possible" individuals; trans-world individuals are therefore "impossible" individuals."
Haecceity.
A haecceity or individual essence is a property that only a single object instantiates. Ordinary properties, if one accepts the existence of universals, can be exemplified by more than one object at a time. Another way to explain a haecceity is to distinguish between "suchness" and "thisness", where thisness has a more demonstrative character.
David Lewis gives the following definition of a haecceitistic difference: “two worlds differ in what they represent "de re" concerning some individual, but do not differ qualitatively in any way.” (Lewis 1986:221.)
CT does not require distinct worlds for distinct possibilities – “a single world may provide many possibilities, since many possible individuals inhabit it” (Lewis 1986:230). CT thus allows for multiple counterparts of a single individual within one possible world.
Temporal parts.
Perdurantism is the view that material objects are not wholly present at any single instant of time; instead, only a temporal part of the object is said to be present at each instant. Sometimes, especially in the theory of relativity as it is expressed by Minkowski, the object is identified with the whole path it traces through spacetime. According to Ted Sider, “Temporal parts theory is the claim that time is like space in one particular respect, namely, with respect to parts.” Sider associates his stage view with a C-relation between temporal parts.
Sider defends a revised way of counting. Instead of counting individual objects, time-slices or the temporal parts of an object are counted. Sider discusses an example of counting road segments instead of roads simpliciter (Sider 2001:188-192). (Compare with Lewis 1993.) Sider argues that, even if we knew that some material object would undergo fission and split into two, we would not "say" that there are two objects located in the same spacetime region. (Sider 2001:189)
How can one predicate temporal properties of these momentary temporal parts? It is here that the C-relation comes into play. Sider proposed the sentence: "Ted was once a boy." The truth condition of this sentence is that "there exists some person stage x prior to the time of utterance, such that x is a boy, and x bears the temporal counterpart relation to Ted." (Sider 2001:193)
Counterpart theory and the necessity of identity.
Kripke's three lectures on proper names and identity (1980) raised the issue of how we should interpret statements about identity. Take the statement that the Evening Star is identical to the Morning Star. Both are the planet Venus. This seems to be an "a posteriori" identity statement. We discover that the names designate the same thing. The traditional view, since Kant, has been that statements or propositions that are necessarily true are a priori. But at the end of the sixties Saul Kripke and Ruth Barcan Marcus offered proofs of the necessary truth of identity statements. Here is Kripke's version (Kripke 1971):
(1) formula_20 [Necessity of self-identity]
(2) formula_21 [Leibniz's law]
(3) formula_22 [From (1) and (2)]
(4) formula_23 [From (3) and the following principle: formula_24]
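The step from (2) to (3) is standardly glossed by instantiating the property variable in Leibniz's law with the property of being necessarily identical to a given thing (a reconstruction added here for readability; it is not part of Kripke's quoted proof):

```latex
P(z) \;:=\; \Box\,(x = z), \qquad\text{so that}\qquad
Px \text{ is } \Box\, x = x \quad\text{and}\quad Py \text{ is } \Box\, x = y
```

Since (1) guarantees the truth of the inner antecedent in (3), it can be discharged, which is what the principle cited in step (4) accomplishes.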
If the proof is correct, the distinction between the a priori/a posteriori and the necessary/contingent becomes less clear: a statement such as “Water is identical to H2O” would then be necessarily true but a posteriori. (For some interesting comments on the proof, see Lowe 2002.) If CT is the correct account of modal properties, we can still keep the intuition that such identity statements are contingent and a posteriori, because counterpart theory understands the modal operator in a different way than standard modal logic.
The relationship between CT and essentialism is of interest. (Essentialism, the necessity of identity, and rigid designators form an important troika of mutual interdependence.) According to David Lewis, claims about an object's essential properties can be true or false depending on context (in chapter 4.5 of Lewis 1986 he argues against constancy, that is, against an absolute conception of essences that stays constant over the logical space of possibilities). He writes:
But if I ask how things would be if Saul Kripke had come from no sperm and egg but had been brought by a stork, that makes equally good sense. I create a context that makes my question make sense, and to do so it has to be a context that makes origins not be essential. (Lewis 1986:252.)
Counterpart theory and rigid designators.
Kripke interpreted proper names as rigid designators where a rigid designator picks out the same object in every possible world (Kripke 1980). For someone who accepts contingent identity statements the following semantic problem occurs (semantic because we deal with de dicto necessity) (Rea 1997:xxxvii).
Take a scenario that is mentioned in the paradox of coincidence. A statue (call it “Statue”) is made by melding two pieces of clay together. Those two pieces together are called “Clay”. Statue and Clay seem to be identical: they exist at the same time, and we could incinerate them at the same time. The following seems true:
(7) Necessarily, if Statue exists then Statue is identical to Statue.
But,
(8) Necessarily, if Statue exists then Statue is identical to Clay
is false, because it seems possible that Statue could have been made out of two different pieces of clay, and thus its identity to Clay is not necessary.
Counterpart theory, qua-identity, and individual concepts can offer solutions to this problem.
Arguments for inconstancy.
Ted Sider gives roughly the following argument (Sider 2001:223). There is inconstancy if a proposition about the essence of an object is true in one context and false in another. The C-relation is a similarity relation, and what counts as similar in one respect need not count as similar in another. Therefore, the C-relation can vary from context to context in the same way, and so it can underwrite inconstant judgements about essences.
David Lewis offers another argument. The paradox of coincidence can be solved if we accept inconstancy. We can then say that it is possible for a dishpan and a piece of plastic to coincide, in some context. That context can then be described using CT.
Sider makes the point that David Lewis feels he was forced to defend CT, due to modal realism. Sider uses CT as a solution to the paradox of material coincidence.
Counterpart theory compared to qua-theory and individual concepts.
We assume that contingent identity is real. Then it is informative to compare CT with other theories about how to handle "de re" representations.
"Qua-theory"
Kit Fine (1982) and Alan Gibbard (1975) (according to Rea 1997) offer defences of qua-theory. According to qua-theory we can talk about some of an object's modal properties. The theory is handy if we don't think it is possible for Socrates to be identical with a piece of bread or a stone. Socrates qua person is essentially a person.
"Individual concepts"
According to Rudolf Carnap, in modal contexts variables refer to individual concepts instead of individuals. An individual concept is then defined as a function that assigns to each possible world an individual. Basically, individual concepts deliver semantic objects or abstract functions instead of real concrete entities as in CT.
Counterpart theory and epistemic possibility.
Kripke accepts the necessity of identity but agrees with the feeling that it still seems that it is possible that Phosphorus (the Morning Star) is not identical to Hesperus (the Evening Star). For all we know, it could be that they are different. He says:
What, then, does the intuition that the table might have turned out to have been made of ice or of anything else, that it might even have turned out not to be made of molecules, amount to? I think that it means simply that there might have been a table looking and feeling just like this one and placed in this very position in the room, which was in fact made of ice. In other words, I (or some conscious being) could have been qualitatively in the same epistemic situation that in fact obtains, I could have the same sensory evidence that I in fact have, about a table which was made of ice. The situation is thus akin to the one which inspired the counterpart theorists; when I speak of the possibility of the table turning out to be made of various things, I am speaking loosely. This table itself could not have had an origin different from the one it in fact had, but in a situation qualitatively identical to this one with respect to all evidence I had in advance, the room could have contained a table made of ice in place of this one. Something like counterpart theory is thus applicable to the situation, but it applies only because we are not interested in what might not be true of a table given certain evidence. It is precisely because it is not true that this table might have been made of ice from the Thames that we must turn here to qualitative descriptions and counterparts. To apply these notions to genuine de re modalities is, from the present standpoint, perverse. (Kripke 1980:142.)
So, according to Kripke, something like CT is one way to explain how the illusion of contingency is possible. Therefore, CT forms an important part of our theory about the knowledge of modal intuitions. (For doubt about this strategy, see Della Rocca, 2002. And for more about the knowledge of modal statements, see Gendler and Hawthorne, 2002.)
Arguments against counterpart theory.
The most famous is Kripke's Humphrey objection. Because, on CT, an individual is never identical to its counterparts in other possible worlds, Kripke raised the following objection against CT:
Thus if we say "Humphrey might have won the election" (if only he had done such-and-such), we are not talking about something that might have happened to "Humphrey" but to someone else, a "counterpart". Probably, however, Humphrey could not care less whether someone "else", no matter how much resembling him, would have been victorious in another possible world. Thus, Lewis's view seems to me even more bizarre than the usual notions of transworld identification that it replaces. (Kripke 1980:45 note 13.)
One way to spell out the meaning of Kripke's claim is by the following imaginary dialogue: (Based on Sider MS)
Against: Kripke means that Humphrey himself doesn’t have the property of possibly winning the election, because it is only the counterpart that wins.
For: The property of possibly winning the election is the property of the counterpart.
Against: But they can't be the same property, because Humphrey has different attitudes towards them: he cares about whether he himself has the property of possibly winning the election; he doesn’t care about whether the counterpart has the property of possibly winning the election.
For: But properties don't work the same way as objects, our attitudes towards them can be different, because we have different descriptions – they are still the same properties. That lesson is taught by the paradox of analysis.
CT is inadequate if it can't translate all modal sentences or intuitions. Fred Feldman mentioned two sentences (Feldman 1971):
(1) I could have been quite unlike what I in fact am.
(2) I could have been more like what you in fact are than like what I in fact am. At the same time, you could have been more like what I in fact am than what you in fact are.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Wx"
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "Ixy"
},
{
"math_id": 3,
"text": "y"
},
{
"math_id": 4,
"text": "Ax"
},
{
"math_id": 5,
"text": "Cxy"
},
{
"math_id": 6,
"text": "(Ixy \\rightarrow Wy)"
},
{
"math_id": 7,
"text": "((Ixy \\land Ixz) \\rightarrow y=z)"
},
{
"math_id": 8,
"text": "(Cxy \\rightarrow \\exists z\\, Ixz)"
},
{
"math_id": 9,
"text": "(Cxy \\rightarrow \\exists z\\, Iyz)"
},
{
"math_id": 10,
"text": "((Ixy \\land Izy \\land Cxz) \\rightarrow x=z)"
},
{
"math_id": 11,
"text": "(Ixy \\rightarrow Cxx)"
},
{
"math_id": 12,
"text": "\\exists x\\, (Wx \\land \\forall y\\, (Iyx \\leftrightarrow Ay))"
},
{
"math_id": 13,
"text": "\\exists x\\, Ax"
},
{
"math_id": 14,
"text": "(Cxy \\rightarrow Cyx)"
},
{
"math_id": 15,
"text": "((Cxy \\land Cyz) \\rightarrow Cxz)"
},
{
"math_id": 16,
"text": "((Cy_1x \\land Cy_2x \\land Iy_1w_1 \\land Iy_2w_2 \\land y_1 \\neq y_2) \\rightarrow w_1 \\neq w_2)"
},
{
"math_id": 17,
"text": "((Cyx_1 \\land Cyx_2 \\land Ix_1w_1 \\land Ix_2w_2 \\land x_1 \\neq x_2) \\rightarrow w_1 \\neq w_2)"
},
{
"math_id": 18,
"text": "((Ww_1 \\land Ww_2 \\land Ixw_1) \\rightarrow \\exists y\\, (Iyw_2 \\land Cxy))"
},
{
"math_id": 19,
"text": "((Ww_1 \\land Ww_2 \\land Ixw_1) \\rightarrow \\exists y\\, (Iyw_2 \\land Cyx))"
},
{
"math_id": 20,
"text": "\\forall x\\, \\Box\\, x = x"
},
{
"math_id": 21,
"text": "\\forall x\\, \\forall y\\, (x = y \\rightarrow \\forall P\\, (Px \\rightarrow Py))"
},
{
"math_id": 22,
"text": "\\forall x\\, \\forall y\\ (x = y \\rightarrow (\\Box\\, x = x \\rightarrow \\Box\\, x = y))"
},
{
"math_id": 23,
"text": "\\forall x\\, \\forall y\\, (x = y \\rightarrow \\Box\\, x = y)"
},
{
"math_id": 24,
"text": "(\\varphi \\rightarrow (\\top \\rightarrow \\psi)) \\Rightarrow (\\varphi \\rightarrow \\psi)"
}
] | https://en.wikipedia.org/wiki?curid=12398202 |
1240011 | Dirichlet's principle | In mathematics, and particularly in potential theory, Dirichlet's principle is the assumption that the minimizer of a certain energy functional is a solution to Poisson's equation.
Formal statement.
Dirichlet's principle states that, if the function formula_0 is the solution to Poisson's equation
formula_1
on a domain formula_2 of formula_3 with boundary condition
formula_4 on the boundary formula_5,
then "u" can be obtained as the minimizer of the Dirichlet energy
formula_6
amongst all twice differentiable functions formula_7 such that formula_8 on formula_5 (provided that there exists at least one function making the Dirichlet integral finite). This concept is named after the German mathematician Peter Gustav Lejeune Dirichlet.
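The principle can be illustrated numerically in one dimension. The following sketch (an illustration only; the grid size, source term, and boundary values are arbitrary choices, and it assumes NumPy and SciPy are available) minimizes a discretized Dirichlet energy over functions with fixed boundary values and compares the minimizer with the finite-difference solution of the corresponding Poisson equation formula_1.

```python
import numpy as np
from scipy.optimize import minimize

n = 51                      # grid points on [0, 1]
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.sin(np.pi * x)       # source term f (arbitrary choice)
g0, g1 = 0.0, 0.5           # boundary values u(0) and u(1)

def dirichlet_energy(interior):
    """Discretized E[v] = sum over the grid of (1/2)|v'|^2 - v*f."""
    v = np.concatenate(([g0], interior, [g1]))
    grad_sq = ((v[1:] - v[:-1]) / h) ** 2
    return np.sum(0.5 * grad_sq * h) - np.sum(v * f * h)

# Direct method: minimize the energy over the interior nodal values.
res = minimize(dirichlet_energy, np.zeros(n - 2), method="L-BFGS-B")
u_min = np.concatenate(([g0], res.x, [g1]))

# Finite-difference solution of -u'' = f with the same boundary values.
A = (2.0 * np.eye(n - 2) - np.eye(n - 2, k=1) - np.eye(n - 2, k=-1)) / h**2
b = f[1:-1].copy()
b[0] += g0 / h**2
b[-1] += g1 / h**2
u_pde = np.concatenate(([g0], np.linalg.solve(A, b), [g1]))

print("max |minimizer - Poisson solution| =", np.max(np.abs(u_min - u_pde)))
```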
History.
The name "Dirichlet's principle" is due to Bernhard Riemann, who applied it in the study of complex analytic functions.
Riemann (and others such as Carl Friedrich Gauss and Peter Gustav Lejeune Dirichlet) knew that Dirichlet's integral is bounded below, which establishes the existence of an infimum; however, he took for granted the existence of a function that attains the minimum. Karl Weierstrass published the first criticism of this assumption in 1870, giving an example of a functional that has a greatest lower bound which is not a minimum value. Weierstrass's example was the functional
formula_9
where formula_10 is continuous on formula_11, continuously differentiable on formula_12, and subject to boundary conditions formula_13, formula_14 where formula_15 and formula_16 are constants and formula_17. Weierstrass showed that formula_18, but no admissible function formula_10 can make formula_19 equal 0. This example did not disprove Dirichlet's principle "per se", since the example integral is different from Dirichlet's integral. But it did undermine the reasoning that Riemann had used, and spurred interest in proving Dirichlet's principle as well as broader advancements in the calculus of variations and ultimately functional analysis.
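A standard way to see that the infimum is zero (a sketch; this particular minimizing sequence is just one convenient choice) is to take

```latex
\varphi_n(x) \;=\; \frac{a+b}{2} \;+\; \frac{b-a}{2}\,\frac{\arctan(nx)}{\arctan(n)}, \qquad n = 1, 2, 3, \dots
```

These functions satisfy the boundary conditions exactly, and the value of the functional tends to 0 as n grows, because the derivative of formula_10 is concentrated in a shrinking neighbourhood of 0, where the factor x in the integrand is small. On the other hand, a value of exactly 0 would force formula_10 to be constant on each of (−1, 0) and (0, 1); continuity at 0 would then make it constant on all of formula_11, contradicting the condition formula_17.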
In 1900, Hilbert justified Riemann's use of Dirichlet's principle by developing the direct method in the calculus of variations.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " u ( x ) "
},
{
"math_id": 1,
"text": "\\Delta u + f = 0"
},
{
"math_id": 2,
"text": "\\Omega"
},
{
"math_id": 3,
"text": "\\mathbb{R}^n"
},
{
"math_id": 4,
"text": "u=g"
},
{
"math_id": 5,
"text": "\\partial\\Omega"
},
{
"math_id": 6,
"text": "E[v(x)] = \\int_\\Omega \\left(\\frac{1}{2}|\\nabla v|^2 - vf\\right)\\,\\mathrm{d}x"
},
{
"math_id": 7,
"text": "v"
},
{
"math_id": 8,
"text": "v=g"
},
{
"math_id": 9,
"text": "J(\\varphi) = \\int_{-1}^{1} \\left( x \\frac{d\\varphi}{dx} \\right)^2 \\, dx "
},
{
"math_id": 10,
"text": "\\varphi"
},
{
"math_id": 11,
"text": "[-1,1]"
},
{
"math_id": 12,
"text": "(-1,1)"
},
{
"math_id": 13,
"text": "\\varphi(-1)=a"
},
{
"math_id": 14,
"text": "\\varphi(1)=b"
},
{
"math_id": 15,
"text": "a"
},
{
"math_id": 16,
"text": "b"
},
{
"math_id": 17,
"text": "a \\ne b"
},
{
"math_id": 18,
"text": "\\textstyle \\inf_\\varphi J(\\varphi) = 0"
},
{
"math_id": 19,
"text": "J(\\varphi)"
}
] | https://en.wikipedia.org/wiki?curid=1240011 |
1240093 | Effective action | Quantum version of the classical action
In quantum field theory, the quantum effective action is a modified expression for the classical action taking into account quantum corrections while ensuring that the principle of least action applies, meaning that extremizing the effective action yields the equations of motion for the vacuum expectation values of the quantum fields. The effective action also acts as a generating functional for one-particle irreducible correlation functions. The potential component of the effective action is called the effective potential, with the expectation value of the true vacuum being the minimum of this potential rather than the classical potential, making it important for studying spontaneous symmetry breaking.
It was first defined perturbatively by Jeffrey Goldstone and Steven Weinberg in 1962, while the non-perturbative definition was introduced by Bryce DeWitt in 1963 and independently by Giovanni Jona-Lasinio in 1964.
The article describes the effective action for a single scalar field; however, similar results exist for multiple scalar or fermionic fields.
Generating functionals.
"These generating functionals also have applications in statistical mechanics and information theory, with slightly different factors of formula_0 and sign conventions."
A quantum field theory with action formula_1 can be fully described in the path integral formalism using the partition functional
formula_2
Since it corresponds to vacuum-to-vacuum transitions in the presence of a classical external current formula_3, it can be evaluated perturbatively as the sum of all connected and disconnected Feynman diagrams. It is also the generating functional for correlation functions
formula_4
where the scalar field operators are denoted by formula_5. One can define another useful generating functional formula_6 responsible for generating connected correlation functions
formula_7
which is calculated perturbatively as the sum of all connected diagrams. Here connected is interpreted in the sense of the cluster decomposition, meaning that the correlation functions approach zero at large spacelike separations. General correlation functions can always be written as a sum of products of connected correlation functions.
The quantum effective action is defined using the Legendre transformation of formula_8
formula_9
where formula_10 is the source current for which the scalar field has the expectation value formula_11, often called the classical field, defined implicitly as the solution to
formula_12
As an expectation value, the classical field can be thought of as the weighted average over quantum fluctuations in the presence of a current formula_3 that sources the scalar field. Taking the functional derivative of the Legendre transformation with respect to formula_11 yields
formula_13
In the absence of a source, formula_14, the above shows that the vacuum expectation values of the fields extremize the quantum effective action rather than the classical action. This is nothing more than the principle of least action in the full quantum field theory. The reason why the quantum theory requires this modification comes from the path integral perspective, since all possible field configurations contribute to the path integral, while in classical field theory only the classical configurations contribute.
The effective action is also the generating functional for one-particle irreducible (1PI) correlation functions. 1PI diagrams are connected graphs that cannot be disconnected into two pieces by cutting a single internal line. Therefore, we have
formula_15
with formula_16 being the sum of all 1PI Feynman diagrams. The close connection between formula_8 and formula_16 means that there are a number of very useful relations between their correlation functions. For example, the two-point correlation function, which is nothing less than the propagator formula_17, is the inverse of the 1PI two-point correlation function
formula_18
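The Legendre-transform relations above can be made concrete in a zero-dimensional toy model, where the path integral collapses to an ordinary integral. The following sketch (an illustration only: it works in a Euclidean convention, so the factors of formula_0 and some signs differ from the formulas above, and the action and couplings are arbitrary choices) computes formula_8, the classical field, and the effective action numerically, and checks the resulting quantum equation of motion.

```python
import numpy as np
from scipy.integrate import quad

def S(phi, m2=1.0, lam=0.6):
    """Classical (Euclidean) action of the toy model; quartic coupling is illustrative."""
    return 0.5 * m2 * phi**2 + lam * phi**4 / 24.0

def W(J):
    """Generating functional of connected functions, W(J) = log Z(J)."""
    Z, _ = quad(lambda phi: np.exp(-S(phi) + J * phi), -10, 10)
    return np.log(Z)

Js = np.linspace(-2.0, 2.0, 201)
Ws = np.array([W(J) for J in Js])

# Classical field phi(J) = dW/dJ, and the effective action via the Legendre
# transform Gamma(phi) = J*phi - W(J) (Euclidean sign convention).
phis = np.gradient(Ws, Js)
Gammas = Js * phis - Ws

# In this convention the quantum equation of motion reads dGamma/dphi = J;
# check it at an interior grid point.
dGamma_dphi = np.gradient(Gammas, phis)
k = 150
print("J =", Js[k], " dGamma/dphi =", dGamma_dphi[k])
```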
Methods for calculating the effective action.
A direct way to calculate the effective action formula_19 perturbatively as a sum of 1PI diagrams is to sum over all 1PI vacuum diagrams acquired using the Feynman rules derived from the shifted action formula_20. This works because any place where formula_21 appears in any of the propagators or vertices is a place where an external formula_22 line could be attached. This is very similar to the background field method which can also be used to calculate the effective action.
Alternatively, the one-loop approximation to the action can be found by considering the expansion of the partition function around the classical vacuum expectation value field configuration formula_23, yielding
formula_24
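For a constant field configuration the trace in this one-loop formula reduces to a momentum integral. In Euclidean signature, and up to field-independent constants and the counterterms needed to regularize the divergent integral, this gives the familiar one-loop effective potential (a standard result quoted here for orientation rather than derived):

```latex
V_{\text{eff}}(\phi_{\text{cl}}) \;=\; V(\phi_{\text{cl}})
\;+\; \frac{1}{2}\int \frac{d^4 k_E}{(2\pi)^4}\,
\ln\!\big(k_E^2 + V''(\phi_{\text{cl}})\big) \;+\; \cdots
```

where V is the classical potential appearing in the action formula_1.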
Symmetries.
Symmetries of the classical action formula_1 are not automatically symmetries of the quantum effective action formula_16. If the classical action has a continuous symmetry depending on some functional formula_25
formula_26
then this directly imposes the constraint
formula_27
This identity is an example of a Slavnov–Taylor identity. It is identical to the requirement that the effective action is invariant under the symmetry transformation
formula_28
This symmetry is identical to the original symmetry for the important class of linear symmetries
formula_29
For non-linear functionals the two symmetries generally differ because the average of a non-linear functional is not equivalent to the functional of an average.
Convexity.
For a spacetime with volume formula_32, the effective potential is defined as formula_33. With a Hamiltonian formula_34, the effective potential formula_31 at formula_11 always gives the minimum of the expectation value of the energy density formula_35 for the set of states formula_36 satisfying formula_37. This definition over multiple states is necessary because multiple different states, each of which corresponds to a particular source current, may result in the same expectation value. It can further be shown that the effective potential is necessarily a convex function formula_38.
Calculating the effective potential perturbatively can sometimes yield a non-convex result, such as a potential that has two local minima. However, the true effective potential is still convex, becoming approximately linear in the region where the apparent effective potential fails to be convex. The contradiction occurs in calculations around unstable vacua since perturbation theory necessarily assumes that the vacuum is stable. For example, consider an apparent effective potential formula_30 with two local minima whose expectation values formula_39 and formula_40 are the expectation values for the states formula_41 and formula_42, respectively. Then any formula_22 in the non-convex region of formula_30 can also be acquired for some formula_43 using
formula_44
However, the energy density of this state is formula_45 meaning formula_30 cannot be the correct effective potential at formula_22 since it did not minimize the energy density. Rather the true effective potential formula_31 is equal to or lower than this linear construction, which restores convexity.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "i"
},
{
"math_id": 1,
"text": "S[\\phi]"
},
{
"math_id": 2,
"text": "\nZ[J] = \\int \\mathcal D \\phi e^{iS[\\phi] + i \\int d^4 x \\phi(x)J(x)}.\n"
},
{
"math_id": 3,
"text": "J(x)"
},
{
"math_id": 4,
"text": "\n\\langle \\hat \\phi(x_1) \\dots \\hat \\phi(x_n)\\rangle = (-i)^n \\frac{1}{Z[J]} \\frac{\\delta^n Z[J]}{\\delta J(x_1) \\dots \\delta J(x_n)}\\bigg|_{J=0},\n"
},
{
"math_id": 5,
"text": "\\hat \\phi(x)"
},
{
"math_id": 6,
"text": "W[J] = -i\\ln Z[J]"
},
{
"math_id": 7,
"text": "\n\\langle \\hat \\phi(x_1) \\cdots \\hat \\phi(x_n)\\rangle_{\\text{con}} = (-i)^{n-1}\\frac{\\delta^n W[J]}{\\delta J(x_1) \\dots \\delta J(x_n)}\\bigg|_{J=0},\n"
},
{
"math_id": 8,
"text": "W[J]"
},
{
"math_id": 9,
"text": "\\Gamma[\\phi] = W[J_\\phi] - \\int d^4 x J_\\phi(x) \\phi(x),"
},
{
"math_id": 10,
"text": "J_\\phi"
},
{
"math_id": 11,
"text": "\\phi(x)"
},
{
"math_id": 12,
"text": "\n\\phi(x) = \\langle \\hat \\phi(x)\\rangle_J = \\frac{\\delta W[J]}{\\delta J(x)}.\n"
},
{
"math_id": 13,
"text": "\nJ_\\phi(x) = -\\frac{\\delta \\Gamma[\\phi]}{\\delta \\phi(x)}.\n"
},
{
"math_id": 14,
"text": "J_\\phi(x) = 0"
},
{
"math_id": 15,
"text": "\n\\langle \\hat \\phi(x_1) \\dots \\hat \\phi(x_n)\\rangle_{\\mathrm{1PI}} = i \\frac{\\delta^n \\Gamma[\\phi]}{\\delta \\phi(x_1) \\dots \\delta \\phi(x_n)}\\bigg|_{J=0},\n"
},
{
"math_id": 16,
"text": "\\Gamma[\\phi]"
},
{
"math_id": 17,
"text": "\\Delta(x,y)"
},
{
"math_id": 18,
"text": "\n\\Delta(x,y) = \\frac{\\delta^2 W[J]}{\\delta J(x)\\delta J(y)} = \\frac{\\delta \\phi(x)}{\\delta J(y)} = \\bigg(\\frac{\\delta J(y)}{\\delta \\phi(x)}\\bigg)^{-1} = -\\bigg(\\frac{\\delta^2 \\Gamma[\\phi]}{\\delta \\phi(x)\\delta \\phi(y)}\\bigg)^{-1} = -\\Pi^{-1}(x,y).\n"
},
{
"math_id": 19,
"text": "\\Gamma[\\phi_0]"
},
{
"math_id": 20,
"text": "S[\\phi+\\phi_0]"
},
{
"math_id": 21,
"text": "\\phi_0"
},
{
"math_id": 22,
"text": "\\phi"
},
{
"math_id": 23,
"text": "\\phi(x) = \\phi_{\\text{cl}}(x) +\\delta \\phi(x)"
},
{
"math_id": 24,
"text": "\n\\Gamma[\\phi_{\\text{cl}}] = S[\\phi_{\\text{cl}}]+\\frac{i}{2}\\text{Tr}\\bigg[\\ln \\frac{\\delta^2 S[\\phi]}{\\delta \\phi(x)\\delta \\phi(y)}\\bigg|_{\\phi = \\phi_{\\text{cl}}} \\bigg]+\\cdots.\n"
},
{
"math_id": 25,
"text": "F[x,\\phi]"
},
{
"math_id": 26,
"text": "\n\\phi(x) \\rightarrow \\phi(x) + \\epsilon F[x,\\phi],\n"
},
{
"math_id": 27,
"text": "\n0 = \\int d^4 x \\langle F[x,\\phi]\\rangle_{J_\\phi}\\frac{\\delta \\Gamma[\\phi]}{\\delta \\phi(x)}.\n"
},
{
"math_id": 28,
"text": "\n\\phi(x) \\rightarrow \\phi(x) + \\epsilon \\langle F[x,\\phi]\\rangle_{J_\\phi}.\n"
},
{
"math_id": 29,
"text": "F[x,\\phi] = a(x)+\\int d^4 y \\ b(x,y)\\phi(y)."
},
{
"math_id": 30,
"text": "V_0(\\phi)"
},
{
"math_id": 31,
"text": "V(\\phi)"
},
{
"math_id": 32,
"text": "\\mathcal V_4"
},
{
"math_id": 33,
"text": "V(\\phi) = - \\Gamma[\\phi]/\\mathcal V_4"
},
{
"math_id": 34,
"text": "H"
},
{
"math_id": 35,
"text": " \\langle \\Omega|H|\\Omega\\rangle"
},
{
"math_id": 36,
"text": "|\\Omega\\rangle"
},
{
"math_id": 37,
"text": "\\langle\\Omega| \\hat \\phi| \\Omega\\rangle = \\phi(x)"
},
{
"math_id": 38,
"text": "V''(\\phi) \\geq 0"
},
{
"math_id": 39,
"text": "\\phi_1"
},
{
"math_id": 40,
"text": "\\phi_2"
},
{
"math_id": 41,
"text": "|\\Omega_1\\rangle"
},
{
"math_id": 42,
"text": "|\\Omega_2\\rangle"
},
{
"math_id": 43,
"text": "\\lambda \\in [0,1]"
},
{
"math_id": 44,
"text": "\n|\\Omega\\rangle \\propto \\sqrt \\lambda |\\Omega_1\\rangle+\\sqrt{1-\\lambda}|\\Omega_2\\rangle.\n"
},
{
"math_id": 45,
"text": "\\lambda V_0(\\phi_1)+ (1-\\lambda)V_0(\\phi_2)<V_0(\\phi)"
}
] | https://en.wikipedia.org/wiki?curid=1240093 |
12401 | Graph theory | Area of discrete mathematics
In mathematics, graph theory is the study of "graphs", which are mathematical structures used to model pairwise relations between objects. A graph in this context is made up of "vertices" (also called "nodes" or "points") which are connected by "edges" (also called "arcs", "links" or "lines"). A distinction is made between undirected graphs, where edges link two vertices symmetrically, and directed graphs, where edges link two vertices asymmetrically. Graphs are one of the principal objects of study in discrete mathematics.
Definitions.
Definitions in graph theory vary. The following are some of the more basic ways of defining graphs and related mathematical structures.
Graph.
In one restricted but very common sense of the term, a graph is an ordered pair formula_0 comprising:
formula_1, a set of vertices (also called nodes or points);
formula_2, a set of edges (also called links or lines), which are unordered pairs of vertices (that is, an edge is associated with two distinct vertices).
To avoid ambiguity, this type of object may be called precisely an undirected simple graph.
In the edge formula_3, the vertices formula_4 and formula_5 are called the endpoints of the edge. The edge is said to join formula_4 and formula_5 and to be incident on formula_4 and on formula_5. A vertex may exist in a graph and not belong to an edge. Under this definition, multiple edges, in which two or more edges connect the same vertices, are not allowed.
In one more general sense of the term allowing multiple edges, a graph is an ordered triple formula_6 comprising:
formula_1, a set of vertices (also called nodes or points);
formula_7, a set of edges (also called links or lines);
formula_8, an incidence function mapping every edge to an unordered pair of vertices (that is, an edge is associated with two distinct vertices).
To avoid ambiguity, this type of object may be called precisely an undirected multigraph.
A loop is an edge that joins a vertex to itself. Graphs as defined in the two definitions above cannot have loops, because a loop joining a vertex formula_4 to itself is the edge (for an undirected simple graph) or is incident on (for an undirected multigraph) formula_9 which is not in formula_10. To allow loops, the definitions must be expanded. For undirected simple graphs, the definition of formula_7 should be modified to formula_11. For undirected multigraphs, the definition of formula_12 should be modified to formula_13. To avoid ambiguity, these types of objects may be called undirected simple graph permitting loops and undirected multigraph permitting loops (sometimes also undirected pseudograph), respectively.
formula_1 and formula_7 are usually taken to be finite, and many of the well-known results are not true (or are rather different) for infinite graphs because many of the arguments fail in the infinite case. Moreover, formula_1 is often assumed to be non-empty, but formula_7 is allowed to be the empty set. The order of a graph is formula_14, its number of vertices. The size of a graph is formula_15, its number of edges. The degree or valency of a vertex is the number of edges that are incident to it, where a loop is counted twice. The degree of a graph is the maximum of the degrees of its vertices.
In an undirected simple graph of order "n", the maximum degree of each vertex is "n" − 1 and the maximum size of the graph is "n"("n" − 1)/2.
The edges of an undirected simple graph permitting loops formula_16 induce a symmetric homogeneous relation formula_17 on the vertices of formula_16 that is called the adjacency relation of formula_16. Specifically, for each edge formula_18, its endpoints formula_4 and formula_5 are said to be adjacent to one another, which is denoted formula_19.
Directed graph.
A directed graph or digraph is a graph in which edges have orientations.
In one restricted but very common sense of the term, a directed graph is an ordered pair formula_0 comprising:
formula_1, a set of vertices (also called nodes or points);
formula_20, a set of edges (also called directed edges, directed links, directed lines, arrows or arcs), which are ordered pairs of distinct vertices.
To avoid ambiguity, this type of object may be called precisely a directed simple graph. In set theory and graph theory, formula_21 denotes the set of n-tuples of elements of formula_22 that is, ordered sequences of formula_23 elements that are not necessarily distinct.
In the edge formula_24 directed from formula_4 to formula_5, the vertices formula_4 and formula_5 are called the "endpoints" of the edge, formula_4 the "tail" of the edge and formula_5 the "head" of the edge. The edge is said to "join" formula_4 and formula_5 and to be "incident" on formula_4 and on formula_5. A vertex may exist in a graph and not belong to an edge. The edge formula_25 is called the "inverted edge" of formula_24. "Multiple edges", not allowed under the definition above, are two or more edges with both the same tail and the same head.
In one more general sense of the term allowing multiple edges, a directed graph is an ordered triple formula_6 comprising:
formula_1, a set of vertices (also called nodes or points);
formula_7, a set of edges (also called directed edges, directed links, directed lines, arrows or arcs);
formula_26, an incidence function mapping every edge to an ordered pair of distinct vertices.
To avoid ambiguity, this type of object may be called precisely a directed multigraph.
A "loop" is an edge that joins a vertex to itself. Directed graphs as defined in the two definitions above cannot have loops, because a loop joining a vertex formula_4 to itself is the edge (for a directed simple graph) or is incident on (for a directed multigraph) formula_27 which is not in formula_28. So to allow loops the definitions must be expanded. For directed simple graphs, the definition of formula_7 should be modified to formula_29. For directed multigraphs, the definition of formula_12 should be modified to formula_30. To avoid ambiguity, these types of objects may be called precisely a directed simple graph permitting loops and a directed multigraph permitting loops (or a "quiver") respectively.
The edges of a directed simple graph permitting loops formula_16 induce a homogeneous relation ~ on the vertices of formula_16 that is called the "adjacency relation" of formula_16. Specifically, for each edge formula_18, its endpoints formula_4 and formula_5 are said to be "adjacent" to one another, which is denoted formula_4 ~ formula_5.
Applications.
Graphs can be used to model many types of relations and processes in physical, biological, social and information systems. Many practical problems can be represented by graphs. Emphasizing their application to real-world systems, the term "network" is sometimes defined to mean a graph in which attributes (e.g. names) are associated with the vertices and edges, and the subject that expresses and understands real-world systems as a network is called network science.
Computer science.
Within computer science, causal and non-causal linked structures are graphs that are used to represent networks of communication, data organization, computational devices, the flow of computation, etc. For instance, the link structure of a website can be represented by a directed graph, in which the vertices represent web pages and directed edges represent links from one page to another. A similar approach can be taken to problems in social media, travel, biology, computer chip design, mapping the progression of neuro-degenerative diseases, and many other fields. The development of algorithms to handle graphs is therefore of major interest in computer science. The transformation of graphs is often formalized and represented by graph rewrite systems. Complementary to graph transformation systems focusing on rule-based in-memory manipulation of graphs are graph databases geared towards transaction-safe, persistent storing and querying of graph-structured data.
Linguistics.
Graph-theoretic methods, in various forms, have proven particularly useful in linguistics, since natural language often lends itself well to discrete structure. Traditionally, syntax and compositional semantics follow tree-based structures, whose expressive power lies in the principle of compositionality, modeled in a hierarchical graph. More contemporary approaches such as head-driven phrase structure grammar model the syntax of natural language using typed feature structures, which are directed acyclic graphs.
Within lexical semantics, especially as applied to computers, modeling word meaning is easier when a given word is understood in terms of related words; semantic networks are therefore important in computational linguistics. Still, other methods in phonology (e.g. optimality theory, which uses lattice graphs) and morphology (e.g. finite-state morphology, using finite-state transducers) are common in the analysis of language as a graph. Indeed, the usefulness of this area of mathematics to linguistics has borne organizations such as TextGraphs, as well as various 'Net' projects, such as WordNet, VerbNet, and others.
Physics and chemistry.
Graph theory is also used to study molecules in chemistry and physics. In condensed matter physics, the three-dimensional structure of complicated simulated atomic structures can be studied quantitatively by gathering statistics on graph-theoretic properties related to the topology of the atoms. Also, "the Feynman graphs and rules of calculation summarize quantum field theory in a form in close contact with the experimental numbers one wants to understand." In chemistry a graph makes a natural model for a molecule, where vertices represent atoms and edges bonds. This approach is especially used in computer processing of molecular structures, ranging from chemical editors to database searching. In statistical physics, graphs can represent local connections between interacting parts of a system, as well as the dynamics of a physical process on such
systems. Similarly, in computational neuroscience graphs can be used to represent functional connections between brain areas that interact to give rise to various cognitive processes, where the vertices represent different areas of the brain and the edges represent the connections between those areas. Graph theory plays an important role in electrical modeling of electrical networks, here, weights are associated with resistance of the wire segments to obtain electrical properties of network structures. Graphs are also used to represent the micro-scale channels of porous media, in which the vertices represent the pores and the edges represent the smaller channels connecting the pores. Chemical graph theory uses the molecular graph as a means to model molecules.
Graphs and networks are excellent models to study and understand phase transitions and critical phenomena.
Removal of nodes or edges leads to a critical transition where the network breaks into small clusters which is studied as a phase transition. This breakdown is studied via percolation theory.
Social sciences.
Graph theory is also widely used in sociology as a way, for example, to measure actors' prestige or to explore rumor spreading, notably through the use of social network analysis software. Under the umbrella of social networks are many different types of graphs. Acquaintanceship and friendship graphs describe whether people know each other. Influence graphs model whether certain people can influence the behavior of others. Finally, collaboration graphs model whether two people work together in a particular way, such as acting in a movie together.
Biology.
Likewise, graph theory is useful in biology and conservation efforts where a vertex can represent regions where certain species exist (or inhabit) and the edges represent migration paths or movement between the regions. This information is important when looking at breeding patterns or tracking the spread of disease, parasites or how changes to the movement can affect other species.
Graphs are also commonly used in molecular biology and genomics to model and analyse datasets with complex relationships. For example, graph-based methods are often used to 'cluster' cells together into cell-types in single-cell transcriptome analysis. Another use is to model genes or proteins in a pathway and study the relationships between them, such as metabolic pathways and gene regulatory networks. Evolutionary trees, ecological networks, and hierarchical clustering of gene expression patterns are also represented as graph structures.
Graph theory is also used in connectomics; nervous systems can be seen as a graph, where the nodes are neurons and the edges are the connections between them.
Mathematics.
In mathematics, graphs are useful in geometry and certain parts of topology such as knot theory. Algebraic graph theory has close links with group theory. Algebraic graph theory has been applied to many areas including dynamic systems and complexity.
Other topics.
A graph structure can be extended by assigning a weight to each edge of the graph. Graphs with weights, or weighted graphs, are used to represent structures in which pairwise connections have some numerical values. For example, if a graph represents a road network, the weights could represent the length of each road. There may be several weights associated with each edge, including distance (as in the previous example), travel time, or monetary cost. Such weighted graphs are commonly used to program GPS's, and travel-planning search engines that compare flight times and costs.
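For instance, shortest paths in such a weighted road network can be computed with Dijkstra's algorithm. The sketch below (illustrative only; the vertex names and weights are made up, and it assumes non-negative edge weights) represents the weighted graph as an adjacency structure and returns the distance from a source vertex to every reachable vertex.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source in a weighted graph with non-negative
    weights; graph maps each vertex to a list of (neighbour, weight) pairs."""
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist.get(u, float("inf")):
            continue  # stale entry left over from an earlier relaxation
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(queue, (nd, v))
    return dist

# A toy road network; the vertex names and distances are made up.
roads = {
    "A": [("B", 5), ("C", 2)],
    "B": [("D", 1)],
    "C": [("B", 1), ("D", 7)],
    "D": [],
}
print(dijkstra(roads, "A"))  # {'A': 0, 'B': 3, 'C': 2, 'D': 4}
```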
History.
The paper written by Leonhard Euler on the Seven Bridges of Königsberg and published in 1736 is regarded as the first paper in the history of graph theory. This paper, as well as the one written by Vandermonde on the "knight problem," carried on with the "analysis situs" initiated by Leibniz. Euler's formula relating the number of edges, vertices, and faces of a convex polyhedron was studied and generalized by Cauchy and L'Huilier, and represents the beginning of the branch of mathematics known as topology.
More than one century after Euler's paper on the bridges of Königsberg and while Listing was introducing the concept of topology, Cayley was led by an interest in particular analytical forms arising from differential calculus to study a particular class of graphs, the "trees". This study had many implications for theoretical chemistry. The techniques he used mainly concern the enumeration of graphs with particular properties. Enumerative graph theory then arose from the results of Cayley and the fundamental results published by Pólya between 1935 and 1937. These were generalized by De Bruijn in 1959. Cayley linked his results on trees with contemporary studies of chemical composition. The fusion of ideas from mathematics with those from chemistry began what has become part of the standard terminology of graph theory.
In particular, the term "graph" was introduced by Sylvester in a paper published in 1878 in "Nature", where he draws an analogy between "quantic invariants" and "co-variants" of algebra and molecular diagrams:
"[…] Every invariant and co-variant thus becomes expressible by a "graph" precisely identical with a Kekuléan diagram or chemicograph. […] I give a rule for the geometrical multiplication of graphs, "i.e." for constructing a "graph" to the product of in- or co-variants whose separate graphs are given. […]" (italics as in the original).
The first textbook on graph theory was written by Dénes Kőnig, and published in 1936. Another book by Frank Harary, published in 1969, was "considered the world over to be the definitive textbook on the subject", and enabled mathematicians, chemists, electrical engineers and social scientists to talk to each other. Harary donated all of the royalties to fund the Pólya Prize.
One of the most famous and stimulating problems in graph theory is the four color problem: "Is it true that any map drawn in the plane may have its regions colored with four colors, in such a way that any two regions having a common border have different colors?" This problem was first posed by Francis Guthrie in 1852 and its first written record is in a letter of De Morgan addressed to Hamilton the same year. Many incorrect proofs have been proposed, including those by Cayley, Kempe, and others. The study and the generalization of this problem by Tait, Heawood, Ramsey and Hadwiger led to the study of the colorings of the graphs embedded on surfaces with arbitrary genus. Tait's reformulation generated a new class of problems, the "factorization problems", particularly studied by Petersen and Kőnig. The works of Ramsey on colorations and more specially the results obtained by Turán in 1941 was at the origin of another branch of graph theory, "extremal graph theory".
The four color problem remained unsolved for more than a century. In 1969 Heinrich Heesch published a method for solving the problem using computers. A computer-aided proof produced in 1976 by Kenneth Appel and Wolfgang Haken makes fundamental use of the notion of "discharging" developed by Heesch. The proof involved checking the properties of 1,936 configurations by computer, and was not fully accepted at the time due to its complexity. A simpler proof considering only 633 configurations was given twenty years later by Robertson, Seymour, Sanders and Thomas.
The autonomous development of topology from 1860 to 1930 fertilized graph theory back through the works of Jordan, Kuratowski and Whitney. Another important factor of common development of graph theory and topology came from the use of the techniques of modern algebra. The first example of such a use comes from the work of the physicist Gustav Kirchhoff, who published in 1845 his Kirchhoff's circuit laws for calculating the voltage and current in electric circuits.
The introduction of probabilistic methods in graph theory, especially in the study of Erdős and Rényi of the asymptotic probability of graph connectivity, gave rise to yet another branch, known as "random graph theory", which has been a fruitful source of graph-theoretic results.
Representation.
A graph is an abstraction of relationships that emerge in nature; hence, it cannot be coupled to a certain representation. The way it is represented depends on the degree of convenience such representation provides for a certain application. The most common representations are the visual, in which, usually, vertices are drawn and connected by edges, and the tabular, in which rows of a table provide information about the relationships between the vertices within the graph.
Visual: Graph drawing.
Graphs are usually represented visually by drawing a point or circle for every vertex, and drawing a line between two vertices if they are connected by an edge. If the graph is directed, the direction is indicated by drawing an arrow. If the graph is weighted, the weight is added on the arrow.
A graph drawing should not be confused with the graph itself (the abstract, non-visual structure) as there are several ways to structure the graph drawing. All that matters is which vertices are connected to which others by how many edges and not the exact layout. In practice, it is often difficult to decide if two drawings represent the same graph. Depending on the problem domain some layouts may be better suited and easier to understand than others.
The pioneering work of W. T. Tutte was very influential on the subject of graph drawing. Among other achievements, he introduced the use of linear algebraic methods to obtain graph drawings.
Graph drawing also can be said to encompass problems that deal with the crossing number and its various generalizations. The crossing number of a graph is the minimum number of intersections between edges that a drawing of the graph in the plane must contain. For a planar graph, the crossing number is zero by definition. Drawings on surfaces other than the plane are also studied.
There are other techniques to visualize a graph away from vertices and edges, including circle packings, intersection graph, and other visualizations of the adjacency matrix.
Tabular: Graph data structures.
The tabular representation lends itself well to computational applications. There are different ways to store graphs in a computer system. The data structure used depends on both the graph structure and the algorithm used for manipulating the graph. Theoretically one can distinguish between list and matrix structures but in concrete applications the best structure is often a combination of both. List structures are often preferred for sparse graphs as they have smaller memory requirements. Matrix structures on the other hand provide faster access for some applications but can consume huge amounts of memory. Implementations of sparse matrix structures that are efficient on modern parallel computer architectures are an object of current investigation.
List structures include the edge list, an array of pairs of vertices, and the adjacency list, which separately lists the neighbors of each vertex: Much like the edge list, each vertex has a list of which vertices it is adjacent to.
Matrix structures include the incidence matrix, a matrix of 0's and 1's whose rows represent vertices and whose columns represent edges, and the adjacency matrix, in which both the rows and columns are indexed by vertices. In both cases a 1 indicates two adjacent objects and a 0 indicates two non-adjacent objects. The degree matrix indicates the degree of vertices. The Laplacian matrix is a modified form of the adjacency matrix that incorporates information about the degrees of the vertices, and is useful in some calculations such as Kirchhoff's theorem on the number of spanning trees of a graph.
The distance matrix, like the adjacency matrix, has both its rows and columns indexed by vertices, but rather than containing a 0 or a 1 in each cell it contains the length of a shortest path between two vertices.
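The following sketch shows the same small undirected graph in several of these tabular forms (an illustration only; the particular graph and the use of NumPy for the matrices are arbitrary choices).

```python
import numpy as np

# A 4-vertex undirected graph given first as an edge list.
vertices = ["a", "b", "c", "d"]
edge_list = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")]

# Adjacency list: each vertex is mapped to the list of its neighbours.
adjacency_list = {v: [] for v in vertices}
for u, v in edge_list:
    adjacency_list[u].append(v)
    adjacency_list[v].append(u)

# Adjacency matrix: rows and columns are indexed by vertices.
index = {v: i for i, v in enumerate(vertices)}
A = np.zeros((len(vertices), len(vertices)), dtype=int)
for u, v in edge_list:
    A[index[u], index[v]] = A[index[v], index[u]] = 1

# Degree matrix and Laplacian matrix L = D - A, as used in Kirchhoff's theorem.
D = np.diag(A.sum(axis=1))
L = D - A

print(adjacency_list)  # {'a': ['b', 'c'], 'b': ['a', 'c'], 'c': ['a', 'b', 'd'], 'd': ['c']}
print(A)               # the 4 x 4 symmetric 0-1 adjacency matrix
```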
Problems.
Enumeration.
There is a large literature on graphical enumeration: the problem of counting graphs meeting specified conditions. Some of this work is found in Harary and Palmer (1973).
Subgraphs, induced subgraphs, and minors.
A common problem, called the subgraph isomorphism problem, is finding a fixed graph as a subgraph in a given graph. One reason to be interested in such a question is that many graph properties are "hereditary" for subgraphs, which means that a graph has the property if and only if all subgraphs have it too.
Unfortunately, finding maximal subgraphs of a certain kind is often an NP-complete problem. For example:
One special case of subgraph isomorphism is the graph isomorphism problem. It asks whether two graphs are isomorphic. It is not known whether this problem is NP-complete, nor whether it can be solved in polynomial time.
A similar problem is finding induced subgraphs in a given graph. Again, some important graph properties are hereditary with respect to induced subgraphs, which means that a graph has a property if and only if all induced subgraphs also have it. Finding maximal induced subgraphs of a certain kind is also often NP-complete. For example:
Still another such problem, the minor containment problem, is to find a fixed graph as a minor of a given graph. A minor or subcontraction of a graph is any graph obtained by taking a subgraph and contracting some (or no) edges. Many graph properties are hereditary for minors, which means that a graph has a property if and only if all minors have it too. For example, Wagner's Theorem states: A graph is planar if and only if it contains as a minor neither the complete bipartite graph "K"3,3 nor the complete graph "K"5.
A similar problem, the subdivision containment problem, is to find a fixed graph as a subdivision of a given graph. A subdivision or homeomorphism of a graph is any graph obtained by subdividing some (or no) edges. Subdivision containment is related to graph properties such as planarity. For example, Kuratowski's Theorem states: A graph is planar if and only if it contains as a subdivision neither the complete bipartite graph "K"3,3 nor the complete graph "K"5.
Another problem in subdivision containment is the Kelmans–Seymour conjecture: Every 5-vertex-connected graph that is not planar contains a subdivision of the complete graph "K"5.
Another class of problems has to do with the extent to which various species and generalizations of graphs are determined by their "point-deleted subgraphs". For example, the reconstruction conjecture states that every graph with at least three vertices is determined, up to isomorphism, by its collection of vertex-deleted subgraphs.
Graph coloring.
Many problems and theorems in graph theory have to do with various ways of coloring graphs. Typically, one is interested in coloring a graph so that no two adjacent vertices have the same color, or with other similar restrictions. One may also consider coloring edges (possibly so that no two coincident edges are the same color), or other variations. Among the famous results and conjectures concerning graph coloring are the following:
Subsumption and unification.
Constraint modeling theories concern families of directed graphs related by a partial order. In these applications, graphs are ordered by specificity, meaning that more constrained graphs—which are more specific and thus contain a greater amount of information—are subsumed by those that are more general. Operations between graphs include evaluating the direction of a subsumption relationship between two graphs, if any, and computing graph unification. The unification of two argument graphs is defined as the most general graph (or the computation thereof) that is consistent with (i.e. contains all of the information in) the inputs, if such a graph exists; efficient unification algorithms are known.
For constraint frameworks which are strictly compositional, graph unification is the sufficient satisfiability and combination function. Well-known applications include automatic theorem proving and modeling the elaboration of linguistic structure.
Network flow.
There are numerous problems arising especially from applications that have to do with various notions of flows in networks, for example:
Covering problems.
Covering problems in graphs may refer to various set cover problems on subsets of vertices/subgraphs.
Decomposition problems.
Decomposition, defined as partitioning the edge set of a graph (with as many vertices as necessary accompanying the edges of each part of the partition), has a wide variety of questions. Often, the problem is to decompose a graph into subgraphs isomorphic to a fixed graph; for instance, decomposing a complete graph into Hamiltonian cycles. Other problems specify a family of graphs into which a given graph should be decomposed, for instance, a family of cycles, or decomposing a complete graph "K""n" into "n" − 1 specified trees having, respectively, 1, 2, 3, ..., "n" − 1 edges.
Some specific decomposition problems that have been studied include:
Graph classes.
Many problems involve characterizing the members of various classes of graphs. Some examples of such questions are below:
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "G=(V,E)"
},
{
"math_id": 1,
"text": "V"
},
{
"math_id": 2,
"text": "E \\subseteq \\{ \\{x, y\\} \\mid x, y \\in V \\;\\textrm{ and }\\; x \\neq y \\}"
},
{
"math_id": 3,
"text": "\\{x, y\\}"
},
{
"math_id": 4,
"text": "x"
},
{
"math_id": 5,
"text": "y"
},
{
"math_id": 6,
"text": "G=(V,E,\\phi)"
},
{
"math_id": 7,
"text": "E"
},
{
"math_id": 8,
"text": "\\phi : E \\to \\{ \\{x, y\\} \\mid x, y \\in V \\;\\textrm{ and }\\; x \\neq y \\}"
},
{
"math_id": 9,
"text": "\\{x, x\\} = \\{x\\}"
},
{
"math_id": 10,
"text": "\\{ \\{x, y\\} \\mid x, y \\in V \\;\\textrm{ and }\\; x \\neq y \\}"
},
{
"math_id": 11,
"text": "E \\subseteq \\{ \\{x, y\\} \\mid x, y \\in V \\}"
},
{
"math_id": 12,
"text": "\\phi"
},
{
"math_id": 13,
"text": "\\phi : E \\to \\{ \\{x, y\\} \\mid x, y \\in V \\}"
},
{
"math_id": 14,
"text": "|V|"
},
{
"math_id": 15,
"text": "|E|"
},
{
"math_id": 16,
"text": "G"
},
{
"math_id": 17,
"text": "\\sim"
},
{
"math_id": 18,
"text": "(x,y)"
},
{
"math_id": 19,
"text": "x \\sim y"
},
{
"math_id": 20,
"text": "E \\subseteq \\left\\{(x,y) \\mid (x, y) \\in V^2 \\;\\textrm{ and }\\; x \\neq y \\right\\}"
},
{
"math_id": 21,
"text": "V^n"
},
{
"math_id": 22,
"text": "V,"
},
{
"math_id": 23,
"text": "n"
},
{
"math_id": 24,
"text": "(x, y)"
},
{
"math_id": 25,
"text": "(y,x)"
},
{
"math_id": 26,
"text": "\\phi : E \\to \\left\\{(x,y) \\mid (x, y) \\in V^2 \\;\\textrm{ and }\\; x \\neq y \\right\\}"
},
{
"math_id": 27,
"text": "(x,x)"
},
{
"math_id": 28,
"text": "\\left\\{(x, y) \\mid (x, y) \\in V^2 \\;\\textrm{ and }\\; x \\neq y \\right\\}"
},
{
"math_id": 29,
"text": "E \\subseteq \\left\\{(x, y) \\mid (x, y) \\in V^2\\right\\}"
},
{
"math_id": 30,
"text": "\\phi : E \\to \\left\\{(x, y) \\mid (x, y) \\in V^2\\right\\}"
}
] | https://en.wikipedia.org/wiki?curid=12401 |
12401224 | Eilenberg–Maclane spectrum | In mathematics, specifically algebraic topology, there is a distinguished class of spectra called Eilenberg–Maclane spectra formula_0 for any Abelian group formula_1 (pg 134). Note that this construction can be generalized to commutative rings formula_2 as well from its underlying Abelian group. These are an important class of spectra because they model ordinary integral cohomology and cohomology with coefficients in an abelian group. In addition, they are a lift of the homological structure in the derived category formula_3 of abelian groups to the homotopy category of spectra. Moreover, these spectra can be used to construct resolutions of spectra, called Adams resolutions, which are used in the construction of the Adams spectral sequence.
Definition.
For a fixed abelian group formula_1, let formula_0 denote the set of Eilenberg–MacLane spaces formula_4 with the adjunction map coming from the property of loop spaces of Eilenberg–Maclane spaces: namely, because there is a homotopy equivalence formula_5 we can construct maps formula_6 from the adjunction formula_7, giving the desired structure maps of the set so that it forms a spectrum. This collection is called the Eilenberg–Maclane spectrum of formula_1 (pg 134).
Properties.
Using the Eilenberg–Maclane spectrum formula_8 we can define the notion of cohomology of a spectrum formula_9 and the homology of a spectrum formula_9 (pg 42). Using the functor formula_10 we can define cohomology simply as formula_11. Note that for a CW complex formula_9, the cohomology of the suspension spectrum formula_12 recovers the cohomology of the original space formula_9. We can define the dual notion of homology as formula_13, which can be interpreted as a "dual" to the usual hom-tensor adjunction in spectra. Note that if, instead of formula_8, we take formula_0 for some Abelian group formula_1, we recover the usual (co)homology with coefficients in the abelian group formula_1, which is denoted by formula_14.
Mod-"p" spectra and the Steenrod algebra.
For the Eilenberg–Maclane spectrum formula_15 there is an isomorphism formula_16 for the p-Steenrod algebra formula_17.
Tools for computing Adams resolutions.
One of the quintessential tools for computing stable homotopy groups is the Adams spectral sequence. In order to make this construction, Adams resolutions are employed. These depend on the following properties of Eilenberg–Maclane spectra. We define a generalized Eilenberg–Maclane spectrum formula_18 as a finite wedge of suspensions of Eilenberg–Maclane spectra formula_19, so formula_20 Note that for formula_21 and a spectrum formula_9 we have formula_22 so it shifts the degree of cohomology classes. For the rest of the article, formula_23 for some fixed abelian group formula_1.
Equivalence of maps to "K".
Note that a homotopy class formula_24 represents a finite collection of elements in formula_14. Conversely, any finite collection of elements in formula_14 is represented by some homotopy class formula_24.
Constructing a surjection.
For a locally finite collection of elements in formula_14 generating it as an abelian group, the associated map formula_25 induces a surjection on cohomology, meaning if we evaluate these spectra on some topological space formula_26, there is always a surjection formula_27 of Abelian groups.
Steenrod-module structure on cohomology of spectra.
For a spectrum formula_9 taking the smash product formula_28 constructs a spectrum which is homotopy equivalent to a generalized Eilenberg–Maclane space with one wedge summand for each formula_29 generator of formula_30. In particular, it gives formula_31 the structure of a module over the Steenrod algebra formula_17. This is because the equivalence stated before can be read as formula_32 and the map formula_33 induces the formula_17-structure.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "HA"
},
{
"math_id": 1,
"text": "A"
},
{
"math_id": 2,
"text": "R"
},
{
"math_id": 3,
"text": "D(\\mathbb{Z})"
},
{
"math_id": 4,
"text": "\\{ K(A,0), K(A,1), K(A,2), \\ldots \\}"
},
{
"math_id": 5,
"text": "K(A,n-1)\\simeq \\Omega K(A,n)"
},
{
"math_id": 6,
"text": "\\Sigma K(A,n-1) \\to K(A,n)"
},
{
"math_id": 7,
"text": "[\\Sigma(X),Y]\\simeq [X,\\Omega(Y)]"
},
{
"math_id": 8,
"text": "H\\mathbb{Z}"
},
{
"math_id": 9,
"text": "X"
},
{
"math_id": 10,
"text": "[-,H\\mathbb{Z}]:\\textbf{Spectra}^{op} \\to \\text{GrAb} "
},
{
"math_id": 11,
"text": "H^*(E) = [E,H\\mathbb{Z}]"
},
{
"math_id": 12,
"text": "\\Sigma^\\infty X"
},
{
"math_id": 13,
"text": "H_*(X) = \\pi_*(E\\wedge X) = [\\mathbb{S},E\\wedge X]"
},
{
"math_id": 14,
"text": "H^*(X;A)"
},
{
"math_id": 15,
"text": "H\\mathbb{Z}/p"
},
{
"math_id": 16,
"text": "H^*(H\\mathbb{Z}/p, \\mathbb{Z}/p) \\cong [H\\mathbb{Z}/p,H\\mathbb{Z}/p] \\cong \\mathcal{A}_p"
},
{
"math_id": 17,
"text": "\\mathcal{A}_p"
},
{
"math_id": 18,
"text": "K"
},
{
"math_id": 19,
"text": "HA_i"
},
{
"math_id": 20,
"text": "K := \\Sigma^{k_1}HA_1\\wedge\\cdots\\wedge\\Sigma^{k_n}HA_n"
},
{
"math_id": 21,
"text": "\\Sigma^kHA"
},
{
"math_id": 22,
"text": "[X,\\Sigma^kHA] \\cong H^{*+k}(X;A)"
},
{
"math_id": 23,
"text": "HA_i = HA"
},
{
"math_id": 24,
"text": "f \\in [X,K]"
},
{
"math_id": 25,
"text": "f: X \\to K"
},
{
"math_id": 26,
"text": "S"
},
{
"math_id": 27,
"text": "f^*:K(S) \\to X(S)"
},
{
"math_id": 28,
"text": "X\\wedge H\\mathbb{Z}/p"
},
{
"math_id": 29,
"text": "\\mathbb{Z}/p"
},
{
"math_id": 30,
"text": "H^*(X;\\mathbb{Z}/p)"
},
{
"math_id": 31,
"text": "H^*(X)"
},
{
"math_id": 32,
"text": "H^*(X\\wedge H\\mathbb{Z}/p) \\cong \\mathcal{A}_p\\otimes H^*(X)"
},
{
"math_id": 33,
"text": "f: X \\to X \\wedge H\\mathbb{Z}/p"
}
] | https://en.wikipedia.org/wiki?curid=12401224 |
12401488 | Triangle center | Point in a triangle that can be seen as its middle under some criteria
In geometry, a triangle center or triangle centre is a point in the triangle's plane that is in some sense in the middle of the triangle. For example, the centroid, circumcenter, incenter and orthocenter were familiar to the ancient Greeks, and can be obtained by simple constructions.
Each of these classical centers has the property that it is invariant (more precisely equivariant) under similarity transformations. In other words, for any triangle and any similarity transformation (such as a rotation, reflection, dilation, or translation), the center of the transformed triangle is the same point as the transformed center of the original triangle.
This invariance is the defining property of a triangle center. It rules out other well-known points such as the Brocard points which are not invariant under reflection and so fail to qualify as triangle centers.
For an equilateral triangle, all triangle centers coincide at its centroid. However the triangle centers generally take different positions from each other on all other triangles. The definitions and properties of thousands of triangle centers have been collected in the "Encyclopedia of Triangle Centers".
History.
Even though the ancient Greeks discovered the classic centers of a triangle, they had not formulated any definition of a triangle center. After the ancient Greeks, several special points associated with a triangle like the Fermat point, nine-point center, Lemoine point, Gergonne point, and Feuerbach point were discovered.
During the revival of interest in triangle geometry in the 1980s it was noticed that these special points share some general properties that now form the basis for a formal definition of triangle center. Clark Kimberling's "Encyclopedia of Triangle Centers" contains an annotated list of over 50,000 triangle centers. Every entry in the "Encyclopedia of Triangle Centers" is denoted by formula_0 or formula_1 where formula_2 is the positional index of the entry. For example, the centroid of a triangle is the second entry and is denoted by formula_3 or formula_4.
Formal definition.
A real-valued function f of three real variables a, b, c may have the following properties:
*Homogeneity: formula_5 for some constant n and for all "t" > 0.
*Bisymmetry in the second and third variables: formula_6
If a non-zero f has both these properties it is called a triangle center function. If f is a triangle center function and a, b, c are the side-lengths of a reference triangle then the point whose trilinear coordinates are formula_7 is called a triangle center.
This definition ensures that triangle centers of similar triangles meet the invariance criteria specified above. By convention only the first of the three trilinear coordinates of a triangle center is quoted since the other two are obtained by cyclic permutation of a, b, c. This process is known as cyclicity.
Every triangle center function corresponds to a unique triangle center. This correspondence is not bijective. Different functions may define the same triangle center. For example, the functions formula_8 and formula_9 both correspond to the centroid.
Two triangle center functions define the same triangle center if and only if their ratio is a function symmetric in a, b, c.
Even if a triangle center function is well-defined everywhere, the same cannot always be said for its associated triangle center. For example, let formula_10 be 0 if the ratios "a"/"b" and "b"/"c" are both rational and 1 otherwise. Then for any triangle with integer sides the associated triangle center evaluates to 0:0:0, which is undefined.
Default domain.
In some cases these functions are not defined on the whole of ℝ³. For example, the trilinears of "X"365, which is the 365th entry in the Encyclopedia of Triangle Centers, are formula_11 so a, b, c cannot be negative. Furthermore, in order to represent the sides of a triangle they must satisfy the triangle inequality. So, in practice, every function's domain is restricted to the region of ℝ³ where
formula_12
This region T is the domain of all triangles, and it is the default domain for all triangle-based functions.
Other useful domains.
There are various instances where it may be desirable to restrict the analysis to a smaller domain than T. For example:
*The centers "X"3, "X"4, "X"22, "X"24, "X"40 make specific reference to acute triangles, namely that region of T where formula_13
*When differentiating between the Fermat point and "X"13 the domain of triangles with an angle exceeding 2π/3 is important; in other words, triangles for which any of the following is true:
formula_14
*A domain of much practical value since it is dense in T yet excludes all trivial triangles (i.e. points) and degenerate triangles (i.e. lines) is the set of all scalene triangles. It is obtained by removing the planes "b" = "c", "c" = "a", "a" = "b" from T.
Domain symmetry.
Not every subset D ⊆ T is a viable domain. In order to support the bisymmetry test D must be symmetric about the planes "b" = "c", "c" = "a", "a" = "b". To support cyclicity it must also be invariant under 2π/3 rotations about the line "a" = "b" = "c". The simplest domain of all is the line ("t", "t", "t") which corresponds to the set of all equilateral triangles.
Examples.
Circumcenter.
The point of concurrence of the perpendicular bisectors of the sides of triangle △"ABC" is the circumcenter. The trilinear coordinates of the circumcenter are
formula_15
Let formula_16 It can be shown that f is homogeneous:
formula_17
as well as bisymmetric:
formula_18
so f is a triangle center function. Since the corresponding triangle center has the same trilinears as the circumcenter, it follows that the circumcenter is a triangle center.
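As a numerical sanity check (not part of the original exposition), one can convert the circumcenter's trilinears to Cartesian coordinates, using the standard fact that trilinears x : y : z correspond to barycentrics ax : by : cz, and confirm that the resulting point is equidistant from the three vertices. The helper name and the sample triangle below are arbitrary illustrative choices.

```python
import math

def trilinear_to_cartesian(x, y, z, A, B, C):
    """Convert trilinears x : y : z (relative to triangle ABC) to a Cartesian point,
    via the corresponding barycentrics a*x : b*y : c*z."""
    a, b, c = math.dist(B, C), math.dist(C, A), math.dist(A, B)
    u, v, w = a * x, b * y, c * z
    s = u + v + w
    return ((u * A[0] + v * B[0] + w * C[0]) / s,
            (u * A[1] + v * B[1] + w * C[1]) / s)

# Circumcenter trilinears: a(b^2 + c^2 - a^2) : b(c^2 + a^2 - b^2) : c(a^2 + b^2 - c^2)
A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
a, b, c = math.dist(B, C), math.dist(C, A), math.dist(A, B)
O = trilinear_to_cartesian(a * (b*b + c*c - a*a),
                           b * (c*c + a*a - b*b),
                           c * (a*a + b*b - c*c), A, B, C)
print([round(math.dist(O, P), 9) for P in (A, B, C)])  # three equal distances: the circumradius
```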
1st isogonic center.
Let △"A'BC" be the equilateral triangle having base BC and vertex A' on the negative side of BC and let △"AB'C" and △"ABC' " be similarly constructed equilateral triangles based on the other two sides of triangle △"ABC". Then the lines AA', BB', CC' are concurrent and the point of concurrence is the 1st isogonal center. Its trilinear coordinates are
formula_19
Expressing these coordinates in terms of a, b, c, one can verify that they indeed satisfy the defining properties of the coordinates of a triangle center. Hence the 1st isogonic center is also a triangle center.
Fermat point.
Let
formula_20
Then f is bisymmetric and homogeneous so it is a triangle center function. Moreover, the corresponding triangle center coincides with the obtuse angled vertex whenever any vertex angle exceeds 2π/3, and with the 1st isogonic center otherwise. Therefore, this triangle center is none other than the Fermat point.
Non-examples.
Brocard points.
The trilinear coordinates of the first Brocard point are:
formula_21
These coordinates satisfy the properties of homogeneity and cyclicity but not bisymmetry. So the first Brocard point is not (in general) a triangle center. The second Brocard point has trilinear coordinates:
formula_22
and similar remarks apply.
The first and second Brocard points are one of many bicentric pairs of points, pairs of points defined from a triangle with the property that the pair (but not each individual point) is preserved under similarities of the triangle. Several binary operations, such as midpoint and trilinear product, when applied to the two Brocard points, as well as other bicentric pairs, produce triangle centers.
Some well-known triangle centers.
Recent triangle centers.
In the following table of more recent triangle centers, no specific notations are mentioned for the various points.
Also for each center only the first trilinear coordinate f(a,b,c) is specified. The other coordinates can be easily derived using the cyclicity property of trilinear coordinates.
General classes of triangle centers.
Kimberling center.
In honor of Clark Kimberling who created the online encyclopedia of more than 32,000 triangle centers, the triangle centers listed in the encyclopedia are collectively called "Kimberling centers".
Polynomial triangle center.
A triangle center P is called a "polynomial triangle center" if the trilinear coordinates of P can be expressed as polynomials in a, b, c.
Regular triangle center.
A triangle center P is called a "regular triangle point" if the trilinear coordinates of P can be expressed as polynomials in △, "a", "b", "c", where △ is the area of the triangle.
Major triangle center.
A triangle center P is said to be a "major triangle center" if the trilinear coordinates of P can be expressed in the form formula_23 where f(X) is a function of the angle X alone and does not depend on the other angles or on the side lengths.
Transcendental triangle center.
A triangle center P is called a "transcendental triangle center" if P has no trilinear representation using only algebraic functions of a, b, c.
Miscellaneous.
Isosceles and equilateral triangles.
Let f be a triangle center function. If two sides of a triangle are equal (say "a" = "b") then
formula_24
so two components of the associated triangle center are always equal. Therefore, all triangle centers of an isosceles triangle must lie on its line of symmetry. For an equilateral triangle all three components are equal so all centers coincide with the centroid. So, like a circle, an equilateral triangle has a unique center.
Excenters.
Let
formula_25
This is readily seen to be a triangle center function and (provided the triangle is scalene) the corresponding triangle center is the excenter opposite to the largest vertex angle. The other two excenters can be picked out by similar functions. However, as indicated above only one of the excenters of an isosceles triangle and none of the excenters of an equilateral triangle can ever be a triangle center.
Biantisymmetric functions.
A function f is biantisymmetric if
formula_26
If such a function is also non-zero and homogeneous it is easily seen that the mapping
formula_27
is a triangle center function. The corresponding triangle center is
formula_28
On account of this the definition of triangle center function is sometimes taken to include non-zero homogeneous biantisymmetric functions.
New centers from old.
Any triangle center function f can be normalized by multiplying it by a symmetric function of a, b, c so that "n" = 0. A normalized triangle center function has the same triangle center as the original, and also the stronger property that
formula_29
Together with the zero function, normalized triangle center functions form an algebra under addition, subtraction, and multiplication. This gives an easy way to create new triangle centers. However distinct normalized triangle center functions will often define the same triangle center, for example f and formula_30
Uninteresting centers.
Assume a, b, c are real variables and let α, β, γ be any three real constants. Let
formula_31
Then f is a triangle center function and "α" : "β" : "γ" is the corresponding triangle center whenever the sides of the reference triangle are labelled so that "a" < "b" < "c". Thus every point is potentially a triangle center. However the vast majority of triangle centers are of little interest, just as most continuous functions are of little interest.
Barycentric coordinates.
If f is a triangle center function then so is af and the corresponding triangle center is
formula_32
Since these are precisely the barycentric coordinates of the triangle center corresponding to f it follows that triangle centers could equally well have been defined in terms of barycentrics instead of trilinears. In practice it isn't difficult to switch from one coordinate system to the other.
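A minimal sketch of that switch (the function names and example values are illustrative): since trilinears x : y : z correspond to barycentrics ax : by : cz, converting in either direction is a componentwise multiplication or division.

```python
def trilinear_to_barycentric(x, y, z, a, b, c):
    """Trilinears x : y : z become barycentrics a*x : b*y : c*z (homogeneous: any common scale works)."""
    return (a * x, b * y, c * z)

def barycentric_to_trilinear(u, v, w, a, b, c):
    """Barycentrics u : v : w become trilinears u/a : v/b : w/c."""
    return (u / a, v / b, w / c)

# The incenter has trilinears 1 : 1 : 1, hence barycentrics a : b : c, whatever the side lengths.
print(trilinear_to_barycentric(1, 1, 1, 3, 4, 5))    # (3, 4, 5)
print(barycentric_to_trilinear(3, 4, 5, 3, 4, 5))    # (1.0, 1.0, 1.0)
```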
Binary systems.
There are other center pairs besides the Fermat point and the 1st isogonic center. Another system is formed by "X"3 and the incenter of the tangential triangle. Consider the triangle center function given by:
formula_33
For the corresponding triangle center there are four distinct possibilities:
formula_34
Note that the first is also the circumcenter.
Routine calculation shows that in every case these trilinears represent the incenter of the tangential triangle. So this point is a triangle center that is a close companion of the circumcenter.
Bisymmetry and invariance.
Reflecting a triangle reverses the order of its sides. In the image the coordinates refer to the ("c", "b", "a") triangle and (using "|" as the separator) the reflection of an arbitrary point formula_35 is formula_36 If f is a triangle center function the reflection of its triangle center is formula_37 which, by bisymmetry, is the same as formula_38
As this is also the triangle center corresponding to f relative to the ("c", "b", "a") triangle, bisymmetry ensures that all triangle centers are invariant under reflection. Since rotations and translations may be regarded as double reflections they too must preserve triangle centers. These invariance properties provide justification for the definition.
Alternative terminology.
Some other names for dilation are uniform scaling, isotropic scaling, homothety, and homothecy.
Non-Euclidean and other geometries.
The study of triangle centers traditionally is concerned with Euclidean geometry, but triangle centers can also be studied in non-Euclidean geometry. Triangle centers that have the same form for both Euclidean and hyperbolic geometry can be expressed using gyrotrigonometry. In non-Euclidean geometry, the assumption that the interior angles of the triangle sum to 180 degrees must be discarded.
Centers of tetrahedra or higher-dimensional simplices can also be defined, by analogy with 2-dimensional triangles.
Some centers can be extended to polygons with more than three sides. The centroid, for instance, can be found for any polygon. Some research has been done on the centers of polygons with more than three sides.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X(n)"
},
{
"math_id": 1,
"text": "X_n"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "X(2)"
},
{
"math_id": 4,
"text": "X_2"
},
{
"math_id": 5,
"text": "f(ta,tb,tc) = t^n f(a,b,c)"
},
{
"math_id": 6,
"text": "f(a,b,c) = f(a,c,b)."
},
{
"math_id": 7,
"text": "f(a,b,c) : f(b,c,a) : f(c,a,b)"
},
{
"math_id": 8,
"text": "f_1(a,b,c) = \\tfrac{1}{a}"
},
{
"math_id": 9,
"text": "f_2(a,b,c) = bc"
},
{
"math_id": 10,
"text": "f(a,b,c)"
},
{
"math_id": 11,
"text": "a^{1/2} : b^{1/2} : c^{1/2}"
},
{
"math_id": 12,
"text": "a \\leq b + c, \\quad b \\leq c + a, \\quad c \\leq a + b."
},
{
"math_id": 13,
"text": "a^2 \\leq b^2 + c^2, \\quad b^2 \\leq c^2 + a^2, \\quad c^2 \\leq a^2 + b^2."
},
{
"math_id": 14,
"text": "a^2 > b^2 + bc + c^2; \\quad b^2 > c^2 + ca + a^2; \\quad c^2 > a^2 + ab + b^2."
},
{
"math_id": 15,
"text": "a(b^2 + c^2 - a^2) : b(c^2 + a^2 - b^2) : c(a^2 + b^2 - c^2)."
},
{
"math_id": 16,
"text": "f\\left(a,b,c\\right)=a\\left(b^{2}+c^{2}-a^{2}\\right)"
},
{
"math_id": 17,
"text": "\\begin{align}\n f(ta,tb,tc) &= ta \\Bigl[ (tb)^2 + (tc)^2 - (ta)^2 \\Bigr] \\\\[2pt]\n &= t^3 \\Bigl[ a(b^2 + c^2 - a^2) \\Bigr] \\\\[2pt]\n &= t^3 f(a,b,c)\n\\end{align}"
},
{
"math_id": 18,
"text": "\\begin{align}\n f(a,c,b) &= a(c^2 + b^2 - a^2) \\\\[2pt] \n &= a(b^2 + c^2 - a^2) \\\\[2pt]\n &= f(a,b,c)\n\\end{align}"
},
{
"math_id": 19,
"text": "\\csc\\left(A + \\frac{\\pi}{3}\\right) : \\csc\\left(B + \\frac{\\pi}{3}\\right) : \\csc\\left(C + \\frac{\\pi}{3}\\right)."
},
{
"math_id": 20,
"text": "f(a, b, c) = \\begin{cases}\n 1 & \\quad \\text{if } a^2 > b^2 + bc + c^2 & \\iff \\text{if } A > 2\\pi/3 \\\\[8pt]\n 0 & \\quad \\!\\! \\displaystyle {{\\text{if } b^2 > c^2 + ca + a^2} \\atop {\\text{ or } c^2 > a^2 + ab + b^2}} & \\iff \\!\\! \\displaystyle {{\\text{if } B > 2\\pi/3} \\atop {\\text{ or } C > 2\\pi/3}} \\\\[8pt]\n \\csc(A + \\frac{\\pi}{3}) & \\quad \\text{otherwise } & \\iff A,B,C > 2\\pi/3\n\\end{cases}"
},
{
"math_id": 21,
"text": "\\frac{c}{b} \\ :\\ \\frac{a}{c} \\ :\\ \\frac{b}{a}"
},
{
"math_id": 22,
"text": "\\frac{b}{c} \\ :\\ \\frac{c}{a} \\ :\\ \\frac{a}{b}"
},
{
"math_id": 23,
"text": "f(A) : f(B) : f(C)"
},
{
"math_id": 24,
"text": "\\begin{align}\nf(a,b,c) &= f(b,a,c) &(\\text{since }a = b)\\\\\n&= f(b,c,a) & \\text{(by bisymmetry)}\n\\end{align}"
},
{
"math_id": 25,
"text": "f(a, b, c) = \\begin{cases}\n -1 & \\quad \\text{if } a \\ge b \\text{ and } a \\ge c, \\\\\n \\;\\;\\; 1 & \\quad \\text{otherwise}.\n\\end{cases}"
},
{
"math_id": 26,
"text": "f(a,b,c) = -f(a,c,b) \\quad \\text{for all} \\quad a,b,c."
},
{
"math_id": 27,
"text": "(a,b,c) \\to f(a,b,c)^2 \\, f(b,c,a) \\, f(c,a,b)"
},
{
"math_id": 28,
"text": "f(a,b,c) : f(b,c,a) : f(c,a,b)."
},
{
"math_id": 29,
"text": "f(ta,tb,tc) = f(a,b,c) \\quad \\text{for all} \\quad t > 0, \\ (a,b,c)."
},
{
"math_id": 30,
"text": "(abc)^{-1}(a+b+c)^3 f."
},
{
"math_id": 31,
"text": "f(a, b, c) = \\begin{cases}\n \\alpha & \\quad \\text{if } a < b \\text{ and } a < c & (a \\text{ is least}), \\\\[2pt]\n \\gamma & \\quad \\text{if } a > b \\text{ and } a > c & (a \\text{ is greatest}), \\\\[2pt] \n \\beta & \\quad \\text{otherwise} & (a \\text{ is in the middle}).\n\\end{cases}"
},
{
"math_id": 32,
"text": "a \\, f(a,b,c) : b \\, f(b,c,a) : c \\, f(c,a,b)."
},
{
"math_id": 33,
"text": "f(a, b, c) = \\begin{cases}\n \\cos A & \\text{if } \\triangle \\text{ is acute}, \\\\[2pt]\n \\cos A + \\sec B \\sec C & \\text{if } \\measuredangle A \\text{ is obtuse}, \\\\[2pt]\n \\cos A - \\sec A & \\text{if either} \\measuredangle B \\text{ or } \\measuredangle C \\text{ is obtuse}.\n\\end{cases}"
},
{
"math_id": 34,
"text": "\\begin{align}\n & \\text{if reference } \\triangle \\text{ is acute:} \\quad \\cos A \\ :\\, \\cos B \\ :\\, \\cos C \\\\[6pt] \n & \\begin{array}{rcccc}\n \\text{if } \\measuredangle A \\text{ is obtuse:} & \\cos A + \\sec B \\sec C &:& \\cos B - \\sec B &:& \\cos C - \\sec C \\\\[4pt]\n \\text{if } \\measuredangle B \\text{ is obtuse:} & \\cos A - \\sec A &:& \\cos B + \\sec C \\sec A &:& \\cos C - \\sec C \\\\[4pt]\n \\text{if } \\measuredangle C \\text{ is obtuse:} & \\cos A - \\sec A &:& \\cos B - \\sec B &:& \\cos C + \\sec A \\sec B\n\\end{array}\\end{align}"
},
{
"math_id": 35,
"text": "\\gamma : \\beta : \\alpha"
},
{
"math_id": 36,
"text": "\\gamma\\ |\\ \\beta \\ |\\ \\alpha."
},
{
"math_id": 37,
"text": "f(c,a,b)\\ |\\ f(b,c,a)\\ |\\ f(a,b,c),"
},
{
"math_id": 38,
"text": "f(c,b,a)\\ |\\ f(b,a,c)\\ |\\ f(a,c,b)."
}
] | https://en.wikipedia.org/wiki?curid=12401488 |
1240291 | Return on assets | The percentage of how profitable a company's assets are in generating revenue
The return on assets (ROA) shows the percentage of how profitable a company's assets are in generating revenue.
ROA can be computed as below:
formula_0
The phrase return on average assets (ROAA) is also used, to emphasize that average assets are used in the above formula.
This number indicates what the company can do with what it has, "i.e." how many dollars of earnings it derives from each dollar of assets it controls. It is a useful number for comparing competing companies in the same industry. The number will vary widely across different industries. Return on assets gives an indication of the capital intensity of the company, which will depend on the industry; companies that require large initial investments will generally have lower return on assets. ROAs over 5% are generally considered good.
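A minimal sketch of the calculation (the function name and the figures are invented, and the average is assumed to be taken over the opening and closing asset balances):

```python
def return_on_assets(net_income, assets_begin, assets_end):
    """ROA using average total assets over the period, expressed as a fraction."""
    average_assets = (assets_begin + assets_end) / 2   # average of opening and closing balances
    return net_income / average_assets

# A company earning 50 on assets that grew from 900 to 1100 has ROA = 50 / 1000 = 5%.
print(f"{return_on_assets(50, 900, 1100):.1%}")  # 5.0%
```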
Usage.
Return on assets is one of the elements used in financial analysis using the Du Pont Identity.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{ROA} = \\frac{\\mbox{Net Income}}{\\mbox{Average Total Assets}}"
}
] | https://en.wikipedia.org/wiki?curid=1240291 |
1240378 | Symmetry breaking | Physical process transitioning a system from a symmetric state to a more ordered state
In physics, symmetry breaking is a phenomenon where a disordered but symmetric state collapses into an ordered, but less symmetric state. This collapse is often one of many possible bifurcations that a particle can take as it approaches a lower energy state. Due to the many possibilities, an observer may assume the result of the collapse to be arbitrary. This phenomenon is fundamental to quantum field theory (QFT), and further, contemporary understandings of physics. Specifically, it plays a central role in the Glashow–Weinberg–Salam model, which forms the part of the Standard Model modelling the electroweak sector. In an infinite system (Minkowski spacetime) symmetry breaking occurs; however, in a finite system (that is, any real super-condensed system) the system is less predictable, and in many cases quantum tunneling occurs instead. Symmetry breaking and tunneling relate through the collapse of a particle into a non-symmetric state as it seeks a lower energy.
Symmetry breaking can be distinguished into two types, explicit and spontaneous. They are characterized by whether the equations of motion fail to be invariant, or the ground state fails to be invariant.
Non-technical description.
This section describes spontaneous symmetry breaking. This is the idea that for a physical system, the lowest energy configuration (the vacuum state) is not the most symmetric configuration of the system. Roughly speaking there are three types of symmetry that can be broken: discrete, continuous and gauge, ordered in increasing technicality.
An example of a system with discrete symmetry is given by the figure with the red graph: consider a particle moving on this graph, subject to gravity. A similar graph could be given by the function formula_1. This system is symmetric under reflection in the y-axis. There are three possible stationary states for the particle: the top of the hill at formula_2, or the bottom, at formula_3. When the particle is at the top, the configuration respects the reflection symmetry: the particle stays in the same place when reflected. However, the lowest energy configurations are those at formula_3. When the particle is in either of these configurations, it is no longer fixed under reflection in the y-axis: reflection swaps the two vacuum states.
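A small numerical sketch of this discrete example, using the double-well potential formula_1 with the arbitrary choice a = 1 (NumPy is used only for convenience; none of the numbers come from the text):

```python
import numpy as np

a = 1.0

def f(x):
    return (x**2 - a**2)**2               # the double-well potential of the red graph

x = np.linspace(-2 * a, 2 * a, 100001)

print(f(0.0), f(a), f(-a))                # 1.0 0.0 0.0: the hilltop versus the two wells
print(round(float(x[np.argmin(f(x))]), 3))  # a grid point at (or next to) one of the minima +-a
print(bool(np.isclose(f(x), f(-x)).all()))  # True: the potential itself is reflection-symmetric
```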
An example with continuous symmetry is given by a 3d analogue of the previous example, from rotating the graph around an axis through the top of the hill, or equivalently given by the graph formula_4. This is essentially the graph of the Mexican hat potential. This has a continuous symmetry given by rotation about the axis through the top of the hill (as well as a discrete symmetry by reflection through any radial plane). Again, if the particle is at the top of the hill it is fixed under rotations, but it has higher gravitational energy at the top. At the bottom, it is no longer invariant under rotations but minimizes its gravitational potential energy. Furthermore rotations move the particle from one energy minimizing configuration to another. There is a novelty here not seen in the previous example: from any of the vacuum states it is possible to access any other vacuum state with only a small amount of energy, by moving around the trough at the bottom of the hill, whereas in the previous example, to access the other vacuum, the particle would have to cross the hill, requiring a large amount of energy.
Gauge symmetry breaking is the most subtle, but has important physical consequences. Roughly speaking, for the purposes of this section a gauge symmetry is an assignment of systems with continuous symmetry to every point in spacetime. Gauge symmetry forbids mass generation for gauge fields, yet massive gauge fields (W and Z bosons) have been observed. Spontaneous symmetry breaking was developed to resolve this inconsistency. The idea is that in an early stage of the universe it was in a high energy state, analogous to the particle being at the top of the hill, and so had full gauge symmetry and all the gauge fields were massless. As it cooled, it settled into a choice of vacuum, thus spontaneously breaking the symmetry, thus removing the gauge symmetry and allowing mass generation of those gauge fields. A full explanation is highly technical: see electroweak interaction.
Spontaneous symmetry breaking.
In spontaneous symmetry breaking (SSB), the equations of motion of the system are invariant, but any vacuum state (lowest energy state) is not.
For an example with two-fold symmetry, if there is some atom which has two vacuum states, occupying either one of these states breaks the two-fold symmetry. This act of selecting one of the states as the system reaches a lower energy is SSB. When this happens, the atom is no longer formula_0 symmetric (reflectively symmetric) and has collapsed into a lower energy state.
Such a symmetry breaking is parametrized by an order parameter. A special case of this type of symmetry breaking is dynamical symmetry breaking.
In the Lagrangian setting of Quantum field theory (QFT), the Lagrangian formula_5 is a functional of quantum fields which is invariant under the action of a symmetry group formula_6. However, the vacuum expectation value formed when the particle collapses to a lower energy may not be invariant under formula_6. In this instance, it will partially break the symmetry of formula_6, into a subgroup formula_7. This is spontaneous symmetry breaking.
Within the context of gauge symmetry however, SSB is the phenomenon by which gauge fields 'acquire mass' despite gauge-invariance enforcing that such fields be massless. This is because the SSB of gauge symmetry breaks gauge-invariance, and such a break allows for the existence of massive gauge fields. This is an important exception to Goldstone's theorem, where a Nambu–Goldstone boson can gain mass, becoming a Higgs boson in the process.
Further, in this context the usage of 'symmetry breaking', while standard, is a misnomer, as gauge 'symmetry' is not really a symmetry but a redundancy in the description of the system. Mathematically, this redundancy is a choice of trivialization, somewhat analogous to the redundancy arising from a choice of basis.
Spontaneous symmetry breaking is also associated with phase transitions. For example in the Ising model, as the temperature of the system falls below the critical temperature the formula_0 symmetry of the vacuum is broken, giving a phase transition of the system.
Explicit symmetry breaking.
In explicit symmetry breaking (ESB), the equations of motion describing a system are variant under the broken symmetry. In Hamiltonian mechanics or Lagrangian Mechanics, this happens when there is at least one term in the Hamiltonian (or Lagrangian) that explicitly breaks the given symmetry.
In the Hamiltonian setting, this is often studied when the Hamiltonian can be written formula_8.
Here formula_9 is a 'base Hamiltonian', which has some manifest symmetry. More explicitly, it is symmetric under the action of a (Lie) group formula_6. Often this is an integrable Hamiltonian.
The formula_10 is a perturbation or interaction Hamiltonian. This is not invariant under the action of formula_6. It is often proportional to a small, perturbative parameter.
This is essentially the paradigm for perturbation theory in quantum mechanics. An example of its use is in finding the fine structure of atomic spectra.
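A minimal numerical sketch of this structure (the two-level system, the exchange symmetry and the value of the small parameter are all invented for illustration): the base Hamiltonian commutes with the symmetry, the perturbed Hamiltonian does not, and the degeneracy is split by an amount set by the perturbative parameter.

```python
import numpy as np

# Base Hamiltonian: two degenerate levels, symmetric under exchanging the two basis states.
H0 = np.array([[1.0, 0.0],
               [0.0, 1.0]])

# Small interaction term proportional to a perturbative parameter eps; it treats the two
# states differently and therefore explicitly breaks the exchange symmetry.
eps = 0.01
H_int = eps * np.array([[1.0, 0.0],
                        [0.0, -1.0]])

swap = np.array([[0.0, 1.0],
                 [1.0, 0.0]])

print(bool(np.allclose(swap @ H0 @ swap, H0)))                   # True: H0 is invariant
print(bool(np.allclose(swap @ (H0 + H_int) @ swap, H0 + H_int))) # False: the full H is not
print(np.linalg.eigvalsh(H0 + H_int))                            # degeneracy split: [0.99 1.01]
```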
Examples.
Symmetry breaking can cover any of the following scenarios:
* The breaking of an exact symmetry of the underlying laws of physics by the apparently random formation of some structure;
* A situation in physics in which a minimal energy state has less symmetry than the system itself;
* Situations where the actual state of the system does not reflect the underlying symmetries of the dynamics because the manifestly symmetric state is unstable (stability is gained at the cost of local asymmetry);
* Situations where the equations of a theory may have certain symmetries, though their solutions may not (the symmetries are "hidden").
One of the first cases of broken symmetry discussed in the physics literature is related to the form taken by a uniformly rotating body of incompressible fluid in gravitational and hydrostatic equilibrium. Jacobi and, soon after, Liouville, in 1834, discussed the fact that a tri-axial ellipsoid was an equilibrium solution for this problem when the ratio of the kinetic energy to the gravitational energy of the rotating body exceeded a certain critical value. The axial symmetry presented by the Maclaurin spheroids is broken at this bifurcation point. Furthermore, above this bifurcation point, and for constant angular momentum, the solutions that minimize the kinetic energy are the "non"-axially symmetric Jacobi ellipsoids instead of the Maclaurin spheroids.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{Z}_2"
},
{
"math_id": 1,
"text": "f(x) = (x^2 - a^2)^2"
},
{
"math_id": 2,
"text": "x = 0"
},
{
"math_id": 3,
"text": "x = \\pm a"
},
{
"math_id": 4,
"text": "f(x,y) = (x^2 + y^2 - a^2)^2"
},
{
"math_id": 5,
"text": "L"
},
{
"math_id": 6,
"text": "G"
},
{
"math_id": 7,
"text": "H"
},
{
"math_id": 8,
"text": "H = H_0 + H_{\\text{int}}"
},
{
"math_id": 9,
"text": "H_0"
},
{
"math_id": 10,
"text": "H_{\\text{int}}"
}
] | https://en.wikipedia.org/wiki?curid=1240378 |
12404937 | Runoff model (reservoir) | Type of water motion
A runoff models or rainfall-runoff model describes how rainfall is converted into runoff in a drainage basin (catchment area or watershed). More precisely, it produces a surface runoff hydrograph in response to a rainfall event, represented by and input as a hyetograph.
Rainfall-runoff models need to be calibrated before they can be used.
A well known runoff model is the "linear reservoir", but in practice it has limited applicability.
The runoff model with a "non-linear reservoir" is more universally applicable, but still it holds only for catchments whose surface area is limited by the condition that the rainfall can be considered more or less uniformly distributed over the area. The maximum size of the watershed then depends on the rainfall characteristics of the region. When the study area is too large, it can be divided into sub-catchments and the various runoff hydrographs may be combined using flood routing techniques.
Linear reservoir.
The hydrology of a linear reservoir (figure 1) is governed by two equations:
1. the storage (or discharge) equation: formula_0
2. the continuity (or water balance) equation: formula_1
where:<br>
Q is the "runoff" or" discharge" <br>
R is the "effective rainfall" or "rainfall excess" or "recharge" <br>
A is the constant "reaction factor" or "response factor" with unit [1/T] <br>
S is the water storage with unit [L] <br>
dS is a differential or small increment of S<br>
dT is a differential or small increment of T
Runoff equation<br>
A combination of the two previous equations results in a differential equation, whose solution is: formula_2
This is the "runoff equation" or "discharge equation", where Q1 and Q2 are the values of Q at time T1 and T2 respectively while T2−T1 is a small time step during which the recharge can be assumed constant.
Computing the total hydrograph<br>
Provided the value of A is known, the "total hydrograph" can be obtained using a successive number of time steps and computing, with the "runoff equation", the runoff at the end of each time step from the runoff at the end of the previous time step.
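A short sketch of that successive computation (the recharge series, the reaction factor and the function name below are invented for illustration; units follow whatever is used for R, e.g. mm/day):

```python
import math

def linear_reservoir_hydrograph(recharge, A, Q0=0.0, dt=1.0):
    """Apply the runoff equation step by step.

    recharge : sequence of effective rainfall R per time step (assumed constant within a step)
    A        : reaction factor [1/T]
    Q0       : initial discharge
    Returns the discharge Q at the end of every time step (same units as R).
    """
    decay = math.exp(-A * dt)
    Q, hydrograph = Q0, []
    for R in recharge:
        Q = Q * decay + R * (1.0 - decay)   # the runoff equation for one time step
        hydrograph.append(Q)
    return hydrograph

# Three days of recharge followed by a dry spell: discharge rises, then recedes exponentially.
print([round(q, 2) for q in linear_reservoir_hydrograph([10, 10, 10, 0, 0, 0, 0], A=0.3)])
```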
Unit hydrograph<br>
The discharge may also be expressed as: Q = − dS/dT. Substituting herein the expression of Q in equation (1) gives the differential equation dS/dT = − A·S, of which the solution is: S = exp(− A·t). Replacing herein S by Q/A according to equation (1), it is obtained that: Q = A exp(− A·t). This is called the instantaneous unit hydrograph (IUH) because the Q herein equals Q2 of the foregoing runoff equation using "R" = 0, and taking S as "unity" which makes Q1 equal to A according to equation (1).<br>
The availability of the foregoing "runoff equation" eliminates the necessity of calculating the "total hydrograph" by the summation of partial hydrographs using the "IUH" as is done with the more complicated convolution method.
Determining the response factor A<br>
When the "response factor" A can be determined from the characteristics of the watershed (catchment area), the reservoir can be used as a "deterministic model" or "analytical model", see hydrological modelling.<br>
Otherwise, the factor A can be determined from a data record of rainfall and runoff using the method explained below under "non-linear reservoir". With this method the reservoir can be used as a black box model.
Conversions<br>
1 mm/day corresponds to 10 m3/day per ha of the watershed<br>
1 L/s per ha corresponds to 8.64 mm/day or 86.4 m3/day per ha
Non-linear reservoir.
<templatestyles src="Stack/styles.css"/>
Contrary to the linear reservoir, the non linear reservoir has a reaction factor A that is not a constant, but it is a function of S or Q (figure 2, 3).
Normally A increases with Q and S because the higher the water level is the higher the discharge capacity becomes. The factor is therefore called Aq instead of A.<br>
The non-linear reservoir has "no" usable unit hydrograph.
During periods without rainfall or recharge, i.e. when "R" = 0, the runoff equation reduces to
Q2 = Q1 exp(− Aq (T2 − T1))
or, using a "unit time step" (T2 − T1 = 1) and solving for Aq:
Aq = ln(Q1/Q2)
Hence, the reaction or response factor Aq can be determined from runoff or discharge measurements using "unit time steps" during dry spells, employing a numerical method.
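A minimal sketch of that estimation, applying Aq = ln(Q1/Q2) to successive unit-time-step readings (the recession measurements below are invented example values):

```python
import math

def reaction_factors(dry_spell_Q):
    """Estimate Aq from successive discharge readings taken during a dry spell (R = 0),
    one reading per unit time step: Aq = ln(Q1 / Q2) for each consecutive pair."""
    return [math.log(q1 / q2) for q1, q2 in zip(dry_spell_Q, dry_spell_Q[1:])]

# Recession measurements (arbitrary example values); for a non-linear reservoir the resulting
# Aq values change with Q (here they fall as the discharge falls) instead of staying constant.
print([round(a, 3) for a in reaction_factors([8.0, 5.5, 4.1, 3.2, 2.6])])
```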
Figure 3 shows the relation between Aq (Alpha) and Q for a small valley (Rogbom) in Sierra Leone.<br>
Figure 4 shows observed and "simulated" or "reconstructed" discharge hydrograph of the watercourse at the downstream end of the same valley.
Recharge.
The recharge, also called "effective rainfall" or "rainfall excess", can be modeled by a "pre-reservoir" (figure 6) giving the recharge as "overflow". The pre-reservoir knows the following elements:
The recharge during a unit time step (T2−T1=1) can be found from "R" = Rain − Sd<br>
The actual storage at the end of a "unit time step" is found as Sa2 = Sa1 + Rain − "R" − Ea, where Sa1 is the actual storage at the start of the time step.
The Curve Number method (CN method) gives another way to calculate the recharge. The "initial abstraction" herein compares with Sm − Si, where Si is the initial value of Sa.
Nash model.
The Nash model uses a series (cascade) of linear reservoirs in which each reservoir empties into the next until the runoff is obtained. For calibration, the model requires considerable research.
Software.
Figures 3 and 4 were made with the RainOff program, designed to analyse rainfall and runoff using the non-linear reservoir model with a pre-reservoir. The program also contains an example of the hydrograph of an agricultural subsurface drainage system for which the value of A can be obtained from the system's characteristics.
Raven is a robust and flexible hydrological modelling framework, designed for application to challenging hydrological problems in academia and practice. This fully object-oriented code provides complete flexibility in spatial discretization, interpolation, process representation, and forcing function generation. Models built with Raven can be as simple as a single watershed lumped model with only a handful of state variables to a full semi-distributed system model with physically-based infiltration, snowmelt, and routing. This flexibility encourages stepwise modelling while enabling investigation into critical research issues regarding discretization, numerical implementation, and ensemble simulation of surface water hydrological models. Raven is open source, covered under the Artistic License 2.0.
The SMART hydrological model includes agricultural subsurface drainage flow, in addition to soil and groundwater reservoirs, to simulate the flow path contributions to streamflow.
V"flo" is another software program for modeling runoff. V"flo" uses radar rainfall and GIS data to generate physics-based, distributed runoff simulation.
The WEAP (Water Evaluation And Planning) software platform models runoff and percolation from climate and land use data, using a choice of linear and non-linear reservoir models.
The RS MINERVE software platform simulates the formation of free surface run-off flow and its propagation in rivers or channels. The software is based on object-oriented programming and allows hydrologic and hydraulic modeling according to a semi-distributed conceptual scheme with different rainfall-runoff model such as HBV, GR4J, SAC-SMA or SOCONT.
The IHACRES is a catchment-scale rainfall-streamflow modelling methodology. Its purpose is to assist the hydrologist or water resources engineer to characterise the dynamic relationship between basin rainfall and streamflow. | [
{
"math_id": 0,
"text": "Q = A \\cdot S"
},
{
"math_id": 1,
"text": "R = Q + \\frac{dS}{dT}"
},
{
"math_id": 2,
"text": " Q_2 = Q_1 \\exp\\left(-A (T_2 - T_1)\\right) + R\\left[1 - \\exp\\left(-A (T_2 - T_1)\\right)\\right] "
}
] | https://en.wikipedia.org/wiki?curid=12404937 |
1240566 | Merkur XR4Ti | The Merkur XR4Ti is a performance-oriented 3-door hatchback sold in North America from 1985 to 1989. A product of the Ford Motor Company, the car was a version of the European Ford Sierra adapted to U.S. regulations. The XR4Ti project was championed by Ford vice president Bob Lutz.
History.
The Sierra was the successor to Ford of Europe's Cortina/Taunus, and was developed while Lutz was chairman of Ford's European operations. Due to financial limitations the decision was made to keep the front-engine, rear-wheel-drive layout of its predecessor, and pursue improved fuel economy through advanced aerodynamics. The Probe III design study unveiled at the 1981 Frankfurt Motor Show foreshadowed the direction Ford would take with the Sierra. Responsibility for the Sierra design was handled by vice president for design Uwe Bahnsen and chief stylist Patrick le Quément. The Sierra was released in Europe in September 1982, and the performance-oriented XR4i appeared in 1983, slotted into the lineup above the Fiesta-based XR2 and Escort-based XR3.
Lutz spearheaded the plan to bring a version of the XR4i to North America to compete with sporty luxury imports like BMW. Although modifications would be needed, his instructions were that the nature of the car not be compromised. The XR4 for America would be turbocharged, adding a 'T' to its name while keeping the 'i' indicating a fuel injected engine, as in Europe. The 'Sierra' name was not used in North America, since it was already used by General Motors for their GMC C/K Sierra pickup truck, and sounded too similar to the Oldsmobile Cutlass Ciera.
With their own production lines occupied building Sierras for the European market, Ford contracted out assembly of the XR4Ti. Using body panels from Ford's factory in Genk, the cars were largely hand-built by Wilhelm Karmann GmbH in Rheine, Germany. The XR4Ti was introduced at a starting price of US$16,503.
Chief executive officer Pete Petersen decided that the car would be sold under the 'Merkur' brand name. The name means 'Mercury' in German, and tied the new brand to the Lincoln-Mercury dealers through which the car would be sold. Initially, 800 Lincoln-Mercury dealers enrolled to also become Merkur dealers.
Ford projected sales of 16,000 to 20,000 units per year. These targets were never met, although for the first two years they came close, with over 25,000 units sold. The car continued to struggle to establish its identity in the North American market, both with the public and with dealers.
An increasingly unfavorable dollar/Deutschmark exchange rate put upward pressure on price. By the late 1980s the XR4Ti was facing a redesign to comply with incoming safety regulations in the US. Ford dropped the 'Merkur' name in 1988, and began to refer to their two European imports by their model names only. Sales dropped off rapidly after 1986, so that in its last year fewer than 3,000 XR4Tis were sold. 1989 was the last year for the XR4Ti.
The XR4Ti was the last vehicle imported by Ford into North America from Germany until 2016, when the Ford Focus RS was introduced.
Body, chassis, suspension.
The XR4Ti kept the 3-door semi-notchback hatchback body style of the XR4i, including the European version's triple side-window profile and bi-plane rear spoiler. The lower body was clad in polycarbonate 'anti-abrasion' panels that were matte grey in the early cars. The car's drag coefficient (formula_0) was 0.328.
The unibody chassis of the European Ford Sierra was modified to meet US safety requirements. The floorpan had reliefs added to accommodate catalytic converters. Side intrusion beams were added to the two doors, and the bumpers were stretched to meet US impact requirements. To accommodate the engine for the US-spec car, the XR4Ti also received a taller hood. Altogether 850 unique parts were developed for the car destined for the US and Canada, and these changes added approximately to the weight.
Suspension was independent front and rear. The front suspension comprised Macpherson struts with concentric coil springs and lower lateral links triangulated by an anti-roll bar. The rear suspension used semi-trailing arms with coil springs ahead of the axle half-shafts, and shock absorbers behind. An anti-roll bar was also fitted at the rear. Spring rates were softened compared to the XR4i, based on feedback from Jackie Stewart, who had been brought in as both a development tester and spokesman for the car. Steering was by a power-assisted rack and pinion with 3.6 turns lock-to-lock. Brakes were disks in front and drums at rear. The car had a two-piece driveshaft and used a giubo as a torsional damper.
Engine and transmission.
While the European XR4i was powered by a 2.8 L version of the Ford Cologne V6 engine, the only engine offered in the XR4Ti was a turbocharged Lima inline-four. This engine featured a cast-iron block, cast iron cylinder head with 2 valves per cylinder, and a single overhead cam driven by a timing belt. The XR4Ti engine also received a Garrett AiResearch turbocharger, fuel-injection and Ford's EEC-IV engine control unit. Built in Ford's Taubaté Brazil plant, the engine had a bore of and stroke of for a total displacement of . A nearly identical engine was used in the 1983 Mustang Turbo GT and 1983 Thunderbird Turbo coupe. The XR4Ti did not have the intercooler found in the 1984 SVO Mustang or 1987-88 Thunderbird Turbo coupe. Engines in cars equipped with automatic transmissions had maximum boost set to and produced . In cars with manual transmissions maximum boost was raised to and the ECU programming was modified to allow the engine to produce .
The second-order vibrations produced by this large four cylinder engine had been noticeable when it was used in the turbocharged Thunderbird and Cougar models. To minimize these in the XR4Ti without resorting to extreme measures such as adding balance shafts, extensive work was done to reduce noise, vibration, and harshness (NVH) in the power-train. The first measure taken to reduce NVH was to redesign the engine's external components, including the intake manifold, to increase the stiffness of the bracketing and lighten the components. The second measure was to reduce the amount of vibration transmitted to the body structure by using soft rubber engine mounts. Engine roll was controlled by wide-based mounting brackets, and engine movement due to bumps was limited by having the brackets attached to the body via hydraulic mounts.
The base transmission was a 5-speed manual Ford Type 9 unit, while a Ford C3 3-speed automatic transmission was optional.
Options and updates.
The XR4Ti arrived on the market with an extensive list of standard and optional equipment.
1985:
Many changes were made to the car over its five-year life. These include:
1986:
1987:
1988:
1989:
In addition to the changes above many modifications were incorporated as Technical Service Bulletins.
Special editions.
Several automotive customizers produced versions of the XR4Ti that offered increased performance and improved handling. Among these were Roush, Rapido, and RC Consultants. These conversions were sold as either owner-installed kits or pre-built vehicles.
One such special, called the "Scorch XS" and built by Ralph Todd, replaced the Ford engine with a twin-turbocharged Nissan VG30DETT V6. Only one of these US $50,000 conversions was built; Scorch Prototype #001. It was later upgraded with larger brakes and wheels. The last magazine article featuring the Scorch listed a revised cost of US $55,000. Creation of other Scorch conversions had been started, but never completed.
Ford only offered one special edition of the XR4Ti when it launched the XR4Ti K2 in 1987, the result of a tie-in with ski manufacturer K2 Sports. This all-white model came with colored "K2" logos on the front fenders and a roof-mounted ski rack. No mechanical changes were made to the K2 version.
Performance reviews and legacy.
In their September 1984 road test of the XR4Ti "Car & Driver" magazine reported a 0-60 mph time of 7.0 seconds, a 1/4 mile time of 15.5 seconds and a top speed of . In later tests by the same magazine the car took 7.8 seconds to accelerate from 0-60 mph, leading the testers to speculate that the earlier press car might have been a ringer, a not-uncommon practice at the time. In their test data they stated the press car came with a limited slip differential, something that was not offered as standard or an option for the XR4Ti during its production. The first test car returned a combined city/highway fuel economy of , and generated 0.80 Gs of lateral acceleration.
Numbers from the March 1985 road test by "Road & Track" magazine are comparable. Their car ran from 0-60 mph in 7.9 seconds and reached the end of the quarter mile in 16.0 seconds. Lateral acceleration was measured at 0.767 Gs and the car ran through the R&T slalom at a speed of 59.7 mph. Their car's fuel economy was measured as .
Contemporary reviews of the car often mention either the non-intuitive pronunciation of the brand name or the car's polarizing appearance. Most praise the car's handling, while recognizing areas where it falls short of expectations. Some felt the car showed more body roll while cornering than desired. The disk/drum brake system was felt to be adequate but not outstanding. Others were disappointed that no anti-lock was offered. While "Car and Driver's" first report went into great detail about Ford's efforts to reduce NVH, their later tests reported that there was still noticeable vibration in the drive-line. Effective use of the car has hampered by what some felt was the engine's narrow power band.
Even after production ended the car continued to provoke widely differing opinions. The XR4Ti was on "Car and Driver's" Ten Best list for 1985. In 2009 however, the magazine listed that honor as one of the "most embarrassing" awards in automotive history.
In addition to "Car and Driver"'s change of heart some have numbered the XR4Ti among the 10 worst cars ever made, and "not (Ford's) finest hour".
On the other hand, other articles have called it one of the 10 best forgotten cars, and a car that had unfairly received a bad reputation.
Technical data.
<templatestyles src="Template:Table alignment/tables.css" />
Motorsport.
Despite the XR4Ti never being sold outside of the United States and Canada, Andy Rouse campaigned one in the British Saloon Car Championship. Rouse took the overall title for the 1985 season and the class title for 1986 with 14 wins. In 1986, Eggenberger Motorsport was among the few to use an XR4Ti to compete in the European Touring Car Championship and the Deutsche Tourenwagen Meisterschaft (German Touring Car Championship) with positive results. Ford used technical feedback from the teams racing the XR4Ti to develop the 1986 Ford Sierra RS Cosworth, which first appeared on race tracks in 1987 and was superseded in mid-1987 by the Ford Sierra RS500. Some of the body panels used to stiffen the Sierra chassis and create the Merkur shell were subsequently branded 909 Motorsport parts for later adaptation to a Sierra shell. Many see the successes and failures of the XR4Ti as being the blueprint for success of the dominant RS500 Sierras.
Between 1986 and 1987, Pete Halsmer and Scott Pruett campaigned the Roush-prepped XR4Ti, although of a tubeframe construction like that of a silhouette racing car, to take the Trans-Am Series title. Along with Paul Miller, the pair also campaigned an XR4Ti successfully in the IMSA series in 1988.
Explanatory notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\scriptstyle C_\\mathrm d\\,"
}
] | https://en.wikipedia.org/wiki?curid=1240566 |
1240666 | Parity (physics) | Symmetry of spatially mirrored systems
In physics, a parity transformation (also called parity inversion) is the flip in the sign of "one" spatial coordinate. In three dimensions, it can also refer to the simultaneous flip in the sign of all three spatial coordinates (a point reflection):
formula_0
It can also be thought of as a test for chirality of a physical phenomenon, in that a parity inversion transforms a phenomenon into its mirror image.
All fundamental interactions of elementary particles, with the exception of the weak interaction, are symmetric under parity. As established by the Wu experiment conducted at the US National Bureau of Standards by Chinese-American scientist Chien-Shiung Wu, the weak interaction is chiral and thus provides a means for probing chirality in physics. In her experiment, Wu took advantage of the controlling role of weak interactions in radioactive decay of atomic isotopes to establish the chirality of the weak force.
By contrast, in interactions that are symmetric under parity, such as electromagnetism in atomic and molecular physics, parity serves as a powerful controlling principle underlying quantum transitions.
A matrix representation of P (in any number of dimensions) has determinant equal to −1, and hence is distinct from a rotation, which has a determinant equal to 1. In a two-dimensional plane, a simultaneous flip of all coordinates in sign is "not" a parity transformation; it is the same as a 180° rotation.
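A tiny numerical check of these determinant statements (the matrices are the obvious choices, written out with NumPy purely for illustration):

```python
import numpy as np

P3 = -np.eye(3)                          # flip all three spatial coordinates
P2 = -np.eye(2)                          # flip both coordinates of the plane
R180 = np.array([[-1.0, 0.0],
                 [0.0, -1.0]])           # rotation by 180 degrees

print(np.linalg.det(P3))                                  # -1.0: a genuine parity transformation
print(np.linalg.det(P2), bool(np.allclose(P2, R180)))     # 1.0 True: in 2D this is just a rotation

# In even dimensions a parity transformation must flip an odd number of coordinates instead:
mirror = np.diag([-1.0, 1.0])
print(np.linalg.det(mirror))             # -1.0
```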
In quantum mechanics, wave functions that are unchanged by a parity transformation are described as even functions, while those that change sign under a parity transformation are odd functions.
Simple symmetry relations.
Under rotations, classical geometrical objects can be classified into scalars, vectors, and tensors of higher rank. In classical physics, physical configurations need to transform under representations of every symmetry group.
Quantum theory predicts that states in a Hilbert space do not need to transform under representations of the group of rotations, but only under projective representations. The word "projective" refers to the fact that if one projects out the phase of each state, where we recall that the overall phase of a quantum state is not observable, then a projective representation reduces to an ordinary representation. All representations are also projective representations, but the converse is not true, therefore the projective representation condition on quantum states is weaker than the representation condition on classical states.
The projective representations of any group are isomorphic to the ordinary representations of a central extension of the group. For example, projective representations of the 3-dimensional rotation group, which is the special orthogonal group SO(3), are ordinary representations of the special unitary group SU(2). Projective representations of the rotation group that are not representations are called spinors and so quantum states may transform not only as tensors but also as spinors.
If one adds to this a classification by parity, these can be extended, for example, into notions of
* "scalars" ("P" = +1) and "pseudoscalars" ("P" = −1), which are rotationally invariant, and
* "vectors" ("P" = −1) and "axial vectors" (also called "pseudovectors") ("P" = +1), which both transform as vectors under rotation.
One can define reflections such as
formula_1
which also have negative determinant and form a valid parity transformation. Then, combining them with rotations (or successively performing "x"-, "y"-, and "z"-reflections) one can recover the particular parity transformation defined earlier. The first parity transformation given does not work in an even number of dimensions, though, because it results in a positive determinant. In even dimensions only the latter example of a parity transformation (or any reflection of an odd number of coordinates) can be used.
Parity forms the abelian group formula_2 due to the relation formula_3. All Abelian groups have only one-dimensional irreducible representations. For formula_2, there are two irreducible representations: one is even under parity, formula_4, the other is odd, formula_5. These are useful in quantum mechanics. However, as is elaborated below, in quantum mechanics states need not transform under actual representations of parity but only under projective representations and so in principle a parity transformation may rotate a state by any phase.
Representations of O(3).
An alternative way to write the above classification of scalars, pseudoscalars, vectors and pseudovectors is in terms of the representation space that each object transforms in. This can be given in terms of the group homomorphism formula_6 which defines the representation. For a matrix formula_7
* scalars transform with formula_8 (the trivial representation),
* pseudoscalars with formula_9,
* vectors with formula_10 (the fundamental representation), and
* pseudovectors with formula_11
When the representation is restricted to formula_12, scalars and pseudoscalars transform identically, as do vectors and pseudovectors.
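As a concrete check of these transformation rules, here is a minimal numerical sketch (added for illustration, not taken from the article's sources): it applies the three-dimensional parity matrix to two polar vectors and to their cross product. The variable names and values are arbitrary.

```python
import numpy as np

P = -np.eye(3)  # parity in three dimensions: every coordinate changes sign

r = np.array([1.0, 2.0, 3.0])   # a polar vector (e.g. a position)
p = np.array([0.5, -1.0, 2.0])  # another polar vector (e.g. a momentum)

# Vectors transform with rho(R) = R, so they flip sign under P.
r_p, p_p = P @ r, P @ p

# A pseudovector such as L = r x p transforms with rho(R) = det(R) R,
# so it is unchanged: (-r) x (-p) = r x p.
L, L_p = np.cross(r, p), np.cross(r_p, p_p)

assert np.allclose(r_p, -r) and np.allclose(p_p, -p)
assert np.allclose(L_p, L)
print(L, L_p)
```

Because both factors of the cross product change sign, the product itself does not, which is exactly the pseudovector behaviour described above.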
Classical mechanics.
Newton's equation of motion formula_13 (if the mass is constant) equates two vectors, and hence is invariant under parity. The law of gravity also involves only vectors and is also, therefore, invariant under parity.
However, angular momentum formula_14 is an axial vector,
formula_15
In classical electrodynamics, the charge density formula_6 is a scalar, the electric field, formula_16, and current formula_17 are vectors, but the magnetic field, formula_18 is an axial vector. However, Maxwell's equations are invariant under parity because the curl of an axial vector is a vector.
Effect of spatial inversion on some variables of classical physics.
The two major divisions of classical physical variables have either even or odd parity. The way in which particular variables and vectors sort into either category depends on whether the "number of dimensions" of space is odd or even. The categories of "odd" or "even" given below for the "parity transformation" are a different, but intimately related, issue.
The answers given below are correct for 3 spatial dimensions. In a 2-dimensional space, for example, when constrained to remain on the surface of a planet, some of the variables switch categories.
Odd.
Classical variables whose signs flip when inverted in space inversion are predominantly vectors. They include:
<templatestyles src="Div col/styles.css"/>* formula_19, helicity
* formula_20, magnetic flux
* formula_21, position of a particle in three-space
* formula_22, velocity of a particle
* formula_23, acceleration of the particle
* formula_24, linear momentum of a particle
* formula_25, mass flux (momentum density)
* formula_26, force exerted on a particle
* formula_27, electric current density
* formula_28, electric field
* formula_29, electric displacement field
* formula_30, electric polarization
* formula_31, electromagnetic vector potential
* formula_32, Poynting vector
Even.
Classical variables, predominantly scalar quantities, which do not change upon spatial inversion include:
<templatestyles src="Div col/styles.css"/>* formula_33, the time when an event occurs
* formula_34, the mass of a particle
* formula_35, the energy of the particle
* formula_36, power (rate of work done)
* formula_37, the electric charge density
* formula_38, the electric potential (voltage)
* formula_39, the angular momentum of a particle (both orbital and spin)
* formula_40, the magnetic field
* formula_41, the auxiliary magnetic field
* formula_42, the magnetization
* formula_43, Maxwell stress tensor
Quantum mechanics.
Possible eigenvalues.
In quantum mechanics, spacetime transformations act on quantum states. The parity transformation, formula_44, is a unitary operator, in general acting on a state formula_45 as follows: formula_46.
One must then have formula_47, since an overall phase is unobservable. The operator formula_48, which reverses the parity of a state twice, leaves the spacetime invariant, and so is an internal symmetry which rotates its eigenstates by phases formula_49. If formula_48 is an element formula_50 of a continuous U(1) symmetry group of phase rotations, then formula_51 is part of this U(1) and so is also a symmetry. In particular, we can define formula_52, which is also a symmetry, and so we can choose to call formula_53 our parity operator, instead of formula_44. Note that formula_54 and so formula_53 has eigenvalues formula_55. Wave functions with eigenvalue formula_56 under a parity transformation are even functions, while eigenvalue formula_57 corresponds to odd functions. However, when no such symmetry group exists, it may be that all parity transformations have some eigenvalues which are phases other than formula_55.
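A minimal symbolic sketch of this even/odd classification, assuming SymPy is available; the two trial wave functions are illustrative choices, not drawn from the article.

```python
import sympy as sp

x = sp.symbols('x', real=True)

# Two illustrative (unnormalised) wave functions: a Gaussian is even,
# x times a Gaussian is odd.
psi_even = sp.exp(-x**2)
psi_odd = x * sp.exp(-x**2)

def parity_eigenvalue(psi):
    """Return +1 if psi(-x) = psi(x), -1 if psi(-x) = -psi(x), else None."""
    reflected = psi.subs(x, -x)
    if sp.simplify(reflected - psi) == 0:
        return +1
    if sp.simplify(reflected + psi) == 0:
        return -1
    return None

print(parity_eigenvalue(psi_even))  # +1, an even function
print(parity_eigenvalue(psi_odd))   # -1, an odd function
```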
For electronic wavefunctions, even states are usually indicated by a subscript g for "gerade" (German: even) and odd states by a subscript u for "ungerade" (German: odd). For example, the lowest energy level of the hydrogen molecule ion (H2+) is labelled formula_58 and the next-closest (higher) energy level is labelled formula_59.
The wave functions of a particle moving in an external potential which is centrosymmetric (potential energy invariant with respect to a space inversion, symmetric about the origin) either remain unchanged or change sign: these two possible states are called the even state and the odd state of the wave functions.
The law of conservation of parity of particles states that, if an isolated ensemble of particles has a definite parity, then the parity remains invariant as the ensemble evolves. However, this is not true for the beta decay of nuclei, because the weak nuclear interaction violates parity.
The parity of the states of a particle moving in a spherically symmetric external field is determined by the angular momentum, and the particle state is defined by three quantum numbers: total energy, angular momentum and the projection of angular momentum.
Consequences of parity symmetry.
When parity generates the Abelian group formula_2, one can always take linear combinations of quantum states such that they are either even or odd under parity (see the figure). Thus the parity of such states is ±1. The parity of a multiparticle state is the product of the parities of each state; in other words parity is a multiplicative quantum number.
In quantum mechanics, Hamiltonians are invariant (symmetric) under a parity transformation if formula_60 commutes with the Hamiltonian. In non-relativistic quantum mechanics, this happens for any scalar potential, i.e., formula_61, hence the potential is spherically symmetric. The following facts can be easily proven:
* If formula_62 and formula_63 have the same parity, then formula_64 where formula_65 is the position operator.
* For a state formula_66 of orbital angular momentum formula_67 with z-axis projection formula_68, formula_69.
* If formula_70, then atomic dipole transitions only occur between states of opposite parity.
* If formula_71, then a non-degenerate eigenfunction of the Hamiltonian is also an eigenfunction of the parity operator.
Some of the non-degenerate eigenfunctions of formula_72 are unaffected (invariant) by parity formula_60 and the others are merely reversed in sign when the Hamiltonian operator and the parity operator commute:
formula_73
where formula_74 is a constant, the eigenvalue of formula_60,
formula_75
Many-particle systems: atoms, molecules, nuclei.
The overall parity of a many-particle system is the product of the parities of the one-particle states. It is −1 if an odd number of particles are in odd-parity states, and +1 otherwise. Different notations are in use to denote the parity of nuclei, atoms, and molecules.
Atoms.
Atomic orbitals have parity (−1)"ℓ", where the exponent ℓ is the azimuthal quantum number. The parity is odd for orbitals p, f, ... with ℓ = 1, 3, ..., and an atomic state has odd parity if an odd number of electrons occupy these orbitals. For example, the ground state of the nitrogen atom has the electron configuration 1s22s22p3, and is identified by the term symbol 4So, where the superscript o denotes odd parity. However, the third excited term at about 83,300 cm−1 above the ground state has the electron configuration 1s22s22p23s and has even parity since there are only two 2p electrons; its term symbol is 4P (without an o superscript).
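The rule that a configuration's parity is (−1) raised to the sum of the azimuthal quantum numbers of its electrons can be mechanised in a few lines. The helper below is a hypothetical sketch; the configuration tuples are just the nitrogen examples quoted above.

```python
# Hypothetical helper: parity of a configuration is (-1) raised to the sum
# of the azimuthal quantum numbers l of all occupied electrons.
L_OF_SUBSHELL = {'s': 0, 'p': 1, 'd': 2, 'f': 3}

def configuration_parity(config):
    """config: list of (subshell label, electron count), e.g. [('2p', 3)]."""
    total_l = sum(L_OF_SUBSHELL[label[-1]] * n for label, n in config)
    return (-1) ** total_l

# Nitrogen ground state 1s2 2s2 2p3: three p electrons, so odd parity.
print(configuration_parity([('1s', 2), ('2s', 2), ('2p', 3)]))              # -1
# Excited configuration 1s2 2s2 2p2 3s1: two p electrons, so even parity.
print(configuration_parity([('1s', 2), ('2s', 2), ('2p', 2), ('3s', 1)]))   # 1
```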
Molecules.
The complete (rotational-vibrational-electronic-nuclear spin) electromagnetic Hamiltonian of any molecule commutes with (or is invariant to) the parity operation P (or E*, in the notation introduced by Longuet-Higgins) and its eigenvalues can be given the parity symmetry label + or - as they are even or odd, respectively. The parity operation involves the inversion of electronic and nuclear spatial coordinates at the molecular center of mass.
Centrosymmetric molecules at equilibrium have a centre of symmetry at their midpoint (the nuclear center of mass). This includes all homonuclear diatomic molecules as well as certain symmetric molecules such as ethylene, benzene, xenon tetrafluoride and sulphur hexafluoride. For centrosymmetric molecules, the point group contains the operation i which is not to be confused with the parity operation. The operation i involves the inversion of the electronic and vibrational displacement coordinates at the nuclear centre of mass. For centrosymmetric molecules the operation i commutes with the rovibronic (rotation-vibration-electronic) Hamiltonian and can be used to label such states. Electronic and vibrational states of centrosymmetric molecules are either unchanged by the operation i, or they are changed in sign by i. The former are denoted by the subscript g and are called gerade, while the latter are denoted by the subscript u and are called ungerade. The complete electromagnetic Hamiltonian of a centrosymmetric molecule
does not commute with the point group inversion operation i because of the effect of the nuclear hyperfine Hamiltonian. The nuclear hyperfine Hamiltonian can mix the rotational levels of g and u vibronic states (called "ortho-para" mixing) and give rise to "ortho"-"para" transitions.
Nuclei.
In atomic nuclei, the state of each nucleon (proton or neutron) has even or odd parity, and nucleon configurations can be predicted using the nuclear shell model. As for electrons in atoms, the nucleon state has odd overall parity if and only if the number of nucleons in odd-parity states is odd. The parity is usually written as a + (even) or − (odd) following the nuclear spin value. For example, the isotopes of oxygen include 17O(5/2+), meaning that the spin is 5/2 and the parity is even. The shell model explains this because the first 16 nucleons are paired so that each pair has spin zero and even parity, and the last nucleon is in the 1d5/2 shell, which has even parity since ℓ = 2 for a d orbital.
Quantum field theory.
If one can show that the vacuum state is invariant under parity, formula_76, the Hamiltonian is parity invariant (the commutator formula_77 vanishes) and the quantization conditions remain unchanged under parity, then it follows that every state has good parity, and this parity is conserved in any reaction.
To show that quantum electrodynamics is invariant under parity, we have to prove that the action is invariant and the quantization is also invariant. For simplicity we will assume that canonical quantization is used; the vacuum state is then invariant under parity by construction. The invariance of the action follows from the classical invariance of Maxwell's equations. The invariance of the canonical quantization procedure can be worked out, and turns out to depend on the transformation of the annihilation operator:
formula_78
where formula_79 denotes the momentum of a photon and formula_80 refers to its polarization state. This is equivalent to the statement that the photon has odd intrinsic parity. Similarly all vector bosons can be shown to have odd intrinsic parity, and all axial-vectors to have even intrinsic parity.
A straightforward extension of these arguments to scalar field theories shows that scalars have even parity. That is, formula_81, since
formula_82
This is true even for a complex scalar field. (Details of spinors are dealt with in the article on the Dirac equation, where it is shown that fermions and antifermions have opposite intrinsic parity.)
With fermions, there is a slight complication because there is more than one pin group.
Parity in the Standard Model.
Fixing the global symmetries.
Applying the parity operator twice leaves the coordinates unchanged, meaning that "P"2 must act as one of the internal symmetries of the theory, at most changing the phase of a state. For example, the Standard Model has three global U(1) symmetries with charges equal to the baryon number "B", the lepton number "L", and the electric charge "Q". Therefore, the parity operator satisfies "P"2 = "e""iαB"+"iβL"+"iγQ" for some choice of α, β, and γ. This operator is also not unique in that a new parity operator P' can always be constructed by multiplying it by an internal symmetry such as P' = P "e""iαB" for some "α".
To see if the parity operator can always be defined to satisfy P2 = 1, consider the general case when P2 = Q for some internal symmetry Q present in the theory. The desired parity operator would be P' = PQ−1/2. If Q is part of a continuous symmetry group then Q−1/2 exists, but if it is part of a discrete symmetry then this element need not exist and such a redefinition may not be possible.
The Standard Model exhibits a (−1)"F" symmetry, where "F" is the fermion number operator counting how many fermions are in a state. Since all particles in the Standard Model satisfy "F" = "B" + "L", the discrete symmetry is also part of the "e""iα"("B" + "L") continuous symmetry group. If the parity operator satisfied P2 = (−1)"F", then it can be redefined to give a new parity operator satisfying P2 = 1. But if the Standard Model is extended by incorporating Majorana neutrinos, which have "F" = 1 and "B" + "L" = 0, then the discrete symmetry (−1)"F" is no longer part of the continuous symmetry group and the desired redefinition of the parity operator cannot be performed. Instead it satisfies P4 = 1 so the Majorana neutrinos would have intrinsic parities of ±"i".
Parity of the pion.
In 1954, a paper by William Chinowsky and Jack Steinberger demonstrated that the pion has negative parity.
They studied the decay of an "atom" made from a deuteron (hydrogen-2) and a negatively charged pion (π−) in a state with zero orbital angular momentum formula_83 into two neutrons (formula_84).
Neutrons are fermions and so obey Fermi–Dirac statistics, which implies that the final state is antisymmetric. Using the fact that the deuteron has spin one and the pion spin zero together with the antisymmetry of the final state they concluded that the two neutrons must have orbital angular momentum formula_85 The total parity is the product of the intrinsic parities of the particles and the extrinsic parity of the spherical harmonic function formula_86 Since the orbital momentum changes from zero to one in this process, if the process is to conserve the total parity then the products of the intrinsic parities of the initial and final particles must have opposite sign. A deuteron nucleus is made from a proton and a neutron, and so using the aforementioned convention that protons and neutrons have intrinsic parities equal to formula_87 they argued that the parity of the pion is equal to minus the product of the parities of the two neutrons divided by that of the proton and neutron in the deuteron, explicitly formula_88 from which they concluded that the pion is a pseudoscalar particle.
Parity violation.
Although parity is conserved in electromagnetism and gravity, it is violated in weak interactions, and perhaps, to some degree, in strong interactions. The Standard Model incorporates parity violation by expressing the weak interaction as a chiral gauge interaction. Only the left-handed components of particles and right-handed components of antiparticles participate in charged weak interactions in the Standard Model. This implies that parity is not a symmetry of our universe, unless a hidden mirror sector exists in which parity is violated in the opposite way.
An obscure 1928 experiment, undertaken by R. T. Cox, G. C. McIlwraith, and B. Kurrelmeyer, had in effect reported parity violation in weak decays, but, since the appropriate concepts had not yet been developed, those results had no impact. In 1929, Hermann Weyl explored, without any evidence, the existence of a two-component massless particle of spin one-half. This idea was rejected by Pauli, because it implied parity violation.
By the mid-20th century, it had been suggested by several scientists that parity might not be conserved (in different contexts), but without solid evidence these suggestions were not considered important. Then, in 1956, a careful review and analysis by theoretical physicists Tsung-Dao Lee and Chen-Ning Yang went further, showing that while parity conservation had been verified in decays by the strong or electromagnetic interactions, it was untested in the weak interaction. They proposed several possible direct experimental tests. They were mostly ignored, but Lee was able to convince his Columbia colleague Chien-Shiung Wu to try it. She needed special cryogenic facilities and expertise, so the experiment was done at the National Bureau of Standards.
Wu, Ambler, Hayward, Hoppes, and Hudson (1957) found a clear violation of parity conservation in the beta decay of cobalt-60. As the experiment was winding down, with double-checking in progress, Wu informed Lee and Yang of their positive results and, saying that the results needed further examination, asked them not to publicize them first. However, Lee revealed the results to his Columbia colleagues on 4 January 1957 at a "Friday lunch" gathering of the Physics Department of Columbia. Three of them, R. L. Garwin, L. M. Lederman, and R. M. Weinrich, modified an existing cyclotron experiment, and immediately verified the parity violation. They delayed publication of their results until after Wu's group was ready, and the two papers appeared back-to-back in the same physics journal.
The discovery of parity violation explained the outstanding τ–θ puzzle in the physics of kaons.
In 2010, it was reported that physicists working with the Relativistic Heavy Ion Collider had created a short-lived parity symmetry-breaking bubble in quark–gluon plasmas. An experiment conducted by several physicists in the STAR collaboration suggested that parity may also be violated in the strong interaction. It is predicted that this local parity violation manifests itself via the chiral magnetic effect.
Intrinsic parity of hadrons.
To every particle one can assign an intrinsic parity as long as nature preserves parity. Although weak interactions do not, one can still assign a parity to any hadron by examining the strong interaction reaction that produces it, or through decays not involving the weak interaction, such as rho meson decay to pions.
References.
Footnotes
<templatestyles src="Reflist/styles.css" />
Citations
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{P}: \\begin{pmatrix}x\\\\y\\\\z\\end{pmatrix} \\mapsto \\begin{pmatrix}-x\\\\-y\\\\-z\\end{pmatrix}."
},
{
"math_id": 1,
"text": "V_x: \\begin{pmatrix}x\\\\y\\\\z\\end{pmatrix} \\mapsto \\begin{pmatrix}-x\\\\y\\\\z\\end{pmatrix},"
},
{
"math_id": 2,
"text": "\\mathbb{Z}_2"
},
{
"math_id": 3,
"text": "\\hat{\\mathcal P}^2 = \\hat{1}"
},
{
"math_id": 4,
"text": "\\hat{\\mathcal P}\\phi = +\\phi"
},
{
"math_id": 5,
"text": "\\hat{\\mathcal P}\\phi = -\\phi"
},
{
"math_id": 6,
"text": "\\rho"
},
{
"math_id": 7,
"text": "R\\in \\text{O}(3),"
},
{
"math_id": 8,
"text": "\\rho(R) = 1"
},
{
"math_id": 9,
"text": "\\rho(R) = \\det(R)"
},
{
"math_id": 10,
"text": "\\rho(R) = R"
},
{
"math_id": 11,
"text": "\\rho(R) = \\det(R)R."
},
{
"math_id": 12,
"text": "\\text{SO}(3)"
},
{
"math_id": 13,
"text": "\\mathbf{F} = m\\mathbf{a}"
},
{
"math_id": 14,
"text": "\\mathbf{L}"
},
{
"math_id": 15,
"text": "\\begin{align}\n \\mathbf{L} &= \\mathbf{r}\\times\\mathbf{p} \\\\\n \\hat{P}\\left(\\mathbf{L}\\right) &= (-\\mathbf{r}) \\times (-\\mathbf{p}) = \\mathbf{L}.\n\\end{align}"
},
{
"math_id": 16,
"text": "\\mathbf{E}"
},
{
"math_id": 17,
"text": "\\mathbf{j}"
},
{
"math_id": 18,
"text": "\\mathbf{B}"
},
{
"math_id": 19,
"text": " h "
},
{
"math_id": 20,
"text": " \\Phi "
},
{
"math_id": 21,
"text": " \\mathbf x "
},
{
"math_id": 22,
"text": " \\mathbf v "
},
{
"math_id": 23,
"text": " \\mathbf a "
},
{
"math_id": 24,
"text": " \\mathbf p "
},
{
"math_id": 25,
"text": " \\rho \\, \\mathbf v "
},
{
"math_id": 26,
"text": " \\mathbf F "
},
{
"math_id": 27,
"text": " \\mathbf J "
},
{
"math_id": 28,
"text": " \\mathbf E "
},
{
"math_id": 29,
"text": " \\mathbf D "
},
{
"math_id": 30,
"text": " \\mathbf P "
},
{
"math_id": 31,
"text": " \\mathbf A "
},
{
"math_id": 32,
"text": " \\mathbf S "
},
{
"math_id": 33,
"text": " t "
},
{
"math_id": 34,
"text": " m "
},
{
"math_id": 35,
"text": " E "
},
{
"math_id": 36,
"text": " P "
},
{
"math_id": 37,
"text": " \\rho "
},
{
"math_id": 38,
"text": " V "
},
{
"math_id": 39,
"text": " \\mathbf L "
},
{
"math_id": 40,
"text": " \\mathbf B "
},
{
"math_id": 41,
"text": " \\mathbf H "
},
{
"math_id": 42,
"text": " \\mathbf M "
},
{
"math_id": 43,
"text": " T_{ij} "
},
{
"math_id": 44,
"text": "\\hat{\\mathcal P}"
},
{
"math_id": 45,
"text": "\\psi"
},
{
"math_id": 46,
"text": "\\hat{\\mathcal P}\\, \\psi{\\left(r\\right)} = e^{{i\\phi}/{2}}\\psi{\\left(-r\\right)}"
},
{
"math_id": 47,
"text": "\\hat{\\mathcal P}^2\\, \\psi{\\left(r\\right)} = e^{i\\phi}\\psi{\\left(r\\right)}"
},
{
"math_id": 48,
"text": "\\hat{\\mathcal P}^2"
},
{
"math_id": 49,
"text": "e^{i\\phi}"
},
{
"math_id": 50,
"text": "e^{iQ}"
},
{
"math_id": 51,
"text": "e^{-iQ}"
},
{
"math_id": 52,
"text": "\\hat{\\mathcal P}' \\equiv \\hat{\\mathcal P}\\, e^{-{iQ}/{2}}"
},
{
"math_id": 53,
"text": "\\hat{\\mathcal P}'"
},
{
"math_id": 54,
"text": "{\\hat{\\mathcal P}'}^2 = 1"
},
{
"math_id": 55,
"text": "\\pm 1"
},
{
"math_id": 56,
"text": "+1"
},
{
"math_id": 57,
"text": "-1"
},
{
"math_id": 58,
"text": "1\\sigma_g"
},
{
"math_id": 59,
"text": "1\\sigma_u"
},
{
"math_id": 60,
"text": "\\hat{\\mathcal{P}}"
},
{
"math_id": 61,
"text": " V = V{\\left(r\\right)}"
},
{
"math_id": 62,
"text": "| \\varphi \\rangle"
},
{
"math_id": 63,
"text": "| \\psi \\rangle"
},
{
"math_id": 64,
"text": "\\langle \\varphi | \\hat{X} | \\psi \\rangle = 0"
},
{
"math_id": 65,
"text": "\\hat{X}"
},
{
"math_id": 66,
"text": "\\bigl|\\vec{L}, L_z\\bigr\\rangle"
},
{
"math_id": 67,
"text": "\\vec{L}"
},
{
"math_id": 68,
"text": "L_z"
},
{
"math_id": 69,
"text": "\\hat{\\mathcal{P}} \\bigl|\\vec{L}, L_z\\bigr\\rangle = \\left(-1\\right)^{L} \\bigl|\\vec{L}, L_z \\bigr\\rangle"
},
{
"math_id": 70,
"text": "\\bigl[\\hat{H},\\hat{\\mathcal P}\\bigr] = 0 "
},
{
"math_id": 71,
"text": "\\bigl[\\hat{H}, \\hat{\\mathcal P}\\bigr] = 0"
},
{
"math_id": 72,
"text": "\\hat{H}"
},
{
"math_id": 73,
"text": "\\hat{\\mathcal{P}}| \\psi \\rangle = c \\left| \\psi \\right\\rangle,"
},
{
"math_id": 74,
"text": "c"
},
{
"math_id": 75,
"text": "\\hat{\\mathcal{P}}^2\\left| \\psi \\right\\rangle = c\\,\\hat{\\mathcal{P}}\\left| \\psi \\right\\rangle."
},
{
"math_id": 76,
"text": "\\hat{\\mathcal{P}}\\left| 0 \\right\\rangle = \\left| 0 \\right\\rangle"
},
{
"math_id": 77,
"text": "\\left[\\hat{H},\\hat{\\mathcal{P}}\\right]"
},
{
"math_id": 78,
"text": "\\mathbf{Pa}(\\mathbf{p}, \\pm)\\mathbf{P}^{+} = \\mathbf{a}(-\\mathbf{p}, \\pm)"
},
{
"math_id": 79,
"text": "\\mathbf{p}"
},
{
"math_id": 80,
"text": "\\pm"
},
{
"math_id": 81,
"text": "\\mathsf{P}\\phi(-\\mathbf{x},t)\\mathsf{P}^{-1}=\\phi(\\mathbf{x},t)"
},
{
"math_id": 82,
"text": "\\mathbf{Pa}(\\mathbf{p})\\mathbf{P}^{+} = \\mathbf{a}(-\\mathbf{p})"
},
{
"math_id": 83,
"text": "~ \\mathbf L = \\boldsymbol 0 ~"
},
{
"math_id": 84,
"text": "n"
},
{
"math_id": 85,
"text": "~ L = 1 ~."
},
{
"math_id": 86,
"text": "~ \\left( -1 \\right)^L ~."
},
{
"math_id": 87,
"text": "~+1~"
},
{
"math_id": 88,
"text": "\\frac{(-1)(1)^2}{(1)^2} = -1 ~,"
}
] | https://en.wikipedia.org/wiki?curid=1240666 |
1240699 | Dehn surgery | Operation used to modify three-dimensional topological spaces
In topology, a branch of mathematics, a Dehn surgery, named after Max Dehn, is a construction used to modify 3-manifolds. The process takes as input a 3-manifold together with a link. It is often conceptualized as two steps: "drilling" then "filling".
Definitions.
Given a 3-manifold formula_0 and a link formula_1, the manifold formula_0 "drilled" along formula_2 is obtained by removing an open tubular neighborhood of formula_2 from formula_0. If formula_3, the drilled manifold has formula_4 torus boundary components formula_5. The manifold formula_0 drilled along formula_2 is also known as the link complement, since removing the corresponding closed tubular neighborhood from formula_0 yields a manifold diffeomorphic to formula_6. "Filling" means gluing a solid torus back along each of these boundary tori.
In order to describe a Dehn surgery, one picks two oriented simple closed curves formula_8 and formula_9 on the corresponding boundary torus formula_7 of the drilled 3-manifold, where formula_8 is a meridian of formula_10 (a curve staying in a small ball in formula_0 and having linking number +1 with formula_10 or, equivalently, a curve that bounds a disc intersecting the component formula_10 exactly once) and formula_9 is a longitude of formula_7 (a curve travelling once along formula_10 or, equivalently, a curve on formula_7 such that the algebraic intersection formula_11 is equal to +1).
The curves formula_8 and formula_9 generate the fundamental group of the torus formula_7, and they form a basis of its first homology group. This gives any simple closed curve formula_12 on the torus formula_7 two coordinates formula_13 and formula_14, so that formula_15. These coordinates only depend on the homotopy class of formula_12.
We can specify a homeomorphism of the boundary of a solid torus to formula_7 by having the meridian curve of the solid torus map to a curve homotopic to formula_12. As long as the meridian maps to the surgery slope formula_16, the resulting Dehn surgery will yield a 3-manifold that will not depend on the specific gluing (up to homeomorphism). The ratio formula_17 is called the surgery coefficient of formula_10.
In the case of links in the 3-sphere or more generally an oriented integral homology sphere, there is a canonical choice of the longitudes formula_9: every longitude is chosen so that it is null-homologous in the knot complement—equivalently, if it is the boundary of a Seifert surface.
When the ratios formula_18 are all integers (note that this condition does not depend on the choice of the longitudes, since it corresponds to the new meridians intersecting the old meridians exactly once), the surgery is called an integral surgery.
Such surgeries are closely related to handlebodies, cobordism and Morse functions.
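A small sketch of the slope bookkeeping described above, assuming the standard fact that a simple closed curve on the torus corresponds to a coprime coordinate pair; the function names are made up for illustration.

```python
from fractions import Fraction
from math import gcd

def surgery_coefficient(a, b):
    """Coefficient b/a of the slope [a*l + b*m] on a boundary torus.

    A simple closed curve requires gcd(a, b) = 1; a = 0 is the meridional
    slope, for which the filling is trivial (coefficient "infinity").
    """
    if gcd(abs(a), abs(b)) != 1:
        raise ValueError("a simple closed curve needs coprime coordinates")
    return 'inf' if a == 0 else Fraction(b, a)

def is_integral(a, b):
    """Integral surgery: the coefficient b/a is an integer, i.e. |a| = 1."""
    coeff = surgery_coefficient(a, b)
    return coeff != 'inf' and coeff.denominator == 1

print(surgery_coefficient(1, -1), is_integral(1, -1))  # -1 True
print(surgery_coefficient(2, 3), is_integral(2, 3))    # 3/2 False
```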
Results.
Every closed, orientable, connected 3-manifold is obtained by performing Dehn surgery on a link in the 3-sphere. This result, the Lickorish–Wallace theorem, was first proven by Andrew H. Wallace in 1960 and independently by W. B. R. Lickorish in a stronger form in 1962. Via the now well-known relation between genuine surgery and cobordism, this result is equivalent to the theorem that the oriented cobordism group of 3-manifolds is trivial, a theorem originally proved by Vladimir Abramovich Rokhlin in 1951.
Since orientable 3-manifolds can all be generated by suitably decorated links, one might ask how distinct surgery presentations of a given 3-manifold might be related. The answer is called the Kirby calculus.
Footnotes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M"
},
{
"math_id": 1,
"text": "L \\subset M"
},
{
"math_id": 2,
"text": "L"
},
{
"math_id": 3,
"text": "L = L_1\\cup\\dots\\cup L_k "
},
{
"math_id": 4,
"text": "k"
},
{
"math_id": 5,
"text": "T_1\\cup\\dots\\cup T_k"
},
{
"math_id": 6,
"text": "M \\setminus L"
},
{
"math_id": 7,
"text": "T_i"
},
{
"math_id": 8,
"text": "m_i"
},
{
"math_id": 9,
"text": "\\ell_i"
},
{
"math_id": 10,
"text": "L_i"
},
{
"math_id": 11,
"text": "\\langle\\ell_i, m_i\\rangle"
},
{
"math_id": 12,
"text": "\\gamma_i"
},
{
"math_id": 13,
"text": "a_i"
},
{
"math_id": 14,
"text": "b_i"
},
{
"math_id": 15,
"text": "[\\gamma_i] = [a_i \\ell_i+b_i m_i]"
},
{
"math_id": 16,
"text": "[\\gamma_i]"
},
{
"math_id": 17,
"text": "b_i/a_i\\in\\mathbb{Q}\\cup\\{\\infty\\}"
},
{
"math_id": 18,
"text": "b_i/a_i"
},
{
"math_id": 19,
"text": " 0"
},
{
"math_id": 20,
"text": "\\mathbb{S}^2\\times \\mathbb{S}^1"
},
{
"math_id": 21,
"text": "b/a"
},
{
"math_id": 22,
"text": "L(b,a)"
},
{
"math_id": 23,
"text": "\\pm1/r"
},
{
"math_id": 24,
"text": "+1"
}
] | https://en.wikipedia.org/wiki?curid=1240699 |
12407025 | Muzzle blast | Explosive shockwave from firearm muzzle
A muzzle blast is an explosive shockwave created at the muzzle of a firearm during shooting. Before a projectile leaves the gun barrel, it obturates the bore and "plugs up" the pressurized gaseous products of the propellant combustion behind it, essentially containing the gases within a closed system, so that they remain an internal part of the system's momentum balance. However, when the projectile exits the barrel, this functional seal is removed and the highly energetic bore gases are suddenly free to exit the muzzle and expand rapidly in the form of a supersonic shockwave (which can often be fast enough to momentarily overtake the projectile and affect its flight dynamics), thus creating the muzzle blast.
The muzzle blast is often broken down into two components: an auditory component and a non-auditory component. The auditory component is the loud "Bang!" sound of the gunshot, and is important because it can cause significant hearing loss to surrounding personnel and also give away the gun's position. The non-auditory component is the infrasonic compression wave, and can cause concussive damage to nearby items.
In addition to the blast itself, some of the gases' energy is also released as light energy, known as a muzzle flash.
Components.
Gun sound.
The audible sound of a gun discharging, also known as the muzzle report or gunfire, may have two sources: the muzzle blast itself, which manifests as a loud and brief "pop" or "bang", and any sonic boom produced by a transonic or supersonic projectile, which manifests as a sharp, whip-like crack that persists a bit longer. The muzzle blast is by far the main component of the sound of gunfire, due to the intensity of the sound energy released and its proximity to the shooter and bystanders. Muzzle blasts can easily exceed sound pressure levels of 140 decibels, which can rupture eardrums and cause permanent sensorineural hearing loss even with brief and infrequent exposure. With large guns of much higher muzzle energy, such as artillery, that danger can extend outwards a significant distance from the muzzle, which mandates the wearing of hearing protection by all personnel in proximity for occupational health purposes.
For small arms, suppressors help to reduce the muzzle report of firearms by providing a larger area for the propellant gas to expand, decelerate and cool before it releases sound energy into the surroundings. Other muzzle devices such as blast shields can also protect hearing by deflecting the pressure wave forward and away from the shooter and bystanders. Recoil-reducing devices such as muzzle brakes, however, worsen potential hearing damage, as they redirect part of the blast laterally, nearer to the shooter.
Compression wave.
The overpressure wave from a firearm's muzzle blast is largely infrasonic and thus inaudible to human ears, but it can still carry a great deal of energy because the gases expand at extremely high velocity. Residual pressures at the muzzle can be a significant fraction of the peak bore pressure, especially when short barrels are used. This energy can also be regulated by a muzzle brake to reduce the recoil of the firearm, or harnessed by a muzzle booster to provide energy to cycle the action of self-loading firearms.
The force of the muzzle blast can cause shock damage to nearby items around the muzzle, and with artillery, the energy is sufficiently large to cause significant damage to surrounding structures and vehicles. It is thus important for the gun crew and any nearby friendly troops to stay clear of the potential directions of blast vectors, in order to avoid unnecessary collateral damages.
Recoil.
Typically the majority of the blast impulse is vectored in the forward direction, creating a jet-propulsion effect that exerts force back upon the barrel, resulting in additional rearward momentum on top of the reaction momentum generated by the projectile before it exits the gun. The overall recoil applied to the firearm is thus equal and opposite to the total forward momentum of not only the projectile, but also the ejected gas. Likewise, the recoil energy given to the firearm is affected by the ejected gas. By conservation of mass, the mass of the ejected gas will be equal to the original mass of the propellant (assuming complete burning). As a rough approximation, the ejected gas can be considered to have an effective exit velocity of formula_0 where formula_1 is the muzzle velocity of the projectile and formula_2 is approximately constant. The total momentum formula_3 of the propellant and projectile will then be:
formula_4
where: formula_5 is the mass of the propellant charge, equal to the mass of the ejected gas.
This expression should be substituted into the expression for projectile momentum in order to obtain a more accurate description of the recoil process. The effective velocity may be used in the energy equation as well, but since the value of α used is generally specified for the momentum equation, the energy values obtained may be less accurate. The value of the constant α is generally taken to lie between 1.25 and 1.75. It is mostly dependent upon the type of propellant used, but may depend slightly on other things such as the ratio of the length of the barrel to its radius.
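A minimal numerical sketch of the momentum formula above; the cartridge-like masses and velocity are made-up illustrative values, and α = 1.5 sits in the quoted 1.25–1.75 range.

```python
def recoil_momentum(projectile_mass, charge_mass, muzzle_velocity, alpha=1.5):
    """Total rearward momentum (kg*m/s) using the approximation that the
    propellant gas leaves at an effective velocity alpha * V0.
    Masses in kilograms, velocity in metres per second."""
    return projectile_mass * muzzle_velocity + charge_mass * alpha * muzzle_velocity

# Made-up rifle-like numbers: 9.7 g bullet, 2.9 g of powder, 850 m/s.
p = recoil_momentum(0.0097, 0.0029, 850.0, alpha=1.5)
print(round(p, 2), "kg*m/s")      # roughly 11.9 kg*m/s

# Free-recoil velocity of a 4 kg firearm, by conservation of momentum.
print(round(p / 4.0, 2), "m/s")
```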
Muzzle devices can reduce the recoil impulse by altering the pattern of gas expansion. For instance, muzzle brakes primarily work by diverting some of the gas ejecta towards the sides, increasing the lateral blast intensity (hence louder and more concussive to the sides) but reducing the thrust from the forward projection (thus less recoil), with some designs claiming up to 40–60% reduction in perceived recoil. Similarly, recoil compensators divert the gas ejecta mostly upwards to counteract muzzle rise. Suppressors, however, work on a different principle, not by vectoring the gas expansion laterally but by modulating the forward speed of the gas expansion. By using internal baffles, the gas is made to travel through a convoluted path before eventually being released at the front of the suppressor, thus dissipating its energy over a larger area and a longer time. This reduces both the intensity of the blast (thus lower loudness) and the recoil generated (since, for the same impulse, force is inversely proportional to the time over which it acts).
Detection.
Muzzle blasts can stir up significant dust clouds, especially from large-caliber guns when firing low and flat, which can be visible from a distance and thus give away the gun's position, increasing the risk of inviting counter-fire. Preventive actions may consist of wetting the soil of the surrounding ground, having the muzzle brake vector the blast up and away from the ground, or covering the area around the muzzle with a tarpaulin to keep down as much airborne dust as possible.
Gunfire locators detect muzzle blast with microphones and triangulate the location where the shots were fired. These are commercially available, and have been installed by law enforcement agencies as remote sensors in many high-crime rate areas of urban centers. They can provide a fairly precise location of the source of a shot fired outdoors — 99% to within or better — and provide the data to police dispatchers within seconds of a firing.
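Commercial gunfire locators use proprietary algorithms, but the underlying idea of multilateration from arrival times can be sketched in a few lines. The following toy example (made-up microphone layout, ideal noise-free timings, SciPy least squares) is only an illustration of the principle.

```python
import numpy as np
from scipy.optimize import least_squares

C = 343.0  # speed of sound in air (m/s); in practice temperature-dependent

# Hypothetical sensor layout (metres) and noise-free arrival times (seconds).
mics = np.array([[0.0, 0.0], [400.0, 0.0], [0.0, 400.0], [400.0, 400.0]])
true_source = np.array([130.0, 250.0])
arrivals = np.linalg.norm(mics - true_source, axis=1) / C

def residuals(params):
    x, y, t0 = params  # unknown position and (unknown) firing time
    predicted = t0 + np.linalg.norm(mics - np.array([x, y]), axis=1) / C
    return predicted - arrivals

fit = least_squares(residuals, x0=[200.0, 200.0, 0.0])
print(fit.x[:2])  # approximately [130. 250.]
```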
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\alpha V_0"
},
{
"math_id": 1,
"text": "V_0"
},
{
"math_id": 2,
"text": "\\alpha"
},
{
"math_id": 3,
"text": "p_e"
},
{
"math_id": 4,
"text": "p_e=m_pV_0 + m_g \\alpha V_0\\,"
},
{
"math_id": 5,
"text": "m_g\\,"
}
] | https://en.wikipedia.org/wiki?curid=12407025 |
1240741 | Compression body | In the theory of 3-manifolds, a compression body is a kind of generalized handlebody.
A compression body is either a handlebody or the result of the following construction:
Let formula_0 be a compact, closed surface (not necessarily connected). Attach 1-handles to formula_1 along formula_2.
Let formula_3 be a compression body. The negative boundary of C, denoted formula_4, is formula_5. (If formula_3 is a handlebody then formula_6.) The positive boundary of C, denoted formula_7, is formula_8 minus the negative boundary.
There is a dual construction of compression bodies starting with a surface formula_0 and attaching 2-handles to formula_5. In this case formula_7 is formula_2, and formula_4 is formula_8 minus the positive boundary.
Compression bodies often arise when manipulating Heegaard splittings. | [
{
"math_id": 0,
"text": "S"
},
{
"math_id": 1,
"text": "S \\times [0,1]"
},
{
"math_id": 2,
"text": "S \\times \\{1\\}"
},
{
"math_id": 3,
"text": "C"
},
{
"math_id": 4,
"text": "\\partial_{-}C"
},
{
"math_id": 5,
"text": "S \\times \\{0\\}"
},
{
"math_id": 6,
"text": "\\partial_- C = \\emptyset"
},
{
"math_id": 7,
"text": "\\partial_{+}C"
},
{
"math_id": 8,
"text": "\\partial C"
}
] | https://en.wikipedia.org/wiki?curid=1240741 |
1240842 | Adjunction space | In mathematics, an adjunction space (or attaching space) is a common construction in topology where one topological space is attached or "glued" onto another. Specifically, let "X" and "Y" be topological spaces, and let "A" be a subspace of "Y". Let "f" : "A" → "X" be a continuous map (called the attaching map). One forms the adjunction space "X" ∪"f" "Y" (sometimes also written as "X" +"f" "Y") by taking the disjoint union of "X" and "Y" and identifying "a" with "f"("a") for all "a" in "A". Formally,
formula_0
where the equivalence relation ~ is generated by "a" ~ "f"("a") for all "a" in "A", and the quotient is given the quotient topology. As a set, "X" ∪"f" "Y" consists of the disjoint union of "X" and ("Y" − "A"). The topology, however, is specified by the quotient construction.
Intuitively, one may think of "Y" as being glued onto "X" via the map "f". A basic example is cell attachment: taking "Y" to be a closed "n"-ball and "A" its boundary ("n" − 1)-sphere recovers the operation of attaching an "n"-cell to "X", as used in building CW complexes.
Properties.
The continuous maps "h" : "X" ∪"f" "Y" → "Z" are in 1-1 correspondence with the pairs of continuous maps "h""X" : "X" → "Z" and "h""Y" : "Y" → "Z" that satisfy "h""X"("f"("a"))="h""Y"("a") for all "a" in "A".
In the case where "A" is a closed subspace of "Y" one can show that the map "X" → "X" ∪"f" "Y" is a closed embedding and ("Y" − "A") → "X" ∪"f" "Y" is an open embedding.
Categorical description.
The attaching construction is an example of a pushout in the category of topological spaces. That is to say, the adjunction space is universal with respect to the following commutative diagram:
Here "i" is the inclusion map and "Φ""X", "Φ""Y" are the maps obtained by composing the quotient map with the canonical injections into the disjoint union of "X" and "Y". One can form a more general pushout by replacing "i" with an arbitrary continuous map "g"—the construction is similar. Conversely, if "f" is also an inclusion the attaching construction is to simply glue "X" and "Y" together along their common subspace. | [
{
"math_id": 0,
"text": "X\\cup_f Y = (X\\sqcup Y) / \\sim"
}
] | https://en.wikipedia.org/wiki?curid=1240842 |
1240988 | Triamcinolone | Steroid medication
Triamcinolone is a glucocorticoid used to treat certain skin diseases, allergies, and rheumatic disorders among others. It is also used to prevent worsening of asthma and COPD. It can be taken in various ways including by mouth, injection into a muscle, and inhalation.
Common side effects with long-term use include osteoporosis, cataracts, thrush, and muscle weakness. Serious side effects may include psychosis, increased risk of infections, adrenal suppression, and bronchospasm. Use in pregnancy is generally safe. It works by decreasing inflammation and immune system activity.
Triamcinolone was patented in 1956 and came into medical use in 1958. It is available as a generic medication. In 2021, it was the 104th most commonly prescribed medication in the United States, with more than 6 million prescriptions.
Medical uses.
Triamcinolone is used to treat a number of different medical conditions, such as eczema, alopecia areata, lichen sclerosus, psoriasis, arthritis, allergies, ulcerative colitis, lupus, sympathetic ophthalmia, temporal arteritis, uveitis, ocular inflammation, keloids, urushiol-induced contact dermatitis, aphthous ulcers (usually as triamcinolone acetonide), central retinal vein occlusion, visualization during vitrectomy and the prevention of asthma attacks.
The derivative triamcinolone acetonide is the active ingredient in various topical skin preparations (cream, lotion, ointment, aerosol spray) designed to treat skin conditions such as rash, inflammation, redness, or intense itching due to eczema and dermatitis.
Contraindications.
Contraindications for systemic triamcinolone are similar to those of other corticoids. They include systemic mycoses (fungal infections) and parasitic diseases, as well as eight weeks before and two weeks after application of live vaccines. For long-term treatment, the drug is also contraindicated in people with peptic ulcers, severe osteoporosis, severe myopathy, certain viral infections, glaucoma, and metastasizing tumours.
There are no contraindications for use in emergency medicine.
Side effects.
Side effects of triamcinolone are similar to other corticoids. In short-term treatment up to ten days, it has very few adverse effects; however, sometimes gastrointestinal bleeding is seen, as well as acute infections (mainly viral) and impaired glucose tolerance.
Side effects of triamcinolone long-term treatment may include coughing (up to bronchospasms), sinusitis, metabolic syndrome–like symptoms such as high blood sugar and cholesterol, weight gain due to water retention, and electrolyte imbalance, as well as cataract, thrush, osteoporosis, reduced muscle mass, and psychosis. Triamcinolone injections can cause bruising and joint swelling. Symptoms of an allergic reaction include rash, itch, swelling, severe dizziness, trouble breathing, and anaphylaxis.
Overdose.
No acute overdosing of triamcinolone has been described.
Interactions.
Drug interactions are mainly pharmacodynamic, that is, they result from other drugs either adding to triamcinolone's corticoid side effects or working against its desired effects. They include:
Triamcinolone and other drugs can also influence each other's concentrations in the body, amounting to pharmacokinetic interactions such as:
Pharmacology.
Mechanism of action.
Triamcinolone is a glucocorticoid that is about five times as potent as cortisol, but has very few mineralocorticoid effects.
Pharmacokinetics.
When taken by mouth, the drug's bioavailability is over 90%. It reaches highest concentrations in the blood plasma after one to two hours and is bound to plasma proteins to about 80%. The biological half-life from the plasma is 200 to 300 minutes; due to stable complexes of triamcinolone and its receptor in the intracellular fluid, the total half-life is significantly longer at about 36 hours.
A small fraction of the substance is metabolized to 6-hydroxy- and 20-dihydro-triamcinolone; most of it probably undergoes glucuronidation, and a smaller part sulfation. Three quarters are excreted via the urine, and the rest via the faeces.
Due to corticoids' mechanism of action, the effects are delayed as compared to plasma concentrations. Depending on the route of administration and the treated condition, the onset of action can be from two hours up to one or two days after application; and the drug can act much longer than its elimination half-life would suggest.
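As a rough illustration of what these half-lives imply, the sketch below assumes simple first-order (exponential) elimination, which is an idealisation rather than a full pharmacokinetic model.

```python
def remaining_fraction(hours, half_life_hours):
    """Fraction of the dose remaining after first-order elimination."""
    return 0.5 ** (hours / half_life_hours)

# Plasma half-life of roughly 4 hours (200-300 minutes) versus the quoted
# total half-life of about 36 hours.
for t in (4, 12, 36, 72):
    print(t, round(remaining_fraction(t, 4.0), 4), round(remaining_fraction(t, 36.0), 4))
```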
Chemistry.
Triamcinolone is a synthetic pregnane corticosteroid and derivative of cortisol (hydrocortisone) and is also known as 1-dehydro-9α-fluoro-16α-hydroxyhydrocortisone or 9α-fluoro-16α-hydroxyprednisolone as well as 9α-fluoro-11β,16α,17α,21-tetrahydroxypregna-1,4-diene-3,20-dione.
The substance is a light-sensitive, white to off-white, crystalline powder, or has the form of colourless, matted crystals. It has no odour or is nearly odourless. Information on the melting point varies, partly due to the substance's polymorphism: , , or can be found in the literature.
Solubility is 1:500 in water and 1:240 in ethanol; it is slightly soluble in methanol, very slightly soluble in chloroform and diethylether, and practically insoluble in dichloromethane. The specific rotation is formula_0 +65° to +72° cm3/dm·g (1% in dimethylformamide).
Society and culture.
In 2010, Teva and Perrigo launched the first generic inhalable triamcinolone.
According to Chang et al. (2014), "Triamcinolone acetonide (TA) is classified as an S9 glucocorticoid in the 2014 Prohibited List published by the World Anti-Doping Agency, which caused it to be prohibited in international athletic competition when administered orally, intravenously, intramuscularly or rectally".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "[\\alpha]_D^{20}"
}
] | https://en.wikipedia.org/wiki?curid=1240988 |
12410777 | Quasi-compact morphism | In algebraic geometry, a morphism formula_0 between schemes is said to be quasi-compact if "Y" can be covered by open affine subschemes formula_1 such that the pre-images formula_2 are compact. If "f" is quasi-compact, then the pre-image of a compact open subscheme (e.g., open affine subscheme) under "f" is compact.
It is not enough that "Y" admits a covering by compact open subschemes whose pre-images are compact. To give an example, let "A" be a ring that does not satisfy the ascending chain condition on radical ideals, and put formula_3. Then "X" contains an open subset "U" that is not compact. Let "Y" be the scheme obtained by gluing two "X"'s along "U". "X" and "Y" are both compact. If formula_0 is the inclusion of one of the copies of "X", then the pre-image of the other "X", open affine in "Y", is "U", which is not compact. Hence, "f" is not quasi-compact.
A morphism from a quasi-compact scheme to an affine scheme is quasi-compact.
Let formula_0 be a quasi-compact morphism between schemes. Then formula_4 is closed if and only if it is stable under specialization.
The composition of quasi-compact morphisms is quasi-compact. The base change of a quasi-compact morphism is quasi-compact.
An affine scheme is quasi-compact. In fact, a scheme is quasi-compact if and only if it is a finite union of open affine subschemes. Serre’s criterion gives a necessary and sufficient condition for a quasi-compact scheme to be affine.
A quasi-compact scheme has at least one closed point.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f: X \\to Y"
},
{
"math_id": 1,
"text": "V_i"
},
{
"math_id": 2,
"text": "f^{-1}(V_i)"
},
{
"math_id": 3,
"text": "X = \\operatorname{Spec} A"
},
{
"math_id": 4,
"text": "f(X)"
}
] | https://en.wikipedia.org/wiki?curid=12410777 |
12413580 | Subjective logic | Subjective logic is a type of probabilistic logic that explicitly takes epistemic uncertainty and source trust into account. In general, subjective logic is suitable for modeling and analysing situations involving uncertainty and relatively unreliable sources. For example, it can be used for modeling and analysing trust networks and Bayesian networks.
Arguments in subjective logic are subjective opinions about state variables which can take values from a domain (aka state space), where a state value can be thought of as a proposition which can be true or false. A binomial opinion applies to a binary state variable, and can be represented as a Beta PDF (Probability Density Function). A multinomial opinion applies to a state variable of multiple possible values, and can be represented as a Dirichlet PDF (Probability Density Function). Through the correspondence between opinions and Beta/Dirichlet distributions, subjective logic provides an algebra for these functions. Opinions are also related to the belief representation in Dempster–Shafer belief theory.
A fundamental aspect of the human condition is that nobody can ever determine with absolute certainty whether a proposition about the world is true or false. In addition, whenever the truth of a proposition is expressed, it is always done by an individual, and it can never be considered to represent a general and objective belief. These philosophical ideas are directly reflected in the mathematical formalism of subjective logic.
Subjective opinions.
Subjective opinions express subjective beliefs about the truth of state values/propositions with degrees of epistemic uncertainty, and can explicitly indicate the source of belief whenever required. An opinion is usually denoted as formula_0 where formula_1 is the source of the opinion, and formula_2 is the state variable to which the opinion applies. The variable formula_2 can take values from a domain (also called state space) e.g. denoted as formula_3. The values of a domain are assumed to be exhaustive and mutually disjoint, and sources are assumed to have a common semantic interpretation of a domain. The source and variable are attributes of an opinion. Indication of the source can be omitted whenever irrelevant.
Binomial opinions.
Let formula_4 be a state value in a binary domain. A binomial opinion about the truth of state value formula_4 is the ordered quadruple formula_5 where:
* "b""x" is the belief mass in support of formula_4 being true,
* "d""x" is the disbelief mass in support of formula_4 being false,
* "u""x" is the uncertainty mass representing the vacuity of evidence, and
* "a""x" is the base rate, i.e. the prior probability of formula_4 without committed evidence.
These components satisfy formula_6 and formula_7. The characteristics of various opinion classes are listed below.
The projected probability of a binomial opinion is defined as formula_8.
Binomial opinions can be represented on an equilateral triangle as shown below. A point inside the triangle represents a formula_9 triple. The "b","d","u"-axes run from one edge to the opposite vertex indicated by the Belief, Disbelief or Uncertainty label. For example, a strong positive opinion is represented by a point towards the bottom right Belief vertex. The base rate, also called the prior probability, is shown as a red pointer along the base line, and the projected probability, formula_10, is formed by projecting the opinion onto the base, parallel to the base rate projector line. Opinions about three values/propositions X, Y and Z are visualized on the triangle to the left, and their equivalent Beta PDFs (Probability Density Functions) are visualized on the plots to the right. The numerical values and verbal qualitative descriptions of each opinion are also shown.
The Beta PDF is normally denoted as formula_11 where formula_12 and formula_13 are its two strength parameters. The Beta PDF of a binomial opinion formula_14 is the function
formula_15
where formula_16 is the non-informative prior weight, also called a unit of evidence, normally set to formula_17.
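The mapping from a binomial opinion to its projected probability and to the Beta strength parameters defined above can be written out directly; the sketch below uses illustrative numbers and W = 2.

```python
W = 2.0  # non-informative prior weight

def projected_probability(b, d, u, a):
    """P_x = b_x + a_x * u_x for a binomial opinion (b, d, u, a)."""
    return b + a * u

def beta_parameters(b, d, u, a, w=W):
    """Beta strength parameters of a binomial opinion with u > 0."""
    alpha = w * b / u + w * a
    beta = w * d / u + w * (1.0 - a)
    return alpha, beta

# Illustrative opinion with some belief and considerable uncertainty.
b, d, u, a = 0.6, 0.1, 0.3, 0.5           # b + d + u = 1
print(projected_probability(b, d, u, a))  # about 0.75
print(beta_parameters(b, d, u, a))        # about (5.0, 1.67)
```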
Multinomial opinions.
Let formula_2 be a state variable which can take state values formula_18. A multinomial opinion over formula_2 is the composite
tuple formula_19, where formula_20 is a belief mass distribution over the possible state values of formula_2, formula_21 is the uncertainty mass, and formula_22 is the prior (base rate) probability distribution over the possible state values of formula_2. These parameters satisfy formula_23 and formula_24 as well as formula_25.
Trinomial opinions can be simply visualised as points inside a tetrahedron, but opinions with dimensions larger than trinomial do not lend themselves to simple visualisation.
Dirichlet PDFs are normally denoted as formula_26 where formula_27 is a probability distribution over the state values of formula_28, and formula_29 are the strength parameters. The Dirichlet PDF of a multinomial opinion formula_30 is the function
formula_31 where the strength parameters are given by formula_32,
where formula_16 is the non-informative prior weight, also called a unit of evidence, normally set to the number of classes.
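The corresponding mapping for a multinomial opinion is equally direct. In this sketch the opinion is stored in plain dictionaries (an implementation choice, not part of the formalism) and W is taken as the number of classes, as stated above.

```python
def dirichlet_parameters(belief, uncertainty, base_rate, w=None):
    """Strength parameters alpha_X(x) = W*b_X(x)/u_X + W*a_X(x)."""
    if w is None:
        w = float(len(belief))  # W taken as the number of classes
    return {x: w * bx / uncertainty + w * base_rate[x] for x, bx in belief.items()}

# Illustrative trinomial opinion over {x1, x2, x3}; belief masses plus
# uncertainty sum to one, base rates sum to one.
belief = {'x1': 0.4, 'x2': 0.2, 'x3': 0.1}
uncertainty = 0.3
base_rate = {'x1': 1 / 3, 'x2': 1 / 3, 'x3': 1 / 3}
print(dirichlet_parameters(belief, uncertainty, base_rate))  # about {x1: 5, x2: 3, x3: 2}
```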
Operators.
Most operators in the table below are generalisations of binary logic and probability operators. For example "addition" is simply a generalisation of addition of probabilities. Some operators are only meaningful for combining binomial opinions, and some also apply to multinomial opinion. Most operators are binary, but "complement" is unary, and "abduction" is ternary. See the referenced publications for mathematical details of each operator.
Transitive source combination can be denoted in a compact or expanded form. For example, the transitive trust path from analyst/source formula_1 via source formula_33 to the variable formula_2 can be denoted as formula_34 in compact form, or as formula_35 in expanded form. Here, formula_36 expresses that formula_37 has some trust/distrust in source formula_38, whereas formula_39 expresses that formula_38 has an opinion about the state of variable formula_28 which is given as an advice to formula_37. The expanded form is the most general, and corresponds directly to the way subjective logic expressions are formed with operators.
Properties.
In case the argument opinions are equivalent to Boolean TRUE or FALSE, the result of any subjective logic operator is always equal to that of the corresponding propositional/binary logic operator. Similarly, when the argument opinions are equivalent to traditional probabilities, the result of any subjective logic operator is always equal to that of the corresponding probability operator (when it exists).
In case the argument opinions contain degrees of uncertainty, the operators involving multiplication and division (including deduction, abduction and Bayes' theorem) will produce derived opinions that always have correct projected probability but possibly with approximate variance when seen as Beta/Dirichlet PDFs.
All other operators produce opinions where the projected probabilities and the variance are always analytically correct.
Different logic formulas that traditionally are equivalent in propositional logic do not necessarily have equal opinions. For example formula_40 in general although the distributivity of conjunction over disjunction, expressed as formula_41, holds in binary propositional logic. This is no surprise as the corresponding probability operators are also non-distributive. However, multiplication is distributive over addition, as expressed by formula_42. De Morgan's laws are also satisfied as e.g. expressed by formula_43.
Subjective logic allows very efficient computation of mathematically complex models. This is possible by approximation of the analytically correct functions. While it is relatively simple to analytically multiply two Beta PDFs in the form of a joint Beta PDF, anything more complex than that quickly becomes intractable. When combining two Beta PDFs with some operator/connective, the analytical result is not always a Beta PDF and can involve hypergeometric series. In such cases, subjective logic always approximates the result as an opinion that is equivalent to a Beta PDF.
Applications.
Subjective logic is applicable when the situation to be analysed is characterised by considerable epistemic uncertainty due to incomplete knowledge. In this way, subjective logic becomes a probabilistic logic for epistemic-uncertain probabilities. The advantage is that uncertainty is preserved throughout the analysis and is made explicit in the results so that it is possible to distinguish between certain and uncertain conclusions.
The modelling of trust networks and Bayesian networks are typical applications of subjective logic.
Subjective trust networks.
Subjective trust networks can be modelled with a combination of the transitivity and fusion operators. Let formula_36 express the referral trust edge from formula_1 to formula_33, and let formula_39 express the belief edge from formula_33 to formula_2. A subjective trust network can for example be expressed as formula_44 as illustrated in the figure below.
The indices 1, 2 and 3 indicate the chronological order in which the trust edges and advice are formed. Thus, given the set of trust edges with index 1, the origin trustor formula_1 receives advice from formula_33 and formula_45, and is thereby able to derive belief in variable formula_2. By expressing each trust edge and belief edge as an opinion, it is possible for formula_1 to derive belief in formula_2 expressed as formula_46.
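For concreteness, here is a sketch of how such a derivation can be computed for binomial opinions. The trust-discounting and cumulative-fusion formulas below follow one commonly cited formulation of these operators; the exact definitions are given in the referenced publications, and the numerical opinions are made up, so treat this as an illustration rather than the authoritative specification.

```python
def discount(trust, advice):
    """Transitivity (trust discounting): combine A's trust in B with B's
    opinion about X. Opinions are (b, d, u, a) tuples; the advice is
    scaled by the trustor's belief mass in the advisor."""
    bt, dt, ut, at = trust
    bx, dx, ux, ax = advice
    return (bt * bx, bt * dx, dt + ut + bt * ux, ax)

def cumulative_fuse(o1, o2):
    """Cumulative fusion of two independent opinions on the same variable
    (assumes u1 and u2 are not both zero)."""
    b1, d1, u1, a1 = o1
    b2, d2, u2, a2 = o2
    k = u1 + u2 - u1 * u2
    a = (a1 + a2) / 2  # simplification; the general rule weights the base rates
    return ((b1 * u2 + b2 * u1) / k, (d1 * u2 + d2 * u1) / k, u1 * u2 / k, a)

trust_A_B = (0.8, 0.1, 0.1, 0.5)    # A's trust in source B (illustrative)
advice_B_X = (0.7, 0.2, 0.1, 0.5)   # B's opinion about X (illustrative)
print(discount(trust_A_B, advice_B_X))  # A's derived opinion about X via B
```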
Trust networks can express the reliability of information sources, and can be used to determine subjective opinions about variables that the sources provide information about.
Evidence-based subjective logic (EBSL) describes an alternative trust-network computation, where the transitivity of opinions (discounting) is handled by applying weights to the evidence underlying the opinions.
Subjective Bayesian networks.
In the Bayesian network below, formula_2 and formula_47 are parent variables and formula_48 is the child variable. The analyst must learn the set of joint conditional opinions formula_49 in order to apply the deduction operator and derive the marginal opinion formula_50 on the variable formula_51. The conditional opinions express a conditional relationship between the parent variables and the child variable.
The deduced opinion is computed as formula_52.
The joint evidence opinion formula_53 can be computed as the product of independent evidence opinions on formula_2 and formula_47, or as the joint product of partially dependent evidence opinions.
Subjective networks.
The combination of a subjective trust network and a subjective Bayesian network is a subjective network. The subjective trust network can be used to obtain from various sources the opinions to be used as input opinions to the subjective Bayesian network, as illustrated in the figure below.
Traditional Bayesian networks typically do not take the reliability of the sources into account. In subjective networks, the trust in sources is explicitly taken into account.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\omega^{A}_{X}"
},
{
"math_id": 1,
"text": "A\\,\\!"
},
{
"math_id": 2,
"text": "X\\,\\!"
},
{
"math_id": 3,
"text": "\\mathbb{X}"
},
{
"math_id": 4,
"text": "x\\,\\!"
},
{
"math_id": 5,
"text": "\\omega_{x} = (b_x,d_x,u_x,a_x)\\,\\!"
},
{
"math_id": 6,
"text": "b_x+d_x+u_x=1\\,\\!"
},
{
"math_id": 7,
"text": "b_x,d_x,u_x,a_x \\in [0,1]\\,\\!"
},
{
"math_id": 8,
"text": "\\mathrm{P}_{x}=b_{x} + a_{x} u_{x}\\,\\!"
},
{
"math_id": 9,
"text": "(b_x,d_x,u_x)\\,\\!"
},
{
"math_id": 10,
"text": "\\mathrm{P}_{x}\\,\\!"
},
{
"math_id": 11,
"text": "\\mathrm{Beta}(p(x);\\alpha,\\beta)\\,\\!"
},
{
"math_id": 12,
"text": "\\alpha\\,\\!"
},
{
"math_id": 13,
"text": "\\beta\\,\\!"
},
{
"math_id": 14,
"text": "\\omega_x = (b_x,d_x,u_x,a_x)\\,\\!"
},
{
"math_id": 15,
"text": "\n\\mathrm{Beta}(p(x);\\alpha,\\beta) \\mbox{ where }\n\\begin{cases}\n\\alpha &= \\frac{Wb_x}{u_x}+Wa_x\\\\\n\\beta &= \\frac{Wd_x}{u_x}+W(1-a_x)\n\\end{cases}\n\\,\\!\n"
},
{
"math_id": 16,
"text": "W"
},
{
"math_id": 17,
"text": "W=2"
},
{
"math_id": 18,
"text": "x\\in\\mathbb{X}\\,\\!"
},
{
"math_id": 19,
"text": "\\omega_{X}=(b_{X}, u_{X}, a_{X})\\,\\!"
},
{
"math_id": 20,
"text": "b_{X}\\,\\!"
},
{
"math_id": 21,
"text": "u_{X}\\,\\!"
},
{
"math_id": 22,
"text": "a_{X}\\,\\!"
},
{
"math_id": 23,
"text": "u_{X}+\\sum b_{X}(x) = 1\\,\\!"
},
{
"math_id": 24,
"text": "\\sum a_{X}(x) = 1\\,\\!"
},
{
"math_id": 25,
"text": "b_{X}(x),u_{X},a_{X}(x) \\in [0,1]\\,\\!"
},
{
"math_id": 26,
"text": "\\mathrm{Dir}(p_{X};\\alpha_{X})\\,\\!"
},
{
"math_id": 27,
"text": "p_{X}\\,\\!"
},
{
"math_id": 28,
"text": "X"
},
{
"math_id": 29,
"text": "\\alpha_{X}\\,\\!"
},
{
"math_id": 30,
"text": "\\omega_{X} = (b_{X},u_{X},a_{X})\\,\\!"
},
{
"math_id": 31,
"text": "\n\\mathrm{Dir}(p_{X};\\alpha_{X})"
},
{
"math_id": 32,
"text": "\\alpha_{X}(x) = \\frac{Wb_{X}(x)}{u_{X}}+Wa_{X}(x)\\,\\!\n"
},
{
"math_id": 33,
"text": "B\\,\\!"
},
{
"math_id": 34,
"text": "[A;B,X]\\,\\!"
},
{
"math_id": 35,
"text": "[A;B]:[B,X]\\,\\!"
},
{
"math_id": 36,
"text": "[A;B]\\,\\!"
},
{
"math_id": 37,
"text": "A"
},
{
"math_id": 38,
"text": "B"
},
{
"math_id": 39,
"text": "[B,X]\\,\\!"
},
{
"math_id": 40,
"text": "\\omega_{x\\land (y\\lor z)} \\neq \\omega_{(x \\land y)\\lor (x\\land z)}\\,\\!"
},
{
"math_id": 41,
"text": "x\\land (y\\lor z) \\Leftrightarrow (x \\land y)\\lor (x\\land z)\\,\\!"
},
{
"math_id": 42,
"text": "\\omega_{x\\land (y\\cup z)} = \\omega_{(x \\land y)\\cup (x\\land z)}\\,\\!"
},
{
"math_id": 43,
"text": "\\omega_{\\overline{x\\land y}} = \\omega_{\\overline{x} \\lor \\overline{y}}\\,\\! "
},
{
"math_id": 44,
"text": "([A;B]:[B,X])\\diamond([A;C]:[C,X])\\,\\!"
},
{
"math_id": 45,
"text": "C\\,\\!"
},
{
"math_id": 46,
"text": "\\omega^{A}_{X} = \\omega^{[A;B]\\diamond[A;C]}_{X} = (\\omega^{A}_{B}\\otimes \\omega^{B}_{X}) \\oplus (\\omega^{A}_{C}\\otimes \\omega^{C}_{X})\\,\\!"
},
{
"math_id": 47,
"text": "Y\\,\\!"
},
{
"math_id": 48,
"text": "Z\\,\\!"
},
{
"math_id": 49,
"text": "\\omega_{Z|XY}"
},
{
"math_id": 50,
"text": "\\omega_{Z\\|XY}"
},
{
"math_id": 51,
"text": "Z"
},
{
"math_id": 52,
"text": "\\omega_{Z\\|XY} = \\omega_{Z|XY} \\circledcirc \\omega_{XY}"
},
{
"math_id": 53,
"text": "\\omega_{XY}"
}
] | https://en.wikipedia.org/wiki?curid=12413580 |
1241528 | Posturography | Posturography is the technique used to quantify postural control in upright stance in either static or dynamic conditions. Among these techniques, computerized dynamic posturography (CDP), also called the test of balance (TOB), is a non-invasive specialized clinical assessment technique used to quantify the central nervous system adaptive mechanisms (sensory, motor and central) involved in the control of posture and balance, both in normal (such as in physical education and sports training) and abnormal conditions (particularly in the diagnosis of balance disorders and in physical therapy and postural re-education). Due to the complex interactions among sensory, motor, and central processes involved in posture and balance, CDP requires different protocols in order to differentiate among the many defects and impairments which may affect the patient's posture control system. Thus, CDP challenges the posture control system by using several combinations of visual and support-surface stimuli and parameters.
Clinical applications for CDP were first described by L.M. Nashner in 1982, and the first commercially available testing system was developed in 1986, when NeuroCom International, Inc., launched the EquiTest system.
Working.
Static posturography is carried out by placing the patient in a standing posture on a fixed instrumented platform (forceplate) connected to sensitive detectors (force and movement transducers), which are able to detect the tiny oscillations of the body.
Dynamic posturography differs from static posturography chiefly in its use of a special apparatus with a movable horizontal platform. As the patient makes small movements, these are transmitted in real time to a computer. The computer is also used to command electric motors which can move the forceplate in the horizontal direction (translation) as well as incline it (rotations). Thus, the posturography test protocols generate a sequence of standardized motions of the support platform in order to disequilibrate the patient's posture in an orderly and reproducible way. The platform is contained within an enclosure which can also be used to generate apparent motions of the visual surround. These stimuli are calibrated relative to the patient's height and weight. Special computer software integrates all of this and produces detailed graphics and reports which can then be compared with normal ranges.
Components of balance.
Center of gravity (COG) is an important component of balance and should be assessed when evaluating someone's posture. COG is often measured via the center of pressure (COP), because COG itself is hard to quantify. According to Lafage et al. (2008), the COG should be located at the midpoint of the base of support if an individual has ideal posture. COP excursion and velocity are indicators of control over the COG and are key factors for identifying proper posture and the ability to maintain balance. COP excursion is defined by Collins & De Luca (1992) as the Euclidean displacement in the anterior/posterior and medial/lateral directions within the base of support (the perimeter around the feet). With poor posture and/or exaggerated spinal curvatures, the COP excursion may increase, which can cause instability as the COP shifts towards the perimeter of the base of support.
Types of tests.
The test protocols usually include a Sensory Organization Test (SOT), a Limits of Stability test (LOS), a Motor Control Test (MCT) and an Adaptation Test (ADT). The SOT was developed by Nashner and is a computerized system made up of dual movable force plates and a movable visual screen (EquiTest). During the test, the patient is instructed to stand still and quietly with eyes open or closed, depending on which of the six test conditions is being administered. The patient performs multiple trials per condition; a description of these conditions can be found in the table below. The SOT is based on the fact that three sensory systems are mainly involved in maintaining balance (visual, vestibular, and proprioceptive). Minute spontaneous body sways are measured, as well as reactions provoked by unexpected abrupt movements of the platform and the visual surroundings. Differences in these sways and reactions to system perturbations help to determine the patient's ability to effectively use visual, vestibular, and proprioceptive input to maintain posture. Wrisley et al. (2007) found that there are learning effects associated with the SOT, and it could therefore be used clinically to assess, improve and track changes in balance.
SOT results are subdivided in an Equilibrium Score, a Sensory Analysis, a Strategy Analysis and COG Alignment.
The sensory analysis calculates 4 different scores: somatosensory (SOM), visual (VIS), vestibular (VEST) and visual preference (PREF) (otherwise known as "visual dependence", an excessive reliance on visual information even when it is inappropriate). The scores are respectively calculated as ratios of the 6 different scores of the equilibrium score:
formula_0
formula_1
formula_2
formula_3
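A minimal sketch of this sensory analysis is shown below: it simply evaluates the four ratios above from the per-condition equilibrium scores (for example, averaged over trials). The condition scores used are arbitrary illustration values.

```python
# Sensory analysis ratios of the SOT, computed from the six condition
# equilibrium scores: SOM = c2/c1, VIS = c4/c1, VEST = c5/c1,
# PREF = (c3 + c6) / (c2 + c5).

def sensory_analysis(cond):
    # cond[i] holds the equilibrium score of condition i (1..6)
    return {
        "SOM":  cond[2] / cond[1],
        "VIS":  cond[4] / cond[1],
        "VEST": cond[5] / cond[1],
        "PREF": (cond[3] + cond[6]) / (cond[2] + cond[5]),
    }

scores = {1: 94.0, 2: 90.0, 3: 91.0, 4: 85.0, 5: 62.0, 6: 58.0}
print(sensory_analysis(scores))
```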
MCT results include instead the Weight Symmetry, both for forward and for backward translations, Latency Scores for forward and backward translations, and Amplitude Scaling, which refers to the capacity of the participant to generate a response force adequate to the entity of the perturbation.
The limits of stability (LOS) are defined as the distance outside the base of support that can be traveled before a loss of balance occurs. The LOS test is frequently used to quantify this distance and has been suggested as a hybrid between static and dynamic balance assessment. During this test, the patient stands on the platform as directed above for the SOT. The patient watches their movements on a screen so that they can see each of the eight LOS targets. The patient begins with their COP directly in the center of the targets (displayed on the screen as a computerized figure of a person). At the onset of the test, the patient attempts to lean in the direction of the indicated perimeter target, without lifting their feet, and to hold there until the test is complete.
According to necessity of the diagnostic workup, CDP can be combined with other techniques, such as electronystagmography (ENG) and electromyography.
The main indications for CDP are dizziness and vertigo, and postural imbalances (balance disorders).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{Somatosensory (SOM)} = \\frac{\\text{condition 2}}{\\text{condition 1}}"
},
{
"math_id": 1,
"text": "\\text{Visual (VIS)} = \\frac{\\text{condition 4}}{\\text{condition 1}}"
},
{
"math_id": 2,
"text": "\\text{Vestibular (VEST)} = \\frac{\\text{condition 5}}{\\text{condition 1}}"
},
{
"math_id": 3,
"text": "\\text{Visual Preference (PREF)} = \\frac{\\text{conditions 3 + 6}}{\\text{conditions 2 + 5}}"
}
] | https://en.wikipedia.org/wiki?curid=1241528 |
12416124 | Multiplet | In physics and particularly in particle physics, a multiplet is the state space for 'internal' degrees of freedom of a particle, that is, degrees of freedom associated to a particle itself, as opposed to 'external' degrees of freedom such as the particle's position in space. Examples of such degrees of freedom are the spin state of a particle in quantum mechanics, or the color, isospin and hypercharge state of particles in the Standard model of particle physics. Formally, we describe this state space by a vector space which carries the action of a group of continuous symmetries.
Mathematical formulation.
Mathematically, multiplets are described via representations of a Lie group or its corresponding Lie algebra, and is usually used to refer to irreducible representations (irreps, for short).
At the group level, this is a triplet formula_0 where formula_1 is a vector space over a field formula_2 (in physics typically formula_3 or formula_4), formula_5 is a Lie group, and formula_6 is a group homomorphism formula_7, that is, a map such that for formula_8 formula_9.
At the algebra level, this is a triplet formula_10, where formula_1 is a vector space over formula_2, formula_11 is a Lie algebra over formula_2 (often over formula_12), and formula_6 is a Lie algebra homomorphism formula_13, that is, a map such that for formula_14 formula_15.
The symbol formula_6 is used for both Lie algebras and Lie groups as, at least in finite dimension, there is a well understood correspondence between Lie groups and Lie algebras.
In mathematics, it is common to refer to the homomorphism formula_6 as the representation, for example in the sentence 'consider a representation formula_6', and the vector space formula_1 is referred to as the 'representation space'. In physics sometimes the vector space is referred to as the representation, for example in the sentence 'we model the particle as transforming in the singlet representation', or even to refer to a quantum field which takes values in such a representation, and the physical particles which are modelled by such a quantum field.
For an irreducible representation, an formula_16-plet refers to an formula_16 dimensional irreducible representation. Generally, a group may have multiple non-isomorphic representations of the same dimension, so this does not fully characterize the representation. An exception is formula_17 which has exactly one irreducible representation of dimension formula_16 for each non-negative integer formula_16.
For example, consider real three-dimensional space, formula_18. The group of 3D rotations SO(3) acts naturally on this space as a group of formula_19 matrices. This explicit realisation of the rotation group is known as the fundamental representation formula_20, so formula_18 is a representation space. The full data of the representation is formula_21. Since the dimension of this representation space is 3, this is known as the triplet representation for formula_22, and it is common to denote this as formula_23.
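A quick numerical check of this triplet representation at the algebra level can be done in a few lines of code: the three standard antisymmetric generator matrices acting on real three-dimensional space satisfy the so(3) commutation relations, which is exactly the bracket-preserving (homomorphism) property required of a representation above. The snippet is only an illustrative sanity check.

```python
# Verify that the standard 3x3 antisymmetric generators of rotations satisfy
# [L_x, L_y] = L_z and cyclic permutations, i.e. they form the 3-dimensional
# (triplet) representation of the Lie algebra so(3).
import numpy as np

L = {
    "x": np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float),
    "y": np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=float),
    "z": np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], dtype=float),
}

def commutator(A, B):
    return A @ B - B @ A

assert np.allclose(commutator(L["x"], L["y"]), L["z"])
assert np.allclose(commutator(L["y"], L["z"]), L["x"])
assert np.allclose(commutator(L["z"], L["x"]), L["y"])
print("so(3) commutation relations hold for the triplet representation")
```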
Application to theoretical physics.
For applications to theoretical physics, we can restrict our attention to the representation theory of a handful of physically important groups. Many of these have well understood representation theory:
These groups all appear in the theory of the Standard model. For theories which extend these symmetries, the representation theory of some other groups might be considered:
Physics.
Quantum field theory.
In quantum physics, the mathematical notion is usually applied to representations of the gauge group. For example, an formula_17 gauge theory will have multiplets which are fields whose representation of formula_17 is determined by the single half-integer number formula_38, the isospin. Since irreducible formula_17 representations are isomorphic to the formula_16th symmetric power of the fundamental representation, every field has formula_16 symmetrized internal indices.
Fields also transform under representations of the Lorentz group formula_31, or more generally its spin group formula_39 which can be identified with formula_40 due to an exceptional isomorphism. Examples include scalar fields, commonly denoted formula_41, which transform in the trivial representation, vector fields formula_42 (strictly, this might be more accurately labelled a covector field), which transforms as a 4-vector, and spinor fields formula_43 such as Dirac or Weyl spinors which transform in representations of formula_40. A right-handed Weyl spinor transforms in the fundamental representation, formula_44, of formula_40.
Beware that besides the Lorentz group, a field can transform under the action of a gauge group. For example, a scalar field formula_45, where formula_46 is a spacetime point, might have an isospin state taking values in the fundamental representation formula_44 of formula_17. Then formula_45 is a vector valued function of spacetime, but is still referred to as a scalar field, as it transforms trivially under Lorentz transformations.
In quantum field theory different particles correspond one to one with gauged fields transforming in irreducible representations of the internal and Lorentz group. Thus, a multiplet has also come to describe a collection of subatomic particles described by these representations.
Examples.
The best known example is a spin multiplet, which describes symmetries of a group representation of an SU(2) subgroup of the Lorentz algebra, which is used to define spin quantization. A spin singlet is a trivial representation, a spin doublet is a fundamental representation and a spin triplet is in the vector representation or adjoint representation.
In QCD, quarks are in a multiplet of SU(3), specifically the three-dimensional fundamental representation.
Other uses.
Spectroscopy.
In spectroscopy, particularly Gamma spectroscopy and X-ray spectroscopy, a multiplet is a group of related or unresolvable spectral lines. Where the number of unresolved lines is small, these are often referred to specifically as doublet or triplet peaks, while multiplet is used to describe groups of peaks in any number. | [
{
"math_id": 0,
"text": "(V,G,\\rho)"
},
{
"math_id": 1,
"text": "V"
},
{
"math_id": 2,
"text": "K"
},
{
"math_id": 3,
"text": "K = \\mathbb{R}"
},
{
"math_id": 4,
"text": "\\mathbb{C}"
},
{
"math_id": 5,
"text": "G"
},
{
"math_id": 6,
"text": "\\rho"
},
{
"math_id": 7,
"text": "G\\rightarrow \\text{GL}(V)"
},
{
"math_id": 8,
"text": "g_1,g_2\\in G,"
},
{
"math_id": 9,
"text": "\\rho(g_1\\cdot g_2) = \\rho(g_1)\\rho(g_2)"
},
{
"math_id": 10,
"text": "(V,\\mathfrak{g},\\rho)"
},
{
"math_id": 11,
"text": "\\mathfrak{g}"
},
{
"math_id": 12,
"text": "\\mathbb{R}"
},
{
"math_id": 13,
"text": "\\mathfrak{g}\\rightarrow\\text{End}(V)"
},
{
"math_id": 14,
"text": "X_1, X_2 \\in \\mathfrak{g},"
},
{
"math_id": 15,
"text": "\\rho([X_1, X_2])=[\\rho(X_1),\\rho(X_2)]"
},
{
"math_id": 16,
"text": "n"
},
{
"math_id": 17,
"text": "\\text{SU}(2)"
},
{
"math_id": 18,
"text": "\\mathbb{R}^3"
},
{
"math_id": 19,
"text": "3\\times 3"
},
{
"math_id": 20,
"text": "\\rho_{\\text{fund}}"
},
{
"math_id": 21,
"text": "(\\mathbb{R}^3,\\text{SO(3)},\\rho_{\\text{fund}})"
},
{
"math_id": 22,
"text": "\\text{SO}(3)"
},
{
"math_id": 23,
"text": "\\mathbf{3}"
},
{
"math_id": 24,
"text": "\\text{U}(1)"
},
{
"math_id": 25,
"text": "\\mathbb{Z}"
},
{
"math_id": 26,
"text": "\\rho_n:\\text{U}(1)\\rightarrow\\text{GL}(\\mathbb{C}); e^{i\\theta}\\mapsto e^{in\\theta}"
},
{
"math_id": 27,
"text": "\\text{SU}(2)\\cong\\text{Spin}(3)"
},
{
"math_id": 28,
"text": "n\\in\\mathbb{N}_{\\geq 0}"
},
{
"math_id": 29,
"text": "\\text{SU}(3)"
},
{
"math_id": 30,
"text": "(m,n)"
},
{
"math_id": 31,
"text": "\\text{SO}(1,3)"
},
{
"math_id": 32,
"text": "\\text{SL}(2,\\mathbb{C})\\cong \\text{Spin}(1,3)"
},
{
"math_id": 33,
"text": "(\\mu,\\nu)"
},
{
"math_id": 34,
"text": "\\text{E}(1,3)\\cong \\mathbb{R}^{1,3}\\rtimes\\text{SO}(1,3)"
},
{
"math_id": 35,
"text": "\\text{Conf}(p,q)\\cong O(p,q)/\\mathbb{Z}_2"
},
{
"math_id": 36,
"text": "\\text{SU}(5), \\text{SO}(10)"
},
{
"math_id": 37,
"text": " \\text{E}_6"
},
{
"math_id": 38,
"text": "s=:n/2"
},
{
"math_id": 39,
"text": "\\text{Spin}(1,3)"
},
{
"math_id": 40,
"text": "\\text{SL}(2,\\mathbb{C})"
},
{
"math_id": 41,
"text": "\\phi"
},
{
"math_id": 42,
"text": "A_\\mu"
},
{
"math_id": 43,
"text": "\\psi_\\alpha"
},
{
"math_id": 44,
"text": "\\mathbb{C}^2"
},
{
"math_id": 45,
"text": "\\phi(x)"
},
{
"math_id": 46,
"text": "x"
}
] | https://en.wikipedia.org/wiki?curid=12416124 |
1241614 | Atoroidal | In mathematics, an atoroidal 3-manifold is one that does not contain an essential torus.
There are two major variations in this terminology: an essential torus may be defined geometrically, as an embedded, non-boundary parallel, incompressible torus, or it may be defined algebraically, as a subgroup formula_0 of its fundamental group that is not conjugate to a peripheral subgroup (i.e., the image of the map on fundamental group induced by an inclusion of a boundary component). The terminology is not standardized, and different authors require atoroidal 3-manifolds to satisfy certain additional restrictions. For instance:
A 3-manifold that is not atoroidal is called toroidal.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Z\\times\\Z"
}
] | https://en.wikipedia.org/wiki?curid=1241614 |
1241623 | Boundary parallel | In mathematics, a closed "n"-manifold "N" embedded in an ("n" + 1)-manifold "M" is boundary parallel (or ∂-parallel, or peripheral) if there is an isotopy of "N" onto a boundary component of "M".
An example.
Consider the annulus formula_0. Let π denote the projection map
formula_1
If a circle "S" is embedded into the annulus so that π restricted to "S" is a bijection, then "S" is boundary parallel. (The converse is not true.)
If, on the other hand, a circle "S" is embedded into the annulus so that π restricted to "S" is not surjective, then "S" is not boundary parallel. (Again, the converse is not true.)
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "I \\times S^1"
},
{
"math_id": 1,
"text": "\\pi\\colon I \\times S^1 \\rightarrow S^1,\\quad (x, z) \\mapsto z."
}
] | https://en.wikipedia.org/wiki?curid=1241623 |
12420 | Gödel's ontological proof | Logical argument for the existence of God
Gödel's ontological proof is a formal argument by the mathematician Kurt Gödel (1906–1978) for the existence of God. The argument is in a line of development that goes back to Anselm of Canterbury (1033–1109). St. Anselm's ontological argument, in its most succinct form, is as follows: "God, by definition, is that for which no greater can be conceived. God exists in the understanding. If God exists in the understanding, we could imagine Him to be greater by existing in reality. Therefore, God must exist." A more elaborate version was given by Gottfried Leibniz (1646–1716); this is the version that Gödel studied and attempted to clarify with his ontological argument.
Gödel left a fourteen-point outline of his philosophical beliefs in his papers. Points relevant to the ontological proof include:
4. There are other worlds and rational beings of a different and higher kind.
5. The world in which we live is not the only one in which we shall live or have lived.
13. There is a scientific (exact) philosophy and theology, which deals with concepts of the highest abstractness; and this is also most highly fruitful for science.
14. Religions are, for the most part, bad—but religion is not.
History.
The first version of the ontological proof in Gödel's papers is dated "around 1941". Gödel is not known to have told anyone about his work on the proof until 1970, when he thought he was dying. In February, he allowed Dana Scott to copy out a version of the proof, which circulated privately. In August 1970, Gödel told Oskar Morgenstern that he was "satisfied" with the proof, but Morgenstern recorded in his diary entry for 29 August 1970, that Gödel would not publish because he was afraid that others might think "that he actually believes in God, whereas he is only engaged in a logical investigation (that is, in showing that such a proof with classical assumptions (completeness, etc.) correspondingly axiomatized, is possible)." Gödel died January 14, 1978. Another version, slightly different from Scott's, was found in his papers. It was finally published, together with Scott's version, in 1987.
In letters to his mother, who was not a churchgoer and had raised Kurt and his brother as freethinkers, Gödel argued at length for a belief in an afterlife. He did the same in an interview with a skeptical Hao Wang, who said: "I expressed my doubts as G spoke [...] Gödel smiled as he replied to my questions, obviously aware that his answers were not convincing me." Wang reports that Gödel's wife, Adele, two days after Gödel's death, told Wang that "Gödel, although he did not go to church, was religious and read the Bible in bed every Sunday morning." In an unmailed answer to a questionnaire, Gödel described his religion as "baptized Lutheran (but not member of any religious congregation). My belief is "theistic", not pantheistic, following Leibniz rather than Spinoza."
Outline.
The proof uses modal logic, which distinguishes between "necessary" truths and "contingent" truths. In the most common semantics for modal logic, many "possible worlds" are considered. A truth is "necessary" if it is true in all possible worlds. By contrast, if a statement happens to be true in our world, but is false in another world, then it is a "contingent" truth. A statement that is true in some world (not necessarily our own) is called a "possible" truth.
Furthermore, the proof uses higher-order (modal) logic because the definition of God employs an explicit quantification over properties.
First, Gödel axiomatizes the notion of a "positive property": for each property "φ", either "φ" or its negation ¬"φ" must be positive, but not both (axiom 2). If a positive property "φ" implies a property "ψ" in each possible world, then "ψ" is positive, too (axiom 1). Gödel then argues that each positive property is "possibly exemplified", i.e. applies at least to some object in some world (theorem 1). Defining an object to be Godlike if it has all positive properties (definition 1), and requiring that property to be positive itself (axiom 3), Gödel shows that in "some" possible world a Godlike object exists (theorem 2), called "God" in the following. Gödel proceeds to prove that a Godlike object exists in "every" possible world.
To this end, he defines "essences": if "x" is an object in some world, then a property "φ" is said to be an essence of "x" if "φ"("x") is true in that world and if "φ" necessarily entails all other properties that "x" has in that world (definition 2). Requiring positive properties being positive in every possible world (axiom 4), Gödel can show that Godlikeness is an essence of a Godlike object (theorem 3). Now, "x" is said to "exist necessarily" if, for every essence "φ" of "x", there is an element "y" with property "φ" in every possible world (definition 3). Axiom 5 requires necessary existence to be a positive property.
Hence, it must follow from Godlikeness. Moreover, Godlikeness is an essence of God, since it entails all positive properties, and any non-positive property is the negation of some positive property, so God cannot have any non-positive properties. Since necessary existence is also a positive property (axiom 5), it must be a property of every Godlike object, as every Godlike object has all the positive properties (definition 1). Since any Godlike object is necessarily existent, it follows that any Godlike object in one world is a Godlike object in all worlds, by the definition of necessary existence. Given the existence of a Godlike object in one world, proven above, we may conclude that there is a Godlike object in every possible world, as required (theorem 4). Besides axioms 1–5 and definitions 1–3, a few other axioms of modal logic were tacitly used in the proof.
From these hypotheses, it is also possible to prove that there is only one God in each world by Leibniz's law, the identity of indiscernibles: two or more objects are identical (the same) if they have all their properties in common, and so, there would only be one object in each world that possesses property G. Gödel did not attempt to do so however, as he purposely limited his proof to the issue of existence, rather than uniqueness.
Symbolic notation.
formula_0
Criticism.
Most criticism of Gödel's proof is aimed at its axioms: as with any proof in any logical system, if the axioms the proof depends on are doubted, then the conclusions can be doubted. This applies particularly to Gödel's proof because it rests on five axioms, some of which are considered questionable. A proof does not establish that its conclusion is true, but rather that, by accepting the axioms, the conclusion follows logically.
Many philosophers have called the axioms into question. The first layer of criticism is simply that there are no arguments presented that give reasons why the axioms are true. A second layer is that these particular axioms lead to unwelcome conclusions. This line of thought was argued by Jordan Howard Sobel, showing that if the axioms are accepted, they lead to a "modal collapse" where every statement that is true is necessarily true, i.e. the sets of necessary, of contingent, and of possible truths all coincide (provided there are accessible worlds at all). According to Robert Koons, Sobel suggested in a 2005 conference paper that Gödel might have welcomed modal collapse.
There are suggested amendments to the proof, presented by C. Anthony Anderson, but argued to be refutable by Anderson and Michael Gettings. Sobel's proof of modal collapse has been questioned by Koons, but a counter-defence by Sobel has been given.
Gödel's proof has also been questioned by Graham Oppy, asking whether many other almost-gods would also be "proven" through Gödel's axioms. This counter-argument has been questioned by Gettings, who agrees that the axioms might be questioned, but disagrees that Oppy's particular counter-example can be shown from Gödel's axioms.
Religious scholar Fr. Robert J. Spitzer accepted Gödel's proof, calling it "an improvement over the Anselmian Ontological Argument (which does not work)."
There are, however, many more criticisms, most of them focusing on the question of whether these axioms must be rejected to avoid odd conclusions. The broader criticism is that even if the axioms cannot be shown to be false, that does not mean that they are true. Hilbert's famous remark about interchangeability of the primitives' names applies to those in Gödel's ontological axioms ("positive", "god-like", "essence") as well as to those in Hilbert's geometry axioms ("point", "line", "plane"). According to André Fuhrmann (2005) it remains to show that the dazzling notion prescribed by traditions and often believed to be essentially mysterious satisfies Gödel's axioms. This is not a mathematical, but a theological task. It is this task which decides which religion's god has been proven to exist.
Computationally verified versions.
Christoph Benzmüller and Bruno Woltzenlogel-Paleo formalized Gödel's proof to a level that is suitable for automated theorem proving or at least computational verification via proof assistants. The effort made headlines in German newspapers. According to the authors of this effort, they were inspired by Melvin Fitting's book.
In 2014, they computationally verified Gödel's proof (in the above version). They also proved that this version's axioms are consistent, but imply modal collapse, thus confirming Sobel's 1987 argument. In the same paper, they suspected Gödel's original version of the axioms to be inconsistent, as they failed to prove their consistency.
In 2016, they gave an automated proof that the original version implies formula_1, i.e., is inconsistent in every modal logic with a reflexive or symmetric accessibility relation. Moreover, they gave an argument that this version is inconsistent in every logic at all, but failed to duplicate it by automated provers. However, they were able to verify Melvin Fitting's reformulation of the argument and guarantee its consistency.
In literature.
A humorous variant of Gödel's ontological proof is mentioned in Quentin Canterel's novel "The Jolly Coroner".
The proof is also mentioned in the TV series "Hand of God".
Jeffrey Kegler's 2007 novel "The God Proof" depicts the (fictional) rediscovery of Gödel's lost notebook about the ontological proof.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\begin{array}{rl}\n\n\\text{Ax. 1.} & \\left(P(\\varphi) \\;\\wedge\\; \\Box \\; \\forall x (\\varphi(x) \\Rightarrow \\psi(x))\\right) \\;\\Rightarrow\\; P(\\psi) \\\\\n\n\\text{Ax. 2.} & P(\\neg \\varphi) \\;\\Leftrightarrow\\; \\neg P(\\varphi) \\\\\n\n\\text{Th. 1.} & P(\\varphi) \\;\\Rightarrow\\; \\Diamond \\; \\exists x \\; \\varphi(x) \\\\\n\n\\text{Df. 1.} & G(x) \\;\\Leftrightarrow\\; \\forall \\varphi (P(\\varphi) \\Rightarrow \\varphi(x)) \\\\\n\n\\text{Ax. 3.} & P(G) \\\\\n\n\\text{Th. 2.} & \\Diamond \\; \\exists x \\; G(x) \\\\\n\n\\text{Df. 2.} & \\varphi \\text{ ess } x \\;\\Leftrightarrow\\; \\varphi(x) \\wedge \\forall \\psi \\left(\\psi(x) \\Rightarrow \\Box \\; \\forall y (\\varphi(y) \\Rightarrow \\psi(y))\\right) \\\\\n\n\\text{Ax. 4.} & P(\\varphi) \\;\\Rightarrow\\; \\Box \\; P(\\varphi) \\\\\n\n\\text{Th. 3.} & G(x) \\;\\Rightarrow\\; G \\text{ ess } x \\\\\n\t\t\n\\text{Df. 3.} & E(x) \\;\\Leftrightarrow\\; \\forall \\varphi (\\varphi \\text{ ess } x \\Rightarrow \\Box \\; \\exists y \\; \\varphi(y)) \\\\\n\t\t\t\n\\text{Ax. 5.} & P(E) \\\\\n\t\t\t\n\\text{Th. 4.} & \\Box \\; \\exists x \\; G(x)\n \n\\end{array}\n"
},
{
"math_id": 1,
"text": "\\Diamond\\Box\\bot"
}
] | https://en.wikipedia.org/wiki?curid=12420 |
12424347 | Kavrayskiy VII projection | Pseudocylindrical compromise map projection
The Kavrayskiy VII projection is a map projection invented by Soviet cartographer Vladimir V. Kavrayskiy in 1939 for use as a general-purpose pseudocylindrical projection. Like the Robinson projection, it is a compromise intended to produce good-quality maps with low distortion overall. It scores well in that respect compared to other popular projections, such as the Winkel tripel, despite straight, evenly spaced parallels and a simple formulation. Regardless, it has not been widely used outside the former Soviet Union.
The projection is defined as
formula_0
where formula_1 is the longitude, and formula_2 is the latitude in radians.
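A direct implementation of these formulas is straightforward; the sketch below takes longitude and latitude in radians and returns unscaled map coordinates.

```python
# Kavrayskiy VII projection: x = (3*lon/2) * sqrt(1/3 - (lat/pi)^2), y = lat,
# with longitude and latitude given in radians.
import math

def kavrayskiy_vii(lon, lat):
    x = 1.5 * lon * math.sqrt(1.0 / 3.0 - (lat / math.pi) ** 2)
    y = lat
    return x, y

# Example: project the point at 45 degrees north, 90 degrees east
print(kavrayskiy_vii(math.radians(90.0), math.radians(45.0)))
```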
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{align}\n x &= \\frac{3 \\lambda}{2} \\sqrt{\\frac{1}{3} - \\left(\\frac{\\varphi}{\\pi}\\right)^2} \\\\\n y &= \\varphi\n\\end{align}"
},
{
"math_id": 1,
"text": "\\lambda"
},
{
"math_id": 2,
"text": "\\varphi"
}
] | https://en.wikipedia.org/wiki?curid=12424347 |
1242444 | Pointed set | Basic concept in set theory
In mathematics, a pointed set (also based set or rooted set) is an ordered pair formula_0 where formula_1 is a set and formula_2 is an element of formula_1 called the base point, also spelled basepoint.
Maps between pointed sets formula_0 and formula_3—called based maps, pointed maps, or point-preserving maps—are functions from formula_1 to formula_4 that map one basepoint to another, i.e. maps formula_5 such that formula_6. Based maps are usually denoted formula_7.
Pointed sets are very simple algebraic structures. In the sense of universal algebra, a pointed set is a set formula_1 together with a single nullary operation formula_8 which picks out the basepoint. Pointed maps are the homomorphisms of these algebraic structures.
The class of all pointed sets together with the class of all based maps forms a category. Every pointed set can be converted to an ordinary set by forgetting the basepoint (the forgetful functor is faithful), but the reverse is not true. In particular, the empty set cannot be pointed, because it has no element that can be chosen as the basepoint.
Categorical properties.
The category of pointed sets and based maps is equivalent to the category of sets and partial functions. The base point serves as a "default value" for those arguments for which the partial function is not defined. One textbook notes that "This formal completion of sets and partial maps by adding 'improper', 'infinite' elements was reinvented many times, in particular, in topology (one-point compactification) and in theoretical computer science." This category is also isomorphic to the coslice category (formula_9), where formula_10 is (a functor that selects) a singleton set, and formula_11 (the identity functor of) the category of sets. This coincides with the algebraic characterization, since the unique map formula_12 extends the commutative triangles defining arrows of the coslice category to form the commutative squares defining homomorphisms of the algebras.
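The equivalence with partial functions can be illustrated by a short sketch: a partial function between plain sets corresponds to a based map between the same sets extended by a fresh base point, with the base point playing the role of the "default value" for undefined arguments. Representing partial functions as dictionaries below is purely an illustration device.

```python
# A partial function (here: a dict) between plain sets induces a based map
# between the pointed sets obtained by adjoining a base point; None plays the
# role of the base point / "undefined" value.

def to_based_map(partial):
    def based(x):
        if x is None:
            return None                 # the base point is preserved
        return partial.get(x, None)     # undefined arguments go to the base point
    return based

halve = {2: 1, 4: 2, 6: 3}              # partial function on {1,...,6}, defined only on evens
f = to_based_map(halve)
print([f(x) for x in [None, 1, 2, 3, 4]])   # -> [None, None, 1, None, 2]
```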
There is a faithful functor from pointed sets to usual sets, but it is not full and these categories are not equivalent.
The category of pointed sets is a pointed category. The pointed singleton sets formula_13 are both initial objects and terminal objects, i.e. they are zero objects. The category of pointed sets and pointed maps has both products and coproducts, but it is not a distributive category. It is also an example of a category where formula_14 is not isomorphic to formula_15.
Applications.
Many algebraic structures rely on a distinguished point. For example, groups are pointed sets by choosing the identity element as the basepoint, so that group homomorphisms are point-preserving maps. This observation can be restated in category theoretic terms as the existence of a forgetful functor from groups to pointed sets.
A pointed set may be seen as a pointed space under the discrete topology or as a vector space over the field with one element.
As "rooted set" the notion naturally appears in the study of antimatroids and transportation polytopes.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(X, x_0)"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "x_0"
},
{
"math_id": 3,
"text": "(Y, y_0)"
},
{
"math_id": 4,
"text": "Y"
},
{
"math_id": 5,
"text": "f \\colon X \\to Y"
},
{
"math_id": 6,
"text": "f(x_0) = y_0"
},
{
"math_id": 7,
"text": "f \\colon (X, x_0) \\to (Y, y_0)"
},
{
"math_id": 8,
"text": "*: X^0 \\to X,"
},
{
"math_id": 9,
"text": "\\mathbf{1} \\downarrow \\mathbf{Set}"
},
{
"math_id": 10,
"text": "\\mathbf{1}"
},
{
"math_id": 11,
"text": "\\scriptstyle {\\mathbf{Set}}"
},
{
"math_id": 12,
"text": "\\mathbf{1} \\to \\mathbf{1}"
},
{
"math_id": 13,
"text": "(\\{a\\}, a)"
},
{
"math_id": 14,
"text": "0 \\times A"
},
{
"math_id": 15,
"text": "0"
}
] | https://en.wikipedia.org/wiki?curid=1242444 |
12424551 | Fluorescence cross-correlation spectroscopy | Fluorescence cross-correlation spectroscopy (FCCS) is a spectroscopic technique that examines the interactions of fluorescent particles of different colours as they randomly diffuse through a microscopic detection volume over time, under steady conditions.
Discovery.
Eigen and Rigler first introduced the fluorescence cross-correlation spectroscopy (FCCS) method in 1994. Later, in 1997, Schwille experimentally implemented this method.
Theory.
FCCS is an extension of the fluorescence correlation spectroscopy (FCS) method that uses two fluorescent species emitting different colours instead of one. The technique measures coincident green and red intensity fluctuations of distinct molecules that correlate if green and red labelled particles move together through a predefined confocal volume. FCCS utilizes two species that are independently labeled with two fluorescent probes of different colours. These probes are excited and detected by two different laser light sources and detectors, typically labeled "green" and "red". By combining FCCS with a confocal microscope, it becomes possible to detect fluorescent molecules in femtoliter volumes at nanomolar concentrations, with a high signal-to-noise ratio and on a microsecond time scale.
The normalized cross-correlation function is defined for two fluorescent species, G and R, which are independent green and red channels, respectively:
formula_0
where the differential fluorescent signals formula_1 at a specific time formula_2, and formula_3 at a delay time formula_4 later, are correlated with each other. In the absence of spectral bleed-through – when the fluorescence signal from an adjacent channel is visible in the channel being observed – the cross-correlation function is zero for non-interacting particles. In contrast to FCS, the cross-correlation function increases with increasing numbers of interacting particles.
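For discretely sampled intensity traces, the cross-correlation function above can be estimated directly, as in the sketch below. Production FCCS software normally uses hardware or multi-tau correlators for efficiency; the direct sum here is only meant to illustrate the definition, and the simulated traces are arbitrary.

```python
# Naive estimator of G_GR(tau) = <I_G(t) I_R(t+tau)> / (<I_G> <I_R>) from two
# discretely sampled intensity traces; G -> 1 for uncorrelated signals.
import numpy as np

def cross_correlation(i_green, i_red, max_lag):
    i_green = np.asarray(i_green, dtype=float)
    i_red = np.asarray(i_red, dtype=float)
    mean_g, mean_r = i_green.mean(), i_red.mean()
    G = []
    for lag in range(max_lag + 1):
        prod = i_green[: len(i_green) - lag] * i_red[lag:]
        G.append(prod.mean() / (mean_g * mean_r))
    return np.array(G)

# Two toy traces sharing a common (co-diffusing) component give G(0) > 1
rng = np.random.default_rng(0)
common = rng.poisson(5, 10_000)
green = common + rng.poisson(2, 10_000)
red = common + rng.poisson(3, 10_000)
print(cross_correlation(green, red, 3))
```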
FCCS is mainly used to study bio-molecular interactions both in living cells and in vitro. It allows for measuring simple molecular stoichiometries and binding constants. It is one of the few techniques that can provide information about protein–protein interactions at a specific time and location within a living cell. Unlike fluorescence resonance energy transfer, FCCS does not have a distance limit for interactions making it suitable for probing large complexes. However, FCCS requires active diffusion of the complexes through the microscope focus on a relatively short time scale, typically seconds.
Modeling.
The mathematical function used to model cross-correlation curves in FCCS is slightly more complex compared to that used in FCS. One of the primary differences is the effective superimposed observation volume, denoted as formula_5 in which the G and R channels form a single observation volume:
formula_6
where formula_7 and formula_8 are the radial parameters, and formula_9 and formula_10 are the axial parameters, for the G and R channels respectively.
The diffusion time, formula_11 for a doubly (G and R) fluorescent species is therefore described as follows:
formula_12
where formula_13 is the diffusion coefficient of the doubly fluorescent particle.
The cross-correlation curve generated from diffusing doubly labelled fluorescent particles can be modelled in separate channels as follows:
formula_14
formula_15
In the ideal case, the cross-correlation function is proportional to the concentration of the doubly labeled fluorescent complex:
formula_16
with formula_17
The cross-correlation amplitude is directly proportional to the concentration of double-labeled (red and green) species.
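The ideal-case model can likewise be evaluated numerically. The sketch below assumes the usual three-dimensional diffusion factor given above, with "a" interpreted as the axial-to-radial structure parameter of the observation volume; all parameter values are arbitrary illustration values rather than calibration constants.

```python
# Model cross-correlation curve for a single doubly labelled diffusing species:
# G_GR(tau) = 1 + C_GR * Diff(tau) / (V_eff * (C_G + C_GR) * (C_R + C_GR)),
# Diff(tau) = 1 / ((1 + tau/tau_D) * sqrt(1 + tau / (a**2 * tau_D))).
import numpy as np

def diffusion_factor(tau, tau_d, a):
    return 1.0 / ((1.0 + tau / tau_d) * np.sqrt(1.0 + tau / (a ** 2 * tau_d)))

def g_cross(tau, c_g, c_r, c_gr, v_eff, tau_d, a):
    return 1.0 + c_gr * diffusion_factor(tau, tau_d, a) / (v_eff * (c_g + c_gr) * (c_r + c_gr))

tau = np.logspace(-6, 0, 7)            # lag times in seconds
print(g_cross(tau, c_g=5.0, c_r=5.0, c_gr=2.0, v_eff=1.0, tau_d=1e-3, a=5.0))
```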
Experimental method.
FCCS measures the coincident green and red intensity fluctuations of distinct molecules that correlate if green and red labeled particles move together through a predefined confocal volume. To perform fluorescence cross-correlation spectroscopy (FCCS), samples of interest are first labeled with fluorescent probes of different colours. The FCCS setup typically includes a confocal microscope, two laser sources, and two detectors. The confocal microscope is used to focus the laser beams and collect the fluorescence signals. The signals from the detectors are then collected and recorded over time. Data analysis involves cross-correlating the signals to determine the degree of correlation between the two fluorescent probes. This information can be used to extract data on the stoichiometry and binding constants of molecular complexes, as well as the timing and location of interactions within living cells.
Applications.
Fluorescence cross-correlation spectroscopy has several applications in the fields of biophysics and biochemistry. It is a powerful technique that enables the investigation of interactions between various types of biomolecules, including proteins, nucleic acids, and lipids.
FCCS is one of the few techniques that can provide information about protein–protein interactions at a specific time and location within a living cell. It can also be used to study the dynamics of biomolecules in living cells, including their diffusion rates and localization, which can provide insights into the function and regulation of cellular processes.
Unlike Förster resonance energy transfer, FCCS does not have a distance limit for interactions, making it suitable for probing large complexes; however, it requires active diffusion of the complexes through the microscope focus on a relatively short time scale, typically seconds. FCCS also allows simple molecular stoichiometries and binding constants to be measured.
{
"math_id": 0,
"text": "\\ G_{GR}(\\tau)=1+\\frac{\\langle \\delta I_G(t)\\delta I_R(t+\\tau)\\rangle }{\\langle I_G(t)\\rangle\\langle I_R(t)\\rangle}=\\frac{\\langle I_G(t)I_R(t+\\tau)\\rangle}{\\langle I_G(t)\\rangle \\langle I_R(t)\\rangle}"
},
{
"math_id": 1,
"text": "\\ \\delta I_G"
},
{
"math_id": 2,
"text": "\\ t"
},
{
"math_id": 3,
"text": "\\ \\delta I_R"
},
{
"math_id": 4,
"text": "\\ \\tau"
},
{
"math_id": 5,
"text": "\\ V_{eff, RG}"
},
{
"math_id": 6,
"text": "\\ V_{eff, RG}=\\pi^{3/2}(\\omega_{xy,G}^2+\\omega_{xy,R}^2)(\\omega_{z,G}^2+\\omega_{z,R}^2)^{1/2}/2^{3/2}"
},
{
"math_id": 7,
"text": "\\ \\omega_{xy,G}^2"
},
{
"math_id": 8,
"text": "\\ \\omega_{xy,R}^2"
},
{
"math_id": 9,
"text": "\\ \\omega_{z,G}"
},
{
"math_id": 10,
"text": "\\ \\omega_{z,R}"
},
{
"math_id": 11,
"text": "\\ \\tau_{D,GR}"
},
{
"math_id": 12,
"text": "\\ \\tau_{D,GR}=\\frac{\\omega_{xy,G}^2+\\omega_{xy,R}^2}{8D_{GR}}"
},
{
"math_id": 13,
"text": "\\ D_{GR}"
},
{
"math_id": 14,
"text": "\\ G_G(\\tau)=1+\\frac{(<C_G>Diff_k(\\tau)+<C_{GR}>Diff_k(\\tau))}{V_{eff, GR}(<C_G>+<C_{GR}>)^2}"
},
{
"math_id": 15,
"text": "\\ G_R(\\tau)=1+\\frac{(<C_R>Diff_k(\\tau)+<C_{GR}>Diff_k(\\tau))}{V_{eff, GR}(<C_R>+<C_{GR}>)^2}"
},
{
"math_id": 16,
"text": "\\ G_{GR}(\\tau)=1+\\frac{<C_{GR}>Diff_{GR}(\\tau)}{V_{eff}(<C_G>+<C_{GR}>)(<C_R>+<C_{GR}>)}"
},
{
"math_id": 17,
"text": "\\ Diff_k(\\tau)=\\frac{1}{\\left(1+\\frac{\\tau}{\\tau_{D,i}}\\right)\\left(1+a^{-2}\\frac{\\tau}{\\tau_{D,i}}\\right)^{1/2}}"
}
] | https://en.wikipedia.org/wiki?curid=12424551 |
12428690 | Semi-elliptic operator | Differential operator in mathematics
In mathematics — specifically, in the theory of partial differential equations — a semi-elliptic operator is a partial differential operator satisfying a positivity condition slightly weaker than that of being an elliptic operator. Every elliptic operator is also semi-elliptic, and semi-elliptic operators share many of the nice properties of elliptic operators: for example, much of the same existence and uniqueness theory is applicable, and semi-elliptic Dirichlet problems can be solved using the methods of stochastic analysis.
Definition.
A second-order partial differential operator "P" defined on an open subset Ω of "n"-dimensional Euclidean space R"n", acting on suitable functions "f" by
formula_0
is said to be semi-elliptic if all the eigenvalues "λ""i"("x"), 1 ≤ "i" ≤ "n", of the matrix "a"("x") = ("a""ij"("x")) are non-negative. (By way of contrast, "P" is said to be elliptic if "λ""i"("x") > 0 for all "x" ∈ Ω and 1 ≤ "i" ≤ "n", and uniformly elliptic if the eigenvalues are uniformly bounded away from zero, uniformly in "i" and "x".) Equivalently, "P" is semi-elliptic if the matrix "a"("x") is positive semi-definite for each "x" ∈ Ω.
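In coordinates, semi-ellipticity at a point is just positive semi-definiteness of the coefficient matrix, which is easy to test numerically. The sketch below assumes the coefficient matrix has been symmetrized (only its symmetric part enters the second-order term), and the example matrix is purely illustrative.

```python
# Test semi-ellipticity of a second-order operator at a point x: all eigenvalues
# of the (symmetrized) coefficient matrix a(x) must be non-negative.
import numpy as np

def is_semi_elliptic_at(a_x, tol=1e-12):
    sym = 0.5 * (a_x + a_x.T)
    return bool(np.all(np.linalg.eigvalsh(sym) >= -tol))

# Degenerate (rank-one) coefficient matrix: semi-elliptic but not elliptic,
# since one eigenvalue is exactly zero.
a = np.array([[1.0, 0.0], [0.0, 0.0]])
print(is_semi_elliptic_at(a))   # True
```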
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P f(x) = \\sum_{i, j = 1}^{n} a_{ij} (x) \\frac{\\partial^{2} f}{\\partial x_{i} \\, \\partial x_{j}}(x) + \\sum_{i = 1}^{n} b_{i} (x) \\frac{\\partial f}{\\partial x_{i}} (x) + c(x) f(x),"
}
] | https://en.wikipedia.org/wiki?curid=12428690 |
1242892 | Hodge index theorem | In mathematics, the Hodge index theorem for an algebraic surface "V" determines the signature of the intersection pairing on the algebraic curves "C" on "V". It says, roughly speaking, that the space spanned by such curves (up to linear equivalence) has a one-dimensional subspace on which it is positive definite (not uniquely determined), and decomposes as a direct sum of some such one-dimensional subspace, and a complementary subspace on which it is negative definite.
In a more formal statement, specify that "V" is a non-singular projective surface, and let "H" be the divisor class on "V" of a hyperplane section of "V" in a given projective embedding. Then the intersection
formula_0
where "d" is the degree of "V" (in that embedding). Let "D" be the vector space of rational divisor classes on "V", up to algebraic equivalence. The dimension of "D" is finite and is usually denoted by ρ("V"). The Hodge index theorem says that the subspace spanned by "H" in "D" has a complementary subspace on which the intersection pairing is negative definite. Therefore, the signature (often also called "index") is (1,ρ("V")-1).
The abelian group of divisor classes up to algebraic equivalence is now called the Néron-Severi group; it is known to be a finitely-generated abelian group, and the result is about its tensor product with the rational number field. Therefore, ρ("V") is equally the rank of the Néron-Severi group (which can have a non-trivial torsion subgroup, on occasion).
This result was proved in the 1930s by W. V. D. Hodge, for varieties over the complex numbers, after it had been a conjecture for some time of the Italian school of algebraic geometry (in particular, Francesco Severi, who in this case showed that ρ < ∞). Hodge's methods were the topological ones brought in by Lefschetz. The result holds over general (algebraically closed) fields. | [
{
"math_id": 0,
"text": "H \\cdot H = d\\ "
}
] | https://en.wikipedia.org/wiki?curid=1242892 |
1242956 | Gross national income | Total domestic and foreign economic output claimed by residents of a country
The gross national income (GNI), previously known as gross national product (GNP), is the total domestic and foreign financial output claimed by residents of a country, consisting of gross domestic product (GDP), plus factor incomes earned by foreign residents, minus income earned in the domestic economy by nonresidents.
Comparing GNI to GDP shows the degree to which a nation's GDP represents domestic or international activity. GNI has gradually replaced GNP in international statistics. While being conceptually identical, it is calculated differently. GNI is the basis of calculation of the largest part of contributions to the budget of the European Union. In February 2017, Ireland's GDP became so distorted from the base erosion and profit shifting ("BEPS") tax planning tools of U.S. multinationals, that the Central Bank of Ireland replaced Irish GDP with a new metric, Irish Modified GNI (or "GNI*"). In 2017, Irish GDP was 162% of Irish Modified GNI.
The Atlas method can be applied to correct for fluctuating exchange rates.
Comparison of GNI and GDP.
formula_0
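The identity above is simple arithmetic, as the sketch below illustrates. It reproduces the 2009 US figures quoted later in this article, although the split of the net factor income into a gross inflow and outflow is an arbitrary illustration.

```python
# GNI = GDP + income flowing in from abroad - income flowing out to abroad.
def gross_national_income(gdp, inflow_from_abroad, outflow_to_abroad):
    return gdp + inflow_from_abroad - outflow_to_abroad

# In billions of dollars: a net factor income of +146 takes GDP ~14,119 to
# GNP/GNI ~14,265 (the 800/654 split is an arbitrary illustration).
print(gross_national_income(14_119, 800, 654))   # -> 14265
```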
GNI (Atlas method) nominal.
Nominal, Atlas method – millions of current US$ (top 15)
GNI (Atlas method) PPP.
PPP – millions of international dollars (top 15)
Gross national product.
Gross national product (GNP) is the market value of all the goods and services produced in one year by labor and property supplied by the citizens of a country. Unlike gross domestic product (GDP), which defines production based on the geographical location of production, GNP indicates allocated production based on location of ownership. In fact it calculates income by the location of ownership and residence, and so its name is also the less ambiguous "gross national income".
GNP is an economic statistic that is equal to GDP plus any income earned by residents from overseas investments minus income earned within the domestic economy by overseas residents.
GNP does not distinguish between qualitative improvements in the state of the technical arts (e.g., increasing computer processing speeds), and quantitative increases in goods (e.g., number of computers produced), and considers both to be forms of "economic growth".
When a country's capital or labour resources are employed outside its borders, or when a foreign firm is operating in its territory, GDP and GNP can produce different measures of total output. In 2009 for instance, the United States estimated its GDP at $14.119 trillion, and its GNP at $14.265 trillion.
The term "gross national income" (GNI) has gradually replaced the "Gross national product" (GNP) in international statistics. While being conceptually identical, the precise calculation method has evolved at the same time as the name change.
Use of GNP.
The United States used GNP as its primary measure of total economic activity until 1991, when it began to use GDP. In making the switch, the Bureau of Economic Analysis (BEA) noted both that GDP provided an easier comparison of other measures of economic activity in the United States and that "virtually all other countries have already adopted GDP as their primary measure of production". Many economists have questioned how meaningful GNP or GDP is as a measure of a nation's economic well-being, as it does not count most unpaid work and counts much economic activity that is unproductive or actually destructive.
GNI vs GDP.
While GDP measures the market value of all final goods and services produced in a given country, GNI measures income generated by the country's citizens, regardless of the geographic location of the income. In many states, those two figures are close, as the difference between income received by the country versus payments made to the rest of the world is not significant. According to the World Bank, the GNI of the US in 2016 was 1.5% higher than GDP.
In developing countries, on the other hand, the difference can be significant due to a large amount of foreign aid and capital inflow. In 2016, the GNI of Armenia was 4.45% higher than its GDP. According to OECD reports, in 2015 alone Armenia received a total of US$409 million in development assistance. Over the past 25 years, USAID has provided more than one billion USD to improve the living standards of people in Armenia. GNI equals GDP plus wages, salaries, and property income earned abroad by the country's residents, which accounts for the higher GNI figure. According to the UN report on migration from Armenia in 2015–17, every year around 15–20 thousand people leave Armenia permanently, and roughly 47% of those are working migrants who leave the country to earn income and sustain the families left behind. In 2016, Armenian residents received a total of around $150 million in remittances. Armenia's GNI, measured in US dollars, amounted to USD 13.5 billion in 2021, according to the National Statistical Office, an 8.23% increase over the prior year. GNI in USD terms in Armenia has historically ranged from a record high of USD 13.8 billion in 2019 to a record low of USD 1.06 billion in 1992. In terms of GNI expressed in USD, Armenia is ranked 119th out of the 155 monitored nations.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathrm{GNI} = \\mathrm{GDP} + \\text{Money flowing from foreign countries} - \\text{Money flowing to foreign countries}"
}
] | https://en.wikipedia.org/wiki?curid=1242956 |
12430286 | Magnetic impurity | Impurity in a host metal having a magnetic moment
A magnetic impurity is an impurity in a host metal that has a magnetic moment. The magnetic impurity can then interact with the conduction electrons of the metal, leading to interesting physics such as the Kondo effect, and heavy fermion behaviour. Some examples of magnetic impurities that metals can be doped with are iron and nickel. Such an impurity will contribute a Curie-Weiss term to the magnetic susceptibility,
formula_0.
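This contribution is easy to evaluate numerically, as in the sketch below; the values of C and θ are arbitrary illustration values.

```python
# Curie-Weiss contribution of a magnetic impurity: chi_imp = C / (T + theta).
import numpy as np

def chi_impurity(T, C, theta):
    return C / (T + theta)

T = np.array([1.0, 10.0, 100.0, 300.0])        # temperatures in kelvin
print(chi_impurity(T, C=0.5, theta=5.0))        # falls off roughly as 1/T at high T
```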
Early theoretical work concentrated on explaining the trend observed as the impurity was varied across the transition metal group. Based on the idea of a virtual bound state, Anderson proposed a model that was successful in explaining the formation of a localized magnetic moment from a magnetic impurity.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\chi_{imp} = \\frac{C}{T + \\theta}"
}
] | https://en.wikipedia.org/wiki?curid=12430286 |
12431124 | Process window index | Statistical measure that quantifies the robustness of a manufacturing process
Process window index (PWI) is a statistical measure that quantifies the robustness of a manufacturing process, e.g. one which involves heating and cooling, known as a thermal process. In manufacturing industry, PWI values are used to calibrate the heating and cooling of soldering jobs (known as a thermal profile) while baked in a reflow oven.
PWI measures how well a process fits into user-defined process limits known as the specification limits. The specification limits express the tolerance allowed for the process and may be statistically determined. Industrially, these specification limits are known as the "process window", and values plotted inside or outside this window are known as the process window index.
Using PWI values, processes can be accurately measured, analyzed, compared, and tracked at the same level of statistical process control and quality control available to other manufacturing processes.
Statistical process control.
Process capability is the ability of a process to produce output within specified limits. To help determine whether a manufacturing or business process is in a state of statistical control, process engineers use control charts, which help to predict the future performance of the process based on the current process.
To help determine the capability of a process, statistically determined upper and lower limits are drawn on either side of a process mean on the control chart. The control limits are set at three standard deviations on either side of the process mean, and are known as the upper control limit (UCL) and lower control limit (LCL) respectively. If the process data plotted on the control chart remains within the control limits over an extended period, then the process is said to be stable.
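As an illustration of how the control limits are obtained (a minimal sketch; the sample data are invented, and real control charts often estimate the standard deviation from subgroup ranges rather than directly as below):

```python
import statistics

# Hypothetical measurements from a process being monitored.
samples = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7, 10.0, 10.2]

mean = statistics.mean(samples)
sigma = statistics.stdev(samples)   # sample standard deviation

ucl = mean + 3 * sigma              # upper control limit
lcl = mean - 3 * sigma              # lower control limit
print(f"mean = {mean:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")
```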
The tolerance values specified by the end-user are known as specification limits – the upper specification limit (USL) and lower specification limit (LSL). If the process data plotted on a control chart remains within these specification limits, then the process is considered a capable process, denoted by formula_0.
The manufacturing industry has developed customized specification limits known as "process windows". Within this process window, values are plotted. The values relative to the process mean of the window are known as the "process window index". By using PWI values, processes can be accurately measured, analyzed, compared, and tracked at the same level of statistical process control and quality control available to other manufacturing processes.
Control limits.
"Control limits", also known as "natural process limits", are horizontal lines drawn on a statistical process control chart, usually at a distance of ±3 standard deviations of the plotted statistic's mean, used to judge the stability of a process.
Control limits should not be confused with "tolerance limits" or "specifications," which are completely independent of the distribution of the plotted sample statistic. Control limits describe what a process is capable of producing (sometimes referred to as the "voice of the process"), while tolerances and specifications describe how the product should perform to meet the customer's expectations (referred to as the "voice of the customer").
Use.
Control limits are used to detect signals in process data that indicate that a process is not in control and, therefore, not operating predictably. A value in excess of the control limit indicates a special cause is affecting the process.
To detect signals, one of several rule sets may be used. One specification outlines that a signal is defined as any single point outside of the control limits. A process is also considered out of control if there are seven consecutive points, still inside the control limits but on one single side of the mean.
For normally distributed statistics, the area bracketed by the control limits will on average contain 99.73% of all the plot points on the chart, as long as the process is and remains in statistical control. A false-detection rate of at least 0.27% is therefore expected.
It is often not known whether a particular process generates data that conform to particular distributions, but Chebyshev's inequality and the Vysochanskij–Petunin inequality allow the inference that for any unimodal distribution at least 95% of the data will be encapsulated by limits placed at 3 sigma.
PWI in electronics manufacturing.
An example of a process to which the PWI concept may be applied is soldering. In soldering, a thermal profile is the set of time-temperature values for a variety of processes such as slope, thermal soak, reflow, and peak.
Each thermal profile is ranked on how it fits in a process window (the specification or tolerance limit). Raw temperature values are normalized in terms of a percentage relative to both the process mean and the window limits. The center of the process window is defined as zero, and the extreme edges of the process window are ±99%. A PWI greater than or equal to 100% indicates that the profile does not process the product within specification. A PWI of 99% indicates that the profile runs at the edge of the process window. For example, if the process mean is set at 200 °C and the process window is calibrated at 180 °C and 220 °C, then a measured value of 188 °C translates to a process window index of −60%. A lower PWI value indicates a more robust profile. For maximum efficiency, separate PWI values are computed for the peak, slope, reflow, and soak processes of a thermal profile.
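As a quick check of the numbers in this example, the signed position of a measurement within the process window can be computed directly (a minimal sketch; the variable names are illustrative only):

```python
# Signed position of a measurement within the process window, in percent.
low, high = 180.0, 220.0   # process window limits (degrees Celsius)
measured = 188.0           # measured value from the example above
mean = (low + high) / 2.0  # process mean, 200 degrees Celsius
signed_index = 100.0 * (measured - mean) / ((high - low) / 2.0)
print(signed_index)        # -60.0, i.e. 60% of the window used, below the mean
```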
To avoid thermal shock affecting production, the steepest slope in the thermal profile is determined and leveled. Manufacturers use custom-built software to accurately determine and decrease the steepness of the slope. In addition, the software also automatically recalibrates the PWI values for the peak, slope, reflow, and soak processes. By setting PWI values, engineers can ensure that the reflow soldering work does not overheat or cool too quickly.
Formula.
The PWI is calculated as the worst case (i.e. highest number) in the set of thermal profile data. For each profile statistic the percentage used of the respective process window is calculated, and the worst case (i.e. highest percentage) is the PWI.
For example, a thermal profile with three thermocouples, with four profile statistics logged for each thermocouple, would have a set of twelve statistics for that thermal profile. In this case, the PWI would be the highest value among the twelve percentages of the respective process windows.
The formula to calculate PWI is:
formula_1
where:
"i" = 1 to "N" (number of thermocouples)
"j" = 1 to "M" (number of statistics per thermocouple)
measured value ["i", "j"] = the ["i", "j"]th statistic's measured value
average limits ["i", "j"] = the average of the high and low (specified) limits of the ["i", "j"']th statistic
range ["i", "j"] = the high limit minus the low limit of the ["i", "j"]th statistic
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\hat{C}_{pk}"
},
{
"math_id": 1,
"text": "\n\\text{PWI} = 100 \\times \\max_{i=1\\dots N \\atop j=1\\dots M}\n\\left\\{\\left|\n\\frac{\\text{measured value}_{[i,j]} - \\text{average limits}_{[i,j]}} {\\text{range}_{[i,j]}/2}\n\\right|\\right\\}\n"
}
] | https://en.wikipedia.org/wiki?curid=12431124 |
12433418 | Filtering problem (stochastic processes) | Mathematical model for state estimation
In the theory of stochastic processes, filtering describes the problem of determining the state of a system from an incomplete and potentially noisy set of observations. While originally motivated by problems in engineering, filtering found applications in many fields from signal processing to finance.
The problem of optimal non-linear filtering (even for the non-stationary case) was solved by Ruslan L. Stratonovich (1959, 1960), see also Harold J. Kushner's work and Moshe Zakai's, who introduced a simplified dynamics for the unnormalized conditional law of the filter known as the Zakai equation. The solution, however, is infinite-dimensional in the general case. Certain approximations and special cases are well understood: for example, the linear filters are optimal for Gaussian random variables, and are known as the Wiener filter and the Kalman-Bucy filter. More generally, as the solution is infinite dimensional, it requires finite dimensional approximations to be implemented in a computer with finite memory. A finite dimensional approximated nonlinear filter may be based more on heuristics, such as the extended Kalman filter or the assumed density filters, or more methodologically oriented such as for example the projection filters, some sub-families of which are shown to coincide with the assumed density filters.
Particle filters are another option to attack the infinite dimensional filtering problem and are based on sequential Monte Carlo methods.
In general, if the separation principle applies, then filtering also arises as part of the solution of an optimal control problem. For example, the Kalman filter is the estimation part of the optimal control solution to the linear-quadratic-Gaussian control problem.
The mathematical formalism.
Consider a probability space (Ω, Σ, P) and suppose that the (random) state "Y""t" in "n"-dimensional Euclidean space R"n" of a system of interest at time "t" is a random variable "Y""t" : Ω → R"n" given by the solution to an Itō stochastic differential equation of the form
formula_0
where "B" denotes standard "p"-dimensional Brownian motion, "b" : [0, +∞) × R"n" → R"n" is the drift field, and "σ" : [0, +∞) × R"n" → R"n"×"p" is the diffusion field. It is assumed that observations "H""t" in R"m" (note that "m" and "n" may, in general, be unequal) are taken for each time "t" according to
formula_1
Adopting the Itō interpretation of the stochastic differential and setting
formula_2
this gives the following stochastic integral representation for the observations "Z""t":
formula_3
where "W" denotes standard "r"-dimensional Brownian motion, independent of "B" and the initial condition "Y"0, and "c" : [0, +∞) × R"n" → R"n" and "γ" : [0, +∞) × R"n" → R"n"×"r" satisfy
formula_4
for all "t" and "x" and some constant "C".
The filtering problem is the following: given observations "Z""s" for 0 ≤ "s" ≤ "t", what is the best estimate "Ŷ""t" of the true state "Y""t" of the system based on those observations?
By "based on those observations" it is meant that "Ŷ""t" is measurable with respect to the "σ"-algebra "G""t" generated by the observations "Z""s", 0 ≤ "s" ≤ "t". Denote by "K" = "K"("Z", "t") the collection of all R"n"-valued random variables "Y" that are square-integrable and "G""t"-measurable:
formula_5
By "best estimate", it is meant that "Ŷ""t" minimizes the mean-square distance between "Y""t" and all candidates in "K":
formula_6
Basic result: orthogonal projection.
The space "K"("Z", "t") of candidates is a Hilbert space, and the general theory of Hilbert spaces implies that the solution "Ŷ""t" of the minimization problem (M) is given by
formula_7
where "P""K"("Z","t") denotes the orthogonal projection of "L"2(Ω, Σ, P; R"n") onto the linear subspace "K"("Z", "t") = "L"2(Ω, "G""t", P; R"n"). Furthermore, it is a general fact about conditional expectations that if "F" is any sub-"σ"-algebra of Σ then the orthogonal projection
formula_8
is exactly the conditional expectation operator E[·|"F"], i.e.,
formula_9
Hence,
formula_10
This elementary result is the basis for the general Fujisaki-Kallianpur-Kunita equation of filtering theory.
More advanced result: nonlinear filtering SPDE.
The complete knowledge of the filter at a time "t" would be given by the probability law of the signal "Y""t" conditional on the sigma-field "G""t" generated by observations "Z" up to time "t". If this probability law admits a density, informally
formula_11
then under some regularity assumptions the density formula_12 satisfies a non-linear stochastic partial differential equation (SPDE) driven by formula_13 and called the Kushner-Stratonovich equation, or an unnormalized version formula_14 of the density formula_12 satisfies a linear SPDE called the Zakai equation.
These equations can be formulated for the above system, but to simplify the exposition one can assume that the unobserved signal "Y" and the partially observed noisy signal "Z" satisfy the equations
formula_0
formula_15
In other terms, the system is simplified by assuming that the observation noise "W" is not state dependent.
One might keep a deterministic time-dependent formula_16 in front of formula_17, but we assume this has been taken out by re-scaling.
For this particular system, the Kushner-Stratonovich SPDE for the density formula_18 reads
formula_19
where "T" denotes transposition, formula_20 denotes the expectation with respect to the density "p",
formula_21
and the forward diffusion operator formula_22 is
formula_23
where formula_24.
If we choose the unnormalized density formula_25, the Zakai SPDE for the same system reads
formula_26
These SPDEs for "p" and "q" are written in Ito calculus form. It is possible to write them in Stratonovich calculus form, which turns out to be helpful when deriving filtering approximations based on differential geometry, as in the projection filters.
For example, the Kushner-Stratonovich equation written in Stratonovich calculus reads
formula_27
From any of the densities "p" and "q" one can calculate all statistics of the signal "Y""t" conditional on the sigma-field generated by observations "Z" up to time "t", so that the densities give complete knowledge of the filter. Under the particular linear-constant assumptions with respect to "Y", where the systems coefficients "b" and "c" are linear functions of "Y" and where formula_28 and formula_16 do not depend on "Y", with the initial condition for the signal "Y" being Gaussian or deterministic, the density formula_12 is Gaussian and it can be characterized by its mean and variance-covariance matrix, whose evolution is described by the Kalman-Bucy filter, which is finite dimensional. More generally, the evolution of the filter density occurs in an infinite-dimensional function space, and it has to be approximated via a finite dimensional approximation, as hinted above.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{d} Y_{t} = b(t, Y_{t}) \\, \\mathrm{d} t + \\sigma (t, Y_{t}) \\, \\mathrm{d} B_{t},"
},
{
"math_id": 1,
"text": "H_{t} = c(t, Y_{t}) + \\gamma (t, Y_{t}) \\cdot \\mbox{noise}."
},
{
"math_id": 2,
"text": " Z_{t} = \\int_{0}^{t} H_{s} \\, \\mathrm{d} s,"
},
{
"math_id": 3,
"text": "\\mathrm{d} Z_{t} = c(t, Y_{t}) \\, \\mathrm{d} t + \\gamma (t, Y_{t}) \\, \\mathrm{d} W_{t},"
},
{
"math_id": 4,
"text": "\\big| c (t, x) \\big| + \\big| \\gamma (t, x) \\big| \\leq C \\big( 1 + | x | \\big)"
},
{
"math_id": 5,
"text": "K = K(Z, t) = L^{2} (\\Omega, G_{t}, \\mathbf{P}; \\mathbf{R}^{n})."
},
{
"math_id": 6,
"text": "\\mathbf{E} \\left[ \\big| Y_{t} - \\hat{Y}_{t} \\big|^{2} \\right] = \\inf_{Y \\in K} \\mathbf{E} \\left[ \\big| Y_{t} - Y \\big|^{2} \\right]. \\qquad \\mbox{(M)}"
},
{
"math_id": 7,
"text": "\\hat{Y}_{t} = P_{K(Z, t)} \\big( Y_{t} \\big),"
},
{
"math_id": 8,
"text": "P_{K} : L^{2} (\\Omega, \\Sigma, \\mathbf{P}; \\mathbf{R}^{n}) \\to L^{2} (\\Omega, F, \\mathbf{P}; \\mathbf{R}^{n})"
},
{
"math_id": 9,
"text": "P_{K} (X) = \\mathbf{E} \\big[ X \\big | F \\big]."
},
{
"math_id": 10,
"text": "\\hat{Y}_{t} = P_{K(Z, t)} \\big( Y_{t} \\big) = \\mathbf{E} \\big[ Y_{t} \\big | G_{t} \\big]."
},
{
"math_id": 11,
"text": " p_t(y)\\ dy = {\\bf P}(Y_t \\in dy|G_t), "
},
{
"math_id": 12,
"text": "p_t(y)"
},
{
"math_id": 13,
"text": "dZ_t"
},
{
"math_id": 14,
"text": "q_t(y)"
},
{
"math_id": 15,
"text": "\\mathrm{d} Z_{t} = c(t, Y_{t}) \\, \\mathrm{d} t + \\mathrm{d} W_{t}."
},
{
"math_id": 16,
"text": "\\gamma"
},
{
"math_id": 17,
"text": " dW"
},
{
"math_id": 18,
"text": "p_t"
},
{
"math_id": 19,
"text": "\n\\mathrm{d} p_t = {\\cal L}^*_t p_t \\ dt \n+ p_t[c(t,\\cdot) - E_{p_t}(c(t,\\cdot))]^T [ d Z_t - E_{p_t}(c(t,\\cdot)) d t]\n"
},
{
"math_id": 20,
"text": "E_p"
},
{
"math_id": 21,
"text": " E_p[f] = \\int f(y) p(y) dy,"
},
{
"math_id": 22,
"text": "{\\cal L}^*_t"
},
{
"math_id": 23,
"text": "\n{\\cal L}_t^* f(t,y) = - \\sum_i \\frac{\\partial}{\\partial y_i} [ b_i(t,y) f(t,y) ] + \\frac{1}{2} \\sum_{i,j} \\frac{\\partial^2}{\\partial y_i \\partial y_j} [a_{ij}(t,y) f(t,y)]\n"
},
{
"math_id": 24,
"text": "a=\\sigma \\sigma^T"
},
{
"math_id": 25,
"text": " q_t(y)"
},
{
"math_id": 26,
"text": "\n\\mathrm{d} q_t = {\\cal L}^*_t q_t \\ dt \n+ q_t[c(t,\\cdot)]^T d Z_t .\n"
},
{
"math_id": 27,
"text": " d p_t = {\\cal L}^\\ast_t\\, p_t\\,dt\n - \\frac{1}{2}\\, p_t\\, [\\vert c(\\cdot, t) \\vert^2 - E_{p_t}(\\vert c(\\cdot, t) \\vert^2)] \\,dt\n + p_t\\, [c(\\cdot, t)-E_{p_t}(c(\\cdot, t)) ]^T \\circ dZ_t\\ ."
},
{
"math_id": 28,
"text": "\\sigma"
}
] | https://en.wikipedia.org/wiki?curid=12433418 |
1243451 | Thom space | In mathematics, the Thom space, Thom complex, or Pontryagin–Thom construction (named after René Thom and Lev Pontryagin) of algebraic topology and differential topology is a topological space associated to a vector bundle, over any paracompact space.
Construction of the Thom space.
One way to construct this space is as follows. Let
formula_0
be a rank "n" real vector bundle over the paracompact space "B". Then for each point "b" in "B", the fiber formula_1 is an "n"-dimensional real vector space. We can form an "n"-sphere bundle formula_2 by taking the one-point compactification of each fiber and gluing them together to get the total space. Finally, from the total space formula_3 we obtain the Thom space formula_4 as the quotient of formula_3 by "B"; that is, by identifying all the new points to a single point formula_5, which we take as the basepoint of formula_4. If "B" is compact, then formula_4 is the one-point compactification of "E".
For example, if "E" is the trivial bundle formula_6, then formula_3 is formula_7 and, writing formula_8 for "B" with a disjoint basepoint, formula_4 is the smash product of formula_8 and formula_9; that is, the "n"-th reduced suspension of formula_8.
Alternatively, since "B" is paracompact, "E" can be given a Euclidean metric and then formula_4 can be defined as the quotient of the unit disk bundle of "E" by the unit formula_10-sphere bundle of "E".
The Thom isomorphism.
The significance of this construction begins with the following result, which belongs to the subject of cohomology of fiber bundles. (We have stated the result in terms of formula_11 coefficients to avoid complications arising from orientability; see also Orientation of a vector bundle#Thom space.)
Let formula_12 be a real vector bundle of rank "n". Then there is an isomorphism called a Thom isomorphism
formula_13
for all "k" greater than or equal to 0, where the right hand side is reduced cohomology.
This theorem was formulated and proved by René Thom in his famous 1952 thesis.
We can interpret the theorem as a global generalization of the suspension isomorphism on local trivializations, because the Thom space of a trivial bundle on "B" of rank "k" is isomorphic to the "k"th suspension of formula_8, "B" with a disjoint point added (cf. #Construction of the Thom space.) This can be more easily seen in the formulation of the theorem that does not make reference to Thom space:
In concise terms, the last part of the theorem says that "u" freely generates formula_15 as a right formula_16-module. The class "u" is usually called the Thom class of "E". Since the pullback formula_17 is a ring isomorphism, formula_18 is given by the equation:
formula_19
In particular, the Thom isomorphism sends the identity element of formula_20 to "u". Note: for this formula to make sense, "u" is treated as an element of (we drop the ring formula_14)
formula_21
The standard reference for the Thom isomorphism is the book by Bott and Tu.
Significance of Thom's work.
In his 1952 paper, Thom showed that the Thom class, the Stiefel–Whitney classes, and the Steenrod operations were all related. He used these ideas to prove in the 1954 paper "Quelques propriétés globales des variétés differentiables" that the cobordism groups could be computed as the homotopy groups of certain Thom spaces "MG"("n"). The proof depends on and is intimately related to the transversality properties of smooth manifolds—see Thom transversality theorem. By reversing this construction, John Milnor and Sergei Novikov (among many others) were able to answer questions about the existence and uniqueness of high-dimensional manifolds: this is now known as surgery theory. In addition, the spaces "MG(n)" fit together to form spectra "MG" now known as Thom spectra, and the cobordism groups are in fact stable. Thom's construction thus also unifies differential topology and stable homotopy theory, and is in particular integral to our knowledge of the stable homotopy groups of spheres.
If the Steenrod operations are available, we can use them and the isomorphism of the theorem to construct the Stiefel–Whitney classes. Recall that the Steenrod operations (mod 2) are natural transformations
formula_22
defined for all nonnegative integers "m". If formula_23, then formula_24 coincides with the cup square. We can define the "i"th Stiefel–Whitney class formula_25 of the vector bundle formula_12 by:
formula_26
Consequences for differentiable manifolds.
If we take the bundle in the above to be the tangent bundle of a smooth manifold, the conclusion of the above is called the Wu formula, and has the following strong consequence: since the Steenrod operations are invariant under homotopy equivalence, we conclude that the Stiefel–Whitney classes of a manifold are as well. This is an extraordinary result that does not generalize to other characteristic classes. There exists a similar famous and difficult result establishing topological invariance for rational Pontryagin classes, due to Sergei Novikov.
Thom spectrum.
Real cobordism.
There are two ways to think about bordism: one is to consider two formula_27-manifolds formula_28 as cobordant if there is an formula_29-manifold with boundary formula_30 such that
formula_31
Another technique to encode this kind of information is to take an embedding formula_32 and considering the normal bundle
formula_33
The embedded manifold together with the isomorphism class of the normal bundle actually encodes the same information as the cobordism class formula_34. This can be shown by using a cobordism formula_30 and finding an embedding into some formula_35 which gives a homotopy class of maps to the Thom space formula_36 defined below. Showing the isomorphism of
formula_37
requires a little more work.
Definition of Thom spectrum.
By definition, the Thom spectrum is a sequence of Thom spaces
formula_38
where we wrote formula_39 for the universal vector bundle of rank "n". The sequence forms a spectrum. A theorem of Thom says that formula_40 is the unoriented cobordism ring; the proof of this theorem relies crucially on Thom's transversality theorem. The lack of transversality prevents one from computing cobordism rings of, say, topological manifolds from Thom spectra.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p\\colon E \\to B"
},
{
"math_id": 1,
"text": "E_b"
},
{
"math_id": 2,
"text": "\\operatorname{Sph}(E) \\to B"
},
{
"math_id": 3,
"text": "\\operatorname{Sph}(E)"
},
{
"math_id": 4,
"text": "T(E)"
},
{
"math_id": 5,
"text": "\\infty"
},
{
"math_id": 6,
"text": "B\\times \\R^n"
},
{
"math_id": 7,
"text": "B\\times S^n"
},
{
"math_id": 8,
"text": "B_+"
},
{
"math_id": 9,
"text": "S^n"
},
{
"math_id": 10,
"text": "(n-1)"
},
{
"math_id": 11,
"text": "\\Z_2"
},
{
"math_id": 12,
"text": "p: E\\to B"
},
{
"math_id": 13,
"text": "\\Phi : H^k(B; \\Z_2) \\to \\widetilde{H}^{k+n}(T(E); \\Z_2),"
},
{
"math_id": 14,
"text": "\\Lambda"
},
{
"math_id": 15,
"text": "H^*(E, E \\setminus B; \\Lambda)"
},
{
"math_id": 16,
"text": "H^*(E; \\Lambda)"
},
{
"math_id": 17,
"text": "p^*: H^*(B; \\Lambda) \\to H^*(E; \\Lambda)"
},
{
"math_id": 18,
"text": "\\Phi"
},
{
"math_id": 19,
"text": "\\Phi(b) = p^*(b) \\smile u."
},
{
"math_id": 20,
"text": "H^*(B)"
},
{
"math_id": 21,
"text": "\\tilde{H}^n(T(E)) = H^n(\\operatorname{Sph}(E), B) \\simeq H^n(E, E \\setminus B)."
},
{
"math_id": 22,
"text": "Sq^i : H^m(-; \\Z_2) \\to H^{m+i}(-; \\Z_2),"
},
{
"math_id": 23,
"text": "i=m"
},
{
"math_id": 24,
"text": "Sq^i"
},
{
"math_id": 25,
"text": "w_i(p)"
},
{
"math_id": 26,
"text": "w_i(p) = \\Phi^{-1}(Sq^i(\\Phi(1))) = \\Phi^{-1}(Sq^i(u))."
},
{
"math_id": 27,
"text": "n"
},
{
"math_id": 28,
"text": "M,M'"
},
{
"math_id": 29,
"text": "(n+1)"
},
{
"math_id": 30,
"text": "W"
},
{
"math_id": 31,
"text": "\\partial W = M \\coprod M'"
},
{
"math_id": 32,
"text": "M \\hookrightarrow \\R^{N + n}"
},
{
"math_id": 33,
"text": "\\nu: N_{\\R^{N+n}/M} \\to M"
},
{
"math_id": 34,
"text": "[M]"
},
{
"math_id": 35,
"text": "\\R^{N_W + n}\\times [0,1]"
},
{
"math_id": 36,
"text": "MO(n)"
},
{
"math_id": 37,
"text": "\\pi_nMO \\cong \\Omega^O_n"
},
{
"math_id": 38,
"text": "MO(n) = T(\\gamma^n)"
},
{
"math_id": 39,
"text": "\\gamma^n\\to BO(n)"
},
{
"math_id": 40,
"text": "\\pi_*(MO)"
}
] | https://en.wikipedia.org/wiki?curid=1243451 |