id | title | text | formulas | url
---|---|---|---|---|
1147994 | Friedmann equations | Equations in physical cosmology
The Friedmann equations, also known as the Friedmann–Lemaître (FL) equations, are a set of equations in physical cosmology that govern the expansion of space in homogeneous and isotropic models of the universe within the context of general relativity. They were first derived by Alexander Friedmann in 1922 from Einstein's field equations of gravitation for the Friedmann–Lemaître–Robertson–Walker metric and a perfect fluid with a given mass density ρ and pressure p. The equations for negative spatial curvature were given by Friedmann in 1924.
Assumptions.
The Friedmann equations start with the simplifying assumption that the universe is spatially homogeneous and isotropic, that is, the cosmological principle; empirically, this is justified on scales larger than the order of 100 Mpc. The cosmological principle implies that the metric of the universe must be of the form
formula_0
where d"s"32 is a three-dimensional metric that must be one of (a) flat space, (b) a sphere of constant positive curvature or (c) a hyperbolic space with constant negative curvature. This metric is called the Friedmann–Lemaître–Robertson–Walker (FLRW) metric. The parameter k discussed below takes the value 0, 1, −1, or the Gaussian curvature, in these three cases respectively. It is this fact that allows us to sensibly speak of a "scale factor" "a"("t").
Einstein's equations now relate the evolution of this scale factor to the pressure and energy of the matter in the universe. From FLRW metric we compute Christoffel symbols, then the Ricci tensor. With the stress–energy tensor for a perfect fluid, we substitute them into Einstein's field equations and the resulting equations are described below.
Equations.
There are two independent Friedmann equations for modelling a homogeneous, isotropic universe. The first is:
formula_1
which is derived from the 00 component of the Einstein field equations. The second is:
formula_2
which is derived from the first together with the trace of Einstein's field equations (the dimension of the two equations is time−2).
a is the scale factor, G, Λ, and c are universal constants (G is the Newtonian constant of gravitation, Λ is the cosmological constant with dimension length−2, and c is the speed of light in vacuum). ρ and p are the volumetric mass density (and not the volumetric energy density) and the pressure, respectively. k is constant throughout a particular solution, but may vary from one solution to another.
In the previous equations, a, ρ, and p are functions of time. The quantity kc²/a² is the spatial curvature in any time-slice of the universe; it is equal to one-sixth of the spatial Ricci curvature scalar R since
formula_3
in the Friedmann model. "H" ≡ "ȧ"/"a" is the Hubble parameter.
We see that in the Friedmann equations, "a"("t") does not depend on which coordinate system we choose for spatial slices. There are two commonly used choices for a and k which describe the same physics:
In the first convention, "k" = +1, 0 or −1 depending on whether the shape of the universe is a closed 3-sphere, flat (Euclidean space) or an open 3-hyperboloid, respectively. If "k" = +1, then a is the radius of curvature of the universe. If "k" = 0, then a may be fixed to any arbitrary positive number at one particular time. If "k" = −1, then (loosely speaking) one can say that "i" · "a" is the radius of curvature of the universe.
In the second convention, a is the scale factor, taken to be 1 at the present time, and k is the current spatial curvature (when "a" = 1). If the shape of the universe is hyperspherical and "Rt" is the radius of curvature ("R"0 at the present), then "a" = "Rt"/"R"0. If k is positive, then the universe is hyperspherical. If "k" = 0, then the universe is flat. If k is negative, then the universe is hyperbolic.
Using the first equation, the second equation can be re-expressed as
formula_4
which eliminates Λ and expresses the conservation of mass–energy:
formula_5
These equations are sometimes simplified by replacing
formula_6
to give:
formula_7
The simplified form of the second equation is invariant under this transformation.
The Hubble parameter can change over time if other parts of the equation are time dependent (in particular the mass density, the vacuum energy, or the spatial curvature). Evaluating the Hubble parameter at the present time yields Hubble's constant which is the proportionality constant of Hubble's law. Applied to a fluid with a given equation of state, the Friedmann equations yield the time evolution and geometry of the universe as a function of the fluid density.
Some cosmologists call the second of these two equations the Friedmann acceleration equation and reserve the term "Friedmann equation" for only the first equation.
Density parameter.
The density parameter Ω is defined as the ratio of the actual (or observed) density ρ to the critical density "ρ"c of the Friedmann universe. The relation between the actual density and the critical density determines the overall geometry of the universe; when they are equal, the geometry of the universe is flat (Euclidean). In earlier models, which did not include a cosmological constant term, critical density was initially defined as the watershed point between an expanding and a contracting Universe.
To date, the critical density is estimated to be approximately five atoms (of monatomic hydrogen) per cubic metre, whereas the average density of ordinary matter in the Universe is believed to be 0.2–0.25 atoms per cubic metre.
A much greater density comes from the unidentified dark matter, although both ordinary and dark matter contribute in favour of contraction of the universe. However, the largest part comes from so-called dark energy, which accounts for the cosmological constant term. Although the total density is equal to the critical density (exactly, up to measurement error), dark energy does not lead to contraction of the universe but rather may accelerate its expansion.
An expression for the critical density is found by assuming Λ to be zero (as it is for all basic Friedmann universes) and setting the normalised spatial curvature, k, equal to zero. When the substitutions are applied to the first of the Friedmann equations we find:
formula_8
<templatestyles src="Block indent/styles.css"/> (where "h"
"H"0/(100 km/s/Mpc). For "Ho"
67.4 km/s/Mpc, i.e. "h"
0.674, "ρ"c
The density parameter (useful for comparing different cosmological models) is then defined as:
formula_9
This term originally was used as a means to determine the spatial geometry of the universe, where "ρ"c is the critical density for which the spatial geometry is flat (or Euclidean). Assuming a zero vacuum energy density, if Ω is larger than unity, the space sections of the universe are closed; the universe will eventually stop expanding, then collapse. If Ω is less than unity, they are open; and the universe expands forever. However, one can also subsume the spatial curvature and vacuum energy terms into a more general expression for Ω in which case this density parameter equals exactly unity. Then it is a matter of measuring the different components, usually designated by subscripts. According to the ΛCDM model, there are important components of Ω due to baryons, cold dark matter and dark energy. The spatial geometry of the universe has been measured by the WMAP spacecraft to be nearly flat. This means that the universe can be well approximated by a model where the spatial curvature parameter k is zero; however, this does not necessarily imply that the universe is infinite: it might merely be that the universe is much larger than the part we see.
The first Friedmann equation is often seen in terms of the present values of the density parameters, that is
formula_10
Here Ω0,R is the radiation density today (when "a" = 1), Ω0,M is the matter (dark plus baryonic) density today, Ω0,"k" = 1 − Ω0 is the "spatial curvature density" today, and Ω0,Λ is the cosmological constant or vacuum density today.
Useful solutions.
The Friedmann equations can be solved exactly in the presence of a perfect fluid with equation of state
formula_11
where p is the pressure, ρ is the mass density of the fluid in the comoving frame and w is some constant.
In spatially flat case ("k"
0), the solution for the scale factor is
formula_12
where "a"0 is some integration constant to be fixed by the choice of initial conditions. This family of solutions labelled by w is extremely important for cosmology. For example, "w"
0 describes a matter-dominated universe, where the pressure is negligible with respect to the mass density. From the generic solution one easily sees that in a matter-dominated universe the scale factor goes as
formula_13 matter-dominated
Another important example is the case of a radiation-dominated universe, namely when "w" = 1/3. This leads to
formula_14 radiation-dominated
Note that this solution is not valid for domination of the cosmological constant, which corresponds to "w" = −1. In this case the energy density is constant and the scale factor grows exponentially.
Solutions for other values of k can be found at
Mixtures.
If the matter is a mixture of two or more non-interacting fluids each with such an equation of state, then
formula_15
holds separately for each such fluid f. In each case,
formula_16
from which we get
formula_17
For example, one can form a linear combination of such terms
formula_18
where A is the density of "dust" (ordinary matter, "w"
0) when "a"
1; B is the density of radiation ("w"
) when "a"
1; and C is the density of "dark energy" ("w"
−1). One then substitutes this into
formula_19
and solves for a as a function of time.
Detailed derivation.
To make the solutions more explicit, we can derive the full relationships from the first Friedmann equation:
formula_20
with
formula_21
Rearranging and changing to use variables "a"′ and "t"′ for the integration
formula_22
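The integral above generally has to be evaluated numerically. The sketch below (with illustrative density parameters and a simple midpoint rule rather than a library integrator) computes t for a = 1, i.e. an estimate of the age of the universe under those assumed parameters.

```python
# Sketch: numerically performing the integral t*H0 = integral of da'/sqrt(...) up to a = 1.
import math

def age(a_end=1.0, H0=67.4, Om_R=9e-5, Om_M=0.31, Om_L=0.69, steps=100000):
    Om_k = 1.0 - (Om_R + Om_M + Om_L)
    H0_si = H0 * 1e3 / 3.0857e22                     # convert km/s/Mpc to 1/s
    da = a_end / steps
    integral = 0.0
    for i in range(steps):
        a = (i + 0.5) * da                           # midpoint of each sub-interval
        E = math.sqrt(Om_R * a**-2 + Om_M * a**-1 + Om_k + Om_L * a**2)
        integral += da / E
    return integral / H0_si                          # seconds

print(age() / (3.156e7 * 1e9))                       # roughly 13.8 billion years
```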
Solutions for the dependence of the scale factor with respect to time for universes dominated by each component can be found. In each case we have also assumed that "Ω"0,"k" ≈ 0, which is the same as assuming that the density parameter of the dominating source of energy is approximately 1.
For matter-dominated universes, where "Ω"0,M ≫ "Ω"0,R and "Ω"0,"Λ", as well as "Ω"0,M ≈ 1:
formula_23
which recovers the aforementioned "a" ∝ "t"2/3
For radiation-dominated universes, where Ω0,R ≫ Ω0,M and Ω0,Λ, as well as Ω0,R ≈ 1:
formula_24
For Λ-dominated universes, where "Ω"0,"Λ" ≫ "Ω"0,R and "Ω"0,M, as well as "Ω"0,"Λ" ≈ 1, and where we now will change our bounds of integration from "ti" to t and likewise "ai" to a:
formula_25
The Λ-dominated universe solution is of particular interest because the second derivative with respect to time is positive, non-zero; in other words implying an accelerating expansion of the universe, making "ρΛ" a candidate for dark energy:
formula_26
where, by construction, "ai" > 0, our assumptions were "Ω"0,"Λ" ≈ 1, and "H"0 has been measured to be positive, forcing the acceleration to be greater than zero.
Rescaled Friedmann equation.
Set
formula_27
where "a"0 and "H"0 are separately the scale factor and the Hubble parameter today.
Then we can have
formula_28
where
formula_29
For any form of the effective potential "U"eff("ã"), there is an equation of state "p" = "p"("ρ") that will produce it.
In popular culture.
Several students at Tsinghua University (CCP leader Xi Jinping's alma mater) participating in the 2022 COVID-19 protests in China carried placards with Friedmann equations scrawled on them, interpreted by some as a play on the words "Free man". Others have interpreted the use of the equations as a call to “open up” China and stop its Zero Covid policy, as the Friedmann equations relate to the expansion, or “opening” of the universe.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " -\\mathrm{d}s^2 = a(t)^2 \\, {\\mathrm{d}s_3}^2 - c^2 \\, \\mathrm{d}t^2 "
},
{
"math_id": 1,
"text": " \\frac{\\dot{a}^2 + kc^2}{a^2} = \\frac{8 \\pi G \\rho + \\Lambda c^2}{3} ,"
},
{
"math_id": 2,
"text": "\\frac{\\ddot{a}}{a} = -\\frac{4 \\pi G}{3}\\left(\\rho+\\frac{3p}{c^2}\\right) + \\frac{\\Lambda c^2}{3}"
},
{
"math_id": 3,
"text": "R = \\frac{6}{c^2 a^2}(\\ddot{a} a + \\dot{a}^2 + kc^2)"
},
{
"math_id": 4,
"text": "\\dot{\\rho} = -3 H \\left(\\rho + \\frac{p}{c^2}\\right),"
},
{
"math_id": 5,
"text": " T^{\\alpha\\beta}{}_{;\\beta}= 0."
},
{
"math_id": 6,
"text": "\\begin{align}\n\\rho &\\to \\rho - \\frac{\\Lambda c^2}{8 \\pi G} &\np &\\to p + \\frac{\\Lambda c^4}{8 \\pi G}\n\\end{align}"
},
{
"math_id": 7,
"text": "\\begin{align}\nH^2 = \\left(\\frac{\\dot{a}}{a}\\right)^2 &= \\frac{8 \\pi G}{3}\\rho - \\frac{kc^2}{a^2} \\\\\n\\dot{H} + H^2 = \\frac{\\ddot{a}}{a} &= - \\frac{4\\pi G}{3}\\left(\\rho + \\frac{3p}{c^2}\\right).\n\\end{align}"
},
{
"math_id": 8,
"text": "\\rho_\\mathrm{c} = \\frac{3 H^2}{8 \\pi G} = 1.8788 \\times 10^{-26} h^2 {\\rm kg}\\,{\\rm m}^{-3} = 2.7754\\times 10^{11} h^2 M_\\odot\\,{\\rm Mpc}^{-3} ,"
},
{
"math_id": 9,
"text": "\\Omega \\equiv \\frac{\\rho}{\\rho_c} = \\frac{8 \\pi G\\rho}{3 H^2}."
},
{
"math_id": 10,
"text": "\\frac{H^2}{H_0^2} = \\Omega_{0,\\mathrm R} a^{-4} + \\Omega_{0,\\mathrm M} a^{-3} + \\Omega_{0,k} a^{-2} + \\Omega_{0,\\Lambda}."
},
{
"math_id": 11,
"text": "p=w\\rho c^2,"
},
{
"math_id": 12,
"text": " a(t)=a_0\\,t^{\\frac{2}{3(w+1)}} "
},
{
"math_id": 13,
"text": "a(t) \\propto t^{2/3}"
},
{
"math_id": 14,
"text": "a(t) \\propto t^{1/2}"
},
{
"math_id": 15,
"text": "\\dot{\\rho}_{f} = -3 H \\left( \\rho_{f} + \\frac{p_{f}}{c^2} \\right) \\,"
},
{
"math_id": 16,
"text": "\\dot{\\rho}_{f} = -3 H \\left( \\rho_{f} + w_{f} \\rho_{f} \\right) \\,"
},
{
"math_id": 17,
"text": "{\\rho}_{f} \\propto a^{-3 \\left(1 + w_{f}\\right)} \\,."
},
{
"math_id": 18,
"text": "\\rho = A a^{-3} + B a^{-4} + C a^{0} \\,"
},
{
"math_id": 19,
"text": "\\left(\\frac{\\dot{a}}{a}\\right)^2 = \\frac{8 \\pi G}{3} \\rho - \\frac{kc^2}{a^2} \\,"
},
{
"math_id": 20,
"text": "\\frac{H^2}{H_0^2} = \\Omega_{0,\\mathrm R} a^{-4} + \\Omega_{0,\\mathrm M} a^{-3} + \\Omega_{0,k} a^{-2} + \\Omega_{0,\\Lambda}"
},
{
"math_id": 21,
"text": "\\begin{align}\nH &= \\frac{\\dot{a}}{a} \\\\[6px]\nH^2 &= H_0^2 \\left( \\Omega_{0,\\mathrm R} a^{-4} + \\Omega_{0,\\mathrm M} a^{-3} + \\Omega_{0,k} a^{-2} + \\Omega_{0,\\Lambda} \\right) \\\\[6pt]\nH &= H_0 \\sqrt{ \\Omega_{0,\\mathrm R} a^{-4} + \\Omega_{0,\\mathrm M} a^{-3} + \\Omega_{0,k} a^{-2} + \\Omega_{0,\\Lambda}} \\\\[6pt]\n\\frac{\\dot{a}}{a} &= H_0 \\sqrt{ \\Omega_{0,\\mathrm R} a^{-4} + \\Omega_{0,\\mathrm M} a^{-3} + \\Omega_{0,k} a^{-2} + \\Omega_{0,\\Lambda}} \\\\[6pt]\n\\frac{\\mathrm{d}a }{\\mathrm{d} t} &= H_0 \\sqrt{\\Omega_{0,\\mathrm R} a^{-2} + \\Omega_{0,\\mathrm M} a^{-1} + \\Omega_{0,k} + \\Omega_{0,\\Lambda} a^2} \\\\[6pt]\n\\mathrm{d}a &= \\mathrm{d} t H_0 \\sqrt{\\Omega_{0,\\mathrm R} a^{-2} + \\Omega_{0,\\mathrm M} a^{-1} + \\Omega_{0,k} + \\Omega_{0,\\Lambda} a^2} \\\\[6pt]\n\\end{align}"
},
{
"math_id": 22,
"text": "t H_0 = \\int_{0}^{a} \\frac{\\mathrm{d}a'}{\\sqrt{\\Omega_{0,\\mathrm R} a'^{-2} + \\Omega_{0,\\mathrm M} a'^{-1} + \\Omega_{0,k} + \\Omega_{0,\\Lambda} a'^2}}"
},
{
"math_id": 23,
"text": "\\begin{align}\nt H_0 &= \\int_{0}^{a} \\frac{\\mathrm{d}a'}{\\sqrt{\\Omega_{0,\\mathrm M} a'^{-1}}} \\\\[6px]\nt H_0 \\sqrt{\\Omega_{0,\\mathrm M}} &= \\left.\\left( \\tfrac23 {a'}^{3/2} \\right) \\,\\right|^a_0 \\\\[6px]\n\\left( \\tfrac32 t H_0 \\sqrt{\\Omega_{0,\\mathrm M}}\\right)^{2/3} &= a(t)\n\\end{align}"
},
{
"math_id": 24,
"text": "\\begin{align}\nt H_0 &= \\int_{0}^{a} \\frac{\\mathrm{d}a'}{\\sqrt{\\Omega_{0,\\mathrm R} a'^{-2}}} \\\\[6px]\nt H_0 \\sqrt{\\Omega_{0,\\mathrm R}} &= \\left.\\frac{a'^2}{2} \\,\\right|^a_0 \\\\[6px]\n\\left(2 t H_0 \\sqrt{\\Omega_{0,\\mathrm R}}\\right)^{1/2} &= a(t)\n\\end{align}"
},
{
"math_id": 25,
"text": "\\begin{align}\n\\left(t-t_i\\right) H_0 &= \\int_{a_i}^{a} \\frac{\\mathrm{d}a'}{\\sqrt{(\\Omega_{0,\\Lambda} a'^2)}} \\\\[6px]\n\\left(t - t_i\\right) H_0 \\sqrt{\\Omega_{0,\\Lambda}} &= \\bigl. \\ln|a'| \\,\\bigr|^a_{a_i} \\\\[6px]\na_i \\exp\\left( (t - t_i) H_0 \\sqrt{\\Omega_{0,\\Lambda}}\\right) &= a(t)\n\\end{align}"
},
{
"math_id": 26,
"text": "\\begin{align}\na(t) &= a_i \\exp\\left( (t - t_i) H_0 \\textstyle\\sqrt{\\Omega_{0,\\Lambda}}\\right) \\\\[6px]\n\\frac{\\mathrm{d}^2 a(t)}{\\mathrm{d}t^2} &= a_i {H_0}^2 \\, \\Omega_{0,\\Lambda} \\exp\\left( (t - t_i) H_0 \\textstyle\\sqrt{\\Omega_{0,\\Lambda}}\\right)\n\\end{align}"
},
{
"math_id": 27,
"text": "\\tilde{a} = \\frac{a}{a_0}, \\quad\n\\rho_c = \\frac{3H_0^2}{8\\pi G},\\quad\n\\Omega = \\frac{\\rho}{\\rho_\\mathrm{c}},\\quad\nt = \\frac{\\tilde{t}}{H_0},\\quad\n\\Omega_\\mathrm{k} = -\\frac{kc^2}{H_0^2 a_0^2},"
},
{
"math_id": 28,
"text": "\\frac12\\left( \\frac{d\\tilde{a}}{d\\tilde{t}}\\right)^2 + U_\\text{eff}(\\tilde{a})=\\frac12\\Omega_\\mathrm{k}"
},
{
"math_id": 29,
"text": "U_\\text{eff}(\\tilde{a})=\\frac{-\\Omega\\tilde{a}^2}{2}."
}
] | https://en.wikipedia.org/wiki?curid=1147994 |
1148092 | Primordial fluctuations | Primordial fluctuations are density variations in the early universe which are considered the seeds of all structure in the universe. Currently, the most widely accepted explanation for their origin is in the context of cosmic inflation. According to the inflationary paradigm, the exponential growth of the scale factor during inflation caused quantum fluctuations of the inflaton field to be stretched to macroscopic scales, and, upon leaving the horizon, to "freeze in".
At the later stages of radiation- and matter-domination, these fluctuations re-entered the horizon, and thus set the initial conditions for structure formation.
The statistical properties of the primordial fluctuations can be inferred from observations of anisotropies in the cosmic microwave background and from measurements of the distribution of matter, e.g., galaxy redshift surveys. Since the fluctuations are believed to arise from inflation, such measurements can also set constraints on parameters within inflationary theory.
Formalism.
Primordial fluctuations are typically quantified by a power spectrum which gives the power of the variations as a function of spatial scale. Within this formalism, one usually considers the fractional energy density of the fluctuations, given by:
formula_0
where formula_1 is the energy density, formula_2 its average and formula_3 the wavenumber of the fluctuations. The power spectrum formula_4 can then be defined via the ensemble average of the Fourier components:
formula_5
There are both scalar and tensor modes of fluctuations.
Scalar modes.
Scalar modes have the power spectrum defined as the mean squared density fluctuation for a specific wavenumber formula_6, i.e., the average fluctuation amplitude at a given scale:
formula_7
Many inflationary models predict that the scalar component of the fluctuations obeys a power law in which
formula_8
For scalar fluctuations, formula_9 is referred to as the scalar spectral index, with formula_10 corresponding to scale invariant fluctuations (not scale invariant in formula_11 but in the comoving curvature perturbation formula_12 for which the power formula_13 is indeed invariant with formula_6 when formula_14).
The scalar "spectral index" describes how the density fluctuations vary with scale. As the size of these fluctuations depends upon the inflaton's motion when these quantum fluctuations are becoming super-horizon sized, different inflationary potentials predict different spectral indices. These depend upon the slow roll parameters, in particular the gradient and curvature of the potential. In models where the curvature is large and positive formula_15. On the other hand, models such as monomial potentials predict a red spectral index formula_16. Planck provides a value of formula_17.
Tensor modes.
The presence of primordial tensor fluctuations is predicted by many inflationary models. As with scalar fluctuations, tensor fluctuations are expected to follow a power law and are parameterized by the tensor index (the tensor version of the scalar index). The ratio of the tensor to scalar power spectra is given by
formula_18
where the 2 arises due to the two polarizations of the tensor modes. 2015 CMB data from the Planck satellite gives a constraint of formula_19.
Adiabatic/isocurvature fluctuations.
Adiabatic fluctuations are density variations in all forms of matter and energy which have equal fractional over/under densities in the number density. So for example, an adiabatic photon overdensity of a factor of two in the number density would also correspond to an electron overdensity of two. For isocurvature fluctuations, the number density variations for one component do not necessarily correspond to number density variations in other components. While it is usually assumed that the initial fluctuations are adiabatic, the possibility of isocurvature fluctuations can be considered given current cosmological data. Current cosmic microwave background data favor adiabatic fluctuations and constrain uncorrelated isocurvature cold dark matter modes to be small.
References.
| [
{
"math_id": 0,
"text": "\\delta(\\vec{x}) \\ \\stackrel{\\mathrm{def}}{=}\\ \\frac{\\rho(\\vec{x})}{\\bar{\\rho}} - 1 =\n \\int \\text{d}k \\; \\delta_k \\, e^{i\\vec{k} \\cdot \\vec{x}},"
},
{
"math_id": 1,
"text": " \\rho "
},
{
"math_id": 2,
"text": "\\bar{\\rho}"
},
{
"math_id": 3,
"text": " k "
},
{
"math_id": 4,
"text": " \\mathcal{P}(k)"
},
{
"math_id": 5,
"text": " \\langle \\delta_k \\delta_{k'} \\rangle = \\frac{2 \\pi^2}{k^3} \\, \\delta(k-k') \\, \\mathcal{P}(k)."
},
{
"math_id": 6,
"text": "k"
},
{
"math_id": 7,
"text": "\\mathcal{P}_\\mathrm{s}(k) = \\langle\\delta_k\\rangle^2."
},
{
"math_id": 8,
"text": "\\mathcal{P}_\\mathrm{s}(k) \\propto k^{n_\\mathrm{s}}."
},
{
"math_id": 9,
"text": "n_\\mathrm{s}"
},
{
"math_id": 10,
"text": "n_\\mathrm{s} = 1"
},
{
"math_id": 11,
"text": "\\delta"
},
{
"math_id": 12,
"text": "\\zeta"
},
{
"math_id": 13,
"text": "\\mathcal{P}_{\\zeta}(k) \\propto k^{n_s-1}"
},
{
"math_id": 14,
"text": "n_s=1"
},
{
"math_id": 15,
"text": "n_s > 1"
},
{
"math_id": 16,
"text": "n_s < 1"
},
{
"math_id": 17,
"text": "n_s = 0.968 \\pm 0.006"
},
{
"math_id": 18,
"text": "r=\\frac{2|\\delta_h|^2}{|\\delta_R|^2},"
},
{
"math_id": 19,
"text": "r<0.11"
}
] | https://en.wikipedia.org/wiki?curid=1148092 |
1148129 | Net (polyhedron) | Edge-joined polygons which fold into a polyhedron
In geometry, a net of a polyhedron is an arrangement of non-overlapping edge-joined polygons in the plane which can be folded (along edges) to become the faces of the polyhedron. Polyhedral nets are a useful aid to the study of polyhedra and solid geometry in general, as they allow for physical models of polyhedra to be constructed from material such as thin cardboard.
An early instance of polyhedral nets appears in the works of Albrecht Dürer, whose 1525 book "A Course in the Art of Measurement with Compass and Ruler" ("Unterweysung der Messung mit dem Zyrkel und Rychtscheyd ") included nets for the Platonic solids and several of the Archimedean solids. These constructions were first called nets in 1543 by Augustin Hirschvogel.
Existence and uniqueness.
Many different nets can exist for a given polyhedron, depending on the choices of which edges are joined and which are separated. The edges that are cut from a convex polyhedron to form a net must form a spanning tree of the polyhedron, but cutting some spanning trees may cause the polyhedron to self-overlap when unfolded, rather than forming a net. Conversely, a given net may fold into more than one different convex polyhedron, depending on the angles at which its edges are folded and the choice of which edges to glue together. If a net is given together with a pattern for gluing its edges together, such that each vertex of the resulting shape has positive angular defect and such that the sum of these defects is exactly 4π, then there necessarily exists exactly one polyhedron that can be folded from it; this is Alexandrov's uniqueness theorem. However, the polyhedron formed in this way may have different faces than the ones specified as part of the net: some of the net polygons may have folds across them, and some of the edges between net polygons may remain unfolded. Additionally, the same net may have multiple valid gluing patterns, leading to different folded polyhedra.
<templatestyles src="Unsolved/styles.css" />
Unsolved problem in mathematics:
Does every convex polyhedron have a simple edge unfolding?
In 1975, G. C. Shephard asked whether every convex polyhedron has at least one net, or simple edge-unfolding. This question, which is also known as Dürer's conjecture, or Dürer's unfolding problem, remains unanswered. There exist non-convex polyhedra that do not have nets, and it is possible to subdivide the faces of every convex polyhedron (for instance along a cut locus) so that the set of subdivided faces has a net. In 2014 Mohammad Ghomi showed that every convex polyhedron admits a net after an affine transformation. Furthermore, in 2019 Barvinok and Ghomi showed that a generalization of Dürer's conjecture fails for "pseudo edges", i.e., a network of geodesics which connect vertices of the polyhedron and form a graph with convex faces.
A related open question asks whether every net of a convex polyhedron has a blooming, a continuous non-self-intersecting motion from its flat to its folded state that keeps each face flat throughout the motion.
Shortest path.
The shortest path over the surface between two points on the surface of a polyhedron corresponds to a straight line on a suitable net for the subset of faces touched by the path. The net has to be such that the straight line is fully within it, and one may have to consider several nets to see which gives the shortest path. For example, in the case of a cube, if the points are on adjacent faces one candidate for the shortest path is the path crossing the common edge; the shortest path of this kind is found using a net where the two faces are also adjacent. Other candidates for the shortest path are through the surface of a third face adjacent to both (of which there are two), and corresponding nets can be used to find the shortest path in each category.
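A minimal illustration of this idea is the shortest surface path between opposite corners of a rectangular box: each candidate path crosses two faces, and unfolding those two faces into a plane turns the path into a straight line. The sketch below (not from the article) compares the three candidate unfoldings and keeps the shortest.

```python
# Sketch: shortest surface path between opposite corners of an l x w x h box,
# found by unfolding pairs of faces into a plane and taking straight-line distances.
import math

def shortest_corner_path(l, w, h):
    candidates = [
        math.hypot(l + w, h),   # unfold across the edge shared by the l- and w-faces
        math.hypot(l + h, w),
        math.hypot(w + h, l),
    ]
    return min(candidates)

print(shortest_corner_path(1, 1, 1))   # about 2.236 for a unit cube, shorter than the edge route of length 3
```

The spider and the fly puzzle mentioned next works the same way, except that more unfoldings must be compared because the two points are not at corners.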
The spider and the fly problem is a recreational mathematics puzzle which involves finding the shortest path between two points on a cuboid.
Higher-dimensional polytope nets.
A net of a 4-polytope, a four-dimensional polytope, is composed of polyhedral cells that are connected by their faces and all occupy the same three-dimensional space, just as the polygon faces of a net of a polyhedron are connected by their edges and all occupy the same plane. The net of the tesseract, the four-dimensional hypercube, is used prominently in a painting by Salvador Dalí, "Crucifixion (Corpus Hypercubus)" (1954). The same tesseract net is central to the plot of the short story "—And He Built a Crooked House—" by Robert A. Heinlein.
The number of combinatorially distinct nets of formula_0-dimensional hypercubes can be found by representing these nets as a tree on formula_1 nodes describing the pattern by which pairs of faces of the hypercube are glued together to form a net, together with a perfect matching on the complement graph of the tree describing the pairs of faces that are opposite each other on the folded hypercube. Using this representation, the number of different unfoldings for hypercubes of dimensions 2, 3, 4, ... have been counted as
1, 11, 261, 9694, 502110, 33064966, 2642657228, ... (sequence in the OEIS)
References.
| [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "2n"
}
] | https://en.wikipedia.org/wiki?curid=1148129 |
1148272 | Exponential backoff | Rate-seeking algorithm
Exponential backoff is an algorithm that uses feedback to multiplicatively decrease the rate of some process, in order to gradually find an acceptable rate. These algorithms find usage in a wide range of systems and processes, with radio networks and computer networks being particularly notable.
Exponential backoff algorithm.
An exponential backoff algorithm is a form of closed-loop control system that reduces the rate of a controlled process in response to adverse events. For example, if a smartphone app fails to connect to its server, it might try again 1 second later, then if it fails again, 2 seconds later, then 4, etc. Each time the pause is multiplied by a fixed amount (in this case 2). In this case, the adverse event is failing to connect to the server. Other examples of adverse events include collisions of network traffic, an error response from a service, or an explicit request to reduce the rate (i.e. "back off").
The rate reduction can be modelled as an exponential function:
formula_0
or
formula_1
Here, "t" is the time delay applied between actions, "b" is the multiplicative factor or "base", "c" is the number of adverse events observed, and "f" is the frequency (or rate) of the process (i.e. number of actions per unit of time). The value of "c" is incremented each time an adverse event is observed, leading to an exponential rise in delay and, therefore, an inversely proportionate rate. An exponential backoff algorithm where "b" = 2 is referred to as a "binary" exponential backoff algorithm.
When the rate has been reduced in response to an adverse event, it usually does not remain at that reduced level forever. If no adverse events are observed for some period of time, often referred to as the "recovery time" or "cooling-off period", the rate may be increased again. The time period that must elapse before attempting to increase the rate again may, itself, be determined by an exponential backoff algorithm. Typically, recovery of the rate occurs more slowly than reduction of the rate due to backoff, and often requires careful tuning to avoid oscillation of the rate. The exact recovery behaviour is implementation-specific and may be informed by any number of environmental factors.
The mechanism by which rate reduction is practically achieved in a system may be more complex than a simple time delay. In some cases the value "t" may refer to an upper bound to the time delay, rather than a specific time delay value. The name "exponential backoff" refers to the exponential growth characteristic of the backoff, rather than an exact numeric relationship between adverse event counts and delay times.
Rate limiting.
Exponential backoff is commonly utilised as part of rate limiting mechanisms in computer systems such as web services, to help enforce fair distribution of access to resources and prevent network congestion. Each time a service informs a client that it is sending requests too frequently, the client reduces its rate by some predetermined factor, until the client's request rate reaches an acceptable equilibrium. The service may enforce rate limiting by refusing to respond to requests when the client is sending them too frequently, so that misbehaving clients are not allowed to exceed their allotted resources.
A benefit of utilising an exponential backoff algorithm, over of a fixed rate limit, is that rate limits can be achieved dynamically without providing any prior information to the client. In the event that resources are unexpectedly constrained, e.g. due to heavy load or a service disruption, backoff requests and error responses from the service can automatically decrease the request rate from clients. This can help maintain some level of availability rather than overloading the service. In addition, quality of service can be prioritised to certain clients based on their individual importance, e.g. by reducing the backoff for emergency calls on a telephone network during periods of high load.
In a simple version of the algorithm, messages are delayed by predetermined (non-random) time. For example, in SIP protocol over unreliable transport (such as UDP) the client retransmits requests at an interval that starts at T1 seconds (usually 0.5 s, which is the estimate of the round-trip time) and doubles after every retransmission until it reaches T2 seconds (which defaults to 4 s). This results in retransmission intervals of 0.5 s, 1 s, 2 s, 4 s, 4 s, 4 s, etc.
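A sketch of this capped, non-random doubling schedule is shown below; T1 and T2 default to the commonly used values of 0.5 s and 4 s and should be treated as illustrative parameters rather than a normative implementation.

```python
# Sketch: deterministic doubling of the retransmission interval, capped at T2.
def retransmission_intervals(n, t1=0.5, t2=4.0):
    intervals, t = [], t1
    for _ in range(n):
        intervals.append(t)
        t = min(t * 2, t2)     # double, but never exceed the cap T2
    return intervals

print(retransmission_intervals(7))   # [0.5, 1.0, 2.0, 4.0, 4.0, 4.0, 4.0]
```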
Collision avoidance.
Exponential backoff algorithms can be used to avoid network collisions. In a point-to-multipoint or multiplexed network, multiple senders communicate over a single shared channel. If two senders attempt to transmit a message at the same time, or "talk over" each other, a collision occurs and the messages are damaged or lost. Each sender can then back off before attempting to retransmit the same message again.
A deterministic exponential backoff algorithm is unsuitable for this use case, since each sender would back off for the same time period, leading them to retransmit simultaneously and cause another collision. Instead, for purposes of collision avoidance, the time between retransmissions is randomized and the exponential backoff algorithm sets the "range" of delay values that are possible. The time delay is usually measured in slots, which are fixed-length periods (or "slices") of time on the network. In a binary exponential backoff algorithm (i.e. one where "b" = 2), after c collisions, each retransmission is delayed by a random number of slot times between 0 and 2"c" − 1. After the first collision, each sender will wait 0 or 1 slot times. After the second collision, the senders will wait anywhere from 0 to 3 slot times (inclusive). After the third collision, the senders will wait anywhere from 0 to 7 slot times (inclusive), and so forth. As the number of retransmission attempts increases, the number of possibilities for delay increases exponentially. This decreases the probability of a collision, but increases the average latency.
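A sketch of the randomized slot selection described above is given below; real MAC implementations also cap the exponent and the number of attempts, which is covered in the truncated variant later in this article.

```python
# Sketch: randomized binary exponential backoff for collision avoidance.  After c
# collisions, wait a random whole number of slot times chosen uniformly from 0 .. 2^c - 1.
import random

def backoff_slots(c):
    return random.randint(0, 2**c - 1)

slot_time = 51.2e-6                      # seconds, e.g. the classic Ethernet slot time
for c in (1, 2, 3):
    k = backoff_slots(c)
    print(c, k, k * slot_time)           # delay in seconds before retransmitting
```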
Exponential backoff is utilised during retransmission of frames in carrier-sense multiple access with collision avoidance (CSMA/CA) and carrier-sense multiple access with collision detection (CSMA/CD) networks, where this algorithm is part of the channel access method used to send data on these networks. In Ethernet networks, the algorithm is commonly used to schedule retransmissions after collisions. The retransmission is delayed by an amount of time derived from the slot time (for example, the time it takes to send 512 bits; i.e., 512 bit-times) and the number of attempts to retransmit.
Example.
This example is from the Ethernet protocol, where a sending host is able to know when a collision has occurred (that is, another host has tried to transmit) while it is sending a frame. If both hosts attempted to re-transmit as soon as a collision occurred, there would be yet another collision — and the pattern would continue forever. The hosts must choose a random value within an acceptable range to ensure that this situation doesn't happen. An exponential backoff algorithm is therefore used. The value 51.2 μs is used as an example here because it is the slot time for a 10 Mbit/s Ethernet line. However, 51.2 μs could be replaced by any positive value, in practice.
History and theory.
In a seminal paper published in AFIPS 1970, Norman Abramson presented the idea of multiple “users,” on different islands, sharing a single radio channel (i.e., a single frequency) to access the main computer at the University of Hawaii without any time synchronization. Packet collisions at the receiver of the main computer are treated by senders after a timeout as detected errors. Each sender not receiving a positive acknowledgment from the main computer would retransmit its “lost” packet. Abramson assumed that the sequence of packets transmitted into the shared channel is a Poisson process at rate G, which is the sum of the rate S of new packet arrivals to senders and the rate of retransmitted packets into the channel. Assuming steady state, he showed that the channel throughput rate is formula_2 with a maximum value of 1/(2e) = 0.184 in theory.
Larry Roberts considered a time slotted ALOHA channel with each time slot long enough for a packet transmission time. (A satellite channel using the TDMA protocol is time slotted.) Using the same Poisson process and steady state assumptions as Abramson, Larry Roberts showed that the maximum throughput rate is 1/e = 0.368 in theory. Roberts was the program manager of the ARPANET research project. Inspired by the slotted ALOHA idea, Roberts initiated a new ARPANET Satellite System (ASS) project to include satellite links in the ARPANET.
Simulation results by Abramson, his colleagues, and others showed that an ALOHA channel, slotted or not, is unstable and would sometimes go into congestion collapse. How much time until congestion collapse depended on the arrival rate of new packets as well other unknown factors. In 1971, Larry Roberts asked Professor Leonard Kleinrock and his Ph.D. student, Simon Lam, at UCLA to join the Satellite System project of ARPANET. Simon Lam would work on the stability, performance evaluation, and adaptive control of slotted ALOHA for his Ph.D. dissertation research. The first paper he co-authored with Kleinrock was ARPANET Satellite System (ASS) Note 12 disseminated to the ASS group in August 1972. In this paper, a slot chosen randomly over an interval of K slots was used for retransmission. A new result from the model is that increasing K increases channel throughput which converges to 1/e as K increases to infinity. This model retained the assumptions of Poisson arrivals and steady state, and was not intended for understanding statistical behavior and congestion collapse.
Stability and adaptive backoff.
To understand stability, Lam created a discrete-time Markov chain model for analyzing the statistical behavior of slotted ALOHA in chapter 5 of his dissertation. The model has three parameters: N, s, and p. N is the total number of users. At any time, each user may be idle or blocked. Each user has at most one packet to transmit in the next time slot. An idle user generates a new packet with probability s and transmits it in the next time slot immediately. A blocked user transmits its backlogged packet with probability p, where 1/p = (K+1)/2 to keep the average retransmission interval the same. The throughput-delay results of the two retransmission methods were compared by extensive simulations and found to be essentially the same.
Lam’s model provides mathematically rigorous answers to the stability questions of slotted ALOHA, as well as an efficient algorithm for computing the throughput-delay performance for any stable system. There are 3 key results, shown below, from Lam’s Markov chain model in Chapter 5 of his dissertation (also published jointly with Professor Len Kleinrock, in IEEE Transactions on Communications.)
Corollary.
For a finite (N×s), an unstable channel for the current K value can be made stable by increasing K to a sufficiently large value, to be referred to as its K(N,s).
Heuristic RCP for adaptive backoff.
Lam used Markov decision theory and developed optimal control policies for slotted ALOHA but these policies require all blocked users to know the current state (number of blocked users) of the Markov chain. In 1973, Lam decided that instead of using a complex protocol for users to estimate the system state, he would create a simple algorithm for each user to use its own local information, i.e., the number of collisions its backlogged packet has encountered. Applying the above Corollary, Lam invented the following class of adaptive backoff algorithms (named Heuristic RCP).
A Heuristic RCP algorithm consists of the following steps: (1) Let m denote the number of previous collisions incurred by a packet at a user as the feedback information in its control loop. For a new packet, K(0) is initialized to 1. (2) The packet’s retransmission interval K(m) increases as m increases (until the channel becomes stable as implied by the above Corollary). For implementation, with K(0)=1, as m increases, K(m) can be increased by multiplication (or by addition).
Observation.
Binary Exponential Backoff (BEB) used in Ethernet several years later is a special case of Heuristic RCP with formula_3.
BEB is very easy to implement. It is however not optimal for many applications because BEB uses 2 as the only multiplier which provides no flexibility for optimization. In particular, for a system with a large number of users, BEB increases K(m) too slowly. On the other hand, for a system with a small number of users, a fairly small K is sufficient for the system to be stable, and backoff would not be necessary.
To illustrate an example of a multiplicative RCP that uses several multipliers, see the bottom row in Table 6.3 on page 214 in Chapter 6 of Lam’s dissertation, or bottom row in Table III on page 902 in the Lam-Kleinrock paper. In this example:
For this example, K=200 is sufficient for a stable slotted ALOHA system with N equal to about 400, which follows from result 3 above Corollary. There is no need to increase K any further.
Truncated exponential backoff.
The 'truncated' variant of the algorithm introduces a limit on "c". This simply means that after a certain number of increases, the exponentiation stops. Without a limit on "c", the delay between transmissions may become undesirably long if a sender repeatedly observes adverse events, e.g. due to a degradation in network service. In a randomized system this may occur by chance, leading to unpredictable latency; longer delays due to unbounded increases in "c" are exponentially less probable, but they are effectively inevitable on a busy network due to the law of truly large numbers. Limiting "c" helps to reduce the possibility of unexpectedly long transmission latencies and improve recovery times after a transient outage.
For example, if the ceiling is set at "i" = 10 in a truncated binary exponential backoff algorithm (as it is in the IEEE 802.3 CSMA/CD standard), then the maximum delay is 1023 slot times, i.e. 2^10 − 1.
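A sketch of the truncated variant, with the ceiling as a parameter defaulting to i = 10:

```python
# Sketch: truncated binary exponential backoff.  The exponent stops growing once it
# reaches the ceiling i, so the delay is drawn from 0 .. 2^min(c, i) - 1 slot times.
import random

def truncated_backoff_slots(c, ceiling=10):
    exponent = min(c, ceiling)
    return random.randint(0, 2**exponent - 1)

print(max(truncated_backoff_slots(16) for _ in range(1000)))  # never exceeds 1023
```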
Selecting an appropriate backoff limit for a system involves striking a balance between collision probability and latency. By increasing the ceiling there is an exponential reduction in probability of collision on each transmission attempt. At the same time, increasing the limit also exponentially increases the range of possible latency times for a transmission, leading to less deterministic performance and an increase in the average latency. The optimal limit value for a system is specific to both the implementation and environment.
Expected backoff.
Given a uniform distribution of backoff times, the expected backoff time is the mean of the possibilities. After "c" collisions in a binary exponential backoff algorithm, the delay is randomly chosen from [0, 1, ..., "N"] slots, where "N" = 2"c" − 1, and the expected backoff time (in slots) is
formula_4
For example, to find the expected backoff time for the third ("c" = 3) collision, one could first calculate the maximum backoff time, "N":
formula_5
formula_6
formula_7
and then calculate the mean of the backoff time possibilities:
formula_8.
which is, for the example, "E"(3) = 3.5 slots.
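The closed form can be checked by brute force, as in the sketch below:

```python
# Sketch: after c collisions the delay is uniform on {0, 1, ..., 2^c - 1},
# so the mean is (2^c - 1)/2 slots.
def expected_backoff(c):
    N = 2**c - 1
    return sum(range(N + 1)) / (N + 1)   # brute-force mean over all possibilities

for c in (1, 2, 3):
    print(c, expected_backoff(c), (2**c - 1) / 2)   # E(3) = 3.5 slots, as above
```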
References.
Bibliography.
<templatestyles src="Citation/styles.css"/> | [
{
"math_id": 0,
"text": "t = b^c"
},
{
"math_id": 1,
"text": "f = \\frac{1}{b^c}"
},
{
"math_id": 2,
"text": "S =Ge^{-2G}"
},
{
"math_id": 3,
"text": "K(m) = 2^m"
},
{
"math_id": 4,
"text": "\\operatorname{E}(c) = \\frac{1}{N+1}\\sum_{i=0}^{N} i = \\frac{1}{N+1}\\frac{N(N+1)}{2} = \\frac{N}{2}."
},
{
"math_id": 5,
"text": "N = 2^c - 1"
},
{
"math_id": 6,
"text": "N = 2^3 - 1 = 8 - 1"
},
{
"math_id": 7,
"text": "N = 7 ,"
},
{
"math_id": 8,
"text": "\\operatorname{E}(c) = \\frac{1}{N+1}\\sum_{i=0}^{N} i = \\frac{1}{N+1}\\frac{N(N+1)}{2} = \\frac{N}{2} = \\frac{2^c-1}{2}"
}
] | https://en.wikipedia.org/wiki?curid=1148272 |
1148356 | Lami's theorem | In physics, Lami's theorem is an equation relating the magnitudes of three coplanar, concurrent and non-collinear vectors, which keeps an object in static equilibrium, with the angles directly opposite to the corresponding vectors. According to the theorem,
formula_0
where formula_1 are the magnitudes of the three coplanar, concurrent and non-collinear vectors, formula_2, which keep the object in static equilibrium, and formula_3 are the angles directly opposite to the vectors, thus satisfying formula_4.
Lami's theorem is applied in static analysis of mechanical and structural systems. The theorem is named after Bernard Lamy.
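A small numerical check of the theorem (not part of the original article) is given below: three forces whose magnitudes are proportional to the sines of their opposite angles, and whose mutual angles are α, β and γ, sum to zero.

```python
# Sketch: numerical check of Lami's theorem for alpha = 120°, beta = 150°, gamma = 90°.
import math

alpha, beta, gamma = math.radians(120), math.radians(150), math.radians(90)  # sum = 360°
magnitudes = [math.sin(alpha), math.sin(beta), math.sin(gamma)]              # v_A, v_B, v_C
directions = [0.0, gamma, gamma + alpha]   # angle between A and B is gamma, between B and C is alpha

fx = sum(m * math.cos(t) for m, t in zip(magnitudes, directions))
fy = sum(m * math.sin(t) for m, t in zip(magnitudes, directions))
print(round(fx, 12), round(fy, 12))        # both are 0: the system is in equilibrium
```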
Proof.
As the vectors must balance, formula_5; hence, by placing all the vectors tip to tail, the result is a triangle with sides formula_6 and angles formula_7 (formula_3 are the exterior angles).
By the law of sines then
formula_8
Then, by applying the fact that for any angle formula_9, formula_10 (supplementary angles have the same sine), the result is
formula_11 | [
{
"math_id": 0,
"text": "\\frac{v_A}{\\sin \\alpha}=\\frac{v_B}{\\sin \\beta}=\\frac{v_C}{\\sin \\gamma}"
},
{
"math_id": 1,
"text": "v_A, v_B, v_C"
},
{
"math_id": 2,
"text": "\\vec{v}_A, \\vec{v}_B, \\vec{v}_C"
},
{
"math_id": 3,
"text": "\\alpha,\\beta,\\gamma"
},
{
"math_id": 4,
"text": "\\alpha+\\beta+\\gamma=360^o"
},
{
"math_id": 5,
"text": "\\vec{v}_A+\\vec{v}_B+\\vec{v}_C=\\vec{0}"
},
{
"math_id": 6,
"text": "v_A,v_B,v_C"
},
{
"math_id": 7,
"text": "180^o -\\alpha, 180^o -\\beta, 180^o -\\gamma"
},
{
"math_id": 8,
"text": "\\frac{v_A}{\\sin (180^o -\\alpha)}=\\frac{v_B}{\\sin (180^o-\\beta)}=\\frac{v_C}{\\sin (180^o-\\gamma)}."
},
{
"math_id": 9,
"text": "\\theta"
},
{
"math_id": 10,
"text": "\\sin (180^o - \\theta) = \\sin \\theta"
},
{
"math_id": 11,
"text": "\\frac{v_A}{\\sin \\alpha}=\\frac{v_B}{\\sin \\beta}=\\frac{v_C}{\\sin \\gamma}."
}
] | https://en.wikipedia.org/wiki?curid=1148356 |
11484689 | Radical of a Lie algebra | In the mathematical field of Lie theory, the radical of a Lie algebra formula_0 is the largest solvable ideal of formula_1
The radical, denoted by formula_2, fits into the exact sequence
formula_3.
where formula_4 is semisimple. When the ground field has characteristic zero and formula_5 has finite dimension, Levi's theorem states that this exact sequence splits; i.e., there exists a (necessarily semisimple) subalgebra of formula_5 that is isomorphic to the semisimple quotient formula_6 via the restriction of the quotient map formula_7
A similar notion is a Borel subalgebra, which is a (not necessarily unique) maximal solvable subalgebra.
Definition.
Let formula_8 be a field and let formula_0 be a finite-dimensional Lie algebra over formula_8. There exists a unique maximal solvable ideal, called the "radical," for the following reason.
Firstly let formula_9 and formula_10 be two solvable ideals of formula_0. Then formula_11 is again an ideal of formula_0, and it is solvable because it is an extension of formula_12 by formula_9. Now consider the sum of all the solvable ideals of formula_0. It is nonempty since formula_13 is a solvable ideal, and it is a solvable ideal by the sum property just derived. Clearly it is the unique maximal solvable ideal.
References.
| [
{
"math_id": 0,
"text": "\\mathfrak{g}"
},
{
"math_id": 1,
"text": "\\mathfrak{g}."
},
{
"math_id": 2,
"text": "{\\rm rad}(\\mathfrak{g})"
},
{
"math_id": 3,
"text": "0 \\to {\\rm rad}(\\mathfrak{g}) \\to \\mathfrak g \\to \\mathfrak{g}/{\\rm rad}(\\mathfrak{g}) \\to 0"
},
{
"math_id": 4,
"text": "\\mathfrak{g}/{\\rm rad}(\\mathfrak{g})"
},
{
"math_id": 5,
"text": "\\mathfrak g"
},
{
"math_id": 6,
"text": " \\mathfrak{g}/{\\rm rad}(\\mathfrak{g})"
},
{
"math_id": 7,
"text": "\\mathfrak g \\to \\mathfrak{g}/{\\rm rad}(\\mathfrak{g})."
},
{
"math_id": 8,
"text": "k"
},
{
"math_id": 9,
"text": "\\mathfrak{a}"
},
{
"math_id": 10,
"text": "\\mathfrak{b}"
},
{
"math_id": 11,
"text": "\\mathfrak{a}+\\mathfrak{b}"
},
{
"math_id": 12,
"text": "(\\mathfrak{a}+\\mathfrak{b})/\\mathfrak{a}\\simeq\\mathfrak{b}/(\\mathfrak{a}\\cap\\mathfrak{b})"
},
{
"math_id": 13,
"text": "\\{0\\}"
},
{
"math_id": 14,
"text": "0"
}
] | https://en.wikipedia.org/wiki?curid=11484689 |
11485892 | Baumslag–Solitar group | In the mathematical field of group theory, the Baumslag–Solitar groups are examples of two-generator one-relator groups that play an important role in combinatorial group theory and geometric group theory as (counter)examples and test-cases. They are given by the group presentation
formula_0
For each integer "m" and "n", the Baumslag–Solitar group is denoted BS("m", "n"). The relation in the presentation is called the Baumslag–Solitar relation.
Some of the various BS("m", "n") are well-known groups. BS(1, 1) is the free abelian group on two generators, and BS(1, −1) is the fundamental group of the Klein bottle.
The groups were defined by Gilbert Baumslag and Donald Solitar in 1962 to provide examples of non-Hopfian groups. The groups contain residually finite groups, Hopfian groups that are not residually finite, and non-Hopfian groups.
Linear representation.
Define
formula_1
The matrix group "G" generated by "A" and "B" is a homomorphic image of BS("m", "n"), via the homomorphism induced by
formula_2
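As an illustration (not from the article), the sketch below verifies the Baumslag–Solitar relation b a^m b^−1 = a^n for these matrices with m = 2 and n = 3, using exact rational arithmetic so the comparison is not affected by floating point.

```python
# Sketch: checking B A^m B^{-1} == A^n for the 2x2 matrices A and B defined above, with m = 2, n = 3.
from fractions import Fraction

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def matpow(X, p):
    R = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]   # 2x2 identity
    for _ in range(p):
        R = matmul(R, X)
    return R

m, n = 2, 3
A = [[Fraction(1), Fraction(1)], [Fraction(0), Fraction(1)]]
B = [[Fraction(n, m), Fraction(0)], [Fraction(0), Fraction(1)]]
B_inv = [[Fraction(m, n), Fraction(0)], [Fraction(0), Fraction(1)]]

print(matmul(matmul(B, matpow(A, m)), B_inv) == matpow(A, n))   # True
```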
It is worth noting that this will not, in general, be an isomorphism. For instance if BS("m", "n") is not residually finite (i.e. if it is not the case that |"m"| = 1, |"n"| = 1, or |"m"| = |"n"|) it cannot be isomorphic to a finitely generated linear group, which is known to be residually finite by a theorem of Anatoly Maltsev. | [
{
"math_id": 0,
"text": "\\left \\langle a, b \\ : \\ b a^m b^{-1} = a^n \\right \\rangle."
},
{
"math_id": 1,
"text": "A= \\begin{pmatrix}1&1\\\\0&1\\end{pmatrix}, \\qquad B= \\begin{pmatrix}\\frac{n}{m}&0\\\\0&1\\end{pmatrix}."
},
{
"math_id": 2,
"text": "a\\mapsto A, \\qquad b\\mapsto B."
}
] | https://en.wikipedia.org/wiki?curid=11485892 |
11489016 | Molecular binding | Attractive interaction between two molecules
Molecular binding is an attractive interaction between two molecules that results in a stable association in which the molecules are in close proximity to each other. It is formed when atoms or molecules bind together by sharing of electrons. It often, but not always, involves some chemical bonding.
In some cases, the associations can be quite strong—for example, the protein streptavidin and the vitamin biotin have a dissociation constant (reflecting the ratio between bound and free biotin) on the order of 10−14—and so the reactions are effectively irreversible. The result of molecular binding is sometimes the formation of a molecular complex in which the attractive forces holding the components together are generally non-covalent, and thus are normally energetically weaker than covalent bonds.
Molecular binding occurs in biological complexes (e.g., between pairs or sets of proteins, or between a protein and a small molecule ligand it binds) and also in abiologic chemical systems, e.g. as in cases of "coordination polymers" and "coordination networks" such as metal-organic frameworks.
Types.
Molecular binding can be classified into the following types:
Bound molecules are sometimes called a "molecular complex"—the term generally refers to non-covalent associations. Non-covalent interactions can effectively become irreversible; for example, tight binding inhibitors of enzymes can have kinetics that closely resemble irreversible covalent inhibitors. Among the tightest known protein–protein complexes is that between the enzyme angiogenin and ribonuclease inhibitor; the dissociation constant for the human proteins is 5x10−16 mol/L. Another biological example is the binding protein streptavidin, which has extraordinarily high affinity for biotin (vitamin B7/H, dissociation constant, Kd ≈10−14 mol/L). In such cases, if the reaction conditions change (e.g., the protein moves into an environment where biotin concentrations are very low, or pH or ionic conditions are altered), the reverse reaction can be promoted. For example, the biotin-streptavidin interaction can be broken by incubating the complex in water at 70 °C, without damaging either molecule. An example of change in local concentration causing dissociation can be found in the Bohr effect, which describes the dissociation of ligands from hemoglobin in the lung versus peripheral tissues.
Some protein–protein interactions result in covalent bonding, and some pharmaceuticals are irreversible antagonists that may or may not be covalently bound. Drug discovery has been through periods when drug candidates that bind covalently to their targets are attractive and then are avoided; the success of bortezomib made boron-based covalently binding candidates more attractive in the late 2000s.
Driving force.
In order for the complex to be stable, the free energy of complex by definition must be lower than the solvent separated molecules. The binding may be primarily entropy-driven (release of ordered solvent molecules around the isolated molecule that results in a net increase of entropy of the system). When the solvent is water, this is known as the hydrophobic effect. Alternatively, the binding may be enthalpy-driven where non-covalent attractive forces such as electrostatic attraction, hydrogen bonding, and van der Waals / London dispersion forces are primarily responsible for the formation of a stable complex. Complexes that have a strong entropy contribution to formation tend to have weak enthalpy contributions. Conversely complexes that have strong enthalpy component tend to have a weak entropy component. This phenomenon is known as enthalpy-entropy compensation.
Measurement.
The strength of binding between the components of molecular complex is measured quantitatively by the binding constant (KA), defined as the ratio of the concentration of the complex divided by the product of the concentrations of the isolated components at equilibrium in molar units:
formula_0
When the molecular complex prevents the normal functioning of an enzyme, the binding constant is also referred to as inhibition constant (KI).
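A minimal sketch of this calculation is shown below; the concentrations are made-up illustrative numbers, not measurements from the article.

```python
# Sketch: association constant K_A = [AB]/([A][B]) and dissociation constant K_d = 1/K_A
# from assumed equilibrium concentrations.
import math

AB = 1.0e-6      # mol/L, concentration of the complex at equilibrium (assumed)
A  = 2.0e-9      # mol/L, free receptor (assumed)
B  = 5.0e-9      # mol/L, free ligand (assumed)

K_A = AB / (A * B)
K_d = 1.0 / K_A
print(f"K_A = {K_A:.2e} L/mol, K_d = {K_d:.2e} mol/L, log K_A = {math.log10(K_A):.1f}")
```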
Examples.
Molecules that can participate in molecular binding include proteins, nucleic acids, carbohydrates, lipids, and small organic molecules such as drugs. Hence the types of complexes that form as a result of molecular binding include:
Proteins that form stable complexes with other molecules are often referred to as receptors while their binding partners are called ligands.
References.
| [
{
"math_id": 0,
"text": "A+B \\rightleftharpoons AB:\\log K_{A} =\\log \\left(\\frac{[AB]}{[A][B]} \\right)=-pK_{A} "
}
] | https://en.wikipedia.org/wiki?curid=11489016 |
11490 | Fundamental frequency | Lowest frequency of a periodic waveform, such as sound
The fundamental frequency, often referred to simply as the fundamental, is defined as the lowest frequency of a periodic waveform. In music, the fundamental is the musical pitch of a note that is perceived as the lowest partial present. In terms of a superposition of sinusoids, the fundamental frequency is the lowest frequency sinusoid in the sum of harmonically related frequencies, or the frequency of the difference between adjacent frequencies. In some contexts, the fundamental is usually abbreviated as f0, indicating the lowest frequency counting from zero. In other contexts, it is more common to abbreviate it as f1, the first harmonic. (The second harmonic is then f2 = 2⋅f1, etc. In this context, the zeroth harmonic would be 0 Hz.)
According to Benward's and Saker's "Music: In Theory and Practice":
<templatestyles src="Template:Blockquote/styles.css" />Since the fundamental is the lowest frequency and is also perceived as the loudest, the ear identifies it as the specific pitch of the musical tone [harmonic spectrum]... The individual partials are not heard separately but are blended together by the ear into a single tone.
Explanation.
All sinusoidal and many non-sinusoidal waveforms repeat exactly over time – they are periodic. The period of a waveform is the smallest positive value formula_0 for which the following is true:
<templatestyles src="Block indent/styles.css"/>formula_1
where formula_2 is the value of the waveform formula_3. This means that the waveform's values over any interval of length formula_0 are all that is required to describe the waveform completely (for example, by the associated Fourier series). Since any multiple of period formula_0 also satisfies this definition, the fundamental period is defined as the smallest period over which the function may be described completely. The fundamental frequency is defined as its reciprocal:
formula_4
When the units of time are seconds, the frequency is in formula_5, also known as Hertz.
Fundamental frequency of a pipe.
For a pipe of length formula_6 with one end closed and the other end open, the wavelength of the fundamental harmonic is formula_7, as indicated by the first two animations. Hence,
<templatestyles src="Block indent/styles.css"/>formula_8
Therefore, using the relation
<templatestyles src="Block indent/styles.css"/>formula_9
where formula_10 is the speed of the wave, the fundamental frequency can be found in terms of the speed of the wave and the length of the pipe:
<templatestyles src="Block indent/styles.css"/>formula_11
If the ends of the same pipe are now both closed or both opened, the wavelength of the fundamental harmonic becomes formula_12 . By the same method as above, the fundamental frequency is found to be
<templatestyles src="Block indent/styles.css"/>formula_13
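Both cases can be evaluated numerically. The following Python sketch is illustrative only and is not part of the article; in particular, the wave speed of 343 m/s is an assumed value for sound in air at about 20 °C:
def pipe_fundamental(length, speed=343.0, one_end_closed=True):
    """Fundamental frequency in hertz of an ideal pipe of the given length in metres."""
    # closed-open pipe: wavelength = 4L, so f0 = v / (4L)
    # both ends open (or both closed): wavelength = 2L, so f0 = v / (2L)
    factor = 4.0 if one_end_closed else 2.0
    return speed / (factor * length)

print(pipe_fundamental(0.5))                        # about 171.5 Hz
print(pipe_fundamental(0.5, one_end_closed=False))  # about 343 Hz, one octave higher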
In music.
In music, the fundamental is the musical pitch of a note that is perceived as the lowest partial present. The fundamental may be created by vibration over the full length of a string or air column, or a higher harmonic chosen by the player. The fundamental is one of the harmonics. A harmonic is any member of the harmonic series, an ideal set of frequencies that are positive integer multiples of a common fundamental frequency. The fundamental is itself considered a harmonic because it is 1 times the fundamental frequency.
The fundamental is the frequency at which the entire wave vibrates. Overtones are other sinusoidal components present at frequencies above the fundamental. All of the frequency components that make up the total waveform, including the fundamental and the overtones, are called partials. Together they form the harmonic series. Overtones which are perfect integer multiples of the fundamental are called harmonics. When an overtone is near to being harmonic, but not exact, it is sometimes called a harmonic partial, although they are often referred to simply as harmonics. Sometimes overtones are created that are not anywhere near a harmonic, and are just called partials or inharmonic overtones.
The fundamental frequency is considered the "first harmonic" and the "first partial". The numbering of the partials and harmonics is then usually the same; the second partial is the second harmonic, etc. But if there are inharmonic partials, the numbering no longer coincides. Overtones are numbered as they appear above the fundamental. So strictly speaking, the "first" overtone is the "second" partial (and usually the "second" harmonic). As this can result in confusion, only harmonics are usually referred to by their numbers, and overtones and partials are described by their relationships to those harmonics.
Mechanical systems.
Consider a spring, fixed at one end and having a mass attached to the other; this would be a single degree of freedom (SDoF) oscillator. Once set into motion, it will oscillate at its natural frequency. For a single degree of freedom oscillator, a system in which the motion can be described by a single coordinate, the natural frequency depends on two system properties: mass and stiffness; (providing the system is undamped). The natural frequency, or fundamental frequency, ω0, can be found using the following equation:
<templatestyles src="Block indent/styles.css"/>formula_14
where k is the stiffness of the spring and m is the attached mass.
To determine the natural frequency in Hz, the omega value is divided by 2π.
Or:
<templatestyles src="Block indent/styles.css"/>formula_15
where, as above, k is the stiffness and m is the mass.
In a modal analysis, the frequency of the 1st mode is the fundamental frequency.
This is also expressed as:
<templatestyles src="Block indent/styles.css"/>formula_16
where, for a stretched string, l is the vibrating length, T is the tension, and μ is the mass per unit length (linear density).
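The two formulas in this section can be evaluated with a short Python sketch; it is purely illustrative and the numerical inputs below are arbitrary example values, not taken from the article:
from math import pi, sqrt

def spring_mass_f0(k, m):
    """Natural frequency in hertz of an undamped spring-mass oscillator."""
    omega0 = sqrt(k / m)      # natural angular frequency in rad/s
    return omega0 / (2 * pi)  # dividing by 2*pi converts to Hz

def string_f0(length, tension, mu):
    """Fundamental frequency in hertz of an ideal stretched string."""
    return sqrt(tension / mu) / (2 * length)

print(spring_mass_f0(k=1000.0, m=0.25))                # about 10.07 Hz
print(string_f0(length=0.65, tension=70.0, mu=0.005))  # about 91 Hz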
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "T"
},
{
"math_id": 1,
"text": " x(t) = x(t + T)\\text{ for all }t \\in \\mathbb{R} "
},
{
"math_id": 2,
"text": "x(t)"
},
{
"math_id": 3,
"text": "t"
},
{
"math_id": 4,
"text": " f_0 = \\frac{1}{T}"
},
{
"math_id": 5,
"text": "s^{-1}"
},
{
"math_id": 6,
"text": "L"
},
{
"math_id": 7,
"text": "4L"
},
{
"math_id": 8,
"text": "\\lambda_0 = 4L"
},
{
"math_id": 9,
"text": " \\lambda_0 = \\frac{v}{f_0}"
},
{
"math_id": 10,
"text": "v"
},
{
"math_id": 11,
"text": " f_0 = \\frac{v}{4L}"
},
{
"math_id": 12,
"text": "2L"
},
{
"math_id": 13,
"text": " f_0 = \\frac{v}{2L}"
},
{
"math_id": 14,
"text": " \\omega_\\mathrm{0} = \\sqrt{\\frac{k}{m}} \\, "
},
{
"math_id": 15,
"text": "f_\\mathrm{0} = \\frac{1}{2\\pi} \\sqrt{\\frac{k}{m}} \\,"
},
{
"math_id": 16,
"text": "f_\\mathrm{0} = \\frac{1}{2l} \\sqrt{\\frac{T}{\\mu}} \\,"
}
] | https://en.wikipedia.org/wiki?curid=11490 |
11490196 | YBC 7289 | Babylonian mathematical clay tablet
YBC 7289 is a Babylonian clay tablet notable for containing an accurate sexagesimal approximation to the square root of 2, the length of the diagonal of a unit square. This number is given to the equivalent of six decimal digits, "the greatest known computational accuracy ... in the ancient world". The tablet is believed to be the work of a student in southern Mesopotamia from some time between 1800 and 1600 BC.
Content.
The tablet depicts a square with its two diagonals. One side of the square is labeled with the sexagesimal number 30. The diagonal of the square is labeled with two sexagesimal numbers. The first of these two, 1;24,51,10 represents the number 305470/216000 ≈ 1.414213, a numerical approximation of the square root of two that is off by less than one part in two million. The second of the two numbers is 42;25,35 = 30547/720 ≈ 42.426. This number is the result of multiplying 30 by the given approximation to the square root of two, and approximates the length of the diagonal of a square of side length 30.
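The arithmetic behind these figures is easy to check. The following Python sketch is purely illustrative (it is not part of the scholarship on the tablet); it converts the sexagesimal digits to decimal and compares them with the square root of 2:
from math import sqrt

def sexagesimal(whole, fraction_digits):
    """Value of a base-60 number: the whole part plus the digits after the ';'."""
    value = float(whole)
    for place, digit in enumerate(fraction_digits, start=1):
        value += digit / 60**place
    return value

ratio = sexagesimal(1, [24, 51, 10])    # 1;24,51,10
diagonal = sexagesimal(42, [25, 35])    # 42;25,35

print(ratio)                 # 1.41421296..., versus sqrt(2) = 1.41421356...
print(abs(ratio - sqrt(2)))  # about 6.0e-07 absolute, a relative error under one part in two million
print(30 * ratio, diagonal)  # both about 42.4263888..., the diagonal of a square of side 30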
Because the Babylonian sexagesimal notation did not indicate which digit had which place value, one alternative interpretation is that the number on the side of the square is 30/60 = 1/2. Under this alternative interpretation, the number on the diagonal is 30547/43200 ≈ 0.70711, a close numerical approximation of formula_0, the length of the diagonal of a square of side length 1/2, that is also off by less than one part in two million. David Fowler and Eleanor Robson write, "Thus we have a reciprocal pair of numbers with a geometric interpretation…". They point out that, while the importance of reciprocal pairs in Babylonian mathematics makes this interpretation attractive, there are reasons for skepticism.
The reverse side is partly erased, but Robson believes it contains a similar problem concerning the diagonal of a rectangle whose two sides and diagonal are in the ratio 3:4:5.
Interpretation.
Although YBC 7289 is frequently depicted (as in the photo) with the square oriented diagonally, the standard Babylonian conventions for drawing squares would have made the sides of the square vertical and horizontal, with the numbered side at the top. The small round shape of the tablet, and the large writing on it, suggests that it was a "hand tablet" of a type typically used for rough work by a student who would hold it in the palm of his hand. The student would likely have copied the sexagesimal value of the square root of 2 from another tablet, but an iterative procedure for computing this value can be found in another Babylonian tablet, BM 96957 + VAT 6598.
The mathematical significance of this tablet was first recognized by Otto E. Neugebauer and Abraham Sachs in 1945.
The tablet "demonstrates the greatest known computational accuracy obtained anywhere in the ancient world", the equivalent of six decimal digits of accuracy. Other Babylonian tablets include the computations of areas of hexagons and heptagons, which involve the approximation of more complicated algebraic numbers such as formula_1. The same number formula_1 can also be used in the interpretation of certain ancient Egyptian calculations of the dimensions of pyramids. However, the much greater numerical precision of the numbers on YBC 7289 makes it more clear that they are the result of a general procedure for calculating them, rather than merely being an estimate.
The same sexagesimal approximation to formula_2, 1;24,51,10, was used much later by Greek mathematician Claudius Ptolemy in his "Almagest". Ptolemy did not explain where this approximation came from and it may be assumed to have been well known by his time.
Provenance and curation.
It is unknown where in Mesopotamia YBC 7289 comes from, but its shape and writing style make it likely that it was created in southern Mesopotamia, sometime between 1800 BC and 1600 BC.
At Yale, the Institute for the Preservation of Cultural Heritage has produced a digital model of the tablet, suitable for 3D printing. The original tablet is currently kept in the Yale Babylonian Collection at Yale University.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "1/\\sqrt2"
},
{
"math_id": 1,
"text": "\\sqrt3"
},
{
"math_id": 2,
"text": "\\sqrt2"
}
] | https://en.wikipedia.org/wiki?curid=11490196 |
11491735 | Markov partition | A Markov partition in mathematics is a tool used in dynamical systems theory, allowing the methods of symbolic dynamics to be applied to the study of hyperbolic dynamics. By using a Markov partition, the system can be made to resemble a discrete-time Markov process, with the long-term dynamical characteristics of the system represented as a Markov shift. The appellation 'Markov' is appropriate because the resulting dynamics of the system obeys the Markov property. The Markov partition thus allows standard techniques from symbolic dynamics to be applied, including the computation of expectation values, correlations, topological entropy, topological zeta functions, Fredholm determinants and the like.
Motivation.
Let formula_0 be a discrete dynamical system. A basic method of studying its dynamics is to find a symbolic representation: a faithful encoding of the points of formula_1 by sequences of symbols such that the map formula_2 becomes the shift map.
Suppose that formula_1 has been divided into a number of pieces formula_3 which are thought to be as small and localized, with virtually no overlaps. The behavior of a point formula_4 under the iterates of formula_2 can be tracked by recording, for each formula_5, the part formula_6 which contains formula_7. This results in an infinite sequence on the alphabet formula_8 which encodes the point. In general, this encoding may be imprecise (the same sequence may represent many different points) and the set of sequences which arise in this way may be difficult to describe. Under certain conditions, which are made explicit in the rigorous definition of a Markov partition, the assignment of the sequence to a point of formula_1 becomes an almost one-to-one map whose image is a symbolic dynamical system of a special kind called a shift of finite type. In this case, the symbolic representation is a powerful tool for investigating the properties of the dynamical system formula_0.
Formal definition.
A Markov partition is a finite cover of the invariant set of the manifold by a set of curvilinear rectangles formula_9 such that
for any pair of points formula_10, one has formula_11;
formula_12 for formula_13;
if formula_14 and formula_15, then
formula_16
formula_17
Here, formula_18 and formula_19 are the unstable and stable manifolds of "x", respectively, and formula_20 simply denotes the interior of formula_6.
These last two conditions can be understood as a statement of the Markov property for the symbolic dynamics; that is, the movement of a trajectory from one open cover to the next is determined only by the most recent cover, and not the history of the system. It is this property of the covering that merits the 'Markov' appellation. The resulting dynamics is that of a Markov shift; that this is indeed the case is due to theorems by Yakov Sinai (1968) and Rufus Bowen (1975), thus putting symbolic dynamics on a firm footing.
Variants of the definition are found, corresponding to conditions on the geometry of the pieces formula_6.
Examples.
Markov partitions have been constructed in several situations.
Markov partitions make homoclinic and heteroclinic orbits particularly easy to describe.
The system formula_21 has the Markov partition formula_22, and in this case the symbolic representation of a real number in formula_23 is its binary expansion. For example: formula_24. The assignment of points of formula_23 to their sequences in the Markov partition is well defined except on the dyadic rationals - morally speaking, this is because formula_25, in the same way as formula_26 in decimal expansions.
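The encoding described here can be carried out mechanically. The following Python sketch is illustrative only; note that it assigns boundary points such as 1/2 arbitrarily, which is exactly the dyadic-rational ambiguity noted above:
def itinerary(x, n_symbols=8):
    """First n symbols of the orbit of x under T(x) = 2x mod 1, read off the partition."""
    symbols = []
    for _ in range(n_symbols):
        symbols.append(0 if x < 0.5 else 1)  # 0 if x lies in E_0 = (0, 1/2), else 1
        x = (2 * x) % 1.0                    # apply the doubling map
    return symbols

print(itinerary(0.4375))  # [0, 1, 1, 1, 0, 0, 0, 0], matching 0.4375 = (0.0111...)_2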
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(M, \\varphi)"
},
{
"math_id": 1,
"text": "M"
},
{
"math_id": 2,
"text": "\\varphi"
},
{
"math_id": 3,
"text": "E_1, E_2, \\ldots, E_r"
},
{
"math_id": 4,
"text": "x"
},
{
"math_id": 5,
"text": "n"
},
{
"math_id": 6,
"text": "E_i"
},
{
"math_id": 7,
"text": "\\varphi^n (x)"
},
{
"math_id": 8,
"text": "\\{1, 2, \\ldots, r\\}"
},
{
"math_id": 9,
"text": "\\{E_1, E_2, \\ldots, E_r\\}"
},
{
"math_id": 10,
"text": "x,y\\in E_i"
},
{
"math_id": 11,
"text": "W_s(x)\\cap W_u(y) \\in E_i"
},
{
"math_id": 12,
"text": "\\operatorname{Int} E_i \\cap \\operatorname{Int} E_j=\\emptyset"
},
{
"math_id": 13,
"text": "i\\ne j"
},
{
"math_id": 14,
"text": "x\\in \\operatorname{Int} E_i"
},
{
"math_id": 15,
"text": "\\varphi(x)\\in \\operatorname{Int} E_j"
},
{
"math_id": 16,
"text": "\\varphi\\left[W_u(x)\\cap E_i\\right] \\supset W_u(\\varphi x) \\cap E_j "
},
{
"math_id": 17,
"text": "\\varphi\\left[W_s(x)\\cap E_i\\right] \\subset W_s(\\varphi x) \\cap E_j "
},
{
"math_id": 18,
"text": "W_u(x)"
},
{
"math_id": 19,
"text": "W_s(x)"
},
{
"math_id": 20,
"text": "\\operatorname{Int} E_i"
},
{
"math_id": 21,
"text": "([0,1), x \\mapsto 2x\\ mod\\ 1)"
},
{
"math_id": 22,
"text": "E_0 = (0,1/2), E_1 = (1/2,1)"
},
{
"math_id": 23,
"text": "[0,1)"
},
{
"math_id": 24,
"text": "x \\in E_0, Tx \\in E_1, T^2x \\in E_1, T^3x \\in E_1, T^4x \\in E_0 \\Rightarrow x = (0.01110...)_2"
},
{
"math_id": 25,
"text": "(0.01111\\dots)_2 = (0.10000\\dots)_2"
},
{
"math_id": 26,
"text": " 1 = 0.999\\dots"
}
] | https://en.wikipedia.org/wiki?curid=11491735 |
11491940 | Karger's algorithm | Randomized algorithm for minimum cuts
In computer science and graph theory, Karger's algorithm is a randomized algorithm to compute a minimum cut of a connected graph. It was invented by David Karger and first published in 1993.
The idea of the algorithm is based on the concept of contraction of an edge formula_0 in an undirected graph formula_1. Informally speaking, the contraction of an edge merges the nodes formula_2 and formula_3 into one, reducing the total number of nodes of the graph by one. All other edges connecting either formula_2 or formula_3 are "reattached" to the merged node, effectively producing a multigraph. Karger's basic algorithm iteratively contracts randomly chosen edges until only two nodes remain; those nodes represent a cut in the original graph. By iterating this basic algorithm a sufficient number of times, a minimum cut can be found with high probability.
The global minimum cut problem.
A "cut" formula_4 in an undirected graph formula_1 is a partition of the vertices formula_5 into two non-empty, disjoint sets formula_6. The "cutset" of a cut consists of the edges formula_7 between the two parts. The "size" (or "weight") of a cut in an unweighted graph is the cardinality of the cutset, i.e., the number of edges between the two parts,
formula_8
There are formula_9 ways of choosing for each vertex whether it belongs to formula_10 or to formula_11, but two of these choices make formula_10 or formula_11 empty and do not give rise to cuts. Among the remaining choices, swapping the roles of formula_10 and formula_11 does not change the cut, so each cut is counted twice; therefore, there are formula_12 distinct cuts.
The "minimum cut problem" is to find a cut of smallest size among these cuts.
For weighted graphs with positive edge weights formula_13 the weight of the cut is the sum of the weights of edges between vertices in each part
formula_14
which agrees with the unweighted definition for formula_15.
A cut is sometimes called a “global cut” to distinguish it from an “formula_16-formula_17 cut” for a given pair of vertices, which has the additional requirement that formula_18 and formula_19. Every global cut is an formula_16-formula_17 cut for some formula_20. Thus, the minimum cut problem can be solved in polynomial time by iterating over all choices of formula_20 and solving the resulting minimum formula_16-formula_17 cut problem using the max-flow min-cut theorem and a polynomial time algorithm for maximum flow, such as the push-relabel algorithm, though this approach is not optimal. Better deterministic algorithms for the global minimum cut problem include the Stoer–Wagner algorithm, which has a running time of formula_21.
Contraction algorithm.
The fundamental operation of Karger’s algorithm is a form of edge contraction. The result of contracting the edge formula_22 is a new node formula_23. Every edge formula_24 or formula_25 for formula_26 to the endpoints of the contracted edge is replaced by an edge formula_27 to the new node. Finally, the contracted nodes formula_2 and formula_3 with all their incident edges are removed. In particular, the resulting graph contains no self-loops. The result of contracting edge formula_28 is denoted formula_29.
The contraction algorithm repeatedly contracts random edges in the graph, until only two nodes remain, at which point there is only a single cut.
The key idea of the algorithm is that non-min-cut edges are far more likely than min-cut edges to be randomly selected and lost to contraction, since min-cut edges are usually vastly outnumbered by non-min-cut edges. Consequently, it is plausible that the min-cut edges will survive all of the edge contractions, and the algorithm will correctly identify the minimum cut.
procedure contract(formula_30):
while formula_31
choose formula_32 uniformly at random
formula_33
return the only cut in formula_34
When the graph is represented using adjacency lists or an adjacency matrix, a single edge contraction operation can be implemented with a linear number of updates to the data structure, for a total running time of formula_35. Alternatively, the procedure can be viewed as an execution of Kruskal’s algorithm for constructing the minimum spanning tree in a graph where the edges have weights formula_36 according to a random permutation formula_37. Removing the heaviest edge of this tree results in two components that describe a cut. In this way, the contraction procedure can be implemented like Kruskal’s algorithm in time formula_38.
The best known implementations use formula_39 time and space, or formula_38 time and formula_40 space, respectively.
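A short Python sketch of the contraction algorithm is given below. It is not a reference implementation: the graph is an unweighted edge list assumed to be connected, contraction is tracked with a small union-find structure instead of an explicit multigraph, and the toy graph at the end is an arbitrary example. Choosing a uniformly random edge of the original list and ignoring it when it has become a self-loop is equivalent to choosing a uniformly random edge of the current multigraph.
import random

def contract_to_two(vertices, edges):
    """Contract random edges until two super-nodes remain; return the size of the resulting cut."""
    parent = {v: v for v in vertices}
    def find(v):                      # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    groups = len(vertices)
    while groups > 2:
        u, v = random.choice(edges)   # a uniformly random edge of the current multigraph
        ru, rv = find(u), find(v)
        if ru != rv:                  # skip edges that have become self-loops
            parent[rv] = ru
            groups -= 1
    return sum(1 for u, v in edges if find(u) != find(v))

def karger_min_cut(vertices, edges, repetitions):
    """Smallest cut found over a number of independent runs of the contraction procedure."""
    return min(contract_to_two(vertices, edges) for _ in range(repetitions))

# toy example: two triangles joined by a single bridge edge, so the minimum cut is 1
V = [1, 2, 3, 4, 5, 6]
E = [(1, 2), (2, 3), (1, 3), (3, 4), (4, 5), (5, 6), (4, 6)]
print(karger_min_cut(V, E, repetitions=50))  # prints 1 with high probability
As analysed below, repeating the procedure about (n choose 2)·ln n times reduces the probability of missing a minimum cut to at most 1/n.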
Success probability of the contraction algorithm.
In a graph formula_30 with formula_41 vertices, the contraction algorithm returns a minimum cut with polynomially small probability formula_42. Recall that every graph has formula_43 cuts (by the discussion in the previous section), among which at most formula_44 can be minimum cuts. Therefore, the success probability for this algorithm is much better than the probability for picking a cut at random, which is at most formula_45.
For instance, the cycle graph on formula_46 vertices has exactly formula_47 minimum cuts, given by every choice of 2 edges. The contraction procedure finds each of these with equal probability.
To further establish the lower bound on the success probability, let formula_48 denote the edges of a specific minimum cut of size formula_49. The contraction algorithm returns formula_48 if none of the random edges deleted by the algorithm belongs to the cutset formula_48. In particular, the first edge contraction avoids formula_48, which happens with probability formula_50. The minimum degree of formula_34 is at least formula_49 (otherwise a minimum degree vertex would induce a smaller cut where one of the two partitions contains only the minimum degree vertex), so formula_51. Thus, the probability that the contraction algorithm picks an edge from formula_48 is
formula_52
The probability formula_53 that the contraction algorithm on an formula_46-vertex graph avoids formula_48 satisfies the recurrence formula_54, with formula_55, which can be expanded as
formula_56
Repeating the contraction algorithm.
By repeating the contraction algorithm formula_57 times with independent random choices and returning the smallest cut, the probability of not finding a minimum cut is
formula_58
The total running time for formula_11 repetitions for a graph with formula_46 vertices and formula_59 edges is formula_60.
Karger–Stein algorithm.
An extension of Karger’s algorithm due to David Karger and Clifford Stein achieves an order of magnitude improvement.
The basic idea is to perform the contraction procedure until the graph reaches formula_17 vertices.
procedure contract(formula_30, formula_17):
while formula_61
choose formula_32 uniformly at random
formula_33
return formula_34
The probability formula_62 that this contraction procedure avoids a specific cut formula_48 in an formula_46-vertex graph is
formula_63
This expression is approximately formula_64 and becomes less than formula_65 around formula_66. In particular, the probability that an edge from formula_48 is contracted grows towards the end. This motivates the idea of switching to a slower algorithm after a certain number of contraction steps.
procedure fastmincut(formula_67):
if formula_68:
return contract(formula_34, formula_69)
else:
formula_70
formula_71 contract(formula_34, formula_17)
formula_72 contract(formula_34, formula_17)
return min(fastmincut(formula_73), fastmincut(formula_74))
Analysis.
The contraction parameter formula_17 is chosen so that each call to contract has probability at least 1/2 of success (that is, of avoiding the contraction of an edge from a specific cutset formula_48). This allows the successful part of the recursion tree to be modeled as a random binary tree generated by a critical Galton–Watson process, and to be analyzed accordingly.
The probability formula_75 that this random tree of successful calls contains a long-enough path to reach the base of the recursion and find formula_48 is given by the recurrence relation
formula_76
with solution formula_77. The running time of fastmincut satisfies
formula_78
with solution formula_79. To achieve error probability formula_80, the algorithm can be repeated formula_81 times, for an overall running time of formula_82. This is an order of magnitude improvement over Karger’s original algorithm.
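A self-contained Python sketch of this recursion follows. It is only an illustration of the pseudocode above, not an optimized implementation: graphs are plain edge lists assumed to be connected, contraction reuses a small union-find, and the example graph and the 20 outer repetitions are arbitrary choices.
import math
import random

def contract(vertices, edges, t):
    """Randomly contract edges until t super-nodes remain; return the contracted graph."""
    label = {v: v for v in vertices}
    def find(v):
        while label[v] != v:
            label[v] = label[label[v]]
            v = label[v]
        return v
    groups = len(vertices)
    while groups > t:
        u, v = random.choice(edges)
        ru, rv = find(u), find(v)
        if ru != rv:
            label[rv] = ru
            groups -= 1
    new_edges = [(find(u), find(v)) for u, v in edges if find(u) != find(v)]
    new_vertices = sorted({find(v) for v in vertices})
    return new_vertices, new_edges

def fastmincut(vertices, edges):
    n = len(vertices)
    if n <= 6:
        _, remaining = contract(vertices, edges, 2)
        return len(remaining)               # size of the single cut that is left
    t = math.ceil(1 + n / math.sqrt(2))
    g1 = contract(vertices, edges, t)
    g2 = contract(vertices, edges, t)
    return min(fastmincut(*g1), fastmincut(*g2))

# toy graph: three triangles joined in a chain by single edges, so the minimum cut is 1
V = list(range(1, 10))
E = [(1, 2), (2, 3), (1, 3), (3, 4), (4, 5), (5, 6), (4, 6), (6, 7), (7, 8), (8, 9), (7, 9)]
print(min(fastmincut(V, E) for _ in range(20)))  # prints 1 with high probability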
Improvement bound.
To determine a min-cut, one has to touch every edge in the graph at least once, which takes formula_83 time in a dense graph. The Karger–Stein min-cut algorithm runs in formula_84 time, which is very close to that lower bound. | [
{
"math_id": 0,
"text": "(u, v)"
},
{
"math_id": 1,
"text": "G = (V, E)"
},
{
"math_id": 2,
"text": "u"
},
{
"math_id": 3,
"text": "v"
},
{
"math_id": 4,
"text": "(S,T)"
},
{
"math_id": 5,
"text": "V"
},
{
"math_id": 6,
"text": "S\\cup T= V"
},
{
"math_id": 7,
"text": "\\{\\, uv \\in E \\colon u\\in S, v\\in T\\,\\}"
},
{
"math_id": 8,
"text": "w(S,T) = |\\{\\, uv \\in E \\colon u\\in S, v\\in T\\,\\}|\\,."
},
{
"math_id": 9,
"text": "2^{|V|}"
},
{
"math_id": 10,
"text": "S"
},
{
"math_id": 11,
"text": "T"
},
{
"math_id": 12,
"text": "2^{|V|-1}-1"
},
{
"math_id": 13,
"text": "w\\colon E\\rightarrow \\mathbf R^+"
},
{
"math_id": 14,
"text": "w(S,T) = \\sum_{uv\\in E\\colon u\\in S, v\\in T} w(uv)\\,,"
},
{
"math_id": 15,
"text": "w=1"
},
{
"math_id": 16,
"text": "s"
},
{
"math_id": 17,
"text": "t"
},
{
"math_id": 18,
"text": "s\\in S"
},
{
"math_id": 19,
"text": "t\\in T"
},
{
"math_id": 20,
"text": "s,t\\in V"
},
{
"math_id": 21,
"text": "O(mn+n^2\\log n)"
},
{
"math_id": 22,
"text": "e=\\{u,v\\}"
},
{
"math_id": 23,
"text": "uv"
},
{
"math_id": 24,
"text": "\\{w,u\\}"
},
{
"math_id": 25,
"text": "\\{w,v\\}"
},
{
"math_id": 26,
"text": "w\\notin\\{u,v\\}"
},
{
"math_id": 27,
"text": "\\{w,uv\\}"
},
{
"math_id": 28,
"text": "e"
},
{
"math_id": 29,
"text": "G/e"
},
{
"math_id": 30,
"text": "G=(V,E)"
},
{
"math_id": 31,
"text": "|V| > 2"
},
{
"math_id": 32,
"text": "e\\in E"
},
{
"math_id": 33,
"text": "G \\leftarrow G/e"
},
{
"math_id": 34,
"text": "G"
},
{
"math_id": 35,
"text": "O(|V|^2)"
},
{
"math_id": 36,
"text": "w(e_i)=\\pi(i)"
},
{
"math_id": 37,
"text": "\\pi"
},
{
"math_id": 38,
"text": "O(|E|\\log |E|)"
},
{
"math_id": 39,
"text": "O(|E|)"
},
{
"math_id": 40,
"text": "O(|V|)"
},
{
"math_id": 41,
"text": "n=|V|"
},
{
"math_id": 42,
"text": "\\binom{n}{2}^{-1}"
},
{
"math_id": 43,
"text": "2^{n-1} -1 "
},
{
"math_id": 44,
"text": "\\tbinom{n}{2}"
},
{
"math_id": 45,
"text": "\\frac{\\tbinom{n}{2}}{2^{n-1} -1}"
},
{
"math_id": 46,
"text": "n"
},
{
"math_id": 47,
"text": "\\binom{n}{2}"
},
{
"math_id": 48,
"text": "C"
},
{
"math_id": 49,
"text": "k"
},
{
"math_id": 50,
"text": "1-k/|E|"
},
{
"math_id": 51,
"text": "|E|\\geqslant nk/2"
},
{
"math_id": 52,
"text": "\\frac{k}{|E|} \\leqslant \\frac{k}{nk/2} = \\frac{2}{n}."
},
{
"math_id": 53,
"text": "p_n"
},
{
"math_id": 54,
"text": "p_n \\geqslant \\left( 1 - \\frac{2}{n} \\right) p_{n-1}"
},
{
"math_id": 55,
"text": "p_2 = 1"
},
{
"math_id": 56,
"text": "\np_n \\geqslant \\prod_{i=0}^{n-3} \\Bigl(1-\\frac{2}{n-i}\\Bigr) =\n \\prod_{i=0}^{n-3} {\\frac{n-i-2}{n-i}}\n = \\frac{n-2}{n}\\cdot \\frac{n-3}{n-1} \\cdot \\frac{n-4}{n-2}\\cdots \\frac{3}{5}\\cdot \\frac{2}{4} \\cdot \\frac{1}{3}\n = \\binom{n}{2}^{-1}\\,.\n"
},
{
"math_id": 57,
"text": " T = \\binom{n}{2}\\ln n "
},
{
"math_id": 58,
"text": "\n\\left[1-\\binom{n}{2}^{-1}\\right]^T\n \\leq \\frac{1}{e^{\\ln n}} = \\frac{1}{n}\\,.\n"
},
{
"math_id": 59,
"text": "m"
},
{
"math_id": 60,
"text": " O(Tm) = O(n^2 m \\log n)"
},
{
"math_id": 61,
"text": "|V| > t"
},
{
"math_id": 62,
"text": "p_{n,t}"
},
{
"math_id": 63,
"text": "\np_{n,t} \\ge \\prod_{i=0}^{n-t-1} \\Bigl(1-\\frac{2}{n-i}\\Bigr) = \\binom{t}{2}\\Bigg/\\binom{n}{2}\\,. \n"
},
{
"math_id": 64,
"text": "t^2/n^2"
},
{
"math_id": 65,
"text": "\\frac{1}{2}"
},
{
"math_id": 66,
"text": " t= n/\\sqrt 2 "
},
{
"math_id": 67,
"text": "G= (V,E)"
},
{
"math_id": 68,
"text": "|V| \\le 6"
},
{
"math_id": 69,
"text": "2"
},
{
"math_id": 70,
"text": "t\\leftarrow \\lceil 1 + |V|/\\sqrt 2\\rceil"
},
{
"math_id": 71,
"text": "G_1 \\leftarrow "
},
{
"math_id": 72,
"text": "G_2 \\leftarrow "
},
{
"math_id": 73,
"text": "G_1"
},
{
"math_id": 74,
"text": "G_2"
},
{
"math_id": 75,
"text": "P(n)"
},
{
"math_id": 76,
"text": "P(n)= 1-\\left(1-\\frac{1}{2} P\\left(\\Bigl\\lceil 1 + \\frac{n}{\\sqrt{2}}\\Bigr\\rceil \\right)\\right)^2"
},
{
"math_id": 77,
"text": "P(n) = \\Omega\\left(\\frac{1}{\\log n}\\right)"
},
{
"math_id": 78,
"text": "T(n)= 2T\\left(\\Bigl\\lceil 1+\\frac{n}{\\sqrt{2}}\\Bigr\\rceil\\right)+O(n^2)"
},
{
"math_id": 79,
"text": "T(n)=O(n^2\\log n)"
},
{
"math_id": 80,
"text": "O(1/n)"
},
{
"math_id": 81,
"text": "O(\\log n/P(n))"
},
{
"math_id": 82,
"text": "T(n) \\cdot \\frac{\\log n}{P(n)} = O(n^2\\log ^3 n)"
},
{
"math_id": 83,
"text": "\\Theta(n^2)"
},
{
"math_id": 84,
"text": "O(n^2\\ln ^{O(1)} n)"
}
] | https://en.wikipedia.org/wiki?curid=11491940 |
11492935 | Graph product | Binary operation on graphs
In graph theory, a graph product is a binary operation on graphs. Specifically, it is an operation that takes two graphs "G"1 and "G"2 and produces a graph H with the following properties:
The graph products differ in what exactly this condition is. It is always about whether or not the vertices "a""n", "b""n" in Gn are equal or connected by an edge.
The terminology and notation for specific graph products in the literature varies quite a lot; even if the following may be considered somewhat standard, readers are advised to check what definition a particular author uses for a graph product, especially in older texts.
Even for more standard definitions, it is not always consistent in the literature how to handle self-loops. The formulas below for the number of edges in a product also may fail when including self-loops. For example, the tensor product of a single vertex self-loop with itself is another single vertex self-loop with formula_0, and not formula_1 as the formula formula_2 would suggest.
Overview table.
The following table shows the most common graph products, with formula_3 denoting "is connected by an edge to", and formula_4 denoting non-adjacency. While formula_4 "does" allow equality, formula_5 means they must be distinct and non-adjacent. The operator symbols listed here are by no means standard, especially in older papers.
In general, a graph product is determined by any condition for formula_6 that can be expressed in terms of formula_8 and formula_9.
Mnemonic.
Let formula_10 be the complete graph on two vertices (i.e. a single edge). The product graphs formula_11, formula_12, and formula_13 look exactly like the graph representing the operator. For example, formula_11 is a four cycle (a square) and formula_13 is the complete graph on four vertices.
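This mnemonic can be checked directly with NetworkX, whose product operators (cartesian_product, tensor_product and strong_product) implement the definitions above; the sketch below is illustrative and assumes NetworkX is installed:
import networkx as nx

K2 = nx.complete_graph(2)                 # a single edge

cartesian = nx.cartesian_product(K2, K2)  # Cartesian product
tensor = nx.tensor_product(K2, K2)        # tensor (categorical) product
strong = nx.strong_product(K2, K2)        # strong product

print(nx.is_isomorphic(cartesian, nx.cycle_graph(4)))  # True: a four-cycle, like the square operator symbol
print(sorted(d for _, d in tensor.degree()))           # [1, 1, 1, 1]: two disjoint edges, like the times symbol
print(nx.is_isomorphic(strong, nx.complete_graph(4)))  # True: K4, like the boxed-times symbol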
The formula_7 notation for lexicographic product serves as a reminder that this product is not commutative. The resulting graph looks like substituting a copy of formula_14 for every vertex of formula_15.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "E=1"
},
{
"math_id": 1,
"text": "E=2"
},
{
"math_id": 2,
"text": "E_{G\\times H} = 2 E_{G} E_{H}"
},
{
"math_id": 3,
"text": "\\sim"
},
{
"math_id": 4,
"text": "\\not\\sim"
},
{
"math_id": 5,
"text": "\\not\\simeq"
},
{
"math_id": 6,
"text": "(a_1, a_2) \\sim (b_1, b_2)"
},
{
"math_id": 7,
"text": "G_1[G_2]"
},
{
"math_id": 8,
"text": "a_n = b_n"
},
{
"math_id": 9,
"text": "a_n \\sim b_n"
},
{
"math_id": 10,
"text": "K_2"
},
{
"math_id": 11,
"text": "K_2 \\square K_2"
},
{
"math_id": 12,
"text": "K_2 \\times K_2"
},
{
"math_id": 13,
"text": "K_2 \\boxtimes K_2"
},
{
"math_id": 14,
"text": "G_2"
},
{
"math_id": 15,
"text": "G_1"
}
] | https://en.wikipedia.org/wiki?curid=11492935 |
11492969 | Polar homology | In complex geometry, a polar homology is a group which captures holomorphic invariants of a complex manifold in a similar way to usual homology of a manifold in differential topology. Polar homology was defined by B. Khesin and A. Rosly in 1999.
Definition.
Let "M" be a complex projective manifold. The space formula_0 of polar "k"-chains is a vector space over formula_1 defined as a quotient formula_2, with formula_3 and formula_4 vector spaces defined below.
Defining "Ak".
The space formula_3 is freely generated by the triples formula_5, where "X" is a smooth, "k"-dimensional complex manifold, formula_6 a holomorphic map, and formula_7 is a rational "k"-form on "X", with first order poles on a divisor with normal crossing.
Defining "Rk".
The space formula_4 is generated by the following relations.
formula_8;
formula_9 provided formula_10;
formula_11 provided
formula_12
where
formula_13 for all formula_14 and the push-forwards formula_15 are considered on the smooth part of formula_16.
Defining the boundary operator.
The boundary operator formula_17 is defined by
formula_18,
where formula_19 are components of the polar divisor of formula_7, "res" is the Poincaré residue, and formula_20 are restrictions of the map "f" to each component of the divisor.
Khesin and Rosly proved that this boundary operator is well defined, and satisfies formula_21. They defined the polar cohomology as the quotient formula_22. | [
{
"math_id": 0,
"text": "C_k"
},
{
"math_id": 1,
"text": "{\\mathbb C}"
},
{
"math_id": 2,
"text": "A_k/R_k"
},
{
"math_id": 3,
"text": "A_k"
},
{
"math_id": 4,
"text": "R_k"
},
{
"math_id": 5,
"text": "(X, f, \\alpha)"
},
{
"math_id": 6,
"text": "f:\\; X \\mapsto M"
},
{
"math_id": 7,
"text": "\\alpha"
},
{
"math_id": 8,
"text": "\\lambda (X, f, \\alpha)=(X, f, \\lambda\\alpha)"
},
{
"math_id": 9,
"text": "(X,f,\\alpha)=0"
},
{
"math_id": 10,
"text": "\\dim f(X) < k"
},
{
"math_id": 11,
"text": "\\ \\sum_i(X_i,f_i,\\alpha_i)=0"
},
{
"math_id": 12,
"text": "\\sum_if_{i*}\\alpha_i\\equiv 0,"
},
{
"math_id": 13,
"text": "dim \\;f_i(X_i)=k"
},
{
"math_id": 14,
"text": "i"
},
{
"math_id": 15,
"text": "f_{i*}\\alpha_i"
},
{
"math_id": 16,
"text": "\\cup_i f_i(X_i)"
},
{
"math_id": 17,
"text": "\\partial:\\; C_k \\mapsto C_{k-1}"
},
{
"math_id": 18,
"text": "\\partial(X,f,\\alpha)=2\\pi \\sqrt{-1}\\sum_i(V_i, f_i, res_{V_i}\\,\\alpha)"
},
{
"math_id": 19,
"text": "V_i"
},
{
"math_id": 20,
"text": "f_i=f|_{V_i}"
},
{
"math_id": 21,
"text": "\\partial^2=0"
},
{
"math_id": 22,
"text": " \\operatorname{ker}\\; \\partial / \\operatorname{im} \\; \\partial"
}
] | https://en.wikipedia.org/wiki?curid=11492969 |
11493727 | Papyrus 45 | Papyrus 45 ("P. Chester Beatty" I), designated by siglum 𝔓45 in the Gregory-Aland numbering of New Testament manuscripts, is an early Greek New Testament manuscript written on papyrus, and is one of the manuscripts comprising the Chester Beatty Papyri, a group of early Christian manuscripts discovered in the 1930s, and purchased by business man and philanthropist, Alfred Chester Beatty. Beatty purchased the manuscript in the 1930s from an Egyptian book dealer, and it was subsequently published in "The Chester Beatty Biblical Papyri, Descriptions and Texts of Twelve Manuscripts on Papyrus of the Greek Bible" by palaeographer, biblical and classical scholar Frederic G. Kenyon in 1933.121, 118 Manuscripts among the Chester Beatty Papyri have had several places of discovery associated with them, the most likely being the Faiyum in Egypt (the dry sands of Egypt have been a haven for finding very early manuscripts since the late 1800s). Using the study of comparative writing styles (palaeography), it has been dated to the early 3rd century CE. This therefore makes it the earliest example of not only the four Gospels contained in one volume, but also the Acts of the Apostles.134 It contains verses in fragmentary form from the texts of Matthew chapters 20–21 and 25–26; Mark chapters 4–9 and 11–12; Luke chapters 6–7 and 9–14; John chapters 4–5 and 10–11; and Acts chapters 4–17.
The manuscript is currently housed at the Chester Beatty Library, Dublin, Ireland, except for one leaf containing Matt. 25:41–26:39, which is in the Papyrus Collection of the Austrian National Library in Vienna ("Pap. Vindob. G." 31974).
Description.
The manuscript is heavily damaged and fragmented. The papyrus was bound in a codex (the forerunner to the modern book), which may have consisted of 220 pages, however only 30 survive (two of Matthew, six of Mark, seven of Luke, two of John, and thirteen of Acts).54 It was made up of quires of two leaves (four pages) only, which were formed by folding a single sheet of papyrus in half, with the horizontal fibres (due to how papyrus is made from strips of the papyrus plant) facing each other on the inside pages, while the outsides had the vertical fibres. The order of fibres in the quire may thus be designated V-H-H-V, and this sequence is a vital factor in the reconstruction of the manuscript. All of the pages have gaps, with very few lines complete.54 The leaves of Matthew and John are only extant in small fragments, which have to be pieced together in order to make up a page.54 The original pages were roughly 10 inches by 8 inches.54 Unlike many of the other surviving manuscripts from the 3rd century which usually contained just the Gospels, or just the Catholic letters, or just the Pauline epistles, this manuscript possibly contained more than one grouping of New Testament texts.54 This hypothesis is attributed to the use of gatherings of two leaves, known as a single-quire, whereas most other codices were made from multiple pages in a single quire (all pages put on top of each other, then folded in the middle to make a single block), or of multiple pages split into several quires (groups of 8–10 pages laid on top of each other, then folded in half to make separate blocks), which were then stitched together to make a full volume. It is unknown whether the codex was enclosed in a leather cover or one of another material.vii
Despite the fragmentary nature, the codex has evidence of the following verses from the New Testament:
Textual character.
Because of the extent of the damage, determining the text's relationship to the standard text-type groups has been difficult for scholars (the text-types are groups of different manuscripts which share specific or generally related readings, which then differ from each other group, and thus the conflicting readings can separate out the groups, which are then used to determine the original text as published; there are three main groups with names: Alexandrian, Western, and Byzantine). Kenyon identified the text of the Gospel of Mark in the manuscript as representing the Caesarean text-type, following the definition of the group by biblical scholar Burnett Hillman Streeter. Reverend Hollis Huston criticized Kenyon's transcription of various partially surviving words, and concluded that chapters 6 and 11 of Mark in 𝔓45 could not neatly fit into one of the established textual groupings, especially not Caesarean, due to the manuscript predating the distinctive texts for each type from the 4th and 5th centuries. This is due to the definition of a "text-type" being based on readings found in manuscripts dating to after the Edict of Milan (313) by the Emperor Constantine, which stopped the persecution of Christians in the Roman Empire, thus allowing them to make copies of the New and Old Testaments freely, under the auspices of an official copying process. Therefore, these manuscripts were made under a controlled setting, whereas the early papyri weren't, hence the specific text-type groups could be established.
The manuscript has a great number of unique (known as "singular") readings (this being words/phrases not found in other manuscripts of the New Testament in specific verses). On the origin of these singular readings, E. C. Colwell comments:
"As an editor the scribe of 𝔓45 wielded a sharp axe. The most striking aspect of his style is its conciseness. The dispensable word is dispensed with. He omits adverbs, adjectives, nouns, participles, verbs, personal pronouns—without any compensating habit of addition. He frequently omits phrases and clauses. He prefers the simple to the compound word. In short, he favors brevity. He shortens the text in at least fifty places in singular readings alone. But he does not drop syllables or letters. His shortened text is readable."
Textual relationship with other New Testament manuscripts.
𝔓45 has a relatively close statistical relationship with Codex Washingtonianus (W) in Mark (this being their unique readings shared with each other, albeit not with other manuscripts), and to a lesser extent those manuscripts within the textual-family group Family 13. Citing biblical scholar Larry Hurtado's study, "Text-Critical Methodology and the Pre-Caesarean Text: Codex W in the Gospel of Mark," text-critic Eldon Jay Epp has agreed that there is no connection to a Caesarean or pre-Caesarean text in Mark. There is also no strong connection to the Alexandrian text as seen in Codex Vaticanus (B), the Western text as evidenced by Codex Bezae (D), or the Byzantine text as witnessed by the Textus Receptus. Another hypothesis is that 𝔓45 comes from the Alexandrian tradition, but has many readings intended to "improve" the text stylistically, and a number of harmonizations. While still difficult to place historically in a category of texts, contrary to Kenyon, including 𝔓45 as a representative of the Caesarean text-type has been undermined.
The textual relationship of the manuscript varies from book to book. In Mark, an analysis of the various readings noted in the textual apparatus of the United Bible Society's "Greek New Testament" (4th ed.) (a critical edition of the Greek New Testament which has, based on scientific principles, attempted to reconstruct the original text from available ancient manuscripts), places 𝔓45 in a group which includes W (for chapters 5-16), Codex Koridethi (Θ), textual group Family 1, and the minuscules 28, 205, 565; the Sinaitic Syriac manuscript, Armenian manuscripts of the New Testament, and Georgian manuscript versions of the New Testament; and the quotations of the New Testament found in early church writer Origen's works. This group corresponds to what Streeter called an "Eastern type" of the text. In Luke, an eleven-way PAM partition (a specific analytical-method) based on Greek manuscript data, associated with the Institute for New Testament Textual Research's (INTF) "Parallel Pericopes" volume places the manuscript in a group with Codex Ephraemi Rescriptus (C), Codex Regius (L), Codex Zacynthius (Ξ), and the minuscules 33, 892, and 1241. In Acts the Alexandrian text-type is its closest textual relationship.
It is calculated that the codex omitted the Pericope Adulterae (John 7:53–8:11).
Some notable readings.
Below are some readings of the manuscript which agree or disagree with variant readings in other Greek manuscripts, or with varying ancient translations of the New Testament. See the main article Textual variants in the New Testament.
("rooster crows"): 𝔓37(vid) 𝔓45 L ƒ1 2886.
("rooster has crowed"): B D W 33. formula_0
("by hundreds and by fifties"):
Omit. : 𝔓45 syvf syh(ms)
Incl. : B D ( – L Θ "ƒ"1 "ƒ"13 28. 565. 579. 700. 892. 1424. formula_0)
("the loaves of bread"):
Omit. : 𝔓45 D W Θ ƒ1.13 28. 565. 700. 2542 lat cop
Incl. : A B L 33. formula_0 (c) f syp.h bo
("to the other side"):
Omit. : 𝔓45 W ƒ1 118. itq syrs
Incl. : Majority of manuscripts
("I say to you"):
Omit. : 𝔓45 W
Incl. (without "ὑμῖν"): B L 892. "pc"
Incl. (full): Majority of manuscripts
("of the Herodians"): 𝔓45 W Θ ƒ1.13 28. 565. 1365. 2542 iti.k cop samss arm geo
("of Herod"): Majority of manuscripts
("my, and"):
Omit. : 𝔓45 D 28. 700. ita.b.d.i.k.n.r1 syrs arm Origen
Incl. : Majority of manuscripts
("and stood up"):
Omit. : 𝔓45(vid) W itk.l sys.p
Incl. : Majority of manuscripts
("because it had been well built"): 𝔓75(vid) B L W Ξ 33. 157. 579. 892. 1241. 1342. 2542 syhmg sa bopt
("for it had been built upon the rock"): A C D Θ Ψ ƒ1.13 700.c Byz latt syrp.h cop bopt arm geo goth
Omit. : 𝔓45(vid) 700.* syrs
("nor under a basket"):
Omit. : 𝔓45 𝔓75 L Γ Ξ 070 ƒ1 22. 69. 700.* 788. 1241. 2542 syrs cop arm, geo
Incl. : A B C D W Θ Ψ ƒ13 formula_0 latt sy(c.p).h; (Cl)
("scribes and Pharisees, hypocrites!"):
Omit. : 𝔓45 𝔓75 B C L ƒ1 33. 1241. 2542 ita.aur.c.e.ff2.l vg syrs.c sa cop bopt arm geo
Incl. : A (D) W Θ Ψ ƒ13 formula_0 it syp.h bopt
("so they might catch him"):
Omit. : 𝔓45 𝔓75 B L 579. 892.* 1241. 2542 syrs.c co
Incl. : A C (D) W Θ Ψ ƒ1.13 33. formula_0 lat vg sy(p).h
Omit. verse: 𝔓45 ite syrs boms
Incl. verse: Majority of manuscripts
("or prepared, or"):
Omit. : 𝔓45
Incl. : Majority of manuscripts
("to the disciples"):
Omit. : 𝔓45 𝔓66* ite.1
Incl. : 𝔓6(vid) 𝔓66(c) 𝔓75 A B D K Γ Δ L W Θ Ψ 0250 ƒ13 𝑙844 "al" lat syr co ƒ1 33. formula_0
("and the life"):
Omit. : 𝔓45 it1 syrs Diatessaron syr Cyprian
Incl. : Majority of manuscripts
("of that year"):
Omit. : 𝔓45 ite.1 syrs
Incl. : Majority of manuscripts
("all"):
Omit. : 𝔓45 D it
Incl. : Majority of manuscripts
("the Holy"):
Omit. : Ac B sa mae
Incl. : 𝔓45 𝔓74 A* C D E Ψ 33. 1739 Byz latt syr cop bo
("Jesus"):
Omit. : formula_0
Incl. : 𝔓45 𝔓74 A B C E Ψ 33. 81. 323. 614. 945. 1175 1739
("those who heard"):
Omit. : 𝔓45 𝔓74 Ψ* "pc"
Incl. : Majority of manuscripts
("two men"):
Omit. : formula_0
Incl. : 𝔓45 𝔓74 A B C E Ψ 36 81. 323. 614. 945 1175 1739 latt syr co
("became"): 𝔓74(vid) A B C 36. 81. 323. 453. 945. 1175. 1739. Origen
("fell upon"): E Ψ 33. Byz latt syr
("came"): 𝔓45
("Peter"):
Omit. : 𝔓45 gig Clement Ambrose
Incl. : Majority of manuscripts
("immediately"):
Omit. : 𝔓45 36. 453. 1175. 2818 itd syrp samss boms
Incl. : 𝔓74 A B C E 81. "pc" vg syh(mg) bo
("again"): (D) Ψ 33(vid). 323. 614. 945. 1241. 1505. 1739 formula_0 p syh samss mae
("Lord"): 𝔓45(vid) A B C E Ψ 81* 323. 614. 945 1175 1739 lat syrh bo
("God"): 𝔓74 D Byz syrp sa mae boms
("making no distinction"):
Omit. : 𝔓45 D itl.p* syrh
Incl. : (*) A B (E Ψ) 33. 81. 945. (1175). 1739 "al"
("of the Lord"): 𝔓45 𝔓74 A C Ψ 33. 1739 Byz gig vg samss mae
("of God"): B D E 049 323. 453 sams bo
("to God"): 614. syr "pc"
("of the Lord"):
Omit. : 𝔓45 "pc"
Incl. : Majority of manuscripts
("from sexual immorality"):
Omit. : 𝔓45
Incl. : Majority of manuscripts
("Lord"): 𝔓74 A B D 33. 81. itd vgst sa
("God"): 𝔓45 C E Ψ 1739 Byz gig itw vgcl syr bo
("Lord"): 𝔓45 𝔓74 2 A C (D) E Ψ 33. 1739 Byz lat syr cop
("God"): * B "pc"
("and stirring up"):
Omit. : 𝔓45 E Byz
Incl. : 𝔓74 A B D(*) (Ψ) 33. 36. 81. 323. 614. 945. 1175. 1505. 1739 "al" lat syr sa (bo)
Facsimile edition.
In November 2020, the Center for the Study of New Testament Manuscripts in conjunction with Hendrickson Publishers released a new 1:1 high-resolution imaged facsimile edition of 𝔓45 on black and white backgrounds, along with 𝔓46 and 𝔓47.
Notes and references.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathfrak{M}"
}
] | https://en.wikipedia.org/wiki?curid=11493727 |
11494014 | Strong product of graphs | Binary operation in graph theory
In graph theory, the strong product is a way of combining two graphs to make a larger graph. Two vertices are adjacent in the strong product when they come from pairs of vertices in the factor graphs that are either adjacent or identical. The strong product is one of several different graph product operations that have been studied in graph theory. The strong product of any two graphs can be constructed as the union of two other products of the same two graphs, the Cartesian product of graphs and the tensor product of graphs.
An example of a strong product is the king's graph, the graph of moves of a chess king on a chessboard, which can be constructed as a strong product of path graphs. Decompositions of planar graphs and related graph classes into strong products have been used as a central tool to prove many other results about these graphs.
Care should be exercised when encountering the term "strong product" in the literature, since it has also been used to denote the tensor product of graphs.
Definition and example.
The strong product "G" ⊠ "H" of graphs G and H is a graph such that
the vertex set of "G" ⊠ "H" is the Cartesian product "V"("G") × "V"("H"); and
distinct vertices ("u","u' ") and ("v","v' ") are adjacent in "G" ⊠ "H" if and only if:
"u" = "v" and u' is adjacent to v', or
"u' " = "v' " and u is adjacent to v, or
u is adjacent to v and u' is adjacent to v'.
It is the union of the Cartesian product and the tensor product.
For example, the king's graph, a graph whose vertices are squares of a chessboard and whose edges represent possible moves of a chess king, is a strong product of two path graphs. Its horizontal edges come from the Cartesian product, and its diagonal edges come from the tensor product of the same two paths. Together, these two kinds of edges make up the entire strong product.
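The same construction is easy to reproduce computationally. The following NetworkX sketch (illustrative only, assuming NetworkX is available) builds the 8×8 king's graph as a strong product of two eight-vertex paths and checks two of its familiar properties:
import networkx as nx

P8 = nx.path_graph(8)             # king moves along a single rank or file
king = nx.strong_product(P8, P8)  # the 8x8 king's graph

print(king.number_of_nodes(), king.number_of_edges())  # 64 squares and 210 possible moves
print(sorted({d for _, d in king.degree()}))           # [3, 5, 8]: corner, edge and interior squares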
Properties and applications.
Every planar graph is a subgraph of a strong product of a path and a graph of treewidth at most six. This result has been used to prove that planar graphs have bounded queue number, small universal graphs and concise adjacency labeling schemes, and bounded nonrepetitive chromatic number and centered chromatic number. This product structure can be found in linear time. Beyond planar graphs, extensions of these results have been proven for graphs of bounded genus, graphs with a forbidden minor that is an apex graph, bounded-degree graphs with any forbidden minor, and k-planar graphs.
The clique number of the strong product of any two graphs equals the product of the clique numbers of the two graphs. If two graphs both have bounded twin-width, and in addition one of them has bounded degree, then their strong product also has bounded twin-width.
A leaf power is a graph formed from the leaves of a tree by making two leaves adjacent when their distance in the tree is below some threshold formula_0. If formula_1 is a formula_0-leaf power of a tree formula_2, then formula_2 can be found as a subgraph of a strong product of formula_1 with a formula_0-vertex cycle. This embedding has been used in recognition algorithms for leaf powers.
The strong product of a 7-vertex cycle graph and a 4-vertex complete graph, formula_3, has been suggested as a possibility for a 10-chromatic biplanar graph that would improve the known bounds on the Earth–Moon problem; another suggested example is the graph obtained by removing any vertex from formula_4. In both cases, the number of vertices in these graphs is more than 9 times the size of their largest independent set, implying that their chromatic number is at least 10. However, it is not known whether these graphs are biplanar.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "k"
},
{
"math_id": 1,
"text": "G"
},
{
"math_id": 2,
"text": "T"
},
{
"math_id": 3,
"text": "C_7\\boxtimes K_4"
},
{
"math_id": 4,
"text": "C_5\\boxtimes K_4"
}
] | https://en.wikipedia.org/wiki?curid=11494014 |
11494409 | Element (category theory) | In category theory, the concept of an element, or a point, generalizes the more usual set theoretic concept of an element of a set to an object of any category. This idea often allows restating of definitions or properties of morphisms (such as monomorphism or product) given by a universal property in more familiar terms, by stating their relation to elements. Some very general theorems, such as Yoneda's lemma and the Mitchell embedding theorem, are of great utility for this, by allowing one to work in a context where these translations are valid. This approach to category theory – in particular the use of the Yoneda lemma in this way – is due to Grothendieck, and is often called the method of the functor of points.
Definition.
Suppose C is any category and "A", "T" are two objects of C. A "T"-valued point of "A" is simply a morphism formula_0. The set of all "T"-valued points of "A" varies functorially with "T", giving rise to the "functor of points" of "A"; according to the Yoneda lemma, this completely determines "A" as an object of C.
Properties of morphisms.
Many properties of morphisms can be restated in terms of points. For example, a map formula_1 is said to be a monomorphism if
For all maps formula_2, formula_3, if formula_4 then formula_5.
Suppose formula_6 and formula_7 in "C". Then "g" and "h" are "A"-valued points of "B", and therefore monomorphism is equivalent to the more familiar statement
"f" is a monomorphism if it is an injective function on points of "B".
Some care is necessary. "f" is an epimorphism if the dual condition holds:
For all maps "g", "h" (of some suitable type), formula_8 implies formula_5.
In set theory, the term "epimorphism" is synonymous with "surjection", i.e.
Every point of "C" is the image, under "f", of some point of "B".
This is clearly not the translation of the first statement into the language of points, and in fact these statements are "not" equivalent in general. However, in some contexts, such as abelian categories, "monomorphism" and "epimorphism" are backed by sufficiently strong conditions that in fact they do allow such a reinterpretation on points.
Similarly, categorical constructions such as the product have pointed analogues. Recall that if "A", "B" are two objects of C, their product "A" × "B" is an object such that
There exist maps formula_9 formula_10, and for any "T" and maps formula_11, there exists a unique map formula_12 such that formula_13 and formula_14.
In this definition, "f" and "g" are "T"-valued points of "A" and "B", respectively, while "h" is a "T"-valued point of "A" × "B". An alternative definition of the product is therefore:
"A" × "B" is an object of C, together with projection maps formula_15 and formula_10, such that "p" and "q" furnish a bijection between points of "A" × "B" and "pairs of points" of "A" and "B".
This is the more familiar definition of the product of two sets.
Geometric origin.
The terminology is geometric in origin; in algebraic geometry, Grothendieck introduced the notion of a scheme in order to unify the subject with arithmetic geometry, which dealt with the same idea of studying solutions to polynomial equations (i.e. algebraic varieties) but where the solutions are not complex numbers but rational numbers, integers, or even elements of some finite field. A scheme is then just that: a scheme for collecting together all the manifestations of a variety defined by the same equations but with solutions taken in different number sets. One scheme gives a complex variety, whose points are its formula_16-valued points, as well as the set of formula_17-valued points (rational solutions to the equations), and even formula_18-valued points (solutions modulo "p").
One feature of the language of points is evident from this example: it is, in general, not enough to consider just points with values in a single object. For example, the equation formula_19 (which defines a scheme) has no real solutions, but it has complex solutions, namely formula_20. It also has one solution modulo 2 and two modulo 5, 13, 29, etc. (all primes that are 1 modulo 4). Just taking the real solutions would give no information whatsoever.
Relation with set theory.
The situation is analogous to the case where C is the category Set, of sets of actual elements. In this case, we have the "one-pointed" set {1}, and the elements of any set "S" are the same as the {1}-valued points of "S". In addition, though, there are the {1,2}-valued points, which are pairs of elements of "S", or elements of "S" × "S". In the context of sets, these higher points are extraneous: "S" is determined completely by its {1}-points. However, as shown above, this is special (in this case, it is because all sets are iterated coproducts of {1}).
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "p \\colon T \\to A"
},
{
"math_id": 1,
"text": "f"
},
{
"math_id": 2,
"text": "g"
},
{
"math_id": 3,
"text": "h"
},
{
"math_id": 4,
"text": "f \\circ g = f \\circ h"
},
{
"math_id": 5,
"text": "g = h"
},
{
"math_id": 6,
"text": "f \\colon B \\to C"
},
{
"math_id": 7,
"text": "g, h \\colon A \\to B"
},
{
"math_id": 8,
"text": "g \\circ f = h \\circ f"
},
{
"math_id": 9,
"text": "p \\colon A \\times B \\to A,"
},
{
"math_id": 10,
"text": "q \\colon A \\times B \\to B"
},
{
"math_id": 11,
"text": "f \\colon T \\to A, g \\colon T \\to B"
},
{
"math_id": 12,
"text": "h \\colon T \\to A \\times B"
},
{
"math_id": 13,
"text": "f = p \\circ h"
},
{
"math_id": 14,
"text": "g = q \\circ h"
},
{
"math_id": 15,
"text": "p \\colon A \\times B \\to A"
},
{
"math_id": 16,
"text": "(\\operatorname{Spec}\\mathbb{C})"
},
{
"math_id": 17,
"text": "(\\operatorname{Spec}\\mathbb{Q})"
},
{
"math_id": 18,
"text": "(\\operatorname{Spec}\\mathbb{F}_p)"
},
{
"math_id": 19,
"text": "x^2 + 1 = 0"
},
{
"math_id": 20,
"text": "\\pm i"
}
] | https://en.wikipedia.org/wiki?curid=11494409 |
1149596 | TPK algorithm | Program to compare computer programming languages
The TPK algorithm is a simple program introduced by Donald Knuth and Luis Trabb Pardo to illustrate the evolution of computer programming languages. In their 1977 work "The Early Development of Programming Languages", Trabb Pardo and Knuth introduced a small program that involved arrays, indexing, mathematical functions, subroutines, I/O, conditionals and iteration. They then wrote implementations of the algorithm in several early programming languages to show how such concepts were expressed.
To explain the name "TPK", the authors referred to Grimm's law (which concerns the consonants 't', 'p', and 'k'), the sounds in the word "typical", and their own initials (Trabb Pardo and Knuth). In a talk based on the paper, Knuth said:
<templatestyles src="Template:Blockquote/styles.css" />You can only appreciate how deep the subject is by seeing how good people struggled with it and how the ideas emerged one at a time. In order to study this—Luis I think was the main instigator of this idea—we take one program—one algorithm—and we write it in every language. And that way from one example we can quickly psych out the flavor of that particular language. We call this the TPK program, and well, the fact that it has the initials of Trabb Pardo and Knuth is just a funny coincidence.
The algorithm.
Knuth describes it as follows:
<templatestyles src="Template:Blockquote/styles.css" />We introduced a simple procedure called the “TPK algorithm,” and gave the flavor of each language by expressing TPK in each particular style. […] The TPK algorithm inputs eleven numbers formula_0; then it outputs a sequence of eleven pairs formula_1 where
formula_2
This simple task is obviously not much of a challenge, in any decent computer language.
In pseudocode:
ask for 11 numbers to be read into a sequence "S"
reverse sequence "S"
for each "item" in sequence "S"
    call a function to do an operation
    if "result" overflows
        alert user
    else
        print "result"
The algorithm reads eleven numbers from an input device, stores them in an array, and then processes them in reverse order, applying a user-defined function to each value and reporting either the value of the function or a message to the effect that the value has exceeded some threshold.
Implementations.
Implementations in the original paper.
In the original paper, which covered "roughly the first decade" of the development of high-level programming languages (from 1945 up to 1957), they gave the following example implementation "in a dialect of ALGOL 60", noting that ALGOL 60 was a later development than the languages actually discussed in the paper:
TPK: begin integer i; real y; real array a[0:10];
real procedure f(t); real t; value t;
f := sqrt(abs(t)) + 5 × t ↑ 3;
for i := 0 step 1 until 10 do read(a[i]);
for i := 10 step -1 until 0 do
begin y := f(a[i]);
if y > 400 then write(i, 'TOO LARGE')
else write(i, y);
end
end TPK.
As many of the early high-level languages could not handle the TPK algorithm exactly, they allow the following modifications:
With these modifications when necessary, the authors implement this algorithm in Konrad Zuse's Plankalkül, in Goldstine and von Neumann's flow diagrams, in Haskell Curry's proposed notation, in Short Code of John Mauchly and others, in the Intermediate Program Language of Arthur Burks, in the notation of Heinz Rutishauser, in the language and compiler by Corrado Böhm in 1951–52, in Autocode of Alick Glennie, in the A-2 system of Grace Hopper, in the Laning and Zierler system, in the earliest proposed Fortran (1954) of John Backus, in the Autocode for Mark 1 by Tony Brooker, in ПП-2 of Andrey Ershov, in BACAIC of Mandalay Grems and R. E. Porter, in Kompiler 2 of A. Kenton Elsworth and others, in ADES of E. K. Blum, the Internal Translator of Alan Perlis, in Fortran of John Backus, in ARITH-MATIC and MATH-MATIC from Grace Hopper's lab, in the system of Bauer and Samelson, and (in addenda in 2003 and 2009) PACT I and TRANSCODE. They then describe what kind of arithmetic was available, and provide a subjective rating of these languages on parameters of "implementation", "readability", "control structures", "data structures", "machine independence" and "impact", besides mentioning what each was the first to do.
Implementations in more recent languages.
C implementation.
This shows a C implementation equivalent to the above ALGOL 60.
#include <math.h>
#include <stdio.h>

double f(double t)
{
    return sqrt(fabs(t)) + 5 * pow(t, 3);
}

int main(void)
{
    double a[11] = {0}, y;
    for (int i = 0; i < 11; i++)
        scanf("%lf", &a[i]);
    for (int i = 10; i >= 0; i--) {
        y = f(a[i]);
        if (y > 400)
            printf("%d TOO LARGE\n", i);
        else
            printf("%d %.16g\n", i, y);
    }
    return 0;
}
Python implementation.
This shows a Python implementation.
from math import sqrt
def f(t):
    return sqrt(abs(t)) + 5 * t ** 3

a = [float(input()) for _ in range(11)]
for i, t in reversed(list(enumerate(a))):
    y = f(t)
    print(i, "TOO LARGE" if y > 400 else y)
Rust implementation.
This shows a Rust implementation.
use std::{io, iter::zip};
fn f(t: f64) -> f64 {
    t.abs().sqrt() + 5.0 * t.powi(3)
}

fn main() {
    let mut a = [0f64; 11];
    for (t, input) in zip(&mut a, io::stdin().lines()) {
        *t = input.unwrap().parse().unwrap();
    }
    a.iter().enumerate().rev().for_each(|(i, &t)| match f(t) {
        y if y > 400.0 => println!("{i} TOO LARGE"),
        y => println!("{i} {y}"),
    });
}
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "a_0, a_1, \\ldots, a_{10}"
},
{
"math_id": 1,
"text": "(10, b_{10}), (9, b_9), \\ldots, (0, b_0),"
},
{
"math_id": 2,
"text": "b_i = \\begin{cases}\n f(a_i), & \\text{if }f(a_i) \\le 400; \\\\\n 999, & \\text{if }f(a_i) > 400; \\end{cases} \\quad f(x) = \\sqrt{|x|} + 5x^3."
},
{
"math_id": 3,
"text": "\\sqrt{x}"
},
{
"math_id": 4,
"text": "10, f(10), 9, f(9), \\ldots, 0, f(0)"
},
{
"math_id": 5,
"text": "f(i)"
},
{
"math_id": 6,
"text": "\\sqrt{|a_i|} + 5x^3"
}
] | https://en.wikipedia.org/wiki?curid=1149596 |
11496902 | Secondary polynomials | In mathematics, the secondary polynomials formula_0 associated with a sequence formula_1 of polynomials orthogonal with respect to a density formula_2 are defined by
formula_3
To see that the functions formula_4 are indeed polynomials, consider the simple example of formula_5 Then,
formula_6
which is a polynomial formula_7 provided that the three integrals in formula_8 (the moments of the density formula_9) are convergent. | [
{
"math_id": 0,
"text": "\\{q_n(x)\\}"
},
{
"math_id": 1,
"text": "\\{p_n(x)\\}"
},
{
"math_id": 2,
"text": "\\rho(x)"
},
{
"math_id": 3,
"text": " q_n(x) = \\int_\\mathbb{R}\\! \\frac{p_n(t) - p_n(x)}{t - x} \\rho(t)\\,dt. "
},
{
"math_id": 4,
"text": "q_n(x)"
},
{
"math_id": 5,
"text": "p_0(x)=x^3."
},
{
"math_id": 6,
"text": "\\begin{align} q_0(x) &{}\n= \\int_\\mathbb{R} \\! \\frac{t^3 - x^3}{t - x} \\rho(t)\\,dt \\\\\n&{}\n= \\int_\\mathbb{R} \\! \\frac{(t - x)(t^2+tx+x^2)}{t - x} \\rho(t)\\,dt \\\\\n&{}\n= \\int_\\mathbb{R} \\! (t^2+tx+x^2)\\rho(t)\\,dt \\\\\n&{}\n= \\int_\\mathbb{R} \\! t^2\\rho(t)\\,dt\n+ x\\int_\\mathbb{R} \\! t\\rho(t)\\,dt\n+ x^2\\int_\\mathbb{R} \\! \\rho(t)\\,dt\n\\end{align}"
},
{
"math_id": 7,
"text": "x"
},
{
"math_id": 8,
"text": "t"
},
{
"math_id": 9,
"text": "\\rho"
}
] | https://en.wikipedia.org/wiki?curid=11496902 |
11502667 | Xcas | Computer algebra system
Xcas is a user interface to Giac, which is an open source computer algebra system (CAS) for Windows, macOS and Linux among many other platforms. Xcas is written in C++. Giac can be used directly inside software written in C++.
Xcas has compatibility modes with many popular algebra systems like WolframAlpha, Mathematica, Maple, or MuPAD. Users can use Giac/Xcas to develop formal algorithms or use it in other software. Giac is used in SageMath for calculus operations. Among other things, Xcas can solve equations (Figure 3) and differential equations (Figure 4) and draw graphs. There is a forum for questions about Xcas.
CmathOOoCAS, an OpenOffice.org plugin which allows formal calculation in Calc spreadsheet and Writer word processing, uses Giac to perform calculations.
Features.
Here is a brief overview of what Xcas is able to do:
Example Xcas commands:
History.
Xcas and Giac are open-source projects developed and written by Bernard Parisse and Renée De Graeve at the former Joseph Fourier University of Grenoble (now the Grenoble Alpes University), France since 2000. Xcas and Giac are based on experiences gained with Parisse's former project Erable.
Pocket CAS and CAS Calc P11 utilize Giac.
The system was also chosen by Hewlett-Packard as the CAS for their HP Prime calculator, which utilizes the Giac/Xcas 1.5.0 engine under a dual-license scheme.
In 2013, the mathematical software Xcas was also integrated into GeoGebra's CAS view.
Use in education.
Since 2015, Xcas is used in the French education system. Xcas is also used in German universities, and in Spain and Mexico. It is also used at the University of North Carolina Wilmington and the University of New Mexico. Xcas is used in particular for learning algebra.
χCAS.
There is a port of Giac/Xcas for Casio graphing calculators: fx-CG10, fx-CG20, fx-CG50, fx-9750GIII, fx-9860GIII, called χCAS (KhiCAS). These calculators do not have their own computer algebra system. It is also available for TI Nspire CX, CX-II, and Numworks N0110.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x=1"
}
] | https://en.wikipedia.org/wiki?curid=11502667 |
11503485 | Threshold graph | Graph formed by adding isolated or universal vertices
In graph theory, a threshold graph is a graph that can be constructed from a one-vertex graph by repeated applications of the following two operations:
1. Addition of a single isolated vertex to the graph.
2. Addition of a single dominating vertex to the graph, i.e. a single vertex that is connected to all other vertices.
For example, the graph of the figure is a threshold graph. It can be constructed by beginning with a single-vertex graph (vertex 1), and then adding black vertices as isolated vertices and red vertices as dominating vertices, in the order in which they are numbered.
Threshold graphs were first introduced by . A chapter on threshold graphs appears in , and the book is devoted to them.
Alternative definitions.
An equivalent definition is the following: a graph is a threshold graph if there are a real number formula_0 and for each vertex formula_1 a real vertex weight formula_2 such that for any two vertices formula_3, formula_4 is an edge if and only if formula_5.
Another equivalent definition is this: a graph is a threshold graph if there are a real number formula_6 and for each vertex formula_1 a real vertex weight formula_7 such that for any vertex set formula_8, formula_9 is independent if and only if formula_10
The name "threshold graph" comes from these definitions: "S" is the "threshold" for the property of being an edge, or equivalently "T" is the threshold for being independent.
Threshold graphs also have a forbidden graph characterization: A graph is a threshold graph if and only if no four of its vertices form an induced subgraph that is a three-edge path graph, a four-edge cycle graph, or a two-edge matching.
Decomposition.
From the definition which uses repeated addition of vertices, one can derive an alternative way of uniquely describing a threshold graph, by means of a string of symbols. formula_11 is always the first character of the string, and represents the first vertex of the graph. Every subsequent character is either formula_12, which denotes the addition of an isolated vertex (or "union" vertex), or formula_13, which denotes the addition of a dominating vertex (or "join" vertex). For example, the string formula_14 represents a star graph with three leaves, while formula_15 represents a path on three vertices. The graph of the figure can be represented as formula_16
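As an illustration, a creation sequence can be expanded into a concrete vertex and edge set; the Python sketch below assumes the sequence is given as a string over the characters "u" and "j" (the initial vertex is implicit), and the function name and vertex numbering are arbitrary choices.
def threshold_graph(seq):
    """Build a threshold graph from a creation sequence.

    seq is a string over {'u', 'j'}: 'u' adds an isolated vertex and 'j' adds
    a dominating vertex.  Vertices are numbered 0, 1, 2, ... in insertion
    order; the initial one-vertex graph is vertex 0.
    """
    edges = set()
    vertices = [0]
    for k, op in enumerate(seq, start=1):
        if op == 'j':                      # dominating vertex: join to everything so far
            edges.update((v, k) for v in vertices)
        vertices.append(k)                 # a 'u' vertex contributes no edges
    return vertices, edges

# The star with three leaves from the text (creation sequence epsilon-u-u-j):
print(threshold_graph("uuj"))              # edges {(0, 3), (1, 3), (2, 3)}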
Related classes of graphs and recognition.
Threshold graphs are a special case of cographs, split graphs, and trivially perfect graphs. A graph is a threshold graph if and only if it is both a cograph and a split graph. Every graph that is both a trivially perfect graph and the complementary graph of a trivially perfect graph is a threshold graph. Threshold graphs are also a special case of interval graphs. All these relations can be explained in terms of their characterisation by forbidden induced subgraphs. A cograph is a graph with no induced path on four vertices, P4, and a threshold graph is a graph with no induced P4, C4 nor 2K2. C4 is a cycle of four vertices and 2K2 is its complement, that is, two disjoint edges. This also explains why threshold graphs are closed under taking complements; the P4 is self-complementary, hence if a graph is P4-, C4- and 2K2-free, its complement is as well.
showed that threshold graphs can be recognized in linear time; if a graph is not threshold, an obstruction (one of P4, C4, or 2K2) will be output.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S"
},
{
"math_id": 1,
"text": "v"
},
{
"math_id": 2,
"text": "w(v)"
},
{
"math_id": 3,
"text": "v,u"
},
{
"math_id": 4,
"text": "uv"
},
{
"math_id": 5,
"text": "w(u)+w(v)> S"
},
{
"math_id": 6,
"text": "T"
},
{
"math_id": 7,
"text": "a(v)"
},
{
"math_id": 8,
"text": "X\\subseteq V"
},
{
"math_id": 9,
"text": "X"
},
{
"math_id": 10,
"text": "\\sum_{v \\in X} a(v) \\le T."
},
{
"math_id": 11,
"text": "\\epsilon"
},
{
"math_id": 12,
"text": "u"
},
{
"math_id": 13,
"text": "j"
},
{
"math_id": 14,
"text": "\\epsilon u u j"
},
{
"math_id": 15,
"text": "\\epsilon u j"
},
{
"math_id": 16,
"text": "\\epsilon uuujuuj "
}
] | https://en.wikipedia.org/wiki?curid=11503485 |
11503563 | H-vector | In algebraic combinatorics, the "h"-vector of a simplicial polytope is a fundamental invariant of the polytope which encodes the number of faces of different dimensions and allows one to express the Dehn–Sommerville equations in a particularly simple form. A characterization of the set of "h"-vectors of simplicial polytopes was conjectured by Peter McMullen and proved by Lou Billera and Carl W. Lee and Richard Stanley ("g"-theorem). The definition of "h"-vector applies to arbitrary abstract simplicial complexes. The "g"-conjecture stated that for simplicial spheres, all possible "h"-vectors occur already among the "h"-vectors of the boundaries of convex simplicial polytopes. It was proven in December 2018 by Karim Adiprasito.
Stanley introduced a generalization of the "h"-vector, the toric "h"-vector, which is defined for an arbitrary ranked poset, and proved that for the class of Eulerian posets, the Dehn–Sommerville equations continue to hold. A different, more combinatorial, generalization of the "h"-vector that has been extensively studied is the flag "h"-vector of a ranked poset. For Eulerian posets, it can be more concisely expressed by means of a noncommutative polynomial in two variables called the "cd"-index.
Definition.
Let Δ be an abstract simplicial complex of dimension "d" − 1 with "f""i" "i"-dimensional faces and "f"−1 = 1. These numbers are arranged into the "f"-vector of Δ,
formula_0
An important special case occurs when Δ is the boundary of a "d"-dimensional convex polytope.
For "k" = 0, 1, …, "d", let
formula_1
The tuple
formula_2
is called the "h"-vector of Δ. In particular, formula_3, formula_4, and formula_5, where formula_6 is the Euler characteristic of formula_7. The "f"-vector and the "h"-vector uniquely determine each other through the linear relation
formula_8
from which it follows that, for formula_9,
formula_10
In particular, formula_11. Let "R" = k[Δ] be the Stanley–Reisner ring of Δ. Then its Hilbert–Poincaré series can be expressed as
formula_12
This motivates the definition of the "h"-vector of a finitely generated positively graded algebra of Krull dimension "d" as the numerator of its Hilbert–Poincaré series written with the denominator (1 − "t")"d".
The "h"-vector is closely related to the "h"*-vector for a convex lattice polytope, see Ehrhart polynomial.
Recurrence relation.
The formula_13-vector formula_14 can be computed from the formula_15-vector formula_16 by using the recurrence relation
formula_17
formula_18
formula_19.
and finally setting formula_20 for formula_21. For small examples, one can use this method to compute formula_13-vectors quickly by hand by recursively filling the entries of an array similar to Pascal's triangle. For example, consider the boundary complex formula_22 of an octahedron. The formula_15-vector of formula_22 is formula_23. To compute the formula_13-vector of formula_7, construct a triangular array by first writing formula_24 formula_25s down the left edge and the formula_15-vector down the right edge.
formula_26
(We set formula_27 just to make the array triangular.) Then, starting from the top, fill each remaining entry by subtracting its upper-left neighbor from its upper-right neighbor. In this way, we generate the following array:
formula_28
The entries of the bottom row (apart from the final formula_29) are the entries of the formula_13-vector. Hence, the formula_13-vector of formula_22 is formula_30.
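The recurrence is straightforward to code. The Python sketch below assumes the formula_15-vector is supplied as the list formula_16 and reproduces the octahedron computation above.
def h_vector(f):
    """h-vector from the f-vector (f_-1, f_0, ..., f_{d-1}), built row by row
    with the Pascal-like rule h^i_k = h^{i-1}_k - h^{i-1}_{k-1}."""
    d = len(f) - 1
    row = [1]                                         # row i = -1
    for i in range(d):                                # rows i = 0, ..., d-1
        row = [1] + [row[k] - row[k - 1] for k in range(1, i + 1)] + [f[i + 1]]
    return [1] + [row[k] - row[k - 1] for k in range(1, d + 1)]   # row i = d

print(h_vector([1, 6, 12, 8]))   # boundary complex of the octahedron: [1, 3, 3, 1]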
Toric "h"-vector.
To an arbitrary graded poset "P", Stanley associated a pair of polynomials "f"("P","x") and "g"("P","x"). Their definition is recursive in terms of the polynomials associated to intervals [0,"y"] for all "y" ∈ "P", y ≠ 1, viewed as ranked posets of lower rank (0 and 1 denote the minimal and the maximal elements of "P"). The coefficients of "f"("P","x") form the toric "h"-vector of "P". When "P" is an Eulerian poset of rank "d" + 1 such that "P" − 1 is simplicial, the toric "h"-vector coincides with the ordinary "h"-vector constructed using the numbers "f""i" of elements of "P" − 1 of given rank "i" + 1. In this case the toric "h"-vector of "P" satisfies the Dehn–Sommerville equations
formula_31
The reason for the adjective "toric" is a connection of the toric "h"-vector with the intersection cohomology of a certain projective toric variety "X" whenever "P" is the boundary complex of rational convex polytope. Namely, the components are the dimensions of the even intersection cohomology groups of "X":
formula_32
(the odd intersection cohomology groups of "X" are all zero). The Dehn–Sommerville equations are a manifestation of the Poincaré duality in the intersection cohomology of "X". Kalle Karu proved that the toric "h"-vector of a polytope is unimodal, regardless of whether the polytope is rational or not.
Flag "h"-vector and "cd"-index.
A different generalization of the notions of "f"-vector and "h"-vector of a convex polytope has been extensively studied. Let formula_33 be a finite graded poset of rank "n", so that each maximal chain in formula_33 has length "n". For any formula_34, a subset of formula_35, let formula_36 denote the number of chains in formula_33 whose ranks constitute the set formula_34. More formally, let
formula_37
be the rank function of formula_33 and let formula_38 be the formula_34-rank selected subposet, which consists of the elements from formula_33 whose rank is in formula_34:
formula_39
Then formula_36 is the number of the maximal chains in formula_38 and the function
formula_40
is called the flag "f"-vector of "P". The function
formula_41
is called the flag "h"-vector of formula_33. By the inclusion–exclusion principle,
formula_42
The flag "f"- and "h"-vectors of formula_33 refine the ordinary "f"- and "h"-vectors of its order complex formula_43:
formula_44
The flag "h"-vector of formula_33 can be displayed via a polynomial in noncommutative variables "a" and "b". For any subset formula_34 of {1,…,"n"}, define the corresponding monomial in "a" and "b",
formula_45
Then the noncommutative generating function for the flag "h"-vector of "P" is defined by
formula_46
From the relation between "α""P"("S") and "β""P"("S"), the noncommutative generating function for the flag "f"-vector of "P" is
formula_47
Margaret Bayer and Louis Billera determined the most general linear relations that hold between the components of the flag "h"-vector of an Eulerian poset "P".
Fine noted an elegant way to state these relations: there exists a noncommutative polynomial Φ"P"("c","d"), called the "cd"-index of "P", such that
formula_48
Stanley proved that all coefficients of the "cd"-index of the boundary complex of a convex polytope are non-negative. He conjectured that this positivity phenomenon persists for a more general class of Eulerian posets that Stanley calls Gorenstein* complexes and which includes simplicial spheres and complete fans. This conjecture was proved by Kalle Karu. The combinatorial meaning of these non-negative coefficients (an answer to the question "what do they count?") remains unclear.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " f(\\Delta)=(f_{-1},f_0,\\ldots,f_{d-1})."
},
{
"math_id": 1,
"text": " h_k = \\sum_{i=0}^k (-1)^{k-i}\\binom{d-i}{k-i}f_{i-1}. "
},
{
"math_id": 2,
"text": " h(\\Delta)=(h_0,h_1,\\ldots,h_d) "
},
{
"math_id": 3,
"text": "h_{0} = 1"
},
{
"math_id": 4,
"text": "h_{1} = f_{0} - d"
},
{
"math_id": 5,
"text": "h_{d} = (-1)^{d} (1 - \\chi(\\Delta))"
},
{
"math_id": 6,
"text": "\\chi(\\Delta)"
},
{
"math_id": 7,
"text": "\\Delta"
},
{
"math_id": 8,
"text": " \\sum_{i=0}^{d}f_{i-1}(t-1)^{d-i}= \\sum_{k=0}^{d}h_{k}t^{d-k}, "
},
{
"math_id": 9,
"text": "i = 0, \\dotsc, d"
},
{
"math_id": 10,
"text": "f_{i-1} = \\sum_{k=0}^i \\binom{d-k}{i-k} h_{k}."
},
{
"math_id": 11,
"text": "f_{d-1} = h_{0} + h_{1} + \\dotsb + h_{d}"
},
{
"math_id": 12,
"text": " P_{R}(t)=\\sum_{i=0}^{d}\\frac{f_{i-1}t^i}{(1-t)^{i}}=\n\\frac{h_0+h_1t+\\cdots+h_d t^d}{(1-t)^d}. "
},
{
"math_id": 13,
"text": "\\textstyle h"
},
{
"math_id": 14,
"text": "(h_{0}, h_{1}, \\dotsc, h_{d})"
},
{
"math_id": 15,
"text": "\\textstyle f"
},
{
"math_id": 16,
"text": "(f_{-1}, f_{0}, \\dotsc, f_{d-1})"
},
{
"math_id": 17,
"text": "h^{i}_{0} = 1, \\qquad -1 \\le i \\le d"
},
{
"math_id": 18,
"text": "h^{i}_{i+1} = f_{i}, \\qquad -1 \\le i \\le d-1"
},
{
"math_id": 19,
"text": "h^{i}_{k} = h^{i-1}_{k} - h^{i-1}_{k-1}, \\qquad 1 \\le k \\le i \\le d"
},
{
"math_id": 20,
"text": "\\textstyle h_{k} = h^{d}_{k}"
},
{
"math_id": 21,
"text": "\\textstyle 0 \\le k \\le d"
},
{
"math_id": 22,
"text": "\\textstyle \\Delta"
},
{
"math_id": 23,
"text": "\\textstyle (1, 6, 12, 8)"
},
{
"math_id": 24,
"text": "d+2"
},
{
"math_id": 25,
"text": "\\textstyle 1"
},
{
"math_id": 26,
"text": "\\begin{matrix} & & & & 1 & & & \\\\ & & & 1 & & 6 & & \\\\ & & 1 & & & & 12 & \\\\ & 1 & & & & & & 8 \\\\ 1 & & & & & & & & 0 \\end{matrix}"
},
{
"math_id": 27,
"text": "f_{d} = 0"
},
{
"math_id": 28,
"text": "\\begin{matrix} & & & & 1 & & & \\\\ & & & 1 & & 6 & & \\\\ & & 1 & & 5 & & 12 & \\\\ & 1 & & 4 & & 7 & & 8 \\\\ 1 & & 3 & & 3 & & 1 & & 0 \\end{matrix}"
},
{
"math_id": 29,
"text": "0"
},
{
"math_id": 30,
"text": "\\textstyle (1, 3, 3, 1)"
},
{
"math_id": 31,
"text": " h_k = h_{d-k}. "
},
{
"math_id": 32,
"text": " h_k = \\dim_{\\mathbb{Q}} \\operatorname{IH}^{2k}(X,\\mathbb{Q}) "
},
{
"math_id": 33,
"text": "P"
},
{
"math_id": 34,
"text": "S"
},
{
"math_id": 35,
"text": "\\left\\{0, \\ldots, n\\right\\}"
},
{
"math_id": 36,
"text": "\\alpha_P(S)"
},
{
"math_id": 37,
"text": " rk: P\\to\\{0,1,\\ldots,n\\}"
},
{
"math_id": 38,
"text": "P_S"
},
{
"math_id": 39,
"text": " P_S=\\{x\\in P: rk(x)\\in S\\}."
},
{
"math_id": 40,
"text": " S \\mapsto \\alpha_P(S) "
},
{
"math_id": 41,
"text": " S \\mapsto \\beta_P(S), \\quad \n\\beta_P(S) = \\sum_{T \\subseteq S} (-1)^{|S|-|T|} \\alpha_P(S) "
},
{
"math_id": 42,
"text": " \\alpha_P(S) = \\sum_{T\\subseteq S}\\beta_P(T). "
},
{
"math_id": 43,
"text": "\\Delta(P)"
},
{
"math_id": 44,
"text": "f_{i-1}(\\Delta(P)) = \\sum_{|S|=i} \\alpha_P(S), \\quad\nh_{i}(\\Delta(P)) = \\sum_{|S|=i} \\beta_P(S). "
},
{
"math_id": 45,
"text": " u_S = u_1 \\cdots u_n, \\quad \nu_i=a \\text{ for } i\\notin S, u_i=b \\text{ for } i\\in S. "
},
{
"math_id": 46,
"text": "\\Psi_P(a,b) = \\sum_{S} \\beta_P(S) u_{S}. "
},
{
"math_id": 47,
"text": " \\Psi_P(a,a+b) = \\sum_{S} \\alpha_P(S) u_{S}. "
},
{
"math_id": 48,
"text": " \\Psi_P(a,b) = \\Phi_P(a+b, ab+ba). "
}
] | https://en.wikipedia.org/wiki?curid=11503563 |
11503699 | Stieltjes transformation | In mathematics, the Stieltjes transformation "S""ρ"("z") of a measure of density "ρ" on a real interval I is the function of the complex variable z defined outside I by the formula
formula_0
Under certain conditions we can reconstitute the density function "ρ" starting from its Stieltjes transformation thanks to the inverse formula of Stieltjes-Perron. For example, if the density "ρ" is continuous throughout I, one will have inside this interval
formula_1
Connections with moments of measures.
If the measure of density "ρ" has moments of any order defined for each integer by the equality
formula_2
then the Stieltjes transformation of "ρ" admits for each integer n the asymptotic expansion in the neighbourhood of infinity given by
formula_3
Under certain conditions the complete expansion as a Laurent series can be obtained:
formula_4
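As a small worked check, the moment expansion can be verified for a concrete density with SymPy; here the uniform density on [0, 1] is chosen, whose Stieltjes transform is log("z"/("z" − 1)) and whose moments are 1/("n" + 1).
import sympy as sp

z, t = sp.symbols('z t')
S = sp.log(z / (z - 1))                  # Stieltjes transform of rho = 1 on [0, 1]
moments = [sp.integrate(t**n, (t, 0, 1)) for n in range(6)]   # m_n = 1/(n + 1)
partial = sum(m / z**(n + 1) for n, m in enumerate(moments))
# The truncated moment series matches S at z = 50 up to the first omitted term (about 2e-13):
print(sp.N(S.subs(z, 50) - partial.subs(z, 50)))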
Relationships to orthogonal polynomials.
The correspondence formula_5 defines an inner product on the space of continuous functions on the interval I.
If {"Pn"} is a sequence of orthogonal polynomials for this product, we can create the sequence of associated secondary polynomials by the formula
formula_6
It appears that formula_7 is a Padé approximation of "S""ρ"("z") in a neighbourhood of infinity, in the sense that
formula_8
Since these two sequences of polynomials satisfy the same recurrence relation in three terms, we can develop a continued fraction for the Stieltjes transformation whose successive convergents are the fractions "Fn"("z").
The Stieltjes transformation can also be used to construct from the density "ρ" an effective measure for transforming the secondary polynomials into an orthogonal system. (For more details see the article secondary measure.) | [
{
"math_id": 0,
"text": "S_{\\rho}(z)=\\int_I\\frac{\\rho(t)\\,dt}{z-t}, \\qquad z \\in \\mathbb{C} \\setminus I."
},
{
"math_id": 1,
"text": "\\rho(x)=\\lim_{\\varepsilon \\to 0^+} \\frac{S_{\\rho}(x-i\\varepsilon)-S_{\\rho}(x+i\\varepsilon)}{2i\\pi}."
},
{
"math_id": 2,
"text": "m_{n}=\\int_I t^n\\,\\rho(t)\\,dt,"
},
{
"math_id": 3,
"text": "S_{\\rho}(z)=\\sum_{k=0}^{n}\\frac{m_k}{z^{k+1}}+o\\left(\\frac{1}{z^{n+1}}\\right)."
},
{
"math_id": 4,
"text": "S_{\\rho}(z) = \\sum_{n=0}^{\\infty}\\frac{m_n}{z^{n+1}}."
},
{
"math_id": 5,
"text": "(f,g) \\mapsto \\int_I f(t) g(t) \\rho(t) \\, dt"
},
{
"math_id": 6,
"text": "Q_n(x)=\\int_I \\frac{P_n (t)-P_n (x)}{t-x}\\rho (t)\\,dt."
},
{
"math_id": 7,
"text": "F_n(z) = \\frac{Q_n(z)}{P_n(z)}"
},
{
"math_id": 8,
"text": "S_\\rho(z)-\\frac{Q_n(z)}{P_n(z)}=O\\left(\\frac{1}{z^{2n}}\\right)."
}
] | https://en.wikipedia.org/wiki?curid=11503699 |
1150377 | Slurry | Mixture of solids suspended in liquid
A slurry is a mixture of denser solids suspended in liquid, usually water. The most common use of slurry is as a means of transporting solids or separating minerals, the liquid being a carrier that is pumped on a device such as a centrifugal pump. The size of solid particles may vary from 1 micrometre up to hundreds of millimetres.
The particles may settle below a certain transport velocity and the mixture can behave like a Newtonian or non-Newtonian fluid. Depending on the mixture, the slurry may be abrasive and/or corrosive.
Examples.
Examples of slurries include:
Calculations.
Determining solids fraction.
To determine the percent solids (or solids fraction) of a slurry from the density of the slurry, solids and liquid
formula_0
where
formula_1 is the solids fraction of the slurry (stated by mass)
formula_2 is the solids density
formula_3 is the slurry density
formula_4 is the liquid density
In aqueous slurries, as is common in mineral processing, the specific gravity of the species is typically used, and since specific gravity of water is taken to be 1, this relation is typically written:
formula_5
even though specific gravity with units tonnes/m3 (t/m3) is used instead of the SI density unit, kg/m3.
Liquid mass from mass fraction of solids.
To determine the mass of liquid in a sample given the mass of solids and the mass fraction:
By definition
formula_6
therefore
formula_7
and
formula_8
then
formula_9
and therefore
formula_10
where
formula_1 is the solids fraction of the slurry
formula_11 is the mass or mass flow of solids in the sample or stream
formula_12 is the mass or mass flow of slurry in the sample or stream
formula_13 is the mass or mass flow of liquid in the sample or stream
formula_14
Volumetric fraction from mass fraction.
Equivalently
formula_15
and in a minerals processing context where the specific gravity of the liquid (water) is taken to be one:
formula_16
So
formula_17
and
formula_18
Then combining with the first equation:
formula_19
So
formula_20
Then since
formula_21
we conclude that
formula_22
where
formula_23 is the solids fraction of the slurry on a "volumetric" basis
formula_24 is the solids fraction of the slurry on a "mass" basis
formula_11 is the mass or mass flow of solids in the sample or stream
formula_12 is the mass or mass flow of slurry in the sample or stream
formula_13 is the mass or mass flow of liquid in the sample or stream
formula_25 is the bulk specific gravity of the solids
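The three relations above translate directly into code; in the Python sketch below the function names and the numbers in the example are illustrative choices only.
def solids_fraction_by_mass(rho_s, rho_sl, rho_l):
    """Mass fraction of solids from the solids, slurry and liquid densities."""
    return rho_s * (rho_sl - rho_l) / (rho_sl * (rho_s - rho_l))

def liquid_mass(m_solids, phi_mass):
    """Mass of liquid from the mass of solids and the solids mass fraction."""
    return (1 - phi_mass) / phi_mass * m_solids

def solids_fraction_by_volume(phi_mass, sg_solids):
    """Volumetric solids fraction from the mass fraction, taking the SG of the liquid as 1."""
    return 1 / (1 + sg_solids * (1 / phi_mass - 1))

# Example: 2650 kg/m3 solids in water (1000 kg/m3), slurry density 1400 kg/m3.
phi_m = solids_fraction_by_mass(2650.0, 1400.0, 1000.0)
print(round(phi_m, 3), round(solids_fraction_by_volume(phi_m, 2.65), 3))   # 0.459 0.242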
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\phi_{sl}=\\frac{\\rho_{s}(\\rho_{sl} - \\rho_{l})}{\\rho_{sl}(\\rho_{s} - \\rho_{l})}"
},
{
"math_id": 1,
"text": "\\phi_{sl}"
},
{
"math_id": 2,
"text": "\\rho_{s}"
},
{
"math_id": 3,
"text": "\\rho_{sl}"
},
{
"math_id": 4,
"text": "\\rho_{l}"
},
{
"math_id": 5,
"text": "\\phi_{sl}=\\frac{\\rho_{s}(\\rho_{sl} - 1)}{\\rho_{sl}(\\rho_{s} - 1)}"
},
{
"math_id": 6,
"text": "\\phi_{sl}=\\frac{M_{s}}{M_{sl}}"
},
{
"math_id": 7,
"text": "M_{sl}=\\frac{M_{s}}{\\phi_{sl}}"
},
{
"math_id": 8,
"text": "M_{s}+M_{l}=\\frac{M_{s}}{\\phi_{sl}}"
},
{
"math_id": 9,
"text": "M_{l}=\\frac{M_{s}}{\\phi_{sl}}-M_{s}"
},
{
"math_id": 10,
"text": "M_{l}=\\frac{1-\\phi_{sl}}{\\phi_{sl}}M_{s}"
},
{
"math_id": 11,
"text": "M_{s}"
},
{
"math_id": 12,
"text": "M_{sl}"
},
{
"math_id": 13,
"text": "M_{l}"
},
{
"math_id": 14,
"text": "\\phi_{sl,m}=\\frac{M_{s}}{M_{sl}}"
},
{
"math_id": 15,
"text": "\\phi_{sl,v}=\\frac{V_{s}}{V_{sl}}"
},
{
"math_id": 16,
"text": "\\phi_{sl,v}=\\frac{\\frac{M_{s}}{SG_{s}}}{\\frac{M_{s}}{SG_{s}}+\\frac{M_{l}}{1}}"
},
{
"math_id": 17,
"text": "\\phi_{sl,v}=\\frac{M_{s}}{M_{s}+M_{l}SG_{s}}"
},
{
"math_id": 18,
"text": "\\phi_{sl,v}=\\frac{1}{1+\\frac{M_{l}SG_{s}}{M_{s}}}"
},
{
"math_id": 19,
"text": "\\phi_{sl,v}=\\frac{1}{1+\\frac{M_{l}SG_{s}}{\\phi_{sl,m}M_{s}}\\frac{M_{s}}{M_{s}+M_{l}}}"
},
{
"math_id": 20,
"text": "\\phi_{sl,v}=\\frac{1}{1+\\frac{SG_{s}}{\\phi_{sl,m}}\\frac{M_{l}}{M_{s}+M_{l}}}"
},
{
"math_id": 21,
"text": "\\phi_{sl,m}=\\frac{M_{s}}{M_{s}+M_{l}}=1-\\frac{M_{l}}{M_{s}+M_{l}}"
},
{
"math_id": 22,
"text": "\\phi_{sl,v}=\\frac{1}{1+SG_{s}(\\frac{1}{\\phi_{sl,m}}-1)}"
},
{
"math_id": 23,
"text": "\\phi_{sl,v}"
},
{
"math_id": 24,
"text": "\\phi_{sl,m}"
},
{
"math_id": 25,
"text": "SG_{s}"
}
] | https://en.wikipedia.org/wiki?curid=1150377 |
11508286 | Discrepancy of hypergraphs | Area of discrepancy theory
Discrepancy of hypergraphs is an area of discrepancy theory that studies the discrepancy of general set systems.
Definitions.
In the classical setting, we aim at partitioning the vertices of a hypergraph formula_0 into two classes in such a way that ideally each hyperedge contains the same number of vertices in both classes. A partition into two classes can be represented by a coloring formula_1. We call −1 and +1 "colors". The color-classes formula_2 and formula_3 form the corresponding partition. For a hyperedge formula_4, set
formula_5
The "discrepancy of formula_6 with respect to formula_7" and the "discrepancy of formula_6" are defined by
formula_8
formula_9
These notions as well as the term 'discrepancy' seem to have appeared for the first time in a paper of Beck. Earlier results on this problem include the famous lower bound on the discrepancy of arithmetic progressions by Roth and upper bounds for this problem and other results by Erdős and Spencer and Sárközi. At that time, discrepancy problems were called "quasi-Ramsey problems".
Examples.
To get some intuition for this concept, let's have a look at a few examples.
The last example shows that we cannot expect to determine the discrepancy by looking at a single parameter like the number of hyperedges. Still, the size of the hypergraph yields first upper bounds.
General hypergraphs.
1. For any hypergraph formula_6 with "n" vertices and "m" edges:
formula_20
The proof is a simple application of the probabilistic method. Let formula_21 be a random coloring, i.e. we have
formula_22
independently for all formula_23. Since formula_24 is a sum of independent −1, 1 random variables, we have formula_25 for all formula_26 and formula_27. Taking formula_28 gives
formula_29
Since a random coloring with positive probability has discrepancy at most formula_30, in particular, there are colorings that have discrepancy at most formula_30. Hence formula_31
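The argument is easy to probe numerically: for a modest randomly generated hypergraph, a single uniformly random coloring typically already has discrepancy below the bound formula_20. The Python sketch below uses arbitrary parameters (20 vertices and 50 random hyperedges).
import math
import random

def disc(edges, chi):
    """Discrepancy of the coloring chi (a dict vertex -> +1/-1): max over edges of |chi(E)|."""
    return max(abs(sum(chi[v] for v in E)) for E in edges)

random.seed(0)
n, m = 20, 50
vertices = range(n)
edges = [frozenset(random.sample(vertices, random.randint(2, n))) for _ in range(m)]

bound = math.sqrt(2 * n * math.log(2 * m))           # sqrt(2n ln(2m))
chi = {v: random.choice((-1, 1)) for v in vertices}  # uniformly random coloring
print(disc(edges, chi), "vs bound", round(bound, 2))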
2. For any hypergraph formula_6 with "n" vertices and "m" edges such that formula_32:
formula_33
To prove this, a much more sophisticated approach using the entropy function was necessary.
Of course this is particularly interesting for formula_34. In the case formula_35, formula_36 can be shown for n large enough. Therefore, this result is usually known as 'Six Standard Deviations Suffice'. It is considered to be one of the milestones of discrepancy theory. The entropy method has seen numerous other applications, e.g. in the proof of the tight upper bound for the arithmetic progressions of Matoušek and Spencer or the upper bound in terms of the primal shatter function due to Matoušek.
Hypergraphs of bounded degree.
Better discrepancy bounds can be attained when the hypergraph has a "bounded degree", that is, each vertex of formula_6 is contained in at most "t" edges, for some small "t". In particular:
Special hypergraphs.
Better bounds on the discrepancy are possible for hypergraphs with a special structure, such as: | [
{
"math_id": 0,
"text": "\\mathcal{H}=(V, \\mathcal{E})"
},
{
"math_id": 1,
"text": "\\chi \\colon V \\rightarrow \\{-1, +1\\}"
},
{
"math_id": 2,
"text": "\\chi^{-1}(-1)"
},
{
"math_id": 3,
"text": "\\chi^{-1}(+1)"
},
{
"math_id": 4,
"text": "E \\in \\mathcal{E}"
},
{
"math_id": 5,
"text": "\\chi(E) := \\sum_{v\\in E} \\chi(v)."
},
{
"math_id": 6,
"text": "\\mathcal{H}"
},
{
"math_id": 7,
"text": "\\chi"
},
{
"math_id": 8,
"text": "\\operatorname{disc}(\\mathcal{H},\\chi) := \\; \\max_{E \\in \\mathcal{E}} |\\chi(E)|,"
},
{
"math_id": 9,
"text": "\\operatorname{disc}(\\mathcal{H}) := \\min_{\\chi:V\\rightarrow\\{-1,+1\\}} \\operatorname{disc}(\\mathcal{H}, \\chi)."
},
{
"math_id": 10,
"text": "E_1 \\cap E_2 = \\varnothing"
},
{
"math_id": 11,
"text": "E_1, E_2 \\in \\mathcal{E}"
},
{
"math_id": 12,
"text": "(V, 2^V)"
},
{
"math_id": 13,
"text": "\\lceil \\frac{1}{2} |V|\\rceil"
},
{
"math_id": 14,
"text": "\\lfloor \\frac{1}{2} |V|\\rfloor"
},
{
"math_id": 15,
"text": "n=4k"
},
{
"math_id": 16,
"text": "k \\in \\mathcal{N}"
},
{
"math_id": 17,
"text": "\\mathcal{H}_n = ([n], \\{E \\subseteq [n] \\mid | E \\cap [2k]| = | E \\setminus [2k]|\\})"
},
{
"math_id": 18,
"text": "\\mathcal{H}_n"
},
{
"math_id": 19,
"text": "\\binom{n/2}{n/4}^2 = \\Theta(\\frac 1 n 2^n)"
},
{
"math_id": 20,
"text": "\\operatorname{disc}(\\mathcal{H}) \\leq \\sqrt{2n \\ln (2m)}."
},
{
"math_id": 21,
"text": "\\chi:V \\rightarrow \\{-1,1\\}"
},
{
"math_id": 22,
"text": "\\Pr(\\chi(v) = -1) = \\Pr(\\chi(v) = 1) = \\frac{1}{2}"
},
{
"math_id": 23,
"text": "v \\in V"
},
{
"math_id": 24,
"text": "\\chi(E) = \\sum_{v \\in E} \\chi(v)"
},
{
"math_id": 25,
"text": "\\Pr(|\\chi(E)|>\\lambda)<2 \\exp(-\\lambda^2/(2n))"
},
{
"math_id": 26,
"text": "E \\subseteq V"
},
{
"math_id": 27,
"text": "\\lambda \\geq 0"
},
{
"math_id": 28,
"text": "\\lambda = \\sqrt{2n \\ln (2m)}"
},
{
"math_id": 29,
"text": "\\Pr(\\operatorname{disc}(\\mathcal{H},\\chi)> \\lambda) \\leq \\sum_{E \\in \\mathcal{E}} \\Pr(|\\chi(E)| > \\lambda) < 1."
},
{
"math_id": 30,
"text": "\\lambda"
},
{
"math_id": 31,
"text": "\\operatorname{disc}(\\mathcal{H}) \\leq \\lambda. \\ \\Box"
},
{
"math_id": 32,
"text": "m \\geq n"
},
{
"math_id": 33,
"text": "\\operatorname{disc}(\\mathcal{H}) \\in O(\\sqrt{n})."
},
{
"math_id": 34,
"text": "m = O(n)"
},
{
"math_id": 35,
"text": "m=n"
},
{
"math_id": 36,
"text": "\\operatorname{disc}(\\mathcal{H}) \\leq 6 \\sqrt{n}"
},
{
"math_id": 37,
"text": "\\operatorname{disc}(\\mathcal{H}) < 2t"
},
{
"math_id": 38,
"text": "\\operatorname{disc}(\\mathcal{H}) = O(\\sqrt t)"
},
{
"math_id": 39,
"text": "\\operatorname{disc}(\\mathcal{H}) \\leq 2t - 3"
},
{
"math_id": 40,
"text": " t \\geq 3 "
},
{
"math_id": 41,
"text": "2t - \\log^* t"
},
{
"math_id": 42,
"text": "\\log^* t"
},
{
"math_id": 43,
"text": "\\operatorname{disc}(\\mathcal{H}) \\leq C \\sqrt{t \\log m} \\log n"
},
{
"math_id": 44,
"text": "\\operatorname{disc}(\\mathcal{H}) = O(\\sqrt{t \\log n})"
}
] | https://en.wikipedia.org/wiki?curid=11508286 |
1150833 | Areal velocity | Term from classical mechanics
In classical mechanics, areal velocity (also called sector velocity or sectorial velocity) is a pseudovector whose length equals the rate of change at which area is swept out by a particle as it moves along a curve. It has SI units of square meters per second (m2/s) and dimension of square length per time L2 T-1.
In the adjoining figure, suppose that a particle moves along the blue curve. At a certain time "t", the particle is located at point "B", and a short while later, at time "t" + Δ"t", the particle has moved to point "C". The region swept out by the particle is shaded in green in the figure, bounded by the line segments "AB" and "AC" and the curve along which the particle moves. The areal velocity magnitude (i.e., the "areal speed") is this region's area divided by the time interval Δ"t" in the limit that Δ"t" becomes vanishingly small. The vector direction is postulated to be normal to the plane containing the position and velocity vectors of the particle, following a convention known as the right hand rule.
Conservation of areal velocity is a general property of central force motion, and, within the context of classical mechanics, is equivalent to the conservation of angular momentum.
Relationship with angular momentum.
Areal velocity is closely related to angular momentum. Any object has an orbital angular momentum about an origin, and this turns out to be, up to a multiplicative scalar constant, equal to the areal velocity of the object about the same origin. A crucial property of angular momentum is that it is conserved under the action of central forces (i.e. forces acting radially toward or away from the origin). Historically, the law of conservation of angular momentum was stated entirely in terms of areal velocity.
A special case of this is Kepler's second law, which states that the areal velocity of a planet, with the sun taken as origin, is constant with time. Because the gravitational force acting on a planet is approximately a central force (since the mass of the planet is small in comparison to that of the sun), the angular momentum of the planet (and hence the areal velocity) must remain (approximately) constant. Isaac Newton was the first scientist to recognize the dynamical significance of Kepler's second law. With the aid of his laws of motion, he proved in 1684 that any planet that is attracted to a fixed center sweeps out equal areas in equal intervals of time. For this reason, the law of conservation of angular momentum was historically called the "principle of equal areas". The law of conservation of angular momentum was later expanded and generalized to more complicated situations not easily describable via the concept of areal velocity. Since the modern form of the law of conservation of angular momentum includes much more than just Kepler's second law, the designation "principle of equal areas" has been dropped in modern works.
Derivation of the connection with angular momentum.
In the situation of the first figure, the area swept out during time period Δ"t" by the particle is approximately equal to the area of triangle "ABC". As Δ"t" approaches zero this near-equality becomes exact as a limit.
Let the point "D" be the fourth corner of parallelogram "ABDC" shown in the figure, so that the vectors "AB" and "AC" add up by the parallelogram rule to vector "AD". Then the area of triangle "ABC" is half the area of parallelogram "ABDC", and the area of "ABDC" is equal to the magnitude of the cross product of vectors "AB" and "AC". This area can also be viewed as a (pseudo)vector with this magnitude, and pointing in a direction perpendicular to the parallelogram (following the right hand rule); this vector is the cross product itself:
formula_0
Hence
formula_1
The areal velocity is this vector area divided by Δ"t" in the limit that Δ"t" becomes vanishingly small:
formula_2
But, formula_3 is the velocity vector formula_4 of the moving particle, so that
formula_5
On the other hand, the angular momentum of the particle is
formula_6
and hence the angular momentum equals 2"m" times the areal velocity.
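The identity can be checked numerically for arbitrary position and velocity vectors; the short sketch below uses NumPy and arbitrary sample values.
import numpy as np

m = 2.0                                  # particle mass (arbitrary units)
r = np.array([1.0, 2.0, 0.0])            # position at time t
v = np.array([-0.5, 0.3, 0.0])           # velocity at time t

areal_velocity = 0.5 * np.cross(r, v)    # dA/dt = (r x v) / 2
L = m * np.cross(r, v)                   # angular momentum L = r x (m v)

print(np.allclose(L, 2 * m * areal_velocity))   # True: L equals 2m times the areal velocity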
Relationship with magnetic dipoles.
Areal velocity is also closely related to the concept of magnetic dipoles in classical electrodynamics. Every electric current possesses a (pseudo)vectorial quantity called a "magnetic dipole moment" about a given origin. In the special case that the current consists of a single moving point charge, the magnetic dipole moment about any given origin turns out to be, up to a scalar factor, equal to the areal velocity of the charge about the same origin. In the more general case where the current consists of a large but finite number of moving point charges, the magnetic dipole moment is the sum of the dipole moments of each of the charges, and hence, is proportional to the sum of the areal velocities of all the charges. In the continuity limit where the number of charges in the current becomes infinite, the sum becomes an integral; i.e., the magnetic dipole moment of a continuous current about a given origin is, up to a scalar factor, equal to the integral of the areal velocity along the current path. If the current path happens to be a closed loop and if the current is the same at all points in the loop, this integral turns out to be independent of the chosen origin, so that the magnetic dipole moment becomes a fundamental constant associated with the current loop.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\text{vector area of parallelogram }ABCD = \\mathbf{r}(t) \\times \\mathbf{r}(t + \\Delta t). "
},
{
"math_id": 1,
"text": " \\text{vector area of triangle }ABC = \\frac{\\mathbf{r}(t) \\times \\mathbf{r}(t + \\Delta t)}{2}. "
},
{
"math_id": 2,
"text": " \\begin{align}\n\\text{areal velocity} &= \\lim_{\\Delta t \\rightarrow 0} \\frac{\\mathbf{r}(t) \\times \\mathbf{r}(t + \\Delta t)}{2 \\Delta t} \\\\\n&= \\lim_{\\Delta t \\rightarrow 0} \\frac{\\mathbf{r}(t) \\times \\bigl( \\mathbf{r}(t) + \\mathbf{r}\\,'(t) \\Delta t \\bigr)}{2 \\Delta t} \\\\\n&= \\lim_{\\Delta t \\rightarrow 0} \\frac{\\mathbf{r}(t) \\times \\mathbf{r}\\,'(t)}{2} \\left( {\\Delta t \\over \\Delta t} \\right) \\\\\n&= \\frac{\\mathbf{r}(t) \\times \\mathbf{r}\\,'(t)}{2}. \n\\end{align} "
},
{
"math_id": 3,
"text": "\\mathbf{r}\\,'(t)"
},
{
"math_id": 4,
"text": "\\mathbf{v}(t)"
},
{
"math_id": 5,
"text": " \\frac{d \\mathbf{A}}{d t} = \\frac{\\mathbf{r} \\times \\mathbf{v}}{2}. "
},
{
"math_id": 6,
"text": " \\mathbf{L} = \\mathbf{r} \\times m \\mathbf{v}, "
}
] | https://en.wikipedia.org/wiki?curid=1150833 |
1150875 | 157 (number) | Natural number
157 (one hundred [and] fifty-seven) is the number following 156 and preceding 158.
In mathematics.
157 is:
In base 10, 157² is 24649, and 158² is 24964, which uses the same digits. Numbers having this property are listed in OEIS: . The previous entry is 13, and the next entry after 157 is 913.
The simplest right triangle with rational sides that has area 157 has a longest side whose denominator is 45 digits long.
In other fields.
157 is also:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{p^p+1}{p+1}"
}
] | https://en.wikipedia.org/wiki?curid=1150875 |
11510650 | Edmonds' algorithm | Algorithm for the directed version of the minimum spanning tree problem
In graph theory, Edmonds' algorithm or Chu–Liu/Edmonds' algorithm is an algorithm for finding a spanning arborescence of minimum weight (sometimes called an "optimum branching").
It is the directed analog of the minimum spanning tree problem.
The algorithm was proposed independently first by Yoeng-Jin Chu and Tseng-Hong Liu (1965) and then by Jack Edmonds (1967).
Algorithm.
Description.
The algorithm takes as input a directed graph formula_0 where formula_1 is the set of nodes and formula_2 is the set of directed edges, a distinguished vertex formula_3 called the "root", and a real-valued weight formula_4 for each edge formula_5.
It returns a spanning arborescence formula_6 rooted at formula_7 of minimum weight, where the weight of an arborescence is defined to be the sum of its edge weights, formula_8.
The algorithm has a recursive description.
Let formula_9 denote the function which returns a spanning arborescence rooted at formula_7 of minimum weight.
We first remove any edge from formula_2 whose destination is formula_7.
We may also replace any set of parallel edges (edges between the same pair of vertices in the same direction) by a single edge with weight equal to the minimum of the weights of these parallel edges.
Now, for each node formula_10 other than the root, find the edge incoming to formula_10 of lowest weight (with ties broken arbitrarily).
Denote the source of this edge by formula_11.
If the set of edges formula_12 does not contain any cycles, then formula_13.
Otherwise, formula_14 contains at least one cycle.
Arbitrarily choose one of these cycles and call it formula_15.
We now define a new weighted directed graph formula_16 in which the cycle formula_15 is "contracted" into one node as follows:
The nodes of formula_17 are the nodes of formula_1 not in formula_15 plus a "new" node denoted formula_18.
If formula_19 is an edge in formula_2 with formula_20 and formula_21 (an edge coming into the cycle), then include in formula_22 a new edge formula_23, and define formula_24.
If formula_19 is an edge in formula_2 with formula_25 and formula_26 (an edge going away from the cycle), then include in formula_22 a new edge formula_27, and define formula_28.
If formula_19 is an edge in formula_2 with formula_20 and formula_26 (an edge unrelated to the cycle), then include in formula_22 a new edge formula_29, and define formula_28.
For each edge in formula_22, we remember which edge in formula_2 it corresponds to.
Now find a minimum spanning arborescence formula_30 of formula_31 using a call to formula_32.
Since formula_30 is a spanning arborescence, each vertex has exactly one incoming edge.
Let formula_33 be the unique incoming edge to formula_18 in formula_30.
This edge corresponds to an edge formula_34 with formula_35.
Remove the edge formula_36 from formula_15, breaking the cycle.
Mark each remaining edge in formula_15.
For each edge in formula_30, mark its corresponding edge in formula_2.
Now we define formula_9 to be the set of marked edges, which form a minimum spanning arborescence.
Observe that formula_9 is defined in terms of formula_37, with formula_31 having strictly fewer vertices than formula_38. Finding formula_9 for a single-vertex graph is trivial (it is just formula_38 itself), so the recursive algorithm is guaranteed to terminate.
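The recursive description translates almost line by line into code. The following Python sketch follows the contraction and expansion steps above; representing edges as (u, v, w) triples and assuming that a spanning arborescence rooted at the given root exists are choices made here for illustration.
def min_arborescence(nodes, edges, root):
    """Recursive Chu-Liu/Edmonds sketch.

    nodes: hashable node identifiers (assumed not to be frozensets of nodes);
    edges: list of (u, v, w) triples with u != v.  Returns the edge list of a
    minimum spanning arborescence rooted at `root`, assuming one exists.
    """
    # Cheapest incoming edge pi(v) for every node except the root
    # (edges into the root are discarded, ties broken arbitrarily).
    best_in = {}
    for u, v, w in edges:
        if v != root and (v not in best_in or w < best_in[v][2]):
            best_in[v] = (u, v, w)

    # Look for a cycle among the chosen edges.
    cycle = None
    for start in best_in:
        seen, v = [], start
        while v in best_in and v not in seen:
            seen.append(v)
            v = best_in[v][0]
        if v in seen:
            cycle = set(seen[seen.index(v):])
            break
    if cycle is None:
        return list(best_in.values())          # the chosen edges form an arborescence

    # Contract the cycle C into a fresh node v_C and reweight edges entering it.
    v_c = frozenset(cycle)
    new_nodes = [x for x in nodes if x not in cycle] + [v_c]
    new_edges, back = [], {}                   # `back` maps contracted edges to originals
    for u, v, w in edges:
        if u in cycle and v in cycle:
            continue                           # edges inside the cycle are dropped
        if v in cycle:
            e = (u, v_c, w - best_in[v][2])    # entering edge, reweighted
        elif u in cycle:
            e = (v_c, v, w)                    # leaving edge
        else:
            e = (u, v, w)                      # unrelated edge
        new_edges.append(e)
        back.setdefault(e, (u, v, w))

    # Recurse, then expand: keep every cycle edge except the one displaced by
    # the unique edge of the smaller arborescence that enters v_C.
    result, broken = [], None
    for e in min_arborescence(new_nodes, new_edges, root):
        u, v, w = back[e]
        result.append((u, v, w))
        if e[1] == v_c:
            broken = v                         # cycle vertex now reached from outside
    result += [best_in[v] for v in cycle if v != broken]
    return result

# Small example: the two-cycle between "a" and "b" is contracted and broken again.
edges = [("r", "a", 10), ("r", "b", 2), ("a", "b", 1),
         ("b", "a", 1), ("a", "c", 4), ("b", "c", 8)]
print(min_arborescence(["r", "a", "b", "c"], edges, "r"))
# [('r', 'b', 2), ('a', 'c', 4), ('b', 'a', 1)] -- total weight 7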
Running time.
The running time of this algorithm is formula_39. A faster implementation of the algorithm due to Robert Tarjan runs in time formula_40 for sparse graphs and formula_41 for dense graphs. This is as fast as Prim's algorithm for an undirected minimum spanning tree. In 1986, Gabow, Galil, Spencer, and Tarjan produced a faster implementation, with running time formula_42. | [
{
"math_id": 0,
"text": "D = \\langle V, E \\rangle"
},
{
"math_id": 1,
"text": "V"
},
{
"math_id": 2,
"text": "E"
},
{
"math_id": 3,
"text": "r \\in V"
},
{
"math_id": 4,
"text": "w(e)"
},
{
"math_id": 5,
"text": "e \\in E"
},
{
"math_id": 6,
"text": "A"
},
{
"math_id": 7,
"text": "r"
},
{
"math_id": 8,
"text": "w(A) = \\sum_{e \\in A}{w(e)}"
},
{
"math_id": 9,
"text": "f(D, r, w)"
},
{
"math_id": 10,
"text": "v"
},
{
"math_id": 11,
"text": "\\pi(v)"
},
{
"math_id": 12,
"text": "P = \\{(\\pi(v),v) \\mid v \\in V \\setminus \\{ r \\} \\}"
},
{
"math_id": 13,
"text": "f(D,r,w) = P"
},
{
"math_id": 14,
"text": "P"
},
{
"math_id": 15,
"text": "C"
},
{
"math_id": 16,
"text": "D^\\prime = \\langle V^\\prime, E^\\prime \\rangle"
},
{
"math_id": 17,
"text": "V^\\prime"
},
{
"math_id": 18,
"text": "v_C"
},
{
"math_id": 19,
"text": "(u,v)"
},
{
"math_id": 20,
"text": "u\\notin C"
},
{
"math_id": 21,
"text": "v\\in C"
},
{
"math_id": 22,
"text": "E^\\prime"
},
{
"math_id": 23,
"text": "e = (u, v_C)"
},
{
"math_id": 24,
"text": "w^\\prime(e) = w(u,v) - w(\\pi(v),v)"
},
{
"math_id": 25,
"text": "u\\in C"
},
{
"math_id": 26,
"text": "v\\notin C"
},
{
"math_id": 27,
"text": "e = (v_C, v)"
},
{
"math_id": 28,
"text": "w^\\prime(e) = w(u,v) "
},
{
"math_id": 29,
"text": "e = (u, v)"
},
{
"math_id": 30,
"text": "A^\\prime"
},
{
"math_id": 31,
"text": "D^\\prime"
},
{
"math_id": 32,
"text": "f(D^\\prime, r,w^\\prime)"
},
{
"math_id": 33,
"text": "(u, v_C)"
},
{
"math_id": 34,
"text": "(u,v) \\in E"
},
{
"math_id": 35,
"text": "v \\in C"
},
{
"math_id": 36,
"text": "(\\pi(v),v)"
},
{
"math_id": 37,
"text": "f(D^\\prime, r, w^\\prime)"
},
{
"math_id": 38,
"text": "D"
},
{
"math_id": 39,
"text": "O(EV)"
},
{
"math_id": 40,
"text": "O(E \\log V)"
},
{
"math_id": 41,
"text": "O(V^2)"
},
{
"math_id": 42,
"text": "O(E + V \\log V)"
}
] | https://en.wikipedia.org/wiki?curid=11510650 |
11512 | Fast Fourier transform | O(N log N) discrete Fourier transform algorithm
A fast Fourier transform (FFT) is an algorithm that computes the Discrete Fourier Transform (DFT) of a sequence, or its inverse (IDFT). Fourier analysis converts a signal from its original domain (often time or space) to a representation in the frequency domain and vice versa. The DFT is obtained by decomposing a sequence of values into components of different frequencies. This operation is useful in many fields, but computing it directly from the definition is often too slow to be practical. An FFT rapidly computes such transformations by factorizing the DFT matrix into a product of sparse (mostly zero) factors. As a result, it manages to reduce the complexity of computing the DFT from formula_0, which arises if one simply applies the definition of DFT, to formula_1, where n is the data size. The difference in speed can be enormous, especially for long data sets where n may be in the thousands or millions. In the presence of round-off error, many FFT algorithms are much more accurate than evaluating the DFT definition directly or indirectly. There are many different FFT algorithms based on a wide range of published theories, from simple complex-number arithmetic to group theory and number theory.
Fast Fourier transforms are widely used for applications in engineering, music, science, and mathematics. The basic ideas were popularized in 1965, but some algorithms had been derived as early as 1805. In 1994, Gilbert Strang described the FFT as "the most important numerical algorithm of our lifetime", and it was included in Top 10 Algorithms of 20th Century by the IEEE magazine "Computing in Science & Engineering".
The best-known FFT algorithms depend upon the factorization of n, but there are FFTs with formula_2 complexity for all, even prime, n. Many FFT algorithms depend only on the fact that formula_3 is an n'th primitive root of unity, and thus can be applied to analogous transforms over any finite field, such as number-theoretic transforms. Since the inverse DFT is the same as the DFT, but with the opposite sign in the exponent and a 1/"n" factor, any FFT algorithm can easily be adapted for it.
History.
The development of fast algorithms for DFT can be traced to Carl Friedrich Gauss's unpublished 1805 work on the orbits of asteroids Pallas and Juno. Gauss wanted to interpolate the orbits from sample observations; his method was very similar to the one that would be published in 1965 by James Cooley and John Tukey, who are generally credited for the invention of the modern generic FFT algorithm. While Gauss's work predated even Joseph Fourier's 1822 results, he did not analyze the method's complexity, and eventually used other methods to achieve the same end.
Between 1805 and 1965, some versions of FFT were published by other authors. Frank Yates in 1932 published his version called "interaction algorithm", which provided efficient computation of Hadamard and Walsh transforms. Yates' algorithm is still used in the field of statistical design and analysis of experiments. In 1942, G. C. Danielson and Cornelius Lanczos published their version to compute DFT for x-ray crystallography, a field where calculation of Fourier transforms presented a formidable bottleneck. While many methods in the past had focused on reducing the constant factor for formula_0 computation by taking advantage of "symmetries", Danielson and Lanczos realized that one could use the "periodicity" and apply a "doubling trick" to "double [n] with only slightly more than double the labor", though like Gauss they did not do the analysis to discover that this led to formula_1 scaling.
James Cooley and John Tukey independently rediscovered these earlier algorithms and published a more general FFT in 1965 that is applicable when n is composite and not necessarily a power of 2, as well as analyzing the formula_1 scaling. Tukey came up with the idea during a meeting of President Kennedy's Science Advisory Committee where a discussion topic involved detecting nuclear tests by the Soviet Union by setting up sensors to surround the country from outside. To analyze the output of these sensors, an FFT algorithm would be needed. In discussion with Tukey, Richard Garwin recognized the general applicability of the algorithm not just to national security problems, but also to a wide range of problems including one of immediate interest to him, determining the periodicities of the spin orientations in a 3-D crystal of Helium-3. Garwin gave Tukey's idea to Cooley (both worked at IBM's Watson labs) for implementation. Cooley and Tukey published the paper in a relatively short time of six months. As Tukey did not work at IBM, the patentability of the idea was doubted and the algorithm went into the public domain, which, through the computing revolution of the next decade, made FFT one of the indispensable algorithms in digital signal processing.
Definition.
Let formula_4 be complex numbers. The DFT is defined by the formula
formula_5
where formula_6 is a primitive n'th root of 1.
Evaluating this definition directly requires formula_0 operations: there are n outputs Xk , and each output requires a sum of n terms. An FFT is any method to compute the same results in formula_1 operations. All known FFT algorithms require formula_1 operations, although there is no known proof that lower complexity is impossible.
To illustrate the savings of an FFT, consider the count of complex multiplications and additions for formula_7 data points. Evaluating the DFT's sums directly involves formula_8 complex multiplications and formula_9 complex additions, of which formula_10 operations can be saved by eliminating trivial operations such as multiplications by 1, leaving about 30 million operations. In contrast, the radix-2 Cooley–Tukey algorithm, for n a power of 2, can compute the same result with only formula_11 complex multiplications (again, ignoring simplifications of multiplications by 1 and similar) and formula_12 complex additions, in total about 30,000 operations — a thousand times less than with direct evaluation. In practice, actual performance on modern computers is usually dominated by factors other than the speed of arithmetic operations and the analysis is a complicated subject (for example, see Frigo & Johnson, 2005), but the overall improvement from formula_0 to formula_1 remains.
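For reference, the defining sum can be evaluated directly. The short Python sketch below assumes the common sign convention in which the primitive root is e^(−2πi/n); the n outputs each require a sum of n terms, which makes the quadratic cost explicit.
import cmath

def dft(x):
    """Direct evaluation of the DFT definition: n outputs, each a sum of n terms."""
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * cmath.pi * m * k / n) for m in range(n))
            for k in range(n)]

print(dft([1, 2, 3, 4]))   # [(10+0j), (-2+2j), (-2+0j), (-2-2j)] up to round-off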
Algorithms.
Cooley–Tukey algorithm.
By far the most commonly used FFT is the Cooley–Tukey algorithm. This is a divide-and-conquer algorithm that recursively breaks down a DFT of any composite size formula_13 into formula_14 smaller DFTs of size formula_15, along with formula_16 multiplications by complex roots of unity traditionally called twiddle factors (after Gentleman and Sande, 1966).
This method (and the general idea of an FFT) was popularized by a publication of Cooley and Tukey in 1965, but it was later discovered that those two authors had independently re-invented an algorithm known to Carl Friedrich Gauss around 1805 (and subsequently rediscovered several times in limited forms).
The best known use of the Cooley–Tukey algorithm is to divide the transform into two pieces of size n/2 at each step, and is therefore limited to power-of-two sizes, but any factorization can be used in general (as was known to both Gauss and Cooley/Tukey). These are called the "radix-2" and "mixed-radix" cases, respectively (and other variants such as the split-radix FFT have their own names as well). Although the basic idea is recursive, most traditional implementations rearrange the algorithm to avoid explicit recursion. Also, because the Cooley–Tukey algorithm breaks the DFT into smaller DFTs, it can be combined arbitrarily with any other algorithm for the DFT, such as those described below.
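To make the divide-and-conquer structure concrete, the following is a minimal radix-2 decimation-in-time sketch in Python; it assumes the input length is a power of two and the sign convention e^(−2πi/n), and it is written for clarity rather than speed.
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey: split into even- and odd-indexed halves, recurse,
    then combine the two half-size DFTs using the twiddle factors exp(-2*pi*i*k/n)."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    t = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + t[k] for k in range(n // 2)] +
            [even[k] - t[k] for k in range(n // 2)])

# Agrees with direct evaluation of the definition up to round-off:
x = [0.5, 1.0, -1.0, 2.5, 0.0, -0.5, 3.0, 1.5]
direct = [sum(xm * cmath.exp(-2j * cmath.pi * j * k / len(x)) for j, xm in enumerate(x))
          for k in range(len(x))]
print(max(abs(a - b) for a, b in zip(fft(x), direct)))   # ~1e-15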
Other FFT algorithms.
There are FFT algorithms other than Cooley–Tukey.
For formula_13 with coprime formula_14 and formula_15, one can use the prime-factor (Good–Thomas) algorithm (PFA), based on the Chinese remainder theorem, to factorize the DFT similarly to Cooley–Tukey but without the twiddle factors. The Rader–Brenner algorithm (1976) is a Cooley–Tukey-like factorization but with purely imaginary twiddle factors, reducing multiplications at the cost of increased additions and reduced numerical stability; it was later superseded by the split-radix variant of Cooley–Tukey (which achieves the same multiplication count but with fewer additions and without sacrificing accuracy). Algorithms that recursively factorize the DFT into smaller operations other than DFTs include the Bruun and QFT algorithms. (The Rader–Brenner and QFT algorithms were proposed for power-of-two sizes, but it is possible that they could be adapted to general composite n. Bruun's algorithm applies to arbitrary even composite sizes.) Bruun's algorithm, in particular, is based on interpreting the FFT as a recursive factorization of the polynomial formula_17, here into real-coefficient polynomials of the form formula_18 and formula_19.
Another polynomial viewpoint is exploited by the Winograd FFT algorithm, which factorizes formula_17 into cyclotomic polynomials—these often have coefficients of 1, 0, or −1, and therefore require few (if any) multiplications, so Winograd can be used to obtain minimal-multiplication FFTs and is often used to find efficient algorithms for small factors. Indeed, Winograd showed that the DFT can be computed with only formula_16 irrational multiplications, leading to a proven achievable lower bound on the number of multiplications for power-of-two sizes; this comes at the cost of many more additions, a tradeoff no longer favorable on modern processors with hardware multipliers. In particular, Winograd also makes use of the PFA as well as an algorithm by Rader for FFTs of "prime" sizes.
Rader's algorithm, exploiting the existence of a generator for the multiplicative group modulo prime n, expresses a DFT of prime size n as a cyclic convolution of (composite) size "n" – 1, which can then be computed by a pair of ordinary FFTs via the convolution theorem (although Winograd uses other convolution methods). Another prime-size FFT is due to L. I. Bluestein, and is sometimes called the chirp-z algorithm; it also re-expresses a DFT as a convolution, but this time of the "same" size (which can be zero-padded to a power of two and evaluated by radix-2 Cooley–Tukey FFTs, for example), via the identity
formula_20
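As a rough sketch of how this identity converts a DFT into a convolution, the following Python function (illustrative only; the convolution sum is evaluated naively rather than by zero-padded power-of-two FFTs, so it demonstrates correctness rather than speed) pre-multiplies the input by a chirp, convolves it with the conjugate chirp, and post-multiplies by the chirp again:

import cmath

def dft_via_bluestein(x):
    # X_k = sum_m x_m e^(-2 pi i m k / n), rewritten using mk = (m^2 + k^2 - (k-m)^2)/2.
    n = len(x)
    w = lambda q: cmath.exp(-1j * cmath.pi * q / n)     # e^(-pi i q / n)
    a = [x[m] * w(m * m) for m in range(n)]             # chirp-premultiplied input
    b = [w(-(k * k)) for k in range(-(n - 1), n)]       # conjugate chirp, indices -(n-1)..(n-1)
    return [w(k * k) * sum(a[m] * b[(k - m) + (n - 1)] for m in range(n))
            for k in range(n)]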
Hexagonal fast Fourier transform (HFFT) aims at computing an efficient FFT for hexagonally sampled data by using a new addressing scheme for hexagonal grids, called Array Set Addressing (ASA).
FFT algorithms specialized for real or symmetric data.
In many applications, the input data for the DFT are purely real, in which case the outputs satisfy the symmetry
formula_21
and efficient FFT algorithms have been designed for this situation (see e.g. Sorensen, 1987). One approach consists of taking an ordinary algorithm (e.g. Cooley–Tukey) and removing the redundant parts of the computation, saving roughly a factor of two in time and memory. Alternatively, it is possible to express an "even"-length real-input DFT as a complex DFT of half the length (whose real and imaginary parts are the even/odd elements of the original real data), followed by formula_16 post-processing operations.
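A sketch of the second approach in Python, treating a half-length complex FFT (numpy's) as a black box; the even/odd splitting formulas used below are the standard textbook ones, and the function is meant as an illustration rather than a reference implementation:

import numpy as np

def real_dft_via_half_length(x):
    # Full length-n DFT of a real sequence x of even length n,
    # computed from one complex FFT of length n/2 plus O(n) post-processing.
    x = np.asarray(x, dtype=float)
    n = len(x)
    m = n // 2
    z = x[0::2] + 1j * x[1::2]            # pack even/odd samples into complex numbers
    Z = np.fft.fft(z)                      # one complex FFT of half the length
    k = np.arange(m)
    Zc = np.conj(Z[(-k) % m])              # conj(Z_{(m-k) mod m})
    A = 0.5 * (Z + Zc)                     # DFT of the even-indexed samples
    B = -0.5j * (Z - Zc)                   # DFT of the odd-indexed samples
    X = np.empty(n, dtype=complex)
    X[:m] = A + np.exp(-2j * np.pi * k / n) * B
    X[m] = A[0] - B[0]                     # Nyquist bin
    X[m + 1:] = np.conj(X[1:m][::-1])      # Hermitian symmetry X_{n-k} = conj(X_k)
    return X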
It was once believed that real-input DFTs could be more efficiently computed by means of the discrete Hartley transform (DHT), but it was subsequently argued that a specialized real-input DFT algorithm (FFT) can typically be found that requires fewer operations than the corresponding DHT algorithm (FHT) for the same number of inputs. Bruun's algorithm (above) is another method that was initially proposed to take advantage of real inputs, but it has not proved popular.
There are further FFT specializations for the cases of real data that have even/odd symmetry, in which case one can gain another factor of roughly two in time and memory and the DFT becomes the discrete cosine/sine transform(s) (DCT/DST). Instead of directly modifying an FFT algorithm for these cases, DCTs/DSTs can also be computed via FFTs of real data combined with formula_16 pre- and post-processing.
Computational issues.
Bounds on complexity and operation counts.
Unsolved problem in computer science:
What is the lower bound on the complexity of fast Fourier transform algorithms? Can they be faster than formula_22?
A fundamental question of longstanding theoretical interest is to prove lower bounds on the complexity and exact operation counts of fast Fourier transforms, and many open problems remain. It has not been rigorously proved whether DFTs truly require formula_23 (i.e., order "formula_24" or greater) operations, even for the simple case of power-of-two sizes, although no algorithms with lower complexity are known. In particular, the count of arithmetic operations is usually the focus of such questions, although actual performance on modern-day computers is determined by many other factors such as cache or CPU pipeline optimization.
Following work by Shmuel Winograd (1978), a tight formula_25 lower bound is known for the number of real multiplications required by an FFT. It can be shown that only formula_26 irrational real multiplications are required to compute a DFT of power-of-two length formula_27. Moreover, explicit algorithms that achieve this count are known (Heideman & Burrus, 1986; Duhamel, 1990). However, these algorithms require too many additions to be practical, at least on modern computers with hardware multipliers (Duhamel, 1990; Frigo & Johnson, 2005).
A tight lower bound is not known on the number of required additions, although lower bounds have been proved under some restrictive assumptions on the algorithms. In 1973, Morgenstern proved an formula_28 lower bound on the addition count for algorithms where the multiplicative constants have bounded magnitudes (which is true for most but not all FFT algorithms). Pan (1986) proved an formula_28 lower bound assuming a bound on a measure of the FFT algorithm's "asynchronicity", but the generality of this assumption is unclear. For the case of power-of-two n, Papadimitriou (1979) argued that the number formula_29 of complex-number additions achieved by Cooley–Tukey algorithms is "optimal" under certain assumptions on the graph of the algorithm (his assumptions imply, among other things, that no additive identities in the roots of unity are exploited). (This argument would imply that at least formula_30 real additions are required, although this is not a tight bound because extra additions are required as part of complex-number multiplications.) Thus far, no published FFT algorithm has achieved fewer than formula_29 complex-number additions (or their equivalent) for power-of-two n.
A third problem is to minimize the "total" number of real multiplications and additions, sometimes called the "arithmetic complexity" (although in this context it is the exact count and not the asymptotic complexity that is being considered). Again, no tight lower bound has been proven. Since 1968, however, the lowest published count for power-of-two n was long achieved by the split-radix FFT algorithm, which requires formula_31 real multiplications and additions for "n" > 1. This was recently reduced to formula_32 (Johnson and Frigo, 2007; Lundy and Van Buskirk, 2007). A slightly larger count (but still better than split radix for "n" ≥ 256) was shown to be provably optimal for "n" ≤ 512 under additional restrictions on the possible algorithms (split-radix-like flowgraphs with unit-modulus multiplicative factors), by reduction to a satisfiability modulo theories problem solvable by brute force (Haynal & Haynal, 2011).
Most of the attempts to lower or prove the complexity of FFT algorithms have focused on the ordinary complex-data case, because it is the simplest. However, complex-data FFTs are so closely related to algorithms for related problems such as real-data FFTs, discrete cosine transforms, discrete Hartley transforms, and so on, that any improvement in one of these would immediately lead to improvements in the others (Duhamel & Vetterli, 1990).
Approximations.
All of the FFT algorithms discussed above compute the DFT exactly (i.e. neglecting floating-point errors). A few "FFT" algorithms have been proposed, however, that compute the DFT "approximately", with an error that can be made arbitrarily small at the expense of increased computations. Such algorithms trade the approximation error for increased speed or other properties. For example, an approximate FFT algorithm by Edelman et al. (1999) achieves lower communication requirements for parallel computing with the help of a fast multipole method. A wavelet-based approximate FFT by Guo and Burrus (1996) takes sparse inputs/outputs (time/frequency localization) into account more efficiently than is possible with an exact FFT. Another algorithm for approximate computation of a subset of the DFT outputs is due to Shentov et al. (1995). The Edelman algorithm works equally well for sparse and non-sparse data, since it is based on the compressibility (rank deficiency) of the Fourier matrix itself rather than the compressibility (sparsity) of the data. Conversely, if the data are sparse—that is, if only k out of n Fourier coefficients are nonzero—then the complexity can be reduced to formula_33, and this has been demonstrated to lead to practical speedups compared to an ordinary FFT for "n"/"k" > 32 in a large-n example ("n" = 2^22) using a probabilistic approximate algorithm (which estimates the largest k coefficients to several decimal places).
Accuracy.
FFT algorithms have errors when finite-precision floating-point arithmetic is used, but these errors are typically quite small; most FFT algorithms, e.g. Cooley–Tukey, have excellent numerical properties as a consequence of the pairwise summation structure of the algorithms. The upper bound on the relative error for the Cooley–Tukey algorithm is formula_34, compared to formula_35 for the naïve DFT formula, where 𝜀 is the machine floating-point relative precision. In fact, the root mean square (rms) errors are much better than these upper bounds, being only formula_36 for Cooley–Tukey and formula_37 for the naïve DFT (Schatzman, 1996). These results, however, are very sensitive to the accuracy of the twiddle factors used in the FFT (i.e. the trigonometric function values), and it is not unusual for incautious FFT implementations to have much worse accuracy, e.g. if they use inaccurate trigonometric recurrence formulas. Some FFTs other than Cooley–Tukey, such as the Rader–Brenner algorithm, are intrinsically less stable.
In fixed-point arithmetic, the finite-precision errors accumulated by FFT algorithms are worse, with rms errors growing as formula_38 for the Cooley–Tukey algorithm (Welch, 1969). Achieving this accuracy requires careful attention to scaling to minimize loss of precision, and fixed-point FFT algorithms involve rescaling at each intermediate stage of decompositions like Cooley–Tukey.
To verify the correctness of an FFT implementation, rigorous guarantees can be obtained in formula_1 time by a simple procedure checking the linearity, impulse-response, and time-shift properties of the transform on random inputs (Ergün, 1995).
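The following Python sketch illustrates that idea (a simplified illustration of such a property-based check, not Ergün's exact procedure; the number of trials and the tolerance are arbitrary choices):

import cmath
import random

def looks_like_a_dft(transform, n, trials=20, tol=1e-9):
    # Probabilistic spot-check of linearity, the impulse response, and the
    # time-shift property of a candidate length-n DFT implementation.
    rng = random.Random(0)
    rand_vec = lambda: [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(n)]
    w = lambda k: cmath.exp(-2j * cmath.pi * k / n)
    for _ in range(trials):
        x, y = rand_vec(), rand_vec()
        a = complex(rng.gauss(0, 1), rng.gauss(0, 1))
        b = complex(rng.gauss(0, 1), rng.gauss(0, 1))
        # Linearity: T(a*x + b*y) = a*T(x) + b*T(y).
        lhs = transform([a * xi + b * yi for xi, yi in zip(x, y)])
        rhs = [a * u + b * v for u, v in zip(transform(x), transform(y))]
        if any(abs(u - v) > tol for u, v in zip(lhs, rhs)):
            return False
        # Impulse response: the DFT of a unit impulse at index 0 is all ones.
        if any(abs(u - 1) > tol for u in transform([1.0] + [0.0] * (n - 1))):
            return False
        # Time shift: a circular shift by one multiplies bin k by e^(-2*pi*i*k/n).
        X, Xs = transform(x), transform(x[-1:] + x[:-1])
        if any(abs(Xs[k] - w(k) * X[k]) > tol for k in range(n)):
            return False
    return True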
The values for intermediate frequencies may be obtained by various averaging methods.
Multidimensional FFTs.
As defined in the multidimensional DFT article, the multidimensional DFT
formula_39
transforms an array "x"n with a d-dimensional vector of indices formula_40 by a set of d nested summations (over formula_41 for each j), where the division formula_42 is performed element-wise. Equivalently, it is the composition of a sequence of "d" sets of one-dimensional DFTs, performed along one dimension at a time (in any order).
This compositional viewpoint immediately provides the simplest and most common multidimensional DFT algorithm, known as the row-column algorithm (after the two-dimensional case, below). That is, one simply performs a sequence of d one-dimensional FFTs (by any of the above algorithms): first you transform along the "n"1 dimension, then along the "n"2 dimension, and so on (actually, any ordering works). This method is easily shown to have the usual formula_1 complexity, where formula_43 is the total number of data points transformed. In particular, there are "n"/"n"1 transforms of size "n"1, etc., so the complexity of the sequence of FFTs is:
formula_44
In two dimensions, the "x"k can be viewed as an formula_45 matrix, and this algorithm corresponds to first performing the FFT of all the rows (resp. columns), grouping the resulting transformed rows (resp. columns) together as another formula_45 matrix, and then performing the FFT on each of the columns (resp. rows) of this second matrix, and similarly grouping the results into the final result matrix.
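A compact Python sketch of the two-dimensional row-column method, using a one-dimensional FFT (here numpy's) as a black box; the helper name is illustrative:

import numpy as np

def fft2_row_column(x):
    # Two-dimensional DFT by the row-column method: 1D FFTs along every row,
    # then 1D FFTs along every column of the intermediate result.
    x = np.asarray(x, dtype=complex)
    rows_done = np.array([np.fft.fft(row) for row in x])
    return np.array([np.fft.fft(col) for col in rows_done.T]).T

# The result agrees with a direct two-dimensional transform, e.g.
# np.allclose(fft2_row_column(a), np.fft.fft2(a)) for a random array a.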
In more than two dimensions, it is often advantageous for cache locality to group the dimensions recursively. For example, a three-dimensional FFT might first perform two-dimensional FFTs of each planar "slice" for each fixed "n"1, and then perform the one-dimensional FFTs along the "n"1 direction. More generally, an asymptotically optimal cache-oblivious algorithm consists of recursively dividing the dimensions into two groups formula_46 and formula_47 that are transformed recursively (rounding if d is not even) (see Frigo and Johnson, 2005). Still, this remains a straightforward variation of the row-column algorithm that ultimately requires only a one-dimensional FFT algorithm as the base case, and still has formula_2 complexity. Yet another variation is to perform matrix transpositions in between transforming subsequent dimensions, so that the transforms operate on contiguous data; this is especially important for out-of-core and distributed memory situations where accessing non-contiguous data is extremely time-consuming.
There are other multidimensional FFT algorithms that are distinct from the row-column algorithm, although all of them have formula_1 complexity. Perhaps the simplest non-row-column FFT is the vector-radix FFT algorithm, which is a generalization of the ordinary Cooley–Tukey algorithm where one divides the transform dimensions by a vector formula_48 of radices at each step. (This may also have cache benefits.) The simplest case of vector-radix is where all of the radices are equal (e.g. vector-radix-2 divides "all" of the dimensions by two), but this is not necessary. Vector radix with only a single non-unit radix at a time, i.e. formula_49, is essentially a row-column algorithm. Other, more complicated, methods include polynomial transform algorithms due to Nussbaumer (1977), which view the transform in terms of convolutions and polynomial products. See Duhamel and Vetterli (1990) for more information and references.
Other generalizations.
An formula_50 generalization to spherical harmonics on the sphere "S"2 with "n"2 nodes was described by Mohlenkamp, along with an algorithm conjectured (but not proven) to have formula_51 complexity; Mohlenkamp also provides an implementation in the libftsh library. A spherical-harmonic algorithm with formula_52 complexity is described by Rokhlin and Tygert.
The fast folding algorithm is analogous to the FFT, except that it operates on a series of binned waveforms rather than a series of real or complex scalar values. Rotation (which in the FFT is multiplication by a complex phasor) is a circular shift of the component waveform.
Various groups have also published "FFT" algorithms for non-equispaced data, as reviewed in Potts "et al." (2001). Such algorithms do not strictly compute the DFT (which is only defined for equispaced data), but rather some approximation thereof (a non-uniform discrete Fourier transform, or NDFT, which itself is often computed only approximately). More generally there are various other methods of spectral estimation.
Applications.
The FFT is used in digital recording, sampling, additive synthesis and pitch correction software.
The FFT's importance derives from the fact that it has made working in the frequency domain equally computationally feasible as working in the temporal or spatial domain. Some of the important applications of the FFT include:
An original application of the FFT in finance, particularly in the valuation of options, was developed by Marcello Minenna.
| [
{
"math_id": 0,
"text": "O(n^2)"
},
{
"math_id": 1,
"text": "O(n \\log n)"
},
{
"math_id": 2,
"text": "O(n \\log n)"
},
{
"math_id": 3,
"text": "e^{-2\\pi i/n}"
},
{
"math_id": 4,
"text": "x_0, \\ldots, x_{n-1}"
},
{
"math_id": 5,
"text": " X_k = \\sum_{m=0}^{n-1} x_m e^{-i2\\pi k m/n} \\qquad k = 0,\\ldots,n-1, "
},
{
"math_id": 6,
"text": "e^{i 2\\pi/n}"
},
{
"math_id": 7,
"text": "n=4096"
},
{
"math_id": 8,
"text": "n^2"
},
{
"math_id": 9,
"text": "n(n-1)"
},
{
"math_id": 10,
"text": "O(n)"
},
{
"math_id": 11,
"text": "(n/2)\\log_2(n)"
},
{
"math_id": 12,
"text": "n\\log_2(n)"
},
{
"math_id": 13,
"text": "n = n_1n_2"
},
{
"math_id": 14,
"text": "n_1"
},
{
"math_id": 15,
"text": "n_2"
},
{
"math_id": 16,
"text": "O(n)"
},
{
"math_id": 17,
"text": "z^n-1"
},
{
"math_id": 18,
"text": "z^m-1"
},
{
"math_id": 19,
"text": "z^{2m} + az^m + 1"
},
{
"math_id": 20,
"text": "nk = -\\frac{(k-n)^2} 2 + \\frac{n^2} 2 + \\frac{k^2} 2."
},
{
"math_id": 21,
"text": "X_{n-k} = X_k^*"
},
{
"math_id": 22,
"text": "O(N\\log N)"
},
{
"math_id": 23,
"text": "\\Omega(n \\log n)"
},
{
"math_id": 24,
"text": "n \\log n"
},
{
"math_id": 25,
"text": "\\Theta(n)"
},
{
"math_id": 26,
"text": "4n - 2\\log_2^2(n) - 2\\log_2(n) - 4"
},
{
"math_id": 27,
"text": "n = 2^m"
},
{
"math_id": 28,
"text": "\\Omega(n \\log n)"
},
{
"math_id": 29,
"text": "n \\log_2 n"
},
{
"math_id": 30,
"text": "2N \\log_2 N"
},
{
"math_id": 31,
"text": "4n\\log_2(n) - 6n + 8"
},
{
"math_id": 32,
"text": "\\sim \\frac{34}{9} n \\log_2 n"
},
{
"math_id": 33,
"text": "O(k \\log n \\log n/k)"
},
{
"math_id": 34,
"text": "O(\\varepsilon \\log n)"
},
{
"math_id": 35,
"text": "O(\\varepsilon n^{3/2})"
},
{
"math_id": 36,
"text": "O(\\varepsilon \\sqrt{\\log n})"
},
{
"math_id": 37,
"text": "O(\\varepsilon \\sqrt{n})"
},
{
"math_id": 38,
"text": "O(\\sqrt{n})"
},
{
"math_id": 39,
"text": "X_\\mathbf{k} = \\sum_{\\mathbf{n}=0}^{\\mathbf{N}-1} e^{-2\\pi i \\mathbf{k} \\cdot (\\mathbf{n} / \\mathbf{N})} x_\\mathbf{n}"
},
{
"math_id": 40,
"text": "\\mathbf{n} = \\left(n_1, \\ldots, n_d\\right)"
},
{
"math_id": 41,
"text": "n_j = 0 \\ldots N_j - 1"
},
{
"math_id": 42,
"text": "\\mathbf{n} / \\mathbf{N} = \\left(n_1/N_1, \\ldots, n_d/N_d\\right)"
},
{
"math_id": 43,
"text": "n = n_1 \\cdot n_2 \\cdots n_d"
},
{
"math_id": 44,
"text": "\\begin{align}\n & \\frac{n}{n_1} O(n_1 \\log n_1) + \\cdots + \\frac{n}{n_d} O(n_d \\log n_d) \\\\[6pt]\n ={} & O\\left(n \\left[\\log n_1 + \\cdots + \\log n_d\\right]\\right) = O(n \\log n).\n\\end{align}"
},
{
"math_id": 45,
"text": "n_1 \\times n_2"
},
{
"math_id": 46,
"text": "(n_1, \\ldots, n_{d/2})"
},
{
"math_id": 47,
"text": "(n_{d/2+1}, \\ldots, n_d)"
},
{
"math_id": 48,
"text": "\\mathbf{r} = \\left(r_1, r_2, \\ldots, r_d\\right)"
},
{
"math_id": 49,
"text": "\\mathbf{r} = \\left(1, \\ldots, 1, r, 1, \\ldots, 1\\right)"
},
{
"math_id": 50,
"text": "O(n^{5/2} \\log n)"
},
{
"math_id": 51,
"text": "O(n^2 \\log^2(n))"
},
{
"math_id": 52,
"text": "O(n^2 \\log n)"
}
] | https://en.wikipedia.org/wiki?curid=11512 |
1151206 | Power series solution of differential equations | Method for solving differential equations
In mathematics, the power series method is used to seek a power series solution to certain differential equations. In general, such a solution assumes a power series with unknown coefficients, then substitutes that solution into the differential equation to find a recurrence relation for the coefficients.
Method.
Consider the second-order linear differential equation
formula_0
Suppose "a"2 is nonzero for all z. Then we can divide throughout to obtain
formula_1
Suppose further that "a"1/"a"2 and "a"0/"a"2 are analytic functions.
The power series method calls for the construction of a power series solution
formula_2
If "a"2 is zero for some z, then the Frobenius method, a variation on this method, is suited to deal with so called "singular points". The method works analogously for higher order equations as well as for systems.
Example usage.
Let us look at the Hermite differential equation,
formula_3
We can try to construct a series solution
formula_4
Substituting these in the differential equation
formula_5
Making a shift on the first sum
formula_6
If this series is a solution, then all these coefficients must be zero, so for both "k"=0 and "k">0:
formula_7
We can rearrange this to get a recurrence relation for "A""k"+2.
formula_8
formula_9
Now, we have
formula_10
We can determine "A"0 and "A"1 if there are initial conditions, i.e. if we have an initial value problem.
So we have
formula_11
and the series solution is
formula_12
which we can break up into the sum of two linearly independent series solutions:
formula_13
which can be further simplified by the use of hypergeometric series.
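For illustration, the recurrence formula_9 can be iterated mechanically; the following Python sketch (with illustrative names, using exact rational arithmetic) reproduces the coefficients listed above from given values of "A"0 and "A"1:

from fractions import Fraction

def hermite_series_coefficients(a0, a1, n_terms):
    # Coefficients A_k of f = sum A_k z^k for f'' - 2 z f' + f = 0 (lambda = 1),
    # generated from A_{k+2} = (2k - 1) A_k / ((k + 2)(k + 1)).
    A = [Fraction(a0), Fraction(a1)]
    for k in range(n_terms - 2):
        A.append(Fraction(2 * k - 1, (k + 2) * (k + 1)) * A[k])
    return A

# With A_0 = 1, A_1 = 0 this gives 1, 0, -1/2, 0, -1/8, 0, -7/240, 0, ...
print(hermite_series_coefficients(1, 0, 8))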
A simpler way using Taylor series.
A much simpler way of solving this equation (and power series solutions in general) is to use the Taylor series form of the expansion.
Here we assume the answer is of the form
formula_14
If we do this, the general rule for obtaining the recurrence relationship for the coefficients is
formula_15
and
formula_16
In this case we can solve the Hermite equation in fewer steps:
formula_17
becomes
formula_18
or
formula_19
in the series
formula_14
Nonlinear equations.
The power series method can be applied to certain nonlinear differential equations, though with less flexibility. A very large class of nonlinear equations can be solved analytically by using the Parker–Sochacki method. Since the Parker–Sochacki method involves an expansion of the original system of ordinary differential equations through auxiliary equations, it is not simply referred to as the power series method. The Parker–Sochacki method is done before the power series method to make the power series method possible on many nonlinear problems. An ODE problem can be expanded with the auxiliary variables which make the power series method trivial for an equivalent, larger system. Expanding the ODE problem with auxiliary variables produces the same coefficients (since the power series for a function is unique) at the cost of also calculating the coefficients of auxiliary equations. Many times, without using auxiliary variables, there is no known way to get the power series for the solution to a system, hence the power series method alone is difficult to apply to most nonlinear equations.
The power series method will give solutions only to initial value problems (as opposed to boundary value problems); this is not an issue when dealing with linear equations, since the solution may turn up multiple linearly independent solutions which may be combined (by superposition) to solve boundary value problems as well. A further restriction is that the series coefficients will be specified by a nonlinear recurrence (the nonlinearities are inherited from the differential equation).
In order for the solution method to work, as in linear equations, it is necessary to express every term in the nonlinear equation as a power series so that all of the terms may be combined into one power series.
As an example, consider the initial value problem
formula_20
which describes a solution to capillary-driven flow in a groove. There are two nonlinearities: the first and second terms involve products. The initial values are given at formula_21, which hints that the power series must be set up as:
formula_22
since in this way
formula_23
which makes the initial values very easy to evaluate. It is necessary to rewrite the equation slightly in light of the definition of the power series,
formula_24
so that the third term contains the same form formula_25 that shows in the power series.
The last consideration is what to do with the products; substituting the power series in would result in products of power series when it's necessary that each term be its own power series. This is where the Cauchy product
formula_26
is useful; substituting the power series into the differential equation and applying this identity leads to an equation where every term is a power series. After much rearrangement, the recurrence
formula_27
is obtained, specifying exact values of the series coefficients. The initial values give formula_28 and formula_29; thereafter, the above recurrence is used. For example, the next few coefficients are:
formula_30
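The recurrence can likewise be iterated mechanically. The following Python sketch (illustrative names; it uses exact rational arithmetic and exploits formula_28, so the term that would contain the not-yet-computed coefficient vanishes and the remaining equation is affine in the newest coefficient) reproduces the values above:

from fractions import Fraction

def capillary_series_coefficients(n_terms):
    # Coefficients c_i of F(eta) = sum c_i (eta - 1)^i, from the recurrence in the text.
    c = [Fraction(0), Fraction(-1, 2)]            # c_0 = 0, c_1 = -1/2
    for i in range(1, n_terms - 1):
        def residual(c_next):                     # left-hand side at index i with c_{i+1} = c_next
            cc = c + [c_next]
            total = i * cc[i] + (i + 1) * cc[i + 1]
            for j in range(i + 1):
                if j < i:                         # the j = i term is (i+1)(i+2) c_0 c_{i+2} = 0
                    total += (j + 1) * (j + 2) * cc[i - j] * cc[j + 2]
                total += 2 * (i - j + 1) * (j + 1) * cc[i - j + 1] * cc[j + 1]
            return total
        r0, r1 = residual(Fraction(0)), residual(Fraction(1))
        c.append(-r0 / (r1 - r0))                 # solve the affine equation for c_{i+1}
    return c

# 0, -1/2, -1/6, -1/108, 7/3240, -19/48600
print(capillary_series_coefficients(6))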
A limitation of the power series solution shows itself in this example. A numeric solution of the problem shows that the function is smooth and always decreasing to the left of formula_21, and zero to the right. At formula_21, a slope discontinuity exists, a feature which the power series is incapable of rendering; for this reason, the series solution continues decreasing to the right of formula_21 instead of suddenly becoming zero. | [
{
"math_id": 0,
"text": "a_2(z)f''(z)+a_1(z)f'(z)+a_0(z)f(z)=0."
},
{
"math_id": 1,
"text": "f''+{a_1(z)\\over a_2(z)}f'+{a_0(z)\\over a_2(z)}f=0."
},
{
"math_id": 2,
"text": "f=\\sum_{k=0}^\\infty A_kz^k."
},
{
"math_id": 3,
"text": "f''-2zf'+\\lambda f=0; \\; \\lambda=1"
},
{
"math_id": 4,
"text": "\\begin{align}\nf &= \\sum_{k=0}^\\infty A_k z^k\\\\\nf' &= \\sum_{k=1}^\\infty k A_k z^{k-1}\\\\\nf'' &= \\sum_{k=2}^\\infty k(k-1)A_k z^{k-2}\n\\end{align}"
},
{
"math_id": 5,
"text": "\\begin{align}\n& \\sum_{k=2}^\\infty k(k-1)A_kz^{k-2}-2z\\sum_{k=1}^\\infty k A_k z^{k-1}+\\sum_{k=0}^\\infty A_k z^k=0 \\\\\n= & \\sum_{k=2}^\\infty k(k-1)A_kz^{k-2}-\\sum_{k=1}^\\infty 2kA_k z^k+\\sum_{k=0}^\\infty A_k z^k\n\\end{align}"
},
{
"math_id": 6,
"text": "\\begin{align}\n& = \\sum_{k=0}^\\infty (k+2)(k+1) A_{k+2} z^k - \\sum_{k=1}^\\infty 2k A_k z^k + \\sum_{k=0}^\\infty A_k z^k \\\\\n& = 2 A_2 + \\sum_{k=1}^\\infty (k+2)(k+1) A_{k+2} z^k - \\sum_{k=1}^\\infty 2k A_k z^k + A_0 + \\sum_{k=1}^\\infty A_k z^k \\\\\n& = 2 A_2 + A_0 + \\sum_{k=1}^\\infty \\left( (k+2) (k+1) A_{k+2} + (-2k+1) A_k \\right) z^k\n\\end{align}"
},
{
"math_id": 7,
"text": "(k+2)(k+1)A_{k+2}+(-2k+1)A_k = 0"
},
{
"math_id": 8,
"text": "(k+2)(k+1)A_{k+2}=-(-2k+1)A_k"
},
{
"math_id": 9,
"text": "A_{k+2}={(2k-1)\\over (k+2)(k+1)}A_k"
},
{
"math_id": 10,
"text": "A_2 = {-1 \\over (2)(1)}A_0={-1\\over 2}A_0,\\, A_3 = {1 \\over (3)(2)} A_1={1\\over 6}A_1"
},
{
"math_id": 11,
"text": "\\begin{align}\nA_4 & ={1\\over 4}A_2 = \\left({1\\over 4}\\right)\\left({-1 \\over 2}\\right)A_0 = {-1 \\over 8}A_0 \\\\[8pt]\nA_5 & ={1\\over 4}A_3 = \\left({1\\over 4}\\right)\\left({1 \\over 6}\\right)A_1 = {1 \\over 24}A_1 \\\\[8pt]\nA_6 & = {7\\over 30}A_4 = \\left({7\\over 30}\\right)\\left({-1 \\over 8}\\right)A_0 = {-7 \\over 240}A_0 \\\\[8pt]\nA_7 & = {3\\over 14}A_5 = \\left({3\\over 14}\\right)\\left({1 \\over 24}\\right)A_1 = {1 \\over 112}A_1\n\\end{align}"
},
{
"math_id": 12,
"text": "\\begin{align}\nf & = A_0z^0 + A_1z^1 +A_2z^2 +A_3z^3 +A_4z^4 +A_5z^5 + A_6z^6 + A_7z^7+\\cdots \\\\[8pt]\n& = A_0z^0 + A_1z^1 + {-1\\over 2}A_0z^2 + {1\\over 6}A_1z^3 + {-1 \\over 8}A_0z^4 + {1 \\over 24}A_1z^5 + {-7 \\over 240}A_0z^6 + {1 \\over 112}A_1z^7 + \\cdots \\\\[8pt]\n& = A_0z^0 + {-1\\over 2}A_0z^2 + {-1 \\over 8}A_0z^4 + {-7 \\over 240}A_0z^6 + A_1z + {1\\over 6}A_1z^3 + {1 \\over 24}A_1z^5 + {1 \\over 112}A_1z^7 + \\cdots\n\\end{align}"
},
{
"math_id": 13,
"text": "f=A_0 \\left(1 + {-1\\over 2}z^2 + {-1 \\over 8}z^4 + {-7 \\over 240}z^6 + \\cdots\\right) + A_1\\left(z + {1\\over 6}z^3 + {1 \\over 24}z^5 + {1 \\over 112}z^7 + \\cdots\\right)"
},
{
"math_id": 14,
"text": "f=\\sum_{k=0}^\\infty {A_kz^k\\over {k!}}"
},
{
"math_id": 15,
"text": "y^{[n]} \\to A_{k+n} "
},
{
"math_id": 16,
"text": "x^m y^{[n]} \\to (k)(k-1)\\cdots(k-m+1)A_{k+n-m}"
},
{
"math_id": 17,
"text": "f''-2zf'+\\lambda f=0;\\;\\lambda=1"
},
{
"math_id": 18,
"text": "A_{k+2} -2kA_k +\\lambda A_k=0"
},
{
"math_id": 19,
"text": "A_{k+2} = (2k-\\lambda) A_k"
},
{
"math_id": 20,
"text": "F F'' + 2 F'^2 + \\eta F' = 0 \\quad ; \\quad F(1) = 0 \\ , \\ F'(1) = -\\frac{1}{2}"
},
{
"math_id": 21,
"text": "\\eta = 1"
},
{
"math_id": 22,
"text": "F(\\eta) = \\sum_{i = 0}^{\\infty} c_i (\\eta - 1)^i"
},
{
"math_id": 23,
"text": "\\left.\\frac{d^n F}{d \\eta^n} \\right|_{\\eta = 1} = n! \\ c_n"
},
{
"math_id": 24,
"text": "F F'' + 2 F'^2 + (\\eta - 1) F' + F' = 0 \\quad ; \\quad F(1) = 0 \\ , \\ F'(1) = -\\frac{1}{2}"
},
{
"math_id": 25,
"text": "\\eta - 1"
},
{
"math_id": 26,
"text": "\\left(\\sum_{i = 0}^{\\infty} a_i x^i\\right) \\left(\\sum_{i = 0}^{\\infty} b_i x^i\\right) = \n\\sum_{i = 0}^{\\infty} x^i \\sum_{j = 0}^i a_{i - j} b_j"
},
{
"math_id": 27,
"text": "\\sum_{j = 0}^i \\left((j + 1) (j + 2) c_{i - j} c_{j + 2} + 2 (i - j + 1) (j + 1) c_{i - j + 1} c_{j + 1}\\right) + i c_i + (i + 1) c_{i + 1} = 0"
},
{
"math_id": 28,
"text": "c_0 = 0"
},
{
"math_id": 29,
"text": "c_1 = -1/2"
},
{
"math_id": 30,
"text": "c_2 = -\\frac{1}{6} \\quad ; \\quad c_3 = -\\frac{1}{108} \\quad ; \\quad c_4 = \\frac{7}{3240} \\quad ; \\quad c_5 = -\\frac{19}{48600} \\ \\dots"
}
] | https://en.wikipedia.org/wiki?curid=1151206 |
1151323 | Thue equation | Diophantine equation involving an irreducible bivariate form of deg > 2 over the rationals
In mathematics, a Thue equation is a Diophantine equation of the form
formula_0
where formula_1 is an irreducible bivariate form of degree at least 3 over the rational numbers, and formula_2 is a nonzero rational number. It is named after Axel Thue, who in 1909 proved that a Thue equation can have only finitely many solutions in integers formula_3 and formula_4, a result known as Thue's theorem.
The Thue equation is solvable effectively: there is an explicit bound on the solutions formula_3, formula_4 of the form formula_5 where constants formula_6 and formula_7 depend only on the form formula_1. A stronger result holds: if formula_8 is the field generated by the roots of formula_1, then the equation has only finitely many solutions with formula_3 and formula_4 integers of formula_8, and again these may be effectively determined.
Finiteness of solutions and diophantine approximation.
Thue's original proof that the equation named in his honour has finitely many solutions is through the proof of what is now known as Thue's theorem: it asserts that for any algebraic number formula_9 having degree formula_10 and for any formula_11 there exist only finitely many co-prime integers formula_12 with formula_13 such that formula_14. Applying this theorem allows one to almost immediately deduce the finiteness of solutions. However, Thue's proof, like the subsequent improvements by Siegel, Dyson, and Roth, was ineffective.
Solution algorithm.
Finding all solutions to a Thue equation can be achieved by a practical algorithm, which has been implemented in several computer algebra systems.
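Such implementations typically combine Baker-type bounds with lattice-basis reduction and are provably exhaustive. By way of contrast only, the following naive Python sketch enumerates small integer pairs for a sample equation (here x^3 - 2y^3 = 1, chosen purely for illustration); it cannot certify that the list of solutions it finds is complete:

def small_thue_solutions(f, r, bound):
    # Naive search for integer solutions of f(x, y) = r with |x|, |y| <= bound.
    return [(x, y)
            for x in range(-bound, bound + 1)
            for y in range(-bound, bound + 1)
            if f(x, y) == r]

# Example: the Thue equation x^3 - 2 y^3 = 1.
print(small_thue_solutions(lambda x, y: x**3 - 2 * y**3, 1, 100))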
Bounding the number of solutions.
While there are several effective methods to solve Thue equations (including using Baker's method and Skolem's formula_15-adic method), these are not able to give the best theoretical bounds on the number of solutions. One may qualify an effective bound formula_16 of the Thue equation formula_17 by the parameters it depends on, and how "good" the dependence is.
The best result known today, essentially building on pioneering work of Bombieri and Schmidt, gives a bound of the shape formula_18, where formula_19 is an "absolute constant" (that is, independent of both formula_1 and formula_2) and formula_20 is the number of distinct prime divisors of formula_2. The most significant qualitative improvement to the theorem of Bombieri and Schmidt is due to Stewart, who obtained a bound of the form formula_21 where formula_22 is a divisor of formula_2 exceeding formula_23 in absolute value. It is conjectured that one may take the bound formula_24; that is, depending only on the "degree" of formula_1 but not its coefficients, and completely independent of the integer formula_2 on the right hand side of the equation.
This is a weaker form of a conjecture of Stewart, and is a special case of the uniform boundedness conjecture for rational points. This conjecture has been proven for "small" integers formula_2, where smallness is measured in terms of the discriminant of the form formula_1, by various authors, including Evertse, Stewart, and Akhtari. Stewart and Xiao demonstrated that a strong form of this conjecture, asserting that the number of solutions is absolutely bounded, holds on average (as formula_2 ranges over the interval formula_25 with formula_26).
| [
{
"math_id": 0,
"text": "\nf(x,y) = r,\n"
},
{
"math_id": 1,
"text": "f"
},
{
"math_id": 2,
"text": "r"
},
{
"math_id": 3,
"text": "x"
},
{
"math_id": 4,
"text": "y"
},
{
"math_id": 5,
"text": "(C_1 r)^{C_2}"
},
{
"math_id": 6,
"text": "C_1"
},
{
"math_id": 7,
"text": "C_2"
},
{
"math_id": 8,
"text": "K"
},
{
"math_id": 9,
"text": "\\alpha"
},
{
"math_id": 10,
"text": "d \\geq 3"
},
{
"math_id": 11,
"text": "\\varepsilon > 0 "
},
{
"math_id": 12,
"text": "p, q"
},
{
"math_id": 13,
"text": " q > 0 "
},
{
"math_id": 14,
"text": " |\\alpha - p/q| < q^{-(d+1+\\varepsilon)/2}"
},
{
"math_id": 15,
"text": "p"
},
{
"math_id": 16,
"text": "C(f,r)"
},
{
"math_id": 17,
"text": "f(x,y) = r"
},
{
"math_id": 18,
"text": "C(f,r) = C \\cdot (\\deg f)^{1 + \\omega(r)}"
},
{
"math_id": 19,
"text": "C"
},
{
"math_id": 20,
"text": "\\omega(\\cdot)"
},
{
"math_id": 21,
"text": "C(f,r) = C \\cdot (\\deg f)^{1 + \\omega(g)}"
},
{
"math_id": 22,
"text": "g"
},
{
"math_id": 23,
"text": "|r|^{3/4}"
},
{
"math_id": 24,
"text": "C(f,r) = C(\\deg f)"
},
{
"math_id": 25,
"text": "|r| \\leq Z "
},
{
"math_id": 26,
"text": "Z \\rightarrow \\infty"
}
] | https://en.wikipedia.org/wiki?curid=1151323 |
1151794 | Admissible rule | In logic, a rule of inference is admissible in a formal system if the set of theorems of the system does not change when that rule is added to the existing rules of the system. In other words, every formula that can be derived using that rule is already derivable without that rule, so, in a sense, it is redundant. The concept of an admissible rule was introduced by Paul Lorenzen (1955).
Definitions.
Admissibility has been systematically studied only in the case of structural (i.e. substitution-closed) rules in propositional non-classical logics, which we will describe next.
Let a set of basic propositional connectives be fixed (for instance, formula_0 in the case of superintuitionistic logics, or formula_1 in the case of monomodal logics). Well-formed formulas are built freely using these connectives from a countably infinite set of propositional variables "p"0, "p"1, ... A substitution "σ" is a function from formulas to formulas that commutes with applications of the connectives, i.e.,
formula_2
for every connective "f", and formulas "A"1, ... , "A""n". (We may also apply substitutions to sets Γ of formulas, making "σ"Γ = {"σA": "A" ∈ Γ}.) A Tarski-style consequence relation is a relation formula_3 between sets of formulas, and formulas, such that Γ ⊢ "A" whenever "A" ∈ Γ; Δ ⊢ "A" whenever Γ ⊢ "A" and Γ ⊆ Δ; and Γ ⊢ "A" whenever Δ ⊢ "A" and Γ ⊢ "B" for every "B" ∈ Δ;
for all formulas "A", "B", and sets of formulas Γ, Δ. A consequence relation such that Γ ⊢ "A" implies "σ"Γ ⊢ "σA"
for all substitutions "σ" is called structural. (Note that the term "structural" as used here and below is unrelated to the notion of structural rules in sequent calculi.) A structural consequence relation is called a propositional logic. A formula "A" is a theorem of a logic formula_3 if formula_4.
For example, we identify a superintuitionistic logic "L" with its standard consequence relation formula_5 generated by modus ponens and axioms, and we identify a normal modal logic with its global consequence relation formula_5 generated by modus ponens, necessitation, and (as axioms) the theorems of the logic.
A structural inference rule (or just rule for short) is given by a pair (Γ, "B"), usually written as
formula_6
where Γ = {"A"1, ... , "A""n"} is a finite set of formulas, and "B" is a formula. An instance of the rule is
formula_7
for a substitution "σ". The rule Γ/"B" is derivable in formula_3, if formula_8. It is admissible if for every instance of the rule, "σB" is a theorem whenever all formulas from "σ"Γ are theorems. In other words, a rule is admissible if it, when added to the logic, does not lead to new theorems. We also write formula_9 if Γ/"B" is admissible. (Note that formula_10 is a structural consequence relation on its own.)
Every derivable rule is admissible, but not vice versa in general. A logic is structurally complete if every admissible rule is derivable, i.e., formula_11.
In logics with a well-behaved conjunction connective (such as superintuitionistic or modal logics), a rule formula_12 is equivalent to formula_13 with respect to admissibility and derivability. It is therefore customary to only deal with unary rules "A"/"B".
formula_16
is admissible in the intuitionistic propositional calculus ("IPC"). In fact, it is admissible in every superintuitionistic logic. On the other hand, the formula
formula_17
is not an intuitionistic theorem; hence "KPR" is not derivable in "IPC". In particular, "IPC" is not structurally complete.
formula_18
is admissible in many modal logics, such as "K", "D", "K"4, "S"4, "GL" (see this table for names of modal logics). It is derivable in "S"4, but it is not derivable in "K", "D", "K"4, or "GL".
formula_19
is admissible in normal logic formula_20. It is derivable in "GL" and "S"4.1, but it is not derivable in "K", "D", "K"4, "S"4, or "S"5.
formula_21
is admissible (but not derivable) in the basic modal logic "K", and it is derivable in "GL". However, "LR" is not admissible in "K"4. In particular, it is "not" true in general that a rule admissible in a logic "L" must be admissible in its extensions.
Decidability and reduced rules.
The basic question about admissible rules of a given logic is whether the set of all admissible rules is decidable. Note that the problem is nontrivial even if the logic itself (i.e., its set of theorems) is decidable: the definition of admissibility of a rule "A"/"B" involves an unbounded universal quantifier over all propositional substitutions. Hence "a priori" we only know that admissibility of rule in a decidable logic is formula_22 (i.e., its complement is recursively enumerable). For instance, it is known that admissibility in the bimodal logics "K""u" and "K"4"u" (the extensions of "K" or "K"4 with the universal modality) is undecidable. Remarkably, decidability of admissibility in the basic modal logic "K" is a major open problem.
Nevertheless, admissibility of rules is known to be decidable in many modal and superintuitionistic logics. The first decision procedures for admissible rules in basic transitive modal logics were constructed by Rybakov, using the reduced form of rules. A modal rule in variables "p"0, ... , "p""k" is called reduced if it has the form
formula_23
where each formula_24 is either blank, or negation formula_25. For each rule "r", we can effectively construct a reduced rule "s" (called the reduced form of "r") such that any logic admits (or derives) "r" if and only if it admits (or derives) "s", by introducing extension variables for all subformulas in "A", and expressing the result in the full disjunctive normal form. It is thus sufficient to construct a decision algorithm for admissibility of reduced rules.
Let formula_26 be a reduced rule as above. We identify every conjunction formula_27 with the set formula_28 of its conjuncts. For any subset "W" of the set formula_29 of all conjunctions, let us define a Kripke model formula_30 by
formula_31
formula_32
Then the following provides an algorithmic criterion for admissibility in "K"4:
Theorem. The rule formula_26 is "not" admissible in "K"4 if and only if there exists a set formula_33 such that
formula_38 if and only if formula_39 for every formula_40
formula_38 if and only if formula_41 and formula_39 for every formula_40
hold for all "j".
Similar criteria can be found for the logics "S"4, "GL", and "Grz". Furthermore, admissibility in intuitionistic logic can be reduced to admissibility in "Grz" using the Gödel–McKinsey–Tarski translation:
formula_42 if and only if formula_43
Rybakov (1997) developed much more sophisticated techniques for showing decidability of admissibility, which apply to a robust (infinite) class of transitive (i.e., extending "K"4 or "IPC") modal and superintuitionistic logics, including e.g. "S"4.1, "S"4.2, "S"4.3, "KC", "T""k" (as well as the above-mentioned logics "IPC", "K"4, "S"4, "GL", "Grz").
Despite being decidable, the admissibility problem has relatively high computational complexity, even in simple logics: admissibility of rules in the basic transitive logics "IPC", "K"4, "S"4, "GL", "Grz" is coNEXP-complete. This should be contrasted with the derivability problem (for rules or formulas) in these logics, which is PSPACE-complete.
Projectivity and unification.
Admissibility in propositional logics is closely related to unification in the equational theory of modal or Heyting algebras. The connection was developed by Ghilardi (1999, 2000). In the logical setup, a unifier of a formula "A" in the language of a logic "L" (an "L"-unifier for short) is a substitution "σ" such that "σA" is a theorem of "L". (Using this notion, we can rephrase admissibility of a rule "A"/"B" in "L" as "every "L"-unifier of "A" is an "L"-unifier of "B"".) An "L"-unifier "σ" is less general than an "L"-unifier "τ", written as "σ" ≤ "τ", if there exists a substitution "υ" such that
formula_44
for every variable "p". A complete set of unifiers of a formula "A" is a set "S" of "L"-unifiers of "A" such that every "L"-unifier of "A" is less general than some unifier from "S". A most general unifier (MGU) of "A" is a unifier "σ" such that {"σ"} is a complete set of unifiers of "A". It follows that if "S" is a complete set of unifiers of "A", then a rule "A"/"B" is "L"-admissible if and only if every "σ" in "S" is an "L"-unifier of "B". Thus we can characterize admissible rules if we can find well-behaved complete sets of unifiers.
An important class of formulas that have a most general unifier are the projective formulas: these are formulas "A" such that there exists a unifier "σ" of "A" such that
formula_45
for every formula "B". Note that "σ" is an MGU of "A". In transitive modal and superintuitionistic logics with the finite model property, one can characterize projective formulas semantically as those whose set of finite "L"-models has the extension property: if "M" is a finite Kripke "L"-model with a root "r" whose cluster is a singleton, and the formula "A" holds at all points of "M" except for "r", then we can change the valuation of variables in "r" so as to make "A" true at "r" as well. Moreover, the proof provides an explicit construction of an MGU for a given projective formula "A".
In the basic transitive logics "IPC", "K"4, "S"4, "GL", "Grz" (and more generally in any transitive logic with the finite model property whose set of finite frames satisfies another kind of extension property), we can effectively construct for any formula "A" its projective approximation Π("A"): a finite set of projective formulas such that
formula_46 for every formula_47 and every unifier of "A" is a unifier of some formula from Π("A").
It follows that the set of MGUs of elements of Π("A") is a complete set of unifiers of "A". Furthermore, if "P" is a projective formula, then
formula_48 if and only if formula_49
for any formula "B". Thus we obtain the following effective characterization of admissible rules:
formula_50 if and only if formula_51
Bases of admissible rules.
Let "L" be a logic. A set "R" of "L"-admissible rules is called a basis of admissible rules, if every admissible rule Γ/"B" can be derived from "R" and the derivable rules of "L", using substitution, composition, and weakening. In other words, "R" is a basis if and only if formula_52 is the smallest structural consequence relation that includes formula_5 and "R".
Notice that decidability of admissible rules of a decidable logic is equivalent to the existence of recursive (or recursively enumerable) bases: on the one hand, the set of "all" admissible rules is a recursive basis if admissibility is decidable. On the other hand, the set of admissible rules is always co-recursively enumerable, and if we further have a recursively enumerable basis, the set of admissible rules is also recursively enumerable; hence it is decidable. (In other words, we can decide admissibility of "A"/"B" by the following algorithm: we start in parallel two exhaustive searches, one for a substitution "σ" that unifies "A" but not "B", and one for a derivation of "A"/"B" from "R" and formula_5. One of the searches has to eventually come up with an answer.) Apart from decidability, explicit bases of admissible rules are useful for some applications, e.g. in proof complexity.
For a given logic, we can ask whether it has a recursive or finite basis of admissible rules, and to provide an explicit basis. If a logic has no finite basis, it can nevertheless have an independent basis: a basis "R" such that no proper subset of "R" is a basis.
In general, very little can be said about existence of bases with desirable properties. For example, while tabular logics are generally well-behaved, and always finitely axiomatizable, there exist tabular modal logics without a finite or independent basis of rules. Finite bases are relatively rare: even the basic transitive logics "IPC", "K"4, "S"4, "GL", "Grz" do not have a finite basis of admissible rules, though they have independent bases.
formula_53
formula_54
are a basis of admissible rules in "IPC" or "KC".
formula_55
are a basis of admissible rules of "GL". (Note that the empty disjunction is defined as formula_15.)
formula_56
are a basis of admissible rules of "S"4 or "Grz".
Semantics for admissible rules.
A rule Γ/"B" is valid in a modal or intuitionistic Kripke frame formula_57, if the following is true for every valuation formula_58 in "F":
if for all formula_59 formula_60, then formula_61.
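For a finite frame this condition can be checked by brute force over all valuations. The following Python sketch does exactly that (the data structures and names are illustrative; formulas are nested tuples over a small set of connectives, and frames are pairs of a set of worlds and an accessibility relation):

from itertools import product

def holds(frame, valuation, world, formula):
    # Truth of a modal formula at a world of a finite Kripke model.
    # Formulas: ('var', name), ('bot',), ('not', a), ('and', a, b),
    #           ('or', a, b), ('imp', a, b), ('box', a).
    worlds, rel = frame
    op = formula[0]
    if op == 'var':
        return world in valuation[formula[1]]
    if op == 'bot':
        return False
    if op == 'not':
        return not holds(frame, valuation, world, formula[1])
    if op == 'and':
        return all(holds(frame, valuation, world, f) for f in formula[1:])
    if op == 'or':
        return any(holds(frame, valuation, world, f) for f in formula[1:])
    if op == 'imp':
        return (not holds(frame, valuation, world, formula[1])) or holds(frame, valuation, world, formula[2])
    if op == 'box':
        return all(holds(frame, valuation, v, formula[1]) for v in worlds if (world, v) in rel)
    raise ValueError(op)

def powerset(items):
    items = list(items)
    for bits in product((False, True), repeat=len(items)):
        yield frozenset(x for x, keep in zip(items, bits) if keep)

def rule_valid_in_frame(frame, premises, conclusion, variables):
    # The rule is valid in the frame if, under every valuation, the conclusion
    # holds at all worlds whenever every premise holds at all worlds.
    worlds, _ = frame
    for choice in product(*(list(powerset(worlds)) for _ in variables)):
        valuation = dict(zip(variables, choice))
        if all(holds(frame, valuation, w, p) for p in premises for w in worlds):
            if not all(holds(frame, valuation, w, conclusion) for w in worlds):
                return False
    return True

# Example: the rule Box p / p is not valid in a one-point irreflexive frame,
# since Box p holds there vacuously while p need not.
frame = ({0}, set())
print(rule_valid_in_frame(frame, [('box', ('var', 'p'))], ('var', 'p'), ['p']))   # False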
Let "X" be a subset of "W", and "t" a point in "W". We say that "t" is
We say that a frame "F" has reflexive (irreflexive) tight predecessors, if for every "finite" subset "X" of "W", there exists a reflexive (irreflexive) tight predecessor of "X" in "W".
We have:
Note that apart from a few trivial cases, frames with tight predecessors must be infinite. Hence admissible rules in basic transitive logics do not enjoy the finite model property.
Structural completeness.
While a general classification of structurally complete logics is not an easy task, we have a good understanding of some special cases.
Intuitionistic logic itself is not structurally complete, but its "fragments" may behave differently. Namely, any disjunction-free rule or implication-free rule admissible in a superintuitionistic logic is derivable. On the other hand, the Mints rule
formula_62
is admissible in intuitionistic logic but not derivable, and contains only implications and disjunctions.
We know the "maximal" structurally incomplete transitive logics. A logic is called hereditarily structurally complete, if any extension is structurally complete. For example, classical logic, as well as the logics "LC" and "Grz".3 mentioned above, are hereditarily structurally complete. A complete description of hereditarily structurally complete superintuitionistic and transitive modal logics was given respectively by Citkin and Rybakov. Namely, a superintuitionistic logic is hereditarily structurally complete if and only if it is not valid in any of a certain five Kripke frames
Similarly, an extension of "K"4 is hereditarily structurally complete if and only if it is not valid in any of certain twenty Kripke frames (including the five intuitionistic frames above).
There exist structurally complete logics that are not hereditarily structurally complete: for example, Medvedev's logic is structurally complete, but it is included in the structurally incomplete logic "KC".
Variants.
A rule with parameters is a rule of the form
formula_63
whose variables are divided into the "regular" variables "p""i", and the parameters "s""i". The rule is "L"-admissible if every "L"-unifier "σ" of "A" such that "σs""i" = "s""i" for each "i" is also a unifier of "B". The basic decidability results for admissible rules also carry to rules with parameters.
A multiple-conclusion rule is a pair (Γ,Δ) of two finite sets of formulas, written as
formula_64
Such a rule is admissible if every unifier of Γ is also a unifier of some formula from Δ. For example, a logic "L" is consistent iff it admits the rule
formula_65
and a superintuitionistic logic has the disjunction property iff it admits the rule
formula_66
Again, basic results on admissible rules generalize smoothly to multiple-conclusion rules. In logics with a variant of the disjunction property, the multiple-conclusion rules have the same expressive power as single-conclusion rules: for example, in "S"4 the rule above is equivalent to
formula_67
Nevertheless, multiple-conclusion rules can often be employed to simplify arguments.
In proof theory, admissibility is often considered in the context of sequent calculi, where the basic objects are sequents rather than formulas. For example, one can rephrase the cut-elimination theorem as saying that the cut-free sequent calculus admits the cut rule
formula_68
(By abuse of language, it is also sometimes said that the (full) sequent calculus admits cut, meaning its cut-free version does.) However, admissibility in sequent calculi is usually only a notational variant for admissibility in the corresponding logic: any complete calculus for (say) intuitionistic logic admits a sequent rule if and only if "IPC" admits the formula rule that we obtain by translating each sequent formula_69 to its characteristic formula formula_70.
| [
{
"math_id": 0,
"text": "\\{\\to,\\land,\\lor,\\bot\\}"
},
{
"math_id": 1,
"text": "\\{\\to,\\bot,\\Box\\}"
},
{
"math_id": 2,
"text": "\\sigma f(A_1,\\dots,A_n)=f(\\sigma A_1,\\dots,\\sigma A_n)"
},
{
"math_id": 3,
"text": "\\vdash"
},
{
"math_id": 4,
"text": "\\varnothing\\vdash A"
},
{
"math_id": 5,
"text": "\\vdash_L"
},
{
"math_id": 6,
"text": "\\frac{A_1,\\dots,A_n}B\\qquad\\text{or}\\qquad A_1,\\dots,A_n/B,"
},
{
"math_id": 7,
"text": "\\sigma A_1,\\dots,\\sigma A_n/\\sigma B"
},
{
"math_id": 8,
"text": "\\Gamma\\vdash B"
},
{
"math_id": 9,
"text": "\\Gamma\\mathrel{|\\!\\!\\!\\sim} B"
},
{
"math_id": 10,
"text": "\\phantom{.}\\!{|\\!\\!\\!\\sim}"
},
{
"math_id": 11,
"text": "{\\vdash}={\\,|\\!\\!\\!\\sim}"
},
{
"math_id": 12,
"text": "A_1,\\dots,A_n/B"
},
{
"math_id": 13,
"text": "A_1\\land\\dots\\land A_n/B"
},
{
"math_id": 14,
"text": "\\top"
},
{
"math_id": 15,
"text": "\\bot"
},
{
"math_id": 16,
"text": "(\\mathit{KPR})\\qquad\\frac{\\neg p\\to q\\lor r}{(\\neg p\\to q)\\lor(\\neg p\\to r)}"
},
{
"math_id": 17,
"text": "(\\neg p\\to q\\lor r)\\to ((\\neg p\\to q)\\lor(\\neg p\\to r))"
},
{
"math_id": 18,
"text": "\\frac{\\Box p}p"
},
{
"math_id": 19,
"text": "\\frac{\\Diamond p\\land\\Diamond\\neg p}\\bot"
},
{
"math_id": 20,
"text": "L \\supseteq S4.3"
},
{
"math_id": 21,
"text": "(\\mathit{LR})\\qquad\\frac{\\Box p\\to p}p"
},
{
"math_id": 22,
"text": "\\Pi^0_1"
},
{
"math_id": 23,
"text": "\\frac{\\bigvee_{i=0}^n\\bigl(\\bigwedge_{j=0}^k\\neg_{i,j}^0p_j\\land\\bigwedge_{j=0}^k\\neg_{i,j}^1\\Box p_j\\bigr)}{p_0},"
},
{
"math_id": 24,
"text": "\\neg_{i,j}^u"
},
{
"math_id": 25,
"text": "\\neg"
},
{
"math_id": 26,
"text": "\\textstyle\\bigvee_{i=0}^n\\varphi_i/p_0"
},
{
"math_id": 27,
"text": "\\varphi_i"
},
{
"math_id": 28,
"text": "\\{\\neg_{i,j}^0p_j,\\neg_{i,j}^1\\Box p_j\\mid j\\le k\\}"
},
{
"math_id": 29,
"text": "\\{\\varphi_i\\mid i\\le n\\}"
},
{
"math_id": 30,
"text": "M=\\langle W,R,{\\Vdash}\\rangle"
},
{
"math_id": 31,
"text": "\\varphi_i\\Vdash p_j\\iff p_j\\in\\varphi_i,"
},
{
"math_id": 32,
"text": "\\varphi_i\\,R\\,\\varphi_{i'}\\iff\\forall j\\le k\\,(\\Box p_j\\in\\varphi_i\\Rightarrow\\{p_j,\\Box p_j\\}\\subseteq\\varphi_{i'})."
},
{
"math_id": 33,
"text": "W\\subseteq\\{\\varphi_i\\mid i\\le n\\}"
},
{
"math_id": 34,
"text": "\\varphi_i\\nVdash p_0"
},
{
"math_id": 35,
"text": "i\\le n,"
},
{
"math_id": 36,
"text": "\\varphi_i\\Vdash\\varphi_i"
},
{
"math_id": 37,
"text": "\\alpha,\\beta\\in W"
},
{
"math_id": 38,
"text": "\\alpha\\Vdash\\Box p_j"
},
{
"math_id": 39,
"text": "\\varphi\\Vdash p_j\\land\\Box p_j"
},
{
"math_id": 40,
"text": "\\varphi\\in D"
},
{
"math_id": 41,
"text": "\\alpha\\Vdash p_j"
},
{
"math_id": 42,
"text": "A\\,|\\!\\!\\!\\sim_{IPC}B"
},
{
"math_id": 43,
"text": "T(A)\\,|\\!\\!\\!\\sim_{Grz}T(B)."
},
{
"math_id": 44,
"text": "\\vdash_L\\sigma p\\leftrightarrow \\upsilon\\tau p"
},
{
"math_id": 45,
"text": "A\\vdash_L B\\leftrightarrow\\sigma B"
},
{
"math_id": 46,
"text": "P\\vdash_L A"
},
{
"math_id": 47,
"text": "P\\in\\Pi(A),"
},
{
"math_id": 48,
"text": "P\\,|\\!\\!\\!\\sim_L B"
},
{
"math_id": 49,
"text": "P\\vdash_L B"
},
{
"math_id": 50,
"text": "A\\,|\\!\\!\\!\\sim_L B"
},
{
"math_id": 51,
"text": "\\forall P\\in\\Pi(A)\\,(P\\vdash_L B)."
},
{
"math_id": 52,
"text": "\\phantom{.}\\!{|\\!\\!\\!\\sim_L}"
},
{
"math_id": 53,
"text": "\\frac{\\Diamond p\\land\\Diamond\\neg p}\\bot."
},
{
"math_id": 54,
"text": "\\frac{\\displaystyle\\Bigl(\\bigwedge_{i=1}^n(p_i\\to q_i)\\to p_{n+1}\\lor p_{n+2}\\Bigr)\\lor r}{\\displaystyle\\bigvee_{j=1}^{n+2}\\Bigl(\\bigwedge_{i=1}^{n}(p_i\\to q_i)\\to p_j\\Bigr)\\lor r},\\qquad n\\ge 1"
},
{
"math_id": 55,
"text": "\\frac{\\displaystyle\\Box\\Bigl(\\Box q\\to\\bigvee_{i=1}^n\\Box p_i\\Bigr)\\lor\\Box r}{\\displaystyle\\bigvee_{i=1}^n\\Box(q\\land\\Box q\\to p_i)\\lor r},\\qquad n\\ge0"
},
{
"math_id": 56,
"text": "\\frac{\\displaystyle\\Box\\Bigl(\\Box(q\\to\\Box q)\\to\\bigvee_{i=1}^n\\Box p_i\\Bigr)\\lor\\Box r}{\\displaystyle\\bigvee_{i=1}^n\\Box(\\Box q\\to p_i)\\lor r},\\qquad n\\ge0"
},
{
"math_id": 57,
"text": "F=\\langle W,R\\rangle"
},
{
"math_id": 58,
"text": "\\Vdash"
},
{
"math_id": 59,
"text": "A\\in\\Gamma"
},
{
"math_id": 60,
"text": "\\forall x\\in W\\,(x\\Vdash A)"
},
{
"math_id": 61,
"text": "\\forall x\\in W\\,(x\\Vdash B)"
},
{
"math_id": 62,
"text": "\\frac{(p\\to q)\\to p\\lor r}{((p\\to q)\\to p)\\lor((p\\to q)\\to r)}"
},
{
"math_id": 63,
"text": "\\frac{A(p_1,\\dots,p_n,s_1,\\dots,s_k)}{B(p_1,\\dots,p_n,s_1,\\dots,s_k)},"
},
{
"math_id": 64,
"text": "\\frac{A_1,\\dots,A_n}{B_1,\\dots,B_m}\\qquad\\text{or}\\qquad A_1,\\dots,A_n/B_1,\\dots,B_m."
},
{
"math_id": 65,
"text": "\\frac{\\;\\bot\\;}{},"
},
{
"math_id": 66,
"text": "\\frac{p\\lor q}{p,q}."
},
{
"math_id": 67,
"text": "\\frac{A_1,\\dots,A_n}{\\Box B_1\\lor\\dots\\lor\\Box B_m}."
},
{
"math_id": 68,
"text": "\\frac{\\Gamma\\vdash A,\\Delta\\qquad\\Pi,A\\vdash\\Lambda}{\\Gamma,\\Pi\\vdash\\Delta,\\Lambda}."
},
{
"math_id": 69,
"text": "\\Gamma\\vdash\\Delta"
},
{
"math_id": 70,
"text": "\\bigwedge\\Gamma\\to\\bigvee\\Delta"
}
] | https://en.wikipedia.org/wiki?curid=1151794 |
11518797 | Cell lists | Cell lists (also sometimes referred to as cell linked-lists) are a data structure used in molecular dynamics simulations to find all pairs of atoms within a given cut-off distance of each other. These pairs are needed to compute the short-range non-bonded interactions in a system, such as Van der Waals forces or the short-range part of the electrostatic interaction when using Ewald summation.
Algorithm.
Cell lists work by subdividing the simulation domain into cells with an edge length greater than or equal to the cut-off radius of the interaction to be computed. The particles are sorted into these cells and the interactions are computed between particles in the same or neighbouring cells.
In its most basic form, the non-bonded interactions for a cut-off distance formula_0 are computed as follows:
for all neighbouring cell pairs formula_1 do
    for all formula_2 do
        for all formula_3 do
            formula_4
            if formula_5 then
                Compute the interaction between formula_6 and formula_7.
            end if
        end for
    end for
end for
Since the cell length is at least formula_0 in all dimensions, no particles within formula_0 of each other can be missed.
Given a simulation with formula_8 particles with a homogeneous particle density, the number of cells formula_9 is proportional to formula_8 and inversely proportional to the cut-off radius (i.e. if formula_8 increases, so does the number of cells). The average number of particles per cell formula_10 therefore does not depend on the total number of particles. The cost of interacting two cells is in formula_11. The number of cell pairs is proportional to the number of cells which is again proportional to the number of particles formula_8. The total cost of finding all pairwise distances within a given cut-off is in formula_12, which is significantly better than computing the formula_13 pairwise distances naively.
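A minimal Python sketch of the basic algorithm above, for a cubic box with open (non-periodic) boundaries; the names and indexing scheme are illustrative, and positions are assumed to lie in [0, box_length) in every dimension:

from itertools import product
from math import dist, floor

def cell_list_pairs(positions, box_length, r_cut):
    # Sort particles into cubic cells of edge length >= r_cut, then test distances
    # only between particles in the same or neighbouring cells.
    n_cells = max(1, int(floor(box_length / r_cut)))    # cells per dimension
    cell_len = box_length / n_cells                     # >= r_cut by construction
    cells = {}
    for i, p in enumerate(positions):
        idx = tuple(min(n_cells - 1, int(c / cell_len)) for c in p)
        cells.setdefault(idx, []).append(i)
    pairs = []
    for idx, members in cells.items():
        for shift in product((-1, 0, 1), repeat=3):
            neigh = tuple(a + b for a, b in zip(idx, shift))
            if neigh not in cells or neigh < idx:       # visit each cell pair only once
                continue
            for a in members:
                for b in cells[neigh]:
                    if (neigh != idx or a < b) and dist(positions[a], positions[b]) <= r_cut:
                        pairs.append((a, b))
    return pairs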
Periodic boundary conditions.
In most simulations, periodic boundary conditions are used to avoid imposing artificial boundary conditions. Using cell lists, these boundaries can be implemented in two ways.
Ghost cells.
In the ghost cells approach, the simulation box is wrapped in an additional layer of cells. These cells contain periodically wrapped copies of the corresponding simulation cells inside the domain.
Although the data—and usually also the computational cost—is doubled for interactions over the periodic boundary, this approach has the advantage of being straightforward to implement and very easy to parallelize, since cells will only interact with their geographical neighbours.
Periodic wrapping.
Instead of creating ghost cells, cell pairs that interact over a periodic boundary can also use a periodic correction vector formula_14. This vector, which can be stored or computed for every cell pair formula_15, contains the correction which needs to be applied to "wrap" one cell around the domain to neighbour the other. The pairwise distance between two particles formula_2 and formula_3 is then computed as
formula_16.
This approach, although more efficient than using ghost cells, is less straightforward to implement (the cell pairs need to be identified over the periodic boundaries and the vector formula_14 needs to be computed/stored).
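The correction can be sketched in a few lines; for simplicity the snippet below applies it per particle pair (minimum-image style) rather than storing one vector per cell pair, and it assumes a cubic box of edge length L. The function name periodic_distance2 is illustrative.
import numpy as np

def periodic_distance2(xa, xb, L):
    d = xa - xb
    # q plays the role of the correction vector: it wraps the separation back into the box.
    q = L * np.round(d / L)
    d_wrapped = d - q
    return np.dot(d_wrapped, d_wrapped)

# Two particles near opposite faces of a box with L = 10 are in fact only 0.3 apart:
# periodic_distance2(np.array([0.2, 5.0, 5.0]), np.array([9.9, 5.0, 5.0]), 10.0) -> 0.09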
Improvements.
Despite reducing the computational cost of finding all pairs within a given cut-off distance from formula_13 to formula_17, the cell list algorithm listed above still has some inefficiencies.
Consider a computational cell in three dimensions with edge length equal to the cut-off radius formula_0. The pairwise distance between all particles in the cell and in one of the neighbouring cells is computed. The cell has 26 neighbours: 6 sharing a common face, 12 sharing a common edge and 8 sharing a common corner. Of all the pairwise distances computed, only about 16% will actually be less than or equal to formula_0. In other words, 84% of all pairwise distance computations are spurious.
One way of overcoming this inefficiency is to partition the domain into cells of edge length smaller than formula_0. The pairwise interactions are then not just computed between neighboring cells, but between all cells within formula_0 of each other (an approach first suggested, and later implemented and analysed, in the literature). This approach can be taken to the limit wherein each cell holds at most one single particle, therefore reducing the number of spurious pairwise distance evaluations to zero. This gain in efficiency, however, is quickly offset by the number of cells formula_18 that need to be inspected for every interaction with a cell formula_19, which, for example in three dimensions, grows cubically with the inverse of the cell edge length. Setting the edge length to formula_20, however, already reduces the number of spurious distance evaluations to 63%.
Another approach is outlined and tested in Gonnet, in which the particles are first sorted along the axis connecting the cell centers. This approach generates only about 40% spurious pairwise distance computations, yet carries an additional cost due to sorting the particles. | [
{
"math_id": 0,
"text": "r_c"
},
{
"math_id": 1,
"text": "(C_\\alpha, C_\\beta)"
},
{
"math_id": 2,
"text": "p_\\alpha \\in C_\\alpha"
},
{
"math_id": 3,
"text": "p_\\beta \\in C_\\beta"
},
{
"math_id": 4,
"text": "r^2 = \\| \\mathbf x[p_\\alpha] - \\mathbf x[p_\\beta] \\|_2^2"
},
{
"math_id": 5,
"text": "r^2 \\le r_c^2"
},
{
"math_id": 6,
"text": "p_\\alpha"
},
{
"math_id": 7,
"text": "p_\\beta"
},
{
"math_id": 8,
"text": "N"
},
{
"math_id": 9,
"text": "m"
},
{
"math_id": 10,
"text": "\\overline{c} = N/m"
},
{
"math_id": 11,
"text": "\\mathcal O(\\overline{c}^2)"
},
{
"math_id": 12,
"text": "\\mathcal O(Nc) \\in \\mathcal O(N)"
},
{
"math_id": 13,
"text": "\\mathcal O(N^2)"
},
{
"math_id": 14,
"text": "\\mathbf q_{\\alpha\\beta}"
},
{
"math_id": 15,
"text": "(C_\\alpha,C_\\beta)"
},
{
"math_id": 16,
"text": "r^2 = \\| \\mathbf x[p_\\alpha] - \\mathbf x[p_\\beta] - \\mathbf q_{\\alpha\\beta} \\|^2_2"
},
{
"math_id": 17,
"text": "\\mathcal O(N)"
},
{
"math_id": 18,
"text": "C_\\beta"
},
{
"math_id": 19,
"text": "C_\\alpha"
},
{
"math_id": 20,
"text": "r_c/2"
}
] | https://en.wikipedia.org/wiki?curid=11518797 |
11518816 | Verlet list | A Verlet list (named after Loup Verlet) is a data structure in molecular dynamics simulations to efficiently maintain a list of all particles within a given cut-off distance of each other.
This method may easily be applied to Monte Carlo simulations. For short-range interactions, a cut-off radius is typically used, beyond which particle interactions are considered "close enough" to zero to be safely ignored. For each particle, a Verlet list is constructed that lists all other particles within the potential cut-off distance, plus some extra distance so that the list may be used for several consecutive Monte Carlo "sweeps" (set of Monte Carlo steps or moves) before being updated. If we wish to use the same Verlet list formula_0 times before updating, then the cut-off distance for inclusion in the Verlet list should be formula_1, where formula_2 is the cut-off distance of the potential, and formula_3 is the maximum Monte Carlo step (move) of a single particle. Thus, we will spend of order formula_4 time to compute the Verlet lists (formula_5 is the total number of particles), but are rewarded with formula_0 Monte Carlo "sweeps" of order formula_6 instead of formula_7. By optimizing our choice of formula_0 it can be shown that Verlet lists allow converting the formula_8 problem of Monte Carlo sweeps to an formula_9 problem.
Using cell lists to identify the nearest neighbors in formula_10 further reduces the computational cost.
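A minimal Python sketch of the construction follows; it builds the list with the enlarged radius formula_1 discussed above using the straightforward double loop over all pairs, and the function and variable names are illustrative only.
import numpy as np

def build_verlet_list(positions, r_cut, n_sweeps, max_step):
    # Enlarged list radius R_c + 2*n*d, so the list stays valid for n sweeps.
    r_list = r_cut + 2.0 * n_sweeps * max_step
    N = len(positions)
    neighbours = [[] for _ in range(N)]
    for i in range(N):              # O(N^2) construction
        for j in range(i + 1, N):
            if np.sum((positions[i] - positions[j]) ** 2) <= r_list ** 2:
                neighbours[i].append(j)
                neighbours[j].append(i)
    return neighbours

# For the next n_sweeps sweeps only the stored neighbours need to be tested against
# the true cut-off r_cut; afterwards the list must be rebuilt.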
References. | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "R_c + 2nd"
},
{
"math_id": 2,
"text": "R_c"
},
{
"math_id": 3,
"text": "d"
},
{
"math_id": 4,
"text": "N^2"
},
{
"math_id": 5,
"text": "N"
},
{
"math_id": 6,
"text": "Nn^2"
},
{
"math_id": 7,
"text": "NN"
},
{
"math_id": 8,
"text": "O(N^2)"
},
{
"math_id": 9,
"text": "O(N^{5/3})"
},
{
"math_id": 10,
"text": "O(N)"
}
] | https://en.wikipedia.org/wiki?curid=11518816 |
1151882 | Profit margin | Ratio between turnover and profit
Profit margin is a financial ratio that measures the percentage of profit earned by a company in relation to its revenue. Expressed as a percentage, it indicates how much profit the company makes for every dollar of revenue generated. Profit margin is important because this percentage provides a comprehensive picture of the operating efficiency of a business or an industry. All margin changes provide useful indicators for assessing growth potential, investment viability and the financial stability of a company relative to its competitors. Maintaining a healthy profit margin will help to ensure the financial success of a business, which will improve its ability to obtain loans.
It is calculated by finding the profit as a percentage of the revenue.
formula_0For example, if a company reports that it achieved a 35% profit margin during the last quarter, it means that it netted $0.35 from each dollar of sales generated.
Profit margins are generally distinct from rate of return. Profit margins can include risk premiums.
Overview.
Profit margin is calculated with selling price (or revenue) taken as base times 100. It is the percentage of selling price that is turned into profit, whereas "profit percentage" or "markup" is the percentage of cost price that one gets as profit on top of cost price. While selling something one should know what percentage of profit one will get on a particular investment, so companies calculate profit percentage to find the ratio of profit to cost.
The profit margin is used mostly for internal comparison. It is difficult to accurately compare the net profit ratio for different entities. Individual businesses' operating and financing arrangements vary so much that different entities are bound to have different levels of expenditure, so that comparison of one with another can have little meaning. A low profit margin indicates a low margin of safety: higher risk that a decline in sales will erase profits and result in a net loss, or a negative margin.
Profit margin is an indicator of a company's pricing strategies and how well it controls costs. Differences in competitive strategy and product mix cause the profit margin to vary among different companies.
Profit percentage.
On the other hand, profit percentage is calculated with cost taken as base:
formula_1
Suppose that something is bought for $40 and sold for $100.
Cost = $40
Revenue = $100
formula_2
formula_3
formula_4
formula_5 (profit divided by cost).
If the revenue is the same as the cost, profit percentage is 0%. The result above or below 100% can be calculated as the percentage of return on investment. In this example, the return on investment is a multiple of 1.5 of the investment, corresponding to a 150% gain.
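The same figures can be restated as a short calculation; the sketch below simply reproduces the $40 cost and $100 revenue example above, and the function names are illustrative.
def markup_percent(cost, revenue):
    # Profit percentage ("markup"): profit relative to cost.
    return 100.0 * (revenue - cost) / cost

def margin_percent(cost, revenue):
    # Profit margin: profit relative to revenue (the selling price).
    return 100.0 * (revenue - cost) / revenue

print(markup_percent(40, 100))   # 150.0 -> 150% profit percentage
print(margin_percent(40, 100))   # 60.0  -> 60% profit margin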
Type of profit margin.
There are 3 types of profit margins: gross profit margin, operating profit margin and net profit margin.
Gross profit margin.
Gross profit margin is calculated as gross profit divided by net sales (percentage). Gross profit is calculated by deducting the cost of goods sold (COGS)—that is, all the direct costs—from the revenue. This margin compares revenue to variable cost.
Service companies, such as law firms, can use the cost of revenue (the total cost to achieve a sale) instead of the cost of goods sold (COGS). It is calculated as:formula_6
formula_7
formula_8
Example. If a company takes in revenue of $1,000,000 and incurs $600,000 in COGS, gross profit is $400,000, and gross profit margin is (400,000 / 1,000,000) x 100 = 40%.
Operating profit margin.
Operating profit margin accounts for the cost of goods sold as well as operating expenses; it is earnings before interest and taxes (EBIT), also known as operating income, divided by revenue. The COGS formula is the same across most industries, but what is included in each of the elements can vary for each. It is calculated as: formula_9
Example. If a company has $1,000,000 in revenue, $600,000 in COGS, and $200,000 in operating expenses, operating profit is $200,000, and operating profit margin is (200,000 / 1,000,000) x 100 = 20%.
Net profit margin.
Net profit margin is net profit divided by revenue. Net profit is calculated as revenue minus all expenses from total sales. formula_10
Example. A company has $1,000,000 in revenue, $600,000 in COGS, $200,000 in operating expenses, and $50,000 in taxes. Net profit is $150,000, and net profit margin is (150,000 / 1,000,000) x 100 = 15%.
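The three margins can be computed side by side; the following sketch reuses the figures from the examples above (revenue of $1,000,000, COGS of $600,000, operating expenses of $200,000 and taxes of $50,000), with illustrative variable names.
revenue = 1_000_000
cogs = 600_000
operating_expenses = 200_000
taxes = 50_000

gross_profit = revenue - cogs
operating_profit = gross_profit - operating_expenses
net_profit = operating_profit - taxes

print(f"Gross margin:     {100 * gross_profit / revenue:.0f}%")      # 40%
print(f"Operating margin: {100 * operating_profit / revenue:.0f}%")  # 20%
print(f"Net margin:       {100 * net_profit / revenue:.0f}%")        # 15%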
Importance of profit margin.
Profit margin in an economy reflects the profitability of any business and enables relative comparisons between small and large businesses. It is a standard measure for evaluating the potential and capacity of a business to generate profits. These margins help businesses determine their pricing strategies for goods and services, since pricing is influenced by the cost of their products and the expected profit margin. Pricing errors which create cash-flow challenges can be detected using the profit margin concept, helping an entity to prevent potential problems and losses.
Profit margin is also used by businesses and companies to study seasonal patterns and changes in performance and to detect operational challenges. For example, a negative or zero profit margin indicates that the sales of a business do not suffice, or that it is failing to manage its expenses. This encourages business owners to identify the areas which inhibit growth, such as inventory accumulation, under-utilized resources or a high cost of production.
Profit margins are important when seeking credit and are often used as collateral. They are important to investors, who base their predictions on many factors, one of which is the profit margin. The margin is used to compare companies and influences the decision to invest in a particular venture. To attract investors, a high profit margin is preferred when compared with similar businesses.
Uses of Profit Margin in Business.
Profit margins can be used to assess a company's financial performance over time. By comparing profit margins over time, investors and analysts can assess whether a company's profitability is improving or deteriorating. This information can be used to make informed investment decisions.
Profit margins are a useful tool for comparing the profitability of different companies in the same industry. By comparing the profitability of similar companies, investors can determine which companies are more profitable and therefore potentially more attractive investment opportunities.
Profit margins can also be used to assess a company's pricing strategy. By analysing the profitability of different products and services, companies can determine which products or services are most profitable and adjust their pricing accordingly. This can help companies maximise profitability and remain competitive in the marketplace.
Margins can also be used to identify areas of a company's operations that may be inefficient or not cost effective. By analysing the profitability of different product lines, companies can identify areas where costs are too high in relation to the profits generated. This information can then be used to optimise operations and reduce costs.
By Sector.
Estimated average after-tax unadjusted operating margin in USA by sector as of January 2024:
References. | [
{
"math_id": 0,
"text": "\\text{Profit Margin} = {100 \\cdot \\text{Profit}\\over\\text{Revenue}} = {{100 \\cdot (\\text{Sales} - \\text{Total Expenses})}\\over\\text{Revenue}}"
},
{
"math_id": 1,
"text": "\\text{Profit Percentage} = {100 \\cdot \\text{Net Profit}\\over\\text{Cost}}"
},
{
"math_id": 2,
"text": "\\text{Profit} = \\$100 - \\$40 = \\$60"
},
{
"math_id": 3,
"text": "\\text{Profit percentage} = \\frac{100 \\times \\$60} {\\$40} = 150\\%"
},
{
"math_id": 4,
"text": "\\text{Profit margin} = \\frac{100 \\times (\\$100 - \\$40)} {\\$100} = 60\\%"
},
{
"math_id": 5,
"text": "\\text{Return on investment multiple} = \\frac{\\$60} {\\$40} = 1.5"
},
{
"math_id": 6,
"text": "\\text{Gross Profit} = \\text{Revenue} - (\\text{Direct materials} + \\text{Direct labor} + \\text{Factory overhead})"
},
{
"math_id": 7,
"text": "\\text{Net Sales} = \\text{Revenue} - \\text{Cost of Sales Returns} - \\text{Allowances and Discounts}"
},
{
"math_id": 8,
"text": "\\text{Gross Profit Margin} = {100 \\cdot \\text{Gross Profit}\\over\\text{Net Sales}}"
},
{
"math_id": 9,
"text": "\\text{Operating Profit Margin} = {100 \\cdot \\text{Operating Income}\\over\\text{Revenue}}"
},
{
"math_id": 10,
"text": "\\text{Net Profit Margin} = {100 \\cdot \\text{Net profit}\\over\\text{Revenue}}"
}
] | https://en.wikipedia.org/wiki?curid=1151882 |
11519719 | Probabilistic automaton | In mathematics and computer science, the probabilistic automaton (PA) is a generalization of the nondeterministic finite automaton; it includes the probability of a given transition into the transition function, turning it into a transition matrix. Thus, the probabilistic automaton also generalizes the concepts of a Markov chain and of a subshift of finite type. The languages recognized by probabilistic automata are called stochastic languages; these include the regular languages as a subset. The number of stochastic languages is uncountable.
The concept was introduced by Michael O. Rabin in 1963; a certain special case is sometimes known as the Rabin automaton (not to be confused with the subclass of ω-automata also referred to as Rabin automata). In recent years, a variant has been formulated in terms of quantum probabilities, the quantum finite automaton.
Informal Description.
For a given initial state and input character, a deterministic finite automaton (DFA) has exactly one next state, and a nondeterministic finite automaton (NFA) has a set of next states. A probabilistic automaton (PA) instead has a weighted set (or vector) of next states, where the weights must sum to 1 and therefore can be interpreted as probabilities (making it a stochastic vector). The notions of state and acceptance must also be modified to reflect the introduction of these weights. The state of the machine at a given step must now be represented by a stochastic vector of states, and an input string is accepted if the total probability of the machine being in an acceptance state exceeds some cut-off.
A PA is in some sense a half-way step from deterministic to non-deterministic, as it allows a set of next states but with restrictions on their weights. However, this is somewhat misleading, as the PA utilizes the notion of the real numbers to define the weights, which is absent in the definition of both DFAs and NFAs. This additional freedom enables them to decide languages that are not regular, such as the p-adic languages with irrational parameters. As such, PAs are more powerful than both DFAs and NFAs (which are famously equally powerful).
Formal Definition.
The probabilistic automaton may be defined as an extension of a nondeterministic finite automaton formula_0, together with two probabilities: the probability formula_1 of a particular state transition taking place, and with the initial state formula_2 replaced by a stochastic vector giving the probability of the automaton being in a given initial state.
For the ordinary non-deterministic finite automaton, one has
Here, formula_8 denotes the power set of formula_3.
By use of currying, the transition function formula_5 of a non-deterministic finite automaton can be written as a membership function
formula_9
so that formula_10 if formula_11 and formula_12 otherwise. The curried transition function can be understood to be a matrix with matrix entries
formula_13
The matrix formula_14 is then a square matrix, whose entries are zero or one, indicating whether a transition formula_15 is allowed by the NFA. Such a transition matrix is always defined for a non-deterministic finite automaton.
The probabilistic automaton replaces these matrices by a family of right stochastic matrices formula_16, for each symbol a in the alphabet formula_4 so that the probability of a transition is given by
formula_17
A transition from any given state to some state must occur with probability one, of course, and so one must have
formula_18
for all input letters formula_19 and internal states formula_20. The initial state of a probabilistic automaton is given by a row vector formula_21, whose components are the probabilities of the individual initial states formula_20, that add to 1:
formula_22
The transition matrix acts on the right, so that the state of the probabilistic automaton, after consuming the input string formula_23, would be
formula_24
In particular, the state of a probabilistic automaton is always a stochastic vector, since the product of any two stochastic matrices is a stochastic matrix, and the product of a stochastic vector and a stochastic matrix is again a stochastic vector. This vector is sometimes called the distribution of states, emphasizing that it is a discrete probability distribution.
The definition of a probabilistic automaton does not require the mechanics of the non-deterministic automaton, which may be dispensed with. Formally, a probabilistic automaton "PA" is defined as the tuple formula_25. A Rabin automaton is one for which the initial distribution formula_21 is a coordinate vector; that is, one that has zero in all but one entry, with the remaining entry being one.
Stochastic languages.
The languages recognized by probabilistic automata are called stochastic languages. They include the regular languages as a subset.
Let formula_26 be the set of "accepting" or "final" states of the automaton. By abuse of notation, formula_27 can also be understood to be the column vector that is the membership function for formula_27; that is, it has a 1 at the places corresponding to elements in formula_27, and a zero otherwise. This vector may be contracted with the internal state probability, to form a scalar. The language recognized by a specific automaton is then defined as
formula_28
where formula_29 is the set of all strings in the alphabet formula_4 (so that * is the Kleene star). The language depends on the value of the cut-point formula_30, normally taken to be in the range formula_31.
A language is called "η"-stochastic if and only if there exists some PA that recognizes the language, for fixed formula_30. A language is called stochastic if and only if there is some formula_31 for which formula_32 is "η"-stochastic.
A cut-point is said to be an isolated cut-point if and only if there exists a formula_33 such that
formula_34
for all formula_35
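As an illustration of these definitions, the short NumPy sketch below builds a two-state probabilistic automaton, propagates the state distribution through the transition matrices of an input string as described above, and compares the resulting acceptance probability with a cut-point. The particular matrices, strings and cut-point are arbitrary choices for demonstration.
import numpy as np

# Two states {0, 1}; state 1 is accepting. One right-stochastic matrix per input symbol.
P = {
    "a": np.array([[0.5, 0.5],
                   [0.2, 0.8]]),
    "b": np.array([[0.9, 0.1],
                   [0.4, 0.6]]),
}
v0 = np.array([1.0, 0.0])       # Rabin-style initial distribution: start in state 0
accept = np.array([0.0, 1.0])   # membership (column) vector of the accepting set

def acceptance_probability(word):
    v = v0
    for symbol in word:
        v = v @ P[symbol]       # v P_a P_b ... as in the text
    return float(v @ accept)

eta = 0.5                       # cut-point
for w in ["ab", "ba", "aab"]:
    p = acceptance_probability(w)
    print(w, round(p, 4), "accepted" if p > eta else "rejected")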
Properties.
Every regular language is stochastic, and more strongly, every regular language is "η"-stochastic. A weak converse is that every 0-stochastic language is regular; however, the general converse does not hold: there are stochastic languages that are not regular.
Every "η"-stochastic language is stochastic, for some formula_36.
Every stochastic language is representable by a Rabin automaton.
If formula_30 is an isolated cut-point, then formula_32 is a regular language.
"p"-adic languages.
The "p"-adic languages provide an example of a stochastic language that is not regular, and also show that the number of stochastic languages is uncountable. A "p"-adic language is defined as the set of strings
formula_37
in the letters formula_38.
That is, a "p"-adic language is merely the set of real numbers in [0, 1], written in base-"p", such that they are greater than formula_30. It is straightforward to show that all "p"-adic languages are stochastic. In particular, this implies that the number of stochastic languages is uncountable. A "p"-adic language is regular if and only if formula_30 is rational.
Generalizations.
The probabilistic automaton has a geometric interpretation: the state vector can be understood to be a point that lives on the face of the standard simplex, opposite to the orthogonal corner. The transition matrices form a monoid, acting on the point. This may be generalized by having the point be from some general topological space, while the transition matrices are chosen from a collection of operators acting on the topological space, thus forming a semiautomaton. When the cut-point is suitably generalized, one has a topological automaton.
An example of such a generalization is the quantum finite automaton; here, the automaton state is represented by a point in complex projective space, while the transition matrices are a fixed set chosen from the unitary group. The cut-point is understood as a limit on the maximum value of the quantum angle.
Notes. | [
{
"math_id": 0,
"text": "(Q,\\Sigma,\\delta,q_0,F)"
},
{
"math_id": 1,
"text": "P"
},
{
"math_id": 2,
"text": "q_0"
},
{
"math_id": 3,
"text": "Q"
},
{
"math_id": 4,
"text": "\\Sigma"
},
{
"math_id": 5,
"text": "\\delta:Q\\times\\Sigma \\to \\wp(Q)"
},
{
"math_id": 6,
"text": "F"
},
{
"math_id": 7,
"text": "F\\subseteq Q"
},
{
"math_id": 8,
"text": "\\wp(Q)"
},
{
"math_id": 9,
"text": "\\delta:Q\\times\\Sigma \\times Q\\to \\{0,1\\}"
},
{
"math_id": 10,
"text": "\\delta(q,a,q^\\prime)=1"
},
{
"math_id": 11,
"text": "q^\\prime\\in \\delta(q,a)"
},
{
"math_id": 12,
"text": "0"
},
{
"math_id": 13,
"text": "\\left[\\theta_a\\right]_{qq^\\prime}=\\delta(q,a,q^\\prime)"
},
{
"math_id": 14,
"text": "\\theta_a"
},
{
"math_id": 15,
"text": "q\\stackrel{a}{\\rightarrow} q^\\prime"
},
{
"math_id": 16,
"text": "P_a"
},
{
"math_id": 17,
"text": "\\left[P_a\\right]_{qq^\\prime}"
},
{
"math_id": 18,
"text": "\\sum_{q^\\prime}\\left[P_a\\right]_{qq^\\prime}=1"
},
{
"math_id": 19,
"text": "a"
},
{
"math_id": 20,
"text": "q"
},
{
"math_id": 21,
"text": "v"
},
{
"math_id": 22,
"text": "\\sum_{q}\\left[v\\right]_{q}=1"
},
{
"math_id": 23,
"text": "abc"
},
{
"math_id": 24,
"text": "v P_a P_b P_c"
},
{
"math_id": 25,
"text": "(Q,\\Sigma,P, v, F)"
},
{
"math_id": 26,
"text": "F=Q_\\text{accept}\\subseteq Q"
},
{
"math_id": 27,
"text": "Q_\\text{accept}"
},
{
"math_id": 28,
"text": "L_\\eta = \\{s\\in\\Sigma^* \\vert vP_s Q_\\text{accept} > \\eta\\}"
},
{
"math_id": 29,
"text": "\\Sigma^*"
},
{
"math_id": 30,
"text": "\\eta"
},
{
"math_id": 31,
"text": "0\\le \\eta<1"
},
{
"math_id": 32,
"text": "L_\\eta"
},
{
"math_id": 33,
"text": "\\delta>0"
},
{
"math_id": 34,
"text": "\\vert vP(s)Q_\\text{accept} - \\eta \\vert \\ge \\delta"
},
{
"math_id": 35,
"text": "s\\in\\Sigma^*"
},
{
"math_id": 36,
"text": "0<\\eta<1"
},
{
"math_id": 37,
"text": "L_{\\eta}(p)=\\{n_1n_2n_3 \\ldots \\vert 0\\le n_k<p \\text{ and } \n0.n_1n_2n_3\\ldots > \\eta \\}"
},
{
"math_id": 38,
"text": "0,1,2,\\ldots,(p-1)"
}
] | https://en.wikipedia.org/wiki?curid=11519719 |
11521009 | Lemaître–Tolman metric | In physics, the Lemaître–Tolman metric, also known as the Lemaître–Tolman–Bondi metric or the Tolman metric, is a Lorentzian metric based on an exact solution of Einstein's field equations; it describes an isotropic and expanding (or contracting) universe which is not homogeneous, and is thus used in cosmology as an alternative to the standard Friedmann–Lemaître–Robertson–Walker metric to model the expansion of the universe. It has also been used to model a universe which has a fractal distribution of matter to explain the accelerating expansion of the universe. It was first found by Georges Lemaître in 1933 and Richard Tolman in 1934 and later investigated by Hermann Bondi in 1947.
Details.
In a synchronous reference system where formula_0 and formula_1, the time coordinate formula_2 (we set formula_3) is also the proper time formula_4 and clocks at all points can be synchronized. For a dust-like medium where the pressure is zero, dust particles move freely i.e., along the geodesics and thus the synchronous frame is also a comoving frame wherein the components of four velocity formula_5 are formula_6. The solution of the field equations yield
formula_7
where formula_8 is the "radius" or "luminosity distance" in the sense that the surface area of a sphere with radius formula_8 is formula_9 and formula_10 is just interpreted as the Lagrangian coordinate and
formula_11
subjected to the conditions formula_12 and formula_13, where formula_14 and formula_15 are arbitrary functions, formula_16 is the matter density and, finally, primes denote differentiation with respect to formula_10. We can also assume formula_17 and formula_18, which excludes cases resulting in the crossing of material particles during their motion. To each particle there corresponds a value of formula_10; the function formula_19 and its time derivative respectively provide its law of motion and radial velocity. An interesting property of the solution described above is that when formula_14 and formula_15 are plotted as functions of formula_10, their form in the range formula_20 is independent of their form for formula_21. This prediction is evidently similar to the Newtonian theory. The total mass within the sphere formula_22 is given by
formula_23
which implies that the Schwarzschild radius is given by formula_24.
The function formula_19 can be obtained upon integration and is given in a parametric form with a parameter formula_25 with three possibilities,
formula_26
formula_27
formula_28
where formula_29 emerges as another arbitrary function. However, we know that a centrally symmetric matter distribution can be described by at most two functions, namely its density distribution and the radial velocity of the matter. This means that of the three functions formula_30, only two are independent. In fact, since no particular selection has yet been made for the Lagrangian coordinate formula_10, which can be subjected to an arbitrary transformation, we can see that only two functions are arbitrary. For the dust-like medium, there exists another solution where formula_31 is independent of formula_10, although such a solution does not correspond to the collapse of a finite body of matter.
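For the case f = 0 the solution is explicit, and a small numerical check is straightforward. The Python sketch below picks illustrative functions F(R) and τ0(R) (placeholder choices, not taken from any reference), evaluates r(τ, R), and verifies the relation (∂r/∂τ)² = F/r by finite differences.
import numpy as np

def F(R):
    return R**3                 # illustrative choice of the arbitrary function F(R)

def tau0(R):
    return 1.0 + R              # illustrative choice of the collapse-time function

def r(tau, R):
    # f = 0 branch: r = (9F/4)^(1/3) (tau0 - tau)^(2/3), valid for tau < tau0(R).
    return (9.0 * F(R) / 4.0) ** (1.0 / 3.0) * (tau0(R) - tau) ** (2.0 / 3.0)

R, tau, h = 0.8, 0.3, 1e-6
drdt = (r(tau + h, R) - r(tau - h, R)) / (2.0 * h)
print(drdt**2, F(R) / r(tau, R))   # the two numbers agree to the finite-difference error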
Schwarzschild solution.
When formula_32const., formula_33 and therefore the solution corresponds to empty space with a point mass located at the center. Further by setting formula_34 and formula_35, the solution reduces to Schwarzschild solution expressed in Lemaître coordinates.
Gravitational collapse.
The gravitational collapse occurs when formula_36 reaches formula_29 with formula_37. The moment formula_38 corresponds to the arrival of matter denoted by its Lagrangian coordinate formula_10 to the center. In all three cases, as formula_39, the asymptotic behaviors are given by
formula_40
in which the first two relations indicate that, in the comoving frame, all radial distances tend to infinity and tangential distances approach zero like formula_41, whereas the third relation shows that the matter density increases like formula_42
formula_44
Here both the tangential and radial distances go to zero like formula_45, whereas the matter density increases like formula_46
References. | [
{
"math_id": 0,
"text": "g_{00}=1"
},
{
"math_id": 1,
"text": "g_{0\\alpha}=0"
},
{
"math_id": 2,
"text": "x^0=t"
},
{
"math_id": 3,
"text": "G=c=1"
},
{
"math_id": 4,
"text": "\\tau=\\sqrt{g_{00}} x^0"
},
{
"math_id": 5,
"text": "u^i=dx^i/ds"
},
{
"math_id": 6,
"text": "u^0=1,\\,u^\\alpha=0"
},
{
"math_id": 7,
"text": "ds^2 = d\\tau^2 - e^{\\lambda(\\tau,R)} dR^2 - r^2(\\tau,R) (d\\theta^2 + \\sin^2\\theta d\\phi^2)"
},
{
"math_id": 8,
"text": "r"
},
{
"math_id": 9,
"text": "4\\pi r^2"
},
{
"math_id": 10,
"text": "R"
},
{
"math_id": 11,
"text": "e^\\lambda = \\frac{r'^2}{1+f(R)}, \\quad \\left(\\frac{\\partial r}{\\partial \\tau}\\right)^2 = f(R) + \\frac{F(R)}{r}, \\quad 4\\pi r^2\\rho = \\frac{F'(R)}{2r'}"
},
{
"math_id": 12,
"text": "1+f>0"
},
{
"math_id": 13,
"text": "F>0"
},
{
"math_id": 14,
"text": "f(R)"
},
{
"math_id": 15,
"text": "F(R)"
},
{
"math_id": 16,
"text": "\\rho"
},
{
"math_id": 17,
"text": "F'>0"
},
{
"math_id": 18,
"text": "r'>0"
},
{
"math_id": 19,
"text": "r(\\tau,R)"
},
{
"math_id": 20,
"text": "R\\in [0,R_0]"
},
{
"math_id": 21,
"text": "R>R_0"
},
{
"math_id": 22,
"text": "R=R_0"
},
{
"math_id": 23,
"text": "m = 4\\pi \\int_0^{r(\\tau,R_0)} \\rho r^2 dr=4\\pi \\int_0^{R_0} \\rho r' r^2 dR= \\frac{F(R_0)}{2}"
},
{
"math_id": 24,
"text": "r_s=2m=F(R_0)"
},
{
"math_id": 25,
"text": "\\eta"
},
{
"math_id": 26,
"text": "f > 0:~~~~~~~~ r = \\frac{F}{2f}(\\cosh\\eta-1), \\quad \\tau_0 -\\tau = \\frac{F}{2f^{3/2}}(\\sinh\\eta-\\eta),"
},
{
"math_id": 27,
"text": "f < 0:~~~~~~~~ r = \\frac{F}{-2f}(1-\\cosh\\eta), \\quad \\tau_0 -\\tau = \\frac{F}{2(-f)^{3/2}}(\\eta-\\sinh\\eta)"
},
{
"math_id": 28,
"text": "f = 0:~~~~~~~~ r = \\left(\\frac{9F}{4}\\right)^{1/3}(\\tau_0-\\tau)^{2/3}."
},
{
"math_id": 29,
"text": "\\tau_0(R)"
},
{
"math_id": 30,
"text": "f,F,\\tau_0"
},
{
"math_id": 31,
"text": "r=r(\\tau)"
},
{
"math_id": 32,
"text": "F=r_s="
},
{
"math_id": 33,
"text": "\\rho=0"
},
{
"math_id": 34,
"text": "f=0"
},
{
"math_id": 35,
"text": "\\tau_0=R"
},
{
"math_id": 36,
"text": "\\tau"
},
{
"math_id": 37,
"text": "\\tau_0'>0"
},
{
"math_id": 38,
"text": "\\tau=\\tau_0(R)"
},
{
"math_id": 39,
"text": "\\tau\\rightarrow \\tau_0(R)"
},
{
"math_id": 40,
"text": "r \\approx \\left(\\frac{9F}{4}\\right)^{1/3}(\\tau_0-\\tau)^{2/3}, \\quad e^{\\lambda/2} \\approx \\left(\\frac{2F}{3}\\right)^{1/3} \\frac{\\tau_0'}{\\sqrt{1+f}} (\\tau_0-\\tau)^{-1/3}, \\quad 4\\pi \\rho \\approx \\frac{F'}{3F\\tau_0'(\\tau_0-\\tau)}"
},
{
"math_id": 41,
"text": "\\tau-\\tau_0"
},
{
"math_id": 42,
"text": "1/(\\tau_0-\\tau)."
},
{
"math_id": 43,
"text": "\\tau_0(R)="
},
{
"math_id": 44,
"text": "r \\approx \\left(\\frac{9F}{3}\\right)^{1/3}(\\tau_0-\\tau)^{2/3}, \\quad e^{\\lambda/2} \\approx \\left(\\frac{2}{3}\\right)^{1/3} \\frac{F'}{2F^{2/3}\\sqrt{1+f}} (\\tau_0-\\tau)^{2/3}, \\quad 4\\pi \\rho \\approx \\frac{2}{3(\\tau_0-\\tau)^2}."
},
{
"math_id": 45,
"text": "(\\tau_0-\\tau)^{2/3}"
},
{
"math_id": 46,
"text": "1/(\\tau_0-\\tau)^2."
}
] | https://en.wikipedia.org/wiki?curid=11521009 |
11521028 | Navarro–Frenk–White profile | The Navarro–Frenk–White (NFW) profile is a spatial mass distribution of dark matter fitted to dark matter halos identified in N-body simulations by Julio Navarro, Carlos Frenk and Simon White. The NFW profile is one of the most commonly used model profiles for dark matter halos.
Density distribution.
In the NFW profile, the density of dark matter as a function of radius is given by:
formula_0
where "ρ"0 and the "scale radius", "Rs", are parameters which vary from halo to halo.
The integrated mass within some radius "R"max is
formula_1
The total mass is divergent, but it is often useful to take the edge of the halo to be the virial radius, "R"vir, which is related to the "concentration parameter", "c", and scale radius via
formula_2
(Alternatively, one can define a radius at which the average density within this radius is formula_3 times the critical or mean density of the universe, resulting in a similar relation: formula_4. The virial radius will lie around formula_5 to formula_6, though values of formula_7 are used in X-ray astronomy, for example, due to higher concentrations.)
The total mass in the halo within formula_8 is
formula_9
The specific value of "c" is roughly 10 or 15 for the Milky Way, and may range from 4 to 40 for halos of various sizes.
This can then be used to define a dark matter halo in terms of its mean density, solving the above equation for formula_10 and substituting it into the original equation. This gives
formula_11
where formula_12, formula_13 and formula_14.
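A brief numerical sketch of the profile and of its enclosed mass follows; the values chosen for ρ0, Rs and c are placeholders for illustration and do not correspond to any particular halo.
import numpy as np

def rho_nfw(r, rho0, Rs):
    x = r / Rs
    return rho0 / (x * (1.0 + x) ** 2)

def mass_nfw(r_max, rho0, Rs):
    # Analytic integral of 4 pi r^2 rho(r) from 0 to r_max, as given above.
    return 4.0 * np.pi * rho0 * Rs**3 * (np.log((Rs + r_max) / Rs) - r_max / (Rs + r_max))

rho0, Rs, c = 1.0, 20.0, 10.0        # arbitrary units, illustrative values
R_vir = c * Rs
print(rho_nfw(0.1 * Rs, rho0, Rs))   # density well inside the scale radius
print(mass_nfw(R_vir, rho0, Rs))     # equals 4 pi rho0 Rs^3 [ln(1 + c) - c/(1 + c)]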
Higher order moments.
The integral of the "squared density" is
formula_15
so that the mean squared density inside of "R"max is
formula_16
which for the virial radius simplifies to
formula_17
and the mean squared density inside the scale radius is simply
formula_18
Gravitational potential.
Solving Poisson's equation gives the gravitational potential
formula_19
with the limits formula_20 and formula_21.
The acceleration due to the NFW potential is:
formula_22
where formula_23 is the position vector and formula_24.
Radius of the maximum circular velocity.
The radius of the maximum circular velocity (confusingly sometimes also referred to as formula_25) can be found from the maximum of formula_26 as
formula_27
where formula_28 is the positive root of
formula_29
Maximum circular velocity is also related to the characteristic density and length scale of NFW profile:
formula_30
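The constant α ≈ 2.16258 can be recovered numerically from the equation above, and the corresponding maximum circular velocity can then be checked against the relation just quoted. The sketch below assumes SciPy is available and works in units with G = 1; all numerical choices are illustrative.
import numpy as np
from scipy.optimize import brentq

def g(alpha):
    # Root of ln(1 + a) - a (1 + 2a) / (1 + a)^2 = 0 for a > 0.
    return np.log(1.0 + alpha) - alpha * (1.0 + 2.0 * alpha) / (1.0 + alpha) ** 2

alpha = brentq(g, 0.1, 10.0)
print(alpha)                         # ~ 2.16258

rho0, Rs = 1.0, 1.0                  # illustrative profile parameters, G = 1
def mass(r):
    return 4.0 * np.pi * rho0 * Rs**3 * (np.log(1.0 + r / Rs) - (r / Rs) / (1.0 + r / Rs))

r_vmax = alpha * Rs
v_max = np.sqrt(mass(r_vmax) / r_vmax)   # circular velocity v^2 = G M(<r) / r
print(v_max)                             # ~ 1.65, in line with the ~1.64 Rs sqrt(G rho_s) relation above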
Dark matter simulations.
Over a broad range of halo mass and redshift, the NFW profile approximates the equilibrium configuration of dark matter halos produced in simulations of collisionless dark matter particles by numerous groups of scientists. Before the dark matter virializes, the distribution of dark matter deviates from an NFW profile, and significant substructure is observed in simulations both during and after the collapse of the halos.
Alternative models, in particular the Einasto profile, have been shown to represent the dark matter profiles of simulated halos as well as or better than the NFW profile by including an additional third parameter. The Einasto profile has a finite central density, unlike the NFW profile which has a divergent (infinite) central density. Because of the limited resolution of N-body simulations, it is not yet known which model provides the best description of the central densities of simulated dark-matter halos.
Simulations assuming different cosmological initial conditions produce halo populations in which the two parameters of the NFW profile follow different mass-concentration relations, depending on cosmological properties such as the density of the universe and the nature of the very early process which created all structure. Observational measurements of this relation thus offer a route to constraining these properties.
Observations of halos.
The dark matter density profiles of massive galaxy clusters can be measured directly by gravitational lensing and agree well with the NFW profiles predicted for cosmologies with the parameters inferred from other data. For lower mass halos, gravitational lensing is too noisy to give useful results for individual objects, but accurate measurements can still be made by averaging the profiles of many similar systems. For the main body of the halos, the agreement with the predictions remains good down to halo masses as small as those of the halos surrounding isolated galaxies like our own. The inner regions of halos are beyond the reach of lensing measurements, however, and other techniques give results which disagree with NFW predictions for the dark matter distribution inside the visible galaxies which lie at halo centers.
Observations of the inner regions of bright galaxies like the Milky Way and M31 may be compatible with the NFW profile, but this is open to debate. The NFW dark matter profile is not consistent with observations of the inner regions of low surface brightness galaxies, which have less central mass than predicted. This is known as the cusp-core or cuspy halo problem.
It is currently debated whether this discrepancy is a consequence of the nature of the dark matter, of the influence of dynamical processes during galaxy formation, or of shortcomings in dynamical modelling of the observational data.
References. | [
{
"math_id": 0,
"text": "\\rho (r) = \\frac{\\rho_0}{\\frac{r}{R_s}\\left(1~+~\\frac{r}{R_s}\\right)^2}"
},
{
"math_id": 1,
"text": "\nM = \\int_0^{R_\\max} 4\\pi r^2 \\rho (r) \\, dr=4\\pi \\rho_0 R_s^3 \\left[\n\\ln\\left(\\frac{R_s+R_\\max}{R_s}\\right)-\\frac{R_\\max}{R_s+R_\\max}\\right]\n"
},
{
"math_id": 2,
"text": "R_\\mathrm{vir} = c R_s"
},
{
"math_id": 3,
"text": " \\Delta "
},
{
"math_id": 4,
"text": " R_{\\Delta}=c_{\\Delta} R_s "
},
{
"math_id": 5,
"text": " R_{200} "
},
{
"math_id": 6,
"text": " R_{500} "
},
{
"math_id": 7,
"text": " \\Delta=1000 "
},
{
"math_id": 8,
"text": " R_\\mathrm{vir} "
},
{
"math_id": 9,
"text": "M = \\int_0^{R_\\mathrm{vir}} 4\\pi r^2 \\rho (r) \\, dr=4\\pi \\rho_0 R_s^3 \\left[\\ln(1+c) - \\frac{c}{1+c} \\right]."
},
{
"math_id": 10,
"text": " \\rho_0"
},
{
"math_id": 11,
"text": "\\rho(r) = \\frac{\\rho_\\text{halo}}{3 A_\\text{NFW} \\, x (c^{-1} + x)^2} "
},
{
"math_id": 12,
"text": "\\rho_\\text{halo} \\equiv M \\biggr / \\left( \\frac{4}{3} \\pi R_\\text{vir}^3 \\right) "
},
{
"math_id": 13,
"text": "A_\\text{NFW} = \\left[ \\ln(1+c) - \\frac{c}{1+c} \\right] "
},
{
"math_id": 14,
"text": "x = r/R_\\text{vir} "
},
{
"math_id": 15,
"text": " \\int_0^{R_\\max} 4\\pi r^2 \\rho (r)^2 \\, dr\n= \\frac{4\\pi}{3} R_s^3 \\rho_0^2 \\left[1-\\frac{R_s^3}{(R_s+R_\\max)^3}\\right] "
},
{
"math_id": 16,
"text": " \\langle \\rho^2 \\rangle_{R_\\max}\n= \\frac{R_s^3\\rho_0^2}{R_\\max^3} \\left[1-\\frac{R_s^3}{(R_s+R_\\max)^3}\\right] "
},
{
"math_id": 17,
"text": "\\langle \\rho^2 \\rangle_{R_\\mathrm{vir}}\n= \\frac{\\rho_0^2}{c^3} \\left[1-\\frac{1}{(1+c)^3}\\right]\n\\approx \\frac{\\rho_0^2}{c^3}"
},
{
"math_id": 18,
"text": "\\langle \\rho^2 \\rangle_{R_s} = \\frac{7}{8}\\rho_0^2"
},
{
"math_id": 19,
"text": "\n\\Phi(r) = - \\frac{4\\pi G \\rho_0 R_s^3}{r} \\ln \\left( 1+ \\frac{r}{R_s} \\right)\n"
},
{
"math_id": 20,
"text": "\\lim_{r\\to \\infty} \\Phi=0"
},
{
"math_id": 21,
"text": "\\lim_{r\\to 0} \\Phi = -4\\pi G\\rho_0 R_s^2"
},
{
"math_id": 22,
"text": "\\mathbf{a} = -\\nabla{\\Phi_\\text{NFW}(\\mathbf{r})} = G\\frac{M_\\text{vir}}{\\ln{(1+c)}-c/(1+c) } \\frac{r/(r+R_s)-\\ln{(1+r/R_s)}}{r^3} \\mathbf{r}"
},
{
"math_id": 23,
"text": "\\mathbf{r}"
},
{
"math_id": 24,
"text": "M_\\text{vir} = \\frac{4\\pi}{3}r_\\text{vir}^3 200\\rho_\\text{crit}"
},
{
"math_id": 25,
"text": "R_\\max"
},
{
"math_id": 26,
"text": "M(r)/r"
},
{
"math_id": 27,
"text": "R^\\max_{\\mathrm{circ}} = \\alpha R_s"
},
{
"math_id": 28,
"text": "\\alpha \\approx 2.16258"
},
{
"math_id": 29,
"text": "\\ln \\left( 1 + \\alpha \\right) = \\frac{\\alpha (1+2\\alpha)}{(1+\\alpha)^2}."
},
{
"math_id": 30,
"text": "V^\\max_{\\mathrm{circ}} \\approx 1.64 R_s \\sqrt{G \\rho_s}"
}
] | https://en.wikipedia.org/wiki?curid=11521028 |
1152311 | Signature (topology) | In the field of topology, the signature is an integer invariant which is defined for an oriented manifold "M" of dimension divisible by four.
This invariant of a manifold has been studied in detail, starting with Rokhlin's theorem for 4-manifolds, and Hirzebruch signature theorem.
Definition.
Given a connected and oriented manifold "M" of dimension 4"k", the cup product gives rise to a quadratic form "Q" on the 'middle' real cohomology group
formula_0.
The basic identity for the cup product
formula_1
shows that with "p" = "q" = 2"k" the product is symmetric. It takes values in
formula_2.
If we assume also that "M" is compact, Poincaré duality identifies this with
formula_3
which can be identified with formula_4. Therefore the cup product, under these hypotheses, does give rise to a symmetric bilinear form on "H"2"k"("M","R"); and therefore to a quadratic form "Q". The form "Q" is non-degenerate due to Poincaré duality, as it pairs non-degenerately with itself. More generally, the signature can be defined in this way for any general compact polyhedron with "4n"-dimensional Poincaré duality.
The signature formula_5 of "M" is by definition the signature of "Q", that is, formula_6 where any diagonal matrix defining "Q" has formula_7 positive entries and formula_8 negative entries. If "M" is not connected, its signature is defined to be the sum of the signatures of its connected components.
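Computationally, the signature of a nondegenerate symmetric form can be read off from the signs of the eigenvalues of any symmetric matrix representing it. The sketch below does this for two standard forms, the hyperbolic plane form and the (positive definite) E8 form given by the E8 Cartan matrix; the helper name signature_of is an illustrative choice.
import numpy as np

def signature_of(M):
    # n_+ minus n_-, counted from the eigenvalues of the symmetric matrix M.
    eigenvalues = np.linalg.eigvalsh(M)
    return int(np.sum(eigenvalues > 0) - np.sum(eigenvalues < 0))

# Hyperbolic form ((0,1),(1,0)): one positive and one negative eigenvalue, signature 0.
H = np.array([[0, 1],
              [1, 0]])

# E8 Cartan matrix: 2 on the diagonal, -1 for each edge of the E8 Dynkin diagram
# (a chain 0-1-2-3-4-5-6 with an extra node 7 attached to node 2); it is positive
# definite, so its signature equals its rank, 8.
E8 = 2 * np.eye(8, dtype=int)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (2, 7)]:
    E8[i, j] = E8[j, i] = -1

print(signature_of(H))    # 0
print(signature_of(E8))   # 8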
Other dimensions.
If "M" has dimension not divisible by 4, its signature is usually defined to be 0. There are alternative generalizations in L-theory: the signature can be interpreted as the 4"k"-dimensional (simply connected) symmetric L-group formula_9 or as the 4"k"-dimensional quadratic L-group formula_10, and these invariants do not always vanish for other dimensions. The Kervaire invariant is a mod 2 invariant (i.e., an element of formula_11) of framed manifolds of dimension 4"k"+2 (the quadratic L-group formula_12), while the de Rham invariant is a mod 2 invariant of manifolds of dimension 4"k"+1 (the symmetric L-group formula_13); the other-dimensional L-groups vanish.
Kervaire invariant.
When formula_14 is twice an odd integer (singly even), the same construction gives rise to an antisymmetric bilinear form. Such forms do not have a signature invariant; if they are non-degenerate, any two such forms are equivalent. However, if one takes a quadratic refinement of the form, which occurs if one has a framed manifold, then the resulting ε-quadratic forms need not be equivalent, being distinguished by the Arf invariant. The resulting invariant of a manifold is called the Kervaire invariant.
References. | [
{
"math_id": 0,
"text": "H^{2k}(M,\\mathbf{R})"
},
{
"math_id": 1,
"text": "\\alpha^p \\smile \\beta^q = (-1)^{pq}(\\beta^q \\smile \\alpha^p)"
},
{
"math_id": 2,
"text": "H^{4k}(M,\\mathbf{R})"
},
{
"math_id": 3,
"text": "H^{0}(M,\\mathbf{R})"
},
{
"math_id": 4,
"text": "\\mathbf{R}"
},
{
"math_id": 5,
"text": "\\sigma(M)"
},
{
"math_id": 6,
"text": "\\sigma(M) = n_+ - n_-"
},
{
"math_id": 7,
"text": "n_+"
},
{
"math_id": 8,
"text": "n_-"
},
{
"math_id": 9,
"text": "L^{4k},"
},
{
"math_id": 10,
"text": "L_{4k},"
},
{
"math_id": 11,
"text": "\\mathbf{Z}/2"
},
{
"math_id": 12,
"text": "L_{4k+2}"
},
{
"math_id": 13,
"text": "L^{4k+1}"
},
{
"math_id": 14,
"text": "d=4k+2=2(2k+1)"
},
{
"math_id": 15,
"text": "\\sigma(M \\sqcup N) = \\sigma(M) + \\sigma(N)"
},
{
"math_id": 16,
"text": "\\sigma(M\\times N) = \\sigma(M)\\sigma(N)"
},
{
"math_id": 17,
"text": "\\sigma(M)=0"
},
{
"math_id": 18,
"text": "\\frac{p_1}{3}"
}
] | https://en.wikipedia.org/wiki?curid=1152311 |
1152374 | Reidemeister move | One of three types of isotopy-preserving local changes to a knot diagram
In the mathematical area of knot theory, a Reidemeister move is any of three local moves on a link diagram. Kurt Reidemeister (1927) and, independently, James Waddell Alexander and Garland Baird Briggs (1926), demonstrated that two knot diagrams belonging to the same knot, up to planar isotopy, can be related by a sequence of the three Reidemeister moves.
Each move operates on a small region of the diagram and is one of three types: a type I move twists or untwists a strand, adding or removing a single kink (and hence a single crossing); a type II move slides one strand completely over or under another, adding or removing two crossings of opposite sign; and a type III move slides a strand completely over or under a crossing, leaving the number of crossings unchanged.
No other part of the diagram is involved in the picture of a move, and a planar isotopy may distort the picture. The numbering for the types of moves corresponds to how many strands are involved, e.g. a type II move operates on two strands of the diagram.
One important context in which the Reidemeister moves appear is in defining knot invariants. By demonstrating a property of a knot diagram which is not changed when we apply any of the Reidemeister moves, an invariant is defined. Many important invariants can be defined in this way, including the Jones polynomial.
The type I move is the only move that affects the writhe of the diagram. The type III move is the only one which does not change the crossing number of the diagram.
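Since the writhe of a diagram is the sum of its crossing signs, the effect of the moves on the writhe can be illustrated in a few lines; representing a diagram by a bare list of crossing signs is a deliberate simplification for this illustration only.
def writhe(crossing_signs):
    # Writhe = (number of positive crossings) - (number of negative crossings).
    return sum(crossing_signs)

diagram = [+1, -1, +1]             # signs of the crossings of some diagram
print(writhe(diagram))             # 1

# A type I move adds or removes a single kink, i.e. one crossing of sign +1 or -1:
print(writhe(diagram + [+1]))      # 2  (writhe changes)

# A type II move adds or removes two crossings of opposite sign:
print(writhe(diagram + [+1, -1]))  # 1  (writhe unchanged)

# A type III move only slides a strand across a crossing; no signs change, so
# neither the writhe nor the number of crossings is affected.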
In applications such as the Kirby calculus, in which the desired equivalence class of knot diagrams is not a knot but a framed link, one must replace the type I move with a "modified type I" (type I') move composed of two type I moves of opposite sense. The type I' move affects neither the framing of the link nor the writhe of the overall knot diagram.
It has been shown that two knot diagrams for the same knot are related by using only type II and III moves if and only if they have the same writhe and winding number. Furthermore, combined work of several authors shows that for every knot type there is a pair of knot diagrams so that every sequence of Reidemeister moves taking one to the other must use all three types of moves. Alexander Coward demonstrated that for link diagrams representing equivalent links, there is a sequence of moves ordered by type: first type I moves, then type II moves, type III, and then type II. The moves before the type III moves increase crossing number while those after decrease crossing number.
It has been proved that there is an exponential tower upper bound (depending on crossing number) on the number of Reidemeister moves required to pass between two diagrams of the same link. In detail, let formula_0 be the sum of the crossing numbers of the two diagrams, then the upper bound is
formula_1 where the height of the tower of formula_2s (with a single formula_0 at the top) is formula_3
It has also been proved that there is a polynomial upper bound (depending on crossing number) on the number of Reidemeister moves required to change a diagram of the unknot to the standard unknot. In detail, for any such diagram with formula_4 crossings, the upper bound is formula_5.
It has likewise been proved that there is an upper bound, depending on crossing number, on the number of Reidemeister moves required to split a link. | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "2^{2^{2^{.^{.^n}}}}"
},
{
"math_id": 2,
"text": "2"
},
{
"math_id": 3,
"text": "10^{1,000,000n}"
},
{
"math_id": 4,
"text": "c"
},
{
"math_id": 5,
"text": "(236 c)^{11}"
}
] | https://en.wikipedia.org/wiki?curid=1152374 |
11524295 | Tetrahalomethane | Class of chemical compounds
Tetrahalomethanes are fully halogenated methane derivatives of general formula CFkCllBrmInAtp, where: formula_0. Tetrahalomethanes are on the border of inorganic and organic chemistry, thus they can be assigned both inorganic and organic names by IUPAC: tetrafluoromethane - carbon tetrafluoride, tetraiodomethane - carbon tetraiodide, dichlorodifluoromethane - carbon dichloride difluoride.
Each halogen (F, Cl, Br, I, At) forms a corresponding halomethane, but their stability decreases in the order CF4 > CCl4 > CBr4 > CI4, from the exceptionally stable gaseous tetrafluoromethane, with a bond energy of 515 kJ·mol−1, to solid tetraiodomethane, in line with the decreasing carbon–halogen bond energy.
Many mixed halomethanes are also known, such as CBrClF2.
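The constraint on the exponents can be explored directly; the sketch below enumerates all formal tetrahalomethane compositions allowed by formula_0 (70 multisets of four halogens drawn from the five elements), purely as a combinatorial illustration.
from itertools import combinations_with_replacement

halogens = ["F", "Cl", "Br", "I", "At"]
compositions = list(combinations_with_replacement(halogens, 4))
print(len(compositions))                   # 70 formal CX4 compositions

def as_formula(composition):
    # Write the composition as a formula string, listing the halogens alphabetically.
    parts = []
    for h in sorted(set(composition)):
        count = composition.count(h)
        parts.append(h + (str(count) if count > 1 else ""))
    return "C" + "".join(parts)

print(as_formula(("F", "F", "F", "F")))    # CF4
print(as_formula(("Br", "Cl", "F", "F")))  # CBrClF2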
Uses.
Fluorine, chlorine, and sometimes bromine-substituted halomethanes were used as refrigerants, commonly known as CFCs (chlorofluorocarbons). | [
{
"math_id": 0,
"text": "k+l+m+n+p=4"
}
] | https://en.wikipedia.org/wiki?curid=11524295 |
11524497 | Power gain | Ratio of a circuit's output power to input
In electrical engineering, the power gain of an electrical network is the ratio of an output power to an input power. Unlike other signal gains, such as voltage and current gain, "power gain" may be ambiguous as the meaning of terms "input power" and "output power" is not always clear. Three important power gains are operating power gain, transducer power gain and available power gain. Note that all these definitions of power gains employ the use of average (as opposed to instantaneous) power quantities and therefore the term "average" is often suppressed, which can be confusing on occasion.
Operating power gain.
The operating power gain of a two-port network, GP, is defined as:
formula_0
where "P"L is the time-averaged power delivered to the load and "P"I is the time-averaged power entering the input port of the network.
If the time-averaged input power depends on the load impedance, one must take the maximum of the ratio, not just the maximum of the numerator.
Transducer power gain.
The transducer power gain of a two-port network, GT, is defined as:
formula_1
where "P"L is the time-averaged power delivered to the load and "P"S max is the maximum time-averaged power available from the source.
In terms of y-parameters this definition can be used to derive:
formula_2
where "Y"L and "Y"S are the load and source admittances, respectively, and the "y"ij are the admittance (y-) parameters of the two-port network.
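A small numerical sketch of this y-parameter expression follows; the y-parameters and terminations used are arbitrary example values, not data for any real device.
import math

# Transducer power gain from y-parameters (all admittances in siemens, complex-valued).
y11, y12, y21, y22 = 0.02 + 0.001j, -1e-4j, 0.05 - 0.01j, 5e-4 + 2e-4j
YS, YL = 0.02 + 0j, 0.01 + 0j            # source and load admittances

numerator = 4 * abs(y21) ** 2 * YL.real * YS.real
denominator = abs((y11 + YS) * (y22 + YL) - y12 * y21) ** 2
G_T = numerator / denominator
print(G_T, 10 * math.log10(G_T), "dB")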
This result can be generalized to z, h, g and y-parameters as:
formula_3
where the "k"ij are the two-port parameters in the chosen representation (z, h, g or y) and "M"S and "M"L are the source and load immittances expressed in that same representation.
"P"S max may only be obtained from the source when the load impedance connected to it (i.e. the equivalent input impedance of the two-port network) is the complex conjugate of the source impedance, a consequence of the maximum power theorem.
Available power gain.
The available power gain of a two-port network, GA, is defined as:
formula_4
where "P"L max is the maximum time-averaged power available at the output of the network and "P"S max is the maximum time-averaged power available from the source.
Similarly "P"L max may only be obtained when the load impedance is the complex conjugate of the output impedance of the network. | [
{
"math_id": 0,
"text": "G_P = \\frac{P_\\mathrm{L}}{P_\\mathrm{I}}"
},
{
"math_id": 1,
"text": "G_T = \\frac{P_\\mathrm{L}}{P_\\mathrm{S\\ max}}"
},
{
"math_id": 2,
"text": "G_T = \\frac{4|y_{21}|^2 \\Re{(Y_\\mathrm{L})}\\Re{(Y_\\mathrm{S})}}{ \\bigl|(y_{11}+Y_\\mathrm{S})(y_{22}+Y_\\mathrm{L})-y_{12}y_{21} \\bigr|^2}"
},
{
"math_id": 3,
"text": "G_T = \\frac{4|k_{21}|^2 \\Re{(M_\\mathrm{L})}\\Re{(M_\\mathrm{S})}}{ \\bigl|(k_{11}+M_\\mathrm{S})(k_{22}+M_\\mathrm{L})-k_{12}k_{21} \\bigr|^2}"
},
{
"math_id": 4,
"text": "G_A = \\frac{P_\\mathrm{L\\ max}}{P_\\mathrm{S\\ max}}"
}
] | https://en.wikipedia.org/wiki?curid=11524497 |
11526 | Quotient group | Group obtained by aggregating similar elements of a larger group
A quotient group or factor group is a mathematical group obtained by aggregating similar elements of a larger group using an equivalence relation that preserves some of the group structure (the rest of the structure is "factored out"). For example, the cyclic group of addition modulo "n" can be obtained from the group of integers under addition by identifying elements that differ by a multiple of formula_0 and defining a group structure that operates on each such class (known as a congruence class) as a single entity. It is part of the mathematical field known as group theory.
For a congruence relation on a group, the equivalence class of the identity element is always a normal subgroup of the original group, and the other equivalence classes are precisely the cosets of that normal subgroup. The resulting quotient is written G / N, where formula_1 is the original group and formula_2 is the normal subgroup. This is read as 'G mod N', where formula_3 is short for modulo. (The notation G / H should be interpreted with caution, as some authors (e.g., Vinberg) use it to represent the left cosets of formula_4 in formula_1 for "any" subgroup formula_4, even though these cosets do not form a group if formula_4 is not normal in formula_1. Others (e.g., Dummit and Foote) only use this notation to refer to the quotient group, with the appearance of this notation implying the normality of formula_4 in formula_1.)
Much of the importance of quotient groups is derived from their relation to homomorphisms. The first isomorphism theorem states that the image of any group "G" under a homomorphism is always isomorphic to a quotient of formula_1. Specifically, the image of formula_1 under a homomorphism formula_5 is isomorphic to formula_6 where formula_7 denotes the kernel of the homomorphism.
The dual notion of a quotient group is a subgroup, these being the two primary ways of forming a smaller group from a larger one. Any normal subgroup has a corresponding quotient group, formed from the larger group by eliminating the distinction between elements of the subgroup. In category theory, quotient groups are examples of quotient objects, which are dual to subobjects.
Definition and illustration.
Given a group formula_1 and a subgroup formula_4, and a fixed element formula_8, one can consider the corresponding left coset: aH := { ah : h ∈ H }. Cosets are a natural class of subsets of a group; for example consider the abelian group "G" of integers, with operation defined by the usual addition, and the subgroup formula_4 of even integers. Then there are exactly two cosets: 0 + H, which are the even integers, and 1 + H, which are the odd integers (here we are using additive notation for the binary operation instead of multiplicative notation).
For a general subgroup formula_4, it is desirable to define a compatible group operation on the set of all possible cosets, { aH : a ∈ G }. This is possible exactly when formula_4 is a normal subgroup, see below. A subgroup formula_2 of a group formula_1 is normal if and only if the coset equality formula_9 holds for all a ∈ G. A normal subgroup of formula_1 is denoted N ◁ G.
Definition.
Let formula_2 be a normal subgroup of a group formula_1. Define the set formula_10 to be the set of all left cosets of formula_2 in formula_1. That is, G / N = { aN : a ∈ G }.
Since the identity element e ∈ N, a ∈ aN. Define a binary operation on the set of cosets, formula_10, as follows. For each formula_11 and formula_12 in formula_10, the product of formula_11 and formula_12, (aN)(bN), is (ab)N. This works only because formula_13 does not depend on the choice of the representatives, formula_14 and b, of each left coset, formula_11 and formula_12. To prove this, suppose formula_15 and formula_16 for some x, y ∈ G. Then
formula_17.
This depends on the fact that N is a normal subgroup. It still remains to be shown that this condition is not only sufficient but necessary to define the operation on formula_10.
To show that it is necessary, consider that for a subgroup formula_2 of formula_1, we have been given that the operation is well defined. That is, for all formula_15 and formula_16, with x, y, a, b ∈ G, we have (ab)N = (xy)N.
Let formula_18 and g ∈ G. Since eN = nN, we have gN = (eg)N = (eN)(gN) = (nN)(gN) = (ng)N.
Now, formula_19 and g⁻¹ng ∈ N for all n ∈ N and g ∈ G.
Hence formula_2 is a normal subgroup of formula_1.
It can also be checked that this operation on formula_10 is always associative, formula_10 has identity element formula_2, and the inverse of element formula_11 can always be represented by a⁻¹N. Therefore, the set formula_10 together with the operation defined by formula_20 forms a group, the quotient group of formula_1 by formula_2.
Due to the normality of formula_2, the left cosets and right cosets of formula_2 in formula_1 are the same, and so, formula_10 could have been defined to be the set of right cosets of formula_2 in formula_1.
Example: Addition modulo 6.
For example, consider the group with addition modulo 6: G = { 0, 1, 2, 3, 4, 5 }. Consider the subgroup N = { 0, 3 }, which is normal because formula_1 is abelian. Then the set of (left) cosets is of size three:
formula_21.
The binary operation defined above makes this set into a group, known as the quotient group, which in this case is isomorphic to the cyclic group of order 3.
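The modulo-6 example can be verified mechanically; the Python sketch below builds the cosets of N = {0, 3} as frozensets and checks that the coset operation is independent of the chosen representatives, yielding a three-element group. The helper names are illustrative.
from itertools import product

G = set(range(6))                   # the integers 0..5 under addition modulo 6
N = {0, 3}                          # normal subgroup (G is abelian)

cosets = {frozenset((a + n) % 6 for n in N) for a in G}
print(sorted(map(sorted, cosets)))  # [[0, 3], [1, 4], [2, 5]] -> three cosets

def coset_sum(A, B):
    # (aN)(bN) = (a + b)N; well defined because N is normal.
    sums = {frozenset(((a + b) % 6 + n) % 6 for n in N) for a, b in product(A, B)}
    assert len(sums) == 1           # independent of the choice of representatives
    return next(iter(sums))

A = frozenset({1, 4})
B = frozenset({2, 5})
print(sorted(coset_sum(A, B)))      # [0, 3], i.e. (1 + N) + (2 + N) = 3 + N = N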
Motivation for the name "quotient".
The reason formula_10 is called a quotient group comes from division of integers. When dividing 12 by 3 one obtains the answer 4 because one can regroup 12 objects into 4 subcollections of 3 objects. The quotient group is the same idea, although we end up with a group for a final answer instead of a number because groups have more structure than an arbitrary collection of objects.
To elaborate, when looking at formula_10 with formula_2 a normal subgroup of formula_1, the group structure is used to form a natural "regrouping". These are the cosets of formula_2 in formula_1. Because we started with a group and normal subgroup, the final quotient contains more information than just the number of cosets (which is what regular division yields), but instead has a group structure itself.
Examples.
Even and odd integers.
Consider the group of integers formula_22 (under addition) and the subgroup formula_23 consisting of all even integers. This is a normal subgroup, because formula_22 is abelian. There are only two cosets: the set of even integers and the set of odd integers, and therefore the quotient group formula_24 is the cyclic group with two elements. This quotient group is isomorphic with the set formula_25 with addition modulo 2; informally, it is sometimes said that formula_24 "equals" the set formula_25 with addition modulo 2.
Example further explained...
Let formula_26 be the remainder of formula_27 when dividing by &NoBreak;&NoBreak;. Then formula_28 when formula_29 is even and formula_30 when formula_29 is odd.
By definition of &NoBreak;&NoBreak;, the kernel of &NoBreak;&NoBreak;, &NoBreak;}&NoBreak;, is the set of all even integers.
Let &NoBreak;&NoBreak;. Then formula_31 is a subgroup: the identity in &NoBreak;&NoBreak;, which is &NoBreak;&NoBreak;, is in &NoBreak;&NoBreak;; the sum of two even integers is even, so if formula_29 and formula_32 are in &NoBreak;&NoBreak;, then formula_33 is in formula_31 (closure); and if formula_29 is even, then formula_34 is also even, so formula_31 contains its inverses.
Define formula_35 by formula_36 for formula_37, where formula_38 is the quotient group of left cosets; &NoBreak;}&NoBreak;.
Note that, by the way &NoBreak;&NoBreak; has been defined, formula_39 is formula_40 if formula_41 is odd and formula_42 if formula_41 is even.
Thus, formula_43 is an isomorphism from formula_38 to &NoBreak;&NoBreak;.
Remainders of integer division.
A slight generalization of the last example. Once again consider the group of integers formula_22 under addition. Let &NoBreak;&NoBreak; be any positive integer. We will consider the subgroup formula_44 of formula_22 consisting of all multiples of &NoBreak;&NoBreak;. Once again formula_44 is normal in formula_22 because formula_22 is abelian. The cosets are the collection &NoBreak;}&NoBreak;. An integer formula_45 belongs to the coset &NoBreak;&NoBreak;, where formula_46 is the remainder when dividing formula_45 by &NoBreak;&NoBreak;. The quotient formula_47 can be thought of as the group of "remainders" modulo &NoBreak;&NoBreak;. This is a cyclic group of order &NoBreak;&NoBreak;.
Complex integer roots of 1.
The twelfth roots of unity, which are points on the complex unit circle, form a multiplicative abelian group &NoBreak;&NoBreak;, shown on the picture on the right as colored balls with the number at each point giving its complex argument. Consider its subgroup formula_2 made of the fourth roots of unity, shown as red balls. This normal subgroup splits the group into three cosets, shown in red, green and blue. One can check that the cosets form a group of three elements (the product of a red element with a blue element is blue, the inverse of a blue element is green, etc.). Thus, the quotient group formula_10 is the group of three colors, which turns out to be the cyclic group with three elements.
Real numbers modulo the integers.
Consider the group of real numbers formula_48 under addition, and the subgroup formula_22 of integers. Each coset of formula_22 in formula_48 is a set of the form &NoBreak;&NoBreak;, where formula_14 is a real number. Since formula_49 and formula_50 are identical sets when the non-integer parts of formula_51 and formula_52 are equal, one may impose the restriction formula_53 without change of meaning. Adding such cosets is done by adding the corresponding real numbers, and subtracting 1 if the result is greater than or equal to 1. The quotient group formula_54 is isomorphic to the circle group, the group of complex numbers of absolute value 1 under multiplication, or correspondingly, the group of rotations in 2D about the origin, that is, the special orthogonal group &NoBreak;&NoBreak;. An isomorphism is given by formula_55 (see Euler's identity).
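The isomorphism formula_55 can be checked numerically. The short Python sketch below (for illustration) picks representatives in [0, 1), adds cosets by adding representatives modulo 1, and verifies that this corresponds to multiplying the associated complex numbers of absolute value 1:

import cmath

def f(a):
    """Image on the unit circle of the coset of the real number a."""
    return cmath.exp(2j * cmath.pi * a)

def add_cosets(a, b):
    """Coset addition, using representatives restricted to [0, 1)."""
    return (a + b) % 1.0

for a, b in [(0.25, 0.5), (0.7, 0.8), (0.9, 0.35)]:
    assert abs(f(add_cosets(a, b)) - f(a) * f(b)) < 1e-12
    print(a, b, f(add_cosets(a, b)))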
Matrices of real numbers.
If formula_1 is the group of invertible formula_56 real matrices, and formula_2 is the subgroup of formula_56 real matrices with determinant 1, then formula_2 is normal in formula_1 (since it is the kernel of the determinant homomorphism). The cosets of formula_2 are the sets of matrices with a given determinant, and hence formula_10 is isomorphic to the multiplicative group of non-zero real numbers. The group formula_2 is known as the special linear group &NoBreak;&NoBreak;.
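To make the determinant homomorphism concrete, the Python sketch below (illustrative only, with hand-coded 3 × 3 determinants) shows that matrices with the same determinant represent the same coset of the special linear group, and that the coset of a product is determined by the product of determinants:

def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def matmul(x, y):
    """Product of two 3x3 matrices."""
    return [[sum(x[i][k] * y[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

A = [[2, 0, 0], [0, 1, 0], [0, 0, 1]]    # determinant 2
B = [[1, 1, 0], [0, 2, 0], [0, 0, 1]]    # determinant 2: same coset of SL(3) as A
C = [[3, 0, 0], [0, 1, 1], [0, 0, 1]]    # determinant 3: a different coset

print(det3(A), det3(B), det3(C))         # 2 2 3
assert det3(matmul(A, C)) == det3(A) * det3(C)   # coset of a product = product of determinants

Each coset is thus labelled by a non-zero real number, its common determinant, which is the isomorphism with the multiplicative group of non-zero reals described above.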
Integer modular arithmetic.
Consider the abelian group formula_57 (that is, the set formula_58 with addition modulo 4), and its subgroup &NoBreak;}&NoBreak;. The quotient group formula_59 is &NoBreak;}&NoBreak;. This is a group with identity element &NoBreak;}&NoBreak;, and group operations such as &NoBreak;}&NoBreak;. Both the subgroup formula_60 and the quotient group formula_61 are isomorphic with &NoBreak;&NoBreak;.
Integer multiplication.
Consider the multiplicative group &NoBreak;}&NoBreak;. The set formula_2 of &NoBreak;&NoBreak;th residues is a multiplicative subgroup isomorphic to &NoBreak;}&NoBreak;. Then formula_2 is normal in formula_1 and the factor group formula_10 has the cosets &NoBreak;&NoBreak;. The Paillier cryptosystem is based on the conjecture that it is difficult to determine the coset of a random element of formula_1 without knowing the factorization of &NoBreak;&NoBreak;.
Properties.
The quotient group formula_62 is isomorphic to the trivial group (the group with one element), and formula_63 is isomorphic to &NoBreak;&NoBreak;.
The order of &NoBreak;&NoBreak;, by definition the number of elements, is equal to &NoBreak;&NoBreak;, the index of formula_2 in &NoBreak;&NoBreak;. If formula_1 is finite, the index is also equal to the order of formula_1 divided by the order of &NoBreak;&NoBreak;. The set formula_10 may be finite, although both formula_1 and formula_2 are infinite (for example, &NoBreak;&NoBreak;).
There is a "natural" surjective group homomorphism &NoBreak;&NoBreak;, sending each element formula_64 of formula_1 to the coset of formula_2 to which formula_64 belongs, that is: &NoBreak;&NoBreak;. The mapping formula_65 is sometimes called the "canonical projection of formula_1 onto &NoBreak;&NoBreak;". Its kernel is &NoBreak;&NoBreak;.
There is a bijective correspondence between the subgroups of formula_1 that contain formula_2 and the subgroups of &NoBreak;&NoBreak;; if formula_4 is a subgroup of formula_1 containing &NoBreak;&NoBreak;, then the corresponding subgroup of formula_10 is &NoBreak;&NoBreak;. This correspondence holds for normal subgroups of formula_1 and formula_10 as well, and is formalized in the lattice theorem.
Several important properties of quotient groups are recorded in the fundamental theorem on homomorphisms and the isomorphism theorems.
If formula_1 is abelian, nilpotent, solvable, cyclic or finitely generated, then so is &NoBreak;&NoBreak;.
If formula_4 is a subgroup in a finite group &NoBreak;&NoBreak;, and the order of formula_4 is one half of the order of &NoBreak;&NoBreak;, then formula_4 is guaranteed to be a normal subgroup, so formula_66 exists and is isomorphic to &NoBreak;&NoBreak;. This result can also be stated as "any subgroup of index 2 is normal", and in this form it applies also to infinite groups. Furthermore, if formula_67 is the smallest prime number dividing the order of a finite group, &NoBreak;&NoBreak;, then if formula_66 has order &NoBreak;&NoBreak;, formula_4 must be a normal subgroup of &NoBreak;&NoBreak;.
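The index-2 statement can be checked directly on a small non-abelian group. The Python sketch below builds the symmetric group on {0, 1, 2}, takes its alternating subgroup of index 2, and confirms that every left coset equals the corresponding right coset, while a subgroup of index 3 fails this test:

from itertools import permutations

def compose(p, q):
    """Composition of permutations written as tuples: (p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

S3 = list(permutations(range(3)))                 # all six permutations of {0, 1, 2}
A3 = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]            # the three even permutations: index 2

for g in S3:
    left = {compose(g, h) for h in A3}            # the left coset g A3
    right = {compose(h, g) for h in A3}           # the right coset A3 g
    assert left == right                          # so A3 is normal, as the statement predicts

# A subgroup of index 3 need not be normal:
H = [(0, 1, 2), (1, 0, 2)]                        # the subgroup generated by one transposition
g = (0, 2, 1)
print({compose(g, h) for h in H} == {compose(h, g) for h in H})   # False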
Given formula_1 and a normal subgroup &NoBreak;&NoBreak;, then formula_1 is a group extension of formula_10 by &NoBreak;&NoBreak;. One could ask whether this extension is trivial or split; in other words, one could ask whether formula_1 is a direct product or semidirect product of formula_2 and &NoBreak;&NoBreak;. This is a special case of the extension problem. An example where the extension is not split is as follows: Let &NoBreak;}&NoBreak;, and &NoBreak;}&NoBreak;, which is isomorphic to &NoBreak;&NoBreak;. Then formula_10 is also isomorphic to &NoBreak;&NoBreak;. But formula_68 has only the trivial automorphism, so the only semi-direct product of formula_2 and formula_10 is the direct product. Since formula_69 is different from &NoBreak;&NoBreak;, we conclude that formula_1 is not a semi-direct product of formula_2 and &NoBreak;&NoBreak;.
Quotients of Lie groups.
If formula_1 is a Lie group and formula_2 is a normal and closed (in the topological rather than the algebraic sense of the word) Lie subgroup of &NoBreak;&NoBreak;, the quotient formula_10 is also a Lie group. In this case, the original group "formula_1" has the structure of a fiber bundle (specifically, a principal &NoBreak;&NoBreak;-bundle), with base space formula_10 and fiber &NoBreak;&NoBreak;. The dimension of formula_10 equals &NoBreak;&NoBreak;.
Note that the condition that formula_2 is closed is necessary. Indeed, if formula_2 is not closed then the quotient space is not a T1-space (since there is a coset in the quotient which cannot be separated from the identity by an open set), and thus not a Hausdorff space.
For a non-normal Lie subgroup &NoBreak;&NoBreak;, the space formula_10 of left cosets is not a group, but simply a differentiable manifold on which formula_1 acts. The result is known as a homogeneous space.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "G"
},
{
"math_id": 2,
"text": "N"
},
{
"math_id": 3,
"text": "\\mbox{mod}"
},
{
"math_id": 4,
"text": "H"
},
{
"math_id": 5,
"text": "\\varphi: G \\rightarrow H"
},
{
"math_id": 6,
"text": "G\\,/\\,\\ker(\\varphi)"
},
{
"math_id": 7,
"text": "\\ker(\\varphi)"
},
{
"math_id": 8,
"text": "a \\in G"
},
{
"math_id": 9,
"text": "aN = Na"
},
{
"math_id": 10,
"text": "G\\,/\\,N"
},
{
"math_id": 11,
"text": "aN"
},
{
"math_id": 12,
"text": "bN"
},
{
"math_id": 13,
"text": "(ab)N"
},
{
"math_id": 14,
"text": "a"
},
{
"math_id": 15,
"text": "xN = aN"
},
{
"math_id": 16,
"text": "yN = bN"
},
{
"math_id": 17,
"text": "(ab)N = a(bN) = a(yN) = a(Ny) = (aN)y = (xN)y = x(Ny) = x(yN) = (xy)N"
},
{
"math_id": 18,
"text": "n \\in N"
},
{
"math_id": 19,
"text": "gN = (ng)N \\Leftrightarrow N = (g^{-1}ng)N \\Leftrightarrow g^{-1}ng \\in N, \\; \\forall \\, n \\in N"
},
{
"math_id": 20,
"text": "(aN)(bN) = (ab)N"
},
{
"math_id": 21,
"text": "G\\,/\\,N = \\left\\{a+N: a \\in G \\right\\} = \\left\\{ \\left\\{0, 3 \\right\\}, \\left\\{1, 4 \\right\\}, \\left\\{2, 5 \\right\\} \\right\\} = \\left\\{0+N, 1+N, 2+N \\right\\}"
},
{
"math_id": 22,
"text": "\\Z"
},
{
"math_id": 23,
"text": "2\\Z"
},
{
"math_id": 24,
"text": "\\Z\\,/\\,2\\Z"
},
{
"math_id": 25,
"text": "\\left\\{0,1 \\right\\}"
},
{
"math_id": 26,
"text": " \\gamma(m) "
},
{
"math_id": 27,
"text": " m \\in \\Z "
},
{
"math_id": 28,
"text": " \\gamma(m)=0 "
},
{
"math_id": 29,
"text": " m "
},
{
"math_id": 30,
"text": " \\gamma(m)=1 "
},
{
"math_id": 31,
"text": " H "
},
{
"math_id": 32,
"text": " n "
},
{
"math_id": 33,
"text": " m+n "
},
{
"math_id": 34,
"text": " -m "
},
{
"math_id": 35,
"text": " \\mu : \\mathbb{Z} / H \\to \\mathrm{Z}_2 "
},
{
"math_id": 36,
"text": " \\mu(aH)=\\gamma(a) "
},
{
"math_id": 37,
"text": " a\\in\\Z "
},
{
"math_id": 38,
"text": "\\mathbb{Z} / H"
},
{
"math_id": 39,
"text": " \\mu(aH) "
},
{
"math_id": 40,
"text": " 1 "
},
{
"math_id": 41,
"text": " a "
},
{
"math_id": 42,
"text": " 0 "
},
{
"math_id": 43,
"text": " \\mu "
},
{
"math_id": 44,
"text": "n\\Z"
},
{
"math_id": 45,
"text": "k"
},
{
"math_id": 46,
"text": "r"
},
{
"math_id": 47,
"text": "\\Z\\,/\\,n\\Z"
},
{
"math_id": 48,
"text": "\\R"
},
{
"math_id": 49,
"text": "a_1+\\Z"
},
{
"math_id": 50,
"text": "a_2+\\Z"
},
{
"math_id": 51,
"text": "a_1"
},
{
"math_id": 52,
"text": "a_2"
},
{
"math_id": 53,
"text": "0 \\leq a < 1"
},
{
"math_id": 54,
"text": "\\R\\,/\\,\\Z"
},
{
"math_id": 55,
"text": "f(a+\\Z) = \\exp(2\\pi ia)"
},
{
"math_id": 56,
"text": "3 \\times 3"
},
{
"math_id": 57,
"text": "\\mathrm{Z}_4 = \\Z\\,/\\,4 \\Z"
},
{
"math_id": 58,
"text": "\\left\\{0, 1, 2, 3 \\right\\}"
},
{
"math_id": 59,
"text": "\\mathrm{Z}_4\\,/\\,\\left\\{0, 2\\right\\}"
},
{
"math_id": 60,
"text": "\\left\\{0, 2\\right\\}"
},
{
"math_id": 61,
"text": "\\left\\{\\left\\{ 0, 2 \\right\\}, \\left\\{1, 3 \\right\\} \\right\\}"
},
{
"math_id": 62,
"text": "G\\,/\\,G"
},
{
"math_id": 63,
"text": "G\\,/\\,\\left\\{e \\right\\}"
},
{
"math_id": 64,
"text": "g"
},
{
"math_id": 65,
"text": "\\pi"
},
{
"math_id": 66,
"text": "G\\,/\\,H"
},
{
"math_id": 67,
"text": "p"
},
{
"math_id": 68,
"text": "\\mathrm{Z}_2"
},
{
"math_id": 69,
"text": "\\mathrm{Z}_4"
}
] | https://en.wikipedia.org/wiki?curid=11526 |
11527 | Fundamental theorem on homomorphisms | Theorem relating a group with the image and kernel of a homomorphism
In abstract algebra, the fundamental theorem on homomorphisms, also known as the fundamental homomorphism theorem, or the first isomorphism theorem, relates the structure of two objects between which a homomorphism is given, and of the kernel and image of the homomorphism.
The homomorphism theorem is used to prove the isomorphism theorems.
Group-theoretic version.
Given two groups "G" and "H" and a group homomorphism "f" : "G" → "H", let "N" be a normal subgroup in "G" and "φ" the natural surjective homomorphism "G" → "G" / "N" (where "G" / "N" is the quotient group of "G" by "N"). If "N" is a subset of ker("f") then there exists a unique homomorphism "h" : "G" / "N" → "H" such that "f" = "h" ∘ "φ".
In other words, the natural projection "φ" is universal among homomorphisms on "G" that map "N" to the identity element.
The situation is described by the following commutative diagram:
"h" is injective if and only if "N" = ker("f"). Therefore, by setting "N" = ker("f"), we immediately get the first isomorphism theorem.
We can write the statement of the fundamental theorem on homomorphisms of groups as "every homomorphic image of a group is isomorphic to a quotient group".
Proof.
The proof follows from two basic facts about homomorphisms, namely their preservation of the group operation, and their mapping of the identity element to the identity element. We need to show that if formula_0 is a homomorphism of groups, then:
Proof of 1.
The operation preserved by formula_3 is the group operation. If &NoBreak;&NoBreak;, then there exist elements formula_4 such that formula_5 and &NoBreak;&NoBreak;. For these formula_6 and &NoBreak;&NoBreak;, we have formula_7 (since formula_3 preserves the group operation), and thus the closure property is satisfied in &NoBreak;&NoBreak;. The identity element formula_8 is also in formula_1 because formula_3 maps the identity element of formula_9 to it. Since every element formula_10 in formula_9 has an inverse formula_11 such that formula_12 (because formula_3 preserves inverses as well), we have an inverse for each element formula_13 in &NoBreak;&NoBreak;; therefore, formula_1 is a subgroup of &NoBreak;&NoBreak;.
Proof of 2.
Construct a map formula_14 by &NoBreak;&NoBreak;. This map is well-defined, as if &NoBreak;&NoBreak;, then formula_15 and so formula_16 which gives &NoBreak;&NoBreak;. This map is an isomorphism. formula_17 is surjective onto formula_1 by definition. To show injectiveness, if formula_18, then &NoBreak;&NoBreak;, which implies formula_19 so &NoBreak;&NoBreak;.
Finally,
formula_20
hence formula_17 preserves the group operation. Hence formula_17 is an isomorphism between formula_2 and &NoBreak;&NoBreak;, which completes the proof.
Applications.
The group theoretic version of fundamental homomorphism theorem can be used to show that two selected groups are isomorphic. Two examples are shown below.
Integers modulo "n".
For each &NoBreak;}&NoBreak;, consider the groups formula_21 and formula_22 and a group homomorphism formula_23 defined by formula_24 (see modular arithmetic). Next, consider the kernel of &NoBreak;&NoBreak;, &NoBreak;}&NoBreak;, which is a normal subgroup in &NoBreak;}&NoBreak;. There exists a natural surjective homomorphism formula_25 defined by &NoBreak;}&NoBreak;. The theorem asserts that there exists an isomorphism formula_26 between formula_22 and &NoBreak;}&NoBreak;, or in other words &NoBreak;}&NoBreak;. The commutative diagram is illustrated below.
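The induced isomorphism formula_26 of this example can be exhibited concretely. In the Python sketch below (purely illustrative), each coset is represented by the set of its members inside a small window of integers, and the map sending a coset to the common residue of its members is checked to be well defined and additive:

n = 6

def coset(m, window=range(-24, 25)):
    """The coset of m modulo n, truncated to a small window of integers."""
    return frozenset(k for k in window if (k - m) % n == 0)

def h(c):
    """The induced map: send a coset to the common residue of its members."""
    residues = {k % n for k in c}
    assert len(residues) == 1            # well defined: independent of the representative
    return residues.pop()

for a in range(-10, 10):
    for b in range(-10, 10):
        assert h(coset(a + b)) == (h(coset(a)) + h(coset(b))) % n   # h respects addition

print(sorted({h(coset(m)) for m in range(-20, 20)}))   # [0, 1, 2, 3, 4, 5]: h is onto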
"N"/"C" theorem.
Let formula_9 be a group with subgroup &NoBreak;&NoBreak;. Let &NoBreak;&NoBreak;, formula_27 and formula_28 be the centralizer, the normalizer and the automorphism group of formula_29 in &NoBreak;&NoBreak;, respectively. Then, the "N"/"C" theorem states that formula_30 is isomorphic to a subgroup of &NoBreak;&NoBreak;.
Proof.
We are able to find a group homomorphism formula_31 defined by &NoBreak;}&NoBreak;, for all &NoBreak;&NoBreak;. Clearly, the kernel of formula_32 is &NoBreak;&NoBreak;. Hence, we have a natural surjective homomorphism formula_33 defined by &NoBreak;&NoBreak;. The fundamental homomorphism theorem then asserts that there exists an isomorphism between formula_34 and &NoBreak;&NoBreak;, which is a subgroup of &NoBreak;&NoBreak;.
Other versions.
Similar theorems are valid for monoids, vector spaces, modules, and rings. | [
{
"math_id": 0,
"text": "\\phi: G \\to H"
},
{
"math_id": 1,
"text": "\\text{im}(\\phi)"
},
{
"math_id": 2,
"text": "G / \\ker(\\phi)"
},
{
"math_id": 3,
"text": "\\phi"
},
{
"math_id": 4,
"text": "a', b' \\in G"
},
{
"math_id": 5,
"text": "\\phi(a')=a"
},
{
"math_id": 6,
"text": "a"
},
{
"math_id": 7,
"text": "ab = \\phi(a')\\phi(b') = \\phi(a'b') \\in \\text{im}(\\phi)"
},
{
"math_id": 8,
"text": "e \\in H"
},
{
"math_id": 9,
"text": "G"
},
{
"math_id": 10,
"text": "a'"
},
{
"math_id": 11,
"text": "(a')^{-1}"
},
{
"math_id": 12,
"text": "\\phi((a')^{-1}) = (\\phi(a'))^{-1}"
},
{
"math_id": 13,
"text": "\\phi(a') = a"
},
{
"math_id": 14,
"text": "\\psi: G / \\ker(\\phi) \\to \\text{im}(\\phi)"
},
{
"math_id": 15,
"text": "b^{-1}a \\in \\ker(\\phi)"
},
{
"math_id": 16,
"text": "\\phi(b^{-1}a) = e \\Rightarrow \\phi(b^{-1})\\phi(a) = e"
},
{
"math_id": 17,
"text": "\\psi"
},
{
"math_id": 18,
"text": "\\psi(a\\ker(\\phi)) = \\psi(b\\ker(\\phi))"
},
{
"math_id": 19,
"text": "b^{-1}a \\in\\ker(\\phi)"
},
{
"math_id": 20,
"text": "\\psi((a\\ker(\\phi))(b\\ker(\\phi))) = \\psi(ab\\ker(\\phi)) = \\phi(ab) = \\phi(a)\\phi(b) = \\psi(a\\ker(\\phi))\\psi(b\\ker(\\phi)),"
},
{
"math_id": 21,
"text": " \\mathbb{Z} "
},
{
"math_id": 22,
"text": " \\mathbb{Z}_n "
},
{
"math_id": 23,
"text": " f:\\mathbb{Z} \\rightarrow \\mathbb{Z}_n "
},
{
"math_id": 24,
"text": " m \\mapsto m \\text{ mod }n "
},
{
"math_id": 25,
"text": " \\varphi : \\mathbb{Z} \\rightarrow \\mathbb{Z}/n\\mathbb{Z} "
},
{
"math_id": 26,
"text": " h "
},
{
"math_id": 27,
"text": "N_G(H)"
},
{
"math_id": 28,
"text": " \\text{Aut}(H) "
},
{
"math_id": 29,
"text": " H "
},
{
"math_id": 30,
"text": "N_G(H)/C_G(H)"
},
{
"math_id": 31,
"text": " f: N_G(H) \\rightarrow \\text{Aut}(H) "
},
{
"math_id": 32,
"text": " f "
},
{
"math_id": 33,
"text": " \\varphi : N_G(H) \\rightarrow N_G(H)/C_G(H) "
},
{
"math_id": 34,
"text": " N_G(H)/C_G(H) "
}
] | https://en.wikipedia.org/wiki?curid=11527 |
11527342 | STO-nG basis sets | Basis sets used in quantum chemistry
STO-"n"G basis sets are minimal basis sets, where formula_0 primitive Gaussian orbitals are fitted to a single Slater-type orbital (STO). formula_0 originally took the values 2 – 6. They were first proposed by John Pople. A minimal basis set is one in which only enough orbitals are used to contain all the electrons of the neutral atom. Thus for the hydrogen atom only a single 1s orbital is needed, while for a carbon atom 1s, 2s and three 2p orbitals are needed. The core and valence orbitals are represented by the same number of primitive Gaussian functions formula_1. In an STO-3G basis set, for example, the 1s, 2s and 2p orbitals of the carbon atom are each a linear combination of 3 primitive Gaussian functions. Explicitly, an STO-3G s orbital is given by:
formula_2
where
formula_3
formula_4
formula_5
The values of "c"1, "c"2, "c"3, "α"1, "α"2 and "α"3 have to be determined. For the STO-"n"G basis sets, they are obtained by making a least squares fit of the three Gaussian orbitals to the single Slater-type orbital. (Extensive tables of parameters have been calculated for STO-1G through STO-5G for s orbitals through g orbitals.) This differs from the more common procedure, in which the criterion is to choose the coefficients ("c"'s) and exponents ("α"'s) that give the lowest energy with some appropriate method for some appropriate molecule. A special feature of this basis set is that common exponents are used for orbitals in the same shell (e.g. 2s and 2p), as this allows more efficient computation.
The fit between the Gaussian orbitals and the Slater orbital is good for all values of "r", except for very small values near to the nucleus. The Slater orbital has a cusp at the nucleus, while Gaussian orbitals are flat at the nucleus.
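This behaviour is easy to see numerically. The Python sketch below compares a normalised 1s Slater orbital (exponent 1.0) with its STO-3G expansion; the exponents and contraction coefficients quoted in the code are the commonly tabulated least-squares values for a Slater exponent of 1.0 (they are not given in this article, so treat them as illustrative). The two curves nearly coincide away from the nucleus, while at "r" = 0 the Gaussian expansion is flat and falls short of the Slater value:

import math

# Commonly tabulated STO-3G parameters for a 1s Slater orbital with exponent 1.0
# (illustrative values quoted from standard references).
alphas = [2.227660, 0.405771, 0.109818]
coeffs = [0.154329, 0.535328, 0.444635]

def slater_1s(r, zeta=1.0):
    """Normalised 1s Slater-type orbital."""
    return (zeta ** 3 / math.pi) ** 0.5 * math.exp(-zeta * r)

def sto3g_1s(r):
    """STO-3G approximation: a contraction of three normalised primitive Gaussians."""
    return sum(c * (2.0 * a / math.pi) ** 0.75 * math.exp(-a * r * r)
               for c, a in zip(coeffs, alphas))

for r in [0.0, 0.1, 0.5, 1.0, 2.0, 3.0]:
    print(f"r = {r:3.1f}   Slater = {slater_1s(r):.4f}   STO-3G = {sto3g_1s(r):.4f}")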
Use of STO-"n"G basis sets.
The most widely used basis set of this group is STO-3G, which is used for large systems and for preliminary geometry determinations. This basis set is available for all atoms from hydrogen up to xenon.
STO-2G basis set.
The STO-2G basis set is a linear combination of 2 primitive Gaussian functions. The original coefficients and exponents for first-row and second-row atoms are given as follows.
Accuracy.
The exact energy of the 1s electron of the hydrogen atom is −0.5 hartree, given by a single Slater-type orbital with exponent 1.0. The following table illustrates the increase in accuracy as the number of primitive Gaussian functions in the basis set increases from 3 to 6.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "\\mathbf \\phi_i"
},
{
"math_id": 2,
"text": "\\mathbf \\psi_{\\mathrm{STO}-3\\mathrm{G}}(s)=c_1\\phi_1 + c_2\\phi_2 + c_3\\phi_3"
},
{
"math_id": 3,
"text": "\\mathbf \\phi_1 = \\left (\\frac{2\\alpha_1}{\\pi} \\right ) ^{3/4}e^{-\\alpha_1 r^2}"
},
{
"math_id": 4,
"text": "\\mathbf \\phi_2 = \\left (\\frac{2\\alpha_2}{\\pi} \\right ) ^{3/4}e^{-\\alpha_2 r^2}"
},
{
"math_id": 5,
"text": "\\mathbf \\phi_3 = \\left (\\frac{2\\alpha_3}{\\pi} \\right ) ^{3/4}e^{-\\alpha_3 r^2}"
}
] | https://en.wikipedia.org/wiki?curid=11527342 |
11528159 | NTU method | Method to calculate rate of heat transfer in heat exchangers
The number of transfer units (NTU) method is used to calculate the rate of heat transfer in heat exchangers (especially parallel flow, counter current, and cross-flow exchangers) when there is insufficient information to calculate the log mean temperature difference (LMTD). Alternatively, this method is useful for determining the expected heat exchanger effectiveness from the known geometry. In heat exchanger analysis, if the fluid inlet and outlet temperatures are specified or can be determined by simple energy balance, the LMTD method can be used; but when these temperatures are not available either the NTU or the effectiveness NTU method is used.
The effectiveness-NTU method is useful for all flow arrangements, not just parallel flow, cross flow and counterflow; for the other arrangements, however, the effectiveness must be obtained by a numerical solution of the governing partial differential equations, as there is no analytical equation for the LMTD or the effectiveness.
Defining and using heat exchanger effectiveness.
To define the effectiveness of a heat exchanger we need to find the maximum possible heat transfer that can be hypothetically achieved in a counter-flow heat exchanger of infinite length. In such an exchanger "one" fluid experiences the maximum possible temperature difference, formula_0 (the difference between the inlet temperature of the hot stream and the inlet temperature of the cold stream). First, the specific heat capacity of the two fluid streams, denoted as formula_1, must be known. By definition formula_1 is the derivative of enthalpy with respect to temperature:
formula_2
This information can usually be found in a thermodynamics textbook, or obtained from various software packages. Additionally, the mass flowrates (formula_3) of the two streams exchanging heat must be known (here, the cold stream is denoted with subscript 'c' and the hot stream with subscript 'h'). The method proceeds by calculating the heat capacity rates (i.e. mass flow rate multiplied by specific heat capacity) formula_4 and formula_5 for the hot and cold fluids respectively. To determine the maximum possible heat transfer rate in the heat exchanger, the minimum heat capacity rate must be used, denoted as formula_6:
formula_7
Where formula_8 is the mass flow rate and formula_9 is the fluid's specific heat capacity at constant pressure. The maximum possible heat transfer rate is then determined by the following expression:
formula_10
Here, formula_11 is the maximum rate of heat that could be transferred between the fluids per unit time. formula_12 must be used as it is the fluid with the lowest heat capacity rate that would, in this hypothetical infinite length exchanger, actually undergo the maximum possible temperature change. The other fluid would change temperature more slowly along the heat exchanger length. The method, at this point, is concerned only with the fluid undergoing the maximum temperature change.
The "effectiveness of the heat exchanger (formula_13)", is the ratio between the actual heat transfer rate and the maximum possible heat transfer rate:
formula_14
where the real heat transfer rate can be determined either from the cold fluid or the hot fluid (they must provide equivalent results):
formula_15
Effectiveness is a dimensionless quantity between 0 and 1. If we know formula_13 for a particular heat exchanger, and we know the inlet conditions of the two flow streams we can calculate the amount of heat being transferred between the fluids by:
formula_16
Then, having determined the actual heat transfer from the effectiveness and inlet temperatures, the outlet temperatures can be determined from the equation above.
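As a numerical illustration (the inlet conditions and the effectiveness below are assumed values, chosen only for the sketch), the following Python code carries out exactly these steps: it computes the heat capacity rates, the maximum possible heat transfer rate, the actual duty and both outlet temperatures:

# Illustrative values only: a cold water stream heated by a hot oil stream.
mdot_h, cp_h = 0.5, 2100.0        # hot stream mass flow (kg/s) and specific heat (J/(kg K))
mdot_c, cp_c = 0.3, 4182.0        # cold stream mass flow (kg/s) and specific heat (J/(kg K))
T_h_in, T_c_in = 150.0, 20.0      # inlet temperatures (degrees C)
effectiveness = 0.7               # assumed known for this exchanger

C_h = mdot_h * cp_h               # heat capacity rates (W/K)
C_c = mdot_c * cp_c
C_min = min(C_h, C_c)

Q_max = C_min * (T_h_in - T_c_in)     # maximum possible heat transfer rate (W)
Q = effectiveness * Q_max             # actual heat transfer rate (W)

T_h_out = T_h_in - Q / C_h            # energy balance on the hot stream
T_c_out = T_c_in + Q / C_c            # energy balance on the cold stream

print(f"Q = {Q / 1000:.1f} kW, hot outlet = {T_h_out:.1f} C, cold outlet = {T_c_out:.1f} C")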
Relating Effectiveness to the Number of Transfer Units (NTU).
For any heat exchanger it can be shown that the effectiveness of the heat exchanger is related to a non-dimensional term called the "number of transfer units" or NTU:
formula_17
For a given geometry, formula_13 can be calculated using correlations in terms of the "heat capacity ratio," or formula_18 and NTU:
formula_19
formula_20 describes heat transfer across a surface
formula_21
Here, formula_22 is the overall heat transfer coefficient, formula_23 is the total heat transfer area, and formula_24 is the minimum heat capacity rate. To better understand where this definition of NTU comes from, consider the following heat transfer energy balance, which is an extension of the energy balance above:
formula_25
From this energy balance, it is clear that NTU relates the temperature change of the stream with the minimum heat capacity rate to the log mean temperature difference (formula_26). Starting from the differential equations that describe heat transfer, several "simple" correlations between effectiveness and NTU can be derived. The effectiveness-NTU correlations for some of the most common flow configurations are summarized below:
For example, the effectiveness of a parallel flow heat exchanger is calculated with:
formula_27
Or the effectiveness of a counter-current flow heat exchanger is calculated with:
formula_28
For a balanced counter-current flow heat exchanger (balanced meaning formula_29, a condition that is desirable because it allows irreversible entropy production to be reduced when sufficient heat transfer area is available):
formula_30
A single-stream heat exchanger is a special case in which formula_31. This occurs when formula_32 or formula_33 and may represent a situation in which a phase change (condensation or evaporation) is occurring in one of the heat exchanger fluids or when one of the heat exchanger fluids is being held at a fixed temperature. In this special case the heat exchanger behavior is independent of the flow arrangement and the effectiveness is given by:
formula_34
For a crossflow heat exchanger with both fluid unmixed, the effectiveness is:
formula_35
where formula_36 is the polynomial function
formula_37
If both fluids are mixed in the crossflow heat exchanger, then
formula_38
If one of the fluids in the crossflow heat exchanger is mixed and the other is unmixed, the result depends on which one has the minimum heat capacity rate. If formula_39 corresponds to the mixed fluid, the result is
formula_40
whereas if formula_39 corresponds to the unmixed fluid, the solution is
formula_41
All these formulas for crossflow heat exchangers are also valid for formula_42.
Additional effectiveness-"NTU" analytical relationships have been derived for other flow arrangements, including shell-and-tube heat exchangers with multiple passes and different shell types, and plate heat exchangers.
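The correlations above are straightforward to evaluate. The Python sketch below implements the parallel-flow, counter-current (including the balanced limit) and single-stream expressions, and shows that for the same NTU and heat capacity ratio the counter-current arrangement always meets or exceeds the parallel-flow effectiveness:

import math

def eff_parallel(ntu, cr):
    """Parallel-flow effectiveness."""
    return (1.0 - math.exp(-ntu * (1.0 + cr))) / (1.0 + cr)

def eff_counterflow(ntu, cr):
    """Counter-current effectiveness, with the balanced limit handled separately."""
    if abs(cr - 1.0) < 1e-12:
        return ntu / (1.0 + ntu)
    e = math.exp(-ntu * (1.0 - cr))
    return (1.0 - e) / (1.0 - cr * e)

def eff_single_stream(ntu):
    """Single-stream case, C_r = 0 (e.g. condensation or evaporation on one side)."""
    return 1.0 - math.exp(-ntu)

cr = 0.75
for ntu in [0.5, 1.0, 2.0, 4.0]:
    print(f"NTU = {ntu:3.1f}  parallel = {eff_parallel(ntu, cr):.3f}"
          f"  counterflow = {eff_counterflow(ntu, cr):.3f}"
          f"  single stream = {eff_single_stream(ntu):.3f}")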
Effectiveness-NTU method for gaseous mass transfer.
It is common in the field of mass transfer system design and modeling to draw analogies between heat transfer and mass transfer. However, a mass transfer-analogous definition of the effectiveness-NTU method requires some additional terms. One common misconception is that gaseous mass transfer is driven by concentration gradients; in reality it is the partial pressure of the given gas that drives mass transfer. In the same way that the heat transfer definition includes the specific heat capacity of the fluid, which describes the change in enthalpy of the fluid with respect to a change in temperature and is defined as
formula_2
a mass transfer-analogous specific mass capacity is required. This specific mass capacity should describe the change in concentration of the transferring gas relative to the partial pressure difference driving the mass transfer. This leads to the following definition of the specific mass capacity:
formula_43
Here, formula_44 represents the mass ratio of gas 'x' (that is, the mass of gas 'x' relative to the mass of all non-'x' gas) and formula_45 is the partial pressure of gas 'x'. Using the ideal gas formulation for the mass ratio gives the following definition for the specific mass capacity:
formula_46
Here, formula_47 is the molecular weight of gas 'x' and formula_48 is the average molecular weight of all other gas constituents. With this information, the NTU for gaseous mass transfer of gas 'x' can be defined as follows:
formula_49
Here, formula_50 is the overall mass transfer coefficient, which could be determined by empirical correlations, formula_51 is the surface area for mass transfer (particularly relevant in membrane-based separations), and formula_52 is the mass flowrate of bulk fluid (e.g., the mass flowrate of air in an application where water vapor is being separated from the air mixture). At this point, all of the same heat transfer effectiveness-NTU correlations will accurately predict the mass transfer performance, as long as the heat transfer terms in the definition of NTU have been replaced by the mass transfer terms, as shown above. Similarly, it follows that the definition of formula_18 becomes:
formula_53
Effectiveness-NTU method for dehumidification applications.
One particularly useful application of the effectiveness-NTU framework described above is membrane-based air dehumidification. In this case, the specific mass capacity can be defined for humid air and is termed the "specific humidity capacity":
formula_54
Here, formula_55 is the molecular weight of water (vapor), formula_56 is the average molecular weight of air, and formula_57 is the partial pressure of air (not including the partial pressure of water vapor in the air mixture), which can be approximated from the partial pressure of water vapor at the inlet, formula_58, before dehumidification occurs. From here, all of the previously described equations can be used to determine the effectiveness of the mass exchanger.
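As a rough numerical sketch (the operating conditions and the overall mass transfer coefficient below are assumptions made only for illustration), the specific humidity capacity and the mass-transfer NTU of a membrane dehumidifier can be evaluated as follows, after which a heat-transfer effectiveness correlation is reused unchanged:

# Assumed operating conditions (illustrative only).
P_total = 101325.0        # total pressure (Pa)
P_wv_inlet = 2339.0       # inlet water-vapour partial pressure (Pa), roughly saturation at 20 C
U_m = 2.0e-7              # assumed overall mass transfer coefficient (kg / (m^2 s Pa))
A_m = 5.0                 # membrane area (m^2)
mdot_air = 0.05           # dry-air mass flow rate (kg/s)

# Specific humidity capacity: kg of water per kg of air per Pa of vapour-pressure difference.
c_ph = 0.62198 / (P_total - P_wv_inlet)

# Mass-transfer NTU, then a counter-current effectiveness (balanced case shown for simplicity).
ntu = (U_m * A_m) / (mdot_air * c_ph)
effectiveness = ntu / (1.0 + ntu)

print(f"c_ph = {c_ph:.3e} kg/(kg Pa), NTU = {ntu:.2f}, effectiveness = {effectiveness:.2f}")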
Importance of defining the specific mass capacity.
It is very common, especially in dehumidification applications, to define the mass transfer driving force as the concentration difference. When deriving effectiveness-NTU correlations for membrane-based gas separations, this is valid only if the total pressures are approximately equal on both sides of the membrane (e.g., an energy recovery ventilator for a building). This is sufficient since the partial pressure and concentration are then proportional. However, if the total pressures are not approximately equal on both sides of the membrane, the low pressure side could have a higher "concentration" but a lower partial pressure of the given gas (e.g., water vapor in a dehumidification application) than the high pressure side; in that case, using the concentration as the driving force is not physically accurate.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\ T_{h,i}- \\ T_{c,i} "
},
{
"math_id": 1,
"text": "c_p"
},
{
"math_id": 2,
"text": "c_p=\\frac{dh}{dT}"
},
{
"math_id": 3,
"text": " \\dot m"
},
{
"math_id": 4,
"text": " \\ C_h"
},
{
"math_id": 5,
"text": " \\ C_c"
},
{
"math_id": 6,
"text": " \\ C_\\mathrm{min}"
},
{
"math_id": 7,
"text": " \\ C_\\mathrm{min}=\\mathrm{min}[ \\dot m_c c_{p,c}, \\dot m_h c_{p,h}]"
},
{
"math_id": 8,
"text": " \\dot m "
},
{
"math_id": 9,
"text": " c_{p} "
},
{
"math_id": 10,
"text": "\\dot Q_\\mathrm{max}\\ = C_\\mathrm{min} (T_{h,i}-T_{c,i})"
},
{
"math_id": 11,
"text": " \\dot Q_\\mathrm{max}"
},
{
"math_id": 12,
"text": " \\ C_\\mathrm{min} "
},
{
"math_id": 13,
"text": "\\epsilon"
},
{
"math_id": 14,
"text": "\\epsilon \\ = \\frac{\\dot Q}{\\dot Q _\\mathrm{max}}"
},
{
"math_id": 15,
"text": "\\dot Q \\ = C_h (T_{h,i} -T_{h,o})\\ = C_c (T_{c,o} - T_{c,i})"
},
{
"math_id": 16,
"text": "\\dot Q \\ = \\epsilon C_\\mathrm{min} (T_{h,i} -T_{c,i})"
},
{
"math_id": 17,
"text": "\\ \\epsilon = f ( NTU,\\frac{C_\\mathrm{min}} {C_\\mathrm{max}}) "
},
{
"math_id": 18,
"text": "C_r"
},
{
"math_id": 19,
"text": "C_r \\ = \\frac{C_\\mathrm{min}}{C_\\mathrm{max}}"
},
{
"math_id": 20,
"text": " \\ NTU"
},
{
"math_id": 21,
"text": "NTU \\ = \\frac{U A}{C_\\mathrm{min}}"
},
{
"math_id": 22,
"text": "U"
},
{
"math_id": 23,
"text": "A"
},
{
"math_id": 24,
"text": "C_{min}"
},
{
"math_id": 25,
"text": "\\dot Q= C_{min} (T_{o} -T_{i})_{min} = UA \\Delta T_{LM}"
},
{
"math_id": 26,
"text": "\\Delta T_{LM}"
},
{
"math_id": 27,
"text": " \\epsilon \\ = \\frac {1 - \\exp[-NTU(1 + C_{r})]}{1 + C_{r}} "
},
{
"math_id": 28,
"text": " \\epsilon \\ = \\frac {1 - \\exp[-NTU(1 - C_{r})]}{1 - C_{r}\\exp[-NTU(1 - C_{r})]} "
},
{
"math_id": 29,
"text": " C_r \\ = 1 "
},
{
"math_id": 30,
"text": " \\epsilon\\ = \\frac{NTU}{1+NTU} "
},
{
"math_id": 31,
"text": " C_r \\ = 0 "
},
{
"math_id": 32,
"text": "C_\\mathrm{min}=0"
},
{
"math_id": 33,
"text": "C_\\mathrm{max}=\\infty"
},
{
"math_id": 34,
"text": " \\epsilon \\ = 1 - e^{-NTU} "
},
{
"math_id": 35,
"text": " \\epsilon \\ = 1 - \\exp(-NTU) - \\exp[-(1+C_r)NTU] \\sum_{n=1}^\\infty C_r^n P_n(NTU) "
},
{
"math_id": 36,
"text": " P_n "
},
{
"math_id": 37,
"text": " P_n(x) = \\frac{1}{(n+1)!} \\sum_{j=1}^n \\frac{n+1-j}{j!} x^{n+j} "
},
{
"math_id": 38,
"text": " \\epsilon \\ = \\left[ \\frac{1}{1 -\\exp(-NTU)} + \\frac{C_r}{1 -\\exp(-NTU \\cdot C_r)} - \\frac{1}{NTU} \\right]^{-1} "
},
{
"math_id": 39,
"text": " C_{\\mathrm{min}} "
},
{
"math_id": 40,
"text": " \\epsilon \\ = 1 - \\exp\\left( -\\frac{1-\\exp(-NTU\\cdot C_r)}{C_r} \\right) "
},
{
"math_id": 41,
"text": " \\epsilon \\ = \\frac{1}{C_r} (1 - \\exp\\{-C_r[1 - \\exp(-NTU)] \\}) "
},
{
"math_id": 42,
"text": " C_r = 1 "
},
{
"math_id": 43,
"text": "c_{p-x}=\n\\frac{d \\omega_x}{dP_x}"
},
{
"math_id": 44,
"text": "\\omega_x"
},
{
"math_id": 45,
"text": "P_x"
},
{
"math_id": 46,
"text": "c_{p-x}=\n\\frac\n{M_x/M_{other}}\n{P_{other}}"
},
{
"math_id": 47,
"text": "M_x"
},
{
"math_id": 48,
"text": "M\n_{other}"
},
{
"math_id": 49,
"text": "NTU_x \\ = \\frac{U_mA_m}{\\dot{m}c_{p-x}}"
},
{
"math_id": 50,
"text": "U_m"
},
{
"math_id": 51,
"text": "A_m"
},
{
"math_id": 52,
"text": "\\dot{m}"
},
{
"math_id": 53,
"text": "C_r \\ = \\frac{(\\dot{m}c_{p-x})_{min}}{(\\dot{m}c_{p-x})_{max}}"
},
{
"math_id": 54,
"text": "c_{p-h}=\\frac\n{M_{wv}/M_{air}}\n{P_{air}}=\\frac{0.62198}{P_{air}}=\\frac{0.62198}{P_{total}-P_{wv,inlet}}"
},
{
"math_id": 55,
"text": "M_{wv}"
},
{
"math_id": 56,
"text": "M_{air}"
},
{
"math_id": 57,
"text": "P_{air}"
},
{
"math_id": 58,
"text": "P_{wv,inlet}"
}
] | https://en.wikipedia.org/wiki?curid=11528159 |
1153192 | Expectiminimax | The expectiminimax algorithm is a variation of the minimax algorithm, for use in artificial intelligence systems that play two-player zero-sum games, such as backgammon, in which the outcome depends on a combination of the player's skill and chance elements such as dice rolls. In addition to "min" and "max" nodes of the traditional minimax tree, this variant has "chance" ("move by nature") nodes, which take the expected value of a random event occurring. In game theory terms, an expectiminimax tree is the game tree of an extensive-form game of perfect, but incomplete information.
In the traditional minimax method, the levels of the tree alternate from max to min until the depth limit of the tree has been reached. In an expectiminimax tree, the "chance" nodes are interleaved with the max and min nodes. Instead of taking the max or min of the utility values of their children, chance nodes take a weighted average, with the weight being the probability that child is reached.
The interleaving depends on the game. Each "turn" of the game is evaluated as a "max" node (representing the AI player's turn), a "min" node (representing a potentially-optimal opponent's turn), or a "chance" node (representing a random effect or player).
For example, consider a game in which each round consists of a single die throw, and then decisions made by first the AI player, and then another intelligent opponent. The order of nodes in this game would alternate between "chance", "max" and then "min".
Pseudocode.
The expectiminimax algorithm is a variant of the minimax algorithm and was first proposed by Donald Michie in 1966.
Its pseudocode is given below.
function expectiminimax(node, depth)
if node is a terminal node or depth = 0
return the heuristic value of node
if the adversary is to play at node
// Return value of minimum-valued child node
let α := +∞
foreach child of node
α := min(α, expectiminimax(child, depth-1))
else if we are to play at node
// Return value of maximum-valued child node
let α := -∞
foreach child of node
α := max(α, expectiminimax(child, depth-1))
else if random event at node
// Return weighted average of all child nodes' values
let α := 0
foreach child of node
α := α + (Probability[child] × expectiminimax(child, depth-1))
return α
Note that for random nodes, there must be a known probability of reaching each child. (For most games of chance, child nodes will be equally-weighted, which means the return value can simply be the average of all child values.)
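For a concrete illustration, the following Python sketch applies the logic of the pseudocode above to a tiny hand-built tree: the maximizing player chooses between two moves, each of which leads to a fair coin-flip chance node over terminal payoffs:

# A node is ('leaf', value), ('max', children), ('min', children) or
# ('chance', [(probability, child), ...]).
def expectiminimax(node, depth):
    kind, payload = node
    if kind == "leaf":
        return payload                                        # terminal node
    if depth == 0:
        raise ValueError("depth limit reached; a heuristic value would be needed here")
    if kind == "min":
        return min(expectiminimax(c, depth - 1) for c in payload)
    if kind == "max":
        return max(expectiminimax(c, depth - 1) for c in payload)
    # chance node: probability-weighted average of the children
    return sum(p * expectiminimax(c, depth - 1) for p, c in payload)

move_a = ("chance", [(0.5, ("leaf", 10.0)), (0.5, ("leaf", -2.0))])   # expected value 4.0
move_b = ("chance", [(0.5, ("leaf", 3.0)), (0.5, ("leaf", 3.0))])     # expected value 3.0
root = ("max", [move_a, move_b])

print(expectiminimax(root, depth=4))   # 4.0: the riskier move has the higher expectation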
Expectimax search.
Expectimax search is a variant described in "Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability" (2005) by Tom Everitt and Marcus Hutter.
Alpha-beta pruning.
Bruce Ballard was the first to develop a technique, called *-minimax, that enables alpha-beta pruning in expectiminimax trees. The problem with integrating alpha-beta pruning into the expectiminimax algorithm is that the scores of a chance node's children may exceed the alpha or beta bound of its parent, even if the weighted value of each child does not. However, it is possible to bound the scores of a chance node's children, and therefore bound the score of the CHANCE node.
If a standard iterative search is about to score the formula_0th child of a chance node with formula_1 equally likely children, that search has computed scores formula_2 for child nodes 1 through formula_3. Assuming a lowest possible score formula_4 and a highest possible score formula_5 for each unsearched child, the bounds of the chance node's score is as follows:
formula_6
formula_7
If an alpha and/or beta bound are given in scoring the chance node, these bounds can be used to cut off the search of the formula_0th child. The above equations can be rearranged to find a new alpha & beta value that will cut off the search if it would cause the chance node to exceed its own alpha and beta bounds:
formula_8
formula_9
The pseudocode for extending expectiminimax with fail-hard alpha-beta pruning in this manner is as follows:
function *-minimax(node, depth, α, β)
if node is a terminal node or depth = 0
return the heuristic value of node
if node is a max or min node
return the minimax value of the node
let N = numSuccessors(node)
// Compute α, β for children
let A = N * (α - U) + U
let B = N * (β - L) + L
let sum = 0
foreach child of node
// Limit child α, β to a valid range
let AX = max(A, L)
let BX = min(B, U)
// Search the child with new cutoff values
let score = *-minimax(child, depth - 1, AX, BX)
// Check for α, β cutoff conditions
if score <= A
return α
if score >= B
return β
sum += score
// Adjust α, β for the next child
A += U - score
B += L - score
// No cutoff occurred, return score
return sum / N
This technique is one of a family of variants of algorithms which can bound the search of a CHANCE node and its children based on collecting lower and upper bounds of the children during search. Other techniques which can offer performance benefits include probing each child with a heuristic to establish a min or max before performing a full search on each child, etc. | [
{
"math_id": 0,
"text": "i"
},
{
"math_id": 1,
"text": "N"
},
{
"math_id": 2,
"text": "v_1, v_2, \\ldots, v_{i-1}"
},
{
"math_id": 3,
"text": "i-1"
},
{
"math_id": 4,
"text": "L"
},
{
"math_id": 5,
"text": "U"
},
{
"math_id": 6,
"text": "\\text{score} \\leq \\frac{1}{n} \\left( ( v_1 + \\ldots + v_{i-1}) + v_i + U \\times (n - i) \\right)"
},
{
"math_id": 7,
"text": "\\text{score} \\geq \\frac{1}{n} \\left( ( v_1 + \\ldots + v_{i-1}) + v_i + L \\times (n - i) \\right)"
},
{
"math_id": 8,
"text": "\\alpha_i = N \\times \\alpha - \\left( v_1 + \\ldots + v_{i-1} \\right) - U \\times (n - i)"
},
{
"math_id": 9,
"text": "\\beta_i = N \\times \\beta - \\left( v_1 + \\ldots + v_{i-1} \\right) - L \\times (n - i)"
},
] | https://en.wikipedia.org/wiki?curid=1153192 |
11532414 | Richard Lipton | American computer scientist
Richard Jay Lipton (born September 6, 1946) is an American computer scientist who is Associate Dean of Research, Professor, and the Frederick G. Storey Chair in Computing in the College of Computing at the Georgia Institute of Technology. He has worked in computer science theory, cryptography, and DNA computing.
Career.
In 1968, Lipton received his undergraduate degree in mathematics from Case Western Reserve University. In 1973, he received his Ph.D. from Carnegie Mellon University; his dissertation, supervised by David Parnas, is entitled "On Synchronization Primitive Systems". After graduating, Lipton taught at Yale 1973–1978, at Berkeley 1978–1980, and then at Princeton 1980–2000. Since 2000, Lipton has been at Georgia Tech. While at Princeton, Lipton worked in the field of DNA computing. Since 1996, Lipton has been the chief consulting scientist at Telcordia. In 1999, Lipton was elected a member of the National Academy of Engineering for the application of computer science theory to practice.
Karp–Lipton theorem.
In 1980, along with Richard M. Karp, Lipton proved that if SAT can be solved by Boolean circuits with a polynomial number of logic gates, then the polynomial hierarchy collapses to its second level.
Parallel algorithms.
Showing that a program P has some property is a simple process if the actions inside the program are uninterruptible. However, when the action is interruptible, Lipton showed that through a type of reduction and analysis, it can be shown that the reduced program has that property if and only if the original program has the property. If the reduction is done by treating interruptible operations as one large uninterruptible action, even with these relaxed conditions properties can be proven for a program P. Thus, correctness proofs of a parallel system can often be greatly simplified.
Database security.
Lipton studied and created database security models on how and when to restrict the queries made by users of a database such that private or secret information will not be leaked. For example, querying a database of campaign donations could allow the user to discover the individual donations to political candidates or organizations. If given access to averages of data and unrestricted query access, a user could exploit the properties of those averages to gain illicit information. These queries are considered to have large "overlap" creating the insecurity. By bounding the "overlap" and number of queries, a secure database can be achieved.
Online scheduling.
Richard Lipton with Andrew Tomkins introduced a randomized online interval scheduling algorithm, the 2-size version being strongly competitive, and the "k"-size version achieving O(log formula_0), as well as demonstrating a theoretical lower bound of O(log formula_1). The algorithm uses a private coin for randomization and a "virtual" choice to fool a medium adversary.
Being presented with an event the user must decide whether or not to include the event in the schedule. The 2-size virtual algorithm is described by how it reacts to 1-interval or "k"-intervals being presented by the adversary:
Again, this 2-size algorithm is shown to be strongly-competitive. The generalized "k"-size algorithm which is similar to the 2-size algorithm is then shown to be O(logformula_0)-competitive.
Program checking.
Lipton showed that randomized testing can be provably useful, provided the problem satisfies certain properties. Proving the correctness of a program is one of the most important problems in computer science. Typically, in randomized testing, attaining a 1/1000 chance of error requires running 1000 tests. However, Lipton showed that if a problem has "easy" sub-parts, repeated black-box testing can attain an error rate of "c""r", with "c" a constant less than 1 and "r" the number of tests. Therefore, the probability of error goes to zero exponentially fast as "r" grows.
This technique is useful to check the correctness of many types of problems.
PSPACE.
Games with simple strategies.
In the area of game theory, more specifically of non-cooperative games, Lipton together with E. Markakis and A. Mehta proved the existence of epsilon-equilibrium strategies with support logarithmic in the number of pure strategies. Furthermore, the payoff of such strategies can epsilon-approximate the payoffs of exact Nash equilibria. The limited (logarithmic) size of the support provides a natural quasi-polynomial algorithm to compute epsilon-equilibria.
Query size estimation.
Lipton and J. Naughton presented an adaptive random sampling algorithm for database querying which is applicable to any query for which answers to the query can be partitioned into disjoint subsets. Unlike most sampling estimation algorithms—which statically determine the number of samples needed—their algorithm decides the number of samples based on the sizes of the samples, and tends to keep the running time constant (as opposed to linear in the number of samples).
Formal verification of programs.
DeMillo, Lipton and Perlis criticized the idea of formal verification of programs and argued that, unlike mathematical proofs, which gain credibility through social processes of checking, discussion and reuse, formal verifications of programs are not subject to the same social processes and therefore cannot be expected to play the same role in establishing confidence in software.
Multi-party protocols.
Chandra, Furst and Lipton generalized the notion of two-party communication protocols to multi-party communication protocols. They proposed a model in which a collection of processes (formula_3) have access to a set of integers (formula_4, formula_5) except one of them, so that formula_6 is denied access to formula_7. These processes are allowed to communicate in order to arrive at a consensus on a predicate. They studied this model's communication complexity, defined as the number of bits broadcast among all the processes. As an example, they studied the complexity of a "k"-party protocol for Exactly-"N" (do all formula_7’s sum up to N?), and obtained a lower bound using the tiling method. They further applied this model to study general branching programs and obtained a time lower bound for constant-space branching programs that compute Exactly-"N".
Time/space SAT tradeoff.
We have no way to prove that the Boolean satisfiability problem (often abbreviated as SAT), which is NP-complete, requires exponential (or at least super-polynomial) time (this is the famous P versus NP problem), or linear (or at least super-logarithmic) space to solve. However, in the context of space–time tradeoff, one can prove that SAT cannot be computed if we apply constraints to both time and space. L. Fortnow, Lipton, D. van Melkebeek, and A. Viglas proved that SAT cannot be computed by a Turing machine that takes at most O("n"^1.1) steps and at most O("n"^0.1) cells of its read–write tapes. | [
{
"math_id": 0,
"text": "\\vartriangle^{1+\\epsilon}"
},
{
"math_id": 1,
"text": "\\vartriangle"
},
{
"math_id": 2,
"text": "f(x_1,\\dots,x_n)"
},
{
"math_id": 3,
"text": "P_0,P_1,\\ldots,P_{k-1}"
},
{
"math_id": 4,
"text": "a_0"
},
{
"math_id": 5,
"text": "a_1,\\ldots,a_{k-1}"
},
{
"math_id": 6,
"text": "P_i"
},
{
"math_id": 7,
"text": "a_i"
}
] | https://en.wikipedia.org/wiki?curid=11532414 |
11538526 | Nucleotidyltransferase | Class of enzymes
Nucleotidyltransferases are transferase enzymes of phosphorus-containing groups, e.g., substituents of nucleotidylic acids or simply nucleoside monophosphates. The general reaction of transferring a nucleoside monophosphate moiety from A to B can be written as:
A-P-N + B formula_0 A + B-P-N
For example, in the case of polymerases, A is pyrophosphate and B is the nascent polynucleotide.
They are classified under EC number 2.7.7 and they can be categorised into:
Role in metabolism.
Many metabolic enzymes are modified by nucleotidyltransferases. The attachment of an AMP (adenylylation) or UMP (uridylylation) can activate or inactivate an enzyme or change its specificity (see figure). These modifications can lead to intricate regulatory networks that can finely tune enzymatic activities so that only the needed compounds are made (here: glutamine).
Role in DNA repair mechanisms.
Nucleotidyl transferase is a component of the repair pathway for single nucleotide base excision repair. This repair mechanism begins when a single nucleotide is recognized by DNA glycosylase as incorrectly matched or has been mutated in some way (UV light, chemical mutagen, etc.), and is removed. Later, a nucleotidyl transferase is used to fill in the gap with the correct base, using the template strand as the reference. | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] | https://en.wikipedia.org/wiki?curid=11538526 |
11540091 | Hylomorphism (computer science) | Recursive function
In computer science, and in particular functional programming, a hylomorphism is a recursive function, corresponding to the composition of an anamorphism (which first builds a set of results; also known as 'unfolding') followed by a catamorphism (which then folds these results into a final return value). Fusion of these two recursive computations into a single recursive pattern then avoids building the intermediate data structure. This is an example of deforestation, a program optimization strategy. A related type of function is a metamorphism, which is a catamorphism followed by an anamorphism.
Formal definition.
A hylomorphism formula_0 can be defined in terms of its separate anamorphic and catamorphic parts.
The anamorphic part can be defined in terms of a unary function formula_1 defining the list of elements in formula_2 by repeated application ("unfolding"), and a predicate formula_3 providing the terminating condition.
The catamorphic part can be defined as a combination of an initial value formula_4 for the fold and a binary operator formula_5 used to perform the fold.
Thus a hylomorphism
formula_6
may be defined (assuming appropriate definitions of formula_7 & formula_8).
Notation.
An abbreviated notation for the above hylomorphism is formula_9.
Hylomorphisms in practice.
Lists.
Lists are common data structures as they naturally reflect linear computational processes. These processes arise in repeated (iterative) function calls. Therefore, it is sometimes necessary to generate a temporary list of intermediate results before reducing this list to a single result.
One example of a commonly encountered hylomorphism is the canonical factorial function.
factorial :: Integer -> Integer
factorial n
| n == 0 = 1
| n > 0 = n * factorial (n - 1)
In the previous example (written in Haskell, a purely functional programming language) it can be seen that this function, applied to any given valid input, will generate a linear call tree isomorphic to a list. For example, given "n" = 5 it will produce the following:
factorial 5 = 5 * (factorial 4) = 120
factorial 4 = 4 * (factorial 3) = 24
factorial 3 = 3 * (factorial 2) = 6
factorial 2 = 2 * (factorial 1) = 2
factorial 1 = 1 * (factorial 0) = 1
factorial 0 = 1
In this example, the anamorphic part of the process is the generation of the call tree which is isomorphic to the list codice_0. The catamorphism, then, is the calculation of the product of the elements of this list. Thus, in the notation given above, the factorial function may be written as formula_10 where formula_11 and formula_12.
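The split into an anamorphism followed by a catamorphism can also be made explicit. The short sketch below is written in Python rather than Haskell, purely to show the two phases separately: it first unfolds 5 into the intermediate list [5, 4, 3, 2, 1] using the g and p given above, and then folds that list with multiplication and the seed 1:

from functools import reduce

def unfold(g, p, a):
    """Anamorphism: repeatedly apply g to build the intermediate list, stopping when p holds."""
    out = []
    while not p(a):
        b, a = g(a)
        out.append(b)
    return out

def factorial(n):
    g = lambda m: (m, m - 1)             # emit m, continue unfolding from m - 1
    p = lambda m: m == 0                 # terminating condition
    intermediate = unfold(g, p, n)       # [5, 4, 3, 2, 1] for n = 5
    return reduce(lambda acc, b: b * acc, intermediate, 1)   # catamorphism: fold with (*) and 1

print(factorial(5))   # 120

A hylomorphism fuses these two phases so that the intermediate list is never actually built, which is the deforestation optimization mentioned in the introduction.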
Trees.
However, the term 'hylomorphism' does not apply solely to functions acting upon isomorphisms of lists. For example, a hylomorphism may also be defined by generating a non-linear call tree which is then collapsed. An example of such a function is the function to generate the "n"th term of the Fibonacci sequence.
fibonacci :: Integer -> Integer
fibonacci n
| n == 0 = 0
| n == 1 = 1
| n > 1 = fibonacci (n - 2) + fibonacci (n - 1)
This function, again applied to any valid input, will generate a non-linear call tree. The example on the right shows the call tree generated by applying the codice_1 function to the input codice_2.
This time, the anamorphism is the generation of the call tree isomorphic to the tree with leaf nodes codice_3 and the catamorphism the summation of these leaf nodes. | [
{
"math_id": 0,
"text": "h : A \\rightarrow C"
},
{
"math_id": 1,
"text": "g : A \\rightarrow B \\times A"
},
{
"math_id": 2,
"text": "B"
},
{
"math_id": 3,
"text": "p : A \\rightarrow \\text{Boolean}"
},
{
"math_id": 4,
"text": "c \\in C"
},
{
"math_id": 5,
"text": "\\oplus : B \\times C \\rightarrow C"
},
{
"math_id": 6,
"text": "\nh\\,a = \\begin{cases}\n c & \\mbox{if } p\\,a\n \\\\b \\oplus h\\,a' \\,\\,\\,where \\,(b, a') = g\\,a & \\mbox{otherwise}\n \\end{cases}\n "
},
{
"math_id": 7,
"text": "p"
},
{
"math_id": 8,
"text": "g"
},
{
"math_id": 9,
"text": "h = [\\![(c, \\oplus),(g, p)]\\!]"
},
{
"math_id": 10,
"text": "\\text{factorial} = [\\![(1,\\times),(g, p)]\\!]"
},
{
"math_id": 11,
"text": "g\\ n = (n, n - 1)"
},
{
"math_id": 12,
"text": "p\\ n = (n = 0)"
}
] | https://en.wikipedia.org/wiki?curid=11540091 |
11543042 | Dynamic convex hull | The dynamic convex hull problem is a class of dynamic problems in computational geometry. The problem consists in the maintenance, i.e., keeping track, of the convex hull for input data undergoing a sequence of discrete changes, i.e., when input data elements may be inserted, deleted, or modified. It should be distinguished from the kinetic convex hull, which studies similar problems for continuously moving points. Dynamic convex hull problems may be distinguished by the types of the input data and the allowed types of modification of the input data.
Planar point set.
It is easy to construct an example for which the convex hull contains all input points, but after the insertion of a single point the convex hull becomes a triangle. And conversely, the deletion of a single point may produce the opposite drastic change of the size of the output. Therefore, if the convex hull is required to be reported in traditional way as a polygon, the lower bound for the worst-case computational complexity of the recomputation of the convex hull is formula_0, since this time is required for a mere reporting of the output. This lower bound is attainable, because several general-purpose convex hull algorithms run in linear time when input points are ordered in some way and logarithmic-time methods for dynamic maintenance of ordered data are well-known.
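The drastic change just described is easy to reproduce. The Python sketch below (illustrative only; it recomputes a static hull with Andrew's monotone chain rather than maintaining a dynamic structure) places 21 points on a parabola, so that every point is a hull vertex, and then inserts a single far-away point, after which the hull is a triangle:

def convex_hull(points):
    """Andrew's monotone chain; returns the hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

points = [(x, x * x) for x in range(-10, 11)]   # 21 points in convex position
print(len(convex_hull(points)))                 # 21: every input point is a hull vertex

points.append((0, -1000))                       # one insertion far below the parabola
print(len(convex_hull(points)))                 # 3: the hull collapses to a triangle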
This problem may be overcome by eliminating the restriction on the output representation. There are data structures that can maintain representations of the convex hull in an amount of time per update that is much smaller than linear. For many years the best algorithm of this type was that of Overmars and van Leeuwen (1981), which took time O(log2 "n") per update, but it has since been improved by Timothy M. Chan and others.
In a number of applications finding the convex hull is a step in an algorithm for the solution of the overall problem. The selected representation of the convex hull may influence the computational complexity of further operations of the overall algorithm. For example, the point in polygon query for a convex polygon represented by the ordered set of its vertices may be answered in logarithmic time, which would be impossible for convex hulls reported by the set of its vertices without any additional information. Therefore, some research of dynamic convex hull algorithms involves the computational complexity of various geometric search problems with convex hulls stored in specific kinds of data structures. The mentioned approach of Overmars and van Leeuwen allows for logarithmic complexity of various common queries.
{
"math_id": 0,
"text": "\\Omega(N)"
}
] | https://en.wikipedia.org/wiki?curid=11543042 |
11543702 | Carathéodory metric | In mathematics, the Carathéodory metric is a metric defined on the open unit ball of a complex Banach space that has many similar properties to the Poincaré metric of hyperbolic geometry. It is named after the Greek mathematician Constantin Carathéodory.
Definition.
Let ("X", || ||) be a complex Banach space and let "B" be the open unit ball in "X". Let Δ denote the open unit disc in the complex plane C, thought of as the Poincaré disc model for 2-dimensional real/1-dimensional complex hyperbolic geometry. Let the Poincaré metric "ρ" on Δ be given by
formula_0
(thus fixing the curvature to be −4). Then the Carathéodory metric "d" on "B" is defined by
formula_1
What it means for a function on a Banach space to be holomorphic is defined in the article on Infinite dimensional holomorphy.
formula_2
formula_3
formula_4
with equality if and only if either "a" = "b" or there exists a bounded linear functional ℓ ∈ "X"∗ such that ||ℓ|| = 1, ℓ("a" + "b") = 0 and
formula_5
Moreover, any ℓ satisfying these three conditions has |ℓ("a" − "b")| = ||"a" − "b"||.
Carathéodory length of a tangent vector.
There is an associated notion of Carathéodory length for tangent vectors to the ball "B". Let "x" be a point of "B" and let "v" be a tangent vector to "B" at "x"; since "B" is the open unit ball in the vector space "X", the tangent space T"x""B" can be identified with "X" in a natural way, and "v" can be thought of as an element of "X". Then the Carathéodory length of "v" at "x", denoted "α"("x", "v"), is defined by
formula_6
One can show that "α"("x", "v") ≥ ||"v"||, with equality when "x" = 0. | [
{
"math_id": 0,
"text": "\\rho (a, b) = \\tanh^{-1} \\frac{| a - b |}{|1 - \\bar{a} b |}"
},
{
"math_id": 1,
"text": "d (x, y) = \\sup \\{ \\rho (f(x), f(y)) | f : B \\to \\Delta \\mbox{ is holomorphic} \\}."
},
{
"math_id": 2,
"text": "d(0, x) = \\rho(0, \\| x \\|)."
},
{
"math_id": 3,
"text": "d(x, y) = \\sup \\left\\{ \\left. 2 \\tanh^{-1} \\left\\| \\frac{f(x) - f(y)}{2} \\right\\| \\right| f : B \\to \\Delta \\mbox{ is holomorphic} \\right\\}"
},
{
"math_id": 4,
"text": "\\| a - b \\| \\leq 2 \\tanh \\frac{d(a, b)}{2}, \\qquad \\qquad (1)"
},
{
"math_id": 5,
"text": "\\rho (\\ell (a), \\ell (b)) = d(a, b)."
},
{
"math_id": 6,
"text": "\\alpha (x, v) = \\sup \\big\\{ | \\mathrm{D} f(x) v | \\big| f : B \\to \\Delta \\mbox{ is holomorphic} \\big\\}."
}
] | https://en.wikipedia.org/wiki?curid=11543702 |
11543936 | Huber's equation | Huber's equation, first derived by the Polish engineer Tytus Maksymilian Huber, is a basic formula in elastic material tension calculations, an equivalent of the equation of state, but applying to solids. In its simplest and most commonly used expression it reads:
formula_0
where formula_1 is the tensile stress, and formula_2 is the shear stress, measured in newtons per square meter (N/m2, also called pascals, Pa), while formula_3—called a reduced tension—is the resultant tension of the material.
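A small numerical illustration in Python (the stress values are invented for the example):
from math import sqrt
def reduced_tension(sigma, tau):
    # Huber's reduced tension: sigma_red = sqrt(sigma^2 + 3*tau^2), stresses in pascals
    return sqrt(sigma**2 + 3.0 * tau**2)
print(reduced_tension(100e6, 40e6) / 1e6)  # 100 MPa tension with 40 MPa shear gives about 121.7 MPa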
The equation finds application in calculating the span width of bridges, their beam cross-sections, etc.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\sigma_{red}=\\sqrt{({\\sigma}^2) + 3({\\tau}^2)}\n"
},
{
"math_id": 1,
"text": "\\sigma"
},
{
"math_id": 2,
"text": "\\tau"
},
{
"math_id": 3,
"text": "\\sigma_{red}"
}
] | https://en.wikipedia.org/wiki?curid=11543936 |
11548901 | Local volatility | Option pricing model
A local volatility model, in mathematical finance and financial engineering, is an option pricing model that treats volatility as a function of both the current asset level formula_0 and of time formula_1. As such, it is a generalisation of the Black–Scholes model, where the volatility is a constant (i.e. a trivial function of formula_0 and formula_1). Local volatility models are often compared with stochastic volatility models, where the instantaneous volatility is not just a function of the asset level formula_0 but depends also on a new "global" randomness coming from an additional random component.
Formulation.
In mathematical finance, the asset "S""t" that underlies a financial derivative is typically assumed to follow a stochastic differential equation of the form
formula_2,
under the risk neutral measure, where formula_3 is the instantaneous risk free rate, giving an average local direction to the dynamics, and formula_4 is a Wiener process, representing the inflow of randomness into the dynamics. The amplitude of this randomness is measured by the instant volatility formula_5. In the simplest model i.e. the Black–Scholes model, formula_5 is assumed to be constant, or at most a deterministic function of time; in reality, the realised volatility of an underlying actually varies with time and with the underlying itself.
When such volatility has a randomness of its own—often described by a different equation driven by a different "W"—the model above is called a stochastic volatility model. And when such volatility is merely a function of the current underlying asset level "S""t" and of time "t", we have a local volatility model. The local volatility model is a useful simplification of the stochastic volatility model.
"Local volatility" is thus a term used in quantitative finance to denote the set of diffusion coefficients, formula_6, that are consistent with market prices for all options on a given underlying, yielding an asset price model of the type
formula_7
This model is used to calculate exotic option valuations which are consistent with observed prices of vanilla options.
Development.
The concept of a local volatility fully consistent with option markets was developed when Bruno Dupire and Emanuel Derman and Iraj Kani noted that there is a unique diffusion process consistent with the risk neutral densities derived from the market prices of European options.
Derman and Kani described and implemented a local volatility function to model instantaneous volatility. They used this function at each node in a binomial options pricing model. The tree successfully produced option valuations consistent with all market prices across strikes and expirations. The Derman-Kani model was thus formulated with discrete time and stock-price steps. (Derman and Kani produced what is called an "implied binomial tree"; with Neil Chriss they extended this to an implied trinomial tree. The implied binomial tree fitting process was numerically unstable.)
The key continuous-time equations used in local volatility models were developed by Bruno Dupire in 1994. Dupire's equation states
formula_8
In order to compute the partial derivatives, there exist a few known parameterizations of the implied volatility surface based on the Heston model: Schönbucher, SVI and gSVI. Other techniques include mixture of lognormal distribution and stochastic collocation.
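Dupire's equation can also be inverted for the local volatility and evaluated by finite differences directly on a call-price surface. The sketch below (step sizes and parameters are arbitrary choices for illustration) recovers the flat volatility of a synthetic Black-Scholes surface as a sanity check:
from math import log, sqrt, exp, erf
def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))
def bs_call(S, K, T, r, d, sigma):
    # Black-Scholes call with continuous dividend yield d
    d1 = (log(S / K) + (r - d + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * exp(-d * T) * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
def dupire_local_vol(call, K, T, r, d, dK=0.5, dT=1e-3):
    # Partial derivatives of the call price surface by central finite differences
    C = call(K, T)
    C_T = (call(K, T + dT) - call(K, T - dT)) / (2.0 * dT)
    C_K = (call(K + dK, T) - call(K - dK, T)) / (2.0 * dK)
    C_KK = (call(K + dK, T) - 2.0 * C + call(K - dK, T)) / dK**2
    local_variance = (C_T + (r - d) * K * C_K + d * C) / (0.5 * K**2 * C_KK)
    return sqrt(local_variance)
# Sanity check: a surface generated with a flat 20% Black-Scholes volatility should give back 0.20
S0, r, d = 100.0, 0.02, 0.01
surface = lambda K, T: bs_call(S0, K, T, r, d, 0.20)
print(dupire_local_vol(surface, K=105.0, T=1.0, r=r, d=d))  # approximately 0.20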
Derivation.
Given the price of the asset formula_9 governed by the risk neutral SDE
formula_10
The transition probability formula_11 conditional to formula_12 satisfies the forward Kolmogorov equation (also known as Fokker–Planck equation)
formula_13
where, for brevity, the notation formula_14 denotes the partial derivative of the function f with respect to x and where the notation formula_15 denotes the second order partial derivative of the function f with respect to x. Thus, formula_16 is the partial derivative of the density formula_17 with respect to t and, for example, formula_18 is the second derivative of formula_19 with respect to S. p will denote formula_17, and inside the integral formula_20.
Because of the Martingale pricing theorem, the price of a call option with maturity formula_21 and strike formula_22 is
formula_23
Differentiating the price of a call option with respect to formula_22
formula_24
and replacing in the formula for the price of a call option and rearranging terms
formula_25
Differentiating the price of a call option with respect to formula_22 twice
formula_26
Differentiating the price of a call option with respect to formula_21 yields
formula_27
using the Forward Kolmogorov equation
formula_28
integrating by parts the first integral once and the second integral twice
formula_29
using the formulas derived differentiating the price of a call option with respect to formula_22
formula_30
Parametric local volatility models.
Dupire's approach is non-parametric. It requires pre-interpolating the data to obtain a continuum of traded prices, together with the choice of a type of interpolation. As an alternative, one can formulate parametric local volatility models. A few examples are presented below.
Bachelier model.
The Bachelier model has been inspired by Louis Bachelier's work in 1900. This model, at least for assets with zero drift, e.g. forward prices or forward interest rates under their forward measure, can be seen as a local volatility model
formula_31.
In the Bachelier model the diffusion coefficient is a constant formula_32, so we have formula_33, implying formula_34. As interest rates turned negative in many economies, the Bachelier model became of interest, as it can model negative forward rates F through its Gaussian distribution.
Displaced diffusion model.
This model was introduced by Mark Rubinstein.
For a stock price, it follows the dynamics
formula_35
where for simplicity we assume zero dividend yield.
The model can be obtained with a change of variable from a standard Black-Scholes model as follows. By setting formula_36 it is immediate to see that Y follows a standard Black-Scholes model
formula_37
As the SDE for formula_38 is a geometric Brownian motion, it has a lognormal distribution, and given that formula_39 the S model is also called a shifted lognormal model, the shift at time t being formula_40.
To price a call option with strike K on S one simply writes the payoff
formula_41
where H is the new strike formula_42. As Y follows a Black Scholes model, the price of the option becomes a Black Scholes price with modified strike and is easy to obtain. The model produces a monotonic volatility smile curve, whose pattern is decreasing for negative formula_43. Furthermore, for negative formula_43, from formula_44 it follows that the asset S is allowed to take negative values with positive probability. This is useful for example in interest rate modelling, where negative rates have been affecting several economies.
CEV model.
The constant elasticity of variance model (CEV) is a local volatility model where the stock dynamics is, under the risk neutral measure and assuming no dividends,
formula_45
for a constant interest rate r, a positive constant formula_46 and an exponent formula_47 so that in this case
formula_48
The model is at times classified as a stochastic volatility model, although according to the definition given here, it is a local volatility model, as there is no new randomness in the diffusion coefficient. This model and related references are shown in detail in the related page.
The lognormal mixture dynamics model.
This model has been developed from 1998 to 2021 in several versions by Damiano Brigo, Fabio Mercurio and co-authors. Carol Alexander studied the short and long term smile effects.
The starting point is the basic Black Scholes formula, coming from the risk neutral dynamics formula_49 with constant deterministic volatility formula_50 and with lognormal probability density function denoted by formula_51. In the Black Scholes model the price of a European non-path-dependent option is obtained by integration of the option payoff against this lognormal density at maturity.
The basic idea of the lognormal mixture dynamics model is to consider lognormal densities, as in the Black Scholes model, but for a number formula_52 of possible constant deterministic volatilities formula_53, where we call formula_54, the lognormal density of a Black Scholes model with volatility formula_55.
When modelling a stock price, Brigo and Mercurio build a local volatility model
formula_56
where formula_57 is defined in a way that makes the risk neutral distribution of formula_9 the required mixture of the lognormal densities formula_58, so that the density of the resulting stock price is
formula_59
where formula_60 and formula_61. The formula_62's are the weights of the different densities formula_58 included in the mixture.
The instantaneous volatility is defined as
formula_63 or more in detail
formula_64
for formula_65; formula_66 for formula_67 The original model has a regularization of the diffusion coefficient in a small initial time interval formula_68. With this adjustment, the SDE with formula_69 has a unique strong solution whose marginal density is the desired mixture formula_70
One can further write formula_71
where formula_72 and formula_73.
This shows that formula_74 is a "weighted average" of the formula_75's with weights
formula_76
An option price in this model is very simple to calculate. If formula_77 denotes the risk neutral expectation, by the martingale pricing theorem a call option price on S with strike K and maturity T is given by
formula_78
formula_79
formula_80
where formula_81 is the corresponding call price in a Black Scholes model with volatility formula_55.
The price of the option is given by a closed form formula and it is a linear convex combination of Black Scholes prices of call options with volatilities formula_53 weighted by formula_82. The same holds for put options and all other simple contingent claims. The same convex combination applies also to several option greeks like Delta, Gamma, Rho and Theta.
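A minimal sketch of this pricing formula in Python (the component volatilities and weights are illustrative): the mixture call price is simply the lambda-weighted sum of Black Scholes call prices computed at the component volatilities.
from math import log, sqrt, exp, erf
def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))
def bs_call(S0, K, T, r, sigma):
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d1 - sigma * sqrt(T))
def mixture_call(S0, K, T, r, sigmas, lambdas):
    # Call price under the lognormal mixture dynamics: sum_i lambda_i * BS(K, T, sigma_i)
    assert abs(sum(lambdas) - 1.0) < 1e-12 and all(l > 0 for l in lambdas)
    return sum(l * bs_call(S0, K, T, r, s) for l, s in zip(lambdas, sigmas))
# Two-component mixture (illustrative): mostly a 15% regime with a fatter-tailed 40% component
sigmas, lambdas = [0.15, 0.40], [0.7, 0.3]
for K in (80.0, 100.0, 120.0):
    print(K, mixture_call(100.0, K, 1.0, 0.02, sigmas, lambdas))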
The mixture dynamics is a flexible model, as one can select the number of components formula_52 according to the complexity of the smile. Optimizing the parameters formula_55 and formula_62, and a possible shift parameter, allows one to reproduce most market smiles. The model has been used successfully in the equity, FX, and interest-rate markets.
In the mixture dynamics model, one can show that the resulting volatility smile curve will have a minimum for K equal to the at-the-money-forward price formula_83. This can be avoided, and the smile allowed to be more general, by combining the mixture dynamics and displaced diffusion ideas, leading to the shifted lognormal mixture dynamics.
The model has also been applied with volatilities formula_55's in the mixture components that are time dependent, so as to calibrate the smile term structure. An extension of the model where the different mixture densities have different means has been studied, while preserving the final no arbitrage drift in the dynamics. A further extension has been the application to the multivariate case, where a multivariate model has been formulated that is consistent with a mixture of multivariate lognormal densities, possibly with shifts, and where the single assets are also distributed as mixtures, reconciling modelling of single assets smile with the smile on an index of these assets. A second application of the multivariate version has been triangulation of FX volatility smiles.
Finally, the model is linked to an uncertain volatility model where, roughly speaking, the volatility is a random variable taking the values formula_53 with probabilities formula_82.
Technically, it can be shown that the local volatility lognormal mixture dynamics is the Markovian projection of the uncertain volatility model.
Use.
Local volatility models are useful in any options market in which the underlying's volatility is predominantly a function of the level of the underlying, interest-rate derivatives for example. Time-invariant local volatilities are supposedly inconsistent with the dynamics of the equity index implied volatility surface, but see Crepey (2004), who claims that such models provide the best average hedge for equity index options, and note that models like the mixture dynamics allow for time dependent local volatilities, calibrating also the term structure of the smile. Local volatility models are also useful in the formulation of stochastic volatility models.
Local volatility models have a number of attractive features. Because the only source of randomness is the stock price, local volatility models are easy to calibrate. Numerous calibration methods have been developed to deal with the McKean–Vlasov processes, including the widely used particle and bin approaches. Also, they lead to complete markets where hedging can be based only on the underlying asset. As hinted above, the general non-parametric approach by Dupire is problematic, as one needs to arbitrarily pre-interpolate the input implied volatility surface before applying the method. Parametric approaches with a rich and sound parametrization, such as the tractable mixture dynamics local volatility models above, can be an alternative.
Since in local volatility models the volatility is a deterministic function of the random stock price, local volatility models are not well suited to pricing cliquet options or forward start options, whose values depend specifically on the random nature of volatility itself. In such cases, stochastic volatility models are preferred.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " S_t "
},
{
"math_id": 1,
"text": " t "
},
{
"math_id": 2,
"text": " dS_t = (r_t-d_t) S_t\\,dt + \\sigma_t S_t\\,dW_t "
},
{
"math_id": 3,
"text": "r_t"
},
{
"math_id": 4,
"text": "W_t"
},
{
"math_id": 5,
"text": "\\sigma_t"
},
{
"math_id": 6,
"text": "\\sigma_t = \\sigma(S_t,t)"
},
{
"math_id": 7,
"text": " dS_t = (r_t-d_t) S_t\\,dt + \\sigma(S_t,t) S_t\\,dW_t ."
},
{
"math_id": 8,
"text": "\n\\frac{\\partial C}{\\partial T} = \\frac{1}{2} \\sigma^2(K,T; S_0)K^2 \\frac{\\partial^2C}{\\partial K^2}-(r - d)K \\frac{\\partial C}{\\partial K} - dC\n"
},
{
"math_id": 9,
"text": "S_t"
},
{
"math_id": 10,
"text": "\n dS_t = (r-d)S_t dt + \\sigma(t,S_t)S_t dW_t\n"
},
{
"math_id": 11,
"text": "p(t,S_t)"
},
{
"math_id": 12,
"text": "S_0"
},
{
"math_id": 13,
"text": "\n p_t = -[(r-d)s\\,p]_s + \\frac{1}{2}[(\\sigma s)^2p]_{ss}\n"
},
{
"math_id": 14,
"text": "f_{x}"
},
{
"math_id": 15,
"text": "f_{xx}"
},
{
"math_id": 16,
"text": "p_t"
},
{
"math_id": 17,
"text": "p(t,S)"
},
{
"math_id": 18,
"text": "[(\\sigma s)^2p]_{ss}"
},
{
"math_id": 19,
"text": "(\\sigma(t,S)S)^2 p(t,S)"
},
{
"math_id": 20,
"text": "p(t,s)"
},
{
"math_id": 21,
"text": "T"
},
{
"math_id": 22,
"text": "K"
},
{
"math_id": 23,
"text": "\\begin{align}\n C &= e^{-rT} \\mathbb{E}^Q[(S_T-K)^+] \\\\\n &= e^{-rT} \\int_K^{\\infty} (s-K)\\, p\\, ds \\\\\n &= e^{-rT} \\int_K^{\\infty} s \\,p \\,ds - K\\,e^{-rT} \\int_K^{\\infty} p\\, ds\n\\end{align}"
},
{
"math_id": 24,
"text": "\n C_K = -e^{-rT} \\int_K^{\\infty} p \\; ds \n"
},
{
"math_id": 25,
"text": "\n e^{-rT} \\int_K^{\\infty} s\\, p\\, ds = C - K\\,C_K \n"
},
{
"math_id": 26,
"text": "\n C_{KK} = e^{-rT} p \n"
},
{
"math_id": 27,
"text": "\n C_T = -r\\,C + e^{-rT} \\int_K^{\\infty} (s-K) p_T ds\n"
},
{
"math_id": 28,
"text": "\n C_T = -r\\,C -e^{-rT} \\int_K^{\\infty} (s-K) [(r-d)s\\,p]_s \\,ds + \\frac{1}{2}e^{-rT}\\int_K^{\\infty} (s-K) [(\\sigma s)^2\\,p]_{ss}\\, ds\n"
},
{
"math_id": 29,
"text": "\n C_T = -r\\,C + (r-d) e^{-rT} \\int_K^{\\infty} s\\,p\\, ds + \\frac{1}{2} e^{-rT} (\\sigma K)^2\\,p\n"
},
{
"math_id": 30,
"text": "\\begin{align}\n C_T &= -r\\,C + (r-d) (C - K\\,C_K) + \\frac{1}{2} \\sigma^2 K^2 C_{KK} \\\\\n &= - (r-d) K\\,C_K -d\\,C + \\frac{1}{2} \\sigma^2 K^2 C_{KK}\n\\end{align}"
},
{
"math_id": 31,
"text": " dF_t = v \\,dW_t "
},
{
"math_id": 32,
"text": "v"
},
{
"math_id": 33,
"text": "\\sigma(F_t,t)F_t = v"
},
{
"math_id": 34,
"text": "\\sigma(F_t,t) = v/F_t"
},
{
"math_id": 35,
"text": " dS_t = r S_t\\,dt + \\sigma (S_t-\\beta e^{r t})\\,dW_t "
},
{
"math_id": 36,
"text": " Y_t = S_t - \\beta e^{r t}"
},
{
"math_id": 37,
"text": " dY_t = r Y_t\\,dt + \\sigma Y_t \\,dW_t ."
},
{
"math_id": 38,
"text": "Y"
},
{
"math_id": 39,
"text": " S_t = Y_t+\\beta e^{r t}"
},
{
"math_id": 40,
"text": "\\beta e^{r t}"
},
{
"math_id": 41,
"text": "(S_T-K)^+ = (Y_T +\\beta e^{r T} - K)^+ = (Y_T-H)^+"
},
{
"math_id": 42,
"text": "H=K-\\beta e^{r T}"
},
{
"math_id": 43,
"text": "\\beta"
},
{
"math_id": 44,
"text": " S_t = Y_t + \\beta e^{r t}"
},
{
"math_id": 45,
"text": "\\mathrm{d}S_t = r S_t \\mathrm{d}t + \\sigma S_t ^ \\gamma \\mathrm{d}W_t,"
},
{
"math_id": 46,
"text": "\\sigma >0"
},
{
"math_id": 47,
"text": "\\gamma \\geq 0,"
},
{
"math_id": 48,
"text": "\\sigma(S_t, t)=\\sigma S_t^{\\gamma-1}."
},
{
"math_id": 49,
"text": "dS_t = r S_t dt + \\sigma S_t dW_t,"
},
{
"math_id": 50,
"text": "\\sigma"
},
{
"math_id": 51,
"text": "p^{lognormal}_{t,\\sigma}"
},
{
"math_id": 52,
"text": "N"
},
{
"math_id": 53,
"text": "\\sigma_1,\\ldots,\\sigma_N"
},
{
"math_id": 54,
"text": "p_{i,t} = p^{lognormal}_{t,\\sigma_i}"
},
{
"math_id": 55,
"text": "\\sigma_i"
},
{
"math_id": 56,
"text": "d S_t = r S_t dt + \\sigma_{mix}(t,S_t) S_t \\ dW_t, "
},
{
"math_id": 57,
"text": "\\sigma_{mix}(t,S_t)"
},
{
"math_id": 58,
"text": "p_{i,t}"
},
{
"math_id": 59,
"text": "p_{S_t}(y) =: p_t(y) =\\sum_{i=1}^N \\lambda_i p_{i,t}(y) = \\sum_{i=1}^N \\lambda_i p^{lognormal}_{t,\\sigma_i}(y)"
},
{
"math_id": 60,
"text": "\\lambda_i \\in (0,1)"
},
{
"math_id": 61,
"text": "\\sum_{i=1}^N \\lambda_i =1"
},
{
"math_id": 62,
"text": "\\lambda_i"
},
{
"math_id": 63,
"text": "\\sigma_{mix}(t,y)^2 = \\frac{1}{\\sum_{j} \\lambda_j p_{j,t}(y)}\\sum_{i} \\lambda_i \\sigma_i^2 p_{i,t}(y),"
},
{
"math_id": 64,
"text": "\n\\sigma_{mix}(t,y)^2 = \\frac{\\sum_{i=1}^N \\lambda_i \\sigma_i^2 \\ \\frac{1}{\\sigma_i \\sqrt{t}}\\exp\n\\left\\{-\\frac{1} {2 \\sigma_i^2 t} \\left[ \\ln\\frac{y}{S_0} -r t\n+\\tfrac{1}{2} \\sigma_i^2 t \\right]^2 \\right\\}}{\\sum_{j=1}^N \\lambda_j\n\\frac{1}{\\sigma_j \\sqrt{t}}\\exp \\left\\{-\\frac{1}{2 \\sigma_j^2 t} \\left[\n\\ln\\frac{y}{S_0} -r t +\\tfrac{1}{2}\\sigma_j^2 t \\right]^2 \\right\\}}\n"
},
{
"math_id": 65,
"text": "(t,y)>(0,0)"
},
{
"math_id": 66,
"text": "\\sigma_{mix}(t,y)=\\sigma_0"
},
{
"math_id": 67,
"text": "(t,y)=(0,s_0)."
},
{
"math_id": 68,
"text": "[0,\\epsilon]"
},
{
"math_id": 69,
"text": "\\sigma_{mix}"
},
{
"math_id": 70,
"text": "p_{S_t} = \\sum_i \\lambda_i p_{i,t}."
},
{
"math_id": 71,
"text": "\\sigma_{mix}^2(t,y) = \\sum_{i=1}^N \\Lambda_i(t,y) \\sigma_i^2,"
},
{
"math_id": 72,
"text": "\\Lambda_i(t,y)\\in (0,1)"
},
{
"math_id": 73,
"text": "\\sum_{i=1}^N \\Lambda_i(t,y)=1"
},
{
"math_id": 74,
"text": "\\sigma_{mix}^2(t,y)"
},
{
"math_id": 75,
"text": "\\sigma_i^2"
},
{
"math_id": 76,
"text": " \\Lambda_i(t,y) = \\frac{\\lambda_i \\ p_{i,t}(y)}{\\sum_j \\lambda_j \\ p_{j,t}(y)}."
},
{
"math_id": 77,
"text": "\\mathbb{E}^Q"
},
{
"math_id": 78,
"text": "V^{Call}_{mix}(K,T)= e^{-r T}\\mathbb{E}^Q\\left\\{(S_T-K)^+ \\right\\}"
},
{
"math_id": 79,
"text": "= e^{-r T}\\int_0^{+\\infty}(y-K)^+ p_{S_T}(y) dy = e^{-r T}\\int_0^{+\\infty}(y-K)^+\\sum_{i=1}^N\\lambda_i p_{i,T}(y)dy"
},
{
"math_id": 80,
"text": "=\\sum_{i=1}^N \\lambda_i e^{-r T} \\int(y-K)^+ p_{i,T}(y)dy=\\sum_{{i=1}^N}\n{\\lambda_i} V^{Call}_{BS}(K,T,{\\sigma_i})\n"
},
{
"math_id": 81,
"text": "V^{Call}_{BS}(K,T,{\\sigma_i})"
},
{
"math_id": 82,
"text": "\\lambda_1,\\ldots,\\lambda_N"
},
{
"math_id": 83,
"text": "S_0 e^{r T}"
}
] | https://en.wikipedia.org/wiki?curid=11548901 |
11548952 | Censoring (statistics) | Condition in which the value of a measurement or observation is only partially known
In statistics, censoring is a condition in which the value of a measurement or observation is only partially known.
For example, suppose a study is conducted to measure the impact of a drug on mortality rate. In such a study, it may be known that an individual's age at death is "at least" 75 years (but may be more). Such a situation could occur if the individual withdrew from the study at age 75, or if the individual is currently alive at the age of 75.
Censoring also occurs when a value occurs outside the range of a measuring instrument. For example, a bathroom scale might only measure up to 140 kg. If a 160 kg individual is weighed using the scale, the observer would only know that the individual's weight is at least 140 kg.
The problem of censored data, in which the observed value of some variable is partially known, is related to the problem of missing data, where the observed value of some variable is unknown.
Censoring should not be confused with the related idea of truncation. With censoring, observations result either in knowing the exact value that applies, or in knowing that the value lies within an interval. With truncation, observations never result in values outside a given range: values in the population outside the range are never seen or never recorded if they are seen. Note that in statistics, truncation is not the same as rounding.
Types.
Interval censoring can occur when observing a value requires follow-ups or inspections. Left and right censoring are special cases of interval censoring, with the beginning of the interval at zero or the end at infinity, respectively.
Estimation methods for using left-censored data vary, and not all methods of estimation may be applicable to, or the most reliable, for all data sets.
A common misconception with time interval data is to class as "left censored" intervals where the start time is unknown. In these cases we have a lower bound on the time "interval", thus the data is "right censored" (despite the fact that the missing start point is to the left of the known interval when viewed as a timeline!).
Analysis.
Special techniques may be used to handle censored data. Tests with specific failure times are coded as actual failures; censored data are coded for the type of censoring and the known interval or limit. Special software programs (often reliability oriented) can conduct a maximum likelihood estimation for summary statistics, confidence intervals, etc.
Epidemiology.
One of the earliest attempts to analyse a statistical problem involving censored data was Daniel Bernoulli's 1766 analysis of smallpox morbidity and mortality data to demonstrate the efficacy of vaccination. An early paper to use the Kaplan–Meier estimator for estimating censored costs was Quesenberry et al. (1989); however, this approach was found by Lin et al. to be invalid unless all patients accumulated costs with a common deterministic rate function over time, and they proposed an alternative estimation technique known as the Lin estimator.
Operating life testing.
Reliability testing often consists of conducting a test on an item (under specified conditions) to determine the time it takes for a failure to occur.
An analysis of the data from replicate tests includes both the times-to-failure for the items that failed and the time-of-test-termination for those that did not fail.
Censored regression.
An earlier model for censored regression, the tobit model, was proposed by James Tobin in 1958.
Likelihood.
The likelihood is the probability or probability density of what was observed, viewed as a function of parameters in an assumed model. To incorporate censored data points in the likelihood the censored data points are represented by the probability of the censored data points as a function of the model parameters given a model, i.e. a function of CDF(s) instead of the density or probability mass.
The most general censoring case is interval censoring: formula_0, where formula_1 is the CDF of the probability distribution, and the two special cases are:
left censoring: formula_2
right censoring: formula_3
For continuous probability distributions: formula_4
Example.
Suppose we are interested in survival times, formula_5, but we don't observe formula_6 for all formula_7. Instead, we observe
formula_8, with formula_9 and formula_10 if formula_6 is actually observed, and
formula_8, with formula_11 and formula_12 if all we know is that formula_6 is longer than formula_13.
When formula_14 is called the "censoring time".
If the censoring times are all known constants, then the likelihood is
formula_15
where formula_16 = the probability density function evaluated at formula_17,
and formula_18 = the probability that formula_6 is greater than formula_17, called the "survival function".
This can be simplified by defining the hazard function, the instantaneous force of mortality, as
formula_19
so
formula_20.
Then
formula_21.
For the exponential distribution, this becomes even simpler, because the hazard rate, formula_22, is constant, and formula_23. Then:
formula_24,
where formula_25.
From this we easily compute formula_26, the maximum likelihood estimate (MLE) of formula_22, as follows:
formula_27.
Then
formula_28.
We set this to 0 and solve for formula_22 to get:
formula_29.
Equivalently, the mean time to failure is:
formula_30.
This differs from the standard MLE for the exponential distribution in that any censored observations are considered only in the numerator.
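A small numerical sketch of this estimator in Python (the data are invented for illustration); note that the censored times enter the sum of observed times but not the event count k:
def exponential_mle(times, observed):
    # times    : observed times u_i (event time if observed, censoring time otherwise)
    # observed : delta_i flags, True when the event time was actually observed
    k = sum(observed)          # number of uncensored observations
    total_time = sum(times)    # censored times still contribute to the total time on test
    lam = k / total_time       # lambda-hat = k / sum(u_i)
    return lam, 1.0 / lam      # estimated rate and mean time to failure
# Five subjects: three observed failures, two right-censored at withdrawal (invented data)
times    = [2.0, 5.0, 8.0, 10.0, 10.0]
observed = [True, True, True, False, False]
print(exponential_mle(times, observed))  # rate = 3/35, about 0.086; mean about 11.7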
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Pr( a< x\\leqslant b) =F( b) -F( a)"
},
{
"math_id": 1,
"text": "F( x)"
},
{
"math_id": 2,
"text": "Pr( -\\infty < x\\leqslant b) =F( b) -F(-\\infty)=F( b)-0=F(b) =Pr( x\\leqslant b)"
},
{
"math_id": 3,
"text": "Pr( a< x\\leqslant \\infty ) =F( \\infty ) -F( a) =1-F( a) =1-Pr( x\\leqslant a) =Pr( x >a)"
},
{
"math_id": 4,
"text": "Pr( a< x\\leqslant b) =Pr( a< x< b)"
},
{
"math_id": 5,
"text": "T_1, T_2, ..., T_n"
},
{
"math_id": 6,
"text": "T_i"
},
{
"math_id": 7,
"text": "i"
},
{
"math_id": 8,
"text": "(U_i, \\delta_i)"
},
{
"math_id": 9,
"text": "U_i = T_i"
},
{
"math_id": 10,
"text": "\\delta_i = 1"
},
{
"math_id": 11,
"text": "U_i < T_i"
},
{
"math_id": 12,
"text": "\\delta_i = 0"
},
{
"math_id": 13,
"text": "U_i"
},
{
"math_id": 14,
"text": "T_i > U_i, U_i"
},
{
"math_id": 15,
"text": "L = \\prod_{i, \\delta_i = 1} f(u_i) \\prod_{i, \\delta_i=0} S(u_i)"
},
{
"math_id": 16,
"text": "f(u_i)"
},
{
"math_id": 17,
"text": "u_i"
},
{
"math_id": 18,
"text": "S(u_i)"
},
{
"math_id": 19,
"text": "\\lambda(u) = f(u)/S(u)"
},
{
"math_id": 20,
"text": "f(u) = \\lambda(u)S(u)"
},
{
"math_id": 21,
"text": "L = \\prod_i \\lambda(u_i)^{\\delta_i} S(u_i)"
},
{
"math_id": 22,
"text": "\\lambda"
},
{
"math_id": 23,
"text": "S(u) = \\exp(-\\lambda u)"
},
{
"math_id": 24,
"text": "L(\\lambda) = \\lambda^k \\exp (-\\lambda \\sum{u_i})"
},
{
"math_id": 25,
"text": "k = \\sum{\\delta_i}"
},
{
"math_id": 26,
"text": "\\hat{\\lambda}"
},
{
"math_id": 27,
"text": "l(\\lambda) = \\log(L(\\lambda)) = k \\log(\\lambda) - \\lambda \\sum{u_i}"
},
{
"math_id": 28,
"text": "dl / d\\lambda = k/\\lambda - \\sum{u_i}"
},
{
"math_id": 29,
"text": "\\hat \\lambda = k / \\sum u_i"
},
{
"math_id": 30,
"text": "1 / \\hat\\lambda = \\sum u_i / k"
}
] | https://en.wikipedia.org/wiki?curid=11548952 |
11549406 | Quality control and genetic algorithms | The combination of quality control and genetic algorithms led to novel solutions of complex quality control design and optimization problems. Quality is the degree to which a set of inherent characteristics of an entity fulfils a need or expectation that is stated, generally implied or obligatory. ISO 9000 defines quality control as "A part of quality management focused on fulfilling quality requirements". Genetic algorithms are search algorithms, based on the mechanics of natural selection and natural genetics.
Quality control.
Alternative quality control (QC) procedures can be applied to a process to test statistically the null hypothesis, that the process conforms to the quality specifications and consequently is in control, against the alternative, that the process is out of control. When a true null hypothesis is rejected, a statistical type I error is committed. We have then a false rejection of a run of the process. The probability of a type I error is called probability of false rejection. When a false null hypothesis is accepted, a statistical type II error is committed. We fail then to detect a significant change in the probability density function of a quality characteristic of the process. The probability of rejection of a false null hypothesis equals the probability of detection of the nonconformity of the process to the quality specifications.
The QC procedure to be designed or optimized can be formulated as:
formula_0 (1)
where formula_1denotes a statistical decision rule, "ni" denotes the size of the sample S"i", that is the number of the samples the rule is applied upon, and formula_2denotes the vector of the rule specific parameters, including the decision limits. Each symbol "#" denotes either the Boolean operator AND or the operator OR. Obviously, for "#" denoting AND, and for "n"1 < "n"2 <...< "n""q", that is for S1 ⊂ S2 ⊂ ... ⊂ S"q", the (1) denotes a "q"-sampling QC procedure.
Each statistical decision rule is evaluated by calculating the respective statistic of the measured quality characteristic of the sample. Then, if the statistic is out of the interval between the decision limits, the decision rule is considered to be true. Many statistics can be used, including the following: a single value of the variable of a sample, the range, the mean, and the standard deviation of the values of the variable of the samples, the cumulative sum, the smoothed mean, and the smoothed standard deviation. Finally, the QC procedure is evaluated as a Boolean proposition. If it is true, then the null hypothesis is considered to be false, the process is considered to be out of control, and the run is rejected.
A quality control procedure is considered to be optimum when it minimizes (or maximizes) a context specific objective function. The objective function depends on the probabilities of detection of the nonconformity of the process and of false rejection. These probabilities depend on the parameters of the quality control procedure (1) and on the probability density functions (see probability density function) of the monitored variables of the process.
Genetic algorithms.
Genetic algorithms are robust search algorithms, that do not require knowledge of the objective function to be optimized and search through large spaces quickly. Genetic algorithms have been derived from the processes of the molecular biology of the gene and the evolution of life. Their operators, cross-over, mutation, and reproduction, are isomorphic with the synonymous biological processes. Genetic algorithms have been used to solve a variety of complex optimization problems. Additionally the classifier systems and the genetic programming paradigm have shown us that genetic algorithms can be used for tasks as complex as the program induction.
Quality control and genetic algorithms.
In general, we can not use algebraic methods to optimize the quality control procedures. Usage of enumerative methods would be very tedious, especially with multi-rule procedures, as the number of the points of the parameter space to be searched grows exponentially with the number of the parameters to be optimized. Optimization methods based on genetic algorithms offer an appealing alternative.
Furthermore, the complexity of the design process of novel quality control procedures is obviously greater than the complexity of the optimization of predefined ones.
In fact, since 1993, genetic algorithms have been used successfully to optimize and to design novel quality control procedures.
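As a toy illustration of the idea (not the procedures or objective functions used in the work cited above; every modelling choice below is an assumption made for the example), a real-coded genetic algorithm can select the decision limit of a single rule on the sample mean so as to minimize the probability of false rejection while keeping the probability of detecting a two-standard-deviation shift above a target:
import random
from math import erf, sqrt
N = 5            # sample size for the sample-mean rule
SHIFT = 2.0      # process shift, in standard deviation units, that should be detected
def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))
def false_rejection(c):
    # P(|sample mean| > c) when the process is in control (standard normal observations)
    return 2.0 * (1.0 - norm_cdf(c * sqrt(N)))
def detection(c):
    # P(|sample mean| > c) when the process mean has shifted by SHIFT
    z = sqrt(N)
    return (1.0 - norm_cdf((c - SHIFT) * z)) + norm_cdf((-c - SHIFT) * z)
def fitness(c):
    # minimize false rejections, heavily penalizing detection probability below 90%
    penalty = max(0.0, 0.90 - detection(c))
    return -(false_rejection(c) + 10.0 * penalty)
def genetic_algorithm(pop_size=30, generations=60):
    random.seed(0)
    population = [random.uniform(0.0, 3.0) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = 0.5 * (a + b) + random.gauss(0.0, 0.05)  # arithmetic crossover plus Gaussian mutation
            children.append(min(max(child, 0.0), 3.0))
        population = parents + children
    return max(population, key=fitness)
c_best = genetic_algorithm()
print(c_best, false_rejection(c_best), detection(c_best))  # c_best near 1.43 for these settings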
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Q_1 ( n_1,\\mathbf{ X_1} ) \\# Q_2 ( n_2,\\mathbf{ X_2} ) \\# ... \\# Q_q (n_q,\\mathbf{ X_q} )\\;"
},
{
"math_id": 1,
"text": "Q_i ( n_i,\\mathbf{ X_i} )\\;"
},
{
"math_id": 2,
"text": "\\mathbf{ X_i}\\;"
}
] | https://en.wikipedia.org/wiki?curid=11549406 |
1155117 | Darboux vector | Angular velocity vector of the Frenet frame of a space curve
In differential geometry, especially the theory of space curves, the Darboux vector is the angular velocity vector of the Frenet frame of a space curve. It is named after Gaston Darboux who discovered it. It is also called angular momentum vector, because it is directly proportional to angular momentum.
In terms of the Frenet-Serret apparatus, the Darboux vector ω can be expressed as
formula_0
and it has the following symmetrical properties:
formula_1
formula_2
formula_3
which can be derived from Equation (1) by means of the Frenet-Serret theorem (or vice versa).
Let a rigid object move along a regular curve described parametrically by β("t"). This object has its own intrinsic coordinate system. As the object moves along the curve, let its intrinsic coordinate system keep itself aligned with the curve's Frenet frame. As it does so, the object's motion will be described by two vectors: a translation vector, and a rotation vector ω, which is an areal velocity vector: the Darboux vector.
Note that this rotation is kinematic, rather than physical, because usually when a rigid object moves freely in space its rotation is independent of its translation. The exception would be if the object's rotation is physically constrained to align itself with the object's translation, as is the case with the cart of a roller coaster.
Consider the rigid object moving smoothly along the regular curve. Once the translation is "factored out", the object is seen to rotate the same way as its Frenet frame. The total rotation of the Frenet frame is the combination of the rotations of each of the three Frenet vectors:
formula_4
Each Frenet vector moves about an "origin" which is the centre of the rigid object (pick some point within the object and call it its centre). The areal velocity of the tangent vector is:
formula_5
formula_6
Likewise,
formula_7
formula_8
Now apply the Frenet-Serret theorem to find the areal velocity components:
formula_9
formula_10
formula_11
so that
formula_12
as claimed.
The Darboux vector provides a concise way of interpreting curvature "κ" and torsion "τ" geometrically: curvature is the measure of the rotation of the Frenet frame about the binormal unit vector, whereas torsion is the measure of the rotation of the Frenet frame about the tangent unit vector.
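A quick numerical check in Python of equation (1) and the property ω × T = T′ on a circular helix, whose Frenet frame, curvature and torsion are known in closed form (the radius, pitch and evaluation point below are arbitrary choices):
from math import sin, cos, sqrt
a, b = 2.0, 1.0                     # helix r(t) = (a cos t, a sin t, b t)
c = sqrt(a * a + b * b)             # speed |r'(t)|, so ds = c dt
kappa, tau = a / c**2, b / c**2     # curvature and torsion of the helix
def T(t):   # unit tangent
    return (-a * sin(t) / c, a * cos(t) / c, b / c)
def B(t):   # unit binormal
    return (b * sin(t) / c, -b * cos(t) / c, a / c)
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])
t0, h = 0.7, 1e-6
omega = tuple(tau * ti + kappa * bi for ti, bi in zip(T(t0), B(t0)))   # Darboux vector
dT_ds = tuple((p - m) / (2 * h * c) for p, m in zip(T(t0 + h), T(t0 - h)))  # dT/ds by central difference
print(cross(omega, T(t0)))   # these two vectors agree: omega x T = T'
print(dT_ds)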
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\boldsymbol{\\omega} = \\tau \\mathbf{T} + \\kappa \\mathbf{B} \\qquad \\qquad (1)"
},
{
"math_id": 1,
"text": " \\boldsymbol{\\omega} \\times \\mathbf{T} = \\mathbf{T'}, "
},
{
"math_id": 2,
"text": " \\boldsymbol{\\omega} \\times \\mathbf{N} = \\mathbf{N'}, "
},
{
"math_id": 3,
"text": " \\boldsymbol{\\omega} \\times \\mathbf{B} = \\mathbf{B'}, "
},
{
"math_id": 4,
"text": " \\boldsymbol{\\omega} = \\boldsymbol{\\omega}_\\mathbf{T} + \\boldsymbol{\\omega}_\\mathbf{N} + \\boldsymbol{\\omega}_\\mathbf{B}. "
},
{
"math_id": 5,
"text": " \\boldsymbol{\\omega}_\\mathbf{T} = \\lim_{\\Delta t \\rightarrow 0} {\\mathbf{T}(t) \\times \\mathbf{T}(t + \\Delta t) \\over 2 \\, \\Delta t} "
},
{
"math_id": 6,
"text": " = {\\mathbf{T}(t) \\times \\mathbf{T'}(t) \\over 2}. "
},
{
"math_id": 7,
"text": " \\boldsymbol{\\omega}_\\mathbf{N} = {1 \\over 2} \\ \\mathbf{N}(t) \\times \\mathbf{N'}(t), "
},
{
"math_id": 8,
"text": " \\boldsymbol{\\omega}_\\mathbf{B} = {1 \\over 2} \\ \\mathbf{B}(t) \\times \\mathbf{B'}(t). "
},
{
"math_id": 9,
"text": " \\boldsymbol{\\omega}_\\mathbf{T} = {1\\over 2} \\mathbf{T} \\times \\mathbf{T'} = {1\\over 2}\\kappa \\mathbf{T} \\times \\mathbf{N} = {1\\over 2}\\kappa \\mathbf{B} "
},
{
"math_id": 10,
"text": " \\boldsymbol{\\omega}_\\mathbf{N} = {1\\over 2}\\mathbf{N} \\times \\mathbf{N'} = {1\\over 2}(-\\kappa \\mathbf{N} \\times \\mathbf{T} + \\tau \\mathbf{N} \\times \\mathbf{B}) = {1\\over 2}(\\kappa \\mathbf{B} + \\tau \\mathbf{T}) "
},
{
"math_id": 11,
"text": " \\boldsymbol{\\omega}_\\mathbf{B} = {1\\over 2}\\mathbf{B} \\times \\mathbf{B'} = -{1\\over 2}\\tau \\mathbf{B} \\times \\mathbf{N} = {1\\over 2}\\tau \\mathbf{T} "
},
{
"math_id": 12,
"text": " \\boldsymbol{\\omega} = {1\\over 2}\\kappa \\mathbf{B} + {1\\over 2}(\\kappa \\mathbf{B} + \\tau \\mathbf{T}) + {1\\over 2}\\tau \\mathbf{T} = \\kappa \\mathbf{B} + \\tau \\mathbf{T}, "
}
] | https://en.wikipedia.org/wiki?curid=1155117 |
11551222 | Heat pump and refrigeration cycle | Mathematical models of heat pumps and refrigeration
Thermodynamic heat pump cycles or refrigeration cycles are the conceptual and mathematical models for heat pump, air conditioning and refrigeration systems. A heat pump is a mechanical system that transmits heat from one location (the "source") at a certain temperature to another location (the "sink" or "heat sink") at a higher temperature. Thus a heat pump may be thought of as a "heater" if the objective is to warm the heat sink (as when warming the inside of a home on a cold day), or a "refrigerator" or “cooler” if the objective is to cool the heat source (as in the normal operation of a freezer). The operating principles in both cases are the same; energy is used to move heat from a colder place to a warmer place.
Thermodynamic cycles.
According to the second law of thermodynamics, heat cannot spontaneously flow from a colder location to a hotter area; work is required to achieve this. An air conditioner requires work to cool a living space, moving heat from the interior being cooled (the heat source) to the outdoors (the heat sink). Similarly, a refrigerator moves heat from inside the cold icebox (the heat source) to the warmer room-temperature air of the kitchen (the heat sink). The operating principle of an ideal heat engine was described mathematically using the Carnot cycle by Sadi Carnot in 1824. An ideal refrigerator or heat pump can be thought of as an ideal heat engine that is operating in a reverse Carnot cycle.
Heat pump cycles and refrigeration cycles can be classified as "vapor compression", "vapor absorption", "gas cycle", or "Stirling cycle" types.
Vapor-compression cycle.
The vapor-compression cycle is used by many refrigeration, air conditioning, and other cooling applications and also within heat pump for heating applications. There are two heat exchangers, one being the condenser, which is hotter and releases heat, and the other being the evaporator, which is colder and accepts heat. For applications which need to operate in both heating and cooling modes, a reversing valve is used to switch the roles of these two heat exchangers.
At the start of the thermodynamic cycle the refrigerant enters the compressor as a low pressure and low temperature vapor. In heat pumps, this refrigerant is typically R32 refrigerant or R290 refrigerant. Then the pressure is increased and the refrigerant leaves as a higher temperature and higher pressure superheated gas. This hot pressurised gas then passes through the condenser where it releases heat to the surroundings as it cools and condenses completely. The cooler high-pressure liquid next passes through the expansion valve (throttle valve) which reduces the pressure abruptly causing the temperature to drop dramatically. The cold low pressure mixture of liquid and vapor next travels through the evaporator where it vaporizes completely as it accepts heat from the surroundings before returning to the compressor as a low pressure low temperature gas to start the cycle again.
Some simpler applications with fixed operating temperatures, such as a domestic refrigerator, may use a fixed speed compressor and fixed aperture expansion valve. Applications that need to operate at a high coefficient of performance in very varied conditions, as is the case with heat pumps where external temperatures and internal heat demand vary considerably through the seasons, typically use a variable speed inverter compressor and an adjustable expansion valve to control the pressures of the cycle more accurately.
The above discussion is based on the ideal vapor-compression refrigeration cycle and does not take into account real-world effects like frictional pressure drop in the system, slight thermodynamic irreversibility during the compression of the refrigerant vapor, or non-ideal gas behavior (if any).
Vapor absorption cycle.
In the early years of the twentieth century, the vapor absorption cycle using water-ammonia systems was popular and widely used but, after the development of the vapor compression cycle, it lost much of its importance because of its low coefficient of performance (about one fifth of that of the vapor compression cycle). Nowadays, the vapor absorption cycle is used only where heat is more readily available than electricity, such as industrial waste heat, solar thermal energy by solar collectors, or off-the-grid refrigeration in recreational vehicles.
The absorption cycle is similar to the compression cycle, but depends on the partial pressure of the refrigerant vapor. In the absorption system, the compressor is replaced by an absorber and a generator. The absorber dissolves the refrigerant in a suitable liquid (dilute solution) and therefore the dilute solution becomes a strong solution. In the generator, on heat addition, the temperature increases and, with it, the partial pressure of the refrigerant vapor, so that the refrigerant vapor is released from the strong solution. However, the generator requires a heat source, which would consume energy unless waste heat is used. In an absorption refrigerator, a suitable combination of refrigerant and absorbent is used. The most common combinations are ammonia (refrigerant) and water (absorbent), and water (refrigerant) and lithium bromide (absorbent).
Absorption refrigeration systems can be powered by combustion of fossil fuels (e.g., coal, oil, natural gas, etc.) or renewable energy (e.g., waste-heat recovery, biomass combustion, or solar energy).
Gas cycle.
When the working fluid is a gas that is compressed and expanded but does not change phase, the refrigeration cycle is called a "gas cycle". Air is most often this working fluid. As there is no condensation and evaporation intended in a gas cycle, components corresponding to the condenser and evaporator in a vapor compression cycle are the hot and cold gas-to-gas heat exchangers.
For given extreme temperatures, a gas cycle may be less efficient than a vapor compression cycle because the gas cycle works on the reverse Brayton cycle instead of the reverse Rankine cycle. As such, the working fluid never receives or rejects heat at constant temperature. In the gas cycle, the refrigeration effect is equal to the product of the specific heat of the gas and the rise in temperature of the gas in the low temperature side. Therefore, for the same cooling load, gas refrigeration cycle machines require a larger mass flow rate, which in turn increases their size.
Because of their lower efficiency and larger bulk, "air cycle" coolers are not often applied in terrestrial refrigeration. The air cycle machine is very common, however, on gas turbine-powered jet airliners since compressed air is readily available from the engines' compressor sections. These jet aircraft's cooling and ventilation units also serve the purpose of heating and pressurizing the aircraft cabin.
Stirling engine.
The Stirling cycle heat engine can be driven in reverse, using a mechanical energy input to drive heat transfer in a reversed direction (i.e. a heat pump, or refrigerator). There are several design configurations for such devices that can be built. Several such setups require rotary or sliding seals, which can introduce difficult tradeoffs between frictional losses and refrigerant leakage.
Reversed Carnot cycle.
The Carnot cycle, which has a quantum equivalent, is reversible so the four processes that comprise it, two isothermal and two isentropic, can also be reversed. When a Carnot cycle runs in reverse, it is called a "reverse Carnot cycle". A refrigerator or heat pump that acts according to the reversed Carnot cycle is called a Carnot refrigerator or Carnot heat pump, respectively. In the first stage of this cycle, the refrigerant absorbs heat isothermally from a low-temperature source, TL, in the amount QL. Next, the refrigerant is compressed isentropically (adiabatically, without heat transfer) and its temperature rises to that of the high-temperature source, TH. Then at this high temperature, the refrigerant isothermally rejects heat in the amount QH < 0 (negative according to the sign convention for heat lost by the system). Also during this stage, the refrigerant changes from a saturated vapor to a saturated liquid in the condenser. Lastly, the refrigerant expands isentropically until its temperature falls to that of the low-temperature source, TL.
Absorption-compression heat pump.
An absorption-compression heat pump (ACHP) is a device that integrates an electric compressor in an absorption heat pump. In some cases this is obtained by combining a vapor-compression heat pump and an absorption heat pump. It is also referred to as a hybrid heat pump, which is however a broader term. Thanks to this integration, the device can obtain cooling and heating effects using both thermal and electrical energy sources. This type of system couples well with cogeneration systems where both heat and electricity are produced. Depending on the configuration, the system can maximise heating and cooling production from a given amount of fuel, or can improve the temperature (hence the quality) of waste heat from other processes. This second use is the most studied one and has been applied in several industrial applications.
Coefficient of performance.
The merit of a refrigerator or heat pump is given by a parameter called the coefficient of performance (COP). The equation is:
formula_0
where
The detailed COP of a refrigerator is given by the following equation:
formula_3
The COP of a heat pump (sometimes referred to as coefficient of amplification COA) is given by the following equations, where the first law of thermodynamics: formula_4 and formula_5 was used in one of the last steps:
formula_6
Both the COP of a refrigerator and a heat pump can be greater than one. Combining these two equations results in:
formula_7 for fixed values of QH and QL.
This implies that COPHP will be greater than one because COPR will be a positive quantity. In a worst-case scenario, the heat pump will supply as much energy as it consumes, making it act as a resistance heater. However, in reality, as in home heating, some of QH is lost to the outside air through piping, insulation, etc., thus making the COPHP drop below unity when the outside air temperature is too low.
For Carnot refrigerators and heat pumps, the COP can be expressed in terms of temperatures:
formula_8
formula_9
These are the upper limits for the COP of any system operating between TL and TH.
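For example (the temperatures below are chosen for illustration):
def carnot_cop(T_L, T_H):
    # Carnot COP limits for a refrigerator and a heat pump, temperatures in kelvin
    cop_refrigerator = T_L / (T_H - T_L)
    cop_heat_pump = T_H / (T_H - T_L)   # equals 1 + cop_refrigerator
    return cop_refrigerator, cop_heat_pump
# Pumping heat from -5 degC outdoor air (268.15 K) into a 35 degC (308.15 K) heating loop
print(carnot_cop(268.15, 308.15))  # about (6.7, 7.7)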
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "{\\rm COP} = \\frac{|Q|}{ W_{net,in}}"
},
{
"math_id": 1,
"text": " Q "
},
{
"math_id": 2,
"text": "W_{net,in} "
},
{
"math_id": 3,
"text": "{\\rm COP_R} = \\frac{ \\text{Desired Output}}{ \\text{Required Input}} = \\frac{ \\text{Cooling Effect}}{ \\text{Work Input}} = \\frac{ Q_L}{ W_\\text{net,in} }"
},
{
"math_id": 4,
"text": "W_{net,in}+Q_{L}+Q_{H} = \\Delta_{cycle}U = 0 "
},
{
"math_id": 5,
"text": "|Q_{H}|= -Q_{H} "
},
{
"math_id": 6,
"text": "{\\rm COP_{HP} } = \\frac{ \\text{Desired Output}} { \\text{Required Input}} = \\frac{ \\text{Heating Effect}} { \\text{Work Input}} = \\frac{|Q_H|} { W_\\text{net,in} }= \\frac{W_{net,in} + Q_L} { W_\\text{net,in} }=1 +\\frac { Q_L} { W_\\text{net,in} }"
},
{
"math_id": 7,
"text": "{\\rm COP_{HP} } = 1+{\\rm COP_R }"
},
{
"math_id": 8,
"text": "{\\rm COP_{R,Carnot} } = \\frac { T_L} { T_H - T_L} = \\frac { 1} { (T_H / T_L) - 1}"
},
{
"math_id": 9,
"text": "{\\rm COP_{HP,Carnot} } = \\frac { T_H} { T_H-T_L} = \\frac { 1} { 1 - (T_L / T_H)}"
}
] | https://en.wikipedia.org/wiki?curid=11551222 |
11555 | Fluorescence | Emission of light by a substance that has absorbed light
Fluorescence is one of two kinds of emission of light by a substance that has absorbed light or other electromagnetic radiation. Fluorescence involves no change in electron spin multiplicity and generally it immediately follows absorption; phosphorescence involves spin change and is delayed. Thus fluorescent materials generally cease to glow nearly immediately when the radiation source stops, while phosphorescent materials continue to emit light for some time after.
Fluorescence is a form of luminescence. In most cases, the emitted light has a longer wavelength, and therefore a lower photon energy, than the absorbed radiation. A perceptible example of fluorescence occurs when the absorbed radiation is in the ultraviolet region of the electromagnetic spectrum (invisible to the human eye), while the emitted light is in the visible region; this gives the fluorescent substance a distinct color that can only be seen when the substance has been exposed to UV light.
Fluorescence has many practical applications, including mineralogy, gemology, medicine, chemical sensors (fluorescence spectroscopy), fluorescent labelling, dyes, biological detectors, cosmic-ray detection, vacuum fluorescent displays, and cathode-ray tubes. Its most common everyday application is in (gas-discharge) fluorescent lamps and LED lamps, in which fluorescent coatings convert UV or blue light into longer-wavelengths resulting in white light which can even appear indistinguishable from that of the traditional but energy-inefficient incandescent lamp.
Fluorescence also occurs frequently in nature in some minerals and in many biological forms across all kingdoms of life. The latter may be referred to as "biofluorescence", indicating that the fluorophore is part of or is extracted from a living organism (rather than an inorganic dye or stain). But since fluorescence is due to a specific chemical, which can also be synthesized artificially in most cases, it is sufficient to describe the substance itself as "fluorescent".
History.
Fluorescence was observed long before it was named and understood.
An early observation of fluorescence was known to the Aztecs and described in 1560 by Bernardino de Sahagún and in 1565 by Nicolás Monardes in the infusion known as "lignum nephriticum" (Latin for "kidney wood"). It was derived from the wood of two tree species, "Pterocarpus indicus" and "Eysenhardtia polystachya".
The chemical compound responsible for this fluorescence is matlaline, which is the oxidation product of one of the flavonoids found in this wood.
In 1819, E.D. Clarke and in 1822 René Just Haüy described some varieties of fluorites that had a different color depending on whether the light was reflected or (apparently) transmitted; Haüy incorrectly viewed the effect as light scattering similar to opalescence. In 1833 Sir David Brewster described a similar effect in chlorophyll which he also considered a form of opalescence.
Sir John Herschel studied quinine in 1845 and came to a different incorrect conclusion.
In 1842, A.E. Becquerel observed that calcium sulfide emits light after being exposed to solar ultraviolet, making him the first to state that the emitted light is of longer wavelength than the incident light. While his observation of photoluminescence was similar to that described 10 years later by Stokes, who observed a fluorescence of a solution of quinine, the phenomenon that Becquerel described with calcium sulfide is now called phosphorescence.
In his 1852 paper on the "Refrangibility" (wavelength change) of light, George Gabriel Stokes described the ability of fluorspar, uranium glass and many other substances to change invisible light beyond the violet end of the visible spectrum into visible light. He named this phenomenon "fluorescence"
"I am almost inclined to coin a word, and call the appearance "fluorescence", from fluor-spar [i.e., fluorite], as the analogous term "opalescence" is derived from the name of a mineral."(p479, footnote)
Neither Becquerel nor Stokes understood one key aspect of photoluminescence: the critical difference from incandescence, the emission of light by heated material. To distinguish it from incandescence, in the late 1800s, Gustav Wiedemann proposed the term luminescence to designate any emission of light more intense than expected from the source's temperature.
Advances in spectroscopy and quantum electronics between the 1950s and 1970s provided a way to distinguish between the three different mechanisms that produce the light, as well as narrowing down the typical timescales those mechanisms take to decay after absorption. In modern science, this distinction became important because some items, such as lasers, required the fastest decay times, which typically occur in the nanosecond (billionth of a second) range. In physics, this first mechanism was termed "fluorescence" or "singlet emission", and is common in many laser media such as ruby. Other fluorescent materials were discovered to have much longer decay times, because some of the atoms would change their spin to a triplet state; these would glow brightly with fluorescence under excitation but produce a dimmer afterglow for a short time after the excitation was removed, a behaviour which became labeled "phosphorescence" or "triplet phosphorescence". The typical decay times ranged from a few microseconds to one second, which are still fast enough by human-eye standards to be colloquially referred to as fluorescent. Common examples include fluorescent lamps, organic dyes, and even fluorspar. Longer emitters, commonly referred to as glow-in-the-dark substances, ranged from one second to many hours, and this mechanism was called persistent phosphorescence or persistent luminescence, to distinguish it from the other two mechanisms.
Physical principles.
Mechanism.
Fluorescence occurs when an excited molecule, atom, or nanostructure relaxes to a lower energy state (usually the ground state) through emission of a photon without a change in electron spin. When the initial and final states have different multiplicity (spin), the phenomenon is termed phosphorescence.
The ground state of most molecules is a singlet state, denoted as S0. A notable exception is molecular oxygen, which has a triplet ground state. Absorption of a photon of energy formula_0 results in an excited state of the same multiplicity (spin) as the ground state, usually a singlet (Sn with n > 0). In solution, states with n > 1 relax rapidly to the lowest vibrational level of the first excited state (S1) by transferring energy to the solvent molecules through non-radiative processes, including internal conversion followed by vibrational relaxation, in which the energy is dissipated as heat. Therefore, most commonly, fluorescence occurs from the first singlet excited state, S1. Fluorescence is the emission of a photon accompanying the relaxation of the excited state to the ground state. Fluorescence photons are lower in energy (formula_1) compared to the energy of the photons used to generate the excited state (formula_0).
In each case the photon energy formula_4 is proportional to its frequency formula_5 according to formula_6, where formula_7 is the Planck constant.
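To make the relation concrete, here is a minimal Python sketch (not part of the original article) that computes photon energies from wavelength via E = hν = hc/λ; the 350 nm excitation and 450 nm emission wavelengths are purely illustrative assumptions.

```python
# Illustrative sketch only: photon energies for an assumed fluorophore
# excited at 350 nm and emitting at 450 nm (hypothetical values).
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light in vacuum, m/s
eV = 1.602e-19   # joules per electronvolt

def photon_energy_eV(wavelength_nm):
    """Photon energy E = h*nu = h*c/lambda, returned in electronvolts."""
    return h * c / (wavelength_nm * 1e-9) / eV

E_ex = photon_energy_eV(350)   # absorbed photon
E_em = photon_energy_eV(450)   # emitted (fluorescence) photon
print(f"E_ex = {E_ex:.2f} eV, E_em = {E_em:.2f} eV")   # E_em < E_ex, as expected
```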
The excited state S1 can relax by other mechanisms that do not involve the emission of light. These processes, called non-radiative processes, compete with fluorescence emission and decrease its efficiency. Examples include internal conversion, intersystem crossing to the triplet state, and energy transfer to another molecule. An example of energy transfer is Förster resonance energy transfer. Relaxation from an excited state can also occur through collisional quenching, a process where a molecule (the quencher) collides with the fluorescent molecule during its excited state lifetime. Molecular oxygen (O2) is an extremely efficient quencher of fluorescence precisely because of its unusual triplet ground state.
Quantum yield.
The fluorescence quantum yield gives the efficiency of the fluorescence process. It is defined as the ratio of the number of photons emitted to the number of photons absorbed.(p10)
formula_8
The maximum possible fluorescence quantum yield is 1.0 (100%); each photon absorbed results in a photon emitted. Compounds with quantum yields of 0.10 are still considered quite fluorescent. Another way to define the quantum yield of fluorescence is by the rate of excited state decay:
formula_9
where formula_10 is the rate constant of spontaneous emission of radiation and
formula_11
is the sum of all rates of excited state decay. Other rates of excited state decay are caused by mechanisms other than photon emission and are, therefore, often called "non-radiative rates"; they can include internal conversion, intersystem crossing to the triplet state, energy transfer to another molecule, and collisional quenching.
Thus, if the rate of any pathway changes, both the excited state lifetime and the fluorescence quantum yield will be affected.
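As a rough illustration of the kinetic definition above, the following Python sketch uses made-up rate constants (real values depend on the fluorophore and its environment) to show how the quantum yield and the excited-state lifetime are both set by the same sum of rates, so changing any single pathway alters both.

```python
# Illustrative sketch: quantum yield Phi = k_f / sum_i(k_i) with assumed rates (s^-1).
k_f   = 1.0e8   # radiative (fluorescence) rate constant
k_ic  = 5.0e7   # internal conversion (assumed)
k_isc = 2.0e7   # intersystem crossing (assumed)
k_q   = 3.0e7   # collisional quenching (assumed)

k_total = k_f + k_ic + k_isc + k_q     # sum of all excited-state decay rates
quantum_yield = k_f / k_total          # Phi = k_f / sum_i k_i
lifetime_ns = 1e9 / k_total            # excited-state lifetime in nanoseconds
print(f"Phi = {quantum_yield:.2f}, tau = {lifetime_ns:.1f} ns")
```

Doubling any of the non-radiative rates in this toy model lowers both the printed yield and the lifetime, which is the point made in the text.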
Fluorescence quantum yields are measured by comparison to a standard. The quinine salt "quinine sulfate" in a sulfuric acid solution was long regarded as the most common fluorescence standard;
however, a recent study revealed that the fluorescence quantum yield of this solution is strongly affected by the temperature and that it should no longer be used as the standard solution. Quinine in 0.1 M perchloric acid (Φ = 0.60) shows no temperature dependence up to 45 °C, and can therefore be considered a reliable standard solution.
Lifetime.
The fluorescence lifetime refers to the average time the molecule stays in its excited state before emitting a photon. Fluorescence typically follows first-order kinetics:
formula_12
where formula_13 is the concentration of excited state molecules at time formula_14, formula_15 is the initial concentration and formula_16 is the decay rate, i.e. the inverse of the fluorescence lifetime. This is an instance of exponential decay. Various radiative and non-radiative processes can de-populate the excited state. In such a case the total decay rate is the sum over all rates:
formula_17
where formula_18 is the total decay rate, formula_19 the radiative decay rate and formula_20 the non-radiative decay rate. It is similar to a first-order chemical reaction in which the first-order rate constant is the sum of all of the rates (a parallel kinetic model). If the rate of spontaneous emission, or any of the other rates are fast, the lifetime is short. For commonly used fluorescent compounds, typical excited state decay times for photon emissions with energies from the UV to near infrared are within the range of 0.5 to 20 nanoseconds. The fluorescence lifetime is an important parameter for practical applications of fluorescence such as fluorescence resonance energy transfer and fluorescence-lifetime imaging microscopy.
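The first-order decay described above can be sketched numerically; the rate constants below are assumptions chosen only to produce a lifetime in the typical nanosecond range quoted in the text.

```python
import math

# Illustrative sketch of first-order fluorescence decay with assumed rates.
gamma_rad  = 1.0e8   # radiative decay rate, s^-1 (assumed)
gamma_nrad = 1.5e8   # non-radiative decay rate, s^-1 (assumed)
gamma_tot  = gamma_rad + gamma_nrad
tau = 1.0 / gamma_tot                    # fluorescence lifetime, seconds

def excited_population(t, s1_0=1.0):
    """[S1](t) = [S1]_0 * exp(-Gamma_tot * t)"""
    return s1_0 * math.exp(-gamma_tot * t)

print(f"lifetime tau = {tau * 1e9:.1f} ns")
# After one lifetime the excited population has fallen to 1/e (~36.8%):
print(f"fraction still excited after one lifetime: {excited_population(tau):.3f}")
```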
Jablonski diagram.
The Jablonski diagram describes most of the relaxation mechanisms for excited state molecules. The diagram alongside shows how fluorescence occurs due to the relaxation of certain excited electrons of a molecule.
Fluorescence anisotropy.
Fluorophores are more likely to be excited by photons if the transition moment of the fluorophore is parallel to the electric vector of the photon. The polarization of the emitted light will also depend on the transition moment. The transition moment is dependent on the physical orientation of the fluorophore molecule. For fluorophores in solution, the intensity and polarization of the emitted light is dependent on rotational diffusion. Therefore, anisotropy measurements can be used to investigate how freely a fluorescent molecule moves in a particular environment.
Fluorescence anisotropy can be defined quantitatively as
formula_21
where formula_22 is the emitted intensity parallel to the polarization of the excitation light and formula_23 is the emitted intensity perpendicular to the polarization of the excitation light.
Anisotropy is independent of the total intensity of the absorbed or emitted light, since it is a ratio of intensities; therefore photobleaching of the dye will not affect the anisotropy value as long as the signal remains detectable.
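A minimal Python sketch of the anisotropy formula above; the intensity values are arbitrary example counts, not measurements.

```python
# Illustrative sketch of r = (I_par - I_perp) / (I_par + 2 * I_perp).
def anisotropy(i_parallel, i_perpendicular):
    return (i_parallel - i_perpendicular) / (i_parallel + 2.0 * i_perpendicular)

# A freely rotating dye depolarizes its emission (r near 0);
# a rigidly held dye keeps more of the excitation polarization (larger r).
print(anisotropy(1000, 950))   # fast rotational diffusion -> r ~ 0.02
print(anisotropy(1000, 400))   # restricted rotation       -> r ~ 0.33
```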
Fluorescence.
Strongly fluorescent pigments often have an unusual appearance which is often described colloquially as a "neon color" (originally "day-glo" in the late 1960s, early 1970s). This phenomenon was termed "Farbenglut" by Hermann von Helmholtz and "fluorence" by Ralph M. Evans. It is generally thought to be related to the high brightness of the color relative to what it would be as a component of white. Fluorescence shifts energy in the incident illumination from shorter wavelengths to longer (such as blue to yellow) and thus can make the fluorescent color appear brighter (more saturated) than it could possibly be by reflection alone.
Rules.
There are several general rules that deal with fluorescence. Each of the following rules has exceptions, but they are useful guidelines for understanding fluorescence (these rules do not necessarily apply to two-photon absorption).
Kasha's rule.
Kasha's rule states that the luminescence (fluorescence or phosphorescence) of a molecule will be emitted only from the lowest excited state of its given multiplicity. Vavilov's rule (a logical extension of Kasha's rule, hence called the Kasha–Vavilov rule) states that the quantum yield of luminescence is independent of the wavelength of the exciting radiation, so that the emitted intensity is proportional to the absorbance at the excitation wavelength. Kasha's rule does not always apply and is violated by some simple molecules, azulene being a well-known example. A somewhat more reliable statement, although still with exceptions, would be that the fluorescence spectrum shows very little dependence on the wavelength of the exciting radiation.
Mirror image rule.
For many fluorophores the absorption spectrum is a mirror image of the emission spectrum.
This is known as the mirror image rule and is related to the Franck–Condon principle, which states that electronic transitions are vertical, that is, the energy changes without the nuclear coordinates changing, which can be represented by a vertical line in a Jablonski diagram. This means the nuclei do not move during the transition, and the vibrational levels of the excited state resemble the vibrational levels of the ground state.
Stokes shift.
In general, emitted fluorescence light has a longer wavelength and lower energy than the absorbed light. This phenomenon, known as Stokes shift, is due to energy loss between the time a photon is absorbed and when a new one is emitted. The causes and magnitude of Stokes shift can be complex and are dependent on the fluorophore and its environment. However, there are some common causes. It is frequently due to non-radiative decay to the lowest vibrational energy level of the excited state. Another factor is that the emission of fluorescence frequently leaves a fluorophore in a higher vibrational level of the ground state.
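For a sense of the magnitudes involved, here is a small Python sketch that computes a Stokes shift from assumed absorption and emission maxima (values roughly in the range of a quinine-like dye, chosen only for illustration); spectroscopists often quote the shift in wavenumbers as well as nanometres.

```python
# Illustrative sketch: Stokes shift for assumed spectral maxima.
abs_max_nm = 350.0   # absorption maximum (assumed)
em_max_nm  = 450.0   # emission maximum (assumed)

shift_nm  = em_max_nm - abs_max_nm
shift_cm1 = 1e7 / abs_max_nm - 1e7 / em_max_nm   # shift in wavenumbers, cm^-1
print(f"Stokes shift: {shift_nm:.0f} nm ({shift_cm1:.0f} cm^-1)")
```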
In nature.
There are many natural compounds that exhibit fluorescence, and they have a number of applications. Some deep-sea animals, such as the greeneye, have fluorescent structures.
Compared to bioluminescence and biophosphorescence.
Fluorescence.
Fluorescence is the phenomenon of absorption of electromagnetic radiation, typically from ultraviolet or visible light, by a molecule and the subsequent emission of a photon of a lower energy (smaller frequency, longer wavelength). This causes the light that is emitted to be a different color than the light that is absorbed. Stimulating light excites an electron to an excited state. When the molecule returns to the ground state, it releases a photon, which is the fluorescent emission. The excited state lifetime is short, so emission of light is typically only observable when the absorbing light is on. Fluorescence can be of any wavelength but is often more significant when emitted photons are in the visible spectrum. When it occurs in a living organism, it is sometimes called biofluorescence. Fluorescence should not be confused with bioluminescence and biophosphorescence. Pumpkin toadlets that live in the Brazilian Atlantic forest are fluorescent.
Bioluminescence.
Bioluminescence differs from fluorescence in that it is the natural production of light by chemical reactions within an organism, whereas fluorescence is the absorption and reemission of light from the environment. Fireflies and anglerfish are two examples of bioluminescent organisms. To add to the potential confusion, some organisms are both bioluminescent and fluorescent, like the sea pansy Renilla reniformis, where bioluminescence serves as the light source for fluorescence.
Phosphorescence.
Phosphorescence is similar to fluorescence in that it requires light of suitable wavelengths as a provider of excitation energy. The difference lies in the relative stability of the energized electron. Unlike in fluorescence, in phosphorescence the electron remains in a relatively stable excited state, emitting light that continues to "glow in the dark" even after the stimulating light source has been removed. For example, glow-in-the-dark stickers are phosphorescent, but there are no truly "biophosphorescent" animals known.
Mechanisms.
Epidermal chromatophores.
Pigment cells that exhibit fluorescence are called fluorescent chromatophores, and function somatically similar to regular chromatophores. These cells are dendritic, and contain pigments called fluorosomes. These pigments contain fluorescent proteins which are activated by K+ (potassium) ions, and it is their movement, aggregation, and dispersion within the fluorescent chromatophore that cause directed fluorescence patterning. Fluorescent cells are innervated the same as other chromatophores, like melanophores, pigment cells that contain melanin. Short term fluorescent patterning and signaling is controlled by the nervous system. Fluorescent chromatophores can be found in the skin (e.g. in fish) just below the epidermis, amongst other chromatophores.
Epidermal fluorescent cells in fish also respond to hormonal stimuli by the α–MSH and MCH hormones much the same as melanophores. This suggests that fluorescent cells may have color changes throughout the day that coincide with their circadian rhythm. Fish may also be sensitive to cortisol induced stress responses to environmental stimuli, such as interaction with a predator or engaging in a mating ritual.
Phylogenetics.
Evolutionary origins.
The incidence of fluorescence across the tree of life is widespread, and has been studied most extensively in cnidarians and fish. The phenomenon appears to have evolved multiple times in multiple taxa such as in the anguilliformes (eels), gobioidei (gobies and cardinalfishes), and tetraodontiformes (triggerfishes), along with the other taxa discussed later in the article. Fluorescence is highly genotypically and phenotypically variable even within ecosystems, with regard to the wavelengths emitted, the patterns displayed, and the intensity of the fluorescence. Generally, the species relying upon camouflage exhibit the greatest diversity in fluorescence, likely because camouflage may be one of the uses of fluorescence.
It is suspected by some scientists that GFPs and GFP-like proteins began as electron donors activated by light. These electrons were then used for reactions requiring light energy. Functions of fluorescent proteins, such as protection from the sun, conversion of light into different wavelengths, or for signaling are thought to have evolved secondarily.
Adaptive functions.
Currently, relatively little is known about the functional significance of fluorescence and fluorescent proteins. However, it is suspected that fluorescence may serve important functions in signaling and communication, mating, lures, camouflage, UV protection and antioxidation, photoacclimation, dinoflagellate regulation, and in coral health.
Aquatic.
Water absorbs light of long wavelengths, so less light from these wavelengths reflects back to reach the eye. Therefore, warm colors from the visible light spectrum appear less vibrant at increasing depths. Water scatters light of shorter wavelengths (toward the violet end of the spectrum), meaning cooler colors dominate the visual field in the photic zone. Light intensity decreases 10-fold with every 75 m of depth, so at a depth of 75 m light is 10% as intense as it is at the surface, and at 150 m it is only 1% as intense. Because water filters out certain wavelengths and reduces the intensity of the light reaching a given depth, different proteins, because of the wavelengths and intensities of light they are capable of absorbing, are better suited to different depths. Theoretically, some fish eyes can detect light as deep as 1000 m. At these depths of the aphotic zone, the only sources of light are organisms themselves, giving off light through chemical reactions in a process called bioluminescence.
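The ten-fold-per-75-m rule quoted above can be written as a one-line attenuation function; the sketch below simply restates that rule in Python and is not a model of real optical properties of seawater.

```python
# Illustrative sketch of the attenuation rule cited in the text:
# intensity drops ten-fold for every 75 m of depth.
def relative_intensity(depth_m, surface_intensity=1.0):
    return surface_intensity * 10 ** (-depth_m / 75.0)

for depth in (0, 75, 150, 300):
    print(f"{depth:4d} m : {relative_intensity(depth) * 100:.4f}% of surface light")
```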
Fluorescence is simply defined as the absorption of electromagnetic radiation at one wavelength and its reemission at another, lower energy wavelength. Thus any type of fluorescence depends on the presence of external sources of light. Biologically functional fluorescence is found in the photic zone, where there is not only enough light to cause fluorescence, but enough light for other organisms to detect it.
The visual field in the photic zone is naturally blue, so colors of fluorescence can be detected as bright reds, oranges, yellows, and greens. Green is the most commonly found color in the marine spectrum, yellow the second most, orange the third, and red is the rarest. Fluorescence can occur in organisms in the aphotic zone as a byproduct of that same organism's bioluminescence. Some fluorescence in the aphotic zone is merely a byproduct of the organism's tissue biochemistry and does not have a functional purpose. However, the functional and adaptive significance of fluorescence in the aphotic zone of the deep ocean remains an active area of research.
Photic zone.
Fish.
Bony fishes living in shallow water generally have good color vision due to their living in a colorful environment. Thus, in shallow-water fishes, red, orange, and green fluorescence most likely serves as a means of communication with conspecifics, especially given the great phenotypic variance of the phenomenon.
Many fish that exhibit fluorescence, such as sharks, lizardfish, scorpionfish, wrasses, and flatfishes, also possess yellow intraocular filters. Yellow intraocular filters in the lenses and cornea of certain fishes function as long-pass filters. These filters enable the species to visualize and potentially exploit fluorescence, in order to enhance visual contrast and patterns that are invisible to other fishes and predators that lack this visual specialization. Fish that possess the necessary yellow intraocular filters for visualizing fluorescence can thus potentially exploit light signals from members of their own species. Fluorescent patterning was especially prominent in cryptically patterned fishes possessing complex camouflage. Many of these lineages also possess yellow long-pass intraocular filters that could enable visualization of such patterns.
Another adaptive use of fluorescence is to generate orange and red light from the ambient blue light of the photic zone to aid vision. Red light can only be seen across short distances due to attenuation of red light wavelengths by water. Many fish species that fluoresce are small, group-living, or benthic/aphotic, and have conspicuous patterning. This patterning is caused by fluorescent tissue and is visible to other members of the species; however, the patterning is invisible at other visual spectra. These intraspecific fluorescent patterns also coincide with intra-species signaling. The patterns are present in ocular rings, indicating the directionality of an individual's gaze, and along fins, indicating the directionality of an individual's movement. Current research suspects that this red fluorescence is used for private communication between members of the same species. Due to the prominence of blue light at ocean depths, red light and light of longer wavelengths are muddled, and many predatory reef fish have little to no sensitivity for light at these wavelengths. Fish such as the fairy wrasse that have developed visual sensitivity to longer wavelengths are able to display red fluorescent signals that give a high contrast to the blue environment and are conspicuous to conspecifics at short range, yet are relatively invisible to other common fish that have reduced sensitivities to long wavelengths. Thus, fluorescence can be used as adaptive signaling and intra-species communication in reef fish.
Additionally, it is suggested that fluorescent tissues that surround an organism's eyes are used to convert blue light from the photic zone or green bioluminescence in the aphotic zone into red light to aid vision.
Sharks.
A new fluorophore was described in two species of sharks, wherein it was due to an undescribed group of brominated tryptophan-kynurenine small-molecule metabolites.
Coral.
Fluorescence serves a wide variety of functions in coral. Fluorescent proteins in corals may contribute to photosynthesis by converting otherwise unusable wavelengths of light into ones for which the coral's symbiotic algae are able to conduct photosynthesis. Also, the proteins may fluctuate in number as more or less light becomes available as a means of photoacclimation. Similarly, these fluorescent proteins may possess antioxidant capacities to eliminate oxygen radicals produced by photosynthesis. Finally, through modulating photosynthesis, the fluorescent proteins may also serve as a means of regulating the activity of the coral's photosynthetic algal symbionts.
Cephalopods.
"Alloteuthis subulata" and "Loligo vulgaris", two types of nearly transparent squid, have fluorescent spots above their eyes. These spots reflect incident light, which may serve as a means of camouflage, but also for signaling to other squids for schooling purposes.
Jellyfish.
Another, well-studied example of fluorescence in the ocean is the hydrozoan "Aequorea victoria". This jellyfish lives in the photic zone off the west coast of North America and was identified as a carrier of green fluorescent protein (GFP) by Osamu Shimomura. The gene for these green fluorescent proteins has been isolated and is scientifically significant because it is widely used in genetic studies to indicate the expression of other genes.
Mantis shrimp.
Several species of mantis shrimp, which are stomatopod crustaceans, including "Lysiosquillina glabriuscula", have yellow fluorescent markings along their antennal scales and carapace (shell) that males present during threat displays to predators and other males. The display involves raising the head and thorax, spreading the striking appendages and other maxillipeds, and extending the prominent, oval antennal scales laterally, which makes the animal appear larger and accentuates its yellow fluorescent markings. Furthermore, as depth increases, mantis shrimp fluorescence accounts for a greater part of the visible light available. During mating rituals, mantis shrimp actively fluoresce, and the wavelength of this fluorescence matches the wavelengths detected by their eye pigments.
Aphotic zone.
Siphonophores.
"Siphonophorae" is an order of marine animals from the phylum Hydrozoa that consist of a specialized medusoid and polyp zooid. Some siphonophores, including the genus Erenna that live in the aphotic zone between depths of 1600 m and 2300 m, exhibit yellow to red fluorescence in the photophores of their tentacle-like tentilla. This fluorescence occurs as a by-product of bioluminescence from these same photophores. The siphonophores exhibit the fluorescence in a flicking pattern that is used as a lure to attract prey.
Dragonfish.
The predatory deep-sea dragonfish "Malacosteus niger", the closely related genus "Aristostomias" and the species "Pachystomias microdon" use fluorescent red accessory pigments to convert the blue light emitted from their own bioluminescence to red light from suborbital photophores. This red luminescence is invisible to other animals, which allows these dragonfish extra light at dark ocean depths without attracting or signaling predators.
Terrestrial.
Amphibians.
Fluorescence is widespread among amphibians and has been documented in several families of frogs, salamanders and caecilians, but the extent of it varies greatly.
The polka-dot tree frog ("Hypsiboas punctatus"), widely found in South America, was unintentionally discovered to be the first fluorescent amphibian in 2017. The fluorescence was traced to a new compound found in the lymph and skin glands. The main fluorescent compound is Hyloin-L1 and it gives a blue-green glow when exposed to violet or ultraviolet light. The scientists behind the discovery suggested that the fluorescence can be used for communication. They speculated that fluorescence possibly is relatively widespread among frogs. Only a few months later, fluorescence was discovered in the closely related "Hypsiboas atlanticus". Because the fluorescence is linked to secretions from skin glands, the frogs can also leave fluorescent markings on surfaces where they have been.
In 2019, two other frogs, the tiny pumpkin toadlet ("Brachycephalus ephippium") and red pumpkin toadlet ("B. pitanga") of southeastern Brazil, were found to have naturally fluorescent skeletons, which are visible through their skin when exposed to ultraviolet light. It was initially speculated that the fluorescence supplemented their already aposematic colours (they are toxic) or that it was related to mate choice (species recognition or determining fitness of a potential partner), but later studies indicate that the former explanation is unlikely, as predation attempts on the toadlets appear to be unaffected by the presence/absence of fluorescence.
In 2020 it was confirmed that green or yellow fluorescence is widespread not only in adult frogs that are exposed to blue or ultraviolet light, but also among tadpoles, salamanders and caecilians. The extent varies greatly depending on species; in some it is highly distinct and in others it is barely noticeable. It can be based on their skin pigmentation, their mucous or their bones.
Butterflies.
Swallowtail ("Papilio") butterflies have complex systems for emitting fluorescent light. Their wings contain pigment-infused crystals that provide directed fluorescent light. These crystals function to produce fluorescent light best when they absorb radiance from sky-blue light (wavelength about 420 nm). The wavelengths of light that the butterflies see the best correspond to the absorbance of the crystals in the butterfly's wings. This likely functions to enhance the capacity for signaling.
Parrots.
Parrots have fluorescent plumage that may be used in mate signaling. A study using mate-choice experiments on budgerigars ("Melopsittacus undulatus") found compelling support for fluorescent sexual signaling, with both males and females significantly preferring birds with the fluorescent experimental stimulus. This study suggests that the fluorescent plumage of parrots is not simply a by-product of pigmentation, but instead an adapted sexual signal. Considering the intricacies of the pathways that produce fluorescent pigments, there may be significant costs involved. Therefore, individuals exhibiting strong fluorescence may be honest indicators of high individual quality, since they can deal with the associated costs.
Arachnids.
Spiders fluoresce under UV light and possess a huge diversity of fluorophores. Andrews, Reed, & Masta noted that spiders are the only known group in which fluorescence is "taxonomically widespread, variably expressed, evolutionarily labile, and probably under selection and potentially of ecological importance for intraspecific and interspecific signaling". They showed that fluorescence evolved multiple times across spider taxa, with novel fluorophores evolving during spider diversification.
In some spiders, ultraviolet cues are important for predator–prey interactions, intraspecific communication, and camouflage-matching with fluorescent flowers. Differing ecological contexts could favor inhibition or enhancement of fluorescence expression, depending upon whether fluorescence helps spiders be cryptic or makes them more conspicuous to predators. Therefore, natural selection could be acting on expression of fluorescence across spider species.
Scorpions are also fluorescent, in their case due to the presence of beta carboline in their cuticles.
Platypus.
In 2020 fluorescence was reported for several platypus specimens.
Plants.
Many plants are fluorescent due to the presence of chlorophyll, which is probably the most widely distributed fluorescent molecule, producing red emission under a range of excitation wavelengths. This attribute of chlorophyll is commonly used by ecologists to measure photosynthetic efficiency.
The "Mirabilis jalapa" flower contains violet, fluorescent betacyanins and yellow, fluorescent betaxanthins. Under white light, parts of the flower containing only betaxanthins appear yellow, but in areas where both betaxanthins and betacyanins are present, the visible fluorescence of the flower is faded due to internal light-filtering mechanisms. Fluorescence was previously suggested to play a role in pollinator attraction, however, it was later found that the visual signal by fluorescence is negligible compared to the visual signal of light reflected by the flower.
Abiotic.
Gemology, mineralogy and geology.
In addition to the eponymous fluorspar, many
gemstones and minerals may have a distinctive fluorescence or may fluoresce differently under short-wave ultraviolet, long-wave ultraviolet, visible light, or X-rays.
Many types of calcite and amber will fluoresce under shortwave UV, longwave UV and visible light. Rubies, emeralds, and diamonds exhibit red fluorescence under long-wave UV, blue and sometimes green light; diamonds also emit light under X-ray radiation.
Fluorescence in minerals is caused by a wide range of activators. In some cases, the concentration of the activator must be restricted to below a certain level, to prevent quenching of the fluorescent emission. Furthermore, the mineral must be free of impurities such as iron or copper, to prevent quenching of possible fluorescence. Divalent manganese, in concentrations of up to several percent, is responsible for the red or orange fluorescence of calcite, the green fluorescence of willemite, the yellow fluorescence of esperite, and the orange fluorescence of wollastonite and clinohedrite. Hexavalent uranium, in the form of the uranyl cation (UO22+), fluoresces at all concentrations in a yellow green, and is the cause of fluorescence of minerals such as autunite or andersonite, and, at low concentration, is the cause of the fluorescence of such materials as some samples of hyalite opal. Trivalent chromium at low concentration is the source of the red fluorescence of ruby. Divalent europium is the source of the blue fluorescence, when seen in the mineral fluorite. Trivalent lanthanides such as terbium and dysprosium are the principal activators of the creamy yellow fluorescence exhibited by the yttrofluorite variety of the mineral fluorite, and contribute to the orange fluorescence of zircon. Powellite (calcium molybdate) and scheelite (calcium tungstate) fluoresce intrinsically in yellow and blue, respectively. When present together in solid solution, energy is transferred from the higher-energy tungsten to the lower-energy molybdenum, such that fairly low levels of molybdenum are sufficient to cause a yellow emission for scheelite, instead of blue. Low-iron sphalerite (zinc sulfide), fluoresces and phosphoresces in a range of colors, influenced by the presence of various trace impurities.
Crude oil (petroleum) fluoresces in a range of colors, from dull-brown for heavy oils and tars through to bright-yellowish and bluish-white for very light oils and condensates. This phenomenon is used in oil exploration drilling to identify very small amounts of oil in drill cuttings and core samples.
Humic acids and fulvic acids produced by the degradation of organic matter in soils (humus) may also fluoresce because of the presence of aromatic cycles in their complex molecular structures. Humic substances dissolved in groundwater can be detected and characterized by spectrofluorimetry.
Organic liquids.
Organic (carbon-based) solutions, such as anthracene or stilbene dissolved in benzene or toluene, fluoresce under ultraviolet or gamma-ray irradiation. The decay times of this fluorescence are on the order of nanoseconds, since the duration of the light depends on the lifetime of the excited states of the fluorescent material, in this case anthracene or stilbene.
Scintillation is defined as a flash of light produced in a transparent material by the passage of a particle (an electron, an alpha particle, an ion, or a high-energy photon). Stilbene and derivatives are used in scintillation counters to detect such particles. Stilbene is also one of the gain media used in dye lasers.
Atmosphere.
Fluorescence is observed in the atmosphere when the air is under energetic electron bombardment. In cases such as the natural aurora, high-altitude nuclear explosions, and rocket-borne electron gun experiments, the molecules and ions formed have a fluorescent response to light.
In novel technology.
In August 2020 researchers reported the creation of the brightest fluorescent solid optical materials so far by enabling the transfer of properties of highly fluorescent dyes via spatial and electronic isolation of the dyes by mixing cationic dyes with anion-binding cyanostar macrocycles. According to a co-author these materials may have applications in areas such as solar energy harvesting, bioimaging, and lasers.
Applications.
Lighting.
The common fluorescent lamp relies on fluorescence. Inside the glass tube is a partial vacuum and a small amount of mercury. An electric discharge in the tube causes the mercury atoms to emit mostly ultraviolet light. The tube is lined with a coating of a fluorescent material, called the "phosphor", which absorbs ultraviolet light and re-emits visible light. Fluorescent lighting is more energy-efficient than incandescent lighting elements. However, the uneven spectrum of traditional fluorescent lamps may cause certain colors to appear different from when illuminated by incandescent light or daylight. The mercury vapor emission spectrum is dominated by a short-wave UV line at 254 nm (which provides most of the energy to the phosphors), accompanied by visible light emission at 436 nm (blue), 546 nm (green) and 579 nm (yellow-orange). These three lines can be observed superimposed on the white continuum using a hand spectroscope, for light emitted by the usual white fluorescent tubes. These same visible lines, accompanied by the emission lines of trivalent europium and trivalent terbium, and further accompanied by the emission continuum of divalent europium in the blue region, comprise the more discontinuous light emission of the modern trichromatic phosphor systems used in many compact fluorescent lamp and traditional lamps where better color rendition is a goal.
Fluorescent lights were first available to the public at the 1939 New York World's Fair. Improvements since then have largely been better phosphors, longer life, more consistent internal discharge, and easier-to-use shapes (such as compact fluorescent lamps). Some high-intensity discharge (HID) lamps couple their even greater electrical efficiency with phosphor enhancement for better color rendition.
White light-emitting diodes (LEDs) became available in the mid-1990s as LED lamps, in which blue light emitted from the semiconductor strikes phosphors deposited on the tiny chip. The combination of the blue light that continues through the phosphor and the green to red fluorescence from the phosphors produces a net emission of white light.
Glow sticks sometimes utilize fluorescent materials to absorb light from the chemiluminescent reaction and emit light of a different color.
Analytical chemistry.
Many analytical procedures involve the use of a fluorometer, usually with a single exciting wavelength and single detection wavelength. Because of the sensitivity that the method affords, fluorescent molecule concentrations as low as 1 part per trillion can be measured.
Fluorescence in several wavelengths can be detected by an array detector, to detect compounds from HPLC flow. Also, TLC plates can be visualized if the compounds or a coloring reagent is fluorescent. Fluorescence is most effective when there is a larger ratio of atoms at lower energy levels in a Boltzmann distribution. There is, then, a higher probability of excitation and release of photons by lower-energy atoms, making analysis more efficient.
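The Boltzmann argument can be made quantitative with a short sketch; the 2.5 eV energy gap below is an assumed, visible-photon-sized value chosen for illustration, not taken from the article.

```python
import math

# Illustrative sketch: Boltzmann population ratio between two levels
# separated by an assumed 2.5 eV gap, at room temperature.
k_B_eV = 8.617e-5        # Boltzmann constant, eV/K
delta_E_eV = 2.5         # energy gap between levels (assumed)
T = 298.0                # temperature, K

ratio_upper_to_lower = math.exp(-delta_E_eV / (k_B_eV * T))
print(f"N_upper/N_lower ~ {ratio_upper_to_lower:.1e}")
# The ratio is vanishingly small: essentially all molecules sit in the
# ground state, ready to absorb, which is what makes the analysis efficient.
```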
Spectroscopy.
Usually the setup of a fluorescence assay involves a light source, which may emit many different wavelengths of light. In general, a single wavelength is required for proper analysis, so, in order to selectively filter the light, it is passed through an excitation monochromator, and then that chosen wavelength is passed through the sample cell. After absorption and re-emission of the energy, many wavelengths may emerge due to Stokes shift and various electron transitions. To separate and analyze them, the fluorescent radiation is passed through an emission monochromator, and observed selectively by a detector.
Lasers.
Lasers most often use the fluorescence of certain materials as their active media, such as the red glow produced by a ruby (chromium sapphire), the infrared of titanium sapphire, or the unlimited range of colors produced by organic dyes. These materials normally fluoresce through a process called spontaneous emission, in which the light is emitted in all directions and often at many discrete spectral lines all at once. In many lasers, the fluorescent medium is "pumped" by exposing it to an intense light source, creating a population inversion, meaning that more of its atoms are in an excited state (high energy) than in the ground state (low energy). When this occurs, the spontaneous fluorescence can then induce the other atoms to emit their photons in the same direction and at the same wavelength, creating stimulated emission. When a portion of the spontaneous fluorescence is trapped between two mirrors, nearly all of the medium's fluorescence can be stimulated to emit along the same line, producing a laser beam.
Biochemistry and medicine.
Fluorescence in the life sciences is used generally as a non-destructive way of tracking or analyzing biological molecules by means of the fluorescent emission at a specific frequency where there is no background from the excitation light, as relatively few cellular components are naturally fluorescent (called intrinsic or autofluorescence).
In fact, a protein or other component can be "labelled" with an extrinsic fluorophore, a fluorescent dye that can be a small molecule, protein, or quantum dot; such labelling finds widespread use in many biological applications.(pxxvi)
The quantification of a dye is done with a spectrofluorometer, and fluorescent labelling finds additional applications in a wide range of biological and biomedical techniques.
Forensics.
Fingerprints can be visualized with fluorescent compounds such as ninhydrin or DFO (1,8-Diazafluoren-9-one). Blood and other substances are sometimes detected by fluorescent reagents, like fluorescein. Fibers, and other materials that may be encountered in forensics or with a relationship to various collectibles, are sometimes fluorescent.
Non-destructive testing.
Fluorescent penetrant inspection is used to find cracks and other defects on the surface of a part. Dye tracing, using fluorescent dyes, is used to find leaks in liquid and gas plumbing systems.
Signage.
Fluorescent colors are frequently used in signage, particularly road signs. Fluorescent colors are generally recognizable at longer ranges than their non-fluorescent counterparts, with fluorescent orange being particularly noticeable. This property has led to its frequent use in safety signs and labels.
Optical brighteners.
Fluorescent compounds are often used to enhance the appearance of fabric and paper, causing a "whitening" effect. A white surface treated with an optical brightener can emit more visible light than that which shines on it, making it appear brighter. The blue light emitted by the brightener compensates for the diminishing blue of the treated material and changes the hue away from yellow or brown and toward white. Optical brighteners are used in laundry detergents, high brightness paper, cosmetics, high-visibility clothing and more.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "h \\nu_{ex} "
},
{
"math_id": 1,
"text": "h \\nu_{em} "
},
{
"math_id": 2,
"text": " \\mathrm{S}_0 + h \\nu_\\text{ex} \\to \\mathrm{S}_1 "
},
{
"math_id": 3,
"text": " \\mathrm{S}_1 \\to \\mathrm{S}_0 + h \\nu_\\text{em} "
},
{
"math_id": 4,
"text": "E"
},
{
"math_id": 5,
"text": "\\nu"
},
{
"math_id": 6,
"text": "E=h\\nu"
},
{
"math_id": 7,
"text": "h"
},
{
"math_id": 8,
"text": " \\Phi = \\frac {\\text{Number of photons emitted}} {\\text{Number of photons absorbed}} "
},
{
"math_id": 9,
"text": " \\Phi = \\frac{ { k}_{ f} }{ \\sum_{i}{ k}_{i } } "
},
{
"math_id": 10,
"text": "{ k}_{ f}"
},
{
"math_id": 11,
"text": " \\sum_{i}{ k}_{i } "
},
{
"math_id": 12,
"text": " \\left[S_1 \\right] = \\left[S_1 \\right]_0 e^{-\\Gamma t} "
},
{
"math_id": 13,
"text": "\\left[S_1 \\right]"
},
{
"math_id": 14,
"text": "t"
},
{
"math_id": 15,
"text": "\\left[S_1 \\right]_0"
},
{
"math_id": 16,
"text": "\\Gamma"
},
{
"math_id": 17,
"text": " \\Gamma_{tot}=\\Gamma_{rad} + \\Gamma_{nrad} "
},
{
"math_id": 18,
"text": "\\Gamma_{tot}"
},
{
"math_id": 19,
"text": "\\Gamma_{rad}"
},
{
"math_id": 20,
"text": "\\Gamma_{nrad}"
},
{
"math_id": 21,
"text": "r = {I_\\parallel - I_\\perp \\over I_\\parallel + 2I_\\perp}"
},
{
"math_id": 22,
"text": "I_\\parallel"
},
{
"math_id": 23,
"text": "I_\\perp"
}
] | https://en.wikipedia.org/wiki?curid=11555 |
11556 | Fundamental theorem of arithmetic | Integers have unique prime factorizations
In mathematics, the fundamental theorem of arithmetic, also called the unique factorization theorem and prime factorization theorem, states that every integer greater than 1 can be represented uniquely as a product of prime numbers, up to the order of the factors. For example,
formula_0
The theorem says two things about this example: first, that 1200 can be represented as a product of primes, and second, that no matter how this is done, there will always be exactly four 2s, one 3, two 5s, and no other primes in the product.
The requirement that the factors be prime is necessary: factorizations containing composite numbers may not be unique
(for example, formula_1).
This theorem is one of the main reasons why 1 is not considered a prime number: if 1 were prime, then factorization into primes would not be unique; for example, formula_2
The theorem generalizes to other algebraic structures that are called unique factorization domains and include principal ideal domains, Euclidean domains, and polynomial rings over a field. However, the theorem does not hold for algebraic integers. This failure of unique factorization is one of the reasons for the difficulty of the proof of Fermat's Last Theorem. The implicit use of unique factorization in rings of algebraic integers is behind the error of many of the numerous false proofs that have been written during the 358 years between Fermat's statement and Wiles's proof.
History.
The fundamental theorem can be derived from Book VII, propositions 30, 31 and 32, and Book IX, proposition 14 of Euclid's "Elements".
<templatestyles src="Template:Blockquote/styles.css" />If two numbers by multiplying one another make some
number, and any prime number measure the product, it will
also measure one of the original numbers.
(In modern terminology: if a prime "p" divides the product "ab", then "p" divides either "a" or "b" or both.) Proposition 30 is referred to as Euclid's lemma, and it is the key in the proof of the fundamental theorem of arithmetic.
<templatestyles src="Template:Blockquote/styles.css" />Any composite number is measured by some prime number.
(In modern terminology: every integer greater than one is divided evenly by some prime number.) Proposition 31 is proved directly by infinite descent.
<templatestyles src="Template:Blockquote/styles.css" />Any number either is prime or is measured by some prime number.
Proposition 32 is derived from proposition 31, and proves that the decomposition is possible.
<templatestyles src="Template:Blockquote/styles.css" />If a number be the least that is measured by prime numbers, it will not be measured by any
other prime number except those originally measuring it.
(In modern terminology: a least common multiple of several prime numbers is not a multiple of any other prime number.) Book IX, proposition 14 is derived from Book VII, proposition 30, and proves partially that the decomposition is unique – a point critically noted by André Weil. Indeed, in this proposition the exponents are all equal to one, so nothing is said for the general case.
While Euclid took the first step on the way to the existence of prime factorization, Kamāl al-Dīn al-Fārisī took the final step and stated for the first time the fundamental theorem of arithmetic.
Article 16 of Gauss's "Disquisitiones Arithmeticae" is an early modern statement and proof employing modular arithmetic.
Applications.
Canonical representation of a positive integer.
Every positive integer "n" > 1 can be represented in exactly one way as a product of prime powers
formula_3
where "p"1 < "p"2 < ... < "p"k are primes and the "n""i" are positive integers. This representation is commonly extended to all positive integers, including 1, by the convention that the empty product is equal to 1 (the empty product corresponds to "k" = 0).
This representation is called the canonical representation of "n", or the standard form of "n". For example,
999 = 33×37,
1000 = 23×53,
1001 = 7×11×13.
Factors "p"0 = 1 may be inserted without changing the value of "n" (for example, 1000 = 23×30×53). In fact, any positive integer can be uniquely represented as an infinite product taken over all the positive prime numbers, as
formula_4
where a finite number of the "n""i" are positive integers, and the others are zero.
Allowing negative exponents provides a canonical form for positive rational numbers.
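The canonical representation of a small integer can be computed by trial division; the following Python sketch (illustrative only, not an efficient factorization algorithm) returns the exponents as a dictionary keyed by prime.

```python
# Illustrative sketch: canonical representation {prime: exponent} by trial division.
def factorize(n):
    assert n >= 1
    factors = {}
    p = 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:                 # whatever remains after trial division is prime
        factors[n] = factors.get(n, 0) + 1
    return factors            # empty dict for n = 1 (the empty product)

print(factorize(1200))        # {2: 4, 3: 1, 5: 2}
print(factorize(1001))        # {7: 1, 11: 1, 13: 1}
```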
Arithmetic operations.
The canonical representations of the product, greatest common divisor (GCD), and least common multiple (LCM) of two numbers "a" and "b" can be expressed simply in terms of the canonical representations of "a" and "b" themselves:
formula_5
However, integer factorization, especially of large numbers, is much more difficult than computing products, GCDs, or LCMs. So these formulas have limited use in practice.
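The exponent-wise formulas above are easy to state in code once the canonical representations are known; in the sketch below the inputs are given directly as dictionaries of exponents, and the helper names `combine` and `value` are invented for the example.

```python
# Illustrative sketch: product, gcd and lcm computed exponent-wise from
# canonical representations given as {prime: exponent} dictionaries.
a = {2: 4, 3: 1, 5: 2}                  # 1200 = 2^4 * 3 * 5^2
b = {2: 1, 5: 1, 7: 1, 11: 1, 13: 1}    # 10010 = 2 * 5 * 7 * 11 * 13

def combine(x, y, op):
    """Apply op (e.g. min, max, addition) to the exponents, prime by prime."""
    out = {}
    for p in set(x) | set(y):
        e = op(x.get(p, 0), y.get(p, 0))
        if e > 0:
            out[p] = e
    return out

def value(factorization):
    """Multiply the prime powers back into an ordinary integer."""
    n = 1
    for p, e in factorization.items():
        n *= p ** e
    return n

print(value(combine(a, b, min)))                 # gcd(1200, 10010) = 10
print(value(combine(a, b, max)))                 # lcm(1200, 10010) = 1201200
print(value(combine(a, b, lambda s, t: s + t)))  # 1200 * 10010 = 12012000
```

As the text notes, the hard part in practice is obtaining the factorizations in the first place; once they are known, these exponent-wise operations are trivial.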
Arithmetic functions.
Many arithmetic functions are defined using the canonical representation. In particular, the values of additive and multiplicative functions are determined by their values on the powers of prime numbers.
Proof.
The proof uses Euclid's lemma ("Elements" VII, 30): If a prime divides the product of two integers, then it must divide at least one of these integers.
Existence.
It must be shown that every integer greater than 1 is either prime or a product of primes. First, 2 is prime. Then, by strong induction, assume this is true for all numbers greater than 1 and less than "n". If "n" is prime, there is nothing more to prove. Otherwise, there are integers "a" and "b", where "n" = "a b", and 1 < "a" ≤ "b" < "n". By the induction hypothesis, "a" = "p"1 "p"2 ⋅⋅⋅ "p""j" and "b" = "q"1 "q"2 ⋅⋅⋅ "q""k" are products of primes. But then "n" = "a b" = "p"1 "p"2 ⋅⋅⋅ "p""j" "q"1 "q"2 ⋅⋅⋅ "q""k" is a product of primes.
Uniqueness.
Suppose, to the contrary, there is an integer that has two distinct prime factorizations. Let "n" be the least such integer and write "n" = "p"1 "p"2 ... "p""j" = "q"1 "q"2 ... "q""k", where each "p""i" and "q""i" is prime. We see that "p"1 divides "q"1 "q"2 ... "q""k", so "p"1 divides some "q""i" by Euclid's lemma. Without loss of generality, say "p"1 divides "q"1. Since "p"1 and "q"1 are both prime, it follows that "p"1 = "q"1. Returning to our factorizations of "n", we may cancel these two factors to conclude that "p"2 ... "p""j" = "q"2 ... "q""k". We now have two distinct prime factorizations of some integer strictly smaller than "n", which contradicts the minimality of "n".
Uniqueness without Euclid's lemma.
The fundamental theorem of arithmetic can also be proved without using Euclid's lemma. The proof that follows is inspired by Euclid's original version of the Euclidean algorithm.
Assume that formula_6 is the smallest positive integer which is the product of prime numbers in two different ways. Incidentally, this implies that formula_6, if it exists, must be a composite number greater than formula_7. Now, say
formula_8
Every formula_9 must be distinct from every formula_10 Otherwise, if say formula_11 then there would exist some positive integer formula_12 that is smaller than s and has two distinct prime factorizations. One may also suppose that formula_13 by exchanging the two factorizations, if needed.
Setting formula_14 and formula_15 one has formula_16
Also, since formula_13 one has formula_17
It then follows that
formula_18
As the positive integers less than s have been supposed to have a unique prime factorization, formula_19 must occur in the factorization of either formula_20 or Q. The latter case is impossible, as Q, being smaller than s, must have a unique prime factorization, and formula_19 differs from every formula_10 The former case is also impossible, as, if formula_19 is a divisor of formula_21 it must also be a divisor of formula_22 which is impossible as formula_19 and formula_23 are distinct primes.
Therefore, there cannot exist a smallest integer with more than a single distinct prime factorization. Every positive integer must either be a prime number itself, which would factor uniquely, or a composite that also factors uniquely into primes, or in the case of the integer formula_7, not factor into any prime.
Generalizations.
The first generalization of the theorem is found in Gauss's second monograph (1832) on biquadratic reciprocity. This paper introduced what is now called the ring of Gaussian integers, the set of all complex numbers "a" + "bi" where "a" and "b" are integers. It is now denoted by formula_24 He showed that this ring has the four units ±1 and ±"i", that the non-zero, non-unit numbers fall into two classes, primes and composites, and that the composites have unique factorization as a product of primes (up to the order and multiplication by units).
Similarly, in 1844 while working on cubic reciprocity, Eisenstein introduced the ring formula_25, where formula_26 formula_27 is a cube root of unity. This is the ring of Eisenstein integers, and he proved it has the six units formula_28 and that it has unique factorization.
However, it was also discovered that unique factorization does not always hold. An example is given by formula_29. In this ring one has
formula_30
Examples like this caused the notion of "prime" to be modified. In formula_31 it can be proven that if any of the factors above can be represented as a product, for example, 2 = "ab", then one of "a" or "b" must be a unit. This is the traditional definition of "prime". It can also be proven that none of these factors obeys Euclid's lemma; for example, 2 divides neither (1 + √−5) nor (1 − √−5) even though it divides their product 6. In algebraic number theory 2 is called irreducible in formula_31 (only divisible by itself or a unit) but not prime in formula_31 (if it divides a product it must divide one of the factors). The mention of formula_31 is required because 2 is prime and irreducible in formula_32 Using these definitions it can be proven that in any integral domain a prime must be irreducible. Euclid's classical lemma can be rephrased as "in the ring of integers formula_33 every irreducible is prime". This is also true in formula_34 and formula_35 but not in formula_36
The rings in which factorization into irreducibles is essentially unique are called unique factorization domains. Important examples are polynomial rings over the integers or over a field, Euclidean domains and principal ideal domains.
In 1843 Kummer introduced the concept of ideal number, which was developed further by Dedekind (1876) into the modern theory of ideals, special subsets of rings. Multiplication is defined for ideals, and the rings in which they have unique factorization are called Dedekind domains.
There is a version of unique factorization for ordinals, though it requires some additional conditions to ensure uniqueness.
Any commutative Möbius monoid satisfies a unique factorization theorem and thus possesses arithmetical properties similar to those of the multiplicative semigroup of positive integers. The fundamental theorem of arithmetic is, in fact, a special case of the unique factorization theorem in commutative Möbius monoids.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
The "Disquisitiones Arithmeticae" has been translated from Latin into English and German. The German edition includes all of his papers on number theory: all the proofs of quadratic reciprocity, the determination of the sign of the Gauss sum, the investigations into biquadratic reciprocity, and unpublished notes.
The two monographs Gauss published on biquadratic reciprocity have consecutively numbered sections: the first contains §§ 1–23 and the second §§ 24–76. Footnotes referencing these are of the form "Gauss, BQ, § "n"". Footnotes referencing the "Disquisitiones Arithmeticae" are of the form "Gauss, DA, Art. "n"".
These are in Gauss's "Werke", Vol II, pp. 65–92 and 93–148; German translations are pp. 511–533 and 534–586 of the German edition of the "Disquisitiones". | [
{
"math_id": 0,
"text": "\n1200 = 2^4 \\cdot 3^1 \\cdot 5^2 = (2 \\cdot 2 \\cdot 2 \\cdot 2) \\cdot 3 \\cdot (5 \\cdot 5) = 5 \\cdot 2 \\cdot 5 \\cdot 2 \\cdot 3 \\cdot 2 \\cdot 2 = \\ldots\n"
},
{
"math_id": 1,
"text": "12 = 2 \\cdot 6 = 3 \\cdot 4"
},
{
"math_id": 2,
"text": "2 = 2 \\cdot 1 = 2 \\cdot 1 \\cdot 1 = \\ldots"
},
{
"math_id": 3,
"text": "\nn = p_1^{n_1}p_2^{n_2} \\cdots p_k^{n_k}\n= \\prod_{i=1}^{k} p_i^{n_i},\n"
},
{
"math_id": 4,
"text": "n=2^{n_1}3^{n_2}5^{n_3}7^{n_4}\\cdots=\\prod_{i=1}^\\infty p_i^{n_i},"
},
{
"math_id": 5,
"text": "\\begin{alignat}{2}\n a\\cdot b & = 2^{a_1+b_1}3^{a_2+b_2}5^{a_3+b_3}7^{a_4+b_4}\\cdots\n && = \\prod p_i^{a_i+b_i},\\\\\n \\gcd(a,b) & = 2^{\\min(a_1,b_1)}3^{\\min(a_2,b_2)}5^{\\min(a_3,b_3)}7^{\\min(a_4,b_4)}\\cdots\n && = \\prod p_i^{\\min(a_i,b_i)},\\\\\n \\operatorname{lcm}(a,b) & = 2^{\\max(a_1,b_1)}3^{\\max(a_2,b_2)}5^{\\max(a_3,b_3)}7^{\\max(a_4,b_4)}\\cdots\n && = \\prod p_i^{\\max(a_i,b_i)}.\n \\end{alignat}"
},
{
"math_id": 6,
"text": "s"
},
{
"math_id": 7,
"text": "1"
},
{
"math_id": 8,
"text": "\n\\begin{align}\ns\n&=p_1 p_2 \\cdots p_m \\\\\n&=q_1 q_2 \\cdots q_n.\n\\end{align}\n"
},
{
"math_id": 9,
"text": "p_i"
},
{
"math_id": 10,
"text": "q_j."
},
{
"math_id": 11,
"text": "p_i=q_j,"
},
{
"math_id": 12,
"text": "t=s/p_i=s/q_j"
},
{
"math_id": 13,
"text": "p_1 < q_1,"
},
{
"math_id": 14,
"text": "P=p_2\\cdots p_m"
},
{
"math_id": 15,
"text": "Q=q_2\\cdots q_n,"
},
{
"math_id": 16,
"text": "s=p_1P=q_1Q."
},
{
"math_id": 17,
"text": "Q < P."
},
{
"math_id": 18,
"text": "s-p_1Q = (q_1-p_1)Q = p_1(P-Q) < s."
},
{
"math_id": 19,
"text": "p_1"
},
{
"math_id": 20,
"text": "q_1-p_1"
},
{
"math_id": 21,
"text": "q_1-p_1,"
},
{
"math_id": 22,
"text": "q_1,"
},
{
"math_id": 23,
"text": "q_1"
},
{
"math_id": 24,
"text": "\\mathbb{Z}[i]."
},
{
"math_id": 25,
"text": "\\mathbb{Z}[\\omega]"
},
{
"math_id": 26,
"text": "\\omega = \\frac{-1 + \\sqrt{-3}}{2},"
},
{
"math_id": 27,
"text": "\\omega^3 = 1"
},
{
"math_id": 28,
"text": "\\pm 1, \\pm\\omega, \\pm\\omega^2"
},
{
"math_id": 29,
"text": "\\mathbb{Z}[\\sqrt{-5}]"
},
{
"math_id": 30,
"text": "\n 6 =\n 2 \\cdot 3 =\n \\left(1 + \\sqrt{-5}\\right)\\left(1 - \\sqrt{-5}\\right).\n"
},
{
"math_id": 31,
"text": "\\mathbb{Z}\\left[\\sqrt{-5}\\right]"
},
{
"math_id": 32,
"text": "\\mathbb{Z}."
},
{
"math_id": 33,
"text": "\\mathbb{Z}"
},
{
"math_id": 34,
"text": "\\mathbb{Z}[i]"
},
{
"math_id": 35,
"text": "\\mathbb{Z}[\\omega],"
},
{
"math_id": 36,
"text": "\\mathbb{Z}[\\sqrt{-5}]."
}
] | https://en.wikipedia.org/wiki?curid=11556 |
1155685 | Graded category | If formula_0 is a category, then a formula_0-graded category is a category formula_1 together with a functor
formula_2.
Monoids and groups can be thought of as categories with a single object. A monoid-graded or group-graded category is therefore one in which to each morphism is attached an element of a given monoid (resp. group), its grade. This must be compatible with composition, in the sense that compositions have the product grade.
Definition.
There are various different definitions of a graded category, up to the most abstract one given above. A more concrete definition of a graded abelian category is as follows:
Let formula_1 be an abelian category and formula_3 a monoid. Let formula_4 be a set of functors from formula_1 to itself. If
formula_5 is the identity functor on formula_1, formula_6 for all formula_7, and formula_8 is a full and faithful functor for every formula_9, then we say that formula_10 is a formula_3-graded category.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal{A}"
},
{
"math_id": 1,
"text": "\\mathcal{C}"
},
{
"math_id": 2,
"text": "F\\colon\\mathcal{C} \\rightarrow \\mathcal{A}"
},
{
"math_id": 3,
"text": "\\mathbb{G}"
},
{
"math_id": 4,
"text": "\\mathcal{S}=\\{ S_{g} : g\\in \\mathbb{G} \\}"
},
{
"math_id": 5,
"text": "S_{1}"
},
{
"math_id": 6,
"text": "S_{g}S_{h}=S_{gh}"
},
{
"math_id": 7,
"text": "g,h \\in \\mathbb{G}"
},
{
"math_id": 8,
"text": "S_{g}"
},
{
"math_id": 9,
"text": "g\\in \\mathbb{G}"
},
{
"math_id": 10,
"text": "(\\mathcal{C},\\mathcal{S})"
}
] | https://en.wikipedia.org/wiki?curid=1155685 |
1155734 | Graded vector space | Algebraic structure decomposed into a direct sum
In mathematics, a graded vector space is a vector space that has the extra structure of a "grading" or "gradation", which is a decomposition of the vector space into a direct sum of vector subspaces, generally indexed by the integers.
For "pure" vector spaces, the concept has been introduced in homological algebra, and it is widely used for graded algebras, which are graded vector spaces with additional structures.
Integer gradation.
Let formula_0 be the set of non-negative integers. An formula_1-graded vector space, often called simply a graded vector space without the prefix formula_0, is a vector space "V" together with a decomposition into a direct sum of the form
formula_2
where each formula_3 is a vector space. For a given "n" the elements of formula_3 are then called homogeneous elements of degree "n".
Graded vector spaces are common. For example the set of all polynomials in one or several variables forms a graded vector space, where the homogeneous elements of degree "n" are exactly the linear combinations of monomials of degree "n".
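As an illustrative sketch (using SymPy; the polynomial chosen is arbitrary), the following code decomposes a polynomial in two variables into its homogeneous components of each degree:

```python
import sympy as sp

x, y = sp.symbols('x y')
p = 3 + x - 2*x*y + x**2*y + y**3   # a polynomial in two variables

# Collect the homogeneous component of each degree: p = sum of p_n with p_n in V_n.
components = {}
for (i, j), coeff in sp.Poly(p, x, y).terms():
    n = i + j
    components[n] = components.get(n, 0) + coeff * x**i * y**j

print({n: components[n] for n in sorted(components)})
# {0: 3, 1: x, 2: -2*x*y, 3: x**2*y + y**3}
```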
General gradation.
The subspaces of a graded vector space need not be indexed by the set of natural numbers, and may be indexed by the elements of any set "I". An "I"-graded vector space "V" is a vector space together with a decomposition into a direct sum of subspaces indexed by elements "i" of the set "I":
formula_4
Therefore, an formula_0-graded vector space, as defined above, is just an "I"-graded vector space where the set "I" is formula_0 (the set of natural numbers).
The case where "I" is the ring formula_5 (the elements 0 and 1) is particularly important in physics. A formula_6-graded vector space is also known as a supervector space.
Homomorphisms.
For general index sets "I", a linear map between two "I"-graded vector spaces "f" : "V" → "W" is called a graded linear map if it preserves the grading of homogeneous elements. A graded linear map is also called a homomorphism (or morphism) of graded vector spaces, or homogeneous linear map:
formula_7 for all "i" in "I".
For a fixed field and a fixed index set, the graded vector spaces form a category whose morphisms are the graded linear maps.
When "I" is a commutative monoid (such as the natural numbers), then one may more generally define linear maps that are homogeneous of any degree "i" in "I" by the property
formula_8 for all "j" in "I",
where "+" denotes the monoid operation. If moreover "I" satisfies the cancellation property so that it can be embedded into an abelian group "A" that it generates (for instance the integers if "I" is the natural numbers), then one may also define linear maps that are homogeneous of degree "i" in "A" by the same property (but now "+" denotes the group operation in "A"). Specifically, for "i" in "I" a linear map will be homogeneous of degree −"i" if
formula_9 for all "j" in "I", while
formula_10 if "j" − "i" is not in "I".
Just as the set of linear maps from a vector space to itself forms an associative algebra (the algebra of endomorphisms of the vector space), the sets of homogeneous linear maps from a space to itself – either restricting degrees to "I" or allowing any degrees in the group "A" – form associative graded algebras over those index sets.
Operations on graded vector spaces.
Some operations on vector spaces can be defined for graded vector spaces as well.
Given two "I"-graded vector spaces "V" and "W", their direct sum has underlying vector space "V" ⊕ "W" with gradation
("V" ⊕ "W")"i" = "Vi" ⊕ "Wi" .
If "I" is a semigroup, then the tensor product of two "I"-graded vector spaces "V" and "W" is another "I"-graded vector space, formula_11, with gradation
formula_12
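As an illustrative sketch (assuming each space is finite-dimensional in each degree and representing it only by its sequence of dimensions), the following Python code computes the gradations of a direct sum and a tensor product; the tensor-product rule amounts to a convolution of the two dimension sequences:

```python
# Illustrative sketch: an N-graded vector space represented by dims[n] = dim V_n.

def direct_sum(dims_v, dims_w):
    n = max(len(dims_v), len(dims_w))
    pad = lambda d: d + [0] * (n - len(d))
    return [a + b for a, b in zip(pad(dims_v), pad(dims_w))]

def tensor_product(dims_v, dims_w):
    out = [0] * (len(dims_v) + len(dims_w) - 1)
    for j, a in enumerate(dims_v):
        for k, b in enumerate(dims_w):
            out[j + k] += a * b          # (V ⊗ W)_i collects all V_j ⊗ W_k with j + k = i
    return out

V = [1, 2, 1]        # dim V_0, dim V_1, dim V_2
W = [1, 1]           # dim W_0, dim W_1
print(direct_sum(V, W))      # [2, 3, 1]
print(tensor_product(V, W))  # [1, 3, 3, 1]
```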
Hilbert–Poincaré series.
Given a formula_13-graded vector space that is finite-dimensional for every formula_14 its Hilbert–Poincaré series is the formal power series
formula_15
From the formulas above, the Hilbert–Poincaré series of a direct sum and of a tensor product
of graded vector spaces (finite dimensional in each degree) are respectively the sum and the product of the corresponding Hilbert–Poincaré series. | [
{
"math_id": 0,
"text": "\\mathbb{N}"
},
{
"math_id": 1,
"text": "\\mathbb{N}"
},
{
"math_id": 2,
"text": "V = \\bigoplus_{n \\in \\mathbb{N}} V_n"
},
{
"math_id": 3,
"text": "V_n"
},
{
"math_id": 4,
"text": "V = \\bigoplus_{i \\in I} V_i."
},
{
"math_id": 5,
"text": "\\mathbb{Z}/2\\mathbb{Z}"
},
{
"math_id": 6,
"text": "(\\mathbb{Z}/2\\mathbb{Z})"
},
{
"math_id": 7,
"text": "f(V_i)\\subseteq W_i"
},
{
"math_id": 8,
"text": "f(V_j)\\subseteq W_{i+j}"
},
{
"math_id": 9,
"text": "f(V_{i+j})\\subseteq W_j"
},
{
"math_id": 10,
"text": "f(V_j)=0\\,"
},
{
"math_id": 11,
"text": "V \\otimes W"
},
{
"math_id": 12,
"text": "(V \\otimes W)_i = \\bigoplus_{\\left\\{\\left(j,k\\right) \\,:\\; j+k=i\\right\\}} V_j \\otimes W_k."
},
{
"math_id": 13,
"text": "\\N"
},
{
"math_id": 14,
"text": "n\\in \\N,"
},
{
"math_id": 15,
"text": "\\sum_{n\\in\\N}\\dim_K(V_n)\\, t^n."
}
] | https://en.wikipedia.org/wiki?curid=1155734 |
11559418 | International System of Quantities | System of quantities used in science and their interrelationships
The International System of Quantities (ISQ) is a standard system of quantities used in physics and in modern science in general. It includes basic quantities such as length and mass and the relationships between those quantities. This system underlies the International System of Units (SI) but does not itself determine the units of measurement used for the quantities.
The system is formally described in a multi-part ISO standard ISO/IEC 80000 (which also defines many other quantities used in science and technology), first completed in 2009 and subsequently revised and expanded.
Base quantities.
The base quantities of a given system of physical quantities are a subset of those quantities, where no base quantity can be expressed in terms of the others, but where every quantity in the system can be expressed in terms of the base quantities. Within this constraint, the set of base quantities is chosen by convention. There are seven ISQ base quantities. The symbols for them, as for other quantities, are written in italics.
The dimension of a physical quantity does not include magnitude or units. The conventional symbolic representation of the dimension of a base quantity is a single upper-case letter in roman (upright) sans-serif type.
Derived quantities.
A derived quantity is a quantity in a system of quantities that is defined in terms of only the base quantities of that system. The ISQ defines many derived quantities and corresponding derived units.
Dimensional expression of derived quantities.
The conventional symbolic representation of the dimension of a derived quantity is the product of powers of the dimensions of the base quantities according to the definition of the derived quantity. The dimension of a quantity is denoted by formula_0, where the dimensional exponents are positive, negative, or zero. The dimension symbol may be omitted if its exponent is zero. For example, in the ISQ, the quantity dimension of velocity is denoted formula_1. The following table lists some quantities defined by the ISQ.
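As an illustrative sketch (the helper names are assumptions of this example, not part of ISO/IEC 80000), the dimensional exponents can be tracked as integers over the seven base dimensions; dividing length by time then yields the dimension of velocity shown above:

```python
from collections import Counter

BASE = ('L', 'M', 'T', 'I', 'Theta', 'N', 'J')   # the seven ISQ base dimensions

def dim(**exponents):
    """A quantity dimension as a mapping from base-dimension symbol to exponent."""
    return Counter(exponents)

def multiply(a, b):
    return Counter({k: a[k] + b[k] for k in BASE if a[k] + b[k] != 0})

def power(a, n):
    return Counter({k: a[k] * n for k in a})

length, time = dim(L=1), dim(T=1)
velocity = multiply(length, power(time, -1))
print(dict(velocity))   # {'L': 1, 'T': -1}, i.e. the dimension L T^-1
```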
Dimensionless quantities.
A "quantity of dimension one" is historically known as a "dimensionless quantity" (a term that is still commonly used); all its dimensional exponents are zero and its dimension symbol is formula_2. Such a quantity can be regarded as a derived quantity in the form of the ratio of two quantities of the same dimension. The named dimensionless units "radian" (rad) and "steradian" (sr) are acceptable for distinguishing dimensionless quantities of different kind, respectively plane angle and solid angle.
Logarithmic quantities.
Level.
The "level" of a quantity is defined as the logarithm of the ratio of the quantity with a stated reference value of that quantity. Within the ISQ it is differently defined for a root-power quantity (also known by the deprecated term "field quantity") and for a power quantity. It is not defined for ratios of quantities of other kinds. Within the ISQ, all levels are treated as derived quantities of dimension 1. Several units for levels are defined by the SI and classified as "non-SI units accepted for use with the SI units".
An example of level is sound pressure level, with the unit of decibel.
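As a numerical illustration (assuming the conventional reference sound pressure of 20 μPa and the standard decibel definitions for root-power and power quantities), a sound pressure of 1 Pa corresponds to a sound pressure level of about 94 dB:

```python
import math

P_REF = 20e-6   # reference sound pressure in pascals (assumed conventional value)

def level_root_power_db(x, x_ref):
    """Level of a root-power quantity (e.g. sound pressure) in decibels."""
    return 20 * math.log10(x / x_ref)

def level_power_db(p, p_ref):
    """Level of a power quantity (e.g. sound power) in decibels."""
    return 10 * math.log10(p / p_ref)

print(round(level_root_power_db(1.0, P_REF), 1))   # 94.0 (dB)
```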
Other logarithmic quantities.
Units of logarithmic frequency ratio include the octave, corresponding to a factor of 2 in frequency (precisely) and the decade, corresponding to a factor 10.
The ISQ recognizes another logarithmic quantity, information entropy, for which the coherent unit is the natural unit of information (symbol nat).
Documentation.
The system is formally described in a multi-part ISO standard ISO/IEC 80000, first completed in 2009 but subsequently revised and expanded, which replaced standards published in 1992, ISO 31 and ISO 1000. Working jointly, ISO and IEC have formalized parts of the ISQ by giving information and definitions concerning quantities, systems of quantities, units, quantity and unit symbols, and coherent unit systems, with particular reference to the ISQ. ISO/IEC 80000 defines physical quantities that are measured with the SI units and also includes many other quantities in modern science and technology. The name "International System of Quantities" is used by the General Conference on Weights and Measures (CGPM) to describe the system of quantities that underlie the International System of Units.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathsf L^a \\mathsf M^b \\mathsf T^c \\mathsf I^d \\mathsf \\Theta^e \\mathsf N^f \\mathsf J^g"
},
{
"math_id": 1,
"text": "\\mathsf{LT}^{-1}"
},
{
"math_id": 2,
"text": "1"
}
] | https://en.wikipedia.org/wiki?curid=11559418 |
1156215 | Helmholtz equation | Eigenvalue problem for the Laplace operator
In mathematics, the Helmholtz equation is the eigenvalue problem for the Laplace operator. It corresponds to the elliptic partial differential equation:
formula_0
where ∇2 is the Laplace operator, "k"2 is the eigenvalue, and f is the (eigen)function. When the equation is applied to waves, k is known as the wave number. The Helmholtz equation has a variety of applications in physics and other sciences, including the wave equation, the diffusion equation, and the Schrödinger equation for a free particle.
In optics, the Helmholtz equation is the wave equation for the electric field.
The equation is named after Hermann von Helmholtz, who studied it in 1860.
Motivation and uses.
The Helmholtz equation often arises in the study of physical problems involving partial differential equations (PDEs) in both space and time. The Helmholtz equation, which represents a time-independent form of the wave equation, results from applying the technique of separation of variables to reduce the complexity of the analysis.
For example, consider the wave equation
formula_1
Separation of variables begins by assuming that the wave function "u"(r, "t") is in fact separable:
formula_2
Substituting this form into the wave equation and then simplifying, we obtain the following equation:
formula_3
Notice that the expression on the left side depends only on r, whereas the right expression depends only on t. As a result, this equation is valid in the general case if and only if both sides of the equation are equal to the same constant value. This argument is key in the technique of solving linear partial differential equations by separation of variables. From this observation, we obtain two equations, one for "A"(r), the other for "T"("t"):
formula_4
formula_5
where we have chosen, without loss of generality, the expression −"k"2 for the value of the constant. (It is equally valid to use any constant k as the separation constant; −"k"2 is chosen only for convenience in the resulting solutions.)
Rearranging the first equation, we obtain the (homogeneous) Helmholtz equation:
formula_6
Likewise, after making the substitution "ω" = "kc", where k is the wave number, and ω is the angular frequency (assuming a monochromatic field), the second equation becomes
formula_7
We now have Helmholtz's equation for the spatial variable r and a second-order ordinary differential equation in time. The solution in time will be a linear combination of sine and cosine functions, whose exact form is determined by initial conditions, while the form of the solution in space will depend on the boundary conditions. Alternatively, integral transforms, such as the Laplace or Fourier transform, are often used to transform a hyperbolic PDE into a form of the Helmholtz equation.
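As an illustrative check (using SymPy), the separated product solution sin("kx") cos("kct") can be verified to satisfy both the Helmholtz equation in its spatial factor and the one-dimensional wave equation:

```python
import sympy as sp

x, t, k, c = sp.symbols('x t k c', positive=True)

A = sp.sin(k * x)          # spatial factor
T = sp.cos(k * c * t)      # temporal factor, with omega = k*c
u = A * T                  # separated trial solution u(x, t) = A(x) T(t)

print(sp.simplify(sp.diff(A, x, 2) + k**2 * A))                  # 0: Helmholtz equation
print(sp.simplify(sp.diff(u, x, 2) - sp.diff(u, t, 2) / c**2))   # 0: wave equation
```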
Because of its relationship to the wave equation, the Helmholtz equation arises in problems in such areas of physics as the study of electromagnetic radiation, seismology, and acoustics.
Solving the Helmholtz equation using separation of variables.
The solution to the spatial Helmholtz equation:
formula_8
can be obtained for simple geometries using separation of variables.
Vibrating membrane.
The two-dimensional analogue of the vibrating string is the vibrating membrane, with the edges clamped to be motionless. The Helmholtz equation was solved for many basic shapes in the 19th century: the rectangular membrane by Siméon Denis Poisson in 1829, the equilateral triangle by Gabriel Lamé in 1852, and the circular membrane by Alfred Clebsch in 1862. The elliptical drumhead was studied by Émile Mathieu, leading to Mathieu's differential equation.
If the edges of a shape are straight line segments, then a solution is integrable or knowable in closed-form only if it is expressible as a finite linear combination of plane waves that satisfy the boundary conditions (zero at the boundary, i.e., membrane clamped).
If the domain is a circle of radius a, then it is appropriate to introduce polar coordinates r and θ. The Helmholtz equation takes the form
formula_9
We may impose the boundary condition that A vanishes if "r" = "a"; thus
formula_10
The method of separation of variables leads to trial solutions of the form
formula_11
where Θ must be periodic of period 2"π". This leads to
formula_12
formula_13
It follows from the periodicity condition that
formula_14
and that n must be an integer. The radial component R has the form
formula_15
where the Bessel function "Jn"("ρ") satisfies Bessel's equation
formula_16
and "ρ" = "kr". The radial function "Jn" has infinitely many roots for each value of n, denoted by "ρ""m","n". The boundary condition that A vanishes where "r" = "a" will be satisfied if the corresponding wavenumbers are given by
formula_17
The general solution A then takes the form of a generalized Fourier series of terms involving products of "Jn"("km,nr") and the sine (or cosine) of "nθ". These solutions are the modes of vibration of a circular drumhead.
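As an illustrative computation (assuming SciPy is available; the radius "a" = 1 is arbitrary), the first few wavenumbers "km,n" follow directly from the Bessel-function zeros:

```python
import numpy as np
from scipy.special import jn_zeros

a = 1.0                       # membrane radius (arbitrary choice)
for n in range(3):            # angular index n = 0, 1, 2
    rho = jn_zeros(n, 3)      # first three positive zeros rho_{m,n} of J_n
    print(n, np.round(rho / a, 3))   # wavenumbers k_{m,n} = rho_{m,n} / a
# prints approximately 2.405, 5.520, 8.654 for n = 0, and so on for n = 1, 2
```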
Three-dimensional solutions.
In spherical coordinates, the solution is:
formula_18
This solution arises from the spatial solution of the wave equation and diffusion equation. Here "jℓ"("kr") and "yℓ"("kr") are the spherical Bessel functions, and "Y"("θ", "φ") are the spherical harmonics (Abramowitz and Stegun, 1964). Note that these forms are general solutions, and require boundary conditions to be specified to be used in any specific case. For infinite exterior domains, a radiation condition may also be required (Sommerfeld, 1949).
Writing "r"0 = ("x", "y", "z"), the function "A"("r"0) has the asymptotics
formula_19
where the function f is called the scattering amplitude and "u"0("r"0) is the value of A at each boundary point "r"0.
Three-dimensional solutions given the function on a 2-dimensional plane.
Given a 2-dimensional plane where A is known, the solution to the Helmholtz equation is given by:
formula_20
where formula_21 is the value of A in the plane "z" = 0, and formula_22 is the distance from the point ("x"′, "y"′, 0) in that plane to the point ("x", "y", "z").
As z approaches zero, all contributions from the integral vanish except for r=0. Thus formula_23 up to a numerical factor, which can be verified to be 1 by transforming the integral to polar coordinates formula_24.
This solution is important in diffraction theory, e.g. in deriving Fresnel diffraction.
Paraxial approximation.
In the paraxial approximation of the Helmholtz equation, the complex amplitude A is expressed as
formula_25
where u represents the complex-valued amplitude which modulates the sinusoidal plane wave represented by the exponential factor. Then under a suitable assumption, u approximately solves
formula_26
where formula_27 is the transverse part of the Laplacian.
This equation has important applications in the science of optics, where it provides solutions that describe the propagation of electromagnetic waves (light) in the form of either paraboloidal waves or Gaussian beams. Most lasers emit beams that take this form.
The assumption under which the paraxial approximation is valid is that the z derivative of the amplitude function u is a slowly varying function of z:
formula_28
This condition is equivalent to saying that the angle θ between the wave vector k and the optical axis z is small: "θ" ≪ 1.
The paraxial form of the Helmholtz equation is found by substituting the above-stated expression for the complex amplitude into the general form of the Helmholtz equation as follows:
formula_29
Expansion and cancellation yields the following:
formula_30
Because of the paraxial inequality stated above, the ∂2"u"/∂"z"2 term is neglected in comparison with the "k"·∂"u"/∂"z" term. This yields the paraxial Helmholtz equation. Substituting "u"(r) = "A"(r) "e"−"ikz" then gives the paraxial equation for the original complex amplitude A:
formula_31
The Fresnel diffraction integral is an exact solution to the paraxial Helmholtz equation.
Inhomogeneous Helmholtz equation.
The inhomogeneous Helmholtz equation is the equation
formula_32
where "ƒ" : R"n" → C is a function with compact support, and "n" = 1, 2, 3. This equation is very similar to the screened Poisson equation, and would be identical if the plus sign (in front of the k term) were switched to a minus sign.
In order to solve this equation uniquely, one needs to specify a boundary condition at infinity, which is typically the Sommerfeld radiation condition
formula_33
in formula_34 spatial dimensions, for all angles (i.e. any value of formula_35). Here formula_36 where formula_37 are the coordinates of the vector formula_38.
With this condition, the solution to the inhomogeneous Helmholtz equation is
formula_39
(notice this integral is actually over a finite region, since f has compact support). Here, G is the Green's function of this equation, that is, the solution to the inhomogeneous Helmholtz equation with "f" equaling the Dirac delta function, so G satisfies
formula_40
The expression for the Green's function depends on the dimension n of the space. One has
formula_41
for "n" = 1,
formula_42
for "n" = 2, where "H" is a Hankel function, and
formula_43
for "n" = 3. Note that we have chosen the boundary condition that the Green's function is an outgoing wave for .
Finally, for general n,
formula_44
where formula_45 and formula_46.
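As an illustrative check (using SymPy and the radial form of the Laplacian), the three-dimensional Green's function above satisfies the homogeneous Helmholtz equation away from the source point:

```python
import sympy as sp

r, k = sp.symbols('r k', positive=True)
G = sp.exp(sp.I * k * r) / (4 * sp.pi * r)     # n = 3 Green's function e^{ikr}/(4*pi*r)

# For a radially symmetric function, the Laplacian is (1/r) * d^2(r*G)/dr^2.
laplacian_G = sp.diff(r * G, r, 2) / r
print(sp.simplify(laplacian_G + k**2 * G))     # 0, i.e. (∇² + k²)G = 0 for r ≠ 0
```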
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\nabla^2 f = -k^2 f,"
},
{
"math_id": 1,
"text": "\\left(\\nabla^2-\\frac{1}{c^2}\\frac{\\partial^2}{\\partial t^2}\\right) u(\\mathbf{r},t)=0."
},
{
"math_id": 2,
"text": "u(\\mathbf{r},t) =A (\\mathbf{r}) T(t)."
},
{
"math_id": 3,
"text": "\\frac{\\nabla^2 A}{A} = \\frac{1}{c^2 T} \\frac{\\mathrm{d}^2 T}{\\mathrm{d} t^2}."
},
{
"math_id": 4,
"text": "\\frac{\\nabla^2 A}{A} = -k^2"
},
{
"math_id": 5,
"text": "\\frac{1}{c^2 T} \\frac{\\mathrm{d}^2 T}{\\mathrm{d}t^2} = -k^2,"
},
{
"math_id": 6,
"text": "\\nabla^2 A + k^2 A = (\\nabla^2 + k^2) A = 0."
},
{
"math_id": 7,
"text": "\\frac{\\mathrm{d}^2 T}{\\mathrm{d}t^2} + \\omega^2T = \\left( \\frac{\\mathrm{d}^2}{\\mathrm{d}t^2} + \\omega^2 \\right) T = 0."
},
{
"math_id": 8,
"text": " \\nabla^2 A = -k^2 A "
},
{
"math_id": 9,
"text": "A_{rr} + \\frac{1}{r} A_r + \\frac{1}{r^2}A_{\\theta\\theta} + k^2 A = 0."
},
{
"math_id": 10,
"text": "A(a,\\theta) = 0."
},
{
"math_id": 11,
"text": "A(r,\\theta) = R(r)\\Theta(\\theta),"
},
{
"math_id": 12,
"text": "\\Theta'' +n^2 \\Theta =0,"
},
{
"math_id": 13,
"text": " r^2 R'' + r R' + r^2 k^2 R - n^2 R=0."
},
{
"math_id": 14,
"text": " \\Theta = \\alpha \\cos n\\theta + \\beta \\sin n\\theta,"
},
{
"math_id": 15,
"text": " R(r) = \\gamma J_n(\\rho), "
},
{
"math_id": 16,
"text": " \\rho^2 J_n'' + \\rho J_n' +(\\rho^2 - n^2)J_n =0, "
},
{
"math_id": 17,
"text": "k_{m,n} = \\frac{1}{a} \\rho_{m,n}."
},
{
"math_id": 18,
"text": " A (r, \\theta, \\varphi)= \\sum_{\\ell=0}^\\infty \\sum_{m=-\\ell}^\\ell \\left( a_{\\ell m} j_\\ell ( k r ) + b_{\\ell m} y_\\ell(kr) \\right) Y^m_\\ell (\\theta,\\varphi) ."
},
{
"math_id": 19,
"text": "A(r_0)=\\frac{e^{i k r_0}}{r_0} f\\left(\\frac{\\mathbf{r}_0}{r_0},k,u_0\\right) + o\\left(\\frac 1 {r_0}\\right)\\text{ as } r_0\\to\\infty"
},
{
"math_id": 20,
"text": " A(x, y, z) = -\\frac{1}{2 \\pi} \\iint_{-\\infty}^{+\\infty} A'(x', y') \\frac{e^{ikr}}{r} \\frac{z}{r} \\left(ik-\\frac{1}{r}\\right) \\,dx'dy', "
},
{
"math_id": 21,
"text": "A'(x', y')"
},
{
"math_id": 22,
"text": "r = \\sqrt{(x - x')^2 + (y - y')^2 + z^2},"
},
{
"math_id": 23,
"text": "A(x, y, 0)=A'(x,y)"
},
{
"math_id": 24,
"text": "(\\rho, \\theta)"
},
{
"math_id": 25,
"text": "A(\\mathbf{r}) = u(\\mathbf{r}) e^{ikz} "
},
{
"math_id": 26,
"text": "\\nabla_{\\perp}^2 u + 2ik\\frac{\\partial u}{\\partial z} = 0,"
},
{
"math_id": 27,
"text": "\\nabla_\\perp^2 \\overset{\\text{ def }}{=} \\frac{\\partial^2}{\\partial x^2} + \\frac{\\partial^2}{\\partial y^2}"
},
{
"math_id": 28,
"text": " \\left| \\frac{ \\partial^2 u }{ \\partial z^2 } \\right| \\ll \\left| k \\frac{\\partial u}{\\partial z} \\right| ."
},
{
"math_id": 29,
"text": "\\nabla^{2}(u\\left( x,y,z \\right) e^{ikz}) + k^2 u\\left( x,y,z \\right) e^{ikz} = 0."
},
{
"math_id": 30,
"text": "\\left( \\frac {\\partial^2}{\\partial x^2} + \\frac {\\partial^2}{\\partial y^2} \\right) u(x,y,z) e^{ikz} + \\left( \\frac {\\partial^2}{\\partial z^2} u (x,y,z) \\right) e^{ikz} + 2 \\left( \\frac \\partial {\\partial z} u(x,y,z) \\right) ik{e^{ikz}}=0."
},
{
"math_id": 31,
"text": "\\nabla_{\\perp}^2 A + 2ik\\frac{\\partial A}{\\partial z} + 2k^2A = 0."
},
{
"math_id": 32,
"text": "\\nabla^2 A(\\mathbf{x}) + k^2 A(\\mathbf{x}) = -f(\\mathbf{x}) \\ \\text { in } \\R^n,"
},
{
"math_id": 33,
"text": "\\lim_{r \\to \\infty} r^{\\frac{n-1}{2}} \\left( \\frac{\\partial}{\\partial r} - ik \\right) A(\\mathbf{x}) = 0"
},
{
"math_id": 34,
"text": "n"
},
{
"math_id": 35,
"text": "\\theta, \\phi"
},
{
"math_id": 36,
"text": "r = \\sqrt{\\sum_{i=1}^n x_i^2} "
},
{
"math_id": 37,
"text": "x_i"
},
{
"math_id": 38,
"text": "\\mathbf{x}"
},
{
"math_id": 39,
"text": "A(\\mathbf{x})=\\int_{\\R^n}\\! G(\\mathbf{x},\\mathbf{x'})f(\\mathbf{x'})\\,\\mathrm{d}\\mathbf{x'}"
},
{
"math_id": 40,
"text": "\\nabla^2 G(\\mathbf{x},\\mathbf{x'}) + k^2 G(\\mathbf{x},\\mathbf{x'}) = -\\delta(\\mathbf{x},\\mathbf{x'}) \\in \\R^n. "
},
{
"math_id": 41,
"text": "G(x,x') = \\frac{ie^{ik|x - x'|}}{2k}"
},
{
"math_id": 42,
"text": "G(\\mathbf{x},\\mathbf{x'}) = \\frac{i}{4}H^{(1)}_0(k|\\mathbf{x}-\\mathbf{x'}|)"
},
{
"math_id": 43,
"text": "G(\\mathbf{x},\\mathbf{x'}) = \\frac{e^{ik|\\mathbf{x}-\\mathbf{x'}|}}{4\\pi |\\mathbf{x}-\\mathbf{x'}|}"
},
{
"math_id": 44,
"text": "G(\\mathbf{x},\\mathbf{x'}) = c_d k^p \\frac{H_p^{(1)}(k|\\mathbf{x}-\\mathbf{x'}|)}{|\\mathbf{x}-\\mathbf{x'}|^p}"
},
{
"math_id": 45,
"text": " p = \\frac{n - 2}{2} "
},
{
"math_id": 46,
"text": "c_d = \\frac{1}{2i(2\\pi)^p} "
}
] | https://en.wikipedia.org/wiki?curid=1156215 |
1156527 | Detection theory | Means to measure signal processing ability
Detection theory or signal detection theory is a means to measure the ability to differentiate between information-bearing patterns (called stimulus in living organisms, signal in machines) and random patterns that distract from the information (called noise, consisting of background stimuli and random activity of the detection machine and of the nervous system of the operator).
In the field of electronics, signal recovery is the separation of such patterns from a disguising background.
According to the theory, there are a number of determiners of how a detecting system will detect a signal, and where its threshold levels will be. The theory can explain how changing the threshold will affect the ability to discern, often exposing how adapted the system is to the task, purpose or goal at which it is aimed. When the detecting system is a human being, characteristics such as experience, expectations, physiological state (e.g.
fatigue) and other factors can affect the threshold applied. For instance, a sentry in wartime might be likely to detect fainter stimuli than the same sentry in peacetime due to a lower criterion, however they might also be more likely to treat innocuous stimuli as a threat.
Much of the early work in detection theory was done by radar researchers. By 1954, the theory was fully developed on the theoretical side as described by Peterson, Birdsall and Fox and the foundation for the psychological theory was made by Wilson P. Tanner, David M. Green, and John A. Swets, also in 1954.
Detection theory was used in 1966 by John A. Swets and David M. Green for psychophysics. Green and Swets criticized the traditional methods of psychophysics for their inability to discriminate between the real sensitivity of subjects and their (potential) response biases.
Detection theory has applications in many fields such as diagnostics of any kind, quality control, telecommunications, and psychology. The concept is similar to the signal-to-noise ratio used in the sciences and confusion matrices used in artificial intelligence. It is also usable in alarm management, where it is important to separate important events from background noise.
Psychology.
Signal detection theory (SDT) is used when psychologists want to measure the way we make decisions under conditions of uncertainty, such as how we would perceive distances in foggy conditions or during eyewitness identification. SDT assumes that the decision maker is not a passive receiver of information, but an active decision-maker who makes difficult perceptual judgments under conditions of uncertainty. In foggy circumstances, we are forced to decide how far away from us an object is, based solely upon visual stimulus which is impaired by the fog. Since the brightness of the object, such as a traffic light, is used by the brain to discriminate the distance of an object, and the fog reduces the brightness of objects, we perceive the object to be much farther away than it actually is (see also decision theory). According to SDT, during eyewitness identifications, witnesses base their decision as to whether a suspect is the culprit or not based on their perceived level of familiarity with the suspect.
To apply signal detection theory to a data set where stimuli were either present or absent, and the observer categorized each trial as having the stimulus present or absent, the trials are sorted into one of four categories: hits (stimulus present and the observer responds "present"), misses (stimulus present but the observer responds "absent"), false alarms (stimulus absent but the observer responds "present"), and correct rejections (stimulus absent and the observer responds "absent").
Based on the proportions of these types of trials, numerical estimates of sensitivity can be obtained with statistics like the sensitivity index "d"' and A', and response bias can be estimated with statistics like c and β. β is the measure of response bias.
Signal detection theory can also be applied to memory experiments, where items are presented on a study list for later testing. A test list is created by combining these 'old' items with novel, 'new' items that did not appear on the study list. On each test trial the subject will respond 'yes, this was on the study list' or 'no, this was not on the study list'. Items presented on the study list are called Targets, and new items are called Distractors. Saying 'Yes' to a target constitutes a Hit, while saying 'Yes' to a distractor constitutes a False Alarm.
Applications.
Signal Detection Theory has wide application, both in humans and animals. Topics include memory, stimulus characteristics of schedules of reinforcement, etc.
Sensitivity or discriminability.
Conceptually, sensitivity refers to how hard or easy it is to detect that a target stimulus is present from background events. For example, in a recognition memory paradigm, having longer to study to-be-remembered words makes it easier to recognize previously seen or heard words. In contrast, having to remember 30 words rather than 5 makes the discrimination harder. One of the most commonly used statistics for computing sensitivity is the so-called sensitivity index or "d"'. There are also non-parametric measures, such as the area under the ROC-curve.
Bias.
Bias is the extent to which one response is more probable than another, averaging across stimulus-present and stimulus-absent cases. That is, a receiver may be more likely overall to respond that a stimulus is present or more likely overall to respond that a stimulus is not present. Bias is independent of sensitivity. Bias can be desirable if false alarms and misses lead to different costs. For example, if the stimulus is a bomber, then a miss (failing to detect the bomber) may be more costly than a false alarm (reporting a bomber when there is not one), making a liberal response bias desirable. In contrast, giving false alarms too often (crying wolf) may make people less likely to respond, a problem that can be reduced by a conservative response bias.
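As an illustrative sketch (assuming the common equal-variance Gaussian model; the hit and false-alarm rates are made-up values), sensitivity "d"' and the criterion "c" can be computed from the inverse of the standard normal cumulative distribution:

```python
from scipy.stats import norm

def dprime_and_criterion(hit_rate, fa_rate):
    """Equal-variance Gaussian SDT: d' = z(H) - z(F), criterion c = -(z(H) + z(F))/2."""
    zH, zF = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return zH - zF, -(zH + zF) / 2

d, c = dprime_and_criterion(hit_rate=0.90, fa_rate=0.20)
print(round(d, 2), round(c, 2))   # approximately 2.12 -0.22 (a slightly liberal observer)
```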
Compressed sensing.
Another field which is closely related to signal detection theory is called compressed sensing (or compressive sensing). The objective of compressed sensing is to recover high-dimensional but low-complexity entities from only a few measurements. Thus, one of the most important applications of compressed sensing is in the recovery of high-dimensional signals which are known to be sparse (or nearly sparse) with only a few linear measurements. The number of measurements needed in the recovery of signals is far smaller than what the Nyquist sampling theorem requires, provided that the signal is sparse, meaning that it only contains a few non-zero elements. There are different methods of signal recovery in compressed sensing including basis pursuit, the expander recovery algorithm, CoSaMP, and fast non-iterative algorithms. In all of the recovery methods mentioned above, choosing an appropriate measurement matrix, using probabilistic or deterministic constructions, is of great importance. In other words, measurement matrices must satisfy certain specific conditions such as the RIP (restricted isometry property) or the null-space property in order to achieve robust sparse recovery.
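As an illustrative sketch (a simple greedy orthogonal matching pursuit is used here for brevity rather than any particular algorithm named above; the matrix sizes and random seed are arbitrary), a sparse vector can be recovered from a few random linear measurements:

```python
import numpy as np

def omp(A, y, sparsity):
    """Greedy orthogonal matching pursuit: a simple illustrative recovery routine."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))   # most correlated column
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coeffs
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 100, 30, 3                            # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)    # random Gaussian measurement matrix
x = np.zeros(n)
x[[7, 42, 91]] = [1.5, -2.0, 0.7]               # a 3-sparse signal
y = A @ x                                       # m << n linear measurements
print(np.allclose(omp(A, y, k), x, atol=1e-8))  # True with overwhelming probability
```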
Mathematics.
P(H1|y) > P(H2|y) / MAP testing.
In the case of making a decision between two hypotheses, "H1", absent, and "H2", present, in the event of a particular observation, "y", a classical approach is to choose "H1" when "p(H1|y) > p(H2|y)" and "H2" in the reverse case. In the event that the two "a posteriori" probabilities are equal, one might choose to default to a single choice (either always choose "H1" or always choose "H2"), or might randomly select either "H1" or "H2". The "a priori" probabilities of "H1" and "H2" can guide this choice, e.g. by always choosing the hypothesis with the higher "a priori" probability.
When taking this approach, usually what one knows are the conditional probabilities, "p(y|H1)" and "p(y|H2)", and the "a priori" probabilities formula_0 and formula_1. In this case,
formula_2,
formula_3
where "p(y)" is the total probability of event "y",
formula_4.
"H2" is chosen in case
formula_5
formula_6
and "H1" otherwise.
Often, the ratio formula_7 is called formula_8 and formula_9 is called formula_10, the "likelihood ratio".
Using this terminology, "H2" is chosen in case formula_11. This is called MAP testing, where MAP stands for "maximum "a posteriori"").
Taking this approach minimizes the expected number of errors one will make.
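As an illustrative sketch (the Gaussian observation model and the prior values are assumptions of this example), the MAP rule compares the likelihood ratio against the ratio of prior probabilities:

```python
from scipy.stats import norm

pi1, pi2 = 0.7, 0.3            # assumed prior probabilities of H1 and H2
tau_map = pi1 / pi2            # MAP threshold on the likelihood ratio

def decide(y):
    # Assumed model: y ~ N(0, 1) under H1 (absent), y ~ N(1, 1) under H2 (present).
    L = norm.pdf(y, loc=1.0) / norm.pdf(y, loc=0.0)   # L(y) = p(y|H2) / p(y|H1)
    return 'H2' if L >= tau_map else 'H1'

for y in (0.2, 0.9, 1.6):
    print(y, decide(y))
# 0.2 H1
# 0.9 H1
# 1.6 H2
```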
Bayes criterion.
In some cases, it is far more important to respond appropriately to "H1" than it is to respond appropriately to "H2". For example, if an alarm goes off, indicating H1 (an incoming bomber is carrying a nuclear weapon), it is much more important to shoot down the bomber if H1 = TRUE, than it is to avoid sending a fighter squadron to inspect a false alarm (i.e., H1 = FALSE, H2 = TRUE) (assuming a large supply of fighter squadrons). The Bayes criterion is an approach suitable for such cases.
Here a utility is associated with each of four situations: formula_12 is the utility of responding as though "H1" is true when "H1" is in fact true, formula_13 the utility of responding as though "H1" is true when "H2" is true, formula_14 the utility of responding as though "H2" is true when "H1" is true, and formula_15 the utility of responding as though "H2" is true when "H2" is true.
As is shown below, what is important are the differences, formula_16 and formula_17.
Similarly, there are four probabilities, formula_18, formula_19, etc., for each of the cases (which are dependent on one's decision strategy).
The Bayes criterion approach is to maximize the expected utility:
formula_20
formula_21
formula_22
Effectively, one may maximize the sum,
formula_23,
and make the following substitutions:
formula_24
formula_25
where formula_26 and formula_27 are the "a priori" probabilities, formula_28 and formula_29, and formula_30 is the region of observation events, "y", that are responded to as though "H1" is true.
formula_31
formula_32 and thus formula_33 are maximized by extending formula_30 over the region where
formula_34
This is accomplished by deciding H2 in case
formula_35
formula_36
and H1 otherwise, where "L(y)" is the so-defined "likelihood ratio".
Normal distribution models.
Das and Geisler extended the results of signal detection theory for normally distributed stimuli, and derived methods of computing the error rate and confusion matrix for ideal observers and non-ideal observers for detecting and categorizing univariate and multivariate normal signals from two or more categories.
See also.
<templatestyles src="Div col/styles.css"/> | [
{
"math_id": 0,
"text": "p(H1) = \\pi_1"
},
{
"math_id": 1,
"text": "p(H2) = \\pi_2"
},
{
"math_id": 2,
"text": "p(H1|y) = \\frac{p(y|H1) \\cdot \\pi_1}{p(y)} "
},
{
"math_id": 3,
"text": "p(H2|y) = \\frac{p(y|H2) \\cdot \\pi_2}{p(y)} "
},
{
"math_id": 4,
"text": " p(y|H1) \\cdot \\pi_1 + p(y|H2) \\cdot \\pi_2 "
},
{
"math_id": 5,
"text": " \\frac{p(y|H2) \\cdot \\pi_2}{p(y|H1) \\cdot \\pi_1 + p(y|H2) \\cdot \\pi_2} \\ge \\frac{p(y|H1) \\cdot \\pi_1}{p(y|H1) \\cdot \\pi_1 + p(y|H2) \\cdot \\pi_2} "
},
{
"math_id": 6,
"text": " \\Rightarrow \\frac{p(y|H2)}{p(y|H1)} \\ge \\frac{\\pi_1}{\\pi_2}"
},
{
"math_id": 7,
"text": "\\frac{\\pi_1}{\\pi_2}"
},
{
"math_id": 8,
"text": "\\tau_{MAP}"
},
{
"math_id": 9,
"text": "\\frac{p(y|H2)}{p(y|H1)}"
},
{
"math_id": 10,
"text": "L(y)"
},
{
"math_id": 11,
"text": "L(y) \\ge \\tau_{MAP}"
},
{
"math_id": 12,
"text": "U_{11}"
},
{
"math_id": 13,
"text": "U_{12}"
},
{
"math_id": 14,
"text": "U_{21}"
},
{
"math_id": 15,
"text": "U_{22}"
},
{
"math_id": 16,
"text": "U_{11} - U_{21}"
},
{
"math_id": 17,
"text": "U_{22} - U_{12}"
},
{
"math_id": 18,
"text": "P_{11}"
},
{
"math_id": 19,
"text": "P_{12}"
},
{
"math_id": 20,
"text": " E\\{U\\} = P_{11} \\cdot U_{11} + P_{21} \\cdot U_{21} + P_{12} \\cdot U_{12} + P_{22} \\cdot U_{22} "
},
{
"math_id": 21,
"text": " E\\{U\\} = P_{11} \\cdot U_{11} + (1-P_{11}) \\cdot U_{21} + P_{12} \\cdot U_{12} + (1-P_{12}) \\cdot U_{22} "
},
{
"math_id": 22,
"text": " E\\{U\\} = U_{21} + U_{22} + P_{11} \\cdot (U_{11} - U_{21}) - P_{12} \\cdot (U_{22} - U_{12}) "
},
{
"math_id": 23,
"text": "U' = P_{11} \\cdot (U_{11} - U_{21}) - P_{12} \\cdot (U_{22} - U_{12}) "
},
{
"math_id": 24,
"text": "P_{11} = \\pi_1 \\cdot \\int_{R_1}p(y|H1)\\, dy "
},
{
"math_id": 25,
"text": "P_{12} = \\pi_2 \\cdot \\int_{R_1}p(y|H2)\\, dy "
},
{
"math_id": 26,
"text": "\\pi_1"
},
{
"math_id": 27,
"text": "\\pi_2"
},
{
"math_id": 28,
"text": "P(H1)"
},
{
"math_id": 29,
"text": "P(H2)"
},
{
"math_id": 30,
"text": "R_1"
},
{
"math_id": 31,
"text": " \\Rightarrow U' = \\int_{R_1} \\left \\{ \\pi_1 \\cdot (U_{11} - U_{21}) \\cdot p(y|H1) - \\pi_2 \\cdot (U_{22} - U_{12}) \\cdot p(y|H2) \\right \\} \\, dy "
},
{
"math_id": 32,
"text": "U'"
},
{
"math_id": 33,
"text": "U"
},
{
"math_id": 34,
"text": "\\pi_1 \\cdot (U_{11} - U_{21}) \\cdot p(y|H1) - \\pi_2 \\cdot (U_{22} - U_{12}) \\cdot p(y|H2) > 0 "
},
{
"math_id": 35,
"text": "\\pi_2 \\cdot (U_{22} - U_{12}) \\cdot p(y|H2) \\ge \\pi_1 \\cdot (U_{11} - U_{21}) \\cdot p(y|H1) "
},
{
"math_id": 36,
"text": " \\Rightarrow L(y) \\equiv \\frac{p(y|H2)}{p(y|H1)} \\ge \\frac{\\pi_1 \\cdot (U_{11} - U_{21})}{\\pi_2 \\cdot (U_{22} - U_{12})} \\equiv \\tau_B "
}
] | https://en.wikipedia.org/wiki?curid=1156527 |
1156624 | Commercial paper | Financial product
Commercial paper, in the global financial market, is an unsecured promissory note with a fixed maturity of usually less than 270 days. In layperson terms, it is like an "IOU" but can be bought and sold because its buyers and sellers have some degree of confidence that it can be successfully redeemed later for cash, based on their assessment of the creditworthiness of the issuing company.
Commercial paper is a money-market security issued by large corporations to obtain funds to meet short-term debt obligations (for example, payroll) and is backed only by an issuing bank or company promise to pay the face amount on the maturity date specified on the note. Since it is not backed by collateral, only firms with excellent credit ratings from a recognized credit rating agency will be able to sell their commercial paper at a reasonable price. Commercial paper is usually sold at a discount from face value and generally carries lower interest repayment rates than bonds or corporate bonds due to the shorter maturities of commercial paper. Typically, the longer the maturity on a note, the higher the interest rate the issuing institution pays. Interest rates fluctuate with market conditions but are typically lower than banks' rates.
Commercial paper, though a short-term obligation, is typically issued as part of a continuous rolling program, which is either a number of years long (in Europe) or open-ended (in the United States).
Overview.
As defined in United States law, commercial paper matures before nine months (270 days), and is only used to fund operating expenses or current assets (e.g., inventories and receivables) and not used for financing fixed assets, such as land, buildings, or machinery. By meeting these qualifications it may be issued without U.S. federal government regulation, that is, it need not be registered with the U.S. Securities and Exchange Commission. Commercial paper is a type of negotiable instrument, where the legal rights and obligations of involved parties are governed by Articles Three and Four of the Uniform Commercial Code, a set of laws adopted by 49 of the 50 states, Louisiana being the exception.
At the end of 2009, more than 1,700 companies in the United States issued commercial paper. As of October 31, 2008, the U.S. Federal Reserve reported seasonally adjusted figures for the end of 2007: there was $1.7807 trillion in total outstanding commercial paper; $801.3 billion was "asset backed" and $979.4 billion was not; $162.7 billion of the latter was issued by non-financial corporations, and $816.7 billion was issued by financial corporations.
Outside of the United States, the international Euro-Commercial Paper Market has over $500 billion in outstandings, made up of instruments denominated predominantly in euros, dollars and sterling.
History.
Commercial credit (trade credit), in the form of promissory notes issued by corporations, has existed since at least the 19th century. For instance, Marcus Goldman, founder of Goldman Sachs, got his start trading commercial paper in New York in 1869.
Issuance.
Commercial paper – though a short-term obligation – is issued as part of a continuous significantly longer rolling program, which is either a number of years long (as in Europe), or open-ended (as in the U.S.). Because the continuous commercial paper program is much longer than the individual commercial paper in the program (which cannot be longer than 270 days), as commercial paper matures it is replaced with newly issued commercial paper for the remaining amount of the obligation. If the maturity is less than 270 days, the issuer does not have to file a registration statement with the SEC, which would mean delay and increased cost.
There are two methods of issuing credit. The issuer can market the securities directly to a buy and hold investor such as most money market funds. Alternatively, it can sell the paper to a dealer, who then sells the paper in the market. The dealer market for commercial paper involves large securities firms and subsidiaries of bank holding companies. Most of these firms also are dealers in US Treasury securities. Direct issuers of commercial paper usually are financial companies that have frequent and sizable borrowing needs and find it more economical to sell paper without the use of an intermediary. In the United States, direct issuers save a dealer fee of approximately 5 basis points, or 0.05% annualized, which translates to $50,000 on every $100 million outstanding. This saving compensates for the cost of maintaining a permanent sales staff to market the paper. Dealer fees tend to be lower outside the United States.
Line of credit.
Commercial paper is a lower-cost alternative to a line of credit with a bank. Once a business becomes established and builds a high credit rating, it is often cheaper to draw on commercial paper than on a bank line of credit. Nevertheless, many companies still maintain bank lines of credit as a "backup". Banks often charge fees for the amount of the line of credit that "does not" have a balance, because under the capital regulatory regimes set out by the Basel Accords, banks must anticipate that such unused lines of credit will be drawn upon if a company gets into financial distress. They must therefore put aside equity capital to account for potential loan losses also on the currently unused part of lines of credit, and will usually charge a fee for the cost of this equity capital.
Advantages of commercial paper:
Disadvantages of commercial paper:
Commercial paper yields.
Like treasury bills, yields on commercial paper are quoted on a discount basis—the discount return to commercial paper holders is the annualized percentage difference between the price paid for the paper and the face value using a 360-day year. Specifically, where formula_0 is the discount yield, formula_1 is the face value, formula_2 is the price paid, and formula_3 is the term length of the paper in days:
formula_4
and when converted to a bond equivalent yield (formula_5):
formula_6
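As a worked illustration (the face value, price and term are assumed figures), a 90-day note with a face value of $1,000,000 bought for $990,000 has a discount yield of 4.00% and a bond equivalent yield of about 4.10%:

```python
def discount_yield(face, price, days):
    """Commercial paper discount yield, quoted against face value on a 360-day year."""
    return (face - price) / face * 360 / days

def bond_equivalent_yield(face, price, days):
    """Bond equivalent yield, quoted against the purchase price on a 365-day year."""
    return (face - price) / price * 365 / days

face, price, days = 1_000_000, 990_000, 90   # assumed example figures
print(f"{discount_yield(face, price, days):.4%}")          # 4.0000%
print(f"{bond_equivalent_yield(face, price, days):.4%}")   # 4.0965%
```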
Defaults.
In case of default, the issuer of commercial paper (a large corporation) would be debarred from the market for 6 months and its credit rating would be dropped from its existing grade to "Default".
Defaults on high quality commercial paper are rare, and cause concern when they occur. Notable examples include:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " dy_{CP} "
},
{
"math_id": 1,
"text": " P_f "
},
{
"math_id": 2,
"text": " P_0 "
},
{
"math_id": 3,
"text": " t "
},
{
"math_id": 4,
"text": " dy_{CP} = \\frac{P_{f} - P_{0}}{P_{f}} \\, \\frac{360}{t} "
},
{
"math_id": 5,
"text": " bey_{CP} "
},
{
"math_id": 6,
"text": " bey_{CP} = \\frac{P_{f} - P_{0}}{P_{0}} \\, \\frac{365}{t} "
}
] | https://en.wikipedia.org/wiki?curid=1156624 |
1156776 | Implicant | In Boolean logic, the term implicant has either a generic or a particular meaning. In the generic use, it refers to the hypothesis of an implication (implicant). In the particular use, a product term (i.e., a conjunction of literals) "P" is an implicant of a Boolean function "F", denoted formula_0, if "P" implies "F" (i.e., whenever "P" takes the value 1 so does "F").
For instance, implicants of the function
formula_1
include the terms formula_2, formula_3, formula_4, formula_5,
as well as some others.
Prime implicant.
A prime implicant of a function is an implicant (in the above particular sense) that cannot be covered by a more general, (more reduced, meaning with fewer literals) implicant. W. V. Quine defined a "prime implicant" to be an implicant that is minimal—that is, the removal of any literal from "P" results in a non-implicant for "F". Essential prime implicants (also known as core prime implicants) are prime implicants that cover an output of the function that no combination of other prime implicants is able to cover.
Using the example above, one can easily see that while formula_2 (and others) is a prime implicant, formula_3 and formula_4 are not. From the latter, multiple literals can be removed to make it prime: formula_6, formula_7 and formula_8 can be removed, yielding formula_5; alternatively, formula_8 and formula_5 can be removed, yielding formula_2; or formula_6 and formula_5 can be removed, yielding formula_9.
The process of removing literals from a Boolean term is called expanding the term. Expanding by one literal doubles the number of input combinations for which the term is true (in binary Boolean algebra). Using the example function above, we may expand formula_3 to formula_2 or to formula_9 without changing the cover of formula_10.
The sum of all prime implicants of a Boolean function is called its complete sum, minimal covering sum, or Blake canonical form.
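As an illustrative sketch, the definitions can be checked by brute force over all input assignments for the example function above:

```python
from itertools import product

VARS = ('x', 'y', 'z', 'w')

def f(a):
    """f(x, y, z, w) = xy + yz + w, evaluated on an assignment dict a."""
    return (a['x'] and a['y']) or (a['y'] and a['z']) or a['w']

def implies(term, func):
    """True if the product term (a set of positive literals) implies func."""
    for bits in product((0, 1), repeat=len(VARS)):
        a = dict(zip(VARS, bits))
        if all(a[v] for v in term) and not func(a):
            return False
    return True

def is_prime_implicant(term, func):
    """An implicant is prime if removing any one literal breaks the implication."""
    return implies(term, func) and all(not implies(term - {v}, func) for v in term)

print(implies({'x', 'y', 'z'}, f), is_prime_implicant({'x', 'y', 'z'}, f))  # True False
print(implies({'x', 'y'}, f), is_prime_implicant({'x', 'y'}, f))            # True True
print(is_prime_implicant({'w'}, f))                                         # True
```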
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " P \\le F"
},
{
"math_id": 1,
"text": "f(x,y,z,w)=xy+yz+w"
},
{
"math_id": 2,
"text": "xy"
},
{
"math_id": 3,
"text": "xyz"
},
{
"math_id": 4,
"text": "xyzw"
},
{
"math_id": 5,
"text": "w"
},
{
"math_id": 6,
"text": "x"
},
{
"math_id": 7,
"text": "y"
},
{
"math_id": 8,
"text": "z"
},
{
"math_id": 9,
"text": "yz"
},
{
"math_id": 10,
"text": "f"
}
] | https://en.wikipedia.org/wiki?curid=1156776 |
1156819 | Population ecology | Sub-field of ecology
Population ecology is a sub-field of ecology that deals with the dynamics of species populations and how these populations interact with the environment; population sizes change through birth and death rates and through immigration and emigration.
The discipline is important in conservation biology, especially in the development of population viability analysis which makes it possible to predict the long-term probability of a species persisting in a given patch of habitat. Although population ecology is a subfield of biology, it provides interesting problems for mathematicians and statisticians who work in population dynamics.
History.
In the 1940s, ecology was divided into autecology—the study of individual species in relation to the environment—and synecology—the study of groups of species in relation to the environment. The term autecology (from Ancient Greek: αὐτο, "aúto", "self"; οίκος, "oíkos", "household"; and λόγος, "lógos", "knowledge"), refers to roughly the same field of study as concepts such as life cycles and behaviour as adaptations to the environment by individual organisms. Eugene Odum, writing in 1953, considered that synecology should be divided into population ecology, community ecology and ecosystem ecology, renaming autecology as 'species ecology' (Odum regarded "autecology" as an archaic term), thus that there were four subdivisions of ecology.
Terminology.
A population is defined as a group of interacting organisms of the same species. A demographic structure of a population is how populations are often quantified. The total number of individuals in a population is defined as a population size, and how dense these individuals are is defined as population density. There is also a population's geographic range, which has limits that a species can tolerate (such as temperature).
Population size can be influenced by the per capita population growth rate (rate at which the population size changes per individual in the population.) Births, deaths, emigration, and immigration rates all play a significant role in growth rate. The maximum per capita growth rate for a population is known as the intrinsic rate of increase.
In a population, carrying capacity is known as the maximum population size of the species that the environment can sustain, which is determined by resources available. In many classic population models, r is represented as the intrinsic growth rate, where K is the carrying capacity, and N0 is the initial population size.
Population dynamics.
The development of population ecology owes much to the mathematical models known as population dynamics, which were originally formulae derived from demography at the end of the 18th and beginning of 19th century.
The beginning of population dynamics is widely regarded as the work of Malthus, formulated as the Malthusian growth model. According to Malthus, assuming that the conditions (the environment) remain constant ("ceteris paribus"), a population will grow (or decline) exponentially. This principle provided the basis for the subsequent predictive theories, such as the demographic studies such as the work of Benjamin Gompertz and Pierre François Verhulst in the early 19th century, who refined and adjusted the Malthusian demographic model.
A more general model formulation was proposed by F. J. Richards in 1959, further expanded by Simon Hopkins, in which the models of Gompertz, Verhulst and also Ludwig von Bertalanffy are covered as special cases of the general formulation. The Lotka–Volterra predator-prey equations are another famous example, as well as the alternative Arditi–Ginzburg equations.
Exponential vs. logistic growth.
When describing growth models, there are two main types of models that are most commonly used: exponential and logistic growth.
When the per capita rate of increase takes the same positive value regardless of population size, the graph shows exponential growth. Exponential growth takes on the assumption that there is unlimited resources and no predation. An example of exponential population growth is that of the Monk Parakeets in the United States. Originally from South America, Monk Parakeets were either released or escaped from people who owned them. These birds experienced exponential growth from the years 1975-1994 and grew about 55 times their population size from 1975. This growth is likely due to reproduction within their population, as opposed to the addition of more birds from South America (Van Bael & Prudet-Jones 1996).
When the per capita rate of increase decreases as the population increases towards the maximum limit, or carrying capacity, the graph shows logistic growth. Environmental and social variables, along with many others, impact the carrying capacity of a population, meaning that it has the ability to change (Schacht 1980).
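As an illustrative comparison (the parameter values are arbitrary), the two growth curves can be simulated from their closed-form solutions:

```python
import numpy as np

r, K, N0 = 0.5, 1000.0, 10.0      # assumed intrinsic rate, carrying capacity, initial size
t = np.linspace(0, 20, 5)

exponential = N0 * np.exp(r * t)                        # unlimited resources, no predation
logistic = K / (1 + (K - N0) / N0 * np.exp(-r * t))     # growth slows as N approaches K

for ti, ne, nl in zip(t, exponential, logistic):
    print(f"t={ti:4.0f}  exponential={ne:12.1f}  logistic={nl:8.1f}")
```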
Fisheries and wildlife management.
In fisheries and wildlife management, population is affected by three dynamic rate functions.
If "N"1 is the number of individuals at time 1 then
formula_0
where "N"0 is the number of individuals at time 0, "B" is the number of individuals born, "D" the number that died, "I" the number that immigrated, and "E" the number that emigrated between time 0 and time 1.
If we measure these rates over many time intervals, we can determine how a population's density changes over time. Immigration and emigration are present, but are usually not measured.
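This bookkeeping is illustrated by a minimal sketch (the counts are made-up values):

```python
def next_population(n0, births, deaths, immigrants, emigrants):
    """N1 = N0 + B - D + I - E"""
    return n0 + births - deaths + immigrants - emigrants

print(next_population(n0=500, births=60, deaths=40, immigrants=5, emigrants=15))  # 510
```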
All of these are measured to determine the harvestable surplus, which is the number of individuals that can be harvested from a population without affecting long-term population stability or average population size. The harvest within the harvestable surplus is termed "compensatory" mortality, where the harvest deaths are substituted for the deaths that would have occurred naturally. Harvest above that level is termed "additive" mortality, because it adds to the number of deaths that would have occurred naturally. These terms are not necessarily judged as "good" and "bad," respectively, in population management. For example, a fish & game agency might aim to reduce the size of a deer population through additive mortality. Bucks might be targeted to increase buck competition, or does might be targeted to reduce reproduction and thus overall population size.
For the management of many fish and other wildlife populations, the goal is often to achieve the largest possible long-run sustainable harvest, also known as maximum sustainable yield (or MSY). Given a population dynamic model, such as any of the ones above, it is possible to calculate the population size that produces the largest harvestable surplus at equilibrium. While the use of population dynamic models along with statistics and optimization to set harvest limits for fish and game is controversial among some scientists, it has been shown to be more effective than the use of human judgment in computer experiments where both incorrect models and natural resource management students competed to maximize yield in two hypothetical fisheries. To give an example of a non-intuitive result, fisheries produce more fish when there is a nearby refuge from human predation in the form of a nature reserve, resulting in higher catches than if the whole area was open to fishing.
r/K selection.
<templatestyles src="Template:Quote_box/styles.css" />
At its most elementary level, interspecific competition involves two species utilizing a similar resource. It rapidly gets more complicated, but stripping the phenomenon of all its complications, this is the basic principle: two consumers consuming the same resource.
An important concept in population ecology is the r/K selection theory. For example, if an animal has the choice of producing one or a few offspring, or to put a lot of effort or little effort in offspring—these are all examples of trade-offs. In order for species to thrive, they must choose what is best for them, leading to a clear distinction between r and K selected species.
The first variable is "r" (the intrinsic rate of natural increase in population size, density independent) and the second variable is "K" (the carrying capacity of a population, density dependent).
It is important to distinguish density-independent factors, which are relevant when selecting the intrinsic rate, from density-dependent factors, which are relevant when selecting the carrying capacity. Carrying capacity only applies to a density-dependent population. Density-dependent factors that influence the carrying capacity are predation, harvest, and genetics, so when selecting the carrying capacity it is important to look at the predation or harvest rates that influence the population (Stewart 2004).
An "r"-selected species (e.g., many kinds of insects, such as aphids) is one that has high rates of fecundity, low levels of parental investment in the young, and high rates of mortality before individuals reach maturity. Evolution favors productivity in r-selected species.
In contrast, a "K"-selected species (such as humans) has low rates of fecundity, high levels of parental investment in the young, and low rates of mortality as individuals mature. Evolution in "K"-selected species favors efficiency in the conversion of more resources into fewer offspring. K-selected species generally experience stronger competition, where populations generally live near carrying capacity. These species have heavy investment in offspring, resulting in longer lived organisms, and longer period of maturation. Offspring of K-selected species generally have a higher probability of survival, due to heavy parental care and nurturing.
Offspring Quality.
Offspring fitness is mainly affected by the size and quality of that specific offspring (depending on the species). Factors that contribute to the relative fitness of offspring are either the resources the parents provide to their young or morphological traits that come from the parents. The overall success of the offspring after the initial birth or hatching is measured by the survival of the young, their growth rate, and their own breeding success. Whether the young are raised by their natural parents or by foster parents has been found to have no effect; the offspring simply need the proper resources to survive (Kristi 2010).
A study conducted on egg size and offspring quality in birds found, in summary, that egg size contributes to the overall fitness of the offspring. This relates directly to a Type I survivorship curve, in that if the offspring is cared for by a parent during its early stages of life, mortality is shifted to later in life. However, if the offspring is not cared for by the parents because of an increase in egg quantity, then the survivorship curve will be similar to Type III, in that most offspring die off early and those that survive the early period tend to live on.
Top-down and bottom-up controls.
Top-down controls.
In some populations, organisms in lower trophic levels are controlled by organisms at the top. This is known as top-down control.
For example, the presence of top carnivores keeps herbivore populations in check. If there were no top carnivores in the ecosystem, herbivore populations would rapidly increase, leading to all plants being eaten. This ecosystem would eventually collapse.
Bottom-up controls.
Bottom-up controls, on the other hand, are driven by the producers in the ecosystem. If plant populations change, the populations of all other species are impacted.
For example, if plant populations decreased significantly, the herbivore populations would decrease, which would in turn lead to the carnivore populations decreasing. Therefore, if all of the plants disappeared, the ecosystem would collapse. Another example: if there were too many plants available, two herbivore populations might compete for the same food, and the competition would lead to the eventual removal of one population.
Do all ecosystems have to be either top-down or bottom-up?
An ecosystem does not have to be exclusively top-down or bottom-up. An ecosystem can be controlled from the bottom up at some times, as in many marine ecosystems, and then have periods of top-down control at others, for example due to fishing.
Survivorship curves.
Survivorship curves are graphs that show the distribution of survivors in a population according to age. Survivorship curves play an important role in comparing generations, populations, or even different species.
A Type I survivorship curve is characterized by the fact that death occurs in the later years of an organism's life (mostly mammals). In other words, most organisms reach the maximum expected lifespan and the life expectancy and the age of death go hand-in-hand (Demetrius 1978). Typically, Type I survivorship curves characterize K-selected species.
Type II survivorship shows that death at any age is equally probable. This means that the chances of death are not dependent on or affected by the age of that organism.
Type III curves indicate few surviving the younger years, but after a certain age, individuals are much more likely to survive. Type III survivorship typically characterizes r-selected species.
Metapopulation.
Populations are also studied and conceptualized through the "metapopulation" concept. The metapopulation concept was introduced in 1969: "as a population of populations which go extinct locally and recolonize." Metapopulation ecology simplifies the landscape into patches of varying levels of quality. Patches are either occupied or they are not. Patches, together with the migrants moving among them, are structured into metapopulations of sources and sinks. Source patches are productive sites that generate a seasonal supply of migrants to other patch locations. Sink patches are unproductive sites that only receive migrants. In metapopulation terminology there are emigrants (individuals that leave a patch) and immigrants (individuals that move into a patch). Metapopulation models examine patch dynamics over time to answer questions about spatial and demographic ecology. An important concept in metapopulation ecology is the rescue effect, where small patches of lower quality (i.e., sinks) are maintained by a seasonal influx of new immigrants. Metapopulation structure evolves from year to year, with some patches acting as sinks in dry years and becoming sources when conditions are more favorable. Ecologists utilize a mixture of computer models and field studies to explain metapopulation structure.
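One classical formalization of these patch dynamics is the Levins model, in which the fraction "p" of occupied patches obeys d"p"/d"t" = "cp"(1 − "p") − "ep" for a colonization rate "c" and a local extinction rate "e". The sketch below is only an illustration with arbitrary parameter values, not tied to any particular field system.
```python
# Levins metapopulation model: dp/dt = c*p*(1 - p) - e*p,
# where p is the fraction of patches occupied, c the colonization rate
# and e the local extinction rate. Illustrative parameter values only.
c, e = 0.4, 0.1
p, dt = 0.05, 0.1

for step in range(1, 501):
    p += (c * p * (1.0 - p) - e * p) * dt
print(f"simulated equilibrium occupancy: {p:.3f}")
print(f"analytic equilibrium 1 - e/c:    {1 - e / c:.3f}")
# The metapopulation persists (p* = 1 - e/c > 0) only when colonization
# outpaces local extinction (c > e); otherwise every patch eventually winks out.
```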
Metapopulation ecology allows ecologists to take a wide range of factors into account when examining a metapopulation, such as genetics and the bottleneck effect, among others. Metapopulation data are extremely useful in understanding population dynamics, as most species are not numerous and require specific resources from their habitats. In addition, metapopulation ecology allows for a deeper understanding of the effects of habitat loss, and can help to predict the future of a habitat. To elaborate, metapopulation ecology assumes that, before a habitat becomes uninhabitable, the species in it will emigrate or die off. This information is helpful to ecologists in determining what, if anything, can be done to aid a declining habitat. Overall, the information that metapopulation ecology provides is useful to ecologists in many ways (Hanski 1998).
Journals.
The first journal publication of the Society of Population Ecology, titled "Population Ecology" (originally called "Researches on Population Ecology") was released in 1952.
Scientific articles on population ecology can also be found in the "Journal of Animal Ecology", "Oikos" and other journals.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " N_1 = N_0 + B - D + I - E "
}
] | https://en.wikipedia.org/wiki?curid=1156819 |
11568881 | Complex geodesic | In mathematics, a complex geodesic is a generalization of the notion of geodesic to complex spaces.
Definition.
Let ("X", || ||) be a complex Banach space and let "B" be the open unit ball in "X". Let Δ denote the open unit disc in the complex plane C, thought of as the Poincaré disc model for 2-dimensional real/1-dimensional complex hyperbolic geometry. Let the Poincaré metric "ρ" on Δ be given by
formula_0
and denote the corresponding Carathéodory metric on "B" by "d". Then a holomorphic function "f" : Δ → "B" is said to be a complex geodesic if
formula_1
for all points "w" and "z" in Δ.
If
formula_2
for some "z" ≠ 0, then "f" is a complex geodesic.
If
formula_3
where "α" denotes the Carathéodory length of a tangent vector, then "f" is a complex geodesic. | [
{
"math_id": 0,
"text": "\\rho (a, b) = \\tanh^{-1} \\frac{| a - b |}{|1 - \\bar{a} b |}"
},
{
"math_id": 1,
"text": "d(f(w), f(z)) = \\rho (w, z) \\,"
},
{
"math_id": 2,
"text": "d(f(0), f(z)) = \\rho (0, z)"
},
{
"math_id": 3,
"text": "\\alpha (f(0), f'(0)) = 1,"
}
] | https://en.wikipedia.org/wiki?curid=11568881 |
11569 | Frequency modulation synthesis | Form of sound synthesis
Frequency modulation synthesis (or FM synthesis) is a form of sound synthesis whereby the frequency of a carrier waveform is changed by modulating it with another waveform, the modulator. The (instantaneous) frequency of the carrier oscillator is altered in accordance with the amplitude of the modulating signal.
FM synthesis can create both harmonic and inharmonic sounds. To synthesize harmonic sounds, the modulating signal must have a harmonic relationship to the original carrier signal. As the amount of frequency modulation increases, the sound grows progressively complex. Through the use of modulators with frequencies that are non-integer multiples of the carrier signal (i.e. inharmonic), inharmonic bell-like and percussive spectra can be created.
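A minimal sketch of generating such sounds digitally (in the phase-modulation form commonly used in digital implementations) is shown below; the sample rate, frequencies and modulation index are arbitrary illustrative choices.
```python
import numpy as np

fs = 44100                            # sample rate (Hz)
t = np.arange(fs) / fs                # one second of samples
fc, fm, beta = 440.0, 220.0, 3.0      # carrier, modulator, modulation index (illustrative)

# Harmonic case: fm is an integer ratio of fc, so partials fall at fc + n*fm.
harmonic = np.sin(2 * np.pi * fc * t + beta * np.sin(2 * np.pi * fm * t))

# Inharmonic case: a non-integer frequency ratio gives bell-like, clangorous spectra.
inharmonic = np.sin(2 * np.pi * fc * t + beta * np.sin(2 * np.pi * fc * 1.4142 * t))
# These arrays can be written to a WAV file or inspected with an FFT to see the sideband structure.
```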
Applications.
FM synthesis using analog oscillators may result in pitch instability. However, FM synthesis can also be implemented digitally, which is more stable and became standard practice. Digital FM synthesis (equivalent to the phase modulation using the time integration of instantaneous frequency) was the basis of several musical instruments beginning as early as 1974. Yamaha built the first prototype digital synthesizer in 1974, based on FM synthesis, before commercially releasing the Yamaha GS-1 in 1980. The Synclavier I, manufactured by New England Digital Corporation beginning in 1978, included a digital FM synthesizer, using an FM synthesis algorithm licensed from Yamaha. Yamaha's groundbreaking Yamaha DX7 synthesizer, released in 1983, brought FM to the forefront of synthesis in the mid-1980s.
Amusement use: FM sound chips on PCs, arcades, game consoles, and mobile phones.
FM synthesis also became the usual setting for games and software up until the mid-nineties. For IBM PC compatible systems, sound cards like the AdLib and Sound Blaster popularized Yamaha chips like the OPL2 and OPL3. Other computers such as the Sharp X68000 and MSX (Yamaha CX5M computer unit) use the OPM sound chip (which was also commonly used for arcade machines up to the mid-nineties) with later CX5M units using the OPP sound chip, and the NEC PC-88 and PC-98 computers use the OPN and OPNA. For arcade systems and game consoles, the OPNB was used as the main sound generator in Taito's arcade boards (with a variant of the OPNB being used in the Taito Z System) and was notably used in SNK's Neo Geo arcade (MVS) and home console (AES) machines. The related OPN2 was used in Sega's Mega Drive (Genesis) and Fujitsu's FM Towns Marty as one of its sound generator chips. Throughout the 2000s, FM synthesis was also used on a wide range of phones to play ringtones and other sounds, typically in the Yamaha SMAF format.
History.
Don Buchla (mid-1960s).
Don Buchla implemented FM on his instruments in the mid-1960s, prior to Chowning's patent. His 158, 258 and 259 dual oscillator modules had a specific FM control voltage input, and the model 208 (Music Easel) had a modulation oscillator hard-wired to allow FM as well as AM of the primary oscillator. These early applications used analog oscillators, and this capability was also followed by other modular synthesizers and portable synthesizers including Minimoog and ARP Odyssey.
John Chowning (late-1960s–1970s).
By the mid-20th century, frequency modulation (FM), a means of carrying sound, had been understood for decades and was being used to broadcast radio transmissions. FM synthesis was developed from 1967 onwards at Stanford University, California, by John Chowning. The technique was licensed to the Japanese company Yamaha in 1973, and the implementation was commercialized by Yamaha (U.S. Patent 4,018,121, April 1977).
1970s–1980s.
Expansions by Yamaha.
Yamaha's engineers began adapting Chowning's algorithm for use in a commercial digital synthesizer, adding improvements such as the "key scaling" method, though it would take several years before Yamaha released their FM digital synthesizers. In the 1970s, Yamaha were granted a number of patents, under the company's former name "Nippon Gakki Seizo Kabushiki Kaisha", evolving Chowning's work. Yamaha built the first prototype FM digital synthesizer in 1974. Yamaha eventually commercialized FM synthesis technology with the Yamaha GS-1, the first FM digital synthesizer, released in 1980.
FM synthesis was the basis of some of the early generations of digital synthesizers, most notably those from Yamaha, as well as New England Digital Corporation under license from Yamaha. Yamaha's DX7 synthesizer, released in 1983, was ubiquitous throughout the 1980s. Several other models by Yamaha provided variations and evolutions of FM synthesis during that decade.
Yamaha had patented its hardware implementation of FM in the 1970s, allowing it to nearly monopolize the market for FM technology until the mid-1990s.
Related development by Casio.
Casio developed a related form of synthesis called phase distortion synthesis, used in its CZ range of synthesizers. It had a similar (but slightly differently derived) sound quality to the DX series.
1990s.
Popularization after the expiration of patent.
With the expiration of the Stanford University FM patent in 1995, digital FM synthesis can now be implemented freely by other manufacturers. The FM synthesis patent brought Stanford $20 million before it expired, making it (in 1994) "the second most lucrative licensing agreement in Stanford's history". FM today is mostly found in software-based synths such as FM8 by Native Instruments or Sytrus by Image-Line, but it has also been incorporated into the synthesis repertoire of some modern digital synthesizers, usually coexisting as an option alongside other methods of synthesis such as subtractive, sample-based synthesis, additive synthesis, and other techniques. The degree of complexity of the FM in such hardware synths may vary from simple 2-operator FM, to the highly flexible 6-operator engines of the Korg Kronos and Alesis Fusion, to creation of FM in extensively modular engines such as those in the latest synthesisers by Kurzweil Music Systems.
Realtime Convolution & Modulation (AFM + Sample) and Formant Shaping Synthesis.
New hardware synths specifically marketed for their FM capabilities disappeared from the market after the release of the Yamaha SY99 and FS1R, and even those marketed their highly powerful FM abilities as counterparts to sample-based synthesis and formant synthesis respectively.
Yamaha began combining sets of 8 FM operators with multi-spectral wave forms in 1999 with the FS1R. The FS1R had 16 operators: 8 standard FM operators and 8 additional operators that used a noise source rather than an oscillator as their sound source. By adding in tuneable noise sources the FS1R could model the sounds produced by the human voice and by wind instruments, along with making percussion-instrument sounds. The FS1R also contained an additional wave form called the Formant wave form. Formants can be used to model resonating-body instrument sounds like the cello, violin, acoustic guitar, bassoon, English horn, or human voice. Formants can even be found in the harmonic spectrum of several brass instruments.
2000s–present.
Variable Phase Modulation, FM-X Synthesis, Altered FM, etc.
In 2016, Korg released the Korg Volca FM, a 3-voice, 6-operator FM iteration of the Korg Volca series of compact, affordable desktop modules. More recently Korg released the opsix (2020) and opsix SE (2023), integrating 6-operator FM synthesis with subtractive, analogue-modeling, additive, semi-modular and waveshaping synthesis. Yamaha released the Montage, which combines a 128-voice sample-based engine with a 128-voice FM engine. This iteration of FM is called FM-X, and features 8 operators; each operator has a choice of several basic wave forms, but each wave form has several parameters to adjust its spectrum. The Yamaha Montage was followed by the more affordable Yamaha MODX in 2018, with a 64-voice, 8-operator FM-X architecture in addition to a 128-voice sample-based engine. In 2018 Elektron launched the Digitone, an 8-voice, 4-operator FM synth featuring Elektron's renowned sequence engine.
FM-X synthesis was introduced with the Yamaha Montage synthesizers in 2016. FM-X uses 8 operators. Each FM-X operator has a set of multi-spectral wave forms to choose from, which means each FM-X operator can be equivalent to a stack of 3 or 4 DX7 FM operators. The list of selectable wave forms includes sine waves, the All1 and All2 wave forms, the Odd1 and Odd2 wave forms, and the Res1 and Res2 wave forms. The sine wave selection works the same as the DX7 wave forms. The All1 and All2 wave forms are a saw-tooth wave form. The Odd1 and Odd2 wave forms are pulse or square waves. These two types of wave forms can be used to model the basic harmonic peaks in the bottom of the harmonic spectrum of most instruments. The Res1 and Res2 wave forms move the spectral peak to a specific harmonic and can be used to model either triangular or rounded groups of harmonics further up in the spectrum of an instrument. Combining an All1 or Odd1 wave form with multiple Res1 (or Res2) wave forms (and adjusting their amplitudes) can model the harmonic spectrum of an instrument or sound.
Spectral analysis.
There are multiple variations of FM synthesis.
As the basis of these variations, we analyze in the following the spectrum of the 2-operator case (linear FM synthesis using two sinusoidal operators).
2 operators.
The spectrum generated by FM synthesis with one modulator is expressed as follows:
For modulation signal formula_0, the carrier signal is:
formula_1
If we ignore the constant phase terms on the carrier formula_2 and the modulator formula_3, we finally get the following expression:
formula_4
where formula_5 are the angular frequencies (formula_6) of the carrier and the modulator, respectively, formula_7 is the frequency modulation index, and the amplitude formula_8 of the formula_9-th side frequency is the Bessel function of the first kind of order formula_9, evaluated at the modulation index.
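As a rough numerical illustration of this expansion, the relative sideband amplitudes formula_8 can be evaluated directly; the sketch below assumes SciPy's Bessel-function routine and an arbitrary modulation index.
```python
from scipy.special import jv   # Bessel function of the first kind, J_n

beta = 2.0                     # modulation index B / omega_m (illustrative)
for n in range(-5, 6):
    # relative amplitude of the spectral component at omega_c + n*omega_m
    print(f"n = {n:+d}   J_n(beta) = {jv(n, beta):+.4f}")
# For small beta almost all energy stays at the carrier (n = 0); raising beta
# spreads energy into more sidebands, i.e. a brighter, more complex spectrum.
```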
References.
Footnotes.
<templatestyles src="Reflist/styles.css" />
Citations.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Div col/styles.css"/> | [
{
"math_id": 0,
"text": "m(t) = B\\,\\sin(\\omega_m t)\\,"
},
{
"math_id": 1,
"text": "\\begin{align}\nFM(t)\t& \\ =\\ A\\,\\sin\\left(\\,\\int_0^t \\left(\\omega_c + B\\,\\sin(\\omega_m\\,\\tau)\\right)d\\tau\\right) \\\\\n\t& \\ =\\ A\\,\\sin\\left(\\omega_c\\,t - \\frac{B}{\\omega_m}\\left(\\cos(\\omega_m\\,t) - 1\\right)\\right) \\\\\n\t& \\ =\\ A\\,\\sin\\left(\\omega_c\\,t + \\frac{B}{\\omega_m}\\left(\\sin(\\omega_m\\,t - \\pi/2) + 1\\right)\\right) \\\\\n\\end{align}"
},
{
"math_id": 2,
"text": "\\phi_c = B/\\omega_m\\,"
},
{
"math_id": 3,
"text": "\\phi_m = - \\pi/2\\,"
},
{
"math_id": 4,
"text": "\\begin{align}\nFM(t)\t& \\ \\approx\\ A\\,\\sin\\left(\\omega_c\\,t + \\beta\\,\\sin(\\omega_m\\,t)\\right) \\\\\n\t& \\ =\\ A\\left( J_0(\\beta) \\sin(\\omega_c\\,t)\n\t + \\sum_{n=1}^{\\infty} J_n(\\beta)\\left[\\,\\sin((\\omega_c+n\\,\\omega_m)\\,t)\\ +\\ (-1)^{n}\\sin((\\omega_c-n\\,\\omega_m)\\,t)\\,\\right] \\right) \\\\\n\t& \\ =\\ A\\sum_{n=-\\infty}^{\\infty} J_n(\\beta)\\,\\sin((\\omega_c+n\\,\\omega_m)\\,t)\n\\end{align}"
},
{
"math_id": 5,
"text": "\\omega_c\\,,\\,\\omega_m\\,"
},
{
"math_id": 6,
"text": "\\,\\omega = 2\\pi f\\,"
},
{
"math_id": 7,
"text": "\\beta = B / \\omega_m\\,"
},
{
"math_id": 8,
"text": "J_n(\\beta)\\,"
},
{
"math_id": 9,
"text": "n\\,"
}
] | https://en.wikipedia.org/wiki?curid=11569 |
1156993 | Rolling | Type of motion which combines translation and rotation with respect to a surface
Rolling is a type of motion that combines rotation (commonly, of an axially symmetric object) and translation of that object with respect to a surface (either one or the other moves), such that, if ideal conditions exist, the two are in contact with each other without sliding.
Rolling where there is no sliding is referred to as "pure rolling". By definition, there is no sliding when there is a frame of reference in which all points of contact on the rolling object have the same velocity as their counterparts on the surface on which the object rolls; in particular, for a frame of reference in which the rolling plane is at rest (see animation), the instantaneous velocity of all the points of contact (for instance, a generating line segment of a cylinder) of the rolling object is zero.
In practice, due to small deformations near the contact area, some sliding and energy dissipation occurs. Nevertheless, the resulting rolling resistance is much lower than sliding friction, and thus, rolling objects typically require much less energy to be moved than sliding ones. As a result, such objects will more easily move, if they experience a force with a component along the surface, for instance gravity on a tilted surface, wind, pushing, pulling, or torque from an engine. Unlike cylindrical axially symmetric objects, the rolling motion of a cone is such that while rolling on a flat surface, its center of gravity performs a circular motion, rather than a linear motion. Rolling objects are not necessarily axially-symmetrical. Two well known non-axially-symmetrical rollers are the Reuleaux triangle and the Meissner bodies. The oloid and the sphericon are members of a special family of developable rollers that develop their entire surface when rolling down a flat plane. Objects with corners, such as dice, roll by successive rotations about the edge or corner which is in contact with the surface. The construction of a specific surface allows even a perfect square wheel to roll with its centroid at constant height above a reference plane.
Applications.
Most land vehicles use wheels and therefore rolling for displacement. Slip should be kept to a minimum (approximating pure rolling), otherwise loss of control and an accident may result. This may happen when the road is covered in snow, sand, or oil, when taking a turn at high speed or attempting to brake or accelerate suddenly.
One of the most practical applications of rolling objects is the use of rolling-element bearings, such as ball bearings, in rotating devices. Made of metal, the rolling elements are usually encased between two rings that can rotate independently of each other. In most mechanisms, the inner ring is attached to a stationary shaft (or axle). Thus, while the inner ring is stationary, the outer ring is free to move with very little friction. This is the basis for which almost all motors (such as those found in ceiling fans, cars, drills, etc.) rely on to operate. Alternatively, the outer ring may be attached to a fixed support bracket, allowing the inner ring to support an axle, allowing for rotational freedom of an axle. The amount of friction on the mechanism's parts depends on the quality of the ball bearings and how much lubrication is in the mechanism.
Rolling objects are also frequently used as tools for transportation. One of the most basic ways is by placing a (usually flat) object on a series of lined-up rollers, or wheels. The object on the wheels can be moved along them in a straight line, as long as the wheels are continuously replaced in the front (see history of bearings). This method of primitive transportation is efficient when no other machinery is available. Today, the most practical application of objects on wheels are cars, trains, and other human transportation vehicles.
Rolling is used to apply normal forces to a moving line of contact in various processes, for example in metalworking, printing, rubber manufacturing, painting.
Rigid bodies.
The simplest case of rolling is that of a rigid body rolling without slipping along a flat surface with its axis parallel to the surface (or equivalently: perpendicular to the surface normal).
The trajectory of any point is a trochoid; in particular, the trajectory of any point in the object axis is a line, while the trajectory of any point in the object rim is a cycloid.
The velocity of any point in the rolling object is given by formula_0, where formula_1 is the displacement between the particle and the rolling object's contact point (or line) with the surface, and ω is the angular velocity vector. Thus, even though rolling is different from rotation around a fixed axis, the "instantaneous velocity" of all particles of the rolling object is the same as if it were rotating around an axis that passes through the point of contact with the same angular velocity.
Any point in the rolling object farther from the axis than the point of contact will temporarily move opposite to the direction of the overall motion when it is below the level of the rolling surface (for example, any point in the part of the flange of a train wheel that is below the rail).
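A small numerical check of this velocity relation, using an arbitrary wheel radius and angular speed, is sketched below.
```python
import numpy as np

R, spin = 0.5, 10.0                      # wheel radius (m) and angular speed (rad/s); illustrative values
omega = np.array([0.0, 0.0, -spin])      # rolling toward +x on a horizontal plane: spin about -z

def point_velocity(r):
    """v = omega x r, with r measured from the instantaneous contact point."""
    return np.cross(omega, r)

print(point_velocity(np.array([0.0, 0.0, 0.0])))   # contact point itself: zero velocity
print(point_velocity(np.array([0.0, R,   0.0])))   # wheel centre: [5, 0, 0], i.e. v = omega*R
print(point_velocity(np.array([0.0, 2*R, 0.0])))   # top of the wheel: [10, 0, 0], twice the centre speed
```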
Energy.
Since kinetic energy is entirely a function of an object mass and velocity, the above result may be used with the parallel axis theorem to obtain the kinetic energy associated with simple rolling
formula_2
Forces and acceleration.
Differentiating the relation between linear and angular "velocity", formula_4, with respect to time gives a formula relating linear and angular "acceleration" formula_5. Applying Newton's second law:
formula_6
It follows that to accelerate the object, both a net force and a torque are required. When external force with no torque acts on the rolling object‐surface system, there will be a tangential force at the point of contact between the surface and rolling object that provides the required torque as long as the motion is pure rolling; this force is usually static friction, for example, between the road and a wheel or between a bowling lane and a bowling ball. When static friction isn't enough, the friction becomes dynamic friction and slipping happens. The tangential force is opposite in direction to the external force, and therefore partially cancels it. The resulting net force and acceleration are:
formula_7
formula_8 has the dimension of mass, and it is the mass that would have a rotational inertia formula_9 at distance formula_3 from an axis of rotation. Therefore, the term formula_8 may be thought of as the mass with linear inertia equivalent to the rolling object's rotational inertia (around its center of mass). The action of the external force upon an object in simple rolling may be conceptualized as accelerating the sum of the real mass and the virtual mass that represents the rotational inertia, which is formula_10. Since the work done by the external force is split between overcoming the translational and rotational inertia, the external force results in a smaller net force by the dimensionless multiplicative factor formula_11, where formula_12 represents the ratio of the aforesaid virtual mass to the object's actual mass and is equal to formula_13, where formula_14 is the radius of gyration corresponding to the object's rotational inertia in pure rotation (not the rotational inertia in pure rolling). The square power is due to the fact that the rotational inertia of a point mass varies proportionally to the square of its distance from the axis.
In the specific case of an object rolling in an inclined plane which experiences only static friction, normal force and its own weight, (air drag is absent) the acceleration in the direction of rolling down the slope is:
formula_15
formula_16 is specific to the object's shape and mass distribution; it does not depend on scale or density. However, it will vary if the object is made to roll on different radii; for instance, it differs between a train wheelset rolling normally (on its tire) and rolling on its axle. It follows that, given a reference rolling object, another object that is bigger or has a different density will roll down with the same acceleration. This behavior is the same as that of an object in free fall or an object sliding without friction (instead of rolling) down an inclined plane.
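A short numerical comparison of this result for a few uniform shapes with well-known gyration ratios (hoop, solid cylinder, solid sphere) is sketched below; the slope angle is an arbitrary illustrative choice.
```python
import math

g, theta = 9.81, math.radians(30.0)   # gravity (m/s^2) and slope angle (illustrative)

# (r_gyr / r)^2 for some uniform shapes rolling on their outer radius
shapes = {"thin hoop": 1.0, "solid cylinder": 0.5, "solid sphere": 0.4, "sliding block (no rotation)": 0.0}

for name, k2 in shapes.items():
    a = g * math.sin(theta) / (1.0 + k2)
    print(f"{name:28s} a = {a:.2f} m/s^2")
# The sphere outruns the cylinder, which outruns the hoop, independent of mass and radius.
```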
Deformable bodies.
When an axisymmetric deformable body contacts a surface, an interface is formed through which normal and shear forces may be transmitted. For example, a tire contacting the road carries the weight (normal load) of the car as well as any shear forces arising due to acceleration, braking or steering. The deformations and motions in a steady rolling body can be efficiently characterized using an Eulerian description of rigid body rotation and a Lagrangian description of deformation. This approach greatly simplifies analysis by eliminating time-dependence, resulting in displacement, velocity, stress and strain fields that vary only spatially. Analysis procedures for finite element analysis of steady state rolling were first developed by Padovan, and are now featured in several commercial codes.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{v}=\\boldsymbol{\\omega}\\times\\mathbf{r}"
},
{
"math_id": 1,
"text": "\\mathbf{r}"
},
{
"math_id": 2,
"text": "K_\\text{rolling}=K_\\text{translation}+K_\\text{rotation}"
},
{
"math_id": 3,
"text": "r"
},
{
"math_id": 4,
"text": "v_\\text{c.o.m.}=r\\omega"
},
{
"math_id": 5,
"text": "a=r\\alpha"
},
{
"math_id": 6,
"text": "a=\\frac{F_\\text{net}}{m}=r\\alpha=\\frac{r\\tau}{I}."
},
{
"math_id": 7,
"text": "\\begin{align}\n F_\\text{net} &=\n \\frac{F_\\text{external}}{1 + \\frac{I}{mr^2}} =\n \\frac{F_\\text{external}}{1 + \\left(\\frac{r_\\text{gyr.}}{r}\\right)^2} \\\\\n a &= \\frac{F_\\text{external}}{m + \\frac{I}{r^2}}\n\\end{align}"
},
{
"math_id": 8,
"text": "\\tfrac{I}{r^2}"
},
{
"math_id": 9,
"text": "I"
},
{
"math_id": 10,
"text": "m+\\tfrac{I}{r^2}"
},
{
"math_id": 11,
"text": "1/\\left(1+\\tfrac{I}{mr^2}\\right)"
},
{
"math_id": 12,
"text": "\\tfrac{I}{mr^2}"
},
{
"math_id": 13,
"text": "\\left(\\tfrac{r_\\text{gyr.}}{r}\\right)^2"
},
{
"math_id": 14,
"text": "r_\\text{gyr.}"
},
{
"math_id": 15,
"text": "a=\\frac{g\\sin\\left(\\theta\\right)}{1+\\left(\\tfrac{r_\\text{gyr.}}{r}\\right)^2}"
},
{
"math_id": 16,
"text": "\\tfrac{r_\\text{gyr.}}{r}"
}
] | https://en.wikipedia.org/wiki?curid=1156993 |
11572324 | List of Indian inventions and discoveries | Indian inventions
This list of Indian inventions and discoveries details the inventions, scientific discoveries and contributions of India, including those from the historic Indian subcontinent and the modern-day Republic of India. It draws from the whole cultural and technological history of India, during which architecture, astronomy, cartography, metallurgy, logic, mathematics, metrology and mineralogy were among the branches of study pursued. In recent times, science and technology in the Republic of India have also focused on automobile engineering, information technology and communications, as well as research into space and polar technology.
For the purpose of this list, inventions are regarded as technological firsts developed within the territory of India; as such, the list does not include foreign technologies that India acquired through contact, nor breakthroughs made abroad by people of Indian origin. It also does not include technologies or discoveries developed elsewhere and later invented separately in India, nor inventions by Indian emigrants in other places. Minor changes in design or style and artistic innovations do not appear in the lists.
Ancient India.
Metrology.
A total of 558 weights were excavated from Mohenjodaro, Harappa, and Chanhu-daro, not including defective weights. The excavators did not find statistically significant differences between weights recovered from five different layers, each about 1.5 m in thickness. This was evidence that strong control existed for at least a 500-year period. The 13.7-g weight seems to be one of the units used in the Indus valley. The notation was based on the binary and decimal systems. 83% of the weights excavated from the above three cities were cubic, and 68% were made of chert.
When a ‘sign’ or ‘mark’ (linga) is identified, there are three possibilities: the sign may be present in all, some, or none of the sapakṣas. Likewise, the sign may be present in all, some or none of the vipakṣas. To identify a sign, however, we have to assume that it is present in the pakṣa; that is, the first condition is already satisfied. Combining these, Dignaga constructed his ‘Wheel of Reason’ (Sanskrit: Hetucakra).
The seven predicate theory consists in the use of seven claims about sentences, each preceded by "arguably" or "conditionally", concerning a single object and its particular properties, composed of assertions and denials, either simultaneously or successively, and without contradiction. These seven claims are the following.
Mathematics.
"It is India that gave us the ingenuous method of expressing all numbers by the means of ten symbols, each symbol receiving a value of position, as well as an absolute value; a profound and important idea which appears so simple to us now that we ignore its true merit, but its very simplicity, the great ease which it has lent to all computations, puts our arithmetic in the first rank of useful inventions, and we shall appreciate the grandeur of this achievement when we remember that it escaped the genius of Archimedes and Apollonius, two of the greatest minds produced by antiquity."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\arctan x"
},
{
"math_id": 1,
"text": "104348/33215"
},
{
"math_id": 2,
"text": "3.1415926539214"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "k"
},
{
"math_id": 5,
"text": "\\ x^2-Ny^2=1, "
},
{
"math_id": 6,
"text": " \\pi "
}
] | https://en.wikipedia.org/wiki?curid=11572324 |
115742 | Zeta distribution | In probability theory and statistics, the zeta distribution is a discrete probability distribution. If "X" is a zeta-distributed random variable with parameter "s", then the probability that "X" takes the positive integer value "k" is given by the probability mass function
formula_0
where "ζ"("s") is the Riemann zeta function (which is undefined for "s" = 1).
The multiplicities of distinct prime factors of "X" are independent random variables.
The Riemann zeta function being the sum of the terms formula_1 over all positive integers "k", it thus appears as the normalization of the Zipf distribution. The terms "Zipf distribution" and "zeta distribution" are often used interchangeably. However, while the zeta distribution is a probability distribution in its own right, it is not associated with Zipf's law with the same exponent.
Definition.
The Zeta distribution is defined for positive integers formula_2, and its probability mass function is given by
formula_3
where formula_4 is the parameter, and formula_5 is the Riemann zeta function.
The cumulative distribution function is given by
formula_6
where formula_7 is the generalized harmonic number
formula_8
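The probability mass function and cumulative distribution function are straightforward to evaluate numerically; the sketch below uses SciPy's zeta function and cross-checks against scipy.stats.zipf, which implements this distribution, for an arbitrary parameter value.
```python
import numpy as np
from scipy.special import zeta
from scipy.stats import zipf   # SciPy's "zipf" is exactly this zeta distribution

s = 2.5                        # parameter s > 1 (illustrative)
k = np.arange(1, 11)

pmf = k**(-s) / zeta(s)        # P(X = k) = k^(-s) / zeta(s)
cdf = np.cumsum(pmf)           # P(X <= k) = H_{k,s} / zeta(s)

print(np.allclose(pmf, zipf.pmf(k, s)))  # True: matches the library implementation
print(np.round(cdf, 4))
```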
Moments.
The "n"th raw moment is defined as the expected value of "X""n":
formula_9
The series on the right is just a series representation of the Riemann zeta function, but it only converges for values of formula_10 that are greater than unity. Thus:
formula_11
The ratio of the zeta functions is well-defined, even for "n" > "s" − 1 because the series representation of the zeta function can be analytically continued. This does not change the fact that the moments are specified by the series itself, and are therefore undefined for large "n".
Moment generating function.
The moment generating function is defined as
formula_12
The series is just the definition of the polylogarithm, valid for formula_13 so that
formula_14
Since this does not converge on an open interval containing formula_15, the moment generating function does not exist.
The case "s" = 1.
"ζ"(1) is infinite as the harmonic series, and so the case when "s" = 1 is not meaningful. However, if "A" is any set of positive integers that has a density, i.e. if
formula_16
exists where "N"("A", "n") is the number of members of "A" less than or equal to "n", then
formula_17
is equal to that density.
The latter limit can also exist in some cases in which "A" does not have a density. For example, if "A" is the set of all positive integers whose first digit is "d", then "A" has no density, but nonetheless the second limit given above exists and is proportional to
formula_18
which is Benford's law.
Infinite divisibility.
The Zeta distribution can be constructed with a sequence of independent random variables with a geometric distribution. Let formula_19 be a prime number and formula_20 be a random variable with a geometric distribution of parameter formula_21, namely
formula_22
If the random variables formula_23 are independent, then the random variable formula_24 defined by
formula_25
has the zeta distribution: formula_26.
Stated differently, the random variable formula_27 is infinitely divisible with Lévy measure given by the following sum of Dirac masses:
formula_28
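A Monte Carlo sketch of this construction is shown below; truncating the product to primes below 100 is an approximation (larger primes contribute a non-zero exponent only very rarely for the chosen "s"), and the parameter and sample sizes are arbitrary.
```python
import numpy as np
from scipy.special import zeta

rng = np.random.default_rng(0)
s = 2.0
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47,
          53, 59, 61, 67, 71, 73, 79, 83, 89, 97]   # truncated list: an approximation

n_samples = 200_000
Z = np.ones(n_samples, dtype=np.int64)
for p in primes:
    # geometric exponent with P(X = k) = p^(-k s) * (1 - p^(-s)), support 0, 1, 2, ...
    X = rng.geometric(1.0 - p**(-s), size=n_samples) - 1
    Z *= p**X

for n in (1, 2, 3, 4, 5, 6):
    print(n, (Z == n).mean(), 1.0 / (n**s * zeta(s)))   # empirical vs exact P(Z_s = n)
```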
See also.
Other "power-law" distributions | [
{
"math_id": 0,
"text": "f_s(k) = \\frac{k^{-s}}{\\zeta(s)} "
},
{
"math_id": 1,
"text": "k^{-s}"
},
{
"math_id": 2,
"text": "k \\geq 1"
},
{
"math_id": 3,
"text": " P(x=k) = \\frac 1 {\\zeta(s)} k^{-s}, "
},
{
"math_id": 4,
"text": "s>1"
},
{
"math_id": 5,
"text": "\\zeta(s)"
},
{
"math_id": 6,
"text": "P(x \\leq k) = \\frac{H_{k,s}}{\\zeta(s)},"
},
{
"math_id": 7,
"text": "H_{k,s}"
},
{
"math_id": 8,
"text": "H_{k,s} = \\sum_{i=1}^k \\frac 1 {i^s}."
},
{
"math_id": 9,
"text": "m_n = E(X^n) = \\frac{1}{\\zeta(s)}\\sum_{k=1}^\\infty \\frac{1}{k^{s-n}}"
},
{
"math_id": 10,
"text": "s-n"
},
{
"math_id": 11,
"text": "m_n =\n\\begin{cases}\n\\zeta(s-n)/\\zeta(s) & \\text{for } n < s-1 \\\\\n\\infty & \\text{for } n \\ge s-1\n\\end{cases}\n"
},
{
"math_id": 12,
"text": "M(t;s) = E(e^{tX}) = \\frac{1}{\\zeta(s)} \\sum_{k=1}^\\infty \\frac{e^{tk}}{k^s}."
},
{
"math_id": 13,
"text": "e^t<1"
},
{
"math_id": 14,
"text": "M(t;s) = \\frac{\\operatorname{Li}_s(e^t)}{\\zeta(s)}\\text{ for }t<0."
},
{
"math_id": 15,
"text": " t=0"
},
{
"math_id": 16,
"text": "\\lim_{n\\to\\infty}\\frac{N(A,n)}{n}"
},
{
"math_id": 17,
"text": "\\lim_{s\\to 1^+}P(X\\in A)\\,"
},
{
"math_id": 18,
"text": "\\log(d+1) - \\log(d) = \\log\\left(1+\\frac{1}{d}\\right),\\,"
},
{
"math_id": 19,
"text": "p"
},
{
"math_id": 20,
"text": "X(p^{-s})"
},
{
"math_id": 21,
"text": "p^{-s}"
},
{
"math_id": 22,
"text": "\\quad\\quad\\quad \\mathbb{P}\\left( X(p^{-s}) = k \\right) = p^{-ks } (1 - p^{-s} )"
},
{
"math_id": 23,
"text": "( X(p^{-s}) )_{p \\in \\mathcal{P} }"
},
{
"math_id": 24,
"text": "Z_s"
},
{
"math_id": 25,
"text": "\\quad\\quad\\quad Z_s = \\prod_{p \\in \\mathcal{P} } p^{ X(p^{-s}) }"
},
{
"math_id": 26,
"text": "\\mathbb{P}\\left( Z_s = n \\right) = \\frac{1}{ n^s \\zeta(s) }"
},
{
"math_id": 27,
"text": "\\log(Z_s) = \\sum_{p \\in \\mathcal{P} } X(p^{-s}) \\, \\log(p)"
},
{
"math_id": 28,
"text": "\\quad\\quad\\quad \\Pi_s(dx) = \\sum_{p \\in \\mathcal{P} } \\sum_{k \\geqslant 1 } \\frac{p^{-k s}}{k} \\delta_{k \\log(p) }(dx)"
}
] | https://en.wikipedia.org/wiki?curid=115742 |
1157422 | Virial expansion | Series expansion of the equation of state for a many-particle system
The virial expansion is a model of thermodynamic equations of state. It expresses the pressure P of a gas in local equilibrium as a power series of the density. This equation may be represented in terms of the compressibility factor, Z, as
formula_0
This equation was first proposed by Kamerlingh Onnes. The terms A, B, and C represent the virial coefficients. The leading coefficient A is defined as the constant value of 1, which ensures that the equation reduces to the ideal gas expression as the gas density approaches zero.
Second and third virial coefficients.
The second, B, and third, C, virial coefficients have been studied extensively and tabulated for many fluids for more than a century. Two of the most extensive compilations are in the books by Dymond and the National Institute of Standards and Technology's Thermo Data Engine Database and its Web Thermo Tables. Tables of second and third virial coefficients of many fluids are included in these compilations.
Casting equations of the state into virial form.
Most equations of state can be reformulated and cast in virial equations to evaluate and compare their implicit second and third virial coefficients. The seminal van der Waals equation of state was proposed in 1873:
formula_1
where "v" = 1/"ρ" is molar volume. It can be rearranged by expanding 1/("v" − "b") into a Taylor series:
formula_2
In the van der Waals equation, the second virial coefficient has roughly the correct behavior, as it decreases monotonically when the temperature is lowered. The third and higher virial coefficients are independent of temperature, and are not correct, especially at low temperatures.
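As a small worked illustration, the second virial coefficient implied by the van der Waals equation, B(T) = b − a/(RT), and the Boyle temperature at which it vanishes can be evaluated directly; the constants below are approximate textbook van der Waals values for nitrogen and are used only as an example.
```python
R = 8.314                 # J / (mol K)
a, b = 0.137, 3.87e-5     # approximate van der Waals constants for N2 (Pa m^6 mol^-2, m^3 mol^-1)

def B_vdw(T):
    """Second virial coefficient implied by the van der Waals equation, B(T) = b - a/(R*T)."""
    return b - a / (R * T)

for T in (100.0, 300.0, 500.0, 1000.0):
    print(f"T = {T:6.1f} K   B = {B_vdw(T) * 1e6:+8.1f} cm^3/mol")
print(f"Boyle temperature a/(R*b) = {a / (R * b):.0f} K  (where B changes sign in this model)")
```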
Almost all subsequent equations of state derived from the van der Waals equation, like those of Dieterici, Berthelot, Redlich-Kwong, and Peng-Robinson, suffer from the singularity introduced by 1/(v - b).
Other equations of state, starting with that of Beattie and Bridgeman, are more closely related to virial equations and have proved to be more accurate in representing the behavior of fluids in both gaseous and liquid phases. The Beattie-Bridgeman equation of state, proposed in 1928,
formula_3
where formula_4 and formula_5,
can be rearranged as
formula_6
The Benedict-Webb-Rubin equation of state of 1940 better represents isotherms below the critical temperature:
formula_7
More improvements were achieved by Starling in 1972:
formula_8
Following are plots of reduced second and third virial coefficients against reduced temperature according to Starling:
The exponential terms in the last two equations correct the third virial coefficient so that the isotherms in the liquid phase can be represented correctly. The exponential term converges rapidly as ρ increases, and if only the first two terms in its Taylor expansion series are taken, formula_9, and multiplied with formula_10, the result is formula_11, which contributes a formula_12 term to the third virial coefficient, and one term to the eighth virial coefficient, which can be ignored.
After the expansion of the exponential terms, the Benedict-Webb-Rubin and Starling equations of state have this form:
formula_13
Cubic virial equation of state.
The three-term virial equation or a cubic virial equation of state
formula_14
has the simplicity of the Van der Waals equation of state without its singularity at "v" = "b". Theoretically, the second virial coefficient represents bimolecular attraction forces, and the third virial term represents the repulsive forces among three molecules in close contact.
With this cubic virial equation, the coefficients B and C can be solved in closed form. Imposing the critical conditions:
formula_15
the cubic virial equation can be solved to yield:
formula_16 formula_17 and formula_18
formula_19 is therefore 0.333, compared to 0.375 from the Van der Waals equation.
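These closed-form coefficients can be recovered symbolically from the two critical conditions; the brief computer-algebra check below (using SymPy, which is an implementation choice rather than part of the derivation) is only a sketch.
```python
import sympy as sp

R, T, v, vc = sp.symbols('R T v v_c', positive=True)
B, C = sp.symbols('B C')

P = R * T / v * (1 + B / v + C / v**2)     # cubic virial equation of state at fixed temperature

# Impose dP/dv = 0 and d2P/dv2 = 0 at v = v_c, then solve for B and C.
crit = sp.solve([sp.diff(P, v).subs(v, vc),
                 sp.diff(P, v, 2).subs(v, vc)], [B, C], dict=True)[0]
print(crit)                                # {B: -v_c, C: v_c**2/3}

Zc = 1 + crit[B] / vc + crit[C] / vc**2    # Z_c = P_c * v_c / (R * T_c)
print(sp.simplify(Zc))                     # 1/3
```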
Between the critical point and the triple point is the saturation region of fluids. In this region, the gaseous phase coexists with the liquid phase under saturation pressure formula_20, and the saturation temperature formula_21. Under the saturation pressure, the liquid phase has a molar volume of formula_22, and the gaseous phase has a molar volume of formula_23. The corresponding molar densities are formula_24 and formula_25. These are the saturation properties needed to compute second and third virial coefficients.
A valid equation of state must produce an isotherm which crosses the horizontal line of formula_20 at formula_22 and formula_23, on formula_21. Under formula_20 and formula_21, gas is in equilibrium with liquid. This means that the PρT isotherm has three roots at formula_20. The cubic virial equation of state at formula_21 is:
formula_26
It can be rearranged as:
formula_27
The factor formula_28 is the volume of saturated gas according to the ideal gas law, and can be given a unique name formula_29:
formula_30
In the saturation region, the cubic equation has three roots, and can be written alternatively as:
formula_31
which can be expanded as:
formula_32
formula_33 is a volume of an unstable state between formula_22 and formula_23. The cubic equations are identical. Therefore, from the linear terms in these equations, formula_34 can be solved:
formula_35
From the quadratic terms, "B" can be solved:
formula_36
And from the cubic terms, "C" can be solved:
formula_37
Since formula_22, formula_23 and formula_20 have been tabulated for many fluids with formula_21 as a parameter, "B" and "C" can be computed in the saturation region of these fluids. The results are generally in agreement with those computed from Benedict-Webb-Rubin and Starling equations of state.
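A numerical sketch of this procedure is given below; the saturation properties are approximate steam-table values for water at 100 °C and serve only as an illustration.
```python
# Second and third virial coefficients from saturation properties via the
# cubic virial equation. Inputs are approximate steam-table values for water
# at 100 C (illustrative only); molar volumes are in m^3/mol.
R = 8.314                      # J / (mol K)
T_sat, P_sat = 373.15, 101325.0
M = 0.018015                   # kg/mol, to convert specific volumes to molar volumes
v_l = 1.043e-3 * M             # saturated liquid molar volume
v_g = 1.672 * M                # saturated vapour molar volume

v_id = R * T_sat / P_sat       # ideal-gas molar volume at saturation
v_m = v_id - v_l - v_g         # the third (unstable) root

B = -(v_l * v_g + v_g * v_m + v_m * v_l) / v_id
C = v_l * v_g * v_m / v_id
print(f"B = {B * 1e6:10.1f} cm^3/mol")
print(f"C = {C * 1e12:10.1f} cm^6/mol^2")
```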
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Z \\equiv \\frac{P}{RT\\rho} = A + B\\rho + C\\rho^2 + \\cdots"
},
{
"math_id": 1,
"text": "P = \\frac{RT}{\\left(v-b\\right)} - \\frac{a}{v^2}"
},
{
"math_id": 2,
"text": "Z = 1 + \\left(b-\\frac{a}{RT}\\right)\\rho + b^2\\rho^2 + b^3\\rho^3 + \\cdots"
},
{
"math_id": 3,
"text": "p=\\frac{RT}{v^2}\\left(1-\\frac{c}{vT^3}\\right)(v+B)-\\frac{A}{v^2}"
},
{
"math_id": 4,
"text": "A = A_0 \\left(1 - \\frac{a}{v} \\right)"
},
{
"math_id": 5,
"text": "B = B_0 \\left(1 - \\frac{b}{v} \\right)"
},
{
"math_id": 6,
"text": "Z=1 + \\left(B_0 -\\frac{A_0}{RT} - \\frac{c}{T^3}\\right) \\rho - \\left(B_0 b-\\frac{A_0 a}{RT} + \\frac{B_0 c}{T^3}\\right) \\rho^2 + \\left(\\frac{B_0 b c}{T^3}\\right) \\rho^3 "
},
{
"math_id": 7,
"text": "Z = 1 + \\left(B_0 -\\frac{A_0}{RT} - \\frac{C_0}{RT^3}\\right) \\rho + \\left(b-\\frac{a}{RT}\\right) \\rho^2 + \\left(\\frac{\\alpha a}{RT}\\right) \\rho^5 + \\frac{c\\rho^2}{RT^3}\\left(1 + \\gamma\\rho^2\\right)\\exp\\left(-\\gamma\\rho^2\\right)"
},
{
"math_id": 8,
"text": "Z = 1 + \\left(B_0 -\\frac{A_0}{RT} - \\frac{C_0}{RT^3} + \\frac{D_0}{RT^4} - \\frac{E_0}{RT^5}\\right) \\rho + \\left(b-\\frac{a}{RT}-\\frac{d}{RT^2}\\right) \\rho^2 + \\alpha\\left(\\frac{a}{RT}+\\frac{d}{RT^2}\\right) \\rho^5 + \\frac{c\\rho^2}{RT^3}\\left(1 + \\gamma\\rho^2\\right)\\exp\\left(-\\gamma\\rho^2\\right)"
},
{
"math_id": 9,
"text": "1-\\gamma\\rho^2"
},
{
"math_id": 10,
"text": "1 + \\gamma\\rho^2"
},
{
"math_id": 11,
"text": "1 - \\gamma^2\\rho^4"
},
{
"math_id": 12,
"text": "c / RT^3"
},
{
"math_id": 13,
"text": "Z = 1 + b\\rho_r + c\\rho_r^2 + f\\rho_r^5"
},
{
"math_id": 14,
"text": "Z = 1+B\\rho+C\\rho^2"
},
{
"math_id": 15,
"text": "\\frac{\\mathrm{d}P}{\\mathrm{d}v}=0 \\qquad \\text{and} \\qquad \\frac{\\mathrm{d}^2P}{\\mathrm{d}v^2}=0"
},
{
"math_id": 16,
"text": "B = -v_c ,"
},
{
"math_id": 17,
"text": " C = \\frac{v_c^2}{3},"
},
{
"math_id": 18,
"text": "Z_c = \\frac{P_c v_c}{RT_c} = \\frac 1 3."
},
{
"math_id": 19,
"text": "Z_c"
},
{
"math_id": 20,
"text": "P_\\text{sat}"
},
{
"math_id": 21,
"text": "T_\\text{sat}"
},
{
"math_id": 22,
"text": "v_\\text{l}"
},
{
"math_id": 23,
"text": "v_\\text{g}"
},
{
"math_id": 24,
"text": "\\rho_\\text{l}"
},
{
"math_id": 25,
"text": "\\rho_\\text{g}"
},
{
"math_id": 26,
"text": "P_\\text{sat} = RT_\\text{sat} \\left(1 + B\\rho + C\\rho^2\\right) \\rho"
},
{
"math_id": 27,
"text": "1 - \\frac{RT_\\text{sat}}{P_\\text{sat}} \\left(1 + B\\rho + C\\rho^2\\right) \\rho = 0"
},
{
"math_id": 28,
"text": "RT_\\text{sat}/P_\\text{sat}"
},
{
"math_id": 29,
"text": "v^\\text{id}"
},
{
"math_id": 30,
"text": "v^\\text{id} = \\frac{RT_\\text{sat}}{P_\\text{sat}}"
},
{
"math_id": 31,
"text": "\\left(1 - v_\\text{l} \\rho \\right) \\left(1 - v_\\text{m} \\rho \\right) \\left(1 - v_\\text{g} \\rho \\right) = 0"
},
{
"math_id": 32,
"text": "1 - \\left(v_\\text{l} + v_\\text{g} + v_m\\right)\\rho + \\left(v_\\text{l} v_\\text{g} + v_\\text{g} v_\\text{m} + v_\\text{m} v_\\text{l}\\right)\\rho^2 - v_\\text{l} v_\\text{g} v_\\text{m} \\rho^3 = 0 "
},
{
"math_id": 33,
"text": "v_\\text{m}"
},
{
"math_id": 34,
"text": "v_m"
},
{
"math_id": 35,
"text": "v_\\text{m} = v^\\text{id} - v_\\text{l} - v_\\text{g} "
},
{
"math_id": 36,
"text": "B = -\\frac{\\left(v_\\text{l} v_\\text{g} + v_\\text{g} v_\\text{m} + v_\\text{m} v_\\text{l}\\right)}{v^\\text{id}} "
},
{
"math_id": 37,
"text": "C = \\frac{v_\\text{l} v_\\text{g} v_\\text{m}}{v^\\text{id}} "
}
] | https://en.wikipedia.org/wiki?curid=1157422 |
1157698 | Transcendental number theory | Study of numbers that are not solutions of polynomials with rational coefficients
Transcendental number theory is a branch of number theory that investigates transcendental numbers (numbers that are not solutions of any polynomial equation with rational coefficients), in both qualitative and quantitative ways.
Transcendence.
The fundamental theorem of algebra tells us that if we have a non-constant polynomial with rational coefficients (or equivalently, by clearing denominators, with integer coefficients) then that polynomial will have a root in the complex numbers. That is, for any non-constant polynomial formula_0 with rational coefficients there will be a complex number formula_1 such that formula_2. Transcendence theory is concerned with the converse question: given a complex number formula_1, is there a polynomial formula_0 with rational coefficients such that formula_3 If no such polynomial exists then the number is called transcendental.
More generally the theory deals with algebraic independence of numbers. A set of numbers {α1, α2, …, α"n"} is called algebraically independent over a field "K" if there is no non-zero polynomial "P" in "n" variables with coefficients in "K" such that "P"(α1, α2, …, α"n") = 0. So working out if a given number is transcendental is really a special case of algebraic independence where "n" = 1 and the field "K" is the field of rational numbers.
A related notion is whether there is a closed-form expression for a number, including exponentials and logarithms as well as algebraic operations. There are various definitions of "closed-form", and questions about closed-form can often be reduced to questions about transcendence.
History.
Approximation by rational numbers: Liouville to Roth.
Use of the term "transcendental" to refer to an object that is not algebraic dates back to the seventeenth century, when Gottfried Leibniz proved that the sine function was not an algebraic function. The question of whether certain classes of numbers could be transcendental dates back to 1748 when Euler asserted that the number log"a""b" was not algebraic for rational numbers "a" and "b" provided "b" is not of the form "b" = "a""c" for some rational "c".
Euler's assertion was not proved until the twentieth century, but almost a hundred years after his claim Joseph Liouville did manage to prove the existence of numbers that are not algebraic, something that until then had not been known for sure. His original papers on the matter in the 1840s sketched out arguments using continued fractions to construct transcendental numbers. Later, in the 1850s, he gave a necessary condition for a number to be algebraic, and thus a sufficient condition for a number to be transcendental. This transcendence criterion was not strong enough to be necessary too, and indeed it fails to detect that the number "e" is transcendental. But his work did provide a larger class of transcendental numbers, now known as Liouville numbers in his honour.
Liouville's criterion essentially said that algebraic numbers cannot be very well approximated by rational numbers. So if a number can be very well approximated by rational numbers then it must be transcendental. The exact meaning of "very well approximated" in Liouville's work relates to a certain exponent. He showed that if α is an algebraic number of degree "d" ≥ 2 and ε is any number greater than zero, then the expression
formula_4
can be satisfied by only finitely many rational numbers "p"/"q". Using this as a criterion for transcendence is not trivial, as one must check whether there are infinitely many solutions "p"/"q" for every "d" ≥ 2.
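A small computational illustration of how this criterion detects transcendence is to apply it to Liouville's own constant, the sum of 10^(−"k"!) over all positive integers "k"; the sketch below uses exact rational arithmetic (an implementation choice) to estimate the effective approximation exponent of its truncations.
```python
from fractions import Fraction
import math

def truncation(n):
    """p/q with q = 10^(n!): the first n terms of Liouville's constant sum of 10^(-k!)."""
    return sum(Fraction(1, 10 ** math.factorial(k)) for k in range(1, n + 1))

alpha = truncation(7)   # stand-in for the full constant, far more precise than the tests below

for n in range(1, 6):
    err = alpha - truncation(n)                     # exact rational error |alpha - p/q|
    # effective exponent mu in |alpha - p/q| ~ q^(-mu), with q = 10^(n!)
    mu = (math.log10(err.denominator) - math.log10(err.numerator)) / math.factorial(n)
    print(f"n = {n}:  approximation exponent = {mu:.2f}")
# The exponent grows roughly like n + 1, so for every fixed degree d the Liouville
# inequality is violated by infinitely many p/q -- the constant cannot be algebraic.
```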
In the twentieth century work by Axel Thue, Carl Siegel, and Klaus Roth reduced the exponent in Liouville's work from "d" + ε to "d"/2 + 1 + ε, and finally, in 1955, to 2 + ε. This result, known as the Thue–Siegel–Roth theorem, is ostensibly the best possible, since if the exponent 2 + ε is replaced by just 2 then the result is no longer true. However, Serge Lang conjectured an improvement of Roth's result; in particular he conjectured that "q"2+ε in the denominator of the right-hand side could be reduced to formula_5.
Roth's work effectively ended the work started by Liouville, and his theorem allowed mathematicians to prove the transcendence of many more numbers, such as the Champernowne constant. The theorem is still not strong enough to detect "all" transcendental numbers, though, and many famous constants including "e" and π either are not or are not known to be very well approximable in the above sense.
Auxiliary functions: Hermite to Baker.
Fortunately other methods were pioneered in the nineteenth century to deal with the algebraic properties of "e", and consequently of π through Euler's identity. This work centred on use of the so-called auxiliary function. These are functions which typically have many zeros at the points under consideration. Here "many zeros" may mean many distinct zeros, or as few as one zero but with a high multiplicity, or even many zeros all with high multiplicity. Charles Hermite used auxiliary functions that approximated the functions formula_6 for each natural number formula_7 in order to prove the transcendence of formula_8 in 1873. His work was built upon by Ferdinand von Lindemann in the 1880s in order to prove that "e"α is transcendental for nonzero algebraic numbers α. In particular this proved that π is transcendental since "e"π"i" is algebraic, and thus answered in the negative the problem of antiquity as to whether it was possible to square the circle. Karl Weierstrass developed their work yet further and eventually proved the Lindemann–Weierstrass theorem in 1885.
In 1900 David Hilbert posed his famous collection of problems. The seventh of these, and one of the hardest in Hilbert's estimation, asked about the transcendence of numbers of the form "a""b" where "a" and "b" are algebraic, "a" is not zero or one, and "b" is irrational. In the 1930s Alexander Gelfond and Theodor Schneider proved that all such numbers were indeed transcendental using a non-explicit auxiliary function whose existence was granted by Siegel's lemma. This result, the Gelfond–Schneider theorem, proved the transcendence of numbers such as "e"π and the Gelfond–Schneider constant.
The next big result in this field occurred in the 1960s, when Alan Baker made progress on a problem posed by Gelfond on linear forms in logarithms. Gelfond himself had managed to find a non-trivial lower bound for the quantity
formula_9
where all four unknowns are algebraic, the αs being neither zero nor one and the βs being irrational. Finding similar lower bounds for the sum of three or more logarithms had eluded Gelfond, though. The proof of Baker's theorem contained such bounds, solving Gauss' class number problem for class number one in the process. This work won Baker the Fields medal for its uses in solving Diophantine equations. From a purely transcendental number theoretic viewpoint, Baker had proved that if α1, ..., α"n" are algebraic numbers, none of them zero or one, and β1, ..., β"n" are algebraic numbers such that 1, β1, ..., β"n" are linearly independent over the rational numbers, then the number
formula_10
is transcendental.
Other techniques: Cantor and Zilber.
In the 1870s, Georg Cantor started to develop set theory and, in 1874, published a paper proving that the algebraic numbers could be put in one-to-one correspondence with the set of natural numbers, and thus that the set of transcendental numbers must be uncountable. Later, in 1891, Cantor used his more familiar diagonal argument to prove the same result. While Cantor's result is often quoted as being purely existential and thus unusable for constructing a single transcendental number, the proofs in both the aforementioned papers give methods to construct transcendental numbers.
While Cantor used set theory to prove the plenitude of transcendental numbers, a recent development has been the use of model theory in attempts to prove an unsolved problem in transcendental number theory. The problem is to determine the transcendence degree of the field
formula_11
for complex numbers "x"1, ..., "x""n" that are linearly independent over the rational numbers. Stephen Schanuel conjectured that the answer is at least "n", but no proof is known. In 2004, though, Boris Zilber published a paper that used model theoretic techniques to create a structure that behaves very much like the complex numbers equipped with the operations of addition, multiplication, and exponentiation. Moreover, in this abstract structure Schanuel's conjecture does indeed hold. Unfortunately it is not yet known that this structure is in fact the same as the complex numbers with the operations mentioned; there could exist some other abstract structure that behaves very similarly to the complex numbers but where Schanuel's conjecture doesn't hold. Zilber did provide several criteria that would prove the structure in question was C, but could not prove the so-called Strong Exponential Closure axiom. The simplest case of this axiom has since been proved, but a proof that it holds in full generality is required to complete the proof of the conjecture.
Approaches.
A typical problem in this area of mathematics is to work out whether a given number is transcendental. Cantor used a cardinality argument to show that there are only countably many algebraic numbers, and hence almost all numbers are transcendental. Transcendental numbers therefore represent the typical case; even so, it may be extremely difficult to prove that a given number is transcendental (or even simply irrational).
For this reason transcendence theory often works towards a more quantitative approach. So given a particular complex number α one can ask how close α is to being an algebraic number. For example, if one supposes that the number α is algebraic then can one show that it must have very high degree or a minimum polynomial with very large coefficients? Ultimately if it is possible to show that no finite degree or size of coefficient is sufficient then the number must be transcendental. Since a number α is transcendental if and only if "P"(α) ≠ 0 for every non-zero polynomial "P" with integer coefficients, this problem can be approached by trying to find lower bounds of the form
formula_12
where the right hand side is some positive function depending on some measure "A" of the size of the coefficients of "P", and its degree "d", and such that these lower bounds apply to all "P" ≠ 0. Such a bound is called a transcendence measure.
The case of "d" = 1 is that of "classical" diophantine approximation asking for lower bounds for
formula_13.
The methods of transcendence theory and diophantine approximation have much in common: they both use the auxiliary function concept.
Major results.
The Gelfond–Schneider theorem was the major advance in transcendence theory in the period 1900–1950. In the 1960s the method of Alan Baker on linear forms in logarithms of algebraic numbers reanimated transcendence theory, with applications to numerous classical problems and diophantine equations.
Mahler's classification.
Kurt Mahler in 1932 partitioned the transcendental numbers into 3 classes, called S, T, and U. Definition of these classes draws on an extension of the idea of a Liouville number (cited above).
Measure of irrationality of a real number.
One way to define a Liouville number is to consider how small a given real number x makes linear polynomials |"qx" − "p"| without making them exactly 0. Here "p", "q" are integers with |"p"|, |"q"| bounded by a positive integer "H".
Let formula_14 be the minimum non-zero absolute value these polynomials take, and define:
formula_15
formula_16
ω("x", 1) is often called the measure of irrationality of a real number "x". For rational numbers, ω("x", 1) = 0 and is at least 1 for irrational real numbers. A Liouville number is defined to have infinite measure of irrationality. Roth's theorem says that irrational real algebraic numbers have measure of irrationality 1.
Measure of transcendence of a complex number.
Next consider the values of polynomials at a complex number "x", when these polynomials have integer coefficients, degree at most "n", and height at most "H", with "n", "H" being positive integers.
Let formula_17 be the minimum non-zero absolute value such polynomials take at formula_18, and define:
formula_19
formula_20
Suppose this is infinite for some minimum positive integer "n". A complex number "x" in this case is called a U number of degree "n".
Now we can define
formula_21
ω("x") is often called the measure of transcendence of "x". If the ω("x", "n") are bounded, then ω("x") is finite, and "x" is called an S number. If the ω("x", "n") are finite but unbounded, "x" is called a T number. "x" is algebraic if and only if ω("x") = 0.
Clearly the Liouville numbers are a subset of the U numbers. William LeVeque in 1953 constructed U numbers of any desired degree. The Liouville numbers and hence the U numbers are uncountable sets. They are sets of measure 0.
T numbers also comprise a set of measure 0. It took about 35 years to show their existence. Wolfgang M. Schmidt in 1968 showed that examples exist. However, almost all complex numbers are S numbers. Mahler proved that the exponential function sends all non-zero algebraic numbers to S numbers: this shows that "e" is an S number and gives a proof of the transcendence of π. This number π is known not to be a U number. Many other transcendental numbers remain unclassified.
Two numbers "x", "y" are called algebraically dependent if there is a non-zero polynomial "P" in two indeterminates with integer coefficients such that "P"("x", "y") = 0. There is a powerful theorem that two complex numbers that are algebraically dependent belong to the same Mahler class. This allows construction of new transcendental numbers, such as the sum of a Liouville number with "e" or π.
The symbol S probably stood for the name of Mahler's teacher Carl Ludwig Siegel, and T and U are just the next two letters.
Koksma's equivalent classification.
Jurjen Koksma in 1939 proposed another classification based on approximation by algebraic numbers.
Consider the approximation of a complex number "x" by algebraic numbers of degree ≤ "n" and height ≤ "H". Let α be an algebraic number of this finite set such that |"x" − α| has the minimum positive value. Define ω*("x", "H", "n") and ω*("x", "n") by:
formula_22
formula_23
If for a smallest positive integer "n", ω*("x", "n") is infinite, "x" is called a U*-number of degree "n".
If the ω*("x", "n") are bounded and do not converge to 0, "x" is called an S*-number,
A number "x" is called an A*-number if the ω*("x", "n") converge to 0.
If the ω*("x", "n") are all finite but unbounded, "x" is called a T*-number,
Koksma's and Mahler's classifications are equivalent in that they divide the transcendental numbers into the same classes. The "A*"-numbers are the algebraic numbers.
LeVeque's construction.
Let
formula_24
It can be shown that the "n"th root of λ (a Liouville number) is a U-number of degree "n".
This construction can be improved to create an uncountable family of U-numbers of degree "n". Let "Z" be the set consisting of every other power of 10 in the series above for λ. The set of all subsets of "Z" is uncountable. Deleting any of the subsets of "Z" from the series for λ creates uncountably many distinct Liouville numbers, whose "n"th roots are U-numbers of degree "n".
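The series defining λ converges extremely fast, which is what the construction exploits. The sketch below (a hand-rolled illustration; the precision setting and helper name are arbitrary choices) evaluates a truncation of λ and shows how closely the partial sums, which are rationals whose denominators divide 3 · 10^("m"!), approximate it — the hallmark of a Liouville number.

```python
from decimal import Decimal, getcontext
from math import factorial

getcontext().prec = 150

def lam_partial(m):
    """Partial sum 1/3 + sum_{k=1}^{m} 10**(-k!) of the series for lambda."""
    return Decimal(1) / Decimal(3) + sum(Decimal(10) ** -factorial(k) for k in range(1, m + 1))

# Truncating after k = 5 changes nothing in the first hundred or so digits.
lam = lam_partial(5)

# lam_partial(m) is a rational whose denominator divides 3 * 10**(m!), yet it agrees
# with lambda to within about 10**(-(m+1)!); approximations this good violate the
# lower bounds satisfied by algebraic numbers (Liouville's theorem).
for m in range(1, 5):
    print(m, abs(lam - lam_partial(m)))
```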
Type.
The supremum of the sequence {ω("x", "n")} is called the type. Almost all real numbers are S numbers of type 1, which is minimal for real S numbers. Almost all complex numbers are S numbers of type 1/2, which is also minimal. These statements about almost all numbers were conjectured by Mahler and proved in 1965 by Vladimir Sprindzhuk.
Open problems.
While the Gelfond–Schneider theorem proved that a large class of numbers was transcendental, this class was still countable. Many well-known mathematical constants are still not known to be transcendental, and in some cases it is not even known whether they are rational or irrational. A partial list can be found here.
A major problem in transcendence theory is showing that a particular set of numbers is algebraically independent rather than just showing that individual elements are transcendental. So while we know that "e" and "π" are transcendental that doesn't imply that "e" + "π" is transcendental, nor other combinations of the two (except "e"π, Gelfond's constant, which is known to be transcendental). Another major problem is dealing with numbers that are not related to the exponential function. The main results in transcendence theory tend to revolve around "e" and the logarithm function, which means that wholly new methods tend to be required to deal with numbers that cannot be expressed in terms of these two objects in an elementary fashion.
Schanuel's conjecture would solve the first of these problems somewhat as it deals with algebraic independence and would indeed confirm that "e" + "π" is transcendental. It still revolves around the exponential function, however, and so would not necessarily deal with numbers such as Apéry's constant or the Euler–Mascheroni constant. Another extremely difficult unsolved problem is the so-called constant or identity problem.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P"
},
{
"math_id": 1,
"text": "\\alpha"
},
{
"math_id": 2,
"text": "P(\\alpha)=0"
},
{
"math_id": 3,
"text": "P(\\alpha)=0?"
},
{
"math_id": 4,
"text": "\\left|\\alpha-\\frac{p}{q}\\right|<\\frac{1}{q^{d+\\varepsilon}}"
},
{
"math_id": 5,
"text": "q^{2}(\\log q)^{1+ \\epsilon}"
},
{
"math_id": 6,
"text": "e^{kx}"
},
{
"math_id": 7,
"text": "k"
},
{
"math_id": 8,
"text": "e"
},
{
"math_id": 9,
"text": "|\\beta_1\\log\\alpha_1 +\\beta_2\\log\\alpha_2|\\,"
},
{
"math_id": 10,
"text": "\\alpha_1^{\\beta_1}\\alpha_2^{\\beta_2}\\cdots\\alpha_n^{\\beta_n}"
},
{
"math_id": 11,
"text": "K=\\mathbb{Q}(x_1,\\ldots,x_n,e^{x_1},\\ldots,e^{x_n})"
},
{
"math_id": 12,
"text": " |P(a)| > F(A,d) "
},
{
"math_id": 13,
"text": "|ax + b|"
},
{
"math_id": 14,
"text": "m(x, 1, H)"
},
{
"math_id": 15,
"text": "\\omega(x, 1, H) = -\\frac{\\log m(x, 1, H)}{\\log H}"
},
{
"math_id": 16,
"text": "\\omega(x, 1) = \\limsup_{H\\to\\infty}\\, \\omega(x,1,H)."
},
{
"math_id": 17,
"text": "m(x, n, H)"
},
{
"math_id": 18,
"text": "x"
},
{
"math_id": 19,
"text": "\\omega(x, n, H) = -\\frac{\\log m(x, n, H)}{n\\log H}"
},
{
"math_id": 20,
"text": "\\omega(x, n) = \\limsup_{H\\to\\infty}\\, \\omega(x,n,H)."
},
{
"math_id": 21,
"text": "\\omega (x) = \\limsup_{n\\to\\infty}\\, \\omega(x,n)."
},
{
"math_id": 22,
"text": "|x-\\alpha| = H^{-n\\omega^*(x,H,n)-1}."
},
{
"math_id": 23,
"text": "\\omega^*(x,n) = \\limsup_{H\\to\\infty}\\, \\omega^*(x,n,H)."
},
{
"math_id": 24,
"text": "\\lambda= \\tfrac{1}{3} + \\sum_{k=1}^\\infty 10^{-k!}."
}
] | https://en.wikipedia.org/wiki?curid=1157698 |
1157819 | Virial coefficient | Expansion coefficients in statistical mechanics
Virial coefficients formula_0 appear as coefficients in the virial expansion of the pressure of a many-particle system in powers of the density, providing systematic corrections to the ideal gas law. They are characteristic of the interaction potential between the particles and in general depend on the temperature. The second virial coefficient formula_1 depends only on the pair interaction between the particles, the third (formula_2) depends on 2- and non-additive 3-body interactions, and so on.
Derivation.
The first step in obtaining a closed expression for virial coefficients is a cluster expansion of the grand canonical partition function
formula_3
Here formula_4 is the pressure, formula_5 is the volume of the vessel containing the particles, formula_6 is the Boltzmann constant, formula_7 is the absolute temperature, formula_8 is the fugacity, with formula_9 the chemical potential. The quantity formula_10 is the canonical partition function of a subsystem of formula_11 particles:
formula_12
Here formula_13 is the Hamiltonian (energy operator) of a subsystem of formula_11 particles. The Hamiltonian is a sum of the kinetic energies of the particles and the total formula_11-particle potential energy (interaction energy). The latter includes pair interactions and possibly 3-body and higher-body interactions. The grand partition function formula_14 can be expanded in a sum of contributions from one-body, two-body, etc. clusters. The virial expansion is obtained from this expansion by observing that formula_15 equals formula_16. In this manner one derives
formula_17
formula_18.
These are quantum-statistical expressions containing kinetic energies. Note that the one-particle partition function formula_19 contains only a kinetic energy term. In the classical limit formula_20 the kinetic energy operators commute with the potential operators and the kinetic energies in numerator and denominator cancel mutually. The trace (tr) becomes an integral over the configuration space. It follows that classical virial coefficients depend on the interactions between the particles only and are given as integrals over the particle coordinates.
The derivation of higher than formula_2 virial coefficients becomes quickly a complex combinatorial problem. Making the classical approximation and
neglecting non-additive interactions (if present), the combinatorics can be handled graphically as first shown by Joseph E. Mayer and Maria Goeppert-Mayer.
They introduced what is now known as the Mayer function:
formula_21
and wrote the cluster expansion in terms of these functions. Here formula_22 is the interaction potential between particles 1 and 2 (which are assumed to be identical particles).
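As a minimal illustration (the helper names below are our own, and hard spheres are chosen only because the result is simple), the Mayer function can be evaluated directly from a pair potential; for hard spheres it equals −1 inside the core and 0 outside, at any temperature.

```python
import math

def mayer_f(r, u, kT=1.0):
    """Mayer function f(r) = exp(-u(r)/kT) - 1 for a pair potential u at temperature kT."""
    return math.exp(-u(r) / kT) - 1.0

def hard_sphere(sigma):
    """Hard-sphere pair potential of diameter sigma: infinite overlap energy inside the core."""
    return lambda r: math.inf if r < sigma else 0.0

u = hard_sphere(1.0)
print([mayer_f(r, u) for r in (0.5, 0.9, 1.1, 2.0)])   # [-1.0, -1.0, 0.0, 0.0]
```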
Definition in terms of graphs.
The virial coefficients formula_0 are related to the irreducible Mayer cluster integrals formula_23 through
formula_24
The latter are concisely defined in terms of graphs.
formula_25
The rule for turning these graphs into integrals is as follows: label the white vertex formula_26 and the black vertices formula_27, assign a particle coordinate to each vertex, associate a Mayer function with every bond joining two vertices, and integrate over the coordinates of all the black vertices while holding the coordinate of the white vertex fixed; summing the resulting integrals over all such graphs gives formula_23.
The first cluster integral, for example, is simply the integral of the Mayer function over the coordinate of the single black vertex, the white vertex being held fixed.
The expression of the second virial coefficient is thus:
formula_28
where particle 2 was assumed to define the origin (formula_29).
This classical expression for the second virial coefficient was first derived by Leonard Ornstein in his 1908 Leiden University Ph.D. thesis.
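This integral is straightforward to evaluate numerically for a given pair potential. The sketch below (a bare-bones midpoint rule with parameter choices of our own, not a vetted numerical routine) reproduces the exact hard-sphere value 2πσ³/3 and shows the temperature dependence of formula_1 for a Lennard-Jones potential, including the sign change near the Boyle temperature.

```python
import math

def b2(u, kT=1.0, r_max=10.0, n=100_000):
    """B2 = -2*pi * integral_0^infinity r^2 * (exp(-u(r)/kT) - 1) dr,
    approximated by a midpoint rule truncated at r_max."""
    h = r_max / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * h
        total += r * r * (math.exp(-u(r) / kT) - 1.0)
    return -2.0 * math.pi * h * total

hard_sphere = lambda r: math.inf if r < 1.0 else 0.0                  # sigma = 1
lennard_jones = lambda r: 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)    # epsilon = sigma = 1

print(b2(hard_sphere), 2.0 * math.pi / 3.0)    # numerical value vs exact 2*pi*sigma^3/3
for kT in (1.0, 2.0, 3.4, 5.0):                # B2 changes sign near kT/epsilon ~ 3.4 (Boyle temperature)
    print(kT, b2(lennard_jones, kT=kT))
```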
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "B_i"
},
{
"math_id": 1,
"text": "B_2"
},
{
"math_id": 2,
"text": "B_3"
},
{
"math_id": 3,
"text": " \\Xi = \\sum_{n}{\\lambda^{n}Q_{n}} = e^{\\left(pV\\right)/\\left(k_\\text{B}T\\right)}"
},
{
"math_id": 4,
"text": "p"
},
{
"math_id": 5,
"text": "V"
},
{
"math_id": 6,
"text": "k_\\text{B}"
},
{
"math_id": 7,
"text": "T"
},
{
"math_id": 8,
"text": "\\lambda =\\exp[\\mu/(k_\\text{B}T)] "
},
{
"math_id": 9,
"text": "\\mu"
},
{
"math_id": 10,
"text": "Q_n"
},
{
"math_id": 11,
"text": "n"
},
{
"math_id": 12,
"text": " Q_n = \\operatorname{tr} [ e^{- H(1,2,\\ldots,n)/(k_\\text{B} T)} ]. "
},
{
"math_id": 13,
"text": "H(1,2,\\ldots,n)"
},
{
"math_id": 14,
"text": "\\Xi"
},
{
"math_id": 15,
"text": " \\ln \\Xi "
},
{
"math_id": 16,
"text": "p V / (k_B T )"
},
{
"math_id": 17,
"text": " B_2 = V \\left(\\frac{1}{2}-\\frac{Q_2}{Q_1^2}\\right) "
},
{
"math_id": 18,
"text": " B_3 = V^2 \\left[ \\frac{2Q_2}{Q_1^2}\\Big( \\frac{2Q_2}{Q_1^2}-1\\Big) -\\frac{1}{3}\\Big(\\frac{6Q_3}{Q_1^3}-1\\Big)\n\\right] "
},
{
"math_id": 19,
"text": "Q_1"
},
{
"math_id": 20,
"text": "\\hbar = 0"
},
{
"math_id": 21,
"text": "f(1,2) = \\exp\\left[- \\frac{u(|\\vec{r}_1- \\vec{r}_2|)}{k_B T}\\right] - 1 "
},
{
"math_id": 22,
"text": "u(|\\vec{r}_1- \\vec{r}_2|)"
},
{
"math_id": 23,
"text": "\\beta_i"
},
{
"math_id": 24,
"text": "B_{i+1}=-\\frac{i}{i+1}\\beta_i"
},
{
"math_id": 25,
"text": "\\beta_i=\\mbox{The sum of all connected, irreducible graphs with one white and}\\ i\\ \\mbox{black vertices}"
},
{
"math_id": 26,
"text": "k=0"
},
{
"math_id": 27,
"text": "k=1,..,i"
},
{
"math_id": 28,
"text": "B_2 = -2\\pi \\int r^2 {\\Big( e^{-u(r)/(k_\\text{B}T)} - 1 \\Big)} ~ \\mathrm{d}r ,"
},
{
"math_id": 29,
"text": " \\vec{r}_2 = \\vec{0} "
},
{
"math_id": 30,
"text": "B_{2}"
}
] | https://en.wikipedia.org/wiki?curid=1157819 |
1157842 | Hasse–Weil zeta function | Mathematical function associated to algebraic varieties
In mathematics, the Hasse–Weil zeta function attached to an algebraic variety "V" defined over an algebraic number field "K" is a meromorphic function on the complex plane defined in terms of the number of points on the variety after reducing modulo each prime number "p". It is a global "L"-function defined as an Euler product of local zeta functions.
Hasse–Weil "L"-functions form one of the two major classes of global "L"-functions, alongside the "L"-functions associated to automorphic representations. Conjecturally, these two types of global "L"-functions are actually two descriptions of the same type of global "L"-function; this would be a vast generalisation of the Taniyama-Weil conjecture, itself an important result in number theory.
For an elliptic curve over a number field "K", the Hasse–Weil zeta function is conjecturally related to the group of rational points of the elliptic curve over "K" by the Birch and Swinnerton-Dyer conjecture.
Definition.
The description of the Hasse–Weil zeta function "up to finitely many factors of its Euler product" is relatively simple. This follows the initial suggestions of Helmut Hasse and André Weil, motivated by the Riemann zeta function, which results from the case when "V" is a single point.
Taking the case of "K" the rational number field formula_0, and "V" a non-singular projective variety, we can for almost all prime numbers "p" consider the reduction of "V" modulo "p", an algebraic variety "V""p" over the finite field formula_1 with "p" elements, just by reducing equations for "V". Scheme-theoretically, this reduction is just the pullback of "V" along the canonical map Spec formula_1 → Spec formula_2. Again for almost all "p" it will be non-singular. We define a Dirichlet series of the complex variable "s",
formula_3
which is the infinite product of the local zeta functions
formula_4
where "Nk" is the number of points of "V" defined over the finite field extension formula_5 of formula_1.
This formula_6 is well-defined only up to multiplication by rational functions in formula_7 for finitely many primes "p".
Since the indeterminacy consists of finitely many rational functions of formula_7, which are meromorphic everywhere, it is relatively harmless, and there is a sense in which the properties of "Z"("s") do not essentially depend on it. In particular, while the exact form of the functional equation for "Z"("s"), reflecting in a vertical line in the complex plane, will definitely depend on the 'missing' factors, the existence of some such functional equation does not.
A more refined definition became possible with the development of étale cohomology; this neatly explains what to do about the missing, 'bad reduction' factors. According to general principles visible in ramification theory, 'bad' primes carry good information (theory of the "conductor"). This manifests itself in the étale theory in the Ogg–Néron–Shafarevich criterion for good reduction; namely that there is good reduction, in a definite sense, at all primes "p" for which the Galois representation ρ on the étale cohomology groups of "V" is "unramified". For those, the definition of local zeta function can be recovered in terms of the characteristic polynomial of
formula_8
Frob("p") being a Frobenius element for "p". What happens at the ramified "p" is that ρ is non-trivial on the inertia group "I"("p") for "p". At those primes the definition must be 'corrected', taking the largest quotient of the representation ρ on which the inertia group acts by the trivial representation. With this refinement, the definition of "Z"("s") can be upgraded successfully from 'almost all' "p" to "all" "p" participating in the Euler product. The consequences for the functional equation were worked out by Serre and Deligne in the later 1960s; the functional equation itself has not been proved in general.
Hasse–Weil conjecture.
The Hasse–Weil conjecture states that the Hasse–Weil zeta function should extend to a meromorphic function for all complex "s", and should satisfy a functional equation similar to that of the Riemann zeta function. For elliptic curves over the rational numbers, the Hasse–Weil conjecture follows from the modularity theorem.
Birch and Swinnerton-Dyer conjecture.
The Birch and Swinnerton-Dyer conjecture states that the rank of the abelian group "E"("K") of points of an elliptic curve "E" is the order of the zero of the Hasse–Weil "L"-function "L"("E", "s") at "s" = 1, and that the first non-zero coefficient in the Taylor expansion of "L"("E", "s") at "s" = 1 is given by more refined arithmetic data attached to "E" over "K". The conjecture is one of the seven Millennium Prize Problems listed by the Clay Mathematics Institute, which has offered a $1,000,000 prize for the first correct proof.
Elliptic curves over Q.
An elliptic curve is a specific type of variety. Let "E" be an elliptic curve over Q of conductor "N". Then, "E" has good reduction at all primes "p" not dividing "N", it has multiplicative reduction at the primes "p" that "exactly" divide "N" (i.e. such that "p" divides "N", but "p"2 does not; this is written "p" || "N"), and it has additive reduction elsewhere (i.e. at the primes where "p"2 divides "N"). The Hasse–Weil zeta function of "E" then takes the form
formula_9
Here, ζ("s") is the usual Riemann zeta function and "L"("E", "s") is called the "L"-function of "E"/Q, which takes the form
formula_10
where, for a given prime "p",
formula_11
where in the case of good reduction "a""p" is "p" + 1 − (number of points of "E" mod "p"), and in the case of multiplicative reduction "a""p" is ±1 depending on whether "E" has split (plus sign) or non-split (minus sign) multiplicative reduction at "p". A multiplicative reduction of curve "E" by the prime "p" is said to be split if -c6 is a square in the finite field with p elements.
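For a concrete example, the local data entering "L"("E", "s") at the good primes can be computed by brute force. The sketch below uses the curve "y"2 = "x"3 − "x" (chosen only for illustration; its discriminant is 64, so it has bad reduction only at "p" = 2) and counts points over formula_1 to obtain "a""p" = "p" + 1 − #"E"(F"p"); the helper name is ours, not a library routine.

```python
def count_points(a, b, p):
    """Number of points of y^2 = x^3 + a*x + b over F_p, including the point at
    infinity (p an odd prime at which the curve has good reduction)."""
    square_roots = {}
    for y in range(p):
        square_roots.setdefault(y * y % p, []).append(y)
    total = 1                                   # the point at infinity
    for x in range(p):
        rhs = (x * x * x + a * x + b) % p
        total += len(square_roots.get(rhs, []))
    return total

# E: y^2 = x^3 - x has discriminant -16*(4*(-1)**3 + 27*0**2) = 64, so only p = 2 is bad.
a, b = -1, 0
for p in (3, 5, 7, 11, 13, 17, 19, 23, 29):
    ap = p + 1 - count_points(a, b, p)
    print(p, ap)        # the Hasse bound guarantees |a_p| <= 2*sqrt(p)
```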
There is a useful relation not using the conductor:
1. If "p" doesn't divide formula_12 (where formula_12 is the discriminant of the elliptic curve) then "E" has good reduction at "p".
2. If "p" divides formula_12 but not formula_13 then "E" has multiplicative bad reduction at "p".
3. If "p" divides both formula_12 and formula_13 then "E" has additive bad reduction at "p". | [
{
"math_id": 0,
"text": "\\mathbb{Q}"
},
{
"math_id": 1,
"text": "\\mathbb{F}_{p}"
},
{
"math_id": 2,
"text": "\\mathbb{Z}"
},
{
"math_id": 3,
"text": "Z_{V\\!,\\mathbb{Q}}(s) = \\prod_{p} Z_{V\\!,\\,p}(p^{-s}), "
},
{
"math_id": 4,
"text": "Z_{V\\!,\\,p}(p^{-s}) = \\exp\\left(\\sum_{k = 1}^\\infty \\frac{N_k}{k} (p^{-s})^k\\right)"
},
{
"math_id": 5,
"text": "\\mathbb{F}_{p^k}"
},
{
"math_id": 6,
"text": "Z_{V\\!,\\mathbb{Q}}(s)"
},
{
"math_id": 7,
"text": "p^{-s}"
},
{
"math_id": 8,
"text": "\\rho(\\operatorname{Frob}(p)),"
},
{
"math_id": 9,
"text": "Z_{V\\!,\\mathbb{Q}}(s)= \\frac{\\zeta(s)\\zeta(s-1)}{L(E,s)}. \\,"
},
{
"math_id": 10,
"text": "L(E,s)=\\prod_pL_p(E,s)^{-1} \\,"
},
{
"math_id": 11,
"text": "L_p(E,s)=\\begin{cases}\n (1-a_pp^{-s}+p^{1-2s}), & \\text{if } p\\nmid N \\\\\n (1-a_pp^{-s}), & \\text{if }p\\mid N \\text{ and } p^2 \\nmid N \\\\\n 1, & \\text{if }p^2\\mid N\n \\end{cases}"
},
{
"math_id": 12,
"text": "\\Delta"
},
{
"math_id": 13,
"text": "c_4"
}
] | https://en.wikipedia.org/wiki?curid=1157842 |
11578785 | Jordan–Chevalley decomposition | In mathematics, specifically linear algebra, the Jordan–Chevalley decomposition, named after Camille Jordan and Claude Chevalley, expresses a linear operator in a unique way as the sum of two other linear operators which are simpler to understand. Specifically, one part is potentially diagonalisable and the other is nilpotent. The two parts are polynomials in the operator, which makes them behave nicely in algebraic manipulations.
The decomposition has a short description when the Jordan normal form of the operator is given, but it exists under weaker hypotheses than are needed for the existence of a Jordan normal form. Hence the Jordan–Chevalley decomposition can be seen as a generalisation of the Jordan normal form, which is also reflected in several proofs of it.
It is closely related to the Wedderburn principal theorem about associative algebras, which also leads to several analogues in Lie algebras. Analogues of the Jordan–Chevalley decomposition also exist for elements of linear algebraic groups and Lie groups via a multiplicative reformulation. The decomposition is an important tool in the study of all of these objects, and was developed for this purpose.
In many texts, the potentially diagonalisable part is also characterised as the semisimple part.
Introduction.
A basic question in linear algebra is whether an operator on a finite-dimensional vector space can be diagonalised. For example, this is closely related to the eigenvalues of the operator. In several contexts, one may be dealing with many operators which are not diagonalisable. Even over an algebraically closed field, a diagonalisation may not exist. In this context, the Jordan normal form achieves the best possible result akin to a diagonalisation. For linear operators over a field which is not algebraically closed, there may be no eigenvector at all. This latter point is not the main concern dealt with by the Jordan–Chevalley decomposition. To avoid this problem, instead "potentially diagonalisable operators" are considered, which are those that admit a diagonalisation over some field (or equivalently over the algebraic closure of the field under consideration).
The operators which are "the furthest away" from being diagonalisable are nilpotent operators. An operator (or more generally an element of a ring) formula_0 is said to be "nilpotent" when there is some positive integer formula_1 such that formula_2. In several contexts in abstract algebra, it is the case that the presence of nilpotent elements of a ring make them much more complicated to work with. To some extent, this is also the case for linear operators. The Jordan–Chevalley decomposition "separates out" the nilpotent part of an operator which causes it to be not potentially diagonalisable. So when it exists, the complications introduced by nilpotent operators and their interaction with other operators can be understood using the Jordan–Chevalley decomposition.
Historically, the Jordan–Chevalley decomposition was motivated by the applications to the theory of Lie algebras and linear algebraic groups, as described in sections below.
Decomposition of a linear operator.
Let formula_3 be a field, formula_4 a finite-dimensional vector space over formula_3, and formula_5 a linear operator over formula_4 (equivalently, a matrix with entries from formula_3). If the minimal polynomial of formula_5 splits over formula_3 (for example if formula_3 is algebraically closed), then formula_5 has a Jordan normal form formula_6. If formula_7 is the diagonal of formula_8, let formula_9 be the remaining part. Then formula_10 is a decomposition where formula_11 is diagonalisable and formula_12 is nilpotent. This restatement of the normal form as an additive decomposition not only makes the numerical computation more stable, but can be generalised to cases where the minimal polynomial of formula_5 does not split.
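The following SymPy sketch (the example matrix is arbitrary, and the reliance on "jordan_form" is just one convenient way to obtain formula_6, not a prescription) carries out this splitting and checks the expected properties.

```python
import sympy as sp

# An example matrix that is not diagonalisable (the eigenvalue 4 has a Jordan block of size 2).
T = sp.Matrix([[5, 4, 2, 1],
               [0, 1, -1, -1],
               [-1, -1, 3, 0],
               [1, 1, -1, 2]])

S, J = T.jordan_form()                           # T = S * J * S**(-1)
D = sp.diag(*[J[i, i] for i in range(J.rows)])   # diagonal part of J
R = J - D                                        # remaining (nilpotent) part of J

T_s = S * D * S.inv()                            # diagonalisable part
T_n = S * R * S.inv()                            # nilpotent part

assert T_s + T_n == T
assert T_s * T_n == T_n * T_s                    # the two parts commute
assert T_n ** 4 == sp.zeros(4, 4)                # the second part is nilpotent
```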
If the minimal polynomial of formula_5 splits into "distinct" linear factors, then formula_5 is diagonalisable. Therefore, if the minimal polynomial of formula_5 is at least separable, then formula_5 is potentially diagonalisable. The Jordan–Chevalley decomposition is concerned with the more general case where the minimal polynomial of formula_5 is a product of separable polynomials.
Let formula_13 be any linear operator on the finite-dimensional vector space formula_4 over the field formula_3. A Jordan–Chevalley decomposition of formula_0 is an expression of it as a sum
formula_14 ,
where formula_15 is potentially diagonalisable, formula_16 is nilpotent, and formula_17.
<templatestyles src="Math_theorem/styles.css" />
Jordan-Chevalley decomposition —
Let formula_13 be any operator on the finite-dimensional vector space formula_4 over the field formula_3. Then formula_0 admits a Jordan-Chevalley decomposition if and only if the minimal polynomial of formula_0 is a product of separable polynomials. Moreover, in this case, there is a unique Jordan-Chevalley decomposition, and formula_15 (and hence also formula_16) can be written as a polynomial (with coefficients from formula_3) in formula_0 with zero constant coefficient.
Several proofs are discussed in (). Two arguments are also described below.
If formula_3 is a perfect field, then every polynomial is a product of separable polynomials (since every polynomial is a product of its irreducible factors, and these are separable over a perfect field). So in this case, the Jordan–Chevalley decomposition always exists. Moreover, over a perfect field, a polynomial is separable if and only if it is square-free. Therefore an operator is potentially diagonalisable if and only if its minimal polynomial is square-free. In general (over any field), the minimal polynomial of a linear operator is square-free if and only if the operator is semisimple. (In particular, the sum of two commuting semisimple operators is always semisimple over a perfect field. The same statement is not true over general fields.) The property of being semisimple is more relevant than being potentially diagonalisable in most contexts where the Jordan–Chevalley decomposition is applied, such as for Lie algebras. For these reasons, many texts restrict to the case of perfect fields.
Proof of uniqueness and necessity.
That formula_15 and formula_16 are polynomials in formula_0 implies in particular that they commute with any operator that commutes with formula_0. This observation underlies the uniqueness proof.
Let formula_18 be a Jordan–Chevalley decomposition in which formula_15 and (hence also) formula_16 are polynomials in formula_0. Let formula_19 be any Jordan–Chevalley decomposition. Then formula_20, and formula_21 both commute with formula_0, hence with formula_22 since these are polynomials in formula_23. The sum of commuting nilpotent operators is again nilpotent, and the sum of commuting potentially diagonalisable operators again potentially diagonalisable (because they are simultaneously diagonalizable over the algebraic closure of formula_3). Since the only operator which is both potentially diagonalisable and nilpotent is the zero operator it follows that formula_24.
To show that the condition that formula_0 have a minimal polynomial which is a product of separable polynomials is necessary, suppose that formula_14 is some Jordan–Chevalley decomposition. Letting formula_25 be the separable minimal polynomial of formula_15, one can check using the binomial theorem that formula_26 can be written as formula_27 where formula_28 is some polynomial in formula_29. Moreover, for some formula_30, formula_31. Thus formula_32 and so the minimal polynomial of formula_0 must divide formula_33. As formula_33 is a product of separable polynomials (namely of copies of formula_25), so is the minimal polynomial.
Concrete example for non-existence.
If the ground field is not perfect, then a Jordan–Chevalley decomposition may not exist, as it is possible that the minimal polynomial is not a product of separable polynomials. The simplest such example is the following. Let formula_25 be a prime number, let formula_34 be an imperfect field of characteristic formula_35 (e.g. formula_36) and choose formula_37 that is not a formula_38th power. Let formula_39 let formula_40 be the image of "X" in the quotient, and let formula_41 be the formula_34-linear operator given by multiplication by formula_23 on formula_42. Note that the minimal polynomial of formula_41 is precisely formula_43, which is inseparable and a square. By the necessity of the condition for the Jordan–Chevalley decomposition (as shown in the last section), this operator does not have a Jordan–Chevalley decomposition. It can be instructive to see concretely why there is at least no decomposition into a square-free and a nilpotent part.
If the same construction is performed with the polynomial formula_46 instead of formula_43, the resulting operator formula_5 still does not admit a Jordan–Chevalley decomposition, by the main theorem. However, formula_5 is semisimple. The trivial decomposition formula_47 hence expresses formula_5 as the sum of a semisimple operator and a nilpotent operator, both of which are polynomials in formula_5.
Elementary proof of existence.
This construction is similar to Hensel's lemma in that it uses an algebraic analogue of Taylor's theorem to find an element with a certain algebraic property via a variant of Newton's method. In this form, it is taken from ().
Let formula_0 have minimal polynomial formula_25 and assume this is a product of separable polynomials. This condition is equivalent to demanding that there is some separable formula_48 such that formula_49 and formula_50 for some formula_1. By the Bézout lemma, there are polynomials formula_51 and formula_52 such that formula_53. This can be used to define a recursion formula_54, starting with formula_55. Letting formula_56 be the algebra of operators which are polynomials in formula_23, it can be checked by induction that for all formula_44:
1. formula_57, i.e. each iterate is itself a polynomial in formula_23.
2. formula_58: indeed, formula_59, and both summands lie in formula_60 (the first by the definition of the recursion together with the third point, the second by the induction hypothesis).
3. formula_61: by the algebraic analogue of Taylor's theorem, formula_62 for some formula_63; inserting the definition of formula_64 and using formula_53 turns this into formula_65, which lies in formula_66 by the induction hypothesis.
Thus, as soon as formula_67, formula_68 by the third point since formula_69 and formula_70, so the minimal polynomial of formula_71 will divide formula_72 and hence be separable. Moreover, formula_71 will be a polynomial in formula_73 by the first point and formula_74 will be nilpotent by the second point (in fact, formula_75). Therefore, formula_76 is then the Jordan–Chevalley decomposition of formula_73. Q.E.D.
This proof, besides being completely elementary, has the advantage that it is algorithmic: By the Cayley–Hamilton theorem, formula_77 can be taken to be the characteristic polynomial of formula_73, and in many contexts, formula_72 can be determined from formula_25. Then formula_78 can be determined using the Euclidean algorithm. The iteration of applying the polynomial formula_79 to the matrix then can be performed until either formula_80 (because then all later values will be equal) or formula_81 exceeds the dimension of the vector space on which formula_73 is defined (where formula_82 is the number of iteration steps performed, as above).
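A direct transcription of this algorithm into SymPy might look as follows; the helper names and the example matrix are our own choices, and the characteristic polynomial is used in place of the minimal polynomial, as suggested above.

```python
import sympy as sp

def poly_at(expr, t, M):
    """Evaluate a polynomial in t at the square matrix M (Horner scheme)."""
    result = sp.zeros(*M.shape)
    for c in sp.Poly(expr, t).all_coeffs():
        result = result * M + c * sp.eye(M.rows)
    return result

def jordan_chevalley(X):
    """Additive Jordan-Chevalley decomposition over QQ via the Newton-like iteration
    x_{n+1} = x_n - v(x_n)*q(x_n). Returns (potentially diagonalisable part, nilpotent part)."""
    t = sp.symbols('t')
    p = X.charpoly(t).as_expr()                             # Cayley-Hamilton: p(X) = 0
    q = sp.Mul(*[f for f, _ in sp.factor_list(p, t)[1]])    # product of the distinct irreducible factors
    u, v, one = sp.gcdex(q, sp.diff(q, t), t)               # u*q + v*q' = 1 (q is separable over QQ)
    xs = X
    for _ in range(X.rows):                                 # 2**n >= deg(p) is reached quickly
        qxs = poly_at(q, t, xs)
        if qxs == sp.zeros(*X.shape):
            break
        xs = xs - poly_at(v, t, xs) * qxs
    return xs, X - xs

# Example with no rational eigenvalues: characteristic polynomial (t**2 + 1)**2.
X = sp.Matrix([[0, -1, 1, 0],
               [1,  0, 0, 1],
               [0,  0, 0, -1],
               [0,  0, 1, 0]])
S_part, N_part = jordan_chevalley(X)
assert S_part + N_part == X and S_part * N_part == N_part * S_part
assert N_part ** 2 == sp.zeros(4, 4)                        # nilpotent part
assert S_part ** 2 + sp.eye(4) == sp.zeros(4, 4)            # S_part satisfies the separable t**2 + 1
```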
Proof of existence via Galois theory.
This proof, or variants of it, is commonly used to establish the Jordan–Chevalley decomposition. It has the advantage that it is very direct and describes quite precisely how close one can get to a Jordan–Chevalley decomposition: If formula_83 is the splitting field of the minimal polynomial of formula_73 and formula_84 is the group of automorphisms of formula_83 that fix the base field formula_85, then the set formula_86 of elements of formula_83 that are fixed by all elements of formula_84 is a field with inclusions formula_87 (see Galois correspondence). Below it is argued that formula_73 admits a Jordan–Chevalley decomposition over formula_86, but not over any smaller field. This argument does not use Galois theory. However, Galois theory is required to deduce from this the condition for the existence of the Jordan–Chevalley decomposition given above.
Above it was observed that if formula_0 has a Jordan normal form (i.e. if the minimal polynomial of formula_0 splits), then it has a Jordan–Chevalley decomposition. In this case, one can also see directly that formula_16 (and hence also formula_15) is a polynomial in formula_0. Indeed, it suffices to check this for the decomposition of the Jordan matrix formula_88. This is a technical argument, but does not require any tricks beyond the Chinese remainder theorem.
This fact can be used to deduce the Jordan–Chevalley decomposition in the general case. Let formula_92 be the splitting field of the minimal polynomial of formula_0, so that formula_0 does admit a Jordan normal form over formula_92. Then, by the argument just given, formula_0 has a Jordan–Chevalley decomposition formula_93 where formula_94 is a polynomial with coefficients from formula_92, formula_95 is diagonalisable (over formula_92) and formula_96 is nilpotent.
Let formula_97 be a field automorphism of formula_92 which fixes formula_3. Then
formula_98
Here formula_99 is a polynomial in formula_23, and so is formula_101; thus formula_100 and formula_101 commute. Also, formula_102 is potentially diagonalisable and formula_103 is nilpotent. Thus, by the uniqueness of the Jordan–Chevalley decomposition (over formula_104), formula_105 and hence also formula_101 equals formula_96. Therefore, by definition, formula_29 are endomorphisms (represented by matrices) over formula_106. Finally, since formula_107 contains an formula_104-basis that spans the space containing formula_22, by the same argument, we also see that formula_94 has coefficients in formula_106. Q.E.D.
If the minimal polynomial of formula_0 is a product of separable polynomials, then the field extension formula_108 is Galois, meaning that formula_109.
Relations to the theory of algebras.
Separable algebras.
The Jordan–Chevalley decomposition is very closely related to the Wedderburn principal theorem in the following formulation:
<templatestyles src="Math_theorem/styles.css" />
Wedderburn principal theorem — Let formula_110 be a finite-dimensional associative algebra over the field formula_3 with Jacobson radical formula_8. Then formula_111 is separable if and only if formula_110 has a separable semisimple subalgebra formula_112 such that formula_113.
Usually, the term "separable" in this theorem refers to the general concept of a separable algebra, and the theorem might then be established as a corollary of a more general high-powered result. However, if it is instead interpreted in the more basic sense that every element has a separable minimal polynomial, then this statement is essentially equivalent to the Jordan–Chevalley decomposition as described above. This gives a different way to view the decomposition, and some treatments take this route for establishing it.
Over perfect fields, this result simplifies. Indeed, formula_111 is then always separable in the sense of minimal polynomials: If formula_115, then the minimal polynomial formula_25 is a product of separable polynomials, so there is a separable polynomial formula_48 such that formula_49 and formula_50 for some formula_1. Thus formula_116. So in formula_111, the minimal polynomial of formula_118 divides formula_48 and is hence separable. The crucial point in the theorem is then not that formula_111 is separable (because that condition is vacuous), but that it is semisimple, meaning its radical is trivial.
The same statement is true for Lie algebras, but only in characteristic zero. This is the content of Levi’s theorem. (Note that the notions of semisimple in both results do indeed correspond, because in both cases this is equivalent to being the sum of simple subalgebras or having trivial radical, at least in the finite-dimensional case.)
Preservation under representations.
The crucial point in the proof for the Wedderburn principal theorem above is that an element formula_117 corresponds to a linear operator formula_119 with the same properties. In the theory of Lie algebras, this corresponds to the adjoint representation of a Lie algebra formula_120. The adjoint operator has a Jordan–Chevalley decomposition formula_121. Just as in the associative case, this corresponds to a decomposition of formula_0, but polynomials are not available as a tool. One context in which this does make sense is the restricted case where formula_120 is contained in the Lie algebra formula_122 of the endomorphisms of a finite-dimensional vector space formula_4 over the perfect field formula_3. Indeed, any semisimple Lie algebra can be realised in this way.
If formula_18 is the Jordan decomposition, then formula_123 is the Jordan decomposition of the adjoint endomorphism formula_124 on the vector space formula_125. Indeed, first, formula_126 and formula_127 commute since formula_128. Second, in general, for each endomorphism formula_129, we have:
1. If formula_130, then formula_131, since formula_132 is the difference of the left and right multiplications by formula_133, which are commuting nilpotent operators.
2. If formula_133 is diagonalisable with respect to a basis formula_134, with eigenvalues λ1, ..., λ"n", then formula_135 is diagonalisable as well: the operators formula_136 defined by formula_137 and formula_138 on the remaining basis vectors form a basis of eigenvectors of formula_135, with eigenvalues λ"j" − λ"i".
Hence, by uniqueness, formula_140 and formula_141.
The adjoint representation is a very natural and general representation of any Lie algebra. The argument above illustrates (and indeed proves) a general principle which generalises this: If formula_142 is "any" finite-dimensional representation of a semisimple finite-dimensional Lie algebra over a perfect field, then formula_143 preserves the Jordan decomposition in the following sense: if formula_18, then formula_144 and formula_145.
Nilpotency criterion.
The Jordan decomposition can be used to characterize nilpotency of an endomorphism. Let "k" be an algebraically closed field of characteristic zero, formula_146 the endomorphism ring of "k" over rational numbers and "V" a finite-dimensional vector space over "k". Given an endomorphism formula_114, let formula_147 be the Jordan decomposition. Then formula_45 is diagonalizable; i.e., formula_148 where each formula_90 is the eigenspace for eigenvalue formula_89 with multiplicity formula_149. Then for any formula_150 let formula_151 be the endomorphism such that formula_152 is the multiplication by formula_153. Chevalley calls formula_154 the replica of formula_45 given by formula_155. (For example, if formula_156, then the complex conjugate of an endomorphism is an example of a replica.) Now, the nilpotency criterion states that "x" is nilpotent (i.e., formula_45 = 0) if and only if the trace tr("x"φ("s")) vanishes for every formula_150.
"Proof:" First, since formula_157 is nilpotent,
formula_158.
If formula_155 is the complex conjugation, this implies formula_159 for every "i". Otherwise, take formula_155 to be a formula_160-linear functional formula_161 followed by formula_162. Applying that to the above equation, one gets:
formula_163
and, since formula_153 are all real numbers, formula_164 for every "i". Varying the linear functionals then implies formula_159 for every "i". formula_165
A typical application of the above criterion is the proof of Cartan's criterion for solvability of a Lie algebra. It says: if formula_166 is a Lie subalgebra over a field "k" of characteristic zero such that formula_167 for each formula_168, then formula_125 is solvable.
"Proof:" Without loss of generality, assume "k" is algebraically closed. By Lie's theorem and Engel's theorem, it suffices to show for each formula_169, formula_23 is a nilpotent endomorphism of "V". Write formula_170. Then we need to show:
formula_171
is zero. Let formula_172. Note we have: formula_173 and, since formula_174 is the semisimple part of the Jordan decomposition of formula_175, it follows that formula_174 is a polynomial without constant term in formula_175; hence, formula_176 and the same is true with formula_154 in place of formula_45. That is, formula_177, which implies the claim given the assumption. formula_165
Real semisimple Lie algebras.
In the formulation of Chevalley and Mostow, the additive decomposition states that an element "X" in a real semisimple Lie algebra g with Iwasawa decomposition g = k ⊕ a ⊕ n can be written as the sum of three commuting elements of the Lie algebra "X" = "S" + "D" + "N", with "S", "D" and "N" conjugate to elements in k, a and n respectively. In general the terms in the Iwasawa decomposition do not commute.
Multiplicative decomposition.
If formula_0 is an invertible linear operator, it may be more convenient to use a multiplicative Jordan–Chevalley decomposition. This expresses formula_0 as a product
formula_178,
where formula_15 is potentially diagonalisable, and formula_179 is nilpotent (one also says that formula_180 is unipotent).
The multiplicative version of the decomposition follows from the additive one since, as formula_91 is invertible (because the sum of an invertible operator and a commuting nilpotent operator is invertible),
formula_181
and formula_182 is unipotent. (Conversely, by the same type of argument, one can deduce the additive version from the multiplicative one.)
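Continuing the SymPy sketches above (again with an arbitrary small example, not a general routine), the unipotent factor can be read off directly from the additive parts:

```python
import sympy as sp

x = sp.Matrix([[2, 1], [0, 2]])
x_s = sp.Matrix([[2, 0], [0, 2]])      # potentially diagonalisable part
x_n = x - x_s                          # nilpotent part

x_u = sp.eye(2) + x_s.inv() * x_n      # unipotent factor 1 + x_s**(-1) * x_n
assert x_s * x_u == x and x_u * x_s == x
assert (x_u - sp.eye(2)) ** 2 == sp.zeros(2, 2)   # x_u - 1 is nilpotent
```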
The multiplicative version is closely related to decompositions encountered in a linear algebraic group. For this it is again useful to assume that the underlying field formula_3 is perfect because then the Jordan–Chevalley decomposition exists for all matrices.
Linear algebraic groups.
Let formula_183 be a linear algebraic group over a perfect field. Then, essentially by definition, there is a closed embedding formula_184. Now, to each element formula_185, the multiplicative Jordan decomposition associates a pair consisting of a semisimple element formula_186 and a unipotent element formula_187, "a priori" in formula_188, such that formula_189. But, as it turns out, the elements formula_190 can be shown to lie in formula_183 (i.e., they satisfy the defining equations of "G") and to be independent of the embedding into formula_188; i.e., the decomposition is intrinsic.
When "G" is abelian, formula_183 is then the direct product of the closed subgroup of the semisimple elements in "G" and that of unipotent elements.
Real semisimple Lie groups.
The multiplicative decomposition states that if "g" is an element of the corresponding connected semisimple Lie group "G" with corresponding Iwasawa decomposition "G" = "KAN", then "g" can be written as the product of three commuting elements "g" = "sdu" with "s", "d" and "u" conjugate to elements of "K", "A" and "N" respectively. In general the terms in the Iwasawa decomposition "g" = "kan" do not commute.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " x "
},
{
"math_id": 1,
"text": " m \\geq 1 "
},
{
"math_id": 2,
"text": " x^m = 0 "
},
{
"math_id": 3,
"text": " K "
},
{
"math_id": 4,
"text": " V "
},
{
"math_id": 5,
"text": " T "
},
{
"math_id": 6,
"text": " T = SJS^{-1} "
},
{
"math_id": 7,
"text": " D "
},
{
"math_id": 8,
"text": " J "
},
{
"math_id": 9,
"text": " R = J - D "
},
{
"math_id": 10,
"text": " T = SDS^{-1} + SRS^{-1} "
},
{
"math_id": 11,
"text": " SDS^{-1} "
},
{
"math_id": 12,
"text": " SRS^{-1} "
},
{
"math_id": 13,
"text": " x: V \\to V "
},
{
"math_id": 14,
"text": " x = x_s + x_n "
},
{
"math_id": 15,
"text": " x_s "
},
{
"math_id": 16,
"text": " x_n "
},
{
"math_id": 17,
"text": " x_s x_n = x_n x_s "
},
{
"math_id": 18,
"text": "x = x_s + x_n"
},
{
"math_id": 19,
"text": "x = x_s' + x_n'"
},
{
"math_id": 20,
"text": "x_s - x_s' = x_n' - x_n"
},
{
"math_id": 21,
"text": "x_s', x_n'"
},
{
"math_id": 22,
"text": "x_s, x_n"
},
{
"math_id": 23,
"text": "x"
},
{
"math_id": 24,
"text": "x_s - x_s' = 0 = x_n - x_n'"
},
{
"math_id": 25,
"text": " p "
},
{
"math_id": 26,
"text": " p(x_s + x_n) "
},
{
"math_id": 27,
"text": " x_n y "
},
{
"math_id": 28,
"text": " y "
},
{
"math_id": 29,
"text": " x_s, x_n "
},
{
"math_id": 30,
"text": " \\ell \\geq 1 "
},
{
"math_id": 31,
"text": " x_n^\\ell = 0 "
},
{
"math_id": 32,
"text": " p(x)^\\ell = x_n^\\ell y^\\ell = 0 "
},
{
"math_id": 33,
"text": " p^\\ell "
},
{
"math_id": 34,
"text": "k"
},
{
"math_id": 35,
"text": "p,"
},
{
"math_id": 36,
"text": " k = \\mathbb{F}_p(t) "
},
{
"math_id": 37,
"text": "a \\in k"
},
{
"math_id": 38,
"text": "p"
},
{
"math_id": 39,
"text": "V = k[X]/\\left(X^p - a\\right)^2,"
},
{
"math_id": 40,
"text": "x = \\overline X"
},
{
"math_id": 41,
"text": "T"
},
{
"math_id": 42,
"text": "V"
},
{
"math_id": 43,
"text": " \\left(X^p - a\\right)^2 "
},
{
"math_id": 44,
"text": "n"
},
{
"math_id": 45,
"text": "s"
},
{
"math_id": 46,
"text": " {X^p} - a "
},
{
"math_id": 47,
"text": " T = T + 0 "
},
{
"math_id": 48,
"text": " q "
},
{
"math_id": 49,
"text": " q \\mid p "
},
{
"math_id": 50,
"text": " p \\mid q^m "
},
{
"math_id": 51,
"text": "u"
},
{
"math_id": 52,
"text": "v"
},
{
"math_id": 53,
"text": "{uq+{vq'}}=1"
},
{
"math_id": 54,
"text": "x_{n+1} = x_n - v(x_n)q(x_n)"
},
{
"math_id": 55,
"text": "x_0 = x"
},
{
"math_id": 56,
"text": "\\mathfrak{X}"
},
{
"math_id": 57,
"text": "x_n \\in \\mathfrak{X}"
},
{
"math_id": 58,
"text": "x_n - x \\in q(x) \\cdot \\mathfrak{X}"
},
{
"math_id": 59,
"text": "x_{n+1} - x = (x_{n+1} - x_n) + (x_n - x)"
},
{
"math_id": 60,
"text": "q(x) \\cdot \\mathfrak{X}"
},
{
"math_id": 61,
"text": "q(x_n) \\in q(x)^{2^n} \\cdot \\mathfrak{X}"
},
{
"math_id": 62,
"text": "q(x_{n+1}) = q(x_n) + q'(x_n) (x_{n+1} - x_n) + (x_{n+1} - x_n)^2 h"
},
{
"math_id": 63,
"text": "h \\in \\mathfrak{X}"
},
{
"math_id": 64,
"text": "x_{n+1}"
},
{
"math_id": 65,
"text": "q(x_{n+1}) = q(x_n)^2 (u(x_n) + v(x_n)^2 h) "
},
{
"math_id": 66,
"text": "q(x)^{2^{n+1}} \\cdot \\mathfrak{X} "
},
{
"math_id": 67,
"text": "2^n \\geq m "
},
{
"math_id": 68,
"text": "q(x_n) = 0 "
},
{
"math_id": 69,
"text": "p \\mid q^m"
},
{
"math_id": 70,
"text": "p(x) = 0"
},
{
"math_id": 71,
"text": "x_n "
},
{
"math_id": 72,
"text": "q "
},
{
"math_id": 73,
"text": "x "
},
{
"math_id": 74,
"text": "x_n - x "
},
{
"math_id": 75,
"text": "(x_n - x)^m=0 "
},
{
"math_id": 76,
"text": "x = x_n + (x - x_n) "
},
{
"math_id": 77,
"text": "p "
},
{
"math_id": 78,
"text": "v "
},
{
"math_id": 79,
"text": "vq "
},
{
"math_id": 80,
"text": "v(x_n) q(x_n) = 0 "
},
{
"math_id": 81,
"text": "2^n "
},
{
"math_id": 82,
"text": "n "
},
{
"math_id": 83,
"text": "L "
},
{
"math_id": 84,
"text": "G "
},
{
"math_id": 85,
"text": "K "
},
{
"math_id": 86,
"text": "F "
},
{
"math_id": 87,
"text": "K \\subseteq F \\subseteq L "
},
{
"math_id": 88,
"text": " J = D + R "
},
{
"math_id": 89,
"text": "\\lambda_i"
},
{
"math_id": 90,
"text": "V_i"
},
{
"math_id": 91,
"text": "x_s"
},
{
"math_id": 92,
"text": " L "
},
{
"math_id": 93,
"text": " x = {c(x)} + {(x - {c(x)})} "
},
{
"math_id": 94,
"text": " c "
},
{
"math_id": 95,
"text": "c(x) "
},
{
"math_id": 96,
"text": " x - c(x) "
},
{
"math_id": 97,
"text": " \\sigma "
},
{
"math_id": 98,
"text": "c(x) + (x-{c(x)}) = x = {\\sigma(x)} = {\\sigma({c(x)})} + {\\sigma(x- {c(x)} )}"
},
{
"math_id": 99,
"text": "\\sigma(c(x)) = \\sigma(c)(x)"
},
{
"math_id": 100,
"text": "\\sigma(c(x))"
},
{
"math_id": 101,
"text": "\\sigma(x - c(x))"
},
{
"math_id": 102,
"text": " \\sigma (c(x)) "
},
{
"math_id": 103,
"text": " \\sigma({x - c(x)}) "
},
{
"math_id": 104,
"text": "L"
},
{
"math_id": 105,
"text": "\\sigma(c(x)) = c(x)"
},
{
"math_id": 106,
"text": " F "
},
{
"math_id": 107,
"text": "\\left\\{1, x, x^2, \\dots\\right\\}"
},
{
"math_id": 108,
"text": " L/K "
},
{
"math_id": 109,
"text": " F = K "
},
{
"math_id": 110,
"text": " A "
},
{
"math_id": 111,
"text": " A/J "
},
{
"math_id": 112,
"text": " B "
},
{
"math_id": 113,
"text": " A = B \\oplus J "
},
{
"math_id": 114,
"text": "x : V \\to V"
},
{
"math_id": 115,
"text": " a \\in A "
},
{
"math_id": 116,
"text": " q(a) \\in J "
},
{
"math_id": 117,
"text": " x \\in A "
},
{
"math_id": 118,
"text": " a + J "
},
{
"math_id": 119,
"text": " T_x: A \\to A "
},
{
"math_id": 120,
"text": " \\mathfrak{g} "
},
{
"math_id": 121,
"text": "\\operatorname{ad}(x) = \\operatorname{ad}(x)_s + \\operatorname{ad}(x)_n"
},
{
"math_id": 122,
"text": " \\mathfrak{gl}(V) "
},
{
"math_id": 123,
"text": "\\operatorname{ad}(x) = \\operatorname{ad}(x_s) + \\operatorname{ad}(x_n)"
},
{
"math_id": 124,
"text": "\\operatorname{ad}(x)"
},
{
"math_id": 125,
"text": "\\mathfrak{g}"
},
{
"math_id": 126,
"text": "\\operatorname{ad}(x_s)"
},
{
"math_id": 127,
"text": "\\operatorname{ad}(x_n)"
},
{
"math_id": 128,
"text": "[\\operatorname{ad}(x_s), \\operatorname{ad}(x_n)] = \\operatorname{ad}([x_s, x_n]) = 0"
},
{
"math_id": 129,
"text": "y \\in \\mathfrak{g}"
},
{
"math_id": 130,
"text": "y^m = 0"
},
{
"math_id": 131,
"text": "\\operatorname{ad}(y)^{2m-1} = 0"
},
{
"math_id": 132,
"text": "\\operatorname{ad}(y)"
},
{
"math_id": 133,
"text": "y"
},
{
"math_id": 134,
"text": " \\{b_1, \\dots, b_n \\} "
},
{
"math_id": 135,
"text": " \\operatorname{ad}(y) "
},
{
"math_id": 136,
"text": " M_{ij} "
},
{
"math_id": 137,
"text": " b_i \\mapsto b_j "
},
{
"math_id": 138,
"text": " b_k \\mapsto 0 "
},
{
"math_id": 139,
"text": " k \\neq 0 "
},
{
"math_id": 140,
"text": "\\operatorname{ad}(x)_s = \\operatorname{ad}(x_s)"
},
{
"math_id": 141,
"text": "\\operatorname{ad}(x)_n = \\operatorname{ad}(x_n)"
},
{
"math_id": 142,
"text": "\\pi: \\mathfrak{g} \\to \\mathfrak{gl}(V)"
},
{
"math_id": 143,
"text": "\\pi"
},
{
"math_id": 144,
"text": "\\pi(x_s) = \\pi(x)_s"
},
{
"math_id": 145,
"text": "\\pi(x_n) = \\pi(x)_n"
},
{
"math_id": 146,
"text": "E = \\operatorname{End}_\\mathbb{Q}(k)"
},
{
"math_id": 147,
"text": "x = s + n"
},
{
"math_id": 148,
"text": "V = \\bigoplus V_i"
},
{
"math_id": 149,
"text": "m_i"
},
{
"math_id": 150,
"text": "\\varphi\\in E"
},
{
"math_id": 151,
"text": "\\varphi(s) : V \\to V"
},
{
"math_id": 152,
"text": "\\varphi(s) : V_i \\to V_i"
},
{
"math_id": 153,
"text": "\\varphi(\\lambda_i)"
},
{
"math_id": 154,
"text": "\\varphi(s)"
},
{
"math_id": 155,
"text": "\\varphi"
},
{
"math_id": 156,
"text": "k = \\mathbb{C}"
},
{
"math_id": 157,
"text": "n \\varphi(s)"
},
{
"math_id": 158,
"text": "0 = \\operatorname{tr}(x\\varphi(s)) = \\sum_i \\operatorname{tr}\\left(s\\varphi(s) | V_i\\right) = \\sum_i m_i \\lambda_i\\varphi(\\lambda_i)"
},
{
"math_id": 159,
"text": "\\lambda_i = 0"
},
{
"math_id": 160,
"text": "\\mathbb{Q}"
},
{
"math_id": 161,
"text": "\\varphi : k \\to \\mathbb{Q}"
},
{
"math_id": 162,
"text": "\\mathbb{Q} \\hookrightarrow k"
},
{
"math_id": 163,
"text": "\\sum_i m_i \\varphi(\\lambda_i)^2 = 0"
},
{
"math_id": 164,
"text": "\\varphi(\\lambda_i) = 0"
},
{
"math_id": 165,
"text": "\\square"
},
{
"math_id": 166,
"text": "\\mathfrak{g} \\subset \\mathfrak{gl}(V)"
},
{
"math_id": 167,
"text": "\\operatorname{tr}(xy) = 0"
},
{
"math_id": 168,
"text": "x \\in \\mathfrak{g}, y \\in D \\mathfrak{g} = [\\mathfrak{g}, \\mathfrak{g}]"
},
{
"math_id": 169,
"text": "x \\in D \\mathfrak g"
},
{
"math_id": 170,
"text": "x = \\sum_i [x_i, y_i]"
},
{
"math_id": 171,
"text": "\\operatorname{tr}(x \\varphi(s)) = \\sum_i \\operatorname{tr}([x_i, y_i] \\varphi(s)) = \\sum_i \\operatorname{tr}(x_i [y_i, \\varphi(s)])"
},
{
"math_id": 172,
"text": "\\mathfrak{g}' = \\mathfrak{gl}(V)"
},
{
"math_id": 173,
"text": "\\operatorname{ad}_{\\mathfrak{g}'}(x) : \\mathfrak{g} \\to D \\mathfrak{g}"
},
{
"math_id": 174,
"text": "\\operatorname{ad}_{\\mathfrak{g}'}(s)"
},
{
"math_id": 175,
"text": "\\operatorname{ad}_{\\mathfrak{g}'}(x)"
},
{
"math_id": 176,
"text": "\\operatorname{ad}_{\\mathfrak{g}'}(s) : \\mathfrak{g} \\to D \\mathfrak{g}"
},
{
"math_id": 177,
"text": "[\\varphi(s), \\mathfrak{g}] \\subset D \\mathfrak{g}"
},
{
"math_id": 178,
"text": " x = x_s \\cdot x_u "
},
{
"math_id": 179,
"text": " x_u - 1 "
},
{
"math_id": 180,
"text": " x_u "
},
{
"math_id": 181,
"text": "x = x_s + x_n = x_s\\left(1 + x_s^{-1}x_n\\right)"
},
{
"math_id": 182,
"text": "1 + x_s^{-1}x_n"
},
{
"math_id": 183,
"text": "G"
},
{
"math_id": 184,
"text": "G \\hookrightarrow \\mathbf{GL}_n"
},
{
"math_id": 185,
"text": "g \\in G"
},
{
"math_id": 186,
"text": "g_s"
},
{
"math_id": 187,
"text": "g_u"
},
{
"math_id": 188,
"text": "\\mathbf{GL}_n"
},
{
"math_id": 189,
"text": "g = g_s g_u = g_u g_s"
},
{
"math_id": 190,
"text": "g_s, g_u"
}
] | https://en.wikipedia.org/wiki?curid=11578785 |
1157887 | Supersymmetric quantum mechanics | Quantum mechanics with supersymmetry
In theoretical physics, supersymmetric quantum mechanics is an area of research where supersymmetry is applied to the simpler setting of plain quantum mechanics, rather than quantum field theory. Supersymmetric quantum mechanics has found applications outside of high-energy physics, such as new methods to solve quantum mechanical problems, useful extensions to the WKB approximation, and applications in statistical mechanics.
Introduction.
Understanding the consequences of supersymmetry (SUSY) has proven mathematically daunting, and it has likewise been difficult to develop theories that could account for symmetry breaking, "i.e.", the lack of observed partner particles of equal mass. To make progress on these problems, physicists developed "supersymmetric quantum mechanics", an application of the supersymmetry superalgebra to quantum mechanics as opposed to quantum field theory. It was hoped that studying SUSY's consequences in this simpler setting would lead to new understanding; remarkably, the effort created new areas of research in quantum mechanics itself.
For example, students are typically taught to "solve" the hydrogen atom by a laborious process which begins by inserting the Coulomb potential into the Schrödinger equation. After a considerable amount of work using many differential equations, the analysis produces a recursion relation for the Laguerre polynomials. The outcome is the spectrum of hydrogen-atom energy states (labeled by quantum numbers "n" and "l"). Using ideas drawn from SUSY, the final result can be derived with significantly greater ease, in much the same way that operator methods are used to solve the harmonic oscillator. A similar supersymmetric approach can also be used to more accurately find the hydrogen spectrum using the Dirac equation. Oddly enough, this approach is analogous to the way Erwin Schrödinger first solved the hydrogen atom. Of course, he did not "call" his solution supersymmetric, as SUSY was thirty years in the future.
The SUSY solution of the hydrogen atom is only one example of the very general class of solutions which SUSY provides to "shape-invariant potentials", a category which includes most potentials taught in introductory quantum mechanics courses.
SUSY quantum mechanics involves pairs of Hamiltonians which share a particular mathematical relationship, which are called "partner Hamiltonians". (The potential energy terms which occur in the Hamiltonians are then called "partner potentials".) An introductory theorem shows that for every eigenstate of one Hamiltonian, its partner Hamiltonian has a corresponding eigenstate with the same energy (except possibly for zero energy eigenstates). This fact can be exploited to deduce many properties of the eigenstate spectrum. It is analogous to the original description of SUSY, which referred to bosons and fermions. We can imagine a "bosonic Hamiltonian", whose eigenstates are the various bosons of our theory. The SUSY partner of this Hamiltonian would be "fermionic", and its eigenstates would be the theory's fermions. Each boson would have a fermionic partner of equal energy—but, in the relativistic world, energy and mass are interchangeable, so we can just as easily say that the partner particles have equal mass.
SUSY concepts have provided useful extensions to the WKB approximation in the form of a modified version of the Bohr-Sommerfeld quantization condition. In addition, SUSY has been applied to non-quantum statistical mechanics through the Fokker–Planck equation, showing that even if the original inspiration in high-energy particle physics turns out to be a blind alley, its investigation has brought about many useful benefits.
Example: the harmonic oscillator.
The Schrödinger equation for the harmonic oscillator takes the form
formula_0
where formula_1 is the formula_2th energy eigenstate of formula_3 with energy formula_4. We want to find an expression for formula_5 in terms of formula_2. We define the operators
formula_6
and
formula_7
where formula_8, which we need to choose, is called the superpotential of formula_9. We also define the aforementioned partner Hamiltonians formula_10 and formula_11 as
formula_12
formula_13
A zero energy ground state formula_14 of formula_10 would satisfy the equation
formula_15
Assuming that we know the ground state of the harmonic oscillator formula_16, we can solve for formula_8 as
formula_17
We then find that
formula_18
formula_19
We can now see that
formula_20
This is a special case of shape invariance, discussed below. Taking without proof the introductory theorem mentioned above, it is apparent that the spectrum of formula_10 will start with formula_21 and continue upwards in steps of formula_22 The spectra of formula_11 and formula_9 will have the same even spacing, but will be shifted up by amounts formula_23 and formula_24, respectively. The spectrum of formula_9 is therefore the familiar formula_25.
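The following short sympy sketch (an illustration added here, not part of the original derivation) checks that the superpotential formula_17 reproduces the partner potentials appearing in formula_18 and formula_19, which differ only by the constant shift formula_23:

```python
# Illustrative check: with W(x) = x*sqrt(m*omega**2/2), the partner potentials
# V1 = W^2 - (hbar/sqrt(2m)) W' and V2 = W^2 + (hbar/sqrt(2m)) W' are harmonic
# oscillator potentials shifted down and up by hbar*omega/2.
import sympy as sp

x, m, omega, hbar = sp.symbols('x m omega hbar', positive=True)

W = x * sp.sqrt(m * omega**2 / 2)                      # superpotential from the ground state
V1 = W**2 - hbar / sp.sqrt(2 * m) * sp.diff(W, x)      # potential term of H^(1) = A†A
V2 = W**2 + hbar / sp.sqrt(2 * m) * sp.diff(W, x)      # potential term of H^(2) = A A†

print(sp.simplify(V1 - (m * omega**2 / 2 * x**2 - hbar * omega / 2)))  # 0
print(sp.simplify(V2 - (m * omega**2 / 2 * x**2 + hbar * omega / 2)))  # 0
print(sp.simplify(V2 - V1))                                            # hbar*omega
```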
SUSY QM superalgebra.
In fundamental quantum mechanics, we learn that an algebra of operators is defined by commutation relations among those operators. For example, the canonical operators of position and momentum have the commutator formula_26. (Here, we use "natural units" where the reduced Planck constant is set equal to 1.) A more intricate case is the algebra of angular momentum operators; these quantities are closely connected to the rotational symmetries of three-dimensional space. To generalize this concept, we define an "anticommutator", which combines two operators in the same way as an ordinary commutator, but with a plus sign instead of a minus sign:
formula_27
If operators are related by anticommutators as well as commutators, we say they are part of a "Lie superalgebra". Let's say we have a quantum system described by a Hamiltonian formula_28 and a set of formula_29 operators formula_30. We shall call this system "supersymmetric" if the following anticommutation relation is valid for all formula_31:
formula_32
If this is the case, then we call "formula_30" the system's "supercharges".
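A finite-dimensional toy model (an illustration added here, not a construction from the cited literature) makes this algebra easy to check numerically: a single supercharge built from an arbitrary matrix satisfies formula_32 with formula_29 equal to 1, and the two diagonal blocks of the resulting Hamiltonian are partner Hamiltonians with matching spectra, as claimed by the introductory theorem above.

```python
# Toy numerical check of {Q, Q†} = H for a single supercharge built from an
# arbitrary matrix A (illustration only, not the article's construction).
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))   # arbitrary operator "A"

zero = np.zeros((n, n), dtype=complex)
Q = np.block([[zero, zero], [A, zero]])        # supercharge: maps "bosons" to "fermions"
Qd = Q.conj().T                                # its adjoint

H = Q @ Qd + Qd @ Q                            # the anticommutator {Q, Q†}
print(np.allclose(Q @ Q, 0))                   # True: the supercharge is nilpotent
print(np.allclose(H[:n, :n], A.conj().T @ A))  # True: one block is the partner A†A
print(np.allclose(H[n:, n:], A @ A.conj().T))  # True: the other block is A A†

# The partner blocks share their spectrum (A is generically invertible here,
# so the zero-energy caveat does not arise):
e1 = np.sort(np.linalg.eigvalsh(A.conj().T @ A))
e2 = np.sort(np.linalg.eigvalsh(A @ A.conj().T))
print(np.allclose(e1, e2))                     # True
```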
Example.
Let's look at the example of a one-dimensional nonrelativistic particle with a 2D ("i.e.," two states) internal degree of freedom called "spin" (it's not really spin because "real" spin is a property of 3D particles). Let formula_33 be an operator which transforms a "spin up" particle into a "spin down" particle. Its adjoint formula_34 then transforms a spin down particle into a spin up particle; the operators are normalized such that the anticommutator formula_35. And of course, formula_36. Let formula_37 be the momentum of the particle and formula_38 be its position with formula_26. Let formula_39 (the "superpotential") be an arbitrary complex analytic function of formula_38 and define the supersymmetric operators
formula_40
formula_41
Note that formula_42 and formula_43 are self-adjoint. Let the Hamiltonian be
formula_44
where "W"′ is the derivative of "W". Also note that {"Q"1, "Q"2} = 0. This is nothing other than "N" = 2 supersymmetry. Note that formula_45 acts like an electromagnetic vector potential.
Let's also call the spin down state "bosonic" and the spin up state "fermionic". This is only in analogy to quantum field theory and should not be taken literally. Then, "Q1" and "Q2" map "bosonic" states into "fermionic" states and vice versa.
Reformulating this a bit:
Define
formula_46
and of course,
formula_47
formula_48
and
formula_49
An operator is "bosonic" if it maps "bosonic" states to "bosonic" states and "fermionic" states to "fermionic" states. An operator is "fermionic" if it maps "bosonic" states to "fermionic" states and vice versa. Any operator can be expressed uniquely as the sum of a bosonic operator and a fermionic operator. Define the supercommutator [,} as follows: Between two bosonic operators or a bosonic and a fermionic operator, it is none other than the commutator but between two fermionic operators, it is an anticommutator.
Then, "x" and "p" are bosonic operators and "b", formula_34, "Q" and formula_50 are fermionic operators.
Let's work in the Heisenberg picture where "x", "b" and formula_34 are functions of time.
Then,
formula_51
formula_52
formula_53
formula_54
formula_55
formula_56
This is nonlinear in general: "i.e.", x(t), b(t) and formula_57 do not form a linear SUSY representation because formula_58 isn't necessarily linear in "x". To avoid this problem, define the self-adjoint operator formula_59. Then,
formula_51
formula_52
formula_60
formula_61
formula_54
formula_62
formula_56
formula_63
and we see that we have a linear SUSY representation.
Now let's introduce two "formal" quantities, formula_64 and formula_65, with the latter being the adjoint of the former, such that
formula_66
and both of them commute with bosonic operators but anticommute with fermionic ones.
Next, we define a construct called a superfield:
formula_67
"f" is self-adjoint, of course. Then,
formula_68
formula_69
Incidentally, there's also a U(1)R symmetry, with "p", "x", and "W" having zero R-charge, formula_34 having an R-charge of 1, and "b" having an R-charge of −1.
Shape invariance.
Suppose formula_39 is real for all real formula_38. Then we can simplify the expression for the Hamiltonian to
formula_70
There are certain classes of superpotentials such that both the bosonic and fermionic Hamiltonians have similar forms. Specifically
formula_71
where the formula_72's are parameters. For example, the hydrogen atom potential with angular momentum formula_73 can be written this way.
formula_74
This corresponds to formula_75 for the superpotential
formula_76
formula_77
This is the potential for angular momentum formula_78, shifted by a constant. After solving for the formula_79 ground state, the supersymmetric operators can be used to construct the rest of the bound state spectrum.
In general, since formula_80 and formula_81 are partner potentials, they share the same energy spectrum except for the one extra ground state energy of formula_80. We can continue this process of finding partner potentials with the shape invariance condition, giving the following formula for the energy levels in terms of the parameters of the potential
formula_82
where formula_83 are the parameters for the multiple partnered potentials.
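As a numerical illustration (added here, not a calculation from the original text), the remainders in the shape-invariance condition for the hydrogen superpotential above can be read off from the constant terms of the partner potentials; the telescoping sum then reproduces the familiar Rydberg series once the subtracted ground-state offset is restored:

```python
# Illustrative sketch of the shape-invariance recursion for hydrogen.
import numpy as np
from scipy.constants import m_e, e, epsilon_0, hbar, electron_volt

C = m_e * e**4 / (32 * np.pi**2 * epsilon_0**2 * hbar**2)   # constant offset term, ~13.6 eV

def R(l):
    # remainder in V_+(r, l) = V_-(r, l + 1) + R(l), read off from the constant terms
    return C * (1.0 / (l + 1)**2 - 1.0 / (l + 2)**2)

l = 0
for n in range(5):
    shifted = sum(R(l + i) for i in range(n))    # E_n of the zero-ground-state partner chain
    physical = shifted - C / (l + 1)**2          # restore the constant subtracted from V_-
    print(n, physical / electron_volt)           # -13.6, -3.40, -1.51, -0.85, -0.54 eV
```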
Applications.
In 2021, supersymmetric quantum mechanics was applied to option pricing and the analysis of markets in quantum finance, and to financial networks.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "H^{\\rm HO} \\psi_{n}(x) = \\bigg(\\frac{-\\hbar^{2}}{2m}\\frac{d^{2}}{dx^{2}}+\\frac{m \\omega^{2}}{2}x^{2}\\bigg) \\psi_{n}(x) = E_{n}^{\\rm HO} \\psi_{n}(x),"
},
{
"math_id": 1,
"text": "\\psi_{n}(x)"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "H^\\text{HO}"
},
{
"math_id": 4,
"text": "E_{n}^\\text{HO}"
},
{
"math_id": 5,
"text": "E_{n}^{\\rm HO}"
},
{
"math_id": 6,
"text": "A = \\frac{\\hbar}{\\sqrt{2m}}\\frac{d}{dx}+W(x)"
},
{
"math_id": 7,
"text": "A^{\\dagger} = -\\frac{\\hbar}{\\sqrt{2m}}\\frac{d}{dx}+W(x),"
},
{
"math_id": 8,
"text": "W(x)"
},
{
"math_id": 9,
"text": "H^{\\rm HO}"
},
{
"math_id": 10,
"text": "H^{(1)}"
},
{
"math_id": 11,
"text": "H^{(2)}"
},
{
"math_id": 12,
"text": "H^{(1)} = A^{\\dagger} A = \\frac{-\\hbar^{2}}{2m}\\frac{d^{2}}{dx^{2}} - \\frac{\\hbar}{\\sqrt{2m}} W^{\\prime}(x) + W^{2}(x)"
},
{
"math_id": 13,
"text": "H^{(2)} = A A^{\\dagger} = \\frac{-\\hbar^{2}}{2m}\\frac{d^{2}}{dx^{2}} + \\frac{\\hbar}{\\sqrt{2m}} W^{\\prime}(x) + W^{2}(x)."
},
{
"math_id": 14,
"text": "\\psi_{0}^{(1)}(x)"
},
{
"math_id": 15,
"text": "H^{(1)} \\psi_{0}^{(1)}(x) = A^{\\dagger} A \\psi_{0}^{(1)}(x)\n = A^{\\dagger} \\bigg(\\frac{\\hbar}{\\sqrt{2m}}\\frac{d}{dx}+ W(x)\\bigg) \\psi_{0}^{(1)}(x) = 0."
},
{
"math_id": 16,
"text": "\\psi_{0}(x)"
},
{
"math_id": 17,
"text": "W(x) = \\frac{-\\hbar}{\\sqrt{2m}} \\bigg(\\frac{\\psi_{0}^{\\prime}(x)}{\\psi_{0}(x)}\\bigg) = x \\sqrt{m \\omega^{2}/2} "
},
{
"math_id": 18,
"text": "H^{(1)} = \\frac{-\\hbar^{2}}{2m}\\frac{d^{2}}{dx^{2}} + \\frac{m \\omega^{2}}{2} x^{2} - \\frac{\\hbar \\omega}{2} "
},
{
"math_id": 19,
"text": "H^{(2)} = \\frac{-\\hbar^{2}}{2m}\\frac{d^{2}}{dx^{2}} + \\frac{m \\omega^{2}}{2} x^{2} + \\frac{\\hbar \\omega}{2}."
},
{
"math_id": 20,
"text": "H^{(1)} = H^{(2)} - \\hbar \\omega = H^{\\rm HO} - \\frac{\\hbar \\omega}{2}."
},
{
"math_id": 21,
"text": "E_{0} = 0"
},
{
"math_id": 22,
"text": "\\hbar \\omega."
},
{
"math_id": 23,
"text": "\\hbar \\omega"
},
{
"math_id": 24,
"text": "\\hbar \\omega / 2"
},
{
"math_id": 25,
"text": "E_{n}^{\\rm HO} = \\hbar \\omega (n + 1/2)"
},
{
"math_id": 26,
"text": "[x,p]=i"
},
{
"math_id": 27,
"text": "\\{A,B\\} = AB + BA."
},
{
"math_id": 28,
"text": "\\mathcal{H}"
},
{
"math_id": 29,
"text": "N"
},
{
"math_id": 30,
"text": "Q_i"
},
{
"math_id": 31,
"text": "i,j = 1,\\ldots,N"
},
{
"math_id": 32,
"text": "\\{Q_i,Q^\\dagger_j\\} = \\mathcal{H}\\delta_{ij}."
},
{
"math_id": 33,
"text": "b"
},
{
"math_id": 34,
"text": "b^\\dagger"
},
{
"math_id": 35,
"text": "\\{b,b^\\dagger\\}=1"
},
{
"math_id": 36,
"text": "b^2=0"
},
{
"math_id": 37,
"text": "p"
},
{
"math_id": 38,
"text": "x"
},
{
"math_id": 39,
"text": "W"
},
{
"math_id": 40,
"text": "Q_1=\\frac{1}{2}\\left[(p-iW)b+(p+iW^\\dagger)b^\\dagger\\right]"
},
{
"math_id": 41,
"text": "Q_2=\\frac{i}{2}\\left[(p-iW)b-(p+iW^\\dagger)b^\\dagger\\right]"
},
{
"math_id": 42,
"text": "Q_1"
},
{
"math_id": 43,
"text": "Q_2"
},
{
"math_id": 44,
"text": "H=\\{Q_1,Q_1\\}=\\{Q_2,Q_2\\}=\\frac{(p+\\Im\\{W\\})^2}{2}+\\frac{{\\Re\\{W\\}}^2}{2}+\\frac{\\Re\\{W\\}'}{2}(bb^\\dagger-b^\\dagger b)"
},
{
"math_id": 45,
"text": "\\Im\\{W\\}"
},
{
"math_id": 46,
"text": "Q=(p-iW)b"
},
{
"math_id": 47,
"text": "Q^\\dagger=(p+iW^\\dagger)b^\\dagger"
},
{
"math_id": 48,
"text": "\\{Q,Q\\}=\\{Q^\\dagger,Q^\\dagger\\}=0"
},
{
"math_id": 49,
"text": "\\{Q^\\dagger,Q\\}=2H"
},
{
"math_id": 50,
"text": "Q^\\dagger"
},
{
"math_id": 51,
"text": "[Q,x\\}=-ib"
},
{
"math_id": 52,
"text": "[Q,b\\}=0"
},
{
"math_id": 53,
"text": "[Q,b^\\dagger\\}=\\frac{dx}{dt}-i\\Re\\{W\\}"
},
{
"math_id": 54,
"text": "[Q^\\dagger,x\\}=ib^\\dagger"
},
{
"math_id": 55,
"text": "[Q^\\dagger,b\\}=\\frac{dx}{dt}+i\\Re\\{W\\}"
},
{
"math_id": 56,
"text": "[Q^\\dagger,b^\\dagger\\}=0"
},
{
"math_id": 57,
"text": "b^\\dagger(t)"
},
{
"math_id": 58,
"text": "\\Re\\{W\\}"
},
{
"math_id": 59,
"text": "F=\\Re\\{W\\}"
},
{
"math_id": 60,
"text": "[Q,b^\\dagger\\}=\\frac{dx}{dt}-iF"
},
{
"math_id": 61,
"text": "[Q,F\\}=-\\frac{db}{dt}"
},
{
"math_id": 62,
"text": "[Q^\\dagger,b\\}=\\frac{dx}{dt}+iF"
},
{
"math_id": 63,
"text": "[Q^\\dagger,F\\}=\\frac{db^\\dagger}{dt}"
},
{
"math_id": 64,
"text": "\\theta"
},
{
"math_id": 65,
"text": "\\bar{\\theta}"
},
{
"math_id": 66,
"text": "\\{\\theta,\\theta\\}=\\{\\bar{\\theta},\\bar{\\theta}\\}=\\{\\bar{\\theta},\\theta\\}=0"
},
{
"math_id": 67,
"text": "f(t,\\bar{\\theta},\\theta)=x(t)-i\\theta b(t)-i\\bar{\\theta}b^\\dagger(t)+\\bar{\\theta}\\theta F(t)"
},
{
"math_id": 68,
"text": "[Q,f\\}=\\frac{\\partial}{\\partial\\theta}f-i\\bar{\\theta}\\frac{\\partial}{\\partial t}f,"
},
{
"math_id": 69,
"text": "[Q^\\dagger,f\\}=\\frac{\\partial}{\\partial \\bar{\\theta}}f-i\\theta \\frac{\\partial}{\\partial t}f."
},
{
"math_id": 70,
"text": "H = \\frac{(p)^2}{2}+\\frac{{W}^2}{2}+\\frac{W'}{2}(bb^\\dagger-b^\\dagger b)"
},
{
"math_id": 71,
"text": " V_{+} (x, a_1 ) = V_{-} (x, a_2) + R(a_1)"
},
{
"math_id": 72,
"text": "a"
},
{
"math_id": 73,
"text": "l"
},
{
"math_id": 74,
"text": " \\frac{-e^2}{4\\pi \\epsilon_0} \\frac{1}{r} + \\frac{h^2 l (l+1)} {2m} \\frac{1}{r^2} - E_0"
},
{
"math_id": 75,
"text": "V_{-}"
},
{
"math_id": 76,
"text": "W = \\frac{\\sqrt{2m}}{h} \\frac{e^2}{2 4\\pi \\epsilon_0 (l+1)} - \\frac{h(l+1)}{r\\sqrt{2m}}"
},
{
"math_id": 77,
"text": "V_+ = \\frac{-e^2}{4\\pi \\epsilon_0} \\frac{1}{r} + \\frac{h^2 (l+1) (l+2)} {2m} \\frac{1}{r^2} + \\frac{e^4 m}{32 \\pi^2 h^2 \\epsilon_0^2 (l+1)^2}"
},
{
"math_id": 78,
"text": "l+1"
},
{
"math_id": 79,
"text": "l=0"
},
{
"math_id": 80,
"text": "V_-"
},
{
"math_id": 81,
"text": "V_+"
},
{
"math_id": 82,
"text": " E_n=\\sum\\limits_{i=1}^n R(a_i) "
},
{
"math_id": 83,
"text": "a_i"
}
] | https://en.wikipedia.org/wiki?curid=1157887 |
11579 | Fermi paradox | Discrepancy between lack of evidence of advanced alien life and apparently high likelihood it exists
The Fermi paradox is the discrepancy between the lack of conclusive evidence of advanced extraterrestrial life and the apparently high likelihood of its existence. As a 2015 article put it, "If life is so easy, someone from somewhere must have come calling by now."
Italian-American physicist Enrico Fermi's name is associated with the paradox because of a casual conversation in the summer of 1950 with fellow physicists Edward Teller, Herbert York, and Emil Konopinski. While walking to lunch, the men discussed recent UFO reports and the possibility of faster-than-light travel. The conversation moved on to other topics, until during lunch Fermi blurted out, "But where is everybody?" (although the exact quote is uncertain).
There have been many attempts to resolve the Fermi paradox, such as suggesting that intelligent extraterrestrial beings are extremely rare, that the lifetime of such civilizations is short, or that they exist but (for various reasons) humans see no evidence.
<templatestyles src="Template:TOC limit/styles.css" />
Chain of reasoning.
The following are some of the facts and hypotheses that together serve to highlight the apparent contradiction:
History.
Fermi was not the first to ask the question. An earlier implicit mention was by Konstantin Tsiolkovsky in an unpublished manuscript from 1933. He noted "people deny the presence of intelligent beings on the planets of the universe" because "(i) if such beings exist they would have visited Earth, and (ii) if such civilizations existed then they would have given us some sign of their existence". This was not a paradox for others, who took this to imply the absence of extraterrestrial life. But it was one for him, since he believed in extraterrestrial life and the possibility of space travel. Therefore, he proposed what is now known as the zoo hypothesis and speculated that mankind is not yet ready for higher beings to contact us. In turn, Tsiolkovsky himself was not the first to discover the paradox, as shown by his reference to other people's reasons for not accepting the premise that extraterrestrial civilizations exist.
In 1975, Michael H. Hart published a detailed examination of the paradox, one of the first to do so. He argued that if intelligent extraterrestrials exist, and are capable of space travel, then the galaxy could have been colonized in a time much less than that of the age of the Earth. However, there is no observable evidence they have been here, which Hart called "Fact A".
Other names closely related to Fermi's question ("Where are they?") include the Great Silence, and "silentium universi" (Latin for "silence of the universe"), though these only refer to one portion of the Fermi paradox, that humans see no evidence of other civilizations.
Original conversations.
In the summer of 1950 at Los Alamos National Laboratory in New Mexico, Enrico Fermi and co-workers Emil Konopinski, Edward Teller, and Herbert York had one or several lunchtime conversations. In one, Fermi suddenly blurted out, "Where is everybody?" (Teller's letter), or "Don't you ever wonder where everybody is?" (York's letter), or "But where is everybody?" (Konopinski's letter). Teller wrote, "The result of his question was general laughter because of the strange fact that, in spite of Fermi's question coming out of the blue, everybody around the table seemed to understand at once that he was talking about extraterrestrial life."
In 1984 York wrote that Fermi "followed up with a series of calculations on the probability of earthlike planets, the probability of life given an earth, the probability of humans given life, the likely rise and duration of high technology, and so on. He concluded on the basis of such calculations that we ought to have been visited long ago and many times over." Teller remembers that not much came of this conversation "except perhaps a statement that the distances to the next location of living beings may be very great and that, indeed, as far as our galaxy is concerned, we are living somewhere in the sticks, far removed from the metropolitan area of the galactic center."
Fermi died of cancer in 1954. However, in letters to the three surviving men decades later in 1984, Dr. Eric Jones of Los Alamos was able to partially put the original conversation back together. He informed each of the men that he wished to include a reasonably accurate version or composite in the written proceedings he was putting together for a previously held conference entitled "Interstellar Migration and the Human Experience". Jones first sent a letter to Edward Teller which included a secondhand account from Hans Mark. Teller responded, and then Jones sent Teller's letter to Herbert York. York responded, and finally, Jones sent both Teller's and York's letters to Emil Konopinski who also responded. Furthermore, Konopinski was able to later identify a cartoon which Jones found as the one involved in the conversation and thereby help to settle the time period as being the summer of 1950.
Basis.
The Fermi paradox is a conflict between the argument that scale and probability seem to favor intelligent life being common in the universe, and the total lack of evidence of intelligent life having ever arisen anywhere other than on Earth.
The first aspect of the Fermi paradox is a function of the scale or the large numbers involved: there are an estimated 200–400 billion stars in the Milky Way (2–4 × 10¹¹) and 70 sextillion (7 × 10²²) in the observable universe. Even if intelligent life occurs on only a minuscule percentage of planets around these stars, there might still be a great number of extant civilizations, and if the percentage were high enough it would produce a significant number of extant civilizations in the Milky Way. This assumes the mediocrity principle, by which Earth is a typical planet.
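A one-line illustration of this scale argument (using an arbitrarily assumed fraction, not a figure from the sources above): even a tiny rate of occurrence, multiplied by hundreds of billions of stars, yields a large number of civilizations.

```python
# Illustrative arithmetic only; the fraction below is an assumption, not an estimate.
stars_in_milky_way = 3e11           # mid-range of the 200-400 billion figure above
fraction_with_civilizations = 1e-6  # assumed "minuscule percentage"
print(stars_in_milky_way * fraction_with_civilizations)   # 300000.0 civilizations
```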
The second aspect of the Fermi paradox is the argument of probability: given intelligent life's ability to overcome scarcity, and its tendency to colonize new habitats, it seems possible that at least some civilizations would be technologically advanced, seek out new resources in space, and colonize their star system and, subsequently, surrounding star systems. Since there is no significant evidence on Earth, or elsewhere in the known universe, of other intelligent life after 13.8 billion years of the universe's history, there is a conflict requiring a resolution. Some examples of possible resolutions are that intelligent life is rarer than is thought, that assumptions about the general development or behavior of intelligent species are flawed, or, more radically, that current scientific understanding of the nature of the universe itself is quite incomplete.
The Fermi paradox can be asked in two ways. The first is, "Why are no aliens or their artifacts found on Earth, or in the Solar System?". If interstellar travel is possible, even the "slow" kind nearly within the reach of Earth technology, then it would only take from 5 million to 50 million years to colonize the galaxy. This is relatively brief on a geological scale, let alone a cosmological one. Since there are many stars older than the Sun, and since intelligent life might have evolved earlier elsewhere, the question then becomes why the galaxy has not been colonized already. Even if colonization is impractical or undesirable to all alien civilizations, large-scale exploration of the galaxy could be possible by probes. These might leave detectable artifacts in the Solar System, such as old probes or evidence of mining activity, but none of these have been observed.
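The order of magnitude of the 5 to 50 million year figure can be reproduced with a rough calculation (a sketch with assumed numbers, not the models used in the cited literature): divide the roughly 100,000 light-year extent of the galactic disc by an effective colonization-wave speed of a fraction of a percent to a few percent of light speed, a speed that folds in stopover and settlement time.

```python
# Back-of-envelope sketch; the effective wave speeds are assumptions.
galaxy_extent_ly = 100_000
for effective_speed_c in (0.002, 0.02):            # fraction of light speed, assumed
    crossing_time_myr = galaxy_extent_ly / effective_speed_c / 1e6
    print(f"{effective_speed_c} c -> {crossing_time_myr:.0f} million years")
# 0.002 c -> 50 million years;  0.02 c -> 5 million years
```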
The second form of the question is "Why are there no signs of intelligence elsewhere in the universe?". This version does not assume interstellar travel, but includes other galaxies as well. For distant galaxies, travel times may well explain the lack of alien visits to Earth, but a sufficiently advanced civilization could potentially be observable over a significant fraction of the size of the observable universe. Even if such civilizations are rare, the scale argument indicates they should exist somewhere at some point during the history of the universe, and since they could be detected from far away over a considerable period of time, many more potential sites for their origin are within range of human observation. It is unknown whether the paradox is stronger for the Milky Way galaxy or for the universe as a whole.
Drake equation.
The theories and principles in the Drake equation are closely related to the Fermi paradox. The equation was formulated by Frank Drake in 1961 in an attempt to find a systematic means to evaluate the numerous probabilities involved in the existence of alien life. The equation is presented as follows:
formula_0
Where formula_1 is the number of technologically advanced civilizations in the Milky Way galaxy, and formula_1 is asserted to be the product of seven factors: the galactic rate of star formation, the fraction of stars with planets, the number of potentially habitable planets per planetary system, the fraction of those planets on which life develops, the fraction of those on which intelligent life arises, the fraction of those that produce detectable technology, and the length of time over which such civilizations release detectable signals.
The fundamental problem is that the last four terms (formula_5, formula_6, formula_7, and formula_8) are entirely unknown, rendering statistical estimates impossible.
The Drake equation has been used by both optimists and pessimists, with wildly differing results. The first scientific meeting on the search for extraterrestrial intelligence (SETI), which had 10 attendees including Frank Drake and Carl Sagan, speculated that the number of civilizations in the Milky Way galaxy was roughly between 1,000 and 100,000,000. Conversely, Frank Tipler and John D. Barrow used pessimistic numbers and speculated that the average number of civilizations in a galaxy is much less than one. Almost all arguments involving the Drake equation suffer from the overconfidence effect, a common error of probabilistic reasoning about low-probability events, by guessing specific numbers for likelihoods of events whose mechanism is not yet understood, such as the likelihood of abiogenesis on an Earth-like planet, with current likelihood estimates varying over many hundreds of orders of magnitude. An analysis that takes into account some of the uncertainty associated with this lack of understanding has been carried out by Anders Sandberg, Eric Drexler and Toby Ord, and suggests "a substantial "ex ante" probability of there being no other intelligent life in our observable universe".
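A small sketch of how the equation is used (the parameter values below are arbitrary optimistic and pessimistic guesses chosen for illustration, not estimates from the sources cited here) shows how the unknown factors swing the result across many orders of magnitude, in line with the optimist–pessimist split described above.

```python
# Standard form of the Drake equation; the parameter values are illustrative guesses.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* * f_p * n_e * f_l * f_i * f_c * L."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

optimistic = drake(R_star=3, f_p=1.0, n_e=0.2, f_l=1.0, f_i=0.5, f_c=0.5, L=1e6)
pessimistic = drake(R_star=1, f_p=0.2, n_e=0.1, f_l=1e-3, f_i=1e-3, f_c=0.1, L=500)
print(optimistic, pessimistic)      # ~150000 vs ~1e-06 civilizations
```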
Great Filter.
The Great Filter, a concept introduced by Robin Hanson in 1996, represents whatever natural phenomena that would make it unlikely for life to evolve from inanimate matter to an advanced civilization. The most commonly agreed-upon low probability event is abiogenesis: a gradual process of increasing complexity of the first self-replicating molecules by a randomly occurring chemical process. Other proposed great filters are the emergence of eukaryotic cells or of meiosis or some of the steps involved in the evolution of a brain capable of complex logical deductions.
Astrobiologists Dirk Schulze-Makuch and William Bains, reviewing the history of life on Earth, including convergent evolution, concluded that transitions such as oxygenic photosynthesis, the eukaryotic cell, multicellularity, and tool-using intelligence are likely to occur on any Earth-like planet given enough time. They argue that the Great Filter may be abiogenesis, the rise of technological human-level intelligence, or an inability to settle other worlds because of self-destruction or a lack of resources. Paleobiologist Olev Vinn has suggested that the great filter may have universal biological roots related to evolutionary animal behavior.
Grabby Aliens.
In 2021, the concepts of quiet, loud, and grabby aliens were introduced by Hanson "et al." The possible "loud" aliens expand rapidly in a highly detectable way throughout the universe and endure, while "quiet" aliens are hard or impossible to detect and eventually disappear. "Grabby" aliens prevent the emergence of other civilizations in their sphere of influence. The authors argue that if loud civilizations are rare, as they appear to be, then quiet civilizations are also rare. The paper suggests that humanity's current stage of technological development is relatively early in the potential timeline of intelligent life in the universe, as loud aliens would otherwise be observable by astronomers.
Earlier in 2013, Anders Sandberg and Stuart Armstrong examined the potential for intelligent life to spread intergalactically throughout the universe and the implications for the Fermi Paradox. Their study suggests that with sufficient energy, intelligent civilizations could potentially colonize the entire Milky Way galaxy within a few million years, and spread to nearby galaxies in a timespan that is cosmologically brief. They conclude that intergalactic colonization appears possible with the resources of a single solar system and that intergalactic colonization is of comparable difficulty to interstellar colonization, and therefore the Fermi paradox is much sharper than commonly thought.
Empirical evidence.
There are two parts of the Fermi paradox that rely on empirical evidence—that there are many potentially habitable planets, and that humans see no evidence of life. The first point, that many suitable planets exist, was an assumption in Fermi's time but is now supported by the discovery that exoplanets are common. Current models predict billions of habitable worlds in the Milky Way.
The second part of the paradox, that humans see no evidence of extraterrestrial life, is also an active field of scientific research. This includes both efforts to find any indication of life, and efforts specifically directed to finding intelligent life. These searches have been made since 1960, and several are ongoing.
Although astronomers do not usually search for extraterrestrials, they have observed phenomena that they could not immediately explain without positing an intelligent civilization as the source. For example, pulsars, when first discovered in 1967, were called little green men (LGM) because of the precise repetition of their pulses. In all cases, explanations with no need for intelligent life have been found for such observations, but the possibility of discovery remains. Proposed examples include asteroid mining that would change the appearance of debris disks around stars, or spectral lines from nuclear waste disposal in stars.
Explanations based on technosignatures, such as radio communications, have been presented.
Electromagnetic emissions.
Radio technology and the ability to construct a radio telescope are presumed to be a natural advance for technological species, theoretically creating effects that might be detected over interstellar distances. The careful searching for non-natural radio emissions from space may lead to the detection of alien civilizations. Sensitive alien observers of the Solar System, for example, would note unusually intense radio waves for a G2 star due to Earth's television and telecommunication broadcasts. In the absence of an apparent natural cause, alien observers might infer the existence of a terrestrial civilization. Such signals could be either "accidental" by-products of a civilization, or deliberate attempts to communicate, such as the Arecibo message. It is unclear whether "leakage", as opposed to a deliberate beacon, could be detected by an extraterrestrial civilization. The most sensitive radio telescopes on Earth, as of 2019, would not be able to detect non-directional radio signals (such as broadband) even at a fraction of a light-year away, but other civilizations could hypothetically have much better equipment.
A number of astronomers and observatories have attempted and are attempting to detect such evidence, mostly through SETI organizations such as the SETI Institute and Breakthrough Listen. Several decades of SETI analysis have not revealed any unusually bright or meaningfully repetitive radio emissions.
Direct planetary observation.
Exoplanet detection and classification is a very active sub-discipline in astronomy; the first candidate terrestrial planet discovered within a star's habitable zone was found in 2007. New refinements in exoplanet detection methods, and use of existing methods from space (such as the Kepler and TESS missions) are starting to detect and characterize Earth-size planets, to determine whether they are within the habitable zones of their stars. Such observational refinements may allow for a better estimation of how common these potentially habitable worlds are.
Conjectures about interstellar probes.
The Hart-Tipler conjecture is a form of contraposition which states that because no interstellar probes have been detected, there likely is no other intelligent life in the universe, as such life should be expected to eventually create and launch such probes. Self-replicating probes could exhaustively explore a galaxy the size of the Milky Way in as little as a million years. If even a single civilization in the Milky Way attempted this, such probes could spread throughout the entire galaxy. Another speculation for contact with an alien probe—one that would be trying to find human beings—is an alien Bracewell probe. Such a hypothetical device would be an autonomous space probe whose purpose is to seek out and communicate with alien civilizations (as opposed to von Neumann probes, which are usually described as purely exploratory). These were proposed as an alternative to carrying a slow speed-of-light dialogue between vastly distant neighbors. Rather than contending with the long delays a radio dialogue would suffer, a probe housing an artificial intelligence would seek out an alien civilization to carry on a close-range communication with the discovered civilization. The findings of such a probe would still have to be transmitted to the home civilization at light speed, but an information-gathering dialogue could be conducted in real time.
Direct exploration of the Solar System has yielded no evidence indicating a visit by aliens or their probes. Detailed exploration of areas of the Solar System where resources would be plentiful may yet produce evidence of alien exploration, though the entirety of the Solar System is vast and difficult to investigate. Attempts to signal, attract, or activate hypothetical Bracewell probes in Earth's vicinity have not succeeded.
Searches for stellar-scale artifacts.
In 1959, Freeman Dyson observed that every developing human civilization constantly increases its energy consumption, and he conjectured that a civilization might try to harness a large part of the energy produced by a star. He proposed a hypothetical "Dyson sphere" as a possible means: a shell or cloud of objects enclosing a star to absorb and utilize as much radiant energy as possible. Such a feat of astroengineering would drastically alter the observed spectrum of the star involved, changing it at least partly from the normal emission lines of a natural stellar atmosphere to those of black-body radiation, probably with a peak in the infrared. Dyson speculated that advanced alien civilizations might be detected by examining the spectra of stars and searching for such an altered spectrum.
There have been some attempts to find evidence of the existence of Dyson spheres that would alter the spectra of their core stars. Direct observation of thousands of galaxies has shown no explicit evidence of artificial construction or modifications. In October 2015, there was some speculation that a dimming of light from star KIC 8462852, observed by the Kepler space telescope, could have been a result of Dyson sphere construction. However, in 2018, observations determined that the amount of dimming varied by the frequency of the light, pointing to dust, rather than an opaque object such as a Dyson sphere, as the culprit for causing the dimming.
Hypothetical explanations for the paradox.
Rarity of intelligent life.
Extraterrestrial life is rare or non-existent.
Those who think that intelligent extraterrestrial life is (nearly) impossible argue that the conditions needed for the evolution of life—or at least the evolution of biological complexity—are rare or even unique to Earth. Under this assumption, called the rare Earth hypothesis, a rejection of the mediocrity principle, complex multicellular life is regarded as exceedingly unusual.
The rare Earth hypothesis argues that the evolution of biological complexity requires a host of fortuitous circumstances, such as a galactic habitable zone, a star and planet(s) having the requisite conditions, such as enough of a continuous habitable zone, the advantage of a giant guardian like Jupiter and a large moon, conditions needed to ensure the planet has a magnetosphere and plate tectonics, the chemistry of the lithosphere, atmosphere, and oceans, the role of "evolutionary pumps" such as massive glaciation and rare bolide impacts. Perhaps most importantly, advanced life needs whatever it was that led to the transition of (some) prokaryotic cells to eukaryotic cells, sexual reproduction and the Cambrian explosion.
In his book "Wonderful Life" (1989), Stephen Jay Gould suggested that if the "tape of life" were rewound to the time of the Cambrian explosion, and one or two tweaks made, human beings most probably never would have evolved. Other thinkers such as Fontana, Buss, and Kauffman have written about the self-organizing properties of life.
Extraterrestrial intelligence is rare or non-existent.
It is possible that even if complex life is common, intelligence (and consequently civilizations) is not. While there are remote sensing techniques that could perhaps detect life-bearing planets without relying on the signs of technology, none of them have any ability to tell if any detected life is intelligent. This is sometimes referred to as the "algae vs. alumnae" problem.
Charles Lineweaver states that when considering any extreme trait in an animal, intermediate stages do not necessarily produce "inevitable" outcomes. For example, large brains are no more "inevitable", or convergent, than are the long noses of animals such as aardvarks and elephants. As he points out, "dolphins have had ~20 million years to build a radio telescope and have not done so". In addition, Rebecca Boyle points out that of all the species that have ever evolved in the history of life on the planet Earth, only one—human beings and only in the beginning stages—has ever become space-faring.
Periodic extinction by natural events.
New life might commonly die out due to runaway heating or cooling on their fledgling planets. On Earth, there have been numerous major extinction events that destroyed the majority of complex species alive at the time; the extinction of the non-avian dinosaurs is the best known example. These are thought to have been caused by events such as impact from a large meteorite, massive volcanic eruptions, or astronomical events such as gamma-ray bursts. It may be the case that such extinction events are common throughout the universe and periodically destroy intelligent life, or at least its civilizations, before the species is able to develop the technology to communicate with other intelligent species.
However, the chances of extinction by natural events may be very low on the scale of a civilization's lifetime. Based on an analysis of impact craters on Earth and the Moon, the average interval between impacts large enough to cause global consequences (like the Chicxulub impact) is estimated to be around 100 million years.
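A quick Poisson estimate (an illustration using the roughly 100-million-year interval quoted above together with an assumed civilization lifetime, which is not a figure from the sources) shows how small the per-civilization risk is:

```python
# Illustrative Poisson estimate; the 10,000-year lifetime is an assumption.
import math

mean_interval_yr = 100e6          # average spacing of Chicxulub-scale impacts (quoted above)
civilization_lifetime_yr = 1e4    # assumed lifetime of a technological civilization
expected_impacts = civilization_lifetime_yr / mean_interval_yr
p_at_least_one = 1 - math.exp(-expected_impacts)
print(p_at_least_one)             # ~1e-4, i.e. about a 0.01% chance per civilization
```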
Evolutionary explanations.
Intelligent alien species have not developed advanced technologies.
It may be that while alien species with intelligence exist, they are primitive or have not reached the level of technological advancement necessary to communicate. Along with non-intelligent life, such civilizations would also be very difficult to detect. A trip using conventional rockets would take hundreds of thousands of years to reach the nearest stars.
To skeptics, the fact that in the history of life on the Earth, only one species has developed a civilization to the point of being capable of spaceflight and radio technology, lends more credence to the idea that technologically advanced civilizations are rare in the universe.
Amedeo Balbi and Adam Frank propose the concept of an "oxygen bottleneck" for the emergence of technospheres. The "oxygen bottleneck" refers to the critical level of atmospheric oxygen necessary for fire and combustion. Earth's current atmospheric oxygen concentration is about 21%, but it has been much lower in the past and may likewise be much lower on many exoplanets. The authors argue that while the threshold of oxygen required for the existence of complex life and ecosystems is much lower, technological advancement, particularly that reliant on combustion, such as metal smelting and energy production, requires higher oxygen concentrations of around 18% or more. Thus, the presence of high levels of oxygen in a planet's atmosphere is not only a potential biosignature but also a critical factor in the emergence of detectable technological civilizations.
Another hypothesis in this category is the "Water World hypothesis". According to author and scientist David Brin: "it turns out that our Earth skates the very inner edge of our sun's continuously habitable—or 'Goldilocks'—zone. And Earth may be anomalous. It may be that because we are so close to our sun, we have an anomalously oxygen-rich atmosphere, and we have anomalously little ocean for a water world. In other words, 32 percent continental mass may be high among water worlds..." Brin continues, "In which case, the evolution of creatures like us, with hands and fire and all that sort of thing, may be rare in the galaxy. In which case, when we do build starships and head out there, perhaps we'll find lots and lots of life worlds, but they're all like Polynesia. We'll find lots and lots of intelligent lifeforms out there, but they're all dolphins, whales, squids, who could never build their own starships. What a perfect universe for us to be in, because nobody would be able to boss us around, and we'd get to be the voyagers, the "Star Trek" people, the starship builders, the policemen, and so on."
It is the nature of intelligent life to destroy itself.
This is the argument that technological civilizations may usually or invariably destroy themselves before or shortly after developing radio or spaceflight technology. The astrophysicist Sebastian von Hoerner stated that the progress of science and technology on Earth was driven by two factors—the struggle for domination and the desire for an easy life. The former potentially leads to complete destruction, while the latter may lead to biological or mental degeneration. Possible means of annihilation via major global issues, where global interconnectedness actually makes humanity more vulnerable than resilient, are many, including war, accidental environmental contamination or damage, the development of biotechnology, synthetic life like mirror life, resource depletion, climate change, or poorly-designed artificial intelligence. This general theme is explored both in fiction and in scientific hypothesizing.
In 1966, Sagan and Shklovskii speculated that technological civilizations will either tend to destroy themselves within a century of developing interstellar communicative capability or master their self-destructive tendencies and survive for billion-year timescales. Self-annihilation may also be viewed in terms of thermodynamics: insofar as life is an ordered system that can sustain itself against the tendency to disorder, Stephen Hawking's "external transmission" or interstellar communicative phase, where knowledge production and knowledge management is more important than transmission of information via evolution, may be the point at which the system becomes unstable and self-destructs. Here, Hawking emphasizes self-design of the human genome (transhumanism) or enhancement via machines (e.g., brain–computer interface) to enhance human intelligence and reduce aggression, without which he implies human civilization may be too stupid collectively to survive an increasingly unstable system. For instance, the development of technologies during the "external transmission" phase, such as weaponization of artificial general intelligence or antimatter, may not be met by concomitant increases in human ability to manage its own inventions. Consequently, disorder increases in the system: global governance may become increasingly destabilized, worsening humanity's ability to manage the possible means of annihilation listed above, resulting in global societal collapse.
A less theoretical example might be the resource-depletion issue on Polynesian islands, of which Easter Island is only the best known. David Brin points out that during the expansion phase from 1500 BC to 800 AD there were cycles of overpopulation followed by what might be called periodic cullings of adult males through war or ritual. He writes, "There are many stories of islands whose men were almost wiped out—sometimes by internal strife, and sometimes by invading males from other islands."
Using extinct civilizations such as Easter Island (Rapa Nui) as models, a study conducted in 2018 by Adam Frank "et al." posited that climate change induced by "energy intensive" civilizations may prevent sustainability within such civilizations, thus explaining the paradoxical lack of evidence for intelligent extraterrestrial life. Based on dynamical systems theory, the study examined how technological civilizations (exo-civilizations) consume resources and the feedback effects this consumption has on their planets and their carrying capacity. According to Adam Frank, "[t]he point is to recognize that driving climate change may be something generic. The laws of physics demand that any young population, building an energy-intensive civilization like ours, is going to have feedback on its planet. Seeing climate change in this cosmic context may give us better insight into what’s happening to us now and how to deal with it." Generalizing the Anthropocene, their model produces four different outcomes: die-off, sustainability (a soft landing), collapse without a change of resources, and collapse with a resource change.
It is the nature of intelligent life to destroy others.
Another hypothesis is that an intelligent species beyond a certain point of technological capability will destroy other intelligent species as they appear, perhaps by using self-replicating probes. Science fiction writer Fred Saberhagen has explored this idea in his "Berserker" series, as has physicist Gregory Benford and, as well, science fiction writer Greg Bear in his "The Forge of God" novel, and later Liu Cixin in his "The Three-Body Problem" series.
A species might undertake such extermination out of expansionist motives, greed, paranoia, or aggression. In 1981, cosmologist Edward Harrison argued that such behavior would be an act of prudence: an intelligent species that has overcome its own self-destructive tendencies might view any other species bent on galactic expansion as a threat. It has also been suggested that a successful alien species would be a superpredator, as are humans. Another possibility invokes the "tragedy of the commons" and the anthropic principle: the first lifeform to achieve interstellar travel will necessarily (even if unintentionally) prevent competitors from arising, and humans simply happen to be first.
Civilizations only broadcast detectable signals for a brief period of time.
It may be that alien civilizations are detectable through their radio emissions for only a short time, reducing the likelihood of spotting them. The usual assumption is that civilizations outgrow radio through technological advancement. However, there could be other leakage such as that from microwaves used to transmit power from solar satellites to ground receivers. Regarding the first point, in a 2006 "Sky & Telescope" article, Seth Shostak wrote, "Moreover, radio leakage from a planet is only likely to get weaker as a civilization advances and its communications technology gets better. Earth itself is increasingly switching from broadcasts to leakage-free cables and fiber optics, and from primitive but obvious carrier-wave broadcasts to subtler, hard-to-recognize spread-spectrum transmissions."
More hypothetically, advanced alien civilizations may evolve beyond broadcasting at all in the electromagnetic spectrum and communicate by technologies not developed or used by mankind. Some scientists have hypothesized that advanced civilizations may send neutrino signals. If such signals exist, they could be detectable by neutrino detectors that are now under construction for other goals.
Alien life may be too incomprehensible.
Another possibility is that human theoreticians have underestimated how much alien life might differ from that on Earth. Aliens may be psychologically unwilling to attempt to communicate with human beings. Perhaps human mathematics is parochial to Earth and not shared by other life, though others argue this can only apply to abstract math since the math associated with physics must be similar (in results, if not in methods).
In his 2009 book, SETI scientist Seth Shostak wrote, "Our experiments [such as plans to use drilling rigs on Mars] are still looking for the type of extraterrestrial that would have appealed to Percival Lowell [astronomer who believed he had observed canals on Mars]."
Physiology might also cause a communication barrier. Carl Sagan speculated that an alien species might have a thought process orders of magnitude slower (or faster) than that of humans. A message broadcast by that species might well seem like random background noise to humans, and therefore go undetected.
Paul Davies states that 500 years ago the very idea of a computer doing work merely by manipulating internal data may not have been viewed as a technology at all. He writes, "Might there be a still higher level[...] If so, this 'third level' would never be manifest through observations made at the informational level, still less the matter level. There is no vocabulary to describe the third level, but that doesn't mean it is non-existent, and we need to be open to the possibility that alien technology may operate at the third level, or maybe the fourth, fifth[...] levels."
Arthur C. Clarke hypothesized that "our technology must still be laughably primitive; we may well be like jungle savages listening for the throbbing of tom-toms, while the ether around them carries more words per second than they could utter in a lifetime". Another thought is that technological civilizations invariably experience a technological singularity and attain a post-biological character.
Sociological explanations.
Colonization is not the cosmic norm.
In response to Tipler's idea of self-replicating probes, Stephen Jay Gould wrote, "I must confess that I simply don't know how to react to such arguments. I have enough trouble predicting the plans and reactions of the people closest to me. I am usually baffled by the thoughts and accomplishments of humans in different cultures. I'll be damned if I can state with certainty what some extraterrestrial source of intelligence might do."
Alien species may have only settled part of the galaxy.
According to a study by Frank "et al.", advanced civilizations may not colonize everything in the galaxy due to their potential adoption of steady states of expansion. This hypothesis suggests that civilizations might reach a stable pattern of expansion where they neither collapse nor aggressively spread throughout the galaxy. A February 2019 article in "Popular Science" states, "Sweeping across the Milky Way and establishing a unified galactic empire might be inevitable for a monolithic super-civilization, but most cultures are neither monolithic nor super—at least if our experience is any guide." Astrophysicist Adam Frank, along with co-authors such as astronomer Jason Wright, ran a variety of simulations in which they varied such factors as settlement lifespans, fractions of suitable planets, and recharge times between launches. They found many of their simulations seemingly resulted in a "third category" in which the Milky Way remains partially settled indefinitely. The abstract to their 2019 paper states, "These results break the link between Hart's famous 'Fact A' (no interstellar visitors on Earth now) and the conclusion that humans must, therefore, be the only technological civilization in the galaxy. Explicitly, our solutions admit situations where our current circumstances are consistent with an otherwise settled, steady-state galaxy."
An alternative scenario is that long-lived civilizations may only choose to colonize stars during closest approach. As low mass K- and M-type dwarfs are by far the most common types of main sequence stars in the Milky Way, they are more likely to pass close to existing civilizations. These stars have longer life spans, which may be preferred by such a civilization. Interstellar travel capability of 0.3 light years is theoretically sufficient to colonize all M-dwarfs in the galaxy within 2 billion years. If the travel capability is increased to 2 light years, then all K-dwarfs can be colonized in the same time frame.
Alien species may isolate themselves in virtual worlds.
Avi Loeb suggests that one possible explanation for the Fermi paradox is virtual reality technology. Individuals of extraterrestrial civilizations may prefer to spend time in virtual worlds or metaverses that have different physical law constraints as opposed to focusing on colonizing planets. Nick Bostrom suggests that some advanced beings may divest themselves entirely of physical form, create massive artificial virtual environments, transfer themselves into these environments through mind uploading, and exist totally within virtual worlds, ignoring the external physical universe.
It may be that intelligent alien life develops an "increasing disinterest" in their outside world. Possibly any sufficiently advanced society will develop highly engaging media and entertainment well before the capacity for advanced space travel, with the rate of appeal of these social contrivances being destined, because of their inherent reduced complexity, to overtake any desire for complex, expensive endeavors such as space exploration and communication. Once any sufficiently advanced civilization becomes able to master its environment, and most of its physical needs are met through technology, various "social and entertainment technologies", including virtual reality, are postulated to become the primary drivers and motivations of that civilization.
Artificial intelligence may not expand.
While artificial intelligence supplanting its creators could only deepen the Fermi paradox, for example by enabling the colonization of the galaxy through self-replicating probes, it is also possible that after replacing its creators, artificial intelligence either does not expand or does not endure, for a variety of reasons. Michael A. Garrett has suggested that biological civilizations may universally underestimate the speed at which AI systems progress, and not react to it in time, thus making it a possible great filter. He also argues that this could make the longevity of advanced technological civilizations less than 200 years, thus explaining the great silence observed by SETI.
Economic explanations.
Lack of resources needed to physically spread throughout the galaxy.
The ability of an alien culture to colonize other star systems is based on the idea that interstellar travel is technologically feasible. While the current understanding of physics rules out the possibility of faster-than-light travel, it appears that there are no major theoretical barriers to the construction of "slow" interstellar ships, even though the engineering required is considerably beyond present human capabilities. This idea underlies the concept of the von Neumann probe and the Bracewell probe as potential evidence of extraterrestrial intelligence.
It is possible, however, that present scientific knowledge cannot properly gauge the feasibility and costs of such interstellar colonization. Theoretical barriers may not yet be understood, and the resources needed may be so great as to make it unlikely that any civilization could afford to attempt it. Even if interstellar travel and colonization are possible, they may be difficult, leading to a colonization model based on percolation theory.
Colonization efforts may not occur as an unstoppable rush, but rather as an uneven tendency to "percolate" outwards, within an eventual slowing and termination of the effort given the enormous costs involved and the expectation that colonies will inevitably develop a culture and civilization of their own. Colonization may thus occur in "clusters", with large areas remaining uncolonized at any one time.
Information is cheaper to transmit than matter is to transfer.
If a human-capability machine intelligence is possible, and if it is possible to transfer such constructs over vast distances and rebuild them on a remote machine, then it might not make strong economic sense to travel the galaxy by spaceflight. Louis K. Scheffer calculates the cost of radio transmission of information across space to be cheaper than spaceflight by a factor of 10⁸–10¹⁷. For a machine civilization, the costs of interstellar travel are therefore enormous compared to the more efficient option of sending computational signals across space to already established sites. After the first civilization has physically explored or colonized the galaxy, as well as sent such machines for easy exploration, then any subsequent civilizations, after having contacted the first, may find it cheaper, faster, and easier to explore the galaxy through intelligent mind transfers to the machines built by the first civilization. However, since a star system needs only one such remote machine, and the communication is most likely highly directed, transmitted at high frequencies, and at minimal power to be economical, such signals would be hard to detect from Earth.
By contrast, in economics the counter-intuitive Jevons paradox implies that higher productivity results in higher demand. In other words, increased economic efficiency results in increased economic growth. For example, increased renewable energy has the risk of not directly resulting in declining fossil fuel use, but rather additional economic growth as fossil fuels instead are directed to alternative uses. Thus, technological innovation makes human civilization more capable of higher levels of consumption, as opposed to its existing consumption being achieved more efficiently at a stable level.
Discovery of extraterrestrial life is too difficult.
Humans have not listened properly.
There are some assumptions that underlie the SETI programs that may cause searchers to miss signals that are present. Extraterrestrials might, for example, transmit signals that have a very high or low data rate, or employ unconventional (in human terms) frequencies, which would make them hard to distinguish from background noise. Signals might be sent from non-main sequence star systems that humans search with lower priority; current programs assume that most alien life will be orbiting Sun-like stars.
The greatest challenge is the sheer size of the radio search needed to look for signals (effectively spanning the entire observable universe), the limited amount of resources committed to SETI, and the sensitivity of modern instruments. SETI estimates, for instance, that with a radio telescope as sensitive as the Arecibo Observatory, Earth's television and radio broadcasts would only be detectable at distances up to 0.3 light-years, less than 1/10 the distance to the nearest star. A signal is much easier to detect if it consists of a deliberate, powerful transmission directed at Earth. Such signals could be detected at ranges of hundreds to tens of thousands of light-years distance. However, this means that detectors must be listening to an appropriate range of frequencies, and be in that region of space to which the beam is being sent. Many SETI searches assume that extraterrestrial civilizations will be broadcasting a deliberate signal, like the Arecibo message, in order to be found.
Thus, to detect alien civilizations through their radio emissions, Earth observers either need more sensitive instruments or must hope for fortunate circumstances: that the broadband radio emissions of alien radio technology are much stronger than humanity's own; that one of SETI's programs is listening to the correct frequencies from the right regions of space; or that aliens are deliberately sending focused transmissions in Earth's general direction.
Humans have not listened for long enough.
Humanity's ability to detect intelligent extraterrestrial life has existed for only a very brief period—from 1937 onwards, if the invention of the radio telescope is taken as the dividing line—and "Homo sapiens" is a geologically recent species. The whole period of modern human existence to date is a very brief period on a cosmological scale, and radio transmissions have only been propagated since 1895. Thus, it remains possible that human beings have neither existed long enough nor made themselves sufficiently detectable to be found by extraterrestrial intelligence.
Intelligent life may be too far away.
It may be that non-colonizing technologically capable alien civilizations exist, but that they are simply too far apart for meaningful two-way communication. Sebastian von Hoerner estimated the average duration of civilization at 6,500 years and the average distance between civilizations in the Milky Way at 1,000 light years. If two civilizations are separated by several thousand light-years, it is possible that one or both cultures may become extinct before meaningful dialogue can be established. Human searches may be able to detect their existence, but communication will remain impossible because of distance. It has been suggested that this problem might be ameliorated somewhat if contact and communication is made through a Bracewell probe. In this case at least one partner in the exchange may obtain meaningful information. Alternatively, a civilization may simply broadcast its knowledge, and leave it to the receiver to make what they may of it. This is similar to the transmission of information from ancient civilizations to the present, and humanity has undertaken similar activities like the Arecibo message, which could transfer information about Earth's intelligent species, even if it never yields a response or does not yield a response in time for humanity to receive it. It is possible that observational signatures of self-destroyed civilizations could be detected, depending on the destruction scenario and the timing of human observation relative to it.
A related speculation by Sagan and Newman suggests that if other civilizations exist, and are transmitting and exploring, their signals and probes simply have not arrived yet. However, critics have noted that this is unlikely, since it requires that humanity's advancement has occurred at a very special point in time, while the Milky Way is in transition from empty to full. This is a tiny fraction of the lifespan of a galaxy under ordinary assumptions, so the likelihood that humanity is in the midst of this transition is considered low in the paradox.
Some SETI skeptics may also believe that humanity is at a very special point of time—specifically, a transitional period from no space-faring societies to one space-faring society, namely that of human beings.
Intelligent life may exist hidden from view.
Planetary scientist Alan Stern put forward the idea that there could be a number of worlds with subsurface oceans (such as Jupiter's Europa or Saturn's Enceladus). The surface would provide a large degree of protection from such things as cometary impacts and nearby supernovae, as well as creating a situation in which a much broader range of orbits are acceptable. Life, and potentially intelligence and civilization, could evolve. Stern states, "If they have technology, and let's say they're broadcasting, or they have city lights or whatever—we can't see it in any part of the spectrum, except maybe very-low-frequency [radio]."
Advanced civilizations may limit their search for life to technological signatures.
If life is abundant in the universe but the cost of space travel is high, an advanced civilization may choose to focus its search not on signs of life in general, but on those of other advanced civilizations, and specifically on radio signals. Since humanity has only recently begun to use radio communication, its signals may have yet to arrive at other inhabited planets, and if they have, probes from those planets may have yet to arrive on Earth.
Willingness to communicate.
Everyone is listening but no one is transmitting.
Alien civilizations might be technically capable of contacting Earth, but could be only listening instead of transmitting. If all or most civilizations act in the same way, the galaxy could be full of civilizations eager for contact, but everyone is listening and no one is transmitting. This is the so-called "SETI Paradox".
The only civilization known, humanity, does not explicitly transmit, except for a few small efforts. Even these efforts, and certainly any attempt to expand them, are controversial. It is not even clear humanity would respond to a detected signal—the official policy within the SETI community is that "[no] response to a signal or other evidence of extraterrestrial intelligence should be sent until appropriate international consultations have taken place". However, given the possible impact of any reply, it may be very difficult to obtain any consensus on who would speak and what they would say.
Communication is dangerous.
An alien civilization might feel it is too dangerous to communicate, either for humanity or for them. It is argued that when very different civilizations have met on Earth, the results have often been disastrous for one side or the other, and the same may well apply to interstellar contact. Even contact at a safe distance could lead to infection by computer code or even ideas themselves. Perhaps prudent civilizations actively hide not only from Earth but from everyone, out of fear of other civilizations.
Perhaps the Fermi paradox itself—or the alien equivalent of it—is the reason for any civilization to avoid contact with other civilizations, even if no other obstacles existed. From any one civilization's point of view, it would be unlikely for them to be the first ones to make first contact. Therefore, according to this reasoning, it is likely that previous civilizations faced fatal problems with first contact and doing so should be avoided. So perhaps every civilization keeps quiet because of the possibility that there is a real reason for others to do so.
In 1987, science fiction author Greg Bear explored this concept in his novel "The Forge of God". In "The Forge of God", humanity is likened to a baby crying in a hostile forest: "There once was an infant lost in the woods, crying its heart out, wondering why no one answered, drawing down the wolves." One of the characters explains, "We've been sitting in our tree chirping like foolish birds for over a century now, wondering why no other birds answered. The galactic skies are full of hawks, that's why. Planetisms that don't know enough to keep quiet, get eaten."
In Liu Cixin's 2008 novel "The Dark Forest", the author proposes a literary explanation for the Fermi paradox in which many alien civilizations exist, but are both silent and paranoid, destroying any nascent lifeforms loud enough to make themselves known. This is because any other intelligent life may represent a future threat. As a result, Liu's fictional universe contains a plethora of quiet civilizations which do not reveal themselves, as in a "dark forest" ... filled with "armed hunter(s) stalking through the trees like a ghost". This idea has come to be known as the dark forest hypothesis.
Earth is deliberately being avoided.
The zoo hypothesis states that intelligent extraterrestrial life exists and does not contact life on Earth to allow for its natural evolution and development. A variation on the zoo hypothesis is the laboratory hypothesis, where humanity has been or is being subject to experiments, with Earth or the Solar System effectively serving as a laboratory. The zoo hypothesis may break down under the uniformity of motive flaw: all it takes is a single culture or civilization to decide to act contrary to the imperative within humanity's range of detection for it to be abrogated, and the probability of such a violation of hegemony increases with the number of civilizations, tending not towards a "Galactic Club" with a unified foreign policy with regard to life on Earth but multiple "Galactic Cliques". However, if artificial superintelligences dominate galactic life, and if it is true that such intelligences tend towards merged hegemonic behavior, then this would address the uniformity of motive flaw by dissuading rogue behavior.
Analysis of the inter-arrival times between civilizations in the galaxy based on common astrobiological assumptions suggests that the initial civilization would have a commanding lead over the later arrivals. As such, it may have established what has been termed the "zoo hypothesis" through force or as a galactic or universal norm and the resultant "paradox" by a cultural founder effect with or without the continued activity of the founder. Some colonization scenarios predict spherical expansion across star systems, with continued expansion coming from the systems just previously settled. It has been suggested that this would cause a strong selection process among the colonization front favoring cultural or biological adaptations to living in starships or space habitats. As a result, they may forgo living on planets. This may result in the destruction of terrestrial planets in these systems for use as building materials, thus preventing the development of life on those worlds. Or, they may have an ethic of protection for "nursery worlds", and protect them.
It is possible that a civilization advanced enough to travel between solar systems could be actively visiting or observing Earth while remaining undetected or unrecognized. Following this logic, and building on arguments that other proposed solutions to the Fermi paradox may be implausible, Ian Crawford and Dirk Schulze-Makuch have argued that technological civilisations are either very rare in the Galaxy or are deliberately hiding from us.
Earth is deliberately being isolated.
A related idea to the zoo hypothesis is that, beyond a certain distance, the perceived universe is a simulated reality. The planetarium hypothesis speculates that beings may have created this simulation so that the universe appears to be empty of other life.
Alien life is already here, unacknowledged.
A significant fraction of the population believes that at least some UFOs (Unidentified Flying Objects) are spacecraft piloted by aliens. While most of these are unrecognized or mistaken interpretations of mundane phenomena, some occurrences remain puzzling even after investigation. The consensus scientific view is that although they may be unexplained, they do not rise to the level of convincing evidence.
Similarly, it is theoretically possible that SETI groups are not reporting positive detections, or governments have been blocking signals or suppressing publication. This response might be attributed to security or economic interests from the potential use of advanced extraterrestrial technology. It has been suggested that the detection of an extraterrestrial radio signal or technology could well be the most highly secret information that exists. Claims that this has already happened are common in the popular press, but the scientists involved report the opposite experience—the press becomes informed and interested in a potential detection even before a signal can be confirmed.
Regarding the idea that aliens are in secret contact with governments, David Brin writes, "Aversion to an idea, simply because of its long association with crackpots, gives crackpots altogether too much influence."
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "N = R_* \\cdot f_\\mathrm{p} \\cdot n_\\mathrm{e} \\cdot f_\\mathrm{l} \\cdot f_\\mathrm{i} \\cdot f_\\mathrm{c} \\cdot L"
},
{
"math_id": 1,
"text": "N"
},
{
"math_id": 2,
"text": "R_*"
},
{
"math_id": 3,
"text": "f_p"
},
{
"math_id": 4,
"text": "n_e"
},
{
"math_id": 5,
"text": "f_l"
},
{
"math_id": 6,
"text": "f_i"
},
{
"math_id": 7,
"text": "f_c"
},
{
"math_id": 8,
"text": "L"
}
] | https://en.wikipedia.org/wiki?curid=11579 |
1158 | Algebraic number | Complex number that is a root of a non-zero polynomial in one variable with rational coefficients
An algebraic number is a number that is a root of a non-zero polynomial (of finite degree) in one variable with integer (or, equivalently, rational) coefficients. For example, the golden ratio, formula_0, is an algebraic number, because it is a root of the polynomial "x"2 − "x" − 1. That is, it is a value for x for which the polynomial evaluates to zero. As another example, the complex number formula_1 is algebraic because it is a root of "x"4 + 4.
All integers and rational numbers are algebraic, as are all roots of integers. Real and complex numbers that are not algebraic, such as π and e, are called transcendental numbers.
The set of algebraic numbers is countably infinite and has measure zero in the Lebesgue measure as a subset of the uncountable complex numbers. In that sense, almost all complex numbers are transcendental.
Examples.
Every rational number, written as the quotient a/b of an integer a and a non-zero integer b, is algebraic: it is the root of a non-zero polynomial, namely "bx" − "a".
Quadratic irrational numbers, the irrational roots of a quadratic polynomial "ax"2 + "bx" + "c" with integer coefficients, are algebraic numbers. If the quadratic polynomial is monic ("a" = 1), the roots are further qualified as quadratic integers.
Further examples of algebraic numbers are formula_2, formula_3, and formula_4.
Values of trigonometric functions at rational multiples of π (except when undefined) are algebraic. For example, cos(π/7), cos(3π/7), and cos(5π/7) satisfy 8"x"3 − 4"x"2 − 4"x" + 1 = 0. This polynomial is irreducible over the rationals, and so the three cosines are "conjugate" algebraic numbers. Likewise, tan(3π/16), tan(7π/16), tan(11π/16), and tan(15π/16) satisfy the irreducible polynomial "x"4 − 4"x"3 − 6"x"2 + 4"x" + 1 = 0, and so are conjugate algebraic integers. This corresponds to angles which, when measured in degrees, are rational numbers.
Properties.
Degree of simple extensions of the rationals as a criterion to algebraicity.
For any α, the simple extension of the rationals by α, denoted by formula_5, is of finite degree if and only if α is an algebraic number.
The condition of finite degree means that there is a finite set formula_6 in formula_7 such that formula_8; that is, every member in formula_7 can be written as formula_9 for some rational numbers formula_10 (note that the set formula_11 is fixed).
Indeed, since the formula_12 are themselves members of formula_7, each can be expressed as sums of products of rational numbers and powers of α, and therefore this condition is equivalent to the requirement that for some finite formula_13, formula_14.
The latter condition is equivalent to formula_15, itself a member of formula_7, being expressible as formula_16 for some rationals formula_17, so formula_18 or, equivalently, α is a root of formula_19; that is, an algebraic number with a minimal polynomial of degree not larger than formula_20.
It can similarly be proven that for any finite set of algebraic numbers formula_21, formula_22... formula_23, the field extension formula_24 has a finite degree.
Field.
The sum, difference, product, and quotient (if the denominator is nonzero) of two algebraic numbers is again algebraic:
For any two algebraic numbers α, β, this follows directly from the fact that the simple extension formula_25, for formula_26 being either formula_27, formula_28, formula_29 or (for formula_30) formula_31, is a linear subspace of the finite-degree field extension formula_32, and therefore has a finite degree itself, from which it follows (as shown above) that formula_26 is algebraic.
An alternative way of showing this is constructively, by using the resultant.
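As a concrete sketch of the resultant approach (illustrative only, using the Python library sympy; the particular values √2 and √3 are arbitrary choices, not taken from the text), one can compute an integer polynomial that has √2 + √3 as a root:

from sympy import symbols, resultant, expand, sqrt

x, y = symbols('x y')

# sqrt(2) is a root of f(y) = y**2 - 2, and sqrt(3) is a root of z**2 - 3.
f = y**2 - 2
g = (x - y)**2 - 3   # substitute z = x - y, so x stands for (root of f) + (root of g)

# Eliminating y with the resultant yields a polynomial in x that vanishes whenever
# x = alpha + beta with alpha**2 = 2 and beta**2 = 3.
p = expand(resultant(f, g, y))
print(p)                                      # x**4 - 10*x**2 + 1
print(expand(p.subs(x, sqrt(2) + sqrt(3))))   # 0, so sqrt(2) + sqrt(3) is algebraic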
Algebraic numbers thus form a field formula_33 (sometimes denoted by formula_34, but that usually denotes the adele ring).
Algebraic closure.
Every root of a polynomial equation whose coefficients are "algebraic numbers" is again algebraic. That can be rephrased by saying that the field of algebraic numbers is algebraically closed. In fact, it is the smallest algebraically-closed field containing the rationals and so it is called the algebraic closure of the rationals.
That the field of algebraic numbers is algebraically closed can be proven as follows: Let β be a root of a polynomial formula_35 with coefficients that are algebraic numbers formula_36, formula_21, formula_22... formula_23. The field extension formula_37 then has a finite degree with respect to formula_38. The simple extension formula_39 then has a finite degree with respect to formula_40 (since all powers of β can be expressed by powers of up to formula_41). Therefore, formula_42 also has a finite degree with respect to formula_38. Since formula_43 is a linear subspace of formula_39, it must also have a finite degree with respect to formula_38, so β must be an algebraic number.
Related fields.
Numbers defined by radicals.
Any number that can be obtained from the integers using a finite number of additions, subtractions, multiplications, divisions, and taking (possibly complex) nth roots where n is a positive integer are algebraic. The converse, however, is not true: there are algebraic numbers that cannot be obtained in this manner. These numbers are roots of polynomials of degree 5 or higher, a result of Galois theory (see Quintic equations and the Abel–Ruffini theorem). For example, the equation:
formula_44
has a unique real root that cannot be expressed in terms of only radicals and arithmetic operations.
Closed-form number.
Algebraic numbers are all numbers that can be defined explicitly or implicitly in terms of polynomials, starting from the rational numbers. One may generalize this to "closed-form numbers", which may be defined in various ways. Most broadly, all numbers that can be defined explicitly or implicitly in terms of polynomials, exponentials, and logarithms are called "elementary numbers", and these include the algebraic numbers, plus some transcendental numbers. Most narrowly, one may consider numbers "explicitly" defined in terms of polynomials, exponentials, and logarithms – this does not include all algebraic numbers, but does include some simple transcendental numbers such as e or ln 2.
Algebraic integers.
An "algebraic integer" is an algebraic number that is a root of a polynomial with integer coefficients with leading coefficient 1 (a monic polynomial). Examples of algebraic integers are formula_45 formula_46 and formula_47 Therefore, the algebraic integers constitute a proper superset of the integers, as the latter are the roots of monic polynomials "x" − "k" for all formula_48. In this sense, algebraic integers are to algebraic numbers what integers are to rational numbers.
The sum, difference and product of algebraic integers are again algebraic integers, which means that the algebraic integers form a ring. The name "algebraic integer" comes from the fact that the only rational numbers that are algebraic integers are the integers, and because the algebraic integers in any number field are in many ways analogous to the integers. If "K" is a number field, its ring of integers is the subring of algebraic integers in "K", and is frequently denoted as "OK". These are the prototypical examples of Dedekind domains.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(1 + \\sqrt{5})/2"
},
{
"math_id": 1,
"text": "1 + i"
},
{
"math_id": 2,
"text": "3+i \\sqrt{2}"
},
{
"math_id": 3,
"text": "\\sqrt{2}"
},
{
"math_id": 4,
"text": "\\frac{ \\sqrt[3]{3} }{ 2 }"
},
{
"math_id": 5,
"text": "\\Q(\\alpha) \\equiv \\{\\sum_{i=-{n_1}}^{n_2} \\alpha^i q_i | q_i\\in \\Q, n_1,n_2\\in \\N\\}"
},
{
"math_id": 6,
"text": "\\{a_i | 1\\le i\\le k\\}"
},
{
"math_id": 7,
"text": "\\Q(\\alpha)"
},
{
"math_id": 8,
"text": "\\Q(\\alpha) = \\sum_{i=1}^k a_i \\Q"
},
{
"math_id": 9,
"text": "\\sum_{i=1}^k a_i q_i"
},
{
"math_id": 10,
"text": "\\{q_i | 1\\le i\\le k\\}"
},
{
"math_id": 11,
"text": "\\{a_i\\}"
},
{
"math_id": 12,
"text": "a_i-s"
},
{
"math_id": 13,
"text": "n"
},
{
"math_id": 14,
"text": "\\Q(\\alpha) = \\{\\sum_{i=-n}^n \\alpha^{i} q_i | q_i\\in \\Q\\}"
},
{
"math_id": 15,
"text": "\\alpha^{n+1}"
},
{
"math_id": 16,
"text": "\\sum_{i=-n}^n \\alpha^i q_i"
},
{
"math_id": 17,
"text": "\\{q_i\\}"
},
{
"math_id": 18,
"text": "\\alpha^{2n+1} = \\sum_{i=0}^{2n} \\alpha^i q_{i-n}"
},
{
"math_id": 19,
"text": "x^{2n+1}-\\sum_{i=0}^{2n} x^i q_{i-n}"
},
{
"math_id": 20,
"text": "2n+1"
},
{
"math_id": 21,
"text": "\\alpha_1"
},
{
"math_id": 22,
"text": "\\alpha_2"
},
{
"math_id": 23,
"text": "\\alpha_n"
},
{
"math_id": 24,
"text": "\\Q(\\alpha_1, \\alpha_2, ... \\alpha_n)"
},
{
"math_id": 25,
"text": "\\Q(\\gamma)"
},
{
"math_id": 26,
"text": "\\gamma"
},
{
"math_id": 27,
"text": "\\alpha+\\beta"
},
{
"math_id": 28,
"text": "\\alpha-\\beta"
},
{
"math_id": 29,
"text": "\\alpha\\beta"
},
{
"math_id": 30,
"text": "\\beta\\ne 0"
},
{
"math_id": 31,
"text": "\\alpha/\\beta"
},
{
"math_id": 32,
"text": "\\Q(\\alpha,\\beta)"
},
{
"math_id": 33,
"text": "\\overline{\\mathbb{Q}}"
},
{
"math_id": 34,
"text": "\\mathbb A"
},
{
"math_id": 35,
"text": " \\alpha_0 + \\alpha_1 x + \\alpha_2 x^2 ... +\\alpha_n x^n"
},
{
"math_id": 36,
"text": "\\alpha_0"
},
{
"math_id": 37,
"text": "\\Q^\\prime \\equiv \\Q(\\alpha_1, \\alpha_2, ... \\alpha_n)"
},
{
"math_id": 38,
"text": "\\Q"
},
{
"math_id": 39,
"text": "\\Q^\\prime(\\beta)"
},
{
"math_id": 40,
"text": "\\Q^\\prime"
},
{
"math_id": 41,
"text": "\\beta^{n-1}"
},
{
"math_id": 42,
"text": "\\Q^\\prime(\\beta) = \\Q(\\beta, \\alpha_1, \\alpha_2, ... \\alpha_n)"
},
{
"math_id": 43,
"text": "\\Q(\\beta)"
},
{
"math_id": 44,
"text": "x^5-x-1=0"
},
{
"math_id": 45,
"text": "5 + 13 \\sqrt{2},"
},
{
"math_id": 46,
"text": "2 - 6i,"
},
{
"math_id": 47,
"text": "\\frac{1}{2}(1+i\\sqrt{3})."
},
{
"math_id": 48,
"text": "k \\in \\mathbb{Z}"
}
] | https://en.wikipedia.org/wiki?curid=1158 |
1158068 | Mayer f-function | The Mayer f-function is an auxiliary function that often appears in the series expansion of thermodynamic quantities related to classical many-particle systems. It is named after chemist and physicist Joseph Edward Mayer.
Definition.
Consider a system of classical particles interacting through a pair-wise potential
formula_0
where the bold labels formula_1 and formula_2 denote the continuous degrees of freedom associated with the particles, e.g.,
formula_3
for spherically symmetric particles and
formula_4
for rigid non-spherical particles where formula_5 denotes position and formula_6 the orientation parametrized e.g. by Euler angles. The Mayer f-function is then defined as
formula_7
where formula_8 is the inverse absolute temperature in units of energy−1.
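As a small numerical sketch (not part of the original text: it assumes a hard-sphere pair potential of diameter sigma, and it uses the standard relation B2 = -2π ∫ f(r) r² dr for the second virial coefficient, which is quoted here as an assumption rather than derived):

import numpy as np

kT = 1.0        # thermal energy k_B*T in arbitrary units, so beta = 1/kT
sigma = 1.0     # hard-sphere diameter (illustrative value)

def pair_potential(r):
    # hard-sphere interaction: infinite for overlapping spheres, zero otherwise
    return np.where(r < sigma, np.inf, 0.0)

def mayer_f(r):
    # for a spherically symmetric potential, f depends only on the separation r:
    # f(r) = exp(-beta*V(r)) - 1
    return np.exp(-pair_potential(r) / kT) - 1.0

# B2 = -2*pi * integral of f(r)*r**2 dr; for hard spheres the exact value is
# 2*pi*sigma**3/3, roughly 2.094, which the crude Riemann sum below reproduces.
r = np.linspace(1e-6, 5.0 * sigma, 500001)
dr = r[1] - r[0]
B2 = -2.0 * np.pi * np.sum(mayer_f(r) * r**2) * dr
print(B2, 2.0 * np.pi * sigma**3 / 3.0)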
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V(\\mathbf{i},\\mathbf{j})"
},
{
"math_id": 1,
"text": "\\mathbf{i}"
},
{
"math_id": 2,
"text": "\\mathbf{j}"
},
{
"math_id": 3,
"text": "\\mathbf{i}=\\mathbf{r}_i"
},
{
"math_id": 4,
"text": "\\mathbf{i}=(\\mathbf{r}_i,\\Omega_i)"
},
{
"math_id": 5,
"text": "\\mathbf{r}"
},
{
"math_id": 6,
"text": "\\Omega"
},
{
"math_id": 7,
"text": "f(\\mathbf{i},\\mathbf{j})=e^{-\\beta V(\\mathbf{i},\\mathbf{j})}-1"
},
{
"math_id": 8,
"text": "\\beta=(k_{B}T)^{-1}"
}
] | https://en.wikipedia.org/wiki?curid=1158068 |
1158235 | Debye length | Measure of electrostatic effect and how far it persists
In plasmas and electrolytes, the Debye length formula_0 (Debye radius or Debye–Hückel screening length) is a measure of a charge carrier's net electrostatic effect in a solution and how far its electrostatic effect persists. With each Debye length the charges are increasingly electrically screened and the electric potential decreases in magnitude by a factor of 1/e. A Debye sphere is a volume whose radius is the Debye length. The Debye length is an important parameter in plasma physics, electrolytes, and colloids (DLVO theory). The corresponding Debye screening wave vector formula_1 for particles of density formula_2 and charge formula_3 at a temperature formula_4 is given by formula_5 in Gaussian units. Expressions in MKS units will be given below. The analogous quantities at very low temperatures (formula_6) are known as the Thomas–Fermi length and the Thomas–Fermi wave vector. They are of interest in describing the behaviour of electrons in metals at room temperature.
The Debye length is named after the Dutch-American physicist and chemist Peter Debye (1884–1966), a Nobel laureate in Chemistry.
Physical origin.
The Debye length arises naturally in the thermodynamic description of large systems of mobile charges. In a system of formula_7 different species of charges, the formula_8-th species carries charge formula_9 and has concentration formula_10 at position formula_11. According to the so-called "primitive model", these charges are distributed in a continuous medium that is characterized only by its relative static permittivity, formula_12.
This distribution of charges within this medium gives rise to an electric potential formula_13 that satisfies Poisson's equation:
formula_14
where formula_15, formula_16 is the electric constant, and formula_17 is a charge density external (logically, not spatially) to the medium.
The mobile charges not only contribute in establishing formula_13 but also move in response to the associated Coulomb force, formula_18.
If we further assume the system to be in thermodynamic equilibrium with a heat bath at absolute temperature formula_4, then the concentrations of discrete charges, formula_10, may be considered to be thermodynamic (ensemble) averages and the associated electric potential to be a thermodynamic mean field.
With these assumptions, the concentration of the formula_8-th charge species is described by the Boltzmann distribution,
formula_19
where formula_20 is the Boltzmann constant and where formula_21 is the mean concentration of charges of species formula_8.
Identifying the instantaneous concentrations and potential in the Poisson equation with their mean-field counterparts in the Boltzmann distribution yields the Poisson–Boltzmann equation:
formula_22
Solutions to this nonlinear equation are known for some simple systems. Solutions for more general systems may be obtained in the high-temperature (weak coupling) limit, formula_23, by Taylor expanding the exponential:
formula_24
This approximation yields the linearized Poisson–Boltzmann equation
formula_25
which also is known as the Debye–Hückel equation:
The second term on the right-hand side vanishes for systems that are electrically neutral. The term in parentheses divided by formula_26 has the units of an inverse length squared and by dimensional analysis leads to the definition of the characteristic length scale
formula_27
that commonly is referred to as the Debye–Hückel length. As the only characteristic length scale in the Debye–Hückel equation, formula_28 sets the scale for variations in the potential and in the concentrations of charged species. All charged species contribute to the Debye–Hückel length in the same way, regardless of the sign of their charges. For an electrically neutral system, the Poisson equation becomes
formula_29
To illustrate Debye screening, the potential produced by an external point charge formula_30 is
formula_31
The bare Coulomb potential is exponentially screened by the medium, over a distance of the Debye length: this is called Debye screening or shielding (Screening effect).
The Debye–Hückel length may be expressed in terms of the Bjerrum length formula_32 as
formula_33
where formula_34 is the integer charge number that relates the charge on the formula_8-th ionic species to the elementary charge formula_35.
In a plasma.
For a weakly collisional plasma, Debye shielding can be introduced in a very intuitive way by taking into account the granular character of such a plasma. Let us imagine a sphere about one of its electrons, and compare the number of electrons crossing this sphere with and without Coulomb repulsion. With repulsion, this number is smaller. Therefore, according to Gauss theorem, the apparent charge of the first electron is smaller than in the absence of repulsion. The larger the sphere radius, the larger is the number of deflected electrons, and the smaller the apparent charge: this is Debye shielding. Since the global deflection of particles includes the contributions of many other ones, the density of the electrons does not change, at variance with the shielding at work next to a Langmuir probe (Debye sheath). Ions bring a similar contribution to shielding, because of the attractive Coulombian deflection of charges with opposite signs.
This intuitive picture leads to an effective calculation of Debye shielding (see section II.A.2 of ). The assumption of a Boltzmann distribution is not necessary in this calculation: it works for whatever particle distribution function. The calculation also avoids approximating weakly collisional plasmas as continuous media. An N-body calculation reveals that the bare Coulomb acceleration of a particle by another one is modified by a contribution mediated by all other particles, a signature of Debye shielding (see section 8 of ). When starting from random particle positions, the typical time-scale for shielding to set in is the time for a thermal particle to cross a Debye length, i.e. the inverse of the plasma frequency. Therefore in a weakly collisional plasma, collisions play an essential role by bringing a cooperative self-organization process: Debye shielding. This shielding is important to get a finite diffusion coefficient in the calculation of Coulomb scattering (Coulomb collision).
In a non-isothermic plasma, the temperatures for electrons and heavy species may differ while the background medium may be treated as the vacuum (formula_36), and the Debye length is
formula_37
where λD is the Debye length, ε0 is the permittivity of free space, kB is the Boltzmann constant, qe is the charge of an electron, Te and Tj are the temperatures of the electrons and of ion species j, respectively, ne is the density of electrons, and nj is the density of ion species j carrying positive charge number zj.
Even in a quasineutral cold plasma, where the ion contribution might appear larger because of the lower ion temperature, the ion term is actually often dropped, giving
formula_38
although this is only valid when the mobility of ions is negligible compared to the process's timescale.
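For a rough numerical feel for this expression (a sketch only; the plasma parameters below are illustrative choices, not values taken from the article):

import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
KB   = 1.380649e-23       # Boltzmann constant, J/K
QE   = 1.602176634e-19    # elementary charge, C

def electron_debye_length(n_e, T_e):
    # lambda_D = sqrt(eps0 * kB * Te / (ne * qe**2)), with n_e in m^-3 and T_e in kelvin
    return math.sqrt(EPS0 * KB * T_e / (n_e * QE**2))

# Assumed, laboratory-like parameters: n_e = 1e18 m^-3, T_e = 1e4 K
print(electron_debye_length(1e18, 1e4))   # about 7e-6 m, i.e. a few micrometres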
Typical values.
In space plasmas where the electron density is relatively low, the Debye length may reach macroscopic values, such as in the magnetosphere, solar wind, interstellar medium and intergalactic medium.
In an electrolyte solution.
In an electrolyte or a colloidal suspension, the Debye length for a monovalent electrolyte is usually denoted with symbol "κ"−1
formula_39
where I is the ionic strength of the electrolyte in units of number per cubic metre, ε0 is the permittivity of free space, εr is the dielectric constant of the solvent, kB is the Boltzmann constant, T is the absolute temperature in kelvins, and e is the elementary charge.
or, for a symmetric monovalent electrolyte,
formula_40
where R is the gas constant, F is the Faraday constant, and C0 is the molar concentration of the electrolyte in mol/L.
Alternatively,
formula_41
where formula_32 is the Bjerrum length of the medium in nm, and the factor formula_42 derives from transforming unit volume from cubic dm to cubic nm.
For deionized water at room temperature, at pH=7, the Debye length κ−1 is approximately 1 μm.
At room temperature, one can consider in water the relation:
formula_43
where the Debye length κ−1 is expressed in nanometres (nm) and the ionic strength I is expressed in molar units (M, or mol/L).
There is a method of estimating an approximate value of the Debye length in liquids using conductivity, which is described in ISO Standard, and the book.
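A numerical sketch of the room-temperature rule of thumb above (the concentrations are illustrative; for a monovalent salt such as NaCl the ionic strength equals the molar concentration):

def debye_length_nm(ionic_strength_molar):
    # kappa^-1 in nm ≈ 0.304 / sqrt(I in mol/L), for water at room temperature
    return 0.304 / ionic_strength_molar**0.5

for c in (1.0, 0.1, 1e-3, 1e-7):     # mol/L; 1e-7 M mimics deionized water at pH 7
    print(c, debye_length_nm(c))      # about 0.30 nm, 0.96 nm, 9.6 nm and 960 nm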
In semiconductors.
The Debye length has become increasingly significant in the modeling of solid state devices as improvements in lithographic technologies have enabled smaller geometries.
The Debye length of semiconductors is given:
formula_44
where ε is the dielectric constant of the semiconductor, kB is the Boltzmann constant, T is the absolute temperature in kelvins, q is the elementary charge, and Ndop is the net density of dopants (either donors or acceptors).
When doping profiles exceed the Debye length, majority carriers no longer behave according to the distribution of the dopants. Instead, a measure of the profile of the doping gradients provides an "effective" profile that better matches the profile of the majority carrier density.
In the context of solids, Thomas–Fermi screening length may be required instead of Debye length.
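A numerical sketch of the semiconductor expression above (the silicon-like parameters, a relative permittivity of about 11.7 and a dopant density of 10^16 cm^-3, are illustrative assumptions rather than values given in the text):

import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
KB   = 1.380649e-23       # Boltzmann constant, J/K
Q    = 1.602176634e-19    # elementary charge, C

def semiconductor_debye_length(eps_r, T, N_dop):
    # L_D = sqrt(eps * kB * T / (q**2 * N_dop)), with N_dop in m^-3 and T in kelvin
    return math.sqrt(eps_r * EPS0 * KB * T / (Q**2 * N_dop))

# 1e16 cm^-3 corresponds to 1e22 m^-3
print(semiconductor_debye_length(11.7, 300.0, 1e22))   # about 4e-8 m, i.e. roughly 40 nm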
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\lambda_{\\rm D}"
},
{
"math_id": 1,
"text": "k_{\\rm D}=1/\\lambda_{\\rm D}"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "q"
},
{
"math_id": 4,
"text": "T"
},
{
"math_id": 5,
"text": " k_{\\rm D}^2=4\\pi n q^2/(k_{\\rm B}T) "
},
{
"math_id": 6,
"text": "T \\to 0"
},
{
"math_id": 7,
"text": "N"
},
{
"math_id": 8,
"text": "j"
},
{
"math_id": 9,
"text": "q_j"
},
{
"math_id": 10,
"text": "n_j(\\mathbf{r})"
},
{
"math_id": 11,
"text": "\\mathbf{r}"
},
{
"math_id": 12,
"text": "\\varepsilon_r"
},
{
"math_id": 13,
"text": "\\Phi(\\mathbf{r})"
},
{
"math_id": 14,
"text": " \\varepsilon \\nabla^2 \\Phi(\\mathbf{r}) = -\\, \\sum_{j = 1}^N q_j \\, n_j(\\mathbf{r}) - \\rho_{\\rm ext}(\\mathbf{r}),"
},
{
"math_id": 15,
"text": "\\varepsilon \\equiv \\varepsilon_r \\varepsilon_0"
},
{
"math_id": 16,
"text": "\\varepsilon_0"
},
{
"math_id": 17,
"text": "\\rho_{\\rm ext}"
},
{
"math_id": 18,
"text": "- q_j \\, \\nabla \\Phi(\\mathbf{r})"
},
{
"math_id": 19,
"text": " n_j(\\mathbf{r}) = n_j^0 \\, \\exp\\left( - \\frac{q_j \\, \\Phi(\\mathbf{r})}{k_{\\rm B} T} \\right),"
},
{
"math_id": 20,
"text": "k_{\\rm B}"
},
{
"math_id": 21,
"text": "n_j^0"
},
{
"math_id": 22,
"text": " \\varepsilon \\nabla^2 \\Phi(\\mathbf{r}) = -\\, \\sum_{j = 1}^N q_j n_j^0 \\, \\exp\\left(- \\frac{q_j \\, \\Phi(\\mathbf{r})}{k_{\\rm B} T} \\right) - \\rho_{\\rm ext}(\\mathbf{r}) ."
},
{
"math_id": 23,
"text": "q_j \\, \\Phi(\\mathbf{r}) \\ll k_{\\rm B} T"
},
{
"math_id": 24,
"text": " \\exp\\left(- \\frac{q_j \\, \\Phi(\\mathbf{r})}{k_{\\rm B} T} \\right) \\approx\n1 - \\frac{q_j \\, \\Phi(\\mathbf{r})}{k_{\\rm B} T}."
},
{
"math_id": 25,
"text": " \\varepsilon \\nabla^2 \\Phi(\\mathbf{r}) =\n\\left(\\sum_{j = 1}^N \\frac{n_j^0 \\, q_j^2}{ k_{\\rm B} T} \\right)\\, \\Phi(\\mathbf{r}) -\\, \\sum_{j = 1}^N n_j^0 q_j - \\rho_{\\rm ext}(\\mathbf{r})\n"
},
{
"math_id": 26,
"text": "\\varepsilon"
},
{
"math_id": 27,
"text": " \\lambda_{\\rm D} =\n\\left(\\frac{\\varepsilon \\, k_{\\rm B} T}{\\sum_{j = 1}^N n_j^0 \\, q_j^2}\\right)^{1/2}"
},
{
"math_id": 28,
"text": "\\lambda_D"
},
{
"math_id": 29,
"text": " \\nabla^2 \\Phi(\\mathbf{r}) =\n\\lambda_{\\rm D}^{-2} \\Phi(\\mathbf{r}) - \\frac{\\rho_{\\rm ext}(\\mathbf{r})}{\\varepsilon}\n"
},
{
"math_id": 30,
"text": "\\rho_{\\rm ext} = Q\\delta(\\mathbf{r})"
},
{
"math_id": 31,
"text": " \\Phi(\\mathbf{r}) = \\frac{Q}{4\\pi\\varepsilon r} e^{-r/\\lambda_{\\rm D}}"
},
{
"math_id": 32,
"text": "\\lambda_{\\rm B}"
},
{
"math_id": 33,
"text": " \\lambda_{\\rm D} =\n\\left(4 \\pi \\, \\lambda_{\\rm B} \\, \\sum_{j = 1}^N n_j^0 \\, z_j^2\\right)^{-1/2},"
},
{
"math_id": 34,
"text": "z_j = q_j/e"
},
{
"math_id": 35,
"text": "e"
},
{
"math_id": 36,
"text": "\\varepsilon_r = 1"
},
{
"math_id": 37,
"text": " \\lambda_{\\rm D} = \\sqrt{\\frac{\\varepsilon_0 k_{\\rm B}/q_e^2}{n_e/T_e+\\sum_j z_j^2n_j/T_j}}"
},
{
"math_id": 38,
"text": " \\lambda_{\\rm D} = \\sqrt{\\frac{\\varepsilon_0 k_{\\rm B} T_e}{n_e q_e^2}}"
},
{
"math_id": 39,
"text": " \\kappa^{-1} = \\sqrt{\\frac{\\varepsilon_{\\rm r} \\varepsilon_0 k_{\\rm B} T}{2 e^2 I}}"
},
{
"math_id": 40,
"text": " \\kappa^{-1} = \\sqrt{\\frac{\\varepsilon_{\\rm r} \\varepsilon_0 R T}{2\\times10^3 F^2 C_0}}"
},
{
"math_id": 41,
"text": " \\kappa^{-1} = \\frac{1}{\\sqrt{8\\pi \\lambda_{\\rm B} N_{\\rm A} \\times 10^{-24} I}} "
},
{
"math_id": 42,
"text": " 10^{-24} "
},
{
"math_id": 43,
"text": " \\kappa^{-1}(\\mathrm{nm}) = \\frac{0.304}{\\sqrt{I(\\mathrm{M})}}"
},
{
"math_id": 44,
"text": " L_{\\rm D} = \\sqrt{\\frac{\\varepsilon k_{\\rm B} T}{q^2 N_{\\rm dop}}}"
}
] | https://en.wikipedia.org/wiki?curid=1158235 |
1158573 | Mahāvīra (mathematician) | 9th-century Indian mathematician
Mahāvīra (or Mahaviracharya, "Mahavira the Teacher") was a 9th-century Indian Jain mathematician possibly born in Mysore, in India. He authored "Gaṇita-sāra-saṅgraha" ("Ganita Sara Sangraha"), or the Compendium on the gist of Mathematics, in 850 CE. He was patronised by the Rashtrakuta emperor Amoghavarsha. He separated astrology from mathematics, and the "Gaṇita-sāra-saṅgraha" is the earliest Indian text entirely devoted to mathematics. He expounded on the same subjects on which Aryabhata and Brahmagupta contended, but he expressed them more clearly. His work is a highly syncopated approach to algebra, and the emphasis in much of his text is on developing the techniques necessary to solve algebraic problems. He is highly respected among Indian mathematicians because of his establishment of terminology for concepts such as the equilateral and isosceles triangle, the rhombus, the circle and the semicircle. Mahāvīra's eminence spread throughout southern India and his books proved inspirational to other mathematicians there. The book was translated into the Telugu language by Pavuluri Mallana as "Saara Sangraha Ganitamu".
He discovered algebraic identities like "a"3 = "a" ("a" + "b") ("a" − "b") + "b"2 ("a" − "b") + "b"3. He also found the formula for "n"C"r" as ["n" ("n" − 1) ("n" − 2) ... ("n" − "r" + 1)] / ["r" ("r" − 1) ("r" − 2) ... 2 · 1]. He devised a formula which approximated the area and perimeter of ellipses and found methods to calculate the square of a number and cube roots of a number. He asserted that the square root of a negative number does not exist. The arithmetic operations used in his works, such as the "Gaṇita-sāra-saṅgraha" ("Ganita Sara Sangraha"), employ the decimal place-value system and include the use of zero. However, he erroneously states that a number divided by zero remains unchanged.
Rules for decomposing fractions.
Mahāvīra's "Gaṇita-sāra-saṅgraha" gave systematic rules for expressing a fraction as the sum of unit fractions. This follows the use of unit fractions in Indian mathematics in the Vedic period, and the Śulba Sūtras' giving an approximation of √2 equivalent to formula_0.
In the "Gaṇita-sāra-saṅgraha" (GSS), the second section of the chapter on arithmetic is named "kalā-savarṇa-vyavahāra" (lit. "the operation of the reduction of fractions"). In this, the "bhāgajāti" section (verses 55–98) gives rules for the following:
<templatestyles src="Template:Blockquote/styles.css" />rūpāṃśakarāśīnāṃ rūpādyās triguṇitā harāḥ kramaśaḥ /
dvidvitryaṃśābhyastāv ādimacaramau phale rūpe //
<templatestyles src="Template:Blockquote/styles.css" />When the result is one, the denominators of the quantities having one as numerators are [the numbers] beginning with one and multiplied by three, in order. The first and the last are multiplied by two and two-thirds [respectively].
formula_1
formula_2
formula_5
Choose an integer "i" such that formula_7 is an integer "r", then write
formula_8
and repeat the process for the second term, recursively. (Note that if "i" is always chosen to be the "smallest" such integer, this is identical to the greedy algorithm for Egyptian fractions; a short computational sketch of this rule appears at the end of this section.)
formula_9 where formula_10 is to be chosen such that formula_11 is an integer (for which formula_10 must be a multiple of formula_12).
formula_13
formula_16 where formula_17 is to be chosen such that formula_10 divides formula_18
Some further rules were given in the "Gaṇita-kaumudi" of Nārāyaṇa in the 14th century.
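The rule quoted above for an arbitrary fraction, with the smallest admissible "i" chosen at every step, amounts to the greedy algorithm for Egyptian fractions and is easy to state in modern terms (a sketch in Python; the sample fraction 4/13 is an arbitrary choice, not one of Mahāvīra's own examples):

from fractions import Fraction

def unit_fraction_decomposition(p, q):
    # Repeatedly pick the smallest i >= 0 with (q + i) divisible by p, giving
    # p/q = 1/r + i/(r*q) with r = (q + i)/p, and recurse on the remainder.
    target = Fraction(p, q)
    terms = []
    while target > 0:
        if target.numerator == 1:
            terms.append(target)
            break
        pp, qq = target.numerator, target.denominator
        i = next(i for i in range(pp) if (qq + i) % pp == 0)
        r = (qq + i) // pp
        terms.append(Fraction(1, r))
        target = Fraction(i, r * qq)
    return terms

parts = unit_fraction_decomposition(4, 13)
print(parts)                          # [Fraction(1, 4), Fraction(1, 18), Fraction(1, 468)]
print(sum(parts) == Fraction(4, 13))  # True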
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "1 + \\tfrac13 + \\tfrac1{3\\cdot4} - \\tfrac1{3\\cdot4\\cdot34}"
},
{
"math_id": 1,
"text": " 1 = \\frac1{1 \\cdot 2} + \\frac1{3} + \\frac1{3^2} + \\dots + \\frac1{3^{n-2}} + \\frac1{\\frac23 \\cdot 3^{n-1}} "
},
{
"math_id": 2,
"text": "1 = \\frac1{2\\cdot 3 \\cdot 1/2} + \\frac1{3 \\cdot 4 \\cdot 1/2} + \\dots + \\frac1{(2n-1) \\cdot 2n \\cdot 1/2} + \\frac1{2n \\cdot 1/2} "
},
{
"math_id": 3,
"text": "1/q"
},
{
"math_id": 4,
"text": "a_1, a_2, \\dots, a_n"
},
{
"math_id": 5,
"text": "\\frac1q = \\frac{a_1}{q(q+a_1)} + \\frac{a_2}{(q+a_1)(q+a_1+a_2)} + \\dots + \\frac{a_{n-1}}{(q+a_1+\\dots+a_{n-2})(q+a_1+\\dots+a_{n-1})} + \\frac{a_n}{a_n(q+a_1+\\dots+a_{n-1})}"
},
{
"math_id": 6,
"text": "p/q"
},
{
"math_id": 7,
"text": "\\tfrac{q+i}{p}"
},
{
"math_id": 8,
"text": " \\frac{p}{q} = \\frac{1}{r} + \\frac{i}{r \\cdot q} "
},
{
"math_id": 9,
"text": "\\frac1{n} = \\frac1{p\\cdot n} + \\frac1{\\frac{p\\cdot n}{n-1}}"
},
{
"math_id": 10,
"text": "p"
},
{
"math_id": 11,
"text": "\\frac{p\\cdot n}{n-1}"
},
{
"math_id": 12,
"text": "n-1"
},
{
"math_id": 13,
"text": "\\frac1{a\\cdot b} = \\frac1{a(a+b)} + \\frac1{b(a+b)}"
},
{
"math_id": 14,
"text": "a"
},
{
"math_id": 15,
"text": "b"
},
{
"math_id": 16,
"text": "\\frac{p}{q} = \\frac{a}{\\frac{ai+b}{p}\\cdot\\frac{q}{i}} + \\frac{b}{\\frac{ai+b}{p} \\cdot \\frac{q}{i} \\cdot{i}}"
},
{
"math_id": 17,
"text": "i"
},
{
"math_id": 18,
"text": "ai + b"
}
] | https://en.wikipedia.org/wiki?curid=1158573 |
1158626 | Bhāskara I | Indian mathematician and astronomer (600-680)
Bhāskara (c. 600 – c. 680) (commonly called Bhāskara I to avoid confusion with the 12th-century mathematician Bhāskara II) was a 7th-century Indian mathematician and astronomer who was the first to write numbers in the Hindu–Arabic decimal system with a circle for the zero, and who gave a unique and remarkable rational approximation of the sine function in his commentary on Aryabhata's work. This commentary, "Āryabhaṭīyabhāṣya", written in 629, is among the oldest known prose works in Sanskrit on mathematics and astronomy. He also wrote two astronomical works in the line of Aryabhata's school: the "Mahābhāskarīya" ("Great Book of Bhāskara") and the "Laghubhāskarīya" ("Small Book of Bhāskara").
On 7 June 1979, the Indian Space Research Organisation launched the Bhāskara I satellite, named in honour of the mathematician.
Biography.
Little is known about Bhāskara's life, except for what can be deduced from his writings. He was born in India in the 7th century, and was probably an astronomer. Bhāskara I received his astronomical education from his father.
There are references to places in India in Bhāskara's writings, such as Vallabhi (the capital of the Maitraka dynasty in the 7th century) and Sivarajapura, both of which are in the Saurastra region of the present-day state of Gujarat in India. Also mentioned are Bharuch in southern Gujarat, and Thanesar in the eastern Punjab, which was ruled by Harsha. Therefore, a reasonable guess would be that Bhāskara was born in Saurastra and later moved to Aśmaka.
Bhāskara I is considered the most important scholar of Aryabhata's astronomical school. He and Brahmagupta are two of the most renowned Indian mathematicians; both made considerable contributions to the study of fractions.
Representation of numbers.
The most important mathematical contribution of Bhāskara I concerns the representation of numbers in a positional numeral system. The first positional representations had been known to Indian astronomers approximately 500 years before Bhāskara's work. However, these numbers were written not in figures, but in words or allegories and were organized in verses. For instance, the number 1 was given as "moon", since it exists only once; the number 2 was represented by "wings", "twins", or "eyes" since they always occur in pairs; the number 5 was given by the (5) "senses". Similar to our current decimal system, these words were aligned such that each number assigns the factor of the power of ten corresponding to its position, only in reverse order: the higher powers were to the right of the lower ones.
Bhāskara's numeral system was truly positional, in contrast to word representations, where the same word could represent multiple values (such as 40 or 400). He often explained a number given in his numeral system by stating "ankair api" ("in figures this reads"), and then repeating it written with the first nine Brahmi numerals, using a small circle for the zero. Contrary to the word system, however, his numerals were written in descending values from left to right, exactly as we do it today. Therefore, since at least 629, the decimal system was definitely known to Indian scholars. Presumably, Bhāskara did not invent it, but he was the first to openly use the Brahmi numerals in a scientific contribution in Sanskrit.
Further contributions.
Mathematics.
Bhāskara I wrote three astronomical contributions. In 629, he annotated the "Āryabhaṭīya", an astronomical treatise by Aryabhata written in verses. Bhāskara's comments referred exactly to the 33 verses dealing with mathematics, in which he considered variable equations and trigonometric formulae. In general, he emphasized proving mathematical rules instead of simply relying on tradition or expediency.
His work "Mahābhāskarīya" is divided into eight chapters about mathematical astronomy. In chapter 7, he gives a remarkable approximation formula for sin x:
formula_0
which he assigns to Aryabhata. It reveals a relative error of less than 1.9% (the greatest deviation formula_1 at formula_2). Additionally, he gives relations between sine and cosine, as well as relations between the sine of an angle less than 90° and the sines of angles 90°–180°, 180°–270°, and greater than 270°.
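A brief numerical check of this error bound (a sketch that simply scans the interval on an arbitrary grid):

import math

def bhaskara_sin(x):
    # 16x(pi - x) / (5*pi**2 - 4x(pi - x)), intended for 0 <= x <= pi
    return 16.0 * x * (math.pi - x) / (5.0 * math.pi**2 - 4.0 * x * (math.pi - x))

worst = 0.0
for k in range(1, 100000):
    x = k * math.pi / 100000.0
    worst = max(worst, abs(bhaskara_sin(x) - math.sin(x)) / math.sin(x))

print(worst)                         # about 0.0186, i.e. below 1.9 %
print(16.0 / (5.0 * math.pi) - 1.0)  # about 0.01859, the limiting value near x = 0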
Moreover, Bhāskara stated theorems about the solutions to equations now known as Pell's equations. For instance, he posed the problem: "Tell me, O mathematician, what is that square which multiplied by 8 becomes – together with unity – a square?" In modern notation, he asked for the solutions of the Pell equation formula_3 (equivalently, formula_4). This equation has the simple solution x = 1, y = 3, or (x, y) = (1, 3), from which further solutions can be constructed, such as (x, y) = (6, 17).
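The construction of further solutions can be verified numerically (a sketch; the recurrence below is the standard composition of Pell solutions with the fundamental one, not a quotation of Bhāskara's own procedure):

def pell_8_solutions(count):
    # Solutions of y**2 - 8*x**2 = 1, starting from the fundamental pair (1, 3)
    x, y = 1, 3
    out = []
    for _ in range(count):
        out.append((x, y))
        x, y = 3 * x + y, 8 * x + 3 * y   # compose with (1, 3)
    return out

for x, y in pell_8_solutions(4):
    print(x, y, y * y - 8 * x * x)   # (1,3), (6,17), (35,99), (204,577); last column is always 1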
Bhāskara clearly believed that "π" was irrational. In support of Aryabhata's approximation of π, he criticized the approximation of π by formula_5, a practice common among Jain mathematicians.
He was the first mathematician to openly discuss quadrilaterals with four unequal, nonparallel sides.
Astronomy.
The "Mahābhāskarīya" consists of eight chapters dealing with mathematical astronomy. The book deals with topics such as the longitudes of the planets, the conjunctions among the planets and stars, the phases of the moon, solar and lunar eclipses, and the rising and setting of the planets.
Parts of "Mahābhāskarīya" were later translated into Arabic.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\sin x \\approx \\frac{16x (\\pi - x)}{5 \\pi^2 - 4x (\\pi - x)}, \\qquad (0 \\leq x \\leq \\pi )"
},
{
"math_id": 1,
"text": "\\frac{16}{5\\pi} - 1 \\approx 1.859\\%"
},
{
"math_id": 2,
"text": "x=0"
},
{
"math_id": 3,
"text": " 8x^2 + 1 = y^2 "
},
{
"math_id": 4,
"text": "y^2 - 8x^2 = 1 "
},
{
"math_id": 5,
"text": "\\sqrt{10}"
}
] | https://en.wikipedia.org/wiki?curid=1158626 |
1158661 | Parameshvara Nambudiri | Indian mathematician and astronomer
Vatasseri Parameshvara Nambudiri (c. 1380–1460) was a major Indian mathematician and astronomer of the Kerala school of astronomy and mathematics founded by Madhava of Sangamagrama. He was also an astrologer. Parameshvara was a proponent of observational astronomy in medieval India and he himself had made a series of eclipse observations to verify the accuracy of the computational methods then in use. Based on his eclipse observations, Parameshvara proposed several corrections to the astronomical parameters which had been in use since the times of Aryabhata. The computational scheme based on the revised set of parameters has come to be known as the "Drgganita" or Drig system. Parameshvara was also a prolific writer on matters relating to astronomy. At least 25 manuscripts have been identified as being authored by Parameshvara.
Biographical details.
Parameshvara was a Hindu of Bhrgugotra following the Ashvalayanasutra of the Rigveda. Parameshvara's family name ("Illam") was Vatasseri and his family resided in the village of Alathiyur (Sanskritised as "Asvatthagrama") in Tirur, Kerala. Alathiyur is situated on the northern bank of the river Nila (river Bharathappuzha) at its mouth in Kerala. He was a grandson of a disciple of Govinda Bhattathiri (1237–1295 CE), a legendary figure in the astrological traditions of Kerala.
Parameshvara studied under teachers Rudra and Narayana, and also under Madhava of Sangamagrama (c. 1350 – c. 1425) the founder of the Kerala school of astronomy and mathematics. Damodara, another prominent member of the Kerala school, was his son and also his pupil. Parameshvara was also a teacher of Nilakantha Somayaji (1444–1544) the author of the celebrated Tantrasamgraha.
Work.
Parameshvara wrote commentaries on many mathematical and astronomical works such as those by Bhāskara I and Aryabhata. He made a series of eclipse observations over a 55-year period, and constantly attempted to compare these with the theoretically computed positions of the planets. He revised planetary parameters based on his observations.
One of Parameshvara's more significant contributions was his mean value type formula for the inverse interpolation of the sine.
He was the first mathematician to give a formula for the radius of the circle circumscribing a cyclic quadrilateral. The expression is sometimes attributed to Lhuilier [1782], 350 years later. With the sides of the cyclic quadrilateral being "a, b, c," and "d", the radius "R" of the circumscribed circle is:
formula_0
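The expression is easy to test numerically (a sketch; the test quadrilaterals, a 3 × 4 rectangle and a unit square, are modern conveniences whose circumradii are known to be half the diagonal):

import math

def circumradius_cyclic_quadrilateral(a, b, c, d):
    # Parameshvara's formula for a cyclic quadrilateral with consecutive sides a, b, c, d
    num = (a * b + c * d) * (a * c + b * d) * (a * d + b * c)
    den = (-a + b + c + d) * (a - b + c + d) * (a + b - c + d) * (a + b + c - d)
    return math.sqrt(num / den)

print(circumradius_cyclic_quadrilateral(3, 4, 3, 4))   # 2.5, half the diagonal of a 3x4 rectangle
print(circumradius_cyclic_quadrilateral(1, 1, 1, 1))   # about 0.7071, i.e. sqrt(2)/2 for a unit square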
Works by Parameshvara.
The following works of Parameshvara are well-known. A complete list of all manuscripts attributed to Parameshvara is available in Pingree.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " R = \\sqrt {\\frac{(ab + cd)(ac + bd)(ad + bc)}{(- a + b + c + d)(a - b + c + d)(a + b - c + d)(a + b + c - d)}}."
}
] | https://en.wikipedia.org/wiki?curid=1158661 |
1158706 | Baudhayana sutras | Group of Vedic Sanskrit texts
The Baudhāyana sūtras (Sanskrit: बौधायन सूत्रस्) are a group of Vedic Sanskrit texts which cover dharma, daily ritual and mathematics, and are among the oldest Dharma-related texts of Hinduism that have survived into the modern age from the 1st millennium BCE. They belong to the "Taittiriya" branch of the Krishna Yajurveda school and are among the earliest texts of the genre.
The Baudhayana sūtras consist of six texts:
The Baudhāyana Śulvasūtra is noted for containing several early mathematical results, including an approximation of the square root of 2 and a statement of the Pythagorean theorem.
Baudhāyana Shrautasūtra.
Baudhayana's Śrauta sūtras related to performing Vedic sacrifices have followers in some Smārta brāhmaṇas (Iyers) and some Iyengars of Tamil Nadu, Yajurvedis or Namboothiris of Kerala, Gurukkal Brahmins (Aadi Saivas) and Kongu Vellalars. The followers of this sūtra follow a different method and do 24 Tila-tarpaṇa, as Lord Krishna had done tarpaṇa on the day before amāvāsyā; they call themselves Baudhāyana Amavasya.
Baudhāyana Dharmasūtra.
The Dharmasūtra of Baudhāyana like that of Apastamba also forms a part of the larger Kalpasutra. Likewise, it is composed of "praśnas" which literally means 'questions' or books. The structure of this Dharmasūtra is not very clear because it came down in an incomplete manner. Moreover, the text has undergone alterations in the form of additions and explanations over a period of time. The "praśnas" consist of the Srautasutra and other ritual treatises, the Sulvasutra which deals with vedic geometry, and the Grhyasutra which deals with domestic rituals.
There are no commentaries on this Dharmasūtra with the exception of Govindasvāmin's "Vivaraṇa". The date of the commentary is uncertain but according to Olivelle it is not very ancient. Also the commentary is inferior in comparison to that of Haradatta on Āpastamba and Gautama.
This Dharmasūtra is divided into four books. Olivelle states that Book One and the first sixteen chapters of Book Two are the 'Proto-Baudhayana' even though this section has undergone alteration. Scholars like Bühler and Kane agree that the last two books of the Dharmasūtra are later additions. Chapters 17 and 18 in Book Two lay emphasis on various types of ascetics and ascetic practices.
The first book is primarily devoted to the student and deals in topics related to studentship. It also refers to social classes, the role of the king, marriage, and suspension of Vedic recitation. Book two refers to penances, inheritance, women, householder, orders of life, ancestral offerings. Book three refers to holy householders, forest hermit and penances. Book four primarily refers to the yogic practices and penances along with offenses regarding marriage.
Baudhāyana Śulvasūtra.
Pythagorean theorem.
The "Baudhāyana Śulvasūtra" states the rule referred to today in most of the world as the Pythagorean Theorem. The rule was known to a number of ancient civilizations, including also the Greek and the Chinese, and was recorded in Mesopotamia as far back as 1800 BCE. For the most part, the Śulvasūtras do not contain proofs of the rules which they describe. The rule stated in the "Baudhāyana Śulvasūtra" is:
दीर्घचतुरस्रस्याक्ष्णया रज्जुः पार्श्वमानी तिर्यग् मानी च यत् पृथग् भूते कुरूतस्तदुभयं करोति ॥
"dīrghachatursrasyākṣaṇayā rajjuḥ pārśvamānī, tiryagmānī,"
"cha yatpṛthagbhūte kurutastadubhayāṅ karoti."
The diagonal of an oblong produces by itself both the areas which the two sides of the oblong produce separately.
The diagonal and sides referred to are those of a rectangle (oblong), and the areas are those of the squares having these line segments as their sides. Since the diagonal of a rectangle is the hypotenuse of the right triangle formed by two adjacent sides, the statement is seen to be equivalent to the Pythagorean theorem.
Baudhāyana also provides a statement using a rope measure of the reduced form of the Pythagorean theorem for an isosceles right triangle:
"The cord which is stretched across a square produces an area double the size of the original square."
Circling the square.
Another problem tackled by Baudhāyana is that of finding a circle whose area is the same as that of a square (the reverse of squaring the circle). His sūtra i.58 gives this construction:
"Draw half its diagonal about the centre towards the East–West line; then describe a circle together with a third part of that which lies outside the square. "
Explanation: if "a" is the side of the square, half of its diagonal exceeds half of its side by formula_0. The prescribed radius of the circle is therefore formula_1, that is formula_2, which simplifies to formula_3. Since formula_4, the area of the resulting circle is formula_5, approximately equal to the area of the original square.
Square root of 2.
Baudhāyana i.61-2 (elaborated in Āpastamba Sulbasūtra i.6) gives the length of the diagonal of a square in terms of its sides, which is equivalent to a formula for the square root of 2:
"samasya dvikaraṇī. pramāṇaṃ tṛtīyena vardhayet tac caturthenātmacatustriṃśonena saviśeṣaḥ"
The diagonal [lit. "doubler"] of a square. The measure is to be increased by a third and by a fourth decreased by the 34th. That is its diagonal approximately.
That is,
formula_6
which is correct to five decimals.
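The approximation can be checked exactly with rational arithmetic (a small sketch using Python's Fraction type):

from fractions import Fraction
import math

approx = 1 + Fraction(1, 3) + Fraction(1, 3 * 4) - Fraction(1, 3 * 4 * 34)
print(approx)                              # 577/408
print(float(approx), math.sqrt(2))         # 1.4142156862745099 versus 1.4142135623730951
print(abs(float(approx) - math.sqrt(2)))   # about 2.1e-06, correct to five decimal places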
Other theorems include: the diagonals of a rectangle bisect each other, the diagonals of a rhombus bisect at right angles, the area of a square formed by joining the middle points of a square is half of the original, the midpoints of a rectangle joined form a rhombus whose area is half the rectangle, etc.
Note the emphasis on rectangles and squares; this arises from the need to specify "yajña bhūmikā"s—i.e. the altar on which rituals were conducted, including fire offerings (yajña).
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x = {a \\over 2}\\sqrt{2}- {a \\over 2}"
},
{
"math_id": 1,
"text": "{a \\over 2} + {x \\over 3}"
},
{
"math_id": 2,
"text": "{a \\over 2} + {a \\over 6}(\\sqrt{2}-1)"
},
{
"math_id": 3,
"text": "{a \\over 6}(2 + \\sqrt{2})"
},
{
"math_id": 4,
"text": "(2+\\sqrt{2})^2 \\approx 11.66 \\approx {36.6\\over \\pi}"
},
{
"math_id": 5,
"text": "{\\pi}r^2 \\approx \\pi \\times {a^2 \\over 6^2} \\times {36.6\\over \\pi} \\approx a^2"
},
{
"math_id": 6,
"text": "\\sqrt{2} \\approx 1 + \\frac{1}{3} + \\frac{1}{3 \\cdot 4} - \\frac{1}{3 \\cdot4 \\cdot 34} = \\frac{577}{408} \\approx 1.414216,"
}
] | https://en.wikipedia.org/wiki?curid=1158706 |
1159362 | Growing degree-day | Heuristic tool in phenology
Growing degree days (GDD), also called growing degree units (GDUs), are a heuristic tool in phenology. GDD are a measure of heat accumulation used by horticulturists, gardeners, and farmers to predict plant and animal development rates such as the date that a flower will bloom, an insect will emerge from dormancy, or a crop will reach maturity. The concept of GDD is credited to Réaumur, who first defined it in 1735.
Introduction.
In the absence of extreme conditions such as unseasonal drought or disease, plants grow in a cumulative stepwise manner which is strongly influenced by the ambient temperature. Growing degree days take aspects of local weather into account and allow gardeners to predict (or, in greenhouses, even to control) the plants' pace toward maturity.
Unless stressed by other environmental factors like moisture, the development rate from emergence to maturity for many plants depends upon the daily air temperature. Because many developmental events of plants and insects depend on the accumulation of specific quantities of heat, it is possible to predict when these events should occur during a growing season regardless of differences in temperatures from year to year. Growing degrees (GDs) are defined as the number of temperature degrees above a certain threshold base temperature, which varies among crop species. The base temperature is that temperature below which plant growth is zero. GDs are calculated each day as the average of the daily maximum and minimum temperatures minus the base temperature. GDUs are accumulated by adding each day's GDs contribution as the season progresses.
GDUs can be used to: assess the suitability of a region for production of a particular crop; estimate the growth-stages of crops, weeds or even life stages of insects; predict maturity and cutting dates of forage crops; predict best timing of fertilizer or pesticide application; estimate the heat stress on crops; plan spacing of planting dates to produce separate harvest dates. Crop specific indices that employ separate equations for the influence of the daily minimum (nighttime) and the maximum (daytime) temperatures on growth are called crop heat units (CHUs).
GDD calculation.
GDD are calculated by taking the integral of warmth above a base temperature, "T"base (which depends on the plant type; see the baselines section):
formula_0 (where integration is over the time period with formula_1).
A simpler, approximately equivalent formulation uses the average of the daily maximum and minimum temperatures compared to a "T"base to calculate degree-days for a given day. As an equation:
formula_2
If the minimum temperature formula_3 is below "T"base, two variants are in use. In the first variant, the temperatures are left unadjusted: if the daily mean falls below the base, i.e. formula_4, the day simply contributes nothing (the mean is treated as if formula_5), which amounts to computing formula_6. In the second variant, whenever formula_7, the minimum temperature is first raised to the base, i.e. formula_8, before the mean is taken.
GDDs are typically measured from the winter low. Any temperature below "T"base is set to "T"base before calculating the average. Likewise, the maximum temperature is usually capped at 30 °C because most plants and insects do not grow any faster above that temperature. However, some warm temperate and tropical plants do have significant requirements for days above 30 °C to mature fruit or seeds.
Example of GDD calculation.
For example, a day with a high of 23 °C and a low of 12 °C (and a base of 10 °C) would contribute 7.5 GDDs.
formula_9
As a second example, a day with a high of 13 °C and a low of 5 °C (and a base of 10 °C) would contribute nothing under the first variant, formula_10, but 1.5 GDDs under the second, formula_11.
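A minimal Python sketch of this daily calculation (the function name, the optional cap at 30 °C and the flag selecting between the two variants are illustrative choices, not part of any standard):

def daily_gdd(t_max, t_min, t_base=10.0, t_cap=30.0, clamp_min=False):
    """Growing degree-days contributed by a single day.

    clamp_min=False: if the daily mean falls below t_base, the day contributes 0.
    clamp_min=True:  t_min is first raised to t_base before the mean is taken.
    t_max is capped at t_cap (commonly 30 degrees C).
    """
    t_max = min(t_max, t_cap)
    if clamp_min:
        t_min = max(t_min, t_base)
    return max((t_max + t_min) / 2.0 - t_base, 0.0)

# The worked examples from the text:
print(daily_gdd(23, 12))                 # 7.5
print(daily_gdd(13, 5))                  # 0.0 under the first variant
print(daily_gdd(13, 5, clamp_min=True))  # 1.5 under the second variant

# GDDs accumulate over the season from daily (t_max, t_min) pairs:
season = [(23, 12), (13, 5), (28, 15)]
print(sum(daily_gdd(hi, lo) for hi, lo in season))  # 19.0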
Pest control.
Insect development and growing degree days are also used by some farmers and horticulturalists to time their use of organic or biological pest control or other pest control methods so they are applying the procedure or treatment at the point that the pest is most vulnerable. For example:
Honeybees.
Several beekeepers are now researching the correlation between growing degree-days and the life cycle of a honeybee colony.
Baselines.
The optimal base temperature is often determined experimentally based on the life cycle of the plant or insect in question. Common baselines for crops are 5 °C for cool-season plants and 10 °C for warm-season plants and most insect development.
Modified growing degree days.
In the cases of some plants, not only do they require a certain minimum temperature to grow, but they will also stop growing above a warmer threshold temperature. In such cases, a "modified growing degree day" is used: the growing degree days are calculated at the lower baseline, then at the higher baseline, which is subtracted. Corn is an example of this: it starts growing at 10 °C and stops at 30 °C, meaning any growing degree-days above 30 °C do not count.
Units.
GDDs may be calculated in either Celsius or Fahrenheit, though they must be converted appropriately; for every 9 GDDF there are 5 GDDC, or in conversion calculation:
GDDC = 5/9 * GDDF
The equivalent unit compliant with the International System of Units is the kelvin-second. A quantity of kelvin-seconds is four orders of magnitude higher than the corresponding degree day (1 Celsius degree-day is 8.64×10⁴ K·s; 1 Fahrenheit degree-day is 4.8×10⁴ K·s).
References.
This article incorporates public domain material from
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{GDD} = \\int(T(t) - T_\\mathrm{base})dt"
},
{
"math_id": 1,
"text": "T(t) > T_\\mathrm{base}"
},
{
"math_id": 2,
"text": "\\text{GDD} = \\frac{T_\\mathrm{max}+T_\\mathrm{min}}{2}-T_\\mathrm{base}."
},
{
"math_id": 3,
"text": "T_\\mathrm{min}"
},
{
"math_id": 4,
"text": "(T_\\mathrm{max}+T_\\mathrm{min})/2 < T_\\mathrm{base}"
},
{
"math_id": 5,
"text": "(T_\\mathrm{max}+T_\\mathrm{min})/2 = T_\\mathrm{base} "
},
{
"math_id": 6,
"text": "\\text{GDD} = \\max{\\left(\\frac{T_\\mathrm{max}+T_\\mathrm{min}}{2}-T_\\mathrm{base},0\\right)}"
},
{
"math_id": 7,
"text": "T_\\mathrm{min}<T_\\mathrm{base}"
},
{
"math_id": 8,
"text": "T_\\mathrm{min}=T_\\mathrm{base}"
},
{
"math_id": 9,
"text": "\\frac{23+12}{2}-10=7.5"
},
{
"math_id": 10,
"text": "\\max((13+5)/2- 10, 0)=0"
},
{
"math_id": 11,
"text": "(13+10)/2-10=1.5"
}
] | https://en.wikipedia.org/wiki?curid=1159362 |
11594091 | SLD resolution | Rule in logic programming
SLD resolution ("Selective Linear Definite" clause resolution) is the basic inference rule used in logic programming. It is a refinement of resolution, which is both sound and refutation complete for Horn clauses.
The SLD inference rule.
Given a goal clause, represented as the negation of a problem to be solved :
formula_0
with selected literal formula_1, and an input definite clause:
formula_2
whose positive literal (atom) formula_3 unifies with the atom formula_4 of the selected literal formula_5, SLD resolution derives another goal clause, in which the selected literal is replaced by the negative literals of the input clause and the unifying substitution formula_6 is applied:
formula_7
In the simplest case, in propositional logic, the atoms formula_4 and formula_8 are identical, and the unifying substitution formula_6 is vacuous. However, in the more general case, the unifying substitution is necessary to make the two literals identical.
The origin of the name "SLD".
The name "SLD resolution" was given by Maarten van Emden for the unnamed inference rule introduced by Robert Kowalski. Its name is derived from SL resolution, which is both sound and refutation complete for the unrestricted clausal form of logic. "SLD" stands for "SL resolution with Definite clauses".
In both SL and SLD, "L" stands for the fact that a resolution proof can be restricted to a linear sequence of clauses:
formula_9
where the "top clause" formula_10 is an input clause, and every other clause formula_11 is a resolvent one of whose parents is the previous clause formula_12. The proof is a refutation if the last clause formula_13 is the empty clause.
In SLD, all of the clauses in the sequence are goal clauses, and the other parent is an input clause. In SL resolution, the other parent is either an input clause or an ancestor clause earlier in the sequence.
In both SL and SLD, "S" stands for the fact that the only literal resolved upon in any clause formula_12 is one that is uniquely selected by a selection rule or selection function. In SL resolution, the selected literal is restricted to one which has been most recently introduced into the clause. In the simplest case, such a last-in-first-out selection function can be specified by the order in which literals are written, as in Prolog. However, the selection function in SLD resolution is more general than in SL resolution and in Prolog. There is no restriction on the literal that can be selected.
The computational interpretation of SLD resolution.
In clausal logic, an SLD refutation demonstrates that the input set of clauses is unsatisfiable. In logic programming, however, an SLD refutation also has a computational interpretation. The top clause formula_0 can be interpreted as the denial of a conjunction of subgoals formula_14. The derivation of clause formula_11 from formula_12 is the derivation, by means of backward reasoning, of a new set of sub-goals using an input clause as a goal-reduction procedure. The unifying substitution formula_6 both passes input from the selected subgoal to the body of the procedure and simultaneously passes output from the head of the procedure to the remaining unselected subgoals. The empty clause is simply an empty set of subgoals, which signals that the initial conjunction of subgoals in the top clause has been solved.
SLD resolution strategies.
SLD resolution implicitly defines a search tree of alternative computations, in which the initial goal clause is associated with the root of the tree. For every node in the tree and for every definite clause in the program whose positive literal unifies with the selected literal in the goal clause associated with the node, there is a child node associated with the goal clause obtained by SLD resolution.
A leaf node, which has no children, is a success node if its associated goal clause is the empty clause. It is a failure node if its associated goal clause is non-empty but its selected literal unifies with no positive literal of definite clauses in the program.
SLD resolution is non-deterministic in the sense that it does not determine the search strategy for exploring the search tree. Prolog searches the tree depth-first, one branch at a time, using backtracking when it encounters a failure node. Depth-first search is very efficient in its use of computing resources, but is incomplete if the search space contains infinite branches and the search strategy searches these in preference to finite branches: the computation does not terminate. Other search strategies, including breadth-first, best-first, and branch-and-bound search are also possible. Moreover, the search can be carried out sequentially, one node at a time, or in parallel, many nodes simultaneously.
SLD resolution is also non-deterministic in the sense, mentioned earlier, that the selection rule is not determined by the inference rule, but is determined by a separate decision procedure, which can be sensitive to the dynamics of the program execution process.
The SLD resolution search space is an or-tree, in which different branches represent alternative computations. In the case of propositional logic programs, SLD can be generalised so that the search space is an and-or tree, whose nodes are labelled by single literals, representing subgoals, and nodes are joined either by conjunction or by disjunction. In the general case, where conjoint subgoals share variables, the and-or tree representation is more complicated.
Example.
Given the logic program in the Prolog language:
q :- p.
p.
and the top-level goal:
q.
the search space consists of a single branch, in which codice_0 is reduced to codice_1 which is reduced to the empty set of subgoals, signalling a successful computation. In this case, the program is so simple that there is no role for the selection function and no need for any search.
In clausal logic, the program is represented by the set of clauses:
formula_15
formula_16
and top-level goal is represented by the goal clause with a single negative literal:
formula_17
The search space consists of the single refutation:
formula_18
where formula_19 represents the empty clause.
If the following clause were added to the program:
q :- r.
then there would be an additional branch in the search space, whose leaf node codice_2 is a failure node. In Prolog, if this clause were added to the front of the original program, then Prolog would use the order in which the clauses are written to determine the order in which the branches of the search space are investigated. Prolog would try this new branch first, fail, and then backtrack to investigate the single branch of the original program and succeed.
If the clause
p :- p.
were now added to the program, then the search tree would contain an infinite branch. If this clause were tried first, then Prolog would go into an infinite loop and not find the successful branch.
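The behaviour described in this example can be reproduced with a small propositional SLD interpreter (a Python sketch, not how Prolog is actually implemented; clauses are tried in program order, the leftmost subgoal is always selected, and a depth bound stands in for Prolog's unbounded depth-first search so that the looping branch can be cut off and observed):

def sld_solve(program, goals, depth=0, max_depth=25):
    """Depth-first SLD resolution for propositional definite clauses.

    program: list of (head, body) pairs, where body is a list of atoms,
             tried in the order in which they are written.
    goals:   list of atoms still to be solved (the current goal clause).
    Returns True if the empty goal clause (a refutation) is derived.
    """
    if not goals:
        return True                          # success node: empty clause reached
    if depth > max_depth:
        return False                         # crude stand-in for an infinite branch
    selected, rest = goals[0], goals[1:]     # left-to-right selection rule
    for head, body in program:
        if head == selected:                 # propositional "unification"
            if sld_solve(program, body + rest, depth + 1, max_depth):
                return True
    return False                             # failure node: no matching clause

program1 = [("q", ["p"]), ("p", [])]         # q :- p.   p.
print(sld_solve(program1, ["q"]))            # True: q is reduced to p, then to the empty clause

program2 = [("q", ["r"])] + program1         # q :- r. added in front
print(sld_solve(program2, ["q"]))            # True: the r-branch fails, backtracking succeeds

program3 = [("p", ["p"])] + program2         # p :- p. added in front
print(sld_solve(program3, ["q"]))            # True only because the depth bound cuts the loop;
                                             # real Prolog would not terminate on this ordering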
SLDNF.
SLDNF is an extension of SLD resolution to deal with negation as failure. In SLDNF, goal clauses can contain negation as failure literals, say of the form formula_20, which can be selected only if they contain no variables. When such a variable-free literal is selected, a subproof (or subcomputation) is attempted to determine whether there is an SLDNF refutation starting from the corresponding unnegated literal formula_16 as top clause. The selected subgoal formula_20 succeeds if the subproof fails, and it fails if the subproof succeeds. | [
{
"math_id": 0,
"text": " \\neg L_1 \\lor \\cdots \\lor \\neg L_i \\lor \\cdots \\lor \\neg L_n "
},
{
"math_id": 1,
"text": " \\neg L_i "
},
{
"math_id": 2,
"text": " L \\lor \\neg K_1 \\lor \\cdots \\lor \\neg K_m "
},
{
"math_id": 3,
"text": " L\\, "
},
{
"math_id": 4,
"text": " L_i \\, "
},
{
"math_id": 5,
"text": "\\neg L_i \\, "
},
{
"math_id": 6,
"text": " \\theta \\, "
},
{
"math_id": 7,
"text": " (\\neg L_1 \\lor \\cdots \\lor \\neg K_1 \\lor \\cdots \\lor \\neg K_m\\ \\lor \\cdots \\lor \\neg L_n)\\theta "
},
{
"math_id": 8,
"text": " L \\, "
},
{
"math_id": 9,
"text": " C_1, C_2, \\cdots, C_l "
},
{
"math_id": 10,
"text": " C_1 \\, "
},
{
"math_id": 11,
"text": " C_{i+1} \\, "
},
{
"math_id": 12,
"text": " C_i \\, "
},
{
"math_id": 13,
"text": " C_l \\, "
},
{
"math_id": 14,
"text": " L_1 \\land \\cdots \\land L_i \\land \\cdots \\land L_n "
},
{
"math_id": 15,
"text": " q \\lor \\neg p "
},
{
"math_id": 16,
"text": " p \\, "
},
{
"math_id": 17,
"text": " \\neg q "
},
{
"math_id": 18,
"text": " \\neg q, \\neg p, \\mathit{false} "
},
{
"math_id": 19,
"text": " \\mathit{false} \\, "
},
{
"math_id": 20,
"text": " not(p) \\, "
}
] | https://en.wikipedia.org/wiki?curid=11594091 |
11594341 | Correspondence analysis | Correspondence analysis (CA) is a multivariate statistical technique proposed by Herman Otto Hartley (Hirschfeld) and later developed by Jean-Paul Benzécri. It is conceptually similar to principal component analysis, but applies to categorical rather than continuous data. In a similar manner to principal component analysis, it provides a means of displaying or summarising a set of data in two-dimensional graphical form. Its aim is to display in a biplot any structure hidden in the multivariate setting of the data table. As such it is a technique from the field of multivariate ordination. Since the variant of CA described here can be applied either with a focus on the rows or on the columns it should in fact be called simple (symmetric) correspondence analysis.
It is traditionally applied to the contingency table of a pair of nominal variables where each cell contains either a count or a zero value. If more than two categorical variables are to be summarized, a variant called multiple correspondence analysis should be chosen instead. CA may also be applied to binary data, since presence/absence coding represents simplified count data, i.e. a 1 describes a positive count and 0 stands for a count of zero. Depending on the scores used, CA preserves the chi-square distance between either the rows or the columns of the table. Because CA is a descriptive technique, it can be applied to tables regardless of whether a chi-squared test is significant. Although the formula_0 statistic used in inferential statistics and the chi-square distance are computationally related, they should not be confused, since the latter works as a multivariate statistical distance measure in CA while the formula_0 statistic is in fact a scalar, not a metric.
Details.
Like principal components analysis, correspondence analysis creates orthogonal components (or axes) and, for each item in a table i.e. for each row, a set of scores (sometimes called factor scores, see Factor analysis). Correspondence analysis is performed on the data table, conceived as matrix "C" of size "m" × "n" where "m" is the number of rows and "n" is the number of columns. In the following mathematical description of the method, capital letters in italics refer to matrices while lower-case letters in italics refer to vectors. Understanding the following computations requires knowledge of matrix algebra.
Preprocessing.
Before proceeding to the central computational step of the algorithm, the values in matrix "C" have to be transformed. First compute a set of weights for the columns and the rows (sometimes called "masses"), where row and column weights are given by the row and column vectors, respectively:
formula_1
Here formula_2 is the sum of all cell values in matrix "C", or, in short, the sum of "C", and formula_3 is a column vector of ones with the appropriate dimension.
Put in simple words, formula_4 is just a vector whose elements are the row sums of "C" divided by the sum of "C", and formula_5 is a vector whose elements are the column sums of "C" divided by the sum of "C".
The weights are transformed into diagonal matrices
formula_6
and
formula_7
where the diagonal elements of formula_8 are formula_9 and those of formula_10 are formula_11 respectively i.e. the vector elements are the inverses of the square roots of the masses. The off-diagonal elements are all 0.
Next, compute matrix formula_12 by dividing formula_13 by its sum
formula_14
In simple words, matrix formula_12 is just the data matrix (contingency table or binary table) transformed into proportions, i.e. each cell value is just the cell's share of the sum of the whole table.
Finally, compute matrix "formula_15", sometimes called the matrix of "standardized residuals", by matrix multiplication as
formula_16
Note, the vectors formula_4 and formula_5 are combined in an outer product resulting in a matrix of the same dimensions as formula_12. In words the formula reads: matrix formula_17 is subtracted from matrix "formula_12" and the resulting matrix is scaled (weighted) by the diagonal matrices formula_10 and formula_8. Multiplying the resulting matrix by the diagonal matrices is equivalent to multiplying its i-th row (or column) by the i-th diagonal element of formula_10 or formula_8, respectively.
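These preprocessing steps translate directly into a few lines of NumPy (a sketch; the small contingency table is arbitrary illustrative data and the variable names mirror the notation above):

import numpy as np

C = np.array([[10.,  4.,  6.],
              [ 2.,  8.,  5.],
              [ 7.,  3., 12.]])             # data matrix of counts, m x n

n_C = C.sum()                               # grand total of the table
w_m = C.sum(axis=1) / n_C                   # row masses (length m)
w_n = C.sum(axis=0) / n_C                   # column masses (length n)

W_m = np.diag(1.0 / np.sqrt(w_m))           # diagonal weighting matrices
W_n = np.diag(1.0 / np.sqrt(w_n))

P = C / n_C                                 # matrix of proportions
S = W_m @ (P - np.outer(w_m, w_n)) @ W_n    # standardized residuals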
Interpretation of preprocessing.
The vectors formula_4 and formula_5 are the row and column masses or the marginal probabilities for the rows and columns, respectively. Subtracting matrix formula_17 from matrix "formula_12" is the matrix algebra version of double centering the data. Multiplying this difference by the diagonal weighting matrices results in a matrix containing weighted deviations from the origin of a vector space. This origin is defined by matrix formula_17.
In fact matrix formula_17 is identical with the matrix of "expected frequencies" in the chi-squared test. Therefore "formula_15" is computationally related to the independence model used in that test. But since CA is "not" an inferential method the term independence model is inappropriate here.
Orthogonal components.
The table "formula_15" is then decomposed by a singular value decomposition as
formula_18
where formula_19 and formula_20 are the left and right singular vectors of formula_15 and formula_21 is a square diagonal matrix with the singular values formula_22 of "formula_15" on the diagonal. formula_21 is of dimension formula_23, hence formula_19 is of dimension m×p and formula_20 of n×p. As orthonormal vectors, formula_19 and formula_20 fulfill
formula_24.
In other words, the multivariate information that is contained in formula_13 as well as in "formula_15" is now distributed across two (coordinate) matrices formula_19 and formula_20 and a diagonal (scaling) matrix formula_21. The vector space defined by them has p dimensions, where p is the smaller of the number of rows and the number of columns, minus 1.
Inertia.
While a principal component analysis may be said to decompose the (co)variance, and hence its measure of success is the amount of (co-)variance covered by the first few PCA axes (measured in eigenvalues), a CA works with a weighted (co-)variance, which is called "inertia". The sum of the squared singular values is the "total inertia" formula_25 of the data table, computed as
formula_26
The "total inertia" formula_25 of the data table can also computed directly from "formula_15" as
formula_27
The amount of inertia covered by the i-th set of singular vectors is formula_28, the "principal inertia". The higher the portion of inertia covered by the first few singular vectors, i.e. the larger the sum of the principal inertias in comparison to the total inertia, the more successful a CA is. Therefore all principal inertia values are expressed as portions formula_29 of the total inertia
formula_30
and are presented in the form of a scree plot. In fact a scree plot is just a bar plot of all principal inertia portions formula_29.
Coordinates.
To transform the singular vectors to coordinates which preserve the chi-square distances between rows or columns, an additional weighting step is necessary. The resulting coordinates are called "principal coordinates" in CA textbooks. If principal coordinates are used for rows, their visualization is called a "row isometric" scaling in econometrics and "scaling 1" in ecology. Since the weighting includes the singular values formula_21 of the matrix of standardized residuals formula_15, these coordinates are sometimes referred to as "singular value scaled singular vectors", or, somewhat misleadingly, as eigenvalue scaled eigenvectors. In fact the non-trivial eigenvectors of formula_31 are the left singular vectors formula_19 of formula_15 and those of formula_32 are the right singular vectors formula_20 of formula_15, while the eigenvalues of either of these matrices are the squares of the singular values formula_21. But since all modern algorithms for CA are based on a singular value decomposition, this terminology should be avoided. In the French tradition of CA the coordinates are sometimes called (factor) "scores".
Factor scores or "principal coordinates" for the rows of matrix "C" are computed by
formula_33
i.e. the left singular vectors are scaled by the inverse of the square roots of the row masses and by the singular values. Because principal coordinates are computed using the singular values, they contain the information about the spread between the rows (or columns) in the original table. Computing the Euclidean distances between the entities in principal coordinates results in values that equal their chi-square distances, which is the reason why CA is said to "preserve chi-square distances".
Compute principal coordinates for the columns by
formula_34
To represent the result of CA in a proper biplot, those categories which are "not" plotted in principal coordinates, i.e. in chi-square distance preserving coordinates, should be plotted in so-called "standard coordinates". They are called standard coordinates because each vector of standard coordinates has been standardized to exhibit mean 0 and variance 1. When computing standard coordinates the singular values are omitted. This is a direct result of the biplot rule, by which one of the two sets of singular vector matrices must be scaled by the singular values raised to the power of zero, i.e. multiplied by one, which means computed by omitting the singular values, if the other set of singular vectors has been scaled by the singular values. This ensures the existence of an inner product between the two sets of coordinates, i.e. it leads to meaningful interpretations of their spatial relations in a biplot.
In practical terms one can think of the standard coordinates as the vertices of the vector space in which the set of principal coordinates (i.e. the respective points) "exists". The standard coordinates for the rows are
formula_35
and those for the columns are
formula_36
Note that a "scaling 1" biplot in ecology implies the rows to be in principal and the columns to be in standard coordinates while "scaling 2" implies the rows to be in standard and the columns to be in principal coordinates. I.e. scaling 1 implies a biplot of formula_37together with formula_38 while scaling 2 implies a biplot of formula_39together with formula_40.
Graphical representation of result.
The visualization of a CA result always starts with displaying the scree plot of the principal inertia values to evaluate the success of summarizing spread by the first few singular vectors.
The actual ordination is presented in a graph which could - at first glance - be confused with a complicated scatter plot. In fact it consists of two scatter plots printed one upon the other, one set of points for the rows and one for the columns. But because it is a biplot, a clear interpretation rule relates the two coordinate matrices used.
Usually the first two dimensions of the CA solution are plotted because they encompass the maximum of information about the data table that can be displayed in 2D although other combinations of dimensions may be investigated by a biplot. A biplot is in fact a low dimensional mapping of a part of the information contained in the original table.
As a rule of thumb that set (rows or columns) which should be analysed with respect to its composition as measured by the other set is displayed in principal coordinates while the other set is displayed in standard coordinates. E.g. a table displaying voting districts in rows and political parties in columns with the cells containing the counted votes may be displayed with the districts (rows) in principal coordinates when the focus is on ordering districts according to similar voting.
Traditionally, originating from the French tradition in CA, early CA biplots mapped both entities in the same coordinate version, usually principal coordinates, but this kind of display is misleading insofar as: "Although this is called a biplot, it does "not" have any useful inner product relationship between the row and column scores", as Brian Ripley, maintainer of the R package MASS, correctly points out. Today that kind of display should be avoided since laymen usually are not aware of the lacking relation between the two point sets.
A "scaling 1" biplot (rows in principal coordinates, columns in standard coordinates) is interpreted as follows:
Extensions and applications.
Several variants of CA are available, including detrended correspondence analysis (DCA) and canonical correspondence analysis (CCA). The latter (CCA) is used when there is information about possible causes for the similarities between the investigated entities. The extension of correspondence analysis to many categorical variables is called multiple correspondence analysis. An adaptation of correspondence analysis to the problem of discrimination based upon qualitative variables (i.e., the equivalent of discriminant analysis for qualitative data) is called discriminant correspondence analysis or barycentric discriminant analysis.
In the social sciences, correspondence analysis, and particularly its extension multiple correspondence analysis, was made known outside France through French sociologist Pierre Bourdieu's application of it.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\chi^2"
},
{
"math_id": 1,
"text": "w_m = \\frac{1}{n_C} C \\mathbf{1}, \\quad w_n = \\frac{1}{n_C}\\mathbf{1}^T C."
},
{
"math_id": 2,
"text": "n_C = \\sum_{i=1}^n \\sum_{j=1}^m C_{ij} "
},
{
"math_id": 3,
"text": "\\mathbf{1}"
},
{
"math_id": 4,
"text": "w_m"
},
{
"math_id": 5,
"text": "w_n"
},
{
"math_id": 6,
"text": "W_m = \\operatorname{diag}(1/\\sqrt{w_m})"
},
{
"math_id": 7,
"text": "W_n = \\operatorname{diag}(1/\\sqrt{w_n})"
},
{
"math_id": 8,
"text": "W_n"
},
{
"math_id": 9,
"text": "1/\\sqrt{w_n}"
},
{
"math_id": 10,
"text": "W_m"
},
{
"math_id": 11,
"text": "1/\\sqrt{w_m}"
},
{
"math_id": 12,
"text": "P"
},
{
"math_id": 13,
"text": "C"
},
{
"math_id": 14,
"text": "P = \\frac{1}{n_C} C."
},
{
"math_id": 15,
"text": "S"
},
{
"math_id": 16,
"text": "S = W_m(P - w_m w_n)W_n"
},
{
"math_id": 17,
"text": "\\operatorname{outer}(w_m, w_n)"
},
{
"math_id": 18,
"text": "S = U\\Sigma V^* \\,"
},
{
"math_id": 19,
"text": "U"
},
{
"math_id": 20,
"text": "V"
},
{
"math_id": 21,
"text": "\\Sigma"
},
{
"math_id": 22,
"text": "\\sigma_i"
},
{
"math_id": 23,
"text": "p \\leq (\\min(m,n)-1)"
},
{
"math_id": 24,
"text": "U^* U = V^* V = I"
},
{
"math_id": 25,
"text": "\\Iota"
},
{
"math_id": 26,
"text": "\\Iota = \\sum_{i=1}^p \\sigma_i^2."
},
{
"math_id": 27,
"text": "\\Iota = \\sum_{i=1}^n \\sum_{j=1}^m s_{ij}^2. "
},
{
"math_id": 28,
"text": "\\iota_i"
},
{
"math_id": 29,
"text": "\\epsilon_i"
},
{
"math_id": 30,
"text": "\\epsilon_i = \\sigma_i^2 / \\sum_{i=1}^p \\sigma_i^2"
},
{
"math_id": 31,
"text": "S S^* "
},
{
"math_id": 32,
"text": "S^* S "
},
{
"math_id": 33,
"text": "F_m = W_m U \\Sigma"
},
{
"math_id": 34,
"text": "F_n = W_n V \\Sigma."
},
{
"math_id": 35,
"text": "G_m = W_m U"
},
{
"math_id": 36,
"text": "G_n = W_n V"
},
{
"math_id": 37,
"text": "F_m"
},
{
"math_id": 38,
"text": "G_n"
},
{
"math_id": 39,
"text": "F_n"
},
{
"math_id": 40,
"text": "G_m"
}
] | https://en.wikipedia.org/wiki?curid=11594341 |
1159575 | 360-day calendar | Calendar used in some situations such as financial markets
The 360-day calendar is a method of measuring durations used in financial markets, in computer models, in ancient literature, and in prophetic literary genres.
It is based on merging the three major calendar systems into one complex clock, with the 360-day year derived from the average year of the lunar and the solar: (365.2425 "(solar)" + 354.3829 "(lunar)")/2 = 719.6254/2 = 359.8127 days, rounding to 360.
A 360-day year consists of 12 months of 30 days each, so to derive such a calendar from the standard Gregorian calendar, certain days are skipped.
For example, in the financial conventions described below, the 31st day of a month is treated as the 30th, so that every month counts as 30 days.
Ancient Calendars.
Ancient calendars around the world initially used a 360-day calendar.
Rome.
According to Plutarch's Parallel Lives, the Romans initially used a calendar which had 360 days, with months of varying length. However, Macrobius' Saturnalia and Censorinus' The Birthday Book claim that the original Roman calendar had 304 days split into 10 months.
India.
The Rig Veda describes a calendar with twelve months and 360 days.
Mesoamerica.
In the Mayan Long Count Calendar, the equivalent of the year, the tun, was 360 days.
Egypt.
Ancient Egyptians also used a 360-day calendar. One myth tells of how the extra 5 days were added.
<templatestyles src="Template:Blockquote/styles.css" />
A long time ago, Ra, who was god of the sun, ruled the earth. During this time, he heard of a prophecy that Nut, the sky goddess, would give birth to a son who would depose him. Therefore Ra cast a spell to the effect that Nut could not give birth on any day of the year, which was then itself composed of precisely 360 days. To help Nut to counter this spell, the wisdom god Thoth devised a plan.
Thoth went to the moon god Khonsu and asked that he play a game known as Senet, requesting that they play for the very light of the moon itself. Feeling confident that he would win, Khonsu agreed. However, in the course of playing he lost the game several times in succession, such that Thoth ended up winning from the moon a substantial measure of its light, equal to about five days.
With this in hand, Thoth then took this extra time, and gave it to Nut. In doing so this had the effect of increasing the earth’s number of days per year, allowing Nut to give birth to a succession of children; one upon each of the extra 5 days that were added to the original 360. And as for the moon, losing its light had quite an effect upon it, for it became weaker and smaller in the sky. Being forced to hide itself periodically to recuperate; it could only show itself fully for a short period of time before having to disappear to regain its strength.
Financial use.
A duration is calculated as an integral number of days between start date A and end date B. The differences in years, months and days are usually calculated separately:
formula_0
There are several methods commonly available which differ in the way that they handle the cases where the months are not 30 days long, i.e. how they adjust dates:
If either date A or B falls on the 31st of the month, that date will be changed to the 30th.
Where date B falls on the last day of February, the actual date B will be used.
All months are considered to last 30 days and hence a full year has 360 days, but another source says that February has its actual number of days.
If both date A and B fall on the last day of February, then date B will be changed to the 30th.
If date A falls on the 31st of a month or last day of February, then date A will be changed to the 30th.
If date A falls on the 30th of a month after applying (2) above and date B falls on the 31st of a month, then date B will be changed to the 30th.
All months are considered to last 30 days and hence a full year has 360 days.
If date A falls on the 31st of a month, then date A will be changed to the 30th.
If date A falls on the 30th of the month after applying the rule above, and date B falls on the 31st of the month, then date B will be changed to the 30th.
All months are considered to last 30 days except February which has its actual length. Any full year, however, always counts for 360 days.
If date A falls on the 31st of a month or last day of February, then date A will be changed to the 30th.
If date A falls on the 30th of the month after applying the rule above, and date B falls on the 31st of the month, then date B will be changed to the 30th.
All months are considered to last 30 days and hence a full year has 360 days.
If date A falls on the 31st of a month, then date A will be changed to the 30th.
If date B falls on the 31st of a month, then date B will be changed to the 1st of the following month.
Where date B falls on the last day of February, the actual date B will be used.
All months are considered to last 30 days and hence a full year has 360 days.
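As an illustration, the duration formula together with one of the adjustment rule sets described above (the variant in which the 31st of a month is treated as the 30th, with no special handling of February) can be written as a short Python function; the function name is arbitrary and other conventions differ only in the adjustment step:

import datetime

def days_30_360(start, end):
    """Number of days between two dates under a 30/360 convention.

    Adjustments used here (one of the variants described above):
      - if the start date falls on the 31st, it is changed to the 30th;
      - if, after that, the start date is on the 30th and the end date
        falls on the 31st, the end date is also changed to the 30th.
    Every month then counts as 30 days, so a full year has 360 days.
    """
    d1, m1, y1 = start.day, start.month, start.year
    d2, m2, y2 = end.day, end.month, end.year
    if d1 == 31:
        d1 = 30
    if d1 == 30 and d2 == 31:
        d2 = 30
    return (y2 - y1) * 360 + (m2 - m1) * 30 + (d2 - d1)

print(days_30_360(datetime.date(2023, 1, 31), datetime.date(2023, 7, 31)))  # 180
print(days_30_360(datetime.date(2023, 1, 15), datetime.date(2024, 1, 15)))  # 360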
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "duration(A,B) = (B_y - A_y) \\times 360 + (B_m - A_m) \\times 30 + (B_d - A_d); A \\le B"
}
] | https://en.wikipedia.org/wiki?curid=1159575 |
11597641 | Schur functor | In mathematics, especially in the field of representation theory, Schur functors (named after Issai Schur) are certain functors from the category of modules over a fixed commutative ring to itself. They generalize the constructions of exterior powers and symmetric powers of a vector space. Schur functors are indexed by Young diagrams in such a way that the horizontal diagram with "n" cells corresponds to the "n"th symmetric power functor, and the vertical diagram with "n" cells corresponds to the "n"th exterior power functor. If a vector space "V" is a representation of a group "G", then formula_0 also has a natural action of "G" for any Schur functor formula_1.
Definition.
Schur functors are indexed by partitions and are described as follows. Let "R" be a commutative ring, "E" an "R"-module
and "λ" a partition of a positive integer "n". Let "T" be a Young tableau of shape "λ", thus indexing the factors of the "n"-fold direct product, "E" × "E" × ... × "E", with the boxes of "T". Consider those maps of "R"-modules formula_2 satisfying the following conditions
formula_5
where the sum is over "n"-tuples "x"′ obtained from "x" by exchanging the elements indexed by "I" with any formula_6 elements indexed by the numbers in column formula_7 (in order).
The universal "R"-module formula_8 that extends formula_3 to a mapping of "R"-modules formula_9 is the image of "E" under the Schur functor indexed by "λ".
For an example of the condition (3) placed on formula_3
suppose that "λ" is the partition formula_10 and the tableau
"T" is numbered such that its entries are 1, 2, 3, 4, 5 when read
top-to-bottom (left-to-right). Taking formula_11 (i.e.,
the numbers in the second column of "T") we have
formula_12
while if formula_13 then
formula_14
Examples.
Fix a vector space "V" over a field of characteristic zero. We identify partitions and the corresponding Young diagrams. The following descriptions hold:
Applications.
Let "V" be a complex vector space of dimension "k". It's a tautological representation of its automorphism group GL("V"). If "λ" is a diagram where each row has no more than "k" cells, then S"λ"("V") is an irreducible GL("V")-representation of highest weight "λ". In fact, any rational representation of GL("V") is isomorphic to a direct sum of representations of the form S"λ"("V") ⊗ det("V")⊗"m", where "λ" is a Young diagram with each row strictly shorter than "k", and "m" is any (possibly negative) integer.
In this context Schur-Weyl duality states that as a GL("V")-module
formula_16
where formula_17 is the number of standard Young tableaux of shape "λ". More generally, we have the decomposition of the tensor product as a formula_18-bimodule
formula_19
where formula_20 is the Specht module indexed by "λ". Schur functors can also be used to describe the coordinate ring of certain flag varieties.
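The dimension of S"λ"("V") for a "k"-dimensional space "V" is given by the classical hook content formula, and formula_17 by the hook length formula; neither is stated above, but together they allow a numerical check of the dimension count implied by Schur–Weyl duality, namely that "k""n" equals the sum over partitions "λ" of "n" of formula_17 times the dimension of S"λ"("V"). A short Python sketch (the helper names are illustrative):

from math import factorial, prod

def partitions(n, max_part=None):
    """All partitions of n, as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def hooks(lam):
    """Hook lengths of all cells of the Young diagram lam (cells indexed from 0)."""
    cols = [sum(1 for row in lam if row > j) for j in range(lam[0])] if lam else []
    return [lam[i] - j + cols[j] - i - 1
            for i in range(len(lam)) for j in range(lam[i])]

def dim_schur(lam, k):
    """Dimension of the Schur functor for shape lam applied to a k-dimensional space
    (hook content formula); it vanishes when lam has more than k rows."""
    numerator = prod(k + j - i for i in range(len(lam)) for j in range(lam[i]))
    return numerator // prod(hooks(lam))

def num_syt(lam):
    """Number of standard Young tableaux of shape lam (hook length formula)."""
    return factorial(sum(lam)) // prod(hooks(lam))

# Schur-Weyl dimension count: k**n = sum over partitions lam of n
# of (number of SYT of shape lam) * dim of the Schur functor applied to C^k.
k, n = 3, 4
total = sum(num_syt(lam) * dim_schur(lam, k) for lam in partitions(n))
print(total, k ** n)   # both print 81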
Plethysm.
For two Young diagrams "λ" and "μ" consider the composition of the corresponding Schur functors S"λ"(S"μ"(−)). This composition is called a plethysm of "λ" and "μ". From the general theory it is known that, at least for vector spaces over a characteristic zero field, the plethysm is isomorphic to a direct sum of Schur functors. The problem of determining which Young diagrams occur in that description and how to calculate their multiplicities is open, aside from some special cases like Sym"m"(Sym2("V")).
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{S}^{\\lambda}V"
},
{
"math_id": 1,
"text": "\\mathbb{S}^{\\lambda}(-)"
},
{
"math_id": 2,
"text": "\\varphi:E^{\\times\nn} \\to M"
},
{
"math_id": 3,
"text": "\\varphi"
},
{
"math_id": 4,
"text": "I \\subset \\{1,2,\\dots,n\\}"
},
{
"math_id": 5,
"text": "\\varphi(x) = \\sum_{x'} \\varphi(x') "
},
{
"math_id": 6,
"text": "|I|"
},
{
"math_id": 7,
"text": "i-1"
},
{
"math_id": 8,
"text": "\\mathbb{S}^\\lambda E"
},
{
"math_id": 9,
"text": "\\tilde{\\varphi}:\\mathbb{S}^\\lambda E \\to M"
},
{
"math_id": 10,
"text": "(2,2,1)"
},
{
"math_id": 11,
"text": "I = \\{4,5\\}"
},
{
"math_id": 12,
"text": "\\varphi(x_1,x_2,x_3,x_4,x_5) =\n\\varphi(x_4,x_5,x_3,x_1,x_2) +\n\\varphi(x_4,x_2,x_5,x_1,x_3) +\n\\varphi(x_1,x_4,x_5,x_2,x_3),"
},
{
"math_id": 13,
"text": "I = \\{5\\}"
},
{
"math_id": 14,
"text": "\\varphi(x_1,x_2,x_3,x_4,x_5) =\n\\varphi(x_5,x_2,x_3,x_4,x_1) +\n\\varphi(x_1,x_5,x_3,x_4,x_2) +\n\\varphi(x_1,x_2,x_5,x_4,x_3)."
},
{
"math_id": 15,
"text": " \\Lambda^{n+1}(V) \\otimes \\mathrm{Sym}^{m-1}(V) ~\\xrightarrow{\\Delta \\otimes \\mathrm{id}}~ \\Lambda^n(V) \\otimes V \\otimes \\mathrm{Sym}^{m-1}(V)\n ~\\xrightarrow{\\mathrm{id} \\otimes \\cdot}~ \\Lambda^n(V) \\otimes \\mathrm{Sym}^m(V)"
},
{
"math_id": 16,
"text": "V^{\\otimes n} = \\bigoplus_{\\lambda \\vdash n: \\ell(\\lambda) \\leq k} (\\mathbb{S}^{\\lambda} V)^{\\oplus f^\\lambda}"
},
{
"math_id": 17,
"text": "f^\\lambda"
},
{
"math_id": 18,
"text": "\\mathrm{GL}(V) \\times \\mathfrak{S}_n"
},
{
"math_id": 19,
"text": "V^{\\otimes n} = \\bigoplus_{\\lambda \\vdash n: \\ell(\\lambda) \\leq k} (\\mathbb{S}^{\\lambda} V) \\otimes \\operatorname{Specht}(\\lambda)"
},
{
"math_id": 20,
"text": "\\operatorname{Specht}(\\lambda)"
}
] | https://en.wikipedia.org/wiki?curid=11597641 |
11599902 | Total functional programming | Total functional programming (also known as strong functional programming, to be contrasted with ordinary, or "weak" functional programming) is a programming paradigm that restricts the range of programs to those that are provably terminating.
Restrictions.
Termination is guaranteed by the following restrictions:
1. A restricted form of recursion, which operates only on "reduced" forms of its arguments, such as Walther recursion or substructural recursion.
2. Every function must be a total (as opposed to partial) function. Commonly used partial functions such as division are made total by assigning an arbitrary value to the otherwise undefined cases, for example formula_0.
These restrictions mean that total functional programming is not Turing-complete. However, the set of algorithms that can be used is still huge. For example, any algorithm for which an asymptotic upper bound can be calculated (by a program that itself only uses Walther recursion) can be trivially transformed into a provably-terminating function by using the upper bound as an extra argument decremented on each iteration or recursion.
For example, quicksort is not trivially shown to be substructural recursive, but it only recurs to a maximum depth of the length of the vector (worst-case time complexity O("n"2)). A quicksort implementation on lists (which would be rejected by a substructural recursive checker) is, using Haskell:
import Data.List (partition)
qsort [] = []
qsort [a] = [a]
qsort (a:as) = let (lesser, greater) = partition (<a) as
in qsort lesser ++ [a] ++ qsort greater
To make it substructural recursive using the length of the vector as a limit, we could do:
import Data.List (partition)
qsort x = qsortSub x x
-- minimum case
qsortSub [] as = as -- shows termination
-- standard qsort cases
qsortSub (l:ls) [] = [] -- nonrecursive, so accepted
qsortSub (l:ls) [a] = [a] -- nonrecursive, so accepted
qsortSub (l:ls) (a:as) = let (lesser, greater) = partition (<a) as
-- recursive, but recurs on ls, which is a substructure of
-- its first input.
in qsortSub ls lesser ++ [a] ++ qsortSub ls greater
Some classes of algorithms have no theoretical upper bound but do have a practical upper bound (for example, some heuristic-based algorithms can be programmed to "give up" after so many recursions, also ensuring termination).
Another outcome of total functional programming is that both strict evaluation and lazy evaluation result in the same behaviour, in principle; however, one or the other may still be preferable (or even required) for performance reasons.
In total functional programming, a distinction is made between data and codata—the former is finitary, while the latter is potentially infinite. Such potentially infinite data structures are used for applications such as I/O. Using codata entails the usage of such operations as corecursion. However, it is possible to do I/O in a total functional programming language (with dependent types) also without codata.
Both Epigram and Charity could be considered total functional programming languages, even though they do not work in the way Turner specifies in his paper. So could programming directly in plain System F, in Martin-Löf type theory or the Calculus of Constructions.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\forall x \\in \\mathbb{N}. x \\div 0 = 0"
}
] | https://en.wikipedia.org/wiki?curid=11599902 |
1160 | Automorphism | Isomorphism of an object to itself
In mathematics, an automorphism is an isomorphism from a mathematical object to itself. It is, in some sense, a symmetry of the object, and a way of mapping the object to itself while preserving all of its structure. The set of all automorphisms of an object forms a group, called the automorphism group. It is, loosely speaking, the symmetry group of the object.
Definition.
In an algebraic structure such as a group, a ring, or vector space, an "automorphism" is simply a bijective homomorphism of an object into itself. (The definition of a homomorphism depends on the type of algebraic structure; see, for example, group homomorphism, ring homomorphism, and linear operator.)
More generally, for an object in some category, an automorphism is a morphism of the object to itself that has an inverse morphism; that is, a morphism formula_0 is an automorphism if there is a morphism formula_1 such that formula_2 where formula_3 is the identity morphism of X. For algebraic structures, the two definitions are equivalent; in this case, the identity morphism is simply the identity function, and is often called the "trivial automorphism".
Automorphism group.
The automorphisms of an object X form a group under composition of morphisms, which is called the "automorphism group" of X. This results straightforwardly from the definition of a category.
The automorphism group of an object "X" in a category "C" is often denoted Aut"C"("X"), or simply Aut("X") if the category is clear from context.
History.
One of the earliest group automorphisms (automorphism of a group, not simply a group of automorphisms of points) was given by the Irish mathematician William Rowan Hamilton in 1856, in his icosian calculus, where he discovered an order two automorphism, writing:
so that formula_10 is a new fifth root of unity, connected with the former fifth root formula_11 by relations of perfect reciprocity.
Inner and outer automorphisms.
In some categories—notably groups, rings, and Lie algebras—it is possible to separate automorphisms into two types, called "inner" and "outer" automorphisms.
In the case of groups, the inner automorphisms are the conjugations by the elements of the group itself. For each element "a" of a group "G", conjugation by "a" is the operation "φ""a" : "G" → "G" given by "φ""a"("g") = "aga"−1 (or "a"−1"ga"; usage varies). One can easily check that conjugation by "a" is a group automorphism. The inner automorphisms form a normal subgroup of Aut("G"), denoted by Inn("G").
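For a concrete finite example, the fact that conjugation is an automorphism can be verified by brute force for the symmetric group on three letters (a Python sketch; permutations are represented as tuples listing the images of 0, 1, 2, and the helper names are arbitrary):

from itertools import permutations

def compose(f, g):
    """Composition 'f after g' of permutations given as image tuples."""
    return tuple(f[g[i]] for i in range(len(g)))

def inverse(f):
    inv = [0] * len(f)
    for i, fi in enumerate(f):
        inv[fi] = i
    return tuple(inv)

S3 = list(permutations(range(3)))

def conjugation_by(a):
    """The inner automorphism g -> a g a^(-1)."""
    a_inv = inverse(a)
    return lambda g: compose(compose(a, g), a_inv)

# Conjugation by any a is a homomorphism; since it is bijective
# (its inverse is conjugation by a^(-1)), it is an automorphism.
for a in S3:
    phi = conjugation_by(a)
    assert all(phi(compose(g, h)) == compose(phi(g), phi(h)) for g in S3 for h in S3)
print("conjugation by every element of S3 is an automorphism")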
The other automorphisms are called outer automorphisms. The quotient group Aut("G") / Inn("G") is usually denoted by Out("G"); the non-trivial elements are the cosets that contain the outer automorphisms.
The same definition holds in any unital ring or algebra where "a" is any invertible element. For Lie algebras the definition is slightly different. | [
{
"math_id": 0,
"text": "f: X\\to X"
},
{
"math_id": 1,
"text": "g: X\\to X"
},
{
"math_id": 2,
"text": "g\\circ f= f\\circ g = \\operatorname {id}_X,"
},
{
"math_id": 3,
"text": "\\operatorname {id}_X"
},
{
"math_id": 4,
"text": "\\Q "
},
{
"math_id": 5,
"text": "\\R "
},
{
"math_id": 6,
"text": "x<y"
},
{
"math_id": 7,
"text": "\\exists z\\mid y-x=z^2,"
},
{
"math_id": 8,
"text": "\\Complex"
},
{
"math_id": 9,
"text": "\\R ,"
},
{
"math_id": 10,
"text": "\\mu"
},
{
"math_id": 11,
"text": "\\lambda"
}
] | https://en.wikipedia.org/wiki?curid=1160 |
1160163 | Apastamba Dharmasutra | Āpastamba Dharmasūtra (Sanskrit: आपस्तम्ब धर्मसूत्र) is a Sanskrit text and one of the oldest Dharma-related texts of Hinduism that have survived into the modern age from the 1st millennium BCE. It is one of three extant Dharmasutras texts from the Taittiriya school of Krishna Yajurveda, the other two being "Baudhayana Dharmasutra" and "Hiranyakesin Dharmasutra".
The "Apastamba Dharmasutra" is part of "Apastamba Kalpasutra" collection, along with "Apastamba Shrautasutra" and "Apastamba Grihyasutra". One of the best preserved ancient texts on Dharma, it is also notable for mentioning and citing views of ten ancient experts on Dharma, which has led scholars to conclude that there existed a rich genre of Dharmasutras text in ancient India before this text was composed.
Authorship, location and dates.
<templatestyles src="Template:Quote_box/styles.css" />
Duties of a teacher
<poem>
Next the teacher's conduct towards his pupil.
Loving him like a son and totally devoted to him,
the teacher should impart knowledge to him,
without holding anything back,
with respect to any of the Laws.
Except in emergency, moreover,
he should not employ a pupil,
for purposes to the detriment of the pupil's studies.
</poem>
— "Apastamba Dharmasutras 1.8.23-25"<br>Translator: Patrick Olivelle
The Dharmasutra is attributed to Apastamba, the founder of a Shakha (Vedic school) of Yajurveda. According to the Hindu tradition, Apastamba was the student of Baudhayana, and himself had a student named Hiranyakesin. Each of the three founded a Vedic school, and each of their schools produced a collection of literature within the Krishna Yajurveda tradition, one that included separate Kalpasutra compilations. They were founders of their traditions, but it is unclear if they authored the Dharmasutras. It is, states Patrick Olivelle, possible that the Apastamba Dharmasutra is ascribed to Apastamba, but actually composed by others in his school.
The Apastamba tradition may be from south India, possibly near where modern Andhra Pradesh is between Godavari and Krishna rivers, but this is not certain. The verse 2.17.17 of the Apastamba Dharmasutra mentions a practice of "northerners" but it is unclear what "north" means in the context it is used. Further, the ancient grammarian Panini refers to it too, and he is generally placed in the northwest Indian subcontinent. Olivelle states that the three Taittiriya school Dharmasutras mention practices of north and south, but never clarify how far north or south they are referring to, but placing Dharmasutras in the southern Indian peninsula implies that Brahmanical ideas had established themselves or emerged in the south by the 1st millennium BCE. According to Olivelle, the Yajurveda schools may have been in what is north India today, and the Apastamba Dharmasutra may have been composed in north India, rather than south. In contrast, Robert Lingat states that epigraphical evidence such as the Pallava inscriptions confirms that the Apastamba tradition existed in South India, in ancient times, in parts of what became Madras Presidency in the colonial British India.
Kane estimated that Apastamba Dharmasutra dates from approximately 600-300 BCE, and later more narrowly to between 450 and 350 BCE. Lingat states that the internal evidence within the text hints of great antiquity, because unlike later Dharma texts, it makes no mention of Buddhism. Other scholars, such as Hopkins, assert that all this can be explained to be an artifact of its relatively remote geographical origins in Andhra region. Olivelle, and several other scholars, in contrast, state that the first version of Apastamba Dharmasutra may have been composed after others, but the extant version of the Apastamba text is the oldest Dharma text from ancient India.
Regardless of the relative chronology, the ancient Apastamba Dharmasutra, states Olivelle, shows clear signs of a maturing legal procedure tradition and that there were Dharma texts in ancient India before it was composed.
Organization and content.
The text is in sutra format, and part of thirty "prashnas" (प्रश्न, portions, issue, questions) of Apastamba Kalpasutra. The Apastamba Dharmasutra is the 28th and 29th "prashna" of this compilation, while the first 24 "prashnas" are about Shrautasutras (vedic rituals), 25th is an ancillary mantra section, 26th and 27th are Grihyasutras (householder rites of passage), and the last or the 30th "prashna" is a Shulbasutra (mathematics for altar building). The text is systematically arranged, cross references to other sections of the Kalpasutra compilation so extensively and accurately, as if it is the work of a single author.
Of the two books of this Dharmasūtra, the first is devoted to the student tradition and the second book is devoted to the householder tradition.
Significance.
<templatestyles src="Template:Quote_box/styles.css" />
Who doesn't pay taxes?
<poem>
The following are exempt from taxes:
vedic scholars, women of all classes,
pre-pubescent boys,
all students studying with a guru,
ascetics, sudras who work as personal servants,
people who are blind, dumb, deaf and sick,
anyone excluded from acquiring property.
</poem>
— "Apastamba Dharmasutras 2.26.10-17"
The "Āpastamba Dharmasutra" is notable for placing the importance of the Veda scriptures second and that of "samayacarika" or mutually agreed and accepted customs of practice first. Āpastamba proposes that scriptures alone cannot be source of Law (dharma), and dharma has an empirical nature. Āpastamba asserts that it is difficult to find absolute sources of law, in ancient books or current people, according to Patrick Olivelle, with "The Righteous (dharma) and the Unrighteous (adharma) do not go around saying, 'here we are!'; Nor do gods, Gandharvas or ancestors declare, 'This is righteous and that is unrighteous'."
Most laws are based on agreement between the Aryas, states Āpastamba, on what is right and what is wrong. Laws must also change with ages, states Āpastamba, a theory that became known as "Yuga dharma" in Hindu traditions. Āpastamba also asserts in verses 2.29.11-15 a broad minded and liberal view, states Olivelle, that "aspects of dharma not taught in Dharmasastras can be learned from women and people of all classes". The Apastamba Dharmasutra also recognizes property rights of women, and her ability to inherit wealth from her parents. Sita Anantha Raman notes, "As a southerner from Andhra, Āpastamba was familiar with southern customs, including matriliny. He gave importance to the 'married pair' (ĀDS 2.1.7-10) who performed Vedic rites together for the prosperity of the family."
Āpastamba used a hermeneutic strategy to assert that the Vedas once contained all knowledge including that of ideal Dharma, but parts of Vedas have been lost. Human customs developed from the original complete Vedas, but given the lost text, one must use customs between good people as a source to infer what the original Vedas might have stated the Dharma to be. This theory, called the ‘lost Veda’ theory, made the study of customs of good people as a source of dharma and guide to proper living, states Olivelle.
Apastamba, in his Sulbasutras, gives a strikingly accurate value for formula_0, correct to five decimal places. He states the approximation of the square root of 2 as follows:
formula_1
Commentaries.
Several ancient commentaries ("bhasya") were written on this Dharmasūtra, but only one, by Haradatta, has survived into the modern era. Haradatta, who possibly came from South India and lived in the 12th or 13th century, also commented on the praśnas of the Āpastamba Gṛhyasūtra as well as on Gautama's Dharmasūtra.
Haradatta's commentary on Apastamba Dharmasutra was criticized by Boehtlingk in 1885 for lacking "European critical attitude", a view that modern scholars such as Patrick Olivelle have called unjustified and erroneous because Haradatta was a very careful commentator, far more than Boehtlingk and many other 19th-century Orientalists were.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sqrt{2}"
},
{
"math_id": 1,
"text": " 1 + \\frac{1}{3} + \\frac{1}{3\\times 4} - \\frac{1}{3\\times 4\\times 34} = \\frac{577}{408} = 1.41421\\overline{56862745098039}."
}
] | https://en.wikipedia.org/wiki?curid=1160163 |
11602384 | Product integral | An integral using products instead of sums
A product integral is any product-based counterpart of the usual sum-based integral of calculus. The product integral was developed by the mathematician Vito Volterra in 1887 to solve systems of linear differential equations.
Informal sketch.
The classical Riemann integral of a function formula_0 can be defined by the relation
formula_1
where the limit is taken over all partitions of the interval formula_2 whose norms approach zero. Product integrals are similar, but take the limit of a product instead of the limit of a sum. They can be thought of as "continuous" versions of "discrete" products. They are defined as
formula_3
For the case of formula_4, the product integral reduces exactly to the case of Lebesgue integration, that is, to classical calculus. Thus, the interesting cases arise for functions formula_5 where formula_6 is either some commutative algebra, such as a finite-dimensional matrix field, or a non-commutative algebra. The theories for these two cases, the commutative and non-commutative cases, have little in common. The non-commutative case is far more complicated; it requires proper path-ordering to make the integral well-defined.
Commutative case.
For the commutative case, three distinct definitions are commonplace in the literature, referred to as type I, type II or "geometric", and type III or "bigeometric".
Such integrals have found use in epidemiology (the Kaplan–Meier estimator) and stochastic population dynamics. The geometric integral, together with the geometric derivative, is useful in image analysis and in the study of growth/decay phenomena ("e.g.", in economic growth, bacterial growth, and radioactive decay). The bigeometric integral, together with the bigeometric derivative, is useful in some applications of fractals, and in the theory of elasticity in economics.
Non-commutative case.
The non-commutative case commonly arises in quantum mechanics and quantum field theory. The integrand is generally an operator belonging to some non-commutative algebra. In this case, one must be careful to establish a path-ordering while integrating. A typical result is the ordered exponential. The Magnus expansion provides one technique for computing the Volterra integral. Examples include the Dyson expansion, the integrals that occur in the operator product expansion and the Wilson line, a product integral over a gauge field. The Wilson loop is the trace of a Wilson line. The product integral also occurs in control theory, as the Peano–Baker series describing state transitions in linear systems written in a master equation type form.
General (non-commutative) case.
The Volterra product integral is most useful when applied to matrix-valued functions or functions with values in a Banach algebra. When applied to scalars belonging to a non-commutative field, to matrices, and to operators, "i.e." to mathematical objects that do not commute, the Volterra integral splits into two definitions.
The left product integral is
formula_7
With this notation of left products (i.e. normal products applied from left)
formula_8
The right product integral
formula_9
With this notation of right products (i.e. applied from right)
formula_10
where formula_11 is the identity matrix and D is a partition of the interval [a,b] in the Riemann sense, "i.e." the limit is taken as the maximum subinterval length in the partition tends to zero. Note how in this case time ordering becomes evident in the definitions.
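The role of ordering can be seen in a small numerical experiment; the sketch below is a hypothetical illustration (the 2×2 test function and all names are assumptions) in which the finite products P(A, D) are built with the new factor applied from the left and from the right.

```python
# Hypothetical sketch: left vs. right ordered products for a matrix A(t).
import numpy as np

def ordered_product(A, a, b, n, side="left"):
    """Finite product of (I + A(t_i) dt); 'left' prepends new factors,
    'right' appends them, mirroring the two definitions in the text."""
    dt = (b - a) / n
    P = np.eye(2)
    for i in range(n):
        factor = np.eye(2) + A(a + (i + 0.5) * dt) * dt
        P = factor @ P if side == "left" else P @ factor
    return P

A = lambda t: np.array([[0.0, t], [1.0, 0.0]])   # A(s) and A(t) do not commute
left = ordered_product(A, 0.0, 1.0, 2000, "left")
right = ordered_product(A, 0.0, 1.0, 2000, "right")
print(left)
print(right)    # differs from `left`: the time ordering matters here
```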
The Magnus expansion provides a technique for computing the product integral. It defines a continuous-time version of the Baker–Campbell–Hausdorff formula.
The product integral satisfies a collection of properties defining a one-parameter continuous group; these are stated in two articles showing applications: the Dyson series and the Peano–Baker series.
Commutative case.
The commutative case is vastly simpler, and, as a result, a large variety of distinct notations and definitions have appeared. Three distinct styles are popular in the literature. This subsection adopts the product formula_12 notation for product integration instead of the integral formula_13 (usually modified by a superimposed times symbol or letter P) favoured by Volterra and others. An arbitrary classification of types is adopted to impose some order in the field.
When the function to be integrated is valued in the real numbers, then the theory reduces exactly to the theory of Lebesgue integration.
Type I: Volterra integral.
The type I product integral corresponds to Volterra's original definition. The following relationship exists for scalar functions formula_14:
formula_15
Type II: Geometric integral.
formula_16
which is called the geometric integral. The logarithm is well-defined if "f" takes values in the real or complex numbers, or if "f" takes values in a commutative field of commuting trace-class operators. This definition of the product integral is the continuous analog of the discrete product operator formula_17 (with formula_18) and the multiplicative analog to the (normal/standard/additive) integral formula_19 (with formula_20).
It is very useful in stochastics, where the log-likelihood (i.e. the logarithm of a product integral of independent random variables) equals the integral of the logarithm of these (infinitesimally many) random variables:
formula_21
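The geometric-integral identity lends itself to a quick numerical check; in the sketch below the choice f(x) = x on [1, 2] is an assumption made for the demonstration, and the closed form exp(∫ ln x dx) = 4/e comes from the ordinary integral.

```python
# Illustrative check: the type II (geometric) integral of f(x) = x on [1, 2]
# as a limit of products f(x_i)^dx, versus exp of the integral of ln f.
import math

def geometric_integral(f, a, b, n):
    dx = (b - a) / n
    prod = 1.0
    for i in range(n):
        prod *= f(a + (i + 0.5) * dx) ** dx    # midpoint tags
    return prod

print(geometric_integral(lambda x: x, 1.0, 2.0, 100000))
print(math.exp(2.0 * math.log(2.0) - 1.0))     # exp(∫_1^2 ln x dx) = 4/e
```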
Type III: Bigeometric integral.
The type III product integral is called the bigeometric integral:
formula_22
Basic results.
For the commutative case, the following results hold for the type II product integral (the geometric integral).
formula_23
formula_24
formula_25
formula_26
formula_27
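These identities are easy to spot-check with the limiting products; the following sketch (with the assumed values a = 1, b = 3) verifies the second identity above.

```python
# Illustrative check of  prod_a^b x^dx = (b^b / a^a) e^(a-b)  with a=1, b=3.
import math

a, b, n = 1.0, 3.0, 200000
dx = (b - a) / n
prod = 1.0
for i in range(n):
    prod *= (a + (i + 0.5) * dx) ** dx

print(prod)                                   # limit of the finite products
print((b ** b / a ** a) * math.exp(a - b))    # closed form: 27 / e^2
```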
The geometric integral (type II above) plays a central role in the geometric calculus, which is a multiplicative calculus. The inverse of the geometric integral, which is the geometric derivative, denoted formula_28, is defined using the following relationship:
formula_29
Thus, the following can be concluded:
formula_30
formula_31
formula_32
formula_33
where X is a random variable with probability distribution "F"("x").
Compare with the standard law of large numbers:
formula_34
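The two displayed laws can be compared on simulated data. In the sketch below the uniform distribution on (0, 1) is an assumption made only for the demonstration; for it, E[ln X] = −1, so the geometric mean tends to 1/e while the arithmetic mean tends to 1/2.

```python
# Illustrative simulation of the multiplicative vs. additive laws of large numbers.
import math
import random

random.seed(0)
n = 200_000
samples = [random.uniform(1e-12, 1.0) for _ in range(n)]   # ~ uniform on (0, 1)

geometric_mean = math.exp(sum(math.log(x) for x in samples) / n)
arithmetic_mean = sum(samples) / n

print(geometric_mean, math.exp(-1.0))   # tends to exp(E[ln X]) = 1/e
print(arithmetic_mean, 0.5)             # tends to E[X] = 1/2
```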
Commutative case: Lebesgue-type product-integrals.
When the integrand takes values in the real numbers, then the product integrals become easy to work with by using simple functions. Just as in the case of Lebesgue version of (classical) integrals, one can compute product integrals by approximating them with the product integrals of simple functions. The case of Type II geometric integrals reduces to exactly the case of classical Lebesgue integration.
Type I: Volterra integral.
Because simple functions generalize step functions, in what follows we will only consider the special case of simple functions that are step functions. This will also make it easier to compare the Lebesgue definition with the Riemann definition.
Given a step function formula_35 with corresponding partition formula_36 and a tagged partition
formula_37
one approximation of the "Riemann definition" of the type I product integral is given by
formula_38
In a 1931 article, Ludwig Schlesinger defined the (type I) product integral to be, roughly speaking, the limit of these products.
Another approximation of the "Riemann definition" of the type I product integral is defined as
formula_39
When formula_40 is a constant function, the limit of the first type of approximation is equal to the second type of approximation. Notice that in general, for a step function, the value of the second type of approximation doesn't depend on the partition, as long as the partition is a refinement of the partition defining the step function, whereas the value of the first type of approximation "does" depend on the fineness of the partition, even when it is a refinement of the partition defining the step function.
It turns out that for "any" product-integrable function formula_40, the limit of the first type of approximation equals the limit of the second type of approximation. Since, for step functions, the value of the second type of approximation doesn't depend on the fineness of the partition for partitions "fine enough", it makes sense to define the "Lebesgue (type I) product integral" of a step function as
formula_41
where formula_42 is a tagged partition, and again formula_36 is the partition corresponding to the step function formula_40. (In contrast, the corresponding quantity would not be unambiguously defined using the first type of approximation.)
This generalizes to arbitrary measure spaces readily. If formula_43 is a measure space with measure formula_44, then for any product-integrable simple function formula_45 (i.e. a conical combination of the indicator functions for some disjoint measurable sets formula_46), its type I product integral is defined to be
formula_47
since formula_48 is the value of formula_40 at any point of formula_49. In the special case where formula_50, formula_51 is Lebesgue measure, and all of the measurable sets formula_49 are intervals, one can verify that this is equal to the definition given above for that special case. Analogous to the theory of Lebesgue (classical) integrals, the Type I product integral of any product-integrable function formula_40 can be written as the limit of an increasing sequence of Volterra product integrals of product-integrable simple functions.
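For a simple function the definition is a finite product, which makes it easy to illustrate; the values a_k and measures μ(A_k) in the sketch below are invented for the demonstration, and the result visibly agrees with the exponential of the ordinary Lebesgue integral.

```python
# Illustrative: type I product integral of a simple function on a measure space.
import math

values   = [2.0, -1.0, 0.5]    # a_k, the values taken on the disjoint sets A_k
measures = [0.3,  0.5, 0.2]    # mu(A_k), hypothetical measures

product_integral = 1.0
for a_k, mu_k in zip(values, measures):
    product_integral *= math.exp(a_k * mu_k)

lebesgue_integral = sum(a_k * mu_k for a_k, mu_k in zip(values, measures))
print(product_integral, math.exp(lebesgue_integral))   # the same number
```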
Taking logarithms of both sides of the above definition, one gets that for any product-integrable simple function formula_40:
formula_52
formula_53
where we used the definition of integral for simple functions. Moreover, because continuous functions like formula_54 can be interchanged with limits, and the product integral of any product-integrable function formula_40 is equal to the limit of product integrals of simple functions, it follows that the relationship
formula_55
holds generally for "any" product-integrable formula_40. This clearly generalizes the property mentioned above.
The Type I integral is multiplicative as a set function, which can be shown using the above property. More specifically, given a product-integrable function formula_40 one can define a set function formula_56 by defining, for every measurable set formula_57,
formula_58
where formula_59 denotes the indicator function of formula_60. Then for any two "disjoint" measurable sets formula_61 one has
formula_62
This property can be contrasted with measures, which are sigma-additive set functions.
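A small numerical sketch of this multiplicativity (again with invented values and measures): restricting a simple function to two disjoint sets and to their union gives the same product either way.

```python
# Illustrative: the set function V_f(B) = prod_B (1 + f dmu) is multiplicative
# over disjoint sets, here for a simple function f taking two values.
import math

def V(values, measures):
    """Type I product integral of a simple f over a set B, given the
    measures mu(A_k ∩ B) of the pieces of B on which f equals values[k]."""
    return math.prod(math.exp(a * m) for a, m in zip(values, measures))

values = [1.5, -0.7]
mu_B1 = [0.2, 0.1]                          # pieces of B1
mu_B2 = [0.4, 0.3]                          # pieces of B2 (disjoint from B1)
mu_union = [x + y for x, y in zip(mu_B1, mu_B2)]

print(V(values, mu_union))
print(V(values, mu_B1) * V(values, mu_B2))  # equal, as claimed
```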
However, the Type I integral is "not" multiplicative as a functional. Given two product-integrable functions formula_63, and a measurable set formula_6, it is generally the case that
formula_64
Type II: Geometric integral.
If formula_43 is a measure space with measure formula_44, then for any product-integrable simple function formula_45 (i.e. a conical combination of the indicator functions for some disjoint measurable sets formula_46), its type II product integral is defined to be
formula_65
This can be seen to generalize the definition given above.
Taking logarithms of both sides, we see that for any product-integrable simple function formula_40:
formula_66
where the definition of the Lebesgue integral for simple functions was used. This observation, analogous to the one already made for Type I integrals above, allows one to entirely reduce the "Lebesgue theory of type II geometric integrals" to the Lebesgue theory of (classical) integrals. In other words, because continuous functions like formula_54 and formula_67 can be interchanged with limits, and the product integral of any product-integrable function formula_40 is equal to the limit of some increasing sequence of product integrals of simple functions, it follows that the relationship
formula_68
holds generally for "any" product-integrable formula_40. This generalizes the property of geometric integrals mentioned above.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f:[a,b]\\to\\mathbb{R}"
},
{
"math_id": 1,
"text": "\\int_a^b f(x)\\,dx = \\lim_{\\Delta x\\to 0}\\sum f(x_i)\\,\\Delta x,"
},
{
"math_id": 2,
"text": "[a,b]"
},
{
"math_id": 3,
"text": "\\prod_a^b \\big(1 + f(x)\\,dx\\big) = \\lim_{\\Delta x \\to 0} \\prod \\big(1 + f(x_i)\\,\\Delta x\\big)."
},
{
"math_id": 4,
"text": "f:[a,b]\\to\\R"
},
{
"math_id": 5,
"text": "f:[a,b]\\to A"
},
{
"math_id": 6,
"text": "A"
},
{
"math_id": 7,
"text": "P(A,D)=\\prod_{i=m}^{1}(\\mathbb{1}+A(\\xi_i)\\Delta t_i) = (\\mathbb{1}+A(\\xi_m)\\Delta t_m) \\cdots (\\mathbb{1}+A(\\xi_1)\\Delta t_1)"
},
{
"math_id": 8,
"text": "\\prod_a^b (\\mathbb{1}+A(t)dt)=\\lim_{\\max \\Delta t_i \\to 0} P(A,D)"
},
{
"math_id": 9,
"text": "P(A,D)^*=\\prod_{i=1}^{m}(\\mathbb{1}+A(\\xi_i)\\Delta t_i) = (\\mathbb{1}+A(\\xi_1)\\Delta t_1) \\cdots (\\mathbb{1}+A(\\xi_m)\\Delta t_m)"
},
{
"math_id": 10,
"text": "(\\mathbb{1}+A(t)dt) \\prod_a^b =\\lim_{\\max \\Delta t_i \\to 0} P(A,D)^*"
},
{
"math_id": 11,
"text": "\\mathbb{1}"
},
{
"math_id": 12,
"text": "\\textstyle \\prod"
},
{
"math_id": 13,
"text": "\\textstyle \\int"
},
{
"math_id": 14,
"text": "f:[a,b] \\to \\mathbb{R}"
},
{
"math_id": 15,
"text": "\\prod_a^b \\big(1 + f(x)\\,dx\\big) = \\exp\\left(\\int_a^b f(x) \\,dx\\right),"
},
{
"math_id": 16,
"text": "\\prod_a^b f(x)^{dx} = \\lim_{\\Delta x \\to 0} \\prod{f(x_i)^{\\Delta x}} = \\exp\\left(\\int_a^b \\ln f(x) \\,dx\\right),"
},
{
"math_id": 17,
"text": "\\textstyle \\prod_{i=a}^b"
},
{
"math_id": 18,
"text": "i, a, b \\in \\mathbb{Z}"
},
{
"math_id": 19,
"text": "\\textstyle \\int_a^b dx"
},
{
"math_id": 20,
"text": "x \\in [a,b]"
},
{
"math_id": 21,
"text": "\\ln \\prod_a^b p(x)^{dx} = \\int_a^b \\ln p(x) \\,dx."
},
{
"math_id": 22,
"text": "\\prod_a^b f(x)^{d(\\ln x)} = \\exp\\left(\\int_{\\ln(a)}^{\\ln(b)} \\ln f(e^x) \\,dx\\right),"
},
{
"math_id": 23,
"text": "\\prod_a^b c^{dx} = c^{b-a}, "
},
{
"math_id": 24,
"text": "\\prod_a^b x^{dx} = \\frac{b^b}{a^a} {\\rm e}^{a-b}, "
},
{
"math_id": 25,
"text": "\\prod_0^b x^{dx} = b^b {\\rm e}^{-b}, "
},
{
"math_id": 26,
"text": "\\prod_a^b \\left(f(x)^k\\right)^{dx} = \\left(\\prod_a^b f(x)^{dx}\\right)^k, "
},
{
"math_id": 27,
"text": "\\prod_a^b \\left(c^{f(x)}\\right)^{dx} = c^{\\int_a^b f(x) \\,dx}, "
},
{
"math_id": 28,
"text": "f^*(x)"
},
{
"math_id": 29,
"text": "f^*(x)=\\exp\\left(\\frac{f'(x)}{f(x)}\\right)"
},
{
"math_id": 30,
"text": "\\prod_a^b f^*(x)^{dx} = \\prod_a^b \\exp\\left(\\frac{f'(x)}{f(x)} \\,dx\\right) = \\frac{f(b)}{f(a)},"
},
{
"math_id": 31,
"text": "(fg)^* = f^* g^*."
},
{
"math_id": 32,
"text": "(f/g)^* = f^*/g^*."
},
{
"math_id": 33,
"text": "\\sqrt[n]{X_1 X_2 \\cdots X_n} \\underset{n \\to \\infty}{\\longrightarrow} \\prod_x X^{dF(x)},"
},
{
"math_id": 34,
"text": "\\frac{X_1 + X_2 + \\cdots + X_n}{n} \\underset{n \\to \\infty}{\\longrightarrow} \\int X \\,dF(x)."
},
{
"math_id": 35,
"text": "f: [a,b] \\to \\mathbb{R}"
},
{
"math_id": 36,
"text": "a = y_0 < y_1 < \\dots < y_m "
},
{
"math_id": 37,
"text": "a = x_0 < x_1 < \\dots < x_n = b, \\quad x_0 \\le t_0 \\le x_1, x_1 \\le t_1 \\le x_2, \\dots, x_{n-1} \\le t_{n-1} \\le x_n,"
},
{
"math_id": 38,
"text": "\\prod_{k=0}^{n-1} \\left[ \\big(1 + f(t_k)\\big) \\cdot (x_{k+1} - x_k) \\right]."
},
{
"math_id": 39,
"text": "\\prod_{k=0}^{n-1} \\exp\\big(f(t_k) \\cdot (x_{k+1} - x_k)\\big)."
},
{
"math_id": 40,
"text": "f"
},
{
"math_id": 41,
"text": "\\prod_a^b \\big(1 + f(x) \\,dx\\big) \\overset{def}{=} \\prod_{k=0}^{m-1} \\exp\\big(f(s_k) \\cdot (y_{k+1} - y_k)\\big),"
},
{
"math_id": 42,
"text": "y_0 < a = s_0 < y_1 < \\dots < y_{n-1} < s_{n-1} < y_n = b"
},
{
"math_id": 43,
"text": "X"
},
{
"math_id": 44,
"text": "\\mu"
},
{
"math_id": 45,
"text": "f(x) = \\sum_{k=1}^n a_k I_{A_k}(x)"
},
{
"math_id": 46,
"text": "A_0, A_1, \\dots, A_{m-1} \\subseteq X"
},
{
"math_id": 47,
"text": "\\prod_X \\big(1 + f(x) \\,d\\mu(x)\\big) \\overset{def}{=} \\prod_{k=0}^{m-1} \\exp\\big(a_k \\mu(A_k)\\big),"
},
{
"math_id": 48,
"text": "a_k"
},
{
"math_id": 49,
"text": "A_k"
},
{
"math_id": 50,
"text": "X = \\mathbb{R}"
},
{
"math_id": 51,
"text": "\\mu "
},
{
"math_id": 52,
"text": "\\ln \\left(\\prod_X \\big(1 + f(x) \\,d\\mu(x)\\big) \\right) = \\ln \\left( \\prod_{k=0}^{m-1} \\exp\\big(a_k \\mu(A_k)\\big) \\right) = \\sum_{k=0}^{m-1} a_k \\mu(A_k) = \\int_X f(x) \\,d\\mu(x) \\iff"
},
{
"math_id": 53,
"text": "\\prod_X \\big(1 + f(x) \\,d\\mu(x)\\big) = \\exp \\left( \\int_X f(x) \\,d\\mu(x) \\right),"
},
{
"math_id": 54,
"text": "\\exp"
},
{
"math_id": 55,
"text": "\\prod_X \\big(1 + f(x) \\,d\\mu(x)\\big) = \\exp \\left( \\int_X f(x) \\,d\\mu(x) \\right)"
},
{
"math_id": 56,
"text": "{\\cal V}_f"
},
{
"math_id": 57,
"text": "B \\subseteq X "
},
{
"math_id": 58,
"text": "{\\cal V}_f(B) \\overset{def}{=} \\prod_B \\big(1 + f(x) \\,d\\mu(x)\\big) \\overset{def}{=} \\prod_X \\big(1 + (f \\cdot I_B)(x) \\,d\\mu(x)\\big),"
},
{
"math_id": 59,
"text": "I_B(x)"
},
{
"math_id": 60,
"text": "B"
},
{
"math_id": 61,
"text": "B_1, B_2"
},
{
"math_id": 62,
"text": "\\begin{align}\n {\\cal V}_f(B_1 \\sqcup B_2) &= \\prod_{B_1 \\sqcup B_2} \\big(1 + f(x) \\,d\\mu(x)\\big) \\\\\n &= \\exp\\left( \\int_{B_1 \\sqcup B_2} f(x) \\,d\\mu(x) \\right) \\\\\n &= \\exp\\left( \\int_{B_1} f(x) \\,d\\mu(x) + \\int_{B_2} f(x) \\,d\\mu(x) \\right) \\\\\n &= \\exp\\left( \\int_{B_1} f(x) \\,d\\mu(x) \\right) \\exp\\left( \\int_{B_2} f(x) \\,d\\mu(x) \\right) \\\\\n &= \\prod_{B_1} (1 + f(x)d \\mu(x)) \\prod_{ B_2} (1 + f(x) \\,d\\mu(x)) \\\\\n &= {\\cal V}_f(B_1 ) {\\cal V}_f(B_2).\n\\end{align}"
},
{
"math_id": 63,
"text": "f , g"
},
{
"math_id": 64,
"text": "\\prod_A \\big(1 + (fg)(x) \\,d\\mu(x)\\big) \\neq \\prod_A \\big(1 + f(x) \\,d\\mu(x)\\big) \\prod_A \\big(1 + g(x) \\,d\\mu(x)\\big)."
},
{
"math_id": 65,
"text": "\\prod_X f(x)^{d\\mu(x)} \\overset{def}{=} \\prod_{k=0}^{m-1} a_k^{\\mu(A_k)}."
},
{
"math_id": 66,
"text": "\\ln \\left( \\prod_X f(x)^{d\\mu(x)} \\right) = \\sum_{k=0}^{m-1} \\ln(a_k) \\mu(A_k) = \\int_X \\ln f(x) \\,d\\mu (x) \\iff \\prod_X f(x)^{d\\mu(x)} = \\exp\\left( \\int_X \\ln f(x) \\,d\\mu (x) \\right),"
},
{
"math_id": 67,
"text": "\\ln"
},
{
"math_id": 68,
"text": "\\prod_X f(x)^{d\\mu(x)} = \\exp\\left( \\int_X \\ln f(x) \\,d\\mu(x) \\right)"
}
] | https://en.wikipedia.org/wiki?curid=11602384 |
11606627 | Isaak Yaglom | Soviet mathematician (1921–1988)
Isaak Moiseevich Yaglom (; 6 March 1921 – 17 April 1988) was a Soviet mathematician and author of popular mathematics books, some with his twin Akiva Yaglom.
Yaglom received a Ph.D. from Moscow State University in 1945 as student of Veniamin Kagan. As the author of several books, translated into English, that have become academic standards of reference, he has an international stature. His attention to the necessities of learning (pedagogy) make his books pleasing experiences for students. The seven authors of his Russian obituary recount "…the breadth of his interests was truly extraordinary: he was seriously interested in history and philosophy, passionately loved and had a good knowledge of literature and art, often came forward with reports and lectures on the most diverse topics (for example, on Alexander Blok, Anna Akhmatova, and the Dutch painter M. C. Escher), actively took part in the work of the cinema club in Yaroslavl and the music club at the House of Composers in Moscow, and was a continual participant of conferences on mathematical linguistics and on semiotics."
University life.
Yaglom started his higher education at Moscow State University in 1938. During World War II he volunteered, but due to myopia he was deferred from military service. In the evacuation of Moscow he went with his family to Sverdlovsk in the Ural Mountains. He studied at Sverdlovsk State University, graduated in 1942, and when the usual Moscow faculty assembled in Sverdlovsk during the war, he took up graduate study. Under the geometer Veniamin Kagan he developed his Ph.D. thesis which he defended in Moscow in 1945. It is reported that this thesis "was devoted to projective metrics on a plane and their connections with different types of complex numbers formula_0 (where formula_1, or formula_2, or else formula_3)."
Institutes and titles.
During his career, Yaglom was affiliated with these institutions:
Affine geometry.
In 1962 Yaglom and Vladimir G. Ashkinuse published "Ideas and Methods of Affine and Projective Geometry", in Russian. The text is limited to affine geometry since projective geometry was put off to a second volume that did not appear. The concept of hyperbolic angle is developed through area of hyperbolic sectors. A treatment of Routh's theorem is given at page 193. This textbook, published by the Ministry of Education, includes 234 exercises with hints and solutions in an appendix.
English translations.
Isaac Yaglom wrote over 40 books and many articles. Several were translated, and appeared in the year given:
Complex numbers in geometry (1968).
Translated by Eric J. F. Primrose, published by Academic Press (N.Y.). The trinity of complex number planes is laid out and exploited. Topics include line coordinates in the Euclidean and Lobachevski planes, and inversive geometry.
Geometric Transformations (1962, 1968, 1973, 2009).
The first three books were originally published in English by Random House as part of the series New Mathematical Library (Volumes 8, 21, and 24). They were keenly appreciated by proponents of the New Math in the U.S.A., but represented only a part of Yaglom's two-volume original published in Russian in 1955 and 56. More recently the final portion of Yaglom's work was translated into English and published by the Mathematical Association of America. All four volumes are now available from the MAA in the series Anneli Lax New Mathematical Library (Volumes 8, 21, 24, and 44).
A simple non-euclidean geometry and its physical basis (1979).
Subtitle: "An elementary account of Galilean geometry and the Galilean principle of relativity". Translated by Abe Shenitzer, published by Springer-Verlag. In his prefix, the translator says the book is "a fascinating story which flows from one geometry to another, from geometry to algebra, and from geometry to kinematics, and in so doing crosses artificial boundaries separating one area of mathematics from another and mathematics from physics." The author's own prefix speaks of "the important connection between Klein's Erlanger Program and the principles of relativity."
The approach taken is elementary; simple manipulations by shear mapping lead on page 68 to the conclusion that "the difference between the Galilean geometry of points and the Galilean geometry of lines is just a matter of terminology".
The concepts of the dual number and its "imaginary" ε, ε2 = 0, do not appear in the development of Galilean geometry. However, Yaglom shows that the common slope concept in analytic geometry corresponds to the "Galilean angle". Yaglom extensively develops his non-Euclidean geometry including the theory of cycles (pp. 77–79), duality, and the circumcycle and incycle of a triangle (p. 104).
Yaglom continues with his Galilean study to the "inversive Galilean plane" by including a special line at infinity and showing the topology with a stereographic projection. The Conclusion of the book delves into the "Minkowskian geometry" of hyperbolas in the plane, including the nine-point hyperbola. Yaglom also covers the "inversive Minkowski plane".
Probability and information (1983).
Co-author: A. M. Yaglom. Russian editions in 1956, 59 and 72. Translated by V. K. Jain, published by D. Reidel and the Hindustan Publishing Corporation, India.
The channel capacity work of Claude Shannon is developed from first principles in four chapters: probability, entropy and information, information calculation to solve logical problems, and applications to information transmission. The final chapter is well-developed including code efficiency, Huffman codes, natural language and biological information channels, influence of noise, and error detection and correction.
Challenging Mathematical Problems With Elementary Solutions (1987).
Co-author: A. M. Yaglom. Two volumes. Russian edition in 1954. First English edition 1964–1967.
Felix Klein and Sophus Lie (1988).
Subtitle: The evolution of the idea of symmetry in the 19th century.
In his chapter on "Felix Klein and his Erlangen Program", Yaglom says that "finding a general description of all geometric systems [was] considered by mathematicians the central question of the day." The subtitle more accurately describes the book than the main title, since a great number of mathematicians are credited in this account of the modern tools and methods of symmetry.
In 2009 the book was republished by Ishi Press as "Geometry, Groups and Algebra in the Nineteenth Century". The new edition, designed by Sam Sloan, has a foreword by Richard Bozulich.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "a + jb"
},
{
"math_id": 1,
"text": "jj=-1"
},
{
"math_id": 2,
"text": "jj=+1"
},
{
"math_id": 3,
"text": "jj=0"
}
] | https://en.wikipedia.org/wiki?curid=11606627 |
1160674 | SO | SO or so may refer to:
<templatestyles src="Template:TOC_right/styles.css" />
See also.
Topics referred to by the same term
<templatestyles src="Dmbox/styles.css" />
This page lists articles associated with the title "SO".
{
"math_id": 0,
"text": "\\mathfrak{so}(n)"
}
] | https://en.wikipedia.org/wiki?curid=1160674 |
11607118 | Mean square quantization error | Mean square quantization error (MSQE) is a figure of merit for the process of analog to digital conversion.
In this conversion process, analog signals in a continuous range of values are converted to a discrete set of values by comparing them with a sequence of thresholds.
The quantization error of a signal is the difference between the original continuous value and its discretization, and the mean square quantization error (given some probability distribution on the input values) is the expected value of the square of the quantization errors.
Mathematically, suppose that the lower threshold for inputs that generate the quantized value formula_0 is formula_1, that the upper threshold is formula_2, that there are formula_3 levels of quantization, and that the probability density function for the input analog values is formula_4. Let formula_5 denote the quantized value corresponding to an input formula_6; that is, formula_5 is the value formula_0 for which formula_7.
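As an illustration of this setup (the uniform input density on [0, 1) and the midpoint reconstruction levels are assumptions made for the demonstration), the following sketch evaluates the expected squared error interval by interval, anticipating the general expression given below; it reproduces the classical Δ²/12 value for a uniform quantizer.

```python
# Illustrative MSQE of a uniform k-level quantizer on [0, 1) with p(x) = 1.
k = 8
delta = 1.0 / k                          # step size; thresholds t_i = i * delta
msqe = 0.0
for i in range(k):
    t_lo, t_hi = i * delta, (i + 1) * delta
    q_i = (t_lo + t_hi) / 2.0            # midpoint reconstruction level
    # closed-form integral of (x - q_i)^2 over [t_lo, t_hi]
    msqe += ((t_hi - q_i) ** 3 - (t_lo - q_i) ** 3) / 3.0

print(msqe, delta ** 2 / 12.0)           # both equal delta^2 / 12
```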
Then
formula_8 | [
{
"math_id": 0,
"text": "q_i"
},
{
"math_id": 1,
"text": "t_{i-1}"
},
{
"math_id": 2,
"text": "t_i"
},
{
"math_id": 3,
"text": "k"
},
{
"math_id": 4,
"text": "p(x)"
},
{
"math_id": 5,
"text": "\\hat x"
},
{
"math_id": 6,
"text": "x"
},
{
"math_id": 7,
"text": "t_i-1\\le x<t_i"
},
{
"math_id": 8,
"text": "\n\\begin{align}\n\\operatorname{MSQE}&=\\operatorname{E}[(x-\\hat x)^2]\\\\\n&=\\int_{t_0}^{t_k} (x-\\hat x)^2 p(x)\\, dx\\\\\n&= \\sum_{i=1}^k \\int_{t_{i-1}}^{t_i} (x-q_i)^2 p(x) \\,dx.\n\\end{align}\n"
}
] | https://en.wikipedia.org/wiki?curid=11607118 |
11609441 | Padé table | In complex analysis, a Padé table is an array, possibly of infinite extent, of the rational Padé approximants
"R""m", "n"
to a given complex formal power series. Certain sequences of approximants lying within a Padé table can often be shown to correspond with successive convergents of a continued fraction representation of a holomorphic or meromorphic function.
History.
Although earlier mathematicians had obtained sporadic results involving sequences of rational approximations to transcendental functions, Frobenius (in 1881) was apparently the first to organize the approximants in the form of a table. Henri Padé further expanded this notion in his doctoral thesis "Sur la representation approchee d'une fonction par des fractions rationelles", in 1892. Over the ensuing 16 years Padé published 28 additional papers exploring the properties of his table, and relating the table to analytic continued fractions.
Modern interest in Padé tables was revived by H. S. Wall and Oskar Perron, who were primarily interested in the connections between the tables and certain classes of continued fractions. Daniel Shanks and Peter Wynn published influential papers about 1955, and W. B. Gragg obtained far-reaching convergence results during the '70s. More recently, the widespread use of electronic computers has stimulated a great deal of additional interest in the subject.
Notation.
A function "f"("z") is represented by a formal power series:
formula_0
where "c"0 ≠ 0, by convention. The ("m", "n")th entry "Rm, n" in the Padé table for "f"("z") is then given by
formula_1
where "Pm"("z") and "Qn"("z") are polynomials of degrees not more than "m" and "n", respectively. The coefficients {"ai"} and {"bi"} can always be found by considering the expression
formula_2
formula_3
formula_4
and equating coefficients of like powers of "z" up through "m" + "n". For the coefficients of powers "m" + 1 to "m" + "n", the right hand side is 0 and the resulting system of linear equations contains a homogeneous system of "n" equations in the "n" + 1 unknowns "bi", and so admits of infinitely many solutions each of which determines a possible "Qn". "Pm" is then easily found by equating the first "m" coefficients of the equation above. However, it can be shown that, due to cancellation, the generated rational functions "Rm, n" are all the same, so that the ("m", "n")th entry in the Padé table is unique. Alternatively, we may require that "b"0 = 1, thus putting the table in a standard form.
Although the entries in the Padé table can always be generated by solving this system of equations, that approach is computationally expensive. Usage of the Padé table has been extended to meromorphic functions by newer, timesaving methods such as the epsilon algorithm.
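The coefficient-matching construction sketched above can be written out directly; the following code is an illustrative implementation (the function name is an assumption, and the normalization b0 = 1 follows the standard form mentioned earlier), tested here on the Taylor series of the exponential function.

```python
# Illustrative: build the [m/n] Padé approximant by equating coefficients.
import math
import numpy as np

def pade(c, m, n):
    """Coefficients (a, b) of P_m and Q_n for the series c[0] + c[1] z + ...,
    normalized so that b[0] = 1; needs len(c) >= m + n + 1."""
    def coef(k):
        return c[k] if 0 <= k < len(c) else 0.0
    if n > 0:
        # sum_j b_j c_{k-j} = 0 for k = m+1, ..., m+n, with b_0 = 1
        A = np.array([[coef(m + 1 + r - j) for j in range(1, n + 1)]
                      for r in range(n)])
        rhs = -np.array([coef(m + 1 + r) for r in range(n)])
        b = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    else:
        b = np.array([1.0])
    a = np.array([sum(b[j] * coef(i - j) for j in range(min(i, n) + 1))
                  for i in range(m + 1)])
    return a, b

c = [1.0 / math.factorial(k) for k in range(12)]        # series for exp(z)
a, b = pade(c, 2, 2)
z = 0.5
print(np.polyval(a[::-1], z) / np.polyval(b[::-1], z))  # R_{2,2}(0.5)
print(math.exp(z))                                      # close to exp(0.5)
```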
The block theorem and normal approximants.
Because of the way the ("m", "n")th approximant is constructed, the difference
"Qn"("z")"f"("z") − "Pm"("z")
is a power series whose first term is of degree no less than
"m" + "n" + 1.
If the first term of that difference is of degree
"m" + "n" + "r" + 1, "r" > 0,
then the rational function "Rm, n" occupies
("r" + 1)2
cells in the Padé table, from position ("m", "n") through position ("m"+"r", "n"+"r"), inclusive. In other words, if the same rational function appears more than once in the table, that rational function occupies a square block of cells within the table. This result is known as the block theorem.
If a particular rational function occurs exactly once in the Padé table, it is called a normal approximant to "f"("z"). If every entry in the complete Padé table is normal, the table itself is said to be normal. Normal Padé approximants can be characterized using determinants of the coefficients "cn" in the Taylor series expansion of "f"("z"), as follows. Define the ("m", "n")th determinant by
formula_5
with "D""m",0 = 1, "D""m",1 = "cm", and "ck" = 0 for "k" < 0. Then
Connection with continued fractions.
One of the most important forms in which an analytic continued fraction can appear is as a regular continued fraction, which is a continued fraction of the form
formula_6
where the "ai" ≠ 0 are complex constants, and "z" is a complex variable.
There is an intimate connection between regular continued fractions and Padé tables with normal approximants along the main diagonal: the "stairstep" sequence of Padé approximants "R"0,0, "R"1,0, "R"1,1, "R"2,1, "R"2,2, ... is normal if and only if that sequence coincides with the successive convergents of a regular continued fraction. In other words, if the Padé table is normal along the main diagonal, it can be used to construct a regular continued fraction, and if a regular continued fraction representation for the function "f"("z") exists, then the main diagonal of the Padé table representing "f"("z") is normal.
An example – the exponential function.
Here is an example of a Padé table, for the exponential function.
Several features are immediately apparent.
formula_7,
where formula_8 is a generalized hypergeometric series and formula_9 is a generalized reverse Bessel polynomial.
The expressions on the main diagonal reduce to formula_10, where formula_11 is a reverse Bessel polynomial.
The procedure used to derive Gauss's continued fraction can be applied to a certain confluent hypergeometric series to derive the following C-fraction expansion for the exponential function, valid throughout the entire complex plane:
formula_12
By applying the fundamental recurrence formulas one may easily verify that the successive convergents of this C-fraction are the stairstep sequence of Padé approximants "R"0,0, "R"1,0, "R"1,1, ... In this particular case a closely related continued fraction can be obtained from the identity
formula_13
that continued fraction looks like this:
formula_14
This fraction's successive convergents also appear in the Padé table, and form the sequence "R"0,0, "R"0,1, "R"1,1, "R"1,2, "R"2,2, ...
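A numerical check of this expansion is straightforward; the sketch below truncates the C-fraction after the 1/10 terms exactly as displayed (the helper name is an assumption) and evaluates it from the bottom up, which reproduces e^z to the accuracy expected of the corresponding stairstep Padé approximant.

```python
# Illustrative: evaluate the truncated C-fraction for exp(z) shown above.
import math

def truncated_c_fraction_exp(z):
    # partial numerators, with the alternating signs folded in:
    # z, -z/2, +z/6, -z/6, +z/10, -z/10  (all partial denominators are 1)
    nums = [z, -z / 2, z / 6, -z / 6, z / 10, -z / 10]
    tail = 0.0
    for a in reversed(nums):
        tail = a / (1.0 + tail)
    return 1.0 + tail

for z in (0.5, 1.0, 2.0):
    print(z, truncated_c_fraction_exp(z), math.exp(z))
```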
Generalizations.
A formal Newton series "L" is of the form
formula_15
where the sequence {β"k"} of points in the complex plane is known as the set of "interpolation points". A sequence of rational approximants "Rm,n" can be formed for such a series "L" in a manner entirely analogous to the procedure described above, and the approximants can be arranged in a "Newton-Padé table". It has been shown that some "staircase" sequences in the Newton-Padé table correspond with the successive convergents of a Thiele-type continued fraction, which is of the form
formula_16
Mathematicians have also constructed "two-point Padé tables" by considering two series, one in powers of "z", the other in powers of 1/"z", which alternately represent the function "f"("z") in a neighborhood of zero and in a neighborhood of infinity.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\nf(z) = c_0 + c_1 z + c_2 z^2 + \\cdots = \\sum_{l=0}^\\infty c_l z^l,\n"
},
{
"math_id": 1,
"text": "\nR_{m,n}(z) = \\frac{P_m(z)}{Q_n(z)} = \n\\frac{a_0 + a_1 z + a_2 z^2 + \\cdots + a_m z^m}{b_0 + b_1 z + b_2 z^2 + \\cdots + b_n z^n}\n"
},
{
"math_id": 2,
"text": "\nf(z) \\approx \\sum_{l=0}^{m+n} c_l z^l =: f_{\\mathrm{approx}}(z)\n"
},
{
"math_id": 3,
"text": "\nQ_n(z) f_{\\mathrm{approx}}(z) = P_m(z)\n"
},
{
"math_id": 4,
"text": "\nQ_n(z) \\left(c_0 + c_1 z + c_2 z^2 + \\cdots + c_{m+n} z^{m+n} \\right) = P_m(z)\n"
},
{
"math_id": 5,
"text": "D_{m,n} = \\left|\\begin{matrix}\nc_m & c_{m-1} & \\ldots & c_{m-n+2} & c_{m-n+1}\\\\\nc_{m+1} & c_m & \\ldots & c_{m-n+3} & c_{m-n+2}\\\\\n\\vdots & \\vdots & & \\vdots & \\vdots\\\\\nc_{m+n-2} & c_{m+n-3} & \\ldots & c_m & c_{m-1}\\\\\nc_{m+n-1} & c_{m+n-2} & \\ldots & c_{m+1} & c_m\\\\\n\\end{matrix}\\right|\n"
},
{
"math_id": 6,
"text": "\nf(z) = b_0 + \\cfrac{a_1z}{1 - \\cfrac{a_2z}{1 - \\cfrac{a_3z}{1 - \\cfrac{a_4z}{1 - \\ddots}}}}.\n"
},
{
"math_id": 7,
"text": "R_{m,n}=\\frac{{}_1F_1(-m;-m-n;z)}{{}_1F_1(-n;-m-n;-z)} = \\frac{n!\\,2^m\\theta_m\\left(\\frac{z}{2};n-m+2,2\\right)}{m!\\,2^n\\theta_n\\left(-\\frac{z}{2};m-n+2,2\\right)}"
},
{
"math_id": 8,
"text": "{}_1F_1(a;b;z)"
},
{
"math_id": 9,
"text": "\\theta_n(x;\\alpha,\\beta)"
},
{
"math_id": 10,
"text": "R_{n,n}=\\theta_n(z/2)/\\theta_n(-z/2)"
},
{
"math_id": 11,
"text": "\\theta_n(x)"
},
{
"math_id": 12,
"text": "e^z = 1 + \\cfrac{z}{1 - \\cfrac{\\frac{1}{2}z}{1 + \\cfrac{\\frac{1}{6}z}{1 - \\cfrac{\\frac{1}{6}z}\n{1 + \\cfrac{\\frac{1}{10}z}{1 - \\cfrac{\\frac{1}{10}z}{1 + - \\ddots}}}}}}."
},
{
"math_id": 13,
"text": "e^z = \\frac{1}{e^{-z}};"
},
{
"math_id": 14,
"text": "e^z = \\cfrac{1}{1 - \\cfrac{z}{1 + \\cfrac{\\frac{1}{2}z}{1 - \\cfrac{\\frac{1}{6}z}{1 + \\cfrac{\\frac{1}{6}z}\n{1 - \\cfrac{\\frac{1}{10}z}{1 + \\cfrac{\\frac{1}{10}z}{1 - + \\ddots}}}}}}}."
},
{
"math_id": 15,
"text": "\nL(z) = c_0 + \\sum_{n=1}^\\infty c_n \\prod_{k=1}^n (z - \\beta_k)\n"
},
{
"math_id": 16,
"text": "\na_0 + \\cfrac{a_1(z - \\beta_1)}{1 - \\cfrac{a_2(z - \\beta_2)}{1 - \\cfrac{a_3(z - \\beta_3)}{1 - \\ddots}}}.\n"
}
] | https://en.wikipedia.org/wiki?curid=11609441 |