id | title | text | formulas | url
---|---|---|---|---|
63326424
|
Discrete fixed-point theorem
|
In discrete mathematics, a discrete fixed-point is a fixed-point for functions defined on finite sets, typically subsets of the integer grid formula_0.
Discrete fixed-point theorems were developed by Iimura, Murota and Tamura, Chen and Deng and others. Yang provides a survey.
Basic concepts.
Continuous fixed-point theorems often require a continuous function. Since continuity is not meaningful for functions on discrete sets, it is replaced by conditions such as a direction-preserving function. Such conditions imply that the function does not change too drastically when moving between neighboring points of the integer grid. There are various direction-preservation conditions, depending on whether neighboring points are considered points of a hypercube (HGDP), of a simplex (SGDP) etc. See the page on direction-preserving function for definitions.
Continuous fixed-point theorems often require a convex set. The analogue of this property for discrete sets is an integrally-convex set.
A fixed point of a discrete function "f" is defined exactly as for continuous functions: it is a point "x" for which "f"("x")="x".
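As a small, hypothetical illustration of this definition (it is not taken from the cited theorems), a fixed point of a map on a finite grid can be found by exhaustively checking the defining condition "f"("x") = "x"; the map "f" below is invented so that it has an obvious fixed point.

```python
# Brute-force search for discrete fixed points: check f(x) == x for every x in X.
from itertools import product

def discrete_fixed_points(f, X):
    """Return all points x in X with f(x) == x."""
    return [x for x in X if tuple(f(x)) == tuple(x)]

# Hypothetical example on the 5x5 integer grid {0,...,4}^2:
# f pushes every point one step toward the corner (4, 4).
X = list(product(range(5), range(5)))
f = lambda x: (min(x[0] + 1, 4), min(x[1] + 1, 4))
print(discrete_fixed_points(f, X))   # -> [(4, 4)]
```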
For functions on discrete sets.
We focus on functions formula_1, where the domain X is a nonempty subset of the Euclidean space formula_2. ch("X") denotes the convex hull of "X".
Iimura-Murota-Tamura theorem: If "X" is a finite integrally-convex subset of formula_0, and formula_3 is a "hypercubic direction-preserving (HDP)" function, then "f" has a fixed-point.
Chen-Deng theorem: If "X" is a finite subset of formula_2, and formula_3 is "simplicially direction-preserving" "(SDP)", then "f" has a fixed-point.
Yang's theorems:
For discontinuous functions on continuous sets.
Discrete fixed-point theorems are closely related to fixed-point theorems on discontinuous functions. These, too, use the direction-preservation condition instead of continuity.
Herings-Laan-Talman-Yang fixed-point theorem:
Let "X" be a non-empty convex compact subset of formula_2. Let "f": "X" → "X" be a "locally gross direction preserving (LGDP)" function: at any point "x" that is not a fixed point of "f", the direction of formula_7 is grossly preserved in some neighborhood of "x", in the sense that for any two points "y", "z" in this neighborhood, its inner product is non-negative, i.e.: formula_13. Then "f" has a fixed point in "X".
The theorem was originally stated for polytopes, but Philippe Bich extended it to convex compact sets (Thm. 3.7). Note that every continuous function is LGDP, but an LGDP function may be discontinuous. An LGDP function may even be neither upper nor lower semi-continuous. Moreover, there is a constructive algorithm for approximating this fixed point.
Applications.
Discrete fixed-point theorems have been used to prove the existence of a Nash equilibrium in a discrete game, and the existence of a Walrasian equilibrium in a discrete market.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbb{Z}^n"
},
{
"math_id": 1,
"text": "f: X\\to \\mathbb{R}^n"
},
{
"math_id": 2,
"text": "\\mathbb{R}^n"
},
{
"math_id": 3,
"text": "f: X\\to \\text{ch}(X)"
},
{
"math_id": 4,
"text": "x+g(x)f(x)\\in \\text{ch}(X)"
},
{
"math_id": 5,
"text": "f_i(x_1,\\ldots,x_{i-1},a_i,x_{i+1},\\ldots,x_n) \\leq 0 "
},
{
"math_id": 6,
"text": "f_i(x_1,\\ldots,x_{i-1},b_i,x_{i+1},\\ldots,x_n) \\geq 0"
},
{
"math_id": 7,
"text": "f(x)-x"
},
{
"math_id": 8,
"text": "F: X\\to 2^X"
},
{
"math_id": 9,
"text": "F(x) = \\text{ch}(F(x))\\cap \\mathbb{Z}^n"
},
{
"math_id": 10,
"text": "f(x)\\in \\text{ch}(F(x))"
},
{
"math_id": 11,
"text": "y \\in F(y)"
},
{
"math_id": 12,
"text": "f(x)\\cdot f(-y) \\leq 0"
},
{
"math_id": 13,
"text": "(f(y)-y)\\cdot (f(z)-z) \\geq 0"
}
] |
https://en.wikipedia.org/wiki?curid=63326424
|
6333385
|
C-theorem
|
Theorem in quantum field theory
In quantum field theory the "C"-theorem states that there exists a positive real function, formula_0, depending on the coupling constants of the quantum field theory considered, formula_1, and on the energy scale, formula_2, which has the following properties: formula_0 decreases monotonically under the renormalization group (RG) flow, and at fixed points of the RG flow, which are specified by a set of fixed-point couplings formula_3, the function becomes a constant, formula_4, independent of the energy scale.
The theorem formalizes the notion that theories at high energies have more degrees of freedom than theories at low energies and that information is lost as we flow from the former to the latter.
Two-dimensional case.
Alexander Zamolodchikov proved in 1986 that two-dimensional quantum field theory always has such a "C"-function. Moreover, at fixed points of the RG flow, which correspond to conformal field theories, Zamolodchikov's "C"-function is equal to the central charge of the corresponding conformal field theory, which lends the name "C" to the theorem.
Four-dimensional case: "A"-theorem.
John Cardy in 1988 considered the possibility of generalising the "C"-theorem to higher-dimensional quantum field theory. He conjectured that in four spacetime dimensions, the quantity behaving monotonically under renormalization group flows, and thus playing the role analogous to the central charge c in two dimensions, is a certain anomaly coefficient which came to be denoted as a.
For this reason, the analog of the "C"-theorem in four dimensions is called the "A"-theorem.
In perturbation theory, that is for renormalization flows which do not deviate much from free theories, the "A"-theorem in four dimensions was proved by Hugh Osborn using the local renormalization group equation. However, the problem of finding a proof valid beyond perturbation theory remained open for many years.
In 2011, Zohar Komargodski and Adam Schwimmer of the Weizmann Institute of Science proposed a nonperturbative proof for the "A"-theorem, which has gained acceptance. (Still, simultaneous monotonic and cyclic (limit cycle) or even chaotic RG flows are compatible with such flow functions when multivalued in the couplings, as evinced in specific systems.) RG flows of theories in four dimensions, and the question of whether scale invariance implies conformal invariance, remain fields of active research, and not all questions are settled.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "C(g^{}_i,\\mu)"
},
{
"math_id": 1,
"text": "g^{}_i"
},
{
"math_id": 2,
"text": "\\mu^{}_{}"
},
{
"math_id": 3,
"text": "g^*_i"
},
{
"math_id": 4,
"text": "C(g^*_i,\\mu)=C_*"
}
] |
https://en.wikipedia.org/wiki?curid=6333385
|
63340505
|
Adiabatic electron transfer
|
Adiabatic electron-transfer is a type of oxidation-reduction process. The mechanism is ubiquitous in nature in both the inorganic and biological spheres. Adiabatic electron-transfers proceed without making or breaking chemical bonds. Adiabatic electron-transfer can occur by either optical or thermal mechanisms. Electron transfer during a collision between an oxidant and a reductant occurs adiabatically on a continuous potential-energy surface.
History.
Noel Hush is often credited with formulation of the theory of adiabatic electron-transfer.
Figure 1 sketches the basic elements of adiabatic electron-transfer theory. Two chemical species (ions, molecules, polymers, protein cofactors, etc.) labelled D (for “donor”) and A (for “acceptor”) come to be a distance "R" apart, either through collisions, covalent bonding, location in a material, protein or polymer structure, etc. A and D have different chemical environments. Each polarizes its surrounding condensed medium. Electron-transfer theories describe the influence of a variety of parameters on the rate of electron-transfer. All electrochemical reactions occur by this mechanism. Adiabatic electron-transfer theory stresses that intricately coupled to such charge transfer is the ability of any D-A system to absorb or emit light. Hence fundamental understanding of any electrochemical process demands simultaneous understanding of the optical processes that the system can undergo.
Figure 2 sketches what happens if light is absorbed by just one of the chemical species, taken to be the charge donor. This produces an excited state of the donor. As the donor and acceptor are close to each other and to the surrounding matter, they experience a coupling formula_0. If the free energy change formula_2 is favorable, this coupling facilitates primary charge separation to produce D+-A−, creating charged species. In this way, solar energy is captured and converted to electrical energy. This process is typical of natural photosynthesis as well as modern organic photovoltaic and artificial photosynthesis solar-energy capture devices. The inverse of this process is also used to make organic light-emitting diodes (OLEDs).
Adiabatic electron-transfer is also relevant to the area of solar energy harvesting. Here, light absorption directly leads to charge separation D+-A−. Hush's theory for this process considers the donor-acceptor coupling formula_0, the energy formula_1 required to rearrange the atoms from their initial geometry to the preferred local geometry and environment polarization of the charge-separated state, and the energy change formula_2 associated with charge separation. In the weak-coupling limit ( formula_3), Hush showed that the rate of light absorption (and hence charge separation) is given from the Einstein equation by
formula_4 … (1)
This theory explained how Prussian blue absorbs light, creating the field of intervalence charge-transfer spectroscopy.
Adiabatic electron transfer is also relevant to the Robin-Day classification system, which codifies types of mixed valence compounds. An iconic system for understanding inner-sphere electron transfer is the mixed-valence Creutz-Taube ion, wherein otherwise equivalent Ru(III) and Ru(II) are linked by a pyrazine. The coupling formula_0 is not small: charge is not localized on just one chemical species but is shared quantum mechanically between the two Ru centers, presenting classically forbidden half-integral valence states. The critical requirement for this phenomenon is
formula_5 … (2)
Adiabatic electron-transfer theory stems from London's approach to charge transfer, and indeed to general chemical reactions, which Hush applied using parabolic potential-energy surfaces. Hush himself has carried out many theoretical and experimental studies of mixed-valence complexes and long-range electron transfer in biological systems. Hush's quantum-electronic adiabatic approach to electron transfer was unique; directly connecting with the Quantum Chemistry concepts of Mulliken, it forms the basis of all modern computational approaches to modeling electron transfer. Its essential feature is that electron transfer can never be regarded as an “instantaneous transition”; instead, the electron is partially transferred at all molecular geometries, with the extent of the transfer being a critical quantum descriptor of all thermal, tunneling, and spectroscopic processes. It also leads seamlessly to understanding electron-transfer transition-state spectroscopy pioneered by Zewail.
In adiabatic electron-transfer theory, the ratio formula_6 is of central importance. In the very strong coupling limit when Eqn. (2) is satisfied, intrinsically quantum molecules like the Creutz-Taube ion result. Most intervalence spectroscopy occurs in the weak-coupling limit described by Eqn. (1), however. In both natural photosynthesis and in artificial solar-energy capture devices, formula_6 is maximized by minimizing formula_1 through use of large molecules like chlorophylls, pentacenes, and conjugated polymers. The coupling formula_0 can be controlled by controlling the distance "R" at which charge transfer occurs: the coupling typically decreases exponentially with distance. When electron transfer occurs during collisions of the D and A species, the coupling is typically large and the “adiabatic” limit applies in which rate constants are given by transition state theory. In biological applications, however, as well as some organic conductors and other device materials, "R" is externally constrained and so the coupling is set at low or high values. In these situations, weak-coupling scenarios often become critical.
In the weak-coupling (“non-adiabatic”) limit, the activation energy for electron transfer is given by the expression derived independently by Kubo and Toyozawa and by Hush.
Using adiabatic electron-transfer theory in this limit, Levich and Dogonadze then determined the electron-tunneling probability to express the rate constant for thermal reactions as
formula_7. … (3)
This approach is widely applicable to long-range ground-state intramolecular electron transfer, electron transfer in biology, and electron transfer in conducting materials. It also typically controls the rate of charge separation in the excited-state photochemical application described in Figure 2 and related problems.
Marcus showed that the activation energy in Eqn. (3) reduces to formula_8 in the case of symmetric reactions with formula_9. In that work, he also derived the standard expression for the solvent contribution to the reorganization energy, making the theory more applicable to practical problems. Use of this solvation description (instead of the form that Hush originally proposed) in approaches spanning the adiabatic and non-adiabatic limits is often termed “Marcus-Hush Theory”. These and other contributions, including the widespread demonstration of the usefulness of Eqn. (3), led to the award of the 1992 Nobel Prize in Chemistry to Marcus.
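As a numerical illustration of Eqn. (3), the sketch below evaluates the non-adiabatic rate expression for a single set of parameters; the values of formula_0, formula_1, formula_2 and the temperature are arbitrary placeholders chosen for the example and are not taken from the text.

```python
# Minimal sketch of the non-adiabatic rate constant, Eqn. (3):
# k = (2*pi*V_DA^2 / (hbar*sqrt(4*pi*lambda*kB*T))) * exp(-(dG0 + lambda)^2 / (4*lambda*kB*T))
import math

HBAR = 1.054571817e-34   # J s
KB = 1.380649e-23        # J/K
EV = 1.602176634e-19     # J per eV

def nonadiabatic_rate(V_DA, lam, dG0, T=298.0):
    """All energies in joules; returns a rate constant in s^-1."""
    prefactor = 2.0 * math.pi * V_DA**2 / (HBAR * math.sqrt(4.0 * math.pi * lam * KB * T))
    activation = math.exp(-(dG0 + lam) ** 2 / (4.0 * lam * KB * T))
    return prefactor * activation

# Placeholder parameters: V_DA = 10 meV, lambda = 1 eV, Delta_G0 = -0.3 eV.
print(nonadiabatic_rate(0.010 * EV, 1.0 * EV, -0.3 * EV))   # roughly 1e10 s^-1
```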
Adiabatic electron-transfer theory is also widely applied in Molecular Electronics. In particular, this reconnects adiabatic electron-transfer theory with its roots in proton-transfer theory
and hydrogen-atom transfer, leading back to London's theory of general chemical reactions.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " V_{DA} "
},
{
"math_id": 1,
"text": " \\lambda "
},
{
"math_id": 2,
"text": " \\Delta G_0 "
},
{
"math_id": 3,
"text": " 4V_{DA}^2/\\lambda^2 \\ll 1 "
},
{
"math_id": 4,
"text": " k \\propto \\frac {V_{DA}^2 R^2}{\\lambda + \\Delta G_0}. "
},
{
"math_id": 5,
"text": " \\frac {2|J_{DA}|} {\\lambda} \\ge 1. "
},
{
"math_id": 6,
"text": " 2V_{DA}/\\lambda "
},
{
"math_id": 7,
"text": " k= \\frac{2 \\pi V_{DA}^2} {\\hbar (4\\pi \\lambda k_{\\beta} T)^{1/2} } \\exp \\frac{-(\\Delta G_0+\\lambda)^2} {4\\lambda k_{\\beta}T} "
},
{
"math_id": 8,
"text": " \\lambda/4 "
},
{
"math_id": 9,
"text": " \\Delta G_0 = 0 "
}
] |
https://en.wikipedia.org/wiki?curid=63340505
|
6334535
|
Spin-weighted spherical harmonics
|
In special functions, a topic in mathematics, spin-weighted spherical harmonics are generalizations of the standard spherical harmonics and—like the usual spherical harmonics—are functions on the sphere. Unlike ordinary spherical harmonics, the spin-weighted harmonics are U(1) gauge fields rather than scalar fields: mathematically, they take values in a complex line bundle. The spin-weighted harmonics are organized by degree l, just like ordinary spherical harmonics, but have an additional spin weight s that reflects the additional U(1) symmetry. A special basis of harmonics can be derived from the Laplace spherical harmonics "Y""lm", and are typically denoted by "s""Y""lm", where l and m are the usual parameters familiar from the standard Laplace spherical harmonics. In this special basis, the spin-weighted spherical harmonics appear as actual functions, because the choice of a polar axis fixes the U(1) gauge ambiguity. The spin-weighted spherical harmonics can be obtained from the standard spherical harmonics by application of spin raising and lowering operators. In particular, the spin-weighted spherical harmonics of spin weight "s"
= 0 are simply the standard spherical harmonics:
formula_0
Spaces of spin-weighted spherical harmonics were first identified in connection with the representation theory of the Lorentz group. They were subsequently and independently rediscovered by Newman and Penrose and applied to describe gravitational radiation, and again by Wu and Yang as so-called "monopole harmonics" in the study of Dirac monopoles.
Spin-weighted functions.
Regard the sphere "S"2 as embedded into the three-dimensional Euclidean space R3. At a point x on the sphere, a positively oriented orthonormal basis of tangent vectors at x is a pair a, b of vectors such that
formula_1
where the first pair of equations states that a and b are tangent at x, the second pair states that a and b are unit vectors, the penultimate equation that a and b are orthogonal, and the final equation that (x, a, b) is a right-handed basis of R3.
A spin-weight s function f is a function accepting as input a point x of "S"2 and a positively oriented orthonormal basis of tangent vectors at x, such that
formula_2
for every rotation angle θ.
Denote the collection of all spin-weight "s" functions by B("s"). Concretely, these are understood as functions f on C2\{0} satisfying the following homogeneity law under complex scaling
formula_3
This makes sense provided s is a half-integer.
Abstractly, B("s") is isomorphic to the smooth vector bundle underlying the antiholomorphic vector bundle of the Serre twist on the complex projective line CP1. A section of the latter bundle is a function g on C2\{0} satisfying
formula_4
Given such a g, we may produce a spin-weight s function by multiplying by a suitable power of the hermitian form
formula_5
Specifically, "f"
"P"−"s""g" is a spin-weight s function. The association of a spin-weighted function to an ordinary homogeneous function is an isomorphism.
The operator ð.
The spin weight bundles B("s") are equipped with a differential operator ð (eth). This operator is essentially the Dolbeault operator, after suitable identifications have been made,
formula_6
Thus for "f" ∈ B("s"),
formula_7
defines a function of spin-weight "s" + 1.
Spin-weighted harmonics.
Just as conventional spherical harmonics are the eigenfunctions of the Laplace-Beltrami operator on the sphere, the spin-weight s harmonics are the eigensections for the Laplace-Beltrami operator acting on the bundles of spin-weight s functions.
Representation as functions.
The spin-weighted harmonics can be represented as functions on a sphere once a point on the sphere has been selected to serve as the North pole. By definition, a function η with "spin weight s" transforms under rotation about the pole via
formula_8
Working in standard spherical coordinates, we can define a particular operator ð acting on a function η as:
formula_9
This gives us another function of θ and φ. (The operator ð is effectively a covariant derivative operator on the sphere.)
An important property of the new function ðη is that if η had spin weight s, ðη has spin weight "s" + 1. Thus, the operator raises the spin weight of a function by 1. Similarly, we can define an operator which will lower the spin weight of a function by 1:
formula_10
The spin-weighted spherical harmonics are then defined in terms of the usual spherical harmonics as:
formula_11
The functions "s""Y""lm" then have the property of transforming with spin weight s.
Other important properties include the following:
formula_12
Orthogonality and completeness.
The harmonics are orthogonal over the entire sphere:
formula_13
and satisfy the completeness relation
formula_14
Calculating.
These harmonics can be explicitly calculated by several methods. The obvious recursion relation results from repeatedly applying the raising or lowering operators. Formulae for direct calculation were derived by Goldberg et al. Note that their formulae use an old choice for the Condon–Shortley phase. The convention chosen below is in agreement with Mathematica, for instance.
The more useful of the Goldberg, et al., formulae is the following:
formula_15
A Mathematica notebook using this formula to calculate arbitrary spin-weighted spherical harmonics is available online.
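The same closed-form sum can also be transcribed directly into a short script. The sketch below is only an illustration of the formula as printed above (binomial coefficients with an out-of-range lower index are taken to vanish), not a reference implementation.

```python
# Direct evaluation of the Goldberg et al. sum for sYlm(theta, phi).
from math import factorial, sqrt, pi, sin, tan, comb
import cmath

def sYlm(s, l, m, theta, phi):
    if l < abs(s) or l < abs(m):
        return 0.0
    prefactor = (-1) ** (l + m - s) * sqrt(
        factorial(l + m) * factorial(l - m) * (2 * l + 1)
        / (4 * pi * factorial(l + s) * factorial(l - s))
    )
    half = theta / 2.0
    total = 0.0
    for r in range(l - s + 1):
        if 0 <= r + s - m <= l + s:            # otherwise the binomial coefficient vanishes
            total += ((-1) ** r * comb(l - s, r) * comb(l + s, r + s - m)
                      * (1.0 / tan(half)) ** (2 * r + s - m))
    return prefactor * sin(half) ** (2 * l) * cmath.exp(1j * m * phi) * total

# Consistency check against the explicit expression listed below:
# sYlm(1, 1, 0, 0.7, 0.0) agrees with sqrt(3/(8*pi)) * sin(0.7).
```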
With the phase convention here:
formula_16
First few spin-weighted spherical harmonics.
Analytic expressions for the first few orthonormalized spin-weighted spherical harmonics:
=== Spin-weight "s"
1, degree "l"
1 ===
formula_17
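As a quick numerical sanity check (not part of the original references), the expressions above can be integrated over the sphere to confirm that they are orthonormal; the quadrature below only checks the real part of the inner products, which suffices for these particular functions.

```python
# Verify <1Y10, 1Y10> = <1Y11, 1Y11> = 1 and <1Y10, 1Y11> = 0 by numerical integration.
import numpy as np
from scipy.integrate import dblquad

def Y10(theta, phi):   # {}_1 Y_{1 0}
    return np.sqrt(3 / (8 * np.pi)) * np.sin(theta)

def Y11(theta, phi):   # {}_1 Y_{1 1}
    return -np.sqrt(3 / (16 * np.pi)) * (1 - np.cos(theta)) * np.exp(1j * phi)

def inner(f, g):
    # integrand: Re[f * conj(g)] * sin(theta), with theta in [0, pi], phi in [0, 2*pi]
    return dblquad(lambda t, p: (f(t, p) * np.conj(g(t, p))).real * np.sin(t),
                   0, 2 * np.pi, 0, np.pi)[0]

print(inner(Y10, Y10))   # ~1.0
print(inner(Y11, Y11))   # ~1.0
print(inner(Y10, Y11))   # ~0.0
```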
Relation to Wigner rotation matrices.
formula_18
This relation allows the spin harmonics to be calculated using recursion relations for the D-matrices.
Triple integral.
The triple integral in the case that "s"1 + "s"2 + "s"3 = 0 is given in terms of the 3-j symbol:
formula_19
|
[
{
"math_id": 0,
"text": "{}_0Y_{l m} = Y_{l m}\\ ."
},
{
"math_id": 1,
"text": "\n\\begin{align}\n\\mathbf{x}\\cdot\\mathbf{a} = \\mathbf{x}\\cdot\\mathbf{b} &= 0\\\\\n\\mathbf{a}\\cdot\\mathbf{a} = \\mathbf{b}\\cdot\\mathbf{b} &= 1\\\\\n\\mathbf{a}\\cdot\\mathbf{b} &= 0\\\\\n\\mathbf{x}\\cdot (\\mathbf{a}\\times\\mathbf{b}) &> 0,\n\\end{align}\n"
},
{
"math_id": 2,
"text": "f\\bigl(\\mathbf x,(\\cos\\theta)\\mathbf{a}-(\\sin\\theta)\\mathbf{b}, (\\sin\\theta)\\mathbf{a} + (\\cos\\theta)\\mathbf{b}\\bigr) = e^{is\\theta}f(\\mathbf x,\\mathbf{a},\\mathbf{b})"
},
{
"math_id": 3,
"text": "f\\left(\\lambda z,\\overline{\\lambda}\\bar{z}\\right) = \\left(\\frac{\\overline{\\lambda}}{\\lambda}\\right)^s f\\left(z,\\bar{z}\\right)."
},
{
"math_id": 4,
"text": "g\\left(\\lambda z,\\overline{\\lambda}\\bar{z}\\right) = \\overline{\\lambda}^{2s} g\\left(z,\\bar{z}\\right)."
},
{
"math_id": 5,
"text": "P\\left(z,\\bar{z}\\right) = z\\cdot\\bar{z}."
},
{
"math_id": 6,
"text": "\\partial : \\overline{\\mathbf O(2s)} \\to \\mathcal{E}^{1,0}\\otimes \\overline{\\mathbf O(2s)} \\cong \\overline{\\mathbf O(2s)}\\otimes\\mathbf{O}(-2)."
},
{
"math_id": 7,
"text": "\\eth f \\ \\stackrel{\\text{def}}{=}\\ P^{-s+1}\\partial \\left(P^s f\\right) "
},
{
"math_id": 8,
"text": "\\eta \\rightarrow e^{i s \\psi}\\eta."
},
{
"math_id": 9,
"text": "\\eth\\eta = - \\left(\\sin{\\theta}\\right)^s \\left\\{ \\frac{\\partial}{\\partial \\theta} + \\frac{i}{\\sin{\\theta}} \\frac{\\partial} {\\partial \\phi} \\right\\} \\left[ \\left(\\sin{\\theta}\\right)^{-s} \\eta \\right]."
},
{
"math_id": 10,
"text": "\\bar\\eth\\eta = - \\left(\\sin{\\theta}\\right)^{-s} \\left\\{ \\frac{\\partial}{\\partial \\theta} - \\frac{i}{\\sin{\\theta}} \\frac{\\partial} {\\partial \\phi} \\right\\} \\left[ \\left(\\sin{\\theta}\\right)^{s} \\eta \\right]."
},
{
"math_id": 11,
"text": "\n{}_sY_{l m} = \\begin{cases}\n\\sqrt{\\frac{(l-s)!}{(l+s)!}}\\ \\eth^s Y_{l m},&& 0\\leq s \\leq l; \\\\\n\\sqrt{\\frac{(l+s)!}{(l-s)!}}\\ \\left(-1\\right)^s \\bar\\eth^{-s} Y_{l m},&& -l\\leq s \\leq 0; \\\\\n0,&&l < |s|.\\end{cases}"
},
{
"math_id": 12,
"text": "\\begin{align}\n\\eth\\left({}_sY_{l m}\\right) &= +\\sqrt{(l-s)(l+s+1)}\\, {}_{s+1}Y_{l m};\\\\\n\\bar\\eth\\left({}_sY_{l m}\\right) &= -\\sqrt{(l+s)(l-s+1)}\\, {}_{s-1}Y_{l m};\n\\end{align}"
},
{
"math_id": 13,
"text": "\\int_{S^2} {}_sY_{l m}\\, {}_s\\bar{Y}_{l'm'}\\, dS = \\delta_{ll'} \\delta_{mm'}, "
},
{
"math_id": 14,
"text": "\\sum_{l m} {}_s\\bar Y_{l m}\\left(\\theta',\\phi'\\right) {}_s Y_{l m}(\\theta,\\phi) = \\delta\\left(\\phi'-\\phi\\right)\\delta\\left(\\cos\\theta'-\\cos\\theta\\right)"
},
{
"math_id": 15,
"text": "{}_sY_{l m} (\\theta, \\phi) = \\left(-1\\right)^{l+m-s} \\sqrt{ \\frac{(l+m)! (l-m)! (2l+1)} {4\\pi (l+s)! (l-s)!} } \\sin^{2l} \\left( \\frac{\\theta}{2} \\right) e^{i m \\phi} \\times\\sum_{r=0}^{l-s} \\left(-1\\right)^{r} {l-s \\choose r} {l+s \\choose r+s-m} \\cot^{2r+s-m} \\left( \\frac{\\theta} {2} \\right)\\, ."
},
{
"math_id": 16,
"text": "\\begin{align}\n{}_s\\bar Y_{l m} &= \\left(-1\\right)^{s+m}{}_{-s}Y_{l(-m)}\\\\\n{}_sY_{l m}(\\pi-\\theta,\\phi+\\pi) &= \\left(-1\\right)^l {}_{-s}Y_{l m}(\\theta,\\phi).\n\\end{align}"
},
{
"math_id": 17,
"text": "\\begin{align}\n{}_1 Y_{10}(\\theta,\\phi) &= \\sqrt{\\frac{3}{8\\pi}}\\,\\sin\\theta \\\\\n{}_1 Y_{1\\pm 1}(\\theta,\\phi) &= -\\sqrt{\\frac{3}{16\\pi}}(1 \\mp \\cos\\theta)\\,e^{\\pm i\\phi}\n\\end{align}"
},
{
"math_id": 18,
"text": "D^l_{-m s}(\\phi,\\theta,-\\psi) =\\left(-1\\right)^m \\sqrt\\frac{4\\pi}{2l+1} {}_sY_{l m}(\\theta,\\phi) e^{is\\psi}"
},
{
"math_id": 19,
"text": "\\int_{S^2} \\,{}_{s_1} Y_{j_1 m_1}\n\\,{}_{s_2} Y_{j_2m_2}\\, {}_{s_3} Y_{j_3m_3} = \\sqrt{\\frac{\\left(2j_1+1\\right)\\left(2j_2+1\\right)\\left(2j_3+1\\right)}{4\\pi}}\n\\begin{pmatrix}\n j_1 & j_2 & j_3\\\\\n m_1 & m_2 & m_3\n\\end{pmatrix}\n\\begin{pmatrix}\n j_1 & j_2 & j_3\\\\\n -s_1 & -s_2 & -s_3\n\\end{pmatrix}\n"
}
] |
https://en.wikipedia.org/wiki?curid=6334535
|
63363213
|
Hiptmair–Xu preconditioner
|
In mathematics, Hiptmair–Xu (HX) preconditioners are preconditioners for solving formula_0 and formula_1 problems based on the auxiliary space preconditioning framework. An important ingredient in the derivation of HX preconditioners in two and three dimensions is the so-called regular decomposition, which decomposes a Sobolev space function into a component of higher regularity and a scalar or vector potential. The key to the success of HX preconditioners is the discrete version of this decomposition, which is also known as HX decomposition. The discrete decomposition decomposes a discrete Sobolev space function into a discrete component of higher regularity, a discrete scalar or vector potential, and a high-frequency component.
HX preconditioners have been used for accelerating a wide variety of solution techniques, thanks to their highly scalable parallel implementations, and are known as the AMS and ADS preconditioners. The HX preconditioner was identified by the U.S. Department of Energy as one of the top ten breakthroughs in computational science in recent years. Researchers from Sandia, Los Alamos, and Lawrence Livermore National Labs use this algorithm for modeling fusion with magnetohydrodynamic equations. Moreover, this approach will also be instrumental in developing optimal iterative methods in structural mechanics, electrodynamics, and modeling of complex flows.
HX preconditioner for formula_0.
Consider the following formula_0 problem: Find formula_2 such that
formula_3 with formula_4.
The corresponding matrix form is
formula_5
The HX preconditioner for formula_0 problem is defined as
formula_6
where formula_7 is a smoother (e.g., Jacobi smoother, Gauss–Seidel smoother), formula_8 is the canonical interpolation operator for the formula_9 space, formula_10 is the matrix representation of the discrete vector Laplacian defined on formula_11, formula_12 is the discrete gradient operator, and formula_13 is the matrix representation of the discrete scalar Laplacian defined on formula_14. Based on the auxiliary space preconditioning framework, one can show that
formula_15
where formula_16 denotes the condition number of matrix formula_17.
In practice, inverting formula_18 and formula_19 might be expensive, especially for large-scale problems. Therefore, we can replace their inverses by spectrally equivalent approximations, formula_20 and formula_21, respectively. The HX preconditioner for formula_0 then becomes
formula_22
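As an illustration of how such a preconditioner might be assembled in practice, the sketch below builds the operator from the ingredients named above. It assumes the stiffness matrix A_curl, the canonical interpolation Pi (formula_8), the discrete gradient G (formula_12), and the auxiliary Laplacians A_vgrad (formula_10) and A_grad (formula_13) are already available from a finite-element assembly; a single Jacobi sweep stands in for the smoother formula_7, and sparse LU factorizations stand in for the spectrally equivalent approximations formula_20 and formula_21, which in production codes are typically algebraic multigrid cycles.

```python
# Sketch: assemble B_curl = S_curl + Pi * B_vgrad * Pi^T + G * B_grad * G^T
# as a SciPy LinearOperator and use it as a preconditioner for CG.
import scipy.sparse.linalg as spla

def hx_curl_preconditioner(A_curl, Pi, G, A_vgrad, A_grad):
    diag = A_curl.diagonal()
    B_vgrad = spla.splu(A_vgrad.tocsc())   # stand-in for an AMG cycle on the vector Laplacian
    B_grad = spla.splu(A_grad.tocsc())     # stand-in for an AMG cycle on the scalar Laplacian

    def apply(r):
        x = r / diag                               # S_curl: one Jacobi sweep as the smoother
        x = x + Pi @ B_vgrad.solve(Pi.T @ r)       # auxiliary (vector nodal) space correction
        x = x + G @ B_grad.solve(G.T @ r)          # gradient / scalar Laplacian correction
        return x

    return spla.LinearOperator(A_curl.shape, matvec=apply)

# Usage with a Krylov solver (matrices assumed given):
# M = hx_curl_preconditioner(A_curl, Pi, G, A_vgrad, A_grad)
# u, info = spla.cg(A_curl, f, M=M)
```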
HX Preconditioner for formula_1.
Consider the following formula_1 problem: Find formula_23 such that
formula_24 with formula_4.
The corresponding matrix form is
formula_25
The HX preconditioner for formula_1 problem is defined as
formula_26
where formula_27 is a smoother (e.g., Jacobi smoother, Gauss–Seidel smoother), formula_28 is the canonical interpolation operator for the formula_1 space, formula_18 is the matrix representation of the discrete vector Laplacian defined on formula_29, and formula_30 is the discrete curl operator.
Based on the auxiliary space preconditioning framework, one can show that
formula_31
For formula_32 in the definition of formula_33, we can replace it by the HX preconditioner for the formula_0 problem, e.g., formula_34, since they are spectrally equivalent. Moreover, inverting formula_35 might be expensive, and we can replace it by a spectrally equivalent approximation formula_36. This leads to the following practical HX preconditioner for the formula_1 problem:
formula_37
Derivation.
The derivation of HX preconditioners is based on the discrete regular decompositions for formula_38 and formula_39; for completeness, we briefly recall them.
Theorem:[Discrete regular decomposition for formula_38]
Let formula_40 be a simply connected bounded domain. For any function formula_41, there exist formula_42, formula_43, and formula_44 such that
formula_45 and formula_46
Theorem:[Discrete regular decomposition for formula_47]
Let formula_40 be a simply connected bounded domain. For any function formula_48, there exist formula_49, formula_50, and formula_51 such that
formula_52
and
formula_53
Based on the above discrete regular decompositions, together with the auxiliary space preconditioning framework, we can derive the HX preconditioners for formula_54 and formula_1 problems as shown before.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "H(\\operatorname{curl})"
},
{
"math_id": 1,
"text": "H(\\operatorname{div})"
},
{
"math_id": 2,
"text": "u \\in H_h(\\operatorname{curl})"
},
{
"math_id": 3,
"text": " (\\operatorname{curl}~u, \\operatorname{curl}~v) + \\tau (u, v) = (f, v), \\quad \\forall v \\in H_h(\\operatorname{curl}), "
},
{
"math_id": 4,
"text": "\\tau > 0"
},
{
"math_id": 5,
"text": "\nA_{\\operatorname{curl}} u = f.\n"
},
{
"math_id": 6,
"text": "\nB_{\\operatorname{curl}} = S_{\\operatorname{curl}} + \\Pi_h^{\\operatorname{curl}} \\, A_{vgrad}^{-1} \\, (\\Pi_h^{\\operatorname{curl}})^T + \\operatorname{grad} \\, A_{\\operatorname{grad}}^{-1} \\, (\\operatorname{grad})^T,\n"
},
{
"math_id": 7,
"text": " S_{\\operatorname{curl}} "
},
{
"math_id": 8,
"text": " \\Pi_h^{\\operatorname{curl}} "
},
{
"math_id": 9,
"text": " H_h(\\operatorname{curl})"
},
{
"math_id": 10,
"text": " A_{vgrad}"
},
{
"math_id": 11,
"text": " [H_h(\\operatorname{grad})]^n"
},
{
"math_id": 12,
"text": "grad"
},
{
"math_id": 13,
"text": "A_{\\operatorname{grad}}"
},
{
"math_id": 14,
"text": "H_h(\\operatorname{grad})"
},
{
"math_id": 15,
"text": "\n\\kappa(B_{\\operatorname{curl}} A_{\\operatorname{curl}}) \\leq C,\n"
},
{
"math_id": 16,
"text": "\\kappa(A)"
},
{
"math_id": 17,
"text": "A"
},
{
"math_id": 18,
"text": "A_{vgrad}"
},
{
"math_id": 19,
"text": "A_{grad}"
},
{
"math_id": 20,
"text": "B_{vgrad}"
},
{
"math_id": 21,
"text": "B_{\\operatorname{grad}}"
},
{
"math_id": 22,
"text": "\nB_{\\operatorname{curl}} = S_{\\operatorname{curl}} + \\Pi_h^{\\operatorname{curl}} \\, B_{vgrad} \\, (\\Pi_h^{\\operatorname{curl}})^T + \\operatorname{grad} B_{\\operatorname{grad}} (\\operatorname{grad})^T.\n"
},
{
"math_id": 23,
"text": "u \\in H_h(\\operatorname{div})"
},
{
"math_id": 24,
"text": "(\\operatorname{div} \\,u, \\operatorname{div} \\,v) + \\tau (u, v) = (f, v), \\quad \\forall v \\in H_h(\\operatorname{div}),"
},
{
"math_id": 25,
"text": "A_{\\operatorname{div}} \\,u = f."
},
{
"math_id": 26,
"text": "B_{\\operatorname{div}} = S_{\\operatorname{div}} + \\Pi_h^{\\operatorname{div}} \\, A_{vgrad}^{-1} \\, (\\Pi_h^{\\operatorname{div}})^T + \\operatorname{curl} \\, A_{\\operatorname{curl}}^{-1} \\, (\\operatorname{curl})^T,"
},
{
"math_id": 27,
"text": "S_{\\operatorname{div}}"
},
{
"math_id": 28,
"text": "\\Pi_h^{\\operatorname{div}}"
},
{
"math_id": 29,
"text": "[H_h(\\operatorname{grad})]^n"
},
{
"math_id": 30,
"text": "\\operatorname{curl}"
},
{
"math_id": 31,
"text": "\\kappa(B_{\\operatorname{div}} A_{\\operatorname{div}}) \\leq C."
},
{
"math_id": 32,
"text": "A_{\\operatorname{curl}}^{-1}"
},
{
"math_id": 33,
"text": "B_{\\operatorname{div}}"
},
{
"math_id": 34,
"text": "B_{\\operatorname{curl}}"
},
{
"math_id": 35,
"text": "\nA_{vgrad}"
},
{
"math_id": 36,
"text": "\nB_{vgrad}"
},
{
"math_id": 37,
"text": "B_{\\operatorname{div}} = S_{\\operatorname{div}} + \\Pi_h^{\\operatorname{div}} B_{vgrad} (\\Pi_h^{\\operatorname{div}})^T + \\operatorname{curl} B_{\\operatorname{curl}} (\\operatorname{curl})^T \n = S_{\\operatorname{div}} + \\Pi_h^{\\operatorname{div}} B_{vgrad} (\\Pi_h^{\\operatorname{div}})^T + \\operatorname{curl} S_{\\operatorname{curl}} (\\operatorname{curl})^T + \\operatorname{curl} \\Pi_h^{\\operatorname{curl}} B_{vgrad} (\\Pi_h^{\\operatorname{curl}})^T (\\operatorname{curl})^T.\n"
},
{
"math_id": 38,
"text": "H_h(\\operatorname{curl})"
},
{
"math_id": 39,
"text": "H_h(\\operatorname{div})"
},
{
"math_id": 40,
"text": "\\Omega"
},
{
"math_id": 41,
"text": "v_h\\in H_h(\\operatorname{curl} \\Omega)"
},
{
"math_id": 42,
"text": "\\tilde{v}_h\\in H_h(\\operatorname{curl} \\Omega)"
},
{
"math_id": 43,
"text": "\\psi_h\\in [H_h (\\operatorname{grad} \\Omega)]^3"
},
{
"math_id": 44,
"text": "p_h\\in H_h(\\operatorname{grad} \\Omega)"
},
{
"math_id": 45,
"text": "v_h=\\tilde{v}_h+\\Pi_h^{\\operatorname{curl}}\\psi_{h}+ \\operatorname{grad} p_h"
},
{
"math_id": 46,
"text": "\\Vert h^{-1} \\tilde{v}_h\\Vert + \\Vert\\psi_h\\Vert_1 + \\Vert p_h\\Vert_1 \\lesssim \\Vert v_{h}\\Vert_{H(\\operatorname{curl})}"
},
{
"math_id": 47,
"text": "H_{h}(\\operatorname{div})"
},
{
"math_id": 48,
"text": "v_{h}\\in H_{h}(\\operatorname{div} \\Omega)"
},
{
"math_id": 49,
"text": "\n\\widetilde{v}_h\\in H_h(\\operatorname{div} \\Omega)"
},
{
"math_id": 50,
"text": "\\psi_h\\in [H_h(\\operatorname{grad} \\Omega)]^{3},"
},
{
"math_id": 51,
"text": "\nw_h\\in H_h(\\operatorname{curl} \\Omega),"
},
{
"math_id": 52,
"text": "\n\tv_{h}=\\widetilde{v}_h+\\Pi_h^{\\operatorname{div}}\\psi_h+ \\operatorname{curl} \\, w_h,\n"
},
{
"math_id": 53,
"text": "\n\t\\Vert h^{-1}\\widetilde{v}_h\\Vert + \\Vert\\psi_h\\Vert_1 + \\Vert w_h\\Vert_1 \\lesssim \\Vert v_h \\Vert_{H(\\operatorname{div})} \n"
},
{
"math_id": 54,
"text": "\nH(\\operatorname{curl})"
}
] |
https://en.wikipedia.org/wiki?curid=63363213
|
63369504
|
Lucas Lombriser
|
Lucas Lombriser (born 12 April 1982) is a Swiss National Science Foundation Professor at the Department of Theoretical Physics, University of Geneva. His research is in Theoretical Cosmology, Dark Energy, and Alternative Theories of Gravity. In 2020 and 2021 Lombriser proposed that the Hubble tension and other discrepancies between cosmological measurements imply significant evidence that we are living in a Hubble Bubble, 250 million light years in diameter, which is 20% less dense than the cosmic average and lowers the locally measured cosmic microwave background temperature below its cosmic average. Previously, in 2019, he proposed a solution to the cosmological constant problem by arguing that Newton's constant varies globally. In 2015 and 2016, Lombriser predicted the measurement of the gravitational wave speed with a neutron star merger and that this would rule out alternative theories of gravity as the cause of the late-time accelerated expansion of our Universe, a prediction that proved true with GW170817. Lombriser is a member of the Romansh-speaking minority in Switzerland.
Education and career.
Lombriser did a Master in Physics at ETH Zurich in 2008 and completed his PhD at the Institute for Theoretical Physics, University of Zurich in 2011. His thesis advisor was Uroš Seljak. Lombriser did postdoctoral research at the Institute of Cosmology and Gravitation, University of Portsmouth and the Institute of Astronomy, Royal Observatory Edinburgh, University of Edinburgh. He joined the Department of Theoretical Physics, University of Geneva in January 2018 on a Swiss National Science Foundation Professorship.
He is an Affiliate Member of the Higgs Centre for Theoretical Physics, University of Edinburgh.
Research.
Lombriser's research is in Theoretical Cosmology, Dark Energy, and Alternative Theories of Gravity. In 2010 he was part of a research group that succeeded in making the first measurement of the formula_0 quantity, a model-independent estimator for gravitational interactions at cosmological distances. In 2015 and 2016, Lombriser predicted the measurement of the gravitational wave speed with a neutron star merger and that this would rule out alternative theories of gravity as the cause of the late-time accelerated expansion of our Universe. This prediction and its implications became reality with GW170817. In 2019, he proposed the additional global variation of the General Relativistic Einstein-Hilbert action with respect to Newton's constant. This leads to a constraint equation upon Einstein's field equations which, after evaluation over the observable Universe, provides a solution to the decades-old cosmological constant problem. In March 2020 Lombriser proposed that the much-debated Hubble tension implies significant evidence that we are living in a Hubble Bubble that is 250 million light years in diameter and is 20% less dense than the cosmic average. In April 2021 his team showed that this results in a higher cosmic microwave background temperature than measured locally, which eases further cosmological tensions.
Lombriser is involved in the Euclid space telescope mission of the European Space Agency (ESA) and the Euclid Consortium. He is also involved in the ESA Laser Interferometer Space Antenna (LISA) gravitational wave observatory. He is the PI of the "Deep"Thought Project.
In June 2023, Lombriser reported an alternative way of interpreting the available scientific data which suggested that the notion of an expanding universe may be more a "mirage" than an actuality.
Media.
Lombriser has given several interviews on his research work and life in the Swiss Romansh-speaking media, including TV, radio, and newspaper articles. He has also spoken on BBC Radio Scotland. His research works from 2010, 2016, 2019, 2020, and 2021 have received broad attention by news outlets worldwide.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "E_G"
}
] |
https://en.wikipedia.org/wiki?curid=63369504
|
63371375
|
Selenium tetrabromide
|
<templatestyles src="Chembox/styles.css"/>
Chemical compound
Selenium tetrabromide is an inorganic compound with a chemical formula SeBr4.
Preparation.
Selenium tetrabromide can be produced by mixing elemental bromine and selenium:
formula_0
Properties.
Selenium tetrabromide exists in two polymorphs, the trigonal, black α-SeBr4 and the monoclinic, orange-reddish β-SeBr4, both of which feature tetrameric cubane-like Se4Br16 units but differ in how they are arranged. It dissolves in carbon disulfide, chloroform and ethyl bromide, but decomposes in water, producing selenous acid in moist air.
The compound is only stable under a bromine-saturated atmosphere; measurements of the gas density indicate that, in the gas phase, the compound decomposes into selenium monobromide and bromine.
formula_1
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rm \\ Se + 2Br_2 \\rightarrow SeBr_4"
},
{
"math_id": 1,
"text": "\\rm \\ 2SeBr_4 \\rightarrow Se_2Br_2 + 3Br_2 "
}
] |
https://en.wikipedia.org/wiki?curid=63371375
|
63376504
|
Introduction to the Theory of Error-Correcting Codes
|
Book
Introduction to the Theory of Error-Correcting Codes is a textbook on error-correcting codes, by Vera Pless. It was published in 1982 by John Wiley & Sons, with a second edition in 1989 and a third in 1998. The Basic Library List Committee of the Mathematical Association of America has rated the book as essential for inclusion in undergraduate mathematics libraries.
Topics.
This book is mainly centered around algebraic and combinatorial techniques for designing and using error-correcting linear block codes. It differs from previous works in this area in its reduction of each result to its mathematical foundations, and its clear exposition of how the results follow from these foundations.
The first two of its ten chapters present background and introductory material, including Hamming distance, decoding methods including maximum likelihood and syndromes, sphere packing and the Hamming bound, the Singleton bound, and the Gilbert–Varshamov bound, and the Hamming(7,4) code. They also include brief discussions of additional material not covered in more detail later, including information theory, convolutional codes, and burst error-correcting codes. Chapter 3 presents the BCH code over the field formula_0, and Chapter 4 develops the theory of finite fields more generally.
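To give a flavour of this introductory material, the sketch below encodes a four-bit message with the Hamming(7,4) code and corrects a single-bit error by syndrome decoding; the particular generator and parity-check matrices are one standard choice and are not necessarily those used in the book.

```python
# Hamming(7,4): encode, flip one bit, and correct it via the syndrome.
import numpy as np

P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])       # generator matrix [I_4 | P]
H = np.hstack([P.T, np.eye(3, dtype=int)])     # parity-check matrix [P^T | I_3]

def encode(msg):
    return (np.array(msg) @ G) % 2

def decode(received):
    r = np.array(received).copy()
    syndrome = (H @ r) % 2
    if syndrome.any():                         # a nonzero syndrome matches the erroneous column of H
        for pos in range(7):
            if np.array_equal(H[:, pos], syndrome):
                r[pos] ^= 1
                break
    return r[:4]                               # message bits occupy the first four positions

codeword = encode([1, 0, 1, 1])
corrupted = codeword.copy()
corrupted[5] ^= 1                              # introduce a single-bit error
print(decode(corrupted))                       # -> [1 0 1 1]
```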
Chapter 5 studies cyclic codes and Chapter 6 studies a special case of cyclic codes, the quadratic residue codes. Chapter 7 returns to BCH codes. After these discussions of specific codes, the next chapter concerns enumerator polynomials, including the MacWilliams identities, Pless's own power moment identities, and the Gleason polynomials.
The final two chapters connect this material to the theory of combinatorial designs and the design of experiments, and include material on the Assmus–Mattson theorem, the Witt design, the binary Golay codes, and the ternary Golay codes.
The second edition adds material on BCH codes, Reed–Solomon error correction, Reed–Muller codes, decoding Golay codes, and "a new, simple combinatorial proof of the MacWilliams identities".
As well as correcting some errors and adding more exercises, the third edition includes new material on connections between greedily constructed lexicographic codes and combinatorial game theory, the Griesmer bound, non-linear codes, and the Gray images of formula_1 codes.
Audience and reception.
This book is written as a textbook for advanced undergraduates; reviewer H. N. calls it "a leisurely introduction to the field which is at the same time mathematically rigorous". It includes over 250 problems, and can be read by mathematically-inclined students with only a background in linear algebra (provided in an appendix) and with no prior knowledge of coding theory.
Reviewer Ian F. Blake complained that the first edition omitted some topics necessary for engineers, including algebraic decoding, Goppa codes, Reed–Solomon error correction, and performance analysis, making this more appropriate for mathematics courses, but he suggests that it could still be used as the basis of an engineering course by replacing the last two chapters with this material, and overall he calls the book "a delightful little monograph". Reviewer John Baylis adds that "for clearly exhibiting coding theory as a showpiece of applied modern algebra I haven't seen any to beat this one".
Related reading.
Other books in this area include "The Theory of Error-Correcting Codes" (1977) by Jessie MacWilliams and Neil Sloane, and "A First Course in Coding Theory" (1988) by Raymond Hill.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "GF(2^4)"
},
{
"math_id": 1,
"text": "\\mathbb{Z}^4"
}
] |
https://en.wikipedia.org/wiki?curid=63376504
|
6339361
|
Multicomplex number
|
In mathematics, the multicomplex number systems formula_0 are defined inductively as follows: Let C0 be the real number system. For every "n" > 0 let "i""n" be a square root of −1, that is, an imaginary unit. Then formula_1. In the multicomplex number systems one also requires that formula_2 (commutativity). Then formula_3 is the complex number system, formula_4 is the bicomplex number system, formula_5 is the tricomplex number system of Corrado Segre, and formula_0 is the multicomplex number system of order "n".
Each formula_0 forms a Banach algebra. G. Baley Price has written about the function theory of multicomplex systems, providing details for the bicomplex system formula_6
The multicomplex number systems are not to be confused with "Clifford numbers" (elements of a Clifford algebra), since Clifford's square roots of −1 anti-commute (formula_7 when "m" ≠ "n" for Clifford).
Because the multicomplex numbers have several square roots of –1 that commute, they also have zero divisors: formula_8 despite formula_9 and formula_10, and formula_11 despite formula_12 and formula_13. Any product formula_14 of two distinct multicomplex units behaves as the formula_15 of the split-complex numbers, and therefore the multicomplex numbers contain a number of copies of the split-complex number plane.
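These zero divisors are easy to verify with a small, hypothetical implementation of bicomplex arithmetic; the sketch below stores an element of formula_4 as a pair of ordinary complex numbers ("a" + "b"·"i"2 with "a", "b" complex) and checks the two products explicitly.

```python
# Minimal bicomplex arithmetic: i1 = 1j inside the complex coefficients, i2 the new unit.
class Bicomplex:
    def __init__(self, a=0j, b=0j):
        self.a, self.b = complex(a), complex(b)       # value a + b*i2

    def __add__(self, o): return Bicomplex(self.a + o.a, self.b + o.b)
    def __sub__(self, o): return Bicomplex(self.a - o.a, self.b - o.b)
    def __mul__(self, o):
        # (a + b*i2)(c + d*i2) = (ac - bd) + (ad + bc)*i2, using i2^2 = -1
        return Bicomplex(self.a * o.a - self.b * o.b, self.a * o.b + self.b * o.a)
    def __repr__(self): return f"({self.a}) + ({self.b})*i2"

one = Bicomplex(1, 0)
i1 = Bicomplex(1j, 0)
i2 = Bicomplex(0, 1)

print(i1 * i2, i2 * i1)                    # the imaginary units commute
print((i1 - i2) * (i1 + i2))               # -> (0j) + (0j)*i2, a zero divisor
print((i1 * i2 - one) * (i1 * i2 + one))   # -> (0j) + (0j)*i2, another zero divisor
```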
With respect to the subalgebra formula_16, "k" = 0, 1, ..., "n" − 1, the multicomplex system formula_0 is of dimension 2^("n" − "k") over formula_17
|
[
{
"math_id": 0,
"text": "\\Complex_n"
},
{
"math_id": 1,
"text": "\\Complex_{n+1} = \\lbrace z = x + y i_{n+1} : x,y \\in \\Complex_n \\rbrace"
},
{
"math_id": 2,
"text": "i_n i_m = i_m i_n"
},
{
"math_id": 3,
"text": "\\Complex_1"
},
{
"math_id": 4,
"text": "\\Complex_2"
},
{
"math_id": 5,
"text": "\\Complex_3"
},
{
"math_id": 6,
"text": "\\Complex_2 ."
},
{
"math_id": 7,
"text": "i_n i_m + i_m i_n = 0"
},
{
"math_id": 8,
"text": "(i_n - i_m)(i_n + i_m) = i_n^2 - i_m^2 = 0"
},
{
"math_id": 9,
"text": "i_n - i_m \\neq 0"
},
{
"math_id": 10,
"text": "i_n + i_m \\neq 0"
},
{
"math_id": 11,
"text": "(i_n i_m - 1)(i_n i_m + 1) = i_n^2 i_m^2 - 1 = 0"
},
{
"math_id": 12,
"text": " i_n i_m \\neq 1"
},
{
"math_id": 13,
"text": "i_n i_m \\neq -1"
},
{
"math_id": 14,
"text": "i_n i_m"
},
{
"math_id": 15,
"text": "j"
},
{
"math_id": 16,
"text": "\\Complex_k"
},
{
"math_id": 17,
"text": "\\Complex_k ."
}
] |
https://en.wikipedia.org/wiki?curid=6339361
|
63393820
|
2 Kings 13
|
2 Kings, chapter 13
2 Kings 13 is the thirteenth chapter of the second part of the Books of Kings in the Hebrew Bible or the Second Book of Kings in the Old Testament of the Christian Bible. The book is a compilation of various annals recording the acts of the kings of Israel and Judah by a Deuteronomic compiler in the seventh century BCE, with a supplement added in the sixth century BCE. This chapter records the reigns of Jehu's son, Jehoahaz, and Jehu's grandson, Jehoash, in the kingdom of Israel during the reign of Jehoash, the king of Judah, as well as the events around the death of Elisha. The narrative is a part of a major section 2 Kings 9:1–15:12 covering the period of Jehu's dynasty.
Text.
This chapter was originally written in the Hebrew language and since the 16th century is divided into 25 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
This chapter contains an underlying typology of the Exodus and Conquest, linking also to passages in the Book of Judges with the recurring pattern: worship of idols provoking the jealousy and anger of YHWH, then Israel is delivered into the hands of foreign nations, until the people cry for help, so YHWH sends a savior to deliver them, returning them to true worship until the savior (or 'judge') dies and the cycle starts again (). This pattern is 'grounded in the foundational exodus pattern': YHWH responds to the cry of the people, remembers their covenant with him, raises Moses as a savior and delivers Israel from Egypt.
Jehoahaz, king of Israel (13:1–9).
Jehu's son Jehoahaz became the king of Israel during the long reign of Joash, the king of Judah. This was a period of a relatively long and internally stable dynasty, but one in stark contrast to problems from abroad, as Aram-Damascus became the superpower in the region, with bitter consequences for Israel (cf. ). The oppression of the Syrian kings, Hazael and his son Ben-hadad, is seen as the result of God's anger at Israel's faithlessness, more specifically 'the sins of Jeroboam' (cf. with ; ; , etc.). Like Israel at the time of the judges, Jehoahaz asked God for help and was provided a 'savior' (). However, Israel kept adhering to 'the sins of Jeroboam' and additionally worshipped Asherah in Samaria.
"In the twenty-third year of Joash the son of Ahaziah, king of Judah, Jehoahaz the son of Jehu began to reign over Israel in Samaria, and he reigned seventeen years."
Jehoash, king of Israel, and the death of Elisha (13:10-25).
The passage about Jehoahaz' son, Jehoash, the king of Israel (his name is spelt 'Joash' in , and ) is unusually structured:
The following passages are still related to Jehoash with the concluding formula repeated in . This peculiarity could be a result of the insertion of two Elisha legends (verses 14–19 and 20–21) into the narrative context using verses 12–13 and 22–25. The first legend shows Elisha acting as military support against the Arameans (cf. 2 Kings 6–7). Jehoash held the prophet Elisha in honor, and wept by his bedside while he was dying, addressing him in the words Elisha himself had used when Elijah was carried up into heaven (): "O my father, my father, the chariot of Israel and the horsemen thereof" (; 2 Kings 14). During the visit, Elisha had Jehoash perform certain prophetic tasks. The king did not know what he was doing, and was only given an explanation after the deed. The arrow shot to the east is an indication of future victory against Aram; it significantly shows ‘how far south the Arameans had advanced’ into the territory of Israel on the east bank (cf. ) and the point from where they are to be pushed back. The use of obscure sign language in the prophecies is found in other books of prophets (e.g. ; ; Jeremiah 27–28; Ezekiel 4–5; 12, amongst others). The prophecy was fulfilled with successive victories of Jehoash over the Syrians, enabling him to retake from them the towns which Hazael had captured from Israel.
The attack by a band of Moabites in the second short legend indicates that the northern kingdom was so severely weakened after Jehu's coup that not only the Arameans, but other neighboring tribes also took advantage of the situation. The hasty burial of a body in Elisha's grave (probably a burial cave) results in a resurrection, which displays Elisha's miraculous death-defying powers even beyond his own death, just as during his lifetime ().
Verses 22–25 clarify that the story fits Jehoash, not Jehoahaz, because Jehoahaz suffered lifelong pressure from Hazael and Ben-hadad (13:3), whereas Jehoash did not (cf. ; ).
"In the thirty-seventh year of Joash king of Judah, Jehoash the son of Jehoahaz began to reign over Israel in Samaria, and he reigned sixteen years."
Archeology.
The excavation at Tell al-Rimah yields a stele of Adad-nirari III which mentioned "Jehoash the Samarian" and contains the first cuneiform mention of Samaria by that name. The inscriptions of this "Tell al-Rimah Stele" may provide evidence of the existence of King Jehoash, attest to the weakening of Syrian kingdom (cf. ), and show the vassal status of the northern kingdom of Israel to the Assyrians.
A postulated image of Jehoash is reconstructed from plaster remains recovered at Kuntillet Ajrud. The ruins were from a temple built by the northern Israel kingdom when Jehoash of Israel gained control over the kingdom of Judah during the reign of Amaziah of Judah.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=63393820
|
63397
|
Mean time between failures
|
Predicted elapsed time between inherent failures of a system during operation
Mean time between failures (MTBF) is the predicted elapsed time between inherent failures of a mechanical or electronic system during normal system operation. MTBF can be calculated as the arithmetic mean (average) time between failures of a system. The term is used for repairable systems while mean time to failure (MTTF) denotes the expected time to failure for a non-repairable system.
The definition of MTBF depends on the definition of what is considered a failure. For complex, repairable systems, failures are considered to be those out of design conditions which place the system out of service and into a state for repair. Failures which occur that can be left or maintained in an unrepaired condition, and do not place the system out of service, are not considered failures under this definition. In addition, units that are taken down for routine scheduled maintenance or inventory control are not considered within the definition of failure. The higher the MTBF, the longer a system is likely to work before failing.
Overview.
Mean time between failures (MTBF) describes the expected time between two failures for a repairable system. For example, three identical systems starting to function properly at time 0 are working until all of them fail. The first system fails after 100 hours, the second after 120 hours and the third after 130 hours. The MTBF of the systems is the average of the three failure times, which is 116.667 hours. If the systems were non-repairable, then their MTTF would be 116.667 hours.
In general, MTBF is the "up-time" between two failure states of a repairable system during operation as outlined here:
For each observation, the "down time" is the instantaneous time it went down, which is after (i.e. greater than) the moment it went up, the "up time". The difference ("down time" minus "up time") is the amount of time it was operating between these two events.
By referring to the figure above, the MTBF of a component is the sum of the lengths of the operational periods divided by the number of observed failures:
formula_0
In a similar manner, mean down time (MDT) can be defined as
formula_1
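A minimal sketch of these two definitions is given below; the up/down timestamps are invented purely for illustration.

```python
# MTBF: total operating ("up") time divided by the number of observed failures.
# MDT: total observed down time divided by the number of completed repairs.
def mtbf_and_mdt(up_times, down_times):
    """up_times[i] is when the system came (back) up, down_times[i] when it next failed."""
    operating = sum(d - u for u, d in zip(up_times, down_times))
    repaired = sum(u - d for d, u in zip(down_times, up_times[1:]))   # down periods that ended
    return operating / len(down_times), repaired / (len(up_times) - 1)

up_times = [0, 105, 128]       # hours at which the system (re)started
down_times = [100, 120, 145]   # hours at which it failed
print(mtbf_and_mdt(up_times, down_times))   # -> (44.0, 6.5)
```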
Mathematical description.
The MTBF is the expected value of the random variable formula_2 indicating the time until failure. Thus, it can be written as
formula_3
where formula_4 is the probability density function of formula_2. Equivalently, the MTBF can be expressed in terms of the reliability function formula_5 as
formula_6.
The MTBF and formula_2 have units of time (e.g., hours).
Any practically-relevant calculation of the MTBF assumes that the system is working within its "useful life period", which is characterized by a relatively constant failure rate (the middle part of the "bathtub curve") when only random failures are occurring. In other words, it is assumed that the system has survived initial setup stresses and has not yet approached its expected end of life, both of which often increase the failure rate.
Assuming a constant failure rate formula_7 implies that formula_2 has an exponential distribution with parameter formula_7. Since the MTBF is the expected value of formula_2, it is given by the reciprocal of the failure rate of the system,
formula_8.
Once the MTBF of a system is known, and assuming a constant failure rate, the probability that any one particular system will be operational for a given duration can be inferred from the reliability function of the exponential distribution, formula_9. In particular, the probability that a particular system will survive to its MTBF is formula_10, or about 37% (i.e., it will fail earlier with probability 63%).
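The sketch below gives a small numerical check of these statements under the constant-failure-rate assumption, using the MTBF of the three-system example above.

```python
# Reliability function R(t) = exp(-t / MTBF) for a constant failure rate.
import math

def reliability(t, mtbf):
    return math.exp(-t / mtbf)

mtbf = 116.667                      # hours, as in the example above
print(reliability(mtbf, mtbf))      # ~0.368: about a 37% chance of surviving to t = MTBF
print(1 - reliability(50, mtbf))    # probability of failing within the first 50 hours
```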
Application.
The MTBF value can be used as a system reliability parameter or to compare different systems or designs. This value should only be understood conditionally as the “mean lifetime” (an average value), and not as a quantitative identity between working and failed units.
Since MTBF can be expressed as “average life (expectancy)”, many engineers assume that 50% of items will have failed by time "t" = MTBF. This inaccuracy can lead to bad design decisions. Furthermore, probabilistic failure prediction based on MTBF implies the total absence of systematic failures (i.e., a constant failure rate with only intrinsic, random failures), which is not easy to verify. Assuming no systematic errors, the probability that the system survives for a duration T is calculated as exp(-T/MTBF). Hence the probability that a system fails during a duration T is given by 1 - exp(-T/MTBF).
MTBF value prediction is an important element in the development of products. Reliability engineers and design engineers often use reliability software to calculate a product's MTBF according to various methods and standards (MIL-HDBK-217F, Telcordia SR332, Siemens SN 29500, FIDES, UTE 80-810 (RDF2000), etc.). The Mil-HDBK-217 reliability calculator manual in combination with RelCalc software (or other comparable tool) enables MTBF reliability rates to be predicted based on design.
A concept which is closely related to MTBF, and is important in the computations involving MTBF, is the mean down time (MDT). MDT can be defined as mean time which the system is down after the failure. Usually, MDT is considered different from MTTR (Mean Time To Repair); in particular, MDT usually includes organizational and logistical factors (such as business days or waiting for components to arrive) while MTTR is usually understood as more narrow and more technical.
Application of MTBF in manufacturing.
MTBF serves as a crucial metric for managing machinery and equipment reliability. Its application is particularly significant in the context of total productive maintenance (TPM), a comprehensive maintenance strategy aimed at maximizing equipment effectiveness. MTBF provides a quantitative measure of the time elapsed between failures of a system during normal operation, offering insights into the reliability and performance of manufacturing equipment.
By integrating MTBF with TPM principles, manufacturers can achieve a more proactive maintenance approach. This synergy allows for the identification of patterns and potential failures before they occur, enabling preventive maintenance and reducing unplanned downtime. As a result, MTBF becomes a key performance indicator (KPI) within TPM, guiding decisions on maintenance schedules, spare parts inventory, and ultimately, optimizing the lifespan and efficiency of machinery. This strategic use of MTBF within TPM frameworks enhances overall production efficiency, reduces costs associated with breakdowns, and contributes to the continuous improvement of manufacturing processes.
MTBF and MDT for networks of components.
Two components formula_11 (for instance hard drives, servers, etc.) may be arranged in a network, in "series" or in "parallel". The terminology is here used by close analogy to electrical circuits, but has a slightly different meaning. We say that the two components are in series if the failure of "either" causes the failure of the network, and that they are in parallel if only the failure of "both" causes the network to fail. The MTBF of the resulting two-component network with repairable components can be computed according to the following formulae, assuming that the MTBF of both individual components is known:
formula_12
where formula_13 is the network in which the components are arranged in series.
For the network containing parallel repairable components, finding the MTBF of the whole system requires, in addition to the component MTBFs, their respective MDTs. Then, assuming that the MDTs are negligible compared to the MTBFs (which usually holds in practice), the MTBF of a parallel system consisting of two repairable components can be written as follows:
formula_14
where formula_15 is the network in which the components are arranged in parallel, and formula_16 is the probability of failure of component formula_17 during "vulnerability window" formula_18.
Intuitively, both these formulae can be explained from the point of view of failure probabilities. First of all, note that the probability of a system failing during a given (short) interval is approximately proportional to the reciprocal of its MTBF. Then, when considering components in series, failure of any component leads to the failure of the whole system, so (assuming that failure probabilities are small, which is usually the case) the probability of the whole system failing within a given interval can be approximated as the sum of the failure probabilities of the components. With parallel components the situation is a bit more complicated: the whole system fails only if, after one of the components fails, the other component fails while the first is being repaired; this is where MDT comes into play: the faster the first component is repaired, the shorter the "vulnerability window" during which a failure of the other component would bring the system down.
Using similar logic, the MDT of a system of two serial components can be calculated as:
formula_19
and the MDT of a system of two parallel components can be calculated as:
formula_20
Through successive application of these four formulae, the MTBF and MDT of any network of repairable components can be computed, provided that the MTBF and MDT are known for each component. In the special but important case of several serial components, the MTBF calculation can easily be generalised into
formula_21
which can be shown by induction, and likewise
formula_22
since the formula for the MDT of two components in parallel is identical to the formula for the MTBF of two components in series.
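The four formulae above translate directly into code. The following sketch (the component MTBF and MDT values are assumed, purely for illustration) computes the MTBF and MDT of two repairable components arranged in series and in parallel; like the derivation above, it assumes that the MDTs are small compared to the MTBFs.

def mtbf_series(m1, m2):
    # mtbf(c1 ; c2) = m1*m2 / (m1 + m2)
    return (m1 * m2) / (m1 + m2)

def mdt_series(m1, d1, m2, d2):
    # mdt(c1 ; c2) = (m1*d2 + m2*d1) / (m1 + m2)
    return (m1 * d2 + m2 * d1) / (m1 + m2)

def mtbf_parallel(m1, d1, m2, d2):
    # mtbf(c1 || c2) = m1*m2 / (d1 + d2), valid when MDTs are much smaller than MTBFs
    return (m1 * m2) / (d1 + d2)

def mdt_parallel(d1, d2):
    # mdt(c1 || c2) = d1*d2 / (d1 + d2)
    return (d1 * d2) / (d1 + d2)

# Assumed example values, in hours
m1, d1 = 10_000.0, 8.0
m2, d2 = 20_000.0, 12.0

print(mtbf_series(m1, m2), mdt_series(m1, d1, m2, d2))      # series network fails more often
print(mtbf_parallel(m1, d1, m2, d2), mdt_parallel(d1, d2))  # parallel network fails far less often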
Variations of MTBF.
There are many variations of MTBF, such as "mean time between system aborts" (MTBSA), "mean time between critical failures" (MTBCF) or "mean time between unscheduled removal" (MTBUR). Such nomenclature is used when it is desirable to differentiate among types of failures, such as critical and non-critical failures. For example, in an automobile, the failure of the FM radio does not prevent the primary operation of the vehicle.
It is recommended to use "Mean time to failure" (MTTF) instead of MTBF in cases where a system is replaced after a failure ("non-repairable system"), since MTBF denotes time between failures in a system which can be repaired.
MTTFd is an extension of MTTF, and is only concerned about failures which would result in a dangerous condition. It can be calculated as follows:
formula_23
where "B"10 is the number of operations that a device will operate prior to 10% of a sample of those devices would fail and "n"op is number of operations. "B"10d is the same calculation, but where 10% of the sample would fail to danger. "n"op is the number of operations/cycle in one year.
MTBF considering censoring.
An MTBF computed only from observed failures, while some systems are still operating, underestimates the true MTBF because it omits the partial lifetimes of the systems that have not yet failed. For such systems, all we know is that the time to failure exceeds the time they have been running; this is called censoring. With a parametric model of the lifetime, the likelihood for the experience on any given day is as follows:
formula_24,
where
formula_25 is the failure time for failures and the censoring time for units that have not yet failed,
formula_26 = 1 for failures and 0 for censoring times,
formula_27 = the probability that the lifetime exceeds formula_25, called the survival function, and
formula_28 is called the hazard function, the instantaneous force of mortality (where formula_29 = the probability density function of the distribution).
For the exponential distribution, the hazard formula_7 is constant. In this case, the MTBF is
MTBF = formula_30,
where formula_31 is the maximum likelihood estimate of formula_7 (the value maximizing the likelihood given above) and formula_32 is the number of uncensored observations (i.e., observed failures).
We see that the difference between the MTBF considering only failures and the MTBF including censored observations is that the censoring times add to the numerator but not the denominator in computing the MTBF.
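A minimal sketch (with made-up observation data) of the censored estimate under the constant-hazard model: the total running time of all units, failed or not, is divided by the number of observed failures.

# Each tuple is (time, failed): the failure time for failed units, or the censoring
# time (how long the unit has run without failing) otherwise.  Assumed example data.
observations = [(1200.0, True), (3400.0, True), (800.0, False), (5000.0, False), (2600.0, True)]

total_time = sum(t for t, _ in observations)                 # numerator includes censored times
k = sum(1 for _, failed in observations if failed)           # number of uncensored observations

mtbf_censored = total_time / k
mtbf_failures_only = sum(t for t, failed in observations if failed) / k   # ignores censored units

print(mtbf_censored, mtbf_failures_only)   # the failures-only figure is the smaller of the two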
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\n\\text{MTBF} = \\frac{\\sum{(\\text{start of downtime} - \\text{start of uptime})}}{\\text{number of failures}}.\n"
},
{
"math_id": 1,
"text": "\n\\text{MDT} = \\frac{\\sum{(\\text{start of uptime} - \\text{start of downtime})}}{\\text{number of failures}}.\n"
},
{
"math_id": 2,
"text": "T"
},
{
"math_id": 3,
"text": "\\text{MTBF} = \\mathbb{E}\\{T\\} = \\int_0^\\infty tf_T(t)\\, dt"
},
{
"math_id": 4,
"text": "f_T(t)"
},
{
"math_id": 5,
"text": "R_T(t)"
},
{
"math_id": 6,
"text": "\\text{MTBF} = \\int_0^\\infty R(t)\\, dt "
},
{
"math_id": 7,
"text": "\\lambda"
},
{
"math_id": 8,
"text": "\\text{MTBF} = \\frac{1}{\\lambda}"
},
{
"math_id": 9,
"text": "R_T(t) = e^{-\\lambda t}"
},
{
"math_id": 10,
"text": "1/e"
},
{
"math_id": 11,
"text": "c_1,c_2"
},
{
"math_id": 12,
"text": "\\text{mtbf}(c_1 ; c_2) = \\frac{1}{\\frac{1}{\\text{mtbf}(c_1)} + \\frac{1}{\\text{mtbf}(c_2)}} = \\frac{\\text{mtbf}(c_1)\\times \\text{mtbf}(c_2)} {\\text{mtbf}(c_1) + \\text{mtbf}(c_2)}\\;,"
},
{
"math_id": 13,
"text": "c_1 ; c_2"
},
{
"math_id": 14,
"text": "\n\n\\begin{align}\\text{mtbf}(c_1 \\parallel c_2) &= \\frac{1}{\\frac{1}{\\text{mtbf}(c_1)}\\times\\text{PF}(c_2,\\text{mdt}(c_1))+\\frac{1}{\\text{mtbf}(c_2)}\\times\\text{PF}(c_1,\\text{mdt}(c_2))} \n\\\\[1em]&= \\frac{1}{\\frac{1}{\\text{mtbf}(c_1)}\\times\\frac{\\text{mdt}(c_1)}{\\text{mtbf}(c_2)}+\\frac{1}{\\text{mtbf}(c_2)}\\times\\frac{\\text{mdt}(c_2)}{\\text{mtbf}(c_1)}} \n\\\\[1em]&= \\frac{\\text{mtbf}(c_1)\\times \\text{mtbf}(c_2)} {\\text{mdt}(c_1) + \\text{mdt}(c_2)}\\;,\n\n\\end{align}\n"
},
{
"math_id": 15,
"text": "c_1 \\parallel c_2"
},
{
"math_id": 16,
"text": "PF(c,t)"
},
{
"math_id": 17,
"text": "c"
},
{
"math_id": 18,
"text": "t"
},
{
"math_id": 19,
"text": "\\text{mdt}(c_1 ; c_2) = \\frac{\\text{mtbf}(c_1)\\times \\text{mdt}(c_2) + \\text{mtbf}(c_2)\\times \\text{mdt}(c_1)} {\\text{mtbf}(c_1) + \\text{mtbf}(c_2)}\\;,"
},
{
"math_id": 20,
"text": "\\text{mdt}(c_1 \\parallel c_2) = \\frac{\\text{mdt}(c_1)\\times \\text{mdt}(c_2)} {\\text{mdt}(c_1) + \\text{mdt}(c_2)}\\;."
},
{
"math_id": 21,
"text": "\\text{mtbf}(c_1;\\dots; c_n) = \\left(\\sum_{k=1}^n \\frac 1{\\text{mtbf}(c_k)}\\right)^{-1}\\;,"
},
{
"math_id": 22,
"text": "\\text{mdt}(c_1\\parallel\\dots\\parallel c_n) = \\left(\\sum_{k=1}^n \\frac 1{\\text{mdt}(c_k)}\\right)^{-1}\\;,"
},
{
"math_id": 23,
"text": "\n\\begin{align}\n\\text{MTTF} & \\approx \\frac{B_{10}}{0.1n_\\text{onm}}, \\\\[8pt]\n\\text{MTTFd} & \\approx \\frac{B_{10d}}{0.1n_\\text{op}},\n\\end{align}\n"
},
{
"math_id": 24,
"text": "L = \\prod_i \\lambda(u_i)^{\\delta_i} S(u_i)"
},
{
"math_id": 25,
"text": "u_i"
},
{
"math_id": 26,
"text": "\\delta_i"
},
{
"math_id": 27,
"text": "S(u_i)"
},
{
"math_id": 28,
"text": "\\lambda(u_i) = f(u)/S(u)"
},
{
"math_id": 29,
"text": "f(u)"
},
{
"math_id": 30,
"text": "1 / \\hat\\lambda = \\sum u_i / k"
},
{
"math_id": 31,
"text": "\\hat\\lambda"
},
{
"math_id": 32,
"text": "k = \\sum \\sigma_i"
}
] |
https://en.wikipedia.org/wiki?curid=63397
|
634
|
Analysis of variance
|
Collection of statistical models
Analysis of variance (ANOVA) is a collection of statistical models and their associated estimation procedures (such as the "variation" among and between groups) used to analyze the differences among means. ANOVA was developed by the statistician Ronald Fisher. ANOVA is based on the law of total variance, where the observed variance in a particular variable is partitioned into components attributable to different sources of variation. In its simplest form, ANOVA provides a statistical test of whether two or more population means are equal, and therefore generalizes the "t"-test beyond two means. In other words, the ANOVA is used to test the difference between two or more means.
<templatestyles src="Template:TOC limit/styles.css" />
History.
While the analysis of variance reached fruition in the 20th century, antecedents extend centuries into the past according to Stigler. These include hypothesis testing, the partitioning of sums of squares, experimental techniques and the additive model. Laplace was performing hypothesis testing in the 1770s. Around 1800, Laplace and Gauss developed the least-squares method for combining observations, which improved upon methods then used in astronomy and geodesy. It also initiated much study of the contributions to sums of squares. Laplace knew how to estimate a variance from a residual (rather than a total) sum of squares. By 1827, Laplace was using least squares methods to address ANOVA problems regarding measurements of atmospheric tides. Before 1800, astronomers had isolated observational errors resulting
from reaction times (the "personal equation") and had developed methods of reducing the errors. The experimental methods used in the study of the personal equation were later accepted by the emerging field of psychology which developed strong (full factorial) experimental methods to which randomization and blinding were soon added. An eloquent non-mathematical explanation of the additive effects model was available in 1885.
Ronald Fisher introduced the term variance and proposed its formal analysis in a 1918 article on theoretical population genetics, "The Correlation Between Relatives on the Supposition of Mendelian Inheritance". His first application of the analysis of variance to data analysis was published in 1921, "Studies in Crop Variation I". This divided the variation of a time series into components representing annual causes and slow deterioration. Fisher's next piece, "Studies in Crop Variation II", written with Winifred Mackenzie and published in 1923, studied the variation in yield across plots sown with different varieties and subjected to different fertiliser treatments. Analysis of variance became widely known after being included in Fisher's 1925 book "Statistical Methods for Research Workers".
Randomization models were developed by several researchers. The first was published in Polish by Jerzy Neyman in 1923.
Example.
The analysis of variance can be used to describe otherwise complex relations among variables. A dog show provides an example. A dog show is not a random sampling of the breed: it is typically limited to dogs that are adult, pure-bred, and exemplary. A histogram of dog weights from a show is likely to be rather complicated, like the yellow-orange distribution shown in the illustrations. Suppose we wanted to predict the weight of a dog based on a certain set of characteristics of each dog. One way to do that is to "explain" the distribution of weights by dividing the dog population into groups based on those characteristics. A successful grouping will split dogs such that (a) each group has a low variance of dog weights (meaning the group is relatively homogeneous) and (b) the mean of each group is distinct (if two groups have the same mean, then it isn't reasonable to conclude that the groups are, in fact, separate in any meaningful way).
In the illustrations to the right, groups are identified as "X"1, "X"2, etc. In the first illustration, the dogs are divided according to the product (interaction) of two binary groupings: young vs old, and short-haired vs long-haired (e.g., group 1 is young, short-haired dogs, group 2 is young, long-haired dogs, etc.). Since the distributions of dog weight within each of the groups (shown in blue) has a relatively large variance, and since the means are very similar across groups, grouping dogs by these characteristics does not produce an effective way to explain the variation in dog weights: knowing which group a dog is in doesn't allow us to predict its weight much better than simply knowing the dog is in a dog show. Thus, this grouping fails to explain the variation in the overall distribution (yellow-orange).
An attempt to explain the weight distribution by grouping dogs as "pet vs working breed" and "less athletic vs more athletic" would probably be somewhat more successful (fair fit). The heaviest show dogs are likely to be big, strong, working breeds, while breeds kept as pets tend to be smaller and thus lighter. As shown by the second illustration, the distributions have variances that are considerably smaller than in the first case, and the means are more distinguishable. However, the significant overlap of distributions, for example, means that we cannot distinguish "X"1 and "X"2 reliably. Grouping dogs according to a coin flip might produce distributions that look similar.
An attempt to explain weight by breed is likely to produce a very good fit. All Chihuahuas are light and all St Bernards are heavy. The difference in weights between Setters and Pointers does not justify separate breeds. The analysis of variance provides the formal tools to justify these intuitive judgments. A common use of the method is the analysis of experimental data or the development of models. The method has some advantages over correlation: not all of the data must be numeric and one result of the method is a judgment in the confidence in an explanatory relationship.
Classes of models.
There are three classes of models used in the analysis of variance, and these are outlined here.
Fixed-effects models.
The fixed-effects model (class I) of analysis of variance applies to situations in which the experimenter applies one or more treatments to the subjects of the experiment to see whether the response variable values change. This allows the experimenter to estimate the ranges of response variable values that the treatment would generate in the population as a whole.
Random-effects models.
Random-effects model (class II) is used when the treatments are not fixed. This occurs when the various factor levels are sampled from a larger population. Because the levels themselves are random variables, some assumptions and the method of contrasting the treatments (a multi-variable generalization of simple differences) differ from the fixed-effects model.
Mixed-effects models.
A mixed-effects model (class III) contains experimental factors of both fixed and random-effects types, with appropriately different interpretations and analysis for the two types.
Example.
Teaching experiments could be performed by a college or university department to find a good introductory textbook, with each text considered a treatment. The fixed-effects model would compare a list of candidate texts. The random-effects model would determine whether important differences exist among a list of randomly selected texts. The mixed-effects model would compare the (fixed) incumbent texts to randomly selected alternatives.
Defining fixed and random effects has proven elusive, with multiple competing definitions.
Assumptions.
The analysis of variance has been studied from several approaches, the most common of which uses a linear model that relates the response to the treatments and blocks. Note that the model is linear in parameters but may be nonlinear across factor levels. Interpretation is easy when data is balanced across factors but much deeper understanding is needed for unbalanced data.
Textbook analysis using a normal distribution.
The analysis of variance can be presented in terms of a linear model, which makes the following assumptions about the probability distribution of the responses: independence of observations; normality, i.e. the distributions of the residuals are normal; and equality (homoscedasticity) of variances, i.e. the variance of the data in the groups should be the same.
The separate assumptions of the textbook model imply that the errors are independently, identically, and normally distributed for fixed effects models, that is, that the errors (formula_0) are independent and
formula_1
Randomization-based analysis.
In a randomized controlled experiment, the treatments are randomly assigned to experimental units, following the experimental protocol. This randomization is objective and declared before the experiment is carried out. The objective random-assignment is used to test the significance of the null hypothesis, following the ideas of C. S. Peirce and Ronald Fisher. This design-based analysis was discussed and developed by Francis J. Anscombe at Rothamsted Experimental Station and by Oscar Kempthorne at Iowa State University. Kempthorne and his students make an assumption of "unit treatment additivity", which is discussed in the books of Kempthorne and David R. Cox.
Unit-treatment additivity.
In its simplest form, the assumption of unit-treatment additivity states that the observed response formula_2 from experimental unit formula_3 when receiving treatment formula_4 can be written as the sum of the unit's response formula_5 and the treatment-effect formula_6, that is
formula_7
The assumption of unit-treatment additivity implies that, for every treatment formula_4, the formula_4th treatment has exactly the same effect formula_8 on every experiment unit.
The assumption of unit treatment additivity usually cannot be directly falsified, according to Cox and Kempthorne. However, many "consequences" of treatment-unit additivity can be falsified. For a randomized experiment, the assumption of unit-treatment additivity "implies" that the variance is constant for all treatments. Therefore, by contraposition, a necessary condition for unit-treatment additivity is that the variance is constant.
The use of unit treatment additivity and randomization is similar to the design-based inference that is standard in finite-population survey sampling.
Derived linear model.
Kempthorne uses the randomization-distribution and the assumption of "unit treatment additivity" to produce a "derived linear model", very similar to the textbook model discussed previously. The test statistics of this derived linear model are closely approximated by the test statistics of an appropriate normal linear model, according to approximation theorems and simulation studies. However, there are differences. For example, the randomization-based analysis results in a small but (strictly) negative correlation between the observations. In the randomization-based analysis, there is "no assumption" of a "normal" distribution and certainly "no assumption" of "independence". On the contrary, "the observations are dependent"!
The randomization-based analysis has the disadvantage that its exposition involves tedious algebra and extensive time. Since the randomization-based analysis is complicated and is closely approximated by the approach using a normal linear model, most teachers emphasize the normal linear model approach. Few statisticians object to model-based analysis of balanced randomized experiments.
Statistical models for observational data.
However, when applied to data from non-randomized experiments or observational studies, model-based analysis lacks the warrant of randomization. For observational data, the derivation of confidence intervals must use "subjective" models, as emphasized by Ronald Fisher and his followers. In practice, estimates of treatment effects from observational studies are often inconsistent. In practice, "statistical models" and observational data are useful for suggesting hypotheses that should be treated very cautiously by the public.
Summary of assumptions.
The normal-model based ANOVA analysis assumes the independence, normality, and homogeneity of variances of the residuals. The randomization-based analysis assumes only the homogeneity of the variances of the residuals (as a consequence of unit-treatment additivity) and uses the randomization procedure of the experiment. Both these analyses require homoscedasticity, as an assumption for the normal-model analysis and as a consequence of randomization and additivity for the randomization-based analysis.
However, studies of processes that change variances rather than means (called dispersion effects) have been successfully conducted using ANOVA. There are "no" necessary assumptions for ANOVA in its full generality, but the "F"-test used for ANOVA hypothesis testing has assumptions and practical
limitations which are of continuing interest.
Problems which do not satisfy the assumptions of ANOVA can often be transformed to satisfy the assumptions.
The property of unit-treatment additivity is not invariant under a "change of scale", so statisticians often use transformations to achieve unit-treatment additivity. If the response variable is expected to follow a parametric family of probability distributions, then the statistician may specify (in the protocol for the experiment or observational study) that the responses be transformed to stabilize the variance. Also, a statistician may specify that logarithmic transforms be applied to the responses which are believed to follow a multiplicative model.
According to Cauchy's functional equation theorem, the logarithm is the only continuous transformation that transforms real multiplication to addition.
Characteristics.
ANOVA is used in the analysis of comparative experiments, those in which only the difference in outcomes is of interest. The statistical significance of the experiment is determined by a ratio of two variances. This ratio is independent of several possible alterations to the experimental observations: Adding a constant to all observations does not alter significance. Multiplying all observations by a constant does not alter significance. So ANOVA statistical significance result is independent of constant bias and scaling errors as well as the units used in expressing observations. In the era of mechanical calculation it was common to subtract a constant from all observations (when equivalent to dropping leading digits) to simplify data entry. This is an example of data coding.
Algorithm.
The calculations of ANOVA can be characterized as computing a number of means and variances, dividing two variances and comparing the ratio to a handbook value to determine statistical significance. Calculating a treatment effect is then trivial: "the effect of any treatment is estimated by taking the difference between the mean of the observations which receive the treatment and the general mean".
Partitioning of the sum of squares.
ANOVA uses traditional standardized terminology. The definitional equation of sample variance is formula_9, where the divisor is called the degrees of freedom (DF), the summation is called
the sum of squares (SS), the result is called the mean square (MS) and the squared terms are deviations from the sample mean. ANOVA estimates 3 sample variances: a total variance based on all the observation deviations from the grand mean, an error variance based on all the observation deviations from their appropriate treatment means, and a treatment variance. The treatment variance is based on the deviations of treatment means from the grand mean, the result being multiplied by the number of observations in each treatment to account for the difference between the variance of observations and the variance of means.
The fundamental technique is a partitioning of the total sum of squares "SS" into components related to the effects used in the model. For example, the model for a simplified ANOVA with one type of treatment at different levels is:
formula_10
The number of degrees of freedom "DF" can be partitioned in a similar way: one of these components (that for error) specifies a chi-squared distribution which describes the associated sum of squares, while the same is true for "treatments" if there is no treatment effect.
formula_11
The "F"-test.
The "F"-test is used for comparing the factors of the total deviation. For example, in one-way, or single-factor ANOVA, statistical significance is tested for by comparing the F test statistic
formula_12
formula_13
where "MS" is mean square, formula_14 is the number of treatments and formula_15 is the total number of cases
to the "F"-distribution with formula_16 being the numerator degrees of freedom and formula_17 the denominator degrees of freedom. Using the "F"-distribution is a natural candidate because the test statistic is the ratio of two scaled sums of squares each of which follows a scaled chi-squared distribution.
The expected value of F is formula_18 (where formula_19 is the treatment sample size) which is 1 for no treatment effect. As values of F increase above 1, the evidence is increasingly inconsistent with the null hypothesis. Two apparent experimental methods of increasing F are increasing the sample size and reducing the error variance by tight experimental controls.
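The following Python sketch (with made-up data for three treatment groups) computes the one-way partition of the sums of squares and the resulting "F" statistic by hand, and checks the result against SciPy's implementation of the same test:

import numpy as np
from scipy import stats

# Assumed example data: three treatment groups of four observations each
groups = [np.array([6.2, 5.9, 6.8, 6.1]),
          np.array([7.4, 7.1, 6.9, 7.6]),
          np.array([5.1, 5.5, 4.9, 5.3])]

all_obs = np.concatenate(groups)
grand_mean = all_obs.mean()
I, n_T = len(groups), all_obs.size            # number of treatments, total number of cases

ss_treatments = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)

ms_treatments = ss_treatments / (I - 1)
ms_error = ss_error / (n_T - I)
F = ms_treatments / ms_error
p = stats.f.sf(F, I - 1, n_T - I)             # upper-tail probability of the F-distribution

print(F, p)
print(stats.f_oneway(*groups))                # should agree with the manual computation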
There are two methods of concluding the ANOVA hypothesis test, both of which produce the same result: in the textbook method, the observed value of "F" is compared with the critical value of "F" determined from tables, and the null hypothesis is rejected if the observed value exceeds the critical value; in the computer method, the probability ("p"-value) of a value of "F" greater than or equal to the observed value is calculated, and the null hypothesis is rejected if this probability is less than the significance level (α).
The ANOVA "F"-test is known to be nearly optimal in the sense of minimizing false negative errors for a fixed rate of false positive errors (i.e. maximizing power for a fixed significance level). For example, to test the hypothesis that various medical treatments have exactly the same effect, the "F"-test's "p"-values closely approximate the permutation test's p-values: The approximation is particularly close when the design is balanced. Such permutation tests characterize tests with maximum power against all alternative hypotheses, as observed by Rosenbaum. The ANOVA "F"-test (of the null-hypothesis that all treatments have exactly the same effect) is recommended as a practical test, because of its robustness against many alternative distributions.
Extended algorithm.
ANOVA consists of separable parts; partitioning sources of variance and hypothesis testing can be used individually. ANOVA is used to support other statistical tools. Regression is first used to fit more complex models to data, then ANOVA is used to compare models with the objective of selecting simple(r) models that adequately describe the data. "Such models could be fit without any reference to ANOVA, but ANOVA tools could then be used to make some sense of the fitted models, and to test hypotheses about batches of coefficients." "[W]e think of the analysis of variance as a way of understanding and structuring multilevel models—not as an alternative to regression but as a tool for summarizing complex high-dimensional inferences ..."
For a single factor.
The simplest experiment suitable for ANOVA analysis is the completely randomized experiment with a single factor. More complex experiments with a single factor involve constraints on randomization and include completely randomized blocks and Latin squares (and variants: Graeco-Latin squares, etc.). The more complex experiments share many of the complexities of multiple factors.
There are some alternatives to conventional one-way analysis of variance, e.g.: Welch's heteroscedastic F test, Welch's heteroscedastic F test with trimmed means and Winsorized variances, Brown-Forsythe test, Alexander-Govern test, James second order test and Kruskal-Wallis test, available in the onewaytests R package.
It is useful to represent each data point in the following form, called a statistical model:
formula_20
where "Y""ij" is the "i"-th observation within the "j"-th treatment (factor level), "μ" is the overall (grand) mean, "τ""j" is the effect associated with the "j"-th factor level (subject to the constraint formula_21), and "ε""ij" is the random error term.
That is, we envision an additive model that says every data point can be represented by summing three quantities: the true mean, averaged over all factor levels being investigated, plus an incremental component associated with the particular column (factor level), plus a final component associated with everything else affecting that specific data value.
For multiple factors.
ANOVA generalizes to the study of the effects of multiple factors. When the experiment includes observations at all combinations of levels of each factor, it is termed factorial. Factorial experiments are more efficient than a series of single factor experiments and the efficiency grows as the number of factors increases. Consequently, factorial designs are heavily used.
The use of ANOVA to study the effects of multiple factors has a complication. In a 3-way ANOVA with factors x, y and z, the ANOVA model includes terms for the main effects (x, y, z) and terms for interactions (xy, xz, yz, xyz).
All terms require hypothesis tests. The proliferation of interaction terms increases the risk that some hypothesis test will produce a false positive by chance. Fortunately, experience says that high order interactions are rare.
The ability to detect interactions is a major advantage of multiple factor ANOVA. Testing one factor at a time hides interactions, but produces apparently inconsistent experimental results.
Caution is advised when encountering interactions; Test interaction terms first and expand the analysis beyond ANOVA if interactions are found. Texts vary in their recommendations regarding the continuation of the ANOVA procedure after encountering an interaction. Interactions complicate the interpretation of experimental data. Neither the calculations of significance nor the estimated treatment effects can be taken at face value. "A significant interaction will often mask the significance of main effects." Graphical methods are recommended to enhance understanding. Regression is often useful. A lengthy discussion of interactions is available in Cox (1958). Some interactions can be removed (by transformations) while others cannot.
A variety of techniques are used with multiple factor ANOVA to reduce expense. One technique used in factorial designs is to minimize replication (possibly no replication with support of analytical trickery) and to combine groups when effects are found to be statistically (or practically) insignificant. An experiment with many insignificant factors may collapse into one with a few factors supported by many replications.
Associated analysis.
Some analysis is required in support of the "design" of the experiment while other analysis is performed after changes in the factors are formally found to produce statistically significant changes in the responses. Because experimentation is iterative, the results of one experiment alter plans for following experiments.
Preparatory analysis.
The number of experimental units.
In the design of an experiment, the number of experimental units is planned to satisfy the goals of the experiment. Experimentation is often sequential.
Early experiments are often designed to provide mean-unbiased estimates of treatment effects and of experimental error. Later experiments are often designed to test a hypothesis that a treatment effect has an important magnitude; in this case, the number of experimental units is chosen so that the experiment is within budget and has adequate power, among other goals.
Reporting sample size analysis is generally required in psychology. "Provide information on sample size and the process that led to sample size decisions." The analysis, which is written in the experimental protocol before the experiment is conducted, is examined in grant applications and administrative review boards.
Besides the power analysis, there are less formal methods for selecting the number of experimental units. These include graphical methods based on limiting the probability of false negative errors, graphical methods based on an expected variation increase (above the residuals) and methods based on achieving a desired confidence interval.
Power analysis.
Power analysis is often applied in the context of ANOVA in order to assess the probability of successfully rejecting the null hypothesis if we assume a certain ANOVA design, effect size in the population, sample size and significance level. Power analysis can assist in study design by determining what sample size would be required in order to have a reasonable chance of rejecting the null hypothesis when the alternative hypothesis is true.
Effect size.
Several standardized measures of effect have been proposed for ANOVA to summarize the strength of the association between a predictor(s) and the dependent variable or the overall standardized difference of the complete model. Standardized effect-size estimates facilitate comparison of findings across studies and disciplines. However, while standardized effect sizes are commonly used in much of the professional literature, a non-standardized measure of effect size that has immediately "meaningful" units may be preferable for reporting purposes.
Model confirmation.
Sometimes tests are conducted to determine whether the assumptions of ANOVA appear to be violated. Residuals are examined or analyzed to confirm homoscedasticity and gross normality. Residuals should have the appearance of (zero mean normal distribution) noise when plotted as a function of anything including time and
modeled data values. Trends hint at interactions among factors or among observations.
Follow-up tests.
A statistically significant effect in ANOVA is often followed by additional tests. This can be done in order to assess which groups are different from which other groups or to test various other focused hypotheses. Follow-up tests are often distinguished in terms of whether they are "planned" (a priori) or "post hoc." Planned tests are determined before looking at the data, and post hoc tests are conceived only after looking at the data (though the term "post hoc" is inconsistently used).
The follow-up tests may be "simple" pairwise comparisons of individual group means or may be "compound" comparisons (e.g., comparing the mean pooling across groups A, B and C to the mean of group D). Comparisons can also look at tests of trend, such as linear and quadratic relationships, when the independent variable involves ordered levels. Often the follow-up tests incorporate a method of adjusting for the multiple comparisons problem.
Follow-up tests to identify which specific groups, variables, or factors have statistically different means include the Tukey's range test, and Duncan's new multiple range test. In turn, these tests are often followed with a Compact Letter Display (CLD) methodology in order to render the output of the mentioned tests more transparent to a non-statistician audience.
Study designs.
There are several types of ANOVA. Many statisticians base ANOVA on the design of the experiment, especially on the protocol that specifies the random assignment of treatments to subjects; the protocol's description of the assignment mechanism should include a specification of the structure of the treatments and of any blocking. It is also common to apply ANOVA to observational data using an appropriate statistical model.
Some popular designs use the following types of ANOVA: one-way ANOVA is used to test for differences among two or more independent groups; factorial ANOVA is used when there is more than one factor; repeated measures ANOVA is used when the same subjects are measured under each treatment (e.g., in a longitudinal study); and multivariate analysis of variance (MANOVA) is used when there is more than one response variable.
Cautions.
Balanced experiments (those with an equal sample size for each treatment) are relatively easy to interpret; unbalanced experiments offer more complexity. For single-factor (one-way) ANOVA, the adjustment for unbalanced data is easy, but the unbalanced analysis lacks both robustness and power. For more complex designs the lack of balance leads to further complications. "The orthogonality property of main effects and interactions present in balanced data does not carry over to the unbalanced case. This means that the usual analysis of variance techniques do not apply. Consequently, the analysis of unbalanced factorials is much more difficult than that for balanced designs." In the general case, "The analysis of variance can also be applied to unbalanced data, but then the sums of squares, mean squares, and "F"-ratios will depend on the order in which the sources of variation are considered."
ANOVA is (in part) a test of statistical significance. The American Psychological Association (and many other organisations) holds the view that simply reporting statistical significance is insufficient and that reporting confidence bounds is preferred.
Generalizations.
ANOVA is considered to be a special case of linear regression which in turn is a special case of the general linear model. All consider the observations to be the sum of a model (fit) and a residual (error) to be minimized.
The Kruskal-Wallis test and the Friedman test are nonparametric tests which do not rely on an assumption of normality.
Connection to linear regression.
Below we make clear the connection between multi-way ANOVA and linear regression.
Linearly re-order the data so that the formula_22-th observation is associated with a response formula_23 and factors formula_24 where formula_25 denotes the different factors and formula_26 is the total number of factors. In one-way ANOVA formula_27 and in two-way ANOVA formula_28. Furthermore, we assume the formula_29-th factor has formula_30 levels, namely formula_31. Now, we can one-hot encode the factors into the formula_32 dimensional vector formula_33.
The one-hot encoding function formula_34 is defined such that the formula_3-th entry of formula_35 is
formula_36
The vector formula_33 is the concatenation of all of the above vectors for all formula_29. Thus, formula_37. In order to obtain a fully general formula_26-way interaction ANOVA we must also concatenate every additional interaction term in the vector formula_33 and then add an intercept term. Let that vector be formula_38.
With this notation in place, we now have the exact connection with linear regression. We simply regress response formula_23 against the vector formula_38. However, there is a concern about identifiability. In order to overcome such issues we assume that the sum of the parameters within each set of interactions is equal to zero. From here, one can use "F"-statistics or other methods to determine the relevance of the individual factors.
Example.
We can consider the 2-way interaction example where we assume that the first factor has 2 levels and the second factor has 3 levels.
Define formula_39 if formula_40 and formula_41 if formula_42, i.e. formula_43 is the one-hot encoding of the first factor and formula_29 is the one-hot encoding of the second factor.
With that,
formula_44
where the last term is an intercept term. For a more concrete example suppose that
formula_45
Then, formula_46
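A small sketch of the encoding just described (illustrative only, written in Python): it builds the design vector for the two-factor example, with main effects, all interaction terms and an intercept, and reproduces the vector given above.

def one_hot(level, n_levels):
    """One-hot encoding of a factor level taken from {1, ..., n_levels}."""
    return [1 if i == level else 0 for i in range(1, n_levels + 1)]

def design_vector(z1, z2):
    a = one_hot(z1, 2)                       # first factor, 2 levels
    b = one_hot(z2, 3)                       # second factor, 3 levels
    interactions = [ai * bj for ai in a for bj in b]
    return a + b + interactions + [1]        # main effects, interactions, intercept

print(design_vector(2, 1))
# [0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1] -- matches the example above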
See also.
<templatestyles src="Div col/styles.css"/>
Footnotes.
<templatestyles src="Reflist/styles.css" />
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\varepsilon"
},
{
"math_id": 1,
"text": "\\varepsilon \\thicksim N(0, \\sigma^2)."
},
{
"math_id": 2,
"text": "y_{i,j}"
},
{
"math_id": 3,
"text": "i"
},
{
"math_id": 4,
"text": "j"
},
{
"math_id": 5,
"text": "y_i"
},
{
"math_id": 6,
"text": " t_j"
},
{
"math_id": 7,
"text": "y_{i,j}=y_i+t_j."
},
{
"math_id": 8,
"text": "t_j"
},
{
"math_id": 9,
"text": "s^2 = \\frac{1}{n-1} \\sum_i (y_i-\\bar{y})^2"
},
{
"math_id": 10,
"text": "SS_\\text{Total} = SS_\\text{Error} + SS_\\text{Treatments}"
},
{
"math_id": 11,
"text": "DF_\\text{Total} = DF_\\text{Error} + DF_\\text{Treatments}"
},
{
"math_id": 12,
"text": "F = \\frac{\\text{variance between treatments}}{\\text{variance within treatments}}"
},
{
"math_id": 13,
"text": "F = \\frac{MS_\\text{Treatments}}{MS_\\text{Error}} = {{SS_\\text{Treatments} / (I-1)} \\over {SS_\\text{Error} / (n_T-I)}}"
},
{
"math_id": 14,
"text": "I"
},
{
"math_id": 15,
"text": "n_T"
},
{
"math_id": 16,
"text": "I - 1"
},
{
"math_id": 17,
"text": "n_T - I"
},
{
"math_id": 18,
"text": "1 + {n \\sigma^2_\\text{Treatment}} / {\\sigma^2_\\text{Error}}"
},
{
"math_id": 19,
"text": "n"
},
{
"math_id": 20,
"text": "Y_{ij} = \\mu + \\tau_j + \\varepsilon_{ij}"
},
{
"math_id": 21,
"text": "\\sum_{j = 1}^C \\tau_j = 0"
},
{
"math_id": 22,
"text": "k"
},
{
"math_id": 23,
"text": "y_k"
},
{
"math_id": 24,
"text": "Z_{k,b}"
},
{
"math_id": 25,
"text": "b \\in \\{1,2,\\ldots,B\\}"
},
{
"math_id": 26,
"text": "B"
},
{
"math_id": 27,
"text": "B=1"
},
{
"math_id": 28,
"text": "B = 2"
},
{
"math_id": 29,
"text": "b"
},
{
"math_id": 30,
"text": "I_b"
},
{
"math_id": 31,
"text": "\\{1,2,\\ldots,I_b\\}"
},
{
"math_id": 32,
"text": " \\sum_{b=1}^B I_b"
},
{
"math_id": 33,
"text": "v_k"
},
{
"math_id": 34,
"text": "g_b : \\{1,2,\\ldots,I_b\\} \\mapsto \\{0,1\\}^{I_b}"
},
{
"math_id": 35,
"text": "g_b(Z_{k,b})"
},
{
"math_id": 36,
"text": "g_b(Z_{k,b})_i = \\begin{cases}\n1 & \\text{if } i=Z_{k,b} \\\\\n0 & \\text{otherwise}\n\\end{cases}"
},
{
"math_id": 37,
"text": "v_k = [g_1(Z_{k,1}), g_2(Z_{k,2}), \\ldots, g_B(Z_{k,B})]"
},
{
"math_id": 38,
"text": "X_k"
},
{
"math_id": 39,
"text": "a_i = 1"
},
{
"math_id": 40,
"text": "Z_{k,1}=i"
},
{
"math_id": 41,
"text": "b_i = 1"
},
{
"math_id": 42,
"text": "Z_{k,2} = i"
},
{
"math_id": 43,
"text": "a"
},
{
"math_id": 44,
"text": "\nX_k = [a_1, a_2, b_1, b_2, b_3 ,a_1 \\times b_1, a_1 \\times b_2, a_1 \\times b_3, a_2 \\times b_1, a_2 \\times b_2, a_2 \\times b_3, 1]\n"
},
{
"math_id": 45,
"text": "\\begin{align}\nZ_{k,1} & = 2 \\\\\nZ_{k,2} & = 1\n\\end{align}"
},
{
"math_id": 46,
"text": "X_k = [0,1,1,0,0,0,0,0,1,0,0,1]"
}
] |
https://en.wikipedia.org/wiki?curid=634
|
6340012
|
Organic photochemistry
|
Organic photochemistry encompasses organic reactions that are induced by the action of light. The absorption of ultraviolet light by organic molecules often leads to reactions. In the earliest days, sunlight was employed, while in more modern times ultraviolet lamps are employed. Organic photochemistry has proven to be a very useful synthetic tool. Complex organic products can be obtained simply.
History.
Early examples were often uncovered by the observation of precipitates or color changes in samples that were exposed to sunlight. The first reported case was by Ciamician, who observed that sunlight converted santonin to a yellow photoproduct:
An early example of a precipitate was the photodimerization of anthracene, characterized by Yulii Fedorovich Fritzsche and confirmed by Elbs. Similar observations focused on the dimerization of cinnamic acid to truxillic acid. Many photodimers are now recognized, e.g. pyrimidine dimer, thiophosgene, diamantane.
Another example was uncovered by Egbert Havinga in 1956. The curious result was activation on photolysis by a meta nitro group in contrast to the usual activation by ortho and para groups.
Organic photochemistry advanced with the development of the Woodward-Hoffmann rules. Illustratively, these rules help rationalize the photochemically driven electrocyclic ring-closure of hexa-2,4-diene, which proceeds in a disrotatory fashion.
Organic reactions that obey these rules are said to be symmetry allowed. Reactions that take the opposite course are symmetry forbidden and require substantially more energy to take place if they take place at all.
Key reactions.
Organic photochemical reactions are explained in the context of the relevant excited states.
Parallel to the structural studies described above, the role of spin multiplicity – singlet vs triplet – on reactivity was evaluated. The importance of triplet excited species was emphasized. Triplets tend to be longer-lived than singlets and of lower energy than the singlet of the same configuration. Triplets may arise from (A) conversion of the initially formed singlets or by (B) interaction with a higher energy triplet (sensitization).
It is possible to quench triplet reactions.
Common organic photochemical reactions include: Norrish Type I, the Norrish Type II, the racemization of optically active biphenyls, the type A cyclohexadienone rearrangement, the type B cyclohexenone rearrangement, the di-π-methane rearrangement, the type B bicyclo[3.1.0]hexanone rearrangement to phenols, photochemical electrocyclic processes, the rearrangement of epoxyketones to beta-diketones, ring opening of cyclopropyl ketones, heterolysis of 3,5-dimethoxylbenzylic derivatives, and photochemical cyclizations of dienes.
Practical considerations.
Reactants of the photoreactions can be either gases or liquids. In general, it is necessary to bring the reactants close to the light source in order to obtain the highest possible efficiency of light utilization. For this purpose, the reaction mixture can be irradiated either directly or in a flow-through side arm of a reactor with a suitable light source.
A disadvantage of photochemical processes is the low efficiency of the conversion of electrical energy into radiation of the required wavelength. In addition to the radiation, light sources generate plenty of heat, which in turn requires cooling energy. In addition, most light sources emit polychromatic light, even though only monochromatic light is needed. A high quantum yield, however, compensates for these disadvantages.
Working at low temperatures is advantageous since side reactions are avoided (so the selectivity is increased) and the yield is increased (since gaseous reactants are expelled from the solvent to a lesser extent).
The starting materials can sometimes be cooled before the reaction to such an extent that the reaction heat is absorbed without further cooling of the mixture. In the case of gaseous or low-boiling starting materials, work under overpressure is necessary. Due to the large number of possible raw materials, a large number of processes have been described. Large scale reactions are usually carried out in a stirred tank reactor, a bubble column reactor or a tube reactor, followed by further processing depending on the target product. In the case of a stirred tank reactor, the lamp (generally shaped as an elongated cylinder) is provided with a cooling jacket and placed in the reaction solution. Tube reactors are made from quartz or glass tubes, which are irradiated from the outside. Using a stirred tank reactor has the advantage that no light is lost to the environment. However, the intensity of light drops rapidly with the distance from the light source due to absorption by the reactants.
The influence of the radiation on the reaction rate can often be represented by a power law based on the quantum flow density, i.e. the moles of light quanta (formerly measured in the unit einstein) per unit area and time. One objective in the design of reactors is therefore to determine the economically most favorable dimensioning with regard to optimizing the quantum flow density.
Case studies.
[2+2] Cycloadditions.
Olefins dimerize upon UV-irradiation.
4,4-Diphenylcyclohexadienone rearrangement.
Quite parallel to the santonin to lumisantonin example is the rearrangement of 4,4-diphenylcyclohexadienone. Here the n-π* triplet excited state undergoes the same beta-beta bonding. This is followed by intersystem crossing (i.e. ISC) to form the singlet ground state, which is seen to be a zwitterion. The final step is the rearrangement to the bicyclic photoproduct. The reaction is termed the type A cyclohexadienone rearrangement.
4,4-diphenylcyclohexenone.
To provide further evidence on the mechanism of the dienone in which there is bonding between the two double bonds,
the case of 4,4-diphenylcyclohexenone is presented here. It is seen that the rearrangement is quite different; thus two double bonds are required for a type A rearrangement. With one double bond one of the phenyl groups, originally at C-4, has migrated to C-3 (i.e. the beta carbon).
When one of the aryl groups has a para-cyano or para-methoxy group, that substituted aryl group migrates in preference. Inspection of the alternative phenonium-type species, in which an aryl group has begun to migrate to the beta-carbon, reveals the greater electron delocalization with a substituent para on the migrating aryl group and thus a more stabilized pathway.
π-π* reactivity.
Still another type of photochemical reaction is the di-π-methane rearrangement. Two further early examples were the rearrangement of 1,1,5,5-tetraphenyl-3,3-dimethyl-1,4-pentadiene (the "Mariano" molecule) and the rearrangement of barrelene to semibullvalene. We note that, in contrast to the cyclohexadienone reactions which used n-π* excited states, the di-π-methane rearrangements utilize π-π* excited states.
Related topics.
Photoredox catalysis.
In photoredox catalysis, the photon is absorbed by a sensitizer (antenna molecule or ion) which then effects redox reactions on the organic substrate. A common sensitizer is ruthenium(II) tris(bipyridine). Illustrative of photoredox catalysis are some aminotrifluoromethylation reactions.
Photochlorination.
Photochlorination is one of the largest implementations of photochemistry to organic synthesis. The photon is however not absorbed by the organic compound, but by chlorine. Photolysis of Cl2 gives chlorine atoms, which abstract H atoms from hydrocarbons, leading to chlorination.
formula_0
formula_1
formula_2
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathrm{Cl_2 \\ \\xrightarrow {h\\nu} \\ Cl{\\cdot} + {\\cdot}Cl \\quad (initiation)}"
},
{
"math_id": 1,
"text": "\\mathrm{Cl{\\cdot} + RH \\longrightarrow {\\cdot}R + HCl \\quad (chain propagation)}"
},
{
"math_id": 2,
"text": "\\mathrm{R{\\cdot} + Cl_2 \\longrightarrow {\\cdot}Cl + RCl \\quad (chain propagation)}"
}
] |
https://en.wikipedia.org/wiki?curid=6340012
|
634016
|
Fluorescence recovery after photobleaching
|
Fluorescence recovery after photobleaching (FRAP) is a method for determining the kinetics of diffusion through tissue or cells. It is capable of quantifying the two-dimensional lateral diffusion of a molecularly thin film containing fluorescently labeled probes, or to examine single cells. This technique is very useful in biological studies of cell membrane diffusion and protein binding. In addition, surface deposition of a fluorescing phospholipid bilayer (or monolayer) allows the characterization of hydrophilic (or hydrophobic) surfaces in terms of surface structure and free energy.
Similar, though less well known, techniques have been developed to investigate the 3-dimensional diffusion and binding of molecules inside the cell; they are also referred to as FRAP.
Experimental setup.
The basic apparatus comprises an optical microscope, a light source and some fluorescent probe. Fluorescent emission is contingent upon absorption of a specific optical wavelength or color which restricts the choice of lamps. Most commonly, a broad spectrum mercury or xenon source is used in conjunction with a color filter. The technique begins by saving a background image of the sample before photobleaching. Next, the light source is focused onto a small patch of the viewable area either by switching to a higher magnification microscope objective or with laser light of the appropriate wavelength. The fluorophores in this region receive high intensity illumination which causes their fluorescence lifetime to quickly elapse (limited to roughly 10^5 photons before extinction). Now the image in the microscope is that of a uniformly fluorescent field with a noticeable dark spot. As Brownian motion proceeds, the still-fluorescing probes will diffuse throughout the sample and replace the non-fluorescent probes in the bleached region. This diffusion proceeds in an ordered fashion, analytically determinable from the diffusion equation. Assuming a Gaussian profile for the bleaching beam, the diffusion constant "D" can be simply calculated from:
formula_0
where "w" is the radius of the beam and "tD" is the "Characteristic" diffusion time.
Applications.
Supported lipid bilayers.
Originally, the FRAP technique was intended for use as a means to characterize the mobility of individual lipid molecules within a cell membrane. While providing great utility in this role, current research leans more toward investigation of artificial lipid membranes. Supported by hydrophilic or hydrophobic substrates (to produce lipid bilayers or monolayers respectively) and incorporating membrane proteins, these biomimetic structures are potentially useful as analytical devices for determining the identity of unknown substances, understanding cellular transduction, and identifying ligand binding sites.
Protein binding.
This technique is commonly used in conjunction with green fluorescent protein (GFP) fusion proteins, where the studied protein is fused to a GFP. When excited by a specific wavelength of light, the protein will fluoresce. When the protein that is being studied is produced with the GFP, then the fluorescence can be tracked. Photobleaching the GFP and then watching the repopulation of the bleached area can reveal information about protein interaction partners, organelle continuity and protein trafficking.
If after some time the fluorescence does not return to the initial level, then some part of the original fluorescence was caused by an immobile fraction (which cannot be replenished by diffusion). Similarly, if the fluorescent proteins bind to static cell receptors, the rate of recovery will be retarded by a factor related to the association and dissociation coefficients of binding. This observation has most recently been exploited to investigate protein binding. Similarly, if the GFP-labeled protein is constitutively incorporated into a larger complex, the dynamics of fluorescence recovery will be characterized by the diffusion of the larger complex.
Applications outside the membrane.
FRAP can also be used to monitor proteins outside the membrane. After the protein of interest is made fluorescent, generally by expression as a GFP fusion protein, a confocal microscope is used to photobleach and monitor a region of the cytoplasm, mitotic spindle, nucleus, or another cellular structure. The mean fluorescence in the region can then be plotted versus time since the photobleaching, and the resulting curve can yield kinetic coefficients, such as those for the protein's binding reactions and/or the protein's diffusion coefficient in the medium where it is being monitored. Often the only dynamics considered are diffusion and binding/unbinding interactions, however, in principle proteins can also move via flow, i.e., undergo directed motion, and this was recognized very early by Axelrod et al. This could be due to flow of the cytoplasm or nucleoplasm, or transport along filaments in the cell such as microtubules by molecular motors.
The analysis is simplest when the fluorescence recovery is limited either by the rate of diffusion into the bleached area or by the rate at which bleached proteins unbind from their binding sites within the bleached area and are replaced by fluorescent protein. Let us look at these two limits, for the common case of bleaching a GFP fusion protein in a living cell.
Diffusion-limited fluorescence recovery.
For a circular bleach spot of radius formula_1 and diffusion-dominated recovery, the fluorescence is described by an equation derived by Soumpasis (which involves modified Bessel functions formula_2 and formula_3)
formula_4
where formula_5 is the characteristic timescale for diffusion and formula_6 is the time. formula_7 is the normalized fluorescence (it goes to 1 as formula_6 goes to infinity). The diffusion timescale for a bleached spot of radius formula_1 is formula_8, with "D" the diffusion coefficient.
Note that this is for an instantaneous bleach with a step-function profile, i.e., the fraction formula_9 of protein assumed to be bleached instantaneously at time formula_10 is formula_11, and formula_12, where formula_13 is the distance from the centre of the bleached area. It is also assumed that the recovery can be modelled by diffusion in two dimensions that is both uniform and isotropic. In other words, diffusion is occurring in a uniform medium, so the effective diffusion constant "D" is the same everywhere, and the diffusion is isotropic, i.e., occurs at the same rate along all axes in the plane.
In practice, in a cell none of these assumptions will be strictly true.
Thus, the equation of Soumpasis is just a useful approximation that can be used when the assumptions listed above are good approximations to the true situation, and when the recovery of fluorescence is indeed limited by the timescale of diffusion formula_5. Note that the fact that the Soumpasis equation can be fitted adequately to data does not necessarily imply that the assumptions are true and that diffusion dominates recovery.
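For a concrete illustration, the Soumpasis curve is straightforward to evaluate numerically; in the following sketch the spot radius and diffusion coefficient are arbitrary example values, and exponentially scaled Bessel functions are used to avoid overflow at early times:

```python
import numpy as np
from scipy.special import i0e, i1e  # exponentially scaled modified Bessel functions I0, I1

def soumpasis_recovery(t, tau_d):
    """Normalized fluorescence for diffusion-limited recovery of a circular bleach spot:
    f(t) = exp(-2*tau_d/t) * (I0(2*tau_d/t) + I1(2*tau_d/t))."""
    x = 2.0 * tau_d / np.asarray(t, dtype=float)
    return i0e(x) + i1e(x)  # the scaled Bessel functions already include the exp(-x) factor

w, D = 1.0, 0.25                  # example spot radius (um) and diffusion coefficient (um^2/s)
tau_d = w**2 / (4.0 * D)          # characteristic diffusion timescale

t = np.linspace(0.01, 20.0, 200)  # avoid t = 0, where the expression is singular
f = soumpasis_recovery(t, tau_d)
print(f"tau_D = {tau_d:.2f} s, f(tau_D) = {soumpasis_recovery(tau_d, tau_d):.3f}")
```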
Reaction-limited recovery.
The equation describing the fluorescence as a function of time is particularly simple in another limit. If a large number of proteins bind to sites in a small volume, such that the fluorescence signal is dominated by the signal from bound proteins, and if this binding is all in a single state with an off rate "koff", then the fluorescence as a function of time is given by
formula_14
Note that the recovery depends only on the rate constant for unbinding, "koff". It does not depend on the on rate for binding. This simple form does, however, rely on a number of assumptions, for example that unbinding is much slower than diffusion across the bleached spot, i.e., formula_15.
If all these assumptions are satisfied, then fitting an exponential to the recovery curve will give the off-rate constant "koff". However, other dynamics can give recovery curves similar to exponentials, so fitting an exponential does not necessarily imply that recovery is dominated by a simple bimolecular reaction. One way to distinguish recovery whose rate is determined by unbinding from recovery that is limited by diffusion is to note that the recovery rate for unbinding-limited recovery is independent of the size of the bleached area "r", while it scales as formula_16 for diffusion-limited recovery. Thus if both a small and a large area are bleached, then if recovery is limited by unbinding the recovery rates will be the same for the two sizes of bleached area, whereas if recovery is limited by diffusion the recovery will be much slower for the larger bleached area.
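The exponential form above is simple enough to fit directly. The following sketch uses synthetic data with an arbitrary true off rate, purely for illustration, and recovers "koff" by least-squares fitting:

```python
import numpy as np
from scipy.optimize import curve_fit

def reaction_limited(t, k_off):
    """Normalized recovery when unbinding dominates: f(t) = 1 - exp(-k_off * t)."""
    return 1.0 - np.exp(-k_off * t)

# Synthetic "measurement" for illustration: true k_off = 0.3 per second, plus noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 20.0, 60)
data = reaction_limited(t, 0.3) + rng.normal(0.0, 0.02, t.size)

(k_off_fit,), _ = curve_fit(reaction_limited, t, data, p0=[0.1])
print(f"fitted k_off = {k_off_fit:.3f} per second")
```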
Diffusion and reaction.
In general, the recovery of fluorescence will not be dominated either by simple isotropic diffusion or by a single simple unbinding rate. There will be both diffusion and binding, and indeed the diffusion constant may not be uniform in space, there may be more than one type of binding site, and these sites may also have a non-uniform distribution in space. Flow processes may also be important. This more complex behavior implies that a model with several parameters is required to describe the data; models with only a single diffusion constant "D" or a single off-rate constant "koff" are inadequate.
There are models with both diffusion and reaction. Unfortunately, a single FRAP curve may provide insufficient evidence to fit (possibly noisy) experimental data reliably and uniquely. Sadegh Zadeh "et al." have shown that FRAP curves can be fitted by "different" pairs of values of the diffusion constant and the on-rate constant; in other words, fits to the FRAP data are not unique. This applies to three-parameter (on-rate constant, off-rate constant and diffusion constant) fits. Fits that are not unique are not generally useful.
Thus for models with a number of parameters, a single FRAP experiment may be insufficient to estimate all the model parameters. Then more data is required, e.g., by bleaching areas of different sizes, determining some model parameters independently, etc.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "D = \\frac{w^{2}}{4t_{D}}"
},
{
"math_id": 1,
"text": "w"
},
{
"math_id": 2,
"text": "I_0"
},
{
"math_id": 3,
"text": "I_1"
},
{
"math_id": 4,
"text": "f(t)=e^{-2\\tau_D/t}\\left(I_{0}(2\\tau_D/t)+I_{1}(2\\tau_D/t)\\right)"
},
{
"math_id": 5,
"text": "\\tau_D"
},
{
"math_id": 6,
"text": "t"
},
{
"math_id": 7,
"text": "f(t)"
},
{
"math_id": 8,
"text": "\\tau_D=w^2/(4D)"
},
{
"math_id": 9,
"text": "f_b"
},
{
"math_id": 10,
"text": "t=0"
},
{
"math_id": 11,
"text": "f_b(r)=b, ~~r<w"
},
{
"math_id": 12,
"text": "f_b(r)=0, ~~r>w"
},
{
"math_id": 13,
"text": "r"
},
{
"math_id": 14,
"text": "f(t)=1-e^{-k_{\\text{off}}t}"
},
{
"math_id": 15,
"text": "1/k_{\\text{off}} >> r^2/D"
},
{
"math_id": 16,
"text": "r^{-2}"
}
] |
https://en.wikipedia.org/wiki?curid=634016
|
63406456
|
Laver's theorem
|
Laver's theorem, in order theory, states that order embeddability of countable total orders is a well-quasi-ordering. That is, for every infinite sequence of totally-ordered countable sets, there exists an order embedding from an earlier member of the sequence to a later member. This result was previously known as Fraïssé's conjecture, after Roland Fraïssé, who conjectured it in 1948; Richard Laver proved the conjecture in 1971. More generally, Laver proved the same result for order embeddings of countable unions of scattered orders.
In reverse mathematics, the version of the theorem for countable orders is denoted FRA (for Fraïssé) and the version for countable unions of scattered orders is denoted LAV (for Laver). In terms of the "big five" systems of second-order arithmetic, FRA is known to fall in strength somewhere between the strongest two systems, formula_0-CA0 and ATR0, and to be weaker than formula_0-CA0. However, it remains open whether it is equivalent to ATR0 or strictly between these two systems in strength.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\Pi_1^1"
}
] |
https://en.wikipedia.org/wiki?curid=63406456
|
634073
|
Where's Willy?
|
Canadian dollar note tracking website
Where's Willy? is a website that tracks Canadian paper money, most commonly $5 bills, but also higher denominations. "Where's Willy" is free, supported by users who pay a fee for extra features. The name Willy refers to Sir Wilfrid Laurier, the seventh Prime Minister of Canada, whose portrait appears on the $5 banknote.
"Where's Willy?" is a currency tracking spin-off of Where's George, a site that tracks United States dollars.
History.
The free site, established by Hank Eskin, a computer consultant in Brookline, Massachusetts, allows people to enter their local postal code and the serial number and series of any Canadian bill, in denominations from $5 up to $100, that they want to track. Once a bill is registered, the site reports the time between sightings, the distance travelled and any comments from the finders, and anyone who registered the bill earlier learns about it by e-mail and/or text messaging.
To increase the chance of having a bill reported, users write or stamp text on the bills encouraging bill finders to visit whereswilly.com and track the bill's travels.
Since Canada has replaced the one and two dollar bills with more durable coins, the $5 note is the smallest denomination tracked by Where's Willy.
In April 2003, "USA Today" named whereswilly.com one of its "Hot Sites".
In 2005, the "Montreal Mirror" described the hobby as "the most joy to be had with the five-dollar bill since the illegal defacing of Laurier with Spock ears." While the "Mirror" describes this practice as illegal, a 2015 statement from the Bank of Canada explained that "Spocking Fives" does not violate the Bank of Canada Act or the Criminal Code.
As of June 7, 2021, "Where's Willy?" was tracking more than 5,800,000 bills totaling nearly $85,000,000.
Researchers studying pandemics have used currency tracking sites to plot human travel patterns, to find clues on how to combat the spread of diseases like SARS.
Willy Index.
The "Willy Index" is a method of rating users based on how many bills they've entered and also by how many total hits they've had. The formula is as follows:
formula_0
Because square roots and natural logarithms grow slowly, the more bills a user has entered and the more hits they have received, the more of each are needed to raise the score further.
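For illustration, the index is easy to compute directly from the formula above; the input values below are arbitrary examples:

```python
import math

def willy_index(bills_entered, hits, days_of_inactivity):
    """Willy Index from the formula above (bills_entered must be at least 1)."""
    return (100.0
            * (math.sqrt(math.log(bills_entered)) + math.log(hits + 1))
            * (1.0 - days_of_inactivity / 100.0))

# Example: 500 bills entered, 40 hits, inactive for 3 days
print(round(willy_index(bills_entered=500, hits=40, days_of_inactivity=3), 1))
```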
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "100\\times\\left[\\sqrt{\\ln({\\rm bills\\ entered})}+\\ln({\\rm hits}+1)\\right]\\times[1-({\\rm days\\ of\\ inactivity}/100)]"
}
] |
https://en.wikipedia.org/wiki?curid=634073
|
63411286
|
Infeld–Van der Waerden symbols
|
Matrices used for Lorentz group spinors
The Infeld–Van der Waerden symbols, sometimes called simply Van der Waerden symbols, are an invariant symbol associated to the Lorentz group used in quantum field theory. They are named after Leopold Infeld and Bartel Leendert van der Waerden.
The Infeld–Van der Waerden symbols are index notation for Clifford multiplication of covectors on left-handed spinors giving right-handed spinors, or vice versa; i.e., they are the off-diagonal blocks of the gamma matrices. The symbols are typically denoted in Van der Waerden notation as
formula_0
and so have one Lorentz index (m), one left-handed (undotted Greek), and one right-handed (dotted Greek) Weyl spinor index. They satisfy
formula_1
They need not be constant, however, and can therefore be formulated on curved spacetime.
Background.
The existence of this invariant symbol follows from a result in the representation theory of the Lorentz group or more properly its Lie algebra. Labeling irreducible representations by formula_2, the spinor and its complex conjugate representations are the left and right fundamental representations
formula_3 and formula_4
while the tangent vectors live in the vector representation
formula_5
The tensor product of one left and one right fundamental representation is the vector representation, formula_6. A dual statement is that the tensor product of the vector, left, and right fundamental representations contains the trivial representation, which is in fact generated by the construction of the Lie algebra representations through the Clifford algebra (see below): formula_7
Representations of the Clifford algebra.
Consider the space of positive Weyl spinors formula_8 of a Lorentzian vector space formula_9 with dual formula_10.
Then the negative Weyl spinors can be identified with the vector space formula_11 of complex conjugate dual spinors.
The Weyl spinors implement "two halves of a Clifford algebra representation" i.e. they come with a multiplication by covectors implemented as maps
formula_12
and
formula_13
which we will call Infeld–Van der Waerden maps. Note that in a natural way we can also think of the maps as a sesquilinear map associating a vector to a left and righthand spinor
formula_14
respectively formula_15.
That the Infeld–Van der Waerden maps implement "two halves of a Clifford algebra representation" means that for covectors formula_16
formula_17
resp.
formula_18,
so that if we define
formula_19
then
formula_20
Therefore formula_21 extends to a proper Clifford algebra representation formula_22.
The Infeld–Van der Waerden maps are real (or hermitian) in the sense that the complex conjugate dual map
formula_23
coincides (for a real covector formula_24):
formula_25.
Likewise we have formula_26.
Now the Infeld–Van der Waerden symbols are the components of the maps formula_27 and formula_28 with respect to bases of formula_29 and formula_8, with induced bases on formula_30 and formula_11. Concretely, if T is the tangent space at a point O with local coordinates formula_31 (formula_32), so that formula_33 is a basis for formula_29 and formula_34 is a basis for formula_30, and formula_35 (formula_36) is a basis for formula_8, formula_37 is a dual basis for formula_38 with complex conjugate dual basis formula_39 of formula_11, then
formula_40
formula_41
Using local frames of the (co)tangent bundle and a Weyl spinor bundle, the construction carries over to a differentiable manifold with a spinor bundle.
Applications.
The formula_28 symbols are of fundamental importance for calculations in quantum field theory in curved spacetime, and in supersymmetry. In the presence of a tetrad formula_42 for "soldering" local Lorentz indices to tangent indices, the contracted version formula_43 can also be thought of as a soldering form for building a tangent vector out of a pair of left and right Weyl spinors.
Conventions.
In flat Minkowski space, a standard component representation is in terms of the Pauli matrices, hence the formula_28 notation. In an orthonormal basis with a standard spin frame, the conventional components are formula_44
Note that these are the blocks of the gamma matrices in the Weyl (chiral) basis convention. There are, however, many conventions.
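As a quick consistency check (a sketch assuming the flat-space conventions above with a mostly-minus metric, and glossing over the finer points of index placement), the defining anticommutation relation can be verified numerically:

```python
import numpy as np

# Pauli matrices and the 2x2 identity
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Conventional components: sigma^0 = 1, sigma^i = Pauli_i, sigmabar^0 = 1, sigmabar^i = -Pauli_i
sigma = [I2, s1, s2, s3]
sigma_bar = [I2, -s1, -s2, -s3]

g = np.diag([1.0, -1.0, -1.0, -1.0])  # assumed Minkowski metric, signature (+, -, -, -)

for m in range(4):
    for n in range(4):
        lhs = sigma[m] @ sigma_bar[n] + sigma[n] @ sigma_bar[m]
        assert np.allclose(lhs, 2.0 * g[m, n] * I2)
print("anticommutation relation sigma^m sigmabar^n + sigma^n sigmabar^m = 2 g^{mn} 1 holds")
```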
Citations.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\n \\sigma^m{}_{\\alpha\\dot{\\beta}}\\quad\\text{and}\\quad\\bar{\\sigma}^{m\\,\\dot{\\alpha}\\beta}.\n"
},
{
"math_id": 1,
"text": "\n\\begin{align}\n \\sigma^m{}_{\\alpha\\dot{\\beta}}\\bar{\\sigma}^{n\\,\\dot{\\beta}\\gamma} + \\sigma^n{}_{\\alpha\\dot{\\beta}}\\bar{\\sigma}^{m\\,\\dot{\\beta}\\gamma} \n &= 2\\delta_\\alpha^\\gamma g^{mn}, \\quad \\\\\n \\sigma^m{}_{\\alpha\\dot{\\beta}}\\bar{\\sigma}^{n\\,\\dot{\\gamma}\\alpha} + \\sigma^n{}_{\\alpha\\dot{\\beta}}\\bar{\\sigma}^{m\\,\\dot{\\gamma}\\alpha} \n &= 2 \\delta_{\\dot \\beta}^{\\dot \\gamma} g^{mn}.\n\\end{align}\n"
},
{
"math_id": 2,
"text": "(j,\\bar{\\jmath})"
},
{
"math_id": 3,
"text": "(\\tfrac{1}{2},0)"
},
{
"math_id": 4,
"text": "(0,\\tfrac{1}{2}),"
},
{
"math_id": 5,
"text": "(\\tfrac{1}{2},\\tfrac{1}{2})."
},
{
"math_id": 6,
"text": "(\\tfrac{1}{2},0)\\otimes(0,\\tfrac{1}{2})=(\\tfrac{1}{2},\\tfrac{1}{2})"
},
{
"math_id": 7,
"text": "(\\tfrac{1}{2},0)\\otimes(0,\\tfrac{1}{2})\\otimes(\\tfrac{1}{2},\\tfrac{1}{2})=(0,0)\\oplus\\cdots ."
},
{
"math_id": 8,
"text": "S"
},
{
"math_id": 9,
"text": "(T,g)"
},
{
"math_id": 10,
"text": "(T^\\vee, g^\\vee)"
},
{
"math_id": 11,
"text": "\\bar S^\\vee"
},
{
"math_id": 12,
"text": "\\sigma:T^\\vee \\to \\mathrm{Hom}(S, \\bar S^\\vee)"
},
{
"math_id": 13,
"text": "\\bar\\sigma: T^\\vee \\to \\mathrm{Hom}(\\bar S^\\vee, S)"
},
{
"math_id": 14,
"text": "\\sigma \\in T \\otimes \\bar S^\\vee \\otimes S^\\vee \\cong \\mathrm{Hom}(\\bar S \\otimes S, T) "
},
{
"math_id": 15,
"text": "\\bar \\sigma \\in T \\otimes S \\otimes \\bar S \\cong \\mathrm{Hom}(S^\\vee \\otimes \\bar S^\\vee, T) "
},
{
"math_id": 16,
"text": "a,b \\in T^\\vee"
},
{
"math_id": 17,
"text": "\\bar\\sigma(a)\\sigma(b) + \\bar\\sigma(b)\\sigma(a) = 2g^\\vee(a,b)1_{S}"
},
{
"math_id": 18,
"text": "\\sigma(a)\\bar\\sigma(b) + \\sigma(b)\\bar\\sigma(a) = 2g^\\vee(a,b)1_{\\bar S^\\vee}"
},
{
"math_id": 19,
"text": "\\gamma = \\begin{pmatrix}0 & \\bar \\sigma \\\\\\sigma & 0 \\end{pmatrix}:T^\\vee \\to \\mathrm{End}(S\\oplus \\bar S^\\vee) "
},
{
"math_id": 20,
"text": "\\gamma(a)\\gamma(b) + \\gamma(b)\\gamma(a) = 2g^\\vee(a,b)1_{S \\oplus \\bar S^\\vee}."
},
{
"math_id": 21,
"text": "\\gamma"
},
{
"math_id": 22,
"text": "\\mathrm{Cl}(T^\\vee, g^\\vee) \\to \\mathrm{End}(S\\oplus \\bar S^\\vee)"
},
{
"math_id": 23,
"text": "\\sigma^\\dagger(a): S \\mathop{\\to}\\limits^{\\bar\\ } \\bar S \\mathop{\\longrightarrow}\\limits^{\\sigma^\\vee(a)} S^\\vee \\mathop{\\to}\\limits^{\\bar\\ } \\bar S^\\vee"
},
{
"math_id": 24,
"text": " a"
},
{
"math_id": 25,
"text": "\\sigma(a) = \\sigma(\\bar a)^\\dagger"
},
{
"math_id": 26,
"text": "\\bar\\sigma(a) = \\bar\\sigma(\\bar a)^\\dagger"
},
{
"math_id": 27,
"text": "\\bar\\sigma"
},
{
"math_id": 28,
"text": "\\sigma"
},
{
"math_id": 29,
"text": "T"
},
{
"math_id": 30,
"text": "T^\\vee"
},
{
"math_id": 31,
"text": "x^m"
},
{
"math_id": 32,
"text": "m = 0, \\ldots, 3"
},
{
"math_id": 33,
"text": "\\partial_m"
},
{
"math_id": 34,
"text": " dx^m"
},
{
"math_id": 35,
"text": " s_\\alpha"
},
{
"math_id": 36,
"text": "\\alpha = 0,1 "
},
{
"math_id": 37,
"text": "s^\\alpha"
},
{
"math_id": 38,
"text": "S^\\vee"
},
{
"math_id": 39,
"text": "\\bar s^{\\dot\\alpha}"
},
{
"math_id": 40,
"text": " \\sigma(dx^m)(s_\\alpha) = \\sigma^m_{\\alpha\\dot\\beta}\\bar s^{\\dot \\beta}"
},
{
"math_id": 41,
"text": " \\bar\\sigma(dx^m)(\\bar s^{\\dot\\alpha}) = \\bar\\sigma^{m, \\dot\\alpha\\beta}s_\\beta"
},
{
"math_id": 42,
"text": "e^\\mu{}_m"
},
{
"math_id": 43,
"text": "\\sigma^\\mu{}_{\\alpha\\dot{\\beta}}"
},
{
"math_id": 44,
"text": "\\begin{align}\n\\sigma^0{}_{\\alpha\\dot{\\beta}} \\ &\\dot{=}\\ \\delta_{\\alpha\\dot{\\beta}} \\,, \\\\\n\\sigma^i{}_{\\alpha\\dot{\\beta}} \\ &\\dot{=}\\ (\\sigma^i)_{\\alpha\\dot{\\beta}} \\,, \\\\\n\\bar{\\sigma}^{0\\,\\dot{\\alpha}\\beta} \\ &\\dot{=}\\ \\delta^{\\dot{\\alpha}\\beta} \\,, \\\\\n\\bar{\\sigma}^{i\\,\\dot{\\alpha}\\beta} \\ &\\dot{=}\\ -(\\sigma^i)^{\\dot{\\alpha}\\beta} \\,.\n\\end{align}"
}
] |
https://en.wikipedia.org/wiki?curid=63411286
|
63412780
|
Signed set
|
In mathematics, a signed set is a set of elements together with an assignment of a sign (positive or negative) to each element of the set.
Representation.
Signed sets may be represented mathematically as an ordered pair of disjoint sets, one set for their positive elements and another for their negative elements. Alternatively, they may be represented as a Boolean function, a function whose domain is the underlying unsigned set (possibly specified explicitly as a separate part of the representation) and whose range is a two-element set representing the signs.
Signed sets may also be called formula_0-graded sets.
Application.
Signed sets are fundamental to the definition of oriented matroids.
They may also be used to define the faces of a hypercube. If the hypercube consists of all points in Euclidean space of a given dimension whose Cartesian coordinates are in the interval formula_1, then a signed subset of the coordinate axes can be used to specify the points whose coordinates within the subset are formula_2 or formula_3 (according to the sign in the signed subset) and whose other coordinates may be anywhere in the interval formula_1. This subset of points forms a face, whose codimension is the cardinality of the signed subset.
Combinatorics.
Enumeration.
The number of signed subsets of a given finite set of formula_4 elements is formula_5, a power of three, because there are three choices for each element: it may be absent from the subset, present with positive sign, or present with negative sign. For the same reason, the number of signed subsets of cardinality formula_6 is
formula_7
and summing these gives an instance of the binomial theorem,
formula_8
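These counts are small enough to check by brute-force enumeration; the following sketch (with an arbitrary choice of formula_4 and formula_6) represents a signed subset as a tuple whose entries are -1, 0, or +1:

```python
from itertools import product
from math import comb

def signed_subsets(n):
    """All signed subsets of {0, ..., n-1}: each element is absent (0), positive (+1) or negative (-1)."""
    return list(product((-1, 0, +1), repeat=n))

n = 4
subsets = signed_subsets(n)
assert len(subsets) == 3**n                      # 3^n signed subsets in total

r = 2
of_size_r = [s for s in subsets if sum(x != 0 for x in s) == r]
assert len(of_size_r) == 2**r * comb(n, r)       # 2^r * C(n, r) of cardinality r
print(len(subsets), len(of_size_r))
```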
Intersecting families.
An analogue of the Erdős–Ko–Rado theorem on intersecting families of sets holds also for signed sets. The intersection of two signed sets is defined to be the signed set of elements that belong to both and have the same sign in both. According to this theorem, for any collection of signed subsets of an formula_4-element set, all having cardinality formula_6 and all pairs having a non-empty intersection, the number of signed subsets in the collection is at most
formula_9
For instance, an intersecting family of this size can be obtained by choosing the sign of a single fixed element, and taking the family to be all signed subsets of cardinality formula_6 that contain this element with this sign. For formula_10 this theorem follows immediately from the unsigned Erdős–Ko–Rado theorem, as the unsigned versions of the subsets form an intersecting family and each unsigned set can correspond to at most formula_11 signed sets. However, for larger values of formula_6 a different proof is needed.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\Z_2"
},
{
"math_id": 1,
"text": "[-1,+1]"
},
{
"math_id": 2,
"text": "-1"
},
{
"math_id": 3,
"text": "+1"
},
{
"math_id": 4,
"text": "n"
},
{
"math_id": 5,
"text": "3^n"
},
{
"math_id": 6,
"text": "r"
},
{
"math_id": 7,
"text": "2^r\\binom{n}{r},"
},
{
"math_id": 8,
"text": "\\sum_r 2^r\\binom{n}{r}=3^n."
},
{
"math_id": 9,
"text": "2^{r-1}\\binom{n-1}{r-1}."
},
{
"math_id": 10,
"text": "r\\le n/2"
},
{
"math_id": 11,
"text": "2^{r-1}"
}
] |
https://en.wikipedia.org/wiki?curid=63412780
|
63415138
|
Berge equilibrium
|
Solution concept capturing altruism in game theory
The Berge equilibrium is a game theory solution concept named after the mathematician Claude Berge. It is similar to the standard Nash equilibrium, except that it aims to capture a type of altruism rather than purely non-cooperative play. Whereas a Nash equilibrium is a situation in which each player of a strategic game ensures that they personally will receive the highest payoff given other players' strategies, in a Berge equilibrium every player ensures that all other players will receive the highest payoff possible. Although Berge introduced the intuition for this equilibrium notion in 1957, it was only formally defined by Vladislav Iosifovich Zhukovskii in 1985, and it was not in widespread use until half a century after Berge originally developed it.
History.
The Berge equilibrium was first introduced in Claude Berge's 1957 book "Théorie générale des jeux à n personnes". Moussa Larbani and Vladislav Iosifovich Zhukovskii write that the ideas in this book were not widely used in Russia partly due to a harsh review that it received shortly after its translation into Russian in 1961, and they were not used in the English speaking world because the book had only received French and Russian printings. These explanations are echoed by other authors, with Pierre Courtois "et al." adding that the impact of the book was likely dampened by its lack of economic examples, as well as by its reliance on tools from graph theory that would have been less familiar to economists of the time.
Berge introduced his original equilibrium notion only in intuitive terms, and the first formal definition of the Berge equilibrium was published by Vladislav Iosifovich Zhukovskii in 1985. The topic of Berge equilibria was then studied in detail by Konstantin Semenovich Vaisman in his 1995 PhD dissertation, and Larbani and Zhukovskii document that the tool became more widely used in the mid-2000s as economists became interested in increasingly complex systems in which players might be more inclined to seek globally favourable equilibria and attach value to other players' payoffs. Colman "et al." connect interest in the Berge equilibrium to interest in cooperative game theory, the evolution of cooperation, and topics like altruism in evolutionary game theory.
Definition.
Formal definition.
Consider a normal-form game formula_0, where formula_1 is the set of formula_2 players, formula_3 is the (nonempty) strategy set of player formula_4 where formula_5, and formula_6 is that player's utility function. Denote a strategy profile as formula_7, and denote an incomplete strategy profile formula_8. A strategy profile formula_9 is called a Berge equilibrium if, for any player formula_5 and any formula_10, the strategy profile satisfies formula_11.
Informal definition.
The players in a game are playing a Berge equilibrium if they have chosen a strategy profile such that, if any given player formula_4 sticks with their chosen strategy while some of the other players change their strategies, then player formula_4's payoff will not increase. So, every player in a Berge equilibrium guarantees the best possible payoff for every other player who is playing their Berge equilibrium strategy; this is a contrast with Nash equilibria, in which each player formula_4 is only concerned about maximizing their own payoffs from their strategy, and no other player cares about the payoff obtained by player formula_4.
Example.
Consider the following prisoner's dilemma game, from Larbani and Zhukovskii (2017), in which each player chooses either "cooperate" or "defect": the payoffs to the (row, column) players are (20, 20) if both cooperate, (10, 10) if both defect, and (5, 25) or (25, 5) if exactly one player defects, with the defector receiving 25.
Berge result.
A Berge equilibrium of this game is the situation in which both players pick "cooperate", denoted formula_12. This is a Berge equilibrium because each player can only lower the other player's payoff by switching their own strategy: if either player switched from "cooperate" to "defect", they would lower the other player's payoff from 20 down to 5.
Berge versus Nash result.
Notice first that the Berge equilibrium formula_12 is not a Nash equilibrium, because either the row player or the column player could increase their own payoff from 20 to 25 by switching to "defect" instead of "cooperate".
A Nash equilibrium of this prisoner's dilemma game is the situation in which both players pick "defect", denoted formula_13. That strategy pair yields a payoff of 10 to the row player and 10 to the column player, and no player has a unilateral incentive to switch their strategy to increase their own payoff. However, formula_13 is not a Berge equilibrium, because the row player could ensure a higher payoff for the column player by switching strategies, giving the column player a payoff of 25 instead of 10, and the column player could do the same for the row player.
The cooperative nature of the Berge equilibrium therefore avoids the mutual defection problem that has made the prisoner's dilemma a notorious example of the potential for Nash equilibrium reasoning to produce a mutually suboptimal result.
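For a two-player game with finitely many strategies, both solution concepts can be checked by direct enumeration. The sketch below encodes the payoffs used in this example and reports which strategy profiles are Nash equilibria and which are Berge equilibria:

```python
from itertools import product

# Payoffs (row, column) from the example above; strategies: C = cooperate, D = defect
payoff = {
    ("C", "C"): (20, 20),
    ("C", "D"): (5, 25),
    ("D", "C"): (25, 5),
    ("D", "D"): (10, 10),
}
strategies = ("C", "D")

def is_nash(s):
    # No player can raise their own payoff by changing their own strategy
    r, c = s
    return (all(payoff[(r, c)][0] >= payoff[(r2, c)][0] for r2 in strategies) and
            all(payoff[(r, c)][1] >= payoff[(r, c2)][1] for c2 in strategies))

def is_berge(s):
    # No change by the *other* player can raise a player's payoff
    r, c = s
    return (all(payoff[(r, c)][0] >= payoff[(r, c2)][0] for c2 in strategies) and
            all(payoff[(r, c)][1] >= payoff[(r2, c)][1] for r2 in strategies))

for s in product(strategies, repeat=2):
    print(s, "Nash" if is_nash(s) else "", "Berge" if is_berge(s) else "")
```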
Motivation.
The Berge equilibrium has been motivated as the exact opposite of a Nash equilibrium, in that while the Nash equilibrium models selfish behaviours, the Berge equilibrium models altruistic behaviours. Moussa Larbani and Vladislav Iosifovich Zhukovskii note that Berge equilibria could be interpreted as a method for formalising the Golden Rule in strategic interactions.
One advantage of the Berge equilibrium over the Nash equilibrium is that the Berge results may agree more closely with results obtained from experimental psychology and experimental economics. Several authors have noted that players asked to play games like the Prisoner's Dilemma or the ultimatum game in laboratory scenarios rarely reach the Nash Equilibrium result, in part because people in real situations often do attach value to the well-being of others, and that therefore Berge equilibria could sometimes be a better fit to real behaviour in certain situations.
A challenge for the use of Berge equilibria is that they do not have as strong existence properties as Nash equilibria, although their existence may be assured by adding extra conditions. The Berge equilibrium solution concept may also be used for games that do not satisfy the conditions for Nash's existence theorem and have no Nash equilibria, such as certain games with infinite strategy sets, or in situations where equilibria in pure strategies are desired and yet there are no Nash equilibria among the pure strategy profiles.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "G = \\langle N, S_i, u_i \\rangle"
},
{
"math_id": 1,
"text": "N = \\{1, 2, \\ldots, n\\}"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "S_i"
},
{
"math_id": 4,
"text": "i"
},
{
"math_id": 5,
"text": "i \\in N"
},
{
"math_id": 6,
"text": "u_i"
},
{
"math_id": 7,
"text": "s = (s_1, s_2, \\ldots, s_n) \\in S"
},
{
"math_id": 8,
"text": "s_{-i} = (s_1, s_2, \\ldots, s_{i-1}, s_{i+1}, \\ldots, s_n)"
},
{
"math_id": 9,
"text": "s^\\ast \\in S"
},
{
"math_id": 10,
"text": "s_{-i} \\in S_{-i}"
},
{
"math_id": 11,
"text": "u_i(s^\\ast_i,s_{-i}) \\leq u_i(s^\\ast)"
},
{
"math_id": 12,
"text": "(C,C)"
},
{
"math_id": 13,
"text": "(D,D)"
}
] |
https://en.wikipedia.org/wiki?curid=63415138
|
634233
|
Analytical hierarchy
|
Concept in mathematical logic and set theory
In mathematical logic and descriptive set theory, the analytical hierarchy is an extension of the arithmetical hierarchy. The analytical hierarchy of formulas includes formulas in the language of second-order arithmetic, which can have quantifiers over both the set of natural numbers, formula_0, and over functions from formula_0 to formula_0. The analytical hierarchy of sets classifies sets by the formulas that can be used to define them; it is the lightface version of the projective hierarchy.
The analytical hierarchy of formulas.
The notation formula_1
indicates the class of formulas in the language of second-order arithmetic with number quantifiers but no set quantifiers. This language does not contain set parameters. The Greek letters here are lightface symbols, indicating the language choice. Each corresponding boldface symbol denotes the corresponding class of formulas in the extended language with a parameter for each real; see projective hierarchy for details.
A formula in the language of second-order arithmetic is defined to be formula_2 if it is logically equivalent to a formula of the form formula_3 where formula_4 is formula_5. A formula is defined to be formula_6 if it is logically equivalent to a formula of the form formula_7 where formula_4 is formula_8. This inductive definition defines the classes formula_9 and formula_10 for every natural number formula_11.
Kuratowski and Tarski showed in 1931 that every formula in the language of second-order arithmetic has a prenex normal form, and therefore is formula_9 or formula_10 for some formula_11. Because meaningless quantifiers can be added to any formula, once a formula is given the classification formula_9 or formula_10 for some formula_11 it will be given the classifications formula_12 and formula_13 for all formula_14 greater than formula_11.
The analytical hierarchy of sets of natural numbers.
A set of natural numbers is assigned the classification formula_9 if it is definable by a formula_9 formula (with one free number variable and no free set variables). The set is assigned the classification formula_10 if it is definable by a formula_10 formula. If the set is both formula_9 and formula_10 then it is given the additional classification formula_15.
The formula_16 sets are called hyperarithmetical. An alternate classification of these sets by way of iterated computable functionals is provided by the hyperarithmetical theory.
The analytical hierarchy on subsets of Cantor and Baire space.
The analytical hierarchy can be defined on any effective Polish space; the definition is particularly simple for Cantor and Baire space because they fit with the language of ordinary second-order arithmetic. Cantor space is the set of all infinite sequences of 0s and 1s; Baire space is the set of all infinite sequences of natural numbers. These are both Polish spaces.
The ordinary axiomatization of second-order arithmetic uses a set-based language in which the set quantifiers can naturally be viewed as quantifying over Cantor space. A subset of Cantor space is assigned the classification formula_9 if it is definable by a formula_9 formula (with one free set variable and no free number variables). The set is assigned the classification formula_10 if it is definable by a formula_10 formula. If the set is both formula_9 and formula_10 then it is given the additional classification formula_15.
A subset of Baire space has a corresponding subset of Cantor space under the map that takes each function from formula_17 to formula_17 to the characteristic function of its graph. A subset of Baire space is given the classification formula_9, formula_10, or formula_15 if and only if the corresponding subset of Cantor space has the same classification. An equivalent definition of the analytical hierarchy on Baire space is given by defining the analytical hierarchy of formulas using a functional version of second-order arithmetic; then the analytical hierarchy on subsets of Cantor space can be defined from the hierarchy on Baire space. This alternate definition gives exactly the same classifications as the first definition.
Because Cantor space is homeomorphic to any finite Cartesian power of itself, and Baire space is homeomorphic to any finite Cartesian power of itself, the analytical hierarchy applies equally well to finite Cartesian powers of one of these spaces.
A similar extension is possible for countable powers and to products of powers of Cantor space and powers of Baire space.
Extensions.
As is the case with the arithmetical hierarchy, a relativized version of the analytical hierarchy can be defined. The language is extended to add a constant set symbol "A". A formula in the extended language is inductively defined to be formula_18 or formula_19 using the same inductive definition as above. Given a set formula_20, a set is defined to be formula_21 if it is definable by a formula_18 formula in which the symbol formula_22 is interpreted as formula_20; similar definitions for formula_23 and formula_24 apply. The sets that are formula_21 or formula_23, for any parameter "Y", are classified in the projective hierarchy, and often denoted by boldface Greek letters to indicate the use of parameters.
Properties.
For each formula_11 we have the following strict containments:
formula_38,
formula_39,
formula_40,
formula_41.
A set that is in formula_9 for some "n" is said to be analytical. Care is required to distinguish this usage from the term analytic set, which has a different meaning, namely formula_42.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbb{N}"
},
{
"math_id": 1,
"text": "\\Sigma^1_0 = \\Pi^1_0 = \\Delta^1_0"
},
{
"math_id": 2,
"text": "\\Sigma^1_{n+1}"
},
{
"math_id": 3,
"text": "\\exists X_1\\cdots \\exists X_k \\psi"
},
{
"math_id": 4,
"text": "\\psi"
},
{
"math_id": 5,
"text": "\\Pi^1_{n}"
},
{
"math_id": 6,
"text": "\\Pi^1_{n+1}"
},
{
"math_id": 7,
"text": "\\forall X_1\\cdots \\forall X_k \\psi"
},
{
"math_id": 8,
"text": "\\Sigma^1_{n}"
},
{
"math_id": 9,
"text": "\\Sigma^1_n"
},
{
"math_id": 10,
"text": "\\Pi^1_n"
},
{
"math_id": 11,
"text": "n"
},
{
"math_id": 12,
"text": "\\Sigma^1_m"
},
{
"math_id": 13,
"text": "\\Pi^1_m"
},
{
"math_id": 14,
"text": "m"
},
{
"math_id": 15,
"text": "\\Delta^1_n"
},
{
"math_id": 16,
"text": "\\Delta^1_1"
},
{
"math_id": 17,
"text": "\\omega"
},
{
"math_id": 18,
"text": "\\Sigma^{1,A}_n"
},
{
"math_id": 19,
"text": "\\Pi^{1,A}_n"
},
{
"math_id": 20,
"text": "Y"
},
{
"math_id": 21,
"text": "\\Sigma^{1,Y}_n"
},
{
"math_id": 22,
"text": "A"
},
{
"math_id": 23,
"text": "\\Pi^{1,Y}_n"
},
{
"math_id": 24,
"text": "\\Delta^{1,Y}_n"
},
{
"math_id": 25,
"text": "\\prec"
},
{
"math_id": 26,
"text": "\\mathbb N^2"
},
{
"math_id": 27,
"text": "\\mathbb N"
},
{
"math_id": 28,
"text": "\\Pi_1^1"
},
{
"math_id": 29,
"text": "\\Pi^1_1"
},
{
"math_id": 30,
"text": "\\Sigma^1_1"
},
{
"math_id": 31,
"text": "\\omega_1^{CK}"
},
{
"math_id": 32,
"text": "f:\\mathbb N\\to\\mathbb N"
},
{
"math_id": 33,
"text": "f"
},
{
"math_id": 34,
"text": "f:[0,1]\\to\\mathbb [0,1]"
},
{
"math_id": 35,
"text": "\\Delta_2^1"
},
{
"math_id": 36,
"text": "\\Sigma^{1,Y}_1"
},
{
"math_id": 37,
"text": "\\Delta^1_2"
},
{
"math_id": 38,
"text": "\\Pi^1_n \\subset \\Sigma^1_{n+1}"
},
{
"math_id": 39,
"text": "\\Pi^1_n \\subset \\Pi^1_{n+1}"
},
{
"math_id": 40,
"text": "\\Sigma^1_n \\subset \\Pi^1_{n+1}"
},
{
"math_id": 41,
"text": "\\Sigma^1_n \\subset \\Sigma^1_{n+1}"
},
{
"math_id": 42,
"text": "\\boldsymbol\\Sigma_1^1"
}
] |
https://en.wikipedia.org/wiki?curid=634233
|
634240
|
Wheel theory
|
Algebra where division is always defined
A wheel is a type of algebra (in the sense of universal algebra) where division is always defined. In particular, division by zero is meaningful. The real numbers can be extended to a wheel, as can any commutative ring.
The term "wheel" is inspired by the topological picture formula_0 of the real projective line together with an extra point ⊥ (bottom element) such that formula_1.
A wheel can be regarded as the equivalent of a commutative ring (and semiring) in which addition and multiplication do not form a group but, respectively, a commutative monoid and a commutative monoid with involution.
Definition.
A wheel is an algebraic structure formula_2, in which formula_3 is a set, formula_4 and formula_5 are distinguished elements of that set, formula_6 and formula_7 are binary operations, and formula_8 is a unary operation, satisfying the following properties: formula_6 and formula_7 are each commutative and associative, with formula_9 and formula_5 as their respective identities; formula_10; formula_11; formula_12; formula_13; formula_14; formula_15; formula_16; and formula_17.
Algebra of wheels.
Wheels replace the usual binary division with multiplication combined with a unary operation applied to one argument, formula_18, similar (but not identical) to the multiplicative inverse formula_19, such that formula_20 becomes shorthand for formula_21, but is neither formula_22 nor formula_23 in general. The rules of algebra are modified so that, in the general case, formula_24 and formula_25.
Other identities that may be derived are formula_27, formula_28, and formula_29,
where the negation formula_30 is defined by formula_31 and formula_32 if there is an element formula_33 such that formula_34 (thus in the general case formula_35).
However, for values of formula_26 satisfying formula_36 and formula_37, we get the usual formula_38 and formula_39.
If negation can be defined as above, then the subset formula_40 is a commutative ring, and every commutative ring is such a subset of a wheel. If formula_26 is an invertible element of the commutative ring, then formula_41. Thus, whenever formula_19 makes sense, it is equal to formula_18, but the latter is always defined, even when formula_42.
Examples.
Wheel of fractions.
Let formula_43 be a commutative ring, and let formula_44 be a multiplicative submonoid of formula_43. Define the congruence relation formula_45 on formula_46 via
formula_47 means that there exist formula_48 such that formula_49.
Define the "wheel of fractions" of formula_43 with respect to formula_44 as the quotient formula_50 (and denoting the equivalence class containing formula_51 as formula_52) with the operations
formula_53 (additive identity)
formula_54 (multiplicative identity)
formula_55 (reciprocal operation)
formula_56 (addition operation)
formula_57 (multiplication operation)
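The construction above can be made concrete for the integers with formula_44 taken to be all nonzero integers, which yields the rational numbers extended by two extra elements, an "infinity" 1/0 and a bottom element 0/0. The following sketch is an illustration of the operations as defined above, with each equivalence class stored in a canonical reduced form:

```python
from math import gcd

def canon(x1, x2):
    """Canonical representative of the class [x1, x2] over the integers with S = nonzero integers:
    (0, 0) is the bottom element 0/0, (1, 0) is 1/0, otherwise a reduced fraction."""
    if x1 == 0 and x2 == 0:
        return (0, 0)
    g = gcd(x1, x2)
    x1, x2 = x1 // g, x2 // g
    if x2 < 0 or (x2 == 0 and x1 < 0):   # fix the sign so the representative is unique
        x1, x2 = -x1, -x2
    return (x1, x2)

def w_add(a, b):                          # [a1,a2] + [b1,b2] = [a1*b2 + a2*b1, a2*b2]
    (a1, a2), (b1, b2) = a, b
    return canon(a1 * b2 + a2 * b1, a2 * b2)

def w_mul(a, b):                          # [a1,a2] * [b1,b2] = [a1*b1, a2*b2]
    (a1, a2), (b1, b2) = a, b
    return canon(a1 * b1, a2 * b2)

def w_div(a):                             # the unary /: /[a1,a2] = [a2,a1]
    a1, a2 = a
    return canon(a2, a1)

ZERO, ONE = canon(0, 1), canon(1, 1)
BOTTOM = w_mul(ZERO, w_div(ZERO))         # 0/0, the bottom element
print(BOTTOM, w_add(BOTTOM, ONE))         # the bottom element absorbs addition: 0/0 + x = 0/0
print(w_mul(canon(2, 3), w_div(canon(2, 3))))  # x * /x = 1 here, since 0x = 0 for ordinary fractions
```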
Projective line and Riemann sphere.
The special case of the above starting with a field produces a projective line extended to a wheel by adjoining a bottom element denoted ⊥, where formula_58. The projective line is itself an extension of the original field by an element formula_59, where formula_60 for any element formula_61 in the field. However, formula_62 is still undefined on the projective line, but is defined in its extension to a wheel.
Starting with the real numbers, the corresponding projective "line" is geometrically a circle, and then the extra point formula_62 gives the shape that is the source of the term "wheel". Or starting with the complex numbers instead, the corresponding projective "line" is a sphere (the Riemann sphere), and then the extra point gives a 3-dimensional version of a wheel.
Citations.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\odot"
},
{
"math_id": 1,
"text": "\\bot = 0/0"
},
{
"math_id": 2,
"text": "(W, 0, 1, +, \\cdot, /)"
},
{
"math_id": 3,
"text": "W"
},
{
"math_id": 4,
"text": "{}0"
},
{
"math_id": 5,
"text": "1"
},
{
"math_id": 6,
"text": "+"
},
{
"math_id": 7,
"text": "\\cdot"
},
{
"math_id": 8,
"text": "/"
},
{
"math_id": 9,
"text": "\\,0"
},
{
"math_id": 10,
"text": "//x = x"
},
{
"math_id": 11,
"text": "/(xy) = /x/y"
},
{
"math_id": 12,
"text": "(x + y)z + 0z = xz + yz"
},
{
"math_id": 13,
"text": "(x + yz)/y = x/y + z + 0y"
},
{
"math_id": 14,
"text": "0\\cdot 0 = 0"
},
{
"math_id": 15,
"text": "(x+0y)z = xz + 0y"
},
{
"math_id": 16,
"text": "/(x+0y) = /x + 0y"
},
{
"math_id": 17,
"text": "0/0 + x = 0/0"
},
{
"math_id": 18,
"text": "/x"
},
{
"math_id": 19,
"text": "x^{-1}"
},
{
"math_id": 20,
"text": "a/b"
},
{
"math_id": 21,
"text": "a \\cdot /b = /b \\cdot a"
},
{
"math_id": 22,
"text": "a \\cdot b^{-1}"
},
{
"math_id": 23,
"text": "b^{-1} \\cdot a"
},
{
"math_id": 24,
"text": "0x \\neq 0"
},
{
"math_id": 25,
"text": "x/x \\neq 1"
},
{
"math_id": 26,
"text": "x"
},
{
"math_id": 27,
"text": "0x + 0y = 0xy"
},
{
"math_id": 28,
"text": "x/x = 1 + 0x/x"
},
{
"math_id": 29,
"text": "x-x = 0x^2"
},
{
"math_id": 30,
"text": "-x"
},
{
"math_id": 31,
"text": " -x = ax "
},
{
"math_id": 32,
"text": "x - y = x + (-y)"
},
{
"math_id": 33,
"text": "a"
},
{
"math_id": 34,
"text": "1 + a = 0"
},
{
"math_id": 35,
"text": "x - x \\neq 0"
},
{
"math_id": 36,
"text": "0x = 0"
},
{
"math_id": 37,
"text": "0/x = 0"
},
{
"math_id": 38,
"text": "x/x = 1"
},
{
"math_id": 39,
"text": "x-x = 0"
},
{
"math_id": 40,
"text": "\\{x\\mid 0x=0\\}"
},
{
"math_id": 41,
"text": "x^{-1} = /x"
},
{
"math_id": 42,
"text": "x=0"
},
{
"math_id": 43,
"text": "A"
},
{
"math_id": 44,
"text": "S"
},
{
"math_id": 45,
"text": "\\sim_S"
},
{
"math_id": 46,
"text": "A \\times A"
},
{
"math_id": 47,
"text": "(x_1,x_2)\\sim_S(y_1,y_2)"
},
{
"math_id": 48,
"text": "s_x,s_y \\in S"
},
{
"math_id": 49,
"text": "(s_x x_1,s_x x_2) = (s_y y_1,s_y y_2)"
},
{
"math_id": 50,
"text": "A \\times A~/{\\sim_S}"
},
{
"math_id": 51,
"text": "(x_1,x_2)"
},
{
"math_id": 52,
"text": "[x_1,x_2]"
},
{
"math_id": 53,
"text": "0 = [0_A,1_A]"
},
{
"math_id": 54,
"text": "1 = [1_A,1_A]"
},
{
"math_id": 55,
"text": "/[x_1,x_2] = [x_2,x_1]"
},
{
"math_id": 56,
"text": "[x_1,x_2] + [y_1,y_2] = [x_1y_2 + x_2 y_1,x_2 y_2]"
},
{
"math_id": 57,
"text": "[x_1,x_2] \\cdot [y_1,y_2] = [x_1 y_1,x_2 y_2]"
},
{
"math_id": 58,
"text": "0/0=\\bot"
},
{
"math_id": 59,
"text": "\\infty"
},
{
"math_id": 60,
"text": "z/0=\\infty"
},
{
"math_id": 61,
"text": "z\\neq 0"
},
{
"math_id": 62,
"text": "0/0"
}
] |
https://en.wikipedia.org/wiki?curid=634240
|
634261
|
Analytic set
|
In the mathematical field of descriptive set theory, a subset of a Polish space formula_0 is an analytic set if it is a continuous image of a Polish space. These sets were first defined by and his student .
Definition.
There are several equivalent definitions of analytic set. The following conditions on a subspace "A" of a Polish space "X" are equivalent: "A" is empty or a continuous image of the Baire space; "A" is a continuous image of a Polish space; and there exist a Polish space formula_1 and a Borel set formula_2 such that formula_3 is the projection of formula_4 onto formula_0, that is,
formula_5
An alternative characterization, in the specific, important, case that formula_0 is Baire space ωω, is that the analytic sets are precisely the projections of trees on formula_6. Similarly, the analytic subsets of Cantor space 2ω are precisely the projections of trees on formula_7.
Properties.
Analytic subsets of Polish spaces are closed under countable unions and intersections, continuous images, and inverse images.
The complement of an analytic set need not be analytic. Suslin proved that if the complement of an analytic set is analytic then the set is Borel. (Conversely any Borel set is analytic and Borel sets are closed under complements.) Luzin proved more generally that any two disjoint analytic sets are separated by a Borel set: in other words there is a Borel set including one and disjoint from the other. This is sometimes called the "Luzin separability principle" (though it was implicit in the proof of Suslin's theorem).
Analytic sets are always Lebesgue measurable (indeed, universally measurable) and have the property of Baire and the perfect set property.
Examples.
When formula_3 is a set of natural numbers, the set formula_8 is called the difference set of formula_3. The set of difference sets of natural numbers is an analytic set, and is complete for analytic sets.
Projective hierarchy.
Analytic sets are also called formula_9 (see projective hierarchy). The bold font here is not mere emphasis; it is used to distinguish the class from its lightface counterpart formula_10 (see analytical hierarchy). The complements of analytic sets are called coanalytic sets, and the set of coanalytic sets is denoted by formula_11.
The intersection formula_12 is the set of Borel sets.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "Y"
},
{
"math_id": 2,
"text": "B\\subseteq X\\times Y"
},
{
"math_id": 3,
"text": "A"
},
{
"math_id": 4,
"text": "B"
},
{
"math_id": 5,
"text": "A=\\{x\\in X|(\\exists y\\in Y)\\langle x,y \\rangle\\in B\\}."
},
{
"math_id": 6,
"text": "\\omega\\times\\omega"
},
{
"math_id": 7,
"text": "2\\times\\omega"
},
{
"math_id": 8,
"text": "\\{x-y\\mid y\\leq x\\land x,y\\in A\\}"
},
{
"math_id": 9,
"text": "\\boldsymbol{\\Sigma}^1_1"
},
{
"math_id": 10,
"text": "\\Sigma^1_1"
},
{
"math_id": 11,
"text": "\\boldsymbol{\\Pi}^1_1"
},
{
"math_id": 12,
"text": "\\boldsymbol{\\Delta}^1_1=\\boldsymbol{\\Sigma}^1_1\\cap \\boldsymbol{\\Pi}^1_1"
}
] |
https://en.wikipedia.org/wiki?curid=634261
|
6344
|
Capsid
|
Protein shell of a virus
A capsid is the protein shell of a virus, enclosing its genetic material. It consists of several oligomeric (repeating) structural subunits made of protein called protomers. The observable 3-dimensional morphological subunits, which may or may not correspond to individual proteins, are called capsomeres. The proteins making up the capsid are called capsid proteins or viral coat proteins (VCP). The capsid and inner genome is called the nucleocapsid.
Capsids are broadly classified according to their structure. The majority of the viruses have capsids with either helical or icosahedral structure. Some viruses, such as bacteriophages, have developed more complicated structures due to constraints of elasticity and electrostatics. The icosahedral shape, which has 20 equilateral triangular faces, approximates a sphere, while the helical shape resembles the shape of a spring, taking the space of a cylinder but not being a cylinder itself. The capsid faces may consist of one or more proteins. For example, the foot-and-mouth disease virus capsid has faces consisting of three proteins named VP1–3.
Some viruses are "enveloped", meaning that the capsid is coated with a lipid membrane known as the viral envelope. The envelope is acquired by the capsid from an intracellular membrane in the virus' host; examples include the inner nuclear membrane, the Golgi membrane, and the cell's outer membrane.
Once the virus has infected a cell and begins replicating itself, new capsid subunits are synthesized using the protein biosynthesis mechanism of the cell. In some viruses, including those with helical capsids and especially those with RNA genomes, the capsid proteins co-assemble with their genomes. In other viruses, especially more complex viruses with double-stranded DNA genomes, the capsid proteins assemble into empty precursor procapsids that include a specialized portal structure at one vertex. Through this portal, viral DNA is translocated into the capsid.
Structural analyses of major capsid protein (MCP) architectures have been used to categorise viruses into lineages. For example, the bacteriophage PRD1, the algal virus "Paramecium bursaria Chlorella virus-1" (PBCV-1), mimivirus and the mammalian adenovirus have been placed in the same lineage, whereas tailed, double-stranded DNA bacteriophages ("Caudovirales") and herpesvirus belong to a second lineage.
Specific shapes.
Icosahedral.
The icosahedral structure is extremely common among viruses. The icosahedron consists of 20 triangular faces delimited by 12 fivefold vertexes and consists of 60 asymmetric units. Thus, an icosahedral virus is made of 60N protein subunits. The number and arrangement of capsomeres in an icosahedral capsid can be classified using the "quasi-equivalence principle" proposed by Donald Caspar and Aaron Klug. Like the Goldberg polyhedra, an icosahedral structure can be regarded as being constructed from pentamers and hexamers. The structures can be indexed by two integers "h" and "k", with formula_0 and formula_1; the structure can be thought of as taking "h" steps from the edge of a pentamer, turning 60 degrees counterclockwise, then taking "k" steps to get to the next pentamer. The triangulation number "T" for the capsid is defined as:
formula_2
In this scheme, icosahedral capsids contain 12 pentamers plus 10("T" − 1) hexamers. The "T"-number is representative of the size and complexity of the capsids. Geometric examples for many values of "h", "k", and "T" can be found at List of geodesic polyhedra and Goldberg polyhedra.
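The counting rule is simple to state in code. The following sketch (a small illustration) computes the triangulation number and the resulting numbers of subunits and capsomeres for given "h" and "k":

```python
def capsid_geometry(h, k):
    """Triangulation number and capsomere counts for an icosahedral capsid
    indexed by the integers h >= 1 and k >= 0."""
    T = h * h + h * k + k * k
    return {"T": T, "subunits": 60 * T, "pentamers": 12, "hexamers": 10 * (T - 1)}

# For example, h = 1, k = 1 gives T = 3, i.e. 180 subunits, 12 pentamers and 20 hexamers
print(capsid_geometry(1, 1))
```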
Many exceptions to this rule exist: for example, the polyomaviruses and papillomaviruses have pentamers instead of hexamers in hexavalent positions on a quasi T = 7 lattice. Members of the double-stranded RNA virus lineage, including reovirus, rotavirus and bacteriophage φ6, have capsids built of 120 copies of capsid protein, corresponding to a T = 2 capsid, or arguably a T = 1 capsid with a dimer in the asymmetric unit. Similarly, many small viruses have a pseudo T = 3 (or P = 3) capsid, which is organized according to a T = 3 lattice, but with distinct polypeptides occupying the three quasi-equivalent positions.
T-numbers can be represented in different ways, for example "T" = 1 can only be represented as an icosahedron or a dodecahedron and, depending on the type of quasi-symmetry, "T" = 3 can be presented as a truncated dodecahedron, an icosidodecahedron, or a truncated icosahedron and their respective duals a triakis icosahedron, a rhombic triacontahedron, or a pentakis dodecahedron.
Prolate.
An elongated icosahedron is a common shape for the heads of bacteriophages. Such a structure is composed of a cylinder with a cap at either end. The cylinder is composed of 10 elongated triangular faces. The Q number (or Tmid), which can be any positive integer, specifies the number of triangles, composed of asymmetric subunits, that make up the 10 triangles of the cylinder. The caps are classified by the T (or Tend) number.
The bacterium "E. coli" is the host for bacteriophage T4 that has a prolate head structure. The bacteriophage encoded gp31 protein appears to be functionally homologous to "E. coli" chaperone protein GroES and able to substitute for it in the assembly of bacteriophage T4 virions during infection. Like GroES, gp31 forms a stable complex with GroEL chaperonin that is absolutely necessary for the folding and assembly "in vivo" of the bacteriophage T4 major capsid protein gp23.
Helical.
Many rod-shaped and filamentous plant viruses have capsids with helical symmetry. The helical structure can be described as a set of "n" 1-D molecular helices related by an "n"-fold axial symmetry. Helical transformations are classified into two categories: one-dimensional and two-dimensional helical systems. Creating an entire helical structure relies on a set of translational and rotational matrices which are coded in the protein data bank. Helical symmetry is given by the formula "P" = "μ" x "ρ", where "μ" is the number of structural units per turn of the helix, "ρ" is the axial rise per unit and "P" is the pitch of the helix. The structure is said to be open due to the characteristic that any volume can be enclosed by varying the length of the helix. The most understood helical virus is the tobacco mosaic virus. The virus is a single molecule of (+) strand RNA. Each coat protein on the interior of the helix binds three nucleotides of the RNA genome. Influenza A viruses differ by comprising multiple ribonucleoproteins; the viral NP protein organizes the RNA into a helical structure. The size is also different; the tobacco mosaic virus has 16.33 protein subunits per helical turn, while the influenza A virus has a 28 amino acid tail loop.
Functions.
The functions of the capsid are to protect the genome, deliver the genome, and interact with the host.
The virus must assemble a stable, protective protein shell to protect the genome from lethal chemical and physical agents. These include extremes of pH or temperature and proteolytic and nucleolytic enzymes. For non-enveloped viruses, the capsid itself may be involved in interaction with receptors on the host cell, leading to penetration of the host cell membrane and internalization of the capsid. Delivery of the genome occurs by subsequent uncoating or disassembly of the capsid and release of the genome into the cytoplasm, or by ejection of the genome through a specialized portal structure directly into the host cell nucleus.
Origin and evolution.
It has been suggested that many viral capsid proteins have evolved on multiple occasions from functionally diverse cellular proteins. The recruitment of cellular proteins appears to have occurred at different stages of evolution so that some cellular proteins were captured and refunctionalized prior to the divergence of cellular organisms into the three contemporary domains of life, whereas others were hijacked relatively recently. As a result, some capsid proteins are widespread in viruses infecting distantly related organisms (e.g., capsid proteins with the jelly-roll fold), whereas others are restricted to a particular group of viruses (e.g., capsid proteins of alphaviruses).
A computational model (2015) has shown that capsids may have originated before viruses and that they served as a means of horizontal transfer between replicator communities since these communities could not survive if the number of gene parasites increased, with certain genes being responsible for the formation of these structures and those that favored the survival of self-replicating communities. The displacement of these ancestral genes between cellular organisms could favor the appearance of new viruses during evolution.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "h \\ge 1"
},
{
"math_id": 1,
"text": "k \\ge 0"
},
{
"math_id": 2,
"text": " T = h^2 + h \\cdot k + k^2 "
}
] |
https://en.wikipedia.org/wiki?curid=6344
|
63442098
|
Hyperbolastic functions
|
Mathematical functions
The hyperbolastic functions, also known as hyperbolastic growth models, are mathematical functions that are used in medical statistical modeling. These models were originally developed to capture the growth dynamics of multicellular tumor spheres, and were introduced in 2005 by Mohammad Tabatabai, David Williams, and Zoran Bursac. The precision of hyperbolastic functions in modeling real world problems is due in part to their flexibility in their point of inflection. These functions can be used in a wide variety of modeling problems such as tumor growth, stem cell proliferation, pharmacokinetics, cancer growth, sigmoid activation function in neural networks, and epidemiological disease progression or regression.
The "hyperbolastic functions" can model both growth and decay curves until it reaches carrying capacity. Due to their flexibility,
these models have diverse applications in the medical field, with the ability to capture disease progression with an intervening
treatment. As the figures indicate, "hyperbolastic functions" can fit a sigmoidal curve indicating that the slowest rate occurs at
the early and late stages. In addition to the presenting sigmoidal shapes, it can also accommodate biphasic situations where medical
interventions slow or reverse disease progression; but, when the effect of the treatment vanishes, the disease will begin the
second phase of its progression until it reaches its horizontal asymptote.
One of the main characteristics of these functions is that they can not only fit sigmoidal shapes, but can also model biphasic growth patterns that other classical sigmoidal curves cannot adequately model. This distinguishing feature has advantageous applications in various fields including medicine, biology, economics, engineering, agronomy, and computer aided system theory.
Function H1.
The "hyperbolastic rate equation of type I", denoted H1, is given by
formula_0
where formula_1 is any real number and
formula_2 is the population size at formula_1. The parameter formula_3 represents carrying capacity, and parameters formula_4 and formula_5 jointly represent growth rate. The parameter formula_5 gives the distance from a symmetric sigmoidal curve. Solving the hyperbolastic rate equation of type I for formula_6 gives
formula_7
where formula_8 is the inverse hyperbolic sine function. If one desires to use the initial condition formula_9, then formula_10 can be expressed as
formula_11.
If formula_12, then formula_10 reduces to
formula_13.
In the event that a vertical shift is needed to give a better model fit, one can add the shift parameter formula_14, which would result in the following formula
formula_15.
The "hyperbolastic function of type I" generalizes the logistic function. If the parameters formula_16, then it would become a logistic function. This function formula_17 is a "hyperbolastic function of type I". The "standard hyperbolastic function of type I" is
formula_18.
Function H2.
The "hyperbolastic rate equation of type II", denoted by H2, is defined as
formula_19
where formula_20 is the hyperbolic tangent function, formula_3 is the carrying capacity, and both formula_4 and formula_21 jointly determine the growth rate. In addition, the parameter formula_22 represents acceleration in the time course. Solving the hyperbolastic rate function of type II for formula_23 gives
formula_24.
If one desires to use initial condition formula_25 then formula_10 can be expressed as
formula_26.
If formula_12, then formula_10 reduces to
formula_27.
Similarly, in the event that a vertical shift is needed to give a better fit, one can use the following formula
formula_28.
The "standard hyperbolastic function of type II" is defined as
formula_29.
Function H3.
The hyperbolastic rate equation of type III is denoted by H3 and has the form
formula_30,
where formula_31 > 0. The parameter formula_3 represents the carrying capacity, and the parameters formula_32 formula_33 and formula_5 jointly determine the growth rate. The parameter formula_33 represents acceleration of the time scale, while the size of formula_5 represents distance from a symmetric sigmoidal curve. The solution to the differential equation of type III is
formula_34,
with the initial condition formula_35 we can express formula_10 as
formula_36.
The hyperbolastic distribution of type III is a three-parameter family of continuous probability distributions with scale parameters formula_4 > 0 and formula_5 ≥ 0 and shape parameter formula_22. When formula_5 = 0, the hyperbolastic distribution of type III reduces to the Weibull distribution. The hyperbolastic cumulative distribution function of type III is given by
formula_37,
and its corresponding probability density function is
formula_38.
The hazard function formula_39 (or failure rate) is given by
formula_40
The survival function formula_41 is given by
formula_42
The standard hyperbolastic cumulative distribution function of type III is defined as
formula_43,
and its corresponding probability density function is
formula_44.
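These distribution functions translate directly into code. The sketch below (Python with NumPy, arbitrary illustrative parameter values) evaluates the type III cumulative distribution and density for nonnegative arguments; setting the third parameter to zero recovers the Weibull case mentioned above.

```python
import numpy as np

def h3_cdf(x, delta, gamma, theta):
    """Hyperbolastic type III cumulative distribution function (x >= 0)."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, 1.0 - np.exp(-delta * x**gamma - np.arcsinh(theta * x)), 0.0)

def h3_pdf(x, delta, gamma, theta):
    """Hyperbolastic type III probability density function (x >= 0)."""
    x = np.asarray(x, dtype=float)
    tail = np.exp(-delta * x**gamma - np.arcsinh(theta * x))
    hazard = delta * gamma * x**(gamma - 1) + theta / np.sqrt(1.0 + (theta * x)**2)
    return np.where(x >= 0, tail * hazard, 0.0)

x = np.array([0.5, 1.0, 2.0])
print(h3_cdf(x, delta=0.8, gamma=1.2, theta=0.4))
print(h3_pdf(x, delta=0.8, gamma=1.2, theta=0.4))
```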
Properties.
If one desires to calculate the point formula_1 where the population reaches a percentage of its carrying capacity formula_3, then
one can solve the equation
formula_45
for formula_1, where formula_46. For instance, the half point can be found by setting formula_47.
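Because this equation generally has no closed-form solution for formula_1, the crossing point is usually located numerically. The sketch below is one way to do this for the type I curve, using SciPy's bracketing root finder; the parameter values are illustrative only and do not come from the cited studies.

```python
import numpy as np
from scipy.optimize import brentq

def h1(x, M, delta, theta, P0):
    alpha = (M - P0) / P0
    return M / (1.0 + alpha * np.exp(-delta * x - theta * np.arcsinh(x)))

# Point where the population reaches half of its carrying capacity (k = 1/2)
M, delta, theta, P0 = 100.0, 0.5, 0.2, 5.0
x_half = brentq(lambda x: h1(x, M, delta, theta, P0) - 0.5 * M, 0.0, 50.0)
print(x_half)
```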
Applications.
According to stem cell researchers at McGowan Institute for Regenerative Medicine at the University of Pittsburgh, "a newer model [called the hyperbolastic type III or] H3 is a differential equation that also describes the cell growth. This model allows for much more variation and has been proven to better predict growth."
The hyperbolastic growth models H1, H2, and H3 have been applied to analyze the growth of solid Ehrlich carcinoma using a variety of treatments.
In animal science, the hyperbolastic functions have been used for modeling broiler chicken growth. The hyperbolastic model of type III was used to determine the size of the recovering wound.
In the area of wound healing, the hyperbolastic models accurately represent the time course of healing. Such functions have been used to investigate variations in the healing velocity among different kinds of wounds and at different stages in the healing process, taking into consideration trace elements, growth factors, diabetic wounds, and nutrition.
Another application of hyperbolastic functions is in the area of the stochastic diffusion process, whose mean function is a hyperbolastic curve. The main characteristics of the process are studied and the maximum likelihood estimation for the parameters of the process is considered.
To this end, the firefly metaheuristic optimization algorithm is applied after bounding the parametric space by a stage wise procedure. Some examples based on simulated sample paths and real data illustrate this development. A sample path of a diffusion process models the trajectory of a particle embedded in a flowing fluid and subjected to random displacements due to collisions with other particles, which is called Brownian motion. The hyperbolastic function of type III was used to model the proliferation of both adult mesenchymal and embryonic stem cells; and, the hyperbolastic mixed model of type II has been used in modeling cervical cancer data. Hyperbolastic curves can be an important tool in analyzing cellular growth, the fitting of biological curves, the growth of phytoplankton, and instantaneous maturity rate.
In forest ecology and management, the hyperbolastic models have been applied to model the relationship between DBH and height.
The multivariable "hyperbolastic model type III" has been used to analyze the growth dynamics of phytoplankton taking into consideration the concentration of nutrients.
Hyperbolastic regressions.
Hyperbolastic regressions are statistical models that utilize standard hyperbolastic functions to model a dichotomous or multinomial outcome variable. The purpose of hyperbolastic regression is to predict an outcome using a set of explanatory (independent) variables. These types of regressions are routinely used in many areas, including the medical, public health, dental, and biomedical sciences, as well as the social, behavioral, and engineering sciences. For instance, binary regression analysis has been used to predict endoscopic lesions in iron deficiency anemia. In addition, binary regression was applied to differentiate between malignant and benign adnexal masses prior to surgery.
The binary hyperbolastic regression of type I.
Let formula_48 be a binary outcome variable which can assume one of two mutually exclusive values, success or failure. If we code success as formula_49 and failure as formula_50, then for parameter formula_51, the hyperbolastic success probability of type I, for a sample of size formula_52 and as a function of the parameter formula_5 and the parameter vector formula_53, given a formula_54-dimensional vector of explanatory variables formula_55, where formula_56, is given by
formula_57.
The odds of success is the ratio of the probability of success to the probability of failure. For binary hyperbolastic regression of type I, the odds of success is denoted by formula_58 and expressed by the equation
formula_59.
The logarithm of formula_58 is called the logit of binary hyperbolastic regression of type I. The logit transformation is denoted by formula_60 and can be written as
formula_61.
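As a concrete illustration, the success probability and its logit can be computed from a design matrix; the short sketch below uses made-up numbers, and setting formula_5 = 0 in it recovers ordinary logistic regression.

```python
import numpy as np

def h1_success_probability(X, beta, theta):
    """Binary hyperbolastic (type I) success probability for each row of X.

    X: (n, p) array of explanatory variables; beta: length p+1 vector with the
    intercept first; theta: the extra shape parameter of the H1 link.
    """
    Z = beta[0] + X @ beta[1:]
    logit = Z + theta * np.arcsinh(Z)     # logit of the H1 regression
    return 1.0 / (1.0 + np.exp(-logit))

X = np.array([[0.5, 1.2], [1.5, -0.3]])
beta = np.array([-0.4, 0.8, 0.3])
print(h1_success_probability(X, beta, theta=0.5))
```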
Shannon information for binary hyperbolastic of type I (H1).
The Shannon information for the random variable formula_48 is defined as
formula_62
where the base of the logarithm satisfies formula_63 and formula_64. For a binary outcome, formula_65 is equal to formula_66.
For the binary hyperbolastic regression of type I, the information formula_67 is given by
formula_68,
where formula_69, and formula_70 is the formula_71 input data.
For a random sample of binary outcomes of size formula_52, the average empirical information for hyperbolastic H1 can be estimated by
formula_72,
where formula_73, and formula_74 is the formula_71 input data for the formula_75 observation.
Information Entropy for hyperbolastic H1.
Information entropy measures the loss of information in a transmitted message or signal. In machine learning applications, it is the number of bits necessary to transmit a randomly selected event from a probability distribution. For a discrete random variable formula_48, the information entropy formula_76 is defined as
formula_77
where formula_78 is the probability mass function for the random variable formula_48.
The information entropy is the mathematical expectation of formula_67 with respect to probability mass function formula_78. The Information entropy has many applications in machine learning and artificial intelligence such as classification modeling and decision trees. For the hyperbolastic H1, the entropy formula_76 is equal to
formula_79
The estimated average entropy for hyperbolastic H1 is denoted by formula_80 and is given by
formula_81
Binary Cross-entropy for hyperbolastic H1.
The binary cross-entropy compares the observed formula_82 with the predicted probabilities. The average binary cross-entropy for hyperbolastic H1 is denoted by formula_83 and is equal to
formula_84
The binary hyperbolastic regression of type II.
The hyperbolastic regression of type II is an alternative method for the analysis of binary data with robust properties. For the binary outcome variable formula_48, the hyperbolastic success probability of type II is a function of a formula_54-dimensional vector of explanatory variables formula_85 given by
formula_86 ,
For the binary hyperbolastic regression of type II, the odds of success is denoted by formula_87 and is defined as
formula_88
The logit transformation formula_89 is given by
formula_90
Shannon information for binary hyperbolastic of type II (H2).
For the binary hyperbolastic regression H2, the Shannon information formula_67 is given by
formula_91
where formula_69, and formula_70 is the formula_71 input data.
For a random sample of binary outcomes of size formula_52, the average empirical information for hyperbolastic H2 is estimated by
formula_92
where formula_93, and formula_74 is the formula_71 input data for the formula_75 observation.
Information Entropy for hyperbolastic H2.
For the hyperbolastic H2, the information entropy formula_76 is equal to
formula_94
and the estimated average entropy formula_80 for hyperbolastic H2 is
formula_95
Binary Cross-entropy for hyperbolastic H2.
The average binary cross-entropy formula_83 for hyperbolastic H2 is
formula_96
Parameter estimation for the binary hyperbolastic regression of type I and II.
The estimate of the parameter vector formula_97 can be obtained by maximizing the log-likelihood function
formula_98
where formula_99 is defined according to one of the two types of hyperbolastic functions used.
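In practice the maximization is carried out numerically. The sketch below shows one possible setup for the type I case, using SciPy's general-purpose optimizer on simulated data with formula_5 held fixed for simplicity; it is an illustration only, not the estimation procedure used in the cited studies.

```python
import numpy as np
from scipy.optimize import minimize

def h1_prob(X, beta, theta):
    Z = beta[0] + X @ beta[1:]
    return 1.0 / (1.0 + np.exp(-Z - theta * np.arcsinh(Z)))

def negative_log_likelihood(beta, X, y, theta):
    p = np.clip(h1_prob(X, beta, theta), 1e-12, 1 - 1e-12)   # guard the logarithms
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(0)                               # simulated data only
X = rng.normal(size=(200, 2))
true_beta, theta = np.array([0.2, 1.0, -0.7]), 0.5
y = (rng.random(200) < h1_prob(X, true_beta, theta)).astype(float)

fit = minimize(negative_log_likelihood, x0=np.zeros(3), args=(X, y, theta), method="BFGS")
print(fit.x)   # estimates of (beta_0, beta_1, beta_2) for the fixed theta
```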
The multinomial hyperbolastic regression of type I and II.
The generalization of the binary hyperbolastic regression to multinomial hyperbolastic regression has a response variable formula_100 for individual formula_101 with formula_102 categories (i.e. formula_103). When formula_104, this model reduces to a binary hyperbolastic regression.
For each formula_105, we form formula_102 indicator variables formula_106 where
formula_107,
meaning that formula_108 whenever the formula_75 response is in category formula_109 and formula_110 otherwise.
Define parameter vector formula_111 in a formula_112-dimensional Euclidean space and formula_113.
Using category 1 as a reference and formula_114 as its corresponding probability function, the multinomial hyperbolastic regression of type I probabilities are defined as
formula_115
and for formula_116,
formula_117
Similarly, for the multinomial hyperbolastic regression of type II we have
formula_118
and for formula_116,
formula_119
where formula_120 with formula_121 and formula_122.
The choice of formula_123 is dependent on the choice of hyperbolastic H1 or H2.
Shannon Information for multiclass hyperbolastic H1 or H2.
For the multiclass formula_124, the Shannon information formula_125 is
formula_126.
For a random sample of size formula_52, the empirical multiclass information can be estimated by
formula_127.
Multiclass Entropy in Information Theory.
For a discrete random variable formula_48, the multiclass information entropy is defined as
formula_128
where formula_78 is the probability mass function for the multiclass random variable formula_48.
For the hyperbolastic H1 or H2, the multiclass entropy formula_76 is equal to
formula_129
The estimated average multiclass entropy formula_130 is equal to
formula_131
Multiclass Cross-entropy for hyperbolastic H1 or H2.
Multiclass cross-entropy compares the observed multiclass output with the predicted probabilities. For a random sample of multiclass outcomes of size formula_52, the average multiclass cross-entropy formula_83 for hyperbolastic H1 or H2 can be estimated by
formula_132
The log-odds of membership in category formula_109 versus the reference category 1, denoted by formula_133, is equal to
formula_134
where formula_135 and formula_136. The estimated parameter matrix formula_137 of multinomial hyperbolastic regression is obtained by maximizing the log-likelihood function. The maximum likelihood estimates of the parameter matrix formula_138 is
formula_139
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\frac{dP(x)}{dx}= \\frac{P(x)}{M} \\left(M-P \\left(x\\right)\\right)\\left(\\delta+\\frac{\\theta}{\\sqrt{1+x^2}}\\right),"
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "P\\left(x \\right)"
},
{
"math_id": 3,
"text": "M"
},
{
"math_id": 4,
"text": "\\delta"
},
{
"math_id": 5,
"text": "\\theta"
},
{
"math_id": 6,
"text": "P \\left(x \\right)"
},
{
"math_id": 7,
"text": "P(x)= \\frac{M}{1+ \\alpha e^{-\\delta x- \\theta \\operatorname{arsinh}(x)}},"
},
{
"math_id": 8,
"text": "\\operatorname{arsinh}"
},
{
"math_id": 9,
"text": "P\\left(x_0\\right)=P_0"
},
{
"math_id": 10,
"text": "\\alpha"
},
{
"math_id": 11,
"text": "\\alpha=\\frac{M-P_0}{P_0} e^{\\delta x_0+ \\theta \\operatorname{arsinh}(x_0)}"
},
{
"math_id": 12,
"text": "x_0=0"
},
{
"math_id": 13,
"text": "\\alpha= \\frac{M-P_0}{P_0}"
},
{
"math_id": 14,
"text": "\\zeta"
},
{
"math_id": 15,
"text": "P(x)= \\frac{M}{1+ \\alpha e^{-\\delta x- \\theta \\operatorname{arsinh}(x)}} + \\zeta"
},
{
"math_id": 16,
"text": "\\theta = 0"
},
{
"math_id": 17,
"text": "P(x)"
},
{
"math_id": 18,
"text": "P(x)= \\frac{1}{1+ e^{-x-\\theta \\operatorname{arsinh}(x)}}"
},
{
"math_id": 19,
"text": "\\frac{dP(x)}{dx}= \\frac{\\alpha \\delta \\gamma P^2 (x) x^{\\gamma - 1}}{M} \\tanh \\left(\\frac{M-P(x)}{\\alpha P(x)}\\right),"
},
{
"math_id": 20,
"text": "\\tanh"
},
{
"math_id": 21,
"text": "\\gamma>0"
},
{
"math_id": 22,
"text": "\\gamma"
},
{
"math_id": 23,
"text": "P\\left(x\\right)"
},
{
"math_id": 24,
"text": "P(x)=\\frac{M}{1+\\alpha\\operatorname{arsinh} \\left(e^{-\\delta x^\\gamma}\\right)}\n"
},
{
"math_id": 25,
"text": "\nP(x_0)=P_0,"
},
{
"math_id": 26,
"text": "\\alpha=\\frac{M-P_0}{P_0 \\operatorname{arsinh} \\left(e^{-\\delta x_0^\\gamma}\\right)}\n"
},
{
"math_id": 27,
"text": "\\alpha=\\frac{M-P_0}{P_0 \\operatorname{arsinh} (1)}"
},
{
"math_id": 28,
"text": "P(x)=\\frac{M}{1+\\alpha\\operatorname{arsinh} \\left(e^{-\\delta x^\\gamma}\\right)}+\\zeta\n"
},
{
"math_id": 29,
"text": "P(x)=\\frac{1}{1+ \\operatorname{arsinh} \\left(e^{-x}\\right)}\n"
},
{
"math_id": 30,
"text": "\\frac{dP(t)}{dt}= \\left(M-P \\left(t \\right)\\right)\\left(\\delta \\gamma t^{\\gamma - 1}+ \\frac{\\theta}{\\sqrt{1+ \\theta^2 t^2}}\\right)"
},
{
"math_id": 31,
"text": "t"
},
{
"math_id": 32,
"text": "\\delta,"
},
{
"math_id": 33,
"text": "\\gamma,"
},
{
"math_id": 34,
"text": "P(t)= M- \\alpha e^{-\\delta t^\\gamma- \\operatorname{arsinh}(\\theta t)}"
},
{
"math_id": 35,
"text": "\nP\\left(t_0\\right)=P_0"
},
{
"math_id": 36,
"text": "\\alpha=\\left(M-P_0 \\right) e^{\\delta t_0^\\gamma+ \\operatorname{arsinh}(\\theta t_0)}"
},
{
"math_id": 37,
"text": "F(x; \\delta, \\gamma, \\theta)=\n\\begin{cases}\n1- e^{-\\delta x^\\gamma - \\operatorname{arsinh}(\\theta x)} & x\\geq0 ,\\\\\n0 & x < 0\n\\end{cases}\n"
},
{
"math_id": 38,
"text": "\nf(x; \\delta, \\gamma, \\theta) =\n\\begin{cases}\ne^{- \\delta x^\\gamma - \\operatorname{arsinh}(\\theta x)}\\left(\\delta \\gamma x^{\\gamma-1}+ \\frac{\\theta}{\\sqrt{1+\\theta^2 x^2}}\\right) & x\\geq0 ,\\\\\n0 & x<0\n\\end{cases}"
},
{
"math_id": 39,
"text": "h"
},
{
"math_id": 40,
"text": "h\\left(x; \\delta, \\gamma, \\theta \\right) = \\delta\\gamma x^{\\gamma-1} + \\frac{\\theta}{\\sqrt{1+ x^2\\theta^2}}."
},
{
"math_id": 41,
"text": "S"
},
{
"math_id": 42,
"text": "S(x; \\delta, \\gamma, \\theta)= e^{- \\delta x^\\gamma- \\operatorname{arsinh}(\\theta x)}."
},
{
"math_id": 43,
"text": "F\\left(x\\right)=1-e^{-x- \\operatorname{arsinh}(x)}"
},
{
"math_id": 44,
"text": "\nf(x) = e^{- x - \\operatorname{arsinh}(x)}\\left(1+ \\frac{1}{\\sqrt{1+ x^2}}\\right) "
},
{
"math_id": 45,
"text": "P(x) = k M"
},
{
"math_id": 46,
"text": "0 < k < 1"
},
{
"math_id": 47,
"text": "k= \\frac{1}{2}"
},
{
"math_id": 48,
"text": "Y"
},
{
"math_id": 49,
"text": "Y=1"
},
{
"math_id": 50,
"text": "Y=0"
},
{
"math_id": 51,
"text": "\\theta \\geq -1"
},
{
"math_id": 52,
"text": "n"
},
{
"math_id": 53,
"text": "\\boldsymbol{\\beta} = (\\beta_0, \\beta_1,\\ldots, \\beta_p)"
},
{
"math_id": 54,
"text": "p"
},
{
"math_id": 55,
"text": "\\mathbf{x}_i=(x_{i1},\\ x_{i2},\\ldots ,\\ x_{ip})^T"
},
{
"math_id": 56,
"text": "i = 1,2,\\ldots,n"
},
{
"math_id": 57,
"text": "\\pi(\\mathbf{x}_i;\\boldsymbol{\\beta}) = P(y_i=1|\\mathbf{x}_i;\\boldsymbol{\\beta})=\\frac{1}{1+e^{-(\\beta_0+\\sum_{s=1}^{p}{\\beta_s x_{is}})-\\theta \\operatorname{arsinh}(\\beta_0+\\sum_{s=1}^{p}{\\beta_s x_{is}}) } }"
},
{
"math_id": 58,
"text": "Odds_{H1}"
},
{
"math_id": 59,
"text": "Odds_{H1}=e^{\\beta_0+\\sum_{s=1}^{p}{\\beta_s x_{is}}+\\theta \\operatorname{arsinh}(\\beta_0+\\sum_{s=1}^{p}{\\beta_s x_{is}}) }"
},
{
"math_id": 60,
"text": "L_{H1}"
},
{
"math_id": 61,
"text": "L_{H1}=\\beta_0+\\sum_{s=1}^{p}{\\beta_s x_{is}} +\\theta \\operatorname{arsinh}[\\beta_0+\\sum_{s=1}^{p}{\\beta_s x_{is}}] "
},
{
"math_id": 62,
"text": "I(y)=-{log}_bP(y)"
},
{
"math_id": 63,
"text": "b > 0"
},
{
"math_id": 64,
"text": "b \\neq 1"
},
{
"math_id": 65,
"text": "b"
},
{
"math_id": 66,
"text": "2"
},
{
"math_id": 67,
"text": "I(y)"
},
{
"math_id": 68,
"text": "I(y)=\n\\begin{cases}\n-log_b\\frac{1}{1+e^{-Z-\\theta \\operatorname{arsinh}(Z)}} & y = 1 ,\\\\\n-log_b\\frac{e^{-Z-\\theta \\operatorname{arsinh}(Z)}}{1+e^{-Z-\\theta \\operatorname{arsinh}(Z)}} & y = 0\n\\end{cases}\n"
},
{
"math_id": 69,
"text": "Z= \\beta_0+\\sum_{s=1}^{p}\\beta_sx_s"
},
{
"math_id": 70,
"text": "x_s"
},
{
"math_id": 71,
"text": "s^{th}"
},
{
"math_id": 72,
"text": "\\overline{I(y)}=\n\\begin{cases}\n-\\frac{1}{n}\\sum_{i=1}^{n}{log_b\\frac{1}{1+e^{-Z_i-\\theta \\operatorname{arsinh}(Z_i)}}} & y = 1 ,\\\\\n-\\frac{1}{n}\\sum_{i=1}^{n}{log_b\\frac{e^{-Z_i-\\theta \\operatorname{arsinh}(Z_i)}}{1+e^{-Z_i-\\theta \\operatorname{arsinh}(Z_i)}}} & y = 0\n\\end{cases}\n"
},
{
"math_id": 73,
"text": "Z_i= \\beta_0+\\sum_{s=1}^{p}\\beta_sx_{is}"
},
{
"math_id": 74,
"text": "x_{is}"
},
{
"math_id": 75,
"text": "i^{th}"
},
{
"math_id": 76,
"text": "H"
},
{
"math_id": 77,
"text": "H=-\\sum_{y\\in Y}{P(y)\\ {log}_bP(y)}"
},
{
"math_id": 78,
"text": "P(y)"
},
{
"math_id": 79,
"text": "\n\\begin{align}\nH & = -\\sum_{y \\in \\{0,1\\}}{P(Y=y;\\mathbf{x},\\boldsymbol{\\beta})log_b(P(Y=y;\\mathbf{x},\\boldsymbol{\\beta}))} \\\\\n& = -[\\pi(\\mathbf{x};\\boldsymbol{\\beta})\\ log_b(\\pi(\\mathbf{x};\\boldsymbol{\\beta})+(1-\\pi(\\mathbf{x};\\boldsymbol{\\beta}))log_b(1-\\pi(\\mathbf{x};\\boldsymbol{\\beta}))] \\\\\n& = {log}_b(1+e^{-Z-\\theta \\operatorname{arsinh}(Z)})-\\frac{e^{-Z-\\theta \\operatorname{arsinh}(Z)}{log}_b(e^{-Z-\\theta \\operatorname{arsinh}(Z)})}{1+e^{-Z-\\theta \\operatorname{arsinh}(Z)}}\n\\end{align}\n"
},
{
"math_id": 80,
"text": "\\bar{H}"
},
{
"math_id": 81,
"text": "\n\\bar{H}=\\frac{1}{n}\\sum_{i=1}^{n}{[log_b(1+e^{{-Z}_i-\\theta \\operatorname{arsinh}(Z_i)})-}\\frac{e^{{-Z}_i-\\theta \\operatorname{arsinh}(Z_i)}\\ {log}_b(e^{{-Z}_i-\\theta \\operatorname{arsinh}((Z_i)})}{1+e^{{-Z}_i-\\theta \\operatorname{arsinh}(Z_i)}}]\n"
},
{
"math_id": 82,
"text": "y \\in \\{0,1\\}"
},
{
"math_id": 83,
"text": "\\overline{C}"
},
{
"math_id": 84,
"text": "\n\\begin{align}\n\\overline{C} & =-\\frac{1}{n}\\sum_{i=1}^{n}{{[y}_i log_b(\\pi(x_i;\\boldsymbol{\\beta}))+}{(1-y}_i)log_b(1-\\pi(x_i;\\boldsymbol{\\beta}))] \\\\\n&=\\frac{1}{n}\\sum_{i=1}^{n}{[log_b(1+e^{{-Z}_i-\\theta \\operatorname{arsinh}(Z_i)})-}{(1-y}_i)log_b(e^{{-Z}_i-\\theta \\operatorname{arsinh}(Z_i)})]\n\\end{align}\n"
},
{
"math_id": 85,
"text": "\\mathbf{x}_i"
},
{
"math_id": 86,
"text": "\\pi(\\mathbf{x}_i;\\boldsymbol{\\beta}) = P(y_i=1|\\mathbf{x}_i;\\boldsymbol{\\beta})= \\frac{1}{1 + \\operatorname{arsinh}[e^{ - (\\beta_0 + \\sum_{s=1}^{p}{\\beta_s x_{is}}) }]} "
},
{
"math_id": 87,
"text": "Odds_{H2}"
},
{
"math_id": 88,
"text": "Odds_{H2} = \\frac{1}{\\operatorname{arsinh}[e^{-(\\beta_0+\\sum_{s=1}^{p}{\\beta_s x_{is}}) } ] }. "
},
{
"math_id": 89,
"text": " L_{H2} "
},
{
"math_id": 90,
"text": "L_{H2}= - \\log{( \\operatorname{arsinh}[ e^{-(\\beta_0+\\sum_{s=1}^{p}{\\beta_s x_{is}})}])}"
},
{
"math_id": 91,
"text": "I(y) =\n\\begin{cases}\n-log_b \\frac{1}{1+arsinh(e^{-Z})} & y = 1 \\\\\n-log_b \\frac{arsinh(e^{-Z})}{1+arsinh(e^{-Z})} & y = 0\n\\end{cases}\n"
},
{
"math_id": 92,
"text": " \\overline{I(y)}=\n\\begin{cases}\n-\\frac{1}{n}\\sum_{i=1}^{n}log_b \\frac{1}{1+arsinh(e^{-Z_i})} & y = 1 \\\\\n-\\frac{1}{n}\\sum_{i=1}^{n}log_b \\frac{arsinh(e^{-Z_i})}{1+arsinh(e^{-Z_i})} & y=0\n\\end{cases} \n"
},
{
"math_id": 93,
"text": "Z_i= \\beta_0+\\sum_{s=1}^{p} \\beta_sx_{is}"
},
{
"math_id": 94,
"text": "\n\\begin{align}\nH& = -\\sum_{y\\in \\{0,1\\}}{P(Y=y;\\mathbf{x} , \\boldsymbol{\\beta}) log_b(P(Y=y;\\mathbf{x} ,\\boldsymbol{\\beta}))} \\\\\n& =-[\\pi(\\mathbf{x};\\boldsymbol{\\beta})\\ log_b(\\pi(\\mathbf{x};\\boldsymbol{\\beta}))+(1-\\pi(\\mathbf{x};\\boldsymbol{\\beta}))log_b(1-\\pi(\\mathbf{x};\\boldsymbol{\\beta}))] \\\\\n& =log_b(1+arsinh(e^{-Z}))-\\frac{arsinh(e^{-Z}) log_b (arsinh(e^{-Z}))}{1+arsinh(e^{-Z})}\n\\end{align}\n"
},
{
"math_id": 95,
"text": "\n\\bar{H}=\\frac{1}{n}\\sum_{i=1}^{n}{[log_b(1+{arsinh(e}^{{-Z}_i}))-}\\frac{{arsinh(e}^{{-Z}_i})\\ {log}_b{(arsinh(e}^{{-Z}_i}))}{1+{arsinh(e}^{{-Z}_i})}]\n"
},
{
"math_id": 96,
"text": "\n\\begin{align}\n\\overline{C} & =-\\frac{1}{n}\\sum_{i=1}^{n}{{[y}_ilog_b(\\pi(x_i;\\beta))+}{(1-y}_i)log_b(1-\\pi(x_i;\\beta))] \\\\\n& =\\frac{1}{n}\\sum_{i=1}^{n}{[log_b(1+{arsinh(e}^{{-Z}_i}))-}{(1-y}_i)log_b({arsinh(e}^{{-Z}_i}))]\n\\end{align}\n"
},
{
"math_id": 97,
"text": "\\boldsymbol{\\beta}"
},
{
"math_id": 98,
"text": " \\hat{\\beta} = \\underset{\\boldsymbol{\\beta}}\\operatorname{argmax}{\\sum_{i = 1}^{n}[y_iln(\\pi(\\mathbf{x}_i;\\boldsymbol{\\beta}))+(1-y_i)ln(1-\\pi(\\mathbf{x}_i;\\boldsymbol{\\beta}))]} "
},
{
"math_id": 99,
"text": "\\pi(\\mathbf{x}_i;\\boldsymbol{\\beta})"
},
{
"math_id": 100,
"text": "y_i"
},
{
"math_id": 101,
"text": "i"
},
{
"math_id": 102,
"text": "k"
},
{
"math_id": 103,
"text": "y_i \\in \\{1,2,\\ldots,k\\}"
},
{
"math_id": 104,
"text": "k=2"
},
{
"math_id": 105,
"text": "i=1,2,\\ldots,n"
},
{
"math_id": 106,
"text": "y_{ij}"
},
{
"math_id": 107,
"text": "y_{ij}=\n\\begin{cases}\n1 & \\text{if } y_i = j,\\\\\n0 & \\text{if } y_i \\neq j\n\\end{cases}\n"
},
{
"math_id": 108,
"text": "y_{ij}=1"
},
{
"math_id": 109,
"text": "j"
},
{
"math_id": 110,
"text": "0"
},
{
"math_id": 111,
"text": "\\boldsymbol{\\beta}_j=(\\beta_{j0},\\beta_{j1},\\ldots,\\beta_{jp})"
},
{
"math_id": 112,
"text": "p+1"
},
{
"math_id": 113,
"text": "\\boldsymbol{\\beta}=(\\boldsymbol{\\beta}_1,\\ldots,\\boldsymbol{\\beta}_{k-1})^T"
},
{
"math_id": 114,
"text": "\\pi_1(\\mathbf{x}_i;\\boldsymbol{\\beta})"
},
{
"math_id": 115,
"text": "\\pi_1(\\mathbf{x}_i;\\boldsymbol{\\beta})=P(y_i=1|\\mathbf{x}_i;\\boldsymbol{\\beta})=\\frac{1}{1+\\sum_{s=2}^{k}e^{-\\eta_s(\\mathbf{x}_i;\\boldsymbol{\\beta})-\\theta \\operatorname{arsinh}[\\eta_s(\\mathbf{x}_i;\\boldsymbol{\\beta})]}}"
},
{
"math_id": 116,
"text": "j = 2,\\ldots,k"
},
{
"math_id": 117,
"text": "\\pi_j(\\mathbf{x}_i;\\boldsymbol{\\beta})=P(y_i=j|\\mathbf{x}_i;\\boldsymbol{\\beta})=\\frac{e^{-\\eta_j(\\mathbf{x}_i;\\boldsymbol{\\beta})-\\theta \\operatorname{arsinh}[\\eta_j(\\mathbf{x}_i;\\boldsymbol{\\beta})]}}{1+\\sum_{s=2}^{k}e^{-\\eta_s(\\mathbf{x}_i;\\boldsymbol{\\beta})-\\theta \\operatorname{arsinh}[\\eta_s(\\mathbf{x}_i;\\boldsymbol{\\beta})]}}"
},
{
"math_id": 118,
"text": "\\pi_1(\\mathbf{x}_i;\\boldsymbol{\\beta})=P(y_i=1|\\mathbf{x}_i;\\boldsymbol{\\beta})=\\frac{1}{1+\\sum_{s=2}^{k}arsinh[e^{-\\eta_s(\\mathbf{x}_i;\\boldsymbol{\\beta})}]}"
},
{
"math_id": 119,
"text": "\\pi_j(\\mathbf{x}_i;\\boldsymbol{\\beta})=P(y_i=j|\\mathbf{x}_i;\\boldsymbol{\\beta})=\\frac{arsinh[e^{-\\eta_j(\\mathbf{x}_i;\\boldsymbol{\\beta})}]}{1+\\sum_{s=2}^{k}arsinh[e^{-\\eta_s(\\mathbf{x}_i;\\boldsymbol{\\beta})}]}"
},
{
"math_id": 120,
"text": "\\eta_s(\\mathbf{x}_i;\\boldsymbol{\\beta})=\\beta_{s0}+\\sum_{l=1}^{p}\\beta_{sl}x_{il}"
},
{
"math_id": 121,
"text": "s = 2, \\dots, k"
},
{
"math_id": 122,
"text": "i = 1,\\dots,n"
},
{
"math_id": 123,
"text": "\\pi_i(\\mathbf{x_i};\\boldsymbol{\\beta})"
},
{
"math_id": 124,
"text": "(j=1, 2, \\dots, k)"
},
{
"math_id": 125,
"text": "I_j"
},
{
"math_id": 126,
"text": "I_j=-log_b(\\pi_j(\\mathbf{x};\\boldsymbol{\\beta}))"
},
{
"math_id": 127,
"text": "\\overline{I_j}=-\\frac{1}{n}\\sum_{i=1}^{n}{log_b(\\pi_j(\\mathbf{x_i};\\boldsymbol{\\beta}))}"
},
{
"math_id": 128,
"text": "H=-\\sum_{y \\in Y}{P(y)\\ {log}_bP(y)}"
},
{
"math_id": 129,
"text": "H=-\\sum_{j=1}^{k}{[\\pi_j(\\mathbf{x};\\boldsymbol{\\beta}) log_b(\\pi_j(\\mathbf{x};\\boldsymbol{\\beta}))]}"
},
{
"math_id": 130,
"text": "\\overline{H}"
},
{
"math_id": 131,
"text": "\\overline{H}=-\\frac{1}{n}\\sum_{i=1}^{n}{\\sum_{j=1}^{k}{[\\pi_j(\\mathbf{x_i};\\boldsymbol{\\beta}) log_b(\\pi_j(\\mathbf{x_i};\\boldsymbol{\\beta}))]}}"
},
{
"math_id": 132,
"text": "\\overline{C}=-\\frac{1}{n} \\sum_{i=1}^{n}{\\sum_{j=1}^{k}{[y_{ij} log_b(\\pi_j(\\mathbf{x_i};\\boldsymbol{\\beta}))]}}"
},
{
"math_id": 133,
"text": "\\omicron_j(\\mathbf{x}_i;\\boldsymbol{\\beta})"
},
{
"math_id": 134,
"text": "\\omicron_j(\\mathbf{x}_i;\\boldsymbol{\\beta}) = ln[\\frac{\\pi_j(\\mathbf{x}_i;\\boldsymbol{\\beta})}{\\pi_1(\\mathbf{x}_i;\\boldsymbol{\\beta})}]"
},
{
"math_id": 135,
"text": "j=2,\\ldots,k"
},
{
"math_id": 136,
"text": "i=1,\\ldots,n"
},
{
"math_id": 137,
"text": "\\hat\\boldsymbol{\\beta}"
},
{
"math_id": 138,
"text": "\\boldsymbol\\beta"
},
{
"math_id": 139,
"text": "\\boldsymbol{\\hat{\\beta}} = \\underset{\\boldsymbol{\\beta}}\\operatorname{argmax}{\\sum_{i=1}^n(y_{i1}ln[\\pi_1(\\mathbf{x}_i;\\boldsymbol{\\beta})]+y_{i2}ln[\\pi_2(\\mathbf{x}_i;\\boldsymbol{\\beta})]+\\ldots+y_{ik}ln[\\pi_k(\\mathbf{x}_i;\\boldsymbol{\\beta})])}"
}
] |
https://en.wikipedia.org/wiki?curid=63442098
|
63449981
|
Davenport–Schinzel Sequences and Their Geometric Applications
|
Book published in 1995
Davenport–Schinzel Sequences and Their Geometric Applications is a book in discrete geometry. It was written by Micha Sharir and Pankaj K. Agarwal, and published by Cambridge University Press in 1995, with a paperback reprint in 2010.
Topics.
Davenport–Schinzel sequences are named after Harold Davenport and Andrzej Schinzel, who applied them to certain problems in the theory of differential equations. They are finite sequences of symbols from a given alphabet, constrained by forbidding pairs of symbols from appearing in alternation more than a given number of times (regardless of what other symbols might separate them). In a Davenport–Schinzel sequence of order formula_0, no pair of symbols may alternate more than formula_0 + 1 times; that is, alternations of length formula_0 + 2 are forbidden. For instance, a Davenport–Schinzel sequence of order two could have two symbols formula_1 and formula_2 that appear either in the order formula_3 or formula_4, but a longer alternation like formula_5 would be forbidden. The length of such a sequence, for a given choice of formula_0, can be only slightly longer than its number of distinct symbols. This phenomenon has been used to prove corresponding near-linear bounds on various problems in discrete geometry, for instance showing that the unbounded face of an arrangement of line segments can have complexity that is only slightly superlinear. The book is about this family of results, both on bounding the lengths of Davenport–Schinzel sequences and on their applications to discrete geometry.
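A direct way to see the definition is a brute-force test: for every pair of symbols, measure the longest alternating subsequence and compare it with the allowed bound. The short sketch below (Python, written for this description rather than taken from the book, under the convention that a sequence of order s forbids alternations of length s + 2) does exactly that.

```python
from itertools import combinations

def longest_alternation(seq, a, b):
    """Length of the longest subsequence of seq that alternates between a and b."""
    length, expected = 0, None
    for s in seq:
        if s in (a, b) and (expected is None or s == expected):
            length += 1
            expected = b if s == a else a
    return length

def is_davenport_schinzel(seq, order):
    """Check the two defining conditions of a Davenport-Schinzel sequence of the given order."""
    if any(x == y for x, y in zip(seq, seq[1:])):     # no immediate repetitions
        return False
    return all(longest_alternation(seq, a, b) <= order + 1
               for a, b in combinations(set(seq), 2))

print(is_davenport_schinzel("abacbc", 3))   # True
print(is_davenport_schinzel("ababa", 3))    # False: alternation of length 5
```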
The first three chapters of the book provide bounds on the lengths of Davenport–Schinzel sequences whose superlinearity is described in terms of the inverse Ackermann function formula_6. For instance, the length of a Davenport–Schinzel sequence of order three, with formula_7 symbols, can be at most formula_8, as the second chapter shows; the third concerns higher orders. The fourth chapter applies this theory to line segments,
and includes a proof that the bounds proven using these tools are tight: there exist systems of line segments whose arrangement complexity matches the bounds on Davenport–Schinzel sequence length.
The remaining chapters concern more advanced applications of these methods. Three chapters concern arrangements of curves in the plane, algorithms for arrangements, and higher-dimensional arrangements, following which the final chapter (comprising a large fraction of the book) concerns applications of these combinatorial bounds to problems including Voronoi diagrams and nearest neighbor search, the construction of transversal lines through systems of objects, visibility problems, and robot motion planning. The topic remains an active area of research and the book poses many open questions.
Audience and reception.
Although primarily aimed at researchers, this book (and especially its earlier chapters) could also be used as the textbook for a graduate course in its material. Reviewer Peter Hajnal calls it "very important to any specialist in computational geometry" and "highly recommended to anybody who is interested in this new topic at the border of combinatorics, geometry, and algorithm theory".
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "k"
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "y"
},
{
"math_id": 3,
"text": "x\\dots y\\dots x"
},
{
"math_id": 4,
"text": "y\\dots x\\dots y"
},
{
"math_id": 5,
"text": "x\\dots y\\dots x\\dots y"
},
{
"math_id": 6,
"text": "\\alpha(n)"
},
{
"math_id": 7,
"text": "n"
},
{
"math_id": 8,
"text": "\\Theta(n\\alpha(n))"
}
] |
https://en.wikipedia.org/wiki?curid=63449981
|
634543
|
Bendixson–Dulac theorem
|
In mathematics, the Bendixson–Dulac theorem on dynamical systems states that if there exists a formula_0 function formula_1 (called the Dulac function) such that the expression
formula_2
has the same sign (formula_3) almost everywhere in a simply connected region of the plane, then the plane autonomous system
formula_4
formula_5
has no nonconstant periodic solutions lying entirely within the region. "Almost everywhere" means everywhere except possibly in a set of measure 0, such as a point or line.
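The hypotheses can be verified symbolically for concrete systems. A standard textbook example (not part of the theorem statement above) is the competing-species system below with the Dulac function formula_1 taken as 1/(xy); the SymPy sketch confirms that the relevant expression is strictly negative on the open first quadrant, so no periodic orbit lies there.

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
A, B, a, b, c, d = sp.symbols('A B a b c d', positive=True)

# Competing-species system with Dulac function phi = 1/(x*y)
f = x * (A - a*x - b*y)
g = y * (B - c*x - d*y)
phi = 1 / (x * y)

divergence = sp.simplify(sp.diff(phi * f, x) + sp.diff(phi * g, y))
print(divergence)   # equals -a/y - d/x (possibly printed in combined form):
                    # strictly negative for x, y > 0
```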
The theorem was first established by Swedish mathematician Ivar Bendixson in 1901 and further refined by French mathematician Henri Dulac in 1923 using Green's theorem.
Proof.
Without loss of generality, let there exist a function formula_1 such that
formula_6
in simply connected region formula_7. Let formula_8 be a closed trajectory of the plane autonomous system in formula_7. Let formula_9 be the interior of formula_8. Then by Green's theorem,
formula_10
Because of the constant sign, the left-hand integral in the previous line must evaluate to a positive number. But on formula_8, formula_11 and formula_12, so the final integrand is in fact 0 everywhere, and for this reason the right-hand integral evaluates to 0. This is a contradiction, so there can be no such closed trajectory formula_8.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "C^1"
},
{
"math_id": 1,
"text": " \\varphi(x, y)"
},
{
"math_id": 2,
"text": "\\frac{ \\partial (\\varphi f) }{ \\partial x } + \\frac{ \\partial (\\varphi g) }{ \\partial y }"
},
{
"math_id": 3,
"text": "\\neq 0"
},
{
"math_id": 4,
"text": "\\frac{ dx }{ dt } = f(x,y),"
},
{
"math_id": 5,
"text": "\\frac{ dy }{ dt } = g(x,y)"
},
{
"math_id": 6,
"text": "\\frac { \\partial (\\varphi f) }{ \\partial x } +\\frac { \\partial (\\varphi g) }{ \\partial y } >0"
},
{
"math_id": 7,
"text": "R"
},
{
"math_id": 8,
"text": "C"
},
{
"math_id": 9,
"text": "D"
},
{
"math_id": 10,
"text": "\n\\begin{align}\n& \\iint_D \\left( \\frac { \\partial (\\varphi f) }{ \\partial x } +\\frac { \\partial (\\varphi g) }{ \\partial y } \\right) \\,dx\\,dy \n=\\iint_D \\left( \\frac { \\partial (\\varphi \\dot { x }) }{ \\partial x } +\\frac { \\partial (\\varphi \\dot { y }) }{ \\partial y } \\right) \\,dx\\,dy \\\\[6pt]\n= {} & \\oint_C \\varphi \\left( -\\dot { y } \\,dx+\\dot { x } \\,dy\\right)\n=\\oint_C \\varphi \\left( -\\dot { y }\\dot { x }+\\dot { x }\\dot { y }\\right)\\,dt=0\n\\end{align}\n"
},
{
"math_id": 11,
"text": "dx=\\dot { x } \\,dt"
},
{
"math_id": 12,
"text": "dy=\\dot { y } \\,dt"
},
{
"math_id": 13,
"text": "\\frac{ dq }{ dt } =\\frac{\\partial H(q,p)}{\\partial p}\\, (=f(q,p)), \\frac{ dp }{ dt } =-\\frac{\\partial H(q,p)}{\\partial q}\\, (=g(q,p))"
}
] |
https://en.wikipedia.org/wiki?curid=634543
|
63455582
|
Rayleigh–Gans approximation
|
Rayleigh–Gans approximation, also known as Rayleigh–Gans–Debye approximation and Rayleigh–Gans–Born approximation, is an approximate solution to light scattering by optically soft particles. Optical softness implies that the relative refractive index of particle is close to that of the surrounding medium. The approximation holds for particles of arbitrary shape that are relatively small but can be larger than Rayleigh scattering limits.
The theory was derived by Lord Rayleigh in 1881 and was applied to homogeneous spheres, spherical shells, radially inhomogeneous spheres and infinite cylinders. Peter Debye contributed to the theory in 1915. The theory for a homogeneous sphere was rederived by Richard Gans in 1925. The approximation is analogous to the Born approximation in quantum mechanics.
Theory.
The validity conditions for the approximation can be denoted as:
formula_0
formula_1
formula_2 is the wavevector of the light (formula_3), whereas formula_4 refers to the linear dimension of the particle. formula_5 is the complex refractive index of the particle. The first condition allows for a simplification in expressing the material polarizability in the derivation below. The second condition is a statement of the Born approximation, that is, that the incident field is not greatly altered within one particle so that each volume element is considered to be illuminated by an intensity and phase determined only by its position relative to the incident wave, unaffected by scattering from other volume elements.
The particle is divided into small volume elements, which are treated as independent Rayleigh scatterers. For an inbound light with s polarization, the scattering amplitude contribution from each volume element is given as:
formula_6
where formula_7 denotes the phase difference due to each individual element, and the fraction in parentheses is the electric polarizability as found from the refractive index using the Clausius–Mossotti relation. Under the condition "(n-1) ≪ 1", this factor can be approximated as "2(n-1)/3". The phases formula_7 affecting the scattering from each volume element are dependent only on their positions with respect to the incoming wave and the scattering direction. Integrating, one thus obtains the scattering amplitude function:
formula_8
in which only the final integral, which describes the interfering phases contributing to the scattering direction (θ, φ), remains to be solved according to the particular geometry of the scatterer. Calling "V" the entire volume of the scattering object, over which this integration is performed, one can write the scattering amplitude for the electric field polarization normal to the plane of incidence (s polarization) as
formula_9
and for polarization "in" the plane of incidence (p polarization) as
formula_10
where formula_11 denotes the "form factor" of the scatterer:
formula_12
Since only intensities are needed, we can define "P" as the squared magnitude of the form factor:
formula_13
Then the scattered radiation intensity, relative to the intensity of the incident wave, for each polarization can be written as:
formula_14
formula_15
where "r" is the distance from the scatterer to the observation point. Per the optical theorem, absorption cross section is given as:
formula_16
which is independent of the polarization.
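As a numerical illustration, the form factor integral can be evaluated for a specific shape. The sketch below gives a Monte Carlo estimate of formula_11 for a homogeneous sphere, taking the phase as formula_7 = q·r with |q| = 2k sin(θ/2); the closed-form sphere result used for comparison is the standard one and is quoted here as an assumption rather than derived in this article.

```python
import numpy as np

def sphere_form_factor_mc(k, radius, theta, n_samples=200_000, seed=0):
    """Monte Carlo estimate of R(theta) = (1/V) * integral exp(i*delta) dV for a
    homogeneous sphere, taking delta = q.r with |q| = 2 k sin(theta/2)."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-radius, radius, size=(n_samples, 3))
    pts = pts[np.sum(pts**2, axis=1) <= radius**2]        # keep points inside the sphere
    q = 2.0 * k * np.sin(theta / 2.0)
    return np.mean(np.exp(1j * q * pts[:, 2]))            # q taken along the z axis

k, a, theta = 2 * np.pi / 0.5, 0.2, np.deg2rad(30.0)       # illustrative values
u = 2 * k * np.sin(theta / 2) * a
analytic = 3 * (np.sin(u) - u * np.cos(u)) / u**3          # known closed form for a sphere
print(sphere_form_factor_mc(k, a, theta).real, analytic)
```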
Applications.
Rayleigh–Gans approximation has been applied on the calculation of the optical cross sections of fractal aggregates. The theory was also applied to anisotropic spheres for nanostructured polycrystalline alumina and turbidity calculations on biological structures such as lipid vesicles and bacteria.
A nonlinear Rayleigh−Gans−Debye model was used to investigate second-harmonic generation in malachite green molecules adsorbed on polystyrene particles.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "|n-1| \\ll 1"
},
{
"math_id": 1,
"text": "kd|n-1| \\ll 1"
},
{
"math_id": 2,
"text": "k"
},
{
"math_id": 3,
"text": "k=\\frac{2 \\pi}{\\lambda}"
},
{
"math_id": 4,
"text": "d"
},
{
"math_id": 5,
"text": "n"
},
{
"math_id": 6,
"text": "dS_1(\\theta, \\phi)=i \\frac{3}{4 \\pi} k^3 \\left( \\frac{n^2-1}{n^2+2} \\right) e^{i \\delta} dV"
},
{
"math_id": 7,
"text": "\\delta"
},
{
"math_id": 8,
"text": "S_1(\\theta, \\phi) \\approx \\frac{i}{2 \\pi}k^3(n-1) \\int e^{i \\delta} dV"
},
{
"math_id": 9,
"text": "S_1=\\frac{i}{2 \\pi}k^3(n-1) V R(\\theta, \\phi)"
},
{
"math_id": 10,
"text": "S_2=\\frac{i}{2 \\pi}k^3(n-1) V R(\\theta, \\phi)cos \\theta"
},
{
"math_id": 11,
"text": "R(\\theta, \\phi)"
},
{
"math_id": 12,
"text": "R(\\theta, \\phi)=\\frac{1}{V} \\int{}{} e^{i \\delta} dV"
},
{
"math_id": 13,
"text": "P(\\theta, \\phi)=\\left( \\frac{1}{V^2} \\right) \\left| \\int e^{i \\delta} dV \\right|^2"
},
{
"math_id": 14,
"text": "I_1/I_0 = \\left(\\frac{k^4 V^2}{4 \\pi^2 r^2} \\right) (n-1)^2 P(\\theta, \\phi)"
},
{
"math_id": 15,
"text": "I_2/I_0 = \\left(\\frac{k^4 V^2}{4 \\pi^2 r^2} \\right) (n-1)^2 P(\\theta, \\phi)cos^2 \\theta"
},
{
"math_id": 16,
"text": "C_{abs}=2kV \\mathbb{Im}(n)"
}
] |
https://en.wikipedia.org/wiki?curid=63455582
|
63455728
|
Modular forms modulo p
|
Mathematical concept
In mathematics, modular forms are particular complex analytic functions on the upper half-plane of interest in complex analysis and number theory. When reduced modulo a prime "p", there is an analogous theory to the classical theory of complex modular forms and the "p"-adic theory of modular forms.
Reduction of modular forms modulo 2.
Conditions to reduce modulo 2.
Modular forms are analytic functions, so they admit a Fourier series. As modular forms also satisfy a certain kind of functional equation with respect to the group action of the modular group, this Fourier series may be expressed in terms of formula_0.
So if formula_1 is a modular form, then there are coefficients formula_2 such that formula_3.
To reduce modulo 2, consider the subspace of modular forms with coefficients of the formula_4-series being all integers (since complex numbers, in general, may not be reduced modulo 2).
It is then possible to reduce all coefficients modulo 2, which will give a modular form modulo 2.
Basis for modular forms modulo 2.
Modular forms are generated by formula_5 and formula_6.
It is then possible to normalize formula_5 and formula_6 to formula_7 and formula_8, having integers coefficients in their formula_4-series.
This gives generators for modular forms, which may be reduced modulo 2.
Note the Miller basis has some interesting properties:
once reduced modulo 2, formula_7 and formula_8 are just formula_9; that is, a trivial reduction.
To get a non-trivial reduction, one must use the modular discriminant formula_10.
Thus, modular forms are seen as polynomials in formula_7, formula_8 and formula_10 (over formula_11 in general, but over the integers formula_12 for reduction); once reduced modulo 2, they become polynomials in formula_10 over formula_13.
The modular discriminant modulo 2.
The modular discriminant is defined by an infinite product,
formula_14
where formula_15 is the Ramanujan tau function.
Results from Kolberg and Jean-Pierre Serre demonstrate that, modulo 2, we have
formula_16 i.e., the formula_4-series of formula_10 modulo 2 consists of formula_4 to powers of odd squares.
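This congruence can be checked numerically to any fixed order. The sketch below (plain Python, truncation order chosen arbitrarily) expands the product defining formula_10 with coefficients reduced modulo 2 and compares the result with the series having formula_4 raised to odd-square powers.

```python
N = 200   # truncation order of the q-series

# Coefficients of Delta = q * prod_{n >= 1} (1 - q^n)^24, reduced modulo 2
delta = [0] * (N + 1)
delta[1] = 1
for n in range(1, N + 1):
    for _ in range(24):                       # multiply by (1 - q^n); note -1 == 1 mod 2
        new = delta[:]
        for j in range(N + 1 - n):
            new[j + n] = (new[j + n] + delta[j]) % 2
        delta = new

odd_squares = [0] * (N + 1)
m = 0
while (2 * m + 1) ** 2 <= N:
    odd_squares[(2 * m + 1) ** 2] = 1
    m += 1

print(delta == odd_squares)   # True: tau(n) is odd exactly when n is an odd square
```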
Hecke operators modulo 2.
The action of the Hecke operators is fundamental to understanding the structure of spaces of modular forms. It is therefore justified to try to reduce them modulo 2.
The Hecke operators for a modular form formula_1 are defined as follows:
formula_17
with formula_18.
Hecke operators may be defined on the formula_4-series as follows:
if formula_19,
then formula_20
with
formula_21
Since modular forms were reduced using the formula_4-series, it is natural to use the formula_4-series definition of the Hecke operators. The sum simplifies considerably for Hecke operators of prime index (i.e. when formula_22 is prime): there are only two summands, which makes the reduction modulo 2 straightforward.
With more than two summands there would be many cancellations modulo 2, and the operation would be harder to control. Thus, Hecke operators modulo 2 are usually defined only for prime numbers.
With formula_1 a modular form modulo 2 with formula_4-representation formula_23, the Hecke operator formula_24 on formula_1 is defined by formula_25 where
formula_26
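On formula_4-series reduced modulo 2 this definition is straightforward to implement. The sketch below applies formula_24 to a list of coefficients; applying formula_47 to formula_10, whose odd coefficients sit exactly at the odd squares as noted above, gives zero, which illustrates the nilpotency discussed next.

```python
def hecke_tp_mod2(coeffs, p):
    """Apply the Hecke operator T_p (p an odd prime) to a q-series modulo 2.

    coeffs[n] is c(n) mod 2; the output is truncated to the range of degrees
    that the available coefficients determine."""
    N = (len(coeffs) - 1) // p
    out = [0] * (N + 1)
    for n in range(N + 1):
        gamma = coeffs[n * p]
        if n % p == 0:
            gamma += coeffs[n // p]
        out[n] = gamma % 2
    return out

N = 400
delta_mod2 = [1 if k % 2 == 1 and round(k ** 0.5) ** 2 == k else 0 for k in range(N + 1)]
print(any(hecke_tp_mod2(delta_mod2, 3)))   # False: T_3 annihilates Delta modulo 2
```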
It is important to note that Hecke operators modulo 2 have the interesting property of being nilpotent.
Finding their order of nilpotency is a problem solved by Jean-Pierre Serre and Jean-Louis Nicolas in a paper published in 2012.
The Hecke algebra modulo 2.
The Hecke algebra may also be reduced modulo 2.
It is defined to be the algebra generated by Hecke operators modulo 2, over formula_13.
Following Serre and Nicolas's notations,
formula_27, i.e. formula_28.
Writing formula_29 so that formula_30, define formula_31 as the formula_13-subalgebra of formula_32 given by formula_13 and formula_24.
That is, if formula_33 is a sub-vector-space of formula_34, we get formula_35.
Finally, define the Hecke algebra formula_36 as follows:
Since formula_37, one can restrict elements of formula_38 to formula_40 to obtain an element of formula_31.
Considering the map formula_39 given by this restriction, formula_41 is an algebra homomorphism.
As each element of formula_42 acts as either the identity or zero, formula_43.
Therefore, the following chain is obtained:
formula_44.
Then, define the Hecke algebra formula_36 to be the projective limit of the above formula_31 as formula_45.
Explicitly, this means
formula_46.
The main property of the Hecke algebra formula_36 is that it is generated, as a formal power series algebra over formula_13, by the operators formula_47 and formula_48.
That is:
formula_49.
So for any prime formula_50, it is possible to find coefficients formula_51 such that
formula_52.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "q=e^{2 \\pi i z}"
},
{
"math_id": 1,
"text": "f"
},
{
"math_id": 2,
"text": "c(n)"
},
{
"math_id": 3,
"text": "f(z) = \\sum_{n \\in \\mathbb{N}} c(n)q^n"
},
{
"math_id": 4,
"text": "q"
},
{
"math_id": 5,
"text": "G_4"
},
{
"math_id": 6,
"text": "G_6"
},
{
"math_id": 7,
"text": "E_4"
},
{
"math_id": 8,
"text": "E_6"
},
{
"math_id": 9,
"text": "1"
},
{
"math_id": 10,
"text": "\\Delta"
},
{
"math_id": 11,
"text": "\\mathbb{C}"
},
{
"math_id": 12,
"text": "\\mathbb{Z}"
},
{
"math_id": 13,
"text": "\\mathbb{F}_2"
},
{
"math_id": 14,
"text": "\n\\Delta(q) = q \\prod_{n=1}^\\infty (1-q^n)^{24} = \\sum_{n=1}^\\infty \\tau(n)q^n,\n"
},
{
"math_id": 15,
"text": "\\tau(n)"
},
{
"math_id": 16,
"text": " \\Delta(q) \\equiv \\sum_{m=0}^{\\infty} q^{(2m+1)^2} \\bmod 2"
},
{
"math_id": 17,
"text": "T_nf(z) = n^{2k-1}\\sum_{a \\geq 1,\\, ad=n,\\, 0 \\leq b < d} d^{-2k}f \\left( \\frac{az+b}{d} \\right)"
},
{
"math_id": 18,
"text": "n \\in \\N"
},
{
"math_id": 19,
"text": "f(z) = \\sum_{n \\in \\Z} c(n)q^n"
},
{
"math_id": 20,
"text": "T_nf(z) = \\sum_{m \\in \\Z} \\gamma(m)q^m"
},
{
"math_id": 21,
"text": "\\gamma(z) = \\sum_{a | (n,m),\\, a \\geq 1} a^{2k-1} c\\left( \\frac{mn}{a^2} \\right)."
},
{
"math_id": 22,
"text": "m"
},
{
"math_id": 23,
"text": "f(q) = \\sum_{n \\in \\N} c(n)q^n"
},
{
"math_id": 24,
"text": "T_p"
},
{
"math_id": 25,
"text": "\\overline{T_p}|f(q) = \\sum_{n \\in \\N} \\gamma(n)q^n"
},
{
"math_id": 26,
"text": "\n\\gamma(n) = \\begin{cases}\n c(np) & \\text{ if } p \\nmid n \\\\\n c(np)+c(n/p) & \\text{ if } p \\mid n\n\\end{cases} \\quad \\text{ and } p \\text{ an odd prime}.\n"
},
{
"math_id": 27,
"text": "\\mathcal{F} = \\left\\langle \\Delta^k \\mid k \\text{ odd} \\right\\rangle"
},
{
"math_id": 28,
"text": "\\mathcal{F} = \\left\\langle \\Delta, \\Delta^3, \\Delta^5, \\Delta^7, \\Delta^9, \\dots \\right\\rangle"
},
{
"math_id": 29,
"text": "\\mathcal{F}(n) = \\left\\langle \\Delta, \\Delta^3, \\Delta^5, \\dots, \\Delta^{2n-1} \\right\\rangle"
},
{
"math_id": 30,
"text": "\\dim(\\mathcal{F}(n)) = n"
},
{
"math_id": 31,
"text": "A(n)"
},
{
"math_id": 32,
"text": "\\text{End}\\left(\\mathcal{F}(n)\\right)"
},
{
"math_id": 33,
"text": "\\mathfrak{m}(n) = \\{T_{p_1} \\cdot T_{p_2} \\cdots T_{p_k} \\mid p_1, p_2, \\dots, p_k \\in \\mathbb{P}, k\\geq 1\\}"
},
{
"math_id": 34,
"text": "\\mathcal{F}"
},
{
"math_id": 35,
"text": "A(n) = \\mathbb{F}_2 \\oplus \\mathfrak{m}(n)"
},
{
"math_id": 36,
"text": "A"
},
{
"math_id": 37,
"text": "\\mathcal{F}(n) \\subset \\mathcal{F}(n+1)"
},
{
"math_id": 38,
"text": "A(n+1)"
},
{
"math_id": 39,
"text": "\\phi_n: A(n+1) \\to A(n)"
},
{
"math_id": 40,
"text": "\\mathcal{F}(n)"
},
{
"math_id": 41,
"text": "\\phi_n"
},
{
"math_id": 42,
"text": "A(1)"
},
{
"math_id": 43,
"text": "A(1) \\cong \\mathbb{F}_2"
},
{
"math_id": 44,
"text": "\\dots \\to A(n+1) \\to A(n) \\to A(n-1) \\to \\dots \\to A(2) \\to A(1) \\cong \\mathbb{F}_2"
},
{
"math_id": 45,
"text": "n \\to \\infty"
},
{
"math_id": 46,
"text": "A = \\varprojlim_{n \\in \\N} A(n) = \\left\\lbrace T_{p_1} \\cdot T_{p_2} \\cdots T_{p_k} | p_1, p_2, \\dots, p_k \\in \\mathbb{P}, k\\geq 0 \\right\\rbrace"
},
{
"math_id": 47,
"text": "T_3"
},
{
"math_id": 48,
"text": "T_5"
},
{
"math_id": 49,
"text": "\nA = \\mathbb{F}_2\\left[ T_p \\mid p \\in \\mathbb{P} \\right]\n = \\mathbb{F}_2 \\left[\\left[ T_3, T_5 \\right]\\right]\n"
},
{
"math_id": 50,
"text": "p \\in \\mathbb{P}"
},
{
"math_id": 51,
"text": "a_{ij}(p) \\in \\mathbb{F}_2"
},
{
"math_id": 52,
"text": "T_p = \\sum_{i+j \\geq 1} a_{ij}(p) T_3^iT_5^j"
}
] |
https://en.wikipedia.org/wiki?curid=63455728
|
63465236
|
Shvab–Zeldovich formulation
|
Mathematical method in fluid dynamics
The Shvab–Zeldovich formulation is an approach to remove the chemical-source terms from the conservation equations for energy and chemical species by linear combinations of independent variables, when the conservation equations are expressed in a common form. Expressing conservation equations in common form often limits the range of applicability of the formulation. The method was first introduced by V. A. Shvab in 1948 and by Yakov Zeldovich in 1949.
Method.
For simplicity, assume combustion takes place in a single global irreversible reaction
formula_0
where formula_1 is the ith chemical species of the total formula_2 species and formula_3 and formula_4 are the stoichiometric coefficients of the reactants and products, respectively. Then, it can be shown from the law of mass action that the rate of moles produced per unit volume, formula_5, is the same for every species and is given by
formula_6
where formula_7 is the mass of species i produced or consumed per unit volume and formula_8 is the molecular weight of species i.
The main approximation involved in Shvab–Zeldovich formulation is that all binary diffusion coefficients formula_9 of all pairs of species are the same and equal to the thermal diffusivity. In other words, Lewis number of all species are constant and equal to one. This puts a limitation on the range of applicability of the formulation since in reality, except for methane, ethylene, oxygen and some other reactants, Lewis numbers vary significantly from unity. The steady, low Mach number conservation equations for the species and energy in terms of the rescaled independent variables
formula_10
where formula_11 is the mass fraction of species i, formula_12 is the specific heat at constant pressure of the mixture, formula_13 is the temperature and formula_14 is the formation enthalpy of species i, reduce to
formula_15
where formula_16 is the gas density and formula_17 is the flow velocity. The above set of formula_18 nonlinear equations, expressed in a common form, can be replaced with formula_2 linear equations and one nonlinear equation. Suppose the nonlinear equation corresponds to formula_19 so that
formula_20
then by defining the linear combinations formula_21 and formula_22 with formula_23, the remaining formula_2 governing equations required become
formula_24
The linear combinations automatically remove the nonlinear reaction term from the above formula_2 equations.
Shvab–Zeldovich–Liñán formulation.
Shvab–Zeldovich–Liñán formulation was introduced by Amable Liñán in 1991 for diffusion-flame problems where the chemical time scale is infinitely small (Burke–Schumann limit) so that the flame appears as a thin reaction sheet. The reactants can have Lewis numbers that are not necessarily equal to unity.
Suppose the non-dimensional scalar equations for fuel mass fraction formula_25 (defined such that it takes a unit value in the fuel stream), oxidizer mass fraction formula_26 (defined such that it takes a unit value in the oxidizer stream) and non-dimensional temperature formula_13 (measured in units of oxidizer-stream temperature) are given by
formula_27
where formula_28 is the reaction rate, formula_29 is the appropriate Damköhler number, formula_30 is the mass of oxidizer stream required to burn unit mass of fuel stream, formula_31 is the non-dimensional amount of heat released per unit mass of fuel stream burnt and formula_32 is the Arrhenius exponent. Here, formula_33 and formula_34 are the Lewis number of the fuel and oxygen, respectively and formula_35 is the thermal diffusivity. In the Burke–Schumann limit, formula_36 leading to the equilibrium condition
formula_37.
In this case, the reaction terms on the right-hand side become Dirac delta functions. To solve this problem, Liñán introduced the following functions
formula_38
where formula_39, formula_40 is the fuel-stream temperature and formula_41 is the adiabatic flame temperature, both measured in units of oxidizer-stream temperature. Introducing these functions reduces the governing equations to
formula_42
where formula_43 is the mean (or, effective) Lewis number. The relationship between formula_44 and formula_45 and between formula_46 and formula_47 can be derived from the equilibrium condition.
At the stoichiometric surface (the flame surface), both formula_25 and formula_26 are equal to zero, leading to formula_48, formula_49, formula_50 and formula_51, where formula_52 is the flame temperature (measured in units of oxidizer-stream temperature) that is, in general, not equal to formula_41 unless formula_53. On the fuel stream, since formula_54, we have formula_55. Similarly, on the oxidizer stream, since formula_56, we have formula_57.
The equilibrium condition defines
formula_58
The above relations define the piecewise function formula_59
formula_60
where formula_61 is a mean Lewis number. This leads to a nonlinear equation for formula_45. Since formula_62 is only a function of formula_25 and formula_26, the above expressions can be used to define the function formula_63
formula_64
With appropriate boundary conditions for formula_47, the problem can be solved.
It can be shown that formula_45 and formula_47 are conserved scalars, that is, their derivatives are continuous when crossing the reaction sheet, whereas formula_44 and formula_46 have gradient jumps across the flame sheet.
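The piecewise relations above are straightforward to evaluate once formula_30, formula_33 and formula_34 are given. The sketch below implements them directly with illustrative parameter values, using the equilibrium relations to express formula_44 on the fuel side of the flame.

```python
def coupling_functions(Z_tilde, H_tilde, S, Le_F, Le_O):
    """Piecewise relations Z(Z_tilde) and H(Z_tilde, H_tilde) across the thin
    reaction sheet in the Burke-Schumann limit."""
    S_tilde = S * Le_O / Le_F
    Zs, Zs_t = 1.0 / (S + 1.0), 1.0 / (S_tilde + 1.0)

    if Z_tilde < Zs_t:                      # oxidizer side of the flame (Y_F = 0)
        Z = Z_tilde * Zs / Zs_t
        H = H_tilde + (1.0 / Le_F - 1.0) - (1.0 / Le_O - 1.0) * (1.0 - Z_tilde / Zs_t)
    else:                                   # fuel side of the flame (Y_O = 0)
        Z = Zs + (1.0 - Zs) * (Z_tilde - Zs_t) / (1.0 - Zs_t)
        H = H_tilde + (1.0 / Le_F - 1.0) * (1.0 - Z_tilde) / (1.0 - Zs_t)
    return Z, H

print(coupling_functions(0.3, 0.1, S=4.0, Le_F=1.2, Le_O=1.0))
```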
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\sum_{i=1}^N \\nu_i' \\real_i \\rightarrow \\sum_{i=1}^N \\nu_i'' \\real_i"
},
{
"math_id": 1,
"text": "\\real_i"
},
{
"math_id": 2,
"text": "N"
},
{
"math_id": 3,
"text": "\\nu_i'"
},
{
"math_id": 4,
"text": "\\nu_i''"
},
{
"math_id": 5,
"text": "\\omega"
},
{
"math_id": 6,
"text": "\\omega = \\frac{w_i}{W_i(\\nu_i''-\\nu_i')}"
},
{
"math_id": 7,
"text": "w_i"
},
{
"math_id": 8,
"text": "W_i"
},
{
"math_id": 9,
"text": "D"
},
{
"math_id": 10,
"text": "\\alpha_i=Y_i/[W_i(\\nu_i''-\\nu_i')] \\quad \\text{and} \\quad \\alpha_T = \\frac{\\int_{T_{ref}}^T c_p\\, \\mathrm{d}T}{\\sum_{i=1}^Nh_i^0 W_i(\\nu_i'-\\nu_i'')}"
},
{
"math_id": 11,
"text": "Y_i"
},
{
"math_id": 12,
"text": "c_p = \\sum_{i=1}^N Y_i c_{p,i}"
},
{
"math_id": 13,
"text": "T"
},
{
"math_id": 14,
"text": "h_i^0"
},
{
"math_id": 15,
"text": "\n\\begin{align}\n\\nabla\\cdot[\\rho\\boldsymbol{v} \\alpha_i - \\rho D\\nabla \\alpha_i] = \\omega,\\\\\n\\nabla\\cdot[\\rho\\boldsymbol{v} \\alpha_T - \\rho D\\nabla \\alpha_T] = \\omega\n\\end{align}\n"
},
{
"math_id": 16,
"text": "\\rho"
},
{
"math_id": 17,
"text": "\\boldsymbol{v}"
},
{
"math_id": 18,
"text": "N+1"
},
{
"math_id": 19,
"text": "\\alpha_1"
},
{
"math_id": 20,
"text": "\\nabla\\cdot[\\rho\\boldsymbol{v} \\alpha_1 - \\rho D\\nabla \\alpha_1] = \\omega"
},
{
"math_id": 21,
"text": "\\beta_T=\\alpha_T-\\alpha_1"
},
{
"math_id": 22,
"text": "\\beta_i=\\alpha_i-\\alpha_1"
},
{
"math_id": 23,
"text": "i\\neq 1"
},
{
"math_id": 24,
"text": "\n\\begin{align}\n\\nabla\\cdot[\\rho\\boldsymbol{v} \\beta_i - \\rho D\\nabla \\beta_i] = 0,\\\\\n\\nabla\\cdot[\\rho\\boldsymbol{v} \\beta_T - \\rho D\\nabla \\beta_T] = 0.\n\\end{align}\n"
},
{
"math_id": 25,
"text": "Y_F"
},
{
"math_id": 26,
"text": "Y_O"
},
{
"math_id": 27,
"text": "\n\\begin{align}\n\\rho \\frac{\\partial Y_F}{\\partial t} + \\rho \\mathbf{v}\\cdot\\nabla Y_F &= \\frac{1}{Le_F}\\nabla\\cdot(\\rho D_T \\nabla Y_F) - \\omega,\\\\\n\\rho \\frac{\\partial Y_O}{\\partial t} + \\rho \\mathbf{v}\\cdot\\nabla Y_O &= \\frac{1}{Le_O}\\nabla\\cdot(\\rho D_T \\nabla Y_O) - S\\omega,\\\\\n\\rho \\frac{\\partial T}{\\partial t} + \\rho \\mathbf{v}\\cdot\\nabla T &= \\nabla\\cdot(\\rho D_T \\nabla T) + q\\omega\n\\end{align}\n"
},
{
"math_id": 28,
"text": "\\omega=Da\\,Y_FY_O e^{-E/RT}"
},
{
"math_id": 29,
"text": "Da"
},
{
"math_id": 30,
"text": "S"
},
{
"math_id": 31,
"text": "q"
},
{
"math_id": 32,
"text": "e^{-E/RT}"
},
{
"math_id": 33,
"text": "Le_F"
},
{
"math_id": 34,
"text": "Le_O"
},
{
"math_id": 35,
"text": "D_T"
},
{
"math_id": 36,
"text": "Da\\rightarrow \\infty"
},
{
"math_id": 37,
"text": "Y_FY_O = 0"
},
{
"math_id": 38,
"text": "\n\\begin{align}\nZ = \\frac{SY_F-Y_O+1}{S+1}, &\\qquad \\tilde Z = \\frac{\\tilde SY_F-Y_O+1}{\\tilde S+1},\\\\\nH = \\frac{T-T_0}{T_s-T_0} + Y_F + Y_O -1 , &\\qquad \\tilde H = \\frac{T-T_0}{T_s-T_0} + \\frac{Y_O}{Le_O} + \\frac{Y_F-1}{Le_F}\n\\end{align}\n"
},
{
"math_id": 39,
"text": "\\tilde S = SLe_O/Le_F"
},
{
"math_id": 40,
"text": "T_0"
},
{
"math_id": 41,
"text": "T_s"
},
{
"math_id": 42,
"text": "\n\\begin{align}\n\\rho \\frac{\\partial Z}{\\partial t} + \\rho \\mathbf{v}\\cdot\\nabla Z &= \\frac{1}{Le_m}\\nabla\\cdot(\\rho D_T \\nabla \\tilde Z),\\\\\n\\rho \\frac{\\partial H}{\\partial t} + \\rho \\mathbf{v}\\cdot\\nabla H &= \\nabla\\cdot(\\rho D_T \\nabla \\tilde H),\n\\end{align}\n"
},
{
"math_id": 43,
"text": "Le_m=Le_O (S+1)/(\\tilde S+1)"
},
{
"math_id": 44,
"text": "Z"
},
{
"math_id": 45,
"text": "\\tilde Z"
},
{
"math_id": 46,
"text": "H"
},
{
"math_id": 47,
"text": "\\tilde H"
},
{
"math_id": 48,
"text": "Z=Z_s=1/(S+1)"
},
{
"math_id": 49,
"text": "\\tilde Z=\\tilde Z_s=1/(\\tilde S+1)"
},
{
"math_id": 50,
"text": "H=H_s =(T_f-T_0)/(T_s-T_0)-1"
},
{
"math_id": 51,
"text": "\\tilde H=\\tilde H_s = (T_f-T_0)/(T_s-T_0)-1/Le_F"
},
{
"math_id": 52,
"text": "T_f"
},
{
"math_id": 53,
"text": "Le_F=Le_O=1"
},
{
"math_id": 54,
"text": "Y_F-1=Y_O=T-T_0=0"
},
{
"math_id": 55,
"text": "Z-1=\\tilde Z-1=H=\\tilde H=0"
},
{
"math_id": 56,
"text": "Y_F=Y_O-1=T-1=0"
},
{
"math_id": 57,
"text": "Z=\\tilde Z=H-(1-T_0)/(T_s-T_0)=\\tilde H-(1-T_0)/(T_s-T_0)-1/Le_O+1/Le_F=0"
},
{
"math_id": 58,
"text": "\n\\begin{align}\n\\tilde Z<\\tilde Z_s: &\\qquad Y_F = 0,\\,\\,\\, Y_O = 1-\\frac{\\tilde Z}{\\tilde Z_s}=1-\\frac{Z}{Z_s},\\\\ \n\\tilde Z>\\tilde Z_s: &\\qquad Y_O = 0,\\,\\,\\, Y_F = \\frac{\\tilde Z-\\tilde Z_s}{1-\\tilde Z_s}=\\frac{Z-Z_s}{1-Z_s}.\n\\end{align}\n"
},
{
"math_id": 59,
"text": "Z(\\tilde Z)"
},
{
"math_id": 60,
"text": "\nZ=\\begin{cases}\n\\tilde Z/Le_m,\\quad \\text{if}\\,\\,\\tilde Z<\\tilde Z_s\\\\\nZ_s + Le(\\tilde Z-\\tilde Z_s)/Le_m ,\\quad \\text{if}\\,\\,\\tilde Z>\\tilde Z_s\n\\end{cases}\n"
},
{
"math_id": 61,
"text": "Le_m=\\tilde Z_s/Z_s=(S+1)/(S/Le_F+1)"
},
{
"math_id": 62,
"text": "H-\\tilde H"
},
{
"math_id": 63,
"text": "H(\\tilde Z,\\tilde H)"
},
{
"math_id": 64,
"text": "\nH=\\tilde H + \\begin{cases}\n(1/Le_F-1) -(1/Le_O-1)(1-\\tilde Z/\\tilde Z_s),\\quad \\text{if}\\,\\,\\tilde Z<\\tilde Z_s\\\\\n(1/Le_F-1)(1-\\tilde Z)/(1-\\tilde Z_s) ,\\quad \\text{if}\\,\\,\\tilde Z>\\tilde Z_s\n\\end{cases}\n"
}
] |
https://en.wikipedia.org/wiki?curid=63465236
|
63466570
|
Hidden shift problem
|
Problem in computer science
In quantum computing, the hidden shift problem is a type of oracle-based problem. Various versions of this problem have quantum algorithms which can run much more quickly than known non-quantum methods for the same problem. In its general form, it is equivalent to the hidden subgroup problem for the dihedral group. It is a major open problem to understand how well quantum algorithms can perform for this task, as it can be applied to break lattice-based cryptography.
Problem statement.
The hidden shift problem states: Given an oracle formula_0 that encodes two functions formula_1 and formula_2, there is an formula_3-bit string formula_4 for which formula_5 for all formula_6. Find formula_4.
Functions such as the Legendre symbol and bent functions satisfy these constraints.
Algorithms.
With a quantum algorithm that is defined as formula_7, where formula_8 is the Hadamard gate and formula_9 is the Fourier transform of formula_2, certain instantiations of this problem can be solved in a polynomial number of queries to formula_0 while taking exponential queries with a classical algorithm.
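For contrast with the quantum query complexity, a classical brute-force approach simply tries every candidate shift. The sketch below does this for the Boolean case, interpreting the addition in formula_5 as bitwise XOR and using a small bent function so that the shift is uniquely determined; the instance is a toy example, not one drawn from the literature.

```python
def find_hidden_shift(f, g, n):
    """Classical brute force: test every candidate shift s and check that
    g(x) = f(x XOR s) for all x. This takes exponentially many evaluations,
    in contrast with the polynomial number of quantum queries."""
    for s in range(2 ** n):
        if all(g(x) == f(x ^ s) for x in range(2 ** n)):
            return s
    return None

def f(x):
    # A 4-bit bent function (Maiorana-McFarland form): x0*x1 XOR x2*x3,
    # which has no nonzero linear structure, so the shift is unique.
    b = [(x >> i) & 1 for i in range(4)]
    return (b[0] & b[1]) ^ (b[2] & b[3])

secret = 0b0110
g = lambda x: f(x ^ secret)
print(find_hidden_shift(f, g, 4) == secret)   # True
```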
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "O"
},
{
"math_id": 1,
"text": "f"
},
{
"math_id": 2,
"text": "g"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "s"
},
{
"math_id": 5,
"text": "g(x) = f(x + s)"
},
{
"math_id": 6,
"text": "x"
},
{
"math_id": 7,
"text": "|s\\rangle = H^{\\otimes n} O_{f} H^{\\otimes n} O_{\\hat{g}} H^{\\otimes n}|0^{n}\\rangle "
},
{
"math_id": 8,
"text": " H"
},
{
"math_id": 9,
"text": "\\hat{g}"
}
] |
https://en.wikipedia.org/wiki?curid=63466570
|
63473753
|
Andrew S. Levey
|
American nephrologist (born 1950)
Andrew S. Levey (born September 16, 1950) is an American nephrologist who transformed chronic kidney disease (CKD) clinical practice, research, and public health by developing equations to estimate glomerular filtration rate (GFR) (renal function), and leading the global standardization of CKD definition and staging.
Education and career.
Levey graduated from the University of Chicago in 1972 with a Bachelor of Arts in Biological Sciences and graduated as a Doctor of Medicine (MD) in 1976 at the Boston University School of Medicine. He became a Professor of Medicine in the Tufts University School of Medicine in 1994, and was Chief of the Division of Nephrology from 1999 to 2017.
Contributions.
Levey is known for developing the most widely used equations to estimate GFR (renal function) globally. He pioneered work with the MDRD Study equation and led the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI), which pooled measured kidney function data from CKD studies all over the world to develop equations that estimate kidney function from serum creatinine, cystatin C, and panels of metabolites and low-molecular-weight proteins. The 2009 creatinine equation with race and the 2021 creatinine equation without race, which replaced the 2009 equation, are shown below:
formula_0
formula_1
For both equations, the exponent "a" is −0.241 for females and −0.302 for males; min indicates the minimum of SCr/k or 1, and max indicates the maximum of SCr/k or 1, where SCr is serum creatinine in mg/dL and k is 0.7 for females and 0.9 for males.
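As an illustration, the 2021 race-free creatinine equation above can be evaluated directly from these constants. The sketch below is a toy calculator for the formula as written, not a clinical tool; the sex-specific constants assumed here are those of the published CKD-EPI equations.

```python
def egfr_ckd_epi_2021(scr_mg_dl, age_years, is_female):
    """Sketch of the 2021 race-free CKD-EPI creatinine equation (mL/min/1.73 m^2)."""
    k = 0.7 if is_female else 0.9           # assumed sex-specific constant
    a = -0.241 if is_female else -0.302     # exponent from the text above
    ratio = scr_mg_dl / k
    egfr = 142 * min(ratio, 1) ** a * max(ratio, 1) ** -1.200 * 0.9938 ** age_years
    return egfr * (1.012 if is_female else 1.0)

# Example: a 55-year-old male with a serum creatinine of 1.1 mg/dL.
print(round(egfr_ckd_epi_2021(1.1, 55, is_female=False), 1))
```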
Levey is an authority on clinical practice guidelines in kidney disease. He chaired the U.S. National Kidney Foundation Kidney Disease Outcome Quality Initiative (KDOQI) Clinical Practice Guideline Workgroup on “Chronic Kidney Disease: Evaluation, Classification and Risk Stratification”. The recommendations from this workgroup transformed the way Kidney Disease was defined and staged globally. The guideline has been cited over 10,000 times in subsequent research publications. He led multiple KDOQI and Kidney Disease Improving Global Outcomes (KDIGO) guidelines which advanced the global recognition and care for CKD, hypertension, acute kidney injury, living kidney donor evaluation, and nomenclature.
Levey was a founding member of the Chronic Kidney Disease Prognosis Consortium (CKDPC), which includes over 80 cohorts and 10 million participants and has informed multiple clinical practice guidelines and regulatory policies.
Levey led the U.S. National Kidney Foundation task force on cardiovascular disease in chronic kidney disease, which led to the recognition by the American Heart Association of CKD as a risk factor for cardiovascular disease.
Levey co-chaired the U.S. Centers for Disease Control and Prevention expert panel to develop comprehensive public health strategies for preventing the development, progression, and complications of CKD.
Levey led scientific workshops sponsored by the U.S. National Kidney Foundation in collaboration with the U.S. Food and Drug Administration and the European Medicines Agency on the evaluation of measures of renal function as surrogate endpoints for clinical trials of kidney disease progression.
Levey was editor-in-chief for "American Journal of Kidney Diseases", the official journal of the U.S. National Kidney Foundation, from 2007-2016.
Personal.
Levey is married to Roberta Falke, MD. In 2009 he donated a kidney to her via a 3-pair transplant.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathrm{eGFR} = 141\\ \\times \\ \\mathrm{\\min(SCr/k,1)}^{a} \\ \\times \\ \\mathrm{\\max(SCr/k,1)}^{-1.209} \\ \\times \\ 0.993^\\text{Age} \\ \\times \\text{[1.018 if Female]} \\ \\times \\text{[1.159 if Black]} \\ "
},
{
"math_id": 1,
"text": "\\mathrm{eGFR} = 142\\ \\times \\ \\mathrm{\\min(SCr/k,1)}^{a} \\ \\times \\ \\mathrm{\\max(SCr/k,1)}^{-1.200} \\ \\times \\ 0.9938^\\text{Age} \\ \\times \\text{[1.012 if Female]} \\ "
}
] |
https://en.wikipedia.org/wiki?curid=63473753
|
634759
|
Hausdorff paradox
|
The Hausdorff paradox is a paradox in mathematics named after Felix Hausdorff. It involves the sphere formula_0 (the surface of a 3-dimensional ball in formula_1). It states that if a certain countable subset is removed from formula_2, then the remainder can be divided into three disjoint subsets formula_3 and formula_4 such that formula_5 and formula_6 are all congruent. In particular, it follows that on formula_7 there is no finitely additive measure defined on all subsets such that the measure of congruent sets is equal (because this would imply that the measure of formula_6 is simultaneously formula_8, formula_9, and formula_10 of the non-zero measure of the whole sphere).
The paradox was published in "Mathematische Annalen" in 1914 and also in Hausdorff's book, "Grundzüge der Mengenlehre", the same year. The proof of the much more famous Banach–Tarski paradox uses Hausdorff's ideas. The proof of this paradox relies on the axiom of choice.
This paradox shows that there is no finitely additive measure on a sphere defined on "all" subsets which is equal on congruent pieces. (Hausdorff first showed in the same paper the easier result that there is no "countably" additive measure defined on all subsets.) The structure of the group of rotations on the sphere plays a crucial role here – the statement is not true on the plane or the line. In fact, as was later shown by Banach, it is possible to define an "area" for "all" bounded subsets in the Euclidean plane (as well as "length" on the real line) in such a way that congruent sets will have equal "area". (This Banach measure, however, is only finitely additive, so it is not a measure in the full sense, but it equals the Lebesgue measure on sets for which the latter exists.) This implies that if two open subsets of the plane (or the real line) are equi-decomposable then they have equal area.
|
[
{
"math_id": 0,
"text": "{ S^2}"
},
{
"math_id": 1,
"text": "{ \\R^3 }"
},
{
"math_id": 2,
"text": "{ S^2 }"
},
{
"math_id": 3,
"text": "{ A,B }"
},
{
"math_id": 4,
"text": "{ C }"
},
{
"math_id": 5,
"text": "{ A, B, C }"
},
{
"math_id": 6,
"text": "{ B \\cup C }"
},
{
"math_id": 7,
"text": "S^2"
},
{
"math_id": 8,
"text": "1/3"
},
{
"math_id": 9,
"text": "1/2"
},
{
"math_id": 10,
"text": "2/3"
}
] |
https://en.wikipedia.org/wiki?curid=634759
|
634765
|
Hadamard transform
|
Involutive change of basis in linear algebra
The Hadamard transform (also known as the Walsh–Hadamard transform, Hadamard–Rademacher–Walsh transform, Walsh transform, or Walsh–Fourier transform) is an example of a generalized class of Fourier transforms. It performs an orthogonal, symmetric, involutive, linear operation on 2"m" real numbers (or complex, or hypercomplex numbers, although the Hadamard matrices themselves are purely real).
The Hadamard transform can be regarded as being built out of size-2 discrete Fourier transforms (DFTs), and is in fact equivalent to a multidimensional DFT of size 2 × 2 × ⋯ × 2 × 2. It decomposes an arbitrary input vector into a superposition of Walsh functions.
The transform is named for the French mathematician Jacques Hadamard (), the German-American mathematician Hans Rademacher, and the American mathematician Joseph L. Walsh.
Definition.
The Hadamard transform "H""m" is a 2"m" × 2"m" matrix, the Hadamard matrix (scaled by a normalization factor), that transforms 2"m" real numbers "x""n" into 2"m" real numbers "X""k". The Hadamard transform can be defined in two ways: recursively, or by using the binary (base-2) representation of the indices "n" and "k".
Recursively, we define the 1 × 1 Hadamard transform "H"0 by the identity "H"0 = 1, and then define "H""m" for "m" > 0 by:
formula_0
where the 1/√2 is a normalization that is sometimes omitted.
For "m" > 1, we can also define "H""m" by:
formula_1
where formula_2 represents the Kronecker product. Thus, other than this normalization factor, the Hadamard matrices are made up entirely of 1 and −1.
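A short NumPy sketch of this Kronecker-product construction (the function name is illustrative); with the 1/√2 normalization the resulting matrix is orthogonal and symmetric, so applying it twice recovers the identity.

```python
import numpy as np

def hadamard(m):
    """Normalized Hadamard matrix H_m built from the recursion H_m = H_1 (x) H_{m-1}."""
    H = np.array([[1.0]])
    H1 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    for _ in range(m):
        H = np.kron(H1, H)
    return H

H2 = hadamard(2)
assert np.allclose(H2 @ H2, np.eye(4))   # orthogonal and involutive
```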
Equivalently, we can define the Hadamard matrix by its ("k", "n")-th entry by writing
formula_3
where the "k""j" and "n""j" are the bit elements (0 or 1) of "k" and "n", respectively. Note that for the element in the top left corner, we define: formula_4. In this case, we have:
formula_5
This is exactly the multidimensional formula_6 DFT, normalized to be unitary, if the inputs and outputs are regarded as multidimensional arrays indexed by the "n""j" and "k""j", respectively.
Some examples of the Hadamard matrices follow.
formula_7
where formula_8 is the bitwise dot product of the binary representations of the numbers i and j. For example, if formula_9, then formula_10, agreeing with the above (ignoring the overall constant). Note that the first row, first column element of the matrix is denoted by formula_11.
"H"1 is precisely the size-2 DFT. It can also be regarded as the Fourier transform on the two-element "additive" group of Z/(2).
The rows of the Hadamard matrices are the Walsh functions.
Advantages of the Walsh–Hadamard transform.
Real.
According to the above definition of the matrix "H", here we let "H" = "H"["m","n"]:
formula_12
In the Walsh transform, only 1 and −1 appear in the matrix. Since these are real numbers, no complex-number calculations are needed.
No multiplication is required.
The DFT needs irrational multiplications, while the Hadamard transform does not. Even rational multiplication is not needed, since sign flips are all that is required.
Some properties are similar to those of the DFT.
In the Walsh transform matrix, all entries in the first row (and column) are equal to 1.
formula_13
Discrete Fourier transform:
formula_14
In the discrete Fourier transform, when "m" equals zero (that is, for the first row), the result of the DFT is also 1.
From the first row to the last, the rows of the matrix exhibit increasing frequency: the first row is constant (lowest frequency), and each subsequent row has more sign changes, up to the last row.
If we count the zero crossings:
First row = 0 zero crossing
Second row = 1 zero crossing
Third row = 2 zero crossings
Eighth row = 7 zero crossings
Relation to Fourier transform.
The Hadamard transform is in fact equivalent to a multidimensional DFT of size 2 × 2 × ⋯ × 2 × 2.
Another approach is to view the Hadamard transform as a Fourier transform on the Boolean group formula_15.
Using the Fourier transform on finite (abelian) groups, the Fourier transform of a function formula_16 is the function formula_17 defined by
formula_18
where formula_19 is a character of formula_20. Each character has the form formula_21 for some formula_22, where the multiplication is the boolean dot product on bit strings, so we can identify the input to formula_23 with formula_22 (Pontryagin duality) and define formula_24 by
formula_25
This is the Hadamard transform of formula_26, considering the input to formula_26 and formula_23 as boolean strings.
In terms of the above formulation where the Hadamard transform multiplies a vector of formula_27 complex numbers formula_28 on the left by the Hadamard matrix formula_29 the equivalence is seen by taking formula_26 to take as input the bit string corresponding to the index of an element of formula_28, and having formula_26 output the corresponding element of formula_28.
Compare this to the usual discrete Fourier transform which when applied to a vector formula_28 of formula_27 complex numbers instead uses characters of the cyclic group formula_30.
Computational complexity.
In the classical domain, the Hadamard transform can be computed in formula_31 operations (formula_32), using the fast Hadamard transform algorithm.
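A minimal sketch of the fast Walsh–Hadamard butterfly behind that operation count; it uses only additions and subtractions, and the output is unnormalized (multiply by 1/√n to match the orthogonal convention used above).

```python
def fwht(a):
    """In-place fast Walsh-Hadamard transform; len(a) must be a power of two."""
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]   # butterfly step
        h *= 2
    return a

print(fwht([1, 0, 1, 0, 0, 1, 1, 0]))   # unnormalized transform of an 8-point signal
```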
In the quantum domain, the Hadamard transform can be computed in formula_33 time, as it is a quantum logic gate that can be parallelized.
Quantum computing applications.
The Hadamard transform is used extensively in quantum computing. The 2 × 2 Hadamard transform formula_34 is the quantum logic gate known as the Hadamard gate, and the application of a Hadamard gate to each qubit of an formula_35-qubit register in parallel is equivalent to the Hadamard transform formula_29.
Hadamard gate.
In quantum computing, the Hadamard gate is a one-qubit rotation, mapping the qubit-basis states formula_36 and formula_37 to two superposition states with equal weight of the computational basis states formula_36 and formula_37. Usually the phases are chosen so that
formula_38
in Dirac notation. This corresponds to the transformation matrix
formula_39
in the formula_40 basis, also known as the computational basis. The states formula_41 and formula_42 are known as formula_43 and formula_44 respectively, and together constitute the polar basis in quantum computing.
Hadamard gate operations.
formula_45
One application of the Hadamard gate to either a 0 or 1 qubit will produce a quantum state that, if observed, will be a 0 or 1 with equal probability (as seen in the first two operations). This is exactly like flipping a fair coin in the standard probabilistic model of computation. However, if the Hadamard gate is applied twice in succession (as is effectively being done in the last two operations), then the final state is always the same as the initial state.
Hadamard transform in quantum algorithms.
Computing the quantum Hadamard transform is simply the application of a Hadamard gate to each qubit individually because of the tensor product structure of the Hadamard transform. This simple result means the quantum Hadamard transform requires formula_46 operations, compared to the classical case of formula_47 operations.
For an formula_35-qubit system, Hadamard gates acting on each of the formula_35 qubits (each initialized to formula_48) can be used to prepare uniform quantum superposition states when formula_49 is of the form formula_50.
In this case, with formula_35 qubits, the combined Hadamard gate formula_29 is expressed as the tensor product of formula_35 Hadamard gates:
formula_51
The resulting uniform quantum superposition state is then:
formula_52
This generalizes the preparation of uniform quantum states using Hadamard gates for any formula_50.
Measurement of this uniform quantum state results in a random state between formula_48 and formula_53.
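A minimal state-vector sketch (plain NumPy rather than a quantum-computing framework) of this preparation: applying the n-fold tensor product of Hadamard gates to the all-zeros state yields equal amplitudes on all 2^n basis states.

```python
import numpy as np

n = 3
H1 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
Hn = np.array([[1.0]])
for _ in range(n):
    Hn = np.kron(Hn, H1)                 # H_n = H_1 (x) ... (x) H_1, n factors

state = np.zeros(2 ** n)
state[0] = 1.0                           # the state |00...0>
state = Hn @ state                       # apply the combined Hadamard gate
assert np.allclose(state, np.full(2 ** n, 1.0 / np.sqrt(2 ** n)))
```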
Many quantum algorithms use the Hadamard transform as an initial step, since as explained earlier, it maps "n" qubits initialized with formula_54 to a superposition of all 2"n" orthogonal states in the formula_55 basis with equal weight. For example, this is used in the Deutsch–Jozsa algorithm, Simon's algorithm, the Bernstein–Vazirani algorithm, and in Grover's algorithm. Note that Shor's algorithm uses both an initial Hadamard transform, as well as the quantum Fourier transform, which are both types of Fourier transforms on finite groups; the first on formula_56 and the second on formula_57.
Preparation of uniform quantum superposition states in the general case, when formula_58 ≠ formula_27, is non-trivial and requires more work.
An efficient and deterministic approach for preparing the superposition state
formula_59
with a gate complexity and circuit depth of only formula_60 for all formula_49 was recently presented. This approach requires only
formula_61
qubits. Importantly, neither ancilla qubits nor any quantum gates with multiple controls are needed in this approach for creating the uniform superposition state formula_62.
Molecular phylogenetics (evolutionary biology) applications.
The Hadamard transform can be used to estimate phylogenetic trees from molecular data. Phylogenetics is the subfield of evolutionary biology focused on understanding the relationships among organisms. A Hadamard transform applied to a vector (or matrix) of site pattern frequencies obtained from a DNA multiple sequence alignment can be used to generate another vector that carries information about the tree topology. The invertible nature of the phylogenetic Hadamard transform also allows the calculation of site likelihoods from a tree topology vector, allowing one to use the Hadamard transform for maximum likelihood estimation of phylogenetic trees. However, the latter application is less useful than the transformation from the site pattern vector to the tree vector because there are other ways to calculate site likelihoods that are much more efficient. Nevertheless, the invertible nature of the phylogenetic Hadamard transform does provide an elegant tool for mathematical phylogenetics.
The mechanics of the phylogenetic Hadamard transform involve the calculation of a vector formula_63 that provides information about the topology and branch lengths for tree formula_64 using the site pattern vector or matrix formula_65.
formula_66
where formula_67 is the Hadamard matrix of the appropriate size. This equation can be rewritten as a series of three equations to simplify its interpretation:
formula_68
The invertible nature of this equation allows one to calculate an expected site pattern vector (or matrix) as follows:
formula_69
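A small NumPy sketch of this three-step transform and its inverse. The helper names and the eight site-pattern frequencies below are made up; the toy vector is chosen so that every entry of Hs(T) is positive before the element-wise logarithm is taken.

```python
import numpy as np

def hadamard_pm1(n):
    """Unnormalized +1/-1 Hadamard matrix of order n (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.kron(np.array([[1.0, 1.0], [1.0, -1.0]]), H)
    return H

def tree_vector(s):
    """gamma(T) = H^{-1} ln(H s(T)), with the logarithm applied element-wise."""
    H = hadamard_pm1(len(s))
    return np.linalg.inv(H) @ np.log(H @ s)

def site_pattern_vector(gamma):
    """Inverse transform: s(T) = H^{-1} exp(H gamma(T))."""
    H = hadamard_pm1(len(gamma))
    return np.linalg.inv(H) @ np.exp(H @ gamma)

# Round trip on a toy 8-element site-pattern frequency vector (4 taxa, RY coding).
s = np.array([0.70, 0.05, 0.05, 0.03, 0.06, 0.03, 0.03, 0.05])
assert np.allclose(site_pattern_vector(tree_vector(s)), s)
```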
We can use the Cavender–Farris–Neyman (CFN) two-state substitution model for DNA by encoding the nucleotides as binary characters (the purines A and G are encoded as R and the pyrimidines C and T are encoded as Y). This makes it possible to encode the multiple sequence alignment as the site pattern vector formula_65 that can be converted to a tree vector formula_63, as shown in the following example:
The example shown in this table uses the simplified three-equation scheme and is for a four-taxon tree that can be written as ((A,B),(C,D)); in newick format. The site patterns are written in the order ABCD. This particular tree has two long terminal branches (0.2 transversion substitutions per site), two short terminal branches (0.025 transversion substitutions per site), and a short internal branch (0.025 transversion substitutions per site); thus, it would be written as ((A:0.025,B:0.2):0.025,(C:0.025,D:0.2)); in newick format. This tree will exhibit long branch attraction if the data are analyzed using the maximum parsimony criterion (assuming the sequence analyzed is long enough for the observed site pattern frequencies to be close to the expected frequencies shown in the formula_70 column). The long branch attraction reflects the fact that the expected number of site patterns with index 6 -- which support the tree ((A,C),(B,D)); -- exceeds the expected number of site patterns that support the true tree (index 4). Obviously, the invertible nature of the phylogenetic Hadamard transform means that the tree vector formula_63 corresponds to the correct tree. Parsimony analysis after the transformation is therefore statistically consistent, as would be a standard maximum likelihood analysis using the correct model (in this case the CFN model).
Note that the site pattern with index 0 corresponds to the sites that have not changed (after encoding the nucleotides as purines or pyrimidines). The indices with asterisks (3, 5, and 6) are "parsimony-informative", and the remaining indices represent site patterns where a single taxon differs from the other three taxa (so they are the equivalent of terminal branch lengths in a standard maximum likelihood phylogenetic tree).
If one wishes to use nucleotide data without recoding as R and Y (and ultimately as 0 and 1) it is possible to encode the site patterns as a matrix. If we consider a four-taxon tree there are a total of 256 site patterns (four nucleotides to the 4th power). However, symmetries of the Kimura three-parameter (or K81) model allow us to reduce the 256 possible site patterns for DNA to 64 patterns, making it possible to encode the nucleotide data for a four-taxon tree as an 8 × 8 matrix in a manner similar to the vector of 8 elements used above for transversion (RY) site patterns. This is accomplished by recoding the data using the Klein four-group:
As with RY data, site patterns are indexed relative to the base in the arbitrarily chosen first taxon with the bases in the subsequent taxa encoded relative to that first base. Thus, the first taxon receives the bit pair (0,0). Using those bit pairs one can produce two vectors similar to the RY vectors and then populate the matrix using those vectors. This can be illustrated using the example from Hendy et al. (1994), which is based on a multiple sequence alignment of four primate hemoglobin pseudogenes:
The much larger number of site patterns in column 0 reflects the fact that column 0 corresponds to transition differences, which accumulate more rapidly than transversion differences in virtually all comparisons of genomic regions (and definitely accumulate more rapidly in the hemoglobin pseudogenes used for this worked example). If we consider the site pattern AAGG, it would correspond to binary pattern 0000 for the second element of the Klein group bit pair and 0011 for the first element. In this case, the binary pattern based on the first element corresponds to index 3 (so row 3 in column 0; indicated with a single asterisk in the table). The site patterns GGAA, CCTT, and TTCC would be encoded in the exact same way. The site pattern AACT would be encoded with binary pattern 0011 based on the second element and 0001 based on the first element; this yields index 1 for the first element and index 3 for the second. The index based on the second Klein group bit pair is multiplied by 8 to yield the column index (in this case it would be column 24). The cell that would include the count of AACT site patterns is indicated with two asterisks; however, the absence of a number in the example indicates that the sequence alignment includes no AACT site patterns (likewise, CCAG, GGTC, and TTGA site patterns, which would be encoded in the same way, are absent).
Other applications.
The Hadamard transform is also used in data encryption, as well as many signal processing and data compression algorithms, such as JPEG XR and MPEG-4 AVC. In video compression applications, it is usually used in the form of the sum of absolute transformed differences. It is also a crucial part of significant number of algorithms in quantum computing. The Hadamard transform is also applied in experimental techniques such as NMR, mass spectrometry and crystallography. It is additionally used in some versions of locality-sensitive hashing, to obtain pseudo-random matrix rotations.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "H_m = \\frac{1}{\\sqrt2} \\begin{pmatrix} H_{m-1} & H_{m-1} \\\\ H_{m-1} & -H_{m-1} \\end{pmatrix}"
},
{
"math_id": 1,
"text": "H_m = H_{1} \\otimes H_{m-1}"
},
{
"math_id": 2,
"text": " \\otimes "
},
{
"math_id": 3,
"text": "\\begin{align}\n k &= \\sum^{m-1}_{i=0} {k_i 2^i} = k_{m-1} 2^{m-1} + k_{m-2} 2^{m-2} + \\dots + k_1 2 + k_0 \\\\\n n &= \\sum^{m-1}_{i=0} {n_i 2^i} = n_{m-1} 2^{m-1} + n_{m-2} 2^{m-2} + \\dots + n_1 2 + n_0\n\\end{align}"
},
{
"math_id": 4,
"text": "k = n = 0"
},
{
"math_id": 5,
"text": " (H_m)_{k,n} = \\frac{1}{2^{m/2}} (-1)^{\\sum_j k_j n_j}"
},
{
"math_id": 6,
"text": " 2 \\times 2 \\times \\cdots \\times 2 \\times 2"
},
{
"math_id": 7,
"text": " \n\\begin{align}\n H_0 & = +\\begin{pmatrix}1\\end{pmatrix}\\\\[5pt]\n H_1 & = \\frac{1}{\\sqrt2}\n \\left(\\begin{array}{rr}\n 1 & 1\\\\\n 1 & -1\n \\end{array}\\right)\\\\[5pt]\n H_2 & = \\frac{1}{2}\n \\left(\\begin{array}{rrrr}\n 1 & 1 & 1 & 1\\\\\n 1 & -1 & 1 & -1\\\\\n 1 & 1 & -1 & -1\\\\\n 1 & -1 & -1 & 1\n \\end{array}\\right)\\\\[5pt]\n H_3 & = \\frac{1}{2^{3/2}}\n \\left(\\begin{array}{rrrrrrrr}\n 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\\\\n 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1\\\\\n 1 & 1 & -1 & -1 & 1 & 1 & -1 & -1\\\\\n 1 & -1 & -1 & 1 & 1 & -1 & -1 & 1\\\\ \n 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1\\\\\n 1 & -1 & 1 & -1 & -1 & 1 & -1 & 1\\\\\n 1 & 1 & -1 & -1 & -1 & -1 & 1 & 1\\\\\n 1 & -1 & -1 & 1 & -1 & 1 & 1 & -1\n \\end{array}\\right)\\\\[5pt]\n (H_n)_{i,j} & = \\frac{1}{2^{n/2}} (-1)^{i \\cdot j}\n\\end{align}\n"
},
{
"math_id": 8,
"text": " i \\cdot j "
},
{
"math_id": 9,
"text": " n \\;\\geq\\; 2"
},
{
"math_id": 10,
"text": " (H_n)_{3,2} \\;=\\; (-1)^{3 \\cdot 2} \\;=\\; (-1)^{(1,1) \\cdot (1,0)} \\;=\\; (-1)^{1+0} \\;=\\; (-1)^1 \\;=\\; -1"
},
{
"math_id": 11,
"text": " (H_n)_{0,0} "
},
{
"math_id": 12,
"text": "H[m,n]=\\begin{pmatrix} 1 & 1 \\\\ 1 & -1 \\end{pmatrix}"
},
{
"math_id": 13,
"text": "H[m,n] = \\left(\\begin{array}{rrrrrrrr}\n 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\\\\n 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1\\\\\n 1 & 1 & -1 & -1 & -1 & -1 & 1 & 1\\\\\n 1 & 1 & -1 & -1 & 1 & 1 & -1 & -1\\\\ \n 1 & -1 & -1 & 1 & 1 & -1 & -1 & 1\\\\\n 1 & -1 & -1 & 1 & -1 & 1 & 1 & -1\\\\\n 1 & -1 & 1 & -1 & -1 & 1 & -1 & 1\\\\\n 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1\n\\end{array}\\right)"
},
{
"math_id": 14,
"text": "e^{-j 2\\pi mn/N}"
},
{
"math_id": 15,
"text": "(\\Z / 2\\Z)^n"
},
{
"math_id": 16,
"text": "f \\colon (\\Z/2\\Z)^n \\to \\Complex"
},
{
"math_id": 17,
"text": "\\widehat f"
},
{
"math_id": 18,
"text": "\\widehat{f}(\\chi) = \\sum_{a \\in (\\Z/2\\Z)^n} f(a) \\bar{\\chi}(a)"
},
{
"math_id": 19,
"text": "\\chi"
},
{
"math_id": 20,
"text": "(\\Z/2\\Z)^n"
},
{
"math_id": 21,
"text": "\\chi_r(a) = (-1)^{a \\cdot r}"
},
{
"math_id": 22,
"text": "r \\in (\\Z/2\\Z)^n"
},
{
"math_id": 23,
"text": "\\widehat{f}"
},
{
"math_id": 24,
"text": "\\widehat f \\colon (\\Z/2\\Z)^n \\to \\Complex"
},
{
"math_id": 25,
"text": "\\widehat{f}(r) = \\sum_{a \\in (\\Z/2\\Z)^n} f(a) (-1)^{r \\cdot a}"
},
{
"math_id": 26,
"text": "f"
},
{
"math_id": 27,
"text": "2^n"
},
{
"math_id": 28,
"text": "v"
},
{
"math_id": 29,
"text": "H_n"
},
{
"math_id": 30,
"text": "\\Z / 2^n \\Z"
},
{
"math_id": 31,
"text": "n \\log n"
},
{
"math_id": 32,
"text": "n = 2^m"
},
{
"math_id": 33,
"text": "O(1)"
},
{
"math_id": 34,
"text": "H_1"
},
{
"math_id": 35,
"text": "n"
},
{
"math_id": 36,
"text": "|0 \\rangle "
},
{
"math_id": 37,
"text": "|1 \\rangle "
},
{
"math_id": 38,
"text": "H=\\frac{|0\\rangle+|1\\rangle}{\\sqrt{2}}\\langle0|+\\frac{|0\\rangle-|1\\rangle}{\\sqrt{2}}\\langle1|"
},
{
"math_id": 39,
"text": "H_1=\\frac{1}{\\sqrt{2}}\\begin{pmatrix} 1 & 1 \\\\ 1 & -1 \\end{pmatrix}"
},
{
"math_id": 40,
"text": "|0 \\rangle , |1 \\rangle "
},
{
"math_id": 41,
"text": " \\frac{\\left|0\\right\\rangle + \\left|1\\right\\rangle}{\\sqrt{2}}"
},
{
"math_id": 42,
"text": "\\frac{\\left|0\\right\\rangle - \\left|1\\right\\rangle}{\\sqrt{2}}"
},
{
"math_id": 43,
"text": "\\left|\\boldsymbol{+}\\right\\rangle"
},
{
"math_id": 44,
"text": "\\left|\\boldsymbol{-}\\right\\rangle"
},
{
"math_id": 45,
"text": "\\begin{align}\n H(|0\\rangle) &= \\frac{1}{\\sqrt 2}|0\\rangle + \\frac{1}{\\sqrt{2}}|1\\rangle =: |+\\rangle\\\\\n H(|1\\rangle) &= \\frac{1}{\\sqrt 2}|0\\rangle - \\frac{1}{\\sqrt{2}}|1\\rangle =: |-\\rangle\\\\\n H(|+\\rangle) &= H\\left( \\frac{1}{\\sqrt 2}|0\\rangle + \\frac{1}{\\sqrt{2}}|1\\rangle \\right) = \\frac{1}{2}\\Big( |0\\rangle + |1\\rangle\\Big) + \\frac{1}{2}\\Big(|0\\rangle - |1\\rangle\\Big) = |0\\rangle\\\\\n H(|-\\rangle) &= H\\left( \\frac{1}{\\sqrt 2}|0\\rangle - \\frac{1}{\\sqrt{2}}|1\\rangle \\right) = \\frac{1}{2}\\Big( |0\\rangle + |1\\rangle\\Big) - \\frac{1}{2}\\Big(|0\\rangle - |1\\rangle\\Big) = |1\\rangle\n\\end{align}"
},
{
"math_id": 46,
"text": "\\log_2 N "
},
{
"math_id": 47,
"text": "N \\log_2 N"
},
{
"math_id": 48,
"text": "|0\\rangle"
},
{
"math_id": 49,
"text": "N"
},
{
"math_id": 50,
"text": "N = 2^n"
},
{
"math_id": 51,
"text": "H_n = \\underbrace{H \\otimes H \\otimes \\ldots \\otimes H}_{n \\text{ times}}"
},
{
"math_id": 52,
"text": " H_{n} |0\\rangle^{\\otimes n} = \\frac{1}{\\sqrt{2^n}} \\sum_{j=0}^{2^n-1} |j\\rangle"
},
{
"math_id": 53,
"text": "|N-1\\rangle"
},
{
"math_id": 54,
"text": "|0 \\rangle"
},
{
"math_id": 55,
"text": " |0 \\rangle , |1 \\rangle "
},
{
"math_id": 56,
"text": "(\\Z / 2 \\Z)^n"
},
{
"math_id": 57,
"text": "\\Z /2^n \\Z"
},
{
"math_id": 58,
"text": "N "
},
{
"math_id": 59,
"text": " |\\Psi\\rangle = \\frac{1}{\\sqrt{N}} \\sum_{j=0}^{N-1} |j\\rangle "
},
{
"math_id": 60,
"text": " O(\\log_2 N)"
},
{
"math_id": 61,
"text": " n = \\lceil \\log_2 N \\rceil"
},
{
"math_id": 62,
"text": " |\\Psi\\rangle "
},
{
"math_id": 63,
"text": "\\gamma(T)"
},
{
"math_id": 64,
"text": "T"
},
{
"math_id": 65,
"text": "s(T)"
},
{
"math_id": 66,
"text": "\\gamma(T) = H^{-1}(\\ln(Hs(T)))"
},
{
"math_id": 67,
"text": "H"
},
{
"math_id": 68,
"text": "\\begin{align}\n r &= H s(T) \\\\\n \\rho &= \\ln r \\\\\n \\gamma(T) &= H^{-1}\\rho\n\\end{align}"
},
{
"math_id": 69,
"text": "s(T)=H^{-1}(\\exp(H\\gamma(T)))"
},
{
"math_id": 70,
"text": "s(T)=H^{-1}\\rho"
}
] |
https://en.wikipedia.org/wiki?curid=634765
|
634780
|
Bohr compactification
|
In mathematics, the Bohr compactification of a topological group "G" is a compact Hausdorff topological group "H" that may be canonically associated to "G". Its importance lies in the reduction of the theory of uniformly almost periodic functions on "G" to the theory of continuous functions on "H". The concept is named after Harald Bohr, who pioneered the study of almost periodic functions on the real line.
Definitions and basic properties.
Given a topological group "G", the Bohr compactification of "G" is a compact "Hausdorff" topological group Bohr("G") and a continuous homomorphism
b: "G" → Bohr("G")
which is universal with respect to homomorphisms into compact Hausdorff groups; this means that if "K" is another compact Hausdorff topological group and
"f": "G" → "K"
is a continuous homomorphism, then there is a unique continuous homomorphism
Bohr("f"): Bohr("G") → "K"
such that "f" = Bohr("f") ∘ b.
Theorem. The Bohr compactification exists and is unique up to isomorphism.
We will denote the Bohr compactification of "G" by Bohr("G") and the canonical map by
formula_0
The correspondence "G" ↦ Bohr("G") defines a covariant functor on the category of topological groups and continuous homomorphisms.
The Bohr compactification is intimately connected to the finite-dimensional unitary representation theory of a topological group. The kernel of b consists exactly of those elements of "G" which cannot be separated from the identity of "G" by finite-dimensional "unitary" representations.
The Bohr compactification also reduces many problems in the theory of almost periodic functions on topological groups to that of functions on compact groups.
A bounded continuous complex-valued function "f" on a topological group "G" is uniformly almost periodic if and only if the set of right translates "g""f" where
formula_1
is relatively compact in the uniform topology as "g" varies through "G".
Theorem. A bounded continuous complex-valued function "f" on "G" is uniformly almost periodic if and only if there is a continuous function "f"1 on Bohr("G") (which is uniquely determined) such that
formula_2
Maximally almost periodic groups.
Topological groups for which the Bohr compactification mapping is injective are called "maximally almost periodic" (or MAP groups). For example, all Abelian groups, all compact groups, and all free groups are MAP. In the case "G" is a locally compact connected group, MAP groups are completely characterized: They are precisely products of compact groups with vector groups of finite dimension.
References.
Notes.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathbf{b}: G \\rightarrow \\mathbf{Bohr}(G). "
},
{
"math_id": 1,
"text": " [{}_g f ] (x) = f(g^{-1} \\cdot x) "
},
{
"math_id": 2,
"text": " f = f_1 \\circ \\mathbf{b}. "
}
] |
https://en.wikipedia.org/wiki?curid=634780
|
6347835
|
Relaxation (approximation)
|
In mathematical optimization and related fields, relaxation is a modeling strategy. A relaxation is an approximation of a difficult problem by a nearby problem that is easier to solve. A solution of the relaxed problem provides information about the original problem.
For example, a linear programming relaxation of an integer programming problem removes the integrality constraint and so allows non-integer rational solutions. A Lagrangian relaxation of a complicated problem in combinatorial optimization penalizes violations of some constraints, allowing an easier relaxed problem to be solved. Relaxation techniques complement or supplement branch and bound algorithms of combinatorial optimization; linear programming and Lagrangian relaxations are used to obtain bounds in branch-and-bound algorithms for integer programming.
The modeling strategy of relaxation should not be confused with iterative methods of relaxation, such as successive over-relaxation (SOR); iterative methods of relaxation are used in solving problems in differential equations, linear least-squares, and linear programming. However, iterative methods of relaxation have been used to solve Lagrangian relaxations.
Definition.
A "relaxation" of the minimization problem
formula_0
is another minimization problem of the form
formula_1
with these two properties
The first property states that the original problem's feasible domain is a subset of the relaxed problem's feasible domain. The second property states that the original problem's objective-function is greater than or equal to the relaxed problem's objective-function.
Properties.
If formula_5 is an optimal solution of the original problem, then formula_6 and formula_7. Therefore, formula_8 provides an upper bound on formula_9.
If in addition to the previous assumptions, formula_10, formula_11, the following holds: If an optimal solution for the relaxed problem is feasible for the original problem, then it is optimal for the original problem.
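A toy numerical illustration of these definitions and of the bound (the objective and feasible sets are made up): the relaxed problem keeps the same objective but enlarges the feasible set, so its optimum can only be smaller or equal.

```python
def c(x):
    return (x - 2.6) ** 2                       # toy objective, with c_R = c here

X = [x for x in range(6) if x % 2 == 0]         # original feasible set
X_R = list(range(6))                            # relaxed feasible set, a superset of X

z = min(c(x) for x in X)                        # original optimum
z_R = min(c(x) for x in X_R)                    # relaxed optimum
assert z_R <= z                                 # the relaxation bounds the original problem
print(z, z_R)                                   # approximately 0.36 and 0.16
```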
Notes.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "z = \\min \\{c(x) : x \\in X \\subseteq \\mathbf{R}^{n}\\}"
},
{
"math_id": 1,
"text": "z_R = \\min \\{c_R(x) : x \\in X_R \\subseteq \\mathbf{R}^{n}\\}"
},
{
"math_id": 2,
"text": "X_R \\supseteq X"
},
{
"math_id": 3,
"text": "c_R(x) \\leq c(x)"
},
{
"math_id": 4,
"text": "x \\in X"
},
{
"math_id": 5,
"text": "x^*"
},
{
"math_id": 6,
"text": "x^* \\in X \\subseteq X_R"
},
{
"math_id": 7,
"text": "z = c(x^*) \\geq c_R(x^*)\\geq z_R"
},
{
"math_id": 8,
"text": "x^* \\in X_R"
},
{
"math_id": 9,
"text": "z_R"
},
{
"math_id": 10,
"text": "c_R(x)=c(x)"
},
{
"math_id": 11,
"text": "\\forall x\\in X"
}
] |
https://en.wikipedia.org/wiki?curid=6347835
|
634785
|
Dissipation factor
|
Measure of loss-rate of energy of a mode of oscillation in a dissipative system
In physics, the dissipation factor (DF) is a measure of loss-rate of energy of a mode of oscillation (mechanical, electrical, or electromechanical) in a dissipative system. It is the reciprocal of quality factor, which represents the "quality" or durability of oscillation.
Explanation.
Electrical potential energy is dissipated in all dielectric materials, usually in the form of heat. In a capacitor made of a dielectric placed between conductors, the typical lumped element model includes a lossless ideal capacitor in series with a resistor termed the equivalent series resistance (ESR), as shown below. The ESR represents losses in the capacitor. In a good capacitor the ESR is very small, and in a poor capacitor the ESR is large. However, some applications require the ESR to be no smaller than a specified minimum value. Note that the ESR is "not" simply the resistance that would be measured across a capacitor by an ohmmeter. The ESR is a derived quantity with physical origins in both the dielectric's conduction electrons and dipole relaxation phenomena. In a dielectric, either the conduction electrons or the dipole relaxation typically dominates the loss. For the case where the conduction electrons are the dominant loss, then
formula_0
where
If the capacitor is used in an AC circuit, the dissipation factor due to the non-ideal capacitor is expressed as the ratio of the resistive power loss in the ESR to the reactive power oscillating in the capacitor, or
formula_5
When representing the electrical circuit parameters as vectors in a complex plane, known as phasors, a capacitor's dissipation factor is equal to the tangent of the angle between the capacitor's impedance vector and the negative reactive axis, as shown in the adjacent diagram. This gives rise to the parameter known as the loss tangent tan "δ" where
formula_6
Alternatively, formula_7 can be derived from frequency at which loss tangent was determined and capacitance
formula_8
Since the formula_9 in a good capacitor is usually small, formula_10, and formula_9 is often expressed as a percentage.
formula_9 approximates to the power factor when formula_7 is far less than formula_11, which is usually the case.
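A small numerical sketch of these relations (the component values are made up):

```python
import math

def dissipation_factor(esr_ohm, capacitance_f, frequency_hz):
    """DF = tan(delta) = omega * C * ESR, the ratio of resistive to reactive power."""
    omega = 2 * math.pi * frequency_hz
    return omega * capacitance_f * esr_ohm

def esr_from_loss_tangent(tan_delta, capacitance_f, frequency_hz):
    """ESR = tan(delta) / (omega * C), the inverse relation."""
    omega = 2 * math.pi * frequency_hz
    return tan_delta / (omega * capacitance_f)

# Example (made-up values): a 100 nF capacitor with 0.5 ohm ESR measured at 10 kHz.
df = dissipation_factor(0.5, 100e-9, 10e3)
print(f"DF = {df:.4%}, Q = {1/df:.0f}")
```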
formula_9 will vary depending on the dielectric material and the frequency of the electrical signals. In low dielectric constant (low-κ), temperature compensating ceramics, formula_9 of 0.1–0.2% is typical. In high dielectric constant ceramics, formula_9 can be 1–2%. However, lower formula_9 is usually an indication of quality capacitors when comparing similar dielectric material.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\text{ESR} = \\frac{\\sigma}{\\varepsilon \\omega^2 C} "
},
{
"math_id": 1,
"text": " \\sigma "
},
{
"math_id": 2,
"text": " \\varepsilon "
},
{
"math_id": 3,
"text": " \\omega = 2\\pi f"
},
{
"math_id": 4,
"text": " C "
},
{
"math_id": 5,
"text": " \\text{DF} = \\frac{i^2 \\text{ESR}}{i^2 \\left|X_c\\right|} = \\omega C\\, \\text{ESR} = \\frac{\\sigma}{\\varepsilon\\omega} = \\frac{1}{Q} "
},
{
"math_id": 6,
"text": " \\frac{1}{Q} = \\tan(\\delta) = \\frac{\\text{ESR}}{\\left|X_c\\right|} = \\text{DF} "
},
{
"math_id": 7,
"text": "\\text{ESR}"
},
{
"math_id": 8,
"text": " \\text{ESR} = \\frac{1}{\\omega C}\\tan(\\delta) "
},
{
"math_id": 9,
"text": "\\text{DF}"
},
{
"math_id": 10,
"text": "\\delta \\sim \\text{DF}"
},
{
"math_id": 11,
"text": "X_c"
}
] |
https://en.wikipedia.org/wiki?curid=634785
|
6348084
|
Lagrangian relaxation
|
In the field of mathematical optimization, Lagrangian relaxation is a relaxation method which approximates a difficult problem of constrained optimization by a simpler problem. A solution to the relaxed problem is an approximate solution to the original problem, and provides useful information.
The method penalizes violations of inequality constraints using a Lagrange multiplier, which imposes a cost on violations. These added costs are used instead of the strict inequality constraints in the optimization. In practice, this relaxed problem can often be solved more easily than the original problem.
The problem of maximizing the Lagrangian function of the dual variables (the Lagrangian multipliers) is the Lagrangian dual problem.
Mathematical description.
Suppose we are given a linear programming problem, with formula_0 and formula_1, of the following form:
If we split the constraints in formula_2 such that formula_3,
formula_4 and formula_5 we may write the system:
We may introduce the constraint (2) into the objective:
If we let formula_6 be nonnegative weights, we get penalized if we violate the constraint (2), and we are also rewarded if we satisfy the constraint strictly. The above system is called the Lagrangian relaxation of our original problem.
The LR solution as a bound.
Of particular use is the property that for any fixed set of formula_7 values, the optimal result to the Lagrangian relaxation problem will be no smaller than the optimal result to the original problem. To see this, let formula_8 be the optimal solution to the original problem, and let formula_9 be the optimal solution to the Lagrangian relaxation. We can then see that
The first inequality is true because formula_8 is feasible in the original problem and the second inequality is true because formula_9 is the optimal solution to the Lagrangian relaxation.
Iterating towards a solution of the original problem.
The above inequality tells us that if we minimize the maximum value we obtain from the relaxed problem, we obtain a tighter limit on the objective value of our original problem. Thus we can address the original problem by instead exploring the partially dualized problem
where we define formula_10 as
A Lagrangian relaxation algorithm thus proceeds to explore the range of feasible formula_11 values while seeking to minimize the result returned by the inner formula_12 problem. Each value returned by formula_12 is a candidate upper bound to the problem, the smallest of which is kept as the best upper bound. If we additionally employ a heuristic, probably seeded by the formula_9 values returned by formula_12, to find feasible solutions to the original problem, then we can iterate until the best upper bound and the cost of the best feasible solution converge to a desired tolerance.
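A toy sketch of such an iteration for a small binary maximization problem (all data below are made up): one budget constraint is dualized, the inner problem is solved by enumeration, and a projected subgradient step with a diminishing step size keeps the multipliers nonnegative while the best (smallest) bound seen so far is recorded.

```python
import itertools

# Toy Lagrangian relaxation of  max c.x  subject to  A1 x <= b1 (kept) and
# A2 x <= b2 (dualized), with binary x.
c = [6, 5, 4]
A1, b1 = [[1, 1, 1]], [2]          # kept constraint: choose at most two items
A2, b2 = [[5, 4, 3]], [8]          # dualized budget constraint

def inner(lmbda):
    """P(lambda): maximize c.x + lambda.(b2 - A2 x) over x satisfying A1 x <= b1."""
    best_val, best_x = float("-inf"), None
    for x in itertools.product([0, 1], repeat=len(c)):
        if all(sum(a * xi for a, xi in zip(row, x)) <= b for row, b in zip(A1, b1)):
            val = sum(ci * xi for ci, xi in zip(c, x)) + sum(
                l * (b - sum(a * xi for a, xi in zip(row, x)))
                for l, row, b in zip(lmbda, A2, b2))
            if val > best_val:
                best_val, best_x = val, x
    return best_val, best_x

lmbda, best_bound = [0.0], float("inf")
for t in range(1, 51):                              # subgradient iterations
    bound, x = inner(lmbda)
    best_bound = min(best_bound, bound)             # tightest upper bound so far
    subgrad = [b - sum(a * xi for a, xi in zip(row, x)) for row, b in zip(A2, b2)]
    step = 1.0 / t                                  # diminishing step size
    lmbda = [max(0.0, l - step * g) for l, g in zip(lmbda, subgrad)]  # projected update

print(best_bound)   # an upper bound on the original optimum (the true optimum here is 10)
```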
Related methods.
The augmented Lagrangian method is quite similar in spirit to the Lagrangian relaxation method, but adds an extra term, and updates the dual parameters formula_11 in a more principled manner. It was introduced in the 1970s and has been used extensively.
The penalty method does not use dual variables but rather removes the constraints and instead penalizes deviations from the constraint. The method is conceptually simple but usually augmented Lagrangian methods are preferred in practice since the penalty method suffers from ill-conditioning issues.
|
[
{
"math_id": 0,
"text": "x\\in \\mathbb{R}^n"
},
{
"math_id": 1,
"text": "A\\in \\mathbb{R}^{m,n}"
},
{
"math_id": 2,
"text": "A"
},
{
"math_id": 3,
"text": "A_1\\in \\mathbb{R}^{m_1,n}"
},
{
"math_id": 4,
"text": "A_2\\in \\mathbb{R}^{m_2,n}"
},
{
"math_id": 5,
"text": "m_1+m_2=m"
},
{
"math_id": 6,
"text": "\\lambda=(\\lambda_1,\\ldots,\\lambda_{m_2})"
},
{
"math_id": 7,
"text": "\\tilde{\\lambda} \\succeq 0"
},
{
"math_id": 8,
"text": "\\hat{x}"
},
{
"math_id": 9,
"text": "\\bar{x}"
},
{
"math_id": 10,
"text": "P(\\lambda)"
},
{
"math_id": 11,
"text": "\\lambda"
},
{
"math_id": 12,
"text": "P"
}
] |
https://en.wikipedia.org/wiki?curid=6348084
|
63490326
|
Kac's lemma
|
In ergodic theory, Kac's lemma, demonstrated by mathematician Mark Kac in 1947, states that in a measure space the orbit of almost every point contained in a set "formula_0" of such a space, whose measure is "formula_1", returns to "formula_0" within an average time inversely proportional to "formula_1".
The lemma extends the Poincaré recurrence theorem, which shows that those points return to "formula_0" infinitely many times.
Application.
In physics, a dynamical system evolving in time may be described in a phase space, that is, by the evolution in time of some variables. If these variables are bounded, that is, each has a minimum and a maximum, then by a theorem due to Liouville a measure can be defined on the space, yielding a measure space in which the lemma applies. As a consequence, given a configuration of the system (a point in the phase space), the average return period close to this configuration (in the neighbourhood of the point) is inversely proportional to the considered size of the volume surrounding the configuration.
Normalizing the total measure to 1, the measure space becomes a probability space and the measure "formula_2" of a set "formula_0" represents the probability of finding the system in the states represented by the points of that set. In this case the lemma implies that the smaller the probability of being in a certain state (or close to it), the longer the time of return near that state.
In formulas, if "formula_0" is the region close to the starting point and formula_3 is the return period, its average value is:
formula_4
where formula_5 is a characteristic time of the system in question.
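A numerical illustration under simple assumptions: the irrational rotation of the circle preserves Lebesgue measure and is ergodic, so with each iteration counted as one unit of the characteristic time, the empirical mean return time to an interval of measure 0.05 should be close to 1/0.05 = 20. The angle, set size, and iteration count below are arbitrary choices.

```python
import math

# The irrational rotation x -> x + alpha (mod 1) preserves Lebesgue measure on
# [0, 1) and is ergodic, so Kac's lemma applies to the set A = [0, mu).
alpha = math.sqrt(2) - 1          # an irrational rotation angle
mu = 0.05                         # measure (probability) of A

x, gaps, last_visit = 0.0, [], None
for t in range(200_000):
    if x < mu:                    # the orbit is currently inside A
        if last_visit is not None:
            gaps.append(t - last_visit)   # return time to A, in steps
        last_visit = t
    x = (x + alpha) % 1.0

print(sum(gaps) / len(gaps), 1 / mu)      # empirical mean return time vs. 1/P(A)
```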
Note that since the volume of "formula_0", and therefore "formula_2", depends exponentially on the number of variables "formula_6" in the system (formula_7, with formula_8 the infinitesimal side, smaller than 1, of the volume in "formula_6" dimensions), "formula_2" decreases very rapidly as the number of variables in the system increases, and consequently the return period increases exponentially.
In practice, as the variables needed to describe the system increase, the return period increases rapidly.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "\\mu(A)"
},
{
"math_id": 2,
"text": "P(A)"
},
{
"math_id": 3,
"text": "T_R"
},
{
"math_id": 4,
"text": "\\langle T_R \\rangle = \\tau/P(A)"
},
{
"math_id": 5,
"text": "\\tau"
},
{
"math_id": 6,
"text": "n"
},
{
"math_id": 7,
"text": "A = \\epsilon ^n"
},
{
"math_id": 8,
"text": "\\epsilon"
}
] |
https://en.wikipedia.org/wiki?curid=63490326
|
63491
|
Bill James
|
American baseball writer and statistician
George William James (born October 5, 1949) is an American baseball writer, historian, and statistician whose work has been widely influential. Since 1977, James has written more than two dozen books about baseball history and statistics. His approach, which he named sabermetrics after the Society for American Baseball Research (SABR), scientifically analyzes and studies baseball, often through the use of statistical data, in an attempt to determine why teams win and lose.
In 2006, "Time" named him in the "Time" 100 as one of the most influential people in the world. In 2003, James was hired as senior advisor on Baseball Operations for the Boston Red Sox and worked for the team for 17 years during which they won four World Series championships.
Early life.
James was born in Holton, Kansas. He joined the United States Army in 1971. After his service, he graduated from the University of Kansas in 1973 with degrees in English and economics, and in 1975 with a degree in education.
Career.
"The Bill James Baseball Abstract"s.
An aspiring writer and obsessive fan, James began writing baseball articles after leaving the United States Army in his mid-twenties. Many of his first baseball writings came while he was doing night shifts as a security guard at the Stokely-Van Camp's pork and beans cannery. Unlike most writers, his pieces did not recount games in epic terms or offer insights gleaned from interviews with players. A typical James piece posed a question ("e.g.," "Which pitchers and catchers allow runners to steal the most bases?"), and then presented data and analysis that offered an answer.
Editors considered James's pieces so unusual that few believed them suitable for their readers. In an effort to reach a wider audience, James began self-publishing an annual book titled "The Bill James Baseball Abstract", beginning in 1977. The first edition, titled "1977 Baseball Abstract: Featuring 18 categories of statistical information that you just can't find anywhere else", presented 68 pages of in-depth statistics compiled from James's study of box scores from the preceding season and was offered for sale through a small advertisement in "The Sporting News". Seventy-five people purchased the booklet. The 1978 edition, subtitled "The 2nd annual edition of baseball's most informative and imaginative review", sold 250 copies. Beginning in 1979, James wrote an annual preview of the baseball season for "Esquire", and continued to do so through 1984.
The first three editions of the "Baseball Abstract" garnered respect for James's work, including a very favorable review by Daniel Okrent in "Sports Illustrated". New annual editions added essays on teams and players. By 1982 sales had increased tenfold, and a media conglomerate agreed to publish and distribute future editions.
While writers had published books about baseball statistics before (most notably Earnshaw Cook's "Percentage Baseball", in the 1960s), few had ever reached a mass audience. Attempts to imitate James's work spawned a flood of books and articles that continues to this day.
Post-"Abstract"s work.
In 1988, James ceased writing the "Abstract", citing workload-related burnout and concern about the volume of statistics on the market. He has continued to publish hardcover books about baseball history, which have sold well and received admiring reviews. These books include three editions of "The Bill James Historical Baseball Abstract" (1985, 1988, 2001, the last entitled "The New Bill James Historical Baseball Abstract").
James has also written several series of new annuals:
In 2008, James launched Bill James Online. Subscribers could read James's new, original writing and interact with one another—as well as with James—in a question-and-answer format. The web site also offered new "profiles" of teams and players full of facts and statistics that hoped to map what James has termed "the lost island of baseball statistics". On June 9, 2023, James wrote an article for the site announcing that it would soon be closed in order for James to "focus on other projects".
STATS, Inc..
In an essay published in the 1984 "Abstract", James vented his frustration about Major League Baseball's refusal to publish play-by-play accounts of every game. James proposed the creation of Project Scoresheet, a network of fans that would work together to collect and distribute this information.
While the resulting non-profit organization never functioned smoothly, it worked well enough to collect accounts of every game from 1984 through 1991. James's publisher agreed to distribute two annuals of essays and data—the 1987 and 1988 editions of "Bill James Presents The Great American Baseball Statbook" (though only the first of these featured writing by James).
The organization was eventually disbanded, but many of its members went on to form for-profit companies with similar goals and structure. STATS, Inc., the company James joined, provided data and analysis to every major media outlet before being acquired by Fox Sports in 2001.
Innovations.
Among the statistical innovations attributable to James are:
Although James may be best known as an inventor of statistical tools, he has often written on the limitations of statistics and urged humility concerning their place amid other kinds of information about baseball. To James, context is paramount: he was among the first to emphasize the importance of adjusting traditional statistics for park factors and to stress the role of luck in a pitcher's win–loss record. Many of his statistical innovations are arguably less important than the underlying ideas. When he introduced the notion of secondary average, it was as a vehicle for the then-counterintuitive concept that batting average represents only a fraction of a player's offensive contribution. (The runs-created statistic plays a similar role vis-à-vis the traditional RBI.) Some of his contributions to the language of baseball, like the idea of the "defensive spectrum", border on being entirely non-statistical.
Acceptance and employment in mainstream baseball.
Oakland Athletics general manager Billy Beane began applying sabermetric principles to running his low-budget team in the early 2000s, to notable effect, as chronicled in Michael Lewis' book "".
In 2003, James was hired by a former reader, John Henry, the new owner of the Boston Red Sox.
One point of controversy was in handling the relief pitching of the Red Sox. James had previously published analysis of the use of the closer in baseball, and had concluded that the traditional use of the closer both overrated the abilities of that individual and used him in suboptimal circumstances. He wrote that it is "far better to use your relief ace when the score is tied, even if that is the seventh inning, than in the ninth inning with a lead of two or more runs." The Red Sox in 2003 staffed their bullpen with several marginally talented relievers. Red Sox manager Grady Little was never fully comfortable with the setup, and designated unofficial closers and reshuffled roles after a bad outing. When Boston lost a number of games due to bullpen failures, Little reverted to a traditional closer approach and moved Byung-hyun Kim from being a starting pitcher to a closer. The Red Sox did not follow James's idea of a bullpen with no closer, but with consistent overall talent that would allow the responsibilities to be shared. Red Sox reliever Alan Embree thought the plan could have worked if the bullpen had not suffered injuries. During the 2004 regular season Keith Foulke was used primarily as a closer in the conventional model; however, Foulke's usage in the 2004 postseason was along the lines of a relief ace with multiple inning appearances at pivotal times of the game. Houston Astros manager Phil Garner also employed a relief ace model with his use of Brad Lidge in the 2004 postseason.
During his tenure with the Red Sox, James published several new sabermetric books (see #Bibliography below). Indeed, although James was typically tight-lipped about his activities on behalf of the Red Sox, he is credited with advocating some of the moves that led to the team's first World Series championship in 86 years, including the signing of non-tendered free agent David Ortiz, the trade for Mark Bellhorn, and the team's increased emphasis on on-base percentage.
After the Red Sox suffered through a disastrous 2012 season, Henry stated that James had fallen "out of favor [in the front office] over the last few years for reasons I really don't understand. We've gotten him more involved recently in the central process and that will help greatly."
On October 24, 2019, James announced his retirement from the Red Sox, saying that he had "fallen out of step with the organization" and added that he hadn't earned his paycheck with the Red Sox for the last couple of years. During his time with the team, Bill James received four World Series rings for the team's 2004, 2007, 2013, and 2018 World Series titles.
Other writing.
James has written two true crime books, "Popular Crime: Reflections on the Celebration of Violence" (2011) and - together with his daughter Rachel McCarthy James - "The Man from the Train" (2017). The latter is an attempt to link scores of murders of entire families in the early 20th century United States to a single perpetrator. Those murders include the Villisca axe murders. The Jameses propose a solution to the murders based on the signature elements these killings share with each other.
James is a fan of the University of Kansas men's basketball team and has written about basketball. He has created a formula for what he calls a "safe lead" in the sport.
In culture.
Michael Lewis, in his 2003 book "", dedicates a chapter to James's career and sabermetrics as background for his portrayal of Billy Beane and the Oakland Athletics' unlikely success.
James was inducted into the Baseball Reliquary's Shrine of the Eternals in 2007.
James was profiled on "60 Minutes" on March 30, 2008, in his role as a sabermetric pioneer and Red Sox advisor. In 2010, he was inducted into the Irish American Baseball Hall of Fame.
James made a guest appearance on "The Simpsons" 2010 episode "MoneyBART". He claimed "I've made baseball as fun as doing your taxes."
Steven Soderbergh's planned film adaptation of "" would have featured an animated version of James as a "host". This script was discarded when director Bennett Miller and writer Aaron Sorkin succeeded Soderbergh on the project. Ultimately, the 2011 film mentions James several times. His bio is briefly recapped, and Billy Beane is depicted telling John Henry that Henry's hiring of James is the reason Beane is interested in the Red Sox general manager job.
Controversies.
Dowd Report controversy.
In his "Baseball Book 1990", James heavily criticized the methodology of the Dowd Report, which was an investigation (commissioned by baseball commissioner Bart Giamatti) on the gambling activities of Pete Rose. James reproached commissioner Giamatti and his successor, Fay Vincent, for their acceptance of the Dowd Report as the final word on Rose's gambling. (James's attitude on the matter surprised many fans, especially after the writer had been deeply critical of Rose in the past, especially what James considered to be Rose's selfish pursuit of Ty Cobb's all-time record for base hits.)
James expanded his defense of Rose in his 2001 book "The New Historical Baseball Abstract", with a detailed explanation of why he found the case against Rose flimsy. James wrote "I would characterize the evidence that Rose bet on baseball as...well, not quite non-existent. It is extremely weak." This countered the popular opinion that the case against Rose was a slam dunk, and several critics claimed that James misstated some of the evidence in his defense of Rose. Derek Zumsteg of Baseball Prospectus wrote an exhaustive review of the case James made and concluded: "James' defense of Rose is filled with oversights, errors in judgment, failures in research, and is a great disservice to the many people who have looked to him for a balanced and fair take on this complicated and important issue."
In 2004, Rose admitted publicly that he had bet on baseball and confirmed the Dowd Report was correct. James remained steadfast, continuing to insist that the evidence available to Dowd at the time was insufficient to reach the conclusion that it did.
Paterno controversy.
On November 4, 2011, Jerry Sandusky was indicted for committing sex crimes against young boys, which brought the Penn State child sex abuse scandal to national attention. On December 11, 2011, James published an article called "The Trial of Penn State", depicting an imaginary trial in which Penn State defended itself against charges of "acting rashly and irresponsibly in the matter of Joe Paterno, in such a manner that [they] defamed, libeled and slandered Paterno, unfairly demolishing his reputation."
On July 12, 2012, the Freeh report was released, charging Paterno and three other University officials with covering up reports of sexual assaults and enabling the attacker to prey on other children for more than a decade, often in Penn State facilities. Soon afterwards, during an interview on ESPN radio, James claimed that the Freeh report's characterizations of Paterno as a powerful figure were wrong, and that it was not Paterno's responsibility to report allegations of child molestation to the police. "[Paterno] had very few allies. He was isolated and he was not nearly as powerful as people imagine him to have been." When asked if he knew anyone who had showered with a boy they were not related to, James said it was a common practice when he was growing up. "That was actually quite common in the town I grew up in. That was quite common in America 40 years ago."
The July 2012 interview comments were widely criticized. Rob Neyer wrote in defense of James. James's employer, the Boston Red Sox, issued a statement disavowing the comments James made and saying that he had been asked not to make further public comments on the matter.
"Replaceable players" controversy.
On November 7, 2018, James participated in a Twitter conversation regarding comments made by agent Scott Boras about teams "tanking". James wrote:
"If the players all retired tomorrow, we would replace them, the game would go on; in three years it would make no difference whatsoever. The players are NOT the game, any more than the beer vendors are."
This was arguably consistent with thoughts James had publicly expressed prior to his affiliation with the Red Sox. In an article in "The 1988 Bill James Baseball Abstract", he had written:
"This nation could support, without any detectable loss of player quality, at a very, very minimum, 200 major league teams."
Nonetheless, in the context of James's association with the Red Sox front office and baseball's checkered labor history (including alleged collusion amongst the owners in the previous offseason to curb free agent salaries), the tweets were taken by many as inflammatory. Major League Baseball Players Association executive director Tony Clark called James's comments "reckless and insulting". Other active or former players also objected. James told the "New York Times":
"I don't know that the idea that the game endures and we're all just passing through it is inherently an offensive idea. But if I phrased it in an offensive way, that was not my intention."
The Red Sox responded by issuing a statement saying:
"Bill James is a consultant to the Red Sox. He is not an employee, nor does he speak for the club. His comments on Twitter were inappropriate and do not reflect the opinions of the Red Sox front office or its ownership group. Our Championships would not have been possible without our incredibly talented players — they are the backbone of our franchise and our industry. To insinuate otherwise is absurd."
Personal life.
James married Susan McCarthy in 1978. They have three children.
In January 2024, James announced that he had suffered a stroke, which hampered the use of his right hand.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "RC = \\frac{(H+BB) \\times TB}{AB+BB}"
},
{
"math_id": 1,
"text": "RF = \\frac{9 \\times (A + PO)}\\text{Innings}"
},
{
"math_id": 2,
"text": "DER = 1 - \\frac{H + ROE - HR}{PA - K - BB - HBP - HR}"
},
{
"math_id": 3,
"text": "\\mathrm{Pythagorean ~ W\\%} = \\frac{R^2}{R^2+RA^2}"
},
{
"math_id": 4,
"text": "\\mathrm{SecA} = \\frac{BB+(TB-H) + (SB-CS)}{AB} = \\frac{BB + (SB-CS)}{AB} + ISO"
},
{
"math_id": 5,
"text": " \\mathrm{PSN} = \\frac{2 \\times HR \\times SB}{HR + SB}"
}
] |
https://en.wikipedia.org/wiki?curid=63491
|
63492344
|
Mathias Spahlinger
|
German composer
Mathias Spahlinger (born 15 October 1944 in Frankfurt) is a German composer. His work occupies a field of tension between highly diverse musical influences and styles: between Renaissance music and jazz, between musique concrète and Webernian minimalism, between noise, improvisation and notation, and between aesthetic autonomy and political consciousness. Spahlinger's works carry out conflicts for which there are no fixed models.
Life.
His father, a cellist, taught him fiddle, viola da gamba and recorder from 1951, and later cello. In 1952 he began piano lessons. In 1959 he began to study jazz intensively, took saxophone lessons and wanted to become a jazz musician. In 1962 he left school and took an apprenticeship as a typesetter. During this time he took private composition lessons with Konrad Lechner. After finishing his apprenticeship he continued his studies with Lechner at the Akademie für Tonkunst in Darmstadt. In 1968 he became a teacher at the Stuttgart Music School for piano, theory, early musical education and experimental music. From 1973 to 1977 he studied composition with Erhard Karkoschka at the Musikhochschule Stuttgart. In 1978 he became a guest lecturer for music theory at the Berlin University of the Arts, and in 1984 professor of composition and music theory at the University of Music Karlsruhe. From 1990 he was professor of composition and director of the Institute for New Music at the Freiburg University of Music. Since 1996 he has been a member of the Academy of Arts, Berlin. In 2012 he declined the Ernst von Siemens Musikstiftung grant for his commissioned composition "off" (1993/2011) for the Swiss festival "usinesonore".
Publications.
chronological
Secondary literature.
Chronological
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "R{\\overset{a}\\underset{i}{ou}}{}_{FF}^{GH}"
}
] |
https://en.wikipedia.org/wiki?curid=63492344
|
63494499
|
COVID-19 pandemic in England
|
The COVID-19 pandemic was first confirmed to have spread to England with two cases among Chinese nationals staying in a hotel in York on 31 January 2020. The two main public bodies responsible for health in England were NHS England and Public Health England (PHE).
NHS England oversees the budget, planning, delivery and day-to-day operation of the commissioning side of the NHS in England, while PHE's mission is "to protect and improve the nation's health and to address inequalities". As of 14 September 2021, there have been 6,237,505 total cases and 117,955 deaths in England. In January 2021, it was estimated that around 22% of people in England had had COVID-19.
Healthcare in Scotland, Wales and Northern Ireland is administered by the devolved governments, but there is no devolved government for England and so healthcare is the direct responsibility of the UK Government. As a result of each country having different policies and priorities, a variety of differences now exist between these systems.
Timeline.
September 2019 – January 2020 : Suspected cases.
In May 2020, the BBC reported that several members of a choir in Yorkshire had developed COVID-19-like symptoms shortly after the partner of one of the choir members returned from a business trip to Wuhan, China, on 17 or 18 December.
Earlier, in March 2020, it was reported that a 50-year-old man from East Sussex fell ill with COVID-19 symptoms on 20 January after he returned from Ischgl in Austria; the resort was under investigation because it allegedly failed to report possible cases. Three members of his family, two friends from Denmark and one from Minnesota, US had the same symptoms.
In June 2020, the BBC reported that COVID-19 in the UK had been found to have at least 1,356 separate origins, mostly from Italy (late February), Spain (early-to-mid-March), and France (mid-to-late-March). In the same month, it was reported that a 53-year-old woman fell ill on 6 January, two days after returning from the Obergurgl resort in Austria.
In August 2020, the Kent coroner reportedly certified that the death of Peter Attwood (aged 84) on 30 January had been related to COVID-19 ('COVID-19 infection and bronchopneumonia', according to an email on 3 September, after COVID-19 was detected in his lung tissue), making him the first confirmed England and UK death from the disease. He first showed symptoms on 15 December 2019. Attwood had not travelled abroad.
In November 2020, it was reported that a 66-year-old man had experienced symptoms of COVID-19 shortly after returning from holiday in Italy in September 2019, and his 44-year-old daughter had experienced similar symptoms. Scientists had previously speculated about COVID-19 in Italy as early as September 2019.
January 2020.
On 31 January, two members of a family of Chinese nationals staying in a hotel in York, one of whom studied at the University of York, became the first confirmed cases of COVID-19 in the UK. Upon confirmation, they were transferred from Hull University Teaching Hospital to a specialist isolation facility, a designated High Consequence Infectious Diseases Unit in Newcastle upon Tyne's Royal Victoria Infirmary.
On the same day, an evacuation flight from Wuhan landed at RAF Brize Norton and the passengers, none of whom were showing symptoms, were taken to quarantine, in a staff residential block at Arrowe Park Hospital on the Wirral. There had previously been contention over whether the government should assist the repatriation of UK passport holders from the most affected areas in China, or restrict travel from affected regions altogether. Some British nationals in Wuhan had been informed that they could be evacuated but any spouses or children with mainland Chinese passports could not. This was later overturned, but the delay meant that some people missed the flight.
February 2020.
On 6 February, a third confirmed case, a man who had recently travelled to Singapore prior to visiting a ski resort in the Haute-Savoie, France, was reported in Brighton. He had been the source of infection to six of his relatives during a stay in France, before returning to the UK on 28 January. Following confirmation of his result, the UK's CMOs expanded the number of countries where a history of previous travel associated with flu-like symptoms – such as fever, cough and difficulty breathing – in the previous 14 days would require self-isolation and calling NHS 111. These countries included China, Hong Kong, Japan, Macau, Malaysia, Republic of Korea, Singapore, Taiwan and Thailand.
On 10 February, the total number of cases in the UK reached eight as four further cases were confirmed in people linked to the affected man from Brighton. Globally, the virus had spread to 28 countries. On the morning of 10 February, the Secretary of State for Health and Social Care, Matt Hancock, announced The Health Protection (Coronavirus) Regulations 2020, to give public health professionals "strengthened powers" to keep affected people and those believed to be a possible risk of having the virus, in isolation. That day, the Arrowe Park Hospital, Merseyside, and the Kents Hill Park hotel and conference centre, Milton Keynes became designated isolation units. The following day, two of the eight confirmed cases in the UK were reported by BBC News to be general practitioners. A ninth case was confirmed in London on 11 February.
March 2020.
On 1 March, further cases were reported in Greater Manchester, some of them believed to be contacts of the case in Surrey who had no history of travel abroad.
On 2 March, four further people in England tested positive. All had recently travelled from Italy; they were from Hertfordshire, Devon and Kent. The total number of UK cases was reported as having reached 40, though this was revised to 39 after additional testing. The following day, when the number of confirmed cases in the UK stood at 51, the UK government unveiled their "Coronavirus Action Plan", which outlined what the UK had done already and what it planned to do next.
On 2 March, the first COVID-19 death occurred in a care home, but at that time care home data were not yet published.
On 3 March, the first three hospital deaths were reported in Nottingham, Essex, and Buckinghamshire.
On 15 March, the COVID-19 Hospitalisation in England Surveillance System (CHESS) was initiated across all NHS Trusts.
On 17 March, NHS England announced that all non-urgent operations would be postponed from 15 April to free up 30,000 beds. Additionally, many patients were discharged into care homes; this was initially thought to have caused significant infections, and consequently deaths, in care homes, but it is now believed that community infections were responsible. Also on 17 March, Chancellor Rishi Sunak announced that £330bn would be made available in loan guarantees for businesses affected by the pandemic.
By 18 March, over 1,000 patients were in hospital with COVID-19. This number rapidly grew and by 31 March exceeded 11,000. Admissions to hospital grew from less than 700 a day on 20 March to more than 2,400 a day by 31 March.
By 20 March, genome sequencing had identified ten viral lineages of COVID-19 in England (A, B, B1, B10, B10.2, B11, B12.1, B5, B8, B9). The research, which was at an early stage, concluded that the data were consistent with a large number of independent introductions into the UK, from places around the world, particularly Italy and other European countries. It was very likely that the true number of independent introductions was substantially higher.
By 31 March, England was the worst affected country in the United Kingdom with over 21,000 confirmed infections; in March there were around 4,500 deaths in hospital, but more than 6,700 patients who had recovered were discharged.
ONS data for England and Wales suggests that by 31 March, England had seen over 200 COVID-19 deaths in care homes and more than 200 deaths at home.
April 2020.
On 2 April the maximum number of hospital admissions in a day during the first wave was reached (around 3,000 patients); the number of daily hospital deaths from COVID-19 was now more than 600.
On 12 April, the number of patients in hospital, for the first wave, peaked at 18,974 and the number of daily admissions due to COVID-19 had reduced to less than 1,900; more than 700 COVID-19 hospital deaths were recorded.
Up to 24 April, ONS death registrations for England and Wales showed 19,643 had occurred in hospital, 5,890 in care homes, 1,306 in private homes and 301 in hospices. Of these deaths, 1,149 occurred in Wales.
On 29 April, the method of reporting deaths in England was changed. Data from three sources are now cross checked against the list of people who have had a diagnosis of COVID-19 confirmed by a Public Health England or NHS laboratory. The three sources are:
After checking, the records are merged into one database and duplicates removed so there is no double counting.
The new method of counting deaths results in higher numbers than the previous method. On 29 April, the total number of deaths reported by NHS England was 21,400. The new method identified 23,550 deaths of people who had a positive test result confirmed by a PHE or NHS laboratory.
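The cross-checking described above is, in effect, a merge-and-deduplicate step against the list of laboratory-confirmed cases. The sketch below illustrates that idea only; the record identifiers, field names and source lists are hypothetical and do not represent the actual PHE/NHS data structures.
<syntaxhighlight lang="python">
# Minimal sketch of the kind of cross-checking described above.
# All identifiers, fields and source lists here are invented for illustration.

confirmed_cases = {"NHS-0001", "NHS-0002", "NHS-0003"}  # people with a lab-confirmed diagnosis

source_a = [{"id": "NHS-0001", "date_of_death": "2020-04-20"}]
source_b = [{"id": "NHS-0001", "date_of_death": "2020-04-20"},
            {"id": "NHS-0002", "date_of_death": "2020-04-22"}]
source_c = [{"id": "NHS-0003", "date_of_death": "2020-04-25"}]

merged = {}
for record in source_a + source_b + source_c:
    # keep only deaths of people with a lab-confirmed diagnosis ...
    if record["id"] in confirmed_cases:
        # ... and drop duplicates so no death is counted twice
        merged.setdefault(record["id"], record)

print(len(merged))  # 3 distinct deaths, even though 4 records were reported
</syntaxhighlight>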
The number of patients with COVID-19 in hospital steadily reduced until, on 30 April, it was around 12,900; at least 54,100 patients with COVID-19 were admitted to hospital in April. The total number of hospital deaths from COVID-19 during April exceeded 17,500; these data suggest that around 36,000 patients who had the disease were discharged in April.
In April, the ITV News health and science editor Emily Morgan filmed inside the intensive care unit at the Royal Bournemouth Hospital in Dorset, showing critically ill coronavirus patients being treated. It was the first time cameras in the UK had been allowed to film inside an intensive care unit.
May 2020.
By 3 May, daily admissions to hospital had further reduced to around 1,000, while discharges continued to exceed admissions and thus the number of people in hospital was now around 10,500.
An app for the adult social care workforce in England was launched on 6 May to support workers during the COVID-19 outbreak. The Care Workforce app was developed by NHSX and the NHS Business Services Authority. The GMB union told members not to use the app, saying that managers could identify staff who had complained about pay, testing and personal protective equipment through a chat feature.
On 11 May, a set of COVID Alert Levels were published by the Government and many restrictions in England were eased; people who were unable to work from home were encouraged to return to work, but where possible avoid public transport.
On 12 May, the number of people in hospital fell below 10,000 and the total number of deaths in hospital since 1 March had grown to at least 24,500.
By 21 May, the number of hospital patients was below 8,000 and daily admissions were around 700.
On 21 May, the lockdown rules were amended in England to allow people to meet one other person from another household, outdoors, but to remain 2 m (6 ft) apart. Outdoor sports such as golf or tennis were allowed with members of the same household or with one other person from another household, while maintaining social distancing. Households were allowed to drive any distance in England to destinations such as parks and beaches, but not to Wales or Scotland.
On 27 May, Matt Hancock announced NHS Test and Trace would begin operations the following day.
The number of patients in hospital with COVID-19 continued to reduce and on 31 May was around 5,900. During the month at least 22,400 patients were admitted to hospital with COVID-19, the number of hospital deaths was around 5,200 and around 23,900 patients were discharged.
June 2020.
A study published on 8 June which included genome sequencing data concluded that in mid to late February travel from Italy resulted in the majority of importations. By 1 March this had changed to Spain and by mid-March it changed again to France; because of the travel restrictions imposed, importations after mid April were at very low levels. It was estimated that around half of the importations were by UK nationals returning to the UK. In the period up to 3 May, approximately 34% of detected UK transmission lineages arrived via travel from Spain, 29% from France, 14% from Italy and 23% from other countries. Less than 0.1% were from China.
By 15 June the number of people in hospital had fallen steadily to around 3,900 and daily admissions were down to around 360, but each day there were still around 50 deaths reported.
On 30 June the government imposed the first local lockdown in the UK after 10% of all positive cases in the UK over the past week were found in Leicester. Non-essential shops in the city had to close, and the public houses and restaurants hoping to reopen on 4 July had to delay opening for at least two weeks; schools would also be shut for most pupils.
By 30 June daily COVID-19 hospital admissions were fewer than 200 and COVID-19 daily deaths in hospital were around 30; the total number in hospital with COVID-19 was fewer than 2,700.
July 2020.
On Friday 24 July 2020 new regulations made it compulsory to wear face coverings in most indoor shops, shopping centres, banks, post offices and public transport hubs. Those breaking the rules could be fined up to £100. Face coverings remained optional in other indoor public places including museums, cinemas and hairdressers. Excluded from the regulations were venues where wearing a mask might be 'impractical', such as restaurants and gyms. Exemptions were available for children under 11, individuals with physical or mental illness or disability, and for anyone to whom it would cause significant distress.
On 24 July it was reported that, as a result of the pandemic and job losses, almost 1,000 people applied to a restaurant in Manchester advertising a vacancy for a receptionist.
Indoor gyms and pools started to re-open on 25 July.
During July the total number of COVID-19 hospital admissions fell to around 3,050, the number of deaths in hospital from COVID-19 fell to around 480, and around 4,200 patients with the disease were discharged from hospitals.
August 2020.
August saw the fewest monthly hospital admissions (1,600) since the start of the pandemic and hospital deaths (208); the number of patients in hospital on 31 August was under 500. Throughout August the daily hospital death rate was essentially in single digits.
The rules aimed at stopping the spread of the virus were eased on 15 August: casinos, bowling alleys and conference halls were among a range of venues allowed to reopen across much of England. Also permitted were indoor performances with socially distanced live audiences (including in theatres and sports stadiums), wedding receptions for up to 30 people, skating rinks and beauticians as long as they had measures in place to reduce COVID-19 transmission. Beauticians, tattooists, spas and tanning salons could offer additional services, including front-of-face treatments such as eyebrow threading.
September 2020.
On 2 September the minimum number of hospital patients since the start of the pandemic was recorded (451); hospital admissions were around 60 a day at the start of the month. Until 12 September, the number of deaths in hospital was essentially in single digits but thereafter rose until it was around 40 a day by the end of the month.
On 8 September, following a rise in case numbers, the government published new social distancing rules to come into effect in England from 14 September. These restricted gatherings of separate households to groups of six or fewer people (the so-called "rule of six"), excluding work or educational settings. By 18 September, the COVID Symptom Study estimated the formula_0 value in England to be 1.4, meaning that cases were doubling every seven days.
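Under a simple exponential-growth model, the quoted reproduction number and doubling time are linked through the mean generation interval, roughly R ≈ 2^(generation interval / doubling time). A minimal sketch of that relationship follows; the generation interval used is an illustrative assumption, not a figure from the COVID Symptom Study.
<syntaxhighlight lang="python">
import math

def doubling_time(r_value, generation_interval_days):
    """Doubling time implied by reproduction number R under simple
    exponential growth, using R ~= 2 ** (generation_interval / doubling_time)."""
    return generation_interval_days * math.log(2) / math.log(r_value)

# Illustrative only: a mean generation interval of ~3.5 days is an assumption.
print(round(doubling_time(1.4, 3.5), 1))  # about 7.2 days, consistent with "doubling every seven days"
</syntaxhighlight>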
Hospital admissions in September were around 5,900. There were around 560 deaths in hospital and more than 3,750 discharges of patients who had caught COVID-19.
October 2020.
Between July and September 2020, ever more extensive and increasingly rigorous ad hoc local regulations were introduced, which in many areas proved unsuccessful in controlling the spread of the virus. In England, all of these local regulations were swept away on 14 October, and replaced by new tier regulations with three levels of restrictions.
The easing of restrictions and emergence of a second more infectious variant of COVID-19 resulted in a second wave of the virus becoming well established. Hospital admissions rose from less than 6,000 in September to over 25,000 in October. Almost 3,500 people who had tested positive for COVID-19 in the previous 28 days died in hospital from COVID-19 but more than 14,700 patients were discharged.
November 2020.
After further forecasts predicting unsustainable pressure on the healthcare system, new uniform national restrictions were put in place from 5 November to at least 2 December. On 8 November, 1 million cases had been confirmed in England.
Despite these tighter regulations, the number of hospital admissions during November was more than 41,200; deaths in hospital of patients who had tested positive for COVID-19 in the previous 28 days was around 8,300 with 29,000 patient discharges. On 30 November there were 13,700 people in hospital. It is now known that, in London, the new variant of COVID-19 accounted for around 25% of the cases.
December 2020.
From 2 December, the national restrictions were replaced by a second version of tier regulations, again with three levels; 57% of the population was placed in Tier 2 and 42% in the strictest Tier 3. The government also announced that, from 23 December to 27 December, a 'Christmas bubble' would be permitted, allowing people from up to three households to meet in private homes and/or gardens, and travel between tiers for the purpose of meeting others in the same bubble.
After the existence of the new variant – referred to as Variant of Concern 202012/01 – was announced the government issued new public health guidance and were expected to impose transit restrictions. By mid-December around two-thirds of the cases reported in London were the new variant. On 19 December it was announced that a new "tier four" measure would be applied to Bedfordshire, Berkshire, Buckinghamshire, Hertfordshire, Kent, London and parts of Cambridgeshire, East Sussex, Essex and Surrey, and Christmas season relaxation would be limited to only Christmas Day.
These attempts at controlling the second wave had limited success: the total number of hospital admissions rose again during December to more than 58,600, and deaths in hospital of patients who had tested positive for COVID-19 in the previous 28 days approached 10,600. Although almost 39,000 patients were discharged there were still more than 22,700 people in hospital on 31 December.
January 2021.
On 1 January, the government announced that all primary schools in London would remain closed. This caused uproar from many headmasters and teaching staff in other areas. On 4 January the majority of primary schools opened. That evening, Boris Johnson made a televised address to the nation, announcing a third lockdown in England. The rules were similar to the first lockdown in March 2020 and schools would close for most pupils on 5 January. In view of the increase in hospitalised cases, the government's slogan was changed back to "Stay Home, Protect the NHS, Save Lives." On 25 January, Boris Johnson said the government would give an update on when schools could reopen in England as soon as possible. On the same day, the Health Secretary Matt Hancock said there were early signs that the current restrictions were working, but it was not a moment to ease them.
The peak of hospital admissions occurred on 12 January at 4,134 patients; the peak number of people in hospital occurred almost a week later on 18 January at 34,336, this is over 80% higher than that recorded in the first wave of the pandemic. In January, the total number of patients with COVID-19 admitted to hospital exceeded 100,000, more than 22,000 patients who had tested positive for COVID-19 in the previous 28 days died but there were over 73,200 discharges.
On 30 January 2021, a group of asylum seekers set light to a barracks building at Napier Barracks, where they had been housed temporarily pending resolution of their cases. The asylum seekers were beset by COVID-19 at the rate of one person in four. Coventry South MP Zarah Sultana called on the government to "provide good, safe and liveable housing instead".
February 2021.
On 1 February, door to door testing was announced to identify cases of the South African variant. There would be around 80,000 tests across 8 different areas of the country where the 11 cases were found that had no travel history. The cumulative total of deaths had surpassed 100,000 by 9 February. On 22 February, Boris Johnson announced the roadmap out of lockdown starting on 8 March with schools and colleges reopening and the lockdown ending on 21 June with nearly all restrictions being lifted.
By the end of February, daily cases were as low as they were during September 2020, with 5,080 cases being reported in England on 28 February.
In February, more than 35,800 people were admitted to hospital with COVID-19, around 9,400 patients who had tested positive for COVID-19 in the previous 28 days died but more than 44,200 patients recovered and were discharged.
March 2021.
On 3 March, there were fewer than 10,000 patients in hospitals for the first time since 1 November 2020. Students returned to face-to-face education in schools and colleges on 8 March, with rapid testing being carried out in secondary schools. By 13 March, over 20 million people had received their first dose of vaccinations, as well as over 1 million people having received their second dose. There had been a surge of infections in many countries of Europe; however, the current roadmap out of England's lockdown would still go ahead as planned. On 29 March, the next phase of easing the lockdown took place, with people being able to meet up in groups of 6 or 2 households outdoors, and outdoor sporting facilities could reopen. Cases began to drop towards the end of March, with fewer than 3,000 people getting infected a day.
In March, just over 11,400 were admitted to hospital, around 2,090 people who had tested positive for COVID-19 in the previous 28 days died in hospital and more than 17,000 people were discharged.
April 2021.
On 5 April, Boris Johnson announced the next phase of the lockdown easing would go as planned, with pubs and non-essential shops reopening from 12 April. Over 10 million people had been fully vaccinated by 23 April with the cumulative total of second doses exceeding 10 million. On 18 April a one-day "trial" festival at Sefton Park, Liverpool on 2 May that year was announced, to be headlined by band Blossoms. The festival was notable as it was to be the first festival in the UK for fourteen months with no social distancing or face masks following the worldwide COVID-19 pandemic.
In April, just over 4,000 people with COVID-19 had been admitted to hospital; around 450 patients who had tested positive for COVID-19 in the previous 28 days died but over 5,400 people were discharged. The number of people in hospital with COVID-19 on 30 April was 1,161 and daily admissions from the disease had dropped to around 80.
The effectiveness of the vaccine was becoming apparent: ONS data showed that deaths in care homes, as a percentage of all deaths from COVID-19, had fallen from around 20% a week at the peak to less than 15% in April.
May 2021.
On 6 May, there were fewer than 1,000 patients with COVID-19 in hospital; the last time this had happened was mid-September 2020. On 20 May the number of COVID-19 patients in hospital was 749, a level last seen in mid-September 2020. Cases began to rise towards the end of May, mostly in the North West. This was due to the spread of the Delta (Indian) variant.
Just over 2,600 people were admitted to hospital in May, there were 170 deaths in hospital of people who had tested positive for COVID-19 in the previous 28 days; more than 2,700 patients recovered from the virus and were discharged.
June 2021.
At the beginning of June the Delta variant had become the dominant strain in England; the increased transmission rate associated with it had resulted in a small increase in daily admissions, and the number of people in hospital was now around 800. On 14 June it was announced that the final step of easing the lockdown on 21 June would be delayed for 4 weeks until 19 July. Government research found a 50% increase in infections from 3 May to 7 June, and an increase in the Delta variant, which became dominant in the UK. The rise in infections was, however, strongest among younger, unvaccinated patients; older, vaccinated people were less at risk. A third wave of infections had begun in June, and around 110,000 swab tests carried out in England from 20 May to 7 June appeared to show COVID-19 cases were doubling every 11 days. The disease was most common in the north-west and one person in 670 was infected.
COVID-19 hospital admissions in June were slightly higher than May at around 405,700 and the number of people in hospital with COVID-19 at the end of June was almost double that for the end of May at 1,560; there were 247 hospital deaths in June of people who had tested positive for COVID-19 in the previous 28 days. Over 4,100 COVID-19 patients were discharged in June.
July 2021.
In July, cases began to rise rapidly. On 19 July, the 4-week-delayed "Freedom Day" took place. Social distancing and mask wearing became optional, and night clubs were allowed to re-open; however, self-isolation remained mandatory for close contacts of a positive case. The total number of infections in England surpassed 5 million on 27 July. Despite the rise in cases, deaths and hospitalisations were lower than in previous waves owing to the vaccination programme. Cases began to fall after 17 July, when 50,955 cases were reported, although scientists believed it was too early to say whether infection rates had dropped.
Over 19,000 people were admitted to hospital with COVID-19 in July, and more than 1,140 people died in hospital from COVID-19, but almost 14,500 patients were discharged after recovering.
ONS data showed that, as a percentage of all deaths from COVID-19, those in care homes had fallen from around 20% at the peak to around 10%.
August 2021.
Self-isolation rules changed: from 16 August those who had been in contact with a positive COVID-19 case no longer needed to self-isolate if they were fully vaccinated or under the age of 18. With more social and household mixing, there was an inevitable rise in hospital admissions to more than 23,000; the number of people dying in hospital from COVID-19 was almost double that of July at 2,100. August saw more than 19,800 hospital discharges.
September 2021.
During September, the number of people in hospital continued to rise and the weekly number of excess deaths from other causes increased to around 600. Hospital deaths from COVID-19 during September were around 2,500 and there were almost 18,500 discharges.
On 14 September, Prime Minister Johnson warned that COVID-19 remained a risk in England as the autumn and winter approached, and unveiled the government's plans to protect the NHS. This included continued testing, tracing, and prioritizing the vaccination of children 12–15 (with drop-in clinics to be run at schools), those who are not yet vaccinated, and the booster dose programme. Businesses would also be encouraged to voluntarily use the NHS COVID Pass.
Johnson also discussed a "Plan B" that would be implemented in the event the NHS is in danger of being overwhelmed, which would include reinstating mandatory masking in certain settings, and mandating proof of vaccination for large gatherings and other settings. Johnson stated that the implementation of "Plan B" would be based on multiple metrics (including hospitalizations, caseloads, and other factors), and would "give us the confidence that we don't have to go back to the lockdowns of the past.". If implemented, Plan B would bring England in line with restrictions in the remainder of the Home Nations.
October 2021.
The NHS Confederation and the British Medical Association urged the government to implement "Plan B" for COVID-19 in the winter due to a backlog of five million patients. However, the government stated that there were currently no plans to do so. The number of people in hospital at the end of October was around 50% greater than at the end of September, the number of deaths in hospital was around 2,500 and there were around 18,000 discharges.
November 2021.
On 27 November, the first UK cases of the Omicron variant were found in Essex and Nottingham. New restrictions went into force, including several African countries being placed on the Red list for travel, mandating PCR testing of anyone entering the UK from outside the Common Travel Area, masks becoming mandatory on public transport and at shops, and all close contacts of an Omicron variant case being required to self-isolate regardless of vaccination status.
December 2021.
On 8 December, Johnson announced that "Plan B" would be activated in England due to concerns over the Omicron variant and the increasing rate of infections it could cause, explaining that "the best way to ensure we all have a Christmas as close to normal as possible is to get on with Plan B." Workers were advised to stay at home if possible. On 10 December, mask mandates were extended to cinemas and theatres. From 15 December, the NHS COVID Pass became mandatory at nightclubs, unseated indoor events with 500 attendees or more, unseated outdoor events with 4,000 attendees or more, and any event with more than 10,000 attendees. Hospital admissions during December were around 50% higher than in November at around 33,800 patients, however, hospital deaths were slightly lower at around 2,500; more than 24,400 patients were discharged in December.
January 2022.
On 19 January 2022, Johnson announced that the "Plan B" restrictions would end from 27 January. Johnson cited booster vaccination progress and reports that Omicron had peaked as justification, but warned that "we must learn to live with COVID in the same way we live with flu". There were almost 54,000 hospital admissions in January along with over 4,700 deaths. Over 48,000 patients were discharged from hospital in January.
February 2022.
In February more people with COVID-19 were discharged from hospital (around 32,000) than were admitted (around 30,000). Another 2,800 hospital patients who had the virus died.
March 2022.
The government decided to cut down on the number of people in England who would be eligible for free influenza vaccination in autumn 2022: people aged 50–64 and school children aged 11–15 would no longer qualify. Nick Kaye of the National Pharmacy Association said, "It's short-sighted to cut back on this sensible public health measure, given that no one can say for certain that we'll be through the Covid pandemic by next winter", adding that hospitals would be overstretched for years and free flu vaccination helps keep people out of hospital. Giulia Guerrini of online pharmacy Medino maintained that vaccination mattered since, "immune systems are lower than ever due to our bodies having had a lower amount of exposure to viruses than normal over the last two years". March saw another large increase in hospital admissions of patients with COVID-19 (over 52,000), and patients who died having tested positive for COVID-19 increased to around 3,250.
April 2022.
Hospital admissions of patients testing positive for COVID-19 reduced to fewer than 45,000 in April and the number discharged was around 47,000. The number of deaths in hospital of patients who had tested positive for COVID-19 was just over 4,000. Free COVID-19 testing was stopped for most individuals and the majority of Lighthouse labs that supplied centralised COVID-19 testing were closed.
Hospital death statistics.
Statistics for deaths in hospital up to 30 December 2020 showed that those with a pre-existing condition – especially diabetes, chronic kidney disease, dementia or ischaemic heart disease, but also asthma, chronic neurological or pulmonary disease – were around twenty-three times more likely to die than those who did not have one. Age and sex also influenced the risk of death, with men between 60 and 79 showing a death rate almost double that of women. Men over 80 were over 30% more likely to die than women in the same age group. The percentages in each category showed only small changes through the year.
Statistics for deaths in hospital for 2021 showed only small changes from the data recorded in 2020 but the effectiveness of vaccination did result in lower death rates in those over 80, particularly for men.
ONS data.
Registered deaths.
The Office for National Statistics publishes data on weekly deaths in England and Wales, which include information on deaths from COVID-19. These data give the number of deaths registered in England during a seven-day period; the total number of deaths will be greater, as there is normally a delay between the date a death occurred and the date it is registered.
2020.
Up to and including the week ending 6 March 2020, the number of deaths in England was on average 442 fewer each week than the five-year average (2015–2019). The number of deaths above the average is generally referred to as 'excess' deaths. In both 2020 and 2021 the data were influenced by many factors, including the lockdowns, social distancing, mask wearing, reduced elective surgery and less medical diagnosis and care. This resulted in very few deaths from influenza and slightly fewer from road traffic accidents, but more because people did not seek, or were unable to get, healthcare.
The total number of excess deaths in England for the whole of 2020, based on the 5 year average for 2015–2019, was 71,677 but if the starting point of the pandemic is taken as 6 March, the total number of excess deaths would be more than 76,000. A recent BMJ paper based on a 4-year average (2016–2019) reported a value of 85,400 (83,900 to 86,800 (95% confidence intervals)) excess deaths for England and Wales in 2020; on a pro rata basis this would give a value of around 79,800 for England.
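In essence, the excess-death figures quoted here are registered weekly deaths minus the corresponding five-year-average week, summed over the period of interest. A toy illustration of that calculation follows; the weekly figures are invented for the example and are not ONS data.
<syntaxhighlight lang="python">
# Toy illustration of the excess-deaths calculation; the weekly figures
# below are invented, not ONS data.
weekly_registered = [9500, 11800, 16200, 14900]   # deaths registered in four example weeks
five_year_average = [9900, 10100, 10000, 9800]    # 2015-2019 average for the same weeks

excess = sum(obs - avg for obs, avg in zip(weekly_registered, five_year_average))
print(excess)  # 12600 excess deaths over these four example weeks
</syntaxhighlight>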
The registration data were affected by closure of the registry offices over bank holidays, Christmas and the New Year. In addition, 2020 was a 53-week year.
2021.
For the first nine weeks of 2021 the total number of excess deaths continued to increase; from 12 March it started to reduce, likely in part because some of the people who died prematurely from COVID-19 would otherwise have succumbed to something else at a slightly later date. At the beginning of July the number of excess deaths started to rise again; many of these deaths were not attributed to COVID-19, which suggests that delays in obtaining diagnosis and subsequent treatment in 2020 and 2021 had started to influence the data. The data are affected by a number of public holidays: 2 and 9 April – Good Friday and Easter Monday Bank Holiday, 7 May – Spring Bank Holiday, 30 August – Summer Bank Holiday.
2022.
The ONS five-year average for 2022 includes data from 2016–2019 and 2021; as the number of deaths in 2021 was significantly above the previous five-year average, this has a significant effect on the excess-death data.
COVID-19 deaths by place of occurrence.
13 March – 4 September 2020.
The ONS data include information on deaths by place of occurrence. During the first wave of infections, the majority of deaths were in hospital (63%), but deaths in care homes were also high (30%). The percentage of deaths in each setting remained essentially constant from mid June to early September.
11 September 2020 – 1 January 2021.
During the second wave there was a significant increase in the percentage of deaths in hospital, and a corresponding decrease in care-home deaths.
COVID-19 deaths by age.
ONS data are only available for England and Wales; the differences in the percentage of all deaths by age between the two waves of the pandemic were small. Almost three-quarters of the deaths occurred in those over 75 years (around 11% of the population), while those aged between 70 and 75 accounted for a further 9% of the deaths. Some differences were observed between the genders, with a generally higher percentage of deaths for men; the exception was those over 75, but this reflects the greater number of older women in the population.
Vaccination programme.
A programme of mass vaccinations began on 8 December 2020, with priority given to the elderly, their carers and frontline health and social care workers.
Although there is a lag between catching the disease and mortality, the ONS data provided a way of identifying the effectiveness of the vaccination programme; only combined data for England and Wales are available. In December 2020, around 75% of deaths registered in England and Wales where COVID-19 was mentioned on the death certificate were in the 75+ age group; following the vaccination programme, by the end of March this had fallen to 63%. By the end of March 2021, around 50% of the population had received at least one dose of the vaccination and, as a result, the total number of registered deaths from COVID-19 had fallen from a maximum of more than 8,000/week to less than 700/week.
An alternative source confirming the effectiveness of the vaccination programme in England is the deaths-in-hospital data released daily by NHS England. The caveat when considering this information is that around 30% of all deaths from COVID-19 are not in hospitals, and the majority of these deaths are of people likely to be aged 80+. In the period 1 January to 26 March 2021 there was a significant and continuous decrease in the weekly number of hospital deaths recorded in the 80+ age group, from 58.4% to 49.2%.
In June 2021 it was calculated that general practice had delivered 27.3 million out of 41.1 million COVID-19 vaccinations in England at that point, with better response rates than the "mass" centres. This was considerably more than expected.
In February 2022, concerns were raised about young children's access to COVID-19 vaccines when it was claimed that they would not be included in school immunisation programmes in England. Following guidance from the Joint Committee on Vaccination and Immunisation (JCVI), all children aged five to eleven in England were to be offered COVID-19 vaccination.
Regulations and legislation.
The government published the Health Protection (Coronavirus) Regulations 2020 on 10 February 2020, a statutory instrument covering the legal framework behind the government's initial containment and isolation strategies and its organisation of the national reaction to the virus for England. Other published regulations include changes to Statutory Sick Pay (into force on 13 March), and changes to Employment and Support Allowance and Universal Credit (also 13 March).
On 19 March, the government introduced the Coronavirus Act 2020, which grants the government discretionary emergency powers in the areas of the NHS, social care, schools, police, the Border Force, local councils, funerals and courts. The act received royal assent on 25 March 2020. Closures to pubs, restaurants and indoor sports and leisure facilities were imposed via The Health Protection (Coronavirus, Business Closure) (England) Regulations 2020 (SI 327).
On 23 March, the government announced a number of restrictions on movement, some of which were later enacted into law. These included:
The full regulations are detailed in:
Local lockdown regulations.
In England, up until 14 October 2020 most of the COVID-19 lockdown regulations covered the whole country, but some local areas of particular concern are or have been subject to more restrictive rules at various times, namely Leicester, Luton, Blackburn with Darwen, Bradford, Tameside, Bury, Manchester, Oldham, Rochdale, Salford, Stockport, Trafford, Wigan, Pendle, Hyndburn, Burnley, Calderdale and Kirklees. In most cases, the effect of the local regulations had been to slow down the gradual easing of the lockdown regulations which applied to the rest of the country.
Tier regulations.
In England the local lockdown regulations were swept away on 14 October 2020, and were replaced by the first COVID-19 tier regulations in England. The restrictions were enforced by three statutory instruments, as follows:
These are referred to as the 'first tier regulations'. The regulations relate to England only.
Following the November lockdown, a new framework of tiers, known as the second tier regulations, were introduced in The Health Protection (Coronavirus, Restrictions) (All Tiers) (England) Regulations 2020. The regulations apply from 2 December 2020 until 2 February 2021, with special arrangements over the Christmas period, 23–27 December 2020.
In December 2020, a new Fourth Tier was added to the second tier regulations. Households in this tier were subjected to further restrictions, including restrictions on movement, a ban on international travel and a ban on meeting more than one person outside. The Christmas regulations were changed so that only households in Tiers 1–3 could mix with up to three other households, and only on Christmas Day; Tier 4 households could not mix over the festive period.
A gallery in the original article ("England COVID-19 alert levels by district.svg") showed tiers by local government district for each period they were in force.
Travel restrictions.
On 7 May, the government released a list of countries with quarantine rules when returning to England.
Impact.
Finance and the economy.
During the second half of March, one million British workers applied for the Universal Credit benefit scheme. On 20 March the government announced a COVID-19 Job Retention Scheme, under which it would offer grants to companies to pay 80% of a staff wage each month, up to a total of £2,500 per person, if companies kept staff on their payroll. The scheme would cover three months' wages and would be backdated to the start of March. Following a three-week extension of the countrywide lockdown the scheme was extended until the end of June 2020. Initially the scheme was only for those workers who started work at their company on or before 28 February 2020; this was later changed to 19 March 2020, the day before the scheme was announced, allowing 200,000 additional workers to be part of it. On the first day of operation 140,000 companies used the scheme. Later the scheme was extended until the end of October, with the Chancellor saying that from August companies would have to contribute towards the 80% of employees' wages that the government was covering. It was stated that the scheme was costing £14 billion a month to run, with nearly a quarter of all workers in Britain furloughed by their employers within two weeks of the start of the scheme. The decision to extend the job retention scheme was made to avoid mass redundancies, company bankruptcies and potential unemployment levels not seen since the 1930s.
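The headline terms of the Job Retention Scheme reduce to a capped percentage of usual pay. The sketch below illustrates only that arithmetic; it is a simplification and ignores details such as employer National Insurance, pension contributions and the later tapering of the government's share.
<syntaxhighlight lang="python">
def furlough_grant(monthly_wage_gbp):
    """80% of usual monthly wages, capped at 2,500 GBP per person per month
    (simplified; ignores employer NI, pension contributions and later tapering)."""
    return min(0.8 * monthly_wage_gbp, 2500.0)

print(furlough_grant(2000))  # 1600.0 -- below the cap
print(furlough_grant(4000))  # 2500.0 -- capped
</syntaxhighlight>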
In March the Self-Employment Income Support Scheme (SEISS) was announced. The scheme paid a grant worth 80% of self-employed profits, up to £2,500 each month, for those whose trading profit was less than £50,000 in the 2018–19 financial year or averaged less than £50,000 over the last three financial tax years. HM Revenue & Customs (HMRC) was tasked with contacting those who were eligible, and the grant was taxable. The government also announced a six-month delay on tax payments. Self-employed workers who pay themselves a salary and dividends are not covered by the scheme and instead had to apply for the job retention scheme. The scheme went live ahead of schedule on 13 May, and people were invited to claim on a specific date between 13 and 18 May based on their Unique Tax Reference number. Claimants would receive their money by 25 May or within six days of a completed claim. By 15 May, more than 1 million self-employed people had applied to the scheme.
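The eligibility test and grant level described above can be sketched in the same simplified way; the real HMRC rules included further conditions that are not modelled here, so this is only an illustration of the headline thresholds.
<syntaxhighlight lang="python">
def seiss_eligible(profit_2018_19_gbp, profits_last_three_years_gbp):
    """Eligible if 2018-19 trading profit was under 50,000 GBP, or the average
    over the last three tax years was under 50,000 GBP (simplified; HMRC
    applied further conditions not modelled here)."""
    three_year_avg = sum(profits_last_three_years_gbp) / len(profits_last_three_years_gbp)
    return profit_2018_19_gbp < 50000 or three_year_avg < 50000

def seiss_monthly_grant(average_monthly_profit_gbp):
    """80% of average monthly trading profit, capped at 2,500 GBP a month."""
    return min(0.8 * average_monthly_profit_gbp, 2500.0)

print(seiss_eligible(42000, [38000, 45000, 42000]))  # True
print(seiss_monthly_grant(3000))                     # 2400.0
</syntaxhighlight>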
The government announced the Retail, Hospitality and Leisure Grant Fund (RHLGF) and changes to the Small Business Grant Fund (SBGF) on 17 March. The SBGF was changed from £3,000 to £10,000, while the RHLGF offered grants of up to £25,000. £12.33 billion in funding was committed to the SBGF and RHLGF schemes, with another £617 million added at the start of May. By 25 April only around 50% of eligible businesses had received funding.
On 23 March the Government announced the COVID-19 Business Interruption Loan Scheme (CBILS) for small and medium-sized businesses and the Covid Corporate Financing Facility for large companies. The government banned banks from seeking personal guarantees on COVID-19 Business Interruption loans under £250,000 following complaints. The COVID-19 Large Business Interruption Loan Scheme (CLBILS) was announced on 3 April and later tweaked to include more companies. In May the amount a company could borrow under the scheme was raised from £50 million to £200 million. Restrictions were put in place on companies using the scheme, including on dividend payouts and bonuses to members of the board. On 20 April the Government announced a scheme worth £1.25 billion to support innovative new companies that could not claim under the COVID-19 rescue schemes. The government additionally announced the Bounce Back Loan Scheme (BBLS) for small and medium-sized businesses. The scheme offered loans of up to £50,000 and was interest free for the first year, before an interest rate of 2.5% a year was applied, with the loan to be paid back within six years. Businesses that had an existing CBILS loan of up to £50,000 could transfer on to this scheme, but had to do so by 4 November 2020. The scheme launched on 4 May. The loan was 100% guaranteed by the government and was designed to be simpler than the CBILS scheme. More than 130,000 BBLS applications were received by banks on the first day of operation, with more than 69,500 being approved. On 13 May the Government announced that it was underwriting trade credit insurance, to prevent businesses struggling in the pandemic from having no insurance cover. By 12 May almost £15 billion of state aid had been given to businesses. The Covid Corporate Financing Facility (CCFF) itself had been announced by the Treasury and the Bank of England on 17 March.
The Resolution Foundation surveyed 6,000 workers, and concluded that 30% of those in the lowest income bracket had been affected by the pandemic compared with 10% of those in the top fifth of earners. The foundation said that about a quarter of 18 to 24-year-olds included in the research had been furloughed whilst another 9% had lost their job altogether. They also said that 35 to 44 year olds were least likely to be furloughed or lose their jobs with only around 15% of the surveyed population having experienced these outcomes. Earlier research by the Institute for Fiscal Studies concluded that young people (those under 25) and women were more likely to be working in a shutdown business sector.
"The Guardian" reported that after the government had suspended the standard tender process so contracts could to be issued "with extreme urgency", over a billion pounds of state contracts had been awarded under the new fast-track rules. The contracts were to provide food parcels, personal protective equipment (PPE) and assist in operations. The largest contract was handed to Edenred by the Department for Education, it was worth £234 million and was for the replacement of free school meals.
National health service response.
Appointments and self-isolation.
In March, hospitals in England began to cancel all elective procedures. On 22 March, the government announced that it would be asking about 1.5 million people (everyone in England with certain health conditions that carry serious risk if infected) to "shield" for 12 weeks. They were to be notified by mail or text message by their NHS general practitioners, and provided with deliveries of medication, food, and household essentials by pharmacists and local governments, at least initially paid for by the UK government. Members of the public were told to stay at home should they suspect they had symptoms of COVID-19, and not visit a GP, pharmacy, or hospital. For advice, the public were told to use a dedicated online self-assessment form before calling NHS 111, the non-emergency medical helpline.
To allow vulnerable patients with underlying conditions to still be able to attend for routine blood tests without having to come to a hospital, from 8 April, the Sheffield Teaching Hospitals NHS Foundation Trust opened a drive-through phlebotomy service operating out of a tent in the car park of Sheffield Arena. This allows patients to have their blood tests taken from within their car, in a similar manner to how COVID-19 swabbing drive-through stations work. Following the success of the service, it was expanded to cover all patients registered with any GP in the Sheffield area from 27 April.
Beds.
NHS England freed up 30,000 beds by discharging patients who were well enough and by delaying non-emergency treatment, and acquired use of 8,000 beds in private sector facilities. Emergency building work was undertaken to add capacity to existing hospitals, 52 beds in Wigan, for example. An additional capacity of almost 20,000 beds was created with NHS Nightingale Hospitals in major conurbations across the United Kingdom. Only a small amount of the capacity was used, and most of the hospitals were put on standby as the situation progressed.
On 18 October 2020, The Guardian reported that, according to a National Health Service (NHS) report, Greater Manchester was at risk of running out of hospital beds during the pandemic. NHS data revealed that, before that Friday, 211 of the 257 critical care beds in Greater Manchester were occupied, and 82% of the total supply was in use by Covid-positive people or people admitted for other critical cases.
Communication.
NHS England's approach to communications during the pandemic was described as "truly dreadful" by Sir Richard Leese, chair of Greater Manchester Health and Social Care Partnership, in May 2021. He said their tight control of public communications had made getting crucial messages to the public a "nightmare". "We took the view that having a fully informed public might have helped us tackle covid, but that's not the view we got from NHSE. People's willingness to comply with guidelines around covid was beginning to weaken and we wanted to get a message out [that] our hospitals were on the edge of falling over. We wanted to have responsible media to be able to go into hospitals and tell that story, but it took us ages to get consent to do that." Although this criticism was rejected by NHS England medical director Steve Powis, dozens of local NHS leaders and communications staff privately agreed with it.
Birth sex ratio changes.
Population stress is thought to have contributed to a sharp drop in the sex ratio at birth in England and Wales to 51.00% in June 2020, three months after COVID-19 was declared a pandemic. In December 2020, nine months after the pandemic was declared, the sex ratio at birth dramatically increased to 51.71%, most likely as a result of lockdown measures that initially encouraged more coupled sexual activity in a portion of the population.
Law and order.
In March, police forces in each nation of the UK were given powers to arrest and issue fixed penalty notices (FPNs) to citizens who broke lockdown rules. The National Police Chiefs' Council said police had issued their first FPNs for people breaking lockdown rules on 27 March. The penalty amounts were £60, reduced to £30 if paid within 14 days. By 31 March, some police forces and individual officers were being criticised by a variety of people including former Supreme Court judge Lord Sumption, former Justice secretary David Gauke, former Chancellor George Osborne and privacy and civil liberties group Big Brother Watch for over-zealous and incorrect application of the new powers. New guidance was released by the National Police Chiefs Council.
According to the National Police Chiefs' Council, around 9,000 people were issued FPNs for breaking lockdown rules in England and Wales between 27 March and 27 April. From 13 May, amendments to the regulations increased the initial penalty to £100.
In May 2020, the Crown Prosecution Service stated 56 people had been wrongly charged with offences related to the pandemic, mainly due to Welsh regulations being applied in England and vice versa. Some fixed penalty notices for breaking lockdown laws were also wrongly issued. Of those where an individual declined to pay and was prosecuted in open court, 25% were found to have been wrongly issued. Giving evidence to parliament, barrister Kirsty Brimelow said it was likely that thousands of FPNs had been incorrectly issued.
There were reports of hate incidents against Italian and Chinese persons, and a Singaporean student was assaulted in London in an attack that police linked to COVID-19 fears. In addition there were reports of young people deliberately coughing and spitting in the faces of others, including an incident involving health workers.
On 9 May, police broke up an anti-lockdown protest in London consisting of around 40 people. It was thought to be the first such protest in the UK, following protests in other nations. It was reported that around 60 protests had been planned on the weekend of 16 May, with police saying that they were preparing to break them up. Protests took place in London and Southampton, with several protesters arrested and fined at the London demonstration.
In October, police broke up a wedding with 100 guests, held in breach of social isolation laws, at the Tudor Rose, Southall. A police spokesman said the owner could be fined £10,000.
Fraud.
Local councils found fake goods being sold including testing kits, face masks and hand sanitiser. There had also been reports of scams involving the replacement school meals scheme and incidents of people posing as government officials, council staff or IT workers.
During the contact tracing app trial on the Isle of Wight the Chartered Trading Standards Institute found evidence of a phishing scam. In the scam recipients would receive a text stating that they had been in contact with someone with COVID-19 and were directed to a website to input their personal details.
Courts and prisons.
On 17 March, trials lasting longer than three days were postponed until May in England and Wales; those cases already running would continue in the hope of reaching a conclusion.
The government released specific guidance to prisons in the event of COVID-19 symptoms or cases, including the rule that "any prisoner or detainee with a new, continuous cough or a high temperature should be placed in protective isolation for 7 days". There are around 83,000 prisoners in England and Wales. On 24 March, the Ministry of Justice announced that prison visits would be suspended and that inmates would be confined to their cells. In order to maintain communication between prisoners and their families, the government promised 900 secure phones to 55 prisons, with calls being monitored and time-limited. In a committee meeting on the same day, Justice Secretary Robert Buckland suggested that 50 pregnant inmates might be given early release, and another 9,000 inmates awaiting trial could be transferred to bail hostels. On 14 April, the Ministry of Justice ordered 500 modular buildings, reportedly adapted from shipping containers, to provide additional single prison cell accommodation at seven prisons: HMPs North Sea Camp, Littlehey, Hollesley Bay, Highpoint, Moorland, Lindholme and Humber.
Following a COVID-19 case in HMP Manchester, public services think tank Reform called for the release of 2,305 "low-risk" offenders on short sentences to reduce the risk of COVID-19 to the prison population. Former justice secretary David Gauke echoed similar sentiments, citing the "churn" of prisoners going in and out of prison as a risk. Up to 4,000 prisoners in England and Wales were to be released. Amnesty International's Europe Deputy Director of Research said that authorities in the UK should consider releasing those who are more vulnerable to COVID-19.
On 18 March, the first COVID-19 case was reported within the UK prison population. The prisoner, who had been serving time in HMP Manchester (commonly referred to as Strangeways), was moved to a hospital. While no other prisoners or staff tested positive for the virus, thirteen prisoners and four members of staff were put into isolation as a precaution. On 26 March, it was reported that an 84-year-old sex offender had died from COVID-19 on 22 March at HMP Littlehey in Cambridgeshire, becoming the first inmate in the UK to die from the virus. On 28 April, Public Health England had identified around 2,000 "possible/probable" and confirmed COVID-19 cases; outbreaks had occurred in 75 different institutions, with 35 inmates treated in hospital and 15 deaths.
Aviation.
From the latter half of January, Heathrow Airport received additional clinical support and tightened surveillance of the three direct flights it received from Wuhan every week; each was to be met by a Port Health team. Later, airlines including British Airways and Ryanair announced a number of flight cancellations for March.
On 25 March, London City Airport announced it would temporarily close due to the COVID-19 outbreak. Heathrow Airport closed one runway from 6 April, while Gatwick Airport closed one of its two terminals, and said its runway would open for scheduled flights only between 2:00pm and 10:00pm.
Public transport.
On 20 March, Southeastern became the first train operating company to announce a reduced timetable, which would come into use from 23 March.
On 19 March, the Stagecoach Supertram light rail network in Sheffield announced that it would be switching to a modified Sunday service from 23 March until further notice. Local bus operators First South Yorkshire and Stagecoach Yorkshire, which operate across the same area, announced that they would also be switching to a reduced timetable from 23 March. National Express suspended all its long-distance coach services from 6 April.
Transport for London (TfL) services were reduced in stages. All Night Overground and Night Tube services, as well as all services on the Waterloo & City line, were suspended from 20 March, and 40 tube stations were closed on the same day. The Mayor of London and TfL urged people to use public transport only if absolutely essential, so it could be used by critical workers.
In April, TfL trialled changes encouraging passengers to board London buses by the middle or rear doors to lessen the risks to drivers, after the deaths of 14 TfL workers including nine drivers. This measure was extended to all routes on 20 April, and passengers were no longer required to pay, so they did not need to use the card reader near the driver.
On 22 April, London mayor Sadiq Khan warned that TfL could run out of money to pay staff by the end of April unless the government stepped in. Since London entered lockdown on 23 March, Tube journeys had fallen by 95% and bus journeys by 85%. On 7 May, it was reported that TfL had requested £2 billion in state aid to keep services running until September 2020. On 12 May, TfL documents warned it expected to lose £4bn due to the pandemic and said it needed £3.2bn to balance a proposed emergency budget for 2021, having lost 90% of its overall income. Without an agreement with the government, deputy mayor for transport Heidi Alexander said TfL might have to issue a 'section 114 notice' – the equivalent of a public body becoming bankrupt. On 14 May, the UK Government agreed £1.6bn in emergency funding to keep Tube and bus services running until September.
In April, Govia Thameslink Railway re-branded three trains with special liveries to show its support for the NHS and the 200,000 essential workers commuting on GTR's network every week.
British Armed Forces.
The COVID-19 pandemic affected British military deployments at home and abroad. Training exercises, including those in Canada and Kenya, had to be cancelled to free up personnel for the COVID Support Force. The British training mission in Iraq, part of Operation Shader, had to be down-scaled. An air base supporting this military operation also confirmed nine cases of COVID-19. The British Army paused face-to-face recruitment and basic training operations, instead conducting them virtually. Training locations, such as Royal Military Academy Sandhurst and HMS "Raleigh", had to adapt their passing out parades. Cadets involved were made to stand apart in combat dress and there were no spectators in the grandstands. Ceremonial duties, such as the Changing of the Guard at Buckingham Palace and the Gun Salute for the Queen's Official Birthday were either scaled-down or cancelled. The Royal Air Force suspended all displays of its teams and bands, with some replaced by virtual displays. The British Army deployed two experts to NATO to help counter disinformation around the pandemic.
Elsewhere in defence, air shows, including the Royal International Air Tattoo at RAF Fairford, were cancelled. Civilian airports, including Birmingham Airport, were used to practice transferring COVID-19 patients to local hospitals via helicopter. Several defence and aerospace companies contributed to the national effort to produce more ventilators. BAE Systems, the country's largest defence company, also loaned its Warton Aerodrome site to be used as a temporary morgue. The Government's defence and security review, named the Integrated Review, was delayed.
The armed forces assisted in the transportation of COVID-19 patients in some of the country's remotest regions, such as Shetland and the Isles of Scilly. On 23 March 2020, Joint Helicopter Command began assisting the COVID-19 relief effort by transporting people and supplies. Helicopters were based at RAF Leeming to cover Northern England and Scotland, whilst helicopters based at RAF Benson, RAF Odiham and RNAS Yeovilton supported the Midlands and Southern England.
On 24 March 2020, the armed forces helped plan and construct a field hospital at the ExCeL London conference centre, named NHS Nightingale Hospital London. Further critical care field hospitals were later built with military assistance in Birmingham, Manchester, Harrogate, Bristol, Exeter, Washington and Glasgow. These hospitals were staffed by military medics, alongside the NHS.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "R_0"
}
] |
https://en.wikipedia.org/wiki?curid=63494499
|
63494981
|
Brouwer's conjecture
|
In the mathematical field of spectral graph theory, Brouwer's conjecture is a conjecture by Andries Brouwer on upper bounds for the intermediate sums of the eigenvalues of the Laplacian of a graph in terms of its number of edges.
The conjecture states that if "G" is a simple undirected graph and "L"("G") its Laplacian matrix, then its eigenvalues "λ""n"("L"("G")) ≤ "λ""n"−1("L"("G")) ≤ ... ≤ "λ"1("L"("G")) satisfy
formula_0
where "m"("G") is the number of edges of "G".
State of the art.
Brouwer has confirmed by computation that the conjecture is valid for all graphs with at most 10 vertices. It is also known that the conjecture is valid for any number of vertices if "t" = 1, 2, "n" − 1, and "n".
For certain types of graphs, Brouwer's conjecture is known to be valid for all "t" and for any number of vertices. In particular, it is known to be valid for trees, and for unicyclic and bicyclic graphs. It was also proved that Brouwer's conjecture holds for two large families of graphs; the first family is obtained from a clique by identifying each of its vertices with a vertex of an arbitrary c-cyclic graph, and the second family is composed of the graphs in which the removal of the edges of the maximal complete bipartite subgraph gives a graph each of whose non-trivial components is a c-cyclic graph.
For certain sequences of random graphs, Brouwer's conjecture holds true with probability tending to one as the number of vertices tends to infinity.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\sum_{i=1}^{t}\\lambda_{i}(L(G))\\leq m(G)+\\left(\\begin{array}{c}\nt+1\\\\\n2\n\\end{array}\\right),\\quad t=1,\\ldots,n"
}
] |
https://en.wikipedia.org/wiki?curid=63494981
|
6349982
|
308 (number)
|
308 is the natural number following 307 and preceding 309.
Natural number
|
[
{
"math_id": 0,
"text": "10^{308}"
}
] |
https://en.wikipedia.org/wiki?curid=6349982
|
63509442
|
Weyl integration formula
|
Mathematical formula
In mathematics, the Weyl integration formula, introduced by Hermann Weyl, is an integration formula for a compact connected Lie group "G" in terms of a maximal torus "T". Precisely, it says there exists a real-valued continuous function "u" on "T" such that for every class function "f" on "G":
formula_0
Moreover, formula_1 is explicitly given as: formula_2 where formula_3 is the Weyl group determined by "T" and
formula_4
the product running over the positive roots of "G" relative to "T". More generally, if formula_5 is only a continuous function, then
formula_6
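For example (an illustration not in the original text), take G = SU(2) with T the diagonal maximal torus, whose elements are t_θ = diag(e^{iθ}, e^{−iθ}). The Weyl group has two elements, the single positive root gives δ(t_θ) = e^{iθ} − e^{−iθ}, hence u(t_θ) = |δ(t_θ)|²/2 = 2 sin²θ, and with the normalized Haar measure dθ/2π on T the formula becomes
\int_{SU(2)} f(g)\,dg = \frac{1}{2\pi}\int_0^{2\pi} f(t_\theta)\, 2\sin^2\theta\, d\theta = \frac{1}{\pi}\int_0^{2\pi} f(t_\theta)\,\sin^2\theta\, d\theta.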
The formula can be used to derive the Weyl character formula. (The theory of Verma modules, on the other hand, gives a purely algebraic derivation of the Weyl character formula.)
Derivation.
Consider the map
formula_7.
The Weyl group "W" acts on "T" by conjugation and on formula_8 from the left by: for formula_9,
formula_10
Let formula_11 be the quotient space by this "W"-action. Then, since the "W"-action on formula_8 is free, the quotient map
formula_12
is a smooth covering with fiber "W" when it is restricted to regular points. Now, formula_13 is formula_14 followed by formula_15 and the latter is a homeomorphism on regular points and so has degree one. Hence, the degree of formula_13 is formula_16 and, by the change of variable formula, we get:
formula_17
Here, formula_18 since formula_5 is a class function. We next compute formula_19. We identify a tangent space to formula_20 as formula_21 where formula_22 are the Lie algebras of formula_23. For each formula_24,
formula_25
and thus, on formula_26, we have:
formula_27
Similarly we see, on formula_28, formula_29. Now, we can view "G" as a connected subgroup of an orthogonal group (as it is compact connected) and thus formula_30. Hence,
formula_31
To compute the determinant, we recall that formula_32 where formula_33 and each formula_34 has dimension one. Hence, considering the eigenvalues of formula_35, we get:
formula_36
as each root formula_37 has pure imaginary value.
Weyl character formula.
The Weyl character formula is a consequence of the Weyl integral formula as follows. We first note that formula_38 can be identified with a subgroup of formula_39; in particular, it acts on the set of roots, linear functionals on formula_40. Let
formula_41
where formula_42 is the length of "w". Let formula_43 be the weight lattice of "G" relative to "T". The Weyl character formula then says that: for each irreducible character formula_44 of formula_45, there exists a formula_46 such that
formula_47.
To see this, we first note that (1) formula_48 and (2) formula_49
The property (1) is precisely (a part of) the orthogonality relations on irreducible characters.
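As an illustration (not in the original text), for G = SU(2) and the irreducible representation of dimension n + 1, the restriction of its character χ_n to the diagonal torus t_θ = diag(e^{iθ}, e^{−iθ}) satisfies
\chi_n(t_\theta)\left(e^{i\theta} - e^{-i\theta}\right) = e^{i(n+1)\theta} - e^{-i(n+1)\theta},
so that χ_n(t_θ) = sin((n+1)θ)/sin θ, which is the Weyl character formula for SU(2).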
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\int_G f(g) \\, dg = \\int_T f(t) u(t) \\, dt."
},
{
"math_id": 1,
"text": "u"
},
{
"math_id": 2,
"text": "u = |\\delta |^2 / \\# W"
},
{
"math_id": 3,
"text": "W = N_G(T)/T"
},
{
"math_id": 4,
"text": "\\delta(t) = \\prod_{\\alpha > 0} \\left( e^{\\alpha(t)/2} - e^{-\\alpha(t)/2} \\right),"
},
{
"math_id": 5,
"text": "f"
},
{
"math_id": 6,
"text": "\\int_G f(g) \\, dg = \\int_T \\left( \\int_G f(gtg^{-1}) \\, dg \\right) u(t) \\, dt."
},
{
"math_id": 7,
"text": "q : G/T \\times T \\to G, \\, (gT, t) \\mapsto gtg^{-1}"
},
{
"math_id": 8,
"text": "G/T"
},
{
"math_id": 9,
"text": "nT \\in W"
},
{
"math_id": 10,
"text": "nT(gT) = gn^{-1} T."
},
{
"math_id": 11,
"text": "G/T \\times_W T"
},
{
"math_id": 12,
"text": "p: G/T \\times T \\to G/T \\times_W T"
},
{
"math_id": 13,
"text": "q"
},
{
"math_id": 14,
"text": "p"
},
{
"math_id": 15,
"text": "G/T \\times_W T \\to G"
},
{
"math_id": 16,
"text": "\\# W"
},
{
"math_id": 17,
"text": "\\# W \\int_G f \\, dg = \\int_{G/T \\times T} q^*(f \\, dg)."
},
{
"math_id": 18,
"text": "q^*(f \\, dg)|_{(gT, t)} = f(t) q^*(dg)|_{(gT, t)}"
},
{
"math_id": 19,
"text": "q^*(dg)|_{(gT, t)}"
},
{
"math_id": 20,
"text": "G/T \\times T"
},
{
"math_id": 21,
"text": "\\mathfrak{g}/\\mathfrak{t} \\oplus \\mathfrak{t}"
},
{
"math_id": 22,
"text": "\\mathfrak{g}, \\mathfrak{t}"
},
{
"math_id": 23,
"text": "G, T"
},
{
"math_id": 24,
"text": "v \\in T"
},
{
"math_id": 25,
"text": "q(gv, t) = gvtv^{-1}g^{-1}"
},
{
"math_id": 26,
"text": "\\mathfrak{g}/\\mathfrak{t}"
},
{
"math_id": 27,
"text": "d(gT \\mapsto q(gT, t))(\\dot v) = gtg^{-1}(gt^{-1} \\dot v t g^{-1} - g \\dot v g^{-1}) = (\\operatorname{Ad}(g) \\circ (\\operatorname{Ad}(t^{-1}) - I))(\\dot v)."
},
{
"math_id": 28,
"text": "\\mathfrak{t}"
},
{
"math_id": 29,
"text": "d(t \\mapsto q(gT, t)) = \\operatorname{Ad}(g)"
},
{
"math_id": 30,
"text": "\\det(\\operatorname{Ad}(g)) = 1"
},
{
"math_id": 31,
"text": "q^*(dg) = \\det(\\operatorname{Ad}_{\\mathfrak{g}/\\mathfrak{t}}(t^{-1}) - I_{\\mathfrak{g}/\\mathfrak{t}})\\, dg."
},
{
"math_id": 32,
"text": "\\mathfrak{g}_{\\mathbb{C}} = \\mathfrak{t}_{\\mathbb{C}} \\oplus \\oplus_\\alpha \\mathfrak{g}_\\alpha"
},
{
"math_id": 33,
"text": "\\mathfrak{g}_{\\alpha} = \\{ x \\in \\mathfrak{g}_{\\mathbb{C}} \\mid \\operatorname{Ad}(t) x = e^{\\alpha(t)} x, t \\in T \\}"
},
{
"math_id": 34,
"text": "\\mathfrak{g}_\\alpha"
},
{
"math_id": 35,
"text": "\\operatorname{Ad}_{\\mathfrak{g}/\\mathfrak{t}}(t^{-1})"
},
{
"math_id": 36,
"text": "\\det(\\operatorname{Ad}_{\\mathfrak{g}/\\mathfrak{t}}(t^{-1}) - I_{\\mathfrak{g}/\\mathfrak{t}}) = \\prod_{\\alpha > 0} (e^{-\\alpha(t)} - 1)(e^{\\alpha(t)} - 1) = \\delta(t) \\overline{\\delta(t)},"
},
{
"math_id": 37,
"text": "\\alpha"
},
{
"math_id": 38,
"text": "W"
},
{
"math_id": 39,
"text": "\\operatorname{GL}(\\mathfrak{t}_{\\mathbb{C}}^*)"
},
{
"math_id": 40,
"text": "\\mathfrak{t}_{\\mathbb{C}}"
},
{
"math_id": 41,
"text": "A_{\\mu} = \\sum_{w \\in W} (-1)^{l(w)} e^{w(\\mu)}"
},
{
"math_id": 42,
"text": "l(w)"
},
{
"math_id": 43,
"text": "\\Lambda"
},
{
"math_id": 44,
"text": "\\chi"
},
{
"math_id": 45,
"text": "G"
},
{
"math_id": 46,
"text": "\\mu \\in \\Lambda"
},
{
"math_id": 47,
"text": "\\chi|T \\cdot \\delta = A_{\\mu}"
},
{
"math_id": 48,
"text": "\\|\\chi \\|^2 = \\int_G |\\chi|^2 dg = 1."
},
{
"math_id": 49,
"text": "\\chi|T \\cdot \\delta \\in \\mathbb{Z}[\\Lambda]."
}
] |
https://en.wikipedia.org/wiki?curid=63509442
|
63513679
|
Neural network Gaussian process
|
The distribution over functions corresponding to an infinitely wide Bayesian neural network.
A Neural Network Gaussian Process (NNGP) is a Gaussian process (GP) obtained as the limit of a certain type of sequence of neural networks. Specifically, a wide variety of network architectures converges to a GP in the infinitely wide limit, in the sense of distribution.
The concept constitutes an intensional definition, i.e., an NNGP is just a GP, distinguished by how it is obtained.
Motivation.
Bayesian networks are a modeling tool for assigning probabilities to events, and thereby characterizing the uncertainty in a model's predictions. Deep learning and artificial neural networks are approaches used in machine learning to build computational models which learn from training examples. Bayesian "neural" networks merge these fields. They are a type of neural network whose parameters and predictions are both probabilistic. While standard neural networks often assign high confidence even to incorrect predictions, Bayesian neural networks can more accurately evaluate how likely their predictions are to be correct.
Computation in artificial neural networks is usually organized into sequential layers of artificial neurons. The number of neurons in a layer is called the layer width. When we consider a sequence of Bayesian neural networks with increasingly wide layers (see figure), they converge in distribution to an NNGP. This large-width limit is of practical interest, since networks often improve as layers get wider, and the limit may give a closed-form way to evaluate networks.
The NNGP also appears in several other contexts: it describes the distribution over predictions made by wide non-Bayesian artificial neural networks after random initialization of their parameters, but before training; it appears as a term in neural tangent kernel prediction equations; and it is used in deep information propagation to characterize whether hyperparameters and architectures will be trainable.
It is related to other large width limits of neural networks.
Scope.
The first correspondence result was established in the 1995 PhD thesis of Radford M. Neal, then supervised by Geoffrey Hinton at the University of Toronto. Neal cites David J. C. MacKay as inspiration, who worked in Bayesian learning.
Today the correspondence is proven for: Single hidden layer Bayesian neural networks; deep fully connected networks as the number of units per layer is taken to infinity; convolutional neural networks as the number of channels is taken to infinity; transformer networks as the number of attention heads is taken to infinity; recurrent networks as the number of units is taken to infinity.
In fact, this NNGP correspondence holds for almost any architecture: generally, if an architecture can be expressed solely via matrix multiplication and coordinatewise nonlinearities (i.e., a tensor program), then its infinite-width limit is a GP.
This in particular includes all feedforward or recurrent neural networks composed of multilayer perceptron, recurrent neural networks (e.g., LSTMs, GRUs), (nD or graph) convolution, pooling, skip connection, attention, batch normalization, and/or layer normalization.
Illustration.
Every setting of a neural network's parameters formula_0 corresponds to a specific function computed by the neural network. A prior distribution formula_1 over neural network parameters therefore corresponds to a prior distribution over functions computed by the network. As neural networks are made infinitely wide, this distribution over functions converges to a Gaussian process for many architectures.
The notation used in this section is the same as the notation used below to derive the correspondence between NNGPs and fully connected networks, and more details can be found there.
The figure to the right plots the one-dimensional outputs formula_2 of a neural network for two inputs formula_3 and formula_4 against each other. The black dots show the function computed by the neural network on these inputs for random draws of the parameters from formula_1. The red lines are iso-probability contours for the joint distribution over network outputs formula_5 and formula_6 induced by formula_1. This is the distribution in function space corresponding to the distribution formula_1 in parameter space, and the black dots are samples from this distribution. For infinitely wide neural networks, since the distribution over functions computed by the neural network is a Gaussian process, the joint distribution over network outputs is a multivariate Gaussian for any finite set of network inputs.
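The following sketch (not from the original text) mirrors this picture numerically with NumPy: it draws random parameters for a finite but wide one-hidden-layer tanh network, using the kind of prior described below, and records the pair of outputs for two fixed inputs. The width, nonlinearity, variances and inputs are arbitrary illustrative choices; the empirical covariance of the samples approximates the 2×2 NNGP covariance.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_outputs(x, x_star, width=2048, n_samples=2000,
                   sigma_w=1.5, sigma_b=0.1):
    """Draw network outputs (z(x), z(x*)) for random one-hidden-layer tanh networks."""
    d = len(x)
    X = np.stack([x, x_star])                       # shape (2, d)
    outs = np.empty((n_samples, 2))
    for s in range(n_samples):
        # Weight variances scale inversely with fan-in, biases are O(1).
        W0 = rng.normal(0.0, sigma_w / np.sqrt(d), size=(d, width))
        b0 = rng.normal(0.0, sigma_b, size=width)
        W1 = rng.normal(0.0, sigma_w / np.sqrt(width), size=(width, 1))
        b1 = rng.normal(0.0, sigma_b)
        h = np.tanh(X @ W0 + b0)                    # hidden-layer activations
        outs[s] = (h @ W1 + b1).ravel()             # scalar outputs for x and x*
    return outs

samples = sample_outputs(np.array([0.3, -0.7]), np.array([0.5, 0.2]))
print(np.cov(samples.T))   # approximately the joint Gaussian covariance at large width
```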
Discussion.
Infinitely wide fully connected network.
This section expands on the correspondence between infinitely wide neural networks and Gaussian processes for the specific case of a fully connected architecture. It provides a proof sketch outlining why the correspondence holds, and introduces the specific functional form of the NNGP for fully connected networks. The proof sketch closely follows the approach by Novak and coauthors.
Network architecture specification.
Consider a fully connected artificial neural network with inputs formula_3, parameters formula_0 consisting of weights formula_7 and biases formula_8 for each layer formula_9 in the network, pre-activations (pre-nonlinearity) formula_10, activations (post-nonlinearity) formula_11, pointwise nonlinearity formula_12, and layer widths formula_13. For simplicity, the width formula_14 of the readout vector formula_15 is taken to be 1. The parameters of this network have a prior distribution formula_1, which consists of an isotropic Gaussian for each weight and bias, with the variance of the weights scaled inversely with layer width. This network is illustrated in the figure to the right, and described by the following set of equations:
formula_16
formula_17 is a Gaussian process.
We first observe that the pre-activations formula_10 are described by a Gaussian process conditioned on the preceding activations formula_11. This result holds even at finite width.
Each pre-activation formula_18 is a weighted sum of Gaussian random variables, corresponding to the weights formula_19 and biases formula_20, where the coefficients for each of those Gaussian variables are the preceding activations formula_21.
Because they are a weighted sum of zero-mean Gaussians, the formula_18 are themselves zero-mean Gaussians (conditioned on the coefficients formula_21).
Since the formula_10 are jointly Gaussian for any set of formula_11, they are described by a Gaussian process conditioned on the preceding activations formula_11.
The covariance or kernel of this Gaussian process depends on the weight and bias variances formula_22 and formula_23, as well as the second moment matrix formula_24 of the preceding activations formula_11,
formula_25
The effect of the weight scale formula_26 is to rescale the contribution to the covariance matrix from formula_24, while the bias is shared for all inputs, and so formula_23 makes the formula_18 for different datapoints more similar and makes the covariance matrix more like a constant matrix.
formula_27 is a Gaussian process.
The pre-activations formula_10 only depend on formula_11 through its second moment matrix formula_24. Because of this, we can say that formula_10 is a Gaussian process conditioned on formula_24, rather than conditioned on formula_11,
formula_28
As layer width formula_29, formula_30 becomes deterministic.
As previously defined, formula_24 is the second moment matrix of formula_11. Since formula_11 is the activation vector after applying the nonlinearity formula_31, it can be replaced by formula_32, resulting in a modified equation expressing formula_24 for formula_33 in terms of formula_34,
formula_35
We have already determined that formula_36 is a Gaussian process. This means that the sum defining formula_24 is an average over formula_13 samples from a Gaussian process which is a function of formula_37,
formula_38
As the layer width formula_13 goes to infinity, this average over formula_13 samples from the Gaussian process can be replaced with an integral over the Gaussian process:
formula_39
So, in the infinite width limit the second moment matrix formula_24 for each pair of inputs formula_3 and formula_40 can be expressed as an integral over a 2d Gaussian, of the product of formula_41 and formula_42.
There are a number of situations where this has been solved analytically, such as when formula_12 is a ReLU,
ELU, GELU, or error function nonlinearity.
Even when it can't be solved analytically, since it is a 2d integral it can generally be efficiently computed numerically.
This integral is deterministic, so formula_43 is deterministic.
For shorthand, we define a functional formula_44, which corresponds to computing this 2d integral for all pairs of inputs, and which maps formula_37 into formula_24,
formula_45
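As a concrete illustration (not part of the original text), for the ReLU nonlinearity the 2d Gaussian integral defining F has a well-known closed form (the arc-cosine kernel), so the recursion can be iterated directly. The NumPy sketch below does this for a small set of inputs; the depth, variances and inputs are arbitrary choices.

```python
import numpy as np

def relu_nngp_kernel(X, depth=3, sigma_w2=2.0, sigma_b2=0.1):
    """Iterate the infinite-width recursion K^l = F(K^{l-1}) for a ReLU network."""
    n0 = X.shape[1]
    K = X @ X.T / n0                          # K^0(x, x') = (1/n0) sum_i x_i x'_i
    for _ in range(depth):
        Sigma = sigma_w2 * K + sigma_b2       # covariance of the pre-activations
        d = np.sqrt(np.diag(Sigma))
        cos_theta = np.clip(Sigma / np.outer(d, d), -1.0, 1.0)
        theta = np.arccos(cos_theta)
        # Closed form of E[ReLU(z) ReLU(z')] for centred Gaussians with covariance Sigma.
        K = np.outer(d, d) * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)
    return sigma_w2 * K + sigma_b2            # kernel of the output pre-activations

X = np.random.default_rng(0).normal(size=(4, 10))   # 4 inputs of dimension 10
print(relu_nngp_kernel(X))
```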
formula_46 is an NNGP.
By recursively applying the observation that formula_30 is deterministic as formula_29, formula_47 can be written as a deterministic function of formula_48,
formula_49
where formula_50 indicates applying the functional formula_44 sequentially formula_51 times.
By combining this expression with the further observations that the input layer second moment matrix formula_52 is a deterministic function of the input formula_3, and that formula_53 is a Gaussian process, the output of the neural network can be expressed as a Gaussian process in terms of its input,
formula_54
Software libraries.
Neural Tangents is a free and open-source Python library used for computing and doing inference with the NNGP and neural tangent kernel corresponding to various common ANN architectures.
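A minimal usage sketch based on the library's documented stax API follows; the exact module and function names are assumptions that should be checked against the current documentation, since they may differ between versions.

```python
import jax.numpy as jnp
from neural_tangents import stax

# Infinite-width limit of a fully connected ReLU network with two hidden layers.
init_fn, apply_fn, kernel_fn = stax.serial(
    stax.Dense(512), stax.Relu(),
    stax.Dense(512), stax.Relu(),
    stax.Dense(1),
)

x1 = jnp.linspace(-1.0, 1.0, 30).reshape(3, 10)   # 3 inputs of dimension 10
x2 = jnp.linspace(-2.0, 2.0, 40).reshape(4, 10)   # 4 more inputs

# 'nngp' requests the NNGP kernel; 'ntk' would give the neural tangent kernel instead.
K = kernel_fn(x1, x2, 'nngp')
print(K.shape)   # (3, 4)
```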
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\theta"
},
{
"math_id": 1,
"text": "p(\\theta)"
},
{
"math_id": 2,
"text": "z^L(\\cdot;\\theta)"
},
{
"math_id": 3,
"text": "x"
},
{
"math_id": 4,
"text": "x^*"
},
{
"math_id": 5,
"text": "z^L(x;\\theta)"
},
{
"math_id": 6,
"text": "z^L(x^*;\\theta)"
},
{
"math_id": 7,
"text": "W^l"
},
{
"math_id": 8,
"text": "b^l"
},
{
"math_id": 9,
"text": "l"
},
{
"math_id": 10,
"text": "z^l"
},
{
"math_id": 11,
"text": "y^l"
},
{
"math_id": 12,
"text": "\\phi(\\cdot)"
},
{
"math_id": 13,
"text": "n^l"
},
{
"math_id": 14,
"text": "n^{L+1}"
},
{
"math_id": 15,
"text": "z^L"
},
{
"math_id": 16,
"text": "\n\\begin{align}\nx &\\equiv \\text{input} \\\\\ny^l(x) &= \\left\\{\\begin{array}{lcl} \nx & & l = 0 \\\\\n\\phi\\left(z^{l-1}(x)\\right) & & l > 0 \n\\end{array}\\right. \\\\\nz^l_i(x) &= \\sum_j W^l_{ij} y^l_j(x) + b^l_i \\\\\nW^l_{ij} &\\sim \\mathcal N\\left( 0, \\frac{\\sigma^2_w}{n^l} \\right) \\\\\nb^l_i &\\sim \\mathcal N\\left( 0,\\sigma^2_b \\right) \\\\\n\\phi(\\cdot) &\\equiv \\text{nonlinearity} \\\\\ny^l(x), z^{l-1}(x) &\\in \\mathbb R^{n^l \\times 1} \\\\\nn^{L+1} &= 1 \\\\\n\\theta &= \\left\\{ W^0, b^0, \\dots, W^L, b^L \\right\\} \n\\end{align}\n"
},
{
"math_id": 17,
"text": "z^l | y^l"
},
{
"math_id": 18,
"text": "z^l_i"
},
{
"math_id": 19,
"text": "W^l_{ij}"
},
{
"math_id": 20,
"text": "b^l_i"
},
{
"math_id": 21,
"text": "y^l_j"
},
{
"math_id": 22,
"text": "\\sigma_w^2"
},
{
"math_id": 23,
"text": "\\sigma_b^2"
},
{
"math_id": 24,
"text": "K^l"
},
{
"math_id": 25,
"text": "\n\\begin{align}\nz^l_i \\mid y^l &\\sim \\mathcal{GP}\\left( 0, \\sigma^2_w K^l + \\sigma^2_b \\right) \\\\\nK^l(x, x') &= \\frac{1}{n^l} \\sum_i y_i^l(x) y_i^l(x')\n\\end{align}\n"
},
{
"math_id": 26,
"text": "\\sigma^2_w"
},
{
"math_id": 27,
"text": "z^l | K^l"
},
{
"math_id": 28,
"text": "\n\\begin{align}\nz^l_i \\mid K^l &\\sim \\mathcal{GP}\\left( 0, \\sigma^2_w K^l + \\sigma^2_b \\right).\n\\end{align}\n"
},
{
"math_id": 29,
"text": "n^l \\rightarrow \\infty"
},
{
"math_id": 30,
"text": "K^l \\mid K^{l-1}"
},
{
"math_id": 31,
"text": "\\phi"
},
{
"math_id": 32,
"text": "\\phi\\left(z^{l-1}\\right)"
},
{
"math_id": 33,
"text": "l>0"
},
{
"math_id": 34,
"text": "z^{l-1}"
},
{
"math_id": 35,
"text": "\n\\begin{align}\nK^l(x, x') &= \n\\frac{1}{n^l} \\sum_i \\phi\\left( z^{l-1}_i(x) \\right) \\phi\\left( z^{l-1}_i(x') \\right)\n.\n\\end{align}\n"
},
{
"math_id": 36,
"text": "z^{l-1} | K^{l-1}"
},
{
"math_id": 37,
"text": "K^{l-1}"
},
{
"math_id": 38,
"text": "\n\\begin{align}\n\\left\\{ z^{l-1}_i(x), z^{l-1}_i(x') \\right\\} &\\sim \\mathcal{GP}\\left( 0, \\sigma^2_w K^{l-1} + \\sigma^2_b \\right)\n.\n\\end{align}\n"
},
{
"math_id": 39,
"text": "\n\\begin{align}\n\\lim_{n^l \\rightarrow \\infty} K^l(x, x') &= \\int dz\\, dz'\\, \\phi( z )\\, \\phi( z' )\\, \\mathcal{N}\\left( \\left[\\begin{array}{c} \nz \\\\\nz' \n\\end{array}\\right]; 0, \\sigma^2_w \\left[ \n\\begin{array}{cc} \nK^{l-1}(x, x) & K^{l-1}(x, x') \\\\\nK^{l-1}(x', x) & K^{l-1}(x', x') \n\\end{array}\n\\right] + \\sigma^2_b \\right) \n\\end{align}\n"
},
{
"math_id": 40,
"text": "x'"
},
{
"math_id": 41,
"text": "\\phi(z)"
},
{
"math_id": 42,
"text": "\\phi(z')"
},
{
"math_id": 43,
"text": "K^l | K^{l-1}"
},
{
"math_id": 44,
"text": "F"
},
{
"math_id": 45,
"text": "\n\\begin{align}\n\\lim_{n^l \\rightarrow \\infty} K^l\n&= F\\left(\nK^{l-1}\n\\right)\n.\n\\end{align}\n"
},
{
"math_id": 46,
"text": "z^L \\mid x"
},
{
"math_id": 47,
"text": "K^L"
},
{
"math_id": 48,
"text": "K^0"
},
{
"math_id": 49,
"text": "\n\\begin{align}\n\\lim_{\\min\\left( n^1, \\dots, n^L\\right) \\rightarrow \\infty} K^L\n&= F \\circ F\n\\cdots\n\\left(\nK^{0}\n\\right) = F^L\\left(K^0\\right)\n,\n\\end{align}\n"
},
{
"math_id": 50,
"text": "F^L"
},
{
"math_id": 51,
"text": "L"
},
{
"math_id": 52,
"text": "K^0(x,x')=\\tfrac{1}{n^0} \\sum_i x_i x'_i"
},
{
"math_id": 53,
"text": "z^L | K^L"
},
{
"math_id": 54,
"text": "\n\\begin{align}\nz^L_i(x) &\\sim \\mathcal{GP}\\left( 0, \\sigma^2_w F^L\\left(K^0\\right) + \\sigma^2_b \\right)\n.\n\\end{align}\n"
}
] |
https://en.wikipedia.org/wiki?curid=63513679
|
63526503
|
PDE-constrained optimization
|
PDE-constrained optimization is a subset of mathematical optimization where at least one of the constraints may be expressed as a partial differential equation. Typical domains where these problems arise include aerodynamics, computational fluid dynamics, image segmentation, and inverse problems. A standard formulation of PDE-constrained optimization encountered in a number of disciplines is given by: formula_0 where formula_1 is the control variable and formula_2 denotes the squared L²(Ω) norm (the square of a norm, not itself a norm). Closed-form solutions are generally unavailable for PDE-constrained optimization problems, necessitating the development of numerical methods.
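As an illustrative sketch (not from the original text), take the one-dimensional analogue with D = −d²/dx² on Ω = (0, 1) and homogeneous Dirichlet boundary conditions. After a finite-difference discretization the problem is a quadratic program; eliminating u = Ay from the first-order optimality conditions reduces it to the linear system (I + βAᵀA)y = ŷ, which the NumPy sketch below solves for an arbitrary target.

```python
import numpy as np

def solve_poisson_control(y_hat, beta=1e-4):
    """Minimize 1/2 ||y - y_hat||^2 + beta/2 ||u||^2 subject to -y'' = u on (0, 1)
    with y(0) = y(1) = 0, using a central finite-difference discretization."""
    n = len(y_hat)
    h = 1.0 / (n + 1)
    # A approximates -d^2/dx^2 on the interior grid points.
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h ** 2
    # Reduced optimality system: (I + beta A^T A) y = y_hat, then u = A y.
    y = np.linalg.solve(np.eye(n) + beta * A.T @ A, y_hat)
    u = A @ y
    return y, u

x = np.linspace(0.0, 1.0, 102)[1:-1]     # interior grid points
y_hat = np.sin(np.pi * x)                # desired state
y, u = solve_poisson_control(y_hat)
print(np.max(np.abs(y - y_hat)))         # small for small beta
```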
Applications.
Optimal control of bacterial chemotaxis system.
The following example comes from pp. 20–21 of Pearson. Chemotaxis is the movement of an organism in response to an external chemical stimulus. One problem of particular interest is in managing the spatial dynamics of bacteria that are subject to chemotaxis to achieve some desired result. For a cell density formula_3 and concentration density formula_4 of a chemoattractant, it is possible to formulate a boundary control problem: formula_5 where formula_6 is the ideal cell density, formula_7 is the ideal concentration density, and formula_1 is the control variable. This objective function is subject to the dynamics: formula_8 where formula_9 is the Laplace operator.
|
[
{
"math_id": 0,
"text": "\\min_{y,u} \\; \\frac 1 2 \\|y-\\widehat{y}\\|_{L_2(\\Omega)}^2 + \\frac\\beta2 \\|u\\|_{L_2(\\Omega)}^2, \\quad \\text{s.t.} \\; \\mathcal{D}y = u"
},
{
"math_id": 1,
"text": "u"
},
{
"math_id": 2,
"text": "\\|\\cdot\\|_{L_{2}(\\Omega)}^{2}"
},
{
"math_id": 3,
"text": "z(t,{\\bf x})"
},
{
"math_id": 4,
"text": "c(t,{\\bf x})"
},
{
"math_id": 5,
"text": "\\min_{z,c,u} \\; {1\\over{2}}\\int_{\\Omega}\\left[z(T,{\\bf x})-\\widehat{z} \\right]^{2} + {\\gamma_{c}\\over{2}} \\int_{\\Omega}\\left[c(T,{\\bf x})-\\widehat{c} \\right]^{2} + {\\gamma_{u}\\over{2}}\\int_{0}^{T}\\int_{\\partial\\Omega}u^{2}"
},
{
"math_id": 6,
"text": "\\widehat{z}"
},
{
"math_id": 7,
"text": "\\widehat{c}"
},
{
"math_id": 8,
"text": "\\begin{aligned}\n{\\partial z\\over{\\partial t}} - D_{z}\\Delta z - \\alpha \\nabla \\cdot \\left[ {\\nabla c\\over{(1+c)^{2}}}z \\right] &= 0 \\quad \\text{in} \\quad \\Omega \\\\\n{\\partial c\\over{\\partial t}} - \\Delta c + \\rho c - w{z^{2}\\over{1+z^{2}}} &= 0 \\quad \\text{in} \\quad \\Omega \\\\\n{\\partial z\\over{\\partial n}} &= 0 \\quad \\text{on} \\quad \\partial\\Omega \\\\\n{\\partial c\\over{\\partial n}} + \\zeta (c-u) &= 0 \\quad \\text{on} \\quad \\partial\\Omega\n\\end{aligned}"
},
{
"math_id": 9,
"text": "\\Delta"
}
] |
https://en.wikipedia.org/wiki?curid=63526503
|
6352976
|
Adam Adamandy Kochański
|
Adam Adamandy Kochański (5 August 1631 – 17 May 1700) was a Polish mathematician, physicist, clock-maker, pedagogue and librarian. He was the Court Mathematician of John III Sobieski.
Kochański was born in Dobrzyń nad Wisłą. He began his education in Toruń, and in 1652 he entered the Society of Jesus in Vilnius. He studied philosophy at Vilnius University (then called "Vilnius Academy"). He also studied mathematics, physics and theology. He went on to lecture on those subjects at several European universities: in Florence, Prague, Olomouc, Wrocław, Mainz and Würzburg. In 1680 he accepted an offer from John III Sobieski, the king of Poland, returning to Poland and taking the position of the king's chaplain, mathematician, clock maker, librarian, and tutor of the king's son, Jakub.
He wrote many scientific papers, mainly on mathematics and mechanics, but also on physics, astronomy and philosophy. The best known of his works, "Observationes Cyclometricae ad facilitandam Praxin accommodatae", is devoted to squaring the circle (also known as "the quadrature of the circle") and was published in 1685 in the leading scientific periodical of the time, "Acta Eruditorum". He also found a famous approximation of π, today called Kochański's approximation:
formula_0
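A quick numerical check of this approximation (an illustration, not part of the historical record):

```python
import math

kochanski = math.sqrt(40 / 3 - 2 * math.sqrt(3))
print(kochanski)               # 3.14153333870...
print(math.pi - kochanski)     # error of roughly 5.9e-5, i.e. four correct decimal places
```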
Kochański cooperated and corresponded with many scientists, Johannes Hevelius and Gottfried Leibniz among them. He was apparently the only one of the contemporary Poles to know elements of the newly invented calculus. As a mechanic he was a renowned clock maker. He suggested replacing the clock's pendulum with a spring, and standardizing the number of escapements per hour.
He died in Teplice in Bohemia.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\sqrt{{40 \\over 3} - 2 \\sqrt{3}\\ } = 3.14153333870509461863 \\dots "
}
] |
https://en.wikipedia.org/wiki?curid=6352976
|
63530168
|
Obelus
|
Historical manuscript symbol
An obelus (plural: obeluses or obeli) is a term in codicology and latterly in typography that refers to a historical annotation mark which has resolved to three modern meanings: the division sign ÷, the dagger †, and the commercial minus sign.
The word "obelus" comes from (obelós), the Ancient Greek word for a sharpened stick, spit, or pointed pillar. This is the same root as that of the word 'obelisk'.
In mathematics, the first symbol is mainly used in Anglophone countries to represent the mathematical operation of division and is called an obelus. In editing texts, the second symbol, also called a dagger mark † is used to indicate erroneous or dubious content; or as a reference mark or footnote indicator. It also has other uses in a variety of specialist contexts.
Use in text annotation.
The modern dagger symbol originated from a variant of the obelus, originally depicted by a plain line −, or a line with one or two dots. It represented an iron roasting spit, a dart, or the sharp end of a javelin, symbolizing the skewering or cutting out of dubious matter.
Originally, one of these marks (or a plain line) was used in ancient manuscripts to mark passages that were suspected of being corrupted or spurious; the practice of adding such marginal notes became known as obelism. The dagger symbol †, also called an "obelisk", is derived from the obelus, and continues to be used for this purpose.
The obelus is believed to have been invented by the Homeric scholar Zenodotus, as one of a system of editorial symbols. They marked questionable or corrupt words or passages in manuscripts of the Homeric epics. The system was further refined by his student Aristophanes of Byzantium, who first introduced the asterisk and used a symbol resembling a ⊤ for an obelus; and finally by Aristophanes' student, in turn, Aristarchus, from whom they earned the name of "Aristarchian symbols".
In some commercial and financial documents, especially in Germany and Scandinavia, a variant is used in the margins of letters to indicate an enclosure, where the upper point is sometimes replaced with the corresponding number. In Finland, the obelus (or a slight variant, formula_0) is used as a symbol for a correct response (alongside the check mark, ✓, which is used for an "incorrect" response).
In the 7.0 release of Unicode, one such mark was added to the specification as part of a group of "Ancient Greek textual symbols" (in the block Supplemental Punctuation).
In mathematics.
The form of the obelus as a horizontal line with a dot above and a dot below, ÷, was first used as a symbol for division by the Swiss mathematician Johann Rahn in his book "Teutsche Algebra" in 1659. This gave rise to the modern mathematical symbol ÷, used in anglophone countries as a division sign. This usage, though widespread in Anglophone countries, is neither universal nor recommended: the ISO 80000-2 standard for mathematical notation recommends only the solidus / or fraction bar for division, or the colon : for ratios; it says that ÷ "should not be used" for division.
This form of the obelus was also occasionally used as a mathematical symbol for subtraction in Northern Europe; such usage continued in some parts of Europe (including Norway and, until fairly recently, Denmark). In Italy, Poland and Russia, this notation is sometimes used in engineering to denote a range of values.
In some commercial and financial documents, especially in Germany and Scandinavia, another form of the obelus – the commercial minus sign – is used to signify a negative remainder of a division operation.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\cdot \\! / \\! \\cdot"
}
] |
https://en.wikipedia.org/wiki?curid=63530168
|
635304
|
Page's trend test
|
In statistics, the Page test for multiple comparisons between ordered correlated variables is the counterpart of Spearman's rank correlation coefficient, which summarizes the association of continuous variables. It is also known as Page's trend test or Page's "L" test. It is a repeated-measures trend test.
The Page test is useful where there are three or more conditions, a number of subjects (or other randomly sampled entities) are all observed under each of them, and it is predicted in advance that the observations will follow a particular order.
For example, a number of subjects might each be given three trials at the same task, and we predict that performance will improve from trial to trial. A test of the significance of the trend between conditions in this situation was developed by Ellis Batten Page (1963). More formally, the test considers the null hypothesis that, for "n" conditions, where "m""i" is a measure of the central tendency of the "i"th condition,
formula_0
against the alternative hypothesis that
formula_1
It has more statistical power than the Friedman test against the alternative that there is a difference in trend. Friedman's test considers the alternative hypothesis that the central tendencies of the observations under the "n" conditions are different without specifying their order.
Procedure for the Page test, with "k" subjects each exposed to "n" conditions: within each subject, rank the "n" observations from 1 to "n"; sum the ranks of each condition across subjects to obtain a rank total "X""i"; number the conditions in the predicted order and compute "L" = Σ"i" "i" "X""i"; finally, compare "L" with the critical value for the given "k" and "n", with large values of "L" supporting the predicted trend.
Alternatively, the quantity
formula_2
may be compared with values of the chi-squared distribution with one degree of freedom. This gives a two-tailed test. The approximation is reliable for more than 20 subjects with any number of conditions, for more than 12 subjects when there are 4 or more conditions, and for any number of subjects when there are 9 or more conditions.
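A small Python sketch of the computation described above, assuming the scipy library for ranking and the chi-squared tail probability (the data are invented for illustration):

```python
import numpy as np
from scipy.stats import chi2, rankdata

def page_test(data):
    """Page's L for a (k subjects) x (n conditions) array whose columns are listed
    in the predicted (increasing) order; also returns the chi-squared approximation."""
    k, n = data.shape
    ranks = np.apply_along_axis(rankdata, 1, data)    # rank within each subject
    X = ranks.sum(axis=0)                             # rank total per condition
    L = float(np.sum(np.arange(1, n + 1) * X))        # weight by predicted position
    chi2_stat = (12 * L - 3 * k * n * (n + 1) ** 2) ** 2 / (k * n ** 2 * (n ** 2 - 1) * (n + 1))
    return L, chi2_stat, chi2.sf(chi2_stat, df=1)     # two-tailed p-value

data = np.array([[10.0, 12.0, 14.0],
                 [11.0, 11.0, 15.0],
                 [ 9.0, 13.0, 16.0],
                 [10.0, 14.0, 15.0]])
print(page_test(data))
```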
ρ = 12"L"/"k"("n"3 − "n") − 3("n" + 1)/("n" − 1)
if "k" = 1, this reduces to the familiar Spearman coefficient.
The Page test is most often used with fairly small numbers of conditions and subjects. The minimum values of "L" for significance at the 0.05 level, one-tailed, with three conditions, are 56 for 4 subjects (the lowest number that is capable of giving a significant result at this level), 54 for 5 subjects, 91 for 7 subjects, 128 for 10 subjects, 190 for 15 subjects and 251 for 20 subjects.
A corresponding extension of Kendall's tau was developed by Jonckheere (1954).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "m_1 = m_2 = m_3 = \\cdots = m_n\\, "
},
{
"math_id": 1,
"text": " m_1 < m_2 < m_3 < \\cdots < m_n.\\,"
},
{
"math_id": 2,
"text": " { (12L - 3kn(n+1)^2)^2 \\over kn^2(n^2 - 1)(n + 1) } "
}
] |
https://en.wikipedia.org/wiki?curid=635304
|
63539376
|
Two-dimensional critical Ising model
|
Conformal field theory of the 2D Ising model critical point
The two-dimensional critical Ising model is the critical limit of the Ising model in two dimensions. It is a two-dimensional conformal field theory whose symmetry algebra is the Virasoro algebra with the central charge formula_0.
Correlation functions of the spin and energy operators are described by the formula_1 minimal model. While the minimal model has been exactly solved (see Ising critical exponents), the solution does not cover other observables such as connectivities of clusters.
The minimal model.
Space of states and conformal dimensions.
The Kac table of the formula_1 minimal model is:
formula_2
This means that the space of states is generated by three primary states, which correspond to three primary fields or operators:
formula_3
The decomposition of the space of states into irreducible representations of the product of the left- and right-moving Virasoro algebras is
formula_4
where formula_5 is the irreducible highest-weight representation of the Virasoro algebra with the conformal dimension formula_6.
In particular, the Ising model is diagonal and unitary.
Characters and partition function.
The characters of the three representations of the Virasoro algebra that appear in the space of states are
formula_7
where formula_8 is the Dedekind eta function, and formula_9 are theta functions of the nome formula_10, for example formula_11.
The modular S-matrix, i.e. the matrix formula_12 such that formula_13, is
formula_14
where the fields are ordered as formula_15.
The modular invariant partition function is
formula_16
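These transformation properties can be verified numerically (an illustration, not part of the original text). The sketch below evaluates the characters from their theta-function expressions at a purely imaginary modular parameter τ = it and checks that χ(−1/τ) agrees with S·χ(τ); the truncation orders and the value of t are arbitrary choices.

```python
import numpy as np

def eta(q, terms=50):
    return q ** (1 / 24) * np.prod([1.0 - q ** n for n in range(1, terms)])

def theta2(q, terms=50):
    return sum(q ** ((n + 0.5) ** 2 / 2) for n in range(-terms, terms))

def theta3(q, terms=50):
    return sum(q ** (n ** 2 / 2) for n in range(-terms, terms))

def theta4(q, terms=50):
    return sum((-1) ** n * q ** (n ** 2 / 2) for n in range(-terms, terms))

def characters(q):
    """Characters ordered as (identity, energy, spin), i.e. h = 0, 1/2, 1/16."""
    e = eta(q)
    chi_id = (np.sqrt(theta3(q)) + np.sqrt(theta4(q))) / (2 * np.sqrt(e))
    chi_eps = (np.sqrt(theta3(q)) - np.sqrt(theta4(q))) / (2 * np.sqrt(e))
    chi_sigma = np.sqrt(theta2(q)) / np.sqrt(2 * e)
    return np.array([chi_id, chi_eps, chi_sigma])

S = 0.5 * np.array([[1, 1, np.sqrt(2)],
                    [1, 1, -np.sqrt(2)],
                    [np.sqrt(2), -np.sqrt(2), 0]])

t = 2.0                                              # tau = i t
q, q_dual = np.exp(-2 * np.pi * t), np.exp(-2 * np.pi / t)
print(characters(q_dual))      # chi_i(-1/tau)
print(S @ characters(q))       # should agree with the line above
```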
Fusion rules and operator product expansions.
The fusion rules of the model are
formula_17
The fusion rules are invariant under the formula_18 symmetry formula_19.
The three-point structure constants are
formula_20
Knowing the fusion rules and three-point structure constants, it is possible to write operator product expansions, for example
formula_21
where formula_22 are the conformal dimensions of the primary fields, and the omitted terms formula_23 are contributions of descendant fields.
Correlation functions on the sphere.
Any one-, two- and three-point function of primary fields is determined by conformal symmetry up to a multiplicative constant. This constant is set to be one for one- and two-point functions by a choice of field normalizations. The only non-trivial dynamical quantities are the three-point structure constants, which were given above in the context of operator product expansions.
formula_24
formula_25
with formula_26.
formula_27
formula_28
formula_29
formula_30
The three non-trivial four-point functions are of the type formula_31. For a four-point function formula_32, let formula_33 and formula_34 be the s- and t-channel Virasoro conformal blocks, which respectively correspond to the contributions of formula_35 (and its descendants) in the operator product expansion formula_36, and of formula_37 (and its descendants) in the operator product expansion formula_38. Let formula_39 be the cross-ratio.
In the case of formula_40, fusion rules allow only one primary field in all channels, namely the identity field.
formula_41
In the case of formula_42, fusion rules allow only the identity field in the s-channel, and the spin field in the t-channel.
formula_43
In the case of formula_44, fusion rules allow two primary fields in all channels: the identity field and the energy field. We write the conformal blocks only for the configuration formula_45; the general case is obtained by inserting the prefactor formula_46 and identifying formula_47 with the cross-ratio.
formula_48
In the case of formula_44, the conformal blocks are:
formula_49
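This decomposition can be checked numerically (an illustration, not part of the original text): for real x in (0, 1), the s-channel and t-channel sums of blocks agree with each other and with the closed-form four-spin correlator given above.

```python
import numpy as np

x = np.linspace(0.05, 0.95, 19)
pref = (x * (1 - x)) ** 0.125          # common factor x^(1/8) (1-x)^(1/8)

F1_s = np.sqrt((1 + np.sqrt(1 - x)) / 2) / pref
Fe_s = np.sqrt(2 - 2 * np.sqrt(1 - x)) / pref
F1_t = np.sqrt((1 + np.sqrt(x)) / 2) / pref
Fe_t = np.sqrt(2 - 2 * np.sqrt(x)) / pref

s_channel = F1_s ** 2 + 0.25 * Fe_s ** 2
t_channel = F1_t ** 2 + 0.25 * Fe_t ** 2
closed_form = 1 / (x * (1 - x)) ** 0.25

print(np.max(np.abs(s_channel - t_channel)))    # ~ 1e-15 (crossing symmetry)
print(np.max(np.abs(s_channel - closed_form)))  # ~ 1e-15
```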
From the representation of the model in terms of Dirac fermions, it is possible to compute correlation functions of any number of spin or energy operators:
formula_50
formula_51
These formulas have generalizations to correlation functions on the torus, which involve theta functions.
Other observables.
Disorder operator.
The two-dimensional Ising model is mapped to itself by a high-low temperature duality. The image of the spin operator formula_52 under this duality is a disorder operator formula_53, which has the same left and right conformal dimensions formula_54. Although the disorder operator does not belong to the minimal model, correlation functions involving the disorder operator can be computed exactly, for example
formula_55
whereas
formula_56
Connectivities of clusters.
The Ising model has a description as a random cluster model due to Fortuin and Kasteleyn. In this description, the natural observables are connectivities of clusters, i.e. probabilities that a number of points belong to the same cluster.
The Ising model can then be viewed as the case formula_57 of the formula_58-state Potts model, whose parameter formula_58 can vary continuously, and is related to the central charge of the Virasoro algebra.
In the critical limit, connectivities of clusters have the same behaviour under conformal transformations as correlation functions of the spin operator. Nevertheless, connectivities do not coincide with spin correlation functions: for example, the three-point connectivity does not vanish, while formula_59. There are four independent four-point connectivities, and their sum coincides with formula_60. Other combinations of four-point connectivities are not known analytically. In particular they are not related to correlation functions of the minimal model, although they are related to the formula_61 limit of spin correlators in the formula_58-state Potts model.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "c=\\tfrac12"
},
{
"math_id": 1,
"text": "(4, 3)"
},
{
"math_id": 2,
"text": "\n\\begin{array}{c|ccc} 2 & \\frac{1}{2} & \\frac{1}{16} & 0 \\\\ 1 & 0 & \\frac{1}{16} & \\frac{1}{2} \\\\ \\hline & 1 & 2 & 3 \\end{array}\n"
},
{
"math_id": 3,
"text": "\n\\begin{array}{cccc}\n\\hline\n\\text{Kac table indices} & \\text{Dimension} & \\text{Primary field} & \\text{Name}\n\\\\\n\\hline\n(1,1) \\text{ or } (3,2) & 0 & \\mathbf{1} & \\text{Identity} \n\\\\\n(2,1) \\text{ or } (2,2) & \\frac{1}{16} & \\sigma & \\text{Spin} \n\\\\\n(1,2) \\text{ or } (3,1) & \\frac12 & \\epsilon & \\text{Energy}\n\\\\\n\\hline\n\\end{array}\n"
},
{
"math_id": 4,
"text": "\n\\mathcal{S} = \\mathcal{R}_{0} \\otimes \\bar{\\mathcal{R}}_0 \n\\oplus \\mathcal{R}_{\\frac{1}{16}} \\otimes \\bar{\\mathcal{R}}_\\frac{1}{16}\n\\oplus \\mathcal{R}_\\frac12 \\otimes \\bar{\\mathcal{R}}_\\frac12 \n"
},
{
"math_id": 5,
"text": "\\mathcal{R}_\\Delta"
},
{
"math_id": 6,
"text": "\\Delta"
},
{
"math_id": 7,
"text": "\n\\begin{align}\n\\chi_0(q) &= \\frac{1}{\\eta(q)} \\sum_{k\\in\\mathbb{Z}}\\left( q^\\frac{(24k+1)^2}{48} -q^\\frac{(24k+7)^2}{48}\\right) \n= \\frac{1}{2\\sqrt{\\eta(q)}}\\left(\\sqrt{\\theta_3(0|q)} + \\sqrt{\\theta_4(0|q)}\\right)\n\\\\\n\\chi_{\\frac{1}{16}}(q) &= \\frac{1}{\\eta(q)} \\sum_{k\\in\\mathbb{Z}}\\left( q^\\frac{(24k+2)^2}{48} -q^\\frac{(24k+10)^2}{48}\\right) = \\frac{1}{2\\sqrt{\\eta(q)}}\\left(\\sqrt{\\theta_3(0|q)} - \\sqrt{\\theta_4(0|q)}\\right)\n\\\\\n\\chi_{\\frac12}(q) &= \\frac{1}{\\eta(q)} \\sum_{k\\in\\mathbb{Z}}\\left( q^\\frac{(24k+5)^2}{48} -q^\\frac{(24k+11)^2}{48}\\right) = \\frac{1}{\\sqrt{2\\eta(q)}}\\sqrt{\\theta_2(0|q)}\n\\end{align}\n"
},
{
"math_id": 8,
"text": "\\eta(q)"
},
{
"math_id": 9,
"text": "\\theta_i(0|q)"
},
{
"math_id": 10,
"text": "q=e^{2\\pi i\\tau}"
},
{
"math_id": 11,
"text": "\\theta_3(0|q)=\\sum_{n\\in\\mathbb{Z}} q^{\\frac{n^2}{2}}"
},
{
"math_id": 12,
"text": "\\mathcal{S}"
},
{
"math_id": 13,
"text": "\\chi_i(-\\tfrac{1}{\\tau}) = \\sum_j \\mathcal{S}_{ij}\\chi_j(\\tau)"
},
{
"math_id": 14,
"text": "\n\\mathcal{S} = \\frac12 \\left(\\begin{array}{ccc} 1 & 1 & \\sqrt{2}\\\\ 1 & 1 & -\\sqrt{2} \\\\ \\sqrt{2} & -\\sqrt{2} & 0 \\end{array}\\right)\n"
},
{
"math_id": 15,
"text": "1,\\epsilon, \\sigma"
},
{
"math_id": 16,
"text": "\nZ(q) = \\left|\\chi_0(q)\\right|^2 + \\left|\\chi_{\\frac{1}{16}}(q)\\right|^2 \n+ \\left|\\chi_\\frac12(q)\\right|^2 = \\frac{|\\theta_2(0|q)|+ |\\theta_3(0|q)|+|\\theta_4(0|q)|}{2|\\eta(q)|}\n"
},
{
"math_id": 17,
"text": "\n\\begin{align}\n\\mathbf{1}\\times \\mathbf{1} &= \\mathbf{1}\n\\\\\n\\mathbf{1}\\times \\sigma &= \\sigma\n\\\\\n\\mathbf{1}\\times \\epsilon &= \\epsilon\n\\\\\n\\sigma \\times \\sigma &= \\mathbf{1} + \\epsilon\n\\\\\n\\sigma \\times \\epsilon &= \\sigma\n\\\\\n\\epsilon \\times \\epsilon &= \\mathbf{1}\n\\end{align} \n"
},
{
"math_id": 18,
"text": "\\mathbb{Z}_2"
},
{
"math_id": 19,
"text": "\\sigma \\to -\\sigma"
},
{
"math_id": 20,
"text": "\nC_{\\mathbf{1}\\mathbf{1}\\mathbf{1}} = C_{\\mathbf{1}\\epsilon\\epsilon} = C_{\\mathbf{1}\\sigma\\sigma} = 1 \\quad , \\quad C_{\\sigma\\sigma\\epsilon} = \\frac12\n"
},
{
"math_id": 21,
"text": "\n\\begin{align}\n\\sigma(z)\\sigma(0) &= |z|^{2\\Delta_\\mathbf{1} - 4\\Delta_\\sigma} \nC_{\\mathbf{1}\\sigma\\sigma}\\Big(\\mathbf{1}(0) + O(z)\\Big) \n+ |z|^{2\\Delta_\\epsilon -4\\Delta_\\sigma} C_{\\sigma\\sigma\\epsilon} \\Big(\\epsilon(0) + O(z)\\Big)\n\\\\\n&= |z|^{-\\frac14} \\Big(\\mathbf{1}(0) + O(z)\\Big) +\\frac12 |z|^\\frac34 \\Big(\\epsilon(0) + O(z)\\Big)\n\\end{align}\n"
},
{
"math_id": 22,
"text": "\\Delta_\\mathbf{1},\\Delta_\\sigma,\\Delta_\\epsilon"
},
{
"math_id": 23,
"text": "O(z)"
},
{
"math_id": 24,
"text": "\n\\left\\langle \\mathbf{1}(z_1)\\right\\rangle = 1 \\ , \\ \n\\left\\langle\\sigma(z_1)\\right\\rangle = 0 \\ , \\ \n\\left\\langle\\epsilon(z_1)\\right\\rangle = 0 \n"
},
{
"math_id": 25,
"text": " \n\\left\\langle \\mathbf{1}(z_1)\\mathbf{1}(z_2)\\right\\rangle = 1 \n\\ , \\ \\left\\langle\\sigma(z_1)\\sigma(z_2)\\right\\rangle = |z_{12}|^{-\\frac14} \\ , \\ \\left\\langle\\epsilon(z_1)\\epsilon(z_2)\\right\\rangle = |z_{12}|^{-2}\n"
},
{
"math_id": 26,
"text": " z_{ij} = z_i-z_j"
},
{
"math_id": 27,
"text": "\n\\langle \\mathbf{1}\\sigma \\rangle = \\langle \\mathbf{1}\\epsilon\\rangle = \\langle \\sigma \\epsilon \\rangle = 0 \n"
},
{
"math_id": 28,
"text": " \n\\left\\langle \\mathbf{1}(z_1)\\mathbf{1}(z_2)\\mathbf{1}(z_3)\\right\\rangle = 1 \n\\ , \\ \\left\\langle\\sigma(z_1)\\sigma(z_2)\\mathbf{1}(z_3)\\right\\rangle = |z_{12}|^{-\\frac14} \n\\ , \\ \\left\\langle\\epsilon(z_1)\\epsilon(z_2)\\mathbf{1}(z_3)\\right\\rangle = |z_{12}|^{-2}\n"
},
{
"math_id": 29,
"text": "\n\\left\\langle \\sigma(z_1)\\sigma(z_2)\\epsilon(z_3)\\right\\rangle = \\frac12 |z_{12}|^{\\frac34} |z_{13}|^{-1} |z_{23}|^{-1}\n"
},
{
"math_id": 30,
"text": "\n\\langle \\mathbf{1}\\mathbf{1}\\sigma \\rangle \n=\n\\langle \\mathbf{1}\\mathbf{1}\\epsilon \\rangle\n= \n\\langle \\mathbf{1}\\sigma\\epsilon \\rangle\n=\n\\langle \\sigma\\epsilon\\epsilon \\rangle\n=\n\\langle \\sigma \\sigma \\sigma \\rangle\n=\n\\langle \\epsilon \\epsilon\\epsilon \\rangle\n= 0\n"
},
{
"math_id": 31,
"text": "\\langle \\sigma^4\\rangle, \\langle \\sigma^2\\epsilon^2\\rangle, \\langle \\epsilon^4\\rangle"
},
{
"math_id": 32,
"text": " \\left\\langle\\prod_{i=1}^4 V_i(z_i)\\right\\rangle"
},
{
"math_id": 33,
"text": "\\mathcal{F}^{(s)}_j"
},
{
"math_id": 34,
"text": "\\mathcal{F}^{(t)}_j"
},
{
"math_id": 35,
"text": "V_j(z_2)"
},
{
"math_id": 36,
"text": "V_1(z_1)V_2(z_2)"
},
{
"math_id": 37,
"text": "V_j(z_4)"
},
{
"math_id": 38,
"text": "V_1(z_1)V_4(z_4)"
},
{
"math_id": 39,
"text": " x=\\frac{z_{12}z_{34}}{z_{13}z_{24}}"
},
{
"math_id": 40,
"text": "\\langle \\epsilon^4\\rangle"
},
{
"math_id": 41,
"text": " \n\\begin{align}\n& \\langle \\epsilon^4\\rangle = \\left|\\mathcal{F}^{(s)}_\\textbf{1}\\right|^2 = \\left|\\mathcal{F}^{(t)}_\\textbf{1}\\right|^2\n\\\\\n& \\mathcal{F}^{(s)}_\\textbf{1} \n= \\mathcal{F}^{(t)}_\\textbf{1} \n= \\left[\\prod_{1\\leq i<j\\leq 4} z_{ij}^{-\\frac13}\\right] \\frac{1-x+x^2}{x^\\frac23(1-x)^\\frac23}\n\\ \\underset{(z_i)=(x, 0,\\infty, 1)}{=}\\ \\frac{1}{x(1-x)} -1\n\\end{align}\n"
},
{
"math_id": 42,
"text": "\\langle \\sigma^2\\epsilon^2\\rangle"
},
{
"math_id": 43,
"text": " \n\\begin{align}\n& \\langle \\sigma^2\\epsilon^2\\rangle = \\left|\\mathcal{F}^{(s)}_\\textbf{1}\\right|^2 = C_{\\sigma\\sigma\\epsilon}^2\\left|\\mathcal{F}^{(t)}_\\sigma\\right|^2 = \\frac14\\left|\\mathcal{F}^{(t)}_\\sigma\\right|^2 \n\\\\\n& \\mathcal{F}^{(s)}_\\textbf{1} \n= \\frac12 \\mathcal{F}^{(t)}_\\sigma\n=\\left[z_{12}^\\frac14 z_{34}^{-\\frac58}\\left(z_{13}z_{24}z_{14}z_{23}\\right)^{-\\frac{3}{16}} \\right]\\frac{1-\\frac{x}{2}}{x^\\frac38(1-x)^\\frac{5}{16}} \n\\ \\underset{(z_i)=(x, 0,\\infty, 1)}{=}\\ \n\\frac{1-\\frac{x}{2}}{x^\\frac18(1-x)^\\frac12}\n\\end{align}\n"
},
{
"math_id": 44,
"text": "\\langle \\sigma^4\\rangle"
},
{
"math_id": 45,
"text": "(z_1,z_2,z_3,z_4)=(x,0,\\infty,1)"
},
{
"math_id": 46,
"text": "x^\\frac{1}{24}(1-x)^\\frac{1}{24}\\prod_{1\\leq i<j\\leq 4} z_{ij}^{-\\frac{1}{24}}"
},
{
"math_id": 47,
"text": "x"
},
{
"math_id": 48,
"text": "\n\\begin{align}\n\\langle \\sigma^4\\rangle &= \n\\left|\\mathcal{F}_\\textbf{1}^{(s)}\\right|^2 + \\frac14 \\left|\\mathcal{F}_{\\epsilon}^{(s)}\\right|^2 \n= \n\\left|\\mathcal{F}_\\textbf{1}^{(t)}\\right|^2 + \\frac14 \\left|\\mathcal{F}_{\\epsilon}^{(t)}\\right|^2 \n\\\\\n&= \n\\frac{|1+\\sqrt{x}|+|1-\\sqrt{x}|}{2|x|^\\frac14 |1-x|^\\frac14}\n\\ \\underset{x\\in (0, 1)}{=}\\ \\frac{1}{|x|^\\frac14 |1-x|^\\frac14}\n\\end{align}\n"
},
{
"math_id": 49,
"text": "\n\\begin{align}\n& \\mathcal{F}_\\textbf{1}^{(s)} \n= \\frac{\\sqrt{\\frac{1+\\sqrt{1-x}}{2}}}{x^\\frac18(1-x)^\\frac18} \n\\ ,\\;\\;\n\\mathcal{F}_{\\epsilon}^{(s)} \n= \n\\frac{\\sqrt{2-2\\sqrt{1-x}}}{x^\\frac18(1-x)^\\frac18}\n\\\\ &\n\\mathcal{F}_\\textbf{1}^{(t)} \n= \\frac{\\mathcal{F}^{(s)}_\\textbf{1}}{\\sqrt{2}} + \\frac{\\mathcal{F}^{(s)}_\\epsilon}{2\\sqrt{2}}\n= \\frac{\\sqrt{\\frac{1+\\sqrt{x}}{2}}}{x^\\frac18(1-x)^\\frac18} \n\\ ,\\;\\;\n\\mathcal{F}_{\\epsilon}^{(t)} \n= \\sqrt{2}\\mathcal{F}^{(s)}_\\textbf{1} - \\frac{\\mathcal{F}^{(s)}_\\epsilon}{\\sqrt{2}}\n= \n\\frac{\\sqrt{2-2\\sqrt{x}}}{x^\\frac18(1-x)^\\frac18}\n\\end{align}\n"
},
{
"math_id": 50,
"text": "\n\\left\\langle \\prod_{i=1}^{2n} \\epsilon(z_i)\\right\\rangle^2 \n= \\left| \\det\\left(\\frac{1}{z_{ij}}\\right)_{1\\leq i\\neq j\\leq 2n} \\right|^2\n"
},
{
"math_id": 51,
"text": "\n\\left\\langle \\prod_{i=1}^{2n} \\sigma(z_i)\\right\\rangle^2 \n= \\frac{1}{2^n}\\sum_{\\begin{array}{c}\\epsilon_i=\\pm 1 \\\\ \\sum_{i=1}^{2n}\\epsilon_i=0\\end{array}} \\prod_{1\\leq i<j\\leq 2n} |z_{ij}|^{\\frac{\\epsilon_i\\epsilon_j}{2}}\n"
},
{
"math_id": 52,
"text": "\\sigma"
},
{
"math_id": 53,
"text": "\\mu"
},
{
"math_id": 54,
"text": "(\\Delta_\\mu,\\bar\\Delta_\\mu) = (\\Delta_\\sigma,\\bar \\Delta_\\sigma)=(\\tfrac{1}{16},\\tfrac{1}{16})"
},
{
"math_id": 55,
"text": "\n\\left\\langle \\sigma(z_1)\\mu(z_2)\\sigma(z_3)\\mu(z_4)\\right\\rangle^2 \n= \\frac12 \\sqrt{\\frac{|z_{13}z_{24}|}{|z_{12}z_{34}z_{23}z_{14}|}}\n\\Big( |x|+|1-x|-1 \\Big)\n"
},
{
"math_id": 56,
"text": "\n\\left\\langle \\prod_{i=1}^4\\mu(z_i)\\right\\rangle^2 \n=\n\\left\\langle \\prod_{i=1}^4\\sigma(z_i)\\right\\rangle^2 \n= \\frac12 \\sqrt{\\frac{|z_{13}z_{24}|}{|z_{12}z_{34}z_{23}z_{14}|}}\n\\Big( |x|+|1-x|+1 \\Big)\n"
},
{
"math_id": 57,
"text": "q=2"
},
{
"math_id": 58,
"text": "q"
},
{
"math_id": 59,
"text": "\\langle\\sigma\\sigma\\sigma\\rangle=0"
},
{
"math_id": 60,
"text": "\\langle\\sigma\\sigma\\sigma\\sigma\\rangle"
},
{
"math_id": 61,
"text": " q\\to 2"
}
] |
https://en.wikipedia.org/wiki?curid=63539376
|
6354035
|
Differentiator
|
Type of circuit
In electronics, a differentiator is a circuit that outputs a signal approximately proportional to the rate of change (i.e. the derivative with respect to time) of its input signal. Because the derivative of a sinusoid is another sinusoid whose amplitude is multiplied by its frequency, a true differentiator that works across all frequencies cannot be realized (its gain would have to increase indefinitely as frequency increases). Real circuits such as a 1st-order high-pass filter are able to approximate differentiation at lower frequencies by limiting the gain above its cutoff frequency. An active differentiator includes an amplifier, while a passive differentiator is made only of resistors, capacitors and inductors.
Passive differentiator.
The four-terminal 1st-order passive high-pass filter circuits depicted in the figure, consisting of a resistor and a capacitor, or alternatively a resistor and an inductor, are called differentiators because they approximate differentiation at frequencies well below each filter's cutoff frequency.
According to Ohm's law, the input and output voltages of the "capacitive differentiator" are related by the following transfer function (which has a zero at the origin and a pole at formula_0):
formula_1
which is a good approximation of an ideal differentiator at frequencies well below the filter's cutoff frequency of formula_2 in hertz or formula_3 in radians per second.
Similarly, the transfer function of the "inductive differentiator" has a zero at the origin and a pole at formula_4, corresponding to a cutoff frequency of formula_5 in hertz or formula_6 in radians per second.
Active differentiator.
Ideal differentiator.
A differentiator circuit (also known as a differentiating amplifier or inverting differentiator) consists of an ideal operational amplifier with a resistor "R" providing negative feedback and a capacitor "C" at the input, such that the input voltage formula_7 is applied across the capacitor and the output voltage formula_8 is taken at the amplifier's output, across the resistor.
According to the capacitor's current–voltage relation, this current formula_9 as it flows from the input through the capacitor to the virtual ground will be proportional to the derivative of the input voltage:
formula_10
This same current formula_9 is converted into a voltage when it travels from the virtual ground through the resistor to the output, according to Ohm's law:
formula_11
Inserting the capacitor's equation for formula_9 provides the output voltage as a function of the input voltage:
formula_12
Consequently, the output voltage is proportional to the negative of the rate of change of the input voltage, with a gain set by the time constant formula_13
The op amp's low-impedance output isolates the circuit from the load of the succeeding stages, so its response is independent of the load.
If a constant DC voltage is applied as input, the output voltage is zero. If the input voltage changes from zero to negative, the output voltage is positive. If the applied input voltage changes from zero to positive, the output voltage is negative. If a square-wave input is applied to a differentiator, then a spike waveform is obtained at the output.
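The following minimal sketch (not part of the original article; the component values and the 500 Hz triangle-wave input are illustrative assumptions) applies the relation formula_12 numerically and reproduces the square-wave output mentioned under Applications:

import numpy as np

R, C = 10e3, 100e-9                       # assumed values: R = 10 kOhm, C = 100 nF
t = np.linspace(0, 4e-3, 4001)            # 4 ms of uniformly spaced samples
v_in = 2 * np.abs((t * 1e3) % 2 - 1) - 1  # 500 Hz triangle wave, +/-1 V

v_out = -R * C * np.gradient(v_in, t)     # v_out = -RC * dv_in/dt

print(v_out.min(), v_out.max())           # approximately -2 V and +2 V, alternating each half-period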
Operation as high pass filter.
Treating the capacitor as an impedance with capacitive reactance of "X"c = 1/(2π"fC") allows analyzing the differentiator as a high-pass filter. The inverse proportionality to frequency means that at low frequency the reactance of the capacitor is high, and at high frequency the reactance is low. Since the feedback configuration provides a gain of −"R"/"X"c, the gain is low at low frequencies (or for slowly changing input), and higher at higher frequencies (or for rapidly changing input).
Frequency response.
The transfer function of an ideal differentiator is formula_14, resulting in a Bode magnitude plot with a +20 dB per decade slope over all frequencies and unity gain at formula_15
Advantages.
A small time constant is sufficient to cause differentiation of the input signal.
Limitations.
At high frequencies the gain of this ideal differentiator keeps increasing without bound, so the circuit becomes unstable and prone to oscillation, and high-frequency noise is strongly amplified, tending to dominate the output.
Practical differentiator.
In order to overcome the limitations of the ideal differentiator, an additional small-value capacitor "C"1 is connected in parallel with the feedback resistor "R", which prevents the differentiator circuit from oscillating, and a resistor "R"1 is connected in series with the capacitor "C", which limits the increase in gain to a ratio of "R"/"R"1.
Since negative feedback is present through the resistor "R", we can apply the virtual ground concept; that is, the voltage at the inverting terminal is the same 0 V as at the non-inverting terminal.
Applying nodal analysis, we get
formula_16
formula_17
Therefore,
formula_18
Hence, the transfer function has one zero at formula_19, one pole at formula_20 (corresponding to a corner frequency of formula_21) and another pole at formula_22 (corresponding to a corner frequency of formula_23).
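As a minimal numerical sketch (not from the original article; the component values are illustrative assumptions), the magnitude of formula_18 can be evaluated over frequency, showing the +20 dB per decade differentiating region at low frequencies and the levelling-off and roll-off introduced by "R"1 and "C"1:

import numpy as np

R, C = 10e3, 100e-9        # assumed feedback resistor and input capacitor
R1, C1 = 1e3, 1e-9         # assumed series input resistor and small feedback capacitor

f = np.logspace(1, 7, 7)   # 10 Hz to 10 MHz, one point per decade
s = 2j * np.pi * f
H = -s * R * C / ((1 + s * R1 * C) * (1 + s * R * C1))   # transfer function above

for fi, Hi in zip(f, H):
    print(f"{fi:>12.0f} Hz   {20 * np.log10(abs(Hi)):7.1f} dB")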
Frequency response.
This practical differentiator's frequency response is that of a band-pass filter, with a +20 dB per decade slope over the frequency band used for differentiation. With the response normalized using formula_24 and formula_25, the circuit differentiates (+20 dB per decade) below the first corner frequency formula_26, has approximately constant gain between formula_26 and formula_27, and rolls off above formula_27, attenuating high-frequency noise.
Setting formula_28 will produce one zero at formula_19 and two poles at formula_0 (corresponding to a single corner frequency of formula_29). In the resulting frequency response (normalized using formula_30), the circuit acts as a differentiator below formula_29, acts as an integrator above formula_29, and reaches its maximum gain near formula_29.
Applications.
The differentiator circuit is essentially a high-pass filter. It can generate a square wave from a triangle wave input and produce alternating-direction voltage spikes when a square wave is applied. In ideal cases, a differentiator reverses the effects of an integrator on a waveform, and conversely. Hence, they are most commonly used in wave-shaping circuits to detect high-frequency components in an input signal. Differentiators are an important part of electronic analogue computers and analogue PID controllers. They are also used in frequency modulators as rate-of-change detectors.
A passive differentiator circuit is one of the basic electronic circuits, being widely used in circuit analysis based on the equivalent circuit method.
|
[
{
"math_id": 0,
"text": "s {=} \\tfrac{\\text{-}1}{RC}"
},
{
"math_id": 1,
"text": "Y=\\frac{Z_R}{Z_R+Z_C}X =\\frac{R}{R+\\frac{1}{sC}}X =\\frac{sRC}{1+sRC}X \\implies Y\\approx sRCX \\quad \\text{for} \\ |s|\\ll \\frac{1}{RC}"
},
{
"math_id": 2,
"text": "\\tfrac{1}{2\\pi RC}"
},
{
"math_id": 3,
"text": "\\tfrac{1}{RC}"
},
{
"math_id": 4,
"text": "s {=} \\tfrac{\\text{-}R}{L}"
},
{
"math_id": 5,
"text": "\\tfrac{R}{2\\pi L}"
},
{
"math_id": 6,
"text": "\\tfrac{R}{L}"
},
{
"math_id": 7,
"text": "V_\\text{in}"
},
{
"math_id": 8,
"text": "V_\\text{out}"
},
{
"math_id": 9,
"text": "I"
},
{
"math_id": 10,
"text": "I = C \\, \\frac{dV_\\text{in}}{dt} \\, ."
},
{
"math_id": 11,
"text": "0 - V_\\text{out} = IR \\, ."
},
{
"math_id": 12,
"text": "V_\\text{out} = -RC \\frac{dV_\\text{in}}{dt}."
},
{
"math_id": 13,
"text": "RC ."
},
{
"math_id": 14,
"text": "\\tfrac{V_\\text{out}}{V_\\text{in}} = \\text{-}sRC"
},
{
"math_id": 15,
"text": "f_\\text{0dB} {=} \\tfrac{1}{2\\pi RC} \\, ."
},
{
"math_id": 16,
"text": "\\frac{0 - V_o}{R} + \\frac{0 - V_o}{\\frac{1}{sC_1}} + \\frac{0 - V_i}{R_1 + \\frac{1}{sC}} = 0,"
},
{
"math_id": 17,
"text": "-V_o \\left(\\frac{1}{R} + sC_1\\right) = \\frac{V_i}{R_1 + \\frac{1}{sC}}."
},
{
"math_id": 18,
"text": "\\frac{V_o}{V_i} = \\frac{-sRC}{(1 + sR_1C)(1 + sRC_1)}."
},
{
"math_id": 19,
"text": "s {=} 0"
},
{
"math_id": 20,
"text": "s {=} \\tfrac{\\text{-}1}{R_1C}"
},
{
"math_id": 21,
"text": "f_1 {=} \\tfrac{1}{2\\pi R_1C}"
},
{
"math_id": 22,
"text": "s {=} \\tfrac{\\text{-}1}{RC_1}"
},
{
"math_id": 23,
"text": "f_2 {=} \\tfrac{1}{2\\pi RC_1}"
},
{
"math_id": 24,
"text": "R_1 C {=} 10^{1}"
},
{
"math_id": 25,
"text": "R C_1 {=} 10^{\\text{-}1}"
},
{
"math_id": 26,
"text": "\\omega_1"
},
{
"math_id": 27,
"text": "\\omega_2"
},
{
"math_id": 28,
"text": "RC_1 {=} R_1C {=} RC"
},
{
"math_id": 29,
"text": "\\omega_1 {=} \\tfrac{1}{RC}"
},
{
"math_id": 30,
"text": "RC {=} 1"
}
] |
https://en.wikipedia.org/wiki?curid=6354035
|
63543769
|
Natural time analysis
|
Natural time analysis is a statistical method applied to analyze complex time series and critical phenomena, based on event counts as a measure of "time" rather than clock time. The natural time concept was introduced by P. Varotsos, N. Sarlis and E. Skordas in 2001. Natural time analysis has been primarily applied to earthquake prediction / nowcasting and secondarily to sudden cardiac death / heart failure and financial markets. Natural time characteristics are considered to be unique.
Etymology.
"Natural time" is a new view of time introduced in 2001 that is not continuous, in contrast to conventional time which is in the continuum of real numbers, but instead its values form countable sets as natural numbers.
Definition.
In the natural time domain, each event is characterized by two terms: the "natural time" χ and the energy "Q""k". χ is defined as "k"/"N", where "k" is a natural number (the "k"-th event) and "N" is the total number of events in the time sequence of data. A related term, "p""k", is the ratio "Q""k"/"Q""total", which describes the fractional energy released. The term κ1 is the variance in natural time:
formula_0
where formula_1 and formula_2
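A minimal sketch (not part of the original article; the event energies in the example are made up for illustration) computes κ1 directly from this definition:

import numpy as np

def kappa1(Q):
    """Variance of natural time, kappa_1, for a sequence of event energies Q_k."""
    Q = np.asarray(Q, dtype=float)
    N = len(Q)
    chi = np.arange(1, N + 1) / N      # natural time chi_k = k/N of each event
    p = Q / Q.sum()                    # fractional energy p_k of each event
    return np.sum(p * chi**2) - np.sum(p * chi)**2

print(kappa1([2.0, 1.0, 3.0, 0.5, 1.5]))   # illustrative event energies only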
Time reversal.
Time reversal, in contrast to clock time, is applicable when studying the approach of a system to criticality with natural time analysis. Living systems, for example, are considered to operate far from equilibrium, as energy flows across their boundaries, in contrast to deceased organisms, where inner driving forces are absent. While time irreversibility is a fundamental property of a living system, the state of death is more time-reversible with respect to the energy flow across the system's boundaries. Thus the approach of a system to a critical state can be assessed by applying natural time analysis to calculate the entropy under both the normal flow of time and time reversal, and studying the difference between the two results.
Applications.
Seismology.
Earthquake prediction.
Natural time analysis has been initially applied to VAN method in order to improve the accuracy of the estimation of the time of a forthcoming earthquake that has been indicated to occur by seismic electric signals (SES). The method deems SES valid when κ1 = 0.070. Once the SES are deemed valid, a second NT analysis is started in which the subsequent seismic (rather than electric) events are noted, and the region is divided up as a Venn diagram with at least two seismic events per overlapping rectangle. When κ1 approaches the value κ1 = 0.070 for the candidate region, a critical seismic event is considered imminent, i.e. it will occur in a few days to one week or so.
Earthquake nowcasting.
In seismology, nowcasting is the estimate of the current dynamic state of a seismological system. It differs from forecasting which aims to estimate the probability of a future event but it is also considered a potential base for forecasting. Nowcasting is based on the earthquake cycle model, a recurring cycle between pairs of large earthquakes in a geographical area, upon which the system is evaluated using natural time. Nowcasting calculations produce the "earthquake potential score", an estimation of the current level of seismic progress.
When applied to seismicity, natural time analysis offers several advantages.
Typical applications are: great global earthquakes and tsunamis, aftershocks and induced seismicity, induced seismicity at gas fields, seismic risk to global megacities, the study of clustering of large global earthquakes, etc.
Cardiology.
Natural time analysis has been experimentally used for the diagnosis of heart failure syndrome, as well as for identifying patients at high risk of sudden cardiac death, even when measuring solely the heart rate, either with electrocardiography or with far more inexpensive and portable equipment (e.g. an oximeter).
Economy.
Due to similarities of the dynamic characteristics between earthquakes and financial markets, natural time analysis, which is primarily used in seismology, was chosen to assist in developing winning strategies in financial markets, with encouraging results.
|
[
{
"math_id": 0,
"text": "\\kappa_1=\\sum_{k=1}^N p_k(\\chi_k)^2 - \\bigl(\\sum_{k=1}^N p_k\\chi_k\\bigr)^2"
},
{
"math_id": 1,
"text": "\\textstyle\\chi_k=k/N"
},
{
"math_id": 2,
"text": "\\textstyle\\ p_k=\\frac{Q_k}{\\sum_{n=1}^N Q_n}"
}
] |
https://en.wikipedia.org/wiki?curid=63543769
|
63544354
|
Rigidity (K-theory)
|
In mathematics, rigidity of "K"-theory encompasses results relating algebraic "K"-theory of different rings.
Suslin rigidity.
"Suslin rigidity", named after Andrei Suslin, refers to the invariance of mod-"n" algebraic "K"-theory under the base change between two algebraically closed fields: showed that for an extension
formula_0
of algebraically closed fields, and an algebraic variety "X" / "F", there is an isomorphism
formula_1
between the mod-"n" "K"-theory of coherent sheaves on "X", respectively its base change to "E". A textbook account of this fact in the case "X" = "F", including the resulting computation of "K"-theory of algebraically closed fields in characteristic "p", is in .
This result has stimulated various other papers. For example, it has been shown that the base change functor for the mod-"n" stable A1-homotopy category
formula_2
is fully faithful. A similar statement has also been established for non-commutative motives.
Gabber rigidity.
Another type of rigidity relates the mod-"n" K-theory of a henselian ring "A" to that of its residue field "A"/"m". This rigidity result is referred to as "Gabber rigidity", in view of the work of Ofer Gabber, who showed that there is an isomorphism
formula_3
provided that "n"≥1 is an integer which is invertible in "A".
If "n" is not invertible in "A", the result as above still holds, provided that K-theory is replaced by the fiber of the trace map between K-theory and topological cyclic homology. This was shown by .
Applications.
Gabber's and Suslin's rigidity results have been used to reprove Quillen's computation of the K-theory of finite fields.
References.
|
[
{
"math_id": 0,
"text": "E / F"
},
{
"math_id": 1,
"text": "K_*(X, \\mathbf Z/n) \\cong K_*(X \\times_F E, \\mathbf Z/n), \\ i \\ge 0"
},
{
"math_id": 2,
"text": "\\mathrm{SH}(F, \\mathbf Z/n) \\to \\mathrm{SH}(E, \\mathbf Z/n)"
},
{
"math_id": 3,
"text": "K_*(A, \\mathbf Z/n) = K_*(A / m, \\mathbf Z/n)"
}
] |
https://en.wikipedia.org/wiki?curid=63544354
|
63547027
|
Reflectarray antenna
|
Beam focusing, typically horn-fed planar array of unit cells
A reflectarray antenna (or just reflectarray) consists of an array of unit cells, illuminated by a feeding antenna (source of electromagnetic waves). The feeding antenna is usually a horn.
The unit cells are usually backed by a ground plane, and the incident wave reflects off them towards the direction of the beam, but each cell adds a different phase delay to the reflected signal.
A phase distribution of concentric rings is applied to focus the wavefronts from the feeding antenna into a plane wave (to account for the varying path lengths from the feeding antenna to each unit cell).
A progressive phase shift can be applied to the unit cells to steer the beam direction. It is common to offset the feeding antenna to prevent blockage of the beam.
In this case, the phase distribution on the reflectarray surface needs to be altered. A reflectarray focuses a beam in a similar way to a parabolic reflector (dish), but with a much thinner form factor.
Phase distribution.
In a reflectarray, a constant phase of the entire reflected field is achieved in a plane normal to the direction of the desired pencil beam, as expressed by:
formula_0
where formula_1 is free space wavelength,
formula_2 is the distance from the feed, located at formula_4, to the formula_3th element/unit cell,
formula_5 is the focal length,
formula_6 is the position vector of the formula_3th element relative to the origin formula_7, i.e. the centre of the reflectarray,
formula_8 is the direction vector of the desired pencil beam,
formula_9,
and formula_10 is the phase shift introduced by formula_3th unit cell of reflect array to its reflected field relative to the incident field.
For a feed horn located at formula_4, the formula for the optimal phase distribution on a conventional reflectarray for a beam in the boresight direction is given by:
formula_11
where formula_12 is the phase shift for a unit cell located at coordinates formula_13.
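A minimal sketch of this boresight phase distribution (not from the original article; the wavelength, focal length, cell pitch and array size are illustrative assumptions) computes the required phase shift at every unit cell and wraps it into one 360° cycle, which produces the concentric-ring pattern described above:

import numpy as np

wavelength = 0.01     # assumed free-space wavelength (10 GHz), metres
F = 0.15              # assumed focal length of the feed above the array centre, metres
pitch = 0.005         # assumed unit-cell spacing (half a wavelength), metres
n = 40                # assumed 40 x 40 array of unit cells

coords = (np.arange(n) - (n - 1) / 2) * pitch
x, y = np.meshgrid(coords, coords)

# Required phase shift at each cell for a boresight beam, wrapped into [0, 2*pi)
delta_phi = (2 * np.pi / wavelength) * np.sqrt(x**2 + y**2 + F**2)
delta_phi_wrapped = np.mod(delta_phi, 2 * np.pi)

print(delta_phi_wrapped.shape)   # (40, 40) map of phase shifts in radians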
Unit cell considerations.
It is important to analyse the reflection magnitude formula_14 and the reflection phase formula_15 across the frequency bandwidth of operation. When designing a reflectarray, we aim to maximise the reflection magnitude formula_14 to be close to 1 (0 dB). The reflection phase formula_15 at each unit cell determines the overall beam shape and direction. Ideally, the total phase shift range would be 360°. The aperture efficiency, and hence gain, of the reflectarray will be reduced if the angle of incidence to the unit cells is not considered, or if spillover occurs or illumination of the reflectarray is not optimal (see also transmitarrays). Similarly, phase errors due to quantization into a discrete number of phase states for digital control can also reduce the gain.
A fixed reflectarray has a single beam direction per feed. Changing the shape of the unit cells alters their reflection phase. The unit cells cannot be reconfigured. This has applications in point-to-point communications, or for a satellite covering a specific geographic region (with a fixed beam contour).
A reconfigurable reflectarray has unit cells whose phase can be electronically controlled in real-time to steer the beam or change its shape. Several methods have been used to implement reconfigurable reflectarray unit cells, including PIN diodes, liquid crystal, and novel materials.
Each of these methods introduces loss which reduces the efficiency of the unit cells. Linearity (such as distortion due to the diodes) also needs to be considered to minimise out-of-band radiation which could interfere with users on adjacent frequencies.
Other types of reflectarrays.
In satellite communications, it is necessary to produce multiple beams per feed, sometimes operating at different frequencies and polarizations. An example of this is the four-color frequency reuse scheme. Circular polarization is commonly used to reduce the effect of atmospheric depolarization on the communication system performance. A dual-band reflectarray has two different passband frequencies, for example for uplink and downlink.
A bifocal reflectarray has two principal foci, so it can focus wavefronts to or from two feeding antennas simultaneously.
A dual reflectarray consists of two stages of reflection, in which the beam is first focused by one reflectarray, then by another. The phase distribution on each reflectarray must be carefully calculated to ensure that the phase derivatives are consistent with the angle of incidence of the rays. The ratio of the sizes and positions of these reflectarrays can be used to achieve quasi-optical magnification (narrowing of the beam).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\n\\frac{2\\pi}{\\lambda_0} \\left(r_{mn} - R_{mn}.r\\right) - \\Delta \\phi_{mn} = 2 \\pi N\n"
},
{
"math_id": 1,
"text": "\\lambda_0"
},
{
"math_id": 2,
"text": "r_{mn}"
},
{
"math_id": 3,
"text": "{mn}"
},
{
"math_id": 4,
"text": "(0,0,F)"
},
{
"math_id": 5,
"text": "F"
},
{
"math_id": 6,
"text": "R_{mn}"
},
{
"math_id": 7,
"text": "(0,0,0)"
},
{
"math_id": 8,
"text": "r"
},
{
"math_id": 9,
"text": "N = 1,2,3,..."
},
{
"math_id": 10,
"text": "\\Delta \\phi_{mn}"
},
{
"math_id": 11,
"text": "\n\\Delta \\phi(x_m, y_m) = \\frac{2\\pi}{\\lambda_0}\\sqrt{x^2 + y^2 + F^2}\n"
},
{
"math_id": 12,
"text": "\\Delta \\phi(x_m, y_m)"
},
{
"math_id": 13,
"text": "(x_m, y_m)"
},
{
"math_id": 14,
"text": "|S_{11}|"
},
{
"math_id": 15,
"text": "\\angle S_{11}"
}
] |
https://en.wikipedia.org/wiki?curid=63547027
|
635483
|
Gromov's theorem on groups of polynomial growth
|
In geometric group theory, Gromov's theorem on groups of polynomial growth, first proved by Mikhail Gromov, characterizes finitely generated groups of "polynomial" growth, as those groups which have nilpotent subgroups of finite index.
Statement.
The growth rate of a group is a well-defined notion from asymptotic analysis. To say that a finitely generated group has polynomial growth means the number of elements of length at most "n" (relative to a symmetric generating set) is bounded above by a polynomial function "p"("n"). The "order of growth" is then the least degree of any such polynomial function "p".
A nilpotent group "G" is a group with a lower central series terminating in the identity subgroup.
Gromov's theorem states that a finitely generated group has polynomial growth if and only if it has a nilpotent subgroup that is of finite index.
Growth rates of nilpotent groups.
There is a vast literature on growth rates, leading up to Gromov's theorem. An earlier result of Joseph A. Wolf showed that if "G" is a finitely generated nilpotent group, then the group has polynomial growth. Yves Guivarc'h and independently Hyman Bass (with different proofs) computed the exact order of polynomial growth. Let "G" be a finitely generated nilpotent group with lower central series
formula_0
In particular, the quotient group "G""k"/"G""k"+1 is a finitely generated abelian group.
The Bass–Guivarc'h formula states that the order of polynomial growth of "G" is
formula_1
where:
"rank" denotes the rank of an abelian group, i.e. the largest number of independent and torsion-free elements of the abelian group.
In particular, Gromov's theorem and the Bass–Guivarc'h formula imply that the order of polynomial growth of a finitely generated group is always either an integer or infinity (excluding, for example, fractional powers).
Another nice application of Gromov's theorem and the Bass–Guivarc'h formula is to the quasi-isometric rigidity of finitely generated abelian groups: any group which is quasi-isometric to a finitely generated abelian group contains a free abelian group of finite index.
Proofs of Gromov's theorem.
In order to prove this theorem Gromov introduced a convergence for metric spaces. This convergence, now called the Gromov–Hausdorff convergence, is currently widely used in geometry.
A relatively simple proof of the theorem was found by Bruce Kleiner. Later, Terence Tao and Yehuda Shalom modified Kleiner's proof to make an essentially elementary proof as well as a version of the theorem with explicit bounds. Gromov's theorem also follows from the classification of approximate groups obtained by Breuillard, Green and Tao. A simple and concise proof based on functional analytic methods is given by Ozawa.
The gap conjecture.
Beyond Gromov's theorem one can ask whether there exists a gap in the growth spectrum for finitely generated groups just above polynomial growth, separating virtually nilpotent groups from others. Formally, this means that there would exist a function formula_2 such that a finitely generated group is virtually nilpotent if and only if its growth function is formula_3. Such a theorem was obtained by Shalom and Tao, with an explicit function formula_4 for some formula_5. All known groups with intermediate growth (i.e. both superpolynomial and subexponential) are essentially generalizations of Grigorchuk's group, and have faster growth functions; so all known groups have growth faster than formula_6, with formula_7, where formula_8 is the real root of the polynomial formula_9.
It is conjectured that the true lower bound on growth rates of groups with intermediate growth is formula_10. This is known as the "Gap conjecture".
|
[
{
"math_id": 0,
"text": " G = G_1 \\supseteq G_2 \\supseteq \\cdots. "
},
{
"math_id": 1,
"text": " d(G) = \\sum_{k \\geq 1} k \\operatorname{rank}(G_k/G_{k+1}) "
},
{
"math_id": 2,
"text": "f: \\mathbb N \\to \\mathbb N"
},
{
"math_id": 3,
"text": "O(f(n))"
},
{
"math_id": 4,
"text": "n^{\\log\\log(n)^c}"
},
{
"math_id": 5,
"text": "c > 0"
},
{
"math_id": 6,
"text": "e^{n^\\alpha}"
},
{
"math_id": 7,
"text": "\\alpha = \\log(2)/\\log(2/\\eta ) \\approx 0.767"
},
{
"math_id": 8,
"text": "\\eta"
},
{
"math_id": 9,
"text": "x^3+x^2+x-2"
},
{
"math_id": 10,
"text": "e^{\\sqrt n}"
}
] |
https://en.wikipedia.org/wiki?curid=635483
|
63553560
|
Cannon–Thurston map
|
In mathematics, a Cannon–Thurston map is any of a number of continuous group-equivariant maps between the boundaries of two hyperbolic metric spaces extending discrete isometric actions of the group on those spaces.
The notion originated from a seminal 1980s preprint of James Cannon and William Thurston "Group-invariant Peano curves" (eventually published in 2007) about fibered hyperbolic 3-manifolds.
Cannon–Thurston maps provide many natural geometric examples of space-filling curves.
History.
The Cannon–Thurston map first appeared in a mid-1980s preprint of James W. Cannon and William Thurston called "Group-invariant Peano curves". The preprint remained unpublished until 2007, but in the meantime had generated numerous follow-up works by other researchers.
In their paper Cannon and Thurston considered the following situation. Let "M" be a closed hyperbolic 3-manifold that fibers over the circle with fiber "S". Then "S" itself is a closed hyperbolic surface, and its universal cover formula_0 can be identified with the hyperbolic plane formula_1. Similarly, the universal cover of "M" can be identified with the hyperbolic 3-space formula_2. The inclusion formula_3 lifts to a formula_4-invariant inclusion formula_5. This inclusion is highly distorted because the action of formula_4 on formula_2 is not geometrically finite.
Nevertheless, Cannon and Thurston proved that this distorted inclusion formula_6 extends to a continuous formula_4-equivariant map
formula_7,
where formula_8 and formula_9. Moreover, in this case the map "j" is surjective, so that it provides a continuous surjection from the circle onto the 2-sphere, that is, a space-filling curve.
Cannon and Thurston also explicitly described the map formula_7, via collapsing stable and unstable laminations of the monodromy pseudo-Anosov homeomorphism of "S" for this fibration of "M". In particular, this description implies that the map "j" is uniformly finite-to-one, with the pre-image of every point of formula_10 having cardinality at most 2"g", where "g" is the genus of "S".
The paper of Cannon and Thurston generated a large amount of follow-up work, with other researchers analyzing the existence or non-existence of analogs of the map "j" in various other set-ups motivated by the Cannon–Thurston result.
Cannon–Thurston maps and Kleinian groups.
Kleinian representations of surface groups.
The original example of Cannon and Thurston can be thought of in terms of Kleinian representations of the surface group formula_11. As a subgroup of formula_12, the group "H" acts on formula_13 by isometries, and this action is properly discontinuous. Thus one gets a discrete representation formula_14.
The group formula_11 also acts by isometries, properly discontinuously and co-compactly, on the universal cover formula_15, with the limit set formula_16 being equal to formula_17. The Cannon–Thurston result can be interpreted as saying that these actions of "H" on formula_1 and formula_2 induce a continuous "H"-equivariant map formula_7.
One can ask, given a hyperbolic surface "S" and a discrete representation formula_18, if there exists an induced continuous map formula_19.
For Kleinian representations of surface groups, the most general result in this direction is due to Mahan Mj (2014).
Let "S" be a complete connected finite volume hyperbolic surface. Thus "S" is a surface without boundary, with a finite (possibly empty) set of cusps. Then one still has formula_15 and formula_20 (even if "S" has some cusps). In this setting Mj proved the following theorem:
Let "S" be a complete connected finite volume hyperbolic surface and let formula_11. Let formula_21 be a discrete faithful representation without accidental parabolics. Then formula_22 induces a continuous "H"-equivariant map formula_7.
Here the "without accidental parabolics" assumption means that for formula_23, the element formula_24 is a parabolic isometry of formula_2 if and only if formula_25 is a parabolic isometry of formula_1. One of important applications of this result is that in the above situation the limit set formula_26 is locally connected.
This result of Mj was preceded by numerous other results in the same direction, such as Minsky (1994), Alperin, Dicks and Porti (1999), McMullen (2001), Bowditch (2007) and (2013), Miyachi (2002), Souto (2006), Mj (2009), (2011), and others.
In particular, Bowditch's 2013 paper introduced the notion of a "stack" of Gromov-hyperbolic metric spaces and developed an alternative framework to that of Mj for proving various results about Cannon–Thurston maps.
General Kleinian groups.
In a 2017 paper Mj proved the existence of the Cannon–Thurston map in the following setting:
Let formula_27 be a discrete faithful representation where "G" is a word-hyperbolic group, and where formula_28 contains no parabolic isometries of formula_2. Then formula_22 induces a continuous "G"-equivariant map formula_29, where formula_30 is the Gromov boundary of "G", and where the image of "j" is the limit set of "G" in formula_10.
Here "induces" means that the map formula_31 is continuous, where formula_32 and formula_33 (for some basepoint formula_34). In the same paper Mj obtains a more general version of this result, allowing "G" to contain parabolics, under some extra technical assumptions on "G". He also provided a description of the fibers of "j" in terms of ending laminations of formula_35.
Cannon–Thurston maps and word-hyperbolic groups.
Existence and non-existence results.
Let "G" be a word-hyperbolic group and let "H" ≤ "G" be a subgroup such that "H" is also word-hyperbolic. If the inclusion "i":"H" → "G" extends to a continuous map "∂i": "∂H" → "∂G" between their hyperbolic boundaries, the map "∂i" is called a Cannon–Thurston map. Here "extends" means that the map between hyperbolic compactifications formula_36, given by formula_37, is continuous. In this setting, if the map "∂i" exists, it is unique and "H"-equivariant, and the image "∂i"("∂H") is equal to the limit set formula_38.
If "H" ≤ "G" is quasi-isometrically embedded (i.e. quasiconvex) subgroup, then the Cannon–Thurston map "∂i": "∂H" → "∂G" exists and is a topological embedding.
However, it turns out that the Cannon–Thurston map exists in many other situations as well.
Mitra proved that if "G" is word-hyperbolic and "H" ≤ "G" is a normal word-hyperbolic subgroup, then the Cannon–Thurston map exists. (In this case if "H" and "Q" = "G"/"H" are infinite then "H" is not quasiconvex in "G".) The original Cannon–Thurston theorem about fibered hyperbolic 3-manifolds is a special case of this result.
If "H" ≤ "G" are two word-hyperbolic groups and "H" is normal in "G" then, by a result of Mosher, the quotient group "Q" = "G"/"H" is also word-hyperbolic. In this setting Mitra also described the fibers of the map "∂i": "∂H" → "∂G" in terms of "algebraic ending laminations" on "H", parameterized by the boundary points "z" ∈ "∂Q".
In another paper Mitra considered the case where a word-hyperbolic group "G" splits as the fundamental group of a graph of groups, where all vertex and edge groups are word-hyperbolic, and the edge-monomorphisms are quasi-isometric embeddings. In this setting Mitra proved that for every vertex group formula_39, for the inclusion map formula_40 the Cannon–Thurston map formula_41 does exist.
By combining and iterating these constructions, Mitra produced examples of hyperbolic subgroups of hyperbolic groups "H" ≤ "G" where the subgroup distortion of "H" in "G" is an arbitrarily high tower of exponentials, and the Cannon–Thurston map formula_42 exists. Later Baker and Riley showed that one can arrange for "H" to have arbitrarily high primitive recursive distortion in "G".
In a 2013 paper, Baker and Riley constructed the first example of a word-hyperbolic group "G" and a word-hyperbolic (in fact free) subgroup "H" ≤ "G" such that the Cannon–Thurston map formula_42 does not exist.
Later Matsuda and Oguni generalized the Baker–Riley approach and showed that every non-elementary word-hyperbolic group "H" can be embedded in some word-hyperbolic group "G" in such a way that the Cannon–Thurston map formula_42 does not exist.
Multiplicity of the Cannon–Thurston map.
As noted above, if "H" is a quasi-isometrically embedded subgroup of a word-hyperbolic group "G", then "H" is word-hyperbolic, and the Cannon–Thurston map formula_43 exists and is injective. Moreover, it is known that the converse is also true: If "H" is a word-hyperbolic subgroup of a word-hyperbolic group "G" such that the Cannon–Thurston map formula_43 exists and is injective, then "H" is uasi-isometrically embedded in "G".
It is known, for more general reasons concerning convergence groups, that if "H" is a word-hyperbolic subgroup of a word-hyperbolic group "G" such that the Cannon–Thurston map formula_43 exists, then every conical limit point for "H" in formula_30 has exactly one pre-image under formula_44. However, the converse fails: if formula_43 exists and is non-injective, then there always exists a non-conical limit point of "H" in "∂G" with exactly one preimage under "∂i".
In the context of the original Cannon–Thurston paper, and for many generalizations for Kleinian representations formula_45 the Cannon–Thurston map formula_7 is known to be uniformly finite-to-one. That means that for every point formula_46, the full pre-image formula_47 is a finite set with cardinality bounded by a constant depending only on "S".
In general, it is known, as a consequence of the JSJ-decomposition theory for word-hyperbolic groups, that if formula_48 is a short exact sequence of three infinite torsion-free word-hyperbolic groups, then "H" is isomorphic to a free product of some closed surface groups and of a free group.
If formula_11 is the fundamental group of a closed hyperbolic surface "S", such hyperbolic extensions of "H" are described by the theory of "convex cocompact" subgroups of the mapping class group Mod("S"). Every subgroup Γ ≤ Mod("S") determines, via the Birman short exact sequence, an extension
formula_49
Moreover, the group formula_50 is word-hyperbolic if and only if Γ ≤ Mod("S") is convex-cocompact.
In this case, by Mitra's general result, the Cannon–Thurston map "∂i":"∂H" → "∂E"Γ does exist. The fibers of the map "∂i" are described by a collection of ending laminations on "S" determined by Γ. This description implies that the map "∂i" is uniformly finite-to-one.
If formula_51 is a convex-cocompact purely atoroidal subgroup of formula_52 (where formula_53) then for the corresponding extension formula_54 the group formula_50 is word-hyperbolic. In this setting Dowdall, Kapovich and Taylor proved that the Cannon–Thurston map formula_55 is uniformly finite-to-one, with point preimages having cardinality formula_56. This result was first proved by Kapovich and Lustig under the extra assumption that formula_51 is infinite cyclic, that is, that formula_51 is generated by an atoroidal fully irreducible element of formula_52.
Ghosh proved that for an arbitrary atoroidal formula_57 (without requiring formula_58 to be convex cocompact) the Cannon–Thurston map formula_55 is uniformly finite-to-one, with a bound on the cardinality of point preimages depending only on "n". (However, Ghosh's result does not provide an explicit bound in terms of "n", and it is still unknown if the 2"n" bound always holds in this case.)
It remains unknown whether, whenever "H" is a word-hyperbolic subgroup of a word-hyperbolic group "G" such that the Cannon–Thurston map formula_59 exists, the map formula_44 is always finite-to-one.
However, it is known that in this setting for every formula_60 such that "p" is a conical limit point, the set formula_61 has cardinality 1.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\tilde S"
},
{
"math_id": 1,
"text": "\\mathbb H^2"
},
{
"math_id": 2,
"text": "\\mathbb H^3"
},
{
"math_id": 3,
"text": " S\\subseteq M"
},
{
"math_id": 4,
"text": "\\pi_1(S)"
},
{
"math_id": 5,
"text": "\\tilde S=\\mathbb H^2\\subseteq \\mathbb H^3=\\tilde M"
},
{
"math_id": 6,
"text": "\\mathbb H^2\\subseteq \\mathbb H^3"
},
{
"math_id": 7,
"text": "j:\\mathbb S^1\\to \\mathbb S^2"
},
{
"math_id": 8,
"text": "\\mathbb S^1=\\partial \\mathbb H^2"
},
{
"math_id": 9,
"text": "\\mathbb S^2=\\partial \\mathbb H^3"
},
{
"math_id": 10,
"text": "\\mathbb S^2"
},
{
"math_id": 11,
"text": "H=\\pi_1(S)"
},
{
"math_id": 12,
"text": "G=\\pi_1(M)"
},
{
"math_id": 13,
"text": "\\mathbb H^3=\\tilde M"
},
{
"math_id": 14,
"text": " \\rho:H\\to \\mathbb PSL(2,\\mathbb C)=\\operatorname{Isom}_+(\\mathbb H^3)"
},
{
"math_id": 15,
"text": "\\mathbb H^2=\\tilde S"
},
{
"math_id": 16,
"text": "\\Lambda H\\subseteq \\partial H^2=\\mathbb S^1"
},
{
"math_id": 17,
"text": "\\mathbb S^1"
},
{
"math_id": 18,
"text": " \\rho:\\pi_1(S)\\to \\mathbb PSL(2,\\mathbb C)"
},
{
"math_id": 19,
"text": "j:\\Lambda H\\to \\mathbb S^2"
},
{
"math_id": 20,
"text": "\\Lambda \\pi_1(S)=\\mathbb S^1"
},
{
"math_id": 21,
"text": " \\rho: H\\to \\mathbb PSL(2,\\mathbb C)"
},
{
"math_id": 22,
"text": "\\rho"
},
{
"math_id": 23,
"text": "1\\ne h\\in H"
},
{
"math_id": 24,
"text": "\\rho(h)"
},
{
"math_id": 25,
"text": "h"
},
{
"math_id": 26,
"text": "\\Lambda \\rho(\\pi_1(S))\\subseteq \\mathbb S^2"
},
{
"math_id": 27,
"text": "\\rho:G\\to \\mathbb PSL(2,\\mathbb C) "
},
{
"math_id": 28,
"text": "\\rho(G)"
},
{
"math_id": 29,
"text": "j:\\partial G\\to \\mathbb S^2"
},
{
"math_id": 30,
"text": "\\partial G"
},
{
"math_id": 31,
"text": "J: G\\cup \\partial G\\to \\mathbb H^3 \\cup \\mathbb S^2"
},
{
"math_id": 32,
"text": "J|_{\\partial G}=j"
},
{
"math_id": 33,
"text": "J(g)=gx_0, g\\in G"
},
{
"math_id": 34,
"text": "x_0\\in \\mathbb H^3"
},
{
"math_id": 35,
"text": "\\mathbb H^3/G"
},
{
"math_id": 36,
"text": "\\hat i: H\\cup \\partial H\\to G\\cup \\partial G"
},
{
"math_id": 37,
"text": "\\hat i|_H=i, \\hat i|_{\\partial H}=\\partial i"
},
{
"math_id": 38,
"text": "\\Lambda_{\\partial G}(H)"
},
{
"math_id": 39,
"text": "A_v"
},
{
"math_id": 40,
"text": "i:A_v\\to G"
},
{
"math_id": 41,
"text": "\\partial i:\\partial A_v\\to \\partial G"
},
{
"math_id": 42,
"text": "\\partial i:\\partial H\\to \\partial G"
},
{
"math_id": 43,
"text": "\\partial i:\\partial H\\to\\partial G "
},
{
"math_id": 44,
"text": "\\partial i"
},
{
"math_id": 45,
"text": "\\rho:\\pi_1(S)\\to \\mathbb PSL(2,\\mathbb C),"
},
{
"math_id": 46,
"text": "p\\in \\mathbb S^2"
},
{
"math_id": 47,
"text": "j^{-1}(p)"
},
{
"math_id": 48,
"text": "1\\to H\\to G\\to Q\\to 1"
},
{
"math_id": 49,
"text": "1\\to H\\to E_\\Gamma \\to \\Gamma\\to 1"
},
{
"math_id": 50,
"text": "E_\\Gamma"
},
{
"math_id": 51,
"text": "\\Gamma"
},
{
"math_id": 52,
"text": "\\operatorname{Out}(F_n)"
},
{
"math_id": 53,
"text": "n\\ge 3"
},
{
"math_id": 54,
"text": "1\\to F_n\\to E_\\Gamma\\to \\Gamma\\to 1"
},
{
"math_id": 55,
"text": "\\partial i: \\partial F_n\\to\\partial E_\\Gamma"
},
{
"math_id": 56,
"text": "\\le 2n"
},
{
"math_id": 57,
"text": "\\phi\\in\\operatorname{Out}(F_n)"
},
{
"math_id": 58,
"text": "\\Gamma=\\langle \\phi\\rangle"
},
{
"math_id": 59,
"text": "\\partial i: \\partial H\\to\\partial G"
},
{
"math_id": 60,
"text": "p\\in \\Lambda_{\\partial G} H"
},
{
"math_id": 61,
"text": "(\\partial i)^{-1}(p)"
},
{
"math_id": 62,
"text": "\\Gamma\\le \\mathbb PSL(2,\\mathbb C)"
},
{
"math_id": 63,
"text": "\\Lambda\\subseteq \\partial \\mathbb H^3"
},
{
"math_id": 64,
"text": "\\Lambda"
},
{
"math_id": 65,
"text": "\\partial \\pi_1(S)=\\mathbb S^1"
},
{
"math_id": 66,
"text": "\\partial \\mathcal C(S,z) "
},
{
"math_id": 67,
"text": "\\mathbb PSL(2,\\mathbb C)"
},
{
"math_id": 68,
"text": "\\operatorname{Out}t(F_n)"
}
] |
https://en.wikipedia.org/wiki?curid=63553560
|
635546
|
Unified neutral theory of biodiversity
|
Theory of evolutionary biology
The unified neutral theory of biodiversity and biogeography (here "Unified Theory" or "UNTB") is a theory and the title of a monograph by ecologist Stephen P. Hubbell. It aims to explain the diversity and relative abundance of species in ecological communities. Like other neutral theories of ecology, Hubbell assumes that the differences between members of an ecological community of trophically similar species are "neutral", or irrelevant to their success. This implies that niche differences do not influence abundance and the abundance of each species follows a random walk. The theory has sparked controversy, and some authors consider it a more complex version of other null models that fit the data better.
"Neutrality" means that at a given trophic level in a food web, species are equivalent in birth rates, death rates, dispersal rates and speciation rates, when measured on a per-capita basis. This can be considered a null hypothesis to niche theory. Hubbell built on earlier neutral models, including Robert MacArthur and E.O. Wilson's theory of island biogeography and Stephen Jay Gould's concepts of symmetry and null models.
An "ecological community" is a group of trophically similar, sympatric species that actually or potentially compete in a local area for the same or similar resources. Under the Unified Theory, complex ecological interactions are permitted among individuals of an ecological community (such as competition and cooperation), provided that all individuals obey the same rules. Asymmetric phenomena such as parasitism and predation are ruled out by the terms of reference; but cooperative strategies such as swarming, and negative interaction such as competing for limited food or light are allowed (so long as all individuals behave alike).
The theory predicts the existence of a fundamental biodiversity constant, conventionally written "θ", that appears to govern species richness on a wide variety of spatial and temporal scales.
Saturation.
Although not strictly necessary for a neutral theory, many stochastic models of biodiversity assume a fixed, finite community size (total number of individual organisms). There are unavoidable physical constraints on the total number of individuals that can be packed into a given space (although space "per se" isn't necessarily a resource, it is often a useful surrogate variable for a limiting resource that is distributed over the landscape; examples would include sunlight or hosts, in the case of parasites).
If a wide range of species are considered (say, giant sequoia trees and duckweed, two species that have very different saturation densities), then the assumption of constant community size might not be very good, because density would be higher if the smaller species were monodominant. Because the Unified Theory refers only to communities of trophically similar, competing species, it is unlikely that population density will vary too widely from one place to another.
Hubbell considers the fact that community sizes are constant and interprets it as a general principle: "large landscapes are always biotically saturated with individuals". Hubbell thus treats communities as being of a fixed number of individuals, usually denoted by "J".
Exceptions to the saturation principle include disturbed ecosystems such as the Serengeti, where saplings are trampled by elephants and Blue wildebeests; or gardens, where certain species are systematically removed.
Species abundances.
When abundance data on natural populations are collected, two observations are almost universal: most of the species sampled are rare, and a small number of species are far more abundant than the rest.
Such observations typically generate a large number of questions. Why are the rare species rare? Why is the most abundant species so much more abundant than the median species abundance?
A non-neutral explanation for the rarity of rare species might suggest that rarity is a result of poor adaptation to local conditions. The UNTB suggests that it is not necessary to invoke adaptation or niche differences, because neutral dynamics alone can generate such patterns.
Species composition in any community will change randomly with time. Any particular abundance structure will have an associated probability. The UNTB predicts that the probability of a community of "J" individuals composed of "S" distinct species with abundances formula_0 for species 1, formula_1 for species 2, and so on up to formula_2 for species "S" is given by
formula_3
where formula_4 is the fundamental biodiversity number (formula_5 is the speciation rate), and formula_6 is the number of species that have "i" individuals in the sample.
This equation shows that the UNTB implies a nontrivial dominance-diversity equilibrium between speciation and extinction.
As an example, consider a community with 10 individuals and three species "a", "b", and "c" with abundances 3, 6 and 1 respectively. Then the formula above would allow us to assess the likelihood of different values of "θ". There are thus "S" = 3 species and formula_7, all other formula_8's being zero. The formula would give
formula_9
which could be maximized to yield an estimate for "θ" (in practice, numerical methods are used). The maximum likelihood estimate for "θ" is about 1.1478.
We could have labelled the species another way and counted the abundances as 1, 3, 6 instead (or 3, 1, 6, etc.). Logic tells us that the probability of observing a pattern of abundances will be the same for any permutation of those abundances. Here we would have
formula_10
and so on.
To account for this, it is helpful to consider only ranked abundances (that is, to sort the abundances before inserting into the formula). A ranked dominance-diversity configuration is usually written as formula_11 where formula_12 is the abundance of the "i"th most abundant species: formula_13 is the abundance of the most abundant, formula_14 the abundance of the second most abundant species, and so on. For convenience, the expression is usually "padded" with enough zeros to ensure that there are "J" species (the zeros indicating that the extra species have zero abundance).
It is now possible to determine the expected abundance of the "i"th most abundant species:
formula_15
where "C" is the total number of configurations, formula_16 is the abundance of the "i"th ranked species in the "k"th configuration, and formula_17 is the dominance-diversity probability. This formula is difficult to manipulate mathematically, but relatively simple to simulate computationally.
The model discussed so far is a model of a regional community, which Hubbell calls the metacommunity. Hubbell also acknowledged that on a local scale, dispersal plays an important role. For example, seeds are more likely to come from nearby parents than from distant parents. Hubbell introduced the parameter m, which denotes the probability of immigration into the local community from the metacommunity. If m = 1, dispersal is unlimited; the local community is just a random sample from the metacommunity and the formulas above apply. If m < 1, dispersal is limited and the local community is a dispersal-limited sample from the metacommunity, for which different formulas apply.
It has been shown that formula_18, the expected number of species with abundance n, may be calculated by
formula_19
where "θ" is the fundamental biodiversity number, "J" the community size, formula_20 is the gamma function, and formula_21. This formula is an approximation. The correct formula is derived in a series of papers, reviewed and synthesized by Etienne and Alonso in 2005:
formula_22
where formula_23 is a parameter that measures dispersal limitation.
formula_24 is zero for "n" > "J", as no species can have more individuals than the community contains.
This formula is important because it allows a quick evaluation of the Unified Theory. It is not suitable for testing the theory. For this purpose, the appropriate likelihood function should be used. For the metacommunity this was given above. For the local community with dispersal limitation it is given by:
formula_25
Here, the formula_26 for formula_27 are coefficients fully determined by the data, being defined as
formula_28
This seemingly complicated formula involves Stirling numbers and Pochhammer symbols, but can be very easily calculated.
An example of a species abundance curve can be found in Scientific American.
Stochastic modelling of species abundances.
UNTB distinguishes between a dispersal-limited local community of size formula_29 and a so-called metacommunity from which species can (re)immigrate and which acts as a heat bath to the local community. The distribution of species in the metacommunity is given by a dynamic equilibrium of speciation and extinction. Both community dynamics are modelled by appropriate urn processes, where each individual is represented by a ball with a color corresponding to its species. With a certain rate formula_30 randomly chosen individuals reproduce, i.e. add another ball of their own color to the urn. Since one basic assumption is saturation, this reproduction has to happen at the cost of another random individual from the urn which is removed. At a different rate formula_31 single individuals in the metacommunity are replaced by mutants of an entirely new species. Hubbell calls this simplified model for speciation a point mutation, using the terminology of the Neutral theory of molecular evolution. The urn scheme for the metacommunity of formula_32 individuals is the following.
At each time step, take one of the two possible actions: either a randomly chosen individual reproduces, with another randomly chosen individual removed to make room for the offspring, or a randomly chosen individual is replaced by the first member of an entirely new species.
The size formula_32 of the metacommunity does not change. This is a point process in time. The length of the time steps is distributed exponentially. For simplicity one can assume that each time step is as long as the mean time between two changes, which can be derived from the reproduction and mutation rates formula_30 and formula_31. The probability formula_5 is given as formula_34.
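A minimal simulation sketch of this urn process (not part of the original article; the community size, speciation probability, number of steps and the exact update rule are illustrative assumptions based on the description above):

import random
from collections import Counter

def metacommunity_urn(J_M=200, nu=0.02, steps=20000, seed=1):
    # Zero-sum urn: at each step a randomly chosen individual dies and is replaced
    # either by a copy of another randomly chosen individual (probability 1 - nu)
    # or by the first member of an entirely new species (probability nu).
    random.seed(seed)
    community = [0] * J_M          # ball colours = species labels
    next_species = 1
    for _ in range(steps):
        dead = random.randrange(J_M)
        if random.random() < nu:   # point-mutation speciation
            community[dead] = next_species
            next_species += 1
        else:                      # neutral reproduction of a random individual
            community[dead] = community[random.randrange(J_M)]
    return Counter(community)

abundances = metacommunity_urn()
print(len(abundances), "species; most abundant:", abundances.most_common(3))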
The species abundance distribution for this urn process is given by Ewens's sampling formula which was originally derived in 1972 for the distribution of alleles under neutral mutations. The expected number formula_35 of species in the metacommunity having exactly formula_36 individuals is:
formula_37
where formula_38 is called the fundamental biodiversity number. For large metacommunities and formula_39 one recovers the Fisher log-series as the species distribution.
formula_40
The urn scheme for the local community of fixed size formula_29 is very similar to the one for the metacommunity.
At each time step, take one of the two actions: either a randomly chosen individual in the local community is replaced by the offspring of another randomly chosen local individual, or, with a probability governed by the immigration (dispersal) parameter, it is replaced by an immigrant drawn from the metacommunity.
The metacommunity is changing on a much larger timescale and is assumed to be fixed during the evolution of the local community. The resulting distribution of species in the local community and expected values depend on four parameters, formula_29, formula_32, formula_43 and formula_42 (or formula_44), and are derived by Etienne and Alonso (2005), including several simplifying limit cases like the one presented in the previous section (there called formula_45). The parameter formula_42 is a dispersal parameter. If formula_46 then the local community is just a sample from the metacommunity. For formula_47 the local community is completely isolated from the metacommunity and all species will go extinct except one. This case has been analyzed by Hubbell himself. The case formula_48 is characterized by a unimodal species distribution in a Preston diagram and often fitted by a log-normal distribution. This is understood as an intermediate state between domination of the most common species and a sampling from the metacommunity, where singleton species are most abundant. UNTB thus predicts that in dispersal-limited communities rare species become even rarer. The log-normal distribution describes the maximum and the abundance of common species very well but underestimates the number of very rare species considerably, which becomes apparent only for very large sample sizes.
Species-area relationships.
The Unified Theory unifies "biodiversity", as measured by species-abundance curves, with "biogeography", as measured by species-area curves. Species-area relationships show the rate at which species diversity increases with area. The topic is of great interest to conservation biologists in the design of reserves, as it is often desired to harbour as many species as possible.
The most commonly encountered relationship is the power law given by
formula_49
where "S" is the number of species found, "A" is the area sampled, and "c" and "z" are constants. This relationship, with different constants, has been found to fit a wide range of empirical data.
From the perspective of the Unified Theory, it is convenient to consider "S" as a function of total community size "J". Then formula_50 for some constant "k", and if this relationship were exactly true, the species-area line would be straight on log scales. It is typically found that the curve is not straight: its slope is steep at small areas, shallower at intermediate areas, and steep again at the largest areas.
The formula for species composition may be used to calculate the expected number of species present in a community under the assumptions of the Unified Theory. In symbols
formula_51
where θ is the fundamental biodiversity number. This formula specifies the expected number of species sampled in a community of size "J". The last term, formula_52, is the expected number of "new" species encountered when adding one new individual to the community. This is an increasing function of θ and a decreasing function of "J", as expected.
Making the substitution formula_53 (see the section on saturation above), the expected number of species becomes formula_54.
The formula above may be approximated to an integral giving
formula_55
This formulation is predicated on a random placement of individuals.
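A short Python check of the exact sum against the integral approximation (illustrative only; the values of θ and "J" are arbitrary):

import math

def expected_species(theta, J):
    # exact sum theta/theta + theta/(theta+1) + ... + theta/(theta+J-1)
    return sum(theta / (theta + k) for k in range(J))

def expected_species_integral(theta, J):
    # integral approximation 1 + theta*ln(1 + (J-1)/theta)
    return 1 + theta * math.log(1 + (J - 1) / theta)

print(expected_species(5.0, 1000))           # approximately 27.0
print(expected_species_integral(5.0, 1000))  # approximately 27.5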
Example.
Consider the following (synthetic) dataset of 27 individuals:
a,a,a,a,a,a,a,a,a,a,b,b,b,b,c,c,c,c,d,d,d,d,e,f,g,h,i
There are thus 27 individuals of 9 species ("a" to "i") in the sample. Tabulating this would give:
a b c d e f g h i
10 4 4 4 1 1 1 1 1
indicating that species "a" is the most abundant with 10 individuals and species "e" to "i" are singletons. Tabulating this table in turn gives:
species abundance 1 2 3 4 5 6 7 8 9 10
number of species 5 0 0 3 0 0 0 0 0 1
On the second row, the 5 in the first column means that five species, species "e" through "i", have abundance one. The following two zeros in columns 2 and 3 mean that zero species have abundance 2 or 3. The 3 in column 4 means that three species, species "b", "c", and "d", have abundance four. The final 1 in column 10 means that one species, species "a", has abundance 10.
This type of dataset is typical in biodiversity studies. Observe how more than half the biodiversity (as measured by species count) is due to singletons.
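Both tables can be reproduced with a few lines of Python (purely illustrative):

from collections import Counter

sample = list("aaaaaaaaaabbbbccccddddefghi")   # the 27 individuals listed above
abundance = Counter(sample)                     # abundance of each species
sad = Counter(abundance.values())               # species-abundance distribution
print(dict(abundance))   # {'a': 10, 'b': 4, 'c': 4, 'd': 4, 'e': 1, 'f': 1, 'g': 1, 'h': 1, 'i': 1}
print(dict(sad))         # {10: 1, 4: 3, 1: 5}, i.e. 5 singletons, 3 species of abundance 4, 1 of abundance 10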
For real datasets, the species abundances are binned into logarithmic categories, usually using base 2, which gives bins of abundance 0–1, abundance 1–2, abundance 2–4, abundance 4–8, etc. Such abundance classes are called "octaves"; early developers of this concept included F. W. Preston, and histograms showing the number of species as a function of abundance octave are known as Preston diagrams.
These bins are not mutually exclusive: a species with abundance 4, for example, could be considered as lying in the 2-4 abundance class or the 4-8 abundance class. Species with an abundance of an exact power of 2 (i.e. 2, 4, 8, 16, etc.) are conventionally considered as having 50% membership in the lower abundance class and 50% membership in the upper class. Such species are thus considered to be evenly split between the two adjacent classes (apart from singletons, which are classified into the rarest category). Thus in the example above, the Preston abundances would be
abundance class 1 1-2 2-4 4-8 8-16
species 5 0 1.5 1.5 1
The three species of abundance four thus contribute 1.5 to abundance class 2–4 and 1.5 to abundance class 4–8.
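A Python sketch of this binning convention (an illustration; the indexing of the octaves is a choice made here, not a standard):

import math
from collections import defaultdict

def preston_classes(abundances):
    # octave 0 is the singleton class; octave k >= 1 covers abundances in (2**(k-1), 2**k];
    # species whose abundance is an exact power of two are split 50/50 between adjacent octaves
    octaves = defaultdict(float)
    for n in abundances:
        if n == 1:
            octaves[0] += 1.0
        elif math.log2(n).is_integer():
            k = int(math.log2(n))
            octaves[k] += 0.5
            octaves[k + 1] += 0.5
        else:
            octaves[math.ceil(math.log2(n))] += 1.0
    return dict(octaves)

print(preston_classes([10, 4, 4, 4, 1, 1, 1, 1, 1]))
# {0: 5.0, 2: 1.5, 3: 1.5, 4: 1.0} -> classes 1, 2-4, 4-8 and 8-16 (class 1-2 is empty)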
The above method of analysis cannot account for species that are unsampled: that is, species sufficiently rare to have been recorded zero times. Preston diagrams are thus truncated at zero abundance. Preston called this the "veil line" and noted that the cutoff point would move as more individuals are sampled.
Dynamics.
All biodiversity patterns previously described are related to time-independent quantities. For biodiversity evolution and species preservation, it is crucial to compare the dynamics of ecosystems with models (Leigh, 2007). An easily accessible index of the underlying evolution is the so-called species turnover distribution (STD), defined as the probability P(r,t) that the population of any species has varied by a fraction r after a given time t.
A neutral model that can analytically predict both the relative species abundance (RSA) at steady-state and the STD at time t has been presented in Azaele et al. (2006). Within this framework the population of any species is represented by a continuous (random) variable x, whose evolution is governed by the following Langevin equation:
formula_56
where b is the immigration rate from a large regional community, formula_57 represents competition for finite resources and D is related to demographic stochasticity; formula_58 is a Gaussian white noise. The model can also be derived as a continuous approximation of a master equation, where birth and death rates are independent of species, and predicts that at steady-state the RSA is simply a gamma distribution.
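A minimal Euler–Maruyama discretisation of this Langevin equation might look as follows (a sketch only; the parameter values are arbitrary and clipping at zero is a crude way of keeping the population non-negative):

import numpy as np

def simulate_population(b=0.1, tau=50.0, D=0.5, x0=5.0, T=1000.0, dt=0.01, seed=0):
    # Euler-Maruyama integration of dx = (b - x/tau) dt + sqrt(D*x) dW
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    x = np.empty(steps + 1)
    x[0] = x0
    for i in range(steps):
        drift = b - x[i] / tau
        diffusion = np.sqrt(D * max(x[i], 0.0))
        x[i + 1] = max(x[i] + drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal(), 0.0)
    return x

trajectory = simulate_population()
print(trajectory.mean())   # the stationary mean of the process is b*tau = 5; the time average fluctuates around it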
From the exact time-dependent solution of the previous equation, one can exactly calculate the STD at time t under stationary conditions:
formula_59
This formula provides good fits to data collected in the Barro Colorado tropical forest from 1990 to 2000. From the best fit one can estimate formula_60 ~ 3500 years, with a broad uncertainty due to the relatively short time interval of the sample. This parameter can be interpreted as the relaxation time of the system, i.e. the time the system needs to recover from a perturbation of the species distribution. In the same framework, the estimated mean species lifetime is very close to the fitted temporal scale formula_60. This suggests that the neutral assumption could correspond to a scenario in which species originate and become extinct on the same timescale as the fluctuations of the whole ecosystem.
Testing.
The theory has provoked much controversy as it "abandons" the role of ecology when modelling ecosystems. The theory has been criticized as it requires an equilibrium, yet climatic and geographical conditions are thought to change too frequently for this to be attained.
Tests on bird and tree abundance data demonstrate that the theory is usually a poorer match to the data than alternative null hypotheses that use fewer parameters (a log-normal model with two tunable parameters, compared to the neutral theory's three) and are thus more parsimonious. The theory also fails to describe coral reef communities, studied by Dornelas et al., and is a poor fit to data in intertidal communities. It also fails to explain why families of tropical trees have statistically highly correlated numbers of species in phylogenetically unrelated and geographically distant forest plots in Central and South America, Africa, and South East Asia.
While the theory has been heralded as a valuable tool for palaeontologists, little work has so far been done to test the theory against the fossil record.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n_1"
},
{
"math_id": 1,
"text": "n_2"
},
{
"math_id": 2,
"text": "n_S"
},
{
"math_id": 3,
"text": "\n\\Pr(n_1,n_2,\\ldots,n_S| \\theta, J)=\n\\frac{J!\\theta^S}\n{\n 1^{\\phi_1}2^{\\phi_2}\\cdots J^{\\phi_J}\n \\phi_1!\\phi_2!\\cdots\\phi_J!\n \\Pi_{k=1}^J(\\theta+k-1)\n}\n"
},
{
"math_id": 4,
"text": "\\theta=2J\\nu"
},
{
"math_id": 5,
"text": "\\nu"
},
{
"math_id": 6,
"text": "\\phi_i"
},
{
"math_id": 7,
"text": "\\phi_1=\\phi_3=\\phi_6=1"
},
{
"math_id": 8,
"text": "\\phi"
},
{
"math_id": 9,
"text": "\n\\Pr(3,6,1| \\theta,10)=\n\\frac{10!\\theta^3}{\n1^1\\cdot 3^1\\cdot 6^1\n\\cdot\n1!1!1!\n\\cdot\n\\theta(\\theta+1)(\\theta+2)\\cdots(\\theta+9)}\n"
},
{
"math_id": 10,
"text": "\\Pr(3;3,6,1)=\\Pr(3;1,3,6)=\\Pr(3;3,1,6)"
},
{
"math_id": 11,
"text": "\\Pr(S;r_1,r_2,\\ldots,r_s,0,\\ldots,0)"
},
{
"math_id": 12,
"text": "r_i"
},
{
"math_id": 13,
"text": "r_1"
},
{
"math_id": 14,
"text": "r_2"
},
{
"math_id": 15,
"text": "\nE(r_i)=\\sum_{k=1}^C\nr_i(k)\\cdot \\Pr(S;r_1,r_2,\\ldots,r_s,0,\\ldots,0)\n"
},
{
"math_id": 16,
"text": "r_i(k)"
},
{
"math_id": 17,
"text": "Pr(\\ldots)"
},
{
"math_id": 18,
"text": "\\langle \\phi_n \\rangle "
},
{
"math_id": 19,
"text": "\n\\theta\\frac{J!}{n!(J-n)!}\n\\frac{\\Gamma(\\gamma)}{\\Gamma(J+\\gamma)}\n\\int_{y=0}^\\gamma\n\\frac{\\Gamma(n+y)}{\\Gamma(1+y)}\n\\frac{\\Gamma(J-n+\\gamma-y)}{\\Gamma(\\gamma-y)}\n\\exp(-y\\theta/\\gamma)\\,dy\n"
},
{
"math_id": 20,
"text": "\\Gamma"
},
{
"math_id": 21,
"text": "\\gamma=(J-1)m/(1-m)"
},
{
"math_id": 22,
"text": "\n\\frac{\\theta }{(I)_{J}} {J \\choose n}\n\\int_{0}^{1}(Ix)_{n}(I(1-x))_{J-n}\\frac{(1-x)^{\\theta -1}}{x}\\,dx \n"
},
{
"math_id": 23,
"text": "I = (J-1)*m/(1-m)"
},
{
"math_id": 24,
"text": "\\langle \\phi_n \\rangle"
},
{
"math_id": 25,
"text": "\n\\Pr(n_1,n_2,\\ldots,n_S| \\theta, m, J)=\n\\frac{J!}{\\prod_{i=1}^{S}n_{i}\n\\prod_{j=1}^{J}\\Phi_{j}!}\\frac{\\theta ^{S}}{(I)_{J}}\n\\sum_{A=S}^{J}K(\\overrightarrow{D},A)\\frac{I^{A}}{(\\theta) _{A}}\n"
},
{
"math_id": 26,
"text": "K(\\overrightarrow{D},A)"
},
{
"math_id": 27,
"text": "A=S,...,J"
},
{
"math_id": 28,
"text": "\nK(\\overrightarrow{D},A):=\\sum_{\\{a_{1},...,a_{S}|\\sum_{i=1}^{S}a_{i}=A\\}}\n\\prod_{i=1}^{S}\\frac{\\overline{s}\\left( n_{i},a_{i}\\right) \\overline{s}\n\\left( a_{i},1\\right) }{\\overline{s}\\left( n_{i},1\\right) }\n"
},
{
"math_id": 29,
"text": "J"
},
{
"math_id": 30,
"text": "r"
},
{
"math_id": 31,
"text": "\\mu"
},
{
"math_id": 32,
"text": "J_M"
},
{
"math_id": 33,
"text": "(1-\\nu)"
},
{
"math_id": 34,
"text": "\\nu=\\mu/(r+\\mu)"
},
{
"math_id": 35,
"text": "S_M(n)"
},
{
"math_id": 36,
"text": "n"
},
{
"math_id": 37,
"text": "\nS_M(n) = \\frac{\\theta}{n}\\frac{\\Gamma(J_M+1)\\Gamma(J_M+\\theta-n)}{\\Gamma(J_M+1-n)\\Gamma(J_M+\\theta)}\n"
},
{
"math_id": 38,
"text": "\\theta=(J_M-1)\\nu/(1-\\nu)\\approx J_M\\nu"
},
{
"math_id": 39,
"text": "n\\ll J_M"
},
{
"math_id": 40,
"text": "\nS_M(n) \\approx \\frac{\\theta}{n} \\left( \\frac{J_M}{J_M+\\theta} \\right)^n\n"
},
{
"math_id": 41,
"text": "(1-m)"
},
{
"math_id": 42,
"text": "m"
},
{
"math_id": 43,
"text": "\\theta"
},
{
"math_id": 44,
"text": "I"
},
{
"math_id": 45,
"text": "\\langle\\phi_n\\rangle"
},
{
"math_id": 46,
"text": "m=1"
},
{
"math_id": 47,
"text": "m=0"
},
{
"math_id": 48,
"text": "0<m<1"
},
{
"math_id": 49,
"text": "\nS=cA^z"
},
{
"math_id": 50,
"text": "S=kJ^z"
},
{
"math_id": 51,
"text": "\nE\\left\\{S|\\theta,J\\right\\}=\n\\frac{\\theta}{\\theta }+\n\\frac{\\theta}{\\theta+1}+\n\\frac{\\theta}{\\theta+2}+\n\\cdots +\n\\frac{\\theta}{\\theta+J-1}\n"
},
{
"math_id": 52,
"text": "\\theta/(\\theta+J-1)"
},
{
"math_id": 53,
"text": "J=\\rho A"
},
{
"math_id": 54,
"text": "\\Sigma\\theta/(\\theta+\\rho A-1)"
},
{
"math_id": 55,
"text": "\nS(\\theta)=\n1+\\theta\\ln\\left(1+\\frac{J-1}{\\theta}\\right).\n"
},
{
"math_id": 56,
"text": "\n\\dot{x}=b-x/\\tau+\\sqrt{Dx}\\xi(t)\n"
},
{
"math_id": 57,
"text": "-x/\\tau"
},
{
"math_id": 58,
"text": "\\xi(t)"
},
{
"math_id": 59,
"text": "\nP(r,t)=A\\frac{\\lambda+1}{\\lambda}\\frac{(e^{t/\\tau})^{b/2D}}{1-e^{-t/\\tau}}\\left(\\frac{\\sinh(\\frac{t}{2\\tau})}{\\lambda}\\right)^{\\frac{b}{D}+1}\\left(\\frac{4\\lambda^2}{(\\lambda+1)^2e^{t/\\tau}-4\\lambda}\\right)^{\\frac{b}{D}+\\frac{1}{2}}.\n"
},
{
"math_id": 60,
"text": "\\tau"
}
] |
https://en.wikipedia.org/wiki?curid=635546
|
63556609
|
Ursescu theorem
|
Generalization of closed graph, open mapping, and uniform boundedness theorem
In mathematics, particularly in functional analysis and convex analysis, the Ursescu theorem is a theorem that generalizes the closed graph theorem, the open mapping theorem, and the uniform boundedness principle.
Ursescu Theorem.
The following notation and notions are used, where formula_0 is a set-valued function and formula_1 is a non-empty subset of a topological vector space formula_2:
Statement.
<templatestyles src="Math_theorem/styles.css" />
Theorem (Ursescu) —
Let formula_2 be a complete semi-metrizable locally convex topological vector space and formula_0 be a closed convex multifunction with non-empty domain.
Assume that formula_41 is a barrelled space for some/every formula_42
Assume that formula_43 and let formula_44 (so that formula_45).
Then for every neighborhood formula_46 of formula_47 in formula_48 formula_49 belongs to the relative interior of formula_50 in formula_51 (that is, formula_52).
In particular, if formula_53 then formula_54
Corollaries.
Closed graph theorem.
<templatestyles src="Math_theorem/styles.css" />
Closed graph theorem —
Let formula_2 and formula_55 be Fréchet spaces and formula_56 be a linear map. Then formula_57 is continuous if and only if the graph of formula_57 is closed in formula_23
<templatestyles src="Math_proof/styles.css" />Proof
For the non-trivial direction, assume that the graph of formula_57 is closed and let formula_58 It is easy to see that formula_59 is closed and convex and that its image is formula_6
Given formula_13 formula_60 belongs to formula_61 so that for every open neighborhood formula_62 of formula_63 in formula_64 formula_65 is a neighborhood of formula_66 in formula_6
Thus formula_57 is continuous at formula_67
Q.E.D.
Uniform boundedness principle.
<templatestyles src="Math_theorem/styles.css" />
Uniform boundedness principle —
Let formula_2 and formula_55 be Fréchet spaces and formula_56 be a bijective linear map. Then formula_57 is continuous if and only if formula_68 is continuous. Furthermore, if formula_57 is continuous then formula_57 is an isomorphism of Fréchet spaces.
<templatestyles src="Math_proof/styles.css" />Proof
Apply the closed graph theorem to formula_57 and formula_69
Q.E.D.
Open mapping theorem.
<templatestyles src="Math_theorem/styles.css" />
Open mapping theorem —
Let formula_2 and formula_55 be Fréchet spaces and formula_56 be a continuous surjective linear map. Then T is an open map.
<templatestyles src="Math_proof/styles.css" />Proof
Clearly, formula_57 is a closed and convex relation whose image is formula_70
Let formula_46 be a non-empty open subset of formula_48 let formula_71 be in formula_72 and let formula_66 in formula_46 be such that formula_73
From the Ursescu theorem it follows that formula_74 is a neighborhood of formula_75 Q.E.D.
Additional corollaries.
The following notation and notions are used for these corollaries, where formula_0 is a set-valued function, formula_1 is a non-empty subset of a topological vector space formula_2:
<templatestyles src="Math_theorem/styles.css" />
Corollary —
Let formula_2 be a barreled first countable space and let formula_81 be a subset of formula_6 Then:
Related theorems.
Simons' theorem.
<templatestyles src="Math_theorem/styles.css" />
Simons' theorem —
Let formula_2 and formula_55 be first countable with formula_2 locally convex. Suppose that formula_0 is a multimap with non-empty domain that satisfies condition (Hw"x") or else assume that formula_2 is a Fréchet space and that formula_17 is lower ideally convex.
Assume that formula_41 is barreled for some/every formula_42
Assume that formula_43 and let formula_84
Then for every neighborhood formula_46 of formula_47 in formula_48 formula_49 belongs to the relative interior of formula_50 in formula_51 (i.e. formula_52).
In particular, if formula_53 then formula_54
Robinson–Ursescu theorem.
The implication (1) formula_85 (2) in the following theorem is known as the Robinson–Ursescu theorem.
<templatestyles src="Math_theorem/styles.css" />
Robinson–Ursescu theorem —
Let formula_86 and formula_87 be normed spaces and formula_0 be a multimap with non-empty domain.
Suppose that formula_55 is a barreled space, the graph of formula_17 verifies condition (Hw"x"), and that formula_88
Let formula_89 (resp. formula_90) denote the closed unit ball in formula_2 (resp. formula_55) (so formula_91).
Then the following are equivalent:
Notes.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathcal{R} : X \\rightrightarrows Y"
},
{
"math_id": 1,
"text": "S"
},
{
"math_id": 2,
"text": "X"
},
{
"math_id": 3,
"text": "\\operatorname{aff} S"
},
{
"math_id": 4,
"text": "\\operatorname{span} S."
},
{
"math_id": 5,
"text": "S^{i} := \\operatorname{aint}_X S"
},
{
"math_id": 6,
"text": "X."
},
{
"math_id": 7,
"text": "{}^{i}S:= \\operatorname{aint}_{\\operatorname{aff}(S - S)} S"
},
{
"math_id": 8,
"text": "\\operatorname{aff}(S - S)"
},
{
"math_id": 9,
"text": "{}^{ib}S := {}^{i}S"
},
{
"math_id": 10,
"text": "\\operatorname{span} \\left(S - s_0\\right)"
},
{
"math_id": 11,
"text": "s_0 \\in S"
},
{
"math_id": 12,
"text": "{}^{ib}S := \\varnothing"
},
{
"math_id": 13,
"text": "x \\in X,"
},
{
"math_id": 14,
"text": "x \\in {}^{ib} S"
},
{
"math_id": 15,
"text": "S - x"
},
{
"math_id": 16,
"text": "\\cup_{n \\in \\N} n (S - x)"
},
{
"math_id": 17,
"text": "\\mathcal{R}"
},
{
"math_id": 18,
"text": "\\operatorname{Dom} \\mathcal{R} := \\{ x \\in X : \\mathcal{R}(x) \\neq \\varnothing \\}."
},
{
"math_id": 19,
"text": "\\operatorname{Im} \\mathcal{R} := \\cup_{x \\in X} \\mathcal{R}(x)."
},
{
"math_id": 20,
"text": "A \\subseteq X,"
},
{
"math_id": 21,
"text": "\\mathcal{R}(A) := \\cup_{x \\in A} \\mathcal{R}(x)."
},
{
"math_id": 22,
"text": "\\operatorname{gr} \\mathcal{R} := \\{ (x, y) \\in X \\times Y : y \\in \\mathcal{R}(x) \\}."
},
{
"math_id": 23,
"text": "X \\times Y."
},
{
"math_id": 24,
"text": "x_0, x_1 \\in X"
},
{
"math_id": 25,
"text": "r \\in [0, 1],"
},
{
"math_id": 26,
"text": "r \\mathcal{R}\\left(x_0\\right) + (1 - r) \\mathcal{R}\\left(x_1\\right) \\subseteq \\mathcal{R} \\left(r x_0 + (1 - r) x_1\\right)."
},
{
"math_id": 27,
"text": "\\mathcal{R}^{-1} : Y \\rightrightarrows X"
},
{
"math_id": 28,
"text": "\\mathcal{R}^{-1}(y) := \\{ x \\in X : y \\in \\mathcal{R}(x) \\}."
},
{
"math_id": 29,
"text": "B \\subseteq Y,"
},
{
"math_id": 30,
"text": "\\mathcal{R}^{-1}(B) := \\cup_{y \\in B} \\mathcal{R}^{-1}(y)."
},
{
"math_id": 31,
"text": "f : X \\to Y"
},
{
"math_id": 32,
"text": "f^{-1} : Y \\rightrightarrows X"
},
{
"math_id": 33,
"text": "f"
},
{
"math_id": 34,
"text": "f : X \\rightrightarrows Y"
},
{
"math_id": 35,
"text": "x \\mapsto \\{ f(x)\\}."
},
{
"math_id": 36,
"text": "\\operatorname{int}_T S"
},
{
"math_id": 37,
"text": "T,"
},
{
"math_id": 38,
"text": "S \\subseteq T."
},
{
"math_id": 39,
"text": "\\operatorname{rint} S := \\operatorname{int}_{\\operatorname{aff} S} S"
},
{
"math_id": 40,
"text": "\\operatorname{aff} S."
},
{
"math_id": 41,
"text": "\\operatorname{span} (\\operatorname{Im} \\mathcal{R} - y)"
},
{
"math_id": 42,
"text": "y \\in \\operatorname{Im} \\mathcal{R}."
},
{
"math_id": 43,
"text": "y_0 \\in {}^{i}(\\operatorname{Im} \\mathcal{R})"
},
{
"math_id": 44,
"text": "x_0 \\in \\mathcal{R}^{-1}\\left(y_0\\right)"
},
{
"math_id": 45,
"text": "y_0 \\in \\mathcal{R}\\left(x_0\\right)"
},
{
"math_id": 46,
"text": "U"
},
{
"math_id": 47,
"text": "x_0"
},
{
"math_id": 48,
"text": "X,"
},
{
"math_id": 49,
"text": "y_0"
},
{
"math_id": 50,
"text": "\\mathcal{R}(U)"
},
{
"math_id": 51,
"text": "\\operatorname{aff} (\\operatorname{Im} \\mathcal{R})"
},
{
"math_id": 52,
"text": "y_0 \\in \\operatorname{int}_{\\operatorname{aff} (\\operatorname{Im} \\mathcal{R})} \\mathcal{R}(U)"
},
{
"math_id": 53,
"text": "{}^{ib}(\\operatorname{Im} \\mathcal{R}) \\neq \\varnothing"
},
{
"math_id": 54,
"text": "{}^{ib}(\\operatorname{Im} \\mathcal{R}) = {}^{i}(\\operatorname{Im} \\mathcal{R}) = \\operatorname{rint} (\\operatorname{Im} \\mathcal{R})."
},
{
"math_id": 55,
"text": "Y"
},
{
"math_id": 56,
"text": "T : X \\to Y"
},
{
"math_id": 57,
"text": "T"
},
{
"math_id": 58,
"text": "\\mathcal{R} := T^{-1} : Y \\rightrightarrows X."
},
{
"math_id": 59,
"text": "\\operatorname{gr} \\mathcal{R}"
},
{
"math_id": 60,
"text": "(Tx, x)"
},
{
"math_id": 61,
"text": "Y \\times X"
},
{
"math_id": 62,
"text": "V"
},
{
"math_id": 63,
"text": "Tx"
},
{
"math_id": 64,
"text": "Y,"
},
{
"math_id": 65,
"text": "\\mathcal{R}(V) = T^{-1}(V)"
},
{
"math_id": 66,
"text": "x"
},
{
"math_id": 67,
"text": "x."
},
{
"math_id": 68,
"text": "T^{-1} : Y \\to X"
},
{
"math_id": 69,
"text": "T^{-1}."
},
{
"math_id": 70,
"text": "Y."
},
{
"math_id": 71,
"text": "y"
},
{
"math_id": 72,
"text": "T(U),"
},
{
"math_id": 73,
"text": "y = Tx."
},
{
"math_id": 74,
"text": "T(U)"
},
{
"math_id": 75,
"text": "y."
},
{
"math_id": 76,
"text": "\\sum_{i=1}^\\infty r_i s_i"
},
{
"math_id": 77,
"text": "s_i \\in S"
},
{
"math_id": 78,
"text": "\\sum_{i=1}^\\infty r_i = 1"
},
{
"math_id": 79,
"text": "\\left(s_i\\right)_{i=1}^{\\infty}"
},
{
"math_id": 80,
"text": "S."
},
{
"math_id": 81,
"text": "C"
},
{
"math_id": 82,
"text": "C^{i} = \\operatorname{int} C."
},
{
"math_id": 83,
"text": "C^{i} = \\operatorname{int} C = \\operatorname{int} \\left(\\operatorname{cl} C\\right) = \\left(\\operatorname{cl} C\\right)^i."
},
{
"math_id": 84,
"text": "x_0 \\in \\mathcal{R}^{-1}\\left(y_0\\right)."
},
{
"math_id": 85,
"text": "\\implies"
},
{
"math_id": 86,
"text": "(X, \\|\\,\\cdot\\,\\|)"
},
{
"math_id": 87,
"text": "(Y, \\|\\,\\cdot\\,\\|)"
},
{
"math_id": 88,
"text": "(x_0, y_0) \\in \\operatorname{gr} \\mathcal{R}."
},
{
"math_id": 89,
"text": "C_X"
},
{
"math_id": 90,
"text": "C_Y"
},
{
"math_id": 91,
"text": "C_X = \\{ x \\in X : \\| x \\| \\leq 1 \\}"
},
{
"math_id": 92,
"text": "\\operatorname{Im} \\mathcal{R}."
},
{
"math_id": 93,
"text": "y_0 \\in \\operatorname{int} \\mathcal{R}\\left(x_0 + C_X\\right)."
},
{
"math_id": 94,
"text": "B > 0"
},
{
"math_id": 95,
"text": "0 \\leq r \\leq 1,"
},
{
"math_id": 96,
"text": "y_0 + B r C_Y \\subseteq \\mathcal{R} \\left(x_0 + r C_X\\right)."
},
{
"math_id": 97,
"text": "A > 0"
},
{
"math_id": 98,
"text": "x \\in x_0 + A C_X"
},
{
"math_id": 99,
"text": "y \\in y_0 + A C_Y,"
},
{
"math_id": 100,
"text": "d\\left(x, \\mathcal{R}^{-1}(y)\\right) \\leq B \\cdot d(y, \\mathcal{R}(x))."
},
{
"math_id": 101,
"text": "x \\in X"
},
{
"math_id": 102,
"text": "y \\in y_0 + B C_Y,"
},
{
"math_id": 103,
"text": "d \\left(x, \\mathcal{R}^{-1}(y)\\right) \\leq \\frac{1 + \\left\\|x - x_0\\right\\|}{B - \\left\\|y - y_0\\right\\|} \\cdot d(y, \\mathcal{R}(x))."
}
] |
https://en.wikipedia.org/wiki?curid=63556609
|
63558794
|
Convex series
|
In mathematics, particularly in functional analysis and convex analysis, a convex series is a series of the form formula_0 where formula_1 are all elements of a topological vector space formula_2, and all formula_3 are non-negative real numbers that sum to formula_4 (that is, such that formula_5).
Types of Convex series.
Suppose that formula_6 is a subset of formula_2 and formula_0 is a convex series in formula_7
Types of subsets.
Convex series allow for the definition of special types of subsets that are well-behaved and have very good stability properties.
If formula_6 is a subset of a topological vector space formula_2 then formula_6 is said to be a:
The empty set is convex, ideally convex, bcs-complete, cs-complete, and cs-closed.
Conditions (Hx) and (Hwx).
If formula_2 and formula_12 are topological vector spaces, formula_16 is a subset of formula_17 and formula_18 then formula_16 is said to satisfy:
Multifunctions.
The following notation and notions are used, where formula_24 and formula_25 are multifunctions and formula_26 is a non-empty subset of a topological vector space formula_27
Relationships.
Let formula_44 be topological vector spaces, formula_45 and formula_46 The following implications hold:
complete formula_47 cs-complete formula_47 cs-closed formula_47 lower cs-closed (lcs-closed) and ideally convex.
lower cs-closed (lcs-closed) or ideally convex formula_47 lower ideally convex (li-convex) formula_47 convex.
(H"x") formula_47 (Hw"x") formula_47 convex.
The converse implications do not hold in general.
If formula_2 is complete then,
If formula_12 is complete then,
If formula_2 is locally convex and formula_50 is bounded then,
Preserved properties.
Let formula_51 be a linear subspace of formula_7 Let formula_24 and formula_25 be multifunctions.
Properties.
If formula_6 is a non-empty convex subset of a topological vector space formula_2 then,
Let formula_2 be a Fréchet space, formula_12 be a topological vector space, formula_64 and formula_65 be the canonical projection. If formula_16 is lower ideally convex (resp. lower cs-closed) then the same is true of formula_66
If formula_2 is a barreled first countable space and if formula_67 then:
Notes.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\sum_{i=1}^{\\infty} r_i x_i"
},
{
"math_id": 1,
"text": "x_1, x_2, \\ldots"
},
{
"math_id": 2,
"text": "X"
},
{
"math_id": 3,
"text": "r_1, r_2, \\ldots"
},
{
"math_id": 4,
"text": "1"
},
{
"math_id": 5,
"text": "\\sum_{i=1}^{\\infty} r_i = 1"
},
{
"math_id": 6,
"text": "S"
},
{
"math_id": 7,
"text": "X."
},
{
"math_id": 8,
"text": "\\left\\{ x_1, x_2, \\ldots \\right\\}"
},
{
"math_id": 9,
"text": "\\left(\\sum_{i=1}^n r_i x_i\\right)_{n=1}^{\\infty}"
},
{
"math_id": 10,
"text": "X,"
},
{
"math_id": 11,
"text": "S."
},
{
"math_id": 12,
"text": "Y"
},
{
"math_id": 13,
"text": "B"
},
{
"math_id": 14,
"text": "X \\times Y"
},
{
"math_id": 15,
"text": "X \\times Y."
},
{
"math_id": 16,
"text": "A"
},
{
"math_id": 17,
"text": "X \\times Y,"
},
{
"math_id": 18,
"text": "x \\in X"
},
{
"math_id": 19,
"text": "\\sum_{i=1}^{\\infty} r_i (x_i, y_i)"
},
{
"math_id": 20,
"text": "\\sum_{i=1}^{\\infty} r_i y_i"
},
{
"math_id": 21,
"text": "y"
},
{
"math_id": 22,
"text": "x"
},
{
"math_id": 23,
"text": "(x, y) \\in A."
},
{
"math_id": 24,
"text": "\\mathcal{R} : X \\rightrightarrows Y"
},
{
"math_id": 25,
"text": "\\mathcal{S} : Y \\rightrightarrows Z"
},
{
"math_id": 26,
"text": "S \\subseteq X"
},
{
"math_id": 27,
"text": "X:"
},
{
"math_id": 28,
"text": "\\mathcal{R}"
},
{
"math_id": 29,
"text": "\\operatorname{gr} \\mathcal{R} := \\{ (x, y) \\in X \\times Y : y \\in \\mathcal{R}(x) \\}."
},
{
"math_id": 30,
"text": "x_0, x_1 \\in X"
},
{
"math_id": 31,
"text": "r \\in [0, 1],"
},
{
"math_id": 32,
"text": "r \\mathcal{R}\\left(x_0\\right) + (1 - r) \\mathcal{R}\\left(x_1\\right) \\subseteq \\mathcal{R} \\left(r x_0 + (1 - r) x_1\\right)."
},
{
"math_id": 33,
"text": "\\mathcal{R}^{-1} : Y \\rightrightarrows X"
},
{
"math_id": 34,
"text": "\\mathcal{R}^{-1}(y) := \\left\\{ x \\in X : y \\in \\mathcal{R}(x) \\right\\}."
},
{
"math_id": 35,
"text": "B \\subseteq Y,"
},
{
"math_id": 36,
"text": "\\mathcal{R}^{-1}(B) := \\cup_{y \\in B} \\mathcal{R}^{-1}(y)."
},
{
"math_id": 37,
"text": "\\operatorname{Dom} \\mathcal{R} := \\left\\{ x \\in X : \\mathcal{R}(x) \\neq \\emptyset \\right\\}."
},
{
"math_id": 38,
"text": "\\operatorname{Im} \\mathcal{R} := \\cup_{x \\in X} \\mathcal{R}(x)."
},
{
"math_id": 39,
"text": "A \\subseteq X,"
},
{
"math_id": 40,
"text": "\\mathcal{R}(A) := \\cup_{x \\in A} \\mathcal{R}(x)."
},
{
"math_id": 41,
"text": "\\mathcal{S} \\circ \\mathcal{R} : X \\rightrightarrows Z"
},
{
"math_id": 42,
"text": "\\left(\\mathcal{S} \\circ \\mathcal{R}\\right)(x) := \\cup_{y \\in \\mathcal{R}(x)} \\mathcal{S}(y)"
},
{
"math_id": 43,
"text": "x \\in X."
},
{
"math_id": 44,
"text": "X, Y, \\text{ and } Z"
},
{
"math_id": 45,
"text": "S \\subseteq X, T \\subseteq Y,"
},
{
"math_id": 46,
"text": "A \\subseteq X \\times Y."
},
{
"math_id": 47,
"text": "\\implies"
},
{
"math_id": 48,
"text": "B \\subseteq X \\times Y \\times Z"
},
{
"math_id": 49,
"text": "y \\in Y"
},
{
"math_id": 50,
"text": "\\operatorname{Pr}_X (A)"
},
{
"math_id": 51,
"text": "X_0"
},
{
"math_id": 52,
"text": "X_0 \\cap S"
},
{
"math_id": 53,
"text": "X_0."
},
{
"math_id": 54,
"text": "S \\times T"
},
{
"math_id": 55,
"text": "T"
},
{
"math_id": 56,
"text": "Y."
},
{
"math_id": 57,
"text": "A + B."
},
{
"math_id": 58,
"text": "\\mathcal{R}(A)."
},
{
"math_id": 59,
"text": "\\mathcal{R}_2 : X \\rightrightarrows Y"
},
{
"math_id": 60,
"text": "\\mathcal{R}, \\mathcal{R}_2, \\mathcal{S}"
},
{
"math_id": 61,
"text": "\\mathcal{R} + \\mathcal{R}_2 : X \\rightrightarrows Y"
},
{
"math_id": 62,
"text": "\\mathcal{S} \\circ \\mathcal{R} : X \\rightrightarrows Z."
},
{
"math_id": 63,
"text": "\\operatorname{int} S = \\operatorname{int} \\left(\\operatorname{cl} S\\right)."
},
{
"math_id": 64,
"text": "A \\subseteq X \\times Y,"
},
{
"math_id": 65,
"text": "\\operatorname{Pr}_Y : X \\times Y \\to Y"
},
{
"math_id": 66,
"text": "\\operatorname{Pr}_Y (A)."
},
{
"math_id": 67,
"text": "C \\subseteq X"
},
{
"math_id": 68,
"text": "C"
},
{
"math_id": 69,
"text": "C^i = \\operatorname{int} C,"
},
{
"math_id": 70,
"text": "C^i := \\operatorname{aint}_X C"
},
{
"math_id": 71,
"text": "C^i = \\operatorname{int} C = \\operatorname{int} \\left(\\operatorname{cl} C\\right) = \\left(\\operatorname{cl} C\\right)^i."
}
] |
https://en.wikipedia.org/wiki?curid=63558794
|
6356194
|
353 (number)
|
Natural number
353 (three hundred [and] fifty-three) is the natural number following 352 and preceding 354. It is a prime number.
In mathematics.
353 is the 71st prime number, a palindromic prime, an irregular prime, a super-prime, a Chen prime, a Proth prime, and an Eisenstein prime.
In connection with Euler's sum of powers conjecture, 353 is the smallest number whose 4th power is equal to the sum of four other 4th powers, as discovered by R. Norrie in 1911:
formula_0
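The identity is easy to verify directly, for instance in Python:

print(353**4 == 30**4 + 120**4 + 272**4 + 315**4)   # True
print(353**4)                                        # 15527402881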
In a seven-team round robin tournament, there are 353 combinatorially distinct outcomes in which no subset of teams wins all its games against the teams outside the subset; mathematically, there are 353 strongly connected tournaments on seven nodes.
353 is one of the solutions to the stamp folding problem: there are exactly 353 ways to fold a strip of eight blank stamps into a single flat pile of stamps.
The Mertens function evaluated at 353 returns 0.
353 is an index of a prime Lucas number.
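Both of these facts can be checked computationally; the sketch below uses a naive Möbius function and sympy's isprime, and the comments restate the article's claims rather than guaranteed outputs:

from sympy import isprime

def mobius(n):
    # Moebius function by trial factorisation (adequate for small n)
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            result = -result
        d += 1
    return -result if n > 1 else result

print(sum(mobius(k) for k in range(1, 354)))   # Mertens function M(353); the article states this is 0

a, b = 2, 1                                    # Lucas numbers: L(0) = 2, L(1) = 1
for _ in range(353):
    a, b = b, a + b                            # after the loop, a = L(353)
print(isprime(a))                              # the article states that L(353) is prime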
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "353^4=30^4+120^4+272^4+315^4."
}
] |
https://en.wikipedia.org/wiki?curid=6356194
|
63562105
|
Surjection of Fréchet spaces
|
Characterization of surjectivity
The theorem on the surjection of Fréchet spaces is an important theorem, due to Stefan Banach, that characterizes when a continuous linear operator between Fréchet spaces is surjective.
The importance of this theorem is related to the open mapping theorem, which states that a continuous linear surjection between Fréchet spaces is an open map. Often in practice, one has a continuous linear map between Fréchet spaces and wishes to show that it is surjective in order to use the open mapping theorem to deduce that it is also an open mapping. This theorem may help reach that goal.
Preliminaries, definitions, and notation.
Let formula_0 be a continuous linear map between topological vector spaces.
The continuous dual space of formula_1 is denoted by formula_2
The transpose of formula_3 is the map formula_4 defined by formula_5 If formula_0 is surjective then formula_4 will be injective, but the converse is not true in general.
The weak topology on formula_1 (resp. formula_6) is denoted by formula_7 (resp. formula_8). The set formula_1 endowed with this topology is denoted by formula_9 The topology formula_7 is the weakest topology on formula_1 making all linear functionals in formula_6 continuous.
If formula_10 then the polar of formula_11 in formula_12 is denoted by formula_13
If formula_14 is a seminorm on formula_1, then formula_15 will denote the vector space formula_1 endowed with the weakest TVS topology making formula_16 continuous. A neighborhood basis of formula_15 at the origin consists of the sets formula_17 as formula_18 ranges over the positive reals. If formula_16 is not a norm then formula_15 is not Hausdorff and formula_19 is a linear subspace of formula_1.
If formula_16 is continuous then the identity map formula_20 is continuous so we may identify the continuous dual space formula_21 of formula_15 as a subset of formula_6 via the transpose of the identity map formula_22 which is injective.
Surjection of Fréchet spaces.
<templatestyles src="Math_theorem/styles.css" />
Theorem (Banach) — If formula_0 is a continuous linear map between two Fréchet spaces, then formula_0 is surjective if and only if the following two conditions both hold:
Extensions of the theorem.
<templatestyles src="Math_theorem/styles.css" />
Theorem —
If formula_0 is a continuous linear map between two Fréchet spaces then the following are equivalent:
Lemmas.
The following lemmas are used to prove the theorems on the surjectivity of Fréchet spaces. They are useful even on their own.
<templatestyles src="Math_theorem/styles.css" />
Theorem — Let formula_1 be a Fréchet space and formula_23 be a linear subspace of formula_2
The following are equivalent:
<templatestyles src="Math_theorem/styles.css" />
Theorem —
On the dual formula_6 of a Fréchet space formula_1, the topology of uniform convergence on compact convex subsets of formula_1 is identical to the topology of uniform convergence on compact subsets of formula_1.
<templatestyles src="Math_theorem/styles.css" />
Theorem —
Let formula_0 be a linear map between Hausdorff locally convex TVSs, with formula_1 also metrizable.
If the map formula_24 is continuous then formula_0 is continuous (where formula_1 and formula_12 carry their original topologies).
Applications.
Borel's theorem on power series expansions.
<templatestyles src="Math_theorem/styles.css" />
Theorem (E. Borel) —
Fix a positive integer formula_25.
If formula_26 is an arbitrary formal power series in formula_25 indeterminates with complex coefficients then there exists a formula_27 function formula_28 whose Taylor expansion at the origin is identical to formula_26.
That is, suppose that for every formula_25-tuple of non-negative integers formula_29 we are given a complex number formula_30 (with no restrictions). Then there exists a formula_27 function formula_28 such that formula_31 for every formula_25-tuple formula_32
Linear partial differential operators.
<templatestyles src="Math_theorem/styles.css" />
Theorem — Let formula_33 be a linear partial differential operator with formula_27 coefficients in an open subset formula_34
The following are equivalent:
formula_33 being semiglobally solvable in formula_36 means that for every relatively compact open subset formula_37 of formula_36, the following condition holds:
for every formula_35 there is some formula_38 such that formula_39 in formula_37.
formula_36 being formula_33-convex means that for every compact subset formula_40 and every integer formula_41 there is a compact subset formula_42 of formula_36 such that for every distribution formula_43 with compact support in formula_36, the following condition holds:
if formula_44 is of order formula_45 and if formula_46 then formula_47
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "L : X \\to Y"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "X^{\\prime}."
},
{
"math_id": 3,
"text": "L"
},
{
"math_id": 4,
"text": "{}^t L : Y^{\\prime} \\to X^{\\prime}"
},
{
"math_id": 5,
"text": "L \\left(y^{\\prime}\\right) := y^{\\prime} \\circ L."
},
{
"math_id": 6,
"text": "X^{\\prime}"
},
{
"math_id": 7,
"text": "\\sigma\\left(X, X^{\\prime}\\right)"
},
{
"math_id": 8,
"text": "\\sigma\\left(X^{\\prime}, X\\right)"
},
{
"math_id": 9,
"text": "\\left(X, \\sigma\\left(X, X^{\\prime}\\right)\\right)."
},
{
"math_id": 10,
"text": "S \\subseteq Y"
},
{
"math_id": 11,
"text": "S"
},
{
"math_id": 12,
"text": "Y"
},
{
"math_id": 13,
"text": "S^{\\circ}."
},
{
"math_id": 14,
"text": "p : X \\to \\R"
},
{
"math_id": 15,
"text": "X_p"
},
{
"math_id": 16,
"text": "p"
},
{
"math_id": 17,
"text": "\\left\\{ x \\in X : p(x) < r \\right\\}"
},
{
"math_id": 18,
"text": "r"
},
{
"math_id": 19,
"text": "\\ker p := \\left\\{ x \\in X : p(x) = 0 \\right\\}"
},
{
"math_id": 20,
"text": "\\operatorname{Id} : X \\to X_p"
},
{
"math_id": 21,
"text": "X_p^{\\prime}"
},
{
"math_id": 22,
"text": "{}^{t} \\operatorname{Id} : X_p^{\\prime} \\to X^{\\prime},"
},
{
"math_id": 23,
"text": "Z"
},
{
"math_id": 24,
"text": "L : \\left(X, \\sigma\\left(X, X^{\\prime}\\right)\\right) \\to \\left(Y, \\sigma\\left(Y, Y^{\\prime}\\right)\\right)"
},
{
"math_id": 25,
"text": "n"
},
{
"math_id": 26,
"text": "P"
},
{
"math_id": 27,
"text": "\\mathcal{C}^{\\infty}"
},
{
"math_id": 28,
"text": "f : \\R^n \\to \\Complex"
},
{
"math_id": 29,
"text": "p = \\left(p_1, \\ldots, p_n\\right)"
},
{
"math_id": 30,
"text": "a_p"
},
{
"math_id": 31,
"text": "a_p = \\left(\\partial / \\partial x\\right)^p f \\bigg\\vert_{x = 0}"
},
{
"math_id": 32,
"text": "p."
},
{
"math_id": 33,
"text": "D"
},
{
"math_id": 34,
"text": "U \\subseteq \\R^n."
},
{
"math_id": 35,
"text": "f \\in \\mathcal{C}^{\\infty}(U)"
},
{
"math_id": 36,
"text": "U"
},
{
"math_id": 37,
"text": "V"
},
{
"math_id": 38,
"text": "g \\in \\mathcal{C}^{\\infty}(U)"
},
{
"math_id": 39,
"text": "D g = f"
},
{
"math_id": 40,
"text": "K \\subseteq U"
},
{
"math_id": 41,
"text": "n \\geq 0,"
},
{
"math_id": 42,
"text": "C_n"
},
{
"math_id": 43,
"text": "d"
},
{
"math_id": 44,
"text": "{}^{t} D d"
},
{
"math_id": 45,
"text": "\\leq n"
},
{
"math_id": 46,
"text": "\\operatorname{supp} {}^{t} D d \\subseteq K,"
},
{
"math_id": 47,
"text": "\\operatorname{supp} d \\subseteq C_n."
}
] |
https://en.wikipedia.org/wiki?curid=63562105
|
63566617
|
Projective tensor product
|
In functional analysis, an area of mathematics, the projective tensor product of two locally convex topological vector spaces is a natural topological vector space structure on their tensor product. Namely, given locally convex topological vector spaces formula_0 and formula_1, the projective topology, or π-topology, on formula_2 is the strongest topology which makes formula_2 a locally convex topological vector space such that the canonical map formula_3 (from formula_4 to formula_2) is continuous. When equipped with this topology, formula_2 is denoted formula_5 and called the projective tensor product of formula_0 and formula_1.
Definitions.
Let formula_0 and formula_1 be locally convex topological vector spaces. Their projective tensor product formula_5 is the unique locally convex topological vector space with underlying vector space formula_2 having the following universal property:
For any locally convex topological vector space formula_6, if formula_7 is the canonical map from the vector space of bilinear maps formula_8 to the vector space of linear maps formula_9, then the image of the restriction of formula_7 to the "continuous" bilinear maps is the space of "continuous" linear maps formula_10.
When the topologies of formula_0 and formula_1 are induced by seminorms, the topology of formula_5 is induced by seminorms constructed from those on formula_0 and formula_1 as follows. If formula_11 is a seminorm on formula_0, and formula_12 is a seminorm on formula_1, define their "tensor product" formula_13 to be the seminorm on formula_2 given by
formula_14
for all formula_15 in formula_2, where formula_16 is the balanced convex hull of the set formula_17. The projective topology on formula_2 is generated by the collection of such tensor products of the seminorms on formula_0 and formula_1.
When formula_0 and formula_1 are normed spaces, this definition applied to the norms on formula_0 and formula_1 gives a norm, called the "projective norm", on formula_2 which generates the projective topology.
Properties.
Throughout, all spaces are assumed to be locally convex. The symbol formula_18 denotes the completion of the projective tensor product of formula_0 and formula_1.
Completion.
In general, the space formula_5 is not complete, even if both formula_0 and formula_1 are complete (in fact, if formula_0 and formula_1 are both infinite-dimensional Banach spaces then formula_5 is necessarily not complete). However, formula_5 can always be linearly embedded as a dense vector subspace of some complete locally convex TVS, which is generally denoted by formula_18.
The continuous dual space of formula_18 is the same as that of formula_5, namely, the space of continuous bilinear forms formula_31.
Grothendieck's representation of elements in the completion.
In a Hausdorff locally convex space formula_32 a sequence formula_33 in formula_0 is absolutely convergent if formula_34 for every continuous seminorm formula_11 on formula_35 We write formula_36 if the sequence of partial sums formula_37 converges to formula_38 in formula_35
The following fundamental result in the theory of topological tensor products is due to Alexander Grothendieck.
<templatestyles src="Math_theorem/styles.css" />
Theorem —
Let formula_0 and formula_1 be metrizable locally convex TVSs and let formula_39 Then formula_40 is the sum of an absolutely convergent series
formula_41
where formula_42 and formula_33 and formula_43 are null sequences in formula_0 and formula_25 respectively.
The next theorem shows that it is possible to make the representation of formula_40 independent of the sequences formula_33 and formula_44
<templatestyles src="Math_theorem/styles.css" />
Theorem —
Let formula_0 and formula_1 be Fréchet spaces and let formula_45 (resp. formula_46) be a balanced open neighborhood of the origin in formula_0 (resp. in formula_1). Let formula_47 be a compact subset of the convex balanced hull of formula_48 There exists a compact subset formula_49 of the unit ball in formula_50 and sequences formula_33 and formula_43 contained in formula_45 and formula_51 respectively, converging to the origin such that for every formula_52 there exists some formula_53 such that
formula_54
Topology of bi-bounded convergence.
Let formula_55 and formula_56 denote the families of all bounded subsets of formula_0 and formula_25 respectively. Since the continuous dual space of formula_18 is the space of continuous bilinear forms formula_57 we can place on formula_31 the topology of uniform convergence on sets in formula_58 which is also called the topology of bi-bounded convergence. This topology is coarser than the strong topology on formula_31, and Alexander Grothendieck was interested in when these two topologies were identical. This is equivalent to the problem: Given a bounded subset formula_59 do there exist bounded subsets formula_60 and formula_61 such that formula_62 is a subset of the closed convex hull of formula_63?
Grothendieck proved that these topologies are equal when formula_0 and formula_1 are both Banach spaces or both are DF-spaces (a class of spaces introduced by Grothendieck). They are also equal when both spaces are Fréchet with one of them being nuclear.
Strong dual and bidual.
Let formula_0 be a locally convex topological vector space and let formula_64 be its continuous dual space. Alexander Grothendieck characterized the strong dual and bidual for certain situations:
<templatestyles src="Math_theorem/styles.css" />
Theorem (Grothendieck) —
Let formula_65 and formula_1 be locally convex topological vector spaces with formula_65 nuclear. Assume that both formula_65 and formula_1 are Fréchet spaces, or else that they are both DF-spaces. Then, denoting strong dual spaces with a subscripted formula_15:
Citations.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "Y"
},
{
"math_id": 2,
"text": "X \\otimes Y"
},
{
"math_id": 3,
"text": "(x,y) \\mapsto x \\otimes y"
},
{
"math_id": 4,
"text": "X\\times Y"
},
{
"math_id": 5,
"text": "X \\otimes_\\pi Y"
},
{
"math_id": 6,
"text": "Z"
},
{
"math_id": 7,
"text": "\\Phi_Z"
},
{
"math_id": 8,
"text": "X\\times Y \\to Z"
},
{
"math_id": 9,
"text": "X \\otimes Y \\to Z"
},
{
"math_id": 10,
"text": "X \\otimes_\\pi Y \\to Z"
},
{
"math_id": 11,
"text": "p"
},
{
"math_id": 12,
"text": "q"
},
{
"math_id": 13,
"text": "p \\otimes q"
},
{
"math_id": 14,
"text": "(p \\otimes q)(b) = \\inf_{r > 0,\\, b \\in r W} r"
},
{
"math_id": 15,
"text": "b"
},
{
"math_id": 16,
"text": "W"
},
{
"math_id": 17,
"text": "\\left\\{ x \\otimes y : p(x) \\leq 1, q(y) \\leq 1 \\right\\}"
},
{
"math_id": 18,
"text": "X \\widehat{\\otimes}_\\pi Y"
},
{
"math_id": 19,
"text": "u_1 : X_1 \\to Y_1"
},
{
"math_id": 20,
"text": "u_2 : X_2 \\to Y_2"
},
{
"math_id": 21,
"text": "u_1 \\otimes u_2 : X_1 \\otimes_\\pi X_2 \\to Y_1 \\otimes_\\pi Y_2"
},
{
"math_id": 22,
"text": "Z \\otimes_\\pi Y"
},
{
"math_id": 23,
"text": "E"
},
{
"math_id": 24,
"text": "F"
},
{
"math_id": 25,
"text": "Y,"
},
{
"math_id": 26,
"text": "E \\otimes F"
},
{
"math_id": 27,
"text": "E \\otimes_\\pi F"
},
{
"math_id": 28,
"text": "E \\widehat{\\otimes} F"
},
{
"math_id": 29,
"text": "E \\times F"
},
{
"math_id": 30,
"text": "X \\times Y"
},
{
"math_id": 31,
"text": "B(X, Y)"
},
{
"math_id": 32,
"text": "X,"
},
{
"math_id": 33,
"text": "\\left(x_i\\right)_{i=1}^{\\infty}"
},
{
"math_id": 34,
"text": "\\sum_{i=1}^{\\infty} p \\left(x_i\\right) < \\infty"
},
{
"math_id": 35,
"text": "X."
},
{
"math_id": 36,
"text": "x = \\sum_{i=1}^{\\infty} x_i"
},
{
"math_id": 37,
"text": "\\left(\\sum_{i=1}^n x_i\\right)_{n=1}^{\\infty}"
},
{
"math_id": 38,
"text": "x"
},
{
"math_id": 39,
"text": "z \\in X \\widehat{\\otimes}_\\pi Y."
},
{
"math_id": 40,
"text": "z"
},
{
"math_id": 41,
"text": "z = \\sum_{i=1}^{\\infty} \\lambda_i x_i \\otimes y_i"
},
{
"math_id": 42,
"text": "\\sum_{i=1}^{\\infty}|\\lambda_i|< \\infty,"
},
{
"math_id": 43,
"text": "\\left(y_i\\right)_{i=1}^{\\infty}"
},
{
"math_id": 44,
"text": "\\left(y_i\\right)_{i=1}^{\\infty}."
},
{
"math_id": 45,
"text": "U"
},
{
"math_id": 46,
"text": "V"
},
{
"math_id": 47,
"text": "K_0"
},
{
"math_id": 48,
"text": "U \\otimes V := \\{ u \\otimes v : u \\in U, v \\in V \\}."
},
{
"math_id": 49,
"text": "K_1"
},
{
"math_id": 50,
"text": "\\ell^1"
},
{
"math_id": 51,
"text": "V,"
},
{
"math_id": 52,
"text": "z \\in K_0"
},
{
"math_id": 53,
"text": "\\left(\\lambda_i\\right)_{i=1}^{\\infty} \\in K_1"
},
{
"math_id": 54,
"text": "z = \\sum_{i=1}^{\\infty} \\lambda_i x_i \\otimes y_i."
},
{
"math_id": 55,
"text": "\\mathfrak{B}_X"
},
{
"math_id": 56,
"text": "\\mathfrak{B}_Y"
},
{
"math_id": 57,
"text": "B(X, Y),"
},
{
"math_id": 58,
"text": "\\mathfrak{B}_X \\times \\mathfrak{B}_Y,"
},
{
"math_id": 59,
"text": "B \\subseteq X \\widehat{\\otimes} Y,"
},
{
"math_id": 60,
"text": "B_1 \\subseteq X"
},
{
"math_id": 61,
"text": "B_2 \\subseteq Y"
},
{
"math_id": 62,
"text": "B"
},
{
"math_id": 63,
"text": "B_1 \\otimes B_2 := \\{ b_1 \\otimes b_2 : b_1 \\in B_1, b_2 \\in B_2 \\}"
},
{
"math_id": 64,
"text": "X^{\\prime}"
},
{
"math_id": 65,
"text": "N"
},
{
"math_id": 66,
"text": "N \\widehat{\\otimes}_\\pi Y"
},
{
"math_id": 67,
"text": "N^{\\prime}_b \\widehat{\\otimes}_\\pi Y^{\\prime}_b"
},
{
"math_id": 68,
"text": "N \\widehat{\\otimes}_\\pi Y^{\\prime\\prime}"
},
{
"math_id": 69,
"text": "N^{\\prime}_b \\times Y^{\\prime}_b"
},
{
"math_id": 70,
"text": "L\\left(X^{\\prime}_b, Y\\right)"
},
{
"math_id": 71,
"text": "X^{\\prime}_b"
},
{
"math_id": 72,
"text": "N^{\\prime}_b \\widehat{\\otimes}_\\pi Y^{\\prime}_b,"
},
{
"math_id": 73,
"text": "L_b\\left(X^{\\prime}_b, Y\\right)."
},
{
"math_id": 74,
"text": "(X, \\mathcal{A}, \\mu)"
},
{
"math_id": 75,
"text": "L^1"
},
{
"math_id": 76,
"text": "L^1(\\mu)"
},
{
"math_id": 77,
"text": "L^1_E"
},
{
"math_id": 78,
"text": "X\\to E"
},
{
"math_id": 79,
"text": "X\\to\\Reals"
},
{
"math_id": 80,
"text": "0"
},
{
"math_id": 81,
"text": "\\mu"
},
{
"math_id": 82,
"text": "L^1 \\widehat{\\otimes}_\\pi E"
}
] |
https://en.wikipedia.org/wiki?curid=63566617
|
63566619
|
Injective tensor product
|
In mathematics, the injective tensor product of two topological vector spaces (TVSs) was introduced by Alexander Grothendieck and was used by him to define nuclear spaces. The injective tensor product is not necessarily complete, so its completion is called the completed injective tensor product. Injective tensor products have applications outside of nuclear spaces. In particular, as described below, up to TVS-isomorphism, many TVSs that are defined for real or complex valued functions, for instance, the Schwartz space or the space of continuously differentiable functions, can be immediately extended to functions valued in a Hausdorff locally convex TVS formula_0 without any need to extend definitions (such as "differentiable at a point") from real/complex-valued functions to formula_0-valued functions.
Preliminaries and notation.
Throughout let formula_1 and formula_2 be topological vector spaces and formula_3 be a linear map.
Definition.
Throughout let formula_7 and formula_0 be topological vector spaces with continuous dual spaces formula_42 and formula_63 Note that almost all results described are independent of whether these vector spaces are over formula_64 or formula_65 but to simplify the exposition we will assume that they are over the field formula_66
Continuous bilinear maps as a tensor product.
Despite the fact that the tensor product formula_67 is a purely algebraic construct (its definition does not involve any topologies), the vector space formula_68 of continuous bilinear functionals is nevertheless always a tensor product of formula_7 and formula_0 (that is, formula_69) when formula_70 is defined in the manner now described.
For every formula_71 let formula_72 denote the bilinear form on formula_73 defined by
formula_74
This map formula_75 is always continuous and so the assignment that sends formula_76 to the bilinear form formula_72 induces a canonical map
formula_77
whose image formula_67 is contained in formula_78
In fact, every continuous bilinear form on formula_79 belongs to the span of this map's image (that is, formula_80).
The following theorem may be used to verify that formula_68 together with the above map formula_70 is a tensor product of formula_7 and formula_81
<templatestyles src="Math_theorem/styles.css" />
Theorem —
Let formula_1 and formula_2 be vector spaces and let formula_82 be a bilinear map. Then formula_83 is a tensor product of formula_7 and formula_0 if and only if the image of formula_84 spans all of formula_2 (that is, formula_85), and the vectors spaces formula_7 and formula_0 are formula_84-linearly disjoint, which by definition means that for all sequences of elements formula_86 and formula_87 of the same finite length formula_88 satisfying formula_89
Equivalently, formula_7 and formula_0 are formula_84-linearly disjoint if and only if for all linearly independent sequences formula_96 in formula_7 and all linearly independent sequences formula_93 in formula_57 the vectors formula_97 are linearly independent.
Topology.
Henceforth, all topological vector spaces considered will be assumed to be locally convex.
If formula_2 is any locally convex topological vector space, then formula_98 and for any equicontinuous subsets formula_99 and formula_100 and any neighborhood formula_101 in formula_102 define
formula_103
where every set formula_104 is bounded in formula_102 which is necessary and sufficient for the collection of all formula_105 to form a locally convex TVS topology on formula_106
This topology is called the formula_107-topology and whenever a vector space is endowed with the formula_107-topology then this will be indicated by placing formula_107 as a subscript before the opening parenthesis. For example, formula_108 endowed with the formula_107-topology will be denoted by formula_109
If formula_2 is Hausdorff then so is the formula_107-topology.
In the special case where formula_2 is the underlying scalar field, formula_68 is the tensor product formula_67 and so the topological vector space formula_110 is called the injective tensor product of formula_7 and formula_0 and it is denoted by formula_111
This TVS is not necessarily complete so its completion, denoted by formula_112 will be constructed.
When all spaces are Hausdorff then formula_113 is complete if and only if both formula_7 and formula_0 are complete, in which case the completion formula_114 of formula_110 is a vector subspace of formula_115
If formula_7 and formula_0 are normed spaces then so is formula_116 where formula_113 is a Banach space if and only if this is true of both formula_7 and formula_81
Equicontinuous sets.
One reason for converging on equicontinuous subsets (of all possibilities) is the following important fact:
A set of continuous linear functionals formula_54 on a TVS formula_7 is equicontinuous if and only if it is contained in the polar of some neighborhood formula_58 of the origin in formula_7; that is, formula_117
A TVS's topology is completely determined by the open neighborhoods of the origin. This fact together with the bipolar theorem means that via the operation of taking the polar of a subset, the collection of all equicontinuous subsets of formula_42 "encodes" all information about formula_7's given topology. Specifically, distinct locally convex TVS topologies on formula_7 produce distinct collections of equicontinuous subsets and conversely, given any such collection of equicontinuous sets, the TVS's original topology can be recovered by taking the polar of every (equicontinuous) set in the collection. Thus through this identification, uniform convergence on the collection of equicontinuous subsets is essentially uniform convergence on the very topology of the TVS; this allows one to directly relate the injective topology with the given topologies of formula_7 and formula_81
Furthermore, the topology of a locally convex Hausdorff space formula_7 is identical to the topology of uniform convergence on the equicontinuous subsets of formula_118
For this reason, the article now lists some properties of equicontinuous sets that are relevant for dealing with the injective tensor product. Throughout, formula_7 and formula_0 are arbitrary locally convex spaces and formula_54 is a collection of linear maps from formula_7 into formula_81
In particular, to show that a set formula_54 is equicontinuous it suffices to show that it is bounded in the topology of pointwise convergence.
For equicontinuous subsets of the continuous dual space formula_42 (where formula_0 is now the underlying scalar field of formula_7), the following hold:
We mention some additional important basic properties relevant to the injective tensor product:
Canonical identification of separately continuous bilinear maps with linear maps.
The set equality formula_136 always holds; that is, if formula_137 is a linear map, then formula_138 is continuous if and only if formula_139 is continuous, where here formula_0 has its original topology.
There also exists a canonical vector space isomorphism
formula_140
To define it, for every separately continuous bilinear form formula_135 defined on formula_141 and every formula_142 let formula_143 be defined by
formula_144
Because formula_145 is canonically vector space-isomorphic to formula_0 (via the canonical map formula_146 value at formula_147), formula_148 will be identified as an element of formula_57 which will be denoted by formula_149
This defines a map formula_150 given by formula_151 and so the canonical isomorphism is of course defined by formula_152
When formula_153 is given the topology of uniform convergence on equicontinuous subsets of formula_154 the canonical map becomes a TVS-isomorphism
formula_155
In particular, formula_156 can be canonically TVS-embedded into formula_157; furthermore the image in formula_158 of formula_156 under the canonical map formula_159 consists exactly of the space of continuous linear maps formula_160 whose image is finite dimensional.
The inclusion formula_161 always holds. If formula_7 is normed then formula_157 is in fact a topological vector subspace of formula_162 And if in addition formula_0 is Banach then so is formula_163 (even if formula_7 is not complete).
Properties.
The canonical map formula_164 is always continuous and the ε-topology is always coarser than the π-topology, which is in turn coarser than the inductive topology (the finest locally convex TVS topology making formula_165 separately continuous).
The space formula_166 is Hausdorff if and only if both formula_7 and formula_0 are Hausdorff.
If formula_7 and formula_0 are normed then formula_166 is normable in which case for all formula_167 formula_168
Suppose that formula_169 and formula_170 are two linear maps between locally convex spaces. If both formula_171 and formula_172 are continuous then so is their tensor product formula_173 Moreover:
Relation to projective tensor product and nuclear spaces.
The projective topology or the formula_183-topology is the finest locally convex topology on formula_184 that makes continuous the canonical map formula_185 defined by sending formula_76 to the bilinear form formula_186 When formula_184 is endowed with this topology then it will be denoted by formula_187 and called the projective tensor product of formula_7 and formula_81
The following definition was used by Grothendieck to define nuclear spaces.
Definition 0: Let formula_7 be a locally convex topological vector space. Then formula_7 is nuclear if for any locally convex space formula_57 the canonical vector space embedding formula_188 is an embedding of TVSs whose image is dense in the codomain.
Canonical identifications of bilinear and linear maps.
In this section we describe canonical identifications between spaces of bilinear and linear maps. These identifications will be used to define important subspaces and topologies (particularly those that relate to nuclear operators and nuclear spaces).
Dual spaces of the injective tensor product and its completion.
Suppose that
formula_189
denotes the TVS-embedding of formula_166 into its completion and let
formula_190
be its transpose, which is a vector space-isomorphism. This identifies the continuous dual space of formula_166 as being identical to the continuous dual space of formula_191
The identity map
formula_192
is continuous (by definition of the π-topology) so there exists a unique continuous linear extension
formula_193
If formula_7 and formula_0 are Hilbert spaces then formula_194 is injective and the dual of formula_114 is canonically isometrically isomorphic to the vector space formula_195 of nuclear operators from formula_7 into formula_0 (with the trace norm).
Injective tensor product of Hilbert spaces.
There is a canonical map
formula_196
that sends formula_197 to the linear map formula_198 defined by
formula_199
where it may be shown that the definition of formula_198 does not depend on the particular choice of representation formula_201 of formula_202 The map
formula_203
is continuous and when formula_204 is complete, it has a continuous extension
formula_205
When formula_7 and formula_0 are Hilbert spaces then formula_206 is a TVS-embedding and an isometry (when the spaces are given their usual norms) whose range is the space of all compact linear operators from formula_7 into formula_0; this range is a closed vector subspace of formula_207 Hence formula_114 is identical to the space of compact operators from formula_20 into formula_0 (note the prime on formula_7). The space of compact linear operators between any two Banach spaces (which includes Hilbert spaces) formula_7 and formula_0 is a closed subset of formula_208
Furthermore, the canonical map formula_209 is injective when formula_7 and formula_0 are Hilbert spaces.
Integral forms and operators.
Integral bilinear forms.
Denote the identity map by
formula_210
and let
formula_211
denote its transpose, which is a continuous injection. Recall that formula_212 is canonically identified with formula_213 the space of continuous bilinear maps on formula_214 In this way, the continuous dual space of formula_166 can be canonically identified as a vector subspace of formula_213 denoted by formula_215 The elements of formula_216 are called integral (bilinear) forms on formula_214 The following theorem justifies the word integral.
<templatestyles src="Math_theorem/styles.css" />
Theorem — The dual formula_216 of formula_114 consists of exactly those continuous bilinear forms "v" on formula_217 that can be represented in the form of a map
formula_218
where formula_6 and formula_84 are some closed, equicontinuous subsets of formula_30 and formula_219 respectively, and formula_220 is a positive Radon measure on the compact set formula_221 with total mass formula_222
Furthermore, if formula_223 is an equicontinuous subset of formula_216 then the elements formula_224 can be represented with formula_221 fixed and formula_220 running through a norm bounded subset of the space of Radon measures on formula_225
Integral linear operators.
Given a linear map formula_226 one can define a canonical bilinear form formula_227 called the associated bilinear form on formula_228 by
formula_229
A continuous map formula_230 is called integral if its associated bilinear form is an integral bilinear form. An integral map formula_231 is of the form, for every formula_232 and formula_233
formula_234
for suitable weakly closed and equicontinuous subsets formula_235 and formula_236 of formula_20 and formula_237 respectively, and some positive Radon measure formula_220 of total mass formula_222
Canonical map into "L"("X"; "Y").
There is a canonical map formula_238 that sends formula_239 to the linear map formula_200 defined by formula_240 where it may be shown that the definition of formula_200 does not depend on the particular choice of representation formula_241 of formula_202
Examples.
Space of summable families.
Throughout this section we fix some arbitrary (possibly uncountable) set formula_242 a TVS formula_21 and we let formula_243 be the directed set of all finite subsets of formula_223 directed by inclusion formula_244
Let formula_245 be a family of elements in a TVS formula_7 and for every finite subset formula_246 let formula_247 We call formula_245 summable in formula_7 if the limit formula_248 of the net formula_249 converges in formula_7 to some element (any such element is called its sum). The set of all such summable families is a vector subspace of formula_250 denoted by formula_251
We now define a topology on formula_6 in a very natural way. This topology turns out to be the injective topology taken from formula_252 and transferred to formula_6 via a canonical vector space isomorphism (the obvious one). This is a common occurrence when studying the injective and projective tensor products of function/sequence spaces and TVSs: the "natural way" in which one would define (from scratch) a topology on such a tensor product is frequently equivalent to the injective or projective tensor product topology.
Let formula_253 denote a base of convex balanced neighborhoods of 0 in formula_7 and for each formula_254 let formula_255 denote its Minkowski functional. For any such formula_58 and any formula_256 let
formula_257
where formula_258 defines a seminorm on formula_251 The family of seminorms formula_259 generates a topology making formula_6 into a locally convex space. The vector space formula_6 endowed with this topology will be denoted by formula_260 The special case where formula_7 is the scalar field will be denoted by formula_261
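As a quick sanity check (a routine computation, not a statement taken from the sources cited in this article), consider the scalar case: when formula_7 is the scalar field and formula_58 is its closed unit ball, the polar of formula_58 is again the closed unit ball of the dual, and the seminorm defined above reduces to the familiar norm of absolutely summable families:
```latex
q_U(x) \;=\; \sup_{|c| \le 1} \sum_{\alpha \in A} |c\, x_\alpha|
        \;=\; \sum_{\alpha \in A} |x_\alpha| ,
```
so in this case the construction simply recovers the classical space of absolutely summable scalar families with its usual norm.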
There is a canonical embedding of vector spaces formula_262 obtained by linearizing the bilinear map formula_263 defined by formula_264
<templatestyles src="Math_theorem/styles.css" />
Theorem: — The canonical embedding (of vector spaces) formula_262 becomes an embedding of topological vector spaces formula_265 when formula_266 is given the injective topology and furthermore, its range is dense in its codomain. If formula_267 is a completion of formula_7 then the continuous extension formula_268 of this embedding formula_269 is an isomorphism of TVSs. So in particular, if formula_7 is complete then formula_252 is canonically isomorphic to formula_270
Space of continuously differentiable vector-valued functions.
Throughout, let formula_271 be an open subset of formula_272 where formula_88 is an integer and let formula_0 be a locally convex topological vector space (TVS).
Definition Suppose formula_273 and formula_274 is a function such that formula_275 with formula_276 a limit point of formula_277 Say that formula_278 is differentiable at formula_276 if there exist formula_279 vectors formula_280 in formula_57 called the partial derivatives of formula_278, such that
formula_281
where formula_282
One may naturally extend the notion of continuously differentiable function to formula_0-valued functions defined on formula_283
For any formula_284 let formula_285 denote the vector space of all formula_286 formula_0-valued maps defined on formula_271 and let formula_287 denote the vector subspace of formula_285 consisting of all maps in formula_285 that have compact support.
One may then define topologies on formula_285 and formula_287 in the same manner as the topologies on formula_288 and formula_289 are defined for the space of distributions and test functions (see the article: Differentiable vector-valued functions from Euclidean space).
All of this work in extending the definition of differentiability and defining the various topologies turns out to be exactly equivalent to simply taking the completed injective tensor product:
<templatestyles src="Math_theorem/styles.css" />
Theorem — If formula_0 is a complete Hausdorff locally convex space, then formula_285 is canonically isomorphic to the injective tensor product formula_290
Spaces of continuous maps from a compact space.
If formula_0 is a normed space and if formula_291 is a compact set, then the formula_107-norm on formula_292 is equal to formula_293
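Unpacking this statement (a routine verification, included only as an illustration): an element of formula_292 may be identified with the continuous formula_0-valued function on formula_291 that it induces, and under this identification the formula_107-norm is the supremum norm of that function:
```latex
\sum_{i=1}^n f_i \otimes y_i \;\longleftrightarrow\;
   \Big( x \mapsto \sum_{i=1}^n f_i(x)\, y_i \Big),
\qquad
\Big\| \sum_{i=1}^n f_i \otimes y_i \Big\|_\varepsilon
   \;=\; \sup_{x \in K} \Big\| \sum_{i=1}^n f_i(x)\, y_i \Big\| .
```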
If formula_54 and formula_291 are two compact spaces, then formula_294 where this canonical map is an isomorphism of Banach spaces.
Spaces of sequences converging to 0.
If formula_0 is a normed space, then let formula_295 denote the space of all sequences formula_296 in formula_0 that converge to the origin and give this space the norm formula_297
Let formula_298 denote formula_299
Then for any Banach space formula_57 formula_300 is canonically isometrically isomorphic to formula_301
Schwartz space of functions.
We will now generalize the Schwartz space to functions valued in a TVS.
Let formula_302 be the space of all formula_303 such that for all pairs of polynomials formula_304 and formula_305 in formula_279 variables, formula_306 is a bounded subset of formula_81
To generalize the topology of the Schwartz space to formula_307 we give formula_302 the topology of uniform convergence over formula_308 of the functions formula_309 as formula_304 and formula_305 vary over all possible pairs of polynomials in formula_279 variables.
<templatestyles src="Math_theorem/styles.css" />
Theorem — If formula_0 is a complete locally convex space, then formula_302 is canonically isomorphic to formula_310
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "Y"
},
{
"math_id": 1,
"text": "X, Y,"
},
{
"math_id": 2,
"text": "Z"
},
{
"math_id": 3,
"text": "L : X \\to Y"
},
{
"math_id": 4,
"text": "L : X \\to \\operatorname{Im} L"
},
{
"math_id": 5,
"text": "\\operatorname{Im} L = L(X)"
},
{
"math_id": 6,
"text": "S"
},
{
"math_id": 7,
"text": "X"
},
{
"math_id": 8,
"text": "X \\to X / S"
},
{
"math_id": 9,
"text": "S \\to X"
},
{
"math_id": 10,
"text": "X \\to X / \\ker L \\mathbin{\\overset{L_0}\\rightarrow} \\operatorname{Im} L \\to Y"
},
{
"math_id": 11,
"text": "L_0(x + \\ker L) := L (x)"
},
{
"math_id": 12,
"text": "X \\to Z"
},
{
"math_id": 13,
"text": "X \\times Y \\to Z"
},
{
"math_id": 14,
"text": "L(X; Z)"
},
{
"math_id": 15,
"text": "B(X, Y; Z)"
},
{
"math_id": 16,
"text": "L(X)"
},
{
"math_id": 17,
"text": "B(X, Y)"
},
{
"math_id": 18,
"text": "\\mathcal{B}(X, Y; Z)"
},
{
"math_id": 19,
"text": "\\mathcal{B}(X, Y)."
},
{
"math_id": 20,
"text": "X^{\\prime}"
},
{
"math_id": 21,
"text": "X,"
},
{
"math_id": 22,
"text": "X^{\\#}."
},
{
"math_id": 23,
"text": "x^{\\prime}"
},
{
"math_id": 24,
"text": "x"
},
{
"math_id": 25,
"text": "\\sigma\\left(X, X^{\\prime}\\right)"
},
{
"math_id": 26,
"text": "X_{\\sigma\\left(X, X^{\\prime}\\right)}"
},
{
"math_id": 27,
"text": "X_\\sigma"
},
{
"math_id": 28,
"text": "\\sigma\\left(X^{\\prime}, X\\right)"
},
{
"math_id": 29,
"text": "X_{\\sigma\\left(X^{\\prime}, X\\right)}"
},
{
"math_id": 30,
"text": "X^{\\prime}_\\sigma"
},
{
"math_id": 31,
"text": "x_0 \\in X"
},
{
"math_id": 32,
"text": "X^{\\prime} \\to \\R"
},
{
"math_id": 33,
"text": "\\lambda \\mapsto \\lambda \\left(x_0\\right)."
},
{
"math_id": 34,
"text": "b\\left(X, X^{\\prime}\\right)"
},
{
"math_id": 35,
"text": "X_{b\\left(X, X^{\\prime}\\right)}"
},
{
"math_id": 36,
"text": "X_b"
},
{
"math_id": 37,
"text": "b\\left(X^{\\prime}, X\\right)"
},
{
"math_id": 38,
"text": "X_{b\\left(X^{\\prime}, X\\right)}"
},
{
"math_id": 39,
"text": "X^{\\prime}_b"
},
{
"math_id": 40,
"text": "b\\left(X^{\\prime}, X\\right)."
},
{
"math_id": 41,
"text": "\\tau\\left(X, X^{\\prime}\\right)"
},
{
"math_id": 42,
"text": "X^\\prime"
},
{
"math_id": 43,
"text": "X_{\\tau\\left(X, X^{\\prime}\\right)}"
},
{
"math_id": 44,
"text": "X_\\tau"
},
{
"math_id": 45,
"text": "\\tau(X, X^{\\prime})"
},
{
"math_id": 46,
"text": "X^{\\prime}."
},
{
"math_id": 47,
"text": "\\tau\\left(X^{\\prime}, X\\right)"
},
{
"math_id": 48,
"text": "X_{\\tau\\left(X^{\\prime}, X\\right)}"
},
{
"math_id": 49,
"text": "X^{\\prime}_\\tau"
},
{
"math_id": 50,
"text": "\\tau\\left(X^{\\prime}, X\\right) \\subseteq b\\left(X^{\\prime}, X\\right) \\subseteq \\tau \\left(X^{\\prime}, X^{\\prime\\prime}\\right)."
},
{
"math_id": 51,
"text": "\\varepsilon\\left(X, X^{\\prime}\\right)"
},
{
"math_id": 52,
"text": "X_{\\varepsilon\\left(X, X^{\\prime}\\right)}"
},
{
"math_id": 53,
"text": "X_\\varepsilon"
},
{
"math_id": 54,
"text": "H"
},
{
"math_id": 55,
"text": "X \\to Y"
},
{
"math_id": 56,
"text": "V"
},
{
"math_id": 57,
"text": "Y,"
},
{
"math_id": 58,
"text": "U"
},
{
"math_id": 59,
"text": "\\lambda(U) \\subseteq V"
},
{
"math_id": 60,
"text": "\\lambda \\in H."
},
{
"math_id": 61,
"text": "h(U) \\subseteq V"
},
{
"math_id": 62,
"text": "h \\in H."
},
{
"math_id": 63,
"text": "Y^\\prime."
},
{
"math_id": 64,
"text": "\\R"
},
{
"math_id": 65,
"text": "\\Complex"
},
{
"math_id": 66,
"text": "\\Complex."
},
{
"math_id": 67,
"text": "X \\otimes Y"
},
{
"math_id": 68,
"text": "B\\left(X^\\prime_\\sigma, Y^\\prime_\\sigma\\right)"
},
{
"math_id": 69,
"text": "B\\left(X^\\prime_\\sigma, Y^\\prime_\\sigma\\right) = X \\otimes Y"
},
{
"math_id": 70,
"text": "\\,\\otimes\\,"
},
{
"math_id": 71,
"text": "(x, y) \\in X \\times Y,"
},
{
"math_id": 72,
"text": "x \\otimes y"
},
{
"math_id": 73,
"text": "X^\\prime \\times Y^\\prime"
},
{
"math_id": 74,
"text": "(x \\otimes y) \\left(x^\\prime, y^\\prime\\right) := x^\\prime(x) y^\\prime(y)."
},
{
"math_id": 75,
"text": "x \\otimes y : X^\\prime_\\sigma \\times Y^\\prime_\\sigma \\to \\Complex"
},
{
"math_id": 76,
"text": "(x, y) \\in X \\times Y"
},
{
"math_id": 77,
"text": "\\cdot \\,\\otimes\\, \\cdot\\; : \\;X \\times Y \\to \\mathcal{B}\\left(X^\\prime_\\sigma, Y^\\prime_\\sigma\\right)"
},
{
"math_id": 78,
"text": "B\\left(X^\\prime_\\sigma, Y^\\prime_\\sigma\\right)."
},
{
"math_id": 79,
"text": "X^\\prime_\\sigma \\times Y^\\prime_\\sigma"
},
{
"math_id": 80,
"text": "B\\left(X^\\prime_\\sigma, Y^\\prime_\\sigma\\right) = \\operatorname{span} (X \\otimes Y)"
},
{
"math_id": 81,
"text": "Y."
},
{
"math_id": 82,
"text": "T : X \\times Y \\to Z"
},
{
"math_id": 83,
"text": "(Z, T)"
},
{
"math_id": 84,
"text": "T"
},
{
"math_id": 85,
"text": "Z = \\operatorname{span} T(X \\times Y)"
},
{
"math_id": 86,
"text": "x_1, \\ldots, x_n \\in X"
},
{
"math_id": 87,
"text": "y_1, \\ldots, y_n \\in Y"
},
{
"math_id": 88,
"text": "n \\geq 1"
},
{
"math_id": 89,
"text": "0 = T\\left(x_1, y_1\\right) + \\cdots + T\\left(x_n, y_n\\right),"
},
{
"math_id": 90,
"text": "x_1, \\ldots, x_n"
},
{
"math_id": 91,
"text": "y_i"
},
{
"math_id": 92,
"text": "0,"
},
{
"math_id": 93,
"text": "y_1, \\ldots, y_n"
},
{
"math_id": 94,
"text": "x_i"
},
{
"math_id": 95,
"text": "0."
},
{
"math_id": 96,
"text": "x_1, \\ldots, x_m"
},
{
"math_id": 97,
"text": "\\left\\{T\\left(x_i, y_j\\right) : 1 \\leq i \\leq m, 1 \\leq j \\leq n\\right\\}"
},
{
"math_id": 98,
"text": "\\mathcal{B}\\left(X^\\prime_\\sigma, Y^\\prime_\\sigma; Z\\right)~\\subseteq~ \\mathcal{B}\\left(X^\\prime_b, Y^\\prime_b; Z\\right)"
},
{
"math_id": 99,
"text": "G \\subseteq X^\\prime"
},
{
"math_id": 100,
"text": "H \\subseteq Y^\\prime,"
},
{
"math_id": 101,
"text": "N"
},
{
"math_id": 102,
"text": "Z,"
},
{
"math_id": 103,
"text": "\\mathcal{U}(G, H, N) = \\left\\{b \\in \\mathcal{B}\\left(X^\\prime_b, Y^\\prime_b; Z\\right) ~:~ b(G, H) \\subseteq N\\right\\}"
},
{
"math_id": 104,
"text": "b(G \\times H)"
},
{
"math_id": 105,
"text": "\\mathcal{U}(G, H, N)"
},
{
"math_id": 106,
"text": "\\mathcal{B}\\left(X^\\prime_b, Y^\\prime_b; Z\\right)."
},
{
"math_id": 107,
"text": "\\varepsilon"
},
{
"math_id": 108,
"text": "\\mathcal{B}\\left(X^\\prime_b, Y^\\prime_b; Z\\right)"
},
{
"math_id": 109,
"text": "\\mathcal{B}_\\varepsilon\\left(X^\\prime_b, Y^\\prime_b; Z\\right)."
},
{
"math_id": 110,
"text": "B_\\varepsilon\\left(X^\\prime_\\sigma, Y^\\prime_\\sigma\\right)"
},
{
"math_id": 111,
"text": "X \\otimes_\\varepsilon Y."
},
{
"math_id": 112,
"text": "X \\widehat{\\otimes}_\\varepsilon Y,"
},
{
"math_id": 113,
"text": "\\mathcal{B}_\\varepsilon\\left(X^\\prime_\\sigma, Y^\\prime_\\sigma\\right)"
},
{
"math_id": 114,
"text": "X \\widehat{\\otimes}_\\varepsilon Y"
},
{
"math_id": 115,
"text": "\\mathcal{B}\\left(X^\\prime_\\sigma, Y^\\prime_\\sigma\\right)."
},
{
"math_id": 116,
"text": "\\mathcal{B}_\\varepsilon\\left(X^\\prime_\\sigma, Y^\\prime_\\sigma\\right),"
},
{
"math_id": 117,
"text": "H \\subseteq U^\\circ."
},
{
"math_id": 118,
"text": "X^\\prime."
},
{
"math_id": 119,
"text": "H \\subseteq L(X; Y)"
},
{
"math_id": 120,
"text": "L(X; Y)"
},
{
"math_id": 121,
"text": "X."
},
{
"math_id": 122,
"text": "L_b(X; Y)"
},
{
"math_id": 123,
"text": "H \\subseteq L(X; Y),"
},
{
"math_id": 124,
"text": "L_{\\sigma}(X; Y)"
},
{
"math_id": 125,
"text": "L_\\sigma(X; Y)"
},
{
"math_id": 126,
"text": "D"
},
{
"math_id": 127,
"text": "X_\\sigma^\\prime."
},
{
"math_id": 128,
"text": "X_\\sigma^\\prime"
},
{
"math_id": 129,
"text": "H \\subseteq X^\\prime"
},
{
"math_id": 130,
"text": "X_b^\\prime"
},
{
"math_id": 131,
"text": "H \\subseteq X^\\prime,"
},
{
"math_id": 132,
"text": "B : X_1 \\times X_2 \\to Y"
},
{
"math_id": 133,
"text": "X_1"
},
{
"math_id": 134,
"text": "X_2"
},
{
"math_id": 135,
"text": "B"
},
{
"math_id": 136,
"text": "L\\left(X^\\prime_\\sigma; Y_\\sigma\\right) = L\\left(X^\\prime_\\tau; Y\\right)"
},
{
"math_id": 137,
"text": "u : X^\\prime \\to Y"
},
{
"math_id": 138,
"text": "u : X^\\prime_{\\sigma\\left(X^\\prime, X\\right)} \\to Y_{\\sigma\\left(Y, Y^\\prime\\right)}"
},
{
"math_id": 139,
"text": "u : X^\\prime_{\\tau\\left(X^\\prime, X\\right)} \\to Y"
},
{
"math_id": 140,
"text": "J : \\mathcal{B}\\left(X^\\prime_{\\sigma\\left(X^\\prime, X\\right)}, Y^\\prime_{\\sigma\\left(Y^\\prime, Y\\right)}\\right) \\to L\\left(X^\\prime_{\\sigma \\left(X^\\prime, X\\right)}; Y_{\\sigma \\left(Y, Y^\\prime\\right)}\\right)."
},
{
"math_id": 141,
"text": "X^\\prime_{\\sigma\\left(X^\\prime, X\\right)} \\times Y^\\prime_{\\sigma\\left(Y^\\prime, Y\\right)}"
},
{
"math_id": 142,
"text": "x^\\prime \\in X^\\prime,"
},
{
"math_id": 143,
"text": "B_{x^\\prime} \\in \\left(Y_\\sigma^\\prime\\right)^\\prime"
},
{
"math_id": 144,
"text": "B_{x^\\prime}\\left(y^\\prime\\right) := B\\left(x^\\prime, y^\\prime\\right)."
},
{
"math_id": 145,
"text": "\\left(Y_\\sigma^\\prime\\right)^\\prime"
},
{
"math_id": 146,
"text": "y \\mapsto "
},
{
"math_id": 147,
"text": "y"
},
{
"math_id": 148,
"text": "B_{x^\\prime}"
},
{
"math_id": 149,
"text": "\\tilde{B}_{x^\\prime} \\in Y."
},
{
"math_id": 150,
"text": "\\tilde{B} : X^\\prime \\to Y"
},
{
"math_id": 151,
"text": "x^\\prime \\mapsto \\tilde{B}_{x^\\prime}"
},
{
"math_id": 152,
"text": "J(B) := \\tilde{B}."
},
{
"math_id": 153,
"text": "L\\left(X^\\sigma_\\sigma; Y_\\sigma\\right)"
},
{
"math_id": 154,
"text": "X^\\prime,"
},
{
"math_id": 155,
"text": "J : \\mathcal{B}_\\varepsilon\\left(X^\\prime_\\sigma, Y^\\prime_\\sigma\\right) \\to L_\\varepsilon\\left(X^\\prime_\\tau; Y\\right)."
},
{
"math_id": 156,
"text": "X \\otimes_\\varepsilon Y = B_\\varepsilon\\left(X^\\prime_\\sigma, Y^\\prime_\\sigma\\right)"
},
{
"math_id": 157,
"text": "L_\\varepsilon\\left(X^\\prime_\\tau; Y\\right)"
},
{
"math_id": 158,
"text": "L\\left(X^\\prime_\\sigma; Y_\\sigma\\right)"
},
{
"math_id": 159,
"text": "J"
},
{
"math_id": 160,
"text": "X^\\prime_{\\sigma\\left(X^\\prime, X\\right)} \\to Y"
},
{
"math_id": 161,
"text": "L\\left(X^\\prime_\\tau; Y\\right) \\subseteq L\\left(X^\\prime_b; Y\\right)"
},
{
"math_id": 162,
"text": "L_b\\left(X^\\prime_b; Y\\right)."
},
{
"math_id": 163,
"text": "L_b\\left(X^\\prime_b; Y\\right)"
},
{
"math_id": 164,
"text": "\\cdot \\otimes \\cdot : X \\times Y \\to \\mathcal{B}\\left(X_\\sigma^{\\prime}, Y_\\sigma^{\\prime}\\right)"
},
{
"math_id": 165,
"text": "X \\times Y \\to X \\otimes Y"
},
{
"math_id": 166,
"text": "X \\otimes_\\varepsilon Y"
},
{
"math_id": 167,
"text": "\\theta \\in X \\otimes Y,"
},
{
"math_id": 168,
"text": "\\|\\theta\\|_\\varepsilon \\leq \\|\\theta\\|_{\\pi}."
},
{
"math_id": 169,
"text": "u : X_1 \\to Y_1"
},
{
"math_id": 170,
"text": "v : X_2 \\to Y_2"
},
{
"math_id": 171,
"text": "u"
},
{
"math_id": 172,
"text": "v"
},
{
"math_id": 173,
"text": "u \\otimes v : X_1 \\otimes_\\varepsilon X_2 \\to Y_1 \\otimes_\\varepsilon Y_2."
},
{
"math_id": 174,
"text": "u \\widehat{\\otimes}_\\varepsilon v : X_1 \\widehat{\\otimes}_\\varepsilon X_2 \\to Y_1 \\widehat{\\otimes}_\\varepsilon Y_2."
},
{
"math_id": 175,
"text": "Y_1"
},
{
"math_id": 176,
"text": "Y_2"
},
{
"math_id": 177,
"text": "X_1 \\otimes_\\varepsilon Y_1"
},
{
"math_id": 178,
"text": "X_2 \\otimes_\\varepsilon Y_2"
},
{
"math_id": 179,
"text": "X_1 \\widehat{\\otimes}_\\varepsilon Y_1"
},
{
"math_id": 180,
"text": "X_2 \\widehat{\\otimes}_\\varepsilon Y_2."
},
{
"math_id": 181,
"text": "u \\widehat{\\otimes}_\\varepsilon v : X_1 \\widehat{\\otimes}_\\varepsilon X_2 \\to Y_1 \\widehat{\\otimes}_\\varepsilon Y_2"
},
{
"math_id": 182,
"text": "\\|u \\otimes v\\|_\\varepsilon = \\|u\\| \\|v\\|."
},
{
"math_id": 183,
"text": "\\pi"
},
{
"math_id": 184,
"text": "B\\left(X^{\\prime}_\\sigma, Y^{\\prime}_\\sigma\\right) = X \\otimes Y"
},
{
"math_id": 185,
"text": "X \\times Y \\to B\\left(X_\\sigma^{\\prime}, Y_\\sigma^{\\prime}\\right)"
},
{
"math_id": 186,
"text": "x \\otimes y."
},
{
"math_id": 187,
"text": "X \\otimes_{\\pi} Y"
},
{
"math_id": 188,
"text": "X \\otimes_\\pi Y \\to \\mathcal{B}_\\varepsilon\\left(X^{\\prime}_\\sigma, Y^{\\prime}_\\sigma\\right)"
},
{
"math_id": 189,
"text": "\\operatorname{In} : X \\otimes_\\varepsilon Y \\to X \\widehat{\\otimes}_\\varepsilon Y"
},
{
"math_id": 190,
"text": "{}^t \\operatorname{In} : \\left(X \\widehat{\\otimes}_\\varepsilon Y\\right)^{\\prime}_b \\to \\left(X \\otimes_\\varepsilon Y\\right)^{\\prime}_b"
},
{
"math_id": 191,
"text": "X \\widehat{\\otimes}_\\varepsilon Y."
},
{
"math_id": 192,
"text": "\\operatorname{Id}_{X \\otimes Y} : X \\otimes_{\\pi} Y \\to X \\otimes_\\varepsilon Y"
},
{
"math_id": 193,
"text": "\\hat{I} : X \\widehat{\\otimes}_{\\pi} Y \\to X \\widehat{\\otimes}_\\varepsilon Y."
},
{
"math_id": 194,
"text": "\\hat{I} : X \\widehat{\\otimes}_{\\pi} Y \\to X \\widehat{\\otimes}_\\varepsilon Y"
},
{
"math_id": 195,
"text": "L^1\\left(X; Y^{\\prime}\\right)"
},
{
"math_id": 196,
"text": "K : X \\otimes Y \\to L\\left(X^{\\prime}; Y\\right)"
},
{
"math_id": 197,
"text": "z = \\sum_{i=1}^n x_i \\otimes y_i"
},
{
"math_id": 198,
"text": "K(z) : X^{\\prime} \\to Y"
},
{
"math_id": 199,
"text": "K(z)\\left(x^{\\prime}\\right) := \\sum_{i=1}^n x^{\\prime}(x_i) y_i \\in Y,"
},
{
"math_id": 200,
"text": "K(z) : X \\to Y"
},
{
"math_id": 201,
"text": "\\sum_{i=1}^n x_i \\otimes y_i"
},
{
"math_id": 202,
"text": "z."
},
{
"math_id": 203,
"text": "K : X \\otimes_\\varepsilon Y \\to L_b\\left(X^{\\prime}_b; Y\\right)"
},
{
"math_id": 204,
"text": "L_b\\left(X^{\\prime}_b; Y\\right)"
},
{
"math_id": 205,
"text": "\\hat{K} : X \\widehat{\\otimes}_\\varepsilon Y \\to L_b\\left(X^{\\prime}_b; Y\\right)."
},
{
"math_id": 206,
"text": "\\hat{K} : X \\widehat{\\otimes}_\\varepsilon Y \\to L_b\\left(X^{\\prime}_b; Y\\right)"
},
{
"math_id": 207,
"text": "L_b\\left(X^{\\prime}; Y\\right)."
},
{
"math_id": 208,
"text": "L_b(X; Y)."
},
{
"math_id": 209,
"text": "X \\widehat{\\otimes}_{\\pi} Y \\to X \\widehat{\\otimes}_\\varepsilon Y"
},
{
"math_id": 210,
"text": "\\operatorname{Id} : X \\otimes_{\\pi} Y \\to X \\otimes_\\varepsilon Y"
},
{
"math_id": 211,
"text": "{}^{t}\\operatorname{Id} : \\left(X \\otimes_\\varepsilon Y\\right)^{\\prime}_b \\to \\left(X \\otimes_{\\pi} Y\\right)^{\\prime}_b"
},
{
"math_id": 212,
"text": "\\left(X \\otimes_{\\pi} Y\\right)^{\\prime}"
},
{
"math_id": 213,
"text": "B(X, Y),"
},
{
"math_id": 214,
"text": "X \\times Y."
},
{
"math_id": 215,
"text": "J(X, Y)."
},
{
"math_id": 216,
"text": "J(X, Y)"
},
{
"math_id": 217,
"text": "X \\times Y"
},
{
"math_id": 218,
"text": "b \\in B(X, Y) \\mapsto v(b) = \\int_{S \\times T} b\\big\\vert_{S \\times T} \\left(x^{\\prime}, y^{\\prime}\\right) \\operatorname{d} \\mu\\left(x^{\\prime}, y^{\\prime}\\right)"
},
{
"math_id": 219,
"text": "Y^{\\prime}_\\sigma,"
},
{
"math_id": 220,
"text": "\\mu"
},
{
"math_id": 221,
"text": "S \\times T"
},
{
"math_id": 222,
"text": "\\leq 1."
},
{
"math_id": 223,
"text": "A"
},
{
"math_id": 224,
"text": "v \\in A"
},
{
"math_id": 225,
"text": "S \\times T."
},
{
"math_id": 226,
"text": "\\Lambda : X \\to Y,"
},
{
"math_id": 227,
"text": "B_{\\Lambda} \\in Bi\\left(X, Y^{\\prime}\\right),"
},
{
"math_id": 228,
"text": "X \\times Y^{\\prime},"
},
{
"math_id": 229,
"text": "B_{\\Lambda}\\left(x, y^{\\prime}\\right) := \\left( y^{\\prime} \\circ \\Lambda\\right)(x)."
},
{
"math_id": 230,
"text": "\\Lambda : X \\to Y"
},
{
"math_id": 231,
"text": "\\Lambda: X \\to Y"
},
{
"math_id": 232,
"text": "x \\in X"
},
{
"math_id": 233,
"text": "y^{\\prime} \\in Y^{\\prime}:"
},
{
"math_id": 234,
"text": "\\left\\langle y^{\\prime}, \\Lambda(x)\\right\\rangle = \\int_{A^{\\prime} \\times B^{\\prime\\prime}} \\left\\langle x^{\\prime}, x\\right\\rangle \\left\\langle y^{\\prime\\prime}, y^{\\prime}\\right\\rangle \\operatorname{d} \\mu \\left(x^{\\prime}, y^{\\prime\\prime}\\right)"
},
{
"math_id": 235,
"text": "A^{\\prime}"
},
{
"math_id": 236,
"text": "B^{\\prime\\prime}"
},
{
"math_id": 237,
"text": "Y^{\\prime\\prime},"
},
{
"math_id": 238,
"text": "K : X^{\\prime} \\otimes Y \\to L(X; Y)"
},
{
"math_id": 239,
"text": "z = \\sum_{i=1}^n x_i^{\\prime} \\otimes y_i"
},
{
"math_id": 240,
"text": "K(z)(x) := \\sum_{i=1}^n x_i^{\\prime}(x) y_i \\in Y,"
},
{
"math_id": 241,
"text": "\\sum_{i=1}^n x_i^{\\prime} \\otimes y_i"
},
{
"math_id": 242,
"text": "A,"
},
{
"math_id": 243,
"text": "\\mathcal{F}(A)"
},
{
"math_id": 244,
"text": "\\subseteq."
},
{
"math_id": 245,
"text": "\\left(x_{\\alpha}\\right)_{\\alpha \\in A}"
},
{
"math_id": 246,
"text": "H \\subseteq A,"
},
{
"math_id": 247,
"text": "x_H := \\sum_{i \\in H} x_i."
},
{
"math_id": 248,
"text": "\\lim_{H \\in \\mathcal{F}(A)} x_{H}"
},
{
"math_id": 249,
"text": "\\left(x_H\\right)_{H \\in \\mathcal{F}(A)}"
},
{
"math_id": 250,
"text": "X^{A}"
},
{
"math_id": 251,
"text": "S."
},
{
"math_id": 252,
"text": "l^1(A) \\widehat{\\otimes}_\\varepsilon X"
},
{
"math_id": 253,
"text": "\\mathfrak{U}"
},
{
"math_id": 254,
"text": "U \\in \\mathfrak{U},"
},
{
"math_id": 255,
"text": "\\mu_U : X \\to \\R"
},
{
"math_id": 256,
"text": "x = \\left(x_{\\alpha}\\right)_{\\alpha \\in A} \\in S,"
},
{
"math_id": 257,
"text": "q_U(x) := \\sup_{x^{\\prime} \\in U^{\\circ}} \\sum_{\\alpha \\in A} \\left| \\left\\langle x^{\\prime}, x_{\\alpha}\\right\\rangle\\right|"
},
{
"math_id": 258,
"text": "q_U"
},
{
"math_id": 259,
"text": "\\left\\{q_U : U \\in \\mathfrak{U}\\right\\}"
},
{
"math_id": 260,
"text": "l^1(A, X)."
},
{
"math_id": 261,
"text": "l^1(A)."
},
{
"math_id": 262,
"text": "l^1(A) \\otimes X \\to l^1(A, E)"
},
{
"math_id": 263,
"text": "l^1(A) \\times X \\to l^1(A, E)"
},
{
"math_id": 264,
"text": "\\left(\\left(r_{\\alpha}\\right)_{\\alpha \\in A}, x\\right) \\mapsto \\left(r_{\\alpha} x\\right)_{\\alpha \\in A}."
},
{
"math_id": 265,
"text": "l^1(A) \\otimes_\\varepsilon X \\to l^1(A, E)"
},
{
"math_id": 266,
"text": "l^1(A) \\otimes X"
},
{
"math_id": 267,
"text": "\\hat{X}"
},
{
"math_id": 268,
"text": "l^1(A) \\widehat{\\otimes}_\\varepsilon X \\to l^1\\left(A, \\hat{X}\\right)"
},
{
"math_id": 269,
"text": "l^1(A) \\otimes_\\varepsilon X \\to l^1\\left(A, X\\right) \\subseteq l^1\\left(A, \\hat{X}\\right)"
},
{
"math_id": 270,
"text": "l^1(A, E)."
},
{
"math_id": 271,
"text": "\\Omega"
},
{
"math_id": 272,
"text": "\\R^n,"
},
{
"math_id": 273,
"text": "p^0 = \\left(p^0_1, \\ldots, p^0_n\\right) \\in \\Omega"
},
{
"math_id": 274,
"text": "f : \\operatorname{Dom} f \\to Y"
},
{
"math_id": 275,
"text": "p^0 \\in \\operatorname{Dom} f"
},
{
"math_id": 276,
"text": "p^0"
},
{
"math_id": 277,
"text": "\\operatorname{Dom} f."
},
{
"math_id": 278,
"text": "f"
},
{
"math_id": 279,
"text": "n"
},
{
"math_id": 280,
"text": "e_1, \\ldots, e_n"
},
{
"math_id": 281,
"text": "\\lim_{\\stackrel{p \\to p^0,}{p \\in \\operatorname{domain} f}} \\frac{f(p) - f\\left(p^0\\right) - \\sum_{i=1}^n (p_i - p^0_i) e_i}{\\left\\|p - p^0\\right\\|_2} = 0 \\text{ in } Y"
},
{
"math_id": 282,
"text": "p = (p_1, \\ldots, p_n)."
},
{
"math_id": 283,
"text": "\\Omega."
},
{
"math_id": 284,
"text": "k = 0, 1, \\ldots, \\infty,"
},
{
"math_id": 285,
"text": "C^k(\\Omega; Y)"
},
{
"math_id": 286,
"text": "C^k"
},
{
"math_id": 287,
"text": "C_c^k(\\Omega; Y)"
},
{
"math_id": 288,
"text": "C^k(\\Omega)"
},
{
"math_id": 289,
"text": "C_c^k(\\Omega)"
},
{
"math_id": 290,
"text": "C^k(\\Omega) \\widehat{\\otimes}_\\varepsilon Y."
},
{
"math_id": 291,
"text": "K"
},
{
"math_id": 292,
"text": "C(K) \\otimes Y"
},
{
"math_id": 293,
"text": "\\| f \\|_\\varepsilon = \\sup_{x \\in K} \\|f(x)\\|."
},
{
"math_id": 294,
"text": "C(H \\times K) \\cong C(H) \\widehat{\\otimes}_\\varepsilon C(K),"
},
{
"math_id": 295,
"text": "l_{\\infty}(Y)"
},
{
"math_id": 296,
"text": "\\left(y_i\\right)_{i=1}^{\\infty}"
},
{
"math_id": 297,
"text": "\\left\\|\\left(y_i\\right)_{i=1}^{\\infty}\\right\\| := \\sup_{i \\in \\N} \\left\\|y_i\\right\\|."
},
{
"math_id": 298,
"text": "l_{\\infty}"
},
{
"math_id": 299,
"text": "l_{\\infty}\\left(\\Complex\\right)."
},
{
"math_id": 300,
"text": "l_{\\infty} \\widehat{\\otimes}_\\varepsilon Y"
},
{
"math_id": 301,
"text": "l_{\\infty}(Y)."
},
{
"math_id": 302,
"text": "\\mathcal{L}\\left(\\R^n; Y\\right)"
},
{
"math_id": 303,
"text": "f \\in C^{\\infty}\\left(\\R^n; Y\\right)"
},
{
"math_id": 304,
"text": "P"
},
{
"math_id": 305,
"text": "Q"
},
{
"math_id": 306,
"text": "\\left\\{P(x) Q\\left(\\partial / \\partial x\\right) f(x) : x \\in \\R^n\\right\\}"
},
{
"math_id": 307,
"text": "\\mathcal{L}\\left(\\R^n; Y\\right),"
},
{
"math_id": 308,
"text": "\\R^n"
},
{
"math_id": 309,
"text": "P(x) Q\\left(\\partial / \\partial x\\right) f(x),"
},
{
"math_id": 310,
"text": "\\mathcal{L}\\left(\\R^n\\right) \\widehat{\\otimes}_\\varepsilon Y."
}
] |
https://en.wikipedia.org/wiki?curid=63566619
|
63572008
|
Thermodynamics and an Introduction to Thermostatistics
|
Textbook by Herbert Callen
Thermodynamics and an Introduction to Thermostatistics is a textbook written by Herbert Callen that explains the basics of classical thermodynamics and discusses advanced topics in both classical and quantum frameworks. It covers the subject in an abstract and rigorous manner and contains discussions of applications. The textbook contains three parts, each building upon the previous. The first edition was published in 1960 and a second followed in 1985.
Overview.
The first part of the book starts by presenting the problem thermodynamics is trying to solve, and provides the postulates on which thermodynamics is founded. It then builds upon this foundation to discuss reversible processes, heat engines, thermodynamic potentials, Maxwell's relations, the stability of thermodynamic systems, and first-order phase transitions. Having laid down the basics of thermodynamics, the author then goes on to discuss more advanced topics such as critical phenomena and irreversible processes.
The second part of the text presents the foundations of classical statistical mechanics. The concept of Boltzmann's entropy is introduced and used to describe the Einstein model, the two-state system, and the polymer model. Afterwards, the different statistical ensembles are discussed, from which the thermodynamic potentials are derived. Quantum fluids and fluctuations are also discussed.
The last part of the text is a brief discussion on symmetry and the conceptual foundations of thermostatistics. In the final chapter, Callen advances his thesis that the symmetries of the fundamental laws of physics underlie the very foundations of thermodynamics and seeks to illuminate the crucial role thermodynamics plays in science.
Callen advises that a one-semester course for advanced undergraduates should cover the first seven chapters plus chapters 15 and 16 if time permits.
Second edition.
Background.
The second edition provides a descriptive account of the thermodynamics of critical phenomena, which progressed dramatically in the 1960s and 1970s. Drawing on feedback from students and instructors, Callen improved many explanations, added explicitly solved examples, and added many exercises, many of which have complete or partial answers. He also provided an introduction to statistical mechanics with an emphasis on the core principles rather than the applications. However, he sought neither to separate thermodynamics and statistical mechanics completely nor to subsume the former under the latter under the banner of "thermal physics." Indeed, thermal physics courses often emphasize statistical mechanics at the expense of thermodynamics, despite the latter's importance for industry, as a survey of business leaders conducted by the American Physical Society in 1971 suggested. Callen observed that thermodynamics had subsequently been de-emphasized.
Table of Contents.
<templatestyles src="Div col/styles.css"/>
Reception.
Robert B. Griffiths, a specialist in thermodynamics and statistical mechanics at Carnegie Mellon University, commented that both editions of this book present clearly and concisely the core of thermodynamics within the first eight chapters. At the time of writing (1987), Griffiths knew of books that explained the principles of thermodynamics, but Callen's had the best presentation of the material. He believed Callen offered a pedagogical, if abrupt, treatment of the subject. His book begins in an abstract manner, assuming the existence and properties of entropy and deriving the consequences for various processes of interest rather than proceeding through heat engines and thermodynamic cycles or by statistical mechanics and Boltzmann's entropy formula formula_0. However, he argued that Callen's treatment of critical phenomena (Chapter 10) contains some technical flaws. Callen thought that classical analysis had broken down. But Griffiths wrote that the problem lies not in the breakdown of thermodynamics but rather in the Taylor expansion of thermodynamic quantities, and that precise expressions of the functions appearing in a fundamental relation should be determined by statistical mechanics and experiments, not thermodynamics. Nevertheless, Griffiths still believed this book to be an excellent resource for learning the basics of thermodynamics.
According to L.C. Scott, who studied statistical mechanics and biophysics at Oklahoma State University, "Thermodynamics and an Introduction to Thermostatistics" is a popular textbook that begins with some basic postulates based on intuitive classical, empirical, and macroscopic arguments. He found it remarkable that the whole edifice of classical thermodynamics follows from just a few basic assumptions. However, Scott preferred the discussion of temperature in "Heat and Thermodynamics" by Mark W. Zemansky and Richard H. Dittman because it is based on thermometry and forces students to contemplate the empirical basis of the concept of temperature, leaving aside the molecular basis of heat. He argued that such an approach yields greater appreciation for the meaning of temperature and its statistical-mechanical basis, which students will encounter later. In contrast, Callen's book does not mention temperature until Chapter 2, where Callen defines temperature as the reciprocal of the derivative of entropy with respect to internal energy and then shows, using the postulates, that this definition is consistent with our intuition. While Zemansky and Dittman cover the first law of thermodynamics empirically, Callen simply assumes the existence of the internal energy function and then invokes the conservative nature of inter-atomic forces. Whereas Zemansky and Dittman treat the second law of thermodynamics using heat engines and simply state the Clausius and Kelvin formulations of it, in Callen's book, the second law is contained within the postulates. Scott was unsure which approach is more understandable for students. In general, Zemansky and Dittman employ an empirical approach while Callen's is deductive. Scott opined that Zemansky and Dittman's book is more suitable for beginning students while Callen's is more appropriate for an advanced course or as a reference.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "S = k \\ln \\Omega"
}
] |
https://en.wikipedia.org/wiki?curid=63572008
|
63576316
|
Trace metal stable isotope biogeochemistry
|
Trace metal stable isotope biogeochemistry is the study of the distribution and relative abundances of trace metal isotopes in order to better understand the biological, geological, and chemical processes occurring in an environment. Trace metals are elements such as iron, magnesium, copper, and zinc that occur at low levels in the environment. Trace metals are critically important in biology and are involved in many processes that allow organisms to grow and generate energy. In addition, trace metals are constituents of numerous rocks and minerals, thus serving as an important component of the geosphere. Both stable and radioactive isotopes of trace metals exist, but this article focuses on those that are stable. Isotopic variations of trace metals in samples are used as isotopic fingerprints to elucidate the processes occurring in an environment and answer questions relating to biology, geochemistry, and medicine.
Isotope notation.
In order to study trace metal stable isotope biogeochemistry, it is necessary to compare the relative abundances of isotopes of trace metals in a given biological, geological, or chemical pool to a standard (discussed individually for each isotope system below) and monitor how those relative abundances change as a result of various biogeochemical processes. Conventional notations used to mathematically describe isotope abundances, as exemplified here for 56Fe, include the isotope ratio (56R), fractional abundance (56F) and delta notation (δ56Fe). Furthermore, as different biogeochemical processes vary the relative abundances of the isotopes of a given trace metal, different reaction pools or substances will become enriched or depleted in specific isotopes. This partial separation of isotopes between different pools is termed isotope fractionation, and is mathematically described by fractionation factors α or ε (which express the difference in isotope ratio between two pools), or by "cap delta" (Δ; the difference between two δ values). For a more complete description of these notations, see the isotope notation section in Hydrogen isotope biogeochemistry.
Naturally occurring trace metal isotope variations and fractionations.
In nature, variations in isotopic ratios of trace metals on the order of a few tenths to several ‰ are observed within and across diverse environments spanning the geosphere, hydrosphere and biosphere. A complete understanding of all processes that fractionate trace metal isotopes is presently lacking, but in general, isotopes of trace metals are fractionated during various chemical and biological processes due to kinetic and equilibrium isotope effects.
Geochemical fractionations.
Certain isotopes of trace metals are preferentially oxidized or reduced; thus, transitions between redox species of the metal ions (e.g., Fe2+ → Fe3+) are fractionating, resulting in different isotopic compositions between the different redox pools in the environment. Additionally, at high temperatures, metal ions can evaporate (and subsequently condense upon cooling), and the relative differences in isotope masses of a given heavy metal lead to fractionation during these evaporation and condensation processes. Diffusion of isotopes through a solution or material can also result in fractionations, as the lighter isotopes are able to diffuse at a faster rate. Additionally, isotopes can have slight variations in their solubility and other chemical and physical properties, which can also drive fractionation.
Biological fractionations.
In sediments, oceans, and rivers, distinct trace metal isotope ratios exist due to biological processes such as metal ion uptake and abiotic processes such as adsorption to particulate matter that preferentially remove certain isotopes. The trace metal isotopic composition of a given organism results from a combination of the isotopic compositions of source material (i.e., food and water) and any fractionations imparted during metal ion uptake, translocation and processing inside cells.
Applications of trace metal isotope ratios.
Stable isotope ratios of trace metals can be used to answer a variety of questions spanning diverse fields, including oceanography, geochemistry, biology, medicine, anthropology and astronomy. In addition to their modern applications, trace metal isotopic compositions can provide insight into ancient biogeochemical processes that operated on Earth. These signatures arise because the processes that form and modify samples are recorded in the trace metal isotopic compositions of the samples. By analyzing and understanding trace metal isotopic compositions in biological, chemical or geological materials, one can address questions such as the sources of nutrients for phytoplankton in the ocean, processes that drove the formation of geologic structures, the diets of modern or ancient organisms, and accretionary processes that took place in the early Solar System. Trace metal stable isotope biogeochemistry is still an emerging field, yet each trace metal isotope system has clear, powerful applications to diverse and important questions. Important heavy metal isotope systems are discussed (in order of increasing atomic mass) in the following sections.
Iron.
Stable isotopes and natural abundances.
Naturally occurring iron has four stable isotopes, 54Fe, 56Fe, 57Fe, and 58Fe.
Stable iron isotope compositions are described as the relative abundance of each of the stable isotopes with respect to 54Fe. The standard for iron is elemental iron, IRMM-014, distributed by the Institute for Reference Materials and Measurements. The delta value is reported relative to this standard, and is defined as:
formula_0
Delta values are often reported as per mil values (‰), or part-per-thousand differences from the standard. Iron isotopic fractionation is also commonly described in units of per mil per atomic mass unit.
In many cases, the δ56Fe value can be related to the δ57Fe and δ58Fe values through mass-dependent fractionation:
formula_1
formula_2
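To make the notation concrete, the short Python sketch below evaluates the conventional delta definition for a hypothetical measured ratio and also prints an approximate mass-dependent slope relating δ57Fe to δ56Fe. The reference ratio is derived from commonly quoted IRMM-014 abundances and the slope from integer mass numbers; both are illustrative assumptions rather than certified or study-specific values.
```python
import math

def delta_per_mil(ratio_sample, ratio_standard):
    """Delta value in per mil: (R_sample / R_standard - 1) * 1000."""
    return (ratio_sample / ratio_standard - 1.0) * 1000.0

# Illustrative 56Fe/54Fe reference ratio for IRMM-014, computed from the commonly
# quoted abundances 56Fe ~ 91.754 % and 54Fe ~ 5.845 % (not a certified value).
R_STD_56_54 = 91.754 / 5.845

# Hypothetical sample whose 56Fe/54Fe ratio is 1 per mil higher than the standard.
r_sample = R_STD_56_54 * 1.001
print(round(delta_per_mil(r_sample, R_STD_56_54), 3))  # ~1.0 per mil

# Approximate mass-dependent slope relating delta 57Fe to delta 56Fe, using
# integer mass numbers; a common approximation, not an exact law.
print(round(math.log(57 / 54) / math.log(56 / 54), 3))  # ~1.487
```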
Chemistry.
One of the most prevalent features of iron chemistry is its redox behavior. Iron has three oxidation states: metallic iron (Fe0), ferrous iron (Fe2+), and ferric iron (Fe3+). Ferrous iron is the reduced form of iron, and ferric iron is the oxidized form. In the presence of oxygen, ferrous iron is oxidized to ferric iron; thus, ferric iron is the dominant redox state of iron at Earth's surface conditions. However, ferrous iron is the dominant redox state at depth below the surface. Because of this redox chemistry, iron can act as either an electron donor or acceptor, making it a metabolically useful species.
Each form of iron has a specific distribution of electrons (i.e., electron configuration), tabulated below:
Equilibrium Isotope Fractionation.
Variations in iron isotopes are caused by a number of chemical processes which result in the preferential incorporation of certain isotopes of iron into certain phases. Many of the chemical processes which fractionate iron are not well understood and are still being studied. The most well-documented chemical processes which fractionate iron isotopes relate to its redox chemistry, the evaporation and condensation of iron, and the diffusion of dissolved iron through systems. These processes are described in more detail below.
Fractionation as a result of redox chemistry.
To first order, reduced iron favors the isotopically light isotopes and oxidized iron favors the isotopically heavy isotopes. This effect has been studied with regard to the abiotic oxidation of Fe2+ to Fe3+, which results in fractionation. The mineral ferrihydrite, which forms in acidic aquatic conditions, is precipitated via the oxidation of aqueous Fe2+ to Fe3+. Precipitated ferrihydrite has been found to be enriched in the heavy isotopes by 0.45‰ per atomic mass unit with respect to the starting material. This indicates that heavier iron isotopes are preferentially precipitated as a result of oxidizing processes.
Theoretical calculations in combination with experimental data have also aimed to quantify the fractionation between Fe(III)aq and Fe(II)aq in HCl. Based on modeling, the fractionation factor between the two species is temperature dependent:
formula_3
Fractionation as a result of evaporation and condensation.
Evaporation and condensation can give rise to both kinetic and equilibrium isotope effects. While equilibrium mass fractionation is present during evaporation and condensation, it is negligible compared to kinetic effects. During condensation, the condensate is enriched in the light isotope, whereas in evaporation, the gas phase is enriched in the light isotope. Using the kinetic theory of gases, a 56Fe/54Fe fractionation factor of α = 1.01835 is predicted for the evaporation of a pool containing equimolar amounts of 56Fe and 54Fe. In evaporation experiments, the evaporation of FeO at 1,823 K gave a fractionation factor of α = 1.01877. Presently, there have been no experimental attempts to determine the 56Fe/54Fe fractionation factor of condensation.
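The kinetic-theory value quoted above follows from a simple square-root mass law: in effusion-style evaporation the lighter isotope escapes faster by a factor equal to the square root of the mass ratio. The sketch below assumes this law with integer mass numbers, which reproduces the quoted figure.
```python
import math

# Kinetic (Graham's-law) fractionation factor for 56Fe/54Fe during evaporation:
# the lighter isotope evaporates faster by a factor sqrt(m_heavy / m_light).
alpha_evaporation = math.sqrt(56 / 54)
print(round(alpha_evaporation, 5))  # 1.01835, matching the value quoted above
```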
Fractionation as a result of diffusion.
Kinetic fractionation of dissolved iron occurs as a result of diffusion. When isotopes diffuse, the lower mass isotopes diffuse more quickly than the heavier isotopes, resulting in fractionation. This difference in diffusion rates has been approximated as:
formula_4
In this equation, D1 and D2 are the diffusivities of the isotopes, m1 and m2 are the masses of the isotopes, and β is an empirical exponent that can vary between 0 and 0.5, depending on the system. Although more work is required to fully understand fractionation as a result of diffusion, studies of the diffusion of iron in metal have consistently given β values of approximately 0.25. Iron diffusion between silicate melts and basaltic/rhyolitic melts has given lower β values (~0.030). In aqueous environments, a β value of 0.0025 has been obtained.
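Because the diffusivity ratio depends only on the mass ratio and the exponent β, the magnitude of the diffusive fractionation for the three settings mentioned above is easy to tabulate. The sketch below assumes the convention that the lighter isotope diffuses faster, i.e. D54/D56 = (m56/m54)^β, and uses the β values quoted in the text.
```python
def diffusivity_ratio(m_light, m_heavy, beta):
    """Light-to-heavy diffusivity ratio, (m_heavy / m_light) ** beta."""
    return (m_heavy / m_light) ** beta

for setting, beta in [("iron metal", 0.25), ("silicate melts", 0.030), ("aqueous solution", 0.0025)]:
    ratio = diffusivity_ratio(54, 56, beta)
    print(f"{setting}: D54/D56 = {ratio:.5f} (~{(ratio - 1) * 1000:.2f} per mil)")
```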
Fractionation as a result of phase partitioning.
There may be equilibrium fractionation between coexisting minerals. This would be particularly relevant when considering the formation of planetary bodies early in the solar system. Experiments have aimed to simulate the formation of the Earth at high temperatures using a platinum-iron alloy and an analog for the silicate earth at 1,500 °C. However, the observed fractionation was very small, less than 0.2‰ per atomic mass unit. More experimental work is needed to fully understand this effect.
Biology.
In biology, iron plays a number of roles. Iron is widespread in most living organisms and is essential for their function. In microbes, iron is utilized as an electron donor or acceptor in microbial metabolism, allowing microbes to generate energy. In the oceans, iron is essential for the growth and survival of phytoplankton, which use iron to fix nitrogen. Iron is also important in plants, given that they need iron to transfer electrons during photosynthesis. Finally, in animals, iron plays many roles; however, its most essential function is to transport oxygen in the bloodstream throughout the body. Thus, iron undergoes many biological processes, each of which preferentially uses certain isotopes of iron. While iron isotopic fractionations are observed in many organisms, they are still not well understood. Improvements in the understanding of the iron isotope fractionations observed in biology will enable the development of a more complete knowledge of the enzymatic, metabolic, and other biologic pathways in different organisms. Below, the known iron isotopic variations for different classes of organisms are described.
Iron reducing bacteria.
Iron-reducing bacteria reduce ferric iron to ferrous iron under anaerobic conditions. One of the first studies of iron fractionation in iron-reducing bacteria examined the bacterium "Shewanella algae". "S. algae" was grown on a ferrihydrite substrate and was then allowed to reduce iron. The study found that "S. algae" preferentially reduced 54Fe over 56Fe, with a δ56/54Fe value of -1.3‰.
More recent experiments have studied the bacterium "Shewanella putrefaciens" and its reduction of Fe(III) in goethite. These studies have found δ56/54Fe values of -1.2‰ relative to the goethite. The kinetics of this fractionation were also studied in this experiment, and it was suggested that the iron isotope fractionation is likely related to the kinetics of the electron transfer step.
Most studies of other iron-reducing bacteria have found δ56/54Fe values of approximately -1.3‰. At high Fe(III) reduction rates, δ56/54Fe values of -2 to -3‰ relative to the substrate have been observed. The study of iron isotopes in iron-reducing bacteria enables an improved understanding of the metabolic processes operating in these organisms.
Iron oxidizing bacteria.
While most iron is oxidized as a result of interaction with atmospheric oxygen or oxygenated waters, oxidation by bacteria is an active process in anoxic environments and in oxygenated, low pH (<3) environments. Studies of the acidophilic Fe(II)-oxidizing bacterium "Acidithiobacillus ferrooxidans" have been used to determine the fractionation caused by iron-oxidizing bacteria. In most cases, δ56/54Fe values between 2 and 3‰ were measured. However, a Rayleigh trend with a fractionation factor of αFe(III)aq-Fe(II)aq ~ 1.0022 was observed, which is smaller than the fractionation factor in the abiotic control experiments (αFe(III)aq-Fe(II)aq ~ 1.0034); this difference has been inferred to reflect a biological isotope effect. Iron isotopes can thus be used to improve understanding of the metabolic processes controlling iron oxidation and energy production in these organisms.
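The Rayleigh trend mentioned above can be illustrated with the standard closed-form Rayleigh distillation relation. The sketch below is a generic illustration (not a reconstruction of the cited experiments): it shows how the δ56Fe of the remaining Fe(II) pool drifts to lighter values as oxidation proceeds with α ≈ 1.0022.
```python
def rayleigh_delta_residual(delta0, f_remaining, alpha):
    """Delta value of the residual (reactant) pool in a Rayleigh process.

    Uses R = R0 * f**(alpha - 1), converted to delta notation, where f is the
    fraction of the original pool that remains.
    """
    return (delta0 + 1000.0) * f_remaining ** (alpha - 1.0) - 1000.0

alpha_bio = 1.0022  # biological Fe(III)aq-Fe(II)aq factor quoted in the text
for f in (1.0, 0.75, 0.5, 0.25, 0.1):
    delta = rayleigh_delta_residual(0.0, f, alpha_bio)
    print(f"fraction remaining {f:4.2f}: delta56Fe of residual Fe(II) = {delta:6.3f} per mil")
```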
Photoautotrophic bacteria, which oxidize Fe(II) under anaerobic conditions, have also been studied. The "Thiodictyon" bacteria precipitate poorly crystalline hydrous ferric oxide when they oxidize iron. The precipitate was enriched in 56Fe relative to Fe(II)aq, with a δ56/54Fe value of +1.5 ± 0.2‰.
Magnetotactic bacteria.
Magnetotactic bacteria are bacteria with magnetosomes that contain magnetic crystals, usually magnetite or greigite, which allow them to orient themselves with the Earth’s magnetic field lines. These bacteria mineralize magnetite via the reduction of Fe(III), usually in microaerobic or anoxic environments. In the magnetotactic bacteria that have been studied, there was no significant iron isotope fractionation observed.
Phytoplankton.
Iron is important for the growth of phytoplankton. In phytoplankton, iron is used for electron transfer reactions in photosynthesis in both photosystem I and photosystem II. Additionally, iron is an important component of the enzyme nitrogenase, which is used to fix nitrogen. In measurements at open ocean stations, phytoplankton are isotopically light, with the fractionation as a result of biological uptake measured between -0.25‰ and -0.13‰. An improved understanding of this fractionation will allow phytoplankton photosynthetic processes to be characterized more precisely.
Animals.
Iron has many important roles in animal biology, specifically when considering oxygen transport in the bloodstream, oxygen storage in muscles, and enzymes. Known isotope variations are shown in the figure below. Iron isotopes could be useful tracers of the iron biochemical pathways in animals, and also be indicative of trophic levels in a food chain.
Iron isotope variations in humans reflect a number of processes. Specifically, iron in the bloodstream reflects dietary iron, which is isotopically lighter than iron in the geosphere. Iron isotopes are distributed heterogeneously throughout the body, primarily to red blood cells, the liver, muscle, skin, enzymes, nails, and hair. Iron losses from the body (intestinal bleeding, bile, sweat, etc.) favor the loss of isotopically heavy iron, with mean losses averaging a δ56Fe of +10‰. Iron absorption in the intestine favors lighter iron isotopes. This is largely because iron uptake is mediated by transport proteins such as transferrin, and these kinetically controlled transport steps preferentially take up isotopically light iron.
The observed iron isotopic variations in humans and animals are particularly important as tracers. Iron isotopic signatures are utilized to determine the geographic origin of food. Additionally, anthropologists and paleontologists use iron isotope data in order to track the transfer of iron between the geosphere and the biosphere, specifically between plant foods and animals. This allows for the reconstruction of ancient dietary habits based on the variations in iron isotopes in food.
Geochemistry.
By mass, iron is the most common element on Earth, and it is the fourth most abundant element in the Earth's crust. Thus, iron is widespread throughout the geosphere, and is also common on other planetary bodies. Natural variations in iron isotope composition in the geosphere are relatively small. Currently, the values of δ56/54Fe measured in rocks and minerals range from -2.5‰ to +1.5‰. Iron isotope composition is homogeneous in igneous rocks to ±0.05‰, indicating that much of the geologic isotopic variability is a result of the formation of rocks and minerals at low temperature. This homogeneity is particularly useful when tracing processes which result in fractionation through the system. While fractionation of igneous rocks is relatively constant, there are larger variations in the iron isotopic composition of chemical sediments. Thus, iron isotopes are used to determine the protolith of heavily metamorphosed rocks of sedimentary origin. An improved understanding of the way in which iron isotopes fractionate in the geosphere can help to better constrain geologic processes of formation.
Natural iron isotopic variations.
To date, iron is one of the most widely studied trace metals, and iron isotope compositions are relatively well-documented. Based on measurements, iron isotopes exhibit minimal variation (±3‰) in the terrestrial environment. A list of iron isotopic values of different materials from different environments is presented below.
In terrestrial environments.
The isotopic composition of igneous rocks is remarkably constant. The mean value of δ56Fe of terrestrial rocks is 0.00 ± 0.05‰. More precise isotopic measurements indicate that the small deviations from 0.00‰ may reflect a slight mass-dependent fractionation. This mass fractionation has been proposed to be FFe = 0.039 ± 0.008‰ per atomic mass unit relative to IRMM-014. There may also be slight isotopic variations in igneous rocks depending on their composition and process of formation. The average value of δ56Fe for ultramafic igneous rocks is -0.06‰, whereas the average value of δ56Fe for mid-ocean ridge basalts (MORB) is +0.03‰. Sedimentary rocks exhibit slightly larger variations in δ56Fe, with values between -1.6‰ and +0.9‰ relative to IRMM-014. The δ56Fe values of banded iron formations span the entire range observed on Earth, from -2.5‰ to +1‰.
In the oceans.
There are slight iron isotopic variations in the oceans relative to IRMM-014, which likely reflect variations in the biogeochemical cycling of iron within a given ocean basin. In the southeastern Atlantic, δ56Fe values between -0.13 and +0.21‰ have been measured. In the north Atlantic, δ56Fe values between -1.35 and +0.80‰ have been measured. In the equatorial Pacific, δ56Fe values between -0.03 and +0.58‰ have been measured. The supply of aerosol iron particles to the ocean has an isotopic composition of approximately 0‰. Riverine input of dissolved iron to the ocean is isotopically light relative to igneous rocks, with δ56Fe values between -1 and 0‰.
Most modern marine sediments have δ56Fe values similar to those of igneous δ56Fe values. Marine ferromanganese nodules have δ56Fe values between -0.8 and 0‰.
In hydrothermal systems.
Hot (> 300 °C) hydrothermal fluids from mid ocean ridges are isotopically light, with δ56Fe between -0.2 and -0.8‰. Particles in hydrothermal plumes are isotopically heavy relative to the hydrothermal fluids, with δ56Fe between 0.1 and 1.1‰. Hydrothermal deposits have average δ56Fe between -1.6 and 0.3‰. The sulfide minerals within these deposits have δ56Fe between -2.0 and 1.1‰.
In extraterrestrial objects.
Variations in iron isotopic composition have been observed in meteorite samples from other planetary bodies. The Moon has variations in iron isotopes of 0.4‰ per atomic mass unit. Mars has very small isotope fractionation of 0.001 ± 0.006‰ per atomic mass unit. Vesta has iron fractionations of 0.010 ± 0.010‰ per atomic mass unit. The chondritic reservoir exhibits fractionations of 0.069 ± 0.010‰ per atomic mass unit. Isotopic variations observed on planetary bodies can help to constrain and better understand their formation and processes occurring in the early Solar System.
Measurement.
High precision iron isotope measurements are obtained either via thermal ionization mass spectrometry (TIMS) or multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS).
Applications of iron isotopes.
Iron isotopes have many applications in the geosciences, biology, medicine, and other fields. Their ability to act as isotopic tracers allows for their use to determine information regarding the formation of geologic units and as a potential proxy for life on Earth and other planets. Iron isotopes also have applications in anthropology and paleontology, as they are used to study the diets of ancient civilizations and animals. The widespread uses of iron in biology make its isotopes a promising frontier in biomedical research, specifically their use to prevent and treat blood conditions and other pathological blood diseases. Some of the more prevalent applications of iron isotopes are described below.
Banded iron formations.
Banded iron formations (BIFs) are particularly important when considering the surface environments of the early Earth, which were significantly different from the surface environments observed today. This is manifested in the mineralogy of these formations, which are indicative of different redox conditions. Additionally, BIFs are interesting in that they were deposited while major changes were occurring in the atmosphere and in the biosphere 2.8 to 1.8 billion years ago. Iron isotopic studies can reveal the details of the formation of BIFs, which allows for the reconstruction of redox and climatic conditions at the time of deposition.
BIFs formed as a result of the oxidation of iron by oxygen, which was likely generated by the evolution of cyanobacteria. This was followed by the subsequent precipitation of iron particles in the ocean. Observed variations in the iron isotopic composition of BIFs span the entire range observed on Earth, with δ56/54Fe values between -2.5 and +1‰. These variations are hypothesized to occur for three reasons. The first relates to the varying mineralogy of the BIFs. Within the BIFs, minerals such as hematite, magnetite, siderite, and pyrite are observed. These minerals each have varying isotopic fractionations, likely as a result of their structures and the kinetics of their growth. The isotopic composition of the BIFs is indicative of the fluids from which they precipitated, which has applications when reconstructing environmental conditions of the ancient Earth. It has also been suggested that BIFs may be biologic in origin. Their δ56/54Fe values fall within the range of those observed to occur as a result of biologic processes relating to bacterial metabolic processes, such as those of anoxygenic phototrophic iron-oxidizing bacteria. Ultimately, the improved understanding of BIFs using iron isotope fractionations would allow for the reconstruction of past environments and the constraint of processes occurring on the ancient Earth. However, given that the values observed as a result of biogenic and abiogenic fractionation are relatively similar, the exact processes of BIF formation are still unclear. Thus, the continued study and improved understanding of biologic and abiologic fractionation effects would be beneficial in providing better details regarding BIF formation.
Iron cycling in the ocean.
Iron isotopes have become particularly useful in recent years for tracing biogeochemical cycling in the oceans. Iron is an important micronutrient for living species in the ocean, particularly for the growth of phytoplankton. Iron is estimated to limit phytoplankton growth in about one half of the ocean. As a result, the development of a better understanding of sources and cycling of iron in the modern oceans is important. Iron isotopes have been used to better constrain these pathways through data collected by the GEOTRACES program, which has collected iron isotopic data throughout the ocean. Based on the variations in iron isotopes, biogeochemical cycling and other processes controlling iron distribution in the ocean can be elucidated.
For example, the combination of iron concentration and iron isotope data can be used to determine the sources of oceanic iron. In the South Atlantic and in the Southern Ocean, isotopically light iron is observed in intermediate waters (200 - 1,300 meters), whereas isotopically heavy iron is observed in surface waters and deep waters (> 1,300 meters). To first order, this demonstrates that there are different sources, sinks, and processes contributing to the iron cycle in varying water masses. The isotopically light iron in intermediate waters suggests that the dominant iron sources include remineralized organic matter. This organic matter is isotopically light because phytoplankton preferentially take up light iron. In the surface ocean, the isotopically heavy iron represents the external sources of iron, such as dust, which is isotopically heavy relative to IRMM-014, and the sink of light isotopes as a result of their preferential uptake by phytoplankton. The isotopically heavy iron in the deep ocean suggests that the iron cycle is dominated by the abiotic, non-reductive release of iron, via desorption or dissolution, from particles. Isotopic analyses similar to the one above are utilized throughout all of the world's oceans to better understand regional variability in the processes which control iron cycling. These analyses can then be synthesized to better model the global biogeochemical cycling of iron, which is particularly important when considering primary production in the ocean.
Constraining processes on extraterrestrial bodies.
Iron isotopes have been applied for a number of purposes on planetary bodies. Their variations have been measured to more precisely determine the processes that occurred during planetary accretion. In the future, the comparison of observed biological fractionation of iron on Earth to fractionation on other planetary bodies may have astrobiological implications.
Planetary accretion.
One of the primary challenges in the study of planetary accretion is the fact that many tracers of the processes occurring in the early Solar System have been eliminated as a result of subsequent geologic events. Because transition metals do not show large stable isotope fractionations as a result of these events and because iron is one of the most abundant elements in the terrestrial planets, its isotopic variability has been used as a tracer of early Solar System processes.
Variations in δ57/54Fe between samples from Vesta, Mars, the Moon, and Earth have been observed, and these variations cannot be explained by any known petrological, geochemical, or planetary processes; thus, it has been inferred that the observed fractionations are a result of planetary accretion. It is interesting to note that the isotopic compositions of the Earth and the Moon are much heavier than those of Vesta and Mars. This provides strong support for the giant-impact hypothesis, as an impact of this magnitude would generate large amounts of energy, which would melt and vaporize iron, leading to the preferential escape of the lighter iron isotopes to space. More of the heavier isotopes would remain, resulting in the heavier iron isotopic compositions observed for the Earth and the Moon. The samples from Vesta and Mars exhibit minimal fractionation, consistent with the theory of runaway growth for their formations, as this process would not yield significant fractionations. Further study of the stable isotopes of iron in other planetary bodies and samples could provide further evidence and more precise constraints for planetary accretion and other processes that occurred in the early Solar System.
Astrobiology.
The use of iron isotopes may also have applications when studying potential evidence for life on other planets. The ability of microbes to utilize iron in their metabolisms makes it possible for organisms to survive in anoxic, iron-rich environments, such as Mars. Thus, the continual improvement of knowledge regarding the biological fractionations of iron observed on Earth can have applications when studying extraterrestrial samples in the future. While this field of research is still developing, this could provide evidence regarding whether a sample was generated as a result of biologic or abiologic processes depending on the isotopic fractionation. For example, it has been hypothesized that magnetite crystals found in Martian meteorites may have formed biologically as a result of their striking similarity to magnetite crystals produced by magnetotactic bacteria on Earth. Iron isotopes could be used to study the origin of the proposed "magnetofossils" and other rock formations on Mars.
Biomedical research.
Iron plays many roles in human biology, specifically in oxygen transport, short-term oxygen storage, and metabolism. Iron also plays a role in the body's immune system. Current biomedical research aims to use iron isotopes to better understand the speciation of iron in the body, with hopes of eventually being able to reduce the availability of free iron, as this would help to defend against infection.
Iron isotopes can also be utilized to better understand iron absorption in humans. The iron isotopic composition of blood reflects an individual's long-term absorption of dietary iron. This allows for the study of genetic predisposition to blood conditions, such as anemia, which will ultimately enable the prevention, identification, and resolution of blood disorders. Iron isotopic data could also aid in identifying impairments of the iron absorption regulatory system in the body, which would help to prevent the development of pathological conditions related to issues with iron regulation.
Copper.
Stable isotopes and natural abundances.
Copper has two naturally occurring stable isotopes: 63Cu and 65Cu, which exist in the following natural abundances:
The isotopic composition of Cu is conventionally reported in delta notation (in ‰) relative to a NIST SRM 976 standard:
formula_5
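The conversion from a measured isotope ratio to the delta value defined above is a one-line calculation. The following short Python sketch is an illustration only: the function name and the ratio values are placeholders rather than real measurements, and the factor of 1000 is the conventional conversion to ‰.

```python
def delta_permil(ratio_sample, ratio_standard):
    """Delta value in permil: (R_sample / R_standard - 1) * 1000."""
    return (ratio_sample / ratio_standard - 1.0) * 1000.0

# Placeholder numbers for illustration (not real measurements):
r_standard = 0.4456                 # approximate 65Cu/63Cu of the NIST SRM 976 standard
r_sample = r_standard * 1.0005      # a hypothetical sample 0.05% richer in 65Cu
print(delta_permil(r_sample, r_standard))   # ≈ +0.5 ‰
```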
Chemistry.
Copper can exist in non-ionic form (as Cu0) or in one of two redox states: Cu1+ (reduced) or Cu2+ (oxidized). Each form of Cu has a specific distribution of electrons (i.e., electron configuration), tabulated below:
The electronic configurations of Cu control the number and types of bonds Cu can form with other atoms (e.g., see Copper Biology section). These diverse coordination chemistries are what enable Cu to participate in many different biological and chemical reactions.
Finally, due to its full d-orbital, Cu1+ is diamagnetic. In contrast, Cu2+ has one unpaired electron in its d-orbital, making it paramagnetic. The different magnetic properties of the Cu ions enable determination of Cu's redox state by techniques such as electron paramagnetic resonance (EPR) spectroscopy, which can identify atoms with unpaired electrons by exciting electron spins.
Equilibrium isotope fractionation.
Transitions between redox species Cu1+ and Cu2+ fractionate Cu isotopes. 63Cu2+ is preferentially reduced over 65Cu2+, leaving the residual Cu2+ enriched in 65Cu. The equilibrium fractionation factor for speciation between Cu2+ and Cu1+ (αCu(II)-Cu(I)) is 1.00403 (i.e., dissolved Cu2+ is enriched in 65Cu by ~+4‰ relative to Cu1+).
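As a quick numerical check of the ~+4‰ figure quoted above (an illustrative back-of-the-envelope calculation, not part of the cited work), the permil enrichment implied by a fractionation factor can be computed directly:

```python
import math

alpha = 1.00403                  # equilibrium fractionation factor, Cu(II) relative to Cu(I)
print(1000 * math.log(alpha))    # ≈ 4.02 ‰ (1000·ln(alpha) convention)
print((alpha - 1) * 1000)        # ≈ 4.03 ‰ (simple ratio-difference convention)
```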
Biology.
Copper can be found in the active sites of most enzymes that catalyze redox reactions (i.e., oxidoreductases), as it facilitates single electron transfers while reversibly oscillating between the Cu1+ and Cu2+ redox states. Enzymes typically contain between one (mononuclear) and four (tetranuclear) copper centers, which enable enzymes to catalyze different reactions. These copper centers coordinate with different ligands depending on the Cu redox state. Oxidized Cu2+ preferentially coordinates with "hard donor" ligands (e.g., N- or O-containing ligands such as histidine, aspartic acid, glutamic acid or tyrosine), while reduced Cu1+ preferentially coordinates with "soft donor" ligands (e.g., S-containing ligands such as cysteine or methionine). Copper's powerful redox capability makes it critically important for biology, but comes at a cost: Cu1+ is a highly toxic metal to cells because it readily abstracts single electrons from organic compounds and cellular material, leading to production of free radicals. Thus, cells have evolved specific strategies for carefully controlling the activity of Cu1+ while exploiting its redox behavior.
Examples of copper-based enzymes.
Copper serves catalytic and structural roles in many essential enzymes in biology. In the context of catalytic activity, copper proteins function as electron or oxygen carriers, oxidases, mono- and dioxygenases and nitrite reductases. In particular, copper-containing enzymes include hemocyanins, one flavor of superoxide dismutase (SOD), metallothionein, cytochrome c oxidase, multicopper oxidase and particulate methane monooxygenase (pMMO).
Biological fractionation.
Biological processes that fractionate Cu isotopes are not well-understood, but play an important role in driving the δ65Cu values of materials observed in the marine and terrestrial environments. The natural 65Cu/63Cu ratio varies according to copper's redox form and the ligand to which copper binds. Oxidized Cu2+ preferentially coordinates with hard donor ligands (e.g., N- or O-containing ligands), while reduced Cu1+ preferentially coordinates with soft donor ligands (e.g., S-containing ligands). As 65Cu is preferentially oxidized over 63Cu, these isotopes tend to coordinate with hard and soft donor ligands, respectively. Cu isotopes can fractionate upon Cu-bacteria interactions from processes that include Cu adsorption to cells, intracellular uptake, metabolic regulation and redox speciation. Fractionation of Cu isotopes upon adsorption to cellular walls appears to depend on the surface functional groups that Cu complexes with, and can span positive and negative values. Furthermore, bacteria preferentially incorporate the lighter Cu isotope intracellularly and into proteins. For example, "E. coli", "B. subtilis" and a natural consortium of microbes sequestered Cu with apparent fractionations (ε65Cu) ranging from ~-1.0 to -4.4‰. Additionally, fractionation of Cu upon incorporation into the apoprotein of azurin was ~-1‰ in "P. aeruginosa", and -1.5‰ in "E. coli", while ε65Cu values of Cu incorporation into Cu-metallothionein and Cu-Zn-SOD in yeast were -1.7 and -1.2‰, respectively.
Geochemistry.
The concentration of Cu in bulk silicate Earth is ~30 ppm, lower than its average concentration (~72 ppm) in fresh mid-oceanic ridge basalt (MORB) glass. In nature, Cu forms a variety of sulfides (often in association with Fe), as well as carbonates and hydroxides (e.g., chalcopyrite, chalcocite, cuprite and malachite). In mafic and ultramafic rocks, Cu tends to be concentrated in sulfidic materials. In freshwater, the predominant form of Cu is free Cu2+; in seawater, Cu complexes with carbonate ligands to form copper carbonate species.
Measurement.
In order to measure Cu isotope ratios of various materials, several steps must be taken prior to the isotopic measurement in order to extract and purify copper. The first step in the analytical pipeline to measure Cu isotopes is to liberate Cu from its host material. Liberation should be quantitative; otherwise, fractionation may be introduced at this step. Cu-containing rocks are generally dissolved with HF; biological materials are commonly digested with HNO3. Seawater samples must be concentrated due to the low (nM) concentrations of Cu in the ocean. The sample material is subsequently run through an anion-exchange column to isolate and purify Cu. This step can also introduce Cu isotope fractionation if Cu is not quantitatively recovered from the column. If samples are from seawater, other ions (e.g., Na+, Mg2+, Ca2+) must be removed in order to eliminate isobaric interferences during the isotope measurement. Prior to 1992, 65Cu/63Cu ratios were measured via thermal ionization mass spectrometry (TIMS). Today, Cu isotopic compositions are measured via multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS), which ionizes samples using inductively coupled plasma and introduces smaller errors than TIMS.
Natural copper isotopic variations.
The field of Cu isotope biogeochemistry is still in a relatively early stage, so the Cu isotope compositions of materials in the environment are not well-documented. However, based on a compilation of measurements already made, it appears that Cu isotope ratios vary somewhat widely within and between environmental materials (e.g., plants, minerals, seawater, etc.), though as a whole, these ratios do not vary by more than ±10‰.
In humans.
In the human body, copper is an important constituent of many essential enzymes, including ceruloplasmin (which carries Cu and oxidizes Fe2+ in human plasma), cytochrome c oxidase, metallothionein and superoxide dismutase 1. Serum in human blood is typically 65Cu-depleted by ~0.8‰ relative to erythrocytes (i.e., red blood cells). In a study of 49 male and female blood donors, the average δ65Cu value of the donors' blood serum was -0.26 ± 0.40‰, while that of their erythrocytes was +0.56 ± 0.50‰. In a separate study, δ65Cu values of serum in 20 healthy patients ranged from -0.39 to +0.38‰, while the δ65Cu values of their erythrocytes ranged from +0.57 to +1.24‰. To balance Cu loss due to menstruation, a large portion of Cu in the blood of menstruating women comes from their liver. Due to fractionation associated with Cu transport from the liver to the blood, the total blood of pre-menopausal women is generally 65Cu-depleted relative to that of males and non-menstruating women. The δ65Cu values of healthy human liver tissue in 7 patients ranged from -0.45 to -0.11‰.
In the terrestrial environment.
To first order, δ65Cu values in organisms are driven by the δ65Cu values of source materials. The δ65Cu values of various soils from different regions have been found to vary from -0.34 to +0.33‰ depending on the biogeochemical processes taking place in the soil and the ligands with which Cu complexes. Organic-rich soils generally have lighter δ65Cu values than mineral soils because the organic layers result from plant litter, which is isotopically light.
In plants, δ65Cu values vary between the different components (seeds, roots, stem and leaves). The δ65Cu values of the roots of rice, lettuce, tomato and durum wheat plants were found to be 0.5 to 1.0‰ 65Cu-depleted relative to their source, while their shoots were up to 0.5‰ lighter than the roots. Seeds appear to be the most isotopically light component of plants, followed by leaves, then stems.
Rivers sampled throughout the world have a range of dissolved δ65Cu values from +0.02 to +1.45‰. The average δ65Cu values of the Amazon, Brahmaputra and Nile rivers are 0.69, 0.64 and 0.58‰, respectively. The average δ65Cu value of the Chang Jiang river is 1.32‰, while that of the Missouri river is 0.13‰.
In rocks and minerals.
In general, igneous, metamorphic and sedimentary processes do not appear to strongly fractionate Cu isotopes, while δ65Cu values of Cu minerals vary widely. The average Cu isotopic composition of bulk silicate Earth has been measured as 0.06 ± 0.20‰ based on 132 different terrestrial samples. MORBs and oceanic island basalts (OIBs) generally have homogeneous Cu isotopic compositions that fall around 0‰, while arc and continental basalts have more heterogeneous Cu isotope compositions that range from -0.19 to +0.47‰. These Cu isotope ratios of basalts suggest that mantle partial melting imparts negligible Cu isotopic fractionation, while recycling of crustal materials leads to widely variable δ65Cu values. The Cu isotope compositions of copper-containing minerals vary over a wide range, likely due to alteration of the primary high-temperature deposits. In one study that investigated Cu isotopic compositions of various minerals from hydrothermal fields along the mid-Atlantic ridge, chalcopyrite from mafic igneous rocks had δ65Cu values of -0.1 to -0.2‰, while Cu minerals in black smokers (chalcopyrite, bornite, covellite and atacamite) exhibited a wider range of δ65Cu values from -1.0 to +4.0‰. Additionally, atacamite lining the outer rims of black smokers can be up to 2.5‰ heavier than chalcopyrite contained within the black smoker. δ65Cu values of Cu minerals (including chrysocolla, azurite, malachite, cuprite and native copper) in low-temperature deposits have been observed to vary widely over a range of -3.0 to +5.6‰.
In the marine environment.
Cu is strongly cycled in the surface and deep ocean. In the deep ocean, Cu concentrations are ~5 nM in the Pacific and ~1.5 nM in the Atlantic. The deep/surface ratio of Cu in the ocean is typically <10, and vertical concentration profiles for Cu are roughly linear due to biological recycling and scavenging processes as well as adsorption to particles.
Due to equilibrium and biological processes that fractionate Cu isotopes in the marine environment, the bulk copper isotopic composition (δ65Cu = +0.6 to +1.5‰) is different from the δ65Cu values of the riverine input (δ65Cu = +0.02 to +1.45‰, with discharge-weighted average δ65Cu = +0.68‰) to the oceans. δ65Cu values of the surface layers of FeMn-nodules are fairly homogeneous throughout the oceans (average = 0.31‰), suggesting low biological demand for Cu in the marine environment compared to that of Fe or Zn. Additionally, δ65Cu values in the Atlantic ocean do not markedly vary with depth, ranging from +0.56 to +0.72‰. However, Cu isotope compositions of material collected on sediment traps at depths of 1,000 and 2,500 m in the central Atlantic ocean show seasonal variation, with the heaviest δ65Cu values in the spring and summer seasons, suggesting seasonal preferential uptake of 63Cu by biological processes.
Equilibrium processes that fractionate Cu isotopes include high temperature ion exchange and redox speciation between mineral phases, and low temperature ion exchange between aqueous species or redox speciation between inorganic species. In riverine and marine environments, 65Cu/63Cu ratios are driven by preferential adsorption of 63Cu to particulate matter and preferential binding of 65Cu to organic complexes. As a net result, ocean sediments tend to be depleted in 63Cu relative to the bulk ocean. For example, the downcore δ65Cu values of a 760 cm sedimentary core taken from the Central Pacific ocean varied from -0.94 to -2.83‰, significantly lighter than the bulk ocean.
Applications of copper isotopes.
Medicine.
Due to its relatively short turnover time of ~6 weeks in the human body, Cu serves as an important indicator of cancer and other diseases that rapidly evolve. The serum of cancer patients contains significantly higher levels of Cu than that of healthy patients due to copper chelation by lactate, which is produced via anaerobic glycolysis by tumor cells. These imbalances in Cu homeostasis are reflected isotopically in the serum and organ tissues of patients with various types of cancer, where the serum of cancer patients is generally 65Cu-depleted relative to the serum of healthy patients, while organ tumors are generally 65Cu-enriched. In one study, the blood components of patients with hepatocellular carcinomas (HCC) were found to be, on average, depleted in 65Cu by 0.4‰ relative to the blood of non-cancer patients. In particular, the δ65Cu values of the serum in patients with HCC ranged from -0.66 to +0.47‰ (compared to serum δ65Cu values of -0.39 to +0.38‰ in matched control patients), and the δ65Cu values of the erythrocytes in the HCC patients ranged from -0.07 to +0.92‰ (compared to erythrocyte δ65Cu values of +0.57 to +1.24‰ in matched control patients). The liver tumor tissues in the HCC patients were 65Cu-enriched relative to healthy liver tissue in the same patients (δ65Culiver, HCC = -0.02 to +0.43‰; δ65Culiver, healthy = -0.45 to -0.11‰), and the magnitude of 65Cu-enrichment mirrored that of the 65Cu-depletion observed in the cancer patients' serum. Though our understanding of how copper isotopes are fractionated during cancer physiologies is still limited, it is clear that copper isotope ratios may serve as a powerful biomarker of cancer presence and progression.
Zinc.
Stable isotopes and natural abundances.
Zinc has five stable isotopes, tabulated along with their natural abundances below:
The isotopic composition of Zn is reported in delta notation (in ‰):
formula_6
where "xZn" is a Zn isotope other than 64Zn (commonly either 66Zn or 68Zn). Standard reference materials used for Zn isotope measurements are JMC 3-0749C, NIST-SRM 683 or NIST-SRM 682.
Chemistry.
Because it has just one valence state (Zn2+), zinc is a redox-inert element. The electronic configurations of Zn0 and Zn2+ are shown below:
Biology.
Zinc is present in almost 3,000 human proteins, and thus is essential for nearly all cellular functions. Zn is also a key constituent of enzymes involved in cell regulation. Consistent with its ubiquitous presence, total cellular Zn concentrations are typically very high (~200 μM), while the concentrations of free Zn ions in the cytoplasms of cells can be as low as a few hundred picomolar, maintained within a narrow range to avoid deficiency and toxicity. One feature of Zn that makes it so critical in cellular biology is its flexibility in coordination to different numbers and types of ligands. Zn can coordinate with anywhere between three and six N-, O- and S-containing ligands (such as histidine, glutamic acid, aspartic acid and cysteine), resulting in a large number of possible coordination chemistries. Zn tends to bind to metal sites of proteins with relatively high affinities compared to other metal ions which, aside from its important functions in enzymatic reactions, partly explains its ubiquitous presence in cellular enzymes.
Examples of zinc-based enzymes.
Zn is present in the active sites of most hydrolytic enzymes, and is used as an electrophilic catalyst to activate water molecules that ultimately hydrolyze chemical bonds. Examples of zinc-based enzymes include superoxide dismutase (SOD), metallothionein, carbonic anhydrase, Zn finger proteins, alcohol dehydrogenase and carboxypeptidase.
Biological fractionation.
Relatively little is known about isotopic fractionation of zinc by biological processes, but several studies have elucidated that Zn isotopes fractionate during surface adsorption, intracellular uptake processes and speciation. Many organisms, including certain species of fish, plants and marine phytoplankton, have both high- and low-affinity Zn transport systems, which appear to fractionate Zn isotopes differently. A study by John et al. observed apparent isotope effects associated with Zn uptake by the marine diatom "Thalassiosira oceanica" of -0.2‰ for high-affinity uptake (at low Zn concentrations) and -0.8‰ for low-affinity uptake (at high Zn concentrations). Additionally, in this study, unwashed cells were enriched in 66Zn, indicating preferential adsorption of 66Zn to the extracellular surfaces of "T. oceanica". Results from John et al. demonstrating apparent discrimination against the heavy isotope (66Zn) during uptake conflict with results by Gélabert et al. in which marine phytoplankton and freshwater periphytic organisms preferentially took up 66Zn from solution. The latter authors explained these results as due to a preferential partitioning of 66Zn into a tetrahedrally coordinated structure (i.e., with carboxylate, amine or silanol groups on or inside the cell) over an octahedral coordination with six water molecules in the aqueous phase, consistent with quantum mechanical predictions. Kafantaris and Borrok grew model organisms "B. subtilis", "P. mendocina" and "E. coli," as well as a natural bacterial consortium collected from soil, on high and low concentrations of Zn. In the high [Zn] condition, the average fractionation of Zn isotopes imparted by cellular surface adsorption was +0.46‰ (i.e., 66Zn was preferentially adsorbed), while fractionation upon intracellular incorporation varied from -0.2 to +0.5‰ depending on the bacterial species and growth phase. Empirical models of the low [Zn] condition estimated larger Zn isotope fractionation factors for surface adsorption ranging from +2 to +3‰. Overall, Zn isotope ratios in microbes appear to be driven by a number of complex factors including surface interactions, bacterial metal metabolism and metal speciation, but by understanding the relative contributions of these factors to Zn isotope signals, one can use Zn isotopes to investigate metal-binding pathways operating in natural communities of microbes.
Geochemistry.
The concentration of Zn in bulk silicate Earth is ~55 ppm, while its average concentration in fresh mid-oceanic ridge basalt (MORB) glass is ~87 ppm. Like Cu, Zn commonly associates with Fe to form a variety of zinc sulfide minerals such as sphalerite. Additionally, Zn associates with carbonates and hydroxides to form numerous diverse minerals (e.g., smithsonite, sweetite, etc.). In mafic and ultramafic rocks, Zn tends to concentrate in oxides such as spinel and magnetite. In freshwater, Zn predominantly complexes with water to form an octahedrally coordinated aqua ion. In seawater, Cl− ions replace up to four water molecules in the Zn aqua ion, forming a series of zinc chloro complexes.
Measurement.
The analytical pipeline for preparation of sample material for Zn isotope measurements is similar to that of Cu, consisting of digestion of host material or concentration from seawater, isolation and purification via anion-exchange chromatography, removal of ions of interfering mass (in particular, 64Ni) and isotope measurement via MC-ICP-MS (see Copper Isotope Measurement section for more details).
Natural zinc isotopic variations.
As with Cu, the field of Zn isotope biogeochemistry is still in a relatively early stage, so the Zn isotope compositions of materials in the environment are not well-documented. However, based on a compilation of some reported measurements, it appears that Zn isotope ratios do not vary widely among environmental materials (e.g., plants, minerals, seawater, etc.), as δ66Zn values of materials typically fall within a range of -1 to +1‰.
In humans.
Zn isotope ratios vary between individual blood components, bones and the different organs in humans, though in general, δ66Zn values fall within a narrow range. In the blood of healthy individuals, the Zn isotopic composition of erythrocytes is typically ~0.3‰ lighter than that of serum, and no significant differences in erythrocyte or serum δ66Zn values exist between men and women. For example, in the blood of 49 healthy blood donors, the average erythrocyte δ66Zn value was +0.44 ± 0.33‰, while that of serum was +0.17 ± 0.26‰. In a separate study on 29 donors, a similar average δ66Zn value of +0.29 ± 0.27‰ was obtained for the patients' serum. Additionally, in a small sample set of volunteers, whole blood δ66Zn values were ~+0.15‰ higher for vegetarians than for omnivores, suggesting diet plays an important role in driving Zn isotope compositions in the human body.
In the terrestrial environment.
Zn isotope ratios vary on small scales throughout the terrestrial biosphere. Zn is released into soils during mineral weathering, and isotopes of Zn fractionate upon interaction with mineral and organic components in the soil. In 5 soil profiles collected from Iceland (all derived from the same parent basalt), soil δ66Zn values varied from +0.10 to +0.35‰, and the organic-rich layers were 66Zn-depleted relative to the mineral-rich layers, likely due to contribution by isotopically light organic matter and Zn loss by leaching.
Isotopic discrimination of Zn varies in different components of higher plants, likely due to the various processes involved in Zn uptake, binding, transport, diffusion, speciation and compartmentalization. For example, Weiss et al. observed heavier δ66Zn values in the roots of several plants (rice, lettuce and tomato) relative to the bulk solution in which the plants were grown, and the shoots of those plants were 66Zn-depleted relative to both their roots and bulk solution. Furthermore, Zn isotopes partition differently between different Zn-ligand complexes, so the form of Zn incorporated by organisms in the terrestrial biosphere plays a role in driving Zn isotope compositions of the organisms. In particular, based on "ab initio" calculations, Zn-phosphate complexes are expected to be isotopically heavier than Zn-citrates, Zn-malates and Zn-histidine complexes by 0.6 to 1‰.
The discharge- and [Zn]-weighted average δ66Zn value of rivers throughout the world is +0.33‰. In particular, the average δ66Zn values of the Kalix and Chang Jiang rivers are +0.64 and +0.56‰, respectively. The Amazon, Missouri and Brahmaputra rivers have average δ66Zn values near +0.30‰, and the average δ66Zn value of the Nile river is +0.21‰.
In rocks and minerals.
In general, δ66Zn values of various rocks and minerals do not appear to significantly vary. The δ66Zn value of bulk silicate Earth (BSE) is +0.28 ± 0.05‰. Fractionation of Zn isotopes by igneous processes is generally insignificant, and δ66Zn values of basalt fall within the range of +0.2 to +0.3‰, encompassing the value for BSE. δ66Zn values of clay minerals from diverse environments and of diverse ages have been found to fall within the same range as basalts, suggesting negligible fractionation between the basaltic precursors and sedimentary materials. Carbonates appear to be more 66Zn-enriched than other sedimentary and igneous rocks. For example, the δ66Zn value of a limestone core taken from the Central Pacific was +0.6‰ at the surface and increased to +1.2‰ with depth. The Zn isotopic compositions of various ores are not well-characterized, but smithsonites and sphalerites (Zn carbonates and Zn sulfides, respectively) collected from various localities in Europe had δ66Zn values ranging from -0.06 to +0.69‰, with smithsonite potentially slightly heavier by 0.3‰ than sphalerite.
In the marine environment.
Zn is an essential biological nutrient in the oceans, and its concentration is largely controlled by uptake by phytoplankton and remineralization. In addition to its critical role in many metalloenzymes (see Zinc Biology section), Zn is an important component of the carbonate shells of foraminifera and siliceous frustules in diatoms. The main inputs of Zn to the ocean are thought to be from rivers and dust. In some photic zones in the ocean, Zn is a limiting nutrient for phytoplankton, and thus its concentration in surface waters serves as one control on marine primary productivity. Zn concentrations are extremely low in the surface ocean (<0.1 nM) but are maximal at depth (~2 nM in the deep Atlantic; ~10 nM in the deep Pacific), indicating a deep regeneration cycle. The deep/surface ratio of Zn is typically on the order of 100, significantly larger than that observed for Cu.
A multitude of complex processes fractionate Zn isotopes in the marine environment. As seen with copper isotopes, the bulk isotopic composition of zinc in the oceans (δ66Zn = +0.5‰) is heavier than that of the riverine input (δ66Zn = +0.3‰), reflecting equilibrium, biological, and other processes that affect Zn isotope ratios in the ocean. In the surface ocean, phytoplankton preferentially take up 64Zn, and as a result have average δ66Zn values of ~+0.16‰ (i.e., 0.34‰ lighter than the bulk ocean). This preferential removal of 64Zn by photosynthetic marine organisms in the photic zone is most prominent in the spring and summer seasons when primary productivity is highest, and the seasonal variability of Zn isotope ratios is reflected in the δ66Zn values of settling materials, which are heavier (e.g., by ~+0.20‰ in the Atlantic Ocean) during spring and summer than during the colder seasons. Additionally, the surface layers of FeMn-nodules are 66Zn-enriched at high latitudes (average δ66Zn = +1‰), while δ66Zn values of low-latitude samples are smaller and more variable (spanning +0.5 to +1‰). This observation has been interpreted as due to high levels of Zn consumption and preferential uptake of 64Zn above the seasonal thermocline at high latitudes during warmer seasons, and transfer of this heavy δ66Zn signal to the settling sedimentary Fe-Mn hydroxides.
Sources and sinks for Zn isotopes are further highlighted in the vertical profile of 66Zn/64Zn in the water column. In the upper 2,000 m of the Atlantic Ocean, δ66Zn values are highly variable near the surface (δ66Zn = +0.05 to +0.33‰) due to biological uptake and other surface processes, then gradually increase to ~+0.50‰ at 2,000 m depth. Potential sinks for light Zn isotopes, which enrich the residual bulk Zn isotope ratios in the ocean, include binding to and burial with sinking particulate matter, as well as Zn sulfide precipitation in buried sediments. As a result of preferential burial of 64Zn over the heavier Zn isotopes, sediments in the ocean are generally isotopically lighter than that of bulk seawater. For example, δ66Zn values in 8 sedimentary cores from three different continental margins were depleted in 66Zn relative to the bulk ocean (δ66Zncores = -0.15 to +0.2‰), and furthermore the vertical profiles of δ66Zn values in the cores showed no downcore isotopic variability, suggesting diagenesis does not significantly fractionate Zn isotopes.
Applications of zinc isotopes.
Medicine.
Zn isotopes may be useful as a tracer for breast cancer. Relative to non-cancerous patients, breast cancer patients are known to have significantly higher concentrations of Zn in their breast tissue, but lower concentrations in their blood serum and erythrocytes, due to overexpression of Zn transporters in breast cancer cells. Consistent with these body-wide shifts in Zn homeostasis, δ66Zn values in breast cancer tumors of 5 patients were found to be anomalously light (varying from -0.9 to -0.6‰) relative to healthy tissue in 3 breast cancer patients and 1 healthy control (δ66Zn = -0.5 to -0.3‰). In this study, δ66Zn values of blood and serum were not found to be significantly different between cancerous and non-cancerous patients, suggesting an unknown isotopically heavy pool of Zn must exist in cancer patients. Though results from this study are promising regarding the use of Zn isotope ratios as a biomarker for breast cancer, a mechanistic understanding of how Zn isotopes fractionate during tumor formation in breast cancer is still lacking. Fortunately, increasing attention is being devoted to the use of stable metal isotopes as tracers of cancer and other diseases, and the usefulness of these isotope systems in medical applications will become more apparent in the next few decades.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\delta^{56/54}Fe = \\frac{(^{56}Fe/^{54}Fe)_{sample}}{(^{56}Fe/^{54}Fe)_{IRMM014}} -1"
},
{
"math_id": 1,
"text": "\\delta^{57}Fe =1.5 \\times \\delta^{56}Fe"
},
{
"math_id": 2,
"text": "\\delta^{58}Fe=2 \\times \\delta^{56}Fe"
},
{
"math_id": 3,
"text": "10^3\\times\\ln{\\alpha_{Fe(III)-Fe(II)}}=\\frac{0.334\\pm0.032\\times10^6}{T^2}-0.66\\pm0.38"
},
{
"math_id": 4,
"text": "\\frac{D_1}{D_2}=\\left ( \\frac{m_1}{m_2} \\right )^\\beta"
},
{
"math_id": 5,
"text": "\\delta^{65}Cu = \\left [ \\frac{(^{65}Cu/^{63}Cu)_{sample}}{(^{65}Cu/^{63}Cu)_{NIST976}} \\right ]"
},
{
"math_id": 6,
"text": "\\delta^{x}Zn = \\left [ \\frac{(^{x}Zn/^{64}Zn)_{sample}}{(^{x}Zn/^{64}Zn)_{std}} \\right ]"
}
] |
https://en.wikipedia.org/wiki?curid=63576316
|
63579202
|
Ring lemma
|
In the geometry of circle packings in the Euclidean plane, the ring lemma gives a lower bound on the sizes of adjacent circles in a circle packing.
Statement.
The lemma states: Let formula_0 be any integer greater than or equal to three. Suppose that the unit circle is surrounded by a ring of formula_0 interior-disjoint circles, all tangent to it, with consecutive circles in the ring tangent to each other. Then the minimum radius of any circle in the ring is at least the unit fraction
formula_1
where formula_2 is the formula_3th Fibonacci number.
The sequence of minimum radii, from formula_4, begins
<templatestyles src="Block indent/styles.css"/>formula_5 (sequence in the OEIS)
Generalizations to three-dimensional space are also known.
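The bound can be evaluated directly from the Fibonacci numbers. The following short Python sketch (an illustration, not taken from the cited sources, and assuming the indexing convention F1 = F2 = 1) reproduces the sequence of minimum radii listed above:

```python
def fibonacci(m):
    """Return the m-th Fibonacci number with F(1) = F(2) = 1."""
    a, b = 1, 1
    for _ in range(m - 1):
        a, b = b, a + b
    return a

def ring_lemma_bound(n):
    """Minimum radius of a circle in a ring of n circles around the unit circle."""
    return 1 / (fibonacci(2 * n - 3) - 1)

for n in range(3, 9):
    print(n, ring_lemma_bound(n))
# denominators: 1, 4, 12, 33, 88, 232, ...
```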
Construction.
An infinite sequence of circles can be constructed, containing rings for each formula_0 that exactly meet the bound of the ring lemma, showing that it is tight. The construction allows halfplanes to be considered as degenerate circles with infinite radius, and includes additional tangencies between the circles beyond those required in the statement of the lemma. It begins by sandwiching the unit circle between two parallel halfplanes; in the geometry of circles, these are considered to be tangent to each other at the point at infinity. Each successive circle after these first two is tangent to the central unit circle and to the two most recently added circles; see the illustration for the first six circles (including the two halfplanes) constructed in this way. The first formula_0 circles of this construction form a ring, whose minimum radius can be calculated by Descartes' theorem to be the same as the radius specified in the ring lemma. This construction can be perturbed to a ring of formula_0 finite circles, without additional tangencies, whose minimum radius is arbitrarily close to this bound.
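The radii in this construction can also be followed numerically with Descartes' theorem, treating the two halfplanes as tangent circles of curvature zero and always taking the root corresponding to the smaller new circle. The sketch below is an illustration under that reading of the construction, not code from the cited sources; it reproduces the curvatures (reciprocal radii) 1, 4, 12, 33, 88, 232, … of the successive circles.

```python
import math

def next_curvature(k1, k2, k3):
    """Descartes' circle theorem: curvature of a new circle tangent to three
    mutually tangent circles with curvatures k1, k2, k3 (smaller-circle root)."""
    return k1 + k2 + k3 + 2 * math.sqrt(k1 * k2 + k2 * k3 + k3 * k1)

central = 1.0          # the central unit circle, curvature 1
prev, curr = 0.0, 0.0  # the two halfplanes, treated as circles of curvature 0
for _ in range(6):
    new = next_curvature(central, prev, curr)
    print(new)         # 1, 4, 12, 33, 88, 232
    prev, curr = curr, new
```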
History.
A version of the ring lemma with a weaker bound was first proven by Burton Rodin and Dennis Sullivan as part of their proof of William Thurston's conjecture that circle packings can be used to approximate conformal maps. Lowell Hansen gave a recurrence relation for the tightest possible lower bound, and Dov Aharonov found a closed-form expression for the same bound.
Applications.
Beyond its original application to conformal mapping, the circle packing theorem and the ring lemma play key roles in a proof by Keszegh, Pach, and Pálvölgyi that planar graphs of bounded degree can be drawn with bounded slope number.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "\\frac{1}{F_{2n-3}-1}"
},
{
"math_id": 2,
"text": "F_i"
},
{
"math_id": 3,
"text": "i"
},
{
"math_id": 4,
"text": "n=3"
},
{
"math_id": 5,
"text": "\\displaystyle 1, \\frac{1}{4}, \\frac{1}{12}, \\frac{1}{33}, \\frac{1}{88}, \\frac{1}{232}, \\dots"
}
] |
https://en.wikipedia.org/wiki?curid=63579202
|
63580385
|
Campbell paradigm
|
Behavioral theory in social psychology
The Campbell paradigm is a behavioral theory from social psychology. The paradigm was developed by social psychologist Florian G. Kaiser and his colleagues, Katarzyna Byrka and Terry Hartig, in 2010, building on an earlier suggestion by Donald T. Campbell, after whom the paradigm is named. It offers an explanation for why and when individuals engage in particular behaviors. It is mainly (but not exclusive) applied to behaviors that are aimed at fighting climate change and protecting the environment.
Overview.
The Campbell paradigm suggests that behavior (e.g., switching off lights when leaving a room) is typically the result of two factors: a person's commitment to fighting climate change and protecting the environment (i.e., a person's environmental attitude) and the costs that come with a specific behavior (e.g., having to remember to switch off the lights; see Fig. 1). The paradigm stands in contrast to the widespread rational choice theories, whose prototype is the theory of planned behavior in psychology. Rational choice theories explain behavior through a behavior's expected utility.
The Campbell paradigm is based on the controversial assumption that attitude and behavior are genuinely consistent. Accordingly, behavior arises spontaneously as a manifestation of a person's attitude (quite analogous to the tripartite model of attitude by Rosenberg and Hovland). In contrast to Campbell's deterministic model (in which he aimed to explain engagement), Kaiser and colleagues lowered their aspiration to explaining only the probability of engagement. Thus, they adopted the Rasch model as a less rigid depiction of the paradigm (see the formula and its explanation).
formula_0
The Rasch model describes the natural logarithm of the ratio of the probability (formula_1) that person "k" will switch off the lights (the specific behavior "i") to the complementary probability (formula_2) that person "k" will "not" switch off the lights as a function of person "k"'s attitude (formula_3: e.g., his or her environmental attitude) minus all the financial and figurative costs that come with switching off lights (formula_4: e.g., needing to remember to switch off the lights when one leaves a room). This means more or less that "k"'s general attitude (formula_3) along with "i"'s specific costs (formula_4) determine the probability (formula_1) that behavior "i" will become manifest should the opportunity arise. Only if a person's attitude exceeds the costs of a behavior will the behavior have a reasonable chance of manifesting (see Fig. 1). This account of why and when behavior occurs also serves as the theoretical basis for the measurement of individual attitudes.
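Rearranging the Rasch model above gives the engagement probability directly as a logistic function of the attitude–cost difference. The following short Python sketch is an illustration only; the attitude and cost values are made-up numbers on the Rasch logit scale, not estimates from the cited studies.

```python
import math

def engagement_probability(theta, delta):
    """Rasch model: p = 1 / (1 + exp(-(theta - delta))), where theta is the
    person's attitude and delta is the behavioral cost of the specific act."""
    return 1.0 / (1.0 + math.exp(-(theta - delta)))

# Made-up illustrative values:
for theta in (-1.0, 0.0, 1.0, 2.0):               # weak to strong environmental attitude
    p = engagement_probability(theta, delta=0.5)  # a moderately costly behavior
    print(theta, round(p, 2))
# The behavior only becomes likely (p > 0.5) once attitude exceeds the cost.
```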
Attitude measurement.
Within the Campbell paradigm, a person's attitude is derived from the behavioral costs that this person will incur to achieve the goal that is implied by the attitude. For example, the goal implied by environmental attitude is to protect the environment, whereas the goal implied by a health-focused attitude is to maintain or restore health.
Behavioral costs include everything that makes behavior objectively more or less demanding: things such as effort, time, and financial costs, but also social norms and expectations, cultural practices, and the antagonistic social preferences that go hand in hand with certain behaviors. To illustrate: Someone with a pronounced preference for music by Taylor Swift (i.e., a person with a strong, positive attitude toward Taylor Swift's music) will generally put forth considerable effort and spend large amounts of money to attend a concert by her. By contrast, people with less of a commitment to Taylor Swift's music will attend a concert only if given a ticket as a gift. And those who do not like Taylor Swift at all will change the station when a song by her comes on the radio.
On the one hand, this example shows that people can engage in different behaviors to express a more or less strong commitment to/preference for Taylor Swift's music (e.g., attend a concert, listen to a song when it is played on the radio). On the other hand, the example also makes clear that whatever a person does to listen to Taylor Swift is accompanied by costs; these costs are again unique to a specific behavior. Consequently, the costs that someone bears and, thus, the behaviors that someone will engage in to attain the attitudinal goal, can be used to determine people's attitude levels. So far, several attitude scales have been developed on the basis of the Campbell paradigm: environmental attitude, attitude toward nature, (negative) attitude toward anthropogenic climate change, health-focused attitude, attitude toward social contacts or privacy in the office, attitude toward one's own mental vigor, and attitude toward social expectations (i.e., people's conformity).
Behavioral explanation.
In social psychology, attitudes have traditionally reflected people's personal reasons and, thus, their personal behavioral propensities. Analogously, what later became a measure of environmental attitude was initially introduced as a measure of people's propensity to protect the environment. This classical view of attitude as a personal reason is of course ultimately justified only when one is able to reliably and consistently anticipate manifest behavior with an attitude measure, that is, if the notorious attitude-behavior gap does not really exist.
The Campbell paradigm's explanation of behavior is extremely parsimonious as can be concluded from the Rasch model. The likelihood of engaging in a behavior is a function of two countervailing factors: a person's attitude and the sociocultural boundary conditions in which the behavior takes place (see Fig. 1). These objective conditions ultimately determine the specific costs of a behavior. Accordingly, a vegetarian lunch is the result of not only a person's particular level of environmental attitude but also of the sociocultural boundary conditions in which the person's lunch is chosen; for example, the promise of a financial reward makes vegetarian lunches more attractive. The question that remains is “for whom?”
The literature contains a considerable number of (sometimes contradictory) conjunctive behavioral explanations that speak of the cost-moderated efficacy of people's attitudes (see Fig. 2a). By contrast, the Campbell paradigm suggests that behavioral costs are unrestrictedly behaviorally effective and independent of people's attitude levels (see Fig. 2b). In other words, financial rewards make vegetarian lunches more probable for everyone. This countervailing relationship between behavioral costs and attitude has been repeatedly quasi-experimentally confirmed in environmental protection research.
Apparent circularity.
If a person's attitude is derived from the behaviors that the person engages in, we cannot really be surprised to subsequently find that the very same behaviors are explained by this attitude. In other words, what is the point of predicting that Peter will donate money to Greenpeace after we have already seen him donate money to Greenpeace? This apparent circularity is why, for many including Campbell himself, explaining behavior on the basis of the Campbell paradigm seems trivial and thus pointless. However, Kaiser and colleagues have argued that any form of circularity can be comparatively easily avoided.
When individual differences in people's attitude (e.g., in environmental attitude) are derived from verbal behaviors expressed in questionnaires (i.e., opinions, e.g., "protecting the environment is important"; appraisals, e.g., "I regret not doing more to combat climate change"; and claims of engaging in a behavior, e.g., "I recycle paper"), it is by no means trivial to use the correspondingly derived attitudinal differences to predict whether people will actually eat vegetarian lunches. Circularity can thus be avoided if the indicators (i.e., the manifestations used to derive individual levels of an attitude) and the consequences of the attitude (e.g., its manifest effects, the criteria to be explained) are logically and practically distinct.
In order to measure individual differences in a certain attitude, one can therefore use verbal behaviors, such as retrospective self-reports of behavior, stated intentions, appraisals, and opinions. This can be done with questionnaires. As consequences of people's attitude, one can then employ real behavior (e.g., the manifest choice of a vegetarian lunch) or objectively measurable traces of behavior (e.g., the amount of electricity a person consumes annually).
|
[
{
"math_id": 0,
"text": "\\ln\\left ( \\frac{p_{ki}}{1-p_{ki}} \\right )=\\theta_k-\\delta_i"
},
{
"math_id": 1,
"text": "p_{ki}"
},
{
"math_id": 2,
"text": "1-p_{ki}"
},
{
"math_id": 3,
"text": "\\theta_k"
},
{
"math_id": 4,
"text": "\\delta_i"
}
] |
https://en.wikipedia.org/wiki?curid=63580385
|
63584355
|
𝔹
|
formula_0 is the blackboard bold letter B. It can refer to:
<templatestyles src="Dmbox/styles.css" />
Topics referred to by the same term. This page lists mathematics articles associated with the same title.
|
[
{
"math_id": 0,
"text": "\\mathbb{B}"
},
{
"math_id": 1,
"text": "\\mathbb{B}^n"
}
] |
https://en.wikipedia.org/wiki?curid=63584355
|
63584718
|
Richards' theorem
|
Theorem in mathematics
Richards' theorem is a mathematical result due to Paul I. Richards in 1947. The theorem states that, for
formula_0
if formula_1 is a positive-real function (PRF) then formula_2 is a PRF for all real, positive values of formula_3.
The theorem has applications in electrical network synthesis. The PRF property of an impedance function determines whether or not a passive network can be realised having that impedance. Richards' theorem led to a new method of realising such networks in the 1940s.
Proof.
The expression
formula_4
where formula_1 is a PRF, formula_3 is a positive real constant, and formula_5 is the complex frequency variable, can be written as,
formula_6
where,
formula_7
Since formula_1 is a PRF,
formula_8
is also a PRF. The zeroes of this function are the poles of formula_9. Since a PRF can have no zeroes in the right-half "s"-plane, formula_9 can have no poles in the right-half "s"-plane and hence is analytic in the right-half "s"-plane.
Let
formula_10
Then the magnitude of formula_11 is given by,
formula_12
Since the PRF condition requires that formula_13 for all formula_14, it follows that formula_15 for all formula_14. Because formula_9 is analytic in the right-half "s"-plane, its maximum magnitude occurs on the formula_16 axis (by the maximum modulus principle). Thus formula_17 for formula_18.
Let formula_19, then the real part of formula_2 is given by,
formula_20
Because formula_21 for formula_18, it follows that formula_22 for formula_18, and consequently formula_2 must be a PRF.
Richards' theorem can also be derived from Schwarz's lemma.
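The transformation can also be illustrated numerically. The following Python sketch (an illustration, not from the cited sources) takes the example PRF Z(s) = s + 1 with k = 1, for which Richards' transformation reduces algebraically to R(s) = 1/(s + 2), and checks that the real part of R(s) is non-negative at sample points in the right half-plane:

```python
def Z(s):
    """Example positive-real function: a 1-ohm resistor in series with a
    1-henry inductor, Z(s) = s + 1."""
    return s + 1

def richards(Z, s, k):
    """Richards' transformation R(s) = (k*Z(s) - s*Z(k)) / (k*Z(k) - s*Z(s))."""
    return (k * Z(s) - s * Z(k)) / (k * Z(k) - s * Z(s))

k = 1.0
# Sample points in the closed right half-plane (sigma >= 0)
points = [complex(sigma, omega) for sigma in (0.0, 0.5, 2.0)
          for omega in (-3.0, 0.0, 1.0, 4.0)]
for s in points:
    r = richards(Z, s, k)
    assert abs(r - 1 / (s + 2)) < 1e-9   # matches the algebraic reduction
    assert r.real >= 0                   # the PRF condition on the real part
print("Re R(s) >= 0 at all sampled right-half-plane points")
```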
Uses.
The theorem was introduced by Paul I. Richards as part of his investigation into the properties of PRFs. The term "PRF" was coined by Otto Brune who proved that the PRF property was a necessary and sufficient condition for a function to be realisable as a passive electrical network, an important result in network synthesis. Richards gave the theorem in his 1947 paper in the reduced form,
formula_23
that is, the special case where formula_24
The theorem (with the more general case of formula_3 being able to take on any real, positive value) formed the basis of the network synthesis technique presented by Raoul Bott and Richard Duffin in 1949. In the Bott-Duffin synthesis, formula_1 represents the electrical network to be synthesised and formula_2 is another (unknown) network incorporated within it (formula_2 is unitless, but formula_25 has units of impedance and formula_26 has units of admittance). Making formula_1 the subject gives
formula_27
Since formula_28 is merely a positive real number, formula_1 can be synthesised as a new network proportional to formula_2 in parallel with a capacitor, all in series with a network proportional to the inverse of formula_2 in parallel with an inductor. By a suitable choice for the value of formula_3, a resonant circuit can be extracted from formula_2 leaving a function formula_29 two degrees lower than formula_1. The whole process can then be applied iteratively to formula_29 until the degree of the function is reduced to something that can be realised directly.
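The decomposition above can be checked symbolically. The following sketch (an illustration under stated assumptions, using SymPy and the same example PRF Z(s) = s + 1 rather than any network from the cited papers) verifies that the right-hand side of the Bott-Duffin expression reproduces Z(s) for symbolic positive k:

```python
import sympy as sp

s, k = sp.symbols('s k', positive=True)

def Z(x):
    return x + 1                 # example PRF: Z(s) = s + 1

R = (k * Z(s) - s * Z(k)) / (k * Z(k) - s * Z(s))   # Richards' transformation

# Bott-Duffin decomposition: two two-element branches in series (see the formula above)
branch1 = 1 / (R / Z(k) + k / (s * Z(k)))
branch2 = 1 / (1 / (Z(k) * R) + s / (k * Z(k)))

assert sp.simplify(branch1 + branch2 - Z(s)) == 0
print("Bott-Duffin decomposition reproduces Z(s) = s + 1 for any positive k")
```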
The advantage of the Bott-Duffin synthesis is that, unlike other methods, it is able to synthesise any PRF. Other methods have limitations such as only being able to deal with two kinds of element in any single network. Its major disadvantage is that it does not result in the minimal number of elements in a network. The number of elements grows exponentially with each iteration. After the first iteration there are two formula_30 and associated elements, after the second, there are four formula_31 and so on.
Hubbard notes that Bott and Duffin appeared not to know the relationship of Richards' theorem to Schwarz's lemma, and offers the relationship as his own discovery, but it was certainly known to Richards, who used it in his own proof of the theorem.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "R(s) = \\frac {kZ(s)-sZ(k)}{kZ(k)-sZ(s)} "
},
{
"math_id": 1,
"text": "Z(s)"
},
{
"math_id": 2,
"text": "R(s)"
},
{
"math_id": 3,
"text": "k"
},
{
"math_id": 4,
"text": " R(s) = \\frac {kZ(s)-sZ(k)}{kZ(k)-sZ(s)} "
},
{
"math_id": 5,
"text": "s= \\sigma + i \\omega"
},
{
"math_id": 6,
"text": " R(s) = \\dfrac {1-W(s)}{1+W(s)} "
},
{
"math_id": 7,
"text": " W(s) = {1 - \\dfrac {Z(s)}{Z(k)} \\over 1 + \\dfrac {Z(s)}{Z(k)}} \\left ( \\frac {k+s}{k-s} \\right ) "
},
{
"math_id": 8,
"text": " 1 + \\dfrac {Z(s)}{Z(k)} "
},
{
"math_id": 9,
"text": "W(s)"
},
{
"math_id": 10,
"text": " Z(i \\omega) = r (\\omega) + ix(\\omega)"
},
{
"math_id": 11,
"text": "W(i \\omega)"
},
{
"math_id": 12,
"text": " \\left | W(i \\omega) \\right | = \\sqrt { \\dfrac { (Z(k) - r(\\omega))^2 +x(\\omega)^2 }{ (Z(k) + r(\\omega))^2 +x(\\omega)^2 }}"
},
{
"math_id": 13,
"text": "r(\\omega) \\ge 0"
},
{
"math_id": 14,
"text": "\\omega"
},
{
"math_id": 15,
"text": "\\left | W(i \\omega) \\right | \\le 1"
},
{
"math_id": 16,
"text": "i \\omega"
},
{
"math_id": 17,
"text": "|W(s)| \\le 1"
},
{
"math_id": 18,
"text": "\\sigma \\ge 0"
},
{
"math_id": 19,
"text": " W(s) = u( \\sigma, \\omega) + iv( \\sigma, \\omega)"
},
{
"math_id": 20,
"text": " \\Re (R(s)) = \\dfrac {1 - |W(s)|^2}{ (1 + u( \\sigma, \\omega))^2 + v^2(\\sigma, \\omega)} "
},
{
"math_id": 21,
"text": "W(s) \\le 1"
},
{
"math_id": 22,
"text": "\\Re (R(s)) \\ge 0"
},
{
"math_id": 23,
"text": "R(s) = \\frac {Z(s)-sZ(1)}{Z(1)-sZ(s)} "
},
{
"math_id": 24,
"text": "k=1"
},
{
"math_id": 25,
"text": "R(s)Z(k)"
},
{
"math_id": 26,
"text": "R(s)/Z(k)"
},
{
"math_id": 27,
"text": " Z(s) = \\left ( \\frac {R(s)}{Z(k)} + \\frac {k}{sZ(k)} \\right )^{-1} + \\left ( \\frac {1}{Z(k) R(s)} + \\frac {s}{k Z(k)} \\right )^{-1}"
},
{
"math_id": 28,
"text": "Z(k)"
},
{
"math_id": 29,
"text": "Z'(s)"
},
{
"math_id": 30,
"text": "Z'"
},
{
"math_id": 31,
"text": "Z''"
}
] |
https://en.wikipedia.org/wiki?curid=63584718
|
63584864
|
Copulas in signal processing
|
A copula is a mathematical function that provides a relationship between the marginal distributions of random variables and their joint distribution. Copulas are important because they represent a dependence structure without using marginal distributions. Copulas have been widely used in the field of finance, but their use in signal processing is relatively new. Copulas have been employed in the field of wireless communication for classifying radar signals, change detection in remote sensing applications, and EEG signal processing in medicine. In this article, a short introduction to copulas is presented, followed by a mathematical derivation to obtain copula density functions, and then a section with a list of copula density functions with applications in signal processing.
Introduction.
Using Sklar's theorem, a copula can be described as a cumulative distribution function (CDF) on the unit space with uniform marginal distributions on the interval (0, 1). The CDF of a random variable "X", evaluated at "x", is the probability that "X" will take a value less than or equal to "x". A copula can represent a dependence structure without using marginal distributions, so it is simple to transform the uniformly distributed variables of the copula ("u", "v", and so on) into the marginal variables ("x", "y", and so on) by the inverse marginal cumulative distribution function. Using the chain rule, the copula distribution function can be partially differentiated with respect to the uniformly distributed variables of the copula, and it is possible to express the multivariate probability density function (PDF) as a product of a multivariate copula density function and the marginal PDFs. The mathematics for converting a copula distribution function into a copula density function is shown for the bivariate case, and a family of copulas used in signal processing is listed in TABLE 1.
Mathematical derivation.
For any two random variables "X" and "Y", the continuous joint probability distribution function can be written as
formula_0
where formula_1 and
formula_2 are the marginal cumulative distribution functions of the random variables "X" and "Y", respectively.
Then the copula distribution function formula_3 can be defined using Sklar's theorem as:
formula_4,
where formula_5 and formula_6 are the marginal distribution functions, formula_7 is the joint distribution function, and formula_8.
We start by using the relationship between joint probability density function (PDF) and joint cumulative distribution function (CDF) and its partial derivatives.
formula_9
(Equation 1)
where formula_10 is the copula density function, and formula_11 and formula_12 are the marginal probability density functions of "X" and "Y", respectively. It is important to understand that there are four elements in equation 1, and if three of the four are known, the fourth element can be calculated. For example, equation 1 may be used to obtain the copula density formula_10 when the joint and marginal densities are known, as in the sketch below.
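As an illustration of the last line of equation 1, the following Python sketch evaluates the density of a bivariate Gaussian copula by dividing the joint density by the product of the marginal densities. The Gaussian model and the correlation value are arbitrary example choices and are not taken from TABLE 1.

```python
# Sketch: copula density via equation 1, c(u, v) = f_XY(x, y) / (f_X(x) f_Y(y)),
# for a bivariate Gaussian model with an arbitrary example correlation rho = 0.6.
from scipy.stats import norm, multivariate_normal

rho = 0.6
joint = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

def gaussian_copula_density(u, v):
    # Map the uniform variables back to the marginals: x = F_X^{-1}(u), y = F_Y^{-1}(v).
    x, y = norm.ppf(u), norm.ppf(v)
    return joint.pdf([x, y]) / (norm.pdf(x) * norm.pdf(y))

print(gaussian_copula_density(0.3, 0.7))
print(gaussian_copula_density(0.5, 0.5))   # equals 1/sqrt(1 - rho**2) at the medians
```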
Summary table.
The use of copulas in signal processing is fairly new compared to finance. Here, a family of new bivariate copula density functions of importance in the area of signal processing is listed, where formula_13 and formula_14 are the marginal distribution functions and formula_15 and formula_16 are the marginal density functions.
TABLE 1: Copula density function of a family of copulas used in signal processing.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "F_{XY}(x,y) = \\Pr \\begin{Bmatrix} X \\leq{x},Y\\leq{y} \\end{Bmatrix}, "
},
{
"math_id": 1,
"text": "F_X(x) = \\Pr \\begin{Bmatrix} X \\leq{x} \\end{Bmatrix} "
},
{
"math_id": 2,
"text": " F_Y(y) = \\Pr \\begin{Bmatrix} Y \\leq{y} \\end{Bmatrix} "
},
{
"math_id": 3,
"text": "C(u, v)"
},
{
"math_id": 4,
"text": "F_{XY}(x,y) = C( F_X (x) , F_Y (y) ) \\triangleq C( u, v )\n"
},
{
"math_id": 5,
"text": "u = F_X(x) "
},
{
"math_id": 6,
"text": "v = F_Y(y) "
},
{
"math_id": 7,
"text": " F_{XY}(x,y) "
},
{
"math_id": 8,
"text": " u, v \\in (0,1) "
},
{
"math_id": 9,
"text": "\\begin{alignat}{6}\nf_{XY}(x,y) = {} & {\\partial^2 F_{XY}(x,y) \\over\\partial x\\,\\partial y } \\\\\n\\vdots \\\\\nf_{XY}(x,y) = {} & {\\partial^2 C(F_X(x),F_Y(y)) \\over\\partial x\\,\\partial y} \\\\\n\\vdots \\\\\nf_{XY}(x,y) = {} & {\\partial^2 C(u,v) \\over\\partial u\\,\\partial v} \\cdot {\\partial F_X(x) \\over\\partial x} \\cdot {\\partial F_Y(y) \\over\\partial y} \\\\\n\\vdots \\\\\nf_{XY}(x,y) = {} & c(u,v) f_X(x) f_Y(y) \\\\\n\\vdots \\\\\n\\frac{f_{XY}(x,y)}{f_X(x) f_Y(y) } = {} & c(u,v)\n\\end{alignat}\n\n"
},
{
"math_id": 10,
"text": "c(u,v) "
},
{
"math_id": 11,
"text": "f_X(x) "
},
{
"math_id": 12,
"text": "f_Y(y) "
},
{
"math_id": 13,
"text": "u=F_X(x)\n\n"
},
{
"math_id": 14,
"text": "v=F_Y(y)\n\n"
},
{
"math_id": 15,
"text": "f_X(x)\n\n"
},
{
"math_id": 16,
"text": "f_Y(y)\n\n"
}
] |
https://en.wikipedia.org/wiki?curid=63584864
|
635926
|
Transconductance
|
Electrical characteristic
Transconductance (for transfer conductance), also infrequently called mutual conductance, is the electrical characteristic relating the current through the output of a device to the voltage across the input of a device. Conductance is the reciprocal of resistance.
Transadmittance (or transfer admittance) is the AC equivalent of transconductance.
Definition.
Transconductance is very often denoted as a conductance, "g"m, with a subscript, m, for "mutual". It is defined as follows:
formula_0
For small signal alternating current, the definition is simpler:
formula_1
The SI unit for transconductance is the siemens, with the symbol S, as in conductance.
Transresistance.
Transresistance (for transfer resistance), also infrequently referred to as mutual resistance, is the dual of transconductance. It refers to the ratio between a change of the voltage at two output points and a related change of current through two input points, and is notated as "r"m:
formula_2
The SI unit for transresistance is simply the ohm, as in resistance.
Transimpedance (or, transfer impedance) is the AC equivalent of transresistance, and is the dual of transadmittance.
Devices.
Vacuum tubes.
For vacuum tubes, transconductance is defined as the change in the plate (anode) current divided by the corresponding change in the grid–cathode voltage, with a constant plate (anode) to cathode voltage. Typical values of "g"m for a small-signal vacuum tube are 1 to 10 millisiemens. It is one of the three characteristic constants of a vacuum tube, the other two being its gain μ (mu) and plate resistance "r"p or "r"a. The Van der Bijl equation defines their relation as follows:
formula_3
Field-effect transistors.
Similarly, in field-effect transistors, and MOSFETs in particular, transconductance is the change in the drain current divided by the small change in the gate–source voltage with a constant drain–source voltage. Typical values of "g"m for a small-signal field-effect transistor are 1 to 30 millisiemens.
Using the Shichman–Hodges model, the transconductance for the MOSFET can be expressed as (see MOSFET article)
formula_4
where "I"D is the DC drain current at the bias point, and "V"OV is the overdrive voltage, which is the difference between the bias point gate–source voltage and the threshold voltage (i.e., "V"OV ≡ "V"GS – "V"th). The overdrive voltage (sometimes known as the effective voltage) is customarily chosen at about 70–200 mV for the 65 nm technology node ("I"D ≈ 1.13 mA/μm of width) for a "g"m of 11–32 mS/μm.
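A minimal numerical illustration of this relation, using arbitrary example bias values rather than data from any particular process, is:

```python
# Sketch: MOSFET transconductance from g_m = 2 * I_D / V_OV (Shichman-Hodges model).
I_D = 1.13e-3    # example drain bias current, A
V_OV = 0.15      # example overdrive voltage V_GS - V_th, V
g_m = 2 * I_D / V_OV
print(f"g_m = {g_m * 1e3:.1f} mS")   # about 15 mS, within the quoted 11-32 mS range
```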
Additionally, the transconductance for the junction FET is given by
formula_5
where "V"P is the pinchoff voltage, and "I"DSS is the maximum drain current.
Bipolar transistors.
The "g"m of bipolar small-signal transistors varies widely, being proportional to the collector current. It has a typical range of 1 to 400 millisiemens. The input voltage change is applied between the base and emitter, and the output is the change in the collector current flowing between the collector and emitter, with a constant collector–emitter voltage.
The transconductance for the bipolar transistor can be expressed as
formula_6
where "I"C is the DC collector current at the Q-point, and "V"T is the thermal voltage "kT"/"q", typically about 26 mV at room temperature. For a typical current of 10 mA, "g"m ≈ 385 mS. The input impedance is the current gain divided by the transconductance.
The output (collector) conductance is determined by the Early voltage and is proportional to the collector current. For most transistors in linear operation it is well below 100 μS.
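A sketch of the bipolar small-signal relations above, using example bias values and an assumed current gain of 100 (not values from any specific device):

```python
# Sketch: bipolar transconductance g_m = I_C / V_T and the small-signal input
# resistance (current gain divided by transconductance).  Example values only.
k_B = 1.380649e-23      # Boltzmann constant, J/K
q = 1.602176634e-19     # elementary charge, C
T = 300.0               # temperature, K
V_T = k_B * T / q       # thermal voltage, about 25.9 mV at 300 K

I_C = 10e-3             # example collector bias current, A
beta = 100              # example small-signal current gain
g_m = I_C / V_T
print(f"V_T = {V_T * 1e3:.1f} mV, g_m = {g_m * 1e3:.0f} mS, r_in = {beta / g_m:.0f} ohm")
```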
Amplifiers.
Transconductance amplifiers.
A transconductance amplifier ("g"m amplifier) puts out a current proportional to its input voltage. In network analysis, the transconductance amplifier is defined as a "voltage controlled current source" ("VCCS"). These amplifiers are commonly seen installed in a cascode configuration, which improves the frequency response.
An ideal transconductance amplifier in a voltage follower configuration behaves at the output like a resistor of value formula_7, between a buffered copy of the input voltage and the output. If the follower is loaded by a single capacitor formula_8, the voltage follower transfer function has a single pole with time constant formula_9, or equivalently it behaves as a 1st-order low-pass filter with a -3 dB bandwidth of formula_10.
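For example, the low-pass behaviour can be evaluated directly; the component values below are chosen arbitrarily for illustration:

```python
# Sketch: -3 dB bandwidth of an ideal g_m voltage follower loaded by a capacitor,
# f_3dB = g_m / (2 * pi * C).  Example component values.
import math
g_m = 1e-3     # 1 mS
C = 10e-12     # 10 pF
print(f"f_3dB = {g_m / (2 * math.pi * C) / 1e6:.1f} MHz")   # about 15.9 MHz
```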
Operational transconductance amplifiers.
An operational transconductance amplifier (OTA) is an integrated circuit which can function as a transconductance amplifier. These normally have an input to allow the transconductance to be controlled.
Transresistance amplifiers.
A transresistance amplifier outputs a voltage proportional to its input current. The transresistance amplifier is often referred to as a transimpedance amplifier, especially by semiconductor manufacturers.
The term for a transresistance amplifier in network analysis is "current controlled voltage source" ("CCVS").
A basic inverting transresistance amplifier can be built from an operational amplifier and a single resistor. Simply connect the resistor between the output and the inverting input of the operational amplifier and connect the non-inverting input to ground. The output voltage will then be proportional to the input current at the inverting input, decreasing with increasing input current and vice versa.
Specialist chip transresistance (transimpedance) amplifiers are widely used for amplifying the signal current from photo diodes at the receiving end of ultra high speed fibre optic links.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "g_m = \\frac{\\Delta I_\\text{out}}{\\Delta V_\\text{in}}"
},
{
"math_id": 1,
"text": "g_m = \\frac{i_\\text{out}}{v_\\text{in}}"
},
{
"math_id": 2,
"text": "r_m = \\frac{\\Delta V_\\text{out}}{\\Delta I_\\text{in}}"
},
{
"math_id": 3,
"text": "g_\\mathrm{m} = \\frac{\\mu}{r_\\mathrm{p}}"
},
{
"math_id": 4,
"text": "g_\\text{m} = \\frac{2I_\\text{D}}{V_\\text{OV}},"
},
{
"math_id": 5,
"text": "g_\\text{m} = \\frac{2I_\\text{DSS}}{|V_\\text{P}|} \\left(1 - \\frac{V_\\text{GS}}{V_\\text{P}}\\right),"
},
{
"math_id": 6,
"text": "g_\\mathrm{m} = \\frac{I_\\mathrm{C}}{V_\\mathrm{T}}"
},
{
"math_id": 7,
"text": "1/g_m"
},
{
"math_id": 8,
"text": "C"
},
{
"math_id": 9,
"text": "C/g_m"
},
{
"math_id": 10,
"text": "{g_m}/{2\\pi C}"
}
] |
https://en.wikipedia.org/wiki?curid=635926
|
63598605
|
The Strange Logic of Random Graphs
|
The Strange Logic of Random Graphs is a book on zero-one laws for random graphs. It was written by Joel Spencer and published in 2001 by Springer-Verlag as volume 22 of their book series Algorithms and Combinatorics.
Topics.
The random graphs of the book are generated from the Erdős–Rényi–Gilbert model formula_0 in which formula_1 vertices are given and a random choice is made whether to connect each pair of vertices by an edge, independently for each pair, with probability formula_2 of making a connection. A zero-one law is a theorem stating that, for certain properties of graphs, and for certain choices of formula_2,
the probability of generating a graph with the property tends to zero or one in the limit as formula_1 goes to infinity.
A fundamental result in this area, proved independently by Glebskiĭ et al. and by Ronald Fagin, is that there is a zero-one law for formula_3 for every property that can be described in the first-order logic of graphs. Moreover, the limiting probability is one if and only if the infinite Rado graph has the property. For instance, a random graph in this model contains a triangle with probability tending to one; it contains a universal vertex with probability tending to zero. For other choices of formula_2, other outcomes can occur.
For instance, the limiting probability of containing a triangle is between 0 and 1 when formula_4 for a constant formula_5; it tends to 0 for smaller choices of formula_2 and to 1 for larger choices. The function formula_6 is said to be a "threshold" for the property of containing a triangle, meaning that it separates the values of formula_2 with limiting probability 0 from the values with limiting probability 1.
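The threshold behaviour can be observed empirically. The following Python sketch, with arbitrary choices of graph size and number of trials, estimates the probability that formula_0 contains a triangle for a sub-threshold, a threshold, and a super-threshold choice of formula_2.

```python
# Sketch: empirical triangle probabilities in G(n, p) around the threshold p = 1/n.
# Graph size and trial count are arbitrary; expect roughly 0 (sparse), an
# intermediate value near the threshold, and roughly 1 (dense).
import itertools
import random

def has_triangle(n, p, rng):
    adj = [[False] * n for _ in range(n)]
    for i, j in itertools.combinations(range(n), 2):
        if rng.random() < p:
            adj[i][j] = adj[j][i] = True
    return any(adj[i][j] and adj[j][k] and adj[i][k]
               for i, j, k in itertools.combinations(range(n), 3))

rng = random.Random(1)
n, trials = 60, 100
for label, p in [("p = 1/n^2", 1.0 / n ** 2), ("p = 1/n", 1.0 / n), ("p = 1/2", 0.5)]:
    freq = sum(has_triangle(n, p, rng) for _ in range(trials)) / trials
    print(f"{label:9s}  fraction of graphs containing a triangle = {freq:.2f}")
```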
The main result of the book (proved by Spencer with Saharon Shelah) is that irrational powers of formula_1 are never threshold functions. That is, whenever formula_7 is an irrational number, there is a zero-one law for the first-order properties of the random graphs formula_8. A key tool in the proof is the Ehrenfeucht–Fraïssé game.
Audience and reception.
Although it is essentially the proof of a single theorem, aimed at specialists in the area, the book is written in a readable style that introduces the reader to many important topics in finite model theory and the theory of random graphs. Reviewer Valentin Kolchin, himself the author of another book on random graphs, writes that the book is "self-contained, easily read, and is distinguished by elegant writing", recommending it to probability theorists and logicians. Reviewer Alessandro Berarducci calls the book "beautifully written" and its subject "fascinating".
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "G(n,p)"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "p"
},
{
"math_id": 3,
"text": "G(n,1/2)"
},
{
"math_id": 4,
"text": "p=c/n"
},
{
"math_id": 5,
"text": "c"
},
{
"math_id": 6,
"text": "1/n"
},
{
"math_id": 7,
"text": "a>0"
},
{
"math_id": 8,
"text": "G(n,n^{-a})"
}
] |
https://en.wikipedia.org/wiki?curid=63598605
|
63601960
|
Information fluctuation complexity
|
Information fluctuation complexity is an information-theoretic quantity defined as the fluctuation of information about entropy. It is derivable from fluctuations in the predominance of order and chaos in a dynamic system and has been used as a measure of complexity in many diverse fields. It was introduced in a 1993 paper by Bates and Shepard.
Definition.
The information fluctuation complexity of a discrete dynamic system is a function of the probability distribution of its states when it is subject to random external input data. The purpose of driving the system with a rich information source such as a random number generator or a white noise signal is to probe the internal dynamics of the system in much the same way as a frequency-rich impulse is used in signal processing.
If a system has formula_0 possible states and the "state probabilities" formula_1 are known, then its information entropy is
formula_2
where formula_3 is the information content of state formula_4.
The information fluctuation complexity of the system is defined as the standard deviation or fluctuation of formula_5 about its mean formula_6:
formula_7
or
formula_8
The "fluctuation of state information" formula_9 is zero in a maximally disordered system with all formula_10; the system simply mimics its random inputs. formula_9 is also zero when the system is perfectly ordered with just one fixed state formula_11, regardless of inputs. formula_9 is non-zero between these two extremes with a mixture of both higher-probability states and lower-probability states populating state space.
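The definition can be evaluated directly from a state-probability distribution. The following Python sketch, using arbitrary example distributions, shows the two degenerate cases and a mixed case:

```python
# Sketch: entropy H and information fluctuation complexity sigma_I from state
# probabilities.  The example distributions are arbitrary.
import numpy as np

def entropy_and_fluctuation(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                     # zero-probability states contribute nothing
    info = -np.log2(p)               # information content I_i of each state
    H = np.sum(p * info)             # entropy (mean of I)
    sigma_I = np.sqrt(np.sum(p * info ** 2) - H ** 2)
    return H, sigma_I

print(entropy_and_fluctuation([0.25] * 4))                  # uniform: sigma_I = 0
print(entropy_and_fluctuation([1.0]))                       # single state: sigma_I = 0
print(entropy_and_fluctuation([0.5, 0.25, 0.125, 0.125]))   # mixture: sigma_I > 0
```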
Fluctuation of information allows for memory and computation.
As a complex dynamic system evolves in time, how it transitions between states depends on external stimuli in an irregular way. At times it may be more sensitive to external stimuli (unstable) and at other times less sensitive (stable). If a particular state has several possible next-states, external information determines which one will be next and the system gains that information by following a particular trajectory in state space. But if several different states all lead to the same next-state, then upon entering the next-state the system loses information about which state preceded it. Thus, a complex system exhibits alternating information gain and loss as it evolves in time. The alternation or fluctuation of information is equivalent to remembering and forgetting — temporary information storage or memory — an essential feature of non-trivial computation.
The gain or loss of information associated with transitions between states can be related to state information. The "net information gain" of a transition from state formula_12 to state formula_13 is the information gained when leaving state formula_12 less the information lost when entering state formula_13:
formula_14
Here formula_15 is the "forward conditional probability" that if the present state is formula_12 then the next state is formula_13 and formula_16 is the "reverse conditional probability" that if the present state is formula_13 then the previous state was formula_17. The conditional probabilities are related to the "transition probability" formula_18, the probability that a transition from state formula_17 to state formula_13 occurs, by:
formula_19
Eliminating the conditional probabilities:
formula_20
Therefore, the net information gained by the system as a result of the transition depends only on the increase in state information from the initial to the final state. It can be shown that this is true even for multiple consecutive transitions.
formula_21 is reminiscent of the relation between force and potential energy. formula_22 is like potential formula_23 and formula_24 is like force formula_25 in formula_26. External information “pushes” a system “uphill” to a state of higher information potential to accomplish information storage, much like pushing a mass uphill to a state of higher gravitational potential stores energy. The amount of energy storage depends only on the final height, not on the path up the hill. Likewise, the amount of information storage does not depend on the transition path between an initial common state and a final rare state. Once a system reaches a rare state with high information potential, it may then "fall" back to a common state, losing the previously stored information.
It may be useful to compute the standard deviation of formula_24 about its mean (which is zero), namely the "fluctuation of net information gain"; but formula_27 takes into account multi-transition "memory loops" in state space and is therefore more indicative of the computational power of a system. Moreover, formula_27 is easier to calculate because there can be many more transitions than states.
Chaos and order.
A dynamic system that is sensitive to external information (unstable) exhibits chaotic behavior whereas one that is insensitive to external information (stable) exhibits orderly behavior. A complex system exhibits both behaviors, fluctuating between them in dynamic balance when subject to a rich information source. The degree of fluctuation is quantified by formula_27; it captures the alternation in the predominance of chaos and order in a complex system as it evolves in time.
Example: rule 110 variant of the elementary cellular automaton.
The rule 110 variant of the elementary cellular automaton has been proven to be capable of universal computation. The proof is based on the existence and interactions of cohesive and self-perpetuating cell patterns known as gliders or spaceships (examples of emergent phenomena associated with complex systems), that imply the capability of groups of automaton cells to remember that a glider is passing through them. It is therefore to be expected that there will be memory loops in state space resulting from alternations of information gain and loss, instability and stability, chaos and order.
Consider a 3-cell group of adjacent automaton cells that obey rule 110: end-center-end. The next state of the center cell depends on the present state of itself and the end cells as specified by the rule: it is 0 when the present left–center–right pattern is 111, 100 or 000, and 1 otherwise.
To calculate the information fluctuation complexity of this system, attach a "driver cell" to each end of the 3-cell group to provide a random external stimulus like so, driver→end-center-end←driver, such that the rule can be applied to the two end cells. Next, determine the next state for each possible present state and for each possible combination of driver cell contents; the forward conditional probabilities follow from these counts.
In the state diagram of this system, circles represent the states and arrows represent transitions between states. The eight states of this system, 1-1-1 to 0-0-0, are labeled with the octal equivalent of the 3-bit contents of the 3-cell group: 7 to 0. The transition arrows are labeled with forward conditional probabilities. Notice that there is variability in the divergence and convergence of arrows, corresponding to variability in the gain and loss of information from the driver cells.
The forward conditional probabilities are determined by the proportion of possible driver cell contents that drive a particular transition. For example, for the four possible combinations of two driver cell contents, state 7 leads to states 5, 4, 1 and 0, and so formula_28, formula_29, formula_30, and formula_31 are each ¼ or 25%. Likewise, state 0 leads to states 0, 1, 0 and 1, and so formula_32 and formula_33 are each ½ or 50%. And so forth.
The state probabilities are related by
formula_34 and formula_35
These linear algebraic equations can be solved, manually or with the aid of a computer program, for the state probabilities; a sketch of such a computation is given below.
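The following Python sketch carries out this computation from the description given above: it derives the forward conditional probabilities for the driver→end-center-end←driver system under rule 110, solves the linear equations for the state probabilities, and evaluates the entropy and complexity reported below.

```python
# Sketch of the rule 110 example: forward conditional probabilities for the
# driver -> end-center-end <- driver system, stationary state probabilities,
# and the resulting entropy and information fluctuation complexity
# (the article reports about 2.86 bits and 0.56 bits respectively).
import itertools
import numpy as np

def rule110(left, center, right):
    # Bit k of the rule number 110 gives the next state for neighbourhood value k.
    return (110 >> (left * 4 + center * 2 + right)) & 1

# p_{i -> j}: average over the four equally likely driver combinations.
P = np.zeros((8, 8))
for state in range(8):
    c1, c2, c3 = (state >> 2) & 1, (state >> 1) & 1, state & 1
    for dl, dr in itertools.product((0, 1), repeat=2):
        nxt = rule110(dl, c1, c2) * 4 + rule110(c1, c2, c3) * 2 + rule110(c2, c3, dr)
        P[state, nxt] += 0.25

# Solve p_j = sum_i p_i p_{i->j} together with the normalisation sum_i p_i = 1.
A = np.vstack([P.T - np.eye(8), np.ones(8)])
b = np.zeros(9); b[-1] = 1.0
p, *_ = np.linalg.lstsq(A, b, rcond=None)

info = -np.log2(p)
H = np.sum(p * info)
sigma_I = np.sqrt(np.sum(p * info ** 2) - H ** 2)
print(f"H = {H:.2f} bits, sigma_I = {sigma_I:.2f} bits")
```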
Information entropy and complexity can then be calculated from the state probabilities:
formula_36
formula_37
Note that the maximum possible entropy for eight states is formula_38 which would be the case if all eight states were equally likely with probabilities of ⅛ (randomness). Thus rule 110 has a relatively high entropy or state utilization at 2.86 bits. But this does not preclude a substantial fluctuation of state information about entropy, and thus a substantial value of complexity, whereas maximum entropy "would" preclude complexity.
An alternative method can be used to obtain the state probabilities when the analytical method used above is infeasible. Simply drive the system at its inputs (the driver cells) with a random source for many generations and observe the state probabilities empirically. When this is done via computer simulation for 10 million generations, empirical estimates of the state probabilities, and hence of the entropy and complexity, are obtained for automata of various sizes.
Since both formula_39 and formula_27 increase with system size, their dimensionless ratio formula_40, the "relative information fluctuation complexity", is included to better compare systems of different sizes. Notice that the empirical and analytical results agree for the 3-cell automaton and that the relative complexity levels off to about 0.10 by 10 cells.
In the paper by Bates and Shepard, formula_27 is computed for all elementary cellular automaton rules, and it was observed that the ones that exhibit slow-moving gliders and possibly stationary objects, as rule 110 does, are highly correlated with large values of formula_27. formula_27 can therefore be used as a filter to select candidate rules for universal computation, which is otherwise difficult to prove.
Applications.
Although the derivation of the information fluctuation complexity formula is based on information fluctuations in a dynamic system, the formula depends only on state probabilities and so is also applicable to any probability distribution, including those derived from static images or text.
Over the years the original paper has been referred to by researchers in many diverse fields: complexity theory, complex systems science, complex networks, chaotic dynamics, many-body localization entanglement, environmental engineering, ecological complexity, ecological time-series analysis, ecosystem sustainability, air and water pollution, hydrological wavelet analysis, soil water flow, soil moisture, headwater runoff, groundwater depth, air traffic control, flow patterns and flood events, topology, economics, market forecasting of metal and electricity prices, health informatics, human cognition, human gait kinematics, neurology, EEG analysis, education, investing, artificial life and aesthetics.
|
[
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": "p_i"
},
{
"math_id": 2,
"text": "\\Eta = \\sum_{i=1}^N p_i I_i = - \\sum_{i=1}^N p_i \\log p_i,"
},
{
"math_id": 3,
"text": "I_i = -\\log p_i"
},
{
"math_id": 4,
"text": "i"
},
{
"math_id": 5,
"text": "I"
},
{
"math_id": 6,
"text": "\\Eta"
},
{
"math_id": 7,
"text": "\\sigma_I = \\sqrt{\\sum_{i=1}^N p_i(I_i - \\Eta)^2} = \\sqrt{\\sum_{i=1}^N p_iI_i^2 - \\Eta^2}"
},
{
"math_id": 8,
"text": "\\sigma_I = \\sqrt{\\sum_{i=1}^N p_i \\log^2 p_i - \\Biggl(\\sum_{i=1}^N p_i \\log p_i \\Biggr)^2}."
},
{
"math_id": 9,
"text": "\\sigma_I\n"
},
{
"math_id": 10,
"text": "p_i = 1/N"
},
{
"math_id": 11,
"text": "(p_1 = 1)"
},
{
"math_id": 12,
"text": "i"
},
{
"math_id": 13,
"text": "j"
},
{
"math_id": 14,
"text": "\\Gamma_{ij} = -\\log p_{i \\rightarrow j} + \\log p_{i \\leftarrow j}."
},
{
"math_id": 15,
"text": "p_{i \\rightarrow j}"
},
{
"math_id": 16,
"text": "p_{i \\leftarrow j}"
},
{
"math_id": 17,
"text": "i\n"
},
{
"math_id": 18,
"text": "p_{ij}"
},
{
"math_id": 19,
"text": "p_{ij} = p_i p_{i \\rightarrow j} = p_{i \\leftarrow j} p_j."
},
{
"math_id": 20,
"text": "\\Gamma_{ij} = -\\log (p_{ij}/p_i) + \\log (p_{ij}/p_j) = \\log p_i - \\log p_j = I_j - I_i."
},
{
"math_id": 21,
"text": "\\Gamma = \\Delta I"
},
{
"math_id": 22,
"text": "I"
},
{
"math_id": 23,
"text": "\\Phi"
},
{
"math_id": 24,
"text": "\\Gamma"
},
{
"math_id": 25,
"text": "\\mathbf{F}"
},
{
"math_id": 26,
"text": "\\mathbf{F}={\\nabla \\Phi}"
},
{
"math_id": 27,
"text": "\\sigma_I"
},
{
"math_id": 28,
"text": "p_{7 \\rightarrow 5}"
},
{
"math_id": 29,
"text": "p_{7 \\rightarrow 4}"
},
{
"math_id": 30,
"text": "p_{7 \\rightarrow 1}"
},
{
"math_id": 31,
"text": "p_{7 \\rightarrow 0}"
},
{
"math_id": 32,
"text": "p_{0 \\rightarrow 1}"
},
{
"math_id": 33,
"text": "p_{0 \\rightarrow 0}"
},
{
"math_id": 34,
"text": "p_j = \\sum_{i=0}^7 p_i p_{i \\rightarrow j}"
},
{
"math_id": 35,
"text": "\\sum_{i=0}^7 p_i = 1."
},
{
"math_id": 36,
"text": "\\Eta = - \\sum_{i=0}^7 p_i \\log_2 p_i = 2.86 \\text{ bits},"
},
{
"math_id": 37,
"text": "\\sigma_I = \\sqrt{\\sum_{i=0}^7 p_i \\log_2^2 p_i - \\Eta^2} = 0.56 \\text{ bits}."
},
{
"math_id": 38,
"text": "\\log_2 8 = 3 \\text{ bits},"
},
{
"math_id": 39,
"text": "\\Eta"
},
{
"math_id": 40,
"text": "\\sigma_I/\\Eta"
}
] |
https://en.wikipedia.org/wiki?curid=63601960
|
6361017
|
Rab geranylgeranyltransferase
|
Class of enzyme complexes
Rab geranylgeranyltransferase, also known as (protein) geranylgeranyltransferase II, is one of the three prenyltransferases. It transfers (usually) two geranylgeranyl groups to the cysteine(s) at the C-terminus of Rab proteins.
geranylgeranyl diphosphate + protein-cysteine formula_0 S-geranylgeranyl-Cys-protein + diphosphate
The C-terminus of Rab proteins varies in length and sequence and is referred to as hypervariable. Thus Rab proteins do not have a consensus sequence, such as the CAAX box, which the Rab geranylgeranyltransferase can recognise. Instead Rab proteins are bound by the Rab escort protein (REP) over a more conserved region of the Rab protein and then presented to the Rab geranylgeranyltransferase.
Once Rab proteins are prenylated, the lipid anchor(s) ensure that Rabs are no longer soluble. REP therefore plays an important role in binding and solubilising the geranylgeranyl groups and delivers the Rab protein to the relevant cell membrane.
Reaction.
Rab geranylgeranyltransferase (RabGGTase; enzyme commission code EC 2.5.1.60) is classified as a transferase enzyme; specifically, it is in the protein prenyltransferase family along with two other enzymes (protein farnesyltransferase and protein geranylgeranyltransferase type-I). The reaction catalyzed by RabGGTase is summarized as follows:
This reaction is essential in the control of membrane docking and fusion. Studies of mice have shown that Rab GGTase genes are expressed in all major adult organs, as well as in some embryonic units, including the spinal cord and liver (Chinpaisal).
Rab geranylgeranyltransferase’s “outsourcing” of specificity (using REP to interact with the Rab proteins it prenylates, as mentioned above) is unique among prenyltransferases. Rab GGTase is “responsible for the largest number of individual protein prenylation events in the cell,” probably due to this ability to interact with many different Rab proteins (it can prenylate any sequence containing a cysteine residue).
In vitro studies have shown that Rab GGTase can be inhibited by nitrogen-containing bisphosphonate drugs such as risedronate; however, the effects of such drugs seem to be much more limited in vivo (Coxon).
Structure.
RabGGTase is a heterodimer composed of alpha and beta subunits that are encoded by the "RABGGTA" and "RABGGTB" genes, respectively. The structure of rat RabGGTase has been determined by X-ray diffraction (see image to the left) to a resolution of 1.80 Å. RabGGTase’s secondary structure is largely composed of alpha helices; the alpha subunit is 74% helical with no beta sheets, while the beta subunit is 51% helical and 5% beta sheet. There are 28 alpha helices total (15 in the alpha subunit and 13 in the beta subunit) and 15 very short (no more than 4 residues) beta sheets. Functional RabGGTase binds three metal ions as ligands: two calcium ions (Ca2+) and a zinc ion (Zn2+), all of which interact with the beta subunit.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=6361017
|
63617842
|
Pan-European Privacy-Preserving Proximity Tracing
|
Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT/PEPP) is a full-stack open protocol designed to facilitate digital contact tracing of infected participants. The protocol was developed in the context of the ongoing COVID-19 pandemic. The protocol, like the competing Decentralized Privacy-Preserving Proximity Tracing (DP-3T) protocol, makes use of Bluetooth LE to discover and locally log clients near a user. However, unlike DP-3T, it uses a centralized reporting server to process contact logs and individually notify clients of potential contact with an infected patient. It has been argued that this approach compromises privacy, but has the benefit of human-in-the-loop checks and health authority verification. While users are not expected to register with their real name, the back-end server processes pseudonymous personal data that could eventually be re-identified. It has also been put forward that the distinction between centralized and decentralized systems is mostly technical and that PEPP-PT is equally able to preserve privacy.
Technical specification.
The protocol can be divided into two broad responsibilities: local device encounters and logging, and transmission of contact logs to a central health authority. These two areas will be referred to as the "encounter handshake" and "infection reporting" respectively. Additionally authentication, notification, and other minor responsibilities of the protocol are defined.
Authentication.
Authentication during registration is required to prevent malicious actors from creating multiple false user accounts and using them to interfere with the system. In order to preserve the anonymity of the users, traditional authentication models using static identifiers such as email addresses or phone numbers could not be employed. Rather, the protocol uses a combination of a proof-of-work challenge and CAPTCHA. The suggested proof-of-work algorithm is scrypt as defined in RFC7914, popularized in various blockchain systems such as Dogecoin and Litecoin. Scrypt was chosen because it is memory bound rather than CPU bound. Once a user registers with the app, they are issued a unique 128-bit pseudo-random identifier (PUID) by the server. It will be marked inactive until the app solves the PoW challenge with the input parameters of formula_0, a cost factor of 2, and a block size of 8. Once completed, OAuth2 credentials are issued to the client to authenticate all future requests.
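A minimal sketch of such a proof-of-work check, using Python's built-in scrypt, is shown below. The difficulty target, the nonce handling and the mapping of "cost factor of 2" and "block size of 8" to the scrypt parameters n and r are assumptions made for illustration and are not the normative protocol values.

```python
# Illustrative proof-of-work check: find a nonce such that
# scrypt(nonce || challenge) falls below a difficulty target.
# Parameter mapping (n=2, r=8, p=1), the salt choice and the target scheme
# are assumptions, not the PEPP-PT specification.
import hashlib
import os

def solve_pow(challenge: bytes, difficulty_bits: int = 12) -> bytes:
    target = 1 << (256 - difficulty_bits)
    while True:
        nonce = os.urandom(16)
        digest = hashlib.scrypt(nonce + challenge, salt=challenge,
                                n=2, r=8, p=1, dklen=32)
        if int.from_bytes(digest, "big") < target:
            return nonce

challenge = os.urandom(32)   # stand-in for the challenge issued with the PUID
print("nonce found:", solve_pow(challenge).hex())
```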
Encounter handshake.
When two clients encounter each other, they must exchange and log identifying details. In order to prevent the tracking of clients over time through the use of static identifiers, clients exchange time sensitive temporary IDs issued by the central server. In order to generate these temporary IDs, the central server generates a global secret key formula_1, which is used to calculate all temporary IDs for a short timeframe formula_2. From this an Ephemeral Bluetooth ID (EBID) is calculated for each user with the algorithm formula_3 where formula_4 is the AES encryption algorithm. These EBIDs are used by the clients as the temporary IDs in the exchange. EBIDs are fetched in forward dated batches to account for poor internet access.
Clients then constantly broadcast their EBID under the PEPP-PT Bluetooth service identifier, while also scanning for other clients. If another client is found, the two exchange and log EBIDs, along with metadata about the encounter such as the signal strength and a timestamp.
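A minimal sketch of the EBID computation, assuming a raw AES block encryption of the 128-bit PUID under the current broadcast key (key management and time windowing are simplified for illustration), is:

```python
# Illustrative EBID computation: EBID_t(PUID) = AES(BK_t, PUID) on one 16-byte block.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def ebid(broadcast_key: bytes, puid: bytes) -> bytes:
    assert len(broadcast_key) == 16 and len(puid) == 16
    encryptor = Cipher(algorithms.AES(broadcast_key), modes.ECB()).encryptor()
    return encryptor.update(puid) + encryptor.finalize()

bk_t = os.urandom(16)    # stand-in for the short-lived global key BK_t
puid = os.urandom(16)    # stand-in for the 128-bit pseudo-random identifier
print("EBID for this time window:", ebid(bk_t, puid).hex())
```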
Infection reporting.
When a user, out of band, has been confirmed positive for infection the patient is asked to upload their contact logs to the central reporting server. If the user consents, the health authority issues a key authorizing the upload. The user then transmits the contact log over HTTPS to the reporting server to be processed.
Once the reporting server has received a contact log, each entry is run through a proximity check algorithm to reduce the likelihood of false positives. The resulting list of contacts is manually confirmed, and they, along with a random sample of other users, are sent a message containing a random number and a message hash. This message serves to wake up the client and have it check the server for new reports. If the client is on the list of confirmed users, the server will confirm potential infection to the client which will in turn warn the user. If a client is in the random sample, it will receive a response with no meaning. The reason a random sample of users is sent a message for every report is so that eavesdroppers are not able to determine who is at risk for infection by listening to communication between the client and server.
Controversy.
The Helmholtz Center for Information Security (CISPA) confirmed in a press release on April 20, 2020 that it was withdrawing from the consortium, citing a 'lack of transparency and clear governance' as well as data protection concerns around the PEPP-PT design. The École Polytechnique Fédérale de Lausanne, ETH Zurich, KU Leuven and the Institute for Scientific Interchange withdrew from the project in the same week. This group was also responsible for the development of the competing Decentralized Privacy-Preserving Proximity Tracing protocol.
On 20 April 2020, an open letter was released signed by over 300 security and privacy academics from 26 countries criticising the approach taken by PEPP-PT, stating that 'solutions which allow reconstructing invasive information about the population should be rejected without further discussion'.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "input = nonce || challenge"
},
{
"math_id": 1,
"text": "BK_t"
},
{
"math_id": 2,
"text": "t"
},
{
"math_id": 3,
"text": "EBID_t(PUID)=AES(BK_t, PUID)"
},
{
"math_id": 4,
"text": "AES"
}
] |
https://en.wikipedia.org/wiki?curid=63617842
|
63619636
|
Remote point
|
In general topology, a remote point is a point formula_0 that belongs to the Stone–Čech compactification formula_1 of a Tychonoff space formula_2 but that does not belong to the topological closure within formula_1 of any nowhere dense subset of formula_2.
Let formula_3 be the real line with the standard topology. In 1962, Nathan Fine and Leonard Gillman proved that, assuming the continuum hypothesis:
<templatestyles src="Template:Blockquote/styles.css" />There exists a point formula_0 in formula_4 that is not in the closure of "any" discrete subset of formula_3 ...
Their proof works for any Tychonoff space that is separable and not pseudocompact.
Chae and Smith proved that remote points exist, without assuming the continuum hypothesis (working in Zermelo–Fraenkel set theory), for a class of topological spaces that includes metric spaces. Several other mathematical theorems have been proved concerning remote points.
|
[
{
"math_id": 0,
"text": "p"
},
{
"math_id": 1,
"text": "\\beta X"
},
{
"math_id": 2,
"text": "X"
},
{
"math_id": 3,
"text": "\\R"
},
{
"math_id": 4,
"text": "\\beta \\R"
}
] |
https://en.wikipedia.org/wiki?curid=63619636
|
636219
|
Pressure vessel
|
Vessel for pressurised gases or liquids
A pressure vessel is a container designed to hold gases or liquids at a pressure substantially different from the ambient pressure.
Construction methods and materials may be chosen to suit the pressure application, and will depend on the size of the vessel, the contents, working pressure, mass constraints, and the number of items required.
Pressure vessels can be dangerous, and fatal accidents have occurred in the history of their development and operation. Consequently, pressure vessel design, manufacture, and operation are regulated by engineering authorities backed by legislation. For these reasons, the definition of a pressure vessel varies from country to country.
Design involves parameters such as maximum safe operating pressure and temperature, safety factor, corrosion allowance and minimum design temperature (for brittle fracture). Construction is tested using nondestructive testing, such as ultrasonic testing, radiography, and pressure tests. Hydrostatic pressure tests usually use water, but pneumatic tests use air or another gas. Hydrostatic testing is preferred, because it is a safer method, as much less energy is released if a fracture occurs during the test (water does not greatly increase its volume when rapid depressurization occurs, unlike gases, which expand explosively). Mass or batch production products will often have a representative sample tested to destruction in controlled conditions for quality assurance. Pressure relief devices may be fitted if the overall safety of the system is sufficiently enhanced.
In most countries, vessels over a certain size and pressure must be built to a formal code. In the United States that code is the ASME Boiler and Pressure Vessel Code (BPVC). In Europe the code is the Pressure Equipment Directive. Information on this page is mostly valid in ASME only. These vessels also require an authorized inspector to sign off on every new vessel constructed and each vessel has a nameplate with pertinent information about the vessel, such as maximum allowable working pressure, maximum temperature, minimum design metal temperature, what company manufactured it, the date, its registration number (through the National Board), and American Society of Mechanical Engineers's official stamp for pressure vessels (U-stamp). The nameplate makes the vessel traceable and officially an ASME Code vessel.
A special application is pressure vessels for human occupancy, for which more stringent safety rules apply.
History.
The earliest documented design of pressure vessels was described in 1495 in the book by Leonardo da Vinci, the Codex Madrid I, in which containers of pressurized air were theorized to lift heavy weights underwater. However, vessels resembling those used today did not come about until the 1800s, when steam was generated in boilers helping to spur the Industrial Revolution. However, with poor material quality and manufacturing techniques along with improper knowledge of design, operation and maintenance there was a large number of damaging and often deadly explosions associated with these boilers and pressure vessels, with a death occurring on a nearly daily basis in the United States. Local provinces and states in the US began enacting rules for constructing these vessels after some particularly devastating vessel failures occurred killing dozens of people at a time, which made it difficult for manufacturers to keep up with the varied rules from one location to another. The first pressure vessel code was developed starting in 1911 and released in 1914, starting the ASME Boiler and Pressure Vessel Code (BPVC). In an early effort to design a tank capable of withstanding pressures up to , a diameter tank was developed in 1919 that was spirally-wound with two layers of high tensile strength steel wire to prevent sidewall rupture, and the end caps longitudinally reinforced with lengthwise high-tensile rods. The need for high pressure and temperature vessels for petroleum refineries and chemical plants gave rise to vessels joined with welding instead of rivets (which were unsuitable for the pressures and temperatures required) and in the 1920s and 1930s the BPVC included welding as an acceptable means of construction; welding is the main means of joining metal vessels today.
There have been many advancements in the field of pressure vessel engineering such as advanced non-destructive examination, phased array ultrasonic testing and radiography, new material grades with increased corrosion resistance and stronger materials, and new ways to join materials such as explosion welding, friction stir welding, advanced theories and means of more accurately assessing the stresses encountered in vessels such as with the use of Finite Element Analysis, allowing the vessels to be built safer and more efficiently. Today, vessels in the USA require BPVC stamping but the BPVC is not just a domestic code, many other countries have adopted the BPVC as their official code. There are, however, other official codes in some countries, such as Japan, Australia, Canada, Britain, and Europe. Regardless of the country, nearly all recognize the inherent potential hazards of pressure vessels and the need for standards and codes regulating their design and construction.
Features.
Shape.
Pressure vessels can theoretically be almost any shape, but shapes made of sections of spheres, cylinders, and cones are usually employed. A common design is a cylinder with end caps called heads. Head shapes are frequently either hemispherical or dished (torispherical). More complicated shapes have historically been much harder to analyze for safe operation and are usually far more difficult to construct.
Theoretically, a spherical pressure vessel has approximately twice the strength of a cylindrical pressure vessel with the same wall thickness, and is the ideal shape to hold internal pressure. However, a spherical shape is difficult to manufacture, and therefore more expensive, so most pressure vessels are cylindrical with 2:1 semi-elliptical heads or end caps on each end. Smaller pressure vessels are assembled from a pipe and two covers. For cylindrical vessels with a diameter up to 600 mm (NPS of 24 in), it is possible to use seamless pipe for the shell, thus avoiding many inspection and testing issues, mainly the nondestructive examination of radiography for the long seam if required. A disadvantage of these vessels is that greater diameters are more expensive, so that for example the most economic shape of a , pressure vessel might be a diameter of and a length of including the 2:1 semi-elliptical domed end caps.
Construction materials.
Many pressure vessels are made of steel. To manufacture a cylindrical or spherical pressure vessel, rolled and possibly forged parts would have to be welded together. Some mechanical properties of steel, achieved by rolling or forging, could be adversely affected by welding, unless special precautions are taken. In addition to adequate mechanical strength, current standards dictate the use of steel with a high impact resistance, especially for vessels used in low temperatures. In applications where carbon steel would suffer corrosion, special corrosion resistant material should also be used.
Some pressure vessels are made of composite materials, such as filament wound composite using carbon fibre held in place with a polymer. Due to the very high tensile strength of carbon fibre these vessels can be very light, but are much more difficult to manufacture. The composite material may be wound around a metal liner, forming a composite overwrapped pressure vessel.
Other very common materials include polymers such as PET in carbonated beverage containers and copper in plumbing.
Pressure vessels may be lined with various metals, ceramics, or polymers to prevent leaking and protect the structure of the vessel from the contained medium. This liner may also carry a significant portion of the pressure load.
Pressure vessels may also be constructed from concrete (PCV) or other materials which are weak in tension. Cabling, wrapped around the vessel or within the wall of the vessel itself, provides the necessary tension to resist the internal pressure. A "leakproof steel thin membrane" lines the internal wall of the vessel. Such vessels can be assembled from modular pieces and so have "no inherent size limitations". There is also a high order of redundancy thanks to the large number of individual cables resisting the internal pressure.
The very small vessels used to make liquid butane fueled cigarette lighters are subjected to about 2 bar pressure, depending on ambient temperature. These vessels are often oval (1 x 2 cm ... 1.3 x 2.5 cm) in cross section but sometimes circular. The oval versions generally include one or two internal tension struts which appear to be baffles but which also provide additional cylinder strength.
Working pressure.
The typical circular-cylindrical high pressure gas cylinders for permanent gases (those that do not liquefy at storage pressure, such as air, oxygen, nitrogen, hydrogen, argon and helium) have been manufactured by hot forging, by pressing and rolling, to give a seamless steel vessel.
Working pressure of cylinders for use in industry, skilled craft, diving and medicine had a standardized working pressure (WP) of only in Europe until about 1950. From about 1975 until now, the standard pressure is . Firemen need slim, lightweight cylinders to move in confined spaces; since about 1995 cylinders for WP were used (first in pure steel).
A demand for reduced weight led to different generations of composite (fiber and matrix, over a liner) cylinders that are more easily damageable by a hit from outside. Therefore, composite cylinders are usually built for .
Hydraulic (filled with water) testing pressure is usually 50% higher than the working pressure.
Vessel thread.
Until 1990, high pressure cylinders were produced with conical (tapered) threads. Two types of threads have dominated the full metal cylinders in industrial use from in volume.
Taper thread (17E), with a 12% taper right hand thread, standard Whitworth 55° form with a pitch of 14 threads per inch (5.5 threads per cm) and pitch diameter at the top thread of the cylinder of . These connections are sealed using thread tape and torqued to between on steel cylinders, and between on aluminium cylinders.
To screw in the valve, a high torque of typically is necessary for the larger 25E taper thread, and for the smaller 17E thread. Until around 1950, hemp was used as a sealant. Later, a thin sheet of lead pressed to a hat with a hole on top was used. Since 2005, PTFE-tape has been used to avoid using lead.
A tapered thread provides simple assembly, but requires high torque for connecting and leads to high radial forces in the vessel neck. All cylinders built for working pressure, all diving cylinders, and all composite cylinders use parallel threads.
Parallel threads are made to several standards:
The 3/4"NGS and 3/4"BSP are very similar, having the same pitch and a pitch diameter that only differs by about , but they are not compatible, as the thread forms are different.
All parallel thread valves are sealed using an elastomer O-ring at top of the neck thread which seals in a chamfer or step in the cylinder neck and against the flange of the valve.
Development of composite vessels.
To classify the different structural principles of cylinders, four types are defined.
Type 2 and 3 cylinders have been in production since around 1995. Type 4 cylinders are commercially available at least since 2016.
Safety features.
Leak before burst.
Leak before burst describes a pressure vessel designed such that a crack in the vessel will grow through the wall, allowing the contained fluid to escape and reducing the pressure, prior to growing so large as to cause fracture at the operating pressure.
Many pressure vessel standards, including the ASME Boiler and Pressure Vessel Code and the AIAA metallic pressure vessel standard, either require pressure vessel designs to be leak before burst, or require pressure vessels to meet more stringent requirements for fatigue and fracture if they are not shown to be leak before burst.
Safety valves.
As the pressure vessel is designed to a pressure, there is typically a safety valve or relief valve to ensure that this pressure is not exceeded in operation.
Maintenance features.
Pressure vessel closures.
Pressure vessel closures are pressure retaining structures designed to provide quick access to pipelines, pressure vessels, pig traps, filters and filtration systems. Typically pressure vessel closures allow access by maintenance personnel.
A commonly used access hole shape is elliptical, which allows the closure to be passed through the opening, and rotated into the working position, and is held in place by a bar on the outside, secured by a central bolt. The internal pressure prevents it from being inadvertently opened under load.
Uses.
Pressure vessels are used in a variety of applications in both industry and the private sector. They appear in these sectors as industrial compressed air receivers, boilers and domestic hot water storage tanks. Other examples of pressure vessels are diving cylinders, recompression chambers, distillation towers, pressure reactors, autoclaves, and many other vessels in mining operations, oil refineries and petrochemical plants, nuclear reactor vessels, submarine and space ship habitats, atmospheric diving suits, pneumatic reservoirs, hydraulic reservoirs under pressure, rail vehicle airbrake reservoirs, road vehicle airbrake reservoirs, and storage vessels for high pressure permanent gases and liquified gases such as ammonia, chlorine, and LPG (propane, butane).
A unique application of a pressure vessel is the passenger cabin of an airliner: the outer skin carries both the aircraft maneuvering loads and the cabin pressurization loads.
Alternatives.
Depending on the application and local circumstances, alternatives to pressure vessels exist. Examples can be seen in domestic water collection systems, where the following may be used:
Design.
Scaling.
No matter what shape it takes, the minimum mass of a pressure vessel scales with the pressure and volume it contains and is inversely proportional to the strength to weight ratio of the construction material (minimum mass decreases as strength increases).
Scaling of stress in walls of vessel.
Pressure vessels are held together against the gas pressure due to tensile forces within the walls of the container. The normal (tensile) stress in the walls of the container is proportional to the pressure and radius of the vessel and inversely proportional to the thickness of the walls. Therefore, pressure vessels are designed to have a thickness proportional to the radius of tank and the pressure of the tank and inversely proportional to the maximum allowed normal stress of the particular material used in the walls of the container.
Because (for a given pressure) the thickness of the walls scales with the radius of the tank, the mass of a tank (which scales as the length times radius times thickness of the wall for a cylindrical tank) scales with the volume of the gas held (which scales as length times radius squared). The exact formula varies with the tank shape but depends on the density, ρ, and maximum allowable stress σ of the material in addition to the pressure P and volume V of the vessel. (See below for the exact equations for the stress in the walls.)
Spherical vessel.
For a sphere, the minimum mass of a pressure vessel is
formula_0,
where "P" is the pressure, "V" is the volume, "ρ" is the density of the pressure vessel material, and "σ" is the maximum allowable working stress of the material.
Other shapes besides a sphere have constants larger than 3/2 (infinite cylinders take 2), although some tanks, such as non-spherical wound composite tanks, can approach this.
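For example, the spherical minimum can be evaluated for a generic steel vessel; the pressure, volume and material values below are arbitrary illustrative figures, not design data.

```python
# Sketch: minimum mass of a spherical pressure vessel, M = (3/2) * P * V * rho / sigma.
P = 20e6        # internal pressure, Pa (200 bar), example value
V = 0.05        # contained volume, m^3 (50 L), example value
rho = 7800.0    # density of a generic steel, kg/m^3
sigma = 250e6   # allowable stress, Pa, example value
M = 1.5 * P * V * rho / sigma
print(f"minimum spherical shell mass ~ {M:.0f} kg")   # about 47 kg
```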
Cylindrical vessel with hemispherical ends.
This is sometimes called a "bullet" for its shape, although in geometric terms it is a capsule.
For a cylinder with hemispherical ends,
formula_6,
where
Cylindrical vessel with semi-elliptical ends.
In a vessel with an aspect ratio of middle cylinder width to radius of 2:1,
formula_7.
Gas storage.
In looking at the first equation, the factor PV, in SI units, is in units of (pressurization) energy. For a stored gas, PV is proportional to the mass of gas at a given temperature, thus
formula_8. (see gas law)
The other factors are constant for a given vessel shape and material. So we can see that there is no theoretical "efficiency of scale", in terms of the ratio of pressure vessel mass to pressurization energy, or of pressure vessel mass to stored gas mass. For storing gases, "tankage efficiency" is independent of pressure, at least for the same temperature.
So, for example, a typical design for a minimum mass tank to hold helium (as a pressurant gas) on a rocket would use a spherical chamber for a minimum shape constant, carbon fiber for best possible formula_9, and very cold helium for best possible formula_10.
Stress in thin-walled pressure vessels.
Stress in a thin-walled pressure vessel in the shape of a sphere is
formula_11,
where formula_12 is hoop stress, or stress in the circumferential direction, formula_13 is stress in the longitudinal direction, "p" is internal gauge pressure, "r" is the inner radius of the sphere, and "t" is thickness of the sphere wall. A vessel can be considered "thin-walled" if the diameter is at least 10 times (sometimes cited as 20 times) greater than the wall thickness.
Stress in a thin-walled pressure vessel in the shape of a cylinder is
formula_14,
formula_15,
where formula_12 is hoop stress, or stress in the circumferential direction, formula_13 is stress in the longitudinal direction, "p" is internal gauge pressure, "r" is the inner radius of the cylinder, and "t" is the thickness of the cylinder wall.
Almost all pressure vessel design standards contain variations of these two formulas with additional empirical terms to account for variation of stresses across thickness, quality control of welds and in-service corrosion allowances.
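As a rough numerical illustration of the basic thin-wall formulas above (not from the article; the inputs are assumed values), they can be evaluated directly in Python:

def sphere_stress(p, r, t):
    """Hoop = longitudinal stress in a thin-walled sphere, p*r/(2*t)."""
    return p * r / (2 * t)

def cylinder_stresses(p, r, t):
    """Return (hoop, longitudinal) stresses in a thin-walled cylinder."""
    return p * r / t, p * r / (2 * t)

# Example: 2 MPa gauge pressure, 0.5 m inner radius, 10 mm wall thickness
# (thin-walled, since the diameter is 100 times the wall thickness).
print(sphere_stress(2e6, 0.5, 0.01))      # 50 MPa
print(cylinder_stresses(2e6, 0.5, 0.01))  # (100 MPa, 50 MPa)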
All formulae mentioned above assume a uniform distribution of membrane stresses across the thickness of the shell, but in reality that is not the case. A deeper analysis is given by Lamé's theorem, which gives the distribution of stress in the walls of a thick-walled cylinder of a homogeneous and isotropic material. The formulae of pressure vessel design standards are extensions of Lamé's theorem, obtained by putting a limit on the ratio of inner radius to thickness.
For example, the ASME Boiler and Pressure Vessel Code (BPVC) (UG-27) formulas are:
Spherical shells: Thickness has to be less than 0.356 times inner radius
formula_16
Cylindrical shells: Thickness has to be less than 0.5 times inner radius
formula_17
formula_18
where "E" is the joint efficiency, and all others variables as stated above.
The factor of safety is often included in these formulas as well, in the case of the ASME BPVC this term is included in the material stress value when solving for pressure or thickness.
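The quoted UG-27-style expressions can be evaluated in the same way; the sketch below is only an illustration of the formulas above, not design code, and its inputs (including the joint efficiency) are assumed values:

def asme_sphere_stress(p, r, t, E=1.0):
    return p * (r + 0.2 * t) / (2 * t * E)

def asme_cylinder_stresses(p, r, t, E=1.0):
    hoop = p * (r + 0.6 * t) / (t * E)
    longitudinal = p * (r - 0.4 * t) / (2 * t * E)
    return hoop, longitudinal

# Same vessel as in the thin-wall example, with a joint efficiency of 0.85 (assumed).
print(asme_cylinder_stresses(2e6, 0.5, 0.01, E=0.85))  # ~(119 MPa, 58 MPa)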
Winding angle of carbon fibre vessels.
Wound infinite cylindrical shapes optimally take a winding angle of 54.7 degrees to the cylindrical axis, because the circumferential (hoop) direction must carry twice the stress of the longitudinal direction.
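As a quick check (not from the article), 54.7 degrees is the angle whose squared tangent equals two, which in the usual netting-analysis approximation balances a 2:1 hoop-to-axial load:

import math

# Fibres wound at angle theta to the axis carry hoop and axial load in the
# ratio tan(theta)**2 in netting analysis; a 2:1 ratio gives the "magic angle".
theta = math.degrees(math.atan(math.sqrt(2)))
print(round(theta, 1))  # 54.7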
Construction methods.
Riveted.
Before gas and electrical welding of reliable quality became widespread, the standard method of construction for boilers, compressed air receivers and other pressure vessels of iron or steel was riveting: sheets were rolled and forged into shape, then riveted together, often using butt straps along the joints, and the riveted seams were caulked by deforming the edges of the overlap with a blunt chisel. Hot riveting caused the rivets to contract on cooling, forming a tighter joint.
Seamless.
Manufacturing methods for seamless metal pressure vessels are commonly used for relatively small diameter cylinders where large numbers will be produced, as the machinery and tooling require large capital outlay. The methods are well suited to high pressure gas transport and storage applications, and provide consistently high quality products.
Backward extrusion: A process by which the material is forced to flow back along the mandrel between the mandrel and die.
Cold extrusion (aluminium):
Seamless aluminium cylinders may be manufactured by cold backward extrusion of aluminium billets in a process which first presses the walls and base, then trims the top edge of the cylinder walls, followed by press forming the shoulder and neck.
Hot extrusion (steel):
In the hot extrusion process a billet of steel is cut to size, induction heated to the correct temperature for the alloy, descaled and placed in the die. The metal is backward extruded by forcing the mandrel into it, causing it to flow through the annular gap until a deep cup is formed. This cup is drawn further to reduce the diameter and wall thickness, and the bottom is formed. After inspection and trimming of the open end, the cylinder is hot spun to close the end and form the neck.
Drawn:
Seamless cylinders may also be cold drawn from steel plate discs to a cylindrical cup form, in two or three stages. After forming the base and side walls, the top of the cylinder is trimmed to length, heated and hot spun to form the shoulder and close the neck. This process thickens the material of the shoulder. The cylinder is heat-treated by quenching and tempering to provide the best strength and toughness.
Regardless of the method used to form the cylinder, it will be machined to finish the neck and cut the neck threads, heat treated, cleaned, and surface finished, stamp marked, tested, and inspected for quality assurance.
Welded.
Large and low pressure vessels are commonly manufactured from formed plates welded together. Weld quality is critical to safety in pressure vessels for human occupancy.
Composite.
Composite pressure vessels are generally filament wound rovings in a thermosetting polymer matrix. The mandrel may be removable after cure, or may remain a part of the finished product, often providing a more reliable gas or liquid-tight liner, or better chemical resistance to the intended contents than the resin matrix. Metallic inserts may be provided for attaching threaded accessories, such as valves and pipes.
Operation standards.
Pressure vessels are designed to operate safely at a specific pressure and temperature, technically referred to as the "Design Pressure" and "Design Temperature". A vessel that is inadequately designed to handle a high pressure constitutes a very significant safety hazard. Because of that, the design and certification of pressure vessels is governed by design codes such as the ASME Boiler and Pressure Vessel Code in North America, the Pressure Equipment Directive of the EU (PED), Japanese Industrial Standard (JIS), CSA B51 in Canada, Australian Standards in Australia and other international standards like Lloyd's, Germanischer Lloyd, Det Norske Veritas, Société Générale de Surveillance (SGS S.A.), Lloyd's Register Energy Nederland (formerly known as Stoomwezen) etc.
Note that where the pressure-volume product is part of a safety standard, any incompressible liquid in the vessel can be excluded as it does not contribute to the potential energy stored in the vessel, so only the volume of the compressible part such as gas is used.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "M = {3 \\over 2} P V {\\rho \\over \\sigma}"
},
{
"math_id": 1,
"text": "M"
},
{
"math_id": 2,
"text": "P"
},
{
"math_id": 3,
"text": "V"
},
{
"math_id": 4,
"text": "\\rho"
},
{
"math_id": 5,
"text": "\\sigma"
},
{
"math_id": 6,
"text": "M = 2 \\pi R^2 (R + W) P {\\rho \\over \\sigma}"
},
{
"math_id": 7,
"text": "M = 6 \\pi R^3 P {\\rho \\over \\sigma}"
},
{
"math_id": 8,
"text": "M = {3 \\over 2} nRT {\\rho \\over \\sigma}"
},
{
"math_id": 9,
"text": "\\rho / \\sigma"
},
{
"math_id": 10,
"text": "M / {pV}"
},
{
"math_id": 11,
"text": "\\sigma_\\theta = \\sigma_{\\rm long} = \\frac{pr}{2t}"
},
{
"math_id": 12,
"text": "\\sigma_\\theta"
},
{
"math_id": 13,
"text": "\\sigma_{long}"
},
{
"math_id": 14,
"text": "\\sigma_\\theta = \\frac{pr}{t}"
},
{
"math_id": 15,
"text": "\\sigma_{\\rm long} = \\frac{pr}{2t}"
},
{
"math_id": 16,
"text": "\\sigma_\\theta = \\sigma_{\\rm long} = \\frac{p(r + 0.2t)}{2tE}"
},
{
"math_id": 17,
"text": "\\sigma_\\theta = \\frac{p(r + 0.6t)}{tE}"
},
{
"math_id": 18,
"text": "\\sigma_{\\rm long} = \\frac{p(r - 0.4t)}{2tE}"
}
] |
https://en.wikipedia.org/wiki?curid=636219
|
63621945
|
Penning–Malmberg trap
|
Electromagnetic device used to confine particles of a single sign of charge
The Penning–Malmberg trap (PM trap), named after Frans Penning and John Malmberg, is an electromagnetic device used to confine large numbers of charged particles of a single sign of charge. Much interest in Penning–Malmberg (PM) traps arises from the fact that if the density of particles is large and the temperature is low, the gas will become a single-component plasma. While confinement of electrically neutral plasmas is generally difficult, single-species plasmas (an example of a non-neutral plasma) can be confined for long times in PM traps. They are the method of choice to study a variety of plasma phenomena. They are also widely used to confine antiparticles such as positrons (i.e., anti-electrons) and antiprotons for use in studies of the properties of antimatter and interactions of antiparticles with matter.
Design and operation.
A schematic design of a PM trap is shown in Fig. 1. Charged particles of a single sign of charge are confined in a vacuum inside an electrode structure consisting of a stack of hollow, metal cylinders. A uniform axial magnetic field formula_0 is applied to inhibit positron motion radially, and voltages are imposed on the end electrodes to prevent particle loss in the magnetic field direction. This is similar to the arrangement in a Penning trap, but with an extended confinement electrode to trap large numbers of particles (e.g., formula_1).
Such traps are renowned for their good confinement properties. This is due to the fact that, for a sufficiently strong magnetic field, the canonical angular momentum formula_2 of the charge cloud (i.e., including angular momentum due to the magnetic field B) in the direction formula_3 of the field is approximately
formula_2 ≈ (m ω_c / 2)(r_1² + r_2² + ... + r_N²),
where formula_4 is the radial position of the formula_5th particle, formula_6 is the total number of particles, and formula_7 is the cyclotron frequency, with particle mass m and charge e. If the system has no magnetic or electrostatic asymmetries in the plane perpendicular to formula_0, there are no torques on the plasma; thus formula_2 is constant, and the plasma cannot expand. As discussed below, these plasmas do expand due to magnetic and/or electrostatic asymmetries thought to be due to imperfections in trap construction.
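A minimal numerical sketch (not from the cited literature; all values are assumed) shows how conservation of this quantity pins the mean-square radius of the plasma:

import numpy as np

# In the strong-field limit, L_z ~ (m * omega_c / 2) * sum(r_j**2), so a
# constant L_z fixes the mean-square radius <r^2> of the particle cloud.
m, e, B = 9.109e-31, 1.602e-19, 1.0            # electron mass and charge, 1 T field
omega_c = e * B / m                            # cyclotron frequency
r = np.random.uniform(0.0, 1e-3, size=10_000)  # radial positions in metres (assumed)

L_z = 0.5 * m * omega_c * np.sum(r**2)
mean_square_radius = 2.0 * L_z / (m * omega_c * r.size)
print(L_z, mean_square_radius)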
The PM traps are typically filled using sources of low energy charged particles. In the case of electrons, this can be done using a hot filament or electron gun. For positrons, a sealed radioisotope source and "moderator" (the latter used to slow the positrons to electron-volt energies) can be used. Techniques have been developed to measure the plasma length, radius, temperature, and density in the trap, and to excite plasma waves and oscillations. It is frequently useful to compress plasmas radially to increase the plasma density and/or to combat asymmetry-induced transport. This can be accomplished by applying a torque on the plasma using rotating electric fields [the so-called "rotating wall" (RW) technique], or in the case of ion plasmas, using laser light. Very long confinement times (hours or days) can be achieved using these techniques.
Particle cooling is frequently necessary to maintain good confinement (e.g., to mitigate the heating from RW torques). This can be accomplished in a number of ways, such as using inelastic collisions with molecular gases, or in the case of ions, using lasers. In the case of electrons or positrons, if the magnetic field is sufficiently strong, the particles will cool by cyclotron radiation.
History and uses.
The confinement and properties of single-species plasmas in (what are now known as) PM traps were first studied by John Malmberg and John DeGrassie. Confinement was shown to be excellent compared to that of neutral plasmas. It was also shown that, while good, confinement is not perfect and there are particle losses.
Penning–Malmberg traps have been used to study a variety of transport mechanisms. Figure 2 shows an early study of confinement in a PM trap as a function of a background pressure of helium gas. At higher pressures, transport is due to electron-atom collisions, while at lower pressures, there is a pressure-independent particle loss mechanism. The latter (“anomalous transport”) mechanism has been shown to be due to inadvertent magnetic and electrostatic asymmetries and the effects of trapped particles. There is evidence that confinement in PM traps is improved if the main confinement electrode (blue in Fig. 1) is replaced by a series of coaxial cylinders biased to create a smoothly varying potential well (a “multi-ring PM trap”).
One fruitful area of research arises from the fact that plasmas in PM traps can be used to model the dynamics of inviscid two-dimensional fluid flows. PM traps are also the device of choice to accumulate and store antiparticles such as positrons and antiprotons. Researchers have been able to create positron and antiproton plasmas and to study electron-beam positron plasma dynamics.
Pure ion plasmas can be laser-cooled into crystalline states. Cryogenic pure-ion plasmas are used to study quantum entanglement. PM traps also provide an excellent source of cold positron beams. They have been used to study positronium (Ps) atoms (the bound state of a positron and an electron, lifetime ≤ 0.1 μs) with precision and to create and study the positronium molecule (Psformula_8, formula_9). Recently, PM-trap-based positron beams have been used to produce practical Ps-atom beams.
Antihydrogen is the bound state of an antiproton and a positron and the simplest antiatom. Nested PM traps (one for antiprotons and another for positrons) have been central to the successful efforts to create and trap antihydrogen and to compare its properties with those of hydrogen. The antiparticle plasmas (and the electron plasmas used to cool the antiprotons) are carefully tuned with an array of recently developed techniques to optimize the production of antihydrogen atoms. These neutral antiatoms are then confined in a minimum-magnetic-field trap.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "B"
},
{
"math_id": 1,
"text": "N\\geq10^{10}"
},
{
"math_id": 2,
"text": "L_z"
},
{
"math_id": 3,
"text": "z"
},
{
"math_id": 4,
"text": "r_j"
},
{
"math_id": 5,
"text": "j"
},
{
"math_id": 6,
"text": "N"
},
{
"math_id": 7,
"text": "{\\omega_c}=eB/m "
},
{
"math_id": 8,
"text": "_2"
},
{
"math_id": 9,
"text": "e^+e^-e^+e^-"
}
] |
https://en.wikipedia.org/wiki?curid=63621945
|
63621955
|
Directional-change intrinsic time
|
Directional-change intrinsic time is an event-based operator to dissect a data series into a sequence of alternating trends of defined size formula_0.
The directional-change intrinsic time operator was developed for the analysis of financial market data series. It is an alternative methodology to the concept of continuous time. The directional-change intrinsic time operator dissects a data series into a set of drawups and drawdowns, or up and down trends, that alternate with each other. An established trend comes to an end as soon as a trend reversal is observed. A price move that extends a trend is called an overshoot and leads to new price extremes.
Figure 1 provides an example of a price curve dissected by the directional change intrinsic time operator.
The frequency of directional-change intrinsic events reflects the volatility of price changes, conditional on the selected threshold formula_0. The stochastic nature of the underlying process is mirrored in the non-equal number of intrinsic events observed over equal periods of physical time.
The directional-change intrinsic time operator is a noise-filtering technique: it identifies regime shifts, in which trend changes of a particular size occur, and hides price fluctuations that are smaller than the threshold formula_0.
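A minimal Python sketch (not taken from the cited papers; the relative-threshold handling and the sample prices are assumptions) illustrates how such a dissection can be computed:

def directional_changes(prices, delta):
    """Return the indices at which directional-change intrinsic events occur.

    A trend reverses once the price retraces by the relative threshold delta
    from the running extreme of the current trend.
    """
    events, mode = [], "up"      # assumed initial trend direction
    extreme = prices[0]          # running extreme of the current trend
    for i, p in enumerate(prices):
        if mode == "up":
            extreme = max(extreme, p)
            if p <= extreme * (1 - delta):   # drawdown of size delta
                events.append(i)
                mode, extreme = "down", p
        else:
            extreme = min(extreme, p)
            if p >= extreme * (1 + delta):   # drawup of size delta
                events.append(i)
                mode, extreme = "up", p
    return events

print(directional_changes([100, 101, 103, 101.9, 99.8, 100.9, 102.1], 0.01))  # [3, 5]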
Application.
The directional-change intrinsic time operator was used to analyze high-frequency foreign exchange market data and has led to the discovery of a large set of scaling laws that had not previously been observed. The scaling laws identify properties of the underlying data series, such as the size of the expected price overshoot after an intrinsic time event or the number of expected directional changes within a physical time interval or price threshold. For example, one scaling law relates the expected number of directional changes formula_1 observed over a fixed period to the size of the threshold formula_0:
formula_2,
where formula_3 and formula_4 are the scaling law coefficients.
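Given event counts for several thresholds, these coefficients can be estimated by a log-log fit; the following sketch uses assumed counts purely for illustration:

import numpy as np

# Fit N(delta) = (delta / C)**E by least squares in log-log space.
deltas = np.array([0.001, 0.002, 0.005, 0.01, 0.02])
counts = np.array([4100, 2050, 820, 410, 205])   # assumed N(delta) values

E, intercept = np.polyfit(np.log(deltas), np.log(counts), 1)
C = np.exp(-intercept / E)
print(E, C)  # the exponent is negative: larger thresholds yield fewer events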
Other applications of the directional-change intrinsic time in finance include:
The methodology can also be used for applications beyond economics and finance. It can be applied to other scientific domains and opens a new avenue of research in the area of big data.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\delta"
},
{
"math_id": 1,
"text": "N(\\delta)"
},
{
"math_id": 2,
"text": "N(\\delta) = \\left( \\frac{\\delta}{C_{N, DC}} \\right)^{E_{N, DC}}"
},
{
"math_id": 3,
"text": "C_{N, DC}"
},
{
"math_id": 4,
"text": "E_{N, DC}"
}
] |
https://en.wikipedia.org/wiki?curid=63621955
|
63622954
|
Auxiliary normed space
|
In functional analysis, a branch of mathematics, two methods of constructing normed spaces from disks were systematically employed by Alexander Grothendieck to define nuclear operators and nuclear spaces.
One method is used if the disk formula_0 is bounded: in this case, the auxiliary normed space is formula_1 with norm
formula_2
The other method is used if the disk formula_0 is absorbing: in this case, the auxiliary normed space is the quotient space formula_3
If the disk is both bounded and absorbing then the two auxiliary normed spaces are canonically isomorphic (as topological vector spaces and as normed spaces).
Induced by a bounded disk – Banach disks.
Throughout this article, formula_4 will be a real or complex vector space (not necessarily a TVS, yet) and formula_0 will be a disk in formula_5
Seminormed space induced by a disk.
Let formula_4 be a real or complex vector space. For any subset formula_0 of formula_6 the "Minkowski functional" of formula_0 is defined as follows: if formula_7 then formula_8 denotes the trivial map formula_9 and it will be assumed that formula_10 If formula_11 then formula_12 is defined, for every formula_13 by formula_14
Let formula_4 be a real or complex vector space. For any subset formula_0 of formula_4 such that the Minkowski functional formula_15 is a seminorm on formula_16 let formula_17 denote
formula_18
which is called the "seminormed space induced by formula_19" where if formula_15 is a norm then it is called the "normed space induced by formula_20"
Assumption (Topology): formula_21 is endowed with the seminorm topology induced by formula_22 which will be denoted by formula_23 or formula_24
Importantly, this topology stems "entirely" from the set formula_19 the algebraic structure of formula_6 and the usual topology on formula_25 (since formula_15 is defined using only the set formula_0 and scalar multiplication). This justifies the study of Banach disks and is part of the reason why they play an important role in the theory of nuclear operators and nuclear spaces.
The inclusion map formula_26 is called the "canonical map".
Suppose that formula_0 is a disk.
Then formula_27 so that formula_0 is absorbing in formula_16 the linear span of formula_20
The set formula_28 of all positive scalar multiples of formula_0 forms a basis of neighborhoods at the origin for a locally convex topological vector space topology formula_23 on formula_29
Because formula_0 is a disk that is absorbing in formula_1, the Minkowski functional formula_15 is well-defined and forms a seminorm on formula_29
The locally convex topology induced by this seminorm is the topology formula_23 that was defined before.
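As a concrete illustration (not part of the original text), taking the disk to be the closed unit ball of a norm recovers that normed space:

\text{Let } X = \mathbb{R}^n \text{ and } D = \{x \in \mathbb{R}^n : \|x\| \leq 1\}. \text{ Then } \operatorname{span} D = \mathbb{R}^n \text{ and}
p_D(x) = \inf\{r > 0 : x \in r D\} = \inf\{r > 0 : \|x\| \leq r\} = \|x\|,
\text{so that } X_D = \left(\mathbb{R}^n, \|\cdot\|\right) \text{ and } D \text{ is exactly the closed unit ball of } p_D.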
Banach disk definition.
A bounded disk formula_0 in a topological vector space formula_4 such that formula_30 is a Banach space is called a Banach disk, infracomplete, or a bounded completant in formula_5
If it is shown that formula_31 is a Banach space then formula_0 will be a Banach disk in any TVS that contains formula_0 as a bounded subset.
This is because the Minkowski functional formula_15 is defined in purely algebraic terms.
Consequently, the question of whether or not formula_30 forms a Banach space is dependent only on the disk formula_0 and the Minkowski functional formula_22 and not on any particular TVS topology that formula_4 may carry.
Thus the requirement that a Banach disk in a TVS formula_4 be a bounded subset of formula_4 is the only property that ties a Banach disk's topology to the topology of its containing TVS formula_5
Properties of disk induced seminormed spaces.
Bounded disks
The following result explains why Banach disks are required to be bounded.
<templatestyles src="Math_theorem/styles.css" />
Theorem —
If formula_0 is a disk in a topological vector space (TVS) formula_6 then formula_0 is bounded in formula_4 if and only if the inclusion map formula_26 is continuous.
<templatestyles src="Math_proof/styles.css" />Proof
If the disk formula_0 is bounded in the TVS formula_4 then for all neighborhoods formula_32 of the origin in formula_6 there exists some formula_33 such that formula_34
It follows that in this case the topology of formula_30 is finer than the subspace topology that formula_17 inherits from formula_6 which implies that the inclusion map formula_26 is continuous.
Conversely, if formula_4 has a TVS topology such that formula_26 is continuous, then for every neighborhood formula_32 of the origin in formula_4 there exists some formula_33 such that formula_35 which shows that formula_0 is bounded in formula_5
Hausdorffness
The space formula_30 is Hausdorff if and only if formula_15 is a norm, which happens if and only if formula_0 does not contain any non-trivial vector subspace.
In particular, if there exists a Hausdorff TVS topology on formula_4 such that formula_0 is bounded in formula_4 then formula_15 is a norm.
An example where formula_17 is not Hausdorff is obtained by letting formula_36 and letting formula_0 be the formula_37-axis.
Convergence of nets
Suppose that formula_0 is a disk in formula_4 such that formula_17 is Hausdorff and let formula_38 be a net in formula_39
Then formula_40 in formula_17 if and only if there exists a net formula_41 of real numbers such that formula_42 and formula_43 for all formula_44;
moreover, in this case it will be assumed without loss of generality that formula_45 for all formula_46
Relationship between disk-induced spaces
If formula_47 then formula_48 and formula_49 on formula_50 so one may define the following continuous linear map:
If formula_51 and formula_0 are disks in formula_4 with formula_52 then call the inclusion map formula_53 the "canonical inclusion" of formula_54 into formula_39
In particular, the subspace topology that formula_55 inherits from formula_30 is weaker than formula_56's seminorm topology.
The disk as the closed unit ball
The disk formula_0 is a closed subset of formula_30 if and only if formula_0 is the closed unit ball of the seminorm formula_15; that is,
formula_57
If formula_0 is a disk in a vector space formula_4 and if there exists a TVS topology formula_58 on formula_1 such that formula_0 is a closed and bounded subset of formula_59 then formula_0 is the closed unit ball of formula_30 (that is, formula_60 ) (see footnote for proof).
Sufficient conditions for a Banach disk.
The following theorem may be used to establish that formula_30 is a Banach space.
Once this is established, formula_0 will be a Banach disk in any TVS in which formula_0 is bounded.
<templatestyles src="Math_theorem/styles.css" />
Theorem —
Let formula_0 be a disk in a vector space formula_5
If there exists a Hausdorff TVS topology formula_58 on formula_1 such that formula_0 is a bounded sequentially complete subset of formula_63 then formula_30 is a Banach space.
<templatestyles src="Math_proof/styles.css" />Proof
Assume without loss of generality that formula_64 and let formula_65 be the Minkowski functional of formula_20
Since formula_0 is a bounded subset of a Hausdorff TVS, formula_0 does not contain any non-trivial vector subspace, which implies that formula_66 is a norm.
Let formula_23 denote the norm topology on formula_4 induced by formula_66 where since formula_0 is a bounded subset of formula_61 formula_23 is finer than formula_67
Because formula_0 is convex and balanced, for any formula_68
formula_69
Let formula_70 be a Cauchy sequence in formula_62
By replacing formula_71 with a subsequence, we may assume without loss of generality† that for all formula_72
formula_73
This implies that for any formula_74
formula_75
so that in particular, by taking formula_76 it follows that formula_71 is contained in formula_77
Since formula_23 is finer than formula_78 formula_71 is a Cauchy sequence in formula_79
For all formula_80 formula_81 is a Hausdorff sequentially complete subset of formula_79
In particular, this is true for formula_82 so there exists some formula_83 such that formula_84 in formula_79
Since formula_85 for all formula_74 by fixing formula_86 and taking the limit (in formula_87) as formula_88 it follows that formula_89 for each formula_90
This implies that formula_91 as formula_92 which says exactly that formula_84 in formula_62
This shows that formula_93 is complete.
†This assumption is allowed because formula_71 is a Cauchy sequence in a metric space (so the limits of all subsequences are equal) and a sequence in a metric space converges if and only if every subsequence has a sub-subsequence that converges.
Note that even if formula_0 is not a bounded and sequentially complete subset of any Hausdorff TVS, one might still be able to conclude that formula_30 is a Banach space by applying this theorem to some disk formula_94 satisfying
formula_95
because formula_96
The following are consequences of the above theorem:
Suppose that formula_0 is a bounded disk in a TVS formula_5
Properties of Banach disks.
Let formula_4 be a TVS and let formula_0 be a bounded disk in formula_5
If formula_0 is a bounded Banach disk in a Hausdorff locally convex space formula_4 and if formula_102 is a barrel in formula_4 then formula_102 absorbs formula_0 (that is, there is a number formula_33 such that formula_103)
If formula_32 is a convex balanced closed neighborhood of the origin in formula_4 then the collection of all neighborhoods formula_104 where formula_33 ranges over the positive real numbers, induces a topological vector space topology on formula_5 When formula_4 has this topology, it is denoted by formula_105 Since this topology is not necessarily Hausdorff nor complete, the completion of the Hausdorff space formula_106 is denoted by formula_107 so that formula_107 is a complete Hausdorff space and formula_108 is a norm on this space making formula_107 into a Banach space. The polar of formula_109 formula_110 is a weakly compact bounded equicontinuous disk in formula_111 and so is infracomplete.
If formula_4 is a metrizable locally convex TVS then for every bounded subset formula_112 of formula_6 there exists a bounded disk formula_0 in formula_4 such that formula_113 and both formula_4 and formula_17 induce the same subspace topology on formula_114
Induced by a radial disk – quotient.
Suppose that formula_4 is a topological vector space and formula_115 is a convex balanced and radial set.
Then formula_116 is a neighborhood basis at the origin for some locally convex topology formula_117 on formula_5
This TVS topology formula_117 is given by the Minkowski functional formed by formula_118 formula_119 which is a seminorm on formula_4 defined by formula_120
The topology formula_117 is Hausdorff if and only if formula_121 is a norm, or equivalently, if and only if formula_122 and for this it suffices that formula_115 be bounded in formula_5
The topology formula_117 need not be Hausdorff but formula_123 is Hausdorff.
A norm on formula_123 is given by formula_124 where this value is in fact independent of the representative of the equivalence class formula_125 chosen.
The normed space formula_126 is denoted by formula_127 and its completion is denoted by formula_128
If in addition formula_115 is bounded in formula_4 then the seminorm formula_129 is a norm so in particular, formula_130
In this case, we take formula_127 to be the vector space formula_4 instead of formula_131 so that the notation formula_127 is unambiguous (whether formula_127 denotes the space induced by a radial disk or the space induced by a bounded disk).
The quotient topology formula_132 on formula_123 (inherited from formula_4's original topology) is finer (in general, strictly finer) than the norm topology.
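For a concrete illustration (not part of the original text), consider an absorbing strip in the plane; the quotient construction collapses the directions on which the seminorm vanishes:

\text{Let } X = \mathbb{R}^2 \text{ and } V = \{(x, y) : |x| \leq 1\}, \text{ a convex, balanced, radial set. Then}
p_V(x, y) = \inf\{r > 0 : (x, y) \in r V\} = |x|, \qquad p_V^{-1}(0) = \{0\} \times \mathbb{R},
\text{so } X_V = X / p_V^{-1}(0) \cong \mathbb{R} \text{ with norm } \left\|(x, y) + p_V^{-1}(0)\right\| = |x|.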
Canonical maps.
The "canonical map" is the quotient map formula_133 which is continuous when formula_127 has either the norm topology or the quotient topology.
If formula_32 and formula_115 are radial disks such that formula_134 then formula_135 so there is a continuous linear surjective "canonical map" formula_136 defined by sending
formula_137 to the equivalence class formula_138 where one may verify that the definition does not depend on the representative of the equivalence class formula_139 that is chosen.
This canonical map has norm formula_140 and it has a unique continuous linear canonical extension to formula_107 that is denoted by formula_141
Suppose that in addition formula_142 and formula_51 are bounded disks in formula_4 with formula_143 so that formula_144 and the inclusion formula_145 is a continuous linear map.
Let formula_146 formula_147 and formula_145 be the canonical maps.
Then formula_148 and formula_149
Induced by a bounded radial disk.
Suppose that formula_150 is a bounded radial disk.
Since formula_150 is a bounded disk, if formula_151 then we may create the auxiliary normed space formula_21 with norm formula_152; since formula_150 is radial, formula_153
Since formula_150 is a radial disk, if formula_154 then we may create the auxiliary seminormed space formula_123 with the seminorm formula_155; because formula_150 is bounded, this seminorm is a norm and formula_156 so formula_157
Thus, in this case the two auxiliary normed spaces produced by these two different methods result in the same normed space.
Duality.
Suppose that formula_158 is a weakly closed equicontinuous disk in formula_111 (this implies that formula_158 is weakly compact) and let
formula_159
be the polar of formula_160
Because formula_161 by the bipolar theorem, it follows that a continuous linear functional formula_162 belongs to formula_163 if and only if formula_162 belongs to the continuous dual space of formula_164 where formula_165 is the Minkowski functional of formula_32 defined by formula_166
Related concepts.
A disk in a TVS is called "infrabornivorous" if it absorbs all Banach disks.
A linear map between two TVSs is called "infrabounded" if it maps Banach disks to bounded disks.
Fast convergence.
A sequence formula_167 in a TVS formula_4 is said to be "fast convergent" to a point formula_168 if there exists a Banach disk formula_0 such that both formula_37 and the sequence are (eventually) contained in formula_1 and formula_169 in formula_170
Every fast convergent sequence is Mackey convergent.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "D"
},
{
"math_id": 1,
"text": "\\operatorname{span} D"
},
{
"math_id": 2,
"text": "p_D(x) := \\inf_{x \\in r D, r > 0} r."
},
{
"math_id": 3,
"text": "X / p_D^{-1}(0)."
},
{
"math_id": 4,
"text": "X"
},
{
"math_id": 5,
"text": "X."
},
{
"math_id": 6,
"text": "X,"
},
{
"math_id": 7,
"text": "D = \\varnothing"
},
{
"math_id": 8,
"text": "p_{\\varnothing}(x) : \\{0\\} \\to [0, \\infty)"
},
{
"math_id": 9,
"text": "p_{\\varnothing} = 0"
},
{
"math_id": 10,
"text": "\\operatorname{span} \\varnothing = \\{0\\}."
},
{
"math_id": 11,
"text": "D \\neq \\varnothing"
},
{
"math_id": 12,
"text": "p_D : \\operatorname{span} D \\to [0, \\infty)"
},
{
"math_id": 13,
"text": "x \\in \\operatorname{span} D,"
},
{
"math_id": 14,
"text": "p_D (x) := \\inf_{} \\{r : x \\in r D, r > 0\\}."
},
{
"math_id": 15,
"text": "p_D"
},
{
"math_id": 16,
"text": "\\operatorname{span} D,"
},
{
"math_id": 17,
"text": "X_D"
},
{
"math_id": 18,
"text": "X_D := \\left(\\operatorname{span} D, p_D\\right)"
},
{
"math_id": 19,
"text": "D,"
},
{
"math_id": 20,
"text": "D."
},
{
"math_id": 21,
"text": "X_D = \\operatorname{span} D"
},
{
"math_id": 22,
"text": "p_D,"
},
{
"math_id": 23,
"text": "\\tau_D"
},
{
"math_id": 24,
"text": "\\tau_{p_D}"
},
{
"math_id": 25,
"text": "\\R"
},
{
"math_id": 26,
"text": "\\operatorname{In}_D : X_D \\to X"
},
{
"math_id": 27,
"text": " \\operatorname{span} D = \\bigcup_{n=1}^{\\infty} n D"
},
{
"math_id": 28,
"text": "\\{r D : r > 0\\}"
},
{
"math_id": 29,
"text": "\\operatorname{span} D."
},
{
"math_id": 30,
"text": "\\left(X_D, p_D\\right)"
},
{
"math_id": 31,
"text": "\\left(\\operatorname{span} D, p_D\\right)"
},
{
"math_id": 32,
"text": "U"
},
{
"math_id": 33,
"text": "r > 0"
},
{
"math_id": 34,
"text": "r D \\subseteq U \\cap X_D."
},
{
"math_id": 35,
"text": "r D \\subseteq U \\cap X_D,"
},
{
"math_id": 36,
"text": "X = \\R^2"
},
{
"math_id": 37,
"text": "x"
},
{
"math_id": 38,
"text": "x_\\bull = \\left(x_i\\right)_{i \\in I}"
},
{
"math_id": 39,
"text": "X_D."
},
{
"math_id": 40,
"text": "x_\\bull \\to 0"
},
{
"math_id": 41,
"text": "r_\\bull = \\left(r_i\\right)_{i \\in I}"
},
{
"math_id": 42,
"text": "r_\\bull \\to 0"
},
{
"math_id": 43,
"text": "x_i \\in r_i D"
},
{
"math_id": 44,
"text": "i"
},
{
"math_id": 45,
"text": "r_i \\geq 0"
},
{
"math_id": 46,
"text": "i."
},
{
"math_id": 47,
"text": "C \\subseteq D \\subseteq X"
},
{
"math_id": 48,
"text": "\\operatorname{span} C \\subseteq \\operatorname{span} D"
},
{
"math_id": 49,
"text": "p_D \\leq p_C"
},
{
"math_id": 50,
"text": "\\operatorname{span} C,"
},
{
"math_id": 51,
"text": "C"
},
{
"math_id": 52,
"text": "C \\subseteq D"
},
{
"math_id": 53,
"text": "\\operatorname{In}_C^D : X_C \\to X_D"
},
{
"math_id": 54,
"text": "X_C"
},
{
"math_id": 55,
"text": "\\operatorname{span} C"
},
{
"math_id": 56,
"text": "\\left(X_C, p_C\\right)"
},
{
"math_id": 57,
"text": "D = \\left\\{x \\in \\operatorname{span} D : p_D(x) \\leq 1\\right\\}."
},
{
"math_id": 58,
"text": "\\tau"
},
{
"math_id": 59,
"text": "\\left(\\operatorname{span} D, \\tau\\right),"
},
{
"math_id": 60,
"text": "D = \\left\\{x \\in \\operatorname{span} D : p_D(x) \\leq 1\\right\\}"
},
{
"math_id": 61,
"text": "(X, \\tau),"
},
{
"math_id": 62,
"text": "\\left(X_D, p\\right)."
},
{
"math_id": 63,
"text": "(\\operatorname{span} D, \\tau),"
},
{
"math_id": 64,
"text": "X = \\operatorname{span} D"
},
{
"math_id": 65,
"text": "p := p_D"
},
{
"math_id": 66,
"text": "p"
},
{
"math_id": 67,
"text": "\\tau."
},
{
"math_id": 68,
"text": "0 < m < n"
},
{
"math_id": 69,
"text": "2^{-(n+1)} D + \\cdots + 2^{-(m+2)} D = 2^{-(m+1)} \\left(1 - 2^{m-n}\\right) D \\subseteq 2^{-(m+2)} D."
},
{
"math_id": 70,
"text": "x_{\\bull} = \\left(x_i\\right)_{i=1}^{\\infty}"
},
{
"math_id": 71,
"text": "x_{\\bull}"
},
{
"math_id": 72,
"text": "i,"
},
{
"math_id": 73,
"text": "x_{i+1} - x_i \\in \\frac{1}{2^{i+2}} D."
},
{
"math_id": 74,
"text": "0 < m < n,"
},
{
"math_id": 75,
"text": "x_n - x_m = \\left(x_n - x_{n-1}\\right) + \\left(x_{m+1} - x_m\\right) \\in 2^{-(n+1)} D + \\cdots + 2^{-(m+2)} D \\subseteq 2^{-(m+2)} D"
},
{
"math_id": 76,
"text": "m = 1"
},
{
"math_id": 77,
"text": "x_1 + 2^{-3} D."
},
{
"math_id": 78,
"text": "\\tau,"
},
{
"math_id": 79,
"text": "(X, \\tau)."
},
{
"math_id": 80,
"text": "m > 0,"
},
{
"math_id": 81,
"text": "2^{-(m+2)} D"
},
{
"math_id": 82,
"text": "x_1 + 2^{-3} D"
},
{
"math_id": 83,
"text": "x \\in x_1 + 2^{-3} D"
},
{
"math_id": 84,
"text": "x_{\\bull} \\to x"
},
{
"math_id": 85,
"text": "x_n - x_m \\in 2^{-(m+2)} D"
},
{
"math_id": 86,
"text": "m"
},
{
"math_id": 87,
"text": "(X, \\tau)"
},
{
"math_id": 88,
"text": "n \\to \\infty,"
},
{
"math_id": 89,
"text": "x - x_m \\in 2^{-(m+2)} D"
},
{
"math_id": 90,
"text": "m > 0."
},
{
"math_id": 91,
"text": "p\\left(x - x_m\\right) \\to 0"
},
{
"math_id": 92,
"text": "m \\to \\infty,"
},
{
"math_id": 93,
"text": "\\left(X_D, p\\right)"
},
{
"math_id": 94,
"text": "K"
},
{
"math_id": 95,
"text": "\\left\\{x \\in \\operatorname{span} D : p_D(x) < 1\\right\\} \\subseteq K \\subseteq \\left\\{x \\in \\operatorname{span} D : p_D(x) \\leq 1\\right\\}"
},
{
"math_id": 96,
"text": "p_D = p_K."
},
{
"math_id": 97,
"text": "L : X \\to Y"
},
{
"math_id": 98,
"text": "B \\subseteq X"
},
{
"math_id": 99,
"text": "L(B)"
},
{
"math_id": 100,
"text": "L\\big\\vert_{X_B} : X_B \\to L\\left(X_B\\right)"
},
{
"math_id": 101,
"text": "Y_{L(B)} \\cong X_B / \\left(X_B \\cap \\operatorname{ker} L\\right)."
},
{
"math_id": 102,
"text": "T"
},
{
"math_id": 103,
"text": "D \\subseteq r T."
},
{
"math_id": 104,
"text": "r U,"
},
{
"math_id": 105,
"text": "X_U."
},
{
"math_id": 106,
"text": "X / p_U^{-1}(0)"
},
{
"math_id": 107,
"text": "\\overline{X_U}"
},
{
"math_id": 108,
"text": "p_U(x) := \\inf_{x \\in r U, r > 0} r"
},
{
"math_id": 109,
"text": "U,"
},
{
"math_id": 110,
"text": "U^{\\circ},"
},
{
"math_id": 111,
"text": "X^{\\prime}"
},
{
"math_id": 112,
"text": "B"
},
{
"math_id": 113,
"text": "B \\subseteq X_D,"
},
{
"math_id": 114,
"text": "B."
},
{
"math_id": 115,
"text": "V"
},
{
"math_id": 116,
"text": "\\left\\{\\tfrac{1}{n} V : n = 1, 2, \\ldots\\right\\}"
},
{
"math_id": 117,
"text": "\\tau_V"
},
{
"math_id": 118,
"text": "V,"
},
{
"math_id": 119,
"text": "p_V : X \\to \\R,"
},
{
"math_id": 120,
"text": "p_V(x) := \\inf_{x \\in r V, r > 0} r."
},
{
"math_id": 121,
"text": "p_V"
},
{
"math_id": 122,
"text": "X / p_V^{-1}(0) = \\{0\\}"
},
{
"math_id": 123,
"text": "X / p_V^{-1}(0)"
},
{
"math_id": 124,
"text": "\\left\\|x + X / p_V^{-1}(0)\\right\\| := p_V(x),"
},
{
"math_id": 125,
"text": "x + X / p_V^{-1}(0)"
},
{
"math_id": 126,
"text": "\\left(X / p_V^{-1}(0), \\| \\cdot \\|\\right)"
},
{
"math_id": 127,
"text": "X_V"
},
{
"math_id": 128,
"text": "\\overline{X_V}."
},
{
"math_id": 129,
"text": "p_V : X \\to \\R"
},
{
"math_id": 130,
"text": "p_V^{-1}(0) = \\{0\\}."
},
{
"math_id": 131,
"text": "X / \\{0\\}"
},
{
"math_id": 132,
"text": "\\tau_Q"
},
{
"math_id": 133,
"text": "q_V : X \\to X_V = X / p_V^{-1}(0),"
},
{
"math_id": 134,
"text": "U \\subseteq V"
},
{
"math_id": 135,
"text": "p_U^{-1}(0) \\subseteq p_V^{-1}(0)"
},
{
"math_id": 136,
"text": "q_{V,U} : X / p_U^{-1}(0) \\to X / p_V^{-1}(0) = X_V"
},
{
"math_id": 137,
"text": "x + p_U^{-1}(0) \\in X_U = X / p_U^{-1}(0)"
},
{
"math_id": 138,
"text": "x + p_V^{-1}(0),"
},
{
"math_id": 139,
"text": "x + p_U^{-1}(0)"
},
{
"math_id": 140,
"text": "\\,\\leq 1"
},
{
"math_id": 141,
"text": "\\overline{g_{V,U}} : \\overline{X_U} \\to \\overline{X_V}."
},
{
"math_id": 142,
"text": "B \\neq \\varnothing"
},
{
"math_id": 143,
"text": "B \\subseteq C"
},
{
"math_id": 144,
"text": "X_B \\subseteq X_C"
},
{
"math_id": 145,
"text": "\\operatorname{In}_B^C : X_B \\to X_C"
},
{
"math_id": 146,
"text": "\\operatorname{In}_B : X_B \\to X,"
},
{
"math_id": 147,
"text": "\\operatorname{In}_C : X_C \\to X,"
},
{
"math_id": 148,
"text": "\\operatorname{In}_C = \\operatorname{In}_B^C \\circ \\operatorname{In}_C : X_B \\to X_C"
},
{
"math_id": 149,
"text": "q_V = q_{V,U} \\circ q_U."
},
{
"math_id": 150,
"text": "S"
},
{
"math_id": 151,
"text": "D := S"
},
{
"math_id": 152,
"text": "p_D(x) := \\inf_{x \\in r D, r > 0} r"
},
{
"math_id": 153,
"text": "X_S = X."
},
{
"math_id": 154,
"text": "V := S"
},
{
"math_id": 155,
"text": "p_V(x) := \\inf_{x \\in r V, r > 0} r"
},
{
"math_id": 156,
"text": "p_V^{-1}(0) = \\{0\\}"
},
{
"math_id": 157,
"text": "X / p_V^{-1}(0) = X / \\{0\\} = X."
},
{
"math_id": 158,
"text": "H"
},
{
"math_id": 159,
"text": "U := H^{\\circ} = \\{x \\in X : |h(x)| \\leq 1 \\text{ for all } h \\in H\\}"
},
{
"math_id": 160,
"text": "H."
},
{
"math_id": 161,
"text": "U^\\circ = H^{\\circ\\circ} = H"
},
{
"math_id": 162,
"text": "f"
},
{
"math_id": 163,
"text": "X^{\\prime}_H = \\operatorname{span} H"
},
{
"math_id": 164,
"text": "\\left(X, p_U\\right),"
},
{
"math_id": 165,
"text": "p_U"
},
{
"math_id": 166,
"text": "p_U(x) := \\inf_{x \\in r U, r > 0} r."
},
{
"math_id": 167,
"text": "x_\\bull = \\left(x_i\\right)_{i=1}^\\infty"
},
{
"math_id": 168,
"text": "x \\in X"
},
{
"math_id": 169,
"text": "x_\\bull \\to x"
},
{
"math_id": 170,
"text": "\\left(X_D, p_D\\right)."
}
] |
https://en.wikipedia.org/wiki?curid=63622954
|
6363696
|
Kramers–Heisenberg formula
|
The Kramers–Heisenberg dispersion formula is an expression for the cross section for scattering of a photon by an atomic electron. It was derived before the advent of quantum mechanics by Hendrik Kramers and Werner Heisenberg in 1925, based on the correspondence principle applied to the classical dispersion formula for light. The quantum mechanical derivation was given by Paul Dirac in 1927.
The Kramers–Heisenberg formula was an important achievement when it was published, explaining the notion of "negative absorption" (stimulated emission), the Thomas–Reiche–Kuhn sum rule, and inelastic scattering — where the energy of the scattered photon may be larger or smaller than that of the incident photon — thereby anticipating the discovery of the Raman effect.
Equation.
The Kramers–Heisenberg (KH) formula for second order processes is
formula_0
It represents the probability of the emission of photons of energy formula_1 into the solid angle formula_2 (centered on the formula_3 direction) after the excitation of the system with photons of energy formula_4. Here formula_5 are the initial, intermediate and final states of the system, with energies formula_6 respectively; the delta function ensures energy conservation during the whole process. formula_7 is the relevant transition operator, and formula_8 is the intrinsic linewidth of the intermediate state.
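As a toy illustration (not from the original derivations; all energies, matrix elements and linewidths below are assumed values), the resonant sum over intermediate states can be evaluated numerically, reproducing the characteristic resonance when the incident photon energy matches an intermediate-state energy:

import numpy as np

# Toy model: two intermediate states, one final state. Only the modulus-
# squared sum over intermediate states is evaluated; the prefactor and the
# energy-conserving delta function are omitted. Energies in eV.
E_i = 0.0
E_n = np.array([10.0, 12.0])          # intermediate-state energies (assumed)
Gamma = np.array([0.5, 0.5])          # intrinsic linewidths (assumed)
T_ni = np.array([1.0, 0.6])           # <n|T|i> matrix elements (assumed)
T_fn = np.array([0.8, 1.0])           # <f|T^dagger|n> matrix elements (assumed)

w = np.linspace(8.0, 14.0, 601)       # incident photon energy scan
amplitude = np.sum(
    (T_fn * T_ni)[:, None] / (E_i - E_n[:, None] + w + 0.5j * Gamma[:, None]),
    axis=0)
intensity = np.abs(amplitude) ** 2
print(w[np.argmax(intensity)])        # close to 10 eV, the lower resonance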
|
[
{
"math_id": 0,
"text": " \\frac{d^2 \\sigma}{d\\Omega_{k^\\prime}d(\\hbar \\omega_k^\\prime)}=\\frac{\\omega_k^\\prime}{\\omega_k}\\sum_{|f\\rangle}\\left | \\sum_{|n\\rangle} \\frac{\\langle f | T^\\dagger | n \\rangle \\langle n | T | i \\rangle}{E_i - E_n + \\hbar \\omega_k + i \\frac{\\Gamma_n}{2}}\\right |^2 \\delta (E_i - E_f + \\hbar \\omega_k - \\hbar \\omega_k^\\prime)"
},
{
"math_id": 1,
"text": " \\hbar \\omega_k^\\prime "
},
{
"math_id": 2,
"text": "d\\Omega_{k^\\prime}"
},
{
"math_id": 3,
"text": "k^\\prime"
},
{
"math_id": 4,
"text": " \\hbar \\omega_k"
},
{
"math_id": 5,
"text": "|i\\rangle, |n\\rangle, |f\\rangle"
},
{
"math_id": 6,
"text": "E_i , E_n , E_f"
},
{
"math_id": 7,
"text": "T"
},
{
"math_id": 8,
"text": "\\Gamma_n "
}
] |
https://en.wikipedia.org/wiki?curid=6363696
|
63638453
|
Moy–Prasad filtration
|
In mathematics, the Moy–Prasad filtration is a family of filtrations of "p"-adic reductive groups and their Lie algebras, named after Allen Moy and Gopal Prasad. The family is parameterized by the Bruhat–Tits building; that is, each point of the building gives a different filtration. Alternatively, since the initial term in each filtration at a point of the building is the parahoric subgroup for that point, the Moy–Prasad filtration can be viewed as a filtration of a parahoric subgroup of a reductive group.
The chief application of the Moy–Prasad filtration is to the representation theory of "p"-adic groups, where it can be used to define a certain rational number called the depth of a representation. The representations of depth "r" can be better understood by studying the "r"th Moy–Prasad subgroups. This information then leads to a better understanding of the overall structure of the representations, and that understanding in turn has applications to other areas of mathematics, such as number theory via the Langlands program.
For a detailed exposition of Moy–Prasad filtrations and the associated semi-stable points, see Chapter 13 of the book "Bruhat–Tits theory: a new approach" by Tasho Kaletha and Gopal Prasad.
History.
In their foundational work on the theory of buildings, Bruhat and Tits defined subgroups associated to concave functions of the root system. These subgroups are a special case of the Moy–Prasad subgroups, defined when the group is split. The main innovations of Moy and Prasad were to generalize Bruhat–Tits's construction to quasi-split groups, in particular tori, and to use the subgroups to study the representation theory of the ambient group.
Examples.
The following examples use the "p"-adic rational numbers formula_0 and the "p"-adic integers formula_1. A reader unfamiliar with these rings may instead replace formula_0 by the rational numbers formula_2 and formula_1 by the integers formula_3 without losing the main idea.
Multiplicative group.
The simplest example of a "p"-adic reductive group is formula_4, the multiplicative group of "p"-adic units. Since formula_4 is abelian, it has a unique parahoric subgroup, formula_5. The Moy–Prasad subgroups of formula_5 are the higher unit groups formula_6, where for simplicity formula_7 is a positive integer: formula_8 The Lie algebra of formula_4 is formula_0, and its Moy–Prasad subalgebras are the nonzero ideals of formula_1: formula_9 More generally, if formula_7 is a positive real number then we use the floor function to define the formula_7th Moy–Prasad subgroup and subalgebra: formula_10 This example illustrates the general phenomenon that although the Moy–Prasad filtration is indexed by the nonnegative real numbers, the filtration jumps only on a discrete, periodic subset, in this case the natural numbers. In particular, it is usually the case that the formula_7th and formula_11th Moy–Prasad subgroups are equal if formula_11 is only slightly larger than formula_7.
General linear group.
Another important example of a "p"-adic reductive group is the general linear group formula_12; this example generalizes the previous one because formula_13. Since formula_12 is nonabelian (when formula_14), it has infinitely many parahoric subgroups. One particular parahoric subgroup is formula_15. The Moy–Prasad subgroups of formula_15 are the subgroups of elements equal to the identity matrix formula_16 modulo high powers of formula_17. Specifically, when formula_7 is a positive integer we define formula_18 where formula_19 is the algebra of "n × n" matrices with coefficients in formula_1. The Lie algebra of formula_12 is formula_20, and its Moy–Prasad subalgebras are the spaces of matrices equal to the zero matrix modulo high powers of formula_17; when formula_7 is a positive integer we define formula_21 Finally, as before, if formula_7 is a positive real number then we use the floor function to define the formula_7th Moy–Prasad subgroup and subalgebra: formula_22 In this example, the Moy–Prasad groups would more commonly be denoted by formula_23 instead of formula_24, where formula_25 is a point of the building of formula_12 whose corresponding parahoric subgroup is formula_26
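A small Python sketch (not from the article; it uses integer matrices as stand-ins for matrices over the "p"-adic integers) checks membership in these congruence subgroups:

import numpy as np

def in_moy_prasad(g, p, r):
    """Check whether g (assumed to lie in GL_n(Z_p)) is congruent to the
    identity modulo p**floor(r), following the convention above."""
    m = int(np.floor(r))
    if m <= 0:
        return True                  # the parahoric subgroup GL_n(Z_p) itself
    diff = g - np.eye(g.shape[0], dtype=int)
    return bool(np.all(diff % p**m == 0))

p = 5
g = np.array([[26, 125], [0, 1]])    # congruent to the identity mod 5**2
print(in_moy_prasad(g, p, 2.0))      # True
print(in_moy_prasad(g, p, 3.0))      # False: 25 is not divisible by 125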
Properties.
Although the Moy–Prasad filtration is commonly used to study the representation theory of "p"-adic groups, one can construct Moy–Prasad subgroups over any Henselian, discretely valued field formula_27, not just over a nonarchimedean local field. In this and subsequent sections, we will therefore assume that the base field formula_27 is Henselian and discretely valued, with ring of integers formula_28. Nonetheless, the reader is welcome to assume for simplicity that formula_29, so that formula_30.
Let formula_31 be a reductive formula_27-group, let formula_32, and let formula_25 be a point of the extended Bruhat–Tits building of formula_31. The formula_7th Moy–Prasad subgroup of formula_33 at formula_25 is denoted by formula_34. Similarly, the formula_7th Moy–Prasad Lie subalgebra of formula_35 at formula_25 is denoted by formula_36; it is a free formula_28-module spanning formula_35, or in other words, an formula_28-lattice in formula_35. (In fact, the Lie algebra formula_36 can also be defined when formula_37, though the group formula_34 cannot.)
Perhaps the most basic property of the Moy–Prasad filtration is that it is decreasing: if formula_38 then formula_39 and formula_40. It is standard to then define the subgroup and subalgebra formula_41 This convention is just a notational shortcut because for any formula_42, there is an formula_43 such that formula_44 and formula_45.
The Moy–Prasad filtration satisfies the following additional properties.
Under certain technical assumptions on formula_31, an additional important property is satisfied. By the commutator subgroup property, the quotient formula_63 is abelian if formula_64. In this case there is a canonical isomorphism formula_65, called the Moy–Prasad isomorphism. The technical assumption needed for the Moy–Prasad isomorphism to exist is that formula_31 be tame, meaning that formula_31 splits over a tamely ramified extension of the base field formula_27. If this assumption is violated then formula_66 and formula_63 are not necessarily isomorphic.
Depth of a representation.
The Moy–Prasad filtration can be used to define an important numerical invariant of a smooth representation formula_67 of formula_33, the depth of the representation: this is the smallest number formula_7 such that, for some point formula_25 in the building of formula_31, there is a nonzero vector of formula_68 fixed by formula_69.
In a sequel to the paper defining their filtration, Moy and Prasad proved a structure theorem for depth-zero supercuspidal representations. Let formula_25 be a point in a minimal facet of the building of formula_31; that is, the parahoric subgroup formula_54 is a maximal parahoric subgroup. The quotient formula_53 is a finite group of Lie type. Let formula_70 be the inflation to formula_54 of a representation of this quotient that is cuspidal in the sense of Harish-Chandra (see also Deligne–Lusztig theory). The stabilizer formula_71 of formula_25 in formula_33 contains the parahoric group formula_54 as a finite-index normal subgroup. Let formula_72 be an irreducible representation of formula_71 whose restriction to formula_54 contains formula_70 as a subrepresentation. Then the compact induction of formula_72 to formula_33 is a depth-zero supercuspidal representation. Moreover, every depth-zero supercuspidal representation is isomorphic to one of this form.
In the tame case, the local Langlands correspondence is expected to preserve depth, where the depth of an L-parameter is defined using the upper numbering filtration on the Weil group.
Construction.
Although we defined formula_25 to lie in the extended building of formula_31, it turns out that the Moy–Prasad subgroup formula_34 depends only on the image of formula_25 in the reduced building, so that nothing is lost by thinking of formula_25 as a point in the reduced building.
Our description of the construction follows Yu's article on smooth models.
Tori.
Since algebraic tori are a particular class of reductive groups, the theory of the Moy–Prasad filtration applies to them as well. It turns out, however, that the construction of the Moy–Prasad subgroups for a general reductive group relies on the construction for tori, so we begin by discussing the case where formula_73 is a torus. Since the reduced building of a torus is a point there is only one choice for formula_25, and so we will suppress formula_25 from the notation and write formula_74.
First, consider the special case where formula_75 is the Weil restriction of formula_76 along a finite separable extension formula_77 of formula_27, so that formula_78. In this case, we define formula_79 as the set of formula_80 such that formula_81, where formula_82 is the unique extension of the valuation of formula_27 to formula_77.
A torus is said to be induced if it is the direct product of finitely many tori of the form considered in the previous paragraph. The formula_7th Moy–Prasad subgroup of an induced torus is defined as the product of the formula_7th Moy–Prasad subgroup of these factors.
Second, consider the case where formula_83 but formula_75 is an arbitrary torus. Here the Moy–Prasad subgroup formula_84 is defined as the integral points of the Néron lft-model of formula_75. This definition agrees with the previously given one when formula_75 is an induced torus.
It turns out that every torus can be embedded in an induced torus. To define the Moy–Prasad subgroups of a general torus formula_75, then, we choose an embedding of formula_75 in an induced torus formula_85 and define formula_86. This construction is independent of the choice of induced torus and embedding.
Reductive groups.
For simplicity, we will first outline the construction of the Moy–Prasad subgroup formula_34 in the case where formula_31 is split. After, we will comment on the general definition.
Let formula_75 be a maximal split torus of formula_31 whose apartment contains formula_25, and let formula_87 be the root system of formula_31 with respect to formula_75.
For each formula_88, let formula_89 be the root subgroup of formula_31 with respect to formula_90. As an abstract group formula_89 is isomorphic to formula_91, though there is no canonical isomorphism. The point formula_25 determines, for each root formula_90, an additive valuation formula_92. We define formula_93.
Finally, the Moy–Prasad subgroup formula_34 is defined as the subgroup of formula_33 generated by the subgroups formula_94 for formula_88 and the subgroup formula_79.
If formula_31 is not split, then the Moy–Prasad subgroup formula_34 is defined by unramified descent from the quasi-split case, a standard trick in Bruhat–Tits theory. More specifically, one first generalizes the definition of the Moy–Prasad subgroups given above, which applies when formula_31 is split, to the case where formula_31 is only quasi-split, using the relative root system. From here, the Moy–Prasad subgroup can be defined for an arbitrary formula_31 by passing to the maximal unramified extension formula_95 of formula_27, a field over which every reductive group, and in particular formula_31, is quasi-split, and then taking the fixed points of this Moy–Prasad group under the Galois group of formula_95 over formula_27.
Group schemes.
The formula_27-group formula_31 carries much more structure than the group formula_33 of rational points: the former is an algebraic variety whereas the second is only an abstract group. For this reason, there are many technical advantages to working not only with the abstract group formula_33, but also the variety formula_33. Similarly, although we described formula_34 as an abstract group, a certain subgroup of formula_33, it is desirable for formula_34 to be the group of integral points of a group scheme formula_96 defined over the ring of integers, so that formula_97. In fact, it is possible to construct such a group scheme formula_96.
Lie algebras.
Let formula_35 be the Lie algebra of formula_31. In a similar procedure as for reductive groups, namely, by defining Moy–Prasad filtrations on the Lie algebra of a torus and the Lie algebra of a root group, one can define the Moy–Prasad Lie algebras formula_50 of formula_35; they are free formula_28-modules, that is, formula_28-lattices in the formula_27-vector space formula_35. When formula_32, it turns out that formula_50 is just the Lie algebra of the formula_28-group scheme formula_96.
Indexing set.
We have defined the Moy–Prasad filtration at the point formula_25 to be indexed by the set formula_98 of real numbers. It is common in the subject to extend the indexing set slightly, to the set formula_99 consisting of formula_98 and formal symbols formula_100 with formula_101. The element formula_100 is thought of as being infinitesimally larger than formula_7, and the filtration is extended to this case by defining formula_102. Since the valuation on formula_27 is discrete, there is formula_103 such that formula_45.
Citations.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbb{Q}_p"
},
{
"math_id": 1,
"text": "\\mathbb{Z}_p"
},
{
"math_id": 2,
"text": "\\mathbb{Q}"
},
{
"math_id": 3,
"text": "\\mathbb{Z}"
},
{
"math_id": 4,
"text": "\\mathbb{Q}_p^\\times"
},
{
"math_id": 5,
"text": "\\mathbb{Z}_p^\\times"
},
{
"math_id": 6,
"text": "U^{(r)}"
},
{
"math_id": 7,
"text": "r"
},
{
"math_id": 8,
"text": "(\\mathbb{Z}_p^\\times)_r=1+(p\\,\\mathbb{Z}_p)^r = \\{u\\in\\mathbb{Z}_p^\\times:u\\equiv1\\bmod{p^r}\\}."
},
{
"math_id": 9,
"text": "(\\mathbb{Z}_p)_r=(p\\,\\mathbb{Z}_p)^r = \\{p^ra : a\\in\\mathbb{Z}_p\\}."
},
{
"math_id": 10,
"text": "(\\mathbb{Z}_p^\\times)_r := (\\mathbb{Z}_p^\\times)_{\\lfloor r\\rfloor},\\qquad(\\mathbb{Z}_p)_r := (\\mathbb{Z}_p)_{\\lfloor r\\rfloor}"
},
{
"math_id": 11,
"text": "s"
},
{
"math_id": 12,
"text": "\\text{GL}_n(\\mathbb{Q}_p)"
},
{
"math_id": 13,
"text": "\\text{GL}_1(\\mathbb{Q}_p) = \\mathbb{Q}_p^\\times"
},
{
"math_id": 14,
"text": "n\\geq2"
},
{
"math_id": 15,
"text": "\\text{GL}_n(\\mathbb{Z}_p)"
},
{
"math_id": 16,
"text": "1"
},
{
"math_id": 17,
"text": "p"
},
{
"math_id": 18,
"text": "\\text{GL}_n(\\mathbb{Z}_p)_r=1+(p\\,\\text{M}_n(\\mathbb{Z}_p))^r = \\{u\\in\\text{M}_n(\\mathbb{Z}_p):u\\equiv1\\bmod{p^r}\\}."
},
{
"math_id": 19,
"text": "\\text{M}_n(\\mathbb{Z}_p)"
},
{
"math_id": 20,
"text": "\\text{M}_n(\\mathbb{Q}_p)"
},
{
"math_id": 21,
"text": "\\text{M}_n(\\mathbb{Z}_p)_r=(p\\,\\text{M}_n(\\mathbb{Z}_p))^r = \\{u\\in\\text{M}_n(\\mathbb{Z}_p):u\\equiv0\\bmod{p^r}\\}."
},
{
"math_id": 22,
"text": "\\text{GL}_n(\\mathbb{Z}_p)_r := \\text{GL}_n(\\mathbb{Z}_p)_{\\lfloor r\\rfloor},\\qquad\\text{M}_n(\\mathbb{Z}_p)_r := \\text{M}_n(\\mathbb{Z}_p)_{\\lfloor r\\rfloor}"
},
{
"math_id": 23,
"text": "\\text{GL}_n(\\mathbb{Q}_p)_{x,r}"
},
{
"math_id": 24,
"text": "\\text{GL}_n(\\mathbb{Z}_p)_r"
},
{
"math_id": 25,
"text": "x"
},
{
"math_id": 26,
"text": "\\text{GL}_n(\\mathbb{Z}_p)."
},
{
"math_id": 27,
"text": "k"
},
{
"math_id": 28,
"text": "\\mathcal{O}_k"
},
{
"math_id": 29,
"text": "k=\\mathbb{Q}_p"
},
{
"math_id": 30,
"text": "\\mathcal{O}_k=\\mathbb{Z}_p"
},
{
"math_id": 31,
"text": "G"
},
{
"math_id": 32,
"text": "r\\geq0"
},
{
"math_id": 33,
"text": "G(k)"
},
{
"math_id": 34,
"text": "G(k)_{x,r}"
},
{
"math_id": 35,
"text": "\\mathfrak{g}"
},
{
"math_id": 36,
"text": "\\mathfrak{g}_{x,r}\n"
},
{
"math_id": 37,
"text": "r<0"
},
{
"math_id": 38,
"text": "r\\leq s\n"
},
{
"math_id": 39,
"text": "\\mathfrak{g}_{x,r}\\supseteq\\mathfrak{g}_{x,s}\n"
},
{
"math_id": 40,
"text": "G(k)_{x,r}\\supseteq G(k)_{x,s}"
},
{
"math_id": 41,
"text": "G(k)_{x,r+}:=\\bigcup_{s>r} G(k)_{x,s},\n\\qquad\n\\mathfrak{g}_{x,r+}:=\\bigcup_{s>r} \\mathfrak{g}_{x,s}."
},
{
"math_id": 42,
"text": "r\n"
},
{
"math_id": 43,
"text": "\\varepsilon>0\n"
},
{
"math_id": 44,
"text": "\\mathfrak{g}_{x,r+}=\\mathfrak{g}_{x,r+\\varepsilon}"
},
{
"math_id": 45,
"text": "G(k)_{x,r+}=G(k)_{x,r+\\varepsilon}"
},
{
"math_id": 46,
"text": "G(k)_{x,r+}\\neq G(k)_{x,r}"
},
{
"math_id": 47,
"text": "r\\leq s"
},
{
"math_id": 48,
"text": "G(k)_{x,s}"
},
{
"math_id": 49,
"text": "\\mathfrak{g}_{x,s}"
},
{
"math_id": 50,
"text": "\\mathfrak{g}_{x,r}"
},
{
"math_id": 51,
"text": "G(k)_{x,r:s}:=G(k)_{x,r}/G(k)_{x,s}"
},
{
"math_id": 52,
"text": "\\mathfrak{g}_{x,r:s}:=\\mathfrak{g}_{x,r}/\\mathfrak{g}_{x,s}"
},
{
"math_id": 53,
"text": "G(k)_{x,0:0+}"
},
{
"math_id": 54,
"text": "G(k)_{x,0}"
},
{
"math_id": 55,
"text": "[G(k)_{x,r},G(k)_{x,s}]\\subseteq G(k)_{x,r+s}"
},
{
"math_id": 56,
"text": "[\\mathfrak{g}_{x,r},\\mathfrak{g}_{x,s}]\\subseteq\\mathfrak{g}_{x,r+s}"
},
{
"math_id": 57,
"text": "\\theta"
},
{
"math_id": 58,
"text": "\\theta(G(k)_{x,r})=G(k)_{\\theta(x),r}"
},
{
"math_id": 59,
"text": "\\text{d}\\theta(\\mathfrak{g}_{x,r}) = \\mathfrak{g}_{\\theta(x),r}"
},
{
"math_id": 60,
"text": "\\text{d}\\theta"
},
{
"math_id": 61,
"text": "\\varpi"
},
{
"math_id": 62,
"text": "\\varpi\\mathfrak{g}_{x,r}=\\mathfrak{g}_{x,r+1}"
},
{
"math_id": 63,
"text": "G(k)_{x,r:s}"
},
{
"math_id": 64,
"text": "r\\leq s\\leq 2r"
},
{
"math_id": 65,
"text": "\\mathfrak{g}_{x,r:s}\\cong G(k)_{x,r:s}"
},
{
"math_id": 66,
"text": "\\mathfrak{g}_{x,r:s}"
},
{
"math_id": 67,
"text": "(\\pi,V)"
},
{
"math_id": 68,
"text": "V"
},
{
"math_id": 69,
"text": "G(k)_{x,r+}"
},
{
"math_id": 70,
"text": "\\tau"
},
{
"math_id": 71,
"text": "G(k)_x"
},
{
"math_id": 72,
"text": "\\rho"
},
{
"math_id": 73,
"text": "G=T"
},
{
"math_id": 74,
"text": "T(k)_r:=T(k)_{x,r}"
},
{
"math_id": 75,
"text": "T"
},
{
"math_id": 76,
"text": "\\mathbb{G}_\\text{m}"
},
{
"math_id": 77,
"text": "\\ell"
},
{
"math_id": 78,
"text": "T(k)=\\ell^\\times"
},
{
"math_id": 79,
"text": "T(k)_r"
},
{
"math_id": 80,
"text": "a\\in\\ell^\\times"
},
{
"math_id": 81,
"text": "\\text{val}_k(x-1)\\geq r"
},
{
"math_id": 82,
"text": "\\text{val}_k:\\ell\\to\\mathbb{R}"
},
{
"math_id": 83,
"text": "r=0"
},
{
"math_id": 84,
"text": "T(k)_0"
},
{
"math_id": 85,
"text": "S"
},
{
"math_id": 86,
"text": "T(k)_r:=T(k)_0\\cap S(k)_r"
},
{
"math_id": 87,
"text": "\\Phi"
},
{
"math_id": 88,
"text": "\\alpha\\in\\Phi"
},
{
"math_id": 89,
"text": "U_\\alpha"
},
{
"math_id": 90,
"text": "\\alpha"
},
{
"math_id": 91,
"text": "\\mathbb{G}_\\text{a}"
},
{
"math_id": 92,
"text": "v_{\\alpha,x}:U_\\alpha(k)\\to\\mathbb{R}"
},
{
"math_id": 93,
"text": "U_\\alpha(k)_{x,r}:=\\{u\\in U_\\alpha(k) : v_{\\alpha,x}(u)\\geq r\\}"
},
{
"math_id": 94,
"text": "U_\\alpha(k)_{x,r}"
},
{
"math_id": 95,
"text": "k^\\text{nr}"
},
{
"math_id": 96,
"text": "G_{x,r}"
},
{
"math_id": 97,
"text": "G(k)_{x,r}=G_{x,r}(\\mathcal{O}_k)"
},
{
"math_id": 98,
"text": "\\mathbb{R}"
},
{
"math_id": 99,
"text": "\\widetilde{\\mathbb{R}}"
},
{
"math_id": 100,
"text": "r+"
},
{
"math_id": 101,
"text": "r\\in\\mathbb{R}"
},
{
"math_id": 102,
"text": "G(k)_{x,r+}:=\\bigcup_{s>r}G(k)_{x,s}"
},
{
"math_id": 103,
"text": "\\varepsilon>0"
}
] |
https://en.wikipedia.org/wiki?curid=63638453
|
6363853
|
Fluid conductance
|
Fluid conductance is a measure of how effectively fluids are transported through a medium or a region. The concept is particularly useful in cases in which the amount of fluid transported is linearly related to whatever is driving the transport.
For example, the concept is useful in the flow of liquids through permeable media, especially in hydrology in relation to river and lake bottoms. In this case, it is an application of intrinsic permeability to a unit of material with a defined area and thickness, and the magnitude of conductance affects the rate of groundwater recharge or interaction with groundwater. This parameter is often used in such computer modelling codes as MODFLOW.
Conductance is also a useful concept in the design and study of vacuum systems. Such systems consist of vacuum chambers and the various flow passages and pumps that connect and maintain them. These systems are common in physical science laboratories and in many laboratory instruments, such as mass spectrometers. Typically, the pressures inside these devices are low enough that the gas inside them is rarefied, meaning here that the mean free path of constituent atoms and molecules is a non-negligible fraction of the dimensions of orifices and passageways. Under those conditions, the total mass flow through an orifice or conduit is typically linearly proportional to the pressure drop, so that it is convenient to quantify mass flow in terms of the fluid conductance of the constituent components.
Example from hydrology.
For example, the conductance of water through a stream-bed is:
formula_0
where
formula_1 is the conductance of the stream-bed ([L2T−1]; m2s−1 or ft2day−1)
formula_2 is the hydraulic conductivity of the stream-bed materials ([LT−1]; m·s−1 or ft·day−1)
formula_3 is the area of the stream-bed ([L2]; m2 or ft2)
formula_4 is the thickness of the stream-bed sediments ([L]; m or ft)
The volumetric discharge through the stream-bed can be calculated if the difference in hydraulic head is known:
formula_5
where
formula_6 is the volumetric discharge through the stream-bed ([L3T−1]; m3s−1 or ft3day−1)
formula_7 is the hydraulic head of the river (stage elevation) ([L]; m or ft)
formula_8 is the hydraulic head of the aquifer below the stream-bed ([L]; m or ft)
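A minimal sketch of these two formulas in Python, with hypothetical parameter values chosen only for illustration:

```python
def streambed_conductance(K, A, b):
    # C_b = K * A / b
    return K * A / b

def streambed_discharge(C_b, h_b, h):
    # Q_b = C_b * (h_b - h); positive when the river recharges the aquifer
    return C_b * (h_b - h)

# hypothetical values: K in m/day, A in m^2, b in m, heads in m
C_b = streambed_conductance(K=0.5, A=2000.0, b=1.0)   # m^2/day
print(streambed_discharge(C_b, h_b=102.0, h=100.5))    # discharge in m^3/day
```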
Example from vacuum technology.
The defining equation for conductance in vacuum technology is
formula_9
Here
formula_10 is the total throughput, usually by convention not measured as a mass throughput but rather as a pressure throughput and having units of pressure times volume per second,
formula_11 and formula_12 are the upstream and downstream pressures,
formula_13 is the conductance, having units of volume/time, which are the same units as pumping speed for a vacuum pump.
This definition proves useful in vacuum systems because under conditions of rarefied gas flow, the conductance of various structures is usually constant, and the overall conductance of a complex network of pipes, orifices and other conveyances can be found in direct analogy to a resistive electrical circuit.
For example, the conductance of a simple orifice is
formula_14 liters/sec, where formula_15 is measured in centimeters.
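A minimal sketch of the circuit analogy described above, combining the orifice formula with the usual series/parallel combination rules (reciprocals add in series, values add in parallel); the orifice diameters are hypothetical:

```python
def orifice_conductance(d_cm):
    # C = 15 * d^2 liters/second for a simple orifice of diameter d in cm,
    # as quoted above.
    return 15.0 * d_cm ** 2

def series(*conductances):
    # Conductances in series combine like parallel electrical resistors.
    return 1.0 / sum(1.0 / c for c in conductances)

def parallel(*conductances):
    # Conductances in parallel simply add.
    return sum(conductances)

c1 = orifice_conductance(2.0)   # 60 L/s
c2 = orifice_conductance(1.0)   # 15 L/s
print(series(c1, c2))           # 12 L/s: the smallest element dominates
print(parallel(c1, c2))         # 75 L/s
```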
|
[
{
"math_id": 0,
"text": "C_b = K \\frac{A}{b}"
},
{
"math_id": 1,
"text": "C_b"
},
{
"math_id": 2,
"text": "K"
},
{
"math_id": 3,
"text": "A"
},
{
"math_id": 4,
"text": "b"
},
{
"math_id": 5,
"text": "Q_b = C_b (h_b - h)\\,"
},
{
"math_id": 6,
"text": "Q_b"
},
{
"math_id": 7,
"text": "h_b"
},
{
"math_id": 8,
"text": "h"
},
{
"math_id": 9,
"text": "Q = (P_1-P_2)C."
},
{
"math_id": 10,
"text": "Q"
},
{
"math_id": 11,
"text": "P_1"
},
{
"math_id": 12,
"text": "P_2"
},
{
"math_id": 13,
"text": "C"
},
{
"math_id": 14,
"text": "C = 15 d^2"
},
{
"math_id": 15,
"text": "d"
}
] |
https://en.wikipedia.org/wiki?curid=6363853
|
63640292
|
Geometric Exercises in Paper Folding
|
1893 book on making polygons with origami
Geometric Exercises in Paper Folding is a book on the mathematics of paper folding. It was written by Indian mathematician T. Sundara Row, first published in India in 1893, and later republished in many other editions. Its topics include paper constructions for regular polygons, symmetry, and algebraic curves. According to historian of mathematics Michael Friedman, it became "one of the main engines of the popularization of folding as a mathematical activity".
Publication history.
"Geometric Exercises in Paper Folding" was first published by Addison & Co. in Madras in 1893. The book became known in Europe through a remark of Felix Klein in his book "Vorträge über ausgewählte Fragen der Elementargeometrie" (1895) and its translation "Famous Problems Of Elementary Geometry" (1897). Based on the success of "Geometric Exercises in Paper Folding" in Germany, the Open Court Press of Chicago published it in the US, with updates by Wooster Woodruff Beman and David Eugene Smith. Although Open Court listed four editions of the book, published in 1901, 1905, 1917, and 1941, the content did not change between these editions. The fourth edition was also published in London by La Salle, and both presses reprinted the fourth edition in 1958.
The contributions of Beman and Smith to the Open Court editions have been described as "translation and adaptation", despite the fact that the original 1893 edition was already in English. Beman and Smith also replaced many footnotes by references to their own work, replaced some of the diagrams by photographs, and removed some remarks specific to India. In 1966, Dover Publications of New York published a reprint of the 1905 edition, and other publishers of out-of-copyright works have also printed editions of the book.
Topics.
"Geometric Exercises in Paper Folding" shows how to construct various geometric figures using paper-folding in place of the classical Greek Straightedge and compass constructions.
The book begins by constructing regular polygons, going beyond the classical constructible polygons of 3, 4, or 5 sides (or of any power of two times these numbers) and the construction by Carl Friedrich Gauss of the heptadecagon; it also provides a paper-folding construction of the regular nonagon, which is not possible with compass and straightedge. The nonagon construction involves angle trisection, but Rao is vague about how this can be performed using folding; an exact and rigorous method for folding-based trisection would have to wait until the work in the 1930s of Margherita Piazzola Beloch. The construction of the square also includes a discussion of the Pythagorean theorem. The book uses high-order regular polygons to provide a geometric calculation of pi.
A discussion of the symmetries of the plane includes congruence, similarity, and collineations of the projective plane; this part of the book also covers some of the major theorems of projective geometry including Desargues's theorem, Pascal's theorem, and Poncelet's closure theorem.
Later chapters of the book show how to construct algebraic curves including the conic sections, the conchoid, the cubical parabola, the witch of Agnesi, the cissoid of Diocles, and the Cassini ovals. The book also provides a gnomon-based proof of Nicomachus's theorem that the sum of the first formula_0 cubes is the square of the sum of the first formula_0 integers, and material on other arithmetic series, geometric series, and harmonic series.
There are 285 exercises, and many illustrations, both in the form of diagrams and (in the updated editions) photographs.
Influences.
Tandalam Sundara Row was born in 1853, the son of a college principal, and earned a bachelor's degree at the Kumbakonam College in 1874, with second-place honours in mathematics. He became a tax collector in Tiruchirappalli, retiring in 1913, and pursued mathematics as an amateur. As well as "Geometric Exercises in Paper Folding", he also wrote a second book, "Elementary Solid Geometry", published in three parts from 1906 to 1909.
One of the sources of inspiration for "Geometric Exercises in Paper Folding" was "Kindergarten Gift No. VIII: Paper-folding". This was one of the Froebel gifts, a set of kindergarten activities designed in the early 19th century by Friedrich Fröbel. The book was also influenced by an earlier Indian geometry textbook, "First Lessons in Geometry", by Bhimanakunte Hanumantha Rao (1855–1922). "First Lessons" drew inspiration from Fröbel's gifts in setting exercises based on paper-folding, and from the book "Elementary Geometry: Congruent Figures" by Olaus Henrici in using a definition of geometric congruence based on matching shapes to each other and well-suited for folding-based geometry.
In turn, "Geometric Exercises in Paper Folding" inspired other works of mathematics. A chapter in "Mathematische Unterhaltungen und Spiele" ["Mathematical Recreations and Games"] by Wilhelm Ahrens (1901) concerns folding and is based on Rao's book, inspiring the inclusion of this material in several other books on recreational mathematics. Other mathematical publications have studied the curves that can be generated by the folding processes used in "Geometric Exercises in Paper Folding". In 1934, Margherita Piazzola Beloch began her research on axiomatizing the mathematics of paper-folding, a line of work that would eventually lead to the Huzita–Hatori axioms in the late 20th century. Beloch was explicitly inspired by Rao's book, titling her first work in this area "Alcune applicazioni del metodo del ripiegamento della carta di Sundara Row" ["Several applications of the method of folding a paper of Sundara Row"].
Audience and reception.
The original intent of "Geometric Exercises in Paper Folding" was twofold: as an aid in geometry instruction,
and as a work of recreational mathematics to inspire interest in geometry in a general audience. Edward Mann Langley, reviewing the 1901 edition, suggested that its content went well beyond what should be covered in a standard geometry course. And in their own textbook on geometry using paper-folding exercises, "The First Book of Geometry" (1905), Grace Chisholm Young and William Henry Young heavily criticized "Geometric Exercises in Paper Folding", writing that it is "too difficult for a child, and too infantile for a grown person". However, reviewing the 1966 Dover edition, mathematics educator Pamela Liebeck called it "remarkably relevant" to the discovery learning techniques for geometry instruction of the time, and in 2016 computational origami expert Tetsuo Ida, introducing an attempt to formalize the mathematics of the book, wrote "After 123 years, the significance of the book remains."
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n"
}
] |
https://en.wikipedia.org/wiki?curid=63640292
|
636427
|
Mahler's compactness theorem
|
Characterizes sets of lattices that are bounded in a certain sense
In mathematics, Mahler's compactness theorem, proved by Kurt Mahler (1946), is a foundational result on lattices in Euclidean space, characterising sets of lattices that are 'bounded' in a certain definite sense. Looked at another way, it explains the ways in which a lattice could degenerate ("go off to infinity") in a sequence of lattices. In intuitive terms it says that this is possible in just two ways: becoming "coarse-grained" with a fundamental domain that has ever larger volume; or containing shorter and shorter vectors. It is also called his selection theorem, following an older convention used in naming compactness theorems, because they were formulated in terms of sequential compactness (the possibility of selecting a convergent subsequence).
Let "X" be the space
formula_0
that parametrises lattices in formula_1, with its quotient topology. There is a well-defined function Δ on "X", which is the absolute value of the determinant of a matrix – this is constant on the cosets, since an invertible integer matrix has determinant 1 or −1.
Mahler's compactness theorem states that a subset "Y" of "X" is relatively compact if and only if Δ is bounded on "Y", and there is a neighbourhood "N" of 0 in formula_1 such that for all Λ in "Y", the only lattice point of Λ in "N" is 0 itself.
The assertion of Mahler's theorem is equivalent to the compactness of the space of unit-covolume lattices in formula_1 whose systole is greater than or equal to any fixed formula_2.
Mahler's compactness theorem was generalized to semisimple Lie groups by David Mumford; see Mumford's compactness theorem.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathrm{GL}_n(\\mathbb{R})/\\mathrm{GL}_n(\\mathbb{Z})"
},
{
"math_id": 1,
"text": "\\mathbb{R}^n"
},
{
"math_id": 2,
"text": "\\varepsilon>0"
}
] |
https://en.wikipedia.org/wiki?curid=636427
|
63644269
|
Samarium(II) fluoride
|
<templatestyles src="Chembox/styles.css"/>
Chemical compound
Samarium(II) fluoride is one of the fluorides of samarium, with the chemical formula SmF2. The compound crystallizes in the fluorite structure and is significantly nonstoichiometric. Along with europium(II) fluoride and ytterbium(II) fluoride, it is one of the three known rare-earth difluorides; the others are unstable.
Preparation.
Samarium(II) fluoride can be prepared by using samarium or hydrogen gas to reduce samarium(III) fluoride:
formula_0
formula_1
Properties.
Samarium(II) fluoride is a purple to black solid. It crystallizes in the cubic calcium fluoride (fluorite) structure type (space group Fm3m, No. 225, with "a" = 587.7 pm).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathrm{2 \\ SmF_3 + Sm \\longrightarrow 3 \\ SmF_2}"
},
{
"math_id": 1,
"text": "\\mathrm{2 \\ SmF_3 + H_2 \\longrightarrow 2 \\ SmF_2 + 2 \\ HF}"
}
] |
https://en.wikipedia.org/wiki?curid=63644269
|
6364620
|
Kramers' theorem
|
Theorem in quantum mechanics
In quantum mechanics, Kramers' degeneracy theorem states that for every energy eigenstate of a time-reversal symmetric system with half-integer total spin, there is another eigenstate with the same energy related by time-reversal. In other words, for a system with half-integer total spin, the degeneracy of every energy level is an even number. The theorem is named after Dutch physicist H. A. Kramers.
In theoretical physics, the time reversal symmetry is the symmetry of physical laws under a time reversal transformation:
formula_0
If the Hamiltonian operator commutes with the time-reversal operator, that is
formula_1
then, for every energy eigenstate formula_2, the time reversed state formula_3 is also an eigenstate with the same energy. These two states are sometimes called a Kramers pair. In general, this time-reversed state may be identical to the original one, but that is not possible in a half-integer spin system: since time reversal reverses all angular momenta, reversing a half-integer spin cannot yield the same state (the magnetic quantum number is never zero).
Mathematical statement and proof.
In quantum mechanics, the time reversal operation is represented by an antiunitary operator formula_4 acting on a Hilbert space formula_5. If it happens that formula_6, then we have the following simple theorem:
If formula_4 is an antiunitary operator acting on a Hilbert space formula_5 satisfying formula_6 and formula_7 a vector in formula_5, then formula_8 is orthogonal to formula_7.
Proof.
By the definition of an antiunitary operator, formula_9, where formula_10 and formula_11 are vectors in formula_5. Replacing formula_12 and formula_13 and using that formula_6, we get formula_14which implies that formula_15.
Consequently, if a Hamiltonian formula_16 is time-reversal symmetric, i.e. it commutes with formula_17, then all its energy eigenspaces have even degeneracy, since applying formula_17 to an arbitrary energy eigenstate formula_18 gives another energy eigenstate formula_19 that is orthogonal to the first one. The orthogonality property is crucial, as it means that the two eigenstates formula_20 and formula_19 represent different physical states. If, on the contrary, they were the same physical state, then formula_21 for an angle formula_22, which would imply
formula_23
contradicting the assumption that formula_6.
To complete Kramers degeneracy theorem, we just need to prove that the time-reversal operator formula_17 acting on a half-odd-integer spin Hilbert space satisfies formula_6. This follows from the fact that the spin operator formula_24 represents a type of angular momentum, and, as such, should reverse direction under formula_25:
formula_26
Concretely, an operator formula_17 that has this property is usually written as
formula_27
where formula_28 is the spin operator in the formula_29 direction and formula_30 is the complex conjugation map in the formula_31 spin basis.
Since formula_32 has real matrix components in the formula_33 basis, then
formula_34
Hence, for half-odd-integer spins formula_35, we have formula_6. This is the same minus sign that appears when one does a full formula_36 rotation on systems with half-odd-integer spins, such as fermions.
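A minimal numerical check of this construction for spin 1/2; the test state below is arbitrary, and the representation of the spin operator in the formula_33 basis is the standard one:

```python
import numpy as np
from scipy.linalg import expm

# Spin-1/2 check that T = exp(-i*pi*S_y) K satisfies T^2 = -1.
Sy = 0.5 * np.array([[0, -1j], [1j, 0]])    # S_y in the S_z basis, hbar = 1
U = expm(-1j * np.pi * Sy)                   # unitary part of T

def T(psi):
    # Antiunitary time reversal: complex-conjugate, then apply the unitary.
    return U @ np.conj(psi)

psi = np.array([0.6, 0.8j])                  # arbitrary spin-1/2 state
print(np.allclose(T(T(psi)), -psi))          # True: T^2 = -1
print(np.vdot(psi, T(psi)))                  # ~0: the Kramers partner is orthogonal
```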
Consequences.
The energy levels of a system with an odd total number of fermions (such as electrons, protons and neutrons) remain at least doubly degenerate in the presence of purely electric fields (i.e. no external magnetic fields). It was first discovered in 1930 by H. A. Kramers as a consequence of the Breit equation. As shown by Eugene Wigner in 1932, it is a consequence of the time reversal invariance of electric fields, and follows from an application of the antiunitary "T"-operator to the wavefunction of an odd number of fermions. The theorem is valid for any configuration of static or time-varying electric fields.
For example, the hydrogen (H) atom contains one proton and one electron, so that the Kramers theorem does not apply. Indeed, the lowest (hyperfine) energy level of H is nondegenerate, although a generic system might have degeneracy for other reasons. The deuterium (D) isotope on the other hand contains an extra neutron, so that the total number of fermions is three, and the theorem does apply. The ground state of D contains two hyperfine components, which are twofold and fourfold degenerate.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " T: t \\mapsto -t."
},
{
"math_id": 1,
"text": "[H,T]=0,"
},
{
"math_id": 2,
"text": "|n\\rangle"
},
{
"math_id": 3,
"text": "T|n\\rangle"
},
{
"math_id": 4,
"text": "T : \\mathcal{H} \\to \\mathcal{H}"
},
{
"math_id": 5,
"text": "\\mathcal{H}"
},
{
"math_id": 6,
"text": "T^2 = -1"
},
{
"math_id": 7,
"text": "v"
},
{
"math_id": 8,
"text": "T v"
},
{
"math_id": 9,
"text": "\\langle T u, T w \\rangle = \\langle w, u \\rangle"
},
{
"math_id": 10,
"text": "u"
},
{
"math_id": 11,
"text": "w"
},
{
"math_id": 12,
"text": "u = T v"
},
{
"math_id": 13,
"text": "w = v"
},
{
"math_id": 14,
"text": "- \\langle v, T v \\rangle = \\langle T^2 v, T v \\rangle = \\langle v, T v \\rangle,"
},
{
"math_id": 15,
"text": "\\langle v, T v \\rangle = 0"
},
{
"math_id": 16,
"text": "H"
},
{
"math_id": 17,
"text": "T"
},
{
"math_id": 18,
"text": "|n\\rangle"
},
{
"math_id": 19,
"text": "T | n \\rangle"
},
{
"math_id": 20,
"text": "| n \\rangle"
},
{
"math_id": 21,
"text": "T |n\\rangle = e^{i \\alpha} |n\\rangle"
},
{
"math_id": 22,
"text": "\\alpha \\in \\mathbb{R}"
},
{
"math_id": 23,
"text": "T^2 |n\\rangle = T (e^{i \\alpha} |n\\rangle) = e^{- i \\alpha} e^{i \\alpha} |n \\rangle = +\\,|n \\rangle"
},
{
"math_id": 24,
"text": "\\mathbf{S}"
},
{
"math_id": 25,
"text": "T"
},
{
"math_id": 26,
"text": "\\mathbf{S} \\to T^{-1} \\mathbf{S} T = - \\mathbf{S}."
},
{
"math_id": 27,
"text": "T = e^{- i \\pi S_y} K"
},
{
"math_id": 28,
"text": "S_y"
},
{
"math_id": 29,
"text": "y"
},
{
"math_id": 30,
"text": "K"
},
{
"math_id": 31,
"text": "S_z"
},
{
"math_id": 32,
"text": "i S_y"
},
{
"math_id": 33,
"text": "S_z"
},
{
"math_id": 34,
"text": "T^2 = e^{- i \\pi S_y} K e^{- i \\pi S_y} K = e^{- i 2 \\pi S_y} K^2 = (-1)^{2 S}."
},
{
"math_id": 35,
"text": "S = \\frac{1}{2}, \\frac{3}{2}, \\ldots"
},
{
"math_id": 36,
"text": "2 \\pi"
}
] |
https://en.wikipedia.org/wiki?curid=6364620
|
63648795
|
Powell's dog leg method
|
Powell's dog leg method, also called Powell's hybrid method, is an iterative optimisation algorithm for the solution of non-linear least squares problems, introduced in 1970 by Michael J. D. Powell. Similarly to the Levenberg–Marquardt algorithm, it combines the Gauss–Newton algorithm with gradient descent, but it uses an explicit trust region. At each iteration, if the step from the Gauss–Newton algorithm is within the trust region, it is used to update the current solution. If not, the algorithm searches for the minimum of the objective function along the steepest descent direction, known as the Cauchy point. If the Cauchy point is outside the trust region, it is truncated to the trust region boundary and taken as the new solution. If the Cauchy point is inside the trust region, the new solution is taken at the intersection between the trust region boundary and the line joining the Cauchy point and the Gauss–Newton step (dog leg step).
The name of the method derives from the resemblance between the construction of the dog leg step and the shape of a dogleg hole in golf.
Formulation.
Given a least squares problem in the form
formula_0
with formula_1, Powell's dog leg method finds the optimal point formula_2 by constructing a sequence formula_3 that converges to formula_4. At a given iteration, the Gauss–Newton step is given by
formula_5
where formula_6 is the Jacobian matrix, while the steepest descent direction is given by
formula_7
The objective function is linearised along the steepest descent direction
formula_8
To compute the value of the parameter formula_9 at the Cauchy point, the derivative of the last expression with respect to formula_9 is set equal to zero, giving
formula_10
Given a trust region of radius formula_11, Powell's dog leg method selects the update step formula_12 as equal to:
* formula_13, if the Gauss–Newton step lies within the trust region (formula_14);
* formula_15, the steepest descent step truncated to the trust region boundary, if both the Gauss–Newton step and the Cauchy point lie outside the trust region (that is, if formula_16 is at least formula_11);
* formula_17, with formula_18 chosen so that formula_19, if the Gauss–Newton step lies outside the trust region but the Cauchy point lies inside it (the dog leg step).
A sketch of this step selection in code is given below.
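A minimal sketch of one such step in Python, assuming the Jacobian formula_6 has full column rank; the function name and interface are illustrative, not part of Powell's original presentation:

```python
import numpy as np

def dogleg_step(J, f, delta):
    """One Powell dog-leg step for residuals f(x) with Jacobian J at x,
    restricted to a trust region of radius `delta`."""
    g = J.T @ f                                  # gradient of 0.5 * ||f||^2
    d_sd = -g                                    # steepest-descent direction
    d_gn = -np.linalg.solve(J.T @ J, g)          # Gauss-Newton step
    if np.linalg.norm(d_gn) <= delta:
        return d_gn                              # GN step inside the trust region
    t = (d_sd @ d_sd) / np.linalg.norm(J @ d_sd) ** 2
    p_c = t * d_sd                               # Cauchy point
    if np.linalg.norm(p_c) >= delta:
        return (delta / np.linalg.norm(d_sd)) * d_sd   # truncated SD step
    # dog leg: from the Cauchy point towards the GN step, up to the boundary
    d = d_gn - p_c
    a, b, c = d @ d, 2 * p_c @ d, p_c @ p_c - delta ** 2
    s = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return p_c + s * d
```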
|
[
{
"math_id": 0,
"text": "\nF(\\boldsymbol{x}) = \\frac{1}{2} \\left\\| \\boldsymbol{f} (\\boldsymbol{x}) \\right\\|^2 = \\frac{1}{2} \\sum_{i=1}^m \\left( f_i(\\boldsymbol{x}) \\right)^2\n"
},
{
"math_id": 1,
"text": "f_i: \\mathbb{R}^n \\to \\mathbb{R}"
},
{
"math_id": 2,
"text": "\\boldsymbol{x}^* = \\operatorname{argmin}_{\\boldsymbol{x}} F(\\boldsymbol{x})"
},
{
"math_id": 3,
"text": "\\boldsymbol{x}_k = \\boldsymbol{x}_{k-1} + \\delta_k"
},
{
"math_id": 4,
"text": "\\boldsymbol{x}^*"
},
{
"math_id": 5,
"text": "\n\\boldsymbol{\\delta_{gn}} = - \\left( \\boldsymbol{J}^\\top \\boldsymbol{J} \\right)^{-1} \\boldsymbol{J}^\\top \\boldsymbol{f}(\\boldsymbol{x})\n"
},
{
"math_id": 6,
"text": "\\boldsymbol{J} = \\left( \\frac{\\partial{f_i}}{\\partial{x_j}} \\right)"
},
{
"math_id": 7,
"text": "\n\\boldsymbol{\\delta_{sd}} = - \\boldsymbol{J}^\\top \\boldsymbol{f}(\\boldsymbol{x}) .\n"
},
{
"math_id": 8,
"text": "\n\\begin{align}\nF(\\boldsymbol{x} + t \\boldsymbol{\\delta_{sd}})\n &\\approx \\frac{1}{2} \\left\\| \\boldsymbol{f}(\\boldsymbol{x}) + t \\boldsymbol{J}(\\boldsymbol{x}) \\boldsymbol{\\delta_{sd}} \\right\\|^2 \\\\\n &= F(\\boldsymbol{x}) + t \\boldsymbol{\\delta_{sd}}^\\top \\boldsymbol{J}^\\top \\boldsymbol{f}(\\boldsymbol{x}) + \\frac{1}{2} t^2 \\left\\| \\boldsymbol{J} \\boldsymbol{\\delta_{sd}} \\right\\|^2 .\n\\end{align}\n"
},
{
"math_id": 9,
"text": "t"
},
{
"math_id": 10,
"text": "\nt = -\\frac{\\boldsymbol{\\delta_{sd}}^\\top \\boldsymbol{J}^\\top \\boldsymbol{f}(\\boldsymbol{x})}{\\left\\| \\boldsymbol{J} \\boldsymbol{\\delta_{sd}} \\right\\|^2}\n = \\frac{\\left\\| \\boldsymbol{\\delta_{sd}} \\right\\|^2}{\\left\\| \\boldsymbol{J} \\boldsymbol{\\delta_{sd}} \\right\\|^2}.\n"
},
{
"math_id": 11,
"text": "\\Delta"
},
{
"math_id": 12,
"text": "\\boldsymbol{\\delta_k}"
},
{
"math_id": 13,
"text": "\\boldsymbol{\\delta_{gn}}"
},
{
"math_id": 14,
"text": "\\left\\| \\boldsymbol{\\delta_{gn}} \\right\\| \\le \\Delta"
},
{
"math_id": 15,
"text": "\\frac{\\Delta}{\\left\\| \\boldsymbol{\\delta_{sd}} \\right\\|} \\boldsymbol{\\delta_{sd}}"
},
{
"math_id": 16,
"text": "t \\left\\| \\boldsymbol{\\delta_{sd}} \\right\\| "
},
{
"math_id": 17,
"text": "t \\boldsymbol{\\delta_{sd}} + s \\left( \\boldsymbol{\\delta_{gn}} - t \\boldsymbol{\\delta_{sd}} \\right)"
},
{
"math_id": 18,
"text": "s"
},
{
"math_id": 19,
"text": "\\left\\| \\boldsymbol{\\delta} \\right\\| = \\Delta"
}
] |
https://en.wikipedia.org/wiki?curid=63648795
|
63649139
|
Sum of residues formula
|
In mathematics, the residue formula says that the sum of the residues of a meromorphic differential form on a smooth proper algebraic curve vanishes.
Statement.
In this article, "X" denotes a proper smooth algebraic curve over a field "k". A meromorphic (algebraic) differential form formula_0 has, at each closed point "x" in "X", a residue which is denoted formula_1. Since formula_0 has poles only at finitely many points, in particular the residue vanishes for all but finitely many points. The residue formula states:
formula_2
Proofs.
A geometric way of proving the theorem is by reducing the theorem to the case when "X" is the projective line, and proving it by explicit computations in this case, for example in .
proves the theorem using a notion of traces for certain endomorphisms of infinite-dimensional vector spaces. The residue of a differential form formula_3 can be expressed in terms of traces of endomorphisms on the fraction field formula_4 of the completed local rings formula_5 which leads to a conceptual proof of the formula. A more recent exposition along similar lines, using more explicitly the notion of Tate vector spaces, is given by .
|
[
{
"math_id": 0,
"text": "\\omega"
},
{
"math_id": 1,
"text": "\\operatorname{res}_x \\omega"
},
{
"math_id": 2,
"text": "\\sum_{x} \\operatorname{res}_x \\omega=0."
},
{
"math_id": 3,
"text": "f dg"
},
{
"math_id": 4,
"text": "K_x"
},
{
"math_id": 5,
"text": "\\hat \\mathcal O_{X, x}"
}
] |
https://en.wikipedia.org/wiki?curid=63649139
|
63650316
|
Speckle variance optical coherence tomography
|
Speckle variance optical coherence tomography (SV-OCT) is an imaging algorithm for functional optical imaging. Optical coherence tomography is an imaging modality that uses low-coherence interferometry to obtain high resolution, depth-resolved volumetric images. OCT can be used to capture functional images of blood flow, a technique known as optical coherence tomography angiography (OCT-A). SV-OCT is one method for OCT-A that uses the variance of consecutively acquired images to detect flow at the micron scale. SV-OCT can be used to measure the microvasculature of tissue. In particular, it is useful in ophthalmology for visualizing blood flow in retinal and choroidal regions of the eye, which can provide information on the pathophysiology of diseases.
Introduction.
Color fundus photography, fluorescein angiography (FA) and indocyanine green angiography (ICGA) are methods for imaging retinal microvasculature networks. However, these methods have drawbacks in that they require the use of exogenous contrast agents. In addition, the images acquired using these techniques are two dimensional in nature and therefore lack depth information. OCT has several advantages that make it appealing for volumetric imaging of vasculature structure. Namely, OCT is able to acquire depth-resolved localization at high spatial and temporal resolutions, does not require exogenous contrast agents, and is non-invasive and contactless.
OCT gave rise to a family of techniques to perform OCT-A including speckle variance OCT, phase variance OCT, optical microangiography, and split-spectrum microangiography.
Speckle variance OCT uses only the amplitude information of the complex OCT signal, whereas phase variance OCT uses only the phase information.
Optical microangiography computes flow using both components of the complex OCT signal.
Split-spectrum amplitude decorrelation angiography (SSADA) computes average decorrelation between consecutive B-scans to visualize blood flow.
Methods.
Imaging system.
SV-OCT can be done with spectral domain OCT (SD-OCT) and swept source OCT (SS-OCT).
SD-OCT and SS-OCT are both methods of Fourier domain OCT (FD-OCT), which has significantly faster image acquisition speed compared to time domain OCT. In general, OCT measures the echo time delay and intensity of reflected and backscattered light. Light from a broad-bandwidth laser or superluminescent diode low-coherence source travels to a beam splitter, which sends half of the light to the reference arm, located at a known position, and half of the light to the sample, where it scatters and reflects off tissue. Light from the reference and sample arms recombines at the beam splitter, forming an interference pattern that is sensed by a photodetector. In SD-OCT, the interference pattern is split into its frequency components by a grating, and these components are detected simultaneously by a charge-coupled device (CCD). Each frequency corresponds to a certain depth within the tissue.
In SS-OCT, a tunable swept laser source is used.
Algorithm.
The intensity or speckle of an OCT signal is the random interference pattern produced by backscattered light from a random medium. OCT captures cross-sectional images, known as B-scans. In SV-OCT, multiple B-scans are captured at the same location, creating a 3D data set, with time as the third dimension. The pixel-wise variance is computed between consecutive B-scan frames. A speckle variance image, formula_0, is calculated as
formula_1
where formula_2 is the number of B-scans obtained at a single location and formula_3 is the intensity of a pixel with image coordinates formula_4 in the B-scan indexed by formula_5.
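Equation (1) amounts to a pixel-wise variance along the time axis of the stacked B-scans. A minimal sketch with synthetic data; the array shapes and noise levels below are hypothetical:

```python
import numpy as np

def speckle_variance(bscans):
    """Pixel-wise inter-frame variance of N B-scans acquired at one location.

    bscans: array of shape (N, depth, lateral) holding OCT intensities I_ijk.
    Returns the speckle-variance image of shape (depth, lateral), eq. (1).
    """
    bscans = np.asarray(bscans, dtype=float)
    mean = bscans.mean(axis=0)
    return np.mean((bscans - mean) ** 2, axis=0)

# toy data: 8 repeated B-scans of a 4x5 region; one pixel fluctuates strongly,
# mimicking the decorrelation produced by flow
rng = np.random.default_rng(0)
frames = 100 + rng.normal(0, 1, size=(8, 4, 5))
frames[:, 2, 2] += rng.normal(0, 20, size=8)
print(np.round(speckle_variance(frames), 1))   # large value at pixel (2, 2)
```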
The speckle pattern of OCT images is affected by the motion of scattering particles in the target medium. The interference pattern produced by the light backscattered through the medium depends on the movement of these particles. Therefore, the speckle pattern encapsulates information regarding the spatial and temporal motion of scattering particles in a random scattering medium.
SV-OCT uses the inter-frame variance of image intensities to compute blood flow. Areas that have high flow will have higher motion of scattering particles and this information is encoded in the speckle pattern.
SV-OCT has advantages for microvasculature imaging due to its high sensitivity and independence from the Doppler angle. In addition, it has low computational complexity and requires relatively little data storage compared to PV-OCT.
However, SV-OCT is susceptible to bulk tissue motion and to artifacts induced by multiple scattering.
Applications.
SV-OCT has applications in the field of ophthalmology, as several diseases affect blood flow in the eye. For example, diabetic retinopathy (DR) can alter the structure of retinal capillaries and cause neovascularization; glaucoma is associated with lower retinal blood flow; and age-related macular degeneration (AMD) is associated with choroidal neovascularization, which can lead to loss of vision. SV-OCT has been used to image the microvasculature of the eye and to study the pathophysiology of these diseases.
Aside from applications in ophthalmology, SV-OCT has been used to study blood flow in embryos, cardiac tissue, and spinal tissue.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " I_{SV} "
},
{
"math_id": 1,
"text": " I_{SV} = \\frac{1}{N} \\sum^N_{i=1} \\left(I_{ijk} - \\frac{1}{N} \\sum^N_{i=1} I_{ijk} \\right)^2 \\qquad \\quad (1) "
},
{
"math_id": 2,
"text": " {N} "
},
{
"math_id": 3,
"text": " I_{ijk} "
},
{
"math_id": 4,
"text": " (j,k) "
},
{
"math_id": 5,
"text": " i "
}
] |
https://en.wikipedia.org/wiki?curid=63650316
|
63661740
|
Star unfolding
|
Net obtained by cutting a polyhedron
In computational geometry, the star unfolding of a convex polyhedron is a net obtained by cutting the polyhedron along geodesics (shortest paths) through its faces. It has also been called the inward layout of the polyhedron, or the Alexandrov unfolding after Aleksandr Danilovich Aleksandrov, who first considered it.
Description.
In more detail, the star unfolding is obtained from a polyhedron formula_0 by choosing a starting point formula_1 on the surface of formula_0, in general position, meaning that there is a unique shortest geodesic from formula_1 to each vertex of formula_0.
The star unfolding is obtained by cutting the surface of formula_0 along these geodesics, and unfolding the resulting cut surface onto a plane. The resulting shape forms a simple polygon in the plane.
The star unfolding may be used as the basis for polynomial time algorithms for various other problems involving geodesics on convex polyhedra.
Related unfoldings.
The star unfolding should be distinguished from another way of cutting a convex polyhedron into a simple polygon net, the source unfolding. The source unfolding cuts the polyhedron at points that have multiple equally short geodesics to the given base point formula_1, and forms a polygon with formula_1 at its center, preserving geodesics from formula_1. Instead, the star unfolding cuts the polyhedron along the geodesics, and forms a polygon with multiple copies of formula_1 at its vertices. Despite their names, the source unfolding always produces a star-shaped polygon, but the star unfolding does not.
Generalizations of the star unfolding using a geodesic or quasigeodesic in place of a single base point have also been studied. Another generalization uses a single base point, and a system of geodesics that are not necessarily shortest geodesics.
Neither the star unfolding nor the source unfolding restrict their cuts to the edges of the polyhedron. It is an open problem whether every polyhedron can be cut and unfolded to a simple polygon using only cuts along its edges.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "P"
},
{
"math_id": 1,
"text": "p"
}
] |
https://en.wikipedia.org/wiki?curid=63661740
|
636621
|
Overspill
|
Proof technique in nonstandard analysis
In nonstandard analysis, a branch of mathematics, overspill (referred to as "overflow" by Goldblatt (1998, p. 129)) is a widely used proof technique. It is based on the fact that the set of standard natural numbers N is not an internal subset of the internal set *N of hypernatural numbers.
By applying the induction principle for the standard integers N and the transfer principle we get the principle of internal induction:
For any "internal" subset "A" of *N, if
# 1 is an element of "A", and
# for every element "n" of "A", "n" + 1 also belongs to "A",
then
"A" = *N
If N were an internal set, then instantiating the internal induction principle with N, it would follow N = *N which is known not to be the case.
The overspill principle has a number of useful consequences:
In particular:
Example.
These facts can be used to prove the equivalence of the following two conditions for an "internal" hyperreal-valued function ƒ defined on *R.
formula_0
and
formula_1
The proof that the second fact implies the first uses overspill, since given a non-infinitesimal positive "ε",
formula_2
Applying overspill, we obtain a positive appreciable δ with the requisite properties.
These equivalent conditions express the property known in nonstandard analysis as S-continuity (or microcontinuity) of ƒ at "x". S-continuity is referred to as an external property. The first definition is external because it involves quantification over standard values only. The second definition is external because it involves the external relation of being infinitesimal.
|
[
{
"math_id": 0,
"text": " \\forall \\epsilon\\in \\mathbb{R}^+, \\exists \\delta \\in\\mathbb{R}^+, |h| \\leq \\delta \\implies |f(x+h) - f(x)| \\leq \\varepsilon"
},
{
"math_id": 1,
"text": " \\forall h \\cong 0, \\ |f(x+h) - f(x)| \\cong 0 "
},
{
"math_id": 2,
"text": " \\forall \\mbox{ positive } \\delta \\cong 0, \\ (|h| \\leq \\delta \\implies |f(x+h) - f(x)| < \\varepsilon)."
}
] |
https://en.wikipedia.org/wiki?curid=636621
|
63665258
|
Bateman-Mukai method
|
Method for describing the mutation rates for genes through the observation of phenotypes
In genetics, the Bateman–Mukai method, sometimes referred to as the Bateman–Mukai technique, is a traditional method used for describing the mutation rates for genes through the observation of physical traits (phenotype) of a living organism. The method involves the maintenance of many mutation accumulation lineages of the organism studied, and it is therefore labor intensive.
Origin.
The foundational studies from which this method gets its name were conducted by geneticists A. J. Bateman in 1959 and T. Mukai in 1964. Bateman used an early form of this method to understand how radiation-induced mutations affect the survival of chromosomes. Mukai's experimental design largely followed the design of Bateman's study, but rather than inducing mutations via any external factor, the study aimed to describe the spontaneous, naturally occurring deleterious mutation rate of the common fruit fly.
Procedure.
The method requires the establishment of many mutation accumulation lineages using within-line breeding of diploid organisms. These lines are maintained in an environment favorable for deleterious mutations to accumulate, so that they are not purged by natural selection: excess food and other resources are kept available to eliminate competition, and the parents of the next generation are chosen at random without any regard to fitness. Importantly, in this way, mutation accumulation experiments attempt to describe the true mutation rates that would be observed in the absence of natural selection.
Asexually reproducing organisms can simply have a single parent selected as the parent for the next generation of each line. In sexually reproducing organisms, measures must be taken such that researchers can be sure that mutations are inherited by future generations of the mutation accumulation lines. A balancer gene can be used towards this end. In the Mukai experiment, male flies homozygous for the wild type chromosome 2 were always mated with female heterozygotes for the Pm/Cy balancer gene that produces an observable phenotype in the wings, with homozygous Pm/Cy being lethal. This ensures that researchers can select for organisms that do not exhibit the phenotypic trait of the balancer gene, which in turn means that only wild type chromosomes will be passed down to the next generation. In this way, in sexually reproducing organisms, any spontaneously occurring mutations that arise in the mutation accumulation line should have a random chance, due to independent assortment, of being fixed in the next generation of the line.
The main quantities derived from the results of a Bateman–Mukai experiment are the deleterious mutation rate, formula_0, and the average selection coefficient, formula_1, both of which must be inferred from phenotype observations. formula_0, which is specifically the mutation rate for a single copy of a gene, follows from the assumption that deleterious mutations are fixed at random and from the diploid nature of the organisms, which carry two copies of each gene. The mutation rate from both copies, formula_2, multiplied by the population of the line, formula_3, under the assumption of random fixation gives the deleterious mutation rate, formula_4, so that each mutation fixed per line per generation counts directly towards the mutation rate. By tracking the quantitative deleterious change per generation in a trait, ΔM (e.g. the number of offspring), the mutation rate within one line can be related to the average mutational effect as: formula_5.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "U"
},
{
"math_id": 1,
"text": "E(a)"
},
{
"math_id": 2,
"text": "2U"
},
{
"math_id": 3,
"text": "p"
},
{
"math_id": 4,
"text": "U = (p2U/p2)"
},
{
"math_id": 5,
"text": "\\Delta M = UE(a)"
}
] |
https://en.wikipedia.org/wiki?curid=63665258
|
63670899
|
Smith–Helmholtz invariant
|
In optics the Smith–Helmholtz invariant is an invariant quantity for paraxial beams propagating through an optical system. Given an object at height formula_0 and an axial ray passing through the same axial position as the object with angle formula_1, the invariant is defined by
formula_2,
where formula_3 is the refractive index. For a given optical system and specific choice of object height and axial ray, this quantity is invariant under refraction. Therefore, at the formula_4th conjugate image point with height formula_5 and refracted axial ray with angle formula_6 in medium with index of refraction formula_7 we have formula_8. Typically the two points of most interest are the object point and the final image point.
The Smith–Helmholtz invariant has a close connection with the Abbe sine condition. The paraxial version of the sine condition is satisfied if the ratio formula_9 is constant, where formula_1 and formula_3 are the axial ray angle and refractive index in object space and formula_10 and formula_11 are the corresponding quantities in image space. The Smith–Helmholtz invariant implies that the lateral magnification, formula_12 is constant if and only if the sine condition is satisfied.
The Smith–Helmholtz invariant also relates the lateral and angular magnification of the optical system, which are formula_13 and formula_14 respectively. Applying the invariant to the object and image points implies the product of these magnifications is given by
formula_15
The Smith–Helmholtz invariant is closely related to the Lagrange invariant and the optical invariant. The Smith–Helmholtz invariant is the optical invariant restricted to conjugate image planes.
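As a numerical illustration, the invariant can be checked with a paraxial ray trace through a single refracting surface: the axial ray locates the conjugate image plane, a second ray from the top of the object locates the image height, and the invariant and the magnification product are then evaluated. All values below are hypothetical.

```python
# Paraxial check of the Smith-Helmholtz invariant at one refracting surface.
n1, n2 = 1.0, 1.5            # refractive index in object / image space
R = 50.0                     # radius of curvature of the surface
P = (n2 - n1) / R            # surface power

s = 200.0                    # object distance in front of the surface
y_bar = 5.0                  # object height
u = 0.02                     # chosen axial-ray angle at the object point

# Axial ray from the on-axis object point: transfer, refract, find image plane
y_ax = u * s
u2 = (n1 * u - y_ax * P) / n2
s_img = -y_ax / u2           # distance from the surface to the image plane

# A second ray from the top of the object locates the image height
ub = -0.01
yb = y_bar + ub * s
ub2 = (n1 * ub - yb * P) / n2
y_img = yb + ub2 * s_img

print(n1 * y_bar * u, n2 * y_img * u2)        # 0.1 and 0.1: invariant preserved
print((y_img / y_bar) * (u2 / u), n1 / n2)    # product of magnifications = n/n'
```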
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\bar{y}"
},
{
"math_id": 1,
"text": "u"
},
{
"math_id": 2,
"text": "H = n\\bar{y}u"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "i"
},
{
"math_id": 5,
"text": "\\bar{y}_i"
},
{
"math_id": 6,
"text": "u_i"
},
{
"math_id": 7,
"text": "n_i"
},
{
"math_id": 8,
"text": " H = n_i \\bar{y}_i u_i"
},
{
"math_id": 9,
"text": "n u / n' u'"
},
{
"math_id": 10,
"text": "u'"
},
{
"math_id": 11,
"text": "n'"
},
{
"math_id": 12,
"text": "y/y'"
},
{
"math_id": 13,
"text": "y'/y"
},
{
"math_id": 14,
"text": "u'/u"
},
{
"math_id": 15,
"text": " \\frac{y'}{y} \\frac{u'}{u} = \\frac{n}{n'} "
}
] |
https://en.wikipedia.org/wiki?curid=63670899
|
6367125
|
Band brake
|
Type of brake
A band brake is a primary or secondary brake, consisting of a band of friction material that tightens concentrically around a cylindrical piece of equipment or train wheel to either prevent it from rotating (a static or "holding" brake), or to slow it (a dynamic brake). This application is common on winch drums and chain saws and is also used for some bicycle brakes.
A former application was the locking of gear rings in epicyclic gearing. In modern automatic transmissions this task has been taken over entirely by multiple-plate clutches or multiple-plate brakes.
Features.
Band brakes can be simple, compact, rugged, and can generate high force with a light input force. However, band brakes are prone to grabbing or chatter and loss of brake force when hot. These problems are inherent with the design and thus limit where band brakes are a good solution.
Effectiveness.
One way to describe the effectiveness of the brake is as formula_0, where formula_1 is the coefficient of friction between band and drum, and formula_2 is the angle of wrap. With a large formula_3, the brake is very effective and requires low input force to achieve high brake force, but is also very sensitive to changes in formula_1. For example, light rust on the drum may cause the brake to "grab" or chatter, water may cause the brake to slip, and rising temperatures in braking may cause the coefficient of friction to drop slightly but in turn cause brake force to drop greatly. Using a band material with low formula_1 increases the input force required to achieve a given brake force, but some low-formula_1 materials also have more consistent formula_1 across the range of working temperatures.
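A small numerical sketch of this sensitivity; the friction coefficient and wrap angle below are hypothetical:

```python
from math import exp, pi

def force_amplification(mu, wrap_angle):
    # Band-force ratio e^(mu * theta) used above as an effectiveness measure.
    return exp(mu * wrap_angle)

theta = 1.5 * pi                          # roughly 270 degrees of wrap
print(force_amplification(0.35, theta))   # about 5.2
print(force_amplification(0.25, theta))   # about 3.2: a modest drop in mu cuts brake force sharply
```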
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "e^{\\mu \\theta}"
},
{
"math_id": 1,
"text": "\\mu"
},
{
"math_id": 2,
"text": "\\theta"
},
{
"math_id": 3,
"text": "\\mu \\theta"
}
] |
https://en.wikipedia.org/wiki?curid=6367125
|
63671402
|
C/2020 F8 (SWAN)
|
Second brightest naked-eye comet of 2020
C/2020 F8 (SWAN), or Comet SWAN, is an Oort cloud comet that was discovered in images taken by the Solar Wind Anisotropies (SWAN) camera on March 25, 2020, aboard the Solar and Heliospheric Observatory (SOHO) spacecraft. In the glare of twilight, Comet SWAN is difficult to find with 50mm binoculars even though it is still near the theoretical range of naked eye visibility. The comet has dimmed since May 3. As of perihelion, the comet is very diffuse, does not have a visible nucleus and is not a comet that will be noticed by inexperienced observers. It is likely that the comet disintegrated.
Observing.
On April 28, 2020 it had an apparent magnitude of 7 and was too diffuse to be visible to the naked eye even from a dark site. The comet was also hidden by the glare of twilight, zodiacal light and atmospheric extinction. It was originally best seen from the Southern Hemisphere. It was expected to possibly reach 3rd magnitude in May, but instead hovered closer to magnitude 6. In either case it was near the glare of twilight, which made it appear significantly fainter. On May 2, the comet had reached a magnitude of 4.7 and had been detected with the naked eye; the tail had a visual length of one degree and could be traced photographically for 6-8 degrees. After that the comet faded, probably as the nucleus of the comet fragmented. It passed through the celestial equator on 7 May, then it headed northward and it was near the 2nd magnitude star Algol on 20 May. It passed its perihelion on May 27, 2020.
Orbit.
The Minor Planet Center initially listed the orbit as bound with formula_0. With a short 18-day observation arc JPL listed the comet as hyperbolic with an eccentricity of , but a longer observation arc was needed to refine the uncertainties and either confirm its hyperbolic trajectory, or determine its orbital period of thousands or millions of years. With a 40-day observation arc it was possible to determine that it came from the Oort cloud on a Hyperbolic trajectory and that the outbound orbit will last ~11,000 years.
On May 12, 2020, the comet passed about from Earth. On May 27, 2020 the comet came to perihelion from the Sun.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "e<1"
}
] |
https://en.wikipedia.org/wiki?curid=63671402
|
63672907
|
Christophe Fraser
|
British epidemiologist
Christophe Fraser is a professor of Infectious Disease Epidemiology in the Big Data Institute, part of the Nuffield Department of Medicine at the University of Oxford.
Fraser's PhD and initial postdoctoral research were in theoretical particle physics. He converted to infectious disease epidemiology in 1998, based first at the University of Oxford then at Imperial College London, where he became Chair of Theoretical Epidemiology and served as deputy director of the MRC Centre for Outbreak Analysis and Modelling.
He returned to the University of Oxford in 2016 as Senior Group Leader in Pathogen Dynamics at the Big Data Institute.
In 2022 he was appointed Moh Family Foundation Professor of Infectious Disease Epidemiology as part of the University of Oxford's newly created Pandemic Sciences Institute.
Research on HIV.
Fraser and colleagues were among the first to hypothesise that the large variability in virulence observed between individuals living with HIV could be partly due to genetic variation in the virus.
In other words they hypothesised that virulence, considered as a phenotype of the virus, has appreciable heritability.
They and others later provided evidence for this.
Fraser was principal investigator of the BEEHIVE project to investigate the mechanism of this heritability, which discovered the 'VB variant': a highly virulent strain within the B subtype of HIV found in 107 individuals living with HIV in the Netherlands. UNAIDS stated that the discovery "provides evidence of urgency to halt the pandemic and reach all with testing and treatment".
Research on the COVID-19 pandemic.
In March 2020 Fraser and his research group published epidemiological modelling supporting 'digital contact tracing' using COVID-19 apps to reduce the spread of SARS-CoV-2.
Fraser provided advice to the British government and more broadly about implementing such apps, including designing a risk evaluation algorithm with Anthony Finkelstein and others.
Fraser's team developed the OpenABM-Covid-19 agent-based model, used by the NHS to model the pandemic, winning the 2021 Analysis in Government award for Innovative methods.
Research on other outbreaks.
Fraser worked on
the 2002–2004 SARS outbreak,
the 2009 swine flu pandemic,
the 2012 MERS outbreak
and the Western African Ebola virus epidemic.
Methodological research.
Fraser's publications include "Factors that make an infectious disease outbreak controllable", 2004, which argued that in addition to the basic reproduction number formula_0 a second key parameter of an infectious disease is the proportion of transmission that occurs before the onset of symptoms.
This proportion being large for SARS-CoV-2 was a key difficulty in infection control for the COVID-19 pandemic.
Fraser's 2007 analysis "Estimating Individual and Household Reproduction Numbers in an Emerging Epidemic" first defined an estimator for the instantaneous (time-varying) reproduction number formula_1 that was subsequently widely used. The definition was obtained by inverting the standard relationship between the reproduction number, the generation time distribution and the parameter formula_2 of the Malthusian growth model that is implied by the renewal equation for epidemic dynamics (or the Euler-Lotka equation as it is known in demography; the two are equivalent due to actual births being analogous to infectious disease transmissions as 'epidemiological births', giving rise to a new infected individual).
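A minimal sketch of that renewal-equation estimator, in which formula_1 at a given time is the number of new infections divided by the infection pressure from past cases weighted by the generation time distribution; the case counts and generation-time distribution below are hypothetical:

```python
import numpy as np

def instantaneous_R(incidence, gen_time_pmf):
    """Renewal-equation estimator R(t) = I(t) / sum_s w(s) I(t-s).

    incidence:    new cases per time step, I(0), I(1), ...
    gen_time_pmf: discretised generation-time distribution w(1), w(2), ...
    """
    incidence = np.asarray(incidence, dtype=float)
    w = np.asarray(gen_time_pmf, dtype=float)
    R = np.full(len(incidence), np.nan)
    for t in range(1, len(incidence)):
        s = np.arange(1, min(t, len(w)) + 1)
        denom = np.sum(w[s - 1] * incidence[t - s])
        if denom > 0:
            R[t] = incidence[t] / denom
    return R

# toy example: exponentially growing incidence, mean generation time ~3 steps
cases = [10, 13, 17, 22, 29, 38, 49, 64, 83, 108]
w = [0.1, 0.2, 0.3, 0.25, 0.15]   # hypothetical generation-time distribution
print(np.round(instantaneous_R(cases, w), 2))
```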
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "R_0"
},
{
"math_id": 1,
"text": "R(t)"
},
{
"math_id": 2,
"text": "r"
}
] |
https://en.wikipedia.org/wiki?curid=63672907
|
63676220
|
Okubo–Weiss parameter
|
In fluid mechanics, the Okubo–Weiss parameter (normally denoted "W") is a measure of the relative importance of deformation and rotation at a given point. It is calculated as the sum of the squares of the normal and shear strain minus the square of the relative vorticity. It is widely used in fluid dynamics, particularly for identifying and describing oceanic eddies.
For a horizontally non-divergent flow in the ocean, the parameter is given by:
formula_0
where:
formula_1 is the normal component of the strain,
formula_2 is the shear component of the strain,
formula_3 is the relative vorticity.
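A minimal sketch of the computation on a gridded velocity field. The expressions for the strain and vorticity components in terms of velocity gradients used below are the standard ones and are an assumption here, since they are not spelled out above:

```python
import numpy as np

def okubo_weiss(u, v, dx, dy):
    """Okubo-Weiss parameter W = s_n**2 + s_s**2 - omega**2 on a regular grid.

    u, v : 2-D arrays of horizontal velocity components, indexed (y, x)
    dx, dy : grid spacing in x and y
    """
    du_dy, du_dx = np.gradient(u, dy, dx)
    dv_dy, dv_dx = np.gradient(v, dy, dx)
    s_n = du_dx - dv_dy      # normal strain
    s_s = dv_dx + du_dy      # shear strain
    omega = dv_dx - du_dy    # relative vorticity
    return s_n**2 + s_s**2 - omega**2

# toy example: solid-body rotation is rotation-dominated, so W < 0
x = np.linspace(-1, 1, 101)
y = np.linspace(-1, 1, 101)
X, Y = np.meshgrid(x, y)
W = okubo_weiss(u=-Y, v=X, dx=x[1] - x[0], dy=y[1] - y[0])
print(W[50, 50])   # approximately -4 at the vortex centre
```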
|
[
{
"math_id": 0,
"text": "W = s_n^2 + s_s^2 - \\omega^2"
},
{
"math_id": 1,
"text": "s_n"
},
{
"math_id": 2,
"text": "s_s"
},
{
"math_id": 3,
"text": "\\omega"
}
] |
https://en.wikipedia.org/wiki?curid=63676220
|
63676221
|
Exposure Notification
|
Initiative for mobile device-based privacy-preserving contact tracing
The (Google/Apple) Exposure Notification System (GAEN) is a framework and protocol specification developed by Apple Inc. and Google to facilitate digital contact tracing during the COVID-19 pandemic. When used by health authorities, it augments more traditional contact tracing techniques by automatically logging close approaches among notification system users using Android or iOS smartphones. Exposure Notification is a decentralized reporting protocol built on a combination of Bluetooth Low Energy technology and privacy-preserving cryptography. It is an opt-in feature within COVID-19 apps developed and published by authorized health authorities. Unveiled on April 10, 2020, it was made available on iOS on May 20, 2020 as part of the iOS 13.5 update and on December 14, 2020 as part of the iOS 12.5 update for older iPhones. On Android, it was added to devices via a Google Play Services update, supporting all versions since Android Marshmallow.
The Apple/Google protocol is similar to the Decentralized Privacy-Preserving Proximity Tracing (DP-3T) protocol created by the European DP-3T consortium and the Temporary Contact Number (TCN) protocol by Covid Watch, but is implemented at the operating system level, which allows for more efficient operation as a background process. Since May 2020, a variant of the DP-3T protocol is supported by the Exposure Notification Interface. Other protocols are constrained in operation because they are not privileged over normal apps. This leads to issues, particularly on iOS devices where digital contact tracing apps running in the background experience significantly degraded performance. The joint approach is also designed to maintain interoperability between Android and iOS devices, which constitute nearly all of the market.
The ACLU stated the approach "appears to mitigate the worst privacy and centralization risks, but there is still room for improvement". In late April, Google and Apple shifted the emphasis of the naming of the system, describing it as an "exposure notification service", rather than "contact tracing" system.
Technical specification.
Digital contact tracing protocols typically have two major responsibilities: encounter logging and infection reporting. Exposure Notification covers only encounter logging, which uses a decentralized architecture; infection reporting is largely handled centrally by individual app implementations.
To handle encounter logging, the system uses Bluetooth Low Energy to send tracking messages to nearby devices running the protocol, in order to discover encounters with other people. The tracking messages contain unique identifiers that are encrypted with a secret daily key held by the sending device. These identifiers, along with the device's Bluetooth MAC address, change every 15–20 minutes to prevent malicious third parties from tracking clients by observing static identifiers over time.
The sender's daily encryption keys are generated using a random number generator. Devices record received messages, retaining them locally for 14 days. If a user tests positive for infection, the last 14 days of their daily encryption keys can be uploaded to a central server, from which they are then broadcast to all devices on the network. The method through which daily encryption keys are transmitted to the central server and broadcast is defined by individual app developers. The Google-developed reference implementation calls for a health official to request a one-time verification code (VC) from a "verification server", which the user enters into the encounter logging app. This causes the app to obtain a cryptographically signed certificate, which is used to authorize the submission of keys to the central reporting server.
The received keys are then provided to the protocol, where each client individually searches for matches in their local encounter history. If a match meeting certain risk parameters is found, the app notifies the user of potential exposure to the infection. Google and Apple intend to use the received signal strength indication (RSSI) of the beacon messages to infer proximity. RSSI and other signal metadata will also be encrypted to resist deanonymization attacks.
Version 1.0.
To generate encounter identifiers, a persistent 32-byte private "Tracing Key" (formula_0) is first generated by a client. From this a 16-byte "Daily Tracing Key" is derived using the algorithm formula_1, where formula_2 is an HKDF function using SHA-256, and formula_3 is the "day number" of the 24-hour window containing the broadcast, counted from Unix Epoch Time. These generated keys are later sent to the central reporting server should a user become infected.
From the daily tracing key a 16-byte temporary "Rolling Proximity Identifier" is generated every 10 minutes with the algorithm formula_4, where formula_5 is a HMAC function using SHA-256, and formula_6 is the "time interval number", representing a unique index for every 10 minute period in a 24-hour day. The Truncate function returns the first 16 bytes of the HMAC value. When two clients come within proximity of each other they exchange and locally store the current formula_7 as the encounter identifier.
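A minimal sketch of these two derivations is shown below, assuming Python's "cryptography" package for HKDF; the byte encodings of the day number and interval number are assumptions for illustration and are not taken from the specification.
```python
import os
import hmac
import hashlib
import struct
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def daily_tracing_key(tracing_key: bytes, day_number: int) -> bytes:
    """Derive the 16-byte Daily Tracing Key: dtk_i = HKDF(tk, NULL, 'CT-DTK'||D_i, 16)."""
    info = b"CT-DTK" + struct.pack("<I", day_number)   # assumed 4-byte little-endian day number
    return HKDF(algorithm=hashes.SHA256(), length=16, salt=None, info=info).derive(tracing_key)

def rolling_proximity_identifier(dtk: bytes, interval_number: int) -> bytes:
    """Derive RPI_{i,j} = Truncate(HMAC(dtk_i, 'CT-RPI'||TIN_j), 16)."""
    data = b"CT-RPI" + struct.pack("<B", interval_number)  # assumed 1-byte interval index (0-143)
    return hmac.new(dtk, data, hashlib.sha256).digest()[:16]  # keep the first 16 bytes

tk = os.urandom(32)                          # persistent 32-byte Tracing Key
dtk = daily_tracing_key(tk, 18400)           # example day number since the Unix epoch
rpi = rolling_proximity_identifier(dtk, 42)  # example 10-minute interval within the day
```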
Once a registered health authority has confirmed the infection of a user, the user's Daily Tracing Key for the past 14 days is uploaded to the central reporting server. Clients then download this report and individually recalculate every Rolling Proximity Identifier used in the report period, matching it against the user's local encounter log. If a matching entry is found, then contact has been established and the app presents a notification to the user warning them of potential infection.
Version 1.1.
Unlike version 1.0 of the protocol, version 1.1 does not use a persistent "tracing key"; instead, every day a new random 16-byte "Temporary Exposure Key" (formula_8) is generated. This is analogous to the "daily tracing key" from version 1.0. Here formula_9 denotes the interval number, with time discretized into 10-minute intervals starting from Unix Epoch Time. From this, two 128-bit keys are calculated: the "Rolling Proximity Identifier Key" (formula_10) and the "Associated Encrypted Metadata Key" (formula_11). formula_10 is calculated with the algorithm formula_12, and formula_11 using the algorithm formula_13.
From these values a temporary "Rolling Proximity Identifier" (formula_7) is generated every time the BLE MAC address changes, roughly every 15–20 minutes. The following algorithm is used: formula_14, where formula_15 is an AES cryptography function with a 128-bit key, the data is one 16-byte block, formula_16 denotes the Unix Epoch Time at the moment the roll occurs, and formula_17 is the corresponding 10-minute interval number. Next, additional "Associated Encrypted Metadata" is encrypted. What the metadata represents is not specified, likely to allow the later expansion of the protocol. The following algorithm is used: formula_18, where formula_19 denotes AES encryption with a 128-bit key in CTR mode. The Rolling Proximity Identifier and the Associated Encrypted Metadata are then combined and broadcast using BLE. Clients exchange and log these payloads.
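The sketch below illustrates the version 1.1 derivation and encryption steps, again using Python's "cryptography" package; the 16-byte padded block layout ('EN-RPI', six zero bytes, then a 4-byte little-endian interval number) and the example metadata bytes are assumptions for illustration.
```python
import os
import struct
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def derive_keys(tek: bytes):
    """RPIK_i = HKDF(tek_i, NULL, 'EN-RPIK', 16); AEMK_i = HKDF(tek_i, NULL, 'EN-AEMK', 16)."""
    rpik = HKDF(algorithm=hashes.SHA256(), length=16, salt=None, info=b"EN-RPIK").derive(tek)
    aemk = HKDF(algorithm=hashes.SHA256(), length=16, salt=None, info=b"EN-AEMK").derive(tek)
    return rpik, aemk

def rolling_proximity_identifier(rpik: bytes, enin: int) -> bytes:
    """RPI_{i,j}: AES-128 encryption of a single 16-byte padded block under RPIK_i."""
    padded = b"EN-RPI" + b"\x00" * 6 + struct.pack("<I", enin)   # assumed block layout
    enc = Cipher(algorithms.AES(rpik), modes.ECB()).encryptor()  # one-block encryption
    return enc.update(padded) + enc.finalize()

def encrypt_metadata(aemk: bytes, rpi: bytes, metadata: bytes) -> bytes:
    """Associated Encrypted Metadata: AES-128 in CTR mode keyed by AEMK_i, with RPI_{i,j} as the IV."""
    enc = Cipher(algorithms.AES(aemk), modes.CTR(rpi)).encryptor()
    return enc.update(metadata) + enc.finalize()

tek = os.urandom(16)                                    # daily Temporary Exposure Key
rpik, aemk = derive_keys(tek)
rpi = rolling_proximity_identifier(rpik, 2_650_000)     # example 10-minute interval number
aem = encrypt_metadata(aemk, rpi, b"\x40\x00\x00\x00")  # example (assumed) 4-byte metadata
```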
Once a registered health authority has confirmed the infection of a user, the user's Temporary Exposure Keys formula_8 and their respective interval numbers formula_9 for the past 14 days are uploaded to the central reporting server. Clients then download this report and individually recalculate every Rolling Proximity Identifier starting from interval number formula_9, matching it against the user's local encounter log. If a matching entry is found, then contact has been established and the app presents a notification to the user warning them of potential infection.
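Client-side matching can then be sketched as follows, reusing the derive_keys and rolling_proximity_identifier helpers from the version 1.1 sketch above; the 144-interval day and the data structures are illustrative assumptions.
```python
INTERVALS_PER_DAY = 144   # 24 hours / 10-minute intervals

def find_exposure(reported_keys, local_log):
    """reported_keys: iterable of (tek, start_interval) pairs downloaded from the reporting server.
    local_log: set of Rolling Proximity Identifiers this device has observed over the air."""
    for tek, start_interval in reported_keys:
        rpik, _ = derive_keys(tek)
        for j in range(start_interval, start_interval + INTERVALS_PER_DAY):
            if rolling_proximity_identifier(rpik, j) in local_log:
                return True   # contact established; the app would notify the user
    return False
```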
Version 1.2.
Version 1.2 of the protocol is identical to version 1.1, only introducing minor terminology changes.
Privacy.
Preservation of privacy was referred to as a major component of the protocol; it is designed so that no personally identifiable information can be obtained about the user or their device. Apps implementing Exposure Notification are only allowed to collect personal information from users on a voluntary basis. Consent must be obtained from the user to enable the system or to publicize a positive result through it, and apps using the system are prohibited from collecting location data. As an additional measure, the companies stated that they would sunset the protocol on a region-by-region basis once they determine that it is "no longer needed".
The Electronic Frontier Foundation raised concerns that the protocol was vulnerable to "linkage attacks": once a device's set of daily encryption keys has been revealed, sufficiently capable third parties who had recorded beacon traffic could retroactively turn that traffic into tracking information, although only for areas in which they had already recorded beacons, for a limited time segment, and only for users who have disclosed their COVID-19 status.
On April 16, the European Union started the process of assessing the proposed system for compatibility with privacy and data protection laws, including the General Data Protection Regulation (GDPR). On April 17, 2020, the UK's Information Commissioner's Office, a supervisory authority for data protection, published an opinion analyzing both Exposure Notification and the Decentralized Privacy-Preserving Proximity Tracing protocol, stating that the systems are "aligned with the principles of data protection by design and by default" (as mandated by the GDPR).
Deployment.
Exposure Notification is compatible with Android devices supporting Bluetooth Low Energy and running Android 6.0 "Marshmallow" and newer with Google Mobile Services. It is serviced via updates to Google Play Services, ensuring compatibility with the majority of Android devices released outside of Mainland China, and not requiring it to be integrated into Android firmware updates (which would hinder deployment by relying on individual OEMs). It is not compatible with devices that do not have GMS, such as Huawei devices released since May 2019. On iOS, EN is serviced via operating system updates. It was first introduced as part of iOS 13.5 on May 20, 2020. In December 2020, Apple released iOS 12.5, which backported EN support to iPhone models that cannot be upgraded to iOS 13, including iPhone 6 and older.
Exposure Notification apps may only be released by public health authorities. To discourage fragmentation, each country will typically be restricted to one app, although Apple and Google stated that they would accommodate regionalized approaches if a country elects to do so. Apple and Google released reference implementations for apps utilizing the system, which can be used as a base.
On September 1, 2020, the consortium announced "Exposure Notifications Express" (EN Express), a system designed to ease adoption of the protocol by health authorities by removing the need to develop an app themselves. Under this system, a health authority provides parameters specific to their implementation (such as thresholds, branding, messaging, and key servers), which is then processed to generate the required functionality. On Android, this data is used to generate an app, and a configuration profile that can also be deployed to users via Google Play Services without a dedicated app. On iOS, the functionality is integrated directly at the system level on iOS 13.7 and newer without a dedicated app.
The last information update on the Exposure Notification partnership was a year-end review issued by Google in December 2020, which stated: "we plan to keep you updated here with new information again next year". However, nothing was issued on the one-year anniversary of the launch of the Exposure Notification API, despite important changes on the pandemic front such as vaccination, variants, digital health passports, app adoption challenges, and growing interest in tracking QR codes (and notifying on that basis) for a mostly airborne-transmitted virus. The published Frequently Asked Questions (FAQ) document has not been revised since May 2020. Basic support remains available through the app stores for apps released by authorized public health agencies, including enforcement of the personal privacy protection framework, as demonstrated in the UK NHS challenge in support of its contact tracers.
In June 2021, Google faced allegations that it had automatically downloaded Massachusetts' "MassNotify" app to Android devices without user consent. Google clarified that it had not actually downloaded the app to user devices, and that Google Play Services was being used to deploy an EN Express configuration profile that would allow it to be enabled via the Google Settings app without needing to download a separate app.
Adoption.
As of May 21, at least 22 countries had received access to the protocol. Switzerland and Austria were among the first to back the protocol. On April 26, after initially backing PEPP-PT, Germany announced it would back Exposure Notification, followed shortly after by Ireland and Italy. Despite already adopting the centralised BlueTrace protocol, Australia's Department of Health and Digital Transformation Agency were investigating whether the protocol could be implemented to overcome limitations of its COVIDSafe app. On May 25, Switzerland became the first country to launch an app leveraging the protocol, SwissCovid, beginning with a small pilot group.
In England, the National Health Service (NHS) trialed both an in-house app on a centralized platform developed by its NHSX division, and a second app using Exposure Notification. On June 18, the NHS announced that it would focus on using Exposure Notification to complement manual contact tracing, citing tests on the Isle of Wight showing that it had better cross-device compatibility (and would also be compatible with other European approaches), but that its distance calculations were not as reliable as the centralized version of the app, an issue which was later rectified. Later, it was stated that the app would be supplemented by QR codes at venues. A study of the impact of Exposure Notification in England and Wales estimated that it averted 8,700 (95% confidence interval 4,700–13,500) deaths out of the 32,500 recorded from its introduction on 24 September 2020 to 31 December 2020.
Canada launched its COVID Alert app, co-developed in partnership with BlackBerry Limited and Shopify, on July 31 in Ontario. As of February 2022, only around 57,000 positive cases had been reported via the app, leading some critics to dismiss it as a failure.
In May 2020, Covid Watch launched the first calibration and beta testing pilot of the GAEN APIs in the United States at the University of Arizona. In Aug 2020, the app launched publicly for a phased roll-out in the state of Arizona.
The U.S. Association of Public Health Laboratories (APHL) stated in July 2020 that it was working with Apple, Google, and Microsoft on a national reporting server for use with the protocol, which it stated would ease adoption and interoperability between states.
In August 2020, Google stated that at least 20 U.S. states had expressed interest in using the protocol. In Alabama, the Alabama Department of Public Health, the University of Alabama at Birmingham, and the University of Alabama System deployed the "GuideSafe" app for university students returning to campus, which includes Exposure Notification features. On August 5, the Virginia Department of Health released its "COVIDWise" app, making it the first U.S. state to release an Exposure Notification-based app for the general public. North Dakota and Wyoming released an EN app known as "Care19 Alert", developed by ProudCrowd and using the APHL server; the app is a spin-off of an existing location-logging application that ProudCrowd had developed primarily for use by students travelling to college football away games.
Maryland, Nevada, Virginia, and Washington, D.C. have announced plans to use EN Express. In September, Delaware, New Jersey, New York, and Pennsylvania all adopted "COVID Alert" apps developed by NearForm, which are based on its COVID Tracker Ireland app. Later that month, the Norwegian Institute of Public Health announced that it would lead development of an Exposure Notification-based app for the country, which replaces a centralized app that had ceased operations in June 2020 after the Norwegian Data Protection Authority ruled that it violated privacy laws.
In Nov 2020, Bermuda launched the Wehealth Bermuda app developed by Wehealth, a Public Benefit Corporation, which was based on the Covid Watch app released in Arizona.
Alternatives.
Some countries, such as France, have pursued centralized approaches to digital contact tracing, in order to maintain records of personal information that can be used to assist in investigating cases. The French government asked Apple in April 2020 to allow apps to perform Bluetooth operations in the background, which would allow the government to create its own system independent of Exposure Notification.
On August 9, the Canadian province of Alberta announced plans to migrate to the EN-based COVID Alert from its BlueTrace-based ABTraceTogether app. This did not occur, and on November 6 Premier of Alberta Jason Kenney announced that the province would not do so, arguing that ABTraceTogether was "from our view, simply a better and more effective public health tool", and that they would be required to phase out ABTraceTogether if they did switch. British Columbia has also declined to adopt COVID Alert, with provincial health officer Bonnie Henry stating that COVID Alert was too "non-specific".
Australian officials have stated that COVIDSafe, which is based on Singapore's BlueTrace, will not shift away from its manual-intervention approach.
In the United States, states such as California and Massachusetts declined to use the technology, opting for manual contact tracing. California later reversed course and adopted the system in December 2020.
Chinese vendor Huawei (which cannot include Google software on its current Android products due to U.S. sanctions) added an OS-level DP-3T API known as "Contact Shield" to its Huawei Mobile Services stack in June 2020, which the company states is intended to be interoperable with Exposure Notification.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "tk"
},
{
"math_id": 1,
"text": "dtk_i = HKDF(tk, NULL, \\text{'CT-DTK'}||D_i, 16)"
},
{
"math_id": 2,
"text": "HKDF(\\text{Key, Salt, Data, OutputLength})"
},
{
"math_id": 3,
"text": "D_i"
},
{
"math_id": 4,
"text": "RPI_{i,j} = \\text{Truncate}(HMAC(dtk_i, \\text{'CT-RPI'}||TIN_j),16)"
},
{
"math_id": 5,
"text": "HMAC(\\text{Key, Data})"
},
{
"math_id": 6,
"text": "TIN_j"
},
{
"math_id": 7,
"text": "RPI_{i,j}"
},
{
"math_id": 8,
"text": "tek_i"
},
{
"math_id": 9,
"text": "i"
},
{
"math_id": 10,
"text": "RPIK_i"
},
{
"math_id": 11,
"text": "AEMK_i"
},
{
"math_id": 12,
"text": "RPIK_i = HKDF(tek_i, NULL, \\text{'EN-RPIK'},16)"
},
{
"math_id": 13,
"text": "AEMK_i = HKDF(tek_i, NULL, \\text{'EN-AEMK'},16)"
},
{
"math_id": 14,
"text": "RPI_{i,j} = AES128(RPIK_i, \\text{'EN-RPI'} || \\mathtt{0x000000000000} || ENIN_j)"
},
{
"math_id": 15,
"text": "AES128(\\text{Key, Data})"
},
{
"math_id": 16,
"text": "j"
},
{
"math_id": 17,
"text": "ENIN_j"
},
{
"math_id": 18,
"text": "\\text{Associated Encrypted Metadata}_{i,j} = AES128\\_CTR(AEMK_i, RPI_{i,j}, \\text{Metadata})"
},
{
"math_id": 19,
"text": "AES128\\_CTR(\\text{Key, IV, Data})"
}
] |
https://en.wikipedia.org/wiki?curid=63676221
|
63677270
|
Spoof surface plasmon
|
Spoof surface plasmons, also known as spoof surface plasmon polaritons and designer surface plasmons, are surface electromagnetic waves in microwave and terahertz regimes that propagate along planar interfaces with sign-changing permittivities. Spoof surface plasmons are a type of surface plasmon polariton, which ordinarily propagate along metal and dielectric interfaces in infrared and visible frequencies. Since surface plasmon polaritons cannot exist naturally in microwave and terahertz frequencies due to dispersion properties of metals, spoof surface plasmons necessitate the use of artificially-engineered metamaterials.
Spoof surface plasmons share the natural properties of surface plasmon polaritons, such as dispersion characteristics and subwavelength field confinement. They were first theorized by John Pendry et al.
Theory.
Surface plasmon polaritons (SPPs) result from the coupling of delocalized electron oscillations ("surface plasmons") to electromagnetic waves ("polaritons"). SPPs propagate along the interface between a positive- and a negative-permittivity material. These waves decay exponentially in the direction perpendicular to the interface ("evanescent field"). For a plasmonic medium that is stratified along the z-direction in Cartesian coordinates, the dispersion relation for SPPs can be obtained by solving Maxwell's equations:
formula_0
where formula_1 is the component of the wave vector along the propagation direction, formula_2 is the angular frequency, formula_3 is the speed of light in free space, and formula_4 and formula_5 are the relative permittivities of the two media forming the interface.
Per this relation, SPPs have shorter wavelengths than light in free space for a frequency band below surface plasmon frequency; this property, as well as subwavelength confinement, enables new applications in subwavelength optics and systems beyond the diffraction-limit. Nevertheless, for lower frequency bands such as microwave and terahertz, surface plasmon polariton modes are not supported; metals function approximately as perfect electrical conductors with imaginary dielectric functions in this regime. Per the effective medium approach, metal surfaces with subwavelength structural elements can mimic the plasma behaviour, resulting in artificial surface plasmon polariton excitations with similar dispersion behaviour.
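As a hedged numerical illustration of this dispersion relation, the sketch below evaluates it for a lossless Drude metal against vacuum; the plasma frequency and the frequency range are illustrative assumptions, not values from the text.
```python
import numpy as np

c = 3.0e8                             # speed of light in free space (m/s)
omega_p = 1.2e16                      # assumed metal plasma frequency (rad/s)
eps_2 = 1.0                           # dielectric side (vacuum)

omega = np.linspace(0.01, 0.70, 500) * omega_p
eps_1 = 1.0 - (omega_p / omega) ** 2  # lossless Drude permittivity of the metal

k_x = (omega / c) * np.sqrt(eps_1 * eps_2 / (eps_1 + eps_2))

# Below the surface plasmon frequency omega_p / sqrt(1 + eps_2), k_x exceeds
# omega / c, i.e. the SPP wavelength is shorter than that of free-space light.
```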
For the canonical case of a metamaterial medium that is formed by thin metallic wires on a periodic square lattice, the effective relative permittivity can be represented by the Drude model formula:
formula_6
formula_7
where formula_8 is the effective plasma frequency, formula_9 is the vacuum permittivity, formula_10 is the lattice constant of the wire array, formula_11 is the wire radius, and formula_12 is the conductivity of the metal.
Methods and applications.
The use of subwavelength structures to induce low-frequency plasmonic excitations was first theorized by John Pendry et al. in 1996; Pendry proposed that a periodic lattice of thin metallic wires with a radius of 1 μm could be used to support surface-bound modes, with a plasma cut-off frequency of 8.2 GHz. In 2004, Pendry et al. extended the approach to metal surfaces that are perforated by holes, terming the artificial SPP excitations as "spoof surface plasmons."
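As a worked example of the formula above, the following sketch evaluates the effective plasma frequency for thin wires of radius 1 μm; the lattice spacing of 5 mm is an assumed value chosen so that the result reproduces the roughly 8.2 GHz cut-off quoted for Pendry's proposal.
```python
import numpy as np

c = 3.0e8        # speed of light (m/s)
a = 5.0e-3       # lattice constant (m) - assumed for illustration
r = 1.0e-6       # wire radius (m), as in Pendry's proposal

omega_p_sq = 2.0 * np.pi * c**2 / (a**2 * np.log(a / r))
f_p = np.sqrt(omega_p_sq) / (2.0 * np.pi)
print(f"Effective plasma cut-off frequency: {f_p / 1e9:.1f} GHz")  # about 8.2 GHz
```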
In 2006, terahertz pulse propagation in planar metallic structures with holes was demonstrated via FDTD simulations. Martin-Cano et al. realized the spatial and temporal modulation of guided terahertz modes via metallic parallelepiped structures, which they termed "domino plasmons." Designer spoof plasmonic structures were also tailored to improve the performance of terahertz quantum cascade lasers in 2010.
Spoof surface plasmons were proposed as a possible solution for decreasing crosstalk in microwave integrated circuits, transmission lines and waveguides. In 2013, Ma et al. demonstrated a matched conversion from a coplanar waveguide with a characteristic impedance of 50 Ω to a spoof-plasmonic structure. In 2014, integration of a commercial low-noise amplifier with spoof plasmonic structures was realized; the system reportedly worked from 6 to 20 GHz with a gain of around 20 dB. Kianinejad et al. also reported the design of a slow-wave spoof-plasmonic transmission line; conversion from quasi-TEM microstrip modes to TM spoof plasmon modes was also demonstrated.
Khanikaev et al. reported nonreciprocal spoof surface plasmon modes in a structured conductor embedded in an asymmetric magneto-optical medium, which results in one-way transmission. Pan et al. observed the rejection of certain spoof plasmon modes upon the introduction of electrically resonant metamaterial particles to the spoof plasmonic strip. Localized spoof surface plasmons were also demonstrated for metallic disks at microwave frequencies.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "k_x=\\frac{\\omega}{c} \\left( \\frac{\\varepsilon_1 \\varepsilon_2}{\\varepsilon_1 + \\varepsilon_2} \\right)^{\\frac{1}{2}}"
},
{
"math_id": 1,
"text": "k_x"
},
{
"math_id": 2,
"text": "\\omega"
},
{
"math_id": 3,
"text": "c"
},
{
"math_id": 4,
"text": "\\varepsilon_1"
},
{
"math_id": 5,
"text": "\\varepsilon_2"
},
{
"math_id": 6,
"text": "\\varepsilon_{eff}=1-\\frac{\\omega_p^2}{\\omega \\left(\\omega + i \\frac{\\varepsilon_0 a^2 \\omega_p^2}{\\pi r^2 \\sigma} \\right)}"
},
{
"math_id": 7,
"text": "\\omega_p^2=\\frac{2\\pi c^2}{a^2 ln(a/r)}"
},
{
"math_id": 8,
"text": "\\omega_p"
},
{
"math_id": 9,
"text": "\\varepsilon_0"
},
{
"math_id": 10,
"text": "a"
},
{
"math_id": 11,
"text": "r"
},
{
"math_id": 12,
"text": "\\sigma"
}
] |
https://en.wikipedia.org/wiki?curid=63677270