id | title | text | formulas | url
---|---|---|---|---|
56861922
|
White noise analysis
|
In probability theory, a branch of mathematics, white noise analysis, otherwise known as Hida calculus, is a framework for infinite-dimensional and stochastic calculus, based on the Gaussian white noise probability space, to be compared with Malliavin calculus based on the Wiener process. It was initiated by Takeyuki Hida in his 1975 Carleton Mathematical Lecture Notes.
The term white noise was first used for signals with a flat spectrum.
White noise measure.
The white noise probability measure formula_0 on the space formula_1 of tempered distributions has the characteristic function
formula_2
Brownian motion in white noise analysis.
A version of Wiener's Brownian motion formula_3 is obtained by the dual pairing
formula_4
where formula_5 is the indicator function of the interval formula_6. Informally
formula_7
and in a generalized sense
formula_8
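The informal relations above can be illustrated numerically: discretizing white noise as independent Gaussians with variance 1/Δt, the pairing with the indicator of [0, t) becomes a cumulative sum that produces a Brownian path. A minimal sketch (the grid size and random seed are arbitrary choices, not part of the theory):

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1e-3
t = np.arange(0, 1, dt)

# Discretized white noise: independent Gaussians with variance 1/dt,
# so that <omega, f> ~ sum_k omega_k f(t_k) dt has variance ~ int f^2 dt
omega = rng.standard_normal(t.size) / np.sqrt(dt)

# B(t) = <omega, 1_{[0,t)}> becomes a cumulative sum; its increments are
# independent with Var B(t) ~ t, as for Wiener's Brownian motion
B = np.cumsum(omega) * dt
print(B[-1], np.var(np.diff(B)) / dt)   # variance of increments / dt ~ 1
```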
Hilbert space.
Fundamental to white noise analysis is the Hilbert space
formula_9
generalizing the Hilbert spaces formula_10 to infinite dimension.
Wick polynomials.
An orthonormal basis in this Hilbert space, generalizing that of Hermite polynomials, is given by the so-called "Wick", or "normal ordered" polynomials formula_11 with formula_12 and formula_13
with normalization
formula_14
entailing the Itô-Segal-Wiener isomorphism of the white noise Hilbert space formula_15 with Fock space:
formula_16
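For a single test function f with unit L² norm, the pairing ⟨ω, f⟩ is a standard Gaussian under the white noise measure, and ⟨:ωⁿ:, f⊗ⁿ⟩ is, in the standard identification, the probabilists' Hermite polynomial Heₙ(⟨ω, f⟩); the normalization above then reads E[Heₙ(X)²] = n! for a standard Gaussian X. A small Monte Carlo sketch of this identity (sample size and seed are arbitrary):

```python
import math
import numpy as np

rng = np.random.default_rng(0)
# With ||f||_{L^2} = 1, the pairing <omega, f> is a standard Gaussian under mu
X = rng.standard_normal(1_000_000)

for n in range(1, 6):
    # probabilists' Hermite polynomial He_n, realizing <:omega^n:, f^{otimes n}>
    He_n = np.polynomial.hermite_e.hermeval(X, [0] * n + [1])
    print(n, round(float(np.mean(He_n**2)), 2), math.factorial(n))  # sample mean ~ n!
```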
The "chaos expansion"
formula_17
in terms of Wick polynomials corresponds to the expansion in terms of multiple Wiener integrals. Brownian martingales formula_18 are characterized by kernel functions formula_19 depending on formula_20 only through a "cut-off":
formula_21
Gelfand triples.
Suitable restrictions of the kernel function formula_22 to be smooth and rapidly decreasing in formula_23 and formula_24 give rise to spaces of white noise test functions formula_25, and, by duality, to spaces of generalized functions formula_26 of white noise, with
formula_27
generalizing the scalar product in formula_15. Examples are the Hida triple, with
formula_28
or the more general Kondratiev triples.
T- and S-transform.
Using the white noise test functions
formula_29
one introduces the "T-transform" of white noise distributions formula_26 by setting
formula_30
Likewise, using
formula_31
one defines the "S-transform" of white noise distributions formula_26 by
formula_32
It is worth noting that for generalized functions formula_33, the S-transform is just
formula_34
Depending on the choice of Gelfand triple, the white noise test functions and distributions are characterized by corresponding growth and analyticity properties of their S- or T-transforms.
Characterization theorem.
The function formula_35 is the T-transform of a (unique) Hida distribution formula_26 iff for all formula_36 the function formula_37 is analytic in the whole complex plane and of second order exponential growth, i.e. formula_38, where formula_39 is some continuous quadratic form on formula_40. The same is true for S-transforms, and similar characterization theorems hold for the more general Kondratiev distributions.
Calculus.
For test functions formula_41, partial, directional derivatives exist:
formula_42
where formula_43 may be varied by any generalized function formula_44. In particular, for the Dirac distribution formula_45 one defines the "Hida derivative", denoted by
formula_46
Gaussian integration by parts yields the dual operator on distribution space
formula_47
An infinite-dimensional gradient
formula_48
is given by
formula_49
The Laplacian formula_50 ("Laplace–Beltrami operator") with
formula_51
plays an important role in infinite-dimensional analysis and is the image of the Fock space number operator.
Stochastic integrals.
A stochastic integral, the Hitsuda–Skorokhod integral, can be defined for suitable families formula_52 of white noise distributions as a Pettis integral
formula_53
generalizing the Itô integral beyond adapted integrands.
Applications.
In general terms, there are two features of white noise analysis that have been prominent in applications.
First, white noise is a generalized stochastic process with independent values at each time. Hence it plays the role of a generalized system of independent coordinates, in the sense that in various contexts it has been fruitful to express more general processes occurring, e.g., in engineering or mathematical finance, in terms of white noise.
Second, the characterization theorem given above allows various heuristic expressions to be identified as generalized functions of white noise. This is particularly effective to attribute a well-defined mathematical meaning to so-called "functional integrals". Feynman integrals in particular have been given rigorous meaning for large classes of quantum dynamical models.
Noncommutative extensions of the theory have developed under the name of quantum white noise. Finally, the rotational invariance of the white noise characteristic function provides a framework for representations of infinite-dimensional rotation groups.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mu"
},
{
"math_id": 1,
"text": "S'(\\mathbb{R})"
},
{
"math_id": 2,
"text": "C(f)=\\int_{S'(\\mathbb{R})}\\exp \\left( i\\left\\langle \\omega ,f\\right\\rangle\n\\right) \\, d\\mu (\\omega )=\\exp \\left( -\\frac{1}{2}\\int_{\\mathbb{R}} f^2(t) \\, dt\\right), \\quad f\\in S(\\mathbb{R}). "
},
{
"math_id": 3,
"text": "B(t)"
},
{
"math_id": 4,
"text": "B(t) = \\langle \\omega, 1\\!\\!1_{[0,t)}\\rangle, "
},
{
"math_id": 5,
"text": "1\\!\\!1_{[0,t)}"
},
{
"math_id": 6,
"text": "[0,t)\n"
},
{
"math_id": 7,
"text": "B(t)=\\int_0^t \\omega(t) \\, dt"
},
{
"math_id": 8,
"text": "\\omega(t)=\\frac{d B(t)}{dt}."
},
{
"math_id": 9,
"text": "(L^2):=L^2\\left( S'(\\mathbb{R}),\\mu \\right), "
},
{
"math_id": 10,
"text": "L^2(\\mathbb{R}^n,e^{-\\frac{1}{2} x^2}d^n x) "
},
{
"math_id": 11,
"text": "\\left\\langle {:\\omega^n:} , f_n\\right\\rangle "
},
{
"math_id": 12,
"text": "{:\\omega^n:} \\in S'(\\mathbb{R}^n) "
},
{
"math_id": 13,
"text": "f_n \\in S(\\mathbb{R}^n) "
},
{
"math_id": 14,
"text": "\\int_{S'(\\mathbb{R})}\\left\\langle :\\omega^n:,f_n \\right\\rangle^2 \\, d\\mu(\\omega) = n!\\int f_{n}^2(x_1,\\ldots,x_n) \\, d^n x, "
},
{
"math_id": 15,
"text": "(L^2) "
},
{
"math_id": 16,
"text": "L^2\\left( S'(\\mathbb{R}),\\mu \\right) \\simeq \\bigoplus\\limits_{n=0}^\\infty \\operatorname{Sym} L^2(\\mathbb{R}^n,n! \\, d^n x). "
},
{
"math_id": 17,
"text": "\\varphi(\\omega) =\\sum_n \\left\\langle :\\omega^n:, f_n\\right\\rangle "
},
{
"math_id": 18,
"text": "M_t(\\omega) "
},
{
"math_id": 19,
"text": "f_n "
},
{
"math_id": 20,
"text": "t "
},
{
"math_id": 21,
"text": "f_n(x_1,\\ldots,x_n;t)=\n\\begin{cases}\nf_n (x_1,\\ldots,x_n) & \\text{if } i x_i\\leq t, \\\\\n0 & \\text{otherwise}.\n\\end{cases}\n"
},
{
"math_id": 22,
"text": "\\varphi _{n} "
},
{
"math_id": 23,
"text": "x "
},
{
"math_id": 24,
"text": "n "
},
{
"math_id": 25,
"text": "\\varphi "
},
{
"math_id": 26,
"text": "\\Psi "
},
{
"math_id": 27,
"text": "\\left\\langle \\! \\left\\langle \\Psi ,\\varphi \\right\\rangle \\!\\right\\rangle\n:=\\sum_n n!\\left\\langle \\psi_n,\\varphi_n \\right\\rangle "
},
{
"math_id": 28,
"text": "\\varphi \\in (S)\\subset (L^2)\\subset (S)^\\ast \\ni \\Psi "
},
{
"math_id": 29,
"text": "\\varphi_f(\\omega ):=\\exp \\left( i\\left\\langle \\omega ,f\\right\\rangle \\right) \\in (S),\\quad f \\in S(\\mathbb{R}) "
},
{
"math_id": 30,
"text": "T\\Psi (f):=\\left\\langle \\!\\left\\langle \\Psi ,\\varphi _{f}\\right\\rangle\n\\!\\right\\rangle . "
},
{
"math_id": 31,
"text": "\\phi_f(\\omega ):=\\exp \\left( -\\frac{1}{2}\\int f^2(t) \\, dt\\right) \\exp\\left( -\\left\\langle \\omega ,f\\right\\rangle \\right) \\in (S) "
},
{
"math_id": 32,
"text": "S\\Psi (f):=\\left\\langle \\!\\left\\langle \\Psi ,\\phi_f\\right\\rangle\\!\n\\right\\rangle,\\quad f \\in S(\\mathbb{R}). "
},
{
"math_id": 33,
"text": "\\Psi"
},
{
"math_id": 34,
"text": "S\\Psi (f)=\\sum n!\\left\\langle \\psi_n,f^{\\otimes n}\\right\\rangle. "
},
{
"math_id": 35,
"text": "G(f) "
},
{
"math_id": 36,
"text": "f_1,f_2\\in S(R), "
},
{
"math_id": 37,
"text": "z\\mapsto G(zf_1+f_2) "
},
{
"math_id": 38,
"text": "\\left\\vert G(\\ f)\\right\\vert <ae^{bK(f,f)}, "
},
{
"math_id": 39,
"text": "K "
},
{
"math_id": 40,
"text": "S'(\\mathbb{R})\\times S'(\\mathbb{R})"
},
{
"math_id": 41,
"text": "\\varphi \\in (S) "
},
{
"math_id": 42,
"text": "\\partial_\\eta \\varphi (\\omega ):=\\lim_{\\varepsilon \\rightarrow 0}\\frac{\\varphi (\\omega +\\varepsilon \\eta )-F(\\omega )} \\varepsilon "
},
{
"math_id": 43,
"text": "\\omega "
},
{
"math_id": 44,
"text": "\\eta "
},
{
"math_id": 45,
"text": "\\eta =\\delta _{t} "
},
{
"math_id": 46,
"text": "\\partial_t \\varphi (\\omega ):=\\lim_{\\varepsilon \\rightarrow 0} \\frac{\\varphi(\\omega +\\varepsilon \\delta_t)-F(\\omega )} \\varepsilon. "
},
{
"math_id": 47,
"text": "\\partial_t^\\ast =-\\partial_t+\\omega(t) "
},
{
"math_id": 48,
"text": "\\nabla :(S)\\rightarrow L^2(R,dt) \\otimes (S) "
},
{
"math_id": 49,
"text": "\\nabla F(t,\\omega) =\\partial_t F(\\omega). "
},
{
"math_id": 50,
"text": "\\triangle "
},
{
"math_id": 51,
"text": "-\\triangle =\\int dt\\;\\partial_t^\\ast \\partial_t \\geq 0 "
},
{
"math_id": 52,
"text": "\\Psi (t) "
},
{
"math_id": 53,
"text": "\\int \\partial_t^\\ast \\Psi (t) \\, dt\\in (S)^\\ast, "
}
] |
https://en.wikipedia.org/wiki?curid=56861922
|
56866766
|
National Self-Government of Germans in Hungary
|
German political organisation in Hungary
The National Self-Government of Germans in Hungary (known by its German abbreviation LdU and its Hungarian abbreviation MNOÖ) is the nationwide representative organization of the German minority in Hungary.
History.
After the election of minority self-governments in 1994, the electoral assembly of the German minority elected the political and cultural representative body of the Germans of Hungary, the National Self-Government of the Germans in Hungary, on 11 March 1995. Using the opportunities offered by Act CLXXIX of 2011 on the rights of nationalities, it seeks to implement a modern minority policy.
Aims.
Its main aims are to preserve and support the language, the intellectual heritage, the historical traditions and the German identity in Hungary. This includes the preservation of the German mother tongue in cultural life, the teaching of the German language in the Hungarian school system, and the cultivation of international relations through partnerships and exchange programs.
The implementation of cultural autonomy, above all the takeover of German institutions in Hungary, constitutes the main activity of the LdU.
At the same time, it supports co-operation of Hungary and its neighbours, above all with German-speaking countries.
It is the umbrella organization of 406 local minority self-governments and more than 500 cultural groups and other German associations of Hungary.
Electoral results.
Since 2014, voters belonging to ethnic minorities in Hungary have been able to vote for nationality lists. A minority obtains a preferential mandate if its list reaches one quarter of one ninety-third (which is formula_0) of the list votes. Nationalities that do not obtain a mandate may send a nationality spokesman to the National Assembly.
In the 2018 parliamentary election, Imre Ritter was elected as the first German parliamentary spokesman in the history of the National Assembly.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\frac{1}{4 \\times 93}=\\frac{1}{372}\\approx0.2688\\%"
}
] |
https://en.wikipedia.org/wiki?curid=56866766
|
56874
|
Horizontal line test
|
Test for the injectivity of a function
In mathematics, the horizontal line test is a test used to determine whether a function is injective (i.e., one-to-one).
In calculus.
A "horizontal line" is a straight, flat line that goes from left to right. Given a function formula_0 (i.e. from the real numbers to the real numbers), we can decide if it is injective by looking at horizontal lines that intersect the function's graph. If any horizontal line formula_1 intersects the graph in more than one point, the function is not injective. To see this, note that the points of intersection have the same y-value (because they lie on the line formula_1) but different x values, which by definition means the function cannot be injective.
Variations of the horizontal line test can be used to determine whether a function is surjective or bijective.
In set theory.
Consider a function formula_2 with its corresponding graph as a subset of the Cartesian product formula_3. Consider the horizontal lines in formula_3 :formula_4. The function "f" is injective if and only if each horizontal line intersects the graph at most once. In this case the graph is said to pass the horizontal line test. If any horizontal line intersects the graph more than once, the function fails the horizontal line test and is not injective.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "f \\colon \\mathbb{R} \\to \\mathbb{R}"
},
{
"math_id": 1,
"text": "y=c"
},
{
"math_id": 2,
"text": "f \\colon X \\to Y"
},
{
"math_id": 3,
"text": "X \\times Y"
},
{
"math_id": 4,
"text": "\\{(x,y_0) \\in X \\times Y: y_0 \\text{ is constant}\\} = X \\times \\{y_0\\}"
}
] |
https://en.wikipedia.org/wiki?curid=56874
|
56875243
|
Batchelor–Chandrasekhar equation
|
The Batchelor–Chandrasekhar equation is the evolution equation for the scalar functions defining the two-point velocity correlation tensor of homogeneous axisymmetric turbulence, named after George Batchelor and Subrahmanyan Chandrasekhar. They developed the theory of homogeneous axisymmetric turbulence based on Howard P. Robertson's work on isotropic turbulence using an invariant principle. This equation is an extension of the Kármán–Howarth equation from isotropic to axisymmetric turbulence.
Mathematical description.
The theory is based on the principle that the statistical properties are invariant for rotations about a particular direction formula_0 (say), and reflections in planes containing formula_0 and perpendicular to formula_0. This type of axisymmetry is sometimes referred to as strong axisymmetry or axisymmetry in the strong sense, opposed to "weak axisymmetry", where reflections in planes perpendicular to formula_0 or planes containing formula_0 are not allowed.
Let the two-point correlation for homogeneous turbulence be
formula_1
A single scalar describes this correlation tensor in isotropic turbulence, whereas it turns out that for axisymmetric turbulence two scalar functions are enough to uniquely specify the correlation tensor. In fact, Batchelor was unable to express the correlation tensor in terms of two scalar functions and ended up with four scalar functions; nevertheless, Chandrasekhar showed that it can be expressed with only two scalar functions by expressing the solenoidal axisymmetric tensor as the curl of a general axisymmetric skew tensor (a reflectionally non-invariant tensor).
Let formula_0 be the unit vector which defines the axis of symmetry of the flow; then we have two scalar variables, formula_2 and formula_3. Since formula_4, it is clear that formula_5 represents the cosine of the angle between formula_0 and formula_6. Let formula_7 and formula_8 be the two scalar functions that describe the correlation function; then the most general axisymmetric tensor which is solenoidal (incompressible) is given by
formula_9
where
formula_10
The differential operators appearing in the above expressions are defined as
formula_11
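The defining relations of these operators can be checked symbolically, for instance that applying D_μ twice indeed gives (1/r²)∂²/∂μ², as stated. A brief sympy sketch (the placeholder function name Q is an arbitrary choice):

```python
import sympy as sp

r, mu = sp.symbols('r mu', positive=True)
Q = sp.Function('Q')(r, mu)

D_r  = lambda f: sp.diff(f, r) / r - mu / r**2 * sp.diff(f, mu)   # D_r as defined above
D_mu = lambda f: sp.diff(f, mu) / r                               # D_mu as defined above

# D_{mu mu} = D_mu D_mu should coincide with (1/r^2) d^2/d mu^2
print(sp.simplify(D_mu(D_mu(Q)) - sp.diff(Q, mu, 2) / r**2))      # -> 0
```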
Then the evolution equations (equivalent form of Kármán–Howarth equation) for the two scalar functions are given by
formula_12
where formula_13 is the kinematic viscosity and
formula_14
The scalar functions formula_15 and formula_16 are related to the triple correlation tensor formula_17 in exactly the same way that formula_7 and formula_8 are related to the two-point correlation tensor formula_18. The triple correlation tensor is
formula_19
Here formula_20 is the density of the fluid.
The trace of the correlation tensor is
formula_21
Decay of the turbulence.
During decay, if we neglect the triple correlation scalars, then the equations reduce to axially symmetric five-dimensional heat equations,
formula_27
These five-dimensional heat equations were solved by Chandrasekhar. The initial conditions can be expressed in terms of Gegenbauer polynomials (without loss of generality),
formula_28
where formula_29 are Gegenbauer polynomials. The required solutions are
formula_30
where formula_31 is the Bessel function of the first kind.
As formula_32 the solutions become independent of formula_5:
formula_33
where
formula_34
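One can check symbolically that the μ-independent late-time profile e^(−r²/(8νt))/(νt)^(5/2) appearing above solves the radial five-dimensional heat equation ∂Q/∂t = 2ν(∂²Q/∂r² + (4/r)∂Q/∂r), i.e. the operator Δ with the μ-derivatives dropped. A brief sympy sketch:

```python
import sympy as sp

r, t, nu = sp.symbols('r t nu', positive=True)

# Late-time, mu-independent profile (up to a constant factor)
Q = sp.exp(-r**2 / (8 * nu * t)) / (nu * t) ** sp.Rational(5, 2)

# Radial part of the five-dimensional Laplacian (mu-derivatives dropped)
lap5 = sp.diff(Q, r, 2) + 4 / r * sp.diff(Q, r)

# Residual of the heat equation dQ/dt = 2 nu Delta Q; should simplify to zero
print(sp.simplify(sp.diff(Q, t) - 2 * nu * lap5))   # -> 0
```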
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\boldsymbol{\\lambda}"
},
{
"math_id": 1,
"text": "R_{ij}(\\mathbf{r},t) = \\overline{u_i(\\mathbf{x},t)u_j(\\mathbf{x}+\\mathbf{r},t)}."
},
{
"math_id": 2,
"text": "\\mathbf{r}\\cdot\\mathbf{r}=r^2"
},
{
"math_id": 3,
"text": "\\mathbf{r}\\cdot\\boldsymbol{\\lambda}=r\\mu"
},
{
"math_id": 4,
"text": "|\\boldsymbol{\\lambda}|=1"
},
{
"math_id": 5,
"text": "\\mu"
},
{
"math_id": 6,
"text": "\\mathbf{r}"
},
{
"math_id": 7,
"text": "Q_1(r,\\mu,t)"
},
{
"math_id": 8,
"text": "Q_2(r,\\mu,t)"
},
{
"math_id": 9,
"text": "R_{ij} = Ar_ir_j + B\\delta_{ij} + C\\lambda_i\\lambda_j + D \\left (\\lambda_i r_j + r_i \\lambda_j \\right ) "
},
{
"math_id": 10,
"text": "\\begin{align}\nA &= \\left (D_r-D_{\\mu\\mu} \\right )Q_1+ D_r Q_2, \\\\\nB &= \\left [- \\left (r^2D_r+r\\mu D_\\mu+2 \\right )+r^2 \\left (1-\\mu^2 \\right )D_{\\mu\\mu}-r\\mu D_\\mu \\right ]Q_1 - \\left [r^2 \\left (1-\\mu^2 \\right )D_r+1 \\right ]Q_2, \\\\\nC &= -r^2 D_{\\mu\\mu}Q_1 + \\left (r^2 D_r+1 \\right )Q_2, \\\\\nD &= \\left (r\\mu D_\\mu +1 \\right )D_\\mu Q_1 - r\\mu D_r Q_2.\n\\end{align}"
},
{
"math_id": 11,
"text": "\\begin{align}\nD_r &= \\frac{1}{r}\\frac{\\partial }{\\partial r} - \\frac{\\mu}{r^2} \\frac{\\partial }{\\partial \\mu}, \\\\\nD_\\mu &= \\frac{1}{r} \\frac{\\partial }{\\partial \\mu}, \\\\\nD_{\\mu\\mu} &= D_\\mu D_\\mu = \\frac{1}{r^2} \\frac{\\partial^2 }{\\partial \\mu^2}.\n\\end{align}"
},
{
"math_id": 12,
"text": "\\begin{align}\n\\frac{\\partial Q_1}{\\partial t} &= 2\\nu\\Delta Q_1 + S_1, \\\\\n\\frac{\\partial Q_2}{\\partial t} &= 2\\nu \\left (\\Delta Q_2 + 2 D_{\\mu\\mu} Q_1 \\right ) + S_2\n\\end{align}"
},
{
"math_id": 13,
"text": "\\nu"
},
{
"math_id": 14,
"text": "\\Delta = \\frac{\\partial^2}{\\partial r^2} + \\frac{4}{r}\\frac{\\partial }{\\partial r} + \\frac{1-\\mu^2}{r^2}\\frac{\\partial^2 }{\\partial \\mu^2} - \\frac{4\\mu}{r^2}\\frac{\\partial }{\\partial \\mu}."
},
{
"math_id": 15,
"text": "S_1(r,\\mu,t)"
},
{
"math_id": 16,
"text": "S_2(r,\\mu,t)"
},
{
"math_id": 17,
"text": "S_{ij}"
},
{
"math_id": 18,
"text": "R_{ij}"
},
{
"math_id": 19,
"text": "S_{ij} = \\frac{\\partial}{\\partial r_k} \\left( \\overline{u_i(\\mathbf{x},t) u_k(\\mathbf{x},t)u_j(\\mathbf{x}+\\mathbf{r},t)}-\\overline{u_i(\\mathbf{x},t) u_k(\\mathbf{x}+\\mathbf{r},t)u_j(\\mathbf{x}+\\mathbf{r},t)}\\right) + \\frac{1}{\\rho} \\left(\\frac{\\overline{\\partial p(\\mathbf{x},t) u_j(\\mathbf{x} + \\mathbf{r},t)}}{\\partial r_i} - \\frac{\\overline{\\partial p(\\mathbf{x} + \\mathbf{r},t) u_i(\\mathbf{x},t)}}{\\partial r_j} \\right)."
},
{
"math_id": 20,
"text": "\\rho"
},
{
"math_id": 21,
"text": "R_{ii} =r^2 \\left (1-\\mu^2 \\right ) \\left (D_{\\mu\\mu}Q_1-D_rQ_2 \\right )-2Q_2-2 \\left (r^2D_r+2r\\mu D_\\mu +3 \\right )Q_1."
},
{
"math_id": 22,
"text": "R_{ij}(-\\mathbf{r})=R_{ji}(\\mathbf{r})"
},
{
"math_id": 23,
"text": "Q_1"
},
{
"math_id": 24,
"text": "Q_2"
},
{
"math_id": 25,
"text": "r"
},
{
"math_id": 26,
"text": "r\\mu"
},
{
"math_id": 27,
"text": "\\begin{align}\n\\frac{\\partial Q_1}{\\partial t} &= 2\\nu\\Delta Q_1, \\\\\n\\frac{\\partial Q_2}{\\partial t} &= 2\\nu \\left ( \\Delta Q_2 + 2 D_{\\mu\\mu} Q_1 \\right ) \n\\end{align}"
},
{
"math_id": 28,
"text": "\\begin{align}\nQ_1(r,\\mu,0) &= \\sum_{n=0}^\\infty q_{2n}^{(1)}(r)C_{2n}^{\\frac{3}{2}}(\\mu), \\\\\nQ_2(r,\\mu,0) &= \\sum_{n=0}^\\infty q_{2n}^{(2)}(r)C_{2n}^{\\frac{3}{2}}(\\mu),\n\\end{align}"
},
{
"math_id": 29,
"text": "C_{2n}^{\\frac{3}{2}}(\\mu)"
},
{
"math_id": 30,
"text": "\\begin{align}\nQ_1(r,\\mu,t) &= \\frac{e^{-\\frac{r^2}{8\\nu t}}}{32(\\nu t)^{\\frac{5}{2}}} \\sum_{n=0}^\\infty C_{2n}^{\\frac{3}{2}}(\\mu) \\int_0^\\infty e^{-\\frac{r'^2}{8\\nu t}}r'^4 q_{2n}^{(1)}(r')\\frac{I_{2n+\\frac{3}{2}} \\left (\\frac{rr'}{4\\nu t} \\right )}{\\left (\\frac{rr'}{4\\nu t} \\right )^{\\frac{3}{2}}}\\ dr', \\\\[8pt]\nQ_2(r,\\mu,t) &= \\frac{e^{-\\frac{r^2}{8\\nu t}}}{32(\\nu t)^{\\frac{5}{2}}}\\sum_{n=0}^\\infty C_{2n}^{\\frac{3}{2}}(\\mu) \\int_0^\\infty e^{-\\frac{r'^2}{8\\nu t}}r'^4 q_{2n}^{(2)}(r')\\frac{I_{2n+\\frac{3}{2}}\\left (\\frac{rr'}{4\\nu t} \\right )}{\\left (\\frac{rr'}{4\\nu t} \\right )^{\\frac{3}{2}}}\\ dr' +4\\nu\\int_0^t\\frac{dt'}{[8\\pi\\nu(t-t')]^{\\frac{5}{2}}} \\int\\cdots\\int\\left(\\frac{1}{r^2}\\frac{\\partial^2 Q_1}{\\partial \\mu^2}\\right)_{r',\\mu',t'} e^{-\\frac{|r-r'|^2}{8\\nu(t-t')}}\\ dx_1'\\cdots dx_5',\n\\end{align}"
},
{
"math_id": 31,
"text": "I_{2n+\\frac{3}{2}}"
},
{
"math_id": 32,
"text": "t\\to\\infty,"
},
{
"math_id": 33,
"text": "\\begin{align}\nQ_1(r,\\mu,t) &\\to -\\frac{\\Lambda_1 e^{-\\frac{r^2}{8\\nu t}}}{48 \\sqrt{2\\pi}(\\nu t)^{\\frac{5}{2}}}, \\\\\nQ_2(r,\\mu,t) &\\to -\\frac{\\Lambda_2 e^{-\\frac{r^2}{8\\nu t}}}{48 \\sqrt{2\\pi}(\\nu t)^{\\frac{5}{2}}},\n\\end{align}"
},
{
"math_id": 34,
"text": "\\begin{align}\n\\Lambda_1 &=-\\int_0^\\infty q_{2n}^{(1)}(r)\\ dr \\\\\n\\Lambda_2 &=-\\int_0^\\infty q_{2n}^{(2)}(r)\\ dr\n\\end{align}"
}
] |
https://en.wikipedia.org/wiki?curid=56875243
|
56877042
|
Prandtl–Batchelor theorem
|
In fluid dynamics, the Prandtl–Batchelor theorem states that "if in a two-dimensional laminar flow at high Reynolds number closed streamlines occur, then the vorticity in the closed streamline region must be a constant". A similar statement holds for axisymmetric flows. The theorem is named after Ludwig Prandtl and George Batchelor. Prandtl stated this theorem by way of argument in his celebrated 1904 paper; George Batchelor, unaware of this work, proved the theorem in 1956. The problem was also studied in the same year by Richard Feynman and Paco Lagerstrom, and by W. W. Wood in 1957.
Mathematical proof.
At high Reynolds numbers, the two-dimensional problem governed by the two-dimensional Euler equations reduces to solving a problem for the stream function formula_0, which satisfies
formula_1
where formula_2 is the only non-zero vorticity component in the formula_3-direction of the vorticity vector. As it stands, the problem is ill-posed since the vorticity distribution formula_4 can have an infinite number of possibilities, all of which satisfy the equation and the boundary condition. This is not true if no streamline is closed, in which case every streamline can be traced back to the boundary formula_5, where formula_0 and therefore its corresponding vorticity formula_4 are prescribed. The difficulty arises only when there are some closed streamlines inside the domain that do not connect to the boundary, and one may suppose that at high Reynolds numbers formula_4 is not uniquely defined in regions where closed streamlines occur. The Prandtl–Batchelor theorem, however, asserts that this is not the case, and formula_4 is uniquely defined in such cases, through a proper examination of the limiting process formula_6.
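The stream-function formulation used here can be checked symbolically: taking u = ∂ψ/∂y, v = −∂ψ/∂x gives an incompressible field whose only vorticity component satisfies ω = −∇²ψ, consistent with the equation above. A short sympy sketch:

```python
import sympy as sp

x, y = sp.symbols('x y')
psi = sp.Function('psi')(x, y)

# Velocity field derived from the stream function
u = sp.diff(psi, y)
v = -sp.diff(psi, x)

# Incompressibility: div u = 0 identically
print(sp.simplify(sp.diff(u, x) + sp.diff(v, y)))                        # -> 0

# z-component of vorticity: omega = dv/dx - du/dy = -laplacian(psi)
omega = sp.diff(v, x) - sp.diff(u, y)
print(sp.simplify(omega + sp.diff(psi, x, 2) + sp.diff(psi, y, 2)))      # -> 0
```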
The steady, non-dimensional vorticity equation in our case reduces to
formula_7
Integrate the equation over a surface formula_8 lying entirely in the region where we have closed streamlines, bounded by a closed contour formula_9
formula_10
The integrand in the left-hand side term can be written as formula_11 since formula_12. By the divergence theorem, one obtains
formula_13
where formula_14 is the outward unit vector normal to the contour line element formula_15. The integrand on the left-hand side can be made zero if the contour formula_9 is taken to be one of the closed streamlines, since then the velocity vector projected normal to the contour is zero, that is to say formula_16. Thus one obtains
formula_17
This expression is true for finite but large Reynolds number since we did not neglect the viscous term before.
Unlike two-dimensional inviscid flows, where formula_18 since formula_19 with no restrictions on the functional form of formula_2, in viscous flows formula_20. But for large but finite formula_21, we can write formula_22, and these small corrections become smaller and smaller as the Reynolds number increases. Thus, in the limit formula_23, in the first approximation (neglecting the small corrections), we have
formula_24
Since formula_25 is constant for a given streamline, we can take that term outside the integral,
formula_26
One may notice that the integral is the negative of the circulation since
formula_27
where we used the Stokes theorem for circulation and formula_28. Thus, we have
formula_29
The circulation around those closed streamlines is not zero (unless the velocity at each point of the streamline is zero, with a possible discontinuous vorticity jump across the streamline). The only way the above equation can be satisfied is if
formula_30
i.e., the vorticity does not change across these closed streamlines, thus proving the theorem. Of course, the theorem is not valid inside the boundary layer regime. This theorem cannot be derived from the Euler equations.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\psi"
},
{
"math_id": 1,
"text": "\\nabla^2\\psi = - \\omega(\\psi), \\quad \\psi=\\psi_o \\text{ on } \\partial D"
},
{
"math_id": 2,
"text": "\\omega"
},
{
"math_id": 3,
"text": "z"
},
{
"math_id": 4,
"text": "\\omega(\\psi)"
},
{
"math_id": 5,
"text": "\\partial D"
},
{
"math_id": 6,
"text": "Re\\rightarrow \\infty"
},
{
"math_id": 7,
"text": "\\mathbf{u} \\cdot \\nabla\\mathbf{\\omega} = \\frac{1}{\\mathrm{Re}}\\nabla^2\\omega."
},
{
"math_id": 8,
"text": "S"
},
{
"math_id": 9,
"text": "C"
},
{
"math_id": 10,
"text": "\\int_S\\mathbf{u} \\cdot \\nabla\\mathbf{\\omega}\\, d\\mathbf S = \\frac{1}{\\mathrm{Re}}\\int_S\\nabla^2\\omega\\, d\\mathbf S."
},
{
"math_id": 11,
"text": "\\nabla \\cdot (\\omega\\mathbf u)"
},
{
"math_id": 12,
"text": "\\nabla\\cdot\\mathbf u=0"
},
{
"math_id": 13,
"text": "\\oint_C \\omega\\mathbf{u}\\cdot \\mathbf n dl = \\frac{1}{\\mathrm{Re}}\\oint_C\\nabla\\omega\\cdot \\mathbf n dl."
},
{
"math_id": 14,
"text": "\\mathbf n"
},
{
"math_id": 15,
"text": "dl"
},
{
"math_id": 16,
"text": "\\mathbf u\\cdot \\mathbf n=0"
},
{
"math_id": 17,
"text": "\\frac{1}{\\mathrm{Re}}\\oint_C \\nabla\\omega \\cdot \\mathbf{n}\\ dl = 0"
},
{
"math_id": 18,
"text": "\\omega=\\omega(\\psi)"
},
{
"math_id": 19,
"text": "\\mathbf u\\cdot \\nabla \\omega =0"
},
{
"math_id": 20,
"text": "\\omega\\neq \\omega(\\psi)"
},
{
"math_id": 21,
"text": "\\mathrm{Re}"
},
{
"math_id": 22,
"text": "\\omega=\\omega(\\psi) + \\rm{small\\ corrections}"
},
{
"math_id": 23,
"text": "\\mathrm{Re}\\rightarrow \\infty"
},
{
"math_id": 24,
"text": " \\frac{1}{\\mathrm{Re}}\\oint_C \\nabla\\omega \\cdot \\mathbf{n}\\ dl = \\frac{1}{\\mathrm{Re}}\\oint_C \\frac{d\\omega}{d\\psi}\\nabla\\psi \\cdot \\mathbf{n}\\ dl = 0."
},
{
"math_id": 25,
"text": "d\\omega/d\\psi"
},
{
"math_id": 26,
"text": "\\frac{1}{\\mathrm{Re}}\\frac{d\\omega}{d\\psi}\\oint_C \\nabla\\psi \\cdot \\mathbf{n}\\ dl = 0."
},
{
"math_id": 27,
"text": "\\Gamma = -\\oint_C\\mathbf u\\cdot d\\mathbf{l} =-\\int_S \\omega d\\mathbf S = \\int_S \\nabla^2\\psi d\\mathbf{S} = \\oint_C \\nabla \\psi \\cdot \\mathbf{n} dl"
},
{
"math_id": 28,
"text": "\\omega=-\\nabla^2\\psi"
},
{
"math_id": 29,
"text": "\\frac{\\Gamma}{\\mathrm{Re}}\\frac{d\\omega}{d\\psi} = 0."
},
{
"math_id": 30,
"text": "\\frac{d\\omega}{d\\psi} = 0,"
}
] |
https://en.wikipedia.org/wiki?curid=56877042
|
56877096
|
Uwe Jannsen
|
German mathematician
Uwe Jannsen (born 11 March 1954) is a German mathematician, specializing in algebra, algebraic number theory, and algebraic geometry.
Education and career.
Born in Meddewade, Jannsen studied mathematics and physics at the University of Hamburg with Diplom in mathematics in 1978 and with Promotion (PhD) in 1980 under Helmut Brückner and Jürgen Neukirch with thesis "Über Galoisgruppen lokaler Körper" (On Galois groups of local fields). In the academic year 1983–1984 he was a postdoc at Harvard University. From 1980 to 1989 he was an assistant and then docent at the University of Regensburg, where he received in 1988 his habilitation. From 1989 to 1991 he held a research professorship at the Max-Planck-Institut für Mathematik in Bonn. In 1991 he became a full professor at the University of Cologne and since 1999 he has been a professor at the University of Regensburg.
Jannsen's research deals with, among other topics, the Galois theory of algebraic number fields, the theory of motives in algebraic geometry, the Hasse principle (local–global principle), and resolution of singularities. In particular, he has done research on a cohomology theory for algebraic varieties, involving their extension in mixed motives as a development of research by Pierre Deligne, and a motivic cohomology as a development of research by Vladimir Voevodsky. In the 1980s with Kay Wingberg he completely described the absolute Galois group of "p"-adic number fields, "i.e." in the local case.
In 1994 he was an Invited Speaker with talk "Mixed motives, motivic cohomology and Ext-groups" at the International Congress of Mathematicians in Zürich.
He was elected in 2009 a full member of the Bayerische Akademie der Wissenschaften and in 2011 a full member of the Academia Europaea.
His doctoral students include Moritz Kerz.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbb{Q}"
}
] |
https://en.wikipedia.org/wiki?curid=56877096
|
5687865
|
Holevo's theorem
|
Upper bound on the knowable information of a quantum state
Holevo's theorem is an important limitative theorem in quantum computing, an interdisciplinary field of physics and computer science. It is sometimes called Holevo's bound, since it establishes an upper bound to the amount of information that can be known about a quantum state (accessible information). It was published by Alexander Holevo in 1973.
Statement of the theorem.
Suppose Alice wants to send a classical message to Bob by encoding it into a quantum state, and suppose she can prepare a state from some fixed set formula_0, with the i-th state prepared with probability formula_1. Let formula_2 be the classical register containing the choice of state made by Alice. Bob's objective is to recover the value of formula_2 from measurement results on the state he received. Let formula_3 be the classical register containing Bob's measurement outcome. Note that formula_3 is therefore a random variable whose probability distribution depends on Bob's choice of measurement.
Holevo's theorem bounds the amount of correlation between the classical registers formula_2 and formula_3, regardless of Bob's measurement choice, in terms of the "Holevo information". This is useful in practice because the Holevo information does not depend on the measurement choice, and therefore its computation does not require performing an optimization over the possible measurements.
More precisely, define the "accessible information" between formula_2 and formula_3 as the (classical) mutual information between the two registers maximized over all possible choices of measurements on Bob's side:formula_4where formula_5 is the (classical) mutual information of the joint probability distribution given by formula_6. There is currently no known formula to analytically solve the optimization in the definition of accessible information in the general case. Nonetheless, we always have the upper bound:formula_7where formula_8 is the ensemble of states Alice is using to send information, and formula_9 is the von Neumann entropy. This formula_10 is called the Holevo information or Holevo "χ" quantity.
Note that the Holevo information also equals the quantum mutual information of the classical-quantum state corresponding to the ensemble:formula_11with formula_12 the quantum mutual information of the bipartite state formula_13. It follows that Holevo's theorem can be concisely summarized as a bound on the accessible information in terms of the quantum mutual information for classical-quantum states.
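The Holevo quantity is straightforward to evaluate numerically for small ensembles. A minimal sketch for a two-state qubit ensemble (the particular states and probabilities are arbitrary choices); it gives roughly 0.60 bits, strictly below the one bit of classical information Alice is trying to send:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -sum_k lambda_k log2(lambda_k) over the nonzero eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def holevo_chi(probs, states):
    """chi = S(sum_i p_i rho_i) - sum_i p_i S(rho_i)."""
    rho_avg = sum(p * rho for p, rho in zip(probs, states))
    return von_neumann_entropy(rho_avg) - sum(
        p * von_neumann_entropy(rho) for p, rho in zip(probs, states))

# Ensemble: |0> and |+> prepared with equal probability
ket0 = np.array([[1.0], [0.0]])
ketp = np.array([[1.0], [1.0]]) / np.sqrt(2)
states = [ket0 @ ket0.conj().T, ketp @ ketp.conj().T]

print(holevo_chi([0.5, 0.5], states))   # ~0.60 bits
```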
Proof.
Consider the composite system that describes the entire communication process, which involves Alice's classical input formula_2, the quantum system formula_14, and Bob's classical output formula_3. The classical input formula_2 can be written as a classical register formula_15 with respect to some orthonormal basis formula_16. By writing formula_2 in this manner, the von Neumann entropy formula_17 of the state formula_18 corresponds to the Shannon entropy formula_19 of the probability distribution formula_20:
formula_21
The initial state of the system, where Alice prepares the state formula_22 with probability formula_23, is described by
formula_24
Afterwards, Alice sends the quantum state to Bob. As Bob only has access to the quantum system formula_14 but not the input formula_2, he receives a mixed state of the form formula_25. Bob measures this state with respect to the POVM elements formula_26, and the probabilities formula_27 of measuring the outcomes formula_28 form the classical output formula_3. This measurement process can be described as a quantum instrument
formula_29
where formula_30 is the probability of outcome formula_31 given the state formula_22, while formula_32 for some unitary formula_33 is the normalised post-measurement state. Then, the state of the entire system after the measurement process is
formula_34
Here formula_35 is the identity channel on the system formula_2. Since formula_36 is a quantum channel, and the quantum mutual information is monotonic under completely positive trace-preserving maps, formula_37. Additionally, as the partial trace over formula_38 is also completely positive and trace-preserving, formula_39. These two inequalities give
formula_40
On the left-hand side, the quantities of interest depend only on
formula_41
with joint probabilities formula_42. Clearly, formula_43 and formula_44, which are in the same form as formula_18, describe classical registers. Hence,
formula_45
Meanwhile, formula_46 depends on the term
formula_47
where formula_48 is the identity operator on the quantum system formula_14. Then, the right-hand side is
formula_49
which completes the proof.
Comments and remarks.
In essence, the Holevo bound proves that given "n" qubits, although they can "carry" a larger amount of (classical) information (thanks to quantum superposition), the amount of classical information that can be "retrieved", i.e. "accessed", can be only up to "n" classical (non-quantum encoded) bits. It was also established, both theoretically and experimentally, that there are computations where quantum bits carry more information through the process of the computation than is possible classically.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\{\\rho_1,...,\\rho_n\\}"
},
{
"math_id": 1,
"text": "p_i"
},
{
"math_id": 2,
"text": "X"
},
{
"math_id": 3,
"text": "Y"
},
{
"math_id": 4,
"text": "I_{\\rm acc}(X:Y) = \\sup_{\\{\\Pi^B_i\\}_i } I(X:Y|\\{\\Pi^B_i\\}_i),"
},
{
"math_id": 5,
"text": "I(X:Y|\\{\\Pi^B_i\\}_i)"
},
{
"math_id": 6,
"text": "p_{ij} = p_i \\operatorname{Tr}(\\Pi^B_j \\rho_i)"
},
{
"math_id": 7,
"text": "I_{\\rm acc} (X : Y) \\leq \\chi(\\eta) \\equiv S\\left(\\sum_i p_i \\rho_i\\right) - \\sum_i p_i S(\\rho_i),"
},
{
"math_id": 8,
"text": "\\eta\\equiv\\{(p_i,\\rho_i)\\}_i"
},
{
"math_id": 9,
"text": "S"
},
{
"math_id": 10,
"text": "\\chi(\\eta)"
},
{
"math_id": 11,
"text": "\\chi(\\eta) = I\\left(\\sum_i p_i |i\\rangle\\!\\langle i|\\otimes \\rho_i\\right),"
},
{
"math_id": 12,
"text": "I(\\rho_{AB}) \\equiv S(\\rho_A)+S(\\rho_B) - S(\\rho_{AB})"
},
{
"math_id": 13,
"text": "\\rho_{AB}"
},
{
"math_id": 14,
"text": "Q"
},
{
"math_id": 15,
"text": "\\rho^X := \\sum\\nolimits_{x=1}^n p_x |x\\rangle \\langle x|"
},
{
"math_id": 16,
"text": "\\{|x\\rangle\\}_{x=1}^n"
},
{
"math_id": 17,
"text": "S(X)"
},
{
"math_id": 18,
"text": "\\rho^X"
},
{
"math_id": 19,
"text": "H(X)"
},
{
"math_id": 20,
"text": "\\{p_x\\}_{x=1}^n"
},
{
"math_id": 21,
"text": "\nS(X)\n= -\\operatorname{tr}\\left(\\rho^X \\log \\rho^X \\right)\n= -\\operatorname{tr}\\left(\\sum_{x=1}^n p_x \\log p_x |x\\rangle\\langle x|\\right)\n= -\\sum_{x=1}^n p_x \\log p_x\n= H(X).\n"
},
{
"math_id": 22,
"text": "\\rho_x"
},
{
"math_id": 23,
"text": "p_x"
},
{
"math_id": 24,
"text": "\\rho^{XQ} := \\sum_{x=1}^n p_x |x\\rangle \\langle x|\\otimes\\rho_x."
},
{
"math_id": 25,
"text": "\\rho := \\operatorname{tr}_X\\left(\\rho^{XQ}\\right) = \\sum\\nolimits_{x=1}^n p_x \\rho_x"
},
{
"math_id": 26,
"text": "\\{E_y\\}_{y=1}^m"
},
{
"math_id": 27,
"text": "\\{q_y\\}_{y=1}^m"
},
{
"math_id": 28,
"text": "y=1,2,\\dots,m"
},
{
"math_id": 29,
"text": "\\mathcal{E}^{Q}(\\rho_x) = \\sum_{y=1}^m q_{y|x} \\rho_{y|x} \\otimes |y\\rangle \\langle y|,"
},
{
"math_id": 30,
"text": "q_{y|x} = \\operatorname{tr}\\left(E_y\\rho_x\\right)"
},
{
"math_id": 31,
"text": "y"
},
{
"math_id": 32,
"text": "\\rho_{y|x} = W\\sqrt{E_y}\\rho_x\\sqrt{E_y}W^\\dagger/q_{y|x}"
},
{
"math_id": 33,
"text": "W"
},
{
"math_id": 34,
"text": "\\rho^{XQ'Y} := \\left[\\mathcal{I}^{X}\\otimes\\mathcal{E}^{Q}\\right]\\!\\left(\\rho^{XQ}\\right) = \\sum_{x=1}^n\\sum_{y=1}^m p_x q_{y|x} |x\\rangle \\langle x|\\otimes\\rho_{y|x}\\otimes |y\\rangle \\langle y|."
},
{
"math_id": 35,
"text": "\\mathcal{I}^X"
},
{
"math_id": 36,
"text": "\\mathcal{E}^Q"
},
{
"math_id": 37,
"text": "S(X:Q'Y) \\leq S(X:Q)"
},
{
"math_id": 38,
"text": "Q'"
},
{
"math_id": 39,
"text": "S(X:Y) \\leq S(X:Q'Y)"
},
{
"math_id": 40,
"text": "S(X:Y) \\leq S(X:Q)."
},
{
"math_id": 41,
"text": "\\rho^{XY} := \\operatorname{tr}_{Q'}\\left(\\rho^{XQ'Y}\\right) = \\sum_{x=1}^n\\sum_{y=1}^m p_x q_{y|x} |x\\rangle \\langle x|\\otimes |y\\rangle \\langle y|\n= \\sum_{x=1}^n\\sum_{y=1}^m p_{x,y} |x,y\\rangle \\langle x,y|,"
},
{
"math_id": 42,
"text": "p_{x,y}=p_x q_{y|x}"
},
{
"math_id": 43,
"text": "\\rho^{XY}"
},
{
"math_id": 44,
"text": "\\rho^Y := \\operatorname{tr}_X(\\rho^{XY})"
},
{
"math_id": 45,
"text": "S(X:Y) = S(X)+S(Y)-S(XY) = H(X)+H(Y)-H(XY) = I(X:Y)."
},
{
"math_id": 46,
"text": "S(X:Q)"
},
{
"math_id": 47,
"text": "\\log \\rho^{XQ} = \\log\\left(\\sum_{x=1}^n p_x |x\\rangle \\langle x|\\otimes\\rho_x\\right)\n= \\sum_{x=1}^n |x\\rangle \\langle x| \\otimes \\log\\left(p_x\\rho_x\\right)\n= \\sum_{x=1}^n \\log p_x |x\\rangle \\langle x| \\otimes I^Q + \\sum_{x=1}^n |x\\rangle \\langle x| \\otimes \\log\\rho_x,"
},
{
"math_id": 48,
"text": "I^Q"
},
{
"math_id": 49,
"text": "\\begin{aligned}\nS(X:Q) &= S(X)+S(Q)-S(XQ) \\\\\n&= S(X) + S(\\rho) + \\operatorname{tr}\\left(\\rho^{XQ}\\log\\rho^{XQ}\\right) \\\\\n&= S(X) + S(\\rho) + \\operatorname{tr}\\left(\\sum_{x=1}^n p_x\\log p_x |x\\rangle \\langle x| \\otimes \\rho_x\\right) + \\operatorname{tr}\\left(\\sum_{x=1}^n p_x|x\\rangle \\langle x| \\otimes \\rho_x\\log\\rho_x\\right)\\\\\n&= S(X) + S(\\rho) + \\underbrace{\\operatorname{tr}\\left(\\sum_{x=1}^n p_x\\log p_x |x\\rangle \\langle x|\\right)}_{-S(X)} + \\operatorname{tr}\\left(\\sum_{x=1}^n p_x \\rho_x\\log\\rho_x\\right)\\\\\n&= S(\\rho) + \\sum_{x=1}^n p_x \\underbrace{\\operatorname{tr}\\left(\\rho_x\\log\\rho_x\\right)}_{-S(\\rho_x)} \\\\\n&= S(\\rho) - \\sum_{x=1}^n p_x S(\\rho_x),\n\\end{aligned}"
}
] |
https://en.wikipedia.org/wiki?curid=5687865
|
56880139
|
Markov odometer
|
In mathematics, a Markov odometer is a certain type of topological dynamical system. It plays a fundamental role in ergodic theory and especially in orbit theory of dynamical systems, since a theorem of H. Dye asserts that every ergodic nonsingular transformation is orbit-equivalent to a Markov odometer.
The basic example of such a system is the "nonsingular odometer", which is an additive topological group defined on the product space of discrete spaces, induced by addition defined as formula_0, where formula_1. This group can be endowed with the structure of a dynamical system; the result is a conservative dynamical system.
The general form, which is called the "Markov odometer", can be constructed through a Bratteli–Vershik diagram to define a "Bratteli–Vershik compactum" space together with a corresponding transformation.
Nonsingular odometers.
Several kinds of non-singular odometers may be defined.
These are sometimes referred to as adding machines.
The simplest is illustrated with the Bernoulli process. This is the set of all infinite strings in two symbols, here denoted by formula_2 endowed with the product topology. This definition extends naturally to a more general odometer defined on the product space
formula_3
for some sequence of integers formula_4 with each formula_5
The odometer for formula_6 for all formula_7 is termed the dyadic odometer, the von Neumann–Kakutani adding machine or the dyadic adding machine.
The topological entropy of every adding machine is zero. Any continuous map of an interval with a topological entropy of zero is topologically conjugate to an adding machine, when restricted to its action on the topologically invariant transitive set, with periodic orbits removed.
Dyadic odometer.
The set of all infinite strings in two symbols formula_2 has a natural topology, the product topology, generated by the cylinder sets. The product topology extends to a Borel sigma-algebra; let formula_9 denote that algebra. Individual points formula_10 are denoted as formula_11
The Bernoulli process is conventionally endowed with a collection of measures, the Bernoulli measures, given by formula_12 and formula_13, for some formula_14 independent of formula_7. The value of formula_15 is rather special; it corresponds to the special case of the Haar measure, when formula_16 is viewed as a compact Abelian group. Note that the Bernoulli measure is "not" the same as the 2-adic measure on the dyadic integers! Formally, one can observe that formula_16 is also the base space for the dyadic integers; however, the dyadic integers are endowed with a metric, the p-adic metric, which induces a metric topology distinct from the product topology used here.
The space formula_16 can be endowed with addition, defined as coordinate addition, with a carry bit. That is, for each coordinate, let
formula_17
where formula_18 and
formula_19
inductively. Increment-by-one is then called the (dyadic) odometer. It is the transformation formula_20 given by formula_21, where formula_1. It is called the "odometer" due to how it looks when it "rolls over": formula_8 is the transformation formula_22. Note that formula_23 and that formula_8 is formula_9-measurable, that is, formula_24 for all formula_25
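On a finite prefix of the binary sequence, the odometer is just binary addition of 1 with carry, rolling an initial block of 1s over to 0s exactly as in the "roll over" formula above. A minimal sketch (digits beyond the listed prefix are treated as unchanged):

```python
def dyadic_odometer(x):
    """Add 1 with carry to a finite prefix x = [x_1, x_2, ...] of 0/1 digits."""
    y = list(x)
    for i, digit in enumerate(y):
        if digit == 0:        # first 0: flip it and stop carrying
            y[i] = 1
            return y
        y[i] = 0              # digit was 1: roll it over and carry on
    return y                  # an all-ones prefix rolls over to all zeros

print(dyadic_odometer([1, 1, 0, 1]))   # [0, 0, 1, 1]
```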
The transformation formula_8 is non-singular for every formula_26. Recall that a measurable transformation formula_27 is non-singular when, given formula_28, one has that formula_29 if and only if formula_30. In this case, one finds
formula_31
where formula_32. Hence formula_8 is nonsingular with respect to formula_26.
The transformation formula_8 is ergodic. This follows because, for every formula_33 and natural number formula_7, the orbit of formula_34 under formula_35 is the set formula_36. This in turn implies that formula_8 is conservative, since every invertible ergodic nonsingular transformation in a nonatomic space is conservative.
Note that for the special case of formula_15, that formula_37 is a measure-preserving dynamical system.
Integer odometers.
The same construction enables to define such a system for every product of discrete spaces. In general, one writes
formula_38
for formula_39 with formula_40 an integer. The product topology extends naturally to the product Borel sigma-algebra formula_9 on formula_16. A product measure on formula_9 is conventionally defined as formula_41 given some measure formula_42 on formula_43. The corresponding map is defined by
formula_44
where formula_45 is the smallest index for which formula_46. This is again a topological group.
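The same add-and-carry sketch generalizes to mixed radices, matching the displayed transformation: the first digit that does not overflow is incremented and all earlier digits are reset to 0 (a hypothetical prefix and choice of moduli are used for illustration):

```python
def odometer(x, moduli):
    """Increment-with-carry on a finite prefix of the product of Z/m_n Z."""
    y = list(x)
    for k, (digit, m) in enumerate(zip(y, moduli)):
        if digit != m - 1:    # first index that does not overflow
            y[k] = digit + 1
            return y
        y[k] = 0              # overflow: reset and carry to the next index
    return y

print(odometer([1, 0, 3], [2, 3, 4]))   # [0, 1, 3]
```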
A special case of this is the "Ornstein odometer", which is defined on the space
formula_47
with the measure a product of
formula_48
Sandpile model.
A concept closely related to the conservative odometer is that of the abelian sandpile model. This model replaces the directed linear sequence of finite groups constructed above by an undirected graph formula_49 of vertexes and edges. At each vertex formula_50 one places a finite group formula_51 with formula_52 the degree of the vertex formula_53. Transition functions are defined by the graph Laplacian. That is, one can increment any given vertex by one; when incrementing the largest group element (so that it increments back down to zero), each of the neighboring vertexes are incremented by one.
Sandpile models differ from the above definition of a conservative odometer in three different ways. First, in general, there is no unique vertex singled out as the starting vertex, whereas in the above, the first vertex is the starting vertex; it is the one that is incremented by the transition function. Next, the sandpile models in general use undirected edges, so that the wrapping of the odometer redistributes in all directions. A third difference is that sandpile models are usually not taken on an infinite graph, and that rather, there is one special vertex singled out, the "sink", which absorbs all increments and never wraps. The sink is equivalent to cutting away the infinite parts of an infinite graph, and replacing them by the sink; alternately, as ignoring all changes past that termination point.
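A minimal sketch of the toppling dynamics on a square grid, using the usual convention that every site topples at threshold 4 (the interior degree) and that grains falling off the edge are absorbed by the sink; the grid size and the number of grains dropped are arbitrary choices:

```python
import numpy as np

def stabilize(grid):
    """Topple a sandpile on a square grid until every site holds fewer than 4 grains."""
    grid = grid.copy()
    while True:
        unstable = np.argwhere(grid >= 4)
        if len(unstable) == 0:
            return grid
        for i, j in unstable:
            grid[i, j] -= 4                       # topple: send one grain to each neighbour
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < grid.shape[0] and 0 <= nj < grid.shape[1]:
                    grid[ni, nj] += 1             # grains leaving the grid go to the sink

pile = np.zeros((5, 5), dtype=int)
pile[2, 2] = 16                                   # drop 16 grains at the centre
print(stabilize(pile))
```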
Markov odometer.
Let formula_54 be an ordered Bratteli–Vershik diagram, consisting of a set of vertices of the form formula_55 (disjoint union), where formula_56 is a singleton, and a set of edges formula_57 (disjoint union).
The diagram includes source surjection-mappings formula_58 and range surjection-mappings formula_59. We assume that formula_60 are comparable if and only if formula_61.
For such diagram we look at the product space formula_62 equipped with the product topology. Define "Bratteli–Vershik compactum" to be the subspace of infinite paths,
formula_63
Assume there exists only one infinite path formula_64 for which each formula_65 is maximal, and similarly only one infinite path formula_66 for which each edge is minimal. Define the "Bratteli–Vershik map" formula_67 by formula_68 and, for any formula_69, define formula_70, where formula_45 is the first index for which formula_71 is not maximal; accordingly, let formula_72 be the unique path for which formula_73 are all minimal and formula_74 is the successor of formula_71. Then formula_75 is a homeomorphism of formula_76.
Let formula_77 be a sequence of stochastic matrices formula_78 such that formula_79 if and only if formula_80. Define "Markov measure" on the cylinders of formula_76 by formula_81. Then the system formula_82 is called a "Markov odometer".
One can show that the nonsingular odometer is a Markov odometer where all the formula_83 are singletons.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "x \\mapsto x+\\underline{1}"
},
{
"math_id": 1,
"text": "\\underline{1}:=(1,0,0,\\dots)"
},
{
"math_id": 2,
"text": "\\Omega=\\{0,1\\}^{\\mathbb{N}}"
},
{
"math_id": 3,
"text": "\\Omega=\\prod_{n\\in\\mathbb{N}} \\left(\\mathbb{Z}/k_n\\mathbb{Z}\\right)"
},
{
"math_id": 4,
"text": "(k_n)"
},
{
"math_id": 5,
"text": "k_n\\ge 2."
},
{
"math_id": 6,
"text": "k_n=2"
},
{
"math_id": 7,
"text": "n"
},
{
"math_id": 8,
"text": "T"
},
{
"math_id": 9,
"text": "\\mathcal{B}"
},
{
"math_id": 10,
"text": "x\\in\\Omega"
},
{
"math_id": 11,
"text": "x=(x_1,x_2,x_3,\\cdots)."
},
{
"math_id": 12,
"text": "\\mu_p(x_n=1)=p"
},
{
"math_id": 13,
"text": "\\mu_p(x_n=0)=1-p"
},
{
"math_id": 14,
"text": "0<p<1"
},
{
"math_id": 15,
"text": "p=1/2"
},
{
"math_id": 16,
"text": "\\Omega"
},
{
"math_id": 17,
"text": "(x+y)_n=x_n+y_n+\\varepsilon_n\\,\\bmod\\,2"
},
{
"math_id": 18,
"text": "\\varepsilon_0=0"
},
{
"math_id": 19,
"text": "\n\\varepsilon_n=\\begin{cases}\n0 & x_{n-1}+y_{n-1}<2\\\\\n1 & x_{n-1}+y_{n-1}=2\n\\end{cases}\n"
},
{
"math_id": 20,
"text": "T:\\Omega\\to\\Omega"
},
{
"math_id": 21,
"text": "T(x)=x+\\underline{1}"
},
{
"math_id": 22,
"text": "T\\left(1,\\dots,1,0,x_{k+1},x_{k+2},\\dots\\right) = \\left(0,\\dots,0,1,x_{k+1},x_{k+2},\\dots \\right)"
},
{
"math_id": 23,
"text": "T^{-1}(0,0,\\cdots)=(1,1,\\cdots)"
},
{
"math_id": 24,
"text": "T^{-1}(\\sigma)\\in\\mathcal{B}"
},
{
"math_id": 25,
"text": "\\sigma\\in\\mathcal{B}."
},
{
"math_id": 26,
"text": "\\mu_p"
},
{
"math_id": 27,
"text": "\\tau:\\Omega\\to\\Omega"
},
{
"math_id": 28,
"text": "\\sigma\\in\\mathcal{B}"
},
{
"math_id": 29,
"text": "\\mu(\\tau^{-1}\\sigma)=0"
},
{
"math_id": 30,
"text": "\\mu(\\sigma)=0"
},
{
"math_id": 31,
"text": "\\frac{d \\mu_p \\circ T}{d \\mu_p} = \\left(\\frac{1-p} p\\right)^\\varphi"
},
{
"math_id": 32,
"text": "\\varphi(x)=\\min\\left\\{ n\\in\\mathbb{N}\\mid x_n = 0 \\right\\}-2"
},
{
"math_id": 33,
"text": "x \\in \\Omega"
},
{
"math_id": 34,
"text": "x"
},
{
"math_id": 35,
"text": "T^0,T^1,\\cdots,T^{2^n-1}"
},
{
"math_id": 36,
"text": "\\{0,1\\}^n"
},
{
"math_id": 37,
"text": "\\left(\\Omega,\\mathcal{B},\\mu_{1/2},T\\right)"
},
{
"math_id": 38,
"text": "\\Omega=\\prod_{n\\in\\mathbb{N}}A_{n}"
},
{
"math_id": 39,
"text": "A_n=\\mathbb{Z}/m_n\\mathbb{Z}=\\{ 0,1,\\dots,m_n-1\\}"
},
{
"math_id": 40,
"text": "m_n\\ge2"
},
{
"math_id": 41,
"text": "\\textstyle\\mu=\\prod_{n\\in\\mathbb{N}}\\mu_{n},"
},
{
"math_id": 42,
"text": "\\mu_n"
},
{
"math_id": 43,
"text": "A_n"
},
{
"math_id": 44,
"text": "T(x_1,\\dots,x_k,x_{k+1},x_{k+2},\\dots)=(0,\\dots,0,x_k+1,x_{k+1},x_{k+2},\\dots)"
},
{
"math_id": 45,
"text": "k"
},
{
"math_id": 46,
"text": "x_k \\neq m_k-1"
},
{
"math_id": 47,
"text": "\\Omega=\\left(\\mathbb{Z}/2\\mathbb{Z}\\right)\\times \\left(\\mathbb{Z}/3\\mathbb{Z}\\right)\\times \\left(\\mathbb{Z}/4\\mathbb{Z}\\right)\\times \\cdots"
},
{
"math_id": 48,
"text": "\\mu_n(j)=\\begin{cases}\n1/2 & \\mbox{ if } j=0 \\\\\n1/2(n+1) & \\mbox{ if } j\\ne 0 \\\\\n\\end{cases}"
},
{
"math_id": 49,
"text": "(V,E)"
},
{
"math_id": 50,
"text": "v\\in V"
},
{
"math_id": 51,
"text": "\\mathbb{Z}/n\\mathbb{Z}"
},
{
"math_id": 52,
"text": "n=deg(v)"
},
{
"math_id": 53,
"text": "v"
},
{
"math_id": 54,
"text": "B=(V,E)"
},
{
"math_id": 55,
"text": "\\textstyle\\coprod_{n\\in\\mathbb{N}}V^{(n)}"
},
{
"math_id": 56,
"text": "V^0"
},
{
"math_id": 57,
"text": "\\textstyle\\coprod_{n\\in\\mathbb{N}}E^{(n)}"
},
{
"math_id": 58,
"text": "s_n:E^{(n)} \\to V^{(n-1)}"
},
{
"math_id": 59,
"text": "r_n:E^{(n)} \\to V^{(n)}"
},
{
"math_id": 60,
"text": "e,e' \\in E^{(n)}"
},
{
"math_id": 61,
"text": "r_n(e) = r_n(e')"
},
{
"math_id": 62,
"text": "\\textstyle E:=\\prod_{n\\in\\mathbb{N}}E^{(n)}"
},
{
"math_id": 63,
"text": "X_{B}:=\\left\\{ x=(x_n)_{n\\in\\mathbb{N}} \\in E\\mid x_n\\in E^{(n)}\\text{ and } r (x_n) = s(x_{n+1}) \\right\\} "
},
{
"math_id": 64,
"text": "x_{\\max} = (x_n)_{n \\in \\mathbb{N}}"
},
{
"math_id": 65,
"text": "x_n"
},
{
"math_id": 66,
"text": "x_{\\text{min}}"
},
{
"math_id": 67,
"text": "T_B:X_B \\to X_B"
},
{
"math_id": 68,
"text": "T( x_{\\max}) = x_{\\min}"
},
{
"math_id": 69,
"text": "x = (x_n)_{n\\in \\mathbb{N}} \\neq x_{\\max}"
},
{
"math_id": 70,
"text": "T_B(x_1,\\dots,x_k,x_{k+1},\\dots)=(y_1,\\dots,y_k,x_{k+1},\\dots)"
},
{
"math_id": 71,
"text": "x_k"
},
{
"math_id": 72,
"text": "(y_1,\\dots,y_k)"
},
{
"math_id": 73,
"text": "y_1,\\dots,y_{k-1}"
},
{
"math_id": 74,
"text": "y_k"
},
{
"math_id": 75,
"text": "T_B"
},
{
"math_id": 76,
"text": "X_B"
},
{
"math_id": 77,
"text": "P=\\left(P^{(1)},P^{(2)},\\dots \\right)"
},
{
"math_id": 78,
"text": "P^{(n)}=\\left(p^{(n)}_{(v,e) \\in V^{n-1} \\times E^(n)}\\right)"
},
{
"math_id": 79,
"text": "p^{(n)}_{v,e} > 0"
},
{
"math_id": 80,
"text": "v=s_n(e)"
},
{
"math_id": 81,
"text": "\\mu_P ([e_1,\\dots,e_n]) = p^{(1)}_{s_1(e_1),e_1}\\cdots p^{(n)}_{s_n(e_n),e_n}"
},
{
"math_id": 82,
"text": "\\left(X_B, \\mathcal{B}, \\mu_P, T_B \\right)"
},
{
"math_id": 83,
"text": "V^{(n)}"
}
] |
https://en.wikipedia.org/wiki?curid=56880139
|
5688623
|
Rami Grossberg
|
American mathematician
Rami Grossberg is a full professor of mathematics at Carnegie Mellon University and works in model theory.
Work.
Grossberg's work in the past few years has revolved around the classification theory of non-elementary classes. In particular, he has provided, in joint work with Monica VanDieren, a proof of an upward "Morley's Categoricity Theorem" (a version of Shelah's categoricity conjecture) for Abstract Elementary Classes with the amalgamation property, that are tame. In another work with VanDieren, they also initiated the study of "tame" Abstract Elementary Classes. Tameness is both a crucial technical property in categoricity transfer proofs and an independent notion of interest in the area – it has been studied by Baldwin, Hyttinen, Lessmann, Kesälä, Kolesnikov, Kueker among others.
Other results include a best approximation to the main gap conjecture for AECs (with Olivier Lessmann), identifying AECs with JEP, AP, no maximal models and tameness as the uncountable analog to Fraïssé's constructions (with VanDieren), a stability spectrum theorem and the existence of Morley sequences for those classes (also with VanDieren).
In addition to this work on the Categoricity Conjecture, more recently, with Boney and Vasey, new understanding of frames in AECs and forking (in the abstract elementary class setting) has been obtained.
Some of Grossberg's work may be understood as part of the big project on Saharon Shelah's outstanding categoricity conjectures:
"Conjecture 1." (Categoricity for formula_0). Let formula_1 be a sentence. If formula_1 is categorical in a cardinal formula_2 then formula_1 is categorical in all cardinals formula_2. See Infinitary logic and Beth number.
"Conjecture 2." (Categoricity for AECs) See and . Let "K" be an AEC. There exists a cardinal "μ"("K") such that categoricity in a cardinal greater than "μ"("K") implies categoricity in all cardinals greater than "μ"("K"). Furthermore, "μ"("K") is the Hanf number of "K".
Other examples of his results in pure model theory include: generalizing the Keisler–Shelah omitting types theorem for formula_3 to successors of singular cardinals; with Shelah, introducing the notion of unsuper-stability for infinitary logics, and proving a nonstructure theorem, which is used to resolve a problem of Fuchs and Salce in the theory of modules; with Hart, proving a structure theorem for formula_4, which resolves Morley's conjecture for excellent classes; and the notion of relative saturation and its connection to Shelah's conjecture for formula_4.
Examples of his results in applications to algebra include the finding that under the weak continuum hypothesis there is no universal object in the class of uncountable locally finite groups (answering a question of Macintyre and Shelah); with Shelah, showing that there is a jump in cardinality of the abelian group Extp("G", "Z") at the first singular strong limit cardinal.
Personal life.
In 1986, Grossberg attained his doctorate from the University of Jerusalem. He later married his former doctoral student and frequent collaborator, Monica VanDieren.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathit{L}_{{\\omega_1},\\omega}"
},
{
"math_id": 1,
"text": "\\psi"
},
{
"math_id": 2,
"text": "\\; >\\beth_{\\omega_{1}}"
},
{
"math_id": 3,
"text": "\\mathit{L(Q)}"
},
{
"math_id": 4,
"text": "\\mathit{L}_{\\omega_1,\\omega}"
}
] |
https://en.wikipedia.org/wiki?curid=5688623
|
56891926
|
Evidence lower bound
|
Lower bound on the log-likelihood of some observed data
In variational Bayesian methods, the evidence lower bound (often abbreviated ELBO, also sometimes called the variational lower bound or negative variational free energy) is a useful lower bound on the log-likelihood of some observed data.
The ELBO is useful because it provides a worst-case guarantee on the log-likelihood of some distribution (e.g. formula_0) which models a set of data. The actual log-likelihood may be higher (indicating an even better fit to the distribution) because the ELBO includes a Kullback–Leibler divergence (KL divergence) term which decreases the ELBO whenever an internal part of the model is inaccurate, despite a good overall fit of the model. Thus, improving the ELBO score indicates improving the likelihood of the model formula_0, or the fit of a component internal to the model, or both; this makes the ELBO a good loss function, e.g., for training a deep neural network to improve both the model overall and the internal component. (The internal component is formula_1, defined in detail later in this article.)
Definition.
Let formula_2 and formula_3 be random variables, jointly distributed with distribution formula_4. For example, formula_5 is the marginal distribution of formula_2, and formula_6 is the conditional distribution of formula_3 given formula_2. Then, for a sample formula_7, and any distribution formula_8, the ELBO is defined asformula_9The ELBO can equivalently be written as
formula_10
In the first line, formula_11 is the entropy of formula_12, which relates the ELBO to the Helmholtz free energy. In the second line, formula_13 is called the "evidence" for formula_14, and formula_15 is the Kullback-Leibler divergence between formula_12 and formula_4. Since the Kullback-Leibler divergence is non-negative, formula_16 forms a lower bound on the evidence ("ELBO inequality")formula_17
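To make the definition concrete, the following is a minimal sketch (not part of the original article) that estimates the ELBO by Monte Carlo for a toy one-dimensional model in which the evidence formula_13 is available in closed form, so the ELBO inequality can be checked numerically; the prior, likelihood, and variational parameters are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (illustrative assumptions, not from the article):
#   prior      p(z)     = N(0, 1)
#   likelihood p(x | z) = N(z, 1)
# so the evidence is available in closed form: p(x) = N(0, 2).
def log_normal(x, mean, var):
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def elbo_estimate(x, mu_q, var_q, n_samples=100_000):
    """Monte Carlo estimate of E_{z~q}[ln p(x, z) - ln q(z | x)]."""
    z = rng.normal(mu_q, np.sqrt(var_q), size=n_samples)
    log_joint = log_normal(x, z, 1.0) + log_normal(z, 0.0, 1.0)   # ln p(x|z) + ln p(z)
    log_q = log_normal(z, mu_q, var_q)                            # ln q(z|x)
    return np.mean(log_joint - log_q)

x = 1.5
log_evidence = log_normal(x, 0.0, 2.0)               # exact ln p(x)
print("ln p(x)                  :", log_evidence)
print("ELBO (poor q)            :", elbo_estimate(x, mu_q=0.0, var_q=2.0))
print("ELBO (exact posterior q) :", elbo_estimate(x, mu_q=x / 2, var_q=0.5))
# The ELBO never exceeds ln p(x); it matches it (up to Monte Carlo noise) when
# q equals the true posterior N(x/2, 1/2), because the KL term then vanishes.
```

In this toy example the gap between ln p(x) and the ELBO is exactly the KL divergence appearing in the second line above, so choosing a better variational distribution visibly tightens the bound.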
Motivation.
Variational Bayesian inference.
Suppose we have an observable random variable formula_18, and we want to find its true distribution formula_19. This would allow us to generate data by sampling, and estimate probabilities of future events. In general, it is impossible to find formula_19 exactly, forcing us to search for a good approximation.
That is, we define a sufficiently large parametric family formula_20 of distributions, then solve for formula_21 for some loss function formula_22. One possible way to solve this is by considering a small variation from formula_4 to formula_23 and solving for formula_24. This is a problem in the calculus of variations, thus it is called the variational method.
Since there are not many explicitly parametrized distribution families (all the classical distribution families, such as the normal distribution, the Gumbel distribution, etc., are far too simplistic to model the true distribution), we consider "implicitly parametrized" probability distributions. First, choose a simple distribution formula_25 over a latent random variable formula_26 (such as a standard normal distribution). Next, choose a family of functions formula_27 (for example, a deep neural network) parametrized by formula_28. Finally, fix a way to convert any formula_29 into a simple distribution over the observable formula_18; for example, if formula_27 has two outputs formula_30, the distribution over formula_18 may be taken to be the normal distribution formula_31.
This defines a family of joint distributions formula_4 over formula_32. It is very easy to sample formula_33: simply sample formula_34, then compute formula_29, and finally sample formula_35 using formula_29.
In other words, we have a generative model for both the observable and the latent.
Now, we consider a distribution formula_4 good, if it is a close approximation of formula_19:formula_36since the distribution on the right side is over formula_18 only, the distribution on the left side must marginalize the latent variable formula_26 away.
In general, it's impossible to perform the integral formula_37, forcing us to perform another approximation.
Since formula_38 (Bayes' Rule), it suffices to find a good approximation of formula_39. So define another distribution family formula_40 and use it to approximate formula_39. This is a discriminative model for the latent.
The roles of these distributions can be summarized as follows:
In Bayesian language, formula_18 is the observed evidence, and formula_26 is the latent/unobserved. The distribution formula_42 over formula_26 is the "prior distribution" over formula_26, formula_41 is the likelihood function, and formula_39 is the "posterior" "distribution" over formula_26.
Given an observation formula_14, we can "infer" what formula_43 likely gave rise to formula_14 by computing formula_39. The usual Bayesian method is to estimate the integral formula_37, then compute by Bayes' rule formula_44. This is expensive to perform in general, but if we can simply find a good approximation formula_45 for most formula_46, then we can infer formula_43 from formula_14 cheaply. Thus, the search for a good formula_47 is also called amortized inference.
All in all, we have found a problem of variational Bayesian inference.
Deriving the ELBO.
A basic result in variational inference is that minimizing the Kullback–Leibler divergence (KL-divergence) is equivalent to maximizing the log-likelihood:formula_48where formula_49 is the entropy of the true distribution. So if we can maximize formula_50, we can minimize formula_51, and consequently find an accurate approximation formula_52.
To maximize formula_50, we simply sample many formula_53, i.e. use importance samplingformula_54where formula_55 is the number of samples drawn from the true distribution. This approximation can be seen as overfitting.
In order to maximize formula_56, it's necessary to find formula_13:formula_57This usually has no closed form and must be estimated. The usual way to estimate integrals is Monte Carlo integration with importance sampling:formula_58where formula_40 is a sampling distribution over formula_43 that we use to perform the Monte Carlo integration.
So we see that if we sample formula_59, then formula_60 is an unbiased estimator of formula_61. Unfortunately, this does not give us an unbiased estimator of formula_13, because formula_62 is nonlinear. Indeed, we have by Jensen's inequality, formula_63In fact, all the obvious estimators of formula_13 are biased downwards, because no matter how many samples of formula_64 we take, we have by Jensen's inequality:formula_65Subtracting the right side, we see that the problem comes down to a biased estimator of zero:formula_66At this point, we could branch off towards the development of an importance-weighted autoencoder, but we will instead continue with the simplest case with formula_67:formula_63The tightness of the inequality has a closed form:formula_68We have thus obtained the ELBO function:formula_69
Maximizing the ELBO.
For fixed formula_14, the optimization formula_70 simultaneously attempts to maximize formula_13 and minimize formula_71. If the parametrization for formula_4 and formula_47 are flexible enough, we would obtain some formula_72, such that we have simultaneously
formula_73Sinceformula_48we haveformula_74and soformula_75In other words, maximizing the ELBO would simultaneously allow us to obtain an accurate generative model formula_76 and an accurate discriminative model formula_77.
Main forms.
The ELBO has many possible expressions, each with some different emphasis.
formula_78
This form shows that if we sample formula_79, then formula_80 is an unbiased estimator of the ELBO.
formula_81
This form shows that the ELBO is a lower bound on the evidence formula_13, and that maximizing the ELBO with respect to formula_82 is equivalent to minimizing the KL-divergence from formula_83 to formula_1.
formula_84
This form shows that maximizing the ELBO simultaneously attempts to keep formula_1 close to formula_42 and concentrate formula_1 on those formula_43 that maximize formula_85. That is, the approximate posterior formula_1 balances between staying close to the prior formula_42 and moving towards the maximum likelihood formula_86.
formula_87
This form shows that maximizing the ELBO simultaneously attempts to keep the entropy of formula_1 high, and concentrate formula_1 on those formula_43 that maximize formula_88. That is, the approximate posterior formula_1 balances between being a uniform distribution and moving towards the maximum a posteriori formula_89.
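As an illustrative sketch (not from the article), the form consisting of an expected reconstruction log-likelihood minus a KL-divergence to the prior, formula_84, is the quantity typically maximized when training a variational autoencoder. The code below computes it for a Gaussian formula_1 and standard normal prior using the closed-form Gaussian KL term and a single reparameterised sample; the linear "encoder" and "decoder" functions are placeholders standing in for neural networks, and all values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kl(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), in closed form."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def elbo_reconstruction_minus_kl(x, encoder, decoder, obs_var=1.0):
    # q_phi(z|x): encoder returns (mu, log_var) of a diagonal Gaussian
    mu, log_var = encoder(x)
    # one reparameterised sample z ~ q_phi(.|x)
    z = mu + np.exp(0.5 * log_var) * rng.standard_normal(mu.shape)
    # ln p_theta(x|z) for a Gaussian observation model with fixed variance
    x_hat = decoder(z)
    log_lik = -0.5 * np.sum(np.log(2 * np.pi * obs_var) + (x - x_hat) ** 2 / obs_var)
    return log_lik - gaussian_kl(mu, log_var)

# Placeholder encoder/decoder (purely illustrative; a real VAE would use neural networks).
W_enc = rng.standard_normal((2, 4))
W_dec = rng.standard_normal((4, 2))
encoder = lambda x: (W_enc @ x, np.full(2, -1.0))
decoder = lambda z: W_dec @ z

x = rng.standard_normal(4)
print("ELBO (reconstruction - KL):", elbo_reconstruction_minus_kl(x, encoder, decoder))
```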
Data-processing inequality.
Suppose we take formula_55 independent samples from formula_19, and collect them in the dataset formula_90, then we have empirical distribution formula_91.
Fitting formula_61 to formula_92 can be done, as usual, by maximizing the log-likelihood formula_93:formula_94Now, by the ELBO inequality, we can bound formula_93, and thusformula_95The right-hand-side simplifies to a KL-divergence, and so we get:formula_96This result can be interpreted as a special case of the data processing inequality.
In this interpretation, maximizing formula_97 is minimizing formula_98, which upper-bounds the real quantity of interest formula_99 via the data-processing inequality. That is, we append a latent space to the observable space, paying the price of a weaker inequality for the sake of more computationally efficient minimization of the KL-divergence.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "p(X)"
},
{
"math_id": 1,
"text": "q_\\phi(\\cdot | x)"
},
{
"math_id": 2,
"text": " X"
},
{
"math_id": 3,
"text": " Z"
},
{
"math_id": 4,
"text": "p_\\theta"
},
{
"math_id": 5,
"text": "p_\\theta( X)"
},
{
"math_id": 6,
"text": "p_\\theta( Z \\mid X)"
},
{
"math_id": 7,
"text": "x\\sim p_\\text{data}"
},
{
"math_id": 8,
"text": "\nq_\\phi\n"
},
{
"math_id": 9,
"text": "L(\\phi, \\theta; x) := \\mathbb E_{z\\sim q_\\phi(\\cdot | x)} \\left[ \\ln\\frac{p_\\theta(x, z)}{q_\\phi(z|x)} \\right] . \n"
},
{
"math_id": 10,
"text": "\\begin{align}\nL(\\phi, \\theta; x) = & \\mathbb E_{z\\sim q_\\phi(\\cdot | x)}\\left[ \\ln{} p_\\theta(x, z) \\right] + H[ q_\\phi(z|x) ] \\\\\n= & \\mathbb \\ln{} \\,p_\\theta(x) - D_{KL}( q_\\phi(z|x) || p_\\theta(z|x) ) . \\\\\n\\end{align}"
},
{
"math_id": 11,
"text": " H[ q_\\phi(z|x) ] "
},
{
"math_id": 12,
"text": " q_\\phi"
},
{
"math_id": 13,
"text": "\\ln p_\\theta(x)"
},
{
"math_id": 14,
"text": "x"
},
{
"math_id": 15,
"text": "D_{KL}( q_\\phi(z|x) || p_\\theta(z|x) ) "
},
{
"math_id": 16,
"text": "L(\\phi, \\theta; x)"
},
{
"math_id": 17,
"text": "\\ln p_\\theta(x) \\ge \\mathbb \\mathbb E_{z\\sim q_\\phi(\\cdot|x)}\\left[ \\ln\\frac{p_\\theta(x, z)}{q_\\phi(z\\vert x)} \\right]."
},
{
"math_id": 18,
"text": "X"
},
{
"math_id": 19,
"text": "p^*"
},
{
"math_id": 20,
"text": "\\{p_\\theta\\}_{\\theta\\in\\Theta}"
},
{
"math_id": 21,
"text": "\\min_\\theta L(p_\\theta, p^*)"
},
{
"math_id": 22,
"text": "L"
},
{
"math_id": 23,
"text": "p_{\\theta + \\delta \\theta}"
},
{
"math_id": 24,
"text": "L(p_\\theta, p^*) - L(p_{\\theta+\\delta \\theta}, p^*) =0"
},
{
"math_id": 25,
"text": "p(z)"
},
{
"math_id": 26,
"text": "Z"
},
{
"math_id": 27,
"text": "f_\\theta"
},
{
"math_id": 28,
"text": "\\theta"
},
{
"math_id": 29,
"text": "f_\\theta(z)"
},
{
"math_id": 30,
"text": "f_\\theta(z) = (f_1(z), f_2(z))"
},
{
"math_id": 31,
"text": "\\mathcal N(f_1(z), e^{f_2(z)})"
},
{
"math_id": 32,
"text": "(X, Z)"
},
{
"math_id": 33,
"text": "(x, z) \\sim p_\\theta"
},
{
"math_id": 34,
"text": "z\\sim p"
},
{
"math_id": 35,
"text": "x \\sim p_\\theta(\\cdot | z)"
},
{
"math_id": 36,
"text": "p_\\theta(X) \\approx p^*(X)"
},
{
"math_id": 37,
"text": "p_\\theta(x) = \\int p_\\theta(x|z)p(z)dz"
},
{
"math_id": 38,
"text": "p_\\theta(x) = \\frac{p_\\theta(x|z)p(z)}{p_\\theta(z|x)}"
},
{
"math_id": 39,
"text": "p_\\theta(z|x)"
},
{
"math_id": 40,
"text": "q_\\phi(z|x)"
},
{
"math_id": 41,
"text": "p_\\theta(x|z)"
},
{
"math_id": 42,
"text": "p"
},
{
"math_id": 43,
"text": "z"
},
{
"math_id": 44,
"text": "p_\\theta(z|x) = \\frac{p_\\theta(x|z)p(z)}{p_\\theta(x)}"
},
{
"math_id": 45,
"text": "q_\\phi(z|x) \\approx p_\\theta(z|x)"
},
{
"math_id": 46,
"text": "x, z"
},
{
"math_id": 47,
"text": "q_\\phi"
},
{
"math_id": 48,
"text": "\\mathbb{E}_{x\\sim p^*(x)}[\\ln p_\\theta (x)] = -H(p^*) - D_{\\mathit{KL}}(p^*(x) \\| p_\\theta(x))"
},
{
"math_id": 49,
"text": "H(p^*) = -\\mathbb \\mathbb E_{x\\sim p^*}[\\ln p^*(x)]"
},
{
"math_id": 50,
"text": "\\mathbb{E}_{x\\sim p^*(x)}[\\ln p_\\theta (x)]"
},
{
"math_id": 51,
"text": "D_{\\mathit{KL}}(p^*(x) \\| p_\\theta(x))"
},
{
"math_id": 52,
"text": "p_\\theta \\approx p^*"
},
{
"math_id": 53,
"text": "x_i\\sim p^*(x)"
},
{
"math_id": 54,
"text": "N\\max_\\theta \\mathbb{E}_{x\\sim p^*(x)}[\\ln p_\\theta (x)]\\approx \\max_\\theta \\sum_i \\ln p_\\theta (x_i)"
},
{
"math_id": 55,
"text": "N"
},
{
"math_id": 56,
"text": "\\sum_i \\ln p_\\theta (x_i)"
},
{
"math_id": 57,
"text": "\\ln p_\\theta(x) = \\ln \\int p_\\theta(x|z) p(z)dz"
},
{
"math_id": 58,
"text": "\\int p_\\theta(x|z) p(z)dz = \\mathbb E_{z\\sim q_\\phi(\\cdot|x)}\\left[\\frac{p_\\theta (x, z)}{q_\\phi(z|x)}\\right]"
},
{
"math_id": 59,
"text": "z\\sim q_\\phi(\\cdot|x)"
},
{
"math_id": 60,
"text": "\\frac{p_\\theta (x, z)}{q_\\phi(z|x)}"
},
{
"math_id": 61,
"text": "p_\\theta(x)"
},
{
"math_id": 62,
"text": "\\ln"
},
{
"math_id": 63,
"text": "\\ln p_\\theta(x)= \\ln \\mathbb E_{z\\sim q_\\phi(\\cdot|x)}\\left[\\frac{p_\\theta (x, z)}{q_\\phi(z|x)}\\right] \\geq \\mathbb E_{z\\sim q_\\phi(\\cdot|x)}\\left[\\ln\\frac{p_\\theta (x, z)}{q_\\phi(z|x)}\\right]"
},
{
"math_id": 64,
"text": "z_i\\sim q_\\phi(\\cdot | x)"
},
{
"math_id": 65,
"text": "\\mathbb E_{z_i \\sim q_\\phi(\\cdot|x)}\\left[\n\t\t \\ln \\left(\\frac 1N \\sum_i \\frac{p_\\theta (x, z_i)}{q_\\phi(z_i|x)}\\right)\n\t\t \\right] \\leq \\ln \\mathbb E_{z_i \\sim q_\\phi(\\cdot|x)}\\left[\n\t\t \\frac 1N \\sum_i \\frac{p_\\theta (x, z_i)}{q_\\phi(z_i|x)}\n\t\t \\right] = \\ln p_\\theta(x) "
},
{
"math_id": 66,
"text": "\\mathbb E_{z_i \\sim q_\\phi(\\cdot|x)}\\left[\n\t\t \\ln \\left(\\frac 1N \\sum_i \\frac{p_\\theta (z_i|x)}{q_\\phi(z_i|x)}\\right)\n\t\t \\right] \\leq 0"
},
{
"math_id": 67,
"text": "N=1"
},
{
"math_id": 68,
"text": "\\ln p_\\theta(x)- \\mathbb E_{z\\sim q_\\phi(\\cdot|x)}\\left[\\ln\\frac{p_\\theta (x, z)}{q_\\phi(z|x)}\\right] = D_{\\mathit{KL}}(q_\\phi(\\cdot | x)\\| p_\\theta(\\cdot | x))\\geq 0"
},
{
"math_id": 69,
"text": "L(\\phi, \\theta; x) := \\ln p_\\theta(x) - D_{\\mathit{KL}}(q_\\phi(\\cdot | x)\\| p_\\theta(\\cdot | x))"
},
{
"math_id": 70,
"text": "\\max_{\\theta, \\phi} L(\\phi, \\theta; x)"
},
{
"math_id": 71,
"text": "D_{\\mathit{KL}}(q_\\phi(\\cdot | x)\\| p_\\theta(\\cdot | x))"
},
{
"math_id": 72,
"text": "\\hat\\phi, \\hat \\theta"
},
{
"math_id": 73,
"text": "\\ln p_{\\hat \\theta}(x) \\approx \\max_\\theta \\ln p_\\theta(x); \\quad q_{\\hat\\phi}(\\cdot | x)\\approx p_{\\hat\\theta}(\\cdot | x)"
},
{
"math_id": 74,
"text": "\\ln p_{\\hat \\theta}(x) \\approx \\max_\\theta -H(p^*) - D_{\\mathit{KL}}(p^*(x) \\| p_\\theta(x))"
},
{
"math_id": 75,
"text": "\\hat\\theta \\approx \\arg\\min D_{\\mathit{KL}}(p^*(x) \\| p_\\theta(x))"
},
{
"math_id": 76,
"text": "p_{\\hat\\theta} \\approx p^*"
},
{
"math_id": 77,
"text": "q_{\\hat\\phi}(\\cdot | x)\\approx p_{\\hat\\theta}(\\cdot | x)"
},
{
"math_id": 78,
"text": "\\mathbb{E}_{z\\sim q_\\phi(\\cdot | x)}\\left[\\ln\\frac{p_\\theta(x, z)}{q_\\phi(z|x)}\\right] = \\int q_\\phi(z|x)\\ln\\frac{p_\\theta(x, z)}{q_\\phi(z|x)}dz"
},
{
"math_id": 79,
"text": "z\\sim q_\\phi(\\cdot | x)"
},
{
"math_id": 80,
"text": "\\ln\\frac{p_\\theta(x, z)}{q_\\phi(z|x)}"
},
{
"math_id": 81,
"text": "\\ln p_\\theta(x) - D_{\\mathit{KL}}(q_\\phi(\\cdot | x) \\;\\|\\; p_\\theta(\\cdot | x))"
},
{
"math_id": 82,
"text": "\\phi"
},
{
"math_id": 83,
"text": "p_\\theta(\\cdot | x)"
},
{
"math_id": 84,
"text": "\\mathbb{E}_{z\\sim q_\\phi(\\cdot | x)}[\\ln p_\\theta(x|z)] - D_{\\mathit{KL}}(q_\\phi(\\cdot | x) \\;\\|\\; p)"
},
{
"math_id": 85,
"text": "\\ln p_\\theta (x|z)"
},
{
"math_id": 86,
"text": "\\arg\\max_z \\ln p_\\theta (x|z)"
},
{
"math_id": 87,
"text": "H(q_\\phi(\\cdot | x)) + \\mathbb{E}_{z\\sim q(\\cdot | x)}[\\ln p_\\theta(z|x)] + \\ln p_\\theta(x)"
},
{
"math_id": 88,
"text": "\\ln p_\\theta (z|x)"
},
{
"math_id": 89,
"text": "\\arg\\max_z \\ln p_\\theta (z|x)"
},
{
"math_id": 90,
"text": "D = \\{x_1, ..., x_N\\}"
},
{
"math_id": 91,
"text": "q_D(x) = \\frac 1N \\sum_i \\delta_{x_i}"
},
{
"math_id": 92,
"text": "q_D(x)"
},
{
"math_id": 93,
"text": "\\ln p_\\theta(D)"
},
{
"math_id": 94,
"text": "D_{\\mathit{KL}}(q_D(x) \\| p_\\theta(x)) = -\\frac 1N \\sum_i \\ln p_\\theta(x_i) - H(q_D)= -\\frac 1N \\ln p_\\theta(D) - H(q_D) "
},
{
"math_id": 95,
"text": "D_{\\mathit{KL}}(q_D(x) \\| p_\\theta(x)) \\leq -\\frac 1N L(\\phi, \\theta; D) - H(q_D)"
},
{
"math_id": 96,
"text": "D_{\\mathit{KL}}(q_D(x) \\| p_\\theta(x)) \\leq -\\frac 1N \\sum_i L(\\phi, \\theta; x_i) - H(q_D)= D_{\\mathit{KL}}(q_{D, \\phi}(x, z); p_\\theta(x, z))"
},
{
"math_id": 97,
"text": "L(\\phi, \\theta; D)= \\sum_i L(\\phi, \\theta; x_i)"
},
{
"math_id": 98,
"text": "D_{\\mathit{KL}}(q_{D, \\phi}(x, z); p_\\theta(x, z))"
},
{
"math_id": 99,
"text": "D_{\\mathit{KL}}(q_{D}(x); p_\\theta(x))"
}
] |
https://en.wikipedia.org/wiki?curid=56891926
|
56893192
|
Edward Odell
|
American mathematician
Edward "Ted" Wilfred Odell, Jr. (15 March 1947, in Pleasantville, New York – 9 January 2013, in Houston, Texas) was an American mathematician, specializing in the theory of Banach spaces.
Odell received his B.S. degree in 1969 from the State University of New York at Binghamton and his Ph.D. in 1975 from the Massachusetts Institute of Technology under William Buhmann Johnson. From 1975 to 1977 Odell was a Josiah Willard Gibbs Instructor at Yale University. He became in 1977 an assistant professor, in 1981 an associate professor, and in 1990 a full professor at the University of Texas at Austin. He was the author or coauthor of 84 articles.
In 1994 Odell was an Invited Speaker of the ICM in Zurich. In 2012 he was elected a Fellow of the American Mathematical Society.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "L_{p}(\\mathbb{R})"
}
] |
https://en.wikipedia.org/wiki?curid=56893192
|
5689562
|
Two-photon absorption
|
Simultaneous absorption of two photons by a molecule
In atomic physics, two-photon absorption (TPA or 2PA), also called two-photon excitation or non-linear absorption, is the simultaneous absorption of two photons of identical or different frequencies in order to excite an atom or a molecule from one state (usually the ground state), via a virtual energy level, to a higher energy, most commonly an excited electronic state. Absorption of two photons with different frequencies is called non-degenerate two-photon absorption. Since TPA depends on the simultaneous absorption of two photons, the probability of TPA is proportional to the photon dose (D), which is proportional to the square of the light intensity (D ∝ I²); thus it is a nonlinear optical process. The energy difference between the involved lower and upper states of the molecule is equal to or smaller than the sum of the photon energies of the two photons absorbed. Two-photon absorption is a third-order process, with an absorption cross section typically several orders of magnitude smaller than the one-photon absorption cross section.
Two-photon excitation of a fluorophore (a fluorescent molecule) leads to two-photon-excited fluorescence where the excited state produced by TPA decays by spontaneous emission of a photon to a lower energy state.
Background.
The phenomenon was originally predicted by Maria Goeppert-Mayer in 1931 in her doctoral dissertation. Thirty years later, the invention of the laser permitted the first experimental verification of TPA when two-photon-excited fluorescence was detected in a europium-doped crystal. Soon afterwards, the effect was observed in cesium vapor and then in CdS, a semiconductor.
TPA is a nonlinear optical process. In particular, the imaginary part of the third-order nonlinear susceptibility is related to the extent of TPA in a given molecule. The selection rules for TPA are therefore different from those for one-photon absorption (OPA), which is dependent on the first-order susceptibility. The relationship between the selection rules for one- and two-photon absorption is analogous to that between Raman and IR spectroscopies. For example, in a centrosymmetric molecule, one- and two-photon allowed transitions are mutually exclusive: an optical transition allowed in one of the spectroscopies is forbidden in the other. However, for non-centrosymmetric molecules there is no formal mutual exclusion between the selection rules for OPA and TPA. In quantum mechanical terms, this difference results from the fact that the quantum states of such molecules have either + or - inversion symmetry, usually labelled by g (for +) and u (for −). One-photon transitions are only allowed between states that differ in inversion symmetry, i.e. g <-> u, while two-photon transitions are only allowed between states that have the same inversion symmetry, i.e. g <-> g and u <-> u.
The relation between the number of photons - or, equivalently, order of the electronic transitions - involved in a TPA process (two) and the order of the corresponding nonlinear susceptibility (three) may be understood using the optical theorem. This theorem relates the imaginary part of an all-optical process of a given perturbation order formula_0 with a process involving charge carriers with half the perturbation order, i.e. formula_1. To apply this theorem it is important to consider that the order in perturbation theory to calculate the probability amplitude of an all-optical formula_2process is formula_3. Since in the case of TPA there are electronic transitions of the second order involved (formula_4), it results from the optical theorem that the order of the nonlinear susceptibility is formula_5, i.e. it is a formula_6process.
Phenomenologically, TPA can be thought of as the third term in a conventional anharmonic oscillator model for depicting vibrational behavior of molecules. Another view is to think of light as photons. In nonresonant TPA, neither photon is at resonance with the system energy gap, and two photons combine to bridge the energy gap larger than the energies of each photon individually. If there were an intermediate electronic state in the gap, this could happen via two separate one-photon transitions in a process described as "resonant TPA", "sequential TPA", or "1+1 absorption" where the absorption alone is a first order process and the generated fluorescence will rise as the square of the incoming intensity. In nonresonant TPA the transition occurs without the presence of the intermediate state. This can be viewed as being due to a "virtual" state created by the interaction of the photons with the molecule. The virtual state argument is quite orthogonal to the anharmonic oscillator argument. It states for example that in a semiconductor, absorption at high energies is impossible if two photons cannot bridge the band gap. So, many materials can be used for the Kerr effect that do not show any absorption and thus have a high damage threshold.
The "nonlinear" in the description of this process means that the strength of the interaction increases faster than linearly with the electric field of the light. In fact, under ideal conditions the rate of TPA is proportional to the square of the field intensity. This dependence can be derived quantum mechanically, but is intuitively obvious when one considers that it requires two photons to coincide in time and space. This requirement for high light intensity means that lasers are required to study TPA phenomena. Further, in order to understand the TPA spectrum, monochromatic light is also desired in order to measure the TPA cross section at different wavelengths. Hence, tunable pulsed lasers (such as frequency-doubled Nd:YAG-pumped OPOs and OPAs) are the choice of excitation.
Measurements.
Two-photon absorption can be measured by several techniques. Some of them are two-photon excited fluorescence (TPEF), z-scan, self-diffraction or nonlinear transmission (NLT). Pulsed lasers are most often used because TPA is a third-order nonlinear optical process, and therefore is most efficient at very high intensities.
Absorption rate.
Beer's law describes the decay in intensity due to one-photon absorption:
formula_7
where formula_8 are the distance that light travelled through a sample, formula_9 is the light intensity after travelling a distance formula_8, formula_10 is the light intensity where the light enters the sample and formula_11 is the one-photon absorption coefficient of the sample. In two-photon absorption, for an incident plane wave of radiation, the light intensity versus distance changes to
formula_12
for TPA with light intensity as a function of path length or cross section formula_13 as a function of concentration formula_14 and the initial light intensity formula_15. The absorption coefficient formula_16 now becomes the TPA coefficient formula_17. (Note that there is some confusion over the term formula_17 in nonlinear optics, since it is sometimes used to describe the second-order polarizability, and occasionally for the molecular two-photon cross-section. More often however, it is used to describe the bulk 2-photon optical density of a sample. The letter formula_18 or formula_19 is more often used to denote the molecular two-photon cross-section.)
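As a purely numerical illustration of the two decay laws above (all coefficient values are invented), the sketch below compares one-photon (Beer's law) decay with the pure two-photon plane-wave solution formula_12; note that the two-photon decay depends on the incident intensity, reflecting the nonlinearity of the process.

```python
import numpy as np

# Arbitrary illustrative values (not from the article).
I0    = 1.0e13          # incident intensity, W/m^2
alpha = 50.0            # one-photon absorption coefficient, 1/m
beta  = 5.0e-12         # two-photon absorption coefficient, m/W

x = np.linspace(0.0, 0.05, 6)                 # path length through the sample, m
I_one_photon = I0 * np.exp(-alpha * x)        # Beer's law
I_two_photon = I0 / (1.0 + beta * x * I0)     # pure TPA, plane-wave solution

for xi, i1, i2 in zip(x, I_one_photon, I_two_photon):
    print(f"x = {xi:6.3f} m   OPA: {i1:.3e}   TPA: {i2:.3e}  W/m^2")
# The TPA decay depends on I0 (nonlinear), whereas Beer's law does not.
```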
Two-photon excited fluorescence.
Relation between the two-photon excited fluorescence and the total number of absorbed photons per unit time formula_20 is given by
formula_21
where formula_22 and formula_23 are the fluorescence quantum efficiency of the fluorophore and the fluorescence collection efficiency of the measurement system, respectively. In a particular measurement, formula_20 is a function of fluorophore concentration formula_24, illuminated sample volume formula_25, incident light intensity formula_26, and two-photon absorption cross-section formula_18:
formula_27
Notice that the formula_20 is proportional to the square of the incident light as expected for TPA.
Units of cross-section.
The molecular two-photon absorption cross-section is usually quoted in the units of Goeppert-Mayer (GM) (after its discoverer, Nobel laureate Maria Goeppert-Mayer), where
1 GM = 10⁻⁵⁰ cm⁴ s photon⁻¹.
Considering the reason for these units, one can see that it results from the product of two areas (one for each photon, each in cm2) and a time (within which the two photons must arrive to be able to act together). The large scaling factor is introduced in order that 2-photon absorption cross-sections of common dyes will have convenient values.
Development of the field and potential applications.
Until the early 1980s, TPA was used as a spectroscopic tool. Scientists compared the OPA and TPA spectra of different organic molecules and obtained several fundamental structure-property relationships. However, in the late 1980s, applications started to be developed. Peter Rentzepis suggested applications in 3D optical data storage. Watt Webb suggested microscopy and imaging. Other applications such as 3D microfabrication, optical logic, autocorrelation, pulse reshaping and optical power limiting were also demonstrated.
3D imaging of semiconductors.
It was demonstrated that by using two-photon absorption, charge carriers can be generated in a spatially confined region of a semiconductor device. This can be used to investigate the charge transport properties of such a device.
Microfabrication and lithography.
In 1992, with the use of higher laser powers (35 mW) and more sensitive resins/resists, TPA found its way into lithography. One of the most distinguishing features of TPA is that the rate of absorption of light by a molecule depends on the square of the light's intensity. This is different from OPA, where the rate of absorption is linear with respect to input intensity. As a result of this dependence, if material is cut with a high power laser beam, the rate of material removal decreases very sharply from the center of the beam to its periphery. Because of this, the "pit" created is sharper and better resolved than if the same size pit were created using normal absorption.
3D photopolymerization.
In 1997, Maruo "et al." developed the first application of TPA in 3D microfabrication. In 3D microfabrication, a block of gel containing monomers and a 2-photon active photoinitiator is prepared as a raw material. Application of a focused laser to the block results in polymerization only at the focal spot of the laser, where the intensity of the absorbed light is highest. The shape of an object can therefore be traced out by the laser, and then the excess gel can be washed away to leave the traced solid. Photopolymerization for 3D microfabrication is used in a wide variety of applications, including microoptics, microfluidics, biomedical implants, 3D scaffolds for cell cultures and tissue engineering.
Imaging.
The human body is not transparent to visible wavelengths. Hence, one-photon imaging using fluorescent dyes is not very efficient. If the same dye had good two-photon absorption, then the corresponding excitation would occur at approximately two times the wavelength at which one-photon excitation would have occurred. As a result, it is possible to use excitation in the near-infrared region, where the human body shows good transparency.
It is sometimes said, incorrectly, that Rayleigh scattering is relevant to imaging techniques such as two-photon. According to Rayleigh's scattering law, the amount of scattering is proportional to formula_28, where formula_29 is the wavelength. As a result, if the wavelength is increased by a factor of 2, the Rayleigh scattering is reduced by a factor of 16. However, Rayleigh scattering only takes place when scattering particles are much smaller than the wavelength of light (the sky is blue because air molecules scatter blue light much more than red light). When particles are larger, scattering increases approximately linearly with wavelength: hence clouds are white since they contain water droplets. This form of scatter is known as Mie scattering and is what occurs in biological tissues. So, although longer wavelengths do scatter less in biological tissues, the difference is not as dramatic as Rayleigh's law would predict.
Optical power limiting.
Another area of research is "optical power limiting". In a material with a strong nonlinear effect, the absorption of light increases with intensity such that beyond a certain input intensity the output intensity approaches a constant value. Such a material can be used to limit the amount of optical power entering a system. This can be used to protect expensive or sensitive equipment such as sensors, can be used in protective goggles, or can be used to control noise in laser beams.
Photodynamic therapy.
Photodynamic therapy (PDT) is a method for treating cancer. In this technique, an organic molecule with a good triplet quantum yield is excited so that the triplet state of this molecule interacts with oxygen. The ground state of oxygen has triplet character. This leads to triplet-triplet annihilation, which gives rise to singlet oxygen, which in turn attacks cancerous cells. However, using TPA materials, the window for excitation can be extended into the infrared region, thereby making the process more viable to be used on the human body.
Two-photon pharmacology.
Photoisomerization of azobenzene-based pharmacological ligands by 2-photon absorption has been described for use in photopharmacology. It allows controlling the activity of endogenous proteins in intact tissue with pharmacological selectivity in three dimensions. It can be used to study neural circuits and to develop drug-based non invasive phototherapies.
Optical data storage.
The ability of two-photon excitation to address molecules deep within a sample without affecting other areas makes it possible to store and retrieve information in the volume of a substance rather than only on a surface as is done on the DVD. Therefore, 3D optical data storage has the possibility to provide media that contain terabyte-level data capacities on a single disc.
Compounds.
To some extent, linear and 2-photon absorption strengths are linked. Therefore, the first compounds to be studied (and many that are still studied and used in e.g. 2-photon microscopy) were standard dyes. In particular, laser dyes were used, since these have good photostability characteristics. However, these dyes tend to have 2-photon cross-sections of the order of 0.1–10 GM, much less than is required to allow simple experiments.
It was not until the 1990s that rational design principles for the construction of two-photon-absorbing molecules began to be developed, in response to a need from imaging and data storage technologies, and aided by the rapid increases in computer power that allowed quantum calculations to be made. The accurate quantum mechanical analysis of two-photon absorbance is orders of magnitude more computationally intensive than that of one-photon absorbance, requiring highly correlated calculations at very high levels of theory.
The most important features of strongly TPA molecules were found to be a long conjugation system (analogous to a large antenna) and substitution by strong donor and acceptor groups (which can be thought of as inducing nonlinearity in the system and increasing the potential for charge-transfer). Therefore, many push-pull olefins exhibit high TPA transitions, up to several thousand GM. It is also found that compounds with a real intermediate energy level close to the "virtual" energy level can have large 2-photon cross-sections as a result of resonance enhancement. There are several databases of two-photon absorption spectra available online.
Compounds with interesting TPA properties also include various porphyrin derivatives, conjugated polymers and even dendrimers. In one study a diradical resonance contribution for the compound depicted below was also linked to efficient TPA. The TPA wavelength for this compound is 1425 nanometer with observed TPA cross section of 424 GM.
Coefficients.
The two-photon absorption coefficient is defined by the relation
formula_30
so that
formula_31
where formula_17 is the two-photon absorption coefficient, formula_16 is the absorption coefficient, formula_32 is the transition rate for TPA per unit volume, formula_26 is the irradiance, "ħ" is the reduced Planck constant, formula_33 is the photon frequency and the thickness of the slice is formula_34. formula_35 is the number density of molecules per cm³, formula_36 is the photon energy (J), formula_37 is the two-photon absorption cross section (cm⁴ s/molecule).
The SI units of the beta coefficient are m/W. If formula_17 (m/W) is multiplied by 10−9 it can be converted to the CGS system (cal/cm s/erg).
Due to differences in the laser pulses used, reported TPA coefficients have differed by as much as a factor of 3. With the transition towards shorter laser pulses, from picosecond to subpicosecond durations, noticeably reduced TPA coefficients have been obtained.
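As an illustrative sketch (coefficient values are invented), the defining relation at the start of this section, formula_30, can be integrated numerically to show how one- and two-photon losses combine along the propagation direction:

```python
import numpy as np

# Invented illustrative values (not from the article).
alpha = 10.0        # one-photon absorption coefficient, 1/m
beta  = 2.0e-12     # two-photon absorption coefficient, m/W
I0    = 1.0e13      # input irradiance, W/m^2

def dI_dz(I):
    return -(alpha * I + beta * I**2)       # -dI/dz = alpha*I + beta*I^2

# Simple fourth-order Runge-Kutta march in z.
z_end, n = 0.2, 2000
dz = z_end / n
I = I0
for step in range(n):
    k1 = dI_dz(I)
    k2 = dI_dz(I + 0.5 * dz * k1)
    k3 = dI_dz(I + 0.5 * dz * k2)
    k4 = dI_dz(I + dz * k3)
    I += dz * (k1 + 2*k2 + 2*k3 + k4) / 6.0

print(f"I(0)   = {I0:.3e} W/m^2")
print(f"I({z_end}) = {I:.3e} W/m^2   (Beer's law alone would give {I0*np.exp(-alpha*z_end):.3e})")
```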
In water.
Laser induced TPA in water was discovered in 1980.
Water absorbs UV radiation near 125 nm, exciting an electron out of the 3a1 orbital and leading to dissociation into OH− and H+. Through TPA this dissociation can be achieved by two photons near 266 nm. Since water and heavy water have different vibrational frequencies and inertia, they also need different photon energies to achieve dissociation and have different absorption coefficients for a given photon wavelength.
A study from January 2002 using a femtosecond laser with 0.22 ps pulses found the coefficient of D2O to be (42±5)×10⁻¹¹ cm/W, whereas that of H2O was (49±5)×10⁻¹¹ cm/W.
Two-photon emission.
The opposite process of TPA is two-photon emission (TPE), which is a single electron transition accompanied by the emission of a photon pair. The energy of each individual photon of the pair is not determined, while the pair as a whole conserves the transition energy. The spectrum of TPE is therefore very broad and continuous. TPE is important for applications in astrophysics, contributing to the continuum radiation from planetary nebulae, where it was theoretically predicted and later observed. TPE in condensed matter and specifically in semiconductors was only first observed in 2008, with emission rates nearly 5 orders of magnitude weaker than one-photon spontaneous emission, with potential applications in quantum information.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "m"
},
{
"math_id": 1,
"text": "m/2"
},
{
"math_id": 2,
"text": "\\chi^{(n)}"
},
{
"math_id": 3,
"text": "m=n+1"
},
{
"math_id": 4,
"text": "m/2=2"
},
{
"math_id": 5,
"text": "n=m-1=3"
},
{
"math_id": 6,
"text": "\\chi^{(3)}"
},
{
"math_id": 7,
"text": "I(x) = I_0 \\mathrm e^{-\\alpha \\,x} \\,"
},
{
"math_id": 8,
"text": " x "
},
{
"math_id": 9,
"text": " I(x) "
},
{
"math_id": 10,
"text": " I(0) "
},
{
"math_id": 11,
"text": " \\alpha "
},
{
"math_id": 12,
"text": "I(x) = \\frac{I_0}{1 + \\beta x I_0} \\,"
},
{
"math_id": 13,
"text": "x"
},
{
"math_id": 14,
"text": "c"
},
{
"math_id": 15,
"text": "I_0"
},
{
"math_id": 16,
"text": "\\alpha"
},
{
"math_id": 17,
"text": "\\beta"
},
{
"math_id": 18,
"text": "\\delta"
},
{
"math_id": 19,
"text": "\\sigma"
},
{
"math_id": 20,
"text": "N_{abs}"
},
{
"math_id": 21,
"text": " F(t)=\\frac{1}{2}\\phi\\eta N_{abs},"
},
{
"math_id": 22,
"text": "\\phi"
},
{
"math_id": 23,
"text": "\\eta"
},
{
"math_id": 24,
"text": "C"
},
{
"math_id": 25,
"text": "V"
},
{
"math_id": 26,
"text": "I"
},
{
"math_id": 27,
"text": " N_{abs}=\\int_V \\mathrm dV \\delta C(r,t)I^2(r,t). "
},
{
"math_id": 28,
"text": "1/\\lambda^4"
},
{
"math_id": 29,
"text": "\\lambda"
},
{
"math_id": 30,
"text": "-\\frac{dI}{dz}=\\alpha I+\\beta I^{2} "
},
{
"math_id": 31,
"text": "\\beta (\\omega)=\\frac{2 \\hbar \\omega}{I^{2}} W_T^{(2)}(\\omega)=\\frac{N}{E}\\sigma^{(2)}"
},
{
"math_id": 32,
"text": "W_T^{(2)}(\\omega)"
},
{
"math_id": 33,
"text": "\\omega"
},
{
"math_id": 34,
"text": "dz"
},
{
"math_id": 35,
"text": "N"
},
{
"math_id": 36,
"text": "E"
},
{
"math_id": 37,
"text": "\\sigma^{(2)}"
}
] |
https://en.wikipedia.org/wiki?curid=5689562
|
56899835
|
Projection (measure theory)
|
In measure theory, projection maps often appear when working with product (Cartesian) spaces: the product sigma-algebra of measurable spaces is defined to be the coarsest one such that the projection mappings are measurable. Sometimes, for various reasons, product spaces are equipped with a 𝜎-algebra different from the product 𝜎-algebra. In these cases the projections need not be measurable at all.
The projection of a measurable set is called an analytic set and need not be measurable. However, in some cases, either relative to the product 𝜎-algebra or relative to some other 𝜎-algebra, the projection of a measurable set is indeed measurable.
Henri Lebesgue himself, one of the founders of measure theory, was mistaken about that fact. In a paper from 1905 he wrote that the projection of a Borel set in the plane onto the real line is again a Borel set. The mathematician Mikhail Yakovlevich Suslin found that error about ten years later, and his subsequent research led to descriptive set theory. The fundamental mistake of Lebesgue was to assume that projection commutes with decreasing intersection, while there are simple counterexamples to that.
Basic examples.
For an example of a non-measurable set with measurable projections, consider the space formula_0 with the 𝜎-algebra formula_1 and the space formula_2 with the 𝜎-algebra formula_3 The diagonal set formula_4 is not measurable relative to formula_5 although both projections are measurable sets.
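Because this example is finite, it can be verified by brute force. The following sketch (not part of the article) enumerates the 𝜎-algebra on the product space generated by the measurable rectangles and confirms that the diagonal set is not in it, while both of its projections are measurable:

```python
from itertools import product

X = Y = (0, 1)
F = [frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})]   # sigma-algebra on X
G = [frozenset(), frozenset({0, 1})]                                   # sigma-algebra on Y

full = frozenset(product(X, Y))

# Generators of the product sigma-algebra: measurable rectangles A x B.
rectangles = {frozenset((a, b) for a in A for b in B) for A in F for B in G}

# Close under complement and (finite) union to obtain the generated sigma-algebra.
sigma = set(rectangles)
changed = True
while changed:
    changed = False
    for S in list(sigma):
        comp = full - S
        if comp not in sigma:
            sigma.add(comp); changed = True
    for S, T in product(list(sigma), repeat=2):
        U = S | T
        if U not in sigma:
            sigma.add(U); changed = True

diagonal = frozenset({(0, 0), (1, 1)})
proj_X = frozenset(x for x, _ in diagonal)
proj_Y = frozenset(y for _, y in diagonal)

print("diagonal in product sigma-algebra:", diagonal in sigma)   # False
print("projection onto X measurable:", proj_X in F)              # True
print("projection onto Y measurable:", proj_Y in G)              # True
```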
A common example of a non-measurable set that is the projection of a measurable set arises in the Lebesgue 𝜎-algebra. Let formula_6 be the Lebesgue 𝜎-algebra of formula_7 and let formula_8 be the Lebesgue 𝜎-algebra of formula_9 For any bounded formula_10 not in formula_11 the set formula_12 is in formula_13 since Lebesgue measure is complete and the product set is contained in a set of measure zero.
Still one can see that formula_8 is not the product 𝜎-algebra formula_14 but its completion. As for such an example in a product 𝜎-algebra, one can take the space formula_15 (or any product over an index set of cardinality at least that of the continuum) with the product 𝜎-algebra formula_16 where formula_17 for every formula_18 In fact, in this case "most" of the projected sets are not measurable, since the cardinality of formula_19 is formula_20 whereas the cardinality of the projected sets is formula_21 There are also examples of Borel sets in the plane whose projection to the real line is not a Borel set, as Suslin showed.
Measurable projection theorem.
The following theorem gives a sufficient condition for the projection of measurable sets to be measurable.
Let formula_22 be a measurable space and let formula_23 be a Polish space where formula_24 is its Borel 𝜎-algebra. Then for every set in the product 𝜎-algebra formula_25 the projected set onto formula_26 is a universally measurable set relative to formula_27
An important special case of this theorem is that the projection of any Borel set of formula_28 onto formula_29 where formula_30 is Lebesgue-measurable, even though it is not necessarily a Borel set. In addition, it means that the earlier example of a non-Lebesgue-measurable subset of formula_7 that is the projection of a measurable subset of formula_31 is essentially the only sort of such example.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X := \\{0, 1\\}"
},
{
"math_id": 1,
"text": "\\mathcal{F} := \\{\\varnothing, \\{0\\}, \\{1\\}, \\{0, 1\\}\\}"
},
{
"math_id": 2,
"text": "Y := \\{0, 1\\}"
},
{
"math_id": 3,
"text": "\\mathcal{G} := \\{\\varnothing, \\{0, 1\\}\\}."
},
{
"math_id": 4,
"text": "\\{(0, 0), (1, 1)\\} \\subseteq X \\times Y"
},
{
"math_id": 5,
"text": "\\mathcal{F}\\otimes\\mathcal{G},"
},
{
"math_id": 6,
"text": "\\mathcal{L}"
},
{
"math_id": 7,
"text": "\\Reals"
},
{
"math_id": 8,
"text": "\\mathcal{L}'"
},
{
"math_id": 9,
"text": "\\Reals^2."
},
{
"math_id": 10,
"text": "N \\subseteq \\Reals"
},
{
"math_id": 11,
"text": "\\mathcal{L}."
},
{
"math_id": 12,
"text": "N \\times \\{0\\}"
},
{
"math_id": 13,
"text": "\\mathcal{L}',"
},
{
"math_id": 14,
"text": "\\mathcal{L} \\otimes \\mathcal{L}"
},
{
"math_id": 15,
"text": "\\{0, 1\\}^\\Reals"
},
{
"math_id": 16,
"text": "\\mathcal{F} = \\textstyle {\\bigotimes\\limits_{t\\in\\Reals}} \\mathcal{F}_t"
},
{
"math_id": 17,
"text": "\\mathcal{F}_t = \\{\\varnothing,\\{0\\} ,\\{1\\} ,\\{0, 1\\}\\}"
},
{
"math_id": 18,
"text": "t \\in \\Reals."
},
{
"math_id": 19,
"text": "\\mathcal{F}"
},
{
"math_id": 20,
"text": "\\aleph_0 \\cdot 2^{\\aleph_0} = 2^{\\aleph_0},"
},
{
"math_id": 21,
"text": "2^{2^{\\aleph_0}}."
},
{
"math_id": 22,
"text": "(X, \\mathcal{F})"
},
{
"math_id": 23,
"text": "(Y, \\mathcal{B})"
},
{
"math_id": 24,
"text": "\\mathcal{B}"
},
{
"math_id": 25,
"text": "\\mathcal{F} \\otimes \\mathcal{B},"
},
{
"math_id": 26,
"text": "X"
},
{
"math_id": 27,
"text": "\\mathcal{F}."
},
{
"math_id": 28,
"text": "\\Reals^n"
},
{
"math_id": 29,
"text": "\\Reals^{n-k}"
},
{
"math_id": 30,
"text": "k < n"
},
{
"math_id": 31,
"text": "\\Reals^2,"
}
] |
https://en.wikipedia.org/wiki?curid=56899835
|
56903444
|
David Drasin
|
American mathematician
David Drasin (born 3 November 1940, Philadelphia) is an American mathematician, specializing in function theory.
Drasin received his bachelor's degree in 1962 from Temple University and his doctorate in 1966 from Cornell University, supervised by Wolfgang Fuchs and Clifford John Earle, Jr., with thesis "An integral Tauberian theorem and other topics". After that he was an assistant professor, from 1969 an associate professor, and from 1974 a full professor at Purdue University. He was visiting professor in 2005 at the University of Kiel and in 2005/2006 at the University of Helsinki.
In 1976, Drasin gave a complete solution to the inverse problem of Nevanlinna theory (value distribution theory), which was posed by Rolf Nevanlinna in 1929. In the 1930s, the problem was investigated by Nevanlinna and by, among others, Egon Ullrich (1902–1957), with later investigations by Oswald Teichmüller (1913–1943), Hans Wittich, Le Van Thiem (1918–1991) and other mathematicians. Anatolii Goldberg (1930–2008) was the first to completely solve the inverse problem in the special case where the number of exceptional values is finite. For entire functions the problem was solved in 1962 by Wolfgang Fuchs and Walter Hayman. The general problem concerns the question of the existence of a meromorphic function at given values of the exceptional values and associated deficiency values and branching values (with constraints from the Nevanlinna theory). Drasin proved that there is a positive answer to Nevanlinna's problem.
In 1994 Drasin was an Invited Speaker at the ICM in Zurich. Since 1996 he has been a co-editor of the "Annals of the Finnish Academy of Sciences" and a co-editor of "Computational Methods in Function Theory". He was a co-editor of the American Mathematical Monthly from 1968 to 1971. From 2002 to 2004 he was a program director/analyst for the National Science Foundation.
He is married and has three children.
|
[
{
"math_id": 0,
"text": "\\cos \\,\\pi \\rho"
}
] |
https://en.wikipedia.org/wiki?curid=56903444
|
569071
|
Free-air gravity anomaly
|
In geophysics, the free-air gravity anomaly, often simply called the free-air anomaly, is the measured gravity anomaly after a free-air correction is applied to account for the elevation at which a measurement is made. It does so by adjusting these measurements of gravity to what would have been measured at a reference level, which is commonly taken as mean sea level or the geoid.
Applications.
Studies of the subsurface structure and composition of the Earth's crust and mantle employ surveys using gravimeters to measure the departure of observed gravity from a theoretical gravity value to identify anomalies due to geologic features below the measurement locations. The computation of anomalies from observed measurements involves the application of corrections that define the resulting anomaly. The free-air anomaly can be used to test for isostatic equilibrium over broad regions.
Survey methods.
The free-air correction adjusts measurements of gravity to what would have been measured at mean sea level, that is, on the geoid. The gravitational attraction of Earth below the measurement point and above mean sea level is ignored and it is imagined that the observed gravity is measured in air, hence the name. The theoretical gravity value at a location is computed by representing the Earth as an ellipsoid that approximates the more complex shape of the geoid. Gravity is computed on the ellipsoid surface using the International Gravity Formula.
For studies of subsurface structure, the free-air anomaly is further adjusted by a correction for the mass below the measurement point and above the reference of mean sea level or a local datum elevation. This defines the Bouguer anomaly.
Calculation.
The free-air gravity anomaly formula_0 is given by the equation:
formula_1
Here, formula_2 is observed gravity, formula_3 is the "free-air correction", and formula_4 is theoretical gravity.
It can be helpful to think of the free-air anomaly as comparing observed gravity to theoretical gravity adjusted up to the measurement point instead of observed gravity adjusted down to the geoid. This avoids any confusion of assuming that the measurement is made in free air. Either way, however, the Earth mass between the observation point and the geoid is neglected.
The equation for this approach is simply rearranging terms in the first equation of this section so that reference gravity is adjusted and not the observed gravity:
formula_5
Correction.
Gravitational acceleration decreases with distance from the mass according to an inverse-square law. The free-air correction is calculated from Newton's law as the rate of change of gravity with distance:
formula_6
At 45° latitude, formula_7 mGal/m.
The free-air correction is the amount that must be added to a measurement at height formula_8 to correct it to the reference level:
formula_9
Here we have assumed that measurements are made relatively close to the surface so that R does not vary significantly. The value of the free-air correction is positive when measured above the geoid, and negative when measured below. There is the assumption that no mass exists between the observation point and the reference level. The Bouguer and terrain corrections are used to account for this.
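The following is a short numerical sketch of the computation described above (the station values are invented for illustration): the free-air correction of about 0.3086 mGal per metre of elevation is added to the observed gravity before subtracting theoretical gravity.

```python
# Free-air anomaly: g_F = (g_obs + dg_F) - g_lambda, with dg_F = (2g/R) * h.
FREE_AIR_GRADIENT = 0.3086      # mGal per metre (value quoted above for 45 deg latitude)

def free_air_anomaly(g_obs_mgal, g_theoretical_mgal, elevation_m):
    """Free-air anomaly in mGal for a station at the given elevation above the geoid."""
    correction = FREE_AIR_GRADIENT * elevation_m      # positive above the geoid
    return (g_obs_mgal + correction) - g_theoretical_mgal

# Invented example station (values are illustrative only).
g_obs   = 980_562.0     # observed gravity, mGal
g_theor = 980_619.0     # theoretical (ellipsoid) gravity at the station latitude, mGal
h       = 250.0         # elevation above the geoid, m

print(f"free-air correction: {FREE_AIR_GRADIENT * h:.1f} mGal")
print(f"free-air anomaly:    {free_air_anomaly(g_obs, g_theor, h):.1f} mGal")
```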
Significance.
Over the ocean, where gravity is measured from ships near sea level, there is little or no free-air correction. In marine gravity surveys, it was observed that the free-air anomaly is positive but very small over the Mid-Ocean Ridges, in spite of the fact that these features rise several kilometers above the surrounding seafloor. The small anomaly is explained by the lower-density crust and mantle below the ridges resulting from seafloor spreading. This lower density apparently offsets the extra height of the ridge, indicating that Mid-Ocean Ridges are in isostatic equilibrium.
|
[
{
"math_id": 0,
"text": "g_F"
},
{
"math_id": 1,
"text": "g_{F} = (g_{obs} + \\delta g_F) - g_\\lambda "
},
{
"math_id": 2,
"text": "g_{obs}"
},
{
"math_id": 3,
"text": "\\delta g_F"
},
{
"math_id": 4,
"text": "g_\\lambda"
},
{
"math_id": 5,
"text": "g_{F} = g_{obs} - (g_\\lambda - \\delta g_F) "
},
{
"math_id": 6,
"text": "\\begin{align} g &=\\frac{GM}{R^2}\\\\\n\\frac{dg}{dR} &= -\\frac{2GM}{R^3}= -\\frac{2g}{R} \\end{align}"
},
{
"math_id": 7,
"text": "2g/R = 0.3086"
},
{
"math_id": 8,
"text": "h"
},
{
"math_id": 9,
"text": "\\delta g_F = \\frac{2g}{R} \\times h "
}
] |
https://en.wikipedia.org/wiki?curid=569071
|
56907265
|
Clarke's equation
|
In combustion, Clarke's equation is a third-order nonlinear partial differential equation, first derived by John Frederick Clarke in 1978. The equation describes the thermal explosion process, including both effects of constant-volume and constant-pressure processes, as well as the effects of adiabatic and isothermal sound speeds. The equation reads as
formula_0
or, alternatively
formula_1
where formula_2 is the non-dimensional temperature perturbation, formula_3 is the specific heat ratio and formula_4 is the relevant Damköhler number. The term formula_5 describes the thermal explosion at constant pressure and the term formula_6 describes the thermal explosion at constant volume. Similarly, the term formula_7 describes the wave propagation at adiabatic sound speed and the term formula_8 describes the wave propagation at isothermal sound speed. Molecular transports are neglected in the derivation.
It may appear that the parameter formula_9 can be removed from the equation by the transformation formula_10; it is, however, retained here since formula_9 may also appear in the initial and boundary conditions.
Example: Fast, non-diffusive ignition by deposition of a radially symmetric hot source.
Suppose a radially symmetric hot source is deposited instantaneously in a reacting mixture. When the chemical time is comparable to the acoustic time, diffusion is neglected, so that ignition is characterised by heat release from the chemical energy and cooling by the expansion waves. This problem is governed by Clarke's equation with formula_11, where formula_12 is the maximum initial temperature, formula_13 is the temperature and formula_14 is the Frank-Kamenetskii temperature (formula_15 is the gas constant and formula_16 is the activation energy). Furthermore, let formula_17 denote the distance from the center, measured in units of the initial hot core size, and let formula_18 be the time, measured in units of the acoustic time. In this case, the initial and boundary conditions are given by
formula_19
where formula_20, respectively, corresponds to the planar, cylindrical and spherical problems. Let us define a new variable
formula_21
which is the increment of formula_22 from its distant values. Then, at small times, the asymptotic solution is given by
formula_23
As time progresses, a steady state is approached when formula_24 and a thermal explosion is found to occur when formula_25, where formula_26 is the Frank-Kamenetskii parameter; if formula_27, then formula_28 in the planar case, formula_29 in the cylindrical case and formula_30 in the spherical case. For formula_31, the solution in the first approximation is given by
formula_32
which shows that thermal explosion occurs at formula_33, where formula_34 is the ignition time.
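A small numerical sketch of the regime formula_31 described above (parameter values chosen only for illustration): evaluating the first-approximation solution formula_32 shows it growing without bound at the centre as the ignition time formula_34 is approached.

```python
import numpy as np

gamma = 1.4        # specific heat ratio (the value for which delta_c is quoted above)
delta = 10.0       # Damkohler number, chosen well above delta_c for illustration

t_ignition = 1.0 / (gamma * delta)      # blow-up time of the first approximation at r = 0
print(f"ignition time t_i = {t_ignition:.4f}")

# phi(r, t) = -ln(1 - gamma*delta*t*exp(-r^2)), valid while gamma*delta*t*exp(-r^2) < 1
def phi(r, t):
    return -np.log(1.0 - gamma * delta * t * np.exp(-r**2))

for frac in (0.5, 0.9, 0.99):
    t = frac * t_ignition
    print(f"t/t_i = {frac:4.2f}:  phi(0, t) = {phi(0.0, t):7.3f}   phi(1, t) = {phi(1.0, t):6.3f}")
# phi grows without bound at the centre as t approaches t_i, signalling thermal explosion.
```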
Generalised form.
For a generalised form of the reaction term, one may write
formula_35
where formula_36 is an arbitrary function representing the reaction term.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\frac{\\partial^2 }{\\partial t^2}\\left(\\frac{\\partial \\theta}{\\partial t}-\\gamma \\delta e^\\theta\\right) = \\nabla^2 \\left(\\frac{\\partial \\theta}{\\partial t}-\\delta e^\\theta\\right) "
},
{
"math_id": 1,
"text": "\\left(\\frac{\\partial^2 }{\\partial t^2}-\\nabla^2\\right) \\frac{\\partial \\theta}{\\partial t}= \\left(\\gamma\\frac{\\partial^2 }{\\partial t^2} - \\nabla^2 \\right)\\delta e^\\theta "
},
{
"math_id": 2,
"text": "\\theta"
},
{
"math_id": 3,
"text": "\\gamma>1"
},
{
"math_id": 4,
"text": "\n\\delta"
},
{
"math_id": 5,
"text": "\\partial\\theta/\\partial t-e^\\theta"
},
{
"math_id": 6,
"text": "\\partial\\theta/\\partial t-\\gamma e^\\theta"
},
{
"math_id": 7,
"text": "\\partial^2/\\partial t^2-\\nabla^2"
},
{
"math_id": 8,
"text": "\\gamma\\partial^2/\\partial t^2-\\nabla^2"
},
{
"math_id": 9,
"text": "\\delta"
},
{
"math_id": 10,
"text": "(x,t)\\to(\\delta x,\\delta t)"
},
{
"math_id": 11,
"text": "\\theta=(T_m-T)/\\varepsilon T_m"
},
{
"math_id": 12,
"text": "T_m"
},
{
"math_id": 13,
"text": "T"
},
{
"math_id": 14,
"text": "\\varepsilon T_m \\equiv RT_m^2/E \\ll T_m"
},
{
"math_id": 15,
"text": "R"
},
{
"math_id": 16,
"text": "E"
},
{
"math_id": 17,
"text": "r"
},
{
"math_id": 18,
"text": "t"
},
{
"math_id": 19,
"text": "t=0:\\,-\\theta=r^2, \\quad r=0:\\, \\frac{\\partial \\theta}{\\partial r} =0, \\quad r\\gg 1:\\,-\\theta=r^2 +(j+1)\\frac{\\gamma-1}{\\gamma} t^2,"
},
{
"math_id": 20,
"text": "j=(0,1,2)"
},
{
"math_id": 21,
"text": "\\varphi(r,t) = \\theta +r^2 + (j+1)\\frac{\\gamma-1}{\\gamma} t^2"
},
{
"math_id": 22,
"text": "\\theta(r,t)"
},
{
"math_id": 23,
"text": "\\varphi = \\gamma\\delta t e^{-r^2} + \\frac{1}{2}(\\gamma\\delta t)^2e^{-2r^2} + \\cdots"
},
{
"math_id": 24,
"text": "\\delta\\leq \\delta_c"
},
{
"math_id": 25,
"text": "\\delta>\\delta_c"
},
{
"math_id": 26,
"text": "\\delta_c"
},
{
"math_id": 27,
"text": "\\gamma=1.4"
},
{
"math_id": 28,
"text": "\\delta_c=0.50340"
},
{
"math_id": 29,
"text": "\\delta_c = 0.73583"
},
{
"math_id": 30,
"text": "\\delta_c=0.91448"
},
{
"math_id": 31,
"text": "\\delta\\gg \\delta_c"
},
{
"math_id": 32,
"text": "\\varphi=-\\ln(1-\\gamma\\delta t e^{-r^2})"
},
{
"math_id": 33,
"text": "t=t_i\\equiv 1/(\\gamma\\delta)"
},
{
"math_id": 34,
"text": "t_i"
},
{
"math_id": 35,
"text": "\\left(\\frac{\\partial^2 }{\\partial t^2}-\\nabla^2\\right) \\frac{\\partial \\theta}{\\partial t}= \\left(\\gamma\\frac{\\partial^2 }{\\partial t^2} - \\nabla^2 \\right)\\delta\\omega(\\theta)"
},
{
"math_id": 36,
"text": "\\omega(\\theta)"
}
] |
https://en.wikipedia.org/wiki?curid=56907265
|
56912080
|
Soil moisture velocity equation
|
The soil moisture velocity equation describes the speed at which water moves vertically through unsaturated soil under the combined actions of gravity and capillarity, a process known as infiltration. The equation is an alternative form of the Richardson/Richards equation. The key difference is that the dependent variable is the position of the wetting front formula_0, which is a function of time, the water content and media properties. The soil moisture velocity equation consists of two terms. The first, "advection-like" term was developed to simulate surface infiltration and was extended to the water table; it was verified using data collected in a column experiment patterned after the famous experiment by Childs & Poulovassilis (1962) and against exact solutions.
Soil moisture velocity equation.
The soil moisture velocity equation or SMVE is a Lagrangian reinterpretation of the Eulerian Richards' equation wherein the dependent variable is the position "z" of a wetting front of a particular moisture content formula_1 with time.
formula_2
where:
formula_0 is the vertical coordinate [L] (positive downward),
formula_1 is the water content of the soil at a point [-]
formula_3 is the unsaturated hydraulic conductivity [L T−1],
formula_4 is the capillary pressure head [L],
formula_5 is the soil water diffusivity, which is defined as: formula_6, [L2 T]
formula_7 is time [T].
The first term on the right-hand side of the SMVE is called the "advection-like" term, while the second term is called the "diffusion-like" term. The advection-like term of the Soil Moisture Velocity Equation is particularly useful for calculating the advance of wetting fronts for a liquid invading an unsaturated porous medium under the combined action of gravity and capillarity, because it is convertible to an ordinary differential equation by neglecting the diffusion-like term, and it avoids the problem of representative elementary volume by use of a fine water-content discretization and solution method.
This equation was converted into a set of three ordinary differential equations (ODEs) using the method of lines to convert the partial derivatives on the right-hand side of the equation into appropriate finite difference forms. These three ODEs represent the dynamics of infiltrating water, falling slugs, and capillary groundwater, respectively.
Derivation.
This derivation of the 1-D soil moisture velocity equation for calculating vertical flux formula_8 of water in the vadose zone starts with conservation of mass for an unsaturated porous medium without sources or sinks:
formula_9
We next insert the unsaturated Buckingham–Darcy flux:
formula_10
yielding Richards' equation in mixed form because it includes both the water content formula_1 and capillary head formula_4:
formula_11.
Applying the chain rule of differentiation to the right-hand side of Richards' equation:
formula_12.
Assuming that the constitutive relations for unsaturated hydraulic conductivity and soil capillarity are solely functions of the water content, formula_13 and formula_14, respectively:
formula_15.
This equation implicitly defines a function formula_16 that describes the position of a particular moisture content within the soil using a finite moisture-content discretization. Employing the implicit function theorem, and dividing both sides of this equation by formula_17 to perform the change in variable by the cyclic rule, results in:
formula_18,
which can be written as:
formula_19.
Inserting the definition of the soil water diffusivity:
formula_20
into the previous equation produces:
formula_21
If we consider the velocity of a particular water content formula_1, then we can write the equation in the form of the Soil Moisture Velocity Equation:
formula_22
Physical significance.
Written in moisture content form, 1-D Richards' equation is
formula_23
Where "D"("θ") [L2/T] is 'the soil water diffusivity' as previously defined.
Note that with formula_1 as the dependent variable, physical interpretation is difficult because all the factors that affect the divergence of the flux are wrapped up in the soil moisture diffusivity term formula_5. However, in the SMVE, the three factors that drive flow are in separate terms that have physical significance.
The primary assumptions used in the derivation of the Soil Moisture Velocity Equation, namely that formula_24 and formula_25, are not overly restrictive. Analytical and experimental results show that these assumptions are acceptable under most conditions in natural soils. In this case, the Soil Moisture Velocity Equation is equivalent to the 1-D Richards' equation, albeit with a change in dependent variable. This change of dependent variable is convenient because it reduces the complexity of the problem: whereas Richards' equation requires the calculation of the divergence of the flux, the SMVE represents a flux calculation, not a divergence calculation. The first term on the right-hand side of the SMVE represents the two scalar drivers of flow, gravity and the integrated capillarity of the wetting front. Considering just that term, the SMVE becomes:
formula_26
where formula_27 is the capillary head gradient that is driving the flux and the remaining conductivity term formula_28 represents the ability of gravity to conduct flux through the soil. This term is responsible for the true advection of water through the soil under the combined influences of gravity and capillarity. As such, it is called the "advection-like" term.
Neglecting gravity and the scalar wetting front capillarity, we can consider only the second term on the right-hand side of the SMVE. In this case the Soil Moisture Velocity Equation becomes:
formula_29
This term is strikingly similar to Fick's second law of diffusion. For this reason, this term is called the "diffusion-like" term of the SMVE.
This term represents the flux due to the shape of the wetting front formula_30, divided by the spatial gradient of the capillary head formula_31. Looking at this diffusion-like term, it is reasonable to ask when it might be negligible. The first answer is that this term will be zero when the first derivative is a constant, formula_32, because then the second derivative equals zero. One example where this occurs is the case of an equilibrium hydrostatic moisture profile, when formula_33 with z defined as positive upward. This is a physically realistic result because an equilibrium hydrostatic moisture profile is known to produce no fluxes.
Another instance when the diffusion-like term will be nearly zero is in the case of sharp wetting fronts, where the denominator of the diffusion-like term satisfies formula_34, causing the term to vanish. Notably, sharp wetting fronts are notoriously difficult to resolve and accurately solve with traditional numerical Richards' equation solvers.
Finally, in the case of dry soils, formula_3 tends towards formula_35, making the soil water diffusivity formula_5 tend towards zero as well. In this case, the diffusion-like term would produce no flux.
Comparison against exact solutions of Richards' equation for infiltration into idealized soils developed by Ross & Parlange (1994) revealed that neglecting the diffusion-like term indeed resulted in greater than 99% accuracy in calculated cumulative infiltration. This result indicates that the advection-like term of the SMVE, converted into an ordinary differential equation using the method of lines, is an accurate ODE solution of the infiltration problem. This is consistent with the result published by Ogden et al., who found errors in simulated cumulative infiltration of 0.3% when 263 cm of tropical rainfall over an 8-month simulation were used to drive infiltration simulations comparing the advection-like SMVE solution against the numerical solution of Richards' equation.
Solution.
The advection-like term of the SMVE can be solved using the method of lines and a finite moisture content discretization. This solution of the SMVE advection-like term replaces the 1-D Richards' equation PDE with a set of three ordinary differential equations (ODEs). These three ODEs are:
Infiltration fronts.
With reference to Figure 1, water infiltrating the land surface can flow through the pore space between formula_36 and formula_37. Using the method of lines to convert the SMVE advection-like term into an ODE:
formula_38
Given that any ponded depth of water on the land surface is formula_39, the Green and Ampt (1911) assumption is employed, whereby
formula_40
represents the capillary head gradient that is driving the flow in the formula_41 discretization or "bin". Therefore, the finite water-content equation in the case of infiltration fronts is:
formula_42
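A minimal numerical sketch of this infiltration-front equation is given below. It assumes hypothetical Brooks–Corey-style closures for formula_3 and formula_4 and a simple forward-Euler time step; the functional forms and parameter values are illustrative assumptions, not taken from the cited works.
```python
# Hypothetical Brooks-Corey-style constitutive relations (illustrative assumptions).
theta_r, theta_s = 0.05, 0.45   # residual and saturated water content [-]
K_s = 1.0e-5                    # saturated hydraulic conductivity [m/s]
psi_b, lam = 0.2, 2.0           # air-entry head [m] and pore-size index [-]

def K(theta):
    """Unsaturated hydraulic conductivity K(theta) [m/s]."""
    Se = (theta - theta_r) / (theta_s - theta_r)
    return K_s * Se ** (3.0 + 2.0 / lam)

def psi(theta):
    """Magnitude of the capillary pressure head psi(theta) [m]."""
    Se = (theta - theta_r) / (theta_s - theta_r)
    return psi_b * Se ** (-1.0 / lam)

# Advance one infiltration-front bin from theta_i (initial) to theta_d (wetted)
# using the advection-like ODE above with a forward-Euler step.
theta_i, theta_d = 0.10, 0.40   # initial and wetted water contents [-]
h_p = 0.0                       # ponded depth on the land surface [m]
dK_dtheta = (K(theta_d) - K(theta_i)) / (theta_d - theta_i)

z, dt = 1.0e-3, 10.0            # initial front depth [m] and time step [s]
for _ in range(360):            # one hour of simulated time
    dzdt = dK_dtheta * ((abs(psi(theta_d)) + h_p) / z + 1.0)
    z += dzdt * dt
print(f"wetting front depth after 1 h: {z:.4f} m")
```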
Falling slugs.
After rainfall stops and all surface water infiltrates, water in bins that contain infiltration fronts detaches from the land surface. Assuming that the capillarity at the leading and trailing edges of this 'falling slug' of water is balanced, the water falls through the media at the incremental conductivity associated with the formula_43 bin:
formula_44.
This capillary-free approach to the solution is very similar to the kinematic wave approximation.
Capillary groundwater fronts.
In this case, the flux of water to the formula_45 bin occurs between bin "j" and "i". Therefore, in the context of the method of lines:
formula_46
and
formula_47
which yields:
formula_48
Note the "-1" in parentheses, representing the fact that gravity and capillarity are acting in opposite directions. The performance of this equation was verified, using a column experiment fashioned after that by Childs and Poulovassilis (1962). Results of that validation showed that the finite water-content vadose zone flux calculation method performed comparably to the numerical solution of Richards' equation. The photo shows apparatus. Data from this column experiment are available by clicking on this hot-linked DOI. These data are useful for evaluating models of near-surface water table dynamics.
It is noteworthy that the SMVE advection-like term solved using the finite moisture-content method completely avoids the need to estimate the specific yield. Calculating the specific yield as the water table nears the land surface is made cumbersome by non-linearities. However, the SMVE solved using a finite moisture-content discretization essentially does this automatically in the case of a dynamic near-surface water table.
Notice and awards.
The paper on the Soil Moisture Velocity Equation was highlighted by the editor in the issue of "J. Adv. Modeling of Earth Systems" in which it was first published. The paper is in the public domain and may be freely downloaded by anyone.
The paper describing the finite moisture-content solution of the advection-like term of the Soil Moisture Velocity Equation was selected to receive the 2015 Coolest Paper Award by the early career members of the International Association of Hydrogeologists.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "z"
},
{
"math_id": 1,
"text": "\\theta"
},
{
"math_id": 2,
"text": "\\left. \\frac{dz}{dt} \\right \\vert_\\theta = \\frac{\\partial K(\\theta)}{\\partial \\theta} \\left[ 1- \\left (\\frac{\\partial \\psi(\\theta)}{\\partial z}\\right) \\right] - D(\\theta) \\frac{\\partial^2 \\psi / \\partial z^2}{\\partial \\psi / \\partial z} \n"
},
{
"math_id": 3,
"text": "K(\\theta)"
},
{
"math_id": 4,
"text": "\\psi(\\theta)"
},
{
"math_id": 5,
"text": "D(\\theta)"
},
{
"math_id": 6,
"text": "K(\\theta) \\partial \\psi / \\partial \\theta"
},
{
"math_id": 7,
"text": "t"
},
{
"math_id": 8,
"text": "q"
},
{
"math_id": 9,
"text": "\\frac{\\partial \\theta}{\\partial t} + \\frac{\\partial q}{\\partial z}= 0."
},
{
"math_id": 10,
"text": "q=-K(\\theta)\\frac{\\partial\\psi(\\theta)}{\\partial z} + K(\\theta),"
},
{
"math_id": 11,
"text": "\\frac{\\partial \\theta}{\\partial t}=\\frac{\\partial}{\\partial z} \\left[K(\\theta) \\left(\\frac {\\partial\\psi(\\theta)}{\\partial z}-1\\right)\\right]\n"
},
{
"math_id": 12,
"text": " \\frac{\\partial \\theta}{\\partial t} = \\frac{\\partial }{\\partial z} K(\\theta(z,t))\\frac{\\partial}{\\partial z} \\psi(\\theta(z,t))+K(\\theta)\\frac{\\partial^2}{\\partial z^2}\\psi(\\theta(z,t))-\\frac{\\partial}{\\partial z}K(\\theta(z,t)) \n"
},
{
"math_id": 13,
"text": " K=K(\\theta) "
},
{
"math_id": 14,
"text": " \\psi=\\psi(\\theta) "
},
{
"math_id": 15,
"text": " \\frac{\\partial \\theta}{\\partial t} = K'(\\theta) \\psi'(\\theta) \\left(\\frac{\\partial \\theta}{\\partial z} \\right)^2+K(\\theta) \\left[\\psi''(\\theta)\\left(\\frac{\\partial \\theta}{\\partial z}\\right)^2 + \\psi'(\\theta)\\frac{\\partial^2 \\theta}{\\partial z^2} \\right]-K'(\\theta)\\frac{\\partial \\theta}{\\partial z} \n"
},
{
"math_id": 16,
"text": " Z_R(\\theta,t) \n"
},
{
"math_id": 17,
"text": " {-\\partial \\theta}/{\\partial z} "
},
{
"math_id": 18,
"text": " \\frac{\\partial Z_R}{\\partial t}= -K'(\\theta)\\psi'(\\theta)\\frac{\\partial \\theta}{\\partial z}-K(\\theta)\\psi''(\\theta)\\frac{\\partial \\theta}{\\partial z}-K(\\theta)\\psi'(\\theta)\\frac{\\partial^2\\theta/\\partial z^2}{\\partial \\theta/\\partial z}+K'(\\theta) \n"
},
{
"math_id": 19,
"text": " \\frac{\\partial Z_R}{\\partial t}= -K'(\\theta)\\left[\\frac{\\partial \\psi(\\theta)}{\\partial z} -1 \\right] - K(\\theta)\\left[\\psi''(\\theta)\\frac{\\partial \\theta}{\\partial z}+\\psi'(\\theta)\\frac{\\partial^2 \\theta/\\partial z^2}{\\partial \\theta/\\partial z}\\right] \n"
},
{
"math_id": 20,
"text": " D(\\theta) \\equiv K(\\theta)\\frac{\\partial \\psi}{ \\partial \\theta} "
},
{
"math_id": 21,
"text": " \\frac{\\partial Z_R}{\\partial t}= -K'(\\theta) \\left [\\frac{\\partial \\psi(\\theta)}{\\partial z} -1 \\right]-D(\\theta) \\frac{\\partial^2\\psi/\\partial z^2}{\\partial \\psi/\\partial z} \n"
},
{
"math_id": 22,
"text": " \\left. \\frac{dz}{dt} \\right\\vert_\\theta = \\frac{\\partial K(\\theta)}{\\partial \\theta} \\left[ 1- \\left (\\frac{\\partial \\psi(\\theta)}{\\partial z}\\right) \\right] - D(\\theta) \\frac{\\partial^2 \\psi / \\partial z^2}{\\partial \\psi / \\partial z} \n"
},
{
"math_id": 23,
"text": " \\frac{\\partial \\theta }{\\partial t}= \\frac{\\partial}{\\partial z}\\left(D(\\theta)\\frac{\\partial \\theta}{\\partial z}\\right)+\\frac{\\partial K(\\theta)}{\\partial z}"
},
{
"math_id": 24,
"text": "K=K(\\theta)"
},
{
"math_id": 25,
"text": "\\psi=\\psi(\\theta)"
},
{
"math_id": 26,
"text": " \\frac{\\partial Z_R}{\\partial t}= -K'(\\theta) \\left [\\frac{\\partial \\psi(\\theta)}{\\partial z} -1 \\right]"
},
{
"math_id": 27,
"text": "{\\partial \\psi(\\theta)}/{\\partial z}"
},
{
"math_id": 28,
"text": "K'(\\theta)"
},
{
"math_id": 29,
"text": " \\frac{\\partial Z_R}{\\partial t}= -D(\\theta) \\frac{\\partial^2\\psi/\\partial z^2}{\\partial \\psi/\\partial z} "
},
{
"math_id": 30,
"text": "-D(\\theta) {\\partial^2\\psi/\\partial z^2}"
},
{
"math_id": 31,
"text": "{\\partial \\psi/\\partial z}"
},
{
"math_id": 32,
"text": "<\\partial \\psi/\\partial z=C"
},
{
"math_id": 33,
"text": "\\partial \\psi/\\partial z=-1"
},
{
"math_id": 34,
"text": "\\partial \\psi/\\partial z \\to \\infty "
},
{
"math_id": 35,
"text": "0"
},
{
"math_id": 36,
"text": "\\theta_d"
},
{
"math_id": 37,
"text": "\\theta_i"
},
{
"math_id": 38,
"text": "\\frac{\\partial K(\\theta)}{\\partial \\theta}=\\frac{K(\\theta_d)-K(\\theta_i)}{\\theta_d-\\theta_i}."
},
{
"math_id": 39,
"text": "h_p"
},
{
"math_id": 40,
"text": "\\frac{\\partial \\psi(\\theta)}{\\partial z}=\\frac{|\\psi(\\theta_d)|+h_p}{z_j},"
},
{
"math_id": 41,
"text": "j^{th}"
},
{
"math_id": 42,
"text": "\\left(\\frac{dz}{dt}\\right)_j= \\frac{K(\\theta_d)-K(\\theta_i)}{\\theta_d-\\theta_i} \\left(\\frac{|\\psi(\\theta_d)|+h_p}{z_j}+1\\right)."
},
{
"math_id": 43,
"text": "j^\\text{th}\\ \\Delta\\theta"
},
{
"math_id": 44,
"text": " \\left(\\frac{dz}{dt}\\right)_j= \\frac{K(\\theta_j)-K(\\theta_{j-1})}{\\theta_j -\\theta_{j-1}}"
},
{
"math_id": 45,
"text": " j^\\text{th}"
},
{
"math_id": 46,
"text": "\\frac{\\partial K(\\theta)}{\\partial \\theta}= \\frac{K(\\theta_j)-K(\\theta_i)}{\\theta_j - \\theta_i}, "
},
{
"math_id": 47,
"text": " \\frac{\\partial\\psi(\\theta)}{\\partial z} = \\frac{|\\psi(\\theta_j)|}{H_j} "
},
{
"math_id": 48,
"text": "\\left(\\frac{dH}{dt}\\right)_j= \\frac{K(\\theta_j)-K(\\theta_i)}{\\theta_j - \\theta_i} \\left(\\frac{|\\psi(\\theta_j)|}{H_j}-1\\right). "
}
] |
https://en.wikipedia.org/wiki?curid=56912080
|
56913000
|
Hawking–Page phase transition
|
Thermal phase transition in AdS black holes
In quantum gravity, the Hawking–Page phase transition is a phase transition between AdS black holes with radiation and thermal AdS.
Stephen Hawking and Don Page showed that although AdS black holes can be in stable thermal equilibrium with radiation, they are not the preferred state below a certain critical temperature formula_0. At this temperature there is a first-order phase transition: below formula_0, thermal AdS becomes the dominant contribution to the partition function.
The Hawking–Page phase transition between the unstable small black hole phase and the stable large black hole phase is understood as a confinement–deconfinement phase transition in the dual conformal field theory via the AdS/CFT correspondence.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " T_C "
}
] |
https://en.wikipedia.org/wiki?curid=56913000
|
569154
|
Perpetual calendar
|
Calendar designed to look up the day of the week for a given date
A perpetual calendar is a calendar valid for many years, usually designed to look up the day of the week for a given date in the past or future.
For the Gregorian and Julian calendars, a perpetual calendar typically consists of one of three general variations:
Such a perpetual calendar fails to indicate the dates of moveable feasts such as Easter, which are calculated based on a combination of events in the Tropical year and lunar cycles. These issues are dealt with in great detail in "computus".
An early example of a perpetual calendar for practical use is found in the "Nürnberger Handschrift GNM 3227a". The calendar covers the period of 1390–1495 (on which grounds the manuscript is dated to c. 1389). For each year of this period, it lists the number of weeks between Christmas and Quinquagesima. This is the first known instance of a tabular form of perpetual calendar allowing the calculation of the moveable feasts that became popular during the 15th century.
The chapel Cappella dei Mercanti, Turin contains a perpetual calendar machine made by Giovanni Plana using rotating drums.
Other uses of the term "perpetual calendar".
Offices and retail establishments often display devices containing a set of elements to form all possible numbers from 1 through 31, as well as the names/abbreviations for the months and the days of the week, to show the current date for the convenience of people who might be signing and dating documents such as checks. Establishments that serve alcoholic beverages may use a variant that shows the current month and day but with the legal age of alcohol consumption in years subtracted, indicating the latest legal birth date for alcohol purchases. A common device consists of two cubes in a holder. One cube carries the digits zero to five. The other bears the digits 0, 1, 2, 6 (or 9 if inverted), 7, and 8. This is sufficient because only the digits one and two may appear twice in a date, and they are on both cubes, while the 0 is on both cubes so that all single-digit dates can be shown in double-digit format. In addition to the two cubes, three blocks, each as wide as the two cubes combined but a third as tall and as deep, have the names of the months printed on their long faces. The current month is turned forward on the front block, with the other two month blocks behind it.
Certain calendar reforms have been labeled perpetual calendars because their dates are fixed on the same weekdays every year. Examples are The World Calendar, the International Fixed Calendar and the Pax Calendar. Technically, these are not perpetual calendars but perennial calendars. Their purpose, in part, is to eliminate the need for perpetual calendar tables, algorithms, and computation devices.
In watchmaking, "perpetual calendar" describes a calendar mechanism that correctly displays the date on the watch "perpetually", taking into account the different lengths of the months as well as leap years. The internal mechanism will move the dial to the next day.
Algorithms.
Perpetual calendars use algorithms to compute the day of the week for any given year, month, and day of the month. Even though the individual operations in the formulas can be very efficiently implemented in software, they are too complicated for most people to perform all of the arithmetic mentally. Perpetual calendar designers hide the complexity in tables to simplify their use.
A perpetual calendar employs a table for finding which of fourteen yearly calendars to use. A table for the Gregorian calendar expresses its 400-year grand cycle: 303 common years and 97 leap years total to 146,097 days, or exactly 20,871 weeks. This cycle breaks down into one 100-year period with 25 leap years, making 36,525 days, or "one" day less than 5,218 full weeks; and three 100-year periods with 24 leap years each, making 36,524 days, or "two" days less than 5,218 full weeks.
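The arithmetic of this cycle can be verified directly; the short Python check below is purely illustrative.
```python
# Verify the Gregorian 400-year cycle arithmetic quoted above.
days_400 = 303 * 365 + 97 * 366
print(days_400, days_400 // 7, days_400 % 7)   # 146097 days = exactly 20871 weeks

days_25_leap = 25 * 366 + 75 * 365             # 100-year period with 25 leap years
days_24_leap = 24 * 366 + 76 * 365             # 100-year period with 24 leap years
print(5218 * 7 - days_25_leap)                 # 1 day short of 5218 full weeks
print(5218 * 7 - days_24_leap)                 # 2 days short of 5218 full weeks
```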
Within each 100-year block, the cyclic nature of the Gregorian calendar proceeds in the same fashion as its Julian predecessor: A common year begins and ends on the same day of the week, so the following year will begin on the next successive day of the week. A leap year has one more day, so the year following a leap year begins on the "second" day of the week after the leap year began. Every four years, the starting weekday advances five days, so over a 28-year period, it advances 35, returning to the same place in both the leap year progression and the starting weekday. This cycle completes three times in 84 years, leaving 16 years in the fourth, incomplete cycle of the century.
A major complicating factor in constructing a perpetual calendar algorithm is the peculiar and variable length of February, which was at one time the "last" month of the year, leaving the first 11 months March through January with a five-month repeating pattern: 31, 30, 31, 30, 31, ..., so that the offset from March of the starting day of the week for any month could be easily determined. Zeller's congruence, a well-known algorithm for finding the day of the week for any date, explicitly defines January and February as the "13th" and "14th" months of the "previous" year to take advantage of this regularity, but the month-dependent calculation is still very complicated for mental arithmetic:
formula_0
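For illustration, the sketch below implements the full Zeller's congruence for the Gregorian calendar, of which the expression above is only the month-dependent term. The weekday mapping follows Zeller's usual convention (0 = Saturday); the snippet is an illustrative aid rather than part of any cited source.
```python
def zeller_gregorian(year: int, month: int, day: int) -> str:
    """Day of the week via Zeller's congruence (Gregorian calendar).

    January and February are treated as months 13 and 14 of the previous
    year, which is what makes the floor(13(m+1)/5) month term work.
    """
    if month < 3:                       # shift Jan/Feb to the previous year
        month += 12
        year -= 1
    K = year % 100                      # year within the century
    J = year // 100                     # zero-based century
    h = (day + (13 * (month + 1)) // 5 + K + K // 4 + J // 4 + 5 * J) % 7
    # Zeller's convention: 0 = Saturday, 1 = Sunday, ..., 6 = Friday.
    return ["Saturday", "Sunday", "Monday", "Tuesday",
            "Wednesday", "Thursday", "Friday"][h]

# 15 October 1582, the first day of the Gregorian calendar, was a Friday.
print(zeller_gregorian(1582, 10, 15))
```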
Instead, a table-based perpetual calendar provides a simple lookup mechanism to find the offset for the day of the week for the first day of each month. To simplify the table, in a leap year January and February must either be treated as a separate year or have extra entries in the month table:
Perpetual Julian and Gregorian calendar tables.
Table one (cyd).
The following calendar works for any date from 15 October 1582 onwards, but only for Gregorian calendar dates.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\left\\lfloor\\frac{(m+1)13}{5}\\right\\rfloor \\mod 7,"
}
] |
https://en.wikipedia.org/wiki?curid=569154
|
56920346
|
Zindler curve
|
A Zindler curve is a simple closed plane curve with the defining property that:
(L) All chords which cut the curve length into halves have the same length.
The simplest examples are circles. The Austrian mathematician Konrad Zindler discovered further examples and gave a method to construct them. Herman Auerbach was the first to use (in 1938) the now established name "Zindler curve".
Auerbach proved that a figure bounded by a Zindler curve and with half the density of water will float in water in any position. This gives a negative answer to the two-dimensional version of Stanislaw Ulam's problem on floating bodies (Problem 19 of the Scottish Book), which asks if the disk is the only figure of uniform density which will float in water in any position (the original problem asks if the sphere is the only solid having this property in three dimensions).
Zindler curves are also connected to the problem of establishing if it is possible to determine the direction of the motion of a bicycle given only the closed rear and front tracks.
Equivalent definitions.
An equivalent definition of a Zindler curve is the following one:
(A) All chords which cut the "area" into halves have the same length.
These chords are the same ones which cut the curve length into halves.
Another definition is based on Zindler carousels of two chairs. Consider two smooth curves in R² given by λ1 and λ2. Suppose that the distance between the points λ1(t) and λ2(t) is constant for each "t" ∈ R and that the curve defined by the midpoints between λ1 and λ2 is such that its tangent vector at the point "t" is parallel to the segment from λ1("t") to λ2("t") for each "t". If the curves λ1 and λ2 parametrize the same smooth closed curve, then this curve is a Zindler curve.
Examples.
Consider a fixed real parameter formula_0. For formula_1, any of the curves
formula_2
is a Zindler curve. For formula_3 the curve is even "convex". The diagram shows curves for formula_4 (blue), formula_5 (green) and formula_6 (red). For formula_7 the curves are related to a curve of constant width.
"Proof of (L):" The derivative of the parametric equation is
formula_8 and
formula_9
formula_10 is formula_11-periodic.
Hence for any formula_12 the following equation holds
formula_13
which is half the length of the entire curve.
The desired chords, which divide the curve into halves, are bounded by the points formula_14 for any formula_15. The length of such a chord is formula_16 hence independent of formula_12. ∎
For formula_17 the desired chords meet the curve in an additional point (see Figure 3). Hence only for formula_18 are the sample curves Zindler curves.
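Property (L) for these sample curves can also be checked numerically. The following Python sketch is an illustration, not part of Zindler's construction: it evaluates formula_2 for formula_4 and confirms that every chord joining formula_14 has the same length 2a and bisects the arc length.
```python
import numpy as np

a = 8.0                                   # parameter of the sample curve (a > 4)

def z(u):
    """Parametrization z(u) = e^(2iu) + 2e^(-iu) + a e^(iu/2), u in [0, 4*pi]."""
    return np.exp(2j * u) + 2 * np.exp(-1j * u) + a * np.exp(1j * u / 2)

def speed(u):
    """|z'(u)| = sqrt(8 + a^2/4 - 8*cos(3u)), as computed in the proof above."""
    return np.sqrt(8 + a**2 / 4 - 8 * np.cos(3 * u))

def arc_length(u_start, u_end, n=100_000):
    """Arc length of the curve between parameters u_start and u_end (midpoint rule)."""
    u = np.linspace(u_start, u_end, n)
    mid = 0.5 * (u[:-1] + u[1:])
    return np.sum(speed(mid) * np.diff(u))

total = arc_length(0.0, 4 * np.pi)
for u0 in np.linspace(0.0, 2 * np.pi, 5):
    chord = abs(z(u0 + 2 * np.pi) - z(u0))        # should equal 2a for every u0
    half = arc_length(u0, u0 + 2 * np.pi)         # should equal total / 2
    print(f"u0={u0:5.2f}  chord={chord:7.4f}  half-arc/total={half / total:6.4f}")
```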
Generalizations.
The property defining Zindler curves can also be generalized to chords that cut the perimeter of the curve in a fixed ratio α different from 1/2. In this case, one may consider a chord system (a continuous selection of chords) instead of all chords of the curve. These curves are known as α-Zindler curves, and are Zindler curves for α = 1/2. This generalization of Zindler curves has the following property related to the floating problem: let γ be a closed smooth curve with a chord system cutting the perimeter in a fixed ratio α. If all the chords of this chord system are in the interior of the region bounded by γ, then γ is an α-Zindler curve if and only if the region bounded by γ is a solid of uniform density ρ that floats in any orientation.
|
[
{
"math_id": 0,
"text": "a"
},
{
"math_id": 1,
"text": "a>4"
},
{
"math_id": 2,
"text": "z(u)= x(u) +iy(u)=e^{2iu}+2e^{-iu} +ae^{iu/2}\\; , \\ u\\in [0,4\\pi]\\; , "
},
{
"math_id": 3,
"text": "a\\ge 24"
},
{
"math_id": 4,
"text": "a=8"
},
{
"math_id": 5,
"text": "a=16"
},
{
"math_id": 6,
"text": " a=24"
},
{
"math_id": 7,
"text": "a\\ge 8"
},
{
"math_id": 8,
"text": " z'(u)=i\\Big(2e^{2iu}-2e^{-iu} +\\frac{a}{2}e^{iu/2}\\Big) \\;"
},
{
"math_id": 9,
"text": " |z'(u)|^2=z'(u)\\overline{z'(u)}= \\cdots =8+\\frac{a^2}{4}-8\\cos 3u \\; ."
},
{
"math_id": 10,
"text": "|z'(u)| "
},
{
"math_id": 11,
"text": "2\\pi"
},
{
"math_id": 12,
"text": "u_0"
},
{
"math_id": 13,
"text": " \\int_{u_0}^{u_0+2\\pi} |z'(u)| \\, du = \\int_0^{2\\pi} |z'(u)|\\,du \\; ,"
},
{
"math_id": 14,
"text": "\\;z(u_0)\\; ,\\;z(u_0+2\\pi)\\;"
},
{
"math_id": 15,
"text": " u_0 \\in [0,4\\pi]"
},
{
"math_id": 16,
"text": "|z(u_0+2\\pi)-z(u_0)|= \\cdots =|2ae^{iu_0/2}|=2a \\; ,"
},
{
"math_id": 17,
"text": "a=4"
},
{
"math_id": 18,
"text": " a>4"
}
] |
https://en.wikipedia.org/wiki?curid=56920346
|
5693122
|
Pseudorandom graph
|
In graph theory, a graph is said to be a pseudorandom graph if it obeys certain properties that random graphs obey with high probability. There is no concrete definition of graph pseudorandomness, but there are many reasonable characterizations of pseudorandomness one can consider.
Pseudorandom properties were first formally considered by Andrew Thomason in 1987. He defined a condition called "jumbledness": a graph formula_0 is said to be formula_1-"jumbled" for real formula_2 and formula_3 with formula_4 if
formula_5
for every subset formula_6 of the vertex set formula_7, where formula_8 is the number of edges among formula_6 (equivalently, the number of edges in the subgraph induced by the vertex set formula_6). It can be shown that the Erdős–Rényi random graph formula_9 is almost surely formula_10-jumbled. However, graphs with less uniformly distributed edges, for example a graph on formula_11 vertices consisting of an formula_12-vertex complete graph and formula_12 completely independent vertices, are not formula_1-jumbled for any small formula_3, making jumbledness a reasonable quantifier for "random-like" properties of a graph's edge distribution.
Connection to local conditions.
Thomason showed that the "jumbled" condition is implied by a simpler-to-check condition, depending only on the codegrees of pairs of vertices rather than on every subset of the vertex set of the graph. Letting formula_13 be the number of common neighbors of two vertices formula_14 and formula_15, Thomason showed that, given a graph formula_16 on formula_12 vertices with minimum degree formula_17, if formula_18 for every formula_14 and formula_15, then formula_16 is formula_19-jumbled. This result shows how to check the jumbledness condition algorithmically in polynomial time in the number of vertices, and can be used to show pseudorandomness of specific graphs.
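As an illustration of this algorithmic check (not part of Thomason's paper), the following Python sketch computes all codegrees of a sampled Erdős–Rényi graph from its adjacency matrix and reports the smallest value ℓ for which the codegree bound formula_18 holds for every pair of vertices, from which the jumbledness parameter in formula_19 can be read off whenever the minimum-degree hypothesis is met.
```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 0.3

# Sample a symmetric adjacency matrix of G(n, p) with no self-loops.
upper = np.triu(rng.random((n, n)) < p, k=1)
A = (upper | upper.T).astype(int)

degrees = A.sum(axis=1)
codeg = A @ A                        # (A^2)[u, v] = number of common neighbors of u and v
np.fill_diagonal(codeg, 0)           # ignore codegrees of a vertex with itself

ell = codeg.max() - n * p**2         # smallest l with codeg(u, v) <= n*p^2 + l for all u, v
print("minimum degree      :", degrees.min(), "(compare with n*p =", n * p, ")")
print("maximum codegree    :", codeg.max(), "(n*p^2 =", n * p**2, ")")
print("implied jumbledness :", np.sqrt((p + max(ell, 0)) * n))
```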
Chung–Graham–Wilson theorem.
In the spirit of the conditions considered by Thomason and their alternately global and local nature, several weaker conditions were considered by Chung, Graham, and Wilson in 1989: a graph formula_16 on formula_12 vertices with edge density formula_2 and some formula_20 can satisfy each of these conditions if:
Discrepancy: for any subsets formula_21 of the vertex set formula_22, the number of edges between formula_23 and formula_24 is within formula_25 of formula_26.
Discrepancy on individual sets: for any subset formula_23 of formula_22, the number of edges within formula_23 is within formula_25 of formula_27.
Subgraph counting: for every graph formula_28, the number of labeled copies of formula_28 in formula_16 is within formula_29 of formula_30.
4-cycle counting: the number of labeled copies of the cycle of length formula_31 in formula_16 is within formula_32 of formula_33.
Codegree: letting formula_13 denote the number of common neighbors of vertices formula_14 and formula_15,
formula_34
Eigenvalue bounding: if formula_35 are the eigenvalues of the adjacency matrix of formula_16, then formula_36 is within formula_37 of formula_38, and formula_39.
These conditions may all be stated in terms of a sequence of graphs formula_40 where formula_41 is on formula_12 vertices with formula_42 edges. For example, the subgraph counting condition becomes that the number of copies of any graph formula_28 in formula_41 is formula_43 as formula_44, and the discrepancy condition becomes that formula_45, using little-o notation.
A pivotal result about graph pseudorandomness is the Chung–Graham–Wilson theorem, which states that many of the above conditions are equivalent, up to polynomial changes in formula_46. A sequence of graphs which satisfies those conditions is called quasi-random. It is considered particularly surprising that the weak condition of having the "correct" 4-cycle density implies the other seemingly much stronger pseudorandomness conditions. Graphs such as the 4-cycle, the density of which in a sequence of graphs is sufficient to test the quasi-randomness of the sequence, are known as forcing graphs.
Some implications in the Chung–Graham–Wilson theorem are clear by the definitions of the conditions: the discrepancy on individual sets condition is simply the special case of the discrepancy condition for formula_47, and 4-cycle counting is a special case of subgraph counting. In addition, the graph counting lemma, a straightforward generalization of the triangle counting lemma, implies that the discrepancy condition implies subgraph counting.
The fact that 4-cycle counting implies the codegree condition can be proven by a technique similar to the second-moment method. Firstly, the sum of codegrees can be lower-bounded:
formula_48
Given the 4-cycle count, the sum of squares of codegrees can be upper-bounded:
formula_49
Therefore, the Cauchy–Schwarz inequality gives
formula_50
which can be expanded out using our bounds on the first and second moments of formula_51 to give the desired bound. A proof that the codegree condition implies the discrepancy condition can be done by a similar, albeit trickier, computation involving the Cauchy–Schwarz inequality.
The eigenvalue condition and the 4-cycle condition can be related by noting that the number of labeled 4-cycles in formula_16 is, up to a formula_52 fraction stemming from degenerate 4-cycles, equal to formula_53, where formula_54 is the adjacency matrix of formula_16. The two conditions can then be shown to be equivalent by invocation of the Courant–Fischer theorem.
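This relation can be checked numerically; the sketch below is illustrative only. It compares formula_53 with formula_33 for a sampled Erdős–Rényi graph and evaluates the same quantity through the eigenvalues, since the trace of a matrix power equals the corresponding power sum of its eigenvalues.
```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 400, 0.3

upper = np.triu(rng.random((n, n)) < p, k=1)
A = (upper | upper.T).astype(float)         # adjacency matrix of a sampled G(n, p)

# tr(A^4) counts closed walks of length 4; up to the O(n^3) degenerate walks
# that revisit a vertex, it equals the number of labeled 4-cycles.
closed_walks_4 = np.trace(np.linalg.matrix_power(A, 4))
eigs = np.linalg.eigvalsh(A)

print("tr(A^4) / n^4        :", closed_walks_4 / n**4)
print("p^4                  :", p**4)
print("sum lambda_i^4 / n^4 :", np.sum(eigs**4) / n**4)   # identical to tr(A^4)/n^4
```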
Connections to graph regularity.
The concept of graphs that act like random graphs connects strongly to the concept of graph regularity used in the Szemerédi regularity lemma. For formula_20, a pair of vertex sets formula_21 is called formula_46-regular, if for all subsets formula_55 satisfying formula_56, it holds that
formula_57
where formula_58 denotes the "edge density" between formula_23 and formula_24: the number of edges between formula_23 and formula_24 divided by formula_59. This condition implies a bipartite analogue of the discrepancy condition, and essentially states that the edges between formula_60 and formula_61 behave in a "random-like" fashion. In addition, it was shown by Miklós Simonovits and Vera T. Sós in 1991 that a graph satisfies the above weak pseudorandomness conditions used in the Chung–Graham–Wilson theorem if and only if it possesses a Szemerédi partition where nearly all densities are close to the edge density of the whole graph.
Sparse pseudorandomness.
Chung–Graham–Wilson theorem analogues.
The Chung–Graham–Wilson theorem, specifically the implication of subgraph counting from discrepancy, does not hold for sequences of graphs with edge density approaching formula_62, for example the common case of formula_63-regular graphs on formula_12 vertices as formula_44. The following sparse analogues of the discrepancy and eigenvalue bounding conditions are commonly considered:
Sparse discrepancy: for any subsets formula_21 of the vertex set formula_22, the number of edges between formula_23 and formula_24 is within formula_64 of formula_65.
Sparse eigenvalue bounding: if formula_35 are the eigenvalues of the adjacency matrix of formula_16, then formula_66.
It is generally true that this eigenvalue condition implies the corresponding discrepancy condition, but the reverse is not true: the disjoint union of a random large formula_63-regular graph and a formula_67-vertex complete graph has two eigenvalues of exactly formula_63 but is likely to satisfy the discrepancy property. However, as proven by David Conlon and Yufei Zhao in 2017, slight variants of the discrepancy and eigenvalue conditions for formula_63-regular Cayley graphs are equivalent up to linear scaling in formula_46. One direction of this follows from the expander mixing lemma, while the other requires the assumption that the graph is a Cayley graph and uses the Grothendieck inequality.
Consequences of eigenvalue bounding.
A formula_63-regular graph formula_16 on formula_12 vertices is called an "formula_68-graph" if, letting the eigenvalues of the adjacency matrix of formula_16 be formula_69, formula_70. The Alon-Boppana bound gives that formula_71 (where the formula_52 term is as formula_44), and Joel Friedman proved that a random formula_63-regular graph on formula_12 vertices is formula_68 for formula_72. In this sense, how much formula_73 exceeds formula_74 is a general measure of the non-randomness of a graph. There are graphs with formula_75, which are termed Ramanujan graphs. They have been studied extensively and there are a number of open problems relating to their existence and commonness.
Given an formula_68 graph for small formula_73, many standard graph-theoretic quantities can be bounded to near what one would expect from a random graph. In particular, the size of formula_73 has a direct effect on subset edge density discrepancies via the expander mixing lemma. Other examples are as follows, letting formula_16 be an formula_68 graph:
Connections to the Green–Tao theorem.
Pseudorandom graphs factor prominently in the proof of the Green–Tao theorem. The theorem is proven by transferring Szemerédi's theorem, the statement that a set of positive integers with positive natural density contains arbitrarily long arithmetic progressions, to the sparse setting (as the primes have natural density formula_62 in the integers). The transference to sparse sets requires that the sets behave pseudorandomly, in the sense that corresponding graphs and hypergraphs have the correct subgraph densities for some fixed set of small (hyper)subgraphs. It is then shown that a suitable superset of the prime numbers, called pseudoprimes, in which the primes are dense obeys these pseudorandomness conditions, completing the proof.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "G=(V,E)"
},
{
"math_id": 1,
"text": "(p,\\alpha)"
},
{
"math_id": 2,
"text": "p"
},
{
"math_id": 3,
"text": "\\alpha"
},
{
"math_id": 4,
"text": "0<p<1\\leq \\alpha"
},
{
"math_id": 5,
"text": "\\left|e(U)-p\\binom{|U|}{2}\\right|\\leq \\alpha|U|"
},
{
"math_id": 6,
"text": "U"
},
{
"math_id": 7,
"text": "V"
},
{
"math_id": 8,
"text": "e(U)"
},
{
"math_id": 9,
"text": "G(n,p)"
},
{
"math_id": 10,
"text": "(p,O(\\sqrt{np}))"
},
{
"math_id": 11,
"text": "2n"
},
{
"math_id": 12,
"text": "n"
},
{
"math_id": 13,
"text": "\\operatorname{codeg}(u,v)"
},
{
"math_id": 14,
"text": "u"
},
{
"math_id": 15,
"text": "v"
},
{
"math_id": 16,
"text": "G"
},
{
"math_id": 17,
"text": "np"
},
{
"math_id": 18,
"text": "\\operatorname{codeg}(u,v)\\leq np^2+\\ell"
},
{
"math_id": 19,
"text": " \\left( p,\\sqrt{(p+\\ell)n}\\,\\right) "
},
{
"math_id": 20,
"text": "\\varepsilon>0"
},
{
"math_id": 21,
"text": "X,Y"
},
{
"math_id": 22,
"text": "V=V(G)"
},
{
"math_id": 23,
"text": "X"
},
{
"math_id": 24,
"text": "Y"
},
{
"math_id": 25,
"text": "\\varepsilon n^2"
},
{
"math_id": 26,
"text": "p|X||Y|"
},
{
"math_id": 27,
"text": "p\\binom{|X|}{2}"
},
{
"math_id": 28,
"text": "H"
},
{
"math_id": 29,
"text": "\\varepsilon n^{v(H)}"
},
{
"math_id": 30,
"text": "p^{e(H)}n^{v(H)}"
},
{
"math_id": 31,
"text": "4"
},
{
"math_id": 32,
"text": "\\varepsilon n^4"
},
{
"math_id": 33,
"text": "p^4n^4"
},
{
"math_id": 34,
"text": "\\sum_{u,v\\in V}\\big|\\operatorname{codeg}(u,v)-p^2 n\\big|\\leq \\varepsilon n^3."
},
{
"math_id": 35,
"text": "\\lambda_1\\geq \\lambda_2\\geq \\cdots \\geq \\lambda_n"
},
{
"math_id": 36,
"text": "\\lambda_1"
},
{
"math_id": 37,
"text": "\\varepsilon n"
},
{
"math_id": 38,
"text": "pn"
},
{
"math_id": 39,
"text": "\\max\\left(\\left|\\lambda_2\\right|,\\left|\\lambda_n\\right|\\right)\\leq \\varepsilon n"
},
{
"math_id": 40,
"text": "\\{G_n\\}"
},
{
"math_id": 41,
"text": "G_n"
},
{
"math_id": 42,
"text": "(p+o(1))\\binom{n}{2}"
},
{
"math_id": 43,
"text": "\\left(p^{e(H)}+o(1)\\right)e^{v(H)}"
},
{
"math_id": 44,
"text": "n\\to\\infty"
},
{
"math_id": 45,
"text": "\\left|e(X,Y)-p|X||Y|\\right|=o(n^2)"
},
{
"math_id": 46,
"text": "\\varepsilon"
},
{
"math_id": 47,
"text": "Y=X"
},
{
"math_id": 48,
"text": "\\sum_{u,v\\in G} \\operatorname{codeg}(u,v)=\\sum_{x\\in G} \\deg(x)^2\\ge n\\left(\\frac{2e(G)}{n}\\right)^2=\\left(p^2+o(1)\\right)n^3."
},
{
"math_id": 49,
"text": "\\sum_{u,v} \\operatorname{codeg}(u,v)^2=\\text{Number of labeled copies of }C_4 + o(n^4)\\le \\left(p^4+o(1)\\right)n^4."
},
{
"math_id": 50,
"text": "\\sum_{u,v\\in G}|\\operatorname{codeg}(u,v)-p^2n|\\le n\\left(\\sum_{u,v\\in G} \\left(\\operatorname{codeg}(u,v)-p^2n\\right)^2\\right)^{1/2},"
},
{
"math_id": 51,
"text": "\\operatorname{codeg}"
},
{
"math_id": 52,
"text": "o(1)"
},
{
"math_id": 53,
"text": "\\operatorname{tr}\\left(A_G^4\\right)"
},
{
"math_id": 54,
"text": "A_G"
},
{
"math_id": 55,
"text": "A\\subset X,B\\subset Y"
},
{
"math_id": 56,
"text": "|A|\\geq\\varepsilon|X|,|B|\\geq\\varepsilon|Y|"
},
{
"math_id": 57,
"text": "\\left| d(X,Y) - d(A,B) \\right| \\le \\varepsilon,"
},
{
"math_id": 58,
"text": "d(X,Y)"
},
{
"math_id": 59,
"text": "|X||Y|"
},
{
"math_id": 60,
"text": "A"
},
{
"math_id": 61,
"text": "B"
},
{
"math_id": 62,
"text": "0"
},
{
"math_id": 63,
"text": "d"
},
{
"math_id": 64,
"text": "\\varepsilon dn"
},
{
"math_id": 65,
"text": "\\frac{d}{n}|X||Y|"
},
{
"math_id": 66,
"text": "\\max\\left(\\left|\\lambda_2\\right|,\\left|\\lambda_n\\right|\\right)\\leq \\varepsilon d"
},
{
"math_id": 67,
"text": "d+1"
},
{
"math_id": 68,
"text": "(n,d,\\lambda)"
},
{
"math_id": 69,
"text": "d=\\lambda_1\\geq \\lambda_2\\geq \\cdots \\geq \\lambda_n"
},
{
"math_id": 70,
"text": "\\max\\left(\\left|\\lambda_2\\right|,\\left|\\lambda_n\\right|\\right)\\leq \\lambda"
},
{
"math_id": 71,
"text": "\\max\\left(\\left|\\lambda_2\\right|,\\left|\\lambda_n\\right|\\right)\\geq 2\\sqrt{d-1}-o(1)"
},
{
"math_id": 72,
"text": "\\lambda=2\\sqrt{d-1}+o(1)"
},
{
"math_id": 73,
"text": "\\lambda"
},
{
"math_id": 74,
"text": "2\\sqrt{d-1}"
},
{
"math_id": 75,
"text": "\\lambda\\leq 2\\sqrt{d-1}"
},
{
"math_id": 76,
"text": "d\\leq \\frac{n}{2}"
},
{
"math_id": 77,
"text": "\\kappa(G)"
},
{
"math_id": 78,
"text": "\\kappa(G)\\geq d-\\frac{36\\lambda^2}{d}."
},
{
"math_id": 79,
"text": "\\lambda\\leq d-2"
},
{
"math_id": 80,
"text": "\\frac{n(d+\\lambda)}{4}"
},
{
"math_id": 81,
"text": "U\\subset V(G)"
},
{
"math_id": 82,
"text": "\\frac{n}{2(d-\\lambda)}\\ln\\left(\\frac{|U|(d-\\lambda)}{n(\\lambda+1)}+1\\right)."
},
{
"math_id": 83,
"text": "\\frac{6(d-\\lambda)}{\\ln\\left(\\frac{d+1}{\\lambda+1}\\right)}."
}
] |
https://en.wikipedia.org/wiki?curid=5693122
|
5693885
|
Time-weighted average price
|
Average price of a security over a specified time
In finance, time-weighted average price (TWAP) is the average price of a security over a specified time.
TWAP is also sometimes used to describe a TWAP card, that is, a strategy that will attempt to execute an order and achieve the TWAP or better. A TWAP strategy underpins more sophisticated ways of buying and selling than simply executing orders en masse: for example, dumping a huge number of shares in one block is likely to affect market perceptions, with an adverse effect on the price.
Use.
A TWAP strategy is often used to minimize a large order's impact on the market and result in price improvement. High-volume traders use TWAP to execute their orders over a specific time, trading so as to keep the price close to the true market price. TWAP orders are a strategy of executing trades evenly over a specified time period. Volume-weighted average price (VWAP) balances execution with volume. Typically, a VWAP trade will buy or sell 40% of a trade in the first half of the day and then the other 60% in the second half of the day. A TWAP trade would most likely execute an even 50/50 volume in the first and second half of the day.
Formula.
TWAP is calculated using the following formula:
formula_0
where:
formula_1 is Time Weighted Average Price;
formula_2 is the price of the security at the time of measurement formula_3;
formula_4 is the change in time since the previous price measurement formula_3;
formula_3 is each individual measurement that takes place over the defined period of time.
An increased measurement period formula_5 results in a less up-to-date average price.
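A minimal Python sketch of this calculation is shown below; the prices and holding intervals are made-up illustrative values.
```python
def twap(prices, intervals):
    """Time-weighted average price: sum(P_j * T_j) / sum(T_j).

    prices    -- price of the security at each measurement
    intervals -- elapsed time since the previous measurement (consistent units)
    """
    if len(prices) != len(intervals):
        raise ValueError("need one time interval per price measurement")
    weighted = sum(p * t for p, t in zip(prices, intervals))
    return weighted / sum(intervals)

# Illustrative data: four quotes held for 15, 30, 30 and 45 minutes respectively.
print(twap([101.0, 102.5, 101.5, 103.0], [15, 30, 30, 45]))   # -> 102.25
```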
|
[
{
"math_id": 0,
"text": "P_{\\mathrm{TWAP}} = \\frac{\\sum_{j}{P_j \\cdot T_j}}{\\sum_j{T_j}} \\,"
},
{
"math_id": 1,
"text": "P_{\\mathrm{TWAP}}"
},
{
"math_id": 2,
"text": "P_j"
},
{
"math_id": 3,
"text": "j"
},
{
"math_id": 4,
"text": "T_j"
},
{
"math_id": 5,
"text": "\\sum_j{T_j}"
}
] |
https://en.wikipedia.org/wiki?curid=5693885
|
56940695
|
Howarth–Dorodnitsyn transformation
|
In fluid dynamics, the Howarth–Dorodnitsyn transformation (or Dorodnitsyn–Howarth transformation) is a density-weighted coordinate transformation, which reduces variable-density flow conservation equations to a simpler form (in most cases, to incompressible form). The transformation was first used by Anatoly Dorodnitsyn in 1942 and later by Leslie Howarth in 1948. The transformation of the formula_0 coordinate (usually taken as the coordinate normal to the predominant flow direction) to formula_1 is given by
formula_2
where formula_3 is the density and formula_4 is the density at infinity. The transformation is extensively used in boundary layer theory and other gas dynamics problems.
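As an illustration only (not taken from the cited works), the following Python sketch evaluates formula_1 by numerically integrating formula_3/formula_4 over formula_0 for a hypothetical density profile.
```python
import numpy as np

rho_inf = 1.2                    # free-stream density [kg/m^3] (illustrative value)

def rho(y):
    """Hypothetical density profile across a heated boundary layer (assumption)."""
    return rho_inf * (0.6 + 0.4 * np.tanh(5.0 * y))

def eta(y_max, n=10_000):
    """eta(y_max) = integral of rho(y)/rho_inf from 0 to y_max (trapezoidal rule)."""
    y = np.linspace(0.0, y_max, n)
    f = rho(y) / rho_inf
    return np.sum(0.5 * (f[:-1] + f[1:]) * np.diff(y))

for y_max in (0.1, 0.5, 1.0, 2.0):
    print(f"y = {y_max:4.1f}  ->  eta = {eta(y_max):.4f}")
```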
Stewartson–Illingworth transformation.
Keith Stewartson and C. R. Illingworth independently introduced, in 1949, a transformation that extends the Howarth–Dorodnitsyn transformation to compressible flows. The transformation reads as
formula_5
formula_2
where formula_6 is the streamwise coordinate, formula_0 is the normal coordinate, formula_7 denotes the sound speed and formula_8 denotes the pressure. For ideal gas, the transformation is defined as
formula_9
formula_2
where formula_10 is the specific heat ratio.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "y"
},
{
"math_id": 1,
"text": "\\eta"
},
{
"math_id": 2,
"text": "\\eta = \\int_0^y \\frac{\\rho}{\\rho_\\infty} \\ dy,"
},
{
"math_id": 3,
"text": "\\rho"
},
{
"math_id": 4,
"text": "\\rho_\\infty"
},
{
"math_id": 5,
"text": "\\xi = \\int_0^x \\frac{c}{c_\\infty}\\frac{p}{p_\\infty} \\ dx,"
},
{
"math_id": 6,
"text": "x"
},
{
"math_id": 7,
"text": "c"
},
{
"math_id": 8,
"text": "p"
},
{
"math_id": 9,
"text": "\\xi = \\int_0^x \\left(\\frac{c}{c_\\infty}\\right)^{(3\\gamma-1)/(\\gamma-1)} \\ dx,"
},
{
"math_id": 10,
"text": "\\gamma"
}
] |
https://en.wikipedia.org/wiki?curid=56940695
|
56944950
|
Metric temporal logic
|
Metric temporal logic (MTL) is a special case of temporal logic. It is an extension of temporal logic in which temporal operators are replaced by time-constrained versions like "until", "next", "since" and "previous" operators. It is a linear-time logic that assumes both the interleaving and fictitious-clock abstractions. It is defined over a point-based weakly-monotonic integer-time semantics.
MTL has been described as a prominent specification formalism for real-time systems. Full MTL over infinite timed words is undecidable.
Syntax.
The full metric temporal logic is defined similarly to linear temporal logic, where a set of non-negative real numbers is added as a subscript to the temporal modal operators U and S. Formally, MTL is built up from a set of propositional variables, the usual logical connectives, and these time-constrained until and since operators.
When the subscript is omitted, it is implicitly equal to formula_0.
Note that the next operator N is not considered to be a part of MTL syntax. It will instead be defined from other operators.
Past and Future.
The past fragment of metric temporal logic, denoted past-MTL, is defined as the restriction of the full metric temporal logic without the until operator. Similarly, the future fragment of metric temporal logic, denoted future-MTL, is defined as the restriction of the full metric temporal logic without the since operator.
Depending on the authors, MTL is defined either as the future fragment, in which case full-MTL is called MTL+Past, or as full-MTL.
In order to avoid ambiguity, this article uses the names full-MTL, past-MTL and future-MTL. When a statement holds for all three logics, the name MTL is simply used.
Model.
Let formula_1 intuitively represent a set of points in time. Let formula_2 be a function which associates a letter with each moment formula_3. A model of an MTL formula is such a function formula_4. Usually, formula_4 is either a timed word or a signal. In those cases, formula_5 is either a discrete subset or an interval containing 0.
Semantics.
Let formula_5 and formula_4 be as above and let formula_6 be some fixed time. We now explain what it means for an MTL formula formula_7 to hold at time formula_8, which is denoted formula_9.
Let formula_10 and formula_11. We first consider the formula formula_12. We say that formula_13 if and only if there exists some time formula_14 such that:
formula_15, and
for each formula_16 such that formula_17, formula_18.
We now consider the formula formula_19 (pronounced "formula_7 since in formula_20, formula_21"). We say that formula_22 if and only if there exists some time formula_23 such that:
formula_15, and
for each formula_16 such that formula_24, formula_18.
The definitions of formula_9 for the values of formula_7 not considered above are similar to the definitions in the LTL case.
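As a hedged illustration of this semantics, and not a description of any standard tool, the following Python sketch evaluates the (strict) until operator over a finite timed word, represented as a list of (time, set of letters) pairs. The subscript set is restricted to a closed interval, and the witness time is taken to range over the strictly later positions of the word; both are simplifying assumptions made for the example.
```python
def holds_until(gamma, t_index, phi, psi, interval):
    """Decide gamma, t |= phi U_I psi over a finite timed word.

    gamma    -- list of (time, letters) pairs, strictly increasing in time
    t_index  -- index of the current time t in gamma
    phi, psi -- predicates on a set of letters
    interval -- pair (lo, hi), read as the closed interval [lo, hi]
    """
    t = gamma[t_index][0]
    lo, hi = interval
    for j in range(t_index + 1, len(gamma)):
        t_prime, letters = gamma[j]
        if lo <= t_prime - t <= hi and psi(letters):
            # psi holds at t'; check that phi holds at every time strictly between t and t'.
            if all(phi(gamma[k][1]) for k in range(t_index + 1, j)):
                return True
    return False

# Illustrative timed word: p holds from time 0.5 to 1.5, q appears at time 2.2.
word = [(0.0, set()), (0.5, {"p"}), (1.0, {"p"}), (1.5, {"p"}), (2.2, {"q"})]
p = lambda letters: "p" in letters
q = lambda letters: "q" in letters

print(holds_until(word, 0, p, q, (2.0, 3.0)))   # True: q at t' = 2.2, p in between
print(holds_until(word, 0, p, q, (0.0, 1.0)))   # False: no q within one time unit
```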
Operators defined from basic MTL operators.
Some formulas are so often used that a new operator is introduced for them. These operators are usually not considered to belong to the definition of MTL, but are syntactic sugar which denote more complex MTL formulas. We first consider operators which also exist in LTL. In this section, we fix MTL formulas formula_25 and a set formula_10.
Operators similar to the ones of LTL.
Release and Back to.
We denote by formula_26 (pronounced "formula_7 release in formula_20, formula_21") the formula formula_27. This formula holds at time formula_8 if either:
The name "release" comes from the LTL case, where this formula simply means that formula_7 should always hold, unless formula_21 releases it.
The past counterpart of release is denoted by formula_30 (pronounced "formula_7 back to in formula_20, formula_21") and is equal to the formula formula_31.
Finally and Eventually.
We denote by formula_32 or formula_33 (pronounced "Finally
in formula_20, formula_7", or "Eventually
in formula_20, formula_7") the formula formula_34. Intuitively, this formula holds at time formula_8
if there is some time formula_28 such
that formula_7 holds.
We denote by formula_35 or formula_36 (pronounced "Globally in formula_20, formula_7") the formula formula_37. Intuitively, this formula holds at time formula_8 if formula_7 holds at every time formula_38.
We denote by formula_39 and formula_40 the formulas similar to formula_35 and formula_32, where formula_41 is replaced by formula_42. Both formulas have intuitively the same meaning, but with the past considered instead of the future.
Next and previous.
This case is slightly different from the previous ones, because the intuitive meaning of the "Next" and "Previously" formulas differs depending on the kind of function formula_4 considered.
We denote by formula_43 or formula_44 (pronounced "Next in formula_20, formula_7") the formula formula_45. Similarly, we denote by formula_46 (pronounced "Previously in formula_20, formula_7") the formula formula_47. The following discussion about the Next operator also holds for the Previously operator, by reversing the past and the future.
When this formula is evaluated over a timed word formula_48, it means that both:
When this formula is evaluated over a signal formula_4, the notion of a next time does not make sense. Instead, "next" means "immediately after". More precisely, formula_49 means that there exists some formula_50 such that, for each formula_51, formula_52.
Other operators.
We now consider operators which are not similar to any standard LTL operators.
Fall and Rise.
We denote by formula_53 (pronounced "rise formula_7") a formula which holds when formula_7 becomes true. More precisely, either formula_7 did not hold in the immediate past and holds at the current time, or it does not hold at the current time and holds in the immediate future. Formally, formula_53 is defined as formula_54.
Over timed words, this formula always holds. Indeed, formula_55 and formula_56 always hold. Thus the formula is equivalent to formula_57, hence it is true.
By symmetry, we denote by formula_58 (pronounced "fall formula_7") a formula which holds when formula_7 becomes false. Thus, it is defined as formula_59.
History and Prophecy.
We now introduce the "prophecy" operator, denoted by formula_60. We denote by formula_61 the formula formula_62. This formula asserts that there exists a first moment in the future such that formula_7 holds, and the time to wait for this first moment belongs to formula_20.
We now consider this formula over timed words and over signals. We consider timed words first. Assume that formula_63, where formula_64 and formula_65 represent either open or closed bounds. Let formula_4 be a timed word and formula_8 a time in its domain of definition.
Over timed words, the formula formula_66 holds if and only if formula_67 also holds. That is, this formula simply asserts that, in the future, until the interval formula_68 is met, formula_7 should not hold. Furthermore, formula_7 should hold sometime in the interval formula_68. Indeed, given any time formula_69 such that formula_70 holds, there exist only finitely many times formula_28 with formula_71 and formula_52. Thus, there necessarily exists a smallest such formula_72.
Let us now consider signals. The equivalence mentioned above does not hold over signals. This is because, using the variables introduced above, there may exist an infinite number of correct values for formula_73, since the domain of definition of a signal is continuous. Thus, the formula formula_61 also ensures that the first interval in which formula_7 holds is closed on the left.
By temporal symmetry, we define the "history" operator, denoted by formula_74. We define formula_75 as formula_76. This formula asserts that there exists a last moment in the past such that formula_7 held, and that the time elapsed since this last moment belongs to formula_20.
Non-strict operator.
The semantics of the until and since operators introduced above do not consider the current time. That is, in order for formula_77 to hold at some time formula_8, neither formula_78 nor formula_79 has to hold at time formula_8. This is not always wanted; for example, in the sentence "there is no bug until the system is turned-off", it may actually be wanted that there is no bug at the current time. Thus, we introduce another until operator, called non-strict until, denoted by formula_80, which considers the current time.
We denote by formula_81 and formula_82 the formulas formula_83 and formula_84, respectively, if formula_85, and the formulas formula_86 and formula_87 otherwise.
For any of the operators formula_88 introduced above, we denote by formula_89 the formula in which non-strict untils and sinces are used. For example, formula_90 is an abbreviation for formula_91.
Strict operators cannot be defined using non-strict operators. That is, there is no formula equivalent to formula_92 which uses only non-strict operators. This formula is defined as formula_93; a non-strict counterpart would require formula_94 to hold at time formula_8, and hence could never hold at any time formula_8.
Example.
We now give examples of MTL formulas; more examples can be found in the articles on fragments of MTL, such as metric interval temporal logic.
The formula formula_95 states that every occurrence of formula_96 is followed, exactly one time unit later, by an occurrence of formula_97.
The formula formula_98 states that no two occurrences of formula_96 are separated by exactly one time unit.
Comparison with LTL.
A standard (untimed) infinite word formula_99 is a function from formula_100 to formula_101. We can consider such a word using the set of times formula_102 and the function formula_103. In this case, for formula_7 an arbitrary LTL formula, formula_104 if and only if formula_105, where formula_7 is considered as an MTL formula with non-strict operators and formula_0 subscripts. In this sense, MTL is an extension of LTL. For this reason, a formula using only non-strict operators with formula_0 subscripts is called an LTL formula.
Algorithmic complexity.
The satisfiability of MITL over signals is EXPSPACE-complete.
Fragments of MTL.
We now consider some fragments of MTL.
MITL.
An important subset of MTL is the Metric Interval Temporal Logic (MITL). This is defined similarly to MTL, with the
restriction that the sets formula_20, used in formula_106 and formula_107, are intervals which are not
singletons, and whose bounds are natural numbers or infinity.
Some other subsets of MITL are defined in the article MITL.
Future Fragments.
Future-MTL was already introduced above. Both over timed words and over signals, it is less expressive than full-MTL.
Event-Clock Temporal Logic.
The fragment "Event-Clock Temporal Logic" of MTL, denoted "EventClockTL" or "ECL", allows only the following operators:
Over signals, ECL is as expressive as MITL and as MITL0. The equivalence between the last two logics is explained in the article MITL0. We sketch the equivalence of those logics with ECL.
If formula_20 is not a singleton and formula_7 is an MITL formula, then formula_61 is itself an MITL formula. If formula_108 is a singleton, then formula_61 is equivalent to formula_109, which is an MITL formula. Reciprocally, for formula_21 an ECL formula and formula_20 an interval whose lower bound is 0, formula_110 is equivalent to the ECL formula formula_111.
The satisfiability of ECL over signals is PSPACE-complete.
Positive normal form.
An MTL formula in positive normal form is defined almost as any MTL formula, with the two following changes:
Any MTL formula is equivalent to a formula in positive normal form. This can be shown by an easy induction on formulas. For example, the formula formula_112 is equivalent to the formula formula_113. Similarly, negations of conjunctions and disjunctions can be handled using De Morgan's laws.
Strictly speaking, the set of formulas in positive normal form is not a fragment of MTL.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "[0,\\infty)"
},
{
"math_id": 1,
"text": "T\\subseteq\\mathbb R_+"
},
{
"math_id": 2,
"text": "\\gamma: T\\to A"
},
{
"math_id": 3,
"text": "t\\in T"
},
{
"math_id": 4,
"text": "\\gamma"
},
{
"math_id": 5,
"text": "T"
},
{
"math_id": 6,
"text": "t\\in\nT"
},
{
"math_id": 7,
"text": "\\phi"
},
{
"math_id": 8,
"text": "t"
},
{
"math_id": 9,
"text": "\\gamma,t\\models\\phi"
},
{
"math_id": 10,
"text": "I\\subseteq\\mathbb R_+"
},
{
"math_id": 11,
"text": "\\phi,\\psi\\in\nMTL"
},
{
"math_id": 12,
"text": "\\phi\\mathcal\nU_I\\psi"
},
{
"math_id": 13,
"text": "\\gamma,t\\models\\phi\\mathcal U_I\\psi"
},
{
"math_id": 14,
"text": "t'\\in I+t"
},
{
"math_id": 15,
"text": "\\gamma,t'\\models\\psi"
},
{
"math_id": 16,
"text": "t''\\in T"
},
{
"math_id": 17,
"text": "t< t''<\n t' "
},
{
"math_id": 18,
"text": "\\gamma,t''\\models\\phi "
},
{
"math_id": 19,
"text": "\\phi\\mathcal\nS_I\\psi"
},
{
"math_id": 20,
"text": "I"
},
{
"math_id": 21,
"text": "\\psi"
},
{
"math_id": 22,
"text": "\\gamma,t\\models\\phi\\mathcal S_I\\psi"
},
{
"math_id": 23,
"text": "t'\\in I-t"
},
{
"math_id": 24,
"text": "t'< t''<\n t "
},
{
"math_id": 25,
"text": "\\phi,\\psi"
},
{
"math_id": 26,
"text": "\\phi\\mathcal R_I\\psi"
},
{
"math_id": 27,
"text": "\\neg\\phi\\mathcal U_I\\neg\\psi"
},
{
"math_id": 28,
"text": "t'\\in t+I"
},
{
"math_id": 29,
"text": "(t,t')\\cap (t+I)"
},
{
"math_id": 30,
"text": "\\phi\\mathcal B_I\\psi"
},
{
"math_id": 31,
"text": "\\neg\\phi\\mathcal S_I\\neg\\psi"
},
{
"math_id": 32,
"text": "\\Diamond_I\\phi"
},
{
"math_id": 33,
"text": "\\mathcal\nF_I\\phi"
},
{
"math_id": 34,
"text": "\\top\\mathcal\nU_I\\phi"
},
{
"math_id": 35,
"text": "\\Box_I\\phi"
},
{
"math_id": 36,
"text": "\\mathcal\nG_I\\phi"
},
{
"math_id": 37,
"text": "\\neg\\Diamond_I\\neg\\phi"
},
{
"math_id": 38,
"text": "t'\\in\n t+I"
},
{
"math_id": 39,
"text": "\\overleftarrow\\Box_I\\phi"
},
{
"math_id": 40,
"text": "\\overleftarrow\\Diamond_I\\phi"
},
{
"math_id": 41,
"text": "\\mathcal U"
},
{
"math_id": 42,
"text": "\\mathcal\n S"
},
{
"math_id": 43,
"text": "\\bigcirc_I\\phi"
},
{
"math_id": 44,
"text": "\\mathcal N_I\\phi"
},
{
"math_id": 45,
"text": "\\bot\\mathcal U_I\\phi"
},
{
"math_id": 46,
"text": "\\ominus_I\\phi"
},
{
"math_id": 47,
"text": "\\bot\\mathcal S_I\\phi"
},
{
"math_id": 48,
"text": "\\gamma:T\\to\n A"
},
{
"math_id": 49,
"text": "\\gamma,t\\models\\circ\\phi"
},
{
"math_id": 50,
"text": "(0,\\epsilon)"
},
{
"math_id": 51,
"text": "t'\\in(t,t+\\epsilon)"
},
{
"math_id": 52,
"text": "\\gamma,t'\\models\\phi"
},
{
"math_id": 53,
"text": "\\uparrow\\phi"
},
{
"math_id": 54,
"text": "(\\phi\\land(\\neg\\phi\\mathcal S\\top))\\lor(\\neg\\phi\\land(\\phi\\mathcal U\\top))"
},
{
"math_id": 55,
"text": "\\phi\\mathcal U\\top"
},
{
"math_id": 56,
"text": "\\neg\\phi\\mathcal S\\top"
},
{
"math_id": 57,
"text": "\\phi\\lor\\neg\\phi"
},
{
"math_id": 58,
"text": "\\downarrow\\phi"
},
{
"math_id": 59,
"text": "(\\neg\\phi\\land(\\phi\\mathcal S\\top))\\land(\\phi\\land(\\neg\\phi\\mathcal U\\top))"
},
{
"math_id": 60,
"text": "\\triangleright"
},
{
"math_id": 61,
"text": "\\triangleright_I\\phi"
},
{
"math_id": 62,
"text": "\\neg\\phi\\mathcal U_I\\phi"
},
{
"math_id": 63,
"text": "I=\\mid a,b\\mid'"
},
{
"math_id": 64,
"text": "\\mid"
},
{
"math_id": 65,
"text": "\\mid'"
},
{
"math_id": 66,
"text": "\\gamma,t\\models\\triangleright_I\\phi"
},
{
"math_id": 67,
"text": "\\gamma,t\\models\\Box_{]0,b[\\setminus I}\\neg\\phi\\land\\Diamond_I\\phi"
},
{
"math_id": 68,
"text": "t+I"
},
{
"math_id": 69,
"text": "t''\\in t+I"
},
{
"math_id": 70,
"text": "\\gamma,t''\\models\\phi"
},
{
"math_id": 71,
"text": "t'<t''"
},
{
"math_id": 72,
"text": "t''"
},
{
"math_id": 73,
"text": "t'"
},
{
"math_id": 74,
"text": "\\triangleleft"
},
{
"math_id": 75,
"text": "\\triangleleft_I\\phi"
},
{
"math_id": 76,
"text": "\\neg\\phi\\mathcal S_I\\phi"
},
{
"math_id": 77,
"text": "\\phi_1\\mathcal{U}\\phi_2"
},
{
"math_id": 78,
"text": "\\phi_1"
},
{
"math_id": 79,
"text": "\\phi_2"
},
{
"math_id": 80,
"text": "\\overline{\\mathcal U}"
},
{
"math_id": 81,
"text": "\\phi_1\\overline{\\mathcal U}_{I}\\phi_2"
},
{
"math_id": 82,
"text": "\\phi_1\\overline{\\mathcal S}_{I}\\phi_2"
},
{
"math_id": 83,
"text": "\\phi_2\\lor(\\phi_1\\land (\\phi_1\\mathcal U_{I}\\phi_2))"
},
{
"math_id": 84,
"text": "\\phi_2\\lor(\\phi_1\\land (\\phi_1\\mathcal S_{I}\\phi_2))"
},
{
"math_id": 85,
"text": "0\\in I"
},
{
"math_id": 86,
"text": "\\phi_1\\land(\\phi_1\\mathcal U_{I}\\phi_2)"
},
{
"math_id": 87,
"text": "\\phi_1\\land(\\phi_1\\mathcal S_{I}\\phi_2)"
},
{
"math_id": 88,
"text": "\\mathcal O"
},
{
"math_id": 89,
"text": "\\overline{\\mathcal O}"
},
{
"math_id": 90,
"text": "\\overline\\Diamond p"
},
{
"math_id": 91,
"text": "\\top\\overline{\\mathcal U}p"
},
{
"math_id": 92,
"text": "\\bigcirc_I p"
},
{
"math_id": 93,
"text": "\\bot\\mathcal U_I p"
},
{
"math_id": 94,
"text": "\\bot"
},
{
"math_id": 95,
"text": "\\Box(p\\implies\\Diamond_{\\{1\\}}q)"
},
{
"math_id": 96,
"text": "p"
},
{
"math_id": 97,
"text": "q"
},
{
"math_id": 98,
"text": "\\Box(p\\implies\\neg\\Diamond_{\\{1\\}}p)"
},
{
"math_id": 99,
"text": "w=a_0,a_1,\\dots,"
},
{
"math_id": 100,
"text": "\\mathbb N"
},
{
"math_id": 101,
"text": "A"
},
{
"math_id": 102,
"text": "T=\\mathbb N"
},
{
"math_id": 103,
"text": "\\gamma(i)=a_i"
},
{
"math_id": 104,
"text": "w,i\\models\\phi"
},
{
"math_id": 105,
"text": "\\gamma,i\\models\\phi"
},
{
"math_id": 106,
"text": "\\mathcal\n U"
},
{
"math_id": 107,
"text": "\\mathcal S"
},
{
"math_id": 108,
"text": "I=\\{i\\}"
},
{
"math_id": 109,
"text": "\\Box_{]0,i[}\\neg\\phi\\land\\Diamond_{]0,i]}\\phi"
},
{
"math_id": 110,
"text": "\\Box_I\\psi"
},
{
"math_id": 111,
"text": "\\neg\\triangleright_I\\neg\\psi"
},
{
"math_id": 112,
"text": "\\neg(\\phi\\mathcal U_{S}\\psi)"
},
{
"math_id": 113,
"text": "(\\neg\\phi)\\mathcal R_{S}(\\neg\\psi)"
}
] |
https://en.wikipedia.org/wiki?curid=56944950
|
569480
|
Receptor (biochemistry)
|
Protein molecule receiving signals for a cell
In biochemistry and pharmacology, receptors are chemical structures, composed of protein, that receive and transduce signals that may be integrated into biological systems. These signals are typically chemical messengers which bind to a receptor and produce physiological responses such as change in the electrical activity of a cell. For example, GABA, an inhibitory neurotransmitter, inhibits electrical activity of neurons by binding to GABAA receptors. There are three main ways the action of the receptor can be classified: relay of signal, amplification, or integration. Relaying sends the signal onward, amplification increases the effect of a single ligand, and integration allows the signal to be incorporated into another biochemical pathway.
Receptor proteins can be classified by their location. Cell surface receptors, also known as transmembrane receptors, include ligand-gated ion channels, G protein-coupled receptors, and enzyme-linked hormone receptors. Intracellular receptors are those found inside the cell, and include cytoplasmic receptors and nuclear receptors. A molecule that binds to a receptor is called a ligand and can be a protein, peptide (short protein), or another small molecule, such as a neurotransmitter, hormone, pharmaceutical drug, toxin, calcium ion or parts of the outside of a virus or microbe. An endogenously produced substance that binds to a particular receptor is referred to as its endogenous ligand. For example, the endogenous ligand for the nicotinic acetylcholine receptor is acetylcholine, but the receptor can also be activated by nicotine and blocked by curare. Receptors of a particular type are linked to specific cellular biochemical pathways that correspond to the signal. While numerous receptors are found in most cells, each receptor will only bind with ligands of a particular structure. This has been compared to the way locks accept only specifically shaped keys. When a ligand binds to a corresponding receptor, it activates or inhibits the receptor's associated biochemical pathway, which may also be highly specialised.
Receptor proteins can also be classified by the properties of their ligands. Such classifications include chemoreceptors, mechanoreceptors, gravitropic receptors, photoreceptors, magnetoreceptors and gasoreceptors.
Structure.
The structures of receptors are very diverse and include the following major categories, among others:
Membrane receptors may be isolated from cell membranes by complex extraction procedures using solvents, detergents, and/or affinity purification.
The structures and actions of receptors may be studied by using biophysical methods such as X-ray crystallography, NMR, circular dichroism, and dual polarisation interferometry. Computer simulations of the dynamic behavior of receptors have been used to gain understanding of their mechanisms of action.
Binding and activation.
Ligand binding is an equilibrium process. Ligands bind to receptors and dissociate from them according to the law of mass action in the following equation, for a ligand L and receptor, R. The brackets around chemical species denote their concentrations.
formula_0
One measure of how well a molecule fits a receptor is its binding affinity, which is inversely related to the dissociation constant "K""d". A good fit corresponds with high affinity and low "K""d". The final biological response (e.g. second messenger cascade, muscle contraction) is only achieved after a significant number of receptors are activated.
Affinity is a measure of the tendency of a ligand to bind to its receptor. Efficacy is a measure of the ability of the bound ligand to activate its receptor.
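From the mass-action equilibrium above, the fraction of receptors occupied by ligand is [L]/([L] + "K""d"). The short Python sketch below illustrates this relationship; the dissociation constant and ligand concentrations are made-up values chosen only for the example.

    def fractional_occupancy(ligand_conc, kd):
        """Equilibrium fraction of bound receptors, [LR]/([R] + [LR]) = [L]/([L] + Kd)."""
        return ligand_conc / (ligand_conc + kd)

    kd = 10e-9  # illustrative dissociation constant of 10 nM (not a measured value)
    for conc in (1e-9, 10e-9, 100e-9, 1e-6):
        print(f"[L] = {conc:.0e} M -> occupancy = {fractional_occupancy(conc, kd):.2f}")

At a ligand concentration equal to "K""d", exactly half of the receptors are occupied, which is one common interpretation of the dissociation constant.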
Agonists versus antagonists.
Not every ligand that binds to a receptor also activates that receptor. The following classes of ligands exist:
Note that the idea of receptor agonism and antagonism only refers to the interaction between receptors and ligands and not to their biological effects.
Constitutive activity.
A receptor which is capable of producing a biological response in the absence of a bound ligand is said to display "constitutive activity". The constitutive activity of a receptor may be blocked by an inverse agonist. The anti-obesity drugs rimonabant and taranabant are inverse agonists at the cannabinoid CB1 receptor and though they produced significant weight loss, both were withdrawn owing to a high incidence of depression and anxiety, which are believed to relate to the inhibition of the constitutive activity of the cannabinoid receptor.
The GABAA receptor has constitutive activity and conducts some basal current in the absence of an agonist. This allows beta carboline to act as an inverse agonist and reduce the current "below" basal levels.
Mutations in receptors that result in increased constitutive activity underlie some inherited diseases, such as precocious puberty (due to mutations in luteinizing hormone receptors) and hyperthyroidism (due to mutations in thyroid-stimulating hormone receptors).
Theories of drug-receptor interaction.
Occupation.
Early forms of the receptor theory of pharmacology stated that a drug's effect is directly proportional to the number of receptors that are occupied. Furthermore, a drug effect ceases as a drug-receptor complex dissociates.
Ariëns & Stephenson introduced the terms "affinity" & "efficacy" to describe the action of ligands bound to receptors.
Rate.
In contrast to the accepted "Occupation Theory", Rate Theory proposes that the activation of receptors is directly proportional to the total number of encounters of a drug with its receptors per unit time. Pharmacological activity is directly proportional to the rates of dissociation and association, not the number of receptors occupied.
Induced-fit.
As a drug approaches a receptor, the receptor alters the conformation of its binding site to produce the drug-receptor complex.
Spare Receptors.
In some receptor systems (e.g. acetylcholine at the neuromuscular junction in smooth muscle), agonists are able to elicit maximal response at very low levels of receptor occupancy (<1%). Thus, that system has spare receptors or a receptor reserve. This arrangement produces an economy of neurotransmitter production and release.
Receptor regulation.
Cells can increase (upregulate) or decrease (downregulate) the number of receptors to a given hormone or neurotransmitter to alter their sensitivity to different molecules. This is a locally acting feedback mechanism.
Examples and ligands.
The ligands for receptors are as diverse as their receptors. GPCRs (7TMs) are a particularly vast family, with at least 810 members. There are also LGICs for at least a dozen endogenous ligands, and many more receptors possible through different subunit compositions. Some common examples of ligands and receptors include:
Ion channels and G protein coupled receptors.
Some example ionotropic (LGIC) and metabotropic (specifically, GPCRs) receptors are shown in the table below. The chief neurotransmitters are glutamate and GABA; other neurotransmitters are neuromodulatory. This list is by no means exhaustive.
Enzyme linked receptors.
Enzyme linked receptors include receptor tyrosine kinases (RTKs), serine/threonine-specific protein kinases (as in bone morphogenetic protein receptors), and guanylate cyclases (as in the atrial natriuretic factor receptor). Of the RTKs, 20 classes have been identified, with 58 different RTKs as members. Some examples are shown below:
Intracellular Receptors.
Receptors may be classed based on their mechanism or on their position in the cell. Four examples of intracellular LGICs are shown below:
Role in health and disease.
In genetic disorders.
Many genetic disorders involve hereditary defects in receptor genes. Often, it is hard to determine whether the receptor is nonfunctional or the hormone is produced at a decreased level; this gives rise to the "pseudo-hypo-" group of endocrine disorders, where there appears to be a decreased hormonal level while in fact it is the receptor that is not responding sufficiently to the hormone.
In the immune system.
The main receptors in the immune system are pattern recognition receptors (PRRs), toll-like receptors (TLRs), killer activated and killer inhibitor receptors (KARs and KIRs), complement receptors, Fc receptors, B cell receptors and T cell receptors.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\n{[\\ce{L}] + [\\ce{R}] \\ce{<=>[{K_d}]} [\\text{LR}]}\n"
}
] |
https://en.wikipedia.org/wiki?curid=569480
|
56951340
|
Group functor
|
In mathematics, a group functor is a group-valued functor on the category of commutative rings. Although it is typically viewed as a generalization of a group scheme, the notion itself involves no scheme theory. Because of this feature, some authors, notably Waterhouse and Milne (who followed Waterhouse), develop the theory of group schemes based on the notion of group functor instead of scheme theory.
A formal group is usually defined as a particular kind of a group functor.
Group functor as a generalization of a group scheme.
A scheme may be thought of as a contravariant functor from the category formula_0 of "S"-schemes to the category of sets satisfying the gluing axiom; the perspective known as the functor of points. Under this perspective, a group scheme is a contravariant functor from formula_0 to the category of groups that is a Zariski sheaf (i.e., satisfying the gluing axiom for the Zariski topology).
For example, if Γ is a finite group, then consider the functor that sends Spec("R") to the set of locally constant functions on it. For example, the group scheme
formula_1
can be described as the functor
formula_2
If we take a ring, for example, formula_3, then
formula_4
Group sheaf.
It is useful to consider a group functor that respects a topology (if any) of the underlying category; namely, one that is a sheaf and a group functor that is a sheaf is called a group sheaf. The notion appears in particular in the discussion of a torsor (where a choice of topology is an important matter).
For example, a "p"-divisible group is an example of a fppf group sheaf (a group sheaf with respect to the fppf topology).
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathsf{Sch}_S"
},
{
"math_id": 1,
"text": "SL_2 = \\operatorname{Spec}\\left( \\frac{\\mathbb{Z}[a,b,c,d]}{(ad - bc - 1)} \\right)"
},
{
"math_id": 2,
"text": "\\operatorname{Hom}_{\\textbf{CRing}}\\left(\\frac{\\mathbb{Z}[a,b,c,d]}{(ad - bc - 1)}, -\\right)"
},
{
"math_id": 3,
"text": "\\mathbb{C}"
},
{
"math_id": 4,
"text": "\n\\begin{align}\nSL_2(\\mathbb{C}) &= \\operatorname{Hom}_{\\textbf{CRing}}\\left(\\frac{\\mathbb{Z}[a,b,c,d]}{(ad - bc - 1)}, \\mathbb{C}\\right) \\\\\n&\\cong \\left\\{ \\begin{bmatrix}a & b \\\\ c & d \\end{bmatrix} \\in M_2(\\mathbb{C}) : ad-bc = 1 \\right\\}\n\\end{align}\n"
}
] |
https://en.wikipedia.org/wiki?curid=56951340
|
56956125
|
Multi-homogeneous Bézout theorem
|
In algebra and algebraic geometry, the multi-homogeneous Bézout theorem is a generalization to multi-homogeneous polynomials of Bézout's theorem, which counts the number of isolated common zeros of a set of homogeneous polynomials. This generalization is due to Igor Shafarevich.
Motivation.
Given a polynomial equation or a system of polynomial equations, it is often useful to compute or to bound the number of solutions without computing the solutions explicitly.
In the case of a single equation, this problem is solved by the fundamental theorem of algebra, which asserts that the number of complex solutions is bounded by the degree of the polynomial, with equality, if the solutions are counted with their multiplicities.
In the case of a system of n polynomial equations in n unknowns, the problem is solved by Bézout's theorem, which asserts that, if the number of complex solutions is finite, their number is bounded by the product of the degrees of the polynomials. Moreover, if the number of solutions at infinity is also finite, then the product of the degrees equals the number of solutions counted with multiplicities and including the solutions at infinity.
However, it is rather common that the number of solutions at infinity is infinite. In this case, the product of the degrees of the polynomials may be much larger than the number of roots, and better bounds are useful.
The multi-homogeneous Bézout theorem provides such a better bound when the unknowns may be split into several subsets such that the degree of each polynomial in each subset is lower than the total degree of the polynomial. For example, let formula_0 be polynomials of degree two which are of degree one in the n indeterminates formula_1 and also of degree one in formula_2 (that is, the polynomials are "bilinear"). In this case, Bézout's theorem bounds the number of solutions by
formula_3
while the multi-homogeneous Bézout theorem gives the bound (using Stirling's approximation)
formula_4
Statement.
A multi-homogeneous polynomial is a polynomial that is homogeneous with respect to several sets of variables.
More precisely, consider k positive integers formula_5, and, for "i" = 1, ..., "k", the formula_6 indeterminates formula_7 A polynomial in all these indeterminates is multi-homogeneous of multi-degree formula_8 if it is homogeneous of degree formula_9 in formula_10
A multi-projective variety is a projective subvariety of the product of projective spaces
formula_11
where formula_12 denote the projective space of dimension n. A multi-projective variety may be defined as the set of the common nontrivial zeros of an ideal of multi-homogeneous polynomials, where "nontrivial" means that formula_13 are not simultaneously 0, for each i.
Bézout's theorem asserts that n homogeneous polynomials of degree formula_14 in "n" + 1 indeterminates define either an algebraic set of positive dimension, or a zero-dimensional algebraic set consisting of formula_15 points counted with their multiplicities.
For stating the generalization of Bézout's theorem, it is convenient to introduce new indeterminates formula_16 and to represent the multi-degree formula_17 by the linear form formula_18 In the following, "multi-degree" will refer to this linear form rather than to the sequence of degrees.
Setting formula_19 the multi-homogeneous Bézout theorem is the following.
"With above notation," "n" "multi-homogeneous polynomials of multi-degrees" formula_20 "define either a multi-projective algebraic set of positive dimension, or a zero-dimensional algebraic set consisting of" B "points, counted with multiplicities, where" B "is the coefficient of"
formula_21
"in the product of linear forms"
formula_22
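For the bilinear example of the motivation section, each of the 2"n" polynomials has multi-degree (1, 1), so each linear form equals "t"1 + "t"2 and B is the coefficient of "t"1^"n" "t"2^"n" in ("t"1 + "t"2)^(2"n"), which is exactly the binomial coefficient quoted above. The following Python sketch (an illustration only, using sympy) makes this count explicit for a small made-up value of "n".

    import sympy as sp

    t1, t2 = sp.symbols('t1 t2')
    n = 3  # illustrative choice: n unknowns in each of the two groups

    # 2n bilinear polynomials, each of multi-degree (1, 1); their linear forms are all t1 + t2
    product = sp.expand((t1 + t2) ** (2 * n))

    # B is the coefficient of t1**n * t2**n in the product of the linear forms
    B = product.coeff(t1, n).coeff(t2, n)

    print(B)                      # 20 for n = 3
    print(sp.binomial(2 * n, n))  # the multi-homogeneous Bézout bound, also 20
    print(2 ** (2 * n))           # 64, the bound from the classical Bézout theorem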
Non-homogeneous case.
The multi-homogeneous Bézout bound on the number of solutions may be used for non-homogeneous systems of equations, when the polynomials may be (multi)-homogenized without increasing the total degree. However, in this case, the bound may not be sharp if there are solutions "at infinity".
Without insight on the problem that is studied, it may be difficult to group the variables for a "good" multi-homogenization. Fortunately, there are many problems where such a grouping results directly from the problem that is modeled. For example, in mechanics, equations are generally homogeneous or almost homogeneous in the lengths and in the masses.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "p_1, \\ldots, p_{2n}"
},
{
"math_id": 1,
"text": "x_1, \\ldots x_n,"
},
{
"math_id": 2,
"text": "y_1, \\ldots y_n."
},
{
"math_id": 3,
"text": "2^{2n},"
},
{
"math_id": 4,
"text": "\\binom{2n}{n}= \\frac{(2n)!}{(n!)^2}\\sim \\frac{2^{2n}}{\\sqrt{\\pi n}}."
},
{
"math_id": 5,
"text": "n_1, \\ldots, n_k"
},
{
"math_id": 6,
"text": "n_i+1"
},
{
"math_id": 7,
"text": "x_{i,0}, x_{i,1}, \\ldots, x_{i,n_i}."
},
{
"math_id": 8,
"text": "d_1, \\ldots, d_k,"
},
{
"math_id": 9,
"text": "d_i"
},
{
"math_id": 10,
"text": "x_{i,0}, x_{i,1}, \\ldots, x_{i,{n_i}}."
},
{
"math_id": 11,
"text": "\\mathbb P_{n_1}\\times \\cdots\\times \\mathbb P_{n_k},"
},
{
"math_id": 12,
"text": "\\mathbb P_n"
},
{
"math_id": 13,
"text": "x_{i,0}, x_{i,1}, \\ldots, x_{i,n}"
},
{
"math_id": 14,
"text": "d_1, \\ldots, d_n"
},
{
"math_id": 15,
"text": "d_1\\cdots d_n"
},
{
"math_id": 16,
"text": "t_1, \\ldots, t_k,"
},
{
"math_id": 17,
"text": "d_1, \\ldots, d_k"
},
{
"math_id": 18,
"text": "\\mathbf d=d_1t_1+\\cdots + d_kt_k."
},
{
"math_id": 19,
"text": "n=n_1+\\cdots +n_k,"
},
{
"math_id": 20,
"text": "\\mathbf d_1, \\ldots, \\mathbf d_n"
},
{
"math_id": 21,
"text": "t_1^{n_1}\\cdots t_k^{n_k}"
},
{
"math_id": 22,
"text": "\\mathbf d_1 \\cdots \\mathbf d_n."
}
] |
https://en.wikipedia.org/wiki?curid=56956125
|
56956816
|
Two-tree broadcast
|
The two-tree broadcast (abbreviated 2tree-broadcast or 23-broadcast) is an algorithm that implements a broadcast communication pattern on a distributed system using message passing.
A broadcast is a commonly used collective operation that sends data from one processor to all other processors.
The two-tree broadcast communicates concurrently over two binary trees that span all processors. This achieves full usage of the bandwidth in the full-duplex communication model while having a startup latency logarithmic in the number of participating processors.
The algorithm can also be adapted to perform a reduction or prefix sum.
Algorithm.
A broadcast sends a message from a specified root processor to all other processors.
"Binary tree broadcasting" uses a binary tree to model the communication between the processors.
Each processor corresponds to one node in the tree, and the root processor is the root of the tree.
To broadcast a message "M", the root sends "M" to its two children (child nodes). Each processor waits until it receives "M" and then sends "M" to its children. Because leaves have no children, they don't have to send any messages.
The broadcasting process can be pipelined by splitting the message into "k" blocks, which are then broadcast consecutively.
In such a binary tree, the leaves of the tree only receive data, but never send any data themselves. If the communication is bidirectional (full-duplex), meaning each processor can send a message and receive a message at the same time, the leaves only use one half of the available bandwidth.
The idea of the two-tree broadcast is to use two binary trees "T"1 and "T"2 and communicate on both concurrently.
The trees are constructed so that the interior nodes of one tree correspond to leaf nodes of the other tree.
The data that has to be broadcast is split into blocks of equal size.
In each step of the algorithm, each processor receives one block and sends the previous block to one of its children in the tree in which it is an interior node.
A schedule is needed so that no processor has to send or receive two messages in the same step.
To create such a schedule, the edges of both trees are colored with 0 and 1.
Edges with color 0 are used in even steps, and edges with color 1 are used in odd steps.
This schedule allows each processor to send one message and receive one message in each step, fully utilizing the available bandwidth.
Assume that processor "i" wants to broadcast a message. The two trees are constructed for the remaining processors. Processor "i" sends blocks alternating to the roots of the two trees, so each tree broadcasts one half of the message.
Analysis.
Let "p" be the number of processing elements (PE), numbered from 0 to "p" - 1.
Construction of the trees.
Let "h"
⌈log("p" + 2)⌉.
"T"1 and "T"2 can be constructed as trees of height "h" - 1, such that both trees form an in-order numbering of the processors, with the following method:
T1: If "p"
2"h" − 2, "T"1 is a complete binary tree of height "h" − 1 except that the rightmost leaf is missing. Otherwise, "T"1 consists of a complete binary tree of height "h" − 2 covering PEs [0, 2"h"−1 − 2], a recursively constructed tree covering PEs [2"h"−1, "p" − 1], and a root at PE 2"h"−1 − 1 whose children are the roots of the left and the right subtree.
T2: There are two ways to construct "T"2.
With "shifting", "T"2 is first constructed like "T"1, except that it contains an additional processor. Then "T"2 is shifted by one position to the left and the leftmost leaf is removed.
With "mirroring", "T"2 is the mirror image of "T"1 (with the mirror axis between processors −1 and ). Mirroring only works for even "p".
It can be proven that a coloring with the desired properties exists for all "p".
When mirroring is used to construct "T"2, each processor can independently compute the color of its incident edges in "O"(log "p") time.
Communication Time.
For this analysis, the following communication model is used: A message of size "n" has a communication time of "α" + "βn", independent on which processors communicate. "α" represents the startup overhead to send the message, "β" represents the transmission time per data element.
Suppose the message of size "m" is split into 2"k" blocks. Each communication step takes time "α" + "βm"/(2"k").
Let "h"
log "p" be the height of the communication structure with the root at processor "i" and the two trees below it.
After 2"h" steps, the first data block has reached every node in both trees. Afterwards, each processor receives one block in every step until it received all blocks.
The total number of steps is 2"h" + 2"k", resulting in a total communication time of (2"h" + 2"k")("α" + "βm"/(2"k")).
Using an optimal "k"
"k"*
()<templatestyles src="Fraction/styles.css" />1⁄2, the total communication time is "βm" + 2"α"log "p" + √8"αβm"log "p".
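The trade-off between startup cost and pipelining can be evaluated numerically from these formulas. The Python sketch below simply evaluates the expressions above; the values of "α", "β", "m" and "p" are made up for illustration.

    import math

    def two_tree_time(alpha, beta, m, p, k):
        """Total broadcast time (2h + 2k) * (alpha + beta*m/(2k)) with h = log2(p)."""
        h = math.log2(p)
        return (2 * h + 2 * k) * (alpha + beta * m / (2 * k))

    alpha, beta = 1e-6, 1e-9   # illustrative startup latency and per-byte transfer cost
    m, p = 1 << 20, 1024       # illustrative message size (bytes) and processor count

    h = math.log2(p)
    k_opt = math.sqrt(beta * m * h / (2 * alpha))            # optimal number of block pairs k*
    closed_form = beta * m + 2 * alpha * h + math.sqrt(8 * alpha * beta * m * h)

    print(k_opt)
    print(two_tree_time(alpha, beta, m, p, round(k_opt)))    # close to the closed-form optimum
    print(closed_form)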
Comparison to similar algorithms.
In a linear pipeline broadcast, the message is split into "k" blocks. In each step, each processor "i" receives one block from the processor "i"-1 (mod "p") and sends one block to the processor "i"+1 (mod "p").
Linear pipeline has optimal throughput, but has a startup time in "O"("p").
For large "p", the "O"(log "p") startup latency of the two-tree broadcast is faster.
Because both algorithms have optimal throughput, the two-tree algorithm is faster for large numbers of processors.
A binomial tree broadcast communicates along a binomial tree. Each process receives the message that is broadcast (the root already has the message) and then sends the message to its children.
A binomial tree broadcast has only half the startup time of the two-tree broadcast, but a factor of log("p") more communication.
The binomial tree broadcast is faster than the two-tree broadcast for small messages, but slower for large messages.
A pipelined binary tree broadcast splits the message into "k" blocks and broadcasts the blocks consecutively over a binary tree.
By using a "Fibonacci tree" instead of a simple balanced binary tree, the startup latency can be reduced to "α"log("p").
A Fibonacci tree of height "h" consists of a root that has a Fibonacci tree of height "h"-1 as its left child and a Fibonacci tree of "h"-2 as its right child.
The pipelined Fibonacci tree broadcast has half the startup latency of the two-tree broadcast, but also only half of the throughput.
It is faster for small messages, while the two-tree broadcast is faster for large messages.
Usage for other communication primitives.
Reduction.
A reduction (codice_0 in the MPI standard) computes formula_0 where "M""i" is a vector of length "m" originally
available at processor "i" and formula_1 is a binary operation that is associative, but not necessarily commutative.
The result is stored at a specified root processor "r".
Assume that "r"
0 or "r"
"p"−1.
In this case the communication is identical to the broadcast, except that the communication direction is reversed.
Each process receives two blocks from its children, reduces them with its own block, and sends the result to its parent.
The root takes turns receiving blocks from the roots of "T"1 and "T"2 and reduces them with its own data.
The communication time is the same as for the Broadcast and the amount of data reduced per processor is 2"m".
If the reduce operation is commutative, the result can be achieved for any root by renumbering the processors.
If the operation is not commutative and the root is not 0 or "p"−1, then 2"βm" is a lower bound for the communication time.
In this case, the remaining processors are split into two subgroups. The processors <"r" perform a reduction to the root "r"−1 and the processors >"r" perform a reduction to the root "r"+1. Processor "r" receives blocks alternating from the two roots of the subgroups.
Prefix sum.
A prefix sum (codice_1) computes formula_2 for each
processor "j" where "M""i" is a vector of length "m" originally
available at processor "i" and formula_1 is a binary associative operation.
Using an inorder binary tree, a prefix sum can be computed by first performing an up-phase in which each interior node computes a partial sum formula_3 for left- and rightmost leaves "l" and "r", followed by a down-phase in which prefixes of the form formula_4 are sent down the tree and allow each processor to finish computing its prefix sum.
The communication in the up-phase is equivalent to a reduction to processor 0 and the communication in the down-phase is equivalent to a broadcast from the processor 0.
The total communication time is about twice the communication time of the two-tree broadcast.
ESBT broadcast.
If "p" is a power of two, there is an optimal broadcasting algorithm based on edge disjoint spanning binomial trees (ESBT) in a hypercube.
The hypercube, excluding the root 0"d", is split into log "p" ESBTs.
The algorithm uses pipelining by splitting the broadcast data into "k" blocks.
Processor 0"d" cyclically distributes blocks to the roots of the ESBTs and each ESBT performs a pipelined binary tree broadcast.
In step "i", each processor sends and receives one message along dimension "i" mod "d".
The communication time of the algorithm is "βm" + "α"log "p" + √4"αβm"log "p", so the startup latency is only one half of the startup latency of the two-tree broadcast.
The drawback of the ESBT broadcast is that it does not work for other values of "p" and it cannot be adapted for (non-commutative) reduction or prefix sum.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\bigoplus_{i=0}^{p-1} M_i"
},
{
"math_id": 1,
"text": "\\bigoplus"
},
{
"math_id": 2,
"text": "\\bigoplus_{i=0}^{j} M_i"
},
{
"math_id": 3,
"text": " \\bigoplus_{i=l}^{r} M_i"
},
{
"math_id": 4,
"text": " \\bigoplus_{i=0}^{l-1} M_i"
}
] |
https://en.wikipedia.org/wiki?curid=56956816
|
56959327
|
Elsasser number
|
The Elsasser number, Λ, is a dimensionless number in magnetohydrodynamics that represents the ratio of magnetic forces to the Coriolis force.
formula_0
where σ is the conductivity of the fluid, "B" is the magnetic field, ρ is the density of the fluid, and Ω is the rate of rotation of the body.
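The definition translates directly into code; the numbers in the example call below are illustrative placeholders, not measured values for any particular body.

    def elsasser_number(sigma, B, rho, omega):
        """Elsasser number: ratio of magnetic to Coriolis forces, Lambda = sigma * B**2 / (rho * omega)."""
        return sigma * B ** 2 / (rho * omega)

    # Illustrative SI inputs: conductivity (S/m), magnetic field (T), density (kg/m^3), rotation rate (rad/s)
    print(elsasser_number(sigma=1.0e6, B=1.0e-3, rho=1.0e4, omega=7.3e-5))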
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\Lambda = \\frac{\\sigma B^2}{\\rho \\Omega}"
}
] |
https://en.wikipedia.org/wiki?curid=56959327
|
5697044
|
Negishi coupling
|
Chemical reaction
<templatestyles src="Reactionbox/styles.css"/>
The Negishi coupling is a widely employed transition metal catalyzed cross-coupling reaction. The reaction couples organic halides or triflates with organozinc compounds, forming carbon-carbon bonds (C-C) in the process. A palladium (0) species is generally utilized as the catalyst, though nickel is sometimes used. A variety of nickel catalysts in either Ni0 or NiII oxidation state can be employed in Negishi cross couplings such as Ni(PPh3)4, Ni(acac)2, Ni(COD)2 etc.
* The leaving group X is usually chloride, bromide, or iodide, but triflate and acetyloxy groups are feasible as well. X = Cl usually leads to slow reactions.
* The organic residue R = alkenyl, aryl, allyl, alkynyl or propargyl.
* The halide X' in the organozinc compound can be chloride, bromine or iodine and the organic residue R' is alkenyl, aryl, allyl, alkyl, benzyl, homoallyl, and homopropargyl.
* The metal M in the catalyst is nickel or palladium
* The ligand L in the catalyst can be triphenylphosphine, dppe, BINAP, chiraphos or XPhos.
Palladium catalysts in general have higher chemical yields and higher functional group tolerance.
The Negishi coupling finds common use in the field of total synthesis as a method for selectively forming C-C bonds between complex synthetic intermediates. The reaction allows for the coupling of sp3, sp2, and sp carbon atoms (see orbital hybridization), which makes it somewhat unusual among the palladium-catalyzed coupling reactions. Organozincs are moisture and air sensitive, so the Negishi coupling must be performed in an oxygen- and water-free environment, a fact that has hindered its use relative to other cross-coupling reactions that require less robust conditions (i.e. the Suzuki reaction). However, organozincs are more reactive than both organostannanes and organoborates, which correlates to faster reaction times.
The reaction is named after Ei-ichi Negishi who was a co-recipient of the 2010 Nobel Prize in Chemistry for the discovery and development of this reaction.
Negishi and coworkers originally investigated the cross-coupling of organoaluminum reagents in 1976, initially employing Ni and Pd as the transition metal catalysts, but noted that Ni resulted in the decay of stereospecificity whereas Pd did not. Transitioning from organoaluminum species to organozinc compounds, Negishi and coworkers reported the use of Pd complexes in organozinc coupling reactions and carried out methodological studies, eventually developing the reaction conditions into those commonly utilized today. Alongside Richard F. Heck and Akira Suzuki, Ei-ichi Negishi was a co-recipient of the Nobel Prize in Chemistry in 2010, for his work on "palladium-catalyzed cross couplings in organic synthesis".
Reaction mechanism.
The reaction mechanism is thought to proceed via a standard Pd catalyzed cross-coupling pathway, starting with a Pd(0) species, which is oxidized to Pd(II) in an oxidative addition step involving the organohalide species. This step proceeds with aryl, vinyl, alkynyl, and acyl halides, acetates, or triflates, with substrates following standard oxidative addition relative rates (I>OTf>Br»Cl).
The actual mechanism of oxidative addition is unresolved, though there are two likely pathways. One pathway is thought to proceed via an SN2 like mechanism resulting in inverted stereochemistry. The other pathway proceeds via concerted addition and retains stereochemistry.
Though the additions are cis-, the Pd(II) complex rapidly isomerizes to the trans- complex.
Next, the transmetalation step occurs, in which the organozinc reagent exchanges its organic substituent with the halide in the Pd(II) complex, generating the trans- Pd(II) complex and a zinc halide salt. The organozinc substrate can be aryl, vinyl, allyl, benzyl, homoallyl, or homopropargyl. Transmetalation is usually rate limiting, and a complete mechanistic understanding of this step has not yet been reached, though several studies have shed light on this process. Alkylzinc species form higher-order zincate species prior to transmetalation whereas arylzinc species do not. ZnXR and ZnR2 can both be used as reactive reagents, and Zn is known to prefer four-coordinate complexes, which means that solvent-coordinated Zn complexes cannot be ruled out "a priori". Studies indicate that competing equilibria exist between cis- and trans- bis alkyl organopalladium complexes, but that the only productive intermediate is the cis complex.
The last step in the catalytic pathway of the Negishi coupling is reductive elimination, which is thought to proceed via a three coordinate transition state, yielding the coupled organic product and regenerating the Pd(0) catalyst. For this step to occur, the aforementioned cis- alkyl organopalladium complex must be formed.
Both organozinc halides and diorganozinc compounds can be used as starting materials. In one model system it was found that in the transmetalation step the former give the cis-adduct R-Pd-R' resulting in fast reductive elimination to product while the latter gives the trans-adduct which has to go through a slow trans-cis isomerization first.
A common side reaction is homocoupling. In one Negishi model system the formation of homocoupling was found to be the result of a second transmetalation reaction between the diarylmetal intermediate and arylmetal halide:
Ar–Pd–Ar' + Ar'–Zn–X → Ar'–Pd–Ar' + Ar–Zn–X
Ar'–Pd–Ar' → Ar'–Ar' + Pd(0) "(homocoupling)"
Ar–Zn–X + H2O → Ar–H + HO–Zn–X "(reaction accompanied by dehalogenation)"
Nickel catalyzed systems can operate under different mechanisms depending on the coupling partners. Unlike palladium systems which involve only Pd0 or PdII, nickel catalyzed systems can involve nickel of different oxidation states. Both systems are similar in that they involve similar elementary steps: oxidative addition, transmetalation, and reductive elimination. Both systems also have to address issues of β-hydride elimination and difficult oxidative addition of alkyl electrophiles.
For unactivated alkyl electrophiles, one possible mechanism is a transmetalation first mechanism. In this mechanism, the alkyl zinc species would first transmetalate with the nickel catalyst. Then the nickel would abstract the halide from the alkyl halide resulting in the alkyl radical and oxidation of nickel after addition of the radical.
One important factor when contemplating the mechanism of a nickel catalyzed cross coupling is that reductive elimination is facile from NiIII species, but very difficult from NiII species. Kochi and Morrell provided evidence for this by isolating NiII complex Ni(PEt3)2(Me)("o"-tolyl), which did not undergo reductive elimination quickly enough to be involved in this elementary step.
Scope.
The Negishi coupling has been applied in the following illustrative syntheses:
Negishi coupling has been applied in the synthesis of hexaferrocenylbenzene:
with hexaiodidobenzene, diferrocenylzinc and tris(dibenzylideneacetone)dipalladium(0) in tetrahydrofuran. The yield is only 4% signifying substantial crowding around the aryl core.
In a novel modification palladium is first oxidized by the haloketone "2-chloro-2-phenylacetophenone" 1 and the resulting palladium OPdCl complex then accepts both the organozinc compound 2 and the organotin compound 3 in a double transmetalation:
Examples of nickel catalyzed Negishi couplings include sp2-sp2, sp2-sp3, and sp3-sp3 systems. In the system first studied by Negishi, aryl-aryl cross coupling was catalyzed by Ni(PPh3)4 generated "in situ" through reduction of Ni(acac)2 with PPh3 and (i-Bu)2AlH.
Variations have also been developed to allow for the cross-coupling of aryl and alkenyl partners. In the variation developed by Knochel et al, aryl zinc bromides were reacted with vinyl triflates and vinyl halides.
Reactions between sp3-sp3 centers are often more difficult; however, adding an unsaturated ligand with an electron withdrawing group as a cocatalyst improved the yield in some systems. It is believed that added coordination from the unsaturated ligand favors reductive elimination over β-hydride elimination. This also works in some alkyl-aryl systems.
Several asymmetric variants exist and many utilize Pybox ligands.
Industrial applications.
The Negishi coupling is not employed as frequently in industrial applications as its cousins the Suzuki reaction and Heck reaction, mostly as a result of the water and air sensitivity of the required aryl or alkyl zinc reagents. In 2003 Novartis employed a Negishi coupling in the manufacture of PDE472, a phosphodiesterase type 4D inhibitor, which was being investigated as a drug lead for the treatment of asthma. The Negishi coupling was used as an alternative to the Suzuki reaction providing improved yields, 73% on a 4.5 kg scale, of the desired benzodioxazole synthetic intermediate.
Applications in total synthesis.
While the Negishi coupling is rarely used in industrial chemistry, a result of the aforementioned water and oxygen sensitivity, it finds wide use in the field of natural product total synthesis. The increased reactivity relative to other cross-coupling reactions makes the Negishi coupling ideal for joining complex intermediates in the synthesis of natural products. Additionally, Zn is more environmentally friendly than other metals such as the Sn used in the Stille coupling. Historically, the Negishi coupling has not been used as much as the Stille or Suzuki coupling. The Negishi coupling is particularly useful for fragment-coupling processes, especially when compared to the aforementioned Stille and Suzuki coupling reactions. The major drawback of the Negishi coupling, aside from its water and oxygen sensitivity, is its relative lack of functional group tolerance when compared to other cross-coupling reactions.
(−)-Stemoamide is a natural product found in the root extracts of "Stemona tuberosa". These extracts have been used in Japanese and Chinese folk medicine to treat respiratory disorders, and (−)-stemoamide is also an anthelminthic. Somfai and coworkers employed a Negishi coupling in their synthesis of (−)-stemoamide. The reaction was implemented mid-synthesis, forming an sp3-sp2 C-C bond between a β,γ-unsaturated ester and an intermediate diene 4 with a 78% yield of product 5. Somfai completed the stereoselective total synthesis of (−)-stemoamide in 12 steps with a 20% overall yield.
Kibayashi and coworkers utilized the Negishi coupling in the total synthesis of Pumiliotoxin B. Pumiliotoxin B is one of the major toxic alkaloids isolated from Dendrobates pumilio, a Panamanian poison frog. These toxic alkaloids display modulatory effects on voltage-dependent sodium channels, resulting in cardiotonic and myotonic activity. Kibayashi employed the Negishi coupling late stage in the synthesis of Pumiliotoxin B, coupling a homoallylic sp3 carbon on the zinc alkylidene indolizidine 6 with the (E)-vinyl iodide 7 with a 51% yield. The natural product was then obtained after deprotection.
δ-trans-Tocotrienoloic acid, isolated from the plant "Chrysochlamys ulei", is a natural product shown to inhibit DNA polymerase β (pol β), which functions to repair DNA via base excision. Inhibition of pol β in conjunction with other chemotherapy drugs may increase the cytotoxicity of these chemotherapeutics, leading to lower effective dosages. The Negishi coupling was implemented in the synthesis of δ-trans-tocotrienoloic acid by Hecht and Maloney, coupling the sp3 homopropargyl zinc reagent 8 with the sp2 vinyl iodide 9. The reaction proceeded with quantitative yield, coupling fragments mid-synthesis en route to the stereoselectively synthesized natural product δ-trans-tocotrienoloic acid.
Smith and Fu demonstrated that their method to couple secondary nucleophiles with secondary alkyl electrophiles could be applied to the formal synthesis of α-cembra-2,7,11-triene-4,6-diol, a target with antitumor activity. They achieved a 61% yield on a gram scale using their method to install an "iso"-propyl group. This method would be highly adaptable in this application for diversification and installing other alkyl groups to enable structure-activity relationship (SAR) studies. Kirschning and Schmidt applied nickel-catalyzed Negishi cross-coupling to the first total synthesis of carolactone. In this application, they achieved an 82% yield and dr = 10:1.
Preparation of organozinc precursors.
Alkylzinc reagents can be accessed from the corresponding alkyl bromides using iodine in dimethylacetamide (DMAC). The catalytic I2 serves to activate the zinc towards nucleophilic addition.
Aryl zincs can be synthesized using mild reaction conditions via a Grignard like intermediate.
formula_0
Organozincs can also be generated in situ and used in a one pot procedure as demonstrated by Knochel et al.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\begin{matrix} {}\\\\\n\\ce{Ar-I ->[\\begin{matrix}\\ce{iPrMgCl}\\\\\\text{THF}\\end{matrix}][\\ce{ZnBr2}] Ar-ZnBr}\n\\end{matrix}"
}
] |
https://en.wikipedia.org/wiki?curid=5697044
|
56981684
|
Walter Gröbli
|
Swiss mathematician
Walter Gröbli (23 September 1852 – 26 June 1903) was a Swiss mathematician.
Life and work.
His father, Issak Gröbli, was an industrialist who invented a shuttle embroidery machine in 1863, and his elder brother is credited with having introduced the invention in the United States. Walter Gröbli was more interested in mathematics than in embroidery, and he studied from 1871 to 1875 at the Polytechnicum of Zürich under Hermann Schwarz and Heinrich Martin Weber. Gröbli then studied at the University of Berlin and was awarded a doctorate by the University of Göttingen (1877).
For the following six years, Gröbli was an assistant of Frobenius at the Polytechnicum of Zürich. In 1883 he was elected mathematics professor at the "Gymnasium" of Zürich. Despite his mathematical talent he did not pursue a research career; he was happy to be a schoolmaster.
His other main passion was mountaineering. He died, together with three colleagues, in a mountaineering accident while climbing Piz Blas.
The only known work by Gröbli is his doctoral dissertation. It deals with the motion of three vortices, the motion of four vortices having an axis of symmetry, and the motion of formula_0 vortices having formula_1 symmetry axes. This work is a classic in the vortex dynamics literature.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "2n"
},
{
"math_id": 1,
"text": "n"
}
] |
https://en.wikipedia.org/wiki?curid=56981684
|
5698171
|
Fidelity of quantum states
|
Term in quantum mechanics
In quantum mechanics, notably in quantum information theory, fidelity quantifies the "closeness" between two density matrices. It expresses the probability that one state will pass a test to identify as the other. It is not a metric on the space of density matrices, but it can be used to define the Bures metric on this space.
Definition.
The fidelity between two quantum states "formula_0" and "formula_1", expressed as density matrices, is commonly defined as:
formula_2
The square roots in this expression are well-defined because both formula_0 and formula_3 are positive semidefinite matrices, and the square root of a positive semidefinite matrix is defined via the spectral theorem. The Euclidean inner product from the classical definition is replaced by the Hilbert–Schmidt inner product.
As will be discussed in the following sections, this expression can be simplified in various cases of interest. In particular, for pure states, formula_4 and formula_5, it equals:formula_6This tells us that the fidelity between pure states has a straightforward interpretation in terms of probability of finding the state formula_7 when measuring formula_8 in a basis containing formula_7.
Some authors use an alternative definition formula_9 and call this quantity fidelity. The definition of formula_10 however is more common. To avoid confusion, formula_11 could be called "square root fidelity". In any case it is advisable to clarify the adopted definition whenever the fidelity is employed.
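Numerically, the defining expression can be evaluated directly with a matrix square root. The Python sketch below is an illustration only; the two density matrices are made-up full-rank qubit states, and scipy's sqrtm is used for the matrix square roots.

    import numpy as np
    from scipy.linalg import sqrtm

    def fidelity(rho, sigma):
        """F(rho, sigma) = (tr sqrt(sqrt(rho) @ sigma @ sqrt(rho)))**2."""
        sqrt_rho = sqrtm(rho)
        return np.real(np.trace(sqrtm(sqrt_rho @ sigma @ sqrt_rho))) ** 2

    # Two illustrative single-qubit density matrices (Hermitian, unit trace, positive definite)
    rho = np.array([[0.7, 0.2], [0.2, 0.3]])
    sigma = np.array([[0.9, 0.1], [0.1, 0.1]])

    print(fidelity(rho, sigma))
    print(np.real(np.trace(sqrtm(rho @ sigma))) ** 2)  # equivalent expression (tr sqrt(rho sigma))**2

The second print statement evaluates the equivalent expression via tr √(ρσ), discussed below; both numbers agree.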
Motivation from classical counterpart.
Given two random variables formula_12 with values formula_13 (categorical random variables) and probabilities formula_14 and formula_15, the fidelity of formula_16 and formula_17 is defined to be the quantity
formula_18.
The fidelity deals with the marginal distribution of the random variables. It says nothing about the joint distribution of those variables. In other words, the fidelity formula_19 is the square of the inner product of formula_20 and formula_21 viewed as vectors in Euclidean space. Notice that formula_22 if and only if formula_23. In general, formula_24. The measure formula_25 is known as the Bhattacharyya coefficient.
Given a classical measure of the distinguishability of two probability distributions, one can motivate a measure of distinguishability of two quantum states as follows: if an experimenter is attempting to determine whether a quantum state is either of two possibilities formula_0 or formula_1, the most general possible measurement they can make on the state is a POVM, which is described by a set of Hermitian positive semidefinite operators formula_26. When measuring a state formula_0 with this POVM, the formula_27-th outcome is found with probability formula_28, and likewise with probability formula_29 for formula_1. The ability to distinguish between formula_0 and formula_1 is then equivalent to the ability to distinguish between the classical probability distributions formula_30 and formula_31. A natural question is then to ask what is the POVM that makes the two distributions as distinguishable as possible, which in this context means to minimize the Bhattacharyya coefficient over the possible choices of POVM. Formally, we are thus led to define the fidelity between quantum states as:
formula_32
It was shown by Fuchs and Caves that the minimization in this expression can be computed explicitly, with solution the projective POVM corresponding to measuring in the eigenbasis of formula_33, and results in the common explicit expression for the fidelity asformula_34
Equivalent expressions.
Equivalent expression via trace norm.
An equivalent expression for the fidelity between arbitrary states via the trace norm is:
formula_35
where the absolute value of an operator is here defined as formula_36.
Equivalent expression via characteristic polynomials.
Since the trace of a matrix is equal to the sum of its eigenvalues
formula_37
where the formula_38 are the eigenvalues of formula_39, which is positive semidefinite by construction and so the square roots of the eigenvalues are well defined. Because the characteristic polynomial of a product of two matrices is independent of the order, the spectrum of a matrix product is invariant under cyclic permutation, and so these eigenvalues can instead be calculated from formula_40. Reversing the trace property leads to
formula_41.
Expressions for pure states.
If (at least) one of the two states is pure, for example formula_4, the fidelity simplifies toformula_42This follows observing that if formula_0 is pure then formula_43, and thusformula_44
If both states are pure, formula_4 and formula_5, then we get the even simpler expression:formula_6
Properties.
Some of the important properties of the quantum state fidelity are:
If formula_0 and formula_1 are both qubit states, the fidelity can be computed as
formula_54
Qubit state means that formula_0 and formula_1 are represented by two-dimensional matrices. This result follows from noticing that formula_55 is a positive semidefinite operator, hence formula_56, where formula_57 and formula_58 are the (nonnegative) eigenvalues of formula_59. If formula_0 (or formula_1) is pure, this result simplifies further to formula_60 since formula_61 for pure states.
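Continuing the numerical sketch from the definition section, the two-dimensional closed form can be checked against the general expression; the two qubit states below are again made up purely for illustration.

    import numpy as np
    from scipy.linalg import sqrtm

    rho = np.array([[0.7, 0.2], [0.2, 0.3]])
    sigma = np.array([[0.9, 0.1], [0.1, 0.1]])

    general = np.real(np.trace(sqrtm(sqrtm(rho) @ sigma @ sqrtm(rho)))) ** 2
    closed_form = np.trace(rho @ sigma) + 2 * np.sqrt(np.linalg.det(rho) * np.linalg.det(sigma))

    print(general, closed_form)  # the two values agree for 2x2 density matrices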
Unitary invariance.
Direct calculation shows that the fidelity is preserved by unitary evolution, i.e.
formula_62
for any unitary operator formula_63.
Relationship with the fidelity between the corresponding probability distributions.
Let formula_64 be an arbitrary positive operator-valued measure (POVM); that is, a set of positive semidefinite operators formula_65 satisfying formula_66. Then, for any pair of states formula_0 and formula_1, we have
formula_67
where in the last step we denoted with formula_68 and formula_69 the probability distributions obtained by measuring formula_70 with the POVM formula_64.
This shows that the square root of the fidelity between two quantum states is upper bounded by the Bhattacharyya coefficient between the corresponding probability distributions in any possible POVM. Indeed, it is more generally true that formula_71 where formula_72, and the minimum is taken over all possible POVMs. More specifically, one can prove that the minimum is achieved by the projective POVM corresponding to measuring in the eigenbasis of the operator formula_33.
Proof of inequality.
As was previously shown, the square root of the fidelity can be written as formula_73which is equivalent to the existence of a unitary operator formula_63 such that
formula_74Remembering that formula_66 holds true for any POVM, we can then writeformula_75where in the last step we used Cauchy-Schwarz inequality as in formula_76.
Behavior under quantum operations.
The fidelity between two states can be shown to never decrease when a non-selective quantum operation formula_77 is applied to the states:formula_78 for any trace-preserving completely positive map formula_77.
Relationship to trace distance.
We can define the trace distance between two matrices A and B in terms of the trace norm by
formula_79
When A and B are both density operators, this is a quantum generalization of the statistical distance. This is relevant because the trace distance provides upper and lower bounds on the fidelity as quantified by the "Fuchs–van de Graaf inequalities",
formula_80
Often the trace distance is easier to calculate or bound than the fidelity, so these relationships are quite useful. In the case that at least one of the states is a pure state Ψ, the lower bound can be tightened.
formula_81
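A quick numerical check of the Fuchs–van de Graaf inequalities, using the same made-up qubit states as in the earlier sketches and recomputing the fidelity so that the snippet is self-contained:

    import numpy as np
    from scipy.linalg import sqrtm

    rho = np.array([[0.7, 0.2], [0.2, 0.3]])
    sigma = np.array([[0.9, 0.1], [0.1, 0.1]])

    F = np.real(np.trace(sqrtm(sqrtm(rho) @ sigma @ sqrtm(rho)))) ** 2
    D = 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))  # trace distance of Hermitian matrices

    print(1 - np.sqrt(F) <= D <= np.sqrt(1 - F))  # both bounds hold for these states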
Uhlmann's theorem.
We saw that for two pure states, their fidelity coincides with the overlap. Uhlmann's theorem generalizes this statement to mixed states, in terms of their purifications:
Theorem Let ρ and σ be density matrices acting on C^n. Let ρ^1⁄2 be the unique positive square root of ρ and
formula_82
be a purification of ρ (therefore formula_83 is an orthonormal basis), then the following equality holds:
formula_84
where formula_85 is a purification of σ. Therefore, in general, the fidelity is the maximum overlap between purifications.
Sketch of proof.
A simple proof can be sketched as follows. Let formula_86 denote the vector
formula_87
and σ^1⁄2 be the unique positive square root of σ. We see that, due to the unitary freedom in square root factorizations and choosing orthonormal bases, an arbitrary purification of σ is of the form
formula_88
where "V"i's are unitary operators. Now we directly calculate
formula_89
But in general, for any square matrix "A" and unitary "U", it is true that |tr("AU")| ≤ tr(("A"*"A")^1⁄2). Furthermore, equality is achieved if "U"* is the unitary operator in the polar decomposition of "A". Uhlmann's theorem follows directly from this.
Proof with explicit decompositions.
We will here provide an alternative, explicit way to prove Uhlmann's theorem.
Let formula_7 and formula_8 be purifications of formula_0 and formula_1, respectively. To start, let us show that formula_90.
The general form of the purifications of the states is:formula_91where formula_92 are the eigenvectors of formula_70, and formula_93 are arbitrary orthonormal bases. The overlap between the purifications isformula_94where the unitary matrix formula_63 is defined asformula_95The conclusion is now reached by using the inequality formula_96: formula_97Note that this inequality is the triangle inequality applied to the singular values of the matrix. Indeed, for a generic matrix formula_98and unitary formula_99, we haveformula_100where formula_101 are the (always real and non-negative) singular values of formula_102, as in the singular value decomposition. The inequality is saturated and becomes an equality when formula_103, that is, when formula_104 and thus formula_105. The above shows that formula_106 when the purifications formula_7 and formula_8 are such that formula_107. Because this choice is possible regardless of the states, we can finally conclude thatformula_108
Consequences.
Some immediate consequences of Uhlmann's theorem are
So we can see that fidelity behaves almost like a metric. This can be formalized and made useful by defining
formula_109
as the angle between the states formula_0 and formula_1. It follows from the above properties that formula_110 is non-negative, symmetric in its inputs, and is equal to zero if and only if formula_111. Furthermore, it can be proved that it obeys the triangle inequality, so this angle is a metric on the state space: the Fubini–Study metric.
|
[
{
"math_id": 0,
"text": "\\rho"
},
{
"math_id": 1,
"text": "\\sigma"
},
{
"math_id": 2,
"text": "F(\\rho, \\sigma) = \\left(\\operatorname{tr} \\sqrt{\\sqrt{\\rho} \\sigma \\sqrt{\\rho}}\\right)^2."
},
{
"math_id": 3,
"text": "\\sqrt\\rho\\sigma\\sqrt\\rho"
},
{
"math_id": 4,
"text": "\\rho=|\\psi_\\rho\\rangle\\!\\langle\\psi_\\rho|"
},
{
"math_id": 5,
"text": "\\sigma=|\\psi_\\sigma\\rangle\\!\\langle\\psi_\\sigma|"
},
{
"math_id": 6,
"text": "F(\\rho, \\sigma)=|\\langle\\psi_\\rho|\\psi_\\sigma\\rangle|^2."
},
{
"math_id": 7,
"text": "|\\psi_\\rho\\rangle"
},
{
"math_id": 8,
"text": "|\\psi_\\sigma\\rangle"
},
{
"math_id": 9,
"text": "F':=\\sqrt{F}"
},
{
"math_id": 10,
"text": "F"
},
{
"math_id": 11,
"text": "F'"
},
{
"math_id": 12,
"text": "X,Y"
},
{
"math_id": 13,
"text": "(1, ..., n)"
},
{
"math_id": 14,
"text": "p = (p_1,p_2,\\ldots,p_n)"
},
{
"math_id": 15,
"text": "q = (q_1,q_2,\\ldots,q_n)"
},
{
"math_id": 16,
"text": "X"
},
{
"math_id": 17,
"text": "Y"
},
{
"math_id": 18,
"text": "F(X,Y) = \\left(\\sum _i \\sqrt{p_i q_i}\\right)^2"
},
{
"math_id": 19,
"text": "F(X,Y)"
},
{
"math_id": 20,
"text": "(\\sqrt{p_1}, \\ldots ,\\sqrt{p_n})"
},
{
"math_id": 21,
"text": "(\\sqrt{q_1}, \\ldots ,\\sqrt{q_n})"
},
{
"math_id": 22,
"text": "F(X,Y) = 1"
},
{
"math_id": 23,
"text": "p = q"
},
{
"math_id": 24,
"text": "0 \\leq F(X,Y) \\leq 1"
},
{
"math_id": 25,
"text": "\\sum _i \\sqrt{p_i q_i}"
},
{
"math_id": 26,
"text": "\\{F_i\\} "
},
{
"math_id": 27,
"text": "i"
},
{
"math_id": 28,
"text": "p_i = \\operatorname{tr}( \\rho F_i )"
},
{
"math_id": 29,
"text": "q_i = \\operatorname{tr}( \\sigma F_i )"
},
{
"math_id": 30,
"text": "p"
},
{
"math_id": 31,
"text": "q"
},
{
"math_id": 32,
"text": "F(\\rho,\\sigma) = \\min_{\\{F_i\\}} F(X,Y) = \\min_{\\{F_i\\}} \\left(\\sum _i \\sqrt{\\operatorname{tr}( \\rho F_i ) \\operatorname{tr}( \\sigma F_i )}\\right)^{2}."
},
{
"math_id": 33,
"text": "\\sigma^{-1/2}|\\sqrt\\sigma\\sqrt\\rho|\\sigma^{-1/2}"
},
{
"math_id": 34,
"text": "F(\\rho, \\sigma) = \\left(\\operatorname{tr} \\sqrt{\\sqrt{\\rho} \\sigma \\sqrt{\\rho}}\\right)^2."
},
{
"math_id": 35,
"text": "F(\\rho, \\sigma)= \\lVert \\sqrt{\\rho} \\sqrt{\\sigma} \\rVert_\\operatorname{tr}^2 = \\left(\\operatorname{tr}|\\sqrt\\rho\\sqrt\\sigma|\\right)^2,"
},
{
"math_id": 36,
"text": "|A|\\equiv \\sqrt{A^\\dagger A}"
},
{
"math_id": 37,
"text": "F(\\rho, \\sigma)= \\sum_j\\sqrt{\\lambda_j},"
},
{
"math_id": 38,
"text": "\\lambda_j"
},
{
"math_id": 39,
"text": "\\sqrt{\\rho} \\sigma \\sqrt{\\rho}"
},
{
"math_id": 40,
"text": "\\rho\\sigma"
},
{
"math_id": 41,
"text": "F(\\rho, \\sigma)= \\left(\\operatorname{tr}\\sqrt{\\rho\\sigma}\\right)^2"
},
{
"math_id": 42,
"text": "F(\\rho,\\sigma)=\\operatorname{tr}(\\sigma\\rho)=\\langle \\psi_\\rho|\\sigma|\\psi_\\rho\\rangle."
},
{
"math_id": 43,
"text": "\\sqrt\\rho=\\rho"
},
{
"math_id": 44,
"text": "\nF(\\rho, \\sigma) = \\left(\\operatorname{tr} \\sqrt{ | \\psi_\\rho \\rangle \\langle \\psi_\\rho | \\sigma | \\psi_\\rho \\rangle \\langle \\psi_\\rho |} \\right)^2\n= \\langle \\psi_\\rho | \\sigma | \\psi_\\rho \\rangle \\left(\\operatorname{tr} \\sqrt{ | \\psi_\\rho \\rangle \\langle \\psi_\\rho |} \\right)^2\n= \\langle \\psi_\\rho | \\sigma | \\psi_\\rho \\rangle.\n"
},
{
"math_id": 45,
"text": "F(\\rho,\\sigma)=F(\\sigma,\\rho)"
},
{
"math_id": 46,
"text": "0\\le F(\\rho,\\sigma) \\le 1"
},
{
"math_id": 47,
"text": "F(\\rho,\\rho)=1"
},
{
"math_id": 48,
"text": "F(\\rho,\\sigma) =\n\\left[\\operatorname{tr}\\sqrt{\\rho\\sigma}\\right]^2 =\n\\left(\\sum_k \\sqrt{p_k q_k} \\right)^2 =\nF(\\boldsymbol p, \\boldsymbol q),"
},
{
"math_id": 49,
"text": "p_k, q_k"
},
{
"math_id": 50,
"text": "\\rho,\\sigma"
},
{
"math_id": 51,
"text": "[\\rho,\\sigma]=0"
},
{
"math_id": 52,
"text": " \\rho = \\sum_i p_i | i \\rangle \\langle i |\n\\text{ and }\n\\sigma = \\sum_i q_i | i \\rangle \\langle i |,"
},
{
"math_id": 53,
"text": " \\operatorname{tr}\\sqrt{\\rho\\sigma} =\n\\operatorname{tr}\\left(\\sum_k \\sqrt{p_k q_k} |k\\rangle\\!\\langle k|\\right) =\n\\sum_k \\sqrt{p_k q_k}."
},
{
"math_id": 54,
"text": "F(\\rho, \\sigma) = \\operatorname{tr}(\\rho\\sigma)+2\\sqrt{\\det(\\rho)\\det(\\sigma)}."
},
{
"math_id": 55,
"text": "M=\\sqrt{\\rho}\\sigma\\sqrt{\\rho}"
},
{
"math_id": 56,
"text": "\\operatorname{tr}\\sqrt{M}=\\sqrt{\\lambda_1}+\\sqrt{\\lambda_2}"
},
{
"math_id": 57,
"text": "\\lambda_1"
},
{
"math_id": 58,
"text": "\\lambda_2"
},
{
"math_id": 59,
"text": "M"
},
{
"math_id": 60,
"text": "F(\\rho,\\sigma) = \\operatorname{tr}(\\rho\\sigma)"
},
{
"math_id": 61,
"text": "\\mathrm{Det}(\\rho) = 0"
},
{
"math_id": 62,
"text": "\\; F(\\rho, \\sigma) = F(U \\rho \\; U^*, U \\sigma U^*) "
},
{
"math_id": 63,
"text": "U"
},
{
"math_id": 64,
"text": "\\{E_k\\}_k"
},
{
"math_id": 65,
"text": "E_k"
},
{
"math_id": 66,
"text": "\\sum_k E_k=I"
},
{
"math_id": 67,
"text": "\n \\sqrt{F(\\rho,\\sigma)} \\le \\sum_k \\sqrt{\\operatorname{tr}(E_k\\rho)}\\sqrt{\\operatorname{tr}(E_k\\sigma)} \\equiv \\sum_k \\sqrt{p_k q_k},\n"
},
{
"math_id": 68,
"text": "p_k \\equiv \\operatorname{tr}(E_k \\rho)"
},
{
"math_id": 69,
"text": "q_k \\equiv \\operatorname{tr}(E_k \\sigma)"
},
{
"math_id": 70,
"text": "\\rho,\\ \\sigma"
},
{
"math_id": 71,
"text": "F(\\rho,\\sigma)=\\min_{\\{E_k\\}} F(\\boldsymbol p,\\boldsymbol q),"
},
{
"math_id": 72,
"text": "F(\\boldsymbol p, \\boldsymbol q)\\equiv\\left(\\sum_k\\sqrt{p_k q_k}\\right)^2"
},
{
"math_id": 73,
"text": "\\sqrt{F(\\rho,\\sigma)}=\\operatorname{tr}|\\sqrt\\rho\\sqrt\\sigma|,"
},
{
"math_id": 74,
"text": "\\sqrt{F(\\rho,\\sigma)}=\\operatorname{tr}(\\sqrt\\rho\\sqrt\\sigma U)."
},
{
"math_id": 75,
"text": "\\sqrt{F(\\rho,\\sigma)}=\\operatorname{tr}(\\sqrt\\rho\\sqrt\\sigma U)=\n\\sum_k\\operatorname{tr}(\\sqrt\\rho E_k \\sqrt\\sigma U)=\\sum_k\\operatorname{tr}(\\sqrt\\rho \\sqrt{E_k} \\sqrt{E_k}\\sqrt\\sigma U) \\le\n\\sum_k\\sqrt{\\operatorname{tr}(E_k\\rho)\\operatorname{tr}(E_k \\sigma)},"
},
{
"math_id": 76,
"text": "|\\operatorname{tr}(A^\\dagger B)|^2\\le\\operatorname{tr}(A^\\dagger A)\\operatorname{tr}(B^\\dagger B)"
},
{
"math_id": 77,
"text": "\\mathcal E"
},
{
"math_id": 78,
"text": "F(\\mathcal E(\\rho),\\mathcal E(\\sigma)) \\ge\nF(\\rho,\\sigma),"
},
{
"math_id": 79,
"text": "\nD(A,B) = \\frac{1}{2}\\| A-B\\|_{\\rm tr} \\, .\n"
},
{
"math_id": 80,
"text": "\n1-\\sqrt{F(\\rho,\\sigma)} \\le D(\\rho,\\sigma) \\le\\sqrt{1-F(\\rho,\\sigma)} \\, .\n"
},
{
"math_id": 81,
"text": "\n1-F(\\psi,\\rho) \\le D(\\psi,\\rho) \\, .\n"
},
{
"math_id": 82,
"text": "\n| \\psi _{\\rho} \\rangle = \\sum_{i=1}^n (\\rho^{{1}/{2}} | e_i \\rangle) \\otimes | e_i \\rangle \\in \\mathbb{C}^n \\otimes \\mathbb{C}^n \n"
},
{
"math_id": 83,
"text": "\\textstyle \\{|e_i\\rangle\\}"
},
{
"math_id": 84,
"text": "F(\\rho, \\sigma) = \\max_{|\\psi_{\\sigma} \\rangle} | \\langle \\psi _{\\rho}| \\psi _{\\sigma} \\rangle |^2"
},
{
"math_id": 85,
"text": "| \\psi _{\\sigma} \\rangle"
},
{
"math_id": 86,
"text": "\\textstyle |\\Omega\\rangle"
},
{
"math_id": 87,
"text": "| \\Omega \\rangle= \\sum_{i=1}^n | e_i \\rangle \\otimes | e_i \\rangle "
},
{
"math_id": 88,
"text": "| \\psi_{\\sigma} \\rangle = ( \\sigma^{{1}/{2}} V_1 \\otimes V_2 ) | \\Omega \\rangle "
},
{
"math_id": 89,
"text": "\n| \\langle \\psi _{\\rho}| \\psi _{\\sigma} \\rangle |^2\n= | \\langle \\Omega | ( \\rho^{{1}/{2}} \\otimes I) ( \\sigma^{{1}/{2}} V_1 \\otimes V_2 ) | \\Omega \\rangle |^2\n= | \\operatorname{tr} ( \\rho^{{1}/{2}} \\sigma^{{1}/{2}} V_1 V_2^T )|^2.\n"
},
{
"math_id": 90,
"text": "|\\langle\\psi_\\rho|\\psi_\\sigma\\rangle|\\le\\operatorname{tr}|\\sqrt\\rho\\sqrt\\sigma|"
},
{
"math_id": 91,
"text": "\\begin{align}\n |\\psi_\\rho\\rangle &=\\sum_k\\sqrt{\\lambda_k}|\\lambda_k\\rangle\\otimes|u_k\\rangle, \\\\\n |\\psi_\\sigma\\rangle &=\\sum_k\\sqrt{\\mu_k}|\\mu_k\\rangle\\otimes|v_k\\rangle,\n\\end{align}"
},
{
"math_id": 92,
"text": "|\\lambda_k\\rangle, |\\mu_k\\rangle"
},
{
"math_id": 93,
"text": "\\{u_k\\}_k, \\{v_k\\}_k"
},
{
"math_id": 94,
"text": "\\langle\\psi_\\rho|\\psi_\\sigma\\rangle =\n\\sum_{jk}\\sqrt{\\lambda_j\\mu_k} \\langle\\lambda_j|\\mu_k\\rangle\\,\\langle u_j|v_k\\rangle =\n\\operatorname{tr}\\left(\\sqrt\\rho\\sqrt\\sigma U\\right),"
},
{
"math_id": 95,
"text": "U=\\left(\\sum_k |\\mu_k\\rangle\\!\\langle u_k| \\right)\\,\\left(\\sum_j |v_j\\rangle\\!\\langle \\lambda_j|\\right)."
},
{
"math_id": 96,
"text": "|\\operatorname{tr}(AU)|\\le \\operatorname{tr}(\\sqrt{A^\\dagger A})\\equiv\\operatorname{tr}|A|"
},
{
"math_id": 97,
"text": "|\\langle\\psi_\\rho|\\psi_\\sigma\\rangle|=\n|\\operatorname{tr}(\\sqrt\\rho\\sqrt\\sigma U)| \\le\n\\operatorname{tr}|\\sqrt\\rho\\sqrt\\sigma|."
},
{
"math_id": 98,
"text": "A\\equiv \\sum_j s_j(A)|a_j\\rangle\\!\\langle b_j|"
},
{
"math_id": 99,
"text": "U=\\sum_j |b_j\\rangle\\!\\langle w_j|"
},
{
"math_id": 100,
"text": "\\begin{align}\n|\\operatorname{tr}(AU)| &=\n\\left|\\operatorname{tr}\\left(\\sum_j s_j(A)|a_j\\rangle\\!\\langle b_j| \\,\\,\\sum_k |b_k\\rangle\\!\\langle w_k| \\right)\\right| \\\\ &= \n\\left|\\sum_j s_j(A)\\langle \nw_j|a_j\\rangle\\right|\\\\ &\\le\n\\sum_j s_j(A) \\,|\\langle w_j|a_j\\rangle| \\\\\n&\\le\n\\sum_j s_j(A) \\\\\n&= \\operatorname{tr}|A|,\n\\end{align}"
},
{
"math_id": 101,
"text": "s_j(A)\\ge 0"
},
{
"math_id": 102,
"text": "A"
},
{
"math_id": 103,
"text": "\\langle w_j|a_j\\rangle=1"
},
{
"math_id": 104,
"text": "U=\\sum_k |b_k\\rangle\\!\\langle a_k|,"
},
{
"math_id": 105,
"text": "AU=\\sqrt{AA^\\dagger}\\equiv |A|"
},
{
"math_id": 106,
"text": "|\\langle\\psi_\\rho|\\psi_\\sigma\\rangle|=\n\\operatorname{tr}|\\sqrt\\rho\\sqrt\\sigma|"
},
{
"math_id": 107,
"text": "\\sqrt\\rho\\sqrt\\sigma U=|\\sqrt\\rho\\sqrt\\sigma|"
},
{
"math_id": 108,
"text": "\\operatorname{tr}|\\sqrt\\rho\\sqrt\\sigma|=\\max|\\langle\\psi_\\rho|\\psi_\\sigma\\rangle|."
},
{
"math_id": 109,
"text": " \\cos^2 \\theta_{\\rho\\sigma} = F(\\rho,\\sigma) \\,"
},
{
"math_id": 110,
"text": "\\theta_{\\rho\\sigma}"
},
{
"math_id": 111,
"text": "\\rho = \\sigma"
}
] |
https://en.wikipedia.org/wiki?curid=5698171
|
569850
|
Roller chain
|
Type of chain drive
Roller chain or bush roller chain is the type of chain drive most commonly used for transmission of mechanical power on many kinds of domestic, industrial and agricultural machinery, including conveyors, wire- and tube-drawing machines, printing presses, cars, motorcycles, and bicycles. It consists of a series of short cylindrical rollers held together by side links. It is driven by a toothed wheel called a sprocket. It is a simple, reliable, and efficient means of power transmission.
Sketches by Leonardo da Vinci in the 16th century show a chain with a roller bearing. In 1800, James Fussell patented a roller chain in the course of developing his balance lock, and in 1880 Hans Renold patented a bush roller chain.
Construction.
There are two types of links alternating in the bush roller chain. The first type is inner links, having two inner plates held together by two sleeves or bushings upon which rotate two rollers. Inner links alternate with the second type, the outer links, consisting of two outer plates held together by pins passing through the bushings of the inner links. The "bushingless" roller chain is similar in operation though not in construction; instead of separate bushings or sleeves holding the inner plates together, the plate has a tube stamped into it protruding from the hole which serves the same purpose. This has the advantage of removing one step in assembly of the chain.
The roller chain design reduces friction compared to simpler designs, resulting in higher efficiency and less wear. The original power transmission chain varieties lacked rollers and bushings, with both the inner and outer plates held by pins which directly contacted the sprocket teeth; however this configuration exhibited extremely rapid wear of both the sprocket teeth and the plates where they pivoted on the pins. This problem was partially solved by the development of bushed chains, with the pins holding the outer plates passing through bushings or sleeves connecting the inner plates. This distributed the wear over a greater area; however, the teeth of the sprockets still wore more rapidly than is desirable, due to the sliding friction against the bushings. The addition of rollers surrounding the bushing sleeves of the chain provided rolling contact with the teeth of the sprockets, resulting in excellent resistance to wear of both sprockets and chain. Friction is also very low, as long as the chain is sufficiently lubricated. Continuous, clean lubrication of roller chains is of primary importance for efficient operation, as is correct tensioning.
Lubrication.
Many driving chains (for example, in factory equipment, or driving a camshaft inside an internal combustion engine) operate in clean environments, and thus the wearing surfaces (that is, the pins and bushings) are safe from precipitation and airborne grit, many even in a sealed environment such as an oil bath. Some roller chains are designed to have o-rings built into the space between the outside link plate and the inside roller link plates. Chain manufacturers began to include this feature in 1971 after the application was invented by Joseph Montano while working for Whitney Chain of Hartford, Connecticut. O-rings were included as a way to improve lubrication to the links of power transmission chains, a service that is vitally important to extending their working life. These rubber fixtures form a barrier that holds factory applied lubricating grease inside the pin and bushing wear areas. Further, the rubber o-rings prevent dirt and other contaminants from entering inside the chain linkages, where such particles would otherwise cause significant wear.
There are also many chains that have to operate in dirty conditions, and for size or operational reasons cannot be sealed. Examples include chains on farm equipment, bicycles, and chain saws. These chains will necessarily have relatively high rates of wear.
Many oil-based lubricants attract dirt and other particles, eventually forming an abrasive paste that will compound wear on chains. This problem can be reduced by use of a "dry" PTFE spray, which forms a solid film after application and repels both particles and moisture.
Motorcycle chain lubrication.
Chains operating at high speeds comparable to those on motorcycles should be used in conjunction with an oil bath. For modern motorcycles this is not possible, and most motorcycle chains run unprotected. Thus, motorcycle chains tend to wear very quickly relative to other applications. They are subject to extreme forces and are exposed to rain, dirt, sand and road salt.
Motorcycle chains are part of the drive train to transmit the motor power to the back wheel. Properly lubricated chains can reach an efficiency of 98% or greater in the transmission. Unlubricated chains will significantly decrease performance and increase chain and sprocket wear.
Two types of aftermarket lubricants are available for motorcycle chains: spray on lubricants and oil drip feed systems.
Variants.
If the chain is not being used for a high wear application (for instance if it is just transmitting motion from a hand-operated lever to a control shaft on a machine, or a sliding door on an oven), then one of the simpler types of chain may still be used. Conversely, where extra strength but the smooth drive of a smaller pitch is required, the chain may be "siamesed"; instead of just two rows of plates on the outer sides of the chain, there may be three ("duplex"), four ("triplex"), or more rows of plates running parallel, with bushings and rollers between each adjacent pair, and the same number of rows of teeth running in parallel on the sprockets to match. Timing chains on automotive engines, for example, typically have multiple rows of plates called strands.
Roller chain is made in several sizes, the most common American National Standards Institute (ANSI) standards being 40, 50, 60, and 80. The first digits indicate the pitch of the chain in eighths of an inch, with the last digit being 0 for standard chain, 1 for lightweight chain, and 5 for bushed chain with no rollers. Thus, a chain with half-inch pitch is a No. 40 while a No. 160 sprocket has teeth spaced 2 inches apart, etc. Metric pitches are expressed in sixteenths of an inch; thus a metric No. 8 chain (08B-1) is equivalent to an ANSI No. 40. Most roller chain is made from plain carbon or alloy steel, but stainless steel is used in food processing machinery or other places where lubrication is a problem, and nylon or brass are occasionally seen for the same reason.
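As a small worked example of this numbering scheme, the following sketch decodes an ANSI chain number into its pitch and suffix; the function name and the interpretation of the suffix digits simply restate the description above.

def ansi_chain_pitch(chain_number):
    """Return (pitch in inches, suffix digit) for an ANSI roller chain number."""
    digits = str(chain_number)
    pitch_in_eighths = int(digits[:-1])   # leading digits: pitch in eighths of an inch
    suffix = int(digits[-1])              # 0 standard, 1 lightweight, 5 rollerless bushed chain
    return pitch_in_eighths / 8.0, suffix

print(ansi_chain_pitch(40))    # (0.5, 0): half-inch pitch, standard chain
print(ansi_chain_pitch(41))    # (0.5, 1): half-inch pitch, lightweight chain
print(ansi_chain_pitch(160))   # (2.0, 0): 2-inch pitch, as noted above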
Roller chain is ordinarily hooked up using a master link (also known as a "connecting link"), which typically has one pin held by a horseshoe clip rather than friction fit, allowing it to be inserted or removed with simple tools. Chain with a removable link or pin is also known as "cottered chain", which allows the length of the chain to be adjusted. Half links (also known as "offsets") are available and are used to increase the length of the chain by a single roller. Riveted roller chain has the master link (also known as a "connecting link") "riveted" or mashed on the ends. These pins are made to be durable and are not removable.
Horseshoe clip.
A "horseshoe clip" is the U-shaped spring steel fitting that holds the side-plate of the joining (or "master") link formerly essential to complete the loop of a roller chain. The clip method is losing popularity as more and more chains are manufactured as endless loops not intended for maintenance. Modern motorcycles are often fitted with an endless chain but in the increasingly rare circumstances of the chain wearing out and needing to be replaced, a length of chain and a joining link (with horseshoe clip) will be provided as a spare. Changes in motorcycle suspension are tending to make this use less prevalent.
Common on older motorcycles and older bicycles (e.g. those with hub gears) this clip method cannot be used on bicycles fitted with derailleur gears, as the clip will tend to catch on the gear-changers.
In many cases, an endless chain cannot be replaced easily since it is linked into the frame of the machine (this is the case on the traditional bicycle, amongst other places). However, in some cases, a joining link with horseshoe clip cannot be used or is not preferred in the application either. In this case, a "soft link" is used, placed with a chain riveter and relying solely on friction. With modern materials and tools and skilled application this is a permanent repair having almost the same strength and life of the unbroken chain.
Wear.
The effect of wear on a roller chain is to increase the pitch (spacing of the links), causing the chain to grow longer. Note that this is due to wear at the pivoting pins and bushes, not from actual stretching of the metal (as does happen to some flexible steel components such as the hand-brake cable of a motor vehicle).
With modern chains it is unusual for a chain (other than that of a bicycle) to wear until it breaks, since a worn chain leads to the rapid onset of wear on the teeth of the sprockets, with ultimate failure being the loss of all the teeth on the sprocket. The sprockets (in particular the smaller of the two) suffer a grinding motion that puts a characteristic hook shape into the driven face of the teeth. (This effect is made worse by an improperly tensioned chain, but is unavoidable no matter what care is taken.) The worn teeth (and chain) no longer provide smooth transmission of power, and this may become evident from the noise, the vibration or (in car engines using a timing chain) the variation in ignition timing seen with a timing light. Both sprockets and chain should be replaced in these cases, since a new chain on worn sprockets will not last long. However, in less severe cases it may be possible to save the larger of the two sprockets, since it is always the smaller one that suffers the most wear. Only in very light-weight applications such as a bicycle, or in extreme cases of improper tension, will the chain normally jump off the sprockets.
The lengthening due to wear of a chain is calculated by the following formula:
formula_0
M = the length of a number of links measured
S = the number of links measured
P = Pitch
In industry, it is usual to monitor the movement of the chain tensioner (whether manual or automatic) or the exact length of a drive chain (one rule of thumb is to replace a roller chain which has elongated 3% on an adjustable drive or 1.5% on a fixed-center drive). A simpler method, particularly suitable for the cycle or motorcycle user, is to attempt to pull the chain away from the larger of the two sprockets, whilst ensuring the chain is taut. Any significant movement (e.g. making it possible to see through a gap) probably indicates a chain worn up to and beyond the limit. Sprocket damage will result if the problem is ignored. Sprocket wear cancels this effect, and may mask chain wear.
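The elongation formula and the rule-of-thumb limits just quoted translate directly into a short check; a rough sketch (the example measurement is invented for illustration):

def chain_elongation_percent(measured_length, links_measured, pitch):
    """Percent elongation: ((M - S*P) / (S*P)) * 100."""
    nominal_length = links_measured * pitch
    return (measured_length - nominal_length) / nominal_length * 100.0

def needs_replacement(elongation_percent, adjustable_drive=True):
    # Rule of thumb from this section: 3% on an adjustable drive, 1.5% on a fixed-center drive.
    limit = 3.0 if adjustable_drive else 1.5
    return elongation_percent >= limit

# Example: 24 links of half-inch-pitch chain measure 12.2 inches instead of the nominal 12.
wear = chain_elongation_percent(12.2, 24, 0.5)
print(round(wear, 2))                                    # 1.67 (percent)
print(needs_replacement(wear, adjustable_drive=True))    # False: still under the 3% limit
print(needs_replacement(wear, adjustable_drive=False))   # True: over the 1.5% fixed-center limit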
Bicycle chain wear.
The lightweight chain of a bicycle with derailleur gears can snap (or rather, come apart at the side-plates, since it is normal for the "riveting" to fail first) because the pins inside are not cylindrical but barrel-shaped. Contact between the pin and the bushing is not the regular line of a cylindrical pin, but a point, which allows the chain's pins to work their way through the bushing, and finally the roller, ultimately causing the chain to snap. This form of construction is necessary because the gear-changing action of this form of transmission requires the chain to both bend sideways and twist, but such movement is possible given the flexibility of such a narrow chain and its relatively large free lengths on a bicycle.
Chain failure is much less of a problem on hub-geared systems since the chainline does not bend, so the parallel pins have a much bigger wearing surface in contact with the bush. The hub-gear system also allows complete enclosure, a great aid to lubrication and protection from grit.
Chain strength.
The most common measure of a roller chain's strength is tensile strength. Tensile strength represents how much load a chain can withstand under a one-time load before breaking. Just as important as tensile strength is a chain's fatigue strength. The critical factors in a chain's fatigue strength are the quality of steel used to manufacture the chain, the heat treatment of the chain components, the quality of the pitch hole fabrication of the linkplates, and the type of shot plus the intensity of shot peen coverage on the linkplates. Other factors can include the thickness of the linkplates and the design (contour) of the linkplates. The rule of thumb for roller chain operating on a continuous drive is for the chain load not to exceed 1/6 or 1/9 of the chain's tensile strength, depending on the type of master links used (press-fit vs. slip-fit). Roller chains operating on a continuous drive beyond these thresholds can and typically do fail prematurely via linkplate fatigue failure.
The standard minimum ultimate strength of ANSI B29.1 steel chain is 12,500 × (pitch, in inches)^2, in pounds-force.
X-ring and O-Ring chains greatly decrease wear by means of internal lubricants, increasing chain life. The internal lubrication is inserted by means of a vacuum when riveting the chain together.
Chain standards.
Standards organizations (such as ANSI and ISO) maintain standards for design, dimensions, and interchangeability of transmission chains. For example, the following table shows data from ANSI standard B29.1-2011 (precision power transmission roller chains, attachments, and sprockets) developed by the American Society of Mechanical Engineers (ASME). See the references for additional information.
For mnemonic purposes, below is another presentation of key dimensions from the same standard, expressed in fractions of an inch (which was part of the thinking behind the choice of preferred numbers in the ANSI standard):
A typical bicycle chain (for derailleur gears) uses narrow 1⁄2-inch-pitch chain. The width of the chain is variable, and does not affect the load capacity. The more sprockets at the rear wheel (historically 3–6, nowadays 7–12 sprockets), the narrower the chain. Chains are sold according to the number of speeds they are designed to work with, for example, "10 speed chain". Hub gear or single speed bicycles use 1/2 x 1/8 inch chains, where 1/8 inch refers to the maximum thickness of a sprocket that can be used with the chain.
Typically chains with parallel shaped links have an even number of links, with each narrow link followed by a broad one. Chains built up with a uniform type of link, narrow at one end and broad at the other, can be made with an odd number of links, which can be an advantage when adapting to a particular chainwheel distance; on the other hand, such a chain tends to be weaker.
Roller chains made using ISO standard are sometimes called "isochains".
References.
<templatestyles src="Reflist/styles.css" />
External links.
https://www.leonardodigitale.com/en/browse/Codex-atlanticus/0987-r/
|
[
{
"math_id": 0,
"text": "\\%=((M-(S*P))/(S*P))*100"
}
] |
https://en.wikipedia.org/wiki?curid=569850
|
56990
|
Tower of Hanoi
|
Mathematical puzzle game
The Tower of Hanoi (also called The problem of Benares Temple or Tower of Brahma or Lucas' Tower and sometimes pluralized as Towers, or simply pyramid puzzle) is a mathematical game or puzzle consisting of three rods and a number of disks of various diameters, which can slide onto any rod. The puzzle begins with the disks stacked on one rod in order of decreasing size, the smallest at the top, thus approximating a conical shape. The objective of the puzzle is to move the entire stack to one of the other rods, obeying the following rules:
With three disks, the puzzle can be solved in seven moves. The minimal number of moves required to solve a Tower of Hanoi puzzle is 2^"n" − 1, where "n" is the number of disks.
Origins.
The puzzle was invented by the French mathematician Édouard Lucas, first presented in 1883 as a game discovered by "N. Claus (de Siam)" (an anagram of "Lucas d'Amiens"), and later published as a booklet in 1889 and in a posthumously-published volume of Lucas' "Récréations mathématiques". Accompanying the game was an instruction booklet, describing the game's purported origins in Tonkin, and claiming that according to legend Brahmins at a temple in Benares have been carrying out the movement of the "Sacred Tower of Brahma", consisting of sixty-four golden disks, according to the same rules as in the game, and that the completion of the tower would lead to the end of the world. Numerous variations on this legend regarding the ancient and mystical nature of the puzzle popped up almost immediately.
If the legend were true, and if the priests were able to move disks at a rate of one per second, using the smallest number of moves, it would take them 2^64 − 1 seconds or roughly 585 billion years to finish, which is about 42 times the estimated current age of the universe.
There are many variations on this legend. For instance, in some tellings, the temple is a monastery, and the priests are monks. The temple or monastery may be in various locales including Hanoi, and may be associated with any religion. In some versions, other elements are introduced, such as the fact that the tower was created at the beginning of the world, or that the priests or monks may make only one move per day.
Solution.
The puzzle can be played with any number of disks, although many toy versions have around 7 to 9 of them. The minimal number of moves required to solve a Tower of Hanoi puzzle with "n" disks is 2^"n" − 1.
Iterative solution.
A simple solution for the toy puzzle is to alternate moves between the smallest piece and a non-smallest piece. When moving the smallest piece, always move it to the next position in the same direction (to the right if the starting number of pieces is even, to the left if the starting number of pieces is odd). If there is no tower position in the chosen direction, move the piece to the opposite end, but then continue to move in the correct direction. For example, if you started with three pieces, you would move the smallest piece to the opposite end, then continue in the left direction after that. When the turn is to move the non-smallest piece, there is only one legal move. Doing this will complete the puzzle in the fewest moves.
Simpler statement of iterative solution.
The iterative solution is equivalent to repeated execution of the following sequence of steps until the goal has been achieved:
Following this approach, the stack will end up on peg B if the number of disks is odd and peg C if it is even.
Recursive solution.
The key to solving a problem recursively is to recognize that it can be broken down into a collection of smaller sub-problems, to each of which "that same general solving procedure that we are seeking" applies, and the total solution is then found in some "simple" way from those sub-problems' solutions. Each of these created sub-problems being "smaller" guarantees that the base case(s) will eventually be reached. For the Towers of Hanoi:
Assuming all "n" disks are distributed in valid arrangements among the pegs; assuming there are "m" top disks on a "source" peg, and all the rest of the disks are larger than "m", so they can be safely ignored; to move "m" disks from a source peg to a "target" peg using a "spare" peg, without violating the rules:
The full Tower of Hanoi solution then moves "n" disks from the source peg A to the target peg C, using B as the spare peg.
This approach can be given a rigorous mathematical proof with mathematical induction and is often used as an example of recursion when teaching programming.
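A minimal recursive implementation of the procedure just described might look as follows (the peg names and the move representation are arbitrary choices):

def hanoi(n, source="A", target="C", spare="B", moves=None):
    """Move n disks from source to target using spare; return the list of moves."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, source, spare, target, moves)   # move the n-1 smaller disks out of the way
        moves.append((source, target))               # move the largest disk directly
        hanoi(n - 1, spare, target, source, moves)   # move the n-1 smaller disks on top of it
    return moves

print(hanoi(3))                        # seven moves for three disks
assert len(hanoi(8)) == 2 ** 8 - 1     # 2^n - 1 moves in general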
Logical analysis of the recursive solution.
As in many mathematical puzzles, finding a solution is made easier by solving a slightly more general problem: how to move a tower of "h" (height) disks from a starting peg "f" = A (from) onto a destination peg "t" = C (to), B being the remaining third peg and assuming "t" ≠ "f". First, observe that the problem is symmetric for permutations of the names of the pegs (symmetric group "S"3). If a solution is known moving from peg A to peg C, then, by renaming the pegs, the same solution can be used for every other choice of starting and destination peg. If there is only one disk (or even none at all), the problem is trivial. If "h" = 1, then move the disk from peg A to peg C. If "h" > 1, then somewhere along the sequence of moves, the largest disk must be moved from peg A to another peg, preferably to peg C. The only situation that allows this move is when all smaller "h" − 1 disks are on peg B. Hence, first all "h" − 1 smaller disks must go from A to B. Then move the largest disk and finally move the "h" − 1 smaller disks from peg B to peg C. The presence of the largest disk does not impede any move of the "h" − 1 smaller disks and can be temporarily ignored. Now the problem is reduced to moving "h" − 1 disks from one peg to another one, first from A to B and subsequently from B to C, but the same method can be used both times by renaming the pegs. The same strategy can be used to reduce the "h" − 1 problem to "h" − 2, "h" − 3, and so on until only one disk is left. This is called recursion. This algorithm can be schematized as follows.
Identify the disks in order of increasing size by the natural numbers from 0 up to but not including "h". Hence disk 0 is the smallest one, and disk "h" − 1 the largest one.
The following is a procedure for moving a tower of "h" disks from a peg A onto a peg C, with B being the remaining third peg:
By mathematical induction, it is easily proven that the above procedure requires the minimum number of moves possible and that the produced solution is the only one with this minimal number of moves. Using recurrence relations, the exact number of moves that this solution requires can be calculated by: formula_0. This result is obtained by noting that steps 1 and 3 take formula_1 moves, and step 2 takes one move, giving formula_2.
Non-recursive solution.
The list of moves for a tower being carried from one peg onto another one, as produced by the recursive algorithm, has many regularities. When counting the moves starting from 1, the ordinal of the disk to be moved during move "m" is the number of times "m" can be divided by 2. Hence every odd move involves the smallest disk. It can also be observed that the smallest disk traverses the pegs "f", "t", "r", "f", "t", "r", etc. for odd height of the tower and traverses the pegs "f", "r", "t", "f", "r", "t", etc. for even height of the tower. This provides the following algorithm, which is easier to carry out by hand than the recursive algorithm.
In alternate moves:
For the very first move, the smallest disk goes to peg "t" if "h" is odd and to peg "r" if "h" is even.
Also observe that:
With this knowledge, a set of disks in the middle of an optimal solution can be recovered with no more state information than the positions of each disk:
Binary solution.
Disk positions may be determined more directly from the binary (base-2) representation of the move number (the initial state being move #0, with all digits 0, and the final state being with all digits 1), using the following rules:
For example, in an 8-disk Hanoi:
The source and destination pegs for the "m"th move can also be found elegantly from the binary representation of "m" using bitwise operations. To use the syntax of the C programming language, move "m" is from peg codice_0 to peg codice_1, where the disks begin on peg 0 and finish on peg 1 or 2 according to whether the number of disks is even or odd. Another formulation is from peg codice_2 to peg codice_3.
Furthermore, the disk to be moved is determined by the number of times the move count ("m") can be divided by 2 (i.e. the number of zero bits at the right), counting the first move as 1 and identifying the disks by the numbers 0, 1, 2, etc. in order of increasing size. This permits a very fast non-recursive computer implementation to find the positions of the disks after m moves without reference to any previous move or distribution of disks.
The count-trailing-zeros operation, which counts the number of consecutive zeros at the end of a binary number, gives a simple solution to the problem: the disks are numbered from zero, and at move "m", the disk whose number equals the count of trailing zeros of "m" is moved the minimal possible distance to the right (circling back around to the left as needed).
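One standard bitwise formulation of this idea (assumed here; the exact expressions referred to above are not reproduced) takes move "m" from peg ("m" & ("m" − 1)) mod 3 to peg (("m" | ("m" − 1)) + 1) mod 3, with the moved disk given by the number of trailing zeros of "m". The following sketch applies these formulas and checks them by simulating the pegs:

def hanoi_moves_bitwise(n):
    """Yield (disk, source_peg, target_peg) for moves m = 1 .. 2**n - 1.

    Disks start on peg 0 and finish on peg 1 (n even) or peg 2 (n odd)."""
    for m in range(1, 2 ** n):
        disk = (m & -m).bit_length() - 1              # number of trailing zeros of m
        yield disk, (m & (m - 1)) % 3, ((m | (m - 1)) + 1) % 3

n = 4
pegs = [list(range(n - 1, -1, -1)), [], []]           # peg 0 holds disks n-1 (bottom) .. 0 (top)
for disk, src, dst in hanoi_moves_bitwise(n):
    assert pegs[src] and pegs[src][-1] == disk        # the named disk is on top of the source peg
    assert not pegs[dst] or pegs[dst][-1] > disk      # never placed onto a smaller disk
    pegs[dst].append(pegs[src].pop())
assert pegs[1] == list(range(n - 1, -1, -1))          # n = 4 is even, so the tower ends on peg 1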
Gray-code solution.
The binary numeral system of Gray codes gives an alternative way of solving the puzzle. In the Gray system, numbers are expressed in a binary combination of 0s and 1s, but rather than being a standard positional numeral system, the Gray code operates on the premise that each value differs from its predecessor by only one bit changed.
If one counts in Gray code of a bit size equal to the number of disks in a particular Tower of Hanoi, begins at zero and counts up, then the bit changed each move corresponds to the disk to move, where the least-significant bit is the smallest disk, and the most-significant bit is the largest.
Counting moves from 1 and identifying the disks by numbers starting from 0 in order of increasing size, the ordinal of the disk to be moved during move "m" is the number of times "m" can be divided by 2.
This technique identifies which disk to move, but not where to move it to. For the smallest disk, there are always two possibilities. For the other disks there is always one possibility, except when all disks are on the same peg, but in that case either it is the smallest disk that must be moved or the objective has already been achieved. Luckily, there is a rule that does say where to move the smallest disk to. Let "f" be the starting peg, "t" the destination peg, and "r" the remaining third peg. If the number of disks is odd, the smallest disk cycles along the pegs in the order "f" → "t" → "r" → "f" → "t" → "r", etc. If the number of disks is even, this must be reversed: "f" → "r" → "t" → "f" → "r" → "t", etc.
The position of the bit change in the Gray code solution gives the size of the disk moved at each step: 1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1, ... (sequence in the OEIS), a sequence also known as the ruler function, or one more than the power of 2 within the move number. In the Wolfram Language, codice_4 gives moves for the 8-disk puzzle.
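A short sketch of this rule, using the standard binary-reflected Gray code "g"("m") = "m" xor ("m" >> 1) (this particular encoding is an assumption of the sketch), reproduces the ruler sequence quoted above:

def gray(m):
    return m ^ (m >> 1)

def disk_moved(m):
    """Index of the bit that changes between gray(m - 1) and gray(m), i.e. the disk moved at move m."""
    return (gray(m) ^ gray(m - 1)).bit_length() - 1

print([disk_moved(m) + 1 for m in range(1, 16)])
# [1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1] -- the ruler sequence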
Graphical representation.
The game can be represented by an undirected graph, the nodes representing distributions of disks and the edges representing moves. For one disk, the graph is a triangle:
The graph for two disks is three triangles connected to form the corners of a larger triangle.
A second letter is added to represent the larger disk. Clearly, it cannot initially be moved.
The topmost small triangle now represents the one-move possibilities with two disks:
The nodes at the vertices of the outermost triangle represent distributions with all disks on the same peg.
For h + 1 disks, take the graph of h disks and replace each small triangle with the graph for two disks.
For three disks the graph is:
The sides of the outermost triangle represent the shortest ways of moving a tower from one peg to another one. The edge in the middle of the sides of the largest triangle represents a move of the largest disk. The edge in the middle of the sides of each next smaller triangle represents a move of each next smaller disk. The sides of the smallest triangles represent moves of the smallest disk.
In general, for a puzzle with "n" disks, there are 3^"n" nodes in the graph; every node has three edges to other nodes, except the three corner nodes, which have two: it is always possible to move the smallest disk to one of the two other pegs, and it is possible to move one disk between those two pegs "except" in the situation where all disks are stacked on one peg. The corner nodes represent the three cases where all the disks are stacked on one peg. The diagram for "n" + 1 disks is obtained by taking three copies of the "n"-disk diagram—each one representing all the states and moves of the smaller disks for one particular position of the new largest disk—and joining them at the corners with three new edges, representing the only three opportunities to move the largest disk. The resulting figure thus has 3^("n"+1) nodes and still has three corners remaining with only two edges.
As more disks are added, the graph representation of the game will resemble a fractal figure, the Sierpiński triangle. It is clear that the great majority of positions in the puzzle will never be reached when using the shortest possible solution; indeed, if the priests of the legend are using the longest possible solution (without re-visiting any position), it will take them 3^64 − 1 moves, or more than 10^23 years.
The longest non-repetitive way for three disks can be visualized by erasing the unused edges:
Incidentally, this longest non-repetitive path can be obtained by forbidding all moves from "a" to "c".
The Hamiltonian cycle for three disks is:
The graphs clearly show that:
This gives "N""h" to be 2, 12, 1872, 6563711232, ... (sequence in the OEIS)
Variations.
Linear Hanoi.
If all moves must be between adjacent pegs (i.e. given pegs A, B, C, one cannot move directly between pegs A and C), then moving a stack of "n" disks from peg A to peg C takes 3^"n" − 1 moves. The solution uses all 3^"n" valid positions, always taking the unique move that does not undo the previous move. The position with all disks at peg B is reached halfway, i.e. after (3^"n" − 1) / 2 moves.
Cyclic Hanoi.
In Cyclic Hanoi, we are given three pegs (A, B, C), which are arranged as a circle with the clockwise and the counterclockwise directions being defined as A – B – C – A and A – C – B – A, respectively. The moving direction of the disk must be clockwise. It suffices to represent the sequence of disks to be moved. The solution can be found using two mutually recursive procedures:
To move "n" disks counterclockwise to the neighbouring target peg:
To move "n" disks clockwise to the neighbouring target peg:
Let C(n) and A(n) represent moving n disks clockwise and counterclockwise, then we can write down both formulas:
The solution for the Cyclic Hanoi has some interesting properties:
With four pegs and beyond.
Although a simple recursive solution for the three-peg version has long been known, the optimal solution for the Tower of Hanoi problem with four pegs (called Reve's puzzle) was not verified until 2014, by Bousch.
However, in the case of four or more pegs, the Frame–Stewart algorithm has been known since 1941, though without a proof of optimality.
For the formal derivation of the exact number of minimal moves required to solve the problem by applying the Frame–Stewart algorithm (and other equivalent methods), see the following paper.
For other variants of the four-peg Tower of Hanoi problem, see Paul Stockmeyer's survey paper.
The so-called Towers of Bucharest and Towers of Klagenfurt game configurations yield ternary and pentary Gray codes.
Frame–Stewart algorithm.
The Frame–Stewart algorithm is described below:
The algorithm can be described recursively:
The entire process takes formula_12 moves. Therefore, the count formula_6 should be chosen so that this quantity is minimized. In the 4-peg case, the optimal formula_6 equals formula_13, where formula_14 is the nearest integer function. For example, in the UPenn CIS 194 course on Haskell, the first assignment page lists the optimal solution for the 15-disk and 4-peg case as 129 steps, which is obtained for the above value of "k".
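The recursive description above can be evaluated directly with memoization; a rough sketch, using the classical three-peg count 2^"n" − 1 as the base case:

from functools import lru_cache

@lru_cache(maxsize=None)
def frame_stewart(n, r):
    """Moves used by the Frame-Stewart algorithm for n disks and r >= 3 pegs."""
    if n == 0:
        return 0
    if n == 1:
        return 1
    if r == 3:
        return 2 ** n - 1
    # Try every split k: move k disks aside with r pegs, n-k disks with r-1 pegs, then the k back.
    return min(2 * frame_stewart(k, r) + frame_stewart(n - k, r - 1) for k in range(1, n))

print(frame_stewart(15, 4))   # 129, the value quoted above for the 15-disk, 4-peg case
print(frame_stewart(8, 3))    # 255 = 2^8 - 1, the three-peg count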
This algorithm is presumed to be optimal for any number of pegs; its number of moves is 2^Θ("n"^(1/("r"−2))) (for fixed "r").
General shortest paths and the number 466/885.
A curious generalization of the original goal of the puzzle is to start from a given configuration of the disks where all disks are not necessarily on the same peg and to arrive in a minimal number of moves at another given configuration. In general, it can be quite difficult to compute a shortest sequence of moves to solve this problem. A solution was proposed by Andreas Hinz and is based on the observation that in a shortest sequence of moves, the largest disk that needs to be moved (obviously one may ignore all of the largest disks that will occupy the same peg in both the initial and final configurations) will move either exactly once or exactly twice.
The mathematics related to this generalized problem becomes even more interesting when one considers the "average" number of moves in a shortest sequence of moves between two initial and final disk configurations that are chosen at random. Hinz and Chan Tat-Hung independently discovered (see also
) that the average number of moves in an n-disk Tower is given by the following exact formula:
formula_15
For large enough "n", only the first and second terms do not converge to zero, so we get an asymptotic expression: formula_16, as formula_17. Thus intuitively, we could interpret the fraction formula_18 as representing the ratio of the labor one has to perform when going from a randomly chosen configuration to another randomly chosen configuration, relative to the difficulty of having to cross the "most difficult" path of length formula_19, which involves moving all the disks from one peg to another. An alternative explanation for the appearance of the constant 466/885, as well as a new and somewhat improved algorithm for computing the shortest path, was given by Romik.
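For small "n" the exact formula can be checked against a brute-force average computed by breadth-first search over all 3^"n" configurations; the sketch below is only a verification aid, and the state encoding (one peg index per disk) is an arbitrary choice:

from collections import deque
from itertools import product
from math import sqrt

def legal_moves(state):
    """Yield states reachable in one move; state[d] is the peg of disk d (disk 0 is smallest)."""
    tops = {}
    for disk, peg in enumerate(state):                # record the smallest disk on each peg
        tops.setdefault(peg, disk)
    for src, disk in tops.items():
        for dst in range(3):
            if dst != src and tops.get(dst, float("inf")) > disk:
                yield state[:disk] + (dst,) + state[disk + 1:]

def average_distance(n):
    states = list(product(range(3), repeat=n))
    total = 0
    for start in states:                              # BFS from every configuration
        dist = {start: 0}
        queue = deque([start])
        while queue:
            s = queue.popleft()
            for t in legal_moves(s):
                if t not in dist:
                    dist[t] = dist[s] + 1
                    queue.append(t)
        total += sum(dist.values())
    return total / len(states) ** 2                   # average over all ordered pairs

def average_distance_formula(n):
    a, b = (5 + sqrt(17)) / 18, (5 - sqrt(17)) / 18
    return (466 / 885 * 2 ** n - 1 / 3 - 3 / 5 * (1 / 3) ** n
            + (12 / 59 + 18 / 1003 * sqrt(17)) * a ** n
            + (12 / 59 - 18 / 1003 * sqrt(17)) * b ** n)

for n in (1, 2, 3, 4):
    print(n, average_distance(n), average_distance_formula(n))   # the two columns should agree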
Magnetic Hanoi.
In Magnetic Tower of Hanoi, each disk has two distinct sides North and South (typically colored "red" and "blue").
Disks must not be placed with the similar poles together—magnets in each disk prevent this illegal move.
Also, each disk must be flipped as it is moved.
Bicolor Towers of Hanoi.
This variation of the famous Tower of Hanoi puzzle was offered to grade 3–6 students at "2ème Championnat de France des Jeux Mathématiques et Logiques" held in July 1988.
The rules of the puzzle are essentially the same: disks are transferred between pegs one at a time. At no time may a bigger disk be placed on top of a smaller one. The difference is that now for every size there are two disks: one black and one white. Also, there are now two towers of disks of alternating colors. The goal of the puzzle is to make the towers monochrome (same color). The biggest disks at the bottom of the towers are assumed to swap positions.
Tower of Hanoy.
A variation of the puzzle has been adapted as a solitaire game with nine playing cards under the name Tower of Hanoy. It is not known whether the altered spelling of the original name is deliberate or accidental.
Applications.
The Tower of Hanoi is frequently used in psychological research on problem-solving. There also exists a variant of this task called Tower of London for neuropsychological diagnosis and treatment of disorders of executive function.
Zhang and Norman used several isomorphic (equivalent) representations of the game to study the impact of representational effects in task design. They demonstrated an impact on user performance by changing the way that the rules of the game are represented, using variations in the physical design of the game components. This knowledge has influenced the development of the TURF framework for the representation of human–computer interaction.
The Tower of Hanoi is also used as a backup rotation scheme when performing computer data backups where multiple tapes/media are involved.
As mentioned above, the Tower of Hanoi is popular for teaching recursive algorithms to beginning programming students. A pictorial version of this puzzle is programmed into the emacs editor, accessed by typing M-x hanoi. There is also a sample algorithm written in Prolog.
The Tower of Hanoi is also used as a test by neuropsychologists trying to evaluate frontal lobe deficits.
In 2010, researchers published the results of an experiment that found that the ant species "Linepithema humile" were successfully able to solve the 3-disk version of the Tower of Hanoi problem through non-linear dynamics and pheromone signals.
In 2014, scientists synthesized multilayered palladium nanosheets with a Tower of Hanoi-like structure.
In popular culture.
In the science fiction story "Now Inhale", by Eric Frank Russell, a human is held prisoner on a planet where the local custom is to make the prisoner play a game until it is won or lost before his execution. The protagonist knows that a rescue ship might take a year or more to arrive, so he chooses to play Towers of Hanoi with 64 disks. This story makes reference to the legend about the Buddhist monks playing the game until the end of the world.
In the 1966 "Doctor Who" story "The Celestial Toymaker", the eponymous villain forces the Doctor to play a ten-piece, 1,023-move Tower of Hanoi game entitled The Trilogic Game with the pieces forming a pyramid shape when stacked.
In 2007, the concept of the Towers Of Hanoi problem was used in "Professor Layton and the Diabolical Box" in puzzles 6, 83, and 84, but the disks had been changed to pancakes. The puzzle was based around a dilemma where the chef of a restaurant had to move a pile of pancakes from one plate to the other with the basic principles of the original puzzle (i.e. three plates that the pancakes could be moved onto, not being able to put a larger pancake onto a smaller one, etc.)
In the 2011 film "Rise of the Planet of the Apes", this puzzle, called in the film the "Lucas Tower", is used as a test to study the intelligence of apes.
The puzzle is featured regularly in adventure and puzzle games. Since it is easy to implement, and easily recognised, it is well suited to use as a puzzle in a larger graphical game (e.g. "" and "Mass Effect"). Some implementations use straight disks, but others disguise the puzzle in some other form. There is an arcade version by Sega.
A 15-disk version of the puzzle appears in the game "Sunless Sea" as a lock to a tomb. The player has the option to click through each move of the puzzle in order to solve it, but the game notes that it will take 32,767 moves to complete. If an especially dedicated player does click through to the end of the puzzle, it is revealed that completing the puzzle does not unlock the door.
This was first used as a challenge in "Survivor" Thailand in 2002 but rather than rings, the pieces were made to resemble a temple. Sook Jai threw the challenge to get rid of Jed even though Shii-Ann knew full well how to complete the puzzle.
The problem is featured as part of a reward challenge in a . Both players (Ozzy Lusth and Benjamin "Coach" Wade) struggled to understand how to solve the puzzle and are aided by their fellow tribe members.
In "Genshin Impact", this puzzle is shown in Faruzan's hangout quest, "Early Learning Mechanism", where she mentions seeing it as a mechanism and uses it to make a toy prototype for children. She calls it pagoda stacks.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "2^h - 1"
},
{
"math_id": 1,
"text": "T_{h-1}"
},
{
"math_id": 2,
"text": "T_h = 2T_{h-1} + 1"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "r"
},
{
"math_id": 5,
"text": "T(n,r)"
},
{
"math_id": 6,
"text": "k"
},
{
"math_id": 7,
"text": "1 \\leq k < n"
},
{
"math_id": 8,
"text": "T(k,r)"
},
{
"math_id": 9,
"text": "n-k"
},
{
"math_id": 10,
"text": "r-1"
},
{
"math_id": 11,
"text": "T(n-k,r-1)"
},
{
"math_id": 12,
"text": "2T(k,r)+T(n-k,r-1)"
},
{
"math_id": 13,
"text": "n - \\left\\lfloor\\sqrt{2n+1}\\right\\rceil + 1"
},
{
"math_id": 14,
"text": "\\left\\lfloor\\cdot\\right\\rceil"
},
{
"math_id": 15,
"text": " \\frac{466}{885}\\cdot 2^n - \\frac{1}{3} - \\frac{3}{5}\\cdot \\left(\\frac{1}{3}\\right)^n +\n\\left(\\frac{12}{59} + \\frac{18}{1003}\\sqrt{17}\\right)\\left(\\frac{5+\\sqrt{17}}{18}\\right)^n +\n\\left(\\frac{12}{59} - \\frac{18}{1003}\\sqrt{17}\\right)\\left(\\frac{5-\\sqrt{17}}{18}\\right)^n."
},
{
"math_id": 16,
"text": "466/885\\cdot 2^n - 1/3 + o(1)"
},
{
"math_id": 17,
"text": "n \\to \\infty"
},
{
"math_id": 18,
"text": "466/885\\approx 52.6\\%"
},
{
"math_id": 19,
"text": "2^n-1"
}
] |
https://en.wikipedia.org/wiki?curid=56990
|
5699217
|
Diffusion creep
|
Diffusion creep refers to the deformation of crystalline solids by the diffusion of vacancies through their crystal lattice. Diffusion creep results in plastic deformation rather than brittle failure of the material.
Diffusion creep is more sensitive to temperature than other deformation mechanisms. It becomes especially relevant at high homologous temperatures (i.e. within about a tenth of its absolute melting temperature). Diffusion creep is caused by the migration of crystalline defects through the lattice of a crystal such that when a crystal is subjected to a greater degree of compression in one direction relative to another, defects migrate to the crystal faces along the direction of compression, causing a net mass transfer that shortens the crystal in the direction of maximum compression. The migration of defects is in part due to vacancies, whose migration is equal to a net mass transport in the opposite direction.
Principle.
Crystalline materials are never perfect on a microscale. Some sites of atoms in the crystal lattice can be occupied by point defects, such as "alien" particles or vacancies. Vacancies can actually be thought of as chemical species themselves (or part of a compound species/component) that may then be treated using heterogeneous phase equilibria. The number of vacancies may also be influenced by the number of chemical impurities in the crystal lattice, if such impurities require the formation of vacancies to exist in the lattice.
A vacancy can move through the crystal structure when a neighbouring particle "jumps" into the vacancy, so that the vacancy in effect moves one site in the crystal lattice. Chemical bonds need to be broken and new bonds have to be formed during the process; therefore a certain activation energy is needed. Moving a vacancy through a crystal therefore becomes easier when the temperature is higher.
The most stable state will be when all vacancies are evenly spread through the crystal. This principle follows from Fick's law:
formula_0
In which "Jx" stands for the flux ("flow") of vacancies in direction "x"; "D""x" is a constant for the material in that direction and formula_1 is the difference in concentration of vacancies in that direction. The law is valid for all principal directions in ("x", "y", "z")-space, so the "x" in the formula can be exchanged for "y" or "z". The result will be that they will become evenly distributed over the crystal, which will result in the highest mixing entropy.
When a mechanical stress is applied to the crystal, new vacancies will be created at the sides perpendicular to the direction of the lowest principal stress. The vacancies will start moving in the direction of crystal planes perpendicular to the maximal stress. Current theory holds that the elastic strain in the neighborhood of a defect is smaller toward the axis of greatest differential compression, creating a defect chemical potential gradient (depending upon lattice strain) within the crystal that leads to net accumulation of defects at the faces of maximum compression by diffusion. A flow of vacancies is the same as a flow of particles in the opposite direction. This means a crystalline material can deform under a differential stress, by the flow of vacancies.
Highly mobile chemical components substituting for other species in the lattice can also cause a net differential mass transfer (i.e. segregation) of chemical species inside the crystal itself, often promoting shortening of the rheologically more difficult substance and enhancing deformation.
Types of diffusion creep.
Diffusion of vacancies through a crystal can happen in a number of ways. When vacancies move through the crystal (in the material sciences often called a "grain"), this is called "Nabarro–Herring creep". Another way in which vacancies can move is along the grain boundaries, a mechanism called "Coble creep".
When a crystal deforms by diffusion creep to accommodate space problems from simultaneous grain boundary sliding (the movement of whole grains along grain boundaries) this is called "granular or superplastic flow". Diffusion creep can also be simultaneous with pressure solution. Pressure solution is, like Coble creep, a mechanism in which material moves along grain boundaries. While in Coble creep the particles move by "dry" diffusion, in pressure solution they move in solution.
Flow laws.
Each plastic deformation of a material can be described by a formula in which the strain rate (formula_2) depends on the differential stress ("σ" or "σ"D), the grain size ("d") and an activation value in the form of an Arrhenius equation:
formula_3
In which "A" is the constant of diffusion, "Q" the activation energy of the mechanism, "R" the gas constant and "T" the absolute temperature (in kelvins). The exponents "n" and "m" are values for the sensitivity of the flow to stress and grain size respectively. The values of "A", "Q", "n" and "m" are different for each deformation mechanism. For diffusion creep, the value of "n" is usually around 1. The value for "m" can vary between 2 (Nabarro-Herring creep) and 3 (Coble creep). That means Coble creep is more sensitive to grain size of a material: materials with larger grains can deform less easily by Coble creep than materials with small grains.
Traces of diffusion creep.
It is difficult to find clear microscale evidence for diffusion creep in a crystalline material, since few structures have been identified as definite proof. A material that was deformed by diffusion creep can have flattened grains (grains with a so called shape-preferred orientation or SPO). Equidimensional grains with no lattice-preferred orientation (or LPO) can be an indication for superplastic flow. In materials that were deformed under very high temperatures, lobate grain boundaries may be taken as evidence for diffusion creep.
Diffusion creep is a mechanism by which the volume of the crystals can increase. Larger grain sizes can be a sign that diffusion creep was more effective in a crystalline material.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " J_x = -D_x \\frac{\\Delta C}{\\Delta x} "
},
{
"math_id": 1,
"text": "{\\Delta C}/{\\Delta x}"
},
{
"math_id": 2,
"text": "\\dot{\\epsilon}"
},
{
"math_id": 3,
"text": "\\!\\dot{\\epsilon} = Ae^\\frac{-Q}{RT} \\frac{\\sigma^n}{d^m}"
}
] |
https://en.wikipedia.org/wiki?curid=5699217
|
56993742
|
Parallel all-pairs shortest path algorithm
|
A central problem in algorithmic graph theory is the shortest path problem. Hereby, the problem of finding the shortest path between every pair of nodes is known as all-pair-shortest-paths (APSP) problem. As sequential algorithms for this problem often yield long runtimes, parallelization has shown to be beneficial in this field. In this article two efficient algorithms solving this problem are introduced.
Another variation of the problem is the single-source-shortest-paths (SSSP) problem, which also has parallel approaches: Parallel single-source shortest path algorithm.
Problem definition.
Let formula_0 be a directed graph with the set of nodes formula_1 and the set of edges formula_2. Each edge formula_3 has a weight formula_4 assigned. The goal of the all-pair-shortest-paths problem is to find the shortest path between all pairs of nodes of the graph. For these shortest paths to be well defined, it is required that the graph does not contain cycles with a negative weight.
In the remainder of the article it is assumed that the graph is represented using an adjacency matrix. We expect the output of the algorithm to be a distance matrix formula_5. In formula_5, every entry formula_6 is the weight of the shortest path in formula_7 from node formula_8 to node formula_9.
The Floyd algorithm presented later can handle negative edge weights, whereas the Dijkstra algorithm requires all edge weights to be non-negative.
Dijkstra algorithm.
The Dijkstra algorithm was originally proposed as a solver for the single-source shortest paths problem. However, the algorithm can easily be used for solving the all-pairs shortest paths problem by executing the single-source variant with each node in the role of the root node.
In pseudocode such an implementation could look as follows:
1 func DijkstraSSSP("G","v") {
2 ... "//standard SSSP-implementation here"
3 return "dv";
4 }
5
6 func DijkstraAPSP("G") {
7 "D" := |"V"|x|"V"|-Matrix
8 for "v" from 1 to |"V"| {
9 //"D[v] denotes the v-th row of D"
10 "D"["v"] := DijkstraSSSP("G","v")
11 }
12 }
In this example we assume that codice_0 takes the graph formula_7 and the root node formula_10 as input.
The result of the execution is in turn the distance list formula_11. In formula_11, the formula_8-th element stores the distance from the root node formula_10 to the node formula_8.
Therefore, the list formula_11 corresponds exactly to the formula_10-th row of the APSP distance matrix formula_5.
For this reason, codice_1 iterates over all nodes of the graph formula_7 and executes codice_0 with each as root node while storing the results in formula_5.
The runtime of codice_0 is formula_12 as we expect the graph to be represented using an adjacency matrix.
Therefore codice_1 has a total sequential runtime of formula_13.
Parallelization for up to |"V"| processors.
A trivial parallelization can be obtained by parallelizing the loop of codice_1 in line 8.
However, when using the sequential codice_0 this limits the number of processors to be used by the number of iterations executed in the loop.
Therefore, for this trivial parallelization formula_14 is an upper bound for the number of processors.
For example, let the number of processors formula_15 be equal to the number of nodes formula_14. This results in each processor executing codice_0 exactly once in parallel.
However, when there are only for example formula_16 processors available, each processor has to execute codice_0 twice.
In total this yields a runtime of formula_17, when formula_14 is a multiple of formula_15.
Consequently, the efficiency of this parallelization is perfect: Employing formula_15 processors reduces the runtime by the factor formula_15.
Another benefit of this parallelization is that no communication between the processors is required. However, it is required that every processor has enough local memory to store the entire adjacency matrix of the graph.
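A minimal sketch of this trivial parallelization, written in Python with the standard multiprocessing module (the O(|V|^2) single-source helper below is a simple illustrative implementation, not taken from the article; math.inf marks a missing edge and all weights must be non-negative):

import math
from functools import partial
from multiprocessing import Pool

def dijkstra_sssp(adj, root):
    # O(|V|^2) Dijkstra on an adjacency matrix; returns the distance list for this root
    n = len(adj)
    dist = [math.inf] * n
    dist[root] = 0.0
    visited = [False] * n
    for _ in range(n):
        u = min((d, i) for i, d in enumerate(dist) if not visited[i])[1]
        visited[u] = True
        for v in range(n):
            if not visited[v]:
                dist[v] = min(dist[v], dist[u] + adj[u][v])
    return dist

def dijkstra_apsp(adj, processes):
    # one DijkstraSSSP call per root node; every worker needs the full adjacency matrix
    with Pool(processes) as pool:
        return pool.map(partial(dijkstra_sssp, adj), range(len(adj)))

Called as dijkstra_apsp(adj, processes=len(adj)), each worker computes one row of the distance matrix, mirroring the loop in line 8; with fewer workers, each worker processes several rows. (As usual with multiprocessing, the call should be placed under an if __name__ == "__main__" guard on platforms that spawn worker processes.)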
Parallelization for more than |"V"| processors.
If more than formula_14 processors are to be used for the parallelization, it is required that multiple processors take part in a single codice_0 computation. For this reason, the parallelization is split into two levels.
For the first level the processors are split into formula_14 partitions.
Each partition is responsible for the computation of a single row of the distance matrix formula_5. This means each partition has to evaluate one codice_0 execution with a fixed root node.
With this definition each partition has a size of formula_18 processors. The partitions can perform their computations in parallel as the results of each are independent of each other. Therefore, the parallelization presented in the previous section corresponds to a partition size of 1 with formula_19 processors.
The main difficulty is the parallelization of multiple processors executing codice_0 for a single root node. The idea for this parallelization is to distribute the management of the distance list formula_11 in DijkstraSSSP within the partition. Each processor in the partition is therefore exclusively responsible for formula_20 elements of formula_11. For example, consider formula_21 and formula_22: this yields a partition size of formula_23. In this case, the first processor of each partition is responsible for formula_24 and formula_25, and the second processor is responsible for formula_26 and formula_27. Hereby, the total distance list is formula_28.
The codice_0 algorithm mainly consists of the repetition of two steps: First, the nearest node formula_29 in the distance list formula_11 has to be found. For this node the shortest path has already been found.
Afterwards the distance of all neighbors of formula_29 has to be adjusted in formula_11.
These steps have to be altered as follows, because for the parallelization formula_11 has been distributed across the partition: To find the nearest node, each processor first determines the nearest unvisited node formula_30 among the elements of formula_11 it is responsible for. The global nearest node formula_29 is then obtained by a reduce operation over these local candidates and made known to every processor of the partition by a broadcast. To adjust the distances, each processor afterwards updates only the elements of formula_11 it is responsible for, using the distance of formula_29 received in the broadcast.
The total runtime of such an iteration of codice_0 performed by a partition of size formula_31 can be derived from the subtasks performed: finding the local nearest node takes formula_32, the reduce and broadcast operations within the partition take formula_33, and adjusting the locally stored distances again takes formula_32.
For formula_14-iterations this results in a total runtime of formula_34.
After substituting the definition of formula_31 this yields the total runtime for codice_1: formula_35.
The main benefit of this parallelization is that it is not required anymore that every processor stores the entire adjacency matrix.
Instead, it is sufficient if each processor within a partition only stores the columns of the adjacency matrix for the nodes for which it is responsible.
Given a partition size of formula_31, each processor only has to store formula_20 columns of the adjacency matrix.
A downside, however, is that this parallelization comes with a communication overhead due to the reduce- and broadcast-operations.
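The following sketch (assuming mpi4py; the names and data layout are hypothetical) shows one iteration of this partitioned DijkstraSSSP. The reduce-and-broadcast step is expressed here as an allgather followed by a local minimum, which has the same effect; comm is assumed to be a communicator containing only the formula_31 processors of one partition (for example created with MPI.COMM_WORLD.Split):

import math
from mpi4py import MPI

def dijkstra_step(comm, local_nodes, adj_cols, dist, visited):
    # local_nodes: global indices of the |V|/k nodes this processor is responsible for
    # adj_cols[j]: column j of the adjacency matrix (weights w(u, j) for every node u)
    # dist: local part of d_v; visited: set of nodes whose shortest path is final
    # 1. each processor finds its local nearest unvisited node
    local_best = (math.inf, -1)
    for j in local_nodes:
        if j not in visited and dist[j] < local_best[0]:
            local_best = (dist[j], j)
    # 2. reduce + broadcast within the partition, written as allgather + local min
    best_dist, x = min(comm.allgather(local_best))
    if x == -1:
        return None                      # no reachable unvisited node left
    visited.add(x)
    # 3. each processor relaxes only the distances it is responsible for,
    #    using its locally stored adjacency-matrix columns
    for j in local_nodes:
        if j not in visited:
            dist[j] = min(dist[j], best_dist + adj_cols[j][x])
    return x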
Example.
The graph used in this example is the one presented in the image with four nodes.
The goal is to compute the distance matrix with formula_22 processors.
For this reason, the processors are divided into four partitions with two processors each.
For the illustration we focus on the partition which is responsible for the computation of the shortest paths from node A to all other nodes.
Let the processors of this partition be named p1 and p2.
The computation of the distance list across the different iterations is visualized in the second image.
The top row in the image corresponds to formula_36 after the initialization, the bottom one to formula_36 after the termination of the algorithm.
The nodes are distributed in a way that p1 is responsible for the nodes A and B, while p2 is responsible for C and D.
The distance list formula_36 is distributed accordingly.
For the second iteration, the subtasks executed are shown explicitly in the image.
Floyd–Warshall algorithm.
The Floyd–Warshall algorithm solves the all-pairs shortest paths problem for directed graphs. With the adjacency matrix of a graph as input, it calculates shorter paths iteratively. After |"V"| iterations the distance matrix contains all the shortest paths. The following describes a sequential version of the algorithm in pseudocode:
1 func Floyd_All_Pairs_SP("A") {
2 formula_37 = "A";
3 for "k" := 1 to "n" do
4 for "i" := 1 to "n" do
5 for "j" := 1 to "n" do
6 formula_38
7 }
Where "A" is the adjacency matrix, "n" = |"V"| the number of nodes and "D" the distance matrix.
Parallelization.
The basic idea for parallelizing the algorithm is to partition the matrix and split the computation between the processes. Each process is assigned to a specific part of the matrix. A common way to achieve this is 2-D block mapping. Here the matrix is partitioned into squares of the same size and each square is assigned to a process. For an formula_39 matrix and "p" processes, each process calculates a formula_40 sized part of the distance matrix. For formula_41 processes, each would be assigned exactly one element of the matrix. Because of that, the parallelization only scales up to a maximum of formula_42 processes. In the following, we denote by formula_43 the process that is assigned to the square in the "i"-th row and the "j"-th column.
As the calculation of the parts of the distance matrix depends on results from other parts, the processes have to communicate with each other and exchange data. In the following, we denote by formula_44 the element in the "i"-th row and "j"-th column of the distance matrix after the "k"-th iteration. To calculate formula_44 we need the elements formula_45, formula_46 and formula_47, as specified in line 6 of the algorithm. formula_45 is available to each process, as it was calculated by the process itself in the previous iteration.
Additionally, each process needs a part of the "k"-th row and the "k"-th column of the formula_48 matrix. The formula_46 element is held by a process in the same row, and the formula_47 element by a process in the same column, as the process that wants to compute formula_44. Each process that calculated a part of the "k"-th row of the formula_48 matrix has to send this part to all processes in its column. Each process that calculated a part of the "k"-th column of the formula_48 matrix has to send this part to all processes in its row. All these processes have to perform a one-to-all broadcast operation along the row or the column. The data dependencies are illustrated in the image below.
For the 2-D block mapping we have to modify the algorithm as follows:
1 func Floyd_All_Pairs_Parallel(formula_37) {
2 for "k" := 1 to "n" do {
3 Each process formula_43 that has a segment of the k-th row of formula_49,
broadcasts it to the formula_50 processes;
4 Each process formula_43 that has a segment of the k-th column of formula_49,
broadcasts it to the formula_51 processes;
5 Each process waits to receive the needed segments;
6 Each process computes its part of the formula_52 matrix;
7 }
8 }
In line 5 of the algorithm we have a synchronisation step to ensure that all processes have the data necessary to compute the next iteration. To improve the runtime of the algorithm we can remove the synchronisation step without affecting the correctness of the algorithm. To achieve that each process starts the computation as soon as it has the data necessary to compute its part of the matrix. This version of the algorithm is called pipelined 2-D block mapping.
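A compact sketch of the synchronous (non-pipelined) 2-D block mapping, assuming mpi4py and NumPy; it presumes that "p" is a perfect square, that "n" is divisible by formula_62, and that each process already holds its local square of formula_37 in block (dtype float64, with np.inf for missing edges):

from mpi4py import MPI
import numpy as np

def floyd_2d_block(block, n):
    comm = MPI.COMM_WORLD
    q = int(round(comm.Get_size() ** 0.5))      # process grid is q x q
    b = n // q                                  # side length of each block
    row, col = divmod(comm.Get_rank(), q)       # this process is p_{row,col}
    row_comm = comm.Split(color=row, key=col)   # all processes in grid row `row`
    col_comm = comm.Split(color=col, key=row)   # all processes in grid column `col`

    for k in range(n):
        owner = k // b                          # grid row/column holding row/column k
        # segment of the k-th row needed by this block's columns (line 3 of the algorithm)
        k_row = block[k - owner * b, :].copy() if row == owner else np.empty(b)
        col_comm.Bcast(k_row, root=owner)       # broadcast down the grid column
        # segment of the k-th column needed by this block's rows (line 4 of the algorithm)
        k_col = block[:, k - owner * b].copy() if col == owner else np.empty(b)
        row_comm.Bcast(k_col, root=owner)       # broadcast along the grid row
        # line 6 of the algorithm, applied to the whole local block at once
        block = np.minimum(block, np.add.outer(k_col, k_row))
    return block

The two split communicators realize the one-to-all broadcasts along rows and columns described in lines 3 and 4; the collective Bcast calls also provide the synchronisation of line 5.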
Runtime.
The runtime of the sequential algorithm is determined by the triple nested for loop. The computation in line 6 can be done in constant time (formula_53). Therefore, the runtime of the sequential algorithm is formula_54.
2-D block mapping.
The runtime of the parallelized algorithm consists of two parts. The time for the computation and the part for communication and data transfer between the processes.
As there is no additional computation in the algorithm and the computation is split equally among the "p" processes, we have a runtime of formula_55 for the computational part.
In each iteration of the algorithm there is a one-to-all broadcast operation performed along the row and column of the processes. There are formula_56 elements broadcast. Afterwards there is a synchronisation step performed. How much time these operations take is highly dependent on the architecture of the parallel system used. Therefore, the time needed for communication and data transfer in the algorithm is formula_57.
For the whole algorithm we have the following runtime:
formula_58
Pipelined 2-D block mapping.
For the runtime of the data transfer between the processes in the pipelined version of the algorithm, we assume that a process can transfer "k" elements to a neighbouring process in formula_59 time. In every step, there are formula_60 elements of a row or a column sent to a neighbouring process. Such a step takes formula_61 time. After formula_62 steps the relevant data of the first row and column arrive at process formula_63 (in formula_64 time).
The values of successive rows and columns follow after time formula_65 in a pipelined mode. Process formula_66 finishes its last computation after O(formula_67) + O(formula_68) time. Therefore, the additional time needed for communication in the pipelined version is formula_64.
The overall runtime for the pipelined version of the algorithm is:
formula_69
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "G=(V,E,w)"
},
{
"math_id": 1,
"text": "V"
},
{
"math_id": 2,
"text": "E\\subseteq V\\times V"
},
{
"math_id": 3,
"text": "e \\in E"
},
{
"math_id": 4,
"text": "w(e)"
},
{
"math_id": 5,
"text": "D"
},
{
"math_id": 6,
"text": "d-{i,j}"
},
{
"math_id": 7,
"text": "G"
},
{
"math_id": 8,
"text": "i"
},
{
"math_id": 9,
"text": "j"
},
{
"math_id": 10,
"text": "v"
},
{
"math_id": 11,
"text": "d_v"
},
{
"math_id": 12,
"text": "O(|V|^2)"
},
{
"math_id": 13,
"text": "O(|V|^3)"
},
{
"math_id": 14,
"text": "|V|"
},
{
"math_id": 15,
"text": "p"
},
{
"math_id": 16,
"text": "p=\\frac{|V|}{2}"
},
{
"math_id": 17,
"text": "O(|V|^2 \\cdot \\frac{|V|}{p})"
},
{
"math_id": 18,
"text": "k=\\frac{p}{|V|}"
},
{
"math_id": 19,
"text": "p=|V|"
},
{
"math_id": 20,
"text": "\\frac{|V|}{k}"
},
{
"math_id": 21,
"text": "|V|=4"
},
{
"math_id": 22,
"text": "p=8"
},
{
"math_id": 23,
"text": "k=2"
},
{
"math_id": 24,
"text": "d_{v,1}"
},
{
"math_id": 25,
"text": "d_{v,2}"
},
{
"math_id": 26,
"text": "d_{v,3}"
},
{
"math_id": 27,
"text": "d_{v,4}"
},
{
"math_id": 28,
"text": "d_v = [d_{v,1},d_{v,2},d_{v,3},d_{v,4}]"
},
{
"math_id": 29,
"text": "x"
},
{
"math_id": 30,
"text": "\\tilde{x}"
},
{
"math_id": 31,
"text": "k"
},
{
"math_id": 32,
"text": "O(\\frac{|V|}{k})"
},
{
"math_id": 33,
"text": "O(\\log k)"
},
{
"math_id": 34,
"text": "O(|V| (\\frac{|V|}{k} + \\log k))"
},
{
"math_id": 35,
"text": "O(\\frac{|V|^3}{p} + \\log p)"
},
{
"math_id": 36,
"text": "d_A"
},
{
"math_id": 37,
"text": "D^{(0)}"
},
{
"math_id": 38,
"text": "d^{(k)}_{i,j} := \\min(d^{(k-1)}_{i,j}, d^{(k-1)}_{i,k} + d^{(k-1)}_{k,j}) "
},
{
"math_id": 39,
"text": "n \\times n"
},
{
"math_id": 40,
"text": "n/ \\sqrt p \\times n/ \\sqrt p"
},
{
"math_id": 41,
"text": " p = n^2 "
},
{
"math_id": 42,
"text": "n^2"
},
{
"math_id": 43,
"text": "p_{i,j}"
},
{
"math_id": 44,
"text": "d^{(k)}_{i,j}"
},
{
"math_id": 45,
"text": "d^{(k-1)}_{i,j}"
},
{
"math_id": 46,
"text": "d^{(k-1)}_{i,k}"
},
{
"math_id": 47,
"text": "d^{(k-1)}_{k,j}"
},
{
"math_id": 48,
"text": "D^{k-1}"
},
{
"math_id": 49,
"text": "D^{(k-1)}"
},
{
"math_id": 50,
"text": "p_{*,j}"
},
{
"math_id": 51,
"text": "p_{i,*}"
},
{
"math_id": 52,
"text": "D^{(k)}"
},
{
"math_id": 53,
"text": "O(1)"
},
{
"math_id": 54,
"text": "O(n^3)"
},
{
"math_id": 55,
"text": "O(n^3 / p)"
},
{
"math_id": 56,
"text": " n / \\sqrt p "
},
{
"math_id": 57,
"text": "T_\\text{comm} = n (T_\\text{synch} + T_\\text{broadcast})"
},
{
"math_id": 58,
"text": "T = O\\left( \\frac{n^3} p\\right) + n (T_\\text{synch} + T_\\text{broadcast})"
},
{
"math_id": 59,
"text": "O(k)"
},
{
"math_id": 60,
"text": "n / \\sqrt p "
},
{
"math_id": 61,
"text": "O(n / \\sqrt p )"
},
{
"math_id": 62,
"text": "\\sqrt p "
},
{
"math_id": 63,
"text": " p_{\\sqrt p ,\\sqrt p}"
},
{
"math_id": 64,
"text": "O(n)"
},
{
"math_id": 65,
"text": "O(n^2 / p)"
},
{
"math_id": 66,
"text": "p_{\\sqrt p ,\\sqrt p}"
},
{
"math_id": 67,
"text": "n^3 / p"
},
{
"math_id": 68,
"text": "n"
},
{
"math_id": 69,
"text": "T = O\\left( \\frac{n^3} p\\right) + O(n)"
}
] |
https://en.wikipedia.org/wiki?curid=56993742
|
56998079
|
Woltjer's theorem
|
Theorem in plasma physics
In plasma physics, Woltjer's theorem states that force-free magnetic fields in a closed system with constant force-free parameter formula_0 represent the state with lowest magnetic energy in the system and that the magnetic helicity is invariant under this condition. It is named after Lodewijk Woltjer who derived it in 1958. A force-free magnetic field with flux density formula_1 satisfies
formula_2
where formula_0 is a scalar function that is constant along field lines. The helicity formula_3 invariant is given by
formula_4
where formula_3 is related to formula_5 through the vector potential formula_6 as below
formula_7
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\alpha"
},
{
"math_id": 1,
"text": "\\mathbf{B}"
},
{
"math_id": 2,
"text": "\\nabla \\times \\mathbf{B} = \\alpha \\mathbf{B}"
},
{
"math_id": 3,
"text": "\\mathcal{H}"
},
{
"math_id": 4,
"text": "\\frac{d\\mathcal{H}}{d t} = 0"
},
{
"math_id": 5,
"text": "\\mathbf{B}=\\nabla\\times \\mathbf{A}"
},
{
"math_id": 6,
"text": "\\mathbf{A}"
},
{
"math_id": 7,
"text": "\\mathcal{H} = \\int_V \\mathbf{A}\\cdot\\mathbf{B}\\ dV = \\int_V \\mathbf{A} \\cdot (\\nabla \\times \\mathbf{A}) \\ dV."
}
] |
https://en.wikipedia.org/wiki?curid=56998079
|
5700418
|
Rubin causal model
|
Method of statistical analysis
The Rubin causal model (RCM), also known as the Neyman–Rubin causal model, is an approach to the statistical analysis of cause and effect based on the framework of potential outcomes, named after Donald Rubin. The name "Rubin causal model" was first coined by Paul W. Holland. The potential outcomes framework was first proposed by Jerzy Neyman in his 1923 Master's thesis, though he discussed it only in the context of completely randomized experiments. Rubin extended it into a general framework for thinking about causation in both observational and experimental studies.
Introduction.
The Rubin causal model is based on the idea of potential outcomes. For example, a person would have a particular income at age 40 if they had attended college, whereas they would have a different income at age 40 if they had not attended college. To measure the causal effect of going to college for this person, we need to compare the outcome for the same individual in both alternative futures. Since it is impossible to see both potential outcomes at once, one of the potential outcomes is always missing. This dilemma is the "fundamental problem of causal inference."
Because of the fundamental problem of causal inference, unit-level causal effects cannot be directly observed. However, randomized experiments allow for the estimation of population-level causal effects. A randomized experiment assigns people randomly to treatments: college or no college. Because of this random assignment, the groups are (on average) equivalent, and the difference in income at age 40 can be attributed to the college assignment since that was the only difference between the groups. An estimate of the average causal effect (also referred to as the average treatment effect or ATE) can then be obtained by computing the difference in means between the treated (college-attending) and control (not-college-attending) samples.
In many circumstances, however, randomized experiments are not possible due to ethical or practical concerns. In such scenarios there is a non-random assignment mechanism. This is the case for the example of college attendance: people are not randomly assigned to attend college. Rather, people may choose to attend college based on their financial situation, parents' education, and so on. Many statistical methods have been developed for causal inference, such as propensity score matching. These methods attempt to correct for the assignment mechanism by finding control units similar to treatment units.
An extended example.
Rubin defines a causal effect:
"Intuitively, the causal effect of one treatment, E, over another, C, for a particular unit and an interval of time from formula_0 to formula_1 is the difference between what would have happened at time formula_1 if the unit had been exposed to E initiated at formula_0 and what would have happened at formula_1 if the unit had been exposed to C initiated at formula_0: 'If an hour ago I had taken two aspirins instead of just a glass of water, my headache would now be gone,' or 'because an hour ago I took two aspirins instead of just a glass of water, my headache is now gone.' Our definition of the causal effect of the E versus C treatment will reflect this intuitive meaning."
According to the RCM, the causal effect of your taking or not taking aspirin one hour ago is the difference between how your head would have felt in case 1 (taking the aspirin) and case 2 (not taking the aspirin). If your headache would remain without aspirin but disappear if you took aspirin, then the causal effect of taking aspirin is headache relief. In most circumstances, we are interested in comparing two futures, one generally termed "treatment" and the other "control". These labels are somewhat arbitrary.
Potential outcomes.
Suppose that Joe is participating in an FDA test for a new hypertension drug. An all-knowing observer would know the outcomes for Joe under both treatment (the new drug) and control (either no treatment or the current standard treatment). The causal effect, or treatment effect, is the difference between these two potential outcomes.
formula_2 is Joe's blood pressure if he takes the new pill. In general, this notation expresses the potential outcome which results from a treatment, "t", on a unit, "u". Similarly, formula_3 is the effect of a different treatment, "c" or control, on a unit, "u". In this case, formula_3 is Joe's blood pressure if he doesn't take the pill. formula_4 is the causal effect of taking the new drug.
From this table we only know the causal effect on Joe. Everyone else in the study might have an increase in blood pressure if they take the pill. However, regardless of what the causal effect is for the other subjects, the causal effect for Joe is lower blood pressure, relative to what his blood pressure would have been if he had not taken the pill.
Consider a larger sample of patients:
The causal effect is different for every subject, but the drug "works" for Joe, Mary and Bob because the causal effect is negative. Their blood pressure is lower with the drug than it would have been if each did not take the drug. For Sally, on the other hand, the drug causes an increase in blood pressure.
In order for a potential outcome to make sense, it must be possible, at least "a priori". For example, if there is no way for Joe, under any circumstance, to obtain the new drug, then formula_2 is impossible for him. It can never happen. And if formula_2 can never be observed, even in theory, then the causal effect of treatment on Joe's blood pressure is not defined.
No causation without manipulation.
The causal effect of new drug is well defined because it is the simple difference of two potential outcomes, both of which might happen. In this case, we (or something else) can manipulate the world, at least conceptually, so that it is possible that one thing or a different thing might happen.
This definition of causal effects becomes much more problematic if there is no way for one of the potential outcomes to happen, ever. For example, what is the causal effect of Joe's height on his weight? Naively, this seems similar to our other examples. We just need to compare two potential outcomes: what would Joe's weight be under the treatment (where treatment is defined as being 3 inches taller) and what would Joe's weight be under the control (where control is defined as his current height).
A moment's reflection highlights the problem: we can't increase Joe's height. There is no way to observe, even conceptually, what Joe's weight would be if he were taller because there is no way to make him taller. We can't "manipulate" Joe's height, so it makes no sense to investigate the causal effect of height on weight. Hence the slogan: "No causation without manipulation".
Stable unit treatment value assumption (SUTVA).
We require that "the [potential outcome] observation on one unit should be unaffected by the particular assignment of treatments to the other units" (Cox 1958, §2.4). This is called the stable unit treatment value assumption (SUTVA), which goes beyond the concept of independence.
In the context of our example, Joe's blood pressure should not depend on whether or not Mary receives the drug. But what if it does? Suppose that Joe and Mary live in the same house and Mary always cooks. The drug causes Mary to crave salty foods, so if she takes the drug she will cook with more salt than she would have otherwise. A high salt diet increases Joe's blood pressure. Therefore, his outcome will depend on both which treatment he received and which treatment Mary receives.
SUTVA violation makes causal inference more difficult. We can account for dependent observations by considering more treatments. We create 4 treatments by taking into account whether or not Mary receives treatment.
Recall that a causal effect is defined as the difference between two potential outcomes. In this case, there are multiple causal effects because there are more than two potential outcomes. One is the causal effect of the drug on Joe when Mary receives treatment, calculated as formula_5. Another is the causal effect on Joe when Mary does not receive treatment, calculated as formula_6. The third is the causal effect of Mary's treatment on Joe when Joe is not treated, calculated as formula_7. The treatment Mary receives has a greater causal effect on Joe than the treatment Joe himself received, and it is in the opposite direction.
By considering more potential outcomes in this way, we can cause SUTVA to hold. However, if any units other than Joe are dependent on Mary, then we must consider further potential outcomes. The greater the number of dependent units, the more potential outcomes we must consider and the more complex the calculations become (consider an experiment with 20 different people, each of whose treatment status can affect outcomes for everyone else). In order to (easily) estimate the causal effect of a single treatment relative to a control, SUTVA should hold.
Average causal effect.
Consider:
One may "calculate" the average causal effect (also known as the average treatment effect or ATE) by taking the mean of all the causal effects.
How we measure the response affects what inferences we draw. Suppose that we measure changes in blood pressure as a percentage change rather than in absolute values. Then, depending on the exact numbers, the average causal effect might be an increase in blood pressure. For example, assume that George's blood pressure would be 154 under control and 140 with treatment. The absolute size of the causal effect is −14, but the percentage difference (in terms of the treatment level of 140) is −10%. If Sarah's blood pressure is 200 under treatment and 184 under control, then the causal effect is 16 in absolute terms but 8% in terms of the treatment value. A smaller absolute change in blood pressure (−14 versus 16) yields a larger percentage change (−10% versus 8%) for George. Even though the average causal effect for George and Sarah is +1 in absolute terms, it is −1% in percentage terms.
The fundamental problem of causal inference.
The results we have seen up to this point would never be measured in practice. It is impossible, by definition, to observe the effect of more than one treatment on a subject over a specific time period. Joe cannot both take the pill and not take the pill at the same time. Therefore, the data would look something like this:
Question marks are responses that could not be observed. The "Fundamental Problem of Causal Inference" is that directly observing causal effects is impossible. However, this does not make "causal inference" impossible. Certain techniques and assumptions allow the fundamental problem to be overcome.
Assume that we have the following data:
We can infer what Joe's potential outcome under control would have been if we make an assumption of constant effect:
formula_8
and
formula_9
where T is the average treatment effect; in this case, −10.
If we wanted to infer the unobserved values, we could assume a constant effect. The following table illustrates data consistent with the assumption of a constant effect.
All of the subjects have the same causal effect even though they have different outcomes under the treatment.
The assignment mechanism.
The assignment mechanism, the method by which units are assigned treatment, affects the calculation of the average causal effect. One such assignment mechanism is randomization. For each subject we could flip a coin to determine if she receives treatment. If we wanted five subjects to receive treatment, we could assign treatment to the first five names we pick out of a hat. When we randomly assign treatments we may get different answers.
Assume that this data is the truth:
The true average causal effect is −8. But the causal effect for these individuals is never equal to this average. The causal effect varies, as it generally (always?) does in real life. After assigning treatments randomly, we might estimate the causal effect as:
A different random assignment of treatments yields a different estimate of the average causal effect.
The average causal effect varies because our sample is small and the responses have a large variance. If the sample were larger and the variance were less, the average causal effect would be closer to the true average causal effect regardless of the specific units randomly assigned to treatment.
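The following small simulation (with hypothetical numbers chosen so that the true average causal effect is −8, matching the example above) illustrates how random assignment lets the difference in group means estimate the average causal effect even though only one potential outcome is observed per unit:

import numpy as np

rng = np.random.default_rng(0)
n = 1000
y_control = rng.normal(loc=150, scale=15, size=n)   # potential outcomes without treatment
y_treated = y_control + rng.normal(-8, 3, size=n)   # potential outcomes with treatment
true_ate = np.mean(y_treated - y_control)           # close to -8 by construction

assign = rng.random(n) < 0.5                         # coin-flip assignment mechanism
observed = np.where(assign, y_treated, y_control)    # only one outcome per unit is seen
estimate = observed[assign].mean() - observed[~assign].mean()
print(true_ate, estimate)

Rerunning with a smaller n or a larger outcome variance makes the estimate scatter more widely around the true value, exactly as described above.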
Alternatively, suppose the mechanism assigns the treatment to all men and only to them.
Under this assignment mechanism, it is impossible for women to receive treatment and therefore impossible to determine the average causal effect on female subjects. In order to make any inferences about the causal effect on a subject, the probability that the subject receives treatment must be greater than 0 and less than 1.
The perfect doctor.
Consider the use of the "perfect doctor" as an assignment mechanism. The perfect doctor knows how each subject will respond to the drug or the control and assigns each subject to the treatment that will most benefit her. The perfect doctor knows this information about a sample of patients:
Based on this knowledge she would make the following treatment assignments:
The perfect doctor distorts both averages by filtering out poor responses to both the treatment and control. The difference between means, which is the supposed average causal effect, is distorted in a direction that depends on the details. For instance, a subject like Laila who is harmed by taking the drug would be assigned to the control group by the perfect doctor and thus the negative effect of the drug would be masked.
Conclusion.
The causal effect of a treatment on a single unit at a point in time is the difference between the outcome variable with the treatment and without the treatment. The Fundamental Problem of Causal Inference is that it is impossible to observe the causal effect on a single unit. You either take the aspirin now or you don't. As a consequence, assumptions must be made in order to estimate the missing counterfactuals.
The Rubin causal model has also been connected to instrumental variables (Angrist, Imbens, and Rubin, 1996) and other techniques for causal inference. For more on the connections between the Rubin causal model, structural equation modeling, and other statistical methods for causal inference, see Morgan and Winship (2007) and Pearl (2000). Pearl (2000) argues that all potential outcomes can be derived from Structural Equation Models (SEMs) thus unifying econometrics and modern causal analysis.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "t_1"
},
{
"math_id": 1,
"text": "t_2"
},
{
"math_id": 2,
"text": "Y_t(u)"
},
{
"math_id": 3,
"text": "Y_c(u)"
},
{
"math_id": 4,
"text": "Y_t(u) - Y_c(u)"
},
{
"math_id": 5,
"text": "130 - 140"
},
{
"math_id": 6,
"text": "120 - 125"
},
{
"math_id": 7,
"text": "140 - 125"
},
{
"math_id": 8,
"text": "Y_t(u) = T+Y_c(u)"
},
{
"math_id": 9,
"text": "Y_t(u) - T = Y_c(u)."
}
] |
https://en.wikipedia.org/wiki?curid=5700418
|
570140
|
Infinite impulse response
|
Property of many linear time-invariant (LTI) systems
Infinite impulse response (IIR) is a property applying to many linear time-invariant systems that are distinguished by having an impulse response formula_0 that does not become exactly zero past a certain point but continues indefinitely. This is in contrast to a finite impulse response (FIR) system, in which the impulse response "does" become exactly zero at times formula_1 for some finite formula_2, thus being of finite duration. Common examples of linear time-invariant systems are most electronic and digital filters. Systems with this property are known as "IIR systems" or "IIR filters".
In practice, the impulse response, even of IIR systems, usually approaches zero and can be neglected past a certain point. However the physical systems which give rise to IIR or FIR responses are dissimilar, and therein lies the importance of the distinction. For instance, analog electronic filters composed of resistors, capacitors, and/or inductors (and perhaps linear amplifiers) are generally IIR filters. On the other hand, discrete-time filters (usually digital filters) based on a tapped delay line "employing no feedback" are necessarily FIR filters. The capacitors (or inductors) in the analog filter have a "memory" and their internal state never completely relaxes following an impulse (assuming the classical model of capacitors and inductors where quantum effects are ignored). But in the latter case, after an impulse has reached the end of the tapped delay line, the system has no further memory of that impulse and has returned to its initial state; its impulse response beyond that point is exactly zero.
Implementation and design.
Although almost all analog electronic filters are IIR, digital filters may be either IIR or FIR. The presence of feedback in the topology of a discrete-time filter (such as the block diagram shown below) generally creates an IIR response. The z domain transfer function of an IIR filter contains a non-trivial denominator, describing those feedback terms. The transfer function of an FIR filter, on the other hand, has only a numerator as expressed in the general form derived below. All of the formula_3 coefficients with formula_4 (feedback terms) are zero and the filter has no finite poles.
The transfer functions pertaining to IIR analog electronic filters have been extensively studied and optimized for their amplitude and phase characteristics. These continuous-time filter functions are described in the Laplace domain. Desired solutions can be transferred to the case of discrete-time filters whose transfer functions are expressed in the z domain, through the use of certain mathematical techniques such as the bilinear transform, impulse invariance, or pole–zero matching method. Thus digital IIR filters can be based on well-known solutions for analog filters such as the Chebyshev filter, Butterworth filter, and elliptic filter, inheriting the characteristics of those solutions.
Transfer function derivation.
Digital filters are often described and implemented in terms of the difference equation that defines how the output signal is related to the input signal:
formula_5
where:
formula_6 is the feedforward filter order,
formula_7 are the feedforward filter coefficients,
formula_8 is the feedback filter order,
formula_9 are the feedback filter coefficients,
formula_10 is the input signal and
formula_11 is the output signal.
A more condensed form of the difference equation is:
formula_12
which, when rearranged, becomes:
formula_13
To find the transfer function of the filter, we first take the Z-transform of each side of the above equation, where we use the time-shift property to obtain:
formula_14
We define the transfer function to be:
formula_15
Considering that in most IIR filter designs the coefficient formula_16 is 1, the IIR filter transfer function takes the more traditional form:
formula_17
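As an illustration, the difference equation above can be evaluated directly in a few lines of Python (a naive direct-form sketch; production code would normally use an optimized routine such as scipy.signal.lfilter, which computes the same result):

def iir_filter(b, a, x):
    # y[n] = (1/a[0]) * (sum_i b[i]*x[n-i] - sum_j a[j]*y[n-j]),  j = 1..Q
    y = []
    for n in range(len(x)):
        acc = sum(b[i] * x[n - i] for i in range(len(b)) if n - i >= 0)
        acc -= sum(a[j] * y[n - j] for j in range(1, len(a)) if n - j >= 0)
        y.append(acc / a[0])
    return y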
Stability.
The transfer function allows one to judge whether or not a system is bounded-input, bounded-output (BIBO) stable. To be specific, the BIBO stability criterion requires that the ROC of the system includes the unit circle. For example, for a causal system, all poles of the transfer function have to have an absolute value smaller than one. In other words, all poles must be located within the unit circle in the formula_18-plane.
The poles are defined as the values of formula_18 which make the denominator of formula_19 equal to 0:
formula_20
Clearly, if formula_21 then the poles are not located at the origin of the formula_18-plane. This is in contrast to the FIR filter, where all poles are located at the origin and which is therefore always stable.
IIR filters are sometimes preferred over FIR filters because an IIR filter can achieve a much sharper transition region roll-off than an FIR filter of the same order.
Example.
Let the transfer function formula_19 of a discrete-time filter be given by:
formula_22
governed by the parameter formula_23, a real number with formula_24. formula_19 is stable and causal with a pole at formula_23. The time-domain impulse response can be shown to be given by:
formula_25
where formula_26 is the unit step function. It can be seen that formula_27 is non-zero for all formula_28, thus an impulse response which continues infinitely.
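This behaviour is easy to verify numerically: the difference equation corresponding to formula_22 is y[n] = x[n] + a·y[n−1], and feeding it a unit impulse reproduces formula_25 (a short sketch, using a = 0.5 as an illustrative value):

a = 0.5                                  # example pole location, 0 < |a| < 1
x = [1.0] + [0.0] * 9                    # unit impulse
y = []
for n in range(len(x)):
    y.append(x[n] + (a * y[n - 1] if n > 0 else 0.0))
print(y)                                 # 1.0, 0.5, 0.25, ... = a**n, never exactly zero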
Advantages and disadvantages.
The main advantage digital IIR filters have over FIR filters is their efficiency in implementation, in order to meet a specification in terms of passband, stopband, ripple, and/or roll-off. Such a set of specifications can be accomplished with a lower order ("Q" in the above formulae) IIR filter than would be required for an FIR filter meeting the same requirements. If implemented in a signal processor, this implies correspondingly fewer calculations per time step; the computational savings are often a rather large factor.
On the other hand, FIR filters can be easier to design, for instance, to match a particular frequency response requirement. This is particularly true when the requirement is not one of the usual cases (high-pass, low-pass, notch, etc.) which have been studied and optimized for analog filters. Also FIR filters can be easily made to be linear phase (constant group delay vs frequency)—a property that is not easily met using IIR filters and then only as an approximation (for instance with the Bessel filter). Another issue regarding digital IIR filters is the potential for limit cycle behavior when idle, due to the feedback system in conjunction with quantization.
Design Methods.
Impulse Invariance.
Impulse invariance is a technique for designing discrete-time infinite-impulse-response (IIR) filters from continuous-time filters in which the impulse response of the continuous-time system is sampled to produce the impulse response of the discrete-time system.
Impulse invariance is one of the commonly used methods to meet the two basic requirements of the mapping from the s-plane to the z-plane. It is obtained by solving for the T(z) that has the same output values at the sampling times as the analog filter, and it is only applicable when the input is an impulse.
Note that the digital filter generated by this method yields only approximate values for all inputs except impulse inputs, for which it is very accurate. This is the simplest IIR filter design method. It is the most accurate at low frequencies, so it is usually used in low-pass filters.
For Laplace transform or z-transform, the output after the transformation is just the input multiplied by the corresponding transformation function, T(s) or T(z). Y(s) and Y(z) are the converted output of input X(s) and input X(z), respectively.
formula_29
formula_30
When applying the Laplace transform or z-transform on the unit impulse, the result is 1. Hence, the output results after the conversion are
formula_31
formula_32
Now the output of the analog filter is just the inverse Laplace transform in the time domain.
formula_33
If we use nT instead of t, we obtain the output y(nT) derived from the pulse at the sampling times, which can also be expressed as y(n):
formula_34
Applying the z-transform to this discrete-time signal gives T(z):
formula_35
formula_36
formula_37
The last equation describes mathematically that a digital IIR filter is obtained by sampling the inverse Laplace transform of the analog filter T(s) and then applying the z-transform, which is usually simplified to
formula_38
Note the multiplier T appearing in the formula. Although the Laplace transform and the z-transform of the unit impulse are both 1, the impulses themselves are not the same: the analog impulse has infinite height but unit area at t = 0, whereas the discrete-time impulse simply has the value 1 at n = 0, so a multiplier T is required to account for the difference.
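In practice this mapping is rarely coded by hand; for example, recent versions of SciPy expose it through cont2discrete with method='impulse'. A hedged sketch, applied to the simple hypothetical analog prototype T(s) = 1/(s + 1):

from scipy.signal import cont2discrete

T = 0.1                                              # sampling period
numd, dend, dt = cont2discrete(([1.0], [1.0, 1.0]),  # T(s) = 1 / (s + 1)
                               T, method='impulse')
print(numd, dend)                                    # coefficients of the digital T(z), scaled by T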
Step Invariance.
Step invariance is often a better design method than impulse invariance. When sampled, the input to the digital filter consists of segments with different constant values; that is, it is composed of discrete steps. The step-invariant IIR filter is less accurate than applying the same step input signal to the ADC; however, it is a better approximation for any input than the impulse-invariant design.
The step-invariant design requires T(z) and T(s) to produce the same sample values when both are driven by a step input. The input to the digital filter is u(n), and the input to the analog filter is u(t). Applying the z-transform and the Laplace transform to these two inputs gives the converted output signals.
Perform z-transform on step input formula_39
Converted output after z-transform formula_40
Perform Laplace transform on step input formula_41
Converted output after Laplace transform formula_42
The output of the analog filter is y(t), the inverse Laplace transform of Y(s). If it is sampled every T seconds, the result is y(n), the inverse transform of Y(z). These signals are used to solve for the digital filter so that it has the same output as the analog filter at the sampling times.
The following equation points out the solution of T(z), which is the approximate formula for the analog filter.
formula_43
formula_44
formula_45
formula_46
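Since a piecewise-constant (stepped) input is exactly what a zero-order hold produces, the step-invariant mapping corresponds to zero-order-hold discretization; a sketch using SciPy and the same hypothetical prototype as before:

from scipy.signal import cont2discrete

T = 0.1
numd, dend, dt = cont2discrete(([1.0], [1.0, 1.0]), T, method='zoh')
print(numd, dend)                        # digital filter matching the analog step response at t = nT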
Bilinear Transform.
The bilinear transform is a special case of a conformal mapping, often used to convert a transfer function formula_47 of a linear, time-invariant (LTI) filter in the continuous-time domain (often called an analog filter) to a transfer function formula_48 of a linear, shift-invariant filter in the discrete-time domain.
The bilinear transform is a first-order approximation of the natural logarithm function that is an exact mapping of the "z"-plane to the "s"-plane. When the Laplace transform is performed on a discrete-time signal (with each element of the discrete-time sequence attached to a correspondingly delayed unit impulse), the result is precisely the Z transform of the discrete-time sequence with the substitution of
formula_49
where formula_50 is the numerical integration step size of the trapezoidal rule used in the bilinear transform derivation; or, in other words, the sampling period. The above bilinear approximation can be solved for formula_51 or a similar approximation for formula_52 can be performed.
The inverse of this mapping (and its first-order bilinear approximation) is
formula_53
This relationship is used to obtain the digital infinite impulse response (IIR) filter T(z) from the Laplace transfer function of any analog filter.
The bilinear transform essentially uses this first order approximation and substitutes into the continuous-time transfer function, formula_54
formula_55
That is
formula_56
which is used to calculate the IIR digital filter, starting from the Laplace transfer function of the analog filter.
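A sketch of this last step using SciPy's bilinear function, which performs the substitution above on transfer-function coefficients (the first-order prototype is again a hypothetical example):

from scipy.signal import bilinear

fs = 10.0                                 # sampling rate, i.e. 1/T
bz, az = bilinear([1.0], [1.0, 1.0], fs)  # H_a(s) = 1/(s + 1)  ->  H_d(z)
print(bz, az)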
|
[
{
"math_id": 0,
"text": "h(t)"
},
{
"math_id": 1,
"text": "t>T"
},
{
"math_id": 2,
"text": "T"
},
{
"math_id": 3,
"text": "a_i"
},
{
"math_id": 4,
"text": "i > 0"
},
{
"math_id": 5,
"text": "\n\\begin{align}\n y[n] {} = & \\frac{1}{a_0}(b_0 x[n] + b_1 x[n-1] + \\cdots + b_P x[n-P] \\\\\n & {} - a_1 y[n-1] - a_2 y[n-2] - \\cdots - a_Q y[n-Q])\n\\end{align}\n"
},
{
"math_id": 6,
"text": "\\ P"
},
{
"math_id": 7,
"text": "\\ b_i"
},
{
"math_id": 8,
"text": "\\ Q"
},
{
"math_id": 9,
"text": "\\ a_i"
},
{
"math_id": 10,
"text": "\\ x[n]"
},
{
"math_id": 11,
"text": "\\ y[n]"
},
{
"math_id": 12,
"text": "\\ y[n] = \\frac{1}{a_0} \\left(\\sum_{i=0}^P b_{i}x[n-i] - \\sum_{j=1}^Q a_j y[n-j]\\right)"
},
{
"math_id": 13,
"text": "\\ \\sum_{j=0}^Q a_j y[n-j] = \\sum_{i=0}^P b_i x[n-i]"
},
{
"math_id": 14,
"text": "\\ \\sum_{j=0}^Q a_j z^{-j} Y(z) = \\sum_{i=0}^P b_i z^{-i} X(z)"
},
{
"math_id": 15,
"text": "\n\\begin{align}\nH(z) & = \\frac{Y(z)}{X(z)} \\\\\n & = \\frac{\\sum_{i=0}^P b_i z^{-i}}{\\sum_{j=0}^Q a_j z^{-j}}\n\\end{align}\n"
},
{
"math_id": 16,
"text": "\\ a_0"
},
{
"math_id": 17,
"text": "\nH(z) = \\frac{\\sum_{i=0}^P b_i z^{-i}}{1+\\sum_{j=1}^Q a_j z^{-j}}\n"
},
{
"math_id": 18,
"text": "z"
},
{
"math_id": 19,
"text": "H(z)"
},
{
"math_id": 20,
"text": "\\ 0 = \\sum_{j=0}^Q a_{j} z^{-j}"
},
{
"math_id": 21,
"text": "a_{j}\\ne 0"
},
{
"math_id": 22,
"text": "H(z) = \\frac{B(z)}{A(z)} = \\frac{1}{1 - a z^{-1}}"
},
{
"math_id": 23,
"text": "a"
},
{
"math_id": 24,
"text": "0 < |a| < 1"
},
{
"math_id": 25,
"text": "h(n) = a^{n} u(n)"
},
{
"math_id": 26,
"text": "u(n)"
},
{
"math_id": 27,
"text": "h(n)"
},
{
"math_id": 28,
"text": "n \\ge 0"
},
{
"math_id": 29,
"text": "Y(s)=T(s)X(s)"
},
{
"math_id": 30,
"text": "Y(z)=T(z)X(z)"
},
{
"math_id": 31,
"text": "Y(s)=T(s)"
},
{
"math_id": 32,
"text": "Y(z)=T(z)"
},
{
"math_id": 33,
"text": "y(t)=L^{-1}[Y(s)]=L^{-1}[T(s)]"
},
{
"math_id": 34,
"text": "y(n)=y(nT)=y(t)|_{t=sT}"
},
{
"math_id": 35,
"text": "T(z)=Y(z)=Z[y(n)]"
},
{
"math_id": 36,
"text": "T(z)=Z[y(n)]=Z[y(nT)]"
},
{
"math_id": 37,
"text": "T(z)=Z\\left\\{L^{-1}[T(s)]_{t=nT}\\right\\}"
},
{
"math_id": 38,
"text": "T(z)=Z[T(s)]*T"
},
{
"math_id": 39,
"text": "Z[u(n)]=\\dfrac{z}{z-1}"
},
{
"math_id": 40,
"text": "Y(z)=T(z)U(z)=T(z)\\dfrac{z}{z-1}"
},
{
"math_id": 41,
"text": "L[u(t)]=\\dfrac{1}{s}"
},
{
"math_id": 42,
"text": "Y(s)=T(s)U(s)=\\dfrac{T(s)}{s}"
},
{
"math_id": 43,
"text": "T(z)=\\dfrac{z-1}{z}Y(z)"
},
{
"math_id": 44,
"text": "T(z)=\\dfrac{z-1}{z}Z[y(n)]"
},
{
"math_id": 45,
"text": "T(z)=\\dfrac{z-1}{z}Z[Y(s)]"
},
{
"math_id": 46,
"text": "T(z)=\\dfrac{z-1}{z}Z[\\dfrac{T(s)}{s}]"
},
{
"math_id": 47,
"text": "H_a(s)"
},
{
"math_id": 48,
"text": "H_d(z)"
},
{
"math_id": 49,
"text": "\n\\begin{align}\nz &= e^{sT} \\\\\n &= \\frac{e^{sT/2}}{e^{-sT/2}} \\\\\n &\\approx \\frac{1 + s T / 2}{1 - s T / 2}\n\\end{align}\n"
},
{
"math_id": 50,
"text": " T "
},
{
"math_id": 51,
"text": " s "
},
{
"math_id": 52,
"text": " s = (1/T) \\ln(z) "
},
{
"math_id": 53,
"text": "\n\\begin{align}\ns &= \\frac{1}{T} \\ln(z) \\\\\n &= \\frac{2}{T} \\left[\\frac{z-1}{z+1} + \\frac{1}{3} \\left( \\frac{z-1}{z+1} \\right)^3 + \\frac{1}{5} \\left( \\frac{z-1}{z+1} \\right)^5 + \\frac{1}{7} \\left( \\frac{z-1}{z+1} \\right)^7 + \\cdots \\right] \\\\\n &\\approx \\frac{2}{T} \\frac{z - 1}{z + 1} \\\\\n &= \\frac{2}{T} \\frac{1 - z^{-1}}{1 + z^{-1}}\n\\end{align}\n"
},
{
"math_id": 54,
"text": " H_a(s) "
},
{
"math_id": 55,
"text": "s \\leftarrow \\frac{2}{T} \\frac{z - 1}{z + 1}."
},
{
"math_id": 56,
"text": "H_d(z) = H_a(s) \\bigg|_{s = \\frac{2}{T} \\frac{z - 1}{z + 1}}= H_a \\left( \\frac{2}{T} \\frac{z-1}{z+1} \\right). \\ "
}
] |
https://en.wikipedia.org/wiki?curid=570140
|
570172
|
Airspeed
|
Speed of an aircraft relative to the surrounding air
In aviation, airspeed is the speed of an aircraft relative to the air it is flying through (which itself is usually moving relative to the ground due to wind). It is difficult to measure the exact airspeed of the aircraft (true airspeed), but other measures of airspeed, such as indicated airspeed and Mach number give useful information about the capabilities and limitations of airplane performance. The common measures of airspeed are:
The measurement and indication of airspeed is ordinarily accomplished on board an aircraft by an airspeed indicator (ASI) connected to a pitot-static system. The pitot-static system comprises one or more pitot probes (or tubes) facing the on-coming air flow to measure pitot pressure (also called stagnation, total or ram pressure) and one or more static ports to measure the static pressure in the air flow. These two pressures are compared by the ASI to give an IAS reading. Airspeed indicators are designed to give true airspeed at sea level pressure and standard temperature. As the aircraft climbs into less dense air, its true airspeed is greater than the airspeed indicated on the ASI.
Calibrated airspeed is typically within a few knots of indicated airspeed, while equivalent airspeed decreases slightly from CAS as aircraft altitude increases or at high speeds.
Units.
Airspeed is commonly given in knots (kn). Since 2010, the International Civil Aviation Organization (ICAO) recommends using kilometers per hour (km/h) for airspeed (and meters per second for wind speed on runways), but allows using the de facto standard of knots, and has no set date on when to stop.
Depending on the country of manufacture or the era in aviation history, airspeed indicators on aircraft instrument panels have been configured to read in knots, kilometers per hour, or miles per hour. In high-altitude flight, the Mach number is sometimes used for reporting airspeed.
Indicated airspeed.
Indicated airspeed (IAS) is the airspeed indicator reading (ASIR) uncorrected for instrument, position, and other errors. From current EASA definitions: Indicated airspeed means the speed of an aircraft as shown on its pitot static airspeed indicator calibrated to reflect standard atmosphere adiabatic compressible flow at sea level uncorrected for airspeed system errors.
An airspeed indicator is a differential pressure gauge with the pressure reading expressed in units of speed, rather than pressure. The airspeed is derived from the difference between the ram air pressure from the pitot tube, or stagnation pressure, and the static pressure. The pitot tube is mounted facing forward; the static pressure is frequently detected at static ports on one or both sides of the aircraft. Sometimes both pressure sources are combined in a single probe, a pitot-static tube. The static pressure measurement is subject to error due to inability to place the static ports at positions where the pressure is true static pressure at all airspeeds and attitudes. The correction for this error is the position error correction (PEC) and varies for different aircraft and airspeeds. Further errors of 10% or more are common if the airplane is flown in "uncoordinated" flight.
Uses of indicated airspeed.
Indicated airspeed is a better measure of power required and lift available than true airspeed. Therefore, IAS is used for controlling the aircraft during taxiing, takeoff, climb, descent, approach or landing. Target speeds for best rate of climb, best range, and best endurance are given in terms of indicated speed. The airspeed structural limit, beyond which the forces on panels may become too high or wing flutter may occur, is often given in terms of IAS.
Calibrated airspeed.
Calibrated airspeed (CAS) is indicated airspeed corrected for instrument errors, position error (due to incorrect pressure at the static port) and installation errors.
Calibrated airspeed values less than the speed of sound at standard sea level (661.4788 knots) are calculated as follows:
formula_0 minus position and installation error correction.
where:
formula_1 is the calibrated airspeed,
formula_2 is speed of sound at standard sea level
formula_3 is the ratio of specific heats (1.4 for air)
formula_4 is the impact pressure, the difference between total pressure and static pressure
formula_5 is the static air pressure at standard sea level
This expression is based on the form of Bernoulli's equation applicable to isentropic compressible flow. CAS is the same as true air speed at sea level standard conditions, but becomes smaller relative to true airspeed as we climb into lower pressure and cooler air. Nevertheless, it remains a good measure of the forces acting on the airplane, meaning stall speeds can be called out on the airspeed indicator. The values for formula_6 and formula_7 are consistent with the ISA i.e. the conditions under which airspeed indicators are calibrated.
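A direct transcription of the expression above into Python (a sketch in SI units, using ISA sea-level constants; the position and installation error correction is omitted):

def calibrated_airspeed(qc, a0=340.29, p0=101325.0, gamma=1.4):
    # qc: impact pressure in Pa; returns CAS in m/s (before position/installation corrections)
    return a0 * ((2.0 / (gamma - 1.0)) * ((qc / p0 + 1.0) ** ((gamma - 1.0) / gamma) - 1.0)) ** 0.5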
True airspeed.
The true airspeed (TAS; also KTAS, for "knots true airspeed") of an aircraft is the speed of the aircraft relative to the air in which it is flying. The true airspeed and heading of an aircraft constitute its velocity relative to the atmosphere.
Uses of true airspeed.
The true airspeed is important information for accurate navigation of an aircraft. To maintain a desired ground track whilst flying in a moving airmass, the pilot of an aircraft must use knowledge of wind speed, wind direction, and true air speed to determine the required heading. See wind triangle.
TAS is the appropriate speed to use when calculating the range of an airplane. It is the speed normally listed on the flight plan, also used in flight planning, before considering the effects of wind.
Measurement of true airspeed.
True airspeed is calculated from calibrated airspeed as follows
formula_8
where
formula_9 is true airspeed
formula_10 is the temperature ratio, namely local over standard sea level temperature, formula_11
Some airspeed indicators include a TAS scale, which is set by entering outside air temperature and pressure altitude. Alternatively, TAS can be calculated using an E6B flight calculator or equivalent, given inputs of CAS, outside air temperature (OAT) and pressure altitude.
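A sketch combining the two relations: the CAS expression is first inverted to recover the impact pressure, which is then inserted into the formula above together with the local static pressure p and the temperature ratio formula_10 (SI units, same ISA sea-level constants as before):

def true_airspeed(cas, p, theta, a0=340.29, p0=101325.0, gamma=1.4):
    # cas in m/s, p: local static pressure in Pa, theta = T/T0
    g = (gamma - 1.0) / gamma
    qc = p0 * ((1.0 + 0.5 * (gamma - 1.0) * (cas / a0) ** 2) ** (1.0 / g) - 1.0)
    return cas * (theta * ((1.0 + qc / p) ** g - 1.0) / ((1.0 + qc / p0) ** g - 1.0)) ** 0.5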
Equivalent airspeed.
Equivalent airspeed (EAS) is defined as the airspeed at sea level in the International Standard Atmosphere at which the (incompressible) dynamic pressure is the same as the dynamic pressure at the true airspeed (TAS) and altitude at which the aircraft is flying. That is, it is defined by the equation
formula_12
where
formula_13 is equivalent airspeed
formula_9 is true airspeed
formula_14 is the density of air at the altitude at which the aircraft is currently flying;
formula_15 is the density of air at sea level in the International Standard Atmosphere (1.225 kg/m3 or 0.00237 slug/ft3).
Stated differently,
formula_16
where
formula_17 is the density ratio, that is formula_18
Uses of equivalent airspeed.
EAS is a measure of airspeed that is a function of incompressible dynamic pressure. Structural analysis is often in terms of incompressible dynamic pressure, so equivalent airspeed is a useful speed for structural testing. The significance of equivalent airspeed is that, at Mach numbers below the onset of wave drag, all of the aerodynamic forces and moments on an aircraft are proportional to the square of the equivalent airspeed. Thus, the handling and 'feel' of an aircraft, and the aerodynamic loads upon it, at a given equivalent airspeed, are very nearly constant and equal to those at standard sea level irrespective of the actual flight conditions.
At standard sea level pressure, CAS and EAS are equal. Up to about 200 knots CAS and 10,000 ft (3,000 m) the difference is negligible, but at higher speeds and altitudes CAS diverges from EAS due to compressibility.
Mach number.
Mach number formula_19 is defined as
formula_20
where
formula_21 is true airspeed
formula_22 is the local speed of sound
Both the Mach number and the speed of sound can be computed using measurements of impact pressure, static pressure and outside air temperature.
Uses of Mach number.
For aircraft that fly close to, but below, the speed of sound (i.e. most civil jets), the compressibility speed limit is given in terms of Mach number. Beyond this speed, Mach buffet, stall, or Mach tuck may occur.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "V_c=a_0\\sqrt{\\bigg(\\frac{2}{\\gamma-1}\\bigg)\\Bigg[\\bigg(\\frac{q_c}{p_0}+1\\bigg)^\\frac{\\gamma-1}{\\gamma}-1\\Bigg]}"
},
{
"math_id": 1,
"text": "V_c \\, "
},
{
"math_id": 2,
"text": "a_0 \\, "
},
{
"math_id": 3,
"text": "\\gamma \\, "
},
{
"math_id": 4,
"text": "q_c \\, "
},
{
"math_id": 5,
"text": "p_0 \\, "
},
{
"math_id": 6,
"text": "p_0"
},
{
"math_id": 7,
"text": "a_0"
},
{
"math_id": 8,
"text": "V = V_c\\sqrt{\\theta \\frac{(1 + q_c/ p)^{(\\gamma - 1)/\\gamma} - 1}{(1 + q_c/ p_0)^{(\\gamma - 1)/\\gamma} - 1}}"
},
{
"math_id": 9,
"text": "V\\,"
},
{
"math_id": 10,
"text": "\\theta \\,"
},
{
"math_id": 11,
"text": "T/T_0"
},
{
"math_id": 12,
"text": "\\frac{1}{2} \\rho_0 {V_e}^2 = \\frac{1}{2} \\rho V^2"
},
{
"math_id": 13,
"text": "V_e\\,"
},
{
"math_id": 14,
"text": "\\rho\\,"
},
{
"math_id": 15,
"text": "\\rho_0\\,"
},
{
"math_id": 16,
"text": "V_e \\equiv V\\sqrt{\\sigma}"
},
{
"math_id": 17,
"text": "\\sigma"
},
{
"math_id": 18,
"text": "\\frac{\\rho}{\\rho_0}"
},
{
"math_id": 19,
"text": "M"
},
{
"math_id": 20,
"text": "M = \\frac{V}{a}"
},
{
"math_id": 21,
"text": "V \\,"
},
{
"math_id": 22,
"text": "a \\,"
}
] |
https://en.wikipedia.org/wiki?curid=570172
|
57021483
|
Distributed-element circuit
|
Electrical circuits composed of lengths of transmission lines or other distributed components
Distributed-element circuits are electrical circuits composed of lengths of transmission lines or other distributed components. These circuits perform the same functions as conventional circuits composed of passive components, such as capacitors, inductors, and transformers. They are used mostly at microwave frequencies, where conventional components are difficult (or impossible) to implement.
Conventional circuits consist of individual components manufactured separately then connected together with a conducting medium. Distributed-element circuits are built by forming the medium itself into specific patterns. A major advantage of distributed-element circuits is that they can be produced cheaply as a printed circuit board for consumer products, such as satellite television. They are also made in coaxial and waveguide formats for applications such as radar, satellite communication, and microwave links.
A phenomenon commonly used in distributed-element circuits is that a length of transmission line can be made to behave as a resonator. Distributed-element components which do this include stubs, coupled lines, and cascaded lines. Circuits built from these components include filters, power dividers, directional couplers, and circulators.
Distributed-element circuits were studied during the 1920s and 1930s but did not become important until World War II, when they were used in radar. After the war their use was limited to military, space, and broadcasting infrastructure, but improvements in materials science in the field soon led to broader applications. They can now be found in domestic products such as satellite dishes and mobile phones.
Circuit modelling.
Distributed-element circuits are designed with the distributed-element model, an alternative to the lumped-element model in which the passive electrical elements of electrical resistance, capacitance and inductance are assumed to be "lumped" at one point in space in a resistor, capacitor or inductor, respectively. The distributed-element model is used when this assumption no longer holds, and these properties are considered to be distributed in space. The assumption breaks down when there is significant time for electromagnetic waves to travel from one terminal of a component to the other; "significant", in this context, implies enough time for a noticeable phase change. The amount of phase change is dependent on the wave's frequency (and inversely dependent on wavelength). A common rule of thumb amongst engineers is to change from the lumped to the distributed model when distances involved are more than one-tenth of a wavelength (a 36° phase change). The lumped model completely fails at one-quarter wavelength (a 90° phase change), where not only the value but also the nature of the component departs from prediction. Due to this dependence on wavelength, the distributed-element model is used mostly at higher frequencies; at low frequencies, distributed-element components are too bulky. Distributed designs are feasible above 300 MHz, and are the technology of choice at microwave frequencies above 1 GHz.
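A short sketch of the rule of thumb described above (Python): given a feature size and operating frequency, it checks whether the structure exceeds one-tenth of a wavelength. The propagation velocity defaults to the free-space value as a simplifying assumption; in a real medium it would be lower.

    def needs_distributed_model(feature_size_m, frequency_hz, propagation_velocity_m_s=3.0e8):
        # Rule of thumb from the text: use the distributed-element model when the structure
        # is longer than about one-tenth of a wavelength (roughly a 36-degree phase change).
        wavelength = propagation_velocity_m_s / frequency_hz
        return feature_size_m > wavelength / 10.0

    # A 5 cm feature is electrically small at 10 MHz but "distributed" at 1 GHz
    print(needs_distributed_model(0.05, 10e6))  # False (wavelength = 30 m)
    print(needs_distributed_model(0.05, 1e9))   # True  (wavelength = 0.3 m)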
There is no clear-cut demarcation in the frequency at which these models should be used. Although the changeover is usually somewhere in the 100-to-500 MHz range, the technological scale is also significant; miniaturised circuits can use the lumped model at a higher frequency. Printed circuit boards (PCBs) using through-hole technology are larger than equivalent designs using surface-mount technology. Hybrid integrated circuits are smaller than PCB technologies, and monolithic integrated circuits are smaller than both. Integrated circuits can use lumped designs at higher frequencies than printed circuits, and this is done in some radio frequency integrated circuits. This choice is particularly significant for hand-held devices, because lumped-element designs generally result in a smaller product.
Construction with transmission lines.
The overwhelming majority of distributed-element circuits are composed of lengths of transmission line, a particularly simple form to model. The cross-sectional dimensions of the line are unvarying along its length, and are small compared to the signal wavelength; thus, only distribution along the length of the line need be considered. Such an element of a distributed circuit is entirely characterised by its length and characteristic impedance. A further simplification occurs in commensurate line circuits, where all the elements are the same length. With commensurate circuits, a lumped circuit design prototype consisting of capacitors and inductors can be directly converted into a distributed circuit with a one-to-one correspondence between the elements of each circuit.
Commensurate line circuits are important because a design theory for producing them exists; no general theory exists for circuits consisting of arbitrary lengths of transmission line (or any arbitrary shapes). Although an arbitrary shape can be analysed with Maxwell's equations to determine its behaviour, finding useful structures is a matter of trial and error or guesswork.
An important difference between distributed-element circuits and lumped-element circuits is that the frequency response of a distributed circuit periodically repeats as shown in the Chebyshev filter example; the equivalent lumped circuit does not. This is a result of the transfer function of lumped forms being a rational function of complex frequency; distributed forms are an irrational function. Another difference is that cascade-connected lengths of line introduce a fixed delay at all frequencies (assuming an ideal line). There is no equivalent in lumped circuits for a fixed delay, although an approximation could be constructed for a limited frequency range.
Advantages and disadvantages.
Distributed-element circuits are cheap and easy to manufacture in some formats, but take up more space than lumped-element circuits. This is problematic in mobile devices (especially hand-held ones), where space is at a premium. If the operating frequencies are not too high, the designer may miniaturise components rather than switching to distributed elements. However, parasitic elements and resistive losses in lumped components are greater with increasing frequency as a proportion of the nominal value of the lumped-element impedance. In some cases, designers may choose a distributed-element design (even if lumped components are available at that frequency) to benefit from improved quality. Distributed-element designs tend to have greater power-handling capability; with a lumped component, all the energy passed by a circuit is concentrated in a small volume.
Media.
Paired conductors.
Several types of transmission line exist, and any of them can be used to construct distributed-element circuits. The oldest (and still most widely used) is a pair of conductors; its most common form is twisted pair, used for telephone lines and Internet connections. It is not often used for distributed-element circuits because the frequencies used are lower than the point where distributed-element designs become advantageous. However, designers frequently begin with a lumped-element design and convert it to an open-wire distributed-element design. Open wire is a pair of parallel uninsulated conductors used, for instance, for telephone lines on telegraph poles. The designer does not usually intend to implement the circuit in this form; it is an intermediate step in the design process. Distributed-element designs with conductor pairs are limited to a few specialised uses, such as Lecher lines and the twin-lead used for antenna feed lines.
Coaxial.
Coaxial line, a centre conductor surrounded by an insulated shielding conductor, is widely used for interconnecting units of microwave equipment and for longer-distance transmissions. Although coaxial distributed-element devices were commonly manufactured during the second half of the 20th century, they have been replaced in many applications by planar forms due to cost and size considerations. Air-dielectric coaxial line is used for low-loss and high-power applications. Distributed-element circuits in other media still commonly transition to coaxial connectors at the circuit ports for interconnection purposes.
Planar.
The majority of modern distributed-element circuits use planar transmission lines, especially those in mass-produced consumer items. There are several forms of planar line, but the kind known as microstrip is the most common. It can be manufactured by the same process as printed circuit boards and hence is cheap to make. It also lends itself to integration with lumped circuits on the same board. Other forms of printed planar lines include stripline, finline and many variations. Planar lines can also be used in monolithic microwave integrated circuits, where they are integral to the device chip.
Waveguide.
Many distributed-element designs can be directly implemented in waveguide. However, there is an additional complication with waveguides in that multiple modes are possible. These sometimes exist simultaneously, and this situation has no analogy in conducting lines. Waveguides have the advantages of lower loss and higher quality resonators over conducting lines, but their relative expense and bulk means that microstrip is often preferred. Waveguide mostly finds uses in high-end products, such as high-power military radars and the upper microwave bands (where planar formats are too lossy). Waveguide becomes bulkier with lower frequency, which militates against its use on the lower bands.
Mechanical.
In a few specialist applications, such as the mechanical filters in high-end radio transmitters (marine, military, amateur radio), electronic circuits can be implemented as mechanical components; this is done largely because of the high quality of the mechanical resonators. They are used in the radio frequency band (below microwave frequencies), where waveguides might otherwise be used. Mechanical circuits can also be implemented, in whole or in part, as distributed-element circuits. The frequency at which the transition to distributed-element design becomes feasible (or necessary) is much lower with mechanical circuits. This is because the speed at which signals travel through mechanical media is much lower than the speed of electrical signals.
Circuit components.
There are several structures that are repeatedly used in distributed-element circuits. Some of the common ones are described below.
Stub.
A stub is a short length of line that branches to the side of a main line. The end of the stub is often left open- or short-circuited, but may also be terminated with a lumped component. A stub can be used on its own (for instance, for impedance matching), or several of them can be used together in a more complex circuit such as a filter. A stub can be designed as the equivalent of a lumped capacitor, inductor, or resonator.
Departures from constructing with uniform transmission lines in distributed-element circuits are rare. One such departure that is widely used is the radial stub, which is shaped like a sector of a circle. They are often used in pairs, one on either side of the main transmission line. Such pairs are called butterfly or bowtie stubs.
Coupled lines.
Coupled lines are two transmission lines between which there is some electromagnetic coupling. The coupling can be direct or indirect. In indirect coupling, the two lines are run closely together for a distance with no screening between them. The strength of the coupling depends on the distance between the lines and the cross-section presented to the other line. In direct coupling, branch lines directly connect the two main lines together at intervals.
Coupled lines are a common method of constructing power dividers and directional couplers. Another property of coupled lines is that they act as a pair of coupled resonators. This property is used in many distributed-element filters.
Cascaded lines.
Cascaded lines are lengths of transmission line where the output of one line is connected to the input of the next. Multiple cascaded lines of different characteristic impedances can be used to construct a filter or a wide-band impedance matching network. This is called a stepped impedance structure. A single, cascaded line one-quarter wavelength long forms a quarter-wave impedance transformer. This has the useful property of transforming any impedance network into its dual; in this role, it is called an impedance inverter. This structure can be used in filters to implement a lumped-element prototype in ladder topology as a distributed-element circuit. The quarter-wave transformers are alternated with a distributed-element resonator to achieve this. However, this is now a dated design; more compact inverters, such as the impedance step, are used instead. An impedance step is the discontinuity formed at the junction of two cascaded transmission lines with different characteristic impedances.
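The impedance-inverting action of a quarter-wave section can be illustrated numerically. The sketch below (Python) uses the textbook lossless-line input-impedance formula, which at a quarter wavelength reduces to Zin = Z0²/ZL; the impedance values are hypothetical, and the formula is a standard transmission-line result rather than anything specific to this article.

    import math

    def input_impedance(z0, z_load, length_in_wavelengths):
        # Lossless line: Zin = Z0 * (ZL + j*Z0*tan(beta*l)) / (Z0 + j*ZL*tan(beta*l))
        beta_l = 2.0 * math.pi * length_in_wavelengths
        t = math.tan(beta_l)
        return z0 * (z_load + 1j * z0 * t) / (z0 + 1j * z_load * t)

    # A quarter-wave (0.25 wavelength) section acts as an impedance inverter: Zin = Z0**2 / ZL
    z0, z_load = 70.7, 100.0
    print(round(abs(input_impedance(z0, z_load, 0.25)), 1))  # about 50 ohms = 70.7**2 / 100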
Cavity resonator.
A cavity resonator is an empty (or sometimes dielectric-filled) space surrounded by conducting walls. Apertures in the walls couple the resonator to the rest of the circuit. Resonance occurs due to electromagnetic waves reflected back and forth from the cavity walls setting up standing waves. Cavity resonators can be used in many media, but are most naturally formed in waveguide from the already existing metal walls of the guide.
Dielectric resonator.
A dielectric resonator is a piece of dielectric material exposed to electromagnetic waves. It is most often in the form of a cylinder or thick disc. Although cavity resonators can be filled with dielectric, the essential difference is that in cavity resonators the electromagnetic field is entirely contained within the cavity walls. A dielectric resonator has some field in the surrounding space. This can lead to undesirable coupling with other components. The major advantage of dielectric resonators is that they are considerably smaller than the equivalent air-filled cavity.
Helical resonator.
A helical resonator is a helix of wire in a cavity; one end is unconnected, and the other is bonded to the cavity wall. Although they are superficially similar to lumped inductors, helical resonators are distributed-element components and are used in the VHF and lower UHF bands.
Fractals.
The use of fractal-like curves as a circuit component is an emerging field in distributed-element circuits. Fractals have been used to make resonators for filters and antennae. One of the benefits of using fractals is their space-filling property, making them smaller than other designs. Other advantages include the ability to produce wide-band and multi-band designs, good in-band performance, and good out-of-band rejection. In practice, a true fractal cannot be made because at each fractal iteration the manufacturing tolerances become tighter and are eventually greater than the construction method can achieve. However, after a small number of iterations, the performance is close to that of a true fractal. These may be called "pre-fractals" or "finite-order fractals" where it is necessary to distinguish from a true fractal.
Fractals that have been used as a circuit component include the Koch snowflake, Minkowski island, Sierpiński curve, Hilbert curve, and Peano curve. The first three are closed curves, suitable for patch antennae. The latter two are open curves with terminations on opposite sides of the fractal. This makes them suitable for use where a connection in cascade is required.
Taper.
A taper is a transmission line with a gradual change in cross-section. It can be considered the limiting case of the stepped impedance structure with an infinite number of steps. Tapers are a simple way of joining two transmission lines of different characteristic impedances. Using tapers greatly reduces the mismatch effects that a direct join would cause. If the change in cross-section is not too great, no other matching circuitry may be needed. Tapers can provide transitions between lines in different media, especially different forms of planar media. Tapers commonly change shape linearly, but a variety of other profiles may be used. The profile that achieves a specified match in the shortest length is known as a Klopfenstein taper and is based on the Chebychev filter design.
Tapers can be used to match a transmission line to an antenna. In some designs, such as the horn antenna and Vivaldi antenna, the taper is itself the antenna. Horn antennae, like other tapers, are often linear, but the best match is obtained with an exponential curve. The Vivaldi antenna is a flat (slot) version of the exponential taper.
Distributed resistance.
Resistive elements are generally not useful in a distributed-element circuit. However, distributed resistors may be used in attenuators and line terminations. In planar media they can be implemented as a meandering line of high-resistance material, or as a deposited patch of thin-film or thick-film material. In waveguide, a card of microwave absorbent material can be inserted into the waveguide.
Circuit blocks.
Filters and impedance matching.
Filters are a large percentage of circuits constructed with distributed elements. A wide range of structures are used for constructing them, including stubs, coupled lines and cascaded lines. Variations include interdigital filters, combline filters and hairpin filters. More-recent developments include fractal filters. Many filters are constructed in conjunction with dielectric resonators.
As with lumped-element filters, the more elements used, the closer the filter comes to an ideal response; the structure can become quite complex. For simple, narrow-band requirements, a single resonator may suffice (such as a stub or spurline filter).
Impedance matching for narrow-band applications is frequently achieved with a single matching stub. However, for wide-band applications the impedance-matching network assumes a filter-like design. The designer prescribes a required frequency response, and designs a filter with that response. The only difference from a standard filter design is that the filter's source and load impedances differ.
Power dividers, combiners and directional couplers.
A directional coupler is a four-port device which couples power flowing in one direction from one path to another. Two of the ports are the input and output ports of the main line. A portion of the power entering the input port is coupled to a third port, known as the "coupled port". None of the power entering the input port is coupled to the fourth port, usually known as the "isolated port". For power flowing in the reverse direction and entering the output port, a reciprocal situation occurs; some power is coupled to the isolated port, but none is coupled to the coupled port.
A power divider is often constructed as a directional coupler, with the isolated port permanently terminated in a matched load (making it effectively a three-port device). There is no essential difference between the two devices. The term "directional coupler" is usually used when the coupling factor (the proportion of power reaching the coupled port) is low, and "power divider" when the coupling factor is high. A power combiner is simply a power splitter used in reverse. In distributed-element implementations using coupled lines, indirectly coupled lines are more suitable for low-coupling directional couplers; directly coupled branch line couplers are more suitable for high-coupling power dividers.
Distributed-element designs rely on an element length of one-quarter wavelength (or some other length); this will hold true at only one frequency. Simple designs, therefore, have a limited bandwidth over which they will work successfully. Like impedance matching networks, a wide-band design requires multiple sections and the design begins to resemble a filter.
Hybrids.
A directional coupler which splits power equally between the output and coupled ports (a 3 dB coupler) is called a "hybrid". Although "hybrid" originally referred to a hybrid transformer (a lumped device used in telephones), it now has a broader meaning. A widely used distributed-element hybrid which does not use coupled lines is the "hybrid ring" or rat-race coupler. Each of its four ports is connected to a ring of transmission line at a different point. Waves travel in opposite directions around the ring, setting up standing waves. At some points on the ring, destructive interference results in a null; no power will leave a port set at that point. At other points, constructive interference maximises the power transferred.
Another use for a hybrid coupler is to produce the sum and difference of two signals. In the illustration, two input signals are fed into the ports marked 1 and 2. The sum of the two signals appears at the port marked Σ, and the difference at the port marked Δ. In addition to their uses as couplers and power dividers, directional couplers can be used in balanced mixers, frequency discriminators, attenuators, phase shifters, and antenna array feed networks.
Circulators.
A circulator is usually a three- or four-port device in which power entering one port is transferred to the next port in rotation, as if round a circle. Power can flow in only one direction around the circle (clockwise or anticlockwise), and no power is transferred to any of the other ports. Most distributed-element circulators are based on ferrite materials. Uses of circulators include as an isolator to protect a transmitter (or other equipment) from damage due to reflections from the antenna, and as a duplexer connecting the antenna, transmitter and receiver of a radio system.
An unusual application of a circulator is in a reflection amplifier, where the negative resistance of a Gunn diode is used to reflect back more power than it received. The circulator is used to direct the input and output power flows to separate ports.
Passive circuits, both lumped and distributed, are nearly always reciprocal; however, circulators are an exception. There are several equivalent ways to define or represent reciprocity. A convenient one for circuits at microwave frequencies (where distributed-element circuits are used) is in terms of their S-parameters. A reciprocal circuit will have an S-parameter matrix, ["S"], which is symmetric. From the definition of a circulator, it is clear that this will not be the case,
formula_0
for an ideal three-port circulator, showing that circulators are non-reciprocal by definition. It follows that it is impossible to build a circulator from standard passive components (lumped or distributed). The presence of a ferrite, or some other non-reciprocal material or system, is essential for the device to work.
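The non-reciprocity can be checked directly from the S-matrix given above: a reciprocal network has a symmetric S-parameter matrix, and the ideal circulator matrix is not symmetric. A small sketch (Python):

    # Ideal three-port circulator S-matrix from the text
    # (with the usual S-parameter convention, power circulates 1 -> 2 -> 3 -> 1)
    S = [[0, 0, 1],
         [1, 0, 0],
         [0, 1, 0]]

    def is_reciprocal(s):
        # A reciprocal network satisfies S[i][j] == S[j][i] for all ports i, j
        n = len(s)
        return all(s[i][j] == s[j][i] for i in range(n) for j in range(n))

    print(is_reciprocal(S))  # False: the ideal circulator is non-reciprocal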
Active components.
Distributed elements are usually passive, but most applications will require active components in some role. A microwave hybrid integrated circuit uses distributed elements for many passive components, but active components (such as diodes, transistors, and some passive components) are discrete. The active components may be packaged, or they may be placed on the substrate in chip form without individual packaging to reduce size and eliminate packaging-induced parasitics.
Distributed amplifiers consist of a number of amplifying devices (usually FETs), with all their inputs connected via one transmission line and all their outputs via another transmission line. The lengths of the two lines must be equal between each transistor for the circuit to work correctly, and each transistor adds to the output of the amplifier. This is different from a conventional multistage amplifier, where the gain is multiplied by the gain of each stage. Although a distributed amplifier has lower gain than a conventional amplifier with the same number of transistors, it has significantly greater bandwidth. In a conventional amplifier, the bandwidth is reduced by each additional stage; in a distributed amplifier, the overall bandwidth is the same as the bandwidth of a single stage. Distributed amplifiers are used when a single large transistor (or a complex, multi-transistor amplifier) would be too large to treat as a lumped component; the linking transmission lines separate the individual transistors.
History.
Distributed-element modelling was first used in electrical network analysis by Oliver Heaviside in 1881. Heaviside used it to find a correct description of the behaviour of signals on the transatlantic telegraph cable. Transmission of early transatlantic telegraph had been difficult and slow due to dispersion, an effect which was not well understood at the time. Heaviside's analysis, now known as the telegrapher's equations, identified the problem and suggested methods for overcoming it. It remains the standard analysis of transmission lines.
Warren P. Mason was the first to investigate the possibility of distributed-element circuits, and filed a patent in 1927 for a coaxial filter designed by this method. Mason and Sykes published the definitive paper on the method in 1937. Mason was also the first to suggest a distributed-element acoustic filter in his 1927 doctoral thesis, and a distributed-element mechanical filter in a patent filed in 1941. Mason's work was concerned with the coaxial form and other conducting wires, although much of it could also be adapted for waveguide. The acoustic work had come first, and Mason's colleagues in the Bell Labs radio department asked him to assist with coaxial and waveguide filters.
Before World War II, there was little demand for distributed-element circuits; the frequencies used for radio transmissions were lower than the point at which distributed elements became advantageous. Lower frequencies had a greater range, a primary consideration for broadcast purposes. These frequencies require long antennae for efficient operation, and this led to work on higher-frequency systems. A key breakthrough was the 1940 introduction of the cavity magnetron which operated in the microwave band and resulted in radar equipment small enough to install in aircraft. A surge in distributed-element filter development followed, filters being an essential component of radars. The signal loss in coaxial components led to the first widespread use of waveguide, extending the filter technology from the coaxial domain into the waveguide domain.
The wartime work was mostly unpublished until after the war for security reasons, which made it difficult to ascertain who was responsible for each development. An important centre for this research was the MIT Radiation Laboratory (Rad Lab), but work was also done elsewhere in the US and Britain. The Rad Lab work was published by Fano and Lawson. Another wartime development was the hybrid ring. This work was carried out at Bell Labs, and was published after the war by W. A. Tyrrell. Tyrrell describes hybrid rings implemented in waveguide, and analyses them in terms of the well-known waveguide magic tee. Other researchers soon published coaxial versions of this device.
George Matthaei led a research group at Stanford Research Institute which included Leo Young and was responsible for many filter designs. Matthaei first described the interdigital filter and the combline filter. The group's work was published in a landmark 1964 book covering the state of distributed-element circuit design at that time, which remained a major reference work for many years.
Planar formats began to be used with the invention of stripline by Robert M. Barrett. Although stripline was another wartime invention, its details were not published until 1951. Microstrip, invented in 1952, became a commercial rival of stripline; however, planar formats did not start to become widely used in microwave applications until better dielectric materials became available for the substrates in the 1960s. Another structure which had to wait for better materials was the dielectric resonator. Its advantages (compact size and high quality) were first pointed out by R. D. Richtmeyer in 1939, but materials with good temperature stability were not developed until the 1970s. Dielectric resonator filters are now common in waveguide and transmission line filters.
Important theoretical developments included Paul I. Richards' commensurate line theory, which was published in 1948, and Kuroda's identities, a set of transforms which overcame some practical limitations of Richards' theory, published by Kuroda in 1955. According to Nathan Cohen, the log-periodic antenna, invented by Raymond DuHamel and Dwight Isbell in 1957, should be considered the first fractal antenna. However, its self-similar nature, and hence its relation to fractals, was missed at the time. It is still not usually classed as a fractal antenna. Cohen was the first to explicitly identify the class of fractal antennae after being inspired by a lecture of Benoit Mandelbrot in 1987, but he could not get a paper published until 1995.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "[S] = \\begin{pmatrix}\n 0 & 0 & 1\\\\\n 1 & 0 & 0 \\\\\n 0 & 1 & 0\n\\end{pmatrix}"
}
] |
https://en.wikipedia.org/wiki?curid=57021483
|
57027
|
Fitts's law
|
Predictive model of human movement
Fitts's law (often cited as Fitts' law) is a predictive model of human movement primarily used in human–computer interaction and ergonomics. The law predicts that the time required to rapidly move to a target area is a function of the ratio between the distance to the target and the width of the target. Fitts's law is used to model the act of "pointing", either by physically touching an object with a hand or finger, or virtually, by pointing to an object on a computer monitor using a pointing device. It was initially developed by Paul Fitts.
Fitts's law has been shown to apply under a variety of conditions; with many different limbs (hands, feet, the lower lip, head-mounted sights), manipulanda (input devices), physical environments (including underwater), and user populations (young, old, special educational needs, and drugged participants).
Original model formulation.
The original 1954 paper by Paul Morris Fitts proposed a metric to quantify the difficulty of a target selection task.
The metric was based on an information analogy, where the distance to the center of the target ("D") is like a signal and the tolerance or width of the target ("W") is like noise.
The metric is Fitts's "index of difficulty" ("ID", in bits):
formula_0
Fitts also proposed an "index of performance" ("IP", in bits per second) as a measure of human performance. The metric combines a task's index of difficulty ("ID") with the movement time ("MT", in seconds) in selecting the target. In Fitts's words,
"The average rate of information generated by a series of movements is the average information per movement divided by the time per movement." Thus,
formula_1
Today, "IP" is more commonly called "throughput" ("TP"). It is also common to include an adjustment for accuracy in the calculation.
Researchers after Fitts began the practice of building linear regression equations and examining the correlation ("r") for goodness of fit. The equation expresses the relationship between
"MT" and the "D" and "W" task parameters:
formula_2
where:
"MT" is the average time taken to complete the movement
"a" and "b" are empirical constants determined by fitting a straight line to measured data
"D" is the distance from the starting point to the center of the target
"W" is the width of the target measured along the axis of motion
Since shorter movement times are desirable for a given task, the value of the "b" parameter can be used as a metric when comparing computer pointing devices against one another. The first human–computer interface application of Fitts's law was by Card, English, and Burr, who used the index of performance ("IP"), interpreted as 1⁄"b", to compare performance of different input devices, with the mouse coming out on top compared to the joystick or directional movement keys. This early work, according to Stuart Card's biography, "was a major factor leading to the mouse's commercial introduction by Xerox".
Many experiments testing Fitts's law apply the model to a dataset in which either distance or width, but not both, are varied. The model's predictive power deteriorates when both are varied over a significant range. Notice that because the "ID" term depends only on the "ratio" of distance to width, the model implies that a target distance and width combination can be re-scaled arbitrarily without affecting movement time, which is impossible. Despite its flaws, this form of the model does possess remarkable predictive power across a range of computer interface modalities and motor tasks, and has provided many insights into user interface design principles.
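The original formulation above can be expressed as a short computation. In the sketch below (Python), the regression constants a and b are hypothetical example values, not figures taken from this article.

    import math

    def index_of_difficulty(distance, width):
        # Fitts's original index of difficulty: ID = log2(2D / W), in bits
        return math.log2(2.0 * distance / width)

    def movement_time(distance, width, a=0.1, b=0.15):
        # Linear model MT = a + b * ID; a (seconds) and b (seconds per bit) are hypothetical
        return a + b * index_of_difficulty(distance, width)

    # Doubling the distance at a fixed width adds exactly one bit of difficulty
    print(index_of_difficulty(160, 20))        # 4.0 bits
    print(index_of_difficulty(320, 20))        # 5.0 bits
    print(round(movement_time(320, 20), 2))    # 0.85 s with the example constants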
Movement.
A movement during a single Fitts's law task can be split into two phases:
The first phase is defined by the distance to the target. In this phase the distance can be closed quickly while still being imprecise. The second movement tries to perform a slow and controlled precise movement to actually hit the target.
The task duration scales linearly in regards to difficulty. But as different tasks can have the same difficulty, it is derived that distance has a greater impact on the overall task completion time than target size.
It is often claimed that Fitts's law can be applied to eye tracking, but this is at least a controversial topic, as Drewes showed. During fast saccadic eye movements the user is effectively blind, whereas during a Fitts's law task the user consciously acquires the target and can actually see it, making the two types of interaction difficult to compare.
Bits per second: model innovations driven by information theory.
The formulation of Fitts's index of difficulty most frequently used in the human–computer interaction community is called the Shannon formulation:
formula_3
This form was proposed by Scott MacKenzie, professor at York University, and named for its resemblance to the Shannon–Hartley theorem. It describes the transmission of information using bandwidth, signal strength and noise. In Fitts's law, the distance represents signal strength, while target width is noise.
Using this form of the model, the difficulty of a pointing task was equated to a quantity of information transmitted (in units of bits) by performing the task. This was justified by the assertion that pointing reduces to an information processing task. Although no formal mathematical connection was established between Fitts's law and the Shannon-Hartley theorem it was inspired by, the Shannon form of the law has been used extensively, likely due to the appeal of quantifying motor actions using information theory. In 2002, ISO 9241 was published, providing standards for human–computer interface testing, including the use of the Shannon form of Fitts's law. It has been shown that the information transmitted via serial keystrokes on a keyboard and the information implied by the "ID" for such a task are not consistent. The Shannon entropy results in a different information value from Fitts's law. The authors note, though, that the error is negligible and only has to be accounted for in comparisons of devices with known entropy or measurements of human information processing capabilities.
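For comparison, the Shannon formulation can be computed in the same way. The brief sketch below (Python) evaluates both forms for a few hypothetical distance/width pairs; the two coincide at D = W and differ by at most about one bit as D/W grows.

    import math

    def id_original(distance, width):
        return math.log2(2.0 * distance / width)   # Fitts's original form

    def id_shannon(distance, width):
        return math.log2(distance / width + 1.0)   # Shannon (MacKenzie) form

    for d, w in [(20, 20), (80, 20), (320, 20)]:
        print(d, w, round(id_original(d, w), 2), round(id_shannon(d, w), 2))
    # The forms agree at D = W (1.0 bit each); at larger D/W the original form exceeds
    # the Shannon form, approaching a one-bit difference.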
Adjustment for accuracy: use of the effective target width.
An important improvement to Fitts's law was proposed by Crossman in 1956 (see Welford, 1968, pp. 147–148) and used by Fitts
in his 1964 paper with Peterson. With the adjustment, target width ("W") is replaced by an effective target width ("W"e).
"W"e is computed from the standard deviation in the selection coordinates gathered over a sequence of trials for a particular "D-W" condition. If the selections are logged as "x" coordinates along the axis of approach to the target, then
formula_4
This yields
formula_5
and hence
formula_6
If the selection coordinates are normally distributed, "W"e spans 96% of the distribution. If the observed error rate was 4% in the sequence of trials, then "W"e = "W". If the error rate was greater than 4%, "W"e > "W", and if the error rate was less than 4%, "W"e < "W". By using "W"e, a Fitts' law model more closely reflects what users actually did, rather than what they were asked to do.
The main advantage in computing "IP" as above is that spatial variability, or accuracy, is included in the measurement. With the adjustment for accuracy, Fitts's law more truly encompasses the speed-accuracy tradeoff. The equations above appear in ISO 9241-9 as the recommended method of computing "throughput".
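A minimal sketch (Python) of the adjustment for accuracy described above: the effective width is taken from the standard deviation of the selection endpoints, and throughput is the effective index of difficulty divided by movement time. The endpoint coordinates and movement time are made-up illustration values.

    import math
    import statistics

    def effective_width(selection_x):
        # We = 4.133 * standard deviation of endpoint coordinates along the axis of approach
        return 4.133 * statistics.stdev(selection_x)

    def throughput(distance, selection_x, movement_time_s):
        # IDe = log2(D / We + 1); throughput = IDe / MT, in bits per second
        id_e = math.log2(distance / effective_width(selection_x) + 1.0)
        return id_e / movement_time_s

    # Hypothetical endpoints (pixels) for a target 256 px away, with a mean movement time of 0.6 s
    endpoints = [250, 254, 257, 259, 261, 263, 255, 252, 258, 260]
    print(round(throughput(256, endpoints, 0.6), 2))  # bits per second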
Welford's model: innovations driven by predictive power.
Not long after the original model was proposed, a 2-factor variation was proposed under the intuition that target distance and width have separate effects on movement time. Welford's model, proposed in 1968, separated the influence of target distance and width into separate terms, and provided improved predictive power:
formula_7
This model has an additional parameter, so its predictive accuracy cannot be directly compared with 1-factor forms of Fitts's law. However, a variation on Welford's model, inspired by the Shannon formulation, is
formula_8
The additional parameter "k" allows the introduction of angles into the model, so that the user's position can be accounted for; the influence of the angle can be weighted using the exponent. This addition was introduced by Kopper et al. in 2010.
The formula reduces to the Shannon form when "k = 1". Therefore, this model "can" be directly compared against the Shannon form of Fitts's law using the F-test of nested models. This comparison reveals that not only does the Shannon form of Welford's model better predict movement times, but it is also more robust when control-display gain (the ratio between e.g. hand movement and cursor movement) is varied. Consequently, although the Shannon model is slightly more complex and less intuitive, it is empirically the best model to use for virtual pointing tasks.
Extending the model from 1D to 2D and other nuances.
Extensions to two or more dimensions.
In its original form, Fitts's law is meant to apply only to one-dimensional tasks. However, the original experiments required subjects to move a stylus (in three dimensions) between two metal plates on a table, termed the reciprocal tapping task. The target width perpendicular to the direction of movement was very wide to avoid it having a significant influence on performance. A major application for Fitts's law is 2D virtual pointing tasks on computer screens, in which targets have bounded sizes in both dimensions.
Fitts's law has been extended to two-dimensional tasks in two different ways. For navigating e.g. hierarchical pull-down menus, the user must generate a trajectory with the pointing device that is constrained by the menu geometry; for this application the Accot-Zhai steering law was derived.
For simply pointing to targets in a two-dimensional space, the model generally holds as-is but requires adjustments to capture target geometry and quantify targeting errors in a logically consistent way.
Multiple methods have been used to determine the target size.
While the "W"-model is sometimes considered the state-of-the-art measurement, the truly correct representation for non-circular targets is substantially more complex, as it requires computing the angle-specific convolution between the trajectory of the pointing device and the target.
Characterizing performance.
Since the "a" and "b" parameters should capture movement times over a potentially wide range of task geometries, they can serve as a performance metric for a given interface. In doing so, it is necessary to separate variation between users from variation between interfaces.
The "a" parameter is typically positive and close to zero, and sometimes ignored in characterizing average performance, as in Fitts' original experiment. Multiple methods exist for identifying parameters from experimental data, and the choice of method is the subject of heated debate, since method variation can result in parameter differences that overwhelm underlying performance differences.
An additional issue in characterizing performance is incorporating success rate: an aggressive user can achieve shorter movement times at the cost of experimental trials in which the target is missed. If the latter are not incorporated into the model, then average movement times can be artificially decreased.
Temporal targets.
Fitts's law deals only with targets defined in space. However, a target can be defined purely on the time axis, which is called a temporal target. A blinking target or a target moving toward a selection area are examples of temporal targets. Similar to space, the distance to the target (i.e., temporal distance "D"t) and the width of the target (i.e., temporal width "W"t) can be defined for temporal targets as well. The temporal distance is the amount of time a person must wait for a target to appear. The temporal width is a short duration from the moment the target appears until it disappears. For example, for a blinking target, "D"t can be thought of as the period of blinking and "W"t as the duration of the blinking. As with targets in space, the larger the "D"t or the smaller the "W"t, the more difficult it becomes to select the target.
The task of selecting the temporal target is called "temporal pointing". The model for temporal pointing was first presented to the human–computer interaction field in 2016. The model predicts the error rate, the human performance in temporal pointing, as a function of temporal index of difficulty ("ID"t):
formula_9
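The temporal index of difficulty can be computed in one line; the sketch below (Python) illustrates only the ratio, not the full error-rate model, and the timing values are hypothetical.

    import math

    def temporal_id(temporal_distance_s, temporal_width_s):
        # IDt = log2(Dt / Wt): longer waits or shorter selection windows make the task harder
        return math.log2(temporal_distance_s / temporal_width_s)

    # A target that appears every 2.0 s and remains selectable for 0.25 s
    print(temporal_id(2.0, 0.25))  # 3.0 bits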
Implications for UI design.
Multiple design guidelines for GUIs can be derived from the implications of Fitts's law. In its basic form, Fitts's law says that targets a user has to hit should be as big as possible. This is derived from the "W" parameter. More specifically, the effective size of the button should be as big as possible, meaning that its form has to be optimized for the direction of the user's movement onto the target.
Layouts should also cluster functions that are commonly used with each other. Optimizing for the "D" parameter in this way allows for smaller travel times.
Placing layout elements on the four edges of the screen allows for infinitely large targets in one dimension and therefore presents ideal scenarios. Since the pointer will always stop at the edge, the user can move the mouse with the greatest possible speed and still hit the target. The target area is effectively infinitely long along the movement axis. Therefore, this guideline is called “Rule of the infinite edges”. The use of this rule can be seen, for example, in MacOS, which always places the menu bar along the top edge of the screen instead of in the current program's window frame.
This effect can be exaggerated at the four corners of a screen. At these points two edges collide and form a theoretically infinitely big button. Microsoft Windows (prior to Windows 11) places its "Start" button in the lower left corner and Microsoft Office 2007 uses the upper left corner for its "Office" menu. These four spots are sometimes called "magic corners".
MacOS places the close button on the upper left side of the program window and the menu bar fills out the magic corner with another button.
A UI that allows for pop-up menus rather than fixed drop-down menus reduces travel times for the "D" parameter. The user can continue interaction right from the current mouse position and doesn't have to move to a different preset area. Many operating systems use this when displaying right-click context menus. As the menu starts right on the pixel which the user clicked, this pixel is referred to as the "magic" or "prime pixel".
James Boritz et al. (1991) compared radial menu designs. In a radial menu all items have the same distance from the prime pixel. The research suggests that in practical implementations the direction in which a user has to move their mouse must also be accounted for. For right-handed users, selecting the left-most menu item was significantly more difficult than the right-most one. No differences were found for transitions from upper to lower functions and vice versa.
References.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\text{ID} = \\log_2 \\Bigg(\\frac{2D} {W}\\Bigg)"
},
{
"math_id": 1,
"text": "\\text{IP} = \\Bigg(\\frac{\\text{ID}} {\\text{MT}}\\Bigg)"
},
{
"math_id": 2,
"text": "\\text{MT} = a + b \\cdot \\text{ID} = a + b \\cdot \\log_2 \\Bigg(\\frac{2D}{W}\\Bigg)"
},
{
"math_id": 3,
"text": "\\text{ID} = \\log_2 \\Bigg(\\frac{D}{W}+1\\Bigg)"
},
{
"math_id": 4,
"text": "W_e = 4.133 \\times SD_x"
},
{
"math_id": 5,
"text": "\\text{ID}_e = \\log_2 \\Bigg(\\frac{D}{W_e}+1\\Bigg)"
},
{
"math_id": 6,
"text": "\\text{IP} = \\Bigg(\\frac{ID_e} {MT}\\Bigg)"
},
{
"math_id": 7,
"text": "MT = a + b_1 \\log_2 (D) + b_2 \\log_2 (W)"
},
{
"math_id": 8,
"text": "MT = a + b_1 \\log_2 (D+W) + b_2 \\log_2 (W) = a + b\\log_2 \\left(\\frac{D+W}{W^k}\\right)"
},
{
"math_id": 9,
"text": "\\text{ID}_{t} = \\log_2 \\Bigg(\\frac{D_{t}}{W_{t}}\\Bigg)"
}
] |
https://en.wikipedia.org/wiki?curid=57027
|
57027555
|
Carbonate-associated sulfate
|
Carbonate-associated sulfates (CAS) are sulfate species found in association with carbonate minerals, either as inclusions, adsorbed phases, or in distorted sites within the carbonate mineral lattice. It is derived primarily from dissolved sulfate in the solution from which the carbonate precipitates. In the ocean, the source of this sulfate is a combination of riverine and atmospheric inputs, as well as the products of marine hydrothermal reactions and biomass remineralisation. CAS is a common component of most carbonate rocks, having concentrations in the parts per thousand within biogenic carbonates and parts per million within abiogenic carbonates. Through its abundance and sulfur isotope composition, it provides a valuable record of the global sulfur cycle across time and space.
Importance of sulfur (and CAS) to biogeochemistry.
Sulfur compounds play a major role in global climate, nutrient cycling, and the production and distribution of biomass. They can have significant effects on cloud formation and greenhouse forcing, and their distribution responds to the oxidation state of the atmosphere and oceans, as well as the evolution of different metabolic strategies. We can resolve the response of sulfur to biogeochemical change by measuring the abundance and isotopic composition of different sulfur species in different environments at different times.
But "how" do abundance and isotopic composition of different sulfur reservoirs inform our understanding of biogeochemical processes? The oxidation and reduction of sulfur species often involves the breakage or formation of chemical bonds involving S atoms. Because the thermodynamic stability of certain bonds is often greater when they involve heavier isotopes, an oxidation or reduction reaction can enrich the reactant pool (reservoir) or product pool in compounds containing the heavier isotope, relative to each other. This is known as an isotope effect. The extent to which such a mass-dependent reaction operates in the world's oceans or atmosphere determines how much heavier or lighter various reservoirs of sulfur species will become.
The largest sulfur pool on Earth is that of marine or "seawater" sulfate. Traditionally, the isotopic composition of seawater sulfate is obtained by analysis of sulfate minerals within evaporites, which are somewhat sparse in the geologic record, often poorly preserved, and necessarily associated with complicated and excursive events such as local sea level change. Marine barites are similarly limited. Carbonate-associated sulfate (CAS) provides geochemists with a more ubiquitous source of material for the direct measurement of seawater sulfate, provided the degree of secondary alteration and diagenetic history of the carbonate and CAS can be constrained.
Sulfate and the global sulfur cycle.
Earth's sulfur cycle is complex. Volcanoes release both reduced and oxidized sulfur species into the atmosphere, where they are further oxidized by reaction with oxygen to SO2 and various sulfates. These oxidized sulfur species enter groundwater and the oceans both directly (rain/snow) or by incorporation into biomass, which decays to sulfates and sulfides, again by a combination of biological and abiological processes. Some of this sulfate is reduced through microbial metabolism (microbial sulfate reduction or MSR) or by hydrothermal processes, yielding sulfides, thiosulfate, and elemental sulfur. Some reduced sulfur species are buried as metal-sulfide compounds, some are cyclically reduced and oxidized in the oceans and sediments indefinitely, and some are oxidized back into sulfate minerals, which precipitate out in tidal flats, lakes, and lagoons as evaporite deposits or are incorporated into the structure of carbonate and phosphate minerals in the ocean (i.e. as CAS).
Because oxidation-reduction reactions with sulfur species are often accompanied by mass-dependent fractionation, the sulfur isotope composition of the various pools of reduced and oxidized sulfur species in the water column, sediment, and rock record is a clue to how sulfur moves between those pools or "has moved" in the past. For example, sulfur at the time of Earth's formation should have (barring some accretion-related fractionation process for which there is little evidence) had a δ34S value of about 0‰, while sulfate in the modern oceans (the dominant marine sulfur species) has a δ34S of about +21‰. This implies that, over geologic time, a reservoir of correspondingly depleted (i.e. 34S-poor) sulfur was buried in the crust and possibly subducted into the deep mantle. This is because sulfate's reduction to sulfide is typically accompanied by a negative isotope effect, which (depending on the sulfate-reducing microorganism's enzymatic machinery, temperature, and other factors) can be tens of per mille. This effect can be compounded through sulfur disproportionation, a process by which some microbes reduce sulfate to sulfides "and" thiosulfate, both of which can be 34S-depleted by tens of per mille relative to the starting sulfate pool. Depleted sulfides and thiosulfate can then be repeatedly oxidized and reduced again, until the final, total sulfide pool that is measured has δ34S values of -70 or -80‰. The formation of a "lighter" S-isotope pool leaves behind an enriched pool, and so the enrichment of seawater sulfate is taken as evidence that some large amount of reduced sulfur (in the form, perhaps, of metal-sulfide minerals) was buried and incorporated into the crust.
Recording seawater sulfate.
Carbonate-associated sulfate (CAS) represents a small fraction of seawater sulfate, buried (and to some extent, preserved) with carbonate sediments. Thus, the changing δ34S value of CAS through time should theoretically scale with the changing amount of reduced sulfur species being buried as metal-sulfides and the correspondingly enriched ocean. The enrichment of marine sulfate in 34S should in turn scale with things like: the level of oxygen in the oceans and atmosphere, the initial appearance and proliferation of sulfur-reducing metabolisms among the world's microbial communities, and perhaps local-scale climate events and tectonism. The more positive the δ34S of marine sulfate, the more sulfate reduction and/or burial/removal of reduced, 34S-depleted sulfur species must be occurring.
There are some limitations, however, to the use of carbonate-associated sulfate's isotopic composition as a proxy for the isotopic composition of marine sulfate (and thus as a proxy for the response of the sulfur cycle to major climatological and geobiological events) through time. First, there is the question of: how representative is a particular carbonate rock's CAS of marine sulfate at the time of the rock's deposition? Various diagenetic processes (meaning: deformation by burial and exhumation, exposure to groundwater and meteoric fluids carrying sulfur species from more modern sources, etc.) can alter the abundance and isotopic composition of CAS. And so, carbonate mineral crystals used as a sulfur cycle proxy must be carefully selected to avoid highly altered or recrystallized material.
Significant to this problem is the position that carbonate-associated sulfate occupies in the structure of carbonate minerals. X-ray diffraction and reflectance spectroscopy have revealed how the replacement of the carbonate group with sulfate ion tetrahedra expands the crystal lattice. (It follows that higher Mg-content in the carbonate, which itself depends on the ocean's weathering inputs, pH, etc. and increases the distortion of the crystal lattice and rock volume, can also allow for the incorporation of "more" sulfate into the mineral structure.) Any processes that further distort the crystal lattice can cause sulfate to be lost from or added to the carbonate mineral, possibly overprinting the marine sulfate signal from the time of deposition.
On balance, CAS preserves and records the isotopic composition of seawater sulfate at the time of its deposition, provided the host carbonate has not been completely recrystallized or undergone replacement via sulfur-bearing fluids after burial. If the host carbonate has been altered in this way, CAS may contain a mixture of signals that is difficult to characterize.
Measuring.
Measuring abundance.
In measuring the abundance and isotopic composition of CAS, it is important to know exactly "what" is being measured: CAS within particular shell fragments, corals, microbialites, cements, or otherwise. The first step is therefore to separate out the desired component for measurement. This could mean drilling and powdering a rock (if the CAS measurement of the whole rock is desired) or sorting sediments by visual identification of particular microfossils or mineral phases, using fine tweezers and drills under a microscope. The fragments, sediments, or powders should be cleaned (likely by sonication) and exposed only to deionized and filtered water, so that no contaminant sulfur species are introduced, and the original CAS is not further reduced, oxidized, or otherwise altered. Next, the clean samples must be measured.
In one method, these samples are "digested" in an acid, likely HCl, which will liberate CAS from inclusions or the mineral lattice by dissolving the calcite mineral. The resulting sulfate ions are precipitated (often by mixture with barium chloride to produce barium sulfate), and the solid sulfate precipitate is filtered, dried, and transferred to an elemental analysis pipeline, which may involve the combustion of the sample and the mass balance of its various combustion products (which should include CO2 and SO2). Knowledge of the ratio of sulfur to oxygen and other components in the elemental analysis pipeline allows one to calculate the amount of sulfate introduced to the pipeline by the sample. This, along with the precise measurement of the original sample's mass and volume, yields a sulfate concentration for the original sample. The "combustion" and reaction to SO2 can also be bypassed by instead passing the acid-dissolved sample through an ion chromatography column, wherein different ions' polarity determines the strength of their interactions with polymers in the column, such that they are retained in the column for different amounts of time.
The concentration of CAS may also be measured by spectroscopic methods. This could mean using the characteristic X-ray-induced fluorescence of sulfur, oxygen, carbon, and other elements in the sample to determine the abundance and ratios of each component, or the energy spectrum of an electron beam transmitted through the sample.
It is also important to calibrate the measurement using standards of known sulfate concentration, so that the strength or intensity of the signal associated with each sample can be mapped to a particular abundance.
Measuring isotopic composition.
The abundance of CAS in a particular sample depends as much on the circumstances of a particular carbonate rock's formation and diagenetic history as it does on the processes acting on the marine sulfate pool that generated it. Thus, it is important to have both the abundance/concentration of CAS in a sample "and" its isotopic composition to understand its place in the marine sulfate record. As mentioned above, different biogeochemical processes produce different isotope effects under equilibrium and disequilibrium conditions: microbial sulfur reduction and sulfur disproportionation can produce equilibrium and kinetic isotope effects of many 10s of per mille. The sulfur isotope composition of the ocean (or a lake, lagoon, or other body) is critical to understanding the extent to which those processes controlled the global sulfur cycle throughout the past. Just as the carbon and oxygen isotope composition of the carbonate host rock can illuminate temperature and local climate history, the sulfur and oxygen isotope composition of CAS can illuminate the cause and effect relationships between that history and the sulfur cycle. Isotopic composition of CAS and carbonate host rock can both be measured by "elemental analysis" wherein sulfate or carbonate is "burned" or otherwise volatilized and the ionized isotopes are accelerated along a path, the length and duration of which is a function of their masses. The ratio of different isotopes to one another is assessed by comparison to blanks and standards. However, SO2, the analyte used in this method, presents some difficulties as the isotopic composition of the component oxygen may also vary, affecting the mass measurement. SO2 can also "stick" to or react with other compounds in the mass spectrometer line. Thus, if high precision is needed, sulfate samples are reduced to sulfides, which are then fluorinated to produce the inert and stable-isotopologue-free compound SF6, which can be passed through a specialized mass spectrometer. These methods, mass spectrometry and clumped isotope mass spectrometry, are discussed in greater detail in their primary articles.
The isotopic composition of CAS is often discussed in terms of δ34S, which is a way of expressing the ratio of the isotope 34S to 32S in a sample, relative to a standard such as the Canyon Diablo Troilite. δ34S (expressed in ‰) is equal to formula_0. The isotope effect of a particular process (microbial sulfate reduction, for example) is often expressed as an ε value (also in ‰), which refers to the difference between the δ values of the reactant pool and the product pool.
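As a minimal sketch (with made-up isotope ratios, and only an approximate, commonly cited value for the reference standard), these quantities can be computed as follows:

```python
# Illustrative delta-34S and epsilon calculations; the sample ratios are invented
# and the reference ratio is only an approximate, commonly cited value for V-CDT.
R_STANDARD = 0.0441626  # approximate 34S/32S of the Vienna Canyon Diablo Troilite scale

def delta34S(r_sample, r_standard=R_STANDARD):
    """delta-34S in per mille: (R_sample / R_standard - 1) * 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0

d_sulfate = delta34S(0.04485)   # hypothetical seawater-sulfate (reactant) pool
d_sulfide = delta34S(0.04290)   # hypothetical sedimentary-sulfide (product) pool

# epsilon: difference between the delta values of the reactant and product pools
epsilon = d_sulfate - d_sulfide
print(round(d_sulfate, 1), round(d_sulfide, 1), round(epsilon, 1))
```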
While studies of the sulfur isotope composition of seawater sulfate, CAS, marine barite, and evaporites typically discuss the relative 34S enrichment or depletion of these pools, there are other minor but stable isotopes of sulfur that can also be measured, though to lower precision given their rarity. These include 33S and 36S. Mass-dependent and mass-independent fractionation of minor sulfur isotopes may also be an important gauge for the sulfur cycle through geologic time. To be measured at high precision, however, 33S and 36S require fluorination to SF6 before passing through a mass spectrometer.
Interpreting measurements.
Interpreting the sulfur isotope composition of CAS can be complex. As discussed above, if seawater sulfate at a particular horizon in the geologic record gets heavier (i.e. more enriched in 34S relative to seawater sulfate before it) that could mean that the 34S-depleted products of sulfur-reducing reactions are being buried as sulfide minerals and removed from the oceans, possibly because of an instance of ocean anoxia or an increase in dissimilatory sulfate reduction by marine microorganisms. But it could also mean that the CAS measured at that particular horizon was derived not from seawater sulfate at the time of carbonate deposition, but from fluids moving through the sediment or porous rock from a later time, in which sulfate could have been enriched by processes in a more oxidizing world. It could mean that there is a hitherto uncharacterized kinetic isotope effect associated with the incorporation of sulfate into a particular carbonate texture (shrubs vs. nodules vs. acicular cements vs. other conformations). Distinguishing between the effects of true changes in ancient ocean dynamics/chemistry and the effects of early- and late-stage diagenesis on CAS isotope composition is possible only through careful analyses that: compare the CAS record to the seawater sulfate record preserved in evaporites and marine barite, "and" carefully screen samples for their thermodynamic stability and evidence of alteration. Such samples could include brachiopod shell fragments (which are made of stable, low-Mg calcite that visibly resists alteration after cementation).
Some important insights from CAS studies.
The CAS record can preserve evidence of major changes in oxidation state of the ocean in response to climate. For example, the Great Oxygenation Event led to the oxidation of reduced sulfur species, increasing the flux of sulfate into the oceans. This led to a corresponding depletion of 34S in the marine sulfate pool — a depletion recorded in the sulfur isotope composition of marginal marine evaporite deposits and CAS in marine carbonates.
Before the Great Oxygenation Event, when atmospheric and marine oxygen was low, it is expected that oxidized sulfur species like sulfate would have been much less abundant. Exactly how much less may be estimated from the δ34S value of sediments in modern analog environments like anoxic lakes, and their comparison to preserved Archean-age seawater sulfate (as found in CAS).
The Great Oxygenation Event led not just to the oxygenation of Earth's oceans, but to the development of the ozone layer. Prior to this, the Archean Earth was exposed to high-energy radiation that caused mass-independent fractionation of various pools, including sulfur (which would lead to an expected negative δ34S excursion in the marine sulfate pool). The marine sulfate record preserved in CAS complicates this view, as late or Neo-Archean CAS samples seem to have positive δ34S.
The CAS record may (or may not) preserve evidence of the rise of microbial sulfate reduction, in the form of a negative δ34S excursion between 2.7 and 2.5 Ga.
The variation in sulfur isotope composition of sulfate associated with the different components of a carbonate or phosphate rock may also provide insights into the diagenetic history of a sample and the degree of preservation of the original texture and chemistry in different types of grains.
Ongoing improvements to CAS studies.
Much of the ongoing work in the field of carbonate-associated sulfate is dedicated to characterizing sources of variation in the CAS record, answering questions like: how are sulfate ions incorporated into the mineral structure of different Ca-carbonate and Ca-Mg-carbonate morphotypes, mechanistically speaking? And which morphotypes are most likely to contain CAS derived from primary marine sulfate?
Just as for other geochemical proxies, the utility and reliability of CAS measurements will improve with the advent of more sensitive measurement techniques, and the characterization of more isotope standards.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\left [ \\frac{(^{34}S/^{32}S)_{sample}}{(^{34}S/^{32}S)_{standard}}-1 \\right ]*1000 "
}
] |
https://en.wikipedia.org/wiki?curid=57027555
|
57035748
|
Methane clumped isotopes
|
Methane molecules that contain two or more rare isotopes
Methane clumped isotopes are methane molecules that contain two or more rare isotopes. Methane (CH4) contains two elements, carbon and hydrogen, each of which has two stable isotopes. For carbon, 98.9% are in the form of carbon-12 (12C) and 1.1% are carbon-13 (13C); while for hydrogen, 99.99% are in the form of protium (1H) and 0.01% are deuterium (2H or D). Carbon-13 (13C) and deuterium (2H or D) are rare isotopes in methane molecules. The abundance of the clumped isotopes provides information independent from the traditional carbon or hydrogen isotope composition of methane molecules.
Introduction.
Isotopologues are molecules that have the same chemical composition, but differ only in their isotopic composition. Methane has ten stable isotopologues: 12CH4, 13CH4, 12CH3D, 13CH3D, 12CH2D2, 13CH2D2, 12CHD3, 13CHD3, 12CD4 and 13CD4, among which 12CH4 is the unsubstituted isotopologue; 13CH4 and 12CH3D are singly substituted isotopologues; and 13CH3D and 12CH2D2 are doubly substituted isotopologues. The multiply substituted isotopologues are the clumped isotopologues.
The absolute abundance of each isotopologue primarily depends on the traditional carbon and hydrogen isotope compositions ("δ"13C and "δ"D) of the molecules. Clumped isotope composition is calculated relative to the random distribution of carbon and hydrogen isotopes in the methane molecules. The deviations from this random distribution are the key signature of methane clumped isotopes (see "Notation" for details).
In thermodynamic equilibrium, methane clumped isotopologue composition has a monotonic relationship with formation temperature. This is the case in many geological environments, so methane clumped isotopes can record formation temperature and can therefore be used to identify the origins of methane. When methane clumped-isotope composition is controlled by kinetic effects, for example for microbial methane, it has the potential to be used to study metabolism.
The study of methane clumped isotopologues is very recent: the first mass spectrometry measurement of methane clumped isotopologues at natural abundance was made in 2014, and the field is young and fast-growing.
Notation.
Δ notation.
The Δ notation of clumped isotopes is an analogue to δ notation of traditional isotopes (e.g. "δ"13C, "δ"18O, "δ"15N, δ34S and "δ"D).
The notation of traditional isotopes are defined as:
formula_0‰
formula_1 is the ratio of the rare isotope to the abundant isotope in the sample. formula_2 is the same ratio in the reference material. Because the variation of formula_1 is rather small, for convenience of comparison between different samples the notation is defined as this ratio minus 1 and expressed in per mille (‰).
The Δ notation is inherited from the traditional δ notation, but the reference is not a physical reference material. Instead, the reference frame is defined as the stochastic distribution of isotopologues in the sample: the values of Δ denote the excess or deficit of an isotopologue relative to the amount expected if the material conformed to the stochastic distribution.
The calculation of stochastic distribution of methane isotopologues:
formula_3
formula_4
where formula_5 is defined as the abundance of 13CH3D molecules relative to 12CH4 molecules in random distribution; formula_6 is defined as the abundance of 12CH2D2 molecules relative to 12CH4 molecules in random distribution; formula_7 calculates the abundance of deuterium relative to protium in all methane molecules; formula_8 calculates the abundance of carbon-13 relative to carbon-12 in all methane molecules.
For the random distribution (i.e. probability distribution), the probability of choosing a carbon-13 atom over a carbon-12 atom is formula_9; the probability of choosing three protium atoms and one deuterium atom over four protium atoms is formula_10 (see "Combination"). Therefore, the probability of the occurrence of a 13CH3D molecule relative to the occurrence of a 12CH4 molecule is the product of formula_9 and formula_11, which gives formula_3. Similarly, the probability of choosing two protium atoms and two deuterium atoms over four protium atoms is formula_12. Therefore, the probability of the occurrence of a 12CH2D2 molecule relative to the occurrence of a 12CH4 molecule is formula_12, which gives formula_4.
The calculation of deviation from the random distribution:
formula_13
formula_14
where the actual abundance of 13CH3D molecules relative to 12CH4 molecules, and the actual abundance of 12CH2D2 molecules relative to 12CH4 molecules are calculated as follows:
formula_15
formula_16
The two Δ formulas are frequently used to report the abundance of clumped isotopologues of methane.
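As a minimal sketch of how these definitions are applied (the isotopologue abundances below are invented, and rarer isotopologues are neglected when estimating the bulk isotope ratios):

```python
# Illustrative calculation of methane clumped-isotope Delta values (in per mille)
# from relative isotopologue abundances; the input numbers are made up, and rarer
# isotopologues are neglected when estimating the bulk isotope ratios.

def clumped_deltas(n_12CH4, n_13CH4, n_12CH3D, n_13CH3D, n_12CH2D2):
    R13 = n_13CH4 / n_12CH4             # bulk 13C/12C
    R2 = (n_12CH3D / n_12CH4) / 4.0     # bulk D/H (each 12CH3D has 1 D out of 4 H sites)

    # Stochastic (random-distribution) expectations relative to 12CH4
    R_13CH3D_star = 4.0 * R2 * R13
    R_12CH2D2_star = 6.0 * R2 ** 2

    # Measured abundances relative to 12CH4
    R_13CH3D = n_13CH3D / n_12CH4
    R_12CH2D2 = n_12CH2D2 / n_12CH4

    D_13CH3D = (R_13CH3D / R_13CH3D_star - 1.0) * 1000.0
    D_12CH2D2 = (R_12CH2D2 / R_12CH2D2_star - 1.0) * 1000.0
    return D_13CH3D, D_12CH2D2

print(clumped_deltas(1.0, 0.011, 6.0e-4, 6.64e-6, 1.36e-7))
```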
The reason for choosing the stochastic distribution as the reference frame may be historical: in the process of developing CO2 clumped-isotope measurements, the only material with known clumped-isotope abundances was CO2 heated to 1000 °C. Nevertheless, this reference frame is a good choice, because the absolute abundance of each isotopologue depends primarily on the bulk carbon and hydrogen isotope compositions ("δ"13C and "δ"D) of the molecules, i.e. it is very close to the stochastic distribution. The deviation from the stochastic distribution, which is the key information embedded in the methane clumped isotopologues, is therefore denoted by Δ values.
Mass-18 notation.
Under some circumstances, the abundances of 13CH3D and 12CH2D2 isotopologues are only measured as a sum, which leads to the notation for isotopologues of mass-18 (i.e. 13CH3D and 12CH2D2):
formula_17
formula_18
Note that formula_19 is not just the sum of formula_20 and formula_21.
Inferred equilibration temperature.
formula_22 is the inferred equilibration temperature based on formula_19 values; formula_23 is the inferred equilibration temperature based on formula_20 values; and formula_24 is the inferred equilibration temperature based on formula_25 values (see "Equilibrium thermodynamics" for details). formula_22, formula_23, and formula_24 are also called clumped-isotope temperatures. When a Δ value is smaller than zero, there is no inferred equilibration temperature associated with it, because at any finite temperature the equilibrium Δ value is always positive.
Physical chemistry.
Equilibrium thermodynamics.
When formed or re-equilibrated in reversible reactions, methane molecules can exchange isotopes with each other or with other substances present, such as H2O, H2 and CO2, and reach internal isotopic equilibrium. As a result, clumped isotopologues are enriched relative to the stochastic distribution. formula_19 and formula_20 values of methane in internal isotopic equilibrium are predicted and verified to vary as monotonic functions of temperature of equilibration as follows:
formula_26
formula_27
Δ values are in permil (‰).
A similar relationship also applies to formula_25:
formula_28
Based on these correlations, formula_19, formula_20 and formula_25 can be used as geothermometers to indicate the formation temperature of methane (formula_22, formula_23 and formula_24), and the combination of formula_20 and formula_25 can help to determine whether methane was formed in internal isotopic equilibrium.
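As an illustration, a clumped-isotope temperature can be obtained by numerically inverting one of the calibrations above; the sketch below does this for formula_19 by bisection, assuming the measured value is positive and lies on the equilibrium curve:

```python
# Invert the equilibrium calibration Delta_18 = -0.0117*x**2 + 0.708*x - 0.337,
# where x = 1e6 / T**2 and T is in kelvin, to obtain the clumped-isotope
# temperature T_18. Simple bisection; assumes 273 K < T_18 < 1500 K, over which
# the equilibrium Delta_18 decreases monotonically with temperature.

def delta18_equilibrium(T):
    x = 1.0e6 / T ** 2
    return -0.0117 * x ** 2 + 0.708 * x - 0.337

def T18_from_delta18(delta18, T_low=273.0, T_high=1500.0, tol=1e-6):
    lo, hi = T_low, T_high
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if delta18_equilibrium(mid) > delta18:
            lo = mid   # equilibrium value at mid is too high -> true temperature is higher
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A measured Delta_18 of 2.5 permil corresponds to roughly 210 degrees Celsius.
print(round(T18_from_delta18(2.5) - 273.15, 1))
```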
Kinetic isotope effects.
A kinetic isotope effect (KIE) occurs in irreversible reactions, such as methanogenesis, and can drive methane clumped-isotopologue compositions away from thermodynamic equilibrium. Normally, the KIE drives formula_19 and formula_20 significantly lower than their equilibrium values, and even to negative values (i.e. more depleted in clumped isotopologues than the stochastic distribution). Such lower formula_19 and formula_20 values correspond to apparent formation temperatures that are significantly higher than the actual formation temperature, or to no possible temperature (when a Δ value is smaller than zero, there is no inferred equilibration temperature associated with it).
Mixing effect.
Mixing between end-members with different conventional carbon and hydrogen isotope compositions (i.e. δ13C, δD) results in non-linear variations in formula_19 or formula_20. This non-linearity results from the non-linear definition of formula_19 and formula_20 values in reference to the random distributions of methane isotopologues (formula_3 and formula_29, as in "Notation"), which are non-linear polynomial functions of δD and δ13C values. Such non-linearity can be a diagnostic signature for mixing if multiple samples of various mixing ratios can be measured. When end-members have similar δ13C or δD compositions, the non-linearity is negligible.
Measurement techniques.
Mass spectrometry.
On an isotope-ratio mass spectrometer, the measurement of clumped isotopologues has to be conducted on intact methane molecules, instead of converting methane to CO2, H2 or H2O. High mass resolution is required to distinguish different isotopologues of very close relative molecular mass (the same "cardinal mass"), e.g. 13CH4 and 12CH3D (17.03465 Da (daltons) versus 17.03758 Da), or 13CH3D and 12CH2D2 (18.04093 Da versus 18.04385 Da). Currently, two commercial models capable of such measurement are the Thermo Scientific 253 Ultra and the Panorama by Nu Instruments.
Infrared spectroscopy.
Tunable infrared laser direct absorption spectroscopy (TILDAS) has been developed to measure the abundance of 13CH3D with two continuous wave quantum cascade lasers.
Theoretical studies.
There have been several theoretical studies on the equilibrium thermodynamics of methane clumped isotopologues since 2008. These studies are "ab initio", i.e. based on underlying physical chemistry principles, and do not rely on empirical or laboratory data.
Ma et al. utilized first-principles quantum-mechanical molecular calculations (density functional theory, or DFT) to study the temperature dependence of the 13CH3D abundance. Cao and Liu estimated formula_19 and formula_20 based on statistical mechanics. Webb and Miller combined path-integral Monte Carlo methods with high-quality potential energy surfaces to compute equilibrium isotope effects on formula_20 more rigorously than the Urey model using reduced partition function ratios. Piasecki et al. performed first-principles calculations of the equilibrium distributions of all substituted isotopologues of methane.
The overall conclusion of theoretical studies is that formula_20 and formula_19 vary as monotonically decreasing functions of temperature, and that, for the same number of substitutions, enrichment decreases in the order multiply D-substituted > multiply 13C-D-substituted > multiply 13C-substituted isotopologues.
Distribution in nature.
Geosphere.
Many studies have observed thermogenic methane compositions in equilibrium. The reported formula_22 and formula_23 are normally distributed within the range of 72 to 298 °C (peak value: formula_30 °C), which aligns well with modeled results of methane formation temperature and yield. However, some thermogenic methane samples have clumped-isotope temperatures that are unrealistically high. Possible explanations for exceedingly high clumped-isotope temperatures include natural gas migration after formation, mixing effects, and the kinetic isotope effect of secondary cracking.
Biosphere.
Methanogenesis is a form of anaerobic respiration used by microbes, and microbial methanogenesis can occur in the deep subsurface, marine sediments, freshwater bodies, and other settings. It appears that methane from the deep subsurface and marine sediments is generally in internal isotopic equilibrium, while freshwater microbial methanogenesis expresses a large kinetic isotope effect on methane clumped isotope composition.
There are two possible explanations for this variance: firstly, substrate limitation may enhance the reversibility of methanogenesis, thus allowing methane to achieve internal isotopic equilibrium via rapid hydrogen exchange with water; secondly, activation of C-H bonds during anaerobic oxidation may proceed reversibly, such that C-H bonds are broken and reformed faster than the net rate of methane consumption and methane can be re-equilibrated.
Experimental studies.
Calibration of equilibrium thermodynamics.
Theoretical calculations have predicted formula_19 and formula_20 values of methane in internal isotopic equilibrium. As these calculations involve assumptions and approximations, the equilibrium distribution is only experimentally validated through the analysis of samples brought to thermodynamic equilibrium. Nickel and platinum catalysts have been used to equilibrate methane C-H bonds at various temperatures from 150 to 500 °C in the laboratory. Currently, catalytic equilibration is also the standard practice for developing reference materials for clumped isotope analysis.
Microbial culture.
Hydrogenotrophic methanogens utilize CO2 and H2 to produce methane by the following reaction:
CO2 + 4H2 → CH4 + 2H2O
Acetoclastic methanogens metabolize acetic acid (acetate) and produce methane:
CH3COOH → CH4 + CO2
In laboratories, the clumped isotope compositions of methane generated by hydrogenotrophic methanogens, acetoclastic methanogens (biodegradation of acetate), and methylotrophic methanogens are universally out of equilibrium. It has been proposed that the reversibility of the methanogenic enzymes is key to the kinetic isotope effect expressed in biogenic methane.
Pyrolysis of larger organic molecules.
Both pyrolysis of propane and closed-system hydrous pyrolysis of organic matter generate methane with formula_22 values consistent with experimental temperatures. Closed-system nonhydrous pyrolysis of coal yields a non-equilibrium distribution of methane isotopologues.
Sabatier reaction.
Methane synthesized by the Sabatier reaction is largely depleted in CH2D2 and slightly depleted in 13CH3D relative to the equilibrium state. It has been proposed that quantum tunneling effects result in the low "formula_25" observed in the experiments.
Applications.
Distinguishing origins of natural gas.
Biogenic, thermogenic and abiotic methane are formed at different temperatures, which can be recorded in the clumped isotope composition of methane. Combined with conventional carbon and hydrogen isotope fingerprints and gas wetness (the abundance of low-molecular-weight hydrocarbons), methane clumped isotopes can be used to identify the origins of methane in different types of natural gas accumulations.
Biogeochemistry of microbial methane.
In freshwater environments, significant kinetic isotope effects lead to a wide range of observed "formula_19" and "formula_20" values, which have the potential to provide insights into methanogenesis rates and chemical conditions in the corresponding environments.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\delta=(\\left ( \\frac{R_{sample}}{R_{reference}} \\right )-1)\\times1000"
},
{
"math_id": 1,
"text": "R_{sample}"
},
{
"math_id": 2,
"text": "R_{reference}"
},
{
"math_id": 3,
"text": "^{^{13}CH_3D}R^*= 4\\times{^2R}\\times{^{13}R} "
},
{
"math_id": 4,
"text": "^{^{12}CH_2D_2}R^*= 6\\times{^2R}^2 "
},
{
"math_id": 5,
"text": "^{^{13}CH_3D}R^* "
},
{
"math_id": 6,
"text": "^{^{12}CH_2D_2}R^* "
},
{
"math_id": 7,
"text": "{^2R}=\\frac{[D]}{[H]} "
},
{
"math_id": 8,
"text": "{^{13}R}=\\frac{[^{13}C]}{[^{12}C]} "
},
{
"math_id": 9,
"text": "{^{13}R} "
},
{
"math_id": 10,
"text": "\\binom 41 \\times{^2R} "
},
{
"math_id": 11,
"text": "4\\times{^2R} "
},
{
"math_id": 12,
"text": "\\binom 42\\times{^2R}^2 "
},
{
"math_id": 13,
"text": "\\Delta_{^{13}CH_3D}=\\left ( \\frac{^{^{13}CH_3D}R}{^{^{13}CH_3D}R^*} \\right )-1"
},
{
"math_id": 14,
"text": "\\Delta_{{}^{12}CH_2D_2}=\\left ( \\frac{^{^{12}CH_2D_2}R}{^{^{12}CH_2D_2}R^*} \\right )-1"
},
{
"math_id": 15,
"text": "^{^{13}CH_3D}R= \\frac{[^{13}CH_3D]}{[^{12}CH_4]} "
},
{
"math_id": 16,
"text": "^{^{12}CH_2D_2}R= \\frac{[^{12}CH_2D_2]}{[^{12}CH_4]} "
},
{
"math_id": 17,
"text": "^{18}R= ^{^{13}CH_3D}R+^{^{12}CH_2D_2}R=\\frac{[^{13}CH_3D]+[^{12}CH_2D_2]}{[^{12}CH_4]} "
},
{
"math_id": 18,
"text": "\\Delta_{18}=\\left ( \\frac{^{18}R}{^{18}R^*} \\right )-1"
},
{
"math_id": 19,
"text": "\\Delta_{18}"
},
{
"math_id": 20,
"text": "\\Delta_{^{13}CH_3D}"
},
{
"math_id": 21,
"text": "\\Delta_{{}^{12}CH_2D_2}\n"
},
{
"math_id": 22,
"text": "T_{18}"
},
{
"math_id": 23,
"text": "T_{^{13}CH_3D}"
},
{
"math_id": 24,
"text": "T_{^{12}CH_2D_2}"
},
{
"math_id": 25,
"text": "\\Delta_{^{12}CH_2D_2}"
},
{
"math_id": 26,
"text": "\\Delta_{18}=-0.0117{\\left ( \\frac{10^6}{T^2} \\right )}^2+0.708\\left ( \\frac{10^6}{T^2} \\right )-0.337"
},
{
"math_id": 27,
"text": "\\Delta_{^{13}CH_3D}=-0.0141{\\left ( \\frac{10^6}{T^2} \\right )}^2+0.699\\left ( \\frac{10^6}{T^2} \\right )-0.311"
},
{
"math_id": 28,
"text": "\\Delta_{^{12}CH_2D_2}={\\left ( \\frac{0.183798}T \\right )}-{\\left ( \\frac{785.483}{T^2} \\right )}+\\left ( \\frac{1056280.0}{T^3} \\right )+\\left ( \\frac{9.37307\\times10^7}{T^4} \\right )\n-\\left ( \\frac{8.919480\\times10^{10}}{T^5} \\right )+\\left ( \\frac{9.901730\\times10^{12}}{T^6} \\right )"
},
{
"math_id": 29,
"text": "^{18}R^*= ^{^{13}CH_3D}R^*+^{^{12}CH_2D_2}R^*=4\\times{^2R}\\times{^{13}R}+6\\times{^2R}^2 "
},
{
"math_id": 30,
"text": "175\\pm47"
}
] |
https://en.wikipedia.org/wiki?curid=57035748
|
5703575
|
Virtual screening
|
Virtual screening (VS) is a computational technique used in drug discovery to search libraries of small molecules in order to identify those structures which are most likely to bind to a drug target, typically a protein receptor or enzyme.
Virtual screening has been defined as "automatically evaluating very large libraries of compounds" using computer programs. As this definition suggests, VS has largely been a numbers game focusing on how the enormous chemical space of over 10^60 conceivable compounds can be filtered to a manageable number that can be synthesized, purchased, and tested. Although searching the entire chemical universe may be a theoretically interesting problem, more practical VS scenarios focus on designing and optimizing targeted combinatorial libraries and enriching libraries of available compounds from in-house compound repositories or vendor offerings. As the accuracy of the method has increased, virtual screening has become an integral part of the drug discovery process. Virtual screening can be used to select compounds from in-house databases for screening, to choose compounds that can be purchased externally, and to choose which compound should be synthesized next.
Methods.
There are two broad categories of screening techniques: ligand-based and structure-based.
Ligand-based methods.
Given a set of structurally diverse ligands that bind to a receptor, a model of the receptor can be built by exploiting the collective information contained in such a set of ligands. Different computational techniques explore the structural, electronic, molecular-shape, and physicochemical similarities of different ligands that could imply their mode of action against a specific molecular receptor or cell line. A candidate ligand can then be compared to the pharmacophore model to determine whether it is compatible with it and therefore likely to bind. Different 2D chemical similarity analysis methods have been used to scan databases to find active ligands. Another popular approach in ligand-based virtual screening consists of searching for molecules with shapes similar to those of known actives, as such molecules will fit the target's binding site and hence will be likely to bind the target. There are a number of prospective applications of this class of techniques in the literature. Pharmacophoric extensions of these 3D methods are also freely available as web servers. Shape-based virtual screening has also gained significant popularity.
Structure-based methods.
The structure-based virtual screening approach includes different computational techniques that consider the structure of the receptor that is the molecular target of the investigated active ligands. Some of these techniques include molecular docking, structure-based pharmacophore prediction, and molecular dynamics simulations. Molecular docking is the most widely used structure-based technique; it applies a scoring function to estimate the fitness of each ligand against the binding site of the macromolecular receptor, helping to choose the ligands with the highest affinity. Currently, there are some web servers oriented to prospective virtual screening.
Hybrid methods.
Hybrid methods that rely on structural and ligand similarity have also been developed to overcome the limitations of traditional VLS approaches. These methodologies utilize evolution-based ligand-binding information to predict small-molecule binders and can employ both global structural similarity and pocket similarity. A global structural similarity based approach employs either an experimental structure or a predicted protein model to find structural similarity with proteins in the PDB holo-template library. Upon detecting significant structural similarity, a 2D fingerprint-based Tanimoto coefficient metric is applied to screen for small molecules that are similar to ligands extracted from the selected holo PDB templates. The predictions from this method have been experimentally assessed and show good enrichment in identifying active small molecules.
The above specified method depends on global structural similarity and is not capable of "a priori" selecting a particular ligand‐binding site in the protein of interest. Further, since the methods rely on 2D similarity assessment for ligands, they are not capable of recognizing stereochemical similarity of small-molecules that are substantially different but demonstrate geometric shape similarity. To address these concerns, a new pocket centric approach, "PoLi," capable of targeting specific binding pockets in holo‐protein templates, was developed and experimentally assessed.
Computing Infrastructure.
The computation of pair-wise interactions between atoms, which is a prerequisite for the operation of many virtual screening programs, scales as formula_0, where "N" is the number of atoms in the system. Due to this quadratic scaling, the computational costs increase quickly with system size.
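A toy sketch of why this scaling arises is shown below; the inverse-distance potential is a placeholder rather than a real force field:

```python
# Toy illustration of the O(N**2) cost of pair-wise interaction evaluation:
# every atom interacts with every other atom, giving N*(N-1)/2 unique pairs.
import math
import random

def pairwise_energy(coords):
    """Sum a simple inverse-distance pair potential over all unique atom pairs."""
    energy = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):        # N*(N-1)/2 pair evaluations
            dx = coords[i][0] - coords[j][0]
            dy = coords[i][1] - coords[j][1]
            dz = coords[i][2] - coords[j][2]
            r = math.sqrt(dx * dx + dy * dy + dz * dz)
            energy += 1.0 / r            # placeholder potential, not a real force field
    return energy

atoms = [(random.uniform(0, 10), random.uniform(0, 10), random.uniform(0, 10))
         for _ in range(200)]
print(pairwise_energy(atoms))            # doubling N roughly quadruples the work
```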
Ligand-based Approach.
Ligand-based methods typically require a fraction of a second for a single structure comparison operation. Sometimes a single CPU is enough to perform a large screening within hours. However, several comparisons can be made in parallel in order to expedite the processing of a large database of compounds.
Structure-based Approach.
The size of the task requires a parallel computing infrastructure, such as a cluster of Linux systems, running a batch queue processor to handle the work, such as Sun Grid Engine or Torque PBS.
A means of handling the input from large compound libraries is needed. This requires a form of compound database that can be queried by the parallel cluster, delivering compounds in parallel to the various compute nodes. Commercial database engines may be too ponderous, and a high speed indexing engine, such as Berkeley DB, may be a better choice. Furthermore, it may not be efficient to run one comparison per job, because the ramp up time of the cluster nodes could easily outstrip the amount of useful work. To work around this, it is necessary to process batches of compounds in each cluster job, aggregating the results into some kind of log file. A secondary process, to mine the log files and extract high scoring candidates, can then be run after the whole experiment has been run.
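A hypothetical sketch of this batching-and-mining workflow is shown below; the compound identifiers, file name, and scoring function are placeholders rather than a real docking engine or queue system:

```python
# Hypothetical sketch of the batch-processing workflow described above: each cluster
# job scores a batch of compounds and appends results to a log file; a secondary
# pass mines the logs for high-scoring candidates.
import csv

def score_compound(smiles):
    # Placeholder: a real job would call a docking or similarity engine here.
    return float(len(smiles) % 7)

def run_batch(compound_batch, log_path):
    with open(log_path, "a", newline="") as log:
        writer = csv.writer(log)
        for cid, smiles in compound_batch:
            writer.writerow([cid, score_compound(smiles)])

def mine_logs(log_path, cutoff):
    hits = []
    with open(log_path, newline="") as log:
        for cid, score in csv.reader(log):
            if float(score) >= cutoff:
                hits.append((cid, float(score)))
    return sorted(hits, key=lambda x: x[1], reverse=True)

run_batch([("CMPD-1", "CCO"), ("CMPD-2", "c1ccccc1O")], "screen.log")
print(mine_logs("screen.log", cutoff=2.0))
```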
Accuracy.
The aim of virtual screening is to identify molecules of novel chemical structure that bind to the macromolecular target of interest. Thus, success of a virtual screen is defined in terms of finding interesting new scaffolds rather than the total number of hits. Interpretations of virtual screening accuracy should, therefore, be considered with caution. Low hit rates of interesting scaffolds are clearly preferable over high hit rates of already known scaffolds.
Most tests of virtual screening studies in the literature are retrospective. In these studies, the performance of a VS technique is measured by its ability to retrieve a small set of previously known molecules with affinity to the target of interest (active molecules, or just actives) from a library containing a much higher proportion of assumed inactives or decoys. There are several distinct ways to select decoys by matching the properties of the corresponding active molecules, and more recently decoys have also been selected in a property-unmatched manner. The actual impact of decoy selection, either for training or testing purposes, has also been discussed.
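Retrospective performance is often summarized with an enrichment factor; a minimal sketch of this calculation, using an invented ranking of actives and decoys, might look as follows:

```python
# Illustrative enrichment-factor calculation for a retrospective benchmark:
# EF(x%) = (actives found in the top x% / compounds in the top x%)
#          / (total actives / total compounds).

def enrichment_factor(ranked_labels, fraction):
    """ranked_labels: 1 for active, 0 for decoy, ordered best score first."""
    n = len(ranked_labels)
    n_top = max(1, int(round(n * fraction)))
    hits_top = sum(ranked_labels[:n_top])
    total_hits = sum(ranked_labels)
    if total_hits == 0:
        return 0.0
    return (hits_top / n_top) / (total_hits / n)

# Made-up ranking of 20 compounds containing 4 actives (1) and 16 decoys (0).
ranking = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
print(enrichment_factor(ranking, 0.10))  # enrichment in the top 10%
```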
By contrast, in prospective applications of virtual screening, the resulting hits are subjected to experimental confirmation (e.g., IC50 measurements). There is consensus that retrospective benchmarks are not good predictors of prospective performance and consequently only prospective studies constitute conclusive proof of the suitability of a technique for a particular target.
Application to drug discovery.
Virtual screening is a very useful technique for identifying hit molecules as a starting point for medicinal chemistry. As the virtual screening approach has become a more vital and substantial technique within the medicinal chemistry industry, its use has expanded rapidly.
Ligand-based methods.
When the receptor structure is not known, ligand-based methods try to predict how the ligands will bind to the receptor. Using pharmacophore features, hydrogen-bond donors and acceptors are identified for each ligand; equivalent features are then overlaid, although it is unlikely that there is a single correct solution.
Pharmacophore models.
This technique is used to merge the results of searches that use different reference compounds with the same descriptors and similarity coefficient, but different active compounds. It is beneficial because it is more efficient than using just a single reference structure, and it gives the most accurate performance when the actives are diverse.
A pharmacophore is an ensemble of steric and electronic features that are needed for an optimal supramolecular interaction, or interactions, with a biological target structure in order to trigger its biological response. A representative set of actives is chosen, and most methods then look for similar binding features. It is preferable to have multiple rigid molecules, and the ligands should be diverse; in other words, they should differ in features that are not involved in binding.
Shape-Based Virtual Screening.
Shape-based molecular similarity approaches have been established as important and popular virtual screening techniques. At present, the highly optimized screening platform ROCS (Rapid Overlay of Chemical Structures) is considered the de facto industry standard for shape-based, ligand-centric virtual screening. It uses a Gaussian function to define the molecular volumes of small organic molecules. The selection of the query conformation is less important, rendering shape-based screening well suited to ligand-based modeling: the availability of a bioactive conformation for the query is not the limiting factor for screening; rather, the selection of the query compound(s) is decisive for screening performance. Other shape-based molecular similarity methods, such as Blaze (formerly known as FieldScreen) and Autodock-SS, have also been developed.
Field-Based Virtual Screening.
As an improvement to Shape-Based similarity methods, Field-Based methods try to take into account all the fields that influence a ligand-receptor interaction while being agnostic of the chemical structure used as a query. Examples of other fields that are used in these methods are electrostatic or hydrophobic fields.
Quantitative structure–activity relationship.
Quantitative structure–activity relationship (QSAR) models are predictive models based on information extracted from a set of known active and known inactive compounds, whereas structure–activity relationships (SARs) treat the data qualitatively and can be used with structural classes and more than one binding mode. These models prioritize compounds for lead discovery.
Machine learning algorithms.
Machine learning algorithms have been widely used in virtual screening approaches. Supervised learning techniques use training and test datasets composed of known active and known inactive compounds. Different ML algorithms have been applied with success in virtual screening strategies, such as recursive partitioning, support vector machines, random forests, k-nearest neighbors and neural networks. These models estimate the probability that a compound is active and then rank each compound based on that probability.
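A hedged sketch of this train / predict-probability / rank workflow is given below; the bit-vector "fingerprints" and activity labels are randomly generated stand-ins for real descriptors, not actual screening data:

```python
# Sketch of a machine-learning ranking step in ligand-based virtual screening.
# The "fingerprints" are random bit vectors standing in for real molecular
# fingerprints, and the activity labels are invented; the point is only the
# train / predict-probability / rank workflow.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.integers(0, 2, size=(200, 128))   # 200 known compounds, 128-bit fingerprints
y_train = rng.integers(0, 2, size=200)          # 1 = known active, 0 = known inactive

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

X_library = rng.integers(0, 2, size=(1000, 128))   # unscreened virtual library
p_active = model.predict_proba(X_library)[:, 1]    # probability of being active

ranked = np.argsort(p_active)[::-1]                # best candidates first
print(ranked[:10], p_active[ranked[:10]])
```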
Substructural analysis in Machine Learning.
The first machine learning model used on large datasets was substructural analysis, which was created in 1973. In this approach, each fragment substructure makes a continuous contribution to an activity of a specific type. Substructural analysis is a method that overcomes the difficulty of massive dimensionality when analyzing structures in drug design. An efficient substructure analysis has also been used for structures that have similarities to a multi-level building or tower, where geometry is used to number the boundary joints of a given structure from the base towards the top; when special static condensation and substitution routines are developed, this method proves more productive than previous substructure analysis models.
Recursive partitioning.
Recursive partitioning is a method that creates a decision tree using qualitative data, finding rules that split the classes with a low misclassification error and repeating each step until no sensible splits can be found. However, recursive partitioning can have poor prediction ability, even while producing very finely partitioned models.
Structure-based methods: protein–ligand docking.
A ligand can be docked into an active site within a protein by using a docking search algorithm and a scoring function, in order to identify the most likely binding pose for each individual ligand and to assign a priority order.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "O(N^{2})"
}
] |
https://en.wikipedia.org/wiki?curid=5703575
|
5703638
|
BBGKY hierarchy
|
Set of equations describing the dynamics of a system of many interacting particles
In statistical physics, the BBGKY hierarchy (Bogoliubov–Born–Green–Kirkwood–Yvon hierarchy, sometimes called Bogoliubov hierarchy) is a set of equations describing the dynamics of a system of a large number of interacting particles. The equation for an "s"-particle distribution function (probability density function) in the BBGKY hierarchy includes the ("s" + 1)-particle distribution function, thus forming a coupled chain of equations. This formal theoretic result is named after Nikolay Bogolyubov, Max Born, Herbert S. Green, John Gamble Kirkwood, and Jacques Yvon.
Formulation.
The evolution of an "N"-particle system in absence of quantum fluctuations is given by the Liouville equation for the probability density function formula_0 in 6"N"-dimensional phase space (3 space and 3 momentum coordinates per particle)
formula_1
where formula_2 are the coordinates and momentum for formula_3-th particle with mass formula_4, and the net force acting on the formula_3-th particle is
formula_5
where formula_6 is the pair potential for interaction between particles, and formula_7 is the external-field potential. By integration over part of the variables, the Liouville equation can be transformed into a chain of equations in which the first equation connects the evolution of the one-particle probability density function with the two-particle probability density function, the second equation connects the two-particle probability density function with the three-particle probability density function, and generally the "s"-th equation connects the "s"-particle probability density function
formula_8
with the ("s" + 1)-particle probability density function:
formula_9
The equation above for the "s"-particle distribution function is obtained by integration of the Liouville equation over the variables formula_10. The problem with the above equation is that it is not closed: to solve for formula_11, one has to know formula_12, which in turn demands solving for formula_13, and so on all the way back to the full Liouville equation. However, one can solve for formula_11 if formula_12 can be modeled. One such case is the Boltzmann equation for formula_14, where formula_15 is modeled on the basis of the molecular chaos hypothesis. In fact, in the Boltzmann equation formula_16 enters through the collision integral. This limiting process of obtaining the Boltzmann equation from the Liouville equation is known as the Boltzmann–Grad limit.
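For example, setting "s" = 1 in the general equation above (the sum over "j" ≠ "i" is then empty) gives the first member of the hierarchy, which couples the one-particle distribution to the two-particle distribution through the pair potential. Written out explicitly in the notation used here, it reads

$$\frac{\partial f_1}{\partial t} + \frac{\mathbf{p}_1}{m} \frac{\partial f_1}{\partial \mathbf{q}_1} - \frac{\partial \Phi_1^\text{ext}}{\partial \mathbf{q}_1} \frac{\partial f_1}{\partial \mathbf{p}_1} = (N-1) \int \frac{\partial \Phi_{12}}{\partial \mathbf{q}_1} \frac{\partial f_2}{\partial \mathbf{p}_1} \, d\mathbf{q}_2 \, d\mathbf{p}_2 .$$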
Physical interpretation and applications.
Schematically, the Liouville equation gives us the time evolution for the whole formula_17-particle system in the form formula_18, which expresses an incompressible flow of the probability density in phase space. We then define the reduced distribution functions incrementally by integrating out another particle's degrees of freedom formula_19. An equation in the BBGKY hierarchy tells us that the time evolution for such a formula_11 is consequently given by a Liouville-like equation, but with a correction term that represents force-influence of the formula_20 suppressed particles
formula_21
The problem of solving the BBGKY hierarchy of equations is as hard as solving the original Liouville equation, but approximations for the BBGKY hierarchy (which allow truncation of the chain into a finite system of equations) can readily be made. The merit of these equations is that the higher distribution functions formula_22 affect the time evolution of formula_11 only implicitly via formula_23 Truncation of the BBGKY chain is a common starting point for many applications of kinetic theory that can be used for derivation of classical or quantum kinetic equations. In particular, truncation at the first equation or the first two equations can be used to derive classical and quantum Boltzmann equations and the first order corrections to the Boltzmann equations. Other approximations, such as the assumption that the density probability function depends only on the relative distance between the particles or the assumption of the hydrodynamic regime, can also render the BBGKY chain accessible to solution.
Bibliography.
"s"-particle distribution functions were introduced in classical statistical mechanics by J. Yvon in 1935. The BBGKY hierarchy of equations for "s"-particle distribution functions was written out and applied to the derivation of kinetic equations by Bogoliubov in the article received in July 1945 and published in 1946 in Russian and in English. The kinetic transport theory was considered by Kirkwood in the article received in October 1945 and published in March 1946, and in the subsequent articles. The first article by Born and Green considered a general kinetic theory of liquids and was received in February 1946 and published on 31 December 1946.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "f_N = f_N(\\mathbf{q}_1 \\dots \\mathbf{q}_N, \\mathbf{p}_1 \\dots \\mathbf{p}_N, t)"
},
{
"math_id": 1,
"text": "\n\\frac{\\partial f_N}{\\partial t} + \\sum_{i=1}^N \\frac{\\mathbf{p}_i}{m} \\frac{\\partial f_N}{\\partial \\mathbf{q}_i} + \\sum_{i=1}^N \\mathbf{F}_i \\frac{\\partial f_N}{\\partial \\mathbf{p}_i} = 0,\n"
},
{
"math_id": 2,
"text": "\\mathbf{q}_i, \\mathbf{p}_i"
},
{
"math_id": 3,
"text": "i"
},
{
"math_id": 4,
"text": "m"
},
{
"math_id": 5,
"text": "\\mathbf{F}_i = -\\sum_{j=1 \\neq i}^N \\frac{\\partial \\Phi_{ij}}{\\partial \\mathbf{q}_i} - \\frac{\\partial \\Phi_i^\\text{ext}}{\\partial \\mathbf{q}_i},"
},
{
"math_id": 6,
"text": "\\Phi_{ij}(\\mathbf{q}_i, \\mathbf{q}_j)"
},
{
"math_id": 7,
"text": "\\Phi^\\text{ext}(\\mathbf{q}_i)"
},
{
"math_id": 8,
"text": "f_s(\\mathbf{q}_1 \\dots \\mathbf{q}_s, \\mathbf{p}_1 \\dots \\mathbf{p}_s, t) = \\int f_N(\\mathbf{q}_1 \\dots \\mathbf{q}_N, \\mathbf{p}_1 \\dots \\mathbf{p}_N, t) \\,d\\mathbf{q}_{s+1} \\dots d\\mathbf{q}_N \\,d\\mathbf{p}_{s+1} \\dots d\\mathbf{p}_N"
},
{
"math_id": 9,
"text": "\n\\frac{\\partial f_s}{\\partial t} + \\sum_{i=1}^s \\frac{\\mathbf{p}_i}{m} \\frac{\\partial f_s}{\\partial \\mathbf{q}_i} - \\sum_{i=1}^s \\left( \\sum_{j=1\\neq i}^s \\frac{\\partial \\Phi_{ij}}{\\partial \\mathbf{q}_i} + \\frac{\\partial \\Phi_i^{ext}}{\\partial \\mathbf{q}_i} \\right) \\frac{\\partial f_s}{\\partial \\mathbf{p}_i} = (N-s) \\sum_{i=1}^s \\int \\frac{\\partial \\Phi_{i\\,s+1}}{\\partial \\mathbf{q}_i} \\frac{\\partial f_{s+1}}{\\partial \\mathbf{p}_i} \\,d\\mathbf{q}_{s+1} \\,d\\mathbf{p}_{s+1}.\n"
},
{
"math_id": 10,
"text": "\\mathbf{q}_{s+1} \\dots \\mathbf{q}_N, \\mathbf{p}_{s+1} \\dots \\mathbf{p}_N"
},
{
"math_id": 11,
"text": "f_s"
},
{
"math_id": 12,
"text": "f_{s+1}"
},
{
"math_id": 13,
"text": "f_{s+2}"
},
{
"math_id": 14,
"text": "f_1(\\mathbf{q}_1, \\mathbf{p}_1, t)"
},
{
"math_id": 15,
"text": "f_2(\\mathbf{q}_1, \\mathbf{q}_2, \\mathbf{p}_1, \\mathbf{p}_2, t)"
},
{
"math_id": 16,
"text": "f_2 = f_2(\\mathbf{p}_1, \\mathbf{p_2}, t)"
},
{
"math_id": 17,
"text": "N"
},
{
"math_id": 18,
"text": "Df_N=0"
},
{
"math_id": 19,
"text": "f_s \\sim \\int f_{s+1}"
},
{
"math_id": 20,
"text": "N-s"
},
{
"math_id": 21,
"text": "D f_s \\propto \\text{div}_{\\mathbf p} \\langle \\text{grad}_{\\mathbf q}\\Phi_{i,s+1}\\rangle_{f_{s+1}}."
},
{
"math_id": 22,
"text": "f_{s+2},f_{s+3},\\dots"
},
{
"math_id": 23,
"text": "f_{s+1}."
}
] |
https://en.wikipedia.org/wiki?curid=5703638
|
57042156
|
Fractionation of carbon isotopes in oxygenic photosynthesis
|
Photosynthesis converts carbon dioxide to carbohydrates via several metabolic pathways that provide energy to an organism and preferentially react with certain stable isotopes of carbon. The selective enrichment of one stable isotope over another creates distinct isotopic fractionations that can be measured and correlated among oxygenic phototrophs. The degree of carbon isotope fractionation is influenced by several factors, including the metabolism, anatomy, growth rate, and environmental conditions of the organism. Understanding these variations in carbon fractionation across species is useful for biogeochemical studies, including the reconstruction of paleoecology, plant evolution, and the characterization of food chains.
Oxygenic photosynthesis is a metabolic pathway facilitated by autotrophs, including plants, algae, and cyanobacteria. This pathway converts inorganic carbon dioxide from the atmosphere or aquatic environment into carbohydrates, using water and energy from light, then releases molecular oxygen as a product. Organic carbon contains less of the stable isotope carbon-13, or 13C, relative to the initial inorganic carbon from the atmosphere or water, because photosynthetic carbon fixation involves several fractionating reactions with kinetic isotope effects. These reactions undergo a kinetic isotope effect because they are limited by overcoming an activation energy barrier. The lighter isotope has a higher zero-point energy in the potential well of a chemical bond, so its bonds are broken more readily and it is preferentially incorporated into products. Different organisms fix carbon through different mechanisms, which are reflected in the varying isotope compositions across photosynthetic pathways (see the explanation of notation in the "Carbon isotope measurement" section below). The following sections outline the different oxygenic photosynthetic pathways and what contributes to their associated delta values.
Carbon isotope measurement.
Carbon on Earth naturally occurs in two stable isotopes, with 98.9% in the form of 12C and 1.1% in 13C. The ratio between these isotopes varies in biological organisms due to metabolic processes that selectively use one carbon isotope over the other, or "fractionate" carbon through kinetic or thermodynamic effects. Oxygenic photosynthesis takes place in plants and microorganisms through different chemical pathways, so various forms of organic material reflect different ratios of 13C isotopes. Understanding these variations in carbon fractionation across species is applied in isotope geochemistry and ecological isotope studies to understand biochemical processes, establish food chains, or model the carbon cycle through geological time.
Carbon isotope fractionations are expressed using the delta notation "δ"13C ("delta thirteen C"), which is reported in parts per thousand (per mille, ‰). "δ"13C is defined in relation to the Vienna Pee Dee Belemnite (VPDB, 13C/12C = 0.01118) as an established reference standard. This is called a "delta value" and can be calculated from the formula below:
formula_0
Photosynthesis reactions.
The chemical pathway of oxygenic photosynthesis fixes carbon in two stages: the light-dependent reactions and the light-independent reactions.
The light-dependent reactions capture light energy to transfer electrons from water and convert NADP+, ADP, and inorganic phosphate into the energy-storage molecules NADPH and ATP. The overall equation for the light-dependent reactions is generally:
2 H2O + 2 NADP+ + 3 ADP + 3 Pi + light → 2 NADPH + 2 H+ + 3 ATP + O2
The light-independent reactions undergo the Calvin-Benson cycle, in which the energy from NADPH and ATP is used to convert carbon dioxide and water into organic compounds via the enzyme RuBisCO.
The overall general equation for the light-independent reactions is the following:
3 CO2 + 9 ATP + 6 NADPH + 6 H+ → C3H6O3-phosphate + 9 ADP + 8 Pi + 6 NADP+ + 3 H2O
The 3-carbon products (C3H6O3-phosphate) of the Calvin cycle are later converted to glucose or other carbohydrates such as starch, sucrose, and cellulose.
Fractionation via RuBisCO.
The large fractionation of 13C in photosynthesis is due to the carboxylation reaction, which is carried out by the enzyme ribulose-1,5-bisphosphate carboxylase oxygenase, or RuBisCO. RuBisCO catalyzes the reaction between a five-carbon molecule, ribulose-1,5-bisphosphate (abbreviated as RuBP) and CO2 to form two molecules of 3-phosphoglyceric acid (abbreviated as PGA). PGA reacts with NADPH to produce 3-phosphoglyceraldehyde.
Isotope fractionation due to Rubisco (form I) carboxylation alone is predicted to be a 28‰ depletion, on average. However, fractionation values vary between organisms, ranging from an 11‰ depletion observed in coccolithophorid algae to a 29‰ depletion observed in spinach. RuBisCO causes a kinetic isotope effect because 12CO2 and 13CO2 compete for the same active site and 13C has an intrinsically lower reaction rate.
13C fractionation model.
In addition to the discriminating effects of enzymatic reactions, the diffusion of CO2 gas to the carboxylation site within a plant cell also influences isotopic fractionation. Depending on the type of plant (see sections below), external CO2 must be transported through the boundary layer and stomata and into the internal gas space of a plant cell, where it dissolves and diffuses to the chloroplast. The diffusivity of a gas is inversely proportional to the square root of its molecular reduced mass (relative to air), causing 13CO2 to be 4.4‰ less diffusive than 12CO2.
A prevailing model for fractionation of atmospheric CO2 in plants combines the isotope effects of the carboxylation reaction with the isotope effects from gas diffusion into the plant in the following equation:
formula_1
Where: "δ"13Csample is the carbon isotope composition of the plant material, "δ"13Catm is the carbon isotope composition of atmospheric CO2, "a" is the fractionation due to diffusion through the stomata (approximately 4.4‰), "b" is the fractionation due to carboxylation by RuBisCO (approximately 27‰), and ci/ca is the ratio of the intercellular to the ambient (atmospheric) CO2 concentration.
This model, derived "ab initio", generally describes the fractionation of carbon in the majority of plants, which perform C3 carbon fixation. Modifications have been made to this model with empirical findings. However, several additional factors, not included in this general model, will increase or decrease 13C fractionation across species. Such factors include the competing oxygenation reaction of RuBisCO, anatomical and temporal adaptations to enzyme activity, and variations in cell growth and geometry. The isotopic fractionations of different photosynthetic pathways are uniquely characterized by these factors, as described below.
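As a minimal numerical sketch of the model above (using the commonly cited values a ≈ 4.4‰ for diffusion and b ≈ 27‰ for carboxylation; the atmospheric δ13C and ci/ca values are illustrative assumptions):

```python
# Illustrative application of the leaf-level fractionation model above:
# delta13C_sample = delta13C_atm - a - (b - a) * (ci / ca)
# with a ~ 4.4 permil (diffusion through stomata) and b ~ 27 permil
# (RuBisCO carboxylation); delta13C_atm and ci/ca are assumed values.

def delta13C_plant(delta13C_atm, ci_over_ca, a=4.4, b=27.0):
    return delta13C_atm - a - (b - a) * ci_over_ca

# Present-day atmospheric CO2 is roughly -8 permil; ci/ca ~ 0.7 is typical for C3 leaves.
print(round(delta13C_plant(-8.0, 0.7), 1))   # about -28.2 permil, within the C3 range
```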
In C3 plants.
A C3 plant uses C3 carbon fixation, one of the three metabolic photosynthesis pathways which also include C4 and CAM (described below). These plants are called "C3" due to the three-carbon compound (3-Phosphoglyceric acid, or 3-PGA) produced by the CO2 fixation mechanism in these plants. This C3 mechanism is the first step of the Calvin-Benson cycle, which converts CO2 and RuBP into 3-PGA.
C3 plants are the most common type of plant, and typically thrive under moderate sunlight intensity and temperatures, CO2 concentrations above 200 ppm, and abundant groundwater. C3 plants do not grow well in very hot or arid regions, in which C4 and CAM plants are better adapted.
The isotope fractionations in C3 carbon fixation arise from the combined effects of CO2 gas diffusion through the stomata of the plant, and the carboxylation via RuBisCO. Stomatal conductance discriminates against the heavier 13C by 4.4‰. RuBisCO carboxylation contributes a larger discrimination of 27‰.
RuBisCO enzyme catalyzes the carboxylation of CO2 and the 5-carbon sugar, RuBP, into 3-phosphoglycerate, a 3-carbon compound through the following reaction:
formula_2
The product 3-phosphoglycerate is depleted in 13C due to the kinetic isotope effect of the above reaction. The overall 13C fractionation for C3 photosynthesis ranges between -20 and -37‰.
The wide range of variation in delta values expressed in C3 plants is modulated by the stomatal conductance, or the rate of CO2 entering, or water vapor exiting, the small pores in the epidermis of a leaf. The δ13C of C3 plants depends on the relationship between stomatal conductance and photosynthetic rate, which is a good proxy of water use efficiency in the leaf. C3 plants with high water-use efficiency tend to be less fractionated in 13C (i.e., δ13C is relatively less negative) compared to C3 plants with low water-use efficiency.
In C4 plants.
C4 plants have developed the C4 carbon fixation pathway to reduce water loss, and thus are more prevalent in hot, sunny, and dry climates. These plants differ from C3 plants because CO2 is initially converted to a four-carbon molecule, malate, which is shuttled to bundle sheath cells, released back as CO2 and only then enters the Calvin Cycle. In contrast, C3 plants directly perform the Calvin Cycle in mesophyll cells, without making use of a CO2-concentrating mechanism. Malate, the four-carbon compound, is the namesake of "C4" photosynthesis. This pathway allows C4 photosynthesis to efficiently shuttle CO2 to the RuBisCO enzyme and maintain high concentrations of CO2 within bundle sheath cells. These cells are part of the characteristic "kranz leaf anatomy", which spatially separates photosynthetic cell-types in a concentric arrangement to accumulate CO2 near RuBisCO.
These chemical and anatomical mechanisms improve the ability of RuBisCO to fix carbon, rather than perform its wasteful oxygenase activity. The RuBisCO oxygenase activity, called photorespiration, causes the RuBP substrate to be lost to oxygenation, and consumes energy in doing so. The ratio of photorespiration to photosynthesis in a plant varies with environmental conditions, since decreased CO2 and elevated O2 concentrations increase the rate of photorespiration. Atmospheric CO2 on Earth decreased abruptly at a point between 32 and 25 million years ago. This gave a selective advantage to the evolution of the C4 pathway, which can limit photorespiration rate despite the reduced ambient CO2. Today, C4 plants represent roughly 5% of plant biomass on Earth, but about 23% of terrestrial carbon fixation. Types of plants which use C4 photosynthesis include grasses and economically important crops, such as maize, sugar cane, millet, and sorghum.
Isotopic fractionation differs between C4 carbon fixation and C3, due to the spatial separation in C4 plants of CO2 capture (in the mesophyll cells) and the Calvin cycle (in the bundle sheath cells). In C4 plants, carbon is converted to bicarbonate, fixed into oxaloacetate via the enzyme phosphoenolpyruvate (PEP) carboxylase, and is then converted to malate. The malate is transported from the mesophyll to bundle sheath cells, which are impermeable to CO2. The internal CO2 is concentrated in these cells as malate is reoxidized then decarboxylated back into CO2 and pyruvate. This enables RuBisCO to perform catalysis while internal CO2 is sufficiently high to avoid the competing photorespiration reaction. The delta value in the C4 pathway is -12 to -16‰ depleted in 13C due to the combined effects of PEP carboxylase and RuBisCO.
The isotopic discrimination in the C4 pathway varies relative to the C3 pathway due to the additional chemical conversion steps and the activity of PEP carboxylase. After diffusion into the stomata, the conversion of CO2 to bicarbonate concentrates the heavier 13C. The subsequent fixation via PEP carboxylase is thereby less depleted in 13C than that from RuBisCO: about 2‰ depleted for PEP carboxylase, versus 29‰ for RuBisCO. However, a portion of the isotopically heavy carbon that is fixed by PEP carboxylase leaks out of the bundle sheath cells. This limits the carbon available to RuBisCO, which in turn lowers its fractionation effect. This accounts for the overall delta values of -12 to -16‰ in C4 plants.
In CAM plants.
Plants that use Crassulacean acid metabolism, also known as CAM photosynthesis, temporally separate their chemical reactions between day and night. This strategy modulates stomatal conductance to increase water-use efficiency, so it is well adapted to arid climates. During the night, CAM plants open their stomata to allow CO2 to enter the cell and undergo fixation into organic acids that are stored in vacuoles. This carbon is released to the Calvin cycle during the day, when stomata are closed to prevent water loss and the light reactions can drive the necessary ATP and NADPH production. This pathway differs from C4 photosynthesis because CAM plants separate carbon fixation temporally, storing fixed CO2 in vacuoles at night and then transporting it for use during the day. Thus, CAM plants temporally concentrate CO2 to improve RuBisCO efficiency, whereas C4 plants spatially concentrate CO2 in bundle sheath cells. The distribution of plants which use CAM photosynthesis includes epiphytes (e.g., orchids, bromeliads) and xerophytes (e.g., succulents, cacti).
In Crassulacean acid metabolism, isotopic fractionation combines the effects of the C3 pathway in the daytime and the C4 pathway in the nighttime. At night, when temperature and water loss are lower, CO2 diffuses through the stomata and produces malate via phosphoenolpyruvate carboxylase. During the following day, stomata are closed, malate is decarboxylated, and CO2 is fixed by RuBisCO. This process alone is similar to that of C4 plants and yields characteristic C4 fractionation values of approximately -11‰. However, in the afternoon, CAM plants may open their stomata and perform C3 photosynthesis. In daytime alone, CAM plants have approximately -28‰ fractionation, characteristic of C3 plants. These combined effects provide "δ"13C values for CAM plants in the range of -10 to -20‰.
The 13C to 12C ratio in CAM plants can indicate the temporal separation of CO2 fixation, that is, the extent of biomass derived from nocturnal CO2 fixation relative to diurnal CO2 fixation. This distinction can be made because PEP carboxylase, the enzyme responsible for net CO2 uptake at night, discriminates against 13C less than RuBisCO, which is responsible for daytime CO2 uptake. CAM plants which fix CO2 primarily at night would be predicted to show "δ"13C values more similar to C4 plants, whereas daytime CO2 fixation would show "δ"13C values more similar to C3 plants.
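Because CAM biomass mixes a C4-like nocturnal signal (about -11‰) with a C3-like daytime signal (about -28‰), a simple two-end-member mixing calculation can illustrate how an observed "δ"13C value would map onto the fraction of carbon fixed at night. The Python sketch below is only schematic: the linear mixing assumption and the direct use of the two end-member values quoted above are simplifications, and real interpretations involve corrections not shown here.

```python
# Schematic two-end-member mixing sketch for CAM plants, using the
# end-member values quoted in the text (about -11‰ for nocturnal,
# C4-like fixation and about -28‰ for daytime, C3-like fixation).
# Real studies apply further corrections; this only illustrates the idea.

DELTA_NIGHT = -11.0  # ‰, C4-like nocturnal fixation via PEP carboxylase
DELTA_DAY = -28.0    # ‰, C3-like daytime fixation via RuBisCO

def nocturnal_fraction(delta_observed):
    """Fraction of biomass carbon fixed at night under linear mixing."""
    f = (delta_observed - DELTA_DAY) / (DELTA_NIGHT - DELTA_DAY)
    return min(max(f, 0.0), 1.0)  # clamp to the physically meaningful range

for delta in (-12.0, -16.0, -20.0):
    print(delta, "->", round(nocturnal_fraction(delta), 2))
# More negative values indicate a larger share of daytime (C3-like) fixation.
```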
In phytoplankton.
In contrast to terrestrial plants, where CO2 diffusion in air is relatively fast and typically not limiting, diffusion of dissolved CO2 in water is considerably slower and can often limit carbon fixation in phytoplankton. As gaseous CO2(g) is dissolved into aqueous CO2(aq), it is fractionated by both kinetic and equilibrium effects that are temperature-dependent. Compared with the atmospheric CO2 available to plants, the dissolved CO2 source for phytoplankton can be enriched in 13C by about 8‰.
Isotope fractionation of 13C by phytoplankton photosynthesis is affected by the diffusion of extracellular aqueous CO2 into the cell, the RuBisCO-dependent cell growth rate, and the cell geometry and surface area. The use of bicarbonate and carbon-concentrating mechanisms in phytoplankton distinguishes the isotopic fractionation from plant photosynthetic pathways.
The difference between intracellular and extracellular CO2 concentrations reflects the CO2 demand of a phytoplankton cell, which is dependent on its growth rate. The ratio of carbon demand to supply governs the diffusion of CO2 into the cell, and is negatively correlated with the magnitude of the carbon fractionation by phytoplankton. Combined, these relationships allow the fractionation between CO2(aq) and phytoplankton biomass to be used to estimate the phytoplankton growth rates.
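The negative correlation described above is often summarized as a decreasing, roughly linear dependence of fractionation on the carbon demand-to-supply ratio (growth rate divided by ambient CO2(aq)). The sketch below adopts that linear form with placeholder coefficients chosen purely for illustration; it is not a calibrated model, and both the functional form and the numbers are assumptions made here rather than results stated in this article.

```python
# Hedged sketch of the negative correlation between phytoplankton carbon
# fractionation and the carbon demand/supply ratio (growth rate / CO2(aq)).
# The linear form and both coefficients below are illustrative assumptions,
# not calibrated values from the article.

EPS_MAX = 25.0   # ‰, assumed fractionation when demand/supply approaches zero
SLOPE = 100.0    # assumed slope, ‰ per unit of the demand/supply ratio

def fractionation(growth_rate, co2_aq):
    """Fractionation (‰) as a decreasing function of demand/supply."""
    return EPS_MAX - SLOPE * (growth_rate / co2_aq)

def growth_rate_from_fractionation(eps_p, co2_aq):
    """Invert the same linear relation to estimate growth rate."""
    return (EPS_MAX - eps_p) * co2_aq / SLOPE

eps = fractionation(growth_rate=0.5, co2_aq=10.0)   # hypothetical inputs
print(round(eps, 1), round(growth_rate_from_fractionation(eps, 10.0), 2))
```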
However, growth rate alone does not account for the observed fractionation. The flux of CO2(aq) into and out of a cell is roughly proportional to the cell surface area, while the cell carbon biomass varies as a function of cell volume. Phytoplankton geometries that maximize the surface-area-to-volume ratio should therefore show larger isotopic fractionation from photosynthesis.
The biochemical characteristics of phytoplankton are similar to C3 plants, whereas the gas exchange characteristics more closely resemble the C4 strategy. More specifically, phytoplankton improve the efficiency of their primary carbon-fixing enzyme, RuBisCO, with carbon concentrating mechanisms (CCM), just as C4 plants accumulate CO2 in the bundle sheath cells. Different forms of CCM in phytoplankton include the active uptake of bicarbonate and CO2 through the cell membrane, the active transport of inorganic carbon from the cellular membrane to the chloroplasts, and active, unidirectional conversion of CO2 to bicarbonate. The parameters affecting 13C fractionation in phytoplankton contribute to "δ"13C values between -18 and -25‰.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\delta^{13}\\mathrm{C} = \\left ( \\frac{\\left ( \\frac{^{13}\\mathrm{C}}{^{12}\\mathrm{C}} \\right )_\\mathrm{sample}}{\\left ( \\frac{^{13}\\mathrm{C}}{^{12}\\mathrm{C}} \\right )_\\mathrm{standard}} -1 \\right)\\times1000"
},
{
"math_id": 1,
"text": " \\delta^{13}C_\\text{sample} = \\delta^{13}C_\\text{atm} - a - (b-a)(c_{i}/c_{a})"
},
{
"math_id": 2,
"text": "CO_2+H_2O+RuBP \\xrightarrow[RuBisCO]{} 2(3 \\mbox{-} \\text{phosphoglycerate}) "
}
] |
https://en.wikipedia.org/wiki?curid=57042156
|
57043691
|
François Lalonde
|
Canadian mathematician
François Lalonde (born 17 September 1955 in Montréal) is a Canadian mathematician, specializing in symplectic geometry and symplectic topology.
Lalonde received his bachelor's degree (called licence in France) in physics from the Université de Montréal in 1976, at the age of 20, completed a second bachelor's degree in mathematics in 1977, and received his master's degree in logic and theoretical computer science (complexity theory and NP-completeness) in 1979. In 1985 he received his doctorate (Doctorat d'Etat) in mathematics from the Université de Paris-Saclay in Orsay, becoming one of the rare candidates to obtain the Doctorat d'Etat before the age of thirty. He then was an NSERC (Natural Sciences and Engineering Research Council of Canada) University Research Fellow at the Université du Québec à Montréal, where he became full professor six years later, in 1991, remaining until 2000. He has been a professor at the Université de Montréal since 2000, and held the "Canada Research Chair" (CRC) in differential geometry and topology from 2001 to 2022, beginning when the CRC program was first set up by the Prime Minister of Canada.
He has held invited positions at many institutions, including the IHES (1983–1985), Harvard University (1989–1990), the Université de Strasbourg (1990), the University of Tel Aviv (1997 and 1999), the École Polytechnique (2001-2002), Stanford University (2005 and 2022), the École Normale Supérieure de Lyon (2008), and the Université d'Aix-Marseille (2015).
With Octav Cornea he developed a new homology (cluster homology), leading to a new universal Floer homology for pairs of Lagrangian submanifolds of a symplectic manifold. He has also collaborated with Dusa McDuff and Leonid Polterovich.
He became a Fellow of the Royal Society of Canada in 1997, at the age of 41, and a Fellow of the Fields Institute in 2001, when this distinction was introduced. From 2000 to 2002 he held a Killam Fellowship, awarded by a private-public foundation in arts and sciences that enables Canadian researchers to devote most of their time to their research.
From 2004 to 2008 and from 2011 to 2013 he was the director of the Centre de Recherches Mathématiques (CRM), the premier scientific institute in Canada, founded in 1968 and based at the Université de Montréal. Members of this institute have won the Turing Award, often described as the "Nobel Prize" of computer science (Yoshua Bengio, 2019), and the Wolf Prize in physics (Gilles Brassard, 2018), widely considered the most prestigious prize in physics after the Nobel. In 2022, James Maynard (Oxford) was awarded the Fields Medal after his postdoctoral year in the CRM-ISM postdoctoral program that Lalonde founded.
He (co)founded several institutions, namely, with Francis Clarke (mathematician), the Institut des Sciences Mathématiques (ISM) (McGill, Montréal, UQAM, Concordia, Laval, Sherbrooke universities) based at UQAM, the first unified doctoral school in the world with 250 professors, the Centre interuniversitaire de recherches en géométrie différentielle et en topologie (CIRGET), the Institut transdisciplinaire de recherches en informatique quantique (INTRIQ) with Gilles Brassard and Michael Hilke, the Unité mixte internationale (UMI), a joint venture between the CNRS (France) and the Centre de recherches mathématiques (CRM), and the journal Annales mathématiques du Québec (Springer).
In 2006 he was an Invited Speaker with talk "Lagrangian submanifolds: from the local model to the cluster complex" at the International Congress of Mathematicians (ICM) in Madrid.
Selected publications.
as editor:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "L^{\\infty}"
}
] |
https://en.wikipedia.org/wiki?curid=57043691
|
57047243
|
Symmetric power
|
In mathematics, the "n"-th symmetric power of an object "X" is the quotient of the "n"-fold product formula_0 by the permutation action of the symmetric group formula_1.
More precisely, the notion exists at least in the following three areas:
References.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "X^n:=X \\times \\cdots \\times X"
},
{
"math_id": 1,
"text": "\\mathfrak{S}_n"
},
{
"math_id": 2,
"text": "X^n/\\mathfrak{S}_n"
},
{
"math_id": 3,
"text": "X = \\operatorname{Spec}(A)"
},
{
"math_id": 4,
"text": "\\operatorname{Spec}((A \\otimes_k \\dots \\otimes_k A)^{\\mathfrak{S}_n})"
}
] |
https://en.wikipedia.org/wiki?curid=57047243
|
5704757
|
Gold compounds
|
Gold compounds are chemical compounds containing the element gold (Au). Although gold is the most noble of the noble metals, it still forms many diverse compounds. The oxidation state of gold in its compounds ranges from −1 to +5, but Au(I) and Au(III) dominate its chemistry. Au(I), referred to as the aurous ion, is the most common oxidation state with soft ligands such as thioethers, thiolates, and organophosphines. Au(I) compounds are typically linear. A good example is , which is the soluble form of gold encountered in mining. The binary gold halides, such as AuCl, form zigzag polymeric chains, again featuring linear coordination at Au. Most drugs based on gold are Au(I) derivatives.
Au(III) (referred to as auric) is a common oxidation state, and is illustrated by gold(III) chloride, . The gold atom centers in Au(III) complexes, like other d8 compounds, are typically square planar, with chemical bonds that have both covalent and ionic character. Gold(I,III) chloride is also known, an example of a mixed-valence complex.
Gold does not react with oxygen at any temperature and, up to 100 °C, is resistant to attack from ozone.
formula_0
formula_1
Some free halogens react with gold. Gold is strongly attacked by fluorine at dull-red heat to form gold(III) fluoride . Powdered gold reacts with chlorine at 180 °C to form gold(III) chloride . Gold reacts with bromine at 140 °C to form gold(III) bromide , but reacts only very slowly with iodine to form gold(I) iodide AuI.
<chem>2 Au + 3 F2 ->[t] 2 AuF3</chem>
<chem>2 Au + 3 Cl2 ->[t] 2 AuCl3</chem>
<chem>2 Au + 2 Br2 ->[t] AuBr3 + AuBr</chem>
<chem>2 Au + I2 ->[t] 2 AuI</chem>
Gold does not react with sulfur directly, but gold(III) sulfide can be made by passing hydrogen sulfide through a dilute solution of gold(III) chloride or chloroauric acid.
Gold readily dissolves in mercury at room temperature to form an amalgam, and forms alloys with many other metals at higher temperatures. These alloys can be produced to modify the hardness and other metallurgical properties, to control melting point or to create exotic colors.
Gold is unaffected by most acids. It does not react with hydrofluoric, hydrochloric, hydrobromic, hydriodic, sulfuric, or nitric acid. It does react with selenic acid, and is dissolved by aqua regia, a 1:3 mixture of nitric acid and hydrochloric acid. Nitric acid oxidizes the metal to +3 ions, but only in minute amounts, typically undetectable in the pure acid because of the chemical equilibrium of the reaction. However, the ions are removed from the equilibrium by hydrochloric acid, forming ions, or chloroauric acid, thereby enabling further oxidation.
<chem>2Au {}+ 6 H2SeO4 ->[200^\circ C] Au2(SeO4)3 {}+ 3 H2SeO3 {}+ 3H2O</chem>
<chem>Au {}+ 4 HCl {}+ HNO3 -> H[AuCl4] {}+ NO\uparrow + 2H2O </chem>
Gold is similarly unaffected by most bases. It does not react with aqueous, solid, or molten sodium or potassium hydroxide. It does however, react with sodium or potassium cyanide under alkaline conditions when oxygen is present to form soluble complexes.
Common oxidation states of gold include +1 (gold(I) or aurous compounds) and +3 (gold(III) or auric compounds). Gold ions in solution are readily reduced and precipitated as metal by adding any other metal as the reducing agent. The added metal is oxidized and dissolves, allowing the gold to be displaced from solution and be recovered as a solid precipitate.
Rare oxidation states.
Less common oxidation states of gold include −1, +2, and +5.
The −1 oxidation state occurs in aurides, compounds containing the anion. Caesium auride (CsAu), for example, crystallizes in the caesium chloride motif; rubidium, potassium, and tetramethylammonium aurides are also known. Gold has the highest electron affinity of any metal, at 222.8 kJ/mol, making a stable species, analogous to the halides.
Gold also has a –1 oxidation state in covalent complexes with the group 4 transition metals, such as in titanium tetraauride and the analogous zirconium and hafnium compounds. These chemicals are expected to form gold-bridged dimers in a manner similar to titanium(IV) hydride.
Gold(II) compounds are usually diamagnetic with Au–Au bonds, such as . The evaporation of a solution of in concentrated produces red crystals of gold(II) sulfate, . Originally thought to be a mixed-valence compound, it has been shown to contain cations, analogous to the better-known mercury(I) ion, . A gold(II) complex, the tetraxenonogold(II) cation, which contains xenon as a ligand, occurs in .
Gold pentafluoride, along with its derivative anion, , and its difluorine complex, gold heptafluoride, is the sole example of gold(V), the highest verified oxidation state.
Some gold compounds exhibit "aurophilic bonding", which describes the tendency of gold ions to interact at distances that are too long to be a conventional Au–Au bond but shorter than van der Waals bonding. The interaction is estimated to be comparable in strength to that of a hydrogen bond.
Well-defined cluster compounds are numerous. In some cases, gold has a fractional oxidation state. A representative example is the octahedral species .
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathrm{Au}+\\mathrm{O}_2 \\neq"
},
{
"math_id": 1,
"text": " \\mathrm{Au}+\\mathrm{O}_3 \\overset{\\underset{t<100^\\circ\\text{C}}{}}{\\neq}"
}
] |
https://en.wikipedia.org/wiki?curid=5704757
|
5705
|
Continuum hypothesis
|
Proposition in mathematical logic
In mathematics, specifically set theory, the continuum hypothesis (abbreviated CH) is a hypothesis about the possible sizes of infinite sets. It states:
<templatestyles src="Template:Blockquote/styles.css" />"There is no set whose cardinality is strictly between that of the integers and the real numbers."
Or equivalently:
<templatestyles src="Template:Blockquote/styles.css" />"Any subset of the real numbers is either finite, or countably infinite, or has the cardinality of the real numbers."
In Zermelo–Fraenkel set theory with the axiom of choice (ZFC), this is equivalent to the following equation in aleph numbers: formula_0, or even shorter with beth numbers: formula_1.
The continuum hypothesis was advanced by Georg Cantor in 1878, and establishing its truth or falsehood is the first of Hilbert's 23 problems presented in 1900. The answer to this problem is independent of ZFC, so that either the continuum hypothesis or its negation can be added as an axiom to ZFC set theory, with the resulting theory being consistent if and only if ZFC is consistent. This independence was proved in 1963 by Paul Cohen, complementing earlier work by Kurt Gödel in 1940.
The name of the hypothesis comes from the term "the continuum" for the real numbers.
History.
Cantor believed the continuum hypothesis to be true and for many years tried in vain to prove it. It became the first on David Hilbert's list of important open questions that was presented at the International Congress of Mathematicians in the year 1900 in Paris. Axiomatic set theory was at that point not yet formulated.
Kurt Gödel proved in 1940 that the negation of the continuum hypothesis, i.e., the existence of a set with intermediate cardinality, could not be proved in standard set theory. The second half of the independence of the continuum hypothesis – i.e., unprovability of the nonexistence of an intermediate-sized set – was proved in 1963 by Paul Cohen.
Cardinality of infinite sets.
Two sets are said to have the same "cardinality" or "cardinal number" if there exists a bijection (a one-to-one correspondence) between them. Intuitively, for two sets "S" and "T" to have the same cardinality means that it is possible to "pair off" elements of "S" with elements of "T" in such a fashion that every element of "S" is paired off with exactly one element of "T" and vice versa. Hence, the set {banana, apple, pear} has the same cardinality as {yellow, red, green}.
With infinite sets such as the set of integers or rational numbers, the existence of a bijection between two sets becomes more difficult to demonstrate. The rational numbers seemingly form a counterexample to the continuum hypothesis: the integers form a proper subset of the rationals, which themselves form a proper subset of the reals, so intuitively, there are more rational numbers than integers and more real numbers than rational numbers. However, this intuitive analysis is flawed; it does not take proper account of the fact that all three sets are infinite. It turns out the rational numbers can actually be placed in one-to-one correspondence with the integers, and therefore the set of rational numbers is the same size ("cardinality") as the set of integers: they are both countable sets.
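The countability of the rationals can be made concrete by writing down an explicit enumeration. The Python sketch below, added here purely as an illustration, lists the positive rationals in lowest terms by walking the diagonals of the numerator/denominator grid, so every positive rational appears exactly once at some finite position, which is exactly what a bijection with the natural numbers requires.

```python
# Illustrative sketch: an explicit enumeration showing the positive
# rationals are countable. Each reduced fraction p/q appears exactly once,
# so this defines a bijection between the natural numbers and Q+.

from fractions import Fraction
from math import gcd

def positive_rationals():
    """Yield every positive rational exactly once, diagonal by diagonal."""
    s = 2  # s = numerator + denominator
    while True:
        for p in range(1, s):
            q = s - p
            if gcd(p, q) == 1:        # skip non-reduced duplicates like 2/2
                yield Fraction(p, q)
        s += 1

gen = positive_rationals()
print([str(next(gen)) for _ in range(10)])
# -> ['1', '1/2', '2', '1/3', '3', '1/4', '2/3', '3/2', '4', '1/5']
```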
Cantor gave two proofs that the cardinality of the set of integers is strictly smaller than that of the set of real numbers (see Cantor's first uncountability proof and Cantor's diagonal argument). His proofs, however, give no indication of the extent to which the cardinality of the integers is less than that of the real numbers. Cantor proposed the continuum hypothesis as a possible solution to this question.
The continuum hypothesis states that the set of real numbers has minimal possible cardinality which is greater than the cardinality of the set of integers. That is, every set, "S", of real numbers can either be mapped one-to-one into the integers or the real numbers can be mapped one-to-one into "S". As the real numbers are equinumerous with the powerset of the integers, i.e. formula_2, the continuum hypothesis can be restated as follows:
<templatestyles src="Math_theorem/styles.css" />
Continuum hypothesis — formula_3.
Assuming the axiom of choice, there is a unique smallest cardinal number formula_4 greater than formula_5, and the continuum hypothesis is in turn equivalent to the equality formula_6.
Independence from ZFC.
The independence of the continuum hypothesis (CH) from Zermelo–Fraenkel set theory (ZF) follows from combined work of Kurt Gödel and Paul Cohen.
Gödel showed that CH cannot be disproved from ZF, even if the axiom of choice (AC) is adopted (making ZFC). Gödel's proof shows that CH and AC both hold in the constructible universe L, an inner model of ZF set theory, assuming only the axioms of ZF. The existence of an inner model of ZF in which additional axioms hold shows that the additional axioms are consistent with ZF, provided ZF itself is consistent. The latter condition cannot be proved in ZF itself, due to Gödel's incompleteness theorems, but is widely believed to be true and can be proved in stronger set theories.
Cohen showed that CH cannot be proven from the ZFC axioms, completing the overall independence proof. To prove his result, Cohen developed the method of forcing, which has become a standard tool in set theory. Essentially, this method begins with a model of ZF in which CH holds, and constructs another model which contains more sets than the original, in a way that CH does not hold in the new model. Cohen was awarded the Fields Medal in 1966 for his proof.
The independence proof just described shows that CH is independent of ZFC. Further research has shown that CH is independent of all known "large cardinal axioms" in the context of ZFC. Moreover, it has been shown that the cardinality of the continuum can be any cardinal consistent with König's theorem. A result of Solovay, proved shortly after Cohen's result on the independence of the continuum hypothesis, shows that in any model of ZFC, if formula_7 is a cardinal of uncountable cofinality, then there is a forcing extension in which formula_8. However, per König's theorem, it is not consistent to assume formula_9 is formula_10 or formula_11 or any cardinal with cofinality formula_12.
The continuum hypothesis is closely related to many statements in analysis, point set topology and measure theory. As a result of its independence, many substantial conjectures in those fields have subsequently been shown to be independent as well.
The independence from ZFC means that proving or disproving the CH within ZFC is impossible. However, Gödel and Cohen's negative results are not universally accepted as disposing of all interest in the continuum hypothesis. The continuum hypothesis remains an active topic of research; see Woodin and Peter Koellner for an overview of the current research status.
The continuum hypothesis and the axiom of choice were among the first genuinely mathematical statements shown to be independent of ZF set theory. However, the existence of some statements independent of ZFC had already been known more than two decades earlier: for example, assuming good soundness properties and the consistency of ZFC, Gödel's incompleteness theorems, which were published in 1931, establish that there is a formal statement (one for each appropriate Gödel numbering scheme) expressing the consistency of ZFC that is also independent of it. The latter independence result indeed holds for many theories.
Arguments for and against the continuum hypothesis.
Gödel believed that CH is false, and that his proof that CH is consistent with ZFC only shows that the Zermelo–Fraenkel axioms do not adequately characterize the universe of sets. Gödel was a platonist and therefore had no problems with asserting the truth and falsehood of statements independent of their provability. Cohen, though a formalist, also tended towards rejecting CH.
Historically, mathematicians who favored a "rich" and "large" universe of sets were against CH, while those favoring a "neat" and "controllable" universe favored CH. Parallel arguments were made for and against the axiom of constructibility, which implies CH. More recently, Matthew Foreman has pointed out that ontological maximalism can actually be used to argue in favor of CH, because among models that have the same reals, models with "more" sets of reals have a better chance of satisfying CH.
Another viewpoint is that the conception of set is not specific enough to determine whether CH is true or false. This viewpoint was advanced as early as 1923 by Skolem, even before Gödel's first incompleteness theorem. Skolem argued on the basis of what is now known as Skolem's paradox, and it was later supported by the independence of CH from the axioms of ZFC since these axioms are enough to establish the elementary properties of sets and cardinalities. In order to argue against this viewpoint, it would be sufficient to demonstrate new axioms that are supported by intuition and resolve CH in one direction or another. Although the axiom of constructibility does resolve CH, it is not generally considered to be intuitively true any more than CH is generally considered to be false.
At least two other axioms have been proposed that have implications for the continuum hypothesis, although these axioms have not currently found wide acceptance in the mathematical community. In 1986, Chris Freiling presented an argument against CH by showing that the negation of CH is equivalent to Freiling's axiom of symmetry, a statement derived by arguing from particular intuitions about probabilities. Freiling believes this axiom is "intuitively clear" but others have disagreed.
An elaborate argument against CH developed by W. Hugh Woodin has attracted considerable attention since the year 2000. Foreman does not reject Woodin's argument outright but urges caution. Woodin proposed a new hypothesis that he labeled the "(*)-axiom", or "Star axiom". The Star axiom would imply that formula_9 is formula_13, thus falsifying CH. The Star axiom was bolstered by an independent May 2021 proof showing the Star axiom can be derived from a variation of Martin's maximum. However, Woodin stated in the 2010s that he now instead believes CH to be true, based on his belief in his new "ultimate L" conjecture.
Solomon Feferman argued that CH is not a definite mathematical problem. He proposed a theory of "definiteness" using a semi-intuitionistic subsystem of ZF that accepts classical logic for bounded quantifiers but uses intuitionistic logic for unbounded ones, and suggested that a proposition formula_14 is mathematically "definite" if the semi-intuitionistic theory can prove formula_15. He conjectured that CH is not definite according to this notion, and proposed that CH should, therefore, be considered not to have a truth value. Peter Koellner wrote a critical commentary on Feferman's article.
Joel David Hamkins proposes a multiverse approach to set theory and argues that "the continuum hypothesis is settled on the multiverse view by our extensive knowledge about how it behaves in the multiverse, and, as a result, it can no longer be settled in the manner formerly hoped for". In a related vein, Saharon Shelah wrote that he does "not agree with the pure Platonic view that the interesting problems in set theory can be decided, that we just have to discover the additional axiom. My mental picture is that we have many possible set theories, all conforming to ZFC".
Generalized continuum hypothesis.
The "generalized continuum hypothesis" (GCH) states that if an infinite set's cardinality lies between that of an infinite set "S" and that of the power set formula_16 of "S", then it has the same cardinality as either "S" or formula_16. That is, for any infinite cardinal formula_17 there is no cardinal formula_7 such that formula_18. GCH is equivalent to:
formula_19 for every ordinal formula_20 (occasionally called "Cantor's aleph hypothesis").
The beth numbers provide an alternative notation for this condition: formula_21 for every ordinal formula_20. The continuum hypothesis is the special case for the ordinal formula_22. GCH was first suggested by Philip Jourdain. For the early history of GCH, see Moore.
Like CH, GCH is also independent of ZFC, but Sierpiński proved that ZF + GCH implies the axiom of choice (AC) (and therefore the negation of the axiom of determinacy, AD), so choice and GCH are not independent in ZF; there are no models of ZF in which GCH holds and AC fails. To prove this, Sierpiński showed GCH implies that every cardinality n is smaller than some aleph number, and thus can be ordered. This is done by showing that n is smaller than formula_23 which is smaller than its own Hartogs number—this uses the equality formula_24; for the full proof, see Gillman.
Kurt Gödel showed that GCH is a consequence of ZF + V=L (the axiom that every set is constructible relative to the ordinals), and is therefore consistent with ZFC. As GCH implies CH, Cohen's model in which CH fails is a model in which GCH fails, and thus GCH is not provable from ZFC. W. B. Easton used the method of forcing developed by Cohen to prove Easton's theorem, which shows it is consistent with ZFC for arbitrarily large cardinals formula_25 to fail to satisfy formula_26. Much later, Foreman and Woodin proved that (assuming the consistency of very large cardinals) it is consistent that formula_27 holds for every infinite cardinal formula_7. Later Woodin extended this by showing the consistency of formula_28 for every formula_7. Carmi Merimovich showed that, for each "n" ≥ 1, it is consistent with ZFC that for each κ, 2κ is the "n"th successor of κ. On the other hand, László Patai proved that if γ is an ordinal and for each infinite cardinal κ, 2κ is the γth successor of κ, then γ is finite.
For any infinite sets A and B, if there is an injection from A to B then there is an injection from subsets of A to subsets of B. Thus for any infinite cardinals A and B, formula_29 . If A and B are finite, the stronger inequality formula_30 holds. GCH implies that this strict, stronger inequality holds for infinite cardinals as well as finite cardinals.
Implications of GCH for cardinal exponentiation.
Although the generalized continuum hypothesis refers directly only to cardinal exponentiation with 2 as the base, one can deduce from it the values of cardinal exponentiation formula_31 in all cases. GCH implies that for ordinals "α" and "β":
formula_32 when "α" ≤ "β"+1;
formula_33 when "β"+1 < "α" and formula_34, where cf is the cofinality operation; and
formula_35 when "β"+1 < "α" and formula_36.
The first equality (when "α" ≤ "β"+1) follows from:
formula_37 , while:
formula_38 ;
The third equality (when "β"+1 < "α" and formula_39) follows from:
formula_40, by König's theorem, while:
formula_41
References.
<templatestyles src="Reflist/styles.css" />
External links.
Quotations related to at Wikiquote
|
[
{
"math_id": 0,
"text": "2^{\\aleph_0}=\\aleph_1"
},
{
"math_id": 1,
"text": "\\beth_1 = \\aleph_1"
},
{
"math_id": 2,
"text": "|\\mathbb{R}|=2^{\\aleph_0}"
},
{
"math_id": 3,
"text": "\\nexists S\\colon\\aleph_0 < |S| < 2^{\\aleph_0}"
},
{
"math_id": 4,
"text": "\\aleph_1"
},
{
"math_id": 5,
"text": "\\aleph_0"
},
{
"math_id": 6,
"text": "2^{\\aleph_0} = \\aleph_1"
},
{
"math_id": 7,
"text": "\\kappa"
},
{
"math_id": 8,
"text": "2^{\\aleph_0} = \\kappa"
},
{
"math_id": 9,
"text": "2^{\\aleph_0}"
},
{
"math_id": 10,
"text": "\\aleph_\\omega"
},
{
"math_id": 11,
"text": "\\aleph_{\\omega_1+\\omega}"
},
{
"math_id": 12,
"text": "\\omega"
},
{
"math_id": 13,
"text": "\\aleph_2"
},
{
"math_id": 14,
"text": "\\phi"
},
{
"math_id": 15,
"text": "(\\phi \\lor \\neg\\phi)"
},
{
"math_id": 16,
"text": "\\mathcal{P}(S)"
},
{
"math_id": 17,
"text": "\\lambda"
},
{
"math_id": 18,
"text": "\\lambda <\\kappa <2^{\\lambda}"
},
{
"math_id": 19,
"text": "\\aleph_{\\alpha+1}=2^{\\aleph_\\alpha}"
},
{
"math_id": 20,
"text": "\\alpha"
},
{
"math_id": 21,
"text": "\\aleph_\\alpha=\\beth_\\alpha"
},
{
"math_id": 22,
"text": "\\alpha=1"
},
{
"math_id": 23,
"text": "2^{\\aleph_0+n}"
},
{
"math_id": 24,
"text": "2^{\\aleph_0+n}\\, = \\,2\\cdot\\,2^{\\aleph_0+n} "
},
{
"math_id": 25,
"text": "\\aleph_\\alpha"
},
{
"math_id": 26,
"text": "2^{\\aleph_\\alpha} = \\aleph_{\\alpha + 1}"
},
{
"math_id": 27,
"text": "2^\\kappa>\\kappa^+"
},
{
"math_id": 28,
"text": "2^\\kappa=\\kappa^{++}"
},
{
"math_id": 29,
"text": "A < B \\to 2^A \\le 2^B"
},
{
"math_id": 30,
"text": "A < B \\to 2^A < 2^B "
},
{
"math_id": 31,
"text": "\\aleph_{\\alpha}^{\\aleph_{\\beta}}"
},
{
"math_id": 32,
"text": "\\aleph_{\\alpha}^{\\aleph_{\\beta}} = \\aleph_{\\beta+1}"
},
{
"math_id": 33,
"text": "\\aleph_{\\alpha}^{\\aleph_{\\beta}} = \\aleph_{\\alpha}"
},
{
"math_id": 34,
"text": "\\aleph_{\\beta} < \\operatorname{cf} (\\aleph_{\\alpha})"
},
{
"math_id": 35,
"text": "\\aleph_{\\alpha}^{\\aleph_{\\beta}} = \\aleph_{\\alpha+1}"
},
{
"math_id": 36,
"text": "\\aleph_{\\beta} \\ge \\operatorname{cf} (\\aleph_{\\alpha})"
},
{
"math_id": 37,
"text": "\\aleph_{\\alpha}^{\\aleph_{\\beta}} \\le \\aleph_{\\beta+1}^{\\aleph_{\\beta}} =(2^{\\aleph_{\\beta}})^{\\aleph_{\\beta}} = 2^{\\aleph_{\\beta}\\cdot\\aleph_{\\beta}} = 2^{\\aleph_{\\beta}} = \\aleph_{\\beta+1} "
},
{
"math_id": 38,
"text": "\\aleph_{\\beta+1} = 2^{\\aleph_{\\beta}} \\le \\aleph_{\\alpha}^{\\aleph_{\\beta}} "
},
{
"math_id": 39,
"text": "\\aleph_{\\beta} \\ge \\operatorname{cf}(\\aleph_{\\alpha})"
},
{
"math_id": 40,
"text": "\\aleph_{\\alpha}^{\\aleph_{\\beta}} \\ge \\aleph_{\\alpha}^{\\operatorname{cf}(\\aleph_{\\alpha})} > \\aleph_{\\alpha} "
},
{
"math_id": 41,
"text": "\\aleph_{\\alpha}^{\\aleph_{\\beta}} \\le \\aleph_{\\alpha}^{\\aleph_{\\alpha}} \\le (2^{\\aleph_{\\alpha}})^{\\aleph_{\\alpha}} = 2^{\\aleph_{\\alpha}\\cdot\\aleph_{\\alpha}} = 2^{\\aleph_{\\alpha}} = \\aleph_{\\alpha+1}"
}
] |
https://en.wikipedia.org/wiki?curid=5705
|
570542
|
Malonic acid
|
Carboxylic acid with chemical formula CH2(COOH)2
<templatestyles src="Chembox/styles.css"/>
Chemical compound
Malonic acid is a dicarboxylic acid with structure CH2(COOH)2. The ionized form of malonic acid, as well as its esters and salts, are known as malonates. For example, diethyl malonate is malonic acid's diethyl ester. The name originates from the Greek word μᾶλον ("malon") meaning 'apple'.
History.
Malonic acid is a naturally occurring substance found in many fruits and vegetables. There is a suggestion that citrus fruits produced in organic farming contain higher levels of malonic acid than fruits produced in conventional agriculture.
Malonic acid was first prepared in 1858 by the French chemist Victor Dessaignes via the oxidation of malic acid.
Structure and preparation.
The structure has been determined by X-ray crystallography and extensive property data including for condensed phase thermochemistry are available from the National Institute of Standards and Technology.
A classical preparation of malonic acid starts from chloroacetic acid:
Sodium carbonate generates the sodium salt, which is then reacted with sodium cyanide to provide the sodium salt of cyanoacetic acid via a nucleophilic substitution. The nitrile group can be hydrolyzed with sodium hydroxide to sodium malonate, and acidification affords malonic acid. Industrially, however, malonic acid is produced by hydrolysis of dimethyl malonate or diethyl malonate. It has also been produced through fermentation of glucose.
Reactions.
Malonic acid reacts as a typical carboxylic acid forming amide, ester, and chloride derivatives. Malonic anhydride can be used as an intermediate to mono-ester or amide derivatives, while malonyl chloride is most useful to obtain diesters or diamides.
In a well-known reaction, malonic acid condenses with urea to form barbituric acid. Malonic acid may also be condensed with acetone to form Meldrum's acid, a versatile intermediate in further transformations. The esters of malonic acid are also used as a −CH2COOH synthon in the malonic ester synthesis.
Briggs–Rauscher reaction.
Malonic acid is a key component in the Briggs–Rauscher reaction, the classic example of an oscillating chemical reaction.
Knoevenagel condensation.
Malonic acid is used to prepare α,β-unsaturated carboxylic acids by condensation and decarboxylation. Cinnamic acids are prepared in this way:
In this, the so-called Knoevenagel condensation, malonic acid condenses with the carbonyl group of an aldehyde or ketone, followed by a decarboxylation.
When malonic acid is condensed in hot pyridine, the condensation is accompanied by decarboxylation, the so-called Doebner modification.
Preparation of carbon suboxide.
Malonic acid does not readily form an anhydride; dehydration gives carbon suboxide instead:
The transformation is achieved by warming a dry mixture of phosphorus pentoxide () and malonic acid. It reacts in a similar way to malonic anhydride, forming malonates.
Applications.
Malonic acid is a precursor to specialty polyesters. It can be converted into 1,3-propanediol for use in polyesters and polymers (though the usefulness of this route is unclear). It can also be a component in alkyd resins, which are used in a number of coatings applications for protecting against damage caused by UV light, oxidation, and corrosion. One application of malonic acid is in the coatings industry as a crosslinker for low-temperature cure powder coatings, which are becoming increasingly valuable for heat-sensitive substrates and for speeding up the coatings process. The global coatings market for automobiles was estimated to be $18.59 billion in 2014, with a projected compound annual growth rate of 5.1% through 2022.
It is used in a number of manufacturing processes as a high value specialty chemical including the electronics industry, flavors and fragrances industry, specialty solvents, polymer crosslinking, and pharmaceutical industry. In 2004, annual global production of malonic acid and related diesters was over 20,000 metric tons. Potential growth of these markets could result from advances in industrial biotechnology that seeks to displace petroleum-based chemicals in industrial applications.
In 2004, malonic acid was listed by the US Department of Energy as one of the top 30 chemicals to be produced from biomass.
In food and drug applications, malonic acid can be used to control acidity, either as an excipient in pharmaceutical formulation or natural preservative additive for foods.
Malonic acid is used as a building block chemical to produce numerous valuable compounds, including the flavor and fragrance compounds gamma-nonalactone, cinnamic acid, and the pharmaceutical compound valproate.
Malonic acid (up to 37.5% w/w) has been used to cross-link corn and potato starches to produce a biodegradable thermoplastic; the process is performed in water using non-toxic catalysts. Starch-based polymers comprised 38% of the global biodegradable polymers market in 2014 with food packaging, foam packaging, and compost bags as the largest end-use segments.
The Eastman Kodak Company and others use malonic acid and its derivatives as surgical adhesives.
Pathology.
If elevated malonic acid levels are accompanied by elevated methylmalonic acid levels, this may indicate the metabolic disease combined malonic and methylmalonic aciduria (CMAMMA). By calculating the malonic acid to methylmalonic acid ratio in blood plasma, CMAMMA can be distinguished from classic methylmalonic acidemia.
Biochemistry.
Malonic acid is the precursor in mitochondrial fatty acid synthesis (mtFASII), in which it is converted to malonyl-CoA by acyl-CoA synthetase family member 3 (ACSF3).
Additionally, the coenzyme A derivative of malonate, malonyl-CoA, is an important precursor in cytosolic fatty acid biosynthesis along with acetyl CoA. Malonyl CoA is formed there from acetyl CoA by the action of acetyl-CoA carboxylase, and the malonate is transferred to an acyl carrier protein to be added to a fatty acid chain.
Malonic acid is the classic example of a competitive inhibitor of the enzyme succinate dehydrogenase (complex II), in the respiratory electron transport chain. It binds to the active site of the enzyme without reacting, competing with the usual substrate succinate but lacking the −CH2CH2− group required for dehydrogenation. This observation was used to deduce the structure of the active site in succinate dehydrogenase. Inhibition of this enzyme decreases cellular respiration. Since malonic acid is a natural component of many foods, it is present in mammals including humans.
Related chemicals.
The fluorinated version of malonic acid is difluoromalonic acid.
Malonic acid is diprotic; that is, it can donate two protons per molecule. Its first formula_0 is 2.8 and the second is 5.7. Thus the malonate ion can be or . Malonate or propanedioate compounds include salts and esters of malonic acid, such as
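As a worked illustration of the two dissociation steps, the sketch below computes the equilibrium fractions of the fully protonated acid (H2A), the hydrogen malonate monoanion (HA−), and the malonate dianion (A2−) as a function of pH, using the pKa values of 2.8 and 5.7 quoted above. It is an ideal-solution calculation that ignores activity corrections.

```python
# Simple speciation sketch for a diprotic acid (malonic acid), using the
# pKa values quoted in the text (2.8 and 5.7). Ideal-solution treatment,
# no activity corrections; intended only as an illustration.

PKA1, PKA2 = 2.8, 5.7
KA1, KA2 = 10.0 ** -PKA1, 10.0 ** -PKA2

def fractions(pH):
    """Return (fraction H2A, fraction HA-, fraction A2-) at a given pH."""
    h = 10.0 ** -pH
    denom = h * h + KA1 * h + KA1 * KA2
    return (h * h / denom, KA1 * h / denom, KA1 * KA2 / denom)

for pH in (2.8, 4.25, 5.7, 7.0):
    h2a, ha, a2 = fractions(pH)
    print(f"pH {pH}: H2A {h2a:.2f}  HA- {ha:.2f}  A2- {a2:.2f}")
# At pH = pKa1 the acid and monoanion are present in roughly equal amounts;
# at pH = pKa2 the monoanion and dianion are roughly equal.
```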
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "pK_a"
}
] |
https://en.wikipedia.org/wiki?curid=570542
|
5705504
|
Gamma (disambiguation)
|
Gamma ( or ) is the third letter of the Greek alphabet.
Gamma may also refer to:
<templatestyles src="Template:TOC_right/styles.css" />
See also.
Topics referred to by the same term
<templatestyles src="Dmbox/styles.css" />
This page lists associated with the title .
|
[
{
"math_id": 0,
"text": "\\gamma"
}
] |
https://en.wikipedia.org/wiki?curid=5705504
|
57055994
|
Profinite word
|
In mathematics, more precisely in formal language theory, the profinite words are a generalization of the notion of finite words into a complete topological space. This notion allows the use of topology to study languages and finite semigroups. For example, profinite words are used to give an alternative characterization of the algebraic notion of a variety of finite semigroups.
Definition.
Let "A" be an alphabet. The set of profinite words over "A" consists of the completion of a metric space whose domain is the set formula_0 of words over "A". The distance used to define the metric is given using a notion of separation of words. Those notions are now defined.
Separation.
Let "M" and "N" be monoids, and let "p" and "q" be elements of the monoid "M". Let "φ" be a morphism of monoids from "M" to "N". It is said that the morphism "φ" separates "p" and "q" if formula_1. For example, the morphism formula_2 sending a word to the parity of its length separates the words "ababa" and "abaa". Indeed formula_3.
It is said that "N" separates "p" and "q" if there exists a morphism of monoids "φ" from "M" to "N" that separates "p" and "q". Using the previous example, formula_4 separates "ababa" and "abaa". More generally, formula_5 separates any words whose size are not congruent modulo "n". In general, any two distinct words can be separated, using the monoid whose elements are the factors of "p" plus a fresh element 0. The morphism sends prefixes of "p" to themselves and everything else to 0.
Distance.
The distance between two distinct words "p" and "q" is defined as the inverse of the size of the smallest monoid "N" separating "p" and "q". Thus, the distance between "ababa" and "abaa" is formula_6. The distance of "p" to itself is defined as 0.
This distance "d" is an ultrametric, that is, formula_7. Furthermore it satisfies formula_8 and formula_9.
Since any word "p" can be separated from any other word using a monoid with "|p|+1" elements, where "|p|" is the length of "p", it follows that the distance between "p" and any other word is at least formula_10. Thus the topology defined by this metric is discrete.
Profinite topology.
The profinite completion of formula_0, denoted formula_11, is the completion of the set of finite words under the distance defined above. The completion preserves the monoid structure.
The topology on formula_11 is compact.
Any monoid morphism formula_12, with "M" finite can be extended uniquely into a monoid morphism formula_13, and this morphism is uniformly continuous (using any metric on formula_14 compatible with the discrete topology). Furthermore, formula_11 is the least topological space with this property.
Profinite word.
A profinite word is an element of formula_11. And a profinite language is a set of profinite words. Every finite word is a profinite word. A few examples of profinite words that are not finite are now given.
For "m" any word, let formula_15 denote formula_16, which exists because formula_17 is a Cauchy sequence. Intuitively, to separate formula_17 and formula_18, a monoid should count at least up to formula_19, and hence requires at least formula_19 elements. Since formula_17 is a Cauchy sequence, formula_15 is indeed a profinite word.
Furthermore, the word formula_15 is idempotent. This is due to the fact that, for any morphism formula_20 with "N" finite, formula_21. Since "N" is finite, for "i" large enough, formula_22 is idempotent, and the sequence is constant.
Similarly, formula_23 and formula_24 are defined as formula_25 and formula_26 respectively.
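The stabilization that makes formula_15 well defined can be checked concretely in a small finite monoid. In the sketch below, the finite monoid is taken to be the integers modulo 12 under multiplication and the image of "m" is taken to be 2; both choices are arbitrary and serve only to illustrate that formula_22 eventually becomes constant and idempotent.

```python
# Illustration of why phi(m)^(i!) stabilizes at an idempotent in any finite
# monoid: here the monoid is (Z/12Z, multiplication) and the image of m is
# taken to be 2 (both choices are arbitrary, made only for this example).

from math import factorial

x, modulus = 2, 12
values = [pow(x, factorial(i), modulus) for i in range(1, 8)]
print(values)                # the sequence becomes constant: [2, 4, 4, 4, 4, 4, 4]

e = values[-1]
print(e * e % modulus == e)  # True: the limit value 4 is idempotent in Z/12Z
```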
Profinite languages.
The notion of profinite languages allows one to relate notions of semigroup theory to notions of topology. More precisely, given "P" a profinite language, the following statements are equivalent:
Similar statements also hold for languages "P" of finite words. The following conditions are equivalent.
Those characterisations are due to the more general fact that, taking the closure of a language of finite words, and restricting a profinite language to finite words are inverse operations, when they are applied to recognisable languages.
|
[
{
"math_id": 0,
"text": "A^*"
},
{
"math_id": 1,
"text": "\\phi(p)\\ne\\phi(q)"
},
{
"math_id": 2,
"text": "\\phi:A^*\\to \\mathbb Z/2\\mathbb Z, w\\mapsto |w| (\\operatorname{mod} 2)"
},
{
"math_id": 3,
"text": "\\phi(ababa)=1\\ne0=\\phi(abaa)"
},
{
"math_id": 4,
"text": "\\mathbb Z/2\\mathbb Z"
},
{
"math_id": 5,
"text": "\\mathbb Z/n\\mathbb Z"
},
{
"math_id": 6,
"text": "\\frac12"
},
{
"math_id": 7,
"text": "d(x,z)\\leq\\max\\left\\{d(x,y),d(y,z)\\right\\}"
},
{
"math_id": 8,
"text": "d(uw,vw)\\le d(u,v)"
},
{
"math_id": 9,
"text": "d(wu,wv)\\le d(u,v)"
},
{
"math_id": 10,
"text": "\\frac{1}{|p|}"
},
{
"math_id": 11,
"text": "\\widehat {A^*}"
},
{
"math_id": 12,
"text": "\\phi:A^*\\to M"
},
{
"math_id": 13,
"text": "\\widehat \\phi:\\widehat {A^*}\\to M"
},
{
"math_id": 14,
"text": "M"
},
{
"math_id": 15,
"text": "m^\\omega"
},
{
"math_id": 16,
"text": "\\lim_{i\\to\\infty}m^{i!}"
},
{
"math_id": 17,
"text": "m^{i!}"
},
{
"math_id": 18,
"text": "m^{i'!}"
},
{
"math_id": 19,
"text": "\\min(i,i')"
},
{
"math_id": 20,
"text": "\\phi:A^*\\to N"
},
{
"math_id": 21,
"text": "\\phi(m^{i!})=\\phi(m)^{i!}"
},
{
"math_id": 22,
"text": "\\phi(m)^{i!}"
},
{
"math_id": 23,
"text": "m^{\\omega+1}"
},
{
"math_id": 24,
"text": "m^{\\omega-1}"
},
{
"math_id": 25,
"text": "\\lim_{n\\to\\infty}m^{n!+ 1}"
},
{
"math_id": 26,
"text": "\\lim_{n\\to\\infty}m^{n!-1}"
},
{
"math_id": 27,
"text": "\\widehat {A^*}\\times\\widehat {A^*}"
},
{
"math_id": 28,
"text": "P"
},
{
"math_id": 29,
"text": "\\overline P"
},
{
"math_id": 30,
"text": "P=K\\cap A^*"
}
] |
https://en.wikipedia.org/wiki?curid=57055994
|
570662
|
Wireless power transfer
|
Transmission of electrical energy without wires as a physical link
Wireless power transfer (WPT), wireless power transmission, wireless energy transmission (WET), or electromagnetic power transfer is the transmission of electrical energy without wires as a physical link. In a wireless power transmission system, an electrically powered transmitter device generates a time-varying electromagnetic field that transmits power across space to a receiver device; the receiver device extracts power from the field and supplies it to an electrical load. The technology of wireless power transmission can eliminate the use of wires and batteries, thereby increasing the mobility, convenience, and safety of an electronic device for all users. Wireless power transfer is useful to power electrical devices where interconnecting wires are inconvenient, hazardous, or not possible.
Wireless power techniques mainly fall into two categories: near field and far-field. In "near field" or "non-radiative" techniques, power is transferred over short distances by magnetic fields using inductive coupling between coils of wire, or by electric fields using capacitive coupling between metal electrodes. Inductive coupling is the most widely used wireless technology; its applications include charging handheld devices like phones and electric toothbrushes, RFID tags, induction cooking, and wirelessly charging or continuous wireless power transfer in implantable medical devices like artificial cardiac pacemakers, or electric vehicles.
In "far-field" or "radiative" techniques, also called "power beaming", power is transferred by beams of electromagnetic radiation, like microwaves or laser beams. These techniques can transport energy longer distances but must be aimed at the receiver. Proposed applications for this type include solar power satellites and wireless powered drone aircraft.
An important issue associated with all wireless power systems is limiting the exposure of people and other living beings to potentially injurious electromagnetic fields.
<templatestyles src="Template:TOC limit/styles.css" />
Overview.
Wireless power transfer is a generic term for a number of different technologies for transmitting energy by means of electromagnetic fields. The technologies, listed in the table below, differ in the distance over which they can transfer power efficiently, whether the transmitter must be aimed (directed) at the receiver, and in the type of electromagnetic energy they use: time varying electric fields, magnetic fields, radio waves, microwaves, infrared or visible light waves.
In general a wireless power system consists of a "transmitter" device connected to a source of power such as a mains power line, which converts the power to a time-varying electromagnetic field, and one or more "receiver" devices which receive the power and convert it back to DC or AC electric current which is used by an electrical load. At the transmitter the input power is converted to an oscillating electromagnetic field by some type of "antenna" device. The word "antenna" is used loosely here; it may be a coil of wire which generates a magnetic field, a metal plate which generates an electric field, an antenna which radiates radio waves, or a laser which generates light. A similar antenna or coupling device at the receiver converts the oscillating fields to an electric current. An important parameter that determines the type of waves is the frequency, which determines the wavelength.
Wireless power uses the same fields and waves as wireless communication devices like radio, another familiar technology that involves electrical energy transmitted without wires by electromagnetic fields, used in cellphones, radio and television broadcasting, and WiFi. In radio communication the goal is the transmission of information, so the amount of power reaching the receiver is not so important, as long as it is sufficient that the information can be received intelligibly. In wireless communication technologies only tiny amounts of power reach the receiver. In contrast, with wireless power transfer the amount of energy received is the important thing, so the efficiency (fraction of transmitted energy that is received) is the more significant parameter. For this reason, wireless power technologies are likely to be more limited by distance than wireless communication technologies.
Wireless power transfer may be used to power up wireless information transmitters or receivers. This type of communication is known as wireless powered communication (WPC). When the harvested power is used to supply the power of wireless information transmitters, the network is known as Simultaneous Wireless Information and Power Transfer (SWIPT); whereas when it is used to supply the power of wireless information receivers, it is known as a Wireless Powered Communication Network (WPCN).
These are the different wireless power technologies:
Field regions.
Electric and magnetic fields are created by charged particles in matter such as electrons. A stationary charge creates an electrostatic field in the space around it. A steady current of charges (direct current, DC) creates a static magnetic field around it. The above fields contain energy, but cannot carry power because they are static. However time-varying fields can carry power. Accelerating electric charges, such as are found in an alternating current (AC) of electrons in a wire, create time-varying electric and magnetic fields in the space around them. These fields can exert oscillating forces on the electrons in a receiving "antenna", causing them to move back and forth. These represent alternating current which can be used to power a load.
The oscillating electric and magnetic fields surrounding moving electric charges in an antenna device can be divided into two regions, depending on the distance Drange from the antenna.
The boundary between the regions is somewhat vaguely defined. The fields have different characteristics in these regions, and different technologies are used for transferring power:
Resonance, such as resonant inductive coupling, can increase the coupling between the antennas greatly, allowing efficient transmission at somewhat greater distances, although the fields still decrease rapidly with distance. Therefore the range of near-field devices is conventionally divided into two categories:
*Short range – up to about one antenna diameter: Drange ≤ Dant. This is the range over which ordinary nonresonant capacitive or inductive coupling can transfer practical amounts of power.
*Mid-range – up to 10 times the antenna diameter: Drange ≤ 10 Dant. This is the range over which resonant capacitive or inductive coupling can transfer practical amounts of power.
However, unlike near fields, electromagnetic radiation can be focused by reflection or refraction into beams. By using a high-gain antenna or optical system which concentrates the radiation into a narrow beam aimed at the receiver, it can be used for long-range power transmission. From the Rayleigh criterion, to produce the narrow beams necessary to focus a significant amount of the energy on a distant receiver, an antenna must be much larger than the wavelength of the waves used: Dant ≫ λ = c/f. Practical "beam power" devices require wavelengths in the centimeter region or below, corresponding to frequencies above 1 GHz, in the microwave range or above.
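The condition Dant ≫ λ = c/f can be checked with a one-line computation of the wavelength at a few frequencies; the frequencies below are arbitrary illustrative choices. Only in the microwave range does the wavelength become small enough for an antenna of practical size to span many wavelengths and form a narrow beam.

```python
# Wavelength for a few transmission frequencies, lambda = c / f.
# The frequencies chosen are arbitrary illustrative values.

C = 299_792_458.0  # speed of light, m/s

for f_hz in (50, 13.56e6, 1e9, 10e9):
    wavelength = C / f_hz
    print(f"{f_hz:.3g} Hz -> wavelength {wavelength:.3g} m")
# 50 Hz mains-frequency fields have wavelengths of thousands of kilometers;
# only at microwave frequencies (around 1 GHz and above) does the wavelength
# shrink toward the centimeter scale needed for narrow beam forming.
```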
Near-field (nonradiative) techniques.
At large relative distance, the near-field components of electric and magnetic fields are approximately quasi-static oscillating dipole fields. These fields decrease with the cube of distance, as (Drange/Dant)^−3. Since power is proportional to the square of the field strength, the power transferred decreases as (Drange/Dant)^−6, or 60 dB per decade. In other words, when the antennas are far apart, increasing the distance between them tenfold causes the power received to decrease by a factor of 10^6 = 1,000,000. As a result, inductive and capacitive coupling can only be used for short-range power transfer, within a few times the diameter of the antenna device Dant. Unlike in a radiative system, where the maximum radiation occurs when the dipole antennas are oriented transverse to the direction of propagation, with dipole fields the maximum coupling occurs when the dipoles are oriented longitudinally.
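The inverse-sixth-power scaling quoted above can be tabulated directly. The following sketch is pure arithmetic on the stated proportionality, normalized arbitrarily to unit power at one antenna diameter.

```python
# Tabulating the near-field scaling described above: field ~ (D/D_ant)^-3,
# so received power ~ (D/D_ant)^-6. The reference power of 1.0 at one
# antenna diameter is an arbitrary normalization for illustration.

def relative_power(d_over_dant):
    """Received power relative to the value at one antenna diameter."""
    return d_over_dant ** -6

for ratio in (1, 2, 5, 10):
    print(f"distance = {ratio:>2} antenna diameters -> relative power {relative_power(ratio):.1e}")
# A tenfold increase in distance reduces the received power by a factor
# of 10**6, i.e. 60 dB, matching the "60 dB per decade" figure in the text.
```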
Inductive coupling.
In inductive coupling ("electromagnetic induction" or "inductive power transfer", IPT), power is transferred between coils of wire by a magnetic field. The transmitter and receiver coils together form a "transformer" "(see diagram)". An alternating current (AC) through the transmitter coil "(L1)" creates an oscillating magnetic field "(B)" by Ampere's law. The magnetic field passes through the receiving coil "(L2)", where it induces an alternating EMF (voltage) by Faraday's law of induction, which creates an alternating current in the receiver. The induced alternating current may either drive the load directly, or be rectified to direct current (DC) by a rectifier in the receiver, which drives the load. A few systems, such as electric toothbrush charging stands, work at 50/60 Hz so AC mains current is applied directly to the transmitter coil, but in most systems an electronic oscillator generates a higher frequency AC current which drives the coil, because transmission efficiency improves with frequency.
Inductive coupling is the oldest and most widely used wireless power technology, and virtually the only one so far which is used in commercial products. It is used in inductive charging stands for cordless appliances used in wet environments such as electric toothbrushes and shavers, to reduce the risk of electric shock. Another application area is "transcutaneous" recharging of biomedical prosthetic devices implanted in the human body, such as cardiac pacemakers, to avoid having wires passing through the skin. It is also used to charge electric vehicles such as cars and to either charge or power transit vehicles like buses and trains.
However the fastest growing use is wireless charging pads to recharge mobile and handheld wireless devices such as laptop and tablet computers, computer mouse, cellphones, digital media players, and video game controllers. In the United States, the Federal Communications Commission (FCC) provided its first certification for a wireless transmission charging system in December 2017.
The power transferred increases with frequency and the mutual inductance formula_0 between the coils, which depends on their geometry and the distance formula_1 between them. A widely used figure of merit is the coupling coefficient formula_2. This dimensionless parameter is equal to the fraction of magnetic flux through the transmitter coil formula_3 that passes through the receiver coil formula_4 when L2 is open circuited. If the two coils are on the same axis and close together so all the magnetic flux from formula_3 passes through formula_4, formula_5 and the link efficiency approaches 100%. The greater the separation between the coils, the more of the magnetic field from the first coil misses the second, and the lower formula_6 and the link efficiency are, approaching zero at large separations. The link efficiency and power transferred is roughly proportional to formula_7. In order to achieve high efficiency, the coils must be very close together, a fraction of the coil diameter formula_8, usually within centimeters, with the coils' axes aligned. Wide, flat coil shapes are usually used, to increase coupling. Ferrite "flux confinement" cores can confine the magnetic fields, improving coupling and reducing interference to nearby electronics, but they are heavy and bulky so small wireless devices often use air-core coils.
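The coupling coefficient mentioned above is conventionally defined as k = M/√(L1·L2), and that definition is assumed in the sketch below; the inductance values are arbitrary illustrative numbers, and the "efficiency roughly proportional to k^2" line reflects only the rough trend stated in the text, not a complete circuit model.

```python
# Hedged sketch of the coupling figure of merit for inductive links.
# Assumes the conventional definition k = M / sqrt(L1 * L2); the component
# values are arbitrary, and "efficiency ~ k**2" is only the rough trend
# described in the text, not a full link-efficiency model.

from math import sqrt

def coupling_coefficient(mutual_inductance, l1, l2):
    """Dimensionless coupling coefficient k, between 0 and 1."""
    return mutual_inductance / sqrt(l1 * l2)

L1 = L2 = 24e-6          # 24 µH transmitter and receiver coils (assumed values)
for m in (24e-6, 12e-6, 2.4e-6, 0.24e-6):   # mutual inductance falling with distance
    k = coupling_coefficient(m, L1, L2)
    print(f"M = {m*1e6:5.2f} µH -> k = {k:.3f}, efficiency trend ~ k^2 = {k*k:.4f}")
```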
Ordinary inductive coupling can only achieve high efficiency when the coils are very close together, usually adjacent. In most modern inductive systems resonant inductive coupling "(described below)" is used, in which the efficiency is increased by using resonant circuits. This can achieve high efficiencies at greater distances than nonresonant inductive coupling.
Resonant inductive coupling.
Resonant inductive coupling ("electrodynamic coupling", "strongly coupled magnetic resonance") is a form of inductive coupling in which power is transferred by magnetic fields "(B, green)" between two resonant circuits (tuned circuits), one in the transmitter and one in the receiver "(see diagram, right)". Each resonant circuit consists of a coil of wire connected to a capacitor, or a self-resonant coil or other resonator with internal capacitance. The two are tuned to resonate at the same resonant frequency. The resonance between the coils can greatly increase coupling and power transfer, analogously to the way a vibrating tuning fork can induce sympathetic vibration in a distant fork tuned to the same pitch.
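To see the tuning condition numerically, here is a minimal Python sketch using the standard LC resonance relation f0 = 1/(2π√(LC)); the component values are hypothetical and chosen only so that both circuits share the same resonant frequency.

```python
import math

def resonant_frequency(L, C):
    """Resonant frequency of an LC tuned circuit: f0 = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Hypothetical component values; both sides share the same L*C product.
L_tx, C_tx = 24e-6, 100e-9             # transmitter coil and capacitor
L_rx, C_rx = 12e-6, 200e-9             # receiver coil and capacitor
print(resonant_frequency(L_tx, C_tx))  # ~103 kHz
print(resonant_frequency(L_rx, C_rx))  # same value, so the two circuits are tuned together
```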
Nikola Tesla first discovered resonant coupling during his pioneering experiments in wireless power transfer around the turn of the 20th century, but the possibility of using resonant coupling to increase transmission range has only recently been explored. In 2007 a team led by Marin Soljačić at MIT used two coupled tuned circuits, each made of a 25 cm self-resonant coil of wire at 10 MHz, to achieve the transmission of 60 W of power over a distance of (8 times the coil diameter) at around 40% efficiency.
The concept behind resonant inductive coupling systems is that high Q factor resonators exchange energy at a much higher rate than they lose energy due to internal damping. Therefore, by using resonance, the same amount of power can be transferred at greater distances, using the much weaker magnetic fields out in the peripheral regions ("tails") of the near fields. Resonant inductive coupling can achieve high efficiency at ranges of 4 to 10 times the coil diameter ("D"ant). This is called "mid-range" transfer, in contrast to the "short range" of nonresonant inductive transfer, which can achieve similar efficiencies only when the coils are adjacent. Another advantage is that resonant circuits interact with each other so much more strongly than they do with nonresonant objects that power losses due to absorption in stray nearby objects are negligible.
A drawback of resonant coupling theory is that at close ranges when the two resonant circuits are tightly coupled, the resonant frequency of the system is no longer constant but "splits" into two resonant peaks, so the maximum power transfer no longer occurs at the original resonant frequency and the oscillator frequency must be tuned to the new resonance peak.
Resonant technology is currently being widely incorporated in modern inductive wireless power systems. One of the possibilities envisioned for this technology is area wireless power coverage. A coil in the wall or ceiling of a room might be able to wirelessly power lights and mobile devices anywhere in the room, with reasonable efficiency. An environmental and economic benefit of wirelessly powering small devices such as clocks, radios, music players and remote controls is that it could drastically reduce the 6 billion batteries disposed of each year, a large source of toxic waste and groundwater contamination.
A study for the Swedish military found that 85 kHz systems for dynamic wireless power transfer for vehicles can cause electromagnetic interference at a radius of up to 300 kilometers.
Capacitive coupling.
Capacitive coupling, also referred to as electric coupling, makes use of electric fields for the transmission of power between two electrodes (an anode and cathode) forming a capacitance for the transfer of power. In capacitive coupling (electrostatic induction), the conjugate of inductive coupling, energy is transmitted by electric fields between electrodes such as metal plates. The transmitter and receiver electrodes form a capacitor, with the intervening space as the dielectric. An alternating voltage generated by the transmitter is applied to the transmitting plate, and the oscillating electric field induces an alternating potential on the receiver plate by electrostatic induction, which causes an alternating current to flow in the load circuit. The amount of power transferred increases with the frequency, the square of the voltage, and the capacitance between the plates, which is proportional to the area of the smaller plate and (for short distances) inversely proportional to the separation.
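The scaling just described can be sketched numerically. The snippet below estimates the plate capacitance with the usual parallel-plate formula and combines the stated dependencies (frequency, voltage squared, capacitance) into a relative figure of merit; it is not an absolute power formula, and the plate dimensions and voltages are hypothetical.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(area_m2, gap_m, eps_r=1.0):
    """Parallel-plate estimate C = eps0 * eps_r * A / d (short gaps, fringing ignored)."""
    return EPS0 * eps_r * area_m2 / gap_m

def relative_power_figure(freq_hz, volts, cap_f):
    """Relative scaling only: transferred power grows with f, V^2 and C, per the text above."""
    return freq_hz * volts**2 * cap_f

C = plate_capacitance(area_m2=0.01, gap_m=1e-3)   # 10 cm x 10 cm plates, 1 mm apart
print(f"C ~ {C*1e12:.1f} pF")                     # ~88.5 pF
print(relative_power_figure(1e6, 1000.0, C))      # doubling the voltage quadruples this figure
```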
Capacitive coupling has only been used practically in a few low power applications, because the very high voltages on the electrodes required to transmit significant power can be hazardous, and can cause unpleasant side effects such as noxious ozone production. In addition, in contrast to magnetic fields, electric fields interact strongly with most materials, including the human body, due to dielectric polarization. Intervening materials between or near the electrodes can absorb the energy, in the case of humans possibly causing excessive electromagnetic field exposure. However capacitive coupling has a few advantages over inductive coupling. The field is largely confined between the capacitor plates, reducing interference, which in inductive coupling requires heavy ferrite "flux confinement" cores. Also, alignment requirements between the transmitter and receiver are less critical. Capacitive coupling has recently been applied to charging battery powered portable devices as well as charging or continuous wireless power transfer in biomedical implants, and is being considered as a means of transferring power between substrate layers in integrated circuits.
Two types of circuit have been used:
Resonant capacitive coupling.
Resonance can also be used with capacitive coupling to extend the range. At the turn of the 20th century, Nikola Tesla did the first experiments with both resonant inductive and capacitive coupling.
Electrodynamic Wireless Power Transfer.
An electrodynamic wireless power transfer (EWPT) system utilizes a receiver with a mechanically resonating or rotating permanent magnet. When subjected to a time-varying magnetic field, the mechanical motion of the resonating magnet is converted into electricity by one or more electromechanical transduction schemes (e.g. electromagnetic/induction, piezoelectric, or capacitive). In contrast to inductive coupling systems, which usually use high-frequency magnetic fields, EWPT uses low-frequency magnetic fields (<1 kHz), which safely pass through conductive media and have higher human field exposure limits (~2 mT rms at 1 kHz), showing promise for potential use in wirelessly recharging biomedical implants.
For EWPT devices having identical resonant frequencies, the magnitude of power transfer depends on the coupling coefficient formula_6 between the transmitter and receiver devices relative to the critical coupling coefficient. For coupled resonators with the same resonant frequencies, wireless power transfer between the transmitter and the receiver is spread over three regimes – under-coupled, critically coupled and over-coupled. As the coupling coefficient increases from the under-coupled regime (formula_9) to the critically coupled regime, the optimum voltage gain curve grows in magnitude (measured at the receiver) and peaks when formula_10; the system then enters the over-coupled regime, where formula_11 and the peak splits into two. The coupling coefficient is demonstrated to be a function of the distance between the source and the receiver devices.
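A minimal sketch of this regime classification is given below. It assumes the common coupled-resonator estimate k_crit ≈ 1/√(Q1·Q2), which is not stated in the text above, and uses hypothetical quality factors.

```python
import math

def critical_coupling(Q_tx, Q_rx):
    """A common coupled-resonator estimate (an assumption here): k_crit ~ 1/sqrt(Q_tx*Q_rx)."""
    return 1.0 / math.sqrt(Q_tx * Q_rx)

def regime(k, k_crit):
    if k < k_crit:
        return "under-coupled (gain below maximum)"
    if k > k_crit:
        return "over-coupled (response peak splits in two)"
    return "critically coupled (maximum voltage gain)"

k_crit = critical_coupling(Q_tx=100.0, Q_rx=100.0)   # hypothetical quality factors
for k in (0.002, 0.01, 0.05):                        # coupling falls off with distance
    print(f"k = {k:.3f}, k_crit = {k_crit:.3f} -> {regime(k, k_crit)}")
```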
Magnetodynamic coupling.
In this method, power is transmitted between two rotating armatures, one in the transmitter and one in the receiver, which rotate synchronously, coupled together by a magnetic field generated by permanent magnets on the armatures. The transmitter armature is turned either by or as the rotor of an electric motor, and its magnetic field exerts torque on the receiver armature, turning it. The magnetic field acts like a mechanical coupling between the armatures. The receiver armature produces power to drive the load, either by turning a separate electric generator or by using the receiver armature itself as the rotor in a generator.
This device has been proposed as an alternative to inductive power transfer for noncontact charging of electric vehicles. A rotating armature embedded in a garage floor or curb would turn a receiver armature in the underside of the vehicle to charge its batteries. It is claimed that this technique can transfer power over distances of 10 to 15 cm (4 to 6 inches) with high efficiency, over 90%. Also, the low frequency stray magnetic fields produced by the rotating magnets produce less electromagnetic interference to nearby electronic devices than the high frequency magnetic fields produced by inductive coupling systems. A prototype system charging electric vehicles has been in operation at University of British Columbia since 2012. Other researchers, however, claim that the two energy conversions (electrical to mechanical to electrical again) make the system less efficient than electrical systems like inductive coupling.
Zenneck Wave Transmission.
A new kind of system using Zenneck-type waves was demonstrated by Oruganti et al., who showed that it is possible to excite such waves on flat metal-air interfaces and transmit power across metal obstacles.
Here the idea is to excite a localized charge oscillation at the metal-air interface; the resulting modes propagate along that interface.
Far-field (radiative) techniques.
Far field methods achieve longer ranges, often multiple kilometer ranges, where the distance is much greater than the diameter of the device(s). High-directivity antennas or well-collimated laser light produce a beam of energy that can be made to match the shape of the receiving area. The maximum directivity for antennas is physically limited by diffraction.
In general, visible light (from lasers) and microwaves (from purpose-designed antennas) are the forms of electromagnetic radiation best suited to energy transfer.
The dimensions of the components may be dictated by the distance from transmitter to receiver, the wavelength and the Rayleigh criterion or diffraction limit, used in standard radio frequency antenna design, which also applies to lasers. Airy's diffraction limit is also frequently used to determine an approximate spot size at an arbitrary distance from the aperture. Electromagnetic radiation experiences less diffraction at shorter wavelengths (higher frequencies); so, for example, a blue laser is diffracted less than a red one.
The Rayleigh limit (also known as the Abbe diffraction limit), although originally applied to image resolution, can be viewed in reverse, and dictates that the irradiance (or "intensity") of any electromagnetic wave (such as a microwave or laser beam) will be reduced as the beam diverges over distance at a minimum rate inversely proportional to the aperture size. The larger the ratio of a transmitting antenna's aperture or laser's exit aperture to the wavelength of radiation, the more can the radiation be concentrated in a compact beam.
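As a rough numerical illustration of this aperture-to-wavelength trade-off, the sketch below uses the Airy first-null estimate (spot diameter ≈ 2.44·λ·d/D) mentioned above; the apertures and distances are hypothetical examples.

```python
def airy_spot_diameter(wavelength_m, distance_m, aperture_m):
    """Approximate first-null (Airy) spot diameter at range d: ~2.44 * lambda * d / D."""
    return 2.44 * wavelength_m * distance_m / aperture_m

# Larger aperture-to-wavelength ratios concentrate the beam more tightly.
print(airy_spot_diameter(0.122, 1000.0, 3.0))    # 2.45 GHz microwaves, 3 m dish, 1 km -> ~99 m
print(airy_spot_diameter(850e-9, 1000.0, 0.05))  # near-IR laser, 5 cm exit aperture, 1 km -> ~4 cm
```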
Microwave power beaming can be more efficient than lasers, and is less prone to atmospheric attenuation caused by dust or aerosols such as fog.
Here, the power levels are calculated by combining the above parameters together, and adding in the gains and losses due to the antenna characteristics and the transparency and dispersion of the medium through which the radiation passes. That process is known as calculating a link budget.
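A minimal link-budget sketch in decibel form is shown below. It uses the standard free-space path-loss expression together with hypothetical transmit power, antenna gains and frequency; a real budget would add the atmospheric and conversion terms discussed in this section.

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / c)

def received_power_dbm(p_tx_dbm, g_tx_dbi, g_rx_dbi, distance_m, freq_hz, misc_loss_db=0.0):
    """Simple link budget: transmit power plus antenna gains minus path and other losses."""
    return p_tx_dbm + g_tx_dbi + g_rx_dbi - fspl_db(distance_m, freq_hz) - misc_loss_db

# Hypothetical 2.45 GHz link over 1 km with moderately directive antennas.
print(received_power_dbm(p_tx_dbm=40.0, g_tx_dbi=30.0, g_rx_dbi=30.0,
                         distance_m=1000.0, freq_hz=2.45e9, misc_loss_db=2.0))
```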
Microwaves.
Power transmission via radio waves can be made more directional, allowing longer-distance power beaming, with shorter wavelengths of electromagnetic radiation, typically in the microwave range. A rectenna may be used to convert the microwave energy back into electricity. Rectenna conversion efficiencies exceeding 95% have been realized. Power beaming using microwaves has been proposed for the transmission of energy from orbiting solar power satellites to Earth, and the beaming of power to spacecraft leaving orbit has also been considered.
Power beaming by microwaves has the difficulty that, for most space applications, the required aperture sizes are very large due to diffraction limiting antenna directionality. For example, the 1978 NASA study of solar power satellites required a transmitting antenna and a receiving rectenna for a microwave beam at 2.45 GHz. These sizes can be somewhat decreased by using shorter wavelengths, although short wavelengths may have difficulties with atmospheric absorption and beam blockage by rain or water droplets. Because of the "thinned-array curse", it is not possible to make a narrower beam by combining the beams of several smaller satellites.
For earthbound applications, a large-area 10 km diameter receiving array allows large total power levels to be used while operating at the low power density suggested for human electromagnetic exposure safety. A human safe power density of 1 mW/cm2 distributed across a 10 km diameter area corresponds to 750 megawatts total power level. This is the power level found in many modern electric power plants. For comparison, a solar PV farm of similar size might easily exceed 10,000 megawatts (rounded) at best conditions during daytime.
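The arithmetic behind these figures is easy to reproduce: the short sketch below multiplies the quoted power density by the area of a 10 km diameter circle.

```python
import math

def beam_power_watts(power_density_w_per_m2, diameter_m):
    """Total power intercepted by a circular area at a uniform power density."""
    radius = diameter_m / 2.0
    return power_density_w_per_m2 * math.pi * radius**2

# 1 mW/cm^2 = 10 W/m^2 over a 10 km diameter rectenna field.
total_w = beam_power_watts(10.0, 10_000.0)
print(f"{total_w/1e6:.0f} MW")   # ~785 MW, the same order as the ~750 MW quoted above
```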
Following World War II, which saw the development of high-power microwave emitters known as cavity magnetrons, the idea of using microwaves to transfer power was researched. By 1964, a miniature helicopter propelled by microwave power had been demonstrated.
Japanese researcher Hidetsugu Yagi also investigated wireless energy transmission using a directional array antenna that he designed. In February 1926, Yagi and his colleague Shintaro Uda published their first paper on the tuned high-gain directional array now known as the Yagi antenna. While it did not prove to be particularly useful for power transmission, this beam antenna has been widely adopted throughout the broadcasting and wireless telecommunications industries due to its excellent performance characteristics.
Wireless high power transmission using microwaves is well proven. Experiments in the tens of kilowatts have been performed at the Goldstone Deep Space Communications Complex in California in 1975 and more recently (1997) at Grand Bassin on Reunion Island. These methods achieve distances on the order of a kilometer.
Under experimental conditions, microwave conversion efficiency was measured to be around 54% across one meter.
A change to 24 GHz has been suggested as microwave emitters similar to LEDs have been made with very high quantum efficiencies using negative resistance, i.e., Gunn or IMPATT diodes, and this would be viable for short range links.
In 2013, inventor Hatem Zeine demonstrated how wireless power transmission using phased array antennas can deliver electrical power up to 30 feet. It uses the same radio frequencies as WiFi.
In 2015, researchers at the University of Washington introduced power over Wi-Fi, which trickle-charged batteries and powered battery-free cameras and temperature sensors using transmissions from Wi-Fi routers. Wi-Fi signals were shown to power battery-free temperature and camera sensors at ranges of up to 20 feet. It was also shown that Wi-Fi can be used to wirelessly trickle-charge nickel–metal hydride and lithium-ion coin-cell batteries at distances of up to 28 feet.
In 2017, the Federal Communications Commission (FCC) certified the first mid-field radio frequency (RF) transmitter of wireless power. In 2021 the FCC granted a license to an over-the-air (OTA) wireless charging system that combines near-field and far-field methods by using a frequency of about 900 MHz. Due to the radiated power of about 1 W, this system is intended for small IoT devices such as various sensors, trackers, detectors and monitors.
Lasers.
In the case of electromagnetic radiation closer to the visible region of the spectrum (0.2 to 2 micrometers), power can be transmitted by converting electricity into a laser beam that is received and concentrated onto photovoltaic cells (solar cells). This mechanism is generally known as 'power beaming' because the power is beamed at a receiver that can convert it to electrical energy. At the receiver, special photovoltaic laser power converters which are optimized for monochromatic light conversion are applied.
Advantages compared to other wireless methods are:
Drawbacks include:
Laser 'powerbeaming' technology was explored in military weapons and aerospace applications. It is also applied to the powering of various kinds of sensors in industrial environments. More recently, it has been developed for powering commercial and consumer electronics. Wireless energy transfer systems using lasers for the consumer space have to satisfy laser safety requirements standardized under IEC 60825.
The first wireless power system using lasers for consumer applications was demonstrated in 2018, capable of delivering power to stationary and moving devices across a room. This wireless power system complies with safety regulations according to the IEC 60825 standard. It is also approved by the US Food and Drug Administration (FDA).
Other practical considerations include beam propagation, coherence, and the range limitation problem.
Geoffrey Landis is one of the pioneers of solar power satellites and laser-based transfer of energy, especially for space and lunar missions. The demand for safe and frequent space missions has resulted in proposals for a laser-powered space elevator.
NASA's Dryden Flight Research Center has demonstrated a lightweight unmanned model plane powered by a laser beam. This proof-of-concept demonstrates the feasibility of periodic recharging using a laser beam system.
Scientists from the Chinese Academy of Sciences have developed a proof-of-concept of utilizing a dual-wavelength laser to wirelessly charge portable devices or UAVs.
Atmospheric plasma channel coupling.
In atmospheric plasma channel coupling, energy is transferred between two electrodes by electrical conduction through ionized air. When an electric field gradient exists between the two electrodes, exceeding 34 kilovolts per centimeter at sea level atmospheric pressure, an electric arc occurs. This atmospheric dielectric breakdown results in the flow of electric current along a random trajectory through an ionized plasma channel between the two electrodes. An example of this is natural lightning, where one electrode is a virtual point in a cloud and the other is a point on Earth. Laser Induced Plasma Channel (LIPC) research is presently underway using ultrafast lasers to artificially promote development of the plasma channel through the air, directing the electric arc, and guiding the current across a specific path in a controllable manner. The laser energy reduces the atmospheric dielectric breakdown voltage and the air is made less insulating by superheating, which lowers the density (formula_12) of the filament of air.
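As a back-of-the-envelope check, the sketch below multiplies the quoted ~34 kV/cm breakdown field by a gap length to estimate the voltage needed to strike an unaided arc; the gap lengths are arbitrary examples.

```python
def breakdown_voltage_kv(gap_cm, field_kv_per_cm=34.0):
    """Rough arc-onset voltage: gap length times the ~34 kV/cm sea-level breakdown field."""
    return gap_cm * field_kv_per_cm

print(breakdown_voltage_kv(1.0))    # ~34 kV to arc across 1 cm of air
print(breakdown_voltage_kv(30.0))   # ~1 MV for a 30 cm gap, before any laser pre-ionization
```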
This new process is being explored for use as a laser lightning rod and as a means to trigger lightning bolts from clouds for natural lightning channel studies, for artificial atmospheric propagation studies, as a substitute for conventional radio antennas, for applications associated with electric welding and machining, for diverting power from high-voltage capacitor discharges, for directed-energy weapon applications employing electrical conduction through a ground return path, and electronic jamming.
Energy harvesting.
In the context of wireless power, "energy harvesting", also called "power harvesting" or "energy scavenging", is the conversion of ambient energy from the environment to electric power, mainly to power small autonomous wireless electronic devices. The ambient energy may come from stray electric or magnetic fields or radio waves from nearby electrical equipment, light, thermal energy (heat), or kinetic energy such as vibration or motion of the device. Although the efficiency of conversion is usually low and the power gathered often minuscule (milliwatts or microwatts), it can be adequate to run or recharge small micropower wireless devices such as remote sensors, which are proliferating in many fields. This new technology is being developed to eliminate the need for battery replacement or charging of such wireless devices, allowing them to operate completely autonomously.
History.
19th century developments and dead ends.
The 19th century saw many developments of theories, and counter-theories, on how electrical energy might be transmitted. In 1826, André-Marie Ampère discovered a connection between current and magnets. Michael Faraday described in 1831, with his law of induction, the electromotive force driving a current in a conductor loop by a time-varying magnetic flux. Transmission of electrical energy without wires was observed by many inventors and experimenters, but lack of a coherent theory attributed these phenomena vaguely to electromagnetic induction. A concise explanation of these phenomena came in the 1860s from Maxwell's equations, by which James Clerk Maxwell established a theory that unified electricity and magnetism into electromagnetism, predicting the existence of electromagnetic waves as the "wireless" carrier of electromagnetic energy. Around 1884 John Henry Poynting defined the Poynting vector and gave Poynting's theorem, which describe the flow of power across an area within electromagnetic radiation and allow for a correct analysis of wireless power transfer systems. This was followed by Heinrich Rudolf Hertz's 1888 validation of the theory, which included the evidence for radio waves.
During the same period two schemes of wireless signaling were put forward by William Henry Ward (1871) and Mahlon Loomis (1872) that were based on the erroneous belief that there was an electrified atmospheric stratum accessible at low altitude. Both inventors' patents noted that this layer, connected with a return path using "Earth currents", would allow for wireless telegraphy as well as supply power for the telegraph, doing away with artificial batteries, and could also be used for lighting, heat, and motive power. A more practical demonstration of wireless transmission via conduction came in Amos Dolbear's 1879 magneto-electric telephone, which used ground conduction to transmit over a distance of a quarter of a mile.
Nikola Tesla.
After 1890, inventor Nikola Tesla experimented with transmitting power by inductive and capacitive coupling using spark-excited radio frequency resonant transformers, now called Tesla coils, which generated high AC voltages. Early on he attempted to develop a wireless lighting system based on near-field inductive and capacitive coupling and conducted a series of public demonstrations where he lit Geissler tubes and even incandescent light bulbs from across a stage. He found he could increase the distance at which he could light a lamp by using a receiving LC circuit tuned to resonance with the transmitter's LC circuit, using resonant inductive coupling. Tesla failed to make a commercial product out of his findings but his resonant inductive coupling method is now widely used in electronics and is currently being applied to short-range wireless power systems.
Tesla went on to develop a wireless power distribution system that he hoped would be capable of transmitting power long distance directly into homes and factories. Early on he seemed to borrow from the ideas of Mahlon Loomis, proposing a system composed of balloons to suspend transmitting and receiving electrodes in the air at altitude, where he thought the lower pressure would allow him to send high voltages (millions of volts) over long distances. To further study the conductive nature of low pressure air he set up a test facility at high altitude in Colorado Springs during 1899. Experiments he conducted there with a large coil operating in the megavolts range, as well as observations he made of the electronic noise of lightning strikes, led him to conclude incorrectly that he could use the entire globe of the Earth to conduct electrical energy. The theory included driving alternating current pulses into the Earth at its resonant frequency from a grounded Tesla coil working against an elevated capacitance to make the potential of the Earth oscillate. Tesla thought this would allow alternating current to be received with a similar capacitive antenna tuned to resonance with it at any point on Earth with very little power loss. His observations also led him to believe a high voltage used in a coil at an elevation of a few hundred feet would "break the air stratum down", eliminating the need for miles of cable hanging on balloons to create his atmospheric return circuit. Tesla would go on the next year to propose a "World Wireless System" that was to broadcast both information and power worldwide. In 1901, at Shoreham, New York, he attempted to construct a large high-voltage wireless power station, now called Wardenclyffe Tower, but by 1904 investment dried up and the facility was never completed.
Near-field and non-radiative technologies.
Inductive power transfer between nearby wire coils was the earliest wireless power technology to be developed, existing since the transformer was developed in the 1800s. Induction heating has been used since the early 1900s and is used for induction cooking.
With the advent of cordless devices, induction charging stands have been developed for appliances used in wet environments, like electric toothbrushes and electric razors, to eliminate the hazard of electric shock. One of the earliest proposed applications of inductive transfer was to power electric locomotives. In 1892 Maurice Hutin and Maurice Leblanc patented a wireless method of powering railroad trains using resonant coils inductively coupled to a track wire at 3 kHz.
In the early 1960s resonant inductive wireless energy transfer was used successfully in implantable medical devices including such devices as pacemakers and artificial hearts. While the early systems used a resonant receiver coil, later systems implemented resonant transmitter coils as well. These medical devices are designed for high efficiency using low power electronics while efficiently accommodating some misalignment and dynamic twisting of the coils. The separation between the coils in implantable applications is commonly less than 20 cm. Today resonant inductive energy transfer is regularly used for providing electric power in many commercially available medical implantable devices.
The first passive RFID (Radio Frequency Identification) technologies were invented by Mario Cardullo (1973) and Koelle et al. (1975) and by the 1990s were being used in proximity cards and contactless smartcards.
The proliferation of portable wireless communication devices such as mobile phones, tablet, and laptop computers in recent decades is currently driving the development of mid-range wireless powering and charging technology to eliminate the need for these devices to be tethered to wall plugs during charging. The Wireless Power Consortium was established in 2008 to develop interoperable standards across manufacturers. Its Qi inductive power standard published in August 2009 enables high efficiency charging and powering of portable devices of up to 5 watts over distances of 4 cm (1.6 inches). The wireless device is placed on a flat charger plate (which can be embedded in table tops at cafes, for example) and power is transferred from a flat coil in the charger to a similar one in the device. In 2007, a team led by Marin Soljačić at MIT used a dual resonance transmitter with a 25 cm diameter secondary tuned to 10 MHz to transfer 60 W of power to a similar dual resonance receiver over a distance of (eight times the transmitter coil diameter) at around 40% efficiency.
In 2008 the team of Greg Leyh and Mike Kennan of Nevada Lightning Lab used a grounded dual resonance transmitter with a 57 cm diameter secondary tuned to 60 kHz and a similar grounded dual resonance receiver to transfer power through coupled electric fields with an earth current return circuit over a distance of . In 2011, Dr. Christopher A. Tucker and Professor Kevin Warwick of the University of Reading recreated Tesla's 1900 patent 0,645,576 in miniature and demonstrated power transmission over with a coil diameter of at a resonant frequency of 27.50 MHz, with an effective efficiency of 60%.
Microwaves and lasers.
Before World War II, little progress was made in wireless power transmission. Radio was developed for communication uses, but could not be used for power transmission, since the relatively low-frequency radio waves spread out in all directions and little energy reached the receiver. In radio communication, at the receiver, an amplifier intensifies a weak signal using energy from another source. For power transmission, efficient transfer required transmitters that could generate higher-frequency microwaves, which can be focused in narrow beams towards a receiver.
The development of microwave technology during World War II, such as the klystron and magnetron tubes and parabolic antennas, made radiative (far-field) methods practical for the first time, and the first long-distance wireless power transmission was achieved in the 1960s by William C. Brown. In 1964, Brown invented the rectenna, which could efficiently convert microwaves to DC power, and that same year demonstrated it with the first wireless-powered aircraft, a model helicopter powered by microwaves beamed from the ground. A major motivation for microwave research in the 1970s and 1980s was to develop a solar power satellite. Conceived in 1968 by Peter Glaser, this would harvest energy from sunlight using solar cells and beam it down to Earth as microwaves to huge rectennas, which would convert it to electrical energy on the electric power grid. In landmark 1975 experiments as technical director of a JPL/Raytheon program, Brown demonstrated long-range transmission by beaming 475 W of microwave power to a rectenna a mile away, with a microwave to DC conversion efficiency of 54%. At NASA's Jet Propulsion Laboratory, he and Robert Dickinson transmitted 30 kW DC output power across 1.5 km with 2.38 GHz microwaves from a 26 m dish to a 7.3 x 3.5 m rectenna array. The incident-RF to DC conversion efficiency of the rectenna was 80%. In 1983 Japan launched the Microwave Ionosphere Nonlinear Interaction Experiment (MINIX), a rocket experiment to test transmission of high power microwaves through the ionosphere.
In recent years a focus of research has been the development of wireless-powered drone aircraft, which began in 1959 with the Dept. of Defense's RAMP (Raytheon Airborne Microwave Platform) project which sponsored Brown's research. In 1987 Canada's Communications Research Center developed a small prototype airplane called Stationary High Altitude Relay Platform (SHARP) to relay telecommunication data between points on earth similar to a communications satellite. Powered by a rectenna, it could fly at 13 miles (21 km) altitude and stay aloft for months. In 1992 a team at Kyoto University built a more advanced craft called MILAX (MIcrowave Lifted Airplane eXperiment).
In 2003 NASA flew the first laser powered aircraft. The small model plane's motor was powered by electricity generated by photocells from a beam of infrared light from a ground-based laser, while a control system kept the laser pointed at the plane.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
Books and articles.
<templatestyles src="Refbegin/styles.css" />
Patents.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "M"
},
{
"math_id": 1,
"text": "D_\\text{range}"
},
{
"math_id": 2,
"text": "k\\; =\\; M/\\sqrt{L_1 L_2}"
},
{
"math_id": 3,
"text": "L1"
},
{
"math_id": 4,
"text": "L2"
},
{
"math_id": 5,
"text": "k = 1"
},
{
"math_id": 6,
"text": "k"
},
{
"math_id": 7,
"text": "k^2"
},
{
"math_id": 8,
"text": "D_\\text{ant}"
},
{
"math_id": 9,
"text": "k<k_{crit}"
},
{
"math_id": 10,
"text": "k=k_{crit}"
},
{
"math_id": 11,
"text": "k>k_{crit}"
},
{
"math_id": 12,
"text": "p"
}
] |
https://en.wikipedia.org/wiki?curid=570662
|
57073854
|
Rhenium disulfide
|
<templatestyles src="Chembox/styles.css"/>
Chemical compound
Rhenium disulfide is an inorganic compound of rhenium and sulfur with the formula ReS2. It has a layered structure where atoms are strongly bonded within each layer. The layers are held together by weak van der Waals bonds, and can be easily peeled off from the bulk material.
Production.
ReS2 is found in nature as the mineral rheniite. It can be synthesized from the reaction between rhenium and sulfur at 1000 °C, or the decomposition of rhenium(VII) sulfide at 1100 °C:
Re + 2 S → ReS2
Re2S7 → 2 ReS2 + 3 S
Nanostructured ReS2 can usually be achieved through mechanical exfoliation, chemical vapor deposition (CVD), and chemical and liquid exfoliation. Larger crystals can be grown with the assistance of a liquid carbonate flux at high pressure. It is widely used in electronic and optoelectronic devices, energy storage, and photocatalytic and electrocatalytic reactions.
Properties.
It is a two-dimensional (2D) group VII transition metal dichalcogenide (TMD). ReS2 was isolated down to monolayers, which are only one unit cell in thickness, for the first time in 2014. These monolayers have shown layer-independent electrical, optical, and vibrational properties much different from those of other TMDs.
Structure.
Bulk ReS2 has a layered structure and a platelet-like habit. Different crystal structures have been proposed for ReS2 based on single-crystal X-ray diffraction studies. While all authors agree that the lattice is triclinic, the reported cell parameters and atomic arrangements differ slightly. The earliest work describes ReS2 in a triclinic unit cell (sp. gr. Pformula_0, a = 0.6455 nm, b = 0.6362 nm, c = 0.6401 nm, α = 105.04°, β = 91.60°, γ = 118.97°) as a distorted variant of the CdCl2 prototype (1T structure, trigonal space group Rformula_1m). In comparison with the ideal octahedral coordination of the metal atoms in CdCl2, the Re atoms in ReS2 are displaced from the centers of the surrounding S6 octahedra and form Re4 clusters that are linked into chains along the b direction. A later study proposed a more accurate description of the crystal structure. It reports a different triclinic cell (sp. gr. Pformula_0, a = 0.6352 nm, b = 0.6446 nm, c = 1.2779 nm, α = 91.51°, β = 105.17°, γ = 118.97°) with the doubled c parameter and swapped a and b, α and β. There are two layers in this unit cell, related by symmetry centers, and the chains of clusters run along the a axis. Each layer forms parallelogram-shaped connected clusters with Re-Re distances of ca. 0.27-0.28 nm within a cluster and ca. 0.29 nm between clusters. There is one more structure description of ReS2, published in yet another triclinic cell (sp. gr. Pformula_0, a = 0.6417 nm, b = 0.6510 nm, c = 0.6461 nm, α = 121.10°, β = 88.38°, γ = 106.47°), in which only one layer is present and the centers of symmetry are in the Re layer. The current consensus is that the latter work might have overlooked the doubling of the c parameter captured in the earlier study.
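Since several triclinic cells are quoted above, a quick consistency check is to compare their volumes using the standard triclinic cell-volume formula; the sketch below does this for the first two reported cells and shows that the second is roughly twice the first, consistent with the doubled c parameter.

```python
import math

def triclinic_volume(a, b, c, alpha_deg, beta_deg, gamma_deg):
    """V = abc*sqrt(1 - cos^2(a) - cos^2(b) - cos^2(g) + 2*cos(a)*cos(b)*cos(g))."""
    ca, cb, cg = (math.cos(math.radians(x)) for x in (alpha_deg, beta_deg, gamma_deg))
    return a * b * c * math.sqrt(1.0 - ca*ca - cb*cb - cg*cg + 2.0*ca*cb*cg)

# Cell parameters quoted above (lengths in nm).
v_single = triclinic_volume(0.6455, 0.6362, 0.6401, 105.04, 91.60, 118.97)
v_double = triclinic_volume(0.6352, 0.6446, 1.2779, 91.51, 105.17, 118.97)
print(f"single-layer cell: {v_single:.3f} nm^3")   # ~0.218 nm^3
print(f"double-layer cell: {v_double:.3f} nm^3")   # ~0.434 nm^3, roughly double the first
```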
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\bar{1}"
},
{
"math_id": 1,
"text": "\\bar{3}"
}
] |
https://en.wikipedia.org/wiki?curid=57073854
|
5707590
|
Étale topology
|
Type of Grothendieck topology on the category of schemes
In algebraic geometry, the étale topology is a Grothendieck topology on the category of schemes which has properties similar to the Euclidean topology, but unlike the Euclidean topology, it is also defined in positive characteristic. The étale topology was originally introduced by Alexander Grothendieck to define étale cohomology, and this is still the étale topology's most well-known use.
Definitions.
For any scheme "X", let Ét("X") be the category of all étale morphisms from a scheme to "X". This is the analog of the category of open subsets of "X" (that is, the category whose objects are varieties and whose morphisms are open immersions). Its objects can be informally thought of as étale open subsets of "X". The intersection of two objects corresponds to their fiber product over "X". Ét("X") is a large category, meaning that its objects do not form a set.
An étale presheaf on "X" is a contravariant functor from Ét("X") to the category of sets. A presheaf "F" is called an étale sheaf if it satisfies the analog of the usual gluing condition for sheaves on topological spaces. That is, "F" is an étale sheaf if and only if the following condition is true. Suppose that "U" → "X" is an object of Ét("X") and that "U""i" → "U" is a jointly surjective family of étale morphisms over "X". For each "i", choose a section "x""i" of "F" over "U""i". The projection map "U""i" ×"U" "U""j" → "U""i", which is loosely speaking the inclusion of the intersection of "U""i" and "U""j" in "U""i", induces a restriction map "F"("U""i") → "F"("U""i" ×"U" "U""j"). If for all "i" and "j" the restrictions of "x""i" and "x""j" to "U""i" ×"U" "U""j" are equal, then there must exist a unique section "x" of "F" over "U" which restricts to "x""i" for all "i".
Suppose that "X" is a Noetherian scheme. An abelian étale sheaf "F" on "X" is called finite locally constant if it is a representable functor which can be represented by an étale cover of "X". It is called constructible if "X" can be covered by a finite family of subschemes on each of which the restriction of "F" is finite locally constant. It is called torsion if "F"("U") is a torsion group for all étale covers "U" of "X". Finite locally constant sheaves are constructible, and constructible sheaves are torsion. Every torsion sheaf is a filtered inductive limit of constructible sheaves.
Grothendieck originally introduced the machinery of Grothendieck topologies and topoi to define the étale topology. In this language, the definition of the étale topology is succinct but abstract: It is the topology generated by the pretopology whose covering families are jointly surjective families of étale morphisms. The small étale site of X is the category "O"("X"ét) whose objects are schemes "U" with a fixed étale morphism "U" → "X". The morphisms are morphisms of schemes compatible with the fixed maps to "X". The big étale site of X is the category Ét/"X", that is, the category of schemes with a fixed map to "X", considered with the étale topology.
The étale topology can be defined using slightly less data. First, notice that the étale topology is finer than the Zariski topology. Consequently, to define an étale cover of a scheme "X", it suffices to first cover "X" by open affine subschemes, that is, to take a Zariski cover, and then to define an étale cover of an affine scheme. An étale cover of an affine scheme "X" can be defined as a jointly surjective family {"u""α" : "X""α" → "X"} such that the set of all "α" is finite, each "X""α" is affine, and each "u""α" is étale. Then an étale cover of "X" is a family {"u""α" : "X""α" → "X"} which becomes an étale cover after base changing to any open affine subscheme of "X".
Local rings.
Let "X" be a scheme with its étale topology, and fix a point "x" of "X". In the Zariski topology, the stalk of "X" at "x" is computed by taking a direct limit of the sections of the structure sheaf over all the Zariski open neighborhoods of "x". In the étale topology, there are strictly more open neighborhoods of "x", so the correct analog of the local ring at "x" is formed by taking the limit over a strictly larger family. The correct analog of the local ring at "x" for the étale topology turns out to be the strict henselization of the local ring formula_0. It is usually denoted formula_1.
|
[
{
"math_id": 0,
"text": "\\mathcal{O}_{X, x}"
},
{
"math_id": 1,
"text": "\\mathcal{O}_{X, x}^\\text{sh}"
},
{
"math_id": 2,
"text": "U \\to X"
},
{
"math_id": 3,
"text": "\\mathbb{G}_m(U) = \\mathcal{O}_U(U)^{\\times}"
},
{
"math_id": 4,
"text": "U \\mapsto \\mathbb{G}_m(U)"
},
{
"math_id": 5,
"text": "\\operatorname{Spec}_X (\\mathcal{O}_X[t, t^{-1}])"
}
] |
https://en.wikipedia.org/wiki?curid=5707590
|
57078824
|
Earth section paths
|
Plane curved by the intersection of an earth ellipsoid and a plane
Earth section paths are plane curves defined by the intersection of an earth ellipsoid and a plane (ellipsoid plane sections). Common examples include the "great ellipse" (containing the center of the ellipsoid) and normal sections (containing an ellipsoid normal direction). Earth section paths are useful as approximate solutions for geodetic problems, the direct and inverse calculation of geographic distances. The rigorous solution of geodetic problems involves skew curves known as "geodesics".
Inverse problem.
The inverse problem for earth sections is: given two points, formula_0 and formula_1 on the surface of the reference ellipsoid, find the length, formula_2, of the short arc of a spheroid section from formula_0 to formula_1 and also find the departure and arrival azimuths (angle from true north) of that curve, formula_3 and formula_4. The figure to the right illustrates the notation used here. Let formula_5 have geodetic latitude formula_6 and longitude formula_7 ("k"=1,2). This problem is best solved using analytic geometry in earth-centered, earth-fixed (ECEF) Cartesian coordinates.
Let formula_8 and formula_9 be the ECEF coordinates of the two points, computed using the geodetic to ECEF transformation discussed here.
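A minimal sketch of that conversion, assuming WGS84 constants and points on the ellipsoid surface (h = 0), might look like the following; the function and variable names are illustrative, not taken from any cited implementation.

```python
import math

A_WGS84 = 6378137.0                 # semi-major axis (m)
F_WGS84 = 1.0 / 298.257223563       # flattening
E2 = F_WGS84 * (2.0 - F_WGS84)      # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h=0.0):
    """Standard geodetic-to-ECEF conversion for a point at height h above the ellipsoid."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    sin_lat, cos_lat = math.sin(lat), math.cos(lat)
    N = A_WGS84 / math.sqrt(1.0 - E2 * sin_lat**2)   # prime-vertical radius of curvature
    x = (N + h) * cos_lat * math.cos(lon)
    y = (N + h) * cos_lat * math.sin(lon)
    z = (N * (1.0 - E2) + h) * sin_lat
    return (x, y, z)

R1 = geodetic_to_ecef(40.64130, -73.77810)   # New York, used in the examples below
R2 = geodetic_to_ecef(49.00970, 2.54800)     # Paris
```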
Section plane.
To define the section plane select any third point formula_10 not on the line from formula_11 to formula_12. Choosing formula_10 to be on the surface normal at formula_0 will define the normal section at formula_0. If formula_10 is the origin then the earth section is the great ellipse. (The origin would be co-linear with 2 antipodal points so a different point must be used in that case). Since there are infinitely many choices for formula_10, the above problem is really a class of problems (one for each plane). Let formula_10 be given. To put the equation of the plane into the standard form, formula_13, where formula_14, requires the components of a unit vector, formula_15, normal to the section plane. These components may be computed as follows: The vector from formula_10 to formula_11 is formula_16, and the vector from formula_11 to formula_12 is formula_17. Therefore, formula_18), where formula_19 is the unit vector in the direction of formula_20. The orientation convention used here is that formula_21 points to the left of the path. If this is not the case then redefine formula_22. Finally, the parameter d for the plane may be computed using the dot product of formula_21 with a vector from the origin to any point on the plane, such as formula_11, i.e. formula_23. The equation of the plane (in vector form) is thus formula_24, where formula_25 is the position vector of formula_26.
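A sketch of this construction is given below. The cross-product order and the orientation handling are assumptions consistent with the left-of-path convention described above; the normal should be negated if it points to the right of the path.

```python
import math

def unit(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def section_plane(R0, R1, R2):
    """Plane through R0, R1, R2: returns a unit normal n_hat and offset d with n_hat . r = d."""
    v01 = tuple(b - a for a, b in zip(R0, R1))   # vector from R0 to R1
    v12 = tuple(b - a for a, b in zip(R1, R2))   # vector from R1 to R2
    n_hat = unit(cross(v01, v12))                # assumed order; negate if it points right of the path
    d = dot(n_hat, R1)
    return n_hat, d

# For the great ellipse take R0 at the origin; for a normal section take R0 on the surface normal at P1.
# n_hat, d = section_plane((0.0, 0.0, 0.0), R1, R2)   # R1, R2: ECEF vectors of the endpoints
```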
Azimuth.
Examination of the ENU to ECEF transformation reveals that the ECEF coordinates of a unit vector pointing east at any point on the ellipsoid are: formula_27, a unit vector pointing north is formula_28, and a unit vector pointing up is formula_29. A vector tangent to the path is:
formula_30 so the east component of formula_31 is formula_32, and the north component is formula_33. Therefore, the azimuth may be obtained from a two-argument arctangent function, formula_34. Use this method at both formula_0 and formula_1 to get formula_3 and formula_4.
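The following sketch implements this azimuth computation. It assumes the tangent direction is obtained as the cross product of the plane normal with the local up vector, which is consistent with the left-pointing normal convention above; it is an illustration rather than the referenced derivation.

```python
import math

def enu_basis(lat_deg, lon_deg):
    """ECEF components of the local east, north and up unit vectors on the ellipsoid."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    east  = (-math.sin(lon),                math.cos(lon),               0.0)
    north = (-math.sin(lat)*math.cos(lon), -math.sin(lat)*math.sin(lon), math.cos(lat))
    up    = ( math.cos(lat)*math.cos(lon),  math.cos(lat)*math.sin(lon), math.sin(lat))
    return east, north, up

def azimuth_deg(n_hat, lat_deg, lon_deg):
    """Azimuth of the section path at (lat, lon) for a section-plane unit normal n_hat (left of path)."""
    east, north, up = enu_basis(lat_deg, lon_deg)
    # Tangent direction: perpendicular to both the plane normal and the surface normal (t = n_hat x up).
    t = (n_hat[1]*up[2] - n_hat[2]*up[1],
         n_hat[2]*up[0] - n_hat[0]*up[2],
         n_hat[0]*up[1] - n_hat[1]*up[0])
    e_comp = sum(a * b for a, b in zip(t, east))
    n_comp = sum(a * b for a, b in zip(t, north))
    return math.degrees(math.atan2(e_comp, n_comp)) % 360.0

# alpha_1 = azimuth_deg(n_hat, 40.64130, -73.77810)   # departure azimuth at the first point
```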
Section ellipse.
The (non-trivial) intersection of a plane and ellipsoid is an ellipse. Therefore, the arc length, formula_2, on the section path from formula_0 to formula_1 is an elliptic integral that may be computed to any desired accuracy using a truncated series or numerical integration. Before this can be done the ellipse must be defined and the limits of integration computed.
Let the ellipsoid be given by formula_35, and let formula_36.
If formula_37 then the section is a horizontal circle of radius formula_38, which has no solution if formula_39.
If formula_40 then Gilbertson showed that the ECEF coordinates of the center of the ellipse are formula_41, where formula_42,
the semi-major axis is formula_43, in the direction formula_44, and the semi-minor axis is formula_45, in the direction formula_46, which has no solution if formula_47.
Arc Length.
The above referenced paper provides a derivation for an arc length formula involving the central angle and powers of formula_48 to compute the arc length to millimeter accuracy, where formula_49.
That arc length formula may be rearranged and put into the form:
formula_50, where
formula_51 and the coefficients are
formula_52
formula_53
formula_54
formula_55
To compute the central angle, let formula_56 be any point on the section ellipse and formula_57. Then formula_58 is a vector from the center of the ellipse to the point. The central angle formula_59 is the angle from the semi-major axis to formula_20. Letting formula_60, we have formula_61.
In this way we obtain formula_62 and formula_63.
On the other hand it's possible to use Meridian arc formulas in the more general case provided that the section ellipse parameters are used rather than the spheroid parameters. One such rapidly convergent series is given in Series in terms of the parametric latitude. If we use formula_64 to denote the spheroid eccentricity, i.e. formula_65, then formula_66 ≤ formula_67 ≅ . Similarly the third flattening of the section ellipse is bounded by the corresponding value for the spheroid, and for the spheroid we have formula_68 ≅ , and formula_69 ≅ . Therefore it may suffice to ignore terms beyond formula_70 in the parametric latitude series.
To apply formula_71 in the current context requires converting the central angle to the parametric angle using formula_72, and using the section ellipse third flattening. Whichever method is used, care must be taken when using formula_62 & formula_63 or formula_73 & formula_74 to ensure that the shorter arc connecting the 2 points is used.
Direct problem.
The direct problem is: given formula_75, the distance formula_2, and the departure azimuth formula_3, find formula_76 and the arrival azimuth formula_4.
Section plane.
The answer to this problem depends on the choice of formula_77, i.e. on the type of section. Observe that formula_77 must not be in span{formula_78} (otherwise the plane would be tangent to the earth at formula_75, so no path would result). Having made such a choice and considered the orientation, proceed as follows.
Construct the tangent vector at formula_75, formula_79, where formula_80 and formula_81 are unit vectors pointing north and east (respectively) at formula_75. The normal vector formula_82), together with formula_83 defines the plane. In other words, the tangent takes the place of the chord since the destination is unknown.
Locate arrival point.
This is a 2-d problem in span{formula_84}, which will be solved with the help of the arc length formula above. If the arc length formula_2 is given, then the problem is to find the corresponding change in the central angle formula_85, so that formula_86 and the position can be calculated. Assuming that we have a series that gives formula_87, what we seek now is formula_88. The inverse of the central angle arc length series above may be found on page 8a of Rapp, Vol. 1, who credits Ganshin. An alternative to using the inverse series is using Newton's method of successive approximations to formula_85. The inverse meridian problem for the ellipsoid provides the inverse to Bessel's arc length series in terms of the parametric angle. Before the inverse series can be used, the parametric angle series must be used to compute the arc length from the semi-major axis to formula_0, formula_89. Once formula_90 is known, apply the inverse formula to obtain formula_91, where formula_92. Rectangular coordinates in the section plane are formula_93. So an ECEF vector may be computed using formula_94. Finally, calculate geographic coordinates via formula_95 using Bowring's 1985 algorithm, or the algorithm here.
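A sketch of the Newton alternative is shown below. It treats the arc-length series as a black-box callable and uses a numerical derivative; the series itself, the starting guess and the tolerance are assumptions left to the caller.

```python
def solve_central_angle(arc_length, s_target, theta0, tol=1e-12, max_iter=20):
    """Newton iteration for the central angle theta with arc_length(theta) = s_target.

    arc_length: callable returning arc length from the semi-major axis to central angle theta
    s_target:   arc length from the semi-major axis to the unknown arrival point
    theta0:     starting guess, e.g. the departure angle plus distance over the semi-major axis
    """
    theta = theta0
    for _ in range(max_iter):
        h = 1e-7   # numerical ds/dtheta; an analytic derivative could be substituted
        ds = (arc_length(theta + h) - arc_length(theta - h)) / (2.0 * h)
        step = (arc_length(theta) - s_target) / ds
        theta -= step
        if abs(step) < tol:
            break
    return theta
```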
Azimuth.
Azimuth may be obtained by the same method as the indirect problem: formula_96 and formula_97.
Examples.
The great ellipse.
The great ellipse is the curve formed by intersecting the ellipsoid with a plane through its center. Therefore, to use the method above, just let formula_10 be the origin, so that formula_98 (the position vector of formula_11). This method avoids the esoteric and sometimes ambiguous formulas of spherical trigonometry, and provides an alternative to the formulas of Bowring. The shortest path between two points on a spheroid is known as a geodesic. Such paths are developed using differential geometry. The equator and meridians are great ellipses that are also geodesics. The maximum difference in length between a great ellipse and the corresponding geodesic of length 5,000 nautical miles is about 10.5 meters. The lateral deviation between them may be as large as 3.7 nautical miles. A normal section connecting the two points will be closer to the geodesic than the great ellipse, unless the path touches the equator.
On the WGS84 ellipsoid, the results for the great elliptic arc from New York, formula_99 = 40.64130°, formula_100 = -73.77810°
to Paris, formula_101 = 49.00970°, formula_102= 2.54800° are:
formula_3 = 53.596810°, formula_4 = 111.537138° and formula_2 = 5849159.753 (m) = 3158.293603 (nm). The corresponding numbers for the geodesic are:
formula_3 = 53.511007°, formula_4 = 111.626714° and formula_2 = 5849157.543 (m) = 3158.292410 (nm).
To illustrate the dependence on section type for the direct problem, let the departure azimuth and trip distance be those of the geodesic above, and use the great ellipse to define the direct problem. In this case the arrival point is
formula_101 = 49.073057°, formula_102= 2.586154°, which is about 4.1 nm from the arrival point in Paris defined above. Of course using the departure azimuth and distance from the great ellipse indirect problem will properly locate the destination, formula_101 = 49.00970°, formula_102= 2.54800°, and the arrival azimuth formula_4 = 111.537138°.
Normal sections.
A normal section at formula_0 is determined by letting formula_103 (the surface normal at formula_0). Another normal section, known as the reciprocal normal section, results from using the surface normal at formula_1. Unless the two points are both on the same parallel or the same meridian, the reciprocal normal section will be a different path than the normal section. The above approach provides an alternative to that of others, such as Bowring. The importance of normal sections in surveying as well as a discussion of the meaning of the term line in such a context is given in the paper by Deakin, Sheppard and Ross.
On the WGS84 ellipsoid, the results for the normal section from New York, formula_99 = 40.64130°, formula_100 = -73.77810°
to Paris, formula_101 = 49.00970°, formula_102= 2.54800° are:
formula_3 = 53.521396°, formula_4 = 111.612516° and formula_2 = 5849157.595 (m) = 3158.292438 (nm).
The results for the reciprocal normal section from New York to Paris are:
formula_3 = 53.509422°, formula_4 = 111.624483° and formula_2 = 5849157.545 (m) = 3158.292411 (nm).
The maximum difference in length between a normal section and the corresponding geodesic of length 5,000 nautical miles is about 6.0 meters. The lateral deviation between them may be as large as 2.8 nautical miles.
To illustrate the dependence on section type for the direct problem, let the departure azimuth and trip distance be those of the geodesic above, and use the surface normal at NY to define the direct problem. In this case the arrival point is
formula_101 = 49.017378°, formula_102= 2.552626°, which is about 1/2 nm from the arrival point defined above. Of course, using the departure azimuth and distance from the normal section indirect problem will properly locate the destination in Paris.
Presumably the direct problem is used when the arrival point is unknown, yet it is possible to use whatever vector formula_77 one pleases. For example, using the surface normal at Paris, formula_104, results in an arrival point of formula_101 = 49.007778°, formula_102= 2.546842°, which is about 1/8 nm from the arrival point defined above. Using the surface normal at Reykjavik (while still using the departure azimuth and trip distance of the geodesic to Paris) will have you arriving about 347 nm from Paris, while the normal at Zürich brings you to within 5.5 nm.
The search for a section that's closer to the geodesic led to the next two examples.
The mean normal section.
The mean normal section from formula_0 to formula_1 is determined by letting formula_105. This is a good approximation to the geodesic from formula_0 to formula_1 for aviation or sailing. The maximum difference in length between the mean normal section and the corresponding geodesic of length 5,000 nautical miles is about 0.5 meters. The lateral deviation between them is no more than about 0.8 nautical miles. For paths of length 1000 nautical miles the length error is less than a millimeter, and the worst case lateral deviation is about 4.4 meters.
Continuing the example from New York to Paris on WGS84 gives the following results for the mean normal section:
formula_3 = 53.515409°, formula_4 = 111.618500° and formula_2 = 5849157.560 (m) = 3158.292419 (nm).
The midpoint normal section.
The midpoint normal section from formula_0 to formula_1 is determined by letting formula_77 = the surface normal at the midpoint of the geodesic from formula_0 to formula_1. This path is only slightly closer to the geodesic than the mean normal section. The maximum difference in length between a midpoint normal section and the corresponding geodesic of length 5,000 nautical miles is about 0.3 meters. The worst case lateral deviation between them is about 0.3 nautical miles.
Finishing the example from New York to Paris on WGS84 gives the following results for the geodesic midpoint normal section:
formula_3 = 53.506207°, formula_4 = 111.627697° and formula_2 = 5849157.545 (m) = 3158.292411 (nm).
Discussion.
All of the section paths used in the charts to the right were defined using the indirect method above. In the third and fourth charts the terminal point was defined using the direct algorithm for the geodesic with the given distance and initial azimuth. On each of the geodesics some points were selected, the nearest point on the section plane was located by vector projection, and the distance between the two points computed. This distance is described as the lateral deviation from the geodesic, or briefly geodesic deviation, and is displayed in the charts on the right. The alternative of finding the corresponding point on the section path and computing geodesic distances would produce slightly different results.
The first chart is typical of mid-latitude cases where the great ellipse is the outlier. The normal section associated with the point farthest from the equator is a good choice for these cases.
The second example is longer and is typical of equator crossing cases, where the great ellipse beats the normal sections. However, the two normal sections deviate on opposite sides of the geodesic, making the mean normal section a good choice here.
The third chart shows how the geodesic deviations vary with initial geodesic azimuth originating from 20 degrees north latitude. The worst case deviation for normal sections of 5000 nautical miles length is about 2.8 nm and occurs at initial geodesic azimuth of 132° from 18° north latitude (48° azimuth for south latitude).
The fourth chart is what the third chart looks like when departing from the equator. On the equator there are more symmetries, since sections at 90° and 270° azimuths are also geodesics. Consequently, the fourth chart shows only 7 distinct lines out of the 24 with 15-degree spacing. Specifically, the lines at azimuths 15, 75, 195 and 255 coincide, as do the lines at 105, 165, 285, and 345 on the other side, as the innermost lines (other than the geodesics). The next farthest coincident lines from the four geodesic lines are at azimuths 30, 60, 210, and 240 on one side and 120, 150, 300, and 330 on the other side. The outermost lines are at azimuths 45 and 225 on one side and 135 and 315 on the other. As the departure point moves north, the lines at azimuths 90 and 270 are no longer geodesics, and other coincident lines separate and fan out until 18° latitude, where the maximum deviation is attained. Beyond this point the deviations contract like a Japanese fan as the initial point proceeds north, so that by 84° latitude the maximum deviation for normal sections is about 0.25 nm.
The midpoint normal section is (almost) always a good choice.
Intersections.
Let two section planes be given: formula_106 and formula_107. Assuming that the two planes are not parallel, the line of intersection lies on both planes, and is hence orthogonal to both normals, i.e. in the direction of formula_108 (there is no reason to normalize formula_109).
Since formula_110 and formula_111 are not collinear, formula_110, formula_111, formula_109 form a basis for formula_112. Therefore, there exist constants formula_113 and formula_114 such that the line of intersection of the two planes is given by formula_115, where t is an independent parameter.
Since this line is on both section planes, it satisfies both:
formula_116, and
formula_117.
Solving these equations for formula_118 and formula_119 gives
formula_120, and
formula_121.
Define the "dihedral angle", formula_122, by formula_123.
Then formula_124 , and formula_125.
On the intersection line we have formula_126, where formula_127.
Hence: formula_128, formula_129, and formula_130, where
formula_131, formula_132, and formula_133,
formula_134, for i=1,2, and formula_135.
To find the intersection of this line with the earth, plug the line equations into formula_136, to get
formula_137, where formula_138,
formula_139,
formula_140.
Therefore, the line intersects the earth at formula_141. If formula_142, then there is no intersection. If formula_143, then the line is tangent to the earth at formula_144 (i.e. the sections intersect at that single point).
Observe that formula_145 since formula_110 and formula_111 are not collinear. Plugging t into
formula_146, gives the points of intersection of the earth sections.
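The construction above translates directly into code. The following Python sketch builds the line of intersection of two section planes and solves the quadratic for the points where it meets the earth; the WGS84 semi-axes are an assumed choice of ellipsoid and the function and variable names are illustrative. With the prime-meridian plane formula_147, formula_148 it gives the ECEF points for the Greenwich-meridian example that follows (the conversion to geographic coordinates is not shown).

import numpy as np

a = 6378137.0          # WGS84 equatorial radius (m), assumed ellipsoid
b = 6356752.314245     # WGS84 polar radius (m)

def section_plane_intersections(N1, d1, N2, d2):
    """ECEF points where the sections N1.R = d1 and N2.R = d2 meet the earth.

    Follows the construction in the text: R = R0 + t*N3 with
    R0 = C1*N1 + C2*N2, then solve A t^2 + 2 B t + C = 0 for t.
    """
    N1 = np.asarray(N1, dtype=float)
    N2 = np.asarray(N2, dtype=float)
    N3 = np.cross(N1, N2)              # direction of the line of intersection
    cos_nu = np.dot(N1, N2)            # cosine of the dihedral angle
    sin2_nu = 1.0 - cos_nu**2
    C1 = (d1 - d2 * cos_nu) / sin2_nu
    C2 = (d2 - d1 * cos_nu) / sin2_nu
    R0 = C1 * N1 + C2 * N2             # a point on the line of intersection
    r = a * a / (b * b)
    A = N3[0]**2 + N3[1]**2 + r * N3[2]**2
    B = R0[0]*N3[0] + R0[1]*N3[1] + r * R0[2]*N3[2]
    C = R0[0]**2 + R0[1]**2 + r * R0[2]**2 - a * a
    disc = B * B - A * C
    if disc < 0:
        return []                      # the line misses the earth
    return [R0 + t * N3 for t in ((-B + np.sqrt(disc)) / A,
                                  (-B - np.sqrt(disc)) / A)]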
Example.
Find where a section from New York to Paris intersects the Greenwich meridian. The plane of the prime meridian may be described by formula_147 and formula_148. The results are as follows:
Extreme latitudes and longitudes.
The maximum (or minimum) latitude is where the section ellipse intersects a parallel at a single point. To set up the problem, let
formula_149, formula_150
be the given section plane. The parallel is
formula_151, formula_152, where formula_153 is to be determined so that there is only one intersection point.
Applying the intersection method above results in formula_154, formula_155,
formula_156, and formula_157, since formula_158.
The resulting linear equations become formula_159, formula_160, and formula_161, where
formula_162, formula_163, and formula_153 is to be determined. The resulting quadratic coefficients are
formula_164,
formula_165,
formula_166.
Therefore the intersection will result in only one solution if formula_143, but since formula_167 and formula_168, the critical equation becomes formula_169. This equation may be rearranged and put into the form formula_170, where
formula_171,
formula_172, and
formula_173.
Therefore, formula_174 provides the distances from the origin of the two desired parallel planes. Plugging formula_153 into formula_113 gives the values for formula_175 and formula_176. Recall that formula_177, so formula_178, formula_179 are the remaining coordinates of the intersections. The geographic coordinates may then be computed using the ECEF_to_Geo conversion.
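A sketch of the extreme-latitude computation above, again in Python with the WGS84 semi-axes assumed: it solves E z0^2 - 2 F z0 + G = 0 for the two tangent parallels and returns the ECEF points of maximum and minimum latitude. The ECEF_to_Geo conversion mentioned above is assumed to be available elsewhere and is not shown.

import numpy as np

a = 6378137.0          # WGS84 equatorial radius (m), assumed ellipsoid
b = 6356752.314245     # WGS84 polar radius (m)

def extreme_latitude_points(N, d):
    """ECEF points of maximum and minimum latitude on the section N.R = d."""
    l, m, n = N
    p2 = l * l + m * m                 # p^2 = 1 - n^2
    E = (a * a / (b * b)) * p2 + n * n
    F = n * d
    G = d * d - a * a * p2
    points = []
    for z0 in ((F + np.sqrt(F * F - E * G)) / E,
               (F - np.sqrt(F * F - E * G)) / E):
        C1 = (d - n * z0) / p2         # t = -B/A = 0, so x = x0 and y = y0
        points.append(np.array([C1 * l, C1 * m, z0]))
    return points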
The same method may be applied to meridians to find extreme longitudes, but the results are not easy to interpret due to the modular nature of longitude. However, the results can always be verified using the following approach.
The simpler approach is to compute the end points of the minor and major axes of the section ellipse using formula_180 and formula_181, and then convert to geographic coordinates. It may be worth mentioning here that the line of intersection of two planes consists of the set of fixed points, hence the rotation axis, of a coordinate rotation that maps one plane onto the other.
For the New York to Paris example the results are:
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "P_1"
},
{
"math_id": 1,
"text": "P_2"
},
{
"math_id": 2,
"text": "s_{12}"
},
{
"math_id": 3,
"text": "\\alpha_1"
},
{
"math_id": 4,
"text": "\\alpha_2"
},
{
"math_id": 5,
"text": "P_k"
},
{
"math_id": 6,
"text": "\\phi_k"
},
{
"math_id": 7,
"text": "\\lambda_k"
},
{
"math_id": 8,
"text": "R_1 = \\mathrm{ECEF}(P_1)"
},
{
"math_id": 9,
"text": "R_2 = \\mathrm{ECEF}(P_2)"
},
{
"math_id": 10,
"text": "R_0"
},
{
"math_id": 11,
"text": "R_1"
},
{
"math_id": 12,
"text": "R_2"
},
{
"math_id": 13,
"text": "lx + my + nz = d"
},
{
"math_id": 14,
"text": " l^2 + m^2 + n^2 = 1"
},
{
"math_id": 15,
"text": "\\mathbf{\\hat N} = (l, m, n)"
},
{
"math_id": 16,
"text": "\\mathbf{V_0} = \\mathbf{R_1} - \\mathbf{R_0}"
},
{
"math_id": 17,
"text": "\\mathbf{V_1} = \\mathbf{R_2} - \\mathbf{R_1}"
},
{
"math_id": 18,
"text": "\\mathbf{\\hat N} = \\mathrm{unit}(\\mathbf{V_0}\\times\\mathbf{V_1})"
},
{
"math_id": 19,
"text": "\\mathrm{unit}(\\mathbf{V})"
},
{
"math_id": 20,
"text": "\\mathbf{V}"
},
{
"math_id": 21,
"text": "\\mathbf{\\hat N}"
},
{
"math_id": 22,
"text": "\\mathbf{V_0} = -\\mathbf{V_0}"
},
{
"math_id": 23,
"text": "d = \\mathbf{\\hat N}\\cdot\\mathbf{R_1}"
},
{
"math_id": 24,
"text": "\\mathbf{\\hat N}\\cdot\\mathbf{R} = d"
},
{
"math_id": 25,
"text": "\\mathbf{R}"
},
{
"math_id": 26,
"text": "(x, y, z)"
},
{
"math_id": 27,
"text": "\\mathbf\\hat e = (-\\sin\\lambda,\\cos\\lambda,0)"
},
{
"math_id": 28,
"text": "\\mathbf\\hat n=(-\\sin\\phi\\cos\\lambda,-\\sin\\phi\\sin\\lambda,\\cos\\phi)"
},
{
"math_id": 29,
"text": "\\mathbf\\hat u = (\\cos\\phi\\cos\\lambda,\\cos\\phi\\sin\\lambda,\\sin\\phi)"
},
{
"math_id": 30,
"text": "\\mathbf{t} = \\mathbf\\hat N \\times \\mathbf{\\hat u}"
},
{
"math_id": 31,
"text": "\\mathbf{t}"
},
{
"math_id": 32,
"text": "\\mathbf{t}\\cdot\\mathbf{\\hat e}"
},
{
"math_id": 33,
"text": "\\mathbf{t}\\cdot\\mathbf{\\hat n}"
},
{
"math_id": 34,
"text": "\\alpha=\\operatorname{atan2}(\\mathbf{t}\\cdot\\mathbf{\\hat e},\\mathbf{t}\\cdot\\mathbf{\\hat n})"
},
{
"math_id": 35,
"text": "\\frac{x^2}{a^2}+\\frac{y^2}{a^2}+\\frac{z^2}{b^2} = 1"
},
{
"math_id": 36,
"text": " p=\\sqrt{l^2+m^2}"
},
{
"math_id": 37,
"text": "p=0"
},
{
"math_id": 38,
"text": "a\\sqrt{1-\\frac{d^2}{b^2}}"
},
{
"math_id": 39,
"text": "|d|>b"
},
{
"math_id": 40,
"text": "p>0"
},
{
"math_id": 41,
"text": "{R_c}=\\frac{d}{C}(la^2, ma^2, nb^2)"
},
{
"math_id": 42,
"text": "C = a^2 p^2 + b^2 n^2"
},
{
"math_id": 43,
"text": "a^*=a\\sqrt{1-\\frac{d^2}{C}}"
},
{
"math_id": 44,
"text": "\\mathbf{\\hat i^*} = \\left(\\frac{m}{p}, \\frac{-l}{p}, 0\\right)"
},
{
"math_id": 45,
"text": "b^*=\\frac{b}{\\sqrt{C}}a^*"
},
{
"math_id": 46,
"text": "\\mathbf{\\hat j^*} = \\left(\\frac{ln}{p}, \\frac{mn}{p}, -p\\right)"
},
{
"math_id": 47,
"text": "|d|>\\sqrt{C}"
},
{
"math_id": 48,
"text": "e^2"
},
{
"math_id": 49,
"text": "e^2 = 1 - \\left(\\frac{b^*}{a^*}\\right)^2"
},
{
"math_id": 50,
"text": " s_{12} = s(\\theta_2) - s(\\theta_1)"
},
{
"math_id": 51,
"text": " s(\\theta) = b^*({C_0}\\theta + {C_2} \\sin(2\\theta) + {C_4}\\sin(4\\theta) + {C_6}\\sin(6\\theta))"
},
{
"math_id": 52,
"text": " C_0 = 1.0 + e^2 (1/4 + 13 e^2/64 + 45 e^4/256 + 2577 e^6/16384) "
},
{
"math_id": 53,
"text": " C_2 = e^2 (1/8 + 3 e^2/32 + 95 e^4/1024 + 385 e^6/4096) "
},
{
"math_id": 54,
"text": " C_4 = -e^4 (1/256 + 5 e^2/1024 + 19 e^4/16384) "
},
{
"math_id": 55,
"text": " C_6 = -e^6 (15/3072 + 35 e^2/4096) "
},
{
"math_id": 56,
"text": "P"
},
{
"math_id": 57,
"text": "R = \\mathrm{ECEF}(P)"
},
{
"math_id": 58,
"text": "\\mathbf{V} = \\mathbf{R} - \\mathbf{R_c}"
},
{
"math_id": 59,
"text": " \\theta"
},
{
"math_id": 60,
"text": "\\mathbf{\\hat V} = \\mathrm{unit}(\\mathbf{V})"
},
{
"math_id": 61,
"text": "\\theta = \\operatorname{atan2}(\\mathbf{\\hat V} \\cdot \\mathbf{\\hat j^*}, \\mathbf{\\hat V} \\cdot \\mathbf{\\hat i^*})"
},
{
"math_id": 62,
"text": "\\theta_1"
},
{
"math_id": 63,
"text": "\\theta_2"
},
{
"math_id": 64,
"text": "\\varepsilon"
},
{
"math_id": 65,
"text": "\\varepsilon^2 = 1 - \\left(\\frac{b}{a}\\right)^2"
},
{
"math_id": 66,
"text": "e^8"
},
{
"math_id": 67,
"text": "\\varepsilon^8"
},
{
"math_id": 68,
"text": "n^3"
},
{
"math_id": 69,
"text": "n^4"
},
{
"math_id": 70,
"text": "B_6"
},
{
"math_id": 71,
"text": "s(\\beta)=\\frac{a^*+b^*}2(B_0\\beta+B_2\\sin 2\\beta+B_4\\sin4\\beta+B_6\\sin6\\beta)"
},
{
"math_id": 72,
"text": "\\beta = \\tan^{-1}\\left(\\tan\\theta/((1 - f)\\right)"
},
{
"math_id": 73,
"text": "\\beta_1"
},
{
"math_id": 74,
"text": "\\beta_2"
},
{
"math_id": 75,
"text": "{P_1}"
},
{
"math_id": 76,
"text": "{P_2}"
},
{
"math_id": 77,
"text": "\\mathbf{V_0}"
},
{
"math_id": 78,
"text": "\\mathbf\\hat n_1 ,\\mathbf{\\hat e_1}"
},
{
"math_id": 79,
"text": "\\mathbf\\hat t_1= \\mathbf\\hat n_1\\cos{\\alpha_1} + \\mathbf{\\hat e_1} \\sin{\\alpha_1}"
},
{
"math_id": 80,
"text": "\\mathbf\\hat n_1"
},
{
"math_id": 81,
"text": "\\mathbf{\\hat e_1}"
},
{
"math_id": 82,
"text": "\\mathbf\\hat N = \\mathrm{unit}(\\mathbf{V_0}\\times\\mathbf\\hat t_1"
},
{
"math_id": 83,
"text": "\\mathbf{P_1}"
},
{
"math_id": 84,
"text": "\\mathbf\\hat i^* , \\mathbf\\hat j^*"
},
{
"math_id": 85,
"text": "\\theta_{12}"
},
{
"math_id": 86,
"text": "\\theta_2 = \\theta_1 + \\theta_{12}"
},
{
"math_id": 87,
"text": "s = s(\\theta )"
},
{
"math_id": 88,
"text": "\\theta_2 = s^{-1}(s_1+s_{12})"
},
{
"math_id": 89,
"text": "s_1 = s(\\beta_1) = \\frac{a^*+b^*}2(B_0\\beta_1 + B_2\\sin 2\\beta_1 + B_4\\sin4\\beta_1 + B_6\\sin6\\beta_1)"
},
{
"math_id": 90,
"text": "s_1"
},
{
"math_id": 91,
"text": "\\beta_2 = \\beta(s_1+s_{12}) = \\mu_2 + B'_2\\sin2\\mu_2 + B'_4\\sin4\\mu_2 + B'_6\\sin6\\mu_2"
},
{
"math_id": 92,
"text": "\\mu_2 = 2(s_1+s_{12})/(B_0(a^*+b^*))"
},
{
"math_id": 93,
"text": "x_2 = a^*\\cos\\beta_2, y_2 = b^*\\sin\\beta_2"
},
{
"math_id": 94,
"text": "\\mathbf{V_2} = \\mathbf{R_c} + (x_2\\mathbf{\\hat i^*}+ y_2\\mathbf{\\hat j^*})"
},
{
"math_id": 95,
"text": "P_2 = \\mathrm{Geo}(V_2)"
},
{
"math_id": 96,
"text": "\\mathbf{t_2} = \\mathbf\\hat N \\times \\mathbf{\\hat u_2}"
},
{
"math_id": 97,
"text": "{\\alpha_2}=\\operatorname{atan2}(\\mathbf{t_2}\\cdot\\mathbf{\\hat e_2}, \\mathbf{t_2}\\cdot\\mathbf{\\hat n_2})"
},
{
"math_id": 98,
"text": "\\mathbf{V_0} = \\mathbf{R_1}"
},
{
"math_id": 99,
"text": "\\phi_1"
},
{
"math_id": 100,
"text": "\\lambda_1"
},
{
"math_id": 101,
"text": "\\phi_2"
},
{
"math_id": 102,
"text": "\\lambda_2 "
},
{
"math_id": 103,
"text": "\\mathbf{V_0} = \\mathbf\\hat u_1"
},
{
"math_id": 104,
"text": "\\mathbf\\hat u_2"
},
{
"math_id": 105,
"text": "\\mathbf{V_0} = 0.5(\\mathbf\\hat u_1+\\mathbf\\hat u_2)"
},
{
"math_id": 106,
"text": "\\mathbf\\hat N_1\\cdot\\mathbf{R} = d_1"
},
{
"math_id": 107,
"text": "\\mathbf\\hat N_2\\cdot\\mathbf{R} = d_2"
},
{
"math_id": 108,
"text": "\\mathbf{N_3} = \\mathbf\\hat N_1 \\times \\mathbf\\hat N_2"
},
{
"math_id": 109,
"text": "\\mathbf{N_3}"
},
{
"math_id": 110,
"text": "\\mathbf\\hat N_1"
},
{
"math_id": 111,
"text": "\\mathbf\\hat N_2"
},
{
"math_id": 112,
"text": "\\mathbb{R}^3"
},
{
"math_id": 113,
"text": "C_1"
},
{
"math_id": 114,
"text": "C_2"
},
{
"math_id": 115,
"text": "R = C_1\\mathbf\\hat N_1 + C_2\\mathbf\\hat N_2 + t\\mathbf{N_3}"
},
{
"math_id": 116,
"text": "C_1 + C_2(\\mathbf\\hat N_1\\cdot\\mathbf\\hat N_2) = d_1"
},
{
"math_id": 117,
"text": "C_1(\\mathbf\\hat N_1\\cdot\\mathbf\\hat N_2) + C_2 = d_2"
},
{
"math_id": 118,
"text": "{C_1}"
},
{
"math_id": 119,
"text": "{C_2}"
},
{
"math_id": 120,
"text": "C_1 [1 - (\\mathbf\\hat N_1\\cdot\\mathbf\\hat N_2)^2] = d_1 - d_2(\\mathbf\\hat N_1\\cdot\\mathbf\\hat N_2)"
},
{
"math_id": 121,
"text": "C_2 [1 - (\\mathbf\\hat N_1\\cdot\\mathbf\\hat N_2)^2] = d_2 - d_1(\\mathbf\\hat N_1\\cdot\\mathbf\\hat N_2)"
},
{
"math_id": 122,
"text": "\\nu"
},
{
"math_id": 123,
"text": "\\cos\\nu = {\\mathbf\\hat N_1}\\cdot{\\mathbf\\hat N_2}"
},
{
"math_id": 124,
"text": "C_1 = \\frac{(d_1- d_2 \\cos\\nu)}{\\sin^2\\nu}"
},
{
"math_id": 125,
"text": "C_2 = \\frac{(d_2- d_1 \\cos\\nu)}{\\sin^2 \\nu}"
},
{
"math_id": 126,
"text": "\\mathbf{R} =\\mathbf{R_0} + t\\mathbf{N_3}"
},
{
"math_id": 127,
"text": "\\mathbf{R_0} = C_1\\mathbf\\hat N_1 + C_2\\mathbf\\hat N_2"
},
{
"math_id": 128,
"text": "x = x_0 + tl_3"
},
{
"math_id": 129,
"text": "y = y_0 + tm_3"
},
{
"math_id": 130,
"text": "z = z_0 + tn_3"
},
{
"math_id": 131,
"text": "x_0= C_1l_1 + C_2l_2"
},
{
"math_id": 132,
"text": "y_0 = C_1m_1 + C_2m_2"
},
{
"math_id": 133,
"text": "z_0 = C_1n_1 + C_2n_2"
},
{
"math_id": 134,
"text": "\\mathbf\\hat N_i = (l_i,m_i,n_i)"
},
{
"math_id": 135,
"text": "\\mathbf{N_3} = (l_3,m_3,n_3)"
},
{
"math_id": 136,
"text": "\\frac{x^2}{a^2} + \\frac{y^2}{a^2}+\\frac{z^2}{b^2} = 1"
},
{
"math_id": 137,
"text": "At^2 + 2Bt + C = 0"
},
{
"math_id": 138,
"text": "A = l_3^2 + m_3^2 + \\frac{a^2}{b^2}n_3^2"
},
{
"math_id": 139,
"text": "B = x_0l_3 + y_0m_3 + \\frac{a^2}{b^2}z_0n_3"
},
{
"math_id": 140,
"text": "C = x_0^2 + y_0^2 + \\frac{a^2}{b^2}z_0^2 - a^2"
},
{
"math_id": 141,
"text": "t = \\frac{-B \\pm \\sqrt{{B}^2-AC}}{A}"
},
{
"math_id": 142,
"text": "B^2 < AC"
},
{
"math_id": 143,
"text": "B^2 = AC"
},
{
"math_id": 144,
"text": "t = -B/A"
},
{
"math_id": 145,
"text": "A\\ne0"
},
{
"math_id": 146,
"text": "\\mathbf{R} = \\mathbf{R_0} + t\\mathbf{N_3}"
},
{
"math_id": 147,
"text": "\\mathbf\\hat N = (0, 1, 0)"
},
{
"math_id": 148,
"text": "d = 0"
},
{
"math_id": 149,
"text": "\\mathbf{\\hat N_1} = (l, m, n)"
},
{
"math_id": 150,
"text": "d_1 = d"
},
{
"math_id": 151,
"text": "\\mathbf{\\hat N_2} = (0, 0, 1)"
},
{
"math_id": 152,
"text": "d_2 = z_0"
},
{
"math_id": 153,
"text": "z_0"
},
{
"math_id": 154,
"text": "\\mathbf{N_3} = \\mathbf\\hat N_1 \\times \\mathbf\\hat N_2 = (m, -l, 0)"
},
{
"math_id": 155,
"text": "{\\mathbf\\hat N_1}\\cdot{\\mathbf\\hat N_2} = n"
},
{
"math_id": 156,
"text": "C_1 = \\frac{1}{p^2}(d- nz_0)"
},
{
"math_id": 157,
"text": "C_2 = \\frac{1}{p^2}(z_0- nd)"
},
{
"math_id": 158,
"text": "1 - n^2 = l^2 + m^2 = p^2"
},
{
"math_id": 159,
"text": "x = x_0 + tm"
},
{
"math_id": 160,
"text": "y = y_0 - tl"
},
{
"math_id": 161,
"text": "z = z_0"
},
{
"math_id": 162,
"text": "x_0= C_1l"
},
{
"math_id": 163,
"text": "y_0 = C_1m"
},
{
"math_id": 164,
"text": "A = m^2 + l^2 = p^2"
},
{
"math_id": 165,
"text": "B = mx_0 - ly_0 = lmC_1 - lmC_1 = 0"
},
{
"math_id": 166,
"text": "C = x_0^2 + y_0^2 + \\frac{a^2}{b^2}z_0^2 - a^2 = p^2C_1^2 + \\frac{a^2}{b^2}z_0^2 - a^2 = \\frac{1}{p^2}(d- nz_0)^2 + \\frac{a^2}{b^2}z_0^2 - a^2"
},
{
"math_id": 167,
"text": "B = 0"
},
{
"math_id": 168,
"text": "A > 0"
},
{
"math_id": 169,
"text": "C = 0"
},
{
"math_id": 170,
"text": "Ez_0^2 - 2Fz_0 + G = 0"
},
{
"math_id": 171,
"text": "E = \\frac{a^2}{b^2}p^2 + n^2"
},
{
"math_id": 172,
"text": "F = nd"
},
{
"math_id": 173,
"text": "G = d^2 - a^2p^2"
},
{
"math_id": 174,
"text": "z_0 = \\frac{F \\pm \\sqrt{{F}^2-EG}}{E}"
},
{
"math_id": 175,
"text": "x_0"
},
{
"math_id": 176,
"text": "y_0"
},
{
"math_id": 177,
"text": "t = -B/A = 0"
},
{
"math_id": 178,
"text": "x = x_0"
},
{
"math_id": 179,
"text": "y = y_0"
},
{
"math_id": 180,
"text": "\\mathbf{R} = \\mathbf{R_c} \\pm b^*\\mathbf\\hat j^*"
},
{
"math_id": 181,
"text": "\\mathbf{R} = \\mathbf{R_c} \\pm a^*\\mathbf\\hat i^*"
}
] |
https://en.wikipedia.org/wiki?curid=57078824
|
57079200
|
Weighted urban proliferation
|
Weighted urban proliferation (WUP) is a method used for measuring urban sprawl. This method, first introduced by Jaeger et al. (2010), calculates and presents the degree of urban sprawl as a numeric value. The method is based on the premise that the overall degree of urban sprawl increases as the built-over area in a given landscape increases (amount of built-up area), as this built-up area becomes more dispersed (spatial configuration), and as the uptake of built-up area per inhabitant or job increases (utilization intensity in the built-up area).
The WUP method, thus, measures urban sprawl by integrating these three dimensions into a single metric.
formula_0
Since the utilization density and dispersion are weighted with the weighting functions formula_1 and formula_2, this metric of urban sprawl is referred to as Weighted Urban Proliferation (WUP).
Three components of the WUP method.
Urban permeation.
The first component of the WUP method is urban permeation (UP). UP measures the size of the built-up area as well as its degree of dispersion throughout the study area (reporting unit). The formula for UP is
formula_3
UP is expressed in urban permeation units per m2 of land (UPU/m2). Within the framework of the WUP method, built-up areas are defined as areas where buildings are located. Since roads, railway lines, and parking lots are not buildings, they are disregarded in the WUP method of measuring urban sprawl.
Dispersion.
The second component of the WUP method is dispersion (DIS). This component is based on the idea that the degree of urban sprawl intensifies with both increasing amount of urban area and increasing dispersion. The dispersion metric analyses the pattern of built-up area on the landscape from a geometric perspective. The analysis is performed by taking distance measurements between random points within the built-up area, and the average value is then computed from the measurements of all possible pairs of points. The farther apart any two points are, the higher the measurement value and the higher their contribution to dispersion; the closer any two points are, the lower the value and the lower their contribution to dispersion. With the w1(DIS) function, dispersion values are weighted. This weighting function gives a higher weight to sections of the landscape where built-up areas are more dispersed, and a lower weight to compactly settled areas with low dispersion. A minimal sketch of the averaging step is given below.
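The following Python sketch illustrates the pairwise-averaging idea described above. It computes the mean of a distance-dependent measurement over all pairs of sample points in the built-up area; the measurement function used here is only a placeholder, since the exact function and weighting of the published WUP method are defined in Jaeger et al. (2010) and are not reproduced in this article.

import itertools
import math

def mean_pairwise_measurement(points, measure=lambda d: d):
    """Average a distance-dependent measurement over all pairs of points.

    `points` is a list of (x, y) coordinates sampled from the built-up area.
    `measure` maps a distance to a measurement value; the default (the raw
    distance) is only an illustrative placeholder.
    """
    pairs = list(itertools.combinations(points, 2))
    if not pairs:
        return 0.0
    total = 0.0
    for (x1, y1), (x2, y2) in pairs:
        d = math.hypot(x2 - x1, y2 - y1)
        total += measure(d)          # farther apart -> larger contribution
    return total / len(pairs)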
Utilization density.
The third component of the WUP model is utilization density (UD). This component is based on the premise that as more people and jobs are located in the built-up area, the more efficient the utilization of the land becomes.
formula_4
The number of jobs is included in the calculation to emphasize that many downtown areas are dominated by office buildings that have very few residents, yet each building, and thus the land it is on, is densely utilized and should not be considered sprawl. With the w2(UD) function, utilization density values are weighted. This weighting function assigns sections of the built-up area a value between 0 and 1 depending on their utilization density: the higher the utilization density, the lower the weighting value. This lower weight reflects the understanding that dense subsections of the reporting unit, like inner cities, are not considered urban sprawl. A sketch of how the three components combine is given below.
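The sketch below shows, in Python, how the three components could be combined into the WUP value according to the formulas above. The weighting functions w1 and w2 are placeholders chosen only to have the qualitative behaviour described in the text (w1 increasing with dispersion, w2 decreasing with utilization density); the actual functional forms are specified in Jaeger et al. (2010).

def urban_permeation(built_up_area, reporting_unit_area, dispersion):
    # UP = (size of built-up area / reporting unit) * dispersion, in UPU/m^2
    return built_up_area / reporting_unit_area * dispersion

def utilization_density(inhabitants, jobs, built_up_area):
    # UD = (number of inhabitants + number of jobs) / size of built-up area
    return (inhabitants + jobs) / built_up_area

def weighted_urban_proliferation(up, dis, ud, w1, w2):
    # WUP = UP * w1(DIS) * w2(UD)
    return up * w1(dis) * w2(ud)

# Placeholder weightings, NOT the published functions:
w1 = lambda dis: dis / (dis + 1.0)            # grows with dispersion
w2 = lambda ud: 1.0 / (1.0 + ud / 1000.0)     # shrinks with utilization density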
Examples of projects which used the WUP method.
Hayek et al. (2010) used settlement development scenarios for Switzerland, to find the causes of urban sprawl in order to reduce undesired future settlement developments. The results show that overall urban permeation and dispersion of settlement areas is likely to increase, in varying degrees, in all scenarios by 2030.
Jaeger & Schwick (2014) analysed historical changes as well as future scenarios for urban sprawl in Switzerland. They concluded that the degree of urban sprawl had increased by 155% between 1935 and 2002 and that, within the framework of modelling future scenarios, urban sprawl is likely to further increase by more than 50% by 2050 without abrupt mitigation measures.
Jaeger et al. (2015) analysed the degree of urban sprawl for 32 countries in Europe. The results show that large parts of Europe are affected by urban sprawl and that significant increases took place between 2006 and 2009; however, the values for the individual countries differ greatly.
Nazarnia, Schwick & Jaeger (2016) compared patterns of accelerated urban sprawl, between 1951 and 2011, in the metropolitan areas of Montreal and Quebec City in Canada and of Zurich in Switzerland. Their research determined that the degree of urban sprawl increased 26-fold in Montreal, 9-fold in Quebec City, and 3-fold in Zurich.
Torres, Jaeger & Alonso (2016) quantified spatial patterns of urban sprawl for mainland Spain at multiple scales. They tested the stability, non-stationarity, and scale-dependency of the relationship between landscape fragmentation patterns and urban sprawl.
Weilenmann, Seidl & Schulz (2017) analysed the major socio-economic determinants of change in urban patterns in Switzerland. Their analysis covered the years 1980–2010 and was conducted on all of the 2495 Swiss municipalities.
|
[
{
"math_id": 0,
"text": "\n\\begin{align}\n\\text{WUP} &= \\text{UP} \\cdot w_1(\\text{Dis}) \\cdot w_2(\\text{UD}) \\\\ \n\\\\ \n\\text{where}~&\n\\begin{cases}\n\\text{UP} = \\text{Urban Permeation} \\\\\nw_1(\\text{Dis}) = \\text{Weighting}_1(\\text{Dispersion})\\\\ \nw_2(\\text{UD}) = \\text{Weighting}_2(\\text{Utilization Density}) \n\\end{cases}\n\\end{align}"
},
{
"math_id": 1,
"text": "w_1(\\text{Dis})"
},
{
"math_id": 2,
"text": "w_2(\\text{UD})"
},
{
"math_id": 3,
"text": "\\text{UP} = \\frac{\\text{Size of Built-up Area}}{\\text{Reporting Unit}} \\cdot \\text{Dispersion}\n"
},
{
"math_id": 4,
"text": "\\text{Utilization Density} = \\frac{\\text{Number of Inhabitants} + \\text{Number of Jobs}}{\\text{Size of Built-up Area}}"
}
] |
https://en.wikipedia.org/wiki?curid=57079200
|
5707971
|
Weierstrass product inequality
|
In mathematics, the Weierstrass product inequality states that for any real numbers 0 ≤ "x1", "..., xn" ≤ 1 we have
formula_0
and similarly, for 0 ≤ "x1", "..., xn,"
formula_1
where formula_2
The inequality is named after the German mathematician Karl Weierstrass.
Proof.
The inequality with the subtractions can be proven easily via mathematical induction. The one with the additions is proven identically. We can choose formula_3 as the base case and see that for this value of formula_4 we get
formula_5
which is indeed true. Assuming now that the inequality holds for all natural numbers up to formula_6, for formula_7 we have:
formula_8
formula_9
formula_10
formula_11
formula_12
which concludes the proof.
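The induction argument above can be complemented by a quick numerical sanity check. The following Python snippet verifies both forms of the inequality on randomly generated inputs; it is illustrative only and is of course not a substitute for the proof.

import random

def check_weierstrass(trials=10000, n_max=10):
    for _ in range(trials):
        n = random.randint(1, n_max)
        xs = [random.random() for _ in range(n)]    # 0 <= x_i <= 1
        s = sum(xs)                                 # S_n
        prod_minus = 1.0
        prod_plus = 1.0
        for x in xs:
            prod_minus *= (1.0 - x)
            prod_plus *= (1.0 + x)
        assert prod_minus >= 1.0 - s - 1e-12        # subtraction form
        assert prod_plus >= 1.0 + s - 1e-12         # addition form
    return True

check_weierstrass()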
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "(1-x_1)(1-x_2)(1-x_3)(1-x_4)....(1-x_n) \\geq 1-S_n, "
},
{
"math_id": 1,
"text": "(1+x_1)(1+x_2)(1+x_3)(1+x_4)....(1+x_n) \\geq 1+S_n,"
},
{
"math_id": 2,
"text": "S_n=x_1+x_2+x_3+x_4+....+x_n."
},
{
"math_id": 3,
"text": " n = 1 "
},
{
"math_id": 4,
"text": " n "
},
{
"math_id": 5,
"text": " 1 -x_1 \\geq 1 - x_1 "
},
{
"math_id": 6,
"text": " n > 1"
},
{
"math_id": 7,
"text": " n + 1 "
},
{
"math_id": 8,
"text": " \n\\prod_{i=1}^{n+1}(1-x_i)\\,\\, = (1-x_{n+1})\\prod_{i=1}^{n}(1-x_i)"
},
{
"math_id": 9,
"text": "\\geq (1-x_{n+1})\\left(1 - \\sum_{i=1}^nx_i\\right)\n"
},
{
"math_id": 10,
"text": "= 1 - \\sum_{i=1}^nx_i - x_{n+1} + x_{n+1}\\sum_{i=1}^nx_i\n"
},
{
"math_id": 11,
"text": "= 1 - \\sum_{i=1}^{n+1}x_i + x_{n+1}\\sum_{i=1}^nx_i\n"
},
{
"math_id": 12,
"text": "\\geq 1 - \\sum_{i=1}^{n+1}x_i \n"
}
] |
https://en.wikipedia.org/wiki?curid=5707971
|
5708400
|
Marked graph
|
Specific type of Petri net
A marked graph is a Petri net in which every place has exactly one incoming arc and exactly one outgoing arc. This means that there can "not" be "conflict", but there can be "concurrency". Mathematically: formula_0. Marked graphs are used mostly to mathematically represent concurrently running operations, such as a multiprocessor machine's internal process state. This class of Petri nets gets its name from a popular way of representing them: as a graph where each place is an edge and each transition is a node.
Uses.
Marked graphs are mainly used to represent concurrent mechanisms mathematically, so that certain characteristics of the design can be derived mathematically.
Example.
This example presents a marked graph in which a process is forked at transition T1 and synchronised at T4. In between, two operations, T2 and T3, take place in a non-deterministic fashion; indeed, Petri nets are non-deterministic to the extent that these operations may not take place at all. The purpose of this non-determinism is to mimic real-life experience, which shows that in parallel computing it is impossible to determine which process or thread will finish first, i.e. which operation(s) will execute faster. This can be due to waiting for I/O in the real world, or simply to the different parameters given to the processes or threads. A small check of the marked-graph property for a net of this shape is sketched below.
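The Python sketch below checks the marked-graph condition formula_0 for a hypothetical encoding of a fork/join net of the kind described above; the place names p1-p4 and the arc structure are assumptions made for illustration, not part of the original example.

def is_marked_graph(places, arcs):
    """Check |p.| = |.p| = 1 for every place p.

    `places` is an iterable of place names; `arcs` is a set of
    (source, target) pairs mixing place->transition and
    transition->place arcs.
    """
    for p in places:
        incoming = sum(1 for (src, dst) in arcs if dst == p)   # |.p|
        outgoing = sum(1 for (src, dst) in arcs if src == p)   # |p.|
        if incoming != 1 or outgoing != 1:
            return False
    return True

# Hypothetical fork/join net: T1 forks into two branches (T2, T3), T4 joins.
places = {"p1", "p2", "p3", "p4"}
arcs = {("T1", "p1"), ("p1", "T2"), ("T2", "p3"), ("p3", "T4"),
        ("T1", "p2"), ("p2", "T3"), ("T3", "p4"), ("p4", "T4")}
assert is_marked_graph(places, arcs)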
|
[
{
"math_id": 0,
"text": "\\forall p\\in P: |p\\bullet|=|\\bullet p|=1"
}
] |
https://en.wikipedia.org/wiki?curid=5708400
|
57084837
|
Heterogeneous gold catalysis
|
Heterogeneous gold catalysis refers to the use of elemental gold as a heterogeneous catalyst. As in most heterogeneous catalysis, the metal is typically supported on a metal oxide. Furthermore, as seen with other heterogeneous catalysts, activity increases with decreasing diameter of the supported gold clusters. Several industrially relevant processes are also observed, such as H2 activation, the water-gas shift reaction, and hydrogenation. One or two gold-catalyzed reactions may have been commercialized.
The high activity of supported gold clusters has been proposed to arise from a combination of structural changes, quantum-size effects and support effects that preferentially tune the electronic structure of gold such that optimal binding of adsorbates during the catalytic cycle is enabled. The selectivity and activity of gold nanoparticles can be finely tuned by varying the choice of support material, with e.g. titania (TiO2), hematite (α-Fe2O3), cobalt(II/III) oxide (Co3O4) and nickel(II) oxide (NiO) serving as the most effective support materials for facilitating the catalysis of CO combustion. Besides enabling an optimal dispersion of the nanoclusters, the support materials have been suggested to promote catalysis by altering the size, shape, strain and charge state of the cluster. A precise shape control of the deposited gold clusters has been shown to be important for optimizing the catalytic activity, with hemispherical, few atomic layers thick nanoparticles generally exhibiting the most desirable catalytic properties due to maximized number of high-energy edge and corner sites.
Proposed applications.
In the past, heterogeneous gold catalysts have found preliminary commercial applications for the industrial production of vinyl chloride (precursor to polyvinyl chloride or PVC) and methyl methacrylate. Traditionally, PVC production uses mercury catalysts and leads to serious environmental concerns. China accounts for 50% of the world's mercury emissions, and 60% of China's mercury emission is caused by PVC production. Although gold catalysts are slightly expensive, the overall production cost is affected by only ~1%. Therefore, green gold catalysis is considered valuable. Fluctuations in the price of gold, however, later led to a halt of operations based on their use in catalytic converters. More recently, there have been many developments in gold catalysis for the synthesis of organic molecules, including C-C bond forming homocoupling or cross-coupling reactions, and it has been speculated that some of these catalysts could find applications in various fields.
CO oxidation.
Gold can be a very active catalyst in the oxidation of carbon monoxide (CO), i.e. the reaction of CO with molecular oxygen to produce carbon dioxide (CO2). Particles of 2 to 5 nm exhibit high catalytic activities. Supported gold clusters, thin films and nanoparticles are one to two orders of magnitude more active than atomically dispersed gold cations or unsupported metallic gold.
Gold cations can be dispersed atomically on basic metal oxide supports such as MgO and La2O3. Monovalent and trivalent gold cations have been identified, the latter being more active but less stable than the former. The turnover frequency (TOF) of CO oxidation on these cationic gold catalysts is on the order of 0.01 s−1, with a very high activation energy of 138 kJ/mol.
Supported gold nanoclusters with a diameter < 2 nm are active for CO oxidation with a turnover frequency (TOF) on the order of 0.1 s−1. It has been observed that clusters with 8 to 100 atoms are catalytically active. The reason is that, on one hand, eight atoms are the minimum necessary to form a stable, discrete energy band structure, and on the other hand, d-band splitting decreases in clusters with more than 100 atoms, resembling the bulk electronic structure. The support has a substantial effect on the electronic structure of gold clusters. Metal hydroxide supports such as Be(OH)2, Mg(OH)2, and La(OH)3, with gold clusters of < 1.5 nm in diameter, constitute highly active catalysts for CO oxidation at 200 K (-73 °C). By means of techniques such as HR-TEM and EXAFS, it has been proven that the activity of these catalysts is due exclusively to clusters with 13 atoms arranged in an icosahedral structure. Furthermore, the metal loading should exceed 10 wt% for the catalysts to be active.
Gold nanoparticles in the size range of 2 to 5 nm catalyze CO oxidation with a TOF of about 1 s−1 at temperatures below 273 K (0 °C). The catalytic activity of nanoparticles is brought about in the absence of moisture when the support is semiconductive or reducible, e.g. TiO2, MnO2, Fe2O3, ZnO, ZrO2, or CeO2. However, when the support is insulating or non-reducible, e.g. Al2O3 and SiO2, a moisture level > 5000 ppm is required for activity at room temperature. In the case of powder catalysts prepared by wet methods, the surface OH− groups on the support provide sufficient aid as co-catalysts, so that no additional moisture is necessary. At temperatures above 333 K (60 °C), no water is needed at all.
The apparent activation energy of CO oxidation on supported gold powder catalysts prepared by wet methods is 2-3 kJ/mol above 333 K (60 °C) and 26-34 kJ/mol below 333 K. These energies are low, compared to the values displayed by other noble metal catalysts (80-120 kJ/mol). The change in activation energy at 333 K can be ascribed to a change in reaction mechanism. This explanation has been supported experimentally. At 400 K (127 °C), the reaction rate per surface Au atom is not dependent on particle diameter, but the reaction rate per perimeter Au atom is directly proportional to particle diameter. This suggests that the mechanism above 333 K takes place on the gold surfaces. By contrast, at 300 K (27 °C), the reaction rate per surface Au atom is inversely proportional to particle diameter, while the rate per perimeter interface does not depend on particle size. Hence, CO oxidation occurs on the perimeter sites at room temperature. Further information on the reaction mechanism has been revealed by studying the dependency of the reaction rate on the partial pressures of the reactive species. Both at 300 K and 400 K, there is a first order rate dependency on CO partial pressure up to 4 Torr (533 Pa), above which the reaction is zero order. With respect to O2, the reaction is zero order above 10 Torr (54.7 kPa) at both 300 and 400 K. The order with respect to O2 at lower partial pressures is 1 at 300 K and 0.5 at 400 K. The shift towards zero order indicates that the catalyst's active sites are saturated with the species in question. Hence, a Langmuir-Hinshelwood mechanism has been proposed, in which CO adsorbed on gold surfaces reacts with O adsorbed at the edge sites of the gold nanoparticles.
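To put the activation energies quoted above in perspective, the Arrhenius equation can be used to compare rate constants. The following Python sketch, which assumes equal pre-exponential factors purely for illustration, shows how much faster a reaction with a ~30 kJ/mol apparent barrier proceeds at room temperature than one with a ~100 kJ/mol barrier typical of other noble metal catalysts.

import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius_rate_ratio(Ea_low, Ea_high, T):
    """Ratio k_low/k_high of rate constants, assuming equal pre-exponential factors.

    Ea_low, Ea_high are apparent activation energies in kJ/mol; T in kelvin.
    """
    return math.exp(-(Ea_low - Ea_high) * 1000.0 / (R * T))

# ~30 kJ/mol (gold below 333 K) versus ~100 kJ/mol (typical noble metal) at 300 K
ratio = arrhenius_rate_ratio(30.0, 100.0, 300.0)   # roughly 1.6e12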
The need to use oxide supports, and more specifically reducible supports, is due to their ability to activate dioxygen. Gold nanoparticles supported on inert materials such as carbon or polymers have been proven inactive in CO oxidation. The aforementioned dependency of some catalysts on water or moisture also relates to oxygen activation. The ability of certain reducible oxides, such as MnO2, Co3O4, and NiO to activate oxygen in dry conditions (< 0.1 ppm H2O) can be ascribed to the formation of oxygen defects during pretreatment.
Water gas shift.
Water gas shift is the most widespread industrial process for the production of dihydrogen, H2. It involves the reaction of carbon monoxide (a component of syngas) with water to form hydrogen, with carbon dioxide as a byproduct. In many catalytic reaction schemes, one of the elementary reactions is the oxidation of CO with an adsorbed oxygen species. Gold catalysts have been proposed as an alternative for water gas shift at low temperatures, viz. < 523 K (250 °C). This technology is essential to the development of solid oxide fuel cells. Hematite has been found to be an appropriate catalyst support for this purpose. Furthermore, a bimetallic Au-Ru/Fe2O3 catalyst has been proven highly active and stable for low-temperature water gas shift. Titania and ceria have also been used as supports for effective catalysts. Unfortunately, Au/CeO2 is prone to deactivation caused by surface-bound carbonate or formate species.
Although gold catalysts are active at room temperature to CO oxidation, the high amounts of water involved in water gas shift require higher temperatures. At such temperatures, gold is fully reduced to its metallic form. However, the activity of e.g. Au/CeO2 has been enhanced by CN− treatment, whereby metallic gold is leached, leaving behind highly active cations. According to DFT calculations, the presence of such Au cations on the catalyst is allowed by empty, localized nonbonding f states in CeO2. On the other hand, STEM studies of Au/CeO2 have revealed nanoparticles of 3 nm in diameter. Water gas shift has been proposed to occur at the interface of Au nanoparticles and the reduced CeO2 support.
Epoxidations.
Although the epoxidation of ethylene is routinely achieved in the industry with selectivities as high as 90% on Ag catalysts, most catalysts provided < 10% selectivity for propylene epoxidation. Using a gold catalyst supported on titanium silicate-1 (TS-1) molecular sieve, yields of 350 g/h per gram of gold were obtained at 473 K (200 °C). The reaction took place in the gas phase. Furthermore, using mesoporous titanosilicate supports (Ti-MCM-41 and Ti-MCM-48), gold catalysts provided > 90% selectivity at ~ 7% propylene conversion, 40% H2 efficiency, and 433 K (160 °C). The active species in these catalysts were identified to be hemispherical gold nano-crystals of less than 2 nm in diameter in intimate contact with the support.
Alkene epoxidation has been demonstrated in absence of H2 reductant in the liquid phase. For example, using 1% Au/graphite, ~80% selectivities of cis-cyclooctene to cyclooctene oxide (analogous to cyclohexene oxide) were obtained at 7-8% conversion, 353 K (80 °C), and 3 MPa O2 in absence of hydrogen or solvent. Other liquid-phase selective oxidations have been achieved with saturated hydrocarbons. For instance, cyclohexane has been converted to cyclohexanone and cyclohexanol with a combined selectivity of ~100% on gold catalysts. Product selectivities can be tuned in liquid phase reactions by the presence or absence of solvent and by the nature of the latter, viz. water, polar, or nonpolar. With gold catalysts, the catalyst's support has less influence on reactions in the liquid phase than on reactions in the gas phase.
Selective hydrogenations.
Typical hydrogenation catalysts are based on metals from groups 8, 9, and 10, such as Ni, Ru, Pd, and Pt. By comparison, gold has a poor catalytic activity for hydrogenation. This low activity is caused by the difficulty of dihydrogen activation on gold. While hydrogen dissociates on Pd and Pt without an energy barrier, dissociation on Au(111) has an energy barrier of ~1.3 eV, according to DFT calculations. These calculations agree with experimental studies, in which hydrogen dissociation was not observed on gold (111) or (110) terraces, nor on (331) steps. No dissociation was observed on these surfaces either at room temperature or at 473 K (200 °C). However, the rate of hydrogen activation increases for Au nanoparticles. Notwithstanding its poor activity, nano-sized gold immobilized on various supports has been found to provide good selectivity in hydrogenation reactions.
One of the early studies (1966) of hydrogenation on supported, highly dispersed gold was performed with 1-butene and cyclohexene in the gas phase at 383 K (110 °C). The reaction rate was found to be first order with respect to alkene pressure and second order with respect to chemisorbed hydrogen. In later works, it was shown that gold-catalyzed hydrogenation can be highly sensitive to Au loading (hence to particle size) and to the nature of the support. For example, 1-pentene hydrogenation occurred optimally on 0.04 wt% Au/SiO2, but not at all on Au/γ-Al2O3. By contrast, the hydrogenation of 1,3-butadiene to 1-butene was shown to be relatively insensitive to Au particle size in a study with a series of Au/Al2O3 catalysts prepared by different methods. With all the tested catalysts, conversion was ~100% and selectivity, < 60%. Concerning reaction mechanisms, in a study of propylene hydrogenation on Au/SiO2, reaction rates were determined using D2 and H2. Because the reaction with deuterium was substantially slower, it was suggested that the rate-determining step in alkene hydrogenation was the cleavage of the H-H bond. Lastly, ethylene hydrogenation was studied on Au/MgO at atmospheric pressure and 353 K (80 °C) with EXAFS, XANES and IR spectroscopy, suggesting that the active species might be Au+3 and the reaction intermediate, an ethylgold species.
Gold catalysts are especially selective in the hydrogenation of α,β-unsaturated aldehydes, i.e. aldehydes containing a C=C double bond on the carbon adjacent to the carbonyl. Gold catalysts are able to hydrogenate only the carbonyl group, so that the aldehyde is transformed to the corresponding alcohol while the C=C double bond is left untouched. In the hydrogenation of crotonaldehyde to crotyl alcohol, 80% selectivity was attained at 5-10% conversion and 523 K (250 °C) on Au/ZrO2 and Au/ZnO. The selectivity increased along with Au particle size in the range of ~2 to ~5 nm. Other instances of this reaction include acrolein, citral, benzalacetone, and pent-3-en-2-one. The activity and selectivity of gold catalysts for this reaction have been linked to the morphology of the nanoparticles, which in turn is influenced by the support. For example, round particles tend to form on TiO2, while ZnO promotes particles with clear facets, as observed by TEM. The round morphology provides a higher relative amount of low-coordinated metal surface sites, which explains the higher activity observed with Au/TiO2 compared to Au/ZnO. Finally, a bimetallic Au-In/ZnO catalyst has been observed to improve the selectivity towards the hydrogenation of the carbonyl in acrolein. It was observed in HRTEM images that indium thin films decorate some of the facets of the gold nanoparticles. The promoting effect on selectivity might result from the fact that only the Au sites that promote side-reactions are decorated by In.
A strategy that in many reactions has succeeded at improving gold's catalytic activity without impairing its selectivity is to synthesize bimetallic Pd-Au or Pt-Au catalysts. For the hydrogenation of 1,3-butadiene to butenes, model surfaces of Au(111), Pd-Au(111), Pd-Au(110), and Pd(111) were studied with LEED, AES, and LEIS. A selectivity of ~100% was achieved on Pd70Au30(111) and it was suggested that Au might promote the desorption of the product during the reaction. A second instance is the hydrogenation of "p"-chloronitrobenzene to "p"-chloroaniline, in which selectivity suffers with typical hydrogenation catalysts due to the parallel hydrodechlorination to aniline. However, Pd-Au/Al2O3 (Au/Pd ≥20) has been proven thrice as active as the pure Au catalyst, while being ~100% selective to "p"-chloroaniline. In a mechanistic study of hydrogenation of nitrobenzenes with Pt-Au/TiO2, the dissociation of H2 was identified as rate-controlling, hence the incorporation of Pt, an efficient hydrogenation metal, highly improved catalytic activity. Dihydrogen dissociated on Pt and the nitroaromatic compound was activated on the Au-TiO2 interface. Finally, hydrogenation was enabled by the spillover of activated H surface species from Pt to the Au surface.
Theoretical background.
Bulk metallic gold is known to be inert, exhibiting a surface reactivity at room temperature only towards a few substances such as formic acid and sulphur-containing compounds, e.g. H2S and thiols. Within heterogeneous catalysis, reactants adsorb onto the surface of the catalyst thus forming activated intermediates. However, if the adsorption is weak such as in the case of bulk gold, a sufficient perturbation of the reactant electronic structure does not occur and catalysis is hindered (Sabatier's principle). When gold is deposited as nanosized clusters of less than 5 nm onto metal oxide supports, a markedly increased interaction with adsorbates is observed, thereby resulting in surprising catalytic activities. Evidently, nano-scaling and dispersing gold on metal oxide substrates makes gold less noble by tuning its electronic structure, but the precise mechanisms underlying this phenomenon are as of yet uncertain and hence widely studied.
It is generally known that decreasing the size of metallic particles in some dimension to the nanometer scale will yield clusters with a significantly more discrete electronic band structure in comparison with the bulk material. This is an example of a quantum-size effect and has been previously correlated with an increased reactivity enabling nanoparticles to bind gas phase molecules more strongly. In the case of TiO2-supported gold nanoparticles, Valden "et al." observed the opening of a band gap of approximately 0.2-0.6 eV in the gold electronic structure as the thickness of the deposited particles was decreased below three atomic layers. The two-layer thick supported gold clusters were also shown to be exceptionally active for CO combustion, based on which it was concluded that quantum-size effects inducing a metal-insulator transition play a key role in enhancing the catalytic properties of gold. However, decreasing the size further to a single atomic layer and a diameter of less than 3 nm was reported to again decrease the activity. This has later been explained by a destabilization of clusters composed of very few atoms, resulting in too strong bonding of adsorbates and thus poisoning of the catalyst.
The properties of the metal d-band are central for describing the origin of catalytic activity based on electronic effects. According to the d-band model of heterogeneous catalysis, substrate-adsorbate bonds are formed as the discrete energy levels of the adsorbate molecule interacts with the metal d-band, thus forming bonding and antibonding orbitals. The strength of the formed bond depends on the position of the d-band center such that a d-band closer to the Fermi level (formula_0) will result in stronger interaction. The d-band center of bulk gold is located far below formula_0, which qualitatively explains the observed weak binding of adsorbates as both the bonding and antibonding orbitals formed upon adsorption will be occupied, resulting in no net bonding. However, as the size of gold clusters is decreased below 5 nm, it has been shown that the d-band center of gold shifts to energies closer to the Fermi level, such that the as formed antibonding orbital will be pushed to an energy above formula_0, hence reducing its filling. In addition to a shift in the d-band center of gold clusters, the size-dependency of the d-band width as well as the formula_1 spin-orbit splitting has been studied from the viewpoint of catalytic activity. As the size of the gold clusters is decreased below 150 atoms (diameter ca. 2.5 nm), rapid drops in both values occur. This can be attributed to d-band narrowing due to the decreased number of hybridizing valence states of small clusters as well as to the increased ratio of high-energy edge atoms with low coordination to the total number of Au atoms. The effect of the decreased formula_1 spin-orbit splitting as well as the narrower distribution of d-band states on the catalytic properties of gold clusters cannot be understood via simple qualitative arguments as in the case of the d-band center model. Nevertheless, the observed trends provide further evidence that a significant perturbation of the Au electronic structure occurs upon nanoscaling, which is likely to play a key role in the enhancement of the catalytic properties of gold nanoparticles.
A central structural argument explaining the high activity of metal oxide supported gold clusters is based on the concept of periphery sites formed at the junction between the gold cluster and the substrate. In the case of CO oxidation, it has been hypothesized that CO adsorbs onto the edges and corners of the gold clusters, while the activation of oxygen occurs at the peripheral sites. The high activity of edge and corner sites towards adsorption can be understood by considering the high coordinative unsaturation of these atoms in comparison with terrace atoms. The low degree of coordination increases the surface energy of corner and edge sites, hence making them more active towards binding adsorbates. This is further coupled with the local shift of the d-band center of the unsaturated Au atoms towards energies closer to the Fermi level, which in accordance with the d-band model results in increased substrate-adsorbate interaction and lowering of the adsorption-dissociation energy barriers. Lopez "et al." calculated the adsorption energy of CO and O2 on the Au(111) terrace on which the Au-atoms have a coordination number of 9 as well as on an Au10 cluster where the most reactive sites have a coordination of 4. They observed that the bond strengths are in general increased by as much as 1 eV, indicating a significant activation towards CO oxidation if one assumes that the activation barriers of surface reactions scale linearly with the adsorption energies (Brønsted-Evans-Polanyi principle). The observation that hemispherical two-layer gold clusters with a diameter of a few nanometers are most active for CO oxidation is well in line with the assumption that edge and corner atoms serve as the active sites, since for clusters of this shape and size the ratio of edge atoms to the total number of atoms is indeed maximized.
The preferential activation of O2 at the perimeter sites is an example of a support effect that promotes the catalytic activity of gold nanoparticles. Besides enabling a proper dispersion of the deposited particles and hence a high surface-to-volume ratio, the metal oxide support also directly perturbs the electronic structure of the deposited gold clusters via various mechanisms, including strain induction and charge transfer. For gold deposited on magnesia (MgO), a charge transfer from singly charged oxygen vacancies (F-centers) at the MgO surface to the Au cluster has been observed. This charge transfer induces a local perturbation in the electronic structure of the gold clusters at the perimeter sites, enabling the formation of resonance states as the antibonding formula_2 orbital of oxygen interacts with the metal d-band. As the antibonding orbital is occupied, the O-O bond is significantly weakened and stretched, i.e. activated. In gas-phase model studies, the formation of activated super-oxo species O2- is found to correlate with the size-dependent electronic properties of the clusters. The activation of O2 at the perimeter sites is also observed for defect-free surfaces and neutral gold clusters, but to a significantly smaller extent. The activity-enhancing effect of charge transfer from the substrate to gold has also been reported by Chen and Goodman in the case of a gold bilayer supported on ultrathin TiO2 on Mo(112). In addition to charge transfer between the substrate and the gold nanoparticles, the support material has been observed to increase the catalytic activity of gold by inducing strain as a consequence of lattice mismatch. The induced strains especially affect the Au atoms close to the substrate-cluster interface, resulting in a shift of the local d-band center towards energies closer to the Fermi level. This corroborates the periphery hypothesis and the creation of catalytically active bifunctional sites at the cluster-support interface. Furthermore, the support-cluster interaction directly influences the size and shape of the deposited gold nanoparticles. In the case of weak interaction, less active 3D clusters are formed, whereas if the interaction is stronger more active 2D few-layer structures are formed. This illustrates the ability to fine-tune the catalytic activity of gold clusters via varying the support material as well as the underlying metal upon which the substrate has been grown.
Finally, it has been observed that the catalytic activity of supported gold clusters towards CO oxidation is further enhanced by the presence of water. Invoking the periphery hypothesis, water promotes the activation of O2 by co-adsorption onto the perimeter sites where it reacts with O2 to form adsorbed hydroxyl (OH*) and hydroperoxo (OOH*) species. The reaction of these intermediates with adsorbed CO is very rapid, and results in the efficient formation of CO2 with concomitant recovery of the water molecule.
|
[
{
"math_id": 0,
"text": "E_\\mathrm{F}"
},
{
"math_id": 1,
"text": "5d_{3/2}\\text{-}d_{5/2}"
},
{
"math_id": 2,
"text": "2\\pi^*"
}
] |
https://en.wikipedia.org/wiki?curid=57084837
|
5708736
|
Feedback linearization
|
Approach used in controlling nonlinear systems
Feedback linearization is a common strategy employed in nonlinear control to control nonlinear systems. Feedback linearization techniques may be applied to nonlinear control systems of the form
where formula_0 is the state, formula_1 are the inputs. The approach involves transforming a nonlinear control system into an equivalent linear control system through a change of variables and a suitable control input. In particular, one seeks a change of coordinates formula_2 and control input formula_3 so that the dynamics of formula_4 in the coordinates formula_5 take the form of a linear, controllable control system,
An outer-loop control strategy for the resulting linear control system can then be applied to achieve the control objective.
Feedback linearization of SISO systems.
Here, consider the case of feedback linearization of a single-input single-output (SISO) system. Similar results can be extended to multiple-input multiple-output (MIMO) systems. In this case, formula_6 and formula_7. The objective is to find a coordinate transformation formula_8 that transforms the system (1) into the so-called normal form which will reveal a feedback law of the form
that will render a linear input–output map from the new input formula_9 to the output formula_10. To ensure that the transformed system is an equivalent representation of the original system, the transformation must be a diffeomorphism. That is, the transformation must not only be invertible (i.e., bijective), but both the transformation and its inverse must be smooth so that differentiability in the original coordinate system is preserved in the new coordinate system. In practice, the transformation can be only locally diffeomorphic and the linearization results only hold in this smaller region.
Several tools are required to solve this problem.
Lie derivative.
The goal of feedback linearization is to produce a transformed system whose states are the output formula_10 and its first formula_11 derivatives. To understand the structure of this target system, we use the Lie derivative. Consider the time derivative of (2), which can be computed using the chain rule,
formula_12
Now we can define the Lie derivative of formula_13 along formula_14 as,
formula_15
and similarly, the Lie derivative of formula_13 along formula_16 as,
formula_17
With this new notation, we may express formula_18 as,
formula_19
Note that the notation of Lie derivatives is convenient when we take multiple derivatives with respect to either the same vector field, or a different one. For example,
formula_20
and
formula_21
Relative degree.
In our feedback linearized system made up of a state vector of the output formula_10 and its first formula_11 derivatives, we must understand how the input formula_22 enters the system. To do this, we introduce the notion of relative degree. Our system given by (1) and (2) is said to have relative degree formula_23 at a point formula_24 if,
formula_25 in a neighbourhood of formula_24 and all formula_26
formula_27
Considering this definition of relative degree in light of the expression of the time derivative of the output formula_10, we can consider the relative degree of our system (1) and (2) to be the number of times we have to differentiate the output formula_10 before the input formula_22 appears explicitly. In an LTI system, the relative degree is the difference between the degree of the transfer function's denominator polynomial (i.e., number of poles) and the degree of its numerator polynomial (i.e., number of zeros).
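As an illustration of the two definitions above, the following SymPy sketch computes the Lie derivatives and reads off the relative degree for a simple pendulum-like example. The system f(x) = (x2, -sin x1), g(x) = (0, 1), h(x) = x1 is an assumption chosen purely for illustration and is not taken from the article.

import sympy as sp

x1, x2 = sp.symbols('x1 x2')
states = sp.Matrix([x1, x2])

# Illustrative system: x_dot = f(x) + g(x) u, output y = h(x)
f = sp.Matrix([x2, -sp.sin(x1)])
g = sp.Matrix([0, 1])
h = x1

def lie(scalar, vector_field, states):
    """Lie derivative L_v scalar = (d scalar / dx) * v."""
    grad = sp.Matrix([scalar]).jacobian(states)   # 1 x n row vector
    return sp.simplify((grad * vector_field)[0, 0])

Lf_h   = lie(h, f, states)      # L_f h = x2
Lg_h   = lie(h, g, states)      # L_g h = 0: input does not appear yet
Lf2_h  = lie(Lf_h, f, states)   # L_f^2 h = -sin(x1)
LgLf_h = lie(Lf_h, g, states)   # L_g L_f h = 1 (nonzero) => relative degree 2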
Linearization by feedback.
For the discussion that follows, we will assume that the relative degree of the system is formula_28. In this case, after differentiating the output formula_28 times we have,
formula_29
where the notation formula_30 indicates the formula_28th derivative of formula_10. Because we assumed the relative degree of the system is formula_28, the Lie derivatives of the form formula_31 for formula_32 are all zero. That is, the input formula_22 has no direct contribution to any of the first formula_11 derivatives.
The coordinate transformation formula_33 that puts the system into normal form comes from the first formula_11 derivatives. In particular,
formula_34
transforms trajectories from the original formula_35 coordinate system into the new formula_36 coordinate system. So long as this transformation is a diffeomorphism, smooth trajectories in the original coordinate system will have unique counterparts in the formula_36 coordinate system that are also smooth. Those formula_36 trajectories will be described by the new system,
formula_37
Hence, the feedback control law
formula_38
renders a linear input–output map from formula_39 to formula_40. The resulting linearized system
formula_41
is a cascade of formula_28 integrators, and an outer-loop control formula_39 may be chosen using standard linear system methodology. In particular, a state-feedback control law of
formula_42
where the state vector formula_36 is the output formula_10 and its first formula_11 derivatives, results in the LTI system
formula_43
with,
formula_44
So, with the appropriate choice of formula_45, we can arbitrarily place the closed-loop poles of the linearized system.
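The following Python sketch closes the loop for the same illustrative second-order system used in the SymPy example above (x1'' = -sin(x1) + u with output y = x1, so that L_f^2 h = -sin(x1) and L_g L_f h = 1). It applies the feedback law u = (-L_f^2 h + v)/(L_g L_f h) with the outer-loop control v = -Kz and integrates the closed loop with a simple forward-Euler step; the gains, step size, and initial condition are arbitrary choices for illustration.

import numpy as np

def simulate(x0=(1.0, 0.0), K=(4.0, 4.0), dt=1e-3, steps=5000):
    """Feedback linearization of x1'' = -sin(x1) + u with output y = x1."""
    x = np.array(x0, dtype=float)
    K = np.asarray(K, dtype=float)
    for _ in range(steps):
        z = x.copy()                    # z1 = y = x1, z2 = y' = x2
        v = -K @ z                      # outer-loop linear state feedback
        u = np.sin(x[0]) + v            # u = (-L_f^2 h + v)/(L_g L_f h), with L_g L_f h = 1
        # plant dynamics: x1' = x2, x2' = -sin(x1) + u  (so that x2' = v)
        x = x + dt * np.array([x[1], -np.sin(x[0]) + u])
    return x

final_state = simulate()                # converges towards the origin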
Unstable zero dynamics.
Feedback linearization can be accomplished with systems that have relative degree less than formula_28. However, the normal form of the system will include zero dynamics (i.e., states that are not observable from the output of the system) that may be unstable. In practice, unstable dynamics may have deleterious effects on the system (e.g., it may be dangerous for internal states of the system to grow unbounded). These unobservable states may be controllable or at least stable, and so measures can be taken to ensure these states do not cause problems in practice. Minimum phase systems provide some insight on zero dynamics.
Feedback linearization of MIMO systems.
Although nonlinear dynamic inversion (NDI) is not necessarily restricted to this type of system, let us consider a nonlinear MIMO system that is affine in the input formula_46, as is shown below.
It is assumed that the number of inputs is the same as the number of outputs. Let us say there are formula_47 inputs and outputs. Then formula_48 is an formula_49 matrix, where formula_50 are the vectors making up its columns. Furthermore, formula_51 and formula_52. To use a similar derivation as for SISO, the system from Eq. 4 can be split up by isolating each formula_53'th output formula_54, as is shown in Eq. 5.
Similarly to SISO, it can be shown that up until the formula_55'th derivative of formula_54, the term formula_56. Here formula_57 refers to the relative degree of the formula_53'th output. Analogously, this gives
Working this out the same way as SISO, one finds that defining a virtual input formula_58 such that
linearizes this formula_53'th system. However, if formula_59, then formula_60 obviously cannot be solved for given a value for formula_58. Setting up such an equation for all formula_47 outputs, formula_61, results in formula_47 equations of the form shown in Eq. 7. Combining these equations results in a matrix equation, which generally allows solving for the input formula_60, as is shown below.
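As a complement to the matrix equation referred to above, the following Python sketch shows the standard way such a system is solved numerically at a given state: the formula_47 scalar equations are stacked into the form nu = b(x) + A(x) u, where b(x) collects the drift Lie-derivative terms and A(x) is the decoupling matrix of input Lie-derivative terms, and the equation is inverted for u. The names and the assumption that A(x) is nonsingular are illustrative, not taken from a specific reference.

import numpy as np

def mimo_inversion_input(b, A, nu):
    """Solve the stacked equations  nu = b(x) + A(x) u  for the input u.

    b  : length-m vector of drift Lie-derivative terms evaluated at x
    A  : m x m decoupling matrix of input Lie-derivative terms at x
    nu : length-m vector of virtual inputs
    Assumes A(x) is nonsingular at the current state.
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    nu = np.asarray(nu, dtype=float)
    return np.linalg.solve(A, nu - b)   # u = A(x)^{-1} (nu - b(x))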
Further reading.
<templatestyles src="Refbegin/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "x(t) \\in \\mathbb{R}^n"
},
{
"math_id": 1,
"text": "u_1(t), \\ldots, u_m(t) \\in \\mathbb{R}"
},
{
"math_id": 2,
"text": "z = \\Phi(x)"
},
{
"math_id": 3,
"text": "u = a(x) + b(x)\\,v,"
},
{
"math_id": 4,
"text": "x(t)"
},
{
"math_id": 5,
"text": "z(t)"
},
{
"math_id": 6,
"text": "u \\in \\mathbb{R}"
},
{
"math_id": 7,
"text": "y \\in \\mathbb{R}"
},
{
"math_id": 8,
"text": "z = T(x)"
},
{
"math_id": 9,
"text": "v \\in \\mathbb{R}"
},
{
"math_id": 10,
"text": "y"
},
{
"math_id": 11,
"text": "(n-1)"
},
{
"math_id": 12,
"text": "\\begin{align}\n\\dot{y} = \\frac{\\mathord{\\operatorname{d}}h(x)}{\\mathord{\\operatorname{d}}t}\n&= \\frac{\\partial h(x)}{\\partial x}\\dot{x}\\\\\n&= \\frac{\\partial h(x)}{\\partial x}f(x) + \\frac{\\partial h(x)}{\\partial x}g(x)u\n\\end{align}"
},
{
"math_id": 13,
"text": "h(x)"
},
{
"math_id": 14,
"text": "f(x)"
},
{
"math_id": 15,
"text": "L_{f}h(x) \\triangleq \\frac{\\partial h(x)}{\\partial x}f(x),"
},
{
"math_id": 16,
"text": "g(x)"
},
{
"math_id": 17,
"text": "L_{g}h(x) \\triangleq \\frac{\\partial h(x)}{\\partial x}g(x)."
},
{
"math_id": 18,
"text": "\\dot{y}"
},
{
"math_id": 19,
"text": "\\dot{y} = L_{f}h(x) + L_{g}h(x)u"
},
{
"math_id": 20,
"text": "L_{f}^{2}h(x) = L_{f}L_{f}h(x) = \\frac{\\partial (L_{f}h(x))}{\\partial x}f(x),"
},
{
"math_id": 21,
"text": "L_{g}L_{f}h(x) = \\frac{\\partial (L_{f}h(x))}{\\partial x}g(x)."
},
{
"math_id": 22,
"text": "u"
},
{
"math_id": 23,
"text": "r \\in \\mathbb{W}"
},
{
"math_id": 24,
"text": "x_0"
},
{
"math_id": 25,
"text": "L_{g}L_{f}^{k}h(x) = 0 \\qquad \\forall x"
},
{
"math_id": 26,
"text": "k \\leq r-2"
},
{
"math_id": 27,
"text": "L_{g}L_{f}^{r-1}h(x_0) \\neq 0"
},
{
"math_id": 28,
"text": "n"
},
{
"math_id": 29,
"text": "\\begin{align}\ny &= h(x)\\\\\n\\dot{y} &= L_{f}h(x)\\\\\n\\ddot{y} &= L_{f}^{2}h(x)\\\\\n&\\vdots\\\\\ny^{(n-1)} &= L_{f}^{n-1}h(x)\\\\\ny^{(n)} &= L_{f}^{n}h(x) + L_{g}L_{f}^{n-1}h(x)u\n\\end{align}"
},
{
"math_id": 30,
"text": "y^{(n)}"
},
{
"math_id": 31,
"text": "L_{g}L_{f}^{i}h(x)"
},
{
"math_id": 32,
"text": "i = 1, \\dots, n-2"
},
{
"math_id": 33,
"text": "T(x)"
},
{
"math_id": 34,
"text": "z = T(x) = \\begin{bmatrix}z_1(x) \\\\\nz_2(x) \\\\\n\\vdots \\\\\nz_n(x)\n\\end{bmatrix}\n= \\begin{bmatrix}y\\\\\n\\dot{y}\\\\\n\\vdots\\\\\ny^{(n-1)}\n\\end{bmatrix}\n= \\begin{bmatrix}h(x) \\\\\nL_{f}h(x) \\\\\n\\vdots \\\\\nL_{f}^{n-1}h(x)\n\\end{bmatrix}"
},
{
"math_id": 35,
"text": "x"
},
{
"math_id": 36,
"text": "z"
},
{
"math_id": 37,
"text": "\\begin{cases}\\dot{z}_1 &= L_{f}h(x) = z_2(x)\\\\\n\\dot{z}_2 &= L_{f}^{2}h(x) = z_3(x)\\\\\n&\\vdots\\\\\n\\dot{z}_n &= L_{f}^{n}h(x) + L_{g}L_{f}^{n-1}h(x)u\\end{cases}."
},
{
"math_id": 38,
"text": "u = \\frac{1}{L_{g}L_{f}^{n-1}h(x)}(-L_{f}^{n}h(x) + v)"
},
{
"math_id": 39,
"text": "v"
},
{
"math_id": 40,
"text": "z_1 = y"
},
{
"math_id": 41,
"text": "\\begin{cases}\\dot{z}_1 &= z_2\\\\\n\\dot{z}_2 &= z_3\\\\\n&\\vdots\\\\\n\\dot{z}_n &= v\\end{cases}"
},
{
"math_id": 42,
"text": "v = -Kz\\qquad,"
},
{
"math_id": 43,
"text": "\\dot{z} = Az"
},
{
"math_id": 44,
"text": "A = \\begin{bmatrix}\n0 & 1 & 0 & \\ldots & 0 \\\\\n0 & 0 & 1 & \\ldots & 0 \\\\\n\\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n0 & 0 & 0 & \\ldots & 1 \\\\\n-k_1 & -k_2 & -k_3 & \\ldots & -k_n\n\\end{bmatrix}."
},
{
"math_id": 45,
"text": "k"
},
{
"math_id": 46,
"text": "\\mathbf{\\mathbf{u}}"
},
{
"math_id": 47,
"text": "m"
},
{
"math_id": 48,
"text": "G = [\\mathbf{g}_1 \\, \\mathbf{g}_2 \\, \\cdots \\, \\mathbf{g}_m]"
},
{
"math_id": 49,
"text": "n\\times m"
},
{
"math_id": 50,
"text": "\\mathbf{g}_j"
},
{
"math_id": 51,
"text": "\\mathbf{u}\\in \\mathbb{R}^m"
},
{
"math_id": 52,
"text": "\\mathbf{y}\\in \\mathbb{R}^m"
},
{
"math_id": 53,
"text": "i"
},
{
"math_id": 54,
"text": "y_i"
},
{
"math_id": 55,
"text": "(r_i-1)"
},
{
"math_id": 56,
"text": "L_{g_j} h_i (\\mathbf{x}) = 0"
},
{
"math_id": 57,
"text": "r_i"
},
{
"math_id": 58,
"text": "v_i"
},
{
"math_id": 59,
"text": "m>1"
},
{
"math_id": 60,
"text": "\\mathbf{u}"
},
{
"math_id": 61,
"text": "y_1,y_2,\\ldots,y_m"
}
] |
https://en.wikipedia.org/wiki?curid=5708736
|
5709055
|
Cichoń's diagram
|
In set theory,
Cichoń's diagram or Cichon's diagram is a table of 10 infinite cardinal numbers related to the set theory of the reals displaying the provable relations between these
cardinal characteristics of the continuum. All these cardinals are greater than or equal to formula_0, the smallest uncountable cardinal, and they are bounded above by formula_1, the cardinality of the continuum. Four cardinals describe properties of the ideal of sets of measure zero; four more describe the corresponding properties of the ideal of meager sets (first category sets).
Definitions.
Let "I" be an ideal of a fixed infinite set "X", containing all finite subsets of "X". We define the following "cardinal coefficients" of "I":
The "additivity" of "I" is the smallest number of sets from "I" whose union is not in "I" any more. As any ideal is closed under finite unions, this number is always at least formula_3; if "I" is a σ-ideal, then add("I") ≥ formula_0.
The "covering number" of "I" is the smallest number of sets from "I" whose union is all of "X". As "X" itself is not in "I", we must have add("I") ≤ cov("I").
The "uniformity number" of "I" (sometimes also written formula_6) is the size of the smallest set not in "I". By our assumption on "I", add("I") ≤ non("I").
The "cofinality" of "I" is the cofinality of the partial order ("I", ⊆). It is easy to see that we must have non("I") ≤ cof("I") and cov("I") ≤ cof("I").
Furthermore, the "bounding number" or "unboundedness number" formula_8 and the "dominating number" formula_9 are defined as follows:
where "formula_12" means: "there are infinitely many natural numbers "n" such that …", and "formula_13" means "for all except finitely many natural numbers "n" we have …".
Diagram.
Let formula_14 be the σ-ideal of those subsets of the real line that are meager (or "of the first category") in the euclidean topology, and let
formula_15 be the σ-ideal of those subsets of the real line that are of Lebesgue measure zero. Then the following inequalities hold:
Where an arrow from formula_16 to formula_17 is to mean that formula_18. In addition, the following relations hold:
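For reference, a commonly given rendering of the diagram (sketched here in LaTeX, since the graphic itself is not reproduced above; it follows the standard presentation in the literature) is

\begin{array}{ccccccccccc}
 & & \operatorname{cov}(\mathcal{L}) & \longrightarrow & \operatorname{non}(\mathcal{B}) & \longrightarrow & \operatorname{cof}(\mathcal{B}) & \longrightarrow & \operatorname{cof}(\mathcal{L}) & \longrightarrow & 2^{\aleph_0} \\
 & & \uparrow & & \uparrow & & \uparrow & & \uparrow & & \\
 & & & & \mathfrak{b} & \longrightarrow & \mathfrak{d} & & & & \\
 & & \uparrow & & \uparrow & & \uparrow & & \uparrow & & \\
\aleph_1 & \longrightarrow & \operatorname{add}(\mathcal{L}) & \longrightarrow & \operatorname{add}(\mathcal{B}) & \longrightarrow & \operatorname{cov}(\mathcal{B}) & \longrightarrow & \operatorname{non}(\mathcal{L}) & & \\
\end{array}

and the two additional ZFC-provable relations usually listed alongside it are \operatorname{add}(\mathcal{B}) = \min\{\mathfrak{b}, \operatorname{cov}(\mathcal{B})\} and \operatorname{cof}(\mathcal{B}) = \max\{\mathfrak{d}, \operatorname{non}(\mathcal{B})\}.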
It turns out that the inequalities described by the diagram, together with the relations mentioned above, are all the relations between these cardinals that are provable in ZFC, in the following limited sense. Let "A" be any assignment of the cardinals formula_0 and formula_19 to the 10 cardinals in Cichoń's diagram. Then if "A" is consistent with the diagram's relations, and if "A" also satisfies the two additional relations, then "A" can be realized in some model of ZFC.
For larger continuum sizes, the situation is less clear. It is consistent with ZFC that all of the cardinals in Cichoń's diagram are simultaneously different, apart from formula_20 and formula_21 (which are equal to other entries).
Some inequalities in the diagram (such as "add ≤ cov") follow immediately from the definitions. The inequalities formula_22 and
formula_23 are classical theorems
and follow from the fact that the real line can be partitioned into a meager set and a set of measure zero.
Remarks.
The British mathematician David Fremlin named the diagram after the Polish mathematician from Wrocław, Jacek Cichoń.
The continuum hypothesis, of formula_1 being equal to formula_0, would make all of these relations equalities.
Martin's axiom, a weakening of the continuum hypothesis, implies that all cardinals in the diagram (except perhaps formula_0) are equal to formula_1.
Similar diagrams can be drawn for cardinal characteristics of higher cardinals formula_24 for formula_24 strongly inaccessible, which assort various cardinals between formula_25 and formula_26.
|
[
{
"math_id": 0,
"text": "\\aleph_1"
},
{
"math_id": 1,
"text": "2^{\\aleph_0}"
},
{
"math_id": 2,
"text": "\\operatorname{add}(I)=\\min\\{|{\\mathcal A}|: {\\mathcal A}\\subseteq I \\wedge \\bigcup{\\mathcal A}\\notin I\\big\\}."
},
{
"math_id": 3,
"text": "\\aleph_0"
},
{
"math_id": 4,
"text": "\\operatorname{cov}(I)=\\min\\{|{\\mathcal A}|:{\\mathcal A}\\subseteq I \\wedge\\bigcup{\\mathcal A} = X\\big\\}."
},
{
"math_id": 5,
"text": "\\operatorname{non}(I)=\\min\\{|\\mathcal{A}|:\\mathcal{A}\\subseteq X\\ \\wedge\\ \\mathcal{A}\\notin I\\big\\},"
},
{
"math_id": 6,
"text": "\\operatorname{unif}(I)"
},
{
"math_id": 7,
"text": "\\operatorname{cof}(I)=\\min\\{|{\\mathcal A}|:{\\mathcal A}\\subseteq I \\wedge (\\forall B\\in I)(\\exists A\\in {\\mathcal A})(B\\subseteq A)\\big\\}."
},
{
"math_id": 8,
"text": "{\\mathfrak b}"
},
{
"math_id": 9,
"text": "{\\mathfrak d}"
},
{
"math_id": 10,
"text": "{\\mathfrak b}=\\min\\big\\{|F|:F\\subseteq{\\mathbb N}^{\\mathbb N}\\ \\wedge\\ (\\forall g\\in {\\mathbb N}^{\\mathbb N})(\\exists f\\in F)(\\exists^\\infty n\\in{\\mathbb N})(g(n)<f(n))\\big\\},"
},
{
"math_id": 11,
"text": "{\\mathfrak d}=\\min\\big\\{|F|:F\\subseteq{\\mathbb N}^{\\mathbb N}\\ \\wedge\\ (\\forall g\\in{\\mathbb N}^{\\mathbb N})(\\exists f\\in F)(\\forall^\\infty n\\in{\\mathbb N})(g(n)<f(n))\\big\\},"
},
{
"math_id": 12,
"text": "\\exists^\\infty n\\in{\\mathbb N}"
},
{
"math_id": 13,
"text": "\\forall^\\infty n\\in{\\mathbb N}"
},
{
"math_id": 14,
"text": "{\\mathcal B}"
},
{
"math_id": 15,
"text": "{\\mathcal L}"
},
{
"math_id": 16,
"text": "x"
},
{
"math_id": 17,
"text": "y"
},
{
"math_id": 18,
"text": "x\\le y"
},
{
"math_id": 19,
"text": "\\aleph_2"
},
{
"math_id": 20,
"text": "\\operatorname{add}({\\mathcal B})"
},
{
"math_id": 21,
"text": "\\operatorname{cof}({\\mathcal B})"
},
{
"math_id": 22,
"text": "\\operatorname{cov}({\\mathcal B}) \\le \\operatorname{non}({\\mathcal L})"
},
{
"math_id": 23,
"text": "\\operatorname{cov}({\\mathcal L}) \\le \\operatorname{non}({\\mathcal B})"
},
{
"math_id": 24,
"text": "\\kappa"
},
{
"math_id": 25,
"text": "\\kappa^+"
},
{
"math_id": 26,
"text": "2^\\kappa"
}
] |
https://en.wikipedia.org/wiki?curid=5709055
|
57090569
|
L1-norm principal component analysis
|
L1-norm principal component analysis (L1-PCA) is a general method for multivariate data analysis.
L1-PCA is often preferred over standard L2-norm principal component analysis (PCA) when the analyzed data may contain outliers (faulty values or corruptions).
Both L1-PCA and standard PCA seek a collection of orthogonal directions (principal components) that define a subspace wherein data representation is maximized according to the selected criterion.
Standard PCA quantifies data representation as the aggregate of the L2-norm of the data point projections into the subspace, or equivalently the aggregate Euclidean distance of the original points from their subspace-projected representations.
L1-PCA uses instead the aggregate of the L1-norm of the data point projections into the subspace. In PCA and L1-PCA, the number of principal components (PCs) is lower than the rank of the analyzed matrix, which coincides with the dimensionality of the space defined by the original data points.
Therefore, PCA or L1-PCA are commonly employed for dimensionality reduction for the purpose of data denoising or compression.
Among the advantages of standard PCA that contributed to its high popularity are low-cost computational implementation by means of singular-value decomposition (SVD) and statistical optimality when the data set is generated by a true multivariate normal data source.
However, in modern big data sets, data often include corrupted, faulty points, commonly referred to as outliers.
Standard PCA is known to be sensitive to outliers, even when they appear as a small fraction of the processed data.
The reason is that the L2-norm formulation of L2-PCA places squared emphasis on the magnitude of each coordinate of each data point, ultimately overemphasizing peripheral points, such as outliers.
On the other hand, following an L1-norm formulation, L1-PCA places linear emphasis on the coordinates of each data point, effectively restraining outliers.
Formulation.
Consider any matrix formula_0 consisting of formula_1 formula_2-dimensional data points. Define formula_3. For integer formula_4 such that formula_5, L1-PCA is formulated as:
For formula_6, (1) simplifies to finding the L1-norm principal component (L1-PC) of formula_7 by
In (1)-(2), the L1-norm formula_8 returns the sum of the absolute values of the entries of its argument, and the L2-norm formula_9 returns the square root of the sum of the squared entries of its argument. If one substitutes formula_8 in (2) by the Frobenius/L2-norm formula_10, then the problem becomes standard PCA and it is solved by the matrix formula_11 that contains the formula_4 dominant singular vectors of formula_7 (i.e., the singular vectors that correspond to the formula_4 highest singular values).
The maximization metric in (2) can be expanded as
Solution.
For any matrix formula_12 with formula_13, define formula_14 as the nearest (in the L2-norm sense) matrix to formula_15 that has orthonormal columns. That is, define
Procrustes Theorem states that if formula_15 has SVD formula_16, then
formula_17.
Markopoulos, Karystinos, and Pados showed that, if formula_18 is the exact solution to the binary nuclear-norm maximization (BNM) problem
then
is the exact solution to L1-PCA in (2). The nuclear-norm formula_19 in (2) returns the summation of the singular values of its matrix argument and can be calculated by means of standard SVD. Moreover, it holds that, given the solution to L1-PCA, formula_20, the solution to BNM can be obtained as
where formula_21 returns the formula_22-sign matrix of its matrix argument (with no loss of generality, we can consider formula_23). In addition, it follows that formula_24. BNM in (5) is a combinatorial problem over antipodal binary variables. Its exact solution can be found through exhaustive evaluation of all formula_25 elements of its feasibility set, with asymptotic cost formula_26. Therefore, L1-PCA can also be solved, through BNM, with cost formula_26 (exponential in the product of the number of data points and the number of sought-after components). It turns out that L1-PCA can be solved optimally (exactly) with polynomial complexity in formula_1 for fixed data dimension formula_2, with cost formula_27.
For the special case of formula_6 (single L1-PC of formula_7), BNM takes the binary-quadratic-maximization (BQM) form
The transition from (5) to (8) for formula_6 holds true, since the unique singular value of formula_28 is equal to formula_29, for every formula_30. Then, if formula_31 is the solution to BQM in (7), it holds that
is the exact L1-PC of formula_7, as defined in (1). In addition, it holds that formula_32 and formula_33.
Algorithms.
Exact solution of exponential complexity.
As shown above, the exact solution to L1-PCA can be obtained by the following two-step process:
1. Solve the problem in (5) to obtain formula_18.
2. Apply SVD on formula_34 to obtain formula_20.
BNM in (5) can be solved by exhaustive search over the domain of formula_35 with cost formula_36.
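Purely as an illustration, a minimal Python/NumPy sketch of this two-step process is shown below (the function name and interface are hypothetical, not from the cited literature); it enumerates all formula_25 antipodal binary matrices and is therefore feasible only for very small formula_1 and formula_4:

import itertools
import numpy as np

def l1_pca_exhaustive(X, K):
    # X: D x N data matrix, K: number of L1 principal components (K < rank(X)).
    D, N = X.shape
    best_val, best_B = -np.inf, None
    # Step 1: exhaustive search for B maximizing the nuclear norm ||X B||_*.
    for bits in itertools.product((-1.0, 1.0), repeat=N * K):
        B = np.reshape(bits, (N, K))
        val = np.linalg.norm(X @ B, ord='nuc')
        if val > best_val:
            best_val, best_B = val, B
    # Step 2: Procrustes step, Q = U V^T from the SVD of X B.
    U, _, Vt = np.linalg.svd(X @ best_B, full_matrices=False)
    return U @ Vt, best_B      # (Q_L1, B_BNM)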
Exact solution of polynomial complexity.
Also, L1-PCA can be solved optimally with cost formula_27, when formula_3 is constant with respect to formula_1 (always true for finite data dimension formula_2).
Approximate efficient solvers.
In 2008, Kwak proposed an iterative algorithm for the approximate solution of L1-PCA for formula_6. This iterative method was later generalized for formula_37 components. Another approximate efficient solver was proposed by McCoy and Tropp by means of semi-definite programming (SDP). Most recently, L1-PCA (and BNM in (5)) were solved efficiently by means of bit-flipping iterations (L1-BF algorithm).
L1-BF algorithm.
1 function L1BF(formula_7, formula_4):
2 Initialize formula_38 and formula_39
3 Set formula_40 and formula_41
4 Until termination (or formula_42 iterations)
5 formula_43, formula_44
6 For formula_45
7 formula_46, formula_47
8 formula_48 "// flip bit"
9 formula_49 "// calculated by SVD or faster (see)"
10 if formula_50
11 formula_51, formula_52
12 formula_53
13 end
14 if formula_54 "// no bit was flipped"
15 if formula_55
16 terminate
17 else
18 formula_56
The computational cost of L1-BF is formula_57.
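The listing above keeps the article's formula placeholders. Purely as an illustration of the same idea, a simplified greedy bit-flipping loop (a Python/NumPy sketch, not the exact L1-BF schedule or its faster nuclear-norm updates) can be written as:

import numpy as np

def l1_pca_bitflip(X, K, max_sweeps=100, seed=0):
    # Greedy single-bit-flipping heuristic for max_B ||X B||_* over B in {+-1}^(N x K).
    rng = np.random.default_rng(seed)
    D, N = X.shape
    B = rng.choice((-1.0, 1.0), size=(N, K))
    best = np.linalg.norm(X @ B, ord='nuc')
    for _ in range(max_sweeps):
        improved = False
        for n in range(N):
            for k in range(K):
                B[n, k] = -B[n, k]                      # tentatively flip one bit
                val = np.linalg.norm(X @ B, ord='nuc')
                if val > best:
                    best, improved = val, True          # keep the improving flip
                else:
                    B[n, k] = -B[n, k]                  # undo the flip
        if not improved:
            break                                       # no bit improved: terminate
    U, _, Vt = np.linalg.svd(X @ B, full_matrices=False)
    return U @ Vt, B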
Complex data.
L1-PCA has also been generalized to process complex data. For complex L1-PCA, two efficient algorithms were proposed in 2018.
Tensor data.
L1-PCA has also been extended for the analysis of tensor data, in the form of L1-Tucker, the L1-norm robust analogous of standard Tucker decomposition. Two algorithms for the solution of L1-Tucker are L1-HOSVD and L1-HOOI.
Code.
MATLAB code for L1-PCA is available at MathWorks.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbf X = [\\mathbf x_1, \\mathbf x_2, \\ldots, \\mathbf x_N] \\in \\mathbb R^{D \\times N}"
},
{
"math_id": 1,
"text": "N"
},
{
"math_id": 2,
"text": "D"
},
{
"math_id": 3,
"text": "r=rank(\\mathbf X)"
},
{
"math_id": 4,
"text": "K"
},
{
"math_id": 5,
"text": "1 \\leq K < r"
},
{
"math_id": 6,
"text": "K=1"
},
{
"math_id": 7,
"text": "\\mathbf X"
},
{
"math_id": 8,
"text": "\\| \\cdot \\|_1"
},
{
"math_id": 9,
"text": "\\| \\cdot \\|_2"
},
{
"math_id": 10,
"text": "\\| \\cdot \\|_F"
},
{
"math_id": 11,
"text": "\\mathbf Q"
},
{
"math_id": 12,
"text": "\\mathbf A \\in \\mathbb R^{m \\times n}"
},
{
"math_id": 13,
"text": "m \\geq n"
},
{
"math_id": 14,
"text": "\\Phi(\\mathbf A)"
},
{
"math_id": 15,
"text": "\\mathbf A"
},
{
"math_id": 16,
"text": "\\mathbf U_{m \\times n} \\boldsymbol \\Sigma_{n \\times n} \\mathbf V_{n \\times n}^\\top"
},
{
"math_id": 17,
"text": "\\Phi(\\mathbf A)=\\mathbf U \\mathbf V^\\top "
},
{
"math_id": 18,
"text": "\\mathbf B_{\\text{BNM}}"
},
{
"math_id": 19,
"text": "\\| \\cdot \\|_*"
},
{
"math_id": 20,
"text": "\\mathbf Q_{\\text{L1}}"
},
{
"math_id": 21,
"text": "\\text{sgn}(\\cdot)"
},
{
"math_id": 22,
"text": "\\{\\pm 1\\}"
},
{
"math_id": 23,
"text": "\\text{sgn}(0)=1"
},
{
"math_id": 24,
"text": " \\| \\mathbf X^\\top \\mathbf Q_{\\text{L1}}\\|_1 = \\| \\mathbf X \\mathbf B_{\\text{BNM}}\\|_*"
},
{
"math_id": 25,
"text": "2^{NK}"
},
{
"math_id": 26,
"text": "\\mathcal O(2^{NK})"
},
{
"math_id": 27,
"text": "\\mathcal{O}(N^{rK-K+1})"
},
{
"math_id": 28,
"text": "\\mathbf X \\mathbf b"
},
{
"math_id": 29,
"text": "\\| \\mathbf X \\mathbf b\\|_2 = \\sqrt{\\mathbf b^\\top \\mathbf X^\\top \\mathbf X \\mathbf b}"
},
{
"math_id": 30,
"text": "\\mathbf b "
},
{
"math_id": 31,
"text": "\\mathbf b_{\\text{BNM}}"
},
{
"math_id": 32,
"text": "\\mathbf b_{\\text{BNM}} = \\text{sgn}(\\mathbf X^\\top \\mathbf q_{\\text{L1}})"
},
{
"math_id": 33,
"text": " \\| \\mathbf X^\\top \\mathbf q_{\\text{L1}}\\|_1 = \\| \\mathbf X \\mathbf b_{\\text{BNM}}\\|_2"
},
{
"math_id": 34,
"text": "\\mathbf X\\mathbf B_{\\text{BNM}}"
},
{
"math_id": 35,
"text": "\\mathbf B"
},
{
"math_id": 36,
"text": "\\mathcal{O}(2^{NK})"
},
{
"math_id": 37,
"text": "K>1"
},
{
"math_id": 38,
"text": "\\mathbf B^{(0)} \\in \\{\\pm 1\\}^{N \\times K}"
},
{
"math_id": 39,
"text": "\\mathcal L \\leftarrow \\{1,2,\\ldots, NK\\}"
},
{
"math_id": 40,
"text": "t \\leftarrow 0"
},
{
"math_id": 41,
"text": "\\omega \\leftarrow \\| \\mathbf X \\mathbf B^{(0)} \\|_*"
},
{
"math_id": 42,
"text": "T"
},
{
"math_id": 43,
"text": "\\mathbf B \\leftarrow \\mathbf B^{(t)}"
},
{
"math_id": 44,
"text": "t' \\leftarrow t"
},
{
"math_id": 45,
"text": "x \\in \\mathcal L"
},
{
"math_id": 46,
"text": "k \\leftarrow \\lceil \\frac{x}{N} \\rceil"
},
{
"math_id": 47,
"text": "n \\leftarrow x-N(k-1)"
},
{
"math_id": 48,
"text": "[\\mathbf B]_{n,k} \\leftarrow - [\\mathbf B]_{n,k}"
},
{
"math_id": 49,
"text": "a(n,k) \\leftarrow \\| \\mathbf X \\mathbf B \\|_*"
},
{
"math_id": 50,
"text": "a(n,k)>\\omega"
},
{
"math_id": 51,
"text": "\\mathbf B^{(t)} \\leftarrow \\mathbf B"
},
{
"math_id": 52,
"text": "t' \\leftarrow t+1"
},
{
"math_id": 53,
"text": "\\omega \\leftarrow a(n,k)"
},
{
"math_id": 54,
"text": "t'=t"
},
{
"math_id": 55,
"text": "\\mathcal L = \\{1,2, \\ldots, NK\\}"
},
{
"math_id": 56,
"text": "\\mathcal L \\leftarrow \\{1,2, \\ldots, NK\\}"
},
{
"math_id": 57,
"text": "\\mathcal O (ND min\\{N,D\\} + N^2K^2(K^2 + r))"
}
] |
https://en.wikipedia.org/wiki?curid=57090569
|
57091271
|
Three-gap theorem
|
On distances between points on a circle
In mathematics, the three-gap theorem, three-distance theorem, or Steinhaus conjecture states that if one places n points on a circle, at angles of "θ", 2"θ", 3"θ", ... from the starting point, then there will be at most three distinct distances between pairs of points in adjacent positions around the circle. When there are three distances, the largest of the three always equals the sum of the other two. Unless "θ" is a rational multiple of π, there will also be at least two distinct distances.
This result was conjectured by Hugo Steinhaus, and proved in the 1950s by Vera T. Sós, János Surányi, and Stanisław Świerczkowski; more proofs were added by others later. Applications of the three-gap theorem include the study of plant growth and musical tuning systems, and the theory of light reflection within a mirrored square.
Statement.
The three-gap theorem can be stated geometrically in terms of points on a circle. In this form, it states that if one places formula_0 points on a circle, at angles of formula_1 from the starting point, then there will be at most three distinct distances between pairs of points in adjacent positions around the circle. An equivalent and more algebraic form involves the fractional parts of multiples of a real number. It states that, for any positive real number formula_2 and integer formula_0, the fractional parts of the numbers formula_3 divide the unit interval into subintervals with at most three different lengths. The two problems are equivalent under a linear correspondence between the unit interval and the circumference of the circle, and a correspondence between the real number formula_2 and the angle formula_4.
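The algebraic form of the statement is easy to check numerically. The following Python sketch (illustrative only; the names are hypothetical) computes the gap lengths of the fractional parts of formula_3 and merges values that agree up to a tolerance:

import numpy as np

def circle_gaps(alpha, n, tol=1e-9):
    # Sorted fractional parts of alpha, 2*alpha, ..., n*alpha on the unit circle.
    pts = np.sort(np.mod(alpha * np.arange(1, n + 1), 1.0))
    gaps = np.diff(np.append(pts, pts[0] + 1.0))   # include the wrap-around gap
    distinct = []
    for g in np.sort(gaps):                        # merge gaps equal up to tol
        if not distinct or g - distinct[-1] > tol:
            distinct.append(float(g))
    return distinct

# e.g. circle_gaps(np.sqrt(2), 20) has at most three values, and when there are
# three the largest equals the sum of the other two (up to rounding error).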
Applications.
Plant growth.
In the study of phyllotaxis, the arrangements of leaves on plant stems, it has been observed that each successive leaf on the stems of many plants is turned from the previous leaf by the golden angle, approximately 137.5°. It has been suggested that this angle maximizes the sun-collecting power of the plant's leaves. If one looks end-on at a plant stem that has grown in this way, there will be at most three distinct angles between two leaves that are consecutive in the cyclic order given by this end-on view.
For example, in the figure, the largest of these three angles occurs three times, between the leaves numbered 3 and 6, between leaves 4 and 7, and between leaves 5 and 8. The second-largest angle occurs five times, between leaves 6 and 1, 9 and 4, 7 and 2, 10 and 5, and 8 and 3. And the smallest angle occurs only twice, between leaves 1 and 9 and between leaves 2 and 10. The phenomenon of having three types of distinct gaps depends only on the fact that the growth pattern uses a constant rotation angle, and not on the relation of this angle to the golden ratio; the same phenomenon would happen for any other rotation angle, and not just for the golden angle. However, other properties of this growth pattern do depend on the golden ratio. For instance, the fact that the golden ratio is a badly approximable number implies that points spaced at this angle along the Fermat spiral (as they are in some models of plant growth) form a Delone set; intuitively, this means that they are uniformly spaced.
Music theory.
In music theory, a musical interval describes the ratio in frequency between two musical tones. Intervals are commonly considered consonant or harmonious when they are the ratio of two small integers; for instance, the octave corresponds to the ratio 2:1, while the perfect fifth corresponds to the ratio 3:2. Two tones are commonly considered to be equivalent when they differ by a whole number of octaves; this equivalence can be represented geometrically by the chromatic circle, the points of which represent classes of equivalent tones. Mathematically, this circle can be described as the unit circle in the complex plane, and the point on this circle that represents a given tone can be obtained by mapping the frequency formula_5 to the complex number formula_6. An interval with ratio formula_7 corresponds to the angle formula_8 between points on this circle, meaning that two musical tones differ by the given interval when their two points on the circle differ by this angle. For instance, this formula gives formula_9 (a whole circle) as the angle corresponding to an octave. Because 3/2 is not a rational power of two, the angle on the chromatic circle that represents a perfect fifth is not a rational multiple of formula_9, and similarly other common musical intervals other than the octave do not correspond to rational angles.
A tuning system is a collection of tones used to compose and play music. For instance, the equal temperament commonly used for the piano is a tuning system, consisting of 12 tones equally spaced around the chromatic circle. Some other tuning systems do not space their tones equally, but instead generate them by some number of consecutive multiples of a given interval. An example is the Pythagorean tuning, which is constructed in this way from twelve tones, generated as the consecutive multiples of a perfect fifth in the circle of fifths. The irrational angle formed on the chromatic circle by a perfect fifth is close to 7/12 of a circle, and therefore the twelve tones of the Pythagorean tuning are close to, but not the same as, the twelve tones of equal temperament, which could be generated in the same way using an angle of exactly 7/12 of a circle. Instead of being spaced at angles of exactly 1/12 of a circle, as the tones of equal temperament would be, the tones of the Pythagorean tuning are separated by intervals of two different angles, close to but not exactly 1/12 of a circle, representing two different types of semitones. If the Pythagorean tuning system were extended by one more perfect fifth, to a set of 13 tones, then the sequence of intervals between its tones would include a third, much shorter interval, the Pythagorean comma.
In this context, the three-gap theorem can be used to describe any tuning system that is generated in this way by consecutive multiples of a single interval. Some of these tuning systems (like equal temperament) may have only one interval separating the closest pairs of tones, and some (like the Pythagorean tuning) may have only two different intervals separating the tones, but the three-gap theorem implies that there are always at most three different intervals separating the tones.
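As a quick numerical illustration of the tuning-system case (a sketch, not taken from the cited sources), the following Python lines generate twelve Pythagorean tones as consecutive multiples of a perfect fifth on the chromatic circle and list the distinct gaps between adjacent tones:

import math

fifth = math.log2(3 / 2)                       # fraction of the circle for a perfect fifth
tones = sorted((k * fifth) % 1.0 for k in range(12))
gaps = [b - a for a, b in zip(tones, tones[1:])]
gaps.append(1.0 + tones[0] - tones[-1])        # wrap-around gap
distinct = sorted({round(g, 9) for g in gaps})
# distinct contains two values, about 0.0752 and 0.0947 of a circle: the two
# Pythagorean semitones. Extending to 13 tones adds a third, much smaller gap
# (the Pythagorean comma), consistent with the three-gap theorem.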
Mirrored reflection.
A Sturmian word is an infinite sequence of two symbols (for instance, "H" and "V") describing the sequence of horizontal and vertical reflections of a light ray within a mirrored square, starting along a line of irrational slope. Equivalently, the same sequence describes the sequence of horizontal and vertical lines of the integer grid that are crossed by the starting line. One property that all such sequences have is that, for any positive integer n, the sequence has exactly "n" + 1 distinct consecutive subsequences of length n. Each subsequence occurs infinitely often with a certain frequency, and the three-gap theorem implies that these "n" + 1 subsequences occur with at most three distinct frequencies. If there are three frequencies, then the largest frequency must equal the sum of the other two. One proof of this result involves partitioning the y-intercepts of the starting lines (modulo 1) into "n" + 1 subintervals within which the initial n elements of the sequence are the same, and applying the three-gap theorem to this partition.
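As a small illustration (a hedged sketch; the 0/1 coding below is one standard convention for Sturmian words and only one of several ways to label the reflection sequence), the following Python lines generate a prefix of the Sturmian word of irrational slope and count its distinct length-n factors:

import math

def sturmian_prefix(alpha, length):
    # Binary Sturmian word of slope alpha in (0,1): s_i = floor((i+1)*alpha) - floor(i*alpha).
    return [math.floor((i + 1) * alpha) - math.floor(i * alpha) for i in range(length)]

def factor_count(word, n):
    # Number of distinct contiguous subsequences of length n.
    return len({tuple(word[i:i + n]) for i in range(len(word) - n + 1)})

# For a sufficiently long prefix, factor_count(sturmian_prefix(math.sqrt(2) - 1, 10000), n)
# equals n + 1, matching the "n + 1 distinct subsequences" property quoted above.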
History and proof.
The three-gap theorem was conjectured by Hugo Steinhaus, and its first proofs were found in the late 1950s by Vera T. Sós, János Surányi, and Stanisław Świerczkowski. Later researchers published additional proofs, generalizing this result to higher dimensions, and connecting it to topics including continued fractions, symmetries and geodesics of Riemannian manifolds, ergodic theory, and the space of planar lattices. A proof has also been formalized using the Coq interactive theorem prover.
The following simple proof is due to Frank Liang. Let "θ" be the rotation angle generating a set of points as some number of consecutive multiples of "θ" on a circle. Define a "gap" to be an arc A of the circle that extends between two adjacent points of the given set, and define a gap to be "rigid" if its endpoints occur later in the sequence of multiples of "θ" than any other gap of the same length. From this definition, it follows that every gap has the same length as a rigid gap. If A is a rigid gap, then "A" + "θ" is not a gap, because it has the same length and would be one step later.
The only ways for this to happen are for one of the endpoints of A to be the last point in the sequence of multiples of "θ" (so that the corresponding endpoint of "A" + "θ" is missing) or for one of the given points to land within "A" + "θ", preventing it from being a gap. A point can only land within "A" + "θ" if it is the first point in the sequence of multiples of "θ", because otherwise its predecessor in the sequence would land within A, contradicting the assumption that A is a gap. So there can be at most three rigid gaps, the two on either side of the last point and the one in which the predecessor of the first point (if it were part of the sequence) would land. Because there are at most three rigid gaps, there are at most three lengths of gaps.
Related results.
Liang's proof additionally shows that, when there are exactly three gap lengths, the longest gap length is the sum of the other two. For, in this case, the rotated copy "A" + "θ" that has the first point in it is partitioned by that point into two smaller gaps, which must be the other two gaps. Liang also proves a more general result, the "formula_10 distance theorem", according to which the union of formula_11 different arithmetic progressions on a circle has at most formula_10 different gap lengths. In the three-gap theorem, there is a constant bound on the ratios between the three gaps, if and only if "θ"/2π is a badly approximable number.
A closely related but earlier theorem, also called the three-gap theorem, is that if A is any arc of the circle, then the integer sequence of multiples of "θ" that land in A has at most three lengths of gaps between sequence values. Again, if there are three gap lengths then one is the sum of the other two.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "\\theta, 2\\theta, \\dots, n\\theta"
},
{
"math_id": 2,
"text": "\\alpha"
},
{
"math_id": 3,
"text": "\\alpha, 2\\alpha, \\dots, n\\alpha"
},
{
"math_id": 4,
"text": "\\theta=2\\pi\\alpha"
},
{
"math_id": 5,
"text": "\\nu"
},
{
"math_id": 6,
"text": "\\exp(2\\pi i\\log_2\\nu)"
},
{
"math_id": 7,
"text": "\\rho"
},
{
"math_id": 8,
"text": "2\\pi\\log_2\\rho"
},
{
"math_id": 9,
"text": "2\\pi"
},
{
"math_id": 10,
"text": "3d"
},
{
"math_id": 11,
"text": "d"
}
] |
https://en.wikipedia.org/wiki?curid=57091271
|
570922
|
Action at a distance
|
Concept in physics
In physics, action at a distance is the concept that an object's motion can be affected by another object without being in physical contact with it; that is, the non-local interaction of objects that are separated in space. Coulomb's law and Newton's law of universal gravitation are based on action at a distance.
Historically, action at a distance was the earliest scientific model for gravity and electricity and it continues to be useful in many practical cases. In the 19th and 20th centuries, field models arose to explain these phenomena with more precision. The discovery of electrons and of special relativity led to new action at a distance models providing alternatives to field theories. Under our modern understanding, the four fundamental interactions (gravity, electromagnetism, the strong interaction and the weak interaction) in all of physics are not described by action at a distance.
Categories of action.
In the study of mechanics, action at a distance is one of three fundamental actions on matter that cause motion. The other two are direct impact (elastic or inelastic collisions) and actions in a continuous medium as in fluid mechanics or solid mechanics.
Historically, physical explanations for particular phenomena have moved between these three categories over time as new models were developed.
Action-at-a-distance and actions in a continuous medium may be easily distinguished when the medium dynamics are visible, like waves in water or in an elastic solid. In the case of electricity or gravity, no medium is required. In the nineteenth century, criteria like the effect of actions on intervening matter, the observation of a time delay, the apparent storage of energy, or even the possibility of a plausible mechanical model for action transmission were all accepted as evidence against action at a distance. Aether theories were alternative proposals to replace apparent action-at-a-distance in gravity and electromagnetism, in terms of continuous action inside an (invisible) medium called "aether".
Direct impact of macroscopic objects seems visually distinguishable from action at a distance. If however the objects are constructed of atoms, and the volume of those atoms is not defined and atoms interact by electric and magnetic forces, the distinction is less clear.
Roles.
The concept of action at a distance acts in multiple roles in physics and it can co-exist with other models according to the needs of each physical problem.
One role is as a summary of physical phenomena, independent of any understanding of the cause of such an action. For example, astronomical tables of planetary positions can be compactly summarized using Newton's law of universal gravitation, which assumes the planets interact without contact or an intervening medium. As a summary of data, the concept does not need to be evaluated as a plausible physical model.
Action at a distance also acts as a model explaining physical phenomena even in the presence of other models. Again in the case of gravity, hypothesizing an instantaneous force between masses allows the return time of comets to be predicted as well as predicting the existence of previously unknown planets, like Neptune. These triumphs of physics predated the alternative more accurate model for gravity based on general relativity by many decades.
Introductory physics textbooks discuss central forces, like gravity, by models based on action-at-distance without discussing the cause of such forces or issues with it until the topics of relativity and fields are discussed. For example, see "The Feynman Lectures on Physics" on gravity.
History.
Early inquiries into motion.
Action-at-a-distance as a physical concept requires identifying objects, distances, and their motion. In antiquity, ideas about the natural world were not organized in these terms. Objects in motion were modeled as living beings. Around 1600, the scientific method began to take root. René Descartes held a more fundamental view, developing ideas of matter and action independent of theology. Galileo Galilei wrote about experimental measurements of falling and rolling objects. Johannes Kepler's laws of planetary motion summarized Tycho Brahe's astronomical observations. Many experiments with electrical and magnetic materials led to new ideas about forces. These efforts set the stage for Newton's work on forces and gravity.
Newtonian gravity.
In 1687 Isaac Newton published his "Principia" which combined his laws of motion with a new mathematical analysis able to reproduce Kepler's empirical results. His explanation was in the form of a law of universal gravitation: any two bodies are attracted by a force proportional to their mass and inversely proportional to the square of the distance between them. Thus the motions of planets were predicted by assuming forces working over great distances.
This mathematical expression of the force did not imply a cause. Newton considered action-at-a-distance to be an inadequate model for gravity. Newton, in his words, considered action at a distance to be:
<templatestyles src="Template:Blockquote/styles.css" />Isaac Newton
Metaphysical scientists of the early 1700s strongly objected to the unexplained action-at-a-distance in Newton's theory. Gottfried Wilhelm Leibniz complained that the mechanism of gravity was "invisible, intangible, and not mechanical". Moreover, initial comparisons with astronomical data were not favorable. As mathematical techniques improved throughout the 1700s, the theory showed increasing success, predicting the date of the return of Halley's comet and aiding the discovery of planet Neptune in 1846. These successes and the increasingly empirical focus of science towards the 19th century led to acceptance of Newton's theory of gravity despite distaste for action-at-a-distance.
Electrical action at a distance.
Electrical and magnetic phenomena also began to be explored systematically in the early 1600s. In William Gilbert's early theory of "electric effluvia," a kind of electric atmosphere, he ruled out action-at-a-distance on the grounds that "no action can be performed by matter save by contact".
However, subsequent experiments, especially those by Stephen Gray, showed electrical effects over distance. Gray developed an experiment called the "electric boy" demonstrating electric transfer without direct contact.
Franz Aepinus was the first to show, in 1759, that a theory of action at a distance for electricity provides a simpler replacement for the electric effluvia theory. Despite this success, Aepinus himself considered the nature of the forces to be unexplained: he did "not approve of the doctrine which assumes the possibility of action at a distance", setting the stage for a shift to theories based on aether.
By 1785 Charles-Augustin de Coulomb showed that two electric charges at rest experience a force inversely proportional to the square of the distance between them, a result now called Coulomb's law. The striking similarity to gravity strengthened the case for action at a distance, at least as a mathematical model.
As mathematical methods improved, especially through the work of Pierre-Simon Laplace, Joseph-Louis Lagrange, and Siméon Denis Poisson, more sophisticated mathematical methods began to influence the thinking of scientists. The concept of potential energy applied to small test particles led to the concept of a scalar field, a mathematical model representing the forces throughout space. While this mathematical model is not a mechanical medium, the mental picture of such a field resembles a medium.
Fields as an alternative.
It was Michael Faraday who first suggested that action at a distance, even in the form of a (mathematical) potential field, was inadequate as an account of electric and magnetic forces. Faraday, an empirical experimentalist, cited three reasons in support of some medium transmitting electrical force: 1) electrostatic induction across an insulator depends on the nature of the insulator, 2) cutting a charged insulator causes opposite charges to appear on each half, and 3) electric discharge sparks are curved at an insulator. From these reasons he concluded that the particles of an insulator must be polarized, with each particle contributing to continuous action. He also experimented with magnets, demonstrating lines of force made visible by iron filings. However, in both cases his field-like model depends on particles that interact through an action-at-a-distance: his mechanical field-like model has no more fundamental physical cause than the long-range central field model.
Faraday's observations, as well as those of others, led James Clerk Maxwell to a breakthrough formulation in 1865, a set of equations that combined electricity and magnetism, both static and dynamic, and which included electromagnetic radiation – light. Maxwell started with elaborate mechanical models but ultimately produced a purely mathematical treatment using dynamical vector fields. The sense that these fields must be set to vibrate to propagate light set off a search for a medium of propagation; the medium was called the luminiferous aether or the aether.
In 1873 Maxwell addressed action at a distance explicitly. He reviews Faraday's lines of force, carefully pointing out that Faraday himself did not provide a mechanical model of these lines in terms of a medium. Nevertheless the many properties of these lines of force imply these "lines must not be regarded as mere mathematical abstractions". Faraday himself viewed these lines of force as a model, a "valuable aid" to the experimentalist, a means to suggest further experiments.
In distinguishing between different kinds of action, Faraday suggests three criteria: 1) do additional material objects alter the action? 2) does the action take time? and 3) does it depend upon the receiving end? For electricity, Faraday knew that all three criteria were met for electric action, but gravity was thought to meet only the third one. After Maxwell's time a fourth criterion, the transmission of energy, was added, thought to also apply to electricity but not gravity. With the advent of new theories of gravity, the modern account would give gravity all of the criteria except dependence on additional objects.
Fields fade into spacetime.
The success of Maxwell's field equations led to numerous efforts in the later decades of the 19th century to represent electrical, magnetic, and gravitational fields, primarily with mechanical models. No model emerged that explained the existing phenomena; in particular, there was no good model for stellar aberration, the shift in the position of stars with the Earth's relative velocity. The best models required the ether to be stationary while the Earth moved, but experimental efforts to measure the effect of Earth's motion through the aether found no effect.
In 1892 Hendrik Lorentz proposed a modified aether based on the emerging microscopic molecular model rather than the strictly macroscopic continuous theory of Maxwell. Lorentz investigated the mutual interaction of moving solitary electrons within a stationary aether. He rederived Maxwell's equations in this way but, critically, in the process he changed them to represent the wave in the coordinates of the moving electrons. He showed that the wave equations had the same form if they were transformed using a particular scaling factor,
formula_0
where formula_1 is the velocity of the moving electrons and formula_2 is the speed of light. Lorentz noted that if this factor were applied as a length contraction to moving matter in a stationary ether, it would eliminate any effect of motion through the ether, in agreement with experiment.
In 1899, Henri Poincaré questioned the existence of an aether, showing that the principle of relativity prohibits the absolute motion assumed by proponents of the aether model. He named the transformation used by Lorentz the Lorentz transformation but interpreted it as a transformation between two inertial frames with relative velocity formula_1. This transformation makes the electromagnetic equations look the same in every uniformly moving inertial frame. Then, in 1905, Albert Einstein demonstrated that the principle of relativity, applied to the simultaneity of time and the constant speed of light, precisely predicts the Lorentz transformation. This theory of special relativity quickly became the modern concept of spacetime.
Thus the aether model, initially so very different from action at a distance, slowly changed to
resemble simple empty space.
In 1905, Poincaré proposed gravitational waves, emanating from a body and propagating at the speed of light, as being required by the Lorentz transformations and suggested that, in analogy to an accelerating electrical charge producing electromagnetic waves, accelerated masses in a relativistic field theory of gravity should produce gravitational waves. However, until 1915 gravity stood apart as a force still described by action-at-a-distance. In that year, Einstein showed that general relativity, a field theory of spacetime consistent with the principle of relativity, can explain gravity. New effects resulting from this theory were dramatic for cosmology but minor for planetary motion and physics on Earth.
Einstein himself noted Newton's "enormous practical success".
Modern action at a distance.
In the early decades of the 20th century Karl Schwarzschild, Hugo Tetrode, and Adriaan Fokker independently developed non-instantaneous models for action at a distance consistent with special relativity. In 1949 John Archibald Wheeler and Richard Feynman built on these models to develop a new field-free theory of electromagnetism.
While Maxwell's field equations are generally successful, the Lorentz model of a moving electron interacting with the field encounters mathematical difficulties: the self-energy of the moving point charge within the field is infinite. The Wheeler–Feynman absorber theory of electromagnetism avoids the self-energy issue. They interpret the Abraham–Lorentz force, the apparent force resisting electron acceleration, as a real force returning from all the other existing charges in the universe.
The Wheeler–Feynman theory has inspired new thinking about the arrow of time and about the nature of quantum non-locality. The theory has implications for cosmology; it has been extended to quantum mechanics. A similar approach has been applied to develop an alternative theory of gravity consistent with general relativity. John G. Cramer has extended the Wheeler–Feynman ideas to create the transactional interpretation of quantum mechanics.
"Spooky action at a distance".
Einstein wrote to Max Born about issues in quantum mechanics in 1947 and
used a phrase translated as "spooky action at a distance". The phrase has been picked up and used as a description for the cause of small non-classical correlations between physically separated measurement of entangled quantum states. The correlations are predicted by quantum mechanics and verified by experiments. Rather than a postulate like Newton's gravitational force, this use of "action-at-a-distance" concerns observed correlations which are not easy to explain within simple interpretations of quantum mechanics.
Force in quantum field theory.
Quantum field theory does not need action at a distance. At the most fundamental level only four forces are needed and each is described as resulting from the exchange of specific bosons. Two are short range: the strong interaction mediated by mesons and the weak interaction mediated by the weak boson; two are long range: electromagnetism mediated by the photon and gravity hypothesized to be mediated by the graviton. However, the entire concept of force is of secondary concern in advanced modern particle physics. Energy forms the basis of physical models and the word action has shifted away from implying a force to a specific technical meaning, an integral over the difference between potential energy and kinetic energy.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\gamma = \\frac{1}{\\sqrt{1-(u^2/c^2)}}."
},
{
"math_id": 1,
"text": "u"
},
{
"math_id": 2,
"text": "c"
}
] |
https://en.wikipedia.org/wiki?curid=570922
|
5709504
|
Kenneth Kunen
|
American mathematician (1943–2020)
Herbert Kenneth Kunen (August 2, 1943 – August 14, 2020) was a professor of mathematics at the University of Wisconsin–Madison who worked in set theory and its applications to various areas of mathematics, such as set-theoretic topology and measure theory. He also worked on non-associative algebraic systems, such as loops, and used computer software, such as the Otter theorem prover, to derive theorems in these areas.
Personal life.
Kunen was born in New York City in 1943 and died in 2020. He lived in Madison, Wisconsin, with his wife Anne, with whom he had two sons, Isaac and Adam.
Education.
Kunen completed his undergraduate degree at the California Institute of Technology and received his Ph.D. in 1968 from Stanford University, where he was supervised by Dana Scott.
Career and research.
Kunen showed that if there exists a nontrivial elementary embedding "j" : "L" → "L" of the constructible universe, then 0# exists.
He proved the consistency of a normal, formula_0-saturated ideal on formula_1 from the consistency of the existence of a huge cardinal. He introduced the method of iterated ultrapowers, with which he proved that if formula_2 is a measurable cardinal with formula_3 or formula_2 is a strongly compact cardinal then there is an inner model of set theory with formula_2 many measurable cardinals. He proved Kunen's inconsistency theorem showing the impossibility of a nontrivial elementary embedding formula_4, which had been suggested as a large cardinal assumption (a Reinhardt cardinal).
Away from the area of large cardinals, Kunen is known for intricate forcing and combinatorial constructions. He proved that it is consistent that Martin's axiom first fails at a singular cardinal and constructed under the continuum hypothesis a compact L-space supporting a nonseparable measure. He also showed that formula_5 has no increasing chain of length formula_6 in the standard Cohen model
where the continuum is formula_0. The concept of a Jech–Kunen tree is named after him and Thomas Jech.
Bibliography.
The journal "Topology and its Applications" has dedicated a special issue to "Ken" Kunen, containing a biography by Arnold W. Miller, and surveys about Kunen's research in various fields by Mary Ellen Rudin, Akihiro Kanamori, István Juhász, Jan van Mill, Dikran Dikranjan, and Michael Kinyon.
|
[
{
"math_id": 0,
"text": "\\aleph_2"
},
{
"math_id": 1,
"text": "\\aleph_1"
},
{
"math_id": 2,
"text": "\\kappa"
},
{
"math_id": 3,
"text": "2^\\kappa>\\kappa^+"
},
{
"math_id": 4,
"text": "V\\to V"
},
{
"math_id": 5,
"text": "P(\\omega)/Fin"
},
{
"math_id": 6,
"text": "\\omega_2 "
}
] |
https://en.wikipedia.org/wiki?curid=5709504
|
57095040
|
Jun Ishiwara
|
Japanese theoretical physicist
Jun Ishiwara or Atsushi Ishihara (石原 純; January 15, 1881 – January 19, 1947) was a Japanese theoretical physicist, known for his works on the electronic theory of metals, the theory of relativity and quantum theory. Being the only Japanese scientist who made an original contribution to the old quantum theory, in 1915, independently of other scientists, he formulated quantization rules for systems with several degrees of freedom.
Biography.
Jun Ishiwara was born in the family of Christian priest Ryo Ishiwara and Chise Ishiwara. In 1906, he completed his studies at the Department of Theoretical Physics at the University of Tokyo, where he was a student of Hantaro Nagaoka. Since 1908, Ishiwara taught at the Army School of Artillery and Engineers, and in 1911 received the position of Assistant Professor at the College of Science of Tohoku University. From April 1912 to May 1914 he trained in Europe – at the University of Munich, ETH Zurich and Leiden University, where he worked with Arnold Sommerfeld and Albert Einstein. After returning to his homeland, Ishiwara received a post of professor at Tohoku University, and in 1919 for his scientific work was awarded the Imperial Prize of the Japan Academy.
From 1918, Ishiwara's scientific activity began to decline. In 1921, because of a love affair, he was forced to take leave from the university, and two years later finally retired. After retirement, he devoted himself mainly to writing and scientific journalism (in this area he was one of the pioneers in Japan), authoring many popular books and articles on the latest achievements of science. At the end of 1922, Ishiwara hosted Einstein during his visit to Japan; he recorded and published a number of speeches by Einstein, including his Kyoto address, in which Einstein, for the first time, detailed his path to the creation of the theory of relativity. The two-volume monograph of Ishiwara, titled "Fundamental Problems of Physics", was very popular among young scientists and specialists; he also edited the first complete collection of Einstein's works, published in a Japanese translation in 1922-1924. In addition, Ishiwara was known as a poet who wrote poems in the genre of tanka. Shortly before the outbreak of World War II, he criticized government control over science.
Scientific achievements.
Theory of relativity.
Ishiwara was one of the first Japanese scholars to turn to the theory of relativity; he wrote the first scientific article in Japan on this subject. In 1909-1911, he studied within the framework of this theory a number of specific problems related to the dynamics of electrons, the propagation of light in moving objects and the calculation of the energy-momentum tensor of the electromagnetic field. In 1913, on the basis of the principle of least action, he derived an expression for this tensor, previously obtained by Hermann Minkowski. Ishiwara took part in the discussions of the first half of the 1910s (the decade from 1910 to 1919) which preceded the creation of the general theory of relativity. Starting from the scalar theory of gravitation proposed by Max Abraham and using the then popular idea of the electromagnetic origin of matter, the Japanese physicist developed his own theory, in which he attempted to unify the electromagnetic and gravitational fields, or, more precisely, to deduce the latter from the former. Assuming that the speed of light is variable and rewriting Maxwell's equations accordingly, he showed that such a representation leads to the appearance of additional terms in the energy-momentum conservation law that can be treated as a gravitational contribution. The result was in agreement with Abraham's theory, but subsequently Ishiwara developed his theory in another direction trying to harmonize it with the theory of relativity. The scientist also made attempts to build a five-dimensional theory for unification of the gravitational and electromagnetic fields.
Quantum physics.
In the first paper devoted to the problems of quantum physics (1911), Ishiwara derived Planck's law and tried to substantiate the wave properties of radiation on the basis of the assumption that it consists of light quanta. Thus, he anticipated certain ideas of Louis de Broglie and Satyendra Nath Bose. In the same year, he supported the hypothesis of light quanta as a possible explanation of the nature of X-rays and gamma rays.
In 1915, Ishiwara became the first non-Western scientist who referred to the Bohr atom theory in a published work. On April 4, 1915, he presented to the Tokyo Mathematico-Physical Society the article "The universal meaning of the quantum of action" ("Universelle Bedeutung des Wirkungsquantums"), in which he attempted to unite the ideas of Max Planck on elementary cells in phase space, the idea of quantizing the angular momentum in the Bohr model atom and the hypothesis of Arnold Sommerfeld about the change of the action integral in quantum processes. Ishiwara suggested that the motion of a quantum system having formula_0 degrees of freedom should satisfy the following average relationship between the values of the coordinates (formula_1) and the corresponding momenta (formula_2): formula_3, where formula_4 is the Planck constant. Ishiwara showed that this new hypothesis can be used to reproduce some quantum effects known at that time. Thus, he succeeded in obtaining an expression for the quantization of the angular momentum in the Bohr atom, also taking into account the ellipticity of electron orbits, although his theory required taking the charge of the nucleus of the hydrogen atom to be equal to two elementary charges. As a second application of the proposed hypothesis, Ishiwara considered the problem of the photoelectric effect, obtaining a linear relationship between the electron energy and the radiation frequency in accordance with the Einstein formula. Later in the same year Ishiwara put forward another hypothesis, according to which the product of the energy of the atom and the period of electron motion in the stationary state should be equal to an integer number of Planck constants. In 1918, he linked the postulate proposed three years earlier to the theory of adiabatic invariants.
Around the same time, analogous rules for quantizing systems of many degrees of freedom were independently obtained by William Wilson and Sommerfeld and are usually called the Sommerfeld quantum conditions. The reason for Ishiwara's error, which manifested itself in the calculation of the hydrogen atom, apparently was a superfluous averaging over the number of degrees of freedom (dividing by formula_0 before the sum). At the same time, his quantum condition, which differed from Sommerfeld's in the presence of a summation, made it possible to obtain correct results regardless of the choice of coordinates. This was pointed out in 1917 by Einstein, who, not knowing about the work of Ishiwara, derived the same relation and showed that in the case of separable coordinates it gives the conditions of Wilson and Sommerfeld.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "j"
},
{
"math_id": 1,
"text": "q_i"
},
{
"math_id": 2,
"text": "p_i"
},
{
"math_id": 3,
"text": "\\frac{1}{j} \\sum_{i=1}^j \\int{p_i dq_i}=h"
},
{
"math_id": 4,
"text": "h"
}
] |
https://en.wikipedia.org/wiki?curid=57095040
|
570963
|
Marginal propensity to save
|
Fraction of income increase that is saved
The marginal propensity to save (MPS) is the fraction of an increase in income that is not spent and instead used for saving. It is the slope of the line plotting saving against income. For example, if a household earns one extra dollar, and the marginal propensity to save is 0.35, then of that dollar, the household will spend 65 cents and save 35 cents. Likewise, it is the fractional decrease in saving that results from a decrease in income.
The MPS plays a central role in Keynesian economics as it quantifies the saving-income relation, which is the flip side of the consumption-income relation, and according to Keynes it reflects the fundamental psychological law. The marginal propensity to save is also a key variable in determining the value of the multiplier.
Calculation.
MPS can be calculated as the change in savings divided by the change in income.
formula_0
Or mathematically, the marginal propensity to save (MPS) function is expressed as the derivative of the savings (S) function with respect to disposable income (Y).
formula_1, where dS is the change in savings and dY is the change in income.
Multiplier effect.
Mathematical implication.
The end result is a magnified, multiplied change in aggregate production, initially triggered by the change in investment but amplified by the change in consumption: the initial investment is multiplied by the consumption coefficient (the marginal propensity to consume).
The MPS enters into the process because it indicates the division of extra income between consumption and saving. It determines how much saving is induced with each change in production and income, and thus how much consumption is induced. If the MPS is smaller, then the multiplier process is greater, as less saving and more consumption are induced with each round of activity.
Thus, in this highly simplified model, total magnified change in production due to change in an autonomous variable by $1
= formula_2
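For illustration, a minimal numerical sketch in Python (with an MPC of 0.8 chosen purely as an example) showing that summing the successive rounds of induced spending reproduces the closed-form multiplier 1/MPS:

```python
# Geometric-series view of the multiplier: a $1 autonomous change is respent
# round after round, each time scaled by the marginal propensity to consume.
mpc = 0.8                                     # illustrative value; MPS = 1 - MPC = 0.2
mps = 1 - mpc

rounds = sum(mpc ** k for k in range(1000))   # 1 + c + c^2 + ... (truncated series)
closed_form = 1 / mps                         # = 1 / (1 - MPC)

print(round(rounds, 6), round(closed_form, 6))  # both approximately 5.0
```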
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "MPS=\\frac{\\text{Change in Savings}}{\\text{Change in Income}}"
},
{
"math_id": 1,
"text": "MPS=\\frac{dS}{dY}"
},
{
"math_id": 2,
"text": "1 + c + c^2 + \\cdots\n =\\frac{1}{1-c}\n =\\frac{1}{1-MPC} = \\frac{1}{MPS}"
}
] |
https://en.wikipedia.org/wiki?curid=570963
|
570984
|
Consumption (economics)
|
Using money to obtain an item for use
Consumption is the act of using resources to satisfy current needs and wants. It is seen in contrast to investing, which is spending for acquisition of "future" income. Consumption is a major concept in economics and is also studied in many other social sciences.
Different schools of economists define consumption differently. According to mainstream economists, only the final purchase of newly produced goods and services by individuals for immediate use constitutes consumption, while other types of expenditure — in particular, fixed investment, intermediate consumption, and government spending — are placed in separate categories (see consumer choice). Other economists define consumption much more broadly, as the aggregate of all economic activity that does not entail the design, production and marketing of goods and services (e.g., the selection, adoption, use, disposal and recycling of goods and services).
Economists are particularly interested in the relationship between consumption and income, as modelled with the consumption function. A similar realist structural view can be found in consumption theory, which views the Fisherian intertemporal choice framework as the real structure of the consumption function. Unlike the passive strategy of structure embodied in inductive structural realism, economists define structure in terms of its invariance under intervention.
Behavioural economics, Keynesian consumption function.
The Keynesian consumption function is also known as the absolute income hypothesis, as it only bases consumption on current income and ignores potential future income (or lack of). Criticism of this assumption led to the development of Milton Friedman's permanent income hypothesis and Franco Modigliani's life cycle hypothesis.
More recent theoretical approaches are based on behavioural economics and suggest that a number of behavioural principles can be taken as microeconomic foundations for a behaviourally-based aggregate consumption function.
Behavioural economics also adopts and explains several human behavioural traits within the constraint of the standard economic model. These include bounded rationality, bounded willpower, and bounded selfishness.
Bounded rationality was first proposed by Herbert Simon. It means that people sometimes respond rationally to their own cognitive limits, which aims to minimize the sum of the cost of decision making and the cost of error. In addition, bounded willpower refers to the fact that people often take actions that they know are in conflict with their long-term interests. For example, most smokers would rather not smoke, and many smokers are willing to pay for a drug or a program to help them quit. Finally, bounded self-interest refers to an essential fact about the utility function of a large part of people: under certain circumstances, they care about others or act as if they care about others, even strangers.
Consumption and household production.
Aggregate consumption is a component of aggregate demand.
Consumption is defined in part by comparison to production.
In the tradition of the Columbia School of Household Economics, also known as the New Home Economics, commercial consumption has to be analyzed in the context of household production. The opportunity cost of time affects the cost of home-produced substitutes and therefore demand for commercial goods and services. The elasticity of demand for consumption goods is also a function of who performs chores in households and how their spouses compensate them for opportunity costs of home production.
Consumption can also be measured in a variety of different ways such as energy in energy economics metrics.
Consumption as part of GDP.
GDP (Gross domestic product) is defined via this formula:
formula_0
Where formula_1 stands for consumption.
Where formula_2 stands for total government spending (including salaries).
Where formula_3 stands for Investments.
Where formula_4 stands for net exports. Net exports are exports minus imports.
In most countries consumption is the most important part of GDP. It usually ranges from 45% of GDP to 85% of GDP.
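For illustration, a short sketch with entirely hypothetical national-accounts figures, computing consumption's share of GDP from the identity above:

```python
# Hypothetical figures (same currency units throughout) for the expenditure approach.
C, G, I, NX = 1300.0, 400.0, 350.0, -50.0   # consumption, government spending, investment, net exports
Y = C + G + I + NX                          # GDP identity: Y = C + G + I + NX

print(Y)        # 2000.0
print(C / Y)    # 0.65 -> consumption is 65% of GDP, inside the 45%-85% range mentioned above
```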
Consumption in microeconomics.
In microeconomics, consumer choice is a theory that assumes that people are rational consumers who decide which combinations of goods to buy based on their utility function (which goods provide them with more use/happiness) and their budget constraint (which combinations of goods they can afford to buy). Consumers try to maximize utility while staying within the limits of their budget constraint, or to minimize cost while reaching a target level of utility. A special case of this is the consumption-leisure model, in which a consumer chooses between a combination of leisure and working time, which is represented by income.
However, behavioural economics shows that consumers do not behave rationally and they are influenced by factors other than their utility from the given good. Those factors can be the popularity of a given good or its position in a supermarket.
Consumption in macroeconomics.
In macroeconomics, in the theory of national accounts, consumption is not only the amount of money spent by households on goods and services from companies, but also the expenditures of government intended to provide things for citizens that they would otherwise have to buy themselves, such as healthcare. Consumption is equal to income minus savings, and it can be calculated via this formula:
formula_5
Where formula_6 stands for autonomous consumption, the minimal consumption of a household that is always maintained, if necessary by reducing the household's savings or by borrowing money.
formula_7 is the marginal propensity to consume, where formula_8, and it reveals how much of household income is spent on consumption.
formula_9 is the disposable income of the household.
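A minimal sketch of the consumption function above; the values of autonomous consumption and the marginal propensity to consume are illustrative assumptions, not figures from the article:

```python
def consumption(y_d, c0=200.0, c=0.75):
    """Keynesian consumption function C = C0 + c * Y_d.

    c0  : autonomous consumption (maintained even at zero disposable income)
    c   : marginal propensity to consume, with 0 <= c <= 1
    y_d : disposable income
    """
    return c0 + c * y_d

for y_d in (0, 1000, 2000):
    print(y_d, consumption(y_d))   # consumption rises by c for each extra unit of income
```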
Consumption as a measurement of growth.
Consumption of electric energy is positively correlated with economic growth, as electric energy is one of the most important inputs of the economy: it is needed to produce goods and to provide services to consumers. There is a statistically significant positive relationship between electrical energy consumption and economic growth, and electricity consumption reflects economic growth. With the gradual rise of people's material standard of living, electric energy consumption also gradually increases. In Iran, for example, electricity consumption has increased along with economic growth since 1970. But as countries continue to develop, this effect decreases as they optimize their production, either by acquiring more energy-efficient equipment or by transferring parts of their production to foreign countries where the cost of electrical energy is lower.
Determinant factors of consumption.
The main factors affecting consumption studied by economists include:
Income: Economists consider the income level to be the most crucial factor affecting consumption. Therefore, the offered consumption functions often emphasize this variable. Keynes considers absolute income, Duesenberry considers relative income, and Friedman considers permanent income as factors that determine one's consumption.
Consumer expectations: Changes in the prices would change the real income and purchasing power of the consumer. If the consumer's expectations about future prices change, it can change his consumption decisions in the present period.
Consumer assets and wealth: These refer to assets in the form of cash, bank deposits, securities, as well as physical assets such as stocks of durable goods or real estate such as houses, land, etc. These factors can affect consumption; if the mentioned assets are sufficiently liquid, they will remain in reserve and can be used in emergencies.
Consumer credits: The increase in the consumer's credit and his credit transactions can allow the consumer to use his future income at present. As a result, it can lead to more consumption expenditure compared to the case that the only purchasing power is current income.
Interest rate: Fluctuations in interest rates can affect household consumption decisions. An increase in interest rates increases people's savings and, as a result, reduces their consumption expenditures.
Household size: Households' absolute consumption costs increase as the number of family members increases. Although for some goods, as the number of household members increases, the consumption of such goods increases relatively less than the number of members. This happens due to the phenomenon of economies of scale.
Social groups: Household consumption varies across different social groups. For example, the consumption pattern of employers is different from the consumption pattern of workers. The smaller the gap between groups in a society, the more homogeneous the consumption pattern within the society.
Consumer taste: One of the important factors in shaping the consumption pattern is consumer taste. This factor, to some extent, can affect other factors such as income and price levels. On the other hand, society's culture has a significant impact on shaping the tastes of consumers.
Area: Consumption patterns are different in different geographical regions. For example, this pattern differs from urban and rural areas, crowded and sparsely populated areas, economically active and inactive areas, etc.
Consumption theories.
Consumption theories began with John Maynard Keynes in 1936 and were developed by economists such as Friedman, Duesenberry, and Modigliani. The relationship between consumption and income was a crucial concept in macroeconomic analysis for a long time.
Absolute Income Hypothesis.
In his 1936 General Theory, Keynes introduced the consumption function. He believed that various factors influence consumption decisions, but that in the short run the most important factor is real income. According to the Absolute Income Hypothesis, consumer spending on consumption goods and services is a linear function of current disposable income.
Relative Income Hypothesis.
James Duesenberry proposed this model in 1949. This theory is based on two assumptions:
Intertemporal consumption.
The model of intertemporal consumption was first conceived by John Rae in the 1830s and was later expanded by Irving Fisher in the 1930s in the book "The Theory of Interest". This model describes how consumption is distributed over the periods of a life. In the basic model there are two periods, for example young age and old age.
formula_10
And then
formula_11
Where formula_1 is the consumption in a given year.
Where formula_12 is the income received in a given year.
Where formula_13 are saving from a given year.
Where formula_14 is the interest rate.
Indexes 1,2 stand for period 1 and period 2.
This model can be expanded to represent each year of a lifetime.
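A small numerical sketch of the basic two-period model above; the incomes, the first-period consumption choice and the interest rate are illustrative assumptions:

```python
# Two-period intertemporal consumption: save in period 1, consume the savings plus interest in period 2.
Y1, Y2 = 30000.0, 10000.0   # income when young and when old (illustrative)
C1 = 22000.0                # chosen consumption in period 1
r = 0.05                    # interest rate

S1 = Y1 - C1                # savings carried into period 2:  S1 = Y1 - C1
C2 = Y2 + S1 * (1 + r)      # period-2 consumption:           C2 = Y2 + S1 * (1 + r)

print(S1, C2)               # 8000.0 18400.0
```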
Permanent income hypothesis.
The permanent income hypothesis was developed by Milton Friedman in the 1950s in his book "A Theory of the Consumption Function". This theory divides income into two components: formula_15 is transitory income and formula_16 is permanent income, such that formula_17.
Changes in the two components have different impacts on consumption. If formula_16 changes, then consumption changes accordingly by formula_18, where formula_19 is known as the "marginal propensity to consume". If we expect part of income to be saved or invested, then formula_20; otherwise formula_21. On the other hand, if formula_15 changes (for example as a result of winning the lottery), then this increase in income is distributed over the remaining lifespan. For example, winning $1000 with the expectation of living for 10 more years will result in a yearly increase in consumption of $100.
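A short sketch of this decomposition, reproducing the lottery example from the text; the value chosen for the marginal propensity to consume out of permanent income is an illustrative assumption:

```python
alpha = 0.9   # marginal propensity to consume out of permanent income (illustrative)

def consumption_response(d_permanent, d_transitory, years_remaining):
    """Change in this year's consumption under the permanent income hypothesis.

    A change in permanent income is consumed at rate alpha; a transitory windfall
    is spread evenly over the remaining lifespan.
    """
    return alpha * d_permanent + d_transitory / years_remaining

# Winning $1000 with 10 more years to live raises yearly consumption by $100.
print(consumption_response(d_permanent=0.0, d_transitory=1000.0, years_remaining=10))   # 100.0
```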
Life-cycle hypothesis.
The life-cycle hypothesis was published by Franco Modigliani in 1966. It describes how people make consumption decisions based on their past income, current income, and future income as they tend to distribute their consumption over their lifetime. It is, in its basic form:
formula_22
Where formula_1 is the consumption in a given year.
Where formula_23 is the number of years the individual is going to live for.
Where formula_24 is the number of years the individual will continue working.
Where formula_12 is the average wage the individual will be paid over his or her remaining working years.
And formula_25 is the wealth he or she has already accumulated.
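A minimal sketch of the basic life-cycle formula above; the wealth, wage and horizon figures are illustrative assumptions:

```python
def life_cycle_consumption(W, Y, R, T):
    """Life-cycle hypothesis in its basic form: C = W/T + (R * Y)/T.

    W: wealth already accumulated, Y: average wage over the remaining working years,
    R: remaining working years, T: remaining years of life.
    """
    return W / T + (R * Y) / T

# Illustrative numbers: $50,000 already saved, expecting to earn $40,000 per year
# for 30 more working years, and to live for 50 more years.
print(life_cycle_consumption(W=50_000, Y=40_000, R=30, T=50))   # 25000.0 per year
```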
Access-based consumption.
The term "access-based consumption" refers to the increasing extent to which people seek the experience of temporarily accessing goods rather than owning them, thus there are opportunities for a "sharing economy" to develop, although Bardhi and Eckhardt outline differences between "access" and "sharing". Social theorist Jeremy Rifkin put forward the idea in his 2000 publication "The Age of Access".
Old-age spending.
"Spending the Kids' Inheritance" (originally the title of a book on the subject by Annie Hulley) and the acronyms SKI and SKI'ing refer to the growing number of older people in Western society spending their money on travel, cars and property, in contrast to previous generations who tended to leave that money to their children. According to a study from 2017 that was conducted in the USA 20% of married people consider leaving inheritance a priority, while 34% do not consider it as a priority. And about one in ten unmarried Americans (14 percent) plan to spend their retirement money to improve their lives, rather than saving it to leave an inheritance to their children. In addition, three in ten married Americans (28 percent) have downsized or plan to downsize their home after retirement.
"Die Broke" (from the book "Die Broke: A Radical Four-Part Financial Plan" by Stephen Pollan and Mark Levine) is a similar idea.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "Y=C+G+I+NX"
},
{
"math_id": 1,
"text": "C"
},
{
"math_id": 2,
"text": "G"
},
{
"math_id": 3,
"text": "I"
},
{
"math_id": 4,
"text": "NX"
},
{
"math_id": 5,
"text": "C=C_0+c*Y_d"
},
{
"math_id": 6,
"text": "C_0"
},
{
"math_id": 7,
"text": "c"
},
{
"math_id": 8,
"text": "c \\in [0,1]"
},
{
"math_id": 9,
"text": "Y_d"
},
{
"math_id": 10,
"text": "S_1=Y_1 - C_1"
},
{
"math_id": 11,
"text": "C_2 = Y_2 + S_1 \\times (1+r)"
},
{
"math_id": 12,
"text": "Y"
},
{
"math_id": 13,
"text": "S"
},
{
"math_id": 14,
"text": "r"
},
{
"math_id": 15,
"text": "Y_t"
},
{
"math_id": 16,
"text": "Y_p"
},
{
"math_id": 17,
"text": "Y = Y_t + Y_p"
},
{
"math_id": 18,
"text": "\\alpha \\times Y_p"
},
{
"math_id": 19,
"text": "\\alpha"
},
{
"math_id": 20,
"text": "\\alpha \\in (0, 1)"
},
{
"math_id": 21,
"text": "\\alpha = 1\n"
},
{
"math_id": 22,
"text": "C=1/T \\times W + 1/T \\times (R\\times Y)"
},
{
"math_id": 23,
"text": "T"
},
{
"math_id": 24,
"text": "R"
},
{
"math_id": 25,
"text": "W"
}
] |
https://en.wikipedia.org/wiki?curid=570984
|
571001
|
Marginal propensity to consume
|
Metric that quantifies induced consumption
In economics, the marginal propensity to consume (MPC) is a metric that quantifies induced consumption, the concept that the increase in personal consumer spending (consumption) occurs with an increase in disposable income (income after taxes and transfers). The proportion of disposable income which individuals spend on consumption is known as propensity to consume. MPC is the proportion of additional income that an individual consumes. For example, if a household earns one extra dollar of disposable income, and the marginal propensity to consume is 0.65, then of that dollar, the household will spend 65 cents and save 35 cents. Obviously, the household cannot spend "more" than the extra dollar (without borrowing or using savings). If the extra money accessed by the individual gives more economic confidence, then the MPC of the individual may well exceed 1, as they may borrow or utilise savings.
According to John Maynard Keynes, the marginal propensity to consume is less than one. As such, the MPC is higher for poorer people than for the rich.
Background.
Mathematically, the formula_0 function is expressed as the derivative of the consumption function formula_1 with respect to disposable income formula_2, i.e., the instantaneous slope of the formula_1-formula_2 curve.
formula_3
or, approximately,
formula_4, where formula_5 is the change in consumption, and formula_6 is the change in disposable income that produced the consumption.
Marginal propensity to consume can be found by dividing the change in consumption by the change in income, or formula_7. The MPC can be explained with a simple example:
Here formula_8; formula_9
Therefore, formula_10 or 83%.
For example, suppose you receive a bonus with your paycheck, and it's $500 on top of your normal annual earnings. You suddenly have $500 more in income than you did before. If you decide to spend $400 of this marginal increase in income on a new business suit, your marginal propensity to consume will be 0.8 (formula_11).
The marginal propensity to consume is measured as the ratio of the change in consumption to the change in income, thus giving us a figure between 0 and 1. The MPC can be more than one if the subject borrowed money or dissaved to finance expenditures higher than their income. The MPC can also be less than zero if an increase in income leads to a reduction in consumption (which might occur if, for example, the increase in income makes it worthwhile to save up for a particular purchase). One minus the MPC equals the marginal propensity to save (in a two sector closed economy), which is crucial to Keynesian economics and a key variable in determining the value of the multiplier. In symbols, we have: formula_12.
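A minimal sketch computing the MPC and the implied MPS (in a two-sector closed economy) from the $500 bonus example given above:

```python
def mpc(delta_c, delta_y):
    """Marginal propensity to consume: change in consumption divided by change in income."""
    return delta_c / delta_y

delta_y = 500.0   # the $500 bonus
delta_c = 400.0   # the $400 spent on the new business suit

print(mpc(delta_c, delta_y))        # 0.8
print(1 - mpc(delta_c, delta_y))    # 0.2 -> the marginal propensity to save
```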
In a standard Keynesian model, the MPC is less than the average propensity to consume (APC) because in the short-run some (autonomous) consumption does not change with income. Falls (increases) in income do not lead to reductions (increases) in consumption because people reduce (add to) savings to stabilize consumption. Over the long-run, as wealth and income rise, consumption also rises; the marginal propensity to consume out of long-run income is closer to the average propensity to consume.
The MPC is not strongly influenced by interest rates; consumption tends to be stable relative to income. In theory one might think that higher interest rates would induce more saving (the substitution effect), but higher interest rates also mean that people do not have to save as much for the future.
Economists often distinguish between the marginal propensity to consume out of permanent income, and the average propensity to consume out of temporary income, because if consumers expect a change in income to be permanent, then they have a greater incentive to increase their consumption. This implies that the Keynesian multiplier should be "larger" in response to permanent changes in income than it is in response to temporary changes in income (though the earliest Keynesian analyses ignored these subtleties). However, the distinction between permanent and temporary changes in income is often subtle in practice, and it is often quite difficult to designate a particular change in income as being permanent or temporary. What is more, the marginal propensity to consume should also be affected by factors such as the prevailing interest rate and the general level of consumer surplus that can be derived from purchasing.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathit{MPC}"
},
{
"math_id": 1,
"text": "C"
},
{
"math_id": 2,
"text": "Y"
},
{
"math_id": 3,
"text": "\\mathit{MPC}=\\frac{dC}{dY}"
},
{
"math_id": 4,
"text": "\\mathit{MPC} = \\frac{\\Delta C}{\\Delta Y}"
},
{
"math_id": 5,
"text": "\\Delta C"
},
{
"math_id": 6,
"text": "\\Delta Y"
},
{
"math_id": 7,
"text": "\\mathit{MPC}=\\Delta C/\\Delta Y"
},
{
"math_id": 8,
"text": "\\Delta C= 50"
},
{
"math_id": 9,
"text": "\\Delta Y= 60"
},
{
"math_id": 10,
"text": "\\mathit{MPC}=\\Delta C/\\Delta Y= 50/60= 0.83"
},
{
"math_id": 11,
"text": "\\$400/\\$500"
},
{
"math_id": 12,
"text": "\\frac{\\Delta C}{\\Delta Y} + \\frac{\\Delta S}{\\Delta Y} = 1"
}
] |
https://en.wikipedia.org/wiki?curid=571001
|
57103836
|
Transconvolution
|
The term transconvolution designates a numerical method used in medical imaging, in particular emission computed tomography. "Transconvolution" enables a subsequent manipulation of the Point spread function (PSF) in already recorded images.
Properties of an image such as the spatial resolution or the appearance of small objects are determined by the PSF of the imaging system used for image acquisition. Different imaging systems with different PSFs therefore provide slightly different images of one and the same object.
Starting from known PSFs of different tomographic systems, the transconvolution method allows an image recorded on a particular tomograph to be converted as if it had been acquired by another tomograph. The method can thus ensure the comparability of images that were originally recorded on different systems.
Definition.
Given two different tomographs with different point spread functions formula_0 and formula_1 the imaging process can be defined in terms of convolution as
formula_2
formula_3
with "formula_4" representing the convolution operator and formula_5 and formula_6 representing the two slightly different images of the same object formula_7 as seen by the respective tomographs.
The two equations yield the relationship
formula_8
with formula_9 representing the inverse function of the according point spread function formula_0.
The inverse point spread function formula_9 diverges and cannot be determined or handled numerically on its own. However, within certain boundary conditions, the complete term formula_10 can be computed approximately by numerical methods.
The transconvolution function formula_11 is defined as
formula_12
which results in the formula
formula_13
With the PSFs of the respective tomographs known, it is thus possible to convert an image formula_5 recorded by the first tomograph into an formula_6 emulating an image as recorded by the second tomograph.
Of course, the method is subject to certain limits; in particular, the computed formula_6 cannot represent spatial frequencies that are not captured to at least some degree by formula_0. Consequently, the spatial resolution of an image cannot be increased arbitrarily.
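The article does not prescribe a particular numerical scheme; the following one-dimensional sketch is one plausible implementation, assuming Gaussian point spread functions and evaluating the transconvolution kernel in the Fourier domain with a small regularization term so that the deconvolution part stays bounded:

```python
import numpy as np

def gaussian_psf(x, sigma):
    g = np.exp(-x**2 / (2.0 * sigma**2))
    return g / g.sum()

# Illustrative 1D setup: a sharp "object" imaged by two tomographs with different Gaussian PSFs.
n = 256
x = np.arange(n) - n // 2
obj = np.zeros(n)
obj[n // 2 - 4 : n // 2 + 4] = 1.0

P1 = np.fft.fft(np.fft.ifftshift(gaussian_psf(x, 2.0)))   # narrower PSF (first tomograph)
P2 = np.fft.fft(np.fft.ifftshift(gaussian_psf(x, 4.0)))   # broader PSF (second tomograph)

img1 = np.real(np.fft.ifft(np.fft.fft(obj) * P1))         # image as recorded by the first tomograph

# Transconvolution kernel tf = psf_1^{-1} * psf_2, built in the Fourier domain
# with Tikhonov-style regularization at frequencies where P1 is nearly zero.
eps = 1e-6
TF = P2 * np.conj(P1) / (np.abs(P1) ** 2 + eps)

img2_emulated = np.real(np.fft.ifft(np.fft.fft(img1) * TF))   # img_1 * tf
img2_direct = np.real(np.fft.ifft(np.fft.fft(obj) * P2))      # what the second tomograph would record

print(np.max(np.abs(img2_emulated - img2_direct)))            # small residual: img_1 * tf ~ img_2
```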
Application in medical imaging.
The second point spread function formula_1 does not have to represent a real tomograph, but can be purposely defined to represent a "virtual tomograph" with corresponding properties.
Based on the definition of a standardized virtual tomograph and the determination of the imaging properties of different real tomographs, the transconvolution method allows a uniform and quantitatively comparable representation of the image data taken by the different tomographs or systems, as if all measurements were made consistently by the standardized virtual system.
The method thus supports quantitative comparisons of images taken by different imaging systems and in particular by different clinical tomographs.
Another application of the transconvolution method, in positron emission tomography, allows handling the varying image blur caused by the varying positron range of the positron-emitting radionuclide used. In particular, this makes it possible to use a different radionuclide for calibration than for the subsequent imaging.
|
[
{
"math_id": 0,
"text": "psf_1"
},
{
"math_id": 1,
"text": "psf_2"
},
{
"math_id": 2,
"text": "obj * psf_1 = img_1"
},
{
"math_id": 3,
"text": "obj * psf_2 = img_2"
},
{
"math_id": 4,
"text": "*"
},
{
"math_id": 5,
"text": "img_1"
},
{
"math_id": 6,
"text": "img_2"
},
{
"math_id": 7,
"text": "obj"
},
{
"math_id": 8,
"text": "img_1 * psf_1^{-1} * psf_2 = img_2"
},
{
"math_id": 9,
"text": "psf_1^{-1}"
},
{
"math_id": 10,
"text": "psf_1^{-1} * psf_2"
},
{
"math_id": 11,
"text": "tf"
},
{
"math_id": 12,
"text": "tf = psf_1^{-1} * psf_2"
},
{
"math_id": 13,
"text": "img_1 * tf = img_2"
}
] |
https://en.wikipedia.org/wiki?curid=57103836
|
57109223
|
Bjerknes force
|
Translational forces on bubbles in a sound wave
Bjerknes forces are translational forces on bubbles in a sound wave. The phenomenon is a type of acoustic radiation force. "Primary" Bjerknes forces are caused by an external sound field; "secondary" Bjerknes forces are attractive or repulsive forces between pairs of bubbles in the same sound field caused by the pressure field generated by each bubble volume's oscillations. They were first described by Vilhelm Bjerknes in his 1906 "Fields of Force".
Hydrodynamics – electromagnetism analogy.
In "Fields of Force" Bjerknes lay out geometrical and dynamical analogies between the Maxwell's theory of electromagnetism and hydrodynamics. In the light of these analogies the Bjerknes forces are being predicted.
Principle of kinematic buoyancy.
Bjerknes writes:"Any body which participates in the translatory motion of a fluid mass is subject to a kinematic buoyancy equal to the product of the acceleration of the translatory motion multiplied by the mass of the water displaced by the body"This principle is analogous to Archimedes' principle. Based on this principle the force acting on a particle of volume formula_0 is formula_1. Where formula_2 is the fluid velocity and formula_3 is the fluid density.
Using conservation of momentum for an incompressible, non-viscous fluid, one finds that to first order formula_4, concluding that formula_5.
Charge and oscillating particles.
Bjerknes realized that the velocity field generated by an expanding particle in an incompressible fluid has the same geometrical structure as the electric field generated by a positively charged particle, and that the same applies to a contracting particle and a negatively charged particle.
In the case of an oscillating motion, Bjerknes argued that two particles that oscillate "in phase" generate a velocity field that is geometrically equivalent to the electric field generated by two particles with the "same charge", whereas two particles that oscillate in an "opposite phase" will generate a velocity field that is geometrically equivalent to the electric field generated by particles with an "opposite sign".
Bjerknes then writes:"Between Bodies pulsating in the same phase there is an apparent attraction; between bodies pulsating in the opposite phase there is an apparent repulsion, the force being proportional to the product of the two intensities of pulsating, and proportional to the inverse square of the distance." This result is counter to our intuition, as it demonstrates that bodies oscillating in phase exert an attractive force on each other, despite creating a field akin to that of identically charged particles. This result was described by Bjerknes as "Astonishing".
Primary Bjerknes force.
The force on a small particle in a sound wave is given by:
formula_5
where V is the volume of the particle, and formula_6P is the acoustic pressure gradient on the bubble.
Assuming a sinusoidal standing wave, the time-averaged pressure gradient over a single acoustic cycle is zero, meaning a solid particle (with fixed volume) experiences no net force. However, because a bubble is compressible, the oscillating pressure field also causes its volume to change; for spherical bubbles this can be described by the Rayleigh–Plesset equation. This means the time-averaged product of the bubble volume and the pressure gradient can be non-zero over an acoustic cycle. Unlike acoustic radiation forces on incompressible particles, net forces can be generated in the absence of attenuation or reflection of the sound wave.
The sign of the force will depend on the relative phase between the pressure field formula_7 and the volume formula_8 oscillations. According to the theory of the forced harmonic oscillator, the relative phase will depend on the difference between the bubble's resonant frequency and the acoustic driving frequency.
Bubble focusing.
From Rayleigh–Plesset equation one can derive the bubble resonant frequency:
formula_9
Where formula_3 is the fluid density, formula_10 is the rest radius of the bubble, formula_11 is the polytropic index, formula_12 is the ambient pressure, formula_13 is the vapor pressure and formula_14 is the surface tension constant.
Bubbles with resonance frequency above the acoustic driving frequency travel up the pressure gradient, while those with a lower resonance frequency travel down the pressure gradient.
The dependence of the resonant frequency (formula_15) on the rest radius of the bubble predicts that for standing waves, there is a critical radius formula_16 that depends on the driving frequency. Small bubbles (formula_17) accumulate at pressure antinodes, whereas large bubbles (formula_18) accumulate at pressure nodes.
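A short numerical sketch of the resonance formula above for an air bubble in water; the fluid constants are standard textbook values and the driving frequency is an arbitrary choice, not values taken from the article:

```python
import numpy as np

# Standard values for water near room temperature (illustrative assumptions).
rho   = 998.0        # fluid density, kg/m^3
kappa = 1.4          # polytropic index for adiabatic air
P0    = 101_325.0    # ambient pressure, Pa
PV    = 2_339.0      # vapor pressure of water at about 20 C, Pa
sigma = 0.0728       # surface tension, N/m

def resonance_omega(R0):
    """Angular resonance frequency of a bubble with rest radius R0 (in metres)."""
    return np.sqrt((3 * kappa * (P0 - PV) + 2 * (3 * kappa - 1) * sigma / R0) / (rho * R0**2))

f_drive = 26_000.0                       # acoustic driving frequency in Hz (arbitrary)
for R0 in (1e-6, 1e-5, 1e-4, 1e-3):      # rest radii from 1 micron to 1 mm
    f0 = resonance_omega(R0) / (2 * np.pi)
    where = "antinodes (small bubble)" if f0 > f_drive else "nodes (large bubble)"
    print(f"R0 = {R0:.0e} m, f0 = {f0:.3e} Hz -> collects at pressure {where}")
```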
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " V "
},
{
"math_id": 1,
"text": " F = \\bold{a} \\rho V =\\frac{\\partial \\bold{u}}{\\partial t} \\rho V "
},
{
"math_id": 2,
"text": " \\bold{u} "
},
{
"math_id": 3,
"text": " \\rho "
},
{
"math_id": 4,
"text": " \\rho \\frac{\\partial \\bold{u}}{\\partial t}=-\\nabla P "
},
{
"math_id": 5,
"text": " F = - V \\nabla P "
},
{
"math_id": 6,
"text": "\\nabla"
},
{
"math_id": 7,
"text": " P(t) "
},
{
"math_id": 8,
"text": " V(t) "
},
{
"math_id": 9,
"text": " \\omega_0^2 = \\frac{1}{\\rho R_0^2} \\Bigl( 3\\kappa \\bigl(P_0 - P_V \\bigr) + 2(3\\kappa - 1) \\frac{\\sigma}{R_0} \\Bigr) "
},
{
"math_id": 10,
"text": " R_0 "
},
{
"math_id": 11,
"text": " \\kappa "
},
{
"math_id": 12,
"text": " P_0 "
},
{
"math_id": 13,
"text": " P_V "
},
{
"math_id": 14,
"text": " \\sigma "
},
{
"math_id": 15,
"text": " \\omega_0 "
},
{
"math_id": 16,
"text": " R_c "
},
{
"math_id": 17,
"text": " R_0<R_c "
},
{
"math_id": 18,
"text": " R_0>R_c "
}
] |
https://en.wikipedia.org/wiki?curid=57109223
|
571109
|
Dirichlet problem
|
Problem of solving a partial differential equation subject to prescribed boundary values
In mathematics, a Dirichlet problem asks for a function which solves a specified partial differential equation (PDE) in the interior of a given region that takes prescribed values on the boundary of the region.
The Dirichlet problem can be solved for many PDEs, although originally it was posed for Laplace's equation. In that case the problem can be stated as follows:
Given a function "f" that has values everywhere on the boundary of a region in formula_0, is there a unique continuous function formula_1 twice continuously differentiable in the interior and continuous on the boundary, such that formula_1 is harmonic in the interior and formula_2 on the boundary?
This requirement is called the Dirichlet boundary condition. The main issue is to prove the existence of a solution; uniqueness can be proven using the maximum principle.
History.
The Dirichlet problem goes back to George Green, who studied the problem on general domains with general boundary conditions in his "Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism", published in 1828. He reduced the problem into a problem of constructing what we now call Green's functions, and argued that Green's function exists for any domain. His methods were not rigorous by today's standards, but the ideas were highly influential in the subsequent developments. The next steps in the study of the Dirichlet's problem were taken by Karl Friedrich Gauss, William Thomson (Lord Kelvin) and Peter Gustav Lejeune Dirichlet, after whom the problem was named, and the solution to the problem (at least for the ball) using the Poisson kernel was known to Dirichlet (judging by his 1850 paper submitted to the Prussian academy). Lord Kelvin and Dirichlet suggested a solution to the problem by a variational method based on the minimization of "Dirichlet's energy". According to Hans Freudenthal (in the "Dictionary of Scientific Biography", vol. 11), Bernhard Riemann was the first mathematician who solved this variational problem based on a method which he called Dirichlet's principle. The existence of a unique solution is very plausible by the "physical argument": any charge distribution on the boundary should, by the laws of electrostatics, determine an electrical potential as solution. However, Karl Weierstrass found a flaw in Riemann's argument, and a rigorous proof of existence was found only in 1900 by David Hilbert, using his direct method in the calculus of variations. It turns out that the existence of a solution depends delicately on the smoothness of the boundary and the prescribed data.
General solution.
For a domain formula_3 having a sufficiently smooth boundary formula_4, the general solution to the Dirichlet problem is given by
formula_5
where formula_6 is the Green's function for the partial differential equation, and
formula_7
is the derivative of the Green's function along the inward-pointing unit normal vector formula_8. The integration is performed on the boundary, with measure formula_9. The function formula_10 is given by the unique solution to the Fredholm integral equation of the second kind,
formula_11
The Green's function to be used in the above integral is one which vanishes on the boundary:
formula_12
for formula_13 and formula_14. Such a Green's function is usually a sum of the free-field Green's function and a harmonic solution to the differential equation.
Existence.
The Dirichlet problem for harmonic functions always has a solution, and that solution is unique, when the boundary is sufficiently smooth and formula_15 is continuous. More precisely, it has a solution when
formula_16
for some formula_17, where formula_18 denotes the Hölder condition.
Example: the unit disk in two dimensions.
In some simple cases the Dirichlet problem can be solved explicitly. For example, the solution to the Dirichlet problem for the unit disk in R2 is given by the Poisson integral formula.
If formula_19 is a continuous function on the boundary formula_4 of the open unit disk formula_3, then the solution to the Dirichlet problem is formula_20 given by
formula_21
The solution formula_1 is continuous on the closed unit disk formula_22 and harmonic on formula_23
The integrand is known as the Poisson kernel; this solution follows from the Green's function in two dimensions:
formula_24
where formula_25 is harmonic (formula_26) and chosen such that formula_27 for formula_28.
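A short numerical sketch evaluating the Poisson integral above by a simple Riemann sum; the boundary function and the evaluation point are arbitrary choices made only for this check:

```python
import numpy as np

def poisson_solution(f, z, n=4000):
    """Approximate u(z) for |z| < 1 by discretizing the Poisson integral."""
    psi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    kernel = (1.0 - abs(z) ** 2) / np.abs(1.0 - z * np.exp(-1j * psi)) ** 2
    return np.mean(f(np.exp(1j * psi)) * kernel)   # mean = (1/2pi) * sum * (2pi/n)

# Boundary data f(e^{i psi}) = cos(psi) = Re(e^{i psi}); its harmonic extension is u(z) = Re(z).
f = lambda w: w.real
z = 0.3 + 0.4j
print(poisson_solution(f, z), z.real)   # both approximately 0.3
```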
Methods of solution.
For bounded domains, the Dirichlet problem can be solved using the Perron method, which relies on the maximum principle for subharmonic functions. This approach is described in many textbooks. It is not well-suited to describing smoothness of solutions when the boundary is smooth. Another classical Hilbert space approach through Sobolev spaces does yield such information. The solution of the Dirichlet problem using Sobolev spaces for planar domains can be used to prove the smooth version of the Riemann mapping theorem. A different approach for establishing the smooth Riemann mapping theorem, based on the reproducing kernels of Szegő and Bergman, has also been outlined and in turn used to solve the Dirichlet problem. The classical methods of potential theory allow the Dirichlet problem to be solved directly in terms of integral operators, for which the standard theory of compact and Fredholm operators is applicable. The same methods work equally well for the Neumann problem.
Generalizations.
Dirichlet problems are typical of elliptic partial differential equations, and potential theory, and the Laplace equation in particular. Other examples include the biharmonic equation and related equations in elasticity theory.
They are one of several types of classes of PDE problems defined by the information given at the boundary, including Neumann problems and Cauchy problems.
Example: equation of a finite string attached to one moving wall.
Consider the Dirichlet problem for the wave equation describing a string attached between walls, with one end attached permanently and the other moving with constant velocity, i.e. the d'Alembert equation on the triangular region given by the Cartesian product of space and time:
formula_29
formula_30
formula_31
As one can easily check by substitution, the solution fulfilling the first condition is
formula_32
Additionally we want
formula_33
Substituting
formula_34
we get the condition of self-similarity
formula_35
where
formula_36
It is fulfilled, for example, by the composite function
formula_37
with
formula_38
thus in general
formula_39
where formula_40 is a periodic function with a period formula_41:
formula_42
and we get the general solution
formula_43
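A quick numerical check of this general solution; the wall speed and the particular periodic function formula_40 are assumptions made only for the check:

```python
import numpy as np

lam = 0.5                                  # wall speed, with 0 < lam < 1 (illustrative)
gamma = (1 - lam) / (1 + lam)              # self-similarity ratio
period = np.log(gamma)                     # g must be periodic with this period

g = lambda tau: np.sin(2 * np.pi * tau / period)          # one periodic choice of g
u = lambda x, t: g(np.log(t - x)) - g(np.log(x + t))      # the general solution above

t = np.linspace(0.1, 10.0, 200)
print(np.max(np.abs(u(0.0, t))))           # ~0: fixed-wall condition u(0, t) = 0
print(np.max(np.abs(u(lam * t, t))))       # ~0: moving-wall condition u(lam*t, t) = 0
```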
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbb{R}^n"
},
{
"math_id": 1,
"text": "u"
},
{
"math_id": 2,
"text": "u=f"
},
{
"math_id": 3,
"text": "D"
},
{
"math_id": 4,
"text": "\\partial D"
},
{
"math_id": 5,
"text": "u(x) = \\int_{\\partial D} \\nu(s) \\frac{\\partial G(x, s)}{\\partial n} \\,ds,"
},
{
"math_id": 6,
"text": "G(x, y)"
},
{
"math_id": 7,
"text": "\\frac{\\partial G(x, s)}{\\partial n} = \\widehat{n} \\cdot \\nabla_s G (x, s) = \\sum_i n_i \\frac{\\partial G(x, s)}{\\partial s_i}"
},
{
"math_id": 8,
"text": "\\widehat{n}"
},
{
"math_id": 9,
"text": "ds"
},
{
"math_id": 10,
"text": "\\nu(s)"
},
{
"math_id": 11,
"text": "f(x) = -\\frac{\\nu(x)}{2} + \\int_{\\partial D} \\nu(s) \\frac{\\partial G(x, s)}{\\partial n} \\,ds."
},
{
"math_id": 12,
"text": "G(x, s) = 0"
},
{
"math_id": 13,
"text": "s \\in \\partial D"
},
{
"math_id": 14,
"text": "x \\in D"
},
{
"math_id": 15,
"text": "f(s)"
},
{
"math_id": 16,
"text": "\\partial D \\in C^{1,\\alpha}"
},
{
"math_id": 17,
"text": "\\alpha \\in (0, 1)"
},
{
"math_id": 18,
"text": "C^{1,\\alpha}"
},
{
"math_id": 19,
"text": "f"
},
{
"math_id": 20,
"text": "u(z)"
},
{
"math_id": 21,
"text": "u(z) =\n \\begin{cases}\n \\displaystyle \\frac{1}{2\\pi} \\int_0^{2\\pi} f(e^{i\\psi}) \\frac {1 - |z|^2}{|1 - ze^{-i\\psi}|^2} \\,d\\psi & \\text{if } z \\in D, \\\\\n f(z) & \\text{if } z \\in \\partial D.\n \\end{cases}\n"
},
{
"math_id": 22,
"text": "\\bar{D}"
},
{
"math_id": 23,
"text": "D."
},
{
"math_id": 24,
"text": "G(z, x) = -\\frac{1}{2\\pi} \\log|z - x| + \\gamma(z, x),"
},
{
"math_id": 25,
"text": "\\gamma(z, x)"
},
{
"math_id": 26,
"text": "\\Delta_x \\gamma(z, x) = 0"
},
{
"math_id": 27,
"text": "G(z, x) = 0"
},
{
"math_id": 28,
"text": "x \\in \\partial D"
},
{
"math_id": 29,
"text": "\\frac{\\partial^2}{\\partial t^2} u(x, t) - \\frac{\\partial^2}{\\partial x^2} u(x, t) = 0,"
},
{
"math_id": 30,
"text": "u(0, t) = 0,"
},
{
"math_id": 31,
"text": "u(\\lambda t, t) = 0."
},
{
"math_id": 32,
"text": "u(x, t) = f(t - x) - f(x + t)."
},
{
"math_id": 33,
"text": "f(t - \\lambda t) - f(\\lambda t + t) = 0."
},
{
"math_id": 34,
"text": "\\tau = (\\lambda + 1) t,"
},
{
"math_id": 35,
"text": "f(\\gamma \\tau) = f(\\tau),"
},
{
"math_id": 36,
"text": "\\gamma = \\frac{1 - \\lambda}{\\lambda + 1}."
},
{
"math_id": 37,
"text": "\\sin[\\log(e^{2 \\pi} x)] = \\sin[\\log(x)]"
},
{
"math_id": 38,
"text": "\\lambda = e^{2\\pi} = 1^{-i},"
},
{
"math_id": 39,
"text": "f(\\tau) = g[\\log(\\gamma \\tau)],"
},
{
"math_id": 40,
"text": "g"
},
{
"math_id": 41,
"text": "\\log(\\gamma)"
},
{
"math_id": 42,
"text": "g[\\tau + \\log(\\gamma)] = g(\\tau),"
},
{
"math_id": 43,
"text": "u(x, t) = g[\\log(t - x)] - g[\\log(x + t)]."
}
] |
https://en.wikipedia.org/wiki?curid=571109
|
57121312
|
Quantum foundations
|
Branch of knowledge concerned with building intuition for quantum theory
Quantum foundations is a discipline of science that seeks to understand the most counter-intuitive aspects of quantum theory, reformulate it and even propose new generalizations thereof. In contrast to other physical theories, such as general relativity, the defining axioms of quantum theory are quite ad hoc, with no obvious physical intuition. While they lead to the right experimental predictions, they do not come with a mental picture of the world in which they fit.
There exist different approaches to resolve this conceptual gap:
Research in quantum foundations is structured along these roads.
Non-classical features of quantum theory.
Quantum nonlocality.
Two or more separate parties conducting measurements over a quantum state can observe correlations which cannot be explained with any local hidden variable theory. Whether this should be regarded as proving that the physical world itself is "nonlocal" is a topic of debate, but the terminology of "quantum nonlocality" is commonplace. Nonlocality research efforts in quantum foundations focus on determining the exact limits that classical or quantum physics enforces on the correlations observed in a Bell experiment or more complex causal scenarios. This research program has so far provided a generalization of Bell's theorem that allows falsifying all classical theories with a superluminal, yet finite, hidden influence.
Quantum contextuality.
Nonlocality can be understood as an instance of quantum contextuality. A situation is contextual when the value of an observable depends on the context in which it is measured (namely, on which other observables are being measured as well). The original definition of measurement contextuality can be extended to state preparations and even general physical transformations.
Epistemic models for the quantum wave-function.
A physical property is epistemic when it represents our knowledge or beliefs on the value of a second, more fundamental feature. The probability of an event to occur is an example of an epistemic property. In contrast, a non-epistemic or ontic variable captures the notion of a “real” property of the system under consideration.
There is an on-going debate on whether the wave-function represents the epistemic state of a yet to be discovered ontic variable or, on the contrary, it is a fundamental entity. Under some physical assumptions, the Pusey–Barrett–Rudolph (PBR) theorem demonstrates the inconsistency of quantum states as epistemic states, in the sense above. Note that, in QBism and Copenhagen-type views, quantum states are still regarded as epistemic, not with respect to some ontic variable, but to one's expectations about future experimental outcomes. The PBR theorem does not exclude such epistemic views on quantum states.
Axiomatic reconstructions.
Some of the counter-intuitive aspects of quantum theory, as well as the difficulty to extend it, follow from the fact that its defining axioms lack a physical motivation. An active area of research in quantum foundations is therefore to find alternative formulations of quantum theory which rely on physically compelling principles. Those efforts come in two flavors, depending on the desired level of description of the theory: the so-called Generalized Probabilistic Theories approach and the Black boxes approach.
The framework of generalized probabilistic theories.
Generalized Probabilistic Theories (GPTs) are a general framework to describe the operational features of arbitrary physical theories. Essentially, they provide a statistical description of any experiment combining state preparations, transformations and measurements. The framework of GPTs can accommodate classical and quantum physics, as well as hypothetical non-quantum physical theories which nonetheless possess quantum theory's most remarkable features, such as entanglement or teleportation. Notably, a small set of physically motivated axioms is enough to single out the GPT representation of quantum theory.
L. Hardy introduced the concept of GPT in 2001, in an attempt to re-derive quantum theory from basic physical principles. Although Hardy's work was very influential (see the follow-ups below), one of his axioms was regarded as unsatisfactory: it stipulated that, of all the physical theories compatible with the rest of the axioms, one should choose the simplest one. The work of Dakic and Brukner eliminated this “axiom of simplicity” and provided a reconstruction of quantum theory based on three physical principles. This was followed by the more rigorous reconstruction of Masanes and Müller.
Axioms common to these three reconstructions are:
An alternative GPT reconstruction proposed by Chiribella et al. around the same time is also based on the purification postulate.
The use of purification to characterize quantum theory has been criticized on the grounds that it also applies in the Spekkens toy model.
Against the success of the GPT approach, it can be countered that all such works just recover finite-dimensional quantum theory. In addition, none of the previous axioms can be experimentally falsified unless the measurement apparatuses are assumed to be tomographically complete.
Categorical quantum mechanics or process theories.
Categorical Quantum Mechanics (CQM) or Process Theories are a general framework to describe physical theories, with an emphasis on processes and their compositions. It was pioneered by Samson Abramsky and Bob Coecke. Besides its influence in quantum foundations, most notably the use of a diagrammatic formalism, CQM also plays an important role in quantum technologies, most notably in the form of ZX-calculus. It also has been used to model theories outside of physics, for example the DisCoCat compositional natural language meaning model.
The framework of black boxes.
In the black box or device-independent framework, an experiment is regarded as a black box where the experimentalist introduces an input (the type of experiment) and obtains an output (the outcome of the experiment). Experiments conducted by two or more parties in separate labs are hence described by their statistical correlations alone.
From Bell's theorem, we know that classical and quantum physics predict different sets of allowed correlations. It is expected, therefore, that far-from-quantum physical theories should predict correlations beyond the quantum set. In fact, there exist instances of theoretical non-quantum correlations which, a priori, do not seem physically implausible. The aim of device-independent reconstructions is to show that all such supra-quantum examples are precluded by a reasonable physical principle.
The physical principles proposed so far include no-signalling, Non-Trivial Communication Complexity, No-Advantage for Nonlocal computation, Information Causality, Macroscopic Locality, and Local Orthogonality. All these principles limit the set of possible correlations in non-trivial ways. Moreover, they are all device-independent: this means that they can be falsified under the assumption that we can decide if two or more events are space-like separated. The drawback of the device-independent approach is that, even when taken together, all the afore-mentioned physical principles do not suffice to single out the set of quantum correlations. In other words: all such reconstructions are partial.
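As a concrete illustration of how the classical, quantum and hypothetical supra-quantum sets of correlations differ, the following sketch evaluates the CHSH expression (a specific Bell inequality, used here only as an example and not singled out in the text) for a deterministic classical strategy, an optimal quantum strategy and the hypothetical Popescu-Rohrlich box:

```python
import numpy as np

# CHSH expression S = E(0,0) + E(0,1) + E(1,0) - E(1,1) for three kinds of boxes.
Z = np.array([[1, 0], [0, -1]], dtype=float)
X = np.array([[0, 1], [1, 0]], dtype=float)
obs = lambda theta: np.cos(theta) * Z + np.sin(theta) * X     # spin measurement in the x-z plane

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)                # maximally entangled two-qubit state
E_quantum = lambda a, b: phi_plus @ np.kron(obs(a), obs(b)) @ phi_plus

A = [0.0, np.pi / 2]
B = [np.pi / 4, -np.pi / 4]
S_quantum = (E_quantum(A[0], B[0]) + E_quantum(A[0], B[1])
             + E_quantum(A[1], B[0]) - E_quantum(A[1], B[1]))

S_classical = 1 + 1 + 1 - 1                  # best deterministic local strategy: always output +1
S_pr_box = 1 + 1 + 1 - (-1)                  # hypothetical Popescu-Rohrlich box

print(S_classical, round(S_quantum, 3), S_pr_box)   # 2, 2.828 (= 2*sqrt(2)), 4
```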
Interpretations of quantum theory.
An interpretation of quantum theory is a correspondence between the elements of its mathematical formalism and physical phenomena. For instance, in the pilot wave theory, the quantum wave function is interpreted as a field that guides the particle trajectory and evolves with it via a system of coupled differential equations. Most interpretations of quantum theory stem from the desire to solve the quantum measurement problem.
Extensions of quantum theory.
In an attempt to reconcile quantum and classical physics, or to identify non-classical models with a dynamical causal structure, some modifications of quantum theory have been proposed.
Collapse models.
Collapse models posit the existence of natural processes which periodically localize the wave-function. Such theories provide an explanation to the nonexistence of superpositions of macroscopic objects, at the cost of abandoning unitarity and exact energy conservation.
Quantum measure theory.
In Sorkin's quantum measure theory (QMT), physical systems are not modeled via unitary rays and Hermitian operators, but through a single matrix-like object, the decoherence functional. The entries of the decoherence functional determine the feasibility to experimentally discriminate between two or more different sets of classical histories, as well as the probabilities of each experimental outcome. In some models of QMT the decoherence functional is further constrained to be positive semidefinite (strong positivity). Even under the assumption of strong positivity, there exist models of QMT which generate stronger-than-quantum Bell correlations.
Acausal quantum processes.
The formalism of process matrices starts from the observation that, given the structure of quantum states, the set of feasible quantum operations follows from positivity considerations. Namely, for any linear map from states to probabilities one can find a physical system where this map corresponds to a physical measurement. Likewise, any linear transformation that maps composite states to states corresponds to a valid operation in some physical system. In view of this trend, it is reasonable to postulate that any high-order map from quantum instruments (namely, measurement processes) to probabilities should also be physically realizable. Any such map is termed a process matrix. As shown by Oreshkov et al., some process matrices describe situations where the notion of global causality breaks.
The starting point of this claim is the following mental experiment: two parties, Alice and Bob, enter a building and end up in separate rooms. The rooms have ingoing and outgoing channels from which a quantum system periodically enters and leaves the room. While those systems are in the lab, Alice and Bob are able to interact with them in any way; in particular, they can measure some of their properties.
Since Alice and Bob's interactions can be modeled by quantum instruments, the statistics they observe when they apply one instrument or another are given by a process matrix. As it turns out, there exist process matrices which would guarantee that the measurement statistics collected by Alice and Bob are incompatible with Alice interacting with her system at the same time as, before, or after Bob, or any convex combination of these three situations. Such processes are called acausal.
|
[
{
"math_id": 0,
"text": "S_A"
},
{
"math_id": 1,
"text": "A-B"
},
{
"math_id": 2,
"text": "T_{AB}"
},
{
"math_id": 3,
"text": "A"
},
{
"math_id": 4,
"text": "T_{AB}, T^{\\prime}_{AB}"
},
{
"math_id": 5,
"text": "B"
}
] |
https://en.wikipedia.org/wiki?curid=57121312
|
5712595
|
Ascorbate peroxidase
|
Enzyme
Ascorbate peroxidase (or L-ascorbate peroxidase, APX or APEX) (EC 1.11.1.11) is an enzyme that catalyzes the chemical reaction
L-ascorbate + H2O2 formula_0 dehydroascorbate + 2 H2O
It is a member of the family of heme-containing peroxidases. Heme peroxidases catalyse the H2O2-dependent oxidation of a wide range of different, usually organic, substrates in biology.
This enzyme belongs to the family of oxidoreductases, specifically those acting on a peroxide as acceptor (peroxidases). The systematic name of this enzyme class is L-ascorbate:hydrogen-peroxide oxidoreductase. Other names in common use include L-ascorbic acid peroxidase, L-ascorbic acid-specific peroxidase, ascorbate peroxidase, and ascorbic acid peroxidase. This enzyme participates in ascorbate and aldarate metabolism.
Overview.
Ascorbate-dependent peroxidase activity was first reported in 1979, more than 150 years after the first observation of peroxidase activity in horseradish plants and almost 40 years after the discovery of the closely related cytochrome c peroxidase enzyme.
Peroxidases have been classified into three types (class I, class II and class III): ascorbate peroxidases is a class I peroxidase enzyme. APXs catalyse the H2O2-dependent oxidation of ascorbate in plants, algae and certain cyanobacteria. APX has high sequence identity to cytochrome c peroxidase, which is also a class I peroxidase enzyme. Under physiological conditions, the immediate product of the reaction, the monodehydroascorbate radical, is reduced back to ascorbate by a monodehydroascorbate reductase (monodehydroascorbate reductase (NADH)) enzyme. In the absence of a reductase, two monodehydroascorbate radicals disproportionate rapidly to dehydroascorbic acid and ascorbate. APX is an integral component of the glutathione-ascorbate cycle.
Substrate specificity.
APX enzymes show high specificity for ascorbate as an electron donor, but most APXs will also oxidise other organic substrates that are more characteristic of the class III peroxidases (such as horseradish peroxidase), in some cases at rates comparable to that of ascorbate itself. This means that defining an enzyme as an APX is not straightforward; the term is usually applied when the specific activity for ascorbate is higher than that for other substrates. Some proteins from the APX family lack the ascorbate-binding amino acid residues, suggesting that they might oxidize molecules other than ascorbate.
Mechanism.
Most of the information on mechanism comes from work on the pea cytosolic and soybean cytosolic enzymes. The mechanism of oxidation of ascorbate is achieved by means of an oxidized Compound I intermediate, which is subsequently reduced by substrate in two, sequential single electron transfer steps (equations [1]–[3], where HS = substrate and S• = one electron oxidised form of substrate).
APX + H2O2 → Compound I + H2O [1]
Compound I + HS → Compound II + S• [2]
Compound II + HS → APX + S• + H2O [3]
In ascorbate peroxidase, Compound I is a transient (green) species and contains a high-valent iron species (known as ferryl heme, FeIV) and a porphyrin pi-cation radical, as found in horseradish peroxidase. Compound II contains only the ferryl heme.
Structural information.
The structure of pea cytosolic APX was reported in 1995. The binding interactions of soybean cytosolic APX with its physiological substrate, ascorbate, and with a number of other substrates are also known.
As of late 2007, 12 structures have been solved for this class of enzymes, with PDB accession codes 1APX, 1IYN, 1OAF, 1OAG, 1V0H, 2CL4, 2GGN, 2GHC, 2GHD, 2GHE, 2GHH, and 2GHK.
Applications in cellular imaging.
Both pea APX and soybean APX and their mutants (APEX, APEX2) have been used in electron microscopy studies for cellular imaging.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=5712595
|
571280
|
Stratification (mathematics)
|
Stratification has several usages in mathematics.
In mathematical logic.
In mathematical logic, stratification is any consistent assignment of numbers to predicate symbols guaranteeing that a unique formal interpretation of a logical theory exists. Specifically, we say that a set of clauses of the form formula_0 is stratified if and only if
there is a stratification assignment S that fulfills the following conditions:
If a predicate P is positively derived from a predicate Q (that is, P is the head of a rule in whose body Q occurs positively), then the stratification number of P must be greater than or equal to that of Q, in short formula_1.
If a predicate P is derived from a negated predicate Q (that is, P is the head of a rule in whose body Q occurs negatively), then the stratification number of P must be strictly greater than that of Q, in short formula_2.
The notion of stratified negation leads to a very effective operational semantics for stratified programs in terms of the stratified least fixpoint, that is obtained by iteratively applying the fixpoint operator to each "stratum" of the program, from the lowest one up.
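As a concrete illustration of these conditions, the following minimal Python sketch (the clause encoding and predicate names are invented for this example, and this is not taken from any standard library) attempts to build a stratification assignment S for a set of clauses, each given as a head predicate together with the predicates occurring positively and negatively in its body.

def stratify(clauses):
    """Try to build a stratification assignment S for (head, positives, negatives) clauses."""
    preds = {p for head, pos, neg in clauses for p in [head, *pos, *neg]}
    S = dict.fromkeys(preds, 0)
    changed = True
    while changed:
        changed = False
        for head, pos, neg in clauses:
            # S(P) >= S(Q) for positive body atoms, S(P) > S(Q) for negated ones
            need = max([S[q] for q in pos] + [S[q] + 1 for q in neg] + [S[head]])
            if need > S[head]:
                if need >= len(preds):   # strata can never reach the number of predicates,
                    return None          # so the clause set is not stratifiable
                S[head], changed = need, True
    return S

# p :- q, not r.   r :- q.     is stratifiable: p ends up one stratum above q and r
print(stratify([("p", ["q"], ["r"]), ("r", ["q"], [])]))
# win(X) :- move(X, Y), not win(Y).   depends negatively on itself, so no stratification
print(stratify([("win", ["move"], ["win"])]))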
Stratification is not only useful for guaranteeing unique interpretation of Horn clause
theories.
In a specific set theory.
In New Foundations (NF) and related set theories, a formula formula_3 in the language of first-order logic with equality and membership is said to be
stratified if and only if there is a function
formula_4 which sends each variable appearing in formula_3 (considered as an item of syntax) to
a natural number (this works equally well if all integers are used) in such a way that
any atomic formula formula_5 appearing in formula_3 satisfies formula_6 and any atomic formula formula_7 appearing in formula_3 satisfies formula_8.
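As a small illustrative sketch (the tuple encoding of atomic formulas used here is an assumption made for the example, not a standard notation), the two conditions above can be checked mechanically for a given assignment formula_4:

def is_stratified(atoms, sigma):
    """Check a type assignment sigma against atoms written as ("in", x, y) or ("eq", x, y)."""
    for kind, x, y in atoms:
        if kind == "in" and sigma[x] + 1 != sigma[y]:
            return False          # membership x ∈ y needs sigma(x) + 1 == sigma(y)
        if kind == "eq" and sigma[x] != sigma[y]:
            return False          # equality x = y needs equal types
    return True

# x ∈ y and y ∈ z is stratified by sigma = {x: 0, y: 1, z: 2} ...
print(is_stratified([("in", "x", "y"), ("in", "y", "z")], {"x": 0, "y": 1, "z": 2}))  # True
# ... while x ∈ x admits no stratification (sigma(x) + 1 == sigma(x) is impossible).
print(is_stratified([("in", "x", "x")], {"x": 0}))                                    # False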
It turns out that it is sufficient to require that these conditions be satisfied only when
both variables in an atomic formula are bound in the set abstract formula_9
under consideration. A set abstract satisfying this weaker condition is said to be
weakly stratified.
The stratification of New Foundations generalizes readily to languages with more
predicates and with term constructions. Each primitive predicate needs to have specified
required displacements between values of formula_4 at its (bound) arguments
in a (weakly) stratified formula. In a language with term constructions, terms themselves
need to be assigned values under formula_4, with fixed displacements from the
values of each of their (bound) arguments in a (weakly) stratified formula. Defined term
constructions are neatly handled by (possibly merely implicitly) using the theory
of descriptions: a term formula_10 (the x such that formula_3) must
be assigned the same value under formula_4 as the variable x.
A formula is stratified if and only if it is possible to assign types to all variables appearing
in the formula in such a way that it will make sense in a version TST of the theory of
types described in the New Foundations article, and this is probably the best way
to understand the stratification of New Foundations in practice.
The notion of stratification can be extended to the lambda calculus; this is found
in papers of Randall Holmes.
A motivation for the use of stratification is to address Russell's paradox, the antinomy considered to have undermined Frege's central work "Grundgesetze der Arithmetik" (1902).
In topology.
In singularity theory, there is a different meaning, of a decomposition of a topological space "X" into disjoint subsets each of which is a topological manifold (so that in particular a "stratification" defines a partition of the topological space). This is not a useful notion when unrestricted; but when the various strata are defined by some recognisable set of conditions (for example being locally closed), and fit together manageably, this idea is often applied in geometry. Hassler Whitney and René Thom first defined formal conditions for stratification. See Whitney stratification and topologically stratified space.
In statistics.
See stratified sampling.
<templatestyles src="Dmbox/styles.css" />
Index of articles associated with the same name
This includes a list of related items that share the same name (or similar names). <br> If an [ internal link] incorrectly led you here, you may wish to change the link to point directly to the intended article.
|
[
{
"math_id": 0,
"text": "Q_1 \\wedge \\dots \\wedge Q_n \\wedge \\neg Q_{n+1} \\wedge \\dots \\wedge \\neg Q_{n+m} \\rightarrow P"
},
{
"math_id": 1,
"text": "S(P) \\geq S(Q)"
},
{
"math_id": 2,
"text": "S(P) > S(Q)"
},
{
"math_id": 3,
"text": "\\phi"
},
{
"math_id": 4,
"text": "\\sigma"
},
{
"math_id": 5,
"text": "x \\in y"
},
{
"math_id": 6,
"text": "\\sigma(x)+1 = \\sigma(y)"
},
{
"math_id": 7,
"text": "x = y"
},
{
"math_id": 8,
"text": "\\sigma(x) = \\sigma(y)"
},
{
"math_id": 9,
"text": "\\{x \\mid \\phi\\}"
},
{
"math_id": 10,
"text": "(\\iota x.\\phi)"
}
] |
https://en.wikipedia.org/wiki?curid=571280
|
57128106
|
Žito
|
Court magician
Žito (, also called Ziito, fl. fourteenth century) was a court-magician of Wenceslaus IV of Bohemia.
History.
Žito was well known as a conjurer and illusionist. Reportedly, he was deformed and had a mouth that stretched from ear to ear.
Among the tales told of his prowess with sleight of hand is one in which, during an argument with a visiting juggler, he swallowed him whole, except for his shoes. He returned some time later, leading his opponent by the hand. Supposedly this occurred during the wedding of Wenceslaus and Sophia.
At a banquet, he caused a commotion outside, and when the guests went to look, he affixed deer antlers to their heads, which prevented them from drawing their heads back inside. While they struggled to remove them, he helped himself to sweets from their tables.
In another tale, he sold a butcher a dozen pigs, under the condition that they not drink from running water. When they did so, the pigs changed into kernels of corn. The butcher, angered, accosted Žito roughly, tearing one of his arms out by the roots. The argument soon attracted a crowd, and Žito called out that the butcher was actually selling human flesh in his stall. The crowd rushed to look and found the proof. When the butcher was about to be torn apart, Žito called out for the crowd to look again. They found only animal meat.
He traveled in a cart drawn by poultry.
According to ""; published in 1552 by Dubravius, he was at the end taken to Hell, in "both body and soul".
Factualness.
P. T. Barnum, while considering all stories of magicians to be fictional, points out that the story of the butcher closely resembles a story about Doctor Faustus selling a magic trick horse, before allowing the enraged buyer to pull off his foot and leg while Faustus slept.
While many of his exploits can be seen as the product of skilled illusion techniques such as misdirection and quick-change, in the past they were seen as the product of sorcery.
Mentions.
Appeared in House of Secrets' short feature "Realm of the Mystics".
Appeared on a postage stamp of the Czech Republic in 1997.
In Holub's poem "Žito the Magician", Žito can do many wondrous things, but he can't make a formula_0 greater than one.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\sin(\\alpha)"
}
] |
https://en.wikipedia.org/wiki?curid=57128106
|
57138844
|
Activation energy asymptotics
|
Asymptotic analysis of combustion
Activation energy asymptotics (AEA), also known as large activation energy asymptotics, is an asymptotic analysis used in the combustion field utilizing the fact that the reaction rate is extremely sensitive to temperature changes due to the large activation energy of the chemical reaction.
History.
The techniques were pioneered by the Russian scientists Yakov Borisovich Zel'dovich, David A. Frank-Kamenetskii and co-workers in the 1930s, in their study of premixed flames and thermal explosions (Frank-Kamenetskii theory), but did not become popular among Western scientists until the 1970s. In the early 1970s, due to the pioneering work of Williams B. Bush, Francis E. Fendell, Forman A. Williams, Amable Liñán and John F. Clarke, it became popular in the Western community, and since then it has been widely used to explain more complicated problems in combustion.
Method overview.
In combustion processes, the reaction rate formula_0 is dependent on temperature formula_1 in the following form (Arrhenius law),
formula_2
where formula_3 is the activation energy, and formula_4 is the universal gas constant. In general, the condition formula_5 is satisfied, where formula_6 is the burnt gas temperature. This condition forms the basis for activation energy asymptotics. Denoting the unburnt gas temperature by formula_7, one can define the Zel'dovich number and the heat release parameter as follows
formula_8
In addition, if we define a non-dimensional temperature
formula_9
such that formula_10 approaches zero in the unburnt region and unity in the burnt gas region (in other words, formula_11), then the ratio of the reaction rate at any temperature to the reaction rate at the burnt gas temperature is given by
formula_12
Now in the limit of formula_13 (large activation energy) with formula_14, the reaction rate is exponentially small, i.e., formula_15, and therefore negligible everywhere except where formula_16. In other words, the reaction rate is negligible everywhere except in a small region very close to the burnt gas temperature, where formula_17. Thus, in solving the conservation equations, one identifies two different regimes at leading order: an outer convective-diffusive zone and a thin inner reactive-diffusive layer.
In the convective-diffusive zone the reaction term is neglected, while in the thin reactive-diffusive layer the convective terms can be neglected, and the solutions in these two regions are stitched together by matching slopes using the method of matched asymptotic expansions. The two regimes mentioned above hold only at leading order, since the next-order corrections may involve all three transport mechanisms.
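A brief numerical illustration of this separation of scales, with assumed values beta = 10 and q = 5 chosen only for demonstration, shows that the scaled reaction rate formula_12 is exponentially small except where theta lies within O(1/beta) of unity:

import numpy as np

beta, q = 10.0, 5.0
for theta in (0.0, 0.5, 0.9, 1.0 - 1.0 / beta, 1.0):
    # scaled reaction rate exp[-beta*(1-theta)*(1+q)/(1+q*theta)]
    ratio = np.exp(-beta * (1.0 - theta) * (1.0 + q) / (1.0 + q * theta))
    print(f"theta = {theta:5.2f}   omega/omega_b = {ratio:.3e}")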
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\omega"
},
{
"math_id": 1,
"text": "T"
},
{
"math_id": 2,
"text": "\\omega(T) \\propto \\mathrm{e}^{-E_{\\rm a}/RT},"
},
{
"math_id": 3,
"text": "E_{\\rm a}"
},
{
"math_id": 4,
"text": "R"
},
{
"math_id": 5,
"text": "E_{\\rm a}/R \\gg T_b"
},
{
"math_id": 6,
"text": "T_{\\rm b}"
},
{
"math_id": 7,
"text": "T_{\\rm u}"
},
{
"math_id": 8,
"text": "\\beta = \\frac{E_{\\rm a}}{RT_{\\rm b}}\\frac{T_{\\rm b}-T_{\\rm u}}{T_{\\rm b}}, \\quad q = \\frac{T_{\\rm b}-T_{\\rm u}}{T_{\\rm u}}."
},
{
"math_id": 9,
"text": "\\theta = \\frac{T-T_{\\rm u}}{T_{\\rm b}-T_{\\rm u}},"
},
{
"math_id": 10,
"text": "\\theta"
},
{
"math_id": 11,
"text": "0\\leq\\theta\\leq 1"
},
{
"math_id": 12,
"text": "\\frac{\\omega(T)}{\\omega(T_{\\rm b})} \\propto \\frac{\\mathrm{e}^{-E_{\\rm a}/RT}}{\\mathrm{e}^{-E_{\\rm a}/RT_{\\rm b}}} = \\exp \\left[-\\beta(1-\\theta)\\frac{1+q}{1+q\\theta}\\right]."
},
{
"math_id": 13,
"text": "\\beta\\rightarrow \\infty"
},
{
"math_id": 14,
"text": "q\\sim O(1)"
},
{
"math_id": 15,
"text": "O(e^{-\\beta})"
},
{
"math_id": 16,
"text": "\\beta(1-\\theta) \\sim O(1)"
},
{
"math_id": 17,
"text": "1-\\theta \\sim O(1/\\beta)"
}
] |
https://en.wikipedia.org/wiki?curid=57138844
|
57139596
|
Divisome
|
A protein complex in bacteria responsible for cell division
The divisome is a protein complex in bacteria that is responsible for cell division, constriction of inner and outer membranes during division, and peptidoglycan (PG) synthesis at the division site. The divisome is a membrane protein complex with proteins on both sides of the cytoplasmic membrane. In gram-negative cells it is located in the inner membrane. The divisome is nearly ubiquitous in bacteria although its composition may vary between species.
The elongasome is a modified version of the divisome, without the membrane-constricting FtsZ-ring and its associated machinery. The elongasome is present only in non-spherical bacteria and directs lateral insertion of PG along the long axis of the cell, thus allowing cylindrical growth (as opposed to spherical growth, as in cocci).
History.
Some of the first cell-division genes of "Escherichia coli" were discovered by François Jacob's group in France in the 1960s. They were called "fts" genes, because mutants of these genes conferred a filamentous temperature-sensitive phenotype. At the non-permissive temperature (usually 42 °C), fts mutant cells continue to elongate without dividing, forming filaments that can be up to 150 formula_0m long (as opposed to 2-3 formula_0m in wild-type cells). Three breakthroughs came with the discovery of the ftsZ gene in 1980, the realization that the FtsZ protein is localized to the division plane of dividing cells, and finally the realization that the structure of FtsZ is remarkably similar to that of tubulin and that the two likely share a common ancestor.
Composition.
The precise composition of the divisome and elongasome remains unknown, given that they are highly dynamic protein complexes which recruit and release certain proteins during cell division. However, more than 20 proteins are known to be part of the divisome in "E. coli" with a similar number of proteins in Gram-positive bacteria (such as "Bacillus subtilis"), although not all proteins are conserved across bacteria.
Several other fts genes, such as ftsA, ftsW, ftsQ, ftsI, ftsL, ftsK, ftsN, and ftsB, were all found to be essential for cell division and to associate with the divisome complex and the FtsZ ring. FtsA protein binds directly to FtsZ in the cytoplasm, and FtsB, FtsL and FtsQ form an essential membrane-embedded subcomplex. FtsK and FtsW are larger proteins with multiple transmembrane domains. FtsI, also known as PBP3, is the divisome-specific transpeptidase required for synthesis of the division septum.
DNA replication and cell division.
DNA replication in bacteria is tightly linked to cell division. For instance, blocking replication in B. subtilis results in elongated cells without proper cell division. Bacterial DNA replication is initiated by the binding of DnaA (an ATPase) to the origin of replication (oriC) at midcell. FtsZ assembly appears to be linked to successful DNA replication with MatP and ZapB somehow coordinating interactions between the division machinery and DNA replication during chromosome segregation in "E. coli".
Assembly of the divisome.
The precise assembly process of the divisome is not well understood. It starts with the early proteins FtsZ and its membrane anchor FtsA, and the proteins ZipA, EzrA, and the Zaps (ZapA, ZapB, ZapC, ZapD) which promote FtsZ ring-formation. While FtsA and especially FtsZ are highly conserved among bacteria, ZipA, which is a second membrane anchor for FtsZ in gamma-proteobacteria, EzrA, and the Zap proteins are less well conserved and are missing in some species. After the early proteins, the FtsQLB subcomplex is added, followed by FtsI (transpeptidase), FtsW (transglycosylase), and FtsN. Both FtsI and FtsW are required for synthesis of the septal wall. FtsW is related to the putative elongation-specific transglycosylase RodA, another divisome protein. FtsN appears to have several functions: it stabilizes the divisome (at least when over-expressed), acts as a trigger for cytokinesis (via interactions with FtsI and FtsW), and activates FtsA mediated recruitment of FtsQLB through direct binding of FtsA. However, while FtsA, FtsQLB, FtsI and FtsW are widely conserved, FtsN is limited to Gram-negative organisms (such as "E. coli)" and hence is not universally required.
The mitochondrial (eukaryotic) divisome.
A protein complex that orchestrates division of eukaryotic mitochondria has been called the "mitochondrial divisome". It is conceptually and operationally similar to the bacterial cell-division machinery but consists (mostly) of different proteins. However, there seem to be some conserved aspects, e.g. in the red alga "Cyanidioschyzon merolae", a mitochondrial FtsZ protein partially constricts the organelle, which enables the dynamin homologue Dnm1 to assemble with the mitochondrion-dividing (MD) ring on the cytosolic face to induce fission. However, in many eukaryotes (including yeasts and animals), the divisome functions in the complete absence of the contractile FtsZ ring.
|
[
{
"math_id": 0,
"text": "\\mu"
}
] |
https://en.wikipedia.org/wiki?curid=57139596
|
57139943
|
Flame stretch
|
In combustion, flame stretch (formula_0) is a quantity which measures the amount of stretch of the flame surface due to curvature and due to the outer velocity field strain. The early concept of flame stretch was introduced by Karlovitz in 1953, although the correct definition was introduced by Forman A. Williams in 1975.
George H. Markstein studied flame stretch by treating the flame surface as a hydrodynamic discontinuity (known as the flame front). Flame stretch is also discussed by Bernard Lewis and Guenther von Elbe in their book. All these early discussions treated flame stretch as an effect of flow velocity gradients. However, stretch can arise even in the absence of a velocity gradient, purely due to flame curvature, so the definition required a more general formulation. The precise definition is the ratio of the rate of change of the flame surface area to the area itself
formula_1
When formula_2, the flame is stretched, otherwise compressed. Sometimes the flame stretch is defined as non-dimensional quantity
formula_3
where formula_4 is the laminar flame thickness and formula_5 is the laminar propagation speed of unstretched premixed flame.
The formula for flame stretch was first derived by John D. Buckmaster in 1979.
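As a minimal worked example (the flame radius history below is assumed for illustration and is not taken from any reference), consider an outwardly propagating spherical flame of radius r(t); its surface area is A = 4*pi*r**2, so the definition above gives K = (2/r) dr/dt:

import numpy as np

t = np.linspace(1e-3, 20e-3, 200)          # time [s]
r = 5.0 * t                                # assumed constant flame speed dr/dt = 5 m/s
A = 4.0 * np.pi * r**2                     # flame surface area
K = np.gradient(A, t, edge_order=2) / A    # stretch rate from the definition K = (1/A) dA/dt
print(K[0], 2.0 * 5.0 / r[0])              # matches 2*(dr/dt)/r at t[0]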
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "K"
},
{
"math_id": 1,
"text": "K = \\frac{1}{A}\\frac{dA}{dt}."
},
{
"math_id": 2,
"text": "K>0"
},
{
"math_id": 3,
"text": "\\tilde K = \\frac{\\delta_L}{S_L} \\frac{1}{A}\\frac{dA}{dt}"
},
{
"math_id": 4,
"text": "\\delta_L"
},
{
"math_id": 5,
"text": "S_L"
}
] |
https://en.wikipedia.org/wiki?curid=57139943
|
57141126
|
Ruzzo–Tompa algorithm
|
The Ruzzo–Tompa algorithm or the RT algorithm is a linear-time algorithm for finding all non-overlapping, contiguous, maximal scoring subsequences in a sequence of real numbers. The Ruzzo–Tompa algorithm was proposed by Walter L. Ruzzo and Martin Tompa. This algorithm is an improvement over previously known quadratic time algorithms. The maximum scoring subsequence from the set produced by the algorithm is also a solution to the maximum subarray problem.
The Ruzzo–Tompa algorithm has applications in bioinformatics, web scraping, and information retrieval.
Applications.
Bioinformatics.
The Ruzzo–Tompa algorithm has been used in Bioinformatics tools to study biological data. The problem of finding disjoint maximal subsequences is of practical importance in the analysis of DNA. Maximal subsequences algorithms have been used in the identification of transmembrane segments and the evaluation of sequence homology.
The algorithm is used in sequence alignment which is used as a method of identifying similar DNA, RNA, or protein sequences. Accounting for the ordering of pairs of high-scoring subsequences in two sequences creates better sequence alignments. This is because the biological model suggests that separate high-scoring subsequence pairs arise from insertions or deletions within a matching region. Requiring consistent ordering of high-scoring subsequence pairs increases their statistical significance.
Web scraping.
The Ruzzo–Tompa algorithm is used in Web scraping to extract information from web pages. Pasternack and Roth proposed a method for extracting important blocks of text from HTML documents. The web pages are first tokenized and the score for each token is found using local, token-level classifiers. A modified version of the Ruzzo–Tompa algorithm is then used to find the k highest-valued subsequences of tokens. These subsequences are then used as predictions of important blocks of text in the article.
Information retrieval.
The Ruzzo–Tompa algorithm has been used in Information retrieval search algorithms. Liang et al. proposed a data fusion method to combine the search results of several microblog search algorithms. In their method, the Ruzzo–Tompa algorithm is used to detect bursts of information.
Problem definition.
The problem of finding all maximal subsequences is defined as follows: Given a list of real-numbered scores formula_0, find the list of contiguous subsequences that gives the greatest total score, where the score of each subsequence is formula_1. The subsequences must be disjoint (non-overlapping) and have a positive score.
Other algorithms.
There are several approaches to solving the all maximal scoring subsequences problem. A natural approach is to use existing, linear time algorithms to find the maximum subsequence (see maximum subarray problem) and then recursively find the maximal subsequences to the left and right of the maximum subsequence. The analysis of this algorithm is similar to that of Quicksort: The maximum subsequence could be small in comparison to the rest of sequence, leading to a running time of formula_2 in the worst case.
Algorithm.
The standard implementation of the Ruzzo–Tompa algorithm runs in formula_5 time and uses "O"("n") space, where "n" is the length of the list of scores. The algorithm uses dynamic programming to progressively build the final solution by incrementally solving progressively larger subsets of the problem. The description of the algorithm provided by Ruzzo and Tompa is as follows:
Read the scores left to right and maintain the cumulative sum of the scores read. Maintain an ordered list formula_6 of disjoint subsequences. For each subsequence formula_7, record the cumulative total formula_8 of all scores up to but not including the leftmost score of formula_7, and the total formula_9 up to and including the rightmost score of formula_7.
The lists are initially empty. Scores are read from left to right and are processed as follows. Nonpositive scores require no special processing, so the next score is read. A positive score is incorporated into a new sub-sequence formula_10 of length one that is then integrated into the list by the following process:
1. The list is searched from right to left for the maximum value of formula_3 satisfying formula_11.
2. If there is no such formula_3, then add formula_10 to the end of the list.
3. If there is such a formula_3, and formula_12, then add formula_10 to the end of the list.
4. Otherwise (i.e., there is such a formula_3, but formula_13), extend the subsequence formula_10 to the left to encompass everything up to and including the leftmost score in formula_7. Delete the subsequences formula_14 from the list, and append formula_10 to the end of the list. Reconsider the newly extended subsequence formula_10 (now renumbered formula_7) as in step 1.
Once the end of the input is reached, all subsequences remaining on the list formula_4 are maximal.
The following Python code implements the Ruzzo–Tompa algorithm:
def ruzzo_tompa(scores):
    """Ruzzo–Tompa algorithm."""
    k, total, n = 0, 0.0, len(scores)
    # Allocating arrays of size n
    I, L, R = [(0, 0)] * n, [0.0] * n, [0.0] * n
    for i, s in enumerate(scores):
        total += s
        if s <= 0:
            continue
        I[k], L[k], R[k] = (i, i + 1), total - s, total  # new length-one subsequence I_k
        while True:
            # rightmost j with L_j < L_k (step 1); merge while R_j < R_k (step 4)
            j = next((j for j in range(k - 1, -1, -1) if L[j] < L[k]), None)
            if j is None or R[j] >= R[k]:
                k += 1  # steps 2-3: append I_k to the list
                break
            I[j], R[j], k = (I[j][0], i + 1), total, j  # step 4: extend I_j and reconsider
    return [scores[a:b] for a, b in I[:k]]
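As an illustrative call (the input scores are an arbitrary example, not taken from the references):

print(ruzzo_tompa([4, -5, 3, -3, 1, 2, -2, 2, -2, 1, 5]))
# -> [[4], [3], [1, 2, -2, 2, -2, 1, 5]]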
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "x_1,x_2,\\ldots,x_n"
},
{
"math_id": 1,
"text": "S_{i,j} = \\sum_{i\\leq k\\leq j} x_k"
},
{
"math_id": 2,
"text": "O(n^2)"
},
{
"math_id": 3,
"text": "j"
},
{
"math_id": 4,
"text": "I"
},
{
"math_id": 5,
"text": "O(n)"
},
{
"math_id": 6,
"text": "I_1,I_2,\\ldots,I_j"
},
{
"math_id": 7,
"text": "I_j"
},
{
"math_id": 8,
"text": "L_j"
},
{
"math_id": 9,
"text": "R_j"
},
{
"math_id": 10,
"text": "I_k"
},
{
"math_id": 11,
"text": "L_j<L_k"
},
{
"math_id": 12,
"text": "R_j \\geq R_k"
},
{
"math_id": 13,
"text": "R_j < R_k"
},
{
"math_id": 14,
"text": "I_j,I_j+1,\\ldots,I_k-1"
}
] |
https://en.wikipedia.org/wiki?curid=57141126
|
571480
|
Absorbed dose
|
Amount of energy deposited in matter by ionizing radiation
Absorbed dose is a dose quantity which is the measure of the energy deposited in matter by ionizing radiation per unit mass. Absorbed dose is used in the calculation of dose uptake in living tissue in both radiation protection (reduction of harmful effects), and radiology (potential beneficial effects, for example in cancer treatment). It is also used to directly compare the effect of radiation on inanimate matter such as in radiation hardening.
The SI unit of measure is the gray (Gy), which is defined as one joule of energy absorbed per kilogram of matter. The older, non-SI CGS unit, the rad, is sometimes also used, predominantly in the USA.
Deterministic effects.
Conventionally, in radiation protection, unmodified absorbed dose is only used for indicating the immediate health effects due to high levels of acute dose. These are tissue effects, such as in acute radiation syndrome, which are also known as deterministic effects. These are effects which are certain to happen in a short time. The time between exposure and vomiting may be used as a heuristic for quantifying a dose when more precise means of testing are unavailable.
Radiation therapy.
Dose computation.
The absorbed dose is equal to the radiation exposure (ions or C/kg) of the radiation beam multiplied by the ionization energy of the medium to be ionized.
For example, the ionization energy of dry air at 20 °C and 101.325 kPa of pressure is 33.97 J/C (33.97 eV per ion pair). Therefore, an exposure of 2.58 × 10−4 C/kg (1 roentgen) would deposit an absorbed dose of 8.76 × 10−3 J/kg (0.00876 Gy or 0.876 rad) in dry air at those conditions.
When the absorbed dose is not uniform, or when it is only applied to a portion of a body or object, an absorbed dose representative of the entire item can be calculated by taking a mass-weighted average of the absorbed doses at each point.
More precisely,
formula_0
where formula_1 is the mass-averaged absorbed dose over the region of interest formula_2; formula_3 is the absorbed dose as a function of location; formula_4 is the density as a function of location; and formula_5 is the volume of the region.
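A minimal numerical sketch of this mass-weighted average, using an assumed toy dose and density distribution on a uniform voxel grid (so the constant voxel volume cancels out of the ratio), might look as follows:

import numpy as np

rng = np.random.default_rng(1)
dose = rng.uniform(0.0, 2.0, size=(20, 20, 20))          # D(x, y, z) per voxel, in gray
density = rng.uniform(900.0, 1100.0, size=(20, 20, 20))  # rho(x, y, z) per voxel, in kg/m^3

mean_dose = np.sum(dose * density) / np.sum(density)     # mass-weighted average dose, in gray
print(mean_dose)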
Stochastic risk - conversion to equivalent dose.
For stochastic radiation risk, defined as the "probability" of cancer induction and genetic effects occurring over a long time scale, consideration must be given to the type of radiation and the sensitivity of the irradiated tissues, which requires the use of modifying factors to produce a risk factor in sieverts. One sievert carries with it a 5.5% chance of eventually developing cancer based on the linear no-threshold model. This calculation starts with the absorbed dose.
To represent stochastic risk the dose quantities equivalent dose "H"T and effective dose "E" are used, and appropriate dose factors and coefficients are used to calculate these from the absorbed dose. Equivalent and effective dose quantities are expressed in units of the sievert or rem, which implies that biological effects have been taken into account. The derivation of stochastic risk is in accordance with the recommendations of the International Commission on Radiological Protection (ICRP) and the International Commission on Radiation Units and Measurements (ICRU). The coherent system of radiological protection quantities developed by them is shown in the accompanying diagram.
For whole-body radiation with gamma rays or X-rays, the modifying factors are numerically equal to 1, which means that in that case the dose in grays equals the dose in sieverts.
Development of the absorbed dose concept and the gray.
Wilhelm Röntgen first discovered X-rays on November 8, 1895, and their use spread very quickly for medical diagnostics, particularly broken bones and embedded foreign objects where they were a revolutionary improvement over previous techniques.
Due to the wide use of X-rays and the growing realisation of the dangers of ionizing radiation, measurement standards became necessary for radiation intensity and various countries developed their own, but using differing definitions and methods. Eventually, in order to promote international standardisation, the first International Congress of Radiology (ICR) meeting in London in 1925, proposed a separate body to consider units of measure. This was called the International Commission on Radiation Units and Measurements, or ICRU, and came into being at the Second ICR in Stockholm in 1928, under the chairmanship of Manne Siegbahn.
One of the earliest techniques of measuring the intensity of X-rays was to measure their ionising effect in air by means of an air-filled ion chamber. At the first ICRU meeting it was proposed that one unit of X-ray dose should be defined as the quantity of X-rays that would produce one esu of charge in one cubic centimetre of dry air at 0 °C and 1 standard atmosphere of pressure. This unit of radiation exposure was named the roentgen in honour of Wilhelm Röntgen, who had died five years previously. At the 1937 meeting of the ICRU, this definition was extended to apply to gamma radiation. This approach, although a great step forward in standardisation, had the disadvantage of not being a direct measure of the absorption of radiation, and thereby the ionisation effect, in various types of matter including human tissue, and was a measurement only of the effect of the X-rays in a specific circumstance; the ionisation effect in dry air.
In 1940, Louis Harold Gray, who had been studying the effect of neutron damage on human tissue, together with William Valentine Mayneord and the radiobiologist John Read, published a paper in which a new unit of measure, dubbed the "gram roentgen" (symbol: gr) was proposed, and defined as "that amount of neutron radiation which produces an increment in energy in unit volume of tissue equal to the increment of energy produced in unit volume of water by one roentgen of radiation". This unit was found to be equivalent to 88 ergs in air, and made the absorbed dose, as it subsequently became known, dependent on the interaction of the radiation with the irradiated material, not just an expression of radiation exposure or intensity, which the roentgen represented. In 1953 the ICRU recommended the rad, equal to 100 erg/g, as the new unit of measure of absorbed radiation. The rad was expressed in coherent cgs units.
In the late 1950s, the CGPM invited the ICRU to join other scientific bodies to work on the development of the International System of Units, or SI. It was decided to define the SI unit of absorbed radiation as energy deposited per unit mass which is how the rad had been defined, but in MKS units it would be J/kg. This was confirmed in 1975 by the 15th CGPM, and the unit was named the "gray" in honour of Louis Harold Gray, who had died in 1965. The gray was equal to 100 rad, the cgs unit.
Other uses.
Absorbed dose is also used to manage the irradiation and measure the effects of ionising radiation on inanimate matter in a number of fields.
Component survivability.
Absorbed dose is used to rate the survivability of devices such as electronic components in ionizing radiation environments.
Radiation hardening.
The measurement of absorbed dose absorbed by inanimate matter is vital in the process of radiation hardening which improves the resistance of electronic devices to radiation effects.
Food irradiation.
Absorbed dose is the physical dose quantity used to ensure irradiated food has received the correct dose to ensure effectiveness. Variable doses are used depending on the application and can be as high as 70 kGy.
Radiation-related quantities.
The following table shows radiation quantities in SI and non-SI units:
Although the United States Nuclear Regulatory Commission permits the use of the units curie, rad, and rem alongside SI units, the European Union European units of measurement directives required that their use for "public health ... purposes" be phased out by 31 December 1985.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\overline{D_T} = \\frac{\\displaystyle \\int_{T} D(x,y,z) \\, \\rho(x,y,z) \\, dV} {\\displaystyle \\int_{T} \\rho(x,y,z) \\, dV}"
},
{
"math_id": 1,
"text": "\\overline{D_T}"
},
{
"math_id": 2,
"text": "T"
},
{
"math_id": 3,
"text": "D(x,y,z)"
},
{
"math_id": 4,
"text": "\\rho(x,y,z)"
},
{
"math_id": 5,
"text": "V"
}
] |
https://en.wikipedia.org/wiki?curid=571480
|
57151064
|
Variety of finite semigroups
|
In mathematics, and more precisely in semigroup theory, a variety of finite semigroups is a class of semigroups having some nice algebraic properties. Those classes can be defined in two distinct ways, using either algebraic notions or topological notions. Varieties of finite monoids, varieties of finite ordered semigroups and varieties of finite ordered monoids are defined similarly.
This notion is very similar to the general notion of variety in universal algebra.
Definition.
Two equivalent definitions are now given.
Algebraic definition.
A variety "V" of finite (ordered) semigroups is a class of finite (ordered) semigroups that:
The first condition is equivalent to stating that "V" is closed under taking subsemigroups and under taking quotients. The second property implies that the empty product—that is, the trivial semigroup of one element—belongs to each variety. Hence a variety is necessarily non-empty.
A variety of finite (ordered) monoids is a variety of finite (ordered) semigroups whose elements are monoids. That is, it is a class of (ordered) monoids satisfying the two conditions stated above.
Topological definition.
In order to give the topological definition of a variety of finite semigroups, some other definitions related to profinite words are needed.
Let "A" be an arbitrary finite alphabet. Let "A"+ be its free semigroup. Then let formula_0 be the set of profinite words over "A". Given a semigroup morphism formula_1, let formula_2 be the unique continuous extension of formula_3 to formula_0.
A profinite identity is a pair "u" and "v" of profinite words. A semigroup "S" is said to satisfy the profinite identity "u" = "v" if, for each semigroup morphism formula_1, the equality formula_4 holds.
A variety of finite semigroups is the class of finite semigroups satisfying a set of profinite identities "P".
A variety of finite monoids is defined like a variety of finite semigroups, with the difference that one should consider monoid morphisms formula_5 instead of semigroup morphisms formula_6.
A variety of finite ordered semigroups/monoids is also given by a similar definition, with the difference that one should consider morphisms of ordered semigroups/monoids.
Examples.
A few examples of classes of semigroups are given. The first examples use finite identities, that is, profinite identities whose two words are finite words. The next example uses profinite identities. The last one is an example of a class that is not a variety.
More examples are given in the article Special classes of semigroups.
Using finite identities.
More generally, given a profinite word "u" and a letter "x", the profinite equality "ux" = "xu" states that the set of possible images of "u" contains only elements of the centralizer. Similarly, "ux" = "x" states that the set of possible images of "u" contains only left identities. Finally "ux" = "u" states that the set of possible images of "u" is composed of left zeros.
Using profinite identities.
Examples using profinite words that are not finite are now given.
Given a profinite word, "x", let formula_7 denote formula_8. Hence, given a semigroup morphism formula_1, formula_9 is the only idempotent power of formula_10. Thus, in profinite equalities, formula_7 represents an arbitrary idempotent.
The class "G" of finite groups is a variety of finite semigroups. Note that a finite group can be defined as a finite semigroup, with a unique idempotent, which in addition is a left and right identity. Once those two properties are translated in terms of profinite equality, one can see that the variety "G" is defined by the set of profinite equalities formula_11
Classes that are not varieties.
Note that the class of finite monoids is not a variety of finite semigroups. Indeed, this class is not closed under subsemigroups. To see this, take any finite semigroup "S" that is not a monoid. It is a subsemigroup of the monoid "S"1 formed by adjoining an identity element.
Reiterman's theorem.
Reiterman's theorem states that the two definitions above are equivalent. A scheme of the proof is now given.
Given a variety "V" of semigroups as in the algebraic definition, one can choose the set "P" of profinite identities to be the set of profinite identities satisfied by every semigroup of "V".
Reciprocally, given a profinite identity "u" = "v", one can remark that the class of semigroups satisfying this profinite identity is closed under subsemigroups, quotients, and finite products. Thus this class is a variety of finite semigroups. Furthermore, varieties are closed under arbitrary intersection; thus, given an arbitrary set "P" of profinite identities "ui" = "vi", the class of semigroups satisfying "P" is the intersection of the classes of semigroups satisfying each of those profinite identities. That is, it is an intersection of varieties of finite semigroups, and thus a variety of finite semigroups.
Comparison with the notion of variety of universal algebra.
The definition of a variety of finite semigroups is inspired by the notion of a variety of universal algebras. We recall the definition of a variety in universal algebra. Such a variety is, equivalently:
a class of algebras closed under homomorphic images, subalgebras and arbitrary direct products, or
a class of algebras defined by a set of identities (the two characterizations coincide by Birkhoff's HSP theorem).
The main differences between the two notions of variety are now given. In this section "variety of (arbitrary) semigroups" means "the class of semigroups as a variety of universal algebra over the vocabulary of one binary operator". It follows from the definitions of those two kinds of varieties that, for any variety "V" of (arbitrary) semigroups, the class of finite semigroups of "V" is a variety of finite semigroups.
We first give an example of a variety of finite semigroups that is not similar to any subvariety of the variety of (arbitrary) semigroups. We then give the difference between the two definitions using identities. Finally, we give the difference between the algebraic definitions.
As shown above, the class of finite groups is a variety of finite semigroups. However, the class of groups is not a subvariety of the variety of (arbitrary) semigroups. Indeed, formula_12 is a monoid that is an infinite group, but its submonoid formula_13 is not a group. Since the class of (arbitrary) groups contains a semigroup and does not contain one of its subsemigroups, it is not a variety. The main difference between the finite case and the infinite case, when groups are considered, is that a submonoid of a finite group is a finite group, whereas a submonoid of an infinite group need not be a group.
The class of finite groups is a variety of finite semigroups, while it is not a subvariety of the variety of (arbitrary) semigroups. Thus, Reiterman's theorem shows that this class can be defined using profinite identities. And Birkhoff's HSP theorem shows that this class can not be defined using identities (of finite words). This illustrates why the definition of a variety of finite semigroups uses the notion of profinite words and not the notion of identities.
We now consider the algebraic definitions of varieties. Requiring that varieties are closed under arbitrary direct products implies that a variety is either trivial or contains infinite structures. In order to restrict varieties to contain only finite structures, the definition of variety of finite semigroups uses the notion of finite product instead of notion of arbitrary direct product.
|
[
{
"math_id": 0,
"text": "\\hat A"
},
{
"math_id": 1,
"text": "\\phi:A^+\\to S"
},
{
"math_id": 2,
"text": "\\hat\\phi:\\hat A\\to S"
},
{
"math_id": 3,
"text": "\\phi"
},
{
"math_id": 4,
"text": "\\hat\\phi(u)=\\hat\\phi(v)"
},
{
"math_id": 5,
"text": "\\phi:A^*\\to M"
},
{
"math_id": 6,
"text": "\\phi:A^+\\to M"
},
{
"math_id": 7,
"text": "x^\\omega"
},
{
"math_id": 8,
"text": "\\lim_{n\\to\\infty} x^{n!}"
},
{
"math_id": 9,
"text": "\\hat\\phi(x^\\omega)"
},
{
"math_id": 10,
"text": "\\phi(x)"
},
{
"math_id": 11,
"text": " \\{ x^\\omega=y^\\omega \\text{ and } x^\\omega y=yx^\\omega=y\\}."
},
{
"math_id": 12,
"text": "\\langle\\mathbb Z,+\\rangle"
},
{
"math_id": 13,
"text": "\\langle\\mathbb N,+\\rangle"
}
] |
https://en.wikipedia.org/wiki?curid=57151064
|
57152266
|
Neoclassical transport
|
Type of plasma diffusion
In plasma physics and magnetic confinement fusion, neoclassical transport or neoclassical diffusion is a theoretical description of collisional transport in toroidal plasmas, usually found in tokamaks or stellarators. It is a modification of classical diffusion adding in effects of non-uniform magnetic fields due to the toroidal geometry, which give rise to new diffusion effects.
Description.
Classical transport models a plasma in a magnetic field as a large number of particles traveling in helical paths around a line of force. In typical reactor designs, the lines are roughly parallel, so particles orbiting adjacent lines may collide and scatter. This results in a random walk process which eventually leads to the particles finding themselves outside the magnetic field.
Neoclassical transport adds the effects of the geometry of the fields. In particular, it considers the field inside the tokamak and similar toroidal arrangements, where the field is stronger on the inside curve than the outside simply due to the magnets being closer together in that area. To even out these forces, the field as a whole is twisted into a helix, so that the particles alternately move from the inside to the outside of the reactor.
In this case, as the particle transits from the outside to the inside, it sees an increasing magnetic force. If the particle energy is low, this increasing field may cause the particle to reverse directions, as in a magnetic mirror. The particle now travels in the reverse direction through the reactor, to the outside limit, and then back towards the inside where the same reflection process occurs. This leads to a population of particles bouncing back and forth between two points, tracing out a path that looks like a banana from above, the so-called banana orbits.
Since any particle in the long tail of the Maxwell–Boltzmann distribution is subject to this effect, there is always some natural population of such banana particles. Since these travel in the reverse direction for half of their orbit, their drift behavior is oscillatory in space. Therefore, when the particles collide, their average step size (width of the banana) is much larger than their gyroradius, leading to neoclassical diffusion across the magnetic field.
Trapped particles and banana orbits.
A consequence of the toroidal geometry for the guiding-center orbits is that some particles can be reflected on the trajectory from the outboard side to the inboard side due to the presence of magnetic field gradients, similar to a magnetic mirror. The reflected particles cannot complete a full turn in the poloidal plane and are trapped, following the so-called banana orbits.
This can be demonstrated by considering tokamak equilibria for low-formula_0 and large aspect ratio which have nearly circular cross sections, where polar coordinates formula_1 centered at the magnetic axis can be used with formula_2 approximately describing the flux surfaces. The magnitude of the total magnetic field can be approximated by the following expression:formula_3where the subscript formula_4 indicates value at the magnetic axis formula_5, formula_6 is the major radius, formula_7 is the inverse aspect ratio, and formula_8 is the magnetic field. The parallel component of the drift-ordered guiding-center orbits in this magnetic field, assuming no electric field, is given by:
formula_9
where formula_10 is the particle mass, formula_11 is the velocity, and formula_12 is the magnetic moment (first adiabatic invariant). The direction in the subscript indicates parallel or perpendicular to the magnetic field. formula_13 is the effective potential reflecting the conservation of kinetic energy formula_14.
The parallel trajectory experiences a mirror force, whereby a particle moving into a magnetic field of increasing magnitude can be reflected. If a magnetic field has a minimum along a field line, the particles in this region of weaker field can be trapped. This is indeed the case for the form of formula_8 used here. Particles with sufficiently large perpendicular velocity, formula_15, are reflected (trapped particles), while the others complete their poloidal turn (passing particles).
To see this in detail, the minimum and maximum of the effective potential can be identified as formula_16 and formula_17, respectively. The passing particles have formula_18 and the trapped particles have formula_19. Recognising this and defining a constant of motion formula_20, we have formula_21 for passing particles and formula_22 for trapped particles.
Orbit width.
The orbit width formula_23 can be estimated by considering the variation in formula_24 over an orbit, formula_25, where Ω_p is the gyrofrequency in the poloidal magnetic field. Using the conservation of formula_26 and formula_27,formula_28The orbit widths can then be estimated, which gives formula_29 for passing particles and formula_30 for trapped (banana) particles, where q is the safety factor and ρ is the gyroradius.
The bounce angle formula_31 at which formula_24 becomes zero for the trapped particles isformula_32
Bounce time.
The bounce time formula_33 is the time required for a particle to complete its poloidal orbit. It is calculated byformula_34where formula_35. The integral can be rewritten asformula_36where formula_37 and formula_38, which is also equivalent to formula_39 for trapped particles. This can be evaluated using the complete elliptic integral of the first kindformula_40with propertiesformula_41The bounce time for passing particles is obtained by integrating over formula_42, giving formula_43, while the bounce time for trapped particles is obtained by integrating over formula_44 and taking formula_45, giving formula_46. The limiting cases are formula_47 for strongly passing particles, formula_48 for deeply trapped particles, and formula_49 at the trapped-passing boundary (barely trapped particles), where the bounce time diverges.
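The closed forms above can be evaluated numerically. The following sketch uses assumed, illustrative values of q, R, ε and the particle speed v (which enters through the prefactor of the integral formula_36), and checks the strongly passing limit against formula_47:

import numpy as np
from scipy.special import ellipk   # ellipk(m): complete elliptic integral K with parameter m = k**2

q, R, eps, v = 2.0, 1.65, 0.1, 1.0e6      # assumed safety factor, major radius [m], r/R0, speed [m/s]

def bounce_time(lam):
    """Bounce/transit time for pitch parameter lam = mu*B0/E (prefactor from the integral form)."""
    k2 = (1.0 - lam * (1.0 - eps)) / (2.0 * eps * lam)
    pref = q * R / (v * np.sqrt(2.0 * eps * lam))
    if k2 > 1.0:                           # passing particle: completes a full poloidal turn
        return 4.0 * pref * ellipk(1.0 / k2) / np.sqrt(k2)
    return 8.0 * pref * ellipk(k2)         # trapped particle: banana orbit

# Strongly passing (lam -> 0): should approach 2*pi*q*R/v_parallel ~ 2*pi*q*R/v
print(bounce_time(1e-3), 2.0 * np.pi * q * R / v)
# Barely trapped (k -> 1): the bounce time diverges logarithmically
print(bounce_time(1.0 / (1.0 + eps) + 1e-9))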
|
[
{
"math_id": 0,
"text": "\\beta"
},
{
"math_id": 1,
"text": "(r,\\theta)"
},
{
"math_id": 2,
"text": "r=\\text{constant}"
},
{
"math_id": 3,
"text": "B \\approx B_{0}(1- \\epsilon \\cos{\\theta})"
},
{
"math_id": 4,
"text": "0"
},
{
"math_id": 5,
"text": "(r=0)"
},
{
"math_id": 6,
"text": "R"
},
{
"math_id": 7,
"text": "\\epsilon=r/R_{0}"
},
{
"math_id": 8,
"text": "B"
},
{
"math_id": 9,
"text": " \nm\\dot{v}_{\\parallel}= -\\mu \\nabla_{\\parallel}B = - \\nabla_{\\parallel}(U(\\theta))\n\n"
},
{
"math_id": 10,
"text": "m"
},
{
"math_id": 11,
"text": "\\boldsymbol{v}"
},
{
"math_id": 12,
"text": "\\mu=mv_{\\perp}^{2}/2B"
},
{
"math_id": 13,
"text": "U(\\theta)=\\mu B_{0}(1-\\epsilon \\cos{\\theta})"
},
{
"math_id": 14,
"text": "\\mathcal{E} = mv_{\\parallel}^{2}/2 + mv_{\\perp}^{2}/2 = mv_{\\parallel}^{2}/2 + U = \\text{constant} "
},
{
"math_id": 15,
"text": "v_{\\perp} > v_{\\parallel}"
},
{
"math_id": 16,
"text": "U_{\\min}=\\mu B_{0} (1 - \\epsilon)"
},
{
"math_id": 17,
"text": "U_{\\max}=\\mu B_{0} (1 + \\epsilon)"
},
{
"math_id": 18,
"text": "\\mathcal{E} > U_{\\max}"
},
{
"math_id": 19,
"text": "U_{\\min} < \\mathcal{E} \\leq U_{\\max}"
},
{
"math_id": 20,
"text": "\\lambda = \\mu B_{0}/\\mathcal{E} \\geq 0"
},
{
"math_id": 21,
"text": "0 \\leq \\lambda < 1 - \\epsilon"
},
{
"math_id": 22,
"text": "1 - \\epsilon < \\lambda \\leq 1+\\epsilon"
},
{
"math_id": 23,
"text": "\\Delta r"
},
{
"math_id": 24,
"text": "v_{\\parallel}"
},
{
"math_id": 25,
"text": "\\Delta r \\sim \\Delta v_{\\parallel}/\\Omega_{\\text{p}}"
},
{
"math_id": 26,
"text": "\\mathcal{E}"
},
{
"math_id": 27,
"text": "\\mu"
},
{
"math_id": 28,
"text": "v_{\\parallel} = \\pm v\\sqrt{1- \\lambda B / B_{0}} \\approx \\pm v \\sqrt{1-\\lambda(1-\\epsilon \\cos{\\theta})}"
},
{
"math_id": 29,
"text": "\\Delta r_{\\text{p}} \\sim q \\rho"
},
{
"math_id": 30,
"text": "\\Delta r_{\\text{b}} \\sim q \\rho / \\sqrt{\\epsilon}"
},
{
"math_id": 31,
"text": "\\theta_{\\text{b}}"
},
{
"math_id": 32,
"text": "v_{\\parallel}(\\theta_{\\text{b}})=0 \\quad \\Rightarrow \\quad \\cos{\\theta_{\\text{b}}}= \\frac{\\lambda -1}{\\epsilon \\lambda}"
},
{
"math_id": 33,
"text": "\\tau_{\\text{b}}"
},
{
"math_id": 34,
"text": "\\tau_{\\text{b}}=\\int \\text{d}t = \\oint \\frac{\\text{d}\\theta}{\\dot{\\theta}}=\\oint \\frac{\\text{d}\\theta}{v_{\\parallel} \\boldsymbol{b} \\cdot \\nabla \\theta} \\simeq \\frac{B}{B_{\\theta}} \\oint \\frac{r\\text{d}\\theta}{\\sigma v \\sqrt{1-\\lambda(1-\\epsilon \\cos{\\theta}})}"
},
{
"math_id": 35,
"text": "\\sigma = \\pm1"
},
{
"math_id": 36,
"text": "\\tau_{\\text{b}} \\simeq \\frac{qR}{v\\sqrt{2\\epsilon \\lambda}} \\oint \\frac{\\text{d}\\theta}{\\sigma \\sqrt{k^{2} - \\sin^{2}(\\theta/2)}}"
},
{
"math_id": 37,
"text": "q=rB_{\\phi}/RB_{\\theta}"
},
{
"math_id": 38,
"text": "k^{2} \\equiv [1-\\lambda(1-\\epsilon)] / 2\\epsilon \\lambda"
},
{
"math_id": 39,
"text": "\\sin^{2}(\\theta_{\\text{b}}/2)"
},
{
"math_id": 40,
"text": "K(k) \\equiv \\int_{0}^{\\pi/2} \\frac{\\text{d}x}{\\sqrt{1-k^{2}\\sin^{2}x}}, \\quad 0<k \\leq 1"
},
{
"math_id": 41,
"text": "\\begin{align}\nK(k) & = \\frac{\\pi}{2} (1+ \\mathcal{O}(k^{2})) \\quad &\\text{for} \\quad k \\rightarrow 0\\\\\nK(k) & \\rightarrow \\ln{\\frac{4}{\\sqrt{1-k^{2}}}} \\quad &\\text{for} \\quad k \\rightarrow 1\n\\end{align}"
},
{
"math_id": 42,
"text": "[0,2 \\pi]"
},
{
"math_id": 43,
"text": "\\tau_{b}=\\frac{4qR}{\\sigma \\sqrt{2\\epsilon \\lambda}} \\frac{K(k^{-1})}{k}"
},
{
"math_id": 44,
"text": "[0,\\theta_{\\text{b}}]"
},
{
"math_id": 45,
"text": "\\lambda \\approx 1"
},
{
"math_id": 46,
"text": "\\tau_{b}=\\frac{8qR}{\\sigma \\sqrt{2\\epsilon}} K(k)"
},
{
"math_id": 47,
"text": "k \\rightarrow \\infty \\quad \\Rightarrow \\quad K(k^{-1}) \\rightarrow \\pi /2 \\quad \\Rightarrow \\quad \\tau_{\\text{b}} \\rightarrow 2 \\pi q R / v_{\\parallel} "
},
{
"math_id": 48,
"text": "k \\rightarrow 0 \\quad \\Rightarrow \\quad K(k) \\rightarrow \\pi /2 \\quad \\Rightarrow \\quad \\tau_{\\text{b}} \\rightarrow (2 \\pi q R / v_{\\parallel} )\\sqrt{2/\\epsilon} "
},
{
"math_id": 49,
"text": "k \\rightarrow 1 \\quad \\Rightarrow \\quad K(k) \\rightarrow \\infty \\quad \\Rightarrow \\quad \\tau_{\\text{b}} \\rightarrow \\infty "
}
] |
https://en.wikipedia.org/wiki?curid=57152266
|
57153310
|
Genome architecture mapping
|
In molecular biology, genome architecture mapping (GAM) is a cryosectioning method to map colocalized DNA regions in a ligation-independent manner. It overcomes some limitations of Chromosome conformation capture (3C), as those methods rely on digestion and ligation to capture interacting DNA segments. GAM is the first genome-wide method for capturing three-dimensional proximities between any number of genomic loci without ligation.
The sections found using the cryosectioning method mentioned above are referred to as nuclear profiles. The information they provide relates to their coverage across the genome: a large set of values can be produced that represents how strongly each nuclear profile is detected across the genome. Based on how large or small this coverage is, judgements can be made about chromatin interactions, the location of a nuclear profile within the nucleus being cryosectioned, and chromatin compaction levels.
To visualize this information, various methods can be applied to the raw data, which is given as a table showing whether or not each nuclear profile is detected in each genomic window, the genomic windows being taken from a particular chromosome. With a 1 representing a detection within a window and a 0 representing no detection, subsets of the data can be extracted and interpreted by creating graphs, charts, heatmaps, and other visualizations that allow these subsets to be seen in ways that go beyond the binary detection table. By using a more graphical approach to interpreting the data obtained with cryosectioning, it is possible to see interactions that would otherwise not have been noticed.
Some examples of how these visuals can be interpreted include bar graphs showing the radial position and chromatin compaction levels of nuclear profiles, which can be split into categories to give a general picture of how often nuclear profiles are detected within a genomic window. A radar chart is a circular graph that represents percentages of occurrence across a number of variables. For genomic data, radar charts can be used to show how genomic windows are represented within "features" of the genome, that is, within certain regions that make it up. These charts can be made to compare groups of nuclear profiles with each other, and their differences in how often they occur within these features are shown graphically. Heatmaps are another form of visual representation in which the individual values in a table are shown as cells that take on different colors based on their value. This allows trends within a table to be seen through groups of similar colors, or the lack thereof.
The heatmap to the right represents the relationship between nuclear profiles based on a calculated Jaccard index, where the values ranging from 0 to 1 give the degree of similarity between two nuclear profiles. Showing this similarity can help to display where certain groups of nuclear profiles are more common within a genome. In this heatmap the diagonal white line of cells is expected, because these cells indicate where nuclear profiles are compared with themselves and are therefore maximally similar, which gives them a value of 1. In addition to the white diagonal line of cells, a cluster of other lightly colored cells can be observed in the bottom right of the heatmap. This grouping of nuclear profiles displays high similarity under the Jaccard index, meaning that these nuclear profiles share a greater number of genomic windows than others.
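A minimal sketch of how such a Jaccard similarity matrix can be computed from a segregation table follows; the toy random table is an assumption for illustration and this is not GAMtools code.

import numpy as np

rng = np.random.default_rng(0)
seg = (rng.random((1000, 30)) < 0.2).astype(int)   # toy table: 1000 windows x 30 nuclear profiles

def jaccard_matrix(seg):
    """Return an NP-by-NP matrix of Jaccard indices (intersection / union of detected windows)."""
    inter = seg.T @ seg                              # windows detected in both NPs of each pair
    counts = seg.sum(axis=0)                         # windows detected per NP
    union = counts[:, None] + counts[None, :] - inter
    return inter / np.maximum(union, 1)              # empty unions give similarity 0

J = jaccard_matrix(seg)   # the diagonal is 1.0: each NP is identical to itself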
The bar graph to the right represents the percentage of nuclear profiles that belong to each category of radial position (with 5 being strongly equatorial and 1 being strongly apical). The clusters of nuclear profiles were calculated based on their similarity to each other using a k-means clustering method. To begin the process, three nuclear profiles were chosen at random as the 'centers' of the clusters. After the centers were chosen, every other nuclear profile was assigned to a cluster based on its distance from each center, using a calculated distance value. New centers were then chosen to better represent each cluster. This process was repeated until the centers at the start of an iteration matched the centers at the end. Once the cluster centers no longer change, the clusters can be considered stable. Within each of these clusters the nuclear profiles were then given a value from 1 to 5 based on their radial position, and this data was used to produce the bar graph.
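A brief sketch of this clustering step, using scikit-learn's KMeans in place of the hand-rolled k-means described above (the toy segregation table and the choice of three clusters are assumptions for illustration):

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
seg = (rng.random((1000, 30)) < 0.2).astype(int)   # toy table: 1000 windows x 30 nuclear profiles

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(seg.T)
# labels[i] is the cluster (0, 1 or 2) assigned to nuclear profile i; the per-cluster
# percentages of radial-position categories (1-5) can then be tabulated for the bar graph.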
The radar chart to the right shows, for three clusters of nuclear profiles, the percentage of occurrence within certain features of the mouse genome. Each cluster of nuclear profiles was calculated using the k-means clustering technique described above for the bar graph of radial positions. Comparisons can be made between the clusters and how much more or less often they appear in certain features relative to each other. To calculate a cluster's presence within a certain feature, it is determined whether a nuclear profile is present within a window that is detected within that feature. The percentage of how often the nuclear profiles within a cluster occur in the windows detected within a feature is then displayed by the radar chart.
Cryosection and laser microdissection.
Cryosections are produced according to the Tokuyasu method, involving stringent fixation to preserve nuclear and cellular architecture and cryoprotection with a sucrose-PBS solution, before freezing in liquid nitrogen. In Genome Architecture Mapping, sectioning is a necessary step for exploring the 3D topology of the genome, before laser microdissection. Laser microdissection can then isolate each nuclear profile, before DNA extraction and sequencing.
Data analysis - bioinformatic tools.
GAMtools.
GAMtools is a collection of software utilities for Genome Architecture Mapping data developed by Robert Beagrie. Bowtie2 is required before running GAMtools. The input required for this program is in Fastq format. The software has a variety of features, and the exact commands to use will depend on what you want to do with it; however, most features require generating a segregation table, so for most users the first steps will be to download or create input data and perform the sequence mapping. This will generate a segregation table, which can then be used to perform the various other operations outlined below. For further information, view the GAMtools documentation.
Mapping the sequencing data.
The GAMtools command gamtools process_nps can be used to perform the mapping. It maps the raw sequence data from the nuclear profiles. GAMtools also provides the option to perform quality control checks on the NPs. This option can be enabled by adding the flag -c/--do-qc to the previous command. When the quality control check is enabled, GAMtools will try to exclude poor quality nuclear profiles.
Windows calling and segregation table.
After the mapping has finished, GAMtools will compute the number of reads from each nuclear profile which overlap with each window in the background genome file. The default window size is 50 kb. This is all done by the same process_nps command. After this, it generates a segregation table.
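To illustrate the shape of the resulting data, the sketch below turns a matrix of per-window read counts into a 0/1 segregation table. The simple read-count threshold shown is only an assumption for illustration and is not the actual GAMtools windows-calling algorithm.
<syntaxhighlight lang="python">
def call_windows(read_counts, min_reads=1):
    """Turn per-NP, per-window read counts into a 0/1 segregation table.

    read_counts: list of rows, one per nuclear profile (NP), each row a
    list of integer read counts, one per genomic window.
    """
    return [[1 if count >= min_reads else 0 for count in row]
            for row in read_counts]

# Example with 2 NPs and 3 windows:
segregation_table = call_windows([[0, 5, 2], [3, 0, 0]])
# segregation_table == [[0, 1, 1], [1, 0, 0]]
</syntaxhighlight>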
Producing proximity matrices.
The command for this process is gamtools matrix. The input file is the segregation table calculated in the windows-calling step. GAMtools calculates these matrices using the normalized linkage disequilibrium, which means that it looks at how many times each pair of windows is detected by the same NP and then normalizes the result based on how many times each window was detected across all NPs. The figure below shows an example of a proximity matrix heatmap produced using GAMtools.
Calculating chromatin compaction.
The GAMtools command gamtools compaction can be used to calculate an estimate of chromatin compaction. Compaction is a value assigned to a genomic locus that reflects how densely that locus is packed, and it is inversely proportional to the locus volume: genomic loci with a low volume are said to have a high level of compaction, and loci with a high volume have a low level of compaction. As shown in the figure, loci with a low compaction level are expected to be intersected more often by the cryosection slices. GAMtools uses this information to assign a compaction value to each locus based on its detection frequency across many nuclear profiles. The compaction of a locus is not static, and changes continually throughout the life of the cell; genomic loci are thought to be de-compacted when the corresponding gene is active. This allows a researcher to infer which genes are currently active in a cell from the GAMtools results, and a locus with low compaction is likewise thought to be related to transcriptional activity. The time-complexity of the compaction command is "O"("m" × "n"), where m is the number of genomic windows, and n is the number of nuclear profiles.
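The sketch below illustrates the general idea of scoring compaction from detection frequencies; it is not the formula implemented by GAMtools, only an assumed inverse-frequency proxy, and it makes the O(m × n) cost explicit by visiting every window for every nuclear profile.
<syntaxhighlight lang="python">
def compaction_scores(segregation_table):
    """Toy compaction estimate: windows (loci) detected in more nuclear
    profiles are assumed to occupy a larger volume, i.e. to be less compact.

    segregation_table: list of rows, one per nuclear profile, each row a
    list of 0/1 flags, one per genomic window.
    Returns one score per window in [0, 1]; higher means more compact.
    """
    n_profiles = len(segregation_table)
    n_windows = len(segregation_table[0])
    scores = []
    for w in range(n_windows):                                   # m windows ...
        detections = sum(row[w] for row in segregation_table)    # ... times n profiles
        detection_frequency = detections / n_profiles
        scores.append(1.0 - detection_frequency)                 # assumed inverse relationship
    return scores
</syntaxhighlight>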
Calculating radial position.
GAMtools can be used to calculate the radial position of NPs. The radial position of an NP is a measure of how near or far that NP is from the equator or center of the nucleus. NPs that are close to the center of the nucleus are considered equatorial, whereas NPs that are closer to the edge of the nucleus are considered apical. The GAMtools command to calculate radial positioning is gamtools radial_pos, which requires a previously generated segregation table. The radial position is estimated from the average size of the NPs that contain a given chromatin region: chromatin regions closer to the periphery will typically be intersected by smaller, more apical NPs, whereas central chromatin will be intersected by larger, equatorial NPs.
In order to estimate the size of each NP, GAMtools looks at the number of windows each NP saw, as NPs that saw more windows can be assumed to be larger in volume. This is very similar to the method used to estimate chromatin compaction. The figure to the right illustrates how GAMtools looks at each NP's detection rate to estimate the volume, in order to determine the compaction or the radial position. If we look at the first NP, we see that it intersects all three windows, so we can estimate that it is one of the largest NPs. The second NP intersects two out of the three windows, so we can estimate that it is smaller than the first NP. The third NP only intersects one out of the three windows, so we can estimate that it is the smallest NP. Now that we have an estimation of the size of each NP, we can estimate the radial position. If we assume that the larger NPs are more equatorial, then we find that the first NP is the most equatorial, the second NP is the second most equatorial, and the third NP is the most apical.
Here is some pseudocode that illustrates how one might calculate the radial position of a list of NPs:
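One possible rendering of that pseudocode, following the complexity description in the next paragraph, is the Python sketch below; the function name and the convention that the segregation table is a list of 0/1 rows (one row per NP, one column per window) are assumptions.
<syntaxhighlight lang="python">
def radial_positions(segregation_table):
    """Estimate a radial position for each NP, from 0 (most apical)
    to 1 (most equatorial), using the number of windows it detected
    as a proxy for its size."""
    # First loop: count how many windows each NP detected.
    window_counts = []
    for np_row in segregation_table:          # n iterations
        count = 0
        for detected in np_row:               # m iterations
            if detected:
                count += 1
        window_counts.append(count)

    # Second loop: normalize by the largest count so values lie in [0, 1];
    # larger (assumed more equatorial) NPs get values closer to 1.
    largest = max(window_counts)
    radial = []
    for count in window_counts:               # n iterations
        radial.append(count / largest if largest else 0.0)
    return radial
</syntaxhighlight>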
This pseudocode creates a list of radial positions ranging from 0 to 1, where 1 is the most equatorial and 0 is the most apical. The time complexity of this pseudocode is O(n × m), where n is the number of NPs and m is the number of windows. The first for loop runs for n iterations and contains an inner for loop of m iterations, so its time complexity is O(n × m). The second for loop runs for n iterations, so its time complexity is O(n). The overall time complexity is therefore O(n × m + n), which reduces to O(n × m).
Data analysis methods.
Overview.
The above flowchart shows a general process of how data may be derived from GAM analysis. Circles represent processes that may be performed, and squares represent pieces of data.
The first step of GAM analysis is the cryosectioning and examination of cells. This process results in a collection of nucleus slices (nuclear profiles) which contain pieces of DNA (genomic windows). These nuclear profiles are then examined so that a segregation table may be formed. Segregation tables are the foundation of GAM analysis. They contain information detailing which genomic loci appear within each nuclear profile.
An example of data analysis not given below would be clustering. For example, nuclear profiles that contain similar genomic loci could be clustered together by k-means clustering or some variation of it. K-means works well for this particular problem in the sense that it clusters every nuclear profile according to a similarity measure, but it also has drawbacks. The standard (Lloyd's) k-means heuristic has time complexity O(tknd), where "t" is the number of iterations, "k" is the number of means, "n" is the number of data points, and "d" is the number of dimensions for each data point; finding an exactly optimal k-means clustering is, however, NP-hard. As such, the method does not scale well to large data sets and is more suited to subsets of data.
For further analysis, GAMtools may be used. GAMtools is a suite of software tools which can be used to extrapolate data from the segregation table, some of the results of which will be discussed below.
Cosegregation, or linkage, can be determined by observing how often two genomic loci appear together in the same nuclear profile. This data can show which loci are physically close to each other in 3D space, and which loci interact with each other regularly, which can help explain DNA transcription.
SLICE is a method of predicting specific interactions among genomic loci. It uses statistical data derived from cosegregation data.
Finally, graph analysis can be applied to the segregation table to locate communities. Communities can be defined several ways, such as by cliques, but in this article, community analysis will be focused on centrality. Centrality-based communities can be thought of as analogous to celebrities and their fan bases on a social media network. The fans may not interact with each other very much, but they do interact with the celebrity, who is the “center.”
There are several different types of centrality, including but not limited to degree centrality, eigenvector centrality, and betweenness centrality, which may all result in different communities being defined. Note that in the social network analogy above, eigenvector centrality may not be accurate, because one person who follows many celebrities may not have any influence over them; in that case, the graph is better seen as directed. In GAM analysis, the graph is generally assumed to be undirected, so eigenvector centrality can be used meaningfully. Both clique and centrality calculations are computationally complex and, like the clustering mentioned above, they do not scale well to large problems.
SLICE.
SLICE (StatisticaL Inference of Co-sEgregation) plays a key role in GAM data analysis. It was developed in the laboratory of Mario Nicodemi to provide a mathematical model that identifies the most specific interactions among loci from GAM cosegregation data. It estimates, for each pair of loci, the proportion of specific interactions across the cell population. It is a likelihood-based method. The first step of SLICE is to write the expected proportions of GAM nuclear profiles as a function of the interaction probability; the probability that best explains the experimental data is then found.
SLICE model.
The SLICE model is based on the hypothesis that the probability of a pair of non-interacting loci falling into the same nuclear profile is predictable, and depends only on the distance between these loci.
The SLICE model considers a pair of loci as being in one of two states: interacting or non-interacting. Under this hypothesis, the proportions of nuclear profiles in each state can be predicted by mathematical analysis. By deriving a function of the interaction probability, the GAM data can also be used to find prominent interactions and to explore the sensitivity of GAM.
Calculate distribution in a single nuclear profile.
SLICE considers that a pair of loci can be interacting or non-interacting across the cell population. The first step of this calculation is to describe a single locus. A pair of loci, A and B, can be in two possible states: either A and B have no interaction with each other, or they do. The first question is whether a single locus can be found in a nuclear profile.
The mathematical expression is:
Single locus probability: formula_0
- <formula_1>: probability that the locus is found in a nuclear profile.
- <formula_2>formula_3<formula_1>: probability that the locus is not found in a nuclear profile.
- <formula_1> = formula_4
Estimation of average nuclear radius.
As in the equation above, the nuclear volume is a necessary value for the calculation. The radii of the nuclear profiles can be used to estimate the nuclear radius, and the SLICE prediction for the radius matches Monte Carlo simulations. With the estimated radius, the probability that two loci are in a non-interacting state and the probability that they are in an interacting state can both be estimated.
Here is the mathematical expression of non-interacting:
<formula_5>, i = 0, 1, 2: the probability of finding 0, 1 or 2 loci of a pair of non-interacting loci in a nuclear profile.
Two loci in a non-interacting state: formula_5
formula_6
Here is the mathematical expression of interacting:
Estimation of the interacting state of two loci: the probabilities formula_7 satisfy
formula_8 ~ formula_9, formula_10 ~ 0, formula_11 ~ formula_12
Calculate probability of pairs of loci in single nuclear profile.
With the results of the previous processes, the occurrence probability of a pair of loci in one nuclear profile can be calculated by statistical methods. A pair of loci can exist in three different states, each with a probability formula_13
Occurrence probability of pairs of loci in single nuclear profiles: formula_14
formula_15: probability that both pairs of loci are in an interacting state;
formula_16: probability that one pair interacts while the other does not;
formula_17: probability that neither pair interacts.
SLICE statistical analysis.
formula_18
formula_19
formula_20 represents the number of nuclear profiles in which i loci of A and j loci of B are found (i and j can each be 0, 1 or 2).
Detection efficiency.
As the number of experiments is limited, the detection efficiency must be taken into account. Considering the detection efficiency expands the SLICE model to accommodate additional complications; it is a statistical method to improve the calculation result. In this part, the GAM data are divided into two types: loci in a slice that are found in the experiments, and loci in a slice that are not detected in the experiments.
Estimating interaction probabilities of pairs.
Based on the estimated detection efficiency and the previously obtained probabilities formula_21, the interaction probability of each pair of loci can be calculated. The loci are detected by next-generation sequencing.
Co-segregation and normalized linkage.
When mapping a genome, the co-segregation across different genomic windows and nuclear profiles (NPs) of a genome can be examined. Nuclear profiles are derived by taking slices and samples of tissues, and the genomic windows are the ranges found within a genome. Co-segregation in this context identifies the linkage between specified windows in a genome, together with the linkage disequilibrium and the normalized linkage disequilibrium. One of the steps in calculating the co-segregation and linkage is finding each window's detection frequency, which is the number of NPs in which the specified window is present divided by the total number of NPs. Each of the calculated values identifies important differences and statistics for analyzing a genome. Normalized linkage disequilibrium is the final calculation, which determines the real linkage between genomic windows; once each of the earlier values has been calculated, the results are used to compute the normalized linkage for each specified pair of windows in a genome. The normalized linkage value lies between -1.0 and 1.0, with 1.0 meaning the linkage between the two windows is high and lower values meaning weaker linkage. Combining each window's normalized linkage values into a chart or matrix allows the genome to be mapped and analyzed using a heatmap or another graph. The co-segregation and normalized linkage values can also be used for further calculations and analysis, such as the centrality and community detection discussed in the next section.
In order to find the co-segregation and linkages of windows, the following calculations must be completed: detection frequency, co-segregation, linkage, and normalized linkage.
Calculating linkage and frequencies.
Each calculation step discussed above is displayed and explained in the table below.
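The calculation steps follow the standard definitions of linkage disequilibrium; the Python sketch below is a minimal illustration under that assumption, and the function and variable names are not part of GAMtools. Filling an n-by-n matrix with these values for every pair of windows gives the normalized linkage matrix described in the next subsection.
<syntaxhighlight lang="python">
def normalized_linkage(segregation_table, a, b):
    """Normalized linkage disequilibrium D' between genomic windows a and b.

    segregation_table: list of rows, one per nuclear profile (NP), each row
    a list of 0/1 flags, one per genomic window.
    """
    m = len(segregation_table)                                    # total number of NPs
    f_a = sum(row[a] for row in segregation_table) / m            # detection frequency of a
    f_b = sum(row[b] for row in segregation_table) / m            # detection frequency of b
    f_ab = sum(row[a] * row[b] for row in segregation_table) / m  # co-segregation of a and b
    d = f_ab - f_a * f_b                                          # linkage (disequilibrium)
    if d == 0:
        return 0.0
    if d > 0:
        d_max = min(f_a * (1 - f_b), f_b * (1 - f_a))
    else:
        d_max = min(f_a * f_b, (1 - f_a) * (1 - f_b))
    return d / d_max if d_max else 0.0                            # normalized linkage in [-1, 1]
</syntaxhighlight>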
Displaying normalized linkage.
Once all calculation steps in the previous section have been completed, a matrix can be created and then mapped. For a specified set of 81 windows in a genome, the normalized linkage values can be filled into a matrix of size 81 by 81, because each window is compared with itself and with every other window in order to calculate all normalized linkage values. As each pair's linkage is calculated, the value is inserted at its specified location in the matrix. For example, if the comparison is between the first and second window, the linkage value would be placed in the first column and the second row of the matrix. An example of a heatmap generated from a matrix of this size is shown below.
When analyzing the heatmap displayed from the normalized linkage matrix, the color of each block is the key. In the example heatmap above, the legend indicates that a linkage value of 1.00 corresponds to bright yellow. This is the highest linkage value, which appears in the diagonal line of yellow blocks where each window is compared against itself. The legend and heatmap allow the linkages to be read off by color, showing, for instance, a lower level of linkage between the first and last few windows in the matrix, where the color is blue/green. The heatmap is one of the easiest and clearest ways to analyze the linkage values between every window in a specified section of a genome. Once created, this heatmap and the normalized linkage matrix can be used for further analysis as described below.
Graph analysis approach.
Once cosegregation of all of the targeted genomic windows has been calculated, related subsets or "communities" within the set of windows can be approximated via graph analysis.
Deriving an adjacency (graph) matrix.
Once a cosegregation matrix has been established, converting it to an adjacency matrix that represents a graph is relatively simple. Each cell of the cosegregation matrix is compared to a threshold value between 0.0 and 1.0. This value can be adjusted depending on the desired specificity of the graph. If a higher value is chosen as the threshold, the graph will generally have fewer edges, as a high threshold requires two windows to be strongly linked. If a lower value is chosen, the graph will generally have more edges, as windows need not be as strongly linked to be classified as sharing an edge. A reasonable starting point for this value is the mean of the cosegregation matrix. However, if the simple mean is used, the threshold may be higher than intended, because the cosegregation value of any window with itself is 1.0. Since the adjacency matrix being constructed is non-reflexive, meaning that a window cannot share an edge with itself, the diagonal of the adjacency matrix must be all zeroes, and the diagonal of the cosegregation matrix is not relevant. To compensate for this, one can simply discount the values along the diagonal of the cosegregation matrix when computing the mean. To see the effect of this adjustment, see the attached figure.
Once the threshold value is set, the translation becomes rather direct. If the cell of the cosegregation matrix is along the main diagonal, then its respective cell in the adjacency matrix will be 0 as previously mentioned. Otherwise, it is compared with the threshold. If the value is lower than the threshold, then the respective cell in the adjacency matrix will be a 0, otherwise it will be a 1.
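A minimal sketch of this thresholding step, assuming the cosegregation (or normalized linkage) matrix is given as a list of lists of values; discounting the diagonal when computing the default threshold follows the adjustment described above.
<syntaxhighlight lang="python">
def to_adjacency(coseg, threshold=None):
    """Convert a square cosegregation matrix into a 0/1 adjacency matrix.

    If no threshold is supplied, the mean of the off-diagonal entries is
    used, so the 1.0 self-cosegregation values do not inflate the threshold.
    """
    n = len(coseg)
    if threshold is None:
        off_diagonal = [coseg[i][j] for i in range(n) for j in range(n) if i != j]
        threshold = sum(off_diagonal) / len(off_diagonal)
    adjacency = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j and coseg[i][j] >= threshold:
                adjacency[i][j] = 1
    return adjacency
</syntaxhighlight>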
Assess centrality of windows.
Once the adjacency matrix has been established, the windows can be assessed via several different measures of centrality. One such measure is degree centrality, which is calculated by dividing the number of edges a given node of the graph (one of the genomic windows) has by the total number of nodes minus one. See the included figure for an example of this calculation. The centrality of a node can be a good indicator of that node's potential to be strongly influential in the dataset, based on its relatively high number of connections.
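A short sketch of this degree-centrality calculation, applied to the adjacency matrix produced in the previous step (the function name is illustrative):
<syntaxhighlight lang="python">
def degree_centrality(adjacency):
    """Degree centrality of each node: the number of edges incident to the
    node divided by (number of nodes - 1)."""
    n = len(adjacency)
    return [sum(row) / (n - 1) for row in adjacency]
</syntaxhighlight>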
Community detection.
Once centrality values have been calculated, it becomes possible to infer related subsets of the data. These related subsets are called "communities": clusters in the data that are closely linked internally, but not as closely linked to the rest of the data. While one of the most common applications of community detection concerns social media and the mapping of social connections, it can also be applied to problems such as genomic interactions.
A relatively simple method of approximating communities is to isolate several significant nodes based on centrality measures, such as degree centrality, and to then build communities from them. A community of a node will be the full set of nodes immediately linked to it, as well as the node itself. For instance, in the figure to the left, the community around node C would be all four nodes of the graph, while the community of D would just be nodes C and D. Detection of communities in genomic windows may highlight potential chromatin interactions, or other interactions not previously expected or understood, and provide a target for further study.
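A sketch of this neighborhood-based community construction, reusing the adjacency matrix and centrality values from the previous sketches; the choice of how many of the most central nodes to seed communities from is an assumption.
<syntaxhighlight lang="python">
def communities_around_top_nodes(adjacency, centrality, top_k=3):
    """Build a community (a node plus its immediate neighbors) around each
    of the top_k most central nodes."""
    n = len(adjacency)
    top_nodes = sorted(range(n), key=lambda i: centrality[i], reverse=True)[:top_k]
    communities = {}
    for node in top_nodes:
        neighbors = {j for j in range(n) if adjacency[node][j] == 1}
        communities[node] = neighbors | {node}
    return communities
</syntaxhighlight>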
Advantages.
In comparison with 3C-based methods, GAM provides three key advantages.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "v_0,v_1"
},
{
"math_id": 1,
"text": "v_1"
},
{
"math_id": 2,
"text": "v_0"
},
{
"math_id": 3,
"text": "=1-"
},
{
"math_id": 4,
"text": "V_{NP}/V_{nucleus}"
},
{
"math_id": 5,
"text": "u_i"
},
{
"math_id": 6,
"text": "<u_0>=<v_0^2>,<u_1>=<v_1v_0>,<u_2>=<v_1^2>"
},
{
"math_id": 7,
"text": "t_i"
},
{
"math_id": 8,
"text": "<t_2>"
},
{
"math_id": 9,
"text": "<v_1>"
},
{
"math_id": 10,
"text": "<t_1>"
},
{
"math_id": 11,
"text": "<t_0>"
},
{
"math_id": 12,
"text": "<v_0>=1-<v_1> "
},
{
"math_id": 13,
"text": "P_i,i = 0, 1, 2"
},
{
"math_id": 14,
"text": "P_2,P_1,P_0"
},
{
"math_id": 15,
"text": "P_2"
},
{
"math_id": 16,
"text": "P_1"
},
{
"math_id": 17,
"text": "P_0"
},
{
"math_id": 18,
"text": "N_{0,0}/N=<t_0^2>P_2+<t_0u_0>P_1+<u_0^2>P_0"
},
{
"math_id": 19,
"text": "N_{2,0}/N=N_{0,2}=<t_1^2>P_2+<t_1u_1>P_1+<u_1^2>P_0"
},
{
"math_id": 20,
"text": "N_{i,j}"
},
{
"math_id": 21,
"text": "u_0,u_1,u_2"
}
] |
https://en.wikipedia.org/wiki?curid=57153310
|
57159859
|
Fracton (subdimensional particle)
|
A fracton is an emergent topological quasiparticle excitation which is immobile when in isolation. Many theoretical systems have been proposed in which fractons exist as elementary excitations. Such systems are known as fracton models. Fractons have been identified in various CSS codes as well as in symmetric tensor gauge theories.
Gapped fracton models often feature a topological ground state degeneracy that grows exponentially and sub-extensively with system size. Among the gapped phases of fracton models, there is a non-rigorous phenomenological classification into "type I" and "type II". Type I fracton models generally have fracton excitations that are completely immobile, as well as other excitations, including bound states, with restricted mobility. Type II fracton models generally have fracton excitations and no mobile particles of any form. Furthermore, isolated fracton particles in type II models are associated with nonlocal operators with intricate fractal structure.
<templatestyles src="Template:TOC limit/styles.css" />
Models.
Type I.
The paradigmatic example of a type I fracton model is the X-cube model. Other examples of type I fracton models include the semionic X-cube model, the checkerboard model, the Majorana checkerboard model, the stacked Kagome X-cube model, the hyperkagome X-cube model, and more.
X-cube model.
The X-cube model is constructed on a cubic lattice, with qubits on each edge of the lattice.
The Hamiltonian is given by
formula_0
Here, the sums run over cubic unit cells and over vertices. For any cubic unit cell formula_1, the operator formula_2 is equal to the product of the Pauli formula_3 operator on all 12 edges of that unit cube. For any vertex of the lattice formula_4, operator formula_5 is equal to the product of the Pauli formula_6 operator on all four edges adjacent to vertex formula_4 and perpendicular to the formula_7 axis. Other notation conventions in the literature may interchange formula_3 and formula_6.
In addition to obeying an overall formula_8 symmetry defined by global symmetry generators formula_9 and formula_10 where the product runs over all edges in the lattice, this Hamiltonian obeys subsystem symmetries acting on individual planes.
All of the terms in this Hamiltonian commute and belong to the Pauli algebra. This makes the Hamiltonian exactly solvable. One can simultaneously diagonalise all the terms in the Hamiltonian, and the simultaneous eigenstates are the Hamiltonian's energy eigenstates. A ground state of this Hamiltonian is a state formula_11 that satisfies formula_12 and formula_13 for all formula_14. One can explicitly write down a ground state using projection operators formula_15 and formula_16.
The constraints posed by formula_17 and formula_18 are not all linearly independent when the X cube model is embedded on a compact manifold. This leads to a large ground state degeneracy that increases with system size. On a torus with dimensions formula_19, the ground state degeneracy is exactly formula_20. A similar degeneracy scaling, formula_21, is seen on other manifolds as well as in the thermodynamic limit.
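For example, on a cubic torus with all three linear dimensions equal to a common size "L", the formula above gives
<math>\log_2 \mathrm{GSD} = 2L + 2L + 2L - 3 = 6L - 3,</math>
so the degeneracy grows as <math>2^{6L-3}</math>, which makes the sub-extensive scaling formula_21 explicit.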
Restricted mobility excitations.
The X cube model hosts two types of elementary excitations, the fracton and lineon (also known as the one-dimensional particle).
If a quantum state is such that the eigenvalue of formula_2 is formula_36 for some unit cube formula_1, then we say that, in this quantum state, there is a fracton located at the position formula_1. For example, if formula_11 is a ground state of the Hamiltonian, then for any edge formula_23, the state formula_24 features four fractons, one each on the cubes adjacent to formula_23.
Given a rectangle formula_25 in a plane, one can define a "membrane" operator as formula_26 where the product runs over all edges formula_23 perpendicular to the rectangle that pass through this rectangle. Then the state formula_27 features four fractons each located at the cubes next to the corners of the rectangle. Thus, an isolated fracton can appear in the limit of taking the length and width of the rectangle formula_25 to infinity. The fact that a nonlocal membrane operator acts on the ground state to produce an isolated fracton is analogous to how, in smaller dimensional systems, nonlocal string operators can produce isolated formula_28 flux particles and domain walls.
This construction shows that an isolated fracton cannot be mobile in any direction. In other words, there is no local operator that can be acted on an isolated fracton to move it to a different location. In order to move an individual isolated fracton, one would need to apply a highly nonlocal operator to move the entire membrane associated with it.
If a quantum state is such that formula_29 for some vertex formula_4, then we say that, in this quantum state, there is a lineon located at the position formula_4 that is mobile in the formula_30 direction. A similar definition holds for lineons that are mobile in the formula_31 direction and lineons that are mobile in the formula_32 direction. In order to create an isolated formula_30 lineon at a vertex formula_4, one must act on the ground state with a long string of Pauli formula_3 operators acting on all the edges along the formula_30 axis that are below the lineon. Lineon excitations are mobile in one direction only; the Pauli formula_3 operator can act on lineons to translate them along that direction.
An formula_33 and formula_30 lineon can all fuse into the vacuum if the lines along which they move meet at a common point; that is, there is a sequence of local operators that can make this fusion happen. The opposite process can also happen. For a similar reason, an isolated lineon can change its direction of motion from formula_31 to formula_32, creating a new lineon moving in the formula_30 direction in the process. The new lineon is created at the point in space where the original lineon changes direction.
It is also possible to make bound states of these elementary excitations that have higher mobility. For example, consider the bound state of two fractons with the same formula_31 and formula_32 coordinates separated by a finite distance formula_34 along the formula_30 axis. This bound state, called a planeon, is mobile in all directions in the formula_35 plane. One can construct a membrane operator with width formula_34 in the formula_30 axis and arbitrary length in either the formula_31 or formula_32 direction that can act on the planeon state to move it within the formula_35 plane.
Interferometry.
It is possible to remotely detect the presence of an isolated elementary excitation in a region by moving the opposite type of elementary excitation around it. Here, as usual, "moving" refers to the repeated action of local unitary operators that translate the particles. This process is known as interferometry. It can be considered analogous to the idea of braiding anyons in two dimensions.
For example, suppose a lineon (either an formula_31 lineon or a formula_32 lineon) is located in the formula_35 plane, and there is also a planeon that can move in the formula_35 plane. Then we can move the planeon in a full rotation that happens to encompass the position of the lineon. Such a planeon movement would be implemented by a membrane operator. If this membrane operator intersects with the Pauli-formula_3 string operator attached to the lineon exactly one time, then at the end of the rotation of the planeon the wave function will pick up a factor of formula_36, which indicates the presence of the lineon.
Coupled layer construction.
It is possible to construct the X cube model by taking three stacks of toric code sheets, one along each of the three axes, superimposing them, and adding couplings on the edges where they intersect. This construction explains some of the connections that can be seen between the toric code topological order and the X cube model. For example, each additional toric code sheet can be understood to contribute a topological degeneracy of 4 to the overall ground state degeneracy of the X cube model when it is placed on a three-dimensional torus; this is consistent with the formula for the ground state degeneracy of the X cube model.
Checkerboard model.
Another example of a type I fracton model is the checkerboard model.
This model also lives on a cubic lattice, but with one qubit on each vertex. First, one colours the cubic unit cells with the colours formula_37 and formula_38 in a checkerboard pattern, i.e. such that no two adjacent cubic cells are the same colour. Then the Hamiltonian is
formula_39
This model is also exactly solvable with commuting terms. The topological ground state degeneracy on a torus is given by formula_40 for lattice of size formula_41 (as a rule the dimensions of the lattice must be even for periodic boundary conditions to make sense).
Like the X cube model, the checkerboard model features excitations in the form of fractons, lineons, and planeons.
Type II.
The paradigmatic example of a type II fracton model is Haah's code. Due to the more complicated nature of Haah's code, the generalisations to other type II models are poorly understood compared to type I models.
Haah's code.
Haah's code is defined on a cubic lattice with two qubits on each vertex. We can refer to these qubits using Pauli matrices formula_42 and formula_43, each acting on a separate qubit. The Hamiltonian is
formula_44.
Here, for any unit cube formula_1 whose eight vertices are labeled as formula_45, formula_46, formula_47, formula_48, formula_49, formula_50, formula_51 , and formula_52, the operators formula_2 and formula_53 are defined as
formula_54
formula_55
This is also an exactly solvable model, as all terms of the Hamiltonian commute with each other.
The ground state degeneracy for an formula_56 torus is given by
formula_57
Here, gcd denotes the greatest common divisor of the three polynomials shown, and deg refers to the degree of this common divisor. The coefficients of the polynomials belong to the finite field formula_58, consisting of the four elements formula_59 of characteristic 2 (i.e. formula_60). formula_61 is a cube root of 1 that is distinct from 1. The greatest common divisor can be defined through Euclid's algorithm. This degeneracy fluctuates wildly as a function of formula_62. If formula_62 is a power of 2, then according to Lucas's theorem the three polynomials take the simple forms formula_63, indicating a ground state degeneracy of formula_64. More generally, if formula_65 is the largest power of 2 that divides formula_62, then the ground state degeneracy is at least formula_66 and at most formula_67.
Thus the Haah's code fracton model also in some sense exhibits the property that the logarithm of the ground state degeneracy tends to scale in direct proportion to the linear dimension of the system. This appears to be a general property of gapped fracton models. Just like in type I models and in topologically ordered systems, different ground states of Haah's code cannot be distinguished by local operators.
Haah's code also features immobile elementary excitations called fractons. A quantum state is said to have a fracton located at a cube formula_1 if the eigenvalue of formula_2 is formula_36 for this quantum state (an excitation of the formula_53 operator is also a fracton. Such a fracton is physically equivalent to an excitation of formula_2 because there is a unitary map exchanging formula_2 and formula_53, so it suffices to consider excitations of formula_2 only for this discussion).
If formula_11 is a ground state of the Hamiltonian, then for any vertex formula_4, the state formula_68 features four fractons in a tetrahedral arrangement, occupying four of the eight cubes adjacent to vertex formula_4 (the same is true for the state formula_69, although the exact shape of the tetrahedron is different).
In an attempt to isolate just one of these four fractons, one may try to apply additional formula_70 spin flips at different nearby vertices to try to annihilate the three other fractons. Doing so simply results in three new fractons appearing further away. Motivated by this process, one can then identify a set formula_71 of vertices in space that together form some arbitrary iteration of the three-dimensional Sierpiński fractal. Then the state
formula_72
features four fractons, one each at a cube adjacent to a corner vertex of the Sierpinski tetrahedron. Thus we see that an infinitely large fractal-shaped operator is required to generate an isolated fracton out of the ground state in the Haah's code model. The fractal-shaped operator in Haah's code plays an analogous role to the membrane operators in the X-cube model.
Unlike in type I models, there are no stable bound states of a finite number of fractons that are mobile. The only mobile bound states are those such as the completely mobile four-fracton states like formula_68 that are unstable (i.e. can transform into the ground state by the action of a local operator).
Foliated fracton order.
One formalism used to understand the universal properties of type I fracton phases is called foliated fracton order.
Foliated fracton order establishes an equivalence relation between two systems, system formula_37 and system formula_38, with Hamiltonians formula_73 and formula_74. If one can transform the ground state of formula_73 to the ground state of formula_74 by applying a finite depth local unitary map and arbitrarily adding and/or removing two-dimensional gapped systems, then formula_73 and formula_74 are said to belong to the same foliated fracton order.
It is important in this definition that the local unitary map remains at finite depth as the sizes of systems 1 and 2 are taken to the thermodynamic limit. However, the number of gapped systems being added or removed can be infinite. The fact that two-dimensional topologically ordered gapped systems can be freely added or removed in the transformation process is what distinguishes foliated fracton order from more conventional notions of phases.
To state the definition more precisely, suppose one can find two (possibly empty or infinite) collections of two-dimensional gapped phases (with arbitrary topological order), formula_77 and formula_78, and a finite depth local unitary map formula_79, such that formula_79 maps the ground state of formula_80 to the ground state of formula_81. Then formula_73 and formula_74 belong to the same foliated fracton order.
More conventional notions of phase equivalence fail to give sensible results when directly applied to fracton models, because they are based on the notion that two models in the same phase should have the same topological ground state degeneracy. Since the ground state degeneracy of fracton models scales with system size, these conventional definitions would imply that simply changing the system size slightly would alter the entire phase. This would make it impossible to study the phases of fracton matter in the thermodynamic limit where the system size formula_82. The concept of foliated fracton order resolves this issue by allowing degenerate subsystems (two-dimensional gapped topological phases) to be used as "free resources" that can be arbitrarily added or removed from the system to account for these differences. If a fracton model formula_83 is such that formula_84 is in the same foliated fracton order as formula_85 for a larger system size, then the foliated fracton order formalism is suitable for the model.
Foliated fracton order is not a suitable formalism for type II fracton models.
Known foliated fracton orders of type I models.
Many of the known type I fracton models are in fact in the same foliated fracton order as the X cube model, or in the same foliated fracton order as multiple copies of the X cube model. However, not all are. A notable known example of a distinct foliated fracton order is the twisted foliated fracton model.
Explicit local unitary maps have been constructed that demonstrate the equivalence of the X cube model with various other models, such as the Majorana checkerboard model and the semionic X cube model. The checkerboard model belongs to the same foliated fracton order as two copies of the X cube model.
Invariants of foliated fracton order.
Just like how topological orders tend to have various invariant quantities that represent topological signatures, one can also attempt to identify invariants of foliated fracton orders.
Conventional topological orders often exhibit ground state degeneracy which is dependent only on the topology of the manifold on which the system is embedded. Fracton models do not have this property, because the ground state degeneracy also depends on system size. Furthermore, in foliated fracton models the ground state degeneracy can also depend on the intricacies of the foliation structure used to construct it. In other words, the same type of model on the same manifold with the same system size may have different ground state degeneracies depending on the underlying choice of foliation.
Quotient superselection sectors.
By definition, the number of superselection sectors in a fracton model is infinite (i.e. scales with system size). For example, each individual fracton belongs to its own superselection sector, as there is no local operator that can transform it to any other fracton at a different position.
However, a loosening of the concept of superselection sector, known as the quotient superselection sector, effectively ignores two-dimensional particles (e.g. planeon bound states) which are presumed to come from two-dimensional foliating layers. Foliated fracton models then tend to have a finite list of quotient superselection sectors describing the types of fractional excitations present in the model. This is analogous to how topological orders tend to have a finite list of ordinary superselection sectors.
Entanglement entropy.
Generally for fracton models in the ground state, when considering the entanglement entropy of a subregion of space with large linear size formula_25, the leading order contribution to the entropy is proportional to formula_86, as expected for a gapped three dimensional system obeying an area law. However, the entanglement entropy also has subleading terms as a function of formula_25 that reflect hidden nonlocal contributions. For example, the formula_87 subleading correction represents a contribution from the constant topological entanglement entropy of each of the 2D topologically ordered layers present in the foliation structure of the system.
Since foliated fracton order is invariant even when disentangling such 2D gapped layers, an entanglement signature of a foliated fracton order must be able to ignore the entropy contributions both from local details and from 2D topologically ordered layers.
It is possible to use a mutual information calculation to extract a contribution to entanglement entropy that is unique to the foliated fracton order. Effectively, this is done by adding and subtracting entanglement entropies of different regions in such a way as to get rid of local contributions as well as contributions from 2D gapped layers.
Symmetric tensor gauge theory.
The immobility of fractons in symmetric tensor gauge theory can be understood as a generalization of electric charge conservation resulting from a modified Gauss's law. Various formulations and constraints of symmetric tensor gauge theory tend to result in conservation laws that imply the existence of restricted-mobility particles.
U(1) scalar charge model.
For example, in the U(1) scalar charge model, the fracton charge density (formula_88) is related to a symmetric electric field tensor (formula_89, a theoretical generalization of the usual electric vector field) via formula_90, where the repeated spatial indices formula_91 are implicitly summed over.
Both the fracton charge (formula_92) and dipole moment (formula_93) can be shown to be conserved:
formula_94
When integrating by parts, we have assumed that there is no electric field at spatial infinity.
Since the total fracton charge and dipole moment are fixed to zero under this assumption, both the charge and the dipole moment are conserved.
Because moving an isolated charge changes the total dipole moment, this implies that isolated charges are immobile in this theory.
However, two oppositely charged fractons, which form a fracton dipole, can move freely, since this motion does not change the dipole moment.
One approach to constructing an explicit action for scalar fractonic matter fields and their coupling to the symmetric tensor gauge theory is the following. Suppose the scalar fractonic matter field is formula_95. A global charge conservation symmetry would imply that the action is symmetric under the transformation formula_96 for some spatially uniform real formula_97, as is the case in usual formula_98 charged theories. A global dipole moment conservation symmetry would imply that the action is symmetric under the transformation formula_99 for an arbitrary real spatially uniform vector formula_100.
The simplest kinetic terms (i.e. terms featuring the spatial derivative) that are symmetric under these transformations are quartic in formula_95.
formula_101
Now when gauging this symmetry, the kinetic expression formula_102 gets replaced with formula_103, where formula_104 is a symmetric tensor that transforms under arbitrary gauge transformations as formula_105. This shows how a symmetric tensor field couples to scalar fractonic matter fields.
U(1) vector charge model.
The U(1) scalar charge theory is not the only symmetric tensor gauge theory that gives rise to limited-mobility particles. Another example is the U(1) vector charge theory.
In this theory, the fractonic charge is a vector quantity formula_106. The symmetric tensor gauge field transforms under gauge transformations formula_107 as formula_108. The Gauss law for this theory takes the form formula_109, which implies both conservation of the total charge and conservation of the total angular charge moment formula_110. The latter conservation law implies that isolated charges are restricted to move parallel to their corresponding charge vectors. These particles thus appear to be similar to the lineons of type I fracton models, except that here they arise in a gapless theory.
Applications.
Fractons were originally studied as an analytically tractable realization of quantum glassiness where the immobility of isolated fractons results in a slow relaxation rate.
This immobility has also been shown to be capable of producing a partially self-correcting quantum memory, which could be useful for making an analog of a hard drive for a quantum computer.
Fractons have also been shown to appear in quantum linearized gravity models
and (via a duality) as disclination crystal defects.
However, aside from the duality to crystal defects, and although it has been shown to be possible in principle,
other experimental realizations of gapped fracton models have not yet been realized. On the other hand, there has been progress in studying the dynamics of dipole-conserving systems, both theoretically and experimentally, which exhibit the characteristic slow dynamics expected of systems with fractonic behavior.
Fracton models.
It has been conjectured that many type-I models are examples of foliated fracton phases; however, it remains unclear whether non-Abelian fracton models can be understood within the foliated framework.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "H = -\\sum_{c} A_c - \\sum_{\\textrm{}v} (B_{v,x} + B_{v,y} + B_{v,z})"
},
{
"math_id": 1,
"text": "c"
},
{
"math_id": 2,
"text": "A_c"
},
{
"math_id": 3,
"text": "X"
},
{
"math_id": 4,
"text": "v"
},
{
"math_id": 5,
"text": "B_{v,\\mu}"
},
{
"math_id": 6,
"text": "Z"
},
{
"math_id": 7,
"text": "\\mu"
},
{
"math_id": 8,
"text": "Z_2 \\times Z_2"
},
{
"math_id": 9,
"text": "\\prod_\\ell X_\\ell"
},
{
"math_id": 10,
"text": "\\prod_\\ell Z_\\ell"
},
{
"math_id": 11,
"text": "|\\mathrm{GS}\\rangle"
},
{
"math_id": 12,
"text": "A_c |\\mathrm{GS}\\rangle = 1"
},
{
"math_id": 13,
"text": "B_{v, \\mu} |\\mathrm{GS}\\rangle =1"
},
{
"math_id": 14,
"text": "c, v , \\mu"
},
{
"math_id": 15,
"text": "\\frac{1+A_c}{2}"
},
{
"math_id": 16,
"text": "\\frac{1+B_{v,\\mu}}{2}"
},
{
"math_id": 17,
"text": "A_c =1"
},
{
"math_id": 18,
"text": "B_{v,\\mu} =1"
},
{
"math_id": 19,
"text": "L_x, L_y, L_z"
},
{
"math_id": 20,
"text": "2^{2L_x+2L_y + 2L_z-3} "
},
{
"math_id": 21,
"text": "\\log \\mathrm{GSD} \\propto L"
},
{
"math_id": 22,
"text": "A_c = -1"
},
{
"math_id": 23,
"text": "\\ell"
},
{
"math_id": 24,
"text": "X_\\ell |\\mathrm{GS}\\rangle"
},
{
"math_id": 25,
"text": "R"
},
{
"math_id": 26,
"text": "Z_R \\equiv \\prod_{\\ell \\in R} Z_{\\ell} "
},
{
"math_id": 27,
"text": "Z_R|\\mathrm{GS}\\rangle "
},
{
"math_id": 28,
"text": "\\pi"
},
{
"math_id": 29,
"text": "B_{v,x} = B_{v,y} = -1"
},
{
"math_id": 30,
"text": "z"
},
{
"math_id": 31,
"text": "x"
},
{
"math_id": 32,
"text": "y"
},
{
"math_id": 33,
"text": "x, y"
},
{
"math_id": 34,
"text": "w"
},
{
"math_id": 35,
"text": "xy"
},
{
"math_id": 36,
"text": "-1"
},
{
"math_id": 37,
"text": "A"
},
{
"math_id": 38,
"text": "B"
},
{
"math_id": 39,
"text": " H = - \\sum_{c\\in A} \\prod_{v \\in c} X_v - \\sum_{c\\in A} \\prod_{v \\in c} Z_v "
},
{
"math_id": 40,
"text": "\\log_2 \\mathrm{GSD} = 4 L_x + 4L_y + 4L_z-6 "
},
{
"math_id": 41,
"text": "2L_x, 2L_y, 2L_z"
},
{
"math_id": 42,
"text": "\\vec{\\sigma}_v"
},
{
"math_id": 43,
"text": "\\vec{\\mu}_v"
},
{
"math_id": 44,
"text": "H = -\\sum_c ( A_c + B_c)"
},
{
"math_id": 45,
"text": "p"
},
{
"math_id": 46,
"text": "q_1 = p+(1,0,0)"
},
{
"math_id": 47,
"text": "q_2 = p+(0,1,0)"
},
{
"math_id": 48,
"text": "q_3 = p+(0,0,1)"
},
{
"math_id": 49,
"text": "r_1 = p+(0,1,1)"
},
{
"math_id": 50,
"text": "r_2 = p+(1,0,1)"
},
{
"math_id": 51,
"text": "r_3 = p+(1,1,0)"
},
{
"math_id": 52,
"text": "s = p+(1,1,1)"
},
{
"math_id": 53,
"text": "B_c"
},
{
"math_id": 54,
"text": "A_c = \\sigma_s^z \\mu_s^z \\prod_{j=1}^3 \\mu_{q_j}^z \\sigma_{r_j}^z "
},
{
"math_id": 55,
"text": "B_c = \\sigma_p^x \\mu_p^x \\prod_{j=1}^3 \\mu_{q_j}^x \\sigma_{r_j}^x "
},
{
"math_id": 56,
"text": "L \\times L \\times L "
},
{
"math_id": 57,
"text": "\\log_2 \\mathrm{GSD} = 4 \\deg( \\gcd ( 1 + (1+x)^L, 1+(1+\\omega x)^L, 1+ (1+\\omega^2 x)^L)_{\\mathbb{F}_4})-2 . "
},
{
"math_id": 58,
"text": "\\mathbb{F}_4"
},
{
"math_id": 59,
"text": "\\lbrace 0,1, \\omega, \\omega^2 \\rbrace"
},
{
"math_id": 60,
"text": "1+1= 0"
},
{
"math_id": 61,
"text": "\\omega"
},
{
"math_id": 62,
"text": "L"
},
{
"math_id": 63,
"text": "x^L, (\\omega x)^L, (\\omega^2 x)^L"
},
{
"math_id": 64,
"text": "2^{4L-2} "
},
{
"math_id": 65,
"text": "2^m"
},
{
"math_id": 66,
"text": "2^{2^{m+2}-2}"
},
{
"math_id": 67,
"text": "2^{4L-2}"
},
{
"math_id": 68,
"text": " \\mu^x_v |\\mathrm{GS}\\rangle "
},
{
"math_id": 69,
"text": " \\sigma^x_v |\\mathrm{GS}\\rangle "
},
{
"math_id": 70,
"text": " \\mu^x_{v'} |\\mathrm{GS}\\rangle "
},
{
"math_id": 71,
"text": "S"
},
{
"math_id": 72,
"text": " \\psi = \\prod_{v \\in S} \\mu^z_v |\\mathrm{GS}\\rangle "
},
{
"math_id": 73,
"text": "H_A"
},
{
"math_id": 74,
"text": "H_B"
},
{
"math_id": 75,
"text": "H_1"
},
{
"math_id": 76,
"text": "H_2"
},
{
"math_id": 77,
"text": "H^{2\\mathrm{D}, A}_j"
},
{
"math_id": 78,
"text": "H^{2\\mathrm{D},B}_j"
},
{
"math_id": 79,
"text": "U"
},
{
"math_id": 80,
"text": "H_A \\otimes \\bigotimes_j H^{2\\mathrm{D},A}_j"
},
{
"math_id": 81,
"text": "H_B \\otimes \\bigotimes_j H^{2\\mathrm{D},B}_j"
},
{
"math_id": 82,
"text": "L \\to \\infty"
},
{
"math_id": 83,
"text": "H"
},
{
"math_id": 84,
"text": "H(L_x, L_y, L_z)"
},
{
"math_id": 85,
"text": "H(L'_x, L'_y, L'_z)"
},
{
"math_id": 86,
"text": "R^2"
},
{
"math_id": 87,
"text": "\\propto R"
},
{
"math_id": 88,
"text": "\\rho"
},
{
"math_id": 89,
"text": "E_{ij}"
},
{
"math_id": 90,
"text": "\\rho = \\partial_i \\partial_j E_{ij}"
},
{
"math_id": 91,
"text": "i,j=1,2,3"
},
{
"math_id": 92,
"text": "q"
},
{
"math_id": 93,
"text": "p_i"
},
{
"math_id": 94,
"text": "\n\\begin{align}\nq &= \\int \\rho \\; d^3x = \\int \\partial_i (\\partial_j E_{ij}) \\; d^3x = 0 \\\\\np_i &= \\int x_i \\rho \\; d^3x = \\int x_i \\partial_j \\partial_k E_{jk} \\; d^3x = - \\int \\partial_k E_{ik} \\; d^3x = 0\n\\end{align}\n"
},
{
"math_id": 95,
"text": "\\Phi"
},
{
"math_id": 96,
"text": "\\Phi \\to e^{i\\alpha} \\Phi"
},
{
"math_id": 97,
"text": "\\alpha"
},
{
"math_id": 98,
"text": "U(1)"
},
{
"math_id": 99,
"text": "\\Phi(\\vec r) \\to e^{i \\vec \\lambda \\cdot \\vec r} \\Phi(\\vec{r})"
},
{
"math_id": 100,
"text": "\\vec \\lambda"
},
{
"math_id": 101,
"text": " \\mathcal{L} = |\\partial_t \\Phi|^2 -m^2|\\Phi|^2 - g|\\Phi \\partial_i \\partial_j \\Phi - \\partial_i \\Phi \\partial_j \\Phi|^2 -g' \\Phi^{*2}(\\Phi \\partial_i \\partial_j \\Phi - \\partial_i \\Phi \\partial_j \\Phi) +\\ldots"
},
{
"math_id": 102,
"text": "\\Phi \\partial_i \\partial_j \\Phi - \\partial_i \\Phi \\partial_j \\Phi"
},
{
"math_id": 103,
"text": "\\Phi \\partial_i \\partial_j \\Phi - \\partial_i \\Phi \\partial_j \\Phi -i A_{ij} \\Phi^2"
},
{
"math_id": 104,
"text": "A_{ij}"
},
{
"math_id": 105,
"text": "A_{ij} \\to A_{ij}+\\partial_i \\partial_j \\alpha"
},
{
"math_id": 106,
"text": "\\vec{\\rho}"
},
{
"math_id": 107,
"text": "\\vec{\\alpha}"
},
{
"math_id": 108,
"text": "A_{ij} \\to A_{ij} + \\partial_i \\alpha_j + \\partial_j \\alpha _i"
},
{
"math_id": 109,
"text": "\\partial_i E_{ij} = \\rho_j"
},
{
"math_id": 110,
"text": "\\int d^3 x \\vec{\\rho} \\otimes \\vec{x}"
}
] |
https://en.wikipedia.org/wiki?curid=57159859
|
5716871
|
2,2-Dichloro-1,1,1-trifluoroethane
|
<templatestyles src="Chembox/styles.css"/>
Chemical compound
2,2-Dichloro-1,1,1-trifluoroethane or HCFC-123 is considered as an alternative to CFC-11 in low pressure refrigeration and HVAC systems, and should not be used in foam blowing processes or solvent applications. It is also the primary component of the Halotron I fire-extinguishing mixture.
Its ozone depletion potential is ODP = 0.012, and its global warming potential is GWP = 76. HCFC-123 will eventually be phased out under the current schedule of the Montreal Protocol. It was discontinued in new HVAC equipment in 2020 in developed countries, but will still be produced for servicing existing HVAC equipment until 2030. Developing countries can continue to use it in new equipment until 2030, and it will be produced for service use there until 2040.
HCFC-123 is used in large tonnage centrifugal chiller applications, and is the most efficient refrigerant currently in use in the marketplace for HVAC applications. HCFC-123 is also used as a testing agent for bypass leakage of carbon adsorbers in gas filtration systems, and as the primary chemical in Halotron I fire-extinguishing agent.
Cylinders of HCFC-123 were a light grey prior to the elimination of cylinder color identification.
Isomers are 1,2-dichloro-1,1,2-trifluoroethane (R-123a) with CAS 354-23-4 and
1,1-dichloro-1,2,2-trifluoroethane (R-123b) with CAS 812-04-4.
Production.
2,2-Dichloro-1,1,1-trifluoroethane can be produced by reacting tetrachloroethylene with hydrogen fluoride in the gas phase. This is an exothermic reaction and requires a catalyst:
formula_0
|
[
{
"math_id": 0,
"text": "\\mathrm{Cl_2C{=}CCl_2 + 3 \\ HF \\longrightarrow\\ Cl_2CH{-}CF_3 \\ + 2 \\ HCl}"
}
] |
https://en.wikipedia.org/wiki?curid=5716871
|
57169339
|
Bing–Borsuk conjecture
|
In mathematics, the Bing–Borsuk conjecture states that every formula_0-dimensional homogeneous absolute neighborhood retract space is a topological manifold. The conjecture has been proved for dimensions 1 and 2, and it is known that the 3-dimensional version of the conjecture implies the Poincaré conjecture.
Definitions.
A topological space is "homogeneous" if, for any two points formula_1, there is a homeomorphism of formula_2 which takes formula_3 to formula_4.
A metric space formula_2 is an absolute neighborhood retract (ANR) if, for every closed embedding formula_5 (where formula_6 is a metric space), there exists an open neighbourhood formula_7 of the image formula_8 which retracts to formula_8.
There is an alternate statement of the Bing–Borsuk conjecture: suppose formula_2 is embedded in formula_9 for some formula_10 and this embedding can be extended to an embedding of formula_11. If formula_2 has a mapping cylinder neighbourhood formula_12 of some map formula_13 with mapping cylinder projection formula_14, then formula_15 is an approximate fibration.
History.
The conjecture was first made in a paper by R. H. Bing and Karol Borsuk in 1965, who proved it for formula_16 and 2.
Włodzimierz Jakobsche showed in 1978 that, if the Bing–Borsuk conjecture is true in dimension 3, then the Poincaré conjecture must also be true.
The Busemann conjecture states that every Busemann formula_17-space is a topological manifold. It is a special case of the Bing–Borsuk conjecture. The Busemann conjecture is known to be true for dimensions 1 to 4.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "m_1, m_2 \\in M"
},
{
"math_id": 2,
"text": "M"
},
{
"math_id": 3,
"text": "m_1"
},
{
"math_id": 4,
"text": "m_2"
},
{
"math_id": 5,
"text": "f: M \\rightarrow N"
},
{
"math_id": 6,
"text": "N"
},
{
"math_id": 7,
"text": "U"
},
{
"math_id": 8,
"text": "f(M)"
},
{
"math_id": 9,
"text": "\\mathbb{R}^{m+n}"
},
{
"math_id": 10,
"text": "m \\geq 3"
},
{
"math_id": 11,
"text": "M \\times (-\\varepsilon, \\varepsilon)"
},
{
"math_id": 12,
"text": "N=C_\\varphi"
},
{
"math_id": 13,
"text": "\\varphi: \\partial N \\rightarrow M"
},
{
"math_id": 14,
"text": "\\pi: N \\rightarrow M"
},
{
"math_id": 15,
"text": "\\pi"
},
{
"math_id": 16,
"text": "n=1"
},
{
"math_id": 17,
"text": "G"
}
] |
https://en.wikipedia.org/wiki?curid=57169339
|
57172319
|
Busemann G-space
|
In mathematics, a Busemann "G"-space is a type of metric space first described by Herbert Busemann in 1942.
If formula_0 is a metric space such that
1. closed and bounded subsets of formula_0 are compact,
2. for any two distinct points formula_1, there is a point formula_2 such that formula_3 (where formula_4 is the metric of the space),
3. for every formula_5, there exists a radius formula_6 > 0 such that for any two distinct points formula_7, there is a point formula_8 such that formula_9, and
4. for any formula_10, if there exist points formula_11 such that formula_12, formula_13 and formula_14, then formula_15,
then "X" is said to be a "Busemann" "G"-"space". Every Busemann "G"-space is a homogeneous space.
The Busemann conjecture states that every Busemann "G"-space is a topological manifold. It is a special case of the Bing–Borsuk conjecture. The Busemann conjecture is known to be true for dimensions 1 to 4.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "(X,d)"
},
{
"math_id": 1,
"text": "x, y \\in X"
},
{
"math_id": 2,
"text": "z \\in X\\setminus\\{x,y\\}"
},
{
"math_id": 3,
"text": "d(x,z)+d(y,z)=d(x,y)"
},
{
"math_id": 4,
"text": "d"
},
{
"math_id": 5,
"text": "w \\in X"
},
{
"math_id": 6,
"text": "\\rho_w"
},
{
"math_id": 7,
"text": "x,y \\in B(w,\\rho_w)"
},
{
"math_id": 8,
"text": "z \\in ( B(w,\\rho_w)\\setminus\\{ x,y \\} )^\\circ"
},
{
"math_id": 9,
"text": "d(x,y)+d(y,z)=d(x,z)"
},
{
"math_id": 10,
"text": "x,y \\in X"
},
{
"math_id": 11,
"text": "u,v \\in X"
},
{
"math_id": 12,
"text": "d(x,y)+d(y,u)=d(x,u)"
},
{
"math_id": 13,
"text": "d(x,y)+d(y,v)=d(x,v)"
},
{
"math_id": 14,
"text": "d(y,u)=d(y,v)"
},
{
"math_id": 15,
"text": "u=v"
}
] |
https://en.wikipedia.org/wiki?curid=57172319
|
571730
|
Murti
|
Idol, symbol, statue or icon in Hindu religion
In the Hindu tradition, a murti (, lit. 'form, embodiment, or solid object') is a devotional image, such as a statue or icon, of a deity or saint used during "puja" and/or in other customary forms of actively expressing devotion or reverence - whether at Hindu temples or shrines. A "mūrti" is a symbolic icon representing divinity for the purpose of devotional activities. Thus, not all icons of gods and saints are "mūrti"; for example, purely decorative depictions of divine figures often adorn Hindu temple architecture in intricately carved doorframes, on colourfully painted walls, and ornately sculpted rooftop domes. A "mūrti" itself is not God, but it is merely a representative shape, symbolic embodiment, or iconic manifestation of God.
"Murti" are also found in some nontheistic Jain traditions, where they serve as symbols of revered mortals inside Jain temples, and are worshiped in "murtipujaka" rituals.
A "murti" is typically made by carving stone, wood working, metal casting or through pottery. Ancient era texts describing their proper proportions, positions and gestures include the Puranas, Agamas, and Samhitas. The expressions in a murti vary in diverse Hindu traditions, ranging from "ugra" (transl. Angry) symbolism to express destruction, fear, and violence (Durga, Kali) to "saumya" (transl. Calm) symbolism to express joy, knowledge, and harmony (Saraswati, Lakshmi, and Ganesha). "Saumya" images are most common in Hindu temples. Other "murti" forms found in Hinduism include the "lingam".
A "murti" is an embodiment of the divine, the ultimate reality or Brahman, to some Hindus. In a religious context, they are found in Hindu temples or homes, where they may be treated as a beloved guest and serve as a participant of "puja". On other occasions, they serve as the centre of attention in annual festive processions; these are called "utsava murti". The earliest "murti" are mentioned by Pāṇini in the 4th century BCE. Prior to that, the "agnicayana" ritual ground seemed to serve as a template for the temple.
A "murti" may also be referred to as a vigraha, pratima or simply deity.
Hindu devotees go to the mandirs to take "darshan", bringing prepared offerings of "naivedya" to be blessed at the altar before the deity, and to perform "puja" and "aarti".
Etymology and nomenclature.
"Murti" literally means any solid body or form with a definite shape or limits produced from material elements. It contrasts with the mind, thought, and immaterial in ancient Indian literature. The term also refers to any embodiment, manifestation, incarnation, personification, appearance, image, idol, or statue of a deity.
The earliest mention of the term "murti" occurs in primary Upanishads composed in the 1st millennium BCE, particularly in verse 3.2 of Aitareya Upanishad, verse 1.13 of Shvetashvatara Upanishad, verse 6.14 of Maitrayaniya Upanishad and verse 1.5 of Prashna Upanishad. For example, the Maitrayaniya Upanishad uses the term to mean a "form, manifestation of time". The section sets out to prove Time exists, acknowledges the difficulty in proving Time exists by Pramana (epistemology in Indian philosophy), then inserts a theory of inductive inference for epistemological proof as follows,
<templatestyles src="Template:Blockquote/styles.css" /><poem>
On account of the subtleness of Time, this is the proof of its reality;
On account of this, the Time is demonstrated.
Because without proof, the assumption which is to be proved is not permissible;
But, when one comprehends it in its parts, that which is itself to be proved or demonstrated becomes the ground of proof, through which it brings itself into consciousness (in an inductive way).
</poem>
The section includes the concept of Time and non-Time, stating that non-Time existed before the creation of the universe, and time came into existence with the creation of the universe. Non-time is indivisible, time is divisible, and the Maitri Upanishad then asserts that the "year is the "mūrti" of time". Robert Hume translates the discussion of ""mūrti" of time", in verse 6.14 of the Maitri Upanishad, as "form".
Western scholarship on Hinduism emphasizes that there were neither murti nor temples nor idol-facilitated worship in the Vedic era. Vedic rituals were instead directed at nature and at abstract deities invoked during yajna with hymns. However, there is no universal consensus: scholars such as AC Das point to the word "Mūradeva" in Rig Veda verses 7.104.24, 10.87.2 and 10.87.14. This word may refer to "Deva who is fixed" or "Deva who is foolish". The former interpretation, if accurate, may imply that there were communities in the Vedic era who had Deva in the form of murti, and the context of these hymns suggests that the term could refer to practices of tribal communities outside the Vedic fold.
One of the earliest firm pieces of textual evidence of Deva images, in the sense of "murti", is found in "Jivikarthe Capanye" by the Sanskrit grammarian Pāṇini, who lived in about the 4th century BCE. He mentions "Acala" and "Cala", with the former referring to images in a shrine and the latter meaning images that were carried from place to place. Panini also mentions "Devalaka", meaning custodians of images of worship who show the images but do not sell them, as well as "Jivika" as people whose source of livelihood was the gifts they received from devotees. In ancient Sanskrit texts that follow Panini's work, numerous references are found to divine images with terms such as "Devagrha", "Devagara", "Devakula", "Devayatana" and others. These texts, states Noel Salmond, strongly suggest that temples and murti were in existence in ancient India by about the 4th century BCE. Recent archaeological evidence confirms that the knowledge and art of sculpture was established in India by the Maurya Empire period (~3rd century BCE).
By the early 1st millennium BCE, the term "murti" meant idols, images, or statues in various Indian texts such as Bhavishya Purana verse 132.5.7, Brihat Samhita 1.8.29, and inscriptions in different parts of India. The term "murti" has been a more generic term referring to an idol or statue of anyone, either a deity, of any human being, animal or any art. "Pratima" includes murti as well as painting of any non-anthropomorphic object. In contrast, "Bera" or "Bimba" meant "idol of god" only, and "Vigraha" was synonymous with "Bimba".
Types.
A "murti" in contemporary usage is any image or statue. It may be found inside or outside a temple or home, installed to be moved with a festive procession ("utsava murti"), or just be a landmark. It is a significant part of Hindu iconography, and is implemented in many ways. Two major categories include:
Beyond anthropomorphic forms of religious murti, some traditions of Hinduism cherish aniconism, where alternate symbols are shaped into a murti, such as the linga for Shiva, yoni for Devi, and the saligrama for Vishnu.
Methods and manuals.
Murti, when produced properly, are made according to the design rules of the Shilpa Shastras. They recommend materials, measurements, proportions, decoration, and symbolism of the murti. Explanation of the metaphysical significance of each stage of manufacture and the prescription of specific mantras to sanctify the process and evoke and invoke the power of the deity in the image are found in the liturgical handbooks the Agamas and Tantras. In Tantric traditions, a murti is installed by priests through the "Prana pratishta" ceremony, where mantras are recited sometimes with yantras (mystic diagrams), whereby, state Harold Coward and David Goa, the "divine vital energy of the cosmos is infused into the sculpture" and then the divine is welcomed as one would welcome a friend. The esoteric Hindu tantric traditions through texts such as "Tantra-tattva" follow elaborate rituals to infuse life into a murti. Some tantra texts such as the "Pancaratraraksa" state that anyone who considers an icon of Vishnu as nothing but "an ordinary object" made of iron "goes to hell". The use of murti, and particularly the "prana pratishta" consecration ceremony, states Buhnemann, has been criticized by Hindu groups. These groups state that this practice came from more recent "false tantra books", and that there is not a single word in the Vedas about such a ceremony.
<templatestyles src="Template:Quote_box/styles.css" />
A Hindu prayer before cutting a tree for a murti
<poem>
Oh, Tree! you have been selected for the worship of a deity,
Salutations to you!
I worship you per rules, kindly accept it.
May all who live in this tree, find residence elsewhere,
May they forgive us now, we bow to them.
</poem>
—" Brihat Samhita" 59.10 - 59.11
The artists who make any art or craft, including murti, were known as "shilpins". The formally trained "Shilpins" shape the murti not following fancy but following canonical manuals such as the Agamas and the Shilpa Shastras texts such as Vishvakarma. The materials of construction range from clay to wood to marble to metal alloys such as panchaloha. The sixth century "Brihat Samhita" and eighth-century text "Manasara-Silpasastra" (literally: "treatise on art using the method of measurement") identify nine materials for murti construction – gold, silver, copper, stone, wood, "Sudha" (a type of stucco, mortar plaster), "sarkara" (gravel, grit), "Bahasa" (marble types), and earth (clay, terracotta). For "Bahasa", the texts describe working methods for various types of marble, specialized stones, colors, and a range of opacity (transparent, translucent and crystal).
"Brihat Samhita", a 6th-century encyclopedia of a range of topics from horticulture to astrology to gemology to murti and temple design, specifies in Chapter 56 that the "pratima" (murti) height should be formula_0 of the sanctum sanctorum's door height, the "Pratima" height and the sanctum sanctorum room's width be in the ratio of 0.292, it stands on a pedestal that is 0.146 of sanctum room width, thereafter the text describes 20 types of temples with their dimensions. Chapter 58 of the text describes the ratios of various anatomical parts of a murti, from head to toe, along with the recommendation in verse 59.29 that generally accepted variations in dress, decoration, and dimensions of local regional traditions for the murti are the artistic tradition.
The texts recommend materials of construction, proportions, postures, and mudra, symbolic items the murti holds in its hands, colors, garments, and ornaments to go with the murti of each god or goddess, vehicles of deities such as Garuda, bull and lion, and other details. The texts also include chapters on the design of Jaina and Buddhist murti, as well as reliefs of sages, apsaras, different types of devotees (based on bhakti yoga, jnana yoga, karma yoga, ascetics) to decorate the area near the murti. The texts recommend that the material of construction and relative scale of murti be correlated to the scale of the temple dimensions, using twelve types of comparative measurements.
In Southern India, the material used predominantly for murti is black granite, while the material in North India is white marble. However, for some Hindus, it is not the materials used that matter, but the faith and meditation on the universal Absolute Brahman. More particularly, devotees meditate or worship on the formless God (nirguna Brahman) through murti symbolism of God (saguna Brahman) during a puja before a murti, or the meditation on a Tirthankara in the case of Jainism, thus making the material of construction or the specific shape of the murti not spiritually important.
According to John Keay, "Only after achieving remarkable expertise in the portrayal of the Buddha figure and of animal and human, did Indian stonemasons turn to produce images of the orthodox 'Hindu' deities". This view, however, is not shared by other scholars. Trudy King et al. state that stone images of reverential figures and guardian spirits ("yaksha") were first produced in Jainism and Hinduism, by about the 2nd century BCE, as suggested by Mathura region excavations, and this knowledge grew into iconographic traditions and stone monuments in India including those for Buddhism.
Role in worship.
Major Hindu traditions such as Vaishnavism, Shaivism, Shaktism and Smartism favour the use of murti. These traditions suggest that it is easier to dedicate time and focus on spirituality through anthropomorphic or non-anthropomorphic icons; Hindu scriptures such as the Bhagavad Gita make this point in verse 12.5.
In Hinduism, a murti itself is not god; it is an image of god and thus a symbol and representation. A murti is a form and manifestation of the formless Absolute. Thus a literal translation of "murti" as 'idol' is incorrect when idol is understood as a superstitious end in itself. Just as the photograph of a person is not the real person, a "murti" is an image in Hinduism but not the real thing, but in both cases the image reminds the viewer of something of emotional and real value. When a person worships a murti, it is assumed to be a manifestation of the essence or spirit of the deity, the worshipper's spiritual ideas and needs are meditated through it, yet the idea of ultimate reality or Brahman is not confined in it.
Devotional ("bhakti movement") practices center on cultivating a deep and personal bond of love with God, often expressed and facilitated with one or more murti, and include individual or community hymns, japa, or singing ("bhajan", "kirtan" or "aarti"). Acts of devotion, in major temples particularly, are structured on treating the murti as the manifestation of a revered guest, and the daily routine can include awakening the murti in the morning and making sure that it "is washed, dressed, and garlanded." In Vaishnavism, the building of a temple for the murti is considered an act of devotion, but non-murti symbolism is also common wherein the aromatic Tulsi plant or "Saligrama" is an aniconic reminder of the spiritualism in Vishnu. These puja rituals with the murti correspond to ancient cultural practices for a beloved guest, and the murti is welcomed, taken care of, and then requested to retire.
An image in Hinduism cannot be equated with a deity: the object of worship is the divine whose power is inside the image, not the image itself. Hindus believe everything is worthy of worship, as it contains divine energy emanating from the one god. According to the Agamas, the "bimba murti" is different from the "mantra murti" from the perspective of rituals, gestures, hymns and offerings.
Some Hindu denominations like Arya Samaj and Satya Mahima Dharma reject idol worship.
Modes of worshipping.
Worship of a "murti" involves various modes and rituals. Before a "murti" is worshipped, a ritual known as "prana pratishta" is conducted. This ritual is performed to invoke the presence of the god or goddess into the physical form of the murti. In temples, this ceremony is a one-time event for a specific "murti". In domestic rituals, the deity is invited to reside in the murti through "avahana" (invocation) each time a puja is conducted and then dispersed back at the end of the puja. Adorning a "murti" is mode that allows devotees to express love for the deity and visually and experientially connect with the nature of the god or goddess. In worship at a temple, the significant moment is when the adorned "murti" is revealed, and worshippers take darshan by witnessing the fully adorned "murti".
Role in history.
"Murti" and temples were well established in South Asia, before the start of Delhi Sultanate in the late 12th century CE. They became a target of destruction during raids and religious wars between Islam and Hinduism through the 18th century.
During the colonial era, Christian missionaries aiming to convert Hindus to Christianity wrote memoirs and books that were widely distributed in Europe, which Mitter, Pennington, and other scholars call fictionalized stereotypes, where "murti" were claimed as evidence of a lack of spiritual heritage in primitive Hindus, of "idolatry and savage worship of stones", and of practices akin to Biblical demons, calling "murti" monstrous devils or eroticized bizarre beings carved in stone. The British Missionary Society, with the colonial government's assistance, bought and sometimes seized "murti" from India, then transferred and displayed them in its "trophies" room in the United Kingdom, with a note claiming that these were given up by Hindus who now accepted the "folly and sin of idolatry". In other instances, the colonial British authorities, seeking additional government revenue, introduced a Pilgrim Tax on Hindus who wished to view "murti" inside major temples.
The missionaries and orientalist scholars attempted to justify the need for colonial rule of India by attacking "murti" as a symbol of depravity and primitiveness, arguing that it was, states Tanisha Ramachandran, "the White Man's Burden to create a moral society" in India. This literature by the Christian missionaries constructed the foundation of a "Hindu image" in Europe during the colonial era, and it presented "murti" idolatry as "the cause for the ills of Indian society". By the 19th century, ideas such as pantheism (the universe is identical with God or Brahman) contained in newly translated Sanskrit texts were linked to the idolatry of "murti" and declared to be additional evidence of superstition and evil by Christian missionaries and colonial authorities in British India.
The polemics of Christian missionaries in colonial India triggered a debate among Hindus, yielding divergent responses. These ranged from activists such as Dayananda Saraswati, who denounced all "murti", to Vivekananda, who refused to denounce "murti" and asked Hindus in India and Christians in the West to introspect on the fact that images are used everywhere to aid thinking and as a road to ideas, in the following words,
<templatestyles src="Template:Blockquote/styles.css" />Superstition is a great enemy of man, but bigotry is worse. Why does a Christian go to church? Why is the cross holy? Why is the face turned toward the sky in prayer? Why are there so many images in the Catholic Church? Why are there so many images in the minds of Protestants when they pray? My brethren, we can no more think about anything without a mental image than we can live without breathing. By the law of association, the material image calls up the mental idea and vice versa.
Religious intolerance and polemics, state Halbertal and Margalit, have historically targeted idols and material symbols cherished by other religions, while encouraging the worship of material symbols of one's own religion, characterizing the material symbols of others as grotesque and wrong, in some cases dehumanizing the others and encouraging the destruction of their idols. The outsider first conflates and stereotypes the "strange worship" of the other religions as "false worship", then labels the "false worship" as "improper worship and false belief" of pagans or an equivalent term, thereafter constructing an identity of the others as "primitive and barbarians" who need to be saved, followed by justified intolerance and often violence against those who cherish a material symbol different from one's own. In the history of Hinduism and India, states Pennington, Hindu deity images ("murti") have been a religious lens for focusing this anti-Hindu polemic and have been the basis for distortions, accusations and attacks by non-Indian religious powers and missionaries.
Significance.
Ancient Indian texts assert the significance of murti in spiritual terms. The "Vāstusūtra Upaniṣad", whose palm-leaf manuscripts were discovered in the 1970s among remote villages of Orissa – four in the Oriya language and one in crude Sanskrit – asserts that the doctrine of murti art making is founded on the principles of the origin and evolution of the universe, that the murti is a "form of every form of cosmic creator" that empirically exists in nature, and that it functions to inspire a devotee towards contemplating the Ultimate Supreme Principle (Brahman). This text, whose composition date is unknown but which probably dates from the late 1st millennium CE, discusses the significance of images as, state Alice Boner and others, an "inspiring, elevating and purifying influence" on the viewer and a "means of communicating a vision of supreme truth and for giving a taste of the infinite that lies beyond". It adds (abridged):
<templatestyles src="Template:Blockquote/styles.css" />
From the contemplation of images grows delight, from delight faith, from faith steadfast devotion, through such devotion arises that higher understanding ("parāvidyā") that is the royal road to moksha. Without the guidance of images, the mind of the devotee may go astray and form the wrong imagination. Images dispel false imaginations. [... ] It resides within the consciousness of "Rishis" (sages), who possess the ability to perceive the essence of all created things in their manifested forms. They observe the various attributes, the divine and the demoniac, the creative and the destructive forces, engaged in their eternal interplay. It is this vision of Rishis, of the gigantic drama of cosmic powers in eternal conflict, from which the "Sthapakas" [Silpins, "murti", and temple artists] drew the subject matter for their work.
In the fifth chapter of Vāstusūtra Upaniṣad, Pippalada asserts, "from tattva-rupa (essence of a form, underlying principle) come the "pratirupani" [images]". In the sixth chapter, Pippalada repeats his message that the artist portrays the particular and universal concepts, with the statement "the work of the "Sthapaka" is a creation similar to that of the Prajapati" (that which created the universe). Non-theistic Jaina scholars such as Jnansundar, states John Cort, have argued for the significance of murti along the same lines, asserting that "no matter what the field – scientific, commercial, religious – there can be no knowledge without an icon", that images are part of how human beings learn and focus their thoughts, and that icons are necessary and inseparable from spiritual endeavors in Jainism.
While "murti" are an easily and commonly visible aspect of Hinduism, they are not necessary for Hindu worship. Among Hindus, states Gopinath Rao, one who has realized Self (Soul, Atman) and the Universal Principle (Brahman, god) within himself has no need for any temple or divine image for worship. For those who have yet to reach this height of realization, various symbolic manifestations through images, idols, and icons, as well as mental modes of worship, are offered as one of the spiritual paths in the Hindu way of life. This belief is repeated in ancient Hindu scriptures. For example, the Jabaladarshana Upanishad states:
<templatestyles src="Template:Blockquote/styles.css" />अज्ञानं भावनार्थाय प्रतिमाः परिकल्पिताः
— - जाबालदर्शनोपनिषत्
</poem>
<poem>
A yogin perceives god (Siva) within himself,
images are for those who have not reached this knowledge. (Verse 59)
</poem>, in
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\tfrac{7}{8}"
}
] |
https://en.wikipedia.org/wiki?curid=571730
|
5717356
|
Steane code
|
Code for quantum error correction
The Steane code is a tool in quantum error correction introduced by Andrew Steane in 1996. It is a CSS code (Calderbank-Shor-Steane), using the classical binary [7,4,3] Hamming code to correct for both qubit flip errors (X errors) and phase flip errors (Z errors). The Steane code encodes one logical qubit in 7 physical qubits and is able to correct arbitrary single qubit errors.
Its check matrix in standard form is
formula_0
where H is the parity-check matrix of the Hamming code and is given by
formula_1
The formula_2 Steane code is the first in the family of quantum Hamming codes, codes with parameters formula_3 for integers formula_4. It is also a quantum color code.
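Because the same Hamming matrix H supplies both the X-type and Z-type checks, the CSS construction requires H to be self-orthogonal over GF(2), i.e. H·H^T = 0 (mod 2). The short sketch below (an added illustration, not part of the original article) verifies this condition and assembles the check matrix in standard form.
<syntaxhighlight lang="python">
import numpy as np

# Parity-check matrix of the classical [7,4,3] Hamming code, as given above.
H = np.array([[1, 0, 0, 1, 0, 1, 1],
              [0, 1, 0, 1, 1, 0, 1],
              [0, 0, 1, 0, 1, 1, 1]])

# CSS requirement for using the same code for X and Z checks:
# H must be self-orthogonal over GF(2), i.e. every pair of rows overlaps in an even number of positions.
assert np.all((H @ H.T) % 2 == 0)

# Check matrix of the Steane code in standard form: block-diagonal [[H, 0], [0, H]].
zero = np.zeros_like(H)
check_matrix = np.block([[H, zero], [zero, H]])
print(check_matrix.shape)  # (6, 14): six stabilizer generators acting on 7 qubits
</syntaxhighlight>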
Expression in the stabilizer formalism.
In a quantum error-correcting code, the codespace is the subspace of the overall Hilbert space where all logical states live. In an formula_5-qubit stabilizer code, we can describe this subspace by its Pauli stabilizing group, the set of all formula_5-qubit Pauli operators which stabilize every logical state. The stabilizer formalism allows us to define the codespace of a stabilizer code by specifying its Pauli stabilizing group. We can efficiently describe this exponentially large group by listing its generators.
Since the Steane code encodes one logical qubit in 7 physical qubits, the codespace for the Steane code is a formula_6-dimensional subspace of its formula_7-dimensional Hilbert space.
In the stabilizer formalism, the Steane code has 6 generators:
formula_8
Note that each of the above generators is the tensor product of 7 single-qubit Pauli operations. For instance, formula_9 is just shorthand for formula_10, that is, an identity on the first three qubits and an formula_11 gate on each of the last four qubits. The tensor products are often omitted in notation for brevity.
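As a concrete illustration of this shorthand (an added sketch, not part of the original article), the following snippet expands each generator into an explicit 2^7 × 2^7 matrix with Kronecker products and checks that the six generators commute pairwise, as stabilizer generators must.
<syntaxhighlight lang="python">
import numpy as np
from functools import reduce

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
pauli = {"I": I, "X": X, "Z": Z}

def operator(word):
    # e.g. operator("IIIXXXX") = I tensor I tensor I tensor X tensor X tensor X tensor X, a 128 x 128 matrix
    return reduce(np.kron, [pauli[c] for c in word])

generators = ["IIIXXXX", "IXXIIXX", "XIXIXIX",
              "IIIZZZZ", "IZZIIZZ", "ZIZIZIZ"]
mats = [operator(g) for g in generators]

# Stabilizer generators must commute with one another.
for i in range(len(mats)):
    for j in range(i + 1, len(mats)):
        assert np.allclose(mats[i] @ mats[j], mats[j] @ mats[i])
print("all six generators commute pairwise")
</syntaxhighlight>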
The logical formula_11 and formula_12 gates are
formula_13
The logical formula_14 and formula_15 states of the Steane code are
formula_16
Arbitrary codestates are of the form formula_17.
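The eight bit strings superposed in the logical zero state are exactly the GF(2) span of the supports of the three X-type generators, and applying the logical X (which flips all seven bits) maps them to the strings superposed in the logical one state. The following sketch (an added illustration, not part of the original article) reproduces this.
<syntaxhighlight lang="python">
import itertools
import numpy as np

# Supports of the X-type generators IIIXXXX, IXXIIXX, XIXIXIX as GF(2) row vectors.
x_rows = np.array([[0, 0, 0, 1, 1, 1, 1],
                   [0, 1, 1, 0, 0, 1, 1],
                   [1, 0, 1, 0, 1, 0, 1]])

# GF(2) span of these rows: the 8 bit strings superposed in the logical zero state.
span = set()
for coeffs in itertools.product([0, 1], repeat=3):
    v = (np.array(coeffs) @ x_rows) % 2
    span.add("".join(str(int(b)) for b in v))

listed = {"0000000", "1010101", "0110011", "1100110",
          "0001111", "1011010", "0111100", "1101001"}
assert span == listed

# The logical X flips every bit, giving the strings superposed in the logical one state.
flipped = {"".join("1" if b == "0" else "0" for b in s) for s in listed}
print(sorted(flipped))  # contains "1111111", "0101010", "1001100", ...
</syntaxhighlight>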
|
[
{
"math_id": 0,
"text": "\n \\begin{bmatrix}\n H & 0 \\\\\n 0 & H\n \\end{bmatrix}\n"
},
{
"math_id": 1,
"text": "\nH = \\begin{bmatrix}\n 1 & 0 & 0 & 1 & 0 & 1 & 1\\\\\n 0 & 1 & 0 & 1 & 1 & 0 & 1\\\\\n 0 & 0 & 1 & 0 & 1 & 1 & 1\n \\end{bmatrix}.\n"
},
{
"math_id": 2,
"text": "[[7,1,3]]"
},
{
"math_id": 3,
"text": "[[2^r-1, 2^r-1-2r, 3]]"
},
{
"math_id": 4,
"text": "r \\geq 3"
},
{
"math_id": 5,
"text": "n"
},
{
"math_id": 6,
"text": "2"
},
{
"math_id": 7,
"text": "2^7"
},
{
"math_id": 8,
"text": "\n\\begin{align}\n & IIIXXXX \\\\\n & IXXIIXX \\\\\n & XIXIXIX \\\\\n & IIIZZZZ \\\\\n & IZZIIZZ \\\\\n & ZIZIZIZ.\n\\end{align}\n"
},
{
"math_id": 9,
"text": "IIIXXXX"
},
{
"math_id": 10,
"text": "I \\otimes I \\otimes I \\otimes X \\otimes X \\otimes X \\otimes X"
},
{
"math_id": 11,
"text": "X"
},
{
"math_id": 12,
"text": "Z"
},
{
"math_id": 13,
"text": "\n\\begin{align}\nX_L & = XXXXXXX \\\\\nZ_L & = ZZZZZZZ.\n\\end{align}\n"
},
{
"math_id": 14,
"text": "| 0 \\rangle"
},
{
"math_id": 15,
"text": "| 1 \\rangle"
},
{
"math_id": 16,
"text": "\n\\begin{align}\n| 0 \\rangle_L = & \\frac{1}{\\sqrt{8}} [ | 0000000 \\rangle + | 1010101 \\rangle + | 0110011 \\rangle + | 1100110 \\rangle \\\\\n & + | 0001111 \\rangle + | 1011010 \\rangle + | 0111100 \\rangle + | 1101001 \\rangle ] \\\\\n| 1 \\rangle_L = & X_L | 0 \\rangle_L. \n\\end{align}\n"
},
{
"math_id": 17,
"text": "| \\psi \\rangle = \\alpha | 0 \\rangle_L + \\beta | 1 \\rangle_L"
}
] |
https://en.wikipedia.org/wiki?curid=5717356
|
571755
|
P–n junction
|
Semiconductor–semiconductor junction
A p–n junction is a combination of two types of semiconductor materials, p-type and n-type, in a single crystal. The "n" (negative) side contains freely-moving electrons, while the "p" (positive) side contains freely-moving electron holes. Connecting the two materials creates a depletion region near the boundary, as the free electrons fill the available holes, which in turn allows electric current to pass through the junction in only one direction.
p–n junctions represent the simplest case of a semiconductor electronic device; a p-n junction by itself, when connected on both sides to a circuit, is a diode. More complex circuit components can be created by further combinations of p-type and n-type semiconductors; for example, the bipolar junction transistor (BJT) is a semiconductor in the form n–p–n or p–n–p. Combinations of such semiconductor devices on a single chip allow for the creation of integrated circuits.
Solar cells and light-emitting diodes (LEDs) are essentially p-n junctions where the semiconductor materials are chosen, and the component's geometry designed, to maximise the desired effect (light absorption or emission). A Schottky junction is a similar case to a p–n junction, where instead of an n-type semiconductor, a metal directly serves the role of the "negative" charge provider.
History.
The invention of the p–n junction is usually attributed to American physicist Russell Ohl of Bell Laboratories in 1939. Two years later (1941), Vadim Lashkaryov reported the discovery of p–n junctions in Cu2O and silver sulphide photocells and selenium rectifiers. The modern theory of p-n junctions was elucidated by William Shockley in his classic work "Electrons and Holes in Semiconductors" (1950).
Properties.
The p–n junction possesses a useful property for modern semiconductor electronics. A p-doped semiconductor is relatively conductive. The same is true of an n-doped semiconductor, but the junction between them can become depleted of charge carriers, depending on the relative voltages of the two semiconductor regions. By manipulating the flow of charge carriers across this depleted layer, p–n junctions are commonly used as diodes: circuit elements that allow a flow of electricity in one direction but not in the other (opposite) direction.
"Bias" is the application of a voltage relative to a p–n junction region:
The forward-bias and the reverse-bias properties of the p–n junction imply that it can be used as a diode. A p–n junction diode allows charge carriers to flow in one direction, but not in the opposite direction; negative charge carriers (electrons) can easily flow through the junction from n to p but not from p to n, and the reverse is true for positive charge carriers (holes). When the p–n junction is forward-biased, charge carriers flow freely due to the reduction in energy barriers seen by electrons and holes. When the p–n junction is reverse-biased, however, the junction barrier (and therefore resistance) becomes greater and charge flow is minimal.
Equilibrium (zero bias).
In a p–n junction, without an external applied voltage, an equilibrium condition is reached in which a potential difference forms across the junction. This potential difference is called "built-in potential" formula_0.
At the junction, some of the free electrons in the n-type wander into the p-type due to random thermal migration ("diffusion"). As they diffuse into the p-type they combine with holes, and cancel each other out. In a similar way some of the positive holes in the p-type diffuse into the n-type and combine with free electrons, and cancel each other out. The positively charged ("donor") dopant atoms in the n-type are part of the crystal, and cannot move. Thus, in the n-type, a region near the junction has a fixed amount of positive charge. The negatively charged ("acceptor") dopant atoms in the p-type are part of the crystal, and cannot move. Thus, in the p-type, a region near the junction becomes negatively charged. The result is a region near the junction that acts to repel the mobile charges away from the junction through the electric field that these charged regions create. The regions near the p–n interface lose their neutrality and most of their mobile carriers, forming the space charge region or depletion layer (see figure A).
The electric field created by the space charge region opposes the diffusion process for both electrons and holes. There are two concurrent phenomena: the diffusion process that tends to generate more space charge, and the electric field generated by the space charge that tends to counteract the diffusion. The carrier concentration profile at equilibrium is shown in figure A with blue and red lines. Also shown are the two counterbalancing phenomena that establish equilibrium.
The space charge region is a zone with a net charge provided by the fixed ions (donors or acceptors) that have been left "uncovered" by majority carrier diffusion. When equilibrium is reached, the charge density is approximated by the displayed step function. In fact, since the y-axis of figure A is log-scale, the region is almost completely depleted of majority carriers (leaving a charge density equal to the net doping level), and the edge between the space charge region and the neutral region is quite sharp (see figure B, Q(x) graph). The space charge region has the same magnitude of charge on both sides of the p–n interfaces, thus it extends farther on the less doped side in this example (the n side in figures A and B).
Forward bias.
In forward bias, the p-type is connected with the positive terminal and the n-type is connected with the negative terminal. The accompanying panels show the energy band diagram, electric field, and net charge density. Both the p and n regions are doped at a 1e15 cm−3 (160 μC/cm3) doping level, leading to a built-in potential of ~0.59 V. The forward bias reduces the depletion width, which as a consequence reduces the electrical resistance across the p–n junction. Electrons that cross the p–n junction into the p-type material (or holes that cross into the n-type material) diffuse into the nearby neutral region. The amount of minority diffusion in the near-neutral zones determines the amount of current that can flow through the diode.
Only majority carriers (electrons in n-type material or holes in p-type) can flow through a semiconductor for a macroscopic length. With this in mind, consider the flow of electrons across the junction. The forward bias causes a force on the electrons pushing them from the N side toward the P side. With forward bias, the depletion region is narrow enough that electrons can cross the junction and "inject" into the p-type material. However, they do not continue to flow through the p-type material indefinitely, because it is energetically favorable for them to recombine with holes. The average length an electron travels through the p-type material before recombining is called the "diffusion length", and it is typically on the order of micrometers.
Although the electrons penetrate only a short distance into the p-type material, the electric current continues uninterrupted, because holes (the majority carriers) begin to flow in the opposite direction. The total current (the sum of the electron and hole currents) is constant in space, because any variation would cause charge buildup over time (this is Kirchhoff's current law). The flow of holes from the p-type region into the n-type region is exactly analogous to the flow of electrons from N to P (electrons and holes swap roles and the signs of all currents and voltages are reversed).
Therefore, the macroscopic picture of the current flow through the diode involves electrons flowing through the n-type region toward the junction, holes flowing through the p-type region in the opposite direction toward the junction, and the two species of carriers constantly recombining in the vicinity of the junction. The electrons and holes travel in opposite directions, but they also have opposite charges, so the overall current is in the same direction on both sides of the diode, as required.
The Shockley diode equation models the forward-bias operational characteristics of a p–n junction outside the avalanche (reverse-biased conducting) region.
Reverse bias.
Connecting the "p-type" region to the "negative" terminal of the voltage supply and the "n-type" region to the "positive" terminal corresponds to reverse bias. If a diode is reverse-biased, the voltage at the cathode is comparatively higher than at the anode. Therefore, very little current flows until the diode breaks down. The connections are illustrated in the adjacent diagram.
Because the p-type material is now connected to the negative terminal of the power supply, the 'holes' in the p-type material are pulled away from the junction, leaving behind charged ions and causing the width of the depletion region to increase. Likewise, because the n-type region is connected to the positive terminal, the electrons are pulled away from the junction, with similar effect. This increases the voltage barrier causing a high resistance to the flow of charge carriers, thus allowing minimal electric current to cross the p–n junction. The increase in resistance of the p–n junction results in the junction behaving as an insulator.
The strength of the depletion zone electric field increases as the reverse-bias voltage increases. Once the electric field intensity increases beyond a critical level, the p–n junction depletion zone breaks down and current begins to flow, usually by either the Zener or the avalanche breakdown processes. Both of these breakdown processes are non-destructive and are reversible, as long as the amount of current flowing does not reach levels that cause the semiconductor material to overheat and cause thermal damage.
This effect is used to advantage in Zener diode regulator circuits. Zener diodes have a low breakdown voltage. A standard value for breakdown voltage is for instance 5.6 V. This means that the voltage at the cathode cannot be more than about 5.6 V higher than the voltage at the anode (though there is a slight rise with current), because the diode breaks down, and therefore conducts, if the voltage gets any higher. This effect limits the voltage over the diode.
Another application of reverse biasing is Varactor diodes, where the width of the depletion zone (controlled with the reverse bias voltage) changes the capacitance of the diode.
Governing equations.
Size of depletion region.
For a p–n junction, let formula_1 be the concentration of negatively-charged acceptor atoms and formula_2 be the concentration of positively-charged donor atoms. Let formula_3 and formula_4 be the equilibrium concentrations of electrons and holes respectively. Thus, by Poisson's equation:
formula_5
where formula_6 is the electric potential, formula_7 is the charge density, formula_8 is permittivity and
formula_9 is the magnitude of the electron charge.
For a general case, the dopants have a concentration profile that varies with depth x, but for a simple case of an abrupt junction, formula_10 can be assumed to be constant on the p side of the junction and zero on the n side, and formula_11 can be assumed to be constant on the n side of the junction and zero on the p side. Let formula_12 be the width of the depletion region on the p-side and formula_13 the width of the depletion region on the n-side. Then, since formula_14 within the depletion region, it must be that
formula_15
because the total charge on the p and the n side of the depletion region sums to zero. Therefore, letting formula_16 and formula_17 represent the entire depletion region and the potential difference across it,
formula_18
And thus, letting formula_19 be the total width of the depletion region, we get
formula_20
formula_17 can be written as formula_21, where we have broken up the voltage difference into the equilibrium plus external components. The equilibrium potential results from diffusion forces, and thus we can calculate formula_22 by implementing the Einstein relation and assuming the semiconductor is nondegenerate ("i.e.", the product formula_23 is independent of the Fermi energy):
formula_24
where "T" is the temperature of the semiconductor and "k" is Boltzmann constant.
Current across depletion region.
The "Shockley ideal diode equation" characterizes the current across a p–n junction as a function of external voltage and ambient conditions (temperature, choice of semiconductor, etc.). To see how it can be derived, we must examine the various reasons for current. The convention is that the forward (+) direction be pointed against the diode's built-in potential gradient at equilibrium.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "V_{\\rm bi}"
},
{
"math_id": 1,
"text": " C_A(x) "
},
{
"math_id": 2,
"text": " C_D(x) "
},
{
"math_id": 3,
"text": "N_0(x) "
},
{
"math_id": 4,
"text": "P_0(x) "
},
{
"math_id": 5,
"text": "-\\frac{\\mathrm{d}^2 V}{\\mathrm{d}x^2}=\\frac{\\rho }{\\varepsilon }=\\frac{q}{\\varepsilon }\\left[ (P_0-N_0)+(C_D-C_A)\\right]"
},
{
"math_id": 6,
"text": "V"
},
{
"math_id": 7,
"text": "\\rho "
},
{
"math_id": 8,
"text": "\\varepsilon "
},
{
"math_id": 9,
"text": "q"
},
{
"math_id": 10,
"text": " C_A "
},
{
"math_id": 11,
"text": "C_D "
},
{
"math_id": 12,
"text": "d_p"
},
{
"math_id": 13,
"text": "d_n "
},
{
"math_id": 14,
"text": "P_0=N_0=0"
},
{
"math_id": 15,
"text": "d_pC_A=d_nC_D"
},
{
"math_id": 16,
"text": "D"
},
{
"math_id": 17,
"text": "\\Delta V"
},
{
"math_id": 18,
"text": "\\Delta V=\\int_D \\int\\frac{q}{\\varepsilon }\\left[ (P_0-N_0)+ (C_D-C_A)\\right]\\,\\mathrm{d} x \\,\\mathrm{d}x\n=\\frac{C_A C_D}{C_A+C_D}\\frac{q}{2\\varepsilon}(d_p+d_n)^2"
},
{
"math_id": 19,
"text": "d"
},
{
"math_id": 20,
"text": "d=\\sqrt{\\frac{2\\varepsilon }{q}\\frac{C_A+C_D}{C_AC_D}\\Delta V}"
},
{
"math_id": 21,
"text": "\\Delta V_0+\\Delta V_\\text{ext}"
},
{
"math_id": 22,
"text": "\\Delta V_0"
},
{
"math_id": 23,
"text": "{P}_0 {N}_{0}= {n}_{i}^2"
},
{
"math_id": 24,
"text": "\\Delta V_0 = \\frac{kT}{q} \\ln \\left( \\frac{C_A C_D}{P_0 N_0} \\right) = \\frac{kT}{q}\\ln \\left( \\frac{C_A C_D}{n_i^2} \\right)"
},
{
"math_id": 25,
"text": "\\mathbf{J}_F"
},
{
"math_id": 26,
"text": "n"
},
{
"math_id": 27,
"text": "\\mathbf{J}_D\\propto-q\\nabla n"
},
{
"math_id": 28,
"text": "\\mathbf{J}_R"
}
] |
https://en.wikipedia.org/wiki?curid=571755
|
57177883
|
DeMix
|
Statistical method for studying cancer cells
DeMix is a statistical method for deconvolving mixed cancer transcriptomes to predict the likely proportions of tumor and stromal cells in samples using a linear mixture model. It was developed by "Ahn et al."
DeMix explicitly considers four possible scenarios: matched tumor and normal samples, with reference genes; matched tumor and normal samples, without reference genes; unmatched tumor and normal samples, with reference genes; and unmatched tumor and normal samples, without reference genes.
Reference genes are a set of genes for which expression profiles have been accurately estimated based on external data in all constituting tissue types.
Introduction.
Solid tumor samples obtained from clinical practice are highly heterogeneous. They consist of multiple clonal populations of cancer cells as well as adjacent normal tissue, stromal cells, and infiltrating immune cells. The highly heterogeneous structure of tumor tissues can complicate or bias various genomic data analyses. There is therefore substantial interest in removing this heterogeneity by isolating the expression data of the constituent cell types from mixed samples "in silico".
It is important to estimate and account for the tumor purity, or the percentage of cancer cells in the tumor sample before analyses. Owing to the marked differences between cancer and normal cells, it is possible to estimate tumor purity from high-throughput genomic or epigenomic data.
DeMix estimates the proportion and gene expression profile of cancer cells in mixed samples. In this method, the mixed sample is assumed to be composed of only two cell types: cancer cells (without any known a priori gene expression profile) and normal cells (with known gene expression data, which can come from either tumor-matched or unmatched samples).
DeMix was developed for microarray data. Its authors showed that it is important to use the raw data as input, assuming they follow a log-normal distribution as is the case for microarray data, instead of working with log-transformed data as most other methods did. DeMix estimates the variance of the gene expression in the normal samples and uses this in the maximum likelihood estimation to predict the cancer cell gene expression and proportions, thus implicitly using a gene-specific weight for each gene.
DeMix is the first method to model a linear mixture of gene expression levels on data before they are log-transformed. The method analyzes data from heterogeneous tumor samples before the data are log-transformed, estimates individual expression levels for each sample and each gene, and works in an unmatched design.
Method.
Let formula_0 and formula_1 be the expression level for a gene g and sample formula_2 from pure normal and tumor tissues, respectively. LN represents the formula_3 Normal distribution. When the formula_3 Normal assumption is violated, a deterioration of accuracy should be expected. The expression level from tumor tissue formula_4 is not observed. Let formula_5 denote the expression level of a clinically derived tumor sample which is observed. Let formula_6, unknown, denote the proportion of tumor tissue in sample formula_2. The raw measured data is written as a linear equation as
formula_7
Note that formula_5 does not follow a formula_3 Normal distribution when both formula_8 and formula_4 follow a formula_3 Normal distribution.
There are mainly two steps in the DeMix method:
Step 1: Given the formula_9's and the distribution of the formula_10's, the likelihood of observing formula_9 is maximized in order to search for formula_11.
Step 2: Given the formula_12's and the distribution of the formula_13's and the formula_14, an individual pair of formula_15 is estimated for each sample and each gene.
These steps are then adapted to specific data scenarios.
DeMix was developed using the Nelder–Mead optimization procedure which includes a numerical integration of the joint density. DeMix takes a two-stage approach by first estimating the formula_6s and then estimating the means and variances of gene expressions based on the formula_16s. A joint model that estimates all parameters simultaneously will be able to further incorporate the uncertainty measure of the tissue proportions. However, the estimation step from such a model can be computationally intensive and may not be suitable for the analysis of high-throughput data.
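The following is a deliberately simplified sketch of the first stage (not the authors' implementation): a single gene is simulated, all samples share one tumor proportion and one set of tumor parameters, and the likelihood of the observed mixed values is maximized with Nelder–Mead while the unobserved normal component is integrated out numerically. DeMix itself gives each sample its own proportion formula_6 and pools information across genes; all numerical values below are arbitrary illustration choices.
<syntaxhighlight lang="python">
import numpy as np
from scipy import integrate, optimize

rng = np.random.default_rng(0)

# Simplified single-gene simulation (illustration only).
mu_N, sd_N = 6.0, 0.5          # known log2-scale parameters of the normal component
mu_T, sd_T = 8.0, 0.7          # unknown log2-scale parameters of the tumor component
pi_true, n_samples = 0.6, 25

N = 2.0 ** rng.normal(mu_N, sd_N, n_samples)
T = 2.0 ** rng.normal(mu_T, sd_T, n_samples)
Y = pi_true * T + (1.0 - pi_true) * N          # observed mixed expression, raw scale

def log2norm_pdf(x, mu, sd):
    # density of X when log2(X) ~ Normal(mu, sd^2)
    return np.exp(-(np.log2(x) - mu) ** 2 / (2.0 * sd ** 2)) / (x * np.log(2.0) * sd * np.sqrt(2.0 * np.pi))

def neg_loglik(params):
    pi, mu_t, sd_t = params
    if not (0.01 < pi < 0.99 and 0.05 < sd_t < 5.0):
        return np.inf
    ll = 0.0
    for y in Y:
        # f_Y(y) = integral over n of f_N(n) f_T((y - (1-pi) n) / pi) / pi, for 0 < n < y/(1-pi)
        def integrand(n):
            t = (y - (1.0 - pi) * n) / pi
            if n <= 0.0 or t <= 0.0:
                return 0.0
            return log2norm_pdf(n, mu_N, sd_N) * log2norm_pdf(t, mu_t, sd_t) / pi
        val, _ = integrate.quad(integrand, 0.0, y / (1.0 - pi), limit=100)
        ll += np.log(max(val, 1e-300))
    return -ll

# Step 1: search {pi, mu_T, sigma_T} by maximizing the likelihood of Y with Nelder-Mead.
res = optimize.minimize(neg_loglik, x0=[0.5, 7.0, 0.5], method="Nelder-Mead")
pi_hat = res.x[0]
print(f"estimated tumor proportion: {pi_hat:.2f}   (simulated truth: {pi_true})")
# Step 2 would then reconstruct an individual (T, N) pair for every sample and gene
# given these estimated parameters; it is omitted from this sketch.
</syntaxhighlight>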
Usage.
DeMix addresses four data scenarios: with or without a reference gene and matched or unmatched design. Although the algorithm requires a minimum of one gene as a reference gene, it is recommended to use at least 5 to 10 genes to alleviate the potential influence from outliers and to identify an optimal set of formula_12s.
DeMix assumes the mixed sample is composed of at most two cellular compartments: normal and tumor, and that the distributional parameters of normal cells can be estimated from other available data. For other situations, more complex modeling may be needed.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "N_{ig} \\sim LN(\\mu_{N_g}, \\sigma^2_{N_g})"
},
{
"math_id": 1,
"text": "T_{ig} \\sim LN(\\mu_{T_g}, \\sigma^2_{T_g})"
},
{
"math_id": 2,
"text": "i"
},
{
"math_id": 3,
"text": "log_2"
},
{
"math_id": 4,
"text": "T_{ig}"
},
{
"math_id": 5,
"text": "Y_{ig}"
},
{
"math_id": 6,
"text": "\\pi_i"
},
{
"math_id": 7,
"text": "Y_{ig}=\\pi_iT_{ig}+(1-\\pi_i)N_{ig}"
},
{
"math_id": 8,
"text": "N_{ig}"
},
{
"math_id": 9,
"text": "Y"
},
{
"math_id": 10,
"text": "N"
},
{
"math_id": 11,
"text": "\\{\\pi, \\mu_T, \\sigma^2_T\\}"
},
{
"math_id": 12,
"text": "\\pi"
},
{
"math_id": 13,
"text": "T"
},
{
"math_id": 14,
"text": "N's"
},
{
"math_id": 15,
"text": "(T, N)"
},
{
"math_id": 16,
"text": "\\hat{\\pi}_i"
}
] |
https://en.wikipedia.org/wiki?curid=57177883
|
57183712
|
Zeldovich–Liñán model
|
In combustion, the Zeldovich–Liñán model is a two-step reaction model for combustion processes, named after Yakov Borisovich Zeldovich and Amable Liñán. The model includes a chain-branching and a chain-breaking (or radical-recombination) reaction. The model was first introduced by Zeldovich in 1948 and later analysed by Liñán using activation energy asymptotics in 1971. The mechanism with a "quadratic or second-order recombination" that was originally studied reads as
formula_0
where formula_1 is the fuel, formula_2 is an intermediate radical, formula_3 is the third body and formula_4 is the product. The mechanism with a "linear or first-order recombination" is known as Zeldovich–Liñán–Dold model which was introduced by John W. Dold. This mechanism reads as
formula_5
In both models, the first reaction is the chain-branching reaction (it produces two radicals by consuming one radical), which is considered to be auto-catalytic (it consumes no heat and releases no heat) and has a very large activation energy. The second reaction is the chain-breaking (or radical-recombination) reaction (it consumes radicals), in which all of the heat of combustion is released, and it has an almost negligible activation energy. Therefore, the rate constants are written as
formula_6
where formula_7 and formula_8 are the pre-exponential factors, formula_9 is the activation energy of the chain-branching reaction, which is much larger than the thermal energy, and formula_10 is the temperature.
Crossover temperature.
There are, however, two fundamental aspects that differentiate the Zeldovich–Liñán–Dold (ZLD) model from the Zeldovich–Liñán (ZL) model. First, the so-called cold-boundary difficulty in premixed flames does not occur in the ZLD model; secondly, the so-called crossover temperature exists in the ZLD model, but not in the ZL model.
For simplicity, consider a spatially homogeneous system; then the concentration formula_11 of the radical in the ZLD model evolves according to
formula_12
It is clear from this equation that the radical concentration will grow in time if the right-hand side is positive. More precisely, the initial equilibrium state formula_13 is unstable if the right-hand side is positive. If formula_14 denotes the initial fuel concentration, a "crossover temperature" formula_15 can be defined as the temperature at which the branching and recombination rates are equal, i.e.,
formula_16
When formula_17, branching dominates over recombination and therefore the radical concentration will grow in time, whereas if formula_18, recombination dominates over branching and therefore the radical concentration will decay in time.
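To make this dichotomy concrete, the sketch below (with purely illustrative parameter values that are assumptions, not taken from the literature) computes the crossover temperature and integrates the homogeneous radical equation, with the fuel concentration frozen at its initial value, slightly below and slightly above formula_15.
<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import solve_ivp

# Purely illustrative parameter values.
A_I, A_II = 1.0e9, 1.0e4      # pre-exponential factors of branching and recombination
E_I, R = 1.5e5, 8.314         # branching activation energy (J/mol) and gas constant
C_F0, C_Z0 = 1.0, 1.0e-6      # initial fuel and radical concentrations (arbitrary units)

# Crossover temperature: exp(E_I / (R T*)) = (A_I / A_II) C_F0
T_star = E_I / (R * np.log(A_I * C_F0 / A_II))
print(f"crossover temperature T* = {T_star:.0f} K")

def rhs(t, c, T):
    # dC_Z/dt = C_Z (A_I C_F exp(-E_I/(R T)) - A_II), with C_F frozen at C_F0
    return c * (A_I * C_F0 * np.exp(-E_I / (R * T)) - A_II)

for T in (0.95 * T_star, 1.05 * T_star):
    sol = solve_ivp(rhs, (0.0, 5.0e-4), [C_Z0], args=(T,), rtol=1e-8)
    trend = "grows" if sol.y[0, -1] > C_Z0 else "decays"
    print(f"T = {T:.0f} K: radical concentration {trend}, C_Z(t_end) = {sol.y[0, -1]:.3e}")
</syntaxhighlight>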
In a more general setup, where the system is non-homogeneous, evaluation of the crossover temperature is complicated by the presence of convective and diffusive transport.
In the ZL model, one would have obtained formula_19, but since formula_20 is zero or vanishingly small in the perturbed state, there is no crossover temperature.
Three regimes.
In his analysis, Liñán showed that there exist three types of regimes, namely the "slow recombination regime", the "intermediate recombination regime" and the "fast recombination regime". These regimes exist in both of the aforementioned models.
Let us consider a premixed flame in the ZLD model. Based on the thermal diffusivity formula_21 and the flame burning speed formula_22, one can define the flame thickness (or thermal thickness) as formula_23. Since the activation energy of the branching reaction is much greater than the thermal energy, the characteristic thickness formula_24 of the branching layer will be formula_25, where formula_26 is the Zeldovich number based on formula_27. The recombination reaction has no activation energy, and its thickness formula_28 will be characterised by its Damköhler number formula_29, where formula_30 is the molecular weight of the intermediate species. Specifically, from a diffusive-reactive balance, we obtain formula_31 (in the ZL model, this would have been formula_32).
By comparing the thicknesses of the different layers, the three regimes are classified:
The fast recombination regime represents situations near the flammability limits. As can be seen, the recombination layer becomes comparable to the branching layer. Criticality is reached when the branching is unable to cope with the recombination. Such criticality exists in the ZLD model. Su-Ryong Lee and Jong S. Kim showed that as formula_33 becomes large, the critical condition is reached,
formula_34
where
formula_35
Here formula_36 is the heat release parameter, formula_37 is the unburnt fuel mass fraction and formula_38 is the molecular weight of the fuel.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\begin{align}\n\\rm{Branching\\, (I):} & \\quad \\rm{F} + \\rm{Z} \\rightarrow 2\\rm{Z} \\\\\n\\rm{Recombination\\, (II):} & \\quad\\rm{Z} + \\rm{Z} + \\rm{M} \\rightarrow 2\\rm{P} +\\rm{M} +\\rm{Heat}\n\\end{align}"
},
{
"math_id": 1,
"text": "\\rm{F}"
},
{
"math_id": 2,
"text": "\\rm{Z}"
},
{
"math_id": 3,
"text": "\\rm{M}"
},
{
"math_id": 4,
"text": "\\rm{P}"
},
{
"math_id": 5,
"text": "\\begin{align}\n\\rm{Branching\\, (I):} & \\quad \\rm{F} + \\rm{Z} \\rightarrow 2\\rm{Z} \\\\\n\\rm{Recombination\\, (II):} & \\quad \\rm{Z} + \\rm{M} \\rightarrow \\rm{P} +\\rm{M} +\\rm{Heat}\n\\end{align}"
},
{
"math_id": 6,
"text": "k_{\\rm{I}} = A_{\\rm{I}} e^{-E_{\\rm I}/RT}, \\quad k_{\\rm II} = A_{\\rm II}"
},
{
"math_id": 7,
"text": "A_{\\rm I}"
},
{
"math_id": 8,
"text": "A_{\\rm II}"
},
{
"math_id": 9,
"text": "E_{\\rm I}"
},
{
"math_id": 10,
"text": "T"
},
{
"math_id": 11,
"text": "C_{\\mathrm{Z}}(t)"
},
{
"math_id": 12,
"text": "\\frac{dC_{\\mathrm{Z}}}{dt} = C_{\\mathrm{Z}}\\left(A_{\\mathrm{I}}C_{\\mathrm{F}}e^{-E_{\\mathrm{I}}/RT} - A_{\\mathrm{II}}\\right)."
},
{
"math_id": 13,
"text": "C_{\\mathrm{Z}}(0)=0"
},
{
"math_id": 14,
"text": "C_{\\mathrm{F}}(0)=C_{\\mathrm{F},0}"
},
{
"math_id": 15,
"text": "T^*"
},
{
"math_id": 16,
"text": "e^{E_{\\mathrm{I}}/RT^*} = \\frac{A_{\\mathrm{I}}}{A_{\\mathrm{II}}} C_{\\mathrm{F},0}."
},
{
"math_id": 17,
"text": "T>T^*"
},
{
"math_id": 18,
"text": "T<T^*"
},
{
"math_id": 19,
"text": "e^{E_{\\mathrm{I}}/RT^*} = (A_{\\mathrm{I}}/A_{\\mathrm{II}}) C_{\\mathrm{F},0} C_{\\mathrm{Z}}(0)"
},
{
"math_id": 20,
"text": "C_{\\mathrm{Z}}(0)"
},
{
"math_id": 21,
"text": "D_T"
},
{
"math_id": 22,
"text": "S_L"
},
{
"math_id": 23,
"text": "\\delta_L=D_T/S_L"
},
{
"math_id": 24,
"text": "\\delta_B"
},
{
"math_id": 25,
"text": "\\delta_B/\\delta_L \\sim O(1/\\beta)"
},
{
"math_id": 26,
"text": "\\beta"
},
{
"math_id": 27,
"text": "E_{\\mathrm{I}}"
},
{
"math_id": 28,
"text": "\\delta_R"
},
{
"math_id": 29,
"text": "Da_{\\mathrm{II}}=(D_T/S_L^2)/(W_{\\mathrm{Z}} A_{\\mathrm{II}}^{-1})"
},
{
"math_id": 30,
"text": "W_{\\mathrm{Z}}"
},
{
"math_id": 31,
"text": "\\delta_R/\\delta_L \\sim O(Da_{\\mathrm{II}}^{-1/2})"
},
{
"math_id": 32,
"text": "\\delta_R/\\delta_L \\sim O(Da_{\\mathrm{II}}^{-1/3})"
},
{
"math_id": 33,
"text": "\\Delta \\equiv Da_{\\mathrm{II}}/\\beta^2"
},
{
"math_id": 34,
"text": "r=e\\left(1+ \\frac{0.4162}{\\sqrt{\\Delta}}\\right)"
},
{
"math_id": 35,
"text": "r = \\frac{Da_{\\mathrm{I}}}{\\beta Da_{\\mathrm{II}}} e^{-\\beta(1+q)/q}, \\quad Da_{\\mathrm{I}} = \\frac{D_T/S_L^2}{(A_\\mathrm{I}Y_{\\mathrm{F},0}/W_{\\mathrm{F}})^{-1}}."
},
{
"math_id": 36,
"text": "q"
},
{
"math_id": 37,
"text": "Y_{\\mathrm{F},0}"
},
{
"math_id": 38,
"text": "W_{\\mathrm{F}}"
}
] |
https://en.wikipedia.org/wiki?curid=57183712
|
57188648
|
Vladimir Ilyin (mathematician)
|
Soviet and Russian mathematician
Vladimir Aleksandrovich Ilyin (May 2, 1928 – June 26, 2014) was a Soviet and Russian mathematician, Professor at Moscow State University, Doctor of Science, and Academician of the Russian Academy of Sciences, who made significant contributions to the theory of differential equations, the spectral theory of differential operators, and mathematical modeling.
Biography.
Ilyin was allowed to skip the first grade and to start school from the second grade in Moscow in 1936, and he finished school with a gold medal in 1945. After graduating with Honours from the MSU Faculty of Physics in 1950, Ilyin continued his education at the same faculty as a postgraduate student specializing in mathematical physics. In 1953 Ilyin obtained his Candidate of Science degree in Physics and Mathematics for the thesis «Diffraction of electromagnetic waves on some inhomogeneities», his scientific advisor being Andrey Tikhonov.
In 1958 he obtained Doctor of Science degree in Physics and Mathematics for his thesis «On convergence of expansions in eigenfunctions of Laplace operator».
In 1960 he was appointed Professor of the Faculty of Physics at Moscow State University.
From 1953 till the end of his life Ilyin worked at Moscow State University:
Since 1973 he also worked as a Chief Researcher at Steklov Institute of Mathematics (Department of Theory of Functions).
Ilyin was the author of more than 140 research papers and 17 monographs on mathematical analysis, analytical geometry, and linear algebra, which were published both in Russia and abroad. He supervised 28 Doctors of Sciences and more than 100 Candidates of Sciences in Physics and Mathematics. For several years he chaired the Expert Council of the Higher Attestation Commission. He was a member of the State Prize Committee of the Russian Federation. He was also a member of the Scientific and Methodological Council on Mathematics under the Ministry of Education of Russia.
His son, a Corresponding Member of the Russian Academy of Sciences, is a Professor of the Department of Nonlinear Dynamic Systems and Control Processes at CMC MSU.
Teaching activities.
Ilyin's 55-year scientific and pedagogical activity is connected with Moscow State University: with the Faculty of Physics, where he started his career, and with the Faculty of Computational Mathematics and Cybernetics. He supervised 28 Doctors of Sciences and more than 100 Candidates of Science in Physics and Mathematics. Several of his students are Members of the Russian and National Academies of Sciences.
Ilyin is considered to have been a brilliant lecturer. He wrote many textbooks, which have become classics. Eight of them have been included in the series «Classical University Textbook». The lecture courses he gave during his teaching career included «Equations of Mathematical Physics», «Equations of Elliptic Type», «Functional Analysis», «Mathematical Analysis», and «Linear Algebra and Analytical Geometry».
Areas of Expertise.
Ilyin is recognized for his outstanding scientific achievements in the theory of boundary value and mixed problems for equations of mathematical physics in domains with non-smooth boundaries and discontinuous coefficients. His results for hyperbolic equations (combined with earlier results obtained by Andrey Tikhonov, O.A. Oleinik, and G. Tautz for parabolic and elliptic equations) demonstrated that, in terms of conditions on the domain boundary, the solvability of all three problems reduces to the solvability of the simplest problem of mathematical physics, the Dirichlet problem for the Laplace equation. In the late 1960s Ilyin developed a universal method that made it possible, for an arbitrary self-adjoint second-order operator in an arbitrary (not necessarily bounded) domain, to establish the final conditions for uniform (on any compact set) convergence of both the spectral expansions themselves and their Riesz means, in each of the classes of functions considered (the Nikolsky, Sobolev–Liouville, Besov and Zygmund–Hölder function classes). These conditions also proved to be novel and final for expansions into both the multiple Fourier integral and the trigonometric Fourier series.
In 1971 Ilyin published a negative solution to a problem posed by Israel Gelfand concerning the validity of the theorem on equiconvergence of the spectral expansion with the Fourier integral expansion in the case when the spectral expansion itself is not uniformly convergent.
In 1972 he published a negative solution to a problem posed by Sergei Sobolev on the convergence, for formula_0, of the spectral expansion of a compactly supported function from the corresponding class in the associated metric. He also developed a new method for estimating the remainder term of the spectral function of an elliptic operator in both the formula_1 metric and the formula_2 metric.
Ilyin made a fundamental contribution to the spectral theory of non-self-adjoint operators. He obtained conditions under which the system of eigenvectors and associated vectors of a one-dimensional boundary value problem has the basis property in formula_3 for formula_4.
In 1980–1982 he obtained estimates of the formula_2-norms of eigenfunctions and associated functions in terms of the associated function of the next higher order. He called these estimates «anti-a priori estimates» and showed that they are central to the theory of non-self-adjoint operators.
In joint work with Evgeny Moiseev and K.V. Malkov in 1989, he demonstrated that the previously established conditions for the basis property of the system of eigenfunctions and associated functions of an operator formula_5 are both necessary and sufficient for the existence of a complete system of integrals of motion of the nonlinear system generated by the Lax pair formula_6.
From 1999 until the end of his life, Ilyin focused on boundary control problems for processes described by hyperbolic equations, in particular by the wave equation.
For a number of cases he obtained formulas describing optimal boundary controls (in the sense of minimizing the boundary energy) that transfer the system from a given initial state to a given final state; the results obtained jointly with Evgeny Moiseev were named among the best achievements of the Russian Academy of Sciences in 2007.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "p \\neq 2"
},
{
"math_id": 1,
"text": "L_{\\infty}"
},
{
"math_id": 2,
"text": "L_2"
},
{
"math_id": 3,
"text": "L_p"
},
{
"math_id": 4,
"text": "p \\geq 1"
},
{
"math_id": 5,
"text": "L"
},
{
"math_id": 6,
"text": "(L,A)"
}
] |
https://en.wikipedia.org/wiki?curid=57188648
|
57190448
|
Nilsemigroup
|
In mathematics, and more precisely in semigroup theory, a nilsemigroup or nilpotent semigroup is a semigroup whose every element is nilpotent.
Definitions.
Formally, a semigroup "S" is a nilsemigroup if "S" has a zero element 0 and, for every element "a" of "S", there is a positive integer "k" such that the product of "k" copies of "a" equals 0.
Finite nilsemigroups.
Equivalent definitions exist for finite semigroups. A finite semigroup "S" is nilpotent if, equivalently: formula_0 for each formula_1, where formula_2 is the cardinality of "S"; or the zero is the only idempotent of "S".
Examples.
The trivial semigroup of a single element is trivially a nilsemigroup.
The set of strictly upper triangular "n" × "n" matrices, with matrix multiplication, is a nilsemigroup; its zero is the zero matrix.
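As a quick illustration (not part of the original article), the following Python sketch checks this for one arbitrarily chosen strictly upper triangular 3 × 3 matrix:

import numpy as np

# An arbitrary strictly upper triangular 3 x 3 matrix
A = np.array([[0, 2, 5],
              [0, 0, 3],
              [0, 0, 0]])
print(A @ A)      # still strictly upper triangular
print(A @ A @ A)  # the zero matrix, the zero element of this nilsemigroup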
Let formula_3 be a bounded interval of positive real numbers. For "x", "y" belonging to "I", define formula_4 as formula_5. Then formula_6 is a nilsemigroup whose zero is "n": for each natural number "k", the "k"-fold product "kx" is equal to formula_7, and for "k" at least equal to formula_8, "kx" equals "n". This example generalizes to any bounded interval of an Archimedean ordered semigroup.
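The following Python sketch illustrates the interval example; the bound n = 10 and the element x = 3 are arbitrary illustrative choices:

def star(x, y, n):
    # The semigroup operation on the interval: truncated addition min(x + y, n).
    return min(x + y, n)

n, x = 10.0, 3.0
powers, p = [x], x
while p < n:                 # successive star-powers x, x*x, x*x*x, ...
    p = star(p, x, n)
    powers.append(p)
print(powers)                # [3.0, 6.0, 9.0, 10.0]: the powers of x reach the zero element n and stay there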
Properties.
A non-trivial nilsemigroup does not contain an identity element. It follows that the only nilpotent monoid is the trivial monoid.
The class of nilsemigroups is closed under taking subsemigroups, quotients, and finite products, but it is not closed under arbitrary direct products: the semigroup formula_9, where formula_10 denotes the semigroup of the example above, is a direct product of nilsemigroups that is not itself a nilsemigroup.
It follows that the class of nilsemigroups is not a variety of universal algebra. However, the set of finite nilsemigroups is a variety of finite semigroups. The variety of finite nilsemigroups is defined by the profinite equalities formula_11.
|
[
{
"math_id": 0,
"text": "x_1\\dots x_n=y_1\\dots y_n"
},
{
"math_id": 1,
"text": "x_i,y_i\\in S"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "I_n=[a,n]"
},
{
"math_id": 4,
"text": "x\\star_n y"
},
{
"math_id": 5,
"text": "\\min(x+y,n)"
},
{
"math_id": 6,
"text": "\\langle I,\\star_n\\rangle"
},
{
"math_id": 7,
"text": "\\min(kx,n)"
},
{
"math_id": 8,
"text": "\\left\\lceil\\frac{n-x}{x}\\right\\rceil"
},
{
"math_id": 9,
"text": "S=\\prod_{i\\in\\mathbb N}\\langle I_n,\\star_n\\rangle"
},
{
"math_id": 10,
"text": "\\langle I_n,\\star_n\\rangle"
},
{
"math_id": 11,
"text": "x^\\omega y=x^\\omega=yx^\\omega"
}
] |
https://en.wikipedia.org/wiki?curid=57190448
|
5719307
|
Paley graph
|
In mathematics, Paley graphs are undirected graphs constructed from the members of a suitable finite field by connecting pairs of elements that differ by a quadratic residue. The Paley graphs form an infinite family of conference graphs, which yield an infinite family of symmetric conference matrices. Paley graphs allow graph-theoretic tools to be applied to the number theory of quadratic residues, and have interesting properties that make them useful in graph theory more generally.
Paley graphs are named after Raymond Paley. They are closely related to the Paley construction for constructing Hadamard matrices from quadratic residues.
They were introduced as graphs independently by Sachs and by Erdős and Rényi. Sachs was interested in them for their self-complementarity properties, while Erdős and Rényi studied their symmetries.
Paley digraphs are directed analogs of Paley graphs that yield antisymmetric conference matrices. They were introduced (independently of Sachs, Erdős, and Rényi) as a way of constructing tournaments with a property previously known to hold only for random tournaments: in a Paley digraph, every small subset of vertices is dominated by some other vertex.
Definition.
Let "q" be a prime power such that "q" = 1 (mod 4). That is, "q" should either be an arbitrary power of a Pythagorean prime (a prime congruent to 1 mod 4) or an even power of an odd non-Pythagorean prime. This choice of "q" implies that in the unique finite field F"q" of order "q", the element −1 has a square root.
Now let "V" = F"q" and let
formula_0.
If a pair {"a","b"} is included in "E", it is included under either ordering of its two elements. For, "a" − "b" = −("b" − "a"), and −1 is a square, from which it follows that "a" − "b" is a square if and only if "b" − "a" is a square.
By definition "G" = ("V", "E") is the Paley graph of order "q".
Example.
For "q" = 13, the field F"q" is just integer arithmetic modulo 13. The numbers with square roots mod 13 are:
Thus, in the Paley graph, we form a vertex for each of the integers in the range [0,12], and connect each such integer "x" to six neighbors: "x" ± 1 (mod 13), "x" ± 3 (mod 13), and "x" ± 4 (mod 13).
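A short Python sketch (an illustration, not part of the original article) builds this order-13 Paley graph directly from the definition and recovers the six neighbors listed above:

q = 13
residues = {(x * x) % q for x in range(1, q)}        # nonzero quadratic residues mod 13
edges = {frozenset((a, b)) for a in range(q) for b in range(q)
         if a != b and (a - b) % q in residues}
neighbors_of_0 = sorted(v for e in edges if 0 in e for v in e if v != 0)
print(sorted(residues))      # [1, 3, 4, 9, 10, 12]
print(neighbors_of_0)        # [1, 3, 4, 9, 10, 12], i.e. 0 ± 1, ± 3, ± 4 (mod 13)
print(len(edges))            # 39 edges: 13 vertices, each of degree 6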
Properties.
The Paley graphs are self-complementary: the complement of any Paley graph is isomorphic to it. One isomorphism is via the mapping that takes a vertex x to "xk" (mod "q"), where k is any nonresidue mod "q".
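For example, for q = 13 one may take k = 2, which is a nonresidue mod 13; the following Python check (illustrative only) confirms that the map x ↦ 2x sends the graph onto its complement:

q = 13
residues = {(x * x) % q for x in range(1, q)}
edges = {frozenset((a, b)) for a in range(q) for b in range(q)
         if a != b and (a - b) % q in residues}
k = 2                                        # a quadratic nonresidue mod 13
mapped = {frozenset(((k * a) % q, (k * b) % q)) for a, b in edges}
complement = {frozenset((a, b)) for a in range(q) for b in range(q)
              if a != b and frozenset((a, b)) not in edges}
print(mapped == complement)                  # True: x -> 2x is an isomorphism onto the complement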
Paley graphs are strongly regular graphs, with parameters
formula_1
This in fact follows from the fact that the graph is arc-transitive and self-complementary. The strongly regular graphs with parameters of this form (for an arbitrary q) are called conference graphs, so the Paley graphs form an infinite family of conference graphs. The adjacency matrix of a conference graph, such as a Paley graph, can be used to construct a conference matrix, and vice versa. These are matrices whose diagonal entries are zero and whose off-diagonal entries are ±1, and which give a scalar multiple of the identity matrix when multiplied by their transpose.
The eigenvalues of Paley graphs are formula_2 (with multiplicity 1) and formula_3 (both with multiplicity formula_2). They can be calculated using the quadratic Gauss sum or by using the theory of strongly regular graphs.
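A numerical sanity check of these eigenvalues for q = 13, using NumPy (an illustrative sketch, not part of the original article):

import numpy as np

q = 13
residues = {(x * x) % q for x in range(1, q)}
A = np.array([[1 if a != b and (a - b) % q in residues else 0
               for b in range(q)] for a in range(q)])
vals = np.linalg.eigvalsh(A)                       # the adjacency matrix is symmetric
print(sorted(set(np.round(vals, 6))))              # approximately [-2.302776, 1.302776, 6.0]
print((-1 - 13 ** 0.5) / 2, (-1 + 13 ** 0.5) / 2)  # the predicted values (-1 ± sqrt(13))/2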
If q is prime, the isoperimetric number "i"("G") of the Paley graph satisfies the following bounds:
<templatestyles src="Block indent/styles.css"/>formula_4
When q is prime, the associated Paley graph is a Hamiltonian circulant graph.
Paley graphs are "quasi-random": the number of times each possible constant-order graph occurs as a subgraph of a Paley graph is (in the limit for large q) the same as for random graphs, and large sets of vertices have approximately the same number of edges as they would in random graphs.
Paley digraphs.
Let "q" be a prime power such that "q" = 3 (mod 4). Thus, the finite field of order "q", F"q", has no square root of −1. Consequently, for each pair ("a","b") of distinct elements of F"q", either "a" − "b" or "b" − "a", but not both, is a square. The Paley digraph is the directed graph with vertex set "V" = F"q" and arc set
formula_5
The Paley digraph is a tournament because each pair of distinct vertices is linked by an arc in one and only one direction.
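A Python sketch (illustrative; the choice q = 7 is an arbitrary admissible prime power) that builds a Paley digraph and verifies the tournament property:

q = 7
residues = {(x * x) % q for x in range(1, q)}        # {1, 2, 4}
arcs = {(a, b) for a in range(q) for b in range(q)
        if a != b and (b - a) % q in residues}
# Tournament property: exactly one of (a, b) and (b, a) is an arc for every pair a != b.
assert all(((a, b) in arcs) != ((b, a) in arcs)
           for a in range(q) for b in range(q) if a != b)
print(len(arcs))                                     # 21 arcs, one per unordered pair of the 7 vertices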
The Paley digraph leads to the construction of some antisymmetric conference matrices and biplane geometries.
Genus.
The six neighbors of each vertex in the Paley graph of order 13 are connected in a cycle; that is, the graph is locally cyclic. Therefore, this graph can be embedded as a Whitney triangulation of a torus, in which every face is a triangle and every triangle is a face. More generally, if any Paley graph of order "q" could be embedded so that all its faces are triangles, we could calculate the genus of the resulting surface via the Euler characteristic as formula_6. Bojan Mohar conjectures that the minimum genus of a surface into which a Paley graph can be embedded is near this bound in the case that "q" is a square, and questions whether such a bound might hold more generally. Specifically, Mohar conjectures that the Paley graphs of square order can be embedded into surfaces with genus
formula_7
where the o(1) term can be any function of "q" that goes to zero in the limit as "q" goes to infinity.
finds embeddings of the Paley graphs of order "q" ≡ 1 (mod 8) that are highly symmetric and self-dual, generalizing a natural embedding of the Paley graph of order 9 as a 3×3 square grid on a torus. However the genus of White's embeddings is higher by approximately a factor of three than Mohar's conjectured bound.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "E= \\left \\{\\{a,b\\} \\ : \\ a-b\\in (\\mathbf{F}_q^{\\times})^2 \\right \\}"
},
{
"math_id": 1,
"text": "srg \\left (q, \\tfrac{1}{2}(q-1),\\tfrac{1}{4}(q-5),\\tfrac{1}{4}(q-1) \\right )."
},
{
"math_id": 2,
"text": "\\tfrac{1}{2}(q-1)"
},
{
"math_id": 3,
"text": "\\tfrac{1}{2} (-1 \\pm \\sqrt{q})"
},
{
"math_id": 4,
"text": "\\displaystyle\\frac{q-\\sqrt{q}}{4}\\leq i(G) \\leq \\frac{q-1}{4}."
},
{
"math_id": 5,
"text": "A = \\left \\{(a,b)\\in \\mathbf{F}_q\\times\\mathbf{F}_q \\ : \\ b-a\\in (\\mathbf{F}_q^{\\times})^2 \\right \\}."
},
{
"math_id": 6,
"text": "\\tfrac{1}{24}(q^2 - 13q + 24)"
},
{
"math_id": 7,
"text": "(q^2 - 13q + 24)\\left(\\tfrac{1}{24} + o(1)\\right),"
}
] |
https://en.wikipedia.org/wiki?curid=5719307
|
57193995
|
Legendre moment
|
In mathematics, Legendre moments are a type of image moment and are achieved by using the Legendre polynomial. Legendre moments are used in areas of image processing including: pattern and object recognition, image indexing, line fitting, feature extraction, edge detection, and texture analysis. Legendre moments have been studied as a means to reduce image moment calculation complexity by limiting the amount of information redundancy through approximation.
Legendre moments.
With order of "m" + "n", and object intensity function "f"("x","y"):
formula_0
where "m","n" = 1, 2, 3, ...∞ with the "n"th-order Legendre polynomials being:
formula_1
which can also be written:
formula_2
where "D"("n") = floor("n"/2). The set of Legendre polynomials {"P""n"("x")} form an orthogonal set on the interval [−1,1]:
formula_3
A recurrence relation can be used to compute the Legendre polynomial:
formula_4
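In code, the recurrence gives a simple way to evaluate all polynomials up to a given order; the following Python function is a minimal sketch:

def legendre_values(x, nmax):
    # Evaluate P_0(x), ..., P_nmax(x) using the three-term recurrence above.
    vals = [1.0, x]                      # P_0 = 1, P_1 = x
    for n in range(1, nmax):
        vals.append(((2 * n + 1) * x * vals[n] - n * vals[n - 1]) / (n + 1))
    return vals[:nmax + 1]

print(legendre_values(0.5, 4))           # [1.0, 0.5, -0.125, -0.4375, -0.2890625]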
"f"("x","y") can be written as an infinite series expansion in terms of Legendre polynomials [−1 ≤ "x","y" ≤ 1.]:
formula_5
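The following Python sketch approximates the Legendre moments of a discrete image by a Riemann sum over pixel centres mapped to [−1, 1] × [−1, 1]; the pixel-to-interval mapping and the test image are illustrative assumptions rather than part of the definition above:

import numpy as np
from numpy.polynomial import legendre as leg

def legendre_moment(f, m, n):
    # Approximate L_mn of a 2-D intensity array f by a Riemann sum over pixel centres.
    N, M = f.shape
    y = -1 + (2 * np.arange(N) + 1) / N          # pixel-centre y coordinates in [-1, 1]
    x = -1 + (2 * np.arange(M) + 1) / M          # pixel-centre x coordinates in [-1, 1]
    Pm = leg.legval(x, [0] * m + [1])            # P_m evaluated on the x grid
    Pn = leg.legval(y, [0] * n + [1])            # P_n evaluated on the y grid
    dA = (2 / N) * (2 / M)                       # area of one pixel in normalized coordinates
    return (2 * m + 1) * (2 * n + 1) / 4 * np.einsum('i,j,ji->', Pm, Pn, f) * dA

img = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))   # hypothetical test image
print(legendre_moment(img, 0, 0), legendre_moment(img, 1, 1))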
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " L_{mn}=\\frac{(2m+1)(2n+1)} 4 \\int\\limits_{-1}^1 \\int\\limits_{-1}^1 P_m(x) P_n(y)f(x,y) \\,dx\\, dy"
},
{
"math_id": 1,
"text": "P_n(x)=\\sum_{k=0}^n a_{k,n}x^k=\\frac{(-1)^n}{2^n n!} \\left( \\frac{d}{dx} \\right) [(1-x^2)^n] "
},
{
"math_id": 2,
"text": "\n\\begin{align}\nP_n(x) & =\\sum_{k=0}^{D(n)}(-1)^k \\frac{(2n-2k)!}{2^n k!(n-k)!(n-2k)!} x^{n-2k} \\\\[5pt]\n& = \\frac{(2n)!}{2^n(n!)^2}x^n-\\frac{(2n-2)!}{2^n 1!(n-1)!(n-2)!} x^{n-2} + \\cdots\n\\end{align}\n"
},
{
"math_id": 3,
"text": "\\int_{-1}^1 P_n(x)P_m(x) \\, dx = \\frac{2}{2n+1}\\delta_{nm}"
},
{
"math_id": 4,
"text": "(n+1)P_{n+1}(x)-(2n+1)xP_n(x)+nP_{n-1}(x)=0"
},
{
"math_id": 5,
"text": "f(x,y)=\\sum_{m=0}^\\infty \\sum_{n=0}^\\infty \\lambda_{mn}P_m(x)P_n(y)"
}
] |
https://en.wikipedia.org/wiki?curid=57193995
|