id | title | text | formulas | url |
---|---|---|---|---|
1318086 | Revolutions per minute | Unit of rotational speed
<templatestyles src="Template:Infobox/styles-images.css" />
Revolutions per minute (abbreviated rpm, RPM, rev/min, r/min, or r⋅min−1) is a unit of rotational speed (or rotational frequency) for rotating machines.
One revolution per minute is equivalent to 1/60 hertz.
Standards.
ISO 80000-3:2019 defines a physical quantity called "rotation" (or "number of revolutions"), dimensionless, whose instantaneous rate of change is called "rotational frequency" (or "rate of rotation"), with units of reciprocal seconds (s−1).
A related but distinct quantity for describing rotation is "angular frequency" (or "angular speed", the magnitude of angular velocity), for which the SI unit is the radian per second (rad/s).
Although they have the same dimensions (reciprocal time) and base unit (s−1), the hertz (Hz) and radians per second (rad/s) are special names used to express two different but proportional ISQ quantities: frequency and angular frequency, respectively. The conversions between a frequency f and an angular frequency ω are:
formula_0
Thus a disc rotating at 60 rpm is said to have an angular speed of 2"π" rad/s and a rotation frequency of 1 Hz.
The International System of Units (SI) does not recognize rpm as a unit. It defines units of angular frequency and angular velocity as rad s−1, and units of frequency as Hz, equal to s−1.
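As an illustrative sketch of these relations (the 60 rpm input is an arbitrary example value), the following Python snippet converts a rotational speed given in rpm to a rotation frequency in hertz and an angular speed in radians per second.
```python
import math

def rpm_to_hz(rpm: float) -> float:
    """Rotation frequency in hertz: one revolution per minute is 1/60 Hz."""
    return rpm / 60.0

def rpm_to_rad_per_s(rpm: float) -> float:
    """Angular speed in rad/s, using omega = 2*pi*f."""
    return 2.0 * math.pi * rpm_to_hz(rpm)

rpm = 60.0                        # example value
print(rpm_to_hz(rpm))             # 1.0 (Hz)
print(rpm_to_rad_per_s(rpm))      # 6.2831... (rad/s, i.e. 2*pi)
```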
<templatestyles src="Col-begin/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\omega = 2 \\pi f\\,, \\qquad f = \\frac {\\omega} {2 \\pi}\\,."
}
]
| https://en.wikipedia.org/wiki?curid=1318086 |
13185596 | Stack (mathematics) | Generalisation of a sheaf; a fibered category that admits effective descent
In mathematics a stack or 2-sheaf is, roughly speaking, a sheaf that takes values in categories rather than sets. Stacks are used to formalise some of the main constructions of descent theory, and to construct fine moduli stacks when fine moduli spaces do not exist.
Descent theory is concerned with generalisations of situations where isomorphic, compatible geometrical objects (such as vector bundles on topological spaces) can be "glued together" within a restriction of the topological basis. In a more general set-up the restrictions are replaced with pullbacks; fibred categories then make a good framework to discuss the possibility of such gluing. The intuitive meaning of a stack is that it is a fibred category such that "all possible gluings work". The specification of gluings requires a definition of coverings with regard to which the gluings can be considered. It turns out that the general language for describing these coverings is that of a Grothendieck topology. Thus a stack is formally given as a fibred category over another "base" category, where the base has a Grothendieck topology and where the fibred category satisfies a few axioms that ensure existence and uniqueness of certain gluings with respect to the Grothendieck topology.
Overview.
Stacks are the underlying structure of algebraic stacks (also called Artin stacks) and Deligne–Mumford stacks, which generalize schemes and algebraic spaces and which are particularly useful in studying moduli spaces. There are inclusions: schemes ⊆ algebraic spaces ⊆ Deligne–Mumford stacks ⊆ algebraic stacks (Artin stacks) ⊆ stacks. Brief introductory accounts of stacks, more detailed introductions, and expositions of the more advanced theory are available in the literature.
Motivation and history.
<templatestyles src="Template:Quote_box/styles.css" />
The practical conclusion I have reached by now is that, whenever by virtue of my criteria a variety of moduli (or rather, a moduli scheme) for the classification of (global or infinitesimal) variations of certain structures (complete non-singular varieties, vector bundles, etc.) cannot exist, despite good hypotheses of flatness, properness, and possibly non-singularity, the reason is solely the existence of automorphisms of the structure, which prevents the descent technique from working.
Grothendieck's letter to Serre, 1959 Nov 5.
The concept of stacks has its origin in Grothendieck's definition of effective descent data.
In a 1959 letter to Serre, Grothendieck observed that a fundamental obstruction to constructing good moduli spaces is the existence of automorphisms. A major motivation for stacks is that if a moduli "space" for some problem does not exist because of the existence of automorphisms, it may still be possible to construct a moduli "stack".
Mumford studied the Picard group of the moduli stack of elliptic curves before stacks had been defined. Stacks were first defined by Giraud (1966, 1971), and the term "stack" was introduced by Deligne and Mumford for the original French term "champ" meaning "field". In their paper they also introduced Deligne–Mumford stacks, which they called algebraic stacks, though the term "algebraic stack" now usually refers to the more general Artin stacks introduced by Artin (1974).
When defining quotients of schemes by group actions, it is often impossible for the quotient to be a scheme and still satisfy desirable properties for a quotient. For example, if a few points have non-trivial stabilisers, then the categorical quotient will not exist among schemes, but it will exist as a stack.
In the same way, moduli spaces of curves, vector bundles, or other geometric objects are often best defined as stacks instead of schemes. Constructions of moduli spaces often proceed by first constructing a larger space parametrizing the objects in question, and then quotienting by group action to account for objects with automorphisms which have been overcounted.
Definitions.
Abstract stacks.
A category formula_0 with a functor to a category formula_1 is called a fibered category over formula_1 if for any morphism formula_2 in formula_1 and any object formula_3 of formula_0 with image formula_4 (under the functor), there is a pullback formula_5 of formula_3 by formula_6. This means a morphism with image formula_6 such that any morphism formula_7 with image formula_8 can be factored as formula_9 by a unique morphism formula_10 in formula_0 such that the functor maps formula_11 to formula_12. The element formula_13 is called the pullback of formula_3 along formula_6 and is unique up to canonical isomorphism.
The category "c" is called a prestack over a category "C" with a Grothendieck topology if it is fibered over "C" and for any object "U" of "C" and objects "x", "y" of "c" with image "U", the functor from the over category C/U to sets taking "F":"V"→"U" to Hom("F"*"x","F"*"y") is a sheaf. This terminology is not consistent with the terminology for sheaves: prestacks are the analogues of separated presheaves rather than presheaves. Some authors require this as a property of stacks, rather than of prestacks.
The category "c" is called a stack over the category "C" with a Grothendieck topology if it is a prestack over "C" and every descent datum is effective. A descent datum consists roughly of a covering of an object "V" of "C" by a family "Vi", elements "xi" in the fiber over "Vi", and morphisms "fji" between the restrictions of "xi" and "xj" to "Vij"="Vi"×"V""Vj" satisfying the compatibility condition "fki" = "fkjfji". The descent datum is called effective if the elements "xi" are essentially the pullbacks of an element "x" with image "V".
A stack is called a stack in groupoids or a (2,1)-sheaf if it is also fibered in groupoids, meaning that its fibers (the inverse images of objects of "C") are groupoids. Some authors use the word "stack" to refer to the more restrictive notion of a stack in groupoids.
Algebraic stacks.
An algebraic stack or Artin stack is a stack in groupoids "X" over the fppf site such that the diagonal map of "X" is representable and there exists a smooth surjection from (the stack associated to) a scheme to X.
A morphism "Y"formula_14 "X" of stacks is representable if, for every morphism "S" formula_14 "X" from (the stack associated to) a scheme to X, the fiber product "Y" ×"X" "S" is isomorphic to (the stack associated to) an algebraic space. The fiber product of stacks is defined using the usual universal property, and changing the requirement that diagrams commute to the requirement that they 2-commute. See also morphism of algebraic stacks for further information.
The motivation behind the representability of the diagonal is the following: the diagonal morphism formula_15 is representable if and only if for any pair of morphisms of algebraic spaces formula_16, their fiber product formula_17 is representable.
A Deligne–Mumford stack is an algebraic stack "X" such that there is an étale surjection from a scheme to "X". Roughly speaking, Deligne–Mumford stacks can be thought of as algebraic stacks whose objects have no infinitesimal automorphisms.
Local structure of algebraic stacks.
Since the inception of algebraic stacks it was expected that they are locally quotient stacks of the form formula_18 where formula_19 is a linearly reductive algebraic group. This was recently proved to be the case: given a quasi-separated algebraic stack formula_20 locally of finite type over an algebraically closed field formula_21 whose stabilizers are affine, and formula_22 a smooth and closed point with linearly reductive stabilizer group formula_23, there exists an etale cover of the GIT quotient formula_24, where formula_25, such that the diagram
formula_26
is cartesian, and there exists an etale morphism
formula_27
inducing an isomorphism of the stabilizer groups at formula_28 and formula_29.
Stacks can also be constructed from set-valued functors. Given a contravariant functor formula_30, each object formula_31 is assigned a set formula_32. In particular, consider a functor
formula_33
Then, this functor determines the following category formula_12:
# an object is a pair formula_34 consisting of a scheme formula_35 in formula_36 and an element formula_37
# a morphism formula_38 consists of a morphism formula_39 in formula_40 such that formula_41.
Via the forgetful functor formula_42, the category formula_12 is a category fibered over formula_40. For example, if formula_35 is a scheme in formula_40, then it determines the contravariant functor formula_43 and the corresponding fibered category is the stack associated to "X". Stacks (or prestacks) can be constructed as a variant of this construction. In fact, any scheme formula_35 with a quasi-compact diagonal is an algebraic stack associated to the scheme formula_35.
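As a toy illustration of the construction above (the finite category, the functor, and all names below are invented for the example), the following Python sketch enumerates the objects and morphisms of the category determined by a set-valued functor on a category with a single non-identity arrow.
```python
# Objects of the fibered category are pairs (X, x) with x an element of h(X);
# morphisms (X, x) -> (Y, y) are arrows phi: X -> Y satisfying h(phi)(y) = x.

objects = ["A", "B"]
# Morphisms of the base category as (source, target, name); one arrow f: A -> B.
morphisms = [("A", "A", "id_A"), ("B", "B", "id_B"), ("A", "B", "f")]

# A contravariant set-valued functor h: it assigns a set to each object and,
# to f: A -> B, a map h(f): h(B) -> h(A).
h_obj = {"A": {0, 1}, "B": {"x", "y"}}
h_mor = {"id_A": {0: 0, 1: 1}, "id_B": {"x": "x", "y": "y"}, "f": {"x": 0, "y": 1}}

fibered_objects = [(X, s) for X in objects for s in h_obj[X]]

fibered_morphisms = [
    ((src, h_mor[name][t]), (tgt, t), name)   # ((X, h(phi)(t)), (Y, t), phi)
    for (src, tgt, name) in morphisms
    for t in h_obj[tgt]
]

print(fibered_objects)
print(fibered_morphisms)
```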
Examples.
Constructions with stacks.
Stack quotients.
If formula_35 is a scheme in formula_40 and formula_19 is a smooth affine group scheme acting on formula_35, then there is a quotient algebraic stack formula_44, taking a scheme formula_45 to the groupoid of formula_19-torsors over the formula_46-scheme formula_4 with formula_19-equivariant maps to formula_35. Explicitly, given a space formula_35 with a formula_19-action, form the stack formula_44, which (intuitively speaking) sends a space formula_4 to the groupoid of pullback diagrams
formula_47
where formula_48 is a formula_19-equivariant morphism of spaces and formula_49 is a principal formula_19-bundle. The morphisms in this category are just morphisms of diagrams where the arrows on the right-hand side are equal and the arrows on the left-hand side are morphisms of principal formula_19-bundles.
Classifying stacks.
A special case of this when "X" is a point gives the classifying stack "BG" of a smooth affine group scheme "G": formula_50 It is named so since the category formula_51, the fiber over "Y", is precisely the category formula_52 of principal formula_19-bundles over formula_4. Note that formula_52 itself can be considered as a stack, the moduli stack of principal "G"-bundles on "Y".
An important subexample from this construction is formula_53, which is the moduli stack of principal formula_54-bundles. Since the data of a principal formula_54-bundle is equivalent to the data of a rank formula_55 vector bundle, this is isomorphic to the moduli stack of rank formula_55 vector bundles formula_56.
Moduli stack of line bundles.
The moduli stack of line bundles is formula_57 since every line bundle is canonically isomorphic to a principal formula_58-bundle. Indeed, given a line bundle formula_59 over a scheme formula_46, the relative spec
formula_60
gives a geometric line bundle. By removing the image of the zero section, one obtains a principal formula_58-bundle. Conversely, from the representation formula_61, the associated line bundle can be reconstructed.
Gerbes.
A gerbe is a stack in groupoids that is locally nonempty, for example the trivial gerbe formula_62 that assigns to each scheme the groupoid of principal formula_19-bundles over the scheme, for some group formula_19.
Relative spec and proj.
If "A" is a quasi-coherent sheaf of algebras in an algebraic stack "X" over a scheme "S", then there is a stack Spec("A") generalizing the construction of the spectrum Spec("A") of a commutative ring "A". An object of Spec("A") is given by an "S"-scheme "T", an object "x" of "X"("T"), and a morphism of sheaves of algebras from "x"*("A") to the coordinate ring "O"("T") of "T".
If "A" is a quasi-coherent sheaf of graded algebras in an algebraic stack "X" over a scheme "S", then there is a stack Proj("A") generalizing the construction of the projective scheme Proj("A") of a graded ring "A".
Moduli stacks.
Kontsevich moduli spaces.
Another widely studied class of moduli spaces are the Kontsevich moduli spaces parameterizing the space of stable maps between curves of a fixed genus to a fixed space formula_35 whose image represents a fixed cohomology class. These moduli spaces are denoted
formula_72
and can have wild behavior, such as being reducible stacks whose components are of non-equal dimension. For example, the moduli stack formula_73 has smooth curves parametrized by an open subset formula_74. On the boundary of the moduli space, where curves may degenerate to reducible curves, there is a substack parametrizing reducible curves with a genus formula_75 component and a genus formula_76 component intersecting at one point, and the map sends the genus formula_76 curve to a point. Since all such genus formula_76 curves are parametrized by formula_77, and there is an additional formula_76-dimensional choice of where these curves intersect on the genus formula_76 curve, the boundary component has dimension formula_78.
Geometric stacks.
Weighted projective stacks.
Constructing weighted projective spaces involves taking the quotient variety of some formula_79 by a formula_58-action. In particular, the action sends a tuple
formula_80
and the quotient of this action gives the weighted projective space formula_81. Since this can instead be taken as a stack quotient, the weighted projective stack is
formula_82
Taking the vanishing locus of a weighted polynomial in a line bundle formula_83 gives a stacky weighted projective variety.
Stacky curves.
Stacky curves, or orbicurves, can be constructed by taking the stack quotient of a morphism of curves by the monodromy group of the cover over the generic points. For example, take a projective morphismformula_84which is generically etale. The stack quotient of the domain by formula_85 gives a stacky formula_86 with stacky points that have stabilizer group formula_87 at the fifth roots of unity in the formula_88-chart. This is because these are the points where the cover ramifies.
Non-affine stack.
An example of a non-affine stack is given by the half-line with two stacky origins. This can be constructed as the colimit of two inclusions of formula_89.
Quasi-coherent sheaves on algebraic stacks.
On an algebraic stack one can construct a category of quasi-coherent sheaves similar to the category of quasi-coherent sheaves over a scheme.
A quasi-coherent sheaf is roughly one that looks locally like the sheaf of a module over a ring. The first problem is to decide what one means by "locally": this involves the choice of a Grothendieck topology, and there are many possible choices for this, all of which have some problems and none of which seem completely satisfactory. The Grothendieck topology should be strong enough so that the stack is locally affine in this topology: schemes are locally affine in the Zariski topology, so this is a good choice for schemes, as Serre discovered; algebraic spaces and Deligne–Mumford stacks are locally affine in the etale topology, so one usually uses the etale topology for these; while algebraic stacks are locally affine in the smooth topology, so one can use the smooth topology in this case. For general algebraic stacks the etale topology does not have enough open sets: for example, if G is a smooth connected group then the only etale covers of the classifying stack BG are unions of copies of BG, which are not enough to give the right theory of quasicoherent sheaves.
Instead of using the smooth topology for algebraic stacks one often uses a modification of it called the Lis-Et topology (short for Lisse-Etale: lisse is the French term for smooth), which has the same open sets as the smooth topology but the open covers are given by etale rather than smooth maps. This usually seems to lead to an equivalent category of quasi-coherent sheaves, but is easier to use: for example it is easier to compare with the etale topology on algebraic spaces. The Lis-Et topology has a subtle technical problem: a morphism between stacks does not in general give a morphism between the corresponding topoi. (The problem is that while one can construct a pair of adjoint functors "f"*, "f"*, as needed for a geometric morphism of topoi, the functor "f"* is not left exact in general. This problem is notorious for having caused some errors in published papers and books.) This means that constructing the pullback of a quasicoherent sheaf under a morphism of stacks requires some extra effort.
It is also possible to use finer topologies. Most reasonable "sufficiently large" Grothendieck topologies seem to lead to equivalent categories of quasi-coherent sheaves, but the larger a topology is the harder it is to handle, so one generally prefers to use smaller topologies as long as they have enough open sets. For example, the big fppf topology leads to essentially the same category of quasi-coherent sheaves as the Lis-Et topology, but has a subtle problem: the natural embedding of quasi-coherent sheaves into O"X" modules in this topology is not exact (it does not preserve kernels in general).
Other types of stack.
Differentiable stacks and topological stacks are defined in a way similar to algebraic stacks, except that the underlying category of affine schemes is replaced by the category of smooth manifolds or topological spaces.
More generally one can define the notion of an "n"-sheaf or "n"–1 stack, which is roughly a sort of sheaf taking values in "n"–1 categories. There are several inequivalent ways of doing this. 1-sheaves are the same as sheaves, and 2-sheaves are the same as stacks. They are called higher stacks.
A very similar and analogous extension is to develop the stack theory on non-discrete objects (i.e., a space is really a spectrum in algebraic topology). The resulting stacky objects are called derived stacks (or spectral stacks). Jacob Lurie's under-construction book "Spectral Algebraic Geometry" studies a generalization that he calls a spectral Deligne–Mumford stack. By definition, it is a ringed ∞-topos that is étale-locally the étale spectrum of an E∞-ring (this notion subsumes that of a derived scheme, at least in characteristic zero.)
Set-theoretical problems.
There are some minor set theoretical problems with the usual foundation of the theory of stacks, because stacks are often defined as certain functors to the category of sets and are therefore not sets. There are several ways to deal with this problem:
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "c"
},
{
"math_id": 1,
"text": "C"
},
{
"math_id": 2,
"text": "F:X\\to Y"
},
{
"math_id": 3,
"text": "y"
},
{
"math_id": 4,
"text": "Y"
},
{
"math_id": 5,
"text": "f:x\\to y"
},
{
"math_id": 6,
"text": "F"
},
{
"math_id": 7,
"text": "g:z\\to y"
},
{
"math_id": 8,
"text": "G=F\\circ H"
},
{
"math_id": 9,
"text": "g=f\\circ h"
},
{
"math_id": 10,
"text": "h:z\\to x"
},
{
"math_id": 11,
"text": "h"
},
{
"math_id": 12,
"text": "H"
},
{
"math_id": 13,
"text": "x = F^*y"
},
{
"math_id": 14,
"text": "\\rightarrow"
},
{
"math_id": 15,
"text": "\\Delta:\\mathfrak{X} \\to \\mathfrak{X}\\times\\mathfrak{X}"
},
{
"math_id": 16,
"text": "X,Y \\to \\mathfrak{X}"
},
{
"math_id": 17,
"text": "X\\times_{\\mathfrak{X}}Y"
},
{
"math_id": 18,
"text": "[\\text{Spec}(A)/G]"
},
{
"math_id": 19,
"text": "G"
},
{
"math_id": 20,
"text": "\\mathfrak{X}"
},
{
"math_id": 21,
"text": "k"
},
{
"math_id": 22,
"text": "x \\in \\mathfrak{X}(k)"
},
{
"math_id": 23,
"text": "G_x"
},
{
"math_id": 24,
"text": "(U,u) \\to (N_x//G_x, 0)"
},
{
"math_id": 25,
"text": "N_x = (J_x/J_x^2)^\\vee"
},
{
"math_id": 26,
"text": "\\begin{matrix}\n([W/G_x],w) & \\to & ([N_x/G_x],0) \\\\\n\\downarrow & & \\downarrow \\\\\n(U,u) & \\to & (N_x//G_x,0)\n\\end{matrix}"
},
{
"math_id": 27,
"text": "f:([W/G_x], w) \\to (\\mathfrak{X},x)"
},
{
"math_id": 28,
"text": "w"
},
{
"math_id": 29,
"text": "x"
},
{
"math_id": 30,
"text": "\\mathcal{F}:C^{op} \\to Sets"
},
{
"math_id": 31,
"text": "X \\in \\text{Ob}(C)"
},
{
"math_id": 32,
"text": "\\mathcal{F}(X)"
},
{
"math_id": 33,
"text": "h: (Sch/S)^{op} \\to Sets"
},
{
"math_id": 34,
"text": "(X\\to S, x)"
},
{
"math_id": 35,
"text": "X"
},
{
"math_id": 36,
"text": "(Sch/S)^{op}"
},
{
"math_id": 37,
"text": "x \\in h(X)"
},
{
"math_id": 38,
"text": "(X\\to S, x) \\to (Y\\to S,y)"
},
{
"math_id": 39,
"text": "\\phi:X \\to Y"
},
{
"math_id": 40,
"text": "(Sch/S)"
},
{
"math_id": 41,
"text": "h(\\phi)(y) = x"
},
{
"math_id": 42,
"text": "p:H \\to (Sch/S)"
},
{
"math_id": 43,
"text": "h = \\operatorname{Hom}(-, X)"
},
{
"math_id": 44,
"text": "[X/G]"
},
{
"math_id": 45,
"text": "Y \\to S"
},
{
"math_id": 46,
"text": "S"
},
{
"math_id": 47,
"text": "[X/G](Y) = \\begin{Bmatrix}\nZ & \\xrightarrow{\\Phi} & X \\\\\n\\downarrow & & \\downarrow \\\\\nY & \\xrightarrow{\\phi} & [X/G]\n\\end{Bmatrix}"
},
{
"math_id": 48,
"text": "\\Phi"
},
{
"math_id": 49,
"text": "Z \\to Y"
},
{
"math_id": 50,
"text": "\\textbf{B}G := [pt/G]."
},
{
"math_id": 51,
"text": "\\mathbf{B}G(Y)"
},
{
"math_id": 52,
"text": "\\operatorname{Bun}_G(Y)"
},
{
"math_id": 53,
"text": "\\mathbf{B}GL_n"
},
{
"math_id": 54,
"text": "GL_n"
},
{
"math_id": 55,
"text": "n"
},
{
"math_id": 56,
"text": "Vect_n"
},
{
"math_id": 57,
"text": "B\\mathbb{G}_m"
},
{
"math_id": 58,
"text": "\\mathbb{G}_m"
},
{
"math_id": 59,
"text": "L"
},
{
"math_id": 60,
"text": "\\underline{\\text{Spec}}_S(\\text{Sym}_S(L^\\vee)) \\to S"
},
{
"math_id": 61,
"text": "id:\\mathbb{G}_m \\to \\text{Aut}(\\mathbb{A}^1)"
},
{
"math_id": 62,
"text": "BG"
},
{
"math_id": 63,
"text": "\\mathcal{M}_g"
},
{
"math_id": 64,
"text": "g"
},
{
"math_id": 65,
"text": "\\mathcal{M}_{g,n}"
},
{
"math_id": 66,
"text": "g \\geq 2"
},
{
"math_id": 67,
"text": "g = 1, n \\geq 1"
},
{
"math_id": 68,
"text": "g = 0, n \\geq 3"
},
{
"math_id": 69,
"text": "\\mathcal{M}_0"
},
{
"math_id": 70,
"text": "B\\text{PGL}(2)"
},
{
"math_id": 71,
"text": "\\mathcal{M}_1"
},
{
"math_id": 72,
"text": "\\overline{\\mathcal{M}}_{g,n}(X,\\beta)"
},
{
"math_id": 73,
"text": "\\overline{\\mathcal{M}}_{1,0}(\\mathbb{P}^2,3[H])"
},
{
"math_id": 74,
"text": "U \\subset \\mathbb{P}^9 = \\mathbb{P}(\\Gamma(\\mathbb{P}^2,\\mathcal{O}(3)))"
},
{
"math_id": 75,
"text": "0"
},
{
"math_id": 76,
"text": "1"
},
{
"math_id": 77,
"text": "U"
},
{
"math_id": 78,
"text": "10"
},
{
"math_id": 79,
"text": "\\mathbb{A}^{n+1} - \\{0\\}"
},
{
"math_id": 80,
"text": "g \\cdot(x_0,\\ldots, x_n) \\mapsto (g^{a_0}x_0,\\ldots,g^{a_n}x_n)"
},
{
"math_id": 81,
"text": "\\mathbb{WP}(a_0,\\ldots, a_n)"
},
{
"math_id": 82,
"text": "\\textbf{WP}(a_0,\\ldots, a_n) := [\\mathbb {A}^{n}-\\{0\\} / \\mathbb{G}_m]"
},
{
"math_id": 83,
"text": "f \\in \\Gamma(\\textbf{WP}(a_0,\\ldots, a_n),\\mathcal{O}(a))"
},
{
"math_id": 84,
"text": "\\text{Proj}(\\mathbb{C}[x,y,z]/(x^5 + y^5 + z^5)) \\to \\text{Proj}(\\mathbb{C}[x,y])"
},
{
"math_id": 85,
"text": "\\mu_5"
},
{
"math_id": 86,
"text": "\\mathbb{P}^1"
},
{
"math_id": 87,
"text": "\\mathbb{Z}/5"
},
{
"math_id": 88,
"text": "x/y"
},
{
"math_id": 89,
"text": " [\\mathbb{G}_m/ (\\mathbb{Z}/2)] \\to [\\mathbb{A}^1/(\\mathbb{Z}/2)]"
}
]
| https://en.wikipedia.org/wiki?curid=13185596 |
13186787 | List of price index formulas | A number of different formulae, more than a hundred, have been proposed as means of calculating price indexes. While price index formulae all use price and possibly quantity data, they aggregate these in different ways. A price index aggregates various combinations of base period prices (formula_0), later period prices (formula_1), base period quantities (formula_2), and later period quantities (formula_3). Price index numbers are usually defined either in terms of (actual or hypothetical) expenditures (expenditure = price * quantity) or as different weighted averages of price relatives (formula_4). These tell the relative change of the price in question. Two of the most commonly used price index formulae were defined by German economists and statisticians Étienne Laspeyres and Hermann Paasche, both around 1875 when investigating price changes in Germany.
Laspeyres.
Developed in 1871 by Étienne Laspeyres, the formula:
formula_5
compares the total cost of the same basket of final goods formula_2 at the old and new prices.
Paasche.
Developed in 1874 by Hermann Paasche, the formula:
formula_6
compares the total cost of a new basket of goods formula_3 at the old and new prices.
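As an illustrative numerical sketch (the prices and quantities below are invented example data), the following Python code computes both the Laspeyres index formula_5 and the Paasche index formula_6 from price and quantity vectors for the base and later periods.
```python
# Hypothetical data for three goods: base period (0) and later period (t).
p0 = [1.00, 2.50, 4.00]   # base period prices
pt = [1.10, 2.40, 4.80]   # later period prices
q0 = [10, 5, 2]           # base period quantities
qt = [9, 6, 2]            # later period quantities

def laspeyres(p0, pt, q0):
    """Cost of the base-period basket at new prices relative to old prices."""
    return sum(p * q for p, q in zip(pt, q0)) / sum(p * q for p, q in zip(p0, q0))

def paasche(p0, pt, qt):
    """Cost of the current-period basket at new prices relative to old prices."""
    return sum(p * q for p, q in zip(pt, qt)) / sum(p * q for p, q in zip(p0, qt))

print(laspeyres(p0, pt, q0))   # ~1.069
print(paasche(p0, pt, qt))     # ~1.059
```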
Geometric means.
The geometric means index:
formula_7
incorporates quantity information through the share of expenditure in the base period.
Unweighted indices.
Unweighted, or "elementary", price indices only compare prices of a single type of good between two periods. They do not make any use of quantities or expenditure weights. They are called "elementary" because they are often used at the lower levels of aggregation for more comprehensive price indices. In such a case, they are not indices but merely an intermediate stage in the calculation of an index. At these lower levels, it is argued that weighting is not necessary since only one type of good is being aggregated. However this implicitly assumes that only one type of the good is available (e.g. only one brand and one package size of frozen peas) and that it has not changed in quality etc between time periods.
Carli.
Developed in 1764 by Gian Rinaldo Carli, an Italian economist, this formula is the arithmetic mean of the price relative between a period "t" and a base period "0".
formula_8
On 17 August 2012 the BBC Radio 4 program "More or Less" noted that the Carli index, used in part in the British retail price index, has a built-in bias towards recording inflation even when over successive periods there is no increase in prices overall.
Dutot.
In 1738 French economist Nicolas Dutot proposed using an index calculated by dividing the average price in period "t" by the average price in period "0".
formula_9
Jevons.
In 1863, English economist William Stanley Jevons proposed taking the geometric average of the price relative of period "t" and base period "0". When used as an elementary aggregate, the Jevons index is considered a constant elasticity of substitution index since it allows for product substitution between time periods.
formula_10
This is the formula that was used for the old "Financial Times" stock market index (the predecessor of the FTSE 100 Index). It was inadequate for that purpose. In particular, if the price of any of the constituents were to fall to zero, the whole index would fall to zero. That is an extreme case; in general the formula will understate the total cost of a basket of goods (or of any subset of that basket) unless their prices all change at the same rate. Also, as the index is unweighted, large price changes in selected constituents can transmit to the index to an extent not representing their importance in the average portfolio.
Harmonic mean of price relatives.
This is the harmonic average counterpart to the Carli index. The index was proposed by Jevons in 1865 and by Coggeshall in 1887.
formula_11
Carruthers, Sellwood, Ward, Dalén index.
This is the geometric mean of the Carli and the harmonic price indexes. In 1922 Fisher wrote that this and the Jevons were the two best unweighted indexes based on Fisher's test approach to index number theory.
formula_12
Ratio of harmonic means.
The ratio of harmonic means or "Harmonic means" price index is the harmonic average counterpart to the Dutot index.
formula_13
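For comparison on common data, the following Python sketch (the price vectors are invented for illustration) evaluates the elementary formulas above — Carli, Dutot, Jevons, the harmonic mean of price relatives, and the CSWD index — for the same pair of periods.
```python
from math import prod, sqrt

p0 = [2.0, 3.0, 5.0]   # hypothetical base period prices
pt = [2.2, 2.9, 5.5]   # hypothetical later period prices
n = len(p0)
rel = [a / b for a, b in zip(pt, p0)]          # price relatives p_t / p_0

carli    = sum(rel) / n                        # arithmetic mean of relatives
dutot    = sum(pt) / sum(p0)                   # ratio of average prices
jevons   = prod(rel) ** (1 / n)                # geometric mean of relatives
harmonic = 1 / (sum(1 / r for r in rel) / n)   # harmonic mean of relatives
cswd     = sqrt(carli * harmonic)              # geometric mean of Carli and harmonic

print(carli, dutot, jevons, harmonic, cswd)
```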
Bilateral formulae.
Marshall-Edgeworth.
The Marshall-Edgeworth index, credited to Marshall (1887) and Edgeworth (1925), is a weighted relative of current period to base period sets of prices. This index uses the arithmetic average of the current and base period quantities for weighting. It is considered a pseudo-superlative formula and is symmetric. The use of the Marshall-Edgeworth index can be problematic in cases such as a comparison of the price level of a large country to a small one. In such instances, the set of quantities of the large country will overwhelm those of the small one.
formula_14
Superlative indices.
Superlative indices treat prices and quantities equally across periods. They are symmetrical and provide close approximations of cost of living indices and other theoretical indices used to provide guidelines for constructing price indices. All superlative indices produce similar results and are generally the favored formulas for calculating price indices. A superlative index is defined technically as "an index that is exact for a flexible functional form that can provide a second-order approximation to other twice-differentiable functions around the same point."
Fisher.
The change in a Fisher index from one period to the next is the geometric mean of the changes in Laspeyres' and Paasche's indices between those periods, and these are chained together to make comparisons over many periods:
formula_15
This is also called Fisher's "ideal" price index.
Törnqvist.
The Törnqvist or Törnqvist-Theil index is the geometric average of the n price relatives of the current to base period prices (for n goods) weighted by the arithmetic average of the value shares for the two periods.
formula_16
Walsh.
The Walsh price index is the weighted sum of the current period prices divided by the weighted sum of the base period prices with the geometric average of both period quantities serving as the weighting mechanism:
formula_17
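The superlative formulas can likewise be sketched in a few lines of Python (the price and quantity vectors are the same kind of invented example data as above); in practice the three indices give very similar values.
```python
from math import sqrt, exp, log

p0 = [1.00, 2.50, 4.00]; pt = [1.10, 2.40, 4.80]
q0 = [10, 5, 2];         qt = [9, 6, 2]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

laspeyres = dot(pt, q0) / dot(p0, q0)
paasche   = dot(pt, qt) / dot(p0, qt)
fisher    = sqrt(laspeyres * paasche)          # geometric mean of the two

# Tornqvist: price relatives weighted by the average of the two value shares.
s0 = [p * q / dot(p0, q0) for p, q in zip(p0, q0)]
st = [p * q / dot(pt, qt) for p, q in zip(pt, qt)]
tornqvist = exp(sum(0.5 * (a + b) * log(x / y)
                    for a, b, x, y in zip(s0, st, pt, p0)))

# Walsh: prices weighted by the geometric mean of the two periods' quantities.
walsh = (sum(p * sqrt(a * b) for p, a, b in zip(pt, q0, qt)) /
         sum(p * sqrt(a * b) for p, a, b in zip(p0, q0, qt)))

print(fisher, tornqvist, walsh)
```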
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p_0"
},
{
"math_id": 1,
"text": "p_t"
},
{
"math_id": 2,
"text": "q_0"
},
{
"math_id": 3,
"text": "q_t"
},
{
"math_id": 4,
"text": "p_t/p_0"
},
{
"math_id": 5,
"text": " P_{L}=\\frac{\\sum\\left(p_{t}\\cdot q_{0}\\right)}{\\sum\\left(p_{0}\\cdot q_{0}\\right)}"
},
{
"math_id": 6,
"text": "P_{P}=\\frac{\\sum\\left(p_{t}\\cdot q_{t}\\right)}{\\sum\\left(p_{0}\\cdot q_{t}\\right)}"
},
{
"math_id": 7,
"text": "P_{GM}=\\prod_{i=1}^{n}\\left(\\frac{p_{i,t}}{p_{i,0}}\\right)^\\frac{p_{i,0}\\cdot q_{i,0}}{\\sum\\left(p_{0}\\cdot q_{0}\\right)}"
},
{
"math_id": 8,
"text": "P_{C}=\\frac{1}{n}\\cdot\\sum\\frac{p_{t}}{p_{0}}"
},
{
"math_id": 9,
"text": "P_{D}=\\frac{\\frac{1}{n}\\cdot\\sum p_{t}}{\\frac{1}{n}\\cdot\\sum p_{0}}=\\frac{\\sum p_{t}}{\\sum p_{0}} "
},
{
"math_id": 10,
"text": "P_{J}=\\left(\\prod\\frac{p_{t}}{p_{0}}\\right)^{1/n}"
},
{
"math_id": 11,
"text": "P_{HR}=\\frac{1}{\\frac{1}{n}\\cdot\\sum\\frac{p_{0}}{p_{t}}}"
},
{
"math_id": 12,
"text": "P_{CSWD}=\\sqrt{P_{C}\\cdot P_{HR}}"
},
{
"math_id": 13,
"text": "P_{RH}=\\frac{\\sum\\frac{n}{p_{0}}}{\\sum\\frac{n}{p_{t}}}"
},
{
"math_id": 14,
"text": "P_{ME}=\\frac{\\sum\\left[p_{t}\\cdot \\frac{1}{2}\\left(q_{0}+q_{t}\\right)\\right]}{\\sum\\left[p_{0}\\cdot \\frac{1}{2}(q_{0}+q_{t})\\right]}=\\frac{\\sum\\left[p_{t}\\cdot\\left(q_{0}+q_{t}\\right)\\right]}{\\sum\\left[p_{0}\\cdot\\left(q_{0}+q_{t}\\right)\\right]}"
},
{
"math_id": 15,
"text": "P_{F}=\\sqrt{P_{L}\\cdot P_{P}}"
},
{
"math_id": 16,
"text": "P_{T}=\\prod_{i=1}^{n}\\left(\\frac{p_{i,t}}{p_{i,0}}\\right)^{\\frac{1}{2}\\left[\\frac{p_{i,0}\\cdot q_{i,0}}{\\sum\\left(p_{0}\\cdot q_{0}\\right)}+\\frac{p_{i,t}\\cdot q_{i,t}}{\\sum\\left(p_{t}\\cdot q_{t}\\right)}\\right]}"
},
{
"math_id": 17,
"text": "P_{W}=\\frac{\\sum\\left(p_{t}\\cdot\\sqrt{q_{0}\\cdot q_{t}}\\right)}{\\sum\\left(p_{0}\\cdot\\sqrt{q_{0}\\cdot q_{t}}\\right)}"
}
]
| https://en.wikipedia.org/wiki?curid=13186787 |
13187475 | Seashell surface | Mathematical spiral-type surface
In mathematics, a seashell surface is a surface made by a circle which spirals up the "z"-axis while decreasing its own radius and distance from the "z"-axis. Not all seashell surfaces describe actual seashells found in nature.
Parametrization.
The following is a parameterization of one seashell surface:
formula_0
where formula_1 and formula_2.
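For readers who wish to reproduce the surface, the following NumPy sketch (illustrative only; the grid resolutions are arbitrary choices) evaluates the parameterization above over the stated ranges of "u" and "v"; the resulting arrays can be passed to any 3-D surface plotting routine.
```python
import numpy as np

u = np.linspace(0.0, 2.0 * np.pi, 100)
v = np.linspace(-2.0 * np.pi, 2.0 * np.pi, 400)
U, V = np.meshgrid(u, v)

taper = 1.25 * (1.0 - V / (2.0 * np.pi))   # 5/4 * (1 - v/(2*pi)): radius shrinks with v

x = taper * np.cos(2 * V) * (1 + np.cos(U)) + np.cos(2 * V)
y = taper * np.sin(2 * V) * (1 + np.cos(U)) + np.sin(2 * V)
z = 10.0 * V / (2.0 * np.pi) + taper * np.sin(U) + 15.0

# x, y, z are 2-D arrays; e.g. matplotlib's plot_surface(x, y, z) renders the shell.
```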
Various authors have suggested different models for the shape of the shell. David M. Raup proposed a model where there is one magnification for the x-y plane, and another for the x-z plane. Chris Illert proposed a model where the magnification is scalar, and the same for any sense or direction, with an equation like
formula_3
which starts with an initial generating curve formula_4 and applies a rotation and exponential magnification.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{align}\nx & {} = \\frac{5}{4}\\left(1-\\frac{v}{2\\pi}\\right)\\cos(2v)(1+\\cos u)+\\cos 2v \\\\ \\\\\ny & {} = \\frac{5}{4}\\left(1-\\frac{v}{2\\pi}\\right)\\sin(2v)(1+\\cos u)+\\sin 2v \\\\ \\\\\nz & {} = \\frac{10v}{2\\pi}+\\frac{5}{4}\\left(1-\\frac{v}{2\\pi}\\right)\\sin(u)+15\n\\end{align}"
},
{
"math_id": 1,
"text": "0\\le u<2\\pi"
},
{
"math_id": 2,
"text": "-2\\pi\\le v <2\\pi"
},
{
"math_id": 3,
"text": "\n\\vec{F}\\left( {\\theta ,\\varphi } \\right) = e^{\\alpha \\varphi } \\left( {\\begin{array}{*{20}c}\n {\\cos \\left( \\varphi \\right),} & { - \\sin (\\varphi ),} & {\\rm{0}} \\\\\n {\\sin (\\varphi ),} & {\\cos \\left( \\varphi \\right),} & 0 \\\\\n {0,} & {{\\rm{0,}}} & 1 \\\\\n\\end{array}} \\right)\\vec{F}\\left( {\\theta ,0} \\right)\n"
},
{
"math_id": 4,
"text": "\\vec{F}\\left( {\\theta ,0} \\right)"
}
]
| https://en.wikipedia.org/wiki?curid=13187475 |
13196068 | Fractional vortices | In a standard superconductor, described by a complex field fermionic condensate wave function (denoted formula_0), vortices carry quantized magnetic fields because the condensate wave function formula_0 is invariant to increments of the phase formula_1 by formula_2. There a winding of the phase formula_3 by formula_2 creates a vortex which carries one flux quantum. See quantum vortex.
The term Fractional vortex is used for two kinds of very different quantum vortices which occur when:
(i) A physical system allows phase windings different from formula_4, i.e. non-integer or fractional phase winding. Quantum mechanics prohibits it in a uniform ordinary superconductor, but it becomes possible in an inhomogeneous system, for example, if a vortex is placed on a boundary between two superconductors which are connected only by an extremely weak link (also called a Josephson junction); such a situation also occurs on grain boundaries etc. At such superconducting boundaries the phase can have a discontinuous jump. Correspondingly, a vortex placed onto such a boundary acquires a fractional phase winding hence the term fractional vortex. A similar situation occurs in Spin-1 Bose condensate, where a vortex with formula_5 phase winding can exist if it is combined with a domain of overturned spins.
(ii) A different situation occurs in uniform multicomponent superconductors, which allow stable vortex solutions with integer phase winding formula_6, where formula_7, which however carry arbitrarily fractionally quantized magnetic flux.
Observation of fractional-flux vortices was reported in a multiband Iron-based superconductor.
(i) Vortices with non-integer phase winding.
Josephson vortices.
Fractional vortices at phase discontinuities.
Josephson phase discontinuities may appear in specially designed long Josephson junctions (LJJ). For example, so-called 0-π LJJ have a formula_5 discontinuity of the Josephson phase at the point where 0 and formula_5 parts join. Physically, such formula_8 LJJ can be fabricated using tailored ferromagnetic barrier or using d-wave superconductors. The Josephson phase discontinuities can also be introduced using artificial tricks, e.g., a pair of tiny current injectors attached to one of the superconducting electrodes of the LJJ. The value of the phase discontinuity is denoted by κ and, without losing generality, it is assumed that 0<κ<2π, because the phase is 2π periodic.
An LJJ reacts to the phase discontinuity by bending the Josephson phase formula_9 in the formula_10 vicinity of the discontinuity point, so that far away there are no traces of this perturbation. The bending of the Josephson phase inevitably results in the appearance of a local magnetic field formula_11 localized around the discontinuity (formula_8 boundary). It also results in the appearance of a supercurrent formula_12 circulating around the discontinuity. The total magnetic flux Φ, carried by the localized magnetic field, is proportional to the value of the discontinuity formula_13, namely Φ = (κ/2π)Φ0, where Φ0 is the magnetic flux quantum. For a π-discontinuity, Φ = Φ0/2, and the vortex of the supercurrent is called a semifluxon. When κ≠π, one speaks about arbitrary fractional Josephson vortices. This type of vortex is pinned at the phase discontinuity point, but may have two polarities, positive and negative, distinguished by the direction of the fractional flux and direction of the supercurrent (clockwise or counterclockwise) circulating around its center (discontinuity point).
The semifluxon is a particular case of such a fractional vortex pinned at the phase discontinuity point.
Although such fractional Josephson vortices are pinned, if perturbed they may perform small oscillations around the phase discontinuity point with an eigenfrequency that depends on the value of κ.
Splintered vortices (double sine-Gordon solitons).
In the context of d-wave superconductivity, a fractional vortex (also known as splintered vortex) is a vortex of supercurrent carrying unquantized magnetic flux Φ1<Φ0, which depends on parameters of the system. Physically, such vortices may appear at the grain boundary between two d-wave superconductors, which often looks like a regular or irregular sequence of 0 and π facets. One can also construct an artificial array of short 0 and π facets to achieve the same effect. These splintered vortices are solitons. They are able to move and preserve their shape similar to conventional "integer" Josephson vortices (fluxons). This is opposite to the "fractional vortices pinned at phase discontinuity", e.g. semifluxons, which are pinned at the discontinuity and cannot move far from it.
Theoretically, one can describe a grain boundary between d-wave superconductors (or an array of tiny 0 and π facets) by an effective equation for a large-scale phase ψ. Large scale means that the scale is much larger than the facet size. This equation is a double sine-Gordon equation (referred to below as EqDSG), written in normalized units with a dimensionless constant "g"<0 that results from averaging over tiny facets. The detailed mathematical procedure of averaging is similar to the one done for a parametrically driven pendulum, and can be extended to time-dependent phenomena. In essence, (EqDSG) describes an extended φ Josephson junction.
For "g"<-1 (EqDSG) has two stable equilibrium values (in each 2π interval): ψ
±φ, where φ
cos(-1/"g"). They corresponding to two energy minima. Correspondingly, there are two fractional vortices (topological solitons): one with the phase ψ("x") going from -φ to +φ, while the other has the phase ψ("x") changing from +φ to -φ+2π. The first vortex has a topological change of 2φ and carries the magnetic flux Φ1
(φ/π)Φ0. The second vortex has a topological change of 2π-2φ and carries the flux Φ2
Φ0-Φ1.
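As a small numerical illustration of this flux splitting (the value of "g" below is an arbitrary choice satisfying "g"<-1), the two fractional fluxes can be computed directly from the relations above.
```python
from math import acos, pi

g = -1.5                           # hypothetical facet-averaging constant, g < -1
phi = acos(-1.0 / g)               # equilibrium phase, phi = arccos(-1/g)

flux_quantum = 1.0                 # measure fluxes in units of Phi_0
flux1 = (phi / pi) * flux_quantum  # flux carried by the first splintered vortex
flux2 = flux_quantum - flux1       # flux carried by the second one

print(phi, flux1, flux2)           # the two fluxes add up to one flux quantum
```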
Splintered vortices were first observed at the asymmetric 45° grain boundaries between two d-wave superconductors YBa2Cu3O7−δ.
Spin-triplet Superfluidity.
In certain states of spin-1 superfluids or Bose condensates, the condensate wavefunction is invariant if the superfluid phase changes by formula_14, along with a formula_14 rotation of spin angle. This is in contrast to the formula_2 invariance of condensate wavefunction in a spin-0 superfluid. A vortex resulting from such phase windings is called fractional or half-quantum vortex, in contrast to one-quantum vortex where a phase changes by formula_2.
(ii) Vortices with integer phase winding and fractional flux in multicomponent superconductivity.
Different kinds of "Fractional vortices" appear in a different context in multi-component superconductivity where several independent charged condensates or superconducting components interact with each other electromagnetically.
Such a situation occurs for example in the formula_15 theories of the projected quantum states of liquid metallic hydrogen, where two order parameters originate from theoretically anticipated coexistence of electronic and protonic Cooper pairs. There, topological defects with an formula_16 (i.e. "integer") phase winding only in the electronic or only in the protonic condensate carry fractionally quantized magnetic flux: a consequence of electromagnetic interaction with the second condensate. Also these fractional vortices carry a superfluid momentum which does not obey Onsager-Feynman quantization.
Despite the integer phase winding, the basic properties of these kinds of fractional vortices are very different from the Abrikosov vortex solutions. For example, in contrast to the Abrikosov vortex, their magnetic field generically is not exponentially localized in space. Also in some cases the magnetic flux inverts its direction at a certain distance from the vortex center.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "|\\Psi|e^{i\\phi}"
},
{
"math_id": 1,
"text": "\\phi"
},
{
"math_id": 2,
"text": "2\\pi"
},
{
"math_id": 3,
"text": "{\\phi}"
},
{
"math_id": 4,
"text": " 2\\pi \\times \\mathit{integer}"
},
{
"math_id": 5,
"text": "\\pi"
},
{
"math_id": 6,
"text": "2\\pi N"
},
{
"math_id": 7,
"text": " N= \\pm 1, \\pm 2, ... "
},
{
"math_id": 8,
"text": "0-\\pi"
},
{
"math_id": 9,
"text": "\\phi(x)"
},
{
"math_id": 10,
"text": "\\lambda_J"
},
{
"math_id": 11,
"text": "\\propto d\\phi(x)/dx"
},
{
"math_id": 12,
"text": "\\propto\\sin\\phi(x)"
},
{
"math_id": 13,
"text": "\\kappa"
},
{
"math_id": 14,
"text": " \\pi"
},
{
"math_id": 15,
"text": " U(1)\\times U(1)"
},
{
"math_id": 16,
"text": " 2\\pi"
}
]
| https://en.wikipedia.org/wiki?curid=13196068 |
13197019 | RP3 | RP3, RP-3, RP.3, RP 3, or "variant", may refer to:
<templatestyles src="Dmbox/styles.css" />
This page lists articles associated with the same title formed as a letter–number combination.
{
"math_id": 0,
"text": "\\mathbb{P}_3(\\mathbb{R})"
}
]
| https://en.wikipedia.org/wiki?curid=13197019 |
13197969 | Vitali convergence theorem | In real analysis and measure theory, the Vitali convergence theorem, named after the Italian mathematician Giuseppe Vitali, is a generalization of the better-known dominated convergence theorem of Henri Lebesgue. It is a characterization of the convergence in "Lp" in terms of convergence in measure and a condition related to uniform integrability.
Preliminary definitions.
Let formula_0 be a measure space, i.e. formula_1 is a set function such that formula_2 and formula_3 is countably-additive. All functions considered in the sequel will be functions formula_4, where formula_5 or formula_6. We adopt the following definitions according to Bogachev's terminology.
A set of functions formula_7 is called uniformly integrable if formula_8, i.e. if formula_9.
A set of functions formula_7 is said to have uniformly absolutely continuous integrals if formula_10, i.e. if formula_11.
When formula_12, a set of functions formula_7 is uniformly integrable if and only if it is bounded in formula_13 and has uniformly absolutely continuous integrals. If, in addition, formula_3 is atomless, then the uniform integrability is equivalent to the uniform absolute continuity of integrals.
Finite measure case.
Let formula_0 be a measure space with formula_12. Let formula_14 and formula_15 be an formula_16-measurable function. Then, the following are equivalent:
# formula_18 converges to formula_15 in formula_19;
# formula_18 converges to formula_15 in measure and the sequence formula_20 is uniformly integrable.
For a proof, see Bogachev's monograph "Measure Theory, Volume I".
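To see the role of uniform integrability, a standard illustration (not part of the theorem's statement) is the sequence of functions equal to "n" on (0, 1/"n"] and 0 elsewhere on [0, 1] with Lebesgue measure: it converges to 0 in measure, yet every term has integral 1, so there is no convergence in "L"1 and the family is not uniformly integrable. The short Python check below approximates these quantities numerically on a grid.
```python
import numpy as np

# f_n = n on (0, 1/n] and 0 elsewhere, on the interval [0, 1].
x = np.linspace(0.0, 1.0, 1_000_001)        # uniform grid for numerical integration
dx = x[1] - x[0]

for n in [10, 100, 1000]:
    f_n = np.where((x > 0) & (x <= 1.0 / n), float(n), 0.0)
    integral = f_n.sum() * dx                # approximates the integral of |f_n|, stays ~1
    measure_large = (f_n > 0.5).sum() * dx   # measure of {|f_n| > 1/2} = 1/n, tends to 0
    print(n, round(integral, 3), round(measure_large, 6))
```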
Infinite measure case.
Let formula_0 be a measure space and formula_21. Let formula_22 and formula_17. Then, formula_18 converges to formula_15 in formula_19 if and only if the following holds:
# formula_18 converges to formula_15 in measure;
# the sequence formula_20 has uniformly absolutely continuous integrals;
# for every formula_23 there exists formula_24 with formula_25 such that formula_26
When formula_12, the third condition becomes superfluous (one can simply take formula_27) and the first two conditions give the usual form of Lebesgue-Vitali's convergence theorem originally stated for measure spaces with finite measure. In this case, one can show that conditions 1 and 2 imply that the sequence formula_20 is uniformly integrable.
Converse of the theorem.
Let formula_0 be measure space. Let formula_28 and assume that formula_29 exists for every formula_30. Then, the sequence formula_18 is bounded in formula_13 and has uniformly absolutely continuous integrals. In addition, there exists formula_31 such that formula_32 for every formula_30.
When formula_12, this implies that formula_18 is uniformly integrable.
For a proof, see Bogachev's monograph "Measure Theory, Volume I".
Citations.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(X,\\mathcal{A},\\mu)"
},
{
"math_id": 1,
"text": "\\mu : \\mathcal{A}\\to [0,\\infty]"
},
{
"math_id": 2,
"text": "\\mu(\\emptyset)=0"
},
{
"math_id": 3,
"text": "\\mu"
},
{
"math_id": 4,
"text": "f:X\\to \\mathbb{K}"
},
{
"math_id": 5,
"text": "\\mathbb{K}=\\R"
},
{
"math_id": 6,
"text": "\\mathbb{C}"
},
{
"math_id": 7,
"text": "\\mathcal{F} \\subset L^1(X,\\mathcal{A},\\mu)"
},
{
"math_id": 8,
"text": "\\lim_{M\\to+\\infty} \\sup_{f\\in\\mathcal{F}} \\int_{\\{|f|>M\\}} |f|\\, d\\mu = 0"
},
{
"math_id": 9,
"text": "\\forall\\ \\varepsilon >0,\\ \\exists\\ M_\\varepsilon>0\n: \\sup_{f\\in\\mathcal{F}} \\int_{\\{|f|\\geq M_\\varepsilon\\}} |f|\\, d\\mu < \\varepsilon"
},
{
"math_id": 10,
"text": "\\lim_{\\mu(A)\\to 0}\\sup_{f\\in\\mathcal{F}} \\int_A |f|\\, d\\mu = 0"
},
{
"math_id": 11,
"text": "\\forall\\ \\varepsilon>0,\\ \\exists\\ \\delta_\\varepsilon >0,\\ \\forall\\ A\\in\\mathcal{A} : \n\\mu(A)<\\delta_\\varepsilon \\Rightarrow \\sup_{f\\in \\mathcal{F}} \\int_A |f|\\, d\\mu < \\varepsilon"
},
{
"math_id": 12,
"text": "\\mu(X)<\\infty"
},
{
"math_id": 13,
"text": "L^1(X,\\mathcal{A},\\mu)"
},
{
"math_id": 14,
"text": "(f_n)\\subset L^p(X,\\mathcal{A},\\mu)"
},
{
"math_id": 15,
"text": "f"
},
{
"math_id": 16,
"text": "\\mathcal{A}"
},
{
"math_id": 17,
"text": "f\\in L^p(X,\\mathcal{A},\\mu)"
},
{
"math_id": 18,
"text": "(f_n)"
},
{
"math_id": 19,
"text": "L^p(X,\\mathcal{A},\\mu)"
},
{
"math_id": 20,
"text": "(|f_n|^p)_{n\\geq 1}"
},
{
"math_id": 21,
"text": "1\\leq p<\\infty"
},
{
"math_id": 22,
"text": "(f_n)_{n\\geq 1} \\subseteq L^p(X,\\mathcal{A},\\mu)"
},
{
"math_id": 23,
"text": "\\varepsilon>0"
},
{
"math_id": 24,
"text": "X_\\varepsilon\\in \\mathcal{A}"
},
{
"math_id": 25,
"text": "\\mu(X_\\varepsilon)<\\infty"
},
{
"math_id": 26,
"text": "\\sup_{n\\geq 1}\\int_{X\\setminus X_\\varepsilon} |f_n|^p\\, d\\mu <\\varepsilon."
},
{
"math_id": 27,
"text": "X_\\varepsilon = X"
},
{
"math_id": 28,
"text": "(f_n)_{n\\geq 1} \\subseteq L^1(X,\\mathcal{A},\\mu)"
},
{
"math_id": 29,
"text": "\\lim_{n\\to\\infty}\\int_A f_n\\,d\\mu"
},
{
"math_id": 30,
"text": "A\\in\\mathcal{A}"
},
{
"math_id": 31,
"text": "f\\in L^1(X,\\mathcal{A},\\mu)"
},
{
"math_id": 32,
"text": "\\lim_{n\\to\\infty}\\int_A f_n\\,d\\mu = \\int_A f\\, d\\mu"
}
]
| https://en.wikipedia.org/wiki?curid=13197969 |
13200604 | Derivator | In mathematics, derivators are a proposed frameworkpg 190-195 for homological algebra giving a foundation for both abelian and non-abelian homological algebra and various generalizations of it. They were introduced to address the deficiencies of derived categories (such as the non-functoriality of the cone construction) and provide at the same time a language for homotopical algebra.
Derivators were first introduced by Alexander Grothendieck in his long unpublished 1983 manuscript "Pursuing Stacks". They were then further developed by him in the huge unpublished 1991 manuscript "Les Dérivateurs" of almost 2000 pages. Essentially the same concept was introduced (apparently independently) by Alex Heller.
The manuscript has been edited for on-line publication by Georges Maltsiniotis. The theory has been further developed by several other people, including Heller, Franke, Keller and Groth.
Motivations.
One of the motivating reasons for considering derivators is the lack of functoriality of the cone construction in triangulated categories. Derivators are able to solve this problem, and solve the inclusion of general homotopy colimits, by keeping track of all possible diagrams in a category with weak equivalences and their relations between each other. Heuristically, given the diagram formula_0, which is a category with two objects and one non-identity arrow, and a functor formula_1 to a category formula_2 with a class of weak-equivalences formula_3 (and satisfying the right hypotheses), we should have an associated functor formula_4 where the target object is unique up to weak equivalence in formula_5. Derivators are able to encode this kind of information and provide a diagram calculus to use in derived categories and homotopy theory.
Definition.
Prederivators.
Formally, a prederivator formula_6 is a 2-functor formula_7 from a suitable 2-category of indices to the category of categories. Typically such 2-functors come from considering the categories formula_8 where formula_2 is called the category of coefficients. For example, formula_9 could be the category of small categories which are filtered, whose objects can be thought of as the indexing sets for a filtered colimit. Then, given a morphism of diagrams formula_10, denote formula_11 by formula_12. This is called the inverse image functor. In the motivating example, this is just precomposition, so given a functor formula_13 there is an associated functor formula_14. Note these 2-functors could be taken to be formula_15, where formula_3 is a suitable class of weak equivalences in a category formula_2.
Indexing categories.
There are a number of examples of indexing categories which can be used in this construction
Derivators.
Derivators are then the axiomatization of prederivators which come equipped with adjoint functors
formula_22
where formula_23 is left adjoint to formula_11 and so on. Heuristically, formula_24 should correspond to inverse limits, formula_23 to colimits.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\bullet \\to \\bullet"
},
{
"math_id": 1,
"text": "F:(\\bullet \\to \\bullet) \\to A"
},
{
"math_id": 2,
"text": "A"
},
{
"math_id": 3,
"text": "W"
},
{
"math_id": 4,
"text": "C(F): \\bullet \\to A[W^{-1}]"
},
{
"math_id": 5,
"text": "\\mathcal{C}[W^{-1}]"
},
{
"math_id": 6,
"text": "\\mathbb{D}"
},
{
"math_id": 7,
"text": "\\mathbb{D}: \\text{Ind}^{op} \\to \\text{CAT}"
},
{
"math_id": 8,
"text": "\\underline{\\text{Hom}}(I^{op}, A)"
},
{
"math_id": 9,
"text": "\\text{Ind}"
},
{
"math_id": 10,
"text": "f:I \\to J"
},
{
"math_id": 11,
"text": "f^*"
},
{
"math_id": 12,
"text": "f^*:\\mathbb{D}(J) \\to \\mathbb{D}(I)"
},
{
"math_id": 13,
"text": "F_I \\in \\underline{\\text{Hom}}(I^{op}, A)"
},
{
"math_id": 14,
"text": "F_J = F_I \\circ f"
},
{
"math_id": 15,
"text": "\\underline{\\text{Hom}}(-,A[W^{-1}])"
},
{
"math_id": 16,
"text": "\\text{FinCat}"
},
{
"math_id": 17,
"text": "\\Delta"
},
{
"math_id": 18,
"text": "X"
},
{
"math_id": 19,
"text": "\\text{Open}(X)"
},
{
"math_id": 20,
"text": "(X)_\\tau"
},
{
"math_id": 21,
"text": "T"
},
{
"math_id": 22,
"text": "f^? \\dashv f_! \\dashv f^* \\dashv f_* \\dashv f^!"
},
{
"math_id": 23,
"text": "f_!"
},
{
"math_id": 24,
"text": "f_*"
}
]
| https://en.wikipedia.org/wiki?curid=13200604 |
1320289 | Steam reforming | Method for producing hydrogen and carbon monoxide from hydrocarbon fuels
Steam reforming or steam methane reforming (SMR) is a method for producing syngas (hydrogen and carbon monoxide) by reaction of hydrocarbons with water. Commonly natural gas is the feedstock. The main purpose of this technology is hydrogen production. The reaction is represented by this equilibrium:
CH4 + H2O ⇌ CO + 3 H2
The reaction is strongly endothermic (Δ"H"SR = 206 kJ/mol).
Hydrogen produced by steam reforming is termed 'grey' hydrogen when the waste carbon dioxide is released to the atmosphere and 'blue' hydrogen when the carbon dioxide is (mostly) captured and stored geologically - see carbon capture and storage. Zero carbon 'green' hydrogen is produced by thermochemical water splitting, using solar thermal, low- or zero-carbon electricity or waste heat, or electrolysis, using low- or zero-carbon electricity. Zero carbon emissions 'turquoise' hydrogen is produced by one-step methane pyrolysis of natural gas.
Steam reforming of natural gas produces most of the world's hydrogen. Hydrogen is used in the industrial synthesis of ammonia and other chemicals.
Reactions.
Steam reforming reaction kinetics, in particular using nickel-alumina catalysts, have been studied in detail since the 1950s.
Pre-reforming.
The purpose of pre-reforming is to break down higher hydrocarbons such as propane, butane or naphtha into methane (CH4), which allows for more efficient reforming downstream.
Steam reforming.
The name-giving reaction is the steam reforming (SR) reaction and is expressed by the equation:
formula_0
Via the water-gas shift reaction (WGSR), additional hydrogen is released by reaction of water with the carbon monoxide generated according to equation [1]:
formula_1
Some additional reactions occurring within steam reforming processes have been studied. Commonly the direct steam reforming (DSR) reaction is also included:
formula_2
As these reactions by themselves are highly endothermic (apart from WGSR, which is mildly exothermic), a large amount of heat needs to be added to the reactor to keep a constant temperature. Optimal SMR reactor operating conditions lie within a temperature range of 800 °C to 900 °C at medium pressures of 20-30 bar. High excess of steam is required, expressed by the (molar) steam-to-carbon (S/C) ratio. Typical S/C ratio values lie within the range 2.5:1 - 3:1.
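As a rough illustrative sketch (simple stoichiometry only: it ignores equilibrium conversion, side reactions and heat integration, and the feed rate is an arbitrary example), the following Python snippet estimates the steam requirement at a chosen S/C ratio, the maximum hydrogen yield if reactions [1] and [2] went to completion, and the heat duty of the reforming step using the 206 kJ/mol figure quoted above.
```python
DELTA_H_SR = 206.0e3   # J per mol CH4 for the steam reforming step (value from the text)

def smr_estimate(ch4_mol_per_s: float, steam_to_carbon: float = 3.0):
    """Back-of-the-envelope steam feed, ideal H2 yield and reforming heat duty."""
    steam_feed = steam_to_carbon * ch4_mol_per_s   # mol H2O per second
    # CH4 + H2O -> CO + 3 H2, then CO + H2O -> CO2 + H2: at most 4 H2 per CH4.
    h2_max = 4.0 * ch4_mol_per_s                   # mol H2 per second (ideal)
    heat_duty = DELTA_H_SR * ch4_mol_per_s         # W required by the SR step alone
    return steam_feed, h2_max, heat_duty

print(smr_estimate(1.0))   # for a feed of 1 mol/s CH4 at S/C = 3
```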
Industrial practice.
The reaction is conducted in multitubular packed bed reactors, a subtype of the plug flow reactor category. These reactors consist of an array of long and narrow tubes which are situated within the combustion chamber of a large industrial furnace, providing the necessary energy to keep the reactor at a constant temperature during operation. Furnace designs vary, depending on the burner configuration they are typically categorized into: top-fired, bottom-fired, and side-fired. A notable design is the Foster-Wheeler terrace wall reformer.
Inside the tubes, a mixture of steam and methane is put into contact with a nickel catalyst. Catalysts with high surface-area-to-volume ratio are preferred because of diffusion limitations due to high operating temperature. Examples of catalyst shapes used are spoked wheels, gear wheels, and rings with holes (see Raschig rings). Additionally, these shapes have a low pressure drop, which is advantageous for this application.
Steam reforming of natural gas is 65–75% efficient.
The United States produces 9–10 million tons of hydrogen per year, mostly with steam reforming of natural gas. The worldwide ammonia production, using hydrogen derived from steam reforming, was 144 million tonnes in 2018. The energy consumption has been reduced from 100 GJ/tonne of ammonia in 1920 to 27 GJ by 2019.
Globally, almost 50% of hydrogen is produced via steam reforming. It is currently the least expensive method for hydrogen production available in terms of its capital cost.
In an effort to decarbonise hydrogen production, carbon capture and storage (CCS) methods are being implemented within the industry, which have the potential to remove up to 90% of CO2 produced from the process. Despite this, implementation of this technology remains problematic, costly, and increases the price of the produced hydrogen significantly.
Autothermal reforming.
Autothermal reforming (ATR) uses oxygen and carbon dioxide or steam in a reaction with methane to form syngas. The reaction takes place in a single chamber where the methane is partially oxidized. The reaction is exothermic. When the ATR uses carbon dioxide, the H2:CO ratio produced is 1:1; when the ATR uses steam, the H2:CO ratio produced is 2.5:1. The outlet temperature of the syngas is between 950–1100 °C and outlet pressure can be as high as 100 bar.
In addition to reactions [1] – [3], ATR introduces the following reaction:
formula_3
The main difference between SMR and ATR is that SMR only uses air for combustion as a heat source to create steam, while ATR uses purified oxygen. The advantage of ATR is that the H2:CO ratio can be varied, which can be useful for producing specialty products. Due to the exothermic nature of some of the additional reactions occurring within ATR, the process can essentially be performed at a net enthalpy of zero (Δ"H" = 0).
Partial oxidation.
Partial oxidation (POX) occurs when a sub-stoichiometric fuel-air mixture is partially combusted in a reformer creating hydrogen-rich syngas. POX is typically much faster than steam reforming and requires a smaller reactor vessel. POX produces less hydrogen per unit of the input fuel than steam reforming of the same fuel.
Steam reforming at small scale.
The capital cost of steam reforming plants is considered prohibitive for small to medium size applications. The costs for these elaborate facilities do not scale down well. Conventional steam reforming plants operate at pressures between 200 and 600 psi (14–40 bar) with outlet temperatures in the range of 815 to 925 °C.
For combustion engines.
Flared gas and vented volatile organic compounds (VOCs) are known problems in the offshore industry and in the on-shore oil and gas industry, since both release greenhouse gases into the atmosphere. Reforming for combustion engines utilizes steam reforming technology for converting waste gases into a source of energy.
Reforming for combustion engines is based on steam reforming, where non-methane hydrocarbons (NMHCs) of low quality gases are converted to synthesis gas (H2 + CO) and finally to methane (CH4), carbon dioxide (CO2) and hydrogen (H2) - thereby improving the fuel gas quality (methane number).
For fuel cells.
There is also interest in the development of much smaller units based on similar technology to produce hydrogen as a feedstock for fuel cells. Small-scale steam reforming units to supply fuel cells are currently the subject of research and development, typically involving the reforming of methanol, but other fuels are also being considered such as propane, gasoline, autogas, diesel fuel, and ethanol.
Disadvantages.
The reformer–fuel-cell system is still being researched, but in the near term systems would continue to run on existing fuels such as natural gas, gasoline, or diesel. However, there is an active debate about whether using these fuels to make hydrogen is beneficial while global warming is an issue. Fossil fuel reforming does not eliminate carbon dioxide release into the atmosphere, but it reduces carbon dioxide emissions and nearly eliminates carbon monoxide emissions compared with the burning of conventional fuels, owing to increased efficiency and fuel cell characteristics. However, by turning the release of carbon dioxide into a point source rather than a distributed release, carbon capture and storage becomes a possibility, which would prevent the release of carbon dioxide to the atmosphere while adding to the cost of the process.
The cost of hydrogen production by reforming fossil fuels depends on the scale at which it is done, the capital cost of the reformer, and the efficiency of the unit, so that whilst it may cost only a few dollars per kilogram of hydrogen at an industrial scale, it could be more expensive at the smaller scale needed for fuel cells.
Challenges with reformers supplying fuel cells.
There are several challenges associated with this technology:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "[1]\\qquad \\mathrm{CH}_4 + \\mathrm{H}_2\\mathrm{O} \\rightleftharpoons \\mathrm{CO} + 3\\,\\mathrm{H}_2\n\\qquad \\Delta H_{SR} = 206\\ \\mathrm{kJ/mol}"
},
{
"math_id": 1,
"text": "[2]\\qquad \\mathrm{CO} + \\mathrm{H}_2\\mathrm{O} \\rightleftharpoons \\mathrm{CO}_2 + \\mathrm{H}_2\n\\qquad \\Delta H_{WGSR} = -41\\ \\mathrm{kJ/mol}"
},
{
"math_id": 2,
"text": "[3]\\qquad \\mathrm{CH}_4 + 2\\,\\mathrm{H}_2\\mathrm{O} \\rightleftharpoons \\mathrm{CO}_2 + 4\\,\\mathrm{H}_2\n\\qquad \\Delta H_{DSR} = 165\\ \\mathrm{kJ/mol}"
},
{
"math_id": 3,
"text": "[4]\\qquad \\mathrm{CH}_4 + 0.5\\,\\mathrm{O}_2 \\rightleftharpoons \\mathrm{CO} + 2\\,\\mathrm{H}_2\n\\qquad \\Delta H_{R} = -24.5\\ \\mathrm{kJ/mol}"
}
]
| https://en.wikipedia.org/wiki?curid=1320289 |
1320360 | Spectral power distribution | Measurement describing the power of an illumination
In radiometry, photometry, and color science, a spectral power distribution (SPD) measurement describes the power per unit area per unit wavelength of an illumination (radiant exitance). More generally, the term "spectral power distribution" can refer to the concentration, as a function of wavelength, of any radiometric or photometric quantity (e.g. radiant energy, radiant flux, radiant intensity, radiance, irradiance, radiant exitance, radiosity, luminance, luminous flux, luminous intensity, illuminance, luminous emittance).
Knowledge of the SPD is crucial for optical-sensor system applications. Optical properties such as transmittance, reflectivity, and absorbance as well as the sensor response are typically dependent on the incident wavelength.
Physics.
Mathematically, for the spectral power distribution of a radiant exitance or irradiance one may write:
formula_0
where "M"("λ") is the spectral irradiance (or exitance) of the light (SI units: W/m2 = kg·m−1·s−3); "Φ" is the radiant flux of the source (SI unit: watt, W); "A" is the area over which the radiant flux is integrated (SI unit: square meter, m2); and "λ" is the wavelength (SI unit: meter, m). (Note that it is more convenient to express the wavelength of light in terms of nanometers; spectral exitance would then be expressed in units of W·m−2·nm−1.) The approximation is valid when the area and wavelength interval are small.
Relative SPD.
The ratio of spectral concentration (irradiance or exitance) at a given wavelength to the concentration of a reference wavelength provides the relative SPD. This can be written as:
formula_1
For instance, when the luminance of lighting fixtures and other light sources is handled separately, a spectral power distribution may be normalized in some manner, often to unity at 555 or 560 nanometers, coinciding with the peak of the eye's luminosity function.
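A minimal sketch of this normalization, using made-up sample data rather than a measured spectrum, and normalizing to unity at 560 nm:

```python
import numpy as np

# Hypothetical sampled SPD: wavelengths in nm, spectral exitance in W·m^-2·nm^-1
wavelengths = np.arange(400, 701, 10)
M = np.exp(-0.5 * ((wavelengths - 520) / 80.0) ** 2)   # placeholder spectrum

# Relative SPD: divide by the concentration at the reference wavelength (560 nm)
ref = np.argmin(np.abs(wavelengths - 560))
M_rel = M / M[ref]

print(round(M_rel[ref], 3))                                   # 1.0 by construction
print(round(M_rel[np.argmin(np.abs(wavelengths - 450))], 3))  # relative value at 450 nm
```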
Responsivity.
The SPD can be used to determine the response of a sensor at a specified wavelength. This compares the output power of the sensor to the input power as a function of wavelength. This can be generalized in the following formula:
formula_2
Knowing the responsivity is beneficial for the determination of illumination, interactive material components, and optical components, to optimize the performance of a system's design.
Source SPD and matter.
The spectral power distribution over the visible spectrum from a source can have varying concentrations of relative SPDs. The interactions between light and matter affect the absorption and reflectance properties of materials and subsequently produce a color that varies with source illumination.
For example, the relative spectral power distribution of the sun produces a white appearance if observed directly, but when the sunlight illuminates the Earth's atmosphere the sky appears blue under normal daylight conditions. This stems from the optical phenomenon called Rayleigh scattering which produces a concentration of shorter wavelengths and hence the blue color appearance.
Source SPD and color appearance.
The human visual response relies on trichromacy to process color appearance. While the human visual response integrates over all wavelengths, the relative spectral power distribution provides color appearance modeling information, since the wavelength bands with the greatest concentration become the primary contributors to the perceived color.
This becomes useful in photometry and colorimetry as the perceived color changes with source illumination and spectral distribution and coincides with metamerisms where an object's color appearance changes.
The spectral makeup of the source can also coincide with color temperature producing differences in color appearance due to the source's temperature.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M(\\lambda)=\\frac{\\partial^2\\Phi}{\\partial A\\,\\partial\\lambda}\\approx\\frac{\\Phi}{A\\,\\Delta\\lambda}"
},
{
"math_id": 1,
"text": "M_\\mathrm{rel}(\\lambda)=\\frac{M(\\lambda)}{M\\left(\\lambda_0\\right)}"
},
{
"math_id": 2,
"text": "R(\\lambda)=\\frac{S(\\lambda)}{M(\\lambda)}"
}
]
| https://en.wikipedia.org/wiki?curid=1320360 |
13205367 | Separable algebra | In mathematics, a separable algebra is a kind of semisimple algebra. It is a generalization to associative algebras of the notion of a separable field extension.
Definition and first properties.
A homomorphism of (unital, but not necessarily commutative) rings
formula_0
is called "separable" if the multiplication map
formula_1
admits a section
formula_2
that is a homomorphism of "A"-"A"-bimodules.
If the ring formula_3 is commutative and formula_0 maps formula_3 into the center of formula_4, we call formula_4 a "separable algebra over" formula_3.
It is useful to describe separability in terms of the element
formula_5
The reason is that a section "σ" is determined by this element. The condition that "σ" is a section of "μ" is equivalent to
formula_6
and the condition that σ is a homomorphism of "A"-"A"-bimodules is equivalent to the following requirement for any "a" in "A":
formula_7
Such an element "p" is called a "separability idempotent", since regarded as an element of the algebra formula_8 it satisfies formula_9.
Examples.
For any commutative ring "R", the (non-commutative) ring of "n"-by-"n" matrices formula_10 is a separable "R"-algebra. For any formula_11, a separability idempotent is given by formula_12, where formula_13 denotes the elementary matrix which is 0 except for the entry in the ("i", "j") entry, which is 1. In particular, this shows that separability idempotents need not be unique.
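The two defining conditions on a separability idempotent can be checked numerically in this matrix-algebra example. The sketch below (using NumPy, with "n" = 3 and the column index taken as 0 in 0-based indexing) represents elements of the tensor square of the matrix algebra as 4-index arrays and confirms that the multiplication map sends the idempotent to the identity and that the two bimodule actions agree on it; this is an illustrative check, not a proof.

```python
import numpy as np

n, j = 3, 0   # matrix size and the fixed column index (0-based here)
rng = np.random.default_rng(0)

def E(i, k):
    """Elementary matrix with a 1 in entry (i, k) and 0 elsewhere."""
    m = np.zeros((n, n))
    m[i, k] = 1.0
    return m

# Separability idempotent p = sum_i E_{ij} (x) E_{ji}, stored as a 4-index array
p = sum(np.einsum('ab,cd->abcd', E(i, j), E(j, i)) for i in range(n))

# Condition 1: the multiplication map sends p to 1, i.e. sum_i E_{ij} E_{ji} = identity
mu_p = np.einsum('abbd->ad', p)
print(np.allclose(mu_p, np.eye(n)))   # True

# Condition 2: (a (x) 1) p = p (1 (x) a) for all a; tested here with one random a
a = rng.standard_normal((n, n))
left = np.einsum('xa,abcd->xbcd', a, p)    # a acting on the first tensor factor
right = np.einsum('abcx,xd->abcd', p, a)   # a acting on the second tensor factor
print(np.allclose(left, right))            # True
```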
Separable algebras over a field.
A field extension "L"/"K" of finite degree is a separable extension if and only if "L" is separable as an associative "K"-algebra. If "L"/"K" has a primitive element formula_14 with irreducible polynomial formula_15, then a separability idempotent is given by formula_16. The tensorands are dual bases for the trace map: if formula_17 are the distinct "K"-monomorphisms of "L" into an algebraic closure of "K", the trace mapping Tr of "L" into "K" is defined by formula_18. The trace map and its dual bases make explicit "L" as a Frobenius algebra over "K".
More generally, separable algebras over a field "K" can be classified as follows: they are the same as finite products of matrix algebras over finite-dimensional division algebras whose centers are finite-dimensional separable field extensions of the field "K". In particular: Every separable algebra is itself finite-dimensional. If "K" is a perfect field – for example a field of characteristic zero, or a finite field, or an algebraically closed field – then every extension of "K" is separable, so that separable "K"-algebras are finite products of matrix algebras over finite-dimensional division algebras over field "K". In other words, if "K" is a perfect field, there is no difference between a separable algebra over "K" and a finite-dimensional semisimple algebra over "K".
It can be shown by a generalized theorem of Maschke that an associative "K"-algebra "A" is separable if for every field extension formula_19 the algebra formula_20 is semisimple.
Group rings.
If "K" is commutative ring and "G" is a finite group such that the order of "G" is invertible in "K", then the group algebra "K"["G"] is a separable "K"-algebra. A separability idempotent is given by formula_21.
Equivalent characterizations of separability.
There are several equivalent definitions of separable algebras. A "K"-algebra "A" is separable if and only if it is projective when considered as a left module of formula_22 in the usual way. Moreover, an algebra "A" is separable if and only if it is flat when considered as a right module of formula_22 in the usual way.
Separable algebras can also be characterized by means of split extensions: "A" is separable over "K" if and only if all short exact sequences of "A"-"A"-bimodules that are split as "A"-"K"-bimodules also split as "A"-"A"-bimodules. Indeed, this condition is necessary since the multiplication mapping formula_23 arising in the definition above is a "A"-"A"-bimodule epimorphism, which is split as an "A"-"K"-bimodule map by the right inverse mapping formula_24 given by formula_25. The converse can be proven by a judicious use of the separability idempotent (similarly to the proof of Maschke's theorem, applying its components within and without the splitting maps).
Equivalently, the relative Hochschild cohomology groups formula_26 of ("R", "S") in any coefficient bimodule "M" is zero for "n" > 0. Examples of separable extensions are many including first separable algebras where "R" is a separable algebra and "S" = 1 times the ground field. Any ring "R" with elements "a" and "b" satisfying "ab" = 1, but "ba" different from 1, is a separable extension over the subring "S" generated by 1 and "bRa".
Relation to Frobenius algebras.
A separable algebra is said to be "strongly separable" if there exists a separability idempotent that is "symmetric", meaning
formula_27
An algebra is strongly separable if and only if its trace form is nondegenerate, thus making the algebra into a particular kind of Frobenius algebra called a symmetric algebra (not to be confused with the symmetric algebra arising as the quotient of the tensor algebra).
If "K" is commutative, "A" is a finitely generated projective separable "K"-module, then "A" is a symmetric Frobenius algebra.
Relation to formally unramified and formally étale extensions.
Any separable extension "A" / "K" of commutative rings is formally unramified. The converse holds if "A" is a finitely generated "K"-algebra. A separable flat (commutative) "K"-algebra "A" is formally étale.
Further results.
A theorem in the area is that of J. Cuadra that a separable Hopf–Galois extension "R" | "S" has finitely generated natural "S"-module "R". A fundamental fact about a separable extension "R" | "S" is that it is left or right semisimple extension: a short exact sequence of left or right "R"-modules that is split as "S"-modules, is split as "R"-modules. In terms of G. Hochschild's relative homological algebra, one says that all "R"-modules are relative ("R", "S")-projective. Usually relative properties of subrings or ring extensions, such as the notion of separable extension, serve to promote theorems that say that the over-ring shares a property of the subring. For example, a separable extension "R" of a semisimple algebra "S" has "R" semisimple, which follows from the preceding discussion.
There is the celebrated Jans theorem that a finite group algebra "A" over a field of characteristic "p" is of finite representation type if and only if its Sylow "p"-subgroup is cyclic: the clearest proof is to note this fact for "p"-groups, then note that the group algebra is a separable extension of its Sylow "p"-subgroup algebra "B" as the index is coprime to the characteristic. The separability condition above will imply every finitely generated "A"-module "M" is isomorphic to a direct summand in its restricted, induced module. But if "B" has finite representation type, the restricted module is uniquely a direct sum of multiples of finitely many indecomposables, which induce to a finite number of constituent indecomposable modules of which "M" is a direct sum. Hence "A" is of finite representation type if "B" is. The converse is proven by a similar argument noting that every subgroup algebra "B" is a "B"-bimodule direct summand of a group algebra "A".
Citations.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "K \\to A"
},
{
"math_id": 1,
"text": "\\begin{array}{rccc} \\mu :& A \\otimes_K A &\\to& A \\\\\n & a \\otimes b &\\mapsto & ab \n\\end{array}"
},
{
"math_id": 2,
"text": "\\sigma: A \\to A \\otimes_K A"
},
{
"math_id": 3,
"text": "K"
},
{
"math_id": 4,
"text": "A"
},
{
"math_id": 5,
"text": "p := \\sigma(1) = \\sum a_i \\otimes b_i \\in A \\otimes_K A"
},
{
"math_id": 6,
"text": "\\sum a_i b_i = 1"
},
{
"math_id": 7,
"text": "\\sum a a_i \\otimes b_i = \\sum a_i \\otimes b_i a."
},
{
"math_id": 8,
"text": "A \\otimes A^{\\rm op}"
},
{
"math_id": 9,
"text": "p^2 = p"
},
{
"math_id": 10,
"text": "M_n(R)"
},
{
"math_id": 11,
"text": "1 \\le j \\le n"
},
{
"math_id": 12,
"text": "\\sum_{i=1}^n e_{ij} \\otimes e_{ji}"
},
{
"math_id": 13,
"text": "e_{ij}"
},
{
"math_id": 14,
"text": " a"
},
{
"math_id": 15,
"text": " p(x) = (x - a) \\sum_{i=0}^{n-1} b_i x^i"
},
{
"math_id": 16,
"text": " \\sum_{i=0}^{n-1} a^i \\otimes_K \\frac{b_i}{p'(a)}"
},
{
"math_id": 17,
"text": " \\sigma_1,\\ldots,\\sigma_{n} "
},
{
"math_id": 18,
"text": "Tr(x) = \\sum_{i=1}^{n} \\sigma_i(x)"
},
{
"math_id": 19,
"text": "L/K"
},
{
"math_id": 20,
"text": "A\\otimes_K L"
},
{
"math_id": 21,
"text": " \\frac{1}{o(G)} \\sum_{g \\in G} g \\otimes g^{-1}"
},
{
"math_id": 22,
"text": "A^e"
},
{
"math_id": 23,
"text": "\\mu : A \\otimes_K A \\rightarrow A "
},
{
"math_id": 24,
"text": " A \\rightarrow A \\otimes_K A"
},
{
"math_id": 25,
"text": " a \\mapsto a \\otimes 1 "
},
{
"math_id": 26,
"text": " H^n(R,S;M)"
},
{
"math_id": 27,
"text": " e = \\sum_{i=1}^n x_i \\otimes y_i = \\sum_{i=1}^n y_i \\otimes x_i"
}
]
| https://en.wikipedia.org/wiki?curid=13205367 |
1321000 | Carboplatin | Medication used to treat cancer
Carboplatin, sold under the brand name Paraplatin among others, is a chemotherapy medication used to treat a number of forms of cancer. This includes ovarian cancer, lung cancer, head and neck cancer, brain cancer, and neuroblastoma. It is used by injection into a vein.
Side effects generally occur. Common side effects include low blood cell levels, nausea, and electrolyte problems. Other serious side effects include allergic reactions and mutagenesis. It may be carcinogenic, but further research is needed to confirm this. Use during pregnancy may result in harm to the baby. Carboplatin is in the platinum-based antineoplastic family of medications and works by interfering with duplication of DNA.
Carboplatin was developed as a less toxic analogue of cisplatin. It was patented in 1972 and approved for medical use in 1989. It is on the 2023 World Health Organization's List of Essential Medicines.
Medical uses.
Carboplatin is used to treat a number of forms of cancer. This includes ovarian cancer, lung cancer, head and neck cancer, brain cancer, and neuroblastoma. It may be used for some types of testicular cancer but cisplatin is generally more effective. It has also been used to treat triple-negative breast cancer.
Side effects.
Relative to cisplatin, the greatest benefit of carboplatin is its reduced side effects, particularly the elimination of nephrotoxic effects. Nausea and vomiting are less severe and more easily controlled.
The main drawback of carboplatin is its myelosuppressive effect. This causes the blood cell and platelet output of bone marrow in the body to decrease quite dramatically, sometimes as low as 10% of its usual production levels. The nadir of this myelosuppression usually occurs 21–28 days after the first treatment, after which the blood cell and platelet levels in the blood begin to stabilize, often coming close to its pre-carboplatin levels. This decrease in white blood cells (neutropenia) can cause complications, and is sometimes treated with drugs like filgrastim. The most notable complication of neutropenia is increased probability of infection by opportunistic organisms, which necessitates hospital readmission and treatment with antibiotics.
Mechanism of action.
Carboplatin differs from cisplatin in that it has a bidentate dicarboxylate (the ligand is cyclobutane dicarboxylate, CBDCA) in place of the two chloride ligands. Both drugs are alkylating agents. CBDCA and chloride are the leaving groups in these respective drugs. Carboplatin exhibits slower aquation (replacement of CBDCA by water) and thus slower DNA binding kinetics, although it forms the same reaction products "in vitro" at equivalent doses with cisplatin. Unlike cisplatin, carboplatin may be susceptible to alternative mechanisms. Some results show that cisplatin and carboplatin cause different morphological changes in MCF-7 cell lines while exerting their cytotoxic behaviour. The diminished reactivity limits protein–carboplatin complexes, which are excreted. The lower excretion rate of carboplatin means that more is retained in the body, and hence its effects are longer lasting (a retention half-life of 30 hours for carboplatin, compared to 1.5–3.6 hours in the case of cisplatin).
Like cisplatin, carboplatin binds to and cross-links DNA, interfering with the replication and suppressing growth of the cancer cell.
Dose.
Calvert's formula is used to calculate the dose of carboplatin. It considers the creatinine clearance and the desired area under the curve (AUC). After 24 hours, close to 70% of carboplatin is excreted in the urine unchanged. This means that the dose of carboplatin must be adjusted for any impairment in kidney function.
Calvert formula: formula_0
The typical area under the curve (AUC) for carboplatin ranges from 3 to 7 (mg/ml)·min.
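A minimal sketch of the Calvert-formula calculation; the AUC target and GFR value below are illustrative assumptions only, and actual dosing is determined clinically.

```python
def carboplatin_dose_mg(target_auc, gfr_ml_per_min):
    """Calvert formula: dose (mg) = AUC * (GFR + 25)."""
    return target_auc * (gfr_ml_per_min + 25)

# Assumed values: target AUC of 5 (mg/ml)*min and a GFR of 60 ml/min
print(carboplatin_dose_mg(5, 60))   # 425 mg
```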
Synthesis.
Cisplatin reacts with silver nitrate and then cyclobutan-1,1-dicarboxylic acid to form carboplatin.
History.
Carboplatin, a cisplatin analogue, was developed by Bristol Myers Squibb and the Institute of Cancer Research in order to reduce the toxicity of cisplatin. It gained U.S. Food and Drug Administration (FDA) approval, under the brand name Paraplatin, in March 1989. Starting in October 2004, generic versions of the drug became available.
Research.
Carboplatin has also been used for adjuvant therapy of stage 1 seminomatous testicular cancer. Research has indicated that it is not less effective than adjuvant radiotherapy for this treatment, while having fewer side effects. This has led to carboplatin based adjuvant therapy being generally preferred over adjuvant radiotherapy in clinical practice.
Carboplatin combined with hexadecyl chain and polyethylene glycol appears to have increased liposolubility and PEGylation. This is useful in chemotherapy, specifically for non-small cell lung cancer.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{Dose}(\\mathrm{mg})= \\mathrm{AUC} \\cdot (\\mathrm{GFR} + 25)"
}
]
| https://en.wikipedia.org/wiki?curid=1321000 |
13213280 | Kachurovskii's theorem | In mathematics, Kachurovskii's theorem is a theorem relating the convexity of a function on a Banach space to the monotonicity of its Fréchet derivative.
Statement of the theorem.
Let "K" be a convex subset of a Banach space "V" and let "f" : "K" → R ∪ {+∞} be an extended real-valued function that is Fréchet differentiable with derivative d"f"("x") : "V" → R at each point "x" in "K". (In fact, d"f"("x") is an element of the continuous dual space "V"∗.) Then the following are equivalent:
formula_0
formula_1 | [
{
"math_id": 0,
"text": "\\mathrm{d} f(x) (y - x) \\leq f(y) - f(x);"
},
{
"math_id": 1,
"text": "\\big( \\mathrm{d} f(x) - \\mathrm{d} f(y) \\big) (x - y) \\geq 0."
}
]
| https://en.wikipedia.org/wiki?curid=13213280 |
13213701 | Kōmura's theorem | Mathematical theorem
In mathematics, Kōmura's theorem is a result on the differentiability of absolutely continuous Banach space-valued functions, and is a substantial generalization of Lebesgue's theorem on the differentiability of the indefinite integral, which is that Φ : [0, "T"] → R given by
formula_0
is differentiable at "t" for almost every 0 < "t" < "T" when "φ" : [0, "T"] → R lies in the "L""p" space "L"1([0, "T"]; R).
Statement.
Let ("X", || ||) be a reflexive Banach space and let "φ" : [0, "T"] → "X" be absolutely continuous. Then "φ" is (strongly) differentiable almost everywhere, the derivative "φ"′ lies in the Bochner space "L"1([0, "T"]; "X"), and, for all 0 ≤ "t" ≤ "T",
formula_1 | [
{
"math_id": 0,
"text": "\\Phi(t) = \\int_{0}^{t} \\varphi(s) \\, \\mathrm{d} s,"
},
{
"math_id": 1,
"text": "\\varphi(t) = \\varphi(0) + \\int_{0}^{t} \\varphi'(s) \\, \\mathrm{d} s."
}
]
| https://en.wikipedia.org/wiki?curid=13213701 |
13214503 | Antenna gain-to-noise-temperature | Antenna gain-to-noise-temperature (G/T) is a figure of merit in the characterization of antenna performance, where "G" is the antenna gain in decibels at the receive frequency, and "T" is the equivalent noise temperature of the receiving system in kelvins. The receiving system noise temperature is the summation of the antenna noise temperature and the RF chain noise temperature from the antenna terminals to the receiver output.
Antenna temperature ("Tant") is a parameter that describes how much noise an antenna produces in a given environment. Antenna noise temperature is not the physical temperature of the antenna but rather an expression of the available noise power at the antenna flange. Moreover, an antenna does not have an intrinsic "antenna temperature" associated with it; rather the temperature depends on its gain pattern and the thermal environment that it is placed in. Antenna temperature is also sometimes referred to as Antenna Noise Temperature.
To define the environment, we'll introduce a temperature distribution - this is the temperature in every direction away from the antenna in spherical coordinates. For instance, the night sky is roughly ; the value of the temperature pattern in the direction of the Earth's ground is the physical temperature of the Earth's ground. This temperature distribution will be written as "T"S(θ, φ). Hence, an antenna's temperature will vary depending on whether it is directional and pointed into space or staring into the sun.
For an antenna with a radiation pattern given by "G"(θ, φ), the noise temperature is mathematically defined as:
formula_0
This states that the temperature surrounding the antenna is integrated over the entire sphere, and weighted by the antenna's radiation pattern. Hence, an isotropic antenna would have a noise temperature that is the average of all temperatures around the antenna; for a perfectly directional antenna (with a pencil beam), the antenna temperature will depend only on the temperature of the region at which the antenna is "looking".
The noise power "P"N (in watts) received from an antenna at temperature "T"A can be expressed in terms of the bandwidth, "B", that the antenna (and its receiver) are operating over:
formula_1,
where "k" is the Boltzmann constant (). The receiver also has a temperature associated with it, "T"E, and the total system temperature "T" (antenna plus receiver) has a combined temperature given by "T" = "T"A + "T"E. This temperature can be used in the above equation to find the total noise power of the system. These concepts begin to illustrate how antenna engineers must understand receivers and the associated electronics, because the resulting systems very much depend on each other.
A parameter often encountered in specification sheets for antennas that operate in certain environments is the ratio of gain of the antenna divided by the antenna temperature (or system temperature if a receiver is specified). This parameter is written as "G/T", and has units of dB·K−1.
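A small numerical sketch of these relationships, with all input values (antenna and receiver temperatures, bandwidth, antenna gain) chosen purely for illustration:

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K

# Assumed receiving system
T_ant = 50.0       # antenna noise temperature, K (assumed)
T_rx = 75.0        # receiver (LNA) noise temperature, K (assumed)
T_sys = T_ant + T_rx
B = 36e6           # bandwidth, Hz (assumed)

P_noise = k * T_sys * B                 # P_N = k * T * B
print(f"Noise power: {P_noise:.3e} W")

G_dB = 45.0                             # receive antenna gain, dBi (assumed)
G_over_T = G_dB - 10 * math.log10(T_sys)
print(f"G/T = {G_over_T:.1f} dB/K")
```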
"G/T" Calculation
"G/T" is the figure of merit for a satellite system.
"G" is the Receive antenna gain.
"T" is the system noise temperature.
System noise temperature = antenna noise temperature + Receiver noise temperature (LNA)
Antenna noise temperature is the noise power seen at the receive output of the antenna. (To LNA)
"If we are not measuring with an LNA or Receiver then"
System noise temperature = antenna noise temperature.
This is not a representative value for calculating "G/T" since the "G/T" relates to the receive performance of both antenna and receiver.
Selection of antenna aperture.
Satellite antenna aperture is closely related to the quality factor ("G/T" value) of the earth station. The "G/T" value and satellite power demand, i.e. equivalent rent bandwidth, is a logarithmic linear relationship. So the value of equivalent rent bandwidth increases with the narrowing of antenna aperture.
Therefore, when selecting the earth station aperture, it is not a case of the smaller, the better. The earth station aperture should make a compromise between the space overhead (equivalent rent bandwidth) and ground overhead (antenna aperture) in order to make the system achieve optimum allocation.
Achievable "G/T" with current VSAT antennas in C and Ku bands (elevation angle E = 35°):
3.8 m diameter: "G/T" = 21.7 dB·K−1
7.5 m diameter: "G/T" = 25.3 dB·K−1
11 m diameter: "G/T" = 31.7 dB·K−1
References.
<templatestyles src="Reflist/styles.css" />
[1] p. 32, Thomas A. Milligan, Modern Antenna Design, 2nd Edition, IEEE Press
<templatestyles src="Citation/styles.css"/>
{
"math_id": 0,
"text": "T_\\text{A} = \\frac{1}{4\\pi} \\int_{0}^{2\\pi} \\int_{0}^{\\pi} G(\\theta,\\varphi) T_\\text{S}(\\theta,\\varphi) \\sin(\\theta) \\; d\\theta d\\varphi "
},
{
"math_id": 1,
"text": "P_\\text{N} = k T_\\text{A} B"
}
]
| https://en.wikipedia.org/wiki?curid=13214503 |
13215004 | Moreau's theorem | In mathematics, Moreau's theorem is a result in convex analysis named after French mathematician Jean-Jacques Moreau. It shows that sufficiently well-behaved convex functionals on Hilbert spaces are differentiable and the derivative is well-approximated by the so-called Yosida approximation, which is defined in terms of the resolvent operator.
Statement of the theorem.
Let "H" be a Hilbert space and let "φ" : "H" → R ∪ {+∞} be a proper, convex and lower semi-continuous extended real-valued functional on "H". Let "A" stand for ∂"φ", the subderivative of "φ"; for "α" > 0 let "J""α" denote the resolvent:
formula_0
and let "A""α" denote the Yosida approximation to "A":
formula_1
For each "α" > 0 and "x" ∈ "H", let
formula_2
Then
formula_3
and "φ""α" is convex and Fréchet differentiable with derivative d"φ""α" = "A""α". Also, for each "x" ∈ "H" (pointwise), "φ""α"("x") converges upwards to "φ"("x") as "α" → 0. | [
{
"math_id": 0,
"text": "J_{\\alpha} = (\\mathrm{id} + \\alpha A)^{-1};"
},
{
"math_id": 1,
"text": "A_{\\alpha} = \\frac1{\\alpha} ( \\mathrm{id} - J_{\\alpha} )."
},
{
"math_id": 2,
"text": "\\varphi_{\\alpha} (x) = \\inf_{y \\in H} \\frac1{2 \\alpha} \\| y - x \\|^{2} + \\varphi (y)."
},
{
"math_id": 3,
"text": "\\varphi_{\\alpha} (x) = \\frac{\\alpha}{2} \\| A_{\\alpha} x \\|^{2} + \\varphi (J_{\\alpha} (x))"
}
]
| https://en.wikipedia.org/wiki?curid=13215004 |
1321785 | Quasi-projective variety | In mathematics, a quasi-projective variety in algebraic geometry is a locally closed subset of a projective variety, i.e., the intersection inside some projective space of a Zariski-open and a Zariski-closed subset. A similar definition is used in scheme theory, where a "quasi-projective scheme" is a locally closed subscheme of some projective space.
Relationship to affine varieties.
An affine space is a Zariski-open subset of a projective space, and since any closed affine subset formula_0 can be expressed as an intersection of the projective completion formula_1 and the affine space embedded in the projective space, this implies that any affine variety is quasiprojective. There are locally closed subsets of projective space that are not affine, so that quasi-projective is more general than affine. Taking the complement of a single point in projective space of dimension at least 2 gives a non-affine quasi-projective variety. This is also an example of a quasi-projective variety that is neither affine nor projective.
Examples.
Since quasi-projective varieties generalize both affine and projective varieties, they are sometimes referred to simply as "varieties". Varieties isomorphic to affine algebraic varieties as quasi-projective varieties are called affine varieties; similarly for projective varieties. For example, the complement of a point in the affine line, i.e., formula_2, is isomorphic to the zero set of the polynomial formula_3 in the affine plane. Viewed as a subset of the affine line, formula_4 is not closed, since any polynomial that is zero on the complement of a point must be zero on the entire affine line. For another example, the complement of any conic in projective space of dimension 2 is affine. Varieties isomorphic to open subsets of affine varieties are called quasi-affine.
Quasi-projective varieties are "locally affine" in the same sense that a manifold is locally Euclidean: every point of a quasi-projective variety has a neighborhood which is an affine variety. This yields a basis of affine sets for the Zariski topology on a quasi-projective variety.
Citations.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "U"
},
{
"math_id": 1,
"text": "\\bar{U}"
},
{
"math_id": 2,
"text": "X=\\mathbb{A}^1 \\setminus \\{0\\}"
},
{
"math_id": 3,
"text": "xy-1"
},
{
"math_id": 4,
"text": "X"
}
]
| https://en.wikipedia.org/wiki?curid=1321785 |
13217890 | IMO number | International ship identification number
The IMO number of the International Maritime Organization is a generic term covering two distinct meanings. The IMO ship identification number is a unique ship identifier; the IMO company and registered owner identification number is used to identify uniquely each company and/or registered owner managing ships of at least 100 gross tons (gt). The schemes are managed in parallel, but IMO company/owner numbers may also be obtained by managers of vessels "not" having IMO ship numbers. IMO numbers were introduced to improve maritime safety and reduce fraud and pollution, under the International Convention for the Safety of Life at Sea (SOLAS).
The IMO ship number scheme has been mandatory, for SOLAS signatories, for passenger and cargo ships above a certain size since 1996, and voluntarily applicable to various other vessels since 2013/2017. The number identifies a ship and does not change when the ship's owner, country of registry (flag state) or name changes, unlike the official numbers used in some countries, e.g. the UK. The ship's certificates must also bear the IMO ship number. Since 1 July 2004, passenger ships are also required to carry the marking on a horizontal surface visible from the air.
History.
IMO resolutions (1987–2017).
In 1987 the IMO adopted Resolution A.600(15) to create the IMO ship identification number scheme aimed at the "enhancement of maritime safety and pollution prevention and the prevention of maritime fraud" by assigning to each ship a unique permanent identification number. Lloyd's Register had already introduced permanent numbers for all the ships in their published register in 1963, and these were modified to seven-digit numbers in 1969. It is this number series that was adopted as the basis for IMO ship numbers in 1987.
Unique and permanent numbers are needed due to the frequent changes in ships' names or other details. As one example, the vessel with IMO ship number "IMO 9176187" was built in Japan, has been through the names "Asia Melody", "Cornelie Oldendorff", "Maxima", "Jaydee M", "Evangelia", "Evangeli", "Shinsung Dream" and "Orange Dream", has operated under the flags of Panama, Liberia, Marshall Islands, the Republic of Korea and Sierra Leone, with numerous different owners/operators, and has had home ports of Majuro, Freetown and Cheju, but its IMO number has remained unchanged throughout.
The original resolution applied to cargo vessels (meaning "ships which are not passenger ships") at least 300 gt and passenger vessels of at least 100 gt.
This resolution was revoked in 2013, being replaced by Resolution A.1078(28), which allowed application of the Scheme to ships of 100 gt and above, including fishing vessels. That in turn was revoked in 2017 and replaced by Resolution A.1117(30), which allows its application to ships of 100 gt and above, "including fishing vessels of steel and non-steel hull construction; passenger ships of less than 100 gt, high-speed passenger craft and mobile offshore drilling units [...]; and all motorized inboard fishing vessels of less than 100 gt down to a size limit of 12 metres in length overall (LOA), authorized to operate outside waters under the national jurisdiction of the flag State". IMO resolutions are "for implementation on a voluntary basis".
Although not mandatory under SOLAS, since IMO ship numbers became available also to fishing vessels in 2013, some regional fisheries management organisations, the European Union and other organizations or states have made them mandatory for fishing vessels above a certain size.
SOLAS regulation (1994).
SOLAS regulation XI-1/3 was adopted in 1994 and came into force on 1 January 1996, making IMO ship numbers mandatory for those countries that have ratified (or acceded to, accepted, approved, adopted, etc.) SOLAS.
The IMO scheme and hence SOLAS regulation does not apply to:
Security enhancements 2002.
In December 2002, the Diplomatic Conference on Maritime Security adopted a number of measures aimed at enhancing security of ships and port facilities. This included a modification to SOLAS Regulation XI-1/3 to require the IMO ship numbers to be permanently marked in a visible place either on the ship's hull or superstructure as well as internally and on the ship's certificates. Passenger ships should also carry the marking on a horizontal surface visible from the air. The enhanced regulations came into effect on 1 July 2004.
Company and Registered Owner Regulation 2005.
In May 2005, IMO adopted a new SOLAS regulation XI-1/3-1 on the mandatory company and registered owner identification number scheme, with entry into force on 1 January 2009.
The regulation provides that every ship owner and management company shall have a unique identification number. Other amendments require these numbers to be added to the relevant certificates and documents in the International Safety Management Code (ISM) and the International Ship and Port Facility Security Code (ISPS). Like the IMO ship identification number, the company identification number is a seven-digit number with the prefix IMO. For example, for the ship "Atlantic Star" (IMO 8024026), IMO 5304986 referred to the former ship manager Pullmantur Cruises Ship Management Ltd and IMO 5364264 to her former owner, Pullmantur Cruises Empress Ltd.
Assignment.
S&P Global is the manager of the scheme and, as such, identifies and assigns IMO numbers without charge. The organization was previously known as Lloyd's Register-Fairplay, IHS Fairplay and IHS Maritime.
For new vessels, the IMO ship number is assigned to a hull during construction, generally upon keel laying. Many vessels which fall outside the mandatory requirements of SOLAS have numbers allocated by Lloyd's Register or IHS Markit in the same numerical series, including fishing vessels and commercial yachts.
Structure.
IMO number of a vessel.
An IMO number is made of the three letters "IMO" followed by a seven-digit number. This consists of a six-digit sequential unique number followed by a check digit. The integrity of an IMO number can be verified using its check digit. The checksum of an IMO ship identification number is calculated by multiplying each of the first six digits by a factor of "7" to "2" corresponding to their position from right to left. The rightmost digit of this sum is the check digit.
Example for IMO 9074729:
(9×"7") + (0×"6") + (7×"5") + (4×"4") + (7×"3") + (2×"2") = 139.
IMO number of a company.
The checksum of an IMO company and registered owner identification number is calculated somewhat differently. The first six digits are multiplied by the respective weights: "8", "6", "4", "2", "9", and "7" and then summed. From this sum modulo 11 is taken. The result of which is subtracted from 11. And modulo 10 of this difference results in the check digit.
Example for company IMO 0289230:
formula_0
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{align}\\mbox{check digit} &= (11 - (0 \\times 7 + 2 \\times 6 + 8 \\times 5 + 9 \\times 4 + 2 \\times 3 + 3 \\times 2) \\bmod 11) \\bmod 10 \\\\\n&= (11 - (0 + 12 + 40 + 36 + 6 + 6) \\bmod 11) \\bmod 10 \\\\\n&= (11 - 100 \\bmod 11) \\bmod 10 \\\\\n&= (11 - 1) \\bmod 10 \\\\\n&= 10 \\bmod 10 = 0\n\\end{align}\n"
}
]
| https://en.wikipedia.org/wiki?curid=13217890 |
1322061 | Integral curve | In mathematics, an integral curve is a parametric curve that represents a specific solution to an ordinary differential equation or system of equations.
Name.
Integral curves are known by various other names, depending on the nature and interpretation of the differential equation or vector field. In physics, integral curves for an electric field or magnetic field are known as "field lines", and integral curves for the velocity field of a fluid are known as "streamlines". In dynamical systems, the integral curves for a differential equation that governs a system are referred to as "trajectories" or "orbits".
Definition.
Suppose that F is a static vector field, that is, a vector-valued function with Cartesian coordinates ("F"1,"F"2...,"F""n"), and that x("t") is a parametric curve with Cartesian coordinates ("x"1("t"),"x"2("t")...,"x""n"("t")). Then x("t") is an integral curve of F if it is a solution of the autonomous system of ordinary differential equations,
formula_0
Such a system may be written as a single vector equation,
formula_1
This equation says that the vector tangent to the curve at any point x("t") along the curve is precisely the vector F(x("t")), and so the curve x("t") is tangent at each point to the vector field F.
If a given vector field is Lipschitz continuous, then the Picard–Lindelöf theorem implies that there exists a unique flow for small time.
Examples.
If the differential equation is represented as a vector field or slope field, then the corresponding integral curves are tangent to the field at each point.
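As a small numerical illustration of the definition, the sketch below traces an approximate integral curve of the planar vector field F("x", "y") = (−"y", "x"), whose integral curves are circles about the origin; the step size, step count, and initial point are arbitrary illustrative choices.

```python
import math

def F(x, y):
    """A static vector field on the plane; its integral curves are circles."""
    return -y, x

def integral_curve(x0, y0, dt=1e-3, steps=6283):
    """Approximate x'(t) = F(x(t)) from (x0, y0) with explicit Euler steps."""
    x, y = x0, y0
    path = [(x, y)]
    for _ in range(steps):
        fx, fy = F(x, y)
        x, y = x + dt * fx, y + dt * fy
        path.append((x, y))
    return path

x_end, y_end = integral_curve(1.0, 0.0)[-1]
print(round(math.hypot(x_end, y_end), 3))  # close to 1: the radius is conserved up to Euler error
```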
Generalization to differentiable manifolds.
Definition.
Let "M" be a Banach manifold of class "C""r" with "r" ≥ 2. As usual, T"M" denotes the tangent bundle of "M" with its natural projection "π""M" : T"M" → "M" given by
formula_2
A vector field on "M" is a cross-section of the tangent bundle T"M", i.e. an assignment to every point of the manifold "M" of a tangent vector to "M" at that point. Let "X" be a vector field on "M" of class "C""r"−1 and let "p" ∈ "M". An integral curve for "X" passing through "p" at time "t"0 is a curve "α" : "J" → "M" of class "C""r"−1, defined on an open interval "J" of the real line R containing "t"0, such that
formula_3
formula_4
Relationship to ordinary differential equations.
The above definition of an integral curve "α" for a vector field "X", passing through "p" at time "t"0, is the same as saying that "α" is a local solution to the ordinary differential equation/initial value problem
formula_3
formula_5
It is local in the sense that it is defined only for times in "J", and not necessarily for all "t" ≥ "t"0 (let alone "t" ≤ "t"0). Thus, the problem of proving the existence and uniqueness of integral curves is the same as that of finding solutions to ordinary differential equations/initial value problems and showing that they are unique.
Remarks on the time derivative.
In the above, "α"′("t") denotes the derivative of "α" at time "t", the "direction "α" is pointing" at time "t". From a more abstract viewpoint, this is the Fréchet derivative:
formula_6
In the special case that "M" is some open subset of R"n", this is the familiar derivative
formula_7
where "α"1, ..., "α""n" are the coordinates for "α" with respect to the usual coordinate directions.
The same thing may be phrased even more abstractly in terms of induced maps. Note that the tangent bundle T"J" of "J" is the trivial bundle "J" × R and there is a canonical cross-section "ι" of this bundle such that "ι"("t") = 1 (or, more precisely, ("t", 1) ∈ "ι") for all "t" ∈ "J". The curve "α" induces a bundle map "α"∗ : T"J" → T"M" so that the following diagram commutes:
Then the time derivative "α"′ is the composition "α"′ = "α"∗ "ι", and "α"′("t") is its value at some point "t" ∈ "J". | [
{
"math_id": 0,
"text": "\\begin{align}\n\\frac{dx_1}{dt} &= F_1(x_1,\\ldots,x_n) \\\\ \n&\\vdots \\\\\n\\frac{dx_n}{dt} &= F_n(x_1,\\ldots,x_n).\n\\end{align}\n"
},
{
"math_id": 1,
"text": "\\mathbf{x}'(t) = \\mathbf{F}(\\mathbf{x}(t)).\\!\\,"
},
{
"math_id": 2,
"text": "\\pi_{M} : (x, v) \\mapsto x."
},
{
"math_id": 3,
"text": "\\alpha (t_{0}) = p;\\,"
},
{
"math_id": 4,
"text": "\\alpha' (t) = X (\\alpha (t)) \\mbox{ for all } t \\in J."
},
{
"math_id": 5,
"text": "\\alpha' (t) = X (\\alpha (t)).\\,"
},
{
"math_id": 6,
"text": "(\\mathrm{d}_t\\alpha) (+1) \\in \\mathrm{T}_{\\alpha (t)} M."
},
{
"math_id": 7,
"text": "\\left( \\frac{\\mathrm{d} \\alpha_{1}}{\\mathrm{d} t}, \\dots, \\frac{\\mathrm{d} \\alpha_{n}}{\\mathrm{d} t} \\right),"
}
]
| https://en.wikipedia.org/wiki?curid=1322061 |
13229499 | Bochner identity | Identity concerning harmonic maps between Riemannian manifolds
In mathematics — specifically, differential geometry — the Bochner identity is an identity concerning harmonic maps between Riemannian manifolds. The identity is named after the American mathematician Salomon Bochner.
Statement of the result.
Let "M" and "N" be Riemannian manifolds and let "u" : "M" → "N" be a harmonic map. Let d"u" denote the derivative (pushforward) of "u", ∇ the gradient, Δ the Laplace–Beltrami operator, Riem"N" the Riemann curvature tensor on "N" and Ric"M" the Ricci curvature tensor on "M". Then
formula_0 | [
{
"math_id": 0,
"text": "\\frac12 \\Delta \\big( | \\nabla u |^{2} \\big) = \\big| \\nabla ( \\mathrm{d} u ) \\big|^{2} + \\big\\langle \\mathrm{Ric}_{M} \\nabla u, \\nabla u \\big\\rangle - \\big\\langle \\mathrm{Riem}_{N} (u) (\\nabla u, \\nabla u) \\nabla u, \\nabla u \\big\\rangle."
}
]
| https://en.wikipedia.org/wiki?curid=13229499 |
13230426 | AIJ | AIJ, Aij, or aij could refer to:
See also.
Topics referred to by the same term
<templatestyles src="Dmbox/styles.css" />
This page lists articles associated with the title AIJ.
{
"math_id": 0,
"text": "A_{ij}"
}
]
| https://en.wikipedia.org/wiki?curid=13230426 |
1323068 | Catmull–Clark subdivision surface | Technique in 3D computer graphics
The Catmull–Clark algorithm is a technique used in 3D computer graphics to create curved surfaces by using subdivision surface modeling. It was devised by Edwin Catmull and Jim Clark in 1978 as a generalization of bi-cubic "uniform" B-spline surfaces to arbitrary topology.
In 2005, Edwin Catmull, together with Tony DeRose and Jos Stam, received an Academy Award for Technical Achievement for their invention and application of subdivision surfaces. DeRose wrote about "efficient, fair interpolation" and character animation. Stam described a technique for a direct evaluation of the limit surface without recursion.
Recursive evaluation.
Catmull–Clark surfaces are defined recursively, using the following "refinement scheme."
Start with a mesh of an arbitrary polyhedron. All the vertices in this mesh shall be called "original points".
Properties.
The new mesh will consist only of quadrilaterals, which in general will "not" be planar. The new mesh will generally look "smoother" (i.e. less "jagged" or "pointy") than the old mesh. Repeated subdivision results in meshes that are more and more rounded.
The arbitrary-looking barycenter formula was chosen by Catmull and Clark based on the aesthetic appearance of the resulting surfaces rather than on a mathematical derivation, although they do go to great lengths to rigorously show that the method converges to bicubic B-spline surfaces.
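As an illustrative sketch of the vertex-update step, the function below moves an original vertex of valence "n" to (F + 2R + ("n" − 3)P)/"n", where F is the average of the adjacent new face points, R the average of the adjacent edge midpoints, and P the original position. The sample data describe a cube corner and are for illustration only; the rest of the refinement scheme (face points, edge points, connectivity handling) is not shown.

```python
import numpy as np

def updated_original_point(P, face_points, edge_midpoints):
    """New position of an original vertex: (F + 2R + (n - 3) P) / n, where n is
    the valence, F the average adjacent face point and R the average adjacent
    edge midpoint."""
    n = len(face_points)
    F = np.mean(face_points, axis=0)
    R = np.mean(edge_midpoints, axis=0)
    return (F + 2.0 * R + (n - 3.0) * P) / n

# Illustrative data: the valence-3 corner (1, 1, 1) of a unit cube
P = np.array([1.0, 1.0, 1.0])
face_points = np.array([[0.5, 0.5, 1.0], [0.5, 1.0, 0.5], [1.0, 0.5, 0.5]])     # face centroids
edge_midpoints = np.array([[0.5, 1.0, 1.0], [1.0, 0.5, 1.0], [1.0, 1.0, 0.5]])  # adjacent edges
print(updated_original_point(P, face_points, edge_midpoints))  # about [0.778, 0.778, 0.778]
```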
It can be shown that the limit surface obtained by this refinement process is at least formula_2 at extraordinary vertices and formula_3 everywhere else (when "n" indicates how many derivatives are continuous, we speak of formula_4 continuity). After one iteration, the number of extraordinary points on the surface remains constant.
Exact evaluation.
The limit surface of Catmull–Clark subdivision surfaces can also be evaluated directly, without any recursive refinement. This can be accomplished by means of the technique of Jos Stam (1998). This method reformulates the recursive refinement process into a matrix exponential problem, which can be solved directly by means of matrix diagonalization.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{A+F+M+E}{4}"
},
{
"math_id": 1,
"text": "\\frac{F + 2R + (n-3)P}{n}"
},
{
"math_id": 2,
"text": "\\mathcal{C}^1"
},
{
"math_id": 3,
"text": "\\mathcal{C}^2"
},
{
"math_id": 4,
"text": "\\mathcal{C}^n"
}
]
| https://en.wikipedia.org/wiki?curid=1323068 |
13230920 | Furstenberg's proof of the infinitude of primes | Proof of the infinitude of primes
In mathematics, particularly in number theory, Hillel Furstenberg's proof of the infinitude of primes is a topological proof that the integers contain infinitely many prime numbers. When examined closely, the proof is less a statement about topology than a statement about certain properties of arithmetic sequences. Unlike Euclid's classical proof, Furstenberg's proof is a proof by contradiction. The proof was published in 1955 in the "American Mathematical Monthly" while Furstenberg was still an undergraduate student at Yeshiva University.
Furstenberg's proof.
Define a topology on the integers formula_0, called the evenly spaced integer topology, by declaring a subset "U" ⊆ formula_0 to be an open set if and only if it is a union of arithmetic sequences "S"("a", "b") for "a" ≠ 0, or is empty (which can be seen as a nullary union (empty union) of arithmetic sequences), where
formula_1
Equivalently, "U" is open if and only if for every "x" in "U" there is some non-zero integer "a" such that "S"("a", "x") ⊆ "U". The axioms for a topology are easily verified:
This topology has two notable properties:
formula_2
The only integers that are not integer multiples of prime numbers are −1 and +1, i.e.
formula_3
Now, by the first topological property, the set on the left-hand side cannot be closed. On the other hand, by the second topological property, the sets "S"("p", 0) are closed. So, if there were only finitely many prime numbers, then the set on the right-hand side would be a finite union of closed sets, and hence closed. This would be a contradiction, so there must be infinitely many prime numbers.
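The set identity used in this step can be sanity-checked on any finite window of integers, as in the short sketch below: within the window, the integers lying in some "S"("p", 0) are exactly those other than −1 and +1. The window size is an arbitrary choice.

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

# An integer n lies in S(p, 0) = pZ exactly when the prime p divides n.
primes = primes_up_to(1000)
window = range(-1000, 1001)
print(all(any(n % p == 0 for p in primes) == (n not in (-1, 1)) for n in window))  # True
```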
Topological properties.
The evenly spaced integer topology on formula_4 is the topology induced by the inclusion formula_5, where formula_6 is the profinite integer ring with its profinite topology.
It is homeomorphic to the rational numbers formula_7 with the subspace topology inherited from the real line, which makes it clear that any finite subset of it, such as formula_8, cannot be open.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{Z}"
},
{
"math_id": 1,
"text": "S(a, b) = \\{ a n + b \\mid n \\in \\mathbb{Z} \\} = a \\mathbb{Z} + b. "
},
{
"math_id": 2,
"text": "S(a, b) = \\mathbb{Z} \\setminus \\bigcup_{j = 1}^{a - 1} S(a, b + j)."
},
{
"math_id": 3,
"text": "\\mathbb{Z} \\setminus \\{ -1, + 1 \\} = \\bigcup_{p \\mathrm{\\, prime}} S(p, 0)."
},
{
"math_id": 4,
"text": "\\Z"
},
{
"math_id": 5,
"text": "\\Z\\subset \\hat\\Z"
},
{
"math_id": 6,
"text": "\\hat\\Z"
},
{
"math_id": 7,
"text": "\\mathbb{Q}"
},
{
"math_id": 8,
"text": "\\{-1, +1\\}"
}
]
| https://en.wikipedia.org/wiki?curid=13230920 |
1323240 | Electrohydrodynamics | Study of electrically conducting fluids in the presence of electric fields
Electrohydrodynamics (EHD), also known as electro-fluid-dynamics (EFD) or electrokinetics, is the study of the dynamics of electrically charged fluids. It is a joint domain of electrodynamics and fluid dynamics, mainly focused on fluid motion induced by electric fields. In its simplest form, EHD involves the application of an electric field to a fluid medium, resulting in manipulation of the fluid's flow, form, or properties. These mechanisms arise from the interaction between the electric field and charged particles or polarization effects within the fluid. The generation and movement of charge carriers (ions) in a fluid subjected to an electric field are the underlying physics of all EHD-based technologies.
The electric forces acting on particles consist of the electrostatic (Coulomb) or electrophoretic force (first term in the following equation), the dielectrophoretic force (second term), and the electrostrictive force (third term):
formula_0
This electrical force is then inserted into the Navier–Stokes equations as a body (volumetric) force. EHD covers the following types of particle and fluid transport mechanisms: electrophoresis, electrokinesis, dielectrophoresis, electro-osmosis, and electrorotation. In general, the phenomena relate to the direct conversion of electrical energy into kinetic energy, and "vice versa".
In the first instance, shaped electrostatic fields (ESF's) create hydrostatic pressure (HSP, or motion) in dielectric media. When such media are fluids, a flow is produced. If the dielectric is a vacuum or a solid, no flow is produced. Such flow can be directed against the electrodes, generally to move the electrodes. In such case, the moving structure acts as an electric motor. Practical fields of interest of EHD are the common air ioniser, electrohydrodynamic thrusters and EHD cooling systems.
In the second instance, the converse takes place. A powered flow of medium within a shaped electrostatic field adds energy to the system which is picked up as a potential difference by electrodes. In such case, the structure acts as an electrical generator.
Electrokinesis.
Electrokinesis is the particle or fluid transport produced by an electric field acting on a fluid having a net mobile charge. (See -kinesis for explanation and further uses of the -kinesis suffix.) "Electrokinesis" was first observed by Ferdinand Frederic Reuss in 1808, in the electrophoresis of clay particles. The effect was also noticed and publicized in the 1920s by Thomas Townsend Brown, who called it the Biefeld–Brown effect, although he seems to have misidentified it as an electric field acting on gravity. The flow rate in such a mechanism is linear in the electric field. Electrokinesis is of considerable practical importance in microfluidics, because it offers a way to manipulate and convey fluids in microsystems using only electric fields, with no moving parts.
The force acting on the fluid, is given by the equation
formula_1
where, formula_2 is the resulting force, measured in newtons, formula_3 is the current, measured in amperes, formula_4 is the distance between electrodes, measured in metres, and formula_5 is the ion mobility coefficient of the dielectric fluid, measured in m2/(V·s).
If the electrodes are free to move within the fluid, while keeping their distance fixed from each other, then such a force will actually propel the electrodes with respect to the fluid.
"Electrokinesis" has also been observed in biology, where it was found to cause physical damage to neurons by inciting movement in their membranes. It is discussed in R. J. Elul's "Fixed charge in the cell membrane" (1967).
Water electrokinetics.
In October 2003, Dr. Daniel Kwok, Dr. Larry Kostiuk and two graduate students from the University of Alberta discussed a method of converting hydrodynamic energy to electrical energy by exploiting the natural electrokinetic properties of a liquid such as ordinary tap water, pumping the fluid through tiny micro-channels with a pressure difference. This technology could lead to a practical and clean energy storage device, replacing batteries for devices such as mobile phones or calculators, which would be charged by simply compressing water to high pressure. Pressure would then be released on demand, for the fluid to flow through the micro-channels. When water travels, or streams, over a surface, the ions in the water "rub" against the solid, leaving the surface slightly charged. Kinetic energy from the moving ions would thus be converted to electrical energy. Although the power generated from a single channel is extremely small, millions of parallel micro-channels can be used to increase the power output.
This streaming potential, water-flow phenomenon was discovered in 1859 by German physicist Georg Hermann Quincke.
Electrokinetic instabilities.
The fluid flows in microfluidic and nanofluidic devices are often stable and strongly damped by viscous forces (with Reynolds numbers of order unity or smaller). However, heterogeneous ionic conductivity fields in the presence of applied electric fields can, under certain conditions, generate an unstable flow field owing to electrokinetic instabilities (EKI). Conductivity gradients are prevalent in on-chip electrokinetic processes such as preconcentration methods (e.g. field-amplified sample stacking and isoelectric focusing), multidimensional assays, and systems with poorly specified sample chemistry. The dynamics and periodic morphology of electrokinetic instabilities are similar to those of other systems with Rayleigh–Taylor instabilities. The particular case of a flat-plane geometry with homogeneous ion injection at the bottom side leads to a mathematical frame identical to that of Rayleigh–Bénard convection.
EKIs can be leveraged for rapid mixing, or they can cause undesirable dispersion in sample injection, separation and stacking. These instabilities are caused by a coupling of electric fields and ionic conductivity gradients that results in an electric body force in the bulk liquid, outside the electric double layer, which can generate temporal, convective, and absolute flow instabilities. Electrokinetic flows with conductivity gradients become unstable when the electroviscous stretching and folding of conductivity interfaces grows faster than the dissipative effect of molecular diffusion.
Since these flows are characterized by low velocities and small length scales, the Reynolds number is below 0.01 and the flow is laminar. The onset of instability in these flows is best described by an electric "Rayleigh number".
Misc.
Liquids can be printed at nanoscale by pyro-EHD.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "F_e= \\rho_e \\overrightarrow{E} - {1 \\over 2}\\varepsilon_{0}\\overrightarrow{E}^2\\triangledown\\varepsilon_r + {1 \\over 2}\\varepsilon_{0}\\triangledown\\Bigl(\\overrightarrow{E}^2 \\rho_{f}\\left ( \\frac{\\partial \\varepsilon_{r}}{\\partial \\rho_{f}} \\right ) \\Bigr)"
},
{
"math_id": 1,
"text": "F = \\frac{I d}{k} "
},
{
"math_id": 2,
"text": "F "
},
{
"math_id": 3,
"text": "I "
},
{
"math_id": 4,
"text": "d "
},
{
"math_id": 5,
"text": "k "
}
]
| https://en.wikipedia.org/wiki?curid=1323240 |
13236337 | Di-positronium | Exotic molecule consisting of two atoms of positronium
Di-positronium, or dipositronium, is an exotic molecule consisting of two atoms of positronium. It was predicted to exist in 1946 by John Archibald Wheeler, and subsequently studied theoretically, but was not observed until 2007 in an experiment performed by David Cassidy and Allen Mills at the University of California, Riverside. The researchers made the positronium molecules by firing intense bursts of positrons into a thin film of porous silicon dioxide. Upon slowing down in the silica, the positrons captured ordinary electrons to form positronium atoms. Within the silica, these were long-lived enough to interact, forming molecular di-positronium. Advances in trapping and manipulating positrons, and in spectroscopy techniques, have enabled studies of Ps–Ps interactions. In 2012, Cassidy et al. were able to produce the excited molecular positronium formula_0 angular momentum state.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "L = 1"
}
]
| https://en.wikipedia.org/wiki?curid=13236337 |
1323656 | Mixing study | Type of blood plasma screening test
Mixing studies are tests performed on the blood plasma of patients or test subjects to distinguish factor deficiencies from factor inhibitors, such as lupus anticoagulant, or specific factor inhibitors, such as antibodies directed against factor VIII. Mixing studies are screening tests widely performed in coagulation laboratories. The basic purpose of these tests is to determine the cause of prolongation of the prothrombin time (PT), partial thromboplastin time (PTT), or sometimes the thrombin time (TT). Mixing studies take advantage of the fact that factor levels that are 50 percent of normal should give a normal PT or PTT result.
Test method.
Fresh normal plasma has all the blood coagulation factors with normal levels.
If the problem is a simple factor deficiency, mixing the patient plasma 1:1 with plasma that contains 100% of the normal factor level results in a level ≥50% in the mixture (say the patient has an activity of 0%; the average of 100% and 0% is 50%). The PT or PTT will be normal (the mixing study shows correction). Correction with mixing indicates factor deficiency. Failure to correct with mixing indicates an inhibitor. Performing a thrombin time on the test plasma can provide useful additional information for the interpretation of mixing tests, such as by demonstrating the presence of anticoagulants, hypofibrinogenemia or dysfibrinogenemia.
Adsorbed Plasma and Aged Plasma.
Factor-deficient plasmas (adsorbed plasma, aged plasma, etc.) have been used historically in mixing studies. Plasmas with known factor deficiencies are commercially available but are very expensive, so they have been prepared in the laboratory and used for mixing tests. Adsorbed plasma, or plasma from patients on oral anticoagulants (warfarin, etc.) for a week or more, is deficient in factor II, factor VII, factor IX, and factor X. Plasma from patients on oral anticoagulants (warfarin, etc.) for 48–72 hours is deficient in factor VII. Aged plasma is deficient in factor V and factor VIIIC. Serum is deficient in factors I, V and VIIIC.
Correction of prothrombin time.
Prothrombin time (PT) may be corrected as follows:
Correction of partial thromboplastin time.
Partial thromboplastin time (PTT) may be corrected as follows:
Time-dependent inhibitors.
Some inhibitors are time dependent. In other words, it takes time for the antibody to react with and inactivate the added clotting factor. The clotting test performed immediately after the specimens are mixed may show correction because the antibody has not had time to inactivate its target factor. A test performed after the mixture is incubated for 1 to 2 hours at 37°C will show significant prolongation over the clotting time obtained after immediate mixing. Nonspecific inhibitors like the lupus anticoagulant usually are not time dependent; the immediate mixture will show prolongation. Many specific factor inhibitors are time dependent, and the inhibitor will not be detected unless the test is repeated after incubation (factor VIII inhibitors are notorious for this).
Abnormal coagulation test results.
A common problem is an unexplained increase in the PT and/or PTT. If this is observed, the test should be repeated with a fresh sample. Another consideration is heparin. It is possible that the blood sample was mistakenly drawn through a running line. Interference by heparin can be detected by absorbing the heparin with a resin ("Heparsorb") or by using an enzyme to digest the heparin ("Hepzyme"). Also, the patient's history should be checked, especially with regard to anticoagulant use or liver disease. Provided that the abnormal result is reproduced on a fresh specimen and there is no obvious explanation from the history, a mixing study should be performed. If the mixing study shows correction and no prolongation with incubation, a factor deficiency should be looked for, starting with factors VIII and IX. Vitamin K-dependent and non-vitamin K-dependent factors should be considered to rule out vitamin K deficiency, or accidental or surreptitious warfarin ingestion.
Inhibitor.
If the mixing study fails to correct, then an inhibitor should be suspected. The most common inhibitor is a nonspecific inhibitor such as a lupus anticoagulant. Perform a test to demonstrate a phospholipid-dependent antibody, such as a platelet neutralization procedure. Spontaneous specific inhibitors against clotting factors occur (i.e. not in hemophiliacs), most often against factor VIII. This can occur in patients with systemic lupus erythematosus, monoclonal gammopathies, other malignancies, during pregnancy, and for no apparent reason (idiopathic). These patients can have devastating bleeding. The next step is to identify the specific factor involved and determine how high the titer is. If the patient has a low-titer inhibitor, try to overwhelm it with high doses of the factor. If the patient has a high-titer antibody against factor VIII, try porcine factor VIII, activated prothrombin complex concentrate FEIBA (Factor Eight Inhibitor Bypassing Agent), or NovoSeven to stop the bleeding. Prednisone will often lower the titer over time. Intravenous immunoglobulin has been reported to help as well, but it does not seem to work for hemophiliacs with an inhibitor. Rituximab, cyclophosphamide or other immunosuppressive therapy may be required.
Assessing correction of mixing study.
In order to provide specific cutoffs to distinguish an inhibitor defect from a factor deficiency, the "Rosner index" (index of circulating anticoagulant) and/or the "Chang percentage" (percent correction method) can be used:
formula_0
Results are classified as follows: ≤10 indicates correction, ≥15 indicates the presence of an inhibitor, and 11–15 is classified as "indeterminate".
formula_1
Results are classified as follows: <58% as inhibitor and >70% as correction.
Alternatively, correction into the reference range can be used to define complete correction.
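For illustration, the Rosner index and Chang percentage above can be computed and classified programmatically. The aPTT values below are hypothetical, and the cutoffs are the ones quoted above; this is a sketch, not a validated laboratory tool.

```python
def rosner_index(aptt_mix, aptt_pool, aptt_patient):
    """Index of circulating anticoagulant (Rosner index)."""
    return (aptt_mix - aptt_pool) / aptt_patient * 100.0

def chang_percentage(aptt_patient, aptt_mix, aptt_pool):
    """Percent correction of the 1:1 mix (Chang percentage)."""
    return (aptt_patient - aptt_mix) / (aptt_patient - aptt_pool) * 100.0

# Hypothetical aPTT results, in seconds.
patient, mix, pool = 90.0, 36.0, 30.0

rosner = rosner_index(mix, pool, patient)      # (36 - 30) / 90 * 100 ≈ 6.7
chang = chang_percentage(patient, mix, pool)   # (90 - 36) / (90 - 30) * 100 = 90.0

rosner_call = ("correction" if rosner <= 10
               else "inhibitor" if rosner >= 15 else "indeterminate")
chang_call = ("correction" if chang > 70
              else "inhibitor" if chang < 58 else "indeterminate")

print(f"Rosner index: {rosner:.1f} -> {rosner_call}")
print(f"Chang percentage: {chang:.1f}% -> {chang_call}")
```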
A fourth method is known as Estimated Factor Correction (EFC). This method involves four steps. First, determine the most likely factor suspected to be deficient, based on PT, aPTT, and clinical history. Next, choose the appropriate curve — single factor deficiency, vitamin K-dependent factor deficient, or all factor-deficient. Use this curve to estimate the factor level in the patient sample. Then, predict the factor level and PT or aPTT that will occur after 1:1 mix in case of deficiency. Finally, compare the actual mix results with the predicted results for deficiency.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Rosner\\;index = \\frac{(aPTT\\;of\\;1:1\\;mix) - (aPTT\\;of\\;normal\\;pooled\\;plasma)}{aPTT\\;of\\;nonmixed\\;patient\\;plasma} x 100"
},
{
"math_id": 1,
"text": "Chang\\;percentage = \\frac{(aPTT\\;of\\;nonmixed\\;patient\\;plasma) - (aPTT\\;of\\;1:1\\;mix)}{(aPTT\\;of\\;nonmixed\\;patient\\;plasma) - (aPTT\\;of\\;normal\\;pooled\\;plasma)} x 100"
}
]
| https://en.wikipedia.org/wiki?curid=1323656 |
1323972 | Richard C. Tolman | American physicist (1881–1948)
Richard Chace Tolman (March 4, 1881 – September 5, 1948) was an American mathematical physicist and physical chemist who made many contributions to statistical mechanics. He also made important contributions to theoretical cosmology in the years soon after Einstein's discovery of general relativity. He was a professor of physical chemistry and mathematical physics at the California Institute of Technology (Caltech).
Biography.
Tolman was born in West Newton, Massachusetts and studied chemical engineering at the Massachusetts Institute of Technology, receiving his bachelor's degree in 1903 and PhD in 1910 under A. A. Noyes.
He married Ruth Sherman Tolman in 1924.
In 1912, he conceived of the concept of relativistic mass, writing that "the expression formula_0 is best suited for the mass of a moving body."
In a 1916 experiment with Thomas Dale Stewart, Tolman demonstrated that electricity consists of electrons flowing through a metallic conductor. A by-product of this experiment was a measured value of the mass of the electron. Overall, however, he was primarily known as a theorist.
Tolman was a member of the Technical Alliance in 1919, a forerunner of the Technocracy movement where he helped conduct an energy survey analyzing the possibility of applying science to social and industrial affairs.
Tolman was elected a Fellow of the American Academy of Arts and Sciences in 1922. The same year, he joined the faculty of the California Institute of Technology, where he became professor of physical chemistry and mathematical physics and later dean of the graduate school. One of Tolman's early students at Caltech was the theoretical chemist Linus Pauling, to whom Tolman taught the old quantum theory. Tolman was elected to the United States National Academy of Sciences in 1923.
In 1927, Tolman published a text on statistical mechanics whose background was the old quantum theory of Max Planck, Niels Bohr and Arnold Sommerfeld. Tolman was elected to the American Philosophical Society in 1932. In 1938, he published a new detailed work that covered the application of statistical mechanics to classical and quantum systems. It was the standard work on the subject for many years and remains of interest today.
In the later years of his career, Tolman became increasingly interested in the application of thermodynamics to relativistic systems and cosmology. An important monograph he published in 1934, "Relativity, Thermodynamics, and Cosmology", demonstrated how black-body radiation in an expanding universe cools but remains thermal – a key pointer toward the properties of the cosmic microwave background. Also in this monograph, Tolman was the first person to document and explain how a closed universe could have zero total energy. He explained how all mass energy is positive and all gravitational energy is negative, so that they cancel each other out, leading to a universe of zero energy. His investigation of the oscillatory universe hypothesis, which Alexander Friedmann had proposed in 1922, drew attention to difficulties regarding entropy and resulted in the hypothesis's decline until the late 1960s.
During World War II, Tolman served as scientific advisor to General Leslie Groves on the Manhattan Project. At the time of his death in Pasadena, he was chief advisor to Bernard Baruch, the U.S. representative to the United Nations Atomic Energy Commission.
Each year, the southern California section of the American Chemical Society honors Tolman by awarding its Tolman Medal "in recognition of outstanding contributions to chemistry."
Family.
Tolman's brother was the behavioral psychologist Edward Chace Tolman.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "m_0 \\left(1 - \\frac{v^2}{c^2} \\right)^{-1/2}"
}
]
| https://en.wikipedia.org/wiki?curid=1323972 |
1323985 | Markov random field | Set of random variables
In the domain of physics and probability, a Markov random field (MRF), Markov network or undirected graphical model is a set of random variables having a Markov property described by an undirected graph. In other words, a random field is said to be a Markov random field if it satisfies Markov properties. The concept originates from the Sherrington–Kirkpatrick model.
A Markov network or MRF is similar to a Bayesian network in its representation of dependencies; the differences being that Bayesian networks are directed and acyclic, whereas Markov networks are undirected and may be cyclic. Thus, a Markov network can represent certain dependencies that a Bayesian network cannot (such as cyclic dependencies); on the other hand, it cannot represent certain dependencies that a Bayesian network can (such as induced dependencies). The underlying graph of a Markov random field may be finite or infinite.
When the joint probability density of the random variables is strictly positive, it is also referred to as a Gibbs random field, because, according to the Hammersley–Clifford theorem, it can then be represented by a Gibbs measure for an appropriate (locally defined) energy function. The prototypical Markov random field is the Ising model; indeed, the Markov random field was introduced as the general setting for the Ising model. In the domain of artificial intelligence, a Markov random field is used to model various low- to mid-level tasks in image processing and computer vision.
Definition.
Given an undirected graph formula_0, a set of random variables formula_1 indexed by formula_2 form a Markov random field with respect to formula_3 if they satisfy the local Markov properties:
Pairwise Markov property: Any two non-adjacent variables are conditionally independent given all other variables:
formula_4
Local Markov property: A variable is conditionally independent of all other variables given its neighbors:
formula_5
where formula_6 is the set of neighbors of formula_7, and formula_8 is the closed neighbourhood of formula_7.
Global Markov property: Any two subsets of variables are conditionally independent given a separating subset:
formula_9
where every path from a node in formula_10 to a node in formula_11 passes through formula_12.
The Global Markov property is stronger than the Local Markov property, which in turn is stronger than the Pairwise one. However, the above three Markov properties are equivalent for positive distributions (those that assign only nonzero probabilities to the associated variables).
The relation between the three Markov properties is particularly clear in the following formulation:
* Pairwise: For any formula_13 that are not equal or adjacent, formula_14.
* Local: For any formula_15 and formula_16 not containing or adjacent to formula_17, formula_18.
* Global: For any formula_19 that are disjoint and non-adjacent, formula_20.
Clique factorization.
As the Markov property of an arbitrary probability distribution can be difficult to establish, a commonly used class of Markov random fields are those that can be factorized according to the cliques of the graph.
Given a set of random variables formula_1, let formula_21 be the probability of a particular field configuration formula_22 in formula_23—that is, formula_21 is the probability of finding that the random variables formula_23 take on the particular value formula_22. Because formula_23 is a set, the probability of formula_22 should be understood to be taken with respect to a "joint distribution" of the formula_24.
If this joint density can be factorized over the cliques of formula_3 as
formula_25
then formula_23 forms a Markov random field with respect to formula_3. Here, formula_26 is the set of cliques of formula_3. The definition is equivalent if only maximal cliques are used. The functions formula_27 are sometimes referred to as "factor potentials" or "clique potentials". Note, however, conflicting terminology is in use: the word "potential" is often applied to the logarithm of formula_27. This is because, in statistical mechanics, formula_28 has a direct interpretation as the potential energy of a configuration formula_29.
Some MRFs do not factorize: a simple example can be constructed on a cycle of 4 nodes with some infinite energies, i.e. configurations of zero probability, even if one, more appropriately, allows the infinite energies to act on the complete graph on formula_2.
MRFs factorize if at least one of the following conditions is fulfilled:
* the joint density is strictly positive (by the Hammersley–Clifford theorem);
* the graph is chordal (by equivalence to a Bayesian network).
When such a factorization does exist, it is possible to construct a factor graph for the network.
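A minimal sketch of clique factorization for a hypothetical three-node chain A–B–C of binary variables, with one strictly positive potential per edge clique (the potential values are arbitrary choices for the sketch):

```python
from itertools import product

# Edge cliques of the chain A - B - C and their strictly positive potentials.
# phi[(xi, xj)] is the factor value for that clique configuration.
phi_AB = {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0}
phi_BC = {(0, 0): 1.5, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 2.5}

def unnormalized(x):
    a, b, c = x
    return phi_AB[(a, b)] * phi_BC[(b, c)]

# Normalizing constant by brute-force enumeration of all 2^3 configurations.
Z = sum(unnormalized(x) for x in product((0, 1), repeat=3))

def prob(x):
    return unnormalized(x) / Z

print(sum(prob(x) for x in product((0, 1), repeat=3)))  # -> 1.0
print(prob((1, 1, 1)))  # probability of the all-ones configuration
```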
Exponential family.
Any positive Markov random field can be written as exponential family in canonical form with feature functions formula_30 such that the full-joint distribution can be written as
formula_31
where the notation
formula_32
is simply a dot product over field configurations, and "Z" is the partition function:
formula_33
Here, formula_34 denotes the set of all possible assignments of values to all the network's random variables. Usually, the feature functions formula_35 are defined such that they are indicators of the clique's configuration, "i.e." formula_36 if formula_37 corresponds to the "i"-th possible configuration of the "k"-th clique and 0 otherwise. This model is equivalent to the clique factorization model given above, if formula_38 is the cardinality of the clique, and the weight of a feature formula_35 corresponds to the logarithm of the corresponding clique factor, "i.e." formula_39, where formula_40 is the "i"-th possible configuration of the "k"-th clique, "i.e." the "i"-th value in the domain of the clique formula_41.
The probability "P" is often called the Gibbs measure. This expression of a Markov field as a logistic model is only possible if all clique factors are non-zero, "i.e." if none of the elements of formula_34 are assigned a probability of 0. This allows techniques from matrix algebra to be applied, "e.g." that the trace of a matrix is log of the determinant, with the matrix representation of a graph arising from the graph's incidence matrix.
The importance of the partition function "Z" is that many concepts from statistical mechanics, such as entropy, directly generalize to the case of Markov networks, and an "intuitive" understanding can thereby be gained. In addition, the partition function allows variational methods to be applied to the solution of the problem: one can attach a driving force to one or more of the random variables, and explore the reaction of the network in response to this perturbation. Thus, for example, one may add a driving term "J""v", for each vertex "v" of the graph, to the partition function to get:
formula_42
Formally differentiating with respect to "J""v" gives the expectation value of the random variable "X""v" associated with the vertex "v":
formula_43
Correlation functions are computed likewise; the two-point correlation is:
formula_44
Unfortunately, though the likelihood of a logistic Markov network is convex, evaluating the likelihood or gradient of the likelihood of a model requires inference in the model, which is generally computationally infeasible (see 'Inference' below).
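The log-linear form above can be made concrete with a brute-force sketch for a tiny binary network; the edges, features and weights below are arbitrary choices, and direct enumeration of the partition function is only feasible for very small models:

```python
import math
from itertools import product

# Toy pairwise log-linear model on three binary variables with edges (0,1) and (1,2).
# Feature f_k(x) is 1 when the endpoints of edge k agree; weights w_k are arbitrary.
edges = [(0, 1), (1, 2)]
weights = [0.8, 1.2]

def log_weight(x):
    return sum(w for (i, j), w in zip(edges, weights) if x[i] == x[j])

states = list(product((0, 1), repeat=3))
Z = sum(math.exp(log_weight(x)) for x in states)        # partition function

def p(x):
    return math.exp(log_weight(x)) / Z

# Expectation of the middle variable X_1, the quantity obtained formally by
# differentiating log Z with respect to a driving term J_1.
E_x1 = sum(p(x) * x[1] for x in states)
print(Z, E_x1)
```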
Examples.
Gaussian.
A multivariate normal distribution forms a Markov random field with respect to a graph formula_0 if the missing edges correspond to zeros on the precision matrix (the inverse covariance matrix):
formula_45
such that
formula_46
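A short numerical sketch of this correspondence, using a hand-picked precision matrix: the zero entries indicate which pairs of variables are conditionally independent given the rest, i.e. which edges are absent from the graph.

```python
import numpy as np

# Hand-picked precision matrix (inverse covariance) for (X1, X2, X3).
# The zeros at positions (1,3) and (3,1) mean X1 and X3 are conditionally
# independent given X2, so the edge {1,3} is absent: the graph is X1 - X2 - X3.
precision = np.array([[ 2.0, -1.0,  0.0],
                      [-1.0,  2.0, -1.0],
                      [ 0.0, -1.0,  2.0]])

covariance = np.linalg.inv(precision)
print(covariance)                            # generally has no zero entries
print(np.linalg.inv(covariance).round(6))    # recovers the sparse precision matrix
```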
Inference.
As in a Bayesian network, one may calculate the conditional distribution of a set of nodes formula_47 given values to another set of nodes formula_48 in the Markov random field by summing over all possible assignments to formula_49; this is called exact inference. However, exact inference is a #P-complete problem, and thus computationally intractable in the general case. Approximation techniques such as Markov chain Monte Carlo and loopy belief propagation are often more feasible in practice. Some particular subclasses of MRFs, such as trees (see Chow–Liu tree), have polynomial-time inference algorithms; discovering such subclasses is an active research topic. There are also subclasses of MRFs that permit efficient MAP, or most likely assignment, inference; examples of these include associative networks. Another interesting sub-class is the one of decomposable models (when the graph is chordal): having a closed-form for the MLE, it is possible to discover a consistent structure for hundreds of variables.
Conditional random fields.
One notable variant of a Markov random field is a conditional random field, in which each random variable may also be conditioned upon a set of global observations formula_50. In this model, each function formula_51 is a mapping from all assignments to both the clique "k" and the observations formula_50 to the nonnegative real numbers. This form of the Markov network may be more appropriate for producing discriminative classifiers, which do not model the distribution over the observations. CRFs were proposed by John D. Lafferty, Andrew McCallum and Fernando C.N. Pereira in 2001.
Varied applications.
Markov random fields find application in a variety of fields, ranging from computer graphics to computer vision, machine learning or computational biology, and information retrieval. MRFs are used in image processing to generate textures, as they provide flexible and stochastic image models. In image modelling, the task is to find a suitable intensity distribution of a given image, where suitability depends on the kind of task; MRFs are flexible enough to be used for image and texture synthesis, image compression and restoration, image segmentation, 3D image inference from 2D images, image registration, texture synthesis, super-resolution, stereo matching and information retrieval. They can be used to solve various computer vision problems which can be posed as energy minimization problems, or problems where different regions have to be distinguished using a set of discriminating features, within a Markov random field framework, to predict the category of the region. Markov random fields were a generalization over the Ising model and have, since then, been used widely in combinatorial optimizations and networks.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "G=(V,E)"
},
{
"math_id": 1,
"text": "X = (X_v)_{v\\in V}"
},
{
"math_id": 2,
"text": "V"
},
{
"math_id": 3,
"text": "G"
},
{
"math_id": 4,
"text": "X_u \\perp\\!\\!\\!\\perp X_v \\mid X_{V \\smallsetminus \\{u,v\\}} "
},
{
"math_id": 5,
"text": "X_v \\perp\\!\\!\\!\\perp X_{V\\smallsetminus \\operatorname{N}[v]} \\mid X_{\\operatorname{N}(v)}"
},
{
"math_id": 6,
"text": "\\operatorname{N}(v)"
},
{
"math_id": 7,
"text": "v"
},
{
"math_id": 8,
"text": "\\operatorname{N}[v] = v \\cup \\operatorname{N}(v)"
},
{
"math_id": 9,
"text": "X_A \\perp\\!\\!\\!\\perp X_B \\mid X_S"
},
{
"math_id": 10,
"text": "A"
},
{
"math_id": 11,
"text": "B"
},
{
"math_id": 12,
"text": "S"
},
{
"math_id": 13,
"text": "i, j \\in V"
},
{
"math_id": 14,
"text": "X_i \\perp\\!\\!\\!\\perp X_j | X_{V \\smallsetminus \\{i, j\\}}"
},
{
"math_id": 15,
"text": "i\\in V"
},
{
"math_id": 16,
"text": "J\\subset V"
},
{
"math_id": 17,
"text": "i"
},
{
"math_id": 18,
"text": "X_i \\perp\\!\\!\\!\\perp X_J | X_{V \\smallsetminus (\\{i\\}\\cup J)}"
},
{
"math_id": 19,
"text": "I, J\\subset V"
},
{
"math_id": 20,
"text": "X_I \\perp\\!\\!\\!\\perp X_J | X_{V \\smallsetminus (I\\cup J)}"
},
{
"math_id": 21,
"text": "P(X=x)"
},
{
"math_id": 22,
"text": "x"
},
{
"math_id": 23,
"text": "X"
},
{
"math_id": 24,
"text": "X_v"
},
{
"math_id": 25,
"text": "P(X=x) = \\prod_{C \\in \\operatorname{cl}(G)} \\varphi_C (x_C) "
},
{
"math_id": 26,
"text": "\\operatorname{cl}(G)"
},
{
"math_id": 27,
"text": "\\varphi_C"
},
{
"math_id": 28,
"text": "\\log(\\varphi_C)"
},
{
"math_id": 29,
"text": "x_C"
},
{
"math_id": 30,
"text": "f_k"
},
{
"math_id": 31,
"text": " P(X=x) = \\frac{1}{Z} \\exp \\left( \\sum_{k} w_k^{\\top} f_k (x_{ \\{ k \\}}) \\right)"
},
{
"math_id": 32,
"text": " w_k^{\\top} f_k (x_{ \\{ k \\}}) = \\sum_{i=1}^{N_k} w_{k,i} \\cdot f_{k,i}(x_{\\{k\\}})"
},
{
"math_id": 33,
"text": " Z = \\sum_{x \\in \\mathcal{X}} \\exp \\left(\\sum_{k} w_k^{\\top} f_k(x_{ \\{ k \\} })\\right)."
},
{
"math_id": 34,
"text": "\\mathcal{X}"
},
{
"math_id": 35,
"text": "f_{k,i}"
},
{
"math_id": 36,
"text": "f_{k,i}(x_{\\{k\\}}) = 1"
},
{
"math_id": 37,
"text": "x_{\\{k\\}}"
},
{
"math_id": 38,
"text": "N_k=|\\operatorname{dom}(C_k)|"
},
{
"math_id": 39,
"text": "w_{k,i} = \\log \\varphi(c_{k,i})"
},
{
"math_id": 40,
"text": "c_{k,i}"
},
{
"math_id": 41,
"text": "C_k"
},
{
"math_id": 42,
"text": " Z[J] = \\sum_{x \\in \\mathcal{X}} \\exp \\left(\\sum_{k} w_k^{\\top} f_k(x_{ \\{ k \\} }) + \\sum_v J_v x_v\\right)"
},
{
"math_id": 43,
"text": "E[X_v] = \\frac{1}{Z} \\left.\\frac{\\partial Z[J]}{\\partial J_v}\\right|_{J_v=0}."
},
{
"math_id": 44,
"text": "C[X_u, X_v] = \\frac{1}{Z} \\left.\\frac{\\partial^2 Z[J]}{\\partial J_u \\,\\partial J_v}\\right|_{J_u=0, J_v=0}."
},
{
"math_id": 45,
"text": "X=(X_v)_{v\\in V} \\sim \\mathcal N (\\boldsymbol \\mu, \\Sigma)\n"
},
{
"math_id": 46,
"text": "(\\Sigma^{-1})_{uv} =0 \\quad \\text{iff} \\quad \\{u,v\\} \\notin E ."
},
{
"math_id": 47,
"text": " V' = \\{ v_1 ,\\ldots, v_i \\} "
},
{
"math_id": 48,
"text": " W' = \\{ w_1 ,\\ldots, w_j \\} "
},
{
"math_id": 49,
"text": "u \\notin V',W'"
},
{
"math_id": 50,
"text": "o"
},
{
"math_id": 51,
"text": "\\varphi_k"
}
]
| https://en.wikipedia.org/wiki?curid=1323985 |
1324357 | Yukawa interaction | Type of interaction between scalars and fermions
In particle physics, Yukawa's interaction or Yukawa coupling, named after Hideki Yukawa, is an interaction between particles according to the Yukawa potential. Specifically, it is between a scalar field (or pseudoscalar field) ϕ and a Dirac field ψ of the type
<templatestyles src="Block indent/styles.css"/> formula_0 (scalar) or formula_1 (pseudoscalar).
The Yukawa interaction was developed to model the strong force between hadrons. A Yukawa interaction is thus used to describe the nuclear force between nucleons mediated by pions (which are pseudoscalar mesons).
A Yukawa interaction is also used in the Standard Model to describe the coupling between the Higgs field and massless quark and lepton fields (i.e., the fundamental fermion particles). Through spontaneous symmetry breaking, these fermions acquire a mass proportional to the vacuum expectation value of the Higgs field. This Higgs-fermion coupling was first described by Steven Weinberg in 1967 to model lepton masses.
Classical potential.
If two fermions interact through a Yukawa interaction mediated by a Yukawa particle of mass formula_2, the potential between the two particles, known as the "Yukawa potential", will be:
formula_3
which is the same as a Coulomb potential except for the sign and the exponential factor. The sign will make the interaction attractive between all particles (the electromagnetic interaction is repulsive for same electrical charge sign particles). This is explained by the fact that the Yukawa particle has spin zero and even spin always results in an attractive potential. (It is a non-trivial result of quantum field theory that the exchange of even-spin bosons like the pion (spin 0, Yukawa force) or the graviton (spin 2, gravity) results in forces always attractive, while odd-spin bosons like the gluons (spin 1, strong interaction), the photon (spin 1, electromagnetic force) or the rho meson (spin 1, Yukawa-like interaction) yields a force that is attractive between opposite charge and repulsive between like-charge.) The negative sign in the exponential gives the interaction a finite effective range, so that particles at great distances will hardly interact any longer (interaction forces fall off exponentially with increasing separation).
As for other forces, the form of the Yukawa potential has a geometrical interpretation in terms of the field line picture introduced by Faraday: the 1/r part results from the dilution of the field line flux in space. The force is proportional to the number of field lines crossing an elementary surface. Since the field lines are emitted isotropically from the force source, and since the distance r between the elementary surface and the source varies the apparent size of the surface (the solid angle) as 1/r², the force also follows a 1/r² dependence. This is equivalent to the 1/r part of the potential. In addition, the exchanged mesons are unstable and have a finite lifetime. The disappearance (radioactive decay) of the mesons causes a reduction of the flux through the surface, which results in the additional exponential factor formula_4 of the Yukawa potential. Massless particles such as photons are stable and thus yield only 1/r potentials. (Note however that other massless particles such as gluons or gravitons do not generally yield 1/r potentials because they interact with each other, distorting their field pattern. When this self-interaction is negligible, such as in weak-field gravity (Newtonian gravitation) or for very short distances for the strong interaction (asymptotic freedom), the 1/r potential is restored.)
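A small numerical sketch of the potential above, in arbitrary illustrative units for the coupling g and the mass parameter μ; the point is only that the exponential factor suppresses the Yukawa potential beyond distances of order 1/μ relative to the pure 1/r form.

```python
import math

def yukawa_potential(r, g=1.0, mu=1.0):
    """V(r) = -(g^2 / (4*pi)) * exp(-mu * r) / r, in illustrative units."""
    return -(g ** 2 / (4.0 * math.pi)) * math.exp(-mu * r) / r

def coulomb_like(r, g=1.0):
    """Same prefactor without the exponential cutoff, for comparison."""
    return -(g ** 2 / (4.0 * math.pi)) / r

for r in (0.1, 1.0, 3.0, 10.0):
    print(f"r = {r:5.1f}  Yukawa = {yukawa_potential(r): .4e}  1/r form = {coulomb_like(r): .4e}")
```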
The action.
The Yukawa interaction is an interaction between a scalar field (or pseudoscalar field) ϕ and a Dirac field ψ of the type
<templatestyles src="Block indent/styles.css"/> formula_5 (scalar) or formula_6 (pseudoscalar).
The action for a meson field formula_7 interacting with a Dirac baryon field formula_8 is
formula_9
where the integration is performed over n dimensions; for typical four-dimensional spacetime "n" = 4, and formula_10
The meson Lagrangian is given by
formula_11
Here, formula_12 is a self-interaction term. For a free-field massive meson, one would have formula_13 where formula_2 is the mass for the meson. For a (renormalizable, polynomial) self-interacting field, one will have formula_14 where λ is a coupling constant. This potential is explored in detail in the article on the quartic interaction.
The free-field Dirac Lagrangian is given by
formula_15
where m is the real-valued, positive mass of the fermion.
The Yukawa interaction term is
formula_16
where g is the (real) coupling constant for scalar mesons and
formula_17
for pseudoscalar mesons. Putting it all together one can write the above more explicitly as
formula_18
Yukawa coupling to the Higgs in the Standard Model.
A Yukawa coupling term to the Higgs field effecting spontaneous symmetry breaking in the Standard Model is responsible for fermion masses in a symmetric manner.
Suppose that the potential formula_12 has its minimum, not at formula_19 but at some non-zero value formula_20 This can happen, for example, with a potential form such as formula_21. In this case, the Lagrangian exhibits spontaneous symmetry breaking. This is because the non-zero value of the formula_22 field, when operating on the vacuum, has a non-zero vacuum expectation value of formula_23
In the Standard Model, this non-zero expectation is responsible for the fermion masses despite the chiral symmetry of the model apparently excluding them.
To exhibit the mass term, the action can be re-expressed in terms of the derived field formula_24 where formula_25 is constructed to be independent of position (a constant). This means that the Yukawa term includes a component
formula_26
and, since both g and formula_27 are constants, the term presents as a mass term for the fermion with equivalent mass formula_28 This mechanism is the means by which spontaneous symmetry breaking gives mass to fermions. The scalar field formula_29 is known as the Higgs field.
The Yukawa coupling for any given fermion in the Standard Model is an input to the theory. The ultimate reason for these couplings is not known: it would be something that a better, deeper theory should explain.
Majorana form.
It is also possible to have a Yukawa interaction between a scalar and a Majorana field. In fact, the Yukawa interaction involving a scalar and a Dirac spinor can be thought of as a Yukawa interaction involving a scalar with two Majorana spinors of the same mass. Broken out in terms of the two chiral Majorana spinors, one has
formula_30
where g is a complex coupling constant, m is a complex number, and n is the number of dimensions, as above.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "~ V \\approx g \\, \\bar\\psi \\, \\phi \\, \\psi "
},
{
"math_id": 1,
"text": " g \\, \\bar\\psi \\, i \\,\\gamma^5 \\, \\phi \\, \\psi "
},
{
"math_id": 2,
"text": "\\mu"
},
{
"math_id": 3,
"text": "V(r) = -\\frac{g^2}{\\,4\\pi\\,} \\, \\frac{1}{\\,r\\,} \\, e^{-\\mu r}"
},
{
"math_id": 4,
"text": "~e^{-\\mu r}~"
},
{
"math_id": 5,
"text": " V \\approx g\\,\\bar\\psi \\,\\phi \\,\\psi "
},
{
"math_id": 6,
"text": " g \\,\\bar\\psi \\,i\\,\\gamma^5 \\,\\phi \\,\\psi "
},
{
"math_id": 7,
"text": "\\phi"
},
{
"math_id": 8,
"text": "\\psi"
},
{
"math_id": 9,
"text": "S[\\phi,\\psi]=\\int \\left[ \\,\n\\mathcal{L}_\\mathrm{meson}(\\phi) +\n\\mathcal{L}_\\mathrm{Dirac}(\\psi) +\n\\mathcal{L}_\\mathrm{Yukawa}(\\phi,\\psi)\n\\, \\right] \\mathrm{d}^{n}x\n"
},
{
"math_id": 10,
"text": "\\mathrm{d}^{4}x \\equiv \\mathrm{d}x_1 \\, \\mathrm{d}x_2 \\, \\mathrm{d}x_3 \\, \\mathrm{d}x_4 ~."
},
{
"math_id": 11,
"text": "\\mathcal{L}_\\mathrm{meson}(\\phi) = \n\\frac{1}{2}\\partial^\\mu \\phi \\; \\partial_\\mu \\phi - V(\\phi)~."
},
{
"math_id": 12,
"text": "~V(\\phi)~"
},
{
"math_id": 13,
"text": "~V(\\phi)=\\frac{1}{2}\\,\\mu^2\\,\\phi^2~"
},
{
"math_id": 14,
"text": "V(\\phi) = \\frac{1}{2}\\,\\mu^2\\,\\phi^2 + \\lambda\\,\\phi^4"
},
{
"math_id": 15,
"text": "\\mathcal{L}_\\mathrm{Dirac}(\\psi) = \n\\bar{\\psi}\\,\\left( i\\,\\partial\\!\\!\\!/ - m \\right)\\,\\psi "
},
{
"math_id": 16,
"text": "\\mathcal{L}_\\mathrm{Yukawa}(\\phi,\\psi) = -g\\,\\bar\\psi \\,\\phi \\,\\psi"
},
{
"math_id": 17,
"text": "\\mathcal{L}_\\mathrm{Yukawa}(\\phi,\\psi) = -g\\,\\bar\\psi \\,i \\,\\gamma^5 \\,\\phi \\,\\psi"
},
{
"math_id": 18,
"text": "S[\\phi,\\psi] = \\int \\left[ \\tfrac{1}{2} \\, \\partial^\\mu \\phi \\; \\partial_\\mu \\phi - V(\\phi) + \\bar{\\psi} \\, \\left( i\\, \\partial\\!\\!\\!/ - m \\right) \\, \\psi - g \\, \\bar{\\psi} \\, \\phi \\,\\psi \\, \\right] \\mathrm{d}^{n}x ~."
},
{
"math_id": 19,
"text": "~\\phi = 0~,"
},
{
"math_id": 20,
"text": "~\\phi_0~."
},
{
"math_id": 21,
"text": "~V(\\phi) = \\lambda\\,\\phi^4~ - \\mu^2\\,\\phi^2 "
},
{
"math_id": 22,
"text": "~\\phi~"
},
{
"math_id": 23,
"text": "~\\phi~."
},
{
"math_id": 24,
"text": " \\phi' = \\phi - \\phi_0~,"
},
{
"math_id": 25,
"text": "~\\phi_0~"
},
{
"math_id": 26,
"text": "~g \\, \\phi_0 \\, \\bar\\psi \\, \\psi~"
},
{
"math_id": 27,
"text": "\\phi_0"
},
{
"math_id": 28,
"text": "~g\\,\\phi_0~."
},
{
"math_id": 29,
"text": "\\phi'~"
},
{
"math_id": 30,
"text": "S[\\phi,\\chi]=\\int \\left[\\,\\frac{1}{2}\\,\\partial^\\mu\\phi \\; \\partial_\\mu \\phi - V(\\phi) + \\chi^\\dagger \\, i \\, \\bar{\\sigma}\\,\\cdot\\,\\partial\\chi + \\frac{i}{2}\\,(m + g \\, \\phi)\\,\\chi^T \\,\\sigma^2 \\,\\chi - \\frac{i}{2}\\,(m + g \\,\\phi)^* \\, \\chi^\\dagger \\,\\sigma^2 \\, \\chi^*\\,\\right] \\mathrm{d}^{n}x"
}
]
| https://en.wikipedia.org/wiki?curid=1324357 |
13245649 | Bubble point |
In thermodynamics, the bubble point is the temperature (at a given pressure) where the first bubble of vapor is formed when heating a liquid consisting of two or more components. Given that vapor will probably have a different composition than the liquid, the bubble point (along with the dew point) at different compositions are useful data when designing distillation systems.
For a single component the bubble point and the dew point are the same and are referred to as the boiling point.
Calculating the bubble point.
At the bubble point, the following relationship holds:
formula_0
where
formula_1.
K is the "distribution coefficient" or "K factor", defined as the ratio of mole fraction in the vapor phase formula_2 to the mole fraction in the liquid phase formula_3 at equilibrium.
When Raoult's law and Dalton's law hold for the mixture, the K factor is defined as the ratio of the vapor pressure to the total pressure of the system:
formula_4
Given either of formula_5 or formula_6 and either the temperature or pressure of a two-component system, calculations can be performed to determine the unknown information.
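As a sketch of such a calculation under Raoult's and Dalton's laws: given the liquid composition and the pure-component vapor pressures at the temperature of interest (the numbers below are made up for illustration), the bubble-point pressure follows from the condition formula_0, and the vapor composition from the K factors.

```python
# Hypothetical binary mixture: liquid mole fractions and pure-component
# vapor pressures (kPa) at the temperature of interest. Values are illustrative.
x = [0.4, 0.6]          # liquid mole fractions, summing to 1
p_vap = [150.0, 60.0]   # pure-component vapor pressures P'_i in kPa

# With K_i = P'_i / P, the bubble-point condition sum(K_i * x_i) = 1
# gives the bubble-point pressure P = sum(x_i * P'_i).
p_bubble = sum(xi * pi for xi, pi in zip(x, p_vap))

# Vapor composition in equilibrium with the liquid at the bubble point.
y = [xi * pi / p_bubble for xi, pi in zip(x, p_vap)]

print(p_bubble)    # 0.4*150 + 0.6*60 = 96.0 kPa
print(y, sum(y))   # [0.625, 0.375], summing to 1 at the bubble point
```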
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sum_{i=1}^{N_c} y_i = \\sum_{i=1}^{N_c} K_i x_i = 1"
},
{
"math_id": 1,
"text": "K_i \\equiv \\frac{y_{ie}}{x_{ie}}"
},
{
"math_id": 2,
"text": "\\big(y_{ie}\\big)"
},
{
"math_id": 3,
"text": "\\big(x_{ie}\\big)"
},
{
"math_id": 4,
"text": "K_i = \\frac{P'_i}{P}"
},
{
"math_id": 5,
"text": "x_i"
},
{
"math_id": 6,
"text": "y_i"
}
]
| https://en.wikipedia.org/wiki?curid=13245649 |
1324565 | DuPont analysis | Expression which breaks ROE (return on equity) into three parts
DuPont analysis (also known as the DuPont identity, DuPont equation, DuPont framework, DuPont model, DuPont method or DuPont system) is a tool used in financial analysis, where return on equity (ROE) is separated into its component parts.
Useful in several contexts, this "decomposition" of ROE allows financial managers to focus on the key metrics of financial performance individually, and thereby to identify strengths and weaknesses within the company that should be addressed. Similarly, it allows investors to compare the operational efficiency of two comparable firms.
The name derives from the DuPont company, which began using this formula in the 1920s. A DuPont explosives salesman, Donaldson Brown, submitted an internal efficiency report to his superiors in 1912 that contained the formula.
Basic formula.
The DuPont analysis breaks down ROE into three component parts, which may then be managed individually:
ROE = (Profit margin)×(Asset turnover)×(Equity multiplier) = (Net income/Sales)×(Sales/Average total assets)×(Average total assets/Average equity) = Net income/Average equity
Or
ROE = (Net income/Average total assets)×(Average total assets/Average equity) = ROA×(Equity multiplier)
Or
ROE = ROS×AT×Leverage = ROA×Leverage, where ROA = ROS×AT
ROE analysis.
The DuPont analysis breaks down ROE (that is, the returns that investors receive from a single dollar of equity) into three distinct elements. This analysis enables the manager or analyst to understand the source of superior (or inferior) return by comparison with companies in similar industries (or between industries). See for further context.
The DuPont analysis is less useful for industries such as investment banking, in which the underlying elements are not meaningful (see related discussion: ). Variations of the DuPont analysis have been developed for industries where the elements are weakly meaningful, for example:
High margin industries.
Some industries, such as the fashion industry, may derive a substantial portion of their income from selling at a higher margin, rather than higher sales. For high-end fashion brands, increasing sales without sacrificing margin may be critical. The DuPont analysis allows analysts to determine which of the elements is dominant in any change of ROE.
High turnover industries.
Certain types of retail operations, particularly stores, may have very low profit margins on sales and relatively moderate leverage. Groceries, for example, may have very high turnover, selling a significant multiple of their assets per year. The ROE of such firms may be particularly dependent on performance of this metric, and hence asset turnover may be studied extremely carefully for signs of under- or over-performance. For example, same-store sales of many retailers is considered important as an indication that the firm is deriving greater profits from existing stores (rather than showing improved performance by continually opening stores).
High leverage industries.
Some sectors, such as the financial sector, rely on high leverage to generate acceptable ROE. Other industries would see high levels of leverage as unacceptably risky. DuPont analysis enables third parties that rely primarily on their financial statements to compare leverage among similar companies.
ROA and ROE ratio.
The return on assets (ROA) ratio developed by DuPont for its own use is now used by many firms to evaluate how effectively assets are used. It measures the combined effects of profit margins and asset turnover.
formula_0
The return on equity (ROE) ratio is a measure of the rate of return to stockholders. Decomposing the ROE into various factors influencing company performance is often called the DuPont system.
formula_1
Where
* Net Income = net profit after taxes
* Equity = shareholders' equity
* EBIT = Earnings before interest and taxes
* Pretax Income is often reported as Earnings Before Taxes or EBT
This decomposition presents various ratios used in fundamental analysis.
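A short sketch of the five-factor decomposition above, using made-up financial statement figures; the product of the five factors reproduces the directly computed ROE.

```python
# Hypothetical financial statement figures, all in the same currency unit.
net_income = 120.0
pretax_income = 160.0
ebit = 200.0
revenue = 1000.0
avg_total_assets = 800.0
avg_total_equity = 400.0

tax_burden = net_income / pretax_income          # Net Income / Pretax Income
interest_burden = pretax_income / ebit           # Pretax Income / EBIT
operating_margin = ebit / revenue                # EBIT / Revenue
asset_turnover = revenue / avg_total_assets      # Revenue / Average Total Assets
leverage = avg_total_assets / avg_total_equity   # Average Total Assets / Average Total Equity

roe_decomposed = (tax_burden * interest_burden * operating_margin
                  * asset_turnover * leverage)
roe_direct = net_income / avg_total_equity

print(round(roe_decomposed, 4), round(roe_direct, 4))  # both 0.3
```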
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{ROA} = \\frac{\\text{Net Income}}{\\text{Revenue}} \\times \\frac{\\text{Revenue}}{\\text{Average Total Assets}} = \\frac{\\text{Net income}}{\\text{Average Total Assets}}"
},
{
"math_id": 1,
"text": "\\text{ROE} = \\frac{\\text{Net Income}}{\\text{Average Total Equity}} = \\frac{\\text{Net Income}}{\\text{Pretax Income}} \\times \\frac{\\text{Pretax Income}}{\\text{EBIT}} \\times \\frac{\\text{EBIT}}{\\text{Revenue}} \\times \\frac{\\text{Revenue}}{\\text{Average Total Assets}} \\times \\frac{\\text{Average Total Assets}}{\\text{Average Total Equity}} "
}
]
| https://en.wikipedia.org/wiki?curid=1324565 |
1324664 | Color rendering index | Measure of ability of a light source to reproduce colors in comparison with a standard light source
A color rendering index (CRI) is a quantitative measure of the ability of a light source to reveal the colors of various objects faithfully in comparison with a natural or standard light source.
"Color rendering", as defined by the International Commission on Illumination (CIE), is the effect of an illuminant on the color appearance of objects by conscious or subconscious comparison with their color appearance under a reference or standard illuminant.
The CRI of a light source does not indicate the apparent color of the light source; that information is given by the correlated color temperature (CCT). The CRI is determined by the light source's spectrum. An incandescent lamp has a continuous spectrum, whereas a fluorescent lamp has a discrete line spectrum; this implies that the incandescent lamp has the higher CRI.
The value often quoted as "CRI" on commercially available lighting products is properly called the CIE Ra value, "CRI" being a general term and CIE Ra being the international standard color rendering index.
Numerically, the highest possible CIE Ra value is 100; it would be given only to a source whose spectrum is identical to the spectrum of daylight, very close to that of a black body (incandescent lamps are effectively black bodies). The value can drop to negative numbers for some light sources. Low-pressure sodium lighting has a negative CRI; fluorescent lights range from about 50 for the basic types, up to about 98 for the best multi-phosphor type. Typical white-color LEDs have a CRI of 80 or more, while some manufacturers claim that their LEDs achieve a CRI of up to 98.
CIE Ra's ability to predict color appearance has been criticized in favor of measures based on color appearance models, such as CIECAM02 and for daylight simulators, the CIE metamerism index. CRI is not a good indicator for use in visual assessment of light sources, especially for sources below 5000 kelvin (K). New standards, such as the IES TM-30, resolve these issues and have begun replacing the usage of CRI among professional lighting designers. However, CRI is still common among household lighting products.
History.
Researchers use daylight as the benchmark to which to compare color rendering of electric lights. In 1948, daylight was described as the ideal source of illumination for good color rendering because "it (daylight) displays (1) a great variety of colors, (2) makes it easy to distinguish slight shades of color, and (3) the colors of objects around us obviously look natural".
Around the middle of the 20th century, color scientists took an interest in assessing the ability of artificial lights to accurately reproduce colors. European researchers attempted to describe illuminants by measuring the spectral power distribution (SPD) in "representative" spectral bands, whereas their North American counterparts studied the colorimetric effect of the illuminants on reference objects.
The CIE assembled a committee to study the matter and accepted the proposal to use the latter approach, which has the virtue of not needing spectrophotometry, with a set of Munsell samples. Eight samples of varying hue would be alternately lit with two illuminants, and the color appearance compared. Since no color appearance model existed at the time, it was decided to base the evaluation on color differences in a suitable color space, CIEUVW. In 1931, the CIE adopted the first formal system of colorimetry, which is based on the trichromatic nature of the human visual system. CRI is based upon this system of colorimetry.
To deal with the problem of having to compare light sources of different correlated color temperatures (CCT), the CIE settled on using a reference black body with the same color temperature for lamps with a CCT of under 5000 K, or a phase of CIE standard illuminant D (daylight) otherwise. This presented a continuous range of color temperatures to choose a reference from. Any chromaticity difference between the source and reference illuminants was to be abridged with a von Kries-type chromatic adaptation transform. There are two extant versions of the CRI: the more commonly used Ra of (actually from 1974) and R96a of .
Test method.
The CRI is calculated by comparing the color rendering of the test source to that of a "perfect" source, which is a black body radiator for sources with correlated color temperatures under 5000 K, and a phase of daylight otherwise (e.g., D65). Chromatic adaptation should be performed so that like quantities are compared. The "Test Method" (also called "Test Sample Method" or "Test Color Method") needs only colorimetric, rather than spectrophotometric, information.
Note that the last three steps are equivalent to finding the mean color difference, formula_3 and using that to calculate formula_4:
formula_5
Chromatic adaptation.
The test method uses the following von Kries chromatic transform equation to find the corresponding color ("u""c","i", "v""c","i") for each sample. The mixed subscripts ("t", "i") refer to the inner product of the test illuminant spectrum and the spectral reflectivity of sample "i":
formula_6
formula_7
formula_8
formula_9
where subscripts "r" and "t" refer to reference and test light sources respectively.
Test color samples.
As specified in , the original test color samples (TCS) are taken from an early edition of the Munsell Atlas. The first eight samples, a subset of the eighteen proposed in , are relatively low saturated colors and are evenly distributed over the complete range of hues. These eight samples are employed to calculate the general color rendering index formula_4. The last six samples provide supplementary information about the color rendering properties of the light source; the first four for high saturation, and the last two as representatives of well-known objects. The reflectance spectra of these samples may be found in , and their approximate Munsell notations are listed aside.
R96a method.
In the CIE's 1991 Quadrennial Meeting, Technical Committee 1-33 (Color Rendering) was assembled to work on updating the color rendering method, as a result of which the R96a method was developed. The committee was dissolved in 1999, releasing , but no firm recommendations, partly due to disagreements between researchers and manufacturers.
The R96a method has a few distinguishing features:
It is conventional to use the original method; R96a should be explicitly mentioned if used.
New test color samples.
As discussed in , recommends the use of a ColorChecker chart owing to the obsolescence of the original samples, of which only metameric matches remain. In addition to the eight ColorChart samples, two skin tone samples are defined (TCS09* and TCS10*). Accordingly, the updated general CRI is averaged over ten samples, not eight as before. Nevertheless, has determined that the patches in give better correlations for any color difference than the ColorChecker chart, whose samples are not equally distributed in a uniform color space.
Example.
The CRI can also be theoretically derived from the spectral power distribution (SPD) of the illuminant and samples, since physical copies of the original color samples are difficult to find. In this method, care should be taken to use a sampling resolution fine enough to capture spikes in the SPD. The SPDs of the standard test colors are tabulated in 5 nm increments, so it is suggested to use interpolation up to the resolution of the illuminant's spectrophotometry.
Starting with the SPD, let us verify that the CRI of reference illuminant F4 is 51. The first step is to determine the tristimulus values using the 1931 standard observer. Calculation of the inner product of the SPD with the standard observer's color matching functions (CMFs) yields ("X", "Y", "Z") = (109.2, 100.0, 38.9) (after normalizing for "Y" = 100). From this follow the "xy" chromaticity values:
formula_10
formula_11
The next step is to convert these chromaticities to the CIE 1960 UCS in order to be able to determine the CCT:
formula_12
formula_13
Examining the CIE 1960 UCS reveals this point to be closest to 2938 K on the Planckian locus, which has a coordinate of (0.2528, 0.3484). The distance of the test point to the locus is under the limit (5.4×10−3), so we can continue the procedure, assured of a meaningful result:
formula_14
We can verify the CCT by using McCamy's approximation algorithm to estimate the CCT from the "xy" chromaticities:
formula_15
where formula_16.
Substituting formula_17 yields "n" = 0.4979 and an estimated CCT of 2941 K, which is close enough. (Robertson's method can be used for greater precision, but we will be content with 2940 K in order to replicate published results.) Since 2940 < 5000, we select a Planckian radiator of 2940 K as the reference illuminant.
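McCamy's approximation is straightforward to reproduce. The sketch below uses the commonly published coefficients for the formula and the chromaticity ("x", "y") ≈ (0.4402, 0.4031) that follows from the tristimulus values above.

```python
def mccamy_cct(x, y):
    """McCamy's approximation for correlated color temperature from CIE 1931 (x, y)."""
    n = (x - 0.3320) / (y - 0.1858)
    return -449.0 * n ** 3 + 3525.0 * n ** 2 - 6823.3 * n + 5520.33

# Chromaticity of illuminant F4 from the worked example above.
print(mccamy_cct(0.4402, 0.4031))  # ≈ 2941 K, in agreement with the text
```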
The next step is to determine the values of the test color samples under each illuminant in the CIEUVW color space. This is done by integrating the product of the CMF with the SPDs of the illuminant and the sample, then converting from CIEXYZ to CIEUVW (with the "u", "v" coordinates of the reference illuminant as white point):
From this we can calculate the color difference between the chromatically adapted samples (labeled "CAT") and those illuminated by the reference. (The Euclidean metric is used to calculate the color difference in CIEUVW.) The special CRI is simply formula_18.
Finally, the general color rendering index is the mean of the special CRIs: 51.
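The last two steps of the worked example can be sketched as follows; the color differences below are placeholders rather than the actual F4 values, so the resulting index is illustrative only.

```python
def special_cri(delta_e):
    """R_i = 100 - 4.6 * ΔE_i, with ΔE_i the CIEUVW color difference of sample i."""
    return 100.0 - 4.6 * delta_e

# Placeholder color differences for the eight test color samples
# (not the actual values for F4).
delta_es = [4.0, 8.5, 12.0, 9.5, 9.0, 7.5, 10.5, 16.0]

special = [special_cri(de) for de in delta_es]
ra = sum(special) / len(special)                       # mean of the special CRIs
ra_alt = 100.0 - 4.6 * sum(delta_es) / len(delta_es)   # equivalently, from the mean ΔE

print([round(r, 1) for r in special])
print(round(ra, 1), round(ra_alt, 1))  # the two forms agree
```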
Typical values.
A reference source, such as blackbody radiation, is defined as having a CRI of 100. This is why incandescent lamps have that rating, as they are, in effect, almost blackbody radiators. The best possible faithfulness to a reference is specified by CRI = 100, while the very poorest is specified by a CRI below zero. A high CRI by itself does not imply a good rendition of color, because the reference itself may have an imbalanced SPD if it has an extreme color temperature.
Special value: R9.
Ra is the average value of R1–R8; other values from R9 to R15 are not used in the calculation of Ra, including R9 "saturated red", R13 "skin color (light)", and R15 "skin color (medium)", which are all difficult colors to faithfully reproduce. R9 is a vital index in high-CRI lighting, as many applications require red lights, such as film and video lighting, medical lighting, art lighting, etc. However, in the general CRI (Ra) calculation R9 is not included.
R9 is one of the special color rendering indices Ri, each of which refers to a test color sample (TCS); it is one score in the extended CRI. R9 rates the light source's ability to reveal the color of TCS 09, and thus describes the specific ability of a light source to reproduce the red color of objects accurately. Many lighting manufacturers and retailers do not state the R9 score, although it is a vital value for evaluating color rendition performance for film and video lighting, as well as for any application that requires a high CRI. It is therefore generally regarded as a supplement to the general color rendering index when evaluating a high-CRI light source.
The R9 value, corresponding to TCS 09, i.e. saturated red, is the key color for many lighting applications, such as film and video lighting, textile printing, image printing, skin tones, medical lighting, and so on. Moreover, many objects that are not themselves red are composed of several colors, including red. For instance, skin tone is influenced by the blood under the skin, which means that skin tones also contain red, even though they appear close to white or light yellow. If the R9 value is not high enough, skin tones under such a light will appear paler, or even greenish, to the eye or to a camera.
Criticism.
Ohno and others have criticized CRI for not always correlating well with subjective color rendering quality in practice, particularly for light sources with spiky emission spectra such as fluorescent lamps or white LEDs. Another problem is that the CRI is discontinuous at 5000 K, because the chromaticity of the reference moves from the Planckian locus to the CIE daylight locus. identify several other issues, which they address in their color quality scale (CQS):
Alternatives.
"reviews the applicability of the CIE color rendering index to white LED light sources based on the results of visual experiments". Chaired by Davis, CIE TC 1-69(C) is currently investigating "new methods for assessing the color rendition properties of white-light sources used for illumination, including solid-state light sources, with the goal of recommending new assessment procedures [...] by March, 2010".
For a comprehensive review of alternative color rendering indexes see .
reviewed several alternative quality metrics and compared their performance based on visual data obtained in nine psychophysical experiments. It was found that a geometric mean of the GAI index and the CIE Ra correlated best with naturalness (r=0.85), while a color quality metric based on memory colors (MCRI) correlated best for preference ("r" = 0.88). The differences in performance of these metrics with the other tested metrics (CIE Ra; CRI-CAM02UCS; CQS; RCRI; GAI; geomean (GAI, CIE Ra); CSA; Judd Flattery; Thornton CPI; MCRI) were found to be statistically significant with "p" < 0.0001.
Dangol, et al., performed psychophysical experiments and concluded that people's judgments of naturalness and overall preference could not be predicted with a single measure, but required the joint use of a fidelity-based measure (e.g., Qp) and a gamut-based measure (e.g., Qg or GAI). They carried out further experiments in real offices, evaluating various spectra generated for combinations of existing and proposed color rendering metrics.
Due to the criticisms of CRI many researchers have developed alternative metrics, though relatively few of them have had wide adoption.
Gamut area index (GAI).
Developed in 2010 by Rea and Freyssinier, the gamut area index (GAI) is an attempt to improve over the flaws found in the CRI. They have shown that the GAI is better than the CRI at predicting color discrimination on standardized Farnsworth-Munsell 100 Hue Tests and that GAI is predictive of color saturation. Proponents of using GAI claim that, when used in conjunction with CRI, this method of evaluating color rendering is preferred by test subjects over light sources that have high values of only one measure. Researchers recommend a lower and an upper limit to GAI. Use of LED technology has called for a new way to evaluate color rendering because of the unique spectrum of light created by these technologies. Preliminary tests have shown that the combination of GAI and CRI used together is a preferred method for evaluating color rendering.
Color quality scale (CQS).
developed a psychophysical experiment in order to evaluate the light quality of LED lighting. It is based on the colored samples used in the "color quality scale". Predictions of the CQS and results from visual measurements were compared.
Film and video high-CRI LED lighting.
Problems have been encountered attempting to use LED lighting on film and video sets. The color spectra of LED lighting primary colors do not match the expected color wavelength bandpasses of film emulsions and digital sensors. As a result, color rendition can be completely unpredictable in optical prints, transfers to digital media from film (DIs), and video camera recordings. This phenomenon with respect to motion picture film has been documented in an LED lighting evaluation series of tests produced by the Academy of Motion Picture Arts and Sciences scientific staff.
To that end, various other metrics such as the TLCI (television lighting consistency index) have been developed to replace the human observer with a camera observer. Similar to the CRI, the metric measures quality of a light source as it would appear on camera on a scale from 0 to 100. Some manufacturers say that their products have TLCI values of up to 99.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{DC} = \\Delta_{uv} = \\sqrt{(u_r - u_t)^2 + (v_r - v_t)^2}."
},
{
"math_id": 1,
"text": "\\Delta E_i"
},
{
"math_id": 2,
"text": "R_i = 100 - 4.6 \\Delta E_i"
},
{
"math_id": 3,
"text": "\\Delta \\bar{E}_{UVW}"
},
{
"math_id": 4,
"text": "R_a"
},
{
"math_id": 5,
"text": "R_a = 100 - 4.6 \\Delta \\bar{E}_{UVW}."
},
{
"math_id": 6,
"text": "u_{c,i} = \\frac{10.872 + 0.404 (c_r/c_t) c_{t,i} - 4 (d_r/d_t) d_{t,i}}{16.518 + 1.481 (c_r/c_t) c_{t,i} - (d_r/d_t) d_{t,i}},"
},
{
"math_id": 7,
"text": "v_{c,i} = \\frac{5.520}{16.518 + 1.481 (c_r/c_t) c_{t,i} - (d_r/d_t) d_{t,i}},"
},
{
"math_id": 8,
"text": "c = (4.0 - u - 10.0 v) / v,"
},
{
"math_id": 9,
"text": "d = (1.708 v - 1.481 u + 0.404) / v,"
},
{
"math_id": 10,
"text": "x = \\frac{109.2}{109.2 + 100.0 + 38.9} = 0.4402,"
},
{
"math_id": 11,
"text": "y = \\frac{100}{109.2 + 100.0 + 38.9} = 0.4031."
},
{
"math_id": 12,
"text": "u = \\frac{4 \\times 0.4402}{-2 \\times 0.4402 + 12 \\times 0.4031 + 3} = 0.2531,"
},
{
"math_id": 13,
"text": "v = \\frac{6 \\times 0.4031}{-2 \\times 0.4402 + 12 \\times 0.4031 + 3} = 0.3477."
},
{
"math_id": 14,
"text": "\\begin{align}\n \\text{DC} &= \\sqrt{(0.2531 - 0.2528)^2 + (0.3477 - 0.3484)^2} \\\\\n &= 8.12 \\times 10^{-4} < 5.4 \\times 10^{-3}.\n\\end{align}"
},
{
"math_id": 15,
"text": "\\text{CCT}_\\text{est.} = -449 n^3 + 3525 n^2 - 6823.3 n + 5520.33,"
},
{
"math_id": 16,
"text": "n = \\frac{x - 0.3320}{y - 0.1858}"
},
{
"math_id": 17,
"text": "(x, y) = (0.4402, 0.4031)"
},
{
"math_id": 18,
"text": "R_i = 100 - 4.6 \\Delta E_{UVW}"
},
{
"math_id": 19,
"text": "R_\\text{out} = 10 \\ln \\left[\\exp(R_\\text{in}/10) + 1\\right]"
}
]
| https://en.wikipedia.org/wiki?curid=1324664 |
13246731 | Vackář oscillator | A Vackář oscillator is a wide range variable frequency oscillator (VFO) which has a near constant output amplitude over its frequency range. It is similar to a Colpitts oscillator or a Clapp oscillator, but those designs do not have a constant output amplitude when tuned.
Invention.
In 1949, the Czech engineer Jiří Vackář published a paper on the design of stable variable-frequency oscillators (VFO). The paper discussed many stability issues such as variations with temperature, atmospheric pressure, component aging, and microphonics. For example, Vackář describes making inductors by first heating the wire and then winding the wire on a stable ceramic coil form. The resulting inductor has a temperature coefficient of 6 to 8 parts per million per degree Celsius. Vackář points out that common air variable capacitors have a stability of 2 parts per thousand; to build a VFO with a stability of 50 parts per million requires that the variable capacitor be only 1/40 of the total tuning capacitance (0.002/40 = 50 ppm). The stability requirement also implies the variable capacitor may only tune a limited range of 1:1.025. Larger tuning ranges require switching stable fixed capacitors or inductors.
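The capacitor-stability arithmetic above can be restated as a short calculation; the numbers are the ones quoted in the text (a 2-parts-per-thousand air variable capacitor and a 50 ppm stability target), and the Python snippet is purely illustrative.

```python
cap_stability = 2e-3       # air variable capacitor: 2 parts per thousand
target = 50e-6             # desired VFO stability: 50 parts per million

max_fraction = target / cap_stability
print(max_fraction)        # 0.025, i.e. 1/40 of the tuning capacity (0.002/40 = 50 ppm)
```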
Vackář was interested in high stability designs, so he wanted the highest Q for his circuits. It is possible to make wide range VFOs with stable output amplitude by heavily damping (loading) the tuned circuit, but that tactic substantially reduces the Q and the frequency stability.
Vackář was also concerned with the amplitude variations of the variable-frequency oscillator as it is tuned through its range. Ideally, an oscillator's loop gain will be unity according to the Barkhausen stability criterion. In practice, the loop gain is adjusted to be a little more than one to get the oscillation started; as the amplitude increases, some gain compression then causes the loop gain to average out over a complete cycle to unity. If the VFO frequency is then adjusted, the gain may increase substantially; the result is more gain compression is needed, and that affects both the output amplitude of the VFO and its frequency stability.
Vackář reviewed several existing circuits for their amplitude stability. In his analysis, Vackář made several assumptions. He assumed the tuned circuit has a constant quality factor (Q) over the VFO's frequency range; this assumption implies that the tank's effective resistance increases linearly with frequency (ω). The Clapp oscillator's transconductance is proportional to "ω"3. If the Clapp transconductance is set to just oscillate at the lowest frequency, then the oscillator will be overdriven at its highest frequency. If the frequency changed by a factor of 1.5, then the loop gain at the high end would be 3.375 times higher; this higher gain requires significant compression. Vackář concluded that the Clapp oscillator "can only be used for operation on fixed frequencies or at the most over narrow bands (max. about 1:1.2)." In contrast, the Seiler (tapped capacitor) and Lampkin (tapped inductor) oscillators have a transconductance requirement that is proportional to "ω"−1.
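A small numerical sketch of the scaling argument above, assuming the idealized proportionalities Vackář used (the transconductance requirement growing as the cube of the frequency for the Clapp circuit and as the reciprocal of the frequency for the Seiler and Lampkin circuits); the snippet is illustrative only.

```python
ratio = 1.5   # frequency tuned from the bottom to the top of the range

clapp = ratio ** 3     # Clapp: required transconductance grows as the cube of frequency
seiler = ratio ** -1   # Seiler/Lampkin: requirement falls as the reciprocal of frequency

print(clapp, seiler)   # 3.375 (the "3.375 times higher" figure above) and ~0.667
```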
Vackář then describes an oscillator circuit due to Radioslavia in 1945 that maintained "a comparatively constant amplitude over a wide frequency range." Vackář reports that this VFO circuit had been in use by the Czechoslovak Post Office since 1946. Vackář analyzes the circuit and explains how to get an approximately constant amplitude response. The circuit's transconductance increases linearly with frequency, but that increase is offset by the tuning inductor's increasing Q. This circuit has become known as the Vackář VFO. Vackář referred to the circuit as "our circuit" and states that O. Landini independently discovered the circuit and published it (without an analysis) in "Radio Rivista" in 1948. Vackář describes a VFO design using this circuit that covers a modest frequency range of 1:1.17.
Vackář then describes a variation of the Radioslavia circuit that can cover a frequency range of 1:2.5 or even 1:3. This circuit tries to compensate for some variation in Q over the useful range of the VCO. Vackář patented this new circuit and two variations of it.
Circuit operation.
The schematic above is the equivalent of Fig. 5 in his paper (Radioslavia design), redrawn for the use of a junction FET. L1 and the capacitors form the resonant circuit of a Colpitts oscillator, and capacitors Cv and Cg also serve as the grid voltage divider. The circuit can be tuned with C0. Example values are from his paper. Resistor RL is part of the simulation, not part of the circuit. L2 is a radio frequency choke.
It is similar to the earlier Seiler oscillator; the difference is that in Seiler's circuit C0 is connected to the other side of Ca. Vackář based his design on a stability analysis of the Gouriet-Clapp (Vackář claims it is suitable only for a fixed frequency or a very narrow band, at most about 1:1.2), Seiler and Lampkin oscillators (in Lampkin's circuit, an inductive voltage divider on the tuned-circuit coil is used instead of the Cv, Cg, and Ca of Seiler's; schematics are given in the first reference).
The oscillator's stability is due largely to the dependency of the tube's (or transistor's) forward transconductance on the resonant frequency (ω) of the tuned circuit. Specifically, Vackář found that forward transconductance varied as "ω"3 for the Clapp oscillator, as 1/"ω" for the Seiler oscillator, and as "ω"/"Q" for his design, where the Q factor of the coil (L1) increases with ω.
The conditions for a forward transconductance that varies minimally with respect to "ω" are met when:
formula_0 and
formula_1
and the Q of the resonator increases in proportion to ω, which is often approximated by real-world inductors.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " C_a \\gg C_0 \\gg C_v, "
},
{
"math_id": 1,
"text": " C_g \\gg C_v "
}
]
| https://en.wikipedia.org/wiki?curid=13246731 |
1324681 | Operating margin | Ratio of operating income to net sales
In business, operating margin—also known as operating income margin, operating profit margin, EBIT margin and return on sales (ROS)—is the ratio of operating income ("operating profit" in the UK) to net sales, usually expressed in percent.
formula_0
"Net profit" measures the profitability of ventures after accounting for all costs.
"Return on sales (ROS)" is net profit as a percentage of sales revenue. ROS is an indicator of profitability and is often used to compare the profitability of companies and industries of differing sizes. Significantly, ROS does not account for the capital (investment) used to generate the profit. In a survey of nearly 200 senior marketing managers, 69 percent responded that they found the "return on sales" metric very useful.
Unlike Earnings before interest, taxes, depreciation, and amortization (EBITDA) margin, operating margin takes into account depreciation and amortization expenses.
Purpose.
These financial metrics measure levels and rates of profitability. Probably the most common way to determine the success of a company is to look at the net profits of the business. Companies are collections of projects and markets, and individual areas can be judged on how successful they are at adding to the corporate net profit. Not all projects are of equal size, however, and one way to adjust for size is to divide the profit by sales revenue. The resulting ratio is return on sales (ROS), the percentage of sales revenue that gets 'returned' to the company as net profits after all the related costs of the activity are deducted.
Construction.
Net profit measures the fundamental profitability of the business. It is the revenues of the activity less the costs of the activity. The main complication is in more complex businesses when overhead needs to be allocated across divisions of the company. Almost by definition, overheads are costs that cannot be directly tied to any specific product or division. The classic example would be the cost of headquarters staff.
"Net profit: To calculate net profit for a unit (such as a company or division), subtract all costs, including a fair share of total corporate overheads, from the gross revenues.
formula_1
"Return on sales (ROS): Net profit as a percentage of sales revenue".
formula_2
Earnings Before Interest, Taxes, Depreciation, and Amortization (EBITDA) is a very popular measure of financial performance. It is used to assess the 'operating' profit of the business. It is a rough way of calculating how much cash the business is generating and is even sometimes called the 'operating cash flow'. It can be useful because it removes factors that change the view of performance depending upon the accounting and financing policies of the business. Supporters argue it reduces management's ability to change the profits they report by their choice of accounting rules and the way they generate financial backing for the company. This metric excludes from consideration expenses related to decisions such as how to finance the business (debt or equity) and over what period they depreciate fixed assets. EBITDA is typically closer to actual cash flow than is NOPAT. ... EBITDA can be calculated by adding back the costs of interest, depreciation, and amortization charges and any taxes incurred.
formula_3Example: The Coca-Cola Company
formula_4
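The ratios above reduce to a few one-line calculations. The sketch below is illustrative only; it uses the Coca-Cola operating-income and revenue figures quoted above, while the net-profit, interest, tax, and depreciation inputs are hypothetical placeholders.

```python
def operating_margin(operating_income, revenue):
    return operating_income / revenue

def return_on_sales(net_profit, sales_revenue):
    return net_profit / sales_revenue

def ebitda(net_profit, interest, taxes, depreciation_and_amortization):
    return net_profit + interest + taxes + depreciation_and_amortization

# The Coca-Cola figures quoted above (operating income 6,318 on revenue 20,088):
print(f"{operating_margin(6_318, 20_088):.2%}")   # 31.45%

# A hypothetical unit with $100 net profit on $1,000 of sales:
print(f"{return_on_sales(100, 1_000):.1%}")       # 10.0%
print(ebitda(100, 20, 30, 50))                    # 200
```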
It is a measurement of what proportion of a company's revenue is left over, before taxes and other indirect costs (such as rent, bonus, interest, "etc".), after paying for variable costs of production as wages, raw materials, "etc". A good operating margin is needed for a company to be able to pay for its fixed costs, such as interest on debt. A higher operating margin means that the company has less financial risk.
Operating margin can be considered total revenue from product sales less all costs before adjustment for taxes, dividends to shareholders, and interest on debt. | [
{
"math_id": 0,
"text": " \\text{Operating margin} = \\frac {\\text{Operating income}}{\\text{Revenue}} ."
},
{
"math_id": 1,
"text": "\\text{Net profit}\\ (\\$) = \\text{Sales revenue}\\ (\\$) - \\text{Total costs}\\ (\\$)"
},
{
"math_id": 2,
"text": "\\text{Return on sales}\\ (\\%) = \\frac{\\text{Net profit}\\ (\\$)}{\\text{Sales revenue}\\ (\\$)}"
},
{
"math_id": 3,
"text": "\\text{EBITDA}\\ (\\$) = \\text{Net profit}\\ (\\$) + \\text{Interest Payments}\\ (\\$) + \\text{Taxes Incurred}\\ (\\$) + \\text{Depreciation and Amortization Charges}\\ (\\$)"
},
{
"math_id": 4,
"text": " \\text{Operating margin} = \\tfrac {6,318}{20,088} = \\underline{\\underline{31.45 \\%}} "
}
]
| https://en.wikipedia.org/wiki?curid=1324681 |
13247267 | Lyndon–Hochschild–Serre spectral sequence | Topic in mathematics
In mathematics, especially in the fields of group cohomology, homological algebra and number theory, the Lyndon spectral sequence or Hochschild–Serre spectral sequence is a spectral sequence relating the group cohomology of a normal subgroup "N" and the quotient group "G"/"N" to the cohomology of the total group "G". The spectral sequence is named after Roger Lyndon, Gerhard Hochschild, and Jean-Pierre Serre.
Statement.
Let formula_0 be a group and formula_1 be a normal subgroup. The latter ensures that the quotient formula_2 is a group, as well. Finally, let formula_3 be a formula_0-module. Then there is a spectral sequence of cohomological type
formula_4
and there is a spectral sequence of homological type
formula_5,
where the arrow 'formula_6' means convergence of spectral sequences.
The same statement holds if formula_0 is a profinite group, formula_1 is a "closed" normal subgroup and formula_7 denotes the continuous cohomology.
Examples.
Homology of the Heisenberg group.
The spectral sequence can be used to compute the homology of the Heisenberg group "G" with integral entries, i.e., matrices of the form
formula_8
This group is a central extension
formula_9
with center formula_10 corresponding to the subgroup with formula_11. The spectral sequence for the group homology, together with the analysis of a differential in this spectral sequence, shows that
formula_12
Cohomology of wreath products.
For a group "G", the wreath product is an extension
formula_13
The resulting spectral sequence of group cohomology with coefficients in a field "k",
formula_14
is known to degenerate at the formula_15-page.
Properties.
The associated five-term exact sequence is the usual inflation-restriction exact sequence:
formula_16
Generalizations.
The spectral sequence is an instance of the more general Grothendieck spectral sequence of the composition of two derived functors. Indeed, formula_17 is the derived functor of formula_18 (i.e., taking "G"-invariants) and the composition of the functors formula_19 and formula_20 is exactly formula_18.
A similar spectral sequence exists for group homology, as opposed to group cohomology, as well. | [
{
"math_id": 0,
"text": "G"
},
{
"math_id": 1,
"text": "N"
},
{
"math_id": 2,
"text": "G/N"
},
{
"math_id": 3,
"text": "A"
},
{
"math_id": 4,
"text": "H^p(G/N,H^q(N,A)) \\Longrightarrow H^{p+q}(G,A)"
},
{
"math_id": 5,
"text": "H_p(G/N,H_q(N,A)) \\Longrightarrow H_{p+q}(G,A)"
},
{
"math_id": 6,
"text": "\\Longrightarrow"
},
{
"math_id": 7,
"text": "H^*"
},
{
"math_id": 8,
"text": "\\left ( \\begin{array}{ccc} 1 & a & c \\\\ 0 & 1 & b \\\\ 0 & 0 & 1 \\end{array} \\right ), \\ a, b, c \\in \\Z."
},
{
"math_id": 9,
"text": "0 \\to \\Z \\to G \\to \\Z \\oplus \\Z \\to 0"
},
{
"math_id": 10,
"text": "\\Z"
},
{
"math_id": 11,
"text": "a=b=0"
},
{
"math_id": 12,
"text": "H_i (G, \\Z) = \\left \\{ \\begin{array}{cc} \\Z & i=0, 3 \\\\ \\Z \\oplus \\Z & i=1,2 \\\\ 0 & i>3. \\end{array} \\right. "
},
{
"math_id": 13,
"text": "1 \\to G^p \\to G \\wr \\Z / p \\to \\Z / p \\to 1."
},
{
"math_id": 14,
"text": "H^r(\\Z/p, H^s(G^p, k)) \\Rightarrow H^{r+s}(G \\wr \\Z/p, k),"
},
{
"math_id": 15,
"text": "E_2"
},
{
"math_id": 16,
"text": "0 \\to H^1(G/N,A^N) \\to H^1(G,A) \\to H^1(N,A)^{G/N} \\to H^2(G/N,A^N) \\to H^2(G,A)."
},
{
"math_id": 17,
"text": "H^{*}(G,-)"
},
{
"math_id": 18,
"text": "(-)^G"
},
{
"math_id": 19,
"text": "(-)^N"
},
{
"math_id": 20,
"text": "(-)^{G/N}"
}
]
| https://en.wikipedia.org/wiki?curid=13247267 |
13247379 | Factorion | In number theory, a factorion in a given number base formula_0 is a natural number that equals the sum of the factorials of its digits. The name factorion was coined by the author Clifford A. Pickover.
Definition.
Let formula_1 be a natural number. For a base formula_2, we define the sum of the factorials of the digits of formula_1, formula_3, to be the following:
formula_4
where formula_5 is the number of digits in the number in base formula_0, formula_6 is the factorial of formula_1 and
formula_7
is the value of the formula_8th digit of the number. A natural number formula_1 is a formula_0-factorion if it is a fixed point for formula_9, i.e. if formula_10. formula_11 and formula_12 are fixed points for all bases formula_0, and thus are trivial factorions for all formula_0, and all other factorions are nontrivial factorions.
For example, the number 145 in base formula_13 is a factorion because formula_14.
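A brief Python sketch (illustrative, not from the source) of the digit-factorial sum and the fixed-point test; it confirms the base-10 example above and lists the base-10 factorions below 100,000.

```python
from math import factorial

def sfd(n: int, b: int = 10) -> int:
    """Sum of the factorials of the digits of n in base b."""
    total = 0
    while n:
        n, d = divmod(n, b)
        total += factorial(d)
    return total

print(sfd(145))                                         # 145, so 145 is a base-10 factorion
print([n for n in range(1, 100_000) if sfd(n) == n])    # [1, 2, 145, 40585]
```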
For formula_15, the sum of the factorials of the digits is simply the number of digits formula_16 in the base 2 representation since formula_17.
A natural number formula_1 is a sociable factorion if it is a periodic point for formula_9, where formula_18 for a positive integer formula_16, and forms a cycle of period formula_16. A factorion is a sociable factorion with formula_19, and an amicable factorion is a sociable factorion with formula_20.
All natural numbers formula_1 are preperiodic points for formula_9, regardless of the base. This is because all natural numbers of base formula_0 with formula_16 digits satisfy formula_21. However, when formula_22, then formula_23 for formula_24, so any formula_1 will satisfy formula_25 until formula_26. There are finitely many natural numbers less than formula_27, so the number is guaranteed to reach a periodic point or a fixed point less than formula_28, making it a preperiodic point. For formula_15, the number of digits formula_29 for any number, once again, making it a preperiodic point. This means also that there are a finite number of factorions and cycles for any given base formula_0.
The number of iterations formula_8 needed for formula_30 to reach a fixed point is the formula_9 function's persistence of formula_1, and undefined if it never reaches a fixed point.
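Because every orbit is eventually periodic, cycles and the persistence can be found by direct iteration. The sketch below is illustrative only; it redefines the same digit-factorial sum used above and exhibits an amicable pair, numbers that reach a fixed point, and a starting value whose orbit never does.

```python
from math import factorial

def sfd(n, b=10):            # digit-factorial sum, as in the sketch above
    s = 0
    while n:
        n, d = divmod(n, b)
        s += factorial(d)
    return s

def cycle_of(n, b=10):
    """Follow n under repeated sfd until a value repeats; return the cycle reached."""
    seen = []
    while n not in seen:
        seen.append(n)
        n = sfd(n, b)
    return seen[seen.index(n):]

def persistence(n, b=10):
    """Iterations needed to reach a fixed point, or None if a longer cycle is reached."""
    steps, seen = 0, set()
    while n not in seen:
        seen.add(n)
        nxt = sfd(n, b)
        if nxt == n:
            return steps
        n, steps = nxt, steps + 1
    return None

print(cycle_of(871))         # [871, 45361]: an amicable pair (period-2 cycle) in base 10
print(persistence(145))      # 0: 145 is already a fixed point
print(persistence(154))      # 1: 154 -> 145
print(persistence(69))       # None: 69 falls into the 3-cycle 1454 -> 169 -> 363601
```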
Factorions for.
"b" = ("k" − 1)!
Let formula_16 be a positive integer and the number base formula_31. Then formula_32 and formula_44 are factorions for formula_43 for all formula_16, as the following two proofs show:
<templatestyles src="Math_proof/styles.css" />Proof
Let the digits of formula_34 be formula_35, and formula_36 Then
formula_37
formula_38
formula_39
formula_40
formula_41
Thus formula_42 is a factorion for formula_43 for all formula_16.
<templatestyles src="Math_proof/styles.css" />Proof
Let the digits of formula_45 be formula_35, and formula_46. Then
formula_47
formula_48
formula_49
formula_40
formula_50
Thus formula_51 is a factorion for formula_43 for all formula_16.
"b" = "k"! − "k" + 1.
Let formula_16 be a positive integer and the number base formula_52. Then formula_53 is a factorion for formula_43 for all formula_16, as the following proof shows:
<templatestyles src="Math_proof/styles.css" />Proof
Let the digits of formula_34 be formula_54, and formula_55. Then
formula_37
formula_56
formula_57
formula_58
formula_40
formula_41
Thus formula_42 is a factorion for formula_43 for all formula_16.
Table of factorions and cycles of.
All numbers are represented in base formula_0.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "b"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "b > 1"
},
{
"math_id": 3,
"text": "\\operatorname{SFD}_b : \\mathbb{N} \\rightarrow \\mathbb{N}"
},
{
"math_id": 4,
"text": "\\operatorname{SFD}_b(n) = \\sum_{i=0}^{k - 1} d_i!."
},
{
"math_id": 5,
"text": "k = \\lfloor \\log_b n \\rfloor + 1"
},
{
"math_id": 6,
"text": "n!"
},
{
"math_id": 7,
"text": "d_i = \\frac{n \\bmod{b^{i+1}} - n \\bmod{b^{i}}}{b^{i}}"
},
{
"math_id": 8,
"text": "i"
},
{
"math_id": 9,
"text": "\\operatorname{SFD}_b"
},
{
"math_id": 10,
"text": "\\operatorname{SFD}_b(n) = n"
},
{
"math_id": 11,
"text": "1"
},
{
"math_id": 12,
"text": "2"
},
{
"math_id": 13,
"text": "b = 10"
},
{
"math_id": 14,
"text": "145 = 1! + 4! + 5!"
},
{
"math_id": 15,
"text": "b = 2"
},
{
"math_id": 16,
"text": "k"
},
{
"math_id": 17,
"text": "0! = 1! = 1"
},
{
"math_id": 18,
"text": "\\operatorname{SFD}_b^k(n) = n"
},
{
"math_id": 19,
"text": "k = 1"
},
{
"math_id": 20,
"text": "k = 2"
},
{
"math_id": 21,
"text": "b^{k-1} \\leq n \\leq (b-1)!(k)"
},
{
"math_id": 22,
"text": "k \\geq b"
},
{
"math_id": 23,
"text": "b^{k-1} > (b-1)!(k)"
},
{
"math_id": 24,
"text": "b > 2"
},
{
"math_id": 25,
"text": "n > \\operatorname{SFD}_b(n)"
},
{
"math_id": 26,
"text": "n < b^b"
},
{
"math_id": 27,
"text": "b^b"
},
{
"math_id": 28,
"text": " b^b"
},
{
"math_id": 29,
"text": "k \\leq n"
},
{
"math_id": 30,
"text": "\\operatorname{SFD}_b^i(n)"
},
{
"math_id": 31,
"text": "b = (k - 1)!"
},
{
"math_id": 32,
"text": "n_1 = kb + 1"
},
{
"math_id": 33,
"text": "k."
},
{
"math_id": 34,
"text": "n_1 = d_1 b + d_0"
},
{
"math_id": 35,
"text": "d_1 = k"
},
{
"math_id": 36,
"text": "d_0 = 1."
},
{
"math_id": 37,
"text": "\\operatorname{SFD}_b(n_1) = d_1! + d_0!"
},
{
"math_id": 38,
"text": " = k! + 1!"
},
{
"math_id": 39,
"text": " = k(k - 1)! + 1"
},
{
"math_id": 40,
"text": " = d_1 b + d_0"
},
{
"math_id": 41,
"text": " = n_1"
},
{
"math_id": 42,
"text": "n_1"
},
{
"math_id": 43,
"text": "F_b"
},
{
"math_id": 44,
"text": "n_2 = kb + 2"
},
{
"math_id": 45,
"text": "n_2 = d_1 b + d_0"
},
{
"math_id": 46,
"text": "d_0 = 2"
},
{
"math_id": 47,
"text": "\\operatorname{SFD}_b(n_2) = d_1! + d_0!"
},
{
"math_id": 48,
"text": " = k! + 2!"
},
{
"math_id": 49,
"text": " = k(k - 1)! + 2"
},
{
"math_id": 50,
"text": " = n_2"
},
{
"math_id": 51,
"text": "n_2"
},
{
"math_id": 52,
"text": "b = k! - k + 1"
},
{
"math_id": 53,
"text": "n_1 = b + k"
},
{
"math_id": 54,
"text": "d_1 = 1"
},
{
"math_id": 55,
"text": "d_0 = k"
},
{
"math_id": 56,
"text": " = 1! + k!"
},
{
"math_id": 57,
"text": " = k! + 1 - k + k"
},
{
"math_id": 58,
"text": " = 1(k! - k + 1) + k"
}
]
| https://en.wikipedia.org/wiki?curid=13247379 |
13250641 | Tutte–Berge formula | Characterization of the size of a maximum matching in a graph
In the mathematical discipline of graph theory, the Tutte–Berge formula is a characterization of the size of a maximum matching in a graph. It is a generalization of Tutte's theorem on perfect matchings, and is named after W. T. Tutte (who proved Tutte's theorem) and Claude Berge (who proved its generalization).
Statement.
The theorem states that the size of a maximum matching of a graph formula_0 equals
formula_1
where formula_2 counts how many of the connected components of the graph formula_3 have an odd number of vertices.
Equivalently, the number of "unmatched" vertices in a maximum matching equalsformula_4.
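As an illustration (not part of the original statement), the formula can be checked by brute force on a small graph. The sketch below assumes the third-party networkx library is available and enumerates every subset U, which is only feasible for very small graphs.

```python
from itertools import combinations
import networkx as nx

G = nx.petersen_graph()          # a small test graph (10 vertices)
V = list(G.nodes)

# Left-hand side: the size of a maximum matching.
matching_size = len(nx.max_weight_matching(G, maxcardinality=True))

def odd_components(H):
    """Number of connected components with an odd number of vertices."""
    return sum(1 for c in nx.connected_components(H) if len(c) % 2 == 1)

# Right-hand side: (1/2) * min over all U of (|U| - odd(G - U) + |V|).
numerator = min(
    len(U) - odd_components(G.subgraph(set(V) - set(U))) + len(V)
    for r in range(len(V) + 1)
    for U in combinations(V, r)
)

print(matching_size, numerator // 2)   # both 5: the Petersen graph has a perfect matching
```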
Explanation.
Intuitively, for any subset formula_5 of the vertices, the only way to completely cover an odd component of formula_6 by a matching is for one of the matched edges covering the component to be incident to formula_5. If, instead, some odd component had no matched edge connecting it to formula_5, then the part of the matching that covered the component would cover its vertices in pairs, but since the component has an odd number of vertices it would necessarily include at least one leftover and unmatched vertex. Therefore, if some choice of formula_5 has few vertices but its removal creates a large number of odd components, then there will be many unmatched vertices, implying that the matching itself will be small. This reasoning can be made precise by stating that the size of a maximum matching is at most equal to the value given by the Tutte–Berge formula.
The characterization of Tutte and Berge proves that this is the only obstacle to creating a large matching: the size of the optimal matching will be determined by the subset formula_5 with the biggest difference between its numbers of odd components outside formula_5 and vertices inside formula_5. That is, there always exists a subset formula_5 such that deleting formula_5 creates the correct number of odd components needed to make the formula true. One way to find such a set formula_5 is to choose any maximum matching formula_7, and to let formula_8 be the set of vertices that are either unmatched in formula_7, or that can be reached from an unmatched vertex by an alternating path that ends with a matched edge. Then, let formula_5 be the set of vertices that are matched by formula_7 to vertices in formula_8. No two vertices in formula_8 can be adjacent, for if they were then their alternating paths could be concatenated to give a path by which the matching could be increased, contradicting the maximality of formula_7. Every neighbor of a vertex formula_9 in formula_8 must belong to formula_5, for otherwise we could extend an alternating path to formula_9 by one more pair of edges, through the neighbor, causing the neighbor to become part of formula_5. Therefore, in formula_6, every vertex of formula_8 forms a single-vertex component, which is odd. There can be no other odd components, because all other vertices remain matched after deleting formula_5. So with this construction the size of formula_5 and the number of odd components created by deleting formula_5 are what they need to be to make the formula be true.
Relation to Tutte's theorem.
Tutte's theorem characterizes the graphs with perfect matchings as being the ones for which deleting any subset formula_5 of vertices creates at most formula_10 odd components. (A subset formula_5 that creates at least formula_10 odd components can always be found in the empty set.) In this case, by the Tutte–Berge formula, the size of the matching is formula_11; that is, the maximum matching is a perfect matching. Thus, Tutte's theorem can be derived as a corollary of the Tutte–Berge formula, and the formula can be seen as a generalization of Tutte's theorem. | [
{
"math_id": 0,
"text": "G=(V,E)"
},
{
"math_id": 1,
"text": "\\frac{1}{2} \\min_{U\\subseteq V} \\left(|U|-\\operatorname{odd}(G-U)+|V|\\right), "
},
{
"math_id": 2,
"text": "\\operatorname{odd}(H)"
},
{
"math_id": 3,
"text": "H"
},
{
"math_id": 4,
"text": "\\max_{U\\subseteq V} \\left(\\operatorname{odd}(G-U)-|U|\\right)\n\n\n\n "
},
{
"math_id": 5,
"text": "U"
},
{
"math_id": 6,
"text": "G-U"
},
{
"math_id": 7,
"text": "M"
},
{
"math_id": 8,
"text": "X"
},
{
"math_id": 9,
"text": "x"
},
{
"math_id": 10,
"text": "|U|"
},
{
"math_id": 11,
"text": "|V|/2"
}
]
| https://en.wikipedia.org/wiki?curid=13250641 |
13251197 | Ion cyclotron resonance | Ion cyclotron resonance is a phenomenon related to the movement of ions in a magnetic field. It is used for accelerating ions in a cyclotron, and for measuring the masses of an ionized analyte in mass spectrometry, particularly with Fourier transform ion cyclotron resonance mass spectrometers. It can also be used to follow the kinetics of chemical reactions in a dilute gas mixture, provided these involve charged species.
Definition of the resonant frequency.
An ion in a static and uniform magnetic field will move in a circle due to the Lorentz force. The angular frequency of this "cyclotron motion" for a given magnetic field strength "B" is given by
formula_0
where "z" is the number of positive or negative charges of the ion, "e" is the elementary charge and "m" is the mass of the ion. An electric excitation signal having a frequency "f" will therefore resonate with ions having a mass-to-charge ratio "m/z" given by
formula_1
The circular motion may be superimposed with a uniform axial motion, resulting in a helix, or with a uniform motion perpendicular to the field (e.g., in the presence of an electrical or gravitational field) resulting in a cycloid.
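As a worked example of the formulas above, the following sketch computes the cyclotron frequency of a singly charged proton in a 1 tesla field and inverts the relation to recover the mass-to-charge ratio; the constants are standard CODATA values, and the code is illustrative only.

```python
import math

e   = 1.602176634e-19      # elementary charge, C
m_p = 1.67262192369e-27    # proton mass, kg

def cyclotron_frequency(B, z=1, m=m_p):
    """f = z*e*B / (2*pi*m), in hertz."""
    return z * e * B / (2 * math.pi * m)

def mass_to_charge(B, f):
    """m/z = e*B / (2*pi*f), in kg per elementary charge."""
    return e * B / (2 * math.pi * f)

f = cyclotron_frequency(B=1.0)
print(f)                       # ~1.52e7 Hz: a proton at 1 T resonates near 15 MHz
print(mass_to_charge(1.0, f))  # recovers ~1.67e-27 kg, the proton mass
```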
Ion cyclotron resonance heating.
Ion cyclotron resonance heating (ICRH) is a technique in which electromagnetic waves with frequencies corresponding to the ion cyclotron frequency are used to heat a plasma. The ions in the plasma absorb the electromagnetic radiation and, as a result, increase their kinetic energy. This technique is commonly used to heat tokamak plasmas.
In the solar wind.
On March 8, 2013, NASA released an article according to which ion cyclotron waves were identified by its solar probe spacecraft called WIND as the main cause for the heating of the solar wind as it rises from the Sun's surface. Before this discovery, it was unclear why the solar wind particles would heat up instead of cool down, when speeding away from the Sun's surface.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\omega = 2\\pi f = \\frac{zeB}{m},"
},
{
"math_id": 1,
"text": "\\frac{m}{z} = \\frac{eB}{2\\pi f}."
}
]
| https://en.wikipedia.org/wiki?curid=13251197 |
13253224 | Giacinto Morera | Italian engineer and mathematician
Giacinto Morera (18 July 1856 – 8 February 1909), was an Italian engineer and mathematician. He is known for Morera's theorem in the theory of functions of a complex variable and for his work in the theory of linear elasticity.
Biography.
Life.
He was born in Novara on 18 July 1856, the son of Giacomo Morera and Vittoria Unico. According to , his family was a wealthy one, his father being a rich merchant. This occurrence eased him in his studies after the laurea: however, he was an extraordinarily hard worker and he widely used this ability in his researches. After studying in Turin he went to Pavia, Pisa and Leipzig: then he went back to Pavia for a brief period in 1885, and finally he went to Genova in 1886, living here for the next 15 years. While being in Genova he married his fellow-citizen Cesira Faà. From 1901 on to his death he worked in Turin: he died of pneumonia on 8 February 1909.
Education and academic career.
He earned in 1878 the laurea in engineering and then, in 1879, the laurea in mathematics, both awarded him from the Politecnico di Torino: According to , the title of his thesis in the mathematical sciences was: ""Sul moto di un punto attratto da due centri fissi colla legge di Newton". In Turin he attended the courses held by Enrico d'Ovidio, Angelo Genocchi and particularly the ones held by Francesco Siacci: later in his life, Morera acknowledged Siacci as his mentor in scientific research and life. After graduating, he followed several advanced courses: he studied in Pavia from 1881 to 1882 under Eugenio Beltrami, Eugenio Bertini and Felice Casorati. In 1883 he was in Pisa under Enrico Betti, Riccardo de Paolis and Ulisse Dini: a year later, he was in Leipzig under Felix Klein, Adolph Mayer and Carl Neumann. In 1885 he went in Berlin in order to follow the lessons of Hermann von Helmholtz, Gustav Kirchhoff, Leopold Kronecker and Karl Weierstrass at the local university: later in the same year, he went back to Italy, briefly working at the University of Pavia as a professor in the then newly established ""Scuola di Magistero". In 1886, after passing the required competitive examination by a judging commission, he became professor of rational mechanics at the University of Genova: he lived there for 15 years, serving also as dean and as rector. In 1901 he was called by the University of Turin to hold the chair of rational mechanics, left vacant by Vito Volterra. In 1908 he passed to the chair of "Meccanica Superiore" and was elected dean of the Faculty of Sciences.
Honours.
He was member of the Accademia Nazionale dei Lincei (first elected corresponding member on 18 July 1896, then elected national member on 26 August 1907) and of the Accademia delle Scienze di Torino (elected on 9 February 1902). refers that also the Kharkov Mathematical Society elected him corresponding member during the meeting of the society held on 31 October 1909 (Old Calendar), being apparently not aware of his death.
Tracts of his personality and attitudes.
In his commemorative papers, Carlo Somigliana describes Morera's personality extensively: according to him, he was a devoted friend and a precious colleague, capable of serenely judging men and facts. On a more personal level, he remembers him as a cheerful person and a witty talker.
His intelligence is described as sharp and penetrating, his mind as uncommonly lucid, himself as possessing analytic and critical abilities and being versatile, capable to grasp and appreciate every kind of manifestation of the human intellect. Nevertheless, Somigliana also states that he was not interested in any scientific or other kind of field outside of his own realm of expertise. himself, in the inaugural address as the rector of the University of Genova, after quoting a statement attributed to Peter Guthrie Tait, revealed the reason behind his views: ""In science, the one who has a sound and solid knowledge, even in a narrow field, holds a true strength and he can use it whenever he needs: the one who has only a superficial knowledge, however wide and striking, holds nothing, and indeed he often holds a weakness pushing him towards vanity".
Acknowledged as honest, loyal and conscientious, good-tempered and with a good intellect, his simple manners earned him affection even when performing the duties of dean and rector at the University of Genoa. Also describes him as a man of high moral value, and ascribes to such qualities the reason of his success in social relations and in performing his duties as a civil servant.
However, despite being successful in social relations, he did not care for, nor appreciate, appearances and was not interested in activities other than teaching and doing research: consequently, he was not well known outside the circle of his family and relatives and the circle of his colleagues. He did not make a display of himself, careless of not being acknowledged by everyone for his true value: he also had a serious conception of life and strongly disliked vanity and superficiality.
According to Somigliana, his entire life was devoted to the higher unselfish ideal of "scientific research": and also remarks that only his beloved family shared the same attentions and cares he reserved to his lifelong ideal.
Work.
Research activity.
<templatestyles src="Template:Blockquote/styles.css" />Una quantità di quistioni egli chiarì, semplificò o perfezionò, portando quasi sempre il contributo di vedute ingegnose ed originali. Talchè la sua produzione scientifica può dirsi critica nel senso più largo e fecondo, cioè non-dedicata allo studio di minuziosi particolari, ma alla penetrazione e soluzione delle quistioni più difficili e complicate. Questa tendenza del suo ingegno si rivelò anche in un carattere esteriore di molte sue pubblicazioni, che egli presentò in forma di lavori brevi e concettosi; dei quali poi particolarmente si compiaceva, ed in conformità del suo carattere sincero, la sua compiacenza non-si tratteneva dal manifestare apertamente.
According to Somigliana, he was not particularly inventive: he did not create any new theory since this was not his main ability. Instead, he perfected already developed theories: nearly all of his researches appear as the natural result of a deep analysis work on theories that have already reached a high degree of perfection, clearly and precisely exposed. He had an exquisite sense for the applicability of his work, derived from his engineering studies, and mastered perfectly all known branches of mathematical analysis and their mechanical and physical applications.
He authored more than 60 research works: nearly complete lists of his publications are included in the commemorative papers , and . In particular classifies Morera's work by assigning each publication to particular research field: this classification is basically adopted in the following subsections.
Complex analysis.
Morera wrote eight research works on complex analysis: the style he used for their writing probably inspired Somigliana in the quotation introducing the "Research activity" section. Morera's theorem, probably the best known part of his scientific research, was first proved in the paper . The theorem states that if, in the complex plane formula_0, the line integral of a given continuous complex–valued function "f" satisfies the equation
formula_1
for every closed curve "C" in a given domain "D", then "f" is holomorphic there.
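As a purely numerical illustration of the hypothesis (not a proof of the theorem), the following sketch approximates closed-contour integrals around the unit circle: the integral of the holomorphic function exp vanishes, while that of 1/z, which is not holomorphic at the origin, does not. The discretization scheme and step count are arbitrary choices.

```python
import cmath

def contour_integral(f, n=10_000):
    """Approximate the integral of f around the unit circle with n straight segments."""
    total = 0j
    z_prev = 1 + 0j
    for k in range(1, n + 1):
        z = cmath.exp(2j * cmath.pi * k / n)
        total += f((z + z_prev) / 2) * (z - z_prev)   # midpoint value times the step
        z_prev = z
    return total

print(abs(contour_integral(cmath.exp)))         # ~0: exp is holomorphic everywhere
print(abs(contour_integral(lambda z: 1 / z)))   # ~6.28 (= 2*pi): 1/z fails at the origin
```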
Differential equations.
This section includes all his works on the theory of differential equations, ordinary or partial ones: classifies this contributions as works in the theory of the equations of dynamics, in the theory of first-order partial differential equations and in the theory of exact differential equations. He wrote twelve papers on this topic: the results he obtained in these works are well described by . In the paper he gives a very brief proof of a transformation formula for the Poisson brackets first proved by Émile Léonard Mathieu, while in the paper he simplifies the proof of a theorem of Francesco Siacci which is substantially equivalent to Lie's third theorem: the paper is concerned with the Pfaff problem, proving a theorem on the minimum number of integrations to be performed in order to solve the problem.
Equilibrium of continuous bodies in elasticity theory.
classifies four of his works within the realm of elasticity theory: his contribution are well described by and by in their known monographs. The works within this section are perhaps the second best known part of his research, after his contributions to complex analysis.
Mathematical analysis.
classifies four of his works under the locution "Questioni varie di Analisi".
Potential theory of harmonic functions.
His contribution of this topics are classified by under two sections, named respectively "Fondamenti della teoria della funzione potenziale" and "Attrazione dell'elissoide e funzioni armoniche ellissoidali". The work deals with the definition and properties of ellipsoidal harmonics and the related Lamé functions.
Rational mechanics and mathematical physics.
includes in this class twelve works: his first published work is included among them.
Varia: algebraic analysis and differential geometry.
This section includes the only two papers of Morera on the subject of algebraic analysis and his unique paper on differential geometry: they are, respectively, the papers , and .
Teaching activity.
References , and do not say much about the teaching activity of Giacinto Morera: Somigliana describes once his teaching ability as incisive. However, his teaching is also testified by the lithographed lecture notes : according to the OPAC , this book had two editions, the first one being in 1901–1902.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" />
Biographical references.
The references listed in this section contain mainly biographical information on the life of Giacinto Morera.
General references.
The references listed in this section are mainly commemorations or surveys giving information on the life or Morera but also describing his scientific researches in some detail.
Scientific references.
The references listed in this section describe particular aspect of Morera's scientific work or survey his scientific contribution to a given field. | [
{
"math_id": 0,
"text": "\\mathbb{C}"
},
{
"math_id": 1,
"text": "\\oint_C f(z)\\,\\mathrm{d}z = 0"
}
]
| https://en.wikipedia.org/wiki?curid=13253224 |
13255 | Hydrogen | Chemical element with atomic number 1 (H)
Hydrogen is a chemical element; it has symbol H and atomic number 1. It is the lightest element and, at standard conditions, is a gas of diatomic molecules with the formula H2, sometimes called dihydrogen, but more commonly called hydrogen gas, molecular hydrogen or simply hydrogen. It is colorless, odorless, tasteless, non-toxic, and highly combustible. Constituting about 75% of all normal matter, hydrogen is the most abundant chemical element in the universe. Stars, including the Sun, mainly consist of hydrogen in a plasma state, while on Earth, hydrogen is found in water, organic compounds, as dihydrogen, and in other molecular forms. The most common isotope of hydrogen (protium, 1H) consists of one proton, one electron, and no neutrons.
In the early universe, the formation of hydrogen's protons occurred in the first second after the Big Bang; neutral hydrogen atoms only formed about 370,000 years later during the recombination epoch as the universe cooled and plasma had cooled enough for electrons to remain bound to protons. Hydrogen, typically nonmetallic except under extreme pressure, readily forms covalent bonds with most nonmetals, contributing to the formation of compounds like water and various organic substances. Its role is crucial in acid-base reactions, which mainly involve proton exchange among soluble molecules. In ionic compounds, hydrogen can take the form of either a negatively charged anion, where it is known as hydride, or as a positively charged cation, H+. The cation, usually just a proton (symbol p), exhibits specific behavior in aqueous solutions and in ionic compounds involves screening of its electric charge by surrounding polar molecules or anions. Hydrogen's unique position as the only neutral atom for which the Schrödinger equation can be directly solved, has significantly contributed to the foundational principles of quantum mechanics through the exploration of its energetics and chemical bonding.
Hydrogen gas was first produced artificially in the early 16th century by reacting acids with metals. Henry Cavendish, in 1766–81, identified hydrogen gas as a distinct substance and discovered its property of producing water when burned; hence its name means "water-former" in Greek.
Most hydrogen production occurs through steam reforming of natural gas; a smaller portion comes from energy-intensive methods such as the electrolysis of water. Its main industrial uses include fossil fuel processing, such as hydrocracking, and ammonia production, with emerging uses in fuel cells for electricity generation and as a heat source. When used in fuel cells, hydrogen's only emission at point of use is water vapor, though combustion can produce nitrogen oxides. Hydrogen's interaction with metals may cause embrittlement.
<templatestyles src="Template:TOC limit/styles.css" />
Properties.
Combustion.
Hydrogen gas is highly flammable:
2 H2(g) + O2(g) → 2 H2O(l) + 572 kJ (572 kJ/2 mol = 286 kJ/mol = 141.865 MJ/kg)
Enthalpy of combustion: −286 kJ/mol.
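The conversion between the molar and the mass-specific figures quoted above is a one-line calculation; the only extra input is the molar mass of H2 (about 2.016 g/mol), and the snippet is illustrative only.

```python
molar_enthalpy_kj = 286.0     # kJ released per mole of H2 burned
molar_mass_kg = 2.016e-3      # kg per mole of H2

energy_mj_per_kg = molar_enthalpy_kj / molar_mass_kg / 1000
print(round(energy_mj_per_kg, 3))    # 141.865 MJ/kg, the figure quoted above
```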
Hydrogen gas forms explosive mixtures with air in concentrations from 4–74% and with chlorine at 5–95%. The hydrogen autoignition temperature, the temperature of spontaneous ignition in air, is 500 °C (932 °F).
Flame.
Pure hydrogen-oxygen flames emit ultraviolet light and with high oxygen mix are nearly invisible to the naked eye, as illustrated by the faint plume of the Space Shuttle Main Engine, compared to the highly visible plume of a Space Shuttle Solid Rocket Booster, which uses an ammonium perchlorate composite. The detection of a burning hydrogen leak, may require a flame detector; such leaks can be very dangerous. Hydrogen flames in other conditions are blue, resembling blue natural gas flames. The destruction of the "Hindenburg" airship was a notorious example of hydrogen combustion and the cause is still debated. The visible flames in the photographs were the result of carbon compounds in the airship skin burning.
Reactants.
H2 is unreactive compared to diatomic elements such as halogens or oxygen. The thermodynamic basis of this low reactivity is the very strong H–H bond, with a bond dissociation energy of 435.7 kJ/mol. The kinetic basis of the low reactivity is the nonpolar nature of H2 and its weak polarizability. It spontaneously reacts with chlorine and fluorine to form hydrogen chloride and hydrogen fluoride, respectively. The reactivity of H2 is strongly affected by the presence of metal catalysts. Thus, while mixtures of H2 with O2 or air combust readily when heated to at least 500 °C by a spark or flame, they do not react at room temperature in the absence of a catalyst.
Electron energy levels.
The ground state energy level of the electron in a hydrogen atom is −13.6 eV, equivalent to an ultraviolet photon of roughly 91 nm wavelength.
The energy levels of hydrogen can be calculated fairly accurately using the Bohr model of the atom, in which the electron "orbits" the proton, like how Earth orbits the Sun. However, the electron and proton are held together by electrostatic attraction, while planets and celestial objects are held by gravity. Due to the discretization of angular momentum postulated in early quantum mechanics by Bohr, the electron in the Bohr model can only occupy certain allowed distances from the proton, and therefore only certain allowed energies.
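As a small worked example (illustrative only), the Bohr-model energy levels E_n = −13.6 eV / n^2 and the photon wavelength corresponding to the 13.6 eV ground-state energy can be computed directly; the −13.6 eV / n^2 form is the standard Bohr-model result rather than a quotation from this article.

```python
h  = 6.62607015e-34      # Planck constant, J*s
c  = 2.99792458e8        # speed of light, m/s
eV = 1.602176634e-19     # joules per electronvolt

def bohr_energy_ev(n):
    """Bohr-model energy of level n in hydrogen, in eV (standard Bohr-model result)."""
    return -13.6 / n**2

wavelength_nm = h * c / (13.6 * eV) * 1e9
print([round(bohr_energy_ev(n), 2) for n in (1, 2, 3)])   # [-13.6, -3.4, -1.51]
print(round(wavelength_nm, 1))                            # ~91.2 nm, in the ultraviolet
```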
A more accurate description of the hydrogen atom comes from a quantum analysis that uses the Schrödinger equation, Dirac equation or Feynman path integral formulation to calculate the probability density of the electron around the proton. The most complex formulas include the small effects of special relativity and vacuum polarization. In the quantum mechanical treatment, the electron in a ground state hydrogen atom has no angular momentum—illustrating how the "planetary orbit" differs from electron motion.
Spin isomers.
Molecular H2 exists as two spin isomers, i.e. compounds that differ only in the spin states of their nuclei. In the orthohydrogen form, the spins of the two nuclei are parallel, forming a spin triplet state having a total molecular spin formula_0; in the parahydrogen form the spins are antiparallel and form a spin singlet state having spin formula_1. The equilibrium ratio of ortho- to para-hydrogen depends on temperature. At room temperature or warmer, equilibrium hydrogen gas contains about 25% of the para form and 75% of the ortho form. The ortho form is an excited state, having higher energy than the para form by 1.455 kJ/mol, and it converts to the para form over the course of several minutes when cooled to low temperature. The forms differ in their allowed rotational quantum states, and hence in thermal properties such as the heat capacity.
The ortho-to-para ratio in is an important consideration in the liquefaction and storage of liquid hydrogen: the conversion from ortho to para is exothermic and produces enough heat to evaporate most of the liquid if not converted first to parahydrogen during the cooling process. Catalysts for the ortho-para interconversion, such as ferric oxide and activated carbon compounds, are used during hydrogen cooling to avoid this loss of liquid.
Compounds.
Covalent and organic compounds.
While H2 is not very reactive under standard conditions, it does form compounds with most elements. Hydrogen can form compounds with elements that are more electronegative, such as halogens (F, Cl, Br, I), or oxygen; in these compounds hydrogen takes on a partial positive charge. When bonded to a more electronegative element, particularly fluorine, oxygen, or nitrogen, hydrogen can participate in a form of medium-strength noncovalent bonding with another electronegative element with a lone pair, a phenomenon called hydrogen bonding that is critical to the stability of many biological molecules. Hydrogen also forms compounds with less electronegative elements, such as metals and metalloids, where it takes on a partial negative charge. These compounds are often known as hydrides.
Hydrogen forms many compounds with carbon called the hydrocarbons, and even more with heteroatoms that, due to their association with living things, are called organic compounds. The study of their properties is known as organic chemistry and their study in the context of living organisms is called biochemistry. By some definitions, "organic" compounds are only required to contain carbon. However, most of them also contain hydrogen, and because it is the carbon-hydrogen bond that gives this class of compounds most of its particular chemical characteristics, carbon-hydrogen bonds are required in some definitions of the word "organic" in chemistry. Millions of hydrocarbons are known, and they are usually formed by complicated pathways that seldom involve elemental hydrogen.
Hydrogen is highly soluble in many rare earth and transition metals and is soluble in both nanocrystalline and amorphous metals. Hydrogen solubility in metals is influenced by local distortions or impurities in the crystal lattice. These properties may be useful when hydrogen is purified by passage through hot palladium disks, but the gas's high solubility is a metallurgical problem, contributing to the embrittlement of many metals, complicating the design of pipelines and storage tanks.
Hydrides.
Hydrogen compounds are often called hydrides, a term that is used fairly loosely. The term "hydride" suggests that the H atom has acquired a negative or anionic character, denoted H−; and is used when hydrogen forms a compound with a more electropositive element. The existence of the hydride anion, suggested by Gilbert N. Lewis in 1916 for group 1 and 2 salt-like hydrides, was demonstrated by Moers in 1920 by the electrolysis of molten lithium hydride (LiH), producing a stoichiometric quantity of hydrogen at the anode. For hydrides other than group 1 and 2 metals, the term is quite misleading, considering the low electronegativity of hydrogen. An exception in group 2 hydrides is BeH2, which is polymeric. In lithium aluminium hydride, the AlH4− anion carries hydridic centers firmly attached to the Al(III).
Although hydrides can be formed with almost all main-group elements, the number and combination of possible compounds varies widely; for example, more than 100 binary borane hydrides are known, but only one binary aluminium hydride. Binary indium hydride has not yet been identified, although larger complexes exist.
In inorganic chemistry, hydrides can also serve as bridging ligands that link two metal centers in a coordination complex. This function is particularly common in group 13 elements, especially in boranes (boron hydrides) and aluminium complexes, as well as in clustered carboranes.
Protons and acids.
Oxidation of hydrogen removes its electron and gives H+, which contains no electrons and a nucleus which is usually composed of one proton. That is why H+ is often called a proton. This species is central to discussion of acids. Under the Brønsted–Lowry acid–base theory, acids are proton donors, while bases are proton acceptors.
A bare proton, H+, cannot exist in solution or in ionic crystals because of its strong attraction to other atoms or molecules with electrons. Except at the high temperatures associated with plasmas, such protons cannot be removed from the electron clouds of atoms and molecules, and will remain attached to them. However, the term 'proton' is sometimes used loosely and metaphorically to refer to positively charged or cationic hydrogen attached to other species in this fashion, and as such is denoted "H+" without any implication that any single protons exist freely as a species.
To avoid the implication of the naked "solvated proton" in solution, acidic aqueous solutions are sometimes considered to contain a less unlikely fictitious species, termed the "hydronium ion" (H3O+). However, even in this case, such solvated hydrogen cations are more realistically conceived as being organized into clusters that form species closer to H9O4+. Other oxonium ions are found when water is in acidic solution with other solvents.
Although exotic on Earth, one of the most common ions in the universe is the H3+ ion, known as protonated molecular hydrogen or the trihydrogen cation.
Isotopes.
Hydrogen has three naturally occurring isotopes, denoted 1H, 2H and 3H. Other, highly unstable nuclei (4H to 7H) have been synthesized in the laboratory but not observed in nature.
Unique among the elements, distinct names are assigned to its isotopes in common use. During the early study of radioactivity, heavy radioisotopes were given their own names, but these are mostly no longer used. The symbols D and T (instead of 2H and 3H) are sometimes used for deuterium and tritium, but the symbol P was already used for phosphorus and thus was not available for protium. In its nomenclatural guidelines, the International Union of Pure and Applied Chemistry (IUPAC) allows any of D, T, 2H, and 3H to be used, though 2H and 3H are preferred.
The exotic atom muonium (symbol Mu), composed of an antimuon and an electron, can also be considered a light radioisotope of hydrogen. Because muons decay with a lifetime of about 2.2 μs, muonium is too unstable for observable chemistry. Nevertheless, muonium compounds are important test cases for quantum simulation, due to the mass difference between the antimuon and the proton, and IUPAC nomenclature incorporates such hypothetical compounds as muonium chloride (MuCl) and sodium muonide (NaMu), analogous to hydrogen chloride and sodium hydride respectively.
Thermal and physical properties.
Table of thermal and physical properties of hydrogen (H2) at atmospheric pressure:
History.
Discovery and use.
Robert Boyle.
In 1671, Irish scientist Robert Boyle discovered and described the reaction between iron filings and dilute acids, which results in the production of hydrogen gas.
<templatestyles src="Template:Blockquote/styles.css" />Having provided a saline spirit [hydrochloric acid], which by an uncommon way of preparation was made exceeding sharp and piercing, we put into a vial, capable of containing three or four ounces of water, a convenient quantity of filings of steel, which were not such as are commonly sold in shops to Chymists and Apothecaries, (those being usually not free enough from rust) but such as I had a while before caus'd to be purposely fil'd off from a piece of good steel. This metalline powder being moistn'd in the viol with a little of the menstruum, was afterwards drench'd with more; whereupon the mixture grew very hot, and belch'd up copious and stinking fumes; which whether they consisted altogether of the volatile sulfur of the Mars [iron], or of metalline steams participating of a sulfureous nature, and join'd with the saline exhalations of the menstruum, is not necessary to be here discuss'd. But whencesoever this stinking smoak proceeded, so inflammable it was, that upon the approach of a lighted candle to it, it would readily enough take fire, and burn with a blewish and somewhat greenish flame at the mouth of the viol for a good while together; and that, though with little light, yet with more strength than one would easily suspect.
The word "sulfureous" may be somewhat confusing, especially since Boyle did a similar experiment with iron and sulfuric acid. However, in all likelihood, "sulfureous" should here be understood to mean "combustible".
Henry Cavendish.
In 1766, Henry Cavendish was the first to recognize hydrogen gas as a discrete substance, by naming the gas from a metal–acid reaction "inflammable air". He speculated that "inflammable air" was in fact identical to the hypothetical substance "phlogiston", and he further found in 1781 that the gas produces water when burned. He is usually given credit for the discovery of hydrogen as an element.
Antoine Lavoisier.
In 1783, Antoine Lavoisier identified the element that came to be known as hydrogen when he and Laplace reproduced Cavendish's finding that water is produced when hydrogen is burned. Lavoisier produced hydrogen for his experiments on mass conservation by reacting a flux of steam with metallic iron through an incandescent iron tube heated in a fire. Anaerobic oxidation of iron by the protons of water at high temperature can be schematically represented by the set of following reactions:
1)
2)
3)
Many metals such as zirconium undergo a similar reaction with water leading to the production of hydrogen.
19th century.
François Isaac de Rivaz built the first de Rivaz engine, an internal combustion engine powered by a mixture of hydrogen and oxygen in 1806. Edward Daniel Clarke invented the hydrogen gas blowpipe in 1819. The Döbereiner's lamp and limelight were invented in 1823.
Hydrogen was liquefied for the first time by James Dewar in 1898 by using regenerative cooling and his invention, the vacuum flask. He produced solid hydrogen the next year.
Hydrogen-lifted airship.
The first hydrogen-filled balloon was invented by Jacques Charles in 1783. Hydrogen provided the lift for the first reliable form of air-travel following the 1852 invention of the first hydrogen-lifted airship by Henri Giffard. German count Ferdinand von Zeppelin promoted the idea of rigid airships lifted by hydrogen that later were called Zeppelins; the first of which had its maiden flight in 1900. Regularly scheduled flights started in 1910 and by the outbreak of World War I in August 1914, they had carried 35,000 passengers without a serious incident. Hydrogen-lifted airships were used as observation platforms and bombers during the war.
The first non-stop transatlantic crossing was made by the British airship "R34" in 1919. Regular passenger service resumed in the 1920s and the discovery of helium reserves in the United States promised increased safety, but the U.S. government refused to sell the gas for this purpose. Therefore, hydrogen was used in the "Hindenburg" airship, which was destroyed in a midair fire over New Jersey on 6 May 1937. The incident was broadcast live on radio and filmed. Ignition of leaking hydrogen is widely assumed to be the cause, but later investigations pointed to the ignition of the aluminized fabric coating by static electricity. But the damage to hydrogen's reputation as a lifting gas was already done and commercial hydrogen airship travel ceased. Hydrogen is still used, in preference to non-flammable but more expensive helium, as a lifting gas for weather balloons.
Deuterium and tritium.
Deuterium was discovered in December 1931 by Harold Urey, and tritium was prepared in 1934 by Ernest Rutherford, Mark Oliphant, and Paul Harteck. Heavy water, which consists of deuterium in the place of regular hydrogen, was discovered by Urey's group in 1932.
Hydrogen-cooled turbogenerator.
The first hydrogen-cooled turbogenerator went into service using gaseous hydrogen as a coolant in the rotor and the stator in 1937 at Dayton, Ohio, owned by the Dayton Power & Light Co. This was justified by the high thermal conductivity and very low viscosity of hydrogen gas, thus lower drag than air. This is the most common coolant used for generators 60 MW and larger; smaller generators are usually air-cooled.
Nickel–hydrogen battery.
The nickel–hydrogen battery was used for the first time in 1977 aboard the U.S. Navy's Navigation technology satellite-2 (NTS-2). The International Space Station, Mars Odyssey and the Mars Global Surveyor are equipped with nickel-hydrogen batteries. In the dark part of its orbit, the Hubble Space Telescope is also powered by nickel-hydrogen batteries, which were finally replaced in May 2009, more than 19 years after launch and 13 years beyond their design life.
Role in quantum theory.
Because of its simple atomic structure, consisting only of a proton and an electron, the hydrogen atom, together with the spectrum of light produced from it or absorbed by it, has been central to the development of the theory of atomic structure. Furthermore, study of the corresponding simplicity of the hydrogen molecule and the corresponding cation brought understanding of the nature of the chemical bond, which followed shortly after the quantum mechanical treatment of the hydrogen atom had been developed in the mid-1920s.
One of the first quantum effects to be explicitly noticed (but not understood at the time) was a Maxwell observation involving hydrogen, half a century before full quantum mechanical theory arrived. Maxwell observed that the specific heat capacity of H2 unaccountably departs from that of a diatomic gas below room temperature and begins to increasingly resemble that of a monatomic gas at cryogenic temperatures. According to quantum theory, this behavior arises from the spacing of the (quantized) rotational energy levels, which are particularly wide-spaced in H2 because of its low mass. These widely spaced levels inhibit equal partition of heat energy into rotational motion in hydrogen at low temperatures. Diatomic gases composed of heavier atoms do not have such widely spaced levels and do not exhibit the same effect.
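A rough sense of why the rotational levels of H2 are so widely spaced comes from the rigid-rotor model, in which the spacing is set by the rotational constant B and the corresponding characteristic temperature θrot = hcB/kB. The short sketch below uses approximate literature values for the rotational constants of H2 and N2 (taken as assumptions here, not from this article) to compare the two:

# Rigid-rotor sketch (illustrative only): rotational levels E_J = h*c*B*J*(J+1).
# The rotational constants below are approximate literature values, assumed for illustration.
h = 6.626e-34       # Planck constant, J*s
c_cm = 2.998e10     # speed of light in cm/s, so B in cm^-1 gives an energy via h*c*B
kB = 1.381e-23      # Boltzmann constant, J/K

def rotational_temperature(B_cm):
    """Characteristic rotational temperature theta_rot = h*c*B/kB, in kelvin."""
    return h * c_cm * B_cm / kB

for name, B in [("H2 (light, widely spaced levels)", 60.8),
                ("N2 (heavier, closely spaced levels)", 2.0)]:
    print(f"{name}: theta_rot ~ {rotational_temperature(B):.0f} K")

For H2 this comes out near 90 K, so rotation is largely "frozen out" well below room temperature, which is why its heat capacity falls toward the monatomic value; for heavier diatomics the corresponding temperature is only a few kelvin.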
Antihydrogen (H) is the antimatter counterpart to hydrogen. It consists of an antiproton with a positron. Antihydrogen is the only type of antimatter atom to have been produced as of 2015[ [update]].
Cosmic prevalence and distribution.
Hydrogen, as atomic H, is the most abundant chemical element in the universe, making up 75% of normal matter by mass and >90% by number of atoms. Most of the mass of the universe, however, is not in the form of chemical-element type matter, but rather is postulated to occur as yet-undetected forms of mass such as dark matter and dark energy.
Hydrogen is found in great abundance in stars and gas giant planets. Molecular clouds of H2 are associated with star formation. Hydrogen plays a vital role in powering stars through the proton–proton reaction in stars with masses up to approximately that of the Sun, and through the CNO cycle of nuclear fusion in stars more massive than the Sun.
States.
Throughout the universe, hydrogen is mostly found in the atomic and plasma states, with properties quite distinct from those of molecular hydrogen. As a plasma, hydrogen's electron and proton are not bound together, resulting in very high electrical conductivity and high emissivity (producing the light from the Sun and other stars). The charged particles are highly influenced by magnetic and electric fields. For example, in the solar wind they interact with the Earth's magnetosphere giving rise to Birkeland currents and the aurora.
Hydrogen is found in the neutral atomic state in the interstellar medium because the atoms seldom collide and combine. They are the source of the 21-cm hydrogen line at 1420 MHz that is detected in order to probe primordial hydrogen. The large amount of neutral hydrogen found in the damped Lyman-alpha systems is thought to dominate the cosmological baryonic density of the universe up to a redshift of "z" = 4.
Under ordinary conditions on Earth, elemental hydrogen exists as the diatomic gas, H2. Hydrogen gas is very rare in Earth's atmosphere (around 0.53 ppm on a molar basis) because of its light weight, which enables it to escape the atmosphere more rapidly than heavier gases. However, hydrogen is the third most abundant element on the Earth's surface, mostly in the form of chemical compounds such as hydrocarbons and water.
A molecular form called protonated molecular hydrogen (H3+) is found in the interstellar medium, where it is generated by ionization of molecular hydrogen from cosmic rays. This ion has also been observed in the upper atmosphere of Jupiter. The ion is relatively stable in outer space due to the low temperature and density. H3+ is one of the most abundant ions in the universe, and it plays a notable role in the chemistry of the interstellar medium. Neutral triatomic hydrogen can exist only in an excited form and is unstable. By contrast, the positive hydrogen molecular ion (H2+) is a rare molecule in the universe.
Production.
Many methods exist for producing H2, but three dominate commercially: steam reforming often coupled to water-gas shift, partial oxidation of hydrocarbons, and water electrolysis.
Steam reforming.
Hydrogen is mainly produced by steam methane reforming (SMR), the reaction of water and methane. Thus, at high temperature (1000–1400 K, 700–1100 °C or 1300–2000 °F), steam (water vapor) reacts with methane to yield carbon monoxide and H2.
Steam reforming is also used for the industrial preparation of ammonia.
This reaction is favored at low pressures but is nonetheless conducted at high pressures (2.0 MPa, 20 atm or 600 inHg), because high-pressure H2 is the most marketable product, and pressure swing adsorption (PSA) purification systems work better at higher pressures. The product mixture is known as "synthesis gas" because it is often used directly for the production of methanol and many other compounds. Hydrocarbons other than methane can be used to produce synthesis gas with varying product ratios. One of the many complications to this highly optimized technology is the formation of coke or carbon:
Therefore, steam reforming typically employs an excess of steam. Additional hydrogen can be recovered from the steam by using carbon monoxide through the water gas shift reaction (WGS). This process requires an iron oxide catalyst:
Hydrogen is sometimes produced and consumed in the same industrial process, without being separated. In the Haber process for ammonia production, hydrogen is generated from natural gas.
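As a rough stoichiometric check on this route (not a statement about any particular plant), combining steam reforming (CH4 + H2O → CO + 3 H2) with the water gas shift (CO + H2O → CO2 + H2) gives the idealized overall reaction CH4 + 2 H2O → CO2 + 4 H2. The sketch below evaluates only this stoichiometry; it ignores the fuel burned to heat the endothermic reformer, which is why real plants emit more CO2 per tonne of hydrogen than this floor:

# Idealized SMR + water-gas-shift stoichiometry: CH4 + 2 H2O -> CO2 + 4 H2.
# Approximate molar masses in g/mol.
M_CH4, M_CO2, M_H2 = 16.04, 44.01, 2.016

h2_per_ch4 = 4 * M_H2 / M_CH4      # kg of H2 per kg of methane
co2_per_h2 = M_CO2 / (4 * M_H2)    # kg of CO2 per kg of H2 (stoichiometric floor)

print(f"H2 yield: {h2_per_ch4:.2f} kg per kg of CH4")
print(f"CO2 emitted: {co2_per_h2:.1f} t per t of H2, before process heat is counted")
# ~0.50 kg H2 per kg CH4 and ~5.5 t CO2 per t H2; the figures quoted for real plants
# elsewhere in this article (6.6-9.3 t CO2 per t H2) also include process energy.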
Partial oxidation of hydrocarbons.
Other methods for CO and H2 production include partial oxidation of hydrocarbons:
Although less important commercially, coal can serve as a prelude to the shift reaction above:
Olefin production units may produce substantial quantities of byproduct hydrogen particularly from cracking light feedstocks like ethane or propane.
Water electrolysis.
Electrolysis of water is a conceptually simple method of producing hydrogen.
Commercial electrolyzers use nickel-based catalysts in strongly alkaline solution. Platinum is a better catalyst but is expensive.
Electrolysis of brine to yield chlorine also produces hydrogen as a co-product.
Methane pyrolysis.
Hydrogen can be produced by pyrolysis of natural gas (methane).
This route has a lower carbon footprint than commercial hydrogen production processes. Developing a commercial methane pyrolysis process could expedite the expanded use of hydrogen in industrial and transportation applications. Methane pyrolysis is accomplished by passing methane through a molten metal catalyst containing dissolved nickel. Methane is converted to hydrogen gas and solid carbon.
CH4(g) → C(s) + 2 H2(g) (ΔH° = 74 kJ/mol)
The carbon may be sold as a manufacturing feedstock or fuel, or landfilled.
Further research continues in several laboratories, including at Karlsruhe Liquid-metal Laboratory and at University of California – Santa Barbara. BASF built a methane pyrolysis pilot plant.
Thermochemical.
More than 200 thermochemical cycles can be used for water splitting. Many of these cycles, such as the iron oxide cycle, cerium(IV) oxide–cerium(III) oxide cycle, zinc–zinc oxide cycle, sulfur–iodine cycle, copper–chlorine cycle and hybrid sulfur cycle, have been evaluated for their commercial potential to produce hydrogen and oxygen from water and heat without using electricity. A number of labs (including in France, Germany, Greece, Japan, and the United States) are developing thermochemical methods to produce hydrogen from solar energy and water.
Laboratory methods.
H2 is produced in labs, often as a by-product of other reactions. Many metals react with water to produce H2, but the rate of hydrogen evolution depends on the metal, the pH, and the presence of alloying agents. Most often, hydrogen evolution is induced by acids. The alkali and alkaline earth metals, aluminium, zinc, manganese, and iron react readily with aqueous acids. This reaction is the basis of the Kipp's apparatus, which once was used as a laboratory gas source:
In the absence of acid, the evolution of H2 is slower. Because iron is a widely used structural material, its anaerobic corrosion is of technological significance:
Many metals, such as aluminium, are slow to react with water because they form passivating oxide coatings. An alloy of aluminium and gallium, however, does react with water. At high pH, aluminium can produce H2:
Some metal-containing compounds react with acids to evolve H2. Under anaerobic conditions, ferrous hydroxide (Fe(OH)2) can be oxidized by the protons of water to form magnetite and H2. This process is described by the Schikorr reaction:
This process occurs during the anaerobic corrosion of iron and steel in oxygen-free groundwater and in reducing soils below the water table.
Biohydrogen.
H2 is produced by hydrogenase enzymes in some fermentation reactions.
Wells.
There is a natural hydrogen-producing well in Mali and deposits in several other countries, such as France.
Applications.
Petrochemical industry.
Large quantities of H2 are used in the "upgrading" of fossil fuels. Key consumers of H2 include hydrodesulfurization and hydrocracking. Many of these reactions can be classified as hydrogenolysis, i.e., the cleavage of bonds by hydrogen. Illustrative is the separation of sulfur from liquid fossil fuels:
Hydrogenation.
Hydrogenation, the addition of H2 to various substrates, is done on a large scale. Hydrogenation of N2 to produce ammonia by the Haber process consumes a few percent of the energy budget of the entire industry. The resulting ammonia is used to supply most of the protein consumed by humans. Hydrogenation is used to convert unsaturated fats and oils to saturated (trans) fats and oils. The major application is the production of margarine. Methanol is produced by hydrogenation of carbon dioxide. H2 is similarly the source of hydrogen in the manufacture of hydrochloric acid. H2 is also used as a reducing agent for the conversion of some ores to the metals.
Coolant.
Hydrogen is commonly used in power stations as a coolant in generators due to a number of favorable properties that are a direct result of its light diatomic molecules. These include low density, low viscosity, and the highest specific heat and thermal conductivity of all gases.
Energy carrier.
Elemental hydrogen is widely discussed in the context of energy as an energy carrier with potential to help decarbonize economies and mitigate greenhouse gas emissions. This requires hydrogen to be produced cleanly and in sufficient quantities to supply sectors and applications where cheaper and more energy-efficient mitigation alternatives are limited. These include heavy industry and long-distance transport. Hydrogen is a "carrier" of energy rather than an energy resource, because there is no naturally occurring source of hydrogen in useful quantities.
Hydrogen can be deployed as an energy source in fuel cells to produce electricity or via combustion to generate heat. When hydrogen is consumed in fuel cells, the only emission at the point of use is water vapor. Combustion of hydrogen can lead to the thermal formation of harmful nitrogen oxides. The overall lifecycle emissions of hydrogen depend on how it is produced. Nearly all the world's current supply of hydrogen is created from fossil fuels. The main method is steam methane reforming, in which hydrogen is produced from a chemical reaction between steam and methane, the main component of natural gas. Producing one tonne of hydrogen through this process emits 6.6–9.3 tonnes of carbon dioxide. While carbon capture and storage (CCS) could remove a large fraction of these emissions, the overall carbon footprint of hydrogen from natural gas is difficult to assess as of 2021[ [update]], in part because of emissions (including vented and fugitive methane) created in the production of the natural gas itself.
Electricity can be used to split water molecules, producing sustainable hydrogen, provided the electricity was generated sustainably. However, this electrolysis process is currently more expensive than creating hydrogen from methane without CCS and the efficiency of energy conversion is inherently low. Hydrogen can be produced when there is a surplus of variable renewable electricity, then stored and used to generate heat or to re-generate electricity. Hydrogen created through electrolysis using renewable energy is commonly referred to as "green hydrogen." It can be further transformed into synthetic fuels such as ammonia and methanol.
Innovation in hydrogen electrolyzers could make large-scale production of hydrogen from electricity more cost-competitive. There is potential for hydrogen produced this way to play a significant role in decarbonizing energy systems where there are challenges and limitations to replacing fossil fuels with direct use of electricity.
Hydrogen fuel can produce the intense heat required for industrial production of steel, cement, glass, and chemicals, thus contributing to the decarbonisation of industry alongside other technologies, such as electric arc furnaces for steelmaking. However, it is likely to play a larger role in providing industrial feedstock for cleaner production of ammonia and organic chemicals. For example, in steelmaking, hydrogen could function as a clean energy carrier and also as a low-carbon catalyst, replacing coal-derived coke. Hydrogen used to decarbonise transportation is likely to find its largest applications in shipping, aviation and, to a lesser extent, heavy goods vehicles, through the use of hydrogen-derived synthetic fuels such as ammonia and methanol and fuel cell technology. For light-duty vehicles including cars, hydrogen is far behind other alternative fuel vehicles, especially compared with the rate of adoption of battery electric vehicles, and may not play a significant role in future.
Disadvantages of hydrogen as an energy carrier include high costs of storage and distribution due to hydrogen's explosivity, its large volume compared to other fuels, and its tendency to make pipes brittle.
Semiconductor industry.
Hydrogen is employed to saturate broken ("dangling") bonds of amorphous silicon and amorphous carbon, which helps stabilize material properties. It is also a potential electron donor in various oxide materials, including ZnO, CdO, MgO, and a number of other binary and complex oxides.
Biological reactions.
H2 is a product of some types of anaerobic metabolism and is produced by several microorganisms, usually via reactions catalyzed by iron- or nickel-containing enzymes called hydrogenases. These enzymes catalyze the reversible redox reaction between H2 and its components, two protons and two electrons. Creation of hydrogen gas occurs in the transfer of reducing equivalents, produced during pyruvate fermentation, to water. The natural cycle of hydrogen production and consumption by organisms is called the hydrogen cycle.
Bacteria such as "Mycobacterium smegmatis" can use the small amount of hydrogen in the atmosphere as a source of energy when other sources are lacking, using a hydrogenase with small channels that exclude oxygen and so permit the reaction to occur even though the hydrogen concentration is very low and the oxygen concentration is that of normal air.
Hydrogen is the most abundant element in the human body by number of atoms but the third most abundant by mass. H2 occurs in human breath due to the metabolic activity of hydrogenase-containing microorganisms in the large intestine and is a natural component of flatus. The concentration in the breath of fasting people at rest is typically less than 5 parts per million (ppm) but can be 50 ppm when people with intestinal disorders consume molecules they cannot absorb during diagnostic hydrogen breath tests.
Water splitting, in which water is decomposed into its component protons, electrons, and oxygen, occurs in the light reactions in all photosynthetic organisms. Some such organisms, including the alga "Chlamydomonas reinhardtii" and cyanobacteria, have evolved a second step in the dark reactions in which protons and electrons are reduced to form H2 gas by specialized hydrogenases in the chloroplast. Efforts have been undertaken to genetically modify cyanobacterial hydrogenases to efficiently synthesize H2 gas even in the presence of oxygen. Efforts have also been undertaken with genetically modified algae in a bioreactor.
Safety and precautions.
<templatestyles src="Chembox/styles.css"/>
Chemical compound
Hydrogen poses a number of hazards to human safety, from potential detonations and fires when mixed with air to being an asphyxiant in its pure, oxygen-free form. Also, liquid hydrogen is a cryogen and presents dangers (such as frostbite) associated with very cold liquids. Hydrogen dissolves in many metals and in addition to leaking out, may have adverse effects on them, such as hydrogen embrittlement, leading to cracks and explosions. Hydrogen gas leaking into external air may spontaneously ignite. Moreover, hydrogen fire, while being extremely hot, is almost invisible, and thus can lead to accidental burns.
Even interpreting the hydrogen data (including safety data) is confounded by a number of phenomena. Many physical and chemical properties of hydrogen depend on the parahydrogen/orthohydrogen ratio (it often takes days or weeks at a given temperature to reach the equilibrium ratio, for which the data is usually given). Hydrogen detonation parameters, such as critical detonation pressure and temperature, strongly depend on the container geometry.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S = 1"
},
{
"math_id": 1,
"text": "S = 0"
}
]
| https://en.wikipedia.org/wiki?curid=13255 |
13257 | Hydrocarbon | Organic compound consisting entirely of hydrogen and carbon
In organic chemistry, a hydrocarbon is an organic compound consisting entirely of hydrogen and carbon. Hydrocarbons are examples of group 14 hydrides. Hydrocarbons are generally colourless and hydrophobic; their odor is usually faint, and may be similar to that of gasoline or lighter fluid. They occur in a diverse range of molecular structures and phases: they can be gases (such as methane and propane), liquids (such as hexane and benzene), low melting solids (such as paraffin wax and naphthalene) or polymers (such as polyethylene and polystyrene).
In the fossil fuel industries, "hydrocarbon" refers to naturally occurring petroleum, natural gas and coal, or their hydrocarbon derivatives and purified forms. Combustion of hydrocarbons is the main source of the world's energy. Petroleum is the dominant raw-material source for organic commodity chemicals such as solvents and polymers. Most anthropogenic (human-generated) emissions of greenhouse gases are either carbon dioxide released by the burning of fossil fuels, or methane released from the handling of natural gas or from agriculture.
Types.
As defined by the International Union of Pure and Applied Chemistry's nomenclature of organic chemistry, hydrocarbons are classified as follows:
The term 'aliphatic' refers to non-aromatic hydrocarbons. Saturated aliphatic hydrocarbons are sometimes referred to as 'paraffins'. Aliphatic hydrocarbons containing a double bond between carbon atoms are sometimes referred to as 'olefins'.
Usage.
The predominant use of hydrocarbons is as a combustible fuel source. Methane is the predominant component of natural gas. C6 through C10 alkanes, alkenes, cycloalkanes, and aromatic hydrocarbons are the main components of gasoline, naphtha, jet fuel, and specialized industrial solvent mixtures. With the progressive addition of carbon units, the simple non-ring structured hydrocarbons have higher viscosities, lubricating indices, boiling points, solidification temperatures, and deeper color. At the opposite extreme from methane lie the heavy tars that remain as the "lowest fraction" in a crude oil refining retort. They are collected and widely utilized as roofing compounds, pavement material (bitumen), wood preservatives (the creosote series) and as extremely high viscosity shear-resisting liquids.
Some large-scale non-fuel applications of hydrocarbons begin with ethane and propane, which are obtained from petroleum and natural gas. These two gases are converted either to syngas or to ethylene and propylene respectively. Global consumption of benzene in 2021 is estimated at more than 58 million metric tons, which will increase to 60 million tons in 2022.
Hydrocarbons are also prevalent in nature. Some eusocial arthropods, such as the Brazilian stingless bee, "Schwarziana quadripunctata", use unique cuticular hydrocarbon "scents" in order to determine kin from non-kin. This hydrocarbon composition varies with age, sex, nest location, and hierarchical position.
There is also potential to harvest hydrocarbons from plants like "Euphorbia lathyris" and "E. tirucalli" as an alternative and renewable energy source for vehicles that use diesel. Furthermore, endophytic bacteria from plants that naturally produce hydrocarbons have been used in hydrocarbon degradation in attempts to deplete hydrocarbon concentration in polluted soils.
Reactions.
The noteworthy feature of saturated hydrocarbons is their inertness. Unsaturated hydrocarbons (alkenes, alkynes and aromatic compounds) react more readily, by means of substitution, addition, or polymerization. At higher temperatures they undergo dehydrogenation, oxidation and combustion.
Substitution.
Of the classes of hydrocarbons, aromatic compounds uniquely (or nearly so) undergo substitution reactions. The chemical process practiced on the largest scale is the reaction of benzene and ethene to give ethylbenzene:
The resulting ethylbenzene is dehydrogenated to styrene and then polymerized to manufacture polystyrene, a common thermoplastic material.
Free-radical substitution.
Substitution reactions also occur in saturated hydrocarbons (all single carbon–carbon bonds). Such reactions require highly reactive reagents, such as chlorine and fluorine. In the case of chlorination, one of the chlorine atoms replaces a hydrogen atom. The reactions proceed via free-radical pathways, in which the halogen first dissociates into two neutral radical atoms (homolytic fission).
CH4 + Cl2 → CH3Cl + HCl
CH3Cl + Cl2 → CH2Cl2 + HCl
all the way to CCl4 (carbon tetrachloride)
C2H6 + Cl2 → C2H5Cl + HCl
C2H4Cl2 + Cl2 → C2H3Cl3 + HCl
all the way to C2Cl6 (hexachloroethane)
Addition.
Addition reactions apply to alkenes and alkynes. In this reaction a variety of reagents add "across" the pi-bond(s). Chlorine, hydrogen chloride, water, and hydrogen are illustrative reagents.
Addition polymerization.
Alkenes and some alkynes also undergo polymerization by opening of the multiple bonds to produce polyethylene, polybutylene, and polystyrene. The alkyne acetylene polymerizes to produce polyacetylene. Oligomers (chains of a few monomers) may be produced, for example in the Shell higher olefin process, where α-olefins are extended to make longer α-olefins by adding ethylene repeatedly.
Metathesis.
Some hydrocarbons undergo "metathesis", in which substituents attached by C–C bonds are exchanged between molecules. For a single C–C bond it is alkane metathesis, for a double C–C bond it is alkene metathesis (olefin metathesis), and for a triple C–C bond it is alkyne metathesis.
High-temperature reactions.
Combustion.
Combustion of hydrocarbons is currently the main source of the world's energy for electric power generation, heating (such as home heating) and transportation. Often this energy is used directly as heat such as in home heaters, which use either petroleum or natural gas. The hydrocarbon is burnt and the heat is used to heat water, which is then circulated. A similar principle is used to create electrical energy in power plants.
Common properties of hydrocarbons are that they produce steam, carbon dioxide and heat during combustion, and that oxygen is required for combustion to take place. The simplest hydrocarbon, methane, burns as follows:
<chem>\underset{methane}{CH4} + 2O2 -> CO2 + 2H2O</chem>
In inadequate supply of air, carbon black and water vapour are formed:
<chem>\underset{methane}{CH4} + O2 -> C + 2H2O</chem>
And finally, for any linear alkane of n carbon atoms,
formula_0
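The general alkane combustion equation above lends itself to a quick evaluation of oxygen demand and combustion products. The sketch below simply prints the stoichiometric coefficients for a few small alkanes:

# Complete combustion of a linear alkane CnH(2n+2):
#   CnH(2n+2) + (3n+1)/2 O2 -> n CO2 + (n+1) H2O
def combustion_coefficients(n):
    """Return (moles O2 required, moles CO2 produced, moles H2O produced) per mole of alkane."""
    return (3 * n + 1) / 2, n, n + 1

for n, name in [(1, "methane"), (3, "propane"), (8, "octane")]:
    o2, co2, h2o = combustion_coefficients(n)
    print(f"{name} (C{n}H{2 * n + 2}): {o2:g} O2 -> {co2} CO2 + {h2o} H2O")

For methane this reproduces the balanced equation given above (2 O2 → 1 CO2 + 2 H2O).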
Partial oxidation characterizes the reactions of alkenes and oxygen. This process is the basis of rancidification and paint drying.
Benzene burns with a sooty flame when heated in air:
<chem>\underset{benzene}{C6H6} + {15\over 2}O2 -> 6CO2 {+} 3H2O</chem>
Origin.
The vast majority of hydrocarbons found on Earth occur in crude oil, petroleum, coal, and natural gas. They have been exploited and used for thousands of years for a vast range of purposes. Petroleum (lit. 'rock oil') and coal are generally thought to be products of decomposition of organic matter. Coal, in contrast to petroleum, is richer in carbon and poorer in hydrogen. Natural gas is the product of methanogenesis.
A seemingly limitless variety of compounds comprise petroleum, hence the necessity of refineries. These hydrocarbons consist of saturated hydrocarbons, aromatic hydrocarbons, or combinations of the two. Missing in petroleum are alkenes and alkynes. Their production requires refineries. Petroleum-derived hydrocarbons are mainly consumed for fuel, but they are also the source of virtually all synthetic organic compounds, including plastics and pharmaceuticals. Natural gas is consumed almost exclusively as fuel. Coal is used as a fuel and as a reducing agent in metallurgy.
A small fraction of hydrocarbon found on earth, and all currently known hydrocarbon found on other planets and moons, is thought to be abiological.
Hydrocarbons such as ethylene, isoprene, and monoterpenes are emitted by living vegetation.
Some hydrocarbons also are widespread and abundant in the Solar System. Lakes of liquid methane and ethane have been found on Titan, Saturn's largest moon, as confirmed by the "Cassini–Huygens" space probe. Hydrocarbons are also abundant in nebulae forming polycyclic aromatic hydrocarbon compounds.
Environmental impact.
Burning hydrocarbons as fuel, which produces carbon dioxide and water, is a major contributor to anthropogenic global warming.
Hydrocarbons are introduced into the environment through their extensive use as fuels and chemicals as well as through leaks or accidental spills during exploration, production, refining, or transport of fossil fuels. Anthropogenic hydrocarbon contamination of soil is a serious global issue due to contaminant persistence and the negative impact on human health.
When soil is contaminated by hydrocarbons, it can have a significant impact on its microbiological, chemical, and physical properties. This can serve to prevent, slow down or even accelerate the growth of vegetation depending on the exact changes that occur. Crude oil and natural gas are the two largest sources of hydrocarbon contamination of soil.
Bioremediation.
Bioremediation of hydrocarbons from contaminated soil or water is a formidable challenge because of the chemical inertness that characterizes hydrocarbons (hence their survival for millions of years in the source rock). Nonetheless, many strategies have been devised, bioremediation being prominent among them. The basic problem with bioremediation is the paucity of enzymes that act on hydrocarbons. Nonetheless, the area has received regular attention.
Bacteria in the gabbroic layer of the ocean's crust can degrade hydrocarbons; but the extreme environment makes research difficult. Other bacteria such as "Lutibacterium anuloederans" can also degrade hydrocarbons.
Mycoremediation, the breaking down of hydrocarbons by mycelium and mushrooms, is also possible.
Safety.
Hydrocarbons are generally of low toxicity, hence the widespread use of gasoline and related volatile products. Aromatic compounds such as benzene and toluene are narcotic and chronic toxins, and benzene in particular is known to be carcinogenic. Certain rare polycyclic aromatic compounds are carcinogenic.
Hydrocarbons are highly flammable.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\ce{C}_n \\ce{H}_{2n+2} + \\left({{3n+1}\\over 2}\\right)\\ce{O2->} n\\ce{CO2} + (n+1)\\ce{H2O}"
}
]
| https://en.wikipedia.org/wiki?curid=13257 |
1325784 | Piano key frequencies | This is a list of the fundamental frequencies in hertz (cycles per second) of the keys of a modern 88-key standard or 108-key extended piano in twelve-tone equal temperament, with the 49th key, the fifth A (called A4), tuned to 440 Hz (referred to as A440). Every octave is made of twelve steps called semitones. A jump from the lowest semitone to the highest semitone in one octave doubles the frequency (for example, the fifth A is 440 Hz and the sixth A is 880 Hz). The frequency of a pitch is derived by multiplying (ascending) or dividing (descending) the frequency of the previous pitch by the twelfth root of two (approximately 1.059463). For example, to get the frequency one semitone up from A4 (A♯4), multiply 440 Hz by the twelfth root of two. To go from A4 up two semitones (one whole tone) to B4, multiply 440 twice by the twelfth root of two (or once by the sixth root of two, approximately 1.122462). To go from A4 up three semitones to C5 (a minor third), multiply 440 Hz three times by the twelfth root of two (or once by the fourth root of two, approximately 1.189207). For other tuning schemes, refer to musical tuning.
This list of frequencies is for a theoretically ideal piano. On an actual piano, the ratio between semitones is slightly larger, especially at the high and low ends, where string stiffness causes inharmonicity, i.e., the tendency for the harmonic makeup of each note to run sharp. To compensate for this, octaves are tuned slightly wide, stretched according to the inharmonic characteristics of each instrument. This deviation from equal temperament is called the Railsback curve.
The following equation gives the frequency f (Hz) of the nth key on the idealized standard piano with the 49th key tuned to A4 at 440 Hz:
formula_0
where n is shown in the table below.
Conversely, the key number of a pitch with a frequency f (Hz) on the idealized standard piano is:
formula_1
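The two formulas above translate directly into code. The short sketch below implements both directions for the idealized, unstretched equal-tempered piano with the 49th key (A4) at 440 Hz:

# Idealized 12-tone equal temperament, key 49 = A4 = 440 Hz (no stretch tuning).
import math

def key_frequency(n):
    """Frequency in Hz of the nth key of the idealized standard piano."""
    return 440.0 * 2 ** ((n - 49) / 12)

def key_number(f):
    """Key number (possibly fractional) of a pitch with frequency f in Hz."""
    return 12 * math.log2(f / 440.0) + 49

print(key_frequency(49))   # 440.0   (A4)
print(key_frequency(40))   # ~261.63 (C4, middle C, nine semitones below A4)
print(key_frequency(88))   # ~4186.0 (C8, the top note of a standard 88-key piano)
print(key_number(880.0))   # 61.0    (A5, one octave above A4)

On a real instrument the printed values would be adjusted by the stretch tuning described above.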
List.
Values in bold are exact on an idealized standard piano. Keys shaded gray are rare and only appear on extended pianos. The normal 88 keys were numbered 1–88, with the extra low keys numbered 89–97 and the extra high keys numbered 98–108. A 108-key piano that extends from C0 to B8 was first built in 2018 by Stuart & Sons. (Note: these piano key numbers 1-108 are not the n keys in the equations or the table.) | [
{
"math_id": 0,
"text": "\nf(n) = \\left(\\sqrt[12]{2}\\,\\right)^{n-49} \\times 440 \\,\\text{Hz}\\, = 2^{\\frac{n-49}{12}} \\times 440 \\,\\text{Hz}\\,\n"
},
{
"math_id": 1,
"text": "\nn = 12 \\, \\log_2\\left({\\frac{f}{440 \\,\\text{Hz}}}\\right) + 49\n"
}
]
| https://en.wikipedia.org/wiki?curid=1325784 |
13257986 | Plane wave expansion method | Technique in computational electromagnetism
Plane wave expansion method (PWE) refers to a computational technique in electromagnetics used to solve Maxwell's equations by formulating an eigenvalue problem out of the equation. This method is popular among the photonic crystal community as a method of solving for the band structure (dispersion relation) of specific photonic crystal geometries. PWE is traceable to the analytical formulations, and is useful in calculating modal solutions of Maxwell's equations over an inhomogeneous or periodic geometry. It is specifically tuned to solve problems in time-harmonic form, with non-dispersive media (a reformulation of the method named inverse dispersion allows frequency-dependent refractive indices).
Principles.
Plane waves are solutions to the homogeneous Helmholtz equation, and form a basis to represent fields in the periodic media. PWE as applied to photonic crystals as described is primarily sourced from Dr. Danner's tutorial.
The electric or magnetic fields are expanded for each field component in terms of the Fourier series components along the reciprocal lattice vector. Similarly, the dielectric permittivity (which is periodic along reciprocal lattice vector for photonic crystals) is also expanded through Fourier series components.
formula_0
formula_1
with the Fourier series coefficients being the K numbers subscripted by m, n respectively, and the reciprocal lattice vector given by formula_2. In real modeling, the range of components considered will be reduced to just formula_3 instead of the ideal, infinite wave.
Using these expansions in any of the curl-curl relations like,
formula_4
and simplifying under assumptions of a source free, linear, and non-dispersive region we obtain the eigenvalue relations which can be solved.
Example for 1D case.
For a y-polarized, z-propagating electric wave incident on a 1D DBR that is periodic only in the z-direction and homogeneous along x and y, with a lattice period of a, we have the following simplified relations:
formula_5
formula_6
The constitutive eigenvalue equation we finally have to solve becomes,
formula_7
This can be solved by building a matrix for the terms in the left hand side, and finding its eigenvalue and vectors. The eigenvalues correspond to the modal solutions, while the corresponding magnetic or electric fields themselves can be plotted using the Fourier expansions. The coefficients of the field harmonics are obtained from the specific eigenvectors.
The resulting band structure obtained through the eigenmodes of this structure is shown to the right.
Example code.
We can use the following code in MATLAB or GNU Octave to compute the same band structure,
% solve the DBR photonic band structure for a simple
% 1D DBR. air-spacing d, periodicity a, i.e, a > d,
% we assume an infinite stack of 1D alternating eps_r|air layers
% y-polarized, z-directed plane wave incident on the stack
% periodic in the z-direction;
% parameters
d = 8; % air gap
a = 10; % total periodicity
d_over_a = d / a;
eps_r = 12.2500; % dielectric constant, like GaAs,
% max F.S coefs for representing E field, and Eps(r), are
Mmax = 50;
% Q matrix is non-symmetric in this case, Qij != Qji
% Qmn = (2*pi*n + Kz)^2*Km-n
% Kn = delta_n / eps_r + (1 - 1/eps_r) (d/a) sinc(pi.n.d/a)
% here n runs from -Mmax to + Mmax,
freqs = [];
for Kz = - pi / a:pi / (10 * a): + pi / a
Q = zeros(2 * Mmax + 1);
for x = 1:2 * Mmax + 1
for y = 1:2 * Mmax + 1
X = x - Mmax;
Y = y - Mmax;
kn = (1 - 1 / eps_r) * d_over_a .* sinc((X - Y) .* d_over_a) + ((X - Y) == 0) * 1 / eps_r;
Q(x, y) = (2 * pi * (Y - 1) / a + Kz) .^ 2 * kn; % -Mmax<=(Y-1)<=Mmax
end
end
fprintf('Kz = %g\n', Kz)
omega_c = eig(Q);
omega_c = sort(sqrt(omega_c)); % important step
freqs = [freqs; omega_c.'];
end
close
figure
hold on
% plot the lowest bands: one curve per band index (here as many bands as k-points)
for idx = 1:length(- pi / a:pi / (10 * a): + pi / a)
plot(- pi / a:pi / (10 * a): + pi / a, freqs(:, idx), '.-')
end
hold off
xlabel('Kz')
ylabel('omega/c')
title(sprintf('PBG of 1D DBR with d/a=%g, Epsr=%g', d / a, eps_r))
Advantages.
PWE expansions are rigorous solutions. PWE is extremely well suited to the modal solution problem. Large problems can be solved using iterative techniques such as the conjugate gradient method.
For both generalized and normal eigenvalue problems, just a few band-index plots in the band-structure diagrams are required, usually lying on the Brillouin zone edges. This corresponds to computing eigenmode solutions using iterative techniques, as opposed to diagonalization of the entire matrix.
The PWEM is highly efficient for calculating modes in periodic dielectric structures. Being a Fourier-space method, it suffers from the Gibbs phenomenon and slow convergence in some configurations when fast Fourier factorization is not used. It is the method of choice for calculating the band structure of photonic crystals. It is not easy to understand at first, but it is easy to implement.
Disadvantages.
Sometimes spurious modes appear. Large problems scale as "O"("n"3) with the number of plane waves ("n") used in the problem, which is both time-consuming and demanding in memory.
Alternatives include the order-N spectral method and methods using finite-difference time-domain (FDTD), which are simpler and model transients.
If implemented correctly, spurious solutions are avoided. It is less efficient when index contrast is high or when metals are incorporated. It cannot be used for scattering analysis.
Being a Fourier-space method, Gibbs phenomenon affects the method's accuracy. This is particularly problematic for devices with high dielectric contrast. | [
{
"math_id": 0,
"text": "\\frac{1}{\\epsilon_r} = \\sum_{m=-\\infty}^{+\\infty} K_m^{\\epsilon_r} e^{-i \\mathbf{G} \\cdot \\mathbf{r}}"
},
{
"math_id": 1,
"text": "E(\\omega,\\mathbf{r}) = \\sum_{n=-\\infty}^{+\\infty} K_n^{E_y} e^{-i\\mathbf{G} \\cdot \\mathbf{r}} e^{-i\\mathbf{k} \\cdot \\mathbf{r}}"
},
{
"math_id": 2,
"text": "\\mathbf{G}"
},
{
"math_id": 3,
"text": "\\pm N_\\max"
},
{
"math_id": 4,
"text": "\\frac{1}{\\epsilon(\\mathbf{r})} \\nabla \\times \\nabla \\times E(\\mathbf{r},\\omega) = \\left( \\frac{\\omega}{c} \\right)^2 E(\\mathbf{r},\\omega)"
},
{
"math_id": 5,
"text": "\\frac{1}{\\epsilon_r} = \\sum_{m=-\\infty}^{+\\infty} K_m^{\\epsilon_r} e^{-i \\frac{2\\pi m}{a}z}"
},
{
"math_id": 6,
"text": "E(\\omega,\\mathbf{r}) = \\sum_{n=-\\infty}^{+\\infty} K_n^{E_y} e^{-i\\frac{2\\pi n}{a}z} e^{-i \\mathbf{k} \\cdot \\mathbf{r}}"
},
{
"math_id": 7,
"text": "\\sum_n{\\left( \\frac{2\\pi n}{a} + k_z \\right)\\left( \\frac{2\\pi m}{a} + k_z \\right) K_{m-n}^{\\epsilon_r} K_{n}^{E_y}} = \\frac{\\omega^2}{c^2} K_{m}^{E_y}"
}
]
| https://en.wikipedia.org/wiki?curid=13257986 |
13258327 | McClellan oscillator | The McClellan oscillator is a market breadth indicator used in technical analysis by financial analysts of the New York Stock Exchange to evaluate the balance between the advancing and declining stocks. The McClellan oscillator is based on the Advance-Decline Data and it could be applied to stock market exchanges, indexes, portfolio of stocks or any basket of stocks.
How it works.
The simplified formula for determining the oscillator is:
formula_0
where "advances" is the number of NYSE-listed stocks trading above their previous day's close and "declines" is the number of NYSE-listed stocks trading below their previous day's close.
Therefore, crossovers of the McClellan oscillator with the zero center line around which it oscillates are commonly interpreted as follows: a cross above zero is read as a bullish signal (advancing issues are gaining strength), while a cross below zero is read as a bearish signal (declining issues are dominating).
McClellan Summation Index.
The McClellan Summation Index (MSI) is calculated by adding each day's McClellan oscillator to the previous day's summation index.
MSI properties:
The Summation index is oversold at −1000 to −1250 or overbought at 1000 to 1250.
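A minimal sketch of both indicators follows, taking daily advance and decline counts as input. It uses pandas exponential moving averages with spans of 19 and 39 days, which correspond to the traditional 10% and 5% smoothing constants; the summation index is started at zero here, although in practice an arbitrary starting constant is often added:

# Sketch of the McClellan oscillator and summation index from daily breadth data.
import pandas as pd

def mcclellan(advances: pd.Series, declines: pd.Series) -> pd.DataFrame:
    net = advances - declines
    ema19 = net.ewm(span=19, adjust=False).mean()   # 10% trend
    ema39 = net.ewm(span=39, adjust=False).mean()   # 5% trend
    oscillator = ema19 - ema39
    summation = oscillator.cumsum()   # each day's oscillator added to the running total
    return pd.DataFrame({"oscillator": oscillator, "summation_index": summation})

# Example with made-up advance/decline counts:
adv = pd.Series([1800, 1500, 1200, 1900, 2100, 1700])
dec = pd.Series([1100, 1400, 1700, 1000, 800, 1200])
print(mcclellan(adv, dec))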
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Oscillator = (\\text{19-day EMA of advances minus declines}) - (\\text{39 day EMA of advances minus declines})"
}
]
| https://en.wikipedia.org/wiki?curid=13258327 |
13259181 | Bollard pull | Bollard pull is a conventional measure of the pulling (or towing) power of a watercraft. It is defined as the force (usually in tonnes-force or kilonewtons (kN)) exerted by a vessel under full power, on a shore-mounted bollard through a tow-line, commonly measured in a practical test (but sometimes simulated) under test conditions that include calm water, no tide, level trim, and sufficient depth and side clearance for a free propeller stream. Like the horsepower or mileage rating of a car, it is a convenient but idealized number that must be adjusted for operating conditions that differ from the test. The bollard pull of a vessel may be reported as two numbers, the "static" or "maximum" bollard pull – the highest force measured – and the "steady" or "continuous" bollard pull, the average of measurements over an interval of, for example, 10 minutes. An equivalent measurement on land is known as drawbar pull, or tractive force, which is used to measure the total horizontal force generated by a locomotive, a piece of heavy machinery such as a tractor, or a truck, (specifically a ballast tractor), which is utilized to move a load.
Bollard pull is primarily (but not only) used for measuring the strength of tugboats, with the largest commercial harbour tugboats in the 2000–2010s having around of bollard pull, which is described as above "normal" tugboats. The world's strongest tug since its delivery in 2020 is Island Victory (Vard Brevik 831) of Island Offshore, with a bollard pull of . Island Victory is not a typical tug; rather, it is a special class of ship used in the petroleum industry called an Anchor Handling Tug Supply vessel.
For vessels that hold station by thrusting under power against a fixed object, such as crew transfer ships used in offshore wind turbine maintenance, an equivalent measure "bollard push" may be given.
Background.
Unlike in ground vehicles, the statement of installed horsepower is not sufficient to understand how strong a tug is. This is because the tug operates mainly at very low or zero speeds, and thus may not be delivering power (power = force × velocity; so, for zero speed, the power is also zero), yet still absorbing torque and delivering thrust. Bollard pull values are stated in tonnes-force (written as t or tonne) or kilonewtons (kN).
Effective towing power is equal to total resistance times velocity of the ship.
formula_0
Total resistance is the sum of frictional resistance, formula_1, residual resistance, formula_2, and air resistance, formula_3.
formula_4
formula_5
formula_6
Where:
formula_7 is the density of water
formula_8 is the density of air
formula_9 is the velocity of (relative to) water
formula_10 is the velocity of (relative to) air
formula_11 is resistance coefficient of frictional resistance
formula_12 is resistance coefficient of residual resistance
formula_13 is resistance coefficient of air resistance (usually quite high, >0.9, as ships are not designed to be aerodynamic)
formula_14 is the wetted area of the ship
formula_15 is the cross-sectional area of the ship above the waterline
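The resistance formulas above can be evaluated directly once the coefficients and areas are known. The sketch below does so for a made-up set of values; every number is a placeholder chosen for illustration, not data for any real vessel:

# Total resistance and effective towing power from the component formulas above.
# All input values are illustrative placeholders, not real ship data.
RHO_WATER = 1025.0   # kg/m^3, seawater
RHO_AIR = 1.225      # kg/m^3

def resistance(C, rho, V, A):
    """Generic 1/2 * C * rho * V^2 * A resistance term, in newtons."""
    return 0.5 * C * rho * V ** 2 * A

V_w, V_a = 3.0, 5.0          # speed through water / relative wind speed, m/s
A_s, A_a = 900.0, 150.0      # wetted area / above-water cross-section, m^2
C_F, C_R, C_A = 0.0025, 0.0015, 0.95

R_F = resistance(C_F, RHO_WATER, V_w, A_s)
R_R = resistance(C_R, RHO_WATER, V_w, A_s)
R_A = resistance(C_A, RHO_AIR, V_a, A_a)
R_T = R_F + R_R + R_A
P_E = R_T * V_w              # effective towing power, watts

print(f"R_T = {R_T / 1000:.1f} kN, P_E = {P_E / 1000:.1f} kW")

At zero speed the effective towing power computed this way is zero even though the propeller still delivers thrust, which is exactly why bollard pull rather than power is quoted for tugs.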
Measurement.
Values for bollard pull can be determined in two ways.
Practical trial.
This method is useful for one-off ship designs and smaller shipyards. It is limited in precision: a number of boundary conditions need to be observed to obtain reliable results. Summarizing the requirements below, practical bollard pull trials need to be conducted in a deep water seaport, ideally not at the mouth of a river, on a calm day with hardly any traffic.
See Figure 2 for an illustration of error influences in a practical bollard pull trial. Note the difference in elevation of the ends of the line (the port bollard is higher than the ship's towing hook). Furthermore, there is the partial short circuit in propeller discharge current, the uneven trim of the ship and the short length of the tow line. All of these factors contribute to measurement error.
Simulation.
This method eliminates much of the uncertainties of the practical trial. However, any numerical simulation also has an error margin. Furthermore, simulation tools and computer systems capable of determining bollard pull for a ship design are costly. Hence, this method makes sense for larger shipyards and for the design of a series of ships.
Both methods can be combined. Practical trials can be used to validate the result of numerical simulation.
Human-powered vehicles.
Practical bollard pull tests under simplified conditions are conducted for human powered vehicles. There, bollard pull is often a category in competitions and gives an indication of the power train efficiency. Although conditions for such measurements are inaccurate in absolute terms, they are the same for all competitors. Hence, they can still be valid for comparing several craft.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P_E=R_T \\times V"
},
{
"math_id": 1,
"text": "R_F"
},
{
"math_id": 2,
"text": "R_R"
},
{
"math_id": 3,
"text": "R_A"
},
{
"math_id": 4,
"text": "R_F= \\frac{1}{2} \\times C_F \\times \\rho_w \\times V_w^2 \\times A_s "
},
{
"math_id": 5,
"text": "R_R= \\frac{1}{2} \\times C_R \\times \\rho_w \\times V_w^2 \\times A_s "
},
{
"math_id": 6,
"text": "R_A= \\frac{1}{2} \\times C_A \\times \\rho_a \\times V_a^2 \\times A_a "
},
{
"math_id": 7,
"text": "\\rho_w "
},
{
"math_id": 8,
"text": "\\rho_a "
},
{
"math_id": 9,
"text": "V_w "
},
{
"math_id": 10,
"text": "V_a "
},
{
"math_id": 11,
"text": "C_F "
},
{
"math_id": 12,
"text": "C_R "
},
{
"math_id": 13,
"text": "C_A "
},
{
"math_id": 14,
"text": "A_s "
},
{
"math_id": 15,
"text": "A_a "
}
]
| https://en.wikipedia.org/wiki?curid=13259181 |
13260616 | Krivine–Stengle Positivstellensatz | Theorem of real algebraic geometry
In real algebraic geometry, the Krivine–Stengle Positivstellensatz (German for "positive-locus-theorem") characterizes polynomials that are positive on a semialgebraic set, which is defined by systems of inequalities of polynomials with real coefficients, or more generally, coefficients from any real closed field.
It can be thought of as a real analogue of Hilbert's Nullstellensatz (which concerns complex zeros of polynomial ideals), and this analogy is at the origin of its name. It was proved by the French mathematician Jean-Louis Krivine and then rediscovered by the Canadian Gilbert Stengle.
Statement.
Let R be a real closed field, and F = {"f"1, "f"2, ..., "f""m"} and G = {"g"1, "g"2, ..., "g""r"} finite sets of polynomials over R in n variables. Let W be the semialgebraic set
formula_0
and define the preordering associated with W as the set
formula_1
where Σ2[X1...,Xn] is the set of sum-of-squares polynomials. In other words, P(F, G) = C + I, where C is the cone generated by F (i.e., the subsemiring of R[X1...,Xn] generated by F and arbitrary squares) and I is the ideal generated by G.
Let p ∈ R[X1...,Xn] be a polynomial. "Krivine–Stengle Positivstellensatz" states that
(i) formula_2 if and only if formula_3 and formula_4 such that formula_5.
(ii) formula_6 if and only if formula_3 such that formula_7.
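As a minimal illustration (not taken from the original sources), let n = 1, F = {X} and G = ∅, so that W = [0, ∞). For (i), the polynomial p = X is nonnegative on W; taking q1 = X ∈ P(F, ∅), s = 1 and q2 = 0 gives q1p = X2, which equals p2s + q2. For (ii), the polynomial p = X + 1 is strictly positive on W; taking q1 = 1 and q2 = X ∈ P(F, ∅) gives q1p = 1 + X = 1 + q2.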
The "weak " is the following variant of the . Let R be a real closed field, and F, G, and H finite subsets of R[X1...,Xn]. Let C be the cone generated by F, and I the ideal generated by G. Then
formula_8
if and only if
formula_9
Variants.
The Krivine–Stengle Positivstellensatz also has the following refinements under additional assumptions. It should be remarked that Schmüdgen's Positivstellensatz has a weaker assumption than Putinar's Positivstellensatz, but the conclusion is also weaker.
Schmüdgen's Positivstellensatz.
Suppose that formula_10. If the semialgebraic set formula_11 is compact, then each polynomial formula_12 that is strictly positive on formula_13 can be written as a polynomial in the defining functions of formula_14 with sums-of-squares coefficients, i.e. formula_15. Here p is said to be "strictly positive on formula_14" if formula_16 for all formula_17. Note that Schmüdgen's Positivstellensatz is stated for formula_10 and does not hold for arbitrary real closed fields.
Putinar's Positivstellensatz.
Define the quadratic module associated with W as the set
formula_18
Assume there exists "L" > 0 such that formula_19 If formula_16 for all formula_20, then p ∈ Q(F, G).
{
"math_id": 0,
"text": "W=\\{x\\in R^n\\mid\\forall f\\in F,\\,f(x)\\ge0;\\, \\forall g\\in G,\\,g(x)=0\\},"
},
{
"math_id": 1,
"text": "P(F,G) = \\left\\{ \\sum_{\\alpha \\in \\{0,1\\}^m} \\sigma_\\alpha f_1^{\\alpha_1} \\cdots f_m^{\\alpha_m} + \\sum_{\\ell=1}^r \\varphi_\\ell g_\\ell : \\sigma_\\alpha \\in \\Sigma^2[X_1,\\ldots,X_n];\\ \\varphi_\\ell \\in R[X_1,\\ldots,X_n] \\right\\} "
},
{
"math_id": 2,
"text": "\\forall x\\in W\\;p(x)\\ge 0"
},
{
"math_id": 3,
"text": "\\exists q_1,q_2\\in P(F,G)"
},
{
"math_id": 4,
"text": "s \\in \\mathbb{Z}"
},
{
"math_id": 5,
"text": "q_1 p = p^{2s} + q_2"
},
{
"math_id": 6,
"text": "\\forall x\\in W\\;p(x)>0"
},
{
"math_id": 7,
"text": "q_1 p = 1 + q_2"
},
{
"math_id": 8,
"text": "\\{x\\in R^n\\mid\\forall f\\in F\\,f(x)\\ge0\\land\\forall g\\in G\\,g(x)=0\\land\\forall h\\in H\\,h(x)\\ne0\\}=\\emptyset"
},
{
"math_id": 9,
"text": "\\exists f \\in C,g \\in I,n \\in \\mathbb{N}\\; f+g+\\left(\\prod H\\right)^{\\!2n} = 0."
},
{
"math_id": 10,
"text": "R = \\mathbb{R}"
},
{
"math_id": 11,
"text": "W=\\{x\\in \\mathbb{R}^n\\mid\\forall f\\in F,\\,f(x)\\ge0\\}"
},
{
"math_id": 12,
"text": " p \\in \\mathbb{R}[X_1, \\dots, X_n] "
},
{
"math_id": 13,
"text": "W"
},
{
"math_id": 14,
"text": " W "
},
{
"math_id": 15,
"text": " p \\in P(F, \\emptyset) "
},
{
"math_id": 16,
"text": "p(x)>0"
},
{
"math_id": 17,
"text": " x \\in W "
},
{
"math_id": 18,
"text": "Q(F,G) = \\left\\{ \\sigma_0 + \\sum_{j=1}^m \\sigma_j f_j + \\sum_{\\ell=1}^r \\varphi_\\ell g_\\ell : \\sigma_j \\in \\Sigma^2 [X_1,\\ldots,X_n];\\ \\varphi_\\ell \\in \\mathbb{R}[X_1,\\ldots,X_n] \\right\\} "
},
{
"math_id": 19,
"text": "L - \\sum_{i=1}^n x_i^2 \\in Q(F,G)."
},
{
"math_id": 20,
"text": "x \\in W"
}
]
| https://en.wikipedia.org/wiki?curid=13260616 |
1326107 | Steady state | State in which variables of a system are unchanging in time
In systems theory, a system or a process is in a steady state if the variables (called state variables) which define the behavior of the system or the process are unchanging in time. In continuous time, this means that for those properties "p" of the system, the partial derivative with respect to time is zero and remains so:
formula_0
In discrete time, it means that the first difference of each property is zero and remains so:
formula_1
The concept of a steady state has relevance in many fields, in particular thermodynamics, economics, and engineering. If a system is in a steady state, then the recently observed behavior of the system will continue into the future. In stochastic systems, the probabilities that various states will be repeated will remain constant. See for example Linear difference equation#Conversion to homogeneous form for the derivation of the steady state.
In many systems, a steady state is not achieved until some time after the system is started or initiated. This initial situation is often identified as a transient state, start-up or warm-up period. For example, while the flow of fluid through a tube or electricity through a network could be in a steady state because there is a constant flow of fluid or electricity, a tank or capacitor being drained or filled with fluid is a system in transient state, because its volume of fluid changes with time.
Often, a steady state is approached asymptotically. An unstable system is one that diverges from the steady state. See for example Linear difference equation#Stability.
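A compact way to see the asymptotic approach is the first-order linear difference equation x(t+1) = a·x(t) + b with |a| < 1, whose steady state is x* = b/(1 - a). The sketch below iterates it from an arbitrary starting value:

# First-order linear difference equation x_{t+1} = a*x_t + b with |a| < 1.
# The iterates approach the steady state x* = b / (1 - a) asymptotically.
a, b = 0.8, 5.0
x_star = b / (1 - a)    # = 25.0

x = 0.0                 # arbitrary initial (transient) state
for t in range(30):
    x = a * x + b
print(x, x_star)        # after 30 steps x is within a fraction of a percent of 25.0

With |a| > 1 the same iteration diverges, which is the unstable case mentioned above.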
In chemistry, a steady state is a more general situation than dynamic equilibrium. While a dynamic equilibrium occurs when two or more reversible processes occur at the same rate, and such a system can be said to be in a steady state, a system that is in a steady state may not necessarily be in a state of dynamic equilibrium, because some of the processes involved are not reversible. In other words, dynamic equilibrium is just one manifestation of a steady state.
Applications.
Economics.
A "steady state economy" is an economy (especially a national economy but possibly that of a city, a region, or the world) of stable size featuring a stable population and stable consumption that remain at or below carrying capacity. In the economic growth model of Robert Solow and Trevor Swan, the steady state occurs when gross investment in physical capital equals depreciation and the economy reaches economic equilibrium, which may occur during a period of growth.
Electrical engineering.
In electrical engineering and electronic engineering, "steady state" is an equilibrium condition of a circuit or network that occurs as the effects of transients are no longer important. Steady state is also used as an approximation in systems with on-going transient signals, such as audio systems, to allow simplified analysis of first order performance.
Sinusoidal Steady State Analysis is a method for analyzing alternating current circuits using the same techniques as for solving DC circuits.
The ability of an electrical machine or power system to regain its original/previous state is called Steady State Stability.
The stability of a system refers to the ability of a system to return to its steady state when subjected to a disturbance. As mentioned before, power is generated by synchronous generators that operate in synchronism with the rest of the system. A generator is synchronized with a bus when both of them have the same frequency, voltage and phase sequence. Power system stability can thus be defined as the ability of the power system to return to steady state without losing synchronism. Usually power system stability is categorized into Steady State, Transient and Dynamic Stability.
Steady State Stability studies are restricted to small and gradual changes in the system operating conditions. They concentrate on keeping bus voltages close to their nominal values, ensuring that phase angles between two buses are not too large, and checking for the overloading of power equipment and transmission lines. These checks are usually done using power flow studies.
Transient Stability involves the study of the power system following a major disturbance. Following a large disturbance in the synchronous alternator, the machine power (load) angle changes due to sudden acceleration of the rotor shaft. The objective of the transient stability study is to ascertain whether the load angle returns to a steady value following the clearance of the disturbance.
The ability of a power system to maintain stability under continuous small disturbances is investigated under the name of Dynamic Stability (also known as small-signal stability). These small disturbances occur due to random fluctuations in loads and generation levels. In an interconnected power system, these random variations can lead to catastrophic failure, as they may force the rotor angle to increase steadily.
Steady state determination is an important topic, because many design specifications of electronic systems are given in terms of the steady-state characteristics. Periodic steady-state solution is also a prerequisite for small signal dynamic modeling. Steady-state analysis is therefore an indispensable component of the design process.
In some cases, it is useful to consider constant envelope vibration—vibration that never settles down to motionlessness, but continues to move at constant amplitude—a kind of steady-state condition.
Chemical engineering.
In chemistry, thermodynamics, and other chemical engineering contexts, a "steady state" is a situation in which all state variables are constant in spite of ongoing processes that strive to change them. For an entire system to be at steady state, i.e. for all state variables of a system to be constant, there must be a flow through the system (compare mass balance). One of the simplest examples of such a system is the case of a bathtub with the tap open but without the bottom plug: after a certain time the water flows in and out at the same rate, so the water level (the state variable being volume) stabilizes and the system is at steady state. The volume stabilizing inside the tub depends on the size of the tub, the diameter of the exit hole and the flowrate of water in. Since the tub can overflow, eventually a steady state can be reached where the water flowing in equals the overflow plus the water out through the drain.
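A rough numerical sketch of the bathtub example follows (Python). The inflow rate, the time step and the assumption that outflow grows with the square root of the stored volume (as for a free-draining hole) are illustrative choices rather than part of the description above; with them, the simulated volume settles where inflow equals outflow.
# Sketch: water volume in a tub with constant inflow and level-dependent outflow.
# q_in, k and the outflow law k*sqrt(V) are illustrative assumptions.
import math

q_in, k, dt = 2.0, 0.5, 0.01   # inflow, outflow coefficient, time step
V = 0.0                        # start with an empty tub
for _ in range(100_000):       # simulate 1000 time units
    q_out = k * math.sqrt(V)
    V += (q_in - q_out) * dt   # mass balance: dV/dt = q_in - q_out
print(V)                       # approaches (q_in / k)**2 = 16, where inflow = outflow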
A steady state flow process requires conditions at all points in an apparatus remain constant as time changes. There must be no accumulation of mass or energy over the time period of interest. The same mass flow rate will remain constant in the flow path through each element of the system. Thermodynamic properties may vary from point to point, but will remain unchanged at any given point.
Mechanical engineering.
When a periodic force is applied to a mechanical system, it will typically reach a steady state after going through some transient behavior. This is often observed in vibrating systems, such as a clock pendulum, but can happen with any type of stable or semi-stable dynamic system. The length of the transient state will depend on the initial conditions of the system. Given certain initial conditions, a system may be in steady state from the beginning.
Biochemistry.
In biochemistry, the study of biochemical pathways is an important topic. Such pathways will often display steady-state behavior where the chemical species are unchanging, but there is a continuous dissipation of flux through the pathway. Many, but not all, biochemical pathways evolve to stable, steady states. As a result, the steady state represents an important reference state to study. This is also related to the concept of homeostasis, however, in biochemistry, a steady state can be stable or unstable such as in the case of sustained oscillations or bistable behavior.
Physiology.
Homeostasis (from Greek ὅμοιος, "hómoios", "similar" and στάσις, "stásis", "standing still") is the property of a system that regulates its internal environment and tends to maintain a stable, constant condition. Typically used to refer to a living organism, the concept came from that of milieu interieur that was created by Claude Bernard and published in 1865. Multiple dynamic equilibrium adjustment and regulation mechanisms make homeostasis possible.
Fiber optics.
In fiber optics, "steady state" is a synonym for equilibrium mode distribution.
Pharmacokinetics.
In Pharmacokinetics, steady state is a dynamic equilibrium in the body where drug concentrations consistently stay within a therapeutic limit over time.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{\\partial p}{\\partial t} = 0 \\quad \\text{for all present and future } t."
},
{
"math_id": 1,
"text": "p_t-p_{t-1}=0 \\quad \\text{for all present and future } t."
}
]
| https://en.wikipedia.org/wiki?curid=1326107 |
1326120 | Spectroradiometer | A spectroradiometer is a light measurement tool that is able to measure both the wavelength and amplitude of the light emitted from a light source. Spectrometers discriminate the wavelength based on the position the light hits at the detector array allowing the full spectrum to be obtained with a single acquisition. Most spectrometers have a base measurement of counts which is the un-calibrated reading and is thus impacted by the sensitivity of the detector to each wavelength. By applying a calibration, the spectrometer is then able to provide measurements of spectral irradiance, spectral radiance and/or spectral flux. This data is also then used with built in or PC software and numerous algorithms to provide readings or Irradiance (W/cm2), Illuminance (lux or fc), Radiance (W/sr), Luminance (cd), Flux (Lumens or Watts), Chromaticity, Color Temperature, Peak and Dominant Wavelength. Some more complex spectrometer software packages also allow calculation of PAR μmol/m2/s, Metamerism, and candela calculations based on distance and include features like 2- and 20-degree observer, baseline overlay comparisons, transmission and reflectance.
Spectrometers are available in numerous packages and sizes covering many wavelength ranges. The effective wavelength (spectral) range of a spectrometer is determined not only by the grating dispersion ability but also depends on the detectors' sensitivity range. Limited by the semiconductor's band gap the silicon-based detector responds to 200-1100 nm while the InGaAs based detector is sensitive to 900-1700 nm (or out to 2500 nm with cooling).
Lab/Research spectrometers often cover a broad spectral range from UV to NIR and require a PC. There are also IR Spectrometers that require higher power to run a cooling system. Many Spectrometers can be optimized for a specific range i.e. UV, or VIS and combined with a second system to allow more precise measurements, better resolution, and eliminate some of the more common errors found in broadband system such as stray light and lack of sensitivity.
Portable devices are also available for numerous spectral ranges covering UV to NIR and offer many different package styles and sizes. Hand held systems with integrated displays typically have built in optics, and an onboard computer with pre-programmed software. Mini spectrometers are also able to be used hand held, or in the lab as they are powered and controlled by a PC and require a USB cable. Input optics may be incorporated or are commonly attached by a fiber optic light guide. There are also micro Spectrometers smaller than a quarter that can be integrated into a system, or used stand alone.
Background.
The field of spectroradiometry concerns itself with the measurement of absolute radiometric quantities in narrow wavelength intervals. It is useful to sample the spectrum with narrow bandwidth and wavelength increments because many sources have line structures. Most often in spectroradiometry, spectral irradiance is the desired measurement. In practice, the average spectral irradiance is measured, shown mathematically as the approximation:
formula_0
where formula_1 is the spectral irradiance, formula_2 is the radiant flux of the source (SI unit: watt, W) within a wavelength interval formula_3 (SI unit: meter, m), incident on the surface area formula_4 (SI unit: square meter, m2). The SI unit for spectral irradiance is W/m3. However, it is often more useful to measure area in terms of centimeters and wavelength in nanometers, so submultiples of the SI units of spectral irradiance will be used, for example μW/cm2*nm.
Spectral irradiance will vary from point to point on the surface in general. In practice, it is important to note how radiant flux varies with direction, the size of the solid angle subtended by the source at each point on the surface, and the orientation of the surface. Given these considerations, it is often more prudent to use a more rigorous form of the equation to account for these dependencies.
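As a worked example of the approximation above (Python; the flux, area and bandwidth values are invented for illustration and do not describe any particular instrument):
# Average spectral irradiance E = delta_Phi / (delta_A * delta_lambda).
delta_phi = 2.0e-6   # radiant flux within the band, W (illustrative value)
delta_A = 1.0        # collection area, cm^2 (illustrative value)
delta_lam = 5.0      # wavelength interval, nm (illustrative value)

E = delta_phi / (delta_A * delta_lam)
print(E * 1e6, "uW/(cm^2*nm)")   # approximately 0.4 uW/(cm^2*nm)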
Note that the prefix "spectral" is to be understood as an abbreviation of the phrase "spectral concentration of" which is understood and defined by the CIE as the "quotient of the radiometric quantity taken over an infinitesimal range on either side of a given wavelength, by the range".
Spectral power distribution.
The spectral power distribution (SPD) of a source describes how much flux reaches the sensor over a particular wavelength and area. This effectively expresses the per-wavelength contribution to the radiometric quantity being measured. The SPD of a source is commonly shown as an SPD curve. SPD curves provide a visual representation of the color characteristics of a light source, showing the radiant flux emitted by the source at various wavelengths across the visible spectrum. It is also a metric by which we can evaluate a light source's ability to render colors, that is, whether a certain color stimulus can be properly rendered under a given illuminant.
Sources of error.
The quality of a given spectroradiometric system is a function of its electronics, optical components, software, power supply, and calibration. Under ideal laboratory conditions and with highly trained experts, it is possible to achieve small (a few tenths of a percent to a few percent) errors in measurements. However, in many practical situations, there is the likelihood of errors on the order of 10 percent. Several types of error are at play when taking physical measurements. The three basic types of error noted as the limiting factors of accuracy of measurement are random, systematic, and periodic errors.
In addition to these generic sources of error, a few of the more specific reasons for error in spectroradiometry include:
Gamma-scientific, a California-based manufacturer of light measurement devices, lists seven factors affecting the accuracy and performance of their spectroradiometers, due to either the system calibration, the software and power supply, the optics, or the measurement engine itself.
Definitions.
Stray light.
Stray light is unwanted wavelength radiation reaching the incorrect detector element. It generates erroneous electronic counts not related to designed spectral signal for the pixel or element of the detector array. It can come from light scatter and reflection of imperfect optical elements as well as higher order diffraction effects. The second order effect can be removed or at least dramatically reduced, by installing order sorting filters before the detector.
A Si detector's sensitivity to visible and NIR is nearly an order of magnitude larger than that in the UV range. This means that the pixels at the UV spectral position respond to stray light in the visible and NIR much more strongly than to their own designed spectral signal. Therefore, the stray light impacts in the UV region are much more significant compared to visible and NIR pixels. This situation gets worse the shorter the wavelength.
When measuring broad band light with a small fraction of UV signal, the stray light impact can sometimes be dominant in the UV range since the detector pixels are already struggling to get enough UV signal from the source. For this reason, calibration using a QTH standard lamp can have huge errors (more than 100%) below 350 nm, and a deuterium standard lamp is required for more accurate calibration in this region. In fact, absolute light measurement in the UV region can have large errors even with the correct calibration when the majority of the electronic counts in these pixels is the result of stray light (longer wavelengths striking instead of the actual UV light).
Calibration errors.
There are numerous companies that offer calibration for spectrometers, but not all are equal. It is important to find a traceable, certified laboratory to perform calibration. The calibration certificate should state the light source used (ex: Halogen, Deuterium, Xenon, LED), and the uncertainty of the calibration for each band (UVC, UVB, VIS..), each wavelength in nm, or for the full spectrum measured. It should also list the confidence level for the calibration uncertainty.
Incorrect settings.
Like a camera, most spectrometers allow the user to select the exposure time and quantity of samples to be collected. Setting the integration time and the number of scans is an important step. Too long an integration time can cause saturation (in a camera photo this could appear as a large white spot, whereas in a spectrometer it can appear as a dip or cut-off peak). Too short an integration time can generate noisy results (in a camera photo this would be a dark or blurry area, whereas in a spectrometer this may appear as spiky or unstable readings).
The exposure time is the time the light falls on the sensor during a measurement. Adjusting this parameter changes the overall sensitivity of the instrument, as changing the exposure time does for a camera. The integration time varies by instrument, with a minimum of about 0.5 ms and a maximum of about 10 minutes per scan. A practical setting is in the range of 3 to 999 ms, depending on the light intensity.
The integration time should be adjusted for a signal which does not exceed the maximum counts (16-bit CCD has 65,536, 14-bit CCD has 16,384). Saturation occurs when the integration time is set too high. Typically, a peak signal of about 85% of the maximum is a good target and yields a good S/N ratio. (ex: 60K counts or 16K counts respectively)
The number of scans indicates how many measurements will be averaged. Other things being equal, the Signal-to-Noise Ratio (SNR) of the collected spectra improves by the square root of the number N of scans averaged. For example, if 16 spectral scans are averaged, the SNR is improved by a factor of 4 over that of a single scan.
S/N ratio is measured at the input light level which reaches the full scale of the spectrometer. It is the ratio of signal counts Cs (usually at full scale) to RMS (root mean square) noise at this light level. This noise includes the dark noise Nd, the shot noise Ns related to the counts generated by the input light, and read-out noise. This is the best S/N ratio one can get from the spectrometer for light measurements.
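A small simulation (Python with NumPy; the signal level, noise standard deviation and number of scans are arbitrary) illustrating the square-root-of-N improvement from averaging scans described above:
# Averaging N noisy scans improves the signal-to-noise ratio by roughly sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
signal, noise_sd, n_scans = 1000.0, 50.0, 16   # arbitrary illustrative values

scans = signal + rng.normal(0.0, noise_sd, size=(n_scans, 2048))
single_snr = signal / scans[0].std()
averaged_snr = signal / scans.mean(axis=0).std()
print(single_snr, averaged_snr)   # the averaged SNR is roughly 4x (sqrt(16)) larger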
How it works.
The essential components of a spectroradiometric system are as follows:
Input optics.
The front-end optics of a spectroradiometer includes the lenses, diffusers, and filters that modify the light as it first enters the system. For radiance, an optic with a narrow field of view is required; for total flux, an integrating sphere; and for irradiance, cosine-correcting optics. The material used for these elements determines what type of light is capable of being measured. For example, to take UV measurements, quartz rather than glass lenses, optical fibers, Teflon diffusers, and barium sulphate-coated integrating spheres are often used to ensure accurate UV measurement.
Monochromator.
To perform spectral analysis of a source, monochromatic light at every wavelength would be needed to create a spectrum response of the illuminant. A monochromator is used to sample wavelengths from the source and essentially produce a monochromatic signal. It is essentially a variable filter, selectively separating and transmitting a specific wavelength or band of wavelengths from the full spectrum of measured light and excluding any light that falls outside that region.
A typical monochromator achieves this through the use of entrance and exit slits, collimating and focus optics, and a wavelength-dispersing element such as a diffraction grating or prism. Modern monochromators are manufactured with diffraction gratings, and diffraction gratings are used almost exclusively in spectroradiometric applications. Diffraction gratings are preferable due to their versatility, low attenuation, extensive wavelength range, lower cost, and more constant dispersion. Single or double monochromators can be used depending on application, with double monochromators generally providing more precision due to the additional dispersion and baffling between gratings.
Detectors.
The detector used in a spectroradiometer is determined by the wavelength over which the light is being measured, as well as the required dynamic range and sensitivity of the measurements. Basic spectroradiometer detector technologies generally fall into one of three groups: photoemissive detectors (e.g. photomultiplier tubes), semiconductor devices (e.g. silicon), or thermal detectors (e.g. thermopile).
The spectral response of a given detector is determined by its core materials. For example, photocathodes found in photomultiplier tubes can be manufactured from certain elements to be solar-blind – sensitive to UV and non-responsive to light in the visible or IR.
CCD (charge-coupled device) arrays are typically one-dimensional (linear) or two-dimensional (area) arrays of thousands or millions of individual detector elements (also known as pixels), as are CMOS sensors. They include silicon- or InGaAs-based multichannel array detectors capable of measuring UV, visible and near-infrared light.
CMOS (complementary metal-oxide-semiconductor) sensors differ from CCDs in that they add an amplifier to each photodiode. This is called an active pixel sensor because the amplifier is part of the pixel. Transistor switches connect each photodiode to the intrapixel amplifier at the time of readout.
Control and logging system.
The logging system is often simply a personal computer. In initial signal processing, the signal often needs to be amplified and converted for use with the control system. The lines of communication between the monochromator, detector output, and computer should be optimized to ensure the desired metrics and features are being used. The commercially available software included with spectroradiometric systems often comes with useful reference functions for further calculation of measurements, such as CIE color matching functions and the Vformula_5 curve.
Applications.
Spectroradiometers are used in many applications, and can be made to meet a wide variety of specifications. Example applications include:
DIY builds.
It is possible to build a basic optical spectrometer using an optical disc grating and a basic webcam, using a CFL lamp for calibrating the wavelengths. A calibration using a source of known spectrum can then turn the spectrometer into a spectroradiometer by interpreting the brightness of photo pixels. A DIY build is affected by some extra error sources in the photo-to-value conversion: photographic noise (requiring dark frame subtraction) and non-linearity in the CCD-to-photograph conversion (possibly solved by a raw image format).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E(\\lambda)=\\frac{\\Delta\\Phi}{\\Delta A \\Delta\\lambda}"
},
{
"math_id": 1,
"text": "E"
},
{
"math_id": 2,
"text": "\\Phi"
},
{
"math_id": 3,
"text": "\\Delta\\lambda "
},
{
"math_id": 4,
"text": "A"
},
{
"math_id": 5,
"text": "\\lambda"
}
]
| https://en.wikipedia.org/wiki?curid=1326120 |
13264358 | Padua points | In polynomial interpolation of two variables, the Padua points are the first known example (and up to now the only one) of a unisolvent point set (that is, the interpolating polynomial is unique) with "minimal growth" of their Lebesgue constant, proven to be formula_0.
Their name is due to the University of Padua, where they were originally discovered.
The points are defined in the domain formula_1. It is possible to use the points with four orientations, obtained with successive 90-degree rotations: this way we get four different families of Padua points.
The four families.
We can see the Padua point as a "sampling" of a parametric curve, called "generating curve", which is slightly different for each of the four families, so that the points for interpolation degree formula_2 and family formula_3 can be defined as
formula_4
Actually, the Padua points lie exactly on the self-intersections of the curve, and on the intersections of the curve with the boundaries of the square formula_5. The cardinality of the set formula_6 is formula_7. Moreover, for each family of Padua points, two points lie on consecutive vertices of the square formula_5, formula_8 points lie on the edges of the square, and the remaining points lie on the self-intersections of the generating curve inside the square.
The four generating curves are "closed" parametric curves in the interval formula_9, and are a special case of Lissajous curves.
The first family.
The generating curve of Padua points of the first family is
formula_10
If we sample it as written above, we have:
formula_11
where formula_12 when formula_2 is even or odd but formula_13 is even, formula_14
if formula_2 and formula_15 are both odd
with
formula_16
From this it follows that the Padua points of the first family will have two vertices on the bottom if formula_2 is even, or on the left if formula_2 is odd.
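The construction above can be reproduced directly by sampling the generating curve (Python with NumPy; the degree n = 4 is an arbitrary example, and duplicate points arising from the self-intersections are removed with a simple rounding tolerance).
# Padua points of the first family, obtained by sampling the generating curve
# gamma_1(t) = (-cos((n+1)t), -cos(nt)) at t = k*pi/(n*(n+1)), k = 0, ..., n*(n+1).
import numpy as np

def padua_first_family(n):
    m = n * (n + 1)
    t = np.pi * np.arange(m + 1) / m
    pts = np.column_stack((-np.cos((n + 1) * t), -np.cos(n * t)))
    # self-intersections of the curve are visited twice, so deduplicate
    return np.unique(np.round(pts, 9), axis=0)

pts = padua_first_family(4)
print(pts.shape[0])   # (n+1)(n+2)/2 = 15 points for n = 4
For n = 4 (even), the two vertex points obtained are (-1, -1) and (1, -1), which lie on the bottom edge of the square, consistent with the remark above.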
The second family.
The generating curve of Padua points of the second family is
formula_17
which leads to have vertices on the left if formula_2 is even and on the bottom if formula_2 is odd.
The third family.
The generating curve of Padua points of the third family is
formula_18
which leads to have vertices on the top if formula_2 is even and on the right if formula_2 is odd.
The fourth family.
The generating curve of Padua points of the fourth family is
formula_19
which leads to have vertices on the right if formula_2 is even and on the top if formula_2 is odd.
The interpolation formula.
The explicit representation of their fundamental Lagrange polynomial is based on the reproducing kernel formula_20, formula_21 and formula_22, of the space formula_23 equipped with the inner product
formula_24
defined by
formula_25
with formula_26 representing the normalized Chebyshev polynomial of degree formula_13 (that is, formula_27 and formula_28, where formula_29 is the classical Chebyshev polynomial "of first kind" of degree formula_30). For the four families of Padua points, which we may denote by formula_31, formula_32, the interpolation formula of order formula_2 of the function formula_33 on the generic target point formula_34 is then
formula_35
where formula_36 is the fundamental Lagrange polynomial
formula_37
The weights formula_38 are defined as
formula_39 | [
{
"math_id": 0,
"text": "O(\\log^2 n)"
},
{
"math_id": 1,
"text": "[-1,1] \\times [-1,1] \\subset \\mathbb{R}^2"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "s"
},
{
"math_id": 4,
"text": "\\text{Pad}_n^s=\\lbrace\\mathbf{\\xi}=(\\xi_1,\\xi_2)\\rbrace=\\left\\lbrace\\gamma_s\\left(\\frac{k\\pi}{n(n+1)}\\right),k=0,\\ldots,n(n+1)\\right\\rbrace."
},
{
"math_id": 5,
"text": "[-1,1]^2"
},
{
"math_id": 6,
"text": "\\operatorname{Pad}_n^s"
},
{
"math_id": 7,
"text": "|\\operatorname{Pad}_n^s| = \\frac{(n+1)(n+2)}{2}"
},
{
"math_id": 8,
"text": "2n-1"
},
{
"math_id": 9,
"text": "[0,2\\pi]"
},
{
"math_id": 10,
"text": "\\gamma_1(t)=[-\\cos((n+1)t),-\\cos(nt)],\\quad t\\in [0,\\pi]."
},
{
"math_id": 11,
"text": "\\operatorname{Pad}_n^1=\\lbrace\\mathbf{\\xi}=(\\mu_j,\\eta_k), 0\\le j\\le n; 1\\le k\\le\\lfloor\\frac{n}{2}\\rfloor+1+\\delta_j\\rbrace,"
},
{
"math_id": 12,
"text": "\\delta_j=0"
},
{
"math_id": 13,
"text": "j"
},
{
"math_id": 14,
"text": "\\delta_j=1"
},
{
"math_id": 15,
"text": "k"
},
{
"math_id": 16,
"text": "\\mu_j=\\cos\\left(\\frac{j\\pi}{n}\\right), \\eta_k=\n\\begin{cases}\n\\cos\\left(\\frac{(2k-2)\\pi}{n+1}\\right) & j\\mbox{ odd} \\\\\n\\cos\\left(\\frac{(2k-1)\\pi}{n+1}\\right) & j\\mbox{ even.}\n\\end{cases}\n"
},
{
"math_id": 17,
"text": "\\gamma_2(t)=[-\\cos(nt),-\\cos((n+1)t)],\\quad t\\in [0,\\pi],"
},
{
"math_id": 18,
"text": "\\gamma_3(t)=[\\cos((n+1)t),\\cos(nt)],\\quad t\\in [0,\\pi],"
},
{
"math_id": 19,
"text": "\\gamma_4(t)=[\\cos(nt),\\cos((n+1)t)],\\quad t\\in [0,\\pi],"
},
{
"math_id": 20,
"text": "K_n(\\mathbf{x},\\mathbf{y})"
},
{
"math_id": 21,
"text": "\\mathbf{x}=(x_1,x_2)"
},
{
"math_id": 22,
"text": "\\mathbf{y}=(y_1,y_2)"
},
{
"math_id": 23,
"text": "\\Pi_n^2([-1,1]^2)"
},
{
"math_id": 24,
"text": "\\langle f,g\\rangle =\\frac{1}{\\pi^2} \\int_{[-1,1]^2} f(x_1,x_2)g(x_1,x_2)\\frac{dx_1}{\\sqrt{1-x_1^2}}\\frac{dx_2}{\\sqrt{1-x_2^2}}\n"
},
{
"math_id": 25,
"text": "K_n(\\mathbf{x},\\mathbf{y})=\\sum_{k=0}^n\\sum_{j=0}^k \\hat T_j(x_1)\\hat T_{k-j}(x_2)\\hat T_j(y_1)\\hat T_{k-j}(y_2)\n"
},
{
"math_id": 26,
"text": "\\hat T_j"
},
{
"math_id": 27,
"text": "\\hat T_0=T_0"
},
{
"math_id": 28,
"text": "\\hat T_p=\\sqrt{2}T_p"
},
{
"math_id": 29,
"text": "T_p(\\cdot)=\\cos(p\\arccos(\\cdot))"
},
{
"math_id": 30,
"text": "p"
},
{
"math_id": 31,
"text": "\\operatorname{Pad}_n^s=\\lbrace\\mathbf{\\xi}=(\\xi_1,\\xi_2)\\rbrace"
},
{
"math_id": 32,
"text": "s=\\lbrace 1,2,3,4\\rbrace"
},
{
"math_id": 33,
"text": "f\\colon [-1,1]^2\\to\\mathbb{R}^2"
},
{
"math_id": 34,
"text": "\\mathbf{x}\\in [-1,1]^2"
},
{
"math_id": 35,
"text": "\n\\mathcal{L}_n^s f(\\mathbf{x})=\\sum_{\\mathbf{\\xi}\\in\\operatorname{Pad}_n^s}f(\\mathbf{\\xi})L^s_{\\mathbf\\xi}(\\mathbf{x})\n"
},
{
"math_id": 36,
"text": "L^s_{\\mathbf\\xi}(\\mathbf{x})"
},
{
"math_id": 37,
"text": "L^s_{\\mathbf\\xi}(\\mathbf{x})=w_{\\mathbf\\xi}(K_n(\\mathbf\\xi,\\mathbf{x})-T_n(\\xi_i)T_n(x_i)),\\quad s=1,2,3,4,\\quad i=2-(s\\mod 2).\n"
},
{
"math_id": 38,
"text": "w_{\\mathbf\\xi}"
},
{
"math_id": 39,
"text": "\nw_{\\mathbf\\xi}=\\frac{1}{n(n+1)}\\cdot\n\\begin{cases}\n\\frac{1}{2}\\text{ if }\\mathbf\\xi\\text{ is a vertex point}\\\\\n1\\text{ if }\\mathbf\\xi\\text{ is an edge point}\\\\\n2\\text{ if }\\mathbf\\xi\\text{ is an interior point.}\n\\end{cases}\n"
}
]
| https://en.wikipedia.org/wiki?curid=13264358 |
1326443 | B+ tree | An m-ary rooted tree
A B+ tree is an m-ary tree with a variable but often large number of children per node. A B+ tree consists of a root, internal nodes and leaves. The root may be either a leaf or a node with two or more children.
A B+ tree can be viewed as a B-tree in which each node contains only keys (not key–value pairs), and to which an additional level is added at the bottom with linked leaves.
The primary value of a B+ tree is in storing data for efficient retrieval in a block-oriented storage context — in particular, filesystems. This is primarily because unlike binary search trees, B+ trees have very high fanout (number of pointers to child nodes in a node, typically on the order of 100 or more), which reduces the number of I/O operations required to find an element in the tree.
History.
There is no single paper introducing the B+ tree concept. Instead, the notion of maintaining all data in leaf nodes is repeatedly brought up as an interesting variant of the B-tree, which was introduced by R. Bayer and E. McCreight. Douglas Comer notes in an early survey of B-trees (which also covers B+ trees) that the B+ tree was used in IBM's VSAM data access software, and refers to an IBM published article from 1973.
Structure.
Pointer structure.
As with other trees, B+ trees can be represented as a collection of three types of nodes: "root", "internal" (a.k.a. interior), and "leaf". In B+ trees, the following properties are maintained for these nodes:
The pointer properties of nodes are summarized in the tables below:
Node bounds.
The node bounds are summarized in the table below:
Intervals in internal nodes.
By definition, each value contained within the B+ tree is a key contained in exactly one leaf node. Each key is required to be directly comparable with every other key, which forms a total order. This enables each leaf node to keep all of its keys sorted at all times, which then enables each internal node to construct an ordered collection of intervals representing the contiguous extent of values contained in a given leaf. Internal nodes higher in the tree can then construct their own intervals, which recursively aggregate the intervals contained in their own child internal nodes. Eventually, the root of a B+ Tree represents the whole range of values in the tree, where every internal node represents a subinterval.
For this recursive interval information to be retained, internal nodes must additionally contain formula_6 copies of keys formula_7 for formula_8 representing the least element within the interval covered by the child with index i (which may itself be an internal node, or a leaf), where m represents the "actual" number of children for a given internal node.
Characteristics.
The "order" or "branching factor" b of a B+ tree measures the capacity of interior nodes, i.e. their maximum allowed number of direct child nodes. This value is constant over the entire tree. For a b-order B+ tree with h levels of index:
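As a rough numerical illustration (Python; b = 100 and h = 4 are arbitrary example values, and the expressions below are the usual maximum and minimum record and key counts for a b-order B+ tree with h levels of index, stated here only for illustration):
# Illustrative capacity bounds for a b-order B+ tree with h levels of index.
from math import ceil

b, h = 100, 4
n_max_records = b**h - b**(h - 1)                                    # 99,000,000
n_min_records = 2 * ceil(b / 2)**(h - 1) - 2 * ceil(b / 2)**(h - 2)  # 245,000
n_max_keys = b**h - 1                                                # 99,999,999
n_min_keys = 2 * ceil(b / 2)**(h - 1) - 1                            # 249,999
print(n_max_records, n_min_records, n_max_keys, n_min_keys)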
Algorithms.
Search.
We are looking for a value k in the B+ Tree. This means that starting from the root, we are looking for the leaf which may contain the value k. At each node, we figure out which internal node we should follow. An internal B+ Tree node has at most formula_16 children, where every one of them represents a different sub-interval. We select the corresponding child via a linear search of the m entries, then when we finally get to a leaf, we do a linear search of its n elements for the desired key. Because we only traverse one branch of all the children at each rung of the tree, we achieve formula_17 runtime, where N is the total number of keys stored in the leaves of the B+ tree.
function search("k", "root") is
let leaf = leaf_search(k, root)
for leaf_key in leaf.keys():
if k = leaf_key:
return true
return false
function leaf_search("k", "node") is
if node is a leaf:
return node
let p = node.children()
let l = node.left_sided_intervals()
assert formula_18
let m = p.len()
for i from 1 to m - 1:
if formula_19:
return leaf_search(k, p[i])
return leaf_search(k, p[m])
Note that this pseudocode uses 1-based array indexing.
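A Python sketch of the same search follows, using a deliberately simplified node representation; the Node class, its field names, and the separator convention (a separator equals the smallest key of the child to its right, so strictly smaller keys go left) are illustrative choices rather than a full B+ tree implementation, and the indexing is 0-based unlike the pseudocode above.
# Minimal node structure: internal nodes hold sorted separator keys and children;
# leaves hold the sorted data keys and no children.
class Node:
    def __init__(self, keys, children=None):
        self.keys = keys                 # separator keys (internal) or data keys (leaf)
        self.children = children or []   # empty list for leaves

    @property
    def is_leaf(self):
        return not self.children

def leaf_search(k, node):
    if node.is_leaf:
        return node
    for i, sep in enumerate(node.keys):
        if k < sep:                      # keys strictly below sep live in children[i]
            return leaf_search(k, node.children[i])
    return leaf_search(k, node.children[-1])

def search(k, root):
    return k in leaf_search(k, root).keys

# Tiny example tree: the root separates at 10; the leaves hold the actual keys.
leaves = [Node([2, 5, 7]), Node([10, 13, 17])]
root = Node([10], leaves)
print(search(13, root), search(4, root))   # True False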
Insertion.
B+ trees grow at the root and not at the leaves.
Bulk-loading.
Given a collection of data records, we want to create a B+ tree index on some key field. One approach is to insert each record into an empty tree. However, it is quite expensive, because each entry requires us to start from the root and go down to the appropriate leaf page. An efficient alternative is to use bulk-loading.
Note :
Deletion.
The purpose of the delete algorithm is to remove the desired entry node from the tree structure. We recursively call the delete algorithm on the appropriate node until no node is found. For each function call, we traverse along, using the index to navigate until we find the node, remove it, and then work back up to the root.
At entry L that we wish to remove:
- If L is at least half-full, done
- If L has only d-1 entries, try to re-distribute, borrowing from sibling (adjacent node with same parent as L).
After the re-distribution of two sibling nodes happens, the parent node must be updated to reflect this change. The index key that points to the second sibling must take the smallest value of that node to be the index key.
- If re-distribute fails, merge L and sibling. After merging, the parent node is updated by deleting the index key that point to the deleted entry. In other words, if merge occurred, must delete entry (pointing to L or sibling) from parent of L.
Note: merge could propagate to root, which means decreasing height.
Implementation.
The leaves (the bottom-most index blocks) of the B+ tree are often linked to one another in a linked list; this makes range queries or an (ordered) iteration through the blocks simpler and more efficient (though the aforementioned upper bound can be achieved even without this addition). This does not substantially increase space consumption or maintenance on the tree. This illustrates one of the significant advantages of a B+tree over a B-tree; in a B-tree, since not all keys are present in the leaves, such an ordered linked list cannot be constructed. A B+tree is thus particularly useful as a database system index, where the data typically resides on disk, as it allows the B+tree to actually provide an efficient structure for housing the data itself (this is described in as index structure "Alternative 1").
If a storage system has a block size of B bytes, and the keys to be stored have a size of k, arguably the most efficient B+ tree is one where formula_21. Although theoretically the one-off is unnecessary, in practice there is often a little extra space taken up by the index blocks (for example, the linked list references in the leaf blocks). Having an index block which is slightly larger than the storage system's actual block represents a significant performance decrease; therefore erring on the side of caution is preferable.
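For instance, a quick back-of-the-envelope computation (Python; the 4096-byte block and 8-byte key sizes are arbitrary example values):
# Order suggested by b = B/k - 1 for B-byte storage blocks and k-byte keys.
B_bytes, key_bytes = 4096, 8   # illustrative values
b = B_bytes // key_bytes - 1
print(b)                       # 511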
If nodes of the B+ tree are organized as arrays of elements, then it may take a considerable time to insert or delete an element as half of the array will need to be shifted on average. To overcome this problem, elements inside a node can be organized in a binary tree or a B+ tree instead of an array.
B+ trees can also be used for data stored in RAM. In this case a reasonable choice for block size would be the size of processor's cache line.
Space efficiency of B+ trees can be improved by using some compression techniques. One possibility is to use delta encoding to compress keys stored into each block. For internal blocks, space saving can be achieved by either compressing keys or pointers. For string keys, space can be saved by using the following technique: Normally the "i"-th entry of an internal block contains the first key of block "i" + 1. Instead of storing the full key, we could store the shortest prefix of the first key of block "i" + 1 that is strictly greater (in lexicographic order) than the last key of block "i". There is also a simple way to compress pointers: if we suppose that some consecutive blocks are stored contiguously, then it will suffice to store only a pointer to the first block and the count of consecutive blocks.
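A small sketch of the prefix idea described above (Python; the helper name and the sample keys are invented for illustration): given the last key of block "i" and the first key of block "i" + 1, the internal entry only needs the shortest prefix of the latter that is strictly greater than the former.
# Shortest prefix of `hi` (first key of the next block) that is strictly greater,
# in lexicographic order, than `lo` (last key of the current block).
def shortest_separator(lo, hi):
    assert lo < hi
    for length in range(1, len(hi) + 1):
        prefix = hi[:length]
        if prefix > lo:
            return prefix
    return hi

print(shortest_separator("carpenter", "cartridge"))   # "cart"
print(shortest_separator("alpha", "beta"))            # "b"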
All the above compression techniques have some drawbacks. First, a full block must be decompressed to extract a single element. One technique to overcome this problem is to divide each block into sub-blocks and compress them separately. In this case searching or inserting an element will only need to decompress or compress a sub-block instead of a full block. Another drawback of compression techniques is that the number of stored elements may vary considerably from a block to another depending on how well the elements are compressed inside each block.
Applications.
Filesystems.
The ReiserFS, NSS, XFS, JFS, ReFS, and BFS filesystems all use this type of tree for metadata indexing; BFS also uses B+ trees for storing directories. NTFS uses B+ trees for directory and security-related metadata indexing. EXT4 uses extent trees (a modified B+ tree data structure) for file extent indexing. APFS uses B+ trees to store mappings from filesystem object IDs to their locations on disk, and to store filesystem records (including directories), though these trees' leaf nodes lack sibling pointers.
Database Systems.
Relational database management systems such as IBM Db2, Informix, Microsoft SQL Server, Oracle 8, Sybase ASE, and SQLite support this type of tree for table indices, though each such system implements the basic B+ tree structure with variations and extensions. Many NoSQL database management systems such as CouchDB and Tokyo Cabinet also support this type of tree for data access and storage.
Finding objects in a high-dimensional database that are comparable to a particular query object is one of the most often utilized and yet expensive procedures in such systems. In such situations, finding the closest neighbor using a B+ tree is productive.
iDistance.
B+ tree is efficiently used to construct an indexed search method called iDistance. iDistance searches for k nearest neighbors (kNN) in high-dimension metric spaces. The data in those high-dimension spaces is divided based on space or partition strategies, and each partition has an index value that is close with respect to the partition. From here, those points can be efficiently indexed using a B+ tree; thus, the queries are mapped to a single-dimensional range search. In other words, the iDistance technique can be viewed as a way of accelerating the sequential scan. Instead of scanning records from the beginning to the end of the data file, the iDistance starts the scan from spots where the nearest neighbors can be obtained early with a very high probability.
NVRAM.
Nonvolatile random-access memory (NVRAM) has been using the B+ tree structure as the main memory access technique for Internet of Things (IoT) systems because of its non-static power consumption and high solidity of cell memory. B+ trees can regulate the trafficking of data to memory efficiently. Moreover, with advanced strategies on frequencies of some highly used leaf or reference point, the B+ tree shows significant results in increasing the endurance of database systems. | [
{
"math_id": 0,
"text": "k_i"
},
{
"math_id": 1,
"text": "k_{i-1}"
},
{
"math_id": 2,
"text": "i \\ge 1"
},
{
"math_id": 3,
"text": "p_i"
},
{
"math_id": 4,
"text": "i"
},
{
"math_id": 5,
"text": "k_i"
},
{
"math_id": 6,
"text": "m - 1"
},
{
"math_id": 7,
"text": "l_i"
},
{
"math_id": 8,
"text": "i \\in [1, m - 1]"
},
{
"math_id": 9,
"text": "n_{\\max} = b^h - b^{h-1}"
},
{
"math_id": 10,
"text": "n_{\\min} = 2\\left\\lceil\\tfrac{b}{2}\\right\\rceil^{h-1}-2\\left\\lceil\\tfrac{b}{2}\\right\\rceil^{h-2}"
},
{
"math_id": 11,
"text": "n_\\mathrm{kmin} = 2\\left\\lceil\\tfrac{b}{2}\\right\\rceil^{h-1}-1"
},
{
"math_id": 12,
"text": "n_\\mathrm{kmax} = b^h-1"
},
{
"math_id": 13,
"text": "O(n)"
},
{
"math_id": 14,
"text": "O(\\log_bn)"
},
{
"math_id": 15,
"text": "O(\\log_bn+k)"
},
{
"math_id": 16,
"text": " m \\le b "
},
{
"math_id": 17,
"text": "O(\\log N)"
},
{
"math_id": 18,
"text": "|p| = |l| + 1"
},
{
"math_id": 19,
"text": "k \\le l[i]"
},
{
"math_id": 20,
"text": " b - 1 "
},
{
"math_id": 21,
"text": "b=\\tfrac B k -1"
}
]
| https://en.wikipedia.org/wiki?curid=1326443 |
13265337 | Kuratowski convergence | In mathematics, Kuratowski convergence or Painlevé-Kuratowski convergence is a notion of convergence for subsets of a topological space. First introduced by Paul Painlevé in lectures on mathematical analysis in 1902, the concept was popularized in texts by Felix Hausdorff and Kazimierz Kuratowski. Intuitively, the Kuratowski limit of a sequence of sets is where the sets "accumulate".
Definitions.
For a given sequence formula_0 of points in a space formula_1, a limit point of the sequence can be understood as any point formula_2 where the sequence "eventually" becomes arbitrarily close to formula_3. On the other hand, a cluster point of the sequence can be thought of as a point formula_2 where the sequence "frequently" becomes arbitrarily close to formula_3. The Kuratowski limits inferior and superior generalize this intuition of limit and cluster points to subsets of the given space formula_1.
Metric Spaces.
Let formula_4 be a metric space, where formula_1 is a given set. For any point formula_3 and any non-empty subset formula_5, define the distance between the point and the subset:
formula_6
For any sequence of subsets formula_7 of formula_1, the "Kuratowski limit inferior" (or "lower closed limit") of formula_8 as formula_9 is formula_10 the "Kuratowski limit superior" (or "upper closed limit") of formula_8 as formula_9 is formula_11 If the Kuratowski limits inferior and superior agree, then the common set is called the "Kuratowski limit" of formula_8 and is denoted formula_12.
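As a concrete finite illustration of these definitions (Python; the sets B and C are arbitrary finite subsets of the real line): for the alternating sequence A_1 = B, A_2 = C, A_3 = B, ..., the limit inferior is the intersection of B and C and the limit superior is their union, which the sketch below recovers by testing whether d(x, A_n) tends to 0 along the whole tail of the sequence or only along a subsequence.
# Kuratowski limits of the alternating sequence A_1 = B, A_2 = C, A_3 = B, ...
# of finite subsets of R, using the usual metric d(x, y) = |x - y|.
B, C = {0.0, 1.0, 2.0}, {1.0, 2.0, 3.0}               # arbitrary illustrative sets
A = [B if n % 2 == 1 else C for n in range(1, 201)]   # A_1, ..., A_200

def dist(x, S):
    return min(abs(x - s) for s in S)

candidates = B | C
tail = A[100:]   # approximate "for large n" by a late stretch of the sequence
Li = {x for x in candidates if max(dist(x, S) for S in tail) < 1e-12}  # limsup d -> 0
Ls = {x for x in candidates if min(dist(x, S) for S in tail) < 1e-12}  # liminf d -> 0
print(sorted(Li), sorted(Ls))   # [1.0, 2.0] [0.0, 1.0, 2.0, 3.0]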
Topological Spaces.
If formula_13 is a topological space and formula_14 is a net of subsets of formula_15, the limits inferior and superior follow a similar construction. For a given point formula_16 denote by formula_17 the collection of open neighborhoods of formula_18. The "Kuratowski limit inferior" of formula_14 is the set formula_19 and the "Kuratowski limit superior" is the set formula_20 Elements of formula_21 are called "limit points" of formula_14 and elements of formula_22 are called "cluster points" of formula_14. In other words, formula_3 is a limit point of formula_14 if each of its neighborhoods intersects formula_23 for all formula_24 in a "residual" subset of formula_25, while formula_3 is a cluster point of formula_14 if each of its neighborhoods intersects formula_23 for all formula_24 in a cofinal subset of formula_25.
When these sets agree, the common set is the "Kuratowski limit" of formula_14, denoted formula_26.
Properties.
The following properties hold for the limits inferior and superior in both the metric and topological contexts, but are stated in the metric formulation for ease of reading.
Kuratowski Continuity of Set-Valued Functions.
Let formula_79 be a set-valued function between the spaces formula_1 and formula_80; namely, formula_81 for all formula_2. Denote formula_82. We can define the operators formula_83 where formula_84 means convergence in sequences when formula_1 is metrizable and convergence in nets otherwise. Then,
When formula_85 is both inner and outer semi-continuous at formula_2, we say that formula_85 is "continuous" (or continuous "in the sense of Kuratowski").
Continuity of set-valued functions is commonly defined in terms of lower- and upper-hemicontinuity popularized by Berge. In this sense, a set-valued function is continuous if and only if the function formula_89 defined by formula_90 is continuous with respect to the Vietoris hyperspace topology of formula_91. For set-valued functions with closed values, continuity in the sense of Vietoris-Berge is stronger than continuity in the sense of Kuratowski.
Epi-convergence and Γ-convergence.
For a metric space formula_108 and a sequence of functions formula_109, the "epi-limit inferior" (or "lower epi-limit") is the function formula_110 defined by the epigraph equation formula_111 and similarly the "epi-limit superior" (or "upper epi-limit") is the function formula_112 defined by the epigraph equation formula_113 Since Kuratowski upper and lower limits are closed sets, it follows that both formula_110 and formula_112 are lower semi-continuous functions. Similarly, since formula_114, it follows that formula_115. These functions agree if and only if formula_116 exists, and the associated function is called the "epi-limit" of formula_117.
When formula_118 is a topological space, epi-convergence of the sequence formula_117 is called Γ-convergence. From the perspective of Kuratowski convergence there is no distinction between epi-limits and Γ-limits. The concepts are usually studied separately, because epi-convergence admits special characterizations that rely on the metric space structure of formula_1, which does not hold in topological spaces generally. | [
{
"math_id": 0,
"text": "\\{x_n\\}_{n=1}^{\\infty}"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "x \\in X"
},
{
"math_id": 3,
"text": "x"
},
{
"math_id": 4,
"text": "(X,d)"
},
{
"math_id": 5,
"text": "A \\subset X"
},
{
"math_id": 6,
"text": "d(x, A) := \\inf_{y \\in A} d(x, y), \\qquad x \\in X."
},
{
"math_id": 7,
"text": "\\{A_n\\}_{n=1}^{\\infty}"
},
{
"math_id": 8,
"text": "A_n"
},
{
"math_id": 9,
"text": "n \\to \\infty"
},
{
"math_id": 10,
"text": "\\begin{align}\n\\mathop{\\mathrm{Li}} A_{n} :=&\n\\left\\{ x \\in X : \\begin{matrix} \\mbox{for all open neighbourhoods } U \\mbox{ of } x, U \\cap A_{n} \\neq \\emptyset \\mbox{ for large enough } n \\end{matrix} \\right\\} \\\\\n=&\\left\\{ x \\in X : \\limsup_{n \\to \\infty} d(x, A_{n}) = 0 \\right\\};\n\\end{align}"
},
{
"math_id": 11,
"text": "\\begin{align}\n\\mathop{\\mathrm{Ls}} A_{n} :=&\n\\left\\{ x \\in X : \\begin{matrix} \\mbox{for all open neighbourhoods } U \\mbox{ of } x, U \\cap A_{n} \\neq \\emptyset \\mbox{ for infinitely many } n \\end{matrix} \\right\\} \\\\\n=&\\left\\{ x \\in X : \\liminf_{n \\to \\infty} d(x, A_{n}) = 0 \\right\\};\n\\end{align}"
},
{
"math_id": 12,
"text": "\\mathop{\\mathrm{Lim}}_{n \\to \\infty} A_n"
},
{
"math_id": 13,
"text": "(X, \\tau)"
},
{
"math_id": 14,
"text": "\\{A_i\\}_{i \\in I}"
},
{
"math_id": 15,
"text": "X"
},
{
"math_id": 16,
"text": "x \\in X"
},
{
"math_id": 17,
"text": "\\mathcal{N}(x)"
},
{
"math_id": 18,
"text": "x"
},
{
"math_id": 19,
"text": "\\mathop{\\mathrm{Li}} A_i := \\left\\{\n x \\in X :\n \\mbox{for all } U \\in \\mathcal{N}(x) \\mbox{ there exists } i_0 \\in I \\mbox{ such that } U \\cap A_i \\ne \\emptyset \\text{ if } i_0 \\leq i\n\\right\\},"
},
{
"math_id": 20,
"text": "\\mathop{\\mathrm{Ls}} A_i := \\left\\{\n x \\in X :\n \\mbox{for all } U \\in \\mathcal{N}(x) \\mbox{ and } i \\in I \\mbox{ there exists } i' \\in I \\mbox{ such that } i \\leq i' \\mbox{ and } U \\cap A_{i'} \\ne \\emptyset\n\\right\\}."
},
{
"math_id": 21,
"text": "\\mathop{\\mathrm{Li}} A_i"
},
{
"math_id": 22,
"text": "\\mathop{\\mathrm{Ls}} A_i"
},
{
"math_id": 23,
"text": "A_i"
},
{
"math_id": 24,
"text": "i"
},
{
"math_id": 25,
"text": "I"
},
{
"math_id": 26,
"text": "\\mathop{\\mathrm{Lim}} A_i"
},
{
"math_id": 27,
"text": "(X, d)"
},
{
"math_id": 28,
"text": "D = \\{d_1, d_2, \\dots\\}"
},
{
"math_id": 29,
"text": "A_n := \\{d_1, d_2, \\dots, d_n\\}"
},
{
"math_id": 30,
"text": "\\mathop{\\mathrm{Lim}} A_n = X"
},
{
"math_id": 31,
"text": "B, C \\subset X"
},
{
"math_id": 32,
"text": "A_{2n-1} := B"
},
{
"math_id": 33,
"text": "A_{2n} := C"
},
{
"math_id": 34,
"text": "n=1,2,\\dots"
},
{
"math_id": 35,
"text": "\\mathop{\\mathrm{Li}} A_n = B \\cap C"
},
{
"math_id": 36,
"text": "\\mathop{\\mathrm{Ls}} A_n = B \\cup C"
},
{
"math_id": 37,
"text": "A_n := \\{y \\in X: d(x_n,y) \\leq r_n\\}"
},
{
"math_id": 38,
"text": "x_n \\to x"
},
{
"math_id": 39,
"text": "r_n \\to r"
},
{
"math_id": 40,
"text": "[0, +\\infty)"
},
{
"math_id": 41,
"text": "\\mathop{\\mathrm{Lim}}(A_n) = \\{y \\in X : d(x,y) \\leq r\\}"
},
{
"math_id": 42,
"text": "r_n \\to +\\infty"
},
{
"math_id": 43,
"text": "\\mathop{\\mathrm{Lim}}(X \\setminus A_n) = \\emptyset"
},
{
"math_id": 44,
"text": "A_{n} := \\{ x \\in \\mathbb{R} : \\sin (n x) = 0 \\}"
},
{
"math_id": 45,
"text": "A_n := \\{(x,y) \\in \\mathbb{R}^2 : y \\geq n|x|\\}"
},
{
"math_id": 46,
"text": "\\{(0,y) \\in \\mathbb{R}^2 : y \\geq 0\\}"
},
{
"math_id": 47,
"text": "\\mathop{\\mathrm{Li}} A_n"
},
{
"math_id": 48,
"text": "\\mathop{\\mathrm{Ls}} A_n"
},
{
"math_id": 49,
"text": "\\mathop{\\mathrm{Li}} A_n \\subset \\mathop{\\mathrm{Ls}} A_n"
},
{
"math_id": 50,
"text": "\\mathop{\\mathrm{Li}} A_n = \\mathop{\\mathrm{Li}} \\mathop{\\mathrm{cl}}(A_n)"
},
{
"math_id": 51,
"text": "\\mathop{\\mathrm{Ls}} A_n = \\mathop{\\mathrm{L s}} \\mathop{\\mathrm{cl}}(A_n)"
},
{
"math_id": 52,
"text": "A_n := A"
},
{
"math_id": 53,
"text": "\\mathop{\\mathrm{Lim}} A_n = \\mathop{\\mathrm{cl}} A"
},
{
"math_id": 54,
"text": "\nA_n := \\{x_n\\}\n"
},
{
"math_id": 55,
"text": "\n\\mathop{\\mathrm{Li}} A_n\n"
},
{
"math_id": 56,
"text": "\n\\mathop{\\mathrm{Ls}} A_n\n"
},
{
"math_id": 57,
"text": "\n\\{x_n\\}_{n=1}^{\\infty} \\subset X\n"
},
{
"math_id": 58,
"text": "A_n \\subset B_n \\subset C_n"
},
{
"math_id": 59,
"text": "B := \\mathop{\\mathrm{Lim}} A_n = \\mathop{\\mathrm{Lim}} C_n"
},
{
"math_id": 60,
"text": "\\mathop{\\mathrm{Lim}} B_n = B"
},
{
"math_id": 61,
"text": "A \\subset \\mathop{\\mathrm{Li}} A_n"
},
{
"math_id": 62,
"text": "U \\subset X"
},
{
"math_id": 63,
"text": "A \\cap U \\ne \\emptyset"
},
{
"math_id": 64,
"text": "n_0"
},
{
"math_id": 65,
"text": "A_n \\cap U \\ne \\emptyset"
},
{
"math_id": 66,
"text": "n_0 \\leq n"
},
{
"math_id": 67,
"text": "\\mathop{\\mathrm{Ls}} A_n \\subset A"
},
{
"math_id": 68,
"text": "K \\subset X"
},
{
"math_id": 69,
"text": "A \\cap K \\ne \\emptyset"
},
{
"math_id": 70,
"text": "A_n \\cap K \\ne \\emptyset"
},
{
"math_id": 71,
"text": "A_1 \\subset A_2 \\subset A_3 \\subset \\cdots"
},
{
"math_id": 72,
"text": "\\mathop{\\mathrm{Lim}} A_n = \\mathop{\\mathrm{cl}} \\left( \\bigcup_{n = 1}^{\\infty} A_n \\right)"
},
{
"math_id": 73,
"text": "A_1 \\supset A_2 \\supset A_3 \\supset \\cdots"
},
{
"math_id": 74,
"text": "\\mathop{\\mathrm{Lim}} A_n = \\bigcap_{n = 1}^{\\infty} \\mathop{\\mathrm{cl}}(A_n)"
},
{
"math_id": 75,
"text": "d_H"
},
{
"math_id": 76,
"text": "d_H(A_n, A) \\to 0"
},
{
"math_id": 77,
"text": "\\mathop{\\mathrm{cl}}A = \\mathop{\\mathrm{Lim}} A_n"
},
{
"math_id": 78,
"text": "d_H(A_n, \\mathop{\\mathrm{Lim}} A_n) = +\\infty"
},
{
"math_id": 79,
"text": "S : X \\rightrightarrows Y"
},
{
"math_id": 80,
"text": "Y"
},
{
"math_id": 81,
"text": "S(x) \\subset Y"
},
{
"math_id": 82,
"text": "S^{-1}(y) = \\{x \\in X : y \\in S(x)\\}"
},
{
"math_id": 83,
"text": "\\begin{align}\n\\mathop{\\mathrm{Li}}_{x' \\to x} S(x') :=& \\bigcap_{x' \\to x} \\mathop{\\mathrm{Li}} S(x'), \\qquad x \\in X \\\\\n\\mathop{\\mathrm{Ls}}_{x' \\to x} S(x') :=& \\bigcup_{x' \\to x} \\mathop{\\mathrm{Ls}} S(x'), \\qquad x \\in X\\\\\n\\end{align}"
},
{
"math_id": 84,
"text": "x' \\to x"
},
{
"math_id": 85,
"text": "S"
},
{
"math_id": 86,
"text": "S(x) \\subset \\mathop{\\mathrm{Li}}_{x' \\to x} S(x')"
},
{
"math_id": 87,
"text": "\\mathop{\\mathrm{Ls}}_{x' \\to x} S(x') \\subset S(x)"
},
{
"math_id": 88,
"text": "S(x)"
},
{
"math_id": 89,
"text": "f_S : X \\to 2^Y"
},
{
"math_id": 90,
"text": "f(x) = S(x)"
},
{
"math_id": 91,
"text": "2^Y"
},
{
"math_id": 92,
"text": "B(x,r) = \\{y \\in X : d(x,y) \\leq r \\}"
},
{
"math_id": 93,
"text": "X \\times [0,+\\infty) \\rightrightarrows X"
},
{
"math_id": 94,
"text": "f : X \\to [-\\infty, +\\infty]"
},
{
"math_id": 95,
"text": "S_f(x) := \\{\\lambda \\in \\mathbb{R} : f(x) \\leq \\lambda\\}"
},
{
"math_id": 96,
"text": "f"
},
{
"math_id": 97,
"text": "S_f"
},
{
"math_id": 98,
"text": "y \\notin S(x)"
},
{
"math_id": 99,
"text": "V \\in \\mathcal{N}(y)"
},
{
"math_id": 100,
"text": "U \\in \\mathcal{N}(x)"
},
{
"math_id": 101,
"text": "U \\cap S^{-1}(V) = \\emptyset"
},
{
"math_id": 102,
"text": "y \\in S(x)"
},
{
"math_id": 103,
"text": "V \\cap S(x') \\ne \\emptyset"
},
{
"math_id": 104,
"text": "x' \\in U"
},
{
"math_id": 105,
"text": "\\{(x,y) \\in X \\times Y : y \\in S(x)\\}"
},
{
"math_id": 106,
"text": "S : \\mathbb{R}^n \\to \\mathbb{R}^m"
},
{
"math_id": 107,
"text": "T(x) := \\mathop{\\mathrm{conv}} S(x)"
},
{
"math_id": 108,
"text": "\n(X, d)\n"
},
{
"math_id": 109,
"text": "f_n : X \\to [-\\infty, +\\infty]"
},
{
"math_id": 110,
"text": "\\mathop{\\mathrm{e}\\liminf} f_n"
},
{
"math_id": 111,
"text": "\n\\mathop{\\mathrm{epi}} \\left( \\mathop{\\mathrm{e}\\liminf} f_n\\right) := \\mathop{\\mathrm{Ls}} \\left(\\mathop{\\mathrm{epi}} f_n\\right),\n"
},
{
"math_id": 112,
"text": "\\mathop{\\mathrm{e}\\limsup} f_n"
},
{
"math_id": 113,
"text": "\n\\mathop{\\mathrm{epi}} \\left( \\mathop{\\mathrm{e}\\limsup} f_n\\right)\n:= \\mathop{\\mathrm{Li}} \\left(\\mathop{\\mathrm{epi}} f_n\\right).\n"
},
{
"math_id": 114,
"text": "\\mathop{\\mathrm{Li}} \\mathop{\\mathrm{epi}} f_n \\subset\n\\mathop{\\mathrm{Ls}} \\mathop{\\mathrm{epi}} f_n"
},
{
"math_id": 115,
"text": "\\mathop{\\mathrm{e}\\liminf} f_n \\leq \\mathop{\\mathrm{e}\\limsup} f_n"
},
{
"math_id": 116,
"text": "\\mathop{\\mathrm{Lim}} \\mathop{\\mathrm{epi}} f_n"
},
{
"math_id": 117,
"text": "\\{f_n\\}_{n=1}^{\\infty}"
},
{
"math_id": 118,
"text": "(X, \\tau)"
}
]
| https://en.wikipedia.org/wiki?curid=13265337 |
13266 | Histogram | Graphical representation of the distribution of numerical data
A histogram is a visual representation of the distribution of quantitative data. The term was first introduced by Karl Pearson. To construct a histogram, the first step is to "bin" (or "bucket") the range of values— divide the entire range of values into a series of intervals—and then count how many values fall into each interval. The bins are usually specified as consecutive, non-overlapping intervals of a variable. The bins (intervals) are adjacent and are typically (but not required to be) of equal size.
Histograms give a rough sense of the density of the underlying distribution of the data, and are often used for density estimation: estimating the probability density function of the underlying variable. The total area of a histogram used for probability density is always normalized to 1. If the lengths of the intervals on the "x"-axis are all 1, then a histogram is identical to a relative frequency plot.
Histograms are sometimes confused with bar charts. In a histogram, each bin is for a different range of values, so altogether the histogram illustrates the distribution of values. But in a bar chart, each bar is for a different category of observations (e.g., each bar might be for a different population), so altogether the bar chart can be used to compare different categories. Some authors recommend that bar charts always have gaps between the bars to clarify that they are not histograms.
Examples.
This is the data for the histogram to the right, using 500 items:
The words used to describe the patterns in a histogram are: "symmetric", "skewed left" or "right", "unimodal", "bimodal" or "multimodal".
It is a good idea to plot the data using several different bin widths to learn more about it. Here is an example on tips given in a restaurant.
The U.S. Census Bureau found that there were 124 million people who work outside of their homes. Using their data on the time occupied by travel to work, the table below shows the absolute number of people who responded with travel times "at least 30 but less than 35 minutes" is higher than the numbers for the categories above and below it. This is likely due to people rounding their reported journey time. The problem of reporting values as somewhat arbitrarily rounded numbers is a common phenomenon when collecting data from people.
This histogram shows the number of cases per unit interval as the height of each block, so that the area of each block is equal to the number of people in the survey who fall into its category. The area under the curve represents the total number of cases (124 million). This type of histogram shows absolute numbers, with Q in thousands.
This histogram differs from the first only in the vertical scale. The area of each block is the fraction of the total that each category represents, and the total area of all the bars is equal to 1 (the fraction meaning "all"). The curve displayed is a simple density estimate. This version shows proportions, and is also known as a unit area histogram.
In other words, a histogram represents a frequency distribution by means of rectangles whose widths represent class intervals and whose areas are proportional to the corresponding frequencies: the height of each is the average frequency density for the interval. The intervals are placed together in order to show that the data represented by the histogram, while exclusive, is also contiguous. (E.g., in a histogram it is possible to have two connecting intervals of 10.5–20.5 and 20.5–33.5, but not two connecting intervals of 10.5–20.5 and 22.5–32.5. Empty intervals are represented as empty and not skipped.)
Mathematical definitions.
The data used to construct a histogram are generated via a function "m""i" that counts the number of observations that fall into each of the disjoint categories (known as "bins"). Thus, if we let "n" be the total number of observations and "k" be the total number of bins, the histogram data "m""i" meet the following conditions:
formula_0
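As a rough illustration of this definition, the bin counts can be computed with a few lines of Python; the function and sample below are made up for the example and only sketch the idea:

```python
import numpy as np

def histogram_counts(data, k):
    """Count how many observations fall into each of k equal-width bins."""
    data = np.asarray(data, dtype=float)
    edges = np.linspace(data.min(), data.max(), k + 1)   # k+1 edges define k bins
    counts, edges = np.histogram(data, bins=edges)       # the counts are the m_i
    return counts, edges

rng = np.random.default_rng(0)
sample = rng.normal(size=500)
m, edges = histogram_counts(sample, k=10)
assert m.sum() == len(sample)    # the counts satisfy n = sum of the m_i
print(m)
```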
A histogram can be thought of as a simplistic kernel density estimation, which uses a kernel to smooth frequencies over the bins. This yields a smoother probability density function, which will in general more accurately reflect the distribution of the underlying variable. The density estimate could be plotted as an alternative to the histogram, and is usually drawn as a curve rather than a set of boxes. Histograms are nevertheless preferred in applications when their statistical properties need to be modeled. The correlated variation of a kernel density estimate is very difficult to describe mathematically, while it is simple for a histogram where each bin varies independently.
An alternative to kernel density estimation is the average shifted histogram,
which is fast to compute and gives a smooth curve estimate of the density without using kernels.
Cumulative histogram.
A cumulative histogram is a mapping that counts the cumulative number of observations in all of the bins up to the specified bin. That is, the cumulative histogram "M""i" of a histogram "m""j" is defined as:
formula_1
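A short sketch of this running sum (the counts below are arbitrary example values):

```python
import numpy as np

def cumulative_histogram(counts):
    """M_i = m_1 + ... + m_i, the running total of the bin counts."""
    return np.cumsum(counts)

m = np.array([3, 7, 12, 20, 15, 9, 4])   # example bin counts m_j
M = cumulative_histogram(m)
print(M)                                  # [ 3 10 22 42 57 66 70]
assert M[-1] == m.sum()                   # the last cumulative count equals n
```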
Number of bins and width.
There is no "best" number of bins, and different bin sizes can reveal different features of the data. Grouping data is at least as old as Graunt's work in the 17th century, but no systematic guidelines were given until Sturges's work in 1926.
Using wider bins where the density of the underlying data points is low reduces noise due to sampling randomness; using narrower bins where the density is high (so the signal drowns the noise) gives greater precision to the density estimation. Thus varying the bin-width within a histogram can be beneficial. Nonetheless, equal-width bins are widely used.
Some theoreticians have attempted to determine an optimal number of bins, but these methods generally make strong assumptions about the shape of the distribution. Depending on the actual data distribution and the goals of the analysis, different bin widths may be appropriate, so experimentation is usually needed to determine an appropriate width. There are, however, various useful guidelines and rules of thumb.
The number of bins "k" can be assigned directly or can be calculated from a suggested bin width "h" as:
formula_2
The braces indicate the ceiling function.
Square-root choice.
formula_3
which takes the square root of the number of data points in the sample and rounds to the next integer. This rule is suggested by a number of elementary statistics textbooks and widely implemented in many software packages.
Sturges's formula.
Sturges's rule is derived from a binomial distribution and implicitly assumes an approximately normal distribution.
formula_4
Sturges's formula implicitly bases bin sizes on the range of the data, and can perform poorly if "n" < 30, because the number of bins will be small—less than seven—and unlikely to show trends in the data well. On the other extreme, Sturges's formula may overestimate bin width for very large datasets, resulting in oversmoothed histograms. It may also perform poorly if the data are not normally distributed.
When compared to Scott's rule and the Terrell-Scott rule, two other widely accepted formulas for histogram bins, the output of Sturges's formula is closest when "n" ≈ 100.
Rice rule.
formula_5
The Rice rule is presented as a simple alternative to Sturges's rule.
Doane's formula.
Doane's formula is a modification of Sturges's formula which attempts to improve its performance with non-normal data.
formula_6
where formula_7 is the estimated 3rd-moment-skewness of the distribution and
formula_8
Scott's normal reference rule.
Bin width formula_9 is given by
formula_10
where formula_11 is the sample standard deviation. Scott's normal reference rule is optimal for random samples of normally distributed data, in the sense that it minimizes the integrated mean squared error of the density estimate. This is the default rule used in Microsoft Excel.
Terrell–Scott rule.
formula_12
The Terrell–Scott rule is not a normal reference rule. It gives the minimum number of bins required for an asymptotically optimal histogram, where optimality is measured by the integrated mean squared error. The bound is derived by finding the 'smoothest' possible density, which turns out to be formula_13. Any other density will require more bins, hence the above estimate is also referred to as the 'oversmoothed' rule. The similarity of the formulas and the fact that Terrell and Scott were at Rice University when they proposed it suggests that this is also the origin of the Rice rule.
Freedman–Diaconis rule.
The Freedman–Diaconis rule gives bin width formula_9 as:
formula_14
which is based on the interquartile range, denoted by IQR. It replaces 3.5σ of Scott's rule with 2 IQR, which is less sensitive than the standard deviation to outliers in data.
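The rules above are straightforward to compute directly from a sample. The following Python sketch (an illustration only, not a reference implementation of any statistics package) evaluates several of them on the same data so the suggested bin counts can be compared:

```python
import numpy as np

def square_root(x):                 # k = ceil(sqrt(n))
    return int(np.ceil(np.sqrt(len(x))))

def sturges(x):                     # k = ceil(log2(n)) + 1
    return int(np.ceil(np.log2(len(x)))) + 1

def rice(x):                        # k = ceil(2 * n^(1/3))
    return int(np.ceil(2 * len(x) ** (1 / 3)))

def scott_width(x):                 # h = 3.49 * sigma_hat / n^(1/3)
    return 3.49 * np.std(x, ddof=1) / len(x) ** (1 / 3)

def freedman_diaconis_width(x):     # h = 2 * IQR / n^(1/3)
    q75, q25 = np.percentile(x, [75, 25])
    return 2 * (q75 - q25) / len(x) ** (1 / 3)

def bins_from_width(x, h):          # k = ceil((max - min) / h)
    return int(np.ceil((x.max() - x.min()) / h))

rng = np.random.default_rng(1)
x = rng.normal(size=1000)
print("square root:      ", square_root(x))
print("Sturges:          ", sturges(x))
print("Rice:             ", rice(x))
print("Scott:            ", bins_from_width(x, scott_width(x)))
print("Freedman-Diaconis:", bins_from_width(x, freedman_diaconis_width(x)))
```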
Minimizing cross-validation estimated squared error.
This approach of minimizing integrated mean squared error from Scott's rule can be generalized beyond normal distributions by using leave-one-out cross-validation:
formula_15
Here, formula_16 is the number of datapoints in the "k"th bin, and choosing the value of "h" that minimizes "J" will minimize integrated mean squared error.
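One plausible, if naive, way to apply this criterion is to evaluate the estimator over a grid of candidate widths and keep the minimizer; the grid and sample below are arbitrary choices for illustration:

```python
import numpy as np

def cv_risk(x, h):
    """Leave-one-out cross-validation estimate J(h) for equal-width bins of width h."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    nbins = max(1, int(np.ceil((x.max() - x.min()) / h)))
    edges = x.min() + h * np.arange(nbins + 1)     # equal-width bin edges
    counts, _ = np.histogram(x, bins=edges)        # N_k, observations per bin
    return 2.0 / ((n - 1) * h) - (n + 1) / (n ** 2 * (n - 1) * h) * np.sum(counts ** 2)

rng = np.random.default_rng(2)
x = rng.normal(size=500)
widths = np.linspace(0.05, 1.0, 60)                # candidate bin widths
risks = [cv_risk(x, h) for h in widths]
print("width minimizing the estimated risk:", widths[int(np.argmin(risks))])
```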
Shimazaki and Shinomoto's choice.
The choice is based on minimization of an estimated "L"2 risk function
formula_17
where formula_18 and formula_19 are mean and biased variance of a histogram with bin-width formula_20, formula_21 and formula_22.
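A minimal sketch of this selection rule, again scanning an arbitrary grid of candidate widths, could look like:

```python
import numpy as np

def shimazaki_shinomoto_cost(x, h):
    """Estimated L2 risk (2*mean - biased variance of the counts) / h^2."""
    x = np.asarray(x, dtype=float)
    nbins = max(1, int(np.ceil((x.max() - x.min()) / h)))
    edges = x.min() + h * np.arange(nbins + 1)
    counts, _ = np.histogram(x, bins=edges)                 # m_i, counts per bin
    return (2.0 * counts.mean() - counts.var()) / h ** 2    # var() is the biased variance

rng = np.random.default_rng(3)
x = rng.normal(size=500)
widths = np.linspace(0.05, 1.5, 80)
costs = [shimazaki_shinomoto_cost(x, h) for h in widths]
print("selected bin width:", widths[int(np.argmin(costs))])
```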
Variable bin widths.
Rather than choosing evenly spaced bins, for some applications it is preferable to vary the bin width. This avoids bins with low counts. A common case is to choose "equiprobable bins", where the number of samples in each bin is expected to be approximately equal. The bins may be chosen according to some known distribution or may be chosen based on the data so that each bin has formula_23 samples. When plotting the histogram, the "frequency density" is used for the dependent axis. While all bins have approximately equal area, the heights of the histogram approximate the density distribution.
For equiprobable bins, the following rule for the number of bins is suggested:
formula_24
This choice of bins is motivated by maximizing the power of a Pearson chi-squared test testing whether the bins do contain equal numbers of samples. More specifically, for a given confidence interval formula_25 it is recommended to choose between 1/2 and 1 times the following equation:
formula_26
Where formula_27 is the probit function. Following this rule for formula_28 would give between formula_29 and formula_30; the coefficient of 2 is chosen as an easy-to-remember value from this broad optimum.
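Equiprobable bins can be obtained directly from sample quantiles. The sketch below assumes the rule of thumb k = 2n^(2/5) for the number of bins and only returns the edges, counts and frequency densities:

```python
import numpy as np

def equiprobable_bins(x):
    """Quantile-based bin edges so that each bin holds roughly n/k samples."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    k = max(1, int(round(2 * n ** 0.4)))                  # k = 2 * n^(2/5)
    edges = np.quantile(x, np.linspace(0.0, 1.0, k + 1))  # equiprobable edges
    counts, edges = np.histogram(x, bins=edges)
    density = counts / (n * np.diff(edges))               # frequency density per bin
    return edges, counts, density

rng = np.random.default_rng(4)
x = rng.exponential(size=1000)
edges, counts, density = equiprobable_bins(x)
print(counts)    # roughly equal counts in every bin
```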
Remark.
A good reason why the number of bins should be proportional to formula_31 is the following: suppose that the data are obtained as formula_32 independent realizations of a bounded probability distribution with smooth density. Then the histogram remains equally "rugged" as formula_32 tends to infinity. If formula_33 is the "width" of the distribution (e. g., the standard deviation or the inter-quartile range), then the number of units in a bin (the frequency) is of order formula_34 and the "relative" standard error is of order formula_35. Compared to the next bin, the relative change of the frequency is of order formula_36 provided that the derivative of the density is non-zero. These two are of the same order if formula_9 is of order formula_37, so that formula_38 is of order formula_31. This simple cubic root choice can also be applied to bins with non-constant widths.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n = \\sum_{i=1}^k{m_i}."
},
{
"math_id": 1,
"text": "M_i = \\sum_{j=1}^i{m_j}."
},
{
"math_id": 2,
"text": "k = \\left \\lceil \\frac{\\max x - \\min x}{h} \\right \\rceil."
},
{
"math_id": 3,
"text": "k = \\lceil \\sqrt{n} \\rceil \\, "
},
{
"math_id": 4,
"text": "k = \\lceil \\log_2 n \\rceil+ 1 , \\, "
},
{
"math_id": 5,
"text": "k = \\lceil 2 \\sqrt[3]{n}\\rceil"
},
{
"math_id": 6,
"text": " k = 1 + \\log_2( n ) + \\log_2 \\left( 1 + \\frac { |g_1| }{\\sigma_{g_1}} \\right) "
},
{
"math_id": 7,
"text": "g_1"
},
{
"math_id": 8,
"text": " \\sigma_{g_1} = \\sqrt { \\frac { 6(n-2) }{ (n+1)(n+3) } } "
},
{
"math_id": 9,
"text": "h"
},
{
"math_id": 10,
"text": "h = \\frac{3.49 \\hat \\sigma}{\\sqrt[3]{n}},"
},
{
"math_id": 11,
"text": "\\hat \\sigma"
},
{
"math_id": 12,
"text": "k = \\sqrt[3]{2n}"
},
{
"math_id": 13,
"text": "\\frac 3 4 (1-x^2)"
},
{
"math_id": 14,
"text": "h = 2\\frac{\\operatorname{IQR}(x)}{\\sqrt[3]{n}},"
},
{
"math_id": 15,
"text": "\\underset{h}{\\operatorname{arg\\,min}} \\hat{J}(h) = \\underset{h}{\\operatorname{arg\\,min}} \\left( \\frac{2}{(n-1)h} - \\frac{n+1}{n^2(n-1)h} \\sum_k N_k^2 \\right)"
},
{
"math_id": 16,
"text": "N_k"
},
{
"math_id": 17,
"text": " \\underset{h}{\\operatorname{arg\\,min}} \\frac{ 2 \\bar{m} - v } {h^2} "
},
{
"math_id": 18,
"text": "\\textstyle \\bar{m}"
},
{
"math_id": 19,
"text": "\\textstyle v"
},
{
"math_id": 20,
"text": "\\textstyle h"
},
{
"math_id": 21,
"text": "\\textstyle \\bar{m}=\\frac{1}{k} \\sum_{i=1}^{k} m_i"
},
{
"math_id": 22,
"text": "\\textstyle v= \\frac{1}{k} \\sum_{i=1}^{k} (m_i - \\bar{m})^2 "
},
{
"math_id": 23,
"text": "\\approx n/k"
},
{
"math_id": 24,
"text": "k = 2 n^{2/5}"
},
{
"math_id": 25,
"text": "\\alpha"
},
{
"math_id": 26,
"text": "k = 4 \\left( \\frac{2 n^2}{\\Phi^{-1}(\\alpha)} \\right)^\\frac{1}{5}"
},
{
"math_id": 27,
"text": "\\Phi^{-1}"
},
{
"math_id": 28,
"text": "\\alpha = 0.05"
},
{
"math_id": 29,
"text": "1.88n^{2/5}"
},
{
"math_id": 30,
"text": "3.77n^{2/5}"
},
{
"math_id": 31,
"text": "\\sqrt[3]{n}"
},
{
"math_id": 32,
"text": "n"
},
{
"math_id": 33,
"text": "s"
},
{
"math_id": 34,
"text": "n h/s"
},
{
"math_id": 35,
"text": "\\sqrt{s/(n h)}"
},
{
"math_id": 36,
"text": "h/s"
},
{
"math_id": 37,
"text": "s/\\sqrt[3]{n}"
},
{
"math_id": 38,
"text": "k"
}
]
| https://en.wikipedia.org/wiki?curid=13266 |
1326660 | Mohr's circle | Geometric civil engineering calculation technique
Mohr's circle is a two-dimensional graphical representation of the transformation law for the Cauchy stress tensor.
Mohr's circle is often used in calculations relating to mechanical engineering for materials' strength, geotechnical engineering for strength of soils, and structural engineering for strength of built structures. It is also used for calculating stresses in many planes by reducing them to vertical and horizontal components. These are called principal planes in which principal stresses are calculated; Mohr's circle can also be used to find the principal planes and the principal stresses in a graphical representation, and is one of the easiest ways to do so.
After performing a stress analysis on a material body assumed as a continuum, the components of the Cauchy stress tensor at a particular material point are known with respect to a coordinate system. The Mohr circle is then used to determine graphically the stress components acting on a rotated coordinate system, i.e., acting on a differently oriented plane passing through that point.
The abscissa and ordinate (formula_0,formula_1) of each point on the circle are the magnitudes of the normal stress and shear stress components, respectively, acting on the rotated coordinate system. In other words, the circle is the locus of points that represent the state of stress on individual planes at all their orientations, where the axes represent the principal axes of the stress element.
19th-century German engineer Karl Culmann was the first to conceive a graphical representation for stresses while considering longitudinal and vertical stresses in horizontal beams during bending. His work inspired fellow German engineer Christian Otto Mohr (the circle's namesake), who extended it to both two- and three-dimensional stresses and developed a failure criterion based on the stress circle.
Alternative graphical methods for the representation of the stress state at a point include the Lamé's stress ellipsoid and Cauchy's stress quadric.
The Mohr circle can be applied to any symmetric 2x2 tensor matrix, including the strain and moment of inertia tensors.
Motivation.
Internal forces are produced between the particles of a deformable object, assumed as a continuum, as a reaction to applied external forces, i.e., either surface forces or body forces. This reaction follows from Euler's laws of motion for a continuum, which are equivalent to Newton's laws of motion for a particle. A measure of the intensity of these internal forces is called stress. Because the object is assumed as a continuum, these internal forces are distributed continuously within the volume of the object.
In engineering, e.g., structural, mechanical, or geotechnical, the stress distribution within an object, for instance stresses in a rock mass around a tunnel, airplane wings, or building columns, is determined through a stress analysis. Calculating the stress distribution implies the determination of stresses at every point (material particle) in the object. According to Cauchy, the "stress at any point" in an object (Figure 2), assumed as a continuum, is completely defined by the nine stress components formula_2 of a second order tensor of type (2,0) known as the Cauchy stress tensor, formula_3:
formula_4
After the stress distribution within the object has been determined with respect to a coordinate system formula_5, it may be necessary to calculate the components of the stress tensor at a particular material point formula_6 with respect to a rotated coordinate system formula_7, i.e., the stresses acting on a plane with a different orientation passing through that point of interest —forming an angle with the coordinate system formula_5 (Figure 3). For example, it is of interest to find the maximum normal stress and maximum shear stress, as well as the orientation of the planes where they act upon. To achieve this, it is necessary to perform a tensor transformation under a rotation of the coordinate system. From the definition of tensor, the Cauchy stress tensor obeys the tensor transformation law. A graphical representation of this transformation law for the Cauchy stress tensor is the Mohr circle for stress.
Mohr's circle for two-dimensional state of stress.
In two dimensions, the stress tensor at a given material point formula_6 with respect to any two perpendicular directions is completely defined by only three stress components. For the particular coordinate system formula_5 these stress components are: the normal stresses formula_8 and formula_9, and the shear stress formula_10. From the balance of angular momentum, the symmetry of the Cauchy stress tensor can be demonstrated. This symmetry implies that formula_11. Thus, the Cauchy stress tensor can be written as:
formula_12
The objective is to use the Mohr circle to find the stress components formula_0 and formula_1 on a rotated coordinate system formula_7, i.e., on a differently oriented plane passing through formula_6 and perpendicular to the formula_13-formula_14 plane (Figure 4). The rotated coordinate system formula_7 makes an angle formula_15 with the original coordinate system formula_5.
Equation of the Mohr circle.
To derive the equation of the Mohr circle for the two-dimensional cases of plane stress and plane strain, first consider a two-dimensional infinitesimal material element around a material point formula_6 (Figure 4), with a unit area in the direction parallel to the formula_14-formula_16 plane, i.e., perpendicular to the page or screen.
From equilibrium of forces on the infinitesimal element, the magnitudes of the normal stress formula_0 and the shear stress formula_1 are given by:
formula_17
formula_18
Both equations can also be obtained by applying the tensor transformation law on the known Cauchy stress tensor, which is equivalent to performing the static equilibrium of forces in the direction of formula_0 and formula_1.
These two equations are the parametric equations of the Mohr circle. In these equations, formula_22 is the parameter, and formula_0 and formula_1 are the coordinates. This means that by choosing a coordinate system with abscissa formula_0 and ordinate formula_1, giving values to the parameter formula_15 will place the points obtained lying on a circle.
Eliminating the parameter formula_22 from these parametric equations will yield the non-parametric equation of the Mohr circle. This can be achieved by rearranging the equations for formula_0 and formula_1, first transposing the first term in the first equation and squaring both sides of each of the equations then adding them. Thus we have
formula_23
where
formula_24
This is the equation of a circle (the Mohr circle) of the form
formula_25
with radius formula_26 centered at a point with coordinates formula_27 in the formula_28 coordinate system.
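To make the relationship concrete, the short Python sketch below (the stress values are arbitrary and the function name is only illustrative) evaluates the two parametric equations for a sweep of angles and verifies that every resulting point lies on the circle with the centre and radius defined above:

```python
import numpy as np

def stress_on_plane(sx, sy, txy, theta):
    """Normal and shear stress on a plane rotated by theta (radians)."""
    s_n = 0.5 * (sx + sy) + 0.5 * (sx - sy) * np.cos(2 * theta) + txy * np.sin(2 * theta)
    t_n = -0.5 * (sx - sy) * np.sin(2 * theta) + txy * np.cos(2 * theta)
    return s_n, t_n

sx, sy, txy = 80.0, 20.0, 30.0          # arbitrary plane-stress state (MPa)
s_avg = 0.5 * (sx + sy)                 # abscissa of the circle centre
R = np.hypot(0.5 * (sx - sy), txy)      # radius of the circle

for theta in np.linspace(0.0, np.pi, 19):
    s_n, t_n = stress_on_plane(sx, sy, txy, theta)
    # every point satisfies (s_n - s_avg)^2 + t_n^2 = R^2
    assert np.isclose((s_n - s_avg) ** 2 + t_n ** 2, R ** 2)
print("centre =", s_avg, "radius =", R)
```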
Sign conventions.
There are two separate sets of sign conventions that need to be considered when using the Mohr Circle: One sign convention for stress components in the "physical space", and another for stress components in the "Mohr-Circle-space". In addition, within each of the two set of sign conventions, the engineering mechanics (structural engineering and mechanical engineering) literature follows a different sign convention from the geomechanics literature. There is no standard sign convention, and the choice of a particular sign convention is influenced by convenience for calculation and interpretation for the particular problem in hand. A more detailed explanation of these sign conventions is presented below.
The previous derivation for the equation of the Mohr Circle using Figure 4 follows the engineering mechanics sign convention. The engineering mechanics sign convention will be used for this article.
Physical-space sign convention.
From the convention of the Cauchy stress tensor (Figure 3 and Figure 4), the first subscript in the stress components denotes the face on which the stress component acts, and the second subscript indicates the direction of the stress component. Thus formula_10 is the shear stress acting on the face with normal vector in the positive direction of the formula_13-axis, and in the positive direction of the formula_14-axis.
In the physical-space sign convention, positive normal stresses are outward to the plane of action (tension), and negative normal stresses are inward to the plane of action (compression) (Figure 5).
In the physical-space sign convention, positive shear stresses act on positive faces of the material element in the positive direction of an axis. Also, positive shear stresses act on negative faces of the material element in the negative direction of an axis. A positive face has its normal vector in the positive direction of an axis, and a negative face has its normal vector in the negative direction of an axis. For example, the shear stresses formula_10 and formula_29 are positive because they act on positive faces, and they act as well in the positive direction of the formula_14-axis and the formula_13-axis, respectively (Figure 3). Similarly, the respective opposite shear stresses formula_10 and formula_29 acting in the negative faces have a negative sign because they act in the negative direction of the formula_13-axis and formula_14-axis, respectively.
Mohr-circle-space sign convention.
In the Mohr-circle-space sign convention, normal stresses have the same sign as normal stresses in the physical-space sign convention: positive normal stresses act outward to the plane of action, and negative normal stresses act inward to the plane of action.
Shear stresses, however, have a different convention in the Mohr-circle space compared to the convention in the physical space. In the Mohr-circle-space sign convention, positive shear stresses rotate the material element in the counterclockwise direction, and negative shear stresses rotate the material in the clockwise direction. This way, the shear stress component formula_10 is positive in the Mohr-circle space, and the shear stress component formula_29 is negative in the Mohr-circle space.
Two options exist for drawing the Mohr-circle space, which produce a mathematically correct Mohr circle:
Plotting positive shear stresses upward makes the angle formula_22 on the Mohr circle have a positive rotation clockwise, which is opposite to the physical space convention. That is why some authors prefer plotting positive shear stresses downward, which makes the angle formula_22 on the Mohr circle have a positive rotation counterclockwise, similar to the physical space convention for shear stresses.
To overcome the "issue" of having the shear stress axis downward in the Mohr-circle space, there is an "alternative" sign convention where positive shear stresses are assumed to rotate the material element in the clockwise direction and negative shear stresses are assumed to rotate the material element in the counterclockwise direction (Figure 5, option 3). This way, positive shear stresses are plotted upward in the Mohr-circle space and the angle formula_22 has a positive rotation counterclockwise in the Mohr-circle space. This "alternative" sign convention produces a circle that is identical to the sign convention #2 in Figure 5 because a positive shear stress formula_30 is also a counterclockwise shear stress, and both are plotted downward. Also, a negative shear stress formula_30 is a clockwise shear stress, and both are plotted upward.
This article follows the engineering mechanics sign convention for the physical space and the "alternative" sign convention for the Mohr-circle space (sign convention #3 in Figure 5).
Drawing Mohr's circle.
Assuming we know the stress components formula_8, formula_9, and formula_10 at a point formula_6 in the object under study, as shown in Figure 4, the following are the steps to construct the Mohr circle for the state of stresses at formula_6:
Finding principal normal stresses.
The magnitudes of the principal stresses are the abscissas of the points formula_41 and formula_42 (Figure 6) where the circle intersects the formula_43-axis. The magnitude of the major principal stress formula_44 is always the greatest absolute value of the abscissa of any of these two points. Likewise, the magnitude of the minor principal stress formula_45 is always the lowest absolute value of the abscissa of these two points. As expected, the ordinates of these two points are zero, corresponding to the magnitude of the shear stress components on the principal planes. Alternatively, the values of the principal stresses can be found by
formula_46
formula_47
where the magnitude of the average normal stress formula_48 is the abscissa of the centre formula_40, given by
formula_49
and the length of the radius formula_50 of the circle (based on the equation of a circle passing through two points), is given by
formula_51
Finding maximum and minimum shear stresses.
The maximum and minimum shear stresses correspond to the ordinates of the highest and lowest points on the circle, respectively. These points are located at the intersection of the circle with the vertical line passing through the center of the circle, formula_40. Thus, the magnitude of the maximum and minimum shear stresses are equal to the value of the circle's radius formula_50
formula_52
Finding stress components on an arbitrary plane.
As mentioned before, after the two-dimensional stress analysis has been performed we know the stress components formula_8, formula_9, and formula_10 at a material point formula_6. These stress components act in two perpendicular planes formula_31 and formula_32 passing through formula_6 as shown in Figure 5 and 6. The Mohr circle is used to find the stress components formula_0 and formula_1, i.e., coordinates of any point formula_34 on the circle, acting on any other plane formula_34 passing through formula_6 making an angle formula_15 with the plane formula_32. For this, two approaches can be used: the double angle, and the Pole or origin of planes.
Double angle.
As shown in Figure 6, to determine the stress components formula_53 acting on a plane formula_34 at an angle formula_15 counterclockwise to the plane formula_32 on which formula_8 acts, we travel an angle formula_22 in the same counterclockwise direction around the circle from the known stress point formula_38 to point formula_54, i.e., an angle formula_22 between lines formula_35 and formula_36 in the Mohr circle.
The double angle approach relies on the fact that the angle formula_15 between the normal vectors to any two physical planes passing through formula_6 (Figure 4) is half the angle between two lines joining their corresponding stress points formula_53 on the Mohr circle and the centre of the circle.
This double angle relation comes from the fact that the parametric equations for the Mohr circle are a function of formula_22. It can also be seen that the planes formula_33 and formula_32 in the material element around formula_6 of Figure 5 are separated by an angle formula_55, which in the Mohr circle is represented by a formula_56 angle (double the angle).
Pole or origin of planes.
The second approach involves the determination of a point on the Mohr circle called the "pole" or the "origin of planes". Any straight line drawn from the pole will intersect the Mohr circle at a point that represents the state of stress on a plane inclined at the same orientation (parallel) in space as that line. Therefore, knowing the stress components formula_57 and formula_58 on any particular plane, one can draw a line parallel to that plane through the particular coordinates formula_0 and formula_1 on the Mohr circle and find the pole as the intersection of such line with the Mohr circle. As an example, let's assume we have a state of stress with stress components formula_59, formula_60, and formula_61, as shown on Figure 7. First, we can draw a line from point formula_32 parallel to the plane of action of formula_8, or, if we choose otherwise, a line from point formula_33 parallel to the plane of action of formula_9. The intersection of any of these two lines with the Mohr circle is the pole. Once the pole has been determined, to find the state of stress on a plane making an angle formula_62 with the vertical, or in other words a plane having its normal vector forming an angle formula_15 with the horizontal plane, then we can draw a line from the pole parallel to that plane (See Figure 7). The normal and shear stresses on that plane are then the coordinates of the point of intersection between the line and the Mohr circle.
Finding the orientation of the principal planes.
The orientation of the planes where the maximum and minimum principal stresses act, also known as "principal planes", can be determined by measuring in the Mohr circle the angles ∠BOC and ∠BOE, respectively, and taking half of each of those angles. Thus, the angle ∠BOC between formula_63 and formula_64 is double the angle formula_65 which the major principal plane makes with plane formula_32.
Angles formula_66 and formula_67 can also be found from the following equation
formula_68
This equation defines two values for formula_69 which are formula_70 apart (Figure). This equation can be derived directly from the geometry of the circle, or by making the parametric equation of the circle for formula_30 equal to zero (the shear stress in the principal planes is always zero).
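Collecting the closed-form results above, a minimal Python helper (illustrative, not from any particular library) that returns the centre, radius, principal stresses and maximum in-plane shear stress for a given two-dimensional stress state could be:

```python
import numpy as np

def mohr_circle_2d(sx, sy, txy):
    """Centre, radius, principal stresses and max in-plane shear for plane stress."""
    s_avg = 0.5 * (sx + sy)               # abscissa of the circle centre
    R = np.hypot(0.5 * (sx - sy), txy)    # circle radius
    s1 = s_avg + R                        # major principal stress
    s2 = s_avg - R                        # minor principal stress
    tau_max = R                           # maximum in-plane shear stress
    return s_avg, R, s1, s2, tau_max
```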
Example.
Assume a material element under a state of stress as shown in Figure 8 and Figure 9, with the plane of one of its sides oriented 10° with respect to the horizontal plane.
Using the Mohr circle, find:
Check the answers using the stress transformation formulas or the stress transformation law.
Solution:
Following the engineering mechanics sign convention for the physical space (Figure 5), the stress components for the material element in this example are:
formula_71
formula_72
formula_73.
Following the steps for drawing the Mohr circle for this particular state of stress, we first draw a Cartesian coordinate system formula_28 with the formula_1-axis upward.
We then plot two points A(50,40) and B(-10,-40), representing the state of stress at plane A and B as shown in both Figure 8 and Figure 9. These points follow the engineering mechanics sign convention for the Mohr-circle space (Figure 5), which assumes positive normal stresses outward from the material element, and positive shear stresses on each plane rotating the material element clockwise. This way, the shear stress acting on plane B is negative and the shear stress acting on plane A is positive.
The diameter of the circle is the line joining point A and B. The centre of the circle is the intersection of this line with the formula_0-axis. Knowing both the location of the centre and length of the diameter, we are able to plot the Mohr circle for this particular state of stress.
The abscissas of both points E and C (Figure 8 and Figure 9) intersecting the formula_0-axis are the magnitudes of the minimum and maximum normal stresses, respectively; the ordinates of both points E and C are the magnitudes of the shear stresses acting on both the minor and major principal planes, respectively, which is zero for principal planes.
Even though the idea for using the Mohr circle is to graphically find different stress components by actually measuring the coordinates for different points on the circle, it is more convenient to confirm the results analytically. Thus, the radius and the abscissa of the centre of the circle are
formula_74
formula_75
and the principal stresses are
formula_76
formula_77
The coordinates for both points H and G (Figure 8 and Figure 9) are the magnitudes of the minimum and maximum shear stresses, respectively; the abscissas for both points H and G are the magnitudes for the normal stresses acting on the same planes where the minimum and maximum shear stresses act, respectively.
The magnitudes of the minimum and maximum shear stresses can be found analytically by
formula_78
and the normal stresses acting on the same planes where the minimum and maximum shear stresses act are equal to formula_79
We can choose to either use the double angle approach (Figure 8) or the Pole approach (Figure 9) to find the orientation of the principal normal stresses and principal shear stresses.
Using the double angle approach we measure the angles ∠BOC and ∠BOE in the Mohr Circle (Figure 8) to find double the angles that the major principal stress and the minor principal stress make with plane B in the physical space. To obtain a more accurate value for these angles, instead of manually measuring the angles, we can use the analytical expression
formula_80
One solution is: formula_81.
From inspection of Figure 8, this value corresponds to the angle ∠BOE. Thus, the minor principal angle is
formula_82
Then, the major principal angle is
formula_83
Remember that in this particular example formula_66 and formula_67 are angles with respect to the plane of action of formula_21 (oriented in the formula_19-axis) and not angles with respect to the plane of action of formula_8 (oriented in the formula_13-axis).
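The numbers of this example can also be checked numerically; the following self-contained snippet (reusing only the formulas quoted above) reproduces the centre, radius, principal stresses and principal angles:

```python
import numpy as np

sx, sy, txy = -10.0, 50.0, 40.0          # stress components of the example (MPa)
s_avg = 0.5 * (sx + sy)                  # 20.0
R = np.hypot(0.5 * (sx - sy), txy)       # 50.0
s1, s2 = s_avg + R, s_avg - R            # 70.0, -30.0
tau_max = R                              # 50.0

# 2*theta_p = arctan(2*txy / (sx - sy)); the two solutions are 90 degrees apart
two_theta_p = np.degrees(np.arctan(2 * txy / (sx - sy)))   # -53.13
theta_p2 = 0.5 * two_theta_p                               # -26.565 (minor principal angle)
theta_p1 = 0.5 * (180.0 + two_theta_p)                     #  63.435 (major principal angle)
print(s_avg, R, s1, s2, tau_max, theta_p1, theta_p2)
```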
Using the Pole approach, we first localize the Pole or origin of planes. For this, we draw through point A on the Mohr circle a line inclined 10° with the horizontal, or, in other words, a line parallel to plane A where formula_20 acts. The Pole is where this line intersects the Mohr circle (Figure 9). To confirm the location of the Pole, we could draw a line through point B on the Mohr circle parallel to the plane B where formula_21 acts. This line would also intersect the Mohr circle at the Pole (Figure 9).
From the Pole, we draw lines to different points on the Mohr circle. The coordinates of the points where these lines intersect the Mohr circle indicate the stress components acting on a plane in the physical space having the same inclination as the line. For instance, the line from the Pole to point C in the circle has the same inclination as the plane in the physical space where formula_44 acts. This plane makes an angle of 63.435° with plane B, both in the Mohr-circle space and in the physical space. In the same way, lines are traced from the Pole to points E, D, F, G and H to find the stress components on planes with the same orientation.
Mohr's circle for a general three-dimensional state of stresses.
To construct the Mohr circle for a general three-dimensional case of stresses at a point, the values of the principal stresses formula_84 and their principal directions formula_85 must be first evaluated.
Considering the principal axes as the coordinate system, instead of the general formula_86, formula_87, formula_88 coordinate system, and assuming that formula_89, then the normal and shear components of the stress vector formula_90, for a given plane with unit vector formula_91, satisfy the following equations
formula_92
formula_93
Knowing that formula_94, we can solve for formula_95, formula_96, formula_97, using the Gauss elimination method which yields
formula_98
Since formula_89, and formula_99 is non-negative, the numerators from these equations satisfy
formula_100 as the denominator formula_101 and formula_102
formula_103 as the denominator formula_104 and formula_105
formula_106 as the denominator formula_107 and formula_108
These expressions can be rewritten as
formula_109
which are the equations of the three Mohr's circles for stress formula_110, formula_111, and formula_112, with radii formula_113, formula_114, and formula_115, and their centres with coordinates formula_116, formula_117, formula_118, respectively.
These equations for the Mohr circles show that all admissible stress points formula_53 lie on these circles or within the shaded area enclosed by them (see Figure 10). Stress points formula_53 satisfying the equation for circle formula_110 lie on, or outside circle formula_110. Stress points formula_53 satisfying the equation for circle formula_111 lie on, or inside circle formula_111. And finally, stress points formula_53 satisfying the equation for circle formula_112 lie on, or outside circle formula_112.
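A small numerical sketch of these three circles and of the admissibility test they define (the principal stresses below are chosen arbitrarily) might look as follows:

```python
import numpy as np

def mohr_circles_3d(s1, s2, s3):
    """Centres and radii of the three Mohr circles for principal stresses s1 > s2 > s3."""
    return ((0.5 * (s2 + s3), 0.5 * (s2 - s3)),   # C1
            (0.5 * (s1 + s3), 0.5 * (s1 - s3)),   # C2
            (0.5 * (s1 + s2), 0.5 * (s1 - s2)))   # C3

def is_admissible(sn, tn, s1, s2, s3):
    """True if (sn, tn) lies in the shaded region: outside C1 and C3, inside C2."""
    (c1, r1), (c2, r2), (c3, r3) = mohr_circles_3d(s1, s2, s3)
    d1 = (sn - c1) ** 2 + tn ** 2
    d2 = (sn - c2) ** 2 + tn ** 2
    d3 = (sn - c3) ** 2 + tn ** 2
    return d1 >= r1 ** 2 and d2 <= r2 ** 2 and d3 >= r3 ** 2

s1, s2, s3 = 90.0, 40.0, 10.0          # arbitrary principal stresses (MPa)
print(mohr_circles_3d(s1, s2, s3))
print(is_admissible(50.0, 30.0, s1, s2, s3))   # True: inside C2, outside C1 and C3
print(is_admissible(95.0, 20.0, s1, s2, s3))   # False: outside the outer circle C2
```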
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sigma_\\mathrm{n}"
},
{
"math_id": 1,
"text": "\\tau_\\mathrm{n}"
},
{
"math_id": 2,
"text": "\\sigma_{ij}"
},
{
"math_id": 3,
"text": "\\boldsymbol\\sigma"
},
{
"math_id": 4,
"text": "\\boldsymbol{\\sigma}=\n\\left[{\\begin{matrix}\n\\sigma _{11} & \\sigma _{12} & \\sigma _{13} \\\\\n\\sigma _{21} & \\sigma _{22} & \\sigma _{23} \\\\\n\\sigma _{31} & \\sigma _{32} & \\sigma _{33} \\\\\n\\end{matrix}}\\right]\n\n\\equiv \\left[{\\begin{matrix}\n\\sigma _{xx} & \\sigma _{xy} & \\sigma _{xz} \\\\\n\\sigma _{yx} & \\sigma _{yy} & \\sigma _{yz} \\\\\n\\sigma _{zx} & \\sigma _{zy} & \\sigma _{zz} \\\\\n\\end{matrix}}\\right]\n\\equiv \\left[{\\begin{matrix}\n\\sigma _x & \\tau _{xy} & \\tau _{xz} \\\\\n\\tau _{yx} & \\sigma _y & \\tau _{yz} \\\\\n\\tau _{zx} & \\tau _{zy} & \\sigma _z \\\\\n\\end{matrix}}\\right]\n"
},
{
"math_id": 5,
"text": "(x,y)"
},
{
"math_id": 6,
"text": "P"
},
{
"math_id": 7,
"text": "(x',y')"
},
{
"math_id": 8,
"text": "\\sigma_x"
},
{
"math_id": 9,
"text": "\\sigma_y"
},
{
"math_id": 10,
"text": "\\tau_{xy}"
},
{
"math_id": 11,
"text": "\\tau_{xy}=\\tau_{yx}"
},
{
"math_id": 12,
"text": "\\boldsymbol{\\sigma}=\n\\left[{\\begin{matrix}\n\\sigma _x & \\tau _{xy} & 0 \\\\\n\\tau _{xy} & \\sigma _y & 0 \\\\\n0 & 0 & 0 \\\\\n\\end{matrix}}\\right]\n\\equiv\n\n\\left[{\\begin{matrix}\n\\sigma _x & \\tau _{xy} \\\\\n\\tau _{xy} & \\sigma _y \\\\\n\n\\end{matrix}}\\right]\n"
},
{
"math_id": 13,
"text": "x"
},
{
"math_id": 14,
"text": "y"
},
{
"math_id": 15,
"text": "\\theta"
},
{
"math_id": 16,
"text": "z"
},
{
"math_id": 17,
"text": "\\sigma_\\mathrm{n} = \\frac{1}{2} ( \\sigma_x + \\sigma_y ) + \\frac{1}{2} ( \\sigma_x - \\sigma_y )\\cos 2\\theta + \\tau_{xy} \\sin 2\\theta"
},
{
"math_id": 18,
"text": "\\tau_\\mathrm{n} = -\\frac{1}{2}(\\sigma_x - \\sigma_y )\\sin 2\\theta + \\tau_{xy}\\cos 2\\theta"
},
{
"math_id": 19,
"text": "x'"
},
{
"math_id": 20,
"text": "\\sigma_{y'}"
},
{
"math_id": 21,
"text": "\\sigma_{x'}"
},
{
"math_id": 22,
"text": "2\\theta"
},
{
"math_id": 23,
"text": "\\begin{align}\n\\left[ \\sigma_\\mathrm{n} - \\tfrac{1}{2} ( \\sigma_x + \\sigma_y )\\right]^2 + \\tau_\\mathrm{n}^2 &= \\left[\\tfrac{1}{2}(\\sigma_x - \\sigma_y)\\right]^2 + \\tau_{xy}^2 \\\\\n(\\sigma_\\mathrm{n} - \\sigma_\\mathrm{avg})^2 + \\tau_\\mathrm{n}^2 &= R^2 \\end{align}"
},
{
"math_id": 24,
"text": "R = \\sqrt{\\left[\\tfrac{1}{2}(\\sigma_x - \\sigma_y)\\right]^2 + \\tau_{xy}^2} \\quad \\text{and} \\quad \\sigma_\\mathrm{avg} = \\tfrac{1}{2} ( \\sigma_x + \\sigma_y )"
},
{
"math_id": 25,
"text": "(x-a)^2+(y-b)^2=r^2"
},
{
"math_id": 26,
"text": "r=R"
},
{
"math_id": 27,
"text": "(a,b)=(\\sigma_\\mathrm{avg}, 0)"
},
{
"math_id": 28,
"text": "(\\sigma_\\mathrm{n},\\tau_\\mathrm{n})"
},
{
"math_id": 29,
"text": "\\tau_{yx}"
},
{
"math_id": 30,
"text": "\\tau_\\mathrm n"
},
{
"math_id": 31,
"text": " A"
},
{
"math_id": 32,
"text": "B"
},
{
"math_id": 33,
"text": "A"
},
{
"math_id": 34,
"text": "D"
},
{
"math_id": 35,
"text": "\\overline {OB}"
},
{
"math_id": 36,
"text": "\\overline {OD}"
},
{
"math_id": 37,
"text": "A(\\sigma_y, \\tau_{xy})"
},
{
"math_id": 38,
"text": "B(\\sigma_x, -\\tau_{xy})"
},
{
"math_id": 39,
"text": "\\overline{AB}"
},
{
"math_id": 40,
"text": "O"
},
{
"math_id": 41,
"text": "C"
},
{
"math_id": 42,
"text": "E"
},
{
"math_id": 43,
"text": "\\sigma_\\mathrm n"
},
{
"math_id": 44,
"text": "\\sigma_1"
},
{
"math_id": 45,
"text": "\\sigma_2"
},
{
"math_id": 46,
"text": "\\sigma_1 = \\sigma_\\max = \\sigma_\\text{avg}+R"
},
{
"math_id": 47,
"text": "\\sigma_2 = \\sigma_\\min = \\sigma_\\text{avg}-R"
},
{
"math_id": 48,
"text": "\\sigma_\\text{avg}"
},
{
"math_id": 49,
"text": "\\sigma_\\text{avg} = \\tfrac{1}{2}(\\sigma_x+ \\sigma_y)"
},
{
"math_id": 50,
"text": "R"
},
{
"math_id": 51,
"text": "R = \\sqrt{\\left[\\tfrac{1}{2}(\\sigma_x - \\sigma_y)\\right]^2 + \\tau_{xy}^2}"
},
{
"math_id": 52,
"text": "\n\\tau_{\\max,\\min}= \\pm R"
},
{
"math_id": 53,
"text": "(\\sigma_\\mathrm{n}, \\tau_\\mathrm{n})"
},
{
"math_id": 54,
"text": "D(\\sigma_\\mathrm{n}, \\tau_\\mathrm{n})"
},
{
"math_id": 55,
"text": "\\theta=90^\\circ"
},
{
"math_id": 56,
"text": "180^\\circ"
},
{
"math_id": 57,
"text": "\\sigma"
},
{
"math_id": 58,
"text": "\\tau"
},
{
"math_id": 59,
"text": "\\sigma_x,\\!"
},
{
"math_id": 60,
"text": "\\sigma_y,\\!"
},
{
"math_id": 61,
"text": "\\tau_{xy},\\!"
},
{
"math_id": 62,
"text": " \\theta"
},
{
"math_id": 63,
"text": "\\overline{OB}"
},
{
"math_id": 64,
"text": "\\overline{OC}"
},
{
"math_id": 65,
"text": "\\theta_p"
},
{
"math_id": 66,
"text": "\\theta_{p1}"
},
{
"math_id": 67,
"text": "\\theta_{p2}"
},
{
"math_id": 68,
"text": "\\tan 2 \\theta_\\mathrm{p} = \\frac{2 \\tau_{xy}}{\\sigma_y - \\sigma_x}"
},
{
"math_id": 69,
"text": "\\theta_\\mathrm{p}"
},
{
"math_id": 70,
"text": "90^\\circ"
},
{
"math_id": 71,
"text": "\\sigma_{x'}=-10\\textrm{ MPa}"
},
{
"math_id": 72,
"text": "\\sigma_{y'}=50\\textrm{ MPa}"
},
{
"math_id": 73,
"text": "\\tau_{x'y'}=40\\textrm{ MPa}"
},
{
"math_id": 74,
"text": "\\begin{align}\nR &= \\sqrt{\\left[\\tfrac{1}{2}(\\sigma_x - \\sigma_y)\\right]^2 + \\tau_{xy}^2} \\\\\n&= \\sqrt{\\left[\\tfrac{1}{2}(-10 - 50)\\right]^2 + 40^2} \\\\\n&= 50 \\textrm{ MPa} \\\\\n\\end{align}\n"
},
{
"math_id": 75,
"text": "\\begin{align}\n\\sigma_\\mathrm{avg} &= \\tfrac{1}{2}(\\sigma_x + \\sigma_y) \\\\\n&= \\tfrac{1}{2}(-10 + 50) \\\\\n&= 20 \\textrm{ MPa} \\\\\n\\end{align}\n"
},
{
"math_id": 76,
"text": "\\begin{align}\n\\sigma_1 &= \\sigma_\\mathrm{avg}+R \\\\\n&= 70 \\textrm{ MPa} \\\\\n\\end{align}"
},
{
"math_id": 77,
"text": "\\begin{align}\n\\sigma_2 &= \\sigma_\\mathrm{avg}-R \\\\\n&= -30 \\textrm{ MPa} \\\\\n\\end{align}"
},
{
"math_id": 78,
"text": "\\tau_{\\max,\\min}= \\pm R = \\pm 50 \\textrm{ MPa}"
},
{
"math_id": 79,
"text": "\\sigma_\\mathrm{avg}"
},
{
"math_id": 80,
"text": "\\begin{align}\n2 \\theta_\\mathrm{p} = \\arctan\\frac{2 \\tau_{xy}}{\\sigma_x - \\sigma_y}=\\arctan\\frac{2*40}{(-10-50)}=-\\arctan\\frac{4}{3}\n\\end{align}"
},
{
"math_id": 81,
"text": "2\\theta_{p}=-53.13^\\circ"
},
{
"math_id": 82,
"text": "\\theta_{p2}=-26.565^\\circ"
},
{
"math_id": 83,
"text": "\\begin{align}\n2\\theta_{p1}&=180-53.13^\\circ=126.87^\\circ \\\\\n\\theta_{p1}&=63.435^\\circ \\\\\n\\end{align}"
},
{
"math_id": 84,
"text": "\\left(\\sigma_1, \\sigma_2, \\sigma_3 \\right)"
},
{
"math_id": 85,
"text": "\\left(n_1, n_2, n_3 \\right)"
},
{
"math_id": 86,
"text": "x_1"
},
{
"math_id": 87,
"text": "x_2"
},
{
"math_id": 88,
"text": "x_3"
},
{
"math_id": 89,
"text": "\\sigma_1 > \\sigma_2 > \\sigma_3"
},
{
"math_id": 90,
"text": "\\mathbf T^{(\\mathbf n)}"
},
{
"math_id": 91,
"text": "\\mathbf n"
},
{
"math_id": 92,
"text": "\\begin{align}\n\\left( T^{(n)} \\right)^2 &= \\sigma_{ij}\\sigma_{ik}n_jn_k \\\\\n\\sigma_\\mathrm{n}^2 + \\tau_\\mathrm{n}^2 &= \\sigma_1^2 n_1^2 + \\sigma_2^2 n_2^2 + \\sigma_3^2 n_3^2 \\end{align}"
},
{
"math_id": 93,
"text": "\\sigma_\\mathrm{n} = \\sigma_1 n_1^2 + \\sigma_2 n_2^2 + \\sigma_3 n_3^2."
},
{
"math_id": 94,
"text": "n_i n_i = n_1^2+n_2^2+n_3^2 = 1"
},
{
"math_id": 95,
"text": "n_1^2"
},
{
"math_id": 96,
"text": "n_2^2"
},
{
"math_id": 97,
"text": "n_3^2"
},
{
"math_id": 98,
"text": "\\begin{align}\nn_1^2 &= \\frac{\\tau_\\mathrm{n}^2+(\\sigma_\\mathrm{n} - \\sigma_2)(\\sigma_\\mathrm{n} - \\sigma_3)}{(\\sigma_1 - \\sigma_2)(\\sigma_1 - \\sigma_3)} \\ge 0\\\\\nn_2^2 &= \\frac{\\tau_\\mathrm{n}^2+(\\sigma_\\mathrm{n} - \\sigma_3)(\\sigma_\\mathrm{n} - \\sigma_1)}{(\\sigma_2 - \\sigma_3)(\\sigma_2 - \\sigma_1)} \\ge 0\\\\\nn_3^2 &= \\frac{\\tau_\\mathrm{n}^2+(\\sigma_\\mathrm{n} - \\sigma_1)(\\sigma_\\mathrm{n} - \\sigma_2)}{(\\sigma_3 - \\sigma_1)(\\sigma_3 - \\sigma_2)} \\ge 0.\n\\end{align}"
},
{
"math_id": 99,
"text": "(n_i)^2"
},
{
"math_id": 100,
"text": "\\tau_\\mathrm{n}^2+(\\sigma_\\mathrm{n} - \\sigma_2)(\\sigma_\\mathrm{n} - \\sigma_3) \\ge 0"
},
{
"math_id": 101,
"text": "\\sigma_1 - \\sigma_2 > 0"
},
{
"math_id": 102,
"text": "\\sigma_1 - \\sigma_3 > 0"
},
{
"math_id": 103,
"text": "\\tau_\\mathrm{n}^2+(\\sigma_\\mathrm{n} - \\sigma_3)(\\sigma_\\mathrm{n} - \\sigma_1) \\le 0"
},
{
"math_id": 104,
"text": "\\sigma_2 - \\sigma_3 > 0"
},
{
"math_id": 105,
"text": "\\sigma_2 - \\sigma_1 < 0"
},
{
"math_id": 106,
"text": "\\tau_\\mathrm{n}^2+(\\sigma_\\mathrm{n} - \\sigma_1)(\\sigma_\\mathrm{n} - \\sigma_2) \\ge 0"
},
{
"math_id": 107,
"text": "\\sigma_3 - \\sigma_1 < 0"
},
{
"math_id": 108,
"text": "\\sigma_3 - \\sigma_2 < 0."
},
{
"math_id": 109,
"text": "\\begin{align}\n\\tau_\\mathrm{n}^2 + \\left[ \\sigma_\\mathrm{n}- \\tfrac{1}{2} (\\sigma_2 + \\sigma_3) \\right]^2 \\ge \\left( \\tfrac{1}{2}(\\sigma_2 - \\sigma_3) \\right)^2 \\\\\n\\tau_\\mathrm{n}^2 + \\left[ \\sigma_\\mathrm{n}- \\tfrac{1}{2} (\\sigma_1 + \\sigma_3) \\right]^2 \\le \\left( \\tfrac{1}{2}(\\sigma_1 - \\sigma_3) \\right)^2 \\\\\n\\tau_\\mathrm{n}^2 + \\left[ \\sigma_\\mathrm{n}- \\tfrac{1}{2} (\\sigma_1 + \\sigma_2) \\right]^2 \\ge \\left( \\tfrac{1}{2}(\\sigma_1 - \\sigma_2) \\right)^2 \\\\\n\\end{align}"
},
{
"math_id": 110,
"text": "C_1"
},
{
"math_id": 111,
"text": "C_2"
},
{
"math_id": 112,
"text": "C_3"
},
{
"math_id": 113,
"text": "R_1=\\tfrac{1}{2}(\\sigma_2 - \\sigma_3)"
},
{
"math_id": 114,
"text": "R_2=\\tfrac{1}{2}(\\sigma_1 - \\sigma_3)"
},
{
"math_id": 115,
"text": "R_3=\\tfrac{1}{2}(\\sigma_1 - \\sigma_2)"
},
{
"math_id": 116,
"text": "\\left[\\tfrac{1}{2}(\\sigma_2 + \\sigma_3), 0\\right]"
},
{
"math_id": 117,
"text": "\\left[\\tfrac{1}{2}(\\sigma_1 + \\sigma_3), 0\\right]"
},
{
"math_id": 118,
"text": "\\left[\\tfrac{1}{2}(\\sigma_1 + \\sigma_2), 0\\right]"
}
]
| https://en.wikipedia.org/wiki?curid=1326660 |
13268241 | Simple magic cube | A simple magic cube is the lowest of six basic classes of magic cubes. These classes are based on extra features required.
The simple magic cube requires only the basic features a cube requires to be magic. Namely, all lines parallel to the faces, and all 4 space diagonals, sum correctly; i.e., all "1-agonals" and all "3-agonals" sum to
formula_0
No planar diagonals (2-agonals) are required to sum correctly, so there are probably no magic squares in the cube.
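A brief Python sketch (the names and the example order are illustrative) that computes this magic constant and checks exactly these features, the 1-agonals and the four space diagonals, of a given m×m×m array:

```python
import numpy as np

def magic_constant(m):
    """S = m*(m**3 + 1)/2 for an order-m magic cube."""
    return m * (m ** 3 + 1) // 2

def is_simple_magic_cube(cube):
    """Check all rows, columns and pillars (1-agonals) plus the 4 space diagonals."""
    cube = np.asarray(cube)
    m = cube.shape[0]
    S = magic_constant(m)
    ok = all(np.all(cube.sum(axis=a) == S) for a in range(3))   # the 1-agonals
    idx = np.arange(m)
    diagonals = [cube[idx, idx, idx], cube[idx, idx, idx[::-1]],
                 cube[idx, idx[::-1], idx], cube[idx[::-1], idx, idx]]
    return ok and all(d.sum() == S for d in diagonals)           # the 3-agonals

print(magic_constant(3))   # 42 for an order-3 cube
```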
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S = \\frac{m(m^3+1)}{2}."
}
]
| https://en.wikipedia.org/wiki?curid=13268241 |
13268473 | HAZMAT Class 5 Oxidizing agents and organic peroxides | An oxidizer is a chemical that readily yields oxygen in reactions, thereby causing or enhancing combustion.
Divisions.
Division 5.1: Oxidizers.
An oxidizer is a material that may, generally by yielding oxygen, cause or enhance the combustion of other materials.
Division 5.2: Organic Peroxides.
An organic peroxide is any organic compound containing oxygen (O) in the bivalent -O-O- structure and which may be considered a derivative of hydrogen peroxide, where one or more of the hydrogen atoms have been replaced by organic radicals, "unless" any of the following paragraphs applies:
*For materials containing no more than 1.0 percent hydrogen peroxide, the available oxygen, as calculated using the equation in paragraph (a)(4)(ii) of this section, is not more than 1.0 percent, or
*For materials containing more than 1.0 percent but not more than 7.0 percent hydrogen peroxide, the available oxygen content (Oa) is not more than 0.5 percent, when determined using the equation:
Oa = 16 × formula_0
where for a material containing k species of organic peroxides:
formula_1 = number of -O-O- groups per molecule of the formula_2 species
formula_3 = concentration (mass percent) of the formula_2 species
formula_4 = molecular mass of the formula_2 species
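As an illustration of this calculation (the peroxide species and values below are hypothetical, made up only for the example):

```python
def available_oxygen(species):
    """Oa = 16 * sum(n_i * c_i / m_i) over the organic peroxide species.

    species: list of (n_i, c_i, m_i) tuples, where n_i is the number of -O-O-
    groups per molecule, c_i the mass-percent concentration and m_i the
    molecular mass of the i-th species.
    """
    return 16.0 * sum(n * c / m for n, c, m in species)

# hypothetical mixture: one peroxide species with one -O-O- group,
# 4.0 percent concentration and molecular mass 146
print(available_oxygen([(1, 4.0, 146.0)]))   # about 0.44 percent available oxygen
```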
Placards.
Prior to 2007, the placard for 'Organic Peroxide' (5.2) was entirely yellow, like placard 5.1.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sum_{i=1}^k \\frac {n_{i}c_{i}}{m_{i}}"
},
{
"math_id": 1,
"text": "n_{i}"
},
{
"math_id": 2,
"text": "i^{th}"
},
{
"math_id": 3,
"text": "c_{i}"
},
{
"math_id": 4,
"text": "m_{i}"
}
]
| https://en.wikipedia.org/wiki?curid=13268473 |
1326926 | Array processing | Area of research in signal processing
Array processing is a wide area of research in the field of signal processing that extends from the simplest form of 1 dimensional line arrays to 2 and 3 dimensional array geometries. Array structure can be defined as a set of sensors that are spatially separated, e.g. radio antenna and seismic arrays. The sensors used for a specific problem may vary widely, for example microphones, accelerometers and telescopes. However, many similarities exist, the most fundamental of which may be an assumption of wave propagation. Wave propagation means there is a systemic relationship between the signal received on spatially separated sensors. By creating a physical model of the wave propagation, or in machine learning applications a training data set, the relationships between the signals received on spatially separated sensors can be leveraged for many applications.
Some common problems that are solved with array processing techniques are:
Array processing metrics are often assessed in noisy environments. The model for noise may be either one of spatially incoherent noise, or one with interfering signals following the same propagation physics. Estimation theory is an important and basic part of the signal processing field, used to deal with estimation problems in which the values of several parameters of the system must be estimated from measured/empirical data that has a random component. As the number of applications increases, estimating temporal and spatial parameters becomes more important. Array processing emerged in the last few decades as an active area centered on the ability to use and combine data from different sensors (antennas) in order to deal with a specific estimation task (spatial and temporal processing). In addition to the information that can be extracted from the collected data, the framework takes advantage of prior knowledge about the geometry of the sensor array to perform the estimation task.
Array processing is used in radar, sonar, seismic exploration, anti-jamming and wireless communications. One of the main advantages of using array processing along with an array of sensors is a smaller footprint. The problems associated with array processing include the number of sources used, their directions of arrival, and their signal waveforms.
There are four assumptions in array processing. The first assumption is that there is uniform propagation in all directions of an isotropic and non-dispersive medium. The second assumption is that for far-field array processing, the radius of propagation is much greater than the size of the array and that there is plane wave propagation. The third assumption is that the noise and signal are zero-mean, white and uncorrelated. Finally, the last assumption is that there is no coupling and the calibration is perfect.
Applications.
The ultimate goal of sensor array signal processing is to estimate the values of parameters by using available temporal and spatial information, collected through sampling a wavefield with a set of antennas that have a precise geometry description. The processing of the captured data and information is done under the assumption that the wavefield is generated by a finite number of signal sources (emitters), and contains information about signal parameters characterizing and describing the sources. There are many applications related to the above problem formulation, where the number of sources, their directions and locations should be specified. To motivate the reader, some of the most important applications related to array processing will be discussed.
The array processing concept was closely linked to radar and sonar systems, which represent the classical applications of array processing. The antenna array is used in these systems to determine the location(s) of source(s), cancel interference, and suppress ground clutter. Radar systems are basically used to detect objects by using radio waves; the range, altitude, speed and direction of objects can be specified. Radar systems started as military equipment and then entered the civilian world. In radar applications, different modes can be used; one of these modes is the active mode. In this mode the antenna-array-based system radiates pulses and listens for the returns. By using the returns, the estimation of parameters such as velocity, range and DOAs (direction of arrival) of targets of interest becomes possible. Using passive far-field listening arrays, only the DOAs can be estimated. Sonar systems (Sound Navigation and Ranging) use sound waves that propagate under the water to detect objects on or under the water surface. Two types of sonar systems can be defined: the active one and the passive one. In active sonar, the system emits pulses of sound and listens to the returns, which will be used to estimate parameters. In passive sonar, the system is essentially listening for the sounds made by the target objects. It is very important to note the difference between the radar system, which uses radio waves, and the sonar system, which uses sound waves; the reason why sonar uses sound waves is that sound waves travel farther in the water than do radar and light waves. In passive sonar, the receiving array has the capability of detecting distant objects and their locations. Deformable arrays are usually used in sonar systems, where the antenna is typically drawn under the water. In active sonar, the sonar system emits sound waves (acoustic energy) and then listens for and monitors any existing echo (the reflected waves). The reflected sound waves can be used to estimate parameters such as velocity, position, direction, etc. Difficulties and limitations in sonar systems compared to radar systems emerge from the fact that the propagation speed of sound waves under the water is slower than that of radio waves. Another source of limitation is the high propagation losses and scattering. Despite all these limitations and difficulties, sonar remains a reliable technique for range, distance, position and other parameter estimation for underwater applications.
NORSAR is an independent geo-scientific research facility that was founded in Norway in 1968. NORSAR has been working with array processing ever since to measure seismic activity around the globe. They are currently working on an International Monitoring System which will comprise 50 primary and 120 auxiliary seismic stations around the world. NORSAR has ongoing work to improve array processing to improve monitoring of seismic activity not only in Norway but around the globe.
Communication can be defined as the process of exchanging information between two or more parties. The last two decades witnessed a rapid growth of wireless communication systems. This success is a result of advances in communication theory and low-power dissipation design processes. In general, communication (telecommunication) can be done by technological means through either electrical signals (wired communication) or electromagnetic waves (wireless communication). Antenna arrays have emerged as a support technology to increase spectral usage efficiency and enhance the accuracy of wireless communication systems by utilizing the spatial dimension in addition to the classical time and frequency dimensions. Array processing and estimation techniques have been used in wireless communication. During the last decade these techniques were re-explored as ideal candidates to be the solution for numerous problems in wireless communication. In wireless communication, problems that affect the quality and performance of the system may come from different sources. The multiuser (medium multiple access) and multipath (signal propagation over multiple scattering paths in wireless channels) communication model is one of the most widespread communication models in wireless communication (mobile communication).
In the case of a multiuser communication environment, the existence of multiple users increases the possibility of inter-user interference, which can adversely affect the quality and performance of the system. In mobile communication systems the multipath problem is one of the basic problems that base stations have to deal with. Base stations have been using spatial diversity for combating fading due to the severe multipath. Base stations use an antenna array of several elements to achieve higher selectivity, so-called beamforming. The receiving array can be directed in the direction of one user at a time, while avoiding the interference from other users.
Array processing techniques have received much attention from medical and industrial applications. In medical applications, medical image processing was one of the basic fields that use array processing. Other medical applications that use array processing include disease treatment, tracking waveforms that carry information about the condition of internal organs, e.g. the heart, and localizing and analyzing brain activity by using bio-magnetic sensor arrays.
Speech enhancement and processing represents another field that has been affected by the new era of array processing. Most acoustic front-end systems became fully automatic systems (e.g. telephones). However, the operational environment of these systems contains a mix of other acoustic sources; external noises as well as acoustic couplings of loudspeaker signals overwhelm and attenuate the desired speech signal. In addition to these external sources, the strength of the desired signal is reduced due to the relatively large distance between the speaker and the microphones. Array processing techniques have opened new opportunities in speech processing to attenuate noise and echo without degrading the quality of the speech signal. In general, array processing techniques can be used in speech processing to reduce the required computing power (number of computations) and enhance the quality of the system (the performance). Representing the signal as a sum of sub-bands and adapting cancellation filters for the sub-band signals can reduce the demanded computation power and lead to a higher performance system. Relying on multiple input channels allows designing systems of higher quality compared to systems that use a single channel, and solving problems such as source localization, tracking and separation, which cannot be achieved with a single channel.
The astronomical environment contains a mix of external signals and noise that affect the quality of the desired signals. Most array processing applications in astronomy are related to image processing; the array is used to achieve a quality that is not attainable with a single channel. The high image quality facilitates quantitative analysis and comparison with images at other wavelengths. In general, astronomy arrays can be divided into two classes: beamforming arrays and correlation arrays. Beamforming is a signal processing technique that produces summed array beams from a direction of interest – used basically in directional signal transmission or reception – the basic idea being to combine the elements in a phased array such that some signals experience destructive interference and others experience constructive interference. Correlation arrays provide images over the entire single-element primary beam pattern, computed off-line from records of all the possible pairwise correlations between the antennas.
In addition to these applications, many other applications have been developed based on array processing techniques: acoustic beamforming for hearing aid applications, under-determined blind source separation using acoustic arrays, digital 3D/4D ultrasound imaging arrays, smart antennas, synthetic aperture radar, underwater acoustic imaging, and chemical sensor arrays, among others.
General model and problem formulation.
Consider a system that consists of an array of r arbitrary sensors with arbitrary locations and arbitrary directional characteristics, which receive signals generated by q narrowband sources of known center frequency ω and locations θ1, θ2, θ3, θ4 ... θq. Since the signals are narrowband, the propagation delay across the array is much smaller than the reciprocal of the signal bandwidth, and it follows that, using a complex envelope representation, the array output can be expressed (by superposition) as:<br>
formula_0
Where:
formula_1 is the vector of the signals received by the array sensors,
formula_2 is the signal emitted by the kth source as received at the reference sensor of the array,
formula_3 is the steering vector of the array toward direction formula_4,
formula_5 is the noise vector at the sensors.
The same equation can be also expressed in the form of vectors:<br>
formula_6
If we assume now that M snapshots are taken at time instants t1, t2 ... tM, the data can be expressed as:<br>
formula_7
Where X and N are r × M matrices and S is q × M:<br>
formula_8<br>
formula_9<br>
formula_10
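As a concrete illustration of this data model, the following minimal sketch simulates the snapshot matrix X = A(θ)S + N for a uniform linear array (ULA). The ULA geometry, half-wavelength element spacing, source angles, noise level, and the use of Python/NumPy are illustrative assumptions and not part of the general model above.

```python
import numpy as np

rng = np.random.default_rng(0)
r, q, M = 8, 2, 200                      # sensors, sources, snapshots (illustrative)
doas = np.deg2rad([-20.0, 35.0])         # assumed true directions of arrival
d = 0.5                                  # element spacing in wavelengths (ULA assumption)

def steering_vector(theta, r, d=0.5):
    # a(theta): phase progression of a plane wave across the ULA elements
    return np.exp(-2j * np.pi * d * np.arange(r) * np.sin(theta))

A = np.column_stack([steering_vector(t, r, d) for t in doas])     # r x q steering matrix
S = rng.standard_normal((q, M)) + 1j * rng.standard_normal((q, M))    # source waveforms
N = 0.1 * (rng.standard_normal((r, M)) + 1j * rng.standard_normal((r, M)))  # sensor noise

X = A @ S + N                            # r x M matrix of array snapshots
print(X.shape)                           # (8, 200)
```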
Problem definition<br>
“The target is to estimate the DOA’s θ1, θ2, θ3, θ4 …θq of the sources from the M snapshot of the array x(t1)… x(tM). In other words what we are interested in is estimating the DOA’s of emitter signals impinging on receiving array, when given a finite data set {x(t)} observed over t=1, 2 … M. This will be done basically by using the second-order statistics of data”
In order to solve this problem and guarantee that a valid solution exists, conditions or assumptions must be placed on the operational environment and/or the model, since many parameters are needed to specify the system (the number of sources, the number of array elements, etc.). Toward this goal, the following assumptions are made:
a. Radius of propagation » size of array.
b. Plane wave propagation.
Throughout this survey, it will be assumed that the number of underlying signals, q, in the observed process is known. There are, however, good and consistent techniques for estimating this value even when it is not known.
Estimation techniques.
In general, parameter estimation techniques can be classified into spectral-based and parametric-based methods. In the former, one forms a spectrum-like function of the parameter(s) of interest; the locations of the highest (separated) peaks of this function are taken as the DOA estimates. Parametric techniques, on the other hand, require a simultaneous search for all parameters of interest. The basic advantage of the parametric approach compared to the spectral-based approach is accuracy, albeit at the expense of increased computational complexity.
Spectral–based solutions.
Spectral based algorithmic solutions can be further classified into beamforming techniques and subspace-based techniques.
Beamforming technique.
The first method used to specify and automatically localize the signal sources using antenna arrays was the beamforming technique. The idea behind beamforming is very simple: steer the array in one direction at a time and measure the output power. The steering locations that give the maximum power yield the DOA estimates. The array response is steered by forming a linear combination of the sensor outputs.<br>
"Approach overview" <br>
formula_11<br>
formula_12<br>
formula_13<br>
formula_14<br>
<br>
Where Rx is the sample covariance matrix. Different beamforming approaches correspond to different choices of the weighting vector F. The advantages of the beamforming technique are its simplicity and ease of use and understanding, while its disadvantage is low resolution.
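A minimal sketch of these four steps for a conventional (Bartlett) beamformer, where the weighting vector is taken as the steering vector itself, is given below. It assumes a simulated uniform linear array and uses NumPy/SciPy; the array geometry, source angles, noise level, and grid resolution are illustrative choices rather than part of the method.

```python
import numpy as np
from scipy.signal import find_peaks

def steering_vector(theta, r, d=0.5):
    return np.exp(-2j * np.pi * d * np.arange(r) * np.sin(theta))

# Illustrative simulated data: 8-element half-wavelength ULA, two sources at -20 and 35 degrees
rng = np.random.default_rng(1)
r, q, M, d = 8, 2, 200, 0.5
doas = np.deg2rad([-20.0, 35.0])
A = np.column_stack([steering_vector(t, r, d) for t in doas])
S = rng.standard_normal((q, M)) + 1j * rng.standard_normal((q, M))
N = 0.1 * (rng.standard_normal((r, M)) + 1j * rng.standard_normal((r, M)))
X = A @ S + N

# Step 1: sample covariance matrix
Rx = X @ X.conj().T / M

# Steps 2-3: steer over a grid of candidate angles and record the output power
grid = np.deg2rad(np.linspace(-90, 90, 721))
P = np.array([np.real(steering_vector(t, r, d).conj() @ Rx @ steering_vector(t, r, d)) for t in grid])

# Step 4: the q highest separated peaks give the DOA estimates
peaks, _ = find_peaks(P)
est = np.sort(np.rad2deg(grid[peaks[np.argsort(P[peaks])[-q:]]]))
print("Bartlett DOA estimates (deg):", est)      # close to -20 and 35
```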
Subspace-based technique.
Many spectral methods in the past have called upon the spectral decomposition of a covariance matrix to carry out the analysis. A very important breakthrough came about when the eigenstructure of the covariance matrix was explicitly invoked, and its intrinsic properties were directly used to provide a solution to an underlying estimation problem for a given observed process. A class of spatial spectral estimation techniques is based on the eigenvalue decomposition of the spatial covariance matrix. The rationale behind this approach is that one wants to emphasize the choices of the steering vector a(θ) which correspond to signal directions. The method exploits the property that the directions of arrival determine the eigenstructure of the matrix.<br>
The tremendous interest in subspace-based methods is mainly due to the introduction of the MUSIC (multiple signal classification) algorithm. MUSIC was originally presented as a DOA estimator and was later successfully brought back to the spectral analysis/system identification problem.
"Approach overview" <br>
formula_15<br>
formula_16<br>
formula_17<br>
formula_18<br>
formula_19<br>
formula_20<br>
formula_21<br>
where the noise eigenvector matrix is formula_22.
MUSIC spectrum approaches use a single realization of the stochastic process that is represented by the snapshots x(t), t = 1, 2 ... M. MUSIC estimates are consistent and converge to the true source bearings as the number of snapshots grows to infinity. A basic drawback of the MUSIC approach is its sensitivity to model errors. A costly calibration procedure is required, and MUSIC is very sensitive to errors in that procedure. The cost of calibration increases as the number of parameters that define the array manifold increases.
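The following sketch illustrates the MUSIC pseudospectrum computation for a uniform linear array. For brevity it uses the ideal (asymptotic) covariance matrix of two unit-power uncorrelated sources in white noise instead of estimated snapshots; the geometry, angles and noise power are illustrative assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def steering_vector(theta, r, d=0.5):
    return np.exp(-2j * np.pi * d * np.arange(r) * np.sin(theta))

r, q, d = 8, 2, 0.5
doas = np.deg2rad([-20.0, 35.0])
A = np.column_stack([steering_vector(t, r, d) for t in doas])

# Ideal covariance: unit-power uncorrelated sources plus white noise
R = A @ A.conj().T + 0.01 * np.eye(r)

# Eigenvalue decomposition: the r - q smallest eigenvalues span the noise subspace E_n
_, eigvecs = np.linalg.eigh(R)               # eigenvalues in ascending order
En = eigvecs[:, : r - q]

# MUSIC pseudospectrum: large where a(theta) is (nearly) orthogonal to the noise subspace
grid = np.deg2rad(np.linspace(-90, 90, 721))
den = np.array([np.linalg.norm(En.conj().T @ steering_vector(t, r, d)) ** 2 for t in grid])
P_music = 1.0 / np.maximum(den, 1e-12)

peaks, _ = find_peaks(P_music)
est = np.sort(np.rad2deg(grid[peaks[np.argsort(P_music[peaks])[-q:]]]))
print("MUSIC DOA estimates (deg):", est)     # sharp peaks near -20 and 35
```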
Parametric–based solutions.
While the spectral-based methods presented in the previous section are computationally attractive, they do not always yield sufficient accuracy. In particular, for cases with highly correlated signals, the performance of spectral-based methods may be insufficient. An alternative is to exploit the underlying data model more fully, leading to so-called parametric array processing methods. The cost of this increased efficiency is that the algorithms typically require a multidimensional search to find the estimates. The most commonly used model-based approach in signal processing is the maximum likelihood (ML) technique. This method requires a statistical framework for the data generation process. When applying the ML technique to the array processing problem, two main methods have been considered, depending on the assumed signal data model. In stochastic ML, the signals are modeled as Gaussian random processes. In deterministic ML, the signals are considered as unknown, deterministic quantities that need to be estimated in conjunction with the directions of arrival.
Stochastic ML approach.
The stochastic maximum likelihood method is obtained by modeling the signal waveforms as a Gaussian random process under the assumption that the process x(t) is a stationary, zero-mean, Gaussian process that is completely described by its second-order covariance matrix. This model is a reasonable one if the measurements are obtained by filtering wide-band signals using a narrow band-pass filter.<br>
"Approach overview" <br>
formula_23<br>
formula_24<br>
formula_25<br>
formula_26<br>
formula_24<br>
formula_27<br>
formula_28<br>
formula_29<br>
formula_30<br>
formula_31<br>
formula_32<br>
formula_33<br>
formula_34<br>
formula_35
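These steps lead to the minimum-variance (Capon) beamformer of the last formula. The sketch below evaluates the resulting weight vector and the associated spatial spectrum 1 / (a(θ)* R⁻¹ a(θ)) for a uniform linear array with an ideal covariance matrix; the geometry, source angles and noise power are illustrative assumptions, not prescribed by the derivation.

```python
import numpy as np
from scipy.signal import find_peaks

def steering_vector(theta, r, d=0.5):
    return np.exp(-2j * np.pi * d * np.arange(r) * np.sin(theta))

r, q, d = 8, 2, 0.5
doas = np.deg2rad([-20.0, 35.0])
A = np.column_stack([steering_vector(t, r, d) for t in doas])
R = A @ A.conj().T + 0.01 * np.eye(r)        # ideal covariance: unit-power sources + white noise
R_inv = np.linalg.inv(R)

# Capon weight vector steered towards theta_0, satisfying the constraint a(theta_0)^H w = 1
theta_0 = doas[0]
a0 = steering_vector(theta_0, r, d)
w = R_inv @ a0 / (a0.conj() @ R_inv @ a0)
print("distortionless constraint:", np.round(a0.conj() @ w, 6))      # 1+0j

# Capon spatial spectrum, sharper than the conventional beamformer's
grid = np.deg2rad(np.linspace(-90, 90, 721))
P_capon = np.array([1.0 / np.real(steering_vector(t, r, d).conj() @ R_inv @ steering_vector(t, r, d))
                    for t in grid])
peaks, _ = find_peaks(P_capon)
est = np.sort(np.rad2deg(grid[peaks[np.argsort(P_capon[peaks])[-q:]]]))
print("Capon DOA estimates (deg):", est)     # near -20 and 35
```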
Deterministic ML approach.
While the background and receiver noise in the assumed data model can be thought of as emanating from a large number of independent noise sources, the same is usually not the case for the emitter signals. It therefore appears natural to model the noise as a stationary Gaussian white random process, whereas the signal waveforms are deterministic (arbitrary) and unknown. In deterministic ML, the signals are considered as unknown, deterministic quantities that need to be estimated in conjunction with the directions of arrival. This is a natural model for digital communication applications, where the signals are far from being normal random variables and where estimation of the signal is of equal interest.
Correlation spectrometer.
The problem of computing pairwise correlation as a function of frequency can be solved by two mathematically equivalent but distinct approaches. Using the discrete Fourier transform (DFT), it is possible to analyze signals in the time domain as well as in the spectral domain. The first approach is "XF" correlation, because it first cross-correlates antennas (the "X" operation) using a time-domain "lag" convolution, and then computes the spectrum (the "F" operation) for each resulting baseline. The second approach, "FX", takes advantage of the fact that convolution is equivalent to multiplication in the Fourier domain: it first computes the spectrum for each individual antenna (the F operation), and then multiplies all antennas pairwise for each spectral channel (the X operation). An FX correlator has an advantage over an XF correlator in that the computational complexity is O(N²). Therefore, FX correlators are more efficient for larger arrays.
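A toy FX correlator in NumPy is sketched below: each antenna's time series is Fourier transformed first (the F step), and the spectra are then multiplied pairwise and averaged in every spectral channel (the X step). The number of antennas, the number of channels, and the simple "common sky signal plus independent receiver noise" model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_ant, n_chan, n_blocks = 4, 256, 50

cross_spectra = np.zeros((n_ant, n_ant, n_chan), dtype=complex)
for _ in range(n_blocks):
    sky = rng.standard_normal(n_chan)                      # signal common to all antennas
    v = sky + 0.5 * rng.standard_normal((n_ant, n_chan))   # plus per-antenna receiver noise
    spec = np.fft.fft(v, axis=1)                           # F: per-antenna spectra
    cross_spectra += spec[:, None, :] * spec[None, :, :].conj()   # X: all antenna pairs, per channel
cross_spectra /= n_blocks

# cross_spectra[i, j, k] is the averaged cross-spectrum of antennas i and j in channel k;
# the diagonal entries cross_spectra[i, i] are the single-antenna power spectra.
print(cross_spectra.shape)
```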
Correlation spectrometers like the Michelson interferometer vary the time lag between signals to obtain the power spectrum of the input signals. The power spectrum formula_36 of a signal is related to its autocorrelation function by a Fourier transform (Eq.I):<br>
S_{\text{XX}}(f) = \int_{-\infty}^{\infty} R_{\text{XX}}(\tau)\, e^{-i 2 \pi f \tau}\, d\tau<br>
where the autocorrelation function formula_37 for signal X as a function of time delay formula_38 is (Eq.II):<br>
R_{\text{XX}}(\tau) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} V_X(t)\, V_X(t + \tau)\, dt
Cross-correlation spectroscopy with spatial interferometry is possible by simply substituting a signal with voltage formula_39 in equation Eq.II to produce the cross-correlation formula_40 and the cross-spectrum formula_41.
Example: spatial filtering.
In radio astronomy, RF interference must be mitigated to detect and observe any meaningful objects and events in the night sky.
Projecting out the interferer.
For an array of Radio Telescopes with a spatial signature of the interfering source formula_42 that is not a known function of the direction of interference and its time variance, the signal covariance matrix takes the form:
formula_43
where formula_44 is the visibilities covariance matrix (sources), formula_45 is the power of the interferer, and formula_46 is the noise power, and formula_47 denotes the Hermitian transpose. One can construct a projection matrix formula_48, which, when left and right multiplied by the signal covariance matrix, will reduce the interference term to zero.
formula_49
So the modified signal covariance matrix becomes:
formula_50
Since formula_42 is generally not known, formula_48 can be constructed using the eigen-decomposition of formula_51, in particular the matrix containing an orthonormal basis of the noise subspace, which is the orthogonal complement of formula_42. The disadvantages to this approach include altering the visibilities covariance matrix and coloring the white noise term.
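A small numerical sketch of this projection scheme is given below. Since the interferer's spatial signature is unknown in practice, it is estimated here from the dominant eigenvector of the covariance matrix, as described above. The array size, powers and the rank-one interferer model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
p = 6                                                    # number of telescopes (illustrative)
a = rng.standard_normal(p) + 1j * rng.standard_normal(p)
a /= np.linalg.norm(a)                                   # interferer spatial signature (unknown in practice)

R_v = 0.2 * np.eye(p)                                    # stand-in for the visibilities covariance
R = R_v + 5.0 * np.outer(a, a.conj()) + 0.1 * np.eye(p)  # sources + strong interferer + noise

# Estimate the signature from the dominant eigenvector of R, then project it out
_, eigvecs = np.linalg.eigh(R)                           # eigenvalues in ascending order
a_hat = eigvecs[:, -1]
P_perp = np.eye(p) - np.outer(a_hat, a_hat.conj())       # a_hat already has unit norm
R_clean = P_perp @ R @ P_perp

print(np.linalg.norm(P_perp @ a))                        # ~0: the interferer direction is annihilated
print(np.real(a.conj() @ R @ a), np.real(a.conj() @ R_clean @ a))   # power along a before / after
```

As noted above, the price of this scheme is that any source and noise power lying along the projected direction is removed as well.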
Spatial whitening.
This scheme attempts to make the interference-plus-noise term spectrally white. To do this, left and right multiply formula_51 with inverse square root factors of the interference-plus-noise terms.
formula_52
The calculation requires rigorous matrix manipulations, but results in an expression of the form:
formula_53
This approach requires much more computationally intensive matrix manipulations, and again the visibilities covariance matrix is altered.
Subtraction of interference estimate.
Since formula_42 is unknown, the best estimate is the dominant eigenvector formula_54 of the eigen-decomposition of formula_55, and likewise the best estimate of the interference power is formula_56, where formula_57 is the dominant eigenvalue of formula_51. One can subtract the interference term from the signal covariance matrix:
formula_58
By right and left multiplying formula_51:
formula_59
where formula_60 by selecting the appropriate formula_61. This scheme requires an accurate estimation of the interference term, but does not alter the noise or sources term.
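The subtraction scheme can be sketched in a few lines: estimate the interferer from the dominant eigenpair of the covariance matrix and subtract the corresponding rank-one term. The toy covariance below (a rank-one interferer in white noise plus a small source term) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(4)
p, sigma_n2 = 6, 0.1
a = rng.standard_normal(p) + 1j * rng.standard_normal(p)
a /= np.linalg.norm(a)
R = 0.2 * np.eye(p) + 5.0 * np.outer(a, a.conj()) + sigma_n2 * np.eye(p)   # sources + interferer + noise

# The dominant eigenpair gives the interference estimate; its power is roughly lambda_1 - sigma_n^2
eigvals, eigvecs = np.linalg.eigh(R)
u1, lam1 = eigvecs[:, -1], eigvals[-1]
R_clean = R - (lam1 - sigma_n2) * np.outer(u1, u1.conj())

# Power measured along the interferer signature drops to roughly the noise level
print(np.real(a.conj() @ R @ a), "->", np.real(a.conj() @ R_clean @ a))    # ~5.3 -> ~0.1
```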
Summary.
Array processing techniques represent a breakthrough in signal processing. Many applications and problems that are solvable using array processing techniques have been introduced, and the number of applications that include some form of array signal processing is expected to increase in the coming years. The importance of array processing is expected to grow as automation becomes more common in industrial environments and applications; further advances in digital signal processing and digital signal processing systems will also support the high computational requirements demanded by some of the estimation techniques.
In this article we emphasized the importance of array processing by listing the most important applications that include a form of array processing techniques. We briefly described the different classifications of array processing, namely spectral-based and parametric-based approaches. Some of the most important algorithms were covered, and their advantages and disadvantages were explained and discussed.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\textstyle x(t)=\\sum_{K=1}^q a(\\theta_k)s_k(t)+n(t)"
},
{
"math_id": 1,
"text": "x(t)"
},
{
"math_id": 2,
"text": "s_k(t)"
},
{
"math_id": 3,
"text": "a(\\theta_k)"
},
{
"math_id": 4,
"text": "\\theta_k"
},
{
"math_id": 5,
"text": "n(t)"
},
{
"math_id": 6,
"text": "\\textstyle \\mathbf x(t) = A(\\theta)s(t) + n(t)"
},
{
"math_id": 7,
"text": "\\mathbf X = \\mathbf A(\\theta)\\mathbf S + \\mathbf N"
},
{
"math_id": 8,
"text": "\\mathbf X = [x(t_{1}), ......, x(t_{M})]"
},
{
"math_id": 9,
"text": "\\mathbf N = [n(t_{1}), ......, n(t_{M})]"
},
{
"math_id": 10,
"text": "\\mathbf S = [s(t_{1}), ......, s(t_{M})]"
},
{
"math_id": 11,
"text": "\\textstyle 1.\\ R_{x}= \\frac{1}{M}\\sum_{t=1}^M \\mathbf{x}(t) \\mathbf{x}^{*}(t)"
},
{
"math_id": 12,
"text": "\\textstyle 2.\\ Calculate\\ B(W_{i})=F^{*}R_{x}F(W_{i})"
},
{
"math_id": 13,
"text": "\\textstyle 3.\\ Find\\ Peaks\\ of\\ B(W_{i})\\ for\\ all\\ possible\\ w_{i}'s."
},
{
"math_id": 14,
"text": "\\textstyle 4.\\ Calculate\\ \\theta_{k},\\ i=1, .... q."
},
{
"math_id": 15,
"text": "\\textstyle 1.\\ Subspace\\ decomposition\\ by\\ performing\\ eigenvalue\\ decomposition:"
},
{
"math_id": 16,
"text": "\\textstyle R_{x}=\\mathbf A \\mathbf R_{s} \\mathbf A^{*} + \\sigma^{2}I=\\sum_{k=1}^M \\lambda_{k}e_{k}r_{k}^{*}"
},
{
"math_id": 17,
"text": "\\textstyle 2.\\ span\\{\\mathbf A\\}=spane\\{e1,....,e_{d}\\}=span\\{\\mathbf E_{s}\\}."
},
{
"math_id": 18,
"text": "\\textstyle 3.\\ Check\\ which\\ a(\\theta)\\ \\epsilon span\\{\\mathbf E_{s}\\}\\ or\\ \\mathbf P_{A}a(\\theta)\\ or\\ P_{\\mathbf A}^{\\perp}a(\\theta),\\ where\\ \\mathbf P_{A}\\ is\\ a\\ projection\\ matrix."
},
{
"math_id": 19,
"text": "\\textstyle 4.\\ Search\\ for\\ all\\ possible\\ \\theta\\ such\\ that: \\left | P_{\\mathbf A}^{\\perp}a(\\theta) \\right |^{2} = 0\\ or\\ M(\\theta)=\\frac{1}{P_{A}a(\\theta)} =\\infty"
},
{
"math_id": 20,
"text": "\\textstyle 5.\\ After\\ EVD\\ of\\ R_{x}:"
},
{
"math_id": 21,
"text": "\\textstyle P_{A}^{\\perp}=I-E_{s}E_{s}^{*}=E_{n}E_{n}^{*}"
},
{
"math_id": 22,
"text": "E_{n}=[e_{d}+1, .... , e_{M}]"
},
{
"math_id": 23,
"text": "\\textstyle 1.\\ Find\\ W_{K}\\ to\\ minimize:"
},
{
"math_id": 24,
"text": "\\textstyle min_{a^{*}(\\theta_{k}w_{k}=1)}\\ E\\{\\left |W_{k}X(t) \\right |^{2}\\}"
},
{
"math_id": 25,
"text": "\\textstyle=min_{a^{*}(\\theta_{k}w_{k}=1)}\\ W_{k}^{*}R_{k}W_{k}"
},
{
"math_id": 26,
"text": "\\textstyle 2.\\ Use\\ the\\ langrange\\ method:"
},
{
"math_id": 27,
"text": "\\textstyle=min_{a^{*}(\\theta_{k}w_{k}=1)}\\ W_{k}^{*}R_{k}W_{k}+ 2\\mu(a^{*}(\\theta_{k})w_{k}\\Leftrightarrow 1) "
},
{
"math_id": 28,
"text": "\\textstyle 3.\\ Differentiating\\ it,\\ we\\ obtain"
},
{
"math_id": 29,
"text": "\\textstyle R_{x}w_{k}=\\mu a(\\theta_{k}),\\ or\\ W_{k} = \\mu R_{x}^{-1}a(\\theta_{k})"
},
{
"math_id": 30,
"text": "\\textstyle 4.\\ since"
},
{
"math_id": 31,
"text": "\\textstyle a^{*}(\\theta_{k})W_{k}=\\mu a(\\theta_{k})^{*}R_{x}^{-1}a(\\theta_{k})=1"
},
{
"math_id": 32,
"text": "\\textstyle Then"
},
{
"math_id": 33,
"text": "\\textstyle \\mu=a(\\theta_{k})^{*}R_{x}^{-1}a(\\theta_{k})"
},
{
"math_id": 34,
"text": "\\textstyle 5.\\ Capon's\\ Beamformer"
},
{
"math_id": 35,
"text": "\\textstyle W_{k}=R_{x}^{-1}a(\\theta_{k})/(a^{*}(\\theta_{k})R_{x}^{-1}a(\\theta_{k}))"
},
{
"math_id": 36,
"text": "S_{\\text{XX}}(f)"
},
{
"math_id": 37,
"text": "R_{\\text{XX}}(\\tau)"
},
{
"math_id": 38,
"text": "\\tau"
},
{
"math_id": 39,
"text": "V_Y(t)"
},
{
"math_id": 40,
"text": "R_{\\text{XY}}(\\tau)"
},
{
"math_id": 41,
"text": "S_{\\text{XY}}(f)"
},
{
"math_id": 42,
"text": "\\mathbf{a}"
},
{
"math_id": 43,
"text": "\\mathbf{R} = \\mathbf{R}_v + \\sigma_s^2 \\mathbf{a} \\mathbf{a}^{\\dagger} + \\sigma_n^2 \\mathbf{I}"
},
{
"math_id": 44,
"text": "\\mathbf{R}_v"
},
{
"math_id": 45,
"text": "\\sigma_s^2"
},
{
"math_id": 46,
"text": "\\sigma_n^2"
},
{
"math_id": 47,
"text": "\\dagger"
},
{
"math_id": 48,
"text": "\\mathbf{P}_a^{\\perp}"
},
{
"math_id": 49,
"text": "\\mathbf{P}_a^{\\perp} = \\mathbf{I} - \\mathbf{a}(\\mathbf{a}^{\\dagger} \\mathbf{a})^{-1} \\mathbf{a}^{\\dagger}"
},
{
"math_id": 50,
"text": "\\tilde{\\mathbf{R}} = \\mathbf{P}_a^{\\perp} \\mathbf{R} \\mathbf{P}_a^{\\perp} = \\mathbf{P}_a^{\\perp} \\mathbf{R}_v \\mathbf{P}_a^{\\perp} + \\sigma_n^2 \\mathbf{P}_a^{\\perp}"
},
{
"math_id": 51,
"text": "\\mathbf{R}"
},
{
"math_id": 52,
"text": "\\tilde{\\mathbf{R}} = (\\sigma_s^2 \\mathbf{a} \\mathbf{a}^{\\dagger} + \\sigma_n^2 \\mathbf{I})^{-{\\frac{1}{2}}} \\mathbf{R}(\\sigma_s^2 \\mathbf{a} \\mathbf{a}^{\\dagger} + \\sigma_n^2 \\mathbf{I})^{-{\\frac{1}{2}}}"
},
{
"math_id": 53,
"text": "\\tilde{\\mathbf{R}} = (\\cdot)^{-{\\frac{1}{2}}} \\mathbf{R}_v(\\cdot)^{-{\\frac{1}{2}}} + \\mathbf{I}"
},
{
"math_id": 54,
"text": "\\mathbf{u}_1"
},
{
"math_id": 55,
"text": "\\mathbf{R} = \\mathbf{U} \\mathbf{\\Lambda} \\mathbf{U}^{\\dagger}"
},
{
"math_id": 56,
"text": "\\sigma_s^2 \\approx \\lambda_1 - \\sigma_n^2"
},
{
"math_id": 57,
"text": "\\lambda_1"
},
{
"math_id": 58,
"text": "\\tilde{\\mathbf{R}} = \\mathbf{R} - \\sigma_s^2 \\mathbf{a} \\mathbf{a}^{\\dagger}"
},
{
"math_id": 59,
"text": "\\tilde{\\mathbf{R}} \\approx (\\mathbf{I} - \\alpha \\mathbf{u}_1 \\mathbf{u}_1^{\\dagger})\\mathbf{R}(\\mathbf{I} - \\alpha \\mathbf{u}_1 \\mathbf{u}_1^{\\dagger}) = \\mathbf{R} - \\mathbf{u}_1 \\mathbf{u}_1^{\\dagger} \\lambda_1(2 \\alpha - \\alpha^2)"
},
{
"math_id": 60,
"text": "\\lambda_1(2 \\alpha - \\alpha^2) \\approx \\sigma_s^2"
},
{
"math_id": 61,
"text": "\\alpha"
}
]
| https://en.wikipedia.org/wiki?curid=1326926 |
1326932 | Beamforming | Signal processing technique used in sensor arrays for directional signal transmission or reception
Beamforming or spatial filtering is a signal processing technique used in sensor arrays for directional signal transmission or reception. This is achieved by combining elements in an antenna array in such a way that signals at particular angles experience constructive interference while others experience destructive interference. Beamforming can be used at both the transmitting and receiving ends in order to achieve spatial selectivity. The improvement compared with omnidirectional reception/transmission is known as the directivity of the array.
Beamforming can be used for radio or sound waves. It has found numerous applications in radar, sonar, seismology, wireless communications, radio astronomy, acoustics and biomedicine. Adaptive beamforming is used to detect and estimate the signal of interest at the output of a sensor array by means of optimal (e.g. least-squares) spatial filtering and interference rejection.
Techniques.
To change the directionality of the array when transmitting, a beamformer controls the phase and relative amplitude of the signal at each transmitter, in order to create a pattern of constructive and destructive interference in the wavefront. When receiving, information from different sensors is combined in a way where the expected pattern of radiation is preferentially observed.
For example, in sonar, to send a sharp pulse of underwater sound towards a ship in the distance, simply simultaneously transmitting that sharp pulse from every sonar projector in an array fails because the ship will first hear the pulse from the speaker that happens to be nearest the ship, then later pulses from speakers that happen to be further from the ship. The beamforming technique involves sending the pulse from each projector at slightly different times (the projector closest to the ship last), so that every pulse hits the ship at exactly the same time, producing the effect of a single strong pulse from a single powerful projector. The same technique can be carried out in air using loudspeakers, or in radar/radio using antennas.
In passive sonar, and in reception in active sonar, the beamforming technique involves combining delayed signals from each hydrophone at slightly different times (the hydrophone closest to the target will be combined after the longest delay), so that every signal reaches the output at exactly the same time, making one loud signal, as if the signal came from a single, very sensitive hydrophone. Receive beamforming can also be used with microphones or radar antennas.
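The following sketch illustrates time-domain delay-and-sum receive beamforming for a small linear microphone array: each element's signal is delayed so that the wavefront from the look direction lines up, and the delayed signals are then summed, preserving the coherent pulse while the incoherent noise averages down. The sample rate, element spacing, sound speed, look direction and noise level are illustrative assumptions.

```python
import numpy as np

fs, c = 48_000.0, 343.0          # sample rate (Hz) and sound speed in air (m/s)
n_mics, spacing = 8, 0.04        # element count and spacing (m)
look_deg = 25.0                  # assumed direction of the source

t = np.arange(0, 0.05, 1 / fs)
pulse = np.sin(2 * np.pi * 1000 * t) * np.hanning(t.size)     # 1 kHz windowed pulse

# Simulate arrival at each element with its geometric delay, plus independent noise
delays = spacing * np.arange(n_mics) * np.sin(np.deg2rad(look_deg)) / c     # seconds
n_pad = 200
received = np.zeros((n_mics, t.size + n_pad))
for m in range(n_mics):
    k = int(round(delays[m] * fs))
    received[m, k : k + t.size] += pulse
received += 0.3 * np.random.default_rng(5).standard_normal(received.shape)

# Delay-and-sum: undo each element's delay (to the nearest sample) and average
beam = np.zeros(received.shape[1])
for m in range(n_mics):
    beam += np.roll(received[m], -int(round(delays[m] * fs)))
beam /= n_mics

# The pulse amplitude is preserved while the noise floor drops roughly by sqrt(n_mics)
print("noise std, single element vs. beam:", np.std(received[0, -n_pad:]), np.std(beam[-n_pad:]))
```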
With narrowband systems the time delay is equivalent to a "phase shift", so in this case the array of antennas, each one shifted a slightly different amount, is called a phased array. A narrow band system, typical of radars, is one where the bandwidth is only a small fraction of the center frequency. With wideband systems this approximation no longer holds, which is typical in sonars.
In the receive beamformer the signal from each antenna may be amplified by a different "weight." Different weighting patterns (e.g., Dolph–Chebyshev) can be used to achieve the desired sensitivity patterns. A main lobe is produced together with nulls and sidelobes. As well as controlling the main lobe width (beamwidth) and the sidelobe levels, the position of a null can be controlled. This is useful to ignore noise or jammers in one particular direction, while listening for events in other directions. A similar result can be obtained on transmission.
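As a brief sketch of how the weighting controls the sidelobe level, the snippet below compares the narrowband beam pattern of an 8-element half-wavelength array with uniform weights against a Dolph–Chebyshev taper designed for 50 dB sidelobes. The element count, spacing and design level are illustrative assumptions.

```python
import numpy as np
from scipy.signal import find_peaks, windows

n, d = 8, 0.5                                    # elements, spacing in wavelengths
theta = np.deg2rad(np.linspace(-90, 90, 1441))

def pattern_db(w):
    # Narrowband array response of weight vector w versus angle, normalized to its peak
    a = np.exp(-2j * np.pi * d * np.arange(n)[:, None] * np.sin(theta)[None, :])
    p = np.abs(w.conj() @ a) ** 2
    return 10 * np.log10(np.maximum(p / p.max(), 1e-12))

def peak_sidelobe_db(p_db):
    pk, _ = find_peaks(p_db)
    return np.sort(p_db[pk])[-2]                 # largest local maximum after the main lobe

w_uniform = np.ones(n) / n
w_cheb = windows.chebwin(n, at=50)               # Dolph-Chebyshev taper, 50 dB sidelobe design
w_cheb = w_cheb / w_cheb.sum()

print("peak sidelobe, uniform weights:   %.1f dB" % peak_sidelobe_db(pattern_db(w_uniform)))
print("peak sidelobe, Chebyshev weights: %.1f dB" % peak_sidelobe_db(pattern_db(w_cheb)))
```

The uniform taper's first sidelobe sits near −13 dB, while the Chebyshev design holds sidelobes near −50 dB at the cost of a wider main lobe.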
For the full mathematics on directing beams using amplitude and phase shifts, see the mathematical section in phased array.
Beamforming techniques can be broadly divided into two categories: conventional (fixed or switched beam) beamformers, and adaptive beamformers (phased arrays).
Conventional beamformers, such as the Butler matrix, use a fixed set of weightings and time-delays (or phasings) to combine the signals from the sensors in the array, primarily using only information about the location of the sensors in space and the wave directions of interest. In contrast, adaptive beamforming techniques (e.g., MUSIC, SAMV) generally combine this information with properties of the signals actually received by the array, typically to improve rejection of unwanted signals from other directions. This process may be carried out in either the time or the frequency domain.
As the name indicates, an adaptive beamformer is able to automatically adapt its response to different situations. Some criterion has to be set up to allow the adaptation to proceed such as minimizing the total noise output. Because of the variation of noise with frequency, in wide band systems it may be desirable to carry out the process in the frequency domain.
Beamforming can be computationally intensive. A sonar phased array has a data rate low enough that it can be processed in real time in software, which is flexible enough to transmit or receive in several directions at once. In contrast, a radar phased array has a data rate so high that it usually requires dedicated hardware processing, which is hard-wired to transmit or receive in only one direction at a time. However, newer field-programmable gate arrays are fast enough to handle radar data in real time, and can be quickly re-programmed like software, blurring the hardware/software distinction.
Sonar beamforming requirements.
Sonar beamforming utilizes a similar technique to electromagnetic beamforming, but varies considerably in implementation details. Sonar applications vary from 1 Hz to as high as 2 MHz, and array elements may be few and large, or number in the hundreds yet very small. This will shift sonar beamforming design efforts significantly between demands of such system components as the "front end" (transducers, pre-amplifiers and digitizers) and the actual beamformer computational hardware downstream. High frequency, focused beam, multi-element imaging-search sonars and acoustic cameras often implement fifth-order spatial processing that places strains equivalent to Aegis radar demands on the processors.
Many sonar systems, such as on torpedoes, are made up of arrays of up to 100 elements that must accomplish beam steering over a 100 degree field of view and work in both active and passive modes.
Sonar arrays are used both actively and passively in 1-, 2-, and 3-dimensional arrays.
Sonar differs from radar in that in some applications such as wide-area-search all directions often need to be listened to, and in some applications broadcast to, simultaneously. Thus a multibeam system is needed. In a narrowband sonar receiver, the phases for each beam can be manipulated entirely by signal processing software, as compared to present radar systems that use hardware to 'listen' in a single direction at a time.
Sonar also uses beamforming to compensate for the significant problem of the slower propagation speed of sound as compared to that of electromagnetic radiation. In side-look sonars, the towing system or vehicle carrying the sonar moves at sufficient speed to carry the sonar out of the field of the returning sound "ping". In addition to focusing algorithms intended to improve reception, many side scan sonars also employ beam steering to look forward and backward to "catch" incoming pulses that would have been missed by a single side-looking beam.
Evolved Beamformer.
The delay-and-sum beamforming technique uses multiple microphones to localize sound sources. One disadvantage of this technique is that adjusting the position or the number of microphones changes the performance of the beamformer nonlinearly. Additionally, due to the number of possible combinations, it is computationally hard to find the best configuration. One technique to solve this problem is the use of genetic algorithms. Such an algorithm searches for the microphone array configuration that provides the highest signal-to-noise ratio for each steered orientation. Experiments showed that such an algorithm could find the best configuration of a constrained search space comprising ~33 million solutions in a matter of seconds instead of days.
History in wireless communication standards.
Beamforming techniques used in cellular phone standards have advanced through the generations to make use of more complex systems to achieve higher density cells, with higher throughput.
An increasing number of consumer 802.11ac Wi-Fi devices with MIMO capability can support beamforming to boost data communication rates.
Digital, analog, and hybrid.
To receive (but not transmit), there is a distinction between analog and digital beamforming. For example, if there are 100 sensor elements, the "digital beamforming" approach entails that each of the 100 signals passes through an analog-to-digital converter to create 100 digital data streams. Then these data streams are added up digitally, with appropriate scale-factors or phase-shifts, to get the composite signals. By contrast, the "analog beamforming" approach entails taking the 100 analog signals, scaling or phase-shifting them using analog methods, summing them, and then usually digitizing the "single" output data stream.
Digital beamforming has the advantage that the digital data streams (100 in this example) can be manipulated and combined in many possible ways in parallel, to get many different output signals in parallel. The signals from every direction can be measured simultaneously, and the signals can be integrated for a longer time when studying far-off objects and simultaneously integrated for a shorter time to study fast-moving close objects, and so on. This cannot be done as effectively for analog beamforming, not only because each parallel signal combination requires its own circuitry, but more fundamentally because digital data can be copied perfectly but analog data cannot. (There is only so much analog power available, and amplification adds noise.) Therefore, if the received analog signal is split up and sent into a large number of different signal combination circuits, it can reduce the signal-to-noise ratio of each.
In MIMO communication systems with a large number of antennas, so-called massive MIMO systems, the beamforming algorithms executed at the digital baseband can become very complex.
In addition, if all beamforming is done at baseband, each antenna needs its own RF feed. At high frequencies and with a large number of antenna elements, this can be very costly and can increase loss and complexity in the system. To remedy these issues, hybrid beamforming has been suggested, in which some of the beamforming is done using analog components rather than digitally.
There are many possible different functions that can be performed using analog components instead of at the digital baseband.
Beamforming, whether done digitally or by means of an analog architecture, has recently been applied in integrated sensing and communication technology. For instance, a beamformer has been suggested for situations with imperfect channel state information that performs communication tasks while at the same time detecting targets in the scene.
For speech audio.
Beamforming can be used to try to extract sound sources in a room, such as multiple speakers in the cocktail party problem. This requires the locations of the speakers to be known in advance, for example by using the time of arrival from the sources to mics in the array, and inferring the locations from the distances.
Compared to carrier-wave telecommunications, natural audio contains a variety of frequencies. It is advantageous to separate frequency bands prior to beamforming because different frequencies have different optimal beamform filters (and hence can be treated as separate problems, in parallel, and then recombined afterward). Properly isolating these bands involves specialized non-standard filter banks. In contrast, for example, the standard fast Fourier transform (FFT) band-filters implicitly assume that the only frequencies present in the signal are exact harmonics; frequencies which lie between these harmonics will typically activate all of the FFT channels (which is not what is wanted in a beamform analysis). Instead, filters can be designed in which only local frequencies are detected by each channel (while retaining the recombination property to be able to reconstruct the original signal), and these are typically non-orthogonal unlike the FFT basis.
References.
<templatestyles src="Reflist/styles.css" />
General.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "L"
},
{
"math_id": 1,
"text": "P"
},
{
"math_id": 2,
"text": "\\sigma_n^2"
},
{
"math_id": 3,
"text": "\\frac{1}{\\sigma_n^2}P\\cdot L"
}
]
| https://en.wikipedia.org/wiki?curid=1326932 |
13269420 | Unistochastic matrix | In mathematics, a unistochastic matrix (also called "unitary-stochastic") is a doubly stochastic matrix whose entries are the squares of the absolute values of the entries of some unitary matrix.
A square matrix "B" of size "n" is doubly stochastic (or "bistochastic") if all its entries are non-negative real numbers and each of its rows and columns sum to 1. It is unistochastic if there exists a unitary matrix "U" such that
formula_0
This definition is analogous to that for an orthostochastic matrix, which is a doubly stochastic matrix whose entries are the squares of the entries in some orthogonal matrix. Since all orthogonal matrices are necessarily unitary matrices, all orthostochastic matrices are also unistochastic. The converse, however, is not true. First, all 2-by-2 doubly stochastic matrices are both unistochastic and orthostochastic, but for larger "n" this is not the case. For example, take formula_1 and consider the following doubly stochastic matrix:
formula_2
This matrix is not unistochastic, since any two vectors with moduli equal to the square root of the entries of two columns (or rows) of "B" cannot be made orthogonal by a suitable choice of phases. For formula_3, the set of orthostochastic matrices is a proper subset of the set of unistochastic matrices.
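As a quick numerical illustration of the definition, the sketch below builds a unistochastic matrix from the 3×3 discrete Fourier transform matrix (a convenient unitary matrix) and checks that it is doubly stochastic. Showing that a given doubly stochastic matrix, such as the one above, is not unistochastic requires instead ruling out every choice of phases, which this sketch does not attempt.

```python
import numpy as np

n = 3
omega = np.exp(2j * np.pi / n)
# Normalized 3x3 DFT matrix: a standard example of a unitary matrix
U = np.array([[omega ** (j * k) for k in range(n)] for j in range(n)]) / np.sqrt(n)
assert np.allclose(U @ U.conj().T, np.eye(n))        # U is unitary

B = np.abs(U) ** 2                                   # B_ij = |U_ij|^2, hence unistochastic
print(B)                                             # every entry equals 1/3
print(B.sum(axis=0), B.sum(axis=1))                  # rows and columns each sum to 1
```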
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " B_{ij}=|U_{ij}|^2 \\text{ for } i,j=1,\\dots,n. \\, "
},
{
"math_id": 1,
"text": " n=3 "
},
{
"math_id": 2,
"text": "\nB= \\frac{1}{2} \n\\begin{bmatrix}\n1 & 1 & 0 \\\\\n0 & 1 & 1 \\\\\n1 & 0 & 1 \\end{bmatrix}.\n"
},
{
"math_id": 3,
"text": "n > 2"
},
{
"math_id": 4,
"text": " n \\ge 3 "
},
{
"math_id": 5,
"text": " n = 3 "
},
{
"math_id": 6,
"text": " n =3 "
},
{
"math_id": 7,
"text": " 8\\pi^2/105 \\approx 75.2 \\% "
},
{
"math_id": 8,
"text": " n =4 "
},
{
"math_id": 9,
"text": " \\mathcal{U}_n "
},
{
"math_id": 10,
"text": " n \\times n "
},
{
"math_id": 11,
"text": " v \\in \\mathbb{R}^n "
},
{
"math_id": 12,
"text": " \\mathcal{U}_nv "
},
{
"math_id": 13,
"text": " v "
},
{
"math_id": 14,
"text": " \\mathcal{U}_n \\subset \\mathbb{R}^{(n-1)^2} "
},
{
"math_id": 15,
"text": " U_{ij} = \\delta_{ij} + \\frac{\\theta-1}{n} "
},
{
"math_id": 16,
"text": " |\\theta|=1 "
},
{
"math_id": 17,
"text": " \\theta \\neq \\pm 1 "
}
]
| https://en.wikipedia.org/wiki?curid=13269420 |
1327 | Antiparticle | Particle with opposite charges
In particle physics, every type of particle of "ordinary" matter (as opposed to antimatter) is associated with an antiparticle with the same mass but with opposite physical charges (such as electric charge). For example, the antiparticle of the electron is the positron (also known as an antielectron). While the electron has a negative electric charge, the positron has a positive electric charge, and is produced naturally in certain types of radioactive decay. The opposite is also true: the antiparticle of the positron is the electron.
Some particles, such as the photon, are their own antiparticle. Otherwise, for each pair of antiparticle partners, one is designated as the normal particle (the one that occurs in matter usually interacted with in daily life). The other (usually given the prefix "anti-") is designated the "antiparticle".
Particle–antiparticle pairs can annihilate each other, producing photons; since the charges of the particle and antiparticle are opposite, total charge is conserved. For example, the positrons produced in natural radioactive decay quickly annihilate themselves with electrons, producing pairs of gamma rays, a process exploited in positron emission tomography.
The laws of nature are very nearly symmetrical with respect to particles and antiparticles. For example, an antiproton and a positron can form an antihydrogen atom, which is believed to have the same properties as a hydrogen atom. This leads to the question of why the formation of matter after the Big Bang resulted in a universe consisting almost entirely of matter, rather than being a half-and-half mixture of matter and antimatter. The discovery of charge parity violation helped to shed light on this problem by showing that this symmetry, originally thought to be perfect, was only approximate. The question remains unanswered, however, and the explanations proposed so far are not fully satisfactory.
Because charge is conserved, it is not possible to create an antiparticle without either destroying another particle of the same charge (as is for instance the case when antiparticles are produced naturally via beta decay or the collision of cosmic rays with Earth's atmosphere), or by the simultaneous creation of both a particle "and" its antiparticle (pair production), which can occur in particle accelerators such as the Large Hadron Collider at CERN.
Particles and their antiparticles have equal and opposite charges, so that an uncharged particle also gives rise to an uncharged antiparticle. In many cases, the antiparticle and the particle coincide: pairs of photons, Z0 bosons, π0 mesons, and hypothetical gravitons and some hypothetical WIMPs all self-annihilate. However, electrically neutral particles need not be identical to their antiparticles: for example, the neutron and antineutron are distinct.
History.
Experiment.
In 1932, soon after the prediction of positrons by Paul Dirac, Carl D. Anderson found that cosmic-ray collisions produced these particles in a cloud chamber – a particle detector in which moving electrons (or positrons) leave behind trails as they move through the gas. The electric charge-to-mass ratio of a particle can be measured by observing the radius of curling of its cloud-chamber track in a magnetic field. Positrons, because of the direction that their paths curled, were at first mistaken for electrons travelling in the opposite direction. Positron paths in a cloud-chamber trace the same helical path as an electron but rotate in the opposite direction with respect to the magnetic field direction due to their having the same magnitude of charge-to-mass ratio but with opposite charge and, therefore, opposite signed charge-to-mass ratios.
The antiproton and antineutron were found by Emilio Segrè and Owen Chamberlain in 1955 at the University of California, Berkeley. Since then, the antiparticles of many other subatomic particles have been created in particle accelerator experiments. In recent years, complete atoms of antimatter have been assembled out of antiprotons and positrons, collected in electromagnetic traps.
Dirac hole theory.
<templatestyles src="Template:Quote_box/styles.css" />
... the development of quantum field theory made the interpretation of antiparticles as holes unnecessary, even though it lingers on in many textbooks.
Steven Weinberg
Solutions of the Dirac equation contain negative energy quantum states. As a result, an electron could always radiate energy and fall into a negative energy state. Even worse, it could keep radiating infinite amounts of energy because there were infinitely many negative energy states available. To prevent this unphysical situation from happening, Dirac proposed that a "sea" of negative-energy electrons fills the universe, already occupying all of the lower-energy states so that, due to the Pauli exclusion principle, no other electron could fall into them. Sometimes, however, one of these negative-energy particles could be lifted out of this Dirac sea to become a positive-energy particle. But, when lifted out, it would leave behind a "hole" in the sea that would act exactly like a positive-energy electron with a reversed charge. These holes were interpreted as "negative-energy electrons" by Paul Dirac and mistakenly identified with protons in his 1930 paper "A Theory of Electrons and Protons". However, these "negative-energy electrons" turned out to be positrons, and not protons.
This picture implied an infinite negative charge for the universe – a problem of which Dirac was aware. Dirac tried to argue that we would perceive this as the normal state of zero charge. Another difficulty was the difference in masses of the electron and the proton. Dirac tried to argue that this was due to the electromagnetic interactions with the sea, until Hermann Weyl proved that hole theory was completely symmetric between negative and positive charges. Dirac also predicted a reaction e− + p+ → γ + γ, where an electron and a proton annihilate to give two photons. Robert Oppenheimer and Igor Tamm, however, proved that this would cause ordinary matter to disappear too fast. A year later, in 1931, Dirac modified his theory and postulated the positron, a new particle of the same mass as the electron. The discovery of this particle the next year removed the last two objections to his theory.
Within Dirac's theory, the problem of infinite charge of the universe remains. Some bosons also have antiparticles, but since bosons do not obey the Pauli exclusion principle (only fermions do), hole theory does not work for them. A unified interpretation of antiparticles is now available in quantum field theory, which solves both these problems by describing antimatter as negative energy states of the same underlying matter field, i.e. particles moving backwards in time.
Particle–antiparticle annihilation.
If a particle and antiparticle are in the appropriate quantum states, then they can annihilate each other and produce other particles. Reactions such as e− + e+ → γ + γ (the two-photon annihilation of an electron-positron pair) are an example. The single-photon annihilation of an electron-positron pair, e− + e+ → γ, cannot occur in free space because it is impossible to conserve energy and momentum together in this process. However, in the Coulomb field of a nucleus the translational invariance is broken and single-photon annihilation may occur. The reverse reaction (in free space, without an atomic nucleus) is also impossible for this reason. In quantum field theory, this process is allowed only as an intermediate quantum state for times short enough that the violation of energy conservation can be accommodated by the uncertainty principle. This opens the way for virtual pair production or annihilation in which a one particle quantum state may "fluctuate" into a two particle state and back. These processes are important in the vacuum state and renormalization of a quantum field theory. It also opens the way for neutral particle mixing through such processes, which are a complicated example of mass renormalization.
Properties.
Quantum states of a particle and an antiparticle are interchanged by the combined application of charge conjugation formula_0, parity formula_1 and time reversal formula_2.
formula_0 and formula_1 are linear, unitary operators, formula_2 is antilinear and antiunitary,
formula_3.
If formula_4 denotes the quantum state of a particle formula_5 with momentum formula_6 and spin formula_7 whose component in the z-direction is formula_8, then one has
formula_9
where formula_10 denotes the charge conjugate state, that is, the antiparticle. In particular a massive particle and its antiparticle transform under the same irreducible representation of the Poincaré group which means the antiparticle has the same mass and the same spin.
If formula_11, formula_12 and formula_13
can be defined separately on the particles and antiparticles, then
formula_14
formula_15
formula_16
where the proportionality sign indicates that there might be a phase on the right hand side.
As formula_17 anticommutes with the charges, formula_18, particle and antiparticle have opposite electric charges q and -q.
"This section draws upon the ideas, language and notation of canonical quantization of a quantum field theory."
Quantum field theory.
One may try to quantize an electron field without mixing the annihilation and creation operators by writing
formula_19
where we use the symbol "k" to denote the quantum numbers "p" and σ of the previous section and the sign of the energy, "E(k)", and "ak" denotes the corresponding annihilation operators. Of course, since we are dealing with fermions, we have to have the operators satisfy canonical anti-commutation relations. However, if one now writes down the Hamiltonian
formula_20
then one sees immediately that the expectation value of "H" need not be positive. This is because "E(k)" can have any sign whatsoever, and the combination of creation and annihilation operators has expectation value 1 or 0.
So one has to introduce the charge conjugate "antiparticle" field, with its own creation and annihilation operators satisfying the relations
formula_21
where "k" has the same "p", and opposite σ and sign of the energy. Then one can rewrite the field in the form
formula_22
where the first sum is over positive energy states and the second over those of negative energy. The energy becomes
formula_23
where "E0" is an infinite negative constant. The vacuum state is defined as the state with no particle or antiparticle, "i.e.", formula_24 and formula_25. Then the energy of the vacuum is exactly "E0". Since all energies are measured relative to the vacuum, H is positive definite. Analysis of the properties of "ak" and "bk" shows that one is the annihilation operator for particles and the other for antiparticles. This is the case of a fermion.
This approach is due to Vladimir Fock, Wendell Furry and Robert Oppenheimer. If one quantizes a real scalar field, then one finds that there is only one kind of annihilation operator; therefore, real scalar fields describe neutral bosons. Since complex scalar fields admit two different kinds of annihilation operators, which are related by conjugation, such fields describe charged bosons.
Feynman–Stückelberg interpretation.
By considering the propagation of the negative energy modes of the electron field backward in time, Ernst Stückelberg reached a pictorial understanding of the fact that the particle and antiparticle have equal mass m and spin J but opposite charges q. This allowed him to rewrite perturbation theory precisely in the form of diagrams. Richard Feynman later gave an independent systematic derivation of these diagrams from a particle formalism, and they are now called Feynman diagrams. Each line of a diagram represents a particle propagating either backward or forward in time. In Feynman diagrams, anti-particles are shown traveling backwards in time relative to normal matter, and vice versa. This technique is the most widespread method of computing amplitudes in quantum field theory today.
Since this picture was first developed by Stückelberg, and acquired its modern form in Feynman's work, it is called the Feynman–Stückelberg interpretation of antiparticles to honor both scientists.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " C "
},
{
"math_id": 1,
"text": " P "
},
{
"math_id": 2,
"text": " T "
},
{
"math_id": 3,
"text": " \\langle \\Psi | T\\,\\Phi\\rangle = \\langle \\Phi | T^{-1}\\,\\Psi\\rangle "
},
{
"math_id": 4,
"text": "|p,\\sigma ,n \\rangle "
},
{
"math_id": 5,
"text": " n "
},
{
"math_id": 6,
"text": " p "
},
{
"math_id": 7,
"text": " J "
},
{
"math_id": 8,
"text": " \\sigma "
},
{
"math_id": 9,
"text": "CPT \\ |p,\\sigma,n \\rangle\\ =\\ (-1)^{J-\\sigma}\\ |p,-\\sigma,n^c \\rangle ,"
},
{
"math_id": 10,
"text": " n^c "
},
{
"math_id": 11,
"text": "C"
},
{
"math_id": 12,
"text": "P"
},
{
"math_id": 13,
"text": "T"
},
{
"math_id": 14,
"text": "T\\ |p,\\sigma,n\\rangle \\ \\propto \\ |-p,-\\sigma,n\\rangle ,"
},
{
"math_id": 15,
"text": "CP\\ |p,\\sigma,n\\rangle \\ \\propto \\ |-p,\\sigma,n^c\\rangle ,"
},
{
"math_id": 16,
"text": "C\\ |p,\\sigma,n\\rangle \\ \\propto \\ |p,\\sigma,n^c\\rangle ,"
},
{
"math_id": 17,
"text": " CPT "
},
{
"math_id": 18,
"text": " CPT\\,Q = - Q\\, CPT "
},
{
"math_id": 19,
"text": "\\psi (x)=\\sum_{k}u_k (x)a_k e^{-iE(k)t},\\,"
},
{
"math_id": 20,
"text": "H=\\sum_{k} E(k) a^\\dagger_k a_k,\\,"
},
{
"math_id": 21,
"text": "b_{k\\prime} = a^\\dagger_k\\ \\mathrm{and}\\ b^\\dagger_{k\\prime}=a_k,\\,"
},
{
"math_id": 22,
"text": "\\psi(x)=\\sum_{k_+} u_k (x)a_k e^{-iE(k)t}+\\sum_{k_-} u_k (x)b^\\dagger _k e^{-iE(k)t},\\,"
},
{
"math_id": 23,
"text": "H=\\sum_{k_+} E_k a^\\dagger _k a_k + \\sum_{k_-} |E(k)|b^\\dagger_k b_k + E_0,\\,"
},
{
"math_id": 24,
"text": "a_k |0\\rangle=0"
},
{
"math_id": 25,
"text": "b_k |0\\rangle=0"
}
]
| https://en.wikipedia.org/wiki?curid=1327 |
13270149 | SNP (complexity) | Complexity class
In computational complexity theory, SNP (from Strict NP) is a complexity class containing a limited subset of NP based on its logical characterization in terms of graph-theoretical properties. It forms the basis for the definition of the class MaxSNP of optimization problems.
It is defined as the class of problems that are properties of relational structures (such as graphs) expressible by a second-order logic formula of the following form:
formula_0
where formula_1 are relations of the structure (such as the adjacency relation, for a graph), formula_2 are unknown relations (sets of tuples of vertices), and formula_3 is a quantifier-free formula: any boolean combination of the relations. That is, only existential second-order quantification (over relations) is allowed and only universal first-order quantification (over vertices) is allowed.
If existential quantification over vertices were also allowed, the resulting complexity class would be equal to NP (more precisely, the class of those properties of relational structures that are in NP), a fact known as Fagin's theorem.
For example, SNP contains 3-Coloring (the problem of determining whether a given graph is 3-colorable), because it can be expressed by the following formula:
formula_4
Here formula_5 denotes the adjacency relation of the input graph, while the sets (unary relations) formula_6 correspond to sets of vertices colored with one of the 3 colors.
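For intuition, the sketch below checks this sentence by brute force on small graphs: it guesses the three sets by assigning each vertex one of three colors (which automatically satisfies the clause requiring every vertex to be in some S_i) and then verifies the universally quantified, quantifier-free condition on every edge. The graphs and the Python encoding are illustrative choices.

```python
from itertools import product

def is_three_colorable(vertices, edges):
    # Existential part: guess S1, S2, S3 by assigning each vertex one of three colors
    for coloring in product(range(3), repeat=len(vertices)):
        color = dict(zip(vertices, coloring))
        # Universal part: no edge may join two vertices of the same color
        if all(color[u] != color[v] for u, v in edges):
            return True
    return False

c5 = (list(range(5)), [(i, (i + 1) % 5) for i in range(5)])                 # 5-cycle: 3-colorable
k4 = (list(range(4)), [(i, j) for i in range(4) for j in range(i + 1, 4)])  # K4: not 3-colorable
print(is_three_colorable(*c5), is_three_colorable(*k4))                     # True False
```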
Similarly, SNP contains the "k"-SAT problem: the boolean satisfiability problem (SAT) where the formula is restricted to conjunctive normal form and to at most "k" literals per clause, where "k" is fixed.
MaxSNP.
An analogous definition considers optimization problems, when instead of asking a formula to be satisfied for "all" tuples, one wants to maximize the number of tuples for which it is satisfied.
That is, MaxSNP0 is defined as the class of optimization problems on relational structures expressible in the following form:
formula_7
MaxSNP is then defined as the class of all problems with an L-reduction ("linear reduction", not "log-space reduction") to problems in MaxSNP0.
For example, MAX-3SAT is a problem in MaxSNP0: given an instance of 3-CNF-SAT (the boolean satisfiability problem with the formula in conjunctive normal form and at most 3 literals per clause), find an assignment satisfying as many clauses as possible.
In fact, it is a natural complete problem for MaxSNP.
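A brute-force sketch of the MAX-3SAT objective is shown below: for a small formula it enumerates all assignments and reports the maximum number of satisfied clauses. The clause encoding (signed, 1-based variable indices) and the example formula — all eight clauses on three variables, of which any assignment satisfies exactly seven — are illustrative choices.

```python
from itertools import product

def max_3sat(n_vars, clauses):
    # A literal is a signed, 1-based variable index; a clause is satisfied if any literal is true
    best = 0
    for bits in product([False, True], repeat=n_vars):
        satisfied = sum(any(bits[abs(l) - 1] == (l > 0) for l in clause) for clause in clauses)
        best = max(best, satisfied)
    return best

# All 8 possible clauses on variables 1, 2, 3: every assignment falsifies exactly one of them
clauses = [tuple(v if s else -v for v, s in zip((1, 2, 3), signs))
           for signs in product([True, False], repeat=3)]
print(max_3sat(3, clauses), "of", len(clauses), "clauses are simultaneously satisfiable")   # 7 of 8
```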
There is a fixed-ratio approximation algorithm to solve any problem in MaxSNP, hence MaxSNP is contained in APX, the class of all problems approximable to within some constant ratio.
In fact the closure of MaxSNP under PTAS reductions (slightly more general than L-reductions) is equal to APX; that is, every problem in APX has a PTAS reduction to it from some problem in MaxSNP.
In particular, every MaxSNP-complete problem (under L-reductions or under AP-reductions) is also APX-complete (under PTAS reductions), and hence does not admit a PTAS unless P=NP.
However, the proof of this relies on the PCP theorem, while proofs of MaxSNP-completeness are often elementary.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\exists S_1 \\dots \\exists S_\\ell \\, \\forall v_1 \\dots \\forall v_m \\,\\phi(R_1,\\dots,R_k,S_1,\\dots,S_\\ell,v_1,\\dots,v_m)"
},
{
"math_id": 1,
"text": "R_1,\\dots,R_k"
},
{
"math_id": 2,
"text": "S_1,\\dots,S_\\ell"
},
{
"math_id": 3,
"text": "\\phi"
},
{
"math_id": 4,
"text": "\\exists S_1 \\exists S_2 \\exists S_3 \\, \\forall u \\forall v \\, \\bigl( S_1(u) \\vee S_2(u) \\vee S_3(u) \\bigr) \\, \\wedge \\, \\bigl( E(u,v)\\,\\implies\\,(\\neg S_1(u) \\vee \\neg S_1(v))\\,\\wedge\\,\\left(\\neg S_2(u) \\vee \\neg S_2(v)\\right)\\,\\wedge\\,(\\neg S_3(u) \\vee \\neg S_3(v)) \\bigr) "
},
{
"math_id": 5,
"text": "E"
},
{
"math_id": 6,
"text": "S_1,S_2,S_3"
},
{
"math_id": 7,
"text": "\\max\\limits_{S_1,\\dots,S_\\ell} |\\{ (v_1, \\dots, v_m) \\colon \\phi(R_1,\\dots,R_k,S_1,\\dots,S_\\ell,v_1,\\dots,v_m)\\}|"
}
]
| https://en.wikipedia.org/wiki?curid=13270149 |
13271310 | NAD+ kinase | Enzyme
NAD+ kinase (EC 2.7.1.23, NADK) is an enzyme that converts nicotinamide adenine dinucleotide (NAD+) into NADP+ through phosphorylating the NAD+ coenzyme. NADP+ is an essential coenzyme that is reduced to NADPH primarily by the pentose phosphate pathway to provide reducing power in biosynthetic processes such as fatty acid biosynthesis and nucleotide synthesis. The structure of the NADK from the archaean "Archaeoglobus fulgidus" has been determined.
In humans, the genes "NADK" and "MNADK" encode NAD+ kinases localized in cytosol and mitochondria, respectively. Similarly, yeast have both cytosolic and mitochondrial isoforms, and the yeast mitochondrial isoform accepts both NAD+ and NADH as substrates for phosphorylation.
Reaction.
The reaction catalyzed by NADK is
ATP + NAD+ formula_0 ADP + NADP+
Mechanism.
NADK phosphorylates NAD+ at the 2’ position of the ribose ring that carries the adenine moiety. It is highly selective for its substrates, NAD and ATP, and does not tolerate modifications either to the phosphoryl acceptor, NAD, or the pyridine moiety of the phosphoryl donor, ATP. NADK also uses metal ions to coordinate the ATP in the active site. In vitro studies with various divalent metal ions have shown that zinc and manganese are preferred over magnesium, while copper and nickel are not accepted by the enzyme at all. A proposed mechanism involves the 2' alcohol oxygen acting as a nucleophile to attack the gamma-phosphoryl of ATP, releasing ADP.
Regulation.
NADK is highly regulated by the redox state of the cell. Whereas NAD is predominantly found in its oxidized state NAD+, the phosphorylated NADP is largely present in its reduced form, as NADPH. Thus, NADK can modulate responses to oxidative stress by controlling NADP synthesis. Bacterial NADK is shown to be inhibited allosterically by both NADPH and NADH. NADK is also reportedly stimulated by calcium/calmodulin binding in certain cell types, such as neutrophils. NAD kinases in plants and sea urchin eggs have also been found to bind calmodulin.
Clinical significance.
Due to the essential role of NADPH in lipid and DNA biosynthesis and the hyperproliferative nature of most cancers, NADK is an attractive target for cancer therapy. Furthermore, NADPH is required for the antioxidant activities of thioredoxin reductase and glutaredoxin. Thionicotinamide and other nicotinamide analogs are potential inhibitors of NADK, and studies show that treatment of colon cancer cells with thionicotinamide suppresses the cytosolic NADPH pool to increase oxidative stress and synergizes with chemotherapy.
While the role of NADK in increasing the NADPH pool appears to offer protection against apoptosis, there are also cases where NADK activity appears to potentiate cell death. Genetic studies done in human haploid cell lines indicate that knocking out NADK may protect from certain non-apoptotic stimuli.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13271310 |
132729 | Tuple | Finite ordered list of elements
In mathematics, a tuple is a finite sequence or "ordered list" of numbers or, more generally, mathematical objects, which are called the "elements" of the tuple. An n-tuple is a tuple of n elements, where n is a non-negative integer. There is only one 0-tuple, called the "empty tuple". A 1-tuple and a 2-tuple are commonly called a singleton and an ordered pair, respectively.
Tuples may be formally defined from ordered pairs by recurrence; indeed, an n-tuple can be identified with the ordered pair of its ("n" − 1) first elements and its nth element.
Tuples are usually written by listing the elements within parentheses "( )", separated by a comma and a space; for example, (2, 7, 4, 1, 7) denotes a 5-tuple. Sometimes other symbols are used to surround the elements, such as square brackets "[ ]" or angle brackets "⟨ ⟩". Braces "{ }" are used to specify arrays in some programming languages but not in mathematical expressions, as they are the standard notation for sets. The term "tuple" can often occur when discussing other mathematical objects, such as vectors.
In computer science, tuples come in many forms. Most typed functional programming languages implement tuples directly as product types, tightly associated with algebraic data types, pattern matching, and destructuring assignment. Many programming languages offer an alternative to tuples, known as record types, featuring unordered elements accessed by label. A few programming languages combine ordered tuple product types and unordered record types into a single construct, as in C structs and Haskell records. Relational databases may formally identify their rows (records) as "tuples".
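As a concrete illustration of tuples as ordered, fixed-length products (a sketch added here for clarity rather than a reference implementation from any language standard), Python's built-in tuples with type annotations behave as follows:

```python
from typing import Tuple

# A 3-tuple whose components may have different types,
# analogous to a product type T1 x T2 x T3.
record: Tuple[str, int, float] = ("Alice", 30, 1.75)

# Components are accessed by position (projection), and tuples support
# destructuring assignment.
name, age, height = record
assert record[0] == name and record[2] == height

# Equality requires the same length, the same order, and equal components.
assert (1, 2, 3) != (3, 2, 1)
assert (1, 2, 2, 3) != (1, 2, 3)
```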
Tuples also occur in relational algebra; when programming the semantic web with the Resource Description Framework (RDF); in linguistics; and in philosophy.
Etymology.
The term originated as an abstraction of the sequence: single, couple/double, triple, quadruple, quintuple, sextuple, septuple, octuple, ..., "n"‑tuple, ..., where the prefixes are taken from the Latin names of the numerals. The unique 0-tuple is called the "null tuple" or "empty tuple". A 1‑tuple is called a "single" (or "singleton"), a 2‑tuple is called an "ordered pair" or "couple", and a 3‑tuple is called a "triple" (or "triplet"). The number "n" can be any nonnegative integer. For example, a complex number can be represented as a 2‑tuple of reals, a quaternion can be represented as a 4‑tuple, an octonion can be represented as an 8‑tuple, and a sedenion can be represented as a 16‑tuple.
Although these uses treat "‑uple" as the suffix, the original suffix was "‑ple" as in "triple" (three-fold) or "decuple" (ten‑fold). This originates from medieval Latin "plus" (meaning "more") related to Greek ‑πλοῦς, which replaced the classical and late antique "‑plex" (meaning "folded"), as in "duplex".
Properties.
The general rule for the identity of two "n"-tuples is
formula_0 if and only if formula_1.
Thus a tuple has properties that distinguish it from a set:
A tuple may contain multiple instances of the same element, so the tuple formula_2, but the set formula_3.
Tuple elements are ordered: the tuple formula_4, but the set formula_5.
Definitions.
There are several definitions of tuples that give them the properties described in the previous section.
Tuples as functions.
The formula_6-tuple may be identified as the empty function. For formula_7 the formula_8-tuple formula_9 may be identified with the (surjective) function
formula_10
with domain
formula_11
and with codomain
formula_12
that is defined at formula_13 by
formula_14
That is, formula_15 is the function defined by
formula_16
in which case the equality
formula_17
necessarily holds.
Functions are commonly identified with their graphs, which is a certain set of ordered pairs.
Indeed, many authors use graphs as the definition of a function.
Using this definition of "function", the above function formula_15 can be defined as:
formula_18
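A minimal sketch of this "tuple as function" definition (illustrative only; the helper below is not standard library code) represents an n-tuple by the graph of F, i.e. the pairs (i, a_i) for i in {1, ..., n}:

```python
def tuple_as_function(*elements):
    """Model the n-tuple (a_1, ..., a_n) as the set of ordered pairs
    {(1, a_1), ..., (n, a_n)}, i.e. the graph of F with F(i) = a_i."""
    return {i: a for i, a in enumerate(elements, start=1)}

F = tuple_as_function("a", "b", "c")
assert F[1] == "a" and F[3] == "c"      # F(i) = a_i
assert sorted(F.keys()) == [1, 2, 3]    # domain {1, ..., n}
assert len(tuple_as_function()) == 0    # the 0-tuple is the empty function
```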
Tuples as nested ordered pairs.
Another way of modeling tuples in Set Theory is as nested ordered pairs. This approach assumes that the notion of ordered pair has already been defined.
The 0-tuple (i.e. the empty tuple) is represented by the empty set formula_19.
An "n"-tuple, with "n" > 0, can be defined as an ordered pair of its first entry and an ("n" − 1)-tuple (which contains the remaining entries when "n" > 1):
formula_20
This definition can be applied recursively to the ("n" − 1)-tuple:
formula_21
Thus, for example:
formula_22
A variant of this definition starts "peeling off" elements from the other end:
The 0-tuple is the empty set formula_19.
For "n" > 0:
formula_23
This definition can be applied recursively:
formula_24
Thus, for example:
formula_25
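The two recursive encodings can be mimicked directly. In the following sketch, Python 2-tuples play the role of the primitive ordered pairs and the empty tuple () stands in for the empty set; this is purely illustrative:

```python
def nest_from_front(elements):
    """(a1, a2, ..., an) -> (a1, (a2, (..., (an, empty)...)))."""
    empty = ()                       # stands in for the empty set
    result = empty
    for a in reversed(elements):
        result = (a, result)
    return result

def nest_from_back(elements):
    """(a1, ..., an) -> ((...((empty, a1), a2), ...), an)."""
    empty = ()
    result = empty
    for a in elements:
        result = (result, a)
    return result

assert nest_from_front([1, 2, 3]) == (1, (2, (3, ())))
assert nest_from_back([1, 2, 3]) == ((((), 1), 2), 3)
```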
Tuples as nested sets.
Using Kuratowski's representation for an ordered pair, the second definition above can be reformulated in terms of pure set theory:
The 0-tuple is the empty set formula_19.
Let formula_26 be an "n"-tuple formula_27, and let formula_28. Then formula_29. (The right arrow, formula_30, could be read as "adjoined with".)
In this formulation:
formula_31
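A small sketch of the Kuratowski encoding (illustrative; frozensets are used so that sets can contain sets, and the helper names are chosen here, not standard):

```python
def kuratowski_pair(x, b):
    """Kuratowski ordered pair: (x, b) := {{x}, {x, b}}, using frozensets."""
    return frozenset({frozenset({x}), frozenset({x, b})})

def tuple_as_nested_set(elements):
    """Build (a1, ..., an) as ((...((0-tuple, a1), a2), ...), an) with
    Kuratowski pairs; the empty frozenset plays the role of the 0-tuple."""
    result = frozenset()
    for a in elements:
        result = kuratowski_pair(result, a)
    return result

one = tuple_as_nested_set([1])
# (1) = {{()}, {(), 1}} = {{emptyset}, {emptyset, 1}}
assert one == frozenset({frozenset({frozenset()}),
                         frozenset({frozenset(), 1})})
```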
"n"-tuples of "m"-sets.
In discrete mathematics, especially combinatorics and finite probability theory, "n"-tuples arise in the context of various counting problems and are treated more informally as ordered lists of length "n". "n"-tuples whose entries come from a set of "m" elements are also called "arrangements with repetition", "permutations of a multiset" and, in some non-English literature, "variations with repetition". The number of "n"-tuples of an "m"-set is "m""n". This follows from the combinatorial rule of product. If "S" is a finite set of cardinality "m", this number is the cardinality of the "n"-fold Cartesian power "S" × "S" × ⋯ × "S". Tuples are elements of this product set.
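For example (a brief illustrative sketch), the rule of product can be checked by enumerating all n-tuples over an m-set:

```python
from itertools import product

S = {"a", "b", "c"}                   # an m-set with m = 3
n = 2
tuples = list(product(S, repeat=n))   # all n-tuples with entries from S

assert len(tuples) == len(S) ** n     # m**n = 9, the rule of product
```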
Type theory.
In type theory, commonly used in programming languages, a tuple has a product type; this fixes not only the length, but also the underlying types of each component. Formally:
formula_32
and the projections are term constructors:
formula_33
The tuple with labeled elements used in the relational model has a record type. Both of these types can be defined as simple extensions of the simply typed lambda calculus.
The notion of a tuple in type theory and that in set theory are related in the following way: If we consider the natural model of a type theory, and use the Scott brackets to indicate the semantic interpretation, then the model consists of some sets formula_34 (note: the use of italics here that distinguishes sets from types) such that:
formula_35
and the interpretation of the basic terms is:
formula_36.
The "n"-tuple of type theory has the natural interpretation as an "n"-tuple of set theory:
formula_37
The unit type has as semantic interpretation the 0-tuple.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(a_1, a_2, \\ldots, a_n) = (b_1, b_2, \\ldots, b_n)"
},
{
"math_id": 1,
"text": "a_1=b_1,\\text{ }a_2=b_2,\\text{ }\\ldots,\\text{ }a_n=b_n"
},
{
"math_id": 2,
"text": "(1,2,2,3) \\neq (1,2,3)"
},
{
"math_id": 3,
"text": "\\{1,2,2,3\\} = \\{1,2,3\\}"
},
{
"math_id": 4,
"text": "(1,2,3) \\neq (3,2,1)"
},
{
"math_id": 5,
"text": "\\{1,2,3\\} = \\{3,2,1\\}"
},
{
"math_id": 6,
"text": "0"
},
{
"math_id": 7,
"text": "n \\geq 1,"
},
{
"math_id": 8,
"text": "n"
},
{
"math_id": 9,
"text": "\\left(a_1, \\ldots, a_n\\right)"
},
{
"math_id": 10,
"text": "F ~:~ \\left\\{ 1, \\ldots, n \\right\\} ~\\to~ \\left\\{ a_1, \\ldots, a_n \\right\\}"
},
{
"math_id": 11,
"text": "\\operatorname{domain} F = \\left\\{ 1, \\ldots, n \\right\\} = \\left\\{ i \\in \\N : 1 \\leq i \\leq n\\right\\}"
},
{
"math_id": 12,
"text": "\\operatorname{codomain} F = \\left\\{ a_1, \\ldots, a_n \\right\\},"
},
{
"math_id": 13,
"text": "i \\in \\operatorname{domain} F = \\left\\{ 1, \\ldots, n \\right\\}"
},
{
"math_id": 14,
"text": "F(i) := a_i."
},
{
"math_id": 15,
"text": "F"
},
{
"math_id": 16,
"text": "\\begin{alignat}{3}\n1 \\;&\\mapsto&&\\; a_1 \\\\\n \\;&\\;\\;\\vdots&&\\; \\\\\nn \\;&\\mapsto&&\\; a_n \\\\\n\\end{alignat}"
},
{
"math_id": 17,
"text": "\\left(a_1, a_2, \\dots, a_n\\right) = \\left(F(1), F(2), \\dots, F(n)\\right)"
},
{
"math_id": 18,
"text": "F ~:=~ \\left\\{ \\left(1, a_1\\right), \\ldots, \\left(n, a_n\\right) \\right\\}."
},
{
"math_id": 19,
"text": "\\emptyset"
},
{
"math_id": 20,
"text": "(a_1, a_2, a_3, \\ldots, a_n) = (a_1, (a_2, a_3, \\ldots, a_n))"
},
{
"math_id": 21,
"text": "(a_1, a_2, a_3, \\ldots, a_n) = (a_1, (a_2, (a_3, (\\ldots, (a_n, \\emptyset)\\ldots))))"
},
{
"math_id": 22,
"text": "\n \\begin{align}\n (1, 2, 3) & = (1, (2, (3, \\emptyset))) \\\\\n (1, 2, 3, 4) & = (1, (2, (3, (4, \\emptyset)))) \\\\\n \\end{align}\n "
},
{
"math_id": 23,
"text": "(a_1, a_2, a_3, \\ldots, a_n) = ((a_1, a_2, a_3, \\ldots, a_{n-1}), a_n)"
},
{
"math_id": 24,
"text": "(a_1, a_2, a_3, \\ldots, a_n) = ((\\ldots(((\\emptyset, a_1), a_2), a_3), \\ldots), a_n)"
},
{
"math_id": 25,
"text": "\n \\begin{align}\n (1, 2, 3) & = (((\\emptyset, 1), 2), 3) \\\\\n (1, 2, 3, 4) & = ((((\\emptyset, 1), 2), 3), 4) \\\\\n \\end{align}\n "
},
{
"math_id": 26,
"text": "x"
},
{
"math_id": 27,
"text": "(a_1, a_2, \\ldots, a_n)"
},
{
"math_id": 28,
"text": "x \\rightarrow b \\equiv (a_1, a_2, \\ldots, a_n, b)"
},
{
"math_id": 29,
"text": "x \\rightarrow b \\equiv \\{\\{x\\}, \\{x, b\\}\\}"
},
{
"math_id": 30,
"text": "\\rightarrow"
},
{
"math_id": 31,
"text": "\n \\begin{array}{lclcl}\n () & & &=& \\emptyset \\\\\n & & & & \\\\\n (1) &=& () \\rightarrow 1 &=& \\{\\{()\\},\\{(),1\\}\\} \\\\\n & & &=& \\{\\{\\emptyset\\},\\{\\emptyset,1\\}\\} \\\\\n & & & & \\\\\n (1,2) &=& (1) \\rightarrow 2 &=& \\{\\{(1)\\},\\{(1),2\\}\\} \\\\\n & & &=& \\{\\{\\{\\{\\emptyset\\},\\{\\emptyset,1\\}\\}\\}, \\\\\n & & & & \\{\\{\\{\\emptyset\\},\\{\\emptyset,1\\}\\},2\\}\\} \\\\\n & & & & \\\\\n (1,2,3) &=& (1,2) \\rightarrow 3 &=& \\{\\{(1,2)\\},\\{(1,2),3\\}\\} \\\\\n & & &=& \\{\\{\\{\\{\\{\\{\\emptyset\\},\\{\\emptyset,1\\}\\}\\}, \\\\\n & & & & \\{\\{\\{\\emptyset\\},\\{\\emptyset,1\\}\\},2\\}\\}\\}, \\\\\n & & & & \\{\\{\\{\\{\\{\\emptyset\\},\\{\\emptyset,1\\}\\}\\}, \\\\\n & & & & \\{\\{\\{\\emptyset\\},\\{\\emptyset,1\\}\\},2\\}\\},3\\}\\} \\\\\n \\end{array}\n "
},
{
"math_id": 32,
"text": "(x_1, x_2, \\ldots, x_n) : \\mathsf{T}_1 \\times \\mathsf{T}_2 \\times \\ldots \\times \\mathsf{T}_n"
},
{
"math_id": 33,
"text": "\\pi_1(x) : \\mathsf{T}_1,~\\pi_2(x) : \\mathsf{T}_2,~\\ldots,~\\pi_n(x) : \\mathsf{T}_n"
},
{
"math_id": 34,
"text": "S_1, S_2, \\ldots, S_n"
},
{
"math_id": 35,
"text": "[\\![\\mathsf{T}_1]\\!] = S_1,~[\\![\\mathsf{T}_2]\\!] = S_2,~\\ldots,~[\\![\\mathsf{T}_n]\\!] = S_n"
},
{
"math_id": 36,
"text": "[\\![x_1]\\!] \\in [\\![\\mathsf{T}_1]\\!],~[\\![x_2]\\!] \\in [\\![\\mathsf{T}_2]\\!],~\\ldots,~[\\![x_n]\\!] \\in [\\![\\mathsf{T}_n]\\!]"
},
{
"math_id": 37,
"text": "[\\![(x_1, x_2, \\ldots, x_n)]\\!] = (\\,[\\![x_1]\\!], [\\![x_2]\\!], \\ldots, [\\![x_n]\\!]\\,)"
}
]
| https://en.wikipedia.org/wiki?curid=132729 |
1327379 | Quantum wire | An electrically conducting wire in which quantum effects influence the transport properties
In mesoscopic physics, a quantum wire is an electrically conducting wire in which quantum effects influence the transport properties. Usually such effects appear at the scale of nanometers, so these wires are also referred to as nanowires.
Quantum effects.
If the diameter of a wire is sufficiently small, electrons will experience quantum confinement in the transverse direction. As a result, their transverse energy will be limited to a series of discrete values. One consequence of this quantization is that the classical formula for calculating the electrical resistance of a wire,
formula_0
is not valid for quantum wires (where formula_1 is the material's resistivity, formula_2 is the length, and formula_3 is the cross-sectional area of the wire).
Instead, an exact calculation of the transverse energies of the confined electrons has to be performed to calculate a wire's resistance. Following from the quantization of electron energy, the electrical conductance (the inverse of the resistance) is found to be quantized in multiples of formula_4, where formula_5 is the electron charge and formula_6 is the Planck constant. The factor of two arises from spin degeneracy. A single ballistic quantum channel (i.e. with no internal scattering) has a conductance equal to this quantum of conductance. The conductance is lower than this value in the presence of internal scattering.
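As a rough numerical illustration (the constants below are the exact SI definitions; the helper function is an assumption made here, not a formula from the cited literature), the conductance quantum and the resistance of an ideal ballistic wire can be computed as:

```python
# Exact SI values (2019 redefinition) of the elementary charge and Planck constant.
e = 1.602176634e-19      # C
h = 6.62607015e-34       # J*s

G0 = 2 * e**2 / h        # conductance quantum, ~7.75e-5 S
R0 = 1 / G0              # resistance of one ballistic channel, ~12.9 kOhm

def ballistic_resistance(n_channels):
    """Resistance of an ideal quantum wire carrying n open ballistic channels
    (no internal scattering): R = 1 / (n * G0)."""
    return 1.0 / (n_channels * G0)

print(f"G0 = {G0:.4e} S, R0 = {R0:.0f} Ohm")
print(f"Two channels (metallic nanotube): {ballistic_resistance(2):.0f} Ohm")
```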
The importance of the quantization is inversely proportional to the diameter of the nanowire for a given material. From material to material, it is dependent on the electronic properties, especially on the effective mass of the electrons. Physically, this means that it will depend on how conduction electrons interact with the atoms within a given material. In practice, semiconductors can show clear conductance quantization for large wire transverse dimensions (~100 nm) because the electronic modes due to confinement are spatially extended. As a result, their Fermi wavelengths are large and thus they have low energy separations. This means that they can only be resolved at cryogenic temperatures (within a few degrees of absolute zero) where the thermal energy is lower than the inter-mode energy separation.
For metals, quantization corresponding to the lowest energy states is only observed for atomic wires. Because the corresponding Fermi wavelength is extremely small, the energy separation between modes is very large, which makes resistance quantization observable even at room temperature.
Carbon nanotubes.
The carbon nanotube is an example of a quantum wire. A metallic single-walled carbon nanotube that is sufficiently short to exhibit no internal scattering (ballistic transport) has a conductance that approaches two times the conductance quantum, formula_4. The factor of two arises because carbon nanotubes have two spatial channels.
The structure of a nanotube strongly affects its electrical properties. For a given ("n","m") nanotube, if "n" = "m", the nanotube is metallic; if "n" − "m" is a multiple of 3, then the nanotube is semiconducting with a very small band gap, otherwise the nanotube is a moderate semiconductor. Thus all armchair ("n" = "m") nanotubes are metallic, and nanotubes (6,4), (9,1), etc. are semiconducting.
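The ("n","m") rule above can be expressed as a short helper (an illustrative sketch; the labels mirror the wording of this article rather than any standard taxonomy):

```python
def classify_nanotube(n, m):
    """Classify an (n, m) carbon nanotube by the rule described above."""
    if n == m:
        return "metallic (armchair)"
    if (n - m) % 3 == 0:
        return "semiconducting with a very small band gap"
    return "moderate semiconductor"

assert classify_nanotube(5, 5).startswith("metallic")
assert "small band gap" in classify_nanotube(9, 3)       # n - m = 6
assert classify_nanotube(6, 4) == "moderate semiconductor"
assert classify_nanotube(9, 1) == "moderate semiconductor"
```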
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "R = \\rho \\frac{l}{A},"
},
{
"math_id": 1,
"text": "\\rho"
},
{
"math_id": 2,
"text": "l"
},
{
"math_id": 3,
"text": "A"
},
{
"math_id": 4,
"text": "2e^2/h"
},
{
"math_id": 5,
"text": "e"
},
{
"math_id": 6,
"text": "h"
}
]
| https://en.wikipedia.org/wiki?curid=1327379 |
13274389 | Articulated body pose estimation | Field of study in computer vision
Articulated body pose estimation in computer vision is the study of algorithms and systems that recover the pose of an articulated body, which consists of joints and rigid parts using image-based observations. It is one of the longest-lasting problems in computer vision because of the complexity of the models that relate observation with pose, and because of the variety of situations in which it would be useful.
Description.
Perception of human beings in their neighboring environment is an important capability that robots must possess. If a person uses gestures to point to a particular object, then the interacting machine should be able to understand the situation in real world context. Thus pose estimation is an important and challenging problem in computer vision, and many algorithms have been deployed in solving this problem over the last two decades. Many solutions involve training complex models with large data sets.
Pose estimation is a difficult problem and an active subject of research because the human body has 244 degrees of freedom with 230 joints. Although not all movements between joints are evident, the human body is composed of 10 large parts with 20 degrees of freedom. Algorithms must account for large variability introduced by differences in appearance due to clothing, body shape, size, and hairstyles. Additionally, the results may be ambiguous due to partial occlusions from self-articulation, such as a person's hand covering their face, or occlusions from external objects. Finally, most algorithms estimate pose from monocular (two-dimensional) images taken with an ordinary camera; such images lack the three-dimensional information of the actual body pose, leading to further ambiguities. Other issues include varying lighting and camera configurations, and the difficulties are compounded if there are additional performance requirements. There is recent work in this area wherein images from RGBD cameras provide information about both color and depth.
Sensors.
The typical articulated body pose estimation system involves a model-based approach, in which the pose estimation is achieved by maximizing/minimizing a similarity/dissimilarity between an observation (input) and a template model. Different kinds of sensors have been explored for use in making the observation, including the following:
These sensors produce intermediate representations that are directly used by the model. The representations include the following:
Classical models.
Part models.
The basic idea of the part-based model can be attributed to the human skeleton. Any object having the property of articulation can be broken down into smaller parts, wherein each part can take different orientations, resulting in different articulations of the same object. Different scales and orientations of the main object correspond to scales and orientations of its parts. To formulate the model in mathematical terms, the parts are connected to each other by springs; the model is therefore also known as a spring model. The degree of closeness between parts is accounted for by the compression and expansion of the springs. There are geometric constraints on the orientations of the springs: for example, limbs such as legs cannot rotate through 360 degrees, so parts cannot take such extreme orientations, which reduces the space of possible configurations.
The spring model forms a graph G(V,E), where V (nodes) corresponds to the parts and E (edges) represents the springs connecting neighboring parts. Each location in the image is specified by the formula_0 and formula_1 coordinates of a pixel. Let formula_2 be the location of the formula_3 part. The cost associated with the spring joining the formula_3 and formula_4 parts is given by formula_5. Hence the total cost associated with placing formula_6 components at locations formula_7 is given by
formula_8
The above equation simply represents the spring model used to describe body pose. To estimate pose from images, a cost or energy function must be minimized. This energy function consists of two terms: the first is related to how well each component matches the image data, and the second deals with how well the oriented (deformed) parts match one another, thus accounting for articulation along with object detection.
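A minimal sketch of the deformation (spring) term is given below; the quadratic spring cost and the names used are assumptions made here for illustration, and the appearance-matching term of the energy function is omitted:

```python
def spring_cost(p_i, p_j, rest_offset=(0.0, 0.0), stiffness=1.0):
    """Quadratic deformation cost for one spring between parts at pixel
    locations p_i and p_j (a common, but here assumed, choice of S)."""
    dx = (p_i[0] - p_j[0]) - rest_offset[0]
    dy = (p_i[1] - p_j[1]) - rest_offset[1]
    return stiffness * (dx * dx + dy * dy)

def total_spring_cost(locations, edges):
    """Sum of spring costs over the graph edges E, i.e. the S(P_l) term."""
    return sum(spring_cost(locations[i], locations[j]) for i, j in edges)

# Toy example: head, torso and one arm, with springs head-torso and torso-arm.
locations = {"head": (50, 20), "torso": (50, 60), "arm": (30, 65)}
edges = [("head", "torso"), ("torso", "arm")]
print(total_spring_cost(locations, edges))
```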
Part models, also known as pictorial structures, are one of the basic models on which other efficient models are built by slight modification. One such example is the flexible mixture model which reduces the database of hundreds or thousands of deformed parts by exploiting the notion of local rigidity.
Articulated model with quaternion.
The kinematic skeleton is constructed by a tree-structured chain. Each rigid body segment has its local coordinate system that can be transformed to the world coordinate system via a 4×4 transformation matrix formula_9,
formula_10
where formula_11 denotes the local transformation from body segment formula_12 to its parent formula_13. Each joint in the body has 3 degrees of freedom (DoF) of rotation. Given a transformation matrix formula_14, the joint position at the T-pose can be transferred to its corresponding position in the world coordinate system. In many works, the 3D joint rotation is expressed as a normalized quaternion formula_15 because of its continuity, which facilitates gradient-based optimization in the parameter estimation.
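The following sketch (assuming NumPy; the function names are placeholders chosen here) shows how a normalized quaternion is turned into a rotation and how local transforms are composed along the tree-structured chain:

```python
import numpy as np

def quat_to_matrix(q):
    """4x4 homogeneous rotation from a quaternion [x, y, z, w] (normalized here)."""
    x, y, z, w = np.asarray(q, dtype=float) / np.linalg.norm(q)
    R = np.eye(4)
    R[:3, :3] = [
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ]
    return R

def world_transforms(parent, local_T):
    """Compose T_l = T_par(l) . R_l along a tree-structured kinematic chain.
    parent[l] is the parent segment id (None for the root); local_T[l] is the
    local 4x4 transform of segment l. Segment ids are assumed to be listed so
    that every parent appears before its children."""
    T = {}
    for l in local_T:
        T[l] = local_T[l] if parent[l] is None else T[parent[l]] @ local_T[l]
    return T

# Toy two-segment chain: a root and one child rotated 90 degrees about z.
root = np.eye(4)
child = quat_to_matrix([0.0, 0.0, np.sin(np.pi / 4), np.cos(np.pi / 4)])
T = world_transforms({"root": None, "child": "root"},
                     {"root": root, "child": child})
```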
Deep learning based models.
Since about 2016, deep learning has emerged as the dominant method for performing accurate articulated body pose estimation. Rather than building an explicit model for the parts as above, the appearances of the joints and relationships between the joints of the body are learned from large training sets. Models generally focus on extracting the 2D positions of joints (keypoints), the 3D positions of joints, or the 3D shape of the body from either a single or multiple images.
Supervised.
2D joint positions.
The first deep learning models that emerged focused on extracting the 2D positions of human joints in an image. Such models take in an image and pass it through a convolutional neural network to obtain a series of heatmaps (one for each joint) which take on high values where joints are detected.
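A minimal decoding step for such heatmaps might look as follows (an illustrative sketch assuming NumPy; real systems typically add sub-pixel refinement):

```python
import numpy as np

def decode_heatmaps(heatmaps, threshold=0.1):
    """Turn a (num_joints, H, W) stack of heatmaps into 2D keypoints.
    Each keypoint is the (x, y) location of the heatmap maximum, or None
    if the peak is below `threshold` (joint treated as not detected)."""
    keypoints = []
    for hm in heatmaps:
        y, x = np.unravel_index(np.argmax(hm), hm.shape)
        keypoints.append((int(x), int(y)) if hm[y, x] >= threshold else None)
    return keypoints

# Toy example: one 5x5 "heatmap" peaking at (x=3, y=1).
hm = np.zeros((1, 5, 5)); hm[0, 1, 3] = 0.9
assert decode_heatmaps(hm) == [(3, 1)]
```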
When there are multiple people per image, two main techniques have emerged for grouping joints within each person. In the first, "bottom-up" approach, the neural network is trained to also generate "part affinity fields" which indicate the location of limbs. Using these fields, joints can be grouped limb by limb by solving a series of assignment problems. In the second, "top-down" approach, an additional network is used to first detect people in the image and then the pose estimation network is applied to each image.
3D joint positions.
With the advent of multiple datasets with human pose annotated in multiple views, models which detect 3D joint positions became more popular. These again fall into two categories. In the first, a neural network is used to detect 2D joint positions from each view and these detections are then triangulated to obtain 3D joint positions. The 2D network may be refined to produce better detections based on the 3D data. Furthermore, such approaches often have filters in both 2D and 3D to refine the detected points. In the second, a neural network is trained end-to-end to predict 3D joint positions directly from a set of images, without intermediate 2D joint position detections. Such approaches often project image features into a cube and then use a 3D convolutional neural network to predict a 3D heatmap for each joint.
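For the first, triangulation-based category, a standard linear (direct linear transform) triangulation of one joint from its per-view 2D detections can be sketched as follows (illustrative; camera projection matrices are assumed to be known and the detections already undistorted):

```python
import numpy as np

def triangulate(proj_matrices, points_2d):
    """Linear (DLT) triangulation of one joint from several views.
    proj_matrices: list of 3x4 camera projection matrices.
    points_2d: matching list of (x, y) detections, one per view."""
    rows = []
    for P, (x, y) in zip(proj_matrices, points_2d):
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]          # homogeneous -> Euclidean 3D point

# Two-view sanity check: a point at (0, 0, 5) seen by two simple cameras.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
print(triangulate([P1, P2], [(0.0, 0.0), (-0.2, 0.0)]))   # ~[0, 0, 5]
```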
3D shape.
Concurrently with the work above, scientists have been working on estimating the full 3D shape of a human or animal from a set of images. Most of the work is based on estimating the appropriate pose of the skinned multi-person linear (SMPL) model within an image. Variants of the SMPL model for other animals have also been developed. Generally, some keypoints and a silhouette are detected for each animal within the image, and then the parameters of the 3D shape model are fit to match the positions of the keypoints and the silhouette.
Unsupervised.
The above algorithms all rely on annotated images, which can be time-consuming to produce. To address this issue, computer vision researchers have developed new algorithms which can learn 3D keypoints given only annotated 2D images from a single view or identify keypoints given videos without any annotations.
Applications.
Assisted living.
Personal care robots may be deployed in future assisted living homes. For these robots, high-accuracy human detection and pose estimation is necessary to perform a variety of tasks, such as fall detection. Additionally, this application has a number of performance constraints.
Character animation.
Traditionally, character animation has been a manual process. However, poses can be synced directly to a real-life actor through specialized pose estimation systems. Older systems relied on markers or specialized suits. Recent advances in pose estimation and motion capture have enabled markerless applications, sometimes in real time.
Intelligent driver assisting system.
Car accidents account for about two percent of deaths globally each year. As such, an intelligent system tracking driver pose may be useful for emergency alerts. Along the same lines, pedestrian detection algorithms have been used successfully in autonomous cars, enabling the car to make smarter decisions.
Video games.
Commercially, pose estimation has been used in the context of video games, popularized with the Microsoft Kinect sensor (a depth camera). These systems track the user to render their avatar in-game, in addition to performing tasks like gesture recognition to enable the user to interact with the game. As such, this application has a strict real-time requirement.
Medical applications.
Pose estimation has been used to detect postural issues such as scoliosis by analyzing abnormalities in a patient's posture, in physical therapy, and in the study of the cognitive brain development of young children by monitoring motor functionality.
Other applications.
Other applications include video surveillance, animal tracking and behavior understanding, sign language detection, advanced human–computer interaction, and markerless motion capturing.
Related technology.
A commercially successful but specialized computer vision-based articulated body pose estimation technique is optical motion capture. This approach involves placing markers on the individual at strategic locations to capture the 6 degrees-of-freedom of each body part.
Research groups.
A number of groups and companies are researching pose estimation, including groups at Brown University, Carnegie Mellon University, MPI Saarbruecken, Stanford University, the University of California, San Diego, the University of Toronto, the École Centrale Paris, ETH Zurich, National University of Sciences and Technology (NUST), the University of California, Irvine and Polytechnic University of Catalonia.
Companies.
At present, several companies are working on articulated body pose estimation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x"
},
{
"math_id": 1,
"text": "y"
},
{
"math_id": 2,
"text": "\\mathbf{p}_{i}(x, \\, y)"
},
{
"math_id": 3,
"text": "\\mathbf{i}^{th}"
},
{
"math_id": 4,
"text": "\\mathbf{j}^{th}"
},
{
"math_id": 5,
"text": "S(\\mathbf{p}_{i},\\,\\mathbf{p}_{j}) = S(\\mathbf{p}_{i} - \\mathbf{p}_{j})"
},
{
"math_id": 6,
"text": "l"
},
{
"math_id": 7,
"text": "\\mathbf{P}_{l}"
},
{
"math_id": 8,
"text": "\nS(\\mathbf{P}_{l}) = \\displaystyle\\sum_{i=1}^{l} \\; \\displaystyle\\sum_{j=1}^{i} \\; \\mathbf{s}_{ij}(\\mathbf{p}_{i},\\,\\mathbf{p}_{j})\n"
},
{
"math_id": 9,
"text": "T_l "
},
{
"math_id": 10,
"text": "\nT_{l} = T_{\\operatorname{par}(l)}R_{l},\n"
},
{
"math_id": 11,
"text": "R_l"
},
{
"math_id": 12,
"text": "S_l"
},
{
"math_id": 13,
"text": "\\operatorname{par}(S_l)"
},
{
"math_id": 14,
"text": "T_l"
},
{
"math_id": 15,
"text": "[x,y,z,w]"
}
]
| https://en.wikipedia.org/wiki?curid=13274389 |
13276958 | Initial value formulation (general relativity) | The initial value formulation of general relativity is a reformulation of Albert Einstein's theory of general relativity that describes a universe evolving over time.
Each solution of the Einstein field equations encompasses the whole history of a universe – it is not just some snapshot of how things are, but a whole spacetime: a statement encompassing the state of matter and geometry everywhere and at every moment in that particular universe. By this token, Einstein's theory appears to be different from most other physical theories, which specify evolution equations for physical systems; if the system is in a given state at some given moment, the laws of physics allow you to extrapolate its past or future. For Einstein's equations, there appear to be subtle differences compared with other fields: they are self-interacting (that is, non-linear even in the absence of other fields); they are diffeomorphism invariant, so to obtain a unique solution, a fixed background metric and gauge conditions need to be introduced; finally, the metric determines the spacetime structure, and thus the domain of dependence for any set of initial data, so the region on which a specific solution will be defined is not, a priori, defined.
There is, however, a way to re-formulate Einstein's equations that overcomes these problems. First of all, there are ways of rewriting spacetime as the evolution of "space" in time; an earlier version of this is due to Paul Dirac, while a simpler way is known after its inventors Richard Arnowitt, Stanley Deser and Charles Misner as ADM formalism. In these formulations, also known as "3+1" approaches, spacetime is split into a three-dimensional hypersurface with interior metric and an embedding into spacetime with exterior curvature; these two quantities are the dynamical variables in a Hamiltonian formulation tracing the hypersurface's evolution over time. With such a split, it is possible to state the "initial value formulation of general relativity". It involves initial data which cannot be specified arbitrarily but needs to satisfy specific constraint equations, and which is defined on some suitably smooth three-manifold formula_0; just as for other differential equations, it is then possible to prove existence and uniqueness theorems, namely that there exists a unique spacetime which is a solution of Einstein equations, which is globally hyperbolic, for which formula_0 is a Cauchy surface (i.e. all past events influence what happens on formula_0, and all future events are influenced by what happens on it), and has the specified internal metric and extrinsic curvature; all spacetimes that satisfy these conditions are related by isometries.
The initial value formulation with its 3+1 split is the basis of numerical relativity; attempts to simulate the evolution of relativistic spacetimes (notably merging black holes or gravitational collapse) using computers. However, there are significant differences to the simulation of other physical evolution equations which make numerical relativity especially challenging, notably the fact that the dynamical objects that are evolving include space and time itself (so there is no fixed background against which to evaluate, for instance, perturbations representing gravitational waves) and the occurrence of singularities (which, when they are allowed to occur within the simulated portion of spacetime, lead to arbitrarily large numbers that would have to be represented in the computer model).
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Sigma"
}
]
| https://en.wikipedia.org/wiki?curid=13276958 |
1327701 | Rindler coordinates | Tool from special relativity
Rindler coordinates are a coordinate system used in the context of special relativity to describe the hyperbolic acceleration of a uniformly accelerating reference frame in flat spacetime. In relativistic physics the coordinates of a "hyperbolically accelerated reference frame" constitute an important and useful coordinate chart representing part of flat Minkowski spacetime. In special relativity, a uniformly accelerating particle undergoes hyperbolic motion, for which a uniformly accelerating frame of reference in which it is at rest can be chosen as its proper reference frame. The phenomena in this hyperbolically accelerated frame can be compared to effects arising in a homogeneous gravitational field. For general overview of accelerations in flat spacetime, see Acceleration (special relativity) and Proper reference frame (flat spacetime).
In this article, the speed of light is defined by "c" = 1, the inertial coordinates are ("X", "Y", "Z", "T"), and the hyperbolic coordinates are ("x", "y", "z", "t"). These hyperbolic coordinates can be separated into two main variants depending on the accelerated observer's position: If the observer is located at time "T" = 0 at position "X" = 1/α (with α as the constant proper acceleration measured by a comoving accelerometer), then the hyperbolic coordinates are often called Rindler coordinates with the corresponding "Rindler metric". If the observer is located at time "T" = 0 at position "X" = 0, then the hyperbolic coordinates are sometimes called "Møller coordinates" or "Kottler–Møller coordinates" with the corresponding "Kottler–Møller metric". An alternative chart often related to observers in hyperbolic motion is obtained using Radar coordinates which are sometimes called "Lass coordinates". Both the Kottler–Møller coordinates as well as Lass coordinates are denoted as Rindler coordinates as well.
Regarding the history, such coordinates were introduced soon after the advent of special relativity, when they were studied (fully or partially) alongside the concept of hyperbolic motion: In relation to flat Minkowski spacetime by Albert Einstein (1907, 1912), Max Born (1909), Arnold Sommerfeld (1910), Max von Laue (1911), Hendrik Lorentz (1913), Friedrich Kottler (1914), Wolfgang Pauli (1921), Karl Bollert (1922), Stjepan Mohorovičić (1922), Georges Lemaître (1924), Einstein & Nathan Rosen (1935), Christian Møller (1943, 1952), Fritz Rohrlich (1963), Harry Lass (1963), and in relation to both flat and curved spacetime of general relativity by Wolfgang Rindler (1960, 1966). For details and sources, see "".
Characteristics of the Rindler frame.
The worldline of a body in hyperbolic motion having constant proper acceleration formula_0 in the formula_1-direction as a function of proper time formula_2 and rapidity formula_3 can be given by
formula_4
where formula_5 is constant and formula_3 is variable, with the worldline resembling the hyperbola formula_6. Sommerfeld showed that the equations can be reinterpreted by defining formula_7 as variable and formula_3 as constant, so that it represents the simultaneous "rest shape" of a body in hyperbolic motion measured by a comoving observer. By using the proper time of the observer as the time of the entire hyperbolically accelerated frame by setting formula_8, the transformation formulas between the inertial coordinates and the hyperbolic coordinates are consequently:
T = x sinh(αt), X = x cosh(αt), Y = y, Z = z
with the inverse
formula_9
Differentiated and inserted into the Minkowski metric
formula_10
the metric in the hyperbolically accelerated frame follows as
ds^2 = −(αx)^2 dt^2 + dx^2 + dy^2 + dz^2
These transformations define the "Rindler observer" as an observer that is "at rest" in Rindler coordinates, i.e., maintaining constant "x", "y", "z", and only varying "t" as time passes. The coordinates are valid in the region formula_11, which is often called the "Rindler wedge", if formula_0 represents the proper acceleration (along the hyperbola formula_12) of the Rindler observer whose proper time is defined to be equal to Rindler coordinate time. To maintain this world line, the observer must accelerate with a constant proper acceleration, with Rindler observers closer to formula_13 (the Rindler horizon) having greater proper acceleration. All the Rindler observers are instantaneously at rest at time formula_14 in the inertial frame, and at this time a Rindler observer with proper acceleration formula_15 will be at position formula_16 (really formula_17, but we assume units where formula_18), which is also that observer's constant distance from the Rindler horizon in Rindler coordinates. If all Rindler observers set their clocks to zero at formula_14, then when defining a Rindler coordinate system we have a choice of which Rindler observer's proper time will be equal to the coordinate time formula_19 in Rindler coordinates, and this observer's proper acceleration defines the value of formula_0 above (for other Rindler observers at different distances from the Rindler horizon, the coordinate time will equal some constant multiple of their own proper time). It is a common convention to define the Rindler coordinate system so that the Rindler observer whose proper time matches coordinate time is the one who has proper acceleration formula_20, so that formula_0 can be eliminated from the equations.
The above equation has been simplified for formula_18. The unsimplified equation is more convenient for finding the Rindler Horizon distance, given an acceleration formula_0.
formula_21
The remainder of the article will follow the convention of setting both formula_20 and formula_18, so units for formula_1 and formula_7 will be 1 unit formula_22. Be mindful that setting formula_20 light-second/second2 is very different from setting formula_20 light-year/year2. Even if we pick units where formula_18, the magnitude of the proper acceleration formula_0 will depend on our choice of units: for example, if we use units of light-years for distance, (formula_1 or formula_7) and years for time, (formula_23 or formula_19), this would mean formula_20 light year/year2, equal to about 9.5 meters/second2, while if we use units of light-seconds for distance, (formula_1 or formula_7), and seconds for time, (formula_23 or formula_19), this would mean formula_20 light-second/second2, or 299 792 458 meters/second2).
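To make the numbers concrete, the following sketch (with c restored in SI units; the function names are placeholders chosen here) converts between the inertial and hyperbolic charts and evaluates the horizon distance c^2/α for an acceleration of one g:

```python
import math

C = 299_792_458.0        # speed of light, m/s
G_ACCEL = 9.80665        # one "g", m/s^2

def rindler_horizon_distance(alpha):
    """Distance c**2 / alpha between a uniformly accelerating observer and the
    Rindler horizon."""
    return C**2 / alpha

def to_rindler(T, X, alpha):
    """Inertial (T, X) -> hyperbolic (t, x); valid in the wedge X > |c T|."""
    t = (C / alpha) * math.atanh(C * T / X)
    x = math.sqrt(X**2 - (C * T)**2)
    return t, x

def to_inertial(t, x, alpha):
    """Hyperbolic (t, x) -> inertial (T, X)."""
    T = (x / C) * math.sinh(alpha * t / C)
    X = x * math.cosh(alpha * t / C)
    return T, X

# At one g the horizon sits roughly one light-year behind the observer.
print(rindler_horizon_distance(G_ACCEL) / 9.4607e15)   # ~0.97 light-years

# Round trip between the two charts for the observer's own worldline.
T, X = to_inertial(1.0, C**2 / G_ACCEL, G_ACCEL)
print(to_rindler(T, X, G_ACCEL))                        # ~(1.0, C**2 / G_ACCEL)
```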
Variants of transformation formulas.
A more general derivation of the transformation formulas is given, when the corresponding Fermi–Walker tetrad is formulated from which the Fermi coordinates or Proper coordinates can be derived. Depending on the choice of origin of these coordinates, one can derive the metric, the time dilation between the time at the origin formula_24 and formula_25 at point formula_7, and the coordinate light speed formula_26 (this variable speed of light does not contradict special relativity, because it is only an artifact of the accelerated coordinates employed, while in inertial coordinates it remains constant). Instead of Fermi coordinates, also Radar coordinates can be used, which are obtained by determining the distance using light signals (see section Notions of distance), by which metric, time dilation and speed of light do not depend on the coordinates anymore – in particular, the coordinate speed of light remains identical with the speed of light formula_27 in inertial frames:
The Rindler observers.
In the new chart (1a) with formula_18 and formula_20, it is natural to take the coframe field
formula_28
which has the dual frame field
formula_29
This defines a "local Lorentz frame" in the tangent space at each event (in the region covered by our Rindler chart, namely the Rindler wedge). The integral curves of the timelike unit vector field formula_30 give a timelike congruence, consisting of the world lines of a family of observers called the "Rindler observers". In the Rindler chart, these world lines appear as the vertical coordinate lines formula_31. Using the coordinate transformation above, we find that these correspond to hyperbolic arcs in the original Cartesian chart.
As with any timelike congruence in any Lorentzian manifold, this congruence has a "kinematic decomposition" (see Raychaudhuri equation). In this case, the "expansion" and "vorticity" of the congruence of Rindler observers "vanish". The vanishing of the expansion tensor implies that "each of our observers maintains constant distance to his neighbors". The vanishing of the vorticity tensor implies that the world lines of our observers are not twisting about each other; this is a kind of local absence of "swirling".
The acceleration vector of each observer is given by the covariant derivative
formula_32
That is, each Rindler observer is accelerating in the formula_33 direction. Individually speaking, each observer is in fact accelerating with "constant magnitude" in this direction, so their world lines are the Lorentzian analogs of circles, which are the curves of constant path curvature in the Euclidean geometry.
Because the Rindler observers are "vorticity-free", they are also "hypersurface orthogonal". The orthogonal spatial hyperslices are formula_34; these appear as horizontal half-planes in the Rindler chart and as half-planes through formula_35 in the Cartesian chart (see the figure above). Setting formula_36 in the line element, we see that these have the ordinary Euclidean geometry, formula_37. Thus, the spatial coordinates in the Rindler chart have a very simple interpretation consistent with the claim that the Rindler observers are mutually stationary. We will return to this rigidity property of the Rindler observers a bit later in this article.
A "paradoxical" property.
Note that Rindler observers with smaller constant x coordinate are accelerating "harder" to keep up. This may seem surprising because in Newtonian physics, observers who maintain constant relative distance must share the "same" acceleration. But in relativistic physics, we see that the trailing endpoint of a rod which is accelerated by some external force (parallel to its symmetry axis) must accelerate a bit harder than the leading endpoint, or else it must ultimately break. This is a manifestation of Lorentz contraction. As the rod accelerates, its velocity increases and its length decreases. Since it is getting shorter, the back end must accelerate harder than the front. Another way to look at it is: the back end must achieve the same change in velocity in a shorter period of time. This leads to a differential equation showing that, at some distance, the acceleration of the trailing end diverges, resulting in the Rindler horizon.
This phenomenon is the basis of a well known "paradox", Bell's spaceship paradox. However, it is a simple consequence of relativistic kinematics. One way to see this is to observe that the magnitude of the acceleration vector is just the path curvature of the corresponding world line. But "the world lines of our Rindler observers are the analogs of a family of concentric circles" in the Euclidean plane, so we are simply dealing with the Lorentzian analog of a fact familiar to speed skaters: in a family of concentric circles, "inner circles must bend faster (per unit arc length) than the outer ones".
Minkowski observers.
It is worthwhile to also introduce an alternative frame, given in the Minkowski chart by the natural choice
formula_38
Transforming these vector fields using the coordinate transformation given above, we find that in the Rindler chart (in the Rindler wedge) this frame becomes
formula_39
Computing the kinematic decomposition of the timelike congruence defined by the timelike unit vector field formula_40, we find that the expansion and vorticity again vanishes, and in addition the acceleration vector vanishes, formula_41. In other words, this is a "geodesic congruence"; the corresponding observers are in a state of "inertial motion". In the original Cartesian chart, these observers, whom we will call "Minkowski observers", are at rest.
In the Rindler chart, the world lines of the Minkowski observers appear as hyperbolic secant curves asymptotic to the coordinate plane formula_42. Specifically, in Rindler coordinates, the world line of the Minkowski observer passing through the event formula_43 is
formula_44
where formula_45 is the proper time of this Minkowski observer. Note that only a small portion of his history is covered by the Rindler chart. This shows explicitly why the Rindler chart is "not" geodesically complete; timelike geodesics run outside the region covered by the chart in finite proper time. Of course, we already knew that the Rindler chart cannot be geodesically complete, because it covers only a portion of the original Cartesian chart, which "is" a geodesically complete chart.
In the case depicted in the figure, formula_46 and we have drawn (correctly scaled and boosted) the light cones at formula_47.
The Rindler horizon.
The Rindler coordinate chart has a "coordinate singularity" at "x" = 0, where the metric tensor (expressed in the Rindler coordinates) has vanishing determinant. This happens because as "x" → 0 the acceleration of the Rindler observers diverges. As we can see from the figure illustrating the Rindler wedge, the locus "x" = 0 in the Rindler chart corresponds to the locus "T"2 = "X"2, "X" > 0 in the Cartesian chart, which consists of two null half-planes, each ruled by a null geodesic congruence.
For the moment, we simply consider the Rindler horizon as the boundary of the Rindler coordinates. If we consider the set of accelerating observers who have a constant position in Rindler coordinates, none of them can ever receive light signals from events with "T" ≥ "X" (on the diagram, these would be events on or to the left of the line "T" = "X" which the upper red horizon lies along; these observers could however receive signals from events with "T" ≥ "X" if they stopped their acceleration and crossed this line themselves) nor could they have ever sent signals to events with "T" ≤ −"X" (events on or to the left of the line "T" = −"X" which the lower red horizon lies along; those events lie outside all future light cones of their past world line). Also, if we consider members of this set of accelerating observers closer and closer to the horizon, in the limit as the distance to the horizon approaches zero, the constant proper acceleration experienced by an observer at this distance (which would also be the G-force experienced by such an observer) would approach infinity. Both of these facts would also be true if we were considering a set of observers hovering outside the event horizon of a black hole, each observer hovering at a constant radius in Schwarzschild coordinates. In fact, in the close neighborhood of a black hole, the geometry close to the event horizon can be described in Rindler coordinates. Hawking radiation in the case of an accelerating frame is referred to as Unruh radiation. The connection is the equivalence of acceleration with gravitation.
Geodesics.
The geodesic equations in the Rindler chart are easily obtained from the geodesic Lagrangian; they are
formula_48
Of course, in the original Cartesian chart, the geodesics appear as straight lines, so we could easily obtain them in the Rindler chart using our coordinate transformation. However, it is instructive to obtain and study them independently of the original chart, and we shall do so in this section.
From the first, third, and fourth we immediately obtain the "first integrals"
formula_49
But from the line element we have formula_50 where formula_51 for timelike, null, and spacelike geodesics, respectively. This gives the fourth first integral, namely
formula_52.
This suffices to give the complete solution of the geodesic equations.
In the case of null geodesics, from formula_53 with nonzero formula_54, we see that the x coordinate ranges over the interval
formula_55.
The complete seven parameter family giving any null geodesic through any event in the Rindler wedge, is
formula_56
Plotting the "tracks" of some representative null geodesics through a given event (that is, projecting to the hyperslice formula_57), we obtain a picture which looks suspiciously like the family of all semicircles through a point and orthogonal to the Rindler horizon (See the figure).
The Fermat metric.
The fact that in the Rindler chart, the projections of null geodesics into any spatial hyperslice for the Rindler observers are simply semicircular arcs can be verified directly from the general solution just given, but there is a very simple way to see this. A static spacetime is one in which a vorticity-free timelike Killing vector field can be found. In this case, we have a uniquely defined family of (identical) spatial hyperslices orthogonal to the corresponding static observers (who need not be inertial observers). This allows us to define a new metric on any of these hyperslices which is conformally related to the original metric inherited from the spacetime, but with the property that geodesics in the new metric (note this is a Riemannian metric on a Riemannian three-manifold) are precisely the projections of the null geodesics of spacetime. This new metric is called the "Fermat metric", and in a static spacetime endowed with a coordinate chart in which the line element has the form
formula_58
the Fermat metric on formula_59 is simply
formula_60
(where the metric coefficients are understood to be evaluated at formula_59).
In the Rindler chart, the timelike translation formula_61 is such a Killing vector field, so this is a static spacetime (not surprisingly, since Minkowski spacetime is of course trivially a static vacuum solution of the Einstein field equation). Therefore, we may immediately write down the Fermat metric for the Rindler observers:
formula_62
But this is the well-known line element of "hyperbolic three-space" H3 in the "upper half space chart". This is closely analogous to the well known "upper half plane chart" for the hyperbolic plane H2, which is familiar to generations of complex analysis students in connection with "conformal mapping problems" (and much more), and many mathematically minded readers already know that the geodesics of H2 in the upper half plane model are simply semicircles (orthogonal to the circle at infinity represented by the real axis).
Symmetries.
Since the Rindler chart is a coordinate chart for Minkowski spacetime, we expect to find ten linearly independent Killing vector fields. Indeed, in the Cartesian chart we can readily find ten linearly independent Killing vector fields, generating respectively one parameter subgroups of time translation, three spatials, three rotations and three boosts. Together these generate the (proper isochronous) Poincaré group, the symmetry group of Minkowski spacetime.
However, it is instructive to write down and solve the Killing vector equations directly. We obtain four familiar looking Killing vector fields
formula_63
(time translation, spatial translations orthogonal to the direction of acceleration, and spatial rotation orthogonal to the direction of acceleration) plus six more:
formula_64
(where the signs are chosen consistently + or −). We leave it as an exercise to figure out how these are related to the standard generators; here we wish to point out that we must be able to obtain generators equivalent to formula_65 in the Cartesian chart, yet the Rindler wedge is obviously not invariant under this translation. How can this be? The answer is that like anything defined by a system of partial differential equations on a smooth manifold, the Killing equation will in general have locally defined solutions, but these might not exist globally. That is, with suitable restrictions on the group parameter, a Killing flow can always be defined in a suitable "local neighborhood", but the flow might not be well-defined globally. This has nothing to do with Lorentzian manifolds per se, since the same issue arises in the study of general smooth manifolds.
Notions of distance.
One of the many valuable lessons to be learned from a study of the Rindler chart is that there are in fact several "distinct" (but reasonable) notions of distance which can be used by the Rindler observers.
The first is the one we have tacitly employed above: the induced Riemannian metric on the spatial hyperslices formula_34. We will call this the "ruler distance" since it corresponds to this induced Riemannian metric, but its operational meaning might not be immediately apparent.
From the standpoint of physical measurement, a more natural notion of distance between two world lines is the "radar distance". This is computed by sending a null geodesic from the world line of our observer (event A) to the world line of some small object, whereupon it is reflected (event B) and returns to the observer (event C). The radar distance is then obtained by dividing the round-trip travel time by two, as measured by an ideal clock carried by our observer.
In particular, consider a pair of Rindler observers with coordinates formula_66 and formula_67 respectively. (Note that the first of these, the trailing observer, is accelerating a bit harder, in order to keep up with the leading observer). Setting formula_68 in the Rindler line element, we readily obtain the equation of null geodesics moving in the direction of acceleration:
formula_69
Therefore, the radar distance between these two observers is given by
formula_70
This is a bit smaller than the ruler distance, but for nearby observers the discrepancy is negligible.
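A quick numerical check of this comparison (an illustrative sketch in units with c = 1; the radar-distance expression follows from the null condition relating dt and dx/x together with the trailing observer's clock rate, as described above):

```python
import math

def ruler_distance(x0, h):
    """Separation along the spatial hyperslice between Rindler observers
    at x = x0 and x = x0 + h (units with c = 1)."""
    return h

def radar_distance(x0, h):
    """Radar distance measured by the trailing observer at x = x0: half the
    round-trip proper time of a light signal reflected at x = x0 + h."""
    return x0 * math.log(1.0 + h / x0)

x0, h = 1.0, 0.1
print(ruler_distance(x0, h), radar_distance(x0, h))   # 0.1 vs ~0.0953
```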
A third possible notion of distance is this: our observer measures the "angle" subtended by a unit disk placed on some object (not a point object), as it appears from his location. We call this the "optical diameter distance". Because of the simple character of null geodesics in Minkowski spacetime, we can readily determine the optical distance between our pair of Rindler observers (aligned with the direction of acceleration). From a sketch it should be plausible that the optical diameter distance scales like formula_71. Therefore, in the case of a trailing observer estimating distance to a leading observer (the case formula_72), the optical distance is a bit larger than the ruler distance, which is a bit larger than the radar distance. The reader should now take a moment to consider the case of a leading observer estimating distance to a trailing observer.
There are other notions of distance, but the main point is clear: while the values of these various notions will in general disagree for a given pair of Rindler observers, they all agree that "every pair of Rindler observers maintains constant distance". The fact that "very nearby" Rindler observers are mutually stationary follows from the fact, noted above, that the expansion tensor of the Rindler congruence vanishes identically. However, we have shown here that in various senses, this rigidity property holds at larger scales. This is truly a remarkable rigidity property, given the well-known fact that in relativistic physics, "no rod can be accelerated rigidly" (and "no disk can be spun up rigidly") — at least, not without sustaining inhomogeneous stresses. The easiest way to see this is to observe that in Newtonian physics, if we "kick" a rigid body, all elements of matter in the body will immediately change their state of motion. This is of course incompatible with the relativistic principle that no information having any physical effect can be transmitted faster than the speed of light.
It follows that if a rod is accelerated by some external force applied anywhere along its length, the elements of matter in various different places in the rod cannot all feel the same magnitude of acceleration if the rod is not to extend without bound and ultimately break. In other words, an accelerated rod which does not break must sustain stresses which vary along its length. Furthermore, in any thought experiment with time varying forces, whether we "kick" an object or try to accelerate it gradually, we cannot avoid the problem of avoiding mechanical models which are inconsistent with relativistic kinematics (because distant parts of the body respond too quickly to an applied force).
Returning to the question of the operational significance of the ruler distance, we see that this should be the distance which our observers will obtain should they very slowly pass from hand to hand a small ruler which is repeatedly set end to end. But justifying this interpretation in detail would require some kind of material model.
Generalization to curved spacetimes.
Rindler coordinates as described above can be generalized to curved spacetime, as Fermi normal coordinates. The generalization essentially involves constructing an appropriate orthonormal tetrad and then transporting it along the given trajectory using the Fermi–Walker transport rule. For details, see the paper by Ni and Zimmermann in the references below. Such a generalization actually enables one to study inertial and gravitational effects in an Earth-based laboratory, as well as the more interesting coupled inertial-gravitational effects.
History.
Overview.
Albert Einstein (1907) studied the effects within a uniformly accelerated frame, obtaining equations for coordinate dependent time dilation and speed of light equivalent to (2c), and in order to make the formulas independent of the observer's origin, he obtained time dilation (2i) in formal agreement with Radar coordinates. While introducing the concept of Born rigidity, Max Born (1909) noted that the formulas for hyperbolic motion can be used as transformations into a "hyperbolically accelerated reference system" () equivalent to (2d). Born's work was further elaborated by Arnold Sommerfeld (1910) and Max von Laue (1911) who both obtained (2d) using imaginary numbers, which was summarized by Wolfgang Pauli (1921) who besides coordinates (2d) also obtained metric (2e) using imaginary numbers. Einstein (1912) studied a static gravitational field and obtained the Kottler–Møller metric (2b) as well as approximations to formulas (2a) using a coordinate dependent speed of light. Hendrik Lorentz (1913) obtained coordinates similar to (2d, 2e, 2f) while studying Einstein's equivalence principle and the uniform gravitational field.
A detailed description was given by Friedrich Kottler (1914), who formulated the corresponding orthonormal tetrad, transformation formulas and metric (2a, 2b). Also Karl Bollert (1922) obtained the metric (2b) in his study of uniform acceleration and uniform gravitational fields. In a paper concerned with Born rigidity, Georges Lemaître (1924) obtained coordinates and metric (2a, 2b). Albert Einstein and Nathan Rosen (1935) described (2d, 2e) as the "well known" expressions for a homogeneous gravitational field. After Christian Møller (1943) obtained (2a, 2b) in as study related to homogeneous gravitational fields, he (1952) as well as Misner & Thorne & Wheeler (1973) used Fermi–Walker transport to obtain the same equations.
While these investigations were concerned with flat spacetime, Wolfgang Rindler (1960) analyzed hyperbolic motion in curved spacetime, and showed (1966) the analogy between the hyperbolic coordinates (2d, 2e) in flat spacetime with Kruskal coordinates in Schwarzschild space. This influenced subsequent writers in their formulation of Unruh radiation measured by an observer in hyperbolic motion, which is similar to the description of Hawking radiation of black holes.
Born (1909) showed that the inner points of a Born rigid body in hyperbolic motion can only be in the region formula_73. Sommerfeld (1910) defined that the coordinates allowed for the transformation between inertial and hyperbolic coordinates must satisfy formula_74. Kottler (1914) defined this region as formula_75, and pointed out the existence of a "border plane" () formula_76, beyond which no signal can reach the observer in hyperbolic motion. This was called the "horizon of the observer" () by Bollert (1922). Rindler (1966) demonstrated the relation between such a horizon and the horizon in Kruskal coordinates.
Using Bollert's formalism, Stjepan Mohorovičić (1922) made a different choice for some parameter and obtained metric (2h) with a printing error, which was corrected by Bollert (1922b) with another printing error, until a version without printing error was given by Mohorovičić (1923). In addition, Mohorovičić erroneously argued that metric (2b, now called Kottler–Møller metric) is incorrect, which was rebutted by Bollert (1922). Metric (2h) was rediscovered by Harry Lass (1963), who also gave the corresponding coordinates (2g) which are sometimes called "Lass coordinates". Metric (2h), as well as (2a, 2b), was also derived by Fritz Rohrlich (1963). Eventually, the Lass coordinates (2g, 2h) were identified with Radar coordinates by Desloge & Philpott (1987).
Further reading.
Useful background:
Rindler coordinates:
Rindler horizon: | [
{
"math_id": 0,
"text": "\\alpha"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "\\tau"
},
{
"math_id": 3,
"text": "\\alpha\\tau"
},
{
"math_id": 4,
"text": "T=x\\sinh(\\alpha\\tau),\\quad X=x\\cosh(\\alpha\\tau)"
},
{
"math_id": 5,
"text": "x=1/\\alpha"
},
{
"math_id": 6,
"text": "X^{2}-T^{2}=x^2"
},
{
"math_id": 7,
"text": "x"
},
{
"math_id": 8,
"text": "\\tau=t"
},
{
"math_id": 9,
"text": "t=\\frac{1}{\\alpha}\\operatorname{artanh}\\left(\\frac{T}{X}\\right),\\quad x=\\sqrt{X^{2}-T^{2}},\\quad y=Y,\\quad z=Z"
},
{
"math_id": 10,
"text": "\\mathrm ds^{2}=-\\mathrm dT^{2}+\\mathrm dX^{2}+\\mathrm dY^{2}+\\mathrm dZ^{2},"
},
{
"math_id": 11,
"text": " 0 < X < \\infty,\\; -X < T < X"
},
{
"math_id": 12,
"text": "x=1 / \\alpha"
},
{
"math_id": 13,
"text": "x=0"
},
{
"math_id": 14,
"text": "T=0"
},
{
"math_id": 15,
"text": "\\alpha_{i}"
},
{
"math_id": 16,
"text": "X=1/\\alpha_{i}"
},
{
"math_id": 17,
"text": "X=c^{2}/\\alpha_{i}"
},
{
"math_id": 18,
"text": "c=1"
},
{
"math_id": 19,
"text": "t"
},
{
"math_id": 20,
"text": "\\alpha=1"
},
{
"math_id": 21,
"text": "\\begin{align} \n&t =\\frac{c}{\\alpha}\\operatorname{artanh}\\left(\\frac{cT}{X}\\right)\\;\\overset{X\\,\\gg\\,cT}{\\approx}\\;\\frac{c^2 T}{\\alpha X} \\\\\n&\\Rightarrow X \\approx\\frac{c^2 T}{\\alpha t}\\;\\overset{T\\,\\approx\\,t}{\\approx}\\;\\frac{c^2}{\\alpha}\n\\end{align}"
},
{
"math_id": 22,
"text": "=c^{2}/\\alpha=1"
},
{
"math_id": 23,
"text": "T"
},
{
"math_id": 24,
"text": "dt_{0}"
},
{
"math_id": 25,
"text": "dt"
},
{
"math_id": 26,
"text": "|dx|/|dt|"
},
{
"math_id": 27,
"text": "(c=1)"
},
{
"math_id": 28,
"text": " d\\sigma^0 = x \\, dt,\\;\\; d\\sigma^1 = dx,\\;\\; d\\sigma^2 = dy,\\;\\; d\\sigma^3 = dz"
},
{
"math_id": 29,
"text": " \\vec{e}_0 = \\frac{1}{x}\\partial_t,\\;\\; \\vec{e}_1 = \\partial_x,\\;\\; \\vec{e}_2 = \\partial_y,\\;\\; \\vec{e}_3 = \\partial_z"
},
{
"math_id": 30,
"text": "\\vec{e}_0"
},
{
"math_id": 31,
"text": " x = x_0,\\; y = y_0,\\; z = z_0"
},
{
"math_id": 32,
"text": "\\nabla_{\\vec{e}_0} \\vec{e}_0 = \\frac{1}{x}\\vec{e}_1"
},
{
"math_id": 33,
"text": "\\partial_x"
},
{
"math_id": 34,
"text": " t = t_0"
},
{
"math_id": 35,
"text": " T = X = 0"
},
{
"math_id": 36,
"text": " dt = 0"
},
{
"math_id": 37,
"text": " d\\sigma^2 = dx^2 + dy^2 + dz^2,\\; \\forall x > 0, \\forall y, z"
},
{
"math_id": 38,
"text": "\\vec{f}_0 = \\partial_T, \\; \\vec{f}_1 = \\partial_X, \\; \\vec{f}_2 = \\partial_Y, \\; \\vec{f}_3 = \\partial_Z "
},
{
"math_id": 39,
"text": "\\begin{align}\n \\vec{f}_0 &= \\frac{1}{x}\\cosh(t) \\, \\partial_t - \\sinh(t) \\, \\partial_x\\\\\n \\vec{f}_1 &= -\\frac{1}{x}\\sinh(t) \\, \\partial_t + \\cosh(t) \\, \\partial_x\\\\\n \\vec{f}_2 &= \\partial_y, \\; \\vec{f}_3 = \\partial_z\n\\end{align}"
},
{
"math_id": 40,
"text": "\\vec{f}_0 "
},
{
"math_id": 41,
"text": "\\nabla_{\\vec{f}_0} \\vec{f}_0 = 0"
},
{
"math_id": 42,
"text": "x = 0"
},
{
"math_id": 43,
"text": " t = t_0,\\; x = x_0,\\; y = y_0,\\; z = z_0"
},
{
"math_id": 44,
"text": "\\begin{align}\n t &= \\operatorname{artanh}\\left(\\frac{s}{x_0}\\right),\\; -x_0 < s < x_0\\\\\n x &= \\sqrt{x_0^2-s^2},\\; -x_0 < s < x_0\\\\\n y &= y_0\\\\\n z &= z_0\n\\end{align}"
},
{
"math_id": 45,
"text": " s"
},
{
"math_id": 46,
"text": " x_0 = 1"
},
{
"math_id": 47,
"text": " s \\in \\left\\{-\\frac{1}{2},\\; 0,\\; \\frac{1}{2}\\right\\}"
},
{
"math_id": 48,
"text": " \\ddot{t} + \\frac{2}{x} \\, \\dot{x} \\, \\dot{t} = 0, \\; \\ddot{x} + x \\, \\dot{t}^2 = 0, \\; \\ddot{y} = 0, \\; \\ddot{z} = 0"
},
{
"math_id": 49,
"text": " \\dot{t} = \\frac{E}{x^2}, \\; \\; \\dot{y} = P, \\; \\; \\dot{z} = Q "
},
{
"math_id": 50,
"text": " \\varepsilon = -x^2 \\, \\dot{t}^2 + \\dot{x}^2 + \\dot{y}^2 + \\dot{z}^2"
},
{
"math_id": 51,
"text": "\\varepsilon \\in \\left\\{-1,\\, 0,\\, 1\\right\\}"
},
{
"math_id": 52,
"text": " \\dot{x}^2 = \\left(\\varepsilon + \\frac{E^2}{x^2} \\right) - P^2 - Q^2"
},
{
"math_id": 53,
"text": "\\frac{E^2}{x^2} \\,-\\, P^2 \\,-\\, Q^2"
},
{
"math_id": 54,
"text": " E"
},
{
"math_id": 55,
"text": " 0 \\,<\\, x \\,<\\, \\frac{E}{\\sqrt{P^2 \\,+\\, Q^2}}"
},
{
"math_id": 56,
"text": "\\begin{align}\n t - t_0 &= \\operatorname{artanh} \\left(\n \\frac{1}{E}\\left[s \\left(P^2 + Q^2\\right) - \\sqrt{E^2 - \\left(P^2 + Q^2\\right) x_0^2}\\right]\n \\right) +\\\\\n & \\qquad \\operatorname{artanh} \\left(\n \\frac{1}{E}\\sqrt{E^2 - (P^2+Q^2) x_0^2}\n \\right)\\\\\n x &= \\sqrt{ x_0^2 + 2s \\sqrt{E^2 - (P^2+Q^2) x_0^2} - s^2 (P^2 + Q^2) }\\\\\n y - y_0 &= Ps;\\;\\; z - z_0 = Qs\n\\end{align}"
},
{
"math_id": 57,
"text": "t = 0"
},
{
"math_id": 58,
"text": " ds^2 = g_{00} \\, dt^2 + g_{jk} \\, dx^j \\, dx^k,\\;\\; j,\\; k \\in \\{1, 2, 3\\} "
},
{
"math_id": 59,
"text": " t = 0"
},
{
"math_id": 60,
"text": " d\\rho^2 = \\frac{1}{-g_{00}}\\left(g_{jk} \\, dx^j \\, dx^k\\right)"
},
{
"math_id": 61,
"text": " \\partial_t"
},
{
"math_id": 62,
"text": " d\\rho^2 = \\frac{1}{x^2}\\left(dx^2 + dy^2 + dz^2\\right),\\;\\; \\forall x > 0,\\;\\; \\forall y, z"
},
{
"math_id": 63,
"text": " \\partial_t, \\; \\; \\partial_y, \\; \\; \\partial_z, \\; \\; -z \\, \\partial_y + y \\, \\partial_z "
},
{
"math_id": 64,
"text": "\\begin{align}\n &\\exp(\\pm t) \\, \\left( \\frac{y}{x} \\, \\partial_t \\pm \\left[ y \\, \\partial_x - x \\, \\partial_y \\right] \\right)\\\\\n &\\exp(\\pm t) \\, \\left( \\frac{z}{x} \\, \\partial_t \\pm \\left[ z \\, \\partial_x - x \\, \\partial_z \\right] \\right)\\\\\n &\\exp(\\pm t) \\, \\left( \\frac{1}{x} \\, \\partial_t \\pm \\partial_x \\right)\n\\end{align}"
},
{
"math_id": 65,
"text": " \\partial_T"
},
{
"math_id": 66,
"text": " x = x_0, \\; y = 0,\\; z = 0"
},
{
"math_id": 67,
"text": " x = x_0 + h, \\; y = 0,\\; z = 0"
},
{
"math_id": 68,
"text": " dy = dz = 0"
},
{
"math_id": 69,
"text": " t - t_0 = \\log\\left(\\frac{x}{x_0}\\right) "
},
{
"math_id": 70,
"text": " x_0 \\, \\log \\left(1 + \\frac{h}{x_0} \\right) = h - \\frac{h^2}{2 \\, x_0} + O \\left( h^3 \\right) "
},
{
"math_id": 71,
"text": " h + \\frac{1}{x_0} + O \\left( h^3 \\right) "
},
{
"math_id": 72,
"text": " h > 0"
},
{
"math_id": 73,
"text": "X/\\left(X^{2}-T^{2}\\right)>0"
},
{
"math_id": 74,
"text": "T<X"
},
{
"math_id": 75,
"text": "X^{2}-T^{2}>0"
},
{
"math_id": 76,
"text": "c^2/\\alpha+x"
}
]
| https://en.wikipedia.org/wiki?curid=1327701 |
1327762 | Real estate economics | Application of economic techniques to real estate markets
Real estate economics is the application of economic techniques to real estate markets. It aims to describe and predict economic patterns of supply and demand. The closely related field of housing economics is narrower in scope, concentrating on residential real estate markets, while the research on real estate trends focuses on the business and structural changes affecting the industry. Both draw on partial equilibrium analysis (supply and demand), urban economics, spatial economics, basic and extensive research, surveys, and finance.
Overview of real estate markets.
The main participants in real estate markets are:
The choices of users, owners, and renters form the demand side of the market, while the choices of owners, developers and renovators form the supply side. In order to apply simple supply and demand analysis to real estate markets, a number of modifications need to be made to standard microeconomic assumptions and procedures. In particular, the unique characteristics of the real estate market must be accommodated. These characteristics include:
Housing industry.
The housing industry is the development, construction, and sale of homes. Its interests are represented in the United States by the National Association of Home Builders (NAHB). In Australia, the trade association representing the residential housing industry is the Housing Industry Association. The term also refers to the housing market, meaning the supply of and demand for houses, usually in a particular country or region. The housing market includes features such as the supply of housing, the demand for housing, house prices, the rented sector, and government intervention in the housing market.
Demand for housing.
The main determinants of the demand for housing are demographic. But other factors, like income, price of housing, cost and availability of credit, consumer preferences, investor preferences, price of substitutes, and price of complements, all play a role.
The core demographic variables are population size and population growth: the more people in the economy, the greater the demand for housing. But this is an oversimplification. It is necessary to consider family size, the age composition of the family, the number of first and second children, net migration (immigration minus emigration), non-family household formation, the number of double-family households, death rates, divorce rates, and marriages. In housing economics, the elemental unit of analysis is not the individual, as it is in standard partial equilibrium models. Rather, it is households, which demand housing services: typically one household per house. The size and demographic composition of households is variable and not entirely exogenous. It is endogenous to the housing market in the sense that as the price of housing services increase, household size will tend also to increase.
Income is also an important determinant. Empirical measures of the income elasticity of demand in North America range from 0.5 to 0.9 (De Leeuw 1971). If permanent income elasticity is measured, the results are slightly higher (Kain and Quigley 1975) because transitory income varies from year to year and across individuals, so positive transitory income will tend to cancel out negative transitory income. Many housing economists use permanent income rather than annual income because of the high cost of purchasing real estate. For many people, real estate will be the costliest item they will ever buy.
The price of housing is also an important factor. The price elasticity of the demand for housing services in North America is estimated as negative 0.7 by Polinsky and Ellwood (1979), and as negative 0.9 by Maisel, Burnham, and Austin (1971).
An individual household's housing demand can be modelled with standard utility/choice theory. A utility function, such as formula_0, can be constructed, in which the household's utility is a function of various goods and services (formula_1). This will be subject to a budget constraint such as formula_2, where formula_3 is the household's available income and the formula_4 are the prices for the various goods and services. The equality indicates that the money spent on all the goods and services must be equal to the available income. Because this is unrealistic, the model must be adjusted to allow for borrowing and saving. A measure of wealth, lifetime income, or permanent income is required. The model must also be adjusted to account for the heterogeneity of real estate. This can be done by deconstructing the utility function. If housing services (formula_5) are separated into their constituent components (formula_6), the utility function can be rewritten as formula_7. By varying the price of housing services (formula_5) and solving for points of optimal utility, the household's demand schedule for housing services can be constructed. Market demand is calculated by summing all individual household demands.
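To make the construction concrete, the sketch below derives a demand schedule under an assumed Cobb-Douglas utility function, in which a fixed share of income is spent on housing services; the functional form, the budget share "beta", and the income figures are illustrative assumptions rather than estimates from the literature.

```python
# Minimal sketch: household demand for housing services under an
# assumed Cobb-Douglas utility U = X**(1 - beta) * H**beta, where H is
# housing services and X is a composite of all other goods (price 1).
# Maximising U subject to Y = X + p_h * H gives H* = beta * Y / p_h.

def housing_demand(income, price_housing, beta=0.3):
    """Optimal quantity of housing services for one household."""
    return beta * income / price_housing

def market_demand(incomes, price_housing, beta=0.3):
    """Market demand: the sum of individual household demands."""
    return sum(housing_demand(y, price_housing, beta) for y in incomes)

if __name__ == "__main__":
    incomes = [40_000, 60_000, 90_000]      # hypothetical households
    for p in [10, 15, 20, 25]:              # vary the price of housing services
        print(f"p_h = {p:>2}: market demand = {market_demand(incomes, p):,.0f}")
```

Note that Cobb-Douglas preferences imply a price elasticity of exactly −1, so a richer functional form would be needed to reproduce the −0.7 to −0.9 estimates cited above.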
Supply of housing.
Developers produce housing supply using land, labour, and various inputs, such as electricity and building materials. The quantity of new supply is determined by the cost of these inputs, the price of the existing stock of houses, and the technology of production. For a typical single-family dwelling in suburban North America, one can assign approximate cost percentages as follows: acquisition costs, 10%; site improvement costs, 11%; labour costs, 26%; materials costs, 31%; finance costs, 3%; administrative costs, 15%; and marketing costs, 4%. Multi-unit residential dwellings typically break down as follows: acquisition costs, 7%; site improvement costs, 8%; labour costs, 27%; materials costs, 33%; finance costs, 3%; administrative costs, 17%; and marketing costs, 5%. Public-subdivision requirements can increase development costs by up to 3%, depending on the jurisdiction. Differences in building codes account for about a 2% variation in development costs. However, these subdivision and building-code costs typically increase the market value of the buildings by at least the amount of their cost outlays. A production function such as formula_8 can be constructed in which formula_9 is the quantity of houses produced, formula_10 is the amount of labour employed, formula_11 is the amount of land used, and formula_12 is the amount of other materials. This production function must, however, be adjusted to account for the refurbishing and augmentation of existing buildings. To do this, a second production function is constructed that includes the stock of existing housing and their ages as determinants. The two functions are summed, yielding the total production function. Alternatively, a hedonic pricing model can be regressed.
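As an illustration of the land–materials substitution discussed below, the snippet here assumes a Cobb-Douglas form for formula_8 and compares cost-minimising land/materials ratios at different land prices; the exponents and input prices are made-up parameters, not empirical estimates.

```python
# Minimal sketch: input substitution in an assumed Cobb-Douglas housing
# production function Q = L**a * N**b * M**c (L = land, N = labour,
# M = materials).  At a cost minimum the first-order conditions give
# L / M = (a / c) * (w_M / w_L): a higher land price w_L lowers the
# land/materials ratio, i.e. developers build up rather than out.

a, b, c = 0.2, 0.3, 0.5     # assumed output elasticities (b, for labour,
                            # does not enter the land/materials ratio)
w_M = 100.0                 # assumed price of a unit of materials

for w_L in [50.0, 200.0, 800.0]:            # rising land prices
    land_materials_ratio = (a / c) * (w_M / w_L)
    print(f"land price {w_L:>5.0f}: cost-minimising L/M = {land_materials_ratio:.3f}")
```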
The long-run price elasticity of supply is quite high. George Fallis (1985) estimates it as 8.2, but in the short run, supply tends to be very price-inelastic. Supply-price elasticity depends on the elasticity of substitution and supply restrictions. There is significant substitutability, both between land and materials and between labour and materials. In high-value locations, developers can typically construct multi-story concrete buildings to reduce the amount of expensive land used. As labour costs have increased since the 1950s, new materials and capital-intensive techniques have been employed to reduce the amount of labour used. However, supply restrictions can significantly affect substitutability. In particular, the lack of supply of skilled labour (and labour-union requirements) can constrain the substitution from capital to labour. Land availability can also constrain substitutability if the area of interest is delineated (i.e., the larger the area, the more suppliers of land, and the more substitution that is possible). Land-use controls such as zoning bylaws can also reduce land substitutability.
Adjustment mechanism.
The basic adjustment mechanism is a stock/flow model, reflecting the fact that about 98% of the market is existing stock and about 2% is the flow of new buildings.
In the adjacent diagram, the stock of housing supply is presented in the left panel while the new flow is in the right panel. There are four steps in the basic adjustment mechanism. First, the initial equilibrium price (Ro) is determined by the intersection of the supply of existing housing stock (SH) and the demand for housing (D). This rent is then translated into value (Vo) via discounting cash flows. Value is calculated by dividing current period rents by the discount rate, that is, as a perpetuity. Then value is compared to construction costs (CC) in order to determine whether profitable opportunities exist for developers. The intersection of construction costs and the value of housing services determine the maximum level of new housing starts (HSo). Finally the amount of housing starts in the current period is added to the available stock of housing in the next period. In the next period, supply curve SH will shift to the right by amount HSo.
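A minimal numerical sketch of this four-step loop is given below; the linear demand curve, the discount rate, the construction cost, the responsiveness of housing starts, and the depreciation rate (which anticipates the next subsection) are all made-up parameters chosen only to show how rent, value, and housing starts feed into the next period's stock.

```python
# Minimal sketch of the four-step stock/flow adjustment mechanism.
# All parameters are illustrative assumptions.

DISCOUNT_RATE = 0.05       # values rents as a perpetuity: V = R / i
CONSTRUCTION_COST = 200.0  # cost of producing one unit of housing (CC)
START_RESPONSE = 0.5       # housing starts per unit of (V - CC) profit
DEPRECIATION = 0.02        # share of the stock lost each period

def demand_rent(stock):
    """Inverse demand: rent implied by the existing stock (linear, assumed)."""
    return max(0.0, 50.0 - 0.4 * stock)

stock = 100.0
for period in range(5):
    rent = demand_rent(stock)                    # 1. rent from stock vs. demand
    value = rent / DISCOUNT_RATE                 # 2. value as a perpetuity
    profit = value - CONSTRUCTION_COST           # 3. compare value with CC ...
    starts = max(0.0, START_RESPONSE * profit)   #    ... to get profitable starts
    stock = (1 - DEPRECIATION) * stock + starts  # 4. stock available next period
    print(f"t={period}: R={rent:5.1f}  V={value:6.1f}  HS={starts:5.1f}  stock={stock:6.1f}")
```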
Adjustment with depreciation.
The diagram to the right shows the effects of depreciation. If the supply of existing housing deteriorates due to wear, then the stock of housing supply depreciates. Because of this, the supply of housing (SHo) will shift to the left (to SH1), resulting in a new equilibrium rent of R1 (since the number of homes decreased, but demand still exists). The increase in rent from Ro to R1 will shift the value function up (from Vo to V1). As a result, more houses can be produced profitably and housing starts will increase (from HSo to HS1). Then the supply of housing will shift back to its initial position (SH1 to SHo).
Increase in demand.
The diagram on the right shows the effects of an increase in demand in the short run. If there is an increase in the demand for housing, such as the shift from Do to D1 there will be either a price or quantity adjustment, or both. For the price to stay the same, the supply of housing must increase. That is, supply SHo must increase by HS.
Increase in costs.
The diagram on the right shows the effects of an increase in costs in the short-run. If construction costs increase (say from CCo to CC1), developers will find their business less profitable and will be more selective in their ventures. In addition some developers may leave the industry. The quantity of housing starts will decrease (HSo to HS1). This will eventually reduce the level of supply (from SHo to SH1) as the existing stock of housing depreciates. Prices will tend to rise (from Ro to R1).
Real estate financing.
There are different ways of real estate financing: governmental and commercial sources and institutions. A homebuyer or builder can obtain financial aid from savings and loan associations, commercial banks, savings banks, mortgage bankers and brokers, life insurance companies, credit unions, federal agencies, individual investors, and builders.
Over the last decade, residential prices increased every year on average by double digits in Beijing or Shanghai. However, many observers and researchers argue that fundamentals of the housing sector, both sector-specific and macroeconomic, may have been the driving force behind housing price volatility.
Savings and loan associations.
The most important purpose of these institutions is to make mortgage loans on residential property. These organizations, which also are known as savings associations, building and loan associations, cooperative banks (in New England), or homestead associations (in Louisiana), are the primary source of financial assistance to a large segment of American homeowners. As home-financing institutions, they give primary attention to single-family residences and are equipped to make loans in this area.
Some of the most important characteristics of a savings and loan association are:
Commercial banks.
Due to changes in banking laws and policies, commercial banks are increasingly active in home financing. In acquiring mortgages on real estate, these institutions follow two main practices:
In addition, dealer service companies, which were originally used to obtain car loans for permanent lenders such as commercial banks, wanted to broaden their activity beyond their local area. In recent years, however, such companies have concentrated on acquiring mobile home loans in volume for both commercial banks and savings and loan associations. Service companies obtain these loans from retail dealers, usually on a non-recourse basis. Almost all bank or service company agreements contain a credit insurance policy that protects the lender if the consumer defaults.
Savings banks.
These depository financial institutions are federally chartered, primarily accept consumer deposits, and make home mortgage loans.
Mortgage bankers and brokers.
Mortgage bankers are companies or individuals that originate mortgage loans, sell them to other investors, service the monthly payments, and may act as agents to dispense funds for taxes and insurance.
Mortgage brokers present homebuyers with loans from a variety of loan sources. Their income comes from the lender making the loan, just like with any other bank. Because they can tap a variety of lenders, they can shop on behalf of the borrower and achieve the best available terms. Despite legislation that could favor major banks, mortgage bankers and brokers keep the market competitive so the largest lenders must continue to compete on price and service. According to Don Burnette of Brightgreen Homeloans in Port Orange, Florida, "The mortgage banker and broker conduit is vital to maintain competitive balance in the mortgage industry. Without it, the largest lenders would be able to unduly influence rates and pricing, potentially hurting the consumer. Competition drives every organization in this industry to constantly improve on their performance, and the consumer is the winner in this scenario."
Life insurance companies.
Life insurance companies are another source of financial assistance. These companies lend on real estate as one form of investment and adjust their portfolios from time to time to reflect changing economic conditions. Individuals seeking a loan from an insurance company can deal directly with a local branch office or with a local real estate broker who acts as loan correspondent for one or more insurance companies.
Credit unions.
These cooperative financial institutions are organized by people who share a common bond—for example, employees of a company, labor union, or religious group. Some credit unions offer home loans in addition to other financial services.
Federally supported agencies.
Under certain conditions and fund limitations, the Veterans Administration (VA) makes direct loans to creditworthy veterans in housing credit shortage areas designated by the VA's administrator. Such areas are generally rural and small cities and towns not near the metropolitan or commuting areas of large cities—areas where GI loans from private institutions are not available.
The federally supported agencies referred to here do not include the so-called second-layer lenders who enter the scene after the mortgage is arranged between the lending institution and the individual home buyer.
Real estate investment trusts.
Real estate investment trusts (REITs), which began when the Real Estate Investment Trust Act became effective on January 1, 1961, are available. REITs, like savings and loan associations, are committed to real estate lending and can and do serve the national real estate market, although some specialization has occurred in their activities.
In the United States, REITs generally pay little or no federal income tax but are subject to a number of special requirements set forth in the Internal Revenue Code, one of which is the requirement to annually distribute at least 90% of their taxable income in the form of dividends to shareholders.
Other sources.
Individual investors constitute a fairly large but somewhat declining source of money for home mortgage loans. Experienced observers claim that these lenders prefer shorter-term obligations and usually restrict their loans to less than two-thirds of the value of the residential property. Likewise, building contractors sometimes accept second mortgages in partial payment of the construction price of a home if the purchaser is unable to raise the total amount of the down payment above the first mortgage money offered.
In addition, homebuyers or builders can save their money using FSBO in order not to pay extra fees.
Common misconceptions.
A 2022 study published by three professors from the University of California found that people in the United States broadly misunderstand the role that supply plays in counteracting the price of housing. Although most renters and homeowners were able to predict the effect of increasing supply on the market for other goods, the researchers found that only 30 to 40 percent of both groups could correctly predict the effect of new supply when applied to the market for homes. A majority of both renters and homeowners were found to prefer lower rent and housing prices for their city, but struggled to connect this preferred policy outcome to the supply-side solutions advocated for by economists. “Supply skepticism,” as the study labelled this phenomenon, was found to predict opposition to constructing new housing as well as opposition to state level policies that reduce local barriers like exclusionary zoning.
Political Economy of Real Estate.
Real estate offers interesting perspectives on understanding some of the factors in social mobility and economic decision-making, both at the macro and the micro levels. It has had a profound impact not only on government policies but also on the discussions and choices of individuals looking to become homeowners. In recent years, liberalization of mortgage markets and complex finance operations using mortgages as collateral (see Mortgage-backed security) have led to the expansion of the world economy (see Financialization).
Housing and voting patterns.
Housing and left/right cleavage.
While popular culture tends to link home ownership with right-wing voting, studies conducted across Europe tend to show mixed results. In Sweden, homeowners from left-wing social classes are likelier to report themselves as right-wing. In France, middle-class homeowners were three times more likely than middle-class tenants to vote for Nicolas Sarkozy in the 2012 French presidential election, but results showed few differences between homeowners and tenants among lower-class and upper-class voters. In Germany, homeowners were more likely to vote for conservative parties when house prices were rising. In the UK, studies on the Housing Act 1980 and the 1983 United Kingdom general election tend to show that while purchasing council houses was linked to a decreased likelihood of voting for the Labour Party (UK), it was mostly the Alliance (center-left) that gained those defecting voters.
Studies in the UK and Germany have also highlighted links between home ownership and the redirection of voting patterns towards center-left parties. In line with results from the 1983 general election, a more recent study has argued that as part of a broader process of ‘gentrification’ of Labour electoral interests, UK homeowners tend to divert from the Conservative party (UK) towards a party that reconciles economic interests and left-wing ideals. In Germany, one study has also pointed out a similar ‘embourgeoisement’ effect of the SPD vote.
Regarding preferences for policy proposals, some studies from the UK tend to demonstrate that, as houses are fixed assets, right-wing homeowners who have seen an increase in their property value tend to be less favorable to redistributive policies and social insurance programs. As their property value increases, Conservative voters tend, to a larger degree, to consider houses a form of self-supplied insurance, which disincentivizes support for such programs. Due to the fixed nature of houses, those policy preferences are likely to be present to a greater extent when it comes to long-term social insurance and redistributive programs such as pensions.
Housing and populism.
Recently, several studies conducted in several European countries sought to determine the influence of housing on right-wing populist electoral results. While political spectrums and housing markets differ according to countries, studies highlight some cross-national trends.
Studies regarding the relationship between variation in house prices and populist electoral results have found that voters living in areas where house prices increased the least were more prone to vote for right-wing populist parties. One explanation may lie in the fact that as the housing map created winners (those owning in dynamic areas) and losers (those holding in less prosperous areas), those who experienced a relative decline in the value of their homes tended to feel left out of a significant component of household wealth formation, and therefore were inclined to favor populist political parties which challenged a status quo that did not benefit them. In the UK, some have highlighted a correlation between the relative deflation of housing prices and an increased likelihood of voting in favor of Brexit. Research in France shows that those who saw their home prices increase tended to vote for candidates other than Marine Le Pen in the 2017 French presidential election. In Nordic countries, studies tend to come to similar findings, with data showing an inverse relationship between house price increases and support for right-wing populist parties. Those living in ‘left-behind’ areas (where house prices have decreased by 15%) tended to vote 10% higher for the Danish People’s Party than in ‘booming’ areas (where house prices have increased by 100%). In Germany, studies show that AfD scores are higher in areas where house prices have not risen as much as the average rate.
Recent work by Julia Cagé and Thomas Piketty seems to corroborate the role of areas’ prosperity in the vote for right-wing populist parties. Describing the Rassemblement National vote as “a vote of little-middle access to home ownership,” they argue that home ownership is twice as frequent in towns and villages as in cities (the former generally being considered less prosperous areas), that it represents to some a sign of upward social mobility toward a class that is neither affluent nor disadvantaged, and that these voters do not feel represented by traditional right-wing political parties, which they consider to represent a more favored population, or by left-wing political parties, which they regard as representing a less deserving class and as not supporting their efforts. Such analysis, combined with the findings on house price variations presented above, points in the direction that right-wing populist electoral results are, at least partly, driven by geosocial factors, with lower-middle-class people living in less populated areas not feeling supported by traditional political parties and afraid of social downgrading.
Debates around intergenerational conflicts.
Around Europe, debates around generational inequalities have been the subject of coverage in several news outlets. Regarding ownership inequality in Europe, data points to a positive relationship between age and home ownership. In England, those over 65 owned 35.8% of all houses in 2022, while they only represented 18.6% of the population in 2021. In Germany, 50.4% of 60-69-year-olds owned their homes, while only 18.4% of 20-29-year-olds did. As older people tend to have more time to accumulate wealth, academics highlight that these inequalities are wider than decades ago. Research shows that such inequalities exist due to a significant increase in the ratio of housing prices to annual income, related to the rise in the wealth-to-income ratio. Data collected from the Bank of England show that, in 1982, a house cost on average only 4.16 times an average British person’s annual income, but by 2023 this had climbed to 8.68 times annual income.
In the closing decades of the twentieth century, several European countries enacted public policies aimed at promoting home ownership. In the UK during the 1980s, the Thatcher premiership passed the ‘right to buy’ scheme, which saw 3 million council houses sold at prices between 30% and 70% below market prices. In France, liberal housing policies gained ground in the 1970s, enabling the rise of residential suburbs. Nonetheless, some have also nuanced the extent to which countries uniformly incentivized homeownership during that period. Studies on Nordic countries have highlighted differences in the housing models promoted by public policies, arguing that while Norway promoted cooperative and private ownership, this was less the case in Denmark, which, for example, institutionalized nonprofit renting.
Academics have also pointed out that the strain on capital accumulation resulting from WWII, together with post-war interventionist and redistributionist policies, helped workers – i.e., those who earn a large share of their income through work – to earn a larger share of national income, translating into a greater ability to become homeowners. Such theories tend to favor the idea that intergenerational homeownership inequalities are more a product of class-based inequalities than of intergenerational inequalities as such: young people struggle to a larger extent not because older people have ‘hoarded’ the housing market, but because the capital class, which is constituted not by age but by intra-familial transfers and wealth accumulation, has exploited the labor class to a greater extent since the end of the 1970s. In line with this argument, some highlight the importance of considering intragenerational housing wealth inequalities; studies regarding the UK have demonstrated that such household housing wealth inequalities are largest within the baby-boomer generation, suggesting limits to the intergenerational divide theories.
While several news outlets have framed a growing generational conflict around housing ownership, some studies have argued that although, from an objective perspective, millennials recognized that baby boomers were better off, a relational analysis demonstrated that they resented not the older generation for their situations but rather the government for out-of-touch policies. As for the baby boomers, they tended to express sympathy for the younger generations, recognizing that they were facing more significant barriers to home ownership. Similarly, research argues that although the probability of housing being a personal issue significantly decreases with age, the tendency to consider it a country-wide problem, i.e., a public policy issue, remains similar across generations, which would tend to affirm the prominence of inter-generational solidarity rather than inter-generational conflict.
Paradigms of Social Welfare Policies Regarding Real Estate Economics.
Many government policies in social welfare states view houses as assets – a way for families to hedge their risks against eventual retirement and to hold a safe form of savings as an alternative to other pensions. Since the 1980s, these governments have often focused on making the housing market more liquid by broadening access to housing finance. Bohle and Seabrooke argue that there are three paradigms of housing:
There are clear examples in which these three paradigms served as the basis for structural changes, with states’ housing policies evolving alongside economic change. In Ireland, the 1980s housing market reflected a patrimony-based housing market – but since then, neoliberal policies have led to cuts in social housing programs and increases in private home construction. Housing finance became even stronger with EU accession, and banks began asset-based lending. By 2016, Irish households were the fourth most indebted in the EU, a fifth of them with residential mortgage debts. After the 2008 financial crisis, Ireland came under the Troika, with Irish domestic laws undermining social policies in favor of the country’s financial health. The Land and Conveyancing Law Reform Bill 2013 made it possible for lenders to repossess homes from borrowers – an action aimed at protecting the financial sector rather than forming part of a coherent housing policy. The vacancy rate of housing in Ireland rose to 12.8%, leaving behind ghost towns. Ultimately, wealthier households in Ireland pass their houses to children while lower-income families are excluded from ever owning property – marking Ireland’s paradigm shift from asset to patrimony.
In Denmark, the persistence of tax breaks for mortgage debt led to Danish consumers becoming some of the most indebted people in the world, with debt per capita averaging 250% of personal income. Denmark used a mortgage-based covered bond system as its form of “privatized monetary policy.” In 1986, the housing bubble burst, leading the coalition government to reduce the mortgage interest deductibility from taxes. After the 1989 reform of the mortgage financing system (in line with the EU’s Second Banking Directive) and the 1990 Social Democratic government’s liberalized mortgage product policies, the credit market and available credit for housing boomed. In the 2000s, cracks began to show between the elites and the masses – 2007 reforms allowed Danish banks to enter the mortgage market more aggressively while foreign investment interest in Danish mortgage bond markets increased (increased financialization, continuing the road toward housing-as-asset policies). Continued marketization of housing led some apartments in Copenhagen to triple in price within five years.
In Hungary, foreign capital began flowing in around the country’s accession to the EU, and banks in Hungary were encouraged to increase their access to lending and lower borrowing costs, creating a risky housing market. When the housing bubble burst in 2008, the socialist Gordon Bajnai administration (2009-2010) focused on reducing public debt and the deficit rather than on the private side (the over-indebted population). Orban’s government pivoted away from these policies – it portrayed the foreign banks and lenders as the predators responsible for Hungary’s economic downfall; the government levied special taxes on banks, insurance companies, and the financial sector. It tried to alleviate the burden of households’ foreign-currency loans by allowing borrowers to repay in Hungarian forint at a preferential rate if they could pay off their loans in one lump sum. Lenders were compelled to compensate borrowers for discrepancies between the exchange rate they used for loan repayment and the market exchange rate. However, this pushed the lower-income part of society even further down; housing subsidies for these groups were cut, and homelessness was even made a crime. The ultraconservative policies pushed the cost of housing onto the banks rather than taxpayers, while Hungary’s housing deprivation problems – dampness, rot, lack of bathroom facilities, and overcrowding – remain among the worst in Central and Eastern Europe.
Governments’ neoliberal policies and rising mortgage debt levels.
The popular academic discourse surrounding the financialization of real estate is that liquidity and the extension of credit stimulate economic growth. Deregulation and liberalization were the means by which financial regulators intended the markets to grow – through the increasing use of real estate as collateral for other financial products. Such decisions led to the creation of complex financial transactions and reinforced governments’ continued neoliberal policies of opening housing markets to financialization.
The academic debate around the causes for rising levels of mortgage debts concerns their focus on the supply or the demand side of the housing market. Recent scholars focusing on the demand side explain that consumers purchasing and owning houses seek mortgage lending to complete their purchases, ultimately increasing house prices. Schwartz states that the 1980s deregulation of mortgage markets increased potential credit, resulting in higher demand for real estate and rising prices. In addition, Johnston and Regan showed that increased wages led to households having more liquidity to finance real estate properties, leading to higher demand for houses and, therefore, even more mortgage lending. This side of the academic debate presents an argument that seeks to use increased demand for housing over the years as the primary reason for rising levels of mortgage debt worldwide.
On the other hand, Anderson and Kurzer argued that drivers of housing supply led to a rising level of mortgages and household indebtedness – while also interacting with the demand levels for housing. They studied the Netherlands, Denmark, and Sweden – three countries with the highest levels of outstanding mortgage debts compared to their disposable income and the highest levels of mortgage debts as part of their GDP. In summary, the study presented that the three countries rode the waves of political policies that were formed around the right/center-right governments’ desire in the 1990s to stimulate home ownership and reduce social housing expenses and build a society of self-sufficient home-owners. However, when the more left-leaning governments came to power, they did not reverse these policies. Instead, they introduced further deregulation of mortgage markets to allow more working-class consumers to become homeowners.
As a result, the continued neoliberal policies around the mortgage markets in these three countries led to the growth of banking power. Danish banks saw their annual growth rates in lending exceeding 50% between 2003 and 2007, while in the Netherlands, the Dutch market for securitized assets (in this case, mortgage-backed securities) became the second largest in Europe after the UK in 2008. In Sweden, the Swedish-covered bonds (securities, usually backed by mortgages) were at 55% of GDP in 2014 and more than double the Swedish government bonds. In summary, the scholars argue that the Netherlands, Denmark, and Sweden mortgage markets were liberalized to encourage financial innovation and promote homeownership. Still, residential construction remained stagnant, leading to an inelastic housing supply. Government officials and regulators liberalized the mortgage market using credit and financial products such as special mortgage packages and consumer tax incentives to bypass this issue. Because all three countries have very high tax rates, the fiscal relief offered by tax incentives from having mortgages seemed even more lucrative, increasing the demand. At the same time, the supply of housing continued to stay inelastic. Anderson and Kurzer conclude that this led to a critical exposure to the 2008 global financial crisis when the housing markets collapsed under the crumbling legs of complex mortgage-backed financial products.
Ultimately, the debate around the rising mortgage debt levels worldwide centers around financialization and the political agenda of homeownership. There are strong connections to government programs that reflect the political ideologies of homeownership and the economic tools to achieve those means. In the case of the Netherlands, Sweden, and Denmark, Anderson and Kurzer showed that the center-right governments began increasing homeownership to cut social housing costs and reduce social policy dependence. Interestingly enough, the center-left government that subsequently followed also used similar tools of neo-liberal housing policies to enable homeownership for working-class citizens.
The trade-off between Social policies and home ownership.
Academic debates surround the nature of the trade-off between social welfare and house ownership. In the 1980s, Jim Kemeny argued that homeownership and social welfare policies have an inverse relationship. First, Kemeny argued that citizens living in a country with meager retirement pensions and/or lacking government support of public welfare policies would tend to make private contributions in their earlier phases of life towards retirement – often in the form of housing. Homeowners would feel that their home is a valuable asset that would safeguard them from the risks of eventual retirement and aging. Thus, they would feel less inclined to rely on or support the government's public welfare policies. The government's social welfare policies would further undermine the value of the houses, because public support of social housing upkeep would ultimately drive the value of homes downward. Kemeny drew these findings from his analysis of eight OECD countries, including Sweden, the Netherlands, the UK, the USA, Canada, and Australia, among others. About twenty years later, Frank Castles, a professor of political science at the Australian National University, conducted more in-depth research on Kemeny's thesis and strongly confirmed his case. Castles adjusted Kemeny's thesis to show that the "really big trade-off" was between homeownership and pensions rather than the welfare state as a whole.
Kemeney’s main argument is presented in his work: “My overall argument was that high rates of home ownership impacted on society through various forms of privatisation, influencing urban form, public transport, life-styles, gender roles, systems of welfare and social security as well as other dimensions of social structure. I argued that an overwhelming emphasis on home ownership created a lifestyle based on detached housing, privatised urban transport and its resulting ‘‘one-household’’ (and increasingly ‘‘one-person’’) car ownership, a traditional gendered division of labour based on female housewifery and the fulltime working male, and strong resistance to public expenditure that necessitated the high taxes needed to fund quality universal welfare provision.”
In 2020, Gunten and Kohl returned to Kemeny's thesis. They presented a different side of the academic research, arguing in an updated study that the inverse relationship between social welfare and house ownership has converged upwards into what they labeled the “dual ratchet effect.” Huber and Stephens argued that the political costs of stopping social policies could be damaging, and thus social policies are resistant to their opposition. Gunten and Kohl extend this argument to homeownership: homeownership is also used to garner political support due to its popularity amongst citizens, and thus the damaging political costs of withdrawing the benefits of public policies favoring homeownership (e.g., tax breaks or subsidies) make these policies resilient against their opposition. Because of this inelasticity of both social policies and homeownership, the high costs associated with scaling back either set of policies mean that both respond to upward drivers rather than downward pressures. Governments bypassed the issue of unloading the problems of social policies on homeowners by using the credit markets – resorting to inflation in the 70s, public debt in the 80s, and private debt in the 2000s. This was labeled the buying-time hypothesis by Gunten and Kohl and is further supported by their capital supply hypothesis – the amount of capital available increased due to the deregulation of the international financial market since the 1970s and the growth of private pension fund assets, leading to an abundance of available capital. In conclusion, Gunten and Kohl present a case where the inverse relationship between homeownership and social policies existed in the 80s but has since changed into the dual ratchet effect of simultaneous upward convergence. Furthermore, they state that if the trade-off still holds in the long run, the correction costs of homeownership and pensions will eventually assert themselves when the amount of capital begins to dwindle and the credit market runs dry.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "U=U(X_1,X_2,X_3,X_4,...X_n)"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "P_1X_1+P_2X_2 + ... P_nX_n = Y"
},
{
"math_id": 3,
"text": "Y"
},
{
"math_id": 4,
"text": "Ps"
},
{
"math_id": 5,
"text": "X_4"
},
{
"math_id": 6,
"text": "Z_1,Z_2,Z_3,Z_4,...Z_n"
},
{
"math_id": 7,
"text": "U=U(X_1,X_2,X_3,(Z_1,Z_2,Z_3,Z_4,...Z_n)...X_n)"
},
{
"math_id": 8,
"text": "Q=f(L,N,M)"
},
{
"math_id": 9,
"text": "Q"
},
{
"math_id": 10,
"text": "N"
},
{
"math_id": 11,
"text": "L"
},
{
"math_id": 12,
"text": "M"
}
]
| https://en.wikipedia.org/wiki?curid=1327762 |
13280188 | 20,000 | Natural number
20,000 (twenty thousand) is the natural number that comes after 19,999 and before 20,001.
20,000 is a round number, and is also in the title of Jules Verne's 1870 novel "Twenty Thousand Leagues Under the Seas".
Selected numbers in the range 20001–29999.
Primes.
There are 983 prime numbers between 20000 and 30000.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\leq 2^{18}"
}
]
| https://en.wikipedia.org/wiki?curid=13280188 |
13280205 | 30,000 | Natural number
30,000 (thirty thousand) is the natural number that comes after 29,999 and before 30,001.
Selected numbers in the range 30001–39999.
Primes.
There are 958 prime numbers between 30000 and 40000.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{Q}(\\sqrt{6}, \\sqrt{14})"
},
{
"math_id": 1,
"text": "F"
}
]
| https://en.wikipedia.org/wiki?curid=13280205 |
13280211 | 40,000 | Natural number
40,000 (forty thousand) is the natural number that comes after 39,999 and before 40,001. It is the square of 200.
Selected numbers in the range 40001–49999.
Primes.
There are 930 prime numbers between 40000 and 50000.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\leq 2^{19}"
},
{
"math_id": 1,
"text": "a^2+b!"
}
]
| https://en.wikipedia.org/wiki?curid=13280211 |
13280221 | 60,000 | Natural number
60,000 (sixty thousand) is the natural number that comes after 59,999 and before 60,001. It is a round number. It is the value of formula_0(75025).
Selected numbers in the range 60,000–69,999.
Primes.
There are 878 prime numbers between 60000 and 70000.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\varphi"
}
]
| https://en.wikipedia.org/wiki?curid=13280221 |
13280231 | 80,000 | Natural number
80,000 (eighty thousand) is the natural number after 79,999 and before 80,001.
Selected numbers in the range 80,000–89,999.
Primes.
There are 876 prime numbers between 80000 and 90000.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\leq 2^{20}"
}
]
| https://en.wikipedia.org/wiki?curid=13280231 |
1328116 | Neutrino oscillation | Phenomenon in which a neutrino changes lepton flavor as it travels
Neutrino oscillation is a quantum mechanical phenomenon in which a neutrino created with a specific lepton family number ("lepton flavor": electron, muon, or tau) can later be measured to have a different lepton family number. The probability of measuring a particular flavor for a neutrino varies between three known states, as it propagates through space.
First predicted by Bruno Pontecorvo in 1957, neutrino oscillation has since been observed by a multitude of experiments in several different contexts. Most notably, the existence of neutrino oscillation resolved the long-standing solar neutrino problem.
Neutrino oscillation is of great theoretical and experimental interest, as the precise properties of the process can shed light on several properties of the neutrino. In particular, it implies that the neutrino has a non-zero mass outside the Einstein-Cartan torsion, which requires a modification to the Standard Model of particle physics. The experimental discovery of neutrino oscillation, and thus neutrino mass, by the Super-Kamiokande Observatory and the Sudbury Neutrino Observatory was recognized with the 2015 Nobel Prize for Physics.
Observations.
A great deal of evidence for neutrino oscillation has been collected from many sources, over a wide range of neutrino energies and with many different detector technologies. The 2015 Nobel Prize in Physics was shared by Takaaki Kajita and Arthur B. McDonald for their early pioneering observations of these oscillations.
Neutrino oscillation is a function of the ratio L/E, where L is the distance traveled and E is the neutrino's energy (details below). All available neutrino sources produce a range of energies, and oscillation is measured at a fixed distance for neutrinos of varying energy. The limiting factor in measurements is the accuracy with which the energy of each observed neutrino can be measured. Because current detectors have energy uncertainties of a few percent, it is satisfactory to know the distance to within 1%.
Solar neutrino oscillation.
The first experiment that detected the effects of neutrino oscillation was Ray Davis' Homestake experiment in the late 1960s, in which he observed a deficit in the flux of solar neutrinos with respect to the prediction of the Standard Solar Model, using a chlorine-based detector. This gave rise to the solar neutrino problem. Many subsequent radiochemical and water Cherenkov detectors confirmed the deficit, but neutrino oscillation was not conclusively identified as the source of the deficit until the Sudbury Neutrino Observatory provided clear evidence of neutrino flavor change in 2001.
Solar neutrinos have energies below 20 MeV. At energies above 5 MeV, solar neutrino oscillation actually takes place in the Sun through a resonance known as the MSW effect, a different process from the vacuum oscillation described later in this article.
Atmospheric neutrino oscillation.
Following the theories that were proposed in the 1970s suggesting unification of electromagnetic, weak, and strong forces, a few experiments on proton decay followed in the 1980s. Large detectors such as IMB, MACRO, and Kamiokande II have observed a deficit in the ratio of the flux of muon to electron flavor atmospheric neutrinos (see "muon decay"). The Super-Kamiokande experiment provided a very precise measurement of neutrino oscillation in an energy range of hundreds of MeV to a few TeV, and with a baseline of the diameter of the Earth; the first experimental evidence for atmospheric neutrino oscillations was announced in 1998.
Reactor neutrino oscillation.
Many experiments have searched for oscillation of electron anti-neutrinos produced in nuclear reactors. No oscillations were found until a detector was installed at a distance of 1–2 km. Such oscillations give the value of the parameter θ13. Neutrinos produced in nuclear reactors have energies similar to solar neutrinos, of around a few MeV. The baselines of these experiments have ranged from tens of meters to over 100 km (parameter θ12). Mikaelyan and Sinev proposed to use two identical detectors to cancel systematic uncertainties in a reactor experiment to measure the parameter θ13.
In December 2011, the Double Chooz experiment found evidence for a non-zero value of θ13. Then, in 2012, the Daya Bay experiment measured a non-zero θ13 at a significance of more than 5 standard deviations. These results have since been confirmed by RENO.
Beam neutrino oscillation.
Neutrino beams produced at a particle accelerator offer the greatest control over the neutrinos being studied. Many experiments have taken place that study the same oscillations as in atmospheric neutrino oscillation using neutrinos with a few GeV of energy and several-hundred-km baselines. The MINOS, K2K, and Super-K experiments have all independently observed muon neutrino disappearance over such long baselines.
Data from the LSND experiment appear to be in conflict with the oscillation parameters measured in other experiments. Results from the MiniBooNE appeared in Spring 2007 and contradicted the results from LSND, although they could support the existence of a fourth neutrino type, the sterile neutrino.
In 2010, the INFN and CERN announced the observation of a tauon particle in a muon neutrino beam in the OPERA detector located at Gran Sasso, 730 km away from the source in Geneva.
T2K, using a neutrino beam directed through 295 km of earth and the Super-Kamiokande detector, measured a non-zero value for the parameter θ13 in a neutrino beam. NOνA, using the same beam as MINOS with a baseline of 810 km, is sensitive to the same.
Theory.
Neutrino oscillation arises from mixing between the flavor and mass eigenstates of neutrinos. That is, the three neutrino states that interact with the charged leptons in weak interactions are each a different superposition of the three (propagating) neutrino states of definite mass. Neutrinos are emitted and absorbed in weak processes in flavor eigenstates but travel as mass eigenstates.
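In the notation of the Pontecorvo–Maki–Nakagawa–Sakata mixing matrix U introduced below, this statement is usually written as the following superposition; note that the placement of the complex conjugate on U is a convention that varies between references.

```latex
% A flavor eigenstate as a superposition of the three mass eigenstates,
% written with the PMNS mixing matrix U (one common sign convention).
\[
  \left| \nu_{\alpha} \right\rangle
    = \sum_{i=1}^{3} U_{\alpha i}^{*} \left| \nu_{i} \right\rangle ,
  \qquad \alpha = e, \mu, \tau .
\]
```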
As a neutrino superposition propagates through space, the quantum mechanical phases of the three neutrino mass states advance at slightly different rates, due to the slight differences in their respective masses. This results in a changing superposition mixture of mass eigenstates as the neutrino travels; but a different mixture of mass eigenstates corresponds to a different mixture of flavor states. For example, a neutrino born as an electron neutrino will be some mixture of electron, mu, and tau neutrino after traveling some distance. Since the quantum mechanical phase advances in a periodic fashion, after some distance the state will nearly return to the original mixture, and the neutrino will be again mostly electron neutrino. The electron flavor content of the neutrino will then continue to oscillate – as long as the quantum mechanical state maintains coherence. Since mass differences between neutrino flavors are small in comparison with long coherence lengths for neutrino oscillations, this microscopic quantum effect becomes observable over macroscopic distances.
In contrast, due to their larger masses, the charged leptons (electrons, muons, and tau leptons) have never been observed to oscillate. In nuclear beta decay, muon decay, pion decay, and kaon decay, when a neutrino and a charged lepton are emitted, the charged lepton is emitted in an incoherent mass eigenstate (for example, that of the electron), because of its large mass. Weak-force couplings compel the simultaneously emitted neutrino to be in a "charged-lepton-centric" superposition (such as the electron-neutrino state), which is an eigenstate for a "flavor" that is fixed by the charged lepton's mass eigenstate, and not in one of the neutrino's own mass eigenstates. Because the neutrino is in a coherent superposition that is not a mass eigenstate, the mixture that makes up that superposition oscillates significantly as it travels. No analogous mechanism exists in the Standard Model that would make charged leptons detectably oscillate. In the four decays mentioned above, where the charged lepton is emitted in a unique mass eigenstate, the charged lepton will not oscillate, as single mass eigenstates propagate without oscillation.
The case of (real) W boson decay is more complicated: W boson decay is sufficiently energetic to generate a charged lepton that is not in a mass eigenstate; however, the charged lepton would lose coherence, if it had any, over interatomic distances (0.1 nm) and would thus quickly cease any meaningful oscillation. More importantly, no mechanism in the Standard Model is capable of pinning down a charged lepton into a coherent state that is not a mass eigenstate, in the first place; instead, while the charged lepton from the W boson decay is not initially in a mass eigenstate, neither is it in any "neutrino-centric" eigenstate, nor in any other coherent state. It cannot meaningfully be said that such a featureless charged lepton oscillates or that it does not oscillate, as any "oscillation" transformation would just leave it the same generic state that it was before the oscillation. Therefore, detection of a charged lepton oscillation from W boson decay is infeasible on multiple levels.
Pontecorvo–Maki–Nakagawa–Sakata matrix.
The idea of neutrino oscillation was first put forward in 1957 by Bruno Pontecorvo, who proposed that neutrino–antineutrino transitions may occur in analogy with neutral kaon mixing. Although such matter–antimatter oscillation had not been observed, this idea formed the conceptual foundation for the quantitative theory of neutrino flavor oscillation, which was first developed by Maki, Nakagawa, and Sakata in 1962 and further elaborated by Pontecorvo in 1967. One year later the solar neutrino deficit was first observed, and that was followed by the famous article by Gribov and Pontecorvo published in 1969 titled "Neutrino astronomy and lepton charge".
The concept of neutrino mixing is a natural outcome of gauge theories with massive neutrinos, and its structure can be characterized in general.
In its simplest form it is expressed as a unitary transformation relating the flavor and mass eigenbasis and can be written as
formula_0
formula_1
where
formula_7 represents the "Pontecorvo–Maki–Nakagawa–Sakata matrix" (also called the "PMNS matrix", "lepton mixing matrix", or sometimes simply the "MNS matrix"). It is the analogue of the CKM matrix describing the analogous mixing of quarks. If this matrix were the identity matrix, then the flavor eigenstates would be the same as the mass eigenstates. However, experiment shows that it is not.
When the standard three-neutrino theory is considered, the matrix is 3×3. If only two neutrinos are considered, a 2×2 matrix is used. If one or more sterile neutrinos are added (see later), it is 4×4 or larger. In the 3×3 form, it is given by
formula_8
where "c""ij" = cos "θ""ij" and "s""ij" = sin "θ""ij". The phase factors α1 and α2 are physically meaningful only if neutrinos are Majorana particles — i.e. if the neutrino is identical to its antineutrino (whether or not they are is unknown) — and do not enter into oscillation phenomena regardless. If neutrinoless double beta decay occurs, these factors influence its rate. The phase factor δ is non-zero only if neutrino oscillation violates CP symmetry; this has not yet been observed experimentally. If experiment shows this 3×3 matrix to be not unitary, a sterile neutrino or some other new physics is required.
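For illustration, the 3×3 matrix above (with the Majorana phases omitted) can be assembled numerically from the three rotation factors and checked for unitarity. The following Python sketch uses placeholder values for the angles and the phase δ — they are assumptions chosen for the example, not measured values.

```python
import numpy as np

# Placeholder mixing angles and Dirac phase (radians); illustrative assumptions only.
th12, th23, th13, delta = 0.59, 0.79, 0.15, -1.5

c12, s12 = np.cos(th12), np.sin(th12)
c23, s23 = np.cos(th23), np.sin(th23)
c13, s13 = np.cos(th13), np.sin(th13)

R23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]], dtype=complex)
R13 = np.array([[c13, 0, s13 * np.exp(-1j * delta)],
                [0, 1, 0],
                [-s13 * np.exp(1j * delta), 0, c13]])
R12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]], dtype=complex)

U = R23 @ R13 @ R12                              # PMNS matrix without Majorana phases

assert np.allclose(U @ U.conj().T, np.eye(3))    # unitarity check
assert np.isclose(abs(U[0, 2]), s13)             # |U_e3| = sin(theta_13), as in the explicit form
print(np.round(np.abs(U), 3))
```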
Propagation and interference.
Since formula_9 are mass eigenstates, their propagation can be described by plane wave solutions of the form
formula_10
where
In the ultrarelativistic limit, formula_18 we can approximate the energy as
formula_19
where E is the energy of the wavepacket (particle) to be detected.
This limit applies to all practical (currently observed) neutrinos, since their masses are less than 1 eV and their energies are at least 1 MeV, so the Lorentz factor γ is greater than one million in all cases. Using also "t" ≈ "L", where "L" is the distance traveled, and also dropping the phase factors, the wavefunction becomes
formula_20
Eigenstates with different masses propagate with different frequencies. The heavier ones oscillate faster compared to the lighter ones. Since the mass eigenstates are combinations of flavor eigenstates, this difference in frequencies causes interference between the corresponding flavor components of each mass eigenstate. Constructive interference makes it possible for a neutrino created with a given flavor to be observed with a different flavor during its propagation. The probability that a neutrino originally of flavor α will later be observed as having flavor β is
formula_21
This is more conveniently written as
formula_22
where formula_23
The phase that is responsible for oscillation is often written as (with c and formula_24 restored)
formula_25
where 1.27 is dimensionless. In this form, it is convenient to plug in the oscillation parameters, since the factor 1.27 absorbs the units when Δ"m"² is expressed in eV², "L" in km, and "E" in GeV.
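As a quick numerical cross-check of the quoted coefficient, the combination eV²·km/(4ħ"c"·GeV) — equivalently GeV·fm/(4ħ"c") — can be evaluated directly from ħ"c" ≈ 197.327 MeV·fm, as in the short Python sketch below.

```python
# Check the numerical factor 1.27 in the oscillation phase
# Delta m^2 [eV^2] * L [km] / (4 * hbar*c * E [GeV]).
hbar_c_eV_m = 197.3269804e6 * 1e-15   # hbar*c ~ 197.327 MeV*fm, expressed in eV*m

dm2 = 1.0      # eV^2
L   = 1.0e3    # 1 km, in metres
E   = 1.0e9    # 1 GeV, in eV

phase_factor = dm2 * L / (4 * hbar_c_eV_m * E)
print(phase_factor)   # ~1.27, the dimensionless coefficient quoted above
```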
If there is no CP-violation (δ is zero), then the second sum is zero. Otherwise, the CP asymmetry can be given as
formula_26
In terms of Jarlskog invariant
formula_27
the CP asymmetry is expressed as
formula_28
Two-neutrino case.
The above formula is correct for any number of neutrino generations. Writing it explicitly in terms of mixing angles is extremely cumbersome if there are more than two neutrinos that participate in mixing. Fortunately, there are several meaningful cases in which only two neutrinos participate significantly. In this case, it is sufficient to consider the mixing matrix
formula_29
Then the probability of a neutrino changing its flavor is
formula_30
Or, using SI units and the convention introduced above
formula_31
This formula is often appropriate for discussing the transition "ν"μ ↔ "ν"τ in atmospheric mixing, since the electron neutrino plays almost no role in this case. It is also appropriate for the solar case of "ν"e ↔ "ν"x , where "ν"x is a mix (superposition) of "ν"μ and "ν"τ . These approximations are possible because the mixing angle θ13 is very small and because two of the mass states are very close in mass compared to the third.
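A minimal numerical sketch of this two-flavor formula is given below; the mixing and mass-splitting values used are illustrative placeholders of roughly atmospheric size, not fitted results. The corresponding survival probability is simply one minus the transition probability.

```python
import numpy as np

def p_transition(sin2_2theta, dm2_eV2, L_km, E_GeV):
    """Two-flavour appearance probability, using the 1.27 unit convention above."""
    return sin2_2theta * np.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Illustrative "atmospheric-like" parameters (placeholders, not measured values):
sin2_2theta = 1.0        # near-maximal mixing
dm2         = 2.5e-3     # eV^2
E           = 1.0        # GeV

for L in (100, 500, 1000, 5000, 12700):   # baselines in km; 12700 ~ Earth's diameter
    print(L, round(p_transition(sin2_2theta, dm2, L, E), 3))
```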
Classical analogue of neutrino oscillation.
The basic physics behind neutrino oscillation can be found in any system of coupled harmonic oscillators. A simple example is a system of two pendulums connected by a weak spring (a spring with a small spring constant). The first pendulum is set in motion by the experimenter while the second begins at rest. Over time, the second pendulum begins to swing under the influence of the spring, while the first pendulum's amplitude decreases as it loses energy to the second. Eventually all of the system's energy is transferred to the second pendulum and the first is at rest. The process then reverses. The energy oscillates between the two pendulums repeatedly until it is lost to friction.
The behavior of this system can be understood by looking at its normal modes of oscillation. If the two pendulums are identical then one normal mode consists of both pendulums swinging in the same direction with a constant distance between them, while the other consists of the pendulums swinging in opposite (mirror image) directions. These normal modes have (slightly) different frequencies because the second involves the (weak) spring while the first does not. The initial state of the two-pendulum system is a combination of both normal modes. Over time, these normal modes drift out of phase, and this is seen as a transfer of motion from the first pendulum to the second.
The description of the system in terms of the two pendulums is analogous to the flavor basis of neutrinos. These are the parameters that are most easily produced and detected (in the case of neutrinos, by weak interactions involving the W boson). The description in terms of normal modes is analogous to the mass basis of neutrinos. These modes do not interact with each other when the system is free of outside influence.
When the pendulums are not identical the analysis is slightly more complicated. In the small-angle approximation, the potential energy of a single pendulum system is formula_32, where "g" is the standard gravity, "L" is the length of the pendulum, "m" is the mass of the pendulum, and "x" is the horizontal displacement of the pendulum. As an isolated system the pendulum is a harmonic oscillator with a frequency of formula_33. The potential energy of a spring is formula_34 where "k" is the spring constant and "x" is the displacement. With a mass attached it oscillates with a period of formula_35. With two pendulums (labeled "a" and "b") of equal mass but possibly unequal lengths and connected by a spring, the total potential energy is
formula_36
This is a quadratic form in "x""a" and "x""b", which can also be written as a matrix product:
formula_37
The 2×2 matrix is real symmetric and so (by the spectral theorem) it is "orthogonally diagonalizable". That is, there is an angle "θ" such that if we define
formula_38
then
formula_39
where "λ"1 and "λ"2 are the eigenvalues of the matrix. The variables x1 and x2 describe normal modes which oscillate with frequencies of formula_40 and formula_41. When the two pendulums are identical ("L""a" = "L""b"), "θ" is 45°.
The angle "θ" is analogous to the Cabibbo angle (though that angle applies to quarks rather than neutrinos).
When the number of oscillators (particles) is increased to three, the orthogonal matrix can no longer be described by a single angle; instead, three are required (Euler angles). Furthermore, in the quantum case, the matrices may be complex. This requires the introduction of complex phases in addition to the rotation angles, which are associated with CP violation; the additional Majorana phases, however, do not influence the observable effects of neutrino oscillation.
Theory, graphically.
Two neutrino probabilities in vacuum.
In the approximation where only two neutrinos participate in the oscillation, the probability of oscillation follows a simple pattern:
The blue curve shows the probability of the original neutrino retaining its identity. The red curve shows the probability of conversion to the other neutrino. The maximum probability of conversion is equal to sin²(2"θ"). The frequency of the oscillation is controlled by Δ"m"².
Three neutrino probabilities.
If three neutrinos are considered, the probability for each neutrino to appear is somewhat complex. The graphs below show the probabilities for each flavor, with the plots in the left column showing a long range to display the slow "solar" oscillation, and the plots in the right column zoomed in, to display the fast "atmospheric" oscillation. The parameters used to create these graphs (see below) are consistent with current measurements, but since some parameters are still quite uncertain, some aspects of these plots are only qualitatively correct.
The illustrations were created using the following parameter values:
Observed values of oscillation parameters.
. PDG combination of Daya Bay, RENO, and Double Chooz results.
. This corresponds to "θ"sol (solar), obtained from KamLand, solar, reactor and accelerator data.
(atmospheric)
(normal mass hierarchy)
Solar neutrino experiments combined with KamLAND have measured the so-called solar parameters Δ"m"²sol and "θ"sol. Atmospheric neutrino experiments such as Super-Kamiokande, together with the K2K and MINOS long-baseline accelerator neutrino experiments, have determined the so-called atmospheric parameters Δ"m"²atm and "θ"atm. The last mixing angle, θ13, has been measured by the experiments Daya Bay, Double Chooz and RENO.
For atmospheric neutrinos, the relevant mass-squared difference and the typical energies are such that oscillations become visible for neutrinos traveling several hundred kilometres, which would be those neutrinos that reach the detector traveling through the Earth, from below the horizon.
The mixing parameter θ13 is measured using electron anti-neutrinos from nuclear reactors. The rate of anti-neutrino interactions is measured in detectors sited near the reactors to determine the flux prior to any significant oscillations and then it is measured in far detectors (placed kilometres from the reactors). The oscillation is observed as an apparent disappearance of electron anti-neutrinos in the far detectors (i.e. the interaction rate at the far site is lower than predicted from the observed rate at the near site).
From atmospheric and solar neutrino oscillation experiments, it is known that two mixing angles of the MNS matrix are large and the third is smaller. This is in sharp contrast to the CKM matrix, in which all three angles are small and hierarchically decreasing. The CP-violating phase of the MNS matrix was constrained, as of April 2020, to lie somewhere between −2 and −178 degrees by the T2K experiment.
If the neutrino mass proves to be of Majorana type (making the neutrino its own antiparticle), it is then possible that the MNS matrix has more than one phase.
Since experiments observing neutrino oscillation measure the squared mass difference and not absolute mass, one might claim that the lightest neutrino mass is exactly zero, without contradicting observations. This is however regarded as unlikely by theorists.
Origins of neutrino mass.
The question of how neutrino masses arise has not been answered conclusively. In the Standard Model of particle physics, fermions only have intrinsic mass because of interactions with the Higgs field (see "Higgs boson"). These interactions require both left- and right-handed versions of the fermion (see "chirality"). However, only left-handed neutrinos have been observed so far.
Neutrinos may have another source of mass through the Majorana mass term. This type of mass applies for electrically neutral particles since otherwise it would allow particles to turn into anti-particles, which would violate conservation of electric charge.
The smallest modification to the Standard Model, which only has left-handed neutrinos, is to allow these left-handed neutrinos to have Majorana masses. The problem with this is that the neutrino masses are surprisingly small compared to those of the rest of the known particles (at least 600,000 times smaller than the mass of an electron), which, while it does not invalidate the theory, is widely regarded as unsatisfactory, as this construction offers no insight into the origin of the neutrino mass scale.
The next simplest addition would be to add into the Standard Model right-handed neutrinos that interact with the left-handed neutrinos and the Higgs field in an analogous way to the rest of the fermions. These new neutrinos would interact with the other fermions solely in this way and hence would not be directly observable, so are not phenomenologically excluded. The problem of the disparity of the mass scales remains.
Seesaw mechanism.
The most popular conjectured solution currently is the "seesaw mechanism", where right-handed neutrinos with very large Majorana masses are added. If the right-handed neutrinos are very heavy, they induce a very small mass for the left-handed neutrinos, which is proportional to the reciprocal of the heavy mass.
If it is assumed that the neutrinos interact with the Higgs field with approximately the same strengths as the charged fermions do, the heavy mass should be close to the GUT scale. Because the Standard Model has only one fundamental mass scale, all particle masses must arise in relation to this scale.
There are other varieties of seesaw and there is currently great interest in the so-called low-scale seesaw schemes, such as the inverse seesaw mechanism.
The addition of right-handed neutrinos has the effect of adding new mass scales, unrelated to the mass scale of the Standard Model, hence the observation of heavy right-handed neutrinos would reveal physics beyond the Standard Model. Right-handed neutrinos would help to explain the origin of matter through a mechanism known as leptogenesis.
Other sources.
There are alternative ways to modify the standard model that are similar to the addition of heavy right-handed neutrinos (e.g., the addition of new scalars or fermions in triplet states) and other modifications that are less similar (e.g., neutrino masses from loop effects and/or from suppressed couplings). One example of the last type of models is provided by certain versions supersymmetric extensions of the standard model of fundamental interactions, where R parity is not a symmetry. There, the exchange of supersymmetric particles such as squarks and sleptons can break the lepton number and lead to neutrino masses. These interactions are normally excluded from theories as they come from a class of interactions that lead to unacceptably rapid proton decay if they are all included. These models have little predictive power and are not able to provide a cold dark matter candidate.
Oscillations in the early universe.
During the early universe when particle concentrations and temperatures were high, neutrino oscillations could have behaved differently. Depending on neutrino mixing-angle parameters and masses, a broad spectrum of behavior may arise including vacuum-like neutrino oscillations, smooth evolution, or self-maintained coherence. The physics for this system is non-trivial and involves neutrino oscillations in a dense neutrino gas.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\left| \\nu_i \\right\\rangle = \\sum_{\\alpha} U^*_{\\alpha i} \\left| \\nu_\\alpha \\right\\rangle,"
},
{
"math_id": 1,
"text": " \\left| \\nu_\\alpha \\right\\rangle = \\sum_{i} U_{\\alpha i} \\left| \\nu_i \\right\\rangle,"
},
{
"math_id": 2,
"text": "\\ \\left| \\nu_\\alpha \\right\\rangle\\ "
},
{
"math_id": 3,
"text": "\\ \\left| \\nu_i \\right\\rangle\\ "
},
{
"math_id": 4,
"text": "\\ m_i\\ ,"
},
{
"math_id": 5,
"text": "i = 1, 2, 3,"
},
{
"math_id": 6,
"text": "^*"
},
{
"math_id": 7,
"text": " U_{\\alpha i} "
},
{
"math_id": 8,
"text": "\n\\begin{align}\nU &= \\begin{bmatrix}\nU_{e 1} & U_{e 2} & U_{e 3} \\\\\nU_{\\mu 1} & U_{\\mu 2} & U_{\\mu 3} \\\\\nU_{\\tau 1} & U_{\\tau 2} & U_{\\tau 3}\n\\end{bmatrix} \\\\\n&= \\begin{bmatrix}\n1 & 0 & 0 \\\\\n0 & c_{23} & s_{23} \\\\\n0 & -s_{23} & c_{23}\n\\end{bmatrix}\n\\begin{bmatrix}\nc_{13} & 0 & s_{13} e^{-i\\delta} \\\\\n0 & 1 & 0 \\\\\n-s_{13} e^{i\\delta} & 0 & c_{13}\n\\end{bmatrix}\n\\begin{bmatrix}\nc_{12} & s_{12} & 0 \\\\\n-s_{12} & c_{12} & 0 \\\\\n0 & 0 & 1\n\\end{bmatrix}\n\\begin{bmatrix}\ne^{i\\alpha_1 / 2} & 0 & 0 \\\\\n0 & e^{i\\alpha_2 / 2} & 0 \\\\\n0 & 0 & 1 \\\\\n\\end{bmatrix} \\\\\n&= \\begin{bmatrix}\nc_{12} c_{13} & s_{12} c_{13} & s_{13} e^{-i\\delta} \\\\\n- s_{12} c_{23} - c_{12} s_{23} s_{13} e^{i \\delta} & c_{12} c_{23} - s_{12} s_{23} s_{13} e^{i \\delta} & s_{23} c_{13}\\\\\ns_{12} s_{23} - c_{12} c_{23} s_{13} e^{i \\delta} & - c_{12} s_{23} - s_{12} c_{23} s_{13} e^{i \\delta} & c_{23} c_{13}\n\\end{bmatrix}\n\\begin{bmatrix}\ne^{i\\alpha_1 / 2} & 0 & 0 \\\\\n0 & e^{i\\alpha_2 / 2} & 0 \\\\\n0 & 0 & 1 \\\\\n\\end{bmatrix},\n\\end{align}\n"
},
{
"math_id": 9,
"text": " \\left|\\, \\nu_j \\,\\right\\rangle"
},
{
"math_id": 10,
"text": " \\left|\\, \\nu_j(t) \\,\\right\\rangle = e^{-i\\,\\left(\\, E_j t \\,-\\, \\vec{p}_j \\cdot \\vec{x} \\,\\right) } \\left|\\, \\nu_j(0) \\,\\right\\rangle ~,"
},
{
"math_id": 11,
"text": "(\\,c = 1, \\hbar = 1\\,)~,"
},
{
"math_id": 12,
"text": "~ i^2 \\,\\equiv\\, -1 ~,"
},
{
"math_id": 13,
"text": "E_j"
},
{
"math_id": 14,
"text": "j"
},
{
"math_id": 15,
"text": "t"
},
{
"math_id": 16,
"text": "\\vec{p}_j"
},
{
"math_id": 17,
"text": "\\vec{x}"
},
{
"math_id": 18,
"text": "\\left|\\vec{p}_j\\right| = p_j \\gg m_j ~,"
},
{
"math_id": 19,
"text": "E_j = \\sqrt{\\, p_j^2 + m_j^2 \\;} \\simeq p_j + \\frac{ m_j^2 }{\\, 2\\,p_j \\,} \\approx E + \\frac{ m_j^2 }{\\, 2\\,E \\,} ~,"
},
{
"math_id": 20,
"text": "\\left| \\,\\nu_j(L) \\,\\right\\rangle = e^{-i\\,\\left(\\frac{\\, m_j^2\\,L \\,}{2\\,E} \\right)} \\, \\left| \\, \\nu_j(0) \\, \\right\\rangle ~."
},
{
"math_id": 21,
"text": "P_{\\alpha\\rightarrow\\beta} \\, = \\, \\Bigl|\\, \\left\\langle\\, \\left. \\nu_\\beta \\, \\right| \\, \\nu_\\alpha (L) \\, \\right\\rangle \\,\\Bigr|^2 \\, =\\, \\left|\\, \\sum_j\\, U_{\\alpha j}^*\\, U_{\\beta j}\\,e^{-i\\frac{m_j^2\\,L}{2E}} \\, \\right|^2 ~."
},
{
"math_id": 22,
"text": "\\begin{align}\n P_{\\alpha\\rightarrow\\beta} = \\delta_{\\alpha\\beta}\n &{}- 4\\,\\sum_{j>k} \\,\\operatorname\\mathcal{R_e}\\left\\{\\, U_{\\alpha j}^*\\, U_{\\beta j}\\, U_{\\alpha k}\\, U_{\\beta k}^* \\,\\right\\}\\, \\sin^2 \\left( \\frac{\\Delta_{jk} m^2\\, L}{4E} \\right) \\\\\n &{}+ 2\\,\\sum_{j>k} \\,\\operatorname\\mathcal{I_m}\\left\\{\\, U_{\\alpha j}^*\\, U_{\\beta j}\\, U_{\\alpha k}\\, U_{\\beta k}^* \\,\\right\\}\\, \\sin \\left( \\frac{\\Delta_{jk} m^2\\, L}{2E} \\right) ~,\n\\end{align}"
},
{
"math_id": 23,
"text": "\\Delta_{jk} m^2\\ \\equiv m_j^2 - m_k^2 ~."
},
{
"math_id": 24,
"text": "\\hbar"
},
{
"math_id": 25,
"text": "\n \\frac{\\Delta_{jk} (mc^2)^2 \\, L}{4 \\hbar c\\,E} =\n \\frac{{\\rm GeV}\\, {\\rm fm}}{4 \\hbar c} \\times \\frac{\\Delta_{jk} m^2}{{\\rm eV}^2} \\frac{L}{\\rm km} \\frac{\\rm GeV}{E} \\approx 1.27 \\times \\frac{\\Delta_{jk} m^2}{{\\rm eV}^2} \\frac{L}{\\rm km} \\frac{\\rm GeV}{E} ~,\n"
},
{
"math_id": 26,
"text": "\n A^{(\\alpha\\beta)}_\\mathsf{CP} =\n P(\\nu_\\alpha \\rightarrow \\nu_\\beta) - P(\\bar{\\nu}_\\alpha \\rightarrow \\bar{\\nu}_\\beta) =\n 4\\,\\sum_{j>k}\\,\\operatorname\\mathcal{I_m} \\left\\{\\, U_{\\alpha j}^*\\, U_{\\beta j}\\, U_{\\alpha k}\\, U_{\\beta k}^* \\,\\right\\} \\,\\sin \\left( \\frac{\\Delta_{jk} m^2\\,L}{2E} \\right)\n"
},
{
"math_id": 27,
"text": "\\operatorname\\mathcal{I_m} \\left\\{\\, U_{\\alpha j}^*\\, U_{\\beta j}\\, U_{\\alpha k}\\, U_{\\beta k}^* \\,\\right\\} = J \\, \\sum_{\\gamma,\\ell} \\varepsilon_{\\alpha\\beta\\gamma}\\,\\varepsilon_{jk\\ell} ~,"
},
{
"math_id": 28,
"text": "A^{(\\alpha\\beta)}_\\mathsf{CP} = 16 \\,\n \\sin \\left( \\frac{\\Delta_{21} m^2\\,L}{4E} \\right)\n \\sin \\left( \\frac{\\Delta_{32} m^2\\,L}{4E} \\right) \\sin \\left( \\frac{\\Delta_{31} m^2\\,L}{4E} \\right) \\,J\\, \\sum_{\\gamma} \\,\\varepsilon_{\\alpha\\beta\\gamma}\n"
},
{
"math_id": 29,
"text": "U = \\begin{pmatrix} \\cos\\theta & \\sin\\theta \\\\ -\\sin\\theta & \\cos\\theta \\end{pmatrix}."
},
{
"math_id": 30,
"text": "P_{\\alpha\\rightarrow\\beta, \\alpha\\neq\\beta} = \\sin^2(2\\theta) \\, \\sin^2 \\left(\\frac{\\Delta m^2 L}{4E}\\right) \\quad \\text{ [natural units] .}"
},
{
"math_id": 31,
"text": "P_{\\alpha\\rightarrow\\beta, \\alpha\\neq\\beta} = \\sin^2(2\\theta) \\, \\sin^2 \\left( 1.27\\, \\frac{\\Delta m^2 L}{E}\\, \\frac{\\rm [eV^{2}]\\,[km]}{\\rm [GeV]}\\right) ~."
},
{
"math_id": 32,
"text": "\\tfrac{1}{2}\\tfrac{mg}{L} x^2"
},
{
"math_id": 33,
"text": "\\sqrt{g/L\\;}\\,"
},
{
"math_id": 34,
"text": "\\tfrac{1}{2} k x^2"
},
{
"math_id": 35,
"text": "\\sqrt{k/m\\;}\\,"
},
{
"math_id": 36,
"text": "V = \\frac{m}{2} \\left( \\frac{g}{L_a} x_a^2 + \\frac{g}{L_b} x_b^2 + \\frac{k}{m} (x_b - x_a)^2 \\right)."
},
{
"math_id": 37,
"text": "V = \n \\frac{m}{2} \\begin{pmatrix} x_a & x_b \\end{pmatrix} \\begin{pmatrix}\n \\frac{g}{L_a} + \\frac{k}{m} & -\\frac{k}{m} \\\\\n -\\frac{k}{m} & \\frac{g}{L_b} + \\frac{k}{m}\n \\end{pmatrix} \\begin{pmatrix} x_a \\\\ x_b \\end{pmatrix}.\n"
},
{
"math_id": 38,
"text": "\n \\begin{pmatrix} x_a \\\\ x_b \\end{pmatrix} =\n \\begin{pmatrix}\n \\cos\\theta & \\sin\\theta \\\\\n -\\sin\\theta & \\cos\\theta\n \\end{pmatrix} \\begin{pmatrix} x_1 \\\\ x_2 \\end{pmatrix}\n"
},
{
"math_id": 39,
"text": "V = \\frac{m}{2} \\begin{pmatrix} x_1 \\ x_2 \\end{pmatrix}\n \\begin{pmatrix}\n \\lambda_1 & 0 \\\\\n 0 & \\lambda_2\n \\end{pmatrix} \\begin{pmatrix} x_1 \\\\ x_2 \\end{pmatrix}\n"
},
{
"math_id": 40,
"text": "\\sqrt{\\lambda_1\\,}"
},
{
"math_id": 41,
"text": "\\sqrt{\\lambda_2\\,}"
}
]
| https://en.wikipedia.org/wiki?curid=1328116 |
13284111 | Wu's method of characteristic set | Algorithm for solving systems of polynomial equations
Wenjun Wu's method is an algorithm for solving multivariate polynomial equations introduced in the late 1970s by the Chinese mathematician Wen-Tsun Wu. This method is based on the mathematical concept of characteristic set introduced in the late 1940s by J.F. Ritt. It is fully independent of the Gröbner basis method, introduced by Bruno Buchberger (1965), even if Gröbner bases may be used to compute characteristic sets.
Wu's method is powerful for mechanical theorem proving in elementary geometry, and provides a complete decision process for certain classes of problem. It has been used in research in his laboratory (KLMM, Key Laboratory of Mathematics Mechanization in Chinese Academy of Science) and around the world. The main trends of research on Wu's method concern systems of polynomial equations of positive dimension and differential algebra where Ritt's results have been made effective. Wu's method has been applied in various scientific fields, like biology, computer vision, robot kinematics and especially automatic proofs in geometry.
Informal description.
Wu's method uses polynomial division to solve problems of the form:
formula_0
where "f" is a polynomial equation and "I" is a conjunction of polynomial equations. The algorithm is complete for such problems over the complex domain.
The core idea of the algorithm is that one polynomial can be divided by another to give a remainder. Repeated division results either in the remainder vanishing (in which case the ""I" implies "f"" statement is true), or in an irreducible remainder being left behind (in which case the statement is false).
More specifically, for an ideal "I" in the ring "k"["x"1, ..., "x""n"] over a field "k", a (Ritt) characteristic set "C" of "I" is composed of a set of polynomials in "I", which is in triangular shape: polynomials in "C" have distinct main variables (see the formal definition below). Given a characteristic set "C" of "I", one can decide if a polynomial "f" is zero modulo "I". That is, the membership test is checkable for "I", provided a characteristic set of "I".
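A minimal sketch of this membership test by successive pseudo-division is shown below, assuming SymPy's prem pseudo-remainder function. The triangular set and the test polynomials are toy examples chosen for illustration, and the sketch glosses over the bookkeeping of initials that a full implementation of Wu's method requires.

```python
from sympy import symbols, prem, expand

x1, x2 = symbols('x1 x2')              # variable order x1 < x2

# A toy triangular set: the first polynomial has main variable x1,
# the second has main variable x2 (a characteristic-set-like chain, chosen by hand).
T = [(x1**2 - 2, x1), (x2**2 - x1, x2)]

def reduces_to_zero(f, chain):
    """Successive pseudo-division by the chain, highest main variable first.
    (A complete implementation must also keep track of the initials.)"""
    r = f
    for poly, main_var in reversed(chain):
        r = prem(r, poly, main_var)
    return expand(r) == 0

print(reduces_to_zero(x2**4 - 2, T))   # True:  x2^4 - 2 reduces to zero modulo the chain
print(reduces_to_zero(x2 + 1, T))      # False: a nonzero remainder is left behind
```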
Ritt characteristic set.
A Ritt characteristic set is a finite set of polynomials in triangular form of an ideal. This triangular set satisfies a certain minimality condition with respect to the Ritt ordering, and it preserves many interesting geometrical properties of the ideal. However, it may not be a system of generators of the ideal.
Notation.
Let R be the multivariate polynomial ring "k"["x"1, ..., "x""n"] over a field "k".
The variables are ordered linearly according to their subscript: "x"1 < ... < "x""n".
For a non-constant polynomial "p" in R, the greatest variable effectively present in "p", called its main variable or class, plays a particular role:
"p" can be naturally regarded as a univariate polynomial in its main variable "x""k" with coefficients in "k"["x"1, ..., "x""k"−1].
The degree of p as a univariate polynomial in its main variable is also called its main degree.
Triangular set.
A set "T" of non-constant polynomials is called a triangular set if all polynomials in "T" have distinct main variables. This generalizes triangular systems of linear equations in a natural way.
Ritt ordering.
For two non-constant polynomials "p" and "q", we say "p" is smaller than "q" with respect to Ritt ordering and written as "p" <"r" "q", if one of the following assertions holds:
(1) the main variable of "p" is smaller than the main variable of "q", that is, mvar("p") < mvar("q"),
(2) "p" and "q" have the same main variable, and the main degree of "p" is less than the main degree of "q", that is, mvar("p") = mvar("q") and mdeg("p") < mdeg("q").
In this way, ("k"["x"1, ..., "x""n"],<"r") forms a well partial order. However, the Ritt ordering is not a total order:
there exist polynomials p and q such that neither "p" <"r" "q" nor "p" >"r" "q". In this case, we say that "p" and "q" are not comparable.
The Ritt ordering compares the ranks of "p" and "q". The rank, denoted by rank("p"), of a non-constant polynomial "p" is defined to be its main variable raised to its main degree, mvar("p")mdeg("p"); ranks are compared by comparing first the variables and then, in case of equality of the variables, the degrees.
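The rank and the Ritt ordering on non-constant polynomials can be made concrete in a few lines of code; the following sketch (using SymPy only to read off degrees) compares ranks lexicographically, first by main variable and then by main degree.

```python
from sympy import symbols, Poly

xs = symbols('x1 x2 x3')                 # fixed variable order x1 < x2 < x3

def main_var_index(p):
    """Index of the main variable: the greatest variable occurring in p
    (p is assumed non-constant)."""
    P = Poly(p, *xs)
    return max(i for i, v in enumerate(xs) if P.degree(v) > 0)

def rank(p):
    """(main variable index, main degree); ranks compare lexicographically."""
    i = main_var_index(p)
    return (i, Poly(p, *xs).degree(xs[i]))

def ritt_lower(p, q):
    """True if p <_r q; p and q with equal ranks are not comparable."""
    return rank(p) < rank(q)

x1, x2, x3 = xs
print(ritt_lower(x1**5 + 1, x2 + x1))    # True: smaller main variable
print(ritt_lower(x2**2 + x1, x2**3))     # True: same main variable, lower main degree
print(rank(x3*x1 + 1) == rank(x3 + x2))  # True: equal ranks, hence not comparable
```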
Ritt ordering on triangular sets.
A crucial generalization on Ritt ordering is to compare triangular sets.
Let "T" = { "t"1, ..., "t""u"} and "S" = { "s"1, ..., "s""v"} be two triangular sets
such that polynomials in "T" and "S" are sorted increasingly according to their main variables.
We say "T" is smaller than S w.r.t. Ritt ordering if one of the following assertions holds
Also, there exist triangular sets that are incomparable with respect to the Ritt ordering.
Ritt characteristic set.
Let I be a non-zero ideal of k[x1, ..., xn]. A subset T of I is a Ritt characteristic set of I if one of the following conditions holds:
A polynomial ideal may possess (infinitely) many characteristic sets, since Ritt ordering is a partial order.
Wu characteristic set.
The Ritt–Wu process, first devised by Ritt, subsequently modified by Wu, computes not a Ritt characteristic but an extended one, called Wu characteristic set or ascending chain.
A non-empty subset T of the ideal ⟨F⟩ generated by F is a Wu characteristic set of F if one of the following condition holds
A Wu characteristic set is defined with respect to the set F of polynomials, rather than to the ideal ⟨F⟩ generated by F. Also, it can be shown that a Ritt characteristic set T of ⟨F⟩ is a Wu characteristic set of F. Wu characteristic sets can be computed by Wu's algorithm CHRST-REM, which only requires pseudo-remainder computations; no factorizations are needed.
Wu's characteristic set method has exponential complexity; improvements in computing efficiency through weak chains, regular chains, and saturated chains have been introduced.
Decomposing algebraic varieties.
An application is an algorithm for solving systems of algebraic equations by means of characteristic sets. More precisely, given a finite subset F of polynomials, there is an algorithm to compute characteristic sets "T"1, ..., "T""e" such that:
formula_1
where "W"("T""i") is the difference of "V"("T""i") and "V"("h""i"), here "h""i" is the product of initials of the polynomials in "T""i". | [
{
"math_id": 0,
"text": " \\forall x, y, z, \\dots I(x, y, z, \\dots) \\implies f(x, y, z, \\dots) \\, "
},
{
"math_id": 1,
"text": "V(F) = W(T_1)\\cup \\cdots \\cup W(T_e), "
}
]
| https://en.wikipedia.org/wiki?curid=13284111 |
1328727 | Jean Charles Athanase Peltier | French physicist (1785–1845)
Jean Charles Athanase Peltier (; ; 22 February 1785 – 27 October 1845) was a French physicist. He was originally a watch dealer, but at the age of 30 began experiments and observations in physics.
Peltier was the author of numerous papers in different departments of physics. His name is specially associated with the thermal effects at junctions in a voltaic circuit, the Peltier effect. Peltier introduced the concept of electrostatic induction (1840), based on the modification of the distribution of electric charge in a material under the influence of a second object closest to it and its own electrical charge.
Biography.
Peltier trained as a watchmaker; until his 30s he was a watch dealer. He worked with Abraham Louis Breguet in Paris. Later, he conducted various experiments on electrodynamics and noticed that, when a current flows through a junction of two different conductors, a temperature difference is generated at the junction. In 1836 he published his work, and in 1838 his findings were confirmed by Emil Lenz. Peltier also dealt with topics in atmospheric electricity and meteorology. In 1840, he published a work on the causes of hurricanes.
Peltier's numerous papers are devoted in great part to atmospheric electricity, waterspouts, cyanometry and polarization of sky-light, the temperature of water in the spheroidal state, and the boiling-point at high elevations. There are also a few devoted to curious points of natural history. His name will always be associated with the thermal effects at junctions in a voltaic circuit, a discovery of importance comparable with those of Seebeck and Cumming.
Peltier discovered the calorific effect of electric current passing through the junction of two different metals. This is now called the Peltier effect (or Peltier–Seebeck effect). By switching the direction of current, either heating or cooling may be achieved. Junctions always come in pairs, as the two different metals are joined at two points. Thus heat will be moved from one junction to the other.
Peltier effect.
The "Peltier effect" is the presence of heating or cooling at an electrified junction of two different conductors (1834). His great experimental discovery was the heating or cooling of the junctions in a heterogeneous circuit of metals according to the direction in which an electric current is made to pass round the circuit. This reversible effect is proportional directly to the strength of the current, not to its square, as is the irreversible generation of heat due to resistance in all parts of the circuit. It is found that, if a current pass from an external source through a circuit of two metals, it cools one junction and heats the other. It cools the junction if it be in the same direction as the thermoelectric current which would be caused by directly heating that junction. In other words, the passage of a current from an external source produces in the junctions of the circuit a distribution of temperature which leads to the weakening of the current by the superposition of a thermo-electric current running in the opposite direction.
When an electric current is made to flow through a junction between two conductors (A and B), heat is removed at one junction and released at the other, depending on the direction of the current. To make a typical pump, multiple junctions are created between two plates. One side heats and the other side cools. A heat-dissipation device is attached to the hot side to maintain the cooling effect on the cold side. Typically, the use of the Peltier effect as a heat pump device involves multiple junctions in series, through which a current is driven. Some of the junctions lose heat due to the Peltier effect, while others gain heat. Thermoelectric heat pumps exploit this phenomenon, as do the thermoelectric cooling (Peltier) modules found in refrigerators.
The "Peltier effect" generated at the junction per unit time, formula_0, is equal to
formula_1
where,
formula_2 (formula_3) is the Peltier coefficient of conductor A (conductor B), and
formula_4 is the electric current (from A to B).
"Note:" Total heat generated at the junction is "not" determined by the Peltier effect alone, being influenced by Joule heating and thermal gradient effects.
The Peltier coefficients represent how much heat is carried per unit charge. With charge current continuous across a junction, the associated heat flow will develop a discontinuity if formula_2 and formula_3 are different.
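A back-of-the-envelope sketch of the junction heat flow is given below. The Seebeck coefficients used are hypothetical placeholders (not data for real materials), and the Peltier coefficients are estimated from them via the Thomson relation Π = "S"·"T".

```python
# Minimal sketch of the junction heat flow Q = (Pi_A - Pi_B) * I.
# The Seebeck coefficients below are hypothetical placeholders, not material data;
# the Peltier coefficients are estimated via the Thomson relation Pi = S * T.
T = 300.0                      # absolute temperature, K
S_A, S_B = 200e-6, -200e-6     # hypothetical Seebeck coefficients, V/K
Pi_A, Pi_B = S_A * T, S_B * T  # Peltier coefficients, V (i.e. W/A)

I = 2.0                        # current from conductor A to B, in amperes
Q_peltier = (Pi_A - Pi_B) * I  # heat removed from (or delivered to) the junction, W
print(Q_peltier)               # 0.24 W for these placeholder numbers

# Reversing the current reverses the sign: heating becomes cooling and vice versa.
print((Pi_A - Pi_B) * -I)
```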
The Peltier effect can be considered as the back-action counterpart to the Seebeck effect (analogous to the back-emf in magnetic induction): if a simple thermoelectric circuit is closed then the Seebeck effect will drive a current, which in turn (via the Peltier effect) will always transfer heat from the hot to the cold junction.
The true importance of this "Peltier effect" in the explanation of thermoelectric currents was first clearly pointed out by James Prescott Joule; and Sir William Thomson further extended the subject by showing, both theoretically and experimentally, that there is something closely analogous to the Peltier effect when the heterogeneity is due, not to difference of quality of matter, but to difference of temperature in contiguous portions of the same material. Shortly after Peltier's discovery was published, Lenz used the effect to freeze small quantities of water by the cold developed in a bismuth-antimony junction when a voltaic current was passed through the metals in the order named.
Publications.
"Listed by date"
<templatestyles src="Col-begin/styles.css"/>
"Other"
References and notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\dot{Q}"
},
{
"math_id": 1,
"text": "\\dot{Q} = \\left( \\Pi_\\mathrm{A} - \\Pi_\\mathrm{B} \\right) I,"
},
{
"math_id": 2,
"text": "\\Pi_A"
},
{
"math_id": 3,
"text": "\\Pi_B"
},
{
"math_id": 4,
"text": "I"
}
]
| https://en.wikipedia.org/wiki?curid=1328727 |
13287855 | Infinity sign (disambiguation) | The infinity sign or infinity symbol is commonly typed as formula_0, ∞ or ∞.
Infinity sign may also refer to
See also.
Topics referred to by the same term
<templatestyles src="Dmbox/styles.css" />
This disambiguation page lists articles associated with the title "Infinity sign".
{
"math_id": 0,
"text": "\\infty"
}
]
| https://en.wikipedia.org/wiki?curid=13287855 |
13289313 | Souders–Brown equation | The Souders–Brown equation (named after Mott Souders and George Granger Brown) has been a tool for obtaining the maximum allowable vapor velocity in vapor–liquid separation vessels (variously called "flash drums", "knockout drums", "knockout pots", "compressor suction drums" and "compressor inlet drums"). It has also been used for the same purpose in designing trayed fractionating columns, trayed absorption columns and other vapor–liquid-contacting columns.
A vapor–liquid separator drum is a vertical vessel into which a liquid and vapor mixture (or a flashing liquid) is fed and wherein the liquid is separated by gravity, falls to the bottom of the vessel, and is withdrawn. The vapor travels upward at a design velocity which minimizes the entrainment of any liquid droplets in the vapor as it exits the top of the vessel.
Use.
The diameter of a vapor–liquid separator drum is dictated by the expected volumetric flow rate of vapor and liquid from the drum. The following sizing methodology is based on the assumption that those flow rates are known.
Use a vertical pressure vessel with a length–diameter ratio of about 3 to 4, and size the vessel to provide about 5 minutes of liquid inventory between the normal liquid level and the bottom of the vessel (with the normal liquid level being somewhat below the feed inlet).
Calculate the maximum allowable vapor velocity in the vessel by using the Souders–Brown equation:
formula_0
Then the cross-sectional area of the drum can be found from:
formula_1
And the drum diameter is:
formula_2
The drum should have a vapor outlet at the top, liquid outlet at the bottom, and feed inlet at about the half-full level. At the vapor outlet, provide a de-entraining mesh pad within the drum such that the vapor must pass through that mesh before it can leave the drum. Depending upon how much liquid flow is expected, the liquid outlet line should probably have a liquid level control valve.
As for the mechanical design of the drum (materials of construction, wall thickness, corrosion allowance, etc.) use the same criteria as for any pressure vessel.
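The sizing procedure above can be condensed into a few lines of code. In the sketch below all of the input values — the coefficient "k", the densities, and the flow rates — are illustrative placeholders rather than design data; "v" is the maximum allowable vapor velocity, ρL and ρV are the liquid and vapor densities, and the drum diameter follows from the required cross-sectional area.

```python
import math

# Illustrative inputs only (placeholders, not design data):
k      = 0.107        # m/s, an assumed Souders-Brown coefficient for a mesh-pad drum
rho_L  = 800.0        # liquid density, kg/m^3
rho_V  = 10.0         # vapour density, kg/m^3
V_dot  = 0.5          # vapour volumetric flow, m^3/s
L_dot  = 0.003        # liquid volumetric flow, m^3/s

# Maximum allowable vapour velocity from the Souders-Brown equation:
v = k * math.sqrt((rho_L - rho_V) / rho_V)

# Required cross-sectional area and drum diameter:
A = V_dot / v
D = math.sqrt(4 * A / math.pi)

# Liquid surge volume for about 5 minutes of inventory:
V_liquid = L_dot * 5 * 60

print(f"v = {v:.2f} m/s, A = {A:.3f} m^2, D = {D:.2f} m, holdup = {V_liquid:.2f} m^3")
```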
Recommended values of "k".
The "GPSA Engineering Data Book" recommends the following k values for vertical drums with horizontal mesh pads (at the denoted operating pressures):
GPSA notes:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "v = \\left(k \\right)\\sqrt\\frac{\\rho_L - \\rho_V}{\\rho_V}"
},
{
"math_id": 1,
"text": " A = \\frac{\\dot{V}}{v}"
},
{
"math_id": 2,
"text": "D = \\sqrt\\frac{4A}{\\pi}"
}
]
| https://en.wikipedia.org/wiki?curid=13289313 |
13290844 | Bicentric polygon | In geometry, a bicentric polygon is a tangential polygon (a polygon all of whose sides are tangent to an inner incircle) which is also cyclic — that is, inscribed in an outer circle that passes through each vertex of the polygon. All triangles and all regular polygons are bicentric. On the other hand, a rectangle with unequal sides is not bicentric, because no circle can be tangent to all four sides.
Triangles.
Every triangle is bicentric. In a triangle, the radii "r" and "R" of the incircle and circumcircle respectively are related by the equation
formula_0
where "x" is the distance between the centers of the circles. This is one version of Euler's triangle formula.
Bicentric quadrilaterals.
Not all quadrilaterals are bicentric (having both an incircle and a circumcircle). Given two circles (one within the other) with radii "R" and "r" where formula_1, there exists a convex quadrilateral inscribed in one of them and tangent to the other if and only if their radii satisfy
formula_2
where "x" is the distance between their centers. This condition (and analogous conditions for higher order polygons) is known as Fuss' theorem.
Polygons with n > 4.
A complicated general formula is known for any number "n" of sides for the relation among the circumradius "R", the inradius "r", and the distance "x" between the circumcenter and the incenter. Some of these for specific "n" are:
formula_3
formula_4
formula_5
where formula_6 and formula_7
Regular polygons.
Every regular polygon is bicentric. In a regular polygon, the incircle and the circumcircle are concentric—that is, they share a common center, which is also the center of the regular polygon, so the distance between the incenter and circumcenter is always zero. The radius of the inscribed circle is the apothem (the shortest distance from the center to the boundary of the regular polygon).
For any regular polygon, the relations between the common edge length "a", the radius "r" of the incircle, and the radius "R" of the circumcircle are:
formula_8
For some regular polygons which can be constructed with compass and ruler, the radii "r" and "R" have closed algebraic expressions in terms of the edge length "a", from which the corresponding decimal approximations follow directly (a numerical sketch is given below).
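The general relation above is enough to tabulate such values numerically. The following sketch evaluates the ratios "R"/"a" and "r"/"a" for unit edge length and "n" = 3, …, 8.

```python
import math

def circumradius(n, a):
    return a / (2 * math.sin(math.pi / n))

def inradius(n, a):
    return circumradius(n, a) * math.cos(math.pi / n)   # r = R cos(pi/n)

# Ratios R/a and r/a for unit edge length, n = 3 .. 8:
for n in range(3, 9):
    R, r = circumradius(n, 1.0), inradius(n, 1.0)
    print(n, round(R, 4), round(r, 4))
# e.g. n = 3: R ~ 0.5774, r ~ 0.2887;  n = 4: R ~ 0.7071, r = 0.5;
#      n = 6: R = 1.0,    r ~ 0.8660
```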
Poncelet's porism.
If two circles are the inscribed and circumscribed circles of a particular bicentric "n"-gon, then the same two circles are the inscribed and circumscribed circles of infinitely many bicentric "n"-gons. More precisely,
every tangent line to the inner of the two circles can be extended to a bicentric "n"-gon by placing vertices on the line at the points where it crosses the outer circle, continuing from each vertex along another tangent line, and continuing in the same way until the resulting polygonal chain closes up to an "n"-gon. The fact that it will always do so is implied by Poncelet's closure theorem, which more generally applies for inscribed and circumscribed conics.
Moreover, given a circumcircle and incircle, each diagonal of the variable polygon is tangent to a fixed circle.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{1}{R-x}+\\frac{1}{R+x}=\\frac{1}{r}"
},
{
"math_id": 1,
"text": "R>r"
},
{
"math_id": 2,
"text": "\\frac{1}{(R-x)^2}+\\frac{1}{(R+x)^2}=\\frac{1}{r^2}"
},
{
"math_id": 3,
"text": "n=5: \\quad r(R-x)=(R+x)\\sqrt{(R-r+x)(R-r-x)}+(R+x)\\sqrt{2R(R-r-x)} ,"
},
{
"math_id": 4,
"text": "n=6: \\quad 3(R^2-x^2)^4=4r^2(R^2+x^2)(R^2-x^2)^2+16r^4x^2R^2 ,"
},
{
"math_id": 5,
"text": "n=8: \\quad 16p^4q^4(p^2-1)(q^2-1)=(p^2+q^2-p^2q^2)^4 ,"
},
{
"math_id": 6,
"text": "p=\\tfrac{R+x}{r}"
},
{
"math_id": 7,
"text": "q=\\tfrac{R-x}{r}."
},
{
"math_id": 8,
"text": "R=\\frac{a}{2\\sin \\frac{\\pi}{n}}=\\frac{r}{\\cos \\frac{\\pi}{n}}."
}
]
| https://en.wikipedia.org/wiki?curid=13290844 |
1329114 | Chiral model | Model of mesons in the massless quark limit
In nuclear physics, the chiral model, introduced by Feza Gürsey in 1960, is a phenomenological model describing effective interactions of mesons in the chiral limit (where the masses of the quarks go to zero), but without necessarily mentioning quarks at all. It is a nonlinear sigma model with the principal homogeneous space of a Lie group formula_0 as its target manifold. When the model was originally introduced, this Lie group was the SU("N"), where "N" is the number of quark flavors. The Riemannian metric of the target manifold is given by a positive constant multiplied by the Killing form acting upon the Maurer–Cartan form of SU("N").
The internal global symmetry of this model is formula_1, the left and right copies, respectively; where the left copy acts as the left action upon the target space, and the right copy acts as the right action. Phenomenologically, the left copy represents flavor rotations among the left-handed quarks, while the right copy describes rotations among the right-handed quarks, while these, L and R, are completely independent of each other. The axial pieces of these symmetries are spontaneously broken so that the corresponding scalar fields are the requisite Nambu−Goldstone bosons.
The model was later studied in the two-dimensional case as an integrable system, in particular an integrable field theory. Its integrability was shown by Faddeev and Reshetikhin in 1982 through the quantum inverse scattering method. The two-dimensional principal chiral model exhibits signatures of integrability such as a Lax pair/zero-curvature formulation, an infinite number of symmetries, and an underlying quantum group symmetry (in this case, Yangian symmetry).
This model admits topological solitons called skyrmions.
Departures from exact chiral symmetry are dealt with in chiral perturbation theory.
Mathematical formulation.
On a manifold (considered as the spacetime) M and a choice of compact Lie group G, the field content is a function formula_2. This defines a related field formula_3, a formula_4-valued vector field (really, covector field) which is the Maurer–Cartan form. The principal chiral model is defined by the Lagrangian density
formula_5
where formula_6 is a dimensionless coupling. In differential-geometric language, the field formula_7 is a section of a principal bundle formula_8 with fibres isomorphic to the principal homogeneous space for G (hence why this defines the "principal" chiral model).
Phenomenology.
An outline of the original, 2-flavor model.
The chiral model of Gürsey (1960; also see Gell-Mann and Lévy) is now appreciated to be an effective theory of QCD with two light quarks, "u", and "d". The QCD Lagrangian is approximately invariant under independent global flavor rotations of the left- and right-handed quark fields,
formula_9
where τ denote the Pauli matrices in the flavor space and θ"L", θ"R" are the corresponding rotation angles.
The corresponding symmetry group formula_10 is the chiral group, controlled by the six conserved currents
formula_11
which can equally well be expressed in terms of the vector and axial-vector currents
formula_12
The corresponding conserved charges generate the algebra of the chiral group,
formula_13
with "I=L,R", or, equivalently,
formula_14
Application of these commutation relations to hadronic reactions dominated current algebra calculations in the early seventies of the last century.
At the level of hadrons, pseudoscalar mesons, the ambit of the chiral model, the chiral formula_15 group is spontaneously broken down to formula_16, by the QCD vacuum. That is, it is realized "nonlinearly", in the Nambu–Goldstone mode: The "QV" annihilate the vacuum, but the "QA" do not! This is visualized nicely through a geometrical argument based on the fact that the Lie algebra of formula_10 is isomorphic to that of SO(4). The unbroken subgroup, realized in the linear Wigner–Weyl mode, is formula_17 which is locally isomorphic to SU(2) (V: isospin).
To construct a non-linear realization of SO(4), the representation describing four-dimensional rotations of a vector
formula_18
for an infinitesimal rotation parametrized by six angles
formula_19
is given by
formula_20
where
formula_21
The four real quantities (π, "σ") define the smallest nontrivial chiral multiplet and represent the field content of the linear sigma model.
To switch from the above linear realization of SO(4) to the nonlinear one, we observe that, in fact, only three of the four components of (π, "σ") are independent with respect to four-dimensional rotations. These three independent components
correspond to coordinates on a hypersphere "S"3, where π and "σ" are subjected to the constraint
formula_22
with "F" a (pion decay) constant of dimension mass.
Utilizing this to eliminate "σ" yields the following transformation properties of π under SO(4),
formula_23
The nonlinear terms (shifting π) on the right-hand side of the second equation underlie the nonlinear realization of SO(4). The chiral group formula_24 is realized nonlinearly on the triplet of pions— which, however, still transform linearly under isospin formula_25 rotations parametrized through the angles formula_26 By contrast, the formula_27 represent the nonlinear "shifts" (spontaneous breaking).
Through the spinor map, these four-dimensional rotations of (π, "σ") can also be conveniently written using 2×2 matrix notation by introducing the unitary matrix
formula_28
and requiring the transformation properties of "U" under chiral rotations to be
formula_29
where formula_30
The transition to the nonlinear realization follows,
formula_31
where formula_32 denotes the trace in the flavor space. This is a non-linear sigma model.
Terms involving formula_33 or formula_34 are not independent and can be brought to this form through partial integration.
The constant "F"2/4 is chosen in such a way that the Lagrangian matches the usual free term for massless scalar fields when written in terms of the pions,
formula_35
Alternate Parametrization.
An alternative, equivalent (Gürsey, 1960), parameterization
formula_36
yields a simpler expression for "U",
formula_37
Note the reparameterized π transform under
formula_38
so, then, manifestly identically to the above under isorotations, V; and similarly to the above, as
formula_39
under the broken symmetries, A, the shifts. This simpler expression generalizes readily (Cronin, 1967) to N light quarks, so formula_40
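The equivalence of the exponential form and the closed form for "U" can be checked numerically for SU(2). In the sketch below the constant "F" and the pion field components are arbitrary placeholder numbers, and the matrix exponential is evaluated with SciPy.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices
tau = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

F  = 93.0                           # illustrative constant, same units as the pion field
pi = np.array([30.0, -10.0, 55.0])  # an arbitrary pion field configuration (placeholder)

# U from the exponential parametrisation ...
U_exp = expm(1j * sum(p * t for p, t in zip(pi, tau)) / F)

# ... and from the closed form cos|pi/F| + i (pihat . tau) sin|pi/F|.
norm  = np.linalg.norm(pi) / F
pihat = pi / np.linalg.norm(pi)
U_closed = (np.cos(norm) * np.eye(2)
            + 1j * np.sin(norm) * sum(p * t for p, t in zip(pihat, tau)))

print(np.allclose(U_exp, U_closed))                    # True: the two forms agree
print(np.allclose(U_exp @ U_exp.conj().T, np.eye(2)))  # True: U is unitary
```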
Integrability.
Integrable chiral model.
Introduced by Richard S. Ward, the integrable chiral model or Ward model is described in terms of a matrix-valued field formula_41 and is given by the partial differential equation
formula_42
It has a Lagrangian formulation with the expected kinetic term together with a term which resembles a Wess–Zumino–Witten term. It also has a formulation which is formally identical to the Bogomolny equations but with Lorentz signature. The relation between these formulations can be found in Dunajski (2010).
Many exact solutions are known.
Two-dimensional principal chiral model.
Here the underlying manifold formula_43 is taken to be a Riemann surface, in particular the cylinder formula_44 or plane formula_45, conventionally given "real" coordinates formula_46, where on the cylinder formula_47 is a periodic coordinate. For application to string theory, this cylinder is the world sheet swept out by the closed string.
Global symmetries.
The global symmetries act as internal symmetries on the group-valued field formula_48 as formula_49 and formula_50. The corresponding conserved currents from Noether's theorem are
formula_51
The equations of motion turn out to be equivalent to conservation of the currents,
formula_52
The currents additionally satisfy the flatness condition,
formula_53
and therefore the equations of motion can be formulated entirely in terms of the currents.
Lax formulation.
Consider the worldsheet in light-cone coordinates formula_54. The components of the appropriate Lax matrix are
formula_55
The requirement that formula_56 satisfy the zero-curvature condition for all formula_57 is equivalent to the conservation and flatness of the current formula_58, that is, to the equations of motion of the principal chiral model (PCM).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "G"
},
{
"math_id": 1,
"text": "G_L \\times G_R"
},
{
"math_id": 2,
"text": "U: M \\rightarrow G"
},
{
"math_id": 3,
"text": "j_\\mu = U^{-1}\\partial_\\mu U"
},
{
"math_id": 4,
"text": "\\mathfrak{g}"
},
{
"math_id": 5,
"text": "\\mathcal{L} = \\frac{\\kappa}{2}\\mathrm{tr}(\\partial_\\mu U^{-1} \\partial^\\mu U) = -\\frac{\\kappa}{2}\\mathrm{tr}(j_\\mu j^\\mu),"
},
{
"math_id": 6,
"text": "\\kappa"
},
{
"math_id": 7,
"text": "U"
},
{
"math_id": 8,
"text": "\\pi: P \\rightarrow M"
},
{
"math_id": 9,
"text": "\\begin{cases} q_L \\mapsto q_L'= L q_L = \\exp{\\left(- i {\\boldsymbol{\\theta}}_L \\cdot \\tfrac{\\boldsymbol{\\tau}}{2} \\right)} q_L \\\\ q_R \\mapsto q_R'= R q_R = \\exp{\\left(- i \\boldsymbol{ \\theta}_R \\cdot \\tfrac{\\boldsymbol{\\tau}}{2} \\right)} q_R \\end{cases}"
},
{
"math_id": 10,
"text": "\\text{SU}(2)_L\\times\\text{SU}(2)_R"
},
{
"math_id": 11,
"text": "L_\\mu^i = \\bar q_L \\gamma_\\mu \\tfrac{\\tau^i}{2} q_L, \\qquad R_\\mu^i = \\bar q_R \\gamma_\\mu \\tfrac{\\tau^i}{2} q_R, "
},
{
"math_id": 12,
"text": "V_\\mu^i = L_\\mu^i + R_\\mu^i, \\qquad A_\\mu^i = R_\\mu^i - L_\\mu^i."
},
{
"math_id": 13,
"text": " \\left[ Q_{I}^i, Q_{I}^j \\right] = i \\epsilon^{ijk} Q_I^k \\qquad \\qquad \\left[ Q_{L}^i, Q_{R}^j \\right] = 0,"
},
{
"math_id": 14,
"text": " \\left[ Q_{V}^i, Q_{V}^j \\right] = i \\epsilon^{ijk} Q_V^k, \\qquad \\left[ Q_{A}^i, Q_{A}^j \\right] = i \\epsilon^{ijk} Q_V^k, \\qquad \\left[ Q_{V}^i, Q_{A}^j \\right] = i \\epsilon^{ijk} Q_A^k."
},
{
"math_id": 15,
"text": "\\text{SU}(2)_L \\times \\text{SU}(2)_R"
},
{
"math_id": 16,
"text": "\\text{SU}(2)_V"
},
{
"math_id": 17,
"text": "\\text{SO}(3) \\subset \\text{SO}(4)"
},
{
"math_id": 18,
"text": " \\begin{pmatrix} {\\boldsymbol{ \\pi}} \\\\ \\sigma \\end{pmatrix} \\equiv \\begin{pmatrix} \\pi_1 \\\\ \\pi_2 \\\\ \\pi_3 \\\\ \\sigma \\end{pmatrix},"
},
{
"math_id": 19,
"text": "\\left \\{ \\theta_i^{V,A} \\right \\}, \\qquad i =1, 2, 3,"
},
{
"math_id": 20,
"text": " \\begin{pmatrix} {\\boldsymbol{ \\pi}} \\\\ \\sigma \\end{pmatrix} \\stackrel{SO(4)}{\\longrightarrow} \\begin{pmatrix} {\\boldsymbol{ \\pi}'} \\\\ \\sigma' \\end{pmatrix} = \\left[ \\mathbf{1}_4+ \\sum_{i=1}^3 \\theta_i^V V_i + \\sum_{i=1}^3 \\theta_i^A A_i \\right] \\begin{pmatrix} {\\boldsymbol{ \\pi}} \\\\ \\sigma \\end{pmatrix}"
},
{
"math_id": 21,
"text": " \\sum_{i=1}^3 \\theta_i^V V_i =\\begin{pmatrix}\n0 & -\\theta^V_3 & \\theta^V_2 & 0 \\\\\n\\theta^V_3 & 0 & -\\theta_1^V & 0 \\\\\n-\\theta^V_2 & \\theta_1^V & 0 & 0 \\\\\n0 & 0 & 0 & 0\n\\end{pmatrix}\n\\qquad \\qquad \n\\sum_{i=1}^3 \\theta_i^A A_i = \\begin{pmatrix}\n0 & 0 & 0 & \\theta^A_1 \\\\\n0 & 0 & 0 & \\theta^A_2 \\\\\n0 & 0 & 0 & \\theta^A_3 \\\\\n-\\theta^A_1 & -\\theta_2^A & -\\theta_3^A & 0 \n\\end{pmatrix}."
},
{
"math_id": 22,
"text": "{\\boldsymbol{ \\pi}}^2 + \\sigma^2 = F^2,"
},
{
"math_id": 23,
"text": "\\begin{cases} \\theta^V: \\boldsymbol{\\pi} \\mapsto \\boldsymbol{\\pi}'= \\boldsymbol{\\pi} + \\boldsymbol{\\theta}^V \\times \\boldsymbol{\\pi} \\\\ \\theta^A: \\boldsymbol{\\pi} \\mapsto \\boldsymbol{\\pi}'= \\boldsymbol{ \\pi } + \\boldsymbol{\\theta}^A \\sqrt{ F^2 - \\boldsymbol{ \\pi}^2} \\end{cases} \\qquad \\boldsymbol{\\theta}^{V,A} \\equiv \\left \\{ \\theta^{V,A}_i \\right \\}, \\qquad i =1, 2, 3. "
},
{
"math_id": 24,
"text": "\\text{SU}(2)_L \\times \\text{SU}(2)_R \\simeq \\text{SO}(4)"
},
{
"math_id": 25,
"text": "\\text{SU}(2)_V \\simeq \\text{SO}(3)"
},
{
"math_id": 26,
"text": "\\{ \\boldsymbol{\\theta}_V \\}."
},
{
"math_id": 27,
"text": "\\{ \\boldsymbol{\\theta}_A \\}"
},
{
"math_id": 28,
"text": " U = \\frac{1}{F} \\left( \\sigma \\mathbf{1}_2 + i \\boldsymbol{ \\pi} \\cdot \\boldsymbol{ \\tau} \\right),"
},
{
"math_id": 29,
"text": " U \\longrightarrow U' = L U R^\\dagger,"
},
{
"math_id": 30,
"text": "\\theta_L=\\theta_V- \\theta_A, \\theta_R= \\theta_V+ \\theta_A."
},
{
"math_id": 31,
"text": "U = \\frac{1}{F} \\left( \\sqrt{F^2 - \\boldsymbol{ \\pi}^2} \\mathbf{1}_2 + i \\boldsymbol{ \\pi} \\cdot \\boldsymbol{ \\tau} \\right) , \\qquad \\mathcal{L}_\\pi^{(2)} = \\frac{F^2}{4} \\langle \\partial_\\mu U \\partial^\\mu U^\\dagger \\rangle,"
},
{
"math_id": 32,
"text": " \\langle \\ldots \\rangle"
},
{
"math_id": 33,
"text": "\\textstyle \\partial_\\mu \\partial^\\mu U"
},
{
"math_id": 34,
"text": "\\textstyle \\partial_\\mu \\partial^\\mu U^\\dagger"
},
{
"math_id": 35,
"text": "\\mathcal{L}_\\pi^{(2)} = \\frac{1}{2} \\partial_\\mu \\boldsymbol{\\pi} \\cdot \\partial^\\mu \\boldsymbol{\\pi} + \\frac{1}{2 F^2}\\left( \\partial_\\mu \\boldsymbol{\\pi} \\cdot \\boldsymbol{\\pi} \\right)^2 + \\mathcal{O} ( \\pi^6 )."
},
{
"math_id": 36,
"text": " \\boldsymbol{\\pi}\\mapsto \\boldsymbol{\\pi}~ \\frac{\\sin (|\\pi/F|)}{|\\pi/F|},"
},
{
"math_id": 37,
"text": "U=\\mathbf{1} \\cos |\\pi/F| + i \\widehat{\\pi}\\cdot \\boldsymbol{\\tau} \\sin |\\pi/F| =e^{i~\\boldsymbol{\\tau}\\cdot \\boldsymbol{\\pi}/F}. "
},
{
"math_id": 38,
"text": "L U R^\\dagger=\\exp(i\\boldsymbol{\\theta}_A\\cdot \\boldsymbol{\\tau}/2 -i\\boldsymbol{\\theta}_V\\cdot \\boldsymbol{\\tau}/2 ) \\exp(i\\boldsymbol{\\pi}\\cdot \\boldsymbol{\\tau}/F ) \\exp(i\\boldsymbol{\\theta}_A\\cdot \\boldsymbol{\\tau}/2 +i\\boldsymbol{\\theta}_V\\cdot \\boldsymbol{\\tau}/2 )"
},
{
"math_id": 39,
"text": "\\boldsymbol{\\pi} \\longrightarrow \\boldsymbol{\\pi} +\\boldsymbol{\\theta}_A F+ \\cdots =\\boldsymbol{\\pi} +\\boldsymbol{\\theta}_A F ( |\\pi/F| \\cot |\\pi/F| )"
},
{
"math_id": 40,
"text": "\\textstyle \\text{SU}(N)_L \\times \\text{SU}(N)_R/\\text{SU}(N)_V."
},
{
"math_id": 41,
"text": "J: \\mathbb{R}^3 \\rightarrow U(n)"
},
{
"math_id": 42,
"text": "\\partial_t(J^{-1}J_t)- \\partial_x(J^{-1}J_x) - \\partial_y(J^{-1}J_y) - [J^{-1}J_t, J^{-1}J_y] = 0."
},
{
"math_id": 43,
"text": "M"
},
{
"math_id": 44,
"text": "\\mathbb{C}^*"
},
{
"math_id": 45,
"text": "\\mathbb{C}"
},
{
"math_id": 46,
"text": "\\tau, \\sigma"
},
{
"math_id": 47,
"text": "\\sigma \\sim \\sigma + 2\\pi"
},
{
"math_id": 48,
"text": "g(x)"
},
{
"math_id": 49,
"text": "\\rho_L(g') g(x) = g'g(x)"
},
{
"math_id": 50,
"text": "\\rho_R(g) g(x) = g(x)g'"
},
{
"math_id": 51,
"text": "L_\\alpha = g^{-1}\\partial_\\alpha g, R_\\alpha = \\partial_\\alpha g g^{-1}."
},
{
"math_id": 52,
"text": "\\partial_\\alpha L^\\alpha = \\partial_\\alpha R^\\alpha = 0 \\text{ or in coordinate-free form } d * L = d * R = 0."
},
{
"math_id": 53,
"text": "dL + \\frac{1}{2}[L,L] = 0 \\text{ or in coordinates } \\partial_\\alpha L_\\beta - \\partial_\\beta L_\\alpha + [L_\\alpha, L_\\beta] = 0,"
},
{
"math_id": 54,
"text": "x^\\pm = t \\pm x"
},
{
"math_id": 55,
"text": " L_\\pm(x^+, x^-; \\lambda) = \\frac{j_{\\pm}}{1 \\mp \\lambda}."
},
{
"math_id": 56,
"text": "L_\\pm"
},
{
"math_id": 57,
"text": "\\lambda"
},
{
"math_id": 58,
"text": "j = (j_+, j_-)"
}
]
| https://en.wikipedia.org/wiki?curid=1329114 |
13292403 | A Disappearing Number | Play by Simon McBurney
A Disappearing Number is a 2007 play co-written and devised by the Théâtre de Complicité company and directed and conceived by English playwright Simon McBurney. It was inspired by the collaboration during the 1910s between the pure mathematicians Srinivasa Ramanujan from India, and the Cambridge University don G.H. Hardy.
It was a co-production between the UK-based theatre company Complicite and Theatre Royal, Plymouth, and Ruhrfestspiele, Wiener Festwochen, and the Holland Festival. "A Disappearing Number" premiered in Plymouth in March 2007, toured internationally, and played at The Barbican Centre in Autumn 2007 and 2008 and at Lincoln Center in July 2010. It was directed by Simon McBurney with music by Nitin Sawhney. The production is 110 minutes with no intermission.
The piece was co-devised and written by the cast and company. The cast in order of appearance: Firdous Bamji, Saskia Reeves, David Annen, Paul Bhattacharjee, Shane Shambu, Divya Kasturi and Chetna Pandya.
Plot.
Ramanujan first attracted Hardy's attention by writing him a letter in which he proved that
formula_0
where the notation formula_1 indicates a Ramanujan summation.
Hardy realised that this confusing presentation of the series 1 + 2 + 3 + 4 + ⋯ was an application of the Riemann zeta function formula_2 with formula_3. Ramanujan's work became one of the foundations of bosonic string theory, a precursor of modern string theory.
The play includes live "tabla" playing, which "morphs seductively into pure mathematics", as the "Financial Times" review put it, "especially when … its rhythms shade into chants of number sequences reminiscent of the libretto to Philip Glass's "Einstein on the Beach". One can hear the beauty of the sequences without grasping the rules that govern them."
The play has two strands of narrative and presents strong visual and physical theatre. It interweaves the passionate intellectual relationship between Hardy and the more intuitive Ramanujan, with the present-day story of Ruth, an English maths lecturer, and her husband, Al Cooper, a globe-trotting Indian-American businessman "to illuminate the beauty and the patterns – the mystery – of mathematics." It also explores the nature and spirituality of infinity, and explores several aspects of the Indian diaspora.
Ruth travels to India in Ramanujan's footsteps and eventually dies. Al follows, to get closer to her ghost. Meanwhile, 100 years previously, Ramanujan is travelling in the opposite direction, making the trip to England, where he works with Hardy on maths and contracts tuberculosis. Partition (as a maths concept) is explored, and diverging and converging series in mathematics become a metaphor for the Indian diaspora. | [
{
"math_id": 0,
"text": "1+2+3+\\cdots = -\\frac{1}{12}\\ (\\Re)"
},
{
"math_id": 1,
"text": "(\\Re)"
},
{
"math_id": 2,
"text": "\\zeta(s)"
},
{
"math_id": 3,
"text": "s=-1"
}
]
| https://en.wikipedia.org/wiki?curid=13292403 |
13293546 | Feynman checkerboard | Fermion path integral approach in 1+1 dimensions
The Feynman checkerboard, or relativistic chessboard model, was Richard Feynman's sum-over-paths formulation of the kernel for a free spin-1/2 particle moving in one spatial dimension. It provides a representation of solutions of the Dirac equation in (1+1)-dimensional spacetime as discrete sums.
The model can be visualised by considering relativistic random walks on a two-dimensional spacetime checkerboard. At each discrete timestep formula_0 the particle of mass formula_1 moves a distance formula_2 to the left or right (formula_3 being the speed of light). For such a discrete motion, the Feynman path integral reduces to a sum over the possible paths. Feynman demonstrated that if each "turn" (change of moving from left to right or conversely) of the space–time path is weighted by formula_4 (with formula_5 denoting the reduced Planck constant), in the limit of infinitely small checkerboard squares the sum of all weighted paths yields a propagator that satisfies the one-dimensional Dirac equation. As a result, helicity (the one-dimensional equivalent of spin) is obtained from a simple cellular-automata-type rule.
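A minimal numerical sketch of this sum over paths is given below; the units, mass, step size and number of steps are illustrative assumptions, and the per-turn weight is the one quoted above. The dynamic-programming update computes the same total as summing the weight over every zigzag path explicitly.

```python
import numpy as np

hbar = c = 1.0          # assumed natural units
m = 1.0                 # illustrative particle mass
dt = 0.1                # time step; the lattice spacing is c*dt
turn = -1j * m * c**2 * dt / hbar   # weight of each change of direction, as above

steps = 200
# amp[x, d]: sum of path weights over all checkerboard paths ending at site x
# with direction d (d = 0 for left-movers, 1 for right-movers)
amp = np.zeros((2 * steps + 1, 2), dtype=complex)
amp[steps, 1] = 1.0     # start at the origin, moving right

for _ in range(steps):
    new = np.zeros_like(amp)
    new[1:, 1] += amp[:-1, 1]          # right-mover keeps going right (weight 1)
    new[1:, 1] += turn * amp[:-1, 0]   # left-mover turns and moves right
    new[:-1, 0] += amp[1:, 0]          # left-mover keeps going left (weight 1)
    new[:-1, 0] += turn * amp[1:, 1]   # right-mover turns and moves left
    amp = new

# amp now holds, up to normalisation, the checkerboard path sum for each
# endpoint and final direction, e.g. the amplitude to return to the origin:
print(amp[steps])
```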
The checkerboard model is important because it connects aspects of spin and chirality with propagation in spacetime and is the only sum-over-path formulation in which quantum phase is discrete at the level of the paths, taking only values corresponding to the 4th roots of unity.
History.
Richard Feynman invented the model in the 1940s while developing his spacetime approach to quantum mechanics. He did not publish the result until it appeared in a text on path integrals coauthored by Albert Hibbs in the mid 1960s. The model was not included with the original path-integral article because a suitable generalization to a four-dimensional spacetime had not been found.
One of the first connections between the amplitudes prescribed by Feynman for the Dirac particle in 1+1 dimensions, and the standard interpretation of amplitudes in terms of the kernel, or propagator, was established by Jayant Narlikar in a detailed analysis. The name "Feynman chessboard model" was coined by Harold A. Gersch when he demonstrated its relationship to the one-dimensional Ising model. B. Gaveau et al. discovered a relationship between the model and a stochastic model of the telegraph equations due to Mark Kac through analytic continuation. Ted Jacobson and Lawrence Schulman examined the passage from the relativistic to the non-relativistic path integral. Subsequently, G. N. Ord showed that the chessboard model was embedded in correlations in Kac's original stochastic model and so had a purely classical context, free of formal analytic continuation. In the same year, Louis Kauffman and Pierre Noyes produced a fully discrete version related to bit-string physics, which has been developed into a general approach to discrete physics.
Extensions.
Although Feynman did not live to publish extensions to the chessboard model, it is evident from his archived notes that he was interested in establishing a link between the 4th roots of unity (used as statistical weights in chessboard paths) and his discovery, with John Archibald Wheeler, that antiparticles are equivalent to particles moving backwards in time. His notes contain several sketches of chessboard paths with added spacetime loops. The first extension of the model to explicitly contain such loops was the "spiral model", in which chessboard paths were allowed to spiral in spacetime. Unlike the chessboard case, causality had to be implemented explicitly to avoid divergences; however, with this restriction the Dirac equation emerged as a continuum limit. Subsequently, the roles of zitterbewegung, antiparticles and the Dirac sea in the chessboard model have been elucidated, and the implications for the Schrödinger equation considered through the non-relativistic limit.
Further extensions of the original 2-dimensional spacetime model include features such as improved summation rules and generalized lattices. There has been no consensus on an optimal extension of the chessboard model to a fully four-dimensional spacetime. Two distinct classes of extensions exist: those working with a fixed underlying lattice and those that embed the two-dimensional case in higher dimension. The advantage of the former is that the sum-over-paths is closer to the non-relativistic case; however, the simple picture of a single directionally independent speed of light is lost. In the latter extensions the fixed-speed property is maintained at the expense of variable directions at each step.
References.
| [
{
"math_id": 0,
"text": "\\epsilon"
},
{
"math_id": 1,
"text": "m"
},
{
"math_id": 2,
"text": "\\epsilon c"
},
{
"math_id": 3,
"text": "c"
},
{
"math_id": 4,
"text": "-i \\epsilon mc^2/\\hbar"
},
{
"math_id": 5,
"text": "\\hbar"
}
]
| https://en.wikipedia.org/wiki?curid=13293546 |
13297769 | Whipple formulae | In the theory of special functions, Whipple's transformation for Legendre functions, named after Francis John Welsh Whipple, arise from a general expression, concerning associated Legendre functions. These formulae have been presented previously in terms of a viewpoint aimed at spherical harmonics, now that we view the equations in terms of toroidal coordinates, whole new symmetries of Legendre functions arise.
For associated Legendre functions of the first and second kind,
formula_0
and
formula_1
These expressions are valid for all parameters formula_2 and formula_3. By shifting the complex degree and order in an appropriate fashion, we obtain Whipple formulae for general complex index interchange of general associated Legendre functions of the first and second kind. These are given by
formula_4
and
formula_5
Note that these formulae are well-behaved for all values of the degree and order, except for those with integer values. However, if we examine these formulae for toroidal harmonics, i.e. where the degree is half-integer, the order is an integer, and the argument is positive and greater than unity, one obtains
formula_6
and
formula_7.
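The passage from the general relations to these toroidal forms rests on the substitution "z" = coth "η"; only the change of argument and prefactor is spelled out in the short worked step below, with the accompanying shifts of degree and order taken as stated above.

```latex
z=\coth\eta \ (\eta>0)\quad\Longrightarrow\quad
z^{2}-1=\frac{1}{\sinh^{2}\eta},\qquad
\frac{z}{\sqrt{z^{2}-1}}=\cosh\eta,\qquad
(z^{2}-1)^{1/4}=\frac{1}{\sqrt{\sinh\eta}} ,
```

so a Legendre function of argument coth "η" on one side is paired with one of argument cosh "η" on the other, and the fourth root of "z"2 − 1 accounts for the square root of sinh "η" appearing in the toroidal formulae.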
These are the Whipple formulae for toroidal harmonics. They show an important property of toroidal harmonics under index (the integers associated with the order and the degree) interchange. | [
{
"math_id": 0,
"text": "P_{-\\mu-\\frac12}^{-\\nu-\\frac12}\\biggl(\\frac{z}{\\sqrt{z^2-1}}\\biggr)=\n\\frac{(z^2-1)^{1/4}e^{-i\\mu\\pi} Q_\\nu^\\mu(z)}{(\\pi/2)^{1/2}\\Gamma(\\nu+\\mu+1)}\n"
},
{
"math_id": 1,
"text": "Q_{-\\mu-\\frac12}^{-\\nu-\\frac12}\\biggl(\\frac{z}{\\sqrt{z^2-1}}\\biggr)=\n-i(\\pi/2)^{1/2}\\Gamma(-\\nu-\\mu)(z^2-1)^{1/4}e^{-i\\nu\\pi} P_\\nu^\\mu(z).\n"
},
{
"math_id": 2,
"text": "\\nu, \\mu,"
},
{
"math_id": 3,
"text": "z"
},
{
"math_id": 4,
"text": "\nP_{\\nu-\\frac12}^\\mu(z)=\\frac{\\sqrt{2}\\Gamma(\\mu-\\nu+\\frac12)}{\\pi^{3/2}(z^2-1)^{1/4}}\\biggl[\n\\pi\\sin\\mu\\pi P_{\\mu-\\frac12}^\\nu\\biggl(\\frac{z}{\\sqrt{z^2-1}}\\biggr)+\\cos\\pi(\\nu+\\mu)e^{-i\\nu\\pi}Q_{\\mu-\\frac12}^\\nu\\biggl(\\frac{z}{\\sqrt{z^2-1}}\\biggr)\\biggr]\n"
},
{
"math_id": 5,
"text": "\nQ_{\\nu-\\frac12}^\\mu(z)=\\frac{e^{i\\mu\\pi}\\Gamma(\\mu-\\nu+\\frac12)(\\pi/2)^{1/2}}{(z^2-1)^{1/4}}\\biggl[\nP_{\\mu-\\frac12}^\\nu\\biggl(\\frac{z}{\\sqrt{z^2-1}}\\biggr)-\\frac{2}{\\pi}e^{-i\\nu\\pi}\\sin\\nu\\pi Q_{\\mu-\\frac12}^\\nu\\biggl(\\frac{z}{\\sqrt{z^2-1}}\\biggr)\\biggr].\n"
},
{
"math_id": 6,
"text": "\nP_{m-\\frac12}^n(\\cosh\\eta)=\\frac{(-1)^m}{\\Gamma(m-n+\\frac12)}\\sqrt{\\frac{2}{\\pi\\sinh\\eta}}Q_{n-\\frac12}^m(\\coth\\eta)\n"
},
{
"math_id": 7,
"text": "\nQ_{m-\\frac12}^n(\\cosh\\eta)=\\frac{(-1)^m\\pi}{\\Gamma(m-n+\\frac12)}\\sqrt{\\frac{\\pi}{2\\sinh\\eta}}P_{n-\\frac12}^m(\\coth\\eta)\n"
}
]
| https://en.wikipedia.org/wiki?curid=13297769 |
133017 | Second law of thermodynamics | Physical law for entropy and heat
The second law of thermodynamics is a physical law based on universal empirical observation concerning heat and energy interconversions. A simple statement of the law is that heat always flows spontaneously from hotter to colder regions of matter (or 'downhill' in terms of the temperature gradient). Another statement is: "Not all heat can be converted into work in a cyclic process."
The second law of thermodynamics establishes the concept of entropy as a physical property of a thermodynamic system. It predicts whether processes are forbidden despite obeying the requirement of conservation of energy as expressed in the first law of thermodynamics and provides necessary criteria for spontaneous processes. For example, the first law allows the process of a cup falling off a table and breaking on the floor, as well as allowing the reverse process of the cup fragments coming back together and 'jumping' back onto the table, while the second law allows the former and denies the latter. The second law may be formulated by the observation that the entropy of isolated systems left to spontaneous evolution cannot decrease, as they always tend toward a state of thermodynamic equilibrium where the entropy is highest at the given internal energy. An increase in the combined entropy of system and surroundings accounts for the irreversibility of natural processes, often referred to in the concept of the arrow of time.
Historically, the second law was an empirical finding that was accepted as an axiom of thermodynamic theory. Statistical mechanics provides a microscopic explanation of the law in terms of probability distributions of the states of large assemblies of atoms or molecules. The second law has been expressed in many ways. Its first formulation, which preceded the proper definition of entropy and was based on caloric theory, is Carnot's theorem, formulated by the French scientist Sadi Carnot, who in 1824 showed that the efficiency of conversion of heat to work in a heat engine has an upper limit. The first rigorous definition of the second law based on the concept of entropy came from German scientist Rudolf Clausius in the 1850s and included his statement that heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time.
The second law of thermodynamics allows the definition of the concept of thermodynamic temperature, but this has been formally delegated to the zeroth law of thermodynamics.
Introduction.
The first law of thermodynamics provides the definition of the internal energy of a thermodynamic system, and expresses its change for a closed system in terms of work and heat. It can be linked to the law of conservation of energy. Conceptually, the first law describes the fundamental principle that systems do not consume or 'use up' energy, that energy is neither created nor destroyed, but is simply converted from one form to another.
The second law is concerned with the direction of natural processes. It asserts that a natural process runs only in one sense, and is not reversible. That is, the state of a natural system itself can be reversed, but not without increasing the entropy of the system's surroundings; the state of the system and the state of its surroundings cannot both be fully reversed together without implying the destruction of entropy.
For example, when a path for conduction or radiation is made available, heat always flows spontaneously from a hotter to a colder body. Such phenomena are accounted for in terms of entropy change. A heat pump can reverse this heat flow, but the reversal process and the original process, both cause entropy production, thereby increasing the entropy of the system's surroundings. If an isolated system containing distinct subsystems is held initially in internal thermodynamic equilibrium by internal partitioning by impermeable walls between the subsystems, and then some operation makes the walls more permeable, then the system spontaneously evolves to reach a final new internal thermodynamic equilibrium, and its total entropy, formula_0, increases.
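A tiny numerical illustration of this statement, with arbitrary illustrative temperatures and an arbitrary amount of heat, shows the entropy bookkeeping explicitly:

```python
# Entropy bookkeeping for a small heat transfer between two large bodies held
# at (approximately) fixed temperatures.  All values are illustrative.
Q = 1.0         # joules transferred
T_hot = 400.0   # K, body losing heat
T_cold = 300.0  # K, body gaining heat

dS_hot = -Q / T_hot          # entropy lost by the hot body
dS_cold = +Q / T_cold        # entropy gained by the cold body
dS_total = dS_hot + dS_cold  # positive whenever T_hot > T_cold
print(dS_hot, dS_cold, dS_total)   # -0.0025, 0.00333..., +0.000833... J/K
```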
In a reversible or quasi-static, idealized process of transfer of energy as heat to a closed thermodynamic system of interest, (which allows the entry or exit of energy – but not transfer of matter), from an auxiliary thermodynamic system, an infinitesimal increment (formula_1) in the entropy of the system of interest is defined to result from an infinitesimal transfer of heat (formula_2) to the system of interest, divided by the common thermodynamic temperature formula_3 of the system of interest and the auxiliary thermodynamic system:
formula_4
Different notations are used for an infinitesimal amount of heat formula_5 and infinitesimal change of entropy formula_6 because entropy is a function of state, while heat, like work, is not.
For an actually possible infinitesimal process without exchange of mass with the surroundings, the second law requires that the increment in system entropy fulfills the inequality
formula_7
This is because a general process for this case (no mass exchange between the system and its surroundings) may include work being done on the system by its surroundings, which can have frictional or viscous effects inside the system, because a chemical reaction may be in progress, or because heat transfer actually occurs only irreversibly, driven by a finite difference between the system temperature ("T") and the temperature of the surroundings ("T"surr).
The equality still applies for pure heat flow (only heat flow, no change in chemical composition and mass),
formula_8
which is the basis of the accurate determination of the absolute entropy of pure substances from measured heat capacity curves and entropy changes at phase transitions, i.e. by calorimetry.
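A sketch of how such a calorimetric determination works in practice is given below; the heat-capacity curve, melting point and enthalpy of fusion are made-up stand-ins for measured data, not values for any real substance:

```python
import numpy as np

def trapz(y, x):
    # simple trapezoid rule, avoiding dependence on a particular numpy/scipy API
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Hypothetical calorimetric data for a pure substance (illustrative numbers only):
T = np.linspace(10.0, 298.15, 2000)     # K
Cp = 0.1 * T                            # J/(mol K), stand-in heat-capacity curve
T_fus, dH_fus = 200.0, 5000.0           # melting point (K), enthalpy of fusion (J/mol)

solid = T <= T_fus
S_abs = (trapz(Cp[solid] / T[solid], T[solid])        # solid branch of the Cp/T integral
         + dH_fus / T_fus                             # entropy change at the phase transition
         + trapz(Cp[~solid] / T[~solid], T[~solid]))  # liquid branch
print(S_abs, "J/(mol K)")   # absolute entropy at 298.15 K under these assumptions
```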
Introducing a set of internal variables formula_9 to describe the deviation of a thermodynamic system from a chemical equilibrium state in physical equilibrium (with the required well-defined uniform pressure "P" and temperature "T"), one can record the equality
formula_10
The second term represents work of internal variables that can be perturbed by external influences, but the system cannot perform any positive work via internal variables. This statement introduces the impossibility of the reversion of evolution of the thermodynamic system in time and can be considered as a formulation of "the second principle of thermodynamics" – the formulation, which is, of course, equivalent to the formulation of the principle in terms of entropy.
The zeroth law of thermodynamics in its usual short statement allows recognition that two bodies in a relation of thermal equilibrium have the same temperature, especially that a test body has the same temperature as a reference thermometric body. For a body in thermal equilibrium with another, there are indefinitely many empirical temperature scales, in general respectively depending on the properties of a particular reference thermometric body. The second law allows a distinguished temperature scale, which defines an absolute, thermodynamic temperature, independent of the properties of any particular reference thermometric body.
Various statements of the law.
The second law of thermodynamics may be expressed in many specific ways, the most prominent classical statements being the statement by Rudolf Clausius (1854), the statement by Lord Kelvin (1851), and the statement in axiomatic thermodynamics by Constantin Carathéodory (1909). These statements cast the law in general physical terms citing the impossibility of certain processes. The Clausius and the Kelvin statements have been shown to be equivalent.
Carnot's principle.
The historical origin of the second law of thermodynamics was in Sadi Carnot's theoretical analysis of the flow of heat in steam engines (1824). The centerpiece of that analysis, now known as a Carnot engine, is an ideal heat engine fictively operated in the limiting mode of extreme slowness known as quasi-static, so that the heat and work transfers are between subsystems that are always in their own internal states of thermodynamic equilibrium. It represents the theoretical maximum efficiency of a heat engine operating between any two given thermal or heat reservoirs at different temperatures. Carnot's principle was recognized by Carnot at a time when the caloric theory represented the dominant understanding of the nature of heat, before the recognition of the first law of thermodynamics, and before the mathematical expression of the concept of entropy. Interpreted in the light of the first law, Carnot's analysis is physically equivalent to the second law of thermodynamics, and remains valid today. Some samples from his book are:
..."wherever there exists a difference of temperature, motive power can be produced."
The production of motive power is then due in steam engines not to an actual consumption of caloric, but "to its transportation from a warm body to a cold body ..."
"The motive power of heat is independent of the agents employed to realize it; its quantity is fixed solely by the temperatures of the bodies between which is effected, finally, the transfer of caloric."
In modern terms, Carnot's principle may be stated more precisely:
The efficiency of a quasi-static or reversible Carnot cycle depends only on the temperatures of the two heat reservoirs, and is the same, whatever the working substance. A Carnot engine operated in this way is the most efficient possible heat engine using those two temperatures.
Clausius statement.
The German scientist Rudolf Clausius laid the foundation for the second law of thermodynamics in 1850 by examining the relation between heat transfer and work. His formulation of the second law, which was published in German in 1854, is known as the "Clausius statement":
Heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time.
The statement by Clausius uses the concept of 'passage of heat'. As is usual in thermodynamic discussions, this means 'net transfer of energy as heat', and does not refer to contributory transfers one way and the other.
Heat cannot spontaneously flow from cold regions to hot regions without external work being performed on the system, which is evident from ordinary experience of refrigeration, for example. In a refrigerator, heat is transferred from cold to hot, but only when forced by an external agent, the refrigeration system.
Kelvin statements.
Lord Kelvin expressed the second law in several wordings.
It is impossible for a self-acting machine, unaided by any external agency, to convey heat from one body to another at a higher temperature.
It is impossible, by means of inanimate material agency, to derive mechanical effect from any portion of matter by cooling it below the temperature of the coldest of the surrounding objects.
Equivalence of the Clausius and the Kelvin statements.
Suppose there is an engine violating the Kelvin statement: i.e., one that drains heat and converts it completely into work in a cyclic fashion without any other result. Now pair it with a reversed Carnot engine. The efficiency of a normal heat engine is "η" and so the efficiency of the reversed heat engine is 1/"η". The net and sole effect of the combined pair of engines is to transfer heat formula_11 from the cooler reservoir to the hotter one, which violates the Clausius statement. This is a consequence of the first law of thermodynamics, since the total system's energy must remain the same: formula_12, and therefore formula_13, where (1) the sign convention of heat is used in which heat entering into (leaving from) an engine is positive (negative) and (2) formula_14 is obtained from the definition of the efficiency of the engine when the engine operation is not reversed. Thus a violation of the Kelvin statement implies a violation of the Clausius statement, i.e. the Clausius statement implies the Kelvin statement. We can prove in a similar manner that the Kelvin statement implies the Clausius statement, and hence the two are equivalent.
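A small arithmetic sketch of this bookkeeping, with an arbitrary efficiency and an arbitrary amount of work, makes the net heat transfer explicit:

```python
# Pair a hypothetical Kelvin-violating engine with a reversed Carnot engine.
# The efficiency and heat values are illustrative numbers only.
eta = 0.4            # efficiency of the forward Carnot engine
W = 100.0            # J of work produced by the Kelvin-violating engine
q_in = W             # it converts the drained heat completely into work

# The reversed Carnot engine (heat pump) driven by W delivers W/eta to the
# hot reservoir and draws the difference from the cold one.
q_to_hot = W / eta            # 250 J delivered to the hot reservoir
q_from_cold = q_to_hot - W    # 150 J drawn from the cold reservoir

# Net effect on the hot reservoir: gains q_to_hot, loses q_in.
net_hot = q_to_hot - q_in     # +150 J
print(net_hot, q_from_cold)   # equal amounts: heat moved cold -> hot with no net work
```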
Planck's proposition.
Planck offered the following proposition as derived directly from experience. This is sometimes regarded as his statement of the second law, but he regarded it as a starting point for the derivation of the second law.
It is impossible to construct an engine which will work in a complete cycle, and produce no effect except the production of work and cooling of a heat reservoir.
Relation between Kelvin's statement and Planck's proposition.
It is almost customary in textbooks to speak of the "Kelvin–Planck statement" of the law, as for example in the text by ter Haar and Wergeland. This version of the second law, also known as the heat engine statement, states that
It is impossible to devise a cyclically operating device, the sole effect of which is to absorb energy in the form of heat from a single thermal reservoir and to deliver an equivalent amount of work.
Planck's statement.
Max Planck stated the second law as follows.
Every process occurring in nature proceeds in the sense in which the sum of the entropies of all bodies taking part in the process is increased. In the limit, i.e. for reversible processes, the sum of the entropies remains unchanged.
Rather like Planck's statement is that of George Uhlenbeck and G. W. Ford for "irreversible phenomena".
... in an irreversible or spontaneous change from one equilibrium state to another (as for example the equalization of temperature of two bodies A and B, when brought in contact) the entropy always increases.
Principle of Carathéodory.
Constantin Carathéodory formulated thermodynamics on a purely mathematical axiomatic foundation. His statement of the second law is known as the Principle of Carathéodory, which may be formulated as follows:
In every neighborhood of any state S of an adiabatically enclosed system there are states inaccessible from S.
With this formulation, he described the concept of adiabatic accessibility for the first time and provided the foundation for a new subfield of classical thermodynamics, often called geometrical thermodynamics. It follows from Carathéodory's principle that the quantity of energy quasi-statically transferred as heat is a holonomic process function, in other words, formula_15.
Though it is almost customary in textbooks to say that Carathéodory's principle expresses the second law and to treat it as equivalent to the Clausius or to the Kelvin-Planck statements, such is not the case. To get all the content of the second law, Carathéodory's principle needs to be supplemented by Planck's principle, that isochoric work always increases the internal energy of a closed system that was initially in its own internal thermodynamic equilibrium.
Planck's principle.
In 1926, Max Planck wrote an important paper on the basics of thermodynamics. He indicated the principle
The internal energy of a closed system is increased by an adiabatic process, throughout the duration of which, the volume of the system remains constant.
This formulation does not mention heat and does not mention temperature, nor even entropy, and does not necessarily implicitly rely on those concepts, but it implies the content of the second law. A closely related statement is that "Frictional pressure never does positive work." Planck wrote: "The production of heat by friction is irreversible."
Not mentioning entropy, this principle of Planck is stated in physical terms. It is very closely related to the Kelvin statement given just above. It is relevant that for a system at constant volume and mole numbers, the entropy is a monotonic function of the internal energy. Nevertheless, this principle of Planck is not actually Planck's preferred statement of the second law, which is quoted above, in a previous sub-section of the present section of this present article, and relies on the concept of entropy.
A statement that in a sense is complementary to Planck's principle is made by Claus Borgnakke and Richard E. Sonntag. They do not offer it as a full statement of the second law:
... there is only one way in which the entropy of a [closed] system can be decreased, and that is to transfer heat from the system.
Differing from Planck's just foregoing principle, this one is explicitly in terms of entropy change. Removal of matter from a system can also decrease its entropy.
Relating the second law to the definition of temperature.
The second law has been shown to be equivalent to the internal energy "U" defined as a convex function of the other extensive properties of the system. That is, when a system is described by stating its internal energy "U", an extensive variable, as a function of its entropy "S", volume "V", and mol number "N", i.e. "U" = "U"("S", "V", "N"), then the temperature is equal to the partial derivative of the internal energy with respect to the entropy (essentially equivalent to the first "TdS" equation for "V" and "N" held constant):
formula_16
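A small symbolic sketch of this definition for a monatomic-ideal-gas-like fundamental relation "U"("S", "V", "N") is shown below; the specific functional form and the constant are assumptions of the sketch, chosen so that "U" is convex in "S" as the statement requires:

```python
import sympy as sp

S, V, N, k, C = sp.symbols('S V N k C', positive=True)

# A fundamental relation of Sackur-Tetrode type; the constant C absorbs the
# molecular details, and U is convex in S.
U = C * N**sp.Rational(5, 3) * V**sp.Rational(-2, 3) * sp.exp(2*S/(3*N*k))

T = sp.diff(U, S)                      # T = (dU/dS) at fixed V and N
print(sp.simplify(T - 2*U/(3*N*k)))    # 0, i.e. U = (3/2) N k T, the ideal-gas result
```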
Second law statements, such as the Clausius inequality, involving radiative fluxes.
The Clausius inequality, as well as some other statements of the second law, must be re-stated to have general applicability for all forms of heat transfer, i.e. scenarios involving radiative fluxes. For example, the integrand (đQ/T) of the Clausius expression applies to heat conduction and convection, and the case of ideal infinitesimal blackbody radiation (BR) transfer, but does not apply to most radiative transfer scenarios and in some cases has no physical meaning whatsoever. Consequently, the Clausius inequality was re-stated so that it is applicable to cycles with processes involving any form of heat transfer. The entropy transfer with radiative fluxes (formula_17) is taken separately from that due to heat transfer by conduction and convection (formula_18), where the temperature is evaluated at the system boundary where the heat transfer occurs. The modified Clausius inequality, for all heat transfer scenarios, can then be expressed as,
formula_19
In a nutshell, the Clausius inequality is saying that when a cycle is completed, the change in the state property S will be zero, so the entropy that was produced during the cycle must have transferred out of the system by heat transfer. The formula_20 (or đ) indicates a path dependent integration.
Due to the inherent emission of radiation from all matter, most entropy flux calculations involve incident, reflected and emitted radiative fluxes. The energy and entropy of unpolarized blackbody thermal radiation are calculated using the spectral energy and entropy radiance expressions derived by Max Planck using equilibrium statistical mechanics,
formula_21
formula_22
where "c" is the speed of light or (2.9979)108 m/s", k" is Boltzmann's constant or (1.38)10−23 J/K", h" is Planck's constant or (6.626)10-34 J s, v is frequency (s−1), and the quantities "K"v and "L"v are the energy and entropy fluxes per unit frequency, area, and solid angle. In deriving this blackbody spectral entropy radiance, with the goal of deriving the blackbody energy formula, Planck postulated that the energy of a photon was quantized (partly to simplify the mathematics), thereby starting quantum theory.
A non-equilibrium statistical mechanics approach has also been used to obtain the same result as Planck, indicating it has wider significance and represents a non-equilibrium entropy. A plot of "K"v versus frequency (v) for various values of temperature ("T)" gives a family of blackbody radiation energy spectra, and likewise for the entropy spectra. For non-blackbody radiation (NBR) emission fluxes, the spectral entropy radiance "L"v is found by substituting "K"v spectral energy radiance data into the "L"v expression (noting that emitted and reflected entropy fluxes are, in general, not independent). For the emission of NBR, including graybody radiation (GR), the resultant emitted entropy flux, or radiance "L", has a higher ratio of entropy-to-energy ("L/K"), than that of BR. That is, the entropy flux of NBR emission is farther removed from the conduction and convection "q"/"T" result, than that for BR emission. This observation is consistent with Max Planck's blackbody radiation energy and entropy formulas and is consistent with the fact that blackbody radiation emission represents the maximum emission of entropy for all materials with the same temperature, as well as the maximum entropy emission for all radiation with the same energy radiance.
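A numerical sketch of the blackbody case is given below. It uses the standard textbook form of Planck's spectral energy and entropy radiances written in terms of the mode occupation; this form, the temperature and the integration range are assumptions of the sketch rather than a reproduction of formula_21 and formula_22. The hemispherical energy flux comes out as σ"T"4 and the entropy flux as (4/3)σ"T"3:

```python
import numpy as np
from scipy.integrate import quad

h, c, k, sigma = 6.626e-34, 2.9979e8, 1.380649e-23, 5.670374e-8
T = 300.0   # K, illustrative temperature

def n(x):                       # mean photon occupation, with x = h*nu/(k*T)
    return 1.0 / np.expm1(x)

def energy_integrand(x):        # proportional to the spectral energy radiance K_nu
    return x**3 * n(x)

def entropy_integrand(x):       # proportional to the spectral entropy radiance L_nu
    occ = n(x)
    return x**2 * ((1 + occ) * np.log(1 + occ) - occ * np.log(occ))

pref_E = 2 * np.pi * (k*T)**4 / (h**3 * c**2)    # hemispherical-flux prefactors
pref_S = 2 * np.pi * k * (k*T)**3 / (h**3 * c**2)

E_flux = pref_E * quad(energy_integrand, 1e-9, 60)[0]   # ~ sigma*T**4
S_flux = pref_S * quad(entropy_integrand, 1e-9, 60)[0]  # ~ (4/3)*sigma*T**3
print(E_flux / (sigma * T**4), S_flux * T / E_flux)     # ~1.0 and ~4/3
```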
Generalized conceptual statement of the second law principle.
Second law analysis is valuable in scientific and engineering analysis in that it provides a number of benefits over energy analysis alone, including the basis for determining energy quality (exergy content), understanding fundamental physical phenomena, and improving performance evaluation and optimization. As a result, a conceptual statement of the principle is very useful in engineering analysis. Thermodynamic systems can be categorized by the four combinations of either entropy (S) up or down, and uniformity (Y) - between system and its environment - up or down. This ‘special' category of processes, category IV, is characterized by movement in the direction of low disorder and low uniformity, counteracting the second law tendency towards uniformity and disorder.
The second law can be conceptually stated as follows: Matter and energy have the tendency to reach a state of uniformity or internal and external equilibrium, a state of maximum disorder (entropy). Real non-equilibrium processes always produce entropy, causing increased disorder in the universe, while idealized reversible processes produce no entropy and no process is known to exist that destroys entropy. The tendency of a system to approach uniformity may be counteracted, and the system may become more ordered or complex, by the combination of two things, a work or exergy source and some form of instruction or intelligence. Where ‘exergy’ is the thermal, mechanical, electric or chemical work potential of an energy source or flow, and ‘instruction or intelligence’, although subjective, is in the context of the set of category IV processes.
Consider a category IV example of robotic manufacturing and assembly of vehicles in a factory. The robotic machinery requires electrical work input and instructions, but when completed, the manufactured products have less uniformity with their surroundings, or more complexity (higher order) relative to the raw materials they were made from. Thus, system entropy or disorder decreases while the tendency towards uniformity between the system and its environment is counteracted. In this example, the instructions, as well as the source of work may be internal or external to the system, and they may or may not cross the system boundary. To illustrate, the instructions may be pre-coded and the electrical work may be stored in an energy storage system on-site. Alternatively, the control of the machinery may be by remote operation over a communications network, while the electric work is supplied to the factory from the local electric grid. In addition, humans may directly play, in whole or in part, the role that the robotic machinery plays in manufacturing. In this case, instructions may be involved, but intelligence is either directly responsible, or indirectly responsible, for the direction or application of work in such a way as to counteract the tendency towards disorder and uniformity.
There are also situations where the entropy spontaneously decreases by means of energy and entropy transfer. When thermodynamic constraints are not present, spontaneously energy or mass, as well as accompanying entropy, may be transferred out of a system in a progress to reach external equilibrium or uniformity in intensive properties of the system with its surroundings. This occurs spontaneously because the energy or mass transferred from the system to its surroundings results in a higher entropy in the surroundings, that is, it results in higher overall entropy of the system plus its surroundings. Note that this transfer of entropy requires dis-equilibrium in properties, such as a temperature difference. One example of this is the cooling crystallization of water that can occur when the system's surroundings are below freezing temperatures. Unconstrained heat transfer can spontaneously occur, leading to water molecules freezing into a crystallized structure of reduced disorder (sticking together in a certain order due to molecular attraction). The entropy of the system decreases, but the system approaches uniformity with its surroundings (category III).
On the other hand, consider the refrigeration of water in a warm environment. Due to refrigeration, as heat is extracted from the water, the temperature and entropy of the water decreases, as the system moves further away from uniformity with its warm surroundings or environment (category IV). The main point, take-away, is that refrigeration not only requires a source of work, it requires designed equipment, as well as pre-coded or direct operational intelligence or instructions to achieve the desired refrigeration effect.
Corollaries.
Perpetual motion of the second kind.
Before the establishment of the second law, many people who were interested in inventing a perpetual motion machine had tried to circumvent the restrictions of first law of thermodynamics by extracting the massive internal energy of the environment as the power of the machine. Such a machine is called a "perpetual motion machine of the second kind". The second law declared the impossibility of such machines.
Carnot's theorem.
Carnot's theorem (1824) is a principle that limits the maximum efficiency for any possible engine. The efficiency solely depends on the temperature difference between the hot and cold thermal reservoirs. Carnot's theorem states that all irreversible heat engines operating between two heat reservoirs are less efficient than a Carnot engine operating between the same reservoirs, and that all reversible heat engines operating between two heat reservoirs have the same efficiency as a Carnot engine operating between the same reservoirs.
In his ideal model, the heat of caloric converted into work could be reinstated by reversing the motion of the cycle, a concept subsequently known as thermodynamic reversibility. Carnot, however, further postulated that some caloric is lost, not being converted to mechanical work. Hence, no real heat engine could realize the Carnot cycle's reversibility and was condemned to be less efficient.
Though formulated in terms of caloric (see the obsolete caloric theory), rather than entropy, this was an early insight into the second law.
Clausius inequality.
The Clausius theorem (1854) states that in a cyclic process
formula_23
The equality holds in the reversible case and the strict inequality holds in the irreversible case, with "T"surr as the temperature of the heat bath (surroundings) here. The reversible case is used to introduce the state function entropy. This is because in cyclic processes the variation of a state function is zero, by the very property of being a state function.
Thermodynamic temperature.
For an arbitrary heat engine, the efficiency is:
"η" = "W"n / "q""H" = ("q""H" + "q""C") / "q""H" = 1 - |"q""C"| / |"q""H"|,
where "W"n is the net work done by the engine per cycle, "q""H" > 0 is the heat added to the engine from a hot reservoir, and "q""C" = - |"q""C"| < 0 is waste heat given off to a cold reservoir from the engine. Thus the efficiency depends only on the ratio |"q""C"| / |"q""H"|.
Carnot's theorem states that all reversible engines operating between the same heat reservoirs are equally efficient. Thus, any reversible heat engine operating between temperatures "T"H and "T"C must have the same efficiency, that is to say, the efficiency is a function of the temperatures only: |"q""C"| / |"q""H"| = "f"("T"H, "T"C).
In addition, a reversible heat engine operating between temperatures "T"1 and "T"3 must have the same efficiency as one consisting of two cycles, one between "T"1 and another (intermediate) temperature "T"2, and the second between "T"2 and "T"3, where "T1" > "T2" > "T3". This is because, if a part of the two cycle engine is hidden such that it is recognized as an engine between the reservoirs at the temperatures "T"1 and "T"3, then the efficiency of this engine must be the same as that of the other engine at the same reservoirs. If we choose engines such that the work done by the one cycle engine and the two cycle engine is the same, then the efficiency of each heat engine is written as below.
formula_24,
formula_25,
formula_26.
Here, the engine 1 is the one cycle engine, and the engines 2 and 3 make the two cycle engine where there is the intermediate reservoir at "T"2. We also have used the fact that the heat formula_27 passes through the intermediate thermal reservoir at formula_28 without losing its energy. (I.e., formula_27 is not lost during its passage through the reservoir at formula_28.) This fact can be proved by the following.
formula_29
In order to have consistency in the last equation, the heat formula_27 flowing from engine 2 into the intermediate reservoir must be equal to the heat formula_30 flowing out of the reservoir into engine 3.
Then
formula_31
Now consider the case where formula_32 is a fixed reference temperature: the temperature of the triple point of water, 273.16 K; formula_33. Then for any "T"2 and "T"3,
formula_34
Therefore, if thermodynamic temperature "T"* is defined by
formula_35
then the function "f", viewed as a function of thermodynamic temperatures, is simply
formula_36
and the reference temperature "T"1* = 273.16 K × "f"("T"1,"T"1) = 273.16 K. (Any reference temperature and any positive numerical value could be used – the choice here corresponds to the Kelvin scale.)
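A small numeric sketch of what this definition means for a reversible engine follows; the two reservoir temperatures and the heat drawn per cycle are illustrative values:

```python
# With thermodynamic temperature defined as above, a reversible (Carnot) engine
# has |q_C|/|q_H| = T_C/T_H, so its efficiency is fixed by the temperatures alone.
T_H, T_C = 500.0, 300.0   # K, illustrative reservoir temperatures
q_H = 1000.0              # J drawn from the hot reservoir per cycle

q_C = q_H * (T_C / T_H)   # 600 J rejected to the cold reservoir
W = q_H - q_C             # 400 J of work per cycle
eta = W / q_H             # 0.4 = 1 - T_C/T_H
print(q_C, W, eta)
```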
Entropy.
According to the Clausius equality, for a "reversible process"
formula_37
That means the line integral formula_38 is path independent for reversible processes.
So we can define a state function S called entropy, which for a reversible process or for pure heat transfer satisfies
formula_39
With this we can only obtain the difference of entropy by integrating the above formula. To obtain the absolute value, we need the third law of thermodynamics, which states that "S" = 0 at absolute zero for perfect crystals.
For any irreversible process, since entropy is a state function, we can always connect the initial and terminal states with an imaginary reversible process and integrating on that path to calculate the difference in entropy.
Now reverse the reversible process and combine it with the said irreversible process. Applying the Clausius inequality on this loop, with "T"surr as the temperature of the surroundings,
formula_40
Thus,
formula_41
where the equality holds if the transformation is reversible. If the process is an adiabatic process, then formula_42, so formula_43.
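A worked sketch of this device for the textbook case of the free (Joule) expansion of an ideal gas is shown below: the actual process is adiabatic and irreversible, and the entropy change is computed along an imaginary reversible isothermal path between the same end states. The amount of gas and the volumes are illustrative:

```python
import numpy as np

R = 8.314          # J/(mol K)
n_mol = 1.0        # illustrative amount of gas
V1, V2 = 1.0, 2.0  # m^3: the gas doubles its volume into a vacuum

# Actual irreversible process: Q = 0, W = 0, T unchanged for an ideal gas,
# so the Clausius integral of deltaQ/T_surr vanishes.
clausius_integral = 0.0

# Imaginary reversible isothermal path between the same end states:
dS = n_mol * R * np.log(V2 / V1)    # about 5.76 J/K, positive

print(dS, dS >= clausius_integral)  # the inequality dS >= int(deltaQ/T_surr) holds
```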
Energy, available useful work.
An important and revealing idealized special case is to consider applying the second law to the scenario of an isolated system (called the total system or universe), made up of two parts: a sub-system of interest, and the sub-system's surroundings. These surroundings are imagined to be so large that they can be considered as an "unlimited" heat reservoir at temperature "TR" and pressure "PR" – so that no matter how much heat is transferred to (or from) the sub-system, the temperature of the surroundings will remain "TR"; and no matter how much the volume of the sub-system expands (or contracts), the pressure of the surroundings will remain "PR".
Whatever changes to "dS" and "dSR" occur in the entropies of the sub-system and the surroundings individually, the entropy "S"tot of the isolated total system must not decrease according to the second law of thermodynamics:
formula_44
According to the first law of thermodynamics, the change "dU" in the internal energy of the sub-system is the sum of the heat "δq" added to the sub-system, "minus" any work "δw" done "by" the sub-system, "plus" any net chemical energy entering the sub-system "d" Σ"μiRNi", so that:
formula_45
where "μ""iR" are the chemical potentials of chemical species in the external surroundings.
Now the heat leaving the reservoir and entering the sub-system is
formula_46
where we have first used the definition of entropy in classical thermodynamics (alternatively, in statistical thermodynamics, the relation between entropy change, temperature and absorbed heat can be derived); and then the second law inequality from above.
It therefore follows that any net work "δw" done by the sub-system must obey
formula_47
It is useful to separate the work "δw" done by the subsystem into the "useful" work "δwu" that can be done "by" the sub-system, over and beyond the work "pR dV" done merely by the sub-system expanding against the surrounding external pressure, giving the following relation for the useful work (exergy) that can be done:
formula_48
It is convenient to define the right-hand-side as the exact derivative of a thermodynamic potential, called the "availability" or "exergy" "E" of the subsystem,
formula_49
The second law therefore implies that for any process which can be considered as divided simply into a subsystem, and an unlimited temperature and pressure reservoir with which it is in contact,
formula_50
i.e. the change in the subsystem's exergy plus the useful work done "by" the subsystem (or, the change in the subsystem's exergy less any work, additional to that done by the pressure reservoir, done "on" the system) must be less than or equal to zero.
In sum, if a proper "infinite-reservoir-like" reference state is chosen as the system surroundings in the real world, then the second law predicts a decrease in "E" for an irreversible process and no change for a reversible process.
formula_51 is equivalent to formula_50
This expression together with the associated reference state permits a design engineer working at the macroscopic scale (above the thermodynamic limit) to utilize the second law without directly measuring or considering entropy change in a total isolated system. ("Also, see process engineer"). Those changes have already been considered by the assumption that the system under consideration can reach equilibrium with the reference state without altering the reference state. An efficiency for a process or collection of processes that compares it to the reversible ideal may also be found ("See second law efficiency".)
This approach to the second law is widely utilized in engineering practice, environmental accounting, systems ecology, and other disciplines.
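A numerical sketch of an availability (exergy) calculation of this kind for a simple case, a tank of hot water relaxing to the temperature of its surroundings, is given below; the incompressible-liquid model and all numbers are illustrative assumptions:

```python
import numpy as np

# Hot water treated as an incompressible liquid with constant heat capacity;
# the surroundings act as the unlimited reservoir at T_R (volume terms drop out).
m = 100.0        # kg of water
c_p = 4186.0     # J/(kg K)
T = 353.15       # K (80 C), initial water temperature
T_R = 293.15     # K (20 C), reservoir / environment temperature

# Change in availability E = U - T_R*S (+ P_R*V, constant here) as the water
# cools to T_R; minus this change is the maximum useful work obtainable.
dU = m * c_p * (T_R - T)
dS = m * c_p * np.log(T_R / T)
max_useful_work = -(dU - T_R * dS)
print(max_useful_work / 1e6, "MJ")   # roughly 2.3 MJ, out of about 25 MJ of heat released
```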
Direction of spontaneous processes.
The second law determines whether a proposed physical or chemical process is forbidden or may occur spontaneously. For isolated systems, no energy is provided by the surroundings and the second law requires that the entropy of the system alone must increase: Δ"S" > 0. Examples of spontaneous physical processes in isolated systems include the following: heat flows from a region of higher temperature to a region of lower temperature, mechanical energy is dissipated into thermal energy by friction, and a solute moves by diffusion from a region of higher concentration to a region of lower concentration.
However, for some non-isolated systems which can exchange energy with their surroundings, the surroundings exchange enough heat with the system, or do sufficient work on the system, so that the processes occur in the opposite direction. This is possible provided the total entropy change of the system plus the surroundings is positive as required by the second law: Δ"S"tot = Δ"S" + Δ"S"R > 0. For the three examples given above: heat can be transferred from a colder to a hotter region by a refrigerator or heat pump driven by external work, thermal energy can be partly converted into mechanical work by a heat engine that rejects waste heat, and a solute can be moved against its concentration gradient by processes such as reverse osmosis or distillation that consume external work or heat.
The second law in chemical thermodynamics.
For a spontaneous chemical process in a closed system at constant temperature and pressure without non-"PV" work, the Clausius inequality Δ"S" > "Q/T"surr transforms into a condition for the change in Gibbs free energy
formula_52
or d"G" < 0. For a similar process at constant temperature and volume, the change in Helmholtz free energy must be negative, formula_53. Thus, a negative value of the change in free energy ("G" or "A") is a necessary condition for a process to be spontaneous. This is the most useful form of the second law of thermodynamics in chemistry, where free-energy changes can be calculated from tabulated enthalpies of formation and standard molar entropies of reactants and products. The chemical equilibrium condition at constant "T" and "p" without electrical work is d"G" = 0.
History.
The first theory of the conversion of heat into mechanical work is due to Nicolas Léonard Sadi Carnot in 1824. He was the first to realize correctly that the efficiency of this conversion depends on the difference of temperature between an engine and its surroundings.
Recognizing the significance of James Prescott Joule's work on the conservation of energy, Rudolf Clausius was the first to formulate the second law during 1850, in this form: heat does not flow "spontaneously" from cold to hot bodies. While common knowledge now, this was contrary to the caloric theory of heat popular at the time, which considered heat as a fluid. From there he was able to infer the principle of Sadi Carnot and the definition of entropy (1865).
Established during the 19th century, the Kelvin-Planck statement of the second law says, "It is impossible for any device that operates on a cycle to receive heat from a single reservoir and produce a net amount of work." This statement was shown to be equivalent to the statement of Clausius.
The ergodic hypothesis is also important for the Boltzmann approach. It says that, over long periods of time, the time spent in some region of the phase space of microstates with the same energy is proportional to the volume of this region, i.e. that all accessible microstates are equally probable over a long period of time. Equivalently, it says that time average and average over the statistical ensemble are the same.
There is a traditional doctrine, starting with Clausius, that entropy can be understood in terms of molecular 'disorder' within a macroscopic system. This doctrine is obsolescent.
Account given by Clausius.
In 1856, the German physicist Rudolf Clausius stated what he called the "second fundamental theorem in the mechanical theory of heat" in the following form:
formula_54
where "Q" is heat, "T" is temperature and "N" is the "equivalence-value" of all uncompensated transformations involved in a cyclical process. Later, in 1865, Clausius would come to define "equivalence-value" as entropy. On the heels of this definition, that same year, the most famous version of the second law was read in a presentation at the Philosophical Society of Zurich on April 24, in which, in the end of his presentation, Clausius concludes:
The entropy of the universe tends to a maximum.
This statement is the best-known phrasing of the second law. Because of the looseness of its language, e.g. universe, as well as lack of specific conditions, e.g. open, closed, or isolated, many people take this simple statement to mean that the second law of thermodynamics applies virtually to every subject imaginable. This is not true; this statement is only a simplified version of a more extended and precise description.
In terms of time variation, the mathematical statement of the second law for an isolated system undergoing an arbitrary transformation is:
formula_55
where
"S" is the entropy of the system and
"t" is time.
The equality sign applies after equilibration. An alternative way of formulating the second law for isolated systems is:
formula_56 with formula_57
with formula_58 the sum of the rates of entropy production by all processes inside the system. The advantage of this formulation is that it shows the effect of the entropy production. The rate of entropy production is a very important concept since it determines (limits) the efficiency of thermal machines. Multiplied by the ambient temperature formula_59, it gives the so-called dissipated energy formula_60.
The expression of the second law for closed systems (so, allowing heat exchange and moving boundaries, but not exchange of matter) is:
formula_61 with formula_57
Here
formula_62 is the heat flow into the system
formula_63 is the temperature at the point where the heat enters the system.
The equality sign holds in the case that only reversible processes take place inside the system. If irreversible processes take place (which is the case in real systems in operation) the >-sign holds. If heat is supplied to the system at several places we have to take the algebraic sum of the corresponding terms.
For open systems (also allowing exchange of matter):
formula_64 with formula_57
Here formula_65 is the flow of entropy into the system associated with the flow of matter entering the system. It should not be confused with the time derivative of the entropy. If matter is supplied at several places we have to take the algebraic sum of these contributions.
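A numerical sketch of the entropy-production and dissipated-energy bookkeeping for the simplest steady case, heat leaking through a wall between two reservoirs, is given below; all numbers are illustrative:

```python
# Steady conduction of heat Qdot from a hot to a cold reservoir through a wall.
Qdot = 1000.0                  # W, heat flow through the wall
T_hot, T_cold = 600.0, 300.0   # K
T0 = 293.15                    # K, ambient temperature used for the dissipated energy

# Entropy leaves the hot side at Qdot/T_hot and enters the cold side at
# Qdot/T_cold; the difference is produced inside the wall.
sigma_dot = Qdot / T_cold - Qdot / T_hot    # W/K, rate of entropy production
dissipated_power = T0 * sigma_dot           # W, dissipated energy per unit time
print(sigma_dot, dissipated_power)          # ~1.67 W/K and ~489 W
```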
Statistical mechanics.
Statistical mechanics gives an explanation for the second law by postulating that a material is composed of atoms and molecules which are in constant motion. A particular set of positions and velocities for each particle in the system is called a microstate of the system and because of the constant motion, the system is constantly changing its microstate. Statistical mechanics postulates that, in equilibrium, each microstate that the system might be in is equally likely to occur, and when this assumption is made, it leads directly to the conclusion that the second law must hold in a statistical sense. That is, the second law will hold on average, with a statistical variation on the order of 1/√"N" where "N" is the number of particles in the system. For everyday (macroscopic) situations, the probability that the second law will be violated is practically zero. However, for systems with a small number of particles, thermodynamic parameters, including the entropy, may show significant statistical deviations from that predicted by the second law. Classical thermodynamic theory does not deal with these statistical variations.
Derivation from statistical mechanics.
The first mechanical argument of the Kinetic theory of gases that molecular collisions entail an equalization of temperatures and hence a tendency towards equilibrium was due to James Clerk Maxwell in 1860; Ludwig Boltzmann with his H-theorem of 1872 also argued that due to collisions gases should over time tend toward the Maxwell–Boltzmann distribution.
Due to Loschmidt's paradox, derivations of the second law have to make an assumption regarding the past, namely that the system is uncorrelated at some time in the past; this allows for simple probabilistic treatment. This assumption is usually thought of as a boundary condition, and thus the second law is ultimately a consequence of the initial conditions somewhere in the past, probably at the beginning of the universe (the Big Bang), though other scenarios have also been suggested.
Given these assumptions, in statistical mechanics, the second law is not a postulate, rather it is a consequence of the fundamental postulate, also known as the equal prior probability postulate, so long as one is clear that simple probability arguments are applied only to the future, while for the past there are auxiliary sources of information which tell us that it was low entropy. The first part of the second law, which states that the entropy of a thermally isolated system can only increase, is a trivial consequence of the equal prior probability postulate, if we restrict the notion of the entropy to systems in thermal equilibrium. The entropy of an isolated system in thermal equilibrium containing an amount of energy of formula_66 is:
formula_67
where formula_68 is the number of quantum states in a small interval between formula_66 and formula_69. Here formula_70 is a macroscopically small energy interval that is kept fixed. Strictly speaking this means that the entropy depends on the choice of formula_70. However, in the thermodynamic limit (i.e. in the limit of infinitely large system size), the specific entropy (entropy per unit volume or per unit mass) does not depend on formula_70.
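A toy numerical evaluation of this formula (the number of microstates below is an assumed toy value, far smaller than for any real macroscopic system):

```python
# Toy evaluation of S = k_B * ln(Omega); Omega here is an assumed toy value.
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
Omega = 10.0**25            # assumed number of accessible microstates (toy value)

S = k_B * math.log(Omega)
print(f"S = {S:.3e} J/K")   # ~ 7.9e-22 J/K for this toy Omega
```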
Suppose we have an isolated system whose macroscopic state is specified by a number of variables. These macroscopic variables can, e.g., refer to the total volume, the positions of pistons in the system, etc. Then formula_71 will depend on the values of these variables. If a variable is not fixed, (e.g. we do not clamp a piston in a certain position), then because all the accessible states are equally likely in equilibrium, the free variable in equilibrium will be such that formula_71 is maximized at the given energy of the isolated system as that is the most probable situation in equilibrium.
If the variable was initially fixed to some value then, upon release, when the new equilibrium has been reached, the fact that the variable adjusts itself so that formula_71 is maximized implies that the entropy will have increased or will have stayed the same (if the value at which the variable was fixed happened to be the equilibrium value).
Suppose we start from an equilibrium situation and we suddenly remove a constraint on a variable. Then right after we do this, there are a number formula_71 of accessible microstates, but equilibrium has not yet been reached, so the actual probabilities of the system being in some accessible state are not yet equal to the prior probability of formula_72. We have already seen that in the final equilibrium state, the entropy will have increased or have stayed the same relative to the previous equilibrium state. Boltzmann's H-theorem, however, proves that the quantity "H" decreases monotonically (so that the associated entropy increases) as a function of time during the intermediate out-of-equilibrium state.
Derivation of the entropy change for reversible processes.
The second part of the second law states that the entropy change of a system undergoing a reversible process is given by:
formula_73
where the temperature is defined as:
formula_74
This definition of temperature can be justified within the microcanonical ensemble. Suppose that the system has some external parameter, "x", that can be changed. In general, the energy eigenstates of the system will depend on "x". According to the adiabatic theorem of quantum mechanics, in the limit of an infinitely slow change of the system's Hamiltonian, the system will stay in the same energy eigenstate and thus change its energy according to the change in energy of the energy eigenstate it is in.
The generalized force, "X", corresponding to the external variable "x" is defined such that formula_75 is the work performed by the system if "x" is increased by an amount "dx". For example, if "x" is the volume, then "X" is the pressure. The generalized force for a system known to be in energy eigenstate formula_76 is given by:
formula_77
Since the system can be in any energy eigenstate within an interval of formula_70, we define the generalized force for the system as the expectation value of the above expression:
formula_78
To evaluate the average, we partition the formula_68 energy eigenstates by counting how many of them have a value for formula_79 within a range between formula_80 and formula_81. Calling this number formula_82, we have:
formula_83
The average defining the generalized force can now be written:
formula_84
We can relate this to the derivative of the entropy with respect to "x" at constant energy "E" as follows. Suppose we change "x" to "x" + "dx". Then formula_68 will change because the energy eigenstates depend on "x", causing energy eigenstates to move into or out of the range between formula_66 and formula_85. Let's focus again on the energy eigenstates for which formula_86 lies within the range between formula_80 and formula_81. Since these energy eigenstates increase in energy by "Y dx", all such energy eigenstates that are in the interval ranging from "E" – "Y" "dx" to "E" move from below "E" to above "E". There are
formula_87
such energy eigenstates. If formula_88, all these energy eigenstates will move into the range between formula_66 and formula_85 and contribute to an increase in formula_71. The number of energy eigenstates that move from below formula_85 to above formula_85 is given by formula_89. The difference
formula_90
is thus the net contribution to the increase in formula_71. If "Y dx" is larger than formula_70, there will be energy eigenstates that move from below "E" to above formula_85. They are counted in both formula_91 and formula_89; therefore the above expression is also valid in that case.
Expressing the above as a derivative with respect to "E" and summing over "Y" yields:
formula_92
The logarithmic derivative of formula_71 with respect to "x" is thus given by:
formula_93
The first term is intensive, i.e. it does not scale with system size. In contrast, the last term scales as the inverse system size and will thus vanish in the thermodynamic limit. We have thus found that:
formula_94
Combining this with
formula_95
gives:
formula_96
Derivation for systems described by the canonical ensemble.
If a system is in thermal contact with a heat bath at some temperature "T" then, in equilibrium, the probability distribution over the energy eigenvalues are given by the canonical ensemble:
formula_97
Here "Z" is a factor that normalizes the sum of all the probabilities to 1, this function is known as the partition function. We now consider an infinitesimal reversible change in the temperature and in the external parameters on which the energy levels depend. It follows from the general formula for the entropy:
formula_98
that
formula_99
Inserting the canonical-ensemble formula for formula_100 here gives:
formula_101
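As a small numerical sketch of the canonical-ensemble formulas above for a two-level system (the energy gap and temperature below are assumed values, chosen only for illustration):

```python
# Sketch: canonical-ensemble probabilities and Gibbs entropy S = -k_B * sum p_j ln p_j
# for a two-level system with an assumed energy gap.
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
T   = 300.0             # assumed temperature, K
E   = [0.0, 2.0e-21]    # assumed energy levels, J

Z = sum(math.exp(-Ej / (k_B * T)) for Ej in E)          # partition function
p = [math.exp(-Ej / (k_B * T)) / Z for Ej in E]         # canonical probabilities
S = -k_B * sum(pj * math.log(pj) for pj in p)           # Gibbs entropy

print(f"Z = {Z:.4f},  p = {[round(pj, 4) for pj in p]},  S = {S:.3e} J/K")
```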
Initial conditions at the Big Bang.
As elaborated above, it is thought that the second law of thermodynamics is a result of the very low-entropy initial conditions at the Big Bang. From a statistical point of view, these were very special conditions. On the other hand, they were quite simple, as the universe - or at least the part thereof from which the observable universe developed - seems to have been extremely uniform.
This may seem somewhat paradoxical, since in many physical systems uniform conditions (e.g. mixed rather than separated gases) have high entropy. The paradox is resolved once one realizes that gravitational systems have negative heat capacity, so that when gravity is important, uniform conditions (e.g. gas of uniform density) in fact have lower entropy compared to non-uniform ones (e.g. black holes in empty space). Yet another approach is that the universe had high (or even maximal) entropy given its size, but as the universe grew it rapidly came out of thermodynamic equilibrium, its entropy only slightly increased compared to the increase in maximal possible entropy, and thus it has arrived at a very low entropy when compared to the much larger possible maximum given its later size.
As for the reason why the initial conditions were such, one suggestion is that cosmological inflation was enough to wipe out non-smoothness, while another is that the universe was created spontaneously, where the mechanism of creation implies low-entropy initial conditions.
Living organisms.
There are two principal ways of formulating thermodynamics, (a) through passages from one state of thermodynamic equilibrium to another, and (b) through cyclic processes, by which the system is left unchanged, while the total entropy of the surroundings is increased. These two ways help to understand the processes of life. The thermodynamics of living organisms has been considered by many authors, including Erwin Schrödinger (in his book "What is Life?") and Léon Brillouin.
To a fair approximation, living organisms may be considered as examples of (b). Approximately, an animal's physical state cycles by the day, leaving the animal nearly unchanged. Animals take in food, water, and oxygen, and, as a result of metabolism, give out breakdown products and heat. Plants take in radiative energy from the sun, which may be regarded as heat, and carbon dioxide and water. They give out oxygen. In this way they grow. Eventually they die, and their remains rot away, turning mostly back into carbon dioxide and water. This can be regarded as a cyclic process. Overall, the sunlight is from a high temperature source, the sun, and its energy is passed to a lower temperature sink, i.e. radiated into space. This is an increase of entropy of the surroundings of the plant. Thus animals and plants obey the second law of thermodynamics, considered in terms of cyclic processes.
Furthermore, the ability of living organisms to grow and increase in complexity, as well as to form correlations with their environment in the form of adaption and memory, is not opposed to the second law – rather, it is akin to general results following from it: Under some definitions, an increase in entropy also results in an increase in complexity, and for a finite system interacting with finite reservoirs, an increase in entropy is equivalent to an increase in correlations between the system and the reservoirs.
Living organisms may be considered as open systems, because matter passes into and out from them. Thermodynamics of open systems is currently often considered in terms of passages from one state of thermodynamic equilibrium to another, or in terms of flows in the approximation of local thermodynamic equilibrium. The problem for living organisms may be further simplified by the approximation of assuming a steady state with unchanging flows. General principles of entropy production for such approximations are a subject of ongoing research.
Gravitational systems.
Commonly, systems for which gravity is not important have a positive heat capacity, meaning that their temperature rises with their internal energy. Therefore, when energy flows from a high-temperature object to a low-temperature object, the source temperature decreases while the sink temperature is increased; hence temperature differences tend to diminish over time.
This is not always the case for systems in which the gravitational force is important: systems that are bound by their own gravity, such as stars, can have negative heat capacities. As they contract, both their total energy and their entropy decrease but their internal temperature may increase. This can be significant for protostars and even gas giant planets such as Jupiter. When the entropy of the black-body radiation emitted by the bodies is included, however, the total entropy of the system can be shown to increase even as the entropy of the planet or star decreases.
Non-equilibrium states.
The theory of classical or equilibrium thermodynamics is idealized. A main postulate or assumption, often not even explicitly stated, is the existence of systems in their own internal states of thermodynamic equilibrium. In general, a region of space containing a physical system at a given time, that may be found in nature, is not in thermodynamic equilibrium, read in the most stringent terms. In looser terms, nothing in the entire universe is or has ever been truly in exact thermodynamic equilibrium.
For purposes of physical analysis, it is often convenient to make an assumption of thermodynamic equilibrium. Such an assumption may rely on trial and error for its justification. If the assumption is justified, it can often be very valuable and useful because it makes available the theory of thermodynamics. Elements of the equilibrium assumption are that a system is observed to be unchanging over an indefinitely long time, and that there are so many particles in a system that its particulate nature can be entirely ignored. Under such an equilibrium assumption, in general, there are no macroscopically detectable fluctuations. There is an exception, the case of critical states, which exhibit to the naked eye the phenomenon of critical opalescence. For laboratory studies of critical states, exceptionally long observation times are needed.
In all cases, the assumption of thermodynamic equilibrium, once made, implies as a consequence that no putative candidate "fluctuation" alters the entropy of the system.
It can easily happen that a physical system exhibits internal macroscopic changes that are fast enough to invalidate the assumption of the constancy of the entropy. Or that a physical system has so few particles that the particulate nature is manifest in observable fluctuations. Then the assumption of thermodynamic equilibrium is to be abandoned. There is no unqualified general definition of entropy for non-equilibrium states.
There are intermediate cases, in which the assumption of local thermodynamic equilibrium is a very good approximation, but strictly speaking it is still an approximation, not theoretically ideal.
For non-equilibrium situations in general, it may be useful to consider statistical mechanical definitions of other quantities that may be conveniently called 'entropy', but they should not be confused or conflated with thermodynamic entropy properly defined for the second law. These other quantities indeed belong to statistical mechanics, not to thermodynamics, the primary realm of the second law.
The physics of macroscopically observable fluctuations is beyond the scope of this article.
Arrow of time.
The second law of thermodynamics is a physical law that is not symmetric to reversal of the time direction. This does not conflict with symmetries observed in the fundamental laws of physics (particularly CPT symmetry) since the second law applies statistically on time-asymmetric boundary conditions. The second law has been related to the difference between moving forwards and backwards in time, or to the principle that cause precedes effect (the causal arrow of time, or causality).
Irreversibility.
Irreversibility in thermodynamic processes is a consequence of the asymmetric character of thermodynamic operations, and not of any internally irreversible microscopic properties of the bodies. Thermodynamic operations are macroscopic external interventions imposed on the participating bodies, not derived from their internal properties. There are reputed "paradoxes" that arise from failure to recognize this.
Loschmidt's paradox.
Loschmidt's paradox, also known as the reversibility paradox, is the objection that it should not be possible to deduce an irreversible process from the time-symmetric dynamics that describe the microscopic evolution of a macroscopic system.
In the opinion of Schrödinger, "It is now quite obvious in what manner you have to reformulate the law of entropy – or for that matter, all other irreversible statements – so that they be capable of being derived from reversible models. You must not speak of one isolated system but at least of two, which you may for the moment consider isolated from the rest of the world, but not always from each other." The two systems are isolated from each other by the wall, until it is removed by the thermodynamic operation, as envisaged by the law. The thermodynamic operation is externally imposed, not subject to the reversible microscopic dynamical laws that govern the constituents of the systems. It is the cause of the irreversibility. The statement of the law in this present article complies with Schrödinger's advice. The cause–effect relation is logically prior to the second law, not derived from it.
Poincaré recurrence theorem.
The Poincaré recurrence theorem considers a theoretical microscopic description of an isolated physical system. This may be considered as a model of a thermodynamic system after a thermodynamic operation has removed an internal wall. The system will, after a sufficiently long time, return to a microscopically defined state very close to the initial one. The Poincaré recurrence time is the length of time elapsed until the return. It is exceedingly long, likely longer than the life of the universe, and depends sensitively on the geometry of the wall that was removed by the thermodynamic operation. The recurrence theorem may be perceived as apparently contradicting the second law of thermodynamics. More obviously, however, it is simply a microscopic model of thermodynamic equilibrium in an isolated system formed by removal of a wall between two systems. For a typical thermodynamical system, the recurrence time is so large (many many times longer than the lifetime of the universe) that, for all practical purposes, one cannot observe the recurrence. One might wish, nevertheless, to imagine that one could wait for the Poincaré recurrence, and then re-insert the wall that was removed by the thermodynamic operation. It is then evident that the appearance of irreversibility is due to the utter unpredictability of the Poincaré recurrence given only that the initial state was one of thermodynamic equilibrium, as is the case in macroscopic thermodynamics. Even if one could wait for it, one has no practical possibility of picking the right instant at which to re-insert the wall. The Poincaré recurrence theorem provides a solution to Loschmidt's paradox. If an isolated thermodynamic system could be monitored over increasingly many multiples of the average Poincaré recurrence time, the thermodynamic behavior of the system would become invariant under time reversal.
Maxwell's demon.
James Clerk Maxwell imagined one container divided into two parts, "A" and "B". Both parts are filled with the same gas at equal temperatures and placed next to each other, separated by a wall. Observing the molecules on both sides, an imaginary demon guards a microscopic trapdoor in the wall. When a faster-than-average molecule from "A" flies towards the trapdoor, the demon opens it, and the molecule will fly from "A" to "B". The average speed of the molecules in "B" will have increased while in "A" they will have slowed down on average. Since average molecular speed corresponds to temperature, the temperature decreases in "A" and increases in "B", contrary to the second law of thermodynamics.
One response to this apparent paradox was suggested in 1929 by Leó Szilárd and later by Léon Brillouin. Szilárd pointed out that a real-life Maxwell's demon would need to have some means of measuring molecular speed, and that the act of acquiring information would require an expenditure of energy. Likewise, Brillouin demonstrated that the decrease in entropy caused by the demon would be less than the entropy produced by choosing molecules based on their speed.
Maxwell's 'demon' repeatedly alters the permeability of the wall between "A" and "B". It is therefore performing thermodynamic operations on a microscopic scale, not just observing ordinary spontaneous or natural macroscopic thermodynamic processes.
Quotations.
<templatestyles src="Template:Blockquote/styles.css" />The law that entropy always increases holds, I think, the supreme position among the laws of Nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell's equations – then so much the worse for Maxwell's equations. If it is found to be contradicted by observation – well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.
<templatestyles src="Template:Blockquote/styles.css" />There have been nearly as many formulations of the second law as there have been discussions of it.
<templatestyles src="Template:Blockquote/styles.css" />Clausius is the author of the sibyllic utterance, "The energy of the universe is constant; the entropy of the universe tends to a maximum." The objectives of continuum thermomechanics stop far short of explaining the "universe", but within that theory we may easily derive an explicit statement in some ways reminiscent of Clausius, but referring only to a modest object: an isolated body of finite size.
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "S"
},
{
"math_id": 1,
"text": "\\mathrm d S"
},
{
"math_id": 2,
"text": "\\delta Q"
},
{
"math_id": 3,
"text": "(T)"
},
{
"math_id": 4,
"text": "\\mathrm dS = \\frac{\\delta Q}{T} \\,\\, \\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\, \\text {(closed system; idealized, reversible process)}."
},
{
"math_id": 5,
"text": "(\\delta)"
},
{
"math_id": 6,
"text": "(\\mathrm d)"
},
{
"math_id": 7,
"text": "\\mathrm dS > \\frac{\\delta Q}{T_\\text{surr}} \\,\\, \\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\, \\text {(closed system; actually possible, irreversible process).}"
},
{
"math_id": 8,
"text": "\\mathrm dS = \\frac{\\delta Q}{T} \\,\\, \\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\, \\text {(actually possible quasistatic irreversible process without composition change).}"
},
{
"math_id": 9,
"text": "\\xi"
},
{
"math_id": 10,
"text": "\\mathrm dS = \\frac{\\delta Q}{T} - \\frac{1}{T} \\sum_{j} \\, \\Xi_{j} \\,\\delta \\xi_j \\,\\, \\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\, \\text {(closed system; actually possible quasistatic irreversible process).}"
},
{
"math_id": 11,
"text": "\\Delta Q = Q\\left(\\frac{1}{\\eta}-1\\right)"
},
{
"math_id": 12,
"text": " \\text{Input}+\\text{Output}=0 \\implies (Q + Q_c) - \\frac{Q}{\\eta} = 0 "
},
{
"math_id": 13,
"text": " Q_c=Q\\left( \\frac{1}{\\eta}-1\\right) "
},
{
"math_id": 14,
"text": " \\frac{Q}{\\eta} "
},
{
"math_id": 15,
"text": "\\delta Q=TdS"
},
{
"math_id": 16,
"text": "T = \\left ( \\frac{\\partial U}{\\partial S} \\right )_{V, N}"
},
{
"math_id": 17,
"text": " \\delta S_\\text{NetRad} "
},
{
"math_id": 18,
"text": " \\delta Q_{CC} "
},
{
"math_id": 19,
"text": " \\int_\\text{cycle} (\\frac{\\delta Q_{CC}}{T_b} + \\delta S_\\text{NetRad}) <= 0 "
},
{
"math_id": 20,
"text": " \\delta "
},
{
"math_id": 21,
"text": "\nK_\\nu = \\frac{ 2 h }{c^2} \\frac{\\nu^3}{\\exp\\left(\\frac{h\\nu}{kT}\\right) - 1}, \n"
},
{
"math_id": 22,
"text": "\nL_\\nu = \\frac{ 2 k \\nu^2 }{c^2} ((1+\\frac{c^2 K_\\nu}{2 h \\nu^3})\\ln(1+\\frac{c^2 K_\\nu}{2 h \\nu^3})-(\\frac{c^2 K_\\nu}{2 h \\nu^3})\\ln(\\frac{c^2 K_\\nu}{2 h \\nu^3}))\n"
},
{
"math_id": 23,
"text": "\\oint \\frac{\\delta Q}{T_\\text{surr}} \\leq 0."
},
{
"math_id": 24,
"text": "\\eta _1 = 1 - \\frac{|q_3|}{|q_1|} = 1 - f(T_1, T_3)"
},
{
"math_id": 25,
"text": "\\eta _2 = 1 - \\frac{|q_2|}{|q_1|} = 1 - f(T_1, T_2)"
},
{
"math_id": 26,
"text": "\\eta _3 = 1 - \\frac{|q_3|}{|q_2|} = 1 - f(T_2, T_3)"
},
{
"math_id": 27,
"text": "q_2"
},
{
"math_id": 28,
"text": "T_2"
},
{
"math_id": 29,
"text": "\\begin{align}\n & {{\\eta }_{2}}=1-\\frac{|{{q}_{2}}|}{|{{q}_{1}}|}\\to |{{w}_{2}}|=|{{q}_{1}}|-|{{q}_{2}}|,\\\\ \n & {{\\eta }_{3}}=1-\\frac{|{{q}_{3}}|}{|{{q}_{2}}^{*}|}\\to |{{w}_{3}}|=|{{q}_{2}}^{*}|-|{{q}_{3}}|,\\\\ \n & |{{w}_{2}}|+|{{w}_{3}}|=(|{{q}_{1}}|-|{{q}_{2}}|)+(|{{q}_{2}}^{*}|-|{{q}_{3}}|),\\\\ \n & {{\\eta}_{1}}=1-\\frac{|{{q}_{3}}|}{|{{q}_{1}}|}=\\frac{(|{{w}_{2}}|+|{{w}_{3}}|)}{|{{q}_{1}}|}=\\frac{(|{{q}_{1}}|-|{{q}_{2}}|)+(|{{q}_{2}}^{*}|-|{{q}_{3}}|)}{|{{q}_{1}}|}.\\\\ \n\\end{align}"
},
{
"math_id": 30,
"text": "q_2^*"
},
{
"math_id": 31,
"text": "f(T_1,T_3) = \\frac{|q_3|}{|q_1|} = \\frac{|q_2| |q_3|} {|q_1| |q_2|} = f(T_1,T_2)f(T_2,T_3)."
},
{
"math_id": 32,
"text": "T_1"
},
{
"math_id": 33,
"text": "T_1 = 273.16 K"
},
{
"math_id": 34,
"text": "f(T_2,T_3) = \\frac{f(T_1,T_3)}{f(T_1,T_2)} = \\frac{273.16 \\text{ K} \\cdot f(T_1,T_3)}{273.16 \\text{ K} \\cdot f(T_1,T_2)}."
},
{
"math_id": 35,
"text": "T^* = 273.16 \\text{ K} \\cdot f(T_1,T)"
},
{
"math_id": 36,
"text": "f(T_2,T_3) = f(T_2^*,T_3^*) = \\frac{T_3^*}{T_2^*},"
},
{
"math_id": 37,
"text": "\\oint \\frac{\\delta Q}{T}=0"
},
{
"math_id": 38,
"text": "\\int_L \\frac{\\delta Q}{T}"
},
{
"math_id": 39,
"text": "dS = \\frac{\\delta Q}{T} "
},
{
"math_id": 40,
"text": "-\\Delta S+\\int\\frac{\\delta Q}{T_{surr}}=\\oint\\frac{\\delta Q}{T_{surr}} \\leq 0"
},
{
"math_id": 41,
"text": "\\Delta S \\ge \\int \\frac{\\delta Q}{T_{surr}}"
},
{
"math_id": 42,
"text": "\\delta Q=0"
},
{
"math_id": 43,
"text": "\\Delta S \\ge 0"
},
{
"math_id": 44,
"text": " dS_{\\mathrm{tot}}= dS + dS_R \\ge 0 "
},
{
"math_id": 45,
"text": " dU = \\delta q - \\delta w + d\\left(\\sum \\mu_{iR}N_i\\right)"
},
{
"math_id": 46,
"text": " \\delta q = T_R (-dS_R) \\le T_R dS "
},
{
"math_id": 47,
"text": " \\delta w \\le - dU + T_R dS + \\sum \\mu_{iR} dN_i "
},
{
"math_id": 48,
"text": " \\delta w_u \\le -d \\left(U - T_R S + p_R V - \\sum \\mu_{iR} N_i \\right)"
},
{
"math_id": 49,
"text": " E = U - T_R S + p_R V - \\sum \\mu_{iR} N_i "
},
{
"math_id": 50,
"text": " dE + \\delta w_u \\le 0 "
},
{
"math_id": 51,
"text": "dS_{tot} \\ge 0 "
},
{
"math_id": 52,
"text": "\\Delta G < 0 "
},
{
"math_id": 53,
"text": "\\Delta A < 0 "
},
{
"math_id": 54,
"text": "\\int \\frac{\\delta Q}{T} = -N"
},
{
"math_id": 55,
"text": "\\frac{dS}{dt} \\ge 0"
},
{
"math_id": 56,
"text": "\\frac{dS}{dt} = \\dot S_{i}"
},
{
"math_id": 57,
"text": " \\dot S_{i} \\ge 0"
},
{
"math_id": 58,
"text": " \\dot S_{i}"
},
{
"math_id": 59,
"text": "T_{a}"
},
{
"math_id": 60,
"text": " P_{diss}=T_{a}\\dot S_{i}"
},
{
"math_id": 61,
"text": "\\frac{dS}{dt} = \\frac{\\dot Q}{T}+\\dot S_{i}"
},
{
"math_id": 62,
"text": "\\dot Q"
},
{
"math_id": 63,
"text": "T"
},
{
"math_id": 64,
"text": "\\frac{dS}{dt} = \\frac{\\dot Q}{T}+\\dot S+\\dot S_{i}"
},
{
"math_id": 65,
"text": "\\dot S"
},
{
"math_id": 66,
"text": "E"
},
{
"math_id": 67,
"text": "S = k_{\\mathrm B} \\ln\\left[\\Omega\\left(E\\right)\\right]"
},
{
"math_id": 68,
"text": "\\Omega\\left(E\\right)"
},
{
"math_id": 69,
"text": "E +\\delta E"
},
{
"math_id": 70,
"text": "\\delta E"
},
{
"math_id": 71,
"text": "\\Omega"
},
{
"math_id": 72,
"text": "1/\\Omega"
},
{
"math_id": 73,
"text": "dS =\\frac{\\delta Q}{T}"
},
{
"math_id": 74,
"text": "\\frac{1}{k_{\\mathrm B} T}\\equiv\\beta\\equiv\\frac{d\\ln\\left[\\Omega\\left(E\\right)\\right]}{dE}"
},
{
"math_id": 75,
"text": "X dx"
},
{
"math_id": 76,
"text": "E_{r}"
},
{
"math_id": 77,
"text": "X = -\\frac{dE_{r}}{dx}"
},
{
"math_id": 78,
"text": "X = -\\left\\langle\\frac{dE_{r}}{dx}\\right\\rangle\\,"
},
{
"math_id": 79,
"text": "\\frac{dE_{r}}{dx}"
},
{
"math_id": 80,
"text": "Y"
},
{
"math_id": 81,
"text": "Y + \\delta Y"
},
{
"math_id": 82,
"text": "\\Omega_{Y}\\left(E\\right)"
},
{
"math_id": 83,
"text": "\\Omega\\left(E\\right)=\\sum_{Y}\\Omega_{Y}\\left(E\\right)\\,"
},
{
"math_id": 84,
"text": "X = -\\frac{1}{\\Omega\\left(E\\right)}\\sum_{Y} Y\\Omega_{Y}\\left(E\\right)\\,"
},
{
"math_id": 85,
"text": "E+\\delta E"
},
{
"math_id": 86,
"text": "\\frac{dE_{r}}{dx}"
},
{
"math_id": 87,
"text": "N_{Y}\\left(E\\right)=\\frac{\\Omega_{Y}\\left(E\\right)}{\\delta E} Y dx\\,"
},
{
"math_id": 88,
"text": "Y dx\\leq\\delta E"
},
{
"math_id": 89,
"text": "N_{Y}\\left(E+\\delta E\\right)"
},
{
"math_id": 90,
"text": "N_{Y}\\left(E\\right) - N_{Y}\\left(E+\\delta E\\right)\\,"
},
{
"math_id": 91,
"text": "N_{Y}\\left(E\\right)"
},
{
"math_id": 92,
"text": "\\left(\\frac{\\partial\\Omega}{\\partial x}\\right)_{E} = -\\sum_{Y}Y\\left(\\frac{\\partial\\Omega_{Y}}{\\partial E}\\right)_{x}= \\left(\\frac{\\partial\\left(\\Omega X\\right)}{\\partial E}\\right)_{x}\\,"
},
{
"math_id": 93,
"text": "\\left(\\frac{\\partial\\ln\\left(\\Omega\\right)}{\\partial x}\\right)_{E} = \\beta X +\\left(\\frac{\\partial X}{\\partial E}\\right)_{x}\\,"
},
{
"math_id": 94,
"text": "\\left(\\frac{\\partial S}{\\partial x}\\right)_{E} = \\frac{X}{T}\\,"
},
{
"math_id": 95,
"text": "\\left(\\frac{\\partial S}{\\partial E}\\right)_{x} = \\frac{1}{T}\\,"
},
{
"math_id": 96,
"text": "dS = \\left(\\frac{\\partial S}{\\partial E}\\right)_{x}dE+\\left(\\frac{\\partial S}{\\partial x}\\right)_{E}dx = \\frac{dE}{T} + \\frac{X}{T} dx=\\frac{\\delta Q}{T}\\,"
},
{
"math_id": 97,
"text": "P_{j}=\\frac{\\exp\\left(-\\frac{E_{j}}{k_{\\mathrm B} T}\\right)}{Z}"
},
{
"math_id": 98,
"text": "S = -k_{\\mathrm B}\\sum_{j}P_{j}\\ln\\left(P_{j}\\right)"
},
{
"math_id": 99,
"text": "dS = -k_{\\mathrm B}\\sum_{j}\\ln\\left(P_{j}\\right)dP_{j}"
},
{
"math_id": 100,
"text": "P_{j}"
},
{
"math_id": 101,
"text": "dS = \\frac{1}{T}\\sum_{j}E_{j}dP_{j}=\\frac{1}{T}\\sum_{j}d\\left(E_{j}P_{j}\\right) - \\frac{1}{T}\\sum_{j}P_{j}dE_{j}= \\frac{dE + \\delta W}{T}=\\frac{\\delta Q}{T}"
}
]
| https://en.wikipedia.org/wiki?curid=133017 |
13301859 | Diffusion-controlled reaction | Reaction rate equals rate of transport
Diffusion-controlled (or diffusion-limited) reactions are reactions in which the reaction rate is equal to the rate of transport of the reactants through the reaction medium (usually a solution). The process of chemical reaction can be considered as involving the diffusion of reactants until they encounter each other in the right stoichiometry and form an activated complex which can form the product species. The observed rate of chemical reactions is, generally speaking, the rate of the slowest or "rate determining" step. In diffusion controlled reactions the formation of products from the activated complex is much faster than the diffusion of reactants and thus the rate is governed by collision frequency.
Diffusion control is rare in the gas phase, where rates of diffusion of molecules are generally very high. Diffusion control is more likely in solution where diffusion of reactants is slower due to the greater number of collisions with solvent molecules. Reactions where the activated complex forms easily and the products form rapidly are most likely to be limited by diffusion control. Examples are those involving catalysis and enzymatic reactions. Heterogeneous reactions where reactants are in different phases are also candidates for diffusion control.
One classical test for diffusion control of a heterogeneous reaction is to observe whether the rate of reaction is affected by stirring or agitation; if so then the reaction is almost certainly diffusion controlled under those conditions.
Derivation.
The following derivation is adapted from "Foundations of Chemical Kinetics".
This derivation assumes the reaction formula_0. Consider a sphere of radius formula_1, centered at a spherical molecule A, with reactant B flowing in and out of it. A reaction is considered to occur if molecules A and B touch, that is, when the two molecules are a distance formula_2 apart.
If we assume a local steady state, then the rate at which B reaches formula_2 is the limiting factor and balances the reaction.
Therefore, the steady state condition becomes
1. formula_3
where
formula_4 is the flux of B, as given by Fick's law of diffusion,
2. formula_5,
where formula_6 is the diffusion coefficient and can be obtained by the Stokes-Einstein equation, and the second term is the gradient of the chemical potential with respect to position. Note that [B] refers to the average concentration of B in the solution, while [B](r) is the "local concentration" of B at position r.
Inserting 2 into 1 results in
3. formula_7.
It is convenient at this point to use the identity
formula_8, allowing us to rewrite 3 as
4. formula_9.
Rearranging 4 allows us to write
5. formula_10
Using the boundary conditions that formula_11, i.e. the local concentration of B approaches that of the solution at large distances, and consequently formula_12, as formula_13, we can solve 5 by separation of variables to get
6. formula_14
or
7. formula_15 (where formula_16)
For the reaction between A and B, there is an inherent reaction constant formula_17, so formula_18. Substituting this into 7 and rearranging yields
8. formula_19
Limiting conditions.
Very fast intrinsic reaction.
Suppose formula_17 is very large compared to the diffusion process, so A and B react immediately. This is the classic diffusion-limited reaction, and the corresponding diffusion-limited rate constant can be obtained from 8 as formula_20. Equation 8 can then be re-written as the "diffusion influenced rate constant" as
9. formula_21
Weak intermolecular forces.
If the forces that bind A and B together are weak, i.e. formula_22 for all r except very small r, then formula_23. The reaction rate 9 simplifies even further to
10. formula_24
This equation is true for a very large proportion of industrially relevant reactions in solution.
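A brief numerical sketch of this limiting behaviour (the rate-constant values below are assumed for illustration): when the intrinsic constant formula_17 is much larger than formula_28 the observed rate constant approaches formula_28 (diffusion control), and when it is much smaller it approaches formula_17 (reaction control).

```python
# Sketch of k = k_D * k_r / (k_r + k_D) in the two limiting regimes.
# The rate-constant values are assumed for illustration (units: M^-1 s^-1).
k_D = 7.0e9                          # assumed diffusion-limited rate constant

for k_r in (1.0e12, 1.0e6):          # intrinsic reaction constant: fast vs slow
    k = k_D * k_r / (k_r + k_D)
    print(f"k_r = {k_r:.1e}  ->  k = {k:.2e}")
# fast intrinsic reaction: k ~ k_D (diffusion controlled)
# slow intrinsic reaction: k ~ k_r (reaction controlled)
```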
Viscosity dependence.
The Stokes–Einstein equation gives the diffusion coefficient of a sphere of diameter formula_25 as formula_26, where formula_27 is the viscosity of the solution. Inserting this into 9 gives an estimate for formula_28 of formula_29, where R is the gas constant and formula_27 is given in centipoise. For the following molecules, an estimate for formula_28 is given:
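As a hedged numerical check of the estimate formula_29 (the viscosity value below is an assumed figure for water near room temperature, not taken from the source):

```python
# Estimate of the diffusion-limited rate constant k_D ~ 8RT/(3*eta).
# Temperature and viscosity are assumed values (water near room temperature).
R   = 8.314            # gas constant, J/(mol K)
T   = 298.0            # K
eta = 0.89e-3          # Pa s (approx. viscosity of water at 25 C)

k_D = 8 * R * T / (3 * eta)        # m^3 mol^-1 s^-1
k_D_per_M = k_D * 1000             # L mol^-1 s^-1, i.e. M^-1 s^-1
print(f"k_D ~ {k_D_per_M:.1e} M^-1 s^-1")   # ~ 7e9 M^-1 s^-1
```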
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A+B\\rightarrow C"
},
{
"math_id": 1,
"text": "R_{A}"
},
{
"math_id": 2,
"text": "R_{AB}"
},
{
"math_id": 3,
"text": " k[B]=-4\\pi r^2 J_{B}"
},
{
"math_id": 4,
"text": "J_{B}"
},
{
"math_id": 5,
"text": "J_{B} = -D_{AB} (\\frac{dB(r)}{dr} +\\frac{[B]}{k_{B}T} \\frac{dU}{dr})"
},
{
"math_id": 6,
"text": "D_{AB}"
},
{
"math_id": 7,
"text": "k[B]= 4\\pi r^2 D_{AB}(\\frac{dB(r)}{dr}+\\frac{[B](r)}{k_{B}T} \\frac{dU}{dr})"
},
{
"math_id": 8,
"text": " \\exp(-U(r)/k_{B}T) \\cdot \\frac{d}{dr} ([B](r)\\exp(U(r)/k_{B}T) = (\\frac{dB(r)}{dr}+\\frac{[B](r)}{k_{B}T} \\frac{dU}{dr}) "
},
{
"math_id": 9,
"text": "k[B]= 4\\pi r^2 D_{AB} \\exp(-U(r)/k_{B}T) \\cdot \\frac{d}{dr} ([B](r)\\exp(U(r)/k_{B}T)"
},
{
"math_id": 10,
"text": "\\frac{k[B] \\exp(U(r)/k_{B}T)}{4\\pi r^2 D_{AB}}= \\frac{d}{dr} ([B](r)\\exp(U(r)/k_{B}T)"
},
{
"math_id": 11,
"text": "[B](r)\\rightarrow [B]"
},
{
"math_id": 12,
"text": "U(r) \\rightarrow 0 "
},
{
"math_id": 13,
"text": " r \\rightarrow \\infty "
},
{
"math_id": 14,
"text": " \\int_{R_{AB}}^{\\infty} dr \\frac{k[B] \\exp(U(r)/k_{B}T)}{4\\pi r^2 D_{AB}}= \\int_{R_{AB}}^{\\infty} d( [B](r)\\exp(U(r)/k_{B}T )"
},
{
"math_id": 15,
"text": " \\frac{k[B]}{4\\pi D_{AB}\\beta }= \n [B]-[B](R_{AB})\\exp(U(R_{AB})/k_{B}T ) "
},
{
"math_id": 16,
"text": "\\beta^{-1} = \\int_{R_{AB}}^{\\infty} \\frac{1}{r^2}\\exp(\\frac{U(r)}{k_B T}dr ) "
},
{
"math_id": 17,
"text": "k_r"
},
{
"math_id": 18,
"text": "[B](R_{AB}) = k[B]/k_r "
},
{
"math_id": 19,
"text": " k = \\frac{4\\pi D_{AB}\\beta k_r }{k_r + 4\\pi D_{AB} \\beta \\exp(\\frac{U(R_{AB} )}{k_B T} ) } "
},
{
"math_id": 20,
"text": "k_D = 4\\pi D_{AB} \\beta "
},
{
"math_id": 21,
"text": " k= \\frac{k_D k_r}{k_r + k_D \\exp(\\frac{U(R_{AB} )}{k_B T} )} "
},
{
"math_id": 22,
"text": "U(r) \\approx 0"
},
{
"math_id": 23,
"text": "\\beta^{-1} \\approx \\frac{1}{R_{AB}}"
},
{
"math_id": 24,
"text": " k = \\frac{k_D k_r}{k_r + k_D} "
},
{
"math_id": 25,
"text": "R_A"
},
{
"math_id": 26,
"text": "D_A = \\frac{k_BT}{3\\pi R_A \\eta}"
},
{
"math_id": 27,
"text": "\\eta"
},
{
"math_id": 28,
"text": "k_D"
},
{
"math_id": 29,
"text": "\\frac{8 RT}{3\\eta} "
}
]
| https://en.wikipedia.org/wiki?curid=13301859 |
13305328 | Prolate spheroidal wave function | The prolate spheroidal wave functions are eigenfunctions of the Laplacian in prolate spheroidal coordinates, adapted to boundary conditions on certain ellipsoids of revolution (an ellipse rotated around its long axis, “cigar shape“). Related are the oblate spheroidal wave functions (“pancake shaped” ellipsoid).
Solutions to the wave equation.
Solve the Helmholtz equation,
formula_0, by the method of separation of variables in prolate spheroidal coordinates, formula_1, with:
formula_2
formula_3
formula_4
and formula_5, formula_6, and formula_7. Here, formula_8 is the interfocal distance of the elliptical cross section of the prolate spheroid.
Setting formula_9, the solution formula_10 can be written
as the product of formula_11, a radial spheroidal wave function formula_12 and an angular spheroidal wave function formula_13.
The radial wave function formula_12 satisfies the linear ordinary differential equation:
formula_14
The angular wave function satisfies the differential equation:
formula_15
It is the same differential equation as in the case of the radial wave function. However, the range of the variable is different: in the radial wave function, formula_5, while in the angular wave function, formula_16. The eigenvalue formula_17 of this Sturm–Liouville problem is fixed by the requirement that formula_18 must be finite for formula_19.
For formula_20 both differential equations reduce to the equations satisfied by the associated Legendre polynomials. For formula_21, the angular spheroidal wave functions can be expanded as a series of Legendre functions.
If one writes formula_22, the function formula_23 satisfies
formula_24
which is known as the spheroidal wave equation. This auxiliary equation has been used by Stratton.
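For numerical work, routines for these functions are available in SciPy; the short sketch below (with assumed values of m, n and c chosen only for illustration) evaluates the characteristic value formula_17 and the angular function formula_13 at a few points. Note that SciPy follows one particular normalization convention, which may differ from other schemes mentioned below.

```python
# Hedged sketch: evaluating prolate spheroidal quantities with SciPy.
# The parameters m, n, c and the coordinate values are assumed for illustration.
from scipy import special

m, n, c = 0, 1, 1.5                       # assumed mode numbers and spheroidal parameter c = k*a
lam = special.pro_cv(m, n, c)             # characteristic value lambda_mn(c)
print("lambda_mn(c) =", lam)

for eta in (-0.9, -0.5, 0.0, 0.5, 0.9):   # angular coordinate, |eta| <= 1
    S, S_prime = special.pro_ang1(m, n, c, eta)   # angular function and its derivative
    print(f"S_mn(c, eta={eta:+.1f}) = {S:+.6f}")
```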
Band-limited signals.
In signal processing, the prolate spheroidal wave functions (PSWF) are useful as eigenfunctions of a time-limiting operation followed by a low-pass filter. Let formula_25 denote the time truncation operator, such that formula_26 if and only if formula_27 has support on formula_28. Similarly, let formula_29 denote an ideal low-pass filtering operator, such that formula_30 if and only if its Fourier transform is limited to formula_31. The operator formula_32 turns out to be linear, bounded and self-adjoint. For formula_33 we denote with formula_34 the formula_35-th eigenfunction, defined as
formula_36
where formula_37 are the associated eigenvalues, and formula_38 is a constant. The band-limited functions formula_39 are the prolate spheroidal wave functions, proportional to the formula_40 introduced above. (See also Spectral concentration problem.)
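The discrete-time analogue of these band-limited eigenfunctions is the family of discrete prolate spheroidal (Slepian) sequences, which can be computed with SciPy; in the sketch below the window length and time-bandwidth product are assumed values chosen only for illustration.

```python
# Hedged sketch: discrete prolate spheroidal (Slepian) sequences, the
# discrete-time analogue of the functions psi_n above. Window length and
# time-bandwidth product NW are assumed values.
from scipy.signal import windows

M, NW, K = 128, 2.5, 4          # assumed length, time-bandwidth product, number of tapers
tapers, ratios = windows.dpss(M, NW, Kmax=K, return_ratios=True)

print("taper array shape:", tapers.shape)        # (4, 128)
for n, lam in enumerate(ratios):
    print(f"  energy-concentration ratio lambda_{n} = {lam:.6f}")
```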
Pioneering work in this area was performed by Slepian and Pollak, Landau and Pollak, and Slepian.
Prolate spheroidal wave functions whose domain is (a portion of) the surface of the unit sphere are more generally called "Slepian functions". These are of great utility in disciplines such as geodesy, cosmology, or tomography.
Technical information and history.
There are different normalization schemes for spheroidal functions. A table of the different schemes can be found in Abramowitz and Stegun who follow the notation of Flammer.
The Digital Library of Mathematical Functions provided by NIST is an excellent resource for spheroidal wave functions.
Tables of numerical values of spheroidal wave functions are given in Flammer, Hunter, Hanish et al., and Van Buren et al.
Originally, the spheroidal wave functions were introduced by C. Niven, which led to a Helmholtz equation in spheroidal coordinates. Monographs tying together many aspects of the theory of spheroidal wave functions were written by Strutt, Stratton et al., Meixner and Schafke, and Flammer.
Flammer provided a thorough discussion of the calculation of the eigenvalues, angular wavefunctions, and radial wavefunctions for both the prolate and the oblate case. Computer programs for this purpose have been developed by many, including King et al., Patz and Van Buren, Baier et al., Zhang and Jin, Thompson and Falloon. Van Buren and Boisvert have recently developed new methods for calculating prolate spheroidal wave functions that extend the ability to obtain numerical values to extremely wide parameter ranges. Fortran source code that combines the new results with traditional methods is available at http://www.mathieuandspheroidalwavefunctions.com.
Asymptotic expansions of angular prolate spheroidal wave functions for large values of formula_41 have been derived by Müller. He also investigated the relation between asymptotic expansions of spheroidal wave functions.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\nabla^2 \\Phi + k^2 \\Phi=0"
},
{
"math_id": 1,
"text": "(\\xi,\\eta,\\varphi)"
},
{
"math_id": 2,
"text": "\\ x=a \\sqrt{(\\xi^2-1)(1-\\eta^2)} \\cos \\varphi, "
},
{
"math_id": 3,
"text": "\\ y=a \\sqrt{(\\xi^2-1)(1-\\eta^2)} \\sin \\varphi, "
},
{
"math_id": 4,
"text": "\\ z=a \\, \\xi \\, \\eta, "
},
{
"math_id": 5,
"text": "\\xi \\ge 1"
},
{
"math_id": 6,
"text": " |\\eta| \\le 1 "
},
{
"math_id": 7,
"text": "0 \\le \\varphi \\le 2\\pi"
},
{
"math_id": 8,
"text": "2a > 0"
},
{
"math_id": 9,
"text": "c=ka"
},
{
"math_id": 10,
"text": "\\Phi(\\xi,\\eta,\\varphi)"
},
{
"math_id": 11,
"text": "e^{{\\rm i} m \\varphi}"
},
{
"math_id": 12,
"text": "R_{mn}(c,\\xi)"
},
{
"math_id": 13,
"text": "S_{mn}(c,\\eta)"
},
{
"math_id": 14,
"text": "\\ (\\xi^2 -1) \\frac{d^2 R_{mn}(c,\\xi)}{d \\xi ^2} + 2\\xi \\frac{d R_{mn}(c,\\xi)}{d \\xi} -\\left(\\lambda_{mn}(c) -c^2 \\xi^2 +\\frac{m^2}{\\xi^2-1}\\right) {R_{mn}(c,\\xi)} = 0 "
},
{
"math_id": 15,
"text": "\\ (1 - \\eta^2) \\frac{d^2 S_{mn}(c,\\eta)}{d \\eta ^2} - 2\\eta \\frac{d S_{mn}(c,\\eta)}{d \\eta} +\\left(\\lambda_{mn}(c) -c^2 \\eta^2 +\\frac{m^2}{\\eta^2-1}\\right) {S_{mn}(c,\\eta)} = 0 "
},
{
"math_id": 16,
"text": "|\\eta| \\le 1"
},
{
"math_id": 17,
"text": "\\lambda_{mn}(c)"
},
{
"math_id": 18,
"text": "{S_{mn}(c,\\eta)}"
},
{
"math_id": 19,
"text": "\\eta \\to \\pm1"
},
{
"math_id": 20,
"text": "c=0"
},
{
"math_id": 21,
"text": "c\\ne 0"
},
{
"math_id": 22,
"text": "S_{mn}(c,\\eta)=(1-\\eta^2)^{m/2} Y_{mn}(c,\\eta)"
},
{
"math_id": 23,
"text": "Y_{mn}(c,\\eta)"
},
{
"math_id": 24,
"text": "\\ (1-\\eta^2) \\frac{d^2 Y_{mn}(c,\\eta)}{d \\eta ^2} -2 (m+1) \\eta \\frac{d Y_{mn}(c,\\eta)}{d \\eta} - \\left(c^2 \\eta^2 +m(m+1)-\\lambda_{mn}(c)\\right) {Y_{mn}(c,\\eta)} = 0, "
},
{
"math_id": 25,
"text": "D"
},
{
"math_id": 26,
"text": "f(t)=D f(t)"
},
{
"math_id": 27,
"text": "f(t)"
},
{
"math_id": 28,
"text": "[-T, T]"
},
{
"math_id": 29,
"text": "B"
},
{
"math_id": 30,
"text": "f(t)=B f(t)"
},
{
"math_id": 31,
"text": "[-\\Omega, \\Omega]"
},
{
"math_id": 32,
"text": " BD "
},
{
"math_id": 33,
"text": "n=0,1,2,\\ldots"
},
{
"math_id": 34,
"text": "\\psi_n(c,t)"
},
{
"math_id": 35,
"text": "n"
},
{
"math_id": 36,
"text": "\\ BD \\psi_n(c,t) = \\frac{1}{2\\pi}\\int_{-\\Omega}^\\Omega \\left(\\int_{-T}^T \\psi_n(c,\\tau)e^{-i\\omega \\tau} \\, d\\tau\\right)e^{i\\omega t} \\, d\\omega = \\lambda_n(c)\\psi_n(c,t),"
},
{
"math_id": 37,
"text": "1>\\lambda_0(c)>\\lambda_1(c)>\\cdots>0"
},
{
"math_id": 38,
"text": "c=T\\Omega"
},
{
"math_id": 39,
"text": "\\{\\psi_n(c,t)\\}_{n=0}^{\\infty}"
},
{
"math_id": 40,
"text": "S_{0n}(c, t/T)"
},
{
"math_id": 41,
"text": " c "
}
]
| https://en.wikipedia.org/wiki?curid=13305328 |
13307983 | Resistive ballooning mode | The resistive ballooning mode (RBM) is an instability occurring in magnetized plasmas, particularly in magnetic confinement devices such as tokamaks, when the pressure gradient is opposite to the effective gravity created by a magnetic field.
Linear growth rate.
The linear growth rate formula_0 of the RBM instability is given as
formula_1
where formula_2 is the pressure gradient, formula_3 is the effective gravity produced by a non-homogeneous magnetic field, "R"0 is the major radius of the device, "L""p" is a characteristic length of the pressure gradient, and "c""s" is the plasma sound speed.
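As an order-of-magnitude illustration only, the estimates above combine to give a growth rate of order "c""s"/sqrt("R"0 "L""p"); the parameter values in the sketch below are assumed, not taken from any particular device.

```python
# Order-of-magnitude sketch of the RBM growth rate, gamma^2 ~ c_s^2 / (R0 * Lp),
# using the estimates |grad p|/p ~ 1/Lp and g_eff ~ c_s^2/R0 given above.
# All parameter values are assumed for illustration.
import math

c_s = 3.0e5     # plasma sound speed, m/s (assumed)
R0  = 1.7       # major radius, m (assumed, tokamak-scale)
Lp  = 0.02      # pressure-gradient length, m (assumed, edge-plasma scale)

gamma = math.sqrt(c_s**2 / (R0 * Lp))
print(f"gamma ~ {gamma:.2e} 1/s")     # ~ 1.6e6 s^-1 for these assumed values
```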
Similarity with the Rayleigh–Taylor instability.
The RBM instability is similar to the Rayleigh–Taylor instability (RT), with Earth gravity formula_4 replaced by the effective gravity formula_5, except that for the RT instability, formula_4 acts on the mass density formula_6 of the fluid, whereas for the RBM instability, formula_5 acts on the pressure formula_7 of the plasma. | [
{
"math_id": 0,
"text": "\\gamma"
},
{
"math_id": 1,
"text": "\\gamma^2 = -\\vec{g_{eff}}\\cdot\\frac{\\nabla p}{p}"
},
{
"math_id": 2,
"text": "|\\nabla p|\\sim \\frac{p}{L_p}"
},
{
"math_id": 3,
"text": "g_{eff}=c_s^2|\\frac{\\nabla B}{B}|\\sim 1/R_0"
},
{
"math_id": 4,
"text": "\\vec g"
},
{
"math_id": 5,
"text": "\\vec g_{eff}"
},
{
"math_id": 6,
"text": "\\rho"
},
{
"math_id": 7,
"text": "p"
}
]
| https://en.wikipedia.org/wiki?curid=13307983 |
1331039 | Dirac large numbers hypothesis | Hypothesis relating age of the universe to physical constants
The Dirac large numbers hypothesis (LNH) is an observation made by Paul Dirac in 1937 relating ratios of size scales in the Universe to that of force scales. The ratios constitute very large, dimensionless numbers: some 40 orders of magnitude in the present cosmological epoch. According to Dirac's hypothesis, the apparent similarity of these ratios might not be a mere coincidence but instead could imply a cosmology with these unusual features:
Background.
LNH was Dirac's personal response to a set of large number "coincidences" that had intrigued other theorists of his time. The "coincidences" began with Hermann Weyl (1919), who speculated that the observed radius of the universe, "R"U, might also be the hypothetical radius of a particle whose rest energy is equal to the gravitational self-energy of the electron:
formula_2
where,
formula_3
formula_4 with formula_5
and "r"e is the classical electron radius, "m"e is the mass of the electron, "m"H denotes the mass of the hypothetical particle, and "r"H is its electrostatic radius.
The coincidence was further developed by Arthur Eddington (1931) who related the above ratios to N, the estimated number of charged particles in the universe, with the following ratio:
formula_6.
In addition to the examples of Weyl and Eddington, Dirac was also influenced by the primeval-atom hypothesis of Georges Lemaître, who lectured on the topic in Cambridge in 1933. The notion of a varying-"G" cosmology first appears in the work of Edward Arthur Milne a few years before Dirac formulated LNH. Milne was inspired not by large number coincidences but by a dislike of Einstein's general theory of relativity. For Milne, space was not a structured object but simply a system of reference in which relations such as this could accommodate Einstein's conclusions:
formula_7
where "M"U is the mass of the universe and "t" is the age of the universe. According to this relation, "G" increases over time.
Dirac's interpretation of the large number coincidences.
The Weyl and Eddington ratios above can be rephrased in a variety of ways, as for instance in the context of time:
formula_8
where "t" is the age of the universe, formula_9 is the speed of light and "r"e is the classical electron radius. Hence, in units where "c" = 1 and "r"e = 1, the age of the universe is about 1040 units of time. This is the same order of magnitude as the ratio of the electrical to the gravitational forces between a proton and an electron:
formula_10
Hence, interpreting the charge formula_11 of the electron, the masses formula_12 and formula_13 of the proton and electron, and the permittivity factor formula_14 in atomic units (equal to 1), the value of the gravitational constant is approximately 10^−40. Dirac interpreted this to mean that formula_15 varies with time as formula_16. Although George Gamow noted that such a temporal variation does not necessarily follow from Dirac's assumptions, a corresponding change of "G" has not been found.
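As a check of the order of magnitude quoted above, the force ratio can be evaluated directly from CODATA constants; the sketch below (assuming SciPy is available for its constants module) gives roughly 2.3 × 10^39, i.e. some 39 to 40 orders of magnitude, consistent with the approximate figure in the text.

```python
# Sketch: the ratio of electrostatic to gravitational attraction between a
# proton and an electron, e^2 / (4*pi*eps0 * G * m_p * m_e), using CODATA
# values from scipy.constants.
from scipy import constants as const

ratio = const.e**2 / (4 * const.pi * const.epsilon_0
                      * const.G * const.m_p * const.m_e)
print(f"electric/gravitational force ratio ~ {ratio:.2e}")   # ~ 2.3e39
```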
According to general relativity, however, "G" is constant, otherwise the law of conserved energy is violated. Dirac met this difficulty by introducing into the Einstein field equations a gauge function that describes the structure of spacetime in terms of a ratio of gravitational and electromagnetic units. He also provided alternative scenarios for the continuous creation of matter, one of the other significant issues in LNH:
Later developments and interpretations.
Dirac's theory has inspired and continues to inspire a significant body of scientific literature in a variety of disciplines, with it sparking off many speculations, arguments and new ideas in terms of applications. In the context of geophysics, for instance, Edward Teller seemed to raise a serious objection to LNH in 1948 when he argued that variations in the strength of gravity are not consistent with paleontological data. However, George Gamow demonstrated in 1962 how a simple revision of the parameters (in this case, the age of the Solar System) can invalidate Teller's conclusions. The debate is further complicated by the choice of LNH cosmologies: In 1978, G. Blake argued that paleontological data is consistent with the "multiplicative" scenario but not the "additive" scenario. Arguments both for and against LNH are also made from astrophysical considerations. For example, D. Falik argued that LNH is inconsistent with experimental results for microwave background radiation whereas Canuto and Hsieh argued that it "is" consistent. One argument that has created significant controversy was put forward by Robert Dicke in 1961. Known as the anthropic coincidence or fine-tuned universe, it simply states that the large numbers in LNH are a necessary coincidence for intelligent beings since they parametrize fusion of hydrogen in stars and hence carbon-based life would not arise otherwise.
Various authors have introduced new sets of numbers into the original "coincidence" considered by Dirac and his contemporaries, thus broadening or even departing from Dirac's own conclusions. Jordan (1947) noted that the mass ratio for a typical star (specifically, a star of the Chandrasekhar mass, itself a constant of nature, approx. 1.44 solar masses) and an electron approximates to 10^60, an interesting variation on the 10^40 and 10^80 that are typically associated with Dirac and Eddington respectively. (The physics defining the Chandrasekhar mass produces a ratio that is the −3/2 power of the gravitational fine-structure constant, 10^−40.)
Modern studies.
Several authors have recently identified and pondered the significance of yet another large number, approximately 120 orders of magnitude. This is for example the ratio of the theoretical and observational estimates of the energy density of the vacuum, which Nottale (1993) and Matthews (1997) associated in an LNH context with a scaling law for the cosmological constant. Carl Friedrich von Weizsäcker identified 10^120 with the ratio of the universe's volume to the volume of a typical nucleon bounded by its Compton wavelength, and he identified this ratio with the sum of elementary events or bits of information in the universe.
Valev (2019) found an equation connecting cosmological parameters (for example the density of the universe) and Planck units (for example the Planck density). This ratio of densities, and other ratios using four fundamental constants (the speed of light in vacuum c, the Newtonian constant of gravity G, the reduced Planck constant ℏ, and the Hubble constant H), computes to a definite number, 32.8·10^120. This provides evidence of the Dirac large numbers hypothesis by connecting the macro-world and the micro-world.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "G \\propto 1/t\\,"
},
{
"math_id": 1,
"text": "M \\propto t^2"
},
{
"math_id": 2,
"text": "\\frac {R_\\text{U}}{r_\\text{e}}\\approx \\frac{r_\\text{H}}{r_\\text{e}} \\approx 4.1666763 \\cdot 10^{42} \\approx 10^{42.62\\ldots} ,"
},
{
"math_id": 3,
"text": "r_\\text{e} = \\frac {e^2}{4 \\pi \\epsilon_0 \\ m_\\text{e} c^2} \\approx 3.7612682 \\cdot 10^{-16} \\mathrm{m}"
},
{
"math_id": 4,
"text": "r_\\text{H} = \\frac {e^2}{4 \\pi \\epsilon_0 \\ m_\\text{H} c^2} \\approx 1.5671987 \\cdot 10^{27} \n \\,\\mathrm{m}"
},
{
"math_id": 5,
"text": "m_\\text{H} c^2 = \\frac {Gm_\\text{e}^2}{r_\\text{e}}"
},
{
"math_id": 6,
"text": "\\frac {e^2}{4 \\pi \\epsilon_0 \\ Gm_\\text{e}^2} \\approx 4.1666763 \\cdot 10^{42} \\approx \\sqrt {N}"
},
{
"math_id": 7,
"text": "G = \\left(\\!\\frac{c^3}{M_\\text{U}}\\!\\right)t,"
},
{
"math_id": 8,
"text": "\\frac {c\\,t}{r_\\text{e}} \\approx 3.47 \\cdot 10^{41} \\approx 10^{42},"
},
{
"math_id": 9,
"text": "c"
},
{
"math_id": 10,
"text": "\\frac{e^2}{4 \\pi \\epsilon_0 G m_\\text{p} m_\\text{e}} \\approx 10^{40}."
},
{
"math_id": 11,
"text": "e"
},
{
"math_id": 12,
"text": "m_\\text{p}"
},
{
"math_id": 13,
"text": "m_\\text{e}"
},
{
"math_id": 14,
"text": " 4 \\pi \\epsilon_0"
},
{
"math_id": 15,
"text": "G"
},
{
"math_id": 16,
"text": "G \\approx 1/t"
}
]
| https://en.wikipedia.org/wiki?curid=1331039 |
13313910 | Sticking coefficient | Sticking coefficient is the term used in surface physics to describe the ratio of the number of adsorbate atoms (or molecules) that adsorb, or "stick", to a surface to the total number of atoms that impinge upon that surface during the same period of time. Sometimes the symbol Sc is used to denote this coefficient, and its value is between 1 (all impinging atoms stick) and 0 (no atoms stick). The coefficient is a function of surface temperature, surface coverage (θ) and structural details as well as the kinetic energy of the impinging particles. The original formulation was for molecules adsorbing from the gas phase and the equation was later extended to adsorption from the liquid phase by comparison with molecular dynamics simulations. For use in adsorption from liquids the equation is expressed based on solute density (molecules per volume) rather than the pressure.
Derivation.
When arriving at a site of a surface, an adatom has three options. There is a probability that it will adsorb to the surface (formula_0), a probability that it will migrate to another site on the surface (formula_1), and a probability that it will desorb from the surface and return to the bulk gas (formula_2). For an empty site (θ=0) the sum of these three options is unity.
formula_3
For a site already occupied by an adatom (θ>0), there is no probability of adsorbing, and so the probabilities sum as:
formula_4
For the first site visited, the overall P of migrating is the P of migrating from an empty site weighted by (1 − θ) plus the P of migrating from a filled site weighted by θ. The same is true for the P of desorption. The P of adsorption, however, is zero for an already filled site, so only the empty-site term appears.
formula_5
formula_6
formula_7
The P of migrating from the second site is the P of migrating from the first site "and then" migrating from the second site, and so we multiply the two values.
formula_8
Thus the sticking probability (formula_9) is the P of sticking of the first site, plus the P of migrating from the first site "and then" sticking to the second site, plus the P of migrating from the second site "and then" sticking at the third site etc.
formula_10
formula_11
There is an identity we can make use of.
formula_12
formula_13
The sticking coefficient when the coverage is zero, formula_14, can be obtained by simply setting formula_15. We also recall that
formula_16
formula_17
formula_18
If we just look at the P of migration at the first site, we see that it is certainty minus all other possibilities.
formula_19
Using this result, and rearranging, we find:
formula_20
formula_21
formula_22
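A small numerical sketch of the resulting coverage dependence formula_21 (the value of K below is assumed for illustration only):

```python
# Sketch of the coverage dependence s/s0 = [1 + K*theta/(1-theta)]^-1
# derived above; the value of K is assumed for illustration.
K = 0.5                      # assumed ratio P_d' / (P_a + P_d)

for theta in (0.0, 0.25, 0.5, 0.75, 0.9):
    s_over_s0 = 1.0 / (1.0 + K * theta / (1.0 - theta))
    print(f"theta = {theta:.2f}  ->  s/s0 = {s_over_s0:.3f}")
# s/s0 = 1 at zero coverage and falls toward 0 as theta -> 1
```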
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P_a"
},
{
"math_id": 1,
"text": "P_m"
},
{
"math_id": 2,
"text": "P_d"
},
{
"math_id": 3,
"text": " P_a + P_m + P_d=1 "
},
{
"math_id": 4,
"text": " P_d'+P_m'=1 "
},
{
"math_id": 5,
"text": " P_{m1}=P_m(1-\\theta)+P_m'(\\theta) "
},
{
"math_id": 6,
"text": " P_{d1}=P_d(1-\\theta)+P_d'(\\theta) "
},
{
"math_id": 7,
"text": " P_{a1}=P_a(1-\\theta) "
},
{
"math_id": 8,
"text": " P_{m2}=P_{m1} \\times P_{m1}=P_{m1}^2 "
},
{
"math_id": 9,
"text": " s_c "
},
{
"math_id": 10,
"text": " s=P_a(1-\\theta)+P_{m1}P_a(1-\\theta)+P_{m1}^2P_a(1-\\theta)..."
},
{
"math_id": 11,
"text": " s=P_a(1-\\theta)\\sum_{n=0}^{\\infin} P_{m1}^n "
},
{
"math_id": 12,
"text": "\\sum_{n=0}^{\\infin} x^n =\\frac{1}{1-x}\\forall x<1"
},
{
"math_id": 13,
"text": "\\therefore s=P_a(1-\\theta)\\frac{1}{1-P_{m1}}"
},
{
"math_id": 14,
"text": "s_0"
},
{
"math_id": 15,
"text": "\\theta=0"
},
{
"math_id": 16,
"text": "1-P_{m1}=P_a+P_d"
},
{
"math_id": 17,
"text": " s_0=\\frac{P_a}{P_a+P_d} "
},
{
"math_id": 18,
"text": " \\frac{s}{s_0}=\\frac{P_a(1-\\theta)}{1-P_{m1}}\\frac{P_a+P_d}{P_a} "
},
{
"math_id": 19,
"text": " P_{m1}=1-P_d(1-\\theta)-P_d'(\\theta)-P_a(1-\\theta) "
},
{
"math_id": 20,
"text": " \\frac{s}{s_0}=\\left[1+\\frac{P_d'\\theta}{(P_a+P_d)(1-\\theta)}\\right]^{-1} "
},
{
"math_id": 21,
"text": " \\frac{s}{s_0}=\\left[1+\\frac{K\\theta}{1-\\theta}\\right]^{-1} "
},
{
"math_id": 22,
"text": " K\\overset{\\underset{\\mathrm{def}}{}}{=}\\frac{P_d'}{P_a+P_d} "
}
]
| https://en.wikipedia.org/wiki?curid=13313910 |
13314050 | Conversation theory | Cybernetic and dialectic framework
Conversation theory is a cybernetic approach to the study of conversation, cognition and learning that may occur between two participants who are engaged in conversation with each other. It presents an experimental framework heavily utilizing human-computer interactions and computer theoretic models as a means to present a scientific theory explaining how conversational interactions lead to the emergence of knowledge between participants. The theory was developed by Gordon Pask, who credits Bernard Scott, Dionysius Kallikourdis, Robin McKinnon-Wood, and others during its initial development and implementation as well as Paul Pangaro during subsequent years.
Overview.
Conversation theory may be described as a formal theory of conversational process, as well as a theoretical methodology concerned with concept-forming and concept-sharing between conversational participants. It may be viewed as a framework for examining learning and development through conversational techniques realized in human-machine interactions, the results of which may then inform approaches to education, educational psychology, and epistemology. While the framework is interpretable as a psychological framework with educational applications (specifically, as a general framework to think about teaching and learning), Pask's motivation in developing the theory has been interpreted by some who worked closely with him as developing upon certain theoretical concerns regarding the nature of cybernetic inquiry.
The theory has been noted to have been influenced by a variety of psychological, pedagogical and philosophical sources such as Lev Vygotsky, R. D. Laing and George H. Mead. Some authors have suggested that the kind of human-machine learning interactions documented in conversation theory mirror Vygotsky's descriptions of the zone of proximal development and his descriptions of spontaneous and scientific concepts.
The theory prioritizes learning and teaching approaches related to education. A central idea of the theory is that learning occurs through conversations: For if participant "A" is to be conscious with participant "B" of a topic of inquiry, both participants must be able to converse with each other about that topic. Because of this, participants engaging in a discussion about a subject matter make their knowledge claims explicit through the means of such conversational interactions.
The theory is concerned with a variety of "psychological, linguistic, epistemological, social or non-commitally mental events of which there is awareness". Awareness in this sense is not of a person-specific type, i.e., it is not necessarily localized in a single participant. Instead, the type of awareness examined in conversation theory is the kind of joint awareness that may be shared between entities. While there is an acknowledgment of its similarities to phenomenology, the theory extends its analysis to examine cognitive processes. However, the concept of cognition is not viewed as merely being confined to an individual's brain or central nervous system. Instead, cognition may occur at the level of a group of people (leading to the emergence of social awareness), or may characterize certain types of computing machines.
Initial results from the theory led to a distinction in the types of learning strategies participants used during the learning process, whereby students in general gravitated towards "holistic" or "serialist" learning strategies (with the optimal mixture producing a "versatile" learning strategy).
Conversation.
Following Hugh Dubberly and Paul Pangaro, a conversation in the context of conversation theory involves an exchange between two participants whereby each participant is contextualized as a learning system whose internal states are changed through the course of the conversation. What can be discussed through conversation, i.e., topics of discussion, are said to belong to a conversational domain.
Conversation is distinguished from the mere exchange of information as seen in information theory, by the fact that utterances are interpreted within the context of a given perspective of such a learning system. Each participant's meanings and perceptions change during the course of a conversation, and each participant can agree to commit to act in certain ways during the conversation. In this way, conversation permits not only learning but also collaboration through participants coordinating themselves and designating their roles through the means of conversation.
Since meanings are agreed during the course of a conversation, and since purported agreements can be illusory (whereby we think we have the same understanding of a given topic but in fact do not), an empirical approach to the study of conversation would require stable reference points during such conversational exchanges between peers so as to permit reproducible results. Using computer theoretical models of cognition, conversation theory can document these intervals of understanding that arise in the conversations between two participating individuals, such that the development of individual and collective understandings can be analyzed rigorously.
In this way, Pask has been argued to have been an early pioneer in AI-based educational approaches: Having proposed that advances in computational media may enable conversational forms of interactions to take place between man and machine.
Language.
The types of languages that conversation theory utilizes in its approach are distinguishable based on a language's role in relation to an experiment in which a conversation is examined as the subject of inquiry; thus, it follows that conversations can be conducted at different levels depending on the role a language has in relation to an experiment. The types of languages are as follows: natural languages used for general discussions outside the experiment; object languages which are the subject of inquiry during an experiment; and finally a metalanguage which is used to talk about the design, management, and results of an experiment.
A natural language formula_0 is treated as an unrestricted language used between a source (say a participant) and an interrogator or analyst (say an experimenter).
For this reason, it may be considered a language for general discussion in the context of conversation theory.
An object language formula_1, meanwhile, has some of the qualities of a natural language (which permits commands, questions, ostension and predication), but is used in conversation theory specifically as the language studied during experiments. Finally, the metalanguage formula_2 is an observational language used by an interrogator or analyst for describing the conversational system under observation, prescribing actions that are permitted within such a system, and posing parameters regarding what may be discussed during an experiment under observation.
The object language formula_1 differs from most formal languages by virtue of being "a command and question language[,] not an assertoric language like [a] predicate calculus". Moreover, formula_1 is a language primarily dealing with metaphors indicating material analogies, and not with the kind of propositions dealing with truth or falsity values. Since conversation theory specifically focuses on learning and development within human subjects, the object language is separated into two distinct modes of conversing.
Conversation theory conceptualises learning as being the result of two integrated levels of control: The first level of control is designated by formula_3 and designates a set of problem-solving procedures which attempt to attain goals or subgoals, whereas the second level of control is designated as formula_4 and denotes various constructive processes that have been acquired by a student through maturation, imprinting and previous learning.
The object language formula_1 then is demarcated in conversation theory based on these considerations, whereby it is split between formula_5 and formula_6 lines of inquiry such that an object language is the ordered pair of such discourse types formula_7.
According to Bernard Scott, formula_5 discourse of an object language may be conceptualized as the level of how, i.e., discourse that is concerned with "how to “do” a topic: how to recognize it, construct it, maintain it and so on".
Meanwhile, formula_6 discourse may be conceptualized as the level of why, i.e., it is discourse "concerned with explaining or justifying what a topic means in terms of other topics".
Concepts.
A concept in conversation theory is conceived of as the production, reproduction, and maintenance of a given topic relation formula_8 from other topic relations formula_9, all belonging to a given conversational domain formula_10. This implies formula_11, where formula_12 and formula_13 range over a finite index set. A concept must satisfy the twin condition that it must entail formula_14 and be entailed formula_15 by other topics.
A concept in the context of conversation theory is not a class, nor description of a class, nor a stored description: Instead, a concept is specifically used to reconstruct, reproduce or stabilize relations. Thus, if formula_16 is the head topic of discussion, then formula_17 implies that the concept of that relation produces, reproduces, and maintains that relation.
Now, a concept itself is considered to consist of the ordered pair containing a program and an interpretation:
formula_18
Here, a program attempts to derive a given topic relation, while an interpretation refers to the compilation of that program. In other words, given a specific topic relation, a program attempts to derive that relation through a series of other topic relations, which are compiled in such a way as to derive the initial topic relation. A concept as defined above is considered to be a formula_1-procedure, which is embodied by an underlying processor called a formula_1-processor.
In this way, Pask envisages concepts as mental organisations that hold a hypothesis and seek to test that hypothesis in order to confirm or deny its validity. This notion of a concept has been noted as formally resembling a TOTE cycle discussed by Miller, Galanter and Pribram. The contents and structure that a concept might have at a given interaction of its continuous deformation can be represented through an entailment structure.
Such conceptual forms are said to be emergent through conversational interactions. They are encapsulated through entailment structures, which are a way of visualizing an organized and publicly available collection of resultant knowledge. Entailment structures may afford certain advantages compared to certain semantic network structures, as they force semantic relations to be expressed as belonging to coherent structures. The entailment structure is composed of a series of nodes and arrows representing a series of topic relations and the derivations of such topic relations. For example:
In the above illustration, let formula_19, such that there are topic relations that are members of a set of topic relations. Each topic relation is represented by a node, and the entailment represented by the black arc. It follows that formula_20 in the case above, such that the topics "P" and "Q" entail the topic of "T".
Assuming we use the same derivation process for all topics in the above entailment structure, we are left with the following product, as illustrated above. This represents a minimal entailment mesh consisting of a triad of derivations: formula_21, formula_22, and formula_23. The solid arc indicates that a given head topic relation is derived from subordinate topics, whereas the arcs with dotted lines represent how the head topic may be used to derive other topics. Finally:
This represents two solid arcs permitting alternative derivations of the topic "T". This can be expressed as formula_24, which reads: either the set containing "P" and "Q", or the set containing "R" and "S", entails "T". Lastly, a formal analogy is shown where two topics "T" and "T"' belonging to two entailment meshes are demonstrated to have a one-to-one correspondence with each other. The diamond shape formula_10 below denotes the analogy relation that can be claimed to exist between any three topics of each entailment mesh.
The relation of one topic "T" to another "T"' by an analogy can also be seen as being based on an isomorphism formula_25 and a semantic distinction formula_26 between two individual universes of interpretation formula_27. Assuming an analogy holds for two topics in two distinct entailment meshes, then it should hold for all if the analogy is to be considered coherent and stable.
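Although Pask's own notation for entailment meshes is diagrammatic, the alternative derivations of a small mesh can be represented directly as a data structure. The Python sketch below is purely illustrative (the topic names follow the example above, and the helper function is an ad-hoc construction, not part of Pask's formalism); it encodes the mesh in which either {P, Q} or {R, S} entails T, and computes which topics can be reproduced from a given starting set.
<syntaxhighlight lang="python">
# Illustrative sketch of an entailment mesh: each topic maps to the alternative
# sets of topics from which it can be derived.
MESH = {
    "T": [{"P", "Q"}, {"R", "S"}],   # {P,Q} entails T, or {R,S} entails T
    "P": [{"T", "Q"}],               # {T,Q} entails P
    "Q": [{"P", "T"}],               # {P,T} entails Q
}

def derivable(known, mesh):
    """Return the closure of topics that can be (re)produced from `known`."""
    known = set(known)
    changed = True
    while changed:
        changed = False
        for topic, alternatives in mesh.items():
            if topic not in known and any(alt <= known for alt in alternatives):
                known.add(topic)
                changed = True
    return known

print(derivable({"R", "S"}, MESH))   # T becomes derivable, P and Q do not
print(derivable({"P", "Q"}, MESH))   # the whole triad can be reproduced
</syntaxhighlight>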
Cognitive Reflector.
From conversation theory, Pask developed what he called a "Cognitive Reflector". This is a virtual machine for selecting and executing concepts or topics from an entailment mesh shared by at least a pair of participants. It features an external modelling facility on which agreement between, say, a teacher and pupil may be shown by reproducing public descriptions of behaviour. We see this in essay and report writing or the "practicals" of science teaching.
Lp was Pask's protolanguage which produced operators like Ap which concurrently executes the concept, Con, of a Topic, T, to produce a Description, D. Thus:
Ap(Con(T)) => D(T), where => stands for produces.
A succinct account of these operators is presented in Pask. Amongst many insights, he points out that three indexes are required for concurrent execution, two for parallel and one to designate a serial process. He subsumes this complexity by designating participants A, B, etc.
In Commentary toward the end of Pask, he states:
The form not the content of the theories (conversation theory and interactions of actors theory) return to and is congruent with the forms of physical theories; such as wave particle duality (the set theoretic unfoldment part of conversation theory is a radiation and its reception is the interpretation by the recipient of the descriptions so exchanged, and vice versa). The particle aspect is the recompilation by the listener of what a speaker is saying. Theories of many universes, one at least for each participant A and one to participant B- are bridged by analogy. As before this is the "truth value of any interaction; the metaphor for which is culture itself".
Learning strategies.
In order to facilitate learning, Pask argued that subject matter should be represented in the form of structures which show what is to be learned. These structures exist at a variety of different levels depending upon the extent of the relationships displayed. The critical method of learning according to conversation theory is "teachback", in which one person teaches another what they have learned.
Pask identified two different types of learning strategies: serialists, who progress through an entailment structure step by step in a sequential fashion, and holists, who look for higher-order relations and approach the structure as a whole.
The ideal is the versatile learner, who is neither a vacuous holist "globe trotter" nor a serialist who knows little of the context of his work.
In learning, at the stage where one's understanding converges, many cyberneticians describe the act of understanding as a closed loop. Instead of simply "taking in" new information, one goes back to look at one's understandings, pulls together information that was "triggered", and forms a new connection. This connection becomes tighter and one's understanding of a certain concept is solidified or "stable" (Pangaro, 2003). Furthermore, Gordon Pask emphasized that conflict is the basis for the notion of "calling for" additional information (Pangaro, 1992).
According to Entwistle, the experiments which led to the investigation of the phenomena later denoted by the term learning strategy came about through the implementation of a variety of learning tasks. Initially, this was done through utilising either CASTE, INTUITION, or the Clobbits pseudo-taxonomy. However, given issues resulting from either the time-consuming nature of operating the experiments or the inexactness of experimental conditions, new tests were created in the form of the Spy Ring History test and the Smuggler's test. The former test involved a participant having to learn the history of a fictitious spy ring (in other words, the history of a fictitious espionage network), with the participant having to learn about the history of five spies in three countries over a period of five years. The comprehension learning component of the test involved learning the similarities and differences between a set of networks, whereas the operation learning aspect of the test involved learning the role each spy played and what sequence of actions that spy performed over a given year.
While Entwistle noted difficulties regarding the length of such tests for groups of students who were engaged in the Spy Ring History test, the results of the test did seem to correspond with the type of learning strategies discussed. However, it has been noted that while Pask and associates' work on learning styles has been influential in both the development of conceptual tools and methodology, the Spy Ring History test and Smuggler's test may have been biased towards STEM students rather than humanities students in their implementation, with Entwistle arguing that the "rote learning of formulae and definitions, together with a positive reaction to solving puzzles and problems of a logical nature, are characteristics more commonly found in science than arts student".
Applications.
One potential application of conversation theory that has been studied and developed is as an alternative approach to common types of search engine information retrieval algorithms. Unlike PageRank-like algorithms, which determine the priority of a search result based on how many hyperlinks on the web link to them, conversation theory has been used to apply a discursive approach to web search requests.
"ThoughtShuffler" is an attempt to build a search engine utilizing design principles from conversation theory: In this approach, terms that are input into a search request yield search results relating to other terms that derive or help provide context to the meaning of the first, in a way that mimics derivations of topics in an entailment structure. For example, given the input of a search term, a neighbourhood of corresponding terms that comprise the meaning of the first term may be suggested for the user to explore. In doing this, the search engine interface highlights snippets of webpages corresponding to the neighbourhood terms that help provide meaning to the first.
The aim of this design is to provide just enough information for a user to become curious about a topic, in order to induce the intention to explore other subtopics related to the main term input into the search engine.
Footnotes.
<templatestyles src="Refbegin/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Citation Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "L^{+}"
},
{
"math_id": 1,
"text": "L"
},
{
"math_id": 2,
"text": "L^{*}"
},
{
"math_id": 3,
"text": "Lev\\;0"
},
{
"math_id": 4,
"text": "Lev\\;1"
},
{
"math_id": 5,
"text": "L^{0}"
},
{
"math_id": 6,
"text": "L^{1}"
},
{
"math_id": 7,
"text": "L= \\langle L^{0}, L^{1} \\rangle"
},
{
"math_id": 8,
"text": "R_{i}"
},
{
"math_id": 9,
"text": "R_{j}"
},
{
"math_id": 10,
"text": "R"
},
{
"math_id": 11,
"text": "R_{i}, R_{j} \\in R"
},
{
"math_id": 12,
"text": "i"
},
{
"math_id": 13,
"text": "j"
},
{
"math_id": 14,
"text": "R_{i} \\vdash R_{j}"
},
{
"math_id": 15,
"text": "R_{j} \\vdash R_{i}"
},
{
"math_id": 16,
"text": "R_{H}"
},
{
"math_id": 17,
"text": "CON(R_{H})\\rightarrow R_{H}"
},
{
"math_id": 18,
"text": "CON \\triangleq \\langle PROG, INTER \\rangle"
},
{
"math_id": 19,
"text": "TPQ \\in \\{ R_{i} \\}"
},
{
"math_id": 20,
"text": " \\langle P, Q \\rangle \\vdash T "
},
{
"math_id": 21,
"text": " \\langle P, Q \\rangle \\vdash T"
},
{
"math_id": 22,
"text": " \\langle T, Q \\rangle \\vdash P"
},
{
"math_id": 23,
"text": " \\langle P, T \\rangle \\vdash Q"
},
{
"math_id": 24,
"text": " \\langle P, Q \\rangle \\lor \\langle R, S \\rangle \\vdash T"
},
{
"math_id": 25,
"text": "\\Leftrightarrow"
},
{
"math_id": 26,
"text": "/"
},
{
"math_id": 27,
"text": "\\mathbb{U}"
}
]
| https://en.wikipedia.org/wiki?curid=13314050 |
13314087 | KK-theory | Theory in mathematics
In mathematics, "KK"-theory is a common generalization both of K-homology and K-theory as an additive bivariant functor on separable C*-algebras. This notion was introduced by the Russian mathematician Gennadi Kasparov in 1980.
It was influenced by Atiyah's concept of Fredholm modules for the Atiyah–Singer index theorem, and the classification of extensions of C*-algebras by Lawrence G. Brown, Ronald G. Douglas, and Peter Arthur Fillmore in 1977. In turn, it has had great success in the operator-algebraic approach to index theory and the classification of nuclear C*-algebras, as it was the key to the solutions of many problems in operator K-theory, such as the calculation of "K"-groups. Furthermore, it was essential in the development of the Baum–Connes conjecture and plays a crucial role in noncommutative topology.
"KK"-theory was followed by a series of similar bifunctor constructions such as the "E"-theory and the bivariant periodic cyclic theory, most of them having more category-theoretic flavors, or concerning another class of algebras rather than that of the separable "C"*-algebras, or incorporating group actions.
Definition.
The following definition is quite close to the one originally given by Kasparov. This is the form in which most KK-elements arise in applications.
Let "A" and "B" be separable "C"*-algebras, where "B" is also assumed to be σ-unital. The set of cycles is the set of triples ("H", ρ, "F"), where "H" is a countably generated graded Hilbert module over "B", ρ is a *-representation of "A" on "H" as even bounded operators which commute with "B", and "F" is a bounded operator on "H" of degree 1 which again commutes with "B". They are required to fulfill the condition that
formula_0
for "a" in "A" are all "B"-compact operators. A cycle is said to be degenerate if all three expressions are 0 for all "a".
Two cycles are said to be homologous, or homotopic, if there is a cycle between "A" and "IB", where "IB" denotes the "C"*-algebra of continuous functions from [0,1] to "B", such that there is an even unitary operator from the 0-end of the homotopy to the first cycle, and a unitary operator from the 1-end of the homotopy to the second cycle.
The KK-group KK(A, B) between A and B is then defined to be the set of cycles modulo homotopy. It becomes an abelian group under the direct sum operation of bimodules as the addition, and the class of the degenerate modules as its neutral element.
There are various, but equivalent, definitions of KK-theory, notably the one due to Joachim Cuntz which eliminates the bimodule and the 'Fredholm' operator F from the picture and puts the accent entirely on the homomorphism ρ. More precisely it can be defined as the set of homotopy classes
formula_1,
of *-homomorphisms from the classifying algebra "qA" of quasi-homomorphisms to the "C"*-algebra of compact operators of an infinite dimensional separable Hilbert space tensored with "B". Here, "qA" is defined as the kernel of the map from the "C"*-algebraic free product "A"*"A" of "A" with itself to "A" defined by the identity on both factors.
Properties.
When one takes the "C"*-algebra C of the complex numbers as the first argument of "KK" as in "KK"(C, "B") this additive group is naturally isomorphic to the "K"0-group "K"0("B") of the second argument "B". In the Cuntz point of view, a "K"0-class of "B" is nothing but a homotopy class of *-homomorphisms from the complex numbers to the stabilization of "B". Similarly when one takes the algebra "C"0(R) of the continuous functions on the real line decaying at infinity as the first argument, the obtained group "KK"("C"0(R), "B") is naturally isomorphic to "K"1("B").
An important property of "KK"-theory is the so-called Kasparov product, or the composition product,
formula_2,
which is bilinear with respect to the additive group structures. In particular each element of "KK"("A", "B") gives a homomorphism of "K"*("A") → "K"*("B") and another homomorphism "K"*("B") → "K"*("A").
The product can be defined much more easily in the Cuntz picture given that there are natural maps from "QA" to "A", and from "B" to "K"("H") ⊗ "B" which induce "KK"-equivalences.
The composition product gives a new category formula_3, whose objects are given by the separable "C"*-algebras while the morphisms between them are given by elements of the corresponding KK-groups. Moreover, any *-homomorphism of "A" into "B" induces an element of "KK"("A", "B") and this correspondence gives a functor from the original category of the separable "C"*-algebras into formula_3. The approximately inner automorphisms of the algebras become identity morphisms in formula_3.
This functor formula_4 is universal among the split-exact, homotopy invariant and stable additive functors on the category of the separable "C"*-algebras. Any such theory satisfies Bott periodicity in the appropriate sense since formula_3 does.
The Kasparov product can be further generalized to the following form:
formula_5
It contains as special cases not only the K-theoretic cup product, but also the K-theoretic cap, cross, and slant products and the product of extensions.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "[F, \\rho(a)], (F^2-1)\\rho(a), (F-F^*)\\rho(a)"
},
{
"math_id": 1,
"text": "KK(A,B) = [qA, K(H) \\otimes B]"
},
{
"math_id": 2,
"text": "KK(A,B) \\times KK(B,C) \\to KK(A,C)"
},
{
"math_id": 3,
"text": "\\mathsf{KK}"
},
{
"math_id": 4,
"text": "\\mathsf{C^*\\!-\\!alg} \\to \\mathsf{KK}"
},
{
"math_id": 5,
"text": "KK(A, B \\otimes E) \\times KK(B \\otimes D, C) \\to KK(A \\otimes D, C \\otimes E)."
}
]
| https://en.wikipedia.org/wiki?curid=13314087 |
13314265 | Operator K-theory | In mathematics, operator K-theory is a noncommutative analogue of topological K-theory for Banach algebras with most applications used for C*-algebras.
Overview.
Operator K-theory resembles topological K-theory more than algebraic K-theory. In particular, a Bott periodicity theorem holds. So there are only two K-groups, namely "K"0, which is equal to algebraic "K"0, and "K"1. As a consequence of the periodicity theorem, it satisfies excision. This means that it associates to an extension of C*-algebras a long exact sequence, which, by Bott periodicity, reduces to an exact cyclic 6-term sequence.
Operator K-theory is a generalization of topological K-theory, defined by means of vector bundles on locally compact Hausdorff spaces. Here, a vector bundle over a topological space "X" is associated to a projection in the C* algebra of matrix-valued—that is, formula_0-valued—continuous functions over "X". Also, it is known that isomorphism of vector bundles translates to Murray-von Neumann equivalence of the associated projection in "K" ⊗ "C"("X"), where "K" is the compact operators on a separable Hilbert space.
Hence, the "K"0 group of a (not necessarily commutative) C*-algebra "A" is defined as Grothendieck group generated by the Murray-von Neumann equivalence classes of projections in "K" ⊗ "C"("X"). "K"0 is a functor from the category of C*-algebras and *-homomorphisms, to the category of abelian groups and group homomorphisms. The higher K-functors are defined via a C*-version of the suspension: "K"n("A") = "K"0("S""n"("A")), where
"SA" = "C"0(0,1) ⊗ "A".
However, by Bott periodicity, it turns out that "K""n"+2("A") and "K""n"("A") are isomorphic for each "n", and thus the only groups produced by this construction are "K"0 and "K"1.
The key reason for the introduction of K-theoretic methods into the study of C*-algebras was the Fredholm index: Given a bounded linear operator on a Hilbert space that has finite-dimensional kernel and cokernel, one can associate to it an integer, which, as it turns out, reflects the 'defect' of the operator - i.e. the extent to which it is not invertible. The Fredholm index map appears in the 6-term exact sequence given by the Calkin algebra. In the analysis on manifolds, this index and its generalizations played a crucial role in the index theory of Atiyah and Singer, where the topological index of the manifold can be expressed via the index of elliptic operators on it. Later on, Brown, Douglas and Fillmore observed that the Fredholm index was the missing ingredient in classifying essentially normal operators up to a certain natural equivalence. These ideas, together with Elliott's classification of AF C*-algebras via K-theory, led to a great deal of interest in adapting methods such as K-theory from algebraic topology into the study of operator algebras.
This, in turn, led to K-homology, Kasparov's bivariant KK-theory, and, more recently, Connes and Higson's E-theory. | [
{
"math_id": 0,
"text": "M_n(\\mathbb{C})"
}
]
| https://en.wikipedia.org/wiki?curid=13314265 |
1331480 | Gnomonic projection | Projection of a sphere through its center onto a plane
A gnomonic projection, also known as a central projection or rectilinear projection, is a perspective projection of a sphere, with center of projection at the sphere's center, onto any plane not passing through the center, most commonly a tangent plane. Under gnomonic projection every great circle on the sphere is projected to a straight line in the plane (a great circle is a geodesic on the sphere, the shortest path between any two points, analogous to a straight line on the plane). More generally, a gnomonic projection can be taken of any n-dimensional hypersphere onto a hyperplane.
The projection is the n-dimensional generalization of the trigonometric tangent which maps from the circle to a straight line, and as with the tangent, every pair of antipodal points on the sphere projects to a single point in the plane, while the points on the plane through the sphere's center and parallel to the image plane project to points at infinity; often the projection is considered as a one-to-one correspondence between points in the hemisphere and points in the plane, in which case any finite part of the image plane represents a portion of the hemisphere.
The gnomonic projection is azimuthal (radially symmetric). No shape distortion occurs at the center of the projected image, but distortion increases rapidly away from it.
The gnomonic projection originated in astronomy for constructing sundials and charting the celestial sphere. It is commonly used as a geographic map projection, and can be convenient in navigation because great-circle courses are plotted as straight lines. Rectilinear photographic lenses make a perspective projection of the world onto an image plane; this can be thought of as a gnomonic projection of the image sphere (an abstract sphere indicating the direction of each ray passing through a camera modeled as a pinhole). The gnomonic projection is used in crystallography for analyzing the orientations of lines and planes of crystal structures. It is used in structural geology for analyzing the orientations of fault planes. In computer graphics and computer representation of spherical data, cube mapping is the gnomonic projection of the image sphere onto six faces of a cube.
In mathematics, the space of orientations of undirected lines in 3-dimensional space is called the real projective plane, and is typically pictured either by the "projective sphere" or by its gnomonic projection. When the angle between lines is imposed as a measure of distance, this space is called the elliptic plane. The gnomonic projection of the 3-sphere of unit quaternions, points of which represent 3-dimensional rotations, results in Rodrigues vectors. The gnomonic projection of the hyperboloid of two sheets, treated as a model for the hyperbolic plane, is called the Beltrami–Klein model.
History.
The gnomonic projection is said to be the oldest map projection, speculatively attributed to Thales who may have used it for star maps in the 6th century BC. The path of the shadow-tip or light-spot in a nodus-based sundial traces out the same hyperbolae formed by parallels on a gnomonic map.
Properties.
The gnomonic projection is from the centre of a sphere to a plane tangent to the sphere. The sphere and the plane touch at the tangent point. Great circles transform to straight lines via the gnomonic projection. Since meridians (lines of longitude) and the equator are great circles, they are always shown as straight lines on a gnomonic map. Since the projection is from the centre of the sphere, a gnomonic map can represent less than half of the area of the sphere. Distortion of the scale of the map increases from the centre (tangent point) to the periphery.
As with all azimuthal projections, angles from the tangent point are preserved. The map distance from that point is a function "r"("d") of the true distance "d", given by
formula_0
where "R" is the radius of the Earth. The radial scale is
formula_1
and the transverse scale
formula_2
so the transverse scale increases outwardly, and the radial scale even more.
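These relations make the forward projection easy to compute for a chart centred on an arbitrary tangent point. The Python sketch below is a minimal illustration (the function name, the use of the spherical law of cosines for the distance, and the example locations are choices made for this sketch, not taken from any particular mapping library): it finds the great-circle distance and azimuth of a point as seen from the tangent point, then places it in the plane using the map-distance function above.
<syntaxhighlight lang="python">
from math import radians, sin, cos, tan, acos, atan2

def gnomonic(lat, lon, lat0, lon0, R=6371.0):
    """Project (lat, lon) onto the plane tangent at (lat0, lon0).

    Returns planar (x, y) in the same units as R, with y pointing north,
    using the map distance r(d) = R * tan(d / R).  Only points less than
    90 degrees from the tangent point can be projected.
    """
    phi, lam = radians(lat), radians(lon)
    phi0, lam0 = radians(lat0), radians(lon0)
    # Great-circle distance from the tangent point (spherical law of cosines).
    cos_c = sin(phi0) * sin(phi) + cos(phi0) * cos(phi) * cos(lam - lam0)
    if cos_c <= 0:
        raise ValueError("point is 90 degrees or more from the tangent point")
    d = R * acos(cos_c)
    # Azimuth of the point as seen from the tangent point.
    alpha = atan2(sin(lam - lam0) * cos(phi),
                  cos(phi0) * sin(phi) - sin(phi0) * cos(phi) * cos(lam - lam0))
    r = R * tan(d / R)   # map distance from the centre of the chart
    return r * sin(alpha), r * cos(alpha)

# Example: project Reykjavik onto a chart centred on London.
print(gnomonic(64.15, -21.95, 51.5, -0.1))
</syntaxhighlight>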
Use.
Gnomonic projections are used in seismic work because seismic waves tend to travel along great circles. They are also used by navies in plotting direction finding bearings, since radio signals travel along great circles. Meteors also travel along great circles, with the Gnomonic Atlas Brno 2000.0 being the IMO's recommended set of star charts for visual meteor observations. Aircraft and ship navigators use the projection to find the shortest route between start and destination. The track is first drawn on the gnomonic chart, then transferred to a Mercator chart for navigation.
The gnomonic projection is used extensively in photography, where it is called "rectilinear projection", as it naturally arises from the pinhole camera model where the screen is a plane. Because they are equivalent, the same viewer used for photographic panoramas can be used to render gnomonic maps.
The gnomonic projection is used in astronomy where the tangent point is centered on the object of interest. The sphere being projected in this case is the celestial sphere, "R" = 1, and not the surface of the Earth.
In astronomy, gnomonic projection star charts of the celestial sphere can be used by observers to accurately plot the straight line path of a meteor trail.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " r(d) = R\\,\\tan \\frac d R"
},
{
"math_id": 1,
"text": " r'(d) = \\frac{1}{\\cos^2\\frac d R} "
},
{
"math_id": 2,
"text": " \\frac{1}{\\cos\\frac d R} "
}
]
| https://en.wikipedia.org/wiki?curid=1331480 |
1331587 | B − L | Quantum number; the difference between the baryon and lepton numbers
In particle physics, B" − "L (pronounced "bee minus ell") is a quantum number which is the difference between the baryon number (B) and the lepton number (L) of a quantum system.
Details.
This quantum number is the charge of a global/gauge U(1) symmetry in some Grand Unified Theory models, called U(1)"B"−"L". Unlike baryon number alone or lepton number alone, this hypothetical symmetry would not be broken by chiral anomalies or gravitational anomalies, so long as the symmetry is global, which is why it is often invoked.
If "B" – "L" exists as a symmetry, then for the seesaw mechanism to work "B" – "L" has to be spontaneously broken to give the neutrinos a nonzero mass.
The anomalies that would break baryon number conservation and lepton number conservation individually cancel in such a way that "B" – "L" is always conserved. One hypothetical example is proton decay where a proton ("B" = 1, "L" = 0) would decay into a pion ("B" = 0, "L" = 0) and positron ("B" = 0, "L" = –1).
The weak hypercharge "Y"W is related to "B" – "L" via
formula_0
where "X" charge (not to be confused with the X boson) is the conserved quantum number associated with the global U(1) symmetry Grand Unified Theory.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X + 2\\,Y_\\text{W} = 5\\,( B - L ),"
}
]
| https://en.wikipedia.org/wiki?curid=1331587 |
13316034 | Redintegration | Reconstruction of the whole of something, from a part
Redintegration refers to the restoration of the whole of something from a part of it. The everyday phenomenon is that a small part of a memory can remind a person of the entire memory, for example, “recalling an entire song when a few notes are played.” In cognitive psychology the word is used in reference to phenomena in the field of memory, where it is defined as "the use of long-term knowledge to facilitate recall." The process is hypothesised to be working as "pattern completion", where previous knowledge is used to facilitate the completion of the partially degraded memory trace.
Proust.
The great literary example of redintegration is Marcel Proust's novel "Remembrance of Things Past". The conceit is that the entire seven-volume novel consists of the memories triggered by the taste of a madeleine soaked in lime tea. "I had recognized the taste of the crumb of madeleine soaked in her concoction of lime-flowers which my aunt used to give to me. Immediately the old grey house upon the street, where her room was, rose up like the scenery of a theatre to attach itself to the little pavilion, opening on to the garden, which had been built out behind it for my parents..."
Associationists.
The Associationist School of philosophical psychologists sought to explain redintegration (among other memory-related phenomena) and used it as evidence supporting their theories.
Contemporary memory research.
In the study of item recall in working memory, memories that have partially decayed can be recalled in their entirety. It is hypothesized that this is accomplished by a redintegration process, which allows the entire memory to be reconstructed from the temporary memory trace by using the subject's previous knowledge. The process seems to work because of the redundancy of language. The effects of long-term knowledge on memory’s trace reconstruction have been shown for both visual and auditory presentation and recall. The mechanism of redintegration is still not fully understood and is being actively researched.
Models of redintegration.
Multinomial processing tree.
Schweickert (1993) attempted to model memory redintegration using a multinomial processing tree. In a multinomial processing tree, the cognitive processes and their outcomes are represented with branches and nodes, respectively. The outcome of the cognitive effort is dependent on which terminal node is reached.
In Schweickert's model of recall, a trace of memory can be either intact or partially degraded. If the trace is intact, memory can be restored promptly and accurately. The node for correct recall is reached, and the recall process is terminated. If the memory has partially degraded, the item must be reconstructed through trace redintegration. If the process of redintegration was successful, the memory is recalled correctly.
Thus, the probability of correct recall (formula_0) is formula_1, where formula_2 is the probability of the trace being intact and formula_3 is the probability of correct redintegration. If the trace is neither intact nor successfully redintegrated, the person fails to accurately recall the memory.
Trace redintegration.
Schweickert (1993) proposed that the redintegration of a memory trace happens through two independent processes. In the lexical process, the memory trace is attempted to be converted into a word. In the phonemic process, the memory trace is attempted to be converted into a string of phonemes. Consequently, the probability of correct redintegration (formula_3) becomes a function of formula_4 (lexical process) and/or formula_5 (phonemic process). These processes are autonomous, and their effect on formula_3 depends on whether they take place sequentially or non-sequentially.
Schweickert’s explanation of trace redintegration is analogous to the processes hypothesized to be responsible for repairs of errors in speech.
Though Schweickert indicates that the process of trace redintegration may be facilitated by the context of the situation in which recall takes place (e.g. syntax, semantics), his model does not provide details on the potential influences of such factors.
Extensions.
Schweickert's model was extended by Gathercole and colleagues (1999), who added the concept of a degraded trace. Their model of the multinomial processing tree included an additional node, which represents a decayed memory. Such a degraded trace can no longer undergo redintegration, and the outcome of recall is incorrect. Thus, the probability of correct recall (formula_0) changes to formula_6, where formula_7 is the probability of the trace being intact, formula_3 is the probability of correct redintegration, and formula_8 is the probability of the trace being entirely lost.
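Both versions of the model are simple enough to evaluate directly. The following Python sketch is illustrative only (the parameter names follow the notation above rather than any published code); it computes the predicted probability of correct recall under the original model and under the extension.
<syntaxhighlight lang="python">
def p_correct_schweickert(I, R):
    """Schweickert (1993): P_C = I + R * (1 - I)."""
    return I + R * (1 - I)

def p_correct_gathercole(I, R, T):
    """Gathercole et al. (1999): P_C = I + R * (1 - I - T),
    where T is the probability that the trace is entirely lost."""
    return I + R * (1 - I - T)

# Example: a trace that is intact 40% of the time, correctly redintegrated
# 60% of the time, and (in the extended model) completely lost 20% of the time.
print(p_correct_schweickert(0.4, 0.6))       # 0.76
print(p_correct_gathercole(0.4, 0.6, 0.2))   # 0.64
</syntaxhighlight>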
Criticism.
The main criticism of Schweickert's model concerns its discrete nature. The model treats memory in a binary, all-or-none manner, where a trace can be either intact, leading to correct recall, or partially decayed, with subsequent successful or unsuccessful redintegration. It does not explain the factors underlying the intactness, and cannot account for the differences in the number of failed attempts to recall different items. Moreover, the model does not incorporate the concept of the degree of memory degradation, implying that the level of a trace's decay does not affect the probability of redintegration. This issue was approached by Roodenrys and Miller (2008), whose alternative account of redintegration uses a constrained Rasch model to portray trace degradation as a continuous process.
Influencing factors.
Lexicality.
In immediate recall, trace reconstruction is more accurate for words than for non-words. This has been labeled the "lexicality effect". The effect is hypothesized to occur due to differences in the presence and availability of phonological representations. In contrast to non-words, words possess stable mental representations of the accompanying sounds. Such representations can be retrieved from previous knowledge, facilitating the redintegration of an item from the memory trace. The lexicality effect is commonly used to support the importance of long-term memory in redintegration processes.
Item similarity.
The redintegration of memory traces may be affected by both semantic and phonological similarity of items which are to be recalled.
Redintegration is more accurate for lists of semantically homogeneous items than for lists of semantically heterogeneous items. This has been attributed to differences in the accessibility of different memories in the long-term store. When words are presented in semantically homogeneous lists, other items may guide the trace reconstruction, providing a cue for item search. This increases the availability of certain memories and facilitates the redintegration process. An example would be a redintegration attempt for a word from a list of animal names: the semantic consistency of the words evokes memories associated with that category, making the animal names more accessible in memory.
Conversely, redintegration has been shown to be hindered for items sharing phonological features. This has been attributed to "trace competition", where errors in redintegration are caused by confusing the items on the lists. This effect could arise, for example, for the words "auction" (/ˈɔːkʃ(ə)n/) and "audience" (/ˈɔːdiəns/). The effect of phonological similarity on redintegration may differ depending on the position of the phonemes shared within the items.
Word frequency.
Redintegration processes appear more accurate for words that are encountered more frequently in the language. This effect has been attributed to the differences in the availability of items stored in long-term memory. Frequently encountered words are hypothesized to be more accessible for subsequent recall, which facilitates the reconstruction of memory redintegration of the partially degraded trace.
Phonotactic frequency.
Trace reconstruction appears more accurate for items that contain phoneme combinations frequently represented in the language. Though this effect is similar to the word-frequency effect, it can also explain patterns in redintegration of non-word items.
Others.
Other factors which have been shown to facilitate redintegration include the ease of item imageability, familiarity with the language, and word concreteness.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P_C"
},
{
"math_id": 1,
"text": "P_C = I + R*(1-I)"
},
{
"math_id": 2,
"text": "I"
},
{
"math_id": 3,
"text": "R"
},
{
"math_id": 4,
"text": "L"
},
{
"math_id": 5,
"text": "P"
},
{
"math_id": 6,
"text": "P_C = I + R*(1-I-T)"
},
{
"math_id": 7,
"text": "I"
},
{
"math_id": 8,
"text": "T"
}
]
| https://en.wikipedia.org/wiki?curid=13316034 |
1331789 | Lepton number | Difference between number of leptons and antileptons
In particle physics, lepton number (historically also called lepton charge)
is a conserved quantum number representing the difference between the number of leptons and the number of antileptons in an elementary particle reaction.
Lepton number is an additive quantum number, so its sum is preserved in interactions (as opposed to multiplicative quantum numbers such as parity, where the product is preserved instead). The lepton number formula_0 is defined by
formula_1
where formula_2 is the number of leptons and formula_3 is the number of antileptons.
Lepton number was introduced in 1953 to explain the absence of reactions such as
ν̄ + n → p + e−
in the Cowan–Reines neutrino experiment, which instead observed
ν̄ + p → n + e+.
This process, inverse beta decay, conserves lepton number, as the incoming antineutrino has lepton number −1, while the outgoing positron (antielectron) also has lepton number −1.
Lepton flavor conservation.
In addition to lepton number, lepton family numbers are defined as
formula_4 the electron number, for the electron and the electron neutrino;
formula_5 the muon number, for the muon and the muon neutrino; and
formula_6 the tau number, for the tauon and the tau neutrino.
Prominent examples of lepton flavor conservation are the muon decays
μ− → e− + ν̄e + νμ
and
μ+ → e+ + νe + ν̄μ.
In these decay reactions, the creation of an electron is accompanied by the creation of an electron antineutrino, and the creation of a positron is accompanied by the creation of an electron neutrino. Likewise, a decaying negative muon results in the creation of a muon neutrino, while a decaying positive muon results in the creation of a muon antineutrino.
Finally, the weak decay of a lepton into a lower-mass lepton always results in the production of a neutrino-antineutrino pair, for example
τ− → μ− + ν̄μ + ντ.
One neutrino carries through the lepton number of the decaying heavy lepton (a tauon in this example, whose faint residue is a tau neutrino), and an antineutrino cancels the lepton number of the newly created, lighter lepton that replaced the original (in this example, a muon antineutrino with formula_7 that cancels the muon's formula_8).
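Lepton-number bookkeeping of this kind is straightforward to automate. The Python sketch below is purely illustrative (the particle labels and the small assignment table are ad-hoc choices for this sketch, not a standard library); it checks that each lepton family number and the total lepton number are conserved in the muon and tau decays shown above.
<syntaxhighlight lang="python">
# Minimal illustrative table of (L_e, L_mu, L_tau) assignments.
FAMILY = {
    "e-": (+1, 0, 0),   "nu_e": (+1, 0, 0),   "nubar_e": (-1, 0, 0),
    "mu-": (0, +1, 0),  "nu_mu": (0, +1, 0),  "nubar_mu": (0, -1, 0),
    "tau-": (0, 0, +1), "nu_tau": (0, 0, +1),
}

def lepton_numbers(particles):
    """Return (L_e, L_mu, L_tau, L) summed over a list of particles."""
    le, lmu, ltau = (sum(vals) for vals in zip(*(FAMILY[p] for p in particles)))
    return le, lmu, ltau, le + lmu + ltau

# Muon decay: mu- -> e- + nubar_e + nu_mu
print(lepton_numbers(["mu-"]), lepton_numbers(["e-", "nubar_e", "nu_mu"]))
# Tau decay:  tau- -> mu- + nubar_mu + nu_tau
print(lepton_numbers(["tau-"]), lepton_numbers(["mu-", "nubar_mu", "nu_tau"]))
# Each pair of tuples matches, so every family number and the total L are conserved.
</syntaxhighlight>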
Violations of the lepton number conservation laws.
Lepton flavor is only approximately conserved, and is notably not conserved in neutrino oscillation.
However, both the total lepton number and lepton flavor are still conserved in the Standard Model.
Numerous searches for physics beyond the Standard Model incorporate searches for lepton number or lepton flavor violation, such as the hypothetical decay
μ → e + γ.
Experiments such as MEGA and SINDRUM have searched for lepton flavor violation in muon decays to electrons; MEG set the current branching limit of order 10−13 and plans to lower the limit to 10−14 after 2016.
Some theories beyond the Standard Model, such as supersymmetry, predict branching ratios of order 10−12 to 10−14. The Mu2e experiment, in construction as of 2017, has a planned sensitivity of order 10−17.
Because the lepton number conservation law in fact is violated by chiral anomalies, there are problems applying this symmetry universally over all energy scales. However, the quantum number B − L is commonly conserved in Grand Unified Theory models.
If neutrinos turn out to be Majorana fermions, neither individual lepton numbers, nor the total lepton number
formula_9
nor
"B" − "L"
would be conserved, e.g. in neutrinoless double beta decay, where two neutrinos colliding head-on might actually annihilate, similar to the (never observed) collision of a neutrino and antineutrino.
Reversed signs convention.
Some authors prefer to use lepton numbers that match the signs of the charges of the leptons involved, following the convention in use for the sign of weak isospin and the sign of strangeness quantum number (for quarks), both of which conventionally have the otherwise arbitrary sign of the quantum number match the sign of the particles' electric charges.
When following the electric-charge-sign convention, the lepton number (shown with an over-bar here, to reduce confusion) of an electron, muon, tauon, and any neutrino counts as formula_10 the lepton number of the positron, antimuon, antitauon, and any antineutrino counts as formula_11 When this reversed-sign convention is observed, the baryon number is left unchanged, but the difference B − L is replaced with the sum B + L̄, whose numerical value remains unchanged, since
L̄ = −L,
and
B + L̄ = B − L.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "L"
},
{
"math_id": 1,
"text": "L = n_\\ell - n_{\\overline\\ell},"
},
{
"math_id": 2,
"text": "n_\\ell \\quad "
},
{
"math_id": 3,
"text": "n_{\\overline\\ell } \\quad "
},
{
"math_id": 4,
"text": "L_\\mathrm{e}"
},
{
"math_id": 5,
"text": "L_\\mathrm{\\mu}"
},
{
"math_id": 6,
"text": "L_\\mathrm{\\tau}"
},
{
"math_id": 7,
"text": "L_\\mathrm{\\mu} = -1"
},
{
"math_id": 8,
"text": "L_\\mathrm{\\mu} = +1"
},
{
"math_id": 9,
"text": "L \\equiv L_\\mathrm{e} + L_\\mathrm{\\mu} + L_\\mathrm{\\tau},"
},
{
"math_id": 10,
"text": "\\bar{L} = -1;"
},
{
"math_id": 11,
"text": "\\bar{L} = +1."
}
]
| https://en.wikipedia.org/wiki?curid=1331789 |
13320316 | Ozsváth–Schücking metric | Solution of Einstein field equations
The Ozsváth–Schücking metric, or the Ozsváth–Schücking solution, is a vacuum solution of the Einstein field equations. The metric was published by István Ozsváth and Engelbert Schücking in 1962. It is noteworthy among vacuum solutions for being the first known solution that is stationary, globally defined, and singularity-free but nevertheless not isometric to the Minkowski metric. This stands in contradiction to a claimed strong Mach principle, which would forbid a vacuum solution from being anything but Minkowski without singularities, where the singularities are to be construed as mass as in the Schwarzschild metric.
With coordinates formula_0, define the following tetrad:
formula_1
formula_2
formula_3
formula_4
It is straightforward to verify that e(0) is timelike, e(1), e(2), e(3) are spacelike, that they are all orthogonal, and that there are no singularities. The corresponding proper time is
formula_5
The Riemann tensor has only one algebraically independent, nonzero component
formula_6
which shows that the spacetime is Ricci flat but not conformally flat. That is sufficient to conclude that it is a vacuum solution distinct from Minkowski spacetime. Under a suitable coordinate transformation, the metric can be rewritten as
formula_7
and is therefore an example of a pp-wave spacetime. | [
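For a metric written in this pp-wave form, the vacuum field equations reduce (as is standard for pp-waves in Brinkmann-type coordinates) to the requirement that the profile function multiplying du² be harmonic in the transverse coordinates x and y. The short computer-algebra sketch below is an independent illustrative check of this condition, not part of the original derivation.
<syntaxhighlight lang="python">
import sympy as sp

u, x, y = sp.symbols('u x y', real=True)

# Profile function multiplying du^2 in the pp-wave form of the metric.
H = (x**2 - y**2) * sp.cos(2*u) + 2*x*y * sp.sin(2*u)

# For a pp-wave in this form, Ricci-flatness reduces to the transverse
# Laplacian of H vanishing.
laplacian = sp.diff(H, x, 2) + sp.diff(H, y, 2)
print(sp.simplify(laplacian))   # 0 -> consistent with a vacuum solution
</syntaxhighlight>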
{
"math_id": 0,
"text": "\\{x^0,x^1,x^2,x^3\\}"
},
{
"math_id": 1,
"text": "e_{(0)}=\\frac{1}{\\sqrt{2+(x^3)^2}}\\left( x^3\\partial_0-\\partial_1+\\partial_2\\right)"
},
{
"math_id": 2,
"text": "e_{(1)}=\\frac{1}{\\sqrt{4+2(x^3)^2}}\\left[ \\left(x^3-\\sqrt{2+(x^3)^2}\\right)\\partial_0+\\left(1+(x^3)^2-x^3\\sqrt{2+(x^3)^2}\\right)\\partial_1+\\partial_2\\right]"
},
{
"math_id": 3,
"text": "e_{(2)}=\\frac{1}{\\sqrt{4+2(x^3)^2}}\\left[ \\left(x^3+\\sqrt{2+(x^3)^2}\\right)\\partial_0+\\left(1+(x^3)^2+x^3\\sqrt{2+(x^3)^2}\\right)\\partial_1+\\partial_2\\right]"
},
{
"math_id": 4,
"text": "e_{(3)}=\\partial_3"
},
{
"math_id": 5,
"text": "{d \\tau}^{2} = -(dx^0)^2 +4(x^3)(dx^0)(dx^2)-2(dx^1)(dx^2)-2(x^3)^2(dx^2)^2-(dx^3)^2."
},
{
"math_id": 6,
"text": "R_{0202}=-1,"
},
{
"math_id": 7,
"text": "\nd\\tau^2 = [(x^2 - y^2) \\cos (2u) + 2xy \\sin(2u)] du^2 - 2dudv - dx^2 - dy^2\n"
}
]
| https://en.wikipedia.org/wiki?curid=13320316 |
13322903 | Relative neighborhood graph | In computational geometry, the relative neighborhood graph (RNG) is an undirected graph defined on a set of points in the Euclidean plane by connecting two points formula_0 and formula_1 by an edge whenever there does not exist a third point formula_2 that is closer to both formula_0 and formula_1 than they are to each other. This graph was proposed by Godfried Toussaint in 1980 as a way of defining a structure from a set of points that would match human perceptions of the shape of the set.
Algorithms.
The relative neighborhood graph of formula_3 points in the plane can be constructed efficiently, in formula_4 time. It can be computed in formula_5 expected time for a random set of points distributed uniformly in the unit square. The relative neighborhood graph can be computed in linear time from the Delaunay triangulation of the point set.
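For small inputs the definition can also be applied directly: an edge between formula_0 and formula_1 is kept unless some third point is closer to both endpoints than they are to each other. The Python sketch below is a straightforward brute-force construction, cubic in the number of points and intended only as an illustration; the efficient algorithms mentioned above instead go through the Delaunay triangulation.
<syntaxhighlight lang="python">
from math import dist
from itertools import combinations

def relative_neighborhood_graph(points):
    """Brute-force RNG: keep edge (p, q) unless some third point r satisfies
    max(d(p, r), d(q, r)) < d(p, q).  O(n^3) time, for illustration only."""
    edges = []
    for i, j in combinations(range(len(points)), 2):
        p, q = points[i], points[j]
        d_pq = dist(p, q)
        if not any(max(dist(p, r), dist(q, r)) < d_pq
                   for k, r in enumerate(points) if k not in (i, j)):
            edges.append((i, j))
    return edges

pts = [(0, 0), (1, 0), (0.5, 0.9), (2, 0.1)]
print(relative_neighborhood_graph(pts))   # list of index pairs forming RNG edges
</syntaxhighlight>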
Generalizations.
Because it is defined only in terms of the distances between points, the relative neighborhood graph can be defined for point sets in any dimension and for non-Euclidean metrics. Computing the relative neighborhood graph for higher-dimensional point sets can be done in time formula_6.
Related graphs.
The relative neighborhood graph is an example of a lens-based beta skeleton. It is a subgraph of the Delaunay triangulation. In turn, the Euclidean minimum spanning tree is a subgraph of it, from which it follows that it is a connected graph.
The Urquhart graph, the graph formed by removing the longest edge from every triangle in the Delaunay triangulation, was originally proposed as a fast method to compute the relative neighborhood graph. Although the Urquhart graph sometimes differs from the relative neighborhood graph, it can be used as an approximation to it.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p"
},
{
"math_id": 1,
"text": "q"
},
{
"math_id": 2,
"text": "r"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "O(n\\log n)"
},
{
"math_id": 5,
"text": "O(n)"
},
{
"math_id": 6,
"text": "O(n^2)"
}
]
| https://en.wikipedia.org/wiki?curid=13322903 |
1332480 | Liberal paradox | Paradox in social choice
The liberal paradox, also Sen paradox or Sen's paradox, is a logical paradox proposed by Amartya Sen which shows that no means of aggregating individual preferences into a single social choice can simultaneously fulfill the following, seemingly mild conditions: unrestrictedness of the domain (every logically possible profile of individual preferences is admissible), the Pareto principle (if every individual prefers one option to another, so does society), and minimal liberalism (at least two individuals are each decisive over at least one pair of social alternatives, for example over a purely personal matter).
Sen's result shows that this is impossible. The three, rather minimalistic, assumptions cannot all hold together. The paradox—more properly called a proof of contradiction, and a paradox only in the sense of informal logic—is contentious because it appears to contradict the classical liberal idea that markets are both Pareto-efficient and respect individual freedoms.
Sen's proof, set in the context of social choice theory, is similar in many respects to Arrow's impossibility theorem and the Gibbard–Satterthwaite theorem. As a mathematical construct, it also has much wider applicability: it is essentially about cyclical majorities between partially ordered sets, of which at least three must participate in order to give rise to the phenomenon. Since the idea is about pure mathematics and logic, similar arguments abound much further afield. They, for example, lead to the necessity of the fifth normal form in relational database design. The history of the argument also goes deeper, Condorcet's paradox perhaps being the first example of the finite sort.
Pareto efficiency.
Definition.
A particular distribution of goods or outcome of any social process is regarded as "Pareto-efficient" if there is no way to improve one or more people's situations without harming another. Put another way, an outcome is not Pareto-efficient if there is a way to improve at least one person's situation without harming anyone else.
For example, suppose a mother has ten dollars which she intends to give to her two children, Carlos and Shannon. Suppose the children each want only money, and they do not get jealous of one another. Then any distribution that gives away the entire ten dollars is Pareto-efficient: five dollars to each child, all ten dollars to Carlos, all ten dollars to Shannon, or any other split of the full amount.
However, a distribution where the mother gives each of them $2 and wastes the remaining $6 is not Pareto-efficient, because she could have given the wasted money to either child and made that child better off without harming the other.
In this example, it was presumed that a child was made better or worse off by gaining or losing money, respectively, and that neither child gained nor lost by evaluating their share in comparison to the other's. To be more precise, we must evaluate all possible preferences that each child might have and consider a situation as Pareto-efficient if there is no other social state that at least one person favors (or prefers) and no one disfavors.
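To make the definition concrete, the following Python sketch (an illustration using the hypothetical dollar amounts from the example above, with each child's utility taken to be simply the money received) checks whether one allocation Pareto-dominates another.

```python
def pareto_dominates(a, b):
    """True if allocation a makes no one worse off than b and someone strictly better off.

    Allocations are tuples of utilities, here simply dollars received.
    """
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

wasteful = (2, 2)        # mother gives $2 to each child, wastes $6
better   = (8, 2)        # the wasted $6 goes to Carlos instead
print(pareto_dominates(better, wasteful))   # True: the wasteful allocation is not Pareto-efficient
print(pareto_dominates((5, 5), (10, 0)))    # False: neither full-distribution split dominates the other
print(pareto_dominates((10, 0), (5, 5)))    # False
```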
Use in economics.
Pareto efficiency is often used in economics as a minimal sense of economic efficiency. If a mechanism does not result in Pareto-efficient outcomes, it is regarded as inefficient, since there was another outcome that could have made some people better off without harming anyone else.
The view that markets produce Pareto-efficient outcomes is regarded as an important and central justification for capitalism. This result was established (with certain assumptions) in an area of study known as general equilibrium theory and is known as the first fundamental theorem of welfare economics. Consequently, it features prominently in conservative libertarian justifications of unregulated markets.
Two examples.
Sen's original example.
Sen's original example used a simple society with only two people and only one social issue to consider. The two members of society are named "Lewd" and "Prude". In this society there is a copy of "Lady Chatterley's Lover", and it must either be given to Lewd to read, given to Prude to read, or disposed of unread. Suppose that Lewd enjoys this sort of reading and would prefer to read it rather than have it disposed of. However, they would get even more enjoyment out of Prude being forced to read it.
Prude thinks that the book is indecent and that it should be disposed of, unread. However, if someone must read it, Prude would prefer to read it rather than Lewd since Prude thinks it would be even worse for someone to read and enjoy the book rather than read it in disgust.
Given these preferences of the two individuals in the society, a social planner must decide what to do. Should the planner force Lewd to read the book, force Prude to read the book, or let it go unread? More particularly, the social planner must rank all three possible outcomes in terms of their social desirability. The social planner decides to be committed to individual rights: each individual should get to choose whether they themselves will read the book. Lewd should get to decide whether the outcome "Lewd reads" will be ranked higher than "No one reads", and similarly Prude should get to decide whether the outcome "Prude reads" will be ranked higher than "No one reads".
Following this strategy, the social planner declares that the outcome "Lewd reads" will be ranked higher than "No one reads" (because of Lewd's preferences) and that "No one reads" will be ranked higher than "Prude reads" (because of Prude's preferences). Consistency then requires that "Lewd reads" be ranked higher than "Prude reads", and so the social planner gives the book to Lewd to read.
Notice that this outcome is regarded as worse than "Prude reads" by both Prude "and" Lewd, and the chosen outcome is therefore Pareto inferior to another available outcome—the one where Prude is forced to read the book.
Gibbard's example.
Another example was provided by philosopher Allan Gibbard. Suppose there are two individuals Alice and Bob who live next door to each other. Alice loves the color blue and hates red. Bob loves the color green and hates yellow. If each were free to choose the color of their house independently of the other, they would choose their favorite colors. But Alice hates Bob with a passion, and she would gladly endure a red house if it meant that Bob would have to endure his house being yellow. Bob similarly hates Alice, and would gladly endure a yellow house if that meant that Alice would live in a red house.
If each individual is free to choose their own house color, independently of the other, Alice would choose a blue house and Bob would choose a green one. But, this outcome is not Pareto efficient, because both Alice and Bob would prefer the outcome where Alice's house is red and Bob's is yellow. As a result, giving each individual the freedom to choose their own house color has led to an inefficient outcome—one that is inferior to another outcome where neither is free to choose their own color.
Mathematically, we can represent Alice's preferences with this symbol: formula_0 and Bob's preferences with this one: formula_1. We can represent each outcome as a pair: ("Color of Alice's house", "Color of Bob's house"). As stated Alice's preferences are:
(Blue, Yellow) formula_0 (Red, Yellow) formula_0 (Blue, Green) formula_0 (Red, Green)
And Bob's are:
(Red, Green) formula_1 (Red, Yellow) formula_1 (Blue, Green) formula_1 (Blue, Yellow)
If we allow free and independent choices of both parties we end up with the outcome (Blue, Green) which is dispreferred by both parties to the outcome (Red, Yellow) and is therefore not Pareto efficient.
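The inefficiency of the independent-choice outcome can be verified mechanically. The sketch below is an illustrative Python encoding of the two preference lists above; it confirms that both Alice and Bob strictly prefer (Red, Yellow) to the free-choice outcome (Blue, Green).

```python
# Each list ranks the outcomes (Alice's colour, Bob's colour) from best to worst.
alice = [("Blue", "Yellow"), ("Red", "Yellow"), ("Blue", "Green"), ("Red", "Green")]
bob   = [("Red", "Green"), ("Red", "Yellow"), ("Blue", "Green"), ("Blue", "Yellow")]

def prefers(ranking, x, y):
    """True if outcome x is ranked strictly above outcome y."""
    return ranking.index(x) < ranking.index(y)

free_choice = ("Blue", "Green")     # each paints their own favourite colour
alternative = ("Red", "Yellow")     # each endures their hated colour

# Both individuals strictly prefer the alternative, so free choice is not Pareto-efficient.
print(prefers(alice, alternative, free_choice))  # True
print(prefers(bob, alternative, free_choice))    # True
```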
The theorem.
Suppose there is a society "N" consisting of two or more individuals and a set "X" of two or more social outcomes. (For example, in the Alice and Bob case, "N" consisted of Alice and Bob, and "X" consisted of the four color options ⟨Blue, Yellow⟩, ⟨Blue, Green⟩, ⟨Red, Yellow⟩, and ⟨Red, Green⟩.)
Suppose each individual in the society has a total and transitive preference relation on the set of social outcomes "X". For notation, the preference relation of an individual "i"∊"N" is denoted by ≼"i". Each preference relation belongs to the set "Rel(X)" of all total and transitive relations on "X".
A social choice function is a map which can take any configuration of preference relations of "N" as input and produce a subset of ("chosen") social outcomes as output. Formally, a social choice function is a map
formula_2
from the set of functions from "N" to "Rel(X)" (that is, "Rel(X)""N") to the power set of "X". (Intuitively, the social choice function represents a societal principle for choosing one or more social outcomes based on individuals' preferences. By representing the social choice process as a "function" on "Rel(X)""N", we are tacitly assuming that the social choice function is defined for any possible configuration of preference relations; this is sometimes called the Universal Domain assumption.)
The liberal paradox states that every social choice function satisfies "at most one" of the following properties, never both: "Pareto optimality" (whenever every individual strictly prefers one available outcome to another, the social choice function does not select the dispreferred outcome) and "minimal liberalism" (there are at least two individuals each of whom is decisive over at least one pair of social outcomes, in the sense that their own preference over that pair determines how the pair is socially treated).
In other words, the liberal paradox states that for every social choice function "F", there is a configuration of preference relations "p"∊"Rel(X)""N" for which "F" violates either Pareto optimality or Minimal liberalism (or both). In the examples of Sen and Gibbard noted above, the social choice function satisfies minimal liberalism at the expense of Pareto optimality.
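For Sen's "Lady Chatterley's Lover" example the claim can be checked exhaustively: with three outcomes there are only six strict social rankings, and none satisfies both conditions at once. The following Python sketch is an illustrative encoding of that example (not part of Sen's formal proof).

```python
from itertools import permutations

LEWD_READS, PRUDE_READS, NO_ONE = "Lewd reads", "Prude reads", "No one reads"

# Individual preference orderings from Sen's example, best to worst.
lewd  = [PRUDE_READS, LEWD_READS, NO_ONE]
prude = [NO_ONE, PRUDE_READS, LEWD_READS]

def above(ranking, x, y):
    """True if x is ranked strictly above y."""
    return ranking.index(x) < ranking.index(y)

def minimally_liberal(social):
    # Lewd is decisive over (Lewd reads, No one reads); Prude over (Prude reads, No one reads).
    return (above(social, LEWD_READS, NO_ONE) == above(lewd, LEWD_READS, NO_ONE) and
            above(social, PRUDE_READS, NO_ONE) == above(prude, PRUDE_READS, NO_ONE))

def pareto(social):
    # Both individuals rank "Prude reads" above "Lewd reads", so society must as well.
    return above(social, PRUDE_READS, LEWD_READS)

ok = [s for s in permutations([LEWD_READS, PRUDE_READS, NO_ONE])
      if minimally_liberal(list(s)) and pareto(list(s))]
print(ok)   # [] -- no strict social ranking satisfies both conditions
```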
Ways out of the paradox.
Because the paradox relies on very few conditions, there are a limited number of ways to escape it. Essentially one must reject either the "universal domain" assumption, the "Pareto principle", or the "minimal liberalism principle". Sen himself suggested two ways out: one a rejection of universal domain, the other a rejection of the Pareto principle.
Universal domain.
Julian Blau proves that Sen's paradox can only arise when individuals have "nosy" preferences—that is when their preference depends not only on their own action but also on others' actions. In the example of Alice and Bob above, Alice has a preference over how Bob paints his house, and Bob has a preference over Alice's house color as well.
Most arguments which demonstrate market efficiency assume that individuals care about only their own consumption and not others' consumption and therefore do not consider the situations that give rise to Sen's paradox. In fact, this shows a strong relationship between Sen's paradox and the well known result that markets fail to produce Pareto outcomes in the presence of externalities. Externalities arise when the choices of one party affect another. Classic examples of externalities include pollution or overfishing. Because of their nosy preferences, Alice's choice imposes a negative externality on Bob and vice versa.
To prevent the paradox, Sen suggests that "The ultimate guarantee for individual liberty may rest not on rules for social choice but on developing individual values that respect each other's personal choices." Doing so would amount to limiting certain types of nosy preferences, or alternatively restricting the application of the Pareto principle only to those situations where individuals fail to have nosy preferences.
Note that if we consider the case of cardinal preferences—for instance, if Alice and Bob both had to state, within certain bounds, how much happiness they would get for each color of each house separately, and the situation which produced the most happiness were chosen—a minimally-liberal solution does not require that they have no nosiness at all, but just that the sum of all "nosy" preferences about one house's color is below some threshold, while the "non-nosy" preferences are all above that threshold. Since there are generally some questions for which this will be true—Sen's classic example is an individual's choice of whether to sleep on their back or their side—the goal of combining minimal liberalism with Pareto efficiency, while impossible to guarantee in all theoretical cases, may not in practice be impossible to obtain.
Pareto.
Alternatively, one could remain committed to the universality of the rules for social choice and to individual rights and instead reject the universal application of the Pareto principle. Sen also hints that this should be how one escapes the paradox:
<templatestyles src="Template:Blockquote/styles.css" />
Minimal liberalism.
Most commentators on Sen's paradox have argued that Sen's minimal liberalism condition does not adequately capture the notion of individual rights. Essentially what is excluded from Sen's characterization of individual rights is the ability to voluntarily form contracts that lay down one's claim to a right.
For example, in the example of Lewd and Prude, although each has a right to refuse to read the book, Prude would voluntarily sign a contract with Lewd promising to read the book on condition that Lewd refrain from doing so. In such a circumstance there was no violation of Prude's or Lewd's rights because each entered the contract willingly. Similarly, Alice and Bob might sign a contract to each paint their houses their dispreferred color on condition that the other does the same.
In this vein, Gibbard provides a weaker version of the minimal liberalism claim which he argues is consistent with the possibility of contracts and which is also consistent with the Pareto principle given any possible preferences of the individuals.
Dynamism.
Alternatively, instead of both Lewd and Prude deciding what to do at the same time, they could decide one after the other. If Prude decides not to read, then Lewd will decide to read, which yields the same outcome as before. However, if Prude decides to read, Lewd won't. "Prude reads" is preferred by Prude (and also by Lewd) to "Lewd reads", so Prude will voluntarily decide to read, with no obligation, in order to reach this Pareto-efficient outcome. Marc Masat hints that this should be another way out of the paradox:
<templatestyles src="Template:Blockquote/styles.css" />If there's, at least, one player without dominant strategy, the game will be played sequentially where players with dominant strategy and need to change it (if they are in the Pareto optimal they don't have to) will be the firsts to choose, allowing to reach the Pareto Efficiency without dictatorship nor restricted domain and also avoiding contract's costs such as time, money or other people. If all players present a dominant strategy, contracts may be used.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\succ_A"
},
{
"math_id": 1,
"text": "\\succ_B"
},
{
"math_id": 2,
"text": "F : {Rel}(X)^N \\rightarrow \\mathcal{P}(X)"
}
]
| https://en.wikipedia.org/wiki?curid=1332480 |
13329722 | Semialgebraic set | Subset of n-space defined by a finite sequence of polynomial equations and inequalities
In mathematics, a basic semialgebraic set is a set defined by polynomial equalities and polynomial inequalities, and a semialgebraic set is a finite union of basic semialgebraic sets. A semialgebraic function is a function with a semialgebraic graph. Such sets and functions are mainly studied in real algebraic geometry which is the appropriate framework for algebraic geometry over the real numbers.
Definition.
Let formula_0 be a real closed field (for example, formula_0 could be the field of real numbers formula_1).
A subset formula_2 of formula_3 is a "semialgebraic set" if it is a finite union of sets defined by polynomial equalities of the form formula_4 and of sets defined by polynomial inequalities of the form formula_5
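As a concrete illustration (a standard textbook-style example rather than one drawn from the article's references), the unit circle is a set defined by a polynomial equality, and the closed unit disk, although usually described by a non-strict inequality, is semialgebraic because it is the union of the open disk and the circle:

```latex
\[
  S^1    = \{(x,y)\in\mathbb{R}^2 \mid x^2+y^2-1 = 0\},
\]
\[
  \bar D = \{(x,y)\in\mathbb{R}^2 \mid 1-x^2-y^2 > 0\}
           \;\cup\; \{(x,y)\in\mathbb{R}^2 \mid 1-x^2-y^2 = 0\}.
\]
```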
Properties.
Similarly to algebraic subvarieties, finite unions and intersections of semialgebraic sets are still semialgebraic sets. Furthermore, unlike subvarieties, the complement of a semialgebraic set is again semialgebraic. Finally, and most importantly, the Tarski–Seidenberg theorem says that they are also closed under the projection operation: in other words a semialgebraic set projected onto a linear subspace yields another semialgebraic set (as is the case for quantifier elimination). These properties together mean that semialgebraic sets form an o-minimal structure on "R".
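A worked instance of the projection property (again a standard illustrative computation, not taken from the article's references): projecting the unit circle onto its first coordinate amounts to eliminating the quantifier over "y", and the image is the closed interval [−1, 1], which is itself semialgebraic, as the Tarski–Seidenberg theorem guarantees.

```latex
\[
  \pi\bigl(\{(x,y)\in\mathbb{R}^2 \mid x^2+y^2-1=0\}\bigr)
  = \{x\in\mathbb{R} \mid \exists y:\ x^2+y^2-1=0\}
  = \{x\in\mathbb{R} \mid 1-x^2 \ge 0\}
  = [-1,1].
\]
```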
A semialgebraic set (or function) is said to be defined over a subring "A" of "R" if there is some description, as in the definition, where the polynomials can be chosen to have coefficients in "A".
On a dense open subset of the semialgebraic set "S", it is (locally) a submanifold. One can define the dimension of "S" to be the largest dimension at points at which it is a submanifold. It is not hard to see that a semialgebraic set lies inside an algebraic subvariety of the same dimension. | [
{
"math_id": 0,
"text": "\\mathbb{F}"
},
{
"math_id": 1,
"text": "\\mathbb{R}"
},
{
"math_id": 2,
"text": "S"
},
{
"math_id": 3,
"text": "\\mathbb{F}^n"
},
{
"math_id": 4,
"text": "\\{(x_1,...,x_n) \\in \\mathbb{F}^n \\mid P(x_1,...,x_n) = 0\\}"
},
{
"math_id": 5,
"text": "\\{(x_1,...,x_n) \\in\\mathbb{F}^n \\mid P(x_1,...,x_n) > 0\\}."
}
]
| https://en.wikipedia.org/wiki?curid=13329722 |
1333167 | VAN method | Claimed earthquake prediction method
The VAN method – named after P. Varotsos, K. Alexopoulos and K. Nomicos, authors of the 1981 papers describing it – measures low frequency electric signals, termed "seismic electric signals" (SES), by which Varotsos and several colleagues claimed to have successfully predicted earthquakes in Greece. Both the method itself and the manner by which successful predictions were claimed have been severely criticized. Supporters of VAN have responded to the criticism but the critics have not retracted their views.
Since 2001, the VAN group has introduced a concept they call "natural time", applied to the analysis of their precursors. Initially it is applied to SES to distinguish them from noise and relate them to a possible impending earthquake. In case of verification (classification as "SES activity"), natural time analysis is additionally applied to the general subsequent seismicity of the area associated with the SES activity, in order to improve the time parameter of the prediction. The method treats earthquake onset as a critical phenomenon.
After 2006, VAN say that all alarms related to SES activity have been made public by posting at arxiv.org. One such report was posted on Feb. 1, 2008, two weeks before the strongest earthquake in Greece during the period 1983-2011. This earthquake occurred on February 14, 2008, with magnitude (Mw) 6.9. VAN's report was also described in an article in the newspaper Ethnos on Feb. 10, 2008. However, Gerassimos Papadopoulos complained that the VAN reports were confusing and ambiguous, and that "none of the claims for successful VAN predictions is justified", but this complaint was answered in the same issue.
Description of the VAN method.
Prediction of earthquakes with this method is based on the detection, recording and evaluation of seismic electric signals, or SES. These electrical signals have a fundamental frequency component of 1 Hz or less and an amplitude whose logarithm scales with the magnitude of the earthquake. According to VAN proponents, SES are emitted by rocks under stresses caused by plate-tectonic forces. Three types of electric signal have been reported.
Several hypotheses have been proposed to explain the generation of SES.
While the electrokinetic effect may be consistent with signal detection tens or hundreds of kilometers away, the other mechanisms require a second mechanism to account for propagation over such distances.
Seismic electric signals are detected at stations which consist of pairs of electrodes (oriented NS and EW) inserted into the ground, with amplifiers and filters. The signals are then transmitted to the VAN scientists in Athens where they are recorded and evaluated. Currently the VAN team operates 9 stations, while in the past (until 1989) they could afford up to 17.
The VAN team claimed that they were able to predict earthquakes of magnitude larger than 5, with an uncertainty of 0.7 units of magnitude, within a radius of 100 km, and in a time window ranging from several hours to a few weeks. Several papers confirmed this success rate, leading to a statistically significant conclusion. For example, there were eight M ≥ 5.5 earthquakes in Greece from January 1, 1984 through September 10, 1995, and the VAN network forecast six of these.
The VAN method has also been used in Japan, but in early attempts success comparable to that achieved in Greece was "difficult" to attain. A preliminary investigation of seismic electric signals in France led to encouraging results.
Earthquake prediction using "natural time" analysis.
Since 2001 the VAN team has attempted to improve the accuracy of the estimation of the time of the forthcoming earthquake. To that end, they introduced the concept of "natural time", a time series analysis technique which puts weight on a process based on the ordering of events. Two terms characterize each event: the "natural time" χ and the energy "Q". χ is defined as "k"/"N", where "k" is an integer (the "k"-th event) and "N" is the total number of events in the time sequence of data. A related term, "p""k", is the ratio "Q""k" "/" "Q""total", which describes the fractional energy released. They introduce a critical term κ, the "variance in natural time", which puts extra weight on the energy term "p""k":
formula_0
where formula_1 and formula_2
Their current method deems SES valid when κ = 0.070. Once the SES are deemed valid, a second analysis is started in which the subsequent seismic (rather than electric) events are noted, and the region is divided up as a Venn diagram with at least two seismic events per overlapping rectangle. When the distribution of κ for the rectangular regions has its maximum at κ = 0.070, a critical seismic event is imminent, i.e. it will occur in a few days to one week or so, and a report is issued.
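A minimal Python sketch of the κ computation defined above (illustrative only: the event energies in the example are invented, whereas real analyses use quantities derived from seismic moments or SES measurements):

```python
def natural_time_kappa(energies):
    """Variance of natural time, kappa, for a sequence of event energies Q_k.

    chi_k = k / N is the natural time of the k-th event and
    p_k = Q_k / sum(Q) its fractional energy;
    kappa = sum(p_k * chi_k**2) - (sum(p_k * chi_k))**2.
    """
    n = len(energies)
    total = sum(energies)
    p = [q / total for q in energies]
    chi = [(k + 1) / n for k in range(n)]
    mean_chi = sum(pk * ck for pk, ck in zip(p, chi))
    mean_chi2 = sum(pk * ck ** 2 for pk, ck in zip(p, chi))
    return mean_chi2 - mean_chi ** 2

# Hypothetical energies for a sequence of ten events.
print(round(natural_time_kappa([1.0, 2.5, 0.8, 3.1, 1.7, 0.9, 2.2, 1.4, 2.8, 1.1]), 4))
```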
Results.
The VAN team claim that out of seven mainshocks with magnitude Mw>=6.0 from 2001 through 2010 in the region of latitude N 36° to N 41° and longitude E 19° to E 27°, all but one could be classified with relevant SES activity identified and reported in advance through natural time analysis. Additionally, they assert that the occurrence time of four of these mainshocks with magnitude Mw>=6.4 were identified to within "a narrow range, a few days to around one week or so." These reports are inserted in papers housed in arXiv, and new reports are made and uploaded there. For example, a report preceding the strongest earthquake in Greece during the period 1983-2011, which occurred on February 14, 2008, with magnitude (Mw) 6.9, was publicized in arXiv almost two weeks before, on February 1, 2008. A description of the updated VAN method was collected in a book published by Springer in 2011, titled "Natural Time Analysis: The New View of Time."
Natural time analysis also claims that the physical connection of SES activities with earthquakes is as follows: Taking the view that the earthquake occurrence is a phase-change (critical phenomenon), where the new phase is the mainshock occurrence, the above-mentioned variance term κ is the corresponding order parameter. The κ value calculated for a window comprising a number of seismic events comparable to the average number of earthquakes occurring within a few months, fluctuates when the window is sliding through a seismic catalogue. The VAN team claims that these κ fluctuations exhibit a minimum a few months before a mainshock occurrence and in addition this minimum occurs simultaneously with the initiation of the corresponding SES activity, and that this is the first time in the literature that such a simultaneous appearance of two precursory phenomena in independent datasets of different geophysical observables (electrical measurements, seismicity) has been observed. Furthermore, the VAN team claims that their natural time analysis of the seismic catalogue of Japan during the period from January 1, 1984 until the occurrence of the magnitude 9.0 Tohoku earthquake on March 11, 2011, revealed that such clear minima of the κ fluctuations appeared before all major earthquakes with magnitude 7.6 or larger. The deepest of these minima was said to occur on January 5, 2011, i.e., almost two months before the Tohoku earthquake occurrence. Finally, by dividing the Japanese region into small areas, the VAN team states that some small areas show minimum of the κ fluctuations almost simultaneously with the large area covering the whole Japan and such small areas clustered within a few hundred kilometers from the actual epicenter of the impending major earthquake.
Criticisms of VAN.
Historically, the usefulness of the VAN method for the prediction of earthquakes has been a matter of debate. Both positive and negative criticism of an older conception of the VAN method is summarized in the 1996 book "A Critical Review of VAN", edited by Sir James Lighthill. A critical review of the statistical methodology was published by Y. Y. Kagan of UCLA in 1997. Note that these criticisms predate the time series analysis methods introduced by the VAN group in 2001. The main points of the criticism, taken up in the following subsections, concerned the method's predictive success, the proposed mechanism for SES propagation, electromagnetic compatibility issues, and its implications for public policy.
Predictive success.
Critics say that the VAN method is hindered by a lack of statistical testing of the validity of the hypothesis, because the researchers keep changing the parameters (the "moving the goalposts" technique).
VAN claimed to have observed at a recording station in Athens a perfect one-to-one correlation between SES and earthquakes of magnitude ≥ 2.9 occurring anywhere in Greece 7 hours later. However, Max Wyss said that the list of earthquakes used for the correlation was false. Although VAN stated in their article that the list of earthquakes was that of the Bulletin of the National Observatory of Athens (NOA), Wyss found that 37% of the earthquakes actually listed in the bulletin, including the largest one, were not in the list used by VAN for issuing their claim. In addition, 40% of the earthquakes which VAN claimed had occurred were not in the NOA bulletin. Examining the probability of chance correlation of another set of 22 claims of successful predictions by VAN for earthquakes of M > 4.0 from January 1, 1987 through November 30, 1989, it was found that 74% were false, 9% correlated by chance, and for 14% the correlation was uncertain. No single event correlated at a probability greater than 85%, whereas the level required in statistics for accepting a hypothesis test as positive would more commonly be 95%.
In response to Wyss' analysis of the NOA findings, VAN said that the criticisms were based on misunderstandings. VAN said that the calculations suggested by Wyss would lead to a paradox, i.e., to probability values larger than unity, when applied to an ideal earthquake prediction method. Other independent evaluations said that VAN obtained statistically significant results.
Mainstream seismologists remain unconvinced by any of VAN's rebuttals. In 2011 the ICEF concluded that the optimistic prediction capability claimed by VAN could not be validated. Most seismologists consider VAN to have been "resoundingly debunked".
Uyeda and others in 2011, however, supported the use of the technique. In 2018, the statistical significance of the method was revisited by the VAN group employing modern techniques, such as event coincidence analysis (ECA) and receiver operating characteristic (ROC), which they interpreted to show that SES exhibit precursory information far beyond chance.
Proposed SES propagation mechanism.
An analysis of the propagation properties of SES in the Earth’s crust showed that it is impossible that signals with the amplitude reported by VAN could have been generated by small earthquakes and transmitted over the several hundred kilometers between the epicenter and the receiving station. In effect, if the mechanism is based on piezoelectricity or electrical charging of crystal deformations with the signal traveling along faults, then none of the earthquakes which VAN claimed were preceded by SES generated an SES themselves. VAN answered that such an analysis of the SES propagation properties is based on a simplified model of horizontally layered Earth and that this differs greatly from the real situation since Earth's crust contains inhomogeneities. When the latter are taken into account, for example by considering that the faults are electrically appreciably more conductive than the surrounding medium, VAN believes that electric signals transmitted at distances of the order of one hundred kilometers between the epicenter and the receiving station have amplitudes comparable to those reported by VAN.
Electromagnetic compatibility issues.
VAN's publications are further weakened by failure to address the problem of eliminating the many and strong sources of change in the magneto-electric field measured by them, such as telluric currents from weather, and electromagnetic interference (EMI) from man-made signals. One critical paper (Pham et al. 1998) clearly correlates an SES used by the VAN group with digital radio transmissions made from a military base. In a subsequent paper, VAN said that such noise coming from digital radio transmitters of the military base had been clearly distinguished from true SES by following the criteria developed by VAN. Further work in Greece by Pham et al. in 2002 tracked SES-like "anomalous transient electric signals" back to specific human sources, and found that such signals are not excluded by the criteria used by VAN to identify SES.
In 2003, modern methods of statistical physics, i.e., detrended fluctuation analysis (DFA), multifractal DFA and wavelet transform revealed that SES are clearly distinguished from those produced by human sources, since the former signals exhibit very strong long range correlations, while the latter signals do not. A work published in 2020 examined the statistical significance of the minima of the fluctuations of the order parameter κ1 of seismicity by event coincidence analysis as a possible precursor to strong earthquakes in both regional and global level. The results show that these minima are indeed statistically significant earthquake precursors. In particular, in the regional studies the time lag was found to be fully compatible with the finding that these mimima are simultaneous with the initiation of SES activities, thus the distinction of the latter precursory signals from those produced by human sources is evident.
Public policy.
Finally, one requirement for any earthquake prediction method is that, in order for any prediction to be useful, it must predict a forthcoming earthquake within a reasonable time-frame, epicenter and magnitude. If the prediction is too vague, no feasible decision (such as to evacuate the population of a certain area for a given period of time) can be made. In practice, the VAN group issued a series of telegrams in the 1980s. During the same time frame, the technique also missed major earthquakes, in the sense that "for earthquakes with Mb≥5.0, the ratio of the predicted to the total number of earthquakes is 6/12 (50%) and the success rate of the prediction is also 6/12 (50%) with the probability gain of a factor of 4. With a confidence level of 99.8%, the possibility of this success rate being explained by a random model of earthquake occurrence taking into account the regional factor which includes high seismicity in the prediction area, can be rejected". This study concludes that "the statistical examination of the SES predictions proved high rates of success prediction and predicted events with high probability gain. This suggests a physical connection between SES and subsequent earthquakes, at least for an event of magnitude of Ms≥5". Predictions from the early VAN method led to public criticism, and the cost associated with false alarms generated ill will. In 2016 the Union of Greek Physicists honored P. Varotsos for his work on VAN with a prize delivered by the President of Greece.
Updated VAN method.
A review of the updated VAN method in 2020 says that it suffers from an abundance of false positives and is therefore not usable as a prediction protocol. The VAN group answered by pinpointing misunderstandings in the specific reasoning.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\kappa=\\sum_{k=1}^N p_k(\\chi_k)^2 - \\bigl(\\sum_{k=1}^N p_k\\chi_k\\bigr)^2"
},
{
"math_id": 1,
"text": "\\textstyle\\chi_k=k/N"
},
{
"math_id": 2,
"text": "\\textstyle\\ p_k=\\frac{Q_k}{\\sum_{n=1}^N Q_n}"
}
]
| https://en.wikipedia.org/wiki?curid=1333167 |