id | title | text | formulas | url
---|---|---|---|---
630398
|
Identity of indiscernibles
|
Impossibility for separate objects to have all their properties in common
The identity of indiscernibles is an ontological principle that states that there cannot be separate objects or entities that have all their properties in common. That is, entities "x" and "y" are identical if every predicate possessed by "x" is also possessed by "y" and vice versa. It states that no two distinct things (such as snowflakes) can be exactly alike, but this is intended as a metaphysical principle rather than one of natural science. A related principle is the indiscernibility of identicals, discussed below.
A form of the principle is attributed to the German philosopher Gottfried Wilhelm Leibniz. While some think that Leibniz's version of the principle is meant to be only the indiscernibility of identicals, others have interpreted it as the conjunction of the identity of indiscernibles and the indiscernibility of identicals (the converse principle). Because of its association with Leibniz, the indiscernibility of identicals is sometimes known as Leibniz's law. It is considered to be one of his great metaphysical principles, the others being the principle of noncontradiction and the principle of sufficient reason (famously used in his disputes with Newton and Clarke in the Leibniz–Clarke correspondence).
Some philosophers have decided, however, that it is important to exclude certain predicates (or purported predicates) from the principle in order to avoid either triviality or contradiction. An example (detailed below) is the predicate that denotes whether an object is equal to "x" (often considered a valid predicate). As a consequence, there are a few different versions of the principle in the philosophical literature, of varying logical strength—and some of them are termed "the strong principle" or "the weak principle" by particular authors, in order to distinguish between them.
The identity of indiscernibles has been used to motivate notions of noncontextuality within quantum mechanics.
Associated with this principle is also the question as to whether it is a logical principle, or merely an empirical principle.
Identity and indiscernibility.
Both identity and indiscernibility are expressed by the word "same". "Identity" is about "numerical sameness", and is expressed by the equality sign ("="). It is the relation each object bears only to itself. "Indiscernibility", on the other hand, concerns "qualitative sameness": two objects are indiscernible if they have all their properties in common. Formally, this can be expressed as "formula_0". The two senses of "sameness" are linked by two principles: the principle of "indiscernibility of identicals" and the principle of "identity of indiscernibles". The principle of "indiscernibility of identicals" is uncontroversial and states that if two entities are identical with each other then they have the same properties. The principle of "identity of indiscernibles", on the other hand, is more controversial in making the converse claim that if two entities have the same properties then they must be identical. This entails that "no two distinct things exactly resemble each other". Note that these are all second-order expressions. Neither of these principles can be expressed in first-order logic (are nonfirstorderizable). Taken together, they are sometimes referred to as "Leibniz's law". Formally, the two principles can be expressed in the following way:
Principle 1, the indiscernibility of identicals: formula_1. Principle 2, the identity of indiscernibles: formula_4. Principle 1 is generally regarded as an a priori logical truth. Principle 2, on the other hand, is controversial; Max Black famously argued against it.
In a universe of two distinct objects A and B, every predicate F is materially equivalent to one of the following four properties: holding of A only (call this IsA), holding of B only (IsB), holding of both, or holding of neither.
If ∀F applies to all such predicates, then the second principle as formulated above reduces trivially and uncontroversially to a logical tautology. In that case, the objects are distinguished by IsA, IsB, and all predicates that are materially equivalent to either of these. This argument can combinatorially be extended to universes containing any number of distinct objects.
The equality relation expressed by the sign "=" is an equivalence relation in being reflexive (everything is equal to itself), symmetric (if "x" is equal to "y" then "y" is equal to "x") and transitive (if "x" is equal to "y" and "y" is equal to "z" then "x" is equal to "z"). The "indiscernibility of identicals" and "identity of indiscernibles" can jointly be used to define the equality relation. The "symmetry" and "transitivity" of equality follow from the first principle, whereas "reflexivity" follows from the second. Both principles can be combined into a single axiom by using a biconditional operator ("formula_5") in place of material implication ("formula_6").
Indiscernibility and conceptions of properties.
Indiscernibility is usually defined in terms of shared properties: two objects are indiscernible if they have all their properties in common. The plausibility and strength of the principle of identity of indiscernibles depend on the conception of properties used to define indiscernibility.
One important distinction in this regard is between "pure" and "impure" properties. "Impure properties" are properties that, unlike "pure properties", involve reference to a particular substance in their definition. So, for example, "being a wife" is a pure property while "being the wife of Socrates" is an impure property due to the reference to the particular "Socrates". Sometimes, the terms "qualitative" and "non-qualitative" are used instead of "pure" and "impure". Discernibility is usually defined in terms of pure properties only. The reason for this is that taking impure properties into consideration would result in the principle being trivially true since any entity has the impure property of being identical to itself, which it does not share with any other entity.
Another important distinction concerns the difference between intrinsic and extrinsic properties. A property is "extrinsic" to an object if having this property depends on other objects (with or without reference to particular objects), otherwise it is "intrinsic". For example, the property of "being an aunt" is extrinsic while the property of "having a mass of 60 kg" is intrinsic. If the identity of indiscernibles is defined only in terms of "intrinsic pure" properties, one cannot regard two books lying on a table as distinct when they are "intrinsically identical". But if "extrinsic" and "impure" properties are also taken into consideration, the same books become distinct so long as they are discernible through the latter properties.
Critique.
Symmetric universe.
Max Black has argued against the identity of indiscernibles by counterexample. Notice that to show that the identity of indiscernibles is false, it is sufficient to provide a model in which there are two distinct (numerically nonidentical) things that have all the same properties. He claimed that in a symmetric universe wherein only two symmetrical spheres exist, the two spheres are two distinct objects even though they have all their properties in common.
Black argues that even relational properties (properties specifying distances between objects in space-time) fail to distinguish the two objects in a symmetrical universe. Per his argument, the two objects are, and will remain, equidistant from the universe's plane of symmetry and from each other. Even bringing in an external observer to label the two spheres distinctly does not solve the problem, because doing so violates the symmetry of the universe.
Indiscernibility of identicals.
As stated above, the principle of indiscernibility of identicals—that if two objects are in fact one and the same, they have all the same properties—is mostly uncontroversial. However, one famous application of the indiscernibility of identicals was by René Descartes in his "Meditations on First Philosophy". Descartes concluded that he could not doubt the existence of himself (the famous "cogito" argument), but that he "could" doubt the existence of his body.
This argument is criticized by some modern philosophers on the grounds that it allegedly derives a conclusion about what is true from a premise about what people know. What people know or believe about an entity, they argue, is not really a characteristic of that entity. A response may be that the argument in the "Meditations on First Philosophy" is that the inability of Descartes to doubt the existence of his mind is part of his mind's essence. One may then argue that identical things should have identical essences.
Numerous counterexamples have been offered to debunk Descartes' reasoning via "reductio ad absurdum", such as arguments based on a secret identity, in which someone knows an individual under one description but fails to recognize that same individual under another; any of these would undermine Descartes' argument.
|
[
{
"math_id": 0,
"text": "\\forall F(Fx \\leftrightarrow Fy)"
},
{
"math_id": 1,
"text": "\\forall x \\, \\forall y \\, [x=y \\rightarrow \\forall F(Fx \\leftrightarrow Fy)]"
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "y"
},
{
"math_id": 4,
"text": "\\forall x \\, \\forall y \\, [\\forall F(Fx \\leftrightarrow Fy) \\rightarrow x=y]"
},
{
"math_id": 5,
"text": "\\leftrightarrow"
},
{
"math_id": 6,
"text": "\\rightarrow"
}
] |
https://en.wikipedia.org/wiki?curid=630398
|
63044039
|
Semiorthogonal decomposition
|
In mathematics, a semiorthogonal decomposition is a way to divide a triangulated category into simpler pieces. One way to produce a semiorthogonal decomposition is from an exceptional collection, a special sequence of objects in a triangulated category. For an algebraic variety "X", it has been fruitful to study semiorthogonal decompositions of the bounded derived category of coherent sheaves, formula_0.
Semiorthogonal decomposition.
Alexei Bondal and Mikhail Kapranov (1989) defined a semiorthogonal decomposition of a triangulated category formula_1 to be a sequence formula_2 of strictly full triangulated subcategories such that: for all formula_3 and all objects formula_4 and formula_5, every morphism from formula_6 to formula_7 is zero (there are "no morphisms from right to left"); and the subcategories formula_2 together generate formula_1, in the sense that the smallest strictly full triangulated subcategory of formula_1 containing all of them is formula_1 itself.
The notation formula_8 is used for a semiorthogonal decomposition.
Having a semiorthogonal decomposition implies that every object of formula_1 has a canonical "filtration" whose graded pieces are (successively) in the subcategories formula_2. That is, for each object "T" of formula_1, there is a sequence
formula_9
of morphisms in formula_1 such that the cone of formula_10 is in formula_11, for each "i". Moreover, this sequence is unique up to a unique isomorphism.
One can also consider "orthogonal" decompositions of a triangulated category, by requiring that there are no morphisms from formula_11 to formula_12 for any formula_13. However, that property is too strong for most purposes. For example, for an (irreducible) smooth projective variety "X" over a field, the bounded derived category formula_0 of coherent sheaves never has a nontrivial orthogonal decomposition, whereas it may have a semiorthogonal decomposition, by the examples below.
A semiorthogonal decomposition of a triangulated category may be considered as analogous to a finite filtration of an abelian group. Alternatively, one may consider a semiorthogonal decomposition formula_14 as closer to a split exact sequence, because the exact sequence formula_15 of triangulated categories is split by the subcategory formula_16, mapping isomorphically to formula_17.
Using that observation, a semiorthogonal decomposition formula_8 implies a direct sum splitting of Grothendieck groups:
formula_18
For example, when formula_19 is the bounded derived category of coherent sheaves on a smooth projective variety "X", formula_20 can be identified with the Grothendieck group formula_21 of algebraic vector bundles on "X". In this geometric situation, using that formula_0 comes from a dg-category, a semiorthogonal decomposition actually gives a splitting of all the algebraic K-groups of "X":
formula_22
for all "i".
Admissible subcategory.
One way to produce a semiorthogonal decomposition is from an admissible subcategory. By definition, a full triangulated subcategory formula_23 is left admissible if the inclusion functor formula_24 has a left adjoint functor, written formula_25. Likewise, formula_23 is right admissible if the inclusion has a right adjoint, written formula_26, and it is admissible if it is both left and right admissible.
A right admissible subcategory formula_27 determines a semiorthogonal decomposition
formula_28,
where
formula_29
is the right orthogonal of formula_30 in formula_1. Conversely, every semiorthogonal decomposition formula_31 arises in this way, in the sense that formula_30 is right admissible and formula_32. Likewise, for any semiorthogonal decomposition formula_31, the subcategory formula_33 is left admissible, and formula_34, where
formula_35
is the left orthogonal of formula_33.
If formula_1 is the bounded derived category of a smooth projective variety over a field "k", then every left or right admissible subcategory of formula_1 is in fact admissible. By results of Bondal and Michel Van den Bergh, this holds more generally for formula_1 any regular proper triangulated category that is idempotent-complete.
Moreover, for a regular proper idempotent-complete triangulated category formula_1, a full triangulated subcategory is admissible if and only if it is regular and idempotent-complete. These properties are intrinsic to the subcategory. For example, for "X" a smooth projective variety and "Y" a subvariety not equal to "X", the subcategory of formula_0 of objects supported on "Y" is not admissible.
Exceptional collection.
Let "k" be a field, and let formula_1 be a "k"-linear triangulated category. An object "E" of formula_1 is called exceptional if Hom("E","E") = "k" and Hom("E","E"["t"]) = 0 for all nonzero integers "t", where ["t"] is the shift functor in formula_1. (In the derived category of a smooth complex projective variety "X", the first-order deformation space of an object "E" is formula_36, and so an exceptional object is in particular rigid. It follows, for example, that there are at most countably many exceptional objects in formula_0, up to isomorphism. That helps to explain the name.)
The triangulated subcategory generated by an exceptional object "E" is equivalent to the derived category formula_37 of finite-dimensional "k"-vector spaces, the simplest triangulated category in this context. (For example, every object of that subcategory is isomorphic to a finite direct sum of shifts of "E".)
Alexei Gorodentsev and Alexei Rudakov (1987) defined an exceptional collection to be a sequence of exceptional objects formula_38 such that formula_39 for all "i" < "j" and all integers "t". (That is, there are "no morphisms from right to left".) In a proper triangulated category formula_1 over "k", such as the bounded derived category of coherent sheaves on a smooth projective variety, every exceptional collection generates an admissible subcategory, and so it determines a semiorthogonal decomposition:
formula_40
where formula_41, and formula_42 denotes the full triangulated subcategory generated by the object formula_42. An exceptional collection is called full if the subcategory formula_33 is zero. (Thus a full exceptional collection breaks the whole triangulated category up into finitely many copies of formula_37.)
In particular, if "X" is a smooth projective variety such that formula_0 has a full exceptional collection formula_38, then the Grothendieck group of algebraic vector bundles on "X" is the free abelian group on the classes of these objects:
formula_43
A smooth complex projective variety "X" with a full exceptional collection must have trivial Hodge theory, in the sense that formula_44 for all formula_45; moreover, the cycle class map formula_46 must be an isomorphism.
Examples.
The original example of a full exceptional collection was discovered by Alexander Beilinson (1978): the derived category of projective space over a field has the full exceptional collection
formula_47,
where O("j") for integers "j" are the line bundles on projective space. Full exceptional collections have also been constructed on all smooth projective toric varieties, del Pezzo surfaces, many projective homogeneous varieties, and some other Fano varieties.
More generally, if "X" is a smooth projective variety of positive dimension such that the coherent sheaf cohomology groups formula_48 are zero for "i" > 0, then the object formula_49 in formula_0 is exceptional, and so it induces a nontrivial semiorthogonal decomposition formula_50. This applies to every Fano variety over a field of characteristic zero, for example. It also applies to some other varieties, such as Enriques surfaces and some surfaces of general type.
A source of examples is Orlov's blowup formula concerning the blowup formula_51 of a scheme formula_52 at a codimension formula_53 locally complete intersection subscheme formula_54 with exceptional locus formula_55. There is a semiorthogonal decomposition formula_56, where formula_57 is the functor formula_58 and formula_59 is the natural map.
While these examples encompass a large number of well-studied derived categories, many naturally occurring triangulated categories are "indecomposable". In particular, for a smooth projective variety "X" whose canonical bundle formula_60 is basepoint-free, every semiorthogonal decomposition formula_61 is trivial in the sense that formula_33 or formula_30 must be zero. For example, this applies to every variety which is Calabi–Yau in the sense that its canonical bundle is trivial.
|
[
{
"math_id": 0,
"text": "\\text{D}^{\\text{b}}(X)"
},
{
"math_id": 1,
"text": "\\mathcal{T}"
},
{
"math_id": 2,
"text": "\\mathcal{A}_1,\\ldots,\\mathcal{A}_n"
},
{
"math_id": 3,
"text": "1\\leq i<j\\leq n"
},
{
"math_id": 4,
"text": "A_i\\in\\mathcal{A}_i"
},
{
"math_id": 5,
"text": "A_j\\in\\mathcal{A}_j"
},
{
"math_id": 6,
"text": "A_j"
},
{
"math_id": 7,
"text": "A_i"
},
{
"math_id": 8,
"text": "\\mathcal{T}=\\langle\\mathcal{A}_1,\\ldots,\\mathcal{A}_n\\rangle"
},
{
"math_id": 9,
"text": "0=T_n\\to T_{n-1}\\to\\cdots\\to T_0=T"
},
{
"math_id": 10,
"text": "T_i\\to T_{i-1}"
},
{
"math_id": 11,
"text": "\\mathcal{A}_i"
},
{
"math_id": 12,
"text": "\\mathcal{A}_j"
},
{
"math_id": 13,
"text": "i\\neq j"
},
{
"math_id": 14,
"text": "\\mathcal{T}=\\langle\\mathcal{A},\\mathcal{B}\\rangle"
},
{
"math_id": 15,
"text": "0\\to\\mathcal{A}\\to\\mathcal{T}\\to\\mathcal{T}/\\mathcal{A}\\to 0"
},
{
"math_id": 16,
"text": "\\mathcal{B}\\subset \\mathcal{T}"
},
{
"math_id": 17,
"text": "\\mathcal{T}/\\mathcal{A}"
},
{
"math_id": 18,
"text": "K_0(\\mathcal{T})\\cong K_0(\\mathcal{A}_1)\\oplus\\cdots\\oplus K_0(\\mathcal{A_n})."
},
{
"math_id": 19,
"text": "\\mathcal{T}=\\text{D}^{\\text{b}}(X)"
},
{
"math_id": 20,
"text": "K_0(\\mathcal{T})"
},
{
"math_id": 21,
"text": "K_0(X)"
},
{
"math_id": 22,
"text": "K_i(X)\\cong K_i(\\mathcal{A}_1)\\oplus\\cdots\\oplus K_i(\\mathcal{A_n})"
},
{
"math_id": 23,
"text": "\\mathcal{A}\\subset\\mathcal{T}"
},
{
"math_id": 24,
"text": "i\\colon\\mathcal{A}\\to\\mathcal{T}"
},
{
"math_id": 25,
"text": "i^*"
},
{
"math_id": 26,
"text": "i^!"
},
{
"math_id": 27,
"text": "\\mathcal{B}\\subset\\mathcal{T}"
},
{
"math_id": 28,
"text": "\\mathcal{T}=\\langle\\mathcal{B}^{\\perp},\\mathcal{B}\\rangle"
},
{
"math_id": 29,
"text": "\\mathcal{B}^{\\perp}:=\\{T\\in\\mathcal{T}: \\operatorname{Hom}(\\mathcal{B},T)=0\\}"
},
{
"math_id": 30,
"text": "\\mathcal{B}"
},
{
"math_id": 31,
"text": "\\mathcal{T}=\\langle \\mathcal{A},\\mathcal{B}\\rangle"
},
{
"math_id": 32,
"text": "\\mathcal{A}=\\mathcal{B}^{\\perp}"
},
{
"math_id": 33,
"text": "\\mathcal{A}"
},
{
"math_id": 34,
"text": "\\mathcal{B}={}^{\\perp}\\mathcal{A}"
},
{
"math_id": 35,
"text": "{}^{\\perp}\\mathcal{A}:=\\{T\\in\\mathcal{T}: \\operatorname{Hom}(T,\\mathcal{A})=0\\}"
},
{
"math_id": 36,
"text": "\\operatorname{Ext}^1_X(E,E)\\cong \\operatorname{Hom}(E,E[1])"
},
{
"math_id": 37,
"text": "\\text{D}^{\\text{b}}(k)"
},
{
"math_id": 38,
"text": "E_1,\\ldots,E_m"
},
{
"math_id": 39,
"text": "\\operatorname{Hom}(E_j,E_i[t])=0"
},
{
"math_id": 40,
"text": "\\mathcal{T}=\\langle\\mathcal{A},E_1,\\ldots,E_m\\rangle,"
},
{
"math_id": 41,
"text": "\\mathcal{A}=\\langle E_1,\\ldots,E_m\\rangle^{\\perp}"
},
{
"math_id": 42,
"text": "E_i"
},
{
"math_id": 43,
"text": "K_0(X)\\cong \\Z\\{E_1,\\ldots,E_m\\}."
},
{
"math_id": 44,
"text": "h^{p,q}(X)=0"
},
{
"math_id": 45,
"text": "p\\neq q"
},
{
"math_id": 46,
"text": "CH^*(X)\\otimes\\Q\\to H^*(X,\\Q)"
},
{
"math_id": 47,
"text": "\\text{D}^{\\text{b}}(\\mathbf{P}^n)=\\langle O,O(1),\\ldots,O(n)\\rangle"
},
{
"math_id": 48,
"text": "H^i(X,O_X)"
},
{
"math_id": 49,
"text": "O_X"
},
{
"math_id": 50,
"text": "\\text{D}^{\\text{b}}(X)=\\langle (O_X)^{\\perp},O_X\\rangle"
},
{
"math_id": 51,
"text": "X = \\operatorname{Bl}_Z(Y)"
},
{
"math_id": 52,
"text": "Y"
},
{
"math_id": 53,
"text": "k"
},
{
"math_id": 54,
"text": "Z"
},
{
"math_id": 55,
"text": "\\iota: E \\simeq \\mathbb{P}_Z(N_{Z/Y})\\to X"
},
{
"math_id": 56,
"text": "D^b(X) = \\langle \\Phi_{1-k}(D^b(Z)), \\ldots, \\Phi_{-1}(D^b(Z)), \\pi^*(D^b(Y))\\rangle"
},
{
"math_id": 57,
"text": "\\Phi_i:D^b(Z) \\to D^b(X)"
},
{
"math_id": 58,
"text": "\\Phi_i(-) = \\iota_*(\\mathcal{O}_E(k))\\otimes p^*(-))"
},
{
"math_id": 59,
"text": "p : X \\to Y"
},
{
"math_id": 60,
"text": "K_X"
},
{
"math_id": 61,
"text": "\\text{D}^{\\text{b}}(X)=\\langle\\mathcal{A},\\mathcal{B}\\rangle"
}
] |
https://en.wikipedia.org/wiki?curid=63044039
|
63045799
|
Equal detour point
|
Triangle center
In Euclidean geometry, the equal detour point is a triangle center denoted by "X"(176) in Clark Kimberling's Encyclopedia of Triangle Centers. It is characterized by the equal detour property: if one travels from any vertex of a triangle △"ABC" to another by taking a detour through some inner point P, then the additional distance traveled is constant. This means the following equation has to hold:
formula_0
The equal detour point is the only point with the equal detour property if and only if the following inequality holds for the angles α, β, γ of △"ABC":
formula_1
If the inequality does not hold, then the isoperimetric point possesses the equal detour property as well.
The equal detour point, the isoperimetric point, the incenter and the Gergonne point of a triangle are collinear, that is, all four points lie on a common line. Furthermore, they form a harmonic range as well.
The equal detour point is the center of the inner Soddy circle of a triangle, and the additional distance travelled by the detour is equal to the diameter of the inner Soddy circle.
The barycentric coordinates of the equal detour point, where "Δ" denotes the area and "s" the semiperimeter of △"ABC", are
formula_2
and the trilinear coordinates are:
formula_3
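To make these coordinates concrete, the following NumPy sketch evaluates the barycentric coordinates given above for an arbitrary example triangle and numerically checks the equal detour property; the vertex coordinates are illustrative values, not taken from the article.

```python
import numpy as np

# Arbitrary non-degenerate example triangle.
A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])

# Side lengths a = |BC|, b = |CA|, c = |AB|, semiperimeter and area (Heron's formula).
a, b, c = np.linalg.norm(B - C), np.linalg.norm(C - A), np.linalg.norm(A - B)
s = (a + b + c) / 2
area = np.sqrt(s * (s - a) * (s - b) * (s - c))

# Barycentric coordinates of the equal detour point, as given above.
w = np.array([a + area / (s - a), b + area / (s - b), c + area / (s - c)])
P = (w[0] * A + w[1] * B + w[2] * C) / w.sum()

# The extra distance of each detour through P should be one and the same number.
detours = [np.linalg.norm(A - P) + np.linalg.norm(P - B) - c,
           np.linalg.norm(B - P) + np.linalg.norm(P - C) - a,
           np.linalg.norm(C - P) + np.linalg.norm(P - A) - b]
print(detours)
```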
|
[
{
"math_id": 0,
"text": "\n\\begin{align}\n & \\overline{AP} + \\overline{PC} - \\overline{AC} \\\\[3mu]\n ={}& \\overline{AP} + \\overline{PB} - \\overline{AB} \\\\[3mu]\n ={}& \\overline{BP} + \\overline{PC} - \\overline{BC}.\n\\end{align}\n"
},
{
"math_id": 1,
"text": "\\tan\\tfrac12\\alpha + \\tan\\tfrac12\\beta + \\tan \\tfrac12\\gamma \\leq 2 "
},
{
"math_id": 2,
"text": "\\left( a+\\frac{\\Delta}{s-a} : b+\\frac{\\Delta}{s-b} : c+\\frac{\\Delta}{s-c} \\right)."
},
{
"math_id": 3,
"text": "\n1 + \\frac{\\cos\\tfrac12\\beta\\,\\cos\\tfrac12\\gamma}{\\cos\\tfrac12\\alpha} \\ :\\ \n1 + \\frac{\\cos\\tfrac12\\gamma\\,\\cos\\tfrac12\\alpha}{\\cos\\tfrac12\\beta} \\ :\\ \n1 + \\frac{\\cos\\tfrac12\\alpha\\,\\cos\\tfrac12\\beta}{\\cos\\tfrac12\\gamma}\n"
}
] |
https://en.wikipedia.org/wiki?curid=63045799
|
630478
|
Infinitary logic
|
An infinitary logic is a logic that allows infinitely long statements and/or infinitely long proofs. The concept was introduced by Zermelo in the 1930s.
Some infinitary logics may have different properties from those of standard first-order logic. In particular, infinitary logics may fail to be compact or complete. Notions of compactness and completeness that are equivalent in finitary logic sometimes are not so in infinitary logics. Therefore for infinitary logics, notions of strong compactness and strong completeness are defined. This article addresses Hilbert-type infinitary logics, as these have been extensively studied and constitute the most straightforward extensions of finitary logic. These are not, however, the only infinitary logics that have been formulated or studied.
Considering whether a certain infinitary logic named Ω-logic is complete promises to throw light on the continuum hypothesis.
A word on notation and the axiom of choice.
As a language with infinitely long formulae is being presented, it is not possible to write such formulae down explicitly. To get around this problem a number of notational conveniences, which, strictly speaking, are not part of the formal language, are used. formula_0 is used to point out an expression that is infinitely long. Where it is unclear, the length of the sequence is noted afterwards. Where this notation becomes ambiguous or confusing, suffixes such as formula_1 are used to indicate an infinite disjunction over a set of formulae of cardinality formula_2. The same notation may be applied to quantifiers, for example formula_3. This is meant to represent an infinite sequence of quantifiers: a quantifier for each formula_4 where formula_5.
Neither the suffixes nor formula_0 are part of the formal infinitary languages; they are notational conveniences only.
The axiom of choice is assumed (as is often done when discussing infinitary logic) as this is necessary to have sensible distributivity laws.
Formal languages.
A first-order infinitary language formula_6, with formula_7 regular and formula_8 or formula_9, has the same set of symbols as a finitary logic and may use all the rules for formation of formulae of a finitary logic together with some additional ones: given a set of formulae formula_10, the expressions formula_11 and formula_12 (an infinite disjunction and an infinite conjunction) are formulae; and given a set of variables formula_13 and a formula formula_14, the expressions formula_15 and formula_16 (infinite universal and existential quantification) are formulae.
The language may also have function, relation, and predicate symbols of finite arity. Karp also defined languages formula_17 with formula_18 an infinite cardinal and some more complicated restrictions on formula_19 that allow for function and predicate symbols of infinite arity, with formula_19 controlling the maximum arity of a function symbol and formula_20 controlling predicate symbols.
The concepts of free and bound variables apply in the same manner to infinite formulae. Just as in finitary logic, a formula all of whose variables are bound is referred to as a "sentence".
Definition of Hilbert-type infinitary logics.
A theory formula_21 in infinitary language formula_22 is a set of sentences in the logic. A proof in infinitary logic from a theory formula_21 is a (possibly infinite) sequence of statements that obeys the following conditions: each statement is either a logical axiom, an element of formula_21, or is deduced from previous statements using a rule of inference. As before, all rules of inference in finitary logic can be used, together with an additional one: given a set of statements that have occurred previously in the proof, their conjunction formula_23 may be inferred.
If formula_24, forming universal closures may not always be possible, however extra constant symbols may be added for each variable with the resulting satisfiability relation remaining the same. To avoid this, some authors use a different definition of the language formula_6 forbidding formulas from having more than formula_25 free variables.
The logical axiom schemata specific to infinitary logic are presented below. Global schemata variables: formula_2 and formula_26 such that formula_27.
The last two axiom schemata require the axiom of choice because certain sets must be well orderable. The last axiom schema is strictly speaking unnecessary, as Chang's distributivity laws imply it, however it is included as a natural way to allow natural weakenings to the logic.
Completeness, compactness, and strong completeness.
A theory is any set of sentences. The truth of statements in models is defined by recursion and agrees with the definition for finitary logic where both are defined. Given a theory "T", a sentence is said to be valid for the theory "T" if it is true in all models of "T".
A logic in the language formula_22 is complete if for every sentence "S" valid in every model there exists a proof of "S". It is strongly complete if for any theory "T" for every sentence "S" valid in "T" there is a proof of "S" from "T". An infinitary logic can be complete without being strongly complete.
A cardinal formula_38 is weakly compact when for every theory "T" in formula_39 containing at most formula_40 many formulas, if every "S" formula_41 "T" of cardinality less than formula_40 has a model, then "T" has a model. A cardinal formula_38 is strongly compact when for every theory "T" in formula_39, without restriction on size, if every "S" formula_41 "T" of cardinality less than formula_40 has a model, then "T" has a model.
Concepts expressible in infinitary logic.
In the language of set theory the following statement expresses foundation:
formula_42
Unlike the axiom of foundation, this statement admits no non-standard interpretations. The concept of well-foundedness can only be expressed in a logic that allows infinitely many quantifiers in an individual statement. As a consequence many theories, including Peano arithmetic, which cannot be properly axiomatised in finitary logic, can be properly axiomatised in a suitable infinitary logic. Other examples include the theories of non-archimedean fields and torsion-free groups. These three theories can be defined without the use of infinite quantification; only infinite conjunctions and disjunctions are needed.
Truth predicates for countable languages are definable in formula_43.
Complete infinitary logics.
Two infinitary logics stand out in their completeness. These are the logics of formula_44 and formula_45. The former is standard finitary first-order logic and the latter is an infinitary logic that only allows statements of countable size.
The logic of formula_44 is also strongly complete, compact and strongly compact.
The logic of formula_46 fails to be compact, but it is complete (under the axioms given above). Moreover, it satisfies a variant of the Craig interpolation property.
If the logic of formula_47 is strongly complete (under the axioms given above) then formula_7 is strongly compact (because proofs in these logics cannot use formula_7 or more of the given axioms).
|
[
{
"math_id": 0,
"text": "\\cdots"
},
{
"math_id": 1,
"text": "\\bigvee_{\\gamma < \\delta}{A_{\\gamma}}"
},
{
"math_id": 2,
"text": "\\delta"
},
{
"math_id": 3,
"text": "\\forall_{\\gamma < \\delta}{V_{\\gamma}:}"
},
{
"math_id": 4,
"text": "V_{\\gamma}"
},
{
"math_id": 5,
"text": "\\gamma < \\delta"
},
{
"math_id": 6,
"text": "L_{\\alpha,\\beta}"
},
{
"math_id": 7,
"text": "\\alpha"
},
{
"math_id": 8,
"text": "\\beta = 0"
},
{
"math_id": 9,
"text": "\\omega\\leq\\beta\\leq\\alpha"
},
{
"math_id": 10,
"text": "A=\\{A_\\gamma | \\gamma < \\delta <\\alpha \\}"
},
{
"math_id": 11,
"text": "(A_0 \\lor A_1 \\lor \\cdots)"
},
{
"math_id": 12,
"text": "(A_0 \\land A_1 \\land \\cdots)"
},
{
"math_id": 13,
"text": "V=\\{V_\\gamma | \\gamma< \\delta < \\beta \\}"
},
{
"math_id": 14,
"text": "A_0"
},
{
"math_id": 15,
"text": "\\forall V_0 :\\forall V_1 \\cdots (A_0)"
},
{
"math_id": 16,
"text": "\\exists V_0 :\\exists V_1 \\cdots (A_0)"
},
{
"math_id": 17,
"text": "L_{\\alpha\\beta\\omicron\\pi}"
},
{
"math_id": 18,
"text": "\\pi\\leq\\alpha"
},
{
"math_id": 19,
"text": "\\omicron"
},
{
"math_id": 20,
"text": "\\pi"
},
{
"math_id": 21,
"text": "T"
},
{
"math_id": 22,
"text": "L_{\\alpha , \\beta}"
},
{
"math_id": 23,
"text": "\\land_{\\gamma < \\delta}{A_{\\gamma}}"
},
{
"math_id": 24,
"text": "\\beta<\\alpha"
},
{
"math_id": 25,
"text": "\\beta"
},
{
"math_id": 26,
"text": "\\gamma"
},
{
"math_id": 27,
"text": "0 < \\delta < \\alpha "
},
{
"math_id": 28,
"text": "((\\land_{\\epsilon < \\delta}{(A_{\\delta} \\implies A_{\\epsilon})}) \\implies (A_{\\delta} \\implies \\land_{\\epsilon < \\delta}{A_{\\epsilon}}))"
},
{
"math_id": 29,
"text": "((\\land_{\\epsilon < \\delta}{A_{\\epsilon}}) \\implies A_{\\gamma})"
},
{
"math_id": 30,
"text": "(\\lor_{\\mu < \\gamma}{(\\land_{\\delta < \\gamma}{A_{\\mu , \\delta}})})"
},
{
"math_id": 31,
"text": "\\forall \\mu \\forall \\delta \\exists \\epsilon < \\gamma: A_{\\mu , \\delta} = A_{\\epsilon}"
},
{
"math_id": 32,
"text": "A_{\\mu , \\delta} = \\neg A_{\\epsilon}"
},
{
"math_id": 33,
"text": "\\forall g \\in \\gamma^{\\gamma} \\exists \\epsilon < \\gamma: \\{A_{\\epsilon} , \\neg A_{\\epsilon}\\} \\subseteq \\{A_{\\mu , g(\\mu)} : \\mu < \\gamma\\}"
},
{
"math_id": 34,
"text": "\\gamma < \\alpha"
},
{
"math_id": 35,
"text": "((\\land_{\\mu < \\gamma}{(\\lor_{\\delta < \\gamma}{A_{\\mu , \\delta}})}) \\implies (\\lor_{\\epsilon < \\gamma^{\\gamma}}{(\\land_{\\mu < \\gamma}{A_{\\mu ,\\gamma_{\\epsilon}(\\mu)})}}))"
},
{
"math_id": 36,
"text": "\\{\\gamma_{\\epsilon}: \\epsilon < \\gamma^{\\gamma}\\}"
},
{
"math_id": 37,
"text": "\\gamma^{\\gamma}"
},
{
"math_id": 38,
"text": "\\kappa \\neq \\omega"
},
{
"math_id": 39,
"text": "L_{\\kappa , \\kappa}"
},
{
"math_id": 40,
"text": "\\kappa"
},
{
"math_id": 41,
"text": "\\subseteq"
},
{
"math_id": 42,
"text": "\\forall_{\\gamma < \\omega}{V_{\\gamma}:} \\neg \\land_{\\gamma < \\omega}{V_{\\gamma +} \\in V_{\\gamma}}.\\,"
},
{
"math_id": 43,
"text": "\\mathcal L_{\\omega_1,\\omega}"
},
{
"math_id": 44,
"text": "L_{\\omega , \\omega}"
},
{
"math_id": 45,
"text": "L_{\\omega_1 , \\omega}"
},
{
"math_id": 46,
"text": "L_{\\omega_1, \\omega}"
},
{
"math_id": 47,
"text": "L_{\\alpha, \\alpha}"
}
] |
https://en.wikipedia.org/wiki?curid=630478
|
630505
|
Algebraic geometry and analytic geometry
|
Two closely related mathematical subjects
In mathematics, algebraic geometry and analytic geometry are two closely related subjects. While algebraic geometry studies algebraic varieties, analytic geometry deals with complex manifolds and the more general analytic spaces defined locally by the vanishing of analytic functions of several complex variables. The deep relation between these subjects has numerous applications in which algebraic techniques are applied to analytic spaces and analytic techniques to algebraic varieties.
Main statement.
Let "X" be a projective complex algebraic variety. Because "X" is a complex variety, its set of complex points "X"(C) can be given the structure of a compact complex analytic space. This analytic space is denoted "X"an. Similarly, if formula_0 is a sheaf on "X", then there is a corresponding sheaf formula_1 on "X"an. This association of an analytic object to an algebraic one is a functor. The prototypical theorem relating "X" and "X"an says that for any two coherent sheaves formula_0 and formula_2 on "X", the natural homomorphism:
formula_3
is an isomorphism. Here formula_4 is the structure sheaf of the algebraic variety "X" and formula_5 is the structure sheaf of the analytic variety "X"an. More precisely, the category of coherent sheaves on the algebraic variety "X" is equivalent to the category of analytic coherent sheaves on the analytic variety "X"an, and the equivalence is given on objects by mapping formula_0 to formula_1. (Note in particular that formula_6 itself is coherent, a result known as the Oka coherence theorem; it was also proved in “Faisceaux Algebriques Coherents” that the structure sheaf formula_4 of the algebraic variety is coherent.)
Another important statement is as follows: For any coherent sheaf formula_0 on an algebraic variety "X" the homomorphisms
formula_7
are isomorphisms for all "q"'s. This means that the "q"-th cohomology group on "X" is isomorphic to the cohomology group on "X"an.
The theorem applies much more generally than stated above (see the formal statement below). It and its proof have many consequences, such as Chow's theorem, the Lefschetz principle and Kodaira vanishing theorem.
Background.
Algebraic varieties are locally defined as the common zero sets of polynomials and since polynomials over the complex numbers are holomorphic functions, algebraic varieties over C can be interpreted as analytic spaces. Similarly, regular morphisms between varieties are interpreted as holomorphic mappings between analytic spaces. Somewhat surprisingly, it is often possible to go the other way, to interpret analytic objects in an algebraic way.
For example, it is easy to prove that the analytic functions from the Riemann sphere to itself are either
the rational functions or the function which is identically infinity (an extension of Liouville's theorem). For if such a function "f" is nonconstant, then since the set of "z" where "f(z)" is infinity is isolated and the Riemann sphere is compact, there are only finitely many "z" with "f(z)" equal to infinity. Consider the Laurent expansion at each such "z" and subtract off the singular part: we are left with a holomorphic function on the Riemann sphere with values in C, which by Liouville's theorem is constant. Thus "f" is the sum of a constant and finitely many singular parts, and hence a rational function. This fact shows there is no essential difference between the complex projective line as an algebraic variety and as the Riemann sphere.
Important results.
There is a long history of comparison results between algebraic geometry and analytic geometry, beginning in the nineteenth century. Some of the more important advances are listed here in chronological order.
Riemann's existence theorem.
Riemann surface theory shows that a compact Riemann surface has enough meromorphic functions on it, making it a (smooth projective) algebraic curve. Under the name Riemann's existence theorem a deeper result on ramified coverings of a compact Riemann surface was known: such "finite" coverings as topological spaces are classified by permutation representations of the fundamental group of the complement of the ramification points. Since the Riemann surface property is local, such coverings are quite easily seen to be coverings in the complex-analytic sense. It is then possible to conclude that they come from covering maps of algebraic curves—that is, such coverings all come from finite extensions of the function field.
The Lefschetz principle.
In the twentieth century, the Lefschetz principle, named for Solomon Lefschetz, was cited in algebraic geometry to justify the use of topological techniques for algebraic geometry over any algebraically closed field "K" of characteristic 0, by treating "K" as if it were the complex number field. An elementary form of it asserts that true statements of the first order theory of fields about C are true for any algebraically closed field "K" of characteristic zero. A precise principle and its proof are due to Alfred Tarski and are based in mathematical logic.
This principle permits the carrying over of some results obtained using analytic or topological methods for algebraic varieties over C to other algebraically closed ground fields of characteristic 0. (e.g. Kodaira type vanishing theorem.)
Chow's theorem.
Chow's theorem, proved by Wei-Liang Chow, is an example of the most immediately useful kind of comparison available. It states that an analytic subspace of complex projective space that is closed (in the ordinary topological sense) is an algebraic subvariety. This can be rephrased as "any analytic subspace of complex projective space that is closed in the strong topology is closed in the Zariski topology." This allows quite a free use of complex-analytic methods within the classical parts of algebraic geometry.
GAGA.
Foundations for the many relations between the two theories were put in place during the early part of the 1950s, as part of the business of laying the foundations of algebraic geometry to include, for example, techniques from Hodge theory. The major paper consolidating the theory was by Jean-Pierre Serre, now usually referred to as GAGA. It proves general results that relate classes of algebraic varieties, regular morphisms and sheaves with classes of analytic spaces, holomorphic mappings and sheaves. It reduces all of these to the comparison of categories of sheaves.
Nowadays the phrase "GAGA-style result" is used for any theorem of comparison, allowing passage between a category of objects from algebraic geometry, and their morphisms, to a well-defined subcategory of analytic geometry objects and holomorphic mappings.
Formal statement of GAGA.
In slightly lesser generality, the GAGA theorem asserts that the category of coherent algebraic sheaves on a complex projective variety "X" and the category of coherent analytic sheaves on the corresponding analytic space "X"an are equivalent. The analytic space "X"an is obtained roughly by pulling back to "X" the complex structure from C"n" through the coordinate charts. Indeed, phrasing the theorem in this manner is closer in spirit to Serre's paper, seeing how the full scheme-theoretic language that the above formal statement uses heavily had not yet been invented by the time of GAGA's publication.
|
[
{
"math_id": 0,
"text": "\\mathcal{F}"
},
{
"math_id": 1,
"text": "\\mathcal{F}^\\text{an}"
},
{
"math_id": 2,
"text": "\\mathcal{G}"
},
{
"math_id": 3,
"text": "\\text{Hom}_{\\mathcal{O}_X}(\\mathcal{F},\\mathcal{G})\\rightarrow\\text{Hom}_{\\mathcal{O}^{\\text{an}}_X}(\\mathcal{F}^{\\text{an}},\\mathcal{G}^{\\text{an}})"
},
{
"math_id": 4,
"text": "\\mathcal{O}_X"
},
{
"math_id": 5,
"text": "\\mathcal{O}_X^{\\text{an}}"
},
{
"math_id": 6,
"text": "\\mathcal{O}^{\\text{an}}_X"
},
{
"math_id": 7,
"text": "\\varepsilon_q\\ :\\ H^q(X,\\mathcal{F}) \\rightarrow H^q(X^{an},\\mathcal{F}^{an})"
},
{
"math_id": 8,
"text": " (X,\\mathcal O_X) "
},
{
"math_id": 9,
"text": " \\mathcal O_X^\\mathrm{an} "
},
{
"math_id": 10,
"text": " (X^\\mathrm{an}, \\mathcal O_X^\\mathrm{an}) "
},
{
"math_id": 11,
"text": " \\mathcal O_X^\\mathrm{an}(U) "
},
{
"math_id": 12,
"text": " \\mathcal F "
},
{
"math_id": 13,
"text": " \\mathcal F^\\mathrm{an} "
},
{
"math_id": 14,
"text": " \\mathcal O_X "
},
{
"math_id": 15,
"text": " \\lambda_X^*: \\mathcal F\\rightarrow (\\lambda_X)_* \\mathcal F^\\mathrm{an} "
},
{
"math_id": 16,
"text": " \\lambda_X^{-1} \\mathcal F \\otimes_{\\lambda_X^{-1} \\mathcal O_X} \\mathcal O_X^\\mathrm{an} "
},
{
"math_id": 17,
"text": " \\mathcal F \\mapsto \\mathcal F^\\mathrm{an} "
},
{
"math_id": 18,
"text": " (X, \\mathcal O_X) "
},
{
"math_id": 19,
"text": " (f_* \\mathcal F)^\\mathrm{an}\\rightarrow f_*^\\mathrm{an} \\mathcal F^\\mathrm{an} "
},
{
"math_id": 20,
"text": " (R^i f_* \\mathcal F)^\\mathrm{an} \\cong R^i f_*^\\mathrm{an} \\mathcal F^\\mathrm{an} "
},
{
"math_id": 21,
"text": " \\mathcal F, \\mathcal G "
},
{
"math_id": 22,
"text": " f\\colon \\mathcal F^\\mathrm{an} \\rightarrow \\mathcal G^\\mathrm{an} "
},
{
"math_id": 23,
"text": " \\varphi: \\mathcal F\\rightarrow \\mathcal G "
},
{
"math_id": 24,
"text": " \nf =\\varphi^\\mathrm{an} "
},
{
"math_id": 25,
"text": " \\mathcal R "
},
{
"math_id": 26,
"text": " \\mathcal F^\\mathrm{an} \\cong \\mathcal R "
}
] |
https://en.wikipedia.org/wiki?curid=630505
|
63057517
|
Generalized pencil-of-function method
|
Generalized pencil-of-function method (GPOF), also known as the matrix pencil method, is a signal processing technique for estimating a signal or extracting information with complex exponentials. Being similar to the Prony and original pencil-of-function methods, it is generally preferred over those for its robustness and computational efficiency.
The method was originally developed by Yingbo Hua and Tapan Sarkar for estimating the behaviour of electromagnetic systems by its transient response, building on Sarkar's past work on the original pencil-of-function method. The method has a plethora of applications in electrical engineering, particularly related to problems in computational electromagnetics, microwave engineering and antenna theory.
Method.
Mathematical basis.
A transient electromagnetic signal can be represented as:
formula_0
where
formula_1 is the observed time-domain signal,
formula_2 is the signal noise,
formula_3 is the actual signal,
formula_4 are the residues,
formula_5 are the poles of the system, defined as formula_6,
formula_7 are the corresponding poles in the Z-domain, by the identities of the Z-transform,
formula_8 are the damping factors and
formula_9 are the angular frequencies.
The same sequence, sampled by a period of formula_10, can be written as the following:
formula_11,
The generalized pencil-of-function method estimates the optimal formula_12 and the formula_13's.
Noise-free analysis.
For the noiseless case, two formula_14 matrices, formula_15 and formula_16, are produced:
formula_17formula_18
where formula_19 is defined as the pencil parameter. formula_15 and formula_16 can be decomposed into the following matrices:
formula_20
formula_21
where
formula_22formula_23
formula_24 and formula_25 are formula_26 diagonal matrices with sequentially-placed formula_27 and formula_28 values, respectively.
If formula_29, the generalized eigenvalues of the matrix pencil
formula_30
yield the poles of the system, which are formula_31. Then, the generalized eigenvectors formula_32 can be obtained by the following identities:
formula_33 formula_34
formula_35 formula_34
where the formula_36 denotes the Moore–Penrose inverse, also known as the pseudo-inverse. Singular value decomposition can be employed to compute the pseudo-inverse.
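As an illustration of the noise-free procedure, the following NumPy sketch generates synthetic samples from two damped exponentials, forms the pencil matrices, and recovers the poles as the dominant eigenvalues of the pseudo-inverse product; the sampling period, poles and residues are arbitrary illustrative values.

```python
import numpy as np

# Illustrative two-pole signal (values are assumptions, not from the article).
Ts = 0.01                                   # sampling period
poles = np.array([-1.0 + 2j * np.pi * 10, -3.0 + 2j * np.pi * 25])
residues = np.array([1.0, 0.5])
N, L = 60, 20                               # samples and pencil parameter, M <= L <= N - M
k = np.arange(N)
x = (residues[None, :] * np.exp(poles[None, :] * (k[:, None] * Ts))).sum(axis=1)

# Hankel-structured pencil matrices built from the samples.
Y1 = np.array([x[i:i + L] for i in range(N - L)])
Y2 = np.array([x[i + 1:i + L + 1] for i in range(N - L)])

# The nonzero eigenvalues of pinv(Y1) @ Y2 are the z_i = exp(s_i * Ts).
eigvals = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)
z = eigvals[np.argsort(-np.abs(eigvals))][:2]    # keep the two dominant eigenvalues
print(np.log(z) / Ts)                            # recovered poles s_i
```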
Noise filtering.
If noise is present in the system, formula_37 and formula_38 are combined in a general data matrix, formula_39:
formula_40
where formula_41 is the noisy data. For efficient filtering, L is chosen between formula_42 and formula_43. A singular value decomposition on formula_39 yields:
formula_44
In this decomposition, formula_45 and formula_46 are unitary matrices with respective eigenvectors formula_47 and formula_48 and formula_49 is a diagonal matrix with singular values of formula_39. Superscript formula_50 denotes the conjugate transpose.
Then the parameter formula_51 is chosen for filtering. Singular values after formula_51, which are below the filtering threshold, are set to zero; for an arbitrary singular value formula_52, the threshold is denoted by the following formula:
formula_53,
formula_54 and "p" are the maximum singular value and significant decimal digits, respectively. For a data with significant digits accurate up to "p", singular values below formula_55 are considered noise.
formula_56 and formula_57 are obtained by removing the last and the first row of the filtered matrix formula_58, respectively; the first formula_51 columns of formula_49 constitute formula_59. The filtered formula_37 and formula_38 matrices are obtained as:
formula_60
formula_61
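The SVD-based filtering just described can be sketched in NumPy as follows; the function name and the convention that the model order formula_51 is supplied by the caller are illustrative assumptions.

```python
import numpy as np

def filtered_pencil_matrices(y, L, M):
    """Build the noise-filtered [Y1] and [Y2] from noisy samples y,
    pencil parameter L (roughly N/3 <= L <= N/2) and model order M."""
    N = len(y)
    # General data matrix [Y] of shape (N - L) x (L + 1).
    Y = np.array([y[i:i + L + 1] for i in range(N - L)])
    U, s, Vh = np.linalg.svd(Y, full_matrices=False)
    V = Vh.conj().T
    Sp = np.diag(s[:M])        # M dominant singular values ([Sigma'])
    Vp = V[:, :M]              # filtered right-singular vectors ([V'])
    V1 = Vp[:-1, :]            # drop the last row  -> [V1']
    V2 = Vp[1:, :]             # drop the first row -> [V2']
    Y1 = U[:, :M] @ Sp @ V1.conj().T
    Y2 = U[:, :M] @ Sp @ V2.conj().T
    return Y1, Y2
```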
Prefiltering can be used to combat noise and enhance the signal-to-noise ratio (SNR). The band-pass matrix pencil (BPMP) method is a modification of the GPOF method via FIR or IIR band-pass filters.
GPOF can handle up to 25 dB SNR. For GPOF, as well as for BPMP, variance of the estimates approximately reaches Cramér–Rao bound.
Calculation of residues.
Residues of the complex poles are obtained through the least squares problem:
formula_62
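In NumPy terms, the least-squares system above can be solved directly; the array names are illustrative.

```python
import numpy as np

def residues(y, z):
    """Least-squares residues R_i given N samples y and estimated poles z_i."""
    N = len(y)
    Z = np.vander(z, N, increasing=True).T   # N x M matrix with entries z_i**k
    R, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return R
```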
Applications.
The method is generally used for the closed-form evaluation of Sommerfeld integrals in discrete complex image method for method of moments applications, where the spectral Green's function is approximated as a sum of complex exponentials. Additionally, the method is used in antenna analysis, S-parameter-estimation in microwave integrated circuits, wave propagation analysis, moving target indication, radar signal processing, and series acceleration in electromagnetic problems.
|
[
{
"math_id": 0,
"text": "y(t)=x(t)+n(t) \\approx \\sum_{i=1}^{M}R_i e^{s_i t} + n(t); 0 \\leq t \\leq T, "
},
{
"math_id": 1,
"text": "y(t)"
},
{
"math_id": 2,
"text": "n(t)"
},
{
"math_id": 3,
"text": "x(t)"
},
{
"math_id": 4,
"text": "R_i"
},
{
"math_id": 5,
"text": "s_i"
},
{
"math_id": 6,
"text": "s_i=-\\alpha_i+j \\omega_i"
},
{
"math_id": 7,
"text": "z_i=e^{(- \\alpha_i + j \\omega_i) T_s}"
},
{
"math_id": 8,
"text": "\\alpha_i"
},
{
"math_id": 9,
"text": "\\omega_i"
},
{
"math_id": 10,
"text": "T_s"
},
{
"math_id": 11,
"text": "y[kT_s]=x[kT_s]+n[kT_s] \\approx \\sum_{i=1}^{M}R_i z_i^{k} + n[kT_s]; k=0,...,N-1; i=1,2,...,M"
},
{
"math_id": 12,
"text": "M"
},
{
"math_id": 13,
"text": "z_i"
},
{
"math_id": 14,
"text": "(N-L) \\times L"
},
{
"math_id": 15,
"text": "Y_1"
},
{
"math_id": 16,
"text": "Y_2"
},
{
"math_id": 17,
"text": "[Y_1]=\n\\begin{bmatrix}\nx(0) & x(1) & \\cdots & x(L-1)\\\\\nx(1) & x(2) & \\cdots & x(L)\\\\\n\\vdots & \\vdots & \\ddots & \\vdots\\\\\nx(N-L-1) & x(N-L) & \\cdots & x(N-2)\n\\end{bmatrix}_{(N-L) \\times L};\n"
},
{
"math_id": 18,
"text": "[Y_2]=\n\\begin{bmatrix}\nx(1) & x(2) & \\cdots & x(L)\\\\\nx(2) & x(3) & \\cdots & x(L+1)\\\\\n\\vdots & \\vdots & \\ddots & \\vdots\\\\\nx(N-L) & x(N-L+1) & \\cdots & x(N-1)\n\\end{bmatrix}_{(N-L) \\times L}\n"
},
{
"math_id": 19,
"text": "L"
},
{
"math_id": 20,
"text": "[Y_1]=[Z_1][B][Z_2]"
},
{
"math_id": 21,
"text": "[Y_2]=[Z_1][B][Z_0][Z_2]"
},
{
"math_id": 22,
"text": "[Z_1]=\n\\begin{bmatrix}\n1 & 1 & \\cdots & 1\\\\\nz_1 & z_2 & \\cdots & z_M\\\\\n\\vdots & \\vdots & \\ddots & \\vdots\\\\\nz_1^{(N-L-1)} & z_2^{(N-L-1)} & \\cdots & z_M^{(N-L-1)}\n\\end{bmatrix}_{(N-L) \\times M};\n"
},
{
"math_id": 23,
"text": "[Z_2]=\n\\begin{bmatrix}\n1 & z_1 & \\cdots & z_1^{L-1}\\\\\n1 & z_2 & \\cdots & z_2^{L-1}\\\\\n\\vdots & \\vdots & \\ddots & \\vdots\\\\\n1 & z_M & \\cdots & z_M^{L-1}\n\\end{bmatrix}_{M \\times L}\n"
},
{
"math_id": 24,
"text": "[Z_0]"
},
{
"math_id": 25,
"text": "[B]"
},
{
"math_id": 26,
"text": "M \\times M"
},
{
"math_id": 27,
"text": "z_i"
},
{
"math_id": 28,
"text": "R_i"
},
{
"math_id": 29,
"text": "M \\leq L \\leq N-M"
},
{
"math_id": 30,
"text": "[Y_2]-\\lambda[Y_1]=[Z_1][B]([Z_0]-\\lambda[I])[Z_2]"
},
{
"math_id": 31,
"text": "\\lambda=z_i"
},
{
"math_id": 32,
"text": "p_i"
},
{
"math_id": 33,
"text": "[Y_1]^+[Y_1]p_i=p_i;"
},
{
"math_id": 34,
"text": "i=1,...,M"
},
{
"math_id": 35,
"text": "[Y_1]^+[Y_2]p_i=z_i p_i;"
},
{
"math_id": 36,
"text": "^+"
},
{
"math_id": 37,
"text": "[Y_1]"
},
{
"math_id": 38,
"text": "[Y_2]"
},
{
"math_id": 39,
"text": "[Y]"
},
{
"math_id": 40,
"text": "[Y]=\n\\begin{bmatrix}\ny(0) & y(1) & \\cdots & y(L)\\\\\ny(1) & y(2) & \\cdots & y(L+1)\\\\\n\\vdots & \\vdots & \\ddots & \\vdots\\\\\ny(N-L-1) & y(N-L) & \\cdots & y(N-1)\n\\end{bmatrix}_{(N-L) \\times (L+1)}\n"
},
{
"math_id": 41,
"text": "y"
},
{
"math_id": 42,
"text": " \\frac{N}{3}"
},
{
"math_id": 43,
"text": " \\frac{N}{2}"
},
{
"math_id": 44,
"text": "[Y]=[U][\\Sigma][V]^H"
},
{
"math_id": 45,
"text": "[U]"
},
{
"math_id": 46,
"text": "[V]"
},
{
"math_id": 47,
"text": "[Y][Y]^H"
},
{
"math_id": 48,
"text": "[Y]^H[Y]"
},
{
"math_id": 49,
"text": "[\\Sigma]"
},
{
"math_id": 50,
"text": "H"
},
{
"math_id": 51,
"text": "M"
},
{
"math_id": 52,
"text": "\\sigma_c"
},
{
"math_id": 53,
"text": "\\frac{\\sigma_c}{\\sigma_{max}}=10^{-p}"
},
{
"math_id": 54,
"text": "\\sigma_{max}"
},
{
"math_id": 55,
"text": "10^{-p}"
},
{
"math_id": 56,
"text": "[V_1']"
},
{
"math_id": 57,
"text": "[V_2']"
},
{
"math_id": 58,
"text": "[V']"
},
{
"math_id": 59,
"text": "[\\Sigma']"
},
{
"math_id": 60,
"text": "[Y_1]=[U][\\Sigma'][V_1']^H"
},
{
"math_id": 61,
"text": "[Y_2]=[U][\\Sigma'][V_2']^H"
},
{
"math_id": 62,
"text": "\\begin{bmatrix}\ny(0) \\\\ y(1) \\\\ \\vdots \\\\ y(N-1)\n\\end{bmatrix} =\n\\begin{bmatrix}\n1 & 1 & \\cdots & 1 \\\\\nz_1 & z_2 & \\cdots & z_M \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\ \nz_1^{N-1} & z_2^{N-1} & \\cdots & z_M^{N-1}\n\\end{bmatrix}\n\\begin{bmatrix}\nR_1 \\\\ R_2 \\\\ \\vdots \\\\ R_M\n\\end{bmatrix}\n"
}
] |
https://en.wikipedia.org/wiki?curid=63057517
|
63059
|
Slugging percentage
|
Hitting statistic in baseball
In baseball statistics, slugging percentage (SLG) is a measure of the batting productivity of a hitter. It is calculated as total bases divided by at-bats, through the following formula, where "AB" is the number of at-bats for a given player, and "1B", "2B", "3B", and "HR" are the number of singles, doubles, triples, and home runs, respectively:
formula_0
Unlike batting average, slugging percentage gives more weight to extra-base hits such as doubles and home runs, relative to singles. Such batters are usually referred to as sluggers. Plate appearances resulting in walks, hit-by-pitches, catcher's interference, and sacrifice bunts or flies are specifically excluded from this calculation, as such an appearance is not counted as an at-bat (these are not factored into batting average either).
The name is a misnomer, as the statistic is not a percentage but an average of how many bases a player achieves per at bat. It is a scale of measure whose computed value is a number from 0 to 4. This might not be readily apparent given that a Major League Baseball player's slugging percentage is almost always less than 1 (it being the case that a majority of at bats result in either 0 or 1 base). The statistic gives a double twice the value of a single, a triple three times the value, and a home run four times. The slugging percentage would have to be divided by 4 to actually be a percentage (of bases achieved per at bat out of total bases possible). As a result, it is occasionally called slugging average, or simply slugging, instead.
A slugging percentage is usually expressed as a decimal to three decimal places, and is generally spoken as if multiplied by 1000. For example, a slugging percentage of .589 would be spoken as "five eighty nine." Slugging percentage can also be applied as an evaluative tool for pitchers. This is not as common, but is referred to as "slugging-percentage against".
In Major League Baseball.
As an example: with the New York Yankees in 1920, Babe Ruth had 458 at bats during which he recorded 172 hits: 73 singles, 36 doubles, 9 triples, and 54 home runs. This was (73 × 1) + (36 × 2) + (9 × 3) + (54 × 4) = 388 total bases. His total number of bases (388) divided by his total at bats (458) is .847, which constitutes his slugging percentage for the season.
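The calculation above is easy to reproduce in a few lines of Python; this is only a sketch of the formula, with Ruth's 1920 numbers plugged in.

```python
def slugging(singles, doubles, triples, home_runs, at_bats):
    """Slugging percentage: total bases divided by at-bats."""
    total_bases = singles + 2 * doubles + 3 * triples + 4 * home_runs
    return total_bases / at_bats

# Babe Ruth, 1920: 73 singles, 36 doubles, 9 triples, 54 home runs in 458 at bats.
print(round(slugging(73, 36, 9, 54, 458), 3))   # 0.847
```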
Ruth's 1920 figure set a record in Major League Baseball (MLB), which stood until 2001 when Barry Bonds achieved 411 bases in 476 at bats for a slugging percentage of .863. Josh Gibson, who played in Negro league baseball, had a slugging percentage of .974 in 1937.
Until the 2024 incorporation of Negro league statistics into major league records, the MLB career leader in slugging percentage was Ruth (.6897), followed by Ted Williams (.6338) and Lou Gehrig (.6324). Ruth was displaced by Josh Gibson, who has a career slugging percentage of .718.
The maximum possible slugging percentage is 4.000. A number of MLB players have had a 4.000 career slugging percentage for a short amount of time by hitting a home run in their first major league at bat. However, no player in MLB history has ever retired with a 4.000 slugging percentage. Four players have tripled in their only MLB plate appearance and therefore share the record—without consideration of a minimum amount of games played or plate appearances—of a career slugging percentage of 3.000. They are Eric Cammack (2000 Mets); Scott Munninghoff (1980 Phillies); Eduardo Rodríguez (1973 Brewers); and Chuck Lindstrom (1958 White Sox).
For the 2023 season, the average slugging percentage for all players in MLB was .414. The highest single season league average was .437 in 2000, and the lowest was .305 in 1908.
Significance.
Long after it was invented, slugging percentage gained new significance when baseball analysts realized that it combined with on-base percentage (OBP) to form a very good measure of a player's overall offensive production (OBP + SLG was originally referred to as "production" by baseball writer and statistician Bill James). A predecessor metric was developed by Branch Rickey in 1954. Rickey, in "Life" magazine, suggested that combining OBP with what he called "extra base power" (EBP) would give a better indicator of player performance than typical Triple Crown stats. EBP was a predecessor to slugging percentage.
Allen Barra and George Ignatin were early adopters in combining the two modern-day statistics, multiplying them together to form what is now known as "SLOB" (Slugging × On-Base). Bill James applied this principle to his runs created formula several years later (and perhaps independently), essentially multiplying SLOB × at bats to create the formula:
formula_1
In 1984, Pete Palmer and John Thorn developed perhaps the most widespread means of combining slugging and on-base percentage: on-base plus slugging (OPS), which is a simple addition of the two values. Because it is easy to calculate, OPS has been used with increased frequency in recent years as a shorthand form to evaluate contributions as a batter.
In a 2015 article, Bryan Grosnick made the point that "on base" and "slugging" may not be comparable enough to be simply added together. "On base" has a theoretical maximum of 1.000 whereas "slugging" has a theoretical maximum of 4.000. The actual numbers do not show as big a difference, with Grosnick listing .350 as a good "on base" and .430 as a good "slugging." He goes on to say that OPS has the advantages of simplicity and availability and further states, "you'll probably get it 75% right, at least."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathrm{SLG} = \\frac{(\\mathit{1B}) + (2 \\times \\mathit{2B}) + (3 \\times \\mathit{3B}) + (4 \\times \\mathit{HR})}{AB}"
},
{
"math_id": 1,
"text": "\\text{RC}=\\frac{(\\text{hits}+\\text{walks})\\times(\\text{total bases})}{(\\text{at bats}) + (\\text{walks})}"
}
] |
https://en.wikipedia.org/wiki?curid=63059
|
63067144
|
Boltzmann sampler
|
Random sampling algorithm
A Boltzmann sampler is an algorithm intended for random sampling of combinatorial structures. If the object size is viewed as its energy, and the argument of the corresponding generating function is interpreted in terms of the temperature of the physical system, then a Boltzmann sampler returns an object from a classical Boltzmann distribution.
The concept of Boltzmann sampler was proposed by Philippe Duchon, Philippe Flajolet, Guy Louchard and Gilles Schaeffer in 2004.
Description.
The concept of Boltzmann sampling is closely related to the symbolic method in combinatorics.
Let formula_0 be a combinatorial class with an ordinary generating function formula_1 which has a nonzero radius of convergence formula_2, i.e. is complex analytic. Formally speaking, if each object
formula_3 is equipped with a non-negative integer "size" formula_4, then the generating function formula_1 is defined as
formula_5
where formula_6 denotes the number of objects formula_3 of size formula_7. The size function is typically used to denote the number of vertices in a tree or in a graph, the number of letters in a word, etc.
A "Boltzmann sampler" for the class formula_8 with a parameter formula_9 such that formula_10, denoted as
formula_11 returns an object formula_3 with probability
formula_12
Construction.
Finite sets.
If formula_13 is finite, then an element formula_14 is drawn with probability proportional to formula_15.
Disjoint union.
If the target class is a disjoint union of two other classes, formula_16, and the generating functions formula_17 and formula_18 of formula_19 and formula_20 are known, then the Boltzmann sampler for formula_0 can be obtained as
formula_21
where formula_22 stands for "if the random variable formula_23 is 1, then execute formula_24, else execute formula_25". More generally, if the disjoint union is taken over a finite set, the resulting Boltzmann sampler can be represented using a random choice with probabilities proportional to the values of the generating functions.
Cartesian product.
If formula_26 is a class constructed of ordered pairs formula_27 where formula_28 and formula_29, then the corresponding Boltzmann sampler formula_11 can be obtained as
formula_30
i.e. by forming a pair formula_27 with formula_31 and formula_32 drawn independently from formula_33 and formula_34.
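The two constructions above can be phrased as small higher-order combinators. The following sketch assumes toy component classes (a single atom and a pair of atoms) purely so that the example is self-contained and runnable:
<syntaxhighlight lang="python">
import random

def boltzmann_union(z, sampler_A, sampler_B, A_of_z, B_of_z):
    """Disjoint union C = A + B: branch to A with probability A(z) / (A(z) + B(z))."""
    if random.random() < A_of_z / (A_of_z + B_of_z):
        return sampler_A(z)
    return sampler_B(z)

def boltzmann_product(z, sampler_A, sampler_B):
    """Cartesian product C = A x B: draw the two components independently."""
    return (sampler_A(z), sampler_B(z))

# Toy classes: one atom of size 1, so A(z) = z, and a pair of atoms, so B(z) = z**2.
atom = lambda z: "Z"
pair_of_atoms = lambda z: ("Z", "Z")
print(boltzmann_union(0.5, atom, pair_of_atoms, 0.5, 0.25))
print(boltzmann_product(0.5, atom, pair_of_atoms))
</syntaxhighlight>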
Sequence.
If formula_8 is composed of all the finite sequences of elements of formula_19 with size of a sequence additively inherited from sizes of components, then the generating function of formula_8 is expressed as
formula_35, where formula_17 is the generating function of formula_19. Alternatively, the class formula_8 admits a recursive representation formula_36 This gives two possibilities for formula_11: either
formula_37
or
formula_38
where formula_39 stands for "draw a random variable formula_23; if the value formula_40 is returned, then execute formula_24 independently formula_41 times and return the sequence obtained". Here, formula_42 stands for the geometric distribution formula_43.
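The geometric version of the sequence sampler admits an equally short sketch. Here the component class formula_19 is taken to be a single atom of size 1, so that formula_17 equals formula_9; this choice is made only to keep the example concrete, and it requires formula_9 to be strictly less than 1 for the sampler to terminate:
<syntaxhighlight lang="python">
import random

def geometric(p):
    """Sample k with probability p**k * (1 - p): count successes before the first failure."""
    k = 0
    while random.random() < p:
        k += 1
    return k

def boltzmann_sequence(z, sampler_A, A_of_z):
    """Boltzmann sampler for Seq(A): draw Geom(A(z)), then sample that many components."""
    return [sampler_A(z) for _ in range(geometric(A_of_z))]

# Toy component class: a single atom of size 1, so A(z) = z.
atom = lambda z: "Z"
print(boltzmann_sequence(0.4, atom, 0.4))
</syntaxhighlight>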
Recursive classes.
As the first construction of the sequence operator suggests, Boltzmann samplers can be used recursively. If the target class formula_8 is a part of the system
formula_44
where each of the expressions formula_45 involves only disjoint union, cartesian product and sequence operator, then the corresponding Boltzmann sampler is well defined. Given the argument value formula_9, the numerical values of the generating functions can be obtained by Newton iteration.
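As an illustration of how the numerical values can be obtained, the following sketch evaluates the generating function of a recursive specification by simple fixed-point iteration (a deliberate simplification of the Newton iteration mentioned above). The specification used, sequences of atoms with generating function satisfying C(z) = 1 + z·C(z), is chosen here only to make the example concrete:
<syntaxhighlight lang="python">
def evaluate_gf(phi, z, tol=1e-12, max_iter=100_000):
    """Solve y = phi(y, z) by fixed-point iteration, starting from y = 0."""
    y = 0.0
    for _ in range(max_iter):
        y_next = phi(y, z)
        if abs(y_next - y) < tol:
            return y_next
        y = y_next
    raise RuntimeError("iteration did not converge")

# Sequences of atoms: C(z) = 1 + z * C(z), whose solution is 1 / (1 - z).
phi = lambda y, z: 1.0 + z * y
print(evaluate_gf(phi, 0.25))  # approximately 1.3333 = 1 / (1 - 0.25)
</syntaxhighlight>
For a system of several equations the same iteration is run on the whole vector of unknowns; Newton iteration replaces the fixed-point step to obtain quadratic convergence.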
Labelled structures.
Boltzmann sampling can be applied to labelled structures. For a labelled combinatorial class formula_8, the exponential generating function is used instead:
formula_46
where formula_6 denotes the number of labelled objects formula_3 of size formula_7. The operations of cartesian product and sequence need to be adjusted to take the labelling into account, while the principle of construction remains the same.
In the labelled case, the Boltzmann sampler for a labelled class formula_8 is required to output an object formula_3 with probability
formula_47
Labelled sets.
In the labelled universe, a class formula_8 can be composed of all the finite sets of elements of a class formula_19 with order-consistent relabellings. In this case, the exponential generating function of the class formula_8 is written as
formula_48
where formula_17 is the exponential generating function of the class formula_19. The Boltzmann sampler for formula_8 can be described as
formula_49
where formula_50 stands for the standard Poisson distribution formula_51.
Labelled cycles.
In the cycle construction, a class formula_8 is composed of all the finite sequences of elements of a class formula_19, where two sequences are considered equivalent if they can be obtained by a cyclic shift. The exponential generating function of the class formula_8 is written as
formula_52
where formula_17 is the exponential generating function of the class formula_19. The Boltzmann sampler for formula_8 can be described as
formula_53
where formula_54 describes the log-law distribution formula_55.
Properties.
Let formula_56 denote the random size of the generated object from formula_11. Then, the size has the first and the second moment satisfying
formula_57
formula_58
and consequently formula_59
Examples.
Binary trees.
The class formula_20 of binary trees can be defined by the recursive specification
formula_60
and its generating function formula_18 satisfies an equation
formula_61 and can be evaluated as a solution of the quadratic equation
formula_62
The resulting Boltzmann sampler can be described recursively by
formula_63
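A minimal sketch of this sampler, representing a leaf by a string and an internal node by a tuple (representation choices made here only for illustration); the parameter formula_9 must lie strictly between 0 and 1/2 for the expected size to be finite:
<syntaxhighlight lang="python">
import math
import random

def B(z):
    """Generating function of binary trees: the solution of B(z) = z + z * B(z)**2."""
    return (1.0 - math.sqrt(1.0 - 4.0 * z * z)) / (2.0 * z)

def sample_binary_tree(z):
    """Boltzmann sampler: a leaf with probability z / B(z), otherwise an internal
    node carrying two independent recursive samples."""
    if random.random() < z / B(z):
        return "leaf"
    return ("node", sample_binary_tree(z), sample_binary_tree(z))

def size(tree):
    """Number of atoms (leaves and internal nodes) of a sampled tree."""
    if tree == "leaf":
        return 1
    return 1 + size(tree[1]) + size(tree[2])

random.seed(0)
print(size(sample_binary_tree(0.45)))
</syntaxhighlight>
Letting the parameter approach 1/2 drives the expected size to infinity; in practice it is tuned so that the expected size matches the desired target size.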
Set partitions.
Consider the various partitions of the set formula_64 into non-empty classes, where the classes themselves are unordered.
Using symbolic method, the class formula_8 of set partitions can be expressed as
formula_65
The corresponding generating function is equal to formula_66. Therefore, the Boltzmann sampler can be described as
formula_67
where the positive Poisson distribution formula_68 is a Poisson distribution with a parameter formula_69 conditioned to take only positive values.
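A minimal sketch of this sampler, which returns only the list of block sizes (the uniform assignment of the labels 1, ..., n to the blocks is omitted for brevity); the Poisson and positive-Poisson helpers are standard samplers and not part of the construction itself:
<syntaxhighlight lang="python">
import math
import random

def poisson(lam):
    """Knuth's multiplication method for Poisson(lam), adequate for moderate lam."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def positive_poisson(lam):
    """Poisson(lam) conditioned to be at least 1, sampled by rejection."""
    while True:
        k = poisson(lam)
        if k > 0:
            return k

def sample_set_partition_blocks(z):
    """Boltzmann sampler for set partitions: Poisson(e**z - 1) blocks, each block
    containing a positive-Poisson(z) number of atoms."""
    return [positive_poisson(z) for _ in range(poisson(math.exp(z) - 1.0))]

random.seed(1)
print(sample_set_partition_blocks(1.0))
</syntaxhighlight>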
Further generalisations.
The original Boltzmann samplers described by Philippe Duchon, Philippe Flajolet, Guy Louchard and Gilles Schaeffer only support the basic unlabelled operations of disjoint union, cartesian product and sequence, and two additional operations for labelled classes, namely the set and the cycle construction. Since then, the scope of combinatorial classes for which a Boltzmann sampler can be constructed has expanded.
Unlabelled structures.
The admissible operations for unlabelled classes include such additional operations as Multiset, Cycle and Powerset. Boltzmann samplers for these operations have been described by Philippe Flajolet, Éric Fusy and Carine Pivoteau.
Differential specifications.
Let formula_8 be a labelled combinatorial class. The derivative operation is defined as follows: take a labelled object formula_3 and replace the atom with the largest label by a distinguished atom without a label, thereby reducing the size of the resulting object by 1. If formula_70 is the exponential generating function of the class formula_8, then the exponential generating function of the derivative class formula_71 is given by
formula_72
A differential specification is a recursive specification of type
formula_73
where the expression formula_74 involves only standard operations of union, product, sequence, cycle and set, and does not involve differentiation.
Boltzmann samplers for differential specifications have been constructed by Olivier Bodini, Olivier Roussel and Michèle Soria.
Multi-parametric Boltzmann samplers.
A multi-parametric Boltzmann distribution for multiparametric combinatorial classes is defined similarly to the classical case. Assume that each object formula_3 is equipped with the composition size formula_75, which is a vector of non-negative integers. Each of the size functions formula_76 can reflect one of the parameters of a data structure, such as the number of leaves of a certain colour in a tree, the height of the tree, etc. The corresponding "multivariate generating function" formula_77 is then associated with the multi-parametric class, and is defined as
formula_78
A "Boltzmann sampler" for the multiparametric class formula_8 with a vector parameter formula_79 inside the domain of analyticity of formula_77, denoted as
formula_80 returns an object formula_3 with probability
formula_81
Multiparametric Boltzmann samplers have been constructed by Olivier Bodini and Yann Ponty. A polynomial-time algorithm for finding the numerical values of the parameters formula_82, given the target parameter expectations, can be obtained by formulating an auxiliary convex optimisation problem.
Applications.
Boltzmann sampling can be used to generate algebraic data types for the sake of property-based testing.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathcal{C}"
},
{
"math_id": 1,
"text": "C(z)"
},
{
"math_id": 2,
"text": "\\rho"
},
{
"math_id": 3,
"text": "c \\in \\mathcal C"
},
{
"math_id": 4,
"text": "\\omega(c)"
},
{
"math_id": 5,
"text": "C(z) = \\sum_{c \\in \\mathcal C} z^{\\omega(c)} = \\sum_{n = 0}^\\infty a_n z^n,"
},
{
"math_id": 6,
"text": "a_n"
},
{
"math_id": 7,
"text": "n"
},
{
"math_id": 8,
"text": "\\mathcal C"
},
{
"math_id": 9,
"text": "z"
},
{
"math_id": 10,
"text": "0 < z < \\rho"
},
{
"math_id": 11,
"text": "\\Gamma \\mathcal C(z)"
},
{
"math_id": 12,
"text": "\\mathbb P(\\Gamma \\mathcal C(z) = c) = \\dfrac{z^{\\omega(c)}}{C(z)}."
},
{
"math_id": 13,
"text": "\\mathcal{C} = \\{ c_1, \\ldots, c_r \\}"
},
{
"math_id": 14,
"text": "c_j"
},
{
"math_id": 15,
"text": "z^{\\omega(c_j)}"
},
{
"math_id": 16,
"text": "\\mathcal C = \\mathcal A + \\mathcal B"
},
{
"math_id": 17,
"text": "A(z)"
},
{
"math_id": 18,
"text": "B(z)"
},
{
"math_id": 19,
"text": "\\mathcal A"
},
{
"math_id": 20,
"text": "\\mathcal B"
},
{
"math_id": 21,
"text": "\n\\left( \\operatorname{Bern} \\left( \\frac{A(z)}{C(z)} \\right) \\longrightarrow \\Gamma \\mathcal A(z) \\mid \\Gamma \\mathcal B(z) \\right)\n"
},
{
"math_id": 22,
"text": "(X \\longrightarrow f \\mid g)"
},
{
"math_id": 23,
"text": "X"
},
{
"math_id": 24,
"text": "f"
},
{
"math_id": 25,
"text": "g"
},
{
"math_id": 26,
"text": "\\mathcal C = \\mathcal A \\times \\mathcal B"
},
{
"math_id": 27,
"text": "(a, b)"
},
{
"math_id": 28,
"text": "a \\in \\mathcal A"
},
{
"math_id": 29,
"text": "b \\in \\mathcal B"
},
{
"math_id": 30,
"text": "\\Gamma \\mathcal C(z) = (\\Gamma \\mathcal A(z), \\Gamma \\mathcal B(z)),"
},
{
"math_id": 31,
"text": "a"
},
{
"math_id": 32,
"text": "b"
},
{
"math_id": 33,
"text": "\\Gamma \\mathcal A(z)"
},
{
"math_id": 34,
"text": "\\Gamma \\mathcal B(z)"
},
{
"math_id": 35,
"text": "C(z) = \\sum_{k=0}^\\infty A(z)^k = \\dfrac{1}{1 - A(z)}"
},
{
"math_id": 36,
"text": "\\mathcal C = 1 + \\mathcal A \\times \\mathcal C."
},
{
"math_id": 37,
"text": "\\Gamma \\mathcal C(z) = \\Gamma (1 + \\mathcal A \\times \\mathcal C)(z)\n=\\left( \\operatorname{Bern}\\left(\\frac{1}{C(z)}\\right) \\longrightarrow 1 \\, \\Big|\\, (\\Gamma \\mathcal A(z), \\Gamma \\mathcal C(z)) \\right)"
},
{
"math_id": 38,
"text": "\\Gamma \\mathcal C(z) = \\left(\n\\operatorname{Geom}(A(z)) \\Longrightarrow \\Gamma \\mathcal C(z)\n\\right)"
},
{
"math_id": 39,
"text": "(X \\Longrightarrow f)"
},
{
"math_id": 40,
"text": "X = x"
},
{
"math_id": 41,
"text": "x"
},
{
"math_id": 42,
"text": "\\operatorname{Geom}(p)"
},
{
"math_id": 43,
"text": "\\mathbb P(\\operatorname{Geom}(p) = k) = p^k (1 - p)"
},
{
"math_id": 44,
"text": "\\begin{cases}\n\\mathcal C_1 = \\Phi_1(\\mathcal C_1, \\ldots, \\mathcal C_n, \\mathcal Z),\\\\\n\\qquad \\vdots \\\\\n\\mathcal C_n = \\Phi_n(\\mathcal C_1, \\ldots, \\mathcal C_n, \\mathcal Z),\n\\end{cases}"
},
{
"math_id": 45,
"text": "\\Phi_k(\\mathcal C_1, \\ldots, \\mathcal C_n, \\mathcal Z)"
},
{
"math_id": 46,
"text": "C(z) = \\sum_{c \\in \\mathcal C} \\dfrac{z^{\\omega(c)}}{\\omega(c)!} = \\sum_{n = 0}^\\infty a_n \\dfrac{z^n}{n!},"
},
{
"math_id": 47,
"text": "\\mathbb P(\\Gamma \\mathcal C(z) = c) = \\frac{1}{C(z)} \\frac{z^{\\omega(c)}}{\\omega(c)!}."
},
{
"math_id": 48,
"text": "C(z) = \\sum_{k = 0}^\\infty \\dfrac{A(z)^k}{k!} = e^{A(z)}"
},
{
"math_id": 49,
"text": "\\Gamma \\mathcal C(z) = \\left(\n\\operatorname{Poisson}(A(z)) \\Longrightarrow \\Gamma \\mathcal C(z)\n\\right)"
},
{
"math_id": 50,
"text": "\\operatorname{Poisson}(\\lambda)"
},
{
"math_id": 51,
"text": "\\mathbb P(\\operatorname{Poisson}(\\lambda) = k) = e^{-\\lambda} \\dfrac{\\lambda^k}{k!}"
},
{
"math_id": 52,
"text": "C(z) = \\sum_{k = 0}^\\infty \\frac{A(z)^k}{k} = \\log \\frac{1}{1 - A(z)}"
},
{
"math_id": 53,
"text": "\\Gamma \\mathcal C(z) = \\left(\n\\operatorname{Loga}(A(z)) \\Longrightarrow \\Gamma \\mathcal C(z)\n\\right)"
},
{
"math_id": 54,
"text": "\\operatorname{Loga}(\\lambda)"
},
{
"math_id": 55,
"text": "\\mathbb P(\\operatorname{Loga}(\\lambda) = k) = \\dfrac{1}{\\log(1 - \\lambda)^{-1}} \\dfrac{\\lambda^k}{k}"
},
{
"math_id": 56,
"text": "N"
},
{
"math_id": 57,
"text": "\\mathbb E_z (N) = z \\dfrac{C'(z)}{C(z)};"
},
{
"math_id": 58,
"text": "\\mathbb E_z (N^2) = \\dfrac{z^2 C''(z) + z C'(z)}{C(z)};"
},
{
"math_id": 59,
"text": "z \\dfrac{d}{dz} \\mathbb E_z (N) = \\operatorname{Var}_z (N)"
},
{
"math_id": 60,
"text": "\\mathcal B = \\mathcal Z + \\mathcal Z \\times \\mathcal B \\times \\mathcal B"
},
{
"math_id": 61,
"text": "B(z) = z + z B(z)^2"
},
{
"math_id": 62,
"text": "B(z) = \\dfrac{1 - \\sqrt{1 - 4 z^2}}{2z}"
},
{
"math_id": 63,
"text": " \\Gamma \\mathcal B(z) = \\left( \\operatorname{Bern}\\left( \\frac{z}{B(z)} \\right) \\longrightarrow\n\\mathcal Z \\mid \\left( \\mathcal Z, \\, \\Gamma \\mathcal B(z), \\, \\Gamma \\mathcal B(z) \\right) \\right)\n"
},
{
"math_id": 64,
"text": "\\{1, 2, \\ldots, n\\}"
},
{
"math_id": 65,
"text": "\\mathcal C = \\operatorname{Set}(\\operatorname{Set}_{>0}(\\mathcal Z))."
},
{
"math_id": 66,
"text": "C(z) = e^{e^z - 1}"
},
{
"math_id": 67,
"text": "\\Gamma \\mathcal C = \\left( \\operatorname{Poisson}(e^z - 1) \\Longrightarrow \\left( \\operatorname{Poisson}_{>0} (z) \\Longrightarrow \\mathcal Z \\right) \\right),"
},
{
"math_id": 68,
"text": "\\operatorname{Poisson}_{>0}(\\lambda)"
},
{
"math_id": 69,
"text": "\\lambda"
},
{
"math_id": 70,
"text": "C(z) = \\sum_{n=0}^\\infty a_n \\dfrac{z^n}{n!}"
},
{
"math_id": 71,
"text": "\\mathcal C'"
},
{
"math_id": 72,
"text": "C'(z) = \\dfrac{d}{dz} C(z) = \\sum_{n = 0}^\\infty n a_n \\frac{z^{n-1}}{n!}"
},
{
"math_id": 73,
"text": "\\mathcal T' = \\Phi(\\mathcal T, \\mathcal Z)"
},
{
"math_id": 74,
"text": "\\Phi(\\mathcal T, \\mathcal Z)"
},
{
"math_id": 75,
"text": "\\omega(c) = (\\omega_1(c), \\ldots, \\omega_d(c))"
},
{
"math_id": 76,
"text": "\\omega_j(c)"
},
{
"math_id": 77,
"text": "C(z_1, \\ldots, z_d)"
},
{
"math_id": 78,
"text": "C(z_1, \\ldots, z_d) = \\sum_{c \\in \\mathcal C}\nz_1^{\\omega_1(c)} \\cdots z_d^{\\omega_d(c)}."
},
{
"math_id": 79,
"text": "\\boldsymbol z = (z_1, \\ldots, z_d)"
},
{
"math_id": 80,
"text": "\\Gamma \\mathcal C(z_1, \\ldots, z_d)"
},
{
"math_id": 81,
"text": "\\mathbb P( \\Gamma \\mathcal C(z_1, \\ldots, z_d) = c) = \\frac{z_1^{\\omega_1(c)} \\cdots z_d^{\\omega_d(c)}}{C(z_1, \\ldots, z_d)}."
},
{
"math_id": 82,
"text": "z_1, \\ldots, z_d"
}
] |
https://en.wikipedia.org/wiki?curid=63067144
|
63070573
|
Derived noncommutative algebraic geometry
|
Mathematics study in geometry
In mathematics, derived noncommutative algebraic geometry, the derived version of noncommutative algebraic geometry, is the geometric study of derived categories and related constructions of triangulated categories using categorical tools. Some basic examples include the bounded derived category of coherent sheaves on a smooth variety, formula_0, called its derived category, or the derived category of perfect complexes on an algebraic variety, denoted formula_1. For instance, the derived category of coherent sheaves formula_0 on a smooth projective variety can be used as an invariant of the underlying variety in many cases (namely, if formula_2 has an ample (anti-)canonical sheaf). Unfortunately, studying derived categories as geometric objects in their own right does not have a standardized name.
Derived category of projective line.
The derived category of formula_3 is one of the motivating examples for derived non-commutative schemes due to its easy categorical structure. Recall that the Euler sequence of formula_3 is the short exact sequence
formula_4
If we consider the two terms on the right as a complex, then we get the distinguished triangle
formula_5
Since formula_6, we have constructed the sheaf formula_7 using only categorical tools. We could repeat this by tensoring the Euler sequence with the flat sheaf formula_8 and applying the cone construction again. If we take the duals of the sheaves, then we can construct all of the line bundles in formula_9 using only the triangulated structure. It turns out that the correct way of studying derived categories from their objects and triangulated structure is with exceptional collections.
Semiorthogonal decompositions and exceptional collections.
The technical tools for encoding this construction are semiorthogonal decompositions and exceptional collections. A semiorthogonal decomposition of a triangulated category formula_10 is a collection of full triangulated subcategories formula_11 such that the following two properties hold
(1) For objects formula_12 we have formula_13 for formula_14
(2) The subcategories formula_15 generate formula_10, meaning every object formula_16 can be decomposed into a sequence of formula_17,
formula_18
such that formula_19. Notice this is analogous to a filtration of an object in an abelian category such that the cokernels live in a specific subcategory.
We can specialize this a little further by considering exceptional collections of objects, which generate their own subcategories. An object formula_20 in a triangulated category is called exceptional if the following property holds
formula_21
where formula_22 is the underlying field of the vector space of morphisms. A collection of exceptional objects formula_23 is an exceptional collection of length formula_24 if for any formula_14 and any formula_25, we have
formula_26
and is a strong exceptional collection if in addition, for any formula_27 and "any" formula_28, we have
formula_26
We can then decompose our triangulated category into the semiorthogonal decomposition
formula_29
where formula_30, the subcategory of objects in formula_31 such that formula_32. If in addition formula_33 then the strong exceptional collection is called full.
Beilinson's theorem.
Beilinson provided the first example of a full strong exceptional collection. In the derived category formula_34 the line bundles formula_35 form a full strong exceptional collection. He proved the theorem in two parts: first by showing that these objects form an exceptional collection, and second by showing that the diagonal formula_36 of formula_37 has a resolution whose terms are tensor products of pullbacks of the exceptional objects.
Technical Lemma
An exceptional collection of sheaves formula_38 on formula_2 is full if there exists a resolution
formula_39
in formula_40 where the formula_41 are some coherent sheaves on formula_2.
Another way to reformulate this lemma for formula_42 is by looking at the Koszul complex associated to
formula_43
where the formula_44 are hyperplane divisors of formula_45. This gives the exact complex
formula_46
which gives a way to construct formula_47 using the sheaves formula_48, since these are the sheaves appearing in every term of the above exact sequence except for
formula_49
which gives a derived equivalence of the rest of the terms of the above complex with formula_47. For formula_50 the Koszul complex above is the exact complex
formula_51
giving a quasi-isomorphism of formula_52 with the complex
formula_53
Orlov's reconstruction theorem.
If formula_2 is a smooth projective variety with ample (anti-)canonical sheaf and there is an equivalence of derived categories formula_54, then there is an isomorphism of the underlying varieties.
Sketch of proof.
The proof starts out by analyzing two induced Serre functors on formula_55 and finding an isomorphism between them. In particular, it shows there is an object formula_56 which acts like the dualizing sheaf on formula_57. The isomorphism between these two functors gives an isomorphism of the sets of underlying points of the derived categories. Then, what needs to be checked is an isomorphism formula_58, for any formula_59, giving an isomorphism of canonical rings
formula_60
If formula_61 can be shown to be (anti-)ample, then the Proj of these rings will give an isomorphism formula_62. All of the details are contained in Dolgachev's notes.
Failure of reconstruction.
This theorem fails in the case where formula_2 is Calabi–Yau, since then formula_63, or where formula_2 is the product of a variety which is Calabi–Yau. Abelian varieties are a class of examples where a reconstruction theorem could "never" hold. If formula_2 is an abelian variety and formula_64 is its dual, the Fourier–Mukai transform with kernel formula_65, the Poincaré bundle, gives an equivalence
formula_66
of derived categories. Since an abelian variety is generally not isomorphic to its dual, there are equivalent derived categories whose underlying varieties are not isomorphic. There is an alternative theory of tensor triangulated geometry where one considers not only a triangulated category, but also a monoidal structure, i.e. a tensor product. This geometry has a full reconstruction theorem using the spectrum of categories.
Equivalences on K3 surfaces.
K3 surfaces are another class of examples where reconstruction fails, due to their Calabi–Yau property. There is a criterion for determining whether or not two K3 surfaces are derived equivalent: the derived category formula_0 of a K3 surface is equivalent to the derived category formula_55 of another K3 surface if and only if there is a Hodge isometry formula_67, that is, an isomorphism of Hodge structures. Moreover, this theorem is reflected in the motivic world as well, where the Chow motives are isomorphic if and only if there is an isometry of Hodge structures.
Autoequivalences.
One nice application of the proof of this theorem is the identification of autoequivalences of the derived category of a smooth projective variety with ample (anti-)canonical sheaf. This is given by
formula_68
Here an autoequivalence formula_69 is given by an automorphism formula_70, followed by tensoring with a line bundle formula_71 and finally composing with a shift. Note that formula_72 acts on formula_73 via the polarization map, formula_74.
Relation with motives.
The bounded derived category formula_0 was used extensively in SGA6 to construct an intersection theory with formula_75 and formula_76. Since these objects are intimately related to the Chow ring of formula_2 and its Chow motive, Orlov asked the following question: given a fully faithful functor
formula_77
is there an induced map on the chow motives
formula_78
such that formula_79 is a summand of formula_80? In the case of K3 surfaces, a similar result has been confirmed since derived equivalent K3 surfaces have an isometry of Hodge structures, which gives an isomorphism of motives.
Derived category of singularities.
On a smooth variety there is an equivalence between the derived category formula_0 and the thick full triangulated subcategory formula_1 of perfect complexes. For separated, Noetherian schemes of finite Krull dimension (called the ELF condition) this is in general not the case, and Orlov defines the derived category of singularities as their difference, using a quotient of categories. For an ELF scheme formula_2 its derived category of singularities is defined as
formula_81
for a suitable definition of localization of triangulated categories.
Construction of localization.
Although the localization of a category is defined for a class of morphisms formula_82 closed under composition, such a class can be constructed from a triangulated subcategory. Given a full triangulated subcategory formula_83, the class formula_84 consists of the morphisms formula_85 in formula_10 which fit into a distinguished triangle
formula_86
with formula_87 and formula_88. It can be checked that this forms a multiplicative system, using the octahedral axiom for distinguished triangles. Given
formula_89
with distinguished triangles
formula_90
formula_91
where formula_92, then there are distinguished triangles
formula_93
formula_94 where formula_95 since formula_96 is closed under extensions. This new category formula_97 has the following properties:
(1) It carries a triangulated structure for which the quotient functor formula_100 is exact.
(2) Any exact functor formula_98 such that formula_99 for every formula_88 factors through formula_100; that is, there is an induced exact functor formula_101 with formula_102.
(3) In the case of the singularity category formula_105, every object is isomorphic to a shift formula_106 of the image of a coherent sheaf formula_103, and formula_105 vanishes precisely when formula_2 is regular, so it captures information only about the singular locus formula_104.
Landau–Ginzburg models.
Kontsevich proposed a model for Landau–Ginzburg models, which was worked out into the following definition: a Landau–Ginzburg model is a smooth variety formula_2 together with a flat morphism formula_107. There are three associated categories which can be used to analyze the D-branes in a Landau–Ginzburg model using matrix factorizations from commutative algebra.
Associated categories.
With this definition, there are three categories which can be associated to any point formula_108, a formula_109-graded category formula_110, an exact category formula_111, and a triangulated category formula_112, each of which has objects
formula_113 where formula_114 are multiplication by formula_115.
There is also a shift functor formula_116 sending formula_117 to formula_118. The difference between these categories lies in their definition of morphisms. The most general is formula_110, whose morphisms form the formula_109-graded complex
formula_119
where the grading is given by formula_120 and the differential acts on degree-formula_121 homogeneous elements by
formula_122
In formula_111 the morphisms are the degree-formula_123 morphisms in formula_110. Finally, formula_112 has the morphisms in formula_111 modulo the null-homotopies. Furthermore, formula_112 can be endowed with a triangulated structure through a graded cone construction in formula_111. Given formula_124 there is a mapping cone formula_125 with maps
formula_126 where formula_127
and
formula_128 where formula_129
Then, a diagram formula_130 in formula_112 is a distinguished triangle if it is isomorphic to a cone from formula_111.
D-brane category.
Using the construction of formula_112 we can define the category of D-branes of type B on formula_2 with superpotential formula_131 as the product category
formula_132
This is related to the singularity category as follows: Given a superpotential formula_131 with isolated singularities only at formula_123, denote formula_133. Then, there is an exact equivalence of categories
formula_134
given by a functor induced from the cokernel functor formula_135, which sends a pair formula_136. In particular, since formula_2 is regular, Bertini's theorem shows that formula_137 is only a finite product of categories.
Computational tools.
Knörrer periodicity.
There is a Fourier–Mukai transform formula_138 on the derived categories of two related varieties giving an equivalence of their singularity categories. This equivalence is called Knörrer periodicity. It can be constructed as follows: given a flat morphism formula_139 from a separated regular Noetherian scheme of finite Krull dimension, there is an associated scheme formula_140 and a morphism formula_141 such that formula_142, where formula_143 is the product of the two coordinates of the formula_144-factor. Consider the fibers formula_145 and formula_146, the induced morphism formula_147, and the fiber formula_148. Then, there is an injection formula_149 and a projection formula_150 forming an formula_151-bundle. The Fourier–Mukai transform
formula_152
induces an equivalence of categories
formula_153
called Knörrer periodicity. There is another form of this periodicity where formula_143 is replaced by the polynomial formula_154. These periodicity theorems are the main computational technique because they allow a reduction in the analysis of the singularity categories.
Computations.
If we take the Landau–Ginzburg model formula_155 where formula_156, then the only singular fiber of formula_131 is the fiber over the origin. The D-brane category of the Landau–Ginzburg model is then equivalent to the singularity category formula_157. Over the algebra formula_158 there are indecomposable objects
formula_159
whose morphisms can be completely understood. For any pair formula_160 there are morphisms formula_161, given by the natural projection when formula_162 and by multiplication by formula_164 when formula_163; every other morphism is a composition and linear combination of these morphisms. There are many other cases which can be explicitly computed, using the table of singularities found in Knörrer's original paper.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "D^b(X)"
},
{
"math_id": 1,
"text": "D_{\\operatorname{perf}}(X)"
},
{
"math_id": 2,
"text": "X"
},
{
"math_id": 3,
"text": "\\mathbb{P}^1"
},
{
"math_id": 4,
"text": "0 \\to \\mathcal{O}(-2) \\to \\mathcal{O}(-1)^{\\oplus 2} \\to \\mathcal{O} \\to 0"
},
{
"math_id": 5,
"text": "\\mathcal{O}(-1)^{\\oplus 2} \\overset{\\phi}{\\rightarrow} \\mathcal{O} \\to \\operatorname{Cone}(\\phi) \\overset{+1}{\\rightarrow}."
},
{
"math_id": 6,
"text": "\\operatorname{Cone}(\\phi) \\cong \\mathcal{O}(-2)[+1]"
},
{
"math_id": 7,
"text": "\\mathcal{O}(-2)"
},
{
"math_id": 8,
"text": "\\mathcal{O}(-1)"
},
{
"math_id": 9,
"text": "\\operatorname{Coh}(\\mathbb{P}^1)"
},
{
"math_id": 10,
"text": "\\mathcal{T}"
},
{
"math_id": 11,
"text": "\\mathcal{T}_1,\\ldots, \\mathcal{T}_n"
},
{
"math_id": 12,
"text": "T_i \\in \\operatorname{Ob}(\\mathcal{T}_i)"
},
{
"math_id": 13,
"text": "\\operatorname{Hom}(T_i, T_j) = 0"
},
{
"math_id": 14,
"text": "i > j"
},
{
"math_id": 15,
"text": "\\mathcal{T}_i"
},
{
"math_id": 16,
"text": "T \\in \\operatorname{Ob}(\\mathcal{T})"
},
{
"math_id": 17,
"text": "T_i \\in \\operatorname{Ob}(\\mathcal{T})"
},
{
"math_id": 18,
"text": "0 = T_n \\to T_{n-1} \\to \\cdots \\to T_1 \\to T_0 = T"
},
{
"math_id": 19,
"text": "\\operatorname{Cone}(T_i \\to T_{i-1}) \\in \\operatorname{Ob}(\\mathcal{T}_i)"
},
{
"math_id": 20,
"text": "E"
},
{
"math_id": 21,
"text": "\\operatorname{Hom}(E,E[+\\ell]) = \\begin{cases}\nk &\\text{if } \\ell = 0 \\\\\n0 &\\text{if } \\ell \\neq 0\n\\end{cases}"
},
{
"math_id": 22,
"text": "k"
},
{
"math_id": 23,
"text": "E_1, \\ldots, E_r"
},
{
"math_id": 24,
"text": "r"
},
{
"math_id": 25,
"text": "\\ell"
},
{
"math_id": 26,
"text": "\\operatorname{Hom}(E_i, E_j[+\\ell]) = 0"
},
{
"math_id": 27,
"text": "\\ell \\neq 0"
},
{
"math_id": 28,
"text": "i, j"
},
{
"math_id": 29,
"text": "\\mathcal{T} = \\langle \\mathcal{T}', E_1, \\ldots, E_r \\rangle "
},
{
"math_id": 30,
"text": "\\mathcal{T}' = \\langle E_1, \\ldots, E_r \\rangle^\\perp"
},
{
"math_id": 31,
"text": "E \\in \\operatorname{Ob}(\\mathcal{T})"
},
{
"math_id": 32,
"text": "\\operatorname{Hom}(E, E_i[+\\ell]) = 0"
},
{
"math_id": 33,
"text": "\\mathcal{T}' = 0"
},
{
"math_id": 34,
"text": "D^b(\\mathbb{P}^n)"
},
{
"math_id": 35,
"text": "\\mathcal{O}(-n), \\mathcal{O}(-n+1), \\ldots, \\mathcal{O}(-1), \\mathcal{O}"
},
{
"math_id": 36,
"text": "\\mathcal{O}_\\Delta"
},
{
"math_id": 37,
"text": "\\mathbb{P}^n \\times \\mathbb{P}^n"
},
{
"math_id": 38,
"text": "E_1, E_2, \\ldots, E_r"
},
{
"math_id": 39,
"text": "0 \\to p_1^*E_1 \\otimes p_2^*F_1 \\to \\cdots \\to p_1^*E_n \\otimes p_2^*F_n \\to \\mathcal{O}_\\Delta \\to 0"
},
{
"math_id": 40,
"text": "D^b(X\\times X)"
},
{
"math_id": 41,
"text": "F_i"
},
{
"math_id": 42,
"text": "X = \\mathbb{P}^n"
},
{
"math_id": 43,
"text": "\\bigoplus_{i=0}^n \\mathcal{O}(-D_i) \\xrightarrow{\\phi} \\mathcal{O}"
},
{
"math_id": 44,
"text": "D_i"
},
{
"math_id": 45,
"text": "\\mathbb{P}^n"
},
{
"math_id": 46,
"text": "0 \\to \\mathcal{O}\\left(-\\sum_{i=1}^n D_i \\right) \\to \\cdots \\to\n\\bigoplus_{i \\neq j}\\mathcal{O}(-D_i - D_j) \\to\n\\bigoplus_{i=1}^n\\mathcal{O}(-D_i) \\to \\mathcal{O} \\to 0"
},
{
"math_id": 47,
"text": "\\mathcal{O}(-n-1)"
},
{
"math_id": 48,
"text": "\\mathcal{O}(-n),\\ldots,\\mathcal{O}(-1),\\mathcal{O}"
},
{
"math_id": 49,
"text": "\\mathcal{O}\\left(-\\sum_{i=0}^n D_i \\right) \\cong \\mathcal{O}(-n-1)"
},
{
"math_id": 50,
"text": "n=2"
},
{
"math_id": 51,
"text": "0 \\to \\mathcal{O}(-3) \\to \\mathcal{O}(-2)\\oplus\\mathcal{O}(-2) \\to \\mathcal{O}(-1)\\oplus\\mathcal{O}(-1) \\to \\mathcal{O} \\to 0"
},
{
"math_id": 52,
"text": "\\mathcal{O}(-3)"
},
{
"math_id": 53,
"text": "0 \\to \\mathcal{O}(-2)\\oplus\\mathcal{O}(-2) \\to \\mathcal{O}(-1)\\oplus\\mathcal{O}(-1) \\to \\mathcal{O} \\to 0"
},
{
"math_id": 54,
"text": "F: D^b(X) \\to D^b(Y)"
},
{
"math_id": 55,
"text": "D^b(Y)"
},
{
"math_id": 56,
"text": "\\omega_Y = F(\\omega_X)"
},
{
"math_id": 57,
"text": "Y"
},
{
"math_id": 58,
"text": "F(\\omega_X^{\\otimes k}) \\cong \\omega_Y^{\\otimes k}"
},
{
"math_id": 59,
"text": "k \\in \\mathbb{N}"
},
{
"math_id": 60,
"text": "A(X) = \\bigoplus_{k=0}^\\infty H^0(X,\\omega_X^{\\otimes k}) \\cong \\bigoplus_{k=0}^\\infty H^0(Y,\\omega_Y^{\\otimes k})"
},
{
"math_id": 61,
"text": "\\omega_Y"
},
{
"math_id": 62,
"text": "X \\to Y"
},
{
"math_id": 63,
"text": "\\omega_X \\cong \\mathcal{O}_X"
},
{
"math_id": 64,
"text": "\\hat{X}"
},
{
"math_id": 65,
"text": "\\mathcal{P}"
},
{
"math_id": 66,
"text": "FM_{\\mathcal{P}}:D^b(X) \\to D^b(\\hat{X})"
},
{
"math_id": 67,
"text": "H^2(X, \\mathbb{Z}) \\to H^2(Y, \\mathbb{Z})"
},
{
"math_id": 68,
"text": "\\operatorname{Auteq}(D^b(X)) \\cong (\\operatorname{Pic}(X)\\rtimes \\operatorname{Aut}(X))\\times\\mathbb{Z}"
},
{
"math_id": 69,
"text": "F"
},
{
"math_id": 70,
"text": "f:X\\to X"
},
{
"math_id": 71,
"text": "\\mathcal{L} \\in \\operatorname{Pic}(X)"
},
{
"math_id": 72,
"text": "\\operatorname{Aut}(X)"
},
{
"math_id": 73,
"text": "\\operatorname{Pic}(X)"
},
{
"math_id": 74,
"text": "g \\mapsto g^*(L)\\otimes L^{-1}"
},
{
"math_id": 75,
"text": "K(X)"
},
{
"math_id": 76,
"text": "Gr_\\gamma K(X)\\otimes\\mathbb{Q}"
},
{
"math_id": 77,
"text": "F:D^b(X) \\to D^b(Y)"
},
{
"math_id": 78,
"text": "f:M(X) \\to M(Y)"
},
{
"math_id": 79,
"text": "M(X)"
},
{
"math_id": 80,
"text": "M(Y)"
},
{
"math_id": 81,
"text": "D_{sg}(X) := D^b(X)/D_\\text{perf}(X)"
},
{
"math_id": 82,
"text": "\\Sigma"
},
{
"math_id": 83,
"text": "\\mathcal{N} \\subset \\mathcal{T}"
},
{
"math_id": 84,
"text": "\\Sigma(\\mathcal{N})"
},
{
"math_id": 85,
"text": "s"
},
{
"math_id": 86,
"text": "X \\xrightarrow{s} Y \\to N \\to X[+1]"
},
{
"math_id": 87,
"text": "X,Y \\in \\mathcal{T}"
},
{
"math_id": 88,
"text": "N \\in \\mathcal{N}"
},
{
"math_id": 89,
"text": "X \\xrightarrow{s} Y \\xrightarrow{s'}Z"
},
{
"math_id": 90,
"text": "X \\xrightarrow{s}Y \\to N \\to X[+1]"
},
{
"math_id": 91,
"text": "Y \\xrightarrow{s'} Z \\to N' \\to Y[+1]"
},
{
"math_id": 92,
"text": "N,N' \\in \\mathcal{N}"
},
{
"math_id": 93,
"text": "X \\to Z \\to M \\to X[+1]"
},
{
"math_id": 94,
"text": "N \\to M \\to N' \\to N[+1]"
},
{
"math_id": 95,
"text": "M \\in \\mathcal{N}"
},
{
"math_id": 96,
"text": "\\mathcal{N}"
},
{
"math_id": 97,
"text": "\\mathcal{T}/\\mathcal{N}"
},
{
"math_id": 98,
"text": "F:\\mathcal{T} \\to \\mathcal{T}'"
},
{
"math_id": 99,
"text": "F(N) \\cong 0"
},
{
"math_id": 100,
"text": "Q: \\mathcal{T} \\to \\mathcal{T}/\\mathcal{N}"
},
{
"math_id": 101,
"text": "\\tilde{F}: \\mathcal{T}/\\mathcal{N} \\to \\mathcal{T}'"
},
{
"math_id": 102,
"text": "\\tilde{F}\\circ Q \\simeq F"
},
{
"math_id": 103,
"text": "\\mathcal{F}"
},
{
"math_id": 104,
"text": "\\operatorname{Sing}(X)"
},
{
"math_id": 105,
"text": "D_{sg}(X)"
},
{
"math_id": 106,
"text": "\\mathcal{F}[+k]"
},
{
"math_id": 107,
"text": "W:X \\to \\mathbb{A}^1"
},
{
"math_id": 108,
"text": "w_0 \\in \\mathbb{A}^1"
},
{
"math_id": 109,
"text": "\\mathbb{Z}/2"
},
{
"math_id": 110,
"text": "DG_{w_0}(W)"
},
{
"math_id": 111,
"text": "\\operatorname{Pair}_{w_0}(W)"
},
{
"math_id": 112,
"text": "DB_{w_0}(W)"
},
{
"math_id": 113,
"text": "\\overline{P} = (p_1: P_1 \\to P_0, p_0: P_0 \\to P_1)"
},
{
"math_id": 114,
"text": "p_0\\circ p_1,p_1\\circ p_0"
},
{
"math_id": 115,
"text": "W - w_0"
},
{
"math_id": 116,
"text": "[+1]"
},
{
"math_id": 117,
"text": "\\overline{P}"
},
{
"math_id": 118,
"text": "\\overline{P}[+1] = (-p_0: P_0 \\to P_1, -p_1: P_1 \\to P_0)"
},
{
"math_id": 119,
"text": "\\operatorname{Hom}(\\overline{P},\\overline{Q}) = \\bigoplus_{i,j}\\operatorname{Hom}(P_i, Q_j)"
},
{
"math_id": 120,
"text": "(i-j) \\bmod 2"
},
{
"math_id": 121,
"text": "d"
},
{
"math_id": 122,
"text": "Df = q \\circ f - (-1)^df \\circ p"
},
{
"math_id": 123,
"text": "0"
},
{
"math_id": 124,
"text": "\\overline{f}:\\overline{P}\\to\\overline{Q}"
},
{
"math_id": 125,
"text": "C(f)"
},
{
"math_id": 126,
"text": "c_1: Q_1\\oplus P_0 \\to Q_0\\oplus P_1"
},
{
"math_id": 127,
"text": "c_1 = \\begin{bmatrix} q_0 & f_1 \\\\ 0 &-p_1\\end{bmatrix} "
},
{
"math_id": 128,
"text": "c_0: Q_0\\oplus P_1 \\to Q_1\\oplus P_0"
},
{
"math_id": 129,
"text": "{\\displaystyle c_{0}={\\begin{bmatrix}q_{1}&f_{0}\\\\0&-p_{0}\\end{bmatrix}}} "
},
{
"math_id": 130,
"text": "\\overline{P} \\to \\overline{Q} \\to \\overline{R} \\to \\overline{P}[+1]"
},
{
"math_id": 131,
"text": "W"
},
{
"math_id": 132,
"text": "DB(W) = \\prod_{w \\in \\mathbb{A}_1}DB_{w_0}(W)."
},
{
"math_id": 133,
"text": "X_0 = W^{-1}(0)"
},
{
"math_id": 134,
"text": "DB_{w_0}(W) \\cong D_{sg}(X_0)"
},
{
"math_id": 135,
"text": "\\operatorname{Cok}"
},
{
"math_id": 136,
"text": "\\overline{P} \\mapsto \\operatorname{Coker}(p_1)"
},
{
"math_id": 137,
"text": "DB(W)"
},
{
"math_id": 138,
"text": "\\Phi_Z"
},
{
"math_id": 139,
"text": "f:X\\to\\mathbb{A}^1"
},
{
"math_id": 140,
"text": "Y = X\\times \\mathbb{A}^2"
},
{
"math_id": 141,
"text": "g:Y \\to \\mathbb{A}^1"
},
{
"math_id": 142,
"text": "g = f + xy"
},
{
"math_id": 143,
"text": "xy"
},
{
"math_id": 144,
"text": "\\mathbb{A}^2"
},
{
"math_id": 145,
"text": "X_0 = f^{-1}(0)"
},
{
"math_id": 146,
"text": "Y_0 = g^{-1}(0)"
},
{
"math_id": 147,
"text": "x: Y_0 \\to \\mathbb{A}^1"
},
{
"math_id": 148,
"text": "Z = x^{-1}(0)"
},
{
"math_id": 149,
"text": "i:Z \\to Y_0"
},
{
"math_id": 150,
"text": "q: Z \\to X_0"
},
{
"math_id": 151,
"text": "\\mathbb{A}^1"
},
{
"math_id": 152,
"text": "\\Phi_Z(\\cdot) = \\mathbf{R}i_*q^*(\\cdot)"
},
{
"math_id": 153,
"text": "D_{sg}(X_0) \\to D_{sg}(Y_0)"
},
{
"math_id": 154,
"text": "x^2 + y^2"
},
{
"math_id": 155,
"text": "(\\mathbb{C}^{2k+1}, W)"
},
{
"math_id": 156,
"text": "W = z_0^n + z_1^2 + \\cdots + z_{2k}^2 "
},
{
"math_id": 157,
"text": "D_\\text{sing}(\\operatorname{Spec}(\\mathbb{C}[z]/(z^n)))"
},
{
"math_id": 158,
"text": "A = \\mathbb{C}[z]/(z^n)"
},
{
"math_id": 159,
"text": "V_i = \\operatorname{Coker}(A \\xrightarrow{z^i} A) = A / z^i"
},
{
"math_id": 160,
"text": "i,j"
},
{
"math_id": 161,
"text": "\\alpha_j^i: V_i \\to V_j"
},
{
"math_id": 162,
"text": "i \\geq j"
},
{
"math_id": 163,
"text": "i < j"
},
{
"math_id": 164,
"text": "z^{j-i}"
}
] |
https://en.wikipedia.org/wiki?curid=63070573
|
63071801
|
Wigner–Araki–Yanase theorem
|
The Wigner–Araki–Yanase theorem, also known as the WAY theorem, is a result in quantum physics establishing that the presence of a conservation law limits the accuracy with which observables that fail to commute with the conserved quantity can be measured. It is named for the physicists Eugene Wigner, Huzihiro Araki and Mutsuo Yanase.
The theorem can be illustrated with a particle coupled to a measuring apparatus. If the position operator of the particle is formula_0 and its momentum operator is formula_1, and if the position and momentum of the apparatus are formula_2 and formula_3 respectively, then the assumption that the total momentum formula_4 is conserved implies that, in a suitably quantified sense, the particle's position itself cannot be measured. The measurable quantity is its position "relative" to the measuring apparatus, represented by the operator formula_5. The Wigner–Araki–Yanase theorem generalizes this to the case of two arbitrary observables formula_6 and formula_7 for the system and an observable formula_8 for the apparatus, satisfying the condition that formula_9 is conserved.
Mikko Tukiainen gave a generalized version of the WAY theorem, which makes no use of conservation laws, but uses quantum incompatibility instead.
Yui Kuramochi and Hiroyasu Tajima proved a generalized form of the theorem for possibly unbounded and continuous conserved observables.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "q"
},
{
"math_id": 1,
"text": "p"
},
{
"math_id": 2,
"text": "Q"
},
{
"math_id": 3,
"text": "P"
},
{
"math_id": 4,
"text": "p + P"
},
{
"math_id": 5,
"text": "q - Q"
},
{
"math_id": 6,
"text": "A"
},
{
"math_id": 7,
"text": "B"
},
{
"math_id": 8,
"text": "C"
},
{
"math_id": 9,
"text": "B + C"
}
] |
https://en.wikipedia.org/wiki?curid=63071801
|
630741
|
Arithmetic group
|
In mathematics, an arithmetic group is a group obtained as the integer points of an algebraic group, for example formula_0 They arise naturally in the study of arithmetic properties of quadratic forms and other classical topics in number theory. They also give rise to very interesting examples of Riemannian manifolds and hence are objects of interest in differential geometry and topology. Finally, these two topics join in the theory of automorphic forms which is fundamental in modern number theory.
History.
One of the origins of the mathematical theory of arithmetic groups is algebraic number theory. The classical reduction theory of quadratic and Hermitian forms by Charles Hermite, Hermann Minkowski and others can be seen as computing fundamental domains for the action of certain arithmetic groups on the relevant symmetric spaces. The topic was related to Minkowski's geometry of numbers and the early development of the study of arithmetic invariants of number fields such as the discriminant. Arithmetic groups can be thought of as a vast generalisation of the unit groups of number fields to a noncommutative setting.
The same groups also appeared in analytic number theory as the study of classical modular forms and their generalisations developed. Of course the two topics were related, as can be seen for example in Langlands' computation of the volume of certain fundamental domains using analytic methods. This classical theory culminated with the work of Siegel, who showed the finiteness of the volume of a fundamental domain in many cases.
For the modern theory to begin, foundational work was needed; it was provided by the work of Armand Borel, André Weil, Jacques Tits and others on algebraic groups. Shortly afterwards the finiteness of covolume was proven in full generality by Borel and Harish-Chandra. Meanwhile, there was progress on the general theory of lattices in Lie groups by Atle Selberg, Grigori Margulis, David Kazhdan, M. S. Raghunathan and others. The state of the art after this period was essentially fixed in Raghunathan's treatise, published in 1972.
In the seventies Margulis revolutionised the topic by proving that in "most" cases the arithmetic constructions account for all lattices in a given Lie group. Some limited results in this direction had been obtained earlier by Selberg, but Margulis' methods (the use of ergodic-theoretical tools for actions on homogeneous spaces) were completely new in this context and were to be extremely influential on later developments, effectively renewing the old subject of geometry of numbers and allowing Margulis himself to prove the Oppenheim conjecture; stronger results (Ratner's theorems) were later obtained by Marina Ratner.
In another direction the classical topic of modular forms has blossomed into the modern theory of automorphic forms. The driving force behind this effort is mainly the Langlands program initiated by Robert Langlands. One of the main tools used there is the trace formula originating in Selberg's work and developed in the most general setting by James Arthur.
Finally arithmetic groups are often used to construct interesting examples of locally symmetric Riemannian manifolds. A particularly active research topic has been arithmetic hyperbolic 3-manifolds, which as William Thurston wrote, "...often seem to have special beauty."
Definition and construction.
Arithmetic groups.
If formula_1 is an algebraic subgroup of formula_2 for some formula_3 then we can define an arithmetic subgroup of formula_4 as the group of integer points formula_5 In general it is not so obvious how to make precise sense of the notion of "integer points" of a formula_6-group, and the subgroup defined above can change when we take different embeddings formula_7
Thus a better notion is to define an arithmetic subgroup of formula_4 as any group formula_8 which is commensurable (this means that both formula_9 and formula_10 are finite sets) with a group formula_11 defined as above (with respect to any embedding into formula_12). With this definition, to the algebraic group formula_1 is associated a collection of "discrete" subgroups, all commensurable to each other.
Using number fields.
A natural generalisation of the construction above is as follows: let formula_13 be a number field with ring of integers formula_14 and formula_1 an algebraic group over formula_13. If we are given an embedding formula_15 defined over formula_13 then the subgroup formula_16 can legitimately be called an arithmetic group.
On the other hand, the class of groups thus obtained is not larger than the class of arithmetic groups as defined above. Indeed, if we consider the algebraic group formula_17 over formula_6 obtained by restricting scalars from formula_13 to formula_6 and the formula_18-embedding formula_19 induced by formula_20 (where formula_21) then the group constructed above is equal to formula_22.
Examples.
The classical example of an arithmetic group is formula_23, or the closely related groups formula_24, formula_25 and formula_26. For formula_27 the group formula_28, or sometimes formula_29, is called the modular group as it is related to the modular curve. Similar examples are the Siegel modular groups formula_30.
Other well-known and studied examples include the Bianchi groups formula_31 where formula_32 is a square-free integer and formula_33 is the ring of integers in the field formula_34 and the Hilbert–Blumenthal modular groups formula_35.
Another classical example is given by the integral elements in the orthogonal group of a quadratic form defined over a number field, for example formula_36. A related construction is by taking the unit groups of orders in quaternion algebras over number fields (for example the Hurwitz quaternion order). Similar constructions can be performed with unitary groups of hermitian forms, a well-known example is the Picard modular group.
Arithmetic lattices in semisimple Lie groups.
When formula_37 is a Lie group one can define an arithmetic lattice in formula_37 as follows: for any algebraic group formula_1 defined over formula_6 such that there is a morphism formula_38 with compact kernel, the image of an arithmetic subgroup in formula_4 is an arithmetic lattice in formula_37. Thus, for example, if formula_39 and formula_37 is a subgroup of formula_40 then formula_41 is an arithmetic lattice in formula_37 (but there are many more, corresponding to other embeddings); for instance, formula_23 is an arithmetic lattice in formula_42.
The Borel–Harish-Chandra theorem.
A lattice in a Lie group is usually defined as a discrete subgroup with finite covolume. The terminology introduced above is coherent with this, as a theorem due to Borel and Harish-Chandra states that an arithmetic subgroup in a semisimple Lie group is of finite covolume (the discreteness is obvious).
The theorem is more precise: it says that the arithmetic lattice is cocompact if and only if the "form" of formula_37 used to define it (i.e. the formula_18-group formula_1) is anisotropic. For example, the arithmetic lattice associated to a quadratic form in formula_3 variables over formula_6 will be co-compact in the associated orthogonal group if and only if the quadratic form does not vanish at any point in formula_43.
Margulis arithmeticity theorem.
The spectacular result that Margulis obtained is a partial converse to the Borel–Harish-Chandra theorem: for certain Lie groups "any" lattice is arithmetic. This result is true for all irreducible lattices in semisimple Lie groups of real rank at least two. For example, all lattices in formula_42 are arithmetic when formula_44. The main new ingredient that Margulis used to prove his theorem was the superrigidity of lattices in higher-rank groups, which he proved for this purpose.
Irreducibility only plays a role when formula_37 has a factor of real rank one (otherwise the theorem always holds) and is not simple: it means that for any product decomposition formula_45 the lattice is not commensurable to a product of lattices in each of the factors formula_46. For example, the lattice formula_47 in formula_48 is irreducible, while formula_49 is not.
The Margulis arithmeticity (and superrigidity) theorem holds for certain rank 1 Lie groups, namely formula_50 for formula_51 and the exceptional group formula_52. It is known not to hold in the groups formula_53 for formula_54 (by a construction of Gromov and Piatetski-Shapiro) and in formula_55 when formula_56. There are no known non-arithmetic lattices in the groups formula_57 when formula_58.
Arithmetic Fuchsian and Kleinian groups.
An arithmetic Fuchsian group is constructed from the following data: a totally real number field formula_13, a quaternion algebra formula_59 over formula_13 and an order formula_60 in formula_59. It is asked that for one embedding formula_61 the algebra formula_62 be isomorphic to the matrix algebra formula_63 and for all others to the Hamilton quaternions. Then the group of units formula_64 is a lattice in formula_65 which is isomorphic to formula_66 and it is co-compact in all cases except when formula_59 is the matrix algebra over formula_67 All arithmetic lattices in formula_68 are obtained in this way (up to commensurability).
Arithmetic Kleinian groups are constructed similarly except that formula_13 is required to have exactly one complex place and formula_59 to be the Hamilton quaternions at all real places. They exhaust all arithmetic commensurability classes in formula_69
Classification.
For every semisimple Lie group formula_37 it is in theory possible to classify (up to commensurability) all arithmetic lattices in formula_37, in a manner similar to the cases formula_70 explained above. This amounts to classifying the algebraic groups whose real points are isomorphic up to a compact factor to formula_37.
The congruence subgroup problem.
A congruence subgroup is (roughly) a subgroup of an arithmetic group defined by taking all matrices satisfying certain equations modulo an integer, for example the group of 2 by 2 integer matrices with diagonal (respectively off-diagonal) coefficients congruent to 1 (respectively 0) modulo a positive integer. These are always finite-index subgroups, and the congruence subgroup problem roughly asks whether all finite-index subgroups are obtained in this way. The conjecture (usually attributed to Jean-Pierre Serre) is that this is true for (irreducible) arithmetic lattices in higher-rank groups and false in rank-one groups. It is still open in this generality but there are many results establishing it for specific lattices (in both its positive and negative cases).
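For instance, membership in the principal congruence subgroup of level formula_75 in formula_29 (the matrices congruent to the identity modulo formula_75, as in the example above) can be tested directly. The following is a small illustrative sketch; the function name is chosen here and is not standard:
<syntaxhighlight lang="python">
def in_principal_congruence_subgroup(m, N):
    """Test whether an integer matrix m = ((a, b), (c, d)) lies in Gamma(N):
    determinant 1, diagonal entries congruent to 1 and off-diagonal entries
    congruent to 0 modulo N."""
    (a, b), (c, d) = m
    return (a * d - b * c == 1
            and (a - 1) % N == 0 and (d - 1) % N == 0
            and b % N == 0 and c % N == 0)

print(in_principal_congruence_subgroup(((1, 2), (0, 1)), 2))  # True
print(in_principal_congruence_subgroup(((1, 1), (0, 1)), 2))  # False
</syntaxhighlight>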
S-arithmetic groups.
Instead of taking integral points in the definition of an arithmetic lattice one can take points which are only integral away from a finite number of primes. This leads to the notion of an "formula_71-arithmetic lattice" (where formula_71 stands for the set of primes inverted). The prototypical example is formula_72. They are also naturally lattices in certain topological groups, for example formula_72 is a lattice in formula_73
Definition.
The formal definition of an formula_71-arithmetic group for formula_71 a finite set of prime numbers is the same as for arithmetic groups with formula_25 replaced by formula_74 where formula_75 is the product of the primes in formula_71.
Lattices in Lie groups over local fields.
The Borel–Harish-Chandra theorem generalizes to formula_71-arithmetic groups as follows: if formula_11 is an formula_71-arithmetic group in a formula_6-algebraic group formula_1 then formula_11 is a lattice in the locally compact group
formula_76.
Some applications.
Explicit expander graphs.
Arithmetic groups with Kazhdan's property (T) or the weaker property (formula_77) of Lubotzky and Zimmer can be used to construct expander graphs (Margulis), or even Ramanujan graphs (Lubotzky-Phillips-Sarnak). Such graphs are known to exist in abundance by probabilistic results but the explicit nature of these constructions makes them interesting.
Extremal surfaces and graphs.
Congruence covers of arithmetic surfaces are known to give rise to surfaces with large injectivity radius. Likewise the Ramanujan graphs constructed by Lubotzky-Phillips-Sarnak have large girth. It is in fact known that the Ramanujan property itself implies that the local girths of the graph are almost always large.
Isospectral manifolds.
Arithmetic groups can be used to construct isospectral manifolds. This was first realised by Marie-France Vignéras and numerous variations on her construction have appeared since. The isospectrality problem is in fact particularly amenable to study in the restricted setting of arithmetic manifolds.
Fake projective planes.
A fake projective plane is a complex surface which has the same Betti numbers as the projective plane formula_78 but is not biholomorphic to it; the first example was discovered by Mumford. By work of Klingler (also proved independently by Yeung) all such are quotients of the 2-ball by arithmetic lattices in formula_79. The possible lattices have been classified by Prasad and Yeung and the classification was completed by Cartwright and Steger who determined, by computer assisted computations, all the fake projective planes in each Prasad-Yeung class.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathrm{SL}_2(\\Z)."
},
{
"math_id": 1,
"text": "\\mathrm G"
},
{
"math_id": 2,
"text": "\\mathrm{GL}_n(\\Q)"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "\\mathrm G(\\Q)"
},
{
"math_id": 5,
"text": "\\Gamma = \\mathrm{GL}_n(\\Z) \\cap \\mathrm G(\\Q)."
},
{
"math_id": 6,
"text": "\\Q"
},
{
"math_id": 7,
"text": "\\mathrm G \\to \\mathrm{GL}_n(\\Q)."
},
{
"math_id": 8,
"text": "\\Lambda"
},
{
"math_id": 9,
"text": "\\Gamma/(\\Gamma\\cap \\Lambda)"
},
{
"math_id": 10,
"text": "\\Lambda/(\\Gamma \\cap \\Lambda)"
},
{
"math_id": 11,
"text": "\\Gamma"
},
{
"math_id": 12,
"text": " \\mathrm{GL}_n"
},
{
"math_id": 13,
"text": "F"
},
{
"math_id": 14,
"text": "O"
},
{
"math_id": 15,
"text": "\\rho : \\mathrm{G} \\to \\mathrm{GL}_n"
},
{
"math_id": 16,
"text": "\\rho^{-1}(\\mathrm{GL}_n(O)) \\subset \\mathrm G(F)"
},
{
"math_id": 17,
"text": "\\mathrm G'"
},
{
"math_id": 18,
"text": "\\Q "
},
{
"math_id": 19,
"text": "\\rho' : \\mathrm G' \\to \\mathrm{GL}_{dn}"
},
{
"math_id": 20,
"text": "\\rho"
},
{
"math_id": 21,
"text": "d = [F:\\Q ]"
},
{
"math_id": 22,
"text": "(\\rho')^{-1}(\\mathrm{GL}_{nd}(\\Z))"
},
{
"math_id": 23,
"text": "\\mathrm{SL}_n(\\Z)"
},
{
"math_id": 24,
"text": "\\mathrm{PSL}_n(\\Z)"
},
{
"math_id": 25,
"text": "\\mathrm{GL}_n(\\Z)"
},
{
"math_id": 26,
"text": "\\mathrm{PGL}_n(\\Z)"
},
{
"math_id": 27,
"text": "n = 2"
},
{
"math_id": 28,
"text": "\\mathrm{PSL}_2(\\Z)"
},
{
"math_id": 29,
"text": "\\mathrm{SL}_2(\\Z)"
},
{
"math_id": 30,
"text": "\\mathrm{Sp}_{2g}(\\Z)"
},
{
"math_id": 31,
"text": "\\mathrm{SL}_2(O_{-m}),"
},
{
"math_id": 32,
"text": "m > 0"
},
{
"math_id": 33,
"text": "O_{-m}"
},
{
"math_id": 34,
"text": "\\Q(\\sqrt{-m}),"
},
{
"math_id": 35,
"text": "\\mathrm{SL}_2(O_m)"
},
{
"math_id": 36,
"text": "\\mathrm{SO}(n,1)(\\Z )"
},
{
"math_id": 37,
"text": "G"
},
{
"math_id": 38,
"text": "\\mathrm G(\\R) \\to G"
},
{
"math_id": 39,
"text": "G = \\mathrm G(\\R)"
},
{
"math_id": 40,
"text": "\\mathrm{GL}_n"
},
{
"math_id": 41,
"text": "G \\cap \\mathrm{GL}_n(\\Z)"
},
{
"math_id": 42,
"text": "\\mathrm{SL}_n(\\R )"
},
{
"math_id": 43,
"text": "\\Q^n \\setminus \\{ 0\\}"
},
{
"math_id": 44,
"text": "n \\ge 3"
},
{
"math_id": 45,
"text": "G = G_1\\times G_2"
},
{
"math_id": 46,
"text": "G_i"
},
{
"math_id": 47,
"text": "\\mathrm{SL}_2(\\Z [\\sqrt 2])"
},
{
"math_id": 48,
"text": "\\mathrm{SL}_2(\\R) \\times \\mathrm{SL}_2(\\R)"
},
{
"math_id": 49,
"text": "\\mathrm{SL}_2(\\Z) \\times \\mathrm{SL}_2(\\Z)"
},
{
"math_id": 50,
"text": "\\mathrm{Sp}(n,1)"
},
{
"math_id": 51,
"text": "n \\geqslant 1 "
},
{
"math_id": 52,
"text": "F_4^{-20}"
},
{
"math_id": 53,
"text": "\\mathrm{SO}(n,1)"
},
{
"math_id": 54,
"text": " n \\geqslant 2 "
},
{
"math_id": 55,
"text": "\\mathrm{SU}(n, 1)"
},
{
"math_id": 56,
"text": "n = 1,2,3"
},
{
"math_id": 57,
"text": "\\mathrm{SU}(n,1)"
},
{
"math_id": 58,
"text": "n \\geqslant 4"
},
{
"math_id": 59,
"text": "A"
},
{
"math_id": 60,
"text": "\\mathcal O"
},
{
"math_id": 61,
"text": "\\sigma: F \\to \\R"
},
{
"math_id": 62,
"text": "A^\\sigma \\otimes_F \\R"
},
{
"math_id": 63,
"text": "M_2(\\R)"
},
{
"math_id": 64,
"text": "\\mathcal O^1"
},
{
"math_id": 65,
"text": "(A^\\sigma \\otimes_F \\R)^1"
},
{
"math_id": 66,
"text": "\\mathrm{SL}_2(\\R),"
},
{
"math_id": 67,
"text": "\\Q."
},
{
"math_id": 68,
"text": "\\mathrm{SL}_2(\\R)"
},
{
"math_id": 69,
"text": "\\mathrm{SL}_2(\\Complex)."
},
{
"math_id": 70,
"text": "G = \\mathrm{SL}_2(\\R), \\mathrm{SL}_2(\\Complex)"
},
{
"math_id": 71,
"text": "S"
},
{
"math_id": 72,
"text": "\\mathrm{SL}_2 \\left( \\Z \\left[\\tfrac 1 p \\right] \\right)"
},
{
"math_id": 73,
"text": "\\mathrm{SL}_2(\\R) \\times \\mathrm{SL}_2(\\Q_p)."
},
{
"math_id": 74,
"text": "\\mathrm{GL}_n\\left(\\Z \\left[ \\tfrac 1 N \\right] \\right)"
},
{
"math_id": 75,
"text": "N"
},
{
"math_id": 76,
"text": "G = \\mathrm G(\\R) \\times \\prod_{p\\in S} \\mathrm G(\\Q_p)"
},
{
"math_id": 77,
"text": "\\tau"
},
{
"math_id": 78,
"text": "\\mathbb P^2(\\Complex)"
},
{
"math_id": 79,
"text": "\\mathrm{PU}(2,1)"
}
] |
https://en.wikipedia.org/wiki?curid=630741
|
63077136
|
Cartier isomorphism
|
In algebraic geometry, the Cartier isomorphism is a certain isomorphism between the cohomology sheaves of the de Rham complex of a smooth algebraic variety over a field of positive characteristic, and the sheaves of differential forms on the Frobenius twist of the variety. It is named after Pierre Cartier. Intuitively, it shows that de Rham cohomology in positive characteristic is a much larger object than one might expect. It plays an important role in the approach of Deligne and Illusie to the degeneration of the Hodge–de Rham spectral sequence.
Statement.
Let "k" be a field of characteristic "p" > 0, and let formula_0 be a morphism of "k"-schemes. Let formula_1 denote the Frobenius twist and let formula_2 be the relative Frobenius. The Cartier map is defined to be the unique morphismformula_3of graded formula_4-algebras such that formula_5 for any local section "x" of formula_6. (Here, for the Cartier map to be well-defined in general it is essential that one takes cohomology sheaves for the codomain.) The Cartier isomorphism is then the assertion that the map formula_7 is an isomorphism if formula_8 is a smooth morphism.
In the above, we have formulated the Cartier isomorphism in the form it is most commonly encountered (e.g., in the 1970 paper of Katz). In his original paper, Cartier actually considered the inverse map in a more restrictive setting, whence the notation formula_7 for the Cartier map.
The smoothness assumption is not essential for the Cartier map to be an isomorphism. For instance, one has it for ind-smooth morphisms since both sides of the Cartier map commute with filtered colimits. By Popescu's theorem, one then has the Cartier isomorphism for a regular morphism of noetherian "k"-schemes. Ofer Gabber has also proven a Cartier isomorphism for valuation rings. In a different direction, one can dispense with such assumptions entirely if one instead works with derived de Rham cohomology (now taking the associated graded of the conjugate filtration) and the exterior powers of the cotangent complex.
|
[
{
"math_id": 0,
"text": "f: X \\to S"
},
{
"math_id": 1,
"text": "X^{(p)} = X \\times_{S,\\varphi} S"
},
{
"math_id": 2,
"text": "F: X \\to X^{(p)}"
},
{
"math_id": 3,
"text": "C^{-1}: \\bigoplus_{i \\geq 0} \\Omega^i_{X^{(p)}/S} \\to \\bigoplus_{i \\geq 0} \\mathcal{H}^i(F_* \\Omega^{\\bullet}_{X/S})"
},
{
"math_id": 4,
"text": "\\mathcal{O}_{X^{(p)}}"
},
{
"math_id": 5,
"text": "C^{-1}(d(x \\otimes 1)) = [x^{p-1} dx]"
},
{
"math_id": 6,
"text": "\\mathcal{O}_X"
},
{
"math_id": 7,
"text": "C^{-1}"
},
{
"math_id": 8,
"text": "f"
}
] |
https://en.wikipedia.org/wiki?curid=63077136
|
63078029
|
Low-temperature distillation
|
Water treatment process
The low-temperature distillation (LTD) technology is the first implementation of the direct spray distillation (DSD) process. The first large-scale units are now in operation for desalination. The process was first developed by scientists at the University of Applied Sciences in Switzerland, focusing on low-temperature distillation in vacuum conditions, from 2000 to 2005.
Direct spray distillation is a water treatment process applied in seawater desalination and industrial wastewater treatment, brine and concentrate treatment as well as zero liquid discharge systems. It is a physical water separation process driven by thermal energy. Direct spray distillation involves evaporation and condensation on water droplets that are sprayed into a chamber that is evacuated of non-condensable permanent gases like air and carbon dioxide. Compared to other vaporization systems, no phase change happens on solid surfaces such as shell and tube heat exchangers.
Applications.
Currently, the only implementation of DSD technology is low-temperature distillation (LTD). The LTD process runs under partial pressure in the evaporator and condenser chambers, and with process temperatures of below 100 °C. The first large-scale LTD systems for industrial water treatment are now in operation.
History.
The DSD process was invented in the late 1990s by Mark Lehmann, with the first successful demonstration of the process in a factory hall of Obrecht AG, Doettingen, Switzerland. The results of the experiments were evaluated and double-checked by Prof. Dr. Kurt Heiniger (University of Applied Sciences and Arts Northwestern Switzerland) and Dr. Franco Blanggetti (Alstom, co-author of the VDI Wärmeatlas). Over the following years, the process was further researched in the framework of numerous theses supervised by Heiniger and Lehmann. The objective was to examine the influence of non-condensable gases in reduced-pressure environments on the heat transfer during condensation on cooled droplets. It was found that the droplet size and distribution, as well as the geometry of the condensation reactor, have the most significant influence on the heat transfer. Due to the absence of conventional tube-bundle heat exchangers, the achievable efficiency gains result from the minimized heat resistance during the condensation process.
Technology description.
Low temperature distillation (LTD) is a thermal distillation process in several stages, powered by temperature differences of at least 5 K per stage between the heat and cooling sources. Two separate volume flows, a hot evaporator flow and a cool condenser flow, with different temperatures and vapor pressures, are sprayed into a combined pressure chamber, from which non-condensable gases are continuously removed. As the vapor moves toward a partial-pressure equilibrium, part of the water from the hot stream evaporates. Several serially arranged chambers, with the hot evaporator and cold condenser streams in counterflow, allow for high internal heat recovery through the use of multiple stages. The process achieves a high specific heat conversion rate through the reduction of heat transfer losses, which results in high thermal efficiency and low heat transfer resistance. The LTD process is tolerant of high salinity, other impurities, and fluctuating feed water qualities. The precipitation of solids is technically intended, to allow for zero-liquid-discharge operation (complete ZLD). It is possible to combine the low-temperature distillation process with existing desalination technologies, serving as a downstream process to increase the water output and reduce the brine generation.
Physical principle.
The following figures show and explain the thermodynamic principle on which the LTD technology is built. In Fig. 1, two cylinders with open bottoms stand in two water basins at two different temperatures (assumption: hot at 50°C and cold at 20°C). The temperature-dependent vapor pressure of the water is 123 mbar at 50°C and 23 mbar at 20°C. It is assumed that the two cylinders are 10 meters long and can be pulled out by the same distance.
The pulled-out cylinders in Fig. 2 now show a different situation regarding the level of the water column. Due to the higher vapor pressure at 50°C, the atmospheric pressure can elevate the hot water column only about 877 cm; in the remaining space, the water starts to evaporate at a pressure of 123 mbar. In the cold water column at 20°C, the atmospheric pressure (1000 mbar) supports a column 977 cm high, in equilibrium with the corresponding vapor pressure of 23 mbar. If no heat exchange takes place, this situation remains unchanged and is thermodynamically in equilibrium.
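The quoted column heights follow directly from this pressure balance. A minimal back-of-the-envelope check in Python, using the approximation noted below for Fig. 5 that 1 mbar corresponds to roughly 1 cm of water column (the numbers are illustrative; real vapor-pressure data would come from steam tables):
# Column heights supported by atmospheric pressure against the vapor pressure of water.
p_atm = 1000.0        # ambient pressure in mbar
p_vap_hot = 123.0     # vapor pressure of water at 50 degC, in mbar
p_vap_cold = 23.0     # vapor pressure of water at 20 degC, in mbar

height_hot_cm = p_atm - p_vap_hot    # about 877 cm in the hot cylinder
height_cold_cm = p_atm - p_vap_cold  # about 977 cm in the cold cylinder
print(height_hot_cm, height_cold_cm)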
Now, the tops of the two columns are connected with a vapor channel in Fig. 3. Once connected, the two vapor chambers (123 mbar and 23 mbar) spontaneously equalize their pressure to an average pressure. As a result, the two water columns tend toward the same level on both sides. However, this connection causes an energetic imbalance in the physical conditions at the water surface on top of the columns. On the 50°C hot column, the vapor pressure of the medium is higher than the average pressure; on the 20°C cold side, the average pressure is higher than the vapor pressure of the water. This situation leads to spontaneous boiling on the hot side and vapor condensation at the water surface on the cooler side. This process continues until the temperature in both columns has balanced out. After the temperatures equalize, the pressures and levels in both chambers are equal.
As a consequence, as long as a temperature difference between the two columns is maintained, spontaneous evaporation and condensation of the surface water take place in order to reach equilibrium temperature and pressure. To make this technically possible, an additional external circulation in Fig. 4 can supply heat formula_0 on the evaporation side and extract heat formula_1 on the condenser side. As the reaction velocity strongly depends on the available water surface, a specially designed spraying system creates millions of small droplets. This huge internal water surface results in very high internal heat transfer rates between evaporator and condenser.
This principle also works if the bottom of the open water column, which is not needed, is cut off and replaced by a lid, as shown in Fig. 5. Experiments on the demonstration plant have shown that a pressure differential of only a few millibar (1 mbar corresponds to 1 cm of water column) is sufficient to run this distillation process. This corresponds to very small temperature differentials of a few kelvin.
If the temperature spread between the heat source and the condenser is large enough, the condenser can act as a heater for the following stage. This has the advantage that the condensation heat is re-used multiple times at different temperatures and pressures, increasing the energetic efficiency with each additional stage. Depending on the available temperature difference, the heat can be reused several times, resulting in an increase of the distillation capacity with the same amount of available heat. The result is the multi-cascaded direct spray distillation, visualized in Fig. 6.
Plant design.
The low temperature distillation process needs reactors for evaporation and condensation equipped with the spraying system to generate the droplets, and three standard plate heat exchangers (heating, cooling, thermal recovery). The feedwater and distillate are pumped in two large circulation streams through the reactors. The thermal recovery is realized in a heat exchanger preheating the feedwater by the distillate after condensation. Saturated brine and distillate are removed from the process by valve locks. The process and media flows are visualized in Fig. 7 in a general process scheme.
The thermal energy (1) is supplied at the main heat exchanger (HEX 1) by any available media heating up the intake water up to 95°C.
In the evaporator cycle (green), the hot water is sprayed and evaporated in pressure reduced chambers (2) and flows by gravity to the subsequent chambers with lowered temperature and pressure environment. The generated vapor (3) flows from the evaporator to the condenser in every stage where it condenses on the cooled droplets of the sprayed distillate.
The heat exchanger for cooling (HEX 3) reduces the temperature of the distillate (4) before it is pumped to the condenser cycle. In the condenser cycle (5), the cooled distillate is pumped and sprayed into the pressure chambers to allow for vapor condensation from the evaporators on cooled droplets. During this process, the temperature and pressure increases from stage to stage. After the last condenser, the increased heat of the distillate is recovered in the heat exchanger for thermal recovery (HEX 2) preheating the evaporator cycle. After the condensation in the first reactor, the distillate is hotter compared to the brine of the last evaporator. This condensation heat is recovered in HEX 2 and is used for heating the evaporator cycle (6). It is beneficial for the energetic efficiency to design this heat exchanger as large as possible.
In order to run the process, a vacuum system (7) extracts non-condensable gases (like formula_2) in the chambers. In the connection duct to the vacuum pump, an optional heat exchanger (HEX 5) cools down the vapor to condense as much water as possible (8). The gained distillate is transferred out of the process after an optional heat recovery (9). A post-treatment system can treat the distillate according to the desired requirements (remineralization). The brine is extracted at the evaporator cycle after the last evaporator stage (10). The over-saturation and precipitation of salts for zero-liquid discharge (ZLD) applications requires an additional evaporator acting as crystallizer, which is not shown in Fig. 7.
Plant layout.
The main components of low temperature distillation plants are the pressure vessels and the spraying facilities. Further important components are an adapted instrumentation and control system as well as a vacuum system. A low temperature distillation plant has no membranes and no tube bundles, and consists of the main elements described below.
Plant components.
Evaporator and condenser vessels are constructed for vacuum pressure conditions up to 20 formula_3 and include the spraying installations for the evaporation/condensation reactors.
For the energy supply of the process itself, only standard plate heat exchangers are installed. A low temperature distillation plant consists of one heat exchanger for the heat transfer from heat source into water and one for the heat transfer from distillate to the re-cooling media. A plant with several cascades has one additional heat exchanger for internal heat recovery (HEX 2) increasing the thermal efficiency of the plant. Due to the flexibility of the low temperature distillation process, various arrangements are possible to adapt each plant to the given application. If only a small overall temperature spread or a limited heat source is available, internal flows can be adjusted for maximised internal heat recovery. Additional low-temperature heat sources such as solar collector systems can also be integrated.
The media supply is mostly realized with standard centrifugal pumps. The process conditions favor a low NPSH construction in order to facilitate hot media leaving the system from vacuum conditions. Due to the lowered volume flows in small scale plants, the application of displacer pumps is recommended.
Comparison with other thermal desalination technologies.
Low temperature distillation operates at low temperature and low pressure, similar to Multi-effect distillation (MED) and Multi-stage flash distillation (MSF). While the process flow is similar to a MSF plant, the temperature and pressure dynamics are more comparable to a MED system. It is designed to use low-grade or waste heat from other industrial processes or renewable sources, like solar thermal collectors. The most significant difference compared to MED and MSF technologies is that there are no tube-bundle heat exchangers within the pressure chambers. This permits several enhancements of the thermal distillation process.
Due to the relatively high energy demand of thermal distillation processes for water treatment, low temperature distillation is most economically applicable for highly saline feed waters. Fig. 8 compares the relative energy and plant costs with those of membrane-based desalination processes such as reverse osmosis (RO) for seawater desalination. The possible feed waters may contain a wide range of impurities, such as brines from desalination plants, radioactive groundwater, produced water from oil production, and hydrocarbon-polluted water, with salinities of up to 33% NaCl. The plant operates even at high concentrations, up to the precipitation of inorganic compounds. Also, the effluent of existing seawater desalination plants can be treated further in a low temperature distillation plant to maximise the dewatering capacity of a desalination system.
Low temperature distillation can accommodate variations in the plant load, running efficiently from 50 – 100% of plant design capacity depending on the available heat supply. The spraying process is self-adjusting, and the amount of water produced is proportional to the amount of heat provided.
High saline feed water.
The LTD process is most suitable for highly saline feedwaters, from typical seawater concentrations up to concentrated wastewater solutions from various industrial processes. One possible application is doubling the capacity of RO-based desalination systems by further treating the resulting effluents up to the precipitation of salts. Brackish water desalination is in principle also possible, but other desalination processes tend to be more economical due to the low osmotic pressure and the resulting low specific energy consumption.
Scaling and fouling.
Low temperature distillation plants are not prone to scaling or clogging even with very high TDS in the feed water. There are no installations within the pressure vessels that could scale. Phase changes (evaporation and condensation) only take place on the surface of the water droplets, never on solid surfaces. Several design features ensure a minimal risk of scaling within the plant.
Low temperature distillation plants are able to treat a wide range of feed waters.
Desalinated water and brine.
The desalinated water quality from the low temperature distillation process is almost demineralized water with a remaining salinity of 10 ppm. Residual contaminants result from demister losses and depend on the treated feedwater as well as vapor velocities between evaporator and condenser. The brine concentration in the LTD process can be adjusted to the site conditions and disposal options. Current research focuses on selective crystallization to recover various salt species beyond NaCl.
Specific data and information.
Preferred use.
The application of the LTD process becomes economically feasible starting with salinities of more than 4%. LTD can be useful for normal seawater desalination if high recovery rates or further treatment of the RO brine are required. Highly saline effluents from industrial processes such as the oil and gas industry, the textile industry, and the chemical industry are more advantageous. In general, pretreatment for zero-liquid-discharge systems with LTD is the most economical option. The treatment of brackish water is possible in principle, but the energy consumption required for evaporation is higher compared to conventional reverse osmosis.
Environmental impact.
Due to the reduction of the brine volume, the environmental impacts are significantly lowered compared to standard seawater RO units. The recovery of NaCl in high purity is possible and can be used e.g. as regenerative salt for ion exchangers or water softeners.
The LTD process has a stable part-load behaviour which facilitates the use of renewable energy sources. Thermal energy can be supplied by solar collectors like flat plate or evacuated tube, solar ponds, concentrating solar collectors, or in co-generation with solar power plants.
Further developments.
Opportunities for improvement focus mainly on integration in an appropriate operating environment with heat management. The combination of LTD plants with thermal power plants as heat sources seems advantageous. Combinations with other desalination processes, like thermal or mechanical vapor compression (MVC) are also possible. Under certain process conditions, such systems can compensate for fluctuating heat supply by substituting electric power in an integrated MVC unit.
Current research focuses on the reduction of the heat and electricity consumption of auxiliary systems. The selective crystallization of the brine and recovery of salts are also being researched (in cooperation with TU Berlin, Germany). Further development potential lies in the integration of adsorption and absorption technologies for integrated cooling and desalination.
|
[
{
"math_id": 0,
"text": " \\textstyle E_{in} "
},
{
"math_id": 1,
"text": " \\textstyle E_{out} "
},
{
"math_id": 2,
"text": " \\textstyle CO_2, N_2, O_2 "
},
{
"math_id": 3,
"text": " \\textstyle mbar_{abs} "
},
{
"math_id": 4,
"text": " \\textstyle ppm TDS "
},
{
"math_id": 5,
"text": " \\textstyle kWh_{el}/m^3 "
},
{
"math_id": 6,
"text": " \\textstyle kWh_{th}/m^3 "
}
] |
https://en.wikipedia.org/wiki?curid=63078029
|
63083789
|
Dupin's theorem
|
In differential geometry Dupin's theorem, named after the French mathematician Charles Dupin, is the statement: the surfaces of a threefold orthogonal system intersect each other in curvature lines.
A "threefold orthogonal system" of surfaces consists of three pencils of surfaces such that any pair of surfaces out of different pencils intersect orthogonally.
The simplest example of a threefold orthogonal system consists of the coordinate planes and their parallels. But this example is of no interest, because a plane has no curvature lines.
A simple example with at least one pencil of curved surfaces: 1) all right circular cylinders with the z-axis as axis, 2) all planes, which contain the z-axis, 3) all horizontal planes (see diagram).
A "curvature line" is a curve on a surface, which has at any point the direction of a principal curvature (maximal or minimal curvature). The set of curvature lines of a right circular cylinder consists of the set of circles (maximal curvature) and the lines (minimal curvature). A plane has no curvature lines, because any normal curvature is zero. Hence, only the curvature lines of the cylinder are of interest: A horizontal plane intersects a cylinder at a circle and a vertical plane has lines with the cylinder in common.
The idea of threefold orthogonal systems can be seen as a generalization of orthogonal trajectories. Special examples are systems of confocal conic sections.
Application.
Dupin's theorem is a tool for determining the curvature lines of a surface by intersection with suitable surfaces (see examples), without time-consuming calculation of derivatives and principal curvatures. The next example shows that the embedding of a surface into a threefold orthogonal system is not unique.
Examples.
Right circular cone.
"Given:" A right circular cone, green in the diagram.<br>
"Wanted:" The curvature lines.
"1. pencil": Shifting the given cone C with apex S along its axis generates a pencil of cones (green).<br>
"2. pencil": Cones with apexes on the axis of the given cone such that the lines are orthogonal to the lines of the given cone (blue).<br>
"3. pencil": Planes through the cone's axis (purple).
These three pencils of surfaces are an orthogonal system of surfaces. The blue cones intersect the given cone C at a circle (red). The purple planes intersect at the lines of cone C (green).
The points of the space can be described by the spherical coordinates
formula_0. Here we set S = M = origin.
"1. pencil:" Cones with apex S whose axis is the axis of the given cone C (green): formula_1.<br>
"2. pencil:" Spheres centered at M=S (blue): formula_2<br>
"3. pencil:" Planes through the axis of cone C (purple): formula_3.
Torus.
"1. pencil": Tori with the same directrix (green).<br>
"2. pencil": Cones containing the directrix circle of the torus with apexes on the axis of the torus (blue).<br>
"3. pencil": Planes containing the axis of the given torus (purple).
The blue cones intersect the torus at "horizontal circles" (red).
The purple planes intersect at "vertical circles" (green).
The curvature lines of a torus generate a net of orthogonal circles.
A torus contains more circles: the Villarceau circles, which are not curvature lines.
Surface of revolution.
Usually a surface of revolution is determined by a generating plane curve (meridian) formula_4. Rotating formula_4 around the axis generates the surface of revolution. The method used for a cone and a torus can be extended to a surface of revolution:
"1. pencil": Parallel surfaces to the given surface of revolution.<br>
"2. pencil": Cones with apices on the axis of revolution with generators orthogonal to the given surface (blue).<br>
"3. pencil": Planes containing the axis of revolution (purple).
The cones intersect the surface of revolution at circles (red). The purple planes intersect it at meridians (green). Hence: the circles (parallels) and the meridians are the curvature lines of a surface of revolution.
Confocal quadrics.
The article confocal conic sections deals with confocal "quadrics", too. They are a prominent example of a non trivial orthogonal system of surfaces. Dupin's theorem shows that
the curvature lines of any of the quadrics can be seen as the intersection curves with quadrics out of the other pencils (see diagrams).
Confocal quadrics are never rotational quadrics, so the result on surfaces of revolution (above) cannot be applied. The curvature lines are in general curves of degree 4. (Curvature lines of rotational quadrics are always conic sections.)
Semi-axes: formula_5. <br>
The curvature lines are the sections with one-sheeted (blue) and two-sheeted (purple) hyperboloids. The red points are umbilic points.
Semi-axes: formula_6. <br>
The curvature lines are intersections with ellipsoids (blue) and hyperboloids of two sheets (purple).
Dupin cyclides.
A Dupin cyclide and its parallels are determined by a pair of focal conic sections. The diagram shows a ring cyclide together with its focal conic sections (ellipse: dark red, hyperbola: dark blue). The cyclide can be seen as a member of an orthogonal system of surfaces:
"1. pencil": parallel surfaces of the cyclide.<br>
"2. pencil:" right circular cones through the ellipse (their apexes are on the hyperbola)<br>
"3. pencil:" right circular cones through the hyperbola (their apexes are on the ellipse)
The special feature of a cyclide is the property:
The curvature lines of a Dupin cyclide are "circles".
Proof of Dupin's theorem.
Any point under consideration is contained in exactly one surface of each pencil of the orthogonal system. The three parameters formula_7 describing these three surfaces can be considered as new coordinates. Hence any point can be represented by:
formula_8
formula_9
formula_10 or shortly: formula_11
For the example (cylinder) in the lead, the new coordinates are the radius formula_12 of the actual cylinder, the angle formula_13 between the vertical plane and the x-axis, and the height formula_14 of the horizontal plane. Hence, formula_15 can be considered as the cylinder coordinates of the point under consideration.
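As a quick check of this example, a short SymPy sketch (illustrative, not part of the original argument) verifies that the partial derivative vectors of the cylinder-coordinate parametrization are pairwise orthogonal, which is exactly condition (1) below:
# Symbolic check that the coordinate vectors of x(r, phi, zeta) = (r cos phi, r sin phi, zeta)
# are pairwise orthogonal.
import sympy as sp

r, phi, zeta = sp.symbols('r phi zeta', positive=True)
x = sp.Matrix([r * sp.cos(phi), r * sp.sin(phi), zeta])

x_r, x_phi, x_zeta = x.diff(r), x.diff(phi), x.diff(zeta)
print(sp.simplify(x_r.dot(x_phi)),     # 0
      sp.simplify(x_phi.dot(x_zeta)),  # 0
      sp.simplify(x_zeta.dot(x_r)))    # 0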
The condition "the surfaces intersect orthogonally" at point formula_16 means, the surface normals formula_17 are pairwise orthogonal. This is true, if
formula_18 are pairwise orthogonal. This property can be checked with help of Lagrange's identity.
Hence
(1)formula_19
Differentiating each of these equations with respect to the variable that does not appear in it, one gets
formula_20
formula_21
formula_22
Solving this linear system for the three appearing scalar products yields:
(2)formula_23
From (1) and (2): The three vectors formula_24 are orthogonal to the vector formula_25 and hence are linearly dependent (they are contained in a common plane), which can be expressed by:
(3)formula_26
From equation (1) one gets formula_27 (coefficient of the first fundamental form) and <br>
from equation (3): formula_28 (coefficient of the second fundamental form)
of the surface formula_29.
Consequence: The parameter curves are curvature lines.
The analogous result for the other two surfaces through point formula_30 is true, too.
|
[
{
"math_id": 0,
"text": "(r,\\varphi,\\theta)"
},
{
"math_id": 1,
"text": "(r,\\varphi,\\theta_0)"
},
{
"math_id": 2,
"text": "(r_0,\\varphi,\\theta)"
},
{
"math_id": 3,
"text": "(r,\\varphi_0,\\theta)"
},
{
"math_id": 4,
"text": "m_0"
},
{
"math_id": 5,
"text": "\\;a=1, \\; b=0.8, \\; c=0.6\\;"
},
{
"math_id": 6,
"text": "\\;a=0.67, \\; b=0.3 , \\; c=0.44\\;"
},
{
"math_id": 7,
"text": "u,v,w"
},
{
"math_id": 8,
"text": "x=x(u,v,w)"
},
{
"math_id": 9,
"text": "y=y(u,v,w)"
},
{
"math_id": 10,
"text": "z=z(u,v,w)\\quad "
},
{
"math_id": 11,
"text": "\\quad \\vec x=\\vec x(u,v,w)\\; ."
},
{
"math_id": 12,
"text": "r"
},
{
"math_id": 13,
"text": "\\varphi"
},
{
"math_id": 14,
"text": "\\zeta"
},
{
"math_id": 15,
"text": "r,\\varphi,\\zeta"
},
{
"math_id": 16,
"text": "\\vec x(u,v,w)"
},
{
"math_id": 17,
"text": "\\; \\vec x_u\\times \\vec x_v,\\; \\vec x_v\\times \\vec x_w, \\;\\vec x_w\\times \\vec x_u"
},
{
"math_id": 18,
"text": "\\vec x_u,\\;\\vec x_v,\\; \\vec x_w\\quad "
},
{
"math_id": 19,
"text": "\\quad \\vec x_u\\cdot \\vec x_v=0,\\quad \\vec x_v\\cdot \\vec x_w=0, \\quad\\vec x_w\\cdot \\vec x_u=0\\; ."
},
{
"math_id": 20,
"text": "\\vec x_{uw}\\cdot\\vec x_v+\\vec x_u\\cdot\\vec x_{vw}=0\\; ,"
},
{
"math_id": 21,
"text": "\\vec x_{uv}\\cdot\\vec x_w+\\vec x_v\\cdot\\vec x_{uw}=0\\; ,"
},
{
"math_id": 22,
"text": "\\vec x_{vw}\\cdot\\vec x_u+\\vec x_w\\cdot\\vec x_{vw}=0\\; ."
},
{
"math_id": 23,
"text": "\\quad \\vec x_{uv}\\cdot \\vec x_w=0,\\quad \\vec x_{vw}\\cdot \\vec x_u=0, \\quad\\vec x_{uw}\\cdot \\vec x_v=0\\; ."
},
{
"math_id": 24,
"text": "\\vec x_u,\\vec x_v,\\vec x_{uv}"
},
{
"math_id": 25,
"text": "\\vec x_w"
},
{
"math_id": 26,
"text": "\\quad \\det(\\vec x_{uv},\\vec x_u,\\vec x_v)=0\\; ."
},
{
"math_id": 27,
"text": "F=0"
},
{
"math_id": 28,
"text": "M=0"
},
{
"math_id": 29,
"text": "\\vec x = \\vec x(u,v,w_0)"
},
{
"math_id": 30,
"text": "\\vec x(u_0,v_0,w_0)"
}
] |
https://en.wikipedia.org/wiki?curid=63083789
|
6308405
|
Delta potential
|
Model of an energy potential in quantum mechanics
In quantum mechanics the delta potential is a potential well mathematically described by the Dirac delta function - a generalized function. Qualitatively, it corresponds to a potential which is zero everywhere, except at a single point, where it takes an infinite value. This can be used to simulate situations where a particle is free to move in two regions of space with a barrier between the two regions. For example, an electron can move almost freely in a conducting material, but if two conducting surfaces are put close together, the interface between them acts as a barrier for the electron that can be approximated by a delta potential.
The delta potential well is a limiting case of the finite potential well, which is obtained if one maintains the product of the width of the well and the potential constant while decreasing the well's width and increasing the potential.
This article, for simplicity, only considers a one-dimensional potential well, but the analysis could be extended to more dimensions.
Single delta potential.
The time-independent Schrödinger equation for the wave function "ψ"("x") of a particle in one dimension in a potential "V"("x") is
formula_0
where ħ is the reduced Planck constant, and E is the energy of the particle.
The delta potential is the potential
formula_1
where "δ"("x") is the Dirac delta function.
It is called a "delta potential well" if λ is negative, and a "delta potential barrier" if λ is positive. The delta has been defined to occur at the origin for simplicity; a shift in the delta function's argument does not change any of the following results.
Solving the Schrödinger equation.
Source:
The potential splits the space in two parts ("x" < 0 and "x" > 0). In each of these parts the potential is zero, and the Schrödinger equation reduces to
formula_2
this is a linear differential equation with constant coefficients, whose solutions are linear combinations of "eikx" and "e"−"ikx", where the wave number k is related to the energy by
formula_3
In general, due to the presence of the delta potential in the origin, the coefficients of the solution need not be the same in both half-spaces:
formula_4
where, in the case of positive energies (real k), "eikx" represents a wave traveling to the right, and "e"−"ikx" one traveling to the left.
One obtains a relation between the coefficients by imposing that the wavefunction be continuous at the origin:
formula_5
A second relation can be found by studying the derivative of the wavefunction. Normally, we could also impose differentiability at the origin, but this is not possible because of the delta potential. However, if we integrate the Schrödinger equation around "x" = 0, over an interval [−"ε", +"ε"]:
formula_6
In the limit as "ε" → 0, the right-hand side of this equation vanishes; the left-hand side becomes
formula_7
because
formula_8
Substituting the definition of ψ into this expression yields
formula_9
The boundary conditions thus give the following restrictions on the coefficients
formula_10
Bound state ("E" < 0).
In any one-dimensional attractive potential there will be a bound state. To find its energy, note that for "E" < 0, "k" = "i"√2"m"|"E"|/"ħ" = "iκ" is imaginary, and the wave functions which were oscillating for positive energies in the calculation above are now exponentially increasing or decreasing functions of "x" (see above). Requiring that the wave functions do not diverge at infinity eliminates half of the terms: "A"r = "B"l = 0. The wave function is then
formula_11
From the boundary conditions and normalization conditions, it follows that
formula_12
from which it follows that λ must be negative, that is, the bound state only exists for the well, and not for the barrier. The Fourier transform of this wave function is a Lorentzian function.
The energy of the bound state is then
formula_13
Scattering ("E" > 0).
For positive energies, the particle is free to move in either half-space: "x" < 0 or "x" > 0. It may be scattered at the delta-function potential.
The quantum case can be studied in the following situation: a particle incident on the barrier from the left side ("A"r). It may be reflected ("A"l) or transmitted ("B"r).
To find the amplitudes for reflection and transmission for incidence from the left, we put in the above equations "A"r = 1 (incoming particle), "A"l = "r" (reflection), "B"l = 0 (no incoming particle from the right) and "B"r = "t" (transmission), and solve for r and t even though we do not have any equations in t.
The result is
formula_14
Due to the mirror symmetry of the model, the amplitudes for incidence from the right are the same as those from the left. The result is that there is a non-zero probability
formula_15
for the particle to be reflected. This does not depend on the sign of λ, that is, a barrier has the same probability of reflecting the particle as a well. This is a significant difference from classical mechanics, where the reflection probability would be 1 for the barrier (the particle simply bounces back), and 0 for the well (the particle passes through the well undisturbed).
The probability for transmission is
formula_16
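As a numerical illustration (a sketch in units where ħ = m = 1, with illustrative values of λ and E), the reflection and transmission probabilities above can be evaluated and checked to sum to one, for a well and a barrier of equal strength:
# Reflection and transmission probabilities for the delta potential, with hbar = m = 1.
def reflection_transmission(E, lam):
    R = 1.0 / (1.0 + 2.0 * E / lam**2)
    T = 1.0 / (1.0 + lam**2 / (2.0 * E))
    return R, T

for lam in (-1.0, 1.0):          # a well and a barrier of equal strength
    R, T = reflection_transmission(E=0.5, lam=lam)
    print(lam, R, T, R + T)      # the probabilities coincide and R + T = 1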
Remarks and application.
The calculation presented above may at first seem unrealistic and hardly useful. However, it has proved to be a suitable model for a variety of real-life systems.
One such example regards the interfaces between two conducting materials. In the bulk of the materials, the motion of the electrons is quasi-free and can be described by the kinetic term in the above Hamiltonian with an effective mass m. Often, the surfaces of such materials are covered with oxide layers or are not ideal for other reasons. This thin, non-conducting layer may then be modeled by a local delta-function potential as above. Electrons may then tunnel from one material to the other giving rise to a current.
The operation of a scanning tunneling microscope (STM) relies on this tunneling effect. In that case, the barrier is due to the air between the tip of the STM and the underlying object. The strength of the barrier is related to the separation, being stronger the further apart the tip and the object are. For a more general model of this situation, see Finite potential barrier (QM). The delta function potential barrier is the limiting case of the model considered there for very high and narrow barriers.
The above model is one-dimensional while the space around us is three-dimensional. So, in fact, one should solve the Schrödinger equation in three dimensions. On the other hand, many systems only change along one coordinate direction and are translationally invariant along the others. The Schrödinger equation may then be reduced to the case considered here by an Ansatz for the wave function of the type formula_17.
Alternatively, it is possible to generalize the delta function to exist on the surface of some domain "D" (see Laplacian of the indicator).
The delta function model is actually a one-dimensional version of the Hydrogen atom according to the "dimensional scaling" method developed by the group of Dudley R. Herschbach.
The delta function model becomes particularly useful with the "double-well" Dirac Delta function model which represents a one-dimensional version of the Hydrogen molecule ion, as shown in the following section.
Double delta potential.
The double-well Dirac delta function models a diatomic hydrogen molecule by the corresponding Schrödinger equation:
formula_0
where the potential is now
formula_18
where formula_19 is the "internuclear" distance with Dirac delta-function (negative) peaks located at "x" = ±"R"/2 (shown in brown in the diagram). Keeping in mind the relationship of this model with its three-dimensional molecular counterpart, we use atomic units and set formula_20. Here formula_21 is a formally adjustable parameter. From the single-well case, we can infer the "ansatz" for the solution to be
formula_22
Matching of the wavefunction at the Dirac delta-function peaks yields the determinant
formula_23
Thus, formula_24 is found to be governed by the "pseudo-quadratic" equation
formula_25
which has two solutions formula_26. For the case of equal charges (symmetric homonuclear case), "λ" = 1, and the pseudo-quadratic reduces to
formula_27
The "+" case corresponds to a wave function symmetric about the midpoint (shown in red in the diagram), where "A" = "B", and is called "gerade". Correspondingly, the "−" case is the wave function that is anti-symmetric about the midpoint, where "A" = −"B", and is called "ungerade" (shown in green in the diagram). They represent an approximation of the two lowest discrete energy states of the three-dimensional <chem>H2^+</chem> and are useful in its analysis. Analytical solutions for the energy eigenvalues for the case of symmetric charges are given by
formula_28
where "W" is the standard Lambert "W" function. Note that the lowest energy corresponds to the symmetric solution formula_29. In the case of "unequal" charges, and for that matter the three-dimensional molecular problem, the solutions are given by a "generalization" of the Lambert "W" function (see ).
One of the most interesting cases is when "qR" ≤ 1, which results in formula_30. Thus, one has a non-trivial bound state solution with "E" = 0. For these specific parameters, there are many interesting properties that occur, one of which is the unusual effect that the transmission coefficient is unity at zero energy.
|
[
{
"math_id": 0,
"text": "-\\frac{\\hbar^2}{2m} \\frac{d^2 \\psi(x)}{dx^2} + V(x) \\psi(x) = E \\psi(x),"
},
{
"math_id": 1,
"text": "V(x) = \\lambda \\delta(x),"
},
{
"math_id": 2,
"text": "\\frac{d^2\\psi}{dx^2} = -\\frac{2mE}{\\hbar^2} \\psi;"
},
{
"math_id": 3,
"text": "k = \\frac{\\sqrt{2mE}}{\\hbar}."
},
{
"math_id": 4,
"text": "\\psi(x) = \\begin{cases}\n \\psi_\\text{L}(x) = A_\\text{r} e^{ikx} + A_\\text{l} e^{-ikx}, & \\text{ if } x < 0, \\\\\n \\psi_\\text{R}(x) = B_\\text{r} e^{ikx} + B_\\text{l} e^{-ikx}, & \\text{ if } x > 0,\n\\end{cases}"
},
{
"math_id": 5,
"text": "\\psi(0) = \\psi_L(0) = \\psi_R(0) = A_r + A_l = B_r + B_l,"
},
{
"math_id": 6,
"text": "-\\frac{\\hbar^2}{2m} \\int_{-\\varepsilon}^{+\\varepsilon} \\psi''(x) \\,dx + \\int_{-\\varepsilon}^{+\\varepsilon} V(x)\\psi(x) \\,dx = E \\int_{-\\varepsilon}^{+\\varepsilon} \\psi(x) \\,dx."
},
{
"math_id": 7,
"text": "-\\frac{\\hbar^2}{2m} [\\psi_R'(0) - \\psi_L'(0)] + \\lambda \\psi(0),"
},
{
"math_id": 8,
"text": "\\int_{-\\varepsilon}^{+\\varepsilon} \\psi''(x) \\,dx = [\\psi'(+\\varepsilon) - \\psi'(-\\varepsilon)]."
},
{
"math_id": 9,
"text": "-\\frac{\\hbar^2}{2m} ik (-A_r + A_l + B_r - B_l) + \\lambda(A_r + A_l) = 0."
},
{
"math_id": 10,
"text": "\\begin{cases}\n A_r + A_l - B_r - B_l &= 0,\\\\\n -A_r + A_l + B_r - B_l &= \\frac{2m\\lambda}{ik\\hbar^2} (A_r + A_l).\n\\end{cases}"
},
{
"math_id": 11,
"text": "\\psi(x) = \\begin{cases}\n \\psi_\\text{L}(x) = A_\\text{l} e^{\\kappa x}, & \\text{ if } x \\le 0, \\\\\n \\psi_\\text{R}(x) = B_\\text{r} e^{-\\kappa x}, & \\text{ if } x \\ge 0.\n\\end{cases}"
},
{
"math_id": 12,
"text": "\\begin{cases}\n A_\\text{l} = B_\\text{r} = \\sqrt{\\kappa},\\\\\n \\kappa = -\\frac{m \\lambda}{\\hbar^2},\n\\end{cases}"
},
{
"math_id": 13,
"text": "E = -\\frac{\\hbar^2\\kappa^2}{2m} = -\\frac{m\\lambda^2}{2\\hbar^2}."
},
{
"math_id": 14,
"text": "t = \\cfrac{1}{1 - \\cfrac{m\\lambda}{i\\hbar^2k}}, \\quad r = \\cfrac{1}{\\cfrac{i\\hbar^2 k}{m\\lambda} - 1}."
},
{
"math_id": 15,
"text": "R = |r|^2 = \\cfrac{1}{1 + \\cfrac{\\hbar^4 k^2}{m^2\\lambda^2}} = \\cfrac{1}{1 + \\cfrac{2\\hbar^2 E}{m \\lambda^2}}"
},
{
"math_id": 16,
"text": "T = |t|^2 = 1 - R = \\cfrac{1}{1 + \\cfrac{m^2\\lambda^2}{\\hbar^4 k^2}} = \\cfrac{1}{1 + \\cfrac{m \\lambda^2}{2\\hbar^2 E}}."
},
{
"math_id": 17,
"text": "\\Psi(x,y,z)=\\psi(x)\\phi(y,z)\\,\\!"
},
{
"math_id": 18,
"text": "V(x) = -q \\left[ \\delta \\left(x + \\frac{R}{2}\\right) + \\lambda\\delta \\left(x - \\frac{R}{2} \\right) \\right],"
},
{
"math_id": 19,
"text": "0 < R < \\infty"
},
{
"math_id": 20,
"text": "\\hbar = m = 1"
},
{
"math_id": 21,
"text": "0 < \\lambda < 1"
},
{
"math_id": 22,
"text": "\\psi(x) = A e^{-d \\left|x + \\frac{R}{2}\\right|} + B e^{-d \\left|x - \\frac{R}{2} \\right|}."
},
{
"math_id": 23,
"text": "\\begin{vmatrix}\n q - d & q e^{-d R} \\\\\n q \\lambda e^{-d R} & q \\lambda - d\n\\end{vmatrix} = 0,\n\\quad \\text{where } E = -\\frac{d^2}{2}.\n"
},
{
"math_id": 24,
"text": "d"
},
{
"math_id": 25,
"text": "\n d_\\pm(\\lambda ) = \\frac{1}{2} q(\\lambda + 1) \\pm \\frac{1}{2}\n \\left\\{q^2(1 + \\lambda)^2 - 4\\lambda q^2 \\left[1 - e^{-2d_\\pm(\\lambda\n)R}\\right]\\right\\}^{1/2},\n"
},
{
"math_id": 26,
"text": "d = d_{\\pm}"
},
{
"math_id": 27,
"text": "d_\\pm = q \\left[1 \\pm e^{-d_\\pm R}\\right]."
},
{
"math_id": 28,
"text": "d_\\pm = q + W(\\pm q R e^{-q R}) / R,"
},
{
"math_id": 29,
"text": "d_+"
},
{
"math_id": 30,
"text": "d_- = 0"
}
] |
https://en.wikipedia.org/wiki?curid=6308405
|
63087276
|
Plotting algorithms for the Mandelbrot set
|
Algorithms and methods of plotting the Mandelbrot set on a computing device
There are many programs and algorithms used to plot the Mandelbrot set and other fractals, some of which are described in fractal-generating software. These programs use a variety of algorithms to determine the color of individual pixels efficiently.
Escape time algorithm.
The simplest algorithm for generating a representation of the Mandelbrot set is known as the "escape time" algorithm. A repeating calculation is performed for each "x", "y" point in the plot area and based on the behavior of that calculation, a color is chosen for that pixel.
Unoptimized naïve escape time algorithm.
In both the unoptimized and optimized escape time algorithms, the "x" and "y" locations of each point are used as starting values in a repeating, or iterating calculation (described in detail below). The result of each iteration is used as the starting values for the next. The values are checked during each iteration to see whether they have reached a critical "escape" condition, or "bailout". If that condition is reached, the calculation is stopped, the pixel is drawn, and the next "x", "y" point is examined. For some starting values, escape occurs quickly, after only a small number of iterations. For starting values very close to but not in the set, it may take hundreds or thousands of iterations to escape. For values within the Mandelbrot set, escape will never occur. The programmer or user must choose how many iterations–or how much "depth"–they wish to examine. The higher the maximal number of iterations, the more detail and subtlety emerge in the final image, but the longer time it will take to calculate the fractal image.
Escape conditions can be simple or complex. Because no complex number with a real or imaginary part greater than 2 can be part of the set, a common bailout is to escape when either coefficient exceeds 2. A more computationally complex method that detects escapes sooner is to compute the distance from the origin using the Pythagorean theorem, i.e., to determine the absolute value, or "modulus", of the complex number. If this value exceeds 2, or equivalently, when the sum of the squares of the real and imaginary parts exceeds 4, the point has reached escape. More computationally intensive rendering variations include the Buddhabrot method, which finds escaping points and plots their iterated coordinates.
The color of each point represents how quickly the values reached the escape point. Often black is used to show values that fail to escape before the iteration limit, and gradually brighter colors are used for points that escape. This gives a visual representation of how many cycles were required before reaching the escape condition.
To render such an image, the region of the complex plane we are considering is subdivided into a certain number of pixels. To color any such pixel, let formula_0 be the midpoint of that pixel. We now iterate the critical point 0 under formula_1, checking at each step whether the orbit point has modulus larger than 2. When this is the case, we know that formula_0 does not belong to the Mandelbrot set, and we color our pixel according to the number of iterations used to find out. Otherwise, we keep iterating up to a fixed number of steps, after which we decide that our parameter is "probably" in the Mandelbrot set, or at least very close to it, and color the pixel black.
In pseudocode, this algorithm would look as follows. The algorithm does not use complex numbers and manually simulates complex-number operations using two real numbers, for those who do not have a complex data type. The program may be simplified if the programming language includes complex-data-type operations.
for each pixel (Px, Py) on the screen do
x0 := scaled x coordinate of pixel (scaled to lie in the Mandelbrot X scale (-2.00, 0.47))
y0 := scaled y coordinate of pixel (scaled to lie in the Mandelbrot Y scale (-1.12, 1.12))
x := 0.0
y := 0.0
iteration := 0
max_iteration := 1000
while (x*x + y*y ≤ 2*2 AND iteration < max_iteration) do
xtemp := x*x - y*y + x0
y := 2*x*y + y0
x := xtemp
iteration := iteration + 1
color := palette[iteration]
plot(Px, Py, color)
Here, relating the pseudocode to formula_0, formula_2 and formula_1: z = x + iy, z^2 = x^2 − y^2 + 2ixy, and c = x0 + iy0, and so, as can be seen in the pseudocode in the computation of "x" and "y": x = Re(z^2 + c) = x^2 − y^2 + x0 and y = Im(z^2 + c) = 2xy + y0.
To get colorful images of the set, the assignment of a color to each value of the number of executed iterations can be made using one of a variety of functions (linear, exponential, etc.). One practical way, without slowing down calculations, is to use the number of executed iterations as an entry to a palette initialized at startup. If the color table has, for instance, 500 entries, then the color selection is "n" mod 500, where "n" is the number of iterations.
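For comparison, a compact runnable Python version of the same naive escape-time loop might look as follows (a sketch: the function name, the ASCII output, and the grid size are illustrative choices, not part of the pseudocode above):
# Naive escape-time iteration for a single point c = x0 + i*y0.
def escape_time(x0: float, y0: float, max_iteration: int = 1000) -> int:
    x, y = 0.0, 0.0
    iteration = 0
    while x * x + y * y <= 4.0 and iteration < max_iteration:
        x, y = x * x - y * y + x0, 2 * x * y + y0
        iteration += 1
    return iteration

# Example: scan a small grid and print '*' for points that never escape.
for py in range(24):
    row = ""
    for px in range(64):
        cx = -2.0 + 2.47 * px / 64    # map pixel to the X range (-2.00, 0.47)
        cy = -1.12 + 2.24 * py / 24   # map pixel to the Y range (-1.12, 1.12)
        row += "*" if escape_time(cx, cy, 100) == 100 else " "
    print(row)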
Optimized escape time algorithms.
The code in the previous section uses an unoptimized inner while loop for clarity. In the unoptimized version, one must perform five multiplications per iteration. To reduce the number of multiplications the following code for the inner while loop may be used instead:
x2:= 0
y2:= 0
w:= 0
while (x2 + y2 ≤ 4 and iteration < max_iteration) do
x:= x2 - y2 + x0
y:= w - x2 - y2 + y0
x2:= x * x
y2:= y * y
w:= (x + y) * (x + y)
iteration:= iteration + 1
The above code works via some algebraic simplification of the complex multiplication:
formula_9
Using the above identity, the number of multiplications can be reduced to three instead of five.
The above inner while loop can be further optimized by expanding "w" to
formula_10
Substituting "w" into formula_11 yields
formula_12
and hence calculating "w" is no longer needed.
The further optimized pseudocode for the above is:
x2:= 0
y2:= 0
while (x2 + y2 ≤ 4 and iteration < max_iteration) do
y:= 2 * x * y + y0
x:= x2 - y2 + x0
x2:= x * x
y2:= y * y
iteration:= iteration + 1
Note that in the above pseudocode, formula_13 seems to increase the number of multiplications by 1, but since 2 is the multiplier the code can be optimized via formula_14.
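Expressed as runnable Python, the further-optimized loop might look like this (a sketch; only three true multiplications remain per iteration, since the factor 2 can be computed as a doubling):
# Escape-time loop with squares carried between iterations.
def escape_time_optimized(x0: float, y0: float, max_iteration: int = 1000) -> int:
    x = y = x2 = y2 = 0.0
    iteration = 0
    while x2 + y2 <= 4.0 and iteration < max_iteration:
        y = 2 * x * y + y0   # uses the old x, so it must be updated before x
        x = x2 - y2 + x0
        x2 = x * x
        y2 = y * y
        iteration += 1
    return iteration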
Derivative Bailout or "derbail".
It is common to check the magnitude of z after every iteration, but there is another method we can use that can converge faster and reveal structure within Julia sets.
Instead of checking whether the magnitude of z after every iteration is larger than a given value, we can check whether the sum of the derivatives of z up to the current iteration step is larger than a given bailout value:
formula_15
Very large dbail values can enhance the detail in the structures revealed by this method.
It is possible to find derivatives automatically by leveraging Automatic differentiation and computing the iterations using Dual numbers.
Rendering fractals with the derbail technique can often require a large number of samples per pixel, as there can be precision issues which lead to fine detail and can result in noisy images even with samples in the hundreds or thousands.
Python code:
def pixel(px: int, py: int, w: int, h: int) -> int:
    def magn(a, b):
        return a * a + b * b

    dbail = 1e6
    ratio = w / h
    x0 = (((2 * px) / w) - 1) * ratio
    y0 = ((2 * py) / h) - 1

    # z starts at the critical point 0; (dx, dy) holds the derivative dz/dc, initially 0
    x, y = 0.0, 0.0
    dx, dy = 0.0, 0.0
    dx_sum, dy_sum = 0.0, 0.0

    iters = 0
    limit = 1024

    while magn(dx_sum, dy_sum) < dbail and iters < limit:
        # update the derivative first: dz -> 2*z*dz + 1 (using the current z)
        dx, dy = (dx * x - dy * y) * 2 + 1, (dy * x + dx * y) * 2
        # then update z -> z^2 + c
        x, y = x * x - y * y + x0, 2 * x * y + y0
        # accumulate the derivative terms used for the bailout test
        dx_sum += dx
        dy_sum += dy
        iters += 1

    return iters
Coloring algorithms.
In addition to plotting the set, a variety of algorithms have been developed to color the resulting image in aesthetically pleasing ways.
Histogram coloring.
A more complex coloring method involves using a histogram which pairs each pixel with said pixel's maximum iteration count before escape/bailout. This method will equally distribute colors to the same overall area, and, importantly, is independent of the maximum number of iterations chosen.
This algorithm has four passes. The first pass involves calculating the iteration counts associated with each pixel (but without any pixels being plotted). These are stored in an array: IterationCounts[x][y], where x and y are the x and y coordinates of said pixel on the screen respectively.
The first step of the second pass is to create an array of size "n", which is the maximum iteration count: NumIterationsPerPixel. Next, one must iterate over the array of pixel-iteration count pairs, IterationCounts[][], and retrieve each pixel's saved iteration count, "i", via e.g. "i" = IterationCounts[x][y]. After each pixel's iteration count "i" is retrieved, it is necessary to index the NumIterationsPerPixel by "i" and increment the indexed value (which is initially zero) -- e.g. NumIterationsPerPixel["i"] = NumIterationsPerPixel["i"] + 1 .
for (x = 0; x < width; x++) do
for (y = 0; y < height; y++) do
i:= IterationCounts[x][y]
NumIterationsPerPixel[i]++
The third pass iterates through the NumIterationsPerPixel array and adds up all the stored values, saving them in "total". The array index represents the number of pixels that reached that iteration count before bailout.
total := 0
for (i = 0; i < max_iterations; i++) do
total += NumIterationsPerPixel[i]
After this, the fourth pass begins and all the values in the IterationCounts array are indexed, and, for each iteration count "i", associated with each pixel, the count is added to a global sum of all the iteration counts from 1 to "i" in the NumIterationsPerPixel array . This value is then normalized by dividing the sum by the "total" value computed earlier.
hue[][]:= 0.0
for (x = 0; x < width; x++) do
for (y = 0; y < height; y++) do
iteration:= IterationCounts[x][y]
for (i = 0; i <= iteration; i++) do
hue[x][y] += NumIterationsPerPixel[i] / total "/* Must be floating-point division. */"
...
color = palette[hue[m, n]]
...
Finally, the computed value is used, e.g. as an index to a color palette.
This method may be combined with the smooth coloring method below for more aesthetically pleasing images.
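A consolidated Python sketch of the passes described above might look as follows; here IterationCounts would come from any of the escape-time loops (pass 1), and palette is an illustrative callable mapping a value in [0, 1] to a color:
# Histogram coloring: passes 2-4 applied to a precomputed IterationCounts grid.
def histogram_color(IterationCounts, max_iterations, palette):
    width, height = len(IterationCounts), len(IterationCounts[0])

    # Pass 2: how many pixels reached each iteration count.
    NumIterationsPerPixel = [0] * (max_iterations + 1)
    for x in range(width):
        for y in range(height):
            NumIterationsPerPixel[IterationCounts[x][y]] += 1

    # Pass 3: total number of counted pixels (excluding the non-escaping bucket).
    total = sum(NumIterationsPerPixel[:max_iterations])

    # Pass 4: cumulative, normalized hue per pixel, then palette lookup.
    image = [[None] * height for _ in range(width)]
    for x in range(width):
        for y in range(height):
            hue = sum(NumIterationsPerPixel[:IterationCounts[x][y] + 1]) / total
            image[x][y] = palette(hue)
    return image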
Continuous (smooth) coloring.
The escape time algorithm is popular for its simplicity. However, it creates bands of color, which, as a type of aliasing, can detract from an image's aesthetic value. This can be improved using an algorithm known as "normalized iteration count", which provides a smooth transition of colors between iterations. The algorithm associates a real number formula_16 with each value of "z" by using the connection of the iteration number with the potential function. This function is given by
formula_17
where "z""n" is the value after "n" iterations and "P" is the power for which "z" is raised to in the Mandelbrot set equation ("z""n"+1 = "z""n""P" + "c", "P" is generally 2).
If we choose a large bailout radius "N" (e.g., 10100), we have that
formula_18
for some real number formula_19, and this is
formula_20
and as "n" is the first iteration number such that |"z""n"| > "N", the number we subtract from "n" is in the interval [0, 1).
For the coloring we must have a cyclic scale of colors (constructed mathematically, for instance) and containing "H" colors numbered from 0 to "H" − 1 ("H" = 500, for instance). We multiply the real number formula_19 by a fixed real number determining the density of the colors in the picture, take the integral part of this number modulo "H", and use it to look up the corresponding color in the color table.
For example, modifying the above pseudocode and also using the concept of linear interpolation would yield
for each pixel (Px, Py) on the screen do
x0:= scaled x coordinate of pixel (scaled to lie in the Mandelbrot X scale (-2.5, 1))
y0:= scaled y coordinate of pixel (scaled to lie in the Mandelbrot Y scale (-1, 1))
x:= 0.0
y:= 0.0
iteration:= 0
max_iteration:= 1000
"// Here N = 2^8 is chosen as a reasonable bailout radius."
while x*x + y*y ≤ (1 << 16) and iteration < max_iteration do
xtemp:= x*x - y*y + x0
y:= 2*x*y + y0
x:= xtemp
iteration:= iteration + 1
"// Used to avoid floating point issues with points inside the set."
if iteration < max_iteration then
"// sqrt of inner term removed using log simplification rules."
log_zn:= log(x*x + y*y) / 2
nu:= log(log_zn / log(2)) / log(2)
"// Rearranging the potential function."
"// Dividing log_zn by log(2) instead of log(N = 1«8)"
"// because we want the entire palette to range from the"
"// center to radius 2, NOT our bailout radius."
iteration:= iteration + 1 - nu
color1:= palette[floor(iteration)]
color2:= palette[floor(iteration) + 1]
"// iteration % 1 = fractional part of iteration."
color:= linear_interpolate(color1, color2, iteration % 1)
plot(Px, Py, color)
Exponentially mapped and cyclic iterations.
Typically, when we render a fractal, the range in which colors from a given palette appear along the fractal is static. If we want to offset the colors from the border of the fractal, or adjust the palette to cycle in a specific way, there are a few simple changes we can make when taking the final iteration count before passing it along to choose an item from our palette.
When we have obtained the iteration count, we can make the range of colors non-linear. Raising a value normalized to the range [0,1] to a power "n" maps a linear range to an exponential range, which in our case can nudge the appearance of colors along the outside of the fractal, and allows us to bring out other colors, or push the entire palette in closer to the border.
formula_21
where i is our iteration count after bailout, max_i is our iteration limit, S is the exponent we are raising iters to, and N is the number of items in our palette. This scales the iter count non-linearly and scales the palette to cycle approximately proportionally to the zoom.
We can then plug v into whatever algorithm we desire for generating a color.
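As an illustration, here is a minimal Python sketch of the mapping above; the names follow the formula (iteration count i, limit max_i, exponent S, palette size N), while the particular values and the placeholder palette are assumptions chosen only for the example.
def exp_cyclic_index(i, max_i, S, N):
    # v = ((i / max_i)^S * N)^1.5 mod N, as in the formula above
    return ((i / max_i) ** S * N) ** 1.5 % N

palette = [(k, k, 255 - k) for k in range(256)]   # placeholder palette
v = exp_cyclic_index(137, 1000, 2.0, len(palette))
color = palette[int(v)]                            # v is always in [0, N)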
Passing iterations into a color directly.
One thing we may want to consider is avoiding having to deal with a palette or color blending at all. There are actually a handful of methods we can leverage to generate smooth, consistent coloring by constructing the color on the spot.
f(v) refers to the sRGB transfer function.
A naive method for generating a color in this way is by directly scaling v to 255 and passing it into RGB as such
rgb = [v * 255, v * 255, v * 255]
One flaw with this is that RGB is non-linear due to gamma; passing the normalized value through the sRGB transfer (companding) function f on each channel accounts for this, and allows the colors to be summed and sampled properly.
srgb = [f(v) * 255, f(v) * 255, f(v) * 255]
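For concreteness, a minimal Python sketch of this grayscale mapping follows, assuming v has already been normalized to [0, 1] and using the standard sRGB transfer function for f.
def srgb_transfer(u):
    # standard sRGB companding of a linear-light value u in [0, 1]
    return 12.92 * u if u <= 0.0031308 else 1.055 * u ** (1 / 2.4) - 0.055

def gray_from_v(v):
    c = round(srgb_transfer(v) * 255)
    return (c, c, c)

print(gray_from_v(0.25))   # (137, 137, 137)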
HSV coloring.
HSV coloring can be accomplished by mapping the iteration count from [0, max_iter) to [0, 360), taking it to the power of 1.5, and then taking it modulo 360.
We can then use the exponentially mapped iteration count for the value channel and return
hsv = [powf((i / max) * 360, 1.5) % 360, 100, (i / max) * 100]
This method applies to HSL as well, except we pass a saturation of 50% instead.
hsl = [powf((i / max) * 360, 1.5) % 360, 50, (i / max) * 100]
LCH coloring.
One of the most perceptually uniform coloring methods involves passing in the processed iter count into LCH. If we utilize the exponentially mapped and cyclic method above, we can take the result of that into the Luma and Chroma channels. We can also exponentially map the iter count and scale it to 360, and pass this modulo 360 into the hue.
formula_22
One issue we wish to avoid here is out-of-gamut colors. This can be achieved with a little trick based on the change in in-gamut colors relative to luma and chroma. As we increase luma, we need to decrease chroma to stay within gamut.
s = iters/max_i;
v = 1.0 - powf(cos(pi * s), 2.0);
LCH = [75 - (75 * v), 28 + (75 - (75 * v)), powf(360 * s, 1.5) % 360];
Advanced plotting algorithms.
In addition to the simple and slow escape time algorithms already discussed, there are many other more advanced algorithms that can be used to speed up the plotting process.
Distance estimates.
One can compute the distance from point "c" (in exterior or interior) to the nearest point on the boundary of the Mandelbrot set.
Exterior distance estimation.
The proof of the connectedness of the Mandelbrot set in fact gives a formula for the uniformizing map of the complement of formula_23 (and the derivative of this map). By the Koebe quarter theorem, one can then estimate the distance between the midpoint of our pixel and the Mandelbrot set up to a factor of 4.
In other words, provided that the maximal number of iterations is sufficiently high, one obtains a picture of the Mandelbrot set with the following properties:
The upper bound "b" for the distance estimate of a pixel "c" (a complex number) from the Mandelbrot set is given by
formula_24
where
The idea behind this formula is simple: when the equipotential lines for the potential function formula_35 lie close together, the number formula_36 is large, and conversely; therefore the equipotential lines for the function formula_37 should lie approximately regularly spaced.
From a mathematician's point of view, this formula only works in the limit where "n" goes to infinity, but very reasonable estimates can be found with just a few additional iterations after the main loop exits.
Once "b" is found, by the Koebe 1/4-theorem, we know that there is no point of the Mandelbrot set with distance from "c" smaller than "b/4".
The distance estimation can be used for drawing the boundary of the Mandelbrot set; see the article Julia set. In this approach, pixels that are sufficiently close to M are drawn using a different color. This creates drawings where the thin "filaments" of the Mandelbrot set can be easily seen. This technique is used to good effect in the B&W images of Mandelbrot sets in the books "The Beauty of Fractals" and "The Science of Fractal Images".
Here is a sample B&W image rendered using Distance Estimates:
Distance estimation can also be used to render 3D images of Mandelbrot and Julia sets.
Interior distance estimation.
It is also possible to estimate the distance of a point whose orbit converges to an attracting periodic cycle (i.e., a hyperbolic interior point) to the boundary of the Mandelbrot set. The upper bound "b" for the distance estimate is given by
formula_38
where
Analogous to the exterior case, once "b" is found, we know that all points within the distance of "b"/4 from "c" are inside the Mandelbrot set.
There are two practical problems with the interior distance estimate: first, we need to find formula_44 precisely, and second, we need to find formula_39 precisely.
The problem with formula_44 is that the convergence to formula_44 by iterating formula_40 requires, theoretically, an infinite number of operations.
The problem with any given formula_39 is that, sometimes, due to rounding errors, a period is falsely identified to be an integer multiple of the real period (e.g., a period of 86 is detected, while the real period is only 43=86/2). In such case, the distance is overestimated, i.e., the reported radius could contain points outside the Mandelbrot set.
Cardioid / bulb checking.
One way to improve calculations is to find out beforehand whether the given point lies within the cardioid or in the period-2 bulb. Before passing the complex value through the escape time algorithm, first check that:
formula_52,
formula_53,
formula_54,
where "x" represents the real value of the point and "y" the imaginary value. The first two equations determine that the point is within the cardioid, the last the period-2 bulb.
The cardioid test can equivalently be performed without the square root:
formula_55
formula_56
3rd- and higher-order bulbs do not have equivalent tests, because they are not perfectly circular. However, it is possible to find whether the points are within circles inscribed within these higher-order bulbs, preventing many, though not all, of the points in the bulb from being iterated.
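A minimal Python sketch of the cardioid and period-2 bulb tests above, using the square-root-free form of the cardioid check:
def in_main_cardioid_or_bulb(x, y):
    # True if c = x + iy lies in the main cardioid or the period-2 bulb
    q = (x - 0.25) ** 2 + y * y
    in_cardioid = q * (q + (x - 0.25)) <= 0.25 * y * y
    in_bulb = (x + 1.0) ** 2 + y * y <= 0.0625
    return in_cardioid or in_bulb

print(in_main_cardioid_or_bulb(-0.1, 0.1))   # True: can be assigned max_iteration immediately
print(in_main_cardioid_or_bulb(0.3, 0.5))    # False: must be iterated normally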
Periodicity checking.
To prevent having to do huge numbers of iterations for points inside the set, one can perform periodicity checking, which checks whether a point reached in iterating a pixel has been reached before. If so, the pixel cannot diverge and must be in the set. Periodicity checking is a trade-off, as the need to remember points costs data management instructions and memory, but saves computational instructions. However, checking against only one previous iteration can detect many periods with little performance overhead. For example, within the while loop of the pseudocode above, make the following modifications:
xold := 0
yold := 0
period := 0
while (x*x + y*y ≤ 2*2 and iteration < max_iteration) do
xtemp := x*x - y*y + x0
y := 2*x*y + y0
x := xtemp
iteration := iteration + 1
if x ≈ xold and y ≈ yold then
iteration := max_iteration /* Set to max for the color plotting */
break /* We are inside the Mandelbrot set, leave the while loop */
period:= period + 1
if period > 20 then
period := 0
xold := x
yold := y
The above code stores away a new x and y value on every 20th iteration, so it can detect periods that are up to 20 points long.
Border tracing / edge checking.
Because the Mandelbrot set is full, any point enclosed by a closed shape whose border lies entirely within the Mandelbrot set must itself be in the Mandelbrot set. Border tracing works by following the lemniscates of the various iteration levels (colored bands) all around the set, and then filling the entire band at once. This also provides a speed increase because large numbers of points can now be skipped.
In the animation shown, points outside the set are colored with a 1000-iteration escape time algorithm. Tracing the set border and filling it, rather than iterating the interior points, reduces the total number of iterations by 93.16%. With a higher iteration limit the benefit would be even greater.
Rectangle checking.
Rectangle checking is an older and simpler method for plotting the Mandelbrot set. The basic idea is that if every pixel on a rectangle's border shares the same iteration count, then the rectangle can be safely filled using that number of iterations. There are several variations of the rectangle checking method; however, all of them are slower than the border tracing method because they end up calculating more pixels. One variant calculates only the corner pixels of each rectangle, but this causes damaged pictures more often than calculating the entire border, so it only works reasonably well if small boxes of around 6x6 pixels are used, with no recursing in from bigger boxes. (Fractint method.)
The simplest rectangle checking method is to check the borders of equally sized rectangles, resembling a grid pattern. (Mariani's algorithm.)
A faster and slightly more advanced variant is to first calculate a bigger box, say 25x25 pixels. If the entire box border has the same color, then just fill the box with that color. If not, then split the box into four boxes of 13x13 pixels, reusing the already calculated pixels as the outer border and sharing the inner "cross" pixels between the inner boxes. Again, fill in those boxes that have only one border color, split those that don't into four 7x7-pixel boxes, and then split those that "fail" into 4x4 boxes. (Mariani-Silver algorithm.)
Even faster is to split the boxes in half instead of into four boxes. Then it might be optimal to use boxes with a 1.4:1 aspect ratio, so they can be split the way A3 paper is folded into A4 and A5. (The DIN approach.)
As with border tracing, rectangle checking only works on areas with one discrete color. But even if the outer area uses smooth/continuous coloring, rectangle checking will still speed up the costly inner area of the Mandelbrot set, unless the inner area also uses some smooth coloring method, for instance interior distance estimation.
Symmetry utilization.
The Mandelbrot set is symmetric about the real axis, so when the real axis is present in the final image, the mirrored portion of the rendering process can be skipped. However, regardless of the portion that gets mirrored, the same number of points will be rendered.
Julia sets have symmetry around the origin. This means that quadrants 1 and 3 are symmetric, and quadrants 2 and 4 are symmetric. Supporting symmetry for both Mandelbrot and Julia sets requires handling symmetry differently for the two different types of graphs.
Multithreading.
Escape-time rendering of Mandelbrot and Julia sets lends itself extremely well to parallel processing. On multi-core machines the area to be plotted can be divided into a series of rectangular areas which can then be provided as a set of tasks to be rendered by a pool of rendering threads. This is an embarrassingly parallel computing problem. (Note that one gets the best speed-up by first excluding symmetric areas of the plot, and then dividing the remaining unique regions into rectangular areas.)
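As an illustration, here is a minimal Python sketch of tile-based parallel rendering; the tile size, image bounds, and the simple escape_time helper (mirroring the pseudocode earlier) are assumptions made only for the example.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

WIDTH, HEIGHT, TILE = 800, 600, 100

def escape_time(cx, cy, max_iteration=1000):
    x = y = 0.0
    iteration = 0
    while x * x + y * y <= 4.0 and iteration < max_iteration:
        x, y = x * x - y * y + cx, 2 * x * y + cy
        iteration += 1
    return iteration

def render_tile(corner):
    # render one rectangular tile and return its pixel values
    tx, ty = corner
    tile = {}
    for px in range(tx, min(tx + TILE, WIDTH)):
        for py in range(ty, min(ty + TILE, HEIGHT)):
            cx = -2.5 + 3.5 * px / WIDTH
            cy = -1.0 + 2.0 * py / HEIGHT
            tile[(px, py)] = escape_time(cx, cy)
    return tile

if __name__ == "__main__":
    corners = product(range(0, WIDTH, TILE), range(0, HEIGHT, TILE))
    image = {}
    with ProcessPoolExecutor() as pool:   # embarrassingly parallel over tiles
        for result in pool.map(render_tile, corners):
            image.update(result)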
Here is a short video showing the Mandelbrot set being rendered using multithreading and symmetry, but without boundary following:
Finally, here is a video showing the same Mandelbrot set image being rendered using multithreading, symmetry, and boundary following:
Perturbation theory and series approximation.
Very highly magnified images require more than the standard 64–128 or so bits of precision that most hardware floating-point units provide, requiring renderers to use slow "BigNum" or "arbitrary-precision" math libraries to calculate. However, this can be sped up by the exploitation of perturbation theory. Given
formula_57
as the iteration, and a small epsilon and delta, it is the case that
formula_58
or
formula_59
so if one defines
formula_60
one can calculate a single point (e.g. the center of an image) using high-precision arithmetic ("z"), giving a "reference orbit", and then compute many points around it in terms of various initial offsets delta plus the above iteration for epsilon, where epsilon-zero is set to 0. For most iterations, epsilon does not need more than 16 significant figures, and consequently hardware floating-point may be used to get a mostly accurate image. There will often be some areas where the orbits of points diverge enough from the reference orbit that extra precision is needed on those points, or else additional local high-precision-calculated reference orbits are needed. By measuring the orbit distance between the reference point and the point calculated with low precision, it can be detected that it is not possible to calculate the point correctly, and the calculation can be stopped. These incorrect points can later be re-calculated e.g. from another closer reference point.
Further, it is possible to approximate the starting values for the low-precision points with a truncated Taylor series, which often enables a significant number of iterations to be skipped.
Renderers implementing these techniques are publicly available and offer speedups for highly magnified images by around two orders of magnitude.
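The following is a minimal Python sketch of the perturbation recurrence above; ordinary complex floats stand in for the reference orbit, which a real renderer would compute with arbitrary-precision arithmetic, and the reference point and offset are arbitrary choices for the example.
def reference_orbit(c_ref, max_iteration):
    # orbit z_{n+1} = z_n^2 + c for the reference point (high precision in practice)
    orbit, z = [], 0j
    for _ in range(max_iteration):
        orbit.append(z)
        if abs(z) > 2:
            break
        z = z * z + c_ref
    return orbit

def perturbed_escape_time(delta, orbit, bailout=2.0):
    eps = 0j                                   # epsilon_0 = 0
    for n, z in enumerate(orbit):              # z = z_n, eps = epsilon_n
        if abs(z + eps) > bailout:             # perturbed orbit value is z_n + epsilon_n
            return n
        eps = 2 * z * eps + eps * eps + delta  # epsilon_{n+1}
    return len(orbit)

c_ref = complex(-0.5, 0.5)                     # reference point inside the main cardioid
orbit = reference_orbit(c_ref, 1000)
print(perturbed_escape_time(1e-6 + 1e-6j, orbit))   # 1000: the offset point never escapes here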
An alternate explanation of the above:
For the central point in the disc formula_61 and its iterations formula_62, and an arbitrary point in the disc formula_63 and its iterations formula_64, it is possible to define the following iterative relationship:
formula_65
With formula_66. Successive iterations of formula_67 can be found using the following:
formula_68
formula_69
formula_70
formula_71
Now from the original definition:
formula_72,
It follows that:
formula_73
Since the iterative relationship relates an arbitrary point to the central point by a very small change formula_74, most of the iterations of formula_67 are also small and can be calculated using floating-point hardware.
However, for every arbitrary point in the disc it is possible to calculate a value for a given formula_75 without having to iterate through the sequence from formula_76, by expressing formula_67 as a power series of formula_74.
formula_77
With formula_78.
Now given the iteration equation of formula_79, it is possible to calculate the coefficients of the power series for each formula_67:
formula_73
formula_80
formula_81
Therefore, it follows that:
formula_82
formula_83
formula_84
formula_85
The coefficients in the power series can be calculated as iterative series using only values from the central point's iterations formula_86, and do not change for any arbitrary point in the disc. If formula_74 is very small, formula_67 should be calculable to sufficient accuracy using only a few terms of the power series. As the Mandelbrot escape contours are 'continuous' over the complex plane, if a point's escape time has been calculated, then the escape time of that point's neighbours should be similar. Interpolation of the neighbouring points should provide a good estimation of where to start in the formula_67 series.
Further, separate interpolation of both real-axis points and imaginary-axis points should provide both an upper and lower bound for the point being calculated. If both results are the same (i.e. both escape or do not escape) then the difference formula_87 can be used to recurse until both an upper and lower bound can be established. If floating-point hardware can be used to iterate the formula_79 series, then there exists a relation between how many iterations can be achieved in the time it takes to use BigNum software to compute a given formula_67. If the difference between the bounds is greater than the number of iterations, it is possible to perform binary search using BigNum software, successively halving the gap until it becomes more time efficient to find the escape value using floating-point hardware.
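To make the recurrences concrete, here is a minimal Python sketch that builds the coefficient sequences along a given reference orbit and evaluates the truncated series for formula_67; the orbit itself (computed at high precision in practice) and the truncation at the cubic term are assumptions of the example.
def series_coefficients(orbit):
    # orbit is [z_1, z_2, ...], the central point's iterations
    # A[k], B[k], C[k] hold the coefficients of epsilon_{k+1}
    A, B, C = [1 + 0j], [0j], [0j]                    # A_1 = 1, B_1 = 0, C_1 = 0
    for z in orbit[:-1]:
        A.append(2 * z * A[-1] + 1)                   # A_{n+1} = 2 z_n A_n + 1
        B.append(2 * z * B[-1] + A[-2] ** 2)          # B_{n+1} = 2 z_n B_n + A_n^2
        C.append(2 * z * C[-1] + 2 * A[-2] * B[-2])   # C_{n+1} = 2 z_n C_n + 2 A_n B_n
    return A, B, C

def eps_estimate(n, delta, A, B, C):
    # truncated power series for epsilon_n (lists are 0-indexed from epsilon_1)
    k = n - 1
    return A[k] * delta + B[k] * delta ** 2 + C[k] * delta ** 3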
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "c"
},
{
"math_id": 1,
"text": "P_c"
},
{
"math_id": 2,
"text": "z"
},
{
"math_id": 3,
"text": " z = x + iy\\ "
},
{
"math_id": 4,
"text": " z^2 = x^2 +2ixy"
},
{
"math_id": 5,
"text": "y^2 \\ "
},
{
"math_id": 6,
"text": " c = x_0 + i y_0\\ "
},
{
"math_id": 7,
"text": "x = \\mathop{\\mathrm{Re}}(z^2+c) = x^2-y^2 + x_0"
},
{
"math_id": 8,
"text": "y = \\mathop{\\mathrm{Im}}(z^2+c) = 2xy + y_0.\\ "
},
{
"math_id": 9,
"text": "\\begin{align}\n(iy + x)^2 &= -y^2 + 2iyx + x^2 \\\\\n &= x^2 - y^2 + 2iyx\n\\end{align} "
},
{
"math_id": 10,
"text": "w = x^{2} + 2xy + y^{2}"
},
{
"math_id": 11,
"text": "y = w - x^{2} - y^{2} + y_0"
},
{
"math_id": 12,
"text": "y = 2xy + y_0"
},
{
"math_id": 13,
"text": "2xy"
},
{
"math_id": 14,
"text": "(x + x)y"
},
{
"math_id": 15,
"text": "z_n^\\prime := (2 * z_{n-1}^\\prime * z_{n-1}) + 1 "
},
{
"math_id": 16,
"text": "\\nu"
},
{
"math_id": 17,
"text": "\\phi(z) = \\lim_{n \\to \\infty} (\\log|z_n|/P^{n}),"
},
{
"math_id": 18,
"text": "\\log|z_n|/P^{n} = \\log(N)/P^{\\nu(z)}"
},
{
"math_id": 19,
"text": "\\nu(z)"
},
{
"math_id": 20,
"text": "\\nu(z) = n - \\log_P (\\log|z_n|/\\log(N)),"
},
{
"math_id": 21,
"text": "v = ((\\mathbf{i} / max_i)^\\mathbf{S}\\mathbf{N})^{1.5} \\bmod \\mathbf{N}\n"
},
{
"math_id": 22,
"text": "\\begin{array}{lcl} \nx & \\in & \\mathbb{Q+} \\\\\ns_i & = & (i/max_i)^\\mathbf{x} \\\\\nv & = & 1.0 - cos^2(\\pi s_i)\\\\\nL & = & 75 - (75v) \\\\ \nC & = & 28 + (75 - 75v) \\\\\nH & = & (360s_i)^{1.5} \\bmod 360\n\\end{array}\n"
},
{
"math_id": 23,
"text": "M"
},
{
"math_id": 24,
"text": "b=\\lim_{n \\to \\infty} \\frac{2 \\cdot |{P_c^n(c)| \\cdot \\ln|{P_c^n(c)}}|}{|\\frac{\\partial}{\\partial{c}} P_c^n(c)|},"
},
{
"math_id": 25,
"text": "P_c(z) \\,"
},
{
"math_id": 26,
"text": "P_c^n(c)"
},
{
"math_id": 27,
"text": "P_c(z) \\to z"
},
{
"math_id": 28,
"text": "z^2 + c \\to z"
},
{
"math_id": 29,
"text": "z=c"
},
{
"math_id": 30,
"text": "P_c^{ 0}(c) = c"
},
{
"math_id": 31,
"text": "P_c^{ n+1}(c) = P_c^n(c)^2 + c"
},
{
"math_id": 32,
"text": "\\frac{\\partial}{\\partial{c}} P_c^n(c)"
},
{
"math_id": 33,
"text": "\\frac{\\partial}{\\partial{c}} P_c^{ 0}(c) = 1"
},
{
"math_id": 34,
"text": "\\frac{\\partial}{\\partial{c}} P_c^{ n+1}(c) = 2\\cdot{}P_c^n(c)\\cdot\\frac{\\partial}{\\partial{c}} P_c^n(c) + 1"
},
{
"math_id": 35,
"text": "\\phi(z)"
},
{
"math_id": 36,
"text": "|\\phi'(z)|"
},
{
"math_id": 37,
"text": "\\phi(z)/|\\phi'(z)|"
},
{
"math_id": 38,
"text": "b=\\frac{1-\\left|{\\frac{\\partial}{\\partial{z}}P_c^p(z_0)}\\right|^2}\n {\\left|{\\frac{\\partial}{\\partial{c}}\\frac{\\partial}{\\partial{z}}P_c^p(z_0) +\n \\frac{\\partial}{\\partial{z}}\\frac{\\partial}{\\partial{z}}P_c^p(z_0)\n \\frac{\\frac{\\partial}{\\partial{c}}P_c^p(z_0)}\n {1-\\frac{\\partial}{\\partial{z}}P_c^p(z_0)}} \\right|},"
},
{
"math_id": 39,
"text": "p"
},
{
"math_id": 40,
"text": "P_c(z)"
},
{
"math_id": 41,
"text": "P_c(z)=z^2 + c"
},
{
"math_id": 42,
"text": "P_c^p(z_0)"
},
{
"math_id": 43,
"text": "P_c^{ 0}(z) = z_0"
},
{
"math_id": 44,
"text": "z_0"
},
{
"math_id": 45,
"text": "P_c^{ 0}(z) = c"
},
{
"math_id": 46,
"text": "z_0 = P_c^p(z_0)"
},
{
"math_id": 47,
"text": "\\frac{\\partial}{\\partial{c}}\\frac{\\partial}{\\partial{z}}P_c^p(z_0)"
},
{
"math_id": 48,
"text": "\\frac{\\partial}{\\partial{z}}\\frac{\\partial}{\\partial{z}}P_c^p(z_0)"
},
{
"math_id": 49,
"text": "\\frac{\\partial}{\\partial{c}}P_c^p(z_0)"
},
{
"math_id": 50,
"text": "\\frac{\\partial}{\\partial{z}}P_c^p(z_0)"
},
{
"math_id": 51,
"text": "P_c^p(z)"
},
{
"math_id": 52,
"text": " p = \\sqrt{ \\left(x - \\frac{1}{4}\\right)^2 + y^2} "
},
{
"math_id": 53,
"text": " x \\leq p - 2p^2 + \\frac{1}{4} "
},
{
"math_id": 54,
"text": " (x+1)^2 + y^2 \\leq \\frac{1}{16} "
},
{
"math_id": 55,
"text": " q = \\left(x - \\frac{1}{4}\\right)^2 + y^2, "
},
{
"math_id": 56,
"text": " q \\left(q + \\left(x - \\frac{1}{4}\\right)\\right) \\leq \\frac{1}{4}y^2. "
},
{
"math_id": 57,
"text": " z_{n+1} = z_n^2 + c "
},
{
"math_id": 58,
"text": " (z_n + \\epsilon)^2 + (c + \\delta) = z_n^2 + 2z_n\\epsilon + \\epsilon^2 + c + \\delta, "
},
{
"math_id": 59,
"text": " = z_{n+1} + 2z_n\\epsilon + \\epsilon^2 + \\delta, "
},
{
"math_id": 60,
"text": " \\epsilon_{n+1} = 2z_n\\epsilon_n + \\epsilon_n^2 + \\delta, "
},
{
"math_id": 61,
"text": " c "
},
{
"math_id": 62,
"text": " z_n "
},
{
"math_id": 63,
"text": " c + \\delta "
},
{
"math_id": 64,
"text": " z'_n "
},
{
"math_id": 65,
"text": " z'_{n} = z_{n} + \\epsilon_{n} "
},
{
"math_id": 66,
"text": " \\epsilon_{1} = \\delta "
},
{
"math_id": 67,
"text": " \\epsilon_n "
},
{
"math_id": 68,
"text": " z'_{n+1} = {z'_n}^2 + (c + \\delta) "
},
{
"math_id": 69,
"text": " z'_{n+1} = (z_n + \\epsilon_n)^2 + c + \\delta "
},
{
"math_id": 70,
"text": " z'_{n+1} = {z_n}^2 + c + 2z_n\\epsilon_n + {\\epsilon_n}^2 + \\delta"
},
{
"math_id": 71,
"text": " z'_{n+1} = z_{n+1} + 2z_n\\epsilon_n + {\\epsilon_n}^2 + \\delta"
},
{
"math_id": 72,
"text": " z'_{n+1} = z_{n+1} + \\epsilon_{n+1} "
},
{
"math_id": 73,
"text": " \\epsilon_{n+1} = 2z_n\\epsilon_n + {\\epsilon_n}^2 + \\delta "
},
{
"math_id": 74,
"text": " \\delta "
},
{
"math_id": 75,
"text": " \\epsilon_{n} "
},
{
"math_id": 76,
"text": " \\epsilon_0 "
},
{
"math_id": 77,
"text": " \\epsilon_n = A_{n}\\delta + B_{n}\\delta^2 + C_{n}\\delta^3 + \\dotsc "
},
{
"math_id": 78,
"text": " A_{1} = 1, B_{1} = 0, C_{1} = 0, \\dotsc "
},
{
"math_id": 79,
"text": " \\epsilon "
},
{
"math_id": 80,
"text": " \\epsilon_{n+1} = 2z_n(A_n\\delta + B_n\\delta^2 + C_n\\delta^3 + \\dotsc) + (A_n\\delta + B_n\\delta^2 + C_n\\delta^3 + \\dotsc)^2 + \\delta "
},
{
"math_id": 81,
"text": " \\epsilon_{n+1} = (2z_nA_n+1)\\delta + (2z_nB_n + {A_n}^2)\\delta^2 + (2z_nC_n + 2A_nB_n)\\delta^3 + \\dotsc "
},
{
"math_id": 82,
"text": " A_{n+1} = 2z_nA_n + 1 "
},
{
"math_id": 83,
"text": " B_{n+1} = 2z_nB_n + {A_n}^2 "
},
{
"math_id": 84,
"text": " C_{n+1} = 2z_nC_n + 2A_nB_n "
},
{
"math_id": 85,
"text": " \\vdots "
},
{
"math_id": 86,
"text": " z "
},
{
"math_id": 87,
"text": " \\Delta n "
}
] |
https://en.wikipedia.org/wiki?curid=63087276
|
63087895
|
Energy-based model
|
Approach in generative models
An energy-based model (EBM) (also called Canonical Ensemble Learning (CEL) or Learning via Canonical Ensemble (LCE)) is an application of the canonical ensemble formulation of statistical physics to learning from data. The approach appears prominently in generative models (GMs).
EBMs provide a unified framework for many probabilistic and non-probabilistic approaches to such learning, particularly for training graphical and other structured models.
An EBM learns the characteristics of a target dataset and generates a similar but larger dataset. EBMs detect the latent variables of a dataset and generate new datasets with a similar distribution.
Energy-based generative neural networks are a class of generative models that aim to learn explicit probability distributions of data in the form of energy-based models, whose energy functions are parameterized by modern deep neural networks.
Boltzmann machines are a special form of energy-based models with a specific parametrization of the energy.
Description.
For a given input formula_0, the model describes an energy formula_1 such that the Boltzmann distribution formula_2 is a probability (density) and typically formula_3.
Since the normalization constant formula_4, also known as the partition function, depends on all the Boltzmann factors of all possible inputs formula_0, it cannot be easily computed or reliably estimated during training simply by using standard maximum likelihood estimation.
However, to maximize the likelihood during training, the gradient of the log-likelihood of a single training example formula_0 is given, using the chain rule, by
formula_5
The expectation in the above formula for the gradient can be "approximately estimated" by drawing samples formula_6 from the distribution formula_7 using Markov chain Monte Carlo (MCMC).
Early energy-based models, such as the 2003 Boltzmann machine by Hinton, estimated this expectation using a block Gibbs sampler. Newer approaches make use of more efficient stochastic gradient Langevin dynamics (LD), drawing samples using:
formula_8
and formula_9. A replay buffer of past values formula_10 is used with LD to initialize the optimization module.
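A minimal Python/NumPy sketch of this Langevin sampling step is shown below, using a toy quadratic energy with a closed-form gradient in place of a neural network; the step size, number of steps, and initialization from a standard normal prior are assumptions of the example.
import numpy as np

def energy_grad(x):
    return x                                  # gradient of the toy energy E(x) = 0.5 * ||x||^2

def langevin_sample(x0, n_steps=100, alpha=0.01, seed=0):
    # x_{i+1} = x_i - (alpha/2) * dE/dx + noise, with noise of variance alpha
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.normal(0.0, np.sqrt(alpha), size=x.shape)
        x = x - 0.5 * alpha * energy_grad(x) + noise
    return x

x0 = np.random.default_rng(1).normal(size=2)  # initial sample from a simple prior P_0
print(langevin_sample(x0))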
The parameters formula_11 of the neural network are, therefore, trained in a generative manner by MCMC-based maximum likelihood estimation:
The learning process follows an "analysis by synthesis" scheme, where within each learning iteration, the algorithm samples the synthesized examples from the current model by a gradient-based MCMC method, e.g., Langevin dynamics or Hybrid Monte Carlo, and then updates the model parameters formula_11 based on the difference between the training examples and the synthesized ones, see equation formula_12.
This process can be interpreted as an alternating mode seeking and mode shifting process, and also has an adversarial interpretation.
In the end, the model learns a function formula_13 that associates low energies to correct values, and higher energies to incorrect values.
After training, given a converged energy model formula_13, the Metropolis–Hastings algorithm can be used to draw new samples.
The acceptance probability is given by:
formula_14
History.
The term "energy-based models" was first coined in a 2003 JMLR paper where the authors defined a generalisation of independent components analysis to the overcomplete setting using EBMs.
Other early work on EBMs proposed models that represented energy as a composition of latent and observable variables.
Characteristics.
EBMs demonstrate useful properties:
Experimental results.
On image datasets such as CIFAR-10 and ImageNet 32x32, an EBM model generated high-quality images relatively quickly. It supported combining features learned from one type of image to generate other types of images. It was able to generalize using out-of-distribution datasets, outperforming flow-based and autoregressive models. EBMs were relatively resistant to adversarial perturbations, behaving better than models explicitly trained against them with training for classification.
Applications.
Target applications include natural language processing, robotics and computer vision.
The first energy-based generative neural network is the generative ConvNet, proposed in 2016 for image patterns, where the neural network is a convolutional neural network. The model has been generalized to various domains to learn distributions of videos and 3D voxels, and has been made more effective in its variants. These models have proven useful for data generation (e.g., image synthesis, video synthesis, 3D shape synthesis, etc.), data recovery (e.g., recovering videos with missing pixels or image frames, 3D super-resolution, etc.), and data reconstruction (e.g., image reconstruction and linear interpolation).
Alternatives.
EBMs compete with techniques such as variational autoencoders (VAEs), generative adversarial networks (GANs) or normalizing flows.
Extensions.
Joint energy-based models.
Joint energy-based models (JEM), proposed in 2020 by Grathwohl et al., allow any classifier with softmax output to be interpreted as an energy-based model. The key observation is that such a classifier is trained to predict the conditional probability formula_15
where formula_16 is the y-th index of the logits formula_17 corresponding to class y.
Without any change to the logits, it was proposed to reinterpret them as describing a joint probability density:
formula_18
with unknown partition function formula_19 and energy formula_20.
By marginalization, we obtain the unnormalized density
formula_21
therefore,
formula_22
so that any classifier can be used to define an energy function formula_1.
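As an illustration, here is a minimal Python sketch of reading these energies off a vector of logits; the logit values are placeholders, logsumexp (from SciPy) is used for numerical stability, and the unknown log Z(theta) term, which only shifts formula_1 by a constant, is omitted.
import numpy as np
from scipy.special import logsumexp

def joint_energy(logits, y):
    return -logits[y]                          # E_theta(x, y) = -f_theta(x)[y]

def marginal_energy(logits):
    # E_theta(x) = -log sum_y exp(f_theta(x)[y]), up to an additive constant
    return -logsumexp(logits)

def class_probabilities(logits):
    # softmax p_theta(y | x), unaffected by the partition function
    return np.exp(logits - logsumexp(logits))

logits = np.array([2.0, -1.0, 0.5])            # placeholder logits f_theta(x)
print(marginal_energy(logits), class_probabilities(logits))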
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "x"
},
{
"math_id": 1,
"text": "E_\\theta(x)"
},
{
"math_id": 2,
"text": "P_\\theta(x)=\\exp(-\\beta E_\\theta(x))/Z(\\theta)"
},
{
"math_id": 3,
"text": "\\beta=1"
},
{
"math_id": 4,
"text": "Z(\\theta):=\\int_{x \\in X} dx \\exp(-\\beta E_\\theta(x))"
},
{
"math_id": 5,
"text": "\\partial_\\theta \\log\\left(P_\\theta(x)\\right)=\\mathbb{E}_{x'\\sim P_\\theta}[\\partial_\\theta E_\\theta(x')]-\\partial_\\theta E_\\theta(x) \\, (*)"
},
{
"math_id": 6,
"text": "x'"
},
{
"math_id": 7,
"text": "P_\\theta"
},
{
"math_id": 8,
"text": "x_0' \\sim P_0, x_{i+1}' = x_i' - \\frac{\\alpha}{2}\\frac{\\partial E_\\theta(x_i') }{\\partial x_i'} +\\epsilon,"
},
{
"math_id": 9,
"text": "\\epsilon \\sim \\mathcal{N}(0,\\alpha)"
},
{
"math_id": 10,
"text": "x_i'"
},
{
"math_id": 11,
"text": "\\theta"
},
{
"math_id": 12,
"text": "(*)"
},
{
"math_id": 13,
"text": "E_\\theta"
},
{
"math_id": 14,
"text": "P_{acc}(x_i \\to x^*)=\\min\\left(1, \\frac{P_\\theta(x^*)}{P_\\theta(x_i)}\\right)."
},
{
"math_id": 15,
"text": "p_\\theta(y | x)=\\frac{e^{\\vec{f}_\\theta(x)[y]}}{\\sum_{j=1}^K e^{\\vec{f}_\\theta(x)[j]}} \\ \\ \\text{ for } y = 1, \\dotsc, K \\text{ and } \\vec{f}_\\theta = (f_1, \\dotsc, f_K) \\in \\R^K,"
},
{
"math_id": 16,
"text": "\\vec{f}_\\theta(x)[y]"
},
{
"math_id": 17,
"text": "\\vec{f}"
},
{
"math_id": 18,
"text": "p_\\theta(y,x)=\\frac{e^{\\vec{f}_\\theta(x)[y]}}{Z(\\theta)},"
},
{
"math_id": 19,
"text": "Z(\\theta)"
},
{
"math_id": 20,
"text": "E_\\theta (x, y)=-f_\\theta(x)[y]"
},
{
"math_id": 21,
"text": "p_\\theta(x)=\\sum_y p_\\theta(y,x)= \\sum_y \\frac{e^{\\vec{f}_\\theta(x)[y]}}{Z(\\theta)}=:\\exp(-E_\\theta(x)),"
},
{
"math_id": 22,
"text": "E_\\theta(x)=-\\log\\left(\\sum_y \\frac{e^{\\vec{f}_\\theta(x)[y]}}{Z(\\theta)}\\right),"
}
] |
https://en.wikipedia.org/wiki?curid=63087895
|
63090080
|
Using the Borsuk–Ulam Theorem
|
Using the Borsuk–Ulam Theorem: Lectures on Topological Methods in Combinatorics and Geometry is a graduate-level mathematics textbook in topological combinatorics. It describes the use of results in topology, and in particular the Borsuk–Ulam theorem, to prove theorems in combinatorics and discrete geometry. It was written by Czech mathematician Jiří Matoušek, and published in 2003 by Springer-Verlag in their Universitext series ().
Topics.
The topic of the book is part of a relatively new field of mathematics crossing between topology and combinatorics, now called topological combinatorics. The starting point of the field, and one of the central inspirations for the book, was a proof that László Lovász published in 1978 of a 1955 conjecture by Martin Kneser, according to which the Kneser graphs formula_0 have no graph coloring with formula_1 colors. Lovász used the Borsuk–Ulam theorem in his proof, and Matoušek gathers many related results, published subsequently, to show that this connection between topology and combinatorics is not just a proof trick but an area of research in its own right.
The book has six chapters. After two chapters reviewing the basic notions of algebraic topology, and proving the Borsuk–Ulam theorem, the applications to combinatorics and geometry begin in the third chapter, with topics including the ham sandwich theorem, the necklace splitting problem, Gale's lemma on points in hemispheres, and several results on colorings of Kneser graphs. After another chapter on more advanced topics in equivariant topology, two more chapters of applications follow, separated according to whether the equivariance is modulo two or using a more complicated group action. Topics in these chapters include the van Kampen–Flores theorem on embeddability of skeletons of simplices into lower-dimensional Euclidean spaces, and topological and multicolored variants of Radon's theorem and Tverberg's theorem on partitions into subsets with intersecting convex hulls.
Audience and reception.
The book is written at a graduate level, and has exercises making it suitable as a graduate textbook. Some knowledge of topology would be helpful for readers but is not necessary. Reviewer Mihaela Poplicher writes that it is not easy to read, but is "very well written, very interesting, and very informative". And reviewer Imre Bárány writes that "The book is well written, and the style is lucid and pleasant, with plenty of illustrative examples."
Matoušek intended this material to become part of a broader textbook on topological combinatorics, to be written jointly by him, Anders Björner, and Günter M. Ziegler. However, this was not completed before Matoušek's untimely death in 2015.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "KG_{2n+k,n}"
},
{
"math_id": 1,
"text": "k+1"
}
] |
https://en.wikipedia.org/wiki?curid=63090080
|
63092514
|
SUSPUP and SUSPPUP
|
SUSPUP (serum sodium to urinary sodium to serum potassium to urinary potassium) and SUSPPUP (serum sodium to urinary sodium to (serum potassium)2 to urinary potassium) are calculated structure parameters of the renin–angiotensin-aldosterone system (RAAS). They have been developed to support screening for primary or secondary aldosteronism.
Physiological principle.
The steroid hormone aldosterone stimulates the reabsorption of sodium and the excretion of potassium in the distal tubuli and the collecting tubes of the kidneys. Calculating SUSPUP and/or SUSPPUP helps to determine the intensity of mineralocorticoid signalling, which may be helpful in the differential diagnosis of hypertension and hypokalaemia.
Preconditions of testing.
Sodium and potassium concentrations have to be determined in serum and spot urine samples that have been obtained simultaneously or within a short time interval of each other.
Calculation.
formula_0
formula_1
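A minimal Python sketch of the two calculations, following the definitions above, assuming all four concentrations are given in the same units (e.g. mmol/L) and using illustrative values rather than patient data:
def suspup(na_serum, na_urine, k_serum, k_urine):
    return (na_serum / na_urine) / (k_serum / k_urine)

def susppup(na_serum, na_urine, k_serum, k_urine):
    return (na_serum / na_urine) / (k_serum ** 2 / k_urine)

print(suspup(140, 100, 4.0, 50))    # 17.5, within the reference range below
print(susppup(140, 100, 4.0, 50))   # 4.375, within the reference range below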
Interpretation.
Reference ranges are 3.6–22.6 for SUSPUP and 0.6–5.3 for SUSPPUP.
Increased values support the hypothesis of increased mineralocorticoid stimulation of the distal tubules and collecting tubes, i.e. in cases of hyperaldosteronism. While these parameters have a high sensitivity for screening purposes, their specificity may be inferior compared to the aldosterone-to-renin ratio (ARR) and potassium concentrations.
Both parameters may also be elevated in syndrome of inappropriate ADH secretion (SIADH), probably reflecting a compensatory mechanism, where the organism tries to maintain serum sodium concentrations by means of increased renin and/or aldosterone secretion.
|
[
{
"math_id": 0,
"text": "SUSPUP = \\frac{\\frac{{{[Na}^{+}]_{Serum}}}{{[Na}^{+}]_{Urine}}}{\\frac{{[K}^{+}]_{Serum}}{{[Na}^{+}]_{Urine}}}"
},
{
"math_id": 1,
"text": "SUSPPUP = \\frac{\\frac{{{[Na}^{+}]_{Serum}}}{{[Na}^{+}]_{Urine}}}{\\frac{{[K}^{+}]^2_{Serum}}{{[Na}^{+}]_{Urine}}}"
}
] |
https://en.wikipedia.org/wiki?curid=63092514
|
63095134
|
Spaced seed
|
In bioinformatics, a spaced seed is a pattern of relevant and irrelevant positions in a biosequence and a method of approximate string matching that allows for substitutions. They are a straightforward modification to the earliest heuristic-based alignment efforts that allow for minor differences between the sequences of interest. Spaced seeds have been used in homology search, alignment, assembly, and metagenomics. They are usually represented as a sequence of zeroes and ones, where a one indicates relevance and a zero indicates irrelevance at the given position. Some visual representations use pound signs for relevant and dashes or asterisks for irrelevant positions.
Principle.
Due to a number of functional and evolutionary constraints, nucleic acid sequences between individuals tend to be highly conserved, with the typical difference between two human genomes estimated on the order of 0.6% (or around 20 million base pairs). Identification of highly similar regions in the genome may indicate functional importance, as mutations in these areas that would result in cessation of function or loss of regulatory ability would be evolutionarily unfavorable. Additional observed differences between two sequences may arise as a result of stochastic sequencing errors. Similarly, when performing assembly of a previously characterized genome, an attempt is made to align the newly sequenced DNA fragments to the existing genome sequence.
In both cases, it is useful to be able to directly compare nucleic acid sequences. Since the sequences are not expected to be exactly identical, however, it is beneficial to focus on smaller subsequences that are more likely to be locally identical. Spaced seeds allow for even more permissive local matching by allowing certain base pairs (defined by the pattern of the specific spaced seed) to mismatch without penalty, thus allowing algorithms that use the general "hit-extend" strategy of alignment to explore additional potential matches that would be otherwise ignored.
Example.
As a simple example, consider the following DNA sequences:
CTAAGTCACG
CTAACACACG
1111001111
Upon visual inspection, it is easy to see that there is a mismatch between the two sequences at the fifth and sixth base positions (in bold, above). However, the sequences still share 80% sequence similarity. The mismatches may be due to a real (biological) change or a sequencing error. In a non-spaced model, this putative match would be ignored if a seed size greater than 4 is specified. But a spaced seed of formula_0 could be used to effectively zero-weight the mismatch sites, treating the sequences as the same for the purposes of hit identification. In reality, of course, we don't know the relative positioning of the "true" mismatches, so there can be different spaced seed patterns depending on where the mismatches are anticipated.
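A minimal Python sketch of spaced-seed matching for the example above: positions marked '1' in the seed must match exactly, while positions marked '0' are ignored.
def spaced_seed_hit(seq_a, seq_b, seed):
    # True if the two sequences agree at every relevant ('1') seed position
    return all(a == b for a, b, s in zip(seq_a, seq_b, seed) if s == "1")

seed = "1111001111"
print(spaced_seed_hit("CTAAGTCACG", "CTAACACACG", seed))       # True: counted as a hit
print(spaced_seed_hit("CTAAGTCACG", "CTAACACACG", "1" * 10))   # False with a contiguous seed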
History.
The concept of spaced seeds has been explored in the literature under different names. One of the early uses was in sequence homology, where the FLASH algorithm from 1993 referred to them as "non-contiguous sub-sequences of tokens" that were generated from all combinations of positions within a sequence window. In 1995, a similar concept was used in approximate string matching, where "gapped tuples" of positions in a sequence were explored to identify common substrings between a large text and a query. The term "shape" was used in a 2001 paper to describe gapped q-grams, where it refers to a set of relevant positions in a substring, and soon after, in 2002, PatternHunter introduced the "spaced model", which was proposed as an improvement upon the consecutive seeds used in BLAST and was ultimately adopted by newer versions of it. Finally, in 2003, PatternHunter II settled on the term "spaced seed" to refer to the approach used in PatternHunter.
Spaced versus Non-Spaced Models.
Popular alignment algorithms such as BLAST and MegaBLAST use a non-spaced model, where the entire length of the seed is made of exact matches. Thus, any mismatching base pair along the length of the seed will result in the program ignoring the potential hit. In a spaced model (such as PatternHunter), the matches are not necessarily consecutive.
More formally, the difference in a spaced seed model as compared to a non-spaced model is the relative positioning (otherwise known as weight, formula_1) of the matched bases. In a non-spaced model, the length of the seed model, formula_2 and the weight, formula_1 of the seeds are the same, as they must be consecutive while in a spaced model, the weight is not necessarily equal to the length of the seed model, since match positions may be non-consecutive. Therefore, a spaced seed model may be longer than a non-spaced seed model but have the same weight. For example, a non-spaced seed formula_3 has the same weight as a spaced model formula_4, but their lengths differ.
The predicted number of hits can be calculated from PatternHunter using the following lemma:
formula_5
where formula_6 is the length of the sequence the model is compared to, formula_2 is the length of the seed model, formula_7 is the probability of a match, and formula_1 is the weight of the seed used.
Applications.
Homology Search.
The type of seed model used for sequence alignment can affect the processing time and memory usage when doing large-scale homology searches – two considerations that have been central in the development of modern homology search algorithms. It may also affect the sensitivity. Using spaced seed models has been demonstrated to allow for faster homology searches as seen with PatternHunter wherein homology searches were twenty times faster and used less memory than BLASTn (using a non-spaced model).
Sequence Alignment.
Most aligners first find candidate locations (or seeds) in the target sequence and then inspect those more closely to verify the alignment. Ideally, this first step would find all relevant locations in the target so sensitivity is prioritized but due to computational intensity, many popular algorithms (such as the earlier implementations of BLAST and FASTA) use heuristics to "short-cut" exploring all locations, ultimately missing many but running relatively quickly. One possible way to increase sensitivity, as done in the SHRiMP2 algorithm, is to use spaced seeds to allow for small differences between the query and the candidate locations so that somewhat more locations are identified as candidates. SHRiMP2 specifically uses multiple spaced seeds for this and requires multiple matches, increasing sensitivity as it allows different possible combinations of differences while maintaining speed comparable to original methods.
Sequence Assembly.
A variation of spaced seeds with a single contiguous gap has been used in "de novo" sequence assembly. In this instance, the design has an equal number of ones at either end of the sequence with a run of zeroes in between. The reasoning behind this design is that in assemblers that utilize De Bruijn graphs, increasing k-mer size inflates memory usage, as k-mers are more likely to be unique. However, the most important parts of a k-mer are its ends, as they are what are used to extend sequences in a graph. Thus, to circumvent the problem with memory usage, the less-important middle part (covered by the gap) is ignored. This approach has the additional advantage, as in other uses of spaced seeds, of taking into account any sequencing errors that may have occurred in the gap area. It has been noted that increasing the length of the gap also increases the uniqueness of k-mers in both "E. coli" and "H. sapiens" genomes.
Metagenomics.
A metagenomics study will commonly start with the high-throughput sequencing of a mixture of distinct species (e.g. from the human gut), yielding a set of sequences but with unknown origins. As such, one common goal is to identify which genome each sequence is phylogenetically most similar to. One approach could be to take k-mers from each sequence and see which, in a set of genomes, it has most sequence similarity with. Spaced seeds have been successfully utilized for this purpose by finding how many k-mers are found in each genome (hit number) and the total number of positions these k-mers cover (coverage).
Multiple Spaced Seeds.
An improvement to (single) spaced seeds was first demonstrated by PatternHunter in 2002, where a set of spaced seeds was used, in which a "hit" was called whenever one of the set matched. PatternHunter II, in 2003, demonstrated that this approach could offer higher sensitivity than BLAST while maintaining similar speed. Identifying an optimal set of spaced seeds is an NP-hard problem, and even finding a "good" set of spaced seeds remains difficult, although several attempts have been made to computationally identify them. Since the speed of the algorithm must decrease with an increasing number of spaced seeds, it makes sense to only consider multiple seeds when all offer some useful contribution. There is ongoing research on how to quickly calculate good multiple spaced seeds, as previous homology search software calculated and hard-coded their seeds – it would be advantageous to be able to calculate purpose-driven multiple spaced seeds instead.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "1111001111"
},
{
"math_id": 1,
"text": "k"
},
{
"math_id": 2,
"text": "M"
},
{
"math_id": 3,
"text": "11111"
},
{
"math_id": 4,
"text": "1101101"
},
{
"math_id": 5,
"text": "(L-M+1)*p^k"
},
{
"math_id": 6,
"text": "L"
},
{
"math_id": 7,
"text": "p"
}
] |
https://en.wikipedia.org/wiki?curid=63095134
|
6309551
|
Hydrophilic-lipophilic balance
|
Measure of the degree to which a surface-active agent is hydrophilic or lipophilic
The hydrophilic–lipophilic balance (HLB) of a surfactant is a measure of its degree of hydrophilicity or lipophilicity, determined by calculating percentages of molecular weights for the hydrophilic and lipophilic portions of the surfactant molecule, as described by Griffin in 1949 and 1954. Other methods have been suggested, notably in 1957 by Davies.
Griffin's method.
Griffin's method for non-ionic surfactants as described in 1954 works as follows:
formula_0
where formula_1 is the molecular mass of the hydrophilic portion of the molecule, and M is the molecular mass of the whole molecule, giving a result on a scale of 0 to 20.
An HLB value of 0 corresponds to a completely lipophilic/hydrophobic molecule, and a value of 20 corresponds to a completely hydrophilic/lipophobic molecule.
The HLB value can be used to predict the surfactant properties of a molecule:
Davies' method.
In 1957, Davies suggested a method based on calculating a value based on the chemical groups of the molecule. The advantage of this method is that it takes into account the effect of stronger and weaker hydrophilic groups. The method works as follows:
formula_2
where:
formula_3 - Number of hydrophilic groups in the molecule
formula_4 - Value of the formula_5th hydrophilic group (see tables)
formula_6 - Number of lipophilic groups in the molecule
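A minimal Python sketch of Davies' group-contribution calculation is given below; the handful of hydrophilic group values included are illustrative figures commonly quoted for Davies' tables, and should be checked against the original tables before use.
HYDROPHILIC_GROUP_VALUES = {      # H_i values (illustrative, assumed)
    "-COOH": 2.1,
    "-OH (free)": 1.9,
    "-O-": 1.3,
    "ester (free)": 2.4,
}
LIPOPHILIC_GROUP_VALUE = 0.475    # per -CH-, -CH2-, -CH3 group

def davies_hlb(hydrophilic_groups, n_lipophilic):
    # HLB = 7 + sum of hydrophilic group values - 0.475 * number of lipophilic groups
    total = sum(HYDROPHILIC_GROUP_VALUES[g] for g in hydrophilic_groups)
    return 7 + total - LIPOPHILIC_GROUP_VALUE * n_lipophilic

print(davies_hlb(["-OH (free)"], 12))   # 7 + 1.9 - 5.7 = 3.2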
|
[
{
"math_id": 0,
"text": "HLB = 20 * M_h / M"
},
{
"math_id": 1,
"text": "M_h"
},
{
"math_id": 2,
"text": "HLB = 7 + \\sum_{i \\mathop=1}^{m}H_i - n \\times 0.475"
},
{
"math_id": 3,
"text": "m"
},
{
"math_id": 4,
"text": "H_i"
},
{
"math_id": 5,
"text": "i"
},
{
"math_id": 6,
"text": "n"
}
] |
https://en.wikipedia.org/wiki?curid=6309551
|
631045
|
Smith number
|
Type of composite integer
In number theory, a Smith number is a composite number for which, in a given number base, the sum of its digits is equal to the sum of the digits in its prime factorization in the same base. In the case of numbers that are not square-free, the factorization is written without exponents, writing the repeated factor as many times as needed.
Smith numbers were named by Albert Wilansky of Lehigh University, as he noticed the property in the phone number (493-7775) of his brother-in-law Harold Smith:
4937775 = 3 · 5 · 5 · 65837
while
4 + 9 + 3 + 7 + 7 + 7 + 5 = 3 + 5 + 5 + (6 + 5 + 8 + 3 + 7)
in base 10.
Mathematical definition.
Let formula_0 be a natural number. For base formula_1, let the function formula_2 be the digit sum of formula_0 in base formula_3. A natural number formula_0 with prime factorisation
formula_4
is a Smith number if
formula_5
Here the exponent formula_6 is the multiplicity of formula_7 as a prime factor of formula_0 (also known as the "p"-adic valuation of formula_0).
For example, in base 10, 378 = 2^1 · 3^3 · 7^1 is a Smith number since 3 + 7 + 8 = 2 · 1 + 3 · 3 + 7 · 1, and 22 = 2^1 · 11^1 is a Smith number, because 2 + 2 = 2 · 1 + (1 + 1) · 1.
The first few Smith numbers in base 10 are
4, 22, 27, 58, 85, 94, 121, 166, 202, 265, 274, 319, 346, 355, 378, 382, 391, 438, 454, 483, 517, 526, 535, 562, 576, 588, 627, 634, 636, 645, 648, 654, 663, 666, 690, 706, 728, 729, 762, 778, 825, 852, 861, 895, 913, 915, 922, 958, 985. (sequence in the OEIS)
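A minimal Python sketch of the base-10 definition, using trial division for the factorization; it reproduces the start of the list above.
def digit_sum(n):
    return sum(int(d) for d in str(n))

def prime_factors(n):
    # prime factors of n with multiplicity, by trial division
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def is_smith(n):
    factors = prime_factors(n)
    if len(factors) < 2:          # n must be composite, so primes are excluded
        return False
    return digit_sum(n) == sum(digit_sum(p) for p in factors)

print([n for n in range(2, 200) if is_smith(n)])   # [4, 22, 27, 58, 85, 94, 121, 166]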
Properties.
W.L. McDaniel in 1987 proved that there are infinitely many Smith numbers.
The number of Smith numbers in base 10 below 10"n" for "n" = 1, 2, ... is given by
1, 6, 49, 376, 3294, 29928, 278411, 2632758, 25154060, 241882509, ... (sequence in the OEIS).
Two consecutive Smith numbers (for example, 728 and 729, or 2964 and 2965) are called Smith brothers. It is not known how many Smith brothers there are. The starting elements of the smallest Smith "n"-tuple (meaning "n" consecutive Smith numbers) in base 10 for "n" = 1, 2, ... are
4, 728, 73615, 4463535, 15966114, 2050918644, 164736913905, ... (sequence in the OEIS).
Smith numbers can be constructed from factored repunits. As of 2010, the largest known Smith number in base 10 is
9 × R_1031 × (10^4594 + 3×10^2297 + 1)^1476 × 10^3913210
where R_1031 is the base-10 repunit (10^1031 − 1)/9.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "b > 1"
},
{
"math_id": 2,
"text": "F_b(n)"
},
{
"math_id": 3,
"text": "b"
},
{
"math_id": 4,
"text": "\nn = \\prod_{\\stackrel{p \\mid n,}{p\\text{ prime}}} p^{v_p(n)}\n"
},
{
"math_id": 5,
"text": "\nF_b(n) = \\sum_{{\\stackrel{p \\mid n,}{p\\text{ prime}}}} v_p(n) F_b(p).\n"
},
{
"math_id": 6,
"text": "v_p(n)"
},
{
"math_id": 7,
"text": "p"
}
] |
https://en.wikipedia.org/wiki?curid=631045
|
63114331
|
Romanov's theorem
|
Theorem on the set of numbers that are the sum of a prime and a positive integer power of the base
In mathematics, specifically additive number theory, Romanov's theorem is a mathematical theorem proved by Nikolai Pavlovich Romanov. It states that given a fixed base b, the set of numbers that are the sum of a prime and a positive integer power of b has a positive lower asymptotic density.
Statement.
Romanov initially stated that he had proven the statements "In jedem Intervall (0, x) liegen mehr als ax Zahlen, welche als Summe von einer Primzahl und einer k-ten Potenz einer ganzen Zahl darstellbar sind, wo a eine gewisse positive, nur von k abhängige Konstante bedeutet" and "In jedem Intervall (0, x) liegen mehr als bx Zahlen, weiche als Summe von einer Primzahl und einer Potenz von a darstellbar sind. Hier ist a eine gegebene ganze Zahl und b eine positive Konstante, welche nur von a abhängt". These statements translate to "In every interval formula_0 there are more than formula_1 numbers which can be represented as the sum of a prime number and a k-th power of an integer, where formula_2 is a certain positive constant that is only dependent on k" and "In every interval formula_0 there are more than formula_3 numbers which can be represented as the sum of a prime number and a power of a. Here a is a given integer and formula_4 is a positive constant that only depends on a" respectively. The second statement is generally accepted as the Romanov's theorem, for example in Nathanson's book.
Precisely, let formula_5 and let formula_6, formula_7. Then Romanov's theorem asserts that formula_8.
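A minimal Python sketch of estimating d(x) empirically for the base-2 case is shown below; trial division is used for primality, and whether k = 0 is allowed (the statement above uses positive powers, so k starts at 1 by default) is left as an explicit choice.
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def d(x, min_k=1):
    # proportion of n <= x that can be written as p + 2^k with p prime, k >= min_k
    count = 0
    for n in range(1, x + 1):
        k = min_k
        while 2 ** k < n:
            if is_prime(n - 2 ** k):
                count += 1
                break
            k += 1
    return count / x

print(d(10_000))   # an empirical estimate of the density for x = 10000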
History.
Alphonse de Polignac wrote in 1849 that every odd number larger than 3 can be written as the sum of an odd prime and a power of 2. (He soon noticed a counterexample, namely 959.) This corresponds to the case of formula_9 in the original statement. The counterexample of 959 was, in fact, also mentioned in Euler's letter to Christian Goldbach, but they were working in the opposite direction, trying to find odd numbers that cannot be expressed in the form.
In 1934, Romanov proved the theorem. The positive constant formula_4 mentioned in the case formula_9 was later known as "Romanov's constant". Various estimates of the constant, as well as of formula_10, have been made. The history of such refinements is listed below. In particular, since formula_10 is shown to be less than 0.5, this implies that the odd numbers that cannot be expressed in this way have positive lower asymptotic density.
<templatestyles src="Reflist/styles.css" />
Generalisations.
Analogous results to Romanov's theorem have been proven in number fields by Riegel in 1961. In 2015, the theorem was also proven for polynomials in finite fields. Also in 2015, an arithmetic progression of Gaussian integers that are not expressible as the sum of a Gaussian prime and a power of 1+i was given.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "(0,x)"
},
{
"math_id": 1,
"text": "\\alpha x"
},
{
"math_id": 2,
"text": "\\alpha"
},
{
"math_id": 3,
"text": "\\beta x"
},
{
"math_id": 4,
"text": "\\beta"
},
{
"math_id": 5,
"text": "d(x)=\\frac{\\left\\vert \\{n\\le x:n=p+2^k,p\\ \\textrm{prime,}\\ k\\in\\N\\} \\right\\vert}{x}"
},
{
"math_id": 6,
"text": "\\underline{d}=\\liminf_{x\\to\\infty}d(x)"
},
{
"math_id": 7,
"text": "\\overline{d}=\\limsup_{x\\to\\infty}d(x)"
},
{
"math_id": 8,
"text": "\\underline{d}>0"
},
{
"math_id": 9,
"text": "a=2"
},
{
"math_id": 10,
"text": "\\overline{d}"
}
] |
https://en.wikipedia.org/wiki?curid=63114331
|
63115362
|
Filter quantifier
|
In mathematics, a filter on a set formula_0 informally gives a notion of which subsets formula_1 are "large". Filter quantifiers are a type of logical quantifier which, informally, say whether or not a statement is true for "most" elements of formula_2 Such quantifiers are often used in combinatorics, model theory (such as when dealing with ultraproducts), and in other fields of mathematical logic where (ultra)filters are used.
Background.
Here we will use the set theory convention, where a filter formula_3 on a set formula_0 is defined to be an order-theoretic proper filter in the poset formula_4 that is, a subset of formula_5 such that:
Recall a filter formula_3 on formula_0 is an "ultrafilter" if, for every formula_13 either formula_11 or formula_14
Given a filter formula_3 on a set formula_15 we say a subset formula_1 is "formula_3-stationary" if, for all formula_16 we have formula_17
Definition.
Let formula_3 be a filter on a set formula_2 We define the "filter quantifiers" formula_18 and formula_19 as formal logical symbols with the following interpretation:
formula_20
formula_21 is formula_3-stationary
for every first-order formula formula_22 with one free variable. These also admit alternative definitions as
formula_23
formula_24
When formula_3 is an ultrafilter, the two quantifiers defined above coincide, and we will often use the notation formula_25 instead. Verbally, we might pronounce formula_25 as "for formula_3-almost all formula_26", "for formula_3-most formula_26", "for the majority of formula_26 (according to formula_3)", or "for most formula_26 (according to formula_3)". In cases where the filter is clear, we might omit mention of formula_27
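As a small computational illustration (not part of the general theory), the Python sketch below realizes the two quantifiers for the principal filter generated by a set A on a finite ground set, where "for formula_3-most formula_26" reduces to ordinary quantification over A; the particular sets chosen are arbitrary.
from itertools import combinations

X = frozenset(range(8))
A = frozenset({0, 2, 4, 6})

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

F = {B for B in subsets(X) if A <= B}     # principal filter on X generated by A

def forall_F(phi):
    # forall_F x phi(x)  <=>  {x in X : phi(x)} belongs to F
    return frozenset(x for x in X if phi(x)) in F

def exists_F(phi):
    # exists_F x phi(x)  <=>  {x in X : phi(x)} meets every member of F
    truth_set = frozenset(x for x in X if phi(x))
    return all(truth_set & B for B in F)

def is_even(x):
    return x % 2 == 0

print(forall_F(is_even))                                              # True: A consists of even numbers
print(exists_F(lambda x: x == 4))                                     # True: {4} meets every superset of A
print(forall_F(is_even) == (not exists_F(lambda x: not is_even(x))))  # duality of the two quantifiers: True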
Properties.
The filter quantifiers formula_18 and formula_19 satisfy the following logical identities, for all formulae formula_28:
Additionally, if formula_3 is an ultrafilter, the two filter quantifiers coincide: formula_38 Renaming this quantifier formula_39 the following properties hold:
In general, filter quantifiers do not commute with each other, nor with the usual formula_44 and formula_45 quantifiers.
Use.
The utility of filter quantifiers is that they often give a more concise or clear way to express certain mathematical ideas. For example, take the definition of convergence of a real-valued sequence: a sequence formula_71 converges to a point formula_72 if
formula_73
Using the Fréchet quantifier formula_56 as defined above, we can give a nicer (equivalent) definition:
formula_74
Filter quantifiers are especially useful in constructions involving filters. As an example, suppose that formula_0 has a binary operation formula_75 defined on it. There is a natural way to extend formula_75 to formula_76 the set of ultrafilters on formula_0:
formula_77
With an understanding of the ultrafilter quantifier, this definition is reasonably intuitive. It says that formula_78 is the collection of subsets formula_1 such that, for most formula_26 (according to formula_79) and for most formula_80 (according to formula_81), the sum formula_82 is in formula_83 Compare this to the equivalent definition without ultrafilter quantifiers:
formula_84
The meaning of this is much less clear.
This increased intuition is also evident in proofs involving ultrafilters. For example, if formula_75 is associative on formula_15 using the first definition of formula_85 it trivially follows that formula_86 is associative on formula_87 Proving this using the second definition takes a lot more work.
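As a toy check of the quantifier-based definition of the ultrafilter sum above (a minimal sketch, not from the cited sources; it uses principal ultrafilters on the integers modulo 6, and all names are illustrative):
from itertools import combinations

def powerset(X):
    X = list(X)
    return [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

def principal_ultrafilter(X, a):
    """All subsets of X that contain the point a."""
    return {A for A in powerset(X) if a in A}

def most(U, X, phi):
    """Ultrafilter quantifier: 'for U-most x, phi(x)' means {x : phi(x)} is in U."""
    return frozenset(x for x in X if phi(x)) in U

def oplus(U, V, X, add):
    """The sum of ultrafilters, written exactly as in the quantifier definition."""
    return {A for A in powerset(X)
            if most(U, X, lambda x: most(V, X, lambda y: add(x, y) in A))}

n = 6
X = range(n)
add = lambda x, y: (x + y) % n
U, V = principal_ultrafilter(X, 2), principal_ultrafilter(X, 5)
assert oplus(U, V, X, add) == principal_ultrafilter(X, (2 + 5) % n)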
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "A \\subseteq X"
},
{
"math_id": 2,
"text": "X."
},
{
"math_id": 3,
"text": "\\mathcal{F}"
},
{
"math_id": 4,
"text": "(\\mathcal{P}(X), \\subseteq),"
},
{
"math_id": 5,
"text": "\\mathcal{P}(X)"
},
{
"math_id": 6,
"text": "\\varnothing \\notin \\mathcal{F}"
},
{
"math_id": 7,
"text": "X \\in \\mathcal{F}"
},
{
"math_id": 8,
"text": "A, B \\in \\mathcal{F},"
},
{
"math_id": 9,
"text": "A \\cap B \\in \\mathcal{F}"
},
{
"math_id": 10,
"text": "A \\subseteq B \\subseteq X,"
},
{
"math_id": 11,
"text": "A \\in \\mathcal{F}"
},
{
"math_id": 12,
"text": "B \\in \\mathcal{F}."
},
{
"math_id": 13,
"text": "A \\subseteq X,"
},
{
"math_id": 14,
"text": "X \\setminus A \\in \\mathcal{F}."
},
{
"math_id": 15,
"text": "X,"
},
{
"math_id": 16,
"text": "B \\in \\mathcal{F},"
},
{
"math_id": 17,
"text": "A \\cap B \\neq \\varnothing."
},
{
"math_id": 18,
"text": "\\forall_\\mathcal{F} x"
},
{
"math_id": 19,
"text": "\\exists_\\mathcal{F} x"
},
{
"math_id": 20,
"text": "\\forall_\\mathcal{F} x\\ \\varphi(x) \\iff \\{ x \\in X: \\varphi(x) \\} \\in \\mathcal{F}"
},
{
"math_id": 21,
"text": "\\exists_\\mathcal{F} x\\ \\varphi(x) \\iff \\{ x \\in X: \\varphi(x) \\}"
},
{
"math_id": 22,
"text": "\\varphi(x)"
},
{
"math_id": 23,
"text": "\\forall_\\mathcal{F} x\\ \\varphi(x) \\iff \\exists A \\in \\mathcal{F}\\ \\ \\forall x \\in A\\ \\ \\varphi(x)"
},
{
"math_id": 24,
"text": "\\exists_\\mathcal{F} x\\ \\varphi(x) \\iff \\forall A \\in \\mathcal{F}\\ \\ \\exists x \\in A\\ \\ \\varphi(x)"
},
{
"math_id": 25,
"text": "\\mathcal{F} x"
},
{
"math_id": 26,
"text": "x"
},
{
"math_id": 27,
"text": "\\mathcal{F}."
},
{
"math_id": 28,
"text": "\\varphi, \\psi"
},
{
"math_id": 29,
"text": "\\forall_\\mathcal{F} x\\ \\varphi(x) \\iff \\neg \\exists_\\mathcal{F} x\\ \\neg \\varphi(x)"
},
{
"math_id": 30,
"text": "\\forall x\\ \\varphi(x) \\implies \\forall_\\mathcal{F} x\\ \\varphi(x) \\implies \\exists_\\mathcal{F} x\\ \\varphi(x) \\implies \\exists x\\ \\varphi(x)"
},
{
"math_id": 31,
"text": "\\forall_\\mathcal{F} x\\ \\big( \\varphi(x) \\land \\psi(x) \\big) \\iff \\big( \\forall_\\mathcal{F} x\\ \\varphi(x) \\big) \\land \\big( \\forall_\\mathcal{F} x\\ \\psi(x) \\big)"
},
{
"math_id": 32,
"text": "\\exists_\\mathcal{F} x\\ \\big( \\varphi(x) \\land \\psi(x) \\big) \\implies \\big( \\exists_\\mathcal{F} x\\ \\varphi(x) \\big) \\land \\big( \\exists_\\mathcal{F} x\\ \\psi(x) \\big)"
},
{
"math_id": 33,
"text": "\\forall_\\mathcal{F} x\\ \\big( \\varphi(x) \\lor \\psi(x) \\big) \\;\\Longleftarrow\\; \\big( \\forall_\\mathcal{F} x\\ \\varphi(x) \\big) \\lor \\big( \\forall_\\mathcal{F} x\\ \\psi(x) \\big)"
},
{
"math_id": 34,
"text": "\\exists_\\mathcal{F} x\\ \\big( \\varphi(x) \\lor \\psi(x) \\big) \\;\\Longleftrightarrow\\; \\big( \\exists_\\mathcal{F} x\\ \\varphi(x) \\big) \\lor \\big( \\exists_\\mathcal{F} x\\ \\psi(x) \\big)"
},
{
"math_id": 35,
"text": "\\mathcal{F} \\subseteq \\mathcal{G}"
},
{
"math_id": 36,
"text": "\\forall_\\mathcal{F} x\\ \\varphi(x) \\implies \\forall_\\mathcal{G} x\\ \\varphi(x)"
},
{
"math_id": 37,
"text": "\\exists_\\mathcal{F} x\\ \\varphi(x) \\;\\Longleftarrow\\; \\exists_\\mathcal{G} x\\ \\varphi(x)"
},
{
"math_id": 38,
"text": "\\forall_\\mathcal{F} x\\ \\varphi(x) \\iff \\exists_\\mathcal{F} x\\ \\varphi(x)."
},
{
"math_id": 39,
"text": "\\mathcal{F} x,"
},
{
"math_id": 40,
"text": "\\mathcal{F} x\\ \\varphi(x) \\iff \\neg \\mathcal{F} x\\ \\neg \\varphi(x)"
},
{
"math_id": 41,
"text": "\\forall x\\ \\varphi(x) \\implies \\mathcal{F} x\\ \\varphi(x) \\implies \\exists x\\ \\varphi(x)"
},
{
"math_id": 42,
"text": "\\mathcal{F} x\\ \\big( \\varphi(x) \\land \\psi(x) \\big) \\iff \\big( \\mathcal{F} x\\ \\varphi(x) \\big) \\land \\big( \\mathcal{F} x\\ \\psi(x) \\big)"
},
{
"math_id": 43,
"text": "\\mathcal{F} x\\ \\big( \\varphi(x) \\lor \\psi(x) \\big) \\iff \\big( \\mathcal{F} x\\ \\varphi(x) \\big) \\lor \\big( \\mathcal{F} x\\ \\psi(x) \\big)"
},
{
"math_id": 44,
"text": "\\forall"
},
{
"math_id": 45,
"text": "\\exists"
},
{
"math_id": 46,
"text": "\\mathcal{F} = \\{X\\}"
},
{
"math_id": 47,
"text": "\\forall_\\mathcal{F} x\\ \\varphi(x) \\iff \\{x \\in X: \\varphi(x)\\} = X,"
},
{
"math_id": 48,
"text": "\\exists_\\mathcal{F} x\\ \\varphi(x) \\iff \\{x \\in X: \\varphi(x)\\} \\cap X \\neq \\varnothing."
},
{
"math_id": 49,
"text": "\\mathcal{F}^\\infty"
},
{
"math_id": 50,
"text": "\\forall_{\\mathcal{F}^\\infty} x\\ \\varphi(x)"
},
{
"math_id": 51,
"text": "x \\in X,"
},
{
"math_id": 52,
"text": "\\exists_{\\mathcal{F}^\\infty} x\\ \\varphi(x)"
},
{
"math_id": 53,
"text": "x \\in X."
},
{
"math_id": 54,
"text": "\\forall_{\\mathcal{F}^\\infty}"
},
{
"math_id": 55,
"text": "\\exists_{\\mathcal{F}^\\infty}"
},
{
"math_id": 56,
"text": "\\forall^\\infty"
},
{
"math_id": 57,
"text": "\\exists^\\infty,"
},
{
"math_id": 58,
"text": "\\mathcal{M}"
},
{
"math_id": 59,
"text": "[0, 1],"
},
{
"math_id": 60,
"text": "A \\subseteq [0,1]"
},
{
"math_id": 61,
"text": "\\mu(A) = 1."
},
{
"math_id": 62,
"text": "\\forall_\\mathcal{M} x\\ \\varphi(x)"
},
{
"math_id": 63,
"text": "\\exists_\\mathcal{M} x\\ \\varphi(x)"
},
{
"math_id": 64,
"text": "\\mathcal{F}_A"
},
{
"math_id": 65,
"text": "A \\subseteq X."
},
{
"math_id": 66,
"text": "\\forall_{\\mathcal{F}_A} x\\ \\varphi(x) \\iff \\forall x \\in A\\ \\varphi(x),"
},
{
"math_id": 67,
"text": "\\exists_{\\mathcal{F}_A} x\\ \\varphi(x) \\iff \\exists x \\in A\\ \\varphi(x)."
},
{
"math_id": 68,
"text": "\\mathcal{U}_d"
},
{
"math_id": 69,
"text": "d \\in X,"
},
{
"math_id": 70,
"text": "\\mathcal{U}_d\\, x\\ \\varphi(x) \\iff \\varphi(d)."
},
{
"math_id": 71,
"text": "(a_n)_{n \\in \\N} \\subseteq \\R"
},
{
"math_id": 72,
"text": "a \\in \\R"
},
{
"math_id": 73,
"text": "\\forall \\varepsilon > 0\\ \\ \\exists N \\in \\mathbb{N}\\ \\ \\forall n \\in \\N\\ \\big(\\ n \\geq N \\implies \\vert a_n - a \\vert < \\varepsilon\\ \\big)"
},
{
"math_id": 74,
"text": "\\forall \\varepsilon > 0\\ \\ \\forall^\\infty n \\in \\mathbb{N}:\\ \\vert a_n - a \\vert < \\varepsilon"
},
{
"math_id": 75,
"text": "+"
},
{
"math_id": 76,
"text": "\\beta X,"
},
{
"math_id": 77,
"text": "\\mathcal{U} \\oplus \\mathcal{V} = \\big\\{ A \\subseteq X :\\ \\mathcal{U}x\\ \\mathcal{V}y\\ \\ x+y \\in A \\big\\}"
},
{
"math_id": 78,
"text": "\\mathcal{U} \\oplus \\mathcal{V}"
},
{
"math_id": 79,
"text": "\\mathcal{U}"
},
{
"math_id": 80,
"text": "y"
},
{
"math_id": 81,
"text": "\\mathcal{V}"
},
{
"math_id": 82,
"text": "x+y"
},
{
"math_id": 83,
"text": "A."
},
{
"math_id": 84,
"text": "\\mathcal{U} \\oplus \\mathcal{V} = \\big\\{ A \\subseteq X :\\ \\{ x \\in X:\\ \\{ y \\in X:\\ x+y \\in A \\} \\in \\mathcal{V} \\} \\in \\mathcal{U} \\big\\}"
},
{
"math_id": 85,
"text": "\\oplus,"
},
{
"math_id": 86,
"text": "\\oplus"
},
{
"math_id": 87,
"text": "\\beta X."
}
] |
https://en.wikipedia.org/wiki?curid=63115362
|
631188
|
Halton sequence
|
Type of numeric sequence used in statistics
In statistics, Halton sequences are sequences used to generate points in space for numerical methods such as Monte Carlo simulations. Although these sequences are deterministic, they are of low discrepancy, that is, appear to be random for many purposes. They were first introduced in 1960 and are an example of a quasi-random number sequence. They generalize the one-dimensional van der Corput sequences.
Example of Halton sequence used to generate points in (0, 1) × (0, 1) in R².
The Halton sequence is constructed according to a deterministic method that uses coprime numbers as its bases. As a simple example, let's take one dimension of the two-dimensional Halton sequence to be based on 2 and the other dimension on 3. To generate the sequence for 2, we start by dividing the interval (0,1) in half, then in fourths, eighths, etc., which generates
1⁄2,
1⁄4, 3⁄4,
1⁄8, 5⁄8, 3⁄8, 7⁄8,
1⁄16, 9⁄16...
Equivalently, the nth number of this sequence is the number n written in binary representation, inverted, and written after the decimal point. This is true for any base. As an example, to find the sixth element of the above sequence, we'd write 6 = 1·2² + 1·2¹ + 0·2⁰ = 110₂, which can be inverted and placed after the decimal point to give 0.011₂ = 0·2⁻¹ + 1·2⁻² + 1·2⁻³ = 3⁄8. So the sequence above is the same as
0.1₂, 0.01₂, 0.11₂, 0.001₂, 0.101₂, 0.011₂, 0.111₂, 0.0001₂, 0.1001₂...
To generate the sequence for 3 for the other dimension, we divide the interval (0,1) in thirds, then ninths, twenty-sevenths, etc., which generates
1⁄3, 2⁄3, 1⁄9, 4⁄9, 7⁄9, 2⁄9, 5⁄9, 8⁄9, 1⁄27...
When we pair them up, we get a sequence of points in a unit square:
(1⁄2, 1⁄3), (1⁄4, 2⁄3), (3⁄4, 1⁄9), (1⁄8, 4⁄9), (5⁄8, 7⁄9), (3⁄8, 2⁄9), (7⁄8, 5⁄9), (1⁄16, 8⁄9), (9⁄16, 1⁄27).
Even though standard Halton sequences perform very well in low dimensions, correlation problems have been noted between sequences generated from higher primes. For example, if we started with the primes 17 and 19, the first 16 pairs of points: (1⁄17, 1⁄19), (2⁄17, 2⁄19), (3⁄17, 3⁄19) ... (16⁄17, 16⁄19) would have perfect linear correlation. To avoid this, it is common to drop the first 20 entries, or some other predetermined quantity depending on the primes chosen. Several other methods have also been proposed. One of the most prominent solutions is the scrambled Halton sequence, which uses permutations of the coefficients used in the construction of the standard sequence. Another solution is the "leaped Halton", which skips points in the standard sequence. Using, for example, only every 409th point (other prime numbers not already used as Halton bases are also possible) can achieve significant improvements.
Implementation.
In pseudocode:
algorithm Halton-Sequence is
inputs: index formula_0
base formula_1
output: result formula_2
formula_3
formula_4
while formula_5 do
formula_6
formula_7
formula_8
return formula_2
An alternative implementation that produces subsequent numbers of a Halton sequence for base "b" is given in the following generator function (in Python). This algorithm uses only integer numbers internally, which makes it robust against round-off errors.
def halton_sequence(b):
    """Generator function for the Halton sequence with base b."""
    n, d = 0, 1                  # the current term is the fraction n/d
    while True:
        x = d - n
        if x == 1:
            # n/d was the last term with denominator d; restart at 1/(d*b)
            n = 1
            d *= b
        else:
            # otherwise step to the next numerator for the same denominator:
            # locate the base-b digit position to adjust, then update n
            y = d // b
            while x <= y:
                y //= b
            n = (b + 1) * y - x
        yield n / d
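As a usage sketch (the snippet below is illustrative, not part of the cited implementation), pairing the base-2 and base-3 generators reproduces the two-dimensional points listed above:
from itertools import islice

pairs = zip(halton_sequence(2), halton_sequence(3))
print(list(islice(pairs, 4)))
# [(0.5, 0.333...), (0.25, 0.666...), (0.75, 0.111...), (0.125, 0.444...)]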
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "i"
},
{
"math_id": 1,
"text": "b"
},
{
"math_id": 2,
"text": "r"
},
{
"math_id": 3,
"text": "f \\larr 1"
},
{
"math_id": 4,
"text": "r \\larr 0"
},
{
"math_id": 5,
"text": "i > 0"
},
{
"math_id": 6,
"text": "f \\larr f/b"
},
{
"math_id": 7,
"text": "r \\larr r + f * (i \\operatorname{mod} b)"
},
{
"math_id": 8,
"text": "i \\larr \\lfloor i/b \\rfloor"
}
] |
https://en.wikipedia.org/wiki?curid=631188
|
63122626
|
2 Kings 3
|
2 Kings, chapter 3
2 Kings 3 is the third chapter in the second part of the Books of Kings in the Hebrew Bible or the Second Book of Kings in the Old Testament of the Christian Bible. The book is a compilation of various annals recording the acts of the kings of Israel and Judah by a Deuteronomic compiler in the seventh century BCE, with a supplement added in the sixth century BCE. After a short introduction to the reign of the last king of Israel from the Omride, Jehoram of Israel, the son of Ahab, this chapter records the war of the coalition of the kings of Israel, Judah, and Edom, against Mesha the king of Moab with some contribution of Elisha the prophet. Another view of the events in this chapter is notably provided by the inscription on the Mesha Stele made by the aforementioned king of Moab in c. 840 BCE.
Text.
This chapter was originally written in the Hebrew language and since the 16th century is divided into 27 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
2 Kings 3 has rather coherent syntax with virtually no indications of redactional work on a syntactic level. However, from topographical considerations, the narrative could have at least two layers: the original tradition preserved in verses 4–6 and 24–27 describing the punitive war of Israel against Moab from the north some time after the rebellion of Mesha, which is in accord with the extrabiblical evidence and the settlement history of Trans-Jordan in the ninth century BCE; and another story in verses 7–23 augmenting this basic layer, introducing the formation of an alliance between Israel, Judah, and Edom; the oracle of Elisha; and an attack on Moab from the south. Despite some inconsistencies, the pro-Judean redactor skillfully joined this expansion of the story into a coherent account. The narrative of 2 Kings 3 has thematic and lexical parallels to other passages in the Bible, such as 1 Kings 22 or Numbers 20.
King Jehoram of Israel (3:1–3).
Jehoram is the last ruler of the Omri dynasty and, like the other monarchs in the dynasty, he received a negative rating before God, although a more favourable one than his parents Ahab and Jezebel, because 'he is said to have abolished the "pillar of Baal", a cult-stone set up by his father' (although it is not mentioned in 1 Kings 16:32). Nonetheless, he is later killed by Jehu (2 Kings 9:24) and his family dynasty is completely annihilated as prophesied.
"Now Jehoram the son of Ahab began to reign over Israel in Samaria the eighteenth year of Jehoshaphat king of Judah, and reigned twelve years."
"And he did evil in the sight of the Lord, but not like his father and mother; for he put away the sacred pillar of Baal that his father had made."
War against Moab (3:4–27).
At one point Israel under the Omri dynasty is recognized as a 'regional superpower': 'the kingdoms of Judah and Edom were compliant' (verses 7–8), 'the kingdom of Moab was a vassal liable to pay tribute' (verse 4), and any rebellion faced military reprisals. However, the success of Israel's wars was not without the interference of YHWH, as shown in this section. When the coalition of the kings of Israel, Judah, and Edom against Moab threatened to fail as water supplies ran out in the desert of Edom, Jehoshaphat, the king of Judah, asked to call for a prophet of YHWH. Elisha, an Israelite prophet, showed up but wished only to deal with the king of Judah (verses 11–14). The prophet ensured the success of the campaign with the miraculous help of YHWH. The advance of the allied army against Moab managed to destroy the entire region (verses 24b–26) until the king of Moab, out of desperation, made a terrible sacrifice of his firstborn son to his god, which caused Israel to be struck with 'great wrath' and forced the attacking armies to retreat (verse 27).
"And Mesha king of Moab was a sheepmaster, and rendered unto the king of Israel an hundred thousand lambs, and an hundred thousand rams, with the wool."
"But it came to pass, when Ahab was dead, that the king of Moab rebelled against the king of Israel."
Verse 5.
This and the following verses elaborate the statement in the opening verse of 2 Kings, about Moab's rebellion. Just as the unified kingdom of Israel divides in the days of Solomon's son, the resulting kingdom of Israel divides (with the loss of Moab) in the days of Ahab's son, indicating the framing of Ahab as a perverse Solomon (comparing 2 Kings 3:5 to 1 Kings 12:19).
"So the king of Israel went with the king of Judah and the king of Edom. And when they had made a circuitous march of seven days, there was no water for the army or for the animals that followed them."
Verse 11.
This is the only verse in the chapter that mentions Elijah. It mentions a king looking for prophet Elisha who washed the head of Elijah.
"And he said, "Thus says the Lord: 'Make this valley full of ditches.'""
Relation to the Mesha Stele.
The inscription on the Mesha Stele (Mesha Inscription or "MI") verifies certain things recorded in 2 Kings 3 and makes other things in the biblical text more understandable:
On the other hand, the Mesha Inscription spoke about victory over Israel, in contrast to the report of Israel's victory over Moab in 2 Kings 3, but the biblical account of Moab's invasion helps explain why 'Moab is nowhere mentioned in the inscriptions of Shalmaneser III (858-824)', that is, 'Israel's punitive raid had rendered them militarily not worth mentioning'.
Therefore, even though detailed synchronization between the Mesha Inscription and 2 Kings 3 can be problematic, Hermann states that "on the whole, the texts complement each other."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=63122626
|
6313357
|
Smearing retransformation
|
The Smearing retransformation is used in regression analysis, after estimating the logarithm of a variable. Estimating the logarithm of a variable instead of the variable itself is a common technique to more closely approximate normality. In order to retransform the variable back to level from log, the Smearing retransformation is used.
If the logarithm of the variable y is normally distributed with mean formula_0 and variance formula_1,
then the expected value of y is given by:
formula_2
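A minimal sketch of this retransformation under the normal-theory assumption above (the data, variable names and helper function are hypothetical, not from the cited sources):
import numpy as np

def smearing_retransform(X, y):
    """Fit log(y) on X by ordinary least squares and return exp(f(X)) * exp(sigma^2 / 2)."""
    X1 = np.column_stack([np.ones(len(X)), X])        # design matrix with intercept
    beta, *_ = np.linalg.lstsq(X1, np.log(y), rcond=None)
    fitted_log = X1 @ beta                            # estimate of f(X)
    sigma2 = (np.log(y) - fitted_log).var(ddof=X1.shape[1])
    return np.exp(fitted_log) * np.exp(0.5 * sigma2)  # level-scale expected values

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = np.exp(1 + 2 * x + rng.normal(0, 0.5, 200))       # log(y) = 1 + 2x + noise
print(smearing_retransform(x, y)[:3])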
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " f(X) "
},
{
"math_id": 1,
"text": " \\sigma^2 "
},
{
"math_id": 2,
"text": " y = \\exp(f(X))\\exp(\\frac{1}{2}\\sigma^2). "
}
] |
https://en.wikipedia.org/wiki?curid=6313357
|
63134772
|
Variational series
|
In statistics, a variational series is a non-decreasing sequence formula_0 composed from an initial series of independent and identically distributed random variables formula_1. The members of the variational series form order statistics, which form the basis for nonparametric statistical methods.
formula_2 is called the "k"th order statistic, while the values formula_3 and formula_4 (the 1st and formula_5th order statistics, respectively) are referred to as the extremal terms. The sample range is given by formula_6, and the sample median by formula_7 when formula_8 is odd and formula_9 when formula_10 is even.
The variational series serves to construct the empirical distribution function formula_11, where formula_12 is the number of members of the series which are less than formula_13. The empirical distribution formula_14 serves as an estimate of the true distribution formula_15 of the random variables formula_1, and according to the Glivenko–Cantelli theorem converges almost surely to formula_15.
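The quantities above are straightforward to compute from a sample; the following is a small illustrative numpy sketch (the sample itself is hypothetical):
import numpy as np

x = np.random.default_rng(0).standard_normal(11)    # i.i.d. sample, n = 11
order_stats = np.sort(x)                             # X_(1) <= ... <= X_(n)
x_min, x_max = order_stats[0], order_stats[-1]       # extremal terms
sample_range = x_max - x_min                         # R_n = X_(n) - X_(1)
n = len(x)
median = order_stats[n // 2] if n % 2 else (order_stats[n // 2 - 1] + order_stats[n // 2]) / 2
assert np.isclose(median, np.median(x))

def ecdf(t, sample=order_stats):
    """Empirical distribution function: fraction of sample members less than t."""
    return np.searchsorted(sample, t, side="left") / len(sample)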
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X_{(1)} \\leqslant X_{(2)} \\leqslant \\cdots \\leqslant X_{(n-1)} \\leqslant X_{(n)}"
},
{
"math_id": 1,
"text": "X_1,\\ldots,X_n"
},
{
"math_id": 2,
"text": "X_{(k)}"
},
{
"math_id": 3,
"text": " X_{(1)}=\\min_{1 \\leq k \\leq n}{X_k}"
},
{
"math_id": 4,
"text": " X_{(n)}=\\max_{1 \\leq k \\leq n}{X_k} "
},
{
"math_id": 5,
"text": "n"
},
{
"math_id": 6,
"text": "R_n = X_{(n)}-X_{(1)}"
},
{
"math_id": 7,
"text": "X_{(m+1)}"
},
{
"math_id": 8,
"text": "n=2m+1"
},
{
"math_id": 9,
"text": "(X_{(m+1)} + X_{(m)})/2"
},
{
"math_id": 10,
"text": "n=2m"
},
{
"math_id": 11,
"text": "\\hat{F}(x) = \\mu(x)/n"
},
{
"math_id": 12,
"text": " \\mu(x) "
},
{
"math_id": 13,
"text": "x"
},
{
"math_id": 14,
"text": "\\hat{F}(x)"
},
{
"math_id": 15,
"text": "F(x)"
}
] |
https://en.wikipedia.org/wiki?curid=63134772
|
63136694
|
2 Kings 1
|
A chapter in the Second Book of Kings
2 Kings 1 is the first chapter of the second part of the Books of Kings in the Hebrew Bible or the Second Book of Kings in the Old Testament of the Christian Bible. The book is a compilation of various annals recording the acts of the kings of Israel and Judah by a Deuteronomic compiler in the seventh century BCE, with a supplement added in the sixth century BCE. This chapter focuses on Ahaziah of Israel, the son of Ahab, and the acts of Elijah the prophet who rebuked the king and prophesied the king's death.
Text.
This chapter was originally written in the Hebrew language and since the 16th century is divided into 18 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
The second book of Kings begins with a chapter featuring the prophet Elijah, whose stories occupy the last part of the first book of Kings (1 Kings). In this final story of confrontation with a monarch, Elijah takes on King Ahaziah of Israel, whose reign was introduced in the ending verses of 1 Kings.
The artificial separation of this episode of Elijah from those in the previous book resulted from the Septuagint's division of the Hebrew book of Kings into two parts, whereas the Jewish Hebrew text tradition continued to consider Kings as one book until the Bamberg Rabbinic Bible of 1516. This makes 2 Kings begin with a sick king (Ahaziah) on his deathbed, just as 1 Kings does (David); both Ahaziah and David received prophets, with quite different results. Although Elijah is fully capable of raising the dead, Ahaziah seeks help elsewhere, so instead of being healed, he is prophesied by Elijah to die in his current bed.
The abruptness of the beginning of 2 Kings can also be seen in its very first verse about Moab's rebellion against Israel after the death of Ahab, which seems unrelated to the story of Ahaziah and Elijah that follows it; the rebellion will be dealt with in chapter 3, which takes it up again with a verbatim "repetitive resumption" ("Wiederaufnahme") of 2 Kings 1:1. However, this opening episode of 2 Kings serves several important functions: looking backwards to summarize the personality and behavioral traits of Elijah, while at the same time anticipating the future anti-Baal crusade by Jehu, who would destroy the Omride dynasty.
This narrative is one of four in 1–2 Kings in which a prophet delivers an oracle to a dying king, placing Elijah in a "type-scene" associated with each major prophet in the book (Ahijah in 1 Kings 14:1–18; Elisha in 2 Kings 8:7–15; and Isaiah), thus linking him into a prophetic chain. The differences from the common pattern are the threefold repetition of the oracle (verses 3b–4, 6, 16) and the confrontations between Elijah and the three captains of the king.
Structure.
The main narrative of this chapter contains parallel elements that create structural symmetry:
A Ahaziah's illness and inquiry (1:2)
B Angel of YHWH sends Elijah to messengers with prophecy (1) (1:3–4)
C Messengers deliver prophecy (2) to Ahaziah (1:5–8)
X Three captains confront Elijah (1.9–14)
B' Angel of YHWH sends Elijah to Ahaziah (1:15)
C' Elijah delivers prophecy (3) to Ahaziah (1:16)
A' Ahaziah's death (1:17)
"Then Moab rebelled against Israel after the death of Ahab."
Opening verse (1:1).
This statement about Moab's rebellion in the opening verse of 2 Kings is elaborated in chapter 3 and supported by the information in the Mesha Stele (see the detailed comparisons there).
Moab in the Trans-Jordan region was incorporated into Israel by King David, who had family connections with the people of that land (Ruth 4); it is mentioned only briefly in 1 Kings 11:7 as a client state of Israel during the days of David and Solomon, and was then ruled by the Northern Kingdom of Israel during the reigning period of the Omrides.
"And Ahaziah fell down through a lattice in his upper chamber that was in Samaria, and was sick: and he sent messengers, and said unto them, Go, enquire of Baalzebub the god of Ekron whether I shall recover of this disease."
Elijah interferes (1:3–16).
The oracular consultation that Ahaziah requested did not take place due to Elijah's interference in the name of YHWH, following the explicit order of an 'angel of the LORD' (verses 3–4), and three (fifty-strong) army divisions are unable to stop him (verses 9–16). Thematically similar to 1 Kings 18, Elijah's mission to promote the exclusive worship of YHWH in Israel suits his name ('My God is YHWH!'). Ahaziah only realised the prophet's identity from the description of Elijah's appearance; aside from his mantle (cf. 2 Kings 2:13), Elijah's recognizable feature seems to be his suddenly showing up 'precisely when he is not expected or wanted, fearlessly saying what was to be said in the name of his God' (cf. 1 Kings 18:7; 21:17–20).
Death of Ahaziah (1:17–18).
In Ahaziah's life and death, the history of the house of Ahab is parallel to that of the house of Jeroboam I. A man of God from Judah prophesied the end of Jeroboam's family (1 Kings 13), then Jeroboam's son Abijah was sick and died before the dynasty ended. Likewise, after the destruction of Ahab's family was prophesied, Ahab's son, Ahaziah, died when the dynasty was still intact. However both dynasties fell during the reign of a subsequent son.
"So he died according to the word of the Lord which Elijah had spoken"
"Then Jehoram reigned in his place in the second year of Jehoram son of Jehoshaphat, king of Judah, because he had no son."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=63136694
|
63138221
|
Runtime predictive analysis
|
Runtime predictive analysis (or predictive analysis) is a runtime verification technique in computer science for detecting property violations in program executions inferred from an observed execution. An important class of predictive analysis methods has been developed for detecting concurrency errors (such as data races) in concurrent programs, where a runtime monitor is used to predict errors which did not happen in the observed run, but can happen in an alternative execution of the same program. The predictive capability comes from the fact that the analysis is performed on an abstract model extracted online from the observed execution, which admits a class of executions beyond the observed one.
Overview.
Informally, given an execution formula_0, predictive analysis checks errors in a reordered trace formula_1 of formula_0. formula_1 is called "feasible" from formula_0 (alternatively a "correct reordering" of formula_0) if any program that can generate formula_0 can also generate formula_1.
In the context of concurrent programs, a predictive technique is "sound" if it only predicts concurrency errors in "feasible" executions of the causal model of the observed trace. Assuming the analysis has no knowledge about the source code of the program, the analysis is "complete" (also called "maximal") if the inferred class of executions contains all executions that have the same program order and communication order prefix of the observed trace.
Applications.
Predictive analysis has been applied to detect a wide class of concurrency errors, including:
Implementation.
As is typical with dynamic program analysis, predictive analysis first instruments the source program. At runtime, the analysis can be performed online, in order to detect errors on the fly. Alternatively, the instrumentation can simply dump the execution trace for offline analysis. The latter approach is preferred for expensive refined predictive analyses that require random access to the execution trace or take more than linear time.
Incorporating data and control-flow analysis.
Static analysis can be first conducted to gather data and control-flow dependence information about the source program, which can help construct the causal model during online executions. This allows predictive analysis to infer a larger class of executions based on the observed execution. Intuitively, a feasible reordering can change the last writer of a memory read (data dependence) if the read, in turn, cannot affect whether any accesses execute (control dependence).
Approaches.
Partial order based techniques.
Partial order based techniques are most often employed for online race detection. At runtime, a partial order over the events in the trace is constructed, and any unordered pairs of critical events are reported as races. Many predictive techniques for race detection are based on the happens-before relation or a weakened version of it. Such techniques can typically be implemented efficiently with vector clock algorithms, allowing only one pass of the whole input trace as it is being generated, and are thus suitable for online deployment.
SMT-based techniques.
SMT encodings allow the analysis to extract a refined causal model from an execution trace, as a (possibly very large) mathematical formula. Furthermore, control-flow information can be incorporated into the model. SMT-based techniques can achieve soundness and completeness (also called "maximal causality"), but have exponential-time complexity with respect to the trace size. In practice, the analysis is typically deployed to bounded segments of an execution trace, thus trading completeness for scalability.
Lockset-based approaches.
In the context of data race detection for programs using lock-based synchronization, lockset-based techniques provide an unsound, yet lightweight mechanism for detecting data races. These techniques primarily detect violations of the lockset principle, which says that all accesses of a given memory location must be protected by a common lock. Such techniques are also used to filter out candidate race reports in more expensive analyses.
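For illustration, here is a minimal sketch of the lockset principle (an Eraser-style intersection of held locks over hypothetical (thread, operation, target) events; this is not any particular tool's algorithm):
def lockset_races(trace):
    """Report variables whose candidate lockset becomes empty."""
    held = {}        # thread -> set of locks currently held
    candidate = {}   # variable -> candidate lockset
    races = set()
    for thread, op, target in trace:
        locks = held.setdefault(thread, set())
        if op == "acquire":
            locks.add(target)
        elif op == "release":
            locks.discard(target)
        elif op == "access":
            if target not in candidate:
                candidate[target] = set(locks)   # first access initializes the lockset
            else:
                candidate[target] &= locks       # intersect with the locks held now
            if not candidate[target]:
                races.add(target)
    return races

trace = [("T1", "acquire", "l"), ("T1", "access", "x"), ("T1", "release", "l"),
         ("T2", "access", "x")]                  # unprotected access to x by T2
print(lockset_races(trace))                      # {'x'}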
Graph-based techniques.
In the context of data race detection, sound polynomial-time predictive analyses have been developed, with good, close-to-maximal predictive capability, based on graphs.
Computational complexity.
Given an input trace of size formula_2 executed by formula_3 threads, general race prediction is NP-complete and even W[1]-hard parameterized by formula_3, but admits a polynomial-time algorithm when the communication topology is acyclic.
Happens-before races are detected in formula_4 time, and this bound is optimal.
Lockset races over formula_5 variables are detected in formula_6 time, and this bound is also optimal.
Tools.
Here is a partial list of tools that use predictive analyses to detect concurrency errors, sorted alphabetically.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "t"
},
{
"math_id": 1,
"text": "t'"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "k"
},
{
"math_id": 4,
"text": "O(n\\cdot k)"
},
{
"math_id": 5,
"text": "d"
},
{
"math_id": 6,
"text": "O(n\\cdot d)"
}
] |
https://en.wikipedia.org/wiki?curid=63138221
|
63138616
|
Success
|
Meeting or surpassing an intended goal or objective
Success is the state or condition of meeting a defined range of expectations. It may be viewed as the opposite of failure. The criteria for success depend on context, and may be relative to a particular observer or belief system. One person might consider a success what another person considers a failure, particularly in cases of direct competition or a zero-sum game. Similarly, the degree of success or failure in a situation may be differently viewed by distinct observers or participants, such that a situation that one considers to be a success, another might consider to be a failure, a qualified success or a neutral situation. For example, a film that is a commercial failure or even a box-office bomb can go on to receive a cult following, with the initial lack of commercial success even lending a cachet of subcultural coolness.
It may also be difficult or impossible to ascertain whether a situation meets criteria for success or failure due to ambiguous or ill-defined definition of those criteria. Finding useful and effective criteria, or heuristics, to judge the failure or success of a situation may itself be a significant task.
In American culture.
DeVitis and Rich link the success to the notion of the American Dream. They observe that "[t]he ideal of success is found in the American Dream which is probably the most potent ideology in American life" and suggest that "Americans generally believe in achievement, success, and materialism." Weiss, in his study of success in the American psyche, compares the American view of success with Max Weber's concept of the Protestant work ethic.
In biology.
Natural selection is the variation in successful survival and reproduction of individuals due to differences in phenotype. It is a key mechanism of evolution, the change in the heritable traits characteristic of a population over generations. Charles Darwin popularized the term "natural selection", contrasting it with artificial selection, which in his view is intentional, whereas natural selection is not. As Darwin phrased it in 1859, natural selection is the "principle by which each slight variation [of a trait], if useful, is preserved". The concept was simple but powerful: individuals best adapted to their environments are more likely to survive and reproduce. As long as there is some variation between them and that variation is heritable, there will be an inevitable selection of individuals with the most advantageous variations. If the variations are heritable, then differential reproductive success leads to a progressive evolution of particular populations of a species, and populations that evolve to be sufficiently different eventually become different species.
In education.
A student's success within an educational system is often expressed by way of grading. Grades may be given as numbers, letters or other symbols. By the year 1884, Mount Holyoke College was evaluating students' performance on a 100-point or percentage scale and then summarizing those numerical grades by assigning letter grades to numerical ranges. Mount Holyoke assigned letter grades "A" through "E," with "E" indicating lower than 75% performance. The "A"–"E" system spread to Harvard University by 1890. In 1898, Mount Holyoke adjusted the grading system, adding an "F" grade for failing (and adjusting the ranges corresponding to the other letters). The practice of letter grades spread more broadly in the first decades of the 20th century. By the 1930s, the letter "E" was dropped from the system, for unclear reasons.
Educational systems themselves can be evaluated on how successfully they impart knowledge and skills. For example, the Programme for International Student Assessment (PISA) is a worldwide study by the Organisation for Economic Co-operation and Development (OECD) intended to evaluate educational systems by measuring 15-year-old school pupils' scholastic performance on mathematics, science, and reading. It was first performed in 2000 and then repeated every three years.
Carol Dweck, a Stanford University psychologist, primarily researches motivation, personality, and development as related to implicit theories of intelligence; her key contribution to education is the 2006 book "Mindset: The New Psychology of Success". Dweck's work presents mindset as on a continuum between fixed mindset (intelligence is static) and growth mindset (intelligence can be developed). Growth mindset is a learning focus that embraces challenge and supports persistence in the face of setbacks. As a result of growth mindset, individuals have a greater sense of free will and are more likely to continue working toward their idea of success despite setbacks.
In business and leadership.
Malcolm Gladwell's 2008 book "Outliers: The Story of Success" suggests that the notion of the self-made man is a myth. Gladwell argues that the success of entrepreneurs such as Bill Gates is due to their circumstances, as opposed to their inborn talent.
Andrew Likierman, former Dean of London Business School, argues that success is a relative rather than an absolute term: success needs to be measured against stated objectives and against the achievements of relevant peers: he suggests Jeff Bezos (Amazon) and Jack Ma (Alibaba) have been successful in business "because at the time they started there were many companies aspiring to the dominance these two have achieved". Likierman puts forward four propositions regarding company success and its measurement.
In philosophy of science.
Scientific theories are often deemed successful when they make predictions that are confirmed by experiment. For example, calculations regarding the Big Bang predicted the cosmic microwave background and the relative abundances of chemical elements in deep space (see Big Bang nucleosynthesis), and observations have borne out these predictions. Scientific theories can also achieve success more indirectly, by suggesting other ideas that turn out correct. For example, Johannes Kepler conceived a model of the Solar System based on the Platonic solids. Although this idea was itself incorrect, it motivated him to pursue the work that led to the discoveries now known as Kepler's laws, which were pivotal in the development of astronomy and physics.
In probability.
The fields of probability and statistics often study situations where events are labeled as "successes" or "failures". For example, a Bernoulli trial is a random experiment with exactly two possible outcomes, "success" and "failure", in which the probability of success is the same every time the experiment is conducted. The concept is named after Jacob Bernoulli, a 17th-century Swiss mathematician, who analyzed them in his "Ars Conjectandi" (1713). The term "success" in this sense consists in the result meeting specified conditions, not in any moral judgement. For example, the experiment could be the act of rolling a single die, with the result of rolling a six being declared a "success" and all other outcomes grouped together under the designation "failure". Assuming a fair die, the probability of success would then be formula_0.
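As a small illustrative simulation of this usage (the snippet below is not from the article's sources; the seed and trial count are arbitrary), repeated die rolls with "success" defined as rolling a six give an empirical success rate near 1/6:
import random

random.seed(0)                  # reproducible illustration
trials = 100_000
successes = sum(1 for _ in range(trials) if random.randint(1, 6) == 6)
print(successes / trials)       # close to 1/6 ≈ 0.1667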
Dissatisfaction with success.
Although fame and success are widely sought by many people, successful people are often displeased by their status. Overall, there is a general correlation between success and unhappiness. A study done in 2008 notes that CEOs are depressed at more than double the rate of the public at large, suggesting that this is not a phenomenon exclusive to celebrities. Research suggests that people tend to focus more on objective success (i.e., status, wealth, reputation) as benchmarks for success, rather than subjective success (i.e., self-worth, relationships, moral introspection), and as a result become disillusioned with the success they do have. Celebrities in particular face specific circumstances that cause them to be displeased by their success.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "1/6"
}
] |
https://en.wikipedia.org/wiki?curid=63138616
|
631443
|
Seismic moment
|
Seismic moment is a quantity used by seismologists to measure the size of an earthquake. The scalar seismic moment formula_0 is defined by the equation
formula_1, where
formula_2 is the shear modulus of the rocks involved,
formula_3 is the area of the rupture along the geologic fault where the earthquake occurred, and
formula_4 is the average slip (displacement offset) on the fault.
formula_0 thus has dimensions of torque, measured in newton meters. The connection between seismic moment and a torque is natural in the body-force equivalent representation of seismic sources as a double-couple (a pair of force couples with opposite torques): the seismic moment is the torque of each of the two couples. Despite having the same dimensions as energy, seismic moment is not a measure of energy. The relations between seismic moment, potential energy drop and radiated energy are indirect and approximative.
The seismic moment of an earthquake is typically estimated using whatever information is available to constrain its factors. For modern earthquakes, moment is usually estimated from ground motion recordings of earthquakes known as seismograms. For earthquakes that occurred in times before modern instruments were available, moment may be estimated from geologic estimates of the size of the fault rupture and the slip.
Seismic moment is the basis of the moment magnitude scale introduced by Caltech's Thomas C. Hanks and Hiroo Kanamori, which is often used to compare the size of different earthquakes and is especially useful for comparing the sizes of large (great) earthquakes.
The seismic moment is not restricted to earthquakes. For a more general seismic source described by a seismic moment tensor formula_5 (a symmetric tensor, but not necessarily a double couple tensor), the seismic moment is
formula_6
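As a numerical sketch of both formulas (all input values below are hypothetical and chosen only for illustration):
import numpy as np

mu = 3.0e10                            # shear modulus in Pa (assumed value)
A = 1.0e9                              # rupture area in m^2 (assumed value)
D = 2.0                                # average slip in m (assumed value)
M0 = mu * A * D                        # scalar seismic moment, in newton meters
print(f"M0 = {M0:.3e} N·m")

M = np.diag([M0, -M0, 0.0])            # a double-couple moment tensor in principal axes
M0_from_tensor = np.sqrt(np.sum(M**2) / 2)
print(np.isclose(M0, M0_from_tensor))  # True: both formulas agree for a double couple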
Sources.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "M_0"
},
{
"math_id": 1,
"text": "M_0=\\mu AD"
},
{
"math_id": 2,
"text": "\\mu"
},
{
"math_id": 3,
"text": "A"
},
{
"math_id": 4,
"text": "D"
},
{
"math_id": 5,
"text": "M_{ij}"
},
{
"math_id": 6,
"text": "M_0=\\frac{1}{\\sqrt{2}} (M_{ij}^2)^{1/2}"
}
] |
https://en.wikipedia.org/wiki?curid=631443
|
63147271
|
2 Kings 2
|
A chapter in the Second Book of Kings
2 Kings 2 is the second chapter of the second part of the Books of Kings in the Hebrew Bible or the Second Book of Kings in the Old Testament of the Christian Bible. The book is a compilation of various annals recording the acts of the kings of Israel and Judah by a Deuteronomic compiler in the seventh century BCE, with a supplement added in the sixth century BCE. The first part of this chapter (verses 1–18) records the appointment of Elisha to succeed Elijah, and Elijah's ascension to heaven, while the second part (verses 19–25) records some miraculous acts of Elisha showing that he has been granted power similar to Elijah's.
Text.
This chapter was originally written in the Hebrew language and since the 16th century is divided into 25 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
The story of Elisha's inheritance of Elijah's prophetic power is placed in this chapter after the reign of Ahaziah is closed but before the reign of Jehoram opens, underscoring its importance, as the only account of a prophetic succession recorded in the Tanakh. It is also one of the two occasions (the other is at the death of Elisha; 13:14-21) that a story stands outside the narrated time (in the midst of, but separate from, the sequential and interlocking formulas of royal succession), setting "prophetic over against royal power."
The narratives of this chapter recall the past history of Israel, with Elijah as a new Moses, Elisha as his Joshua, Ahab as Pharaoh, and when one son died (Passover), Elijah departs on the far side of Jordan (as Moses does), while Elisha crosses the Jordan back into the west bank ‘to carry on a conquest, significantly starting at Jericho’.
Structure.
This whole chapter has a chiastic structure,
mapping the scenes of a journey and a return: Elijah and Elisha journeyed from Gilgal to Bethel to Jericho and then to the other side of the Jordan where the climactic ascent of Elijah occurs. Elisha then returned alone via Jericho, Bethel, Mount Carmel and ended in Samaria.
The diagram of the narratives is as follows:
A Elijah and Elisha leave Gilgal (2:1-2)
B Elijah and Elisha at Bethel (2:3-4)
C Elijah and Elisha at Jericho (2:5-6)
D Elijah and Elisha leave the sons of the prophets and cross the Jordan river (2:7-8)
X The ascent of Elijah (2:9-12a)
D' Elisha crosses the Jordan River and confronts the sons of the prophets (2:12b-18)
C' Elisha at Jericho (2:19-22)
B' Elisha at Bethel (2:23-24)
A' Elisha returns to Samaria (2:25)
Elisha's Appointment and Elijah's Ascension (2:1–18).
Elijah's life was coming to an end with an ascension to heaven, one of the very few breaches of the "wall of death" in the Hebrew Bible/Old Testament.
Because he 'only departed rather than died', he was expected to return without the need of resurrection to 'announce the Messiah's arrival', as noted at the time of the New Testament. Verses 2–6 indicate that Elijah, Elisha, and many prophet disciples were aware of Elijah's impending departure. While Elijah seems to wish to be alone when the time comes, Elisha wants to accompany him: he is to be 'a witness to the miracle and an heir to the master'. Elisha requested and was granted the inheritance of Elijah's 'spirit' (as 'the double portion due the eldest son', verse 9), which is considered closest to 'the sphere of God' (cf. Judges 3:10; 14:6; 1 Samuel 10:10; 11:6; Isaiah 11:2, among others). Elisha also inherits Elijah's mantle, one of the older prophet's hallmarks (1 Kings 19:13, 19; cf. 2 Kings 1:8), which is also proved to have magical powers (both Elijah and Elisha could divide the river Jordan with it, reminding us of Moses' division of the Red Sea in Exodus 14:21). The military title of honor, 'chariot of Israel and its horsemen' (verse 12; or "horses"), is also applied to Elisha (2 Kings 13:14) in relation to wartime successes achieved by the kingdom of Israel with his help (the later acts of Elisha are recorded mainly in 2 Kings 3; 6–7).
"Then when the Lord was about to take Elijah up to heaven by a whirlwind, Elijah went with Elisha from Gilgal."
Verse 1.
The first clause of the opening verse summarizes the events to happen, whereas the second one relates the beginning of the journey that leads up to it. From the start, YHWH is 'named as the subject of this occurrence', with "săarah" ("storm, whirlwind"; often associated with theophany) as the agent of Elijah's ascent. Elisha is mentioned for the first time since Elijah chose him, accompanying the prophet from Gilgal (probably north of Beth-el) and setting up the conflict between the two in the next three scenes as Elijah insists that he journey alone, while Elisha swears to follow.
"And so it was, when they had crossed over, that Elijah said to Elisha, "Ask! What may I do for you, before I am taken away from you?""
"Elisha said, “Please let a double portion of your spirit be upon me.”"
"Then it happened, as they continued on and talked, that suddenly a chariot of fire appeared with horses of fire, and separated the two of them; and Elijah went up by a whirlwind into heaven."
The early acts of Elisha: bringing life and death (2:19-25).
This section records some of Elisha's first actions, confirming that 'Elisha has the same power to perform miracles as Elijah before him'. The spring named after Elisha (attributed to the miracle recorded in verses 19–22) can still be seen today at the oasis in Jericho with its fresh and abundant life-giving water. By stark contrast, ridiculing prophets can cost lives (verses 23–24; cf. 2 Kings 1:9–14; another 42 deaths are by Jehu in 2 Kings 10:12–14).
"Then he went up from there to Bethel; and as he was going up the road, some youths came from the city and mocked him, and said to him, "Go up, you baldhead! Go up, you baldhead!" So he turned around and looked at them, and pronounced a curse on them in the name of the Lord. And two female bears came out of the woods and mauled forty-two of the youths."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=63147271
|
6314748
|
Aggregate function
|
Type of function in database management
In database management, an aggregate function or aggregation function is a function where multiple values are processed together to form a single summary statistic.
Common aggregate functions include:
Others include:
Formally, an aggregate function takes as input a set, a multiset (bag), or a list from some input domain "I" and outputs an element of an output domain "O". The input and output domains may be the same, such as for codice_0, or may be different, such as for codice_1.
Aggregate functions occur commonly in numerous programming languages, in spreadsheets, and in relational algebra.
The codice_2 function, as defined in the standard, aggregates data from multiple rows into a single concatenated string.
In the entity relationship diagram, aggregation is represented as seen in Figure 1 with a rectangle around the relationship and its entities to indicate that it is being treated as an aggregate entity.
Decomposable aggregate functions.
Aggregate functions present a bottleneck, because they potentially require having all input values at once. In distributed computing, it is desirable to divide such computations into smaller pieces, and distribute the work, usually computing in parallel, via a divide and conquer algorithm.
Some aggregate functions can be computed by computing the aggregate for subsets, and then aggregating these aggregates; examples include codice_1, codice_4, codice_5, and codice_0. In other cases the aggregate can be computed by computing auxiliary numbers for subsets, aggregating these auxiliary numbers, and finally computing the overall number at the end; examples include codice_7 (tracking sum and count, dividing at the end) and codice_8 (tracking max and min, subtracting at the end). In other cases the aggregate cannot be computed without analyzing the entire set at once, though in some cases approximations can be distributed; examples include codice_9 (Count-distinct problem), codice_10, and codice_11.
Such functions are called decomposable aggregation functions or decomposable aggregate functions. The simplest may be referred to as self-decomposable aggregation functions, which are defined as those functions "f" such that there is a "merge operator" ⋄ such that
formula_0
where ⊎ is the union of multisets (see monoid homomorphism).
For example, codice_0:
formula_1, for a singleton;
formula_2, meaning that merge ⋄ is simply addition.
codice_1:
formula_3,
formula_4.
codice_4:
formula_5,
formula_6.
codice_5:
formula_7,
formula_8.
Note that self-decomposable aggregation functions can be combined (formally, taking the product) by applying them separately, so for instance one can compute both the codice_0 and codice_1 at the same time, by tracking two numbers.
More generally, one can define a decomposable aggregation function "f" as one that can be expressed as the composition of a final function "g" and a self-decomposable aggregation function "h", formula_9. For example, codice_7=codice_0/codice_1 and codice_8=codice_4−codice_5.
In the MapReduce framework, these steps are known as InitialReduce (value on individual record/singleton set), Combine (binary merge on two aggregations), and FinalReduce (final function on auxiliary values); moving decomposable aggregation before the Shuffle phase is known as an InitialReduce step.
Decomposable aggregation functions are important in online analytical processing (OLAP), as they allow aggregation queries to be computed on the pre-computed results in the OLAP cube, rather than on the base data. For example, it is easy to support codice_1, codice_4, codice_5, and codice_0 in OLAP, since these can be computed for each cell of the OLAP cube and then summarized ("rolled up"), but it is difficult to support codice_10, as that must be computed for every view separately.
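As a minimal sketch of decomposable aggregation (the partitioning and function names below are illustrative, not taken from any particular system): per-partition auxiliary values are computed independently, merged pairwise, and a final function produces the overall aggregates.
def partial_agg(chunk):
    """Auxiliary values for one partition of the data."""
    return {"sum": sum(chunk), "count": len(chunk),
            "min": min(chunk), "max": max(chunk)}

def merge(a, b):
    """Binary merge of two partitions' auxiliary values (the Combine step)."""
    return {"sum": a["sum"] + b["sum"], "count": a["count"] + b["count"],
            "min": min(a["min"], b["min"]), "max": max(a["max"], b["max"])}

def final(agg):
    """Final functions on the merged auxiliaries (e.g. AVERAGE = SUM / COUNT)."""
    return {**agg, "avg": agg["sum"] / agg["count"],
            "range": agg["max"] - agg["min"]}

data = [3, 1, 4, 1, 5, 9, 2, 6]
left, right = partial_agg(data[:4]), partial_agg(data[4:])   # computed independently
assert final(merge(left, right)) == final(partial_agg(data))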
Other decomposable aggregate functions.
In order to calculate the average and standard deviation from aggregate data, it is necessary to have available for each group: the total of values (Σxᵢ = SUM(x)), the number of values (N = COUNT(x)) and the total of squares of the values (Σxᵢ² = SUM(x²)).
codice_29:
formula_10
or
formula_11
or, only if COUNT(X) = COUNT(Y),
formula_12
codice_30:
The sum of squares of the values is needed in order to calculate the standard deviation of the groups:
formula_13
codice_31:
For a finite population with equal probabilities at all points, we have
formula_14
This means that the standard deviation is equal to the square root of the difference between the average of the squares of the values and the square of the average value.
formula_15
formula_16
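The identities above can be checked with a small sketch (the two groups below are a hypothetical example): the group totals SUM(x), COUNT(x) and SUM(x²) suffice to recover the mean and the population standard deviation of the combined data.
import math

def group_aggregates(chunk):
    return {"sum": sum(chunk), "count": len(chunk), "sumsq": sum(v * v for v in chunk)}

def merge(a, b):
    return {k: a[k] + b[k] for k in a}

def mean_and_stddev(agg):
    mean = agg["sum"] / agg["count"]
    return mean, math.sqrt(agg["sumsq"] / agg["count"] - mean ** 2)

x, y = [2.0, 4.0, 4.0, 4.0], [5.0, 5.0, 7.0, 9.0]
merged = merge(group_aggregates(x), group_aggregates(y))
print(mean_and_stddev(merged))   # (5.0, 2.0) for this example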
References.
<templatestyles src="Reflist/styles.css" />
Literature.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "f(X \\uplus Y) = f(X) \\diamond f(Y)"
},
{
"math_id": 1,
"text": "\\operatorname{SUM}({x}) = x"
},
{
"math_id": 2,
"text": "\\operatorname{SUM}(X \\uplus Y) = \\operatorname{SUM}(X) + \\operatorname{SUM}(Y)"
},
{
"math_id": 3,
"text": "\\operatorname{COUNT}({x}) = 1"
},
{
"math_id": 4,
"text": "\\operatorname{COUNT}(X \\uplus Y) = \\operatorname{COUNT}(X) + \\operatorname{COUNT}(Y)"
},
{
"math_id": 5,
"text": "\\operatorname{MAX}({x}) = x"
},
{
"math_id": 6,
"text": "\\operatorname{MAX}(X \\uplus Y) = \\max\\bigl(\\operatorname{MAX}(X), \\operatorname{MAX}(Y)\\bigr)"
},
{
"math_id": 7,
"text": "\\operatorname{MIN}({x}) = x"
},
{
"math_id": 8,
"text": "\\operatorname{MIN}(X \\uplus Y) = \\min\\bigl(\\operatorname{MIN}(X), \\operatorname{MIN}(Y)\\bigr)"
},
{
"math_id": 9,
"text": "f = g \\circ h, f(X) = g(h(X))"
},
{
"math_id": 10,
"text": "\\operatorname{AVG}(X \\uplus Y) = \\bigl(\\operatorname{AVG}(X) * \\operatorname{COUNT}(X) + \\operatorname{AVG}(Y) * \\operatorname{COUNT}(Y)\\bigr) / \\bigl(\\operatorname{COUNT}(X) + \\operatorname{COUNT}(Y)\\bigr)"
},
{
"math_id": 11,
"text": "\\operatorname{AVG}(X \\uplus Y) = \\bigl(\\operatorname{SUM}(X) + \\operatorname{SUM}(Y)\\bigr) / \\bigl(\\operatorname{COUNT}(X) + \\operatorname{COUNT}(Y)\\bigr)"
},
{
"math_id": 12,
"text": "\\operatorname{AVG}(X \\uplus Y) = \\bigl(\\operatorname{AVG}(X) + \\operatorname{AVG}(Y)\\bigr) / 2"
},
{
"math_id": 13,
"text": "\\operatorname{SUM}(X^2 \\uplus Y^2) = \\operatorname{SUM}(X^2)+\\operatorname{SUM}(Y^2)"
},
{
"math_id": 14,
"text": "\\operatorname{STDDEV}(X) = s(x) = \\sqrt{\\frac{1}{N}\\sum_{i=1}^N(x_i-\\overline{x})^2} = \\sqrt{\\frac{1}{N} \\left(\\sum_{i=1}^N x_i^2\\right) - (\\overline{x})^2}\n= \\sqrt{\\operatorname{SUM}(x^2) / \\operatorname{COUNT}(x) - \\operatorname{AVG}(x) ^2}\n"
},
{
"math_id": 15,
"text": "\\operatorname{STDDEV}(X \\uplus Y) = \\sqrt{\\operatorname{SUM}(X^2 \\uplus Y^2) / \\operatorname{COUNT}(X \\uplus Y) - \\operatorname{AVG}(X \\uplus Y) ^2}"
},
{
"math_id": 16,
"text": "\\operatorname{STDDEV}(X \\uplus Y) = \\sqrt{\\bigl(\\operatorname{SUM}(X^2)+\\operatorname{SUM}(Y^2)\\bigr) / \\bigl(\\operatorname{COUNT}(X) + \\operatorname{COUNT}(Y) \\bigr) - \\bigl((\\operatorname{SUM}(X) + \\operatorname{SUM}(Y)) / (\\operatorname{COUNT}(X) + \\operatorname{COUNT}(Y))\\bigr)^2}"
}
] |
https://en.wikipedia.org/wiki?curid=6314748
|
63148459
|
Sliding DFT
|
In applied mathematics, the sliding discrete Fourier transform is a recursive algorithm to compute successive STFTs of input data frames that are a single sample apart (a hop size of one sample). The calculation for the sliding DFT is closely related to the Goertzel algorithm.
Definition.
Assuming that the hopsize between two consecutive DFTs is 1 sample, then
formula_0
From the definition above, each successive DFT can be computed recursively from the previous one. However, applying a window function within a sliding DFT is difficult due to its recursive nature; it is therefore done exclusively in the frequency domain.
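As an illustration, a minimal Python sketch of this recursion for a single DFT bin (names are chosen here for clarity; practical implementations also add numerical safeguards against round-off drift):

```python
import numpy as np

def sliding_dft_bin(signal, n, N):
    """Track bin n of an N-point DFT over a window sliding one sample at a time."""
    twiddle = np.exp(2j * np.pi * n / N)
    # Initialize with a direct DFT of the first window.
    F = np.sum(signal[:N] * np.exp(-2j * np.pi * n * np.arange(N) / N))
    outputs = [F]
    for t in range(len(signal) - N):
        # Remove the oldest sample, add the newest, and rotate the phase.
        F = twiddle * (F - signal[t] + signal[t + N])
        outputs.append(F)
    return np.array(outputs)
```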
Sliding windowed infinite Fourier transform.
It is not possible to implement asymmetric window functions in a sliding DFT. However, an IIR version called the sliding windowed infinite Fourier transform (SWIFT) provides an exponential window, and the αSWIFT variant calculates two sliding DFTs in parallel, subtracting the fast-decaying one from the slow-decaying one, which yields an effective window function of formula_1.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\n\\begin{align} \nF_{t+1}(n) &= \\sum_{k=0}^{N-1} f_{k+t+1}e^{-j2\\pi k n/N}\\\\\n&= \\sum_{m=1}^N f_{m+t}e^{-j2\\pi (m-1) n/N} \\\\\n&= e^{j2\\pi n/N} \\left[ \\sum_{m=0}^{N-1} f_{m+t}e^{-j2\\pi m n /N} - f_t + f_{t+N} \\right] \\\\\n&= e^{j2\\pi n/N} \\left[F_t(n) - f_t + f_{t+N} \\right].\n\\end{align}\n"
},
{
"math_id": 1,
"text": "w(x) = e^{-x \\alpha} - e^{-x \\beta}"
}
] |
https://en.wikipedia.org/wiki?curid=63148459
|
631494
|
Moment magnitude scale
|
Measure of earthquake size, in terms of the energy released
The moment magnitude scale (MMS; denoted explicitly with "M"w or "M"wg, and generally implied with use of a single M for magnitude) is a measure of an earthquake's magnitude ("size" or strength) based on its seismic moment. "M"w was defined in a 1979 paper by Thomas C. Hanks and Hiroo Kanamori. Similar to the local magnitude/Richter scale ("M"L) defined by Charles Francis Richter in 1935, it uses a logarithmic scale; small earthquakes have approximately the same magnitudes on both scales. Despite the difference, news media often use the term "Richter scale" when referring to the moment magnitude scale.
Moment magnitude ("M"w) is considered the authoritative magnitude scale for ranking earthquakes by size. It is more directly related to the energy of an earthquake than other scales, and does not saturate – that is, it does not underestimate magnitudes as other scales do in certain conditions. It has become the standard scale used by seismological authorities like the U.S. Geological Survey for reporting large earthquakes (typically M > 4), replacing the local magnitude ("M"L) and surface wave magnitude ("M"s) scales. Subtypes of the moment magnitude scale reflect different ways of estimating the seismic moment.
History.
Richter scale: the original measure of earthquake magnitude.
At the beginning of the twentieth century, very little was known about how earthquakes happen, how seismic waves are generated and propagate through the Earth's crust, and what information they carry about the earthquake rupture process; the first magnitude scales were therefore empirical. The initial step in determining earthquake magnitudes empirically came in 1931 when the Japanese seismologist Kiyoo Wadati showed that the maximum amplitude of an earthquake's seismic waves diminished with distance at a certain rate. Charles F. Richter then worked out how to adjust for epicentral distance (and some other factors) so that the logarithm of the amplitude of the seismograph trace could be used as a measure of "magnitude" that was internally consistent and corresponded roughly with estimates of an earthquake's energy. He established a reference point and the ten-fold (exponential) scaling of each degree of magnitude, and in 1935 published what he called the "magnitude scale", now called the local magnitude scale, labeled "M"L. (This scale is also known as the "Richter scale", but news media sometimes use that term indiscriminately to refer to other similar scales.)
The local magnitude scale was developed on the basis of shallow, moderate-sized earthquakes recorded at regional distances, conditions in which the surface waves are predominant. At greater depths, distances, or magnitudes the surface waves are greatly reduced, and the local magnitude scale underestimates the magnitude, a problem called "saturation". Additional scales were developed – a surface-wave magnitude scale ("M"s) by Beno Gutenberg in 1945, a body-wave magnitude scale by Gutenberg and Richter in 1956, and a number of variants – to overcome the deficiencies of the "M"L scale, but all are subject to saturation. A particular problem was that the "M"s scale (which in the 1970s was the preferred magnitude scale) saturates for the largest events and therefore underestimates the energy release of "great" earthquakes such as the 1960 Chilean and 1964 Alaskan earthquakes. These had magnitudes of 8.5 and 8.4 respectively but were notably more powerful than other M 8 earthquakes; their moment magnitudes were closer to 9.6 and 9.3, respectively.
Single couple or double couple.
The study of earthquakes is challenging as the source events cannot be observed directly, and it took many years to develop the mathematics for understanding what the seismic waves from an earthquake can tell about the source event. An early step was to determine how different systems of forces might generate seismic waves equivalent to those observed from earthquakes.
The simplest force system is a single force acting on an object. If it has sufficient strength to overcome any resistance it will cause the object to move ("translate"). A pair of forces, acting on the same "line of action" but in opposite directions, will cancel; if they cancel (balance) exactly there will be no net translation, though the object will experience stress, either tension or compression. If the pair of forces are offset, acting along parallel but separate lines of action, the object experiences a rotational force, or torque. In mechanics (the branch of physics concerned with the interactions of forces) this model is called a "couple", also "simple couple" or "single couple". If a second couple of equal and opposite magnitude is applied their torques cancel; this is called a "double couple". A double couple can be viewed as "equivalent to a pressure and tension acting simultaneously at right angles".
In 1923 Hiroshi Nakano showed that certain aspects of seismic waves could be explained in terms of a double couple model. This led to a three-decade-long controversy over the best way to model the seismic source: as a single couple, or a double couple. While Japanese seismologists favored the double couple, most seismologists favored the single couple. Although the single couple model had some shortcomings, it seemed more intuitive, and there was a belief – mistaken, as it turned out – that the elastic rebound theory for explaining why earthquakes happen required a single couple model. In principle these models could be distinguished by differences in the radiation patterns of their S-waves, but the quality of the observational data was inadequate for that.
Theoretical work eventually showed that the seismic radiation from a rupture with slip – a dislocation – could be matched by an equivalent double couple, but not by a single couple. This was confirmed as better and more plentiful data coming from the World-Wide Standard Seismograph Network (WWSSN) permitted closer analysis of seismic waves. Notably, in 1966 Keiiti Aki showed that the seismic moment of the 1964 Niigata earthquake as calculated from the seismic waves on the basis of a double couple was in reasonable agreement with the seismic moment calculated from the observed physical dislocation.
Dislocation theory.
A double couple model suffices to explain an earthquake's far-field pattern of seismic radiation, but tells us very little about the nature of an earthquake's source mechanism or its physical features. While slippage along a fault was theorized as the cause of earthquakes (other theories included movement of magma, or sudden changes of volume due to phase changes), observing this at depth was not possible, and understanding what could be learned about the source mechanism from the seismic waves first requires a theoretical model of that mechanism.
Modeling the physical process by which an earthquake generates seismic waves required much theoretical development of dislocation theory, first formulated by the Italian Vito Volterra in 1907, with further developments by E. H. Love in 1927. More generally applied to problems of stress in materials, an extension by F. Nabarro in 1951 was recognized by the Russian geophysicist A. V. Vvedenskaya as applicable to earthquake faulting. In a series of papers starting in 1956 she and other colleagues used dislocation theory to determine part of an earthquake's focal mechanism, and to show that a dislocation – a rupture accompanied by slipping – was indeed equivalent to a double couple.
In a pair of papers in 1958, J. A. Steketee worked out how to relate dislocation theory to geophysical features. Numerous other researchers worked out other details, culminating in a general solution in 1964 by Burridge and Knopoff, which established the relationship between double couples and the theory of elastic rebound, and provided the basis for relating an earthquake's physical features to seismic moment.
Seismic moment.
"Seismic moment" – symbol – is a measure of the fault slip and area involved in the earthquake. Its value is the torque of each of the two force couples that form the earthquake's equivalent double-couple. (More precisely, it is the scalar magnitude of the second-order moment tensor that describes the force components of the double-couple.) Seismic moment is measured in units of Newton meters (N·m) or Joules, or (in the older CGS system) dyne-centimeters (dyn-cm).
The first calculation of an earthquake's seismic moment from its seismic waves was by Keiiti Aki for the 1964 Niigata earthquake. He did this two ways. First, he used data from distant stations of the WWSSN to analyze long-period (200 second) seismic waves (wavelength of about 1,000 kilometers) to determine the magnitude of the earthquake's equivalent double couple. Second, he drew upon the work of Burridge and Knopoff on dislocation to determine the amount of slip, the energy released, and the stress drop (essentially how much of the potential energy was released). In particular, he derived an equation that relates an earthquake's seismic moment to its physical parameters:
"M"
"μūS"
with μ being the rigidity (or resistance to moving) of a fault with a surface area of S over an average dislocation (distance) of ū. (Modern formulations replace ūS with the equivalent D̄A, known as the "geometric moment" or "potency".) By this equation the "moment" determined from the double couple of the seismic waves can be related to the moment calculated from knowledge of the surface area of fault slippage and the amount of slip. In the case of the Niigata earthquake the dislocation estimated from the seismic moment reasonably approximated the observed dislocation.
Seismic moment is a measure of the work (more precisely, the torque) that results in inelastic (permanent) displacement or distortion of the Earth's crust. It is related to the total energy released by an earthquake. However, the power or potential destructiveness of an earthquake depends (among other factors) on how much of the total energy is converted into seismic waves. This is typically 10% or less of the total energy, the rest being expended in fracturing rock or overcoming friction (generating heat).
Nonetheless, seismic moment is regarded as the fundamental measure of earthquake size, representing more directly than other parameters the physical size of an earthquake. As early as 1975 it was considered "one of the most reliably determined instrumental earthquake source parameters".
Introduction of an energy-motivated magnitude "M"w.
Most earthquake magnitude scales suffered from the fact that they only provided a comparison of the amplitude of waves produced at a standard distance and frequency band; it was difficult to relate these magnitudes to a physical property of the earthquake. Gutenberg and Richter suggested that radiated energy Es could be estimated as
formula_0
(in joules). Unfortunately, the duration of many very large earthquakes was longer than 20 seconds, the period of the surface waves used in the measurement of "M"s. This meant that giant earthquakes such as the 1960 Chilean earthquake (M 9.5) were only assigned an "M"s value that understated their true size. Caltech seismologist Hiroo Kanamori recognized this deficiency and took the simple but important step of defining a magnitude based on estimates of radiated energy, "M"w, where the "w" stood for work (energy):
formula_1
Kanamori recognized that measurement of radiated energy is technically difficult since it involves the integration of wave energy over the entire frequency band. To simplify this calculation, he noted that the lowest frequency parts of the spectrum can often be used to estimate the rest of the spectrum. The lowest frequency asymptote of a seismic spectrum is characterized by the seismic moment, "M"0. Using an approximate relation between radiated energy and seismic moment (which assumes stress drop is complete and ignores fracture energy),
formula_2
(where "E"s is in joules and "M"0 is in Nformula_3m), Kanamori approximated "M"w by
formula_4
Moment magnitude scale.
The formula above made it much easier to estimate the energy-based magnitude "M"w, but it changed the fundamental nature of the scale into a moment magnitude scale. USGS seismologist Thomas C. Hanks noted that Kanamori's "M"w scale was very similar to a relationship between "M"L and "M"0 that had been reported earlier:
formula_5
Hanks and Kanamori combined their work to define a new magnitude scale based on estimates of seismic moment,
formula_6
where formula_7 is defined in newton meters (N·m).
Current use.
Moment magnitude is now the most common measure of earthquake size for medium to large earthquake magnitudes, but in practice, seismic moment ("M"0), the seismological parameter it is based on, is not measured routinely for smaller quakes. For example, the United States Geological Survey does not use this scale for earthquakes with a magnitude of less than 3.5, which includes the great majority of quakes.
Popular press reports most often deal with larger, significant earthquakes. For these events, the preferred magnitude is the moment magnitude "M"w, not Richter's local magnitude "M"L.
Definition.
The symbol for the moment magnitude scale is "M"w, with the subscript "w" meaning mechanical work accomplished. The moment magnitude "M"w is a dimensionless value defined by Hiroo Kanamori as
formula_8
where "M"0 is the seismic moment in dyne⋅cm (10^−7 N⋅m). The constant values in the equation are chosen to achieve consistency with the magnitude values produced by earlier scales, such as the local magnitude and the surface wave magnitude. Thus, a magnitude zero microearthquake has a seismic moment of approximately 1.1×10^9 N⋅m, while the Great Chilean earthquake of 1960, with an estimated moment magnitude of 9.4–9.6, had a seismic moment between 1.4×10^23 N⋅m and 2.8×10^23 N⋅m.
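A short numerical sketch of this definition (illustrative Python; the function names are chosen here and are not part of any standard):

```python
import math

def moment_to_magnitude(m0_dyne_cm):
    """Moment magnitude from seismic moment in dyne-cm, Mw = (2/3)*log10(M0) - 10.7."""
    return (2.0 / 3.0) * math.log10(m0_dyne_cm) - 10.7

def magnitude_to_moment(mw):
    """Seismic moment in dyne-cm corresponding to a given moment magnitude."""
    return 10 ** (1.5 * (mw + 10.7))

print(magnitude_to_moment(0.0))   # ~1.1e16 dyne-cm (1.1e9 N-m) for a magnitude-zero event
print(moment_to_magnitude(2e30))  # ~9.5, on the order of the 1960 Chilean earthquake
```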
Seismic moment magnitude ("M" wg or Das Magnitude Scale ) and moment magnitude ("M" w) scales
To understand the magnitude scales based on "M"o detailed background of "M"wg and "M"w scales is given below.
"M" w scale
Hiroo Kanamori defined a magnitude scale (Log "W"0 = 1.5 "M"w + 11.8, where "W"0 is the minimum strain energy) for great earthquakes using Gutenberg Richter Eq. (1).
Log Es = 1.5 Ms + 11.8 (A)
Hiroo Kanamori used "W"0 in place of "E"s (dyn.cm) and consider a constant term ("W"0/"M"o = 5 × 10−5) in Eq. (A) and estimated "M"s and denoted as "M"w (dyn.cm). The energy Eq. (A) is derived by substituting "m" = 2.5 + 0.63 M in the energy equation Log "E" = 5.8 + 2.4 m (Richter 1958), where "m" is the Gutenberg unified magnitude and "M" is a least squares approximation to the magnitude determined from surface wave magnitudes. After replacing the ratio of seismic Energy ("E") and Seismic Moment ("M"o), i.e., "E"/"M"o = 5 × 10−5, into the Gutenberg–Richter energy magnitude Eq. (A), Hanks and Kanamori provided Eq. (B):
Log M0 = 1.5 Ms + 16.1 (B)
Note that Eq. (B) was already derived by Hiroo Kanamori and termed it as "M"w. Eq. (B) was based on large earthquakes; hence, in order to validate Eq. (B) for intermediate and smaller earthquakes, Hanks and Kanamori (1979) compared this Eq. (B) with Eq. (1) of Percaru and Berckhemer (1978) for the magnitude 5.0 ≤ "M"s ≤ 7.5 (Hanks and Kanamori 1979). Note that Eq. (1) of Percaru and Berckhemer (1978) for the magnitude range 5.0 ≤ "M"s ≤ 7.5 is not reliable due to the inconsistency of defined magnitude range (moderate to large earthquakes defined as "M"s ≤ 7.0 and "M"s = 7–7.5) and scarce data in lower magnitude range (≤ 7.0) which rarely represents the global seismicity (e.g., see Figs. 1A, B, 4 and Table 2 of Percaru and Berckhemer 1978). Furthermore, Equation (1) of Percaru and Berckhemer 1978) is only valid for (≤ 7.0).
Relations between seismic moment, potential energy released and radiated energy.
Seismic moment is not a direct measure of energy changes during an earthquake. The relations between seismic moment and the energies involved in an earthquake depend on parameters that have large uncertainties and that may vary between earthquakes. Potential energy is stored in the crust in the form of elastic energy due to built-up stress and gravitational energy. During an earthquake, a portion formula_9 of this stored energy is transformed into dissipated fracture and frictional energy formula_10, heat formula_11, and radiated seismic energy formula_12.
The potential energy drop caused by an earthquake is related approximately to its seismic moment by
formula_13
where formula_14 is the average of the "absolute" shear stresses on the fault before and after the earthquake and formula_15 is the average of the shear moduli of the rocks that constitute the fault. Currently, there is no technology to measure absolute stresses at all depths of interest, nor any method to estimate them accurately, and formula_14 is thus poorly known. It could vary greatly from one earthquake to another. Two earthquakes with identical formula_16 but different formula_14 would have released different formula_9.
The radiated energy caused by an earthquake is approximately related to seismic moment by
formula_17
where formula_18 is the radiated efficiency and formula_19 is the static stress drop, i.e., the difference between shear stresses on the fault before and after the earthquake. These two quantities are far from being constants. For instance, formula_20 depends on rupture speed; it is close to 1 for regular earthquakes but much smaller for slower earthquakes such as tsunami earthquakes and slow earthquakes. Two earthquakes with identical formula_16 but different formula_20 or formula_19 would have radiated different formula_21.
Because formula_21 and formula_16 are fundamentally independent properties of an earthquake source, and since formula_21 can now be computed more directly and robustly than in the 1970s, introducing a separate magnitude associated with radiated energy was warranted. Choy and Boatwright defined in 1995 the "energy magnitude"
formula_22
where formula_21 is in J (N·m).
Comparative energy released by two earthquakes.
Assuming the values of σ̄/μ are the same for all earthquakes, one can consider "M"w as a measure of the potential energy change Δ"W" caused by earthquakes. Similarly, if one assumes formula_23 is the same for all earthquakes, one can consider "M"w as a measure of the energy "E"s radiated by earthquakes.
Under these assumptions, the following formula, obtained by solving for "M"0 the equation defining "M"w, allows one to assess the ratio formula_24 of energy release (potential or radiated) between two earthquakes of different moment magnitudes, formula_25 and formula_26:
formula_27
As with the Richter scale, an increase of one step on the logarithmic scale of moment magnitude corresponds to a 10^1.5 ≈ 32 times increase in the amount of energy released, and an increase of two steps corresponds to a 10^3 = 1000 times increase in energy. Thus, an earthquake of "M"w 7.0 contains 1000 times as much energy as one of 5.0 and about 32 times that of one of 6.0.
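The comparison can be made concrete with a one-line computation (illustrative Python):

```python
def energy_ratio(m1, m2):
    """Ratio of energy release between earthquakes of moment magnitudes m1 and m2."""
    return 10 ** (1.5 * (m1 - m2))

print(energy_ratio(7.0, 6.0))  # ~31.6, about 32 times
print(energy_ratio(7.0, 5.0))  # 1000.0
```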
Comparison with TNT equivalents.
To make the significance of the magnitude value more tangible, the seismic energy released during the earthquake is sometimes compared to the effect of the conventional chemical explosive TNT.
From the Gutenberg–Richter relation mentioned above, the seismic energy formula_28 is
formula_29
or converted into Hiroshima bombs:
formula_30
For comparison of seismic energy (in joules) with the corresponding explosion energy, a value of 4.2 × 10^9 joules per ton of TNT applies.
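The conversion can be sketched as follows (illustrative Python, using the constants quoted above):

```python
def seismic_energy_joules(ms):
    """Radiated seismic energy in joules, E_s = 10^(1.5*Ms + 4.8)."""
    return 10 ** (1.5 * ms + 4.8)

def tnt_tons(ms):
    """Equivalent tons of TNT, using 4.2e9 J per ton of TNT."""
    return seismic_energy_joules(ms) / 4.2e9

def hiroshima_bombs(ms):
    """Equivalent number of Hiroshima bombs, using 5.25e13 J per bomb as in the text."""
    return seismic_energy_joules(ms) / 5.25e13

print(round(tnt_tons(6.0)))          # ~15,000 tons of TNT for an Ms 6.0 event
print(round(hiroshima_bombs(6.0), 1))  # ~1.2 Hiroshima bombs
```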
The table illustrates the relationship between seismic energy and moment magnitude.
The end of the scale is at the value 10.6, corresponding to the assumption that at this value the Earth's crust would have to break apart completely.
Subtypes of Mw.
Various ways of determining moment magnitude have been developed, and several subtypes of the scale can be used to indicate the basis used.
Notes.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Div col/styles.css"/><templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\log _{10} E_s \\approx 4.8 + 1.5 M_S, "
},
{
"math_id": 1,
"text": " M_w = 2/3 \\log _{10} E_s - 3.2 "
},
{
"math_id": 2,
"text": "E_s \\approx M_0 / (2 \\times 10^4) "
},
{
"math_id": 3,
"text": " \\cdot "
},
{
"math_id": 4,
"text": " M_w = ( \\log _{10} M_0 - 9.1 ) /1.5 "
},
{
"math_id": 5,
"text": " M_L \\approx ( \\log _{10} M_0 - 9.0 ) /1.5 "
},
{
"math_id": 6,
"text": " M = ( \\log _{10} M_0 - 9.05 ) /1.5 "
},
{
"math_id": 7,
"text": " M_0 "
},
{
"math_id": 8,
"text": "M_\\mathrm{w} = {\\frac{2}{3}}\\log_{10}(M_0) - 10.7,"
},
{
"math_id": 9,
"text": "\\Delta W"
},
{
"math_id": 10,
"text": "E_f"
},
{
"math_id": 11,
"text": "E_h"
},
{
"math_id": 12,
"text": "E_s"
},
{
"math_id": 13,
"text": "\\Delta W \\approx \\frac{\\overline\\sigma}{\\mu} M_0"
},
{
"math_id": 14,
"text": "\\overline\\sigma"
},
{
"math_id": 15,
"text": "\\mu"
},
{
"math_id": 16,
"text": "M_0"
},
{
"math_id": 17,
"text": " E_\\mathrm{s} \\approx \\eta_R \\frac{\\Delta\\sigma_s}{2\\mu} M_0 "
},
{
"math_id": 18,
"text": "\\eta_R = E_s /(E_s+E_f)"
},
{
"math_id": 19,
"text": "\\Delta\\sigma_s"
},
{
"math_id": 20,
"text": "\\eta_R"
},
{
"math_id": 21,
"text": "E_\\mathrm{s}"
},
{
"math_id": 22,
"text": "M_\\mathrm{E} = \\textstyle{\\frac{2}{3}}\\log_{10}E_\\mathrm{s} -3.2"
},
{
"math_id": 23,
"text": "\\eta_R \\Delta\\sigma_s/2\\mu"
},
{
"math_id": 24,
"text": "E_1/E_2"
},
{
"math_id": 25,
"text": "m_1"
},
{
"math_id": 26,
"text": "m_2"
},
{
"math_id": 27,
"text": " E_1/E_2 \\approx 10^{\\frac{3}{2}(m_1 - m_2)}."
},
{
"math_id": 28,
"text": "E_\\mathrm{S}"
},
{
"math_id": 29,
"text": "E_\\mathrm{S} = 10^{\\;\\!1.5 \\cdot M_\\mathrm{S} + 4.8}"
},
{
"math_id": 30,
"text": "E_\\mathrm{S}= \\frac{10^{\\;\\!1.5 \\cdot M_\\mathrm{S} + 4.8}}{5.25\\cdot10^{13}} = 10^{\\;\\!1.5 \\cdot M_\\mathrm{S} - 8.92}"
}
] |
https://en.wikipedia.org/wiki?curid=631494
|
63167255
|
2 Kings 4
|
2 Kings, chapter 4
2 Kings 4 is the fourth chapter of the second part of the Books of Kings in the Hebrew Bible or the Second Book of Kings in the Old Testament of the Christian Bible. The book is a compilation of various annals recording the acts of the kings of Israel and Judah by a Deuteronomic compiler in the seventh century BCE, with a supplement added in the sixth century BCE. In this chapter some of Elisha's acts are recorded: the first part (verses 1–7) is how he helped a poor widow of a prophet to repay her family debts, the second part (verses 8–37) is how he helped a family to have a son, and the third part (verses 38–44) is how he helped to make the food of his disciples harmless to eat as well as to multiply a small amount of food to feed about one hundred guests with some leftovers.
Text.
This chapter was originally written in the Hebrew language and since the 16th century is divided into 44 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
The narrative moves abruptly from a war story involving Israel and Judah into a series of four small-scale episodes in which Elisha performs miracles for individuals or his disciples, the first two of which involve women, one poor and one rich. Overall, this chapter gives a view about the way the groups of prophets live, such as the community around Elisha. They seemed to lead an 'eremitic existence' in deserted areas, with extremely modest needs, but had followers in the cities from where they received visitors and sometimes they made preaching journeys to the cities themselves.
Elisha himself acts as a 'traveling temple, a human "tabernacle" that bears the life-giving glory of Yahweh', supplying northern Israel with what they could have gotten from the temple in Jerusalem: the water of life and cleansing, food, and access to the presence of God. Water is an important object, symbolizing life from YHWH, provided through Elisha's ministry: the water at Jericho was purified (2 Kings 2:19–22), water came from nowhere to sustain the armies and animals of the three kings (2 Kings 3:20), Naaman was directed to the water of cleansing (2 Kings 5), an iron tool was made to float on the water (2 Kings 6:1–7), and Jehoram was directed to give food and water to Aramean soldiers (2 Kings 6:22–23).
The Shunammite woman even sets up a little "temple" for Elisha in the "upper room" (verse 10), where she places a bed, a table, a chair and a lampstand (Hebrew: "menorah"), a set of furniture comparable to that in the Jerusalem temple ("table" for "showbread"; "chair"/"throne" for "ark"; "bed" for "altar"). Elisha speaks to the woman through his "priest," Gehazi (2 Kings 4:12–13), the Shunammite visits the prophet on Sabbaths and new moons (4:23; in the northern Kingdom, Sabbath and new moons are usually kept, as is evidenced from Amos 8:5), and the sons of the prophets bring him their firstfruits (4:42). What Israel normally expects at the temple is available from Elisha; what Israelites would expect to do at the temple, they do in his presence.
Elisha helps a poor widow (4:1–7).
A wife of a prophet-disciple in Elisha's group of prophets was left with an insolvent debt when her husband died, and faced pressure from the creditor to give up her sons as temporary slaves in payment for the debts. This was a lawful arrangement for the people of Israel (cf. Exodus 21:2-4; Deuteronomy 15:12), which is also found throughout the ancient Near East. However, in Elisha's time, it was used systematically as a method to rob farmers of their land (Isaiah 5:8; Amos 2:6; Micah 2:2). In the widow's case, the loss of support from her sons, after losing the protection of her husband, would severely ruin her life. Elisha, apparently regarded as the spiritual leader of the prophet-fraternity as well as 'a kind of clan-chief carrying social responsibility for its members', might not have had the material, financial or legal means to help her, but he could perform miracles; this time by increasing what little she had beyond all measure with the active help of her and her children. The widow showed her faith in the prophet and his God (cf. a similar structure in 1 Kings 17:7–16 and Mark 6:35–44; 8:1–10) and received some full jars of oil, worth enough money to relieve the woman and her children from their plight. As in the earlier purification story of the water (2:19-22), Elisha enlists the help of the person for whom the miracle is to be performed.
"A certain woman of the wives of the sons of the prophets cried out to Elisha, saying, "Your servant my husband is dead, and you know that your servant feared the Lord. And the creditor is coming to take my two sons to be his slaves.""
Elisha helps a childless woman to bear a son (4:8–37).
Compared to the previous miraculous provision of oil, the second episode reveals interesting contrasts: "The poor widow had two children but no food, but here is a rich matron with no children but plenty to offer Elisha. The poor woman appealed to Elisha; the Shunammite woman asks for nothing. The miracle of the oil saves the poor woman's children; the miracle of the Shunammite's child leads to his death. Elisha instructs the poor woman; the Shunammite takes matters into her own hands and forces Elisha to revive her dead son". In this episode, Elisha is 'twice caught off guard and must quickly find solutions to the situations that confront him'. The story of the woman and her son will be concluded in chapter 8.
Structure of 4:8–37.
The main act is the Shunammite's appeal to Elisha and his response, and this is prefaced by three background scenes, each of which begins with the phrase "one day". The episode may be outlined as follows:
I. Background
A The Shunammite woman prepares a place for Elisha – "one day" (verses 8–10)
B Elisha confronts the woman and promises a son, who is born – "one day" (verses 11–17)
C The son dies – "one day" (verses 18–20)
II. Foreground
A' The Shunammite woman prepares for her journey to Elisha (verses 21–25a)
B' The woman confronts Elisha (verses 25b–30)
C' Gehazi fails and Elisha succeeds in reviving the son (verses 31–37)
"Now it happened one day that Elisha went to Shunem, where there was a notable woman, and she persuaded him to eat some food. So it was, as often as he passed by, he would turn in there to eat some food."
[The woman said to her husband:] "Please, let us make a small upper room on the wall; and let us put a bed for him there, and a table and a chair and a lampstand; so it will be, whenever he comes to us, he can turn in there."
Elisha helps the disciples with meals (4:38–44).
The group of prophets in Elisha's community must literally scrape together a living in the barren area of the lower Jordan valley, but their trust in YHWH enables them to enjoy divine care. Two of their miraculous experiences are recorded here. One obviously inexperienced man finds a vegetable he does not recognize and puts it in the large cooking-pot for a meal, but it turns out to have toxic effects. Elisha performs a miracle to make it harmless by adding a small amount of flour (verses 38–41). Another short episode in verses 42–44 involves the multiplication of food (as also known in the New Testament) from the little that they have to an amount with which all who are hungry can be satisfied, with some left over.
"So one went out into the field to gather herbs, and found a wild vine, and gathered from it a lapful of wild gourds, and came and sliced them into the pot of stew, though they did not know what they were."
"And there came a man from Ba'al-shalisha, and brought the man of God bread of the first fruits, twenty loaves of barley, and full ears of corn in his sack. And he said, Give to the people, that they may eat."
House of Elisha?
During the 2013 excavations in Tel Rehov a team directed by the Hebrew University of Jerusalem archaeologist Amihai Mazar uncovered a pottery fragment (sherd) bearing the name "Elisha", a table and a bench in a particular room excavated from a ruin dated to the second half of the ninth century BCE (the period when the prophet Elisha was active), which is linked to 2 Kings 4:8–10. Additionally, a storage jar from the same period was found in the ruin of another building at Tel Rehov bearing the inscription "Nimshi", the same name as the father or grandfather of the 9th-century king Jehu.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=63167255
|
63169268
|
Modes of variation
|
In statistics, modes of variation are a continuously indexed set of vectors or functions that are centered at a mean and are used to depict the variation in a population or sample. Typically, variation patterns in the data can be decomposed in descending order of eigenvalues with the directions represented by the corresponding eigenvectors or eigenfunctions. Modes of variation provide a visualization of this decomposition and an efficient description of variation around the mean. Both in principal component analysis (PCA) and in functional principal component analysis (FPCA), modes of variation play an important role in visualizing and describing the variation in the data contributed by each eigencomponent. In real-world applications, the eigencomponents and associated modes of variation help to interpret complex data, especially in exploratory data analysis (EDA).
Formulation.
Modes of variation are a natural extension of PCA and FPCA.
Modes of variation in PCA.
If a random vector formula_0 has the mean vector formula_1, and the covariance matrix formula_2 with eigenvalues formula_3 and corresponding orthonormal eigenvectors formula_4, by eigendecomposition of a real symmetric matrix, the covariance matrix formula_5 can be decomposed as
formula_6
where formula_7 is an orthogonal matrix whose columns are the eigenvectors of formula_5, and formula_8 is a diagonal matrix whose entries are the eigenvalues of formula_5. By the Karhunen–Loève expansion for random vectors, one can express the centered random vector in the eigenbasis
formula_9
where formula_10 is the principal component associated with the formula_11-th eigenvector formula_12, with the properties
formula_13 and formula_14
Then the formula_15-th mode of variation of formula_16 is the set of vectors, indexed by formula_17,
formula_18
where formula_19 is typically selected as formula_20.
Modes of variation in FPCA.
For a square-integrable random function formula_21, where typically formula_22 and formula_23 is an interval, denote the mean function by formula_24, and the covariance function by
formula_25
where formula_26 are the eigenvalues and formula_27 are the orthonormal eigenfunctions of the linear Hilbert–Schmidt operator
formula_28
By the Karhunen–Loève theorem, one can express the centered function in the eigenbasis,
formula_29
where
formula_30
is the formula_15-th principal component with the properties
formula_31 and formula_32
Then the formula_15-th mode of variation of formula_33 is the set of functions, indexed by formula_17,
formula_34
that are viewed simultaneously over the range of formula_17, usually for formula_35.
Estimation.
The formulation above is derived from properties of the population; in real-world applications these quantities must be estimated from data. The key idea is to estimate the mean and covariance.
Modes of variation in PCA.
Suppose the data formula_36 represent formula_37 independent drawings from some formula_38-dimensional population formula_16 with mean vector formula_39 and covariance matrix formula_5. These data yield the sample mean vector formula_40, and the sample covariance matrix formula_41 with eigenvalue-eigenvector pairs formula_42. Then the formula_15-th mode of variation of formula_16 can be estimated by
formula_43
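The following Python sketch (illustrative; the function and variable names are not from any particular package) computes this sample estimate from a data matrix:

```python
import numpy as np

def pca_modes_of_variation(X, k, alphas=(-2, -1, 1, 2)):
    """Estimate the k-th mode of variation from an (n x p) data matrix X.

    A minimal sketch: sample mean plus/minus alpha * sqrt(lambda_k) * e_k,
    using the eigendecomposition of the sample covariance matrix.
    """
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)              # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(S)     # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]        # reorder to descending
    lam_k = eigvals[order[k - 1]]
    e_k = eigvecs[:, order[k - 1]]
    return {a: xbar + a * np.sqrt(lam_k) * e_k for a in alphas}

# Example with simulated 3-dimensional data.
rng = np.random.default_rng(0)
X = rng.multivariate_normal([0.0, 0.0, 0.0], np.diag([3.0, 1.0, 0.5]), size=200)
modes = pca_modes_of_variation(X, k=1)
```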
Modes of variation in FPCA.
Consider formula_37 realizations formula_44 of a square-integrable random function formula_45 with the mean function formula_24 and the covariance function formula_46. Functional principal component analysis provides methods for estimating formula_47 and formula_48, often involving pointwise estimation and interpolation. Substituting these estimates for the unknown quantities, the formula_15-th mode of variation of formula_33 can be estimated by
formula_49
Applications.
Modes of variation are useful to visualize and describe the variation patterns in the data, sorted by the eigenvalues. In real-world applications, the modes of variation associated with eigencomponents allow one to interpret complex data, such as the evolution of functional traits and other infinite-dimensional data. To illustrate how modes of variation work in practice, two examples are shown in the graphs to the right, which display the first two modes of variation. The solid curve represents the sample mean function. The dashed, dot-dashed, and dotted curves correspond to modes of variation with formula_50 and formula_51, respectively.
The first graph displays the first two modes of variation of female mortality data from 41 countries in 2003. The object of interest is the log hazard function between ages 0 and 100 years. The first mode of variation suggests that the variation of female mortality is smaller for ages around 0 or 100, and larger for ages around 25. An appropriate and intuitive interpretation is that mortality around 25 is driven by accidental death, while around 0 or 100, mortality is related to congenital disease or natural death.
Compared to the female mortality data, the modes of variation of the male mortality data show higher mortality after around age 20, possibly related to the fact that life expectancy for women is higher than that for men.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbf{X}=(X_1, X_2, \\cdots, X_p)^T"
},
{
"math_id": 1,
"text": "\\boldsymbol{\\mu}_p"
},
{
"math_id": 2,
"text": "\\mathbf{\\Sigma}_{p\\times p}"
},
{
"math_id": 3,
"text": "\\lambda_1\\geq \\lambda_2\\geq \\cdots \\geq \\lambda_p\\geq0"
},
{
"math_id": 4,
"text": "\\mathbf{e}_1, \\mathbf{e}_2, \\cdots,\\mathbf{e}_p"
},
{
"math_id": 5,
"text": "\\mathbf{\\Sigma}"
},
{
"math_id": 6,
"text": "\\mathbf{\\Sigma}=\\mathbf{Q}\\mathbf{\\Lambda}\\mathbf{Q}^T,"
},
{
"math_id": 7,
"text": "\\mathbf{Q}"
},
{
"math_id": 8,
"text": "\\mathbf{\\Lambda}"
},
{
"math_id": 9,
"text": "\\mathbf{X}-\\boldsymbol{\\mu}=\\sum_{k=1}^p\\xi_k\\mathbf{e}_k,"
},
{
"math_id": 10,
"text": "\\xi_k=\\mathbf{e}_k^T(\\mathbf{X}-\\boldsymbol{\\mu})"
},
{
"math_id": 11,
"text": " k"
},
{
"math_id": 12,
"text": "\\mathbf{e}_k"
},
{
"math_id": 13,
"text": "\\operatorname{E}(\\xi_k)=0, \\operatorname{Var}(\\xi_k)=\\lambda_k,"
},
{
"math_id": 14,
"text": "\\operatorname{E}(\\xi_k\\xi_l)=0\\ \\text{for}\\ l\\neq k."
},
{
"math_id": 15,
"text": "k"
},
{
"math_id": 16,
"text": "\\mathbf{X}"
},
{
"math_id": 17,
"text": "\\alpha"
},
{
"math_id": 18,
"text": "\\mathbf{m}_{k, \\alpha}=\\boldsymbol{\\mu}\\pm \\alpha\\sqrt{\\lambda_k}\\mathbf{e}_k, \\alpha\\in[-A, A],"
},
{
"math_id": 19,
"text": "A"
},
{
"math_id": 20,
"text": "2\\ \\text{or}\\ 3"
},
{
"math_id": 21,
"text": "X(t), t \\in \\mathcal{T}\\subset R^p"
},
{
"math_id": 22,
"text": "p=1"
},
{
"math_id": 23,
"text": "\\mathcal{T}"
},
{
"math_id": 24,
"text": " \\mu(t) = \\operatorname{E}(X(t)) "
},
{
"math_id": 25,
"text": " G(s, t) = \\operatorname{Cov}(X(s), X(t)) = \\sum_{k=1}^\\infty \\lambda_k \\varphi_k(s) \\varphi_k(t), "
},
{
"math_id": 26,
"text": "\\lambda_1\\geq \\lambda_2\\geq \\cdots \\geq 0"
},
{
"math_id": 27,
"text": "\\{\\varphi_1, \\varphi_2, \\cdots\\}"
},
{
"math_id": 28,
"text": " G: L^2(\\mathcal{T}) \\rightarrow L^2(\\mathcal{T}),\\, G(f) = \\int_\\mathcal{T} G(s, t) f(s) ds. "
},
{
"math_id": 29,
"text": " X(t) - \\mu(t) = \\sum_{k=1}^\\infty \\xi_k \\varphi_k(t),\n"
},
{
"math_id": 30,
"text": " \\xi_k = \\int_\\mathcal{T} (X(t) - \\mu(t)) \\varphi_k(t) dt \n"
},
{
"math_id": 31,
"text": " \\operatorname{E}(\\xi_k) = 0, \\operatorname{Var}(\\xi_k) = \\lambda_k,"
},
{
"math_id": 32,
"text": "\\operatorname{E}(\\xi_k \\xi_l) = 0 \\text{ for } l \\ne k."
},
{
"math_id": 33,
"text": "X(t)"
},
{
"math_id": 34,
"text": "m_{k, \\alpha}(t)=\\mu(t)\\pm \\alpha\\sqrt{\\lambda_k}\\varphi_k(t),\\ t\\in \\mathcal{T},\\ \\alpha\\in [-A, A]"
},
{
"math_id": 35,
"text": "A=2\\ \\text{or}\\ 3"
},
{
"math_id": 36,
"text": "\\mathbf{x}_1, \\mathbf{x}_2, \\cdots, \\mathbf{x}_n"
},
{
"math_id": 37,
"text": "n"
},
{
"math_id": 38,
"text": "p"
},
{
"math_id": 39,
"text": "\\boldsymbol{\\mu}"
},
{
"math_id": 40,
"text": "\\overline\\mathbf{{x}}"
},
{
"math_id": 41,
"text": "\\mathbf{S}"
},
{
"math_id": 42,
"text": "(\\hat{\\lambda}_1, \\hat{\\mathbf{e}}_1), (\\hat{\\lambda}_2, \\hat{\\mathbf{e}}_2), \\cdots, (\\hat{\\lambda}_p, \\hat{\\mathbf{e}}_p)"
},
{
"math_id": 43,
"text": "\\hat{\\mathbf{m}}_{k, \\alpha}=\\overline{\\mathbf{x}}\\pm \\alpha\\sqrt{\\hat{\\lambda}_k}\\hat{\\mathbf{e}}_k, \\alpha\\in [-A, A]."
},
{
"math_id": 44,
"text": "X_1(t), X_2(t), \\cdots, X_n(t)"
},
{
"math_id": 45,
"text": "X(t), t \\in \\mathcal{T}"
},
{
"math_id": 46,
"text": " G(s, t) = \\operatorname{Cov}(X(s), X(t)) "
},
{
"math_id": 47,
"text": " \\mu(t) "
},
{
"math_id": 48,
"text": " G(s, t) "
},
{
"math_id": 49,
"text": "\\hat{m}_{k, \\alpha}(t)=\\hat{\\mu}(t)\\pm \\alpha\\sqrt{\\hat{\\lambda}_k}\\hat{\\varphi}_k(t), t\\in \\mathcal{T}, \\alpha\\in [-A, A]."
},
{
"math_id": 50,
"text": "\\alpha=\\pm1, \\pm2,"
},
{
"math_id": 51,
"text": "\\pm3"
}
] |
https://en.wikipedia.org/wiki?curid=63169268
|
63169354
|
Art Gallery Theorems and Algorithms
|
Art Gallery Theorems and Algorithms is a mathematical monograph on topics related to the art gallery problem, on finding positions for guards within a polygonal museum floorplan so that all points of the museum are visible to at least one guard, and on related problems in computational geometry concerning polygons. It was written by Joseph O'Rourke, and published in 1987 in the International Series of Monographs on Computer Science of the Oxford University Press. Only 1000 copies were produced before the book went out of print, so to keep this material accessible O'Rourke has made a pdf version of the book available online.
Topics.
The art gallery problem, posed by Victor Klee in 1973, asks for the number of points at which to place guards inside a polygon (representing the floor plan of a museum) so that each point within the polygon is visible to at least one guard. Václav Chvátal provided the first proof that the answer is at most formula_0 for a polygon with formula_1 corners, but a simplified proof by Steve Fisk based on graph coloring and polygon triangulation is more widely known. This is the opening material of the book, which goes on to cover topics including visibility, decompositions of polygons, coverings of polygons, triangulations and triangulation algorithms, and higher-dimensional generalizations, including the result that some polyhedra such as the Schönhardt polyhedron do not have triangulations without additional vertices. More generally, the book has as a theme "the interplay between discrete and computational geometry".
It has 10 chapters, whose topics include the original art gallery theorem and Fisk's triangulation-based proof; rectilinear polygons; guards that can patrol a line segment rather than a single point; special classes of polygons including star-shaped polygons, spiral polygons, and monotone polygons; non-simple polygons; prison yard problems, in which the guards must view the exterior, or both the interior and exterior, of a polygon; visibility graphs; visibility algorithms; the computational complexity of minimizing the number of guards; and three-dimensional generalizations.
Audience and reception.
The book only requires an undergraduate-level knowledge of graph theory and algorithms. However, it lacks exercises, and is organized more as a monograph than as a textbook. Despite warning that it omits some details that would be important to implementors of the algorithms that it describes, and does not describe algorithms that perform well on random inputs despite poor worst-case complexity, reviewer Wm. Randolph Franklin recommends it "for the library of every geometer".
Reviewer Herbert Edelsbrunner writes that "This book is the most comprehensive collection of results on polygons currently available and thus earns its place as a standard text in computational geometry. It is very well written and a pleasure to read." However, reviewer Patrick J. Ryan complains that some of the book's proofs are inelegant, and reviewer David Avis, writing in 1990, noted that already by that time there were "many new developments" making the book outdated. Nevertheless, Avis writes that "the book succeeds on a number of levels", as an introductory text for undergraduates or for researchers in other areas, and as an invitation to solve the "many unsolved questions" remaining in this area.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\lceil n/3\\rceil"
},
{
"math_id": 1,
"text": "n"
}
] |
https://en.wikipedia.org/wiki?curid=63169354
|
631721
|
Tractography
|
3D visualization of nerve tracts via diffusion MRI
In neuroscience, tractography is a 3D modeling technique used to visually represent nerve tracts using data collected by diffusion MRI. It uses special techniques of magnetic resonance imaging (MRI) and computer-based diffusion MRI. The results are presented in two- and three-dimensional images called tractograms.
In addition to the long tracts that connect the brain to the rest of the body, there are complicated neural circuits formed by short connections among different cortical and subcortical regions. The existence of these tracts and circuits has been revealed by histochemistry and biological techniques on post-mortem specimens. Nerve tracts are not identifiable by direct exam, CT, or MRI scans. This difficulty explains the paucity of their description in neuroanatomy atlases and the poor understanding of their functions.
The most advanced tractography algorithms can reproduce about 90% of the ground-truth bundles, but the results still contain a substantial proportion of invalid bundles.
MRI technique.
Tractography is performed using data from diffusion MRI. The free water diffusion is termed "isotropic" diffusion. If the water diffuses in a medium with barriers, the diffusion will be uneven, which is termed anisotropic diffusion. In such a case, the relative mobility of the molecules from the origin has a shape different from a sphere. This shape is often modeled as an ellipsoid, and the technique is then called diffusion tensor imaging. Barriers can be many things: cell membranes, axons, myelin, etc.; but in white matter the principal barrier is the myelin sheath of axons. Bundles of axons provide a barrier to perpendicular diffusion and a path for parallel diffusion along the orientation of the fibers.
Anisotropic diffusion is expected to be increased in areas of high mature axonal order. Conditions where the myelin or the structure of the axon are disrupted, such as trauma, tumors, and inflammation reduce anisotropy, as the barriers are affected by destruction or disorganization.
Anisotropy is measured in several ways. One way is by a ratio called fractional anisotropy (FA). An FA of 0 corresponds to a perfect sphere, whereas 1 is an ideal linear diffusion. Few regions have FA larger than 0.90. The number gives information about how aspherical the diffusion is but says nothing of the direction.
Each anisotropy is linked to an orientation of the predominant axis (predominant direction of the diffusion). Post-processing programs are able to extract this directional information.
This additional information is difficult to represent on 2D grey-scaled images. To overcome this problem, a color code is introduced. Basic colors can tell the observer how the fibers are oriented in a 3D coordinate system; this is termed an "anisotropic map". The software could encode the colors, for example, as red for fibers oriented left–right, green for fibers oriented anteroposteriorly, and blue for fibers oriented craniocaudally.
The technique is unable to discriminate the "positive" or "negative" direction in the same axis.
Mathematics.
Using diffusion tensor MRI, one can measure the apparent diffusion coefficient at each voxel in the image, and after multilinear regression across multiple images, the whole diffusion tensor can be reconstructed.
Suppose there is a fiber tract of interest in the sample. Following the Frenet–Serret formulas, we can formulate the space-path of the fiber tract as a parameterized curve:
formula_0
where formula_1 is the tangent vector of the curve. The reconstructed diffusion tensor formula_2 can be treated as a matrix, and we can compute its eigenvalues formula_3 and eigenvectors formula_4. By equating the eigenvector corresponding to the largest eigenvalue with the direction of the curve:
formula_5
we can solve for formula_6 given the data for formula_7. This can be done using numerical integration, e.g., using Runge–Kutta, and by interpolating the principal eigenvectors.
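A minimal sketch of such a tracking procedure in Python; the `tensor_field` function, step size, and stopping behaviour are illustrative assumptions rather than a standard implementation, and real software adds tensor interpolation, Runge–Kutta integration, and anisotropy-based termination:

```python
import numpy as np

def principal_direction(tensor):
    """Unit eigenvector of the largest eigenvalue of a 3x3 diffusion tensor."""
    eigvals, eigvecs = np.linalg.eigh(tensor)
    return eigvecs[:, np.argmax(eigvals)]

def track_fiber(tensor_field, seed, step=0.5, n_steps=200):
    """Trace a streamline by Euler integration of dr/ds = u1(r(s))."""
    r = np.asarray(seed, dtype=float)
    direction = principal_direction(tensor_field(r))
    path = [r.copy()]
    for _ in range(n_steps):
        u1 = principal_direction(tensor_field(r))
        if np.dot(u1, direction) < 0:   # keep a consistent orientation along the track
            u1 = -u1
        direction = u1
        r = r + step * u1
        path.append(r.copy())
    return np.array(path)

# Example: a uniform field whose principal direction is the x axis.
uniform = lambda r: np.diag([3.0, 1.0, 1.0])
streamline = track_fiber(uniform, seed=[0.0, 0.0, 0.0], n_steps=10)
```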
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\frac{d\\mathbf{r}(s)}{ds} = \\mathbf{T}(s), "
},
{
"math_id": 1,
"text": "\\mathbf{T}(s)"
},
{
"math_id": 2,
"text": " D "
},
{
"math_id": 3,
"text": " \\lambda_1, \\lambda_2, \\lambda_3 "
},
{
"math_id": 4,
"text": " \\mathbf{u}_1, \\mathbf{u}_2, \\mathbf{u}_3 "
},
{
"math_id": 5,
"text": " \\frac{d\\mathbf{r}(s)}{ds} = \\mathbf{u}_1(\\mathbf{r}(s)) "
},
{
"math_id": 6,
"text": " \\mathbf{r}(s) "
},
{
"math_id": 7,
"text": " \\mathbf{u}_1(s) "
}
] |
https://en.wikipedia.org/wiki?curid=631721
|
63173263
|
Effective range
|
Effective range is a term with several definitions depending upon context.
Distance.
Effective range may describe a distance between two points where one point is subject to an energy release at the other point. The source, receiver, and conditions between the two points must be specified to define an effective range. Effective range may represent the maximum distance at which a measuring device or receiver will predictably respond to an energy release of specified magnitude. Alternatively, effective range may be the maximum distance at which the energy released from a specified device will cause the desired effect on a target receiver. Angular dispersion may be significant to effectiveness for asymmetrical energy propagation toward small targets.
Weapons.
The following definition has been attributed to the United States Department of Defense: "The maximum distance at which a weapon may be expected to be accurate and achieve the desired effect." Accuracy is ambiguous in the absence of a specified hit probability per unit of ammunition; and for any given weapon, the desired effect could be interpreted differently depending upon the target. Subjective interpretation of these variables has caused endless and heated debate for more than a century.
With the addition of clinometers, fixed machine gun squads could set long ranges and deliver plunging fire or indirect fire at extended distances. This indirect firing method exploits the maximal practical range, which is defined by the maximum range of a small-arms projectile while still maintaining the minimum kinetic energy required to put unprotected personnel out of action, generally believed to be 15 kilogram-meters (147 J / 108 ft⋅lbf).
Advanced planned and unplanned predicted-fire methods for support and harassment, based on maps and range tables and developed during World War I, such as plunging fire and indirect fire, were not as commonly used by machine gunners during World War II and later conflicts as they had been during World War I.
Vehicles.
In a broader context, effective range describes the distance a vehicle (including weapon launch platforms like a ship or aircraft) may be expected to deliver a specified payload from a base or refueling point.
Statistics.
In statistics, range refers to the difference between the largest and smallest value of a set of quantified observations. Some observers consider it appropriate to remove unusually high or low outlying values to narrow the observed range to an effective range of the quantity being observed. Inferences based on effective range are of somewhat doubtful value if subjective judgement is used to determine which observations are discarded.
Nuclear physics.
In nuclear physics research, the effective range is a physical parameter with the dimension of length that characterizes an effective square-well scattering potential. It is related to the scattering phase shift by
formula_0.
where formula_1 is defined by its relation to the deuteron binding energy, formula_2.
In the limit of zero energy (formula_3), the scattering length can be related to the effective range by formula_4.
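A brief numerical sketch of these relations (illustrative Python; the parameter values are assumptions chosen only to demonstrate the formulas, roughly corresponding to low-energy neutron–proton scattering):

```python
import numpy as np

gamma = 0.232   # fm^-1, inverse-length parameter tied to the deuteron binding energy (assumed)
r0 = 1.75       # fm, effective range (assumed)

# Zero-energy limit: 1/a = gamma * (1 - gamma * r0 / 2)
a = 1.0 / (gamma * (1.0 - 0.5 * gamma * r0))
print(a)  # scattering length in fm, roughly 5.4 for these assumed values

# Low-energy phase shifts from k*cot(delta) = -gamma + (gamma**2 + k**2) * r0 / 2
k = np.linspace(0.05, 0.5, 4)                      # wave numbers in fm^-1
kcotd = -gamma + 0.5 * (gamma**2 + k**2) * r0
delta = np.degrees(np.arctan2(k, kcotd))           # phase shift implied by the expansion
print(delta)
```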
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "k\\cot\\delta = -\\gamma + \\frac{1}{2}\\left ( \\gamma^2+k^2 \\right )r_0+O\\left(k^4r_o^3\\right)"
},
{
"math_id": 1,
"text": "\\gamma"
},
{
"math_id": 2,
"text": "\\epsilon=\\hbar^2/M\\gamma^2"
},
{
"math_id": 3,
"text": "k^2/2m=0"
},
{
"math_id": 4,
"text": "\\alpha=\\frac{1}{a}=\\gamma\\left(1-\\frac{1}{2}\\gamma r_0\\right)"
}
] |
https://en.wikipedia.org/wiki?curid=63173263
|
6317347
|
207 (number)
|
Natural number
207 (two hundred [and] seven) is the natural number following 206 and preceding 208. It is an odd composite number with a prime factorization of formula_0. The sum of its proper divisors is formula_1, so 207 is a deficient number.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "3^2\\cdot 23"
},
{
"math_id": 1,
"text": "1+3+9+23+69=105<207"
}
] |
https://en.wikipedia.org/wiki?curid=6317347
|
6317372
|
215 (number)
|
Natural number
215 (two hundred [and] fifteen) is the natural number following 214 and preceding 216.
In other fields.
215 is also:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "=5\\times43"
},
{
"math_id": 1,
"text": "215 = (3!)^3 - 1"
},
{
"math_id": 2,
"text": "n^2-17"
},
{
"math_id": 3,
"text": "215^2-17 = 2 \\times 152^2"
}
] |
https://en.wikipedia.org/wiki?curid=6317372
|
6317376
|
217 (number)
|
Natural number
217 (two hundred [and] seventeen) is the natural number following 216 and preceding 218.
In mathematics.
217 is a centered hexagonal number, a 12-gonal number, a centered 36-gonal number, a Fermat pseudoprime to base 5, and a Blum integer. It is both the sum of two positive cubes and the difference of two positive consecutive cubes in exactly one way: formula_0. When written in binary, it is a non-repetitive Kaprekar number. It is also the sum of all the divisors of 100.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "217 = 6^3 + 1^3 = 9^3 - 8^3"
}
] |
https://en.wikipedia.org/wiki?curid=6317376
|
6317445
|
252 (number)
|
Natural number
252 (two hundred [and] fifty-two) is the natural number following 251 and preceding 253.
In mathematics.
252 is: the central binomial coefficient formula_0; formula_1, where formula_2 denotes the Ramanujan tau function; and formula_3, where formula_4 is the function that sums the cubes of the divisors of its argument:
formula_5
There are 252 points on the surface of a cuboctahedron of radius five in the face-centered cubic lattice, 252 ways of writing the number 4 as a sum of six squares of integers, 252 ways of choosing four squares from a 4×4 chessboard up to reflections and rotations, and 252 ways of placing three pieces on a Connect Four board.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\tbinom{10}{5}"
},
{
"math_id": 1,
"text": "\\tau(3)"
},
{
"math_id": 2,
"text": "\\tau"
},
{
"math_id": 3,
"text": "\\sigma_3(6)"
},
{
"math_id": 4,
"text": "\\sigma_3"
},
{
"math_id": 5,
"text": "1^3+2^3+3^3+6^3=(1^3+2^3)(1^3+3^3)=252."
}
] |
https://en.wikipedia.org/wiki?curid=6317445
|
6317450
|
253 (number)
|
Natural number
253 (two hundred [and] fifty-three) is the natural number following 252 and preceding 254.
In mathematics.
253 is:
formula_0 is a prime number. Its decimal expansion is 252 nines, an eight, and 253 more nines.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "10^{506}-10^{253}-1"
}
] |
https://en.wikipedia.org/wiki?curid=6317450
|
6317501
|
271 (number)
|
Natural number
271 (two hundred [and] seventy-one) is the natural number after 270 and before 272.
Properties.
271 is a twin prime with 269, a cuban prime (a prime number that is the difference of two consecutive cubes), and a centered hexagonal number. It is the smallest prime number bracketed on both sides by numbers divisible by cubes, and the smallest prime number bracketed by numbers with five primes (counting repetitions) in their factorizations:
formula_0 and formula_1.
After 7, 271 is the second-smallest Eisenstein–Mersenne prime, one of the analogues of the Mersenne primes in the Eisenstein integers.
271 is the largest prime factor of the five-digit repunit 11111, and the largest prime number for which the decimal period of its multiplicative inverse is 5:
formula_2
It is a sexy prime with 277.
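A brief Python sketch (illustrative, not part of the article) confirms the repunit factor and the period-5 claim by computing the multiplicative order of 10 modulo 271:
def decimal_period(p):
    # length of the repeating block of 1/p, i.e. the multiplicative order of 10 mod p
    k, r = 1, 10 % p
    while r != 1:
        r = (r * 10) % p
        k += 1
    return k
assert 11111 == 41 * 271          # 271 is the largest prime factor of the repunit 11111
assert decimal_period(271) == 5   # matches 1/271 = 0.00369 00369 ...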
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "270=2\\cdot 3^3\\cdot 5"
},
{
"math_id": 1,
"text": "272=2^4\\cdot 17"
},
{
"math_id": 2,
"text": "\\frac{1}{271}=0.00369003690036900369\\ldots"
}
] |
https://en.wikipedia.org/wiki?curid=6317501
|
6317514
|
Cantellation (geometry)
|
Geometric operation on a regular polytope
In geometry, a cantellation is a 2nd-order truncation in any dimension that bevels a regular polytope at its edges and at its vertices, creating a new facet in place of each edge and of each vertex. Cantellation also applies to regular tilings and honeycombs. Cantellating a polyhedron is also rectifying its rectification.
Cantellation (for polyhedra and tilings) is also called "expansion" by Alicia Boole Stott: it corresponds to moving the faces of the regular form away from the center, and filling in a new face in the gap for each opened edge and for each opened vertex.
Notation.
A cantellated polytope is represented by an extended Schläfli symbol t0,2{"p","q"...} or rformula_0 or rr{"p","q"...}.
For polyhedra, a cantellation offers a direct sequence from a regular polyhedron to its dual.
Example: cantellation sequence between cube and octahedron:
Example: a cuboctahedron is a cantellated tetrahedron.
For higher-dimensional polytopes, a cantellation offers a direct sequence from a regular polytope to its birectified form.
|
[
{
"math_id": 0,
"text": "\\begin{Bmatrix}p\\\\q\\\\...\\end{Bmatrix}"
}
] |
https://en.wikipedia.org/wiki?curid=6317514
|
6317517
|
276 (number)
|
Natural number
276 (two hundred [and] seventy-six) is the natural number following 275 and preceding 277.
In mathematics.
276 is the sum of 3 consecutive fifth powers (276 = 1^5 + 2^5 + 3^5). As a figurate number it is a triangular number, a hexagonal number, and a centered pentagonal number, the third number after 1 and 6 to have this combination of properties.
276 is the size of the largest set of equiangular lines in 23 dimensions. The maximal set of such lines, derived from the Leech lattice, provides the highest dimension in which the "Gerzon bound" of formula_0 is known to be attained; its symmetry group is the third Conway group, Co3.
276 is the smallest number for which it is not known if the corresponding aliquot sequence either terminates or ends in a repeating cycle.
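The aliquot sequence in question is generated by repeatedly taking the sum of proper divisors; a minimal Python sketch (illustrative, and far too slow to settle the open question) shows the iteration:
def aliquot_step(n):
    # sum of the proper divisors of n
    return sum(d for d in range(1, n) if n % d == 0)
seq = [276]
for _ in range(5):
    seq.append(aliquot_step(seq[-1]))   # the sequence grows quickly and its eventual fate is unknown
print(seq)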
In the Bible.
In Acts 27, verses 37–44, the Bible refers to 276 people on board a ship, all of whom made it to safety after the ship ran aground.
In other fields.
In the Christian calendar, there are 276 days from the Annunciation on March 25 to Christmas on December 25, a number considered significant by some authors.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\binom{n+1}{2}"
}
] |
https://en.wikipedia.org/wiki?curid=6317517
|
6317531
|
281 (number)
|
Natural number
281 is the natural number following 280 and preceding 282. It is also a prime number.
In mathematics.
281 is a twin prime with 283, Sophie Germain prime, sum of the first fourteen primes, sum of seven consecutive primes (29 + 31 + 37 + 41 + 43 + 47 + 53), Chen prime, Eisenstein prime with no imaginary part, and a centered decagonal number.
281 is the smallest prime "p" such that the decimal period length of the reciprocal of "p" is ("p"−1)/10, i.e. the period length of 1/281 is 28. However, in binary, it has period length 70.
The generalized repunit number formula_0 is composite for all prime "p" < 60000.
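Both period-length claims can be checked by computing multiplicative orders; the following Python sketch (illustrative, not part of the article) does so for base 10 and base 2:
def period_length(base, p):
    # multiplicative order of base modulo p, i.e. the period of 1/p in that base
    k, r = 1, base % p
    while r != 1:
        r = (r * base) % p
        k += 1
    return k
assert period_length(10, 281) == 28   # decimal period (281 - 1)/10, as stated above
assert period_length(2, 281) == 70    # binary period, as stated above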
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\frac{281^p-1}{280}"
}
] |
https://en.wikipedia.org/wiki?curid=6317531
|
6317563
|
288 (number)
|
Natural number
288 (two hundred [and] eighty-eight) is the natural number following 287 and preceding 289.
Because 288 = 2 · 12 · 12, it may also be called "two gross" or "two dozen dozen".
In mathematics.
Factorization properties.
Because its prime factorization formula_0 contains only the first two prime numbers 2 and 3, 288 is a 3-smooth number. This factorization also makes it a highly powerful number, a number with a record-setting value of the product of the exponents in its factorization. Among the highly abundant numbers, numbers with record-setting sums of divisors, it is one of only 13 such numbers with an odd divisor sum.
Both 288 and 289 = 17^2 are powerful numbers, numbers in which all exponents of the prime factorization are larger than one. This property is closely connected to being highly abundant with an odd divisor sum: all sufficiently large highly abundant numbers have an odd prime factor with exponent one, causing their divisor sum to be even. 288 and 289 form only the second consecutive pair of powerful numbers, after 8 and 9.
Factorial properties.
288 is a superfactorial, a product of consecutive factorials, since formula_1 Coincidentally, as well as being a product of descending powers, 288 is a sum of ascending powers: formula_2
288 appears prominently in Stirling's approximation for the factorial, as the denominator of the second term of the Stirling series
formula_3
Figurate properties.
288 is connected to the figurate numbers in multiple ways. It is a pentagonal pyramidal number and a dodecagonal number. Additionally, it is the index, in the sequence of triangular numbers, of the fifth square triangular number: formula_4
Enumerative properties.
There are 288 different ways of completely filling in a formula_5 sudoku puzzle grid. For square grids whose side length is the square of a prime number, such as 4 or 9, a completed sudoku puzzle is the same thing as a "pluperfect Latin square", an formula_6 array in which every dissection into formula_7 rectangles of equal width and height to each other has one copy of each digit in each rectangle. Therefore, there are also 288 pluperfect Latin squares of order 4. There are 288 different formula_8 invertible matrices modulo six, and 288 different ways of placing two chess queens on a formula_9 board with toroidal boundary conditions so that they do not attack each other. There are 288 independent sets in a 5-dimensional hypercube, up to symmetries of the hypercube.
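Some of these counts are easy to reproduce by brute force; for example, the following Python sketch (illustrative, not part of the article) counts the invertible 2×2 matrices modulo six, using the fact that a matrix over the integers mod 6 is invertible exactly when its determinant is coprime to 6:
from itertools import product
from math import gcd
count = sum(1 for a, b, c, d in product(range(6), repeat=4)
            if gcd((a * d - b * c) % 6, 6) == 1)
assert count == 288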
In other areas.
In early 20th-century molecular biology, some mysticism surrounded the use of 288 to count protein structures, largely based on the fact that it is a smooth number.
A common mathematical pun involves the fact that 288 = 2 · 144, and that 144 is named as a gross: "Q: Why should the number 288 never be mentioned? A: it is two gross."
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "288 = 2^5\\cdot 3^2"
},
{
"math_id": 1,
"text": "288 = 1!\\cdot 2!\\cdot 3!\\cdot 4! = 1^4\\cdot 2^3\\cdot 3^2\\cdot 4^1."
},
{
"math_id": 2,
"text": "288 = 1^1 + 2^2 + 3^3 + 4^4."
},
{
"math_id": 3,
"text": "\nn! \\sim \\sqrt{2\\pi n}\\left(\\frac{n}{e}\\right)^n \\left(1 +\\frac{1}{12n}+\\frac{1}{288n^2} - \\frac{139}{51840n^3} -\\frac{571}{2488320n^4}+ \\cdots \\right)."
},
{
"math_id": 4,
"text": "41616 = \\frac{288\\cdot 289}{2} = 204^2."
},
{
"math_id": 5,
"text": "4\\times 4"
},
{
"math_id": 6,
"text": "n\\times n"
},
{
"math_id": 7,
"text": "n"
},
{
"math_id": 8,
"text": "2\\times 2"
},
{
"math_id": 9,
"text": "6\\times 6"
}
] |
https://en.wikipedia.org/wiki?curid=6317563
|
63178607
|
2 Kings 8
|
2 Kings, chapter 8
2 Kings 8 is the eighth chapter of the second part of the Books of Kings in the Hebrew Bible or the Second Book of Kings in the Old Testament of the Christian Bible. The book is a compilation of various annals recording the acts of the kings of Israel and Judah by a Deuteronomic compiler in the seventh century BCE, with a supplement added in the sixth century BCE. This chapter records Elisha's acts in helping the family of the Shunammite woman to escape famine and then to regain their land (verses 1–6), and in contributing to Hazael's ascension to the throne of Syria (Aram) in verses 7–15; it then records the reigns of Joram and Ahaziah, the kings of Judah.
Text.
This chapter was originally written in the Hebrew language and, since the 16th century, has been divided into 29 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls, that is, 6Q4 (6QpapKgs; 150–75 BCE) with extant verses 1–5.
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Locations.
This chapter mentions or alludes to the following places (in order of appearance):
Elisha helps a refugee (8:1–6).
This part is a continuation of the story of the Shunammite woman in 4:8–37. Elisha foresees a famine, warns the woman, and recommends that she and her family leave the area until the famine ends (cf. the stories of Ruth and Joseph and the so-called "economic refugees" today). On her return seven years later she finds that her property belongs to someone else; it probably fell into the crown's hands since it had been without an owner for a while (there is no record of a dispute with neighbors). The woman appeals to the king, who, impressed by the account of Elisha's miracle-working power told by Elisha's servant Gehazi and hearing of her connection with the prophet, returns the land to her.
"Then Elisha spoke to the woman whose son he had restored to life, saying, "Arise and go, you and your household, and stay wherever you can; for the Lord has called for a famine, and furthermore, it will come upon the land for seven years.""
Elisha triggers a change of power in Damascus (8:7–15).
The events that continue into 2 Kings 9–10 form one of two political stories placed at the end of the Elisha cycle. The Aramean king, named here as Ben-hadad, becomes seriously ill and sends his general Hazael to Elisha, who was in Damascus at that time, to request an oracle. Elisha's reply is puzzling: Hazael should tell the king he will recover although he will also die (verse 10), which is clarified a little later: the king would have survived his illness (verse 14), but would not survive Hazael's assassination attempt (verse 15). Hazael's future brutality against Israel was also revealed by the prophet (verses 11–13; cf. 2 Kings 8:28), a tragic event that Elisha could not prevent from happening even though the agent of destruction himself stood before him at that moment. During the last years of Ben-hadad's reign the relationship between Israel and Aram was relaxed, but the change of power in Damascus dramatically worsened it. The war between Hazael and Israel shortly after his accession led to the wounding of the Omride Joram and to his murder (followed by the murder of Ahaziah of Judah) by the general Jehu.
Hazael as an instrument of vengeance against Ahab's family was mentioned during the encounter of YHWH and Elijah at Mount Sinai (1 Kings 19).
Structure of 8:7–15.
A chiastic structure is observed in this part with the focus of attention on the central dialogue between Hazael and Elisha, as follows:
A Introduction: sickness of Ben-hadad (verse 7)
B Ben-hadad commissions Hazael (verse 8)
C Hazael goes to Elisha (verse 9a)
X Hazael and Elisha dialogue (verses 9b-13)
C' Hazael returns to Ben-hadad (verse 14)
B' Ben-hadad receives Hazael (verse 14)
A' Conclusion: death of Ben-hadad (verse 15)
"Then Elisha went to Damascus, and Ben-Hadad king of Syria was sick; and it was told him, saying, "The man of God has come here.""
"So Hazael said, "But what is your servant—a dog, that he should do this gross thing?""
"And Elisha answered, "The Lord has shown me that you will become king over Syria.""
Verse 13.
Elisha carries out the anointing of Hazael according to the divine commission given to his predecessor Elijah in 1 Kings 19.
"But the next day he took a blanket, dipped it in water, and spread it on his face, so that he died. And Hazael reigned in his place."
Verse 15.
Hazael (reigned c. 842–800 BCE) seized Israelite territory east of the Jordan River and the Philistine city of Gath, but was unable to take Jerusalem. His death is mentioned later in the book.
Decorated bronze plaques from a chariot horse-harness that belonged to Hazael, identified by their inscriptions, have been found as re-gifted votive objects at two Greek sites: the Heraion of Samos and the temple of Apollo at Eretria on Euboea. The inscriptions read "that which Hadad gave to our lord Hazael from 'Umq in the year that our lord crossed the River", where "the River" may refer to the Orontes.
King Joram of Judah (8:16–24).
Joram (or "Jehoram") got the 'harshest possible verdict' among the descendants of David in this book: placed on the same level as the kings of Israel, and especially 'the house of Ahab'. He was married to the Omride princess Athaliah, who was not merely one wife among others, but became the queen mother when her son Ahaziah came to the throne (cf. verses 18 and 26). The tense relationship between Judah and Israel after their separation (cf. e.g. ; ) clearly turned to a peaceful one during the time of the Omri dynasty, along with the northern religious supremacy over the south. The link between Judah and the sinful kingdom of Israel could have brought the kingdom of Judah down, but God in his faithfulness to the Davidic covenant () mercifully spared them (verse 19). Nevertheless, Judah lost the territory of Edom, after the Edomites heavily defeated Joram's troops and achieved independence (cf. ; 2 Kings 3:8-9).
"And in the fifth year of Joram the son of Ahab king of Israel, Jehoshaphat being then king of Judah, Jehoram the son of Jehoshaphat king of Judah began to reign."
"He was thirty-two years old when he became king, and he reigned eight years in Jerusalem."
King Ahaziah of Judah (8:25–29).
Ahaziah is depicted as bad as his father Joram (and his mother, the Omride Athaliah), although he only reigned for one year. He was soon involved in a war with Aram, in alliance with his uncle, Jehoram of Israel, centered upon Ramoth, a town on the border between Israelite Gilead and Aram's territory to the north ('Israel had been on guard at Ramoth-gilead against King Hazael' in 9:14). The repeated reports of 8:28–29 in 9:14–15a, and in 9:16 may indicate that the narrative could stem from three different sources: the annals of Judah and Israel, as well as a separate record on Jehu.
"In the twelfth year of Joram the son of Ahab, king of Israel, Ahaziah the son of Jehoram, king of Judah, began to reign."
"Ahaziah was twenty-two years old when he began to reign, and he reigned one year in Jerusalem. His mother's name was Athaliah; she was a granddaughter of Omri king of Israel."
"He went with Joram the son of Ahab to the war against Hazael king of Aram at Ramoth Gilead, and the Arameans struck Joram."
Verse 28.
The inscription by Hazael the king of Aram (Syria) in the Tel Dan Stele stated that after the death of his father 'the king of Israel invaded, advancing in my father's land' (lines 3–4). It corresponds well with 2 Kings 8:28a stating that the kings of Israel and Judah launched a campaign and attacked the Aramaeans at Ramoth-gilead. The city was soon occupied by Hazael for the whole period of his reign, but would be in Israelite hands again thereafter (cf. ; ; ).
"And King Joram returned to be healed in Jezreel of the wounds that the Syrians had given him at Ramah, when he fought against Hazael king of Syria. And Ahaziah the son of Jehoram king of Judah went down to see Joram the son of Ahab in Jezreel, because he was sick."
Relation to the Tel Dan Stele.
Tel Dan Stele, a fragmentary stele from the 9th century BCE was discovered in 1993 (first fragment) and 1994 (two smaller fragments) in Tel-Dan. The stele contains several lines of Aramaic detailing that the author of the inscription (likely Hazael, an Aramean king from the same period) killed both Jehoram, the son of Ahab, king of Israel, and Ahaziah, the son of Jehoram, the king of the house of David. This artifact is currently on display at the Israel Museum, and is known as KAI 310.
Although the part containing the name of the Israelite king is not complete, the only king, either of Israel or of Judah, whose name ends with "resh" and "mem" is Jehoram, who is either a son of Ahab, king of Israel, or a son of Jehoshaphat, king of Judah. The letters "y-h-u", followed by "b-n", 'the son of', must belong to a Hebrew theophorous name and in the ninth century BCE, the two royal names ending with "-yah(u)" were "Ahazyah(u)" (Ahaziah) and "Atalyah(u)" (Ataliah; becoming queen of Judah after her son Ahaziah), so the only name of the king is Ahaziah. The name “Ahaziah” can refer to a king of Israel and a king of Judah, but only one can be taken into consideration: the son of Jehoram and grandson of Jehoshaphat, who ruled in Judah for one year (2 Kings 8:25–26) and was the ally of Jehoram of Israel. After Hazael seized the throne from Ben Hadad II, king of Aram-Damascus, he fought Jehoram of Israel and Ahaziah of Judah at Ramoth Gilead (2 Kings 8:7-15, 28; ) and wounded Jehoram (according to , both Jehoram and Ahaziah were slain by Jehu shortly after). Thus, this stele is to be attributed to the campaign of Hazael.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=63178607
|
6318542
|
Boolean algebras canonically defined
|
A technical treatment of Boolean algebras
"Boolean algebras are models of the equational theory of two values; this definition is equivalent to the lattice and ring definitions."
Boolean algebra is a mathematically rich branch of abstract algebra. "Stanford Encyclopaedia of Philosophy" defines Boolean algebra as 'the algebra of two-valued logic with only sentential connectives, or equivalently of algebras of sets under union and complementation.' Just as group theory deals with groups, and linear algebra with vector spaces, Boolean algebras are models of the equational theory of the two values 0 and 1 (whose interpretation need not be numerical). Common to Boolean algebras, groups, and vector spaces is the notion of an algebraic structure, a set closed under some operations satisfying certain equations.
Just as there are basic examples of groups, such as the group formula_0 of integers and the symmetric group "S""n" of permutations of "n" objects, there are also basic examples of Boolean algebras such as the following.
Boolean algebra thus permits applying the methods of abstract algebra to mathematical logic and digital logic.
Unlike groups of finite order, which exhibit complexity and diversity and whose first-order theory is decidable only in special cases, all finite Boolean algebras share the same theorems and have a decidable first-order theory. Instead, the intricacies of Boolean algebra are divided between the structure of infinite algebras and the algorithmic complexity of their syntactic structure.
Definition.
Boolean algebra treats the equational theory of the maximal two-element finitary algebra, called the Boolean prototype, and the models of that theory, called Boolean algebras. These terms are defined as follows.
An algebra is a family of operations on a set, called the underlying set of the algebra. We take the underlying set of the Boolean prototype to be {0,1}.
An algebra is finitary when each of its operations takes only finitely many arguments. For the prototype each argument of an operation is either 0 or 1, as is the result of the operation. The maximal such algebra consists of all finitary operations on {0,1}.
The number of arguments taken by each operation is called the arity of the operation. An operation on {0,1} of arity "n", or "n"-ary operation, can be applied to any of 2^"n" possible values for its "n" arguments. For each choice of arguments, the operation may return 0 or 1, whence there are 2^(2^"n") "n"-ary operations.
The prototype therefore has two operations taking no arguments, called zeroary or nullary operations, namely zero and one. It has four unary operations, two of which are constant operations, another is the identity, and the most commonly used one, called "negation", returns the opposite of its argument: 1 if 0, 0 if 1. It has sixteen binary operations; again two of these are constant, another returns its first argument, yet another returns its second, one is called "conjunction" and returns 1 if both arguments are 1 and otherwise 0, another is called "disjunction" and returns 0 if both arguments are 0 and otherwise 1, and so on. The number of ("n"+1)-ary operations in the prototype is the square of the number of "n"-ary operations, so there are 16^2 = 256 ternary operations, 256^2 = 65,536 quaternary operations, and so on.
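The count 2^(2^"n") can be made concrete by enumeration; the following Python sketch (illustrative, not part of the article) builds every "n"-ary operation on {0,1} as a table from argument tuples to values:
from itertools import product
def n_ary_operations(n):
    # one dict per function from {0,1}^n to {0,1}
    inputs = list(product((0, 1), repeat=n))
    return [dict(zip(inputs, outputs)) for outputs in product((0, 1), repeat=len(inputs))]
assert len(n_ary_operations(0)) == 2      # the two nullary operations (the constants)
assert len(n_ary_operations(1)) == 4      # four unary operations
assert len(n_ary_operations(2)) == 16     # sixteen binary operations
assert len(n_ary_operations(3)) == 256    # 16^2 ternary operations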
A family is indexed by an index set. In the case of a family of operations forming an algebra, the indices are called operation symbols, constituting the language of that algebra. The operation indexed by each symbol is called the denotation or interpretation of that symbol. Each operation symbol specifies the arity of its interpretation, whence all possible interpretations of a symbol have the same arity. In general it is possible for an algebra to interpret distinct symbols with the same operation, but this is not the case for the prototype, whose symbols are in one-one correspondence with its operations. The prototype therefore has 22"n" "n"-ary operation symbols, called the Boolean operation symbols and forming the language of Boolean algebra. Only a few operations have conventional symbols, such as ¬ for negation, ∧ for conjunction, and ∨ for disjunction. It is convenient to consider the "i"-th "n"-ary symbol to be "n""f""i" as done below in the section on truth tables.
An equational theory in a given language consists of equations between terms built up from variables using symbols of that language. Typical equations in the language of Boolean algebra are "x"∧"y" = "y"∧"x", "x"∧"x" = "x", "x"∧¬"x" = "y"∧¬"y", and "x"∧"y" = "x".
An algebra satisfies an equation when the equation holds for all possible values of its variables in that algebra when the operation symbols are interpreted as specified by that algebra. The laws of Boolean algebra are the equations in the language of Boolean algebra satisfied by the prototype. The first three of the above examples are Boolean laws, but not the fourth since 1∧0 ≠ 1.
The equational theory of an algebra is the set of all equations satisfied by the algebra. The laws of Boolean algebra therefore constitute the equational theory of the Boolean prototype.
A model of a theory is an algebra interpreting the operation symbols in the language of the theory and satisfying the equations of the theory.
"A Boolean algebra is any model of the laws of Boolean algebra."
That is, a Boolean algebra is a set and a family of operations thereon interpreting the Boolean operation symbols and satisfying the same laws as the Boolean prototype.
If we define a homologue of an algebra to be a model of the equational theory of that algebra, then a Boolean algebra can be defined as any homologue of the prototype.
Example 1. The Boolean prototype is a Boolean algebra, since trivially it satisfies its own laws. It is thus the prototypical Boolean algebra. We did not call it that initially in order to avoid any appearance of circularity in the definition.
Basis.
The operations need not be all explicitly stated. A "basis" is any set from which the remaining operations can be obtained by composition. A "Boolean algebra" may be defined from any of several different bases. Three bases for Boolean algebra are in common use, the lattice basis, the ring basis, and the Sheffer stroke or NAND basis. These bases impart respectively a logical, an arithmetical, and a parsimonious character to the subject.
The common elements of the lattice and ring bases are the constants 0 and 1, and an associative commutative binary operation, called meet "x"∧"y" in the lattice basis, and multiplication "xy" in the ring basis. The distinction is only terminological. The lattice basis has the further operations of join, "x"∨"y", and complement, ¬"x". The ring basis has instead the arithmetic operation "x"⊕"y" of addition (the symbol ⊕ is used in preference to + because the latter is sometimes given the Boolean reading of join).
To be a basis is to yield all other operations by composition, whence any two bases must be intertranslatable. The lattice basis translates "x"∨"y" to the ring basis as "x"⊕"y"⊕"xy", and ¬"x" as "x"⊕1. Conversely the ring basis translates "x"⊕"y" to the lattice basis as ("x"∨"y")∧¬("x"∧"y").
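These translations can be checked over the two-element prototype; in the Python sketch below (illustrative, not part of the article) the operators |, &, ^ and the expression 1 - x stand for join, meet, ring addition, and complement on {0,1}:
for x in (0, 1):
    for y in (0, 1):
        assert (x | y) == x ^ y ^ (x & y)          # join expressed in the ring basis
        assert (1 - x) == x ^ 1                    # complement expressed in the ring basis
        assert (x ^ y) == (x | y) & (1 - (x & y))  # ring addition expressed in the lattice basis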
Both of these bases allow Boolean algebras to be defined via a subset of the equational properties of the Boolean operations. For the lattice basis, it suffices to define a Boolean algebra as a distributive lattice satisfying "x"∧¬"x" = 0 and "x"∨¬"x" = 1, called a complemented distributive lattice. The ring basis turns a Boolean algebra into a Boolean ring, namely a ring satisfying "x"^2 = "x".
Emil Post gave a necessary and sufficient condition for a set of operations to be a basis for the nonzeroary Boolean operations. A "nontrivial" property is one shared by some but not all operations making up a basis. Post listed five nontrivial properties of operations, identifiable with the five Post's classes, each preserved by composition, and showed that a set of operations formed a basis if, for each property, the set contained an operation lacking that property. (The converse of Post's theorem, extending "if" to "if and only if," is the easy observation that a property from among these five holding of every operation in a candidate basis will also hold of every operation formed by composition from that candidate, whence by nontriviality of that property the candidate will fail to be a basis.) Post's five properties are:
The NAND (dually NOR) operation lacks all these, thus forming a basis by itself.
Truth tables.
The finitary operations on {0,1} may be exhibited as truth tables, thinking of 0 and 1 as the truth values false and true. They can be laid out in a uniform and application-independent way that allows us to name, or at least number, them individually. These names provide a convenient shorthand for the Boolean operations. The names of the "n"-ary operations are binary numbers of 2^"n" bits. There being 2^(2^"n") such operations, one cannot ask for a more succinct nomenclature. Note that each finitary operation can be called a switching function.
This layout and associated naming of operations is illustrated here in full for arities from 0 to 2.
These tables continue at higher arities, with 2"n" rows at arity "n", each row giving a valuation or binding of the "n" variables "x"0..."x""n"−1 and each column headed "n""f""i" giving the value "n""f""i"("x"0...,"x""n"−1) of the "i"-th "n"-ary operation at that valuation. The operations include the variables, for example 1"f"2 is "x"0 while 2"f"10 is "x"0 (as two copies of its unary counterpart) and 2"f"12 is "x"1 (with no unary counterpart). Negation or complement ¬"x"0 appears as 1"f"1 and again as 2"f"5, along with 2"f"3 (¬"x"1, which did not appear at arity 1), disjunction or union "x"0∨"x"1 as 2"f"14, conjunction or intersection "x"0∧"x"1 as 2"f"8, implication "x"0→"x"1 as 2"f"13, exclusive-or symmetric difference "x"0⊕"x"1 as 2"f"6, set difference "x"0−"x"1 as 2"f"2, and so on.
As a minor detail important more for its form than its content, the operations of an algebra are traditionally organized as a list. Although we are here indexing the operations of a Boolean algebra by the finitary operations on {0,1}, the truth-table presentation above serendipitously orders the operations first by arity and second by the layout of the tables for each arity. This permits organizing the set of all Boolean operations in the traditional list format. The list order for the operations of a given arity is determined by the following two rules.
(i) The "i"-th row in the left half of the table is the binary representation of "i" with its least significant or 0-th bit on the left ("little-endian" order, originally proposed by Alan Turing, so it would not be unreasonable to call it Turing order).
(ii) The "j"-th column in the right half of the table is the binary representation of "j", again in little-endian order. In effect the subscript of the operation "is" the truth table of that operation. By analogy with Gödel numbering of computable functions one might call this numbering of the Boolean operations the Boole numbering.
When programming in C or Java, bitwise disjunction is denoted <samp>"x"|"y"</samp>, conjunction <samp>"x"&"y"</samp>, and negation <samp>~"x"</samp>. A program can therefore represent for example the operation "x"∧("y"∨"z") in these languages as <samp>"x"&("y"|"z")</samp>, having previously set <samp>"x" = 0xaa</samp>, <samp>"y" = 0xcc</samp>, and <samp>"z" = 0xf0</samp> (the "<samp>0x</samp>" indicates that the following constant is to be read in hexadecimal or base 16), either by assignment to variables or defined as macros. These one-byte (eight-bit) constants correspond to the columns for the input variables in the extension of the above tables to three variables. This technique is almost universally used in raster graphics hardware to provide a flexible variety of ways of combining and masking images, the typical operations being ternary and acting simultaneously on source, destination, and mask bits.
Examples.
Bit vectors.
Example 2. All bit vectors of a given length form a Boolean algebra "pointwise", meaning that any "n"-ary Boolean operation can be applied to "n" bit vectors one bit position at a time. For example, the ternary OR of three bit vectors each of length 4 is the bit vector of length 4 formed by oring the three bits in each of the four bit positions, thus 0100∨1000∨1001 = 1101. Another example is the truth tables above for the "n"-ary operations, whose columns are all the bit vectors of length 2"n" and which therefore can be combined pointwise whence the "n"-ary operations form a Boolean algebra.
This works equally well for bit vectors of finite and infinite length, the only rule being that the bit positions all be indexed by the same set in order that "corresponding position" be well defined.
The atoms of such an algebra are the bit vectors containing exactly one 1. In general the atoms of a Boolean algebra are those elements "x" such that "x"∧"y" has only two possible values, "x" or 0.
Power set algebra.
Example 3. The power set algebra, the set 2"W" of all subsets of a given set "W". This is just Example 2 in disguise, with "W" serving to index the bit positions. Any subset "X" of "W" can be viewed as the bit vector having 1's in just those bit positions indexed by elements of "X". Thus the all-zero vector is the empty subset of "W" while the all-ones vector is "W" itself, these being the constants 0 and 1 respectively of the power set algebra. The counterpart of disjunction "x"∨"y" is union "X"∪"Y", while that of conjunction "x"∧"y" is intersection "X"∩"Y". Negation ¬"x" becomes ~"X", complement relative to "W". There is also set difference "X"\"Y" = "X"∩~"Y", symmetric difference ("X"\"Y")∪("Y"\"X"), ternary union "X"∪"Y"∪"Z", and so on. The atoms here are the singletons, those subsets with exactly one element.
Examples 2 and 3 are special cases of a general construct of algebra called direct product, applicable not just to Boolean algebras but all kinds of algebra including groups, rings, etc. The direct product of any family "B"i of Boolean algebras where "i" ranges over some index set "I" (not necessarily finite or even countable) is a Boolean algebra consisting of all "I"-tuples (..."x"i...) whose "i"-th element is taken from "B""i". The operations of a direct product are the corresponding operations of the constituent algebras acting within their respective coordinates; in particular operation "n""f""j" of the product operates on "n" "I"-tuples by applying operation "n""f""j" of "B""i" to the "n" elements in the "i"-th coordinate of the "n" tuples, for all "i" in "I".
When all the algebras being multiplied together in this way are the same algebra "A" we call the direct product a "direct power" of "A". The Boolean algebra of all 32-bit bit vectors is the two-element Boolean algebra raised to the 32nd power, or power set algebra of a 32-element set, denoted 2^32. The Boolean algebra of all sets of integers is 2^"Z". All Boolean algebras we have exhibited thus far have been direct powers of the two-element Boolean algebra, justifying the name "power set algebra".
Representation theorems.
It can be shown that every finite Boolean algebra is isomorphic to some power set algebra. Hence the cardinality (number of elements) of a finite Boolean algebra is a power of 2, namely one of 1, 2, 4, 8, ..., 2^"n", ... This is called a representation theorem as it gives insight into the nature of finite Boolean algebras by giving a representation of them as power set algebras.
This representation theorem does not extend to infinite Boolean algebras: although every power set algebra is a Boolean algebra, not every Boolean algebra need be isomorphic to a power set algebra. In particular, whereas there can be no countably infinite power set algebras (the smallest infinite power set algebra is the power set algebra 2"N" of sets of natural numbers, shown by Cantor to be uncountable), there exist various countably infinite Boolean algebras.
To go beyond power set algebras we need another construct. A subalgebra of an algebra "A" is any subset of "A" closed under the operations of "A". Every subalgebra of a Boolean algebra "A" must still satisfy the equations holding of "A", since any violation would constitute a violation for "A" itself. Hence every subalgebra of a Boolean algebra is a Boolean algebra.
A subalgebra of a power set algebra is called a field of sets; equivalently a field of sets is a set of subsets of some set "W" including the empty set and "W" and closed under finite union and complement with respect to "W" (and hence also under finite intersection). Birkhoff's [1935] representation theorem for Boolean algebras states that every Boolean algebra is isomorphic to a field of sets. Now Birkhoff's HSP theorem for varieties can be stated as, every class of models of the equational theory of a class "C" of algebras is the Homomorphic image of a Subalgebra of a direct Product of algebras of "C". Normally all three of H, S, and P are needed; what the first of these two Birkhoff theorems shows is that for the special case of the variety of Boolean algebras Homomorphism can be replaced by Isomorphism. Birkhoff's HSP theorem for varieties in general therefore becomes Birkhoff's ISP theorem for the variety of Boolean algebras.
Other examples.
It is convenient when talking about a set "X" of natural numbers to view it as a sequence "x"0,"x"1,"x"2... of bits, with "x""i" = 1 if and only if "i" ∈ "X". This viewpoint will make it easier to talk about subalgebras of the power set algebra 2"N", which this viewpoint makes the Boolean algebra of all sequences of bits. It also fits well with the columns of a truth table: when a column is read from top to bottom it constitutes a sequence of bits, but at the same time it can be viewed as the set of those valuations (assignments to variables in the left half of the table) at which the function represented by that column evaluates to 1.
Example 4. "Ultimately constant sequences". Any Boolean combination of ultimately constant sequences is ultimately constant; hence these form a Boolean algebra. We can identify these with the integers by viewing the ultimately-zero sequences as nonnegative binary numerals (bit 0 of the sequence being the low-order bit) and the ultimately-one sequences as negative binary numerals (think two's complement arithmetic with the all-ones sequence being −1). This makes the integers a Boolean algebra, with union being bit-wise OR and complement being "−x−1". There are only countably many integers, so this infinite Boolean algebra is countable. The atoms are the powers of two, namely 1,2,4... Another way of describing this algebra is as the set of all finite and cofinite sets of natural numbers, with the ultimately all-ones sequences corresponding to the cofinite sets, those sets omitting only finitely many natural numbers.
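Arbitrary-precision integers in Python behave exactly like these ultimately constant bit sequences, which makes the description easy to experiment with (an illustrative sketch, not part of the article):
x, y = 0b101000, 0b001100   # two ultimately-zero sequences, i.e. nonnegative integers
assert x | y == 0b101100    # union is bitwise OR
assert ~x == -x - 1         # complement flips every bit, giving an ultimately-one (negative) integer
assert ~(-1) == 0           # -1 is the all-ones sequence; its complement is the empty set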
Example 5. "Periodic sequence". A sequence is called "periodic" when there exists some number "n" > 0, called a witness to periodicity, such that "x""i" = "x""i"+"n" for all "i" ≥ 0. The period of a periodic sequence is its least witness. Negation leaves period unchanged, while the disjunction of two periodic sequences is periodic, with period at most the least common multiple of the periods of the two arguments (the period can be as small as 1, as happens with the union of any sequence and its complement). Hence the periodic sequences form a Boolean algebra.
Example 5 resembles Example 4 in being countable, but differs in being atomless. The latter is because the conjunction of any nonzero periodic sequence "x" with a sequence of coprime period (greater than 1) is neither 0 nor "x". It can be shown that all countably infinite atomless Boolean algebras are isomorphic, that is, up to isomorphism there is only one such algebra.
Example 6. "Periodic sequence with period a power of two". This is a proper subalgebra of Example 5 (a proper subalgebra equals the intersection of itself with its algebra). These can be understood as the finitary operations, with the first period of such a sequence giving the truth table of the operation it represents. For example, the truth table of "x"0 in the table of binary operations, namely 2"f"10, has period 2 (and so can be recognized as using only the first variable) even though 12 of the binary operations have period 4. When the period is 2"n" the operation only depends on the first "n" variables, the sense in which the operation is finitary. This example is also a countably infinite atomless Boolean algebra. Hence Example 5 is isomorphic to a proper subalgebra of itself! Example 6, and hence Example 5, constitutes the free Boolean algebra on countably many generators, meaning the Boolean algebra of all finitary operations on a countably infinite set of generators or variables.
Example 7. "Ultimately periodic sequences", sequences that become periodic after an initial finite bout of lawlessness. They constitute a proper extension of Example 5 (meaning that Example 5 is a proper subalgebra of Example 7) and also of Example 4, since constant sequences are periodic with period one. Sequences may vary as to when they settle down, but any finite set of sequences will all eventually settle down no later than their slowest-to-settle member, whence ultimately periodic sequences are closed under all Boolean operations and so form a Boolean algebra. This example has the same atoms and coatoms as Example 4, whence it is not atomless and therefore not isomorphic to Example 5/6. However it contains an infinite atomless subalgebra, namely Example 5, and so is not isomorphic to Example 4, every subalgebra of which must be a Boolean algebra of finite sets and their complements and therefore atomic. This example is isomorphic to the direct product of Examples 4 and 5, furnishing another description of it.
Example 8. The direct product of a Periodic Sequence (Example 5) with any finite but nontrivial Boolean algebra. (The trivial one-element Boolean algebra is the unique finite atomless Boolean algebra.) This resembles Example 7 in having both atoms and an atomless subalgebra, but differs in having only finitely many atoms. Example 8 is in fact an infinite family of examples, one for each possible finite number of atoms.
These examples by no means exhaust the possible Boolean algebras, even the countable ones. Indeed, there are uncountably many nonisomorphic countable Boolean algebras, which Jussi Ketonen [1978] classified completely in terms of invariants representable by certain hereditarily countable sets.
Boolean algebras of Boolean operations.
The "n"-ary Boolean operations themselves constitute a power set algebra 2"W", namely when "W" is taken to be the set of 2"n" valuations of the "n" inputs. In terms of the naming system of operations "n""f""i" where "i" in binary is a column of a truth table, the columns can be combined with Boolean operations of any arity to produce other columns present in the table. That is, we can apply any Boolean operation of arity "m" to "m" Boolean operations of arity "n" to yield a Boolean operation of arity "n", for any "m" and "n".
The practical significance of this convention for both software and hardware is that "n"-ary Boolean operations can be represented as words of the appropriate length. For example, each of the 256 ternary Boolean operations can be represented as an unsigned byte. The available logical operations such as AND and OR can then be used to form new operations. If we take "x", "y", and "z" (dispensing with subscripted variables for now) to be 10101010, 11001100, and 11110000 respectively (170, 204, and 240 in decimal, 0xaa, 0xcc, and 0xf0 in hexadecimal), their pairwise conjunctions are "x"∧"y" = 10001000, "y"∧"z" = 11000000, and "z"∧"x" = 10100000, while their pairwise disjunctions are "x"∨"y" = 11101110, "y"∨"z" = 11111100, and "z"∨"x" = 11111010. The disjunction of the three conjunctions is 11101000, which also happens to be the conjunction of three disjunctions. We have thus calculated, with a dozen or so logical operations on bytes, that the two ternary operations
formula_1
and
formula_2
are actually the same operation. That is, we have proved the equational identity
formula_3,
for the two-element Boolean algebra. By the definition of "Boolean algebra" this identity must therefore hold in every Boolean algebra.
This ternary operation incidentally formed the basis for Grau's [1947] ternary Boolean algebras, which he axiomatized in terms of this operation and negation. The operation is symmetric, meaning that its value is independent of any of the 3! = 6 permutations of its arguments. The two halves of its truth table 11101000 are the truth tables for ∨, 1110, and ∧, 1000, so the operation can be phrased as if "z" then "x"∨"y" else "x"∧"y". Since it is symmetric it can equally well be phrased as either of if "x" then "y"∨"z" else "y"∧"z", or if "y" then "z"∨"x" else "z"∧"x". Viewed as a labeling of the 8-vertex 3-cube, the upper half is labeled 1 and the lower half 0; for this reason it has been called the median operator, with the evident generalization to any odd number of variables (odd in order to avoid the tie when exactly half the variables are 0).
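The byte-level computation described above is easy to reproduce; the Python sketch below (illustrative, not part of the article) uses the same three input columns 0xaa, 0xcc, 0xf0 and confirms that both ternary expressions give the truth table 11101000 of the median operator:
x, y, z = 0xaa, 0xcc, 0xf0
maj1 = (x & y) | (y & z) | (z & x)   # disjunction of the pairwise conjunctions
maj2 = (x | y) & (y | z) & (z | x)   # conjunction of the pairwise disjunctions
assert maj1 == maj2 == 0b11101000    # 0xe8, the median (majority) of the three inputs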
Axiomatizing Boolean algebras.
The technique we just used to prove an identity of Boolean algebra can be generalized to all identities in a systematic way that can be taken as a sound and complete axiomatization of, or axiomatic system for, the equational laws of Boolean logic. The customary formulation of an axiom system consists of a set of axioms that "prime the pump" with some initial identities, along with a set of inference rules for inferring the remaining identities from the axioms and previously proved identities. In principle it is desirable to have finitely many axioms; however as a practical matter it is not necessary since it is just as effective to have a finite axiom schema having infinitely many instances each of which when used in a proof can readily be verified to be a legal instance, the approach we follow here.
Boolean identities are assertions of the form "s" = "t" where "s" and "t" are "n"-ary terms, by which we shall mean here terms whose variables are limited to "x""0" through "x""n-1". An "n"-ary term is either an atom or an application. An application "m""f""i"("t"0...,"t""m"-1) is a pair consisting of an "m"-ary operation "m""f""i" and a list or "m"-tuple ("t"0...,"t""m"-1) of "m" "n"-ary terms called operands.
Associated with every term is a natural number called its height. Atoms are of zero height, while applications are of height one plus the height of their highest operand.
Now what is an atom? Conventionally an atom is either a constant (0 or 1) or a variable "x""i" where 0 ≤ "i" < "n". For the proof technique here it is convenient to define atoms instead to be "n"-ary operations "n""f""i", which although treated here as atoms nevertheless mean the same as ordinary terms of the exact form "n""f""i"("x"0...,"x""n"-1) (exact in that the variables must listed in the order shown without repetition or omission). This is not a restriction because atoms of this form include all the ordinary atoms, namely the constants 0 and 1, which arise here as the "n"-ary operations "n""f"0 and "n""f"−1 for each "n" (abbreviating 22"n"−1 to −1), and the variables "x"0...,"x""n"-1 as can be seen from the truth tables where "x"0 appears as both the unary operation 1"f"2 and the binary operation 2"f"10 while "x"1 appears as 2"f"12.
The following axiom schema and three inference rules axiomatize the Boolean algebra of "n"-ary terms.
A1. "m""f""i"("n""f""j"0...,"n""f""j""m"-1) = "n""f""i"o"ĵ" where ("i""ĵ")"v" = "i""ĵ""v", with "ĵ" being "j" transpose, defined by ("ĵ""v")"u" = ("j""u")"v".
R1. With no premises infer "t" = "t".
R2. From "s" = "u" and "t" = "u" infer "s" = "t" where "s", "t", and "u" are "n"-ary terms.
R3. From "s"0 = "t"0 , ... , "s""m"-1 = "t""m"-1 infer "m""f""i"("s"0...,"s""m"-1) = "m""f""i"("t"0...,"t""m"-1), where all terms "s"i, "t"i are "n"-ary.
The meaning of the side condition on A1 is that "i""ĵ" is that 2^"n"-bit number whose "v"-th bit is the "ĵ""v"-th bit of "i", where the ranges of each quantity are "u": "m", "v": 2^"n", "j""u": 2^(2^"n"), and "ĵ""v": 2^"m". (So "j" is an "m"-tuple of 2^"n"-bit numbers while "ĵ" as the transpose of "j" is a 2^"n"-tuple of "m"-bit numbers. Both "j" and "ĵ" therefore contain "m"2^"n" bits.)
A1 is an axiom schema rather than an axiom by virtue of containing metavariables, namely "m", "i", "n", and "j0" through "jm-1". The actual axioms of the axiomatization are obtained by setting the metavariables to specific values. For example, if we take "m" = "n" = "i" = "j"0 = 1, we can compute the two bits of "i""ĵ" from "i"1 = 0 and "i"0 = 1, so "i""ĵ" = 2 (or 10 when written as a two-bit number). The resulting instance, namely 1"f"1(1"f"1) = 1"f"2, expresses the familiar axiom ¬¬"x" = "x" of double negation. Rule R3 then allows us to infer ¬¬¬"x" = ¬"x" by taking "s0" to be 1"f"1(1"f"1) or ¬¬"x"0, "t0" to be 1"f"2 or "x"0, and "m""f""i" to be "1""f""1" or ¬.
For each "m" and "n" there are only finitely many axioms instantiating A1, namely 22"m" × (22"n")"m". Each instance is specified by 2"m"+"m"2"n" bits.
We treat R1 as an inference rule, even though it is like an axiom in having no premises, because it is a domain-independent rule along with R2 and R3 common to all equational axiomatizations, whether of groups, rings, or any other variety. The only entity specific to Boolean algebras is axiom schema A1. In this way when talking about different equational theories we can push the rules to one side as being independent of the particular theories, and confine attention to the axioms as the only part of the axiom system characterizing the particular equational theory at hand.
This axiomatization is complete, meaning that every Boolean law "s" = "t" is provable in this system. One first shows by induction on the height of "s" that every Boolean law for which "t" is atomic is provable, using R1 for the base case (since distinct atoms are never equal) and A1 and R3 for the induction step ("s" an application). This proof strategy amounts to a recursive procedure for evaluating "s" to yield an atom. Then to prove "s" = "t" in the general case when "t" may be an application, use the fact that if "s" = "t" is an identity then "s" and "t" must evaluate to the same atom, call it "u". So first prove "s" = "u" and "t" = "u" as above, that is, evaluate "s" and "t" using A1, R1, and R3, and then invoke R2 to infer "s" = "t".
In A1, if we view the number "n""m" as the function type "m"→"n", and "m""n" as the application "m"("n"), we can reinterpret the numbers "i", "j", "ĵ", and "i""ĵ" as functions of type "i": ("m"→2)→2, "j": "m"→(("n"→2)→2), "ĵ": ("n"→2)→("m"→2), and "i""ĵ": ("n"→2)→2. The definition ("i""ĵ")"v" = "i""ĵ""v" in A1 then translates to ("i""ĵ")("v") = "i"("ĵ"("v")), that is, "i""ĵ" is defined to be composition of "i" and "ĵ" understood as functions. So the content of A1 amounts to defining term application to be essentially composition, modulo the need to transpose the "m"-tuple "j" to make the types match up suitably for composition. This composition is the one in Lawvere's previously mentioned category of power sets and their functions. In this way we have translated the commuting diagrams of that category, as the equational theory of Boolean algebras, into the equational consequences of A1 as the logical representation of that particular composition law.
Underlying lattice structure.
Underlying every Boolean algebra "B" is a partially ordered set or poset ("B",≤). The partial order relation is defined by "x" ≤ "y" just when "x" = "x"∧"y", or equivalently when "y" = "x"∨"y". Given a set "X" of elements of a Boolean algebra, an upper bound on "X" is an element "y" such that for every element "x" of "X", "x" ≤ "y", while a lower bound on "X" is an element "y" such that for every element "x" of "X", "y" ≤ "x".
A sup of "X" is a least upper bound on "X", namely an upper bound on "X" that is less or equal to every upper bound on "X". Dually an inf of "X" is a greatest lower bound on "X". The sup of "x" and "y" always exists in the underlying poset of a Boolean algebra, being "x"∨"y", and likewise their inf exists, namely "x"∧"y". The empty sup is 0 (the bottom element) and the empty inf is 1 (top). It follows that every finite set has both a sup and an inf. Infinite subsets of a Boolean algebra may or may not have a sup and/or an inf; in a power set algebra they always do.
Any poset ("B",≤) such that every pair "x","y" of elements has both a sup and an inf is called a lattice. We write "x"∨"y" for the sup and "x"∧"y" for the inf. The underlying poset of a Boolean algebra always forms a lattice. The lattice is said to be distributive when "x"∧("y"∨"z") = ("x"∧"y")∨("x"∧"z"), or equivalently when "x"∨("y"∧"z") = ("x"∨"y")∧("x"∨"z"), since either law implies the other in a lattice. These are laws of Boolean algebra whence the underlying poset of a Boolean algebra forms a distributive lattice.
Given a lattice with a bottom element 0 and a top element 1, a pair "x","y" of elements is called complementary when "x"∧"y" = 0 and "x"∨"y" = 1, and we then say that "y" is a complement of "x" and vice versa. Any element "x" of a distributive lattice with top and bottom can have at most one complement. When every element of a lattice has a complement the lattice is called complemented. It follows that in a complemented distributive lattice, the complement of an element always exists and is unique, making complement a unary operation. Furthermore, every complemented distributive lattice forms a Boolean algebra, and conversely every Boolean algebra forms a complemented distributive lattice. This provides an alternative definition of a Boolean algebra, namely as any complemented distributive lattice. Each of these three properties can be axiomatized with finitely many equations, whence these equations taken together constitute a finite axiomatization of the equational theory of Boolean algebras.
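In a power set algebra the order defined by "x" ≤ "y" just when "x" = "x"∧"y" is simply set inclusion, which the following Python sketch (illustrative, not part of the article) makes concrete with frozensets:
A, B, C = frozenset({1, 3}), frozenset({1, 2, 3}), frozenset({4})
assert (A == A & B) == (A <= B)   # x = x∧y exactly when x ⊆ y (both sides True here)
assert (B == A | B) == (A <= B)   # equivalently, y = x∨y
assert (C == C & B) == (C <= B)   # both sides False: C is not below B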
In a class of algebras defined as all the models of a set of equations, it is usually the case that some algebras of the class satisfy more equations than just those needed to qualify them for the class. The class of Boolean algebras is unusual in that, with a single exception, every Boolean algebra satisfies exactly the Boolean identities and no more. The exception is the one-element Boolean algebra, which necessarily satisfies every equation, even "x" = "y", and is therefore sometimes referred to as the inconsistent Boolean algebra.
Boolean homomorphisms.
A Boolean homomorphism is a function "h": "A"→"B" between Boolean algebras "A","B" such that for every Boolean operation "m""f""i":
formula_4
The category Bool of Boolean algebras has as objects all Boolean algebras and as morphisms the Boolean homomorphisms between them.
There exists a unique homomorphism from the two-element Boolean algebra 2 to every Boolean algebra, since homomorphisms must preserve the two constants and those are the only elements of 2. A Boolean algebra with this property is called an initial Boolean algebra. It can be shown that any two initial Boolean algebras are isomorphic, so up to isomorphism 2 is "the" initial Boolean algebra.
In the other direction, there may exist many homomorphisms from a Boolean algebra "B" to 2. Any such homomorphism partitions "B" into those elements mapped to 1 and those to 0. The subset of "B" consisting of the former is called an ultrafilter of "B". When "B" is finite its ultrafilters pair up with its atoms; one atom is mapped to 1 and the rest to 0. Each ultrafilter of "B" thus consists of an atom of "B" and all the elements above it; hence exactly half the elements of "B" are in the ultrafilter, and there are as many ultrafilters as atoms.
For infinite Boolean algebras the notion of ultrafilter becomes considerably more delicate. The elements greater than or equal to an atom always form an ultrafilter, but so do many other sets; for example, in the Boolean algebra of finite and cofinite sets of integers, the cofinite sets form an ultrafilter even though none of them are atoms. Likewise, the powerset of the integers has among its ultrafilters the set of all subsets containing a given integer; there are countably many of these "standard" ultrafilters, which may be identified with the integers themselves, but there are uncountably many more "nonstandard" ultrafilters. These form the basis for nonstandard analysis, providing representations for such classically inconsistent objects as infinitesimals and delta functions.
Infinitary extensions.
Recall the definition of sup and inf from the section above on the underlying partial order of a Boolean algebra. A complete Boolean algebra is one every subset of which has both a sup and an inf, even the infinite subsets. Gaifman [1964] and Hales [1964] independently showed that infinite free complete Boolean algebras do not exist. This suggests that a logic with set-sized-infinitary operations may have class-many terms—just as a logic with finitary operations may have infinitely many terms.
There is however another approach to introducing infinitary Boolean operations: simply drop "finitary" from the definition of Boolean algebra. A model of the equational theory of the algebra of "all" operations on {0,1} of arity up to the cardinality of the model is called a complete atomic Boolean algebra, or "CABA". (In place of this awkward restriction on arity we could allow any arity, leading to a different awkwardness, that the signature would then be larger than any set, that is, a proper class. One benefit of the latter approach is that it simplifies the definition of homomorphism between CABAs of different cardinality.) Such an algebra can be defined equivalently as a complete Boolean algebra that is atomic, meaning that every element is a sup of some set of atoms. Free CABAs exist for all cardinalities of a set "V" of generators, namely the power set algebra 22"V", this being the obvious generalization of the finite free Boolean algebras. This neatly rescues infinitary Boolean logic from the fate the Gaifman–Hales result seemed to consign it to.
The nonexistence of free complete Boolean algebras can be traced to failure to extend the equations of Boolean logic suitably to all laws that should hold for infinitary conjunction and disjunction, in particular the neglect of distributivity in the definition of complete Boolean algebra. A complete Boolean algebra is called completely distributive when arbitrary conjunctions distribute over arbitrary disjunctions and vice versa. A Boolean algebra is a CABA if and only if it is complete and completely distributive, giving a third definition of CABA. A fourth definition is as any Boolean algebra isomorphic to a power set algebra.
A complete homomorphism is one that preserves all sups that exist, not just the finite sups, and likewise for infs. The category CABA of all CABAs and their complete homomorphisms is dual to the category of sets and their functions, meaning that it is equivalent to the opposite of that category (the category resulting from reversing all morphisms). Things are not so simple for the category Bool of Boolean algebras and their homomorphisms, which Marshall Stone showed in effect (though he lacked both the language and the conceptual framework to make the duality explicit) to be dual to the category of totally disconnected compact Hausdorff spaces, subsequently called Stone spaces.
Another infinitary class intermediate between Boolean algebras and complete Boolean algebras is the notion of a sigma-algebra. This is defined analogously to complete Boolean algebras, but with sups and infs limited to countable arity. That is, a sigma-algebra is a Boolean algebra with all countable sups and infs. Because the sups and infs are of bounded cardinality, unlike the situation with complete Boolean algebras, the Gaifman-Hales result does not apply and free sigma-algebras do exist. Unlike the situation with CABAs however, the free countably generated sigma algebra is not a power set algebra.
Other definitions of Boolean algebra.
We have already encountered several definitions of Boolean algebra, as a model of the equational theory of the two-element algebra, as a complemented distributive lattice, as a Boolean ring, and as a product-preserving functor from a certain category (Lawvere). Two more definitions worth mentioning are:
To put this in perspective, infinite sets arise as filtered colimits of finite sets, infinite CABAs as filtered limits of finite power set algebras, and infinite Stone spaces as filtered limits of finite sets. Thus if one starts with the finite sets and asks how these generalize to infinite objects, there are two ways: "adding" them gives ordinary or inductive sets while "multiplying" them gives Stone spaces or profinite sets. The same choice exists for finite power set algebras as the duals of finite sets: addition yields Boolean algebras as inductive objects while multiplication yields CABAs or power set algebras as profinite objects.
A characteristic distinguishing feature is that the underlying topology of objects so constructed, when defined so as to be Hausdorff, is discrete for inductive objects and compact for profinite objects. The topology of finite Hausdorff spaces is always both discrete and compact, whereas for infinite spaces "discrete" and "compact" are mutually exclusive. Thus when generalizing finite algebras (of any kind, not just Boolean) to infinite ones, "discrete" and "compact" part company, and one must choose which one to retain. The general rule, for both finite and infinite algebras, is that finitary algebras are discrete, whereas their duals are compact and feature infinitary operations. Between these two extremes, there are many intermediate infinite Boolean algebras whose topology is neither discrete nor compact.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbb Z"
},
{
"math_id": 1,
"text": "(x \\land y)\\lor (y\\land z)\\lor (z\\land x)"
},
{
"math_id": 2,
"text": "(x\\lor y)\\land (y\\lor z)\\land (z\\lor x)"
},
{
"math_id": 3,
"text": "(x\\land y)\\lor (y\\land z)\\lor (z\\land x) = (x\\lor y)\\land (y\\lor z)\\land (z\\lor x)"
},
{
"math_id": 4,
"text": "h(^m\\!f_i(x_0,...,x_{m-1})) = {}^m\\!f_i(h(x_0,...,x_{m-1}))"
}
] |
https://en.wikipedia.org/wiki?curid=6318542
|
63186313
|
Fay–Herriot model
|
Statistical model
The Fay–Herriot model is a statistical model which includes some distinct variation for each of several subgroups of observations. It is an area-level model, meaning some input data are associated with sub-aggregates such as regions, jurisdictions, or industries. The model produces estimates about the subgroups. The model is applied in the context of small area estimation in which there is a lot of data overall, but not much for each subgroup.
The subgroups are determined in advance of estimation and are built into the model structure. The model combines, by averaging, estimates of fixed effects and of the random effects type. The model is typically used to adjust for group-related differences in some dependent variable.
In random effects models like the Fay–Herriot, estimation is built on the assumption that the effects associated with subgroups are drawn independently from a normal (Gaussian) distribution, whose variance is estimated from the data on each subgroup. It is more common to use a fixed-effects model instead for many systematically different groups. A mixed random effects model like the Fay–Herriot is preferred if there are not enough observations per group to reliably estimate the fixed effects, or if for some reason fixed effects would not be consistently estimated.
The Fay–Herriot is a two-stage hierarchical model. The parameters of the distributions within the groups are often assumed to be independent, or it is assumed that they are correlated to those measured for another variable.
Model structure and assumptions.
In classical Fay–Herriot (FH), the data used for estimation are aggregate estimates for the subgroups based on surveys.
The model can also be applied to microdata. Consider rows of observations numbered j=1 to J, in groups from i=1 to I, with predictive data formula_0 for dependent variable formula_1. If the model includes random effects only, it can be expressed by:
formula_2
A probability distribution is assumed for the random effects formula_3, typically a normal distribution. A different distribution can be assumed, e.g. if the sample distribution is known to have heavy tails.
Often fixed effects are included, making it a mixed model, with auxiliary data and economic or probability assumptions that make it possible to identify these effects separately from one another and from sampling variation formula_4.
Estimation.
The parameters of interest including the random effects are estimated together iteratively. Methods can include maximum likelihood estimation, the method of moments, or a Bayesian way.
Fay–Herriot models can be characterized either as mixed models, or in a hierarchical form, or a multilevel regression with poststratification.
The resulting estimates for each area (subgroup) are weighted averages from the direct estimates and indirect estimates based on estimates of variances.
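As a rough illustration of this weighting, the following Python sketch simulates hypothetical area-level data and forms a composite estimate for each area as a shrinkage-weighted average of the direct estimate and a regression-based indirect estimate. The covariate, the variance values, and the crude moment estimator of the random-effect variance are assumptions made for the example, not features of any particular software implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical area-level data: a direct survey estimate y[i] for each of I areas,
# a known sampling variance D[i], and one auxiliary covariate x[i] per area.
I = 20
x = rng.normal(size=I)
D = rng.uniform(0.3, 1.0, size=I)                 # sampling variances, assumed known
y = 1.0 + 2.0 * x + rng.normal(0, np.sqrt(0.5), I) + rng.normal(0, np.sqrt(D))

X = np.column_stack([np.ones(I), x])

def fh_estimates(y, X, D, sigma2_u):
    """Composite area estimates for a given random-effect variance sigma2_u."""
    V = sigma2_u + D                              # total variance per area
    W = np.diag(1.0 / V)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # GLS regression coefficients
    gamma = sigma2_u / V                          # shrinkage weights
    synthetic = X @ beta                          # indirect (regression) estimates
    return gamma * y + (1.0 - gamma) * synthetic  # weighted average per area

# Crude method-of-moments value for the random-effect variance; real software
# uses maximum likelihood, REML, or a Bayesian procedure instead.
resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
sigma2_u_hat = max(float(np.mean(resid**2 - D)), 0.0)
print(fh_estimates(y, X, D, sigma2_u_hat))
```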
Tests of consistency.
For random effects models to make consistent estimates, it is necessary that the subgroup-specific effects be uncorrelated to the other predictor variables in the model. If the subgroup-specific effects are correlated, then random effects estimation would be biased but fixed effects estimation would not be biased.
That correlation can be tested by running both the fixed effects and the random effects models and then applying the Hausman specification test. The test may not reject the hypothesis of no-correlation even when it is false, a Type II error, so that it cannot be definitively concluded that random effects estimation is unbiased even if the Hausman test fails to reject.
History.
Robert Fay and Roger Herriot of the U.S. Census Bureau developed the model to make estimates for populations in each of many geographic regions. The authors referred to the method as a James–Stein procedure and did not use the term "random effects." It is an area-level model. The model has been used for the same purpose, called small-area estimation, by other U.S. government agencies.
Rao and Molina's small area estimation text is sometimes characterized as a definitive source about the FH model.
Applications.
The FH model is used extensively in the Small Area Income and Poverty Estimates (SAIPE) program of the U.S. Census Bureau.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X_{ij}"
},
{
"math_id": 1,
"text": "Y_{ij}"
},
{
"math_id": 2,
"text": " Y_{ij} = \\mu + \\beta X_{ij} + U_i + \\epsilon_{ij} "
},
{
"math_id": 3,
"text": "U_i"
},
{
"math_id": 4,
"text": "\\epsilon_{ij}"
}
] |
https://en.wikipedia.org/wiki?curid=63186313
|
63191839
|
John H. Malmberg
|
American physicist
John Holmes Malmberg (July 5, 1927 – November 1, 1992) was an American plasma physicist and a professor at the University of California, San Diego. He was known for making the first experimental measurements of Landau damping of plasma waves in 1964, as well as for his research on non-neutral plasmas and the development of the Penning–Malmberg trap.
In 1985, Malmberg won the James Clerk Maxwell Prize for Plasma Physics for his experimental work on wave-particle interactions in neutral plasmas and his studies on pure electron plasmas. He was later co-awarded the John Dawson Award for Excellence in Plasma Physics Research in 1991 for his contribution to research on non-neutral plasmas.
Early life and career.
Malmberg studied at Illinois State University (bachelor 1949) and the University of Illinois at Urbana–Champaign (master 1951), where he received his doctorate in 1957. From 1957 to 1969, he was a staff scientist working in the area of plasma physics at General Atomics in San Diego, California. From 1967 until his death, he was a professor of physics at the University of California, San Diego (UCSD) in La Jolla, California.
In 1980, Malmberg was appointed to the first Plasma Sciences Committee of the National Research Council. In that capacity, he was a strong voice for the importance of basic plasma experiments in maintaining the health of plasma science. In an era when small-scale and basic plasma physics research was nearing an ebb, Malmberg emphasized the importance of being able to follow the internal logic of the science, which he believed to be of paramount importance in doing basic research.
Scientific contributions.
Landau damping of plasma waves.
Malmberg and Charles Wharton made the first experimental measurements of Landau damping of plasma waves in 1964, two decades after its prediction by Lev Landau. Since this damping is collisionless, the free energy and phase-space memory associated with the damped wave are not lost, but are subtly stored in the plasma. Malmberg and collaborators demonstrated explicitly the reversible nature of this process by observation of the plasma wave echo in which a wave “spontaneously” appears in the plasma as an ‘echo’ of two previously launched waves that had been Landau damped.
Penning–Malmberg traps and non-neutral plasmas.
Neutral plasmas are notoriously difficult to confine. In contrast, Malmberg and collaborators predicted and demonstrated experimentally that plasmas with a single sign of charge, such as pure electron or pure ion plasmas, can be confined for long periods (e.g., hours). This was accomplished using an arrangement of electric and magnetic fields similar to that of a Penning trap, but optimized to confine single-component plasmas. In recognition of Malmberg’s contributions to the development of these devices, they are now referred to as Penning–Malmberg traps.
Malmberg and collaborators realized that non-neutral plasmas offer research opportunities not available with neutral plasmas. In contrast to neutral plasmas, plasmas with a single sign of charge can reach states of global thermal equilibrium. The possibility of using thermal equilibrium statistical mechanics to describe the plasma provides a large advantage to theory. Furthermore, states near such thermal equilibria can be more easily controlled experimentally and departures from equilibrium studied with precision.
When a neutral plasma is cooled, it simply recombines; but a plasma with a single sign of charge can be cooled without recombination. Malmberg constructed a trap for a pure electron plasma with walls at 4.2 K. Cyclotron radiation from the electrons then cooled the plasma to a few Kelvin. Theory argued that electron-electron collisions in such a strongly magnetized and low temperature plasma would be qualitatively different than those in warmer plasmas. Malmberg measured the equipartition rate between electron velocity components parallel to and perpendicular to the magnetic field and confirmed the striking prediction that it decreases exponentially with decreasing temperature.
Malmberg and Thomas Michael O'Neil predicted that a very cold, single-species plasma would undergo a phase transition to a body-centered cubic crystalline state. Later, John Bollinger and collaborators created such a state by laser cooling a plasma of singly ionized beryllium ions to temperatures of a few millikelvin. In other experiments, trapped pure electron plasmas are used to model the two-dimensional (2D) vortex dynamics expected for an ideal fluid.
In the late 1980s, pure positron (i.e., antielectron) plasmas were created using the Penning–Malmberg trap technology. This, and advances in confining low-energy antiprotons, led to the creation of low-energy antihydrogen a decade later. These and subsequent developments have spawned a wealth of research with low-energy antimatter. This includes ever more precise studies of antihydrogen and comparison with the properties of hydrogen and formation of the di-positronium molecule (Psformula_0, formula_1) predicted by J. A. Wheeler in 1946. The Penning–Malmberg trap technology is now being used to create a new generation of high-quality positronium-atom (formula_2) beams for atomic physics studies.
In the broader view, Malmberg’s seminal studies with trapped single-component and non-neutral plasmas have stimulated vibrant sub-fields of plasma physics with surprisingly broad impacts in the wider world of physics.
Honors and awards.
In 1985, Malmberg received the James Clerk Maxwell Prize for Plasma Physics from the American Physical Society for "his outstanding experimental studies which expanded our understanding of wave-particle interactions in neutral plasmas and increased our confidence in plasma theory; and for his pioneering studies of the confinement and transport of pure electron plasmas".
And in 1991, he was co-awarded the John Dawson Award for Excellence in Plasma Physics Research with Charles F. Driscoll and Thomas Michael O'Neil, for their studies of single-component electron plasmas.
Legacy.
In 1993, the UCSD physics department established the John Holmes Malmberg Prize in his honor. It is awarded annually to an outstanding undergraduate physics major with interests in experimental physics.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "_2"
},
{
"math_id": 1,
"text": "e^+e^-e^+e^-"
},
{
"math_id": 2,
"text": "e^+e^-"
}
] |
https://en.wikipedia.org/wiki?curid=63191839
|
6319245
|
Camera resectioning
|
Process of estimating the parameters of a pinhole camera model
Camera resectioning is the process of estimating the parameters of a pinhole camera model approximating the camera that produced a given photograph or video; it determines which incoming light ray is associated with each pixel on the resulting image. Basically, the process determines the pose of the pinhole camera.
Usually, the camera parameters are represented in a 3 × 4 projection matrix called the "camera matrix".
The extrinsic parameters define the camera "pose" (position and orientation) while the intrinsic parameters specify the camera image format (focal length, pixel size, and image origin).
This process is often called geometric camera calibration or simply camera calibration, although that term may also refer to photometric camera calibration or be restricted for the estimation of the intrinsic parameters only. Exterior orientation and interior orientation refer to the determination of only the extrinsic and intrinsic parameters, respectively.
The classic camera calibration requires special objects in the scene, which is not required in "camera auto-calibration".
Camera resectioning is often used in the application of stereo vision where the camera projection matrices of two cameras are used to calculate the 3D world coordinates of a point viewed by both cameras.
Formulation.
The camera projection matrix is derived from the intrinsic and extrinsic parameters of the camera, and is often represented by the series of transformations; e.g., a matrix of camera intrinsic parameters, a 3 × 3 rotation matrix, and a translation vector. The camera projection matrix can be used to associate points in a camera's image space with locations in 3D world space.
Homogeneous coordinates.
In this context, we use formula_0 to represent a 2D point position in "pixel" coordinates and formula_1 is used to represent a 3D point position in "world" coordinates. In both cases, they are represented in homogeneous coordinates (i.e. they have an additional last component, which is initially, by convention, a 1), which is the most common notation in robotics and rigid body transforms.
Projection.
Referring to the pinhole camera model, a camera matrix formula_2 is used to denote a projective mapping from "world" coordinates to "pixel" coordinates.
formula_3
where formula_4. formula_5 by convention are the x and y coordinates of the pixel in the camera, formula_6 is the intrinsic matrix as described below, and formula_7 form the extrinsic matrix as described below. formula_8 are the coordinates of the source of the light ray which hits the camera sensor in world coordinates, relative to the origin of the world. By dividing the matrix product by formula_9, the theoretical value for the pixel coordinates can be found.
Intrinsic parameters.
formula_10
The intrinsic matrix formula_6 contains 5 intrinsic parameters of the specific camera model. These parameters encompass focal length, image sensor format, and camera principal point.
The parameters formula_11 and formula_12 represent focal length in terms of pixels, where formula_13 and formula_14 are the inverses of the width and height of a pixel on the projection plane and formula_15 is the focal length in terms of distance.
formula_16 represents the skew coefficient between the x and the y axis, and is often 0.
formula_17 and formula_18 represent the principal point, which would be ideally in the center of the image.
Nonlinear intrinsic parameters such as lens distortion are also important although they cannot be included in the linear camera model described by the intrinsic parameter matrix. Many modern camera calibration algorithms estimate these parameters as well, using non-linear optimisation techniques in which the camera and distortion parameters are refined jointly, a procedure generally known as bundle adjustment.
Extrinsic parameters.
formula_19
formula_20 are the extrinsic parameters which denote the coordinate system transformations from 3D world coordinates to 3D camera coordinates. Equivalently, the extrinsic parameters define the position of the camera center and the camera's heading in world coordinates. formula_21 is the position of the origin of the world coordinate system expressed in coordinates of the camera-centered coordinate system. formula_21 is often mistakenly considered the position of the camera. The position, formula_22, of the camera expressed in world coordinates is formula_23 (since formula_24 is a rotation matrix).
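The relationships above can be illustrated numerically. The following sketch uses made-up intrinsic and extrinsic values to build the camera matrix formula_4, project a world point to pixel coordinates by dividing out formula_9, and recover the camera centre as formula_23.

```python
import numpy as np

# Hypothetical intrinsic parameters: focal lengths in pixels, zero skew,
# principal point at the centre of a 640x480 sensor.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Hypothetical extrinsics: rotate 30 degrees about the y axis and translate.
theta = np.deg2rad(30)
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])
T = np.array([[0.1], [0.0], [2.0]])

M = K @ np.hstack([R, T])          # 3x4 camera matrix

# Project a world point given in homogeneous coordinates.
Xw = np.array([0.5, -0.2, 4.0, 1.0])
wu, wv, w = M @ Xw
u, v = wu / w, wv / w              # divide by w to obtain pixel coordinates
print(u, v)

# Camera centre in world coordinates: C = -R^T T (since R is a rotation matrix).
C = -R.T @ T
print(C.ravel())
```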
Camera calibration is often used as an early stage in computer vision.
When a camera is used, light from the environment is focused on an image plane and captured. This process reduces the dimensions of the data taken in by the camera from three to two (light from a 3D scene is stored on a 2D image). Each pixel on the image plane therefore corresponds to a shaft of light from the original scene.
Algorithms.
There are many different approaches to calculate the intrinsic and extrinsic parameters for a specific camera setup. The most common ones are:
Zhang's method.
Zhang's method is a camera calibration method that uses traditional calibration techniques (known calibration points) and self-calibration techniques (correspondence between the calibration points when they are in different positions). To perform a full calibration by the Zhang method, at least three different images of the calibration target/gauge are required, either by moving the gauge or the camera itself. If some of the intrinsic parameters are given as data (orthogonality of the image or optical center coordinates), the number of images required can be reduced to two.
In a first step, an approximation of the estimated projection matrix formula_25 between the calibration target and the image plane is determined using DLT method. Subsequently, self-calibration techniques are applied to obtain the image of the absolute conic matrix. The main contribution of Zhang's method is how to, given formula_26 poses of the calibration target, extract a constrained intrinsic matrix formula_6, along with formula_26 instances of formula_24 and formula_21 calibration parameters.
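In practice this procedure is rarely coded from scratch. The minimal sketch below uses OpenCV, whose calibrateCamera routine broadly follows a Zhang-style approach; the image path "calib/*.png", the 9 × 6 chessboard size, and the assumption that at least one usable image is found are all choices made only for the example.

```python
import glob
import cv2
import numpy as np

# Planar calibration target: chessboard corners at z = 0 in target coordinates.
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
gray = None
for fname in glob.glob("calib/*.png"):      # assumed location of the target images
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern, None)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Returns the intrinsic matrix K, distortion coefficients, and one (R, T)
# pose per view of the target (here gray is assumed to hold the last image read).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
print(K)
```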
Derivation.
Assume we have a homography formula_27 that maps points formula_28 on a "probe plane" formula_29 to points formula_30 on the image.
The circular points formula_31 lie on both our probe plane formula_29 and on the absolute conic formula_32. Lying on formula_32 of course means they are also projected onto the "image" of the absolute conic (IAC) formula_33, thus formula_34 and formula_35. The circular points project as
formula_36.
We can actually ignore formula_37 while substituting our new expression for formula_38 as follows:
formula_39
Tsai's algorithm.
Tsai's algorithm, a significant method in camera calibration, involves several detailed steps for accurately determining a camera's orientation and position in 3D space. The procedure, while technical, can be generally broken down into three main stages:
Initial Calibration.
The process begins with the initial calibration stage, where a series of images are captured by the camera. These images, often featuring a known calibration pattern like a checkerboard, are used to estimate intrinsic camera parameters such as focal length and optical center.
Pose Estimation.
Following initial calibration, the algorithm undertakes pose estimation. This involves calculating the camera's position and orientation relative to a known object in the scene. The process typically requires identifying specific points in the calibration pattern and solving for the camera's rotation and translation vectors.
Refinement of Parameters.
The final phase is the refinement of parameters. In this stage, the algorithm refines the lens distortion coefficients, addressing radial and tangential distortions. Further optimization of internal and external camera parameters is performed to enhance the calibration accuracy.
This structured approach has positioned Tsai's Algorithm as a pivotal technique in both academic research and practical applications within robotics and industrial metrology.
Selby's method (for X-ray cameras).
Selby's camera calibration method addresses the auto-calibration of X-ray camera systems.
X-ray camera systems, consisting of the X-ray generating tube and a solid state detector can be modelled as pinhole camera systems, comprising 9 intrinsic and extrinsic camera parameters.
Intensity based registration based on an arbitrary X-ray image and a reference model (as a tomographic dataset) can then be used to determine the relative camera parameters without the need of a special calibration body or any ground-truth data.
|
[
{
"math_id": 0,
"text": "[u\\ v\\ 1]^T"
},
{
"math_id": 1,
"text": "[x_w\\ y_w\\ z_w\\ 1]^T"
},
{
"math_id": 2,
"text": "M"
},
{
"math_id": 3,
"text": "\\begin{bmatrix}\nwu\\\\\nwv\\\\\nw\\end{bmatrix}=K\\, \\begin{bmatrix}\nR & T\\end{bmatrix}\\begin{bmatrix}\nx_{w}\\\\\ny_{w}\\\\\nz_{w}\\\\\n1\\end{bmatrix}\n=M \\begin{bmatrix}\nx_{w}\\\\\ny_{w}\\\\\nz_{w}\\\\\n1\\end{bmatrix}\n"
},
{
"math_id": 4,
"text": " M = K\\, \\begin{bmatrix} R & T\\end{bmatrix}"
},
{
"math_id": 5,
"text": " u,v "
},
{
"math_id": 6,
"text": "K"
},
{
"math_id": 7,
"text": "R\\,T"
},
{
"math_id": 8,
"text": "x_{w},y_{w},z_{w}"
},
{
"math_id": 9,
"text": "w"
},
{
"math_id": 10,
"text": "K=\\begin{bmatrix}\n\\alpha_{x} & \\gamma & u_{0}\\\\\n0 & \\alpha_{y} & v_{0}\\\\\n0 & 0 & 1\\end{bmatrix}"
},
{
"math_id": 11,
"text": "\\alpha_{x} = f \\cdot m_{x}"
},
{
"math_id": 12,
"text": "\\alpha_{y} = f \\cdot m_{y}"
},
{
"math_id": 13,
"text": "m_{x}"
},
{
"math_id": 14,
"text": "m_{y}"
},
{
"math_id": 15,
"text": "f"
},
{
"math_id": 16,
"text": "\\gamma"
},
{
"math_id": 17,
"text": "u_{0}"
},
{
"math_id": 18,
"text": "v_{0}"
},
{
"math_id": 19,
"text": "{}\\begin{bmatrix}R_{3 \\times 3} & T_{3 \\times 1} \\\\\n0_{1 \\times 3} & 1\\end{bmatrix}_{4 \\times 4}"
},
{
"math_id": 20,
"text": "R,T"
},
{
"math_id": 21,
"text": "T"
},
{
"math_id": 22,
"text": "C"
},
{
"math_id": 23,
"text": "C = -R^{-1}T = -R^T T"
},
{
"math_id": 24,
"text": "R"
},
{
"math_id": 25,
"text": "H"
},
{
"math_id": 26,
"text": "n"
},
{
"math_id": 27,
"text": "\\textbf{H}"
},
{
"math_id": 28,
"text": "x_\\pi"
},
{
"math_id": 29,
"text": "\\pi"
},
{
"math_id": 30,
"text": "x"
},
{
"math_id": 31,
"text": "I, J = \\begin{bmatrix}1 & \\pm j & 0\\end{bmatrix}^{\\mathrm{T}}"
},
{
"math_id": 32,
"text": "\\Omega_\\infty"
},
{
"math_id": 33,
"text": "\\omega"
},
{
"math_id": 34,
"text": "x_1^T \\omega x_1= 0"
},
{
"math_id": 35,
"text": "x_2^T \\omega x_2= 0"
},
{
"math_id": 36,
"text": "\n\\begin{align}\nx_1 & = \\textbf{H} I = \n\\begin{bmatrix}\nh_1 & h_2 & h_3\n\\end{bmatrix}\n\\begin{bmatrix}\n1 \\\\\nj \\\\\n0\n\\end{bmatrix}\n= h_1 + j h_2\n\\\\\nx_2 & = \\textbf{H} J =\n\\begin{bmatrix}\nh_1 & h_2 & h_3\n\\end{bmatrix}\n\\begin{bmatrix}\n1 \\\\\n-j \\\\\n0\n\\end{bmatrix}\n= h_1 - j h_2\n\\end{align}\n"
},
{
"math_id": 37,
"text": "x_2"
},
{
"math_id": 38,
"text": "x_1"
},
{
"math_id": 39,
"text": "\n\\begin{align}\nx_1^T \\omega x_1 &= \\left ( h_1 + j h_2 \\right )^T \\omega \\left ( h_1 + j h_2 \\right ) \\\\\n &= \\left ( h_1^T + j h_2^T \\right ) \\omega \\left ( h_1 + j h_2 \\right ) \\\\\n &= h_1^T \\omega h_1 + j \\left ( h_2^T \\omega h_2 \\right ) \\\\\n &= 0\n\\end{align}\n"
}
] |
https://en.wikipedia.org/wiki?curid=6319245
|
631930
|
Membrane transport
|
Transportation of solutes through membranes
In cellular biology, membrane transport refers to the collection of mechanisms that regulate the passage of solutes such as ions and small molecules through biological membranes, which are lipid bilayers that contain proteins embedded in them. The regulation of passage through the membrane is due to selective membrane permeability – a characteristic of biological membranes which allows them to separate substances of distinct chemical nature. In other words, they can be permeable to certain substances but not to others.
The movements of most solutes through the membrane are mediated by membrane transport proteins which are specialized to varying degrees in the transport of specific molecules. As the diversity and physiology of the distinct cells is highly related to their capacities to attract different external elements, it is postulated that there is a group of specific transport proteins for each cell type and for every specific physiological stage. This differential expression is regulated through the differential transcription of the genes coding for these proteins and its translation, for instance, through genetic-molecular mechanisms, but also at the cell biology level: the production of these proteins can be activated by cellular signaling pathways, at the biochemical level, or even by being situated in cytoplasmic vesicles. The cell membrane regulates the transport of materials entering and exiting the cell.
Background.
Thermodynamically the flow of substances from one compartment to another can occur in the direction of a concentration or electrochemical gradient or against it. If the exchange of substances occurs in the direction of the gradient, that is, in the direction of decreasing potential, there is no requirement for an input of energy from outside the system; if, however, the transport is against the gradient, it will require the input of energy, metabolic energy in this case.
For example, a classic chemical mechanism for separation that does not require the addition of external energy is dialysis. In this system a semipermeable membrane separates two solutions of different concentration of the same solute. If the membrane allows the passage of water but not the solute the water will move into the compartment with the greatest solute concentration in order to establish an equilibrium in which the energy of the system is at a minimum. This takes place because the water moves from a high solvent concentration to a low one (in terms of the solute, the opposite occurs) and because the water is moving along a gradient there is no need for an external input of energy.
The nature of biological membranes, especially that of its lipids, is amphiphilic, as they form bilayers that contain an internal hydrophobic layer and an external hydrophilic layer. This structure makes transport possible by simple or passive diffusion, which consists of the diffusion of substances through the membrane without expending metabolic energy and without the aid of transport proteins. If the transported substance has a net electrical charge, it will move not only in response to a concentration gradient, but also to an electrochemical gradient due to the membrane potential.
As few molecules are able to diffuse through a lipid membrane the majority of the transport processes involve transport proteins. These transmembrane proteins possess a large number of alpha helices immersed in the lipid matrix. In bacteria these proteins are present in the beta lamina form. This structure probably involves a conduit through hydrophilic protein environments that cause a disruption in the highly hydrophobic medium formed by the lipids. These proteins can be involved in transport in a number of ways: they act as pumps driven by ATP, that is, by metabolic energy, or as channels of facilitated diffusion.
Thermodynamics.
A physiological process can only take place if it complies with basic thermodynamic principles. Membrane transport obeys physical laws that define its capabilities and therefore its biological utility.
A general principle of thermodynamics that governs the transfer of substances through membranes and other surfaces is that the exchange of free energy, Δ"G", for the transport of a mole of a substance of concentration C1 in a compartment to another compartment where it is present at C2 is:
formula_0
When C2 is less than C1, Δ"G" is negative, and the process is thermodynamically favorable. As the energy is transferred from one compartment to another, except where other factors intervene, an equilibrium will be reached where C2=C1, and where Δ"G" = 0. However, there are three circumstances under which this equilibrium will not be reached, circumstances which are vital for the "in vivo" functioning of biological membranes:
formula_1
Where F is Faraday's constant and Δ"P" the membrane potential in volts. If Δ"P" is negative and Z is positive, the contribution of the term "ZFΔP" to Δ"G" will be negative, that is, it will favor the transport of cations from the interior of the cell. So, if the potential difference is maintained, the equilibrium state Δ"G" = 0 will not correspond to an equimolar concentration of ions on both sides of the membrane.
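A worked example of this expression: the sketch below evaluates formula_1 for an ion crossing a membrane, using illustrative textbook-style values for sodium concentrations and membrane potential rather than data from any particular cell.

```python
import math

R = 8.314        # gas constant, J/(mol*K)
F = 96485.0      # Faraday constant, C/mol
T = 310.0        # roughly body temperature, K

def transport_free_energy(c_in, c_out, z=0, delta_p=0.0):
    """Free-energy change (J/mol) for moving one mole of solute into the cell."""
    return R * T * math.log(c_in / c_out) + z * F * delta_p

# Illustrative values for Na+: about 10 mM inside, 145 mM outside,
# and a membrane potential of roughly -70 mV (inside negative).
dG = transport_free_energy(c_in=0.010, c_out=0.145, z=+1, delta_p=-0.070)
print(f"{dG / 1000:.1f} kJ/mol")   # negative, so inward Na+ movement is favourable
```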
formula_2
Where Δ"Gb" corresponds to a favorable thermodynamic reaction, such as the hydrolysis of ATP, or the co-transport of a compound that is moved in the direction of its gradient.
Transport types.
Passive diffusion and active diffusion.
As mentioned above, passive diffusion is a spontaneous phenomenon that increases the entropy of a system and decreases the free energy. The transport process is influenced by the characteristics of the transport substance and the nature of the bilayer. The diffusion velocity of a pure phospholipid membrane will depend on:
Active and co-transport.
In active transport a solute is moved against a concentration or electrochemical gradient; in doing so the transport proteins involved consume metabolic energy, usually ATP. In primary active transport the hydrolysis of the energy provider (e.g. ATP) takes place directly in order to transport the solute in question, for instance, when the transport proteins are ATPase enzymes. Where the hydrolysis of the energy provider is indirect as is the case in secondary active transport, use is made of the energy stored in an electrochemical gradient. For example, in co-transport use is made of the gradients of certain solutes to transport a target compound against its gradient, causing the dissipation of the solute gradient. It may appear that, in this example, there is no energy use, but hydrolysis of the energy provider is required to establish the gradient of the solute transported along with the target compound. The gradient of the co-transported solute will be generated through the use of certain types of proteins called biochemical pumps.
The discovery of the existence of this type of transporter protein came from the study of the kinetics of cross-membrane molecule transport. For certain solutes it was noted that the transport velocity reached a plateau at a particular concentration above which there was no significant increase in uptake rate, indicating a log curve type response. This was interpreted as showing that transport was mediated by the formation of a substrate-transporter complex, which is conceptually the same as the enzyme-substrate complex of enzyme kinetics. Therefore, each transport protein has an affinity constant for a solute that is equal to the concentration of the solute when the transport velocity is half its maximum value. This is equivalent in the case of an enzyme to the Michaelis–Menten constant.
Some important features of active transport, in addition to its ability to proceed even against a gradient, its characteristic kinetics and its use of ATP, are its high selectivity and the ease with which it can be selectively inhibited pharmacologically.
Secondary active transporter proteins.
Secondary active transporter proteins move two molecules at the same time: one against a gradient and the other with its gradient. They are distinguished according to the directionality of the two molecules:
Both can be referred to as co-transporters.
Pumps.
A pump is a protein that hydrolyses ATP to transport a particular solute through a membrane, and in doing so, generating an electrochemical gradient membrane potential. This gradient is of interest as an indicator of the state of the cell through parameters such as the Nernst potential. In terms of membrane transport the gradient is of interest as it contributes to decreased system entropy in the co-transport of substances against their gradient.
One of the most important pumps in animal cells is the sodium potassium pump, that operates through the following mechanism:
Membrane selectivity.
As the main characteristic of transport through a biological membrane is its selectivity and its subsequent behavior as a barrier for certain substances, the underlying physiology of the phenomenon has been studied extensively. Investigation into membrane selectivity have classically been divided into those relating to electrolytes and non-electrolytes.
Electrolyte selectivity.
The ionic channels define an internal diameter that permits the passage of small ions that is related to various characteristics of the ions that could potentially be transported. As the size of the ion is related to its chemical species, it could be assumed "a priori" that a channel whose pore diameter was sufficient to allow the passage of one ion would also allow the transfer of others of smaller size, however, this does not occur in the majority of cases. There are two characteristics alongside size that are important in the determination of the selectivity of the membrane pores: the facility for dehydration and the interaction of the ion with the internal charges of the pore.
In order for an ion to pass through a pore it must dissociate itself from the water molecules that cover it in successive layers of solvation. The tendency to dehydrate, or the facility to do this, is related to the size of the ion: larger ions can do it more easily than the smaller ions, so that a pore with weak polar centres will preferentially allow passage of larger ions over the smaller ones.
When the interior of the channel is composed of polar groups from the side chains of the component amino acids, the interaction of a dehydrated ion with these centres can be more important than the facility for dehydration in conferring the specificity of the channel. For example, a channel made up of histidines and arginines, with positively charged groups, will selectively repel ions of the same polarity, but will facilitate the passage of negatively charged ions. Also, in this case, the smallest ions will be able to interact more closely due to the spatial arrangement of the molecule (stericity), which greatly increases the charge-charge interactions and therefore exaggerates the effect.
Non-electrolyte selectivity.
Non-electrolytes, substances that generally are hydrophobic and lipophilic, usually pass through the membrane by dissolution in the lipid bilayer, and therefore, by passive diffusion. For those non-electrolytes whose transport through the membrane is mediated by a transport protein the ability to diffuse is, generally, dependent on the partition coefficient K.
Partially charged non-electrolytes, that are more or less polar, such as ethanol, methanol or urea, are able to pass through the membrane through aqueous channels immersed in the membrane. There is no effective regulation mechanism that limits this transport, which indicates an intrinsic vulnerability of the cells to the penetration of these molecules.
Creation of membrane transport proteins.
There are several databases which attempt to construct phylogenetic trees detailing the creation of transporter proteins. One such resource is the Transporter Classification Database.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\Delta G = RT \\log \\frac{C_2}{C_1}"
},
{
"math_id": 1,
"text": "\\Delta G = RT \\log \\frac{C_{inside}}{C_{outside}}+ZF \\Delta P"
},
{
"math_id": 2,
"text": "\\Delta G = RT \\log \\frac{C_\\text{inside}}{C_\\text{outside}}+\\Delta G^b"
}
] |
https://en.wikipedia.org/wiki?curid=631930
|
632085
|
Private product remaining
|
Economic indicator
Private Product Remaining or PPR is a means of national income accounting similar to the more commonly encountered GNP. Since government is financed through taxation and any resulting output is not (usually) sold on the market, what value is ascribed to it is disputed (see calculation problem), and it is counted in GNP. Murray Rothbard developed the GPP (Gross Private Product) and PPR measures. GPP is GNP minus income originating in government and government enterprises. PPR is GPP minus the higher of government expenditures and tax revenues plus interest received.
formula_0
For example, in an economy in which the private expenditures total $1,000 and government expenditures total $200, the GNP would be $1,200, GPP would be $1,000, and PPR would be $800.
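The arithmetic of the example can be written out explicitly; the figures below are the hypothetical ones used above, with income originating in government taken to equal government expenditures for simplicity.

```python
# Stylized figures from the example above, in dollars.
private_spending = 1_000      # C + I + (X - M)
government_spending = 200     # G, assumed here to be at least as large as tax revenue

GNP = private_spending + government_spending   # 1200
GPP = GNP - government_spending                # 1000: remove income originating in government
PPR = GPP - government_spending                # 800: subtract the larger of spending and taxes
print(GNP, GPP, PPR)
```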
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "PPR = C + I - G + (X - M) \\,"
}
] |
https://en.wikipedia.org/wiki?curid=632085
|
63209698
|
Matrix factorization (algebra)
|
In homological algebra, a branch of mathematics, a matrix factorization is a tool used to study infinitely long resolutions, generally over commutative rings.
Motivation.
One of the problems with non-smooth algebras, such as Artin algebras, is that their derived categories are poorly behaved due to infinite projective resolutions. For example, in the ring formula_0 there is an infinite resolution of the formula_1-module formula_2 where
formula_3
Instead of looking only at the derived category of the module category, David Eisenbud studied such resolutions by looking at their periodicity. In general, such resolutions are periodic with period formula_4 after finitely many objects in the resolution.
Definition.
For a commutative ring formula_5 and an element formula_6, a matrix factorization of formula_7 is a pair of "n"-by-"n" matrices formula_8 such that formula_9. This can be encoded more generally as a formula_10-graded formula_5-module formula_11 with an endomorphism
formula_12
such that formula_13.
Examples.
(1) For formula_14 and formula_15 there is a matrix factorization formula_16 where formula_17 for formula_18.
(2) If formula_19 and formula_20, then there is a matrix factorization formula_21 where
formula_22
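The defining identity formula_9 for example (2) can be checked directly by symbolic computation; the following SymPy sketch verifies that both products of the two matrices equal formula_7 times the identity.

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
f = x*y + x*z + y*z

d0 = sp.Matrix([[z, y],
                [x, -x - y]])
d1 = sp.Matrix([[x + y, y],
                [x, -z]])

# Both products equal f times the 2x2 identity matrix, so (d0, d1) is a
# matrix factorization of f.
zero = sp.zeros(2, 2)
assert (d0 * d1 - f * sp.eye(2)).applyfunc(sp.expand) == zero
assert (d1 * d0 - f * sp.eye(2)).applyfunc(sp.expand) == zero
print("d0*d1 = d1*d0 =", f, "* Id")
```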
Periodicity.
Main theorem.
Given a regular local ring formula_1 and an ideal formula_23 generated by an formula_24-sequence, set formula_25 and let
formula_26
be a minimal formula_27-free resolution of the ground field. Then formula_28 becomes periodic after at most formula_29 steps. https://www.youtube.com/watch?v=2Jo5eCv9ZVY
Maximal Cohen-Macaulay modules.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "R = \\mathbb{C}[x]/(x^2)"
},
{
"math_id": 1,
"text": "R"
},
{
"math_id": 2,
"text": "\\mathbb{C}"
},
{
"math_id": 3,
"text": "\\cdots \\xrightarrow{\\cdot x} R \\xrightarrow{\\cdot x} R \\xrightarrow{\\cdot x} R \\to \\mathbb{C} \\to 0"
},
{
"math_id": 4,
"text": "2"
},
{
"math_id": 5,
"text": "S"
},
{
"math_id": 6,
"text": "f \\in S"
},
{
"math_id": 7,
"text": "f"
},
{
"math_id": 8,
"text": "A,B"
},
{
"math_id": 9,
"text": "AB = f \\cdot \\text{Id}_n"
},
{
"math_id": 10,
"text": "\\mathbb{Z}/2"
},
{
"math_id": 11,
"text": "M = M_0\\oplus M_1"
},
{
"math_id": 12,
"text": "d = \\begin{bmatrix}0 & d_1 \\\\ d_0 & 0 \\end{bmatrix}"
},
{
"math_id": 13,
"text": "d^2 = f \\cdot \\text{Id}_M"
},
{
"math_id": 14,
"text": "S = \\mathbb{C}[[x]]"
},
{
"math_id": 15,
"text": "f = x^n"
},
{
"math_id": 16,
"text": "d_0:S \\rightleftarrows S:d_1"
},
{
"math_id": 17,
"text": "d_0=x^i, d_1 = x^{n-i}"
},
{
"math_id": 18,
"text": "0 \\leq i \\leq n"
},
{
"math_id": 19,
"text": "S = \\mathbb{C}[[x,y,z]]"
},
{
"math_id": 20,
"text": "f = xy + xz + yz"
},
{
"math_id": 21,
"text": "d_0:S^2 \\rightleftarrows S^2:d_1"
},
{
"math_id": 22,
"text": "d_0 = \\begin{bmatrix} z & y \\\\ x & -x-y \\end{bmatrix} \\text{ } d_1 = \\begin{bmatrix} x+y & y \\\\ x & -z \\end{bmatrix}"
},
{
"math_id": 23,
"text": "I \\subset R"
},
{
"math_id": 24,
"text": "A"
},
{
"math_id": 25,
"text": "B = A/I"
},
{
"math_id": 26,
"text": "\\cdots \\to F_2 \\to F_1 \\to F_0 \\to 0"
},
{
"math_id": 27,
"text": "B"
},
{
"math_id": 28,
"text": "F_\\bullet"
},
{
"math_id": 29,
"text": "1 + \\text{dim}(B)"
}
] |
https://en.wikipedia.org/wiki?curid=63209698
|
6320997
|
Dense order
|
In mathematics, a partial order or total order < on a set formula_0 is said to be dense if, for all formula_1 and formula_2 in formula_0 for which formula_3, there is a formula_4 in formula_0 such that formula_5. That is, for any two elements, one less than the other, there is another element between them. For total orders this can be simplified to "for any two distinct elements, there is another element between them", since all elements of a total order are comparable.
Example.
The rational numbers as a linearly ordered set are a densely ordered set in this sense, as are the algebraic numbers, the real numbers, the dyadic rationals and the decimal fractions. In fact, every Archimedean ordered ring extension of the integers formula_6 is a densely ordered set.
<templatestyles src="Math_proof/styles.css" />Proof
For the element formula_7, due to the Archimedean property, if formula_8, there exists a largest integer formula_9 with formula_10, and if formula_11, formula_12, and there exists a largest integer formula_13 with formula_14. As a result, formula_15. For any two elements formula_16 with formula_17, formula_18 and formula_19. Therefore formula_6 is dense.
On the other hand, the linear ordering on the integers is not dense.
Uniqueness for total dense orders without endpoints.
Georg Cantor proved that every two non-empty dense totally ordered countable sets without lower or upper bounds are order-isomorphic. This makes the theory of dense linear orders without bounds an example of an ω-categorical theory where ω is the smallest limit ordinal. For example, there exists an order-isomorphism between the rational numbers and other densely ordered countable sets including the dyadic rationals and the algebraic numbers. The proofs of these results use the back-and-forth method.
Minkowski's question mark function can be used to determine the order isomorphisms between the quadratic algebraic numbers and the rational numbers, and between the rationals and the dyadic rationals.
Generalizations.
Any binary relation "R" is said to be "dense" if, for all "R"-related "x" and "y", there is a "z" such that "x" and "z" and also "z" and "y" are "R"-related. Formally:
formula_20 Alternatively, in terms of composition of "R" with itself, the dense condition may be expressed as "R" ⊆ ("R" ; "R").
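For a relation on a finite set this condition can be checked by brute force; the following Python sketch implements the definition literally (the example relations are chosen only for illustration).

```python
from itertools import product

def is_dense(R, X):
    """R is dense iff for every (x, y) in R there is a z with (x, z) and (z, y) in R."""
    return all(any((x, z) in R and (z, y) in R for z in X) for (x, y) in R)

# The strict order < restricted to a finite set of integers is not dense ...
X = [0, 1, 2, 3]
lt = {(a, b) for a, b in product(X, X) if a < b}
print(is_dense(lt, X))    # False: nothing in X lies strictly between 0 and 1

# ... but the universal relation on any non-empty set is dense.
full = set(product(X, X))
print(is_dense(full, X))  # True
```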
Sufficient conditions for a binary relation "R" on a set "X" to be dense are:
None of them are necessary. For instance, there is a relation R that is not reflexive but dense.
A non-empty and dense relation cannot be antitransitive.
A strict partial order < is a dense order if and only if < is a dense relation. A dense relation that is also transitive is said to be idempotent.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "y"
},
{
"math_id": 3,
"text": "x < y"
},
{
"math_id": 4,
"text": "z"
},
{
"math_id": 5,
"text": "x < z < y"
},
{
"math_id": 6,
"text": "\\mathbb{Z}[x]"
},
{
"math_id": 7,
"text": "x\\in\\mathbb{Z}[x]"
},
{
"math_id": 8,
"text": "x > 0"
},
{
"math_id": 9,
"text": "n < x"
},
{
"math_id": 10,
"text": "n < x < n + 1"
},
{
"math_id": 11,
"text": "x < 0"
},
{
"math_id": 12,
"text": "-x > 0"
},
{
"math_id": 13,
"text": "m = -n - 1 < -x"
},
{
"math_id": 14,
"text": "-n - 1 < -x < -n"
},
{
"math_id": 15,
"text": "0 < x - n < 1"
},
{
"math_id": 16,
"text": "y, z\\in\\mathbb{Z}[x]"
},
{
"math_id": 17,
"text": "z < y"
},
{
"math_id": 18,
"text": "0 < (x - n)(y - z) < y - z"
},
{
"math_id": 19,
"text": "z < (x - n)(y - z) + z < y"
},
{
"math_id": 20,
"text": " \\forall x\\ \\forall y\\ xRy\\Rightarrow (\\exists z\\ xRz \\land zRy)."
},
{
"math_id": 21,
"text": "A"
},
{
"math_id": 22,
"text": "\\Box\\Box A \\rightarrow \\Box A"
}
] |
https://en.wikipedia.org/wiki?curid=6320997
|
6321015
|
Spike-triggered average
|
The spike-triggered average (STA) is a tool for characterizing the response properties of a neuron using the spikes emitted in response to a time-varying stimulus. The STA provides an estimate of a neuron's linear receptive field. It is a useful technique for the analysis of electrophysiological data.
Mathematically, the STA is the average stimulus preceding a spike. To compute the STA, the stimulus in the time window preceding each spike is extracted, and the resulting (spike-triggered) stimuli are averaged (see diagram). The STA provides an unbiased estimate of a neuron's receptive field only if the stimulus distribution is spherically symmetric (e.g., Gaussian white noise).
The STA has been used to characterize retinal ganglion cells, neurons in the lateral geniculate nucleus and simple cells in the striate cortex (V1) . It can be used to estimate the linear stage of the linear-nonlinear-Poisson (LNP) cascade model. The approach has also been used to analyze how transcription factor dynamics control gene regulation within individual cells.
Spike-triggered averaging is also commonly referred to as reverse correlation or white-noise analysis. The STA is well known as the first term in the Volterra kernel or Wiener kernel series expansion. It is closely related to linear regression, and identical to it in common circumstances.
Mathematical definition.
Standard STA.
Let formula_0 denote the spatio-temporal stimulus vector preceding the formula_1'th time bin, and formula_2 the spike count in that bin. The stimuli can be assumed to have zero mean (i.e., formula_3). If not, they can be transformed to have zero mean by subtracting the mean stimulus from each vector. The STA is given by
formula_4
where formula_5, the total number of spikes.
This equation is more easily expressed in matrix notation: let formula_6 denote a matrix whose formula_1'th row is the stimulus vector formula_7 and let formula_8 denote a column vector whose formula_1th element is formula_2. Then the STA can be written
formula_9
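The computation is a single matrix-vector product. The sketch below simulates a hypothetical cell with Gaussian white-noise stimuli and Poisson spike counts (the filter, nonlinearity, and sample sizes are assumptions for the example) and forms the STA as formula_9.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated experiment: T time bins, a d-dimensional stimulus vector per bin,
# and spike counts generated from a hypothetical LNP-style cell.
T, d = 50_000, 20
X = rng.standard_normal((T, d))            # Gaussian white-noise stimuli, zero mean
k_true = np.exp(-np.arange(d) / 4.0)       # assumed "true" linear filter
rate = np.log1p(np.exp(X @ k_true - 2.0))  # softplus nonlinearity
y = rng.poisson(rate)                      # spike counts per bin

n_sp = y.sum()
sta = X.T @ y / n_sp                       # average stimulus preceding a spike
print(np.corrcoef(sta, k_true)[0, 1])      # should be close to 1 for white noise
```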
Whitened STA.
If the stimulus is not white noise, but instead has non-zero correlation across space or time, the standard STA provides a biased estimate of the linear receptive field. It may therefore be appropriate to whiten the STA by the inverse of the stimulus covariance matrix. This resolves the spatial dependency issue, however we still assume the stimulus is temporally independent. The resulting estimator is known as the whitened STA, which is given by
formula_10
where the first term is the inverse covariance matrix of the raw stimuli and the second is the standard STA. In matrix notation, this can be written
formula_11
The whitened STA is unbiased only if the stimulus distribution can be described by a correlated Gaussian distribution (correlated Gaussian distributions are elliptically symmetric, i.e. can be made spherically symmetric by a linear transformation, but not all elliptically symmetric distributions are Gaussian). This is a weaker condition than spherical symmetry.
The whitened STA is equivalent to linear least-squares regression of the stimulus against the spike train.
Regularized STA.
In practice, it may be necessary to regularize the whitened STA, since whitening amplifies noise along stimulus dimensions that are poorly explored by the stimulus (i.e., axes along which the stimulus has low variance). A common approach to this problem is ridge regression. The regularized STA, computed using ridge regression, can be written
formula_12
where formula_13 denotes the identity matrix and formula_14 is the ridge parameter controlling the amount of regularization. This procedure has a simple Bayesian interpretation: ridge regression is equivalent to placing a prior on the STA elements that says they are drawn i.i.d. from a zero-mean Gaussian prior with covariance proportional to the identity matrix. The ridge parameter sets the inverse variance of this prior, and is usually fit by cross-validation or empirical Bayes.
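Both the whitened and the ridge-regularized estimates are one linear solve each. The sketch below reuses the same kind of simulated formula_6 and formula_8 as above; with a white-noise stimulus the three estimates differ little, but with correlated stimuli the differences can be substantial. The ridge parameter value here is an arbitrary choice for illustration, not a recommendation.

```python
import numpy as np

rng = np.random.default_rng(1)
T, d, lam = 50_000, 20, 10.0                # lam: ridge parameter, normally cross-validated
X = rng.standard_normal((T, d))             # white-noise stimuli (whitening matters most
                                            # when the stimuli are correlated)
k_true = np.exp(-np.arange(d) / 4.0)
y = rng.poisson(np.log1p(np.exp(X @ k_true - 2.0)))
n_sp = y.sum()

sta = X.T @ y / n_sp                                          # standard STA
sta_whitened = np.linalg.solve(X.T @ X / T, sta)              # whitened STA
sta_ridge = (T / n_sp) * np.linalg.solve(
    X.T @ X + lam * np.eye(d), X.T @ y)                       # regularized STA
print(np.corrcoef(sta_ridge, k_true)[0, 1])
```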
Statistical properties.
For responses generated according to an LNP model, the whitened STA provides an estimate of the subspace spanned by the linear receptive field. The properties of this estimate are as follows
Consistency.
The whitened STA is a consistent estimator, i.e., it converges to the true linear subspace, if
Optimality.
The whitened STA is an asymptotically efficient estimator if
For arbitrary stimuli, the STA is generally not consistent or efficient. For such cases, maximum likelihood and information-based estimators have been developed that are both consistent and efficient.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbf{x_i}"
},
{
"math_id": 1,
"text": "i"
},
{
"math_id": 2,
"text": "y_i"
},
{
"math_id": 3,
"text": "E[\\mathbf{x}]=0"
},
{
"math_id": 4,
"text": "\\mathrm{STA} = \\tfrac{1}{n_{sp}}\\sum_{i=1}^T y_i \\mathbf{x_i},"
},
{
"math_id": 5,
"text": "n_{sp} = \\sum y_i"
},
{
"math_id": 6,
"text": "X"
},
{
"math_id": 7,
"text": "\\mathbf{x_i^T}"
},
{
"math_id": 8,
"text": "\\mathbf{y}"
},
{
"math_id": 9,
"text": "\\mathrm{STA} = \\tfrac{1}{n_{sp}} X^T \\mathbf{y}. "
},
{
"math_id": 10,
"text": "\\mathrm{STA}_w = \\left(\\tfrac{1}{T}\\sum_{i=1}^T\\mathbf{x_i}\\mathbf{x_i}^T\\right)^{-1} \\left(\\tfrac{1}{n_{sp}} \\sum_{i=1}^T y_i \\mathbf{x_i}\\right),"
},
{
"math_id": 11,
"text": "\\mathrm{STA}_w = \\tfrac{T}{n_{sp}} \\left(X^TX\\right)^{-1}X^T \\mathbf{y}. "
},
{
"math_id": 12,
"text": "\\mathrm{STA}_{ridge} = \\tfrac{T}{n_{sp}} \\left(X^TX + \\lambda I\\right)^{-1}X^T \\mathbf{y},"
},
{
"math_id": 13,
"text": "I"
},
{
"math_id": 14,
"text": "\\lambda"
},
{
"math_id": 15,
"text": "P(\\mathbf{x})"
},
{
"math_id": 16,
"text": "exp(x)"
}
] |
https://en.wikipedia.org/wiki?curid=6321015
|
63211357
|
Thermoneutral voltage
|
In electrochemistry, a thermoneutral voltage is a voltage drop across an electrochemical cell which is sufficient not only to drive the cell reaction, but to also provide the heat necessary to maintain a constant temperature. For a reaction of the form
formula_0
The thermoneutral voltage is given by
formula_1
where formula_2 is the change in enthalpy and "F" is the Faraday constant.
Explanation.
For a cell reaction characterized by the chemical equation:
formula_0
at constant temperature and pressure, the thermodynamic voltage (minimum voltage required to drive the reaction) is given by the Nernst equation:
formula_3
where formula_4 is the Gibbs energy and "F" is the Faraday constant. The standard thermodynamic voltage (i.e. at standard temperature and pressure) is given by:
formula_5
and the Nernst equation can be used to calculate the standard potential at other conditions.
The cell reaction is generally endothermic: i.e. it will extract heat from its environment. The Gibbs energy calculation generally assumes an infinite thermal reservoir to maintain a constant temperature, but in a practical case, the reaction will cool the electrode interface and slow the reaction occurring there.
If the cell voltage is increased above the thermodynamic voltage, the product of that voltage and the current will generate heat, and if the voltage is such that the heat generated matches the heat required by the reaction to maintain a constant temperature, that voltage is called the "thermoneutral voltage". The rate of delivery of heat is equal to formula_6 where "T" is the temperature (the standard temperature, in this case) and "dS/dt" is the rate of entropy production in the cell. At the thermoneutral voltage, this rate will be zero, which indicates that the thermoneutral voltage may be calculated from the enthalpy.
formula_7
An example.
For water at standard temperature (25 °C) the net cell reaction may be written:
formula_8
Using Gibbs potentials (formula_9 kJ/mol), the thermodynamic voltage at standard conditions is
formula_10 1.229 Volt (2 electrons needed to form H2(g))
Just as the combustion of hydrogen and oxygen generates heat, the reverse reaction generating hydrogen and oxygen will absorb heat. The thermoneutral voltage is (using formula_11 kJ/mol):
formula_12 1.481 Volts.
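These two numbers follow directly from the tabulated formation values; the short sketch below reproduces them.

```python
F = 96485.0             # Faraday constant, C/mol
n = 2                   # electrons transferred per H2 molecule

dG = -237.18e3          # standard Gibbs energy of formation of liquid water, J/mol
dH = -285.83e3          # standard enthalpy of formation of liquid water, J/mol

E_rev = -dG / (n * F)   # thermodynamic (reversible) voltage
E_tn = -dH / (n * F)    # thermoneutral voltage
print(f"E = {E_rev:.3f} V, E_tn = {E_tn:.3f} V")   # about 1.229 V and 1.481 V
```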
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "Ox + ne^- \\leftrightarrow Red"
},
{
"math_id": 1,
"text": "E_{tn} = -\\Delta H/(nF) = (\\Delta H_{Red}-\\Delta H_{Ox})/(nF)"
},
{
"math_id": 2,
"text": "\\Delta H"
},
{
"math_id": 3,
"text": "E = -\\Delta G/(nF) = (\\Delta G_{Red}-\\Delta G_{Ox})/(nF)"
},
{
"math_id": 4,
"text": "\\Delta G"
},
{
"math_id": 5,
"text": "E^o = -\\Delta G^o/(nF) = (\\Delta G^o_{Red}-\\Delta G^o_{Ox})/(nF)"
},
{
"math_id": 6,
"text": "T dS/dt"
},
{
"math_id": 7,
"text": "E_{tn} = -(\\Delta G+T\\Delta S)/(nF) = -\\Delta H/(nF) = (\\Delta H_{Red}-\\Delta H_{Ox})/(nF)"
},
{
"math_id": 8,
"text": "H_2O \\leftrightarrow H_2(g) + \\frac{1}{2}O_2(g) "
},
{
"math_id": 9,
"text": "\\Delta G^o_{H2O}=-237.18"
},
{
"math_id": 10,
"text": "E^o = -\\Delta G^o_{H_2O}/(2F) \\approx "
},
{
"math_id": 11,
"text": "\\Delta H^o_{H2O}=-285.83"
},
{
"math_id": 12,
"text": "E^o_{tn} = -\\Delta H^o_{H_2O}/(2F) \\approx "
}
] |
https://en.wikipedia.org/wiki?curid=63211357
|
63214506
|
Euler's Gem
|
Euler's Gem: The Polyhedron Formula and the Birth of Topology is a book on the formula formula_0 for the Euler characteristic of convex polyhedra and its connections to the history of topology. It was written by David Richeson and published in 2008 by the Princeton University Press, with a paperback edition in 2012. It won the 2010 Euler Book Prize of the Mathematical Association of America.
Topics.
The book is organized historically, and reviewer Robert Bradley divides the topics of the book into three parts. The first part discusses the earlier history of polyhedra, including the works of Pythagoras, Thales, Euclid, and Johannes Kepler, and the discovery by René Descartes of a polyhedral version of the Gauss–Bonnet theorem (later seen to be equivalent to Euler's formula). It surveys the life of Euler, his discovery in the early 1750s that the Euler characteristic formula_1 (the number of vertices minus the number of edges plus the number of faces) is equal to 2 for all convex polyhedra, and his flawed attempts at a proof, and concludes with the first rigorous proof of this identity in 1794 by Adrien-Marie Legendre, based on Girard's theorem relating the angular excess of triangles in spherical trigonometry to their area.
Although polyhedra are geometric objects, "Euler's Gem" argues that Euler discovered his formula by being the first to view them topologically (as abstract incidence patterns of vertices, faces, and edges), rather than through their geometric distances and angles. (However, this argument is undermined by the book's discussion of similar ideas in the earlier works of Kepler and Descartes.) The birth of topology is conventionally marked by an earlier contribution of Euler, his 1736 work on the Seven Bridges of Königsberg, and the middle part of the book connects these two works through the theory of graphs. It proves Euler's formula in a topological rather than geometric form, for planar graphs, and discusses its uses in proving that these graphs have vertices of low degree, a key component in proofs of the four color theorem. It even makes connections to combinatorial game theory through the graph-based games of Sprouts and Brussels Sprouts and their analysis using Euler's formula.
In the third part of the book, Bradley moves on from the topology of the plane and the sphere to arbitrary topological surfaces. For any surface, the Euler characteristics of all subdivisions of the surface are equal, but they depend on the surface rather than always being 2. Here, the book describes the work of Bernhard Riemann, Max Dehn, and Poul Heegaard on the classification of manifolds, in which it was shown that the two-dimensional compact topological surfaces can be completely described by their Euler characteristics and their orientability. Other topics discussed in this part include knot theory and the Euler characteristic of Seifert surfaces, the Poincaré–Hopf theorem, the Brouwer fixed point theorem, Betti numbers, and Grigori Perelman's proof of the Poincaré conjecture.
An appendix includes instructions for creating paper and soap-bubble models of some of the examples from the book.
Audience and reception.
"Euler's Gem" is aimed at a general audience interested in mathematical topics, with biographical sketches and portraits of the mathematicians it discusses, many diagrams and visual reasoning in place of rigorous proofs, and only a few simple equations. With no exercises, it is not a textbook. However, the later parts of the book may be heavy going for amateurs, requiring at least an undergraduate-level understanding of calculus and differential geometry. Reviewer Dustin L. Jones suggests that teachers would find its examples, intuitive explanations, and historical background material useful in the classroom.
Although reviewer Jeremy L. Martin complains that "the book's generalizations about mathematical history and aesthetics are a bit simplistic or even one-sided", points out a significant mathematical error in the book's conflation of polar duality with Poincaré duality, and views the book's attitude towards computer-assisted proof as "unnecessarily dismissive", he nevertheless concludes that the book's mathematical content "outweighs these occasional flaws". Dustin Jones evaluates the book as "a unique blend of history and mathematics ... engaging and enjoyable", and reviewer Bruce Roth calls it "well written and full of interesting ideas". Reviewer Janine Daems writes, "It was a pleasure reading this book, and I recommend it to everyone who is not afraid of mathematical arguments".
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "V-E+F=2"
},
{
"math_id": 1,
"text": "V-E+F"
}
] |
https://en.wikipedia.org/wiki?curid=63214506
|
63215363
|
Parallel single-source shortest path algorithm
|
A central problem in algorithmic graph theory is the shortest path problem. One of the generalizations of the shortest path problem is known as the single-source-shortest-paths (SSSP) problem, which consists of finding the shortest paths from a source vertex formula_0 to all other vertices in the graph. There are classical sequential algorithms which solve this problem, such as Dijkstra's algorithm. In this article, however, we present two parallel algorithms solving this problem.
Another variation of the problem is the all-pairs-shortest-paths (APSP) problem, which also has parallel approaches: Parallel all-pairs shortest path algorithm.
Problem definition.
Let formula_1 be a directed graph with formula_2 nodes and formula_3 edges. Let formula_0 be a distinguished vertex (called "source") and formula_4 be a function assigning a non-negative real-valued weight to each edge. The goal of the single-source-shortest-paths problem is to compute, for every vertex formula_5 reachable from formula_0, the weight of a minimum-weight path from formula_0 to formula_5, denoted by formula_6 and abbreviated formula_7. The weight of a path is the sum of the weights of its edges. We set formula_8 if formula_5 is unreachable from formula_9.
Sequential shortest path algorithms commonly apply iterative labeling methods based on maintaining a tentative distance for all nodes; formula_10 is always formula_11 or the weight of some path from formula_0 to formula_5 and hence an upper bound on formula_7. Tentative distances are improved by performing edge relaxations, i.e., for an edge formula_12 the algorithm sets formula_13.
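As an illustration, the relaxation step can be written in a few lines of Python; the dictionary-of-adjacency-lists graph representation used below is an assumed convention for this sketch, not part of the problem definition:

```python
import math

def relax(tent, v, w, weight):
    """Relax edge (v, w): tent(w) := min(tent(w), tent(v) + c(v, w))."""
    if tent[v] + weight < tent[w]:
        tent[w] = tent[v] + weight
        return True   # tentative distance improved
    return False

# Tentative distances start at infinity, except for the source s.
graph = {'s': [('a', 2.0)], 'a': [('b', 1.5)], 'b': []}
tent = {v: math.inf for v in graph}
tent['s'] = 0.0
relax(tent, 's', 'a', 2.0)   # tent['a'] becomes 2.0
relax(tent, 'a', 'b', 1.5)   # tent['b'] becomes 3.5
```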
For all parallel algorithms we will assume a PRAM model with concurrent reads and concurrent writes.
Delta stepping algorithm.
The delta stepping algorithm is a label-correcting algorithm, which means the tentative distance of a vertex can be corrected several times via edge relaxations until the last step of the algorithm, when all tentative distances are fixed.
The algorithm maintains eligible nodes with tentative distances in an array of buckets each of which represents a distance range of size formula_14. During each phase, the algorithm removes all nodes of the first nonempty bucket and relaxes all outgoing edges of weight at most formula_14. Edges of a higher weight are only relaxed after their respective starting nodes are surely settled. The parameter formula_14 is a positive real number that is also called the "step width" or "bucket width".
Parallelism is obtained by concurrently removing all nodes of the first nonempty bucket and relaxing their outgoing light edges in a single phase. If a node formula_5 has been removed from the current bucket formula_15 with a non-final distance value then, in some subsequent phase, formula_5 will eventually be reinserted into formula_15, and the outgoing light edges of formula_5 will be re-relaxed. The remaining heavy edges emanating from all nodes that have been removed from formula_15 so far are relaxed once and for all when formula_15 finally remains empty. Subsequently, the algorithm searches for the next nonempty bucket and proceeds as described above.
The maximum shortest path weight for the source node formula_0 is defined as formula_16, abbreviated formula_17. Also, the size of a path is defined to be the number of edges on the path.
We distinguish light edges from heavy edges, where light edges have weight at most formula_14 and heavy edges have weight greater than formula_14.
Following is the delta stepping algorithm in pseudocode:
1 foreach formula_18 do formula_19
2 formula_20; (*Insert source node with distance 0*)
3 while formula_21 do (*A phase: Some queued nodes left (a)*)
4 formula_22 (*Smallest nonempty bucket (b)*)
5 formula_23 (*No nodes deleted for bucket B[i] yet*)
6 while formula_24 do (*New phase (c)*)
7 formula_25 (*Create requests for light edges (d)*)
8 formula_26 (*Remember deleted nodes (e)*)
9 formula_27 (*Current bucket empty*)
10 formula_28 (*Do relaxations, nodes may (re)enter B[i] (f)*)
11 formula_29 (*Create requests for heavy edges (g)*)
12 formula_28 (*Relaxations will not refill B[i] (h)*)
13
14 function formula_30:set of Request
15 return formula_31
16
17 procedure formula_28
18 foreach formula_32 do formula_33
19
20 procedure formula_33 (*Insert or move w in B if formula_34*)
21 if formula_35 then
22 formula_36 (*If in, remove from old bucket*)
23 formula_37 (*Insert into new bucket*)
24 formula_38
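The following is a minimal sequential Python sketch of the pseudocode above. The bucket data structure and the graph representation (a dictionary mapping each vertex to a list of (neighbour, weight) pairs) are illustrative choices, and a genuine parallel implementation would process the relaxation requests of each phase concurrently:

```python
import math
from collections import defaultdict

def delta_stepping(graph, source, delta):
    """Sequential sketch of the delta stepping pseudocode above."""
    tent = defaultdict(lambda: math.inf)   # tentative distances
    buckets = defaultdict(set)             # bucket index -> set of vertices

    def relax(w, x):
        if x < tent[w]:                    # insert or move w to a lower bucket
            if tent[w] != math.inf:
                buckets[int(tent[w] // delta)].discard(w)
            buckets[int(x // delta)].add(w)
            tent[w] = x

    relax(source, 0.0)
    while any(buckets.values()):
        i = min(j for j, b in buckets.items() if b)   # smallest nonempty bucket
        removed = set()                    # nodes deleted from B[i] so far
        while buckets[i]:                  # light-edge phases may refill B[i]
            frontier, buckets[i] = buckets[i], set()
            removed |= frontier
            requests = [(w, tent[v] + c) for v in frontier
                        for w, c in graph[v] if c <= delta]
            for w, x in requests:
                relax(w, x)
        for v in removed:                  # heavy edges: relaxed once and for all
            for w, c in graph[v]:
                if c > delta:
                    relax(w, tent[v] + c)
    return dict(tent)

g = {'A': [('B', 1.0), ('C', 4.0)], 'B': [('C', 2.0)], 'C': []}
print(delta_stepping(g, 'A', delta=3.0))   # {'A': 0.0, 'B': 1.0, 'C': 3.0}
```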
Example.
Following is a step by step description of the algorithm execution for a small example graph. The source vertex is the vertex A and formula_14 is equal to 3.
At the beginning of the algorithm, all vertices except for the source vertex A have infinite tentative distances.
Bucket formula_39 has range formula_40, bucket formula_41 has range formula_42 and bucket formula_43 has range formula_44.
The bucket formula_39 contains the vertex A. All other buckets are empty.
The algorithm relaxes all light edges incident to formula_39, which are the edges connecting A to B, G and E.
The vertices B,G and E are inserted into bucket formula_41. Since formula_39 is still empty, the heavy edge connecting A to D is also relaxed.
Now the light edges incident to formula_41 are relaxed. The vertex C is inserted into bucket formula_43. Since now formula_41 is empty, the heavy edge connecting E to F can be relaxed.
On the next step, the bucket formula_43 is examined, but doesn't lead to any modifications to the tentative distances.
The algorithm terminates.
Runtime.
As mentioned earlier, formula_17 is the maximum shortest path weight.
Let us call a path with total weight at most formula_14 and without edge repetitions a formula_14-path.
Let formula_45 denote the set of all node pairs formula_46 connected by some formula_14-path formula_47 and let formula_48. Similarly, define formula_49 as the set of triples formula_50 such that formula_51 and formula_52 is a light edge and let formula_53.
The sequential delta-stepping algorithm needs at most formula_54 operations. A simple parallelization runs in time formula_55.
If we take formula_56 for graphs with maximum degree formula_57 and random edge weights uniformly distributed in formula_58, the sequential version of the algorithm needs formula_59 total average-case time and a simple parallelization takes on average formula_60.
Graph 500.
The third computational kernel of the Graph 500 benchmark runs a single-source shortest path computation. The reference implementation of the Graph 500 benchmark uses the delta stepping algorithm for this computation.
Radius stepping algorithm.
For the radius stepping algorithm, we must assume that our graph formula_61 is undirected.
The input to the algorithm is a weighted, undirected graph, a source vertex, and a target radius value for every vertex, given as a function formula_62. The algorithm visits vertices in increasing distance from the source formula_0. On each step formula_63, the radius stepping algorithm increments the radius centered at formula_0 from formula_64 to formula_65, and settles all vertices formula_5 in the annulus formula_66.
Following is the radius stepping algorithm in pseudocode:
Input: A graph formula_67, vertex radii formula_68, and a source node formula_0.
Output: The graph distances formula_69 from formula_0.
1 formula_70, formula_71
2 foreach formula_72 do formula_73, formula_74, formula_75
3 while formula_76 do
4 formula_77
5 repeat
6 foreach formula_78 s.t formula_79 do
7 foreach formula_80 do
8 formula_81
9 until no formula_82 was updated
10 "formula_83"
11 formula_84
12 return formula_69
For all formula_85, define formula_86 to be the neighbor set of S. During the execution of standard breadth-first search or Dijkstra's algorithm, the frontier is the neighbor set of all visited vertices.
In the Radius-Stepping algorithm, a new round distance formula_65 is decided on each round with the goal of bounding the number of substeps. The algorithm takes a radius formula_87 for each vertex and selects a formula_65 on step formula_63 by taking the minimum formula_88 over all formula_5 in the frontier (Line 4).
Lines 5-9 then run the Bellman-Ford substeps until all vertices with distance at most formula_65 are settled. Vertices within distance formula_89 are then added to the visited set formula_90.
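A direct sequential Python transcription of this pseudocode is sketched below. The adjacency-list graph representation and the per-vertex radius dictionary are assumed conventions for the example, and a parallel implementation would perform each Bellman–Ford substep concurrently:

```python
import math

def radius_stepping(graph, source, radius):
    """Sequential sketch of the radius stepping pseudocode above.

    graph[v] is a list of (neighbour, weight) pairs of an undirected graph,
    and radius[v] is the per-vertex radius r(v).
    """
    delta = {v: math.inf for v in graph}
    delta[source] = 0.0
    for v, w in graph[source]:                     # line 2: relax N(s)
        delta[v] = min(delta[v], w)
    settled = {source}                             # S_0

    while len(settled) < len(graph):
        d_i = min(delta[v] + radius[v]             # line 4: next round distance
                  for v in graph if v not in settled)
        changed = True
        while changed:                             # lines 5-9: Bellman-Ford substeps
            changed = False
            for u in graph:
                if u in settled or delta[u] > d_i:
                    continue
                for v, w in graph[u]:
                    if v not in settled and delta[u] + w < delta[v]:
                        delta[v] = delta[u] + w
                        if delta[v] <= d_i:
                            changed = True
        settled |= {v for v in graph if delta[v] <= d_i}   # line 10
    return delta

g = {'A': [('B', 1.0)], 'B': [('A', 1.0), ('C', 2.0)], 'C': [('B', 2.0)]}
print(radius_stepping(g, 'A', {v: 1.0 for v in g}))   # {'A': 0.0, 'B': 1.0, 'C': 3.0}
```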
Example.
Following is a step by step description of the algorithm execution for a small example graph. The source vertex is the vertex A and the radius of every vertex is equal to 1.
At the beginning of the algorithm, all vertices except for the source vertex A have infinite tentative distances, denoted by formula_91 in the pseudocode.
All neighbors of A are relaxed and formula_92.
The variable formula_93 is chosen to be equal to 4 and the neighbors of the vertices B, E and G are relaxed. formula_94.
The variable formula_95 is chosen to be equal to 6 and no values are changed. formula_96.
The variable formula_97 is chosen to be equal to 9 and no values are changed. formula_98.
The algorithm terminates.
Runtime.
After a preprocessing phase, the radius stepping algorithm can solve the SSSP problem in formula_99 work and formula_100 depth, for formula_101. In addition, the preprocessing phase takes formula_102 work and formula_103 depth, or formula_104 work and formula_105 depth.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "s"
},
{
"math_id": 1,
"text": "G=(V,E)"
},
{
"math_id": 2,
"text": "|V|=n"
},
{
"math_id": 3,
"text": "|E|=m"
},
{
"math_id": 4,
"text": "c"
},
{
"math_id": 5,
"text": "v"
},
{
"math_id": 6,
"text": "\\operatorname{dist}(s,v)"
},
{
"math_id": 7,
"text": "\\operatorname{dist}(v)"
},
{
"math_id": 8,
"text": "\\operatorname{dist}(u,v):=\\infty"
},
{
"math_id": 9,
"text": "u"
},
{
"math_id": 10,
"text": "\\operatorname{tent}(v)"
},
{
"math_id": 11,
"text": "\\infty"
},
{
"math_id": 12,
"text": "(v,w)\\in E"
},
{
"math_id": 13,
"text": "\\operatorname{tent}(w):=\\min\\{\\operatorname{tent}(w), \\operatorname{tent}(v)+c(v,w)\\}"
},
{
"math_id": 14,
"text": "\\Delta"
},
{
"math_id": 15,
"text": "B[i]"
},
{
"math_id": 16,
"text": "L(s):=\\max\\{\\operatorname{dist}(s,v) : \\operatorname{dist}(s,v)<\\infty\\}"
},
{
"math_id": 17,
"text": "L"
},
{
"math_id": 18,
"text": "v \\in V"
},
{
"math_id": 19,
"text": "\\operatorname{tent}(v):=\\infty"
},
{
"math_id": 20,
"text": "relax(s, 0)"
},
{
"math_id": 21,
"text": "\\lnot isEmpty(B)"
},
{
"math_id": 22,
"text": "i:=\\min\\{j\\geq 0: B[j]\\neq \\emptyset\\}"
},
{
"math_id": 23,
"text": "R:=\\emptyset"
},
{
"math_id": 24,
"text": "B[i]\\neq \\emptyset"
},
{
"math_id": 25,
"text": "Req:=findRequests(B[i],light)"
},
{
"math_id": 26,
"text": "R:=R\\cup B[i]"
},
{
"math_id": 27,
"text": "B[i]:=\\emptyset"
},
{
"math_id": 28,
"text": "relaxRequests(Req)"
},
{
"math_id": 29,
"text": "Req:=findRequests(R,heavy)"
},
{
"math_id": 30,
"text": "findRequests(V',kind:\\{\\text{light}, \\text{heavy}\\})"
},
{
"math_id": 31,
"text": "\\{(w, \\operatorname{tent}(v)+c(v,w)): v\\in V' \\land (v,w) \\in E_\\text{kind}\\}"
},
{
"math_id": 32,
"text": "(w, x)\\in Req"
},
{
"math_id": 33,
"text": "relax(w,x)"
},
{
"math_id": 34,
"text": "x<\\operatorname{tent}(w)"
},
{
"math_id": 35,
"text": "x < \\operatorname{tent}(w)"
},
{
"math_id": 36,
"text": "B[\\lfloor \\operatorname{tent}(w)/\\Delta\\rfloor]:=B[\\lfloor \\operatorname{tent}(w)/\\Delta\\rfloor]\\setminus \\{w\\} "
},
{
"math_id": 37,
"text": "B[\\lfloor x/\\Delta\\rfloor]:=B[\\lfloor x/\\Delta\\rfloor]\\cup \\{w\\} "
},
{
"math_id": 38,
"text": "\\operatorname{tent}(w):=x "
},
{
"math_id": 39,
"text": "B[0]"
},
{
"math_id": 40,
"text": "[0,2]"
},
{
"math_id": 41,
"text": "B[1]"
},
{
"math_id": 42,
"text": "[3,5]"
},
{
"math_id": 43,
"text": "B[2]"
},
{
"math_id": 44,
"text": "[6,8]"
},
{
"math_id": 45,
"text": "C_\\Delta"
},
{
"math_id": 46,
"text": "\\langle u,v \\rangle"
},
{
"math_id": 47,
"text": "(u,\\dots, v)"
},
{
"math_id": 48,
"text": "n_\\Delta:=|C_{\\Delta}|"
},
{
"math_id": 49,
"text": "C_{\\Delta+}"
},
{
"math_id": 50,
"text": "\\langle u,v',v \\rangle"
},
{
"math_id": 51,
"text": "\\langle u,v' \\rangle \\in C_\\Delta"
},
{
"math_id": 52,
"text": "(v',v)"
},
{
"math_id": 53,
"text": "m_\\Delta:=|C_{\\Delta+}|"
},
{
"math_id": 54,
"text": "\\mathcal{O} (n+m + n_\\Delta + m_\\Delta + L/\\Delta)"
},
{
"math_id": 55,
"text": "\\mathcal{O} \\left(\\frac{L}{\\Delta}\\cdot d\\ell_\\Delta \\log n\\right)"
},
{
"math_id": 56,
"text": "\\Delta = \\Theta(1/d)"
},
{
"math_id": 57,
"text": "d"
},
{
"math_id": 58,
"text": "[0,1]"
},
{
"math_id": 59,
"text": "\\mathcal{O}(n+m+dL)"
},
{
"math_id": 60,
"text": "\\mathcal{O}(d^2 L \\log^2n)"
},
{
"math_id": 61,
"text": "G"
},
{
"math_id": 62,
"text": "r : V \\rightarrow \\mathbb{R}_+"
},
{
"math_id": 63,
"text": "i"
},
{
"math_id": 64,
"text": "d_{i-1}"
},
{
"math_id": 65,
"text": "d_i"
},
{
"math_id": 66,
"text": "d_{i-1}<d(s,v)\\leq d_i"
},
{
"math_id": 67,
"text": "G=(V,E,w)"
},
{
"math_id": 68,
"text": "r(\\cdot)"
},
{
"math_id": 69,
"text": "\\delta(\\cdot)"
},
{
"math_id": 70,
"text": "\\delta(\\cdot)\\leftarrow +\\infty"
},
{
"math_id": 71,
"text": "\\delta(s)\\leftarrow 0"
},
{
"math_id": 72,
"text": "v \\in N(s)"
},
{
"math_id": 73,
"text": "\\delta(v)\\leftarrow w(s,v)"
},
{
"math_id": 74,
"text": "S_0\\leftarrow \\{s\\}"
},
{
"math_id": 75,
"text": "i \\leftarrow 1"
},
{
"math_id": 76,
"text": "|S_{i-1}|<|V|"
},
{
"math_id": 77,
"text": "d_i\\leftarrow \\min_{v\\in V\\setminus S_{i-1}}\\{\\delta(v) + r(v)\\}"
},
{
"math_id": 78,
"text": "u \\in V\\setminus S_{i-1}"
},
{
"math_id": 79,
"text": "\\delta(u) \\leq d_i"
},
{
"math_id": 80,
"text": "v \\in N(u)\\setminus S_{i-1}"
},
{
"math_id": 81,
"text": "\\delta(v) \\leftarrow \\min\\{\\delta(v), \\delta(u)+w(u,v)\\}"
},
{
"math_id": 82,
"text": "\\delta(v)\\leq d_i"
},
{
"math_id": 83,
"text": "S_i=\\{v\\;|\\;\\delta(v)\\leq d_i\\}"
},
{
"math_id": 84,
"text": "i=i+1"
},
{
"math_id": 85,
"text": "S\\subseteq V"
},
{
"math_id": 86,
"text": "N(S)=\\bigcup_{u\\in S}\\{v\\in V\\mid d(u,v)\\in E\\}"
},
{
"math_id": 87,
"text": "r(v)"
},
{
"math_id": 88,
"text": "\\delta(v) + r(v)"
},
{
"math_id": 89,
"text": "d_i\n"
},
{
"math_id": 90,
"text": "S_i\n"
},
{
"math_id": 91,
"text": "\\delta"
},
{
"math_id": 92,
"text": "S_0=\\{A\\}"
},
{
"math_id": 93,
"text": "d_1"
},
{
"math_id": 94,
"text": "S_1=\\{A,B,E,G\\}"
},
{
"math_id": 95,
"text": "d_2"
},
{
"math_id": 96,
"text": "S_2=\\{A,B,C,D,E,G\\}"
},
{
"math_id": 97,
"text": "d_3"
},
{
"math_id": 98,
"text": "S_3=\\{A,B,C,D,E,F, G\\}"
},
{
"math_id": 99,
"text": "\\mathcal{O}(m\\log n)"
},
{
"math_id": 100,
"text": "\\mathcal{O}(\\frac{n}{p}\\log n\\log pL)"
},
{
"math_id": 101,
"text": "p\\leq \\sqrt{n}"
},
{
"math_id": 102,
"text": "\\mathcal{O}(m\\log n + np^2)"
},
{
"math_id": 103,
"text": "\\mathcal{O}(p^2)"
},
{
"math_id": 104,
"text": "\\mathcal{O}(m\\log n + np^2\\log n)"
},
{
"math_id": 105,
"text": "\\mathcal{O}(p\\log p)"
}
] |
https://en.wikipedia.org/wiki?curid=63215363
|
63216140
|
Lamb–Chaplygin dipole
|
The Lamb–Chaplygin dipole model is a mathematical description for a particular inviscid and steady dipolar vortex flow. It is a non-trivial solution to the two-dimensional Euler equations. The model is named after Horace Lamb and Sergey Alexeyevich Chaplygin, who independently discovered this flow structure. This dipole is the two-dimensional analogue of Hill's spherical vortex.
The model.
A two-dimensional (2D), solenoidal vector field formula_0 may be described by a scalar stream function formula_1, via formula_2, where formula_3 is the right-handed unit vector perpendicular to the 2D plane. By definition, the stream function is related to the vorticity formula_4 via a Poisson equation: formula_5. The Lamb–Chaplygin model follows from demanding the following characteristics: the dipole is confined to a circular region of radius formula_6, on whose boundary formula_7; the flow outside this region is irrotational, i.e. formula_8; the dipole translates with a constant velocity formula_9; and inside the region the vorticity is a function of the stream function alone, formula_10, with the linear choice formula_11.
The solution formula_1 in cylindrical coordinates (formula_12), in the co-moving frame of reference reads:
formula_13
where formula_14 are the zeroth and first Bessel functions of the first kind, respectively. Further, the value of formula_15 is such that formula_16, the first non-trivial zero of the first Bessel function of the first kind.
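For illustration, the stream function can be evaluated numerically. The short Python sketch below uses SciPy's Bessel functions, with unit translation speed and unit radius as assumed example values:

```python
import numpy as np
from scipy.special import j0, j1, jn_zeros

def lamb_chaplygin_psi(r, theta, U=1.0, R=1.0):
    """Stream function of the Lamb-Chaplygin dipole in the co-moving frame."""
    k = jn_zeros(1, 1)[0] / R        # kR = 3.8317..., first zero of J1
    r = np.asarray(r, dtype=float)
    inner = -2.0 * U * j1(k * r) / (k * j0(k * R)) * np.sin(theta)
    outer = U * (R**2 / np.where(r > 0.0, r, np.nan) - r) * np.sin(theta)
    return np.where(r < R, inner, outer)

# The stream function vanishes on the dipole boundary r = R:
print(lamb_chaplygin_psi(1.0, np.pi / 2))   # ≈ 0
```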
Usage and considerations.
Since the seminal work of P. Orlandi, the Lamb–Chaplygin vortex model has been a popular choice for numerical studies on vortex-environment interactions. The fact that it does not deform makes it a prime candidate for consistent flow initialization. A less favorable property is that the second derivative of the flow field at the dipole's edge is not continuous. Further, it serves as a framework for stability analysis on dipolar-vortex structures.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathbf{u} "
},
{
"math_id": 1,
"text": "\\psi"
},
{
"math_id": 2,
"text": "\\mathbf{u} = -\\mathbf{e_z} \\times \\mathbf{\\nabla} \\psi"
},
{
"math_id": 3,
"text": "\\mathbf{e_z}"
},
{
"math_id": 4,
"text": "\\omega"
},
{
"math_id": 5,
"text": "-\\nabla^2\\psi = \\omega"
},
{
"math_id": 6,
"text": "R"
},
{
"math_id": 7,
"text": "\\psi\\left(r = R\\right) = 0"
},
{
"math_id": 8,
"text": "\\omega(r > R) = 0)"
},
{
"math_id": 9,
"text": "U"
},
{
"math_id": 10,
"text": "\\omega (r < R) = f\\left(\\psi\\right)"
},
{
"math_id": 11,
"text": "\\omega = k^2 \\psi "
},
{
"math_id": 12,
"text": "r, \\theta"
},
{
"math_id": 13,
"text": "\n\\begin{align}\n\\psi =\n\\begin{cases}\n\\frac{-2 U J_{1}(kr)}{kJ_{0}(kR)}\\mathrm{sin}(\\theta) , & \\text{for } r < R, \\\\ \nU\\left(\\frac{R^2}{r}-r\\right)\\mathrm{sin}(\\theta), & \\text{for } r \\geq R, \n\\end {cases}\n\\end{align}\n"
},
{
"math_id": 14,
"text": "J_0 \\text{ and } J_1"
},
{
"math_id": 15,
"text": "k"
},
{
"math_id": 16,
"text": " kR = 3.8317..."
}
] |
https://en.wikipedia.org/wiki?curid=63216140
|
63216947
|
X-ray emission spectroscopy
|
X-ray emission spectroscopy (XES) is a form of X-ray spectroscopy in which a core electron is excited by an incident x-ray photon and then this excited state decays by emitting an x-ray photon to fill the core hole. The energy of the emitted photon is the energy difference between the involved electronic levels. The analysis of the energy dependence of the emitted photons is the aim of the X-ray emission spectroscopy.
There are several types of XES, which can be categorized as non-resonant XES (XES), which includes formula_0-measurements, valence-to-core (VtC/V2C)-measurements, and (formula_1)-measurements, or as resonant XES (RXES or RIXS), which includes XXAS+XES 2D-measurement, high-resolution XAS, 2p3d RIXS, and Mössbauer-XES-combined measurements. In addition, "Soft X-ray emission spectroscopy" (SXES) is used in determining the electronic structure of materials.
History.
The first XES experiments were published by Lindh and Lundquist in 1924.
In these early studies, the authors utilized the electron beam of an X-ray tube to excite core electrons and obtain the formula_0-line spectra of sulfur and other elements. Three years later, Coster and Druyvesteyn performed the first experiments using photon excitation. Their work demonstrated that the electron beams produce artifacts, thus motivating the use of X-ray photons for creating the core hole. Subsequent experiments were carried out with commercial X-ray spectrometers, as well as with high-resolution spectrometers.
While these early studies provided fundamental insights into the electronic configuration of small molecules, XES only came into broader use with the availability of high intensity X-ray beams at synchrotron radiation facilities, which enabled the measurement of (chemically) dilute samples.
In addition to the experimental advances, progress in quantum chemical computations also makes XES an intriguing tool for the study of the electronic structure of chemical compounds.
Henry Moseley, a British physicist, was the first to discover a relation between the formula_1-lines and the atomic numbers of the probed elements, which marked the birth of modern X-ray spectroscopy. Later these lines could be used in elemental analysis to determine the contents of a sample.
William Lawrence Bragg later found a relation between the energy of a photon and its diffraction within a crystal. The formula he established, formula_2, says that an X-ray photon of a given energy is diffracted at a precisely defined angle within a crystal.
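As an illustration of how this relation is used in practice, the following Python sketch computes the Bragg angle at which an analyzer crystal must be set. The silicon (111) lattice spacing of roughly 3.1356 Å, the conversion constant hc ≈ 12398.4 eV·Å, and the Cu Kα1 energy of about 8048 eV are approximate, assumed example values:

```python
import math

def bragg_angle(photon_energy_eV, d_spacing_A, order=1):
    """Bragg angle (degrees) from n*lambda = 2*d*sin(theta)."""
    wavelength_A = 12398.4 / photon_energy_eV        # hc in eV*Angstrom (approx.)
    return math.degrees(math.asin(order * wavelength_A / (2.0 * d_spacing_A)))

# Approximate example: Cu K-alpha1 emission (~8048 eV) on a Si(111) analyzer.
print(f"{bragg_angle(8048.0, 3.1356):.1f} degrees")  # ≈ 14.2 degrees
```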
Equipment.
Analyzers.
A special kind of monochromator is needed to diffract the radiation produced in X-ray sources. This is because X-rays have a refractive index of "n ≈ 1". Bragg formulated the equation that describes X-ray/neutron diffraction when those particles pass through a crystal lattice (see X-ray diffraction).
For this purpose, "perfect crystals" have been produced in many shapes, depending on the geometry and energy range of the instrument. Although they are called perfect, there are miscuts within the crystal structure, which lead to offsets of the Rowland plane.
These offsets can be corrected by turning the crystal while observing a specific energy (for example, the formula_3 line of copper at 8027.83 eV).
When the intensity of the signal is maximized, the photons diffracted by the crystal hit the detector in the Rowland plane. There will now be a slight offset in the horizontal plane of the instrument, which can be corrected by increasing or decreasing the detector angle.
In the Von Hamos geometry, a cylindrically bent crystal disperses the radiation along its flat surface's plane and focuses it along its axis of curvature onto a line-like feature.
The spatially distributed signal is recorded with a position-sensitive detector at the crystal's focusing axis, providing the overall spectrum. Alternative wavelength-dispersive concepts have been proposed and implemented based on the Johansson geometry, which has the source positioned inside the Rowland circle, whereas an instrument based on the Johann geometry has its source placed on the Rowland circle.
X-ray sources.
X-ray sources are produced for many different purposes, yet not every X-ray source can be used for spectroscopy. Sources commonly used for medical applications generally generate very "noisy" source spectra, because the cathode material used does not need to be very pure for those measurements. These extra lines must be eliminated as much as possible to obtain a good resolution in all energy ranges used.
For this purpose, normal X-ray tubes with highly pure tungsten, molybdenum, palladium, etc. are made. Except for the copper they are embedded in, they produce a relatively "white" spectrum. Another way of producing X-rays is with particle accelerators, which generate X-rays through changes in the direction of moving charges in magnetic fields: every time a moving charge changes direction, it gives off radiation of corresponding energy. In X-ray tubes this directional change is the electron hitting the metal target (anode); in synchrotrons it is the outer magnetic field accelerating the electron onto a circular path.
There are many different kinds of X-ray tubes, and operators have to choose carefully depending on what is to be measured.
Modern spectroscopy and the importance of formula_0-lines in the 21st Century.
Today, XES is less used for elemental analysis, but measurements of formula_0-line spectra are gaining importance, as the relation between these lines and the electronic structure of the ionized atom becomes more detailed.
If a 1s core electron gets excited into the continuum (out of the atom's energy levels in the MO picture),
electrons of higher-energy orbitals need to lose energy and "fall" into the 1s hole that was created, in order to fulfill Hund's rule (Fig. 2).
Those electron transfers happen with distinct probabilities (see Siegbahn notation).
Scientists noted that, after ionization of a chemically bonded 3d transition-metal atom, the formula_0-line intensities and energies shift with the oxidation state of the metal and with the species of its ligand(s). This gave way to a new method in structural analysis:
By high-resolution scans of these lines, the exact energy level and structural configuration of a chemical compound can be determined.
This is because there are only two major electron-transfer mechanisms, if every transfer not affecting valence electrons is ignored.
Including the fact that chemical compounds of 3d transition metals can be either high-spin or low-spin, there are two mechanisms for each spin configuration.
These two spin configurations determine the general shape of the formula_4 and formula_5 mainlines, as seen in figures one and two, while the structural configuration of electrons within the compound causes different intensities, broadening, tailing and piloting of the formula_6 and formula_7 lines.
Although this is quite a lot of information, these data have to be combined with absorption measurements of the so-called "pre-edge" region.
Those measurements are called XANES (X-ray absorption near edge structure).
In synchrotron facilities those measurements can be done at the same time, yet the experimental setup is quite complex and needs exact, finely tuned crystal monochromators to diffract the tangential beam coming from the electron storage ring. The method is called HERFD, which stands for High Energy Resolution Fluorescence Detection. The collection method is unique in that all wavelengths coming from "the source", denoted formula_8, are first collected;
the beam is then shone onto the sample holder with a detector behind it for the XANES part of the measurement. The sample itself starts to emit X-rays, and after those photons have been monochromatized they are collected, too.
Most setups use at least three crystal monochromators. The intensity formula_8 is used in absorption measurements as part of the Beer-Lambert law in the equation
formula_9
where formula_10 is the intensity of the transmitted photons. The resulting values for the extinction formula_11 are wavelength-specific, which creates a spectrum of the absorption.
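A minimal sketch of this bookkeeping in Python, with purely illustrative intensity values:

```python
import math

def extinction(I0, I1):
    """Beer-Lambert extinction E_lambda = log10(I0 / I1)."""
    return math.log10(I0 / I1)

# Hypothetical single energy point: incoming flux I0 and transmitted flux I1.
print(extinction(I0=1.0e6, I1=2.5e5))   # ≈ 0.602
```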
The spectrum produced from the combined data has the clear advantage that background radiation is almost completely eliminated while still giving an extremely well-resolved view of features on a given absorption edge (Fig. 4).
In the development of new catalysts for more efficient energy storage, production and usage, in the form of hydrogen fuel cells and new battery materials, research on the formula_0-lines is essential nowadays.
The exact line shape of specific oxidation states of metals is mostly known, yet newly produced chemical compounds with the potential
of becoming a reasonable catalyst for electrolysis, for example, are measured every day.
Several countries support many different facilities all over the globe in this special field of science, in the hope of clean, responsible and cheap energy.
Soft x-ray emission spectroscopy.
Soft X-ray emission spectroscopy (SXES) is an experimental technique for determining the electronic structure of materials.
Uses.
X-ray emission spectroscopy (XES) provides a means of probing the partial occupied density of electronic states of a material. XES is element-specific and site-specific, making it a powerful tool for determining detailed electronic properties of materials.
Forms.
Emission spectroscopy can take the form of either resonant inelastic X-ray emission spectroscopy (RIXS) or non-resonant X-ray emission spectroscopy (NXES). Both spectroscopies involve the photonic promotion of a core level electron, and the measurement of the fluorescence that occurs as the electron relaxes into a lower-energy state. The differences between resonant and non-resonant excitation arise from the state of the atom before fluorescence occurs.
In resonant excitation, the core electron is promoted to a bound state in the conduction band. Non-resonant excitation occurs when the incoming radiation promotes a core electron to the continuum. When a core hole is created in this way, it is possible for it to be refilled through one of several different decay paths. Because the core hole is refilled from the sample's high-energy free states, the decay and emission processes must be treated as separate dipole transitions. This is in contrast with RIXS, where the events are coupled, and must be treated as a single scattering process.
Properties.
Soft X-rays have different optical properties than visible light and therefore experiments must take place in ultra high vacuum, where the photon beam is manipulated using special mirrors and diffraction gratings.
Gratings diffract each energy or wavelength present in the incoming radiation in a different direction. Grating monochromators allow the user to select the specific photon energy they wish to use to excite the sample. Diffraction gratings are also used in the spectrometer to analyze the photon energy of the radiation emitted by the sample.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "K_{\\beta}"
},
{
"math_id": 1,
"text": "K_{\\alpha}"
},
{
"math_id": 2,
"text": " n \\lambda = 2d \\, \\sin(\\theta)"
},
{
"math_id": 3,
"text": "K_{\\alpha2}"
},
{
"math_id": 4,
"text": "K_{\\beta1,3}"
},
{
"math_id": 5,
"text": "K_{\\beta'}"
},
{
"math_id": 6,
"text": "K_{\\beta2,5}"
},
{
"math_id": 7,
"text": "K_{\\beta''}"
},
{
"math_id": 8,
"text": "I_{0}"
},
{
"math_id": 9,
"text": " E_\\lambda = \\log_{10} \\left(\\frac{I_{0}}{I_{1}}\\right) = \\varepsilon_{\\lambda} \\cdot c \\cdot d "
},
{
"math_id": 10,
"text": "I_{1}"
},
{
"math_id": 11,
"text": "E_\\lambda"
}
] |
https://en.wikipedia.org/wiki?curid=63216947
|
63218
|
Present value
|
Economic concept
In economics and finance, present value (PV), also known as present discounted value, is the value of an expected income stream determined as of the date of valuation. The present value is usually less than the future value because money has interest-earning potential, a characteristic referred to as the time value of money, except during times of negative interest rates, when the present value will be equal to or greater than the future value. Time value can be described with the simplified phrase, "A dollar today is worth more than a dollar tomorrow". Here, 'worth more' means that its value today is greater than its value tomorrow. A dollar today is worth more than a dollar tomorrow because the dollar can be invested and earn a day's worth of interest, making the total accumulate to a value more than a dollar by tomorrow. Interest can be compared to rent. Just as rent is paid to a landlord by a tenant without the ownership of the asset being transferred, interest is paid to a lender by a borrower who gains access to the money for a time before paying it back. By letting the borrower have access to the money, the lender has sacrificed the exchange value of this money, and is compensated for it in the form of interest. The initial amount of borrowed funds (the present value) is less than the total amount of money paid to the lender.
Present value calculations, and similarly future value calculations, are used to value loans, mortgages, annuities, sinking funds, perpetuities, bonds, and more. These calculations are used to make comparisons between cash flows that don’t occur at simultaneous times, since time and dates must be consistent in order to make comparisons between values. When deciding between projects in which to invest, the choice can be made by comparing respective present values of such projects by means of discounting the expected income streams at the corresponding project interest rate, or rate of return. The project with the highest present value, i.e. that is most valuable today, should be chosen.
Background.
If offered a choice between $100 today or $100 in one year, and there is a positive real interest rate throughout the year, a rational person will choose $100 today. This is described by economists as time preference. Time preference can be measured by auctioning off a risk free security—like a US Treasury bill. If a $100 note with a zero coupon, payable in one year, sells for $80 now, then $80 is the present value of the note that will be worth $100 a year from now. This is because money can be put in a bank account or any other (safe) investment that will return interest in the future.
An investor who has some money has two options: to spend it right now or to save it. But the financial compensation for saving it (and not spending it) is that the money value will accrue through the compound interest that he or she will receive from a borrower (the bank account in which he has the money deposited).
Therefore, to evaluate the real value of an amount of money today after a given period of time, economic agents compound the amount of money at a given (interest) rate. Most actuarial calculations use the risk-free interest rate which corresponds to the minimum guaranteed rate provided by a bank's saving account for example, assuming no risk of default by the bank to return the money to the account holder on time. To compare the change in purchasing power, the real interest rate (nominal interest rate minus inflation rate) should be used.
The operation of evaluating a present value into the future value is called a capitalization (how much will $100 today be worth in 5 years?). The reverse operation—evaluating the present value of a future amount of money—is called a discounting (how much will $100 received in 5 years—at a lottery for example—be worth today?).
It follows that if one has to choose between receiving $100 today and $100 in one year, the rational decision is to choose the $100 today. If the money is to be received in one year and assuming the savings account interest rate is 5%, the person has to be offered at least $105 in one year so that the two options are equivalent (either receiving $100 today or receiving $105 in one year). This is because if $100 is deposited in a savings account, the value will be $105 after one year, again assuming no risk of losing the initial amount through bank default.
Interest rates.
Interest is the additional amount of money gained between the beginning and the end of a time period. Interest represents the time value of money, and can be thought of as rent that is required of a borrower in order to use money from a lender. For example, when an individual takes out a bank loan, the individual is charged interest. Alternatively, when an individual deposits money into a bank, the money earns interest. In this case, the bank is the borrower of the funds and is responsible for crediting interest to the account holder. Similarly, when an individual invests in a company (through corporate bonds, or through stock), the company is borrowing funds, and must pay interest to the individual (in the form of coupon payments, dividends, or stock price appreciation).
The interest rate is the change, expressed as a percentage, in the amount of money during one compounding period. A compounding period is the length of time that must transpire before interest is credited, or added to the total. For example, interest that is compounded annually is credited once a year, and the compounding period is one year. Interest that is compounded quarterly is credited four times a year, and the compounding period is three months. A compounding period can be any length of time, but some common periods are annually, semiannually, quarterly, monthly, daily, and even continuously.
There are several types and terms associated with interest rates:
Calculation.
The operation of evaluating a present sum of money some time in the future is called a capitalization (how much will 100 today be worth in five years?). The reverse operation—evaluating the present value of a future amount of money—is called discounting (how much will 100 received in five years be worth today?).
Spreadsheets commonly offer functions to compute present value. In Microsoft Excel, there are present value functions for series of equal, periodic payments - "=PV(...)" - and for series of varying cash flows - "=NPV(...)". Programs will calculate present value flexibly for any cash flow and interest rate, or for a schedule of different interest rates at different times.
Present value of a lump sum.
The most commonly applied model of present valuation uses compound interest. The standard formula is:
formula_0
Where formula_1 is the future amount of money that must be discounted, formula_2 is the number of compounding periods between the present date and the date where the sum is worth formula_1, formula_3 is the interest rate for one compounding period (the end of a compounding period is when interest is applied, for example, annually, semiannually, quarterly, monthly, daily). The interest rate, formula_3, is given as a percentage, but expressed as a decimal in this formula.
Often, formula_4 is referred to as the Present Value Factor
This is also found from the formula for the future value with negative time.
For example, if you are to receive $1000 in five years, and the effective annual interest rate during this period is 10% (or 0.10), then the present value of this amount is
formula_5
The interpretation is that for an effective annual interest rate of 10%, an individual would be indifferent to receiving $1000 in five years, or $620.92 today.
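The same calculation can be written as a one-line function; the following Python sketch reproduces the example above:

```python
def present_value(C, i, n):
    """Present value of a single amount C due n compounding periods from now."""
    return C / (1 + i) ** n

print(round(present_value(1000, 0.10, 5), 2))   # 620.92
```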
The purchasing power in today's money of an amount formula_1 of money, formula_2 years into the future, can be computed with the same formula, where in this case formula_3 is an assumed future inflation rate.
Using a lower discount rate ("i") gives future cash flows a higher present value.
Net present value of a stream of cash flows.
A cash flow is an amount of money that is either paid out or received, differentiated by a negative or positive sign, at the end of a period. Conventionally, cash flows that are received are denoted with a positive sign (total cash has increased) and cash flows that are paid out are denoted with a negative sign (total cash has decreased). The cash flow for a period represents the net change in money of that period. Calculating the net present value, formula_6, of a stream of cash flows consists of discounting each cash flow to the present, using the present value factor and the appropriate number of compounding periods, and combining these values.
For example, if a stream of cash flows consists of +$100 at the end of period one, -$50 at the end of period two, and +$35 at the end of period three, and the interest rate per compounding period is 5% (0.05) then the present value of these three Cash Flows are:
formula_7
formula_8
formula_9 respectively
Thus the net present value would be:
formula_10
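The three discounted values and their sum can be checked with a short Python sketch (cash flows are assumed to fall at the end of periods 1, 2 and 3, as in the example):

```python
def net_present_value(cash_flows, i):
    """NPV of cash flows received at the end of periods 1, 2, ..., n."""
    return sum(c / (1 + i) ** k for k, c in enumerate(cash_flows, start=1))

print(round(net_present_value([100, -50, 35], 0.05), 2))   # 80.12
```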
There are a few considerations to be made.
formula_11
formula_13
Here, formula_14 is the nominal annual interest rate, compounded quarterly, and the interest rate per quarter is formula_15
Present value of an annuity.
Many financial arrangements (including bonds, other loans, leases, salaries, membership dues, annuities including annuity-immediate and annuity-due, straight-line depreciation charges) stipulate structured payment schedules; payments of the same amount at regular time intervals. Such an arrangement is called an annuity. The expressions for the present value of such payments are summations of geometric series.
There are two types of annuities: an annuity-immediate and annuity-due. For an annuity immediate, formula_16 payments are received (or paid) at the end of each period, at times 1 through formula_16, while for an annuity due, formula_16 payments are received (or paid) at the beginning of each period, at times 0 through formula_17. This subtle difference must be accounted for when calculating the present value.
An annuity due is an annuity immediate with one more interest-earning period. Thus, the two present values differ by a factor of formula_18:
formula_19
The present value of an annuity immediate is the value at time 0 of the stream of cash flows:
formula_20
where:
formula_16 = number of periods,
formula_21 = amount of cash flows,
formula_12 = effective periodic interest rate or rate of return.
An approximation for annuity and loan calculations.
The above formula (1) for annuity immediate calculations offers little insight for the average user and requires the use of some form of computing machinery. There is an approximation which is less intimidating, easier to compute and offers some insight for the non-specialist. It is given by
formula_22
Where, as above, C is annuity payment, PV is principal, n is number of payments, starting at end of first period, and i is interest rate per period. Equivalently C is the periodic loan repayment for a loan of PV extending over n periods at interest rate, i. The formula is valid (for positive n, i) for ni≤3. For completeness, for ni≥3 the approximation is formula_23.
The formula can, under some circumstances, reduce the calculation to one of mental arithmetic alone. For example, what are the (approximate) loan repayments for a loan of PV = $10,000 repaid annually for n = ten years at 15% interest (i = 0.15)? The applicable approximate formula is C ≈ 10,000*(1/10 + (2/3) 0.15) = 10,000*(0.1+0.1) = 10,000*0.2 = $2000 pa by mental arithmetic alone. The true answer is $1993, very close.
The overall approximation is accurate to within ±6% (for all n≥1) for interest rates 0≤i≤0.20 and within ±10% for interest rates 0.20≤i≤0.40. It is, however, intended only for "rough" calculations.
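The loan example can be checked against the exact annuity-immediate formula (1). In the Python sketch below, the first function inverts formula (1) for the payment and the second applies the approximation:

```python
def annuity_payment_exact(PV, i, n):
    """Periodic payment C from formula (1): PV = C * (1 - (1 + i)**-n) / i."""
    return PV * i / (1 - (1 + i) ** -n)

def annuity_payment_approx(PV, i, n):
    """Rough approximation C ≈ PV * (1/n + 2*i/3), intended for n*i <= 3."""
    return PV * (1 / n + 2 * i / 3)

print(round(annuity_payment_exact(10000, 0.15, 10)))    # 1993
print(round(annuity_payment_approx(10000, 0.15, 10)))   # 2000
```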
Present value of a perpetuity.
A perpetuity refers to periodic payments, receivable indefinitely, although few such instruments exist. The present value of a perpetuity can be calculated by taking the limit of the above formula as "n" approaches infinity.
formula_24
Formula (2) can also be found by subtracting from (1) the present value of a perpetuity delayed n periods, or directly by summing the present value of the payments
formula_25
which form a geometric series.
Again there is a distinction between a perpetuity immediate – when payments are received at the end of the period – and a perpetuity due – when payments are received at the beginning of a period. And similarly to annuity calculations, a perpetuity due and a perpetuity immediate differ by a factor of formula_26:
formula_27
"See: Bond valuation#Present value approach"
PV of a bond.
A corporation issues a bond, an interest earning debt security, to an investor to raise funds. The bond has a face value, formula_28, coupon rate, formula_29, and maturity date which in turn yields the number of periods until the debt matures and must be repaid. A bondholder will receive coupon payments semiannually (unless otherwise specified) in the amount of formula_30, until the bond matures, at which point the bondholder will receive the final coupon payment and the face value of a bond, formula_31.
The present value of a bond is the purchase price.
The purchase price can be computed as:
formula_32 formula_33
The purchase price is equal to the bond's face value if the coupon rate is equal to the current interest rate of the market, and in this case, the bond is said to be sold 'at par'. If the coupon rate is less than the market interest rate, the purchase price will be less than the bond's face value, and the bond is said to have been sold 'at a discount', or below par. Finally, if the coupon rate is greater than the market interest rate, the purchase price will be greater than the bond's face value, and the bond is said to have been sold 'at a premium', or above par.
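The purchase price formula can be evaluated directly; the face value, coupon rate, market rate and number of periods used below are purely illustrative, assumed numbers:

```python
def bond_price(F, r, i, n):
    """Price = discounted coupons F*r for n periods + discounted face value F."""
    coupons = sum(F * r / (1 + i) ** k for k in range(1, n + 1))
    return coupons + F / (1 + i) ** n

# Coupon rate (3% per period) above the market rate (2.5% per period),
# so the bond sells at a premium, i.e. above its $1000 face value.
print(round(bond_price(1000, 0.03, 0.025, 10), 2))   # ≈ 1043.76
```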
Technical details.
Present value is additive. The present value of a bundle of cash flows is the sum of each one's present value. See time value of money for further discussion.
These calculations must be applied carefully, as there are underlying assumptions:
Variants/approaches.
There are mainly two flavors of Present Value. Whenever there will be uncertainties in both timing and amount of the cash flows, the expected present value approach will often be the appropriate technique. With Present Value under uncertainty, future dividends are replaced by their conditional expectation.
Choice of interest rate.
The interest rate used is the risk-free interest rate if there are no risks involved in the project. The rate of return from the project must equal or exceed this rate of return or it would be better to invest the capital in these risk free assets. If there are risks involved in an investment this can be reflected through the use of a risk premium. The risk premium required can be found by comparing the project with the rate of return required from other projects with similar risks. Thus it is possible for investors to take account of any uncertainty involved in various investments.
Present value method of valuation.
An investor, the lender of money, must decide the financial project in which to invest their money, and present value offers one method of deciding.
A financial project requires an initial outlay of money, such as the price of stock or the price of a corporate bond. The project claims to return the initial outlay, as well as some surplus (for example, interest, or future cash flows). An investor can decide which project to invest in by calculating each project's present value (using the same interest rate for each calculation) and then comparing them. The project with the smallest present value – the least initial outlay – will be chosen because it offers the same return as the other projects for the least amount of money.
Years' purchase.
The traditional method of valuing future income streams as a present capital sum is to multiply the average expected annual cash-flow by a multiple, known as "years' purchase". For example, in selling to a third party a property leased to a tenant under a 99-year lease at a rent of $10,000 per annum, a deal might be struck at "20 years' purchase", which would value the lease at 20 * $10,000, i.e. $200,000. This equates to a present value discounted in perpetuity at 5%. For a riskier investment the purchaser would demand to pay a lower number of years' purchase. This was the method used for example by the English crown in setting re-sale prices for manors seized at the Dissolution of the Monasteries in the early 16th century. The standard usage was 20 years' purchase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "PV = \\frac{C}{(1+i)^n} \\,"
},
{
"math_id": 1,
"text": "\\,C\\,"
},
{
"math_id": 2,
"text": "\\,n\\,"
},
{
"math_id": 3,
"text": "\\,i\\,"
},
{
"math_id": 4,
"text": "v^{n} = \\,(1 + i)^{-n}"
},
{
"math_id": 5,
"text": "PV = \\frac{\\$1000}{(1+0.10)^{5}} = \\$620.92 \\, "
},
{
"math_id": 6,
"text": "\\,NPV\\,"
},
{
"math_id": 7,
"text": "PV_{1} = \\frac{\\$100}{(1.05)^{1}} = \\$95.24 \\, "
},
{
"math_id": 8,
"text": "PV_{2} = \\frac{-\\$50}{(1.05)^{2}} = -\\$45.35 \\, "
},
{
"math_id": 9,
"text": "PV_{3} = \\frac{\\$35}{(1.05)^{3}} = \\$30.23 \\, "
},
{
"math_id": 10,
"text": "NPV = PV_{1}+PV_{2}+PV_{3} = \\frac{100}{(1.05)^{1}} + \\frac{-50}{(1.05)^{2}} + \\frac{35}{(1.05)^{3}} = 95.24 - 45.35 + 30.23 = 80.12, "
},
{
"math_id": 11,
"text": "NPV = 100\\,(1.05)^{-1} + 200\\,(1.10)^{-1}\\,(1.05)^{-1} = \\frac{100}{(1.05)^{1}} + \\frac{200}{(1.10)^{1}(1.05)^{1}} = \\$95.24 + \\$173.16 = \\$268.40 "
},
{
"math_id": 12,
"text": "\\, i \\, "
},
{
"math_id": 13,
"text": " (1+i) = \\left(1+\\frac{i^{4}}{4}\\right)^4 "
},
{
"math_id": 14,
"text": " i^{4} "
},
{
"math_id": 15,
"text": "\\frac{i^{4}}{4}"
},
{
"math_id": 16,
"text": "\\, n \\, "
},
{
"math_id": 17,
"text": "\\, n-1 \\, "
},
{
"math_id": 18,
"text": "(1+i)"
},
{
"math_id": 19,
"text": " PV_\\text{annuity due} = PV_\\text{annuity immediate}(1+i) \\,\\!"
},
{
"math_id": 20,
"text": "PV = \\sum_{k=1}^{n} \\frac{C}{(1+i)^{k}} = C\\left[\\frac{1-(1+i)^{-n}}{i}\\right], \\qquad (1) "
},
{
"math_id": 21,
"text": "\\, C \\, "
},
{
"math_id": 22,
"text": "C \\approx PV \\left( \\frac {1}{n} + \\frac {2}{3} i \\right) "
},
{
"math_id": 23,
"text": " C \\approx PV i"
},
{
"math_id": 24,
"text": "PV\\,=\\,\\frac{C}{i}. \\qquad (2)"
},
{
"math_id": 25,
"text": "PV = \\sum_{k=1}^\\infty \\frac{C}{(1+i)^{k}} = \\frac{C}{i}, \\qquad i > 0,"
},
{
"math_id": 26,
"text": "(1+i) "
},
{
"math_id": 27,
"text": " PV_\\text{perpetuity due} = PV_\\text{perpetuity immediate}(1+i) \\,\\!"
},
{
"math_id": 28,
"text": " F "
},
{
"math_id": 29,
"text": " r "
},
{
"math_id": 30,
"text": " Fr "
},
{
"math_id": 31,
"text": " F(1+r) "
},
{
"math_id": 32,
"text": "PV = \\left[\\sum_{k=1}^{n} Fr(1+i)^{-k}\\right]"
},
{
"math_id": 33,
"text": " + F(1+i)^{-n} "
}
] |
https://en.wikipedia.org/wiki?curid=63218
|
63219935
|
Levitation based inertial sensing
|
Levitation based inertial sensing is a new and rapidly growing technique for measuring linear acceleration, rotation and orientation of a body. Inertial sensors based on this technique, such as accelerometers and gyroscopes, enable ultra-sensitive inertial sensing. For example, the world's best accelerometer used in the LISA Pathfinder in-flight experiment is based on a levitation system which reaches a sensitivity of formula_0 and noise of formula_1.
History.
The pioneering work related to the microparticle levitation was performed by Artur Ashkin in 1970. He demonstrated optical trapping of dielectric microspheres for the first time, forming an optical levitation system, by using a focused laser beam in air and liquid. This new technology was later named "optical tweezer" and applied in biochemistry and biophysics. Later, significant scientific progress on optically levitated systems was made, for example the cooling of the center of mass motion of a micro- or nanoparticle in the millikelvin regime. Very recently a research group published a paper showing motional quantum ground state cooling of a levitated nanoparticle. In addition, levitation based on electrostatic and magnetic approaches have also been proposed and realized.
Levitation systems have shown high force sensitivities in the formula_2 range. For example, an optically levitated dielectric particle has been shown to exhibit force sensitivities beyond ~ formula_3. Thus, levitation systems show promise for ultra-sensitive force sensing, such as detection of short-range interactions. By levitating micro- or mesoparticles with a relatively large mass, this system can be employed as a high-performance inertial sensor, demonstrating nano-g sensitivity.
Method.
One possible working principle behind a levitation based inertial sensing system is the following. By levitating a micro-object in vacuum and after a cool-down process, the center of mass motion of the micro-object can be controlled and coupled to the kinematic states of the system. Once the system's kinematic state changes (in other words, the system undergoes linear or rotational acceleration), the center of mass motion of the levitated micro-object is affected and yields a signal. This signal is related to the changes of the system's kinematic states and can be read out.
Regarding levitation techniques, there are generally three different approaches: optical, electrostatic and magnetic.
Applications.
The sub-attonewton force sensitivity of levitation based system could show promise for applications in many different fields, such as Casimir force sensing, gravitational wave detection and inertial sensing. For inertial sensing, levitation based system could be used to make high-performance accelerometers and gyroscopes employed in inertial measurement units (IMUs) and inertial navigation systems (INSs). These are used in such applications as drone navigation in tunnels and mines, guidance of unmanned aerial vehicles (UAVs), or stabilization of micro-satellites. Levitation based Inertial sensors that have sufficient sensitivity and low noise (formula_4) for measurements in the seismic band (formula_5 to formula_6) can be used in the field of seismometry, in which current inertial sensors cannot meet the requirements.
There are already some commercial products on the market. One example is the iOSG Superconducting gravity sensor, which is based on magnetic levitation and shows a noise of formula_7.
Advantages.
The future trends in inertial sensing require that inertial sensors be lower in cost, higher in performance, and smaller in size. Levitation based inertial sensing systems have already shown high performance. For example, the accelerometer used in the LISA Pathfinder in-flight experiment has a sensitivity of formula_8 and noise of formula_1.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "<10^{-15}\\,{\\rm g}/\\sqrt{\\rm Hz}"
},
{
"math_id": 1,
"text": "<10^{-14}\\,\\rm m/s^2/\\sqrt{Hz}"
},
{
"math_id": 2,
"text": "\\rm zN/\\sqrt{Hz}"
},
{
"math_id": 3,
"text": "2\\times10^{-20}\\,\\rm N/\\sqrt{Hz}"
},
{
"math_id": 4,
"text": "<10^{-9}\\,\\rm m/s^2/\\sqrt{Hz}"
},
{
"math_id": 5,
"text": "\\rm 0.1\\,mHz\n"
},
{
"math_id": 6,
"text": "\\rm 10\\,Hz"
},
{
"math_id": 7,
"text": "<1\\,\\rm(nm/s^2)^2Hz"
},
{
"math_id": 8,
"text": "<10^{-15}\\,\\rm g/\\sqrt{Hz}"
}
] |
https://en.wikipedia.org/wiki?curid=63219935
|
63223344
|
Proofs That Really Count
|
2003 mathematics book by Arthur T. Benjamin and Jennifer Quinn
Proofs That Really Count: the Art of Combinatorial Proof is an undergraduate-level mathematics book on combinatorial proofs of mathematical identities. That is, it concerns equations between two integer-valued formulas, shown to be equal either by showing that both sides of the equation count the same type of mathematical objects, or by finding a one-to-one correspondence between the different types of object that they count. It was written by Arthur T. Benjamin and Jennifer Quinn, and published in 2003 by the Mathematical Association of America as volume 27 of their Dolciani Mathematical Expositions series. It won the Beckenbach Book Prize of the Mathematical Association of America.
Topics.
The book provides combinatorial proofs of thirteen theorems in combinatorics and 246 numbered identities (collated in an appendix). Several additional "uncounted identities" are also included. Many proofs are based on a visual-reasoning method that the authors call "tiling", and in a foreword, the authors describe their work as providing a follow-up for counting problems of the "Proofs Without Words" books by Roger B. Nelsen.
The first three chapters of the book start with integer sequences defined by linear recurrence relations, the prototypical example of which is the sequence of Fibonacci numbers. These numbers can be given a combinatorial interpretation as the number of ways of tiling a formula_0 strip of squares with tiles of two types, single squares and dominos; this interpretation can be used to prove many of the fundamental identities involving the Fibonacci numbers, and generalized to similar relations about other sequences defined similarly, such as the Lucas numbers, using "circular tilings and colored tilings". For instance, for the Fibonacci numbers, considering whether a tiling does or does not connect positions formula_1 and formula_2 of a strip of length formula_3 immediately leads to the identity
formula_4
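For readers who want a quick numerical sanity check, the following minimal Python sketch (an illustration added here, not taken from the book) verifies the identity for small values of "a" and "b", using the standard convention F_0 = 0, F_1 = 1:

```python
# Numerical check of the identity F_{a+b} = F_{a-1} * F_b + F_a * F_{b+1}
# with the standard convention F_0 = 0, F_1 = 1.

def fib(n):
    """Return the n-th Fibonacci number (F_0 = 0, F_1 = 1)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for a in range(1, 20):
    for b in range(1, 20):
        assert fib(a + b) == fib(a - 1) * fib(b) + fib(a) * fib(b + 1)
print("Identity verified for 1 <= a, b < 20")
```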
Chapters four through seven of the book concern identities involving continued fractions, binomial coefficients, harmonic numbers, Stirling numbers, and factorials. The eighth chapter branches out from combinatorics to number theory and abstract algebra, and the final chapter returns to the Fibonacci numbers with more advanced material on their identities.
Audience and reception.
The book is aimed at undergraduate mathematics students, but the material is largely self-contained, and could also be read by advanced high school students. Additionally, many of the book's chapters are themselves self-contained, allowing for arbitrary reading orders or for excerpts of this material to be used in classes. Although it is structured as a textbook with exercises in each chapter, reviewer Robert Beezer writes that it is "not meant as a textbook", but rather intended as a "resource" for teachers and researchers. Echoing this, reviewer Joe Roberts writes that despite its elementary nature, this book should be "valuable as a reference ... for anyone working with such identities".
In an initial review, Darren Glass complained that many of the results are presented as dry formulas, without any context or explanation for why they should be interesting or useful, and that
this lack of context would be an obstacle for using it as the main text for a class. Nevertheless, in a second review after a year of owning the book, he wrote that he was "lending it out to person after person".
Reviewer Peter G. Anderson praises the book's "beautiful ways of seeing old, familiar mathematics and some new mathematics too", calling it "a treasure". Reviewer Gerald L. Alexanderson describes the book's proofs as "ingenious, concrete and memorable". The award citation for the book's 2006 Beckenbach Book Prize states that it "illustrates in a magical way the pervasiveness and power of counting techniques throughout mathematics. It is one of those rare books that will appeal to the mathematical professional and seduce the neophyte."
One of the open problems from the book, seeking a bijective proof of an identity combining binomial coefficients with Fibonacci numbers, was subsequently answered positively by Doron Zeilberger. In the web site where he links a preprint of his paper, Zeilberger writes,
<templatestyles src="Template:Blockquote/styles.css" />"When I was young and handsome, I couldn't see an identity without trying to prove it bijectively. Somehow, I weaned myself of this addiction. But the urge got rekindled, when I read Arthur Benjamin and Jennifer Quinn's masterpiece "Proofs that Really Count"."
Recognition.
"Proofs That Really Count" won the 2006 Beckenbach Book Prize of the Mathematical Association of America, and the 2010 CHOICE Award for Outstanding Academic Title of the American Library Association. It has been listed by the Basic Library List Committee of the Mathematical Association of America as essential for inclusion in any undergraduate mathematics library.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "1\\times n"
},
{
"math_id": 1,
"text": "a-1"
},
{
"math_id": 2,
"text": "a"
},
{
"math_id": 3,
"text": "a+b-1"
},
{
"math_id": 4,
"text": "F_{a+b}=F_{a-1}F_{b}+F_aF_{b+1}."
}
] |
https://en.wikipedia.org/wiki?curid=63223344
|
63226181
|
2 Kings 9
|
2 Kings, chapter 9
2 Kings 9 is the ninth chapter of the second part of the Books of Kings in the Hebrew Bible or the Second Book of Kings in the Old Testament of the Christian Bible. The book is a compilation of various annals recording the acts of the kings of Israel and Judah by a Deuteronomic compiler in the seventh century BCE, with a supplement added in the sixth century BCE. This chapter records Jehu's anointing as the next king of Israel and his assassinations of Jehoram of Israel, Ahaziah of Judah and Jezebel, the queen mother of Israel. The narrative is a part of a major section 2 Kings 9:1–15:12 covering the period of Jehu's dynasty.
Text.
This chapter was originally written in the Hebrew language and since the 16th century is divided into 37 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls, that is, 6Q4 (6QpapKgs; 150–75 BCE) with extant verses 1–2.
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Locations.
This chapter mentions or alludes to the following places (in order of appearance):
Analysis.
This chapter and the next one contain one continuous narrative of Jehu's overthrow of the Omride dynasty and destruction of the Baal worship in Israel, reopening the battle against apostasy which was started by Elijah (). Fulfilling the divine commission given to Elijah, Elisha arranged the anointing of Jehu who then executed a total revolution in Israel and Judah, by killing the reigning kings (and their family members) of both kingdoms. The narrative may be divided into two parallel sections, the first one about the assassination of the leaders (including Jezebel, the queen mother of Israel) in chapter 9 and the second about the killing of their kinsmen (including the Baal worshippers as Jezebel's "kin"), ending with a summary of Jehu's reign and the consequences of his action in relation to his faithfulness to YHWH in chapter 10. The structure can be as follows:
A Jehu is anointed king (9:1-15)
B Jehu kills King Jehoram outside Jezreel (9:16-26)
C Jehu kills King Ahaziah in Beth-haggan (9:27-29)
D Jehu has Jezebel killed in Jezreel (9:30-37)
B' Jehu massacres the house of Ahab in Jezreel (10:1-11)
C' Jehu massacres the kinsmen of King Ahaziah at Beth-eked (10:12-14)
D' Jehu massacres worshipers of Baal and destroys house of Baal in Samaria (10:15-28)
A' Summary of reign of Jehu (10:29-36)
The anointing of Jehu (9:1–15).
The inverted subject-verb order in verse 1 indicates the shift to another story line. The political influence of prophets is shown here as in the previous chapter (8:7–15) when Elisha played a role in Hazael's coup d'état
against Ben-hadad in Aram-Damascus. In this part Elisha uses a military crisis to fulfill the last divine commission in to support Jehu's ousting of the Omrides. The long oracle in verses 7–10 stems from Elijah's prophecy to Ahab at Naboth's vineyard in Jezreel ().
"1And Elisha the prophet called one of the sons of the prophets, and said to him, "Get yourself ready, take this flask of oil in your hand, and go to Ramoth Gilead. 2Now when you arrive at that place, look there for Jehu the son of Jehoshaphat, the son of Nimshi, and go in and make him rise up from among his associates, and take him to an inner room. 3Then take the flask of oil, and pour it on his head, and say, 'Thus says the Lord: "I have anointed you king over Israel."' Then open the door and flee, and do not delay.""
Jehu killed King Jehoram of Israel (9:16–26).
The narrative follows an impressive scene from the sentinel's viewpoint (Greek: "teichoskopia"), how Jehu steers his chariot ('like a maniac') in verses 17–20. Since no messengers he sent to Jehu came back (instead, they got behind Jehu) king Joram decided to investigate the matter himself and met Jehu half way (verse 21). Jehu's reply with sharp criticism of the Omrides' religious policy (verse 22) alerted Joram of Jehu's aggressive intentions, but it was too late to flee; there was only enough time to warn Ahaziah to run. Joram was killed by Jehu's arrow, because, according to Jehu's reasoning, 'Joram had to suffer for a sin committed by his father Ahab' (verses 25, 26a). The discrepancies with (which only mention Naboth, but here also his sons) and the addition of religious dimension in verse 22 suggest the originality of the passage in the context.
"So the watchman reported, saying, "He went up to them and is not coming back; and the driving is like the driving of Jehu the son of Nimshi, for he drives furiously!""
Verse 20.
The man's "crazy" driving style as the chariot was approaching identified the driver as Jehu. The Hebrew word for "crazy" here ("shiggaon") is of the same root word as the nickname "crazy man" ("meshugga") associated to the disciple who anointed Jehu in verse 11.
"And Joram turned his hands, and fled, and said to Ahaziah, There is treachery, O Ahaziah."
"'Surely I saw yesterday the blood of Naboth and the blood of his sons,' says the Lord, 'and I will repay you in this plot,' says the Lord. Now therefore, take and throw him on the plot of ground, according to the word of the Lord."
Verse 26.
After the assassination of Jehoram, Jehu provides a brief flashback that he and Bidkar directly heard the original pronouncement of the oracle against Ahab to avenge the death of Naboth (cf. ). This information sheds new light on why Jehu accepted the oracle after his anointing without question: he had heard it before. The doubled divine word thus fueled his conspiracy and justified the slaying of the son of Ahab as recompense for the murder of the sons of Naboth. The pronouncement is framed by his order to Bidkar to throw Joram into the field of Naboth, fulfilling the prophecy.
Jehu kills King Ahaziah of Judah (9:27–29).
Ahaziah the king of Judah initially managed to flee to the south, but was overtaken after about 10 km on the ascent to the mountains and fatally shot. He still managed to reach Megiddo, where he died (cf. Josiah in ), and was then taken to Jerusalem by his followers.
"And in the eleventh year of Joram the son of Ahab began Ahaziah to reign over Judah."
Jehu had Jezebel killed (9:30–37).
With the death of both kings, Jehu can turn his attention to Jezebel, who is still in Jezreel. He encounters no resistance on entering the city, finding Jezebel, lavishly decorated, appearing at 'the window from which royalty show themselves to the people'. She addressed the approaching Jehu as "Zimri", recalling another usurper who assassinated his royal master, only soon to be overcome himself by Omri (cf. ). Jehu responded impatiently and ordered the queen mother to be thrown out of the window. After the order was promptly carried out, Jehu imperturbably went in to eat; then, as an afterthought, he remembered that noble people should be given a decent burial, but there was not enough left of Jezebel to bury (verses 30–35). Verses 33–37 refer to the judgement made in to legitimize the events.
"And as Jehu entered in at the gate, she said, Had Zimri peace, who slew his master?"
Verse 31.
Jezebel associates Jehu with another assassin, Zimri, who approximately 44 years before had murdered King Elah, only to meet a violent death just a few days later ().
Other.
Elijah is mentioned in verse 36. Elijah reportedly predicted that Jezebel's flesh would be eaten by dogs.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=63226181
|
63227596
|
Christiane Tretter
|
German mathematician and mathematical physicist
Christiane Tretter (born 28 December 1964) is a German mathematician and mathematical physicist who works as a professor in the Mathematical Institute (MAI) of the University of Bern in Switzerland, and as managing director of the institute. Her research interests include differential operators and spectral theory.
Education and career.
Tretter studied mathematics, with a minor in physics, at the University of Regensburg, earning a diploma in 1989, a Ph.D. in 1992, and a habilitation in 1998. Her doctoral dissertation, "Asymptotische Randbedingungen für Entwicklungssätze bei Randeigenwertproblemen zu formula_0 mit formula_1-abhängigen Randbedingungen", was supervised by Reinhard Mennicken.
She became a lecturer at the University of Leicester in 2000, moved to the University of Bremen as a professor in 2002, and took her present position in Bern in 2006.
Since 2008 she has been editor-in-chief of the journal "Integral Equations and Operator Theory".
Books.
Tretter is the author of two mathematical monographs, "Spectral Theory of Block Operator Matrices and Applications" (2008) and "On Lambda-Nonlinear-Boundary-Eigenvalue-Problems" (1993), and of two textbooks in mathematical analysis.
Recognition.
Tretter won the Richard von Mises Prize of the Gesellschaft für Angewandte Mathematik und Mechanik in 1995.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "N(y) = \\lambda P(y)"
},
{
"math_id": 1,
"text": "\\lambda"
}
] |
https://en.wikipedia.org/wiki?curid=63227596
|
63229281
|
HRDetect
|
HRDetect (Homologous Recombination Deficiency Detect) is a whole-genome sequencing (WGS)-based classifier designed to predict BRCA1 and BRCA2 deficiency based on six mutational signatures. Additionally, the classifier is able to identify similarities in mutational profiles of tumors to that of tumors with BRCA1 and BRCA2 defects, also known as BRCAness. This classifier can be applied to assess the suitability of PARP inhibitors for patients with BRCA1/BRCA2 deficiency. The final output is a probability of BRCA1/2 mutation.
Background.
BRCA1/BRCA2.
BRCA1 and BRCA2 play crucial roles in maintaining genome integrity, mainly through homologous recombination (HR) for DNA double-strand break (DSB) repair. Mutations of BRCA1 and BRCA2 can lead to a reduced capacity of the HR machinery, increased genomic instability, and a predisposition to malignancies. People with BRCA1 and BRCA2 deficiency have higher risks of developing certain cancers, such as breast and ovarian cancers. Germline defects in the BRCA1/BRCA2 genes account for up to 5% of breast cancer cases.
PARP inhibitors.
Poly (ADP ribose) polymerase (PARP) inhibitors are designed to treat BRCA1- and BRCA2-defect tumors owing to their homologous recombination deficiency. These drugs have mainly been used in breast and ovarian cancers, and their clinical efficacy in patients with other types of cancer, such as pancreatic cancer, is still being investigated. It is vital to identify suitable patients with BRCA1/BRCA2 deficiency in order to use PARP inhibitors optimally. PARP inhibitors operate on the concept of synthetic lethality, selectively causing cell death in BRCA-mutant cells while sparing normal cells.
HRDetect.
HRDetect was implemented to detect tumors with BRCA1/BRCA2 deficiency using the data from whole-genome sequencing. This model quantitatively aggregates six HRD-associated signatures into a single score called HRDetect to accurately classify breast cancers by their BRCA1 and BRCA2 status. The machine learning algorithm assigns weight values to these signatures prior to computing the final score. The six signatures, ranked by decreasing weight, include microhomology-mediated indels, the HRD index, base-substitution signature 3, rearrangement signature 3, rearrangement signature 5, and base-substitution signature 8. Additionally, this weighted approach is able to identify BRCAness, which refers to mutational phenotypes displaying homologous recombination deficiency similar to tumors with BRCA1/BRCA2 germline defects.
Methodology.
Input.
HRDetect requires four types of inputs:
Statistical Analysis.
HRDetect is based on a supervised learning method, using a lasso logistic regression model to classify samples into those with and without BRCA1/2 deficiency. Optimal coefficients are obtained by minimizing the objective function.
Log Transformation.
To account for a high substitution count in samples, the genomic data is first log transformed:
formula_0
Standardization.
The transformed data is then standardized to make mutational class values comparable, giving each mutational class a mean of 0 and a standard deviation (sd) of 1:
formula_1
Lasso Logistical Regression Modelling.
To be able to distinguish between those affected and not affected by BRCA1/BRCA2 deficiency, a lasso logistic regression model is used:
formula_2
where:
formula_3: BRCA status of a sample (formula_3 = 1 for BRCA1/BRCA2-null samples; formula_3 = 0 otherwise)
formula_4: Intercept, interpreted as the log of odds of formula_3 = 1 when formula_5 = 0
formula_6: Vector of weights
formula_7: Number of features characterizing each sample
formula_8: Number of samples
formula_5: Vector of features characterizing the ith sample
formula_9: Penalty promoting the sparseness of the weights
formula_10: L1 norm of the vector of weights
The β weights are constrained to be positive to reflect the presence of mutational actions due to BRCA1/BRCA2 defects. Setting the constraint of nonnegative weights ensures that all samples would be scored on the basis of the presence of relevant mutational signatures associated with BRCA1/BRCA2 deficiency, irrespective of whether these signatures are the dominant mutational process in the cancer.
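A minimal sketch of how such a nonnegativity-constrained lasso logistic regression could be fitted is shown below (an illustration only; the use of SciPy, the function names, and the toy data are assumptions and not the published HRDetect implementation). Because the weights are constrained to be nonnegative, the L1 penalty equals the plain sum of the weights, so a smooth bound-constrained optimizer suffices:

```python
import numpy as np
from scipy.optimize import minimize

def fit_nonneg_lasso_logistic(X, y, lam):
    """Fit a logistic regression with nonnegative weights and an L1 penalty.

    X   : (N, p) matrix of standardized mutational-signature features
    y   : (N,) vector of 0/1 labels (1 = BRCA1/2-deficient)
    lam : L1 penalty strength (lambda)
    Returns (beta0, beta) with beta >= 0.
    """
    N, p = X.shape

    def objective(params):
        beta0, beta = params[0], params[1:]
        z = beta0 + X @ beta
        # average negative log-likelihood + L1 penalty (= sum(beta), since beta >= 0)
        nll = -np.mean(y * z - np.log1p(np.exp(z)))
        return nll + lam * beta.sum()

    x0 = np.zeros(p + 1)
    # intercept unbounded, weights constrained to [0, inf)
    bounds = [(None, None)] + [(0, None)] * p
    res = minimize(objective, x0, method="L-BFGS-B", bounds=bounds)
    return res.x[0], res.x[1:]

# Toy example with 6 hypothetical signature features
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
true_beta = np.array([2.0, 1.5, 1.0, 0.5, 0.0, 0.0])
y = (rng.random(200) < 1 / (1 + np.exp(-(X @ true_beta - 1)))).astype(float)
beta0, beta = fit_nonneg_lasso_logistic(X, y, lam=0.05)
print("intercept:", round(beta0, 3), "weights:", np.round(beta, 3))
```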
HRDetect Score.
Lastly, the weights obtained from the lasso regression are used to give a new sample a probabilistic score, using the normalized mutational data formula_5 and application of the model parameters (formula_6, formula_4):
formula_11
where:
formula_12 : variable encoding the status of the ith sample
formula_4 : Intercept weight
formula_5: Vector encoding features of the ith sample
formula_6: Vector of weights
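A minimal sketch of this scoring step (illustrative only; the feature values, means, standard deviations and weights below are hypothetical placeholders, not the published HRDetect parameters):

```python
import numpy as np

def hrdetect_score(x_raw, mean, sd, beta0, beta):
    """Probabilistic HRDetect-style score for one sample.

    x_raw     : raw values of the mutational-signature features
    mean, sd  : per-feature statistics from the training data (for standardization)
    beta0, beta : fitted intercept and nonnegative weights
    """
    x = np.log(np.asarray(x_raw, dtype=float) + 1.0)   # log transform ln(x + 1)
    x = (x - mean) / sd                                # standardization
    z = beta0 + x @ beta                               # linear predictor
    return 1.0 / (1.0 + np.exp(-z))                    # logistic (sigmoid) function

# All numbers below are hypothetical, purely for illustration.
score = hrdetect_score(
    x_raw=[120, 0.4, 3000, 15, 8, 500],
    mean=np.array([4.0, 0.3, 7.5, 2.5, 2.0, 5.8]),
    sd=np.full(6, 0.8),
    beta0=-3.0,
    beta=np.array([2.0, 1.5, 1.0, 0.5, 0.3, 0.2]),
)
print(f"HRDetect score: {score:.3f}")
```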
Interpretation.
The probability value quantifies the degree of BRCA1/BRCA2 defectiveness. A cut-off probability value should be chosen while maintaining a high sensitivity. These scores can be utilized to guide therapy.
Applications.
Predicting Chemotherapeutic Outcomes.
Mutations in genes responsible for HR are prevalent among human cancers. The BRCA1 and BRCA2 genes are centrally involved in HR, DNA damage repair, end resection, and checkpoint signaling. Mutational signatures of HRD have been identified in over 20% of breast cancers, as well as pancreatic, ovarian, and gastric cancers. BRCA1/2 mutations confer sensitivity to platinum-based chemotherapies. HRDetect can be independently trained to predict BRCA1/2 status, and has the capacity to predict outcomes of platinum-based chemotherapies.
Breast Cancer.
HRDetect was initially developed to detect tumors with BRCA1 and BRCA2 deficiency based on data from whole-genome sequencing of a cohort of 560 breast cancer samples. Within this cohort, 22 patients were known to carry germline BRCA1/BRCA2 mutations. BRCA1/BRCA2-deficiency mutational signatures were found in more breast cancer patients than previously known. The model identified 124 (22%) breast cancer patients showing BRCA1/2 mutational signatures in this cohort of 560 samples. Apart from the 22 known cases, an additional 33 patients showed deficiency with germline BRCA1/2 mutations, 22 patients displayed somatic mutations of BRCA1/2, and 47 were recognized to show a functional defect without a detected BRCA1/2 mutation. As a result, with a probabilistic cut-off of 0.7, HRDetect demonstrated a 98.7% sensitivity in recognizing BRCA1/2-deficient cases.
In contrast, germline mutations of BRCA1/2 are present in only 1–5% of breast cancer cases. Furthermore, these findings suggest that many more breast cancer patients, as many as 1 in 5 (20%), may benefit from PARP inhibitors than the small percentage of patients currently given the treatment.
Cohort of 80 Breast Cancer Samples.
(Figure: cohort of 80 breast cancer patients; 6 out of 7 are above an HRDetect score of 0.7.)
HRDetect was tested in 80 breast cancer cases, mainly ER-positive and HER2-negative. The tool identified the samples exceeding an HRDetect score of 0.7, including one germline BRCA1 mutation carrier, four germline BRCA2 mutation carriers and one somatic BRCA2 mutation carrier. The sensitivity of the tool reached 86%.
Compatibility Across Cancers.
HRDetect can be applied to other cancer types and yields adequate sensitivity.
Ovarian Cancer.
In a cohort of 73 patients with ovarian cancer, 30 patients were known to carry BRCA1/BRCA2 mutations, and 46 patients (63%) were assessed by HRDetect to have a score over 0.7. The sensitivity of detecting BRCA1/2-deficient cancer was almost 100%, with an additional 16 cases identified.
Pancreatic Cancer.
In a cohort of 96 patients with pancreatic cancers, 6 cases were known to have a mutation or allele loss, and 11 patients (11.5%) were identified by HRDetect to exceed the cutoff of 0.7. The study observed a similar sensitivity approaching 100%, with five additional cases identified.
Advantages and Limitations.
Advantages
Limitations
While HRDetect can be used with whole-exome sequencing (WES) data, the sensitivity of detection falls considerably when the model is not trained on such data. The sensitivity increases when training is performed with WES data; however, false positives are still identified.
|
[
{
"math_id": 0,
"text": " x=\\ln (x+1) "
},
{
"math_id": 1,
"text": " \n\\mathrm{x}=\\frac{x-\\operatorname{mean}\\left(x\\right)}{\\mathrm{s} \\mathrm{d} \\cdot\\left(x\\right)}\n"
},
{
"math_id": 2,
"text": "\\min_{((\\beta_0,\\, \\beta)) \\in \\mathbb{R}^{p+1}}{\\left(-\\left[\\frac{1}{N} \\sum_{i=1}^{N} y_{i} \\cdot\\left(\\beta_{0}+x_{i}^{T} \\beta\\right)-\\log \\left(1+e^{\\left(\\beta_{0}+x_{i}^{T} \\beta\\right)}\\right)\\right]+\\lambda\\|\\beta\\|_{1}\\right)}\n\n"
},
{
"math_id": 3,
"text": "y_{i}"
},
{
"math_id": 4,
"text": "\\beta_{0}"
},
{
"math_id": 5,
"text": "x_{i}^{T}"
},
{
"math_id": 6,
"text": "\\beta"
},
{
"math_id": 7,
"text": "p"
},
{
"math_id": 8,
"text": "N"
},
{
"math_id": 9,
"text": "\\lambda"
},
{
"math_id": 10,
"text": "\\|\\beta\\|"
},
{
"math_id": 11,
"text": "\nP\\left(C_{i}=B R C A\\right)=\\frac{1}{1+e^{-\\left(\\beta_{0}+x_{i}^{T} \\beta\\right)}}\n"
},
{
"math_id": 12,
"text": "C_{i}"
}
] |
https://en.wikipedia.org/wiki?curid=63229281
|
63231005
|
Convexity (algebraic geometry)
|
In algebraic geometry, convexity is a restrictive technical condition for algebraic varieties originally introduced to analyze Kontsevich moduli spaces formula_0 in quantum cohomology. These moduli spaces are smooth orbifolds whenever the target space is convex. A variety formula_1 is called convex if the pullback of the tangent bundle to a stable rational curve formula_2 has globally generated sections. Geometrically this implies the curve is free to move around formula_1 infinitesimally without any obstruction. Convexity is generally phrased as the technical condition
formula_3
since Serre's vanishing theorem guarantees this sheaf has globally generated sections. Intuitively this means that on a neighborhood of a point, with a vector field in that neighborhood, the local parallel transport can be extended globally. This generalizes the idea of convexity in Euclidean geometry, where given two points formula_4 in a convex set formula_5, all of the points formula_6 are contained in that set. There is a vector field formula_7 in a neighborhood formula_8 of formula_9 transporting formula_9 to each point formula_10. Since the vector bundle of formula_11 is trivial, hence globally generated, there is a vector field formula_12 on formula_11 such that the equality formula_13 holds on restriction.
Examples.
There are many examples of convex spaces, including the following.
Spaces with trivial rational curves.
If the only maps from a rational curve to formula_1 are constant maps, then the pullback of the tangent sheaf is the free sheaf formula_14 where formula_15. These sheaves have trivial higher cohomology, and hence such varieties are always convex. In particular, Abelian varieties have this property since the Albanese variety of a rational curve formula_16 is trivial, and every map from a variety to an Abelian variety factors through the Albanese.
Projective spaces.
Projective spaces are examples of homogeneous spaces, but their convexity can also be proved using a sheaf cohomology computation. Recall the Euler sequence relates the tangent space through a short exact sequence
formula_17
If we only need to consider degree formula_18 embeddings, there is a short exact sequence
formula_19
giving the long exact sequence
formula_20
Since the first two formula_21-terms are zero (the first because formula_16 has genus formula_22, and the second by the Riemann–Roch theorem), we have convexity of formula_23. Then, any nodal map can be reduced to this case by considering one of the components formula_24 of formula_16.
Homogeneous spaces.
Another large class of examples is homogeneous spaces formula_25, where formula_26 is a parabolic subgroup of formula_27. These have globally generated sections since formula_27 acts transitively on formula_1, meaning it can take a basis of formula_28 to a basis at any other point formula_29. Then, the pullback is always globally generated. This class of examples includes Grassmannians, projective spaces, and flag varieties.
Product spaces.
Also, products of convex spaces are still convex. This follows from the Künneth theorem in coherent sheaf cohomology.
Projective bundles over curves.
One more non-trivial class of examples of convex varieties are projective bundles formula_30 for an algebraic vector bundle formula_31 over a smooth algebraic curve.
Applications.
There are many useful technical advantages of considering moduli spaces of stable curves mapping to convex spaces. That is, the Kontsevich moduli spaces formula_0 have nice geometric and deformation-theoretic properties.
Deformation theory.
The deformations of formula_2 in the Hilbert scheme of graphs formula_32 has tangent space
formula_33
where formula_34 is the point in the scheme representing the map. Convexity of formula_1 gives the dimension formula below. In addition, convexity implies all infinitesimal deformations are unobstructed.
Structure.
These spaces are normal projective varieties of pure dimension
formula_35
which are locally the quotient of a smooth variety by a finite group. Also, the open subvariety formula_36 parameterizing non-singular maps is a smooth fine moduli space. In particular, this implies the stacks formula_37 are orbifolds.
Boundary divisors.
The moduli spaces formula_0 have nice boundary divisors for convex varieties formula_1 given by
formula_38
for a partition formula_39 of formula_40 and formula_41 the point lying along the intersection of two rational curves formula_42.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\overline{M}_{0,n}(X,\\beta)"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "f:C \\to X"
},
{
"math_id": 3,
"text": "H^1(C, f^*T_X) = 0"
},
{
"math_id": 4,
"text": "p,q"
},
{
"math_id": 5,
"text": "C \\subset \\mathbb{R}^n"
},
{
"math_id": 6,
"text": "tp + (1-t)q"
},
{
"math_id": 7,
"text": "\\mathcal{X}_{U_p}"
},
{
"math_id": 8,
"text": "U_p"
},
{
"math_id": 9,
"text": "p"
},
{
"math_id": 10,
"text": "p' \\in \\{ tp + (1-t)q : t \\in [0,1] \\} \\cap U_p"
},
{
"math_id": 11,
"text": "\\mathbb{R}^n"
},
{
"math_id": 12,
"text": "\\mathcal{X}"
},
{
"math_id": 13,
"text": "\\mathcal{X}|_{U_p} = \\mathcal{X}_{U_{p}}"
},
{
"math_id": 14,
"text": "\\mathcal{O}_C^{\\oplus n}"
},
{
"math_id": 15,
"text": "n = \\dim(X)"
},
{
"math_id": 16,
"text": "C"
},
{
"math_id": 17,
"text": "0 \\to \\mathcal{O} \\to \\mathcal{O}(1)^{\\oplus (n+1)} \\to \\mathcal{T}_{\\mathbb{P}^n} \\to 0"
},
{
"math_id": 18,
"text": "d"
},
{
"math_id": 19,
"text": "0 \\to \\mathcal{O}_C \\to \\mathcal{O}_C(d)^{\\oplus (n+1)} \\to f^*\\mathcal{T}_{\\mathbb{P}^n} \\to 0"
},
{
"math_id": 20,
"text": "\\begin{align}\n0 & \\to H^0(C,\\mathcal{O}) \\to H^0(C,\\mathcal{O}(d)^{\\oplus(n+1)}) \\to H^0(C,f^*\\mathcal{T}_{\\mathbb{P}^n}) \\\\\n& \\to H^1(C,\\mathcal{O}) \\to H^1(C,\\mathcal{O}(d)^{\\oplus(n+1)}) \\to H^1(C,f^*\\mathcal{T}_{\\mathbb{P}^n}) \\to 0\n\\end{align}"
},
{
"math_id": 21,
"text": "H^1"
},
{
"math_id": 22,
"text": "0"
},
{
"math_id": 23,
"text": "\\mathbb{P}^n"
},
{
"math_id": 24,
"text": "C_i"
},
{
"math_id": 25,
"text": "G/P"
},
{
"math_id": 26,
"text": "P"
},
{
"math_id": 27,
"text": "G"
},
{
"math_id": 28,
"text": "T_xX"
},
{
"math_id": 29,
"text": "T_yX"
},
{
"math_id": 30,
"text": "\\mathbb{P}(\\mathcal{E})"
},
{
"math_id": 31,
"text": "\\mathcal{E} \\to C"
},
{
"math_id": 32,
"text": "\\operatorname{Hom}(C,X) \\subset \\operatorname{Hilb}_{C\\times X/\\operatorname{Spec}(\\mathbb{C})}"
},
{
"math_id": 33,
"text": "T_{\\operatorname{Hom}(C,X)}([f]) \\cong H^0(C, f^*T_X)"
},
{
"math_id": 34,
"text": "[f] \\in \\operatorname{Hom}(C,X)"
},
{
"math_id": 35,
"text": "\\dim(\\overline{M}_{0,n}(X,\\beta)) = \\dim(X) + \\int_\\beta c_1(T_X) + n - 3"
},
{
"math_id": 36,
"text": "\\overline{M}_{0,n}^*(X,\\beta)"
},
{
"math_id": 37,
"text": "\\overline{\\mathcal{M}}_{0,n}(X,\\beta)"
},
{
"math_id": 38,
"text": "D(A,B;\\beta_1,\\beta_2) = \\overline{M}_{0,A\\cup \\{\\bullet \\}}(X,\\beta_1) \\times_X \\overline{M}_{0,B\\cup \\{\\bullet \\}}(X,\\beta_2) "
},
{
"math_id": 39,
"text": "A\\cup B"
},
{
"math_id": 40,
"text": "[n]"
},
{
"math_id": 41,
"text": "\\{ \\bullet \\}"
},
{
"math_id": 42,
"text": "C = C_1 \\cup C_2"
}
] |
https://en.wikipedia.org/wiki?curid=63231005
|
63233978
|
Efficient approximately fair item allocation
|
When allocating objects among people with different preferences, two major goals are Pareto efficiency and fairness. Since the objects are indivisible, there may not exist any fair allocation. For example, when there is a single house and two people, every allocation of the house will be unfair to one person. Therefore, several common approximations have been studied, such as "maximin-share fairness" (MMS), "envy-freeness up to one item" (EF1), "proportionality up to one item" (PROP1), and "equitability up to one item" (EQ1). The problem of efficient approximately fair item allocation is to find an allocation that is both Pareto-efficient (PE) and satisfies one of these fairness notions. The problem was first presented in 2016 and has attracted considerable attention since then.
Setting.
There is a finite set of objects, denoted by "M". There are "n" agents. Each agent "i" has a value function "Vi" that assigns a value to each subset of objects. The goal is to partition "M" into "n" subsets, "X"1...,"Xn", and give each subset "Xi" to agent "i", such that the allocation is both Pareto-efficient and approximately fair. There are various notions of approximate fairness.
Efficient approximately envy-free allocation.
An allocation is called envy-free (EF) if every agent believes that the value of his/her share is at least as high as the value of any other agent's share. It is called envy-free up to one item (EF1) if, for every two agents i and j, after at most one item is removed from the bundle of j, i does not envy j. Formally: formula_0
Some early algorithms could find an approximately fair allocation that satisfies a weak form of efficiency, but not PE.
This raised the question of finding an allocation that is both PE and EF1.
Maximum Nash Welfare rule.
Caragiannis, Kurokawa, Moulin, Procaccia, Shah and Wang were the first to prove the existence of a PE+EF1 allocation. They proved that, when all agents have "positive additive utilities", every allocation that maximizes the "product of utilities" (also known as the "Nash welfare") is EF1. Since it is obvious that the maximizing allocation is PE, the existence of PE+EF1 allocation follows.
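For small instances, the max-product (Nash-welfare-maximizing) allocation can be computed by brute force and its EF1 property checked directly. The following Python sketch (an illustration with made-up valuations, exponential in the number of items) does exactly that for positive additive valuations:

```python
from itertools import product

def max_nash_welfare(values):
    """values[i][o] = additive value of agent i for item o (all positive).
    Returns an allocation (list of bundles) maximizing the product of utilities."""
    n, m = len(values), len(values[0])
    best, best_prod = None, -1.0
    for assignment in product(range(n), repeat=m):      # item o -> agent assignment[o]
        bundles = [[o for o in range(m) if assignment[o] == i] for i in range(n)]
        utils = [sum(values[i][o] for o in bundles[i]) for i in range(n)]
        prod_u = 1.0
        for u in utils:
            prod_u *= u
        if prod_u > best_prod:
            best_prod, best = prod_u, bundles
    return best

def is_ef1(values, bundles):
    """Check envy-freeness up to one item (EF1) for additive valuations."""
    n = len(values)
    for i in range(n):
        vi_own = sum(values[i][o] for o in bundles[i])
        for j in range(n):
            if i == j or not bundles[j]:
                continue
            vij = sum(values[i][o] for o in bundles[j])
            # i must not envy j after removing i's most valued item from j's bundle
            if vi_own < vij - max(values[i][o] for o in bundles[j]):
                return False
    return True

values = [[4, 1, 3, 2], [2, 3, 1, 4]]     # 2 agents, 4 items (hypothetical valuations)
alloc = max_nash_welfare(values)
print(alloc, "EF1:", is_ef1(values, alloc))
```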
While the max-product allocation has desirable properties, it cannot be computed in polynomial time: finding a max-product allocation is NP-hard and even APX-hard. This led to various works attempting to approximate the maximum product, with improving approximation factors:
However, these approximations do not guarantee EF1. Some more recent algorithms guarantee both approximate max-product and fairness:
The max-product solution is particularly appealing when the valuations are binary (the value of each item is either 0 or 1):
Non-additive valuations.
If the agents' utilities are not additive, the max-product solution is not necessarily EF1; but if the agents' utilities are at least submodular, the max-product solution satisfies a weaker property called "Marginal-Envy-Freeness except-1-item" (MEF1): it means that each agent "i" values his bundle at least as much as the "marginal utility" he would gain by adding the bundle of "j", with the best item removed from it, to his own bundle. Formally: formula_1
Similar approximations have been found for more general utility functions:
Mixed valuations.
Martin and Walsh show that, with "mixed manna" (additive valuations that may be both positive and negative), maximizing the product of utilities (or minimizing the product of minus the utilities) does not ensure EF1. They also prove that an EFX3 allocation may not exist even with identical utilities. However, with tertiary utilities, EFX and PO allocations, or EFX3 and PO allocations always exist; and with identical utilities, EFX and PO allocations always exist. For these cases there are polynomial-time algorithms.
Increasing price algorithm.
Barman, Krishnamurthy and Vaish presented a pseudo-polynomial time algorithm for finding PE+EF1 allocations for positive additive valuations. They proved the following results.
Basic concepts.
Their algorithm is based on the notion of competitive equilibrium in a Fisher market. It uses the following concepts.
Algorithm.
Given a parameter "e", the algorithm aims to find an allocation that is both fPO and 3"e"-pEF1. It proceeds in several phases.
Phase 1: Construct an initial MBB allocation+price (X, p).
Phase 2: Remove price-envy within MBB hierarchy:
Phase 3: Increase the prices. Increase the prices of all objects in the MBB hierarchy by the same multiplicative factor, until one of the following three things happens:
Proof of correctness.
First, assume that the above algorithm is executed on an instance in which all values are powers of (1+"e"), for some fixed "e">0.
Now, assume that we have an instance with general valuations. We run the above algorithm on a "rounded instance", where each valuation is rounded upwards to the nearest power of (1+"e"). Note that for each agent "i" and object "o", the rounded value "Vi"'("o") is bounded between "Vi"("o") and (1+"e")"Vi"("o").
Generalized Adjusted Winner.
Aziz, Caragiannis, Igarashi and Walsh extended the condition of EF1 to "mixed valuations" (objects can have both positive and negative utilities). They presented a generalization of the adjusted winner procedure, for finding a PO+EF1 allocation for two agents in time O("m"2).
Efficient approximately proportional allocation.
An allocation of objects is proportional (PROP) if every agent values his/her share at least as much as 1/"n" of the value of all items. It is called proportional up to one item (PROP1) if, for every agent "i", adding at most one item to the bundle of "i" is enough to bring its value, in "i"'s eyes, to at least 1/"n" of the total. Formally, for all "i" (where "M" is the set of all goods): formula_4
The PROP1 condition was introduced by Conitzer, Freeman and Shah in the context of fair public decision making. They proved that, in this case, a PE+PROP1 allocation always exists.
Since every EF1 allocation is PROP1, a PE+PROP1 allocation exists in indivisible item allocation too; the question is whether such allocations can be found by faster algorithms than the ones for PE+EF1.
Barman and Krishnamurthy presented a strongly polynomial-time algorithm finding a PE+PROP1 allocation for "goods" (objects with positive utility).
Branzei and Sandomirskiy extended the condition of PROP1 to "chores" (objects with negative utility). Formally, for all "i": formula_5
They presented an algorithm finding a PE+PROP1 allocation of chores. The algorithm is strongly polynomial-time if either the number of objects or the number of agents (or both) are fixed.
Aziz, Caragiannis, Igarashi and Walsh extended the condition of PROP1 to "mixed valuations" (objects can have both positive and negative utilities). In this setting, an allocation is called PROP1 if, for each agent "i", if we remove one negative item from i's bundle, or add one positive item to i's bundle, then i's utility is at least 1/"n" of the total. Their Generalized Adjusted Winner algorithm finds a PE+EF1 allocation for two agents; such an allocation is also PROP1.
Aziz, Moulin and Sandomirskiy presented a strongly polynomial-time algorithm for finding an allocation that is fractionally PE (stronger than PE) and PROP1, with general mixed valuations, even if the number of agents or objects is not fixed, and even if the agents have different entitlements.
Efficient approximately equitable allocation.
An allocation of objects is called equitable (EQ) if the subjective value of all agents is the same. The motivation for studying this notion comes from experiments showing that human subjects prefer equitable allocations to envy-free ones. An allocation is called equitable up to one item (EQ1) if, for every two agents i and j, after at most one item is removed from the bundle of j, the subjective value of i is at least that of j. Formally, for all "i", "j": formula_6
A stronger notion is equitable up to any item (EQx): for every two agents i and j, if "any" single item is removed from the bundle of j, then the subjective value of i is at least that of j: formula_7
EQx allocations were first studied by Gourves, Monnot and Tlilane, who used a different term: "near jealousy-free". They proved that a partial EQx allocation always exists, even with the additional requirement that the union of all allocated goods is a basis of a given matroid. They used an algorithm similar to the envy-graph procedure. Suksompong proved that an EQ1 allocation exists even with the additional requirement that all allocations must be contiguous subsets of a line.
Freeman, Sidkar, Vaish and Xia proved the following stronger results:
Algorithms for a small number of agents.
Bredereck, Kaczmarcyk, Knop and Niedermeier study a setting where there are few agents (small "n") and few item-types (small "m"), the utility per item-type is upper-bounded (by "V"), but there can be many items of each type. For this setting, they prove the following meta-theorem (Theorem 2): Given an efficiency criterion E, and a fairness criterion F, if formula_8 is fixed, then it is possible to decide in polynomial time whether there exists an allocation that is both E-efficient and F-fair, as long as E and F satisfy the following properties:
Then, they prove that several common fairness and efficiency criteria satisfy these properties, including:
The runtime of their algorithm is polynomial in the input size (in bits) times formula_10, where "d" is the number of variables in the resulting ILP, which is formula_11.
They later developed more practical algorithms for some of these problems.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\forall i,j: ~~~ \\exists Y\\subseteq X_j: ~~~|Y|\\leq 1, ~~~V_i(X_i) \\geq V_i(X_j\\setminus Y)"
},
{
"math_id": 1,
"text": "\\forall i,j: ~~~ \\exists Y\\subseteq X_j: ~~~|Y|\\leq 1, ~~~V_i(X_i) \\geq \nV_i(X_i \\cup (X_j\\setminus Y)) - V_i(X_i) "
},
{
"math_id": 2,
"text": "\\forall i,j: ~~~ \\exists Y\\subseteq X_j: ~~~|Y|\\leq 1, ~~~(1+e)\\cdot V_i(X_i) \\geq V_i(X_j\\setminus Y)"
},
{
"math_id": 3,
"text": "\\forall i,j: ~~~ \\exists C\\subseteq X_j: ~~~|C|\\leq 1, ~~~(1+e)\\cdot (1+3e)\\cdot V_i(X_i) \\geq V_i(X_j\\setminus C)"
},
{
"math_id": 4,
"text": "\\exists Y\\subseteq M \\setminus X_i: ~~~|Y|\\leq 1, ~~~V_i(X_i \\cup Y) \\geq V_i(M)/n"
},
{
"math_id": 5,
"text": "\\exists Y\\subseteq X_i: ~~~|Y|\\leq 1, ~~~V_i(X_i \\setminus Y) \\geq V_i(M)/n"
},
{
"math_id": 6,
"text": "\\exists Y\\subseteq X_j: ~~~|Y|\\leq 1, ~~~V_i(X_i) \\geq V_j(X_j\\setminus Y)"
},
{
"math_id": 7,
"text": "\\forall y\\in X_j: ~~~V_i(X_i) \\geq V_j(X_j\\setminus \\{y\\})"
},
{
"math_id": 8,
"text": "n+u_{\\max}+m"
},
{
"math_id": 9,
"text": "f(n,u_{\\max})"
},
{
"math_id": 10,
"text": "d^{2.5 d}"
},
{
"math_id": 11,
"text": "d = m(n+1) + m(n+1)\\cdot(4\\cdot (4n\\cdot V+1)^n)^{m(n+1)}"
}
] |
https://en.wikipedia.org/wiki?curid=63233978
|
63239261
|
Protected polymorphism
|
In population genetics, a protected polymorphism is a mechanism that maintains multiple alleles at a given locus. In detail, each of the alleles follows certain dynamics: when an allele is high in frequency (p formula_0 1), it will decrease in frequency in the future, thereby avoiding fixation in the population. Conversely, when an allele is low in frequency (p formula_0 0), it will increase in frequency in the future, avoiding extinction and maintaining the polymorphism at the locus.
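A classic mechanism that yields a protected polymorphism is heterozygote advantage (overdominance). The following minimal Python sketch (illustrative only; the fitness values are arbitrary assumptions) iterates the standard one-locus, two-allele selection recursion and shows that the allele frequency is pushed away from both 0 and 1 toward an interior equilibrium:

```python
def next_freq(p, w_aa, w_ab, w_bb):
    """One generation of selection at a one-locus, two-allele diploid system.
    p is the frequency of allele A; w_* are genotype fitnesses."""
    q = 1.0 - p
    w_bar = p * p * w_aa + 2 * p * q * w_ab + q * q * w_bb
    return p * (p * w_aa + q * w_ab) / w_bar

# Heterozygote advantage: Aa is fitter than both homozygotes
w_aa, w_ab, w_bb = 0.8, 1.0, 0.9

for p0 in (0.01, 0.99):            # start near loss and near fixation
    p = p0
    for _ in range(200):
        p = next_freq(p, w_aa, w_ab, w_bb)
    print(f"start p = {p0:.2f} -> after 200 generations p = {p:.3f}")
# Both trajectories converge to the interior equilibrium
# p* = (w_ab - w_bb) / (2*w_ab - w_aa - w_bb) = 0.1 / 0.3 ~ 0.333
```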
|
[
{
"math_id": 0,
"text": "\\to"
}
] |
https://en.wikipedia.org/wiki?curid=63239261
|
63245755
|
One-way wave equation
|
Differential equation important in physics
A one-way wave equation is a first-order partial differential equation describing one wave traveling in a direction defined by the vector wave velocity. It contrasts with the second-order two-way wave equation, which describes a standing wavefield resulting from the superposition of two waves travelling in opposite directions (using the squared scalar wave velocity). In the one-dimensional case it is also known as a transport equation, and it allows wave propagation to be calculated without the mathematical complication of solving a second-order differential equation. Because no general solution to the 3D one-way wave equation has been found in recent decades, numerous approximation methods based on the 1D one-way wave equation are used for 3D seismic and other geophysical calculations; see also the section on the three-dimensional case below.
One-dimensional case.
The scalar second-order (two-way) wave equation describing a standing wavefield can be written as:
formula_0
where formula_1 is the coordinate, formula_2 is time, formula_3 is the displacement, and formula_4 is the wave velocity.
Due to the ambiguity in the direction of the wave velocity, formula_5, the equation does not contain information about the wave direction and therefore has solutions propagating in both the forward (formula_6) and backward (formula_7) directions. The general solution of the equation is the summation of the solutions in these two directions:
formula_8
where formula_9 and formula_10 are the displacement amplitudes of the waves running in formula_11 and formula_12 direction.
When a one-way wave problem is formulated, the wave propagation direction has to be (manually) selected by keeping one of the two terms in the general solution.
Factoring the operator on the left side of the equation yields a pair of one-way wave equations, one with solutions that propagate forwards and the other with solutions that propagate backwards.
formula_13
The backward- and forward-travelling waves are described respectively (for formula_14),
formula_15
The one-way wave equations can also be physically derived directly from specific acoustic impedance.
In a longitudinal plane wave, the specific impedance determines the local proportionality of pressure formula_16 and particle velocity formula_17:
formula_18
with formula_19 = density.
The conversion of the impedance equation leads to:
v − p/(ρc) = 0   (⁎)
A longitudinal plane wave of angular frequency formula_20 has the displacement formula_21.
The pressure formula_22 and the particle velocity formula_23 can be expressed in terms of the displacement formula_24 (formula_25: Elastic Modulus):
formula_26 (for the 1D case this is in full analogy to stress formula_27 in mechanics: formula_28, with strain defined as formula_29)
formula_30
These relations inserted into the equation above (⁎) yield:
formula_31
With the local wave velocity definition (speed of sound):
formula_32
the first-order partial differential equation of the one-way wave equation follows directly:
formula_33
The wave velocity formula_4 can be set within this wave equation as formula_11 or formula_12 according to the direction of wave propagation.
For wave propagation in the direction of formula_11 the unique solution is
formula_34
and for wave propagation in the formula_12 direction the respective solution is
formula_35
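The one-way (transport) equation is also easy to integrate numerically. The following minimal Python sketch (a first-order upwind finite-difference scheme with illustrative parameters, not taken from the cited references) propagates an initial pulse in the formula_6 direction essentially without change of shape, consistent with the solution formula_34:

```python
import numpy as np

# Solve ds/dt + c ds/dx = 0 (wave travelling in the +x direction)
c = 1.0                       # wave velocity
L, nx = 10.0, 400             # domain length and number of grid points
dx = L / nx
dt = 0.5 * dx / c             # CFL number 0.5, stable for the upwind scheme
x = np.linspace(0.0, L, nx, endpoint=False)

s = np.exp(-((x - 2.0) / 0.3) ** 2)      # initial Gaussian pulse centred at x = 2

for _ in range(400):
    # first-order upwind difference (backward in x, since c > 0), periodic boundary
    s = s - c * dt / dx * (s - np.roll(s, 1))

print("pulse peak has moved to x =", x[np.argmax(s)])   # expect about 2 + c * 400 * dt = 7
```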
There also exists a spherical one-way wave equation describing the wave propagation of a monopole sound source in spherical coordinates, i.e., in the radial direction. By a modification of the radial nabla operator, an inconsistency between the spherical divergence and Laplace operators is resolved, and the resulting solution does not involve Bessel functions (in contrast to the known solution of the conventional two-way approach).
Three-dimensional case.
The one-way equation and its solution in the three-dimensional case were assumed to be obtained in a similar way as for the one-dimensional case, by a mathematical decomposition (factorization) of a second-order differential equation. In fact, the 3D one-way wave equation can be derived from first principles: a) derivation from the impedance theorem, and b) derivation from a tensorial impulse-flow equilibrium in a field point. It is also possible to derive the vectorial two-way wave operator from a synthesis of two one-way wave operators (using a combined field variable). This approach shows that the two-way wave equation or two-way wave operator can be used for the specific condition ∇c=0, i.e. for a homogeneous and anisotropic medium, whereas the one-way wave equation or one-way wave operator is also valid in inhomogeneous media.
Inhomogeneous media.
For inhomogeneous media with location-dependent elasticity modulus formula_36, density formula_37 and wave velocity formula_38, an analytical solution of the one-way wave equation can be derived by introduction of a new field variable.
Further mechanical and electromagnetic waves.
The method of PDE factorization can also be transferred to other second- or fourth-order wave equations, e.g. the transversal, string, Moens/Korteweg, bending, and electromagnetic wave equations.
|
[
{
"math_id": 0,
"text": "\\frac{\\partial^2 s}{\\partial t^2} - c^2 \\frac{\\partial^2 s}{\\partial x^2} = 0,"
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "t"
},
{
"math_id": 3,
"text": "s=s(x,t)"
},
{
"math_id": 4,
"text": "c"
},
{
"math_id": 5,
"text": "c^2=(+c)^2=(-c)^2"
},
{
"math_id": 6,
"text": "+x"
},
{
"math_id": 7,
"text": "-x"
},
{
"math_id": 8,
"text": "s(x,t)=s_{+}(t -x/c) + s_{-} (t +x/c)"
},
{
"math_id": 9,
"text": "s_{+}"
},
{
"math_id": 10,
"text": "s_{-}"
},
{
"math_id": 11,
"text": "+c"
},
{
"math_id": 12,
"text": "-c"
},
{
"math_id": 13,
"text": "\\left({\\partial^2\\over\\partial t^2}-c^2{\\partial^2\\over\\partial x^2}\\right)s=\n\\left({\\partial\\over\\partial t}-c{\\partial\\over\\partial x}\\right)\n\\left({\\partial\\over\\partial t}+c{\\partial\\over\\partial x}\\right)s=0,"
},
{
"math_id": 14,
"text": "c > 0"
},
{
"math_id": 15,
"text": "\n\\begin{align}\n& {\\frac{\\partial s}{\\partial t} - c \\frac{\\partial s}{\\partial x} = 0} \\\\[6pt]\n& {\\frac{\\partial s}{\\partial t} + c \\frac{\\partial s}{\\partial x} = 0}\n\\end{align}\n"
},
{
"math_id": 16,
"text": "p= p(x,t)"
},
{
"math_id": 17,
"text": "v= v(x,t)"
},
{
"math_id": 18,
"text": "\\frac{p}{v}=\\rho c ,"
},
{
"math_id": 19,
"text": "\\rho"
},
{
"math_id": 20,
"text": "\\omega"
},
{
"math_id": 21,
"text": "s = s(x,t)"
},
{
"math_id": 22,
"text": "p"
},
{
"math_id": 23,
"text": "v"
},
{
"math_id": 24,
"text": "s"
},
{
"math_id": 25,
"text": "E"
},
{
"math_id": 26,
"text": "p:=E {\\partial s\\over\\partial x}"
},
{
"math_id": 27,
"text": "\\sigma"
},
{
"math_id": 28,
"text": "\\sigma = E \\varepsilon"
},
{
"math_id": 29,
"text": "\\varepsilon = \\frac{\\Delta L}{L}"
},
{
"math_id": 30,
"text": "v = {\\partial s \\over\\partial t}"
},
{
"math_id": 31,
"text": "{\\partial s \\over\\partial t} - {E\\over \\rho c} {\\partial s\\over\\partial x} = 0 "
},
{
"math_id": 32,
"text": "c=\\sqrt{E(x) \\over \\rho(x)} \\Leftrightarrow c = {E\\over \\rho c}"
},
{
"math_id": 33,
"text": "{\\frac{\\partial s}{\\partial t}-c \\frac{\\partial s}{\\partial x} = 0}"
},
{
"math_id": 34,
"text": "s(x,t)=s_{+}(t -x/c) "
},
{
"math_id": 35,
"text": "s(x,t)=s_{-}(t+x/c) "
},
{
"math_id": 36,
"text": "E(x)"
},
{
"math_id": 37,
"text": "\\rho(x)"
},
{
"math_id": 38,
"text": "c(x)"
}
] |
https://en.wikipedia.org/wiki?curid=63245755
|
63246109
|
2 Kings 10
|
2 Kings, chapter 10
2 Kings 10 is the tenth chapter of the second part of the Books of Kings in the Hebrew Bible or the Second Book of Kings in the Old Testament of the Christian Bible. The book is a compilation of various annals recording the acts of the kings of Israel and Judah by a Deuteronomic compiler in the seventh century BCE, with a supplement added in the sixth century BCE. This chapter records Jehu's massacres of the sons of Ahab, the kinsmen of Ahaziah the king of Judah and the Baal worshippers linked to Jezebel. The narrative is a part of a major section 2 Kings 9:1–15:12 covering the period of Jehu's dynasty.
Text.
This chapter was originally written in the Hebrew language and since the 16th century is divided into 36 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls, that is, 6Q4 (6QpapKgs; 150–75 BCE) with extant verses 19–21.
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
This chapter and the previous one contain the narrative of Jehu's overthrow of the Omride dynasty and destruction of the Baal worship in Israel, reopening the battle against apostasy which was started by Elijah (). Following his anointing, Jehu executed a total revolution in Israel and Judah, by killing the reigning kings (and their family members) of both kingdoms. The narrative may be divided into two parallel sections, the first one about the assassination of the leaders (including Jezebel, the queen mother of Israel) and the second about the killing of their kinsmen (including the Baal worshippers as Jezebel's "kin"), ending with a summary of Jehu's reign and the consequences of his action in relation to his faithfulness to YHWH. The structure can be as follows:
A Jehu is anointed king (9:1–15)
B Jehu kills King Jehoram outside Jezreel (9:16–26)
C Jehu kills King Ahaziah in Beth-haggan (9:27–29)
D Jehu has Jezebel killed in Jezreel (9:30–37)
B' Jehu massacres the house of Ahab in Jezreel (10:1–11)
C' Jehu massacres the kinsmen of King Ahaziah at Beth-eked (10:12–14)
D' Jehu massacres worshipers of Baal and destroys house of Baal in Samaria (10:15–28)
A' Summary of reign of Jehu (10:29–36)
Jehu massacres the house of Ahab (10:1–11).
The eradication of the entire ruling house after a coup was common in the ancient Near East, because it minimized the threat of blood-revenge and claims to the throne. As the royal house of Omri is in Samaria (), Jehu wrote to the Samarians to 'choose between loyalty to the previous dynasty and defection to him, the murderer of their king' (verses 1–5). The Samarians, like the Jezreelites, chose to follow Jehu and they brought the heads of the decapitated 70 Omrides to Jezreel (verses 6–7). Jehu took responsibility for murdering the king, but not for the slaughter of the royal family. It seems that Jehu was God's instrument to fulfill the prophecy spoken through the prophet Elijah (verse 10), but the way he executed the coup was blameworthy, because about 100 years later the prophet Hosea states that God 'will punish the house of Jehu for the blood of Jezreel' ().
"Now Ahab had seventy sons in Samaria. And Jehu wrote and sent letters to Samaria, to the rulers of Jezreel, to the elders, and to those who reared Ahab’s sons, saying:"
Verse 1.
The correspondence regarding the fate of Ahab's sons recalls Ahab and Jezebel's correspondence with the nobles of Jezreel regarding Naboth's fate ().
"Know now that nothing shall fall to the earth of the word of the Lord which the Lord spoke concerning the house of Ahab; for the Lord has done what He spoke by His servant Elijah."
Jehu massacres the kinsmen of King Ahaziah (10:12–14).
Jehu met forty-two male members of the Judean royal family, who were closely tied and related to the Israelite royal house (cf. 2 Kings 3:7; 8:26, 29), near Beth-eked (presumably between Jezreel and Samaria); they ignorantly announced 'their allegiance to the Omrides, and thereby condemned themselves to death' (verses 13–14).
Jehu massacres worshipers of Baal and destroys house of Baal (10:15–28).
In their common 'zeal for the LORD', Jehu formed an alliance with Jehonadab ben Rechab, presumably the leader of a nomadic YHWH-worshipping religious clan which had strictly detached itself from the culture and religion of the country (cf. Jeremiah 35). The news that many Omrides had been killed (verse 17) is related to the full execution of the announcement made in . Jehu (and Jehonadab) then targets the house of Baal in Samaria, established since the time of Ahab (). As the Baal worshippers were closely linked to Ahab's royal family, the attack on them is clearly in line with Jehu's revolution. Jehu gathers all the prophets and priests in the temple using lures and threats (verses 18–19). Jehu's announcement, 'I have a great sacrifice to offer to Baal' (verse 19) is 'cruelly ambiguous, as he initially performs the sacrificial rites as a devout king would do (verse 24), only to order the ensuing human sacrifice'. According to verse 21, all servants of Baal throughout Israel should be eradicated, but individual YHWH-worshippers must first be separated from the mass (verse 22b), recalling the same problem in Genesis 18:17–33. Jehu's soldiers executed the order thoroughly, destroying the "cella" ('the citadel of the temple') and the pillar within it, then transforming the holy site into a latrine, to remain so 'unto this day' (verses 25, 27). Jehu's victory led to a decisive turn in the political and religious history of Israel.
"And they demolished the pillar of Baal, and demolished the house of Baal, and made it a latrine to this day."
The reign of Jehu (10:29–36).
The final passage of this chapter contains annal notes of Jehu's reign. Jehu eradicated Baal worship in Israel, but the idol worship
sites still stood in Bethel and Dan, so he received a bad rating, although his dynasty lasted four generations: no more than the Omrides, but longer in years (36 years for the house of Omri compared to 100 years for the house of Jehu, of which Jehu himself ruled for 28 years). However, verse 32 immediately shows that it was not a particularly good time for Israel, as the Arameans quickly put Israel under pressure. On the Tel Dan Stele, erected presumably by Hazael the king of Aram (Syria) in the same period, it is written that the Arameans had comprehensive victories over Israel and Judah, explicitly stating the killing of "Joram the son of Ahab king of Israel and Ahaziah son of Jehoram of the king of the house of David", with a probable reading that Jehu was appointed to rule Israel (lines 11–12). This could mean that Jehu (willingly or unwillingly) was Hazael's accomplice. Soon the Assyrians came and defeated the Arameans, so Jehu might have had to pay tribute to Shalmaneser III, the Assyrian king, as depicted on the Black Obelisk (made in about 825 BCE, found in Nimrud, now in the British Museum).
"And the time that Jehu reigned over Israel in Samaria was twenty and eight years."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=63246109
|
632489
|
Quantum algorithm
|
Algorithms run on quantum computers, typically relying on superposition and/or entanglement
In quantum computing, a quantum algorithm is an algorithm that runs on a realistic model of quantum computation, the most commonly used model being the quantum circuit model of computation. A classical (or non-quantum) algorithm is a finite sequence of instructions, or a step-by-step procedure for solving a problem, where each step or instruction can be performed on a classical computer. Similarly, a quantum algorithm is a step-by-step procedure, where each of the steps can be performed on a quantum computer. Although all classical algorithms can also be performed on a quantum computer, the term quantum algorithm is generally reserved for algorithms that seem inherently quantum, or use some essential feature of quantum computation such as quantum superposition or quantum entanglement.
Problems that are undecidable using classical computers remain undecidable using quantum computers. What makes quantum algorithms interesting is that they might be able to solve some problems faster than classical algorithms because the quantum superposition and quantum entanglement that quantum algorithms exploit generally cannot be efficiently simulated on classical computers (see Quantum supremacy).
The best-known algorithms are Shor's algorithm for factoring and Grover's algorithm for searching an unstructured database or an unordered list. Shor's algorithm runs much (almost exponentially) faster than the best-known classical algorithm for factoring, the general number field sieve. Grover's algorithm runs quadratically faster than the best possible classical algorithm for the same task, a linear search.
Overview.
Quantum algorithms are usually described, in the commonly used circuit model of quantum computation, by a quantum circuit that acts on some input qubits and terminates with a measurement. A quantum circuit consists of simple quantum gates, each of which acts on some finite number of qubits. Quantum algorithms may also be stated in other models of quantum computation, such as the Hamiltonian oracle model.
Quantum algorithms can be categorized by the main techniques involved in the algorithm. Some commonly used techniques/ideas in quantum algorithms include phase kick-back, phase estimation, the quantum Fourier transform, quantum walks, amplitude amplification and topological quantum field theory. Quantum algorithms may also be grouped by the type of problem solved; see, e.g., the survey on quantum algorithms for algebraic problems.
Algorithms based on the quantum Fourier transform.
The quantum Fourier transform is the quantum analogue of the discrete Fourier transform, and is used in several quantum algorithms. The Hadamard transform is also an example of a quantum Fourier transform over an n-dimensional vector space over the field F2. The quantum Fourier transform can be efficiently implemented on a quantum computer using only a polynomial number of quantum gates.
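To make this concrete, the following minimal sketch (illustrative only, using NumPy, and not drawn from any cited source) constructs the quantum Fourier transform matrix on a small register, checks that it is unitary, and verifies that for a single qubit it coincides with the Hadamard gate.
<syntaxhighlight lang="python">
import numpy as np

def qft_matrix(n_qubits: int) -> np.ndarray:
    """Return the 2^n x 2^n quantum Fourier transform matrix."""
    N = 2 ** n_qubits
    omega = np.exp(2j * np.pi / N)              # primitive N-th root of unity
    j, k = np.meshgrid(np.arange(N), np.arange(N))
    return omega ** (j * k) / np.sqrt(N)

F = qft_matrix(3)
# Unitarity: F F^dagger = identity (up to floating-point error).
assert np.allclose(F @ F.conj().T, np.eye(8))

# For a single qubit the QFT is exactly the Hadamard gate.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
assert np.allclose(qft_matrix(1), H)
</syntaxhighlight>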
Deutsch–Jozsa algorithm.
The Deutsch–Jozsa algorithm solves a black-box problem that requires exponentially many queries to the black box for any deterministic classical computer, but can be done with a single query by a quantum computer. However, when comparing bounded-error classical and quantum algorithms, there is no speedup, since a classical probabilistic algorithm can solve the problem with a constant number of queries with small probability of error. The algorithm determines whether a function "f" is either constant (0 on all inputs or 1 on all inputs) or balanced (returns 1 for half of the input domain and 0 for the other half).
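The circuit can be simulated classically on a full state vector, at exponential cost, so the sketch below illustrates the logic of the algorithm rather than any speedup. It assumes a phase-oracle formulation and two invented example functions: after the final Hadamards, the probability of measuring the all-zero string is 1 for a constant function and 0 for a balanced one.
<syntaxhighlight lang="python">
import numpy as np

def deutsch_jozsa_zero_probability(f, n):
    """Simulate the Deutsch-Jozsa circuit on n qubits (phase-oracle form)
    and return the probability of measuring |0...0> at the end."""
    N = 2 ** n
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = H
    for _ in range(n - 1):                       # n-fold Hadamard via Kronecker products
        Hn = np.kron(Hn, H)
    state = np.zeros(N)
    state[0] = 1.0                               # start in |0...0>
    state = Hn @ state                           # uniform superposition
    state = np.array([(-1) ** f(x) for x in range(N)]) * state   # phase oracle
    state = Hn @ state                           # final Hadamards
    return abs(state[0]) ** 2

n = 4
constant = lambda x: 0
balanced = lambda x: bin(x).count("1") % 2       # parity function is balanced
print(deutsch_jozsa_zero_probability(constant, n))   # ~1.0 -> constant
print(deutsch_jozsa_zero_probability(balanced, n))   # ~0.0 -> balanced
</syntaxhighlight>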
Bernstein–Vazirani algorithm.
The Bernstein–Vazirani algorithm is the first quantum algorithm that solves a problem more efficiently than the best known classical algorithm. It was designed to create an oracle separation between BQP and BPP.
Simon's algorithm.
Simon's algorithm solves a black-box problem exponentially faster than any classical algorithm, including bounded-error probabilistic algorithms. This algorithm, which achieves an exponential speedup over all classical algorithms that we consider efficient, was the motivation for Shor's algorithm for factoring.
Quantum phase estimation algorithm.
The quantum phase estimation algorithm is used to determine the eigenphase of an eigenvector of a unitary gate, given a quantum state proportional to the eigenvector and access to the gate. The algorithm is frequently used as a subroutine in other algorithms.
Shor's algorithm.
Shor's algorithm solves the discrete logarithm problem and the integer factorization problem in polynomial time, whereas the best known classical algorithms take super-polynomial time. These problems are not known to be in P or NP-complete. It is also one of the few quantum algorithms that solves a non–black-box problem in polynomial time, where the best known classical algorithms run in super-polynomial time.
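The quantum core of Shor's algorithm is the order-finding (period-finding) subroutine; the surrounding number theory is classical. The sketch below shows only that classical scaffolding, with a brute-force order finder standing in for the quantum step, so no speedup is implied; it is an illustration, not an implementation of the quantum part.
<syntaxhighlight lang="python">
from math import gcd

def order(a: int, N: int) -> int:
    """Smallest r > 0 with a^r = 1 (mod N); brute force stands in here
    for the quantum period-finding subroutine."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_part(N: int, a: int):
    if gcd(a, N) != 1:
        return gcd(a, N), N // gcd(a, N)          # the guess already shares a factor
    r = order(a, N)
    x = pow(a, r // 2, N)
    if r % 2 == 1 or x == N - 1:
        return None                               # unlucky choice of a; retry with another a
    return gcd(x - 1, N), gcd(x + 1, N)           # non-trivial factors of N

print(shor_classical_part(15, 7))   # (3, 5)
</syntaxhighlight>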
Hidden subgroup problem.
The abelian hidden subgroup problem is a generalization of many problems that can be solved by a quantum computer, such as Simon's problem, solving Pell's equation, testing the principal ideal of a ring R and factoring. There are efficient quantum algorithms known for the Abelian hidden subgroup problem. The more general hidden subgroup problem, where the group isn't necessarily abelian, is a generalization of the previously mentioned problems, as well as graph isomorphism and certain lattice problems. Efficient quantum algorithms are known for certain non-abelian groups. However, no efficient algorithms are known for the symmetric group, which would give an efficient algorithm for graph isomorphism and the dihedral group, which would solve certain lattice problems.
Estimating Gauss sums.
A Gauss sum is a type of exponential sum. The best known classical algorithm for estimating these sums takes exponential time. Since the discrete logarithm problem reduces to Gauss sum estimation, an efficient classical algorithm for estimating Gauss sums would imply an efficient classical algorithm for computing discrete logarithms, which is considered unlikely. However, quantum computers can estimate Gauss sums to polynomial precision in polynomial time.
Fourier fishing and Fourier checking.
Consider an oracle consisting of "n" random Boolean functions mapping "n"-bit strings to a Boolean value, with the goal of finding n "n"-bit strings "z"1..., "zn" such that for the Hadamard-Fourier transform, at least 3/4 of the strings satisfy
formula_0
and at least 1/4 satisfy
formula_1
This can be done in bounded-error quantum polynomial time (BQP).
Algorithms based on amplitude amplification.
Amplitude amplification is a technique that allows the amplification of a chosen subspace of a quantum state. Applications of amplitude amplification usually lead to quadratic speedups over the corresponding classical algorithms. It can be considered as a generalization of Grover's algorithm.
Grover's algorithm.
Grover's algorithm searches an unstructured database (or an unordered list) with N entries for a marked entry, using only formula_2 queries instead of the formula_3 queries required classically. Classically, formula_3 queries are required even allowing bounded-error probabilistic algorithms.
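A classical state-vector simulation of the Grover iteration (an oracle sign-flip followed by inversion about the mean) illustrates the quadratic scaling; the parameters below are invented, and the simulation itself of course takes classical time linear in N.
<syntaxhighlight lang="python">
import numpy as np

def grover_success_probability(n_qubits: int, marked: int) -> float:
    """Simulate Grover's algorithm on N = 2^n items with one marked entry
    and return the probability of measuring the marked index."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))                   # uniform superposition
    iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))   # about (pi/4) sqrt(N) queries
    for _ in range(iterations):
        state[marked] *= -1                              # oracle: flip the marked amplitude
        state = 2 * state.mean() - state                 # diffusion: inversion about the mean
    return float(abs(state[marked]) ** 2)

print(grover_success_probability(10, marked=123))        # close to 1 after only ~25 queries
</syntaxhighlight>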
Theorists have considered a hypothetical generalization of a standard quantum computer that could access the histories of the hidden variables in Bohmian mechanics. (Such a computer is completely hypothetical and would "not" be a standard quantum computer, or even possible under the standard theory of quantum mechanics.) Such a hypothetical computer could implement a search of an N-item database in at most formula_4 steps. This is slightly faster than the formula_2 steps taken by Grover's algorithm. However, neither search method would allow either model of quantum computer to solve NP-complete problems in polynomial time.
Quantum counting.
Quantum counting solves a generalization of the search problem. It solves the problem of counting the number of marked entries in an unordered list, instead of just detecting whether one exists. Specifically, it counts the number of marked entries in an formula_5-element list with an error of at most formula_6 by making only formula_7 queries, where formula_8 is the number of marked elements in the list. More precisely, the algorithm outputs an estimate formula_9 for formula_8, the number of marked entries, with accuracy formula_10.
Algorithms based on quantum walks.
A quantum walk is the quantum analogue of a classical random walk. A classical random walk can be described by a probability distribution over some states, while a quantum walk can be described by a quantum superposition over states. Quantum walks are known to give exponential speedups for some black-box problems. They also provide polynomial speedups for many problems. A framework for the creation of quantum walk algorithms exists and is a versatile tool.
Boson sampling problem.
The Boson Sampling Problem in an experimental configuration assumes an input of bosons (e.g., photons) of moderate number that are randomly scattered into a large number of output modes, constrained by a defined unitarity. When individual photons are used, the problem is isomorphic to a multi-photon quantum walk. The problem is then to produce a fair sample of the probability distribution of the output that depends on the input arrangement of bosons and the unitarity. Solving this problem with a classical computer algorithm requires computing the permanent of the unitary transform matrix, which may take a prohibitively long time or be outright impossible. In 2014, it was proposed that existing technology and standard probabilistic methods of generating single-photon states could be used as an input into a suitable quantum computable linear optical network and that sampling of the output probability distribution would be demonstrably superior using quantum algorithms. In 2015, investigation predicted the sampling problem had similar complexity for inputs other than Fock-state photons and identified a transition in computational complexity from classically simulable to just as hard as the Boson Sampling Problem, depending on the size of coherent amplitude inputs.
Element distinctness problem.
The element distinctness problem is the problem of determining whether all the elements of a list are distinct. Classically, formula_11 queries are required for a list of size formula_5; however, it can be solved in formula_12 queries on a quantum computer. The optimal algorithm was put forth by Andris Ambainis, and Yaoyun Shi first proved a tight lower bound when the size of the range is sufficiently large. Ambainis and Kutin independently (and via different proofs) extended that work to obtain the lower bound for all functions.
Triangle-finding problem.
The triangle-finding problem is the problem of determining whether a given graph contains a triangle (a clique of size 3). The best-known lower bound for quantum algorithms is formula_11, but the best algorithm known requires O("N"1.297) queries, an improvement over the previous best O("N"1.3) queries.
Formula evaluation.
A formula is a tree with a gate at each internal node and an input bit at each leaf node. The problem is to evaluate the formula, which is the output of the root node, given oracle access to the input.
A well studied formula is the balanced binary tree with only NAND gates. This type of formula requires formula_13 queries using randomness, where formula_14. With a quantum algorithm, however, it can be solved in formula_15 queries. No better quantum algorithm for this case was known until one was found for the unconventional Hamiltonian oracle model. The same result for the standard setting soon followed.
Fast quantum algorithms for more complicated formulas are also known.
Group commutativity.
The problem is to determine if a black-box group, given by "k" generators, is commutative. A black-box group is a group with an oracle function, which must be used to perform the group operations (multiplication, inversion, and comparison with identity). The interest in this context lies in the query complexity, which is the number of oracle calls needed to solve the problem. The deterministic and randomized query complexities are formula_16 and formula_17, respectively. Any quantum algorithm requires formula_18 queries, while the best known quantum algorithm uses formula_19 queries.
BQP-complete problems.
The complexity class BQP (bounded-error quantum polynomial time) is the set of decision problems solvable by a quantum computer in polynomial time with error probability of at most 1/3 for all instances. It is the quantum analogue to the classical complexity class BPP.
A problem is BQP-complete if it is in BQP and any problem in BQP can be reduced to it in polynomial time. Informally, the class of BQP-complete problems are those that are as hard as the hardest problems in BQP and are themselves efficiently solvable by a quantum computer (with bounded error).
Computing knot invariants.
Witten had shown that the Chern–Simons topological quantum field theory (TQFT) can be solved in terms of Jones polynomials. A quantum computer can simulate a TQFT, and thereby approximate the Jones polynomial, which, as far as we know, is hard to compute classically in the worst-case scenario.
Quantum simulation.
The idea that quantum computers might be more powerful than classical computers originated in Richard Feynman's observation that classical computers seem to require exponential time to simulate many-particle quantum systems, yet quantum many-body systems are able to "solve themselves." Since then, the idea that quantum computers can simulate quantum physical processes exponentially faster than classical computers has been greatly fleshed out and elaborated. Efficient (i.e., polynomial-time) quantum algorithms have been developed for simulating both Bosonic and Fermionic systems, as well as the simulation of chemical reactions beyond the capabilities of current classical supercomputers using only a few hundred qubits. Quantum computers can also efficiently simulate topological quantum field theories. In addition to its intrinsic interest, this result has led to efficient quantum algorithms for estimating quantum topological invariants such as Jones and HOMFLY polynomials, and the Turaev-Viro invariant of three-dimensional manifolds.
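A minimal sketch of the idea underlying digital quantum simulation is a first-order Lie–Trotter product approximation of the time-evolution operator. The example below uses an invented one-qubit Hamiltonian and SciPy matrix exponentials on a classical machine, so it illustrates the approximation, not a quantum speedup.
<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

# Pauli matrices; H = X + Z is a toy sum of non-commuting terms.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
t = 1.0

exact = expm(-1j * (X + Z) * t)

def trotter(steps: int) -> np.ndarray:
    """First-order Lie-Trotter approximation (e^{-iXt/n} e^{-iZt/n})^n."""
    step = expm(-1j * X * t / steps) @ expm(-1j * Z * t / steps)
    return np.linalg.matrix_power(step, steps)

for n in (1, 10, 100):
    err = np.linalg.norm(trotter(n) - exact)
    print(f"{n:4d} Trotter steps: error {err:.2e}")   # error shrinks roughly as 1/n
</syntaxhighlight>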
Solving linear systems of equations.
In 2009, Aram Harrow, Avinatan Hassidim, and Seth Lloyd formulated a quantum algorithm for solving linear systems. The algorithm estimates the result of a scalar measurement on the solution vector to a given linear system of equations.
Provided that the linear system is sparse and has a low condition number formula_20, and that the user is interested in the result of a scalar measurement on the solution vector (instead of the values of the solution vector itself), then the algorithm has a runtime of formula_21, where formula_5 is the number of variables in the linear system. This offers an exponential speedup over the fastest classical algorithm, which runs in formula_22 (or formula_23 for positive semidefinite matrices).
Hybrid quantum/classical algorithms.
Hybrid Quantum/Classical Algorithms combine quantum state preparation and measurement with classical optimization. These algorithms generally aim to determine the ground-state eigenvector and eigenvalue of a Hermitian operator.
QAOA.
The quantum approximate optimization algorithm takes inspiration from quantum annealing, performing a discretized approximation of quantum annealing using a quantum circuit. It can be used to solve problems in graph theory. The algorithm makes use of classical optimization of quantum operations to maximize an "objective function."
Variational quantum eigensolver.
The variational quantum eigensolver (VQE) algorithm applies classical optimization to minimize the energy expectation value of an ansatz state to find the ground state of a Hermitian operator, such as a molecule's Hamiltonian. It can also be extended to find excited energies of molecular Hamiltonians.
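As a toy, classically simulated sketch of the hybrid loop, consider an invented one-qubit Hamiltonian and a one-parameter ansatz; on real hardware the energy would be estimated from repeated measurements rather than from the exact state vector, but the structure of the optimization is the same.
<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize_scalar

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = Z + 0.5 * X                                    # toy Hermitian Hamiltonian

def ansatz(theta: float) -> np.ndarray:
    """|psi(theta)> = Ry(theta)|0>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

def energy(theta: float) -> float:
    psi = ansatz(theta)
    return float(np.real(psi.conj() @ H @ psi))    # expectation value <psi|H|psi>

result = minimize_scalar(energy, bounds=(0, 2 * np.pi), method="bounded")
print(result.fun)   # ~ -1.118, i.e. the exact ground energy -sqrt(1.25)
</syntaxhighlight>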
Contracted quantum eigensolver.
The contracted quantum eigensolver (CQE) algorithm minimizes the residual of a contraction (or projection) of the Schrödinger equation onto the space of two (or more) electrons to find the ground- or excited-state energy and two-electron reduced density matrix of a molecule. It is based on classical methods for solving energies and two-electron reduced density matrices directly from the anti-Hermitian contracted Schrödinger equation.
References.
|
[
{
"math_id": 0,
"text": "| \\tilde{f}(z_i)| \\geqslant 1"
},
{
"math_id": 1,
"text": "| \\tilde{f}(z_i) | \\geqslant 2."
},
{
"math_id": 2,
"text": "O(\\sqrt{N})"
},
{
"math_id": 3,
"text": "O({N})"
},
{
"math_id": 4,
"text": "O(\\sqrt[3]{N})"
},
{
"math_id": 5,
"text": "N"
},
{
"math_id": 6,
"text": "\\varepsilon"
},
{
"math_id": 7,
"text": "\\Theta\\left(\\varepsilon^{-1} \\sqrt{N/k}\\right)"
},
{
"math_id": 8,
"text": "k"
},
{
"math_id": 9,
"text": "k'"
},
{
"math_id": 10,
"text": "|k-k'| \\leq \\varepsilon k"
},
{
"math_id": 11,
"text": "\\Omega(N)"
},
{
"math_id": 12,
"text": "\\Theta(N^{2/3})"
},
{
"math_id": 13,
"text": "\\Theta(N^c)"
},
{
"math_id": 14,
"text": "c = \\log_2(1+\\sqrt{33})/4 \\approx 0.754"
},
{
"math_id": 15,
"text": "\\Theta(N^{1/2})"
},
{
"math_id": 16,
"text": "\\Theta(k^2)"
},
{
"math_id": 17,
"text": "\\Theta(k)"
},
{
"math_id": 18,
"text": "\\Omega(k^{2/3})"
},
{
"math_id": 19,
"text": "O(k^{2/3} \\log k)"
},
{
"math_id": 20,
"text": "\\kappa"
},
{
"math_id": 21,
"text": "O(\\log(N)\\kappa^2)"
},
{
"math_id": 22,
"text": "O(N\\kappa)"
},
{
"math_id": 23,
"text": "O(N\\sqrt{\\kappa})"
}
] |
https://en.wikipedia.org/wiki?curid=632489
|
63255689
|
2020 Seanad election
|
Election to the 26th Seanad
An indirect election to the 26th Seanad took place after the 2020 Irish general election, with postal ballots due on 30 and 31 March. Seanad Éireann is the upper house of the Oireachtas, with Dáil Éireann as the lower house of representatives. The election was held for 49 of the 60 seats in the Seanad: 43 are elected for five vocational panels, and six are elected in two university constituencies. The remaining 11 senators are nominated by the newly elected Taoiseach when the Dáil reconvenes after the general election.
Background.
The Constitution of Ireland provides that a Seanad election must take place within 90 days of the dissolution of Dáil Éireann. As the Dáil was dissolved on 14 January, the latest day the election could take place was 13 April 2020. On 21 January 2020, the Minister for Housing, Planning and Local Government signed an order for the Seanad elections, setting 30 March as the deadline for ballots for the vocational panels and 31 March as the deadline for ballots in the university constituencies.
On 8 February 2020, the 33rd Dáil was elected in the general election. The Fine Gael-led government, led by Taoiseach Leo Varadkar, was defeated, with Sinn Féin taking the most first-preference votes and Fianna Fáil taking the most seats. The Sinn Féin result came as a surprise and an upset, as it ended the two-party dominance of Fine Gael and Fianna Fáil that had existed for many decades, and polls did not show Sinn Féin winning until the election was called. Sinn Féin won 37 seats, Fianna Fáil won 38, and Fine Gael won 35.
Electoral system.
Of the forty-nine elected seats, three are elected from the university constituency of the National University and three are elected from the university constituency of Dublin University (Trinity College Dublin).
Forty-three are elected by an electorate of elected politicians, consisting of members of the 33rd Dáil, members of the 25th Seanad and city and county councillors, who each have five ballots for vocational panels. The Seanad Returning Officer maintains a list of qualified nominating bodies for each panel. Candidates may be nominated by nominating bodies (outside sub-panel) or by members of the Oireachtas (inside sub-panel). In each vocational panel, there is a minimum number who must be elected from either the inside or the outside sub-panel. If the number of candidates nominated for each sub-panel does not exceed by two the maximum number which may be elected from that sub-panel, the Taoiseach shall nominate candidates to fill the deficiency.
Electors for the Panels elect:
All votes are cast by postal ballot, and are counted using the single transferable vote. Under this system, voters can rank candidates in order of their preference, 1 as their first preference, 2 for second preference, and so on. Ballots are initially given a value of 1,000 to allow calculation of quotas where all ballots are distributed in the case of a surplus, rather than taking a representative sample as is done in Dáil elections. The quota for election is given as formula_0.
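As a sketch of the count arithmetic only, with invented figures rather than the actual 2020 returns, the quota for a panel could be computed as follows, with each ballot carrying a value of 1,000.
<syntaxhighlight lang="python">
def seanad_quota(valid_ballots: int, seats: int) -> int:
    """Droop-style quota used in the panel counts, with each ballot
    valued at 1,000 so that surplus transfers can be calculated exactly."""
    total_value = valid_ballots * 1000
    return total_value // (seats + 1) + 1

# Hypothetical example: 1,169 valid postal ballots for a 7-seat panel.
print(seanad_quota(1169, 7))   # 146126
</syntaxhighlight>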
The 11 nominated members can only be appointed by the Taoiseach who is appointed next after the reassembly of Dáil Éireann. They are usually appointed after the Seanad election, but if a Taoiseach has not been appointed by that stage, they will not be appointed until one is.
Election process.
Impact of coronavirus.
Because of the coronavirus outbreak, changes to the usual arrangements for the Vocational Panel elections were made to reduce the risk of transmission. The clerk and deputy clerk of the Dáil and Seanad refused to witness Oireachtas members' ballots, advising them to use the local government chief executive or Garda (police) superintendent for this purpose. The Seanad clerk, as returning officer, also requested that counting agents not be present at the count centre in Dublin Castle. Similar appeals were made regarding the NUI count in the RDS and the Dublin University count in the university's Examination Hall.
Results.
Dublin University.
Administrative Panel.
Agricultural Panel.
Cultural and Educational Panel.
Industrial and Commercial Panel.
Labour Panel.
References.
Footnotes.
Citations.
|
[
{
"math_id": 0,
"text": "\\left( \\frac{\\text{total valid poll}}{ \\text{seats}+1 } \\right) + 1"
}
] |
https://en.wikipedia.org/wiki?curid=63255689
|
63262
|
Financial economics
|
Academic discipline concerned with the exchange of money
Financial economics is the branch of economics characterized by a "concentration on monetary activities", in which "money of one type or another is likely to appear on "both sides" of a trade".
Its concern is thus the interrelation of financial variables, such as share prices, interest rates and exchange rates, as opposed to those concerning the real economy.
It has two main areas of focus: asset pricing and corporate finance; the first being the perspective of providers of capital, i.e. investors, and the second of users of capital.
It thus provides the theoretical underpinning for much of finance.
The subject is concerned with "the allocation and deployment of economic resources, both spatially and across time, in an uncertain environment". It therefore centers on decision making under uncertainty in the context of the financial markets, and the resultant economic and financial models and principles, and is concerned with deriving testable or policy implications from acceptable assumptions.
It thus also includes a formal study of the financial markets themselves, especially market microstructure and market regulation.
It is built on the foundations of microeconomics and decision theory.
Financial econometrics is the branch of financial economics that uses econometric techniques to parameterise the relationships identified.
Mathematical finance is related in that it will derive and extend the mathematical or numerical models suggested by financial economics.
Whereas financial economics has a primarily microeconomic focus, monetary economics is primarily macroeconomic in nature.
Underlying economics.
Financial economics studies how rational investors would apply decision theory to investment management. The subject is thus built on the foundations of microeconomics and derives several key results for the application of decision making under uncertainty to the financial markets. The underlying economic logic yields the fundamental theorem of asset pricing, which gives the conditions for arbitrage-free asset pricing.
The various "fundamental" valuation formulae result directly.
Present value, expectation and utility.
Underlying all of financial economics are the concepts of present value and expectation.
Calculating their present value formula_6 allows the decision maker to aggregate the cashflows (or other returns) to be produced by the asset in the future to a single value at the date in question, and to thus more readily compare two opportunities; this concept is then the starting point for financial decision making.
An immediate extension is to combine probabilities with present value, leading to the expected value criterion which sets asset value as a function of the sizes of the expected payouts and the probabilities of their occurrence, formula_7 and formula_3 respectively.
This decision method, however, fails to consider risk aversion ("as any student of finance knows"). In other words, since individuals receive greater utility from an extra dollar when they are poor and less utility when comparatively rich, the approach is to therefore "adjust" the weight assigned to the various outcomes - i.e. "states" - correspondingly, formula_4. See indifference price. (Some investors may in fact be risk seeking as opposed to risk averse, but the same logic would apply).
Choice under uncertainty here may then be characterized as the maximization of expected utility. More formally, the resulting expected utility hypothesis states that, if certain axioms are satisfied, the subjective value associated with a gamble by an individual is "that individual"'s statistical expectation of the valuations of the outcomes of that gamble.
The impetus for these ideas arises from various inconsistencies observed under the expected value framework, such as the St. Petersburg paradox and the Ellsberg paradox.
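A small numerical sketch of the St. Petersburg gamble (payoff 2^k with probability 2^-k) makes the contrast concrete: the expected payoff diverges, while the expected utility under a logarithmic utility function, one admissible choice used here purely for illustration, is finite.
<syntaxhighlight lang="python">
import math

terms = 60
expected_value = sum(2**k * 2**-k for k in range(1, terms + 1))
expected_log_utility = sum(math.log(2**k) * 2**-k for k in range(1, terms + 1))

print(expected_value)        # 60.0 -- grows linearly with the number of terms, i.e. diverges
print(expected_log_utility)  # ~1.386 = 2 ln 2, a finite, risk-adjusted valuation of the gamble
</syntaxhighlight>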
Arbitrage-free pricing and equilibrium.
The concepts of arbitrage-free, "rational", pricing and equilibrium are then coupled
with the above to derive various of the "classical" (or "neo-classical") financial economics models.
Rational pricing is the assumption that asset prices (and hence asset pricing models) will reflect the arbitrage-free price of the asset, as any deviation from this price will be "arbitraged away". This assumption is useful in pricing fixed income securities, particularly bonds, and is fundamental to the pricing of derivative instruments.
Economic equilibrium is, in general, a state in which economic forces such as supply and demand are balanced, and, in the absence of external influences these equilibrium values of economic variables will not change. General equilibrium deals with the behavior of supply, demand, and prices in a whole economy with several or many interacting markets, by seeking to prove that a set of prices exists that will result in an overall equilibrium. (This is in contrast to partial equilibrium, which only analyzes single markets.)
The two concepts are linked as follows: where market prices do not allow for profitable arbitrage, i.e. they comprise an arbitrage-free market, then these prices are also said to constitute an "arbitrage equilibrium". Intuitively, this may be seen by considering that where an arbitrage opportunity does exist, then prices can be expected to change, and are therefore not in equilibrium. An arbitrage equilibrium is thus a precondition for a general economic equilibrium.
The immediate, and formal, extension of this idea, the fundamental theorem of asset pricing, shows that where markets are as described – and are additionally (implicitly and correspondingly) complete – one may then make financial decisions by constructing a risk neutral probability measure corresponding to the market.
"Complete" here means that there is a price for every asset in every possible state of the world, formula_1, and that the complete set of possible bets on future states-of-the-world can therefore be constructed with existing assets (assuming no friction): essentially solving simultaneously for "n" (risk-neutral) probabilities, formula_8, given "n" prices. For a simplified example see , where the economy has only two possible states – up and down – and where formula_9 and formula_10 (=formula_11) are the two corresponding probabilities, and in turn, the derived distribution, or "measure".
The formal derivation will proceed by arbitrage arguments.
The analysis here is often undertaken assuming a "representative agent",
essentially treating all market-participants, "agents", as identical (or, at least, that they act in such a way that the sum of their choices is equivalent to the decision of one individual) with the effect that the problems are then mathematically tractable.
With this measure in place, the expected, i.e. required, return of any security (or portfolio) will then equal the riskless return, plus an "adjustment for risk", i.e. a security-specific risk premium, compensating for the extent to which its cashflows are unpredictable. All pricing models are then essentially variants of this, given specific assumptions or conditions. This approach is consistent with the above, but with the expectation based on "the market" (i.e. arbitrage-free, and, per the theorem, therefore in equilibrium) as opposed to individual preferences.
Thus, continuing the example, in pricing a derivative instrument its forecasted cashflows in the up- and down-states, formula_12 and formula_13, are multiplied through by formula_9 and formula_10, and are then discounted at the risk-free interest rate; per the second equation above. In pricing a "fundamental", underlying, instrument (in equilibrium), on the other hand, a risk-appropriate premium over risk-free is required in the discounting, essentially employing the first equation with formula_14 and formula_2 combined. In general, this premium may be derived by the CAPM (or extensions) as will be seen under .
The difference is explained as follows: By construction, the value of the derivative will (must) grow at the risk free rate, and, by arbitrage arguments, its value must then be discounted correspondingly; in the case of an option, this is achieved by "manufacturing" the instrument as a combination of the underlying and a risk free "bond"; see (and below). Where the underlying is itself being priced, such "manufacturing" is of course not possible – the instrument being "fundamental", i.e. as opposed to "derivative" – and a premium is then required for risk.
(Correspondingly, mathematical finance separates into:
risk and portfolio management (generally) use physical (or actual or actuarial) probability, denoted by "P";
while derivatives pricing uses risk-neutral probability (or arbitrage-pricing probability), denoted by "Q".
In specific applications the lower case is used, as in the above equations.)
State prices.
With the above relationship established, the further specialized Arrow–Debreu model may be derived.
This result suggests that, under certain economic conditions, there must be a set of prices such that aggregate supplies will equal aggregate demands for every commodity in the economy.
The Arrow–Debreu model applies to economies with maximally complete markets, in which there exists a market for every time period and forward prices for every commodity at all time periods.
A direct extension, then, is the concept of a state price security (also called an Arrow–Debreu security), a contract that agrees to pay one unit of a numeraire (a currency or a commodity) if a particular state occurs ("up" and "down" in the simplified example above) at a particular time in the future and pays zero numeraire in all the other states. The price of this security is the "state price" formula_15 of this particular state of the world; also referred to as a "Risk Neutral Density".
In the above example, the state prices, formula_16 and formula_17, would equate to the present values of formula_18 and formula_19: i.e. what one would pay today, respectively, for the up- and down-state securities; the state price vector is the vector of state prices for all states. Applied to derivative valuation, the price today would simply be [formula_16×formula_12 + formula_17×formula_13]: the fourth formula (see above regarding the absence of a risk premium here). For a continuous random variable indicating a continuum of possible states, the value is found by integrating over the state price "density". These concepts are extended to martingale pricing and the related risk-neutral measure.
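Continuing the same invented two-state figures, the state prices are simply the discounted risk-neutral probabilities, and any claim is then priced as a state-price-weighted sum of its payoffs.
<syntaxhighlight lang="python">
S0, S_up, S_down, r = 100.0, 120.0, 90.0, 0.05
q_up = ((1 + r) * S0 - S_down) / (S_up - S_down)

pi_up = q_up / (1 + r)          # price today of an "up" Arrow-Debreu security paying 1
pi_down = (1 - q_up) / (1 + r)  # price today of a "down" Arrow-Debreu security paying 1

# Consistency checks: the state prices recover the underlying and the risk-free bond.
assert abs(pi_up * S_up + pi_down * S_down - S0) < 1e-12
assert abs(pi_up + pi_down - 1 / (1 + r)) < 1e-12

call_price = pi_up * max(S_up - 100, 0) + pi_down * max(S_down - 100, 0)
print(pi_up, pi_down, call_price)   # ~0.476, ~0.476, ~9.52
</syntaxhighlight>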
State prices find immediate application as a conceptual tool ("contingent claim analysis"); but can also be applied to valuation problems. Given the pricing mechanism described, one can decompose the derivative value – true in fact for "every security" – as a linear combination of its state-prices; i.e. back-solve for the state-prices corresponding to observed derivative prices.
These recovered state-prices can then be used for valuation of other instruments with exposure to the underlyer, or for other decision making relating to the underlyer itself.
Using the related stochastic discount factor - also called the pricing kernel - the asset price is computed by "discounting" the future cash flow by the stochastic factor formula_20, and then taking the expectation;
the third equation above.
Essentially, this factor divides expected utility at the relevant future period - a function of the possible asset values realized under each state - by the utility due to today's wealth, and is then also referred to as "the intertemporal marginal rate of substitution".
Resultant models.
Applying the above economic concepts, we may then derive various economic- and financial models and principles. As above, the two usual areas of focus are Asset Pricing and Corporate Finance, the first being the perspective of providers of capital, the second of users of capital. Here, and for (almost) all other financial economics models, the questions addressed are typically framed in terms of "time, uncertainty, options, and information", as will be seen below.
Applying this framework, with the above concepts, leads to the required models. This derivation begins with the assumption of "no uncertainty" and is then expanded to incorporate the other considerations. (This division sometimes denoted "deterministic" and "random", or "stochastic".)
Certainty.
The starting point here is "Investment under certainty", and usually framed in the context of a corporation.
The Fisher separation theorem, asserts that the objective of the corporation will be the maximization of its present value, regardless of the preferences of its shareholders.
Related is the Modigliani–Miller theorem of 1958, which shows that, under certain conditions, the value of a firm is unaffected by how that firm is financed, and depends neither on its dividend policy nor its decision to raise capital by issuing stock or selling debt. The proof here proceeds using arbitrage arguments, and acts as a benchmark for evaluating the effects of factors outside the model that do affect value.
The mechanism for determining (corporate) value is provided by
John Burr Williams' "The Theory of Investment Value", which proposes that the value of an asset should be calculated using "evaluation by the rule of present worth". Thus, for a common stock, the "intrinsic", long-term worth is the present value of its future net cashflows, in the form of dividends. What remains to be determined is the appropriate discount rate. Later developments show that, "rationally", i.e. in the formal sense, the appropriate discount rate here will (should) depend on the asset's riskiness relative to the overall market, as opposed to its owners' preferences; see below. Net present value (NPV) is the direct extension of these ideas typically applied to Corporate Finance decisioning. For other results, as well as specific models developed here, see the list of "Equity valuation" topics under {{section link|Outline of finance|Discounted cash flow valuation}}.
Bond valuation, in that cashflows (coupons and return of principal, or "Face value") are deterministic, may proceed in the same fashion. An immediate extension, Arbitrage-free bond pricing, discounts each cashflow at the market derived rate – i.e. at each coupon's corresponding zero rate, and of equivalent credit worthiness – as opposed to an overall rate.
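A sketch of the distinction, with invented cashflows and zero rates: each cashflow is discounted at its own zero rate, rather than at a single overall yield.
<syntaxhighlight lang="python">
# A 3-year, 5% annual-coupon bond with face value 100 (illustrative numbers only).
cashflows = [(1, 5.0), (2, 5.0), (3, 105.0)]          # (year, cashflow)
zero_rates = {1: 0.020, 2: 0.025, 3: 0.030}           # market-derived zero-coupon rates

# Arbitrage-free price: each cashflow discounted at its corresponding zero rate.
price = sum(cf / (1 + zero_rates[t]) ** t for t, cf in cashflows)
print(round(price, 4))   # ~105.75

# Contrast: discounting everything at a single flat 3% yield gives a different number.
flat = sum(cf / 1.03 ** t for t, cf in cashflows)
print(round(flat, 4))    # ~105.66
</syntaxhighlight>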
In many treatments bond valuation precedes equity valuation, under which cashflows (dividends) are not "known" "per se". Williams and onward allow for forecasting as to these – based on historic ratios or published dividend policy – and cashflows are then treated as essentially deterministic; see below under {{slink|#Corporate finance theory}}.
For both stocks and bonds, "under certainty, with the focus on cash flows from securities over time," valuation based on a term structure of interest rates is in fact consistent with arbitrage-free pricing.
Indeed, a corollary of the above is that "the law of one price implies the existence of a discount factor";
correspondingly, as formulated, formula_5.
Whereas these "certainty" results are all commonly employed under corporate finance, uncertainty is the focus of "asset pricing models" as follows. Fisher's formulation of the theory here - developing an intertemporal equilibrium model - underpins also the below applications to uncertainty;
see for the development.
Uncertainty.
For "choice under uncertainty" the twin assumptions of rationality and market efficiency, as more closely defined, lead to modern portfolio theory (MPT) with its capital asset pricing model (CAPM) – an "equilibrium-based" result – and to the Black–Scholes–Merton theory (BSM; often, simply Black–Scholes) for option pricing – an "arbitrage-free" result. As above, the (intuitive) link between these, is that the latter derivative prices are calculated such that they are arbitrage-free with respect to the more fundamental, equilibrium determined, securities prices; see {{slink|Asset pricing|Interrelationship}}.
Briefly, and intuitively – and consistent with {{slink|#Arbitrage-free pricing and equilibrium}} above – the relationship between rationality and efficiency is as follows.
Given the ability to profit from private information, self-interested traders are motivated to acquire and act on their private information. In doing so, traders contribute to more and more "correct", i.e. "efficient", prices: the efficient-market hypothesis, or EMH. Thus, if prices of financial assets are (broadly) efficient, then deviations from these (equilibrium) values could not last for long. (See earnings response coefficient.)
The EMH (implicitly) assumes that average expectations constitute an "optimal forecast", i.e. prices using all available information are identical to the "best guess of the future": the assumption of rational expectations.
The EMH does allow that when faced with new information, some investors may overreact and some may underreact,
but what is required, however, is that investors' reactions follow a normal distribution – so that the net effect on market prices cannot be reliably exploited to make an abnormal profit.
In the competitive limit, then, market prices will reflect all available information and prices can only move in response to news: the random walk hypothesis.
This news, of course, could be "good" or "bad", minor or, less common, major; and these moves are then, correspondingly, normally distributed; with the price therefore following a log-normal distribution.
Under these conditions, investors can then be assumed to act rationally: their investment decision must be calculated or a loss is sure to follow; correspondingly, where an arbitrage opportunity presents itself, then arbitrageurs will exploit it, reinforcing this equilibrium.
Here, as under the certainty-case above, the specific assumption as to pricing is that prices are calculated as the present value of expected future dividends,
as based on currently available information.
What is required though, is a theory for determining the appropriate discount rate, i.e. "required return", given this uncertainty: this is provided by the MPT and its CAPM. Relatedly, rationality – in the sense of arbitrage-exploitation – gives rise to Black–Scholes; option values here ultimately consistent with the CAPM.
In general, then, while portfolio theory studies how investors should balance risk and return when investing in many assets or securities, the CAPM is more focused, describing how, in equilibrium, markets set the prices of assets in relation to how risky they are.
This result will be independent of the investor's level of risk aversion and assumed utility function, thus providing a readily determined discount rate for corporate finance decision makers as above, and for other investors.
The argument proceeds as follows:
If one can construct an efficient frontier – i.e. each combination of assets offering the best possible expected level of return for its level of risk, see diagram – then mean-variance efficient portfolios can be formed simply as a combination of holdings of the risk-free asset and the "market portfolio" (the Mutual fund separation theorem), with the combinations here plotting as the capital market line, or CML.
Then, given this CML, the required return on a risky security will be independent of the investor's utility function, and solely determined by its covariance ("beta") with aggregate, i.e. market, risk.
This is because investors here can then maximize utility through leverage as opposed to pricing; see Separation property (finance), {{section link|Markowitz model|Choosing the best portfolio}} and CML diagram aside.
As can be seen in the formula aside, this result is consistent with the preceding, equaling the riskless return plus an adjustment for risk.
A more modern, direct, derivation is as described at the bottom of this section; which can be generalized to derive other equilibrium-pricing models.
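A minimal sketch of the resulting relation, using simulated (invented) return data: beta is estimated as the covariance of the asset with the market divided by the market variance, and the required return then follows from the security market line.
<syntaxhighlight lang="python">
import numpy as np

# Hypothetical monthly excess-return data (illustrative only).
rng = np.random.default_rng(0)
market = rng.normal(0.006, 0.04, size=120)                # market excess returns
asset = 1.3 * market + rng.normal(0.0, 0.02, size=120)    # asset with "true" beta of 1.3

beta = np.cov(asset, market)[0, 1] / np.var(market, ddof=1)   # covariance with market risk
risk_free, market_premium = 0.02, 0.06                        # assumed annual figures
required_return = risk_free + beta * market_premium           # the CAPM / security market line

print(round(beta, 2), round(required_return, 4))              # beta ~1.3, required return ~0.10
</syntaxhighlight>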
Black–Scholes provides a mathematical model of a financial market containing derivative instruments, and the resultant formula for the price of European-styled options.
The model is expressed as the Black–Scholes equation, a partial differential equation describing the changing price of the option over time; it is derived assuming log-normal, geometric Brownian motion (see Brownian model of financial markets).
The key financial insight behind the model is that one can perfectly hedge the option by buying and selling the underlying asset in just the right way and consequently "eliminate risk", absenting the risk adjustment from the pricing (formula_21, the value, or price, of the option, grows at formula_2, the risk-free rate).
This hedge, in turn, implies that there is only one right price – in an arbitrage-free sense – for the option. And this price is returned by the Black–Scholes option pricing formula. (The formula, and hence the price, is consistent with the equation, as the formula is the solution to the equation.)
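For concreteness, a sketch of the resulting closed-form call price using only the Python standard library (inputs invented); note that the expected return of the underlying does not appear among the inputs.
<syntaxhighlight lang="python">
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call on a non-dividend-paying stock."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

print(round(black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.20), 4))  # ~10.45
</syntaxhighlight>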
Since the formula is without reference to the share's expected return, Black–Scholes inheres risk neutrality; intuitively consistent with the "elimination of risk" here, and mathematically consistent with {{slink|#Arbitrage-free pricing and equilibrium}} above. Relatedly, therefore, the pricing formula may also be derived directly via risk neutral expectation.
Itô's lemma provides the underlying mathematics, and, with Itô calculus more generally, remains fundamental in quantitative finance.
Robert Merton promoted continuous stochastic calculus and continuous-time processes from 1969.
As implied by the Fundamental Theorem, it can be shown that the two models are consistent; then, as is to be expected, "classical" financial economics is thus unified.
Here, the Black Scholes equation can alternatively be derived from the CAPM, and the price obtained from the Black–Scholes model is thus consistent with the assumptions of the CAPM.
The Black–Scholes theory, although built on Arbitrage-free pricing, is therefore consistent with the equilibrium based capital asset pricing.
Both models, in turn, are ultimately consistent with the Arrow–Debreu theory, and can be derived via state-pricing – essentially, by expanding the fundamental result above – further explaining, and if required demonstrating, this unity.
Here, the CAPM is derived by linking formula_14, risk aversion, to overall market return, and setting the return on security formula_0 as formula_23; see {{section link|Stochastic discount factor|Properties}}.
The Black-Scholes formula is found, in the limit, by attaching a binomial probability to each of numerous possible spot-prices (i.e. states) and then rearranging for the terms corresponding to formula_24 and formula_22, per the boxed description; see {{section link|Binomial options pricing model|Relationship with Black–Scholes}}.
Extensions.
More recent work further generalizes and extends these models. As regards asset pricing, developments in equilibrium-based pricing are discussed under "Portfolio theory" below, while "Derivative pricing" relates to risk-neutral, i.e. arbitrage-free, pricing. As regards the use of capital, "Corporate finance theory" relates, mainly, to the application of these models.
Portfolio theory.
The majority of developments here relate to required return, i.e. pricing, extending the basic CAPM. Multi-factor models such as the Fama–French three-factor model and the Carhart four-factor model, propose factors other than market return as relevant in pricing. The intertemporal CAPM and consumption-based CAPM similarly extend the model. With intertemporal portfolio choice, the investor now repeatedly optimizes her portfolio; while the inclusion of consumption (in the economic sense) then incorporates all sources of wealth, and not just market-based investments, into the investor's calculation of required return.
Whereas the above extend the CAPM, the single-index model is a simpler model. It assumes, only, a correlation between security and market returns, without (numerous) other economic assumptions. It is useful in that it simplifies the estimation of correlation between securities, significantly reducing the inputs for building the correlation matrix required for portfolio optimization. The arbitrage pricing theory (APT) similarly differs as regards its assumptions. APT "gives up the notion that there is one right portfolio for everyone in the world, and ...replaces it with an explanatory model of what drives asset returns." It returns the required (expected) return of a financial asset as a linear function of various macro-economic factors, and assumes that arbitrage should bring incorrectly priced assets back into line. The single-index model was developed by William Sharpe in 1963.
APT was developed by Stephen Ross in 1976.
The linear factor model structure of the APT is used as the basis for many of the commercial risk systems employed by asset managers.
As regards portfolio optimization, the Black–Litterman model
departs from the original Markowitz model – i.e. of constructing portfolios via an efficient frontier. Black–Litterman instead starts with an equilibrium assumption, and is then modified to take into account the 'views' (i.e., the specific opinions about asset returns) of the investor in question to arrive at a bespoke asset allocation. Where factors additional to volatility are considered (kurtosis, skew...) then multiple-criteria decision analysis can be applied; here deriving a Pareto efficient portfolio. The universal portfolio algorithm applies machine learning to asset selection, learning adaptively from historical data. Behavioral portfolio theory recognizes that investors have varied aims and create an investment portfolio that meets a broad range of goals. Copulas have lately been applied here; recently this is the case also for genetic algorithms and Machine learning, more generally.
(Tail) risk parity focuses on allocation of risk, rather than allocation of capital.
See {{section link|Portfolio optimization|Improving portfolio optimization}} for other techniques and objectives, and {{slink|Financial risk management|Investment management}} for discussion.
Derivative pricing.
In pricing derivatives, the binomial options pricing model provides a discretized version of Black–Scholes, useful for the valuation of American styled options. Discretized models of this type are built – at least implicitly – using state-prices (as above); relatedly, a large number of researchers have used options to extract state-prices for a variety of other applications in financial economics. For path dependent derivatives, Monte Carlo methods for option pricing are employed; here the modelling is in continuous time, but similarly uses risk neutral expected value. Various other numeric techniques have also been developed. The theoretical framework too has been extended such that martingale pricing is now the standard approach.
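A sketch of the binomial (Cox–Ross–Rubinstein-style) recursion for a European call, with invented inputs; as the number of steps grows, the value approaches the corresponding Black–Scholes price.
<syntaxhighlight lang="python">
from math import exp, sqrt

def crr_european_call(S, K, T, r, sigma, steps):
    """Cox-Ross-Rubinstein binomial price of a European call."""
    dt = T / steps
    u = exp(sigma * sqrt(dt))            # up factor
    d = 1 / u                            # down factor
    q = (exp(r * dt) - d) / (u - d)      # risk-neutral probability of an up move
    disc = exp(-r * dt)
    # Terminal payoffs, then backward induction under the risk-neutral measure.
    values = [max(S * u**j * d**(steps - j) - K, 0.0) for j in range(steps + 1)]
    for _ in range(steps):
        values = [disc * (q * values[j + 1] + (1 - q) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

for n in (10, 100, 1000):
    print(n, round(crr_european_call(100, 100, 1.0, 0.05, 0.20, n), 4))
# converges towards the ~10.45 Black-Scholes value as n grows
</syntaxhighlight>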
The binomial model was first proposed by William Sharpe in the 1978 edition of "Investments" ({{ISBN|013504605X}}),
and in 1979 formalized by Cox, Ross and Rubinstein
and by Rendleman and Bartter.
Finite difference methods for option pricing were due to Eduardo Schwartz in 1977.
Monte Carlo methods for option pricing were originated by Phelim Boyle in 1977;
In 1996, methods were developed for American
and Asian options.
Drawing on these techniques, models for various other underlyings and applications have also been developed, all based on the same logic (using "contingent claim analysis"). Real options valuation allows that option holders can influence the option's underlying; models for employee stock option valuation explicitly assume non-rationality on the part of option holders; Credit derivatives allow that payment obligations or delivery requirements might not be honored. Exotic derivatives are now routinely valued. Multi-asset underlyers are handled via simulation or copula based analysis.
Similarly, the various short-rate models allow for an extension of these techniques to fixed income- and interest rate derivatives. (The Vasicek and CIR models are equilibrium-based, while Ho–Lee and subsequent models are based on arbitrage-free pricing.) The more general HJM Framework describes the dynamics of the full forward-rate curve – as opposed to working with short rates – and is then more widely applied. The valuation of the underlying instrument – additional to its derivatives – is relatedly extended, particularly for hybrid securities, where credit risk is combined with uncertainty re future rates; see {{section link|Bond valuation|Stochastic calculus approach}} and {{section link|Lattice model (finance)|Hybrid securities}}.
Oldrich Vasicek developed his pioneering short-rate model in 1977.
The HJM framework originates from the work of David Heath, Robert A. Jarrow, and Andrew Morton in 1987.
Following the Crash of 1987, equity options traded in American markets began to exhibit what is known as a "volatility smile"; that is, for a given expiration, options whose strike price differs substantially from the underlying asset's price command higher prices, and thus implied volatilities, than what is suggested by BSM. (The pattern differs across various markets.) Modelling the volatility smile is an active area of research, and developments here – as well as implications re the standard theory – are discussed in the next section.
After the financial crisis of 2007–2008, a further development: as outlined, (over the counter) derivative pricing had relied on the BSM risk-neutral pricing framework, under the assumptions of funding at the risk-free rate and the ability to perfectly replicate cashflows so as to fully hedge. This, in turn, is built on the assumption of a credit-risk-free environment – called into question during the crisis.
Addressing this, therefore, issues such as counterparty credit risk, funding costs and costs of capital are now additionally considered when pricing, and a credit valuation adjustment, or CVA – and potentially other "valuation adjustments", collectively xVA – is generally added to the risk-neutral derivative value.
The standard economic arguments can be extended to incorporate these various adjustments.
A related, and perhaps more fundamental change, is that discounting is now on the Overnight Index Swap (OIS) curve, as opposed to LIBOR as used previously. This is because post-crisis, the overnight rate is considered a better proxy for the "risk-free rate". (Also, practically, the interest paid on cash collateral is usually the overnight rate; OIS discounting is then, sometimes, referred to as "CSA discounting".) Swap pricing – and, therefore, yield curve construction – is further modified: previously, swaps were valued off a single "self discounting" interest rate curve; whereas post crisis, to accommodate OIS discounting, valuation is now under a "multi-curve framework" where "forecast curves" are constructed for each floating-leg LIBOR tenor, with discounting on the "common" OIS curve.
Corporate finance theory.
Mirroring the above developments, asset-valuation and decisioning no longer need assume "certainty".
Monte Carlo methods in finance allow financial analysts to construct "stochastic" or probabilistic corporate finance models, as opposed to the traditional static and deterministic models; see {{section link|Corporate finance|Quantifying uncertainty}}.
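A toy sketch of such a stochastic model: the forecast cashflows are given a probability distribution, and the resulting NPV distribution, rather than a single point estimate, is summarized (all figures invented).
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(42)
initial_outlay = 1000.0
forecast = np.array([300.0, 350.0, 400.0, 400.0])   # base-case cashflows, years 1-4
discount_rate = 0.10
n_scenarios = 100_000

# Each year's cashflow is modelled as normally distributed around its forecast (20% s.d.).
scenarios = rng.normal(forecast, 0.20 * forecast, size=(n_scenarios, forecast.size))
discount = (1 + discount_rate) ** -np.arange(1, forecast.size + 1)
npv = scenarios @ discount - initial_outlay

print(round(npv.mean(), 1))                 # close to the deterministic NPV (~136)
print(round(np.percentile(npv, 5), 1))      # the 5th-percentile ("downside") NPV
print(round((npv < 0).mean(), 3))           # simulated probability the project destroys value
</syntaxhighlight>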
Relatedly, Real Options theory allows for owner – i.e. managerial – actions that impact underlying value: by incorporating option pricing logic, these actions are then applied to a distribution of future outcomes, changing with time, which then determine the "project's" valuation today.
More traditionally, decision trees – which are complementary – have been used to evaluate projects, by incorporating in the valuation (all) possible events (or states) and consequent management decisions; the correct discount rate here reflecting each decision-point's "non-diversifiable risk looking forward."
Simulation was first applied to (corporate) finance by David B. Hertz in 1964.
Decision trees, a standard operations research tool, were applied to corporate finance also in the 1960s.
Real options in corporate finance were first discussed by Stewart Myers in 1977.
Related to this, is the treatment of forecasted cashflows in equity valuation. In many cases, following Williams above, the average (or most likely) cash-flows were discounted, as opposed to a theoretically correct state-by-state treatment under uncertainty; see comments under Financial modeling § Accounting.
In more modern treatments, then, it is the "expected" cashflows (in the mathematical sense: {{small|formula_25}}) combined into an overall value per forecast period which are discounted.
And using the CAPM – or extensions – the discounting here is at the risk-free rate plus a premium linked to the uncertainty of the entity or project cash flows
(essentially, formula_14 and formula_2 combined).
Other developments here include agency theory, which analyses the difficulties in motivating corporate management (the "agent"; in a different sense to the above) to act in the best interests of shareholders (the "principal"), rather than in their own interests; here emphasizing the issues interrelated with capital structure.
Clean surplus accounting and the related residual income valuation provide a model that returns price as a function of earnings, expected returns, and change in book value, as opposed to dividends. This approach, to some extent, arises due to the implicit contradiction of seeing value as a function of dividends, while also holding that dividend policy cannot influence value per Modigliani and Miller's "Irrelevance principle"; see {{section link|Dividend policy|Relevance of dividend policy}}.
"Corporate finance" as a discipline more generally, per Fisher above, relates to the long term objective of maximizing the value of the firm - and its return to shareholders - and thus also incorporates the areas of capital structure and dividend policy.
Extensions of the theory here then also consider these latter, as follows:
(i) optimization re capitalization structure, and theories here as to corporate choices and behavior: Capital structure substitution theory, Pecking order theory, Market timing hypothesis, Trade-off theory;
(ii) considerations and analysis re dividend policy, additional to - and sometimes contrasting with - Modigliani-Miller, include:
the Walter model, Lintner model, Residuals theory and signaling hypothesis, as well as discussion re the observed clientele effect and dividend puzzle.
As described, the typical application of real options is to capital budgeting type problems.
However, here, they are also applied to problems of capital structure and dividend policy, and to the related design of corporate securities;
and since stockholder and bondholders have different objective functions, in the analysis of the related agency problems.
In all of these cases, state-prices can provide the market-implied information relating to the corporate, as above, which is then applied to the analysis. For example, convertible bonds can (must) be priced consistent with the (recovered) state-prices of the corporate's equity.
Financial markets.
The discipline, as outlined, also includes a formal study of financial markets. Of interest especially are market regulation and market microstructure, and their relationship to price efficiency.
Regulatory economics studies, in general, the economics of regulation. In the context of finance, it will address the impact of financial regulation on the functioning of markets and the efficiency of prices, while also weighing the corresponding increases in market confidence and financial stability.
Research here considers how, and to what extent, regulations relating to disclosure (earnings guidance, annual reports), insider trading, and short-selling will impact price efficiency, the cost of equity, and market liquidity.
Market microstructure is concerned with the details of how exchange occurs in markets
(with Walrasian-, matching-, Fisher-, and
Arrow-Debreu markets as prototypes),
and "analyzes how specific trading mechanisms affect the price formation process", examining the ways in which the processes of a market affect determinants of transaction costs, prices, quotes, volume, and trading behavior.
It has been used, for example, in providing explanations for long-standing exchange rate puzzles, and for the equity premium puzzle.
In contrast to the above classical approach, models here explicitly allow for (testing the impact of) market frictions and other imperfections;
see also market design.
For both regulation and microstructure, and generally, agent-based models can be developed to examine the impact of a change in structure or policy - or to make inferences re market dynamics - by testing these in an artificial financial market, or AFM.
This approach, essentially simulated trade between numerous agents, "typically uses artificial intelligence technologies [often genetic algorithms and neural nets] to represent the adaptive behaviour of market participants".
These 'bottom-up' models "start from first principles of agent behavior", with participants modifying their trading strategies as they learn over time, and "are able to describe macro features [i.e. stylized facts] emerging from a soup of individual interacting strategies".
Agent-based models depart further from the classical approach — the representative agent, as outlined — in that they introduce heterogeneity into the environment (thereby addressing, also, the aggregation problem).
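The toy simulation below is a heavily simplified, illustrative artificial financial market: heterogeneous agents trade on noisy beliefs, price adjusts to excess demand, and agents adapt toward recent momentum. The parameters and the learning rule are assumptions for demonstration only, not a published model.
<syntaxhighlight lang="python">
import random

# Toy "artificial financial market": heterogeneous agents trade on noisy
# beliefs, price adjusts to excess demand, and agents adapt toward recent
# momentum. Every parameter and the learning rule are illustrative assumptions.

random.seed(0)
N_AGENTS, N_STEPS = 100, 200
price = 100.0
agents = [{"bias": random.uniform(-0.02, 0.02)} for _ in range(N_AGENTS)]
history = [price]

for _ in range(N_STEPS):
    # Each agent forms a noisy expected return and trades on its sign.
    demand = 0
    for agent in agents:
        expected_return = agent["bias"] + random.gauss(0, 0.01)
        demand += 1 if expected_return > 0 else -1
    # Simple price-impact rule: price moves with the (scaled) excess demand.
    price *= 1 + 0.001 * demand / N_AGENTS
    history.append(price)
    # Crude adaptation: beliefs drift toward the last realised return.
    last_return = (history[-1] - history[-2]) / history[-2]
    for agent in agents:
        agent["bias"] = 0.95 * agent["bias"] + 0.05 * last_return

print(f"Final price after {N_STEPS} steps: {price:.2f}")
</syntaxhighlight>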
Challenges and criticism.
As above, there is a very close link between (i) the random walk hypothesis, with the associated belief that price changes should follow a normal distribution, on the one hand, and (ii) market efficiency and rational expectations, on the other. Wide departures from these are commonly observed, and there are thus, respectively, two main sets of challenges.
Departures from normality.
As discussed, the assumptions that market prices follow a random walk and that asset returns are normally distributed are fundamental. Empirical evidence, however, suggests that these assumptions may not hold, and that in practice, traders, analysts and risk managers frequently modify the "standard models" (see Kurtosis risk, Skewness risk, Long tail, Model risk).
In fact, Benoit Mandelbrot had discovered already in the 1960s
that changes in financial prices do not follow a normal distribution, the basis for much option pricing theory, although this observation was slow to find its way into mainstream financial economics.
Financial models with long-tailed distributions and volatility clustering have been introduced to overcome problems with the realism of the above "classical" financial models; while jump diffusion models allow for (option) pricing incorporating "jumps" in the spot price.
Risk managers, similarly, complement (or substitute) the standard value at risk models with historical simulations, mixture models, principal component analysis, extreme value theory, as well as models for volatility clustering.
For further discussion see {{section link|Fat-tailed distribution|Applications in economics}}, and {{section link|Value at risk|Criticism}}.
Portfolio managers, likewise, have modified their optimization criteria and algorithms; see {{slink|#Portfolio theory}} above.
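As one illustration of how practitioners adjust for fat tails, the sketch below contrasts a parametric (normal) value-at-risk estimate with a historical-simulation estimate on deliberately heavy-tailed returns; the Student-t style return generator and the 99% confidence level are assumptions made for the example.
<syntaxhighlight lang="python">
import random
import statistics

# Sketch: parametric (normal) 99% value at risk versus a historical-simulation
# estimate, computed on deliberately heavy-tailed returns. The Student-t style
# return generator and the 99% confidence level are assumptions.

random.seed(1)

def heavy_tailed_return(scale=0.01, df=3):
    # Approximate Student-t draw: normal divided by a chi-based denominator.
    numerator = random.gauss(0, 1)
    denominator = (sum(random.gauss(0, 1) ** 2 for _ in range(df)) / df) ** 0.5
    return scale * numerator / denominator

returns = [heavy_tailed_return() for _ in range(5000)]

mu = statistics.mean(returns)
sigma = statistics.pstdev(returns)
z_99 = 2.326                                   # standard-normal 99% quantile
var_parametric = -(mu - z_99 * sigma)          # VaR under the normal model

losses = sorted(-r for r in returns)
var_historical = losses[int(0.99 * len(losses))]  # empirical 99% loss quantile

print(f"99% VaR, normal model:          {var_parametric:.4f}")
print(f"99% VaR, historical simulation: {var_historical:.4f}")
</syntaxhighlight>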
Closely related is the volatility smile, where, as above, implied volatility – the volatility corresponding to the BSM price – is observed to "differ" as a function of strike price (i.e. moneyness); this is possible only if the price-change distribution is non-normal, unlike that assumed by BSM.
The term structure of volatility describes how (implied) volatility differs for related options with different maturities.
An implied volatility surface is then a three-dimensional surface plot of volatility smile and term structure.
These empirical phenomena negate the assumption of constant volatility – and log-normality – upon which Black–Scholes is built.
Within institutions, the function of Black-Scholes is now, largely, to "communicate" prices via implied volatilities, much like bond prices are communicated via YTM; see {{section link|Black–Scholes model|The volatility smile}}.
In consequence, traders (and risk managers) now, instead, use "smile-consistent" models: firstly, when valuing derivatives not directly mapped to the surface, facilitating the pricing of other, i.e. non-quoted, strike/maturity combinations or of non-European derivatives; and also, more generally, for hedging purposes.
The two main approaches are local volatility and stochastic volatility.
The first returns the volatility which is "local" to each spot-time point of the finite difference- or simulation-based valuation; i.e. as opposed to implied volatility, which holds overall. In this way calculated prices – and numeric structures – are market-consistent in an arbitrage-free sense. The second approach assumes that the volatility of the underlying price is a stochastic process rather than a constant. Models here are first calibrated to observed prices, and are then applied to the valuation or hedging in question; the most common are Heston, SABR and CEV. This approach addresses certain problems identified with hedging under local volatility.
Related to local volatility are the lattice-based implied-binomial and -trinomial trees – essentially a discretization of the approach – which are similarly, but less commonly, used for pricing; these are built on state-prices recovered from the surface. Edgeworth binomial trees allow for a specified (i.e. non-Gaussian) skew and kurtosis in the spot price; priced here, options with differing strikes will return differing implied volatilities, and the tree can be calibrated to the smile as required.
Similarly purposed (and derived) closed-form models were also developed.
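To make the notion of implied volatility concrete, the sketch below backs out BSM implied volatilities from a handful of made-up call quotes by bisection; with non-normal (smile-consistent) market prices, the recovered volatilities differ across strikes, which is exactly what the surface records.
<syntaxhighlight lang="python">
import math

# Sketch: back out Black-Scholes-Merton implied volatilities from quoted call
# prices at different strikes via bisection. The quotes are made-up numbers,
# chosen only to show how implied volatility can vary across strikes.

def norm_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def bsm_call(S, K, T, r, sigma):
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0, tol=1e-8):
    # Bisection on sigma; the BSM call price is increasing in volatility.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bsm_call(S, K, T, r, mid) > price:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

S, T, r = 100.0, 0.5, 0.02
quotes = {80: 21.8, 90: 13.1, 100: 6.4, 110: 2.6, 120: 1.1}  # strike -> price
for K, price in quotes.items():
    print(f"K={K}: implied vol = {implied_vol(price, S, K, T, r):.2%}")
</syntaxhighlight>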
As discussed, additional to assuming log-normality in returns, "classical" BSM-type models also (implicitly) assume the existence of a credit-risk-free environment, where one can perfectly replicate cashflows so as to fully hedge, and then discount at "the" risk-free-rate.
And therefore, post crisis, the various x-value adjustments must be employed, effectively correcting the risk-neutral value for counterparty- and funding-related risk.
These xVA are "additional" to any smile or surface effect. This is valid as the surface is built on price data relating to fully collateralized positions, and there is therefore no "double counting" of credit risk (etc.) when appending xVA. (Were this not the case, then each counterparty would have its own surface...)
As mentioned at top, mathematical finance (and particularly financial engineering) is more concerned with mathematical consistency (and market realities) than compatibility with economic theory, and the above "extreme event" approaches, smile-consistent modeling, and valuation adjustments should then be seen in this light. Recognizing this, critics of financial economics - especially vocal since the financial crisis - suggest that instead, the theory needs revisiting almost entirely: {{NoteTag|
This quote, from author James Rickards, is representative.
Prominent and earlier criticism is from Benoit Mandelbrot, Emanuel Derman, Paul Wilmott, Nassim Taleb, and others.
Well known popularizations include Taleb's "Fooled by Randomness", Mandelbrot's "The Misbehavior of Markets", and Derman's "Models.Behaving.Badly" and, with Wilmott, the "Financial Modelers' Manifesto".
"The current system, based on the idea that risk is distributed in the shape of a bell curve, is flawed... The problem is [that economists and practitioners] never abandon the bell curve. They are like medieval astronomers who believe the sun revolves around the earth and are furiously tweaking their geo-centric math in the face of contrary evidence. They will never get this right; they need their Copernicus."
Departures from rationality.
As seen, a common assumption is that financial decision makers act rationally; see Homo economicus. Recently, however, researchers in experimental economics and experimental finance have challenged this assumption empirically. These assumptions are also challenged theoretically, by behavioral finance, a discipline primarily concerned with the limits to rationality of economic agents.
An early anecdotal treatment is Benjamin Graham's "Mr. Market", discussed in his "The Intelligent Investor" in 1949.
See also John Maynard Keynes' 1936 discussion of "Animal spirits", and the related Keynesian beauty contest, in his "The General Theory of Employment, Interest and Money".
"Extraordinary Popular Delusions and the Madness of Crowds" is a study of crowd psychology by Scottish journalist Charles Mackay, first published in 1841, with Volume I discussing economic bubbles.
For related criticism of corporate finance theory versus its practice, see:
Consistent with, and complementary to these findings, various persistent market anomalies have been documented, these being price or return distortions – e.g. size premiums – which appear to contradict the efficient-market hypothesis; calendar effects are the best known group here. Related to these are various of the economic puzzles, concerning phenomena similarly contradicting the theory. The "equity premium puzzle", as one example, arises in that the difference between the observed returns on stocks as compared to government bonds is consistently higher than the risk premium rational equity investors should demand, an "abnormal return". For further context see Random walk hypothesis § A non-random walk hypothesis, and sidebar for specific instances.
More generally, and, again, particularly following the financial crisis, financial economics and mathematical finance have been subjected to deeper criticism; notable here is Nassim Nicholas Taleb, who claims that the prices of financial assets cannot be characterized by the simple models currently in use, rendering much of current practice at best irrelevant, and, at worst, dangerously misleading; see Black swan theory, Taleb distribution.
A topic of general interest has thus been financial crises,
and the failure of (financial) economics to model (and predict) these.
A related problem is systemic risk: where companies hold securities in each other then this interconnectedness may entail a "valuation chain" – and the performance of one company, or security, here will impact all, a phenomenon not easily modeled, regardless of whether the individual models are correct. See: Systemic risk § Inadequacy of classic valuation models; Cascades in financial networks; Flight-to-quality.
Areas of research attempting to explain (or at least model) these phenomena, and crises, include noise trading, market microstructure (as above), and Heterogeneous agent models. The latter is extended to agent-based computational models, as mentioned; here price is treated as an emergent phenomenon, resulting from the interaction of the various market participants (agents). The noisy market hypothesis argues that prices can be influenced by speculators and momentum traders, as well as by insiders and institutions that often buy and sell stocks for reasons unrelated to fundamental value; see Noise (economic). The adaptive market hypothesis is an attempt to reconcile the efficient market hypothesis with behavioral economics, by applying the principles of evolution to financial interactions. An information cascade, alternatively, shows market participants engaging in the same acts as others ("herd behavior"), despite contradictions with their private information. Copula-based modelling has similarly been applied. See also Hyman Minsky's "financial instability hypothesis", as well as George Soros' application of "reflexivity".
Relatedly, institutionally inherent "limits to arbitrage" – as opposed to factors directly contradictory to the theory – are sometimes proposed as an explanation for these departures from efficiency.
Bibliography.
Financial economics
Asset pricing
Corporate finance
|
[
{
"math_id": 0,
"text": "j"
},
{
"math_id": 1,
"text": "s"
},
{
"math_id": 2,
"text": "r"
},
{
"math_id": 3,
"text": "p_{s}"
},
{
"math_id": 4,
"text": "Y_{s}"
},
{
"math_id": 5,
"text": "\\sum_{s}\\pi_{s} = 1/r"
},
{
"math_id": 6,
"text": "X_{sj}/r"
},
{
"math_id": 7,
"text": "X_{s}"
},
{
"math_id": 8,
"text": "q_{s}"
},
{
"math_id": 9,
"text": "q_{up}"
},
{
"math_id": 10,
"text": "q_{down}"
},
{
"math_id": 11,
"text": "1-q_{up}"
},
{
"math_id": 12,
"text": "X_{up}"
},
{
"math_id": 13,
"text": "X_{down}"
},
{
"math_id": 14,
"text": "Y"
},
{
"math_id": 15,
"text": "\\pi_{s}"
},
{
"math_id": 16,
"text": "\\pi_{up}"
},
{
"math_id": 17,
"text": "\\pi_{down}"
},
{
"math_id": 18,
"text": "$q_{up}"
},
{
"math_id": 19,
"text": "$q_{down}"
},
{
"math_id": 20,
"text": "\\tilde{m}"
},
{
"math_id": 21,
"text": "V"
},
{
"math_id": 22,
"text": "N(d_2)"
},
{
"math_id": 23,
"text": "X_j/Price_j"
},
{
"math_id": 24,
"text": "N(d_1)"
},
{
"math_id": 25,
"text": "\\sum_{s}p_{s}X_{sj}"
}
] |
https://en.wikipedia.org/wiki?curid=63262
|
632685
|
Coimage
|
In algebra, the coimage of a homomorphism
formula_0
is the quotient
formula_1
of the domain by the kernel.
The coimage is canonically isomorphic to the image by the first isomorphism theorem, when that theorem applies.
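For the linear-algebra case, the sketch below (using an arbitrary matrix) illustrates the dimension count behind this isomorphism: dim(coim f) = dim(domain) − dim(ker f) = rank f = dim(im f).
<syntaxhighlight lang="python">
import numpy as np

# Dimension count for a linear map f : R^3 -> R^3 given by a matrix:
# coim f = R^3 / ker f, so dim(coim f) = 3 - dim(ker f) = rank f = dim(im f),
# illustrating the canonical isomorphism with the image. The matrix is an
# arbitrary example (its second row is twice the first, so the kernel is
# one-dimensional).

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])

n = A.shape[1]
rank = np.linalg.matrix_rank(A)
dim_ker = n - rank

print("dim ker f  =", dim_ker)      # 1
print("dim coim f =", n - dim_ker)  # 2, equals the rank
print("dim im f   =", rank)         # 2, matching the coimage
</syntaxhighlight>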
More generally, in category theory, the coimage of a morphism is the dual notion of the image of a morphism. If formula_2, then a coimage of formula_3 (if it exists) is an epimorphism formula_4 such that (1) there is a map formula_5 with formula_6, and (2) for any epimorphism formula_7 for which there is a map formula_8 with formula_9, there is a unique map formula_10 such that formula_11 and formula_12.
References.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "f : A \\rightarrow B"
},
{
"math_id": 1,
"text": "\\text{coim} f = A/\\ker(f)"
},
{
"math_id": 2,
"text": "f : X \\rightarrow Y"
},
{
"math_id": 3,
"text": "f"
},
{
"math_id": 4,
"text": "c : X \\rightarrow C"
},
{
"math_id": 5,
"text": "f_c : C \\rightarrow Y "
},
{
"math_id": 6,
"text": " f =f_c \\circ c "
},
{
"math_id": 7,
"text": "z : X \\rightarrow Z"
},
{
"math_id": 8,
"text": "f_z : Z \\rightarrow Y "
},
{
"math_id": 9,
"text": " f =f_z \\circ z "
},
{
"math_id": 10,
"text": " h : Z \\rightarrow C "
},
{
"math_id": 11,
"text": " c =h \\circ z "
},
{
"math_id": 12,
"text": " f_z =f_c \\circ h "
}
] |
https://en.wikipedia.org/wiki?curid=632685
|
63270016
|
Direction-preserving function
|
In discrete mathematics, a direction-preserving function (or mapping) is a function on a discrete space, such as the integer grid, that (informally) does not change too drastically between two adjacent points. It can be considered a discrete analogue of a continuous function.
The concept was first defined by Iimura. Some variants of it were later defined by Yang, Chen and Deng, Herings, van-der-Laan, Talman and Yang, and others.
Basic concepts.
We focus on functions formula_0, where the domain X is a finite subset of the Euclidean space formula_1. ch("X") denotes the convex hull of "X".
There are many variants of direction-preservation properties, depending on how exactly one defines the "drastic change" and the "adjacent points". Regarding the "drastic change" there are two main variants:
Regarding the "adjacent points" there are several variants:
Specific definitions are presented below. All examples below are for formula_5 dimensions and for "X" = { (2,6), (2,7), (3, 6), (3, 7) }.
Properties and examples.
Hypercubic direction-preservation.
A "cell" is a subset of formula_1 that can be expressed by formula_6 for some formula_7. For example, the square formula_8 is a cell.
Two points in formula_1 are called "cell connected" if there is a cell that contains both of them.
Hypercubic direction-preservation properties require that the function does not change too drastically in cell-connected points (points in the same hypercubic cell).
"f" is called hypercubic direction preserving (HDP) if, for any pair of cell-connected points "x","y" in "X," for all formula_2: formula_3. The term locally direction-preserving (LDP) is often used instead. The function "fa" on the right is DP.
"f" is called hypercubic gross direction preserving (HGDP), or locally gross direction preserving (LGDP), if for any pair of cell-connected points "x","y" in "X," formula_4. Every HDP function is HGDP, but the converse is not true. The function "fb" is HGDP, since the scalar product of every two vectors in the table is non-negative. But it is not HDP, since the second component switches sign between (2,6) and (3,6): formula_10.
Simplicial direction-preservation.
A simplex is called "integral" if all its vertices have integer coordinates, and they all lie in the same cell (so the difference between coordinates of different vertices is at most 1).
A triangulation of some subset of formula_1 is called "integral" if all its simplices are integral.
Given a triangulation, two points are called "simplicially connected" if there is a simplex of the triangulation that contains both of them.
Note that, in an integral triangulation, any two simplicially-connected points are also cell-connected, but the converse is not true. For example, consider the cell formula_8, and the integral triangulation that partitions it into two triangles: {(2,6),(2,7),(3,7)} and {(2,6),(3,6),(3,7)}. The points (2,7) and (3,6) are cell-connected but not simplicially-connected.
Simplicial direction-preservation properties assume some fixed integral triangulation of the input domain. They require that the function does not change too drastically in simplicially-connected points (points in the same simplex of the triangulation). This is, in general, a much weaker requirement than hypercubic direction-preservation.
"f" is called simplicial direction preserving (SDP) if, for some integral triangulation of "X", for any pair of simplicially-connected points "x","y" in "X," for all formula_2: formula_9.
"f" is called simplicially gross direction preserving (SGDP) or simplicially-local gross direction preserving (SLGDP) if there exists an integral triangulation of ch("X") such that, for any pair of simplicially-connected points "x","y" in "X," formula_4.
Every HGDP function is SGDP, but HGDP is much stronger: it is equivalent to SGDP w.r.t. "all possible" integral triangulations of ch("X"), whereas SGDP relates to a "single" triangulation. As an example, the example function "fc" is SGDP with respect to the triangulation that partitions the cell into the two triangles {(2,6),(2,7),(3,7)} and {(2,6),(3,6),(3,7)}, since in each triangle, the scalar product of every two vectors is non-negative. But it is not HGDP, since formula_12.
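The sketch below checks SGDP with respect to the two-triangle triangulation above, using made-up function values chosen so that all within-triangle scalar products are non-negative while the cell-connected pair (3,6), (2,7) has a negative product, mirroring the "fc" pattern described in the text.
<syntaxhighlight lang="python">
from itertools import combinations

# Check SGDP with respect to the triangulation {(2,6),(2,7),(3,7)} and
# {(2,6),(3,6),(3,7)} of the cell [2,3]x[6,7]. The function values are made up
# so that every within-triangle scalar product is non-negative while the
# cell-connected pair (3,6),(2,7) has a negative one.

triangles = [{(2, 6), (2, 7), (3, 7)}, {(2, 6), (3, 6), (3, 7)}]
f = {(2, 6): (1, 1), (2, 7): (0, 1), (3, 6): (1, -1), (3, 7): (1, 1)}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def simplicially_connected(x, y):
    return any(x in t and y in t for t in triangles)

points = list(f)
sgdp = all(dot(f[x], f[y]) >= 0
           for x, y in combinations(points, 2) if simplicially_connected(x, y))

print("SGDP w.r.t. this triangulation:", sgdp)            # True
print("f(3,6) . f(2,7) =", dot(f[(3, 6)], f[(2, 7)]))     # -1, so not HGDP
</syntaxhighlight>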
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "f: X\\to \\mathbb{R}^n"
},
{
"math_id": 1,
"text": "\\mathbb{R}^n"
},
{
"math_id": 2,
"text": "i\\in [n]"
},
{
"math_id": 3,
"text": "f_i(x)\\cdot f_i(y) \\geq 0"
},
{
"math_id": 4,
"text": "f(x)\\cdot f(y) \\geq 0"
},
{
"math_id": 5,
"text": "n=2"
},
{
"math_id": 6,
"text": "k + [0,1]^n"
},
{
"math_id": 7,
"text": "k\\in \\mathbb{Z}^n"
},
{
"math_id": 8,
"text": "[2,3]\\times [6,7]"
},
{
"math_id": 9,
"text": "(f_i(x) - x_i)\\cdot (f_i(y)-y_i) \\geq 0"
},
{
"math_id": 10,
"text": "f^b_2(2,6)\\cdot f^b_2(3,6) = -1 < 0"
},
{
"math_id": 11,
"text": "(f(x)-x)\\cdot (f(y)-y) \\geq 0"
},
{
"math_id": 12,
"text": "f^c(3,6)\\cdot f^c(2,7) = -1 < 0"
}
] |
https://en.wikipedia.org/wiki?curid=63270016
|
63272141
|
COVID-19 pandemic in Israel
|
The COVID-19 pandemic in Israel () is part of the worldwide pandemic of coronavirus disease 2019 (COVID-19) caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The first case in Israel was confirmed on 21 February 2020, when a female citizen tested positive for COVID-19 at the Sheba Medical Center after return from quarantine on the "Diamond Princess" ship in Japan. As a result, a 14-day home isolation rule was instituted for anyone who had visited South Korea or Japan, and a ban was placed on non-residents and non-citizens who were in South Korea for 14 days before their arrival.
Beginning on 11 March 2020, Israel began enforcing social distancing and other rules to limit the spread of infection. Gatherings were first restricted to no more than 100 people, and on 15 March this figure was lowered to 10 people, with attendees advised to keep their distance from one another. On 19 March, Prime Minister Benjamin Netanyahu declared a national state of emergency, saying that existing restrictions would henceforth be legally enforceable, and violators would be fined. Israelis were not allowed to leave their homes unless absolutely necessary. Essential services—including food stores, pharmacies, and banks—would remain open. Restrictions on movement were further tightened on 25 March and 1 April, with everyone instructed to cover their noses and mouths outdoors. As coronavirus diagnoses spiked in the city of Bnei Brak, reaching nearly 1,000 infected people at the beginning of April, the cabinet voted to declare the city a "restricted zone", limiting entry and exit for a period of one week. Coinciding with the Passover Seder on the night of 8 April, lawmakers ordered a 3-day travel ban and mandated that Israelis stay close to their home on the night of the Seder. On 12 April, Haredi neighborhoods in Jerusalem were placed under closure.
On 20 March 2020, an 88-year-old Holocaust survivor in Jerusalem who had previous illnesses was announced as the country's first casualty. The pandemic occurred during the 2019–2022 Israeli political crisis and had a significant political impact. All restrictions in Israel were removed throughout the spring of 2021, later reintroducing face mask requirements. Restrictions on non-citizens entering the country remained until January 2022.
Israel Shield, the country's national program to combat the pandemic, was established in July 2020. Since June 2021 it has been led by Salman Zarka, whose role is informally known as the "COVID czar".
Timeline.
First wave: February to May 2020.
First cases.
On 21 February, Israel confirmed the first case of COVID-19. A female Israeli citizen who had flown home from Japan after being quarantined on the "Diamond Princess" tested positive at Sheba Medical Center. On 23 February, a second former "Diamond Princess" passenger tested positive, and was admitted to a hospital for isolation.
On 27 February, a man, who had returned from Italy on 23 February, tested positive and was admitted to Sheba Medical Center. On 28 February, his wife also tested positive. On 1 March, a female soldier tested positive for the virus. She had been working at the toy store managed by the same man diagnosed on 27 February. On 3 March, three more cases were confirmed. Two contracted the virus at the same toy store: a middle school student who worked at the store, and a school deputy principal who shopped there. Following this, 1,150 students entered a two-week quarantine. One other person, who had returned from a trip to Italy on 29 February, also tested positive for the virus.
Information campaign.
The government has set up a multi-lingual website with information and instructions regarding the pandemic. Among the languages: English, Hebrew, Arabic, Russian, Amharic, French, Spanish, Ukrainian, Romanian, Thai, Chinese, Tigrinya, Hindi, Filipino. The government also set up a dashboard where daily statistics can be viewed.
Travel restrictions.
On 26 January 2020, Israel advised against non-essential travel to China. On 30 January, Israel suspended all flights from China. On 17 February, Israel extended the ban to include arrivals from Thailand, Hong Kong, Macau, and Singapore. On 22 February, a flight from Seoul, South Korea, landed at Ben Gurion Airport. An ad hoc decision was made to allow only Israeli citizens to disembark the plane, and all non-Israeli citizens aboard returned to South Korea. Later, Israel barred the entry of non-residents or non-citizens of Israel who were in South Korea during the 14 days prior to their arrival in Israel. The same directive was applied to those arriving from Japan starting 23 February. On 26 February, Israel issued a travel warning to Italy, and urged cancelling of all travel abroad. By the third week in March, El Al, Israel's national air carrier, responded to a government request to send rescue flights to Peru, India, Australia, Brazil, and Costa Rica to bring home hundreds of Israelis who were stranded around the world due to the worldwide pandemic. On 22 March, 550 Israelis returned from India; a few days before about 1,100 Israeli travelers were repatriated from Peru.
On 21 February, Israel instituted a 14-day home isolation rule for anyone who had been in South Korea or Japan. A number of tourists tested positive after visiting Israel, including members of a group from South Korea, two people from Romania, a group of Greek pilgrims, and a woman from the U.S. State of New York. 200 Israeli students were quarantined after being exposed to a group of religious tourists from South Korea. An additional 1,400 Israelis were quarantined after having traveled abroad. On 9 March, Prime Minister Benjamin Netanyahu declared a mandatory quarantine for all people entering Israel, requiring all entrants to quarantine themselves for 14 days upon entering the country. The order was effective immediately for all returning Israelis, and would apply beginning on 13 March for all foreign citizens, who must show that they have arranged for accommodation during their quarantine period.
Social distancing and closures.
On 2 March, the 2020 Israeli legislative election was held. Multiple secluded voting booths were established for 5,630 quarantined Israeli citizens who were eligible to vote. 4,073 citizens voted in the coronavirus-special voting booths. After the election, numerous Israelis were in quarantine. On 10 March, Israel began limiting gatherings to 2,000 people. A day later, on March 11, Israel further limited gatherings to 100 people. On 14 March, Prime Minister Netanyahu announced new regulations and stated the need to "adopt a new way of life". The Health Ministry posted new regulations, effective 15 March. These included banning gatherings of more than 10 people, and closure of all educational institutions, among them daycare centers, special education, youth movements, and after-school programs. The list of venues required to close included: malls, restaurants, hotel dining rooms, pubs, dance clubs, gyms, pools, beaches, water and amusement parks, zoos and petting zoos, bathhouses and ritual baths for men, beauty and massage salons, event and conference venues, public boats and cable cars, and heritage sites. Take-away restaurants, supermarkets, and pharmacies were to remain open. The pandemic forced many events to be cancelled. Notwithstanding the closure of wedding halls, weddings took place in private homes with the limitation of no more than 10 participants in each room; dancing could take place both indoors and in outdoor courtyards. Weddings were also held on rooftops and yeshiva courtyards. In one case, a Sephardi couple opted to hold their wedding ceremony in an Osher Ad supermarket, which was exempt from the 10-person rule. The Al-Aqsa Mosque and Dome of the Rock closed to prevent contamination of the holy sites. As a result of the government's directive for citizens to remain at home, there was an increase in calls to domestic violence hotlines, and women's shelters were close to full capacity, both due to new arrivals and to current residents who remained due to the pandemic.
On 9 March, after it was discovered that an employee at the Israeli embassy in Greece had contracted coronavirus and spread it to two family members, it was announced that the embassy was temporarily shutting down. On 12 March, Israel announced that all universities and schools would close until after the Passover (spring) break. After the break, schools remained closed and students learned online. On 3 May, grades one to three were allowed to resume school, with restrictions, and not in all cities. In addition, grades eleven and twelve were allowed to hold revisions for the upcoming Bagrut exams. On 15 March, Justice Minister Amir Ohana expanded his powers and announced that non-urgent court activity would be frozen. As a result, the corruption trial of Prime Minister Netanyahu was postponed from 17 March to 24 May. The Movement for Quality Government in Israel urged the Attorney General to stay the new regulations. On 16 March, the Bank of Israel ordered retail banks to close, but allowed special services to remain open for elderly people. On 22 March, both the open-air Carmel Market in Tel Aviv and the open-air Mahane Yehuda Market in Jerusalem were closed by police. Many supermarkets experienced a shortage of eggs caused by panic buying and fear of shutdown.
Medical response.
As late as 15 March, doctors complained that guidelines for testing were too restrictive. On 16 March, the Health Ministry approved a number of experimental treatments for patients with COVID-19. On 18 March, the Defense Ministry took over purchasing of Corona-related gear. On the same day, the Israel Institute for Biological Research announced that they are working on a COVID-19 vaccine. On 18 March at 6 pm, Israelis across the country applauded from their balconies for two minutes in appreciation of medical workers and first responders battling coronavirus. On 29 March, Magen David Adom announced that it will collect blood plasma from recovered COVID-19 patients to treat those most severely affected by the infection. In December 2021, the Israeli Ministry of Health approved the use of Pfizer's Nirmatrelvir/ritonavir for treating COVID-19.
Politics.
The virus began to spread rapidly after the March 2020 election, and pandemic politics affected Israel's subsequent trajectory. The incumbent PM Benjamin Netanyahu did not win enough seats to form a coalition, and the presidential mandate to form a coalition was given to his contender, Benny Gantz. Facing criminal charges and unable to form a coalition, PM Netanyahu urged the establishment of a National Emergency Government (NEG). Abulof and Le Penne argue that Netanyahu succeeded partly through fearmongering. Suggesting that “If I fall, Israel falls”, Netanyahu compared the COVID-19 crisis to the Holocaust, qualifying “unlike the holocaust, this time – this time, we identified the danger in time,” and saying that a NEG headed by him was needed “like before the Six-Day War,” to “save the country.”
Netanyahu's pandemic politics brought his party the Likud to reach peak public support (41-43 seats during the first wave of April–May 2020), pushing Gantz to ask Israel's President to transfer the mandate to Netanyahu so that the latter could form, and head, a new government. Overall, Abulof and Le Penne argue, Israel features key factors that could have helped it weather well the COVID-19 crisis: a young population, close or otherwise heavily monitored borders, warm climate, an efficient public health system, hard-earned public resilience, willingness for mass mobilization, high-tech capacities (to help gather and spread information), and meager reliance on tourism.
Mobile phone tracking.
On 15 March, the Israeli government proposed allowing the Israel Security Agency (ISA) to track the prior movements of people diagnosed with coronavirus through their mobile phones. The security service would not require a court order for its surveillance. The stated goal of the measure was to identify people with whom infected individuals came into contact in the two weeks prior to their diagnosis, and to dispatch text messages informing those people that they must enter the 14-day self-quarantine. The security measure was to be in place for only 30 days after approval by a Knesset subcommittee, and all records were to be deleted after that point. Critics branded the proposal an invasion of privacy and civil liberties.
On 17 March, at 1:30 AM, a Knesset committee approved the contact-tracing program, making Israel the only country in the world to use its internal security agency (Shin Bet) to track citizens' geolocations. Within the first two days, the Ministry of Health text-messaged 400 individuals who had been in proximity to an infected person, and told them to enter a 14-day self-quarantine. On 19 March, the Supreme Court of Israel heard petitions to halt the contact-tracing program, submitted by the Association for Civil Rights in Israel, and Adalah – The Legal Center for Arab Minority Rights, and issued an interim order. The same day, several hundred protesters converged on the Knesset to protest the phone surveillance and other restrictions on citizens' movements, as well as the shutdown of the judicial and legislative branches of the government. Police arrested three protesters for violating the ban on gatherings over 10 people, and also blocked dozens of cars from entering Jerusalem and approaching the Knesset building. On 26 March, the ISA said contact tracing had led to over 500 Israelis being notified who were then diagnosed with coronavirus. On April 26, 2020, the Supreme Court issued its judgment on the contact-tracing petitions. In granting the petitions, the Court held that the Government's decision passed constitutional review under the exigent circumstances at the time it was made, but that further recourse to the Israel Security Agency for the purpose of contact tracing would require primary legislation in the form of a temporary order that would meet the requirements of the Limitations Clause of Basic Law: Human Dignity and Liberty. The Court further held that due to the fundamental importance of freedom of the press, ISA contact tracing of journalists who tested positive for the virus would require consent, and in the absence of consent, a journalist would undergo an individual epidemiological investigation, and would be asked to inform any sources with whom he was in contact over the 14 days prior to his diagnosis. Cellphone-based location tracking proved to be insufficiently accurate, as scores of Israeli citizens were falsely identified as carriers of COVID-19 and subsequently ordered to self-quarantine. In an attempt to contain the spread of the Omicron Variant, Israel reinstated the use of Shin Bet counterterrorism surveillance measures for a limited period of time.
Public transportation.
As of 19 March, public transportation ridership was down 38.5 percent compared to before the virus outbreak. Public bus operations were strictly curtailed by the government, which placed an 8 p.m. curfew on bus operations nightly, and halted all public transportation between Thursday night at 8 p.m. and Sunday morning, going beyond the usual hiatus on public transportation in Israel during Shabbat (from Friday evening to Saturday evening). As of 22 March Israel's Ministry of Transport and Road Safety and its National Public Transportation Authority instituted a notification system allowing passengers using public transportation to inquire whether they had shared a ride with a person sick with COVID-19. The travel histories will be stored through the use of the country's electronic bus card passes, known as Rav-Kav. At the peak of the first wave, on 20 April 2020, public transportation ridership was down 80% compared to before the outbreak.
State of emergency.
On 19 March, Prime Minister Netanyahu declared a national state of emergency. He said that existing restrictions would henceforth be legally enforceable, and violators would be fined. Israelis were not allowed to leave their homes unless absolutely necessary. Essential services would remain open. News reports showed hundreds of Israelis ignoring the new ban on Shabbat, 21 March, and visiting beaches, parks, and nature spots in large numbers, prompting the Ministry of Health to threaten imposing tighter restrictions on the public.
On 25 March, the government imposed stricter restrictions on citizens' movements. These include:
Beginning on April 1 the government proposed to intensify precautionary restrictions on its citizens, requiring them to: refrain from all public gatherings, including prayer quorums of 10 men; limiting outings to two people from the same household; and calling upon them to always wear face masks in public. Beginning on April 12, the government required all Israelis to cover their nose and mouth when leaving their homes. Exceptions include "children under age 6; people with emotional, mental or medical conditions that would prevent them from wearing a mask; drivers in their cars; people alone in a building; and two workers who work regularly together, provided they maintain social distancing". The new law was passed on the same day that the World Health Organization questioned the efficacy of face masks for protecting healthy individuals from catching the virus.
Restrictions on religious gatherings.
According to Israeli Ministry of Health statistics, as of March 24, 24% of all coronavirus infections in Israel with known infection points (35% of all known cases) were contracted in synagogues, 15% in hotels, and 12% in restaurants. The Health Ministry's rules on indoor gatherings, which were reduced from 100 to 10, still took into account the minimum number of members needed for a minyan (public prayer quorum). With stricter restrictions placed on citizens on 25 March (see below), the two Chief Rabbis of Israel called for all synagogues to be closed and prayer services to be held outdoors in groups of 10, with space between each worshipper. Many synagogues in Jerusalem were locked and prayer services held outdoors. Due to the uptick in coronavirus diagnoses in Bnei Brak and after initially ordering his followers to ignore Health Ministry restrictions, leading Haredi posek Chaim Kanievsky eventually issued an unprecedented statement on 29 March instructing Bnei Brak residents not to pray with a minyan at all, but rather individually at home. Despite this, Kanievsky was accused of secretly arranging public prayers at his house. On 1 April, the Chief Rabbis of Israel published guidelines for observance of Passover laws during the outbreak. The guidelines included praying at home and not in a minyan, selling chametz online, and getting rid of chametz at home in ways other than burning, so as not to go out into the streets for the traditional burning of the chametz.
After back-and-forth discussions with representatives of the chevra kadisha (Jewish religious burial society), the Health Ministry allowed burial society members to proceed with many traditional aspects of burial for coronavirus victims. Burial workers will be garbed in full protective gear to perform the "taharah" (ritual purification) of the body, which will then be wrapped in the customary "tachrichim" (linen shrouds) followed by a layer of plastic. The funeral service must be held completely outdoors. Funeral attendees do not need to wear protective gear.
On 26 March, the Church of the Holy Sepulchre was closed. Ziyarat al-Nabi Shu'ayb is a Druze festival, called Ziyara, celebrated between 25 and 28 April, which is officially recognized in Israel as a public holiday. Mowafaq Tarif, the current spiritual leader of the Druze community in Israel, announced that the traditional festivities of the Ziyarat al-Nabi Shu'ayb were canceled for the first time in the history of the Druze community.
Closures of cities and neighborhoods.
On 2 April, the cabinet voted by conference call to declare Bnei Brak a "restricted zone", limiting entry and exit to "residents, police, rescue services, those bringing essential supplies and journalists", for an initial period of one week. With a population of 200,000, Bnei Brak had the second-highest number of coronavirus cases of all Israeli cities in total numbers, and the highest rate per capita. On 10 April the closure was relaxed to allow residents to leave the city to go to work, attend a funeral of an immediate relative, or for essential medical needs. On April 12, the government imposed a closure on Haredi neighborhoods of Jerusalem, citing Ministry of Health statistics that nearly 75% of that city's coronavirus infections could be traced to these neighborhoods. The closure impacted Mea Shearim, Geula, Bukharim Quarter, Romema, Mekor Baruch, Sanhedria, Neve Yaakov, Ramat Shlomo, and Har Nof. Residents of these neighborhoods were allowed to leave to other areas only to go to work, attend funerals of immediate relatives, and for essential medical needs. The closure was opposed by the Mayor of Jerusalem, Moshe Lion, who reportedly told the government cabinet members: "Take the Ramot neighborhood for example — 60,000 residents and 140 of them sick. Why do we need to close off the whole neighborhood?"
Lawmakers enforced a 3-day nationwide lockdown in conjunction with the Passover Seder, which took place in Israel on Wednesday night, April 8. All travel between cities was prohibited from Tuesday evening until Friday evening. From Wednesday at 3 p.m. until Thursday at 7 a.m., all Israelis were prohibited from venturing beyond the immediate vicinity of their home. The goal of these measures was to prevent the traditional family gatherings associated with the Passover Seder. The lockdown did not apply to Arab towns, where Passover is not observed. Despite the lockdown, several prominent politicians, including Prime Minister Netanyahu, President of Israel Reuven Rivlin, Yisrael Beiteinu party leader Avigdor Lieberman, Minister of Immigration and Absorption Yoav Gallant, and Likud MK Nir Barkat were noted by the Israeli press to have celebrated the Seder or other parts of the festival with relatives who did not live with them. A partial nationwide lockdown was again imposed from 14 to 16 April, preventing Israelis from visiting family in other towns, and Jerusalem residents from leaving their own neighborhoods, in conjunction with the seventh day of Passover and the Mimouna holiday the following evening at the end of Passover.
Throughout the month of Ramadan, which began on April 25, stores in towns with majority Muslim populations (including East Jerusalem) were to be closed from 6 pm until 3 am. Indoor prayer for all religions was banned, while outdoor prayer was allowed for groups of up to 19 people, with distancing between worshippers.
Exit strategy.
On 24 April 2020, the government approved the reopening of street stores and barbershops, effective 26 April 2020. Malls, gyms, and restaurants without delivery services remained closed. On 7 May 2020, malls and outdoor markets reopened, with restrictions on the number of people allowed. On 27 May 2020, restaurants reopened, with 1.6 meter distancing between diners, and masked staff.
On 3 May 2020, schools reopened for first to third grade, and 11th to 12th grade. Classes were limited in size, and schoolchildren were required to wear masks. By 17 May 2020, limitations on class size were lifted. On 10 May 2020, preschools and kindergartens reopened, with limits on the number of children per class, and on a rotating half-week schedule. Nurseries were reopened with a full-week schedule, but allowing only 70% of the children to attend. Priority was given to children of single or working mothers. On 17 to 19 May 2020, schools reopened fully, with certain social distancing rules in place, including staggered recesses and maintaining 2 meters distance between pupils during breaks. Children arriving at school were required to present a health statement signed by their parents. A number of schools were shut down after reopening due to cases among staff members or students.
On 4 May 2020, Prime Minister Netanyahu outlined a gradual easing of lockdown restrictions, approved by the government. Immediate changes included allowing outdoor meetings of groups not exceeding 20, removal of the 100-meter limit on venturing from homes, and allowing meetings with family members, including elderly. Weddings with up to 50 attendees were also allowed. The easing of restrictions would halt should one of the following occur:
Additional easing of restrictions was announced on 5 May 2020. On 19 May 2020, the requirement to wear masks outdoors and in schools was lifted for the remainder of the week due to a severe heat wave. On 20 May 2020, beaches and museums reopened, and restrictions on the number of passengers on buses were relaxed. Houses of prayer reopened to groups of up to 50 people. Attendees were required to wear masks and maintain a distance of two meters.
Economic impact.
On 16 March, Israel imposed limitations on the public and private sectors. All non-critical government and local authority workers were placed on paid leave until the end of the Passover holiday. Private sector firms exceeding 10 employees were required to reduce staff present in the workplace by 70%. On 30 March, Prime Minister Netanyahu announced an economic rescue package totaling 80 billion shekels ($22 billion), saying that was 6% of the country's GDP. The money will be allocated to health care (10 billion shekels); welfare and unemployment (30 billion shekels) aid for small and large businesses (32 billion shekels), and to financial stimulus (8 billion). By 1 April, the national unemployment rate had reached 24.4 percent. In the month of March alone, more than 844,000 individuals applied for unemployment benefits—90 percent of whom had been placed on unpaid leave due to the pandemic. During April 2020, Bituah Leumi deposited one-time payments to seniors, disabled people, people receiving income support or alimony payments, and families with children. On 16 June 2020, the Knesset passed a stimulus bill to encourage businesses to bring workers back from unemployment.
Second wave: May to November 2020.
Government response.
On July 1, the Knesset reauthorized ISA mobile phone tracking of infected individuals by enacting the Law to Authorize the ISA to Assist in the National Effort to Contain the Spread of the Novel Coronavirus (Temporary Provisions) 2020–5780. As ISA location tracking resumed, by July 5, over 30,000 Israelis were ordered into quarantine. On 6 July 2020, following over two weeks of continued increase in the number of new daily cases, Netanyahu announced new social distancing guidelines, approved by the government. These included:
On 17 July, additional restrictions were announced. These included:
Due to pressure from business owners, the government backtracked on the closure of restaurants, pools, and beaches. Weekend closures of malls and markets were also cancelled, following claims that the closures had not slowed infection rates.
On 31 August 2020, the coronavirus cabinet approved the 'traffic-light' plan introduced by Prof. Ronni Gamzu, in which each city is assigned a color indicating its current level of COVID-19. On 6 September 2020, the government approved closure of schools and a night-time curfew for forty 'red' communities. This plan replaced Gamzu's proposal of full closure in ten 'red' towns and went into effect 8 September. The communities affected by the curfews were among the poorest in Israel, with mainly Arab and ultra-Orthodox populations. Residents under curfew were restricted to within 500 meters of their homes, from 7pm to 5am.
On 10 September 2020, Israel became the country with the highest rate of COVID-19 infections per capita. As confirmed infections continued to rise daily, Israeli officials warned that hospitals would eventually be unable to confront the crisis. On 13 September 2020, the government approved a 3-week country-wide lockdown, beginning Friday, 18 September at 2pm, and ending on 10 October. Restrictions include:
The 3-week lockdown took place during the high holidays, during which many Jews attend synagogue. The lockdown rules for prayer were as follows:
formula_2 for formula_3.
formula_4 for formula_5.
In either case, the total number of people 10formula_0 should not exceed formula_6, where formula_7 is the area of the room in square meters.
On 23 September 2020, Prime Minister Benjamin Netanyahu announced stricter lockdown rules after a new daily coronavirus record of 6,923 infections was reported in Israel. These included:
On 13 October the lockdown was extended for an additional week, until midnight 18 October 2020.
While restrictions were eased in most of the country, local lockdowns were imposed in the following towns due to high case numbers: Majdal Shams and Masade (starting on 6 November 2020), Buqata (starting on 7 November), Hazor Haglilit (starting on 8 November), Qalansawe and Iksal (starting on 17 November 2020), Nazareth and Isfiya (starting on 21 November).
A number of steps were taken to provide financial assistance:
On 21 September 2020, the government unanimously approved a 10% pay cut for all Knesset members and government ministers.
Protests.
During July and August 2020, many protests were held, with protesters voicing frustration over the response of the Netanyahu-led government to the pandemic. On 30 September, Israel's parliament passed a law limiting demonstrations, which the opposition said was intended to curb protests against Prime Minister Benjamin Netanyahu over alleged corruption and his mismanagement of the coronavirus crisis. The law prohibited Israelis from holding large gatherings beyond a set distance from their residences. The government defended the measure as a way to curb COVID-19 infections. On 3 October, numerous anti-Netanyahu protests were held throughout Israel after the passage of legislation limiting demonstrations during the lockdown. The Black Flag movement estimated that 130,000 people took part in Saturday's protests against Netanyahu in cities and towns across Israel.
2020–2021 school year.
The Haredi school year started on 24 August 2020, before the 'traffic light' plan was approved. All other schools in non-'red' cities opened on 1 September 2020. Within a week, a number of schools and kindergartens reported outbreaks, leading to quarantine of exposed staff members and students. Physical schools, kindergartens and nurseries closed at the beginning of the 3-week lockdown, on 13 September, with classes continuing online. Kindergartens and nurseries reopened on 18 October, including in 'red' cities. Grades 1 to 4 reopened on 1 November, in non-'red' cities. Class size was limited to 18 children. Students were required to wear masks throughout the day and eat their meals outdoors or spaced far apart from one another. Grades 5 and 6 returned to school on 24 November. High school grades 11 and 12 returned on 29 November. Schools reopened for remaining grades 7–10 on 6 December. After the Hanukkah break, over 220,000 students from grades 5–12 in 'orange' and 'red' cities went back to online studies.
Exit strategy.
On 18 October 2020 Israel eased lockdown restrictions in non-'red' cities. The first stage of the exit strategy included:
On 1 November 2020 Israel eased restrictions further:
On 8 November 2020, street-front stores reopened. Strip malls reopened on 17 November 2020. 15 malls opened as part of a pilot plan on 27 November 2020.
Third wave: November 2020 to April 2021.
In December 2020, cases steadily increased, reaching over 3,000 new cases daily and over 5% test positivity rate. Multiple countries announced the appearance of new and more infectious COVID-19 strains; towards the end of December, first cases of the Alpha variant were detected in Israel. First cases of the Beta variant were detected in January 2021.
Travel ban.
On 20 December 2020, Israel announced an entry ban on all foreign travelers arriving from the United Kingdom, South Africa, and Denmark. Israelis returning from these countries were required to enter state-run quarantine hotels. On 24 January 2021, the government announced a week-long ban on most incoming and outgoing flights, effective on Monday January 25 at midnight, to prevent entry of new variants into Israel. The flight restrictions were extended multiple times: until 5 February 2021, then until 21 February 2021, and later until 6 March 2021. Daily flights, for new immigrants and for Israelis stranded outside Israel, were available as of 22 February 2021, for up to 2,000 passengers. The number of daily entries was increased to 3,000 on 7 March 2021.
Third nationwide lockdown.
On 24 December 2020, the government declared a third nationwide lockdown, to begin on 27 December 2020. Restrictions included:
Preschoolers through grade 4, grades 11–12, and special education, are to continue physical schooling as usual, even in "orange" and "red" cities. While the initial government decision called for remote learning for grades 5–10, this decision was revised by the Knesset Education Committee: in "green" and "yellow" cities, grades 5-10 are to continue in-person schooling, while schools in "orange" and "red" cities will switch to remote learning. In April 2021, Israel lifted its outdoor and indoor mask mandates, as it was the country with the fastest vaccination campaign worldwide. But it reimposed the indoor mask mandate due to an increase in infections.
During the first week of January 2021, there were over 8,000 new cases daily. On 5 January 2021, the government announced a two-week long, complete lockdown, effective midnight Thursday 7 January 2021. The tightened restrictions include:
On 19 January 2021 the tight lockdown was extended until the end of January. The tight lockdown was initially extended until 5 February 2021, and then until 7 February 2021. The government approved a curfew from 8:30 p.m. to 5:00 a.m. for the three nights of 25–27 February, in an attempt to limit spread of the virus during Purim holiday activities.
Exit strategy.
On 7 February 2021 Israel began easing lockdown restrictions:
During the third lockdown many Israelis were vaccinated against COVID-19. On 21 February 2021, the government implemented green passes for those who were fully vaccinated or were infected and recovered. Green passes are required for the following:
On 7 March 2021, restrictions were eased further. Rules include:
Green passes can be generated for those who have recovered from the virus or who are fully vaccinated (1 week after the second dose) using the Ministry of Health's Traffic Light app.
Preschools, kindergartens, and grades 1-4 reopened on 11 February 2021 in "yellow" and "green" areas, and in "light orange" areas that had at least 70% of their community vaccinated. Grades 5-6 and grades 11-12 returned to school in "yellow", "green", and "light orange" areas on 21 February 2021. Grades 7-10 returned to school in "yellow", "green", and "light orange" areas on 7 March 2021. Universities reopened with in-person classes for green pass holders on 7 March 2021. On 18 April 2021, schools reopened fully, with in-person classes and no special limitations on class size. Students are still required to wear masks indoors but are allowed to take them off during gym class, when they eat, and in between classes.
Period following vaccination campaign: April to June 2021.
Following the national vaccination campaign during late December to April 2021, Israel reached a vaccination rate of over 50% of the population, and 9% recovered from COVID-19, with resulting drops in new cases and deaths. In April 2021, first cases of the Delta variant were detected in Israel. In May 2021, first cases of the Gamma variant were detected too.
On 18 April 2021, the requirement for masks outdoors was cancelled. Masks were still required indoors in public places, and The Ministry of Health recommended that they be worn outdoors in large gatherings. On 15 June 2021, the requirement for masks indoors, in schools, and on public transportation was cancelled.
On 23 April 2021, Israel issued a travel warning for Brazil, Ethiopia, India, Mexico, South Africa, Turkey, and Ukraine due to their high COVID-19 morbidity rates. On 2 May 2021, the government banned the travel of Israelis to India, to Mexico, to South Africa, to Brazil, to Ukraine, to Ethiopia and to Turkey unless they receive special permission. Israelis returning from these countries must isolate for either 14 days with one PCR test taken upon arrival, or 10 days with two negative PCR tests. The current list of 'red' countries for which isolation is required can be found on the Ministry of Health website.
On 5 May 2021, the government extended the validity of green passes for those vaccinated or recovered until December 2021. On 1 June 2021 Israel lifted many COVID-19 restrictions, including limitations on the number of people at both indoor and outdoor gatherings, and green pass requirements. Restrictions on international travel remain in place. Testing protocols remain in place for containing new outbreaks, particularly in schools and among international travellers.
Fourth wave: June to November 2021.
Daily case numbers began rising at the end of June 2021, reaching over 1000 daily cases on 17 July 2021 and peaking at over 10,000 during September 2021. The number of hospitalizations also rose.
On 19 October 2021, the first case of Delta variant AY.4.2 was detected in Israel. Subsequent tests revealed 5 earlier cases of the variant.
Government response.
On 25 June 2021, the requirement to wear masks indoors was reinstituted due to the rise in cases. On 29 July 2021, the green pass requirement was reinstituted for indoor events with 100 or more participants.
On 29 July 2021, a third vaccination was approved for persons aged 60 or older due to the observed waning efficacy of the Pfizer vaccine against the prevalent Delta variant. The vaccine booster was later approved for everyone aged 12 and older.
On 8 August 2021, restrictions renewed by the government came into effect to slow the spread of the Delta variant and included expanding proof of vaccine and mask-wearing requirements for some gatherings, and a shift back to more work from home, quarantines, and travel restrictions.
2021–2022 school year.
Prime Minister Naftali Bennett approved a testing plan for students during the 2021–2022 school year. Serological testing of all students in grades 1 through 6 is planned. Students with a positive serological result will receive a green pass and will be exempt from quarantine during the school year. Families of kindergarten and elementary school children will receive home-testing kits and will be required to test their children within 48 hours of the first day of school.
In a pilot of the serology test carried out in Haredi schools, which reopened on 9 August 2021, approximately 20% of children tested positive.
Beginning 10 October 2021, Israel adopted the 'green classroom' outline for grades 1–12 in 'green' cities. According to the outline, if a child tests positive, the child's classmates undergo PCR testing. Classmates who test negative are allowed to return to school, but must avoid social contact with non-classmates after school hours. Instead of quarantine, the classmates are required to take antigen tests for 7 days, followed by a second PCR test. The children resume regular studies and afterschool activities when the second PCR tests are negative for the whole class. This outline was extended to 'yellow' cities and to daycare on 24 October 2021.
Fifth wave: December 2021 to May 2022.
The first cases of the Omicron variant were detected in Israel at the end of November 2021, reaching 175 cases by 19 December 2021. Daily cases increased to over 80,000 at the end of January 2022. Despite having administered enough doses to fully vaccinate 98.6% of the country, Israeli health authorities expressed concern about breaking the record for serious infections in late January 2022.
Travel restrictions.
Israel banned the entry of foreigners on 28 November 2021. Israel further listed 'red' countries to which travel of Israelis was banned. Travel restrictions on Israelis were removed on 6 January 2022, and foreigners complying with 'Green Pass' rules were allowed to enter starting 9 January 2022.
School guidelines.
Israel scrapped the 'traffic light' plan for in-person school attendance, easing schools' ability to hold in-person classes. Instead, beginning 9 January 2022, children testing positive were required to self-isolate for 10 days. Vaccinated children who were exposed were allowed to return to school after a negative rapid antigen test, while unvaccinated children were required to isolate for 10 days. The isolation requirements for exposed schoolchildren were cancelled on 27 January 2022; instead, children were to undergo two home tests weekly, on Sunday and Wednesday. Children who tested positive at home were required to take an official test and, if positive, isolate for 5 days. Those exposed were recommended to undergo daily tests for 5 days, but were not required to isolate.
4th vaccine dose.
Israel began offering a 4th dose of the Pfizer vaccine to those 60 or older on 2 January 2022. The 4th dose was later recommended for all those aged 18 or older.
Green Pass restrictions.
On 7 February 2022, the requirement to hold a 'Green Pass' or a recent negative test when entering restaurants, movie theaters, gyms, and hotels was removed. 'Green Passes' are still required for entry into event halls and dance clubs.
Sixth wave: since June 2022.
The number of cases started rising again in June 2022, caused mainly by the spread of variant BA.5.
Infection prevalence and compliance.
The prevalence of infection has varied between different sectors of the Israeli population. Haredi communities have experienced a disproportionately higher number of cases and deaths. Reasons for the increased case numbers include crowded living conditions, and prioritizing continuity of religious routines, such as synagogue services and Torah study at yeshivas. Compliance, at least of some groups within the Haredi sector, has been low. During the 'third wave', when all schools were supposed to be closed, many Haredi schools reopened. Hundreds attended weddings in some Haredi communities. Thousands gathered for funerals of prominent rabbis, including Rabbi Meshulam Dovid Soloveitchik and Rabbi Chaim Meir Wosner, despite government restrictions. Vaccination rates in the Haredi community have been lower than in the general population, at least partially due to disinformation. A number of prominent rabbis have called on community members to get vaccinated.
Arab communities have also experienced relatively high case numbers and deaths. This was mainly attributed to large weddings and social gatherings, held despite government restrictions. Arab communities lagged in vaccinations, despite widespread vaccine availability. The lag was attributed to widespread distrust of the government, and to a lack of Arabic-language outreach and education about the vaccine's safety.
Vaccination.
Procurement.
The Israeli government began to procure doses of COVID-19 vaccines from various sources as data regarding various COVID-19 vaccines became available:
The first batch of vaccines, from Pfizer, arrived on 9 December 2020. A further 700,000 doses were delivered on 10 January 2021. The first batch of vaccines from Moderna arrived on 7 January 2021. Israel was prioritized for receiving the Pfizer vaccine; in exchange, Israel committed to sending Pfizer medical data pertaining to the vaccinations, including side effects, efficacy, and the amount of time it takes to develop antibodies, for different age groups. In order to protect privacy, it was agreed that the identity of those vaccinated would not be disclosed to Pfizer. A censored version of the agreement was made public by the Israeli government on 17 January 2021. In April 2021, long-term agreements for the supply of 18 million additional vaccine doses were signed with Moderna and Pfizer. The doses to be supplied will be adapted to the different variants of the virus, if needed.
Distribution.
Pfizer vaccine.
The following vaccination priorities were established by the Ministry of Health:
Netanyahu, Yuli Edelstein and others received their vaccination first. Vaccinations began on 19 December 2020. The first large batch of vaccines, from Pfizer, was distributed rapidly, with about 1.5 million people (16% of the population) vaccinated within 3 weeks. While Israel's rollout of COVID-19 vaccinations was not problem-free, its initial phase was clearly rapid and effective. Vaccinations were expanded to teachers and to those 55 or older on 12 January 2021, to those 45 or older on 17 January 2021, to those 40 or older on 19 January 2021, and to those 35 or older on 28 January 2021. Pregnant women were advised to vaccinate and were added to the priority list on 19 January 2021.
Teenagers born in 2003 and 2004 began getting vaccinated on 23 January 2021. Vaccinations became available to all people 16 or older who had not contracted COVID-19 beginning 4 February 2021, and to those 16 or older who had contracted COVID-19 on 2 March 2021; the latter group received a single dose of the Pfizer vaccine. Vaccinations were approved for 12–15-year-olds on 2 June 2021.
A third vaccination for people aged 60 and above was approved by the government on 29 July 2021. President Herzog was the first to receive the third shot. Third-dose eligibility was expanded to health workers and those over 50 on 13 August 2021, to those over 40 and teachers on 19 August 2021, to those over 30 on 24 August 2021, and to anyone 12 or older who had received the second shot at least five months prior on 29 August 2021.
Israel approved child-sized doses of the Pfizer vaccine on 10 November 2021. The first batch of child-dose vaccines arrived on 20 November 2021 and vaccination of 5-11 year olds began on 22 November 2021.
Astrazeneca vaccine.
On 21 October 2021, Israel began offering the Astrazeneca vaccine to those unable to receive the Pfizer or Moderna vaccines.
Development of an Israeli vaccine.
The Israel Institute for Biological Research developed a vaccine and produced 25,000 doses of the vaccine for a Phase I clinical trial, which began in Sheba and Hadassah medical centers in October 2020. On 14 December 2020, it was announced that the Health Ministry had approved the launch of a Phase II clinical trial for the Israel Institute for Biological Research's vaccine candidate, BriLife.
Vaccine diplomacy and swap.
Prime Minister Netanyahu donated vaccines purchased by Israel to a small number of countries, including Honduras and the Czech Republic. Planned donations of vaccines to other countries were frozen after legal questions were raised.
On 6 July 2021, Israel signed a vaccine swap agreement with South Korea. Israel delivered 700,000 doses of the Pfizer vaccine that were close to expiration in exchange for an equal amount of doses that South Korea had ordered for later in 2021.
Relations with neighbouring countries and territories.
Palestine.
On 11 March, Israel delivered 20 tons of disinfectant to the West Bank.
On 17 March, the Defense Ministry tightened restrictions on Palestinian workers, limiting entry to those working in essential sectors, and requiring that they remain in Israel instead of commuting. Also, Israel and the Palestinian Authority set up a joint operations room to coordinate their response to the virus.
On 25 March, the Palestinian National Authority urged all Palestinians working in Israel to return to the West Bank. All those returning were requested to self-isolate.
On 19 May, an unmarked Etihad Airways plane marked the first direct flight between the United Arab Emirates and Israel. Its goal was to deliver supplies to the West Bank. The aid was rejected by the West Bank, so it was delivered to Gaza instead.
On 18 October, former chief negotiator Saeb Erekat was transferred to Hadassah Medical Center for treatment for COVID-19. Erekat had undergone a lung transplant in 2017. Erekat died of COVID-19 on 10 November.
On 4 January 2021, Minister of Health Yuli Edelstein secretly approved the transfer of 200 vaccine doses to the Palestinian Authority as a humanitarian gesture.
On 29 January 2021, several Israeli news sources reported that Israel was planning to give the Palestinian Authority a batch of vaccine doses for 1,000 Palestinian medical workers. The Palestinian Authority also asked Israel to help coordinate the transfer of Palestinian-ordered vaccine shipments to the West Bank.
Israel transferred 2,000 vaccine doses for Palestinian health workers on 1 February 2021. This was the first batch of a reported 5,000 doses scheduled to be transferred.
On 20 February 2021, Palestinian officials reported that Israel had agreed to vaccinate 100,000 Palestinians who regularly enter Israel.
On 28 February 2021, Israel confirmed that it would vaccinate 120,000 Palestinian workers. Vaccinations of Palestinian workers began on 8 March 2021.
On 18 June 2021, Israel announced that it would supply at least 1 million Pfizer vaccine doses to the Palestinian Authority in exchange for an equal number of doses that were to be delivered to the Palestinian Authority later in the year. The deal was scrapped by the Palestinian Authority because the expiry date of the delivered vaccines was earlier than the date agreed upon. After the Palestinian Authority cancelled the deal, South Korea accepted these now-available near-expiration vaccine doses in exchange for supplying Israel with the same number of doses around September, when they became available to Korea.
Israel initially blocked and later permitted entry of 2,000 Sputnik V vaccine doses into the Gaza Strip.
Egypt.
On 8 March 2020, Israel closed the Taba Border Crossing with Egypt, fearing the spread of the coronavirus from Egypt. Non-Israelis were not permitted to enter Israel; Israelis returning from Egypt were required to enter an immediate 14-day quarantine.
Jordan.
Israel did not place restrictions on crossing the border with Jordan. Jordan closed its borders with all neighboring countries, including Israel, from 11 March 2020.
On 15 April 2020, the "Jerusalem Post" reported that Israel was to provide 5,000 medical protection masks to Jordan.
Syria.
Israel agreed to pay Russia to send Russian-made Sputnik V vaccine doses to Syria as part of a Russia-mediated prisoner swap agreement.
Criticism and opposition to COVID-19 restrictions.
Beginning in April 2020, a series of protests by various social and political groups took place across Israel, opposing lockdowns, vaccine mandates, government restriction policies, and vaccination in general. The protests coincided with similar demonstrations and riots worldwide, though some of the earlier protests were linked to the specific 2019–2021 Israeli political crisis.
Notable people infected with COVID-19.
Then Health Minister Yaakov Litzman and his wife tested positive for the coronavirus on 2 April 2020. News reports later claimed that Litzman had violated the government's ban on participating in group prayer the day before he was diagnosed. His office denied the claims.
Hebrew University of Jerusalem professor Mark Steiner died of the virus on 6 April 2020.
Former Chief Rabbi Eliyahu Bakshi-Doron died of the virus on 12 April 2020.
Jerusalem Affairs Minister Rafi Peretz tested positive on 1 August 2020.
Aliyah and Integration Minister Pnina Tamano-Shata tested positive on 24 August 2020.
Knesset member Yinon Azulai tested positive on 9 September 2020.
Rabbi Shmaryahu Yosef Chaim Kanievsky was diagnosed with COVID-19 on 2 October 2020. On 28 October 2020, Kanievsky's physician said Kanievsky had recovered from the virus.
Environmental Protection Minister Gila Gamliel tested positive on 3 October 2020. It was later claimed that Gamliel violated lockdown rules by traveling further than 1 km to her in-laws' house for Yom Kippur and attending synagogue there. She did not reveal this information during her epidemiological investigation, instead claiming she had been infected by her driver.
Knesset member Ayman Odeh tested positive on 4 October 2020.
Knesset member Moshe Abutbul tested positive on 5 October 2020.
Former Shin Bet Deputy Director Itzhak Ilan died of the virus on 16 October 2020.
Actor Yehuda Barkan died of the virus on 23 October 2020.
Minister of Regional Cooperation Ofir Akunis tested positive on 9 November 2020.
Knesset member David Bitan tested positive on 7 December 2020, and was later hospitalized.
Knesset member Ya'akov Asher tested positive on 20 December 2020.
Former Chief Rabbi Yisrael Meir Lau tested positive on 17 January 2021, only a few days after receiving the second dose of the vaccine.
Knesset member Vladimir Beliak tested positive on 14 July 2021.
Natan Sharansky and his wife Avital Sharansky both tested positive on 3 August 2021, despite both being fully vaccinated.
Knesset member Ofer Cassif tested positive on 9 August 2021.
Knesset member Gilad Kariv tested positive on 10 August 2021 and was later hospitalized.
Knesset member Simcha Rothman tested positive on 12 August 2021.
Knesset member Inbar Bezek tested positive on 16 August 2021.
Knesset member Itamar Ben-Gvir tested positive on 16 August 2021 and was later hospitalized.
During the fifth wave, many Israeli politicians tested positive, including Knesset members Michael Biton, Moshe Tur-Paz, Alex Kushnir, and Dudi Amsalem, Foreign Minister Yair Lapid, and Finance Minister Avigdor Liberman.
Remote work.
Rachel Gould and M. Kate Gallagher have researched the ways in which COVID-19 has altered Israeli life, specifically with respect to remote work. In an article in The Israel Journal of Foreign Affairs, they lay out the advantages and disadvantages of working from home (WFH). In Israeli society specifically, they state, work is much more focused on hours than on completing tasks. To see whether this hours-based approach carries over to attitudes toward remote work, Gould and Gallagher set up an experimental research approach and found that two-thirds of Israelis felt that remote work was just as effective as working in an office. This WFH phenomenon did not only "increase productivity and satisfaction"; it also changed the rigidity of the Israeli work schedule and made the system more flexible. This change has significant implications for Israel's innovation and increasing "global clout", which Gould and Gallagher predict will continue to grow as the work system changes. However, they caution that in order to keep increasing innovation and efficiency, Israel's workforce must prioritize climate change and investment in clean energy.
Statistics.
Graphs.
According to Israel Ministry of Health.
New cases per day
Data is updated by MOH at 09:00 and 21:00 (IST) every day.
Deaths per day
Data is according to MOH update at 08:00 (IST) every day.
Diagnostic tests per day
Data is according to the MOH dashboard.
Vaccines per day
Case fatality rate (percent)
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": "E"
},
{
"math_id": 2,
"text": "N=3E\\,\\,\\,"
},
{
"math_id": 3,
"text": "E<=2"
},
{
"math_id": 4,
"text": "N=6+2\\times(E-2)\\,\\,\\,"
},
{
"math_id": 5,
"text": "E>2"
},
{
"math_id": 6,
"text": "A/4"
},
{
"math_id": 7,
"text": "A"
}
] |
https://en.wikipedia.org/wiki?curid=63272141
|
63273767
|
Geodesic bicombing
|
In metric geometry, a geodesic bicombing distinguishes a class of geodesics of a metric space. The study of metric spaces with distinguished geodesics traces back to the work of the mathematician Herbert Busemann. The convention of calling such a collection of paths of a metric space a bicombing is due to William Thurston. By imposing a weak global non-positive curvature condition on a geodesic bicombing, several results from the theory of CAT(0) spaces and Banach space theory may be recovered in a more general setting.
Definition.
Let formula_0 be a metric space. A map formula_1 is a "geodesic bicombing" if for all points formula_2 the map formula_3 is a unit speed metric geodesic from formula_4 to formula_5, that is, formula_6, formula_7 and formula_8 for all real numbers formula_9.
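As a simple illustration (not part of the formal definition above), in a normed vector space the linear interpolation σ(x, y, t) = (1 − t)x + ty is a geodesic bicombing. The short Python sketch below, with names of our own choosing, checks the defining identity numerically on random points:

```python
import numpy as np

def sigma(x, y, t):
    """Linear bicombing in a normed vector space: sigma_xy(t) = (1 - t) x + t y."""
    return (1.0 - t) * x + t * y

def d(x, y):
    """Euclidean metric on R^n."""
    return np.linalg.norm(x - y)

# Check d(sigma_xy(s), sigma_xy(t)) = |s - t| d(x, y) on a few sample parameters.
rng = np.random.default_rng(0)
x, y = rng.normal(size=3), rng.normal(size=3)
for s, t in [(0.0, 1.0), (0.25, 0.75), (0.1, 0.9)]:
    assert np.isclose(d(sigma(x, y, s), sigma(x, y, t)), abs(s - t) * d(x, y))
print("the linear bicombing satisfies the geodesic identity on the sampled parameters")
```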
Different classes of geodesic bicombings.
A geodesic bicombing formula_1 is:
Examples.
Examples of metric spaces with a conical geodesic bicombing include:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "(X,d)"
},
{
"math_id": 1,
"text": "\\sigma\\colon X\\times X\\times [0,1]\\to X"
},
{
"math_id": 2,
"text": "x,y\\in X"
},
{
"math_id": 3,
"text": "\\sigma_{xy}(\\cdot):=\\sigma(x,y,\\cdot)"
},
{
"math_id": 4,
"text": "x"
},
{
"math_id": 5,
"text": "y"
},
{
"math_id": 6,
"text": "\\sigma_{xy}(0)=x"
},
{
"math_id": 7,
"text": "\\sigma_{xy}(1)=y"
},
{
"math_id": 8,
"text": "d(\\sigma_{xy}(s), \\sigma_{xy}(t))=\\vert s-t\\vert d(x,y)"
},
{
"math_id": 9,
"text": "s,t\\in [0,1]"
},
{
"math_id": 10,
"text": "\\sigma_{xy}(t)=\\sigma_{yx}(1-t)"
},
{
"math_id": 11,
"text": "t\\in [0,1]"
},
{
"math_id": 12,
"text": "\\sigma_{xy}((1-\\lambda)s+\\lambda t)=\\sigma_{pq}(\\lambda)"
},
{
"math_id": 13,
"text": "x,y\\in X, 0\\leq s\\leq t\\leq 1, p:=\\sigma_{xy}(s), q:=\\sigma_{xy}(t), "
},
{
"math_id": 14,
"text": "\\lambda\\in [0,1]"
},
{
"math_id": 15,
"text": "d(\\sigma_{xy}(t), \\sigma_{x^\\prime y^\\prime}(t))\\leq (1-t)d(x,x^\\prime)+t d(y,y^\\prime)"
},
{
"math_id": 16,
"text": "x,x^\\prime, y, y^\\prime\\in X"
},
{
"math_id": 17,
"text": "t\\mapsto d(\\sigma_{xy}(t), \\sigma_{x^\\prime y^\\prime}(t)) "
},
{
"math_id": 18,
"text": "[0,1]"
},
{
"math_id": 19,
"text": "(P_1(X),W_1),"
},
{
"math_id": 20,
"text": "W_1"
}
] |
https://en.wikipedia.org/wiki?curid=63273767
|
63276409
|
Gram–Euler theorem
|
In geometry, the Gram–Euler theorem, Gram-Sommerville, Brianchon-Gram or Gram relation (named after Jørgen Pedersen Gram, Leonhard Euler, Duncan Sommerville and Charles Julien Brianchon) is a generalization of the internal angle sum formula of polygons to higher-dimensional polytopes. The equation constrains the sums of the interior angles of a polytope in a manner analogous to the Euler relation on the number of d-dimensional faces.
Statement.
Let formula_0 be an formula_1-dimensional convex polytope. For each "k"-face formula_2, with formula_3 its dimension (0 for vertices, 1 for edges, 2 for faces, etc., up to "n" for "P" itself), its interior (higher-dimensional) solid angle formula_4 is defined by choosing a small enough formula_5-sphere centered at some point in the interior of formula_2 and finding the surface area contained inside formula_0. Then the Gram–Euler theorem states: formula_6 In non-Euclidean geometry of constant curvature (i.e. spherical, formula_7, and hyperbolic, formula_8, geometry) the relation gains a volume term, but only if the dimension "n" is even: formula_9 Here, formula_10 is the normalized (hyper)volume of the polytope (i.e., the fraction of the "n"-dimensional spherical or hyperbolic space); the angles formula_4 also have to be expressed as fractions (of the ("n"-1)-sphere).
When the polytope is simplicial, additional angle restrictions known as "Perles relations" hold, analogous to the Dehn–Sommerville equations for the number of faces.
Examples.
For a two-dimensional polygon, the statement expands into: formula_11 where the first term formula_12 is the sum of the internal vertex angles, the second sum is over the edges, each of which has internal angle formula_13, and the final term corresponds to the entire polygon, which has a full internal angle formula_14. For a polygon with formula_1 edges, the theorem tells us that formula_15, or equivalently, formula_16. For a polygon on a sphere, the relation gives the spherical surface area or solid angle as the spherical excess: formula_17.
For a three-dimensional polyhedron the theorem reads: formula_18 where formula_19 is the solid angle at a vertex, formula_20 the dihedral angle at an edge (the solid angle of the corresponding lune is twice as big), the third sum counts the faces (each with an interior hemisphere angle of formula_14), and the last term is the interior solid angle (full sphere or formula_21).
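As a quick numerical check of the three-dimensional relation (a worked example of our own, not from the source), consider a cube: each of its 8 vertices subtends a solid angle of π/2, and each of its 12 edges has a dihedral angle of π/2.

```python
import math

# Alternating angle sum of the Gram–Euler relation, evaluated for a cube
num_vertices, num_edges, num_faces = 8, 12, 6
vertex_solid_angle = math.pi / 2    # one octant of the unit sphere at each corner
edge_dihedral_angle = math.pi / 2   # right dihedral angle along each edge

total = (num_vertices * vertex_solid_angle
         - 2 * num_edges * edge_dihedral_angle
         + num_faces * 2 * math.pi
         - 4 * math.pi)
print(total)  # 0.0, as the theorem predicts
```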
History.
The n-dimensional relation was first proven by Sommerville, Heckman and Grünbaum for the spherical, hyperbolic and Euclidean cases, respectively.
|
[
{
"math_id": 0,
"text": "P"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "F"
},
{
"math_id": 3,
"text": "k = \\dim(F)"
},
{
"math_id": 4,
"text": "\\angle(F)"
},
{
"math_id": 5,
"text": "(n - 1)"
},
{
"math_id": 6,
"text": "\\sum_{F \\subset P} (-1)^{\\dim F} \\angle(F) = 0"
},
{
"math_id": 7,
"text": "\\epsilon = 1"
},
{
"math_id": 8,
"text": "\\epsilon = -1"
},
{
"math_id": 9,
"text": "\\sum_{F \\subset P} (-1)^{\\dim F} \\angle(F) = \\epsilon^{n/2}(1 + (-1)^n)\\operatorname{Vol}(P)"
},
{
"math_id": 10,
"text": "\\operatorname{Vol}(P)"
},
{
"math_id": 11,
"text": "\\sum_{v} \\alpha_v - \\sum_e \\pi + 2\\pi = 0"
},
{
"math_id": 12,
"text": "A=\\textstyle\\sum \\alpha_v"
},
{
"math_id": 13,
"text": "\\pi"
},
{
"math_id": 14,
"text": "2\\pi"
},
{
"math_id": 15,
"text": "A - \\pi n + 2\\pi = 0"
},
{
"math_id": 16,
"text": "A = \\pi (n - 2)"
},
{
"math_id": 17,
"text": "\\Omega = A - \\pi (n - 2)"
},
{
"math_id": 18,
"text": "\\sum_{v} \\Omega_v - 2\\sum_e \\theta_e + \\sum_f 2\\pi - 4\\pi = 0"
},
{
"math_id": 19,
"text": "\\Omega_v"
},
{
"math_id": 20,
"text": "\\theta_e"
},
{
"math_id": 21,
"text": "4\\pi"
}
] |
https://en.wikipedia.org/wiki?curid=63276409
|
63282018
|
Life Cycle Climate Performance
|
Life Cycle Climate Performance (LCCP) is an evolving method to evaluate the carbon footprint and global warming impact of heating, ventilation, air conditioning (AC), refrigeration systems, and potentially other applications such as thermal insulating foam. It is calculated as the sum of "direct", "indirect", and "embodied" greenhouse gas (GHG) emissions generated over the lifetime of the system "from cradle to grave", i.e. from manufacture to disposal. "Direct emissions" include all climate forcing effects from the release of refrigerants into the atmosphere, including annual leakage and losses during service and disposal of the unit. "Indirect emissions" include the climate forcing effects of GHG emissions from the electricity powering the equipment. The "embodied emissions" include the climate forcing effects of the manufacturing processes, transport, and installation for the refrigerant, materials, and equipment, and for recycling or other disposal of the product at the end of its useful life.
LCCP is more inclusive than previous metrics such as Total Equivalent Warming Impact (TEWI), which considers direct and indirect GHG emissions but overlooks embodied emissions, and Life Cycle Warming Impact (LCWI), which considers direct, indirect and refrigerant manufacturing emissions but overlooks appliance manufacturing, materials, transport, installation, and recycling. Enhanced and Localized Life Cycle Climate Performance (EL-LCCP) is the latest and most comprehensive carbon metric and takes into account: 1) real-world operating conditions, including the actual hour-by-hour carbon intensity of electricity generation, transmission, and distribution, which is degraded by high ambient temperature; 2) specific conditions of AC condensers located within urban heat islands and in locations with poor air circulation (mounted too close to buildings, clustered and stacked), as well as of refrigerators and refrigerated display cases located against walls, inside cabinets, and other locations that compromise energy efficiency; 3) local climate conditions, such as higher ambient temperature at the location of the equipment than at the weather monitoring stations, which typically are located away from human influence.
TEWI was developed by experts at Oak Ridge National Laboratory under contract from Allied Signal (now Honeywell) and was a step forward as a complement and enhancement of previous metrics like coefficient of performance (COP) and Seasonal Energy Efficiency Ratio (SEER), which consider energy use but not global warming potential (GWP) and emissions of refrigerants.
Development.
LCCP was developed in 1999 by an expert working for the United States Environmental Protection Agency and serving on the Montreal Protocol Technology and Economic Assessment Panel (TEAP), who noticed that TEWI ignored the substantial emissions of unwanted hydrofluorocarbon (HFC)-23 byproducts of hydrochlorofluorocarbon (HCFC)-22 production. The byproduct emissions increased the climate forcing GWP of ozone-depleting HCFC-22 by up to 20%, depending on the efficiency of the chemical manufacturing process. At the time, all fluorocarbon manufacturers merely discharged the hazardous HFC-23 chemical waste to the atmosphere. In 2005, a joint committee of the United Nations Intergovernmental Panel on Climate Change (IPCC) and the TEAP endorsed the LCCP metric for use in evaluating low carbon refrigeration and AC equipment.
Calculation.
The equations to calculate LCCP for mobile and stationary equipment are similar, with the exception that the calculation for mobile equipment includes the energy consumption necessary to transport the weight of the AC in the vehicle, whether in operation or not.
formula_0
formula_1
formula_2
formula_3
where: C = Refrigerant Charge (kg), L=Average Lifetime of Equipment (yr), ALR = Annual Leakage Rate (% of Refrigerant Charge), EOL = End of Life Refrigerant Leakage (% of Refrigerant Charge), GWP = Global Warming Potential (kg CO2e/kg), Adp. GWP = GWP of Atmospheric Degradation Product of the Refrigerant (kg CO2e/kg), AEC = Annual Energy Consumption (kWh), EM = CO2 Produced/kWh (kg CO2e/kWh), m = Mass of Unit (kg), MM = CO2e Produced/Material (kg CO2e/kg), mr = Mass of Recycled Material (kg), RM = CO2e Produced/Recycled Material (kg CO2e/kg), RFM = Refrigerant Manufacturing Emissions (kg CO2e/kg), RFD = Refrigerant Disposal Emissions (kg CO2e/kg).
Refrigerant GWP values are typically from the IPCC (2013) for the 100-year timeline.
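The equations above translate directly into a short calculation. The following Python sketch uses invented placeholder values that do not describe any real product; it only illustrates how the terms combine:

```python
def lccp(C, L, ALR, EOL, GWP, adp_GWP, AEC, EM, materials, recycled, RFM, RFD):
    """Sum the direct, indirect and embodied emissions defined above (kg CO2e).

    materials and recycled are lists of (mass in kg, kg CO2e per kg) pairs.
    """
    direct = C * (L * ALR + EOL) * (GWP + adp_GWP)
    indirect = L * AEC * EM + sum(m * mm for m, mm in materials)
    embodied = (sum(mr * rm for mr, rm in recycled)
                + C * (1 + L * ALR) * RFM
                + C * (1 - EOL) * RFD)
    return direct + indirect + embodied

# Hypothetical numbers for a small air conditioner, chosen only for illustration
print(lccp(C=1.2, L=15, ALR=0.04, EOL=0.15, GWP=2000, adp_GWP=0,
           AEC=1500, EM=0.5, materials=[(40, 3.0)], recycled=[(10, 1.0)],
           RFM=10, RFD=2))
```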
Applications.
Motor Vehicle Air Conditioning (MAC).
LCCP was perfected for motor vehicle air conditioning (MAC) by a technical committee of the Society of Automotive Engineers (SAE) (now SAE International) and named Global Refrigerants Energy and ENvironmental – Mobile Air Conditioning – Life Cycle Climate Performance (GREEN-MAC-LCCP©). The GREEN-MAC-LCCP model was approved and assigned SAE Technical Standard J2766. The global automotive community used the SAE metric to choose the next-generation refrigerant hydrofluoroolefin (HFO)-1234yf (ozone safe; GWP<1) to replace hydrofluorocarbon (HFC)-134a (ozone safe; GWP=1300), which was a temporary replacement for chlorofluorocarbon (CFC)-12 (ozone depletion potential (ODP)=1; GWP=1300) when fast action was needed to avoid a stratospheric ozone tipping point, i.e., destruction at a level that may have been irreversible within human time dimensions.
LCCP was perfected for stationary air conditioning applications by a technical committee of the International Institute of Refrigeration (IIR) chaired by experts from University of Maryland Center for Environmental Energy Engineering (UMD CEEE).
EL-LCCP was developed for room ACs by experts from the UMD CEEE and the Institute for Governance & Sustainable Development (IGSD) working in cooperation with the Government of Morocco and guided by a technical advisory team and ad hoc committee of refrigeration and air conditioning engineers from Brazil, Costa Rica, China, France, and the United States. Moroccan government partners included the Morocco National Ozone Unit; Ministre de l'Énergie, des Mines et du Développement Durable; and Agence Marocaine de l’Efficacité Énergétique (AMEE).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "LCCP = Direct Emissions + Indirect Emissions+ Embodied Emissions"
},
{
"math_id": 1,
"text": "Direct Emissions = C*(L*ALR + EOL)*(GWP + Adp. GWP) "
},
{
"math_id": 2,
"text": "Indirect Emissions = L*AEC*EM + \\Sigma(m*MM)"
},
{
"math_id": 3,
"text": "Embodied Emissions = \\Sigma(mr*RM ) + C*(1+ L*ALR)*RFM +C*(1-EOL)*RFD "
}
] |
https://en.wikipedia.org/wiki?curid=63282018
|
6328320
|
Cataphora
|
Use of an expression or word that co-refers with a later, more specific, expression
In linguistics, cataphora (; from Greek, "καταφορά", "kataphora", "a downward motion" from "κατά", "kata", "downwards" and "φέρω", "pherō", "I carry") is the use of an expression or word that co-refers with a later, more specific expression in the discourse. The preceding expression, whose meaning is determined or specified by the later expression, may be called a cataphor. Cataphora is a type of anaphora, although the terms "anaphora" and "anaphor" are sometimes used in a stricter sense, denoting only cases where the order of the expressions is the reverse of that found in cataphora.
An example of cataphora in English is the following sentence:
In this sentence, the pronoun "he" (the cataphor) appears earlier than the noun "John" (the postcedent) that it refers to. This is the reverse of the more normal pattern, "strict" anaphora, where a referring expression such as "John" (in the example above) or "the soldier" (in the example below) appears before any pronouns that reference it. Both cataphora and anaphora are types of endophora.
Examples.
Other examples of the same type of cataphora are:
Cataphora across sentences is often used for rhetorical effect. It can build suspense and provide a description. For example:
The examples of cataphora described so far are strict cataphora, because the anaphor is an actual pronoun. Strict within-sentence cataphora is highly restricted in the sorts of structures it can appear within, generally restricted to a preceding subordinate clause. More generally, however, any fairly general noun phrase can be considered an anaphor when it co-refers with a more specific noun phrase (i.e. both refer to the same entity), and if the more general noun phrase comes first, it can be considered an example of cataphora. Non-strict cataphora of this sort can occur in many contexts, for example:
Strict cross-sentence cataphora where the antecedent is an entire sentence is fairly common cross-linguistically:
Cataphora of this sort is particularly common in formal contexts, using an anaphoric expression such as "this" or "the following":
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "x = y^3 + 2z - 1"
}
] |
https://en.wikipedia.org/wiki?curid=6328320
|
63285
|
JPEG 2000
|
Image compression standard and coding system
JPEG 2000 (JP2) is an image compression standard and coding system. It was developed from 1997 to 2000 by a Joint Photographic Experts Group committee chaired by Touradj Ebrahimi (later the JPEG president), with the intention of superseding their original JPEG standard (created in 1992), which is based on a discrete cosine transform (DCT), with a newly designed, wavelet-based method. The standardized filename extension is .jp2 for ISO/IEC 15444-1 conforming files and .jpx for the extended part-2 specifications, published as ISO/IEC 15444-2. The registered MIME types are defined in RFC 3745. For ISO/IEC 15444-1 it is image/jp2.
The JPEG 2000 project was motivated by Ricoh's submission in 1995 of the CREW (Compression with Reversible Embedded Wavelets) algorithm to the standardization effort of JPEG-LS. Ultimately the LOCO-I algorithm was selected as the basis for JPEG-LS, but many of the features of CREW ended up in the JPEG 2000 standard.
JPEG 2000 codestreams offer several mechanisms to support spatial random access or region-of-interest access at varying degrees of granularity. It is possible to store different parts of the same picture using different quality.
JPEG 2000 is a compression standard based on a discrete wavelet transform (DWT). The standard could be adapted for motion imaging video compression with the Motion JPEG 2000 extension. JPEG 2000 technology was selected as the video coding standard for digital cinema in 2004. However, JPEG 2000 is generally not supported in web browsers for web pages as of 2024, and hence is not generally used on the World Wide Web. Nevertheless, web browsers with PDF support can generally display JPEG 2000 images embedded in PDFs.
Design goals.
While there is a modest increase in compression performance of JPEG 2000 compared to JPEG, the main advantage offered by JPEG 2000 is the significant flexibility of the codestream. The codestream obtained after compression of an image with JPEG 2000 is scalable in nature, meaning that it can be decoded in a number of ways; for instance, by truncating the codestream at any point, one may obtain a representation of the image at a lower resolution, or signal-to-noise ratio – see scalable compression. By ordering the codestream in various ways, applications can achieve significant performance increases. However, as a consequence of this flexibility, JPEG 2000 requires codecs that are complex and computationally demanding. Another difference, in comparison with JPEG, is in terms of visual artifacts: JPEG 2000 only produces ringing artifacts, manifested as blur and rings near edges in the image, while JPEG produces both ringing artifacts and 'blocking' artifacts, due to its 8×8 blocks.
JPEG 2000 has been published as an ISO standard, ISO/IEC 15444. The cost of obtaining all documents for the standard has been estimated at CHF 2,718 (US$2,720 as of 2015).
Applications.
Notable markets and applications intended to be served by the standard include:
Improvements over the 1992 JPEG standard.
Multiple resolution representation.
JPEG 2000 decomposes the image into a multiple resolution representation in the course of its compression process. This pyramid representation can be put to use for other image presentation purposes beyond compression.
Progressive transmission by pixel and resolution accuracy.
These features are more commonly known as "progressive decoding" and "signal-to-noise ratio (SNR) scalability". JPEG 2000 provides efficient codestream organizations which are progressive by pixel accuracy and by image resolution (or by image size). This way, after a smaller part of the whole file has been received, the viewer can see a lower quality version of the final picture. The quality then improves progressively through downloading more data bits from the source.
Choice of lossless or lossy compression.
Like the Lossless JPEG standard, the JPEG 2000 standard provides both lossless and lossy compression in a single compression architecture. Lossless compression is provided by the use of a reversible integer wavelet transform in JPEG 2000.
Error resilience.
Like JPEG 1992, JPEG 2000 is robust to bit errors introduced by noisy communication channels, due to the coding of data in relatively small independent blocks.
Flexible file format.
The JP2 and JPX file formats allow for handling of color-space information, metadata, and for interactivity in networked applications as developed in the JPEG Part 9 JPIP protocol.
High dynamic range support.
JPEG 2000 supports bit depths of 1 to 38 bits per component. Supported color spaces include monochrome, 3 types of YCbCr, sRGB, PhotoYCC, CMY(K), YCCK and CIELab. It also later added support for CIEJab (CIECAM02), e-sRGB, ROMM, YPbPr and others.
Side channel spatial information.
Full support for transparency and alpha planes.
JPEG 2000 image coding system – Parts.
The JPEG 2000 image coding system (ISO/IEC 15444) consists of the following parts:
Technical discussion.
The aim of JPEG 2000 is not only improving compression performance over JPEG but also adding (or improving) features such as scalability and editability. JPEG 2000's improvement in compression performance relative to the original JPEG standard is actually rather modest and should not ordinarily be the primary consideration for evaluating the design. Very low and very high compression rates are supported in JPEG 2000. The ability of the design to handle a very large range of effective bit rates is one of the strengths of JPEG 2000. For example, to reduce the number of bits for a picture below a certain amount, the advisable thing to do with the first JPEG standard is to reduce the resolution of the input image before encoding it. That is unnecessary when using JPEG 2000, because JPEG 2000 already does this automatically through its multi-resolution decomposition structure. The following sections describe the algorithm of JPEG 2000.
According to the Royal Library of the Netherlands, "the current JP2 format specification leaves room for multiple interpretations when it comes to the support of ICC profiles, and the handling of grid resolution information".
Color components transformation.
Initially images have to be transformed from the RGB color space to another color space, leading to three "components" that are handled separately. There are two possible choices:
formula_0
If R, G, and B are normalized to the same precision, then numeric precision of CB and CR is one bit greater than the precision of the original components. This increase in precision is necessary to ensure reversibility. The chrominance components can be, but do not necessarily have to be, downscaled in resolution; in fact, since the wavelet transformation already separates images into scales, downsampling is more effectively handled by dropping the finest wavelet scale. This step is called "multiple component transformation" in the JPEG 2000 language since its usage is not restricted to the RGB color model.
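For illustration, the reversible transform given above can be written in a few lines of integer arithmetic. The following Python sketch (function names are ours) round-trips random 8-bit samples exactly:

```python
import numpy as np

def rct_forward(R, G, B):
    """Reversible component transform from the formula above (integer arithmetic)."""
    Y = (R + 2 * G + B) // 4
    Cb = B - G
    Cr = R - G
    return Y, Cb, Cr

def rct_inverse(Y, Cb, Cr):
    G = Y - (Cb + Cr) // 4
    R = Cr + G
    B = Cb + G
    return R, G, B

rng = np.random.default_rng(1)
R, G, B = rng.integers(0, 256, size=(3, 8, 8), dtype=np.int64)
back = rct_inverse(*rct_forward(R, G, B))
assert all(np.array_equal(a, b) for a, b in zip(back, (R, G, B)))
```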
Tiling.
After color transformation, the image is split into so-called "tiles", rectangular regions of the image that are transformed and encoded separately. Tiles can be any size, and it is also possible to consider the whole image as one single tile. Once the size is chosen, all the tiles will have the same size (except optionally those on the right and bottom borders). Dividing the image into tiles is advantageous in that the decoder will need less memory to decode the image and it can opt to decode only selected tiles to achieve a partial decoding of the image. The disadvantage of this approach is that the quality of the picture decreases due to a lower peak signal-to-noise ratio. Using many tiles can create a blocking effect similar to the older JPEG 1992 standard.
Wavelet transform.
These tiles are then wavelet-transformed to an arbitrary depth, in contrast to JPEG 1992 which uses an 8×8 block-size discrete cosine transform. JPEG 2000 uses two different wavelet transforms:
The wavelet transforms are implemented by the lifting scheme or by convolution.
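As an illustration of the lifting approach, the sketch below implements one level of the reversible Le Gall 5/3 lifting steps on a one-dimensional integer signal. The border handling here is a simplified mirror extension rather than the standard's exact symmetric-extension rules, but the pair of functions still reconstructs the input perfectly, which is the property lifting is designed to guarantee:

```python
import numpy as np

def dwt53_forward(x):
    """One level of the reversible 5/3 lifting transform (even-length 1-D signal)."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    right = np.append(even[1:], even[-1])      # mirror the last even sample
    d = odd - (even + right) // 2              # predict step -> detail coefficients
    left = np.insert(d[:-1], 0, d[0])          # mirror the first detail coefficient
    s = even + (left + d + 2) // 4             # update step -> approximation coefficients
    return s, d

def dwt53_inverse(s, d):
    left = np.insert(d[:-1], 0, d[0])
    even = s - (left + d + 2) // 4             # undo the update step
    right = np.append(even[1:], even[-1])
    odd = d + (even + right) // 2              # undo the predict step
    x = np.empty(len(even) + len(odd), dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([5, 7, 9, 8, 4, 3, 6, 10])
s, d = dwt53_forward(x)
assert np.array_equal(dwt53_inverse(s, d), x)  # lossless reconstruction
```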
Quantization.
After the wavelet transform, the coefficients are scalar-quantized to reduce the number of bits to represent them, at the expense of quality. The output is a set of integer numbers which have to be encoded bit-by-bit. The parameter that can be changed to set the final quality is the quantization step: the greater the step, the greater is the compression and the loss of quality. With a quantization step that equals 1, no quantization is performed (it is used in lossless compression).
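A minimal sketch of a dead-zone scalar quantizer of this kind is shown below (our own simplified illustration, not the normative procedure of the standard); the reconstruction places nonzero coefficients near the middle of their quantization interval:

```python
import numpy as np

def quantize(coeffs, step):
    """Dead-zone scalar quantization: q = sign(c) * floor(|c| / step)."""
    return np.sign(coeffs) * np.floor(np.abs(coeffs) / step)

def dequantize(q, step, r=0.5):
    """Reconstruct nonzero bins at (|q| + r) * step; zero stays zero."""
    return np.sign(q) * (np.abs(q) + r) * step * (q != 0)

coeffs = np.array([-7.3, -0.4, 0.0, 0.6, 5.2, 12.9])
q = quantize(coeffs, 2.0)          # step = 2.0; larger steps compress more, lose more
print(q)                           # [-3. -0.  0.  0.  2.  6.]
print(dequantize(q, 2.0))          # coarse approximations of the original coefficients
```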
Coding.
The result of the previous process is a collection of "sub-bands" which represent several approximation scales. A sub-band is a set of "coefficients"—real numbers which represent aspects of the image associated with a certain frequency range as well as a spatial area of the image.
The quantized sub-bands are split further into "precincts", rectangular regions in the wavelet domain. They are typically sized so that they provide an efficient way to access only part of the (reconstructed) image, though this is not a requirement.
Precincts are split further into "code blocks". Code blocks are in a single sub-band and have equal sizes—except those located at the edges of the image. The encoder has to encode the bits of all quantized coefficients of a code block, starting with the most significant bits and progressing to less significant bits by a process called the "EBCOT" scheme. "EBCOT" here stands for "Embedded Block Coding with Optimal Truncation". In this encoding process, each bit plane of the code block gets encoded in three so-called "coding passes", first encoding bits (and signs) of insignificant coefficients with significant neighbors (i.e., with 1-bits in higher bit planes), then refinement bits of significant coefficients and finally coefficients without significant neighbors. The three passes are called "Significance Propagation", "Magnitude Refinement" and "Cleanup" pass, respectively.
In lossless mode all bit planes have to be encoded by the EBCOT, and no bit planes can be dropped.
The bits selected by these coding passes then get encoded by a context-driven binary arithmetic coder, namely the binary MQ-coder (as also employed by JBIG2). The context of a coefficient is formed by the state of its eight neighbors in the code block.
The result is a bit-stream that is split into "packets" where a "packet" groups selected passes of all code blocks from a precinct into one indivisible unit. Packets are the key to quality scalability (i.e., packets containing less significant bits can be discarded to achieve lower bit rates and higher distortion).
Packets from all sub-bands are then collected in so-called "layers".
The way the packets are built up from the code-block coding passes, and thus which packets a layer will contain, is not defined by the JPEG 2000 standard, but in general a codec will try to build layers in such a way that the image quality will increase monotonically with each layer, and the image distortion will shrink from layer to layer. Thus, layers define the progression by image quality within the codestream.
The problem is now to find the optimal packet length for all code blocks which minimizes the overall distortion in a way that the generated target bitrate equals the demanded bit rate.
While the standard does not define a procedure as to how to perform this form of rate–distortion optimization, the general outline is given in one of its many appendices: For each bit encoded by the EBCOT coder, the improvement in image quality, defined as mean square error, gets measured; this can be implemented by an easy table-lookup algorithm. Furthermore, the length of the resulting codestream gets measured. This forms for each code block a graph in the rate–distortion plane, giving image quality over bitstream length. The optimal selection for the truncation points, thus for the packet-build-up points is then given by defining critical "slopes" of these curves, and picking all those coding passes whose curve in the rate–distortion graph is steeper than the given critical slope. This method can be seen as a special application of the method of "Lagrange multiplier" which is used for optimization problems under constraints. The Lagrange multiplier, typically denoted by λ, turns out to be the critical slope, the constraint is the demanded target bitrate, and the value to optimize is the overall distortion.
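To make the slope idea concrete, the following sketch (a simplified greedy prefix search over hypothetical, made-up rate–distortion points for one code block, not the full convex-hull procedure of a real encoder) keeps adding coding passes as long as the marginal distortion reduction per extra byte stays above a chosen critical slope λ:

```python
def select_truncation(points, lam):
    """points[i] = (cumulative bytes, remaining distortion) after coding pass i,
    with points[0] = (0, initial distortion). Returns the chosen truncation index."""
    best = 0
    for i in range(1, len(points)):
        d_rate = points[i][0] - points[best][0]
        d_dist = points[best][1] - points[i][1]   # distortion shrinks as rate grows
        if d_rate > 0 and d_dist / d_rate >= lam:
            best = i
    return best

# Hypothetical rate–distortion points for one code block
rd = [(0, 100.0), (10, 60.0), (25, 40.0), (60, 33.0), (120, 31.0)]
print(select_truncation(rd, lam=0.5))   # -> 2: later passes improve quality too slowly
```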
Packets can be reordered almost arbitrarily in the JPEG 2000 bit-stream; this gives the encoder as well as image servers a high degree of freedom.
Already encoded images can be sent over networks with arbitrary bit rates by using a layer-progressive encoding order. On the other hand, color components can be moved back in the bit-stream; lower resolutions (corresponding to low-frequency sub-bands) could be sent first for image previewing. Finally, spatial browsing of large images is possible through appropriate tile or partition selection. All these operations do not require any re-encoding but only byte-wise copy operations.
Compression ratio.
Compared to the previous JPEG standard, JPEG 2000 delivers a typical compression gain in the range of 20%, depending on the image characteristics. Higher-resolution images tend to benefit more, where JPEG 2000's spatial-redundancy prediction can contribute more to the compression process. In very low-bitrate applications, studies have shown JPEG 2000 to be outperformed by the intra-frame coding mode of H.264.
Computational complexity and performance.
JPEG 2000 is much more complicated in terms of computational complexity than the original JPEG standard. Tiling, the color component transform, the discrete wavelet transform, and quantization can be performed relatively quickly, but the entropy coding stage is time-consuming and quite complicated. The EBCOT context modelling and the arithmetic MQ-coder take up most of the runtime of a JPEG 2000 codec.
On CPUs, fast JPEG 2000 encoding and decoding relies on vector instructions (AVX/SSE) and on multithreading, processing each tile in a separate thread. The fastest JPEG 2000 implementations use both CPU and GPU resources to achieve high performance.
File format and codestream.
Similar to JPEG-1, JPEG 2000 defines both a file format and a codestream. Whereas the codestream entirely describes the image samples, the file format includes additional meta-information such as the resolution of the image or the color space that has been used to encode the image. JPEG 2000 images should—if stored as files—be boxed in the JPEG 2000 file format, where they get the .jp2 extension. The part-2 extension to JPEG 2000 (ISO/IEC 15444-2) enriches the file format by including mechanisms for animation or composition of several codestreams into one single image. This extended file format is called JPX, and should use the file extension .jpf, although .jpx is also used.
There is no standardized extension for codestream data because codestream data is not meant to be stored in files in the first place; when this is done for testing purposes, the extension .jpc, .j2k or .j2c is commonly used.
Metadata.
For traditional JPEG, additional metadata, e.g. lighting and exposure conditions, is kept in an application marker in the Exif format specified by the JEITA. JPEG 2000 chooses a different route, encoding the same metadata in XML form. The reference between the Exif tags and the XML elements is standardized by the ISO TC42 committee in the standard 12234-1.4.
Extensible Metadata Platform can also be embedded in JPEG 2000.
Legal status.
ISO 15444 is covered by patents and the specification lists 17 patent holders, but the contributing companies and organizations agreed that licenses for its first part—the core coding system—can be obtained free of charge from all contributors. However, this is not a formal guarantee. Licenses and royalties may be required to use some extensions.
The JPEG committee has stated:
<templatestyles src="Template:Blockquote/styles.css" />It has always been a strong goal of the JPEG committee that its standards should be implementable in their baseline form without payment of royalty and license fees... The up and coming JPEG 2000 standard has been prepared along these lines, and agreement reached with over 20 large organizations holding many patents in this area to allow use of their intellectual property in connection with the standard without payment of license fees or royalties.
However, the JPEG committee acknowledged in 2004 that undeclared submarine patents may present a hazard:
<templatestyles src="Template:Blockquote/styles.css" />It is of course still possible that other organizations or individuals may claim intellectual property rights that affect implementation of the standard, and any implementers are urged to carry out their own searches and investigations in this area.
In ISO/IEC 15444-1:2016, the JPEG committee stated in "Annex L: Patent statement":
<templatestyles src="Template:Blockquote/styles.css" />The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) draw attention to the fact that it is claimed that compliance with this Recommendation | International Standard may involve the use of patents.
The complete list of intellectual property rights statements can be obtained from the ITU-T and ISO patent declaration databases (available at https://www.iso.org/iso-standards-and-patents.html)
ISO and IEC take no position concerning the evidence, validity and scope of these patent rights.
Attention is drawn to the possibility that some of the elements of this Recommendation | International Standard may be the subject of patent rights other than those identified in the above mentioned databases. ISO and IEC shall not be held responsible for identifying any or all such patent rights.
Related standards.
Several additional parts of the JPEG 2000 standard exist; amongst them are ISO/IEC 15444-2:2000, JPEG 2000 extensions defining the .jpx file format, featuring for example Trellis quantization, an extended file format and additional color spaces, ISO/IEC 15444-4:2000, the reference testing and ISO/IEC 15444-6:2000, the compound image file format (.jpm), allowing compression of compound text/image graphics.
Extensions for secure image transfer, "JPSEC" (ISO/IEC 15444-8), enhanced error-correction schemes for wireless applications, "JPWL" (ISO/IEC 15444-11) and extensions for encoding of volumetric images, "JP3D" (ISO/IEC 15444-10) are also already available from the ISO.
JPIP protocol for streaming JPEG 2000 images.
In 2005, a JPEG 2000–based image browsing protocol, called JPIP was published as ISO/IEC 15444-9. Within this framework, only selected regions of potentially huge images have to be transmitted from an image server on the request of a client, thus reducing the required bandwidth.
JPEG 2000 data may also be streamed using the ECWP and ECWPS protocols found within the ERDAS ECW/JP2 SDK.
Motion JPEG 2000.
Motion JPEG 2000 (MJ2), originally defined in Part 3 of the ISO standard for JPEG 2000 (ISO/IEC 15444-3:2002) as a standalone document, has now been expressed by ISO/IEC 15444-3:2002/Amd 2:2003 in terms of the ISO base format, ISO/IEC 15444-12, and in ITU-T Recommendation T.802. It specifies the use of the JPEG 2000 format for timed sequences of images (motion sequences), possibly combined with audio, and composed into an overall presentation. It also defines a file format, based on the ISO base media file format (ISO 15444-12). Filename extensions for Motion JPEG 2000 video files are .mj2 and .mjp2 according to RFC 3745.
It is an open ISO standard and an advanced update to MJPEG (or MJ), which was based on the legacy JPEG format. Unlike common video formats, such as MPEG-4 Part 2, WMV, and H.264, MJ2 does not employ temporal or inter-frame compression. Instead, each frame is an independent entity encoded by either a lossy or lossless variant of JPEG 2000. Its physical structure does not depend on time ordering, but it does employ a separate profile to complement the data. For audio, it supports LPCM encoding, as well as various MPEG-4 variants, as "raw" or complement data.
Motion JPEG 2000 (often referenced as MJ2 or MJP2) is considered as a digital archival format by the Library of Congress though MXF_OP1a_JP2_LL (lossless JPEG 2000 wrapped in MXF operational pattern 1a) is preferred by the LOC Packard Campus for Audio-Visual Conservation.
ISO base media file format.
ISO/IEC 15444-12 is identical with ISO/IEC 14496-12 (MPEG-4 Part 12) and it defines ISO base media file format. For example, Motion JPEG 2000 file format, MP4 file format or 3GP file format are also based on this ISO base media file format.
GML JP2 georeferencing.
The Open Geospatial Consortium (OGC) has defined a metadata standard for georeferencing JPEG 2000 images with embedded XML using the Geography Markup Language (GML) format: "GML in JPEG 2000 for Geographic Imagery Encoding (GMLJP2)", version 1.0.0, dated 2006-01-18. Version 2.0, entitled "GML in JPEG 2000 (GMLJP2) Encoding Standard Part 1: Core" was approved 2014-06-30.
JP2 and JPX files containing GMLJP2 markup can be located and displayed in the correct position on the Earth's surface by a suitable Geographic Information System (GIS), in a similar way to GeoTIFF and GTG images.
Application support.
Applications.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\n\\begin{array}{rl}\nY &=& \\left\\lfloor \\frac{R+2G+B}{4} \\right\\rfloor ; \\\\\nC_B &=& B - G ; \\\\\nC_R &=& R - G ;\n\\end{array}\n\\qquad\n\\begin{array}{rl}\nG &=& Y - \\left\\lfloor \\frac{C_B + C_R}{4} \\right\\rfloor ; \\\\\nR &=& C_R + G ; \\\\\nB &=& C_B + G.\n\\end{array}\n"
}
] |
https://en.wikipedia.org/wiki?curid=63285
|
632855
|
Reference dose
|
A reference dose is the United States Environmental Protection Agency's maximum acceptable oral dose of a toxic substance, "below which no adverse noncancer health effects should result from a lifetime of exposure". Reference doses have been most commonly determined for pesticides. The EPA defines an oral reference dose (abbreviated RfD) as:
[A]n estimate, with uncertainty spanning perhaps an order of magnitude, of a daily oral exposure to the human population (including sensitive subgroups) that is likely to be without an appreciable risk of deleterious effects during a lifetime.
Definition.
The United States Environmental Protection Agency defines a reference dose (abbreviated RfD) as the maximum acceptable oral dose of a toxic substance, below which no adverse non cancerous health effects should result from a lifetime of exposure. It is an estimate, with uncertainty spanning perhaps an order of magnitude, of a daily oral exposure to the human population (including sensitive subgroups) that is likely to be without an appreciable risk of deleterious effects during a lifetime.
Regulatory status.
RfDs are not enforceable standards, unlike the National Ambient Air Quality Standards. RfDs are risk assessment benchmarks, and the EPA tries to set other regulations so that people are not exposed to chemicals in amounts that exceed RfDs. According to the EPA from 2008, "[a]n aggregate daily exposure to a [chemical] at or below the RfD (expressed as 100 percent or less of the RfD) is generally considered acceptable by EPA."
States can set their own RfDs.
For example, the EPA set an acute RfD for children of 0.0015 mg/kg/day for the organochlorine insecticide endosulfan, based on neurological effects observed in test animals. The EPA then looked at dietary exposure to endosulfan, and found that for the most exposed 0.1% of children age 1–6, their daily consumption of the endosulfan exceeded this RfD. To remedy this, the EPA revoked the use of endosulfan on the crops that contributed the most to exposure of children: certain beans, peas, spinach, and grapes.
Types.
Reference doses are chemical-specific, "i.e." the EPA determines a unique reference dose for every substance it evaluates. Often separate acute (0–1 month) and chronic (more than one month) RfDs are determined for the same substance. Reference doses are specific to dietary exposure. When assessing inhalation exposure, the EPA uses "reference concentrations" (RfCs) instead of RfDs. Note that RfDs apply only to non-cancer effects. When evaluating carcinogenic effects, the EPA uses the Q1* method.
Determination.
RfDs are usually derived from animal studies. Animals (typically rats) are dosed with varying amounts of the substance in question, and the largest dose at which no effects are observed is identified. This dose level is called the No observable effect level, or NOEL. To account for the fact that humans may be more or less susceptible than the test animal, a 10-fold "uncertainty factor" is usually applied to the NOEL. This uncertainty factor is called the "interspecies uncertainty factor" or UFinter. An additional 10-fold uncertainty factor, the "intraspecies uncertainty factor" or UFintra, is usually applied to account for the fact that some humans may be substantially more sensitive to the effects of substances than others. Additional uncertainty factors may also be applied. In general:
formula_0
Frequently, a "lowest-observed-adverse-effect level" or LOAEL is used in place of a NOEL. If adverse effects are observed at all dose levels tested, then the smallest dose tested, the LOAEL, is used to calculate the RfD. An additional uncertainty factor usually applied in these cases, since the NOAEL, by definition, would be lower than the LOAEL had it been observed. If studies using human subjects are used to determine a RfD, then the interspecies uncertainty factor can be reduced to 1, but generally the 10-fold intraspecies uncertainty factor is retained. Such studies are rare.
Example.
As an example, consider the following determination of the RfD for the insecticide chlorpyrifos, adapted from the EPA's Interim Reregistration Eligibility Decision for chlorpyrifos.
The EPA determined the acute RfD to be 0.005 mg/kg/day based on a study in which male rats were administered a one-time dose of chlorpyrifos and blood cholinesterase activity was monitored. Cholinesterase inhibition was observed at all dose levels tested, the lowest of which was 1.5 mg/kg. This level was thus identified as the lowest-observed-adverse-effect level (LOAEL). A NOAEL of 0.5 mg/kg was estimated by dividing the LOAEL by a three-fold uncertainty factor. The NOAEL was then divided by the standard 10-fold inter- and 10-fold intraspecies uncertainty factors to arrive at the RfD of 0.005 mg/kg/day. Other studies showed that fetuses and children are even more sensitive to chlorpyrifos than adults, so the EPA applies an additional ten-fold uncertainty factor to protect that subpopulation. An RfD that has been divided by an additional uncertainty factor that only applies to certain populations is called a "population adjusted dose" or PAD. For chlorpyrifos, the acute PAD (or "aPAD") is thus 5×10−4 mg/kg/day, and it applies to infants, children, and women who are breast feeding.
The EPA also determined a chronic RfD for chlorpyrifos exposure based on studies in which animals were administered low doses of the pesticide for two years. Cholinesterase inhibition was observed at all dose levels tested, and a NOAEL of 0.03 mg/kg/day estimated by dividing a LOAEL of 0.3 mg/kg/day by an uncertainty factor of 10. As with the acute RfD, the chronic RfD of 3×10−4 mg/kg/day was determined by dividing this NOAEL by the inter- and intraspecies uncertainty factors. The chronic PAD ("cPAD") of 3×10−5 mg/kg/day was determined by applying an additional 10-fold uncertainty factor to account for the increased susceptibility of infants and children. Like the aPAD, this cPAD applies to infants, children, and breast feeding women.
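The arithmetic in the chlorpyrifos example can be reproduced directly; the sketch below follows the figures quoted above (LOAELs of 1.5 mg/kg and 0.3 mg/kg/day, a 3-fold or 10-fold factor to estimate the NOAEL, 10-fold inter- and intraspecies factors, and an extra 10-fold factor for the population adjusted dose).

# Acute values
loael_acute = 1.5                        # mg/kg, lowest dose tested
noael_acute = loael_acute / 3            # 0.5 mg/kg
rfd_acute = noael_acute / (10 * 10)      # 0.005 mg/kg/day
apad = rfd_acute / 10                    # 5e-4 mg/kg/day for infants and children

# Chronic values
noael_chronic = 0.3 / 10                 # 0.03 mg/kg/day
rfd_chronic = noael_chronic / (10 * 10)  # 3e-4 mg/kg/day
cpad = rfd_chronic / 10                  # 3e-5 mg/kg/day

print(rfd_acute, apad, rfd_chronic, cpad)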
Consensus.
Because the RfD assumes "a dose below which no adverse noncarcinogenic health effects should result from a lifetime of exposure", the critical step in all chemical risk and regulatory threshold calculations is a properly derived dose at which no adverse effects were observed (the NOAEL), which is then divided by an uncertainty factor that accounts for inadequacies of the study, animal-to-human extrapolation, sensitive sub-populations, and inadequacies of the database. The RfD that is derived is not always agreed upon. Some may believe it to be overly protective while others may contend that it is not adequately protective of human health.
For example, in 2002 the EPA completed its draft toxicological review of perchlorate and proposed an RfD of 0.00003 milligrams per kilogram per day (mg/kg/day) based primarily on studies that identified neurodevelopmental deficits in rat pups. These deficits were linked to maternal exposure to perchlorate. Subsequently, the National Academy of Sciences (NAS) reviewed the health implications of perchlorate, and in 2005 proposed a much higher alternative reference dose of 0.0007 mg/kg/day based primarily on a 2002 study by Greer et al. During that study, 37 adult human subjects were split into four exposure groups exposed to 0.007 (7 subjects), 0.02 (10 subjects), 0.1 (10 subjects), and 0.5 (10 subjects) mg/kg/day. Significant decreases in iodide uptake were found in the three highest exposure groups. Iodide uptake was not significantly reduced in the lowest exposed group, but four of the seven subjects in this group experienced inhibited iodide uptake. In 2005, the RfD proposed by NAS was accepted by EPA and added to its integrated risk information system (IRIS).
In a 2005 article in the journal Environmental Health Perspectives (EHP), Gary Ginsberg and Deborah Rice argued that the 2005 NAS RfD was not protective of human health, based on the following:
Although there has generally been consensus with the Greer "et al" study, there is no consensus with regard to developing a perchlorate RfD. One of the key differences results from how the point of departure is viewed (i.e., NOEL or LOAEL), or whether a benchmark dose should be used to derive the RfD. Defining the point of departure as a NOEL or LOAEL has implications when it comes to applying appropriate safety factors to the point of departure to derive the RfD.
In 2010, the Massachusetts Department of Environmental Protection set a 10-fold lower RfD (0.07 μg/kg/day) using a much higher uncertainty factor of 100. They also calculated an infant drinking water value, which neither the US EPA nor CalEPA has done.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathrm{RfD} \\mathrm{(mg/kg/day)} = {\\mathrm{NOEL} \\mathrm{(mg/kg/day)} \\over \\mathrm{UF}_{\\mathrm{inter}} \\cdot \\mathrm{UF}_{\\mathrm{intra}} \\cdot \\mathrm{UF}_{\\mathrm{other}}}"
}
] |
https://en.wikipedia.org/wiki?curid=632855
|
6328849
|
Ole Barndorff-Nielsen
|
Danish statistician (1935–2022)
Ole Eiler Barndorff-Nielsen (18 March 1935 – 26 June 2022) was a Danish statistician who contributed to many areas of statistical science.
Education and career.
He was born in Copenhagen, and became interested in statistics when, as a student of actuarial mathematics at the University of Copenhagen, he worked part-time at the Department of Biostatistics of the Danish State Serum Institute. He graduated from the University of Aarhus (Denmark) in 1960, where he spent most of his academic life, and where he became professor of statistics in 1973. However, in 1962-1963 and 1963-1964 he stayed at the University of Minnesota and Stanford University, respectively, and from August 1974 to February 1975 he was an Overseas Fellow at Churchill College, Cambridge, and a visitor at the Statistical Laboratory, Cambridge University.
Barndorff-Nielsen became Professor Emeritus at Aarhus University's Thiele Centre for Applied Mathematics in Natural Science, was affiliated on a part-time basis with the "Center for Research in Econometric Analysis of Time Series" (CREATES), and from 2008 was also affiliated with the Institute of Advanced Studies, Technical University Munich.
Works of Barndorff-Nielsen.
Among Barndorff-Nielsen's early scientific contributions are his work on exponential families and on the foundations of statistics, in particular sufficiency and conditional inference. In 1977 he introduced the hyperbolic distribution as a mathematical model of the size distribution of sand grains, formalising heuristic ideas proposed by Ralph Alger Bagnold. He also derived the larger class of generalised hyperbolic distributions. These distributions, in particular the normal-inverse Gaussian (NIG) distribution, have later turned out to be useful in many other areas of science, in particular turbulence and finance. The NIG-distribution is now widely used to describe the distribution of returns from financial assets.
In 1984 he produced a short film on the physics of blown sand and the life of the British scientist and explorer Brigadier Ralph Alger Bagnold. A follow-up to the film was produced in 2011 on the studies of stochastics in the physical sciences carried out by Barndorff-Nielsen and colleagues at the Faculty of Science, Aarhus University by the initiative of the President of the Bernoulli Society for Mathematical Statistics and Probability, Professor Victor Pérez-Abreu.
Later Barndorff-Nielsen played a leading role in the application of differential geometry to investigate statistical models. Another main contribution is his work on asymptotic methods in statistics, not least his formula for the conditional distribution of the maximum likelihood estimator given an ancillary statistic that generalizes a formula by Ronald A. Fisher (originally called the formula_0-formula, but now known as the Barndorff-Nielsen formula). Jointly with David Cox he wrote two influential books on asymptotic techniques in statistics. From the mid-1990s Barndorff-Nielsen worked on stochastic models in finance (often with Neil Shephard) and turbulence, on statistical methods for the analysis of data from experiments in quantum physics, and contributed to the theory of Lévy processes.
Notable honors and positions held.
Barndorff-Nielsen was a member of the Royal Danish Academy of Sciences and Letters and of Academia Europaea. He received honorary doctorate degrees from the Université Paul Sabatier, Toulouse and the Katholieke Universiteit Leuven. In 1993-1995 he was a very influential president of the Bernoulli Society for Mathematical Statistics and Probability. He was the editor of "International Statistical Review" in 1980-1987 and of the journal "Bernoulli" in 1994–2000. From 1 April 1998 to 31 March 2003 he was Scientific Director of MaPhySto (the Centre for Mathematical Physics and Stochastics), which was a centre devoted to advanced research and training in the fields of mathematical physics and stochastics, as well as some closely related areas. In 2001 he received the Humboldt Prize and in 2010 the Faculty Prize from the Faculty of Science, Aarhus University.
|
[
{
"math_id": 0,
"text": "p^*"
}
] |
https://en.wikipedia.org/wiki?curid=6328849
|
6329
|
Chromatography
|
Set of physico-chemical techniques for separation of mixtures
In chemical analysis, chromatography is a laboratory technique for the separation of a mixture into its components. The mixture is dissolved in a fluid solvent (gas or liquid) called the "mobile phase", which carries it through a system (a column, a capillary tube, a plate, or a sheet) on which a material called the "stationary phase" is fixed. Because the different constituents of the mixture tend to have different affinities for the stationary phase and are retained for different lengths of time depending on their interactions with its surface sites, the constituents travel at different apparent velocities in the mobile fluid, causing them to separate. The separation is based on the differential partitioning between the mobile and the stationary phases. Subtle differences in a compound's partition coefficient result in differential retention on the stationary phase and thus affect the separation.
Chromatography may be "preparative" or "analytical". The purpose of preparative chromatography is to separate the components of a mixture for later use, and is thus a form of purification. This process is associated with higher costs due to its mode of production. Analytical chromatography is done normally with smaller amounts of material and is for establishing the presence or measuring the relative proportions of analytes in a mixture. The two types are not mutually exclusive.
Etymology and pronunciation.
Chromatography, pronounced , is derived from Greek χρῶμα "chroma", which means "color", and γράφειν "graphein", which means "to write". The combination of these two terms was directly inherited from the invention of the technique first used to separate biological pigments.
History.
Chromatography was first devised at the University of Kazan by the Italian-born Russian scientist Mikhail Tsvet in 1900. He developed the technique and coined the term "chromatography" in the first decade of the 20th century, primarily for the separation of plant pigments such as chlorophyll, carotenes, and xanthophylls. Since these components separate in bands of different colors (green, orange, and yellow, respectively) they directly inspired the name of the technique. New types of chromatography developed during the 1930s and 1940s made the technique useful for many separation processes.
Chromatography technique developed substantially as a result of the work of Archer John Porter Martin and Richard Laurence Millington Synge during the 1940s and 1950s, for which they won the 1952 Nobel Prize in Chemistry. They established the principles and basic techniques of partition chromatography, and their work encouraged the rapid development of several chromatographic methods: paper chromatography, gas chromatography, and what would become known as high-performance liquid chromatography. Since then, the technology has advanced rapidly. Researchers found that the main principles of Tsvet's chromatography could be applied in many different ways, resulting in the different varieties of chromatography described below. Advances are continually improving the technical performance of chromatography, allowing the separation of increasingly similar molecules.
Terms.
Chromatography is based on the concept of the partition coefficient. Any solute partitions between two immiscible solvents. Making one solvent immobile (by adsorption on a solid support matrix) and the other mobile results in the most common applications of chromatography. If the matrix support, or stationary phase, is polar (e.g., cellulose or silica), the technique is normal-phase (forward-phase) chromatography. Otherwise the technique is known as reversed phase, where a non-polar stationary phase (e.g., a non-polar C-18 derivative) is used.
Techniques by chromatographic bed shape.
Column chromatography.
Column chromatography is a separation technique in which the stationary bed is within a tube. The particles of the solid stationary phase or the support coated with a liquid stationary phase may fill the whole inside volume of the tube (packed column) or be concentrated on or along the inside tube wall leaving an open, unrestricted path for the mobile phase in the middle part of the tube (open tubular column). Differences in rates of movement through the medium translate into different retention times for the components of the sample.
In 1978, W. Clark Still introduced a modified version of column chromatography called "flash column chromatography" (flash). The technique is very similar to the traditional column chromatography, except that the solvent is driven through the column by applying positive pressure. This allowed most separations to be performed in less than 20 minutes, with improved separations compared to the old method. Modern flash chromatography systems are sold as pre-packed plastic cartridges, and the solvent is pumped through the cartridge. Systems may also be linked with detectors and fraction collectors providing automation. The introduction of gradient pumps resulted in quicker separations and less solvent usage.
In expanded bed adsorption, a fluidized bed is used, rather than a solid phase made by a packed bed. This allows omission of initial clearing steps such as centrifugation and filtration, for culture broths or slurries of broken cells.
Phosphocellulose chromatography utilizes the binding affinity of many DNA-binding proteins for phosphocellulose. The stronger a protein's interaction with DNA, the higher the salt concentration needed to elute that protein.
Planar chromatography.
"Planar chromatography" is a separation technique in which the stationary phase is present as or on a plane. The plane can be a paper, serving as such or impregnated by a substance as the stationary bed (paper chromatography) or a layer of solid particles spread on a support such as a glass plate (thin-layer chromatography). Different compounds in the sample mixture travel different distances according to how strongly they interact with the stationary phase as compared to the mobile phase. The specific Retention factor (Rf) of each chemical can be used to aid in the identification of an unknown substance.
Paper chromatography.
Paper chromatography is a technique that involves placing a small dot or line of sample solution onto a strip of "chromatography paper". The paper is placed in a container with a shallow layer of solvent and sealed. As the solvent rises through the paper, it meets the sample mixture, which starts to travel up the paper with the solvent. This paper is made of cellulose, a polar substance, and the compounds within the mixture travel further if they are less polar. More polar substances bond with the cellulose paper more quickly, and therefore do not travel as far.
Thin-layer chromatography (TLC).
Thin-layer chromatography (TLC) is a widely employed laboratory technique used to separate different biochemicals on the basis of their relative attractions to the stationary and mobile phases. It is similar to paper chromatography. However, instead of using a stationary phase of paper, it involves a stationary phase of a thin layer of adsorbent like silica gel, alumina, or cellulose on a flat, inert substrate. TLC is very versatile; multiple samples can be separated simultaneously on the same layer, making it very useful for screening applications such as testing drug levels and water purity.
The possibility of cross-contamination is low since each separation is performed on a new layer. Compared to paper, it has the advantage of faster runs, better separations, better quantitative analysis, and the choice between different adsorbents. For even better resolution and faster separation that utilizes less solvent, high-performance TLC can be used. An older popular use had been to differentiate chromosomes by observing distance in gel (the separation itself was a separate step).
Displacement chromatography.
The basic principle of displacement chromatography is:
A molecule with a high affinity for the chromatography matrix (the displacer) competes effectively for binding sites, and thus displaces all molecules with lesser affinities.
There are distinct differences between displacement and elution chromatography. In elution mode, substances typically emerge from a column in narrow, Gaussian peaks. Wide separation of peaks, preferably to baseline, is desired for maximum purification. The speed at which any component of a mixture travels down the column in elution mode depends on many factors. But for two substances to travel at different speeds, and thereby be resolved, there must be substantial differences in some interaction between the biomolecules and the chromatography matrix. Operating parameters are adjusted to maximize the effect of this difference. In many cases, baseline separation of the peaks can be achieved only with gradient elution and low column loadings. Thus, two drawbacks to elution mode chromatography, especially at the preparative scale, are operational complexity, due to gradient solvent pumping, and low throughput, due to low column loadings. Displacement chromatography has advantages over elution chromatography in that components are resolved into consecutive zones of pure substances rather than "peaks". Because the process takes advantage of the nonlinearity of the isotherms, a larger column feed can be separated on a given column with the purified components recovered at significantly higher concentrations.
Techniques by physical state of mobile phase.
Gas chromatography.
Gas chromatography (GC), also sometimes known as gas-liquid chromatography (GLC), is a separation technique in which the mobile phase is a gas. Gas chromatographic separation is always carried out in a column, which is typically "packed" or "capillary". Packed columns are the routine workhorses of gas chromatography, being cheaper and easier to use and often giving adequate performance. Capillary columns generally give far superior resolution and, although more expensive, are becoming widely used, especially for complex mixtures. Capillary columns can be further split into three classes: porous layer open tubular (PLOT), wall-coated open tubular (WCOT) and support-coated open tubular (SCOT) columns. PLOT columns have the stationary phase adsorbed onto the column walls, while WCOT columns have a stationary phase that is chemically bonded to the walls. SCOT columns combine the two types: support particles are adhered to the column walls, and those particles carry a liquid phase chemically bonded onto them. Both types of column are made from non-adsorbent and chemically inert materials. Stainless steel and glass are the usual materials for packed columns and quartz or fused silica for capillary columns.
Gas chromatography is based on a partition equilibrium of analyte between a solid or viscous liquid stationary phase (often a liquid silicone-based material) and a mobile gas (most often helium). The stationary phase is adhered to the inside of a small-diameter (commonly 0.18–0.53 mm inside diameter) glass or fused-silica tube (a capillary column) or a solid matrix inside a larger metal tube (a packed column). It is widely used in analytical chemistry; though the high temperatures used in GC make it unsuitable for high molecular weight biopolymers or proteins (heat denatures them), frequently encountered in biochemistry, it is well suited for use in the petrochemical, environmental monitoring and remediation, and industrial chemical fields. It is also used extensively in chemistry research.
Liquid chromatography.
Liquid chromatography (LC) is a separation technique in which the mobile phase is a liquid. It can be carried out either in a column or a plane. Present day liquid chromatography that generally utilizes very small packing particles and a relatively high pressure is referred to as high-performance liquid chromatography.
In HPLC the sample is forced by a liquid at high pressure (the mobile phase) through a column that is packed with a stationary phase composed of irregularly or spherically shaped particles, a porous monolithic layer, or a porous membrane. Monoliths are "sponge-like chromatographic media" and are made up of an unending block of organic or inorganic parts. HPLC is historically divided into two different sub-classes based on the polarity of the mobile and stationary phases. Methods in which the stationary phase is more polar than the mobile phase (e.g., toluene as the mobile phase, silica as the stationary phase) are termed normal phase liquid chromatography (NPLC) and the opposite (e.g., water-methanol mixture as the mobile phase and C18 (octadecylsilyl) as the stationary phase) is termed reversed phase liquid chromatography (RPLC).
Supercritical fluid chromatography.
Supercritical fluid chromatography is a separation technique in which the mobile phase is a fluid above and relatively close to its critical temperature and pressure.
Specific techniques under this broad heading are listed below.
Affinity chromatography.
Affinity chromatography is based on selective non-covalent interaction between an analyte and specific molecules. It is very specific, but not very robust. It is often used in biochemistry in the purification of proteins bound to tags. These fusion proteins are labeled with compounds such as His-tags, biotin or antigens, which bind to the stationary phase specifically. After purification, these tags are usually removed and the pure protein is obtained.
Affinity chromatography often utilizes a biomolecule's affinity for the cations of a metal (Zn, Cu, Fe, etc.). Columns are often manually prepared and could be designed specifically for the proteins of interest. Traditional affinity columns are used as a preparative step to flush out unwanted biomolecules, or as a primary step in analyzing a protein with unknown physical properties.
However, liquid chromatography techniques exist that do utilize affinity chromatography properties. Immobilized metal affinity chromatography (IMAC) is useful to separate the aforementioned molecules based on the relative affinity for the metal. Often these columns can be loaded with different metals to create a column with a targeted affinity.
Techniques by separation mechanism.
Ion exchange chromatography.
Ion exchange chromatography (usually referred to as ion chromatography) uses an ion exchange mechanism to separate analytes based on their respective charges. It is usually performed in columns but can also be useful in planar mode. Ion exchange chromatography uses a charged stationary phase to separate charged compounds including anions, cations, amino acids, peptides, and proteins. In conventional methods the stationary phase is an ion-exchange resin that carries charged functional groups that interact with oppositely charged groups of the compound to retain. There are two types of ion exchange chromatography: Cation-Exchange and Anion-Exchange. In the Cation-Exchange Chromatography the stationary phase has negative charge and the exchangeable ion is a cation, whereas, in the Anion-Exchange Chromatography the stationary phase has positive charge and the exchangeable ion is an anion. Ion exchange chromatography is commonly used to purify proteins using FPLC.
Size-exclusion chromatography.
Size-exclusion chromatography (SEC) is also known as "gel permeation chromatography" (GPC) or "gel filtration chromatography" and separates molecules according to their size (or more accurately according to their hydrodynamic diameter or hydrodynamic volume).
Smaller molecules are able to enter the pores of the media and, therefore, molecules are trapped and removed from the flow of the mobile phase. The average residence time in the pores depends upon the effective size of the analyte molecules. However, molecules that are larger than the average pore size of the packing are excluded and thus suffer essentially no retention; such species are the first to be eluted. It is generally a low-resolution chromatography technique and thus it is often reserved for the final, "polishing" step of a purification. It is also useful for determining the tertiary structure and quaternary structure of purified proteins, especially since it can be carried out under native solution conditions.
Expanded bed adsorption chromatographic separation.
An expanded bed chromatographic adsorption (EBA) column for a biochemical separation process comprises a pressure-equalization liquid distributor with a self-cleaning function below a porous blocking sieve plate at the bottom of the expanded bed, and an upper nozzle assembly with a backflush cleaning function at the top of the expanded bed. Better distribution of the feedstock liquor added into the expanded bed ensures that the fluid passing through the expanded bed layer displays a state of piston flow. The expanded bed chromatographic separation column has the advantage of increasing the separation efficiency of the expanded bed.
Expanded-bed adsorption (EBA) chromatography is a convenient and effective technique for the capture of proteins directly from unclarified crude sample. In EBA chromatography, the settled bed is first expanded by upward flow of equilibration buffer. The crude feed, which is a mixture of soluble proteins, contaminants, cells, and cell debris, is then passed upward through the expanded bed. Target proteins are captured on the adsorbent, while particulates and contaminants pass through. A change to elution buffer while maintaining upward flow results in desorption of the target protein in expanded-bed mode. Alternatively, if the flow is reversed, the adsorbed particles will quickly settle and the proteins can be desorbed by an elution buffer. The mode used for elution (expanded-bed versus settled-bed) depends on the characteristics of the feed. After elution, the adsorbent is cleaned with a predefined cleaning-in-place (CIP) solution, with cleaning followed by either column regeneration (for further use) or storage.
Special techniques.
Reversed-phase chromatography.
Reversed-phase chromatography (RPC) is any liquid chromatography procedure in which the mobile phase is significantly more polar than the stationary phase. It is so named because in normal-phase liquid chromatography, the mobile phase is significantly less polar than the stationary phase. Hydrophobic molecules in the mobile phase tend to adsorb to the relatively hydrophobic stationary phase. Hydrophilic molecules in the mobile phase will tend to elute first. Separating columns typically comprise a C8 or C18 carbon-chain bonded to a silica particle substrate.
Hydrophobic interaction chromatography.
Hydrophobic Interaction Chromatography (HIC) is a purification and analytical technique that separates analytes, such as proteins, based on hydrophobic interactions between that analyte and the chromatographic matrix. It can provide a non-denaturing orthogonal approach to reversed phase separation, preserving native structures and potentially protein activity. In hydrophobic interaction chromatography, the matrix material is lightly substituted with hydrophobic groups. These groups can be methyl, ethyl, propyl, butyl, octyl, or phenyl groups. At high salt concentrations, non-polar sidechains on the surface of proteins "interact" with the hydrophobic groups; that is, both types of groups are excluded by the polar solvent (hydrophobic effects are augmented by increased ionic strength). Thus, the sample is applied to the column in a buffer which is highly polar, which drives an association of hydrophobic patches on the analyte with the stationary phase. The eluent is typically an aqueous buffer with decreasing salt concentrations, increasing concentrations of detergent (which disrupts hydrophobic interactions), or changes in pH. Of critical importance is the type of salt used, with more kosmotropic salts as defined by the Hofmeister series providing the most water structuring around the molecule and the resulting hydrophobic pressure. Ammonium sulfate is frequently used for this purpose. The addition of organic solvents or other less polar constituents may assist in improving resolution.
In general, Hydrophobic Interaction Chromatography (HIC) is advantageous if the sample is sensitive to pH change or to the harsh solvents typically used in other types of chromatography, but not to high salt concentrations. Commonly, it is the amount of salt in the buffer which is varied. In 2012, Müller and Franzreb described the effects of temperature on HIC using Bovine Serum Albumin (BSA) with four different types of hydrophobic resin. The study altered the temperature so as to affect the binding affinity of BSA onto the matrix. It was concluded that cycling the temperature from 40 to 10 degrees Celsius would not be adequate to effectively wash all BSA from the matrix, but could be very effective if the column were only used a few times. Using temperature to effect change allows labs to cut costs on buying salt and saves money.
If high salt concentrations and temperature fluctuations are to be avoided, a more hydrophobic competitor can be used to displace the sample and elute it. This so-called salt-independent method of HIC showed a direct isolation of Human Immunoglobulin G (IgG) from serum with satisfactory yield, using β-cyclodextrin as a competitor to displace IgG from the matrix. This largely opens up the possibility of using HIC with salt-sensitive samples, since high salt concentrations can precipitate proteins.
Hydrodynamic chromatography.
Hydrodynamic chromatography (HDC) is derived from the observed phenomenon that large droplets move faster than small ones. In a column, this happens because the center of mass of larger droplets is prevented from being as close to the sides of the column as smaller droplets because of their larger overall size. Larger droplets will elute first from the middle of the column while smaller droplets stick to the sides of the column and elute last. This form of chromatography is useful for separating analytes by molar mass (or molecular mass), size, shape, and structure when used in conjunction with light scattering detectors, viscometers, and refractometers. The two main types of HDC are open tube and packed column. Open tube offers rapid separation times for small particles, whereas packed column HDC can increase resolution and is better suited for particles with an average molecular mass larger than formula_0 daltons. HDC differs from other types of chromatography because the separation only takes place in the interstitial volume, which is the volume surrounding and in between particles in a packed column.
HDC shares the same order of elution as Size Exclusion Chromatography (SEC) but the two processes still vary in many ways. In a study comparing the two types of separation, Isenberg, Brewer, Côté, and Striegel use both methods for polysaccharide characterization and conclude that HDC coupled with multiangle light scattering (MALS) achieves more accurate molar mass distribution when compared to off-line MALS than SEC in significantly less time. This is largely due to SEC being a more destructive technique because of the pores in the column degrading the analyte during separation, which tends to impact the mass distribution. However, the main disadvantage of HDC is low resolution of analyte peaks, which makes SEC a more viable option when used with chemicals that are not easily degradable and where rapid elution is not important.
HDC plays an especially important role in the field of microfluidics. The first successful apparatus for HDC-on-a-chip system was proposed by Chmela, et al. in 2002. Their design was able to achieve separations using an 80 mm long channel on the timescale of 3 minutes for particles with diameters ranging from 26 to 110 nm, but the authors expressed a need to improve the retention and dispersion parameters. In a 2010 publication by Jellema, Markesteijn, Westerweel, and Verpoorte, implementing HDC with a recirculating bidirectional flow resulted in high resolution, size based separation with only a 3 mm long channel. Having such a short channel and high resolution was viewed as especially impressive considering that previous studies used channels that were 80 mm in length. For a biological application, in 2007, Huh, et al. proposed a microfluidic sorting device based on HDC and gravity, which was useful for preventing potentially dangerous particles with diameter larger than 6 microns from entering the bloodstream when injecting contrast agents in ultrasounds. This study also made advances for environmental sustainability in microfluidics due to the lack of outside electronics driving the flow, which came as an advantage of using a gravity based device.
Two-dimensional chromatography.
In some cases, the selectivity provided by the use of one column can be insufficient to provide resolution of analytes in complex samples. Two-dimensional chromatography aims to increase the resolution of these peaks by using a second column with different physico-chemical (chemical classification) properties. Since the mechanism of retention on this new solid support is different from the first dimensional separation, it can be possible to separate compounds by two-dimensional chromatography that are indistinguishable by one-dimensional chromatography. Furthermore, the separation on the second dimension occurs faster than the first dimension. An example of a two-dimensional separation is where the sample is spotted at one corner of a square plate, developed, air-dried, then rotated by 90° and usually redeveloped in a second solvent system.
Two-dimensional chromatography can be applied to GC or LC separations. The heart-cutting approach selects a specific region of interest on the first dimension for separation, and the comprehensive approach uses all analytes in the second-dimension separation.
Simulated moving-bed chromatography.
The simulated moving bed (SMB) technique is a variant of high performance liquid chromatography; it is used to separate particles and/or chemical compounds that would be difficult or impossible to resolve otherwise. This increased separation is brought about by a valve-and-column arrangement that is used to lengthen the stationary phase indefinitely.
In the moving bed technique of preparative chromatography the feed entry and the analyte recovery are simultaneous and continuous, but because of practical difficulties with a continuously moving bed, simulated moving bed technique was proposed. In the simulated moving bed technique instead of moving the bed, the sample inlet and the analyte exit positions are moved continuously, giving the impression of a moving bed.
True moving bed chromatography (TMBC) is only a theoretical concept. Its simulation, SMBC is achieved by the use of a multiplicity of columns in series and a complex valve arrangement. This valve arrangement provides for sample and solvent feed and analyte and waste takeoff at appropriate locations of any column, whereby it allows switching at regular intervals the sample entry in one direction, the solvent entry in the opposite direction, whilst changing the analyte and waste takeoff positions appropriately as well.
Pyrolysis gas chromatography.
Pyrolysis–gas chromatography–mass spectrometry is a method of chemical analysis in which the sample is heated to decomposition to produce smaller molecules that are separated by gas chromatography and detected using mass spectrometry.
Pyrolysis is the thermal decomposition of materials in an inert atmosphere or a vacuum. The sample is put into direct contact with a platinum wire, or placed in a quartz sample tube, and rapidly heated to 600–1000 °C. Depending on the application even higher temperatures are used. Three different heating techniques are used in actual pyrolyzers: Isothermal furnace, inductive heating (Curie point filament), and resistive heating using platinum filaments. Large molecules cleave at their weakest points and produce smaller, more volatile fragments. These fragments can be separated by gas chromatography. Pyrolysis GC chromatograms are typically complex because a wide range of different decomposition products is formed. The data can either be used as fingerprints to prove material identity or the GC/MS data is used to identify individual fragments to obtain structural information. To increase the volatility of polar fragments, various methylating reagents can be added to a sample before pyrolysis.
Besides the usage of dedicated pyrolyzers, pyrolysis GC of solid and liquid samples can be performed directly inside Programmable Temperature Vaporizer (PTV) injectors that provide quick heating (up to 30 °C/s) and high maximum temperatures of 600–650 °C. This is sufficient for some pyrolysis applications. The main advantage is that no dedicated instrument has to be purchased and pyrolysis can be performed as part of routine GC analysis. In this case, quartz GC inlet liners have to be used. Quantitative data can be acquired, and good results of derivatization inside the PTV injector are published as well.
Fast protein liquid chromatography.
Fast protein liquid chromatography (FPLC), is a form of liquid chromatography that is often used to analyze or purify mixtures of proteins. As in other forms of chromatography, separation is possible because the different components of a mixture have different affinities for two materials, a moving fluid (the "mobile phase") and a porous solid (the stationary phase). In FPLC the mobile phase is an aqueous solution, or "buffer". The buffer flow rate is controlled by a positive-displacement pump and is normally kept constant, while the composition of the buffer can be varied by drawing fluids in different proportions from two or more external reservoirs. The stationary phase is a resin composed of beads, usually of cross-linked agarose, packed into a cylindrical glass or plastic column. FPLC resins are available in a wide range of bead sizes and surface ligands depending on the application.
Countercurrent chromatography.
Countercurrent chromatography (CCC) is a type of liquid-liquid chromatography, where both the stationary and mobile phases are liquids and the liquid stationary phase is held stagnant by a strong centrifugal force.
Hydrodynamic countercurrent chromatography (CCC).
The operating principle of a CCC instrument requires a column consisting of an open tube coiled around a bobbin. The bobbin is rotated in a double-axis gyratory motion (a cardioid), which causes a variable gravity (G) field to act on the column during each rotation. This motion causes the column to see one partitioning step per revolution, and components of the sample separate in the column due to their partitioning coefficients between the two immiscible liquid phases used. There are many types of CCC available today. These include HSCCC (High Speed CCC) and HPCCC (High Performance CCC). HPCCC is the latest and best-performing version of the instrumentation currently available.
Centrifugal partition chromatography (CPC).
In the CPC (centrifugal partition chromatography or hydrostatic countercurrent chromatography) instrument, the column consists of a series of cells interconnected by ducts attached to a rotor. This rotor rotates on its central axis creating the centrifugal field necessary to hold the stationary phase in place. The separation process in CPC is governed solely by the partitioning of solutes between the stationary and mobile phases, which mechanism can be easily described using the partition coefficients ("KD") of solutes. CPC instruments are commercially available for laboratory, pilot, and industrial-scale separations with different sizes of columns ranging from some 10 milliliters to 10 liters in volume.
Periodic counter-current chromatography.
In contrast to Counter current chromatography (see above), periodic counter-current chromatography (PCC) uses a solid stationary phase and only a liquid mobile phase. It thus is much more similar to conventional affinity chromatography than to counter current chromatography. PCC uses multiple columns, which during the loading phase are connected in line. This mode allows for overloading the first column in this series without losing product, which already breaks through the column before the resin is fully saturated. The breakthrough product is captured on the subsequent column(s). In a next step the columns are disconnected from one another. The first column is washed and eluted, while the other column(s) are still being loaded. Once the (initially) first column is re-equilibrated, it is re-introduced to the loading stream, but as last column. The process then continues in a cyclic fashion.
Chiral chromatography.
Chiral chromatography involves the separation of stereoisomers. In the case of enantiomers, these have no chemical or physical differences apart from being three-dimensional mirror images. To enable chiral separations to take place, either the mobile phase or the stationary phase must themselves be made chiral, giving differing affinities between the analytes. Chiral chromatography HPLC columns (with a chiral stationary phase) in both normal and reversed phase are commercially available.
Conventional chromatography is incapable of separating racemic mixtures of enantiomers. However, in some cases "nonracemic" mixtures of enantiomers may be separated unexpectedly by conventional liquid chromatography (e.g., HPLC without a chiral mobile phase or stationary phase).
Aqueous normal-phase chromatography.
Aqueous normal-phase (ANP) chromatography is characterized by the elution behavior of classical normal phase mode (i.e. where the mobile phase is significantly less polar than the stationary phase) in which water is one of the mobile phase solvent system components. It is distinguished from hydrophilic interaction liquid chromatography (HILIC) in that the retention mechanism is due to adsorption rather than partitioning.
Applications.
Chromatography is used in many fields including the pharmaceutical industry, the food and beverage industry, the chemical industry, forensic science, environment analysis, and hospitals.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "10^5"
}
] |
https://en.wikipedia.org/wiki?curid=6329
|
6329366
|
Legendre rational functions
|
Sequence of orthogonal functions on [0, ∞)
In mathematics, the Legendre rational functions are a sequence of orthogonal functions on [0, ∞). They are obtained by composing the Cayley transform with Legendre polynomials.
A rational Legendre function of degree "n" is defined as:
formula_0
where formula_1 is a Legendre polynomial. These functions are eigenfunctions of the singular Sturm–Liouville problem:
formula_2
with eigenvalues
formula_3
Properties.
Many properties can be derived from the properties of the Legendre polynomials of the first kind. Other properties are unique to the functions themselves.
Recursion.
formula_4
and
formula_5
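The recursion can be checked numerically against the direct definition in terms of Legendre polynomials; the following is a minimal sketch, assuming SciPy is available (the function name is my own).

import numpy as np
from scipy.special import eval_legendre

def R(n, x):
    # R_n(x) = sqrt(2)/(x+1) * P_n((x-1)/(x+1))
    return np.sqrt(2.0) / (x + 1.0) * eval_legendre(n, (x - 1.0) / (x + 1.0))

x = np.linspace(0.0, 50.0, 501)
for n in range(1, 6):
    lhs = R(n + 1, x)
    rhs = (2*n + 1) / (n + 1) * (x - 1) / (x + 1) * R(n, x) - n / (n + 1) * R(n - 1, x)
    assert np.allclose(lhs, rhs)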
Limiting behavior.
It can be shown that
formula_6
and
formula_7
Orthogonality.
formula_8
where formula_9 is the Kronecker delta function.
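The orthogonality relation can likewise be verified by numerical quadrature on [0, ∞); a small sketch, again assuming SciPy is available:

import numpy as np
from scipy.integrate import quad
from scipy.special import eval_legendre

def R(n, x):
    # R_n(x) = sqrt(2)/(x+1) * P_n((x-1)/(x+1))
    return np.sqrt(2.0) / (x + 1.0) * eval_legendre(n, (x - 1.0) / (x + 1.0))

for m in range(4):
    for n in range(4):
        val, _ = quad(lambda x: R(m, x) * R(n, x), 0.0, np.inf, limit=200)
        expected = 2.0 / (2 * n + 1) if m == n else 0.0
        assert abs(val - expected) < 1e-6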
Particular values.
formula_10
|
[
{
"math_id": 0,
"text": "R_n(x) = \\frac{\\sqrt{2}}{x+1}\\,P_n\\left(\\frac{x-1}{x+1}\\right)"
},
{
"math_id": 1,
"text": "P_n(x)"
},
{
"math_id": 2,
"text": "(x+1) \\frac{d}{dx}\\left(x \\frac{d}{dx} \\left[\\left(x+1\\right) v(x)\\right]\\right) + \\lambda v(x) = 0"
},
{
"math_id": 3,
"text": "\\lambda_n=n(n+1)\\,"
},
{
"math_id": 4,
"text": "R_{n+1}(x)=\\frac{2n+1}{n+1}\\,\\frac{x-1}{x+1}\\,R_n(x)-\\frac{n}{n+1}\\,R_{n-1}(x)\\quad\\mathrm{for\\,n\\ge 1}"
},
{
"math_id": 5,
"text": "2 (2n+1) R_n(x) = \\left(x+1\\right)^2 \\left(\\frac{d}{dx} R_{n+1}(x) - \\frac{d}{dx} R_{n-1}(x)\\right) + (x+1) \\left(R_{n+1}(x) - R_{n-1}(x)\\right)"
},
{
"math_id": 6,
"text": "\\lim_{x\\to\\infty}(x+1)R_n(x)=\\sqrt{2}"
},
{
"math_id": 7,
"text": "\\lim_{x\\to\\infty}x\\partial_x((x+1)R_n(x))=0"
},
{
"math_id": 8,
"text": "\\int_{0}^\\infty R_m(x)\\,R_n(x)\\,dx=\\frac{2}{2n+1}\\delta_{nm}"
},
{
"math_id": 9,
"text": "\\delta_{nm}"
},
{
"math_id": 10,
"text": "\\begin{align}\nR_0(x) &= \\frac{\\sqrt{2}}{x+1}\\,1 \\\\\nR_1(x) &= \\frac{\\sqrt{2}}{x+1}\\,\\frac{x-1}{x+1} \\\\\nR_2(x) &= \\frac{\\sqrt{2}}{x+1}\\,\\frac{x^2-4x+1}{(x+1)^2} \\\\\nR_3(x) &= \\frac{\\sqrt{2}}{x+1}\\,\\frac{x^3-9x^2+9x-1}{(x+1)^3} \\\\\nR_4(x) &= \\frac{\\sqrt{2}}{x+1}\\,\\frac{x^4-16x^3+36x^2-16x+1}{(x+1)^4}\n\\end{align}"
}
] |
https://en.wikipedia.org/wiki?curid=6329366
|
63294615
|
0x88
|
The 0x88 chess board representation is a square-centric method of representing the chess board in computer chess programs. The number 0x88 is a hexadecimal integer (decimal 136, octal 210, binary 10001000). The rank and file positions are each represented by a nibble (hexadecimal digit), and the bit gaps simplify a number of computations to bitwise operations.
Layout.
In the 0x88 board representation, the layout is spread out to cover an 8-by-16 board, equal to the size of two adjacent chessboards. Each square of the 8-by-16 matrix is assigned a number as can be seen in the board layout table. In this scheme each nibble represents a rank or a file, so that the 8-bit integer 0x42 represents the square at (4,2) in zero-based numbering, i.e. c5 in standard algebraic notation.
Adding 16 to a number for a square results in the number for the square one row above, and subtracting 16 results in the number for the square one row below. To move from one column to another the number is increased or decreased by one. In hexadecimal notation, legal chess positions (A1-H8) are always below 0x88. This layout simplifies many computations that chess programs need to perform by allowing bitwise operations instead of comparisons.
Algebraic notation and conversion.
The modern standard to identify the squares on a chessboard and moves in a game is algebraic notation, whereby each square of the board is identified by a unique coordinate pair — a letter between "a" and "h" for the horizontal coordinate, known as the file, and a number between 1 and 8 for the vertical coordinate, known as the rank.
In computer chess, file-rank coordinates are internally represented as integers ranging from 0 to 7, with file "a" mapping to 0 through to file "h" mapping to 7, while the rank coordinate is shifted down by one to the range 0 to 7.
An advantage of the 0x88 coding scheme is that values can be easily converted between 0x88 representation and file-rank coordinates using only bitwise operations, which are simple and efficient for computer processors to work with. To convert a zero-based file-rank coordinate to 0x88 value:
formula_0
Thus, "a"1 corresponds to formula_1, with all 8 of the bits set to formula_2, "b"2 corresponds to formula_3, and "h"8 corresponds to formula_4.
To convert an 0x88 value to a file-rank coordinate pair:
formula_5
formula_6
"Note: In the above formulas, « and » represent left and right logical bit shift operations respectively while & represents bitwise and."
Applications.
Off-the-board detection.
Off-the-board detection is a feature of chess programs which determines whether a piece is on or off the legal chess board. In 0x88, the highest bit of each nibble represents whether a piece is on the board or not. Specifically, out of the 8 bits to represent a square, the fourth and the eighth must both be 0 for a piece to be located within the board. This allows off-the-board detection by bitwise and operations. If codice_0 (or, in binary, codice_1) is non-zero, then the square is not on the board. This bitwise operation requires fewer computer resources than integer comparisons. This makes calculations such as illegal move detection faster.
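Written out as code, the test is a single bitwise and (a sketch; the helper name is my own):

def on_board(square_0x88):
    # A 0x88 index denotes a legal square iff (square & 0x88) == 0.
    return (square_0x88 & 0x88) == 0

assert on_board(0x07) and not on_board(0x07 + 1)     # one file to the right of h1 is off the board
assert on_board(0x70) and not on_board(0x70 + 16)    # one rank above a8 is off the board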
Square relations.
The difference of valid 0x88 coordinates A and B is unique with respect to distance and direction, which is not true for classical packed three-bit rank and file coordinates. That makes lookups for Manhattan distance, possible piece attacks, and legal piece moves more resource-friendly. While classical square coordinates in the 0–63 range require 4K-sized tables (64×64), the 0x88 difference takes 1/16 of that, i.e. 256-sized tables, or even 16 entries fewer (240).
An offset of 119 (0x77 being the maximum valid square index) is added to map the ±119 difference range onto 0–238 (a table size of 240 is used for alignment reasons).
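A sketch of such a difference lookup for knight attacks; the 0x88 knight offsets (±14, ±18, ±31, ±33) are standard for this layout, though the table and function names here are illustrative:

KNIGHT_DELTAS = (-33, -31, -18, -14, 14, 18, 31, 33)   # knight move offsets in 0x88

# 240-entry table indexed by (from_square - to_square) + 119
knight_attack = [False] * 240
for d in KNIGHT_DELTAS:
    knight_attack[d + 119] = True

def knight_attacks(from_sq, to_sq):
    return knight_attack[(from_sq - to_sq) + 119]

assert knight_attacks(0x06, 0x25)        # g1 attacks f3
assert not knight_attacks(0x06, 0x26)    # g1 does not attack g3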
Adoption.
Though the 0x88 representation was initially popular, it has been mostly replaced by the system of bitboards.
References.
<templatestyles src="Reflist/styles.css" />
Works cited.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": " \\text{square}_{0\\rm{x}88} = (\\text{rank}_\\text{0 to 7} << 4) + \\text{file}_\\text{0 to 7}"
},
{
"math_id": 1,
"text": "00000000_2"
},
{
"math_id": 2,
"text": "0"
},
{
"math_id": 3,
"text": "00010001_2"
},
{
"math_id": 4,
"text": "01110111_2"
},
{
"math_id": 5,
"text": " \\text{file}_\\text{0 to 7} = \\text{square}_{0\\rm{x}88}\\ \\&\\ 7"
},
{
"math_id": 6,
"text": " \\text{rank}_\\text{0 to 7} = ( \\text{square}_{0\\rm{x}88}\\ >> 4 )"
}
] |
https://en.wikipedia.org/wiki?curid=63294615
|
63295596
|
Janson inequality
|
Mathematical theory
In the mathematical theory of probability, Janson's inequality is a collection of related inequalities giving an exponential bound, in terms of their pairwise dependence, on the probability that none of a family of related events occurs. Informally, Janson's inequality involves taking a sample of many independent random binary variables and a family of subsets of those variables, and bounding the probability that none of those subsets occurs entirely in the sample in terms of their pairwise correlations.
Statement.
Let formula_0 be our set of variables. We intend to sample these variables according to probabilities formula_1. Let formula_2 be the random variable of the subset of formula_0 that includes formula_3 with probability formula_4. That is, independently, for every formula_5.
Let formula_6 be a family of subsets of formula_0. We want to bound the probability that no formula_7 is a subset of formula_8. We will bound it using the expectation of the number of formula_7 such that formula_9, which we call formula_10, and a term from the pairwise probability of being in formula_8, which we call formula_11.
For formula_7, let formula_12 be the random variable that is one if formula_9 and zero otherwise. Let formula_13 be the random variables of the number of sets in formula_6 that are inside formula_8: formula_14. Then we define the following variables:
formula_15
formula_16
formula_17
Then the Janson inequality is:
formula_18
and
formula_19
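As a small numerical illustration (not part of the original statement), take Γ to be the edges of the complete graph on n vertices, each kept independently with probability p, and S the edge sets of all triangles; the sketch below computes λ, Δ and the first bound exactly and compares the bound with a Monte Carlo estimate of Pr[X = 0].

import itertools, math, random

n, p = 7, 0.25
vertices = range(n)
edges = list(itertools.combinations(vertices, 2))
triangles = [frozenset(itertools.combinations(t, 2))
             for t in itertools.combinations(vertices, 3)]

# lambda = sum over triangles of Pr[A is contained], i.e. p^3 each
lam = sum(p ** len(A) for A in triangles)

# Delta = (1/2) * sum over ordered pairs A != B sharing an edge of E[I_A I_B] = p^|A u B|
delta = 0.5 * sum(p ** len(A | B)
                  for A in triangles for B in triangles
                  if A != B and A & B)

bound = math.exp(-lam + delta)

# Monte Carlo estimate of Pr[no triangle appears]
trials, hits = 20000, 0
for _ in range(trials):
    kept = {e for e in edges if random.random() < p}
    if not any(A <= kept for A in triangles):
        hits += 1

print("lambda =", lam, "Delta =", delta)
print("Janson bound:", bound, " empirical Pr[X = 0]:", hits / trials)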
Tail bound.
Janson later extended this result to give a tail bound on the probability of only a few sets being subsets. Let formula_20 give the distance from the expected number of subsets. Let formula_21. Then we have
formula_22
Uses.
Janson's inequality has been used in pseudorandomness for bounds on constant-depth circuits. The research leading to these inequalities was originally motivated by estimating chromatic numbers of random graphs.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\Gamma"
},
{
"math_id": 1,
"text": "p = (p_i \\in [0, 1]: i \\in \\Gamma)"
},
{
"math_id": 2,
"text": "\\Gamma_p \\subseteq \\Gamma"
},
{
"math_id": 3,
"text": "i \\in \\Gamma"
},
{
"math_id": 4,
"text": "p_i"
},
{
"math_id": 5,
"text": "i \\in \\Gamma: \\Pr[i \\in \\Gamma_p]= p_i"
},
{
"math_id": 6,
"text": "S"
},
{
"math_id": 7,
"text": "A \\in S"
},
{
"math_id": 8,
"text": "\\Gamma_p"
},
{
"math_id": 9,
"text": "A \\subseteq \\Gamma_p"
},
{
"math_id": 10,
"text": "\\lambda"
},
{
"math_id": 11,
"text": "\\Delta"
},
{
"math_id": 12,
"text": "I_A"
},
{
"math_id": 13,
"text": "X"
},
{
"math_id": 14,
"text": "X = \\sum_{A \\in S} I_A"
},
{
"math_id": 15,
"text": "\\lambda = \\operatorname E \\left[\\sum_{A \\in S} I_A\\right] = \\operatorname E[X]"
},
{
"math_id": 16,
"text": "\\Delta = \\frac{1}{2}\\sum_{A \\neq B, A \\cap B \\neq \\emptyset} \\operatorname E[I_A I_B]"
},
{
"math_id": 17,
"text": "\\bar{\\Delta} = \\lambda + 2\\Delta"
},
{
"math_id": 18,
"text": "\\Pr[X = 0] = \\Pr[\\forall A \\in S: A \\not \\subset \\Gamma_p] \\leq e^{-\\lambda + \\Delta} "
},
{
"math_id": 19,
"text": "\\Pr[X = 0] = \\Pr[\\forall A \\in S: A \\not \\subset \\Gamma_p] \\leq e^{-\\frac{\\lambda^2}{\\bar{\\Delta}}} "
},
{
"math_id": 20,
"text": "0 \\leq t \\leq \\lambda"
},
{
"math_id": 21,
"text": "\\phi(x) = (1 + x) \\ln(1 + x) - x"
},
{
"math_id": 22,
"text": "\\Pr(X \\leq \\lambda - t) \\leq e^{-\\varphi(-t/\\lambda)\\lambda^2/\\bar{\\Delta}} \\leq e^{-t^2/\\left(2\\bar{\\Delta}\\right)}"
}
] |
https://en.wikipedia.org/wiki?curid=63295596
|
63295970
|
Haar's Tauberian theorem
|
In mathematical analysis, Haar's Tauberian theorem, named after Alfréd Haar, relates the asymptotic behaviour of a continuous function to properties of its Laplace transform. It is related to the integral formulation of the Hardy–Littlewood Tauberian theorem.
Simplified version by Feller.
William Feller gives the following simplified form for this theorem:
Suppose that formula_0 is a non-negative and continuous function for formula_1, having finite Laplace transform
formula_2
for formula_3. Then formula_4 is well defined for any complex value formula_5 with formula_6. Suppose that formula_7 satisfies the following conditions:
1. For formula_8 the function formula_9 (which is regular on the right half-plane formula_6) has continuous boundary values formula_10 as formula_11, for formula_12 and formula_8; furthermore, for formula_13 it may be written as
formula_14
where formula_15 has finite derivatives formula_16 and formula_17 is bounded in every finite interval;
2. The integral
formula_18
converges uniformly with respect to formula_19 for fixed formula_6 and formula_20;
3. formula_21 as formula_22, uniformly with respect to formula_12;
4. formula_23 tend to zero as formula_22;
5. The integrals
formula_24 and formula_25
converge uniformly with respect to formula_19 for fixed formula_26, formula_27 and formula_20.
Under these conditions
formula_28
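As a minimal numerical sketch (using a hypothetical example not taken from the article), take formula_0 equal to 1 + e^{-t}, whose Laplace transform is 1/s + 1/(s + 1), i.e. of the form formula_14 with C = 1; the conclusion formula_28 can then be observed directly:
import math

# Hypothetical example: f(t) = 1 + exp(-t), so C = 1 and psi(s) = 1/(s + 1).
# The theorem's conclusion is that t^r * (f(t) - C) tends to 0 for any fixed r.
C, r = 1.0, 3
f = lambda t: 1.0 + math.exp(-t)
for t in (1, 10, 50, 100):
    print(t, t ** r * (f(t) - C))   # the values decay rapidly towards 0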
Complete version.
A more detailed version is the following.
Suppose that formula_0 is a continuous function for formula_1, having Laplace transform
formula_2
with the following properties
1. For all values formula_5 with formula_29 the function formula_30 is regular;
2. For all formula_29, the function formula_9, considered as a function of the variable formula_31, "has the Fourier property" ("Fourierschen Charakter besitzt"), defined by Haar as follows: for any formula_32 there is a value formula_33 such that for all formula_19
formula_34
whenever formula_35 or formula_36.
3. The function formula_37 has a boundary value for formula_38 of the form
formula_39
where formula_40 and formula_41 is an formula_42-times differentiable function of formula_31 such that the derivative
formula_43
is bounded on any finite interval (for the variable formula_31)
4. The derivatives
formula_44
for formula_45 tend to zero as formula_22, and for formula_46 the derivative has the Fourier property as defined above.
5. For sufficiently large formula_47 the following holds:
formula_48
Under the above hypotheses we have the asymptotic formula
formula_49
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "f(t)"
},
{
"math_id": 1,
"text": "t \\geq 0"
},
{
"math_id": 2,
"text": "F(s) = \\int_0^\\infty e^{-st} f(t)\\,dt"
},
{
"math_id": 3,
"text": "s>0"
},
{
"math_id": 4,
"text": " F(s)"
},
{
"math_id": 5,
"text": "s=x+iy"
},
{
"math_id": 6,
"text": "x>0"
},
{
"math_id": 7,
"text": "F"
},
{
"math_id": 8,
"text": "y \\neq 0"
},
{
"math_id": 9,
"text": "F(x+iy)"
},
{
"math_id": 10,
"text": "F(iy)"
},
{
"math_id": 11,
"text": "x \\to +0"
},
{
"math_id": 12,
"text": "x \\geq 0"
},
{
"math_id": 13,
"text": "s=iy"
},
{
"math_id": 14,
"text": "F(s) = \\frac{C}{s} + \\psi(s),"
},
{
"math_id": 15,
"text": "\\psi(iy)"
},
{
"math_id": 16,
"text": "\\psi'(iy),\\ldots,\\psi^{(r)}(iy)"
},
{
"math_id": 17,
"text": "\\psi^{(r)}(iy)"
},
{
"math_id": 18,
"text": "\\int_0^\\infty e^{ity} F(x+iy) \\, dy"
},
{
"math_id": 19,
"text": "t \\geq T"
},
{
"math_id": 20,
"text": "T>0"
},
{
"math_id": 21,
"text": "F(x+iy) \\to 0"
},
{
"math_id": 22,
"text": "y \\to \\pm\\infty"
},
{
"math_id": 23,
"text": "F'(iy),\\ldots,F^{(r)}(iy)"
},
{
"math_id": 24,
"text": "\\int_{-\\infty}^{y_1} e^{ity} F^{(r)}(iy) \\, dy"
},
{
"math_id": 25,
"text": "\\int_{y_2}^\\infty e^{ity} F^{(r)}(iy) \\, dy"
},
{
"math_id": 26,
"text": "y_1 < 0"
},
{
"math_id": 27,
"text": "y_2 > 0"
},
{
"math_id": 28,
"text": "\\lim_{t \\to \\infty} t^r[f(t)-C] = 0."
},
{
"math_id": 29,
"text": "x>a"
},
{
"math_id": 30,
"text": "F(s)=F(x+iy)"
},
{
"math_id": 31,
"text": "y"
},
{
"math_id": 32,
"text": "\\delta>0"
},
{
"math_id": 33,
"text": "\\omega"
},
{
"math_id": 34,
"text": "\\Big| \\, \\int_\\alpha^\\beta e^{iyt} F(x+iy) \\, dy \\; \\Big| < \\delta"
},
{
"math_id": 35,
"text": "\\alpha,\\beta \\geq \\omega"
},
{
"math_id": 36,
"text": "\\alpha,\\beta \\leq -\\omega"
},
{
"math_id": 37,
"text": "F(s)"
},
{
"math_id": 38,
"text": "\\Re s = a"
},
{
"math_id": 39,
"text": "F(s) = \\sum_{j=1}^N \\frac{c_j}{(s-s_j)^{\\rho_j}} + \\psi(s)"
},
{
"math_id": 40,
"text": "s_j = a + i y_j"
},
{
"math_id": 41,
"text": "\\psi(a+iy)"
},
{
"math_id": 42,
"text": "n"
},
{
"math_id": 43,
"text": "\\left| \\frac{d^n \\psi(a+iy)}{dy^n} \\right|"
},
{
"math_id": 44,
"text": "\\frac{d^k F(a+iy)}{dy^k}"
},
{
"math_id": 45,
"text": "k=0,\\ldots,n-1"
},
{
"math_id": 46,
"text": "k=n"
},
{
"math_id": 47,
"text": "t"
},
{
"math_id": 48,
"text": "\\lim_{y \\to \\pm\\infty} \\int_{a+iy}^{x+iy} e^{st} F(s) \\, ds = 0"
},
{
"math_id": 49,
"text": "\\lim_{t \\to \\infty} t^n e^{-at} \\Big[ f(t) - \\sum_{j=1}^{N} \\frac{c_j}{\\Gamma(\\rho_j)} e^{s_j t} t^{\\rho_j - 1} \\Big] = 0."
}
] |
https://en.wikipedia.org/wiki?curid=63295970
|
632992
|
Paley–Wiener theorem
|
In mathematics, a Paley–Wiener theorem is any theorem that relates decay properties of a function or distribution at infinity with analyticity of its Fourier transform. It is named after Raymond Paley (1907–1933) and Norbert Wiener (1894–1964) who, in 1934, introduced various versions of the theorem. The original theorems did not use the language of distributions, and instead applied to square-integrable functions. The first such theorem using distributions was due to Laurent Schwartz. These theorems heavily rely on the triangle inequality (to interchange the absolute value and integration).
The original work by Paley and Wiener is also used as a namesake in the fields of control theory and harmonic analysis, introducing the Paley–Wiener condition for spectral factorization and the Paley–Wiener criterion for non-harmonic Fourier series, respectively. These are related mathematical concepts that place the decay properties of a function in the context of stability problems.
Holomorphic Fourier transforms.
The classical Paley–Wiener theorems make use of the holomorphic Fourier transform on classes of square-integrable functions supported on the real line. Formally, the idea is to take the integral defining the (inverse) Fourier transform
formula_0
and allow formula_1 to be a complex number in the upper half-plane. One may then expect to differentiate under the integral in order to verify that the Cauchy–Riemann equations hold, and thus that formula_2 defines an analytic function. However, this integral may not be well-defined, even for formula_3 in formula_4; indeed, since formula_1 is in the upper half plane, the modulus of formula_5 grows exponentially as formula_6; so differentiation under the integral sign is out of the question. One must impose further restrictions on formula_3 in order to ensure that this integral is well-defined.
The first such restriction is that formula_3 be supported on formula_7: that is, formula_8. The Paley–Wiener theorem now asserts the following: The holomorphic Fourier transform of formula_3, defined by
formula_9
for formula_1 in the upper half-plane is a holomorphic function. Moreover, by Plancherel's theorem, one has
formula_10
and by dominated convergence,
formula_11
Conversely, if formula_2 is a holomorphic function in the upper half-plane satisfying
formula_12
then there exists formula_8 such that formula_2 is the holomorphic Fourier transform of formula_3.
In abstract terms, this version of the theorem explicitly describes the Hardy space formula_13. The theorem states that
formula_14
This is a very useful result as it enables one to pass to the Fourier transform of a function in the Hardy space and perform calculations in the easily understood space
formula_15 of square-integrable functions supported on the positive axis.
By imposing the alternative restriction that formula_3 be compactly supported, one obtains another Paley–Wiener theorem. Suppose that formula_3 is supported in formula_16, so that formula_17. Then the holomorphic Fourier transform
formula_18
is an entire function of exponential type formula_19, meaning that there is a constant formula_20 such that
formula_21
and moreover, formula_2 is square-integrable over horizontal lines:
formula_22
Conversely, any entire function of exponential type formula_19 which is square-integrable over horizontal lines is the holomorphic Fourier transform of an
formula_23 function supported in formula_16.
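As a minimal numerical sketch (the example function is an assumption, not taken from the article), take formula_3 to be the indicator function of the interval [-1, 1]; its holomorphic Fourier transform is 2 sin(ζ)/ζ, an entire function of exponential type 1, and the growth bound formula_21 can be checked numerically:
import numpy as np

# Hypothetical example: F = indicator of [-1, 1], so A = 1 and
# f(zeta) = integral from -1 to 1 of exp(i*x*zeta) dx = 2*sin(zeta)/zeta.
A = 1.0
def f(zeta):
    zeta = complex(zeta)
    return 2.0 * A if zeta == 0 else 2.0 * np.sin(A * zeta) / zeta

for zeta in (1 + 1j, 5 + 3j, 10j, 20 + 20j):
    ratio = abs(f(zeta)) / np.exp(A * abs(zeta))
    # the ratio stays bounded, consistent with |f(zeta)| <= C * exp(A * |zeta|)
    print(zeta, abs(f(zeta)), ratio)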
Schwartz's Paley–Wiener theorem.
Schwartz's Paley–Wiener theorem asserts that the Fourier transform of a distribution of compact support on formula_24 is an entire function on formula_25 and gives estimates on its growth at infinity. It was proven by Laurent Schwartz (1952). The formulation presented here is as follows.
Generally, the Fourier transform can be defined for any tempered distribution; moreover, any distribution of compact support formula_26 is a tempered distribution. If formula_26 is a distribution of compact support and formula_2 is an infinitely differentiable function, the expression
formula_27
is well defined.
It can be shown that the Fourier transform of formula_26 is a function (as opposed to a general tempered distribution) given at the value formula_28 by
formula_29
and that this function can be extended to values of formula_28 in the complex space formula_25. This extension of the Fourier transform to the complex domain is called the Fourier–Laplace transform.
<templatestyles src="Math_theorem/styles.css" />
Schwartz's theorem — An entire function formula_3 on formula_25 is the Fourier–Laplace transform of a distribution formula_26 of compact support if and only if for all formula_30,
formula_31
for some constants formula_20, formula_32, formula_33. The distribution formula_26 in fact will be supported in the closed ball of center formula_34
and radius formula_33.
Additional growth conditions on the entire function formula_3 impose regularity properties on the distribution formula_26.
For instance:
<templatestyles src="Math_theorem/styles.css" />
Theorem — If for every positive formula_32 there is a constant formula_35 such that for all formula_30,
formula_36
then formula_26 is an infinitely differentiable function, and vice versa.
Sharper results giving good control over the singular support of formula_26 have also been formulated. In particular, let formula_37 be a convex compact set in formula_24 with supporting function formula_38, defined by
formula_39
Then the singular support of formula_26 is contained in formula_37 if and only if there is a constant formula_32 and sequence of constants formula_40 such that
formula_41
for formula_42
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "f(\\zeta) = \\int_{-\\infty}^\\infty F(x)e^{i x \\zeta}\\,dx"
},
{
"math_id": 1,
"text": "\\zeta"
},
{
"math_id": 2,
"text": "f"
},
{
"math_id": 3,
"text": "F"
},
{
"math_id": 4,
"text": "L^2(\\mathbb{R})"
},
{
"math_id": 5,
"text": "e^{ix\\zeta}"
},
{
"math_id": 6,
"text": "x \\to -\\infty"
},
{
"math_id": 7,
"text": "\\mathbb{R}_+"
},
{
"math_id": 8,
"text": "F\\in L^2(\\mathbb{R}_+)"
},
{
"math_id": 9,
"text": "f(\\zeta) = \\int_0^\\infty F(x) e^{i x\\zeta}\\, dx"
},
{
"math_id": 10,
"text": "\\int_{-\\infty}^\\infty \\left |f(\\xi+i\\eta) \\right|^2\\, d\\xi \\le \\int_0^\\infty |F(x)|^2\\, dx"
},
{
"math_id": 11,
"text": "\\lim_{\\eta\\to 0^+}\\int_{-\\infty}^\\infty \\left|f(\\xi+i\\eta)-f(\\xi) \\right|^2\\,d\\xi = 0."
},
{
"math_id": 12,
"text": "\\sup_{\\eta>0} \\int_{-\\infty}^\\infty \\left |f(\\xi+i\\eta) \\right|^2\\,d\\xi = C < \\infty"
},
{
"math_id": 13,
"text": "H^2(\\mathbb{R})"
},
{
"math_id": 14,
"text": " \\mathcal{F}H^2(\\mathbb{R})=L^2(\\mathbb{R_+})."
},
{
"math_id": 15,
"text": "L^2(\\mathbb{R}_+)"
},
{
"math_id": 16,
"text": "[-A,A]"
},
{
"math_id": 17,
"text": "F\\in L^2(-A,A)"
},
{
"math_id": 18,
"text": "f(\\zeta) = \\int_{-A}^A F(x)e^{i x\\zeta}\\,dx"
},
{
"math_id": 19,
"text": "A"
},
{
"math_id": 20,
"text": "C"
},
{
"math_id": 21,
"text": "|f(\\zeta)|\\le Ce^{A|\\zeta|},"
},
{
"math_id": 22,
"text": "\\int_{-\\infty}^{\\infty} |f(\\xi+i\\eta)|^2\\,d\\xi < \\infty."
},
{
"math_id": 23,
"text": "L^2"
},
{
"math_id": 24,
"text": "\\mathbb{R}^n"
},
{
"math_id": 25,
"text": "\\mathbb{C}^n"
},
{
"math_id": 26,
"text": "v"
},
{
"math_id": 27,
"text": " v(f) = v(x\\mapsto f(x)) "
},
{
"math_id": 28,
"text": "s"
},
{
"math_id": 29,
"text": " \\hat{v}(s) = (2 \\pi)^{-\\frac{n}{2}} v\\left(x\\mapsto e^{-i \\langle x, s\\rangle}\\right)"
},
{
"math_id": 30,
"text": "z\\in\\mathbb{C}^n"
},
{
"math_id": 31,
"text": " |F(z)| \\leq C (1 + |z|)^N e^{B|\\text{Im}(z)|} "
},
{
"math_id": 32,
"text": "N"
},
{
"math_id": 33,
"text": "B"
},
{
"math_id": 34,
"text": "0"
},
{
"math_id": 35,
"text": "C_N"
},
{
"math_id": 36,
"text": " |F(z)| \\leq C_N (1 + |z|)^{-N} e^{B|\\mathrm{Im}(z)|} "
},
{
"math_id": 37,
"text": "K"
},
{
"math_id": 38,
"text": "H"
},
{
"math_id": 39,
"text": "H(x) = \\sup_{y\\in K} \\langle x,y\\rangle."
},
{
"math_id": 40,
"text": "C_m"
},
{
"math_id": 41,
"text": "|\\hat{v}(\\zeta)| \\le C_m(1+|\\zeta|)^Ne^{H(\\mathrm{Im}(\\zeta))}"
},
{
"math_id": 42,
"text": "|\\mathrm{Im}(\\zeta)| \\le m \\log(| \\zeta |+1)."
}
] |
https://en.wikipedia.org/wiki?curid=632992
|
63302782
|
Random surfing model
|
Model of web browser usage
The random surfing model is a graph model which describes the probability of a random user visiting a web page. The model attempts to predict the chance that a random internet surfer will arrive at a page either by clicking a link or by accessing the site directly, for example by entering the website's URL in the address bar. To capture direct access, the model assumes that all users surfing the internet will eventually stop following links in favor of switching to another site completely. The model is similar to a Markov chain, whose states are the web pages the user lands on and whose transitions are the equally probable links between these pages.
Description.
A user navigates the internet in two primary ways: the user may access a site directly, by entering the site's URL or clicking a bookmark, or the user may follow a series of hyperlinks to reach the desired page. The random surfer model assumes that the next link the user follows is picked at random. The model also assumes that the number of successive links followed is not infinite – the user will at some point lose interest and leave the current site for a completely new site.
The random surfer model is presented as a series of nodes which represent web pages that can be accessed at random by users. A new node is added to the graph when a new website is published. Movement among the graph's nodes is modeled by choosing a start node at random and then performing a short random traversal of the nodes, or random walk. This traversal is analogous to a user accessing a website and then following hyperlinks formula_0 times, until the user either exits the page or accesses another site completely. Connections to other nodes in this graph are formed when outbound links are placed on a page.
Graph definitions.
In the random surfing model, webgraphs are presented as a sequence of directed graphs formula_1 such that a graph formula_2 has formula_0 vertices and formula_0 edges. The process of defining graphs is parameterized with a probability formula_3, thus we let formula_4.
Nodes of the model arrive one at a time, forming formula_5 connections to the existing graph formula_6. In some models the connections represent directed edges, and in others they represent undirected edges. Models start with a single node formula_7 with formula_5 self-loops. formula_8 denotes the vertex added in the formula_9 step, and formula_10 denotes the total number of vertices.
Model 1. (1-step walk with self-loop).
At time formula_0, vertex formula_8 makes formula_5 connections by formula_5 iterations of the following steps:
For directed graphs, edges added are directed from formula_8 into the existing graph. Edges are undirected in respective undirected graphs.
Model 2. (Random walks with coin flips).
At time formula_0, vertex formula_8 makes formula_5 connections by formula_5 iterations of the following steps:
For directed graphs, edges added are directed from formula_8 into the existing graph. Edges are undirected in respective undirected graphs.
Limitations.
There are some caveats to the standard random surfer model, one of which is that it ignores the content of the sites that users select, since it assumes links are chosen at random. Because users tend to have a goal in mind when surfing the internet, the content of the linked sites is a determining factor in whether or not the user will click a link.
Application.
The normalized eigenvector centrality, combined with the random surfer model's assumption of random jumps, forms the foundation of Google's PageRank algorithm.
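A minimal sketch of this connection (the four-page web below is a hypothetical example, not taken from the article): with probability d the surfer follows a uniformly random outgoing link and with probability 1 - d jumps to a uniformly random page; the stationary distribution of this process, computed here by power iteration, is the PageRank vector.
import numpy as np

# Hypothetical adjacency list: page -> pages it links to
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n, d = 4, 0.85          # d = probability of following a link rather than jumping

# Column-stochastic matrix of the "follow a uniformly random outgoing link" step
M = np.zeros((n, n))
for page, outs in links.items():
    for target in outs:
        M[target, page] = 1.0 / len(outs)

rank = np.full(n, 1.0 / n)
for _ in range(100):    # power iteration for the stationary distribution
    rank = (1 - d) / n + d * M @ rank
print(rank, rank.sum())  # PageRank scores; they sum to 1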
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "t"
},
{
"math_id": 1,
"text": "G_t,t = 1,2,\\ldots"
},
{
"math_id": 2,
"text": "G_t"
},
{
"math_id": 3,
"text": "p"
},
{
"math_id": 4,
"text": "q= 1-p"
},
{
"math_id": 5,
"text": "k"
},
{
"math_id": 6,
"text": "G_t\n"
},
{
"math_id": 7,
"text": "v_0"
},
{
"math_id": 8,
"text": "v_t"
},
{
"math_id": 9,
"text": "t^{th}"
},
{
"math_id": 10,
"text": "n"
},
{
"math_id": 11,
"text": "v"
},
{
"math_id": 12,
"text": "\\{ v_0, v_1, \\ldots , v_{t-1} \\}"
},
{
"math_id": 13,
"text": "1 - p"
},
{
"math_id": 14,
"text": "\\{ v_0, v_1, ..., v_{t-1} \\}"
}
] |
https://en.wikipedia.org/wiki?curid=63302782
|
6331168
|
Bagnold formula
|
Formula relating wind speed and mass transport
The Bagnold formula, named after Ralph Alger Bagnold, relates the amount of sand moved by the wind through saltation to the wind speed. It states that the mass transport of sand is proportional to the third power of the friction velocity. Under steady conditions, this implies that the mass transport is proportional to the third power of the excess of the wind speed (at any fixed height above the sand surface) over the minimum wind speed that is able to activate and sustain a continuous flow of sand grains.
The formula was derived by Bagnold in 1936 and later published in his book "The Physics of Blown Sand and Desert Dunes" in 1941. Wind tunnel and field experiments suggest that the formula is basically correct. It has since been modified by several researchers, but is still considered to be the benchmark formula.
In its simplest form, Bagnold's formula may be expressed as:
formula_0
where "q" represents the mass transport of sand across a lane of unit width; "C" is a dimensionless constant of order unity that depends on the sand sorting; "formula_1" is the density of air; "g" is the local gravitational acceleration; "d" is the reference grain size for the sand; "D" is the nearly uniform grain size originally used in Bagnold's experiments (250 micrometres); and, finally, formula_2 is the friction velocity, proportional to the square root of the shear stress between the wind and the sheet of moving sand.
The formula is valid in dry (desert) conditions; the effects of sand moisture, which are at play in most coastal dunes, are therefore not included.
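For illustration, a minimal sketch evaluating the formula (the parameter values below, including C = 1.8 and a standard air density, are typical assumptions rather than values taken from the article):
import math

def bagnold_flux(u_star, C=1.8, rho=1.225, g=9.81, d=250e-6, D=250e-6):
    # q = C * (rho / g) * sqrt(d / D) * u_*^3, in kg per metre of lane width per second
    return C * (rho / g) * math.sqrt(d / D) * u_star ** 3

for u_star in (0.3, 0.5, 0.8):   # friction velocities in m/s
    print(u_star, bagnold_flux(u_star))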
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "q = C \\ \\frac{\\rho}{g}\\ \\sqrt{\\frac{d}{D}} u_*^3"
},
{
"math_id": 1,
"text": "\\rho"
},
{
"math_id": 2,
"text": "u_*"
}
] |
https://en.wikipedia.org/wiki?curid=6331168
|
63313306
|
McKay–Miller–Širáň graph
|
In graph theory, the McKay–Miller–Širáň graphs are an infinite class of vertex-transitive graphs with diameter two, and with a large number of vertices relative to their diameter and degree. They are named after Brendan McKay, Mirka Miller, and Jozef Širáň, who first constructed them using voltage graphs in 1998.
Background.
The context for the construction of these graphs is the degree diameter problem in graph theory, which seeks the largest possible graph for each combination of degree and diameter. For graphs of diameter two, every vertex can be reached in two steps from an arbitrary starting vertex, and if the degree is formula_0 then at most formula_0 vertices can be reached in one step and another formula_1 in two steps, giving the Moore bound that the total number of vertices can be at most formula_2. However, only four graphs are known to reach this bound: a single edge (degree one), a 5-vertex cycle graph (degree two), the Petersen graph (degree three), and the Hoffman–Singleton graph (degree seven). Only one more of these Moore graphs can exist, with degree 57. For all other degrees, the maximum number of vertices in a diameter-two graph must be smaller.
Until the construction of the McKay–Miller–Širáň graphs, the only known construction achieved a number of vertices equal to
formula_3
using a Cayley graph construction.
The McKay–Miller–Širáň graphs, instead, have a number of vertices equal to
formula_4
for infinitely many values of formula_0. The degrees formula_0 for which their construction works are the ones for which formula_5 is a prime power and is congruent to 1 modulo 4. These possible degrees are the numbers
7, 13, 19, 25, 37, 43, 55, 61, 73, 79, 91, ...
The first number in this sequence, 7, is the degree of the Hoffman–Singleton graph, and the McKay–Miller–Širáň graph of degree seven is the Hoffman–Singleton graph. The same construction can also be applied to degrees formula_0 for which formula_5 is a prime power but is 0 or −1 mod 4. In these cases, it still produces a graph with the same formulas for its size, diameter, and degree, but these graphs are not in general vertex-transitive.
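As a quick check of these counts (a minimal sketch, not part of the original construction), the following Python code compares the McKay–Miller–Širáň vertex count formula_4 with the Moore bound formula_2 for the degrees listed above:
# For each listed degree d, compute q = (2d + 1)/3 (a prime power congruent to 1 mod 4),
# the McKay–Miller–Širáň vertex count (8/9)(d + 1/2)^2, and the Moore bound d^2 + 1.
for d in (7, 13, 19, 25, 37, 43, 55, 61, 73, 79, 91):
    q = (2 * d + 1) // 3
    vertices = 8 * (d + 0.5) ** 2 / 9
    print(d, q, int(vertices), d * d + 1)   # e.g. d = 7 gives 50 vertices, matching the Moore bound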
Subsequent to the construction of the McKay–Miller–Širáň graphs, other graphs with an even larger number of vertices, formula_6 fewer than the Moore bound, were constructed. However, these cover a significantly more restricted set of degrees than the McKay–Miller–Širáň graphs.
Constructions.
The original construction of McKay, Miller, and Širáň used the voltage graph method to construct these graphs as covering graphs of the graph formula_7, where formula_8 is a prime power and where formula_7 is formed from a complete bipartite graph formula_9 by attaching formula_10 self-loops to each vertex.
A later construction again uses the voltage graph method, but applies it to a simpler graph: a dipole graph with formula_11 parallel edges, modified in the same way by attaching the same number of self-loops to each vertex.
It is also possible to construct the McKay–Miller–Širáň graphs by modifying the Levi graph of an affine plane over the field of order formula_11.
Additional properties.
The spectrum of a McKay–Miller–Širáň graph has at most five distinct eigenvalues. In some of these graphs, all of these values are integers, so that the graph is an integral graph.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "d"
},
{
"math_id": 1,
"text": "d(d-1)"
},
{
"math_id": 2,
"text": "d^2+1"
},
{
"math_id": 3,
"text": "\\left\\lfloor\\frac{d+2}{2}\\right\\rfloor\\cdot\\left\\lceil\\frac{d+2}{2}\\right\\rceil,"
},
{
"math_id": 4,
"text": "\\frac{8}{9}\\left( d+\\frac{1}{2} \\right)^2,"
},
{
"math_id": 5,
"text": "(2d+1)/3"
},
{
"math_id": 6,
"text": "O(d^{3/2})"
},
{
"math_id": 7,
"text": "K^*_{q,q}"
},
{
"math_id": 8,
"text": "q=(2d+1)/3"
},
{
"math_id": 9,
"text": "K_{q,q}"
},
{
"math_id": 10,
"text": "(q-1)/4"
},
{
"math_id": 11,
"text": "q"
}
] |
https://en.wikipedia.org/wiki?curid=63313306
|
63326139
|
Integrally convex set
|
An integrally convex set is the discrete geometry analogue of the concept of convex set in geometry.
A subset "X" of the integer grid formula_0 is integrally convex if any point "y" in the convex hull of "X" can be expressed as a convex combination of the points of "X" that are "near" "y", where "near" means that the distance in each coordinate is less than 1.
Definitions.
Let "X" be a subset of formula_0.
Denote by ch("X") the convex hull of "X". Note that ch("X") is a subset of formula_1, since it contains all the real points that are convex combinations of the integer points in "X".
For any point "y" in formula_1, denote near("y") := {"z" in formula_0 | |"zi" - "yi"| < 1 for all "i" in {1,...,"n"} }. These are the integer points that are considered "nearby" to the real point "y".
A subset "X" of formula_0 is called integrally convex if every point "y" in ch("X") is also in ch("X" ∩ near("y")).
Example.
Let "n" = 2 and let "X" = { (0,0), (1,0), (2,0), (2,1) }. Its convex hull ch("X") contains, for example, the point "y" = (1.2, 0.5).
The integer points nearby "y" are near("y") = {(1,0), (2,0), (1,1), (2,1) }. So "X" ∩ near("y") = {(1,0), (2,0), (2,1)}. But "y" is not in ch("X" ∩ near("y")). See image at the right.
Therefore "X" is not integrally convex.
In contrast, the set "Y" = { (0,0), (1,0), (2,0), (1,1), (2,1) } is integrally convex.
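The membership tests in this example can be automated. The following Python sketch (which assumes SciPy is available; the helper in_convex_hull is ours, not a standard routine) checks the point "y" = (1.2, 0.5) from the example against ch("X") and ch("X" ∩ near("y")):
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(point, points):
    # True iff `point` is a convex combination of the rows of `points`:
    # find weights w >= 0 with sum(w) = 1 and points^T w = point.
    points = np.asarray(points, dtype=float)
    k = len(points)
    A_eq = np.vstack([points.T, np.ones(k)])
    b_eq = np.append(np.asarray(point, dtype=float), 1.0)
    return linprog(np.zeros(k), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * k).success

X = [(0, 0), (1, 0), (2, 0), (2, 1)]
y = (1.2, 0.5)
near_y = [z for z in X if all(abs(zi - yi) < 1 for zi, yi in zip(z, y))]  # X ∩ near(y)
print(in_convex_hull(y, X))        # True:  y lies in ch(X)
print(near_y)                      # [(1, 0), (2, 0), (2, 1)]
print(in_convex_hull(y, near_y))   # False: y is not in ch(X ∩ near(y)), so X is not integrally convex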
Properties.
Iimura, Murota, and Tamura have shown the following property of integrally convex sets.
Let formula_2 be a finite integrally convex set. There exists a triangulation of ch("X") that is "integral", i.e., its vertices are points of "X", and each of its simplices is contained in a single cell (a unit hypercube of the integer grid).
The example set "X" is not integrally convex, and indeed ch("X") does not admit an integral triangulation: every triangulation of ch("X") either has to add vertices not in "X", or has to include simplices that are not contained in a single cell.
In contrast, the set "Y" = { (0,0), (1,0), (2,0), (1,1), (2,1) } is integrally convex, and indeed admits an integral triangulation, e.g. with the three simplices {(0,0),(1,0),(1,1)} and {(1,0),(2,0),(2,1)} and {(1,0),(1,1),(2,1)}. See image at the right.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbb{Z}^n"
},
{
"math_id": 1,
"text": "\\mathbb{R}^n"
},
{
"math_id": 2,
"text": "X\\subset \\mathbb{Z}^n"
}
] |
https://en.wikipedia.org/wiki?curid=63326139
|