67459232
Gowers' theorem
In mathematics, Gowers' theorem, also known as Gowers' Ramsey theorem and Gowers' FIN"k" theorem, is a theorem in Ramsey theory and combinatorics. It is a Ramsey-theoretic result about functions with finite support. Timothy Gowers originally proved the result in 1992, motivated by a problem regarding Banach spaces. The result was subsequently generalised by Bartošová, Kwiatkowska, and Lupini. Definitions. The presentation and notation are taken from Todorčević, and differ from those originally given by Gowers. For a function formula_0, the support of formula_1 is defined as formula_2. Given formula_3, let formula_4 be the set formula_5 If formula_6, formula_7 have disjoint supports, we define formula_8 to be their pointwise sum, where formula_9. Each formula_4 is a partial semigroup under formula_10. The "tetris operation" formula_11 is defined by formula_12. Intuitively, if formula_1 is represented as a pile of square blocks, where the formula_13th column has height formula_14, then formula_15 is the result of removing the bottom row. The name is in analogy with the video game. formula_16 denotes the formula_17th iterate of formula_18. A "block sequence" formula_19 in formula_4 is one such that formula_20 for every formula_21. The theorem. Note that, for a block sequence formula_19, numbers formula_22 and indices formula_23, the sum formula_24 is always defined. Gowers' original theorem states that, for any finite colouring of formula_4, there is a block sequence formula_19 such that all elements of the form formula_24 have the same colour. The standard proof uses ultrafilters, or equivalently, nonstandard arithmetic. Generalisation. Intuitively, the tetris operation can be seen as removing the bottom row of a pile of boxes. It is natural to ask what would happen if we tried removing different rows. Bartošová and Kwiatkowska considered the wider class of "generalised tetris operations", where we can remove any chosen subset of the rows. Formally, let formula_25 be a nondecreasing surjection. The induced tetris operation formula_26 is given by composition with formula_27, i.e. formula_28. The generalised tetris operations are the collection of formula_29 for all nondecreasing surjections formula_25. In this language, the original tetris operation is induced by the map formula_30. Bartošová and Kwiatkowska showed that the finite version of Gowers' theorem holds for the collection of generalised tetris operations. Lupini later extended this result to the infinite case.
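To make these definitions concrete, the following is a minimal Python sketch (illustrative only, not part of the original presentation; the function names are invented here). It represents an element of formula_4 as a dictionary mapping the finitely many supported positions to their positive values, and implements the pointwise sum on disjoint supports and the tetris operation.

```python
# Illustrative sketch of the objects in Gowers' theorem. An element of FIN_k
# is a map N -> N with finite support whose maximum value is exactly k; here
# it is stored as a dict {position: positive value}.

def max_value(f):
    """The k for which f lies in FIN_k."""
    return max(f.values())

def add(f, g):
    """Pointwise sum, defined only when the supports are disjoint."""
    assert not set(f) & set(g), "supports must be disjoint"
    return {**f, **g}

def tetris(f):
    """The tetris operation T: FIN_{k+1} -> FIN_k, removing the bottom row."""
    return {n: v - 1 for n, v in f.items() if v > 1}

f = {0: 2, 1: 1}              # columns of heights 2 and 1; an element of FIN_2
g = {5: 2}                    # disjoint support; also in FIN_2
print(tetris(f))              # {0: 1} -- now an element of FIN_1
print(add(tetris(f), g))      # {0: 1, 5: 2} -- maximum value 2, so it lies in FIN_2
```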
[ { "math_id": 0, "text": "f\\colon \\N \\to \\N" }, { "math_id": 1, "text": "f" }, { "math_id": 2, "text": "\\operatorname{supp}(f) = \\{ n : f(n) \\neq 0 \\}" }, { "math_id": 3, "text": "k \\in \\N" }, { "math_id": 4, "text": "\\mathrm{FIN}_k" }, { "math_id": 5, "text": "\\mathrm{FIN}_k = \\big\\{ f\\colon \\N \\to \\N \\mid \\operatorname{supp}(f) \\text{ is finite and } \\max(\\operatorname{range}(f)) = k \\big\\}" }, { "math_id": 6, "text": "f \\in \\mathrm{FIN}_n" }, { "math_id": 7, "text": "g \\in \\mathrm{FIN}_m" }, { "math_id": 8, "text": "f+g \\in \\mathrm{FIN}_k" }, { "math_id": 9, "text": "k = \\max \\{ n, m \\}" }, { "math_id": 10, "text": "+" }, { "math_id": 11, "text": "T\\colon \\mathrm{FIN}_{k+1} \\to \\mathrm{FIN}_k" }, { "math_id": 12, "text": "T(f)(n) = \\max \\{ 0, f(n)-1 \\}" }, { "math_id": 13, "text": "n" }, { "math_id": 14, "text": "f(n)" }, { "math_id": 15, "text": "T(f)" }, { "math_id": 16, "text": "T^{(k)}" }, { "math_id": 17, "text": "k" }, { "math_id": 18, "text": "T" }, { "math_id": 19, "text": "(f_n)" }, { "math_id": 20, "text": "\\max(\\operatorname{supp}(f_m)) < \\min(\\operatorname{supp}(f_{m+1}))" }, { "math_id": 21, "text": "m" }, { "math_id": 22, "text": "k_1, \\ldots, k_n" }, { "math_id": 23, "text": "m_1 < \\cdots < m_n" }, { "math_id": 24, "text": "T^{(k_1)}(f_{m_1}) + \\cdots + T^{(k_n)}(f_{m_n})" }, { "math_id": 25, "text": "F\\colon \\N \\to \\N" }, { "math_id": 26, "text": "\\hat{F}\\colon \\mathrm{FIN}_k \\to \\mathrm{FIN}_{F(k)}" }, { "math_id": 27, "text": "F" }, { "math_id": 28, "text": "\\hat{F}(f)(n) = F(f(n))" }, { "math_id": 29, "text": "\\hat{F}" }, { "math_id": 30, "text": "T\\colon n \\mapsto \\max \\{ n-1, 0 \\}" } ]
https://en.wikipedia.org/wiki?curid=67459232
67465897
Affine symmetric group
The affine symmetric groups are a family of mathematical structures that describe the symmetries of the number line and the regular triangular tiling of the plane, as well as related higher-dimensional objects. In addition to this geometric description, the affine symmetric groups may be defined in other ways: as collections of permutations (rearrangements) of the integers (..., −2, −1, 0, 1, 2, ...) that are periodic in a certain sense, or in purely algebraic terms as a group with certain generators and relations. They are studied in combinatorics and representation theory. A finite symmetric group consists of all permutations of a finite set. Each affine symmetric group is an infinite extension of a finite symmetric group. Many important combinatorial properties of the finite symmetric groups can be extended to the corresponding affine symmetric groups. Permutation statistics such as descents and inversions can be defined in the affine case. As in the finite case, the natural combinatorial definitions for these statistics also have a geometric interpretation. The affine symmetric groups have close relationships with other mathematical objects, including juggling patterns and certain complex reflection groups. Many of their combinatorial and geometric properties extend to the broader family of affine Coxeter groups. Definitions. The affine symmetric group may be equivalently defined as an abstract group by generators and relations, or in terms of concrete geometric and combinatorial models. Algebraic definition. One way of defining groups is by generators and relations. In this type of definition, generators are a subset of group elements that, when combined, produce all other elements. The relations of the definition are a system of equations that determine when two combinations of generators are equal. In this way, the affine symmetric group formula_0 is generated by a set formula_1 of n elements that satisfy the following relations when formula_2: formula_3 (each generator is an involution), formula_4 if j is not one of formula_5, and formula_6. In the relations above, indices are taken modulo n, so that the third relation includes as a particular case formula_7. (The second and third relation are sometimes called the braid relations.) When formula_8, the affine symmetric group formula_9 is the infinite dihedral group generated by two elements formula_10 subject only to the relations formula_11. These relations can be rewritten in the special form that defines the Coxeter groups, so the affine symmetric groups are Coxeter groups, with the formula_12 as their Coxeter generating sets. Each Coxeter group may be represented by a Coxeter–Dynkin diagram, in which vertices correspond to generators and edges encode the relations between them. For formula_13, the Coxeter–Dynkin diagram of formula_0 is the n-cycle (where the edges correspond to the relations between pairs of consecutive generators and the absence of an edge between other pairs of generators indicates that they commute), while for formula_8 it consists of two nodes joined by an edge labeled formula_14. Geometric definition. In the Euclidean space formula_15 with coordinates formula_16, the set V of points for which formula_17 forms a (hyper)plane, an ("n" − 1)-dimensional subspace. For every pair of distinct elements i and j of formula_18 and every integer k, the set of points in V that satisfy formula_19 forms an ("n" − 2)-dimensional subspace within V, and there is a unique reflection of V that fixes this subspace. 
Then the affine symmetric group formula_0 can be realized geometrically as a collection of maps from V to itself, the compositions of these reflections. Inside V, the subset of points with integer coordinates forms the "root lattice", Λ. It is the set of all the integer vectors formula_20 such that formula_21. Each reflection preserves this lattice, and so the lattice is preserved by the whole group. The fixed subspaces of these reflections divide V into congruent simplices, called "alcoves". The situation when formula_22 is shown in the figure; in this case, the root lattice is a triangular lattice, the reflecting lines divide V into equilateral triangle alcoves, and the roots are the centers of nonoverlapping hexagons made up of six triangular alcoves. To translate between the geometric and algebraic definitions, one fixes an alcove and considers the n hyperplanes that form its boundary. The reflections through these boundary hyperplanes may be identified with the Coxeter generators. In particular, there is a unique alcove (the "fundamental alcove") consisting of points formula_16 such that formula_23, which is bounded by the hyperplanes formula_24 formula_25 ..., and formula_26 illustrated in the case formula_22. For formula_27, one may identify the reflection through formula_28 with the Coxeter generator formula_12, and also identify the reflection through formula_29 with the generator formula_30. Combinatorial definition. The elements of the affine symmetric group may be realized as a group of periodic permutations of the integers. In particular, say that a function formula_31 is an "affine permutation" if it is a bijection (each integer appears as the value formula_32 for exactly one formula_33), formula_34 for all integers formula_33 (the function is equivariant under shifting by formula_35), and formula_36, the formula_35th triangular number. For every affine permutation, and more generally every shift-equivariant bijection, the numbers formula_37 must all be distinct modulo n. An affine permutation is uniquely determined by its "window notation" formula_38, because all other values of formula_39 can be found by shifting these values. Thus, affine permutations may also be identified with tuples formula_38 of integers that contain one element from each congruence class modulo n and sum to formula_40. To translate between the combinatorial and algebraic definitions, for formula_27 one may identify the Coxeter generator formula_12 with the affine permutation that has window notation formula_41, and also identify the generator formula_30 with the affine permutation formula_42. More generally, every reflection (that is, a conjugate of one of the Coxeter generators) can be described uniquely as follows: for distinct integers i, j in formula_18 and arbitrary integer k, it maps i to "j" − "kn", maps j to "i" + "kn", and fixes all inputs not congruent to i or j modulo n. Representation as matrices. Affine permutations can be represented as infinite periodic permutation matrices. If formula_43 is an affine permutation, the corresponding matrix has entry 1 at position formula_44 in the infinite grid formula_45 for each integer i, and all other entries are equal to 0. Since u is a bijection, the resulting matrix contains exactly one 1 in every row and column. The periodicity condition on the map u ensures that the entry at position formula_46 is equal to the entry at position formula_47 for every pair of integers formula_46. For example, a portion of the matrix for the affine permutation formula_48 is shown in the figure. In row 1, there is a 1 in column 2; in row 2, there is a 1 in column 0; and in row 3, there is a 1 in column 4. 
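The combinatorial definition lends itself to a short computational illustration. The following Python sketch (not part of the article; names are illustrative) checks the window conditions and extends a window notation to a bijection of the integers using the periodicity rule, reproducing the values from the matrix example above.

```python
# Illustrative sketch: an affine permutation from its window notation
# [u(1), ..., u(n)], extended to all integers by u(x + n) = u(x) + n.

def is_window(w):
    n = len(w)
    one_per_residue = len({x % n for x in w}) == n    # one entry per congruence class mod n
    correct_sum = sum(w) == n * (n + 1) // 2          # entries sum to 1 + 2 + ... + n
    return one_per_residue and correct_sum

def apply(w, x):
    """The value u(x) of the affine permutation with window notation w."""
    n = len(w)
    q, r = divmod(x - 1, n)        # write x = (r + 1) + q*n with r + 1 in {1, ..., n}
    return w[r] + q * n

w = [2, 0, 4]                      # the affine permutation from the matrix example
print(is_window(w))                # True
print([apply(w, x) for x in range(-2, 7)])
# [-1, -3, 1, 2, 0, 4, 5, 3, 7] -- for example u(1) = 2, u(2) = 0, u(3) = 4, u(4) = u(1) + 3 = 5
```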
The rest of the entries in those rows and columns are all 0, and all the other entries in the matrix are fixed by the periodicity condition. Relationship to the finite symmetric group. The affine symmetric group formula_0 contains the finite symmetric group formula_49 of permutations on formula_35 elements as both a subgroup and a quotient group. These connections allow a direct translation between the combinatorial and geometric definitions of the affine symmetric group. As a subgroup. There is a canonical way to choose a subgroup of formula_0 that is isomorphic to the finite symmetric group formula_49. In terms of the algebraic definition, this is the subgroup of formula_0 generated by formula_50 (excluding the simple reflection formula_51). Geometrically, this corresponds to the subgroup of transformations that fix the origin, while combinatorially it corresponds to the window notations for which formula_52 (that is, in which the window notation is the one-line notation of a finite permutation). If formula_53 is the window notation of an element of this standard copy of formula_54, its action on the hyperplane V in formula_55 is given by permutation of coordinates: formula_56. (In this article, the geometric action of permutations and affine permutations is on the right; thus, if u and v are two affine permutations, the action of "uv" on a point is given by first applying u, then applying v.) There are also many nonstandard copies of formula_49 contained in formula_0. A geometric construction is to pick any point a in Λ (that is, an integer vector whose coordinates sum to 0); the subgroup formula_57 of formula_0 of isometries that fix a is isomorphic to formula_49. As a quotient. There is a simple map (technically, a surjective group homomorphism) π from formula_0 onto the finite symmetric group formula_49. In terms of the combinatorial definition, an affine permutation can be mapped to a permutation by reducing the window entries modulo n to elements of formula_58, leaving the one-line notation of a permutation. In this article, the image formula_59 of an affine permutation u is called the "underlying permutation" of u. The map π sends the Coxeter generator formula_60 to the permutation whose one-line notation and cycle notation are formula_61 and formula_62, respectively. The kernel of π is by definition the set of affine permutations whose underlying permutation is the identity. The window notations of such affine permutations are of the form formula_63, where formula_64 is an integer vector such that formula_65, that is, where formula_66. Geometrically, this kernel consists of the translations, the isometries that shift the entire space V without rotating or reflecting it. In an abuse of notation, the symbol Λ is used in this article for all three of these sets (integer vectors in V, affine permutations with underlying permutation the identity, and translations); in all three settings, the natural group operation turns Λ into an abelian group, generated freely by the "n" − 1 vectors formula_67. Connection between the geometric and combinatorial definitions. The affine symmetric group formula_0 has Λ as a normal subgroup, and is isomorphic to the semidirect product formula_68 of this subgroup with the finite symmetric group formula_49, where the action of formula_49 on Λ is by permutation of coordinates. 
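As a hedged illustration of the quotient and kernel just described (the code and names are mine, not the article's), the sketch below recovers from a window notation both the underlying permutation formula_59, by reducing the window entries modulo n, and the lattice vector recording the translation part; the general relationship between these two pieces is made precise in the next subsection.

```python
# Illustrative sketch: split a window notation [u(1), ..., u(n)] into the
# underlying permutation r (entries reduced mod n into {1, ..., n}) and the
# integer vector (a_1, ..., a_n) with u(i) = r_i - a_i * n; the a_i sum to 0,
# so the vector lies in the root lattice described above.

def decompose(w):
    n = len(w)
    r = [((x - 1) % n) + 1 for x in w]           # underlying permutation, one-line notation
    a = [(ri - x) // n for ri, x in zip(r, w)]   # translation part
    return r, a

print(decompose([2, 0, 4]))    # ([2, 3, 1], [0, 1, -1])
```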
Consequently, every element u of formula_0 has a unique realization as a product formula_69 where formula_70 is a permutation in the standard copy of formula_49 in formula_0 and formula_71 is a translation in Λ. This point of view allows for a direct translation between the combinatorial and geometric definitions of formula_0: if one writes formula_72 where formula_73 and formula_74 then the affine permutation u corresponds to the rigid motion of V defined by formula_75 Furthermore, as with every affine Coxeter group, the affine symmetric group acts transitively and freely on the set of alcoves: for each two alcoves, a unique group element takes one alcove to the other. Hence, making an arbitrary choice of alcove formula_76 places the group in one-to-one correspondence with the alcoves: the identity element corresponds to formula_76, and every other group element g corresponds to the alcove formula_77 that is the image of formula_76 under the action of g. Example: "n" = 2. Algebraically, formula_9 is the infinite dihedral group, generated by two elements formula_10 subject to the relations formula_11. Every other element of the group can be written as an alternating product of copies of formula_78 and formula_79. Combinatorially, the affine permutation formula_79 has window notation formula_80, corresponding to the bijection formula_81 for every integer k. The affine permutation formula_78 has window notation formula_82, corresponding to the bijection formula_83 for every integer k. Other elements have the following window notations: formula_84 Geometrically, the space V on which formula_9 acts is a line, with infinitely many equally spaced reflections. It is natural to identify the line V with the real line formula_85, formula_78 with reflection around the point 0, and formula_79 with reflection around the point 1. In this case, the reflection formula_86 reflects across the point –"k" for any integer k, the composition formula_87 translates the line by –2, and the composition formula_88 translates the line by 2. Permutation statistics and permutation patterns. Many permutation statistics and other features of the combinatorics of finite permutations can be extended to the affine case. Descents, length, and inversions. The "length" formula_89 of an element g of a Coxeter group G is the smallest number k such that g can be written as a product formula_90 of k Coxeter generators of G. Geometrically, the length of an element g in formula_0 is the number of reflecting hyperplanes that separate formula_76 and formula_91, where formula_76 is the fundamental alcove (the simplex bounded by the reflecting hyperplanes of the Coxeter generators formula_92). Combinatorially, the length of an affine permutation is encoded in terms of an appropriate notion of inversions: for an affine permutation u, the length is formula_93 Alternatively, it is the number of equivalence classes of pairs formula_94 such that formula_95 and formula_96 under the equivalence relation formula_97 if formula_98 for some integer k. The generating function for length in formula_0 is formula_99 Similarly, there is an affine analogue of descents in permutations: an affine permutation u has a descent in position i if formula_100. (By periodicity, u has a descent in position i if and only if it has a descent in position formula_101 for all integers k.) Algebraically, the descents correspond to the "right descents" in the sense of Coxeter groups; that is, i is a descent of u if and only if formula_102. 
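The inversion and descent formulas above can be evaluated directly from a window notation; the short Python sketch below (illustrative, not from the article) does so by brute force over a range of positions that is provably large enough to catch every inversion.

```python
# Illustrative sketch of the combinatorial length and descent statistics.

def apply(w, x):
    """u(x) for the affine permutation with window notation w."""
    n = len(w)
    q, r = divmod(x - 1, n)
    return w[r] + q * n

def length(w):
    """Number of pairs (i, j) with 1 <= i <= n, i < j and u(i) > u(j)."""
    n = len(w)
    bound = max(w) - min(w) + n    # once j - i exceeds this, u(j) > u(i) is guaranteed
    return sum(1 for i in range(1, n + 1)
                 for j in range(i + 1, i + bound + 1)
                 if apply(w, i) > apply(w, j))

def descents(w):
    """Positions i (taken mod n) with u(i) > u(i + 1)."""
    n = len(w)
    return [i for i in range(n) if apply(w, i) > apply(w, i + 1)]

# The affine permutation [2, 3, 1] (which reappears in the juggling example below)
# has length 2 and a single descent, in position 2.
print(length([2, 3, 1]), descents([2, 3, 1]))    # 2 [2]
```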
The left descents (that is, those indices i such that formula_103) are the descents of the inverse affine permutation formula_104; equivalently, they are the values i such that i occurs before "i" − 1 in the sequence formula_105. Geometrically, i is a descent of u if and only if the fixed hyperplane of formula_12 separates the alcoves formula_76 and formula_106 Because there are only finitely many possibilities for the number of descents of an affine permutation, but infinitely many affine permutations, it is not possible to naively form a generating function for affine permutations by number of descents (an affine analogue of Eulerian polynomials). One possible resolution is to consider affine descents (equivalently, cyclic descents) in the finite symmetric group formula_49. Another is to consider simultaneously the length and number of descents of an affine permutation. The multivariate generating function for these statistics over formula_0 simultaneously for all n is formula_107 where des("w") is the number of descents of the affine permutation w and formula_108 is the q-exponential function. Cycle type and reflection length. Any bijection formula_43 partitions the integers into a (possibly infinite) list of (possibly infinite) cycles: for each integer i, the cycle containing i is the sequence formula_109 where exponentiation represents functional composition. For an affine permutation u, the following conditions are equivalent: all cycles of u are finite, u has finite order, and the geometric action of u on the space V has at least one fixed point. The "reflection length" formula_110 of an element u of formula_0 is the smallest number k such that there exist reflections formula_111 such that formula_112. (In the symmetric group, reflections are transpositions, and the reflection length of a permutation u is formula_113, where formula_114 is the number of cycles of u.) In , the following formula was proved for the reflection length of an affine permutation u: for each cycle of u, define the "weight" to be the integer "k" such that consecutive entries congruent modulo n differ by exactly "kn". Form a tuple of cycle weights of u (counting translates of the same cycle by multiples of n only once), and define the "nullity" formula_115 to be the size of the smallest set partition of this tuple so that each part sums to 0. Then the reflection length of u is formula_116 where formula_59 is the underlying permutation of u. For every affine permutation u, there is a choice of subgroup W of formula_0 such that formula_117, formula_118, and for the standard form formula_119 implied by this semidirect product, the reflection lengths are additive, that is, formula_120. Fully commutative elements and pattern avoidance. A "reduced word" for an element g of a Coxeter group is a tuple formula_121 of Coxeter generators of minimum possible length such that formula_122. The element g is called "fully commutative" if any reduced word can be transformed into any other by sequentially swapping pairs of factors that commute. For example, in the finite symmetric group formula_123, the element formula_124 is fully commutative, since its two reduced words formula_125 and formula_126 can be connected by swapping commuting factors, but formula_127 is not fully commutative because there is no way to reach the reduced word formula_128 starting from the reduced word formula_129 by commutations. 
proved that in the finite symmetric group formula_49, a permutation is fully commutative if and only if it avoids the permutation pattern 321, that is, if and only if its one-line notation contains no three-term decreasing subsequence. In , this result was extended to affine permutations: an affine permutation u is fully commutative if and only if there do not exist integers formula_130 such that formula_131. The number of affine permutations avoiding a single pattern p is finite if and only if p avoids the pattern 321, so in particular there are infinitely many fully commutative affine permutations. These were enumerated by length in . Parabolic subgroups and other structures. The parabolic subgroups of formula_0 and their coset representatives offer a rich combinatorial structure. Other aspects of affine symmetric groups, such as their Bruhat order and representation theory, may also be understood via combinatorial models. Parabolic subgroups, coset representatives. A "standard parabolic subgroup" of a Coxeter group is a subgroup generated by a subset of its Coxeter generating set. The maximal parabolic subgroups are those that come from omitting a single Coxeter generator. In formula_0, all maximal parabolic subgroups are isomorphic to the finite symmetric group formula_49. The subgroup generated by the subset formula_132 consists of those affine permutations that stabilize the interval formula_133, that is, that map every element of this interval to another element of the interval. For a fixed element i of formula_134, let formula_135 be the maximal proper subset of Coxeter generators omitting formula_12, and let formula_136 denote the parabolic subgroup generated by J. Every coset formula_137 has a unique element of minimum length. The collection of such representatives, denoted formula_138, consists of the following affine permutations: formula_139 In the particular case that formula_140, so that formula_141 is the standard copy of formula_49 inside formula_0, the elements of formula_142 may naturally be represented by "abacus diagrams": the integers are arranged in an infinite strip of width n, increasing sequentially along rows and then from top to bottom; integers are circled if they lie directly above one of the window entries of the minimal coset representative. For example, the minimal coset representative formula_143 is represented by the abacus diagram at right. To compute the length of the representative from the abacus diagram, one adds up the number of uncircled numbers that are smaller than the last circled entry in each column. (In the example shown, this gives formula_144.) Other combinatorial models of minimum-length coset representatives for formula_145 can be given in terms of core partitions (integer partitions in which no hook length is divisible by n) or bounded partitions (integer partitions in which no part is larger than "n" − 1). Under these correspondences, it can be shown that the weak Bruhat order on formula_146 is isomorphic to a certain subposet of Young's lattice. Bruhat order. The Bruhat order on formula_0 has the following combinatorial realization. If u is an affine permutation and i and j are integers, define formula_147 to be the number of integers a such that formula_148 and formula_149. (For example, with formula_150, one has formula_151: the three relevant values are formula_152, which are respectively mapped by u to 1, 2, and 4.) 
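The statistic formula_147 just defined is finite and easy to compute residue class by residue class; the following Python sketch (illustrative, not from the article) reproduces the worked example formula_151 and can be used to test the Bruhat order characterization stated next.

```python
# Illustrative sketch of the Bruhat order statistic u[i, j] = #{a <= i : u(a) >= j},
# computed one congruence class at a time (within each class the admissible a form
# a finite arithmetic progression).

def bruhat_count(w, i, j):
    n = len(w)
    total = 0
    for m in range(1, n + 1):          # a = m + k*n runs over one congruence class
        wm = w[m - 1]                  # u(m)
        k_max = (i - m) // n           # largest k with a = m + k*n <= i
        k_min = -((wm - j) // n)       # smallest k with u(a) = wm + k*n >= j
        total += max(0, k_max - k_min + 1)
    return total

# Worked example from the article: u = [2, 0, 4] has u[3, 1] = 3.
print(bruhat_count([2, 0, 4], 3, 1))   # 3
```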
Then for two affine permutations u, v, one has that formula_153 in Bruhat order if and only if formula_154 for all integers i, j. Representation theory and an affine Robinson–Schensted correspondence. In the finite symmetric group, the Robinson–Schensted correspondence gives a bijection between the group and pairs formula_155 of standard Young tableaux of the same shape. This bijection plays a central role in the combinatorics and the representation theory of the symmetric group. For example, in the language of Kazhdan–Lusztig theory, two permutations lie in the same left cell if and only if their images under Robinson–Schensted have the same tableau Q, and in the same right cell if and only if their images have the same tableau P. In , Jian-Yi Shi showed that left cells for formula_0 are indexed instead by "tabloids", and in he gave an algorithm to compute the tabloid analogous to the tableau P for an affine permutation. In , the authors extended Shi's work to give a bijective map between formula_0 and triples formula_156 consisting of two tabloids of the same shape and an integer vector whose entries satisfy certain inequalities. Their procedure uses the matrix representation of affine permutations and generalizes the shadow construction, introduced in . Inverse realizations. In some situations, one may wish to consider the action of the affine symmetric group on formula_157 or on alcoves that is inverse to the one given above. These alternate realizations are described below. In the combinatorial action of formula_0 on formula_157, the generator formula_12 acts by switching the "values" i and "i" + 1. In the inverse action, it instead switches the entries in "positions" i and "i" + 1. Similarly, the action of a general reflection will be to switch the entries at "positions" "j" − "kn" and "i" + "kn" for each k, fixing all inputs at positions not congruent to i or j modulo n. In the geometric action of formula_0, the generator formula_12 acts on an alcove A by reflecting it across one of the bounding planes of the fundamental alcove "A"0. In the inverse action, it instead reflects A across one of "its own" bounding planes. From this perspective, a reduced word corresponds to an "alcove walk" on the tessellated space V. Relationship to other mathematical objects. The affine symmetric groups are closely related to a variety of other mathematical objects. Juggling patterns. In , a correspondence is given between affine permutations and juggling patterns encoded in a version of siteswap notation. Here, a juggling pattern of period n is a sequence formula_20 of nonnegative integers (with certain restrictions) that captures the behavior of balls thrown by a juggler, where the number formula_158 indicates the length of time the ith throw spends in the air (equivalently, the height of the throw). The number b of balls in the pattern is the average formula_159. The Ehrenborg–Readdy correspondence associates to each juggling pattern formula_160 of period n the function formula_161 defined by formula_162 where indices of the sequence a are taken modulo n. Then formula_163 is an affine permutation in formula_0, and moreover every affine permutation arises from a juggling pattern in this way. Under this bijection, the length of the affine permutation is encoded by a natural statistic in the juggling pattern: formula_164 where formula_165 is the number of crossings (up to periodicity) in the arc diagram of a. This allows an elementary proof of the generating function for affine permutations by length. 
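The Ehrenborg–Readdy correspondence described above is simple to state in code. The following Python sketch (illustrative only) converts a siteswap sequence into the window notation of the associated affine permutation; applied to the pattern 441 it reproduces the worked example in the next paragraph.

```python
# Illustrative sketch of the juggling correspondence: a period-n pattern
# (a_1, ..., a_n) is sent to the affine permutation w_a(i) = i + a_i - b,
# where b is the (integer) number of balls.

def juggling_to_window(a):
    n = len(a)
    b = sum(a) // n                          # number of balls; assumed to divide evenly
    return [i + a[i - 1] - b for i in range(1, n + 1)]

print(juggling_to_window([4, 4, 1]))         # [2, 3, 1]
```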
For example, the juggling pattern 441 has formula_22 and formula_166. Therefore, it corresponds to the affine permutation formula_167. The juggling pattern has four crossings, and the affine permutation has length formula_168. Similar techniques can be used to derive the generating function for minimal coset representatives of formula_145 by length. Complex reflection groups. In a finite-dimensional real inner product space, a "reflection" is a linear transformation that fixes a linear hyperplane pointwise and negates the vector orthogonal to the plane. This notion may be extended to vector spaces over other fields. In particular, in a complex inner product space, a "reflection" is a unitary transformation T of finite order that fixes a hyperplane. This implies that the vectors orthogonal to the hyperplane are eigenvectors of T, and the associated eigenvalue is a complex root of unity. A "complex reflection group" is a finite group of linear transformations on a complex vector space generated by reflections. The complex reflection groups were fully classified by : each complex reflection group is isomorphic to a product of irreducible complex reflection groups, and every irreducible either belongs to an infinite family formula_169 (where m, p, and n are positive integers such that p divides m) or is one of 34 other (so-called "exceptional") examples. The group formula_170 is the generalized symmetric group: algebraically, it is the wreath product formula_171 of the cyclic group formula_172 with the symmetric group formula_49. Concretely, the elements of the group may be represented by monomial matrices (matrices having one nonzero entry in every row and column) whose nonzero entries are all mth roots of unity. The groups formula_169 are subgroups of formula_170, and in particular the group formula_173 consists of those matrices in which the product of the nonzero entries is equal to 1. In , Shi showed that the affine symmetric group is a "generic cover" of the family formula_174, in the following sense: for every positive integer m, there is a surjection formula_175 from formula_0 to formula_173, and these maps are compatible with the natural surjections formula_176 when formula_177 that come from raising each entry to the "m"/"p"th power. Moreover, these projections respect the reflection group structure, in that the image of every reflection in formula_0 under formula_175 is a reflection in formula_173; and similarly when formula_178 the image of the standard Coxeter element formula_179 in formula_0 is a Coxeter element in formula_180. Affine Lie algebras. Each affine Coxeter group is associated to an affine Lie algebra, a certain infinite-dimensional non-associative algebra with unusually nice representation-theoretic properties. In this association, the Coxeter group arises as a group of symmetries of the root space of the Lie algebra (the dual of the Cartan subalgebra). In the classification of affine Lie algebras, the one associated to formula_0 is of (untwisted) type formula_181, with Cartan matrix formula_182 for formula_183 and formula_184 (a circulant matrix) for formula_185. Like other Kac–Moody algebras, affine Lie algebras satisfy the Weyl–Kac character formula, which expresses the characters of the algebra in terms of their highest weights. In the case of affine Lie algebras, the resulting identities are equivalent to the Macdonald identities. 
In particular, for the affine Lie algebra of type formula_186, associated to the affine symmetric group formula_9, the corresponding Macdonald identity is equivalent to the Jacobi triple product. Braid group and group-theoretic properties. Coxeter groups have a number of special properties not shared by all groups. These include that their word problem is decidable (that is, there exists an algorithm that can determine whether or not any given product of the generators is equal to the identity element) and that they are linear groups (that is, they can be represented by a group of invertible matrices over a field). Each Coxeter group W is associated to an Artin–Tits group formula_187, which is defined by a similar presentation that omits relations of the form formula_188 for each generator s. In particular, the Artin–Tits group associated to formula_0 is generated by n elements formula_189 subject to the relations formula_190 for formula_191 (and no others), where as before the indices are taken modulo n (so formula_192). Artin–Tits groups of Coxeter groups are conjectured to have many nice properties: for example, they are conjectured to be torsion-free, to have trivial center, to have solvable word problem, and to satisfy the formula_193 conjecture. These conjectures are not known to hold for all Artin–Tits groups, but in it was shown that formula_194 has these properties. (Subsequently, they have been proved for the Artin–Tits groups associated to affine Coxeter groups.) In the case of the affine symmetric group, these proofs make use of an associated Garside structure on the Artin–Tits group. Artin–Tits groups are sometimes also known as "generalized braid groups", because the Artin–Tits group formula_195 of the (finite) symmetric group is the braid group on n strands. Not all Artin–Tits groups have a natural representation in terms of geometric braids. However, the Artin–Tits group of the hyperoctahedral group formula_196 (geometrically, the symmetry group of the "n"-dimensional hypercube; combinatorially, the group of signed permutations of size "n") does have such a representation: it is given by the subgroup of the braid group on formula_197 strands consisting of those braids for which a particular strand ends in the same position it started in, or equivalently as the braid group of n strands in an annular region. Moreover, the Artin–Tits group of the hyperoctahedral group formula_196 can be written as a semidirect product of formula_194 with an infinite cyclic group. It follows that formula_194 may be interpreted as a certain subgroup consisting of geometric braids, and also that it is a linear group. Extended affine symmetric group. The affine symmetric group is a subgroup of the "extended affine symmetric group". The extended group is isomorphic to the wreath product formula_198. Its elements are "extended affine permutations": bijections formula_31 such that formula_34 for all integers x. Unlike the affine symmetric group, the extended affine symmetric group is not a Coxeter group. But it has a natural generating set that extends the Coxeter generating set for formula_0: the "shift operator" formula_199 whose window notation is formula_200 generates the extended group with the simple reflections, subject to the additional relations formula_201. Combinatorics of other affine Coxeter groups. The geometric action of the affine symmetric group formula_0 places it naturally in the family of affine Coxeter groups, each of which has a similar geometric action on an affine space. 
The combinatorial description of formula_0 may also be extended to many of these groups: in , an axiomatic description is given of certain permutation groups acting on formula_157 (the "George groups", in honor of George Lusztig), and it is shown that they are exactly the "classical" Coxeter groups of finite and affine types A, B, C, and D. (In the classification of affine Coxeter groups, the affine symmetric group is type A.) Thus, the combinatorial interpretations of descents, inversions, etc., carry over in these cases. Abacus models of minimum-length coset representatives for parabolic quotients have also been extended to this context. History. The study of Coxeter groups in general could be said to first arise in the classification of regular polyhedra (the Platonic solids) in ancient Greece. The modern systematic study (connecting the algebraic and geometric definitions of finite and affine Coxeter groups) began in work of Coxeter in the 1930s. The combinatorial description of the affine symmetric group first appears in work of , and was expanded upon by ; both authors used the combinatorial description to study the Kazhdan–Lusztig cells of formula_0. The proof that the combinatorial definition agrees with the algebraic definition was given by .
[ { "math_id": 0, "text": "\\widetilde{S}_n" }, { "math_id": 1, "text": " s_0, s_1, \\ldots, s_{n - 1}" }, { "math_id": 2, "text": " n \\geq 3 " }, { "math_id": 3, "text": "s_i^2 = 1" }, { "math_id": 4, "text": " s_is_j = s_js_i " }, { "math_id": 5, "text": "i - 1, i, i + 1" }, { "math_id": 6, "text": " s_is_{i + 1}s_i = s_{i + 1}s_is_{i + 1} " }, { "math_id": 7, "text": " s_0s_{n - 1}s_0 = s_{n - 1}s_0s_{n - 1} " }, { "math_id": 8, "text": " n = 2" }, { "math_id": 9, "text": "\\widetilde{S}_2" }, { "math_id": 10, "text": "s_0, s_1" }, { "math_id": 11, "text": "s_0^2 = s_1^2 = 1" }, { "math_id": 12, "text": "s_i" }, { "math_id": 13, "text": " n \\geq 3" }, { "math_id": 14, "text": "\\infty" }, { "math_id": 15, "text": "\\R^{n}" }, { "math_id": 16, "text": "(x_1, \\ldots, x_n)" }, { "math_id": 17, "text": "x_1 + x_2 + \\cdots + x_n = 0" }, { "math_id": 18, "text": "\\{1, \\ldots, n\\}" }, { "math_id": 19, "text": "x_i - x_j = k" }, { "math_id": 20, "text": "(a_1, \\ldots, a_n)" }, { "math_id": 21, "text": "a_1 + \\cdots + a_n = 0" }, { "math_id": 22, "text": "n = 3" }, { "math_id": 23, "text": " x_1 \\geq x_2 \\geq \\cdots \\geq x_n \\geq x_1 - 1" }, { "math_id": 24, "text": " x_1 - x_2 = 0," }, { "math_id": 25, "text": "x_2 - x_3 = 0," }, { "math_id": 26, "text": " x_1 - x_n = 1," }, { "math_id": 27, "text": "i = 1, \\ldots, n- 1" }, { "math_id": 28, "text": "x_i - x_{i + 1} = 0" }, { "math_id": 29, "text": " x_1 - x_n = 1" }, { "math_id": 30, "text": " s_0 = s_n" }, { "math_id": 31, "text": "u \\colon \\Z \\to \\Z" }, { "math_id": 32, "text": "u(x)" }, { "math_id": 33, "text": "x" }, { "math_id": 34, "text": "u(x + n) = u(x) + n" }, { "math_id": 35, "text": "n" }, { "math_id": 36, "text": " u(1) + u(2) + \\cdots + u(n) = 1 + 2 + \\cdots + n = \\frac{n(n + 1)}{2}" }, { "math_id": 37, "text": "u(1), \\ldots, u(n)" }, { "math_id": 38, "text": "[u(1), \\ldots, u(n)]" }, { "math_id": 39, "text": "u" }, { "math_id": 40, "text": "1 + 2 + \\cdots + n" }, { "math_id": 41, "text": "[1, 2, \\ldots, i - 1, i + 1, i, i + 2, \\ldots, n ] " }, { "math_id": 42, "text": "[0, 2, 3, \\ldots, n - 2, n - 1, n + 1] " }, { "math_id": 43, "text": "u : \\mathbb{Z} \\to \\mathbb{Z}" }, { "math_id": 44, "text": "(i, u(i))" }, { "math_id": 45, "text": " \\mathbb{Z} \\times \\mathbb{Z}" }, { "math_id": 46, "text": "(a, b)" }, { "math_id": 47, "text": "(a + n, b + n)" }, { "math_id": 48, "text": "[2, 0, 4] \\in \\widetilde{S}_3" }, { "math_id": 49, "text": "S_n" }, { "math_id": 50, "text": "s_1, \\ldots, s_{n - 1}" }, { "math_id": 51, "text": "s_0 = s_n" }, { "math_id": 52, "text": "\\{u(1), \\ldots, u(n) \\} = \\{1, 2, \\ldots, n \\}" }, { "math_id": 53, "text": " u = [u(1), u(2), \\ldots, u(n)]" }, { "math_id": 54, "text": "S_n \\subset \\widetilde{S}_n" }, { "math_id": 55, "text": "\\R^n" }, { "math_id": 56, "text": " (x_1, x_2, \\ldots, x_n) \\cdot u = (x_{u(1)}, x_{u(2)}, \\ldots, x_{u(n)})" }, { "math_id": 57, "text": "(\\widetilde{S}_n)_a" }, { "math_id": 58, "text": "\\{1, 2, \\ldots, n\\}" }, { "math_id": 59, "text": "\\pi(u)" }, { "math_id": 60, "text": " s_0 = [0, 2, 3, 4, \\ldots, n - 2, n - 1, n + 1]" }, { "math_id": 61, "text": " [n , 2 , 3 , 4, \\ldots , n - 2 , n - 1 , 1]" }, { "math_id": 62, "text": "(1 \\; n)" }, { "math_id": 63, "text": "[1 - a_1 \\cdot n, 2 - a_2 \\cdot n, \\ldots, n - a_n \\cdot n]" }, { "math_id": 64, "text": "(a_1, a_2, \\ldots, a_n)" }, { "math_id": 65, "text": "a_1 + a_2 + \\ldots + a_n = 0" }, { "math_id": 66, "text": "(a_1, \\ldots, a_n) \\in \\Lambda" }, { "math_id": 67, 
"text": "\\{(1, -1, 0, \\ldots, 0), (0, 1, -1, \\ldots, 0), \\ldots, (0, \\ldots, 0, 1, -1)\\}" }, { "math_id": 68, "text": " \\widetilde{S}_n \\cong S_n \\ltimes \\Lambda " }, { "math_id": 69, "text": " u = r \\cdot t " }, { "math_id": 70, "text": "r" }, { "math_id": 71, "text": "t" }, { "math_id": 72, "text": " [u(1), \\ldots, u(n)] = [r_1 - a_1 \\cdot n, \\ldots, r_n - a_n \\cdot n]" }, { "math_id": 73, "text": "r = [r_1, \\ldots, r_n] = \\pi(u)" }, { "math_id": 74, "text": "(a_1, a_2, \\ldots, a_n) \\in \\Lambda" }, { "math_id": 75, "text": " (x_1, \\ldots, x_n) \\cdot u = \\left(x_{r(1)} + a_1, \\ldots, x_{r(n)} + a_n \\right)." }, { "math_id": 76, "text": "A_0" }, { "math_id": 77, "text": "A = A_0 \\cdot g" }, { "math_id": 78, "text": "s_0" }, { "math_id": 79, "text": "s_1" }, { "math_id": 80, "text": "[2, 1]" }, { "math_id": 81, "text": "2k \\mapsto 2k - 1, 2k - 1 \\mapsto 2k" }, { "math_id": 82, "text": "[0, 3]" }, { "math_id": 83, "text": "2k \\mapsto 2k + 1, 2k + 1 \\mapsto 2k" }, { "math_id": 84, "text": "\n\\begin{align}\n\\overbrace{s_0 s_1 \\cdots s_0 s_1}^{2k \\text{ factors}} & = [1 + 2k, 2 - 2k ], \\\\[5pt]\n\\overbrace{s_1 s_0 \\cdots s_1 s_0}^{2k \\text{ factors}} & = [1 - 2k, 2 + 2k ], \\\\[5pt]\n\\overbrace{s_0 s_1 \\cdots s_0}^{2k + 1 \\text{ factors}} & = [2 + 2k, 1 - 2k ], \\\\[5pt]\n\\overbrace{s_1 s_0 \\cdots s_1}^{2k + 1 \\text{ factors}} & = [2 - 2(k + 1), 1 + 2(k + 1) ].\n\\end{align}\n" }, { "math_id": 85, "text": " \\R^1" }, { "math_id": 86, "text": "(s_0 s_1)^k s_0" }, { "math_id": 87, "text": "s_0 s_1" }, { "math_id": 88, "text": "s_1 s_0" }, { "math_id": 89, "text": "\\ell(g)" }, { "math_id": 90, "text": " g= s_{i_1} \\cdots s_{i_k}" }, { "math_id": 91, "text": " A_0 \\cdot g" }, { "math_id": 92, "text": "s_0, s_1, \\ldots, s_{n - 1}" }, { "math_id": 93, "text": " \\ell(u) = \\# \\left\\{ (i, j) \\colon i \\in \\{1, \\ldots, n\\}, i < j, \\text{ and } u(i) > u(j) \\right\\}." }, { "math_id": 94, "text": " (i, j) \\in \\mathbb{Z} \\times \\mathbb{Z} " }, { "math_id": 95, "text": " i < j" }, { "math_id": 96, "text": " u(i) > u(j)" }, { "math_id": 97, "text": " (i, j) \\equiv (i', j') " }, { "math_id": 98, "text": "(i - i', j - j') = (kn, kn)" }, { "math_id": 99, "text": " \\sum_{g \\in \\widetilde{S}_n} q^{\\ell(g)} = \\frac{1 - q^n}{(1 - q)^n}." }, { "math_id": 100, "text": "u(i) > u(i + 1)" }, { "math_id": 101, "text": "i + kn" }, { "math_id": 102, "text": " \\ell(u \\cdot s_i) < \\ell (u)" }, { "math_id": 103, "text": " \\ell(s_i \\cdot u) < \\ell (u)" }, { "math_id": 104, "text": "u^{-1}" }, { "math_id": 105, "text": " \\ldots, u(-2), u(-1), u(0), u(1), u(2), \\ldots" }, { "math_id": 106, "text": " A_0 \\cdot u." 
}, { "math_id": 107, "text": " \\sum_{n \\geq 1} \\frac{x^n}{1 - q^n} \\sum_{w \\in \\widetilde{S}_n} t^{\\operatorname{des}(w)} q^{\\ell(w)}\n=\n\\left[\n\\frac{x \\cdot \\frac{\\partial}{\\partial{x}} \\log(\\exp(x; q))}{1 - t \\exp(x; q)}\n\\right]_{x \\mapsto x \\frac{1 - t}{1 - q}}\n" }, { "math_id": 108, "text": "\\exp(x; q) = \\sum_{n \\geq 0} \\frac{x^n (1 - q)^n}{(1 - q)(1 - q^2) \\cdots (1 - q^n)}" }, { "math_id": 109, "text": " ( \\ldots, u^{-2}(i), u^{-1}(i), i, u(i), u^2(i), \\ldots )" }, { "math_id": 110, "text": "\\ell_R(u)" }, { "math_id": 111, "text": "r_1, \\ldots, r_k" }, { "math_id": 112, "text": "u = r_1 \\cdots r_k" }, { "math_id": 113, "text": "n - c(u)" }, { "math_id": 114, "text": "c(u)" }, { "math_id": 115, "text": "\\nu(u)" }, { "math_id": 116, "text": " \\ell_R(u) = n - 2 \\nu(u) + c(\\pi(u))," }, { "math_id": 117, "text": "W \\cong S_n" }, { "math_id": 118, "text": " \\widetilde{S}_n = W \\ltimes \\Lambda" }, { "math_id": 119, "text": " u = w \\cdot t " }, { "math_id": 120, "text": " \\ell_R(u) = \\ell_R(w) + \\ell_R(t)" }, { "math_id": 121, "text": "(s_{i_1}, \\ldots, s_{i_{\\ell(g)}})" }, { "math_id": 122, "text": " g = s_{i_1} \\cdots s_{i_{\\ell(g)}}" }, { "math_id": 123, "text": "S_4" }, { "math_id": 124, "text": " 2143 = (12)(34)" }, { "math_id": 125, "text": "(s_1, s_3)" }, { "math_id": 126, "text": "(s_3, s_1)" }, { "math_id": 127, "text": " 4132 = (142)(3)" }, { "math_id": 128, "text": "(s_3, s_2, s_3, s_1)" }, { "math_id": 129, "text": "(s_2, s_3, s_2, s_1)" }, { "math_id": 130, "text": " i < j < k" }, { "math_id": 131, "text": " u(i) > u(j) > u(k)" }, { "math_id": 132, "text": " \\{s_0, \\ldots, s_{n - 1} \\} \\smallsetminus \\{s_i\\}" }, { "math_id": 133, "text": "[i + 1, i + n]" }, { "math_id": 134, "text": "\\{0, \\ldots, n - 1\\}" }, { "math_id": 135, "text": " J = \\{s_0, \\ldots, s_{n - 1} \\} \\smallsetminus \\{s_i\\}" }, { "math_id": 136, "text": "(\\widetilde{S}_n)_J" }, { "math_id": 137, "text": " g \\cdot (\\widetilde{S}_n)_J " }, { "math_id": 138, "text": "(\\widetilde{S}_n)^J" }, { "math_id": 139, "text": " (\\widetilde{S}_n)^J = \\left \\{ u \\in \\widetilde{S}_n \\colon u(i - n + 1) < u(i - n + 2) < \\cdots < u(i - 1) < u(i) \\right \\}." 
}, { "math_id": 140, "text": " J = \\{s_1, \\ldots, s_{n - 1} \\}" }, { "math_id": 141, "text": "(\\widetilde{S}_n)_J \\cong S_n" }, { "math_id": 142, "text": "(\\widetilde{S}_n)^J \\cong \\widetilde{S}_n/S_n" }, { "math_id": 143, "text": " u = [-5, 0, 6, 9]" }, { "math_id": 144, "text": " 5 + 3 + 0 + 1 = 9" }, { "math_id": 145, "text": "\\widetilde{S}_n/S_n" }, { "math_id": 146, "text": "\\widetilde{S}_n / S_n" }, { "math_id": 147, "text": " u [i, j] " }, { "math_id": 148, "text": " a \\leq i " }, { "math_id": 149, "text": " u(a) \\geq j" }, { "math_id": 150, "text": " u = [2, 0, 4] \\in \\widetilde{S}_3" }, { "math_id": 151, "text": "u [ 3, 1 ] = 3" }, { "math_id": 152, "text": " a = 0, 1, 3 " }, { "math_id": 153, "text": "u \\leq v" }, { "math_id": 154, "text": " u[i, j] \\leq v[i, j] " }, { "math_id": 155, "text": "(P, Q)" }, { "math_id": 156, "text": "(P, Q, \\rho)" }, { "math_id": 157, "text": "\\Z" }, { "math_id": 158, "text": "a_i" }, { "math_id": 159, "text": "b = \\frac{a_1 + \\cdots + a_n}{n}" }, { "math_id": 160, "text": "{\\bf a} = (a_1, \\ldots, a_n)" }, { "math_id": 161, "text": "w_{\\bf a} \\colon \\Z \\to \\Z" }, { "math_id": 162, "text": " w_{\\bf a}(i) = i + a_i - b," }, { "math_id": 163, "text": "w_{\\bf a}" }, { "math_id": 164, "text": "\\ell(w_{\\bf a}) = (b - 1)n - \\operatorname{cross}({\\bf a})," }, { "math_id": 165, "text": "\\operatorname{cross}({\\bf a})" }, { "math_id": 166, "text": " b = \\frac{4 + 4 + 1}{3} = 3" }, { "math_id": 167, "text": "w_{441} = [1 + 4 - 3, 2 + 4 - 3, 3 + 1 - 3] = [2, 3, 1]" }, { "math_id": 168, "text": "\\ell(w_{441}) = (3 - 1) \\cdot 3 - 4 = 2" }, { "math_id": 169, "text": "G(m, p, n)" }, { "math_id": 170, "text": "G(m, 1, n)" }, { "math_id": 171, "text": " (\\Z / m \\Z) \\wr S_n" }, { "math_id": 172, "text": "\\Z / m \\Z" }, { "math_id": 173, "text": "G(m, m, n)" }, { "math_id": 174, "text": "\\left\\{G(m, m, n) \\colon m \\geq 1 \\right\\}" }, { "math_id": 175, "text": "\\pi_m" }, { "math_id": 176, "text": "G(m, m, n) \\twoheadrightarrow G(p, p, n)" }, { "math_id": 177, "text": "p \\mid m" }, { "math_id": 178, "text": "m > 1" }, { "math_id": 179, "text": "s_0 \\cdot s_1 \\cdots s_{n - 1}" }, { "math_id": 180, "text": " G(m, m, n)" }, { "math_id": 181, "text": "A_{n - 1}^{(1)}" }, { "math_id": 182, "text": " \\left[ \\begin{array}{rr} 2 & - 2 \\\\ - 2& 2 \\end{array} \\right] " }, { "math_id": 183, "text": "n = 2" }, { "math_id": 184, "text": " \\left[ \\begin{array}{rrrrrr} 2 & -1 & 0 & \\cdots & 0 & -1 \\\\\n-1 & 2 & -1 & \\cdots & 0 & 0 \\\\\n0 & -1 & 2 & \\cdots & 0 & 0 \\\\\n\\vdots & \\vdots & \\vdots & \\ddots & \\vdots & \\vdots \\\\\n0 & 0 & 0 & \\cdots & 2 & -1 \\\\\n-1 & 0 & 0 & \\cdots & -1 & 2\n\\end{array} \\right]\n" }, { "math_id": 185, "text": " n > 2 " }, { "math_id": 186, "text": "A_1^{(1)}" }, { "math_id": 187, "text": "B_W" }, { "math_id": 188, "text": "s^2 = 1" }, { "math_id": 189, "text": " \\sigma_0, \\sigma_1, \\ldots, \\sigma_{n - 1}" }, { "math_id": 190, "text": "\\sigma_i \\sigma_{i + 1} \\sigma_i = \\sigma_{i + 1}\\sigma_i \\sigma_{i + 1}" }, { "math_id": 191, "text": "i = 0, \\ldots, n - 1" }, { "math_id": 192, "text": " \\sigma_n = \\sigma_0" }, { "math_id": 193, "text": "K(\\pi, 1)" }, { "math_id": 194, "text": "B_{\\widetilde{S}_n}" }, { "math_id": 195, "text": "B_{S_n}" }, { "math_id": 196, "text": "S^{\\pm}_n" }, { "math_id": 197, "text": "n + 1" }, { "math_id": 198, "text": " \\Z \\wr S_n" }, { "math_id": 199, "text": "\\tau" }, { "math_id": 200, "text": "\\tau = [2, 3, \\ldots, n, n + 1]" }, { 
"math_id": 201, "text": " \\tau s_i \\tau^{-1} = s_{i + 1}" } ]
https://en.wikipedia.org/wiki?curid=67465897
67466517
Familial sleep traits
Familial sleep traits are heritable variations in sleep patterns, resulting in abnormal sleep-wake times and/or abnormal sleep length. Circadian rhythms are coordinated physiological and biological changes that oscillate on an approximately 24-hour cycle. Disruptions to these rhythms in humans may affect the duration, onset, and/or quality of sleep during this cycle, resulting in familial sleep traits. These traits are not necessarily syndromes because they do not always cause distress among individuals. Instead of being disorders, familial sleep traits are variations in an individual's biological tendencies of sleep-wake times, and are only considered syndromes if affected individuals complain about life interference, in which case they may fall under the category of Circadian Rhythm Sleep Disorders (CRSD) that affect sleep timing and circadian rhythms. Some of these circadian disorders include Advanced Sleep Phase Disorder (ASPD) and Delayed Sleep Phase Disorder (DSPD). Familial sleep traits are more specific than CRSD because they are heritable and involve a wide range of Mendelian genes. Evidence has shown that genes significantly influence sleep schedules in mammals, including humans, and account for one-third of the variation in sleep quality and duration. Studies in human monozygotic twins have provided evidence that genetic factors affect "normal" sleep patterns as well, meaning ones where no individual has been diagnosed with an altered phenotypic sleep trait. Sleep timing is controlled by the circadian clock, which can entrain to environmental stimuli (usually a light-dark cycle) and is regulated by a transcription-translation feedback loop (TTFL). In humans, there are multiple genes involved in this molecular biological clock, which when mutated may result in sleep disorders such as Familial Advanced Sleep Phase (FASP), Familial Delayed Sleep Phase (FDSP), and Familial Natural Short Sleep (FNSS). Some mutations in Mendelian genes that are involved in the TTFL have been identified as the causes of these sleep traits, including PER2, PER3, CRY2, and CRY1. Other Mendelian genes that are not known to play a core role in the TTFL but are involved in FNSS include DEC2 and ADRB1. With some familial sleep traits, there may be a shift in an individual's chronotype, which describes the time of sleep-wake behaviors that result from circadian rhythms. Chronotype may shift depending on multiple factors including gender and age. Individuals with FASP have earlier chronotypes and individuals with FDSP have later chronotypes compared to a conventional sleep period which runs from approximately 10pm to 7am. Individuals may meet the criteria for FASP or FDSP if they have Advanced Sleep Phase or Delayed Sleep Phase and at least one first degree relative with the trait. Researchers have estimated the human prevalence of FASP to be 0.33-0.5% by including individuals who have a sleep onset at approximately 8:30pm and offset at 5:30am. FDSP, which includes individuals who have a delayed sleep onset and offset, has an unknown human prevalence and may vary based on location, definition, and age. History of discoveries. Familial sleep traits have been difficult to study due to the various environmental influences (such as entraining daily alarms, artificial light at night, and caffeine or stimulant intake) that can contribute to different behavioral phenotypes in humans. 
Despite these potential difficulties, Louis Ptáček and colleagues discovered evidence of a human familial circadian rhythm variant in the 1990s. This variant resulted in a shorter period and an advance of melatonin and temperature rhythms and was initially termed Advanced Sleep Phase Syndrome (ASPS) in a 1999 publication. Individuals with ASPS have earlier sleep and wake onsets, meaning they both go to bed and wake up earlier compared to control groups. The first participant with this phenotype told researchers she recognized similar sleep patterns in her family. From structured interviews and family pedigree analysis, some of these individuals were identified to have ASPS as well, providing evidence that this phenotype could be genetic, resulting in Familial Advanced Sleep Phase (FASP). In this 1999 publication, researchers were also able to conclude that this trait has an autosomal dominant mode of inheritance with high penetrance. This means that the genes involved in FASP are passed through non-sex chromosomes, and an individual only needs one copy of the gene across homologs for the gene to be expressed. Since this initial 1999 FASP publication, other circadian biologists including Phyllis Zee and Joseph Takahashi have conducted further genetic analysis. They published a paper in 2001 that presented data showing a phenotypically characterized case of Advanced Sleep Phase Syndrome to provide further evidence that this trait can be hereditary. Since these studies, Csnk1d, PER2, PER3, and CRY2 have all been identified as important in hereditary FASP. Another sleep trait, Delayed Sleep Phase Syndrome (DSPS) was first identified by Elliot Weitzman and colleagues in 1981. Individuals with DSPS typically cannot fall asleep until later and wake up later compared to control groups. They often cannot fall asleep until between 2:00-6:00am, but then have a normal sleep duration. However, DSPS was not hypothesized to have a genetic component until researchers at University of California, San Diego discovered a familial pedigree with DSPS in 2001, adding this Familial Delayed Sleep Phase (FDSP) to the list of heritable sleep traits. Almost two decades later in 2017, Michael Young and colleagues in New York published findings that further supported delayed sleep to have a genetic component, resulting in FDSP. These scientists reported that a mutation in CRY1, a component of the TTFL that represses Clock and Bmal1, results in a gain-of-function variation that lengthens circadian period. In addition to these findings, Familial Natural Short Sleep (FNSS) is another heritable sleep trait that has been studied over the past few years. In 2009, Ying-Hui Fu and Ptáček discovered the first short-sleep gene by identifying a mutation in the DEC2 gene that resulted in an average of 6.25 hours of sleep a night instead of 8.06 hours, an identifying feature of FNSS. This was the first genetic discovery for this sleep trait, broadening the scope of familial sleep trait research. In 2019, Ptáček and Fu published further research about the genetic aspect of FNSS, identifying a mutation in the gene ADRB1 that increases the activity of ADRB1+ neurons in the dorsal pons. Most of the research conducted thus far has been surrounding FASP, FDSP, and FNSS, with recent studies beginning to examine the roles of heritable sleep variability on autism-spectrum disorder (ASD) and Alzheimer's disease (AD). 
ASD, a neurodevelopmental disorder, has evidence of genetic components and affected individuals have reported a high prevalence of insomnia. Fu, Ptáček, and colleagues have hypothesized that it may be interesting to examine if sleep traits and disruptions can exacerbate the atypical neurodevelopment in ASD. Additionally, recent research about AD, a neurodegenerative disease, has suggested that sleep disruption might contribute to the disease. A characteristic factor of AD is the accumulation of formula_0 plaques. These plaques are usually at a lower level in the brain interstitial space when an individual first wakes up and then during waking hours these levels increase. Sleep disruption can eliminate the reduction in formula_0 levels, which is important during disease progression. Both ASD and AD demonstrate how the heritability of sleep traits may also be involved in disorders and diseases that are not traditionally thought of as circadian, but more research must be done in this field. Heritable sleep traits. The functions of heritability for many sleep traits are not well known, underscoring the importance of continued research into the human genome. Familial Advanced Sleep Phase. Familial Advanced Sleep Phase (FASP) results in an individual having a circadian clock that is entrained to their surroundings, but gives the impression that the individual is not. This trait typically develops during middle age, and is more common in older adults. Affected individuals typically have a free-running period of about 22 hours, shorter than the average person who has a free-running period closer to 24 hours. This also means that certain physiological markers, such as body temperature and melatonin will be present at higher levels earlier in the day as compared to an average person. Symptoms. FASP is typically characterized by excessively early sleep and wake times. Additionally, individuals may experience excessive daytime sleepiness if they are forced to adhere to a schedule offset from their personal biological clock. Individuals with FASP are typically phase advanced by 4 to 6 hours as compared to the average person. Treatments. FASP is traditionally treated with light therapy in the evenings, or behaviorally with chronotherapy. Individuals with FASP typically need to have a two-hour delay per day to remain entrained, due to their 22-hour period. Pharmacological interventions are typically avoided due to risks associated with daytime drug-induced sleepiness. Molecular Basis. FASP has been mapped to chromosome 2q. Genes that are known to influence the presentation of FASP are CRY2, PER2, PER3 and CK1∂. TIMELESS (hTIM) has also been shown to cause FASP. These mutations are critical in the trait's phenotype and heritability. This trait is inherited in an autosomal dominant fashion. Familial Delayed Sleep Phase. Familial Delayed Sleep Phase (FDSP) results in an individual having a circadian clock that is entrained to their surroundings, but gives the impression that the individual is not. The trait typically develops in adolescence. Affected individuals have a free-running period that is longer than the average 24 hours, meaning that certain physiological markers, such as body temperature and melatonin, are present in higher levels later in the day as compared to the average person. Symptoms. 
FDSP is typically characterized by excessively late sleep times and wake times, and may include daytime sleepiness if the individual is forced to adhere to a schedule offset from their personal biological clock. Individuals with FDSP may have comorbidities with depression, Attention Deficit Hyperactivity Disorder (ADHD), obesity, and Obsessive-Compulsive Disorder (OCD). Treatments. Treatment is usually non-pharmacological, with light therapy being a common intervention. Phase delay chronotherapy is also occasionally used. Melatonin taken at night will not change the individual's circadian rhythm, but may act as a temporary solution. Molecular Basis. FDSP is heritable and linked to mutations in the PER3 and CRY1 genes, which result in the delayed sleep phenotype. Fatal Familial Insomnia. Fatal Familial Insomnia (FFI) is a disorder that results in trouble sleeping, speech and coordination problems, and eventually dementia. Most of those affected die within a few years, and the disorder has no cure. The disorder can manifest any time from age 18 to 60, but the average age of affected individuals is 50 years old. Symptoms. The disorder has a 4-stage progression, starting with individuals experiencing insomnia, progressing to seeing hallucinations, then inability to sleep and dramatic weight loss, and finally dementia, which is followed by death. Individuals have a 6-36 month prognosis after they begin experiencing symptoms. Treatments. Due to the prognosis of the disorder, treatment is often minimal and palliative in nature. Sleeping pills and other traditional treatments are not found to be beneficial in treating FFI. Molecular Basis. The disorder is caused by a mutation of the PRNP gene resulting in the creation of a prion. These prions result in neurodegeneration, leading to FFI. This mutation can either occur spontaneously or be passed down in an autosomal dominant manner. Familial Natural Short Sleep. Familial natural short sleep (FNSS) is a distinct category of habitual short sleep. Individuals with this trait usually get 4–6.5 hours of sleep per day but do not have daytime sleepiness and do not need catch-up sleep on the weekends. After sleep deprivation, these individuals have less of a sleep deficit than individuals without FNSS. Additionally, affected individuals have a higher behavioral drive, resulting in many holding high pressure jobs, and they may have a better ability to deal with stress. People with FNSS are commonly mistaken for having insomnia. The prevalence of FNSS is currently unknown, however mutations in the genes DEC2 and ADRB1, NPSR1, and GRM1 have been linked to FNSS. Symptoms. FNSS is unique because individuals with this sleep trait show no symptoms of shorter sleep. They are able to be active and function normally. Treatments. FNSS may be seen as advantageous rather than detrimental to some individuals. Therefore, because FNSS does not negatively impact most affected individuals, treatment options for it have not been well researched or documented. Health effects. Such mutations appear to reduce Alzheimer's pathology in mice. Molecular Basis. In 2009, Ying-Hui Fu and colleagues described how a genetic variant in DEC2 produced the short sleep phenotype. In this variant, an arginine residue is substituted for a proline residue typically present at position 384. Within the family studied, people having the DEC2 mutation had shorter sleep durations. The researchers found the same phenotype when mutating this gene in Drosophila and mice. 
Interestingly, they found that the mutant mice did not display changes in their free-running activity period. DEC2 functions as a transcriptional repressor; the mutant form shows weakened repression, which increases expression of hypocretin, a neuropeptide that promotes wakefulness. DEC2 inhibits CLOCK/BMAL1 activation of PER through protein-protein interaction or competition for the E-box transcriptional elements. A separate study using dizygotic twins with a novel DEC2 mutation showed that one twin had shorter sleep duration. These results demonstrate that DEC2 is able to affect sleep length through weakened transcriptional repression. Another important gene involved in FNSS is ADRB1. ADRB1 neurons in mice are active when the animals are awake and are found in the dorsal pons. Additional family studies have shown that mutations in ADRB1 produce the reduced sleep phenotype. In a more recent study done by Lijuan Xing and colleagues, NPSR1 was linked to FNSS. In this study, researchers identified a family with a mutation in the NPSR1 gene, which caused a short sleep phenotype. NPSR1 is a G-protein coupled receptor that plays a role in arousal and sleep behaviors. This NPSR1 mutation was recreated in mice, and the researchers found the same short sleep phenotype present. In another study done by Guangsen Shi and colleagues, GRM1 was linked to FNSS. Here, researchers identified two GRM1 mutations in two different FNSS families. They recreated these same mutations in mouse models and found that they caused the mice to sleep less. Understanding how these individuals are able to tolerate higher sleep pressure and behavioral drive will prove useful for the many people who hold jobs that require long durations of wakefulness. Familial Natural Long Sleep. Familial natural long sleep (FNLS) likely exists; however, no genetic variants causing FNLS have yet been found. People with FNLS likely need more than 8 hours of sleep per day to feel well rested. This group of individuals may be harder to detect due to comorbidities, such as depression. Additional research is necessary to learn more about this sleep trait. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "A\\beta" } ]
https://en.wikipedia.org/wiki?curid=67466517
67467053
2 Chronicles 27
Second Book of Chronicles, chapter 27 2 Chronicles 27 is the twenty-seventh chapter of the Second Book of Chronicles of the Old Testament in the Christian Bible or of the second part of the Books of Chronicles in the Hebrew Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had its final shape established in the late fifth or fourth century BCE. This chapter belongs to the section focusing on the kingdom of Judah until its destruction by the Babylonians under Nebuchadnezzar and the beginning of restoration under Cyrus the Great of Persia (2 Chronicles 10 to 36). The focus of this chapter is the reign of Jotham, king of Judah. Text. This chapter was originally written in the Hebrew language and is divided into 9 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Jotham, king of Judah (27:1–9). Jotham receives positive judgment for his reign (cf. 2 Kings 15), repeating the praise for Uzziah (2 Chronicles 25:2) with the addition of 'only he did not invade the temple of the LORD' (verse 2), and therefore was rewarded threefold (verses 3–6): "Jotham was twenty-five years old when he began to reign, and he reigned sixteen years in Jerusalem. His mother's name was Jerushah the daughter of Zadok." "He fought also with the king of the Ammonites, and prevailed against them. And the children of Ammon gave him the same year an hundred talents of silver, and ten thousand measures of wheat, and ten thousand of barley. So much did the children of Ammon pay unto him, both the second year, and the third." Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=67467053
67467257
Ravi Allada
Indian-American chronobiologist Ravi Allada (born 1967) is an Indian-American chronobiologist studying the circadian and homeostatic regulation of sleep primarily in the fruit fly Drosophila. He is currently the Executive Director of the Michigan Neuroscience Institute (MNI), a collective which connects neuroscience investigators across the University of Michigan to probe the mysteries of the brain on a cellular, molecular, and behavioral level. Working with Michael Rosbash, he positionally cloned the Drosophila Clock gene. In his laboratory at Northwestern, he discovered a conserved mechanism for circadian control of sleep-wake cycle, as well as circuit mechanisms that manage levels of sleep. Early life. Ravi Allada was born on August 20, 1967, in Midland, Michigan, to Indian immigrant parents, Sambasiva Rao and Jayalakshmi. Allada has two brothers, Vivek and Gopal, who both currently work as physicians. At the age of 11, Allada won 3rd place in a free throw competition. Allada's interest in sports also led him to track baseball statistics, which triggered Ravi Allada’s interest in math and later, his research on jet lag for MLB players. Education. Allada graduated from H. H. Dow High School in 1985. Following high school, Allada attended the University of Michigan where he was awarded his B.S. degree. Allada was also awarded his M.D. by the University of Michigan. While attending University of Michigan Medical School, Allada spent two years as an HHMI-NIH Research Scholar working with Howard Nash on a molecular genetics project relating to general anesthesia in Drosophila. Before the end of medical school, he returned to NIH as a HHMI-NIH Continued Support Fellow working with Carl Wu at the NCI. Following medical school, he completed his residency in clinical pathology at Brigham and Women's Hospital in Boston. Thereafter, he completed an HHMI Physician Postdoctoral Fellowship with Michael Rosbash at Brandeis University. Career. Currently, Allada is the Executive Director of the Michigan Neuroscience Institute (MNI). He also holds an professorship in the University of Michigan's Department of Anesthesiology and is the Theophile Raphael, M.D., Collegiate Professor of Neurosciences. Prior to joining MNI in September 2023, Allada was a Professor and Chair of Neurobiology and a Professor and Associate Director of the Center for Sleep and Circadian Biology at Northwestern University. Allada also served on the NIH Sleep Disorders Research Advisory Board, the Society for Research on Biological Rhythms Board as a member and Secretary, and the Sleep Research Society's Board of Directors from 2020-2023. The Allada lab focuses on finding molecular components of the circadian clock and their impacts on neurodegenerative diseases, sleep, jet lag, and memory processing. His lab has begun to shift its focus to research regarding sleep homeostasis. Allada’s research has been supported financially by the NIH, the Defense Advanced Research Projects Agency, and other private foundations. Early research. "Drosophila" circadian rhythms. Molecular identification of the Drosophila Clock Gene (1998). Using Drosophila melanogaster as a model organism, Allada and his team used forward genetics to discover a circadian rhythm gene called Drosophila Clock ("dClock"; "Clk"). Forward genetics screens for observable phenotypes that could potentially correspond to underlying genetic differences typically resulting from randomly induced mutagenesis. 
dClock (Clk) was discovered when Allada and his colleagues were completing a forward genetic screen of EMS mutagenized flies. The mutation found by Allada that abolishes fly circadian rhythms is termed Jrk. In normal functioning, CLOCK proteins encoded by the "Clk" gene form a dimer with CYCLE proteins. The dimer binds to the E-box sequence, which activates the enhancers of the per and tim genes. "per" and "tim" have been shown in Drosophila to have daily rhythms of transcription. These "per" and "tim" mRNA transcripts are translated into proteins, PER and TIM, which heterodimerize and are essential for circadian rhythms. The "Jrk" mutation within "Clk" eliminates the cycling of "per" and "tim" mRNA transcripts, which disrupts molecular and behavioral outputs of the circadian clock. Studies of the "Jrk" mutation in the "Clk" gene showed dominant effects in Drosophila. Half of the heterozygous flies demonstrated arrhythmic activity and reduced amplitudes of "per/tim" transcripts in constant darkness, while all homozygous flies showed arrhythmic activity in constant darkness. Coupled with complementation data from a null deletion, these data suggest that the "Jrk" mutation has a dominant-negative effect, meaning that a single mutant copy of the gene is sufficient to interfere with the phenotype. Further studies of the output of other clock proteins, namely PERIOD (PER) and TIMELESS (TIM), showed very low expression levels. In Drosophila, the two well studied clock genes, period ("per") and timeless ("tim"), undergo circadian oscillations. The low levels of PER and TIM could be explained by lower protein stability or reduced protein synthesis in the mutant strains. To distinguish these possibilities from an effect on transcription, Allada et al. conducted experiments measuring the levels of "per" and "tim" RNA. The experiments showed low, non-cycling levels of RNA, which suggested reduced synthesis rather than reduced stability. To compare its function to the mouse CLOCK gene, in situ cloning and DNA sequencing were performed, revealing a point mutation that changes a triplet codon to a premature stop codon. Allada et al. concluded that the "Jrk" mutation disrupts the transcription cycling of "per" and "tim", since it encodes a premature stop codon that truncates the C-terminal activation domain of the bHLH-PAS transcription factor, abolishing its function. Clk and ectopic circadian rhythms (2003-2005). Further research on the "dClock" ("Clk") gene and its role in the Drosophila transcription translation feedback loop (TTFL) revealed the gene's ability to stimulate gene expression of other components of the TTFL ectopically (outside of pacemaker cells) and subsequently modify daily organismal behavior. "Clk" and other circadian genes are predominantly expressed in the "lateral" neurons (LN), the pacemaker neurons of Drosophila's central circadian clock. Allada's research team worked to understand how the expression of certain clock genes influenced the expression of other clock genes and where these genes were expressed. They used the GAL4/UAS system to examine how the expression of certain clock genes, the pigment dispersing factor gene ("pdf") and long and short versions of the "cry" promoter DNA sequence, affected neuronal gene expression. "Pdf"-GAL4 and long "cry"-GAL4 were only active in known clock neurons, but the shorter "cry" sequence was also active in "non-circadian" neurons.
When short "cry"-GAL4 was coupled with the UAS-"Clk" gene in these non-circadian neurons, these ectopic sites rhythmically expressed the genes "Tim" and even "cry", a circadian clock component that is expressed antiphase to "Tim". These results demonstrate that misexpression of "Clk" was sufficient to induce an ectopic circadian clock. Additionally, transgenic flies with misexpression of "Clk" displayed different patterns in locomotor activity to wild type files under light-dark conditions: they displayed a single peak of activity in the daytime, opposed to two peaks of activity in the morning and evening. These results suggest that the ectopic clocks were sufficient to influence behavioral circadian rhythms. PDF receptor (2005). To gain a deeper understanding of PDF's function, Allada and his colleagues worked to identify the PDF receptor protein. The receptor was found to be a class II peptide G protein-coupled receptor. The location of the PDF receptor was identified while observing fruit flies with an inversion mutation in a known potassium channel that disrupted circadian processes. It found that flies with this mutation in the potassium channel had a genomic insertion in the gene for the PDF receptor which caused the disruption. Flies possessing this receptor mutation became known as groom-of-PDF or gop. Comparing the oscillation of clock proteins within pacemaker neurons in wild-type flies to those in gop flies revealed an advance in clock protein oscillation. Allada and colleagues measured differences between the peaks of behavioral rhythmicity after genetically modifying PDF neurons to have a slower clock in both wild-type and gop flies. Wild-type flies with the genetically modified PDF neurons were able to delay peak behavioral rhythmicity whereas gop flies were not. PDF neurons being able to alter rhythmicity within wild-type flies demonstrates PDF's role as a signaling molecule. PDF's lack of ability to change rhythmicity within gop flies provides support that gop is a downstream receptor of PDF. Casein kinase 2 (2002-2008). casein kinase 2 (CK2) is a protein that helps to regulate key pacemaker proteins, TIM and PER. TIM and PER proteins form a heterodimer that serves to inhibit the further transcription of clock genes tim and per. Nuclear entry of the heterodimer inhibits CLK-CYC from activating any further transcription of tim and per. Therefore, regulation of TIM and PER genes is essential to regulating other clock genes and outputs. Allada sought to understand what molecular mechanisms underlie PER and TIM regulation. TIM and PER nuclear entry appears to be regulated by phosphorylation. Phosphorylation of this heterodimer is partially carried out by CK2, which is composed of different subunits CK2formula_0 and CK2formula_1. Allada studied fruit flies with a mutant CK2formula_0 gene, termed CK2formula_0"Tik" "," and observed an abnormally long behavioral rhythm of about 33 hours. The lengthened period of the CK2formula_0"Tik" mutants helped to highlight the importance of CK2 in regulating daily PER and TIM oscillations. A later study conducted by Allada and his colleagues, attempted to understand why CK2 was important for regulation of the PER and TIM complex. To determine CK2's significance, Allada investigated flies with mutations in CK2 target sites in PER and TIM proteins. Mutations of PER CK2 target sites did not lead to abnormal accumulation of PER, but mutations in the TIM CK2 target sites did. 
The tim allele that caused the accumulation of PER, termed tim"UL", carries a mutation at a serine site thought to prevent CK2 phosphorylation. Accumulation of PER proteins as a result of TIM CK2 target sites being mutated provides evidence that the purpose of CK2 is to regulate the stability of the TIM protein. Later research. How clocks control neuronal excitability (2005-2015). Allada and colleagues are credited with understanding the "na" (NARROW ABDOMEN) gene's integral role in Drosophila circadian clock output and normal rest:activity rhythms. The Drosophila gene "na" codes for an ion channel with homology to the mammalian sodium leak channel, nonselective (NALCN). Mutants of "na" show poor circadian rhythms, yet oscillations of the clock protein PER remain; pacemaker neurons were found to express "na", and inducing "na" in pacemaker neurons is sufficient to restore normal locomotor activity rhythms. This served as an indication that NA likely functions on the clock output and that “the mutant is a result of disruption in the coupling between the central clock and the neuronal networks controlling locomotion.” Further research on the NA ion channel provided evidence for DN1 pacemaker neurons as essential for light response and PDF signaling integration, mediating anticipatory locomotor behavior and robust daily rhythms in behavior. Mutants of "na" lack a significant increase in locomotor activity in response to light and show reduced free-running rhythms and anticipatory behavior before dawn, making "na" a potential gene involved in photic responses and clock function. Rescue of "na" in a cluster of posterior DN1 neurons implicates its role in “mediating the acute response to the onset of light” and anticipatory behavior, and pdf expression in DN1 partially rescues morning and free-running rhythms, including in DN1 neurons. These findings suggest that this section of the DN1 utilizes photic and PDF signaling to mediate the behavioral output of Drosophila. Posterior DN1 pacemaker neurons demonstrate rhythmicity in firing rate throughout the day to facilitate behaviors of sleeping and waking, firing frequently in the morning and scarcely in the evening. DN1 membrane potentials and conductance of sodium and potassium have daily rhythms, indicating that they are potentially under circadian clock control. Allada's team discovered that a “voltage-independent sodium conductance via the NA/NALCN ion channel” raises the resting potential of DN1 cells to increase their firing rate during the day and is controlled by the rhythmic expression of its localization ER protein Nlf-1; potassium channels also peak in the evening to lower membrane potential, subdue DN1 firing and promote sleep. The researchers refer to this antiphase activity of sodium and potassium currents as a “bicycle” mechanism, and its discovery in nocturnal mice suggests this mechanism is ancient, well-conserved through evolution, and thus likely present in humans. Sleep homeostasis (2006-2017). Mushroom bodies (2006). Drosophila has served as a model organism for studying the mechanisms and function of sleep since “flies and vertebrates share… behavioral and physiological traits of sleep,” including the presence of both circadian and homeostatic sleep components. Allada's research team conducted one of the first unbiased neurogenetic screens for sleep-regulating neurons, which identified the Drosophila mushroom bodies (MBs) as a major sleep regulatory center with an effect on wakefulness and sleep duration.
The MBs are also well known for their role in learning and memory, connecting sleep regulation to memory consolidation. RDL inhibition of PDF neurons (2009). Transcriptional clock components and sleep homeostasis (the basic principle of sleep regulation behind the biological response to sleep deprivation) mediate timely sleeping and waking. Research on the neuropeptide PDF and its receptor solidified the role of PDF, which is expressed in pacemaker neurons, as an output of the circadian clock that promotes wakefulness, particularly late-night activity. This work also further demonstrated the significance of the GABAA receptor gene, "Resistant to dieldrin" ("Rdl"), in promoting sleep. RDL's role in PDF pacemaker neuron inhibition was supported by electrophysiological evidence that GABA induced an inward current of chloride and that the GABA antagonist picrotoxin blocked this current in lLNv neurons, the PDF-secreting, arousal-promoting pacemaker neurons. These findings outline one of the first proposed wake-promoting circuits in Drosophila, which posits that PDF neuron activation is controlled by the circadian clock to time waking behavior and that GABA serves as an inhibitor of these neurons to promote sleep. Rebound sleep (2012-2017). Allada has continued to uncover the molecular basis of sleep homeostasis to further understand why sleep occurs. When an organism is sleep deprived, compensatory sleep mechanisms come into play. Sleep-deprived organisms engaging in compensatory sleep, or sleep rebound, is a good indicator of homeostatic sleep regulation. Rebound sleep is the longer-than-average sleep time following sleep deprivation of an organism. A genetic screen for Drosophila mutants with sleep disturbances yielded one of the most severe sleep phenotypes to date, mutants of "Cul3" and "insomniac" ("inc"). "Cul3" and "inc" refer to an E3 ubiquitin ligase and its adaptor, respectively, and disruption of these genes or of the ability of these two constituents to interact results in reduced sleep duration and homeostatic response to sleep deprivation, implicating "cul3" and "inc" in sleep homeostasis. Although "cul3" and "inc" are known to be involved in protein ubiquitination, it is unclear how reduced activity of these genes impacts sleep. Allada et al. propose that Inc/Cul3 proteins may “impact dopaminergic modulations of sleep”, given that loss of "cul3" and "inc" results in “hyper-arousability to a mechanical stimulus in adult flies” like flies with increased dopaminergic signaling. Additionally, the reduced sleep duration and homeostatic regulation phenotype of "inc" mutants can be rescued with a pharmacological intervention that inhibits dopamine biosynthesis, one of the very few recorded cases of successful pharmacological rescue of disrupted sleep homeostasis. This evidence may be used to further our understanding of sleep as a molecular system. Jet lag in athletes (2017). Allada's interest in sports has continued throughout his life and, in 2017, he and his lab reported that jet lag impacted the performance of major league baseball players. Although circadian clocks have been studied extensively in controlled lab settings, the function of these biological clocks in natural settings has received far less attention. With data from 40,000 MLB games spanning 20 years, they discovered a significant negative correlation between the time zone change experienced by players and their performance on the field.
For example, east coast teams that had travelled west for games recorded decreased game performance after flying back home for home games. The difference was most significantly seen in pitchers, who gave up more home runs. Interestingly, the study observed that jet lag effects were mostly evident after eastward travel, with very limited effects after westward travel. TimeSignature (2018). The accurate assessment of physiological time using certain biomarkers found in human blood can improve diagnosis of circadian disorders and optimize chronotherapy. In order to receive an assessment of one's biological time, a dim light melatonin onset test is often used. This requires the patient to stay in low-light conditions while numerous blood or saliva samples are taken. However, a new blood test that only requires two blood draws may be able to provide physicians with accurate patient chronotypes. Computational biologist Rosemary Braun, Allada, and colleagues at Northwestern University published the study in the Proceedings of the National Academy of Sciences USA. The study claimed that the blood draw test can be easily generalized to new patients. The major obstacle in developing a blood test is to find a reliable gene expression biomarker. Due to the diversity of measurement platforms and inherent variability, many biomarkers perform well in original data sets but cannot be universally applied to new samples. Machine learning algorithms can now learn which genes give the best indication of biological time. With the help of TimeSignature, a computer algorithm that infers circadian time from gene expression, the team was able to yield highly accurate results in a wide population without renormalizing the data. Neurodegenerative research. Huntington's Disease (2019). Allada and his team have recently been studying a neurodegenerative disease called Huntington's disease and its relationship with circadian rhythm. Using a Drosophila model of Huntington's disease, they found evidence that environmental and genetic perturbations of the circadian clock alter the neurodegeneration caused by Huntington's disease. The results suggested that knockdown of the clock-regulated protein called Heat Shock Protein 70/90 Organizing Protein (HOP) reduces mutant huntingtin aggregation and toxicity, providing evidence for a causal relationship between the circadian clock and neurodegenerative disease. Sleep waste (2021). Allada has been studying proboscis extension sleep, a deep sleep stage in Drosophila similar to human deep sleep. The study identified that the prevention of proboscis extensions increased injury-related mortality and reduced waste clearance. Allada and his lab team administered luciferin, a substrate for firefly luciferase reporters, and found evidence that Drosophila proboscis extension sleep plays a functional role in waste clearance. In subsequent experiments, Allada has emphasized the importance of this waste clearance function in maintaining brain health and preventing neurodegenerative disease. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\alpha" }, { "math_id": 1, "text": "\\beta" } ]
https://en.wikipedia.org/wiki?curid=67467257
67471804
Rayleigh–Kuo criterion
Stability condition for fluids The Rayleigh–Kuo criterion (sometimes called the Kuo criterion) is a stability condition for a fluid. This criterion determines whether or not a barotropic instability can occur, leading to the presence of vortices (like eddies and storms). The Kuo criterion states that for barotropic instability to occur, the gradient of the absolute vorticity must change its sign at some point within the boundaries of the current. Note that this criterion is a necessary condition, so if it does not hold it is not possible for a barotropic instability to form. But it is not a sufficient condition, meaning that if the criterion is met, this does not automatically mean that the fluid is unstable. If the criterion is not met, it is certain that the flow is stable. This criterion was formulated by Hsiao-Lan Kuo and is based on Rayleigh's equation, named after Lord Rayleigh, who first introduced this equation in fluid dynamics. Barotropic instability. Vortices like eddies are created by instabilities in a flow. When there are instabilities within the mean flow, energy can be transferred from the mean flow to the small perturbations, which can then grow. In a barotropic fluid the density is a function of only the pressure and not the temperature (in contrast to a baroclinic fluid, where the density is a function of both the pressure and temperature). This means that surfaces of constant density (isopycnals) are also surfaces of constant pressure (isobars). Barotropic instability can form in different ways. Two examples are an interaction between the fluid flow and the bathymetry or topography of the domain, and frontal instabilities (which may also lead to baroclinic instabilities). These instabilities are not dependent on the density and might even occur when the density of the fluid is constant. Instead, most of the instabilities are caused by a shear on the flow, as can be seen in Figure 1. This shear in the velocity field induces a vertical and horizontal vorticity within the flow. As a result, there is upwelling on the right of the flow and downwelling on the left. This situation might lead to a barotropically unstable flow. The eddies that form alternatingly on both sides of the flow are part of this instability. Another way to achieve this instability is to displace the Rossby waves in the horizontal direction (see Figure 2). This leads to a transfer of kinetic energy (not potential energy) from the mean flow towards the small perturbations (the eddies). The Rayleigh–Kuo criterion states that the gradient of the absolute vorticity should change sign within the domain. In the example of the shear-induced eddies on the right, this means that the second derivative of the flow in the cross-flow direction should be zero somewhere. This happens in the centre of the eddies, where the acceleration of the flow perpendicular to the flow changes direction. Examples. The presence of these instabilities in a rotating fluid has been observed in laboratory experiments. The settings of the experiment were based on the conditions in the Gulf Stream and showed that barotropic instabilities can occur within ocean currents such as the Gulf Stream. Barotropic instabilities have also been observed in other Western Boundary Currents (WBCs). In the Agulhas current, the barotropic instability leads to ring shedding. The Agulhas current retroflects (turns back) near the coast of South Africa.
At this same location, some anti-cyclonic rings of warm water escape from the mean current and travel along the coast of Africa. The formation of these rings is a manifestation of a barotropic instability. Derivation. The derivation of the Rayleigh–Kuo criterion was first written down by Hsiao-Lan Kuo in his 1949 paper "Dynamic instability of two-dimensional nondivergent flow in a barotropic atmosphere". This derivation is repeated and simplified below. First, the assumptions made by Hsiao-Lan Kuo are discussed. Second, the Rayleigh equation is derived as a step towards the Rayleigh–Kuo criterion. By integrating this equation and filling in the boundary conditions, the Kuo criterion can be obtained. Assumptions. In order to derive the Rayleigh–Kuo criterion, some assumptions are made on the fluid's properties. We consider a nondivergent, two-dimensional barotropic fluid. The fluid has a mean zonal flow direction which can vary in the meridional direction. On this mean flow, some small perturbations are imposed in both the zonal and meridional directions: formula_0 and formula_1. The perturbations need to be small in order to linearize the vorticity equation. Vertical motion and divergence and convergence of the fluid are neglected. Had these factors been taken into account, a similar result would have been obtained, with only a small shift in the position of the criterion within the velocity profile. The derivation of the Kuo criterion will be done within the domain formula_2. On the northern and southern boundaries of this domain, the meridional flow is zero. Rayleigh Equation. Barotropic vorticity equation. To derive the Rayleigh equation for a barotropic fluid, the barotropic vorticity equation is used. This equation assumes that the absolute vorticity is conserved: formula_3 Here, formula_4 is the material derivative. The absolute vorticity is the relative vorticity plus the planetary vorticity: formula_5. The relative vorticity, formula_6, is the rotation of the fluid with respect to the Earth. The planetary vorticity (also called the Coriolis frequency), formula_7, is the vorticity of a parcel induced by the rotation of the Earth. When applying the beta-plane approximation for the planetary vorticity, the conservation of absolute vorticity looks like: formula_8 The relative vorticity is defined as formula_9 Since the flow field consists of a mean flow with small perturbations, it can be written as formula_10 with formula_11 and formula_12 This formulation is used in the vorticity equation: formula_13Here, formula_14 and formula_15 are the zonal and meridional components of the flow and formula_16 is the relative vorticity induced by the perturbations on the flow (formula_17 and formula_18). formula_19 is the mean zonal flow and formula_20 is the derivative of the planetary vorticity formula_21 with respect to formula_22. Linearization. A zonal mean flow with small perturbations was assumed, formula_23, and a meridional flow with a zero mean, formula_24. Since it was assumed that the perturbations are small, a linearization can be performed on the barotropic vorticity equation above, ignoring all the non-linear terms (terms where two or more small variables, i.e. formula_25, are multiplied with one another). Also, the derivative of formula_26 in the zonal direction, the time derivative of the mean flow formula_19, and the time derivative of formula_27 are zero.
This results in a simplified equation: formula_28 Here formula_16 is as defined above (formula_29), and formula_30 and formula_31 are the small perturbations in the zonal and meridional components of the flow. Stream function. To find the solution to the linearized equation, a stream function was introduced by Lord Rayleigh for the perturbations of the flow velocity: formula_32These new definitions of the stream function are used to rewrite the linearized barotropic vorticity equation. formula_33Here, formula_34 is the second derivative of formula_35 with respect to formula_36 formula_37. To solve this equation for the stream function, a wave-like solution was proposed by Rayleigh, which reads formula_38. The amplitude formula_39 may be a complex number, formula_40 is the wave number, which is a real number, and formula_41 is the phase velocity, which may be complex as well. Inserting this proposed solution leads us to the equation which is known as Rayleigh's equation. formula_42To get to this equation, in the last step it was used that formula_40 cannot be zero and neither can the exponential. This means that the term in the square brackets needs to be zero. The symbol formula_43 denotes the second derivative of the amplitude of the stream function, formula_44, with respect to formula_36 formula_45. This last equation is known as Rayleigh's equation, which is a linear ordinary differential equation. It is very difficult to explicitly solve this equation. For this reason, Hsiao-Lan Kuo came up with a stability criterion for this problem without actually solving it. Kuo Criterion. Instead of solving Rayleigh's equation, Hsiao-Lan Kuo came up with a necessary stability condition which has to be met in order for the fluid to be able to become unstable. To get to this criterion, Rayleigh's equation is rewritten and the boundary conditions of the flow field are used. The first step is to divide Rayleigh's equation by formula_46 and to multiply the equation by the complex conjugate of formula_47. formula_48In the last step, the following equality, obtained by multiplying the numerator and denominator by the complex conjugate of formula_46, is used: formula_49. For the solution of Rayleigh's equation to exist, both the real and imaginary parts of the equation above need to be equal to zero. Boundary conditions. To get to the Kuo criterion, the imaginary part is integrated over the domain (formula_50). The stream function at the boundaries of the domain is zero, formula_51, as already stated in the assumptions. The zonal flow must vanish at the boundaries of the domain. This leads to a constant stream function, which is set to zero for convenience. formula_52 The first integral can be solved: formula_53 So the first integral is equal to zero. This means that the second integral should also be zero, making it possible to solve this integral numerically. formula_54 When formula_55 is zero, the amplitude of the solution does not grow, meaning that the solution is stable. We are looking for an unstable situation, in which formula_55 is non-zero, so the integral of formula_56 must vanish. Since the fraction in front of formula_57 is non-zero and positive, this leads to the conclusion that formula_57 must change sign, and hence pass through zero, somewhere within the domain. This leads to the final formulation, the Kuo criterion: formula_58Here, formula_35 is the mean zonal flow and formula_59 is the derivative of the planetary vorticity formula_60 with respect to formula_36. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
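As a rough illustration of how the criterion is used in practice, the sketch below numerically evaluates formula_57, the meridional gradient of the mean absolute vorticity, for an assumed Bickley-jet profile and checks whether it changes sign within the domain. The jet shape and all parameter values are illustrative assumptions and are not taken from Kuo's paper.

```python
import numpy as np

# Minimal numerical check of the Kuo criterion for an assumed jet profile.
# The Bickley jet U(y) = U0 / cosh^2(y / L) and every parameter value below
# are illustrative assumptions, not values from Kuo (1949).

beta = 2e-11                        # planetary vorticity gradient, 1/(m s), mid-latitude estimate
U0 = 1.0                            # peak jet speed, m/s
L_jet = 5e4                         # jet half-width, m
y = np.linspace(-3e5, 3e5, 2001)    # meridional coordinate, m
U = U0 / np.cosh(y / L_jet) ** 2    # mean zonal flow U(y)

d2U_dy2 = np.gradient(np.gradient(U, y), y)   # second meridional derivative U''

# Kuo criterion: beta - U'' must change sign somewhere in the domain
# for barotropic instability to be possible (necessary, not sufficient).
q_y = beta - d2U_dy2
changes_sign = np.any(np.sign(q_y[:-1]) != np.sign(q_y[1:]))
print("beta - U'' changes sign:", changes_sign,
      "-> barotropic instability", "possible" if changes_sign else "ruled out")
```

For this particular jet the curvature term exceeds the beta term on the flanks of the jet, so the gradient changes sign and the necessary condition for instability is satisfied; a sufficiently broad or weak jet would fail the test and be stable.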
[ { "math_id": 0, "text": " u(y,t) = U(y) + u^*(y,t) " }, { "math_id": 1, "text": "v = v^*" }, { "math_id": 2, "text": "L=[0,y ]" }, { "math_id": 3, "text": "\\frac{d \\zeta_a}{dt} = 0" }, { "math_id": 4, "text": "\\frac{d}{dt}" }, { "math_id": 5, "text": "\n\\zeta_a = \\zeta + f" }, { "math_id": 6, "text": "\\zeta" }, { "math_id": 7, "text": "f" }, { "math_id": 8, "text": "\\frac{d\\zeta_a}{dt} = \\frac{d}{dt}\\left(\\zeta + \\beta y \\right) = 0" }, { "math_id": 9, "text": "\\zeta = \\frac{\\partial v}{\\partial x} - \\frac{\\partial u}{\\partial y}." }, { "math_id": 10, "text": "\\zeta = \\overline{\\zeta} + \\zeta^*" }, { "math_id": 11, "text": "\\overline{\\zeta} = -\\frac{\\partial U}{\\partial y}" }, { "math_id": 12, "text": "\\zeta^* = \\frac{\\partial v^*}{\\partial x} - \\frac{\\partial u^*}{\\partial y}." }, { "math_id": 13, "text": "\n\\begin{align}\n0 &= \\frac{d}{dt}\\left(\\zeta + \\beta y\\right) \\\\\n0 &= \\frac{d}{dt}\\left(\\zeta ' + \\overline{\\zeta} + \\beta y\\right) \\\\\n0 &= \\left( \\frac{\\partial}{\\partial t} + u\\frac{\\partial}{\\partial x} + v\\frac{\\partial}{\\partial y} \\right) \\left(\\zeta^* - \\frac{\\partial U}{\\partial y} + \\beta y \\right)\n\\end{align}" }, { "math_id": 14, "text": "\nu" }, { "math_id": 15, "text": "\nv" }, { "math_id": 16, "text": "\n\\zeta'" }, { "math_id": 17, "text": "\nu'" }, { "math_id": 18, "text": "\nv'" }, { "math_id": 19, "text": "\nU" }, { "math_id": 20, "text": "\n\\beta" }, { "math_id": 21, "text": "\nf" }, { "math_id": 22, "text": "\ny" }, { "math_id": 23, "text": "\nu = U+ u^*" }, { "math_id": 24, "text": "\nv = v^*" }, { "math_id": 25, "text": "\nu^*, v^*, \\zeta^*" }, { "math_id": 26, "text": "\nu " }, { "math_id": 27, "text": "\n\\beta y" }, { "math_id": 28, "text": "\n\\begin{align}\n0 &= \\left( \\frac{\\partial }{\\partial t} + U\\frac{\\partial}{\\partial x}\\right)\\zeta' - v'\\frac{\\partial}{\\partial y}\\frac{\\partial U}{\\partial y} + v'\\frac{\\partial}{\\partial y} \\left(\\beta y\\right)\\\\\n0 &= \\left( \\frac{\\partial }{\\partial t} + U\\frac{\\partial}{\\partial x} \\right)\\zeta' + v' \\left( \\beta - \\frac{\\partial^2 U}{\\partial y^2}\\right).\n\\end{align}" }, { "math_id": 29, "text": "\n\\zeta^* = \\frac{\\partial v^*}{\\partial x} - \\frac{\\partial u^*}{\\partial y}" }, { "math_id": 30, "text": "\nu^*" }, { "math_id": 31, "text": "\nv^*" }, { "math_id": 32, "text": "u^* = \\frac{\\partial \\psi}{\\partial y}, \\;\\;\\;\\; v^* = \\frac{-\\partial \\psi}{\\partial x}." 
}, { "math_id": 33, "text": "\\begin{align}\n0 &= \\left(\\frac{\\partial}{\\partial t} + U\\frac{\\partial}{\\partial x}\\right) \\left(\\frac{\\partial v^*}{\\partial x} - \\frac{\\partial u^*}{\\partial y}\\right) + v^*\\left(\\beta - \\frac{\\partial^2 U}{\\partial y^2}\\right)\\\\\n0 &= \\left(\\frac{\\partial}{\\partial t} + U\\frac{\\partial}{\\partial x}\\right) \\left(-\\frac{\\partial^2 \\psi}{\\partial x^2} - \\frac{\\partial^2 \\psi}{\\partial y ^2}\\right) - \\frac{\\partial \\psi}{\\partial x}\\left(\\beta - U''\\right)\\\\\n0 &= \\left(\\frac{\\partial}{\\partial t} + U\\frac{\\partial}{\\partial x}\\right) \\nabla^2 \\psi + \\frac{\\partial \\psi}{\\partial x}\\left(\\beta - U''\\right) \n\\end{align}\n" }, { "math_id": 34, "text": "U''\n" }, { "math_id": 35, "text": "U\n" }, { "math_id": 36, "text": "y\n" }, { "math_id": 37, "text": "(U'' = \\frac{\\partial^2 U}{\\partial y^2})\n" }, { "math_id": 38, "text": "\\psi(x,y,t) = \\Psi(y)e^{i\\alpha\\left(x-ct\\right)}\n" }, { "math_id": 39, "text": "\\Psi(y)\n" }, { "math_id": 40, "text": "\\alpha\n" }, { "math_id": 41, "text": "c\n" }, { "math_id": 42, "text": "\\begin{align}\n0 &= \\left(\\frac{\\partial}{\\partial t} + U\\frac{\\partial}{\\partial x}\\right) \\nabla^2 \\psi + \\frac{\\partial \\psi}{\\partial x}\\left(\\beta - U''\\right)\\\\[14pt]\n0 &= \\left(\\frac{\\partial}{\\partial t} + U\\frac{\\partial}{\\partial x}\\right)\\left((i\\alpha)^2\\Psi e^{i\\alpha\\left(x-ct \\right)} + \\Psi''e^{i\\alpha\\left(x-ct\\right)}\\right) + i\\alpha\\Psi e^{i\\alpha(x-ct)}(\\beta - U'')\\\\[14pt]\n0 &= (i\\alpha)^2\\Psi e^{i\\alpha(x-ct)}(-i\\alpha c + i\\alpha U ) + \\Psi''e^{i\\alpha(x-ct)} (-i\\alpha c+i\\alpha U) + i\\alpha\\Psi e^{i\\alpha(x-ct)}(\\beta - U'')\\\\[14pt]\n0 &= i\\alpha e^{i\\alpha(x-ct)}[(\\alpha^2c - \\alpha^2 U)\\Psi + \\Psi''(-c + U) + \\Psi(\\beta - U'')]\\\\[14pt]\n0 &= (U-c)(\\Psi'' - \\alpha^2\\Psi) + (\\beta - U'')\\Psi)\\\\[14pt]\n\\end{align}\n" }, { "math_id": 43, "text": "\\Psi''\n" }, { "math_id": 44, "text": "\\Psi\n" }, { "math_id": 45, "text": "(\\Psi'' = \\frac{\\partial ^2 \\Psi}{\\partial y^2})\n" }, { "math_id": 46, "text": "(U-c)\n" }, { "math_id": 47, "text": "\\Psi\\;\\; (\\Psi^*= \\Psi_r - i\\Psi_i)\n" }, { "math_id": 48, "text": "\\begin{align}\n0 &= (\\Psi'' - \\alpha^2\\Psi) + \\left( \\frac{\\beta - U''}{U-c} \\right)\\Psi\\\\\n0 &= \\Psi^*(\\Psi_r'' + i\\Psi_i'' - \\alpha^2\\Psi_r - \\alpha^2 i\\Psi_i) + \\Psi^*\\left(\\frac{\\beta - U''}{U-c}\\right)(\\Psi_r - i\\Psi_i)\\\\\n0 &= \\Psi_r\\Psi_r'' + \\Psi_i\\Psi_i'' - \\alpha^2(\\Psi_r^2 - \\Psi_i^2) + \\left(\\frac{\\beta - U''}{U-c}\\right) (\\Psi_r^2 - \\Psi_i^2) + i(-\\Psi_i\\Psi_r'' + \\Psi_r\\Psi_i'')\\\\\n0 &= \\Psi_r''\\Psi_r + \\Psi_i''\\Psi_i + \\left(-\\alpha^2 + \\frac{U - c_r}{|U-c|^2}(\\beta - U'')\\right)|\\Psi|^2 + i\\left(\\frac{c_i}{|U-c|^2}(\\beta - U'')|\\Psi|^2 - \\Psi_r''\\Psi_i + \\Psi_i''\\Psi_r\\right)\n\\end{align}\n" }, { "math_id": 49, "text": "\\frac{1}{U-c} =\\frac{1}{U-c_r - ic_i} = \\frac{U-c_r + ic_i}{|U-c|^2} \n" }, { "math_id": 50, "text": "y=[0,L]\n" }, { "math_id": 51, "text": "\\Psi(0) = \\Psi(L) = 0\n" }, { "math_id": 52, "text": "\\int_0^L(\\Psi_r\\Psi_i'' - \\Psi_i\\Psi_r'')dy + \\int_0^L\\left(c_i \\frac{|\\Psi|^2}{|U-c|}(\\beta - U'')\\right) =0\n" }, { "math_id": 53, "text": "\\begin{align}\n\\int_0^L (\\Psi_r\\Psi_i'' - \\Psi_i\\Psi_r'')dy &= \\int_0^L \\frac{\\partial }{\\partial y}(\\Psi_r\\Psi_i' - \\Psi_i\\Psi_r')dy\\\\[8pt]\n&= (\\Psi_r\\Psi_i' - \\Psi_i\\Psi_r')|_0^L 
\\\\[8pt]\n&= 0 \\\\[12pt]\n\\end{align}\n" }, { "math_id": 54, "text": "\\begin{align}\n\\int_0^L\\left(c_i \\frac{|\\Psi|^2}{|U-c|}(\\beta - U'')\\right)dy &=0\\\\\n\\end{align}\n" }, { "math_id": 55, "text": "c_i\n" }, { "math_id": 56, "text": "\\frac{|\\Psi|^2}{|U-c|}(\\beta - U'')\n" }, { "math_id": 57, "text": "(\\beta - U'')\n" }, { "math_id": 58, "text": "\\begin{align}\n\\beta - U'' &= 0\\\\[10pt]\n\\beta &= U''\\\\[10pt]\n\\frac{\\partial(\\beta y)}{\\partial y} &= \\frac{\\partial^2 U}{\\partial y^2}\n\n\\end{align}\n" }, { "math_id": 59, "text": "\\beta\n" }, { "math_id": 60, "text": "f\n" } ]
https://en.wikipedia.org/wiki?curid=67471804
6747488
Binding constant
The binding constant, or affinity constant/association constant, is a special case of the equilibrium constant "K", and is the inverse of the dissociation constant. It is associated with the binding and unbinding reaction of receptor (R) and ligand (L) molecules, which is formalized as: R + L ⇌ RL The reaction is characterized by the on-rate constant "k"on and the off-rate constant "k"off, which have units of M−1 s−1 and s−1, respectively. In equilibrium, the forward binding transition R + L → RL should be balanced by the backward unbinding transition RL → R + L. That is, formula_0, where [R], [L] and [RL] represent the concentration of unbound free receptors, the concentration of unbound free ligand and the concentration of receptor-ligand complexes. The binding constant "K"a is defined by formula_1. An often considered quantity is the dissociation constant "K"d ≡ 1/"K"a = "k"off/"k"on, which has the unit of concentration, despite the fact that strictly speaking, all association constants are unitless values. The inclusion of units arises from the simplification that such constants are calculated solely from concentrations, which is not the case. Once chemical activity is factored into the correct form of the equation, a dimensionless value is obtained. For the binding of receptor and ligand molecules in solution, the molar Gibbs free energy Δ"G", or the binding affinity, is related to the dissociation constant "K"d via formula_2, in which "R" is the ideal gas constant, "T" the temperature and the standard reference concentration "c"o = 1 mol/L. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
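A minimal numerical sketch of these relationships is shown below; the rate constants, temperature and resulting numbers are arbitrary illustrative assumptions rather than data for any particular receptor-ligand pair.

```python
import math

# Illustrative relation between rate constants, K_a, K_d and binding free energy.
# All numerical values are assumed for the example.

k_on = 1.0e6      # on-rate constant, M^-1 s^-1
k_off = 1.0e-2    # off-rate constant, s^-1

K_a = k_on / k_off        # association (binding) constant, M^-1
K_d = k_off / k_on        # dissociation constant, M (the inverse of K_a)

R = 8.314         # ideal gas constant, J/(mol K)
T = 298.15        # temperature, K
c_std = 1.0       # standard reference concentration, mol/L

delta_G = R * T * math.log(K_d / c_std)   # Delta G = R T ln(K_d / c°), negative for favourable binding
print(f"K_a = {K_a:.2e} 1/M, K_d = {K_d:.2e} M, Delta G = {delta_G / 1000:.1f} kJ/mol")
```

With these assumed rates the sketch gives a K_d of 10 nM and a binding free energy of roughly −45.7 kJ/mol, illustrating that a smaller dissociation constant corresponds to a more negative, i.e. more favourable, Gibbs free energy of binding.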
[ { "math_id": 0, "text": "k_{\\rm on}\\,[{\\rm R}]\\,[{\\rm L}] = k_{\\rm off}\\,[{\\rm RL}]" }, { "math_id": 1, "text": "K_{\\rm a} = {k_{\\rm on} \\over k_{\\rm off}} = {[{\\rm RL}] \\over {[{\\rm R}]\\,[{\\rm L}]}}" }, { "math_id": 2, "text": "\\Delta G = R T\\ln{{K_{\\rm d} \\over c^{\\ominus}}}" } ]
https://en.wikipedia.org/wiki?curid=6747488
6748071
Time-bin encoding
Time-bin encoding is a technique used in quantum information science to encode a qubit of information on a photon. Quantum information science makes use of qubits as a basic resource similar to bits in classical computing. A qubit is any two-level quantum mechanical system; there are many different physical implementations of qubits, one of which is time-bin encoding. While the time-bin encoding technique is very robust against decoherence, it does not allow easy interaction between the different qubits. As such, it is much more useful in quantum communication (such as quantum teleportation and quantum key distribution) than in quantum computation. Construction of a time-bin encoded qubit. Time-bin encoding is done by having a single photon go through a Mach–Zehnder interferometer (MZ). The incoming photon is guided through one of two paths; the guiding can be done by optical fiber or simply in free space using mirrors and polarising cubes. One of the two paths is longer than the other. The difference in path length must be longer than the coherence length of the photon to make sure the path taken can be unambiguously distinguished. The interferometer has to keep a stable phase, which means that the path length difference must vary by much less than the wavelength of light during the experiment. This usually requires active temperature stabilization. If the photon takes the short path, it is said to be in the state formula_0; if it takes the long path, it is said to be in the state formula_1. If the photon has a non-zero probability to take either path, then it is in a coherent superposition of the two states: formula_2 These coherent superpositions of the two possible states are called qubits and are the basic ingredient of quantum information science. In general, it is easy to vary the phase gained by the photon between the two paths, for example by stretching the fiber, while it is much more difficult to vary the amplitudes, which are therefore fixed, typically at 50%. The created qubit is then formula_3 which covers only a subset of all possible qubits. Measurement in the formula_4 basis is done by measuring the time of arrival of the photon. Measurement in other bases can be achieved by letting the photon go through a second MZ before measurement, though, similar to the state preparation, the possible measurement setups are restricted to only a small subset of possible qubit measurements. Decoherence. Time-bin qubits do not suffer from depolarization or polarization mode dispersion, making them better suited to fiber optics applications than polarization encoding. Photon loss is easily detectable since the absence of photons does not correspond to an allowed state, making time-bin encoding better suited than a photon-number based encoding.
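The following short sketch models a time-bin qubit as an idealised two-level state vector and reproduces the measurement statistics described above; it ignores photon loss and the extra, non-interfering time bins that appear in a real second interferometer, and the phase values are arbitrary assumptions.

```python
import numpy as np

# Idealised two-level model of a time-bin qubit.
# ket0 = early bin (short path), ket1 = late bin (long path).
ket0 = np.array([1.0, 0.0], dtype=complex)
ket1 = np.array([0.0, 1.0], dtype=complex)

phi = np.pi / 3                                     # assumed phase set in the preparation MZ
psi = (ket0 + np.exp(1j * phi) * ket1) / np.sqrt(2)

# Measurement in the {|0>, |1>} basis: just the photon's time of arrival.
p_early, p_late = np.abs(psi) ** 2
print(f"P(early) = {p_early:.2f}, P(late) = {p_late:.2f}")   # 0.50 and 0.50

# Measurement in a superposition basis, as selected by a second MZ with phase theta.
theta = np.pi / 4                                   # assumed analysis phase
plus = (ket0 + np.exp(1j * theta) * ket1) / np.sqrt(2)
p_plus = np.abs(np.vdot(plus, psi)) ** 2            # equals cos^2((phi - theta) / 2)
print(f"P(interference outcome) = {p_plus:.2f}")
```

Scanning the analysis phase theta traces out the interference fringe cos^2((phi − theta)/2), which is how the phase of a time-bin qubit is read out.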
[ { "math_id": 0, "text": "|0 \\rangle" }, { "math_id": 1, "text": "|1 \\rangle" }, { "math_id": 2, "text": "| \\psi \\rangle = \\alpha |0 \\rangle + \\beta |1 \\rangle," }, { "math_id": 3, "text": "| \\psi \\rangle = \\frac{|0 \\rangle +e^{i \\phi} |1 \\rangle}{\\sqrt{2}}," }, { "math_id": 4, "text": "\\{|0 \\rangle, |1 \\rangle\\}" } ]
https://en.wikipedia.org/wiki?curid=6748071
67494738
2 Chronicles 28
Second Book of Chronicles, chapter 28 2 Chronicles 28 is the twenty-eighth chapter of the Second Book of Chronicles of the Old Testament in the Christian Bible or of the second part of the Books of Chronicles in the Hebrew Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had its final shape established in the late fifth or fourth century BCE. This chapter belongs to the section focusing on the kingdom of Judah until its destruction by the Babylonians under Nebuchadnezzar and the beginning of restoration under Cyrus the Great of Persia (2 Chronicles 10 to 36). The focus of this chapter is the reign of Ahaz, king of Judah. Text. This chapter was originally written in the Hebrew language and is divided into 27 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Leningradensis (1008). A fragment containing a part of this chapter was found among the Dead Sea Scrolls, that is, 4Q118 (4QChr; 50–25 BCE) with extant verse 27. There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Ahaz, king of Judah (28:1–4). Ahaz's reign was dominated by the Syro-Ephraimite war, against the kingdoms of Israel and Aram, due to Ahaz's wicked way and refusal to convert. His reign marks the unmitigated decline of the kingdom of Judah. "Ahaz was twenty years old when he began to reign, and he reigned sixteen years in Jerusalem. And he did not do what was right in the eyes of the Lord, as his father David had done," Judah were defeated by enemies (28:5–21). A possible opportunity for Israel's reunification through the northern kingdom's subjugation of Judah was prevented by God's word through the prophet Oded and some chiefs of the Ephraimites (verses 9–11), so the army of Israel treated the captives from Judah humanely (a mirror image of 2 Chronicles 13, in which Judah and Israel have exchanged their roles). Some details of the good treatment by the people of "Samaria" in verses 9–15 apparently underlie the well-known story of "the Good Samaritan" in the Gospel of Luke. While Ahaz sought and waited for Tiglath-pileser's support (not recorded in the Chronicles; the books of Kings note that Tiglath-pileser later accepted the offer, defeated Damascus, deported its citizens, and killed king Rezin), the Edomites (verse 17) and the Philistines (verse 18) had successfully defeated Judah. Verses 20–21 emphasize that Tiglath-pileser did not really come to help, because he extorted heavy tribute from Judah. Apostasy and death of Ahaz (28:22–27). Verses 22–25 record the cultic sins of Ahaz as he worshipped the gods of Damascus, the land that defeated him, and abandoned the worship of YHWH. "And the men who have been mentioned by name rose and took the captives, and with the spoil they clothed all who were naked among them. They clothed them, gave them sandals, provided them with food and drink, and anointed them, and carrying all the feeble among them on donkeys, they brought them to their kinsfolk at Jericho, the city of palm trees. Then they returned to Samaria." Verse 15. This verse displays the strongest parallels with Luke 10 (Luke 10:30, 33–34). Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References.
&lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=67494738
675
Affirming the consequent
Type of fallacious argument (logical fallacy) In propositional logic, affirming the consequent, sometimes called converse error, fallacy of the converse, or confusion of necessity and sufficiency, is a formal fallacy of taking a true conditional statement (e.g., "if the lamp were broken, then the room would be dark") under certain assumptions (there are no other lights in the room, it is nighttime and the windows are closed), and invalidly inferring its converse ("the room is dark, so the lamp must be broken"), even though that statement may not be true under the same assumptions. This arises when the consequent ("the room would be dark") has other possible antecedents (for example, "the lamp is in working order, but is switched off" or "there is no lamp in the room"). Converse errors are common in everyday thinking and communication and can result from, among other causes, communication issues, misconceptions about logic, and failure to consider other causes. The opposite statement, denying the consequent, is called modus tollens and "is" a valid form of argument. Formal description. Affirming the consequent is the action of taking a true statement formula_0 and invalidly concluding its converse formula_1. The name "affirming the consequent" derives from using the consequent, "Q", of formula_0, to conclude the antecedent "P". This fallacy can be summarized formally as formula_2 or, alternatively, formula_3. The root cause of such a logical error is sometimes failure to realize that just because "P" is a "possible" condition for "Q", "P" may not be the "only" condition for "Q", i.e. "Q" may follow from another condition as well. Affirming the consequent can also result from overgeneralizing the experience of many statements "having" true converses. If "P" and "Q" are "equivalent" statements, i.e. formula_4, it "is" possible to infer "P" under the condition "Q". For example, the statements "It is August 13, so it is my birthday" formula_0 and "It is my birthday, so it is August 13" formula_1 are equivalent and both true consequences of the statement "August 13 is my birthday" (an abbreviated form of formula_4). Of the possible forms of "mixed hypothetical syllogisms," two are valid and two are invalid. Affirming the antecedent (modus ponens) and denying the consequent (modus tollens) are valid. Affirming the consequent and denying the antecedent are invalid. Additional examples. Example 1 One way to demonstrate the invalidity of this argument form is with a counterexample with true premises but an obviously false conclusion. For example: If someone lives in San Diego, then they live in California. Joe lives in California. Therefore, Joe lives in San Diego. There are many places to live in California other than San Diego. On the other hand, one can affirm with certainty that "if someone does not live in California" ("non-Q"), then "this person does not live in San Diego" ("non-P"). This is the contrapositive of the first statement, and it must be true if and only if the original statement is true. Example 2 If an animal is a dog, then it has four legs. My cat has four legs. Therefore, my cat is a dog. Here, it is immediately intuitive that any number of other antecedents ("If an animal is a deer...", "If an animal is an elephant...", "If an animal is a moose...", "etc.") can give rise to the consequent ("then it has four legs"), and that it is preposterous to suppose that having four legs "must" imply that the animal is a dog and nothing else. 
This is useful as a teaching example since most people can immediately recognize that the conclusion reached must be wrong (intuitively, a cat cannot be a dog), and that the method by which it was reached must therefore be fallacious. Example 3 In "Catch-22", the chaplain is interrogated for supposedly being "Washington Irving"/"Irving Washington", who has been blocking out large portions of soldiers' letters home. The colonel has found such a letter, but with the Chaplain's name signed. "You can read, though, can't you?" the colonel persevered sarcastically. "The author signed his name." "That's my name there." "Then you wrote it. Q.E.D." "P" in this case is 'The chaplain signs his own name', and "Q" is 'The chaplain's name is written'. The chaplain's name may be written, but he did not necessarily write it, as the colonel falsely concludes. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
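The invalidity of the pattern can also be checked mechanically by enumerating truth assignments. The following minimal Python sketch (standard library only; the helper name implies is an illustrative choice) confirms that the premises P → Q and Q admit a row with P false, while modus ponens admits no counterexample row.

```python
from itertools import product

def implies(p, q):
    """Material conditional: P -> Q is false only when P is true and Q is false."""
    return (not p) or q

# Affirming the consequent: from (P -> Q) and Q, conclude P.
# Modus ponens: from (P -> Q) and P, conclude Q.
affirming_consequent_valid = all(
    p for p, q in product([True, False], repeat=2) if implies(p, q) and q
)
modus_ponens_valid = all(
    q for p, q in product([True, False], repeat=2) if implies(p, q) and p
)

print(affirming_consequent_valid)  # False: P = False, Q = True is a counterexample row
print(modus_ponens_valid)          # True: no row has true premises and a false conclusion
```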
[ { "math_id": 0, "text": "P \\to Q" }, { "math_id": 1, "text": "Q \\to P" }, { "math_id": 2, "text": "(P \\to Q, Q)\\to P" }, { "math_id": 3, "text": "\\frac{P \\to Q, Q}{\\therefore P}" }, { "math_id": 4, "text": "P \\leftrightarrow Q" } ]
https://en.wikipedia.org/wiki?curid=675
6750188
Independence of clones criterion
Property of electoral systems In social choice theory, the independence of (irrelevant) clones criterion says that adding a "clone", i.e. a new candidate very similar to an already-existing candidate, should not spoil the results. It can be considered a very weak form of the independence of irrelevant alternatives (IIA) criterion. A group of candidates are called clones if they are always ranked together, placed side-by-side, by every voter; no voter ranks any of the non-clone candidates between or equal to the clones. In other words, the process of "cloning" a candidate involves taking an existing candidate "C", then replacing them with several candidates "C1", "C2..". who are slotted into the original ballots in the spot where "C" previously was, with the clones being arranged in any order. If a set of clones contains at least two candidates, the criterion requires that deleting one of the clones must not increase or decrease the winning chance of any candidate not in the set of clones. Ranked pairs, the Schulze method, and any system that satisfies independence of irrelevant alternatives such as range voting or majority judgment satisfies the criterion. Instant-runoff voting is generally described as passing, but this may depend on specific details in how the criterion is defined and how tied ranks are handled. The Borda count, minimax, Kemeny–Young, Copeland's method, plurality, and the two-round system all fail the independence of clones criterion. Voting methods that limit the number of allowed ranks also fail the criterion, because the addition of clones can leave voters with insufficient space to express their preferences about other candidates. For similar reasons, ballot formats that impose such a limit may cause an otherwise clone-independent method to fail. This criterion is very weak, as adding a substantially similar (but not quite identical) candidate to a race can still substantially affect the results and cause vote splitting. For example, the center squeeze pathology that affects instant-runoff voting means that several similar (but not identical) candidates competing in the same race will tend to hurt each others' chances of winning. &lt;templatestyles src="Template:TOC limit/styles.css" /&gt; Clone positivity. Election methods that fail independence of clones can be clone negative (the addition of a similar candidate decreases another candidate's chance of winning) or clone positive (the addition of a similar candidate increases another candidate's chance of winning). First-preference plurality is a common example of such a method. The Borda count is an example of a clone-positive method; in fact, the method is "so" clone-positive that any candidate can simply "clone their way to victory", and the winner being the coalition that runs the most clones. Plurality voting is an example of a strongly clone-negative method because of vote-splitting. A method can also fail the independence of clones method without being clone-positive or clone-negative. Finally, methods can suffer from "crowding", which happens where cloning a losing candidate changes the winner from one non-clone to a different non-clone. Copeland's method is an example of a method that exhibits crowding. Examples. Borda count. Consider an election in which there are two candidates, A and B. Suppose the voters have the following preferences: Candidate A would receive 66% Borda points (66%×1 + 34%×0) and B would receive 34% (66%×0 + 34%×1). Thus candidate A would win by a 66% landslide. 
Now suppose supporters of B nominate an additional candidate, B2, that is very similar to B but considered inferior by all voters. For the 66% who prefer A, B continues to be their second choice. For the 34% who prefer B, A continues to be their least preferred candidate. Now the voters' preferences are as follows: Candidate A now has 132% Borda points (66%×2 + 34%×0). B has 134% (66%×1 + 34%×2). B2 has 34% (66%×0 + 34%×1). The nomination of B2 changes the winner from A to B, overturning the landslide, even though the additional information about voters' preferences is redundant due to the similarity of B2 to B. Similar examples can be constructed to show that "given the Borda count, any arbitrarily large landslide can be overturned by adding enough candidates" (assuming at least one voter prefers the landslide loser). For example, to overturn a 90% landslide preference for A over B, add 9 alternatives similar/inferior to B. Then A's score would be 900% (90%×10 + 10%×0) and B's score would be 910% (90%×9 + 10%×10). No knowledge of the voters' preferences is needed to exploit this strategy. Factions could simply nominate as many alternatives as possible that are similar to their preferred alternative. In typical elections, game theory suggests this manipulability of Borda can be expected to be a serious problem, particularly when a significant number of voters can be expected to vote their sincere order of preference (as in public elections, where many voters are not strategically sophisticated; cite Michael R. Alvarez of Caltech). Small minorities typically have the power to nominate additional candidates, and typically it is easy to find additional candidates that are similar. In the context of people running for office, people can take similar positions on the issues, and in the context of voting on proposals, it is easy to construct similar proposals. Game theory suggests that all factions would seek to nominate as many similar candidates as possible since the winner would depend on the number of similar candidates, regardless of the voters' preferences. Copeland. These examples show that Copeland's method violates the Independence of clones criterion. Crowding. Copeland's method is vulnerable against crowding, that is the outcome of the election is changed by adding (non-winning) clones of a non-winning candidate. Assume five candidates A, B, B2, B3 and C and 4 voters with the following preferences: Note, that B, B2 and B3 form a clone set. Clones not nominated. If only one of the clones would compete, preferences would be as follows: The results would be tabulated as follows: Result: C has one win and no defeats, A has one win and one defeat. Thus, C is elected Copeland winner. Clones nominated. Assume, all three clones would compete. The preferences would be the following: The results would be tabulated as follows: Result: Still, C has one win and no defeat, but now A has three wins and one defeat. Thus, A is elected Copeland winner. Conclusion. A benefits from the clones of the candidate he defeats, while C cannot benefit from the clones because C ties with all of them. Thus, by adding two clones of the non-winning candidate B, the winner has changed. Thus, Copeland's method is vulnerable against crowding and fails the independence of clones criterion. Teaming. Copeland's method is also vulnerable against teaming, that is adding clones raises the winning chances of the set of clones. 
Again, assume five candidates A, B, B2, B3 and C and 2 voters with the following preferences: Note, that B, B2 and B3 form a clone set. Clones not nominated. Assume that only one of the clones would compete. The preferences would be as follows: The results would be tabulated as follows: Result: A has one win and no defeats, B has no wins or defeats so A is elected Copeland winner. Clones nominated. If all three clones competed, the preferences would be as follows: The results would be tabulated as follows: Result: A has one win and no defeat, but now B has two wins and no defeat. Thus, B is elected Copeland winner. Conclusion. B benefits from adding inferior clones, while A cannot benefit from the clones because he ties with all of them. So, by adding two clones of B, B changed from loser to winner. Thus, Copeland's method is vulnerable against Teaming and fails the Independence of clones criterion. Plurality voting. Suppose there are two candidates, A and B, and 55% of the voters prefer A over B. A would win the election, 55% to 45%. But suppose the supporters of B also nominate an alternative similar to A, named A2. Assume a significant number of the voters who prefer A over B also prefer A2 over A. When they vote for A2, this reduces A's total below 45%, causing B to win. Range voting. Range voting satisfies the independence of clones criterion. Voters changing their opinion. However, like in every voting system, if voters change their opinions about candidates if similar candidates are added, adding clone candidates can change the outcome of an election. This can be seen by some premises and a simple example: In range voting, to raise the influence of the ballot, the voter can give the maximum possible score to their most preferred alternative and the minimum possible score to their least preferred alternative. In fact, giving the maximum possible score to all candidates that are over some threshold and giving the minimum possible score to the other candidates, will maximize the influence of a ballot on the outcome. However, for this example it is necessary that the voter uses the first simple rule, but not the second. Begin by supposing there are 3 alternatives: A, B and B2, where B2 is similar to B but considered inferior by the supporters of A and B. The voters supporting A would have the order of preference "A&gt;B&gt;B2" so that they give A the maximum possible score, they give B2 the minimum possible score, and they give B a score that's somewhere in between (greater than the minimum). The supporters of B would have the order of preference "B&gt;B2&gt;A", so they give B the maximum possible score, A the minimum score and B2 a score somewhere in between. Assume B narrowly wins the election. Now suppose B2 isn't nominated. The voters supporting A who would have given B a score somewhere in between would now give B the minimum score while the supporters of B will still give B the maximum score, changing the winner to A. This violates the criterion. Note, that if the voters that support B would prefer B2 to B, this result would not hold, since removing B2 would raise the score B receives from his supporters in an analogous way as the score he receives from the supporters of A would decrease. 
The conclusion that can be drawn is that considering all voters voting in a certain special way, range voting creates an incentive to nominate additional alternatives that are similar to one you prefer, but considered clearly inferior by his voters and by the voters of his opponent, since this can be expected to cause the voters supporting the opponent to raise their score of the one you prefer (because it looks better by comparison to the inferior ones), but not his own voters to lower their score. Kemeny–Young method. This example shows that the Kemeny–Young method violates the Independence of clones criterion. Assume five candidates A, B1, B2, B3 and C and 13 voters with the following preferences: Note, that B1, B2 and B3 form a clone set. Clones not nominated. Assume only one of the clones competes. The preferences would be: The Kemeny–Young method arranges the pairwise comparison counts in the following tally table: The ranking scores of all possible rankings are: Result: The ranking B1 &gt; C &gt; A has the highest ranking score. Thus, B1 wins ahead of C and A. Clones nominated. Assume all three clones compete. The preferences would be: The Kemeny–Young method arranges the pairwise comparison counts in the following tally table (with formula_0) : Since the clones have identical results against all other candidates, they have to be ranked one after another in the optimal ranking. More over, the optimal ranking within the clones is unambiguous: B1 &gt; B2 &gt; B3. In fact, for computing the results, the three clones can be seen as one united candidate B, whose wins and defeats are three times as strong as of every single clone. The ranking scores of all possible rankings with respect to that are: Result: The ranking A &gt; B1 &gt; B2 &gt; B3 &gt; C has the highest ranking score. Thus, A wins ahead of the clones Bi and C. Conclusion. A benefits from the two clones of B1 because A's win is multiplied by three. So, by adding two clones of B, B changed from winner to loser. Thus, the Kemeny–Young method is vulnerable against spoilers and fails the independence of clones criterion. Minimax. This example shows that the minimax method violates the Independence of clones criterion. Assume four candidates A, B1, B2 and B3 and 9 voters with the following preferences: Note, that B1, B2 and B3 form a clone set. Since all preferences are strict rankings (no equals are present), all three minimax methods (winning votes, margins and pairwise opposite) elect the same winners. Clones not nominated. Assume only one of the clones would compete. The preferences would be: The results would be tabulated as follows: Result: B is the Condorcet winner. Thus, B is elected minimax winner. Clones nominated. Now assume all three clones would compete. The preferences would be as follows: The results would be tabulated as follows: Result: A has the closest biggest defeat. Thus, A is elected minimax winner. Conclusion. By adding clones, the Condorcet winner B1 becomes defeated. All three clones beat each other in clear defeats. A benefits from that. So, by adding two clones of B, B changed from winner to loser. Thus, the minimax method is vulnerable against spoilers and fails the independence of clones criterion. STAR voting. STAR voting consists of an automatic runoff between the two candidates with the highest rated scores. This example involves clones with nearly identical scores, and shows teaming. Clones not nominated. The finalists are Amy and Brian, and Brian beats Amy pairwise and thus wins. Clones nominated. 
The finalists are Amy and her clone, and Amy's clone wins. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
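The Borda example above can be reproduced with a few lines of code. The following Python sketch is illustrative only: the helper borda_scores and the representation of ballots as a mapping from a ranking tuple to a vote share are choices made here, not part of any particular voting library.

```python
def borda_scores(ballots):
    """Borda count: with c candidates, a ballot gives c-1 points to its first
    choice, c-2 to its second choice, and so on. `ballots` maps a ranking
    (tuple, best first) to the fraction of voters casting that ranking."""
    candidates = {name for ranking in ballots for name in ranking}
    scores = {name: 0.0 for name in candidates}
    c = len(candidates)
    for ranking, weight in ballots.items():
        for position, name in enumerate(ranking):
            scores[name] += weight * (c - 1 - position)
    return scores

# The article's example: a 66% landslide for A over B...
without_clone = {("A", "B"): 0.66, ("B", "A"): 0.34}
# ...which is overturned once B's supporters nominate a similar, inferior clone B2.
with_clone = {("A", "B", "B2"): 0.66, ("B", "B2", "A"): 0.34}

print(borda_scores(without_clone))  # A: 0.66, B: 0.34 -> A wins
print(borda_scores(with_clone))     # A: 1.32, B: 1.34, B2: 0.34 -> B now wins
```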
[ { "math_id": 0, "text": "i \\in \\{1, 2, 3\\}" } ]
https://en.wikipedia.org/wiki?curid=6750188
6750203
Spinor spherical harmonics
Special functions on a sphere In quantum mechanics, the spinor spherical harmonics (also known as spin spherical harmonics, spinor harmonics and Pauli spinors) are special functions defined over the sphere. The spinor spherical harmonics are the natural spinor analog of the vector spherical harmonics. While the standard spherical harmonics are a basis of eigenfunctions for the orbital angular momentum operator, the spinor spherical harmonics are a basis of eigenfunctions for the total angular momentum operator (orbital angular momentum plus spin). These functions are used in analytical solutions to the Dirac equation in a radial potential. The spinor spherical harmonics are sometimes called Pauli central field spinors, in honor of Wolfgang Pauli, who employed them in the solution of the hydrogen atom with spin–orbit interaction. Properties. The spinor spherical harmonics "Y""l, s, j, m" are the spinor eigenstates of the total angular momentum operator squared: formula_0 where j = l + s; here j, l, and s are the (dimensionless) total, orbital and spin angular momentum operators, "j" is the total azimuthal quantum number and "m" is the total magnetic quantum number. Under a parity operation, we have formula_1 For spin-1/2 systems, they are given in matrix form by formula_2 where formula_3 are the usual spherical harmonics. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
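As a concrete illustration, the matrix expression above can be transcribed directly into code. The sketch below is an assumption-laden example rather than a reference implementation: it relies on SciPy's scipy.special.sph_harm (whose argument order is (m, l, azimuth, polar), and which newer SciPy releases deprecate in favour of a renamed routine with a different convention), and the helper names Y and spinor_sph_harm are introduced here purely for illustration.

```python
import numpy as np
from scipy.special import sph_harm  # note SciPy's order: sph_harm(m, l, azimuth, polar)

def Y(l, m, polar, azimuth):
    """Ordinary spherical harmonic Y_l^m, taken to be zero when |m| > l."""
    if abs(m) > l:
        return 0.0
    return sph_harm(m, l, azimuth, polar)

def spinor_sph_harm(l, j, m, polar, azimuth):
    """Two-component spinor spherical harmonic Y_{l, 1/2, j, m} for j = l +/- 1/2,
    transcribing the matrix expression quoted above (using j -/+ 1/2 = l)."""
    m_up, m_dn = int(round(m - 0.5)), int(round(m + 0.5))
    if np.isclose(j, l + 0.5):      # upper sign
        up = +np.sqrt(l + m + 0.5) * Y(l, m_up, polar, azimuth)
        dn = np.sqrt(l - m + 0.5) * Y(l, m_dn, polar, azimuth)
    elif np.isclose(j, l - 0.5):    # lower sign
        up = -np.sqrt(l - m + 0.5) * Y(l, m_up, polar, azimuth)
        dn = np.sqrt(l + m + 0.5) * Y(l, m_dn, polar, azimuth)
    else:
        raise ValueError("j must equal l + 1/2 or l - 1/2")
    return np.array([up, dn]) / np.sqrt(2 * l + 1)

# Example evaluation: l = 1, j = 3/2, m = 1/2 at polar angle pi/3, azimuth 0.
print(spinor_sph_harm(1, 1.5, 0.5, np.pi / 3, 0.0))
```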
[ { "math_id": 0, "text": "\n\\begin{align}\n \\mathbf j^2 Y_{l, s, j, m} &= j (j + 1) Y_{l, s, j, m} \\\\\n \\mathrm j_{\\mathrm z} Y_{l, s, j, m} &= m Y_{l, s, j, m}\\;;\\;m=-j,-(j-1),\\cdots,j-1,j\\\\\n\\mathbf l^2 Y_{l, s, j, m} &= l (l + 1) Y_{l, s, j, m}\\\\\n\\mathbf s^2 Y_{l, s, j, m} &= s (s + 1) Y_{l, s, j, m}\n\\end{align}\n" }, { "math_id": 1, "text": "\n P Y_{l, s j, m}\n = (-1)^{l}Y_{l,s, j, m}.\n" }, { "math_id": 2, "text": "\n Y_{l, \\pm\\frac{1}{2}, j, m}\n = \\frac{1}{\\sqrt{2 \\bigl(j \\mp \\frac{1}{2}\\bigr) + 1}}\n \\begin{pmatrix}\n \\pm \\sqrt{j \\mp \\frac{1}{2} \\pm m + \\frac{1}{2}} Y_{l}^{m - \\frac{1}{2}} \\\\\n \\sqrt{j \\mp \\frac{1}{2} \\mp m + \\frac{1}{2}} Y_{l}^{m + \\frac{1}{2}}\n \\end{pmatrix}.\n" }, { "math_id": 3, "text": "Y_{l}^{m}" } ]
https://en.wikipedia.org/wiki?curid=6750203
67504737
Laser speckle contrast imaging
Laser speckle contrast imaging (LSCI), also called laser speckle imaging (LSI), is an imaging modality based on the analysis of the blurring of a speckle pattern. In LSCI, a rough surface is illuminated over a wide field by a coherent light source, and the resulting laser speckle pattern, produced by the interference of the coherent light, is imaged with a photodetector array such as a CCD camera or a CMOS sensor. In biomedical use, the coherent light is typically in the red or near-infrared region to ensure a higher penetration depth. When the scattering particles move over time, the interference of the coherent light fluctuates, which leads to intensity variations at the photodetector; these intensity changes carry information about the motion of the scattering particles. By imaging the speckle pattern with a finite exposure time, areas containing moving scattering particles appear blurred. Development. The first practical application of speckle pattern reduction to mapping retinal blood flow was reported by Fercher and Briers in 1982. The technique was called single-exposure speckle photography at that time. Due to the lack of sufficient digital techniques in the 1980s, single-exposure speckle photography required a two-step process, which made it inconvenient and inefficient for biomedical research, especially in clinical use. With the development of digital techniques, including CCD cameras, CMOS sensors, and computers, in the 1990s Briers and Webster improved single-exposure speckle photography so that photographs were no longer needed to capture images. The improved technique is called laser speckle contrast imaging (LSCI) and can directly measure the contrast of the speckle pattern. A typical instrumental setup for laser speckle contrast imaging contains only a laser source, a camera, a diffuser, a lens, and a computer. Owing to this simple setup, LSCI can easily be integrated into other systems. Concept. Speckle theory. Contrast. For a fully developed speckle pattern, which forms when completely coherent and polarized light illuminates a static medium, the contrast (K), ranging from 0 to 1, is defined as the ratio between the standard deviation and the mean intensity: formula_0 The intensity distribution of the speckle pattern is used to compute the contrast value. Autocorrelation functions. Autocorrelation functions of the electric field are used to relate the contrast to the motion of the scatterers, because the intensity fluctuations are produced by changes in the scattered electric field. Here E(t) is the electric field over time, E* is the complex conjugate of the electric field, and formula_1 is the autocorrelation delay time: formula_2 Bandyopadhyay et al. showed that the reduced intensity variance of the speckle pattern is related to formula_3. Therefore, the contrast can be written as formula_4 where T is the exposure time. The normalization constant formula_5 takes into account the loss of correlation due to the detector pixel size and the depolarization of the light in the medium. Motion distributions. The motion of dynamic scatterers can be classified into two categories: ordered motion and disordered motion. Ordered motion corresponds to the ordered flow of scatterers, while disordered motion is caused by thermal effects.
Historically, the total motion of dynamic scatterers was treated as Brownian motion, whose approximate velocity distribution can be described by a Lorentzian profile. However, the ordered motion of dynamic scatterers follows a Gaussian distribution. When the motion distribution is taken into account, the contrast equation related to the autocorrelation can be updated. The updated equations are given below: formula_6 is the contrast function for the Lorentzian profile and formula_7 is the contrast function for the Gaussian profile, where formula_1 is the decorrelation time. Both equations can be used in contrast measurement, and some scientists also use contrast equations that combine the two. However, what the correct theoretical contrast equation should be is still under investigation. formula_8 formula_9 Normalization constants. formula_5 is a normalization constant that varies between LSCI systems; its value is formula_10 1, and the most common method to determine it uses the following equation. formula_5 accounts for the instability and the maximum contrast of each LSCI system. formula_11 Effect of static scatterers. When static scatterers are present in the assessed sample, the speckle contrast they produce remains constant. By adding static scatterers, the contrast equation can be updated again: formula_12 "*The above equation does not account for the motion distributions." P1 and P2 are two constants that range from 0 to 1; they are determined by fitting this equation to the actual experimental data. Scatterer velocity determination. The relationship between the velocity of the scatterers and the decorrelation time is as follows: the velocity of scatterers, such as blood flow, is inversely proportional to the decorrelation time, where formula_13 is the laser light wavelength. formula_14 Contrast processing algorithm. The methods used to compute the contrast of speckle patterns can be classified into three categories: s-K (spatial), t-K (temporal), and st-K (spatio-temporal). To compute the spatial contrast, the raw laser speckle images are divided into small elements, each corresponding to a block of formula_15 pixels; the value of formula_16 is determined by the speckle size. The intensities of all pixels in each element are summed and averaged to give a mean intensity value (μ), and the final contrast value of the element is calculated from the mean intensity and the actual intensity of each pixel. To mitigate the resolution limitation, the temporal contrast of the speckle pattern is also computed; the method is the same as for the spatial contrast, but applied in the temporal domain. The combination of spatial and temporal contrast computation is the spatio-temporal contrast processing algorithm, which is the most commonly used one. Applications. Compared with other existing imaging technologies, laser speckle contrast imaging has several obvious advantages: it uses simple and cost-effective instrumentation to deliver imaging with excellent spatial and temporal resolution. Due to these strengths, laser speckle contrast imaging has been used to map blood flow for decades. The use of LSCI has been extended to many subjects in the biomedical field, including but not limited to rheumatology, burns, dermatology, neurology, gastrointestinal tract surgery, dentistry, and cardiovascular research.
LSCI can easily be adopted into other systems for clinical full-field monitoring, measurement, and investigation of living processes on an almost real-time scale. However, LSCI still has some limitations: it can only be used to map relative blood flow rather than to measure absolute blood flow. Due to the complex vascular anatomy, the maximum detection depth of LSCI is currently limited to about 900 micrometers. The scattering and absorption effects of red blood cells can influence the contrast value, and the complex physics behind the measurement makes quantitative measurement difficult.
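The spatial (s-K) contrast computation described above can be sketched in a few lines of NumPy. The window size, the synthetic exponential test frame, and the function name spatial_contrast are illustrative assumptions; real data would come from the camera, and a sliding per-pixel window is also commonly used instead of the non-overlapping tiling shown here.

```python
import numpy as np

def spatial_contrast(image, n=7):
    """s-K processing sketch: tile the raw speckle image into n-by-n blocks and
    return K = sigma / mean for each block (one speckle-contrast value per element)."""
    h, w = image.shape
    h, w = h - h % n, w - w % n                       # crop so the image tiles evenly
    blocks = image[:h, :w].reshape(h // n, n, w // n, n).swapaxes(1, 2)
    mean = blocks.mean(axis=(2, 3))
    std = blocks.std(axis=(2, 3))
    return std / mean

# Synthetic stand-in for a raw speckle frame; fully developed speckle has an
# exponential intensity distribution, for which K is close to 1.
rng = np.random.default_rng(0)
frame = rng.exponential(scale=100.0, size=(256, 256))
K_map = spatial_contrast(frame, n=7)
print(K_map.shape, float(K_map.mean()))
```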
[ { "math_id": 0, "text": "K=\\frac{\\sigma}{\\langle I \\rangle} " }, { "math_id": 1, "text": "\\tau" }, { "math_id": 2, "text": "g_1(\\tau)=\\frac{\\langle E(t)\\cdot E^*(t+\\tau)\\rangle}{\\langle E(t)\\cdot E^*(t) \\rangle}" }, { "math_id": 3, "text": "g_1" }, { "math_id": 4, "text": "K(T)^2 = \\frac{2\\beta}{T}\\int\\limits_{0}^{T} |g_1(\\tau)|^2 \\left ( 1- \\frac{\\tau}{T} \\right ) \\mathrm d\\tau" }, { "math_id": 5, "text": "\\beta" }, { "math_id": 6, "text": "K^L" }, { "math_id": 7, "text": "K^G" }, { "math_id": 8, "text": "K^L(T)^2=\\beta\\begin{bmatrix} \\frac{\\tau_{cl}}{T}+\\frac{(\\tau_{cl})^2}{2T^2}(e^{-2T/\\tau_{cl}}-1) \\end{bmatrix}" }, { "math_id": 9, "text": "K^G(T)^2=\\beta\\begin{bmatrix} {\\frac{\\tau_{cg}}{T}}\\sqrt{\\frac{\\pi}{2}} \\mathrm{erf} (\\sqrt{2}T/\\tau_{cg})+{\\frac{\\tau_{cg}^2}{2T^2}}(e^{-2T^2/\\tau_{cg}^2}-1) \\end{bmatrix}" }, { "math_id": 10, "text": "\\leq" }, { "math_id": 11, "text": "\\beta=\\lim_{T \\to 0}K(T)" }, { "math_id": 12, "text": "K(T)^2 = \\frac{2\\beta}{T}\\int\\limits_{0}^{T} P_1^2|g_1(\\tau)|^2 \\left ( 1- \\frac{\\tau}{T} \\right )P_2^2 \\mathrm d\\tau" }, { "math_id": 13, "text": "\\lambda" }, { "math_id": 14, "text": "V=\\frac{\\lambda}{2\\pi \\tau_c}" }, { "math_id": 15, "text": "n\\times n" }, { "math_id": 16, "text": "n" }, { "math_id": 17, "text": "d_\\min\\thickapprox 1.2(1+M)\\lambda f/\\#" } ]
https://en.wikipedia.org/wiki?curid=67504737
67509773
Fibonacci group
In mathematics, for a natural number formula_0, the "n"th Fibonacci group, denoted formula_1 or sometimes formula_2, is defined by "n" generators formula_3 and "n" relations: formula_4 formula_5 formula_6 formula_7 formula_8 formula_9. These groups were introduced by John Conway in 1965. The group formula_1 is of finite order for formula_10 and infinite order for formula_11 and formula_12. The infinitude of formula_13 was proved by computer in 1990. Kaplansky's unit conjecture. From a group formula_14 and a field formula_15 (or more generally a ring), the group ring formula_16 is defined as the set of all finite formal formula_15-linear combinations of elements of formula_14; that is, an element formula_17 of formula_16 is of the form formula_18, where formula_19 for all but finitely many formula_20 so that the linear combination is finite. The (size of the) "support" of an element formula_21 in formula_16, denoted formula_22, is the number of elements formula_20 such that formula_23, i.e. the number of terms in the linear combination. The ring structure of formula_16 is the "obvious" one: the linear combinations are added "component-wise", i.e. as formula_24, whose support is also finite, and multiplication is defined by formula_25, whose support is again finite, and which can be written in the form formula_26 as formula_27. "Kaplansky's unit conjecture" states that given a field formula_15 and a torsion-free group formula_14 (a group in which all non-identity elements have infinite order), the group ring formula_16 does not contain any non-trivial units – that is, if formula_28 in formula_16 then formula_29 for some formula_30 and formula_20. Giles Gardam disproved this conjecture in February 2021 by providing a counterexample. He took formula_31, the finite field with two elements, and he took formula_14 to be the 6th Fibonacci group formula_32. The non-trivial unit formula_33 he discovered has formula_34. The 6th Fibonacci group formula_32 has also been variously referred to as the "Hantzsche-Wendt group", the "Passman group", and the "Promislow group". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
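For small n the presentation can be fed to a computer algebra system. The sketch below assumes SymPy's finitely presented group tools (sympy.combinatorics.fp_groups.FpGroup, whose order() runs a coset enumeration); the helper fibonacci_group is defined here only for illustration. For n = 5 the enumeration should recover the known fact that F(2,5) is cyclic of order 11.

```python
from sympy.combinatorics.free_groups import free_group
from sympy.combinatorics.fp_groups import FpGroup

def fibonacci_group(n):
    """Finitely presented Fibonacci group F(2, n): generators a1..an with
    relations a_i a_{i+1} = a_{i+2}, indices taken modulo n."""
    F = free_group(", ".join(f"a{i}" for i in range(1, n + 1)))
    free, gens = F[0], F[1:]
    relators = [gens[i] * gens[(i + 1) % n] * gens[(i + 2) % n] ** -1 for i in range(n)]
    return FpGroup(free, relators)

G = fibonacci_group(5)
# Coset enumeration should confirm that F(2, 5) is finite, of order 11.
print(G.order())
```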
[ { "math_id": 0, "text": "n \\ge 2" }, { "math_id": 1, "text": "F(2,n)" }, { "math_id": 2, "text": "F(n)" }, { "math_id": 3, "text": "a_1, a_2, \\dots, a_n" }, { "math_id": 4, "text": "a_1 a_2 = a_3," }, { "math_id": 5, "text": "a_2 a_3 = a_4," }, { "math_id": 6, "text": "\\dots" }, { "math_id": 7, "text": "a_{n-2} a_{n-1} = a_n," }, { "math_id": 8, "text": "a_{n-1}a_n = a_1," }, { "math_id": 9, "text": "a_n a_1 = a_2" }, { "math_id": 10, "text": "n=2,3,4,5,7" }, { "math_id": 11, "text": "n = 6" }, { "math_id": 12, "text": "n \\ge 8" }, { "math_id": 13, "text": "F(2,9)" }, { "math_id": 14, "text": "G" }, { "math_id": 15, "text": "K" }, { "math_id": 16, "text": "K[G]" }, { "math_id": 17, "text": "a" }, { "math_id": 18, "text": "a = \\sum_{g \\in G} \\lambda_g g" }, { "math_id": 19, "text": "\\lambda_g = 0" }, { "math_id": 20, "text": "g \\in G" }, { "math_id": 21, "text": "a = \\sum\\nolimits_g \\lambda_g g" }, { "math_id": 22, "text": "|\\operatorname{supp} a\\,|" }, { "math_id": 23, "text": "\\lambda_g \\neq 0" }, { "math_id": 24, "text": "\\sum\\nolimits_g \\lambda_g g + \\sum\\nolimits_g \\mu_g g = \\sum\\nolimits_g (\\lambda_g \\!+\\! \\mu_g) g" }, { "math_id": 25, "text": "\\left(\\sum\\nolimits_g \\lambda_g g\\right)\\!\\!\\left(\\sum\\nolimits_h \\mu_h h\\right) = \\sum\\nolimits_{g,h} \\lambda_g\\mu_h \\, gh" }, { "math_id": 26, "text": "\\sum_{x \\in G} \\nu_x x" }, { "math_id": 27, "text": "\\sum_{x \\in G}\\Bigg(\\sum_{g,h \\in G \\atop gh = x} \\lambda_g\\mu_h \\!\\Bigg) x" }, { "math_id": 28, "text": "ab = 1" }, { "math_id": 29, "text": "a = kg" }, { "math_id": 30, "text": "k \\in K" }, { "math_id": 31, "text": "K = \\mathbb{F}_2" }, { "math_id": 32, "text": "F(2,6)" }, { "math_id": 33, "text": "\\alpha \\in \\mathbb{F}_2[F(2, 6)]" }, { "math_id": 34, "text": "|\\operatorname{supp} \\alpha\\,| = |\\operatorname{supp} \\alpha^{-1}| = 21" } ]
https://en.wikipedia.org/wiki?curid=67509773
67511458
2 Chronicles 33
Second Book of Chronicles, chapter 33 2 Chronicles 33 is the thirty-third chapter of the Second Book of Chronicles the Old Testament of the Christian Bible or of the second part of the Books of Chronicles in the Hebrew Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter belongs to the section focusing on the kingdom of Judah until its destruction by the Babylonians under Nebuchadnezzar and the beginning of restoration under Cyrus the Great of Persia (2 Chronicles 10 to 36). It contains the regnal accounts of Manasseh and Amon, the kings of Judah. Text. This chapter was originally written in the Hebrew language and is divided into 25 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Manasseh, king of Judah (33:1–20). Historically, Manasseh was regarded as an 'exceptionally skilful ruler', reigned on David's throne for 55 years, longer than any other king of Israel and Judah. The books of Kings portray him as the most godless king of all and extensively list his disgraceful behavior which mostly contributed to the downfall of Judah (2 Kings 21:1–18), but the Chronicler records his repentance during his deportation to Babylon, that when he returned to Jerusalem, he removed all foreign images, so the long reign was a result of this God-fearing behavior. The Assyrians' treatment of Manasseh (verse 11) was similar to the Babylonian's treatment of Jehoiachin in later date (Ezekiel 19:9; 2 Chronicles 36:10). In his distress, Manasseh did as instructed in the temple-consecration prayer (cf. 2 Chronicles 6:36–39; 7:14), that he humbled himself and prayed to God, so . "Manasseh was twelve years old when he began to reign, and he reigned fifty-five years in Jerusalem." Verse 1. Two seals appeared on the antiquities market in Jerusalem (first reported in 1963), both bearing the inscription, “Belonging to Manasseh, son of the king.” As the term "son of the king" refers to royal princes, whether they eventually ascended the throne or not, the seal is considered to be Manasseh's during his co-regency with his father. It bears the same iconography of the Egyptian winged scarab as the seals attributed to King Hezekiah, recalling the alliance between Hezekiah and Egypt against the Assyrians (; ), and may symbolize 'a desire to permanently unite the northern and southern kingdoms together with God's divine blessing'. Jar handles bearing a stamp with a winged-beetle and the phrase LMLK ("to the king"), along with the name of a city, have been unearthed throughout ancient Judah as well as in a large administrative complex discovered outside of the old city of Jerusalem and used to hold olive oil, food, wine, etc – goods that were paid as taxes to the king, dated to the reigns of Hezekiah (cf. "Hezekiah's storehouses"; ) and Manasseh. These artifacts provide the evidence of 'a complex and highly-organized tax system in Judah' from the time of Hezekiah extending into the time of Manasseh, among others to pay the tribute to the Assyrians. 
"11 Therefore the Lord brought upon them the captains of the army of the king of Assyria, who took Manasseh with hooks, bound him with bronze fetters, and carried him off to Babylon." "12 Now when he was in affliction, he implored the Lord his God, and humbled himself greatly before the God of his fathers, 13 and prayed to Him; and He received his entreaty, heard his supplication, and brought him back to Jerusalem into his kingdom." "Then Manasseh knew that the Lord was God." Verse 11–13. Manasseh was thought to have joined a widespread rebellion (or at least been suspected of having supported it) led by Shamash-shum-ukin, the king of Babylon, against his brother, the Assyrian king Ashurbanipal, in an attempt to take the empire for himself, in 652-648 BCE. Amon, king of Judah (33:21–25). The record of Amon's rule is brief (as also in 2 Kings 21) and he is mainly portrayed as a godless king. "Amon was twenty-two years old when he began to reign, and he reigned two years in Jerusalem." "24 Then his servants conspired against him, and killed him in his own house. 25 But the people of the land executed all those who had conspired against King Amon. Then the people of the land made his son Josiah king in his place." Verse 24–25. The assassination of Amon is thought to be related to the rise of an extensive anti-Assyrian rebellion (recorded in Assyrian sources) organized in ʻEber ha-Nahar, the region between the Euphrates and the Mediterranean Sea, against the rule of Ashurbanipal, and at the same time, an attempt of Egypt under Psamtik I to conquer Assyrian territories in the southern Palestine. The faction in Jerusalem that wanted to throw off the yoke of Assyrian, succeeded in killing Amon who was pro-Assyrian, even as worshipping Assyrian gods. However, Assyrian army soon arrived in Syria and Palestine and suppressed the revolt with 'all the usual severity' (all inhabitants were killed or exiled to Assyria'), so the forces in Judah, who wanted to prevent a military clash with Assyria, exterminated the anti-Assyrian nobles. Extrabiblical documentation on Manasseh. In rabbinic literature on "Isaiah" and Christian pseudepigrapha "Ascension of Isaiah", Manasseh is accused of executing the prophet Isaiah, who was identified as the maternal grandfather of Manasseh. Manasseh is mentioned in chapter 21 of 1 Meqabyan, a book considered canonical in the Ethiopian Orthodox Tewahedo Church, where he is used as an example of ungodly king. Manasseh and the kingdom of Judah are only mentioned in the list of subservient kings/states in Assyrian inscriptions of Esarhaddon and Ashurbanipal. Manasseh is listed in annals of Esarhaddon as one of the 22 vassal kings from the area of the Levant and the islands whom the Assyrian king conscripted to deliver timber and stone for the rebuilding of his palace at Nineveh. Esarhaddon's son and successor, Ashurbanipal, mentions "Manasseh, King of Judah" in his annals, which are recorded on the "Rassam cylinder" (or "Rassam Prism", now in the British Museum), named after Hormuzd Rassam, who discovered it in the North Palace of Nineveh in 1854. The ten-faced, cuneiform cylinder contains a record of Ashurbanipal's campaigns against Egypt and the Levant, that involved 22 kings "from the seashore, the islands and the mainland", who are called "servants who belong to me," clearly denoting them as Assyrian vassals. Manasseh was one of the kings who 'brought tribute to Ashurbanipal and kissed his feet'. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. 
&lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=67511458
6751583
Bochner space
Type of topological space In mathematics, Bochner spaces are a generalization of the concept of formula_0 spaces to functions whose values lie in a Banach space which is not necessarily the space formula_1 or formula_2 of real or complex numbers. The space formula_3 consists of (equivalence classes of) all Bochner measurable functions formula_4 with values in the Banach space formula_5 whose norm formula_6 lies in the standard formula_0 space. Thus, if formula_5 is the set of complex numbers, it is the standard Lebesgue formula_0 space. Almost all standard results on formula_0 spaces do hold on Bochner spaces too; in particular, the Bochner spaces formula_3 are Banach spaces for formula_7 Bochner spaces are named for the mathematician Salomon Bochner. Definition. Given a measure space formula_8 a Banach space formula_9 and formula_10 the Bochner space formula_11 is defined to be the Kolmogorov quotient (by equality almost everywhere) of the space of all Bochner measurable functions formula_12 such that the corresponding norm is finite: formula_13 formula_14 In other words, as is usual in the study of formula_0 spaces, formula_11 is a space of equivalence classes of functions, where two functions are defined to be equivalent if they are equal everywhere except upon a formula_15-measure zero subset of formula_16 As is also usual in the study of such spaces, it is usual to abuse notation and speak of a "function" in formula_11 rather than an equivalence class (which would be more technically correct). Applications. Bochner spaces are often used in the functional analysis approach to the study of partial differential equations that depend on time, e.g. the heat equation: if the temperature formula_17 is a scalar function of time and space, one can write formula_18 to make formula_4 a family formula_19 (parametrized by time) of functions of space, possibly in some Bochner space. Application to PDE theory. Very often, the space formula_20 is an interval of time over which we wish to solve some partial differential equation, and formula_15 will be one-dimensional Lebesgue measure. The idea is to regard a function of time and space as a collection of functions of space, this collection being parametrized by time. For example, in the solution of the heat equation on a region formula_21 in formula_22 and an interval of time formula_23 one seeks solutions formula_24 with time derivative formula_25 Here formula_26 denotes the Sobolev Hilbert space of once-weakly differentiable functions with first weak derivative in formula_27 that vanish at the boundary of Ω (in the sense of trace, or, equivalently, are limits of smooth functions with compact support in Ω); formula_28 denotes the dual space of formula_29 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
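As a concrete, low-tech illustration of the norm in the definition, the following NumPy sketch approximates an L^2(T; X) norm by composing the X-norm with an ordinary integral over T. The grid resolution and the example function u(t) = (cos 2πt, sin 2πt) are arbitrary choices made purely for illustration.

```python
import numpy as np

# Discretize T = [0, 1] with Lebesgue measure and take X = R^2 with the Euclidean norm.
t = np.linspace(0.0, 1.0, 10_001)
u = np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)], axis=1)   # u(t) in R^2

pointwise_norm = np.linalg.norm(u, axis=1)                 # t -> ||u(t)||_X, identically 1 here
p = 2
bochner_norm = (np.mean(pointwise_norm ** p)) ** (1 / p)   # Riemann-sum approximation of the integral

print(bochner_norm)   # approximately 1.0, since ||u(t)||_X = 1 for every t
```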
[ { "math_id": 0, "text": "L^p" }, { "math_id": 1, "text": "\\R" }, { "math_id": 2, "text": "\\Complex" }, { "math_id": 3, "text": "L^p(X)" }, { "math_id": 4, "text": "f" }, { "math_id": 5, "text": "X" }, { "math_id": 6, "text": "\\|f\\|_X" }, { "math_id": 7, "text": "1 \\leq p \\leq \\infty." }, { "math_id": 8, "text": "(T, \\Sigma; \\mu)," }, { "math_id": 9, "text": "\\left(X, \\|\\,\\cdot\\,\\|_X\\right)" }, { "math_id": 10, "text": "1 \\leq p \\leq \\infty," }, { "math_id": 11, "text": "L^p(T; X)" }, { "math_id": 12, "text": "u : T \\to X" }, { "math_id": 13, "text": "\\|u\\|_{L^p(T; X)} := \\left( \\int_{T} \\| u(t) \\|_{X}^{p} \\, \\mathrm{d} \\mu (t) \\right)^{1/p} < + \\infty \\mbox{ for } 1 \\leq p < \\infty," }, { "math_id": 14, "text": "\\|u\\|_{L^{\\infty}(T; X)} := \\mathrm{ess\\,sup}_{t \\in T} \\|u(t)\\|_{X} < + \\infty." }, { "math_id": 15, "text": "\\mu" }, { "math_id": 16, "text": "T." }, { "math_id": 17, "text": "g(t, x)" }, { "math_id": 18, "text": "(f(t))(x) := g(t,x)" }, { "math_id": 19, "text": "f(t)" }, { "math_id": 20, "text": "T" }, { "math_id": 21, "text": "\\Omega" }, { "math_id": 22, "text": "\\R^n" }, { "math_id": 23, "text": "[0, T]," }, { "math_id": 24, "text": "u \\in L^2\\left([0, T]; H_0^1(\\Omega)\\right)" }, { "math_id": 25, "text": "\\frac{\\partial u}{\\partial t} \\in L^2 \\left([0, T]; H^{-1}(\\Omega)\\right)." }, { "math_id": 26, "text": "H_0^1(\\Omega)" }, { "math_id": 27, "text": "L^2(\\Omega)" }, { "math_id": 28, "text": "H^{-1} (\\Omega)" }, { "math_id": 29, "text": "H_0^1(\\Omega)." }, { "math_id": 30, "text": "t" } ]
https://en.wikipedia.org/wiki?curid=6751583
67516554
Monogamy of entanglement
Principle in quantum information science In quantum physics, the "monogamy" of quantum entanglement refers to the fundamental property that it cannot be freely shared between arbitrarily many parties. In order for two qubits "A" and "B" to be maximally entangled, they must not be entangled with any third qubit "C" whatsoever. Even if "A" and "B" are not maximally entangled, the degree of entanglement between them constrains the degree to which either can be entangled with "C". In full generality, for formula_0 qubits formula_1, monogamy is characterized by the Coffman–Kundu–Wootters (CKW) inequality, which states that formula_2 where formula_3 is the density matrix of the substate consisting of qubits formula_4 and formula_5 and formula_6 is the "tangle", a quantification of bipartite entanglement equal to the square of the concurrence. Monogamy, which is closely related to the no-cloning property, is purely a feature of quantum correlations, and has no classical analogue. Supposing that two classical random variables "X" and "Y" are correlated, we can copy, or "clone", "X" to create arbitrarily many random variables that all share precisely the same correlation with "Y". If we let "X" and "Y" be entangled quantum states instead, then "X" cannot be cloned, and this sort of "polygamous" outcome is impossible. The monogamy of entanglement has broad implications for applications of quantum mechanics ranging from black hole physics to quantum cryptography, where it plays a pivotal role in the security of quantum key distribution. Proof. The monogamy of bipartite entanglement was established for tripartite systems in terms of concurrence by Coffman, Kundu, and Wootters in 2000. In 2006, Osborne and Verstraete extended this result to the multipartite case, proving the CKW inequality. Example. For the sake of illustration, consider the three-qubit state formula_7 consisting of qubits "A", "B", and "C". Suppose that "A" and "B" form a (maximally entangled) EPR pair. We will show that: formula_8 for some valid quantum state formula_9. By the definition of entanglement, this implies that "C" must be completely disentangled from "A" and "B". When measured in the standard basis, "A" and "B" collapse to the states formula_10 and formula_11 with probability formula_12 each. It follows that: formula_13 for some formula_14 such that formula_15. We can rewrite the states of "A" and "B" in terms of diagonal basis vectors formula_16 and formula_17: formula_18 formula_19 Being maximally entangled, "A" and "B" collapse to one of the two states formula_20 or formula_21 when measured in the diagonal basis. The probability of observing outcomes formula_22 or formula_23 is zero. Therefore, according to the equation above, it must be the case that formula_24 and formula_25. It follows immediately that formula_26 and formula_27. We can rewrite our expression for formula_28 accordingly: formula_29 formula_30 formula_31 This shows that the original state can be written as a product of a pure state in "AB" and a pure state in "C", which means that the EPR state in qubits "A" and "B" is not entangled with the qubit "C". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
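The CKW inequality can be checked numerically for small systems. The NumPy sketch below does this for the three-qubit W state; the choice of state, the helper names, and the use of Wootters' spin-flip construction for the concurrence are assumptions of this illustration rather than anything prescribed by the sources.

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])
YY = np.kron(sy, sy)  # spin-flip operator used in Wootters' concurrence

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix (4x4)."""
    rho_tilde = YY @ rho.conj() @ YY
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde).real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Three-qubit W state (|001> + |010> + |100>)/sqrt(3), stored as a (2,2,2) tensor psi[a,b,c].
psi = np.zeros((2, 2, 2), dtype=complex)
psi[0, 0, 1] = psi[0, 1, 0] = psi[1, 0, 0] = 1 / np.sqrt(3)

# Reduced density matrices via partial traces (einsum contracts the traced-out qubit).
rho_AB = np.einsum('abc,dec->abde', psi, psi.conj()).reshape(4, 4)
rho_AC = np.einsum('abc,dbe->acde', psi, psi.conj()).reshape(4, 4)
rho_A = np.einsum('abc,dbc->ad', psi, psi.conj())

tau_AB = concurrence(rho_AB) ** 2
tau_AC = concurrence(rho_AC) ** 2
tau_A_BC = 4 * np.linalg.det(rho_A).real   # tangle of A with BC for a pure three-qubit state

print(tau_AB, tau_AC, tau_A_BC)            # about 4/9, 4/9, 8/9: the W state saturates the bound
print(tau_AB + tau_AC <= tau_A_BC + 1e-9)  # CKW inequality holds (with equality here)
```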
[ { "math_id": 0, "text": "n \\geq 3" }, { "math_id": 1, "text": " A_1, \\ldots, A_n" }, { "math_id": 2, "text": "\\sum_{k=2}^{n} \\tau(\\rho_{A_1A_k}) \\leq \\tau(\\rho_{A_1(A_2{\\ldots}A_n)})" }, { "math_id": 3, "text": "\\rho_{A_{1}A_k}" }, { "math_id": 4, "text": "A_1" }, { "math_id": 5, "text": "A_k" }, { "math_id": 6, "text": "\\tau" }, { "math_id": 7, "text": "|\\psi\\rangle \\in (\\mathbb{C}^2)^{\\otimes 3}" }, { "math_id": 8, "text": "|\\psi\\rangle = |\\text{EPR} \\rangle_{AB} \\otimes |\\phi\\rangle_C" }, { "math_id": 9, "text": "|\\phi\\rangle_C" }, { "math_id": 10, "text": "|00\\rangle" }, { "math_id": 11, "text": "|11\\rangle" }, { "math_id": 12, "text": "\\frac{1}{2}" }, { "math_id": 13, "text": "|\\psi\\rangle = |00\\rangle \\otimes (\\alpha_{0}|0\\rangle + \\alpha_{1}|1\\rangle) + |11\\rangle \\otimes (\\beta_0|0\\rangle + \\beta_1|1\\rangle)" }, { "math_id": 14, "text": "\\alpha_0, \\alpha_1, \\beta_0, \\beta_1 \\in \\mathbb{C}" }, { "math_id": 15, "text": "|\\alpha_0|^2 + |\\alpha_1|^2 = |\\beta_0|^2 + |\\beta_1|^2 = \\frac{1}{2}" }, { "math_id": 16, "text": "|+\\rangle" }, { "math_id": 17, "text": "|-\\rangle" }, { "math_id": 18, "text": "|\\psi\\rangle = \\frac{1}{2}(|++\\rangle + |+-\\rangle + |-+\\rangle + |--\\rangle) \\otimes (\\alpha_{0}|0\\rangle + \\alpha_{1}|1\\rangle) + \\frac{1}{2}(|++\\rangle - |+-\\rangle - |-+\\rangle + |--\\rangle) \\otimes (\\beta_0|0\\rangle + \\beta_1|1\\rangle)" }, { "math_id": 19, "text": "= \\frac{1}{2}(|++\\rangle + |--\\rangle) \\otimes ((\\alpha_0 + \\beta_0)|0\\rangle + (\\alpha_1 + \\beta_1)|1\\rangle) + \\frac{1}{2}(|+-\\rangle + |-+\\rangle) \\otimes ((\\alpha_0 - \\beta_0)|0\\rangle + (\\alpha_1 - \\beta_1)|1\\rangle)" }, { "math_id": 20, "text": "|++\\rangle" }, { "math_id": 21, "text": "|--\\rangle" }, { "math_id": 22, "text": "|+-\\rangle" }, { "math_id": 23, "text": "|-+\\rangle" }, { "math_id": 24, "text": "\\alpha_0 - \\beta_0 = 0" }, { "math_id": 25, "text": "\\alpha_1 - \\beta_1 = 0" }, { "math_id": 26, "text": "\\alpha_0 = \\beta_0" }, { "math_id": 27, "text": "\\alpha_1 = \\beta_1" }, { "math_id": 28, "text": "|\\psi\\rangle" }, { "math_id": 29, "text": "|\\psi\\rangle = (|++\\rangle + |--\\rangle) \\otimes (\\alpha_0|0\\rangle + \\alpha_1|1\\rangle)" }, { "math_id": 30, "text": "= |\\text{EPR}\\rangle_{AB} \\otimes (\\sqrt{2}\\alpha_0|0\\rangle + \\sqrt{2}\\alpha_1|1\\rangle)" }, { "math_id": 31, "text": "= |\\text{EPR}\\rangle_{AB} \\otimes |\\phi\\rangle_C" } ]
https://en.wikipedia.org/wiki?curid=67516554
67517849
K-sorted sequence
In computer science, a nearly-sorted sequence, also known as a roughly-sorted sequence and as a formula_0-sorted sequence, is a sequence which is almost ordered. By almost ordered, it is meant that no element of the sequence is very far away from where it would be if the sequence were perfectly ordered. It is still possible that no element of the sequence is at the place where it should be if the sequence were perfectly ordered. Nearly-sorted sequences are particularly useful when the exact order of the elements has little importance. For example, Twitter nearly sorts tweets, up to the second, as there is no need for more precision. Indeed, given the impossibility of exactly synchronizing all computers, an exact sorting of all tweets according to the time at which they are posted is impossible. This idea led to the creation of Snowflake IDs. formula_0-sorting is the operation of reordering the elements of a sequence so that it becomes formula_0-sorted. formula_0-sorting is generally more efficient than sorting. Similarly, sorting a sequence is easier if it is known that the sequence is formula_0-sorted. So if a program only needs to consider formula_0-sorted sequences as input or output, working with formula_0-sorted sequences may save time. The radius of a sequence is a measure of presortedness, that is, its value indicates how far the elements in the list have to be moved to obtain a totally sorted sequence. In the above example of tweets which are sorted up to the second, the radius is bounded by the number of tweets in a second. Definition. Given a positive number formula_0, a sequence formula_1 is said to be formula_0-sorted if for each formula_2 and for each formula_3, formula_4. That is, the sequence has to be ordered only for pairs of elements whose distance is at least formula_0. The radius of the sequence formula_5, denoted formula_6 or formula_7, is the smallest formula_0 such that the sequence is formula_0-sorted. The radius is a measure of presortedness. A sequence is said to be nearly-sorted or roughly-sorted if its radius is small compared to its length. Equivalent definition. A sequence formula_1 is formula_0-sorted if and only if each range of length formula_8, formula_9, is formula_0-sorted. Properties. All sequences of length formula_10 are formula_11-sorted, that is, formula_12. A sequence is formula_13-sorted if and only if it is sorted. A formula_0-sorted sequence is automatically formula_14-sorted but not necessarily formula_15-sorted. Relation with sorted sequences. Given a formula_0-sorted sequence formula_1 and its sorted permutation formula_16, formula_17 is at most formula_0. Algorithms. Deciding whether a sequence is formula_0-sorted. Deciding whether a sequence formula_1 is formula_0-sorted can be done in linear time and constant space by a streaming algorithm. It suffices, for each formula_18, to keep track of formula_19 and to check that formula_20. Computing the radius of a sequence. The radius of a sequence can be computed in linear time and space. This follows from the fact that it can be defined as formula_21. Halving the radius of a sequence. Given a formula_22-sorted sequence formula_23, it is possible to compute a formula_15-sorted permutation formula_24 of formula_5 in linear time and constant space. First, given a sequence formula_25, let us say that this sequence is partitioned if the formula_0 smallest elements are in formula_26 and the formula_0 largest elements are in formula_27.
Let us call partitioning the action of reordering the sequence formula_28 into a partitioned permutation. This can be done in linear time by first finding the median of formula_29 and then moving elements to the first or second half depending on whether they are smaller or greater than the median. The sequence formula_24 can be obtained by partitioning the blocks of elements at positions formula_30, then by partitioning the blocks of elements at positions formula_31, and then again the elements at positions formula_30, for each number formula_32 such that those sequences are defined. Using formula_33 processors, with no shared read or write access to memory, the same algorithm can be applied in formula_34 time, since each partition of a sequence can occur in parallel. Merging two formula_0-sorted sequences. Merging two formula_0-sorted sequences formula_35 and formula_36 can be done in linear time and constant space. First, using the preceding algorithm, both sequences should be transformed into formula_37-sorted sequences. Let us iteratively construct an output sequence formula_38 by removing content from both formula_39 and adding it to formula_38. If both formula_39's are empty, then it suffices to return formula_38. Otherwise, if formula_40 is empty and formula_41 is not, it suffices to remove the content of formula_41 and append it to formula_38; the case where formula_41 is empty and formula_40 is not is similar by symmetry. Let us consider the case where neither formula_39 is empty. Let formula_42 and formula_43; they are the maxima of the first formula_37 elements of each list. Let us assume that formula_44; the other case is similar by symmetry. Remove formula_45 from formula_40 and remove formula_46 from formula_41, and add them to formula_38. Sorting a formula_0-sorted sequence. A formula_0-sorted sequence can be sorted by applying the halving algorithm given above formula_47 times. This takes formula_48 time on a sequential machine, or formula_49 time using formula_50 processors. formula_0-sorting a sequence. Since each sequence formula_23 is necessarily formula_10-sorted, it suffices to apply the halving algorithm formula_51 times. Thus, formula_0-sorting can be done in formula_52 time. This algorithm is Par-optimal, that is, there exists no sequential algorithm with a better worst-case complexity. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
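The streaming check described above translates directly into code. The sketch below also adds a heap-based way of fully sorting a k-sorted sequence; this is not the halving algorithm of this article but a common alternative that relies on the property, stated under "Relation with sorted sequences", that no element of a k-sorted sequence is more than k places from its sorted position. The function names and the example sequence are illustrative choices.

```python
import heapq

def is_k_sorted(a, k):
    """Streaming check of the definition above: a is k-sorted when a[i] <= a[j]
    for every pair with j - i >= k (0-indexed); linear time, constant extra space."""
    prefix_max = float("-inf")
    for i in range(len(a) - k):
        prefix_max = max(prefix_max, a[i])
        if a[i + k] < prefix_max:
            return False
    return True

def sort_k_sorted(a, k):
    """Fully sort a k-sorted sequence in O(n log k) time. Because every element
    sits at most k places from its sorted position, a sliding min-heap of
    k + 1 elements always exposes the next smallest element."""
    heap, out = list(a[: k + 1]), []
    heapq.heapify(heap)
    for x in a[k + 1:]:
        out.append(heapq.heappushpop(heap, x))
    out.extend(sorted(heap))
    return out

example = [2, 1, 4, 3, 6, 5]        # only adjacent elements are swapped
print(is_k_sorted(example, 2))      # True
print(is_k_sorted(example, 1))      # False: 2 precedes the smaller 1 at distance 1
print(sort_k_sorted(example, 2))    # [1, 2, 3, 4, 5, 6]
```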
[ { "math_id": 0, "text": "k" }, { "math_id": 1, "text": "[a_1,\\dots, a_n]" }, { "math_id": 2, "text": "1\\le i" }, { "math_id": 3, "text": "i+k\\le j\\le n" }, { "math_id": 4, "text": "a_i\\le a_j" }, { "math_id": 5, "text": "\\alpha" }, { "math_id": 6, "text": "\\text{ROUGH}(\\alpha)" }, { "math_id": 7, "text": "\\text{Par}(\\alpha)" }, { "math_id": 8, "text": "2k+2" }, { "math_id": 9, "text": "[a_i, a_{i+1},\\dots, a_{i+2k+2}]" }, { "math_id": 10, "text": "n" }, { "math_id": 11, "text": "(n-1)" }, { "math_id": 12, "text": "0\\le \\text{Par}([a_1,\\dots,a_n])<n" }, { "math_id": 13, "text": "0" }, { "math_id": 14, "text": "(k+1)" }, { "math_id": 15, "text": "(k-1)" }, { "math_id": 16, "text": "[a_{\\sigma_1},\\dots, a_{\\sigma_n}]" }, { "math_id": 17, "text": "|i-\\sigma_i|" }, { "math_id": 18, "text": "1\\le i < n-k" }, { "math_id": 19, "text": "\\max(a_j\\mid j\\le i)" }, { "math_id": 20, "text": "a_{i+k}\\ge\\max(a_j\\mid j\\le i)" }, { "math_id": 21, "text": "\\max(i-j\\mid \\min(a_k\\mid k\\ge i) < \\max(a_k\\mid k\\le j))" }, { "math_id": 22, "text": "2k" }, { "math_id": 23, "text": "\\alpha=[a_1,\\dots, a_n]" }, { "math_id": 24, "text": "\\alpha'" }, { "math_id": 25, "text": "\\beta=[b_1,\\dots,b_{2k}]" }, { "math_id": 26, "text": "[b_1,\\dots,b_k]" }, { "math_id": 27, "text": "[b_{k+1},\\dots, b_{2k}]" }, { "math_id": 28, "text": "\\beta" }, { "math_id": 29, "text": "b" }, { "math_id": 30, "text": "[2k(i-1)+1, \\dots, 2ki]" }, { "math_id": 31, "text": "[2k(i-1)+k+1, \\dots, 2ki+k]" }, { "math_id": 32, "text": "i" }, { "math_id": 33, "text": "\\frac{n}{2k}" }, { "math_id": 34, "text": "O(k)" }, { "math_id": 35, "text": "\\alpha^1=[a^1_1,\\dots, a^1_n]" }, { "math_id": 36, "text": "\\alpha=[a^2_1,\\dots, a^2_m]" }, { "math_id": 37, "text": "k/2" }, { "math_id": 38, "text": "\\omega" }, { "math_id": 39, "text": "\\alpha^i" }, { "math_id": 40, "text": "\\alpha^1" }, { "math_id": 41, "text": "\\alpha^2" }, { "math_id": 42, "text": "A^1=\\max(a^1_i\\mid 1\\le i\\le\nk/2)" }, { "math_id": 43, "text": "A^2=\\max(a^2_i\\mid 1\\le i\\le k/2)" }, { "math_id": 44, "text": "A^1<A^2" }, { "math_id": 45, "text": "a^1_1,\\dots, a^1_{k/2}" }, { "math_id": 46, "text": "\\{a^2_j\\mid 1\\le j\\le k/2, a^2_j\\le\nA^1\\}" }, { "math_id": 47, "text": "\\log_2(k)" }, { "math_id": 48, "text": "O(n\\log k)" }, { "math_id": 49, "text": "O(k\\log k)" }, { "math_id": 50, "text": "O(n)" }, { "math_id": 51, "text": "\\log_{2}\\left(\\frac{n}{k}\\right)" }, { "math_id": 52, "text": "O(n\\log (n/k))" } ]
https://en.wikipedia.org/wiki?curid=67517849
67518897
Sklyanin algebra
In mathematics, specifically the field of algebra, Sklyanin algebras are a class of noncommutative algebras named after Evgeny Sklyanin. This class of algebras was first studied in the classification of Artin-Schelter regular algebras of global dimension 3 in the 1980s. Sklyanin algebras can be grouped into two different types, the non-degenerate Sklyanin algebras and the degenerate Sklyanin algebras, which have very different properties. A need to understand the non-degenerate Sklyanin algebras better has led to the development of the study of point modules in noncommutative geometry. Formal definition. Let formula_0 be a field with a primitive cube root of unity. Let formula_1 be the following subset of the projective plane formula_2: formula_3 Each point formula_4 gives rise to a (quadratic 3-dimensional) Sklyanin algebra, formula_5 where formula_6 Whenever formula_7 we call formula_8 a degenerate Sklyanin algebra, and whenever formula_9 we say the algebra is non-degenerate. Properties. The non-degenerate case shares many properties with the commutative polynomial ring formula_10, whereas the degenerate case enjoys almost none of these properties. Generally the non-degenerate Sklyanin algebras are more challenging to understand than their degenerate counterparts. Properties of degenerate Sklyanin algebras. Let formula_11 be a degenerate Sklyanin algebra. It has Hilbert series formula_12 Properties of non-degenerate Sklyanin algebras. Let formula_13 be a non-degenerate Sklyanin algebra. It has Hilbert series formula_14 Examples. Degenerate Sklyanin algebras. The subset formula_1 consists of 12 points on the projective plane, which give rise to 12 expressions of degenerate Sklyanin algebras. However, some of these are isomorphic and there exists a classification of degenerate Sklyanin algebras into two different cases. Let formula_15 be a degenerate Sklyanin algebra. Whenever formula_16, it is isomorphic to formula_17 (for instance when formula_18), and whenever formula_19, it is isomorphic to formula_20 (for instance when formula_21). These two cases are Zhang twists of each other and therefore have many properties in common. Non-degenerate Sklyanin algebras. The commutative polynomial ring formula_10 is isomorphic to the non-degenerate Sklyanin algebra formula_22 and is therefore an example of a non-degenerate Sklyanin algebra. Point modules. The study of point modules is a useful tool which can be used much more widely than just for Sklyanin algebras. Point modules are a way of finding projective geometry in the underlying structure of noncommutative graded rings. Originally, the study of point modules was applied to show some of the properties of non-degenerate Sklyanin algebras, for example to find their Hilbert series and to determine that non-degenerate Sklyanin algebras do not contain zero divisors. Non-degenerate Sklyanin algebras. Whenever formula_23 and formula_24 in the definition of a non-degenerate Sklyanin algebra formula_25, the point modules of formula_13 are parametrised by an elliptic curve. If the parameters formula_26 do not satisfy those constraints, the point modules of any non-degenerate Sklyanin algebra are still parametrised by a closed projective variety on the projective plane. If formula_13 is a Sklyanin algebra whose point modules are parametrised by an elliptic curve, then there exists an element formula_27 which annihilates all point modules, i.e. formula_28 for all point modules formula_29 of formula_13. Degenerate Sklyanin algebras. The point modules of degenerate Sklyanin algebras are not parametrised by a projective variety. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
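The defining relations can be written out mechanically for particular parameter points. The following SymPy sketch (noncommutative symbols; the helper name sklyanin_relations is introduced here) evaluates the relations at the point [1 : -1 : 0], recovering the commutator relations of the example formula_22, and at the degenerate point [0 : 0 : 1], where they reduce to the squares of the generators.

```python
from sympy import symbols, expand

# Noncommuting generators x, y, z of the free algebra k<x, y, z>.
x, y, z = symbols('x y z', commutative=False)

def sklyanin_relations(a, b, c):
    """The quadratic relations f1, f2, f3 of S_{a,b,c}, transcribed from the definition above."""
    f1 = a * y * z + b * z * y + c * x ** 2
    f2 = a * z * x + b * x * z + c * y ** 2
    f3 = a * x * y + b * y * x + c * z ** 2
    return [expand(f) for f in (f1, f2, f3)]

# [a : b : c] = [1 : -1 : 0]: the relations are the commutators yz - zy, zx - xz, xy - yx,
# so this Sklyanin algebra is the commutative polynomial ring k[x, y, z].
print(sklyanin_relations(1, -1, 0))

# [a : b : c] = [0 : 0 : 1], a point of the degenerate locus: the relations are x^2, y^2, z^2.
print(sklyanin_relations(0, 0, 1))
```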
[ { "math_id": 0, "text": "k" }, { "math_id": 1, "text": "\\mathfrak{D}" }, { "math_id": 2, "text": "\\textbf{P}_k^2" }, { "math_id": 3, "text": "\\mathfrak{D} = \\{ [1:0:0], [0:1:0], [0:0:1] \\} \\sqcup \\{ [a:b:c ] \\big| a^3=b^3=c^3\\}." }, { "math_id": 4, "text": "[a:b:c] \\in \\textbf{P}_k^2" }, { "math_id": 5, "text": "S_{a,b,c} = k \\langle x,y,z \\rangle / (f_1, f_2, f_3)," }, { "math_id": 6, "text": "f_1 = ayz + bzy + cx^2, \\quad f_2 = azx + bxz + cy^2, \\quad f_3 = axy + b yx + cz^2." }, { "math_id": 7, "text": "[a:b:c ] \\in \\mathfrak{D}" }, { "math_id": 8, "text": "S_{a,b,c}" }, { "math_id": 9, "text": "[a:b:c] \\in \\textbf{P}^2 \\setminus \\mathfrak{D}" }, { "math_id": 10, "text": "k[x,y,z]" }, { "math_id": 11, "text": "S_{\\text{deg}}" }, { "math_id": 12, "text": "H_{S_{\\text{deg}}} = \\frac{1+t}{1-2t}" }, { "math_id": 13, "text": "S" }, { "math_id": 14, "text": "H_{S} = \\frac{1}{(1-t)^3}" }, { "math_id": 15, "text": "S_{\\text{deg}} = S_{a,b,c}" }, { "math_id": 16, "text": "a=b" }, { "math_id": 17, "text": "k \\langle x,y,z \\rangle /(x^2,y^2,z^2)" }, { "math_id": 18, "text": "[0:0:1] \\in \\mathfrak{D}" }, { "math_id": 19, "text": "a \\neq b" }, { "math_id": 20, "text": "k \\langle x,y,z \\rangle /(xy,yx,zx)" }, { "math_id": 21, "text": "[1:0:0] \\in \\mathfrak{D}" }, { "math_id": 22, "text": "S_{1,-1,0} = k \\langle x,y,z \\rangle /( xy-yx, yz-zy, zx- xz)" }, { "math_id": 23, "text": "abc \\neq 0" }, { "math_id": 24, "text": "\\left( \\frac{a^3+b^3+c^3}{3abc} \\right) ^3 \\neq 1" }, { "math_id": 25, "text": "S=S_{a,b,c}" }, { "math_id": 26, "text": "a,b,c" }, { "math_id": 27, "text": "g \\in S" }, { "math_id": 28, "text": "Mg = 0" }, { "math_id": 29, "text": "M " } ]
https://en.wikipedia.org/wiki?curid=67518897
675192
Sloped armour
Type of armour Sloped armour is armour that is oriented neither vertically nor horizontally. Such angled armour is typically mounted on tanks and other armoured fighting vehicles (AFVs), as well as naval vessels such as battleships and cruisers. Sloping an armour plate makes it more difficult to penetrate by anti-tank weapons, such as armour-piercing shells, kinetic energy penetrators and rockets, if they follow a more or less horizontal trajectory to their target, as is often the case. The improved protection is caused by three main effects. Firstly, a projectile hitting a plate at an angle other than 90° has to move through a greater thickness of armour, compared to hitting the same plate at a right-angle. In the latter case only the plate thickness (the normal to the surface of the armour) must be pierced. Increasing the armour slope improves, for a given plate thickness, the level of protection at the point of impact by increasing the thickness measured in the horizontal plane, the angle of attack of the projectile. The protection of an area, instead of just a single point, is indicated by the average horizontal thickness, which is identical to the area density (in this case relative to the horizontal): the relative armour mass used to protect that area. If the horizontal thickness is increased by increasing the slope while keeping the plate thickness constant, a longer and thus heavier armour plate is required to protect a certain area. This improvement in protection is simply equivalent to the increase of area density and thus mass, and can offer no weight benefit. Therefore, in armoured vehicle design the two other main effects of sloping have been the motive to apply sloped armour. One of these is the more efficient envelopment of a certain vehicle volume by armour. In general, more rounded shapes have a smaller surface area relative to their volume. In an armoured vehicle that surface must be covered by heavy armour, so a more efficient shape leads to either a substantial weight reduction or a thicker armour for the same weight. Sloping the armour leads to a better approximation of the ideal rounded shape. The final effect is that of deflection, deforming and ricochet of a projectile. When it hits a plate under a steep angle, its path might be curved, causing it to move through more armour – or it might bounce off entirely. Also it can be bent, reducing its penetration. Shaped charge warheads may fail to penetrate or even detonate when striking armour at a highly oblique angle. However, these desired effects are critically dependent on the precise armour materials used in relation to the characteristics of the projectile hitting it: sloping might even lead to better penetration. The sharpest angles are usually designed on the frontal glacis plate, because it is the hull direction most likely to be hit while facing an attack, and also because there is more room to slope in the longitudinal direction of the vehicle. Principle. The cause for the increased protection of a certain point "at a given normal thickness" is the increased line-of-sight ("LOS") thickness of the armour, which is the thickness along the horizontal plane, along a line describing the oncoming projectile's general direction of travel. For a given thickness of armour plate, a projectile must travel through a greater thickness of armour to penetrate into the vehicle when it is sloped. 
The mere fact that the LOS-thickness increases by angling the plate is not however the motive for applying sloped armour in armoured vehicle design. The reason for this is that this increase offers no weight benefit. To maintain a given mass of a vehicle, the area density would have to remain equal and this implies that the LOS-thickness would also have to remain constant while the slope increases, which again implies that the normal thickness decreases. In other words: to avoid increasing the weight of the vehicle, plates have to get proportionally thinner while their slope increases, a process equivalent to shearing the mass. Sloped armour provides increased protection for armoured fighting vehicles through two primary mechanisms. The most important is based on the fact that to attain a certain protection level a certain volume has to be enclosed by a certain mass of armour and that sloping may reduce the surface to volume ratio and thus allow for either a lesser relative mass for a given volume or more protection for a given weight. If attack were equally likely from all directions, the ideal form would be a sphere; because horizontal attack is in fact to be expected the ideal becomes an oblate spheroid. Angling flat plates or curving cast armour allows designers to approach these ideals. For practical reasons this mechanism is most often applied on the front of the vehicle, where there is sufficient room to slope and much of the armour is concentrated, on the assumption that unidirectional frontal attack is the most likely. A simple wedge, such as can be seen in the hull design of the M1 Abrams, is already a good approximation that is often applied. The second mechanism is that shots hitting sloped armour are more likely to be deflected, ricochet or shatter on impact. Modern weapon and armour technology has significantly reduced this second benefit which initially was the main motive sloped armour was incorporated into vehicle design in the Second World War. The cosine rule. Even though the increased protection to a point, provided by angling a certain armour plate with a given normal thickness causing an increased line-of-sight ("LOS") thickness, is of no consideration in armour vehicle design, it is of great importance when determining the level of protection of a designed vehicle. The LOS-thickness for a vehicle in a horizontal position can be calculated by a simple formula, applying the cosine rule: it is equal to the armour's normal thickness divided by the cosine of the armour's inclination from perpendicularity to the projectile's travel (assumed to be in the horizontal plane) or: formula_0 where formula_1 is the line-of-sight thickness, formula_2 is the normal thickness of the armour, and formula_3 is the angle at which the armour is sloped back from the vertical. For example, armour sloped sixty degrees back from the vertical presents to a projectile travelling horizontally a line-of-sight thickness twice the armour's normal thickness, as the cosine of 60° is 1/2. When armour thickness or rolled homogeneous armour equivalency (RHAe) values for AFVs are provided without the slope of the armour, the figure provided generally takes into account this effect of the slope, while when the value is in the format of "x units at y degrees", the effects of the slope are not taken into account. Deflection. Sloping armour can increase protection by a mechanism such as shattering of a brittle kinetic energy penetrator (KEP) or a deflection of that penetrator away from the surface normal, even though the area density remains constant. These effects are strongest when the projectile has a low absolute weight and is short relative to its width.
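Returning to the cosine rule given above under "The cosine rule", the relation is easy to check numerically. A minimal sketch (the function name is illustrative, not from the source):

```python
import math

def los_thickness(normal_thickness: float, slope_deg: float) -> float:
    """Line-of-sight thickness T_L = T_N / cos(theta) for a plate of normal
    thickness T_N, sloped back from the vertical by slope_deg degrees, against
    a horizontally travelling projectile."""
    return normal_thickness / math.cos(math.radians(slope_deg))

# A 100 mm plate sloped at 60 degrees presents about 200 mm along the line
# of sight, since cos(60 deg) = 1/2.
print(round(los_thickness(100.0, 60.0), 1))  # 200.0
```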
Armour piercing shells of World War II, certainly those of the early years, had these qualities and sloped armour was therefore rather efficient in that period. In the sixties however long-rod penetrators, such as armour-piercing fin-stabilized discarding sabot rounds, were introduced, projectiles that are both very elongated and very dense in mass. When hitting a sloped thick homogeneous plate, such a long-rod penetrator will, due to the incoming rear of the projectile acting as a lever, after initial penetration into the armour's LOS thickness, bend toward the armour's normal thickness and take a path with a length between the armour's LOS and normal thicknesses. Also the deformed penetrator tends to act as a projectile of a very large diameter and this stretches out the remaining armour, causing it to fail more easily. If these latter effects occur strongly – for modern penetrators this is typically the case for a slope between 55° and 65° – better protection would be provided by vertically mounted armour of the same area density. Another development decreasing the importance of the principle of sloped armour has been the introduction of ceramic armour in the 1970s. At any given area density, ceramic armour is also best when mounted more vertically, as maintaining the same area density requires the armour be thinned as it is sloped and the ceramic fractures earlier because of its reduced normal thickness. Sloped armour can also cause projectiles to ricochet, but this phenomenon is much more complicated and as yet not fully predictable. High rod density, impact velocity, and length-to-diameter ratio are factors that contribute to a high critical ricochet angle (the angle at which ricochet is expected to onset) for a long rod projectile, but different formulae may predict different critical ricochet angles for the same situation. Basic physical principles of deflection. The behaviour of a real world projectile, and the armour plate it hits, depends on many effects and mechanisms, involving their material structure and continuum mechanics which are very difficult to predict. Using only a few basic principles will therefore not result in a model that is a good description of the full range of possible outcomes. However, in many conditions most of these factors have only a negligible effect while a few of them dominate the equation. Therefore, a very simplified model can be created providing a general idea and understanding of the basic physical principles behind these aspects of sloped armour design. If the projectile travels very fast, and thus is in a state of hypervelocity, the strength of the armour material becomes negligible, because the energy of impact causes both projectile and armour to melt and behave like fluids, and only its area density is an important factor. In this limiting case, after the hit, the projectile continues to penetrate until it has stopped transferring its momentum to the target matter. In this ideal case, only momentum, area cross section, density and LOS-thickness are relevant. The situation of the penetrating metal jet caused by the explosion of the shaped charge of high-explosive anti-tank (HEAT) ammunition, forms a good approximation of this ideal. Therefore, if the angle is not too extreme, and the projectile is very dense and fast, sloping has little effect and no relevant deflection occurs. On the other extreme, the more light and slow a projectile is, the more relevant sloping becomes. 
Typical World War II Armour-Piercing shells were bullet-shaped and had a much lower velocity than a shaped charge jet. An impact would not result in a complete melting of projectile and armour. In this condition the strength of the armour material becomes a relevant factor. If the projectile were very light and slow, the strength of the armour might even cause the hit to result in just an elastic deformation, the projectile being defeated without damage to the target. Sloping will mean the projectile will have to attain a higher velocity to defeat the armour, because on impact on a sloped armour not all kinetic energy is transferred to the target, the ratio depending on the slope angle. The projectile in a process of elastic collision deflects at an angle of 2formula_4 (where formula_4 denotes the angle between the armour plate surface and the projectile's initial direction), however the change of direction could be virtually divided into a deceleration part, when the projectile is halted when moving in a direction perpendicular to the plate (and will move along the plate after having been deflected at an angle of about formula_4), and a process of elastic acceleration, when the projectile accelerates out of the plate (velocity along the plate is considered as invariant because of negligible friction). Thus the maximum energy accumulated by the plate can be calculated from the deceleration phase of the collision event. Under the assumption that only elastic deformation occurs and that a target is solid, while disregarding friction, it is easy to calculate the proportion of energy absorbed by a target if it is hit by a projectile, which, if also disregarding more complex deflection effects, after impact bounces off (elastic case) or slides along (idealised inelastic case) the armour plate. In this very simple model the portion of the energy projected to the target depends on the angle of slope: formula_5 where formula_6 is the energy transferred to the target and formula_7 is the kinetic energy of the projectile. However, in practice the AP-shells were powerful enough that the forces involved reach the plastic deformation limit and the elasticity of the plate could accumulate only a small part of the energy. In that case the armour plate would yield and much of the energy and force be spent by the deformation. As such this means that approximately half the deflection can be assumed (just formula_4 rather than 2formula_4) and the projectile will groove into the plate before it slides along, rather than bounce off. Plasticity surface friction is also very low in comparison to the plastic deformation energy and can be neglected. This implies that the formula above is principally valid also for the plastic deformation case, but because of the gauge grooved into the plate a larger surface angle formula_4 should be taken into account. Not only would this imply that the energy transferred to the target would thus be used to damage it; it would also mean that this energy would be higher because the effective angle formula_4 in the formula is now higher than the angle of the armour slope. The value of the appropriate real formula_4' which should be substituted cannot be derived from this simple principle and can only be determined by a more sophisticated model or simulation. On the other hand, that very same deformation will also cause, in combination with the armour plate slope, an effect that diminishes armour penetration.
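The idealised energy-fraction relation given above can also be written out directly; a minimal sketch (names are illustrative) computing the fraction of the projectile's kinetic energy transferred to the plate for a given angle between the plate surface and the projectile's initial direction:

```python
import math

def energy_fraction_to_target(alpha_deg: float) -> float:
    """Fraction E_d / E_k transferred to the plate in the idealised elastic
    model: sin^2(alpha), where alpha is the angle between the armour surface
    and the projectile's initial direction."""
    return math.sin(math.radians(alpha_deg)) ** 2

for alpha in (90, 60, 30):
    print(alpha, round(energy_fraction_to_target(alpha), 3))
# 90 -> 1.0 (impact perpendicular to the surface), 60 -> 0.75, 30 -> 0.25
```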
Though the deflection is smaller under conditions of plastic deformation, it will nevertheless change the course of the projectile as it grooves into the plate, which again will result in an increase of the angle between the new armour surface and the projectile's initial direction. Thus the projectile has to work itself through more armour and, though in absolute terms thereby more energy could be absorbed by the target, it is more easily defeated, the process ideally ending in a complete ricochet. Historical application. One of the earliest documented instances of the concept of sloped armour is in the drawing of Leonardo da Vinci's fighting vehicle. Sloped armour was actually used on early Confederate ironclads of the nineteenth century, such as CSS Virginia, and partially implemented on the first French tank, the Schneider CA1 in the First World War, but the first tanks to be completely fitted with sloped armour were the French SOMUA S35 and other contemporary French tanks like the Renault R35, which had fully cast hulls and turrets. It was also used to greater effect on the famous Soviet T-34 battle tank by the Soviet tank design team of the Kharkov Locomotive Factory, led by Mikhail Koshkin. It was a technological response to the more effective anti-tank guns being put into service at this time. The T-34 had a profound impact on German WWII tank design. Pre-war or early-war designs like the Panzer IV and Tiger I differ clearly from post-1941 vehicles such as the Panther, Tiger II, Hetzer, Jagdpanzer IV, Jagdpanther and Jagdtiger, which all had sloped armour. This is especially evident because German tank armour was generally not cast but consisted of welded plates. Sloped armour became very much the fashion after World War II, its purest expression being perhaps the British Chieftain. However, the latest main battle tanks use perforated and composite armour, which attempts to deform and abrade a penetrator rather than deflecting it, as deflecting a long rod penetrator is difficult. These tanks have a more blocky appearance. Examples include the Leopard 2 and M1 Abrams. An exception is the Israeli Merkava. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "T_L=\\frac{T_N}{cos(\\theta)}" }, { "math_id": 1, "text": "T_L" }, { "math_id": 2, "text": "T_N" }, { "math_id": 3, "text": "\\theta" }, { "math_id": 4, "text": "\\alpha" }, { "math_id": 5, "text": "E_d / E_k = {sin^2(\\alpha)}" }, { "math_id": 6, "text": "E_d" }, { "math_id": 7, "text": "E_k" } ]
https://en.wikipedia.org/wiki?curid=675192
67520139
Run of a sequence
In computer science, a run of a sequence is a non-decreasing range of the sequence that cannot be extended. The "number of runs" of a sequence is the number of such maximal non-decreasing runs it contains. This is a measure of presortedness, and in particular measures how many subsequences must be merged to sort a sequence. Definition. Let formula_0 be a sequence of elements from a totally ordered set. A run of formula_1 is a maximal non-decreasing contiguous subsequence formula_2. That is, formula_3 and formula_4 assuming that formula_5 and formula_6 exist. For example, if formula_7 is a natural number, the sequence formula_8 has the two runs formula_9 and formula_10. Let formula_11 be defined as the number of positions formula_12 such that formula_13 and formula_14. It is equivalently defined as the number of runs of formula_1 minus one. This definition ensures that formula_15; that is, formula_16 if and only if the sequence formula_1 is sorted. As another example, formula_17 and formula_18. Sorting sequences with a low number of runs. The function formula_19 is a measure of presortedness. The natural merge sort is formula_19-optimal. That is, if it is known that a sequence has a low number of runs, it can be efficiently sorted using the natural merge sort. Long runs. A long run is defined similarly to a run, except that the sequence can be either non-decreasing or non-increasing. The number of long runs is not a measure of presortedness. A sequence with a small number of long runs can be sorted efficiently by first reversing the decreasing runs and then using a natural merge sort.
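The measure defined above counts descents, i.e. positions where the next element is strictly smaller. A minimal sketch (function name not from the source):

```python
from typing import Sequence

def runs(x: Sequence) -> int:
    """Number of descents of x: positions i (0-based) with x[i+1] < x[i].
    This equals the number of maximal non-decreasing runs minus one,
    and is 0 exactly when x is already sorted."""
    return sum(1 for i in range(len(x) - 1) if x[i + 1] < x[i])

print(runs([1, 2, 3, 4]))        # 0, the sequence is sorted
print(runs([4, 3, 2, 1]))        # 3, i.e. n - 1 for a decreasing sequence of length n
print(runs([2, 1, 4, 3, 6, 5]))  # 3, one descent inside each adjacent pair
```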
[ { "math_id": 0, "text": "X=\\langle x_1,\\dots,x_n\\rangle" }, { "math_id": 1, "text": "X" }, { "math_id": 2, "text": "\\langle x_i,x_{i+1},\\dots, x_{j-1},x_j \\rangle" }, { "math_id": 3, "text": "x_{i-1}>x_i" }, { "math_id": 4, "text": "x_{j}>x_{j+1}" }, { "math_id": 5, "text": "x_{i-1}" }, { "math_id": 6, "text": "x_{j+1}" }, { "math_id": 7, "text": "n" }, { "math_id": 8, "text": "\\langle n+1,n+2,\\dots, 2n, 1,2,\\dots, n\\rangle" }, { "math_id": 9, "text": "\\langle n+1,\\dots, 2n \\rangle" }, { "math_id": 10, "text": "\\langle 1,\\dots,n \\rangle" }, { "math_id": 11, "text": "\\mathtt{runs}(X)" }, { "math_id": 12, "text": "i" }, { "math_id": 13, "text": "1\\le i<n" }, { "math_id": 14, "text": "x_{i+1}<x_i" }, { "math_id": 15, "text": "\\mathtt{runs}(\\langle 1,2,\\dots, n \\rangle)=0" }, { "math_id": 16, "text": "\\mathtt{runs}(X)=0" }, { "math_id": 17, "text": "\\mathtt{runs}(\\langle n,n-1,\\dots,1 \\rangle)=n-1" }, { "math_id": 18, "text": "\\mathtt{runs}(\\langle 2,1,4,3,\\dots, 2n,2n-1\\rangle)=n" }, { "math_id": 19, "text": "\\mathtt{runs}" } ]
https://en.wikipedia.org/wiki?curid=67520139
67520463
Petr Lazarev
Russian biophysicist (1878–1942) Petr Petrovich Lazarev (; 14 April 1878 – 24 April 1942) was a biophysicist and a founder of the Soviet Institute of Physics and Biophysics (now Lebedev Physical Institute). He also founded the journal "Uspekhi fizicheskikh nauk" (later "Physics-Uspekhi"). Early life. Lazarev was born in Moscow where his father worked as a civil engineer. After studies at a Moscow Gymnasium, he graduated from Moscow University in 1901 with a medical degree. After passing the examination to become a doctor of medicine in 1902 he worked at an ear clinic. He then became interested in mathematics and physics and passed the university examination in 1903 after studying the subjects entirely on his own. Career. Lazarev's early research was on hearing and he noted that auditory sensations could be amplified by coordinated visual stimulation. He later studied other phenomena such as synesthesia and the effect of singing on vision. He began to collaborate with P.N. Lebedev from 1903 and in 1912 he obtained a doctorate with a thesis "Vytsvetanie krasok i pigmentov v vidimom svete" ["The Fading of Colors and Pigments in Visible Light"]. In 1911, he joined Lebedev in protest against the policies of L.A. Kasso and quit Moscow University to join Shanyavsky University. He began to study the ion theory of nerve excitation and confirmed Loeb's formula. During World War I, he was involved in producing medical equipment, including thermometers and mobile X-ray systems. After the 1917 revolution, he became a member of the Russian Academy of Sciences. In 1918, he was involved in the study of the Kursk Magnetic Anomaly and worked on geomagnetism with E.G. Leyst. Arrest and exile. In 1929, a group of communists failed to be elected to the Soviet Academy of Sciences. Lazarev objected to re-balloting them. He also publicly criticized Friedrich Engels' writings about formula_0 (the imaginary unit), which Engels had called "not only a contradiction, but even an absurd contradiction, a real absurdity." As a result of this, Lazarev was arrested on 5 March 1931 and removed from all of his academic positions. His institute was transferred to the Supreme Soviet of the National Economy, the key personnel were sacked and most of the equipment destroyed. Lazarev's wife committed suicide on 13 June 1931. In September 1931 Lazarev was exiled to Sverdlovsk. He returned to Moscow in 1932. Aleksandr Mints was his student. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\sqrt{-1} " } ]
https://en.wikipedia.org/wiki?curid=67520463
675231
Line graph
Graph representing edges of another graph In the mathematical discipline of graph theory, the line graph of an undirected graph G is another graph L("G") that represents the adjacencies between edges of G. L("G") is constructed in the following way: for each edge in G, make a vertex in L("G"); for every two edges in G that have a vertex in common, make an edge between their corresponding vertices in L("G"). The name "line graph" comes from a paper by Harary and Norman, although both Whitney and Krausz had used the construction before this. Other terms used for the line graph include the covering graph, the derivative, the edge-to-vertex dual, the conjugate, the representative graph, and the θ-obrazom, as well as the edge graph, the interchange graph, the adjoint graph, and the derived graph. Hassler Whitney (1932) proved that with one exceptional case the structure of a connected graph G can be recovered completely from its line graph. Many other properties of line graphs follow by translating the properties of the underlying graph from vertices into edges, and by Whitney's theorem the same translation can also be done in the other direction. Line graphs are claw-free, and the line graphs of bipartite graphs are perfect. Line graphs are characterized by nine forbidden subgraphs and can be recognized in linear time. Various extensions of the concept of a line graph have been studied, including line graphs of line graphs, line graphs of multigraphs, line graphs of hypergraphs, and line graphs of weighted graphs. Formal definition. Given a graph G, its line graph "L"("G") is a graph such that each vertex of "L"("G") represents an edge of G, and two vertices of "L"("G") are adjacent if and only if their corresponding edges share a common endpoint (are incident) in G. That is, it is the intersection graph of the edges of G, representing each edge by the set of its two endpoints. Example. The following figures show a graph (left, with blue vertices) and its line graph (right, with green vertices). Each vertex of the line graph is shown labeled with the pair of endpoints of the corresponding edge in the original graph. For instance, the green vertex on the right labeled 1,3 corresponds to the edge on the left between the blue vertices 1 and 3. Green vertex 1,3 is adjacent to three other green vertices: 1,4 and 1,2 (corresponding to edges sharing the endpoint 1 in the blue graph) and 4,3 (corresponding to an edge sharing the endpoint 3 in the blue graph). Properties. Translated properties of the underlying graph. Properties of a graph G that depend only on adjacency between edges may be translated into equivalent properties in "L"("G") that depend on adjacency between vertices. For instance, a matching in G is a set of edges no two of which are adjacent, and corresponds to a set of vertices in "L"("G") no two of which are adjacent, that is, an independent set. Thus, for example, a maximum matching in G corresponds to a maximum independent set in "L"("G"). Whitney isomorphism theorem. If the line graphs of two connected graphs are isomorphic, then the underlying graphs are isomorphic, except in the case of the triangle graph "K"3 and the claw "K"1,3, which have isomorphic line graphs but are not themselves isomorphic. As well as "K"3 and "K"1,3, there are some other exceptional small graphs with the property that their line graph has a higher degree of symmetry than the graph itself. For instance, the diamond graph "K"1,1,2 (two triangles sharing an edge) has four graph automorphisms but its line graph "K"1,2,2 has eight. In the illustration of the diamond graph shown, rotating the graph by 90 degrees is not a symmetry of the graph, but is a symmetry of its line graph. However, all such exceptional cases have at most four vertices.
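Returning to the formal definition, the construction translates directly into code. A minimal sketch in plain Python (the names are illustrative, and the small example graph used here is not necessarily the one shown in the figures):

```python
from itertools import combinations

def line_graph(edges):
    """Given the edge set of an undirected graph G (pairs of vertices), return
    the line graph L(G) as an adjacency map: its vertices are the edges of G,
    and two of them are adjacent exactly when the corresponding edges of G
    share an endpoint."""
    verts = [frozenset(e) for e in edges]
    adj = {v: set() for v in verts}
    for e, f in combinations(verts, 2):
        if e & f:                      # the two edges share an endpoint in G
            adj[e].add(f)
            adj[f].add(e)
    return adj

G_edges = [(1, 2), (1, 3), (1, 4), (3, 4)]
L = line_graph(G_edges)
for v in sorted(L, key=sorted):
    print(sorted(v), "->", sorted(sorted(u) for u in L[v]))
# The vertex {1,3} of L(G) comes out adjacent to {1,2}, {1,4} and {3,4},
# mirroring the kind of adjacencies described in the example above.
```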
A strengthened version of the Whitney isomorphism theorem states that, for connected graphs with more than four vertices, there is a one-to-one correspondence between isomorphisms of the graphs and isomorphisms of their line graphs. Analogues of the Whitney isomorphism theorem have been proven for the line graphs of multigraphs, but are more complicated in this case. Strongly regular and perfect line graphs. The line graph of the complete graph Kn is also known as the triangular graph, the Johnson graph "J"("n", 2), or the complement of the Kneser graph "KG""n",2. Triangular graphs are characterized by their spectra, except for "n" = 8. They may also be characterized (again with the exception of "K"8) as the strongly regular graphs with parameters srg("n"("n" – 1)/2, 2("n" – 2), "n" – 2, 4). The three strongly regular graphs with the same parameters and spectrum as "L"("K"8) are the Chang graphs, which may be obtained by graph switching from "L"("K"8). The line graph of a bipartite graph is perfect (see Kőnig's theorem), but need not be bipartite as the example of the claw graph shows. The line graphs of bipartite graphs form one of the key building blocks of perfect graphs, used in the proof of the strong perfect graph theorem. A special case of these graphs are the rook's graphs, line graphs of complete bipartite graphs. Like the line graphs of complete graphs, they can be characterized with one exception by their numbers of vertices, numbers of edges, and number of shared neighbors for adjacent and non-adjacent points. The one exceptional case is "L"("K"4,4), which shares its parameters with the Shrikhande graph. When both sides of the bipartition have the same number of vertices, these graphs are again strongly regular. More generally, a graph G is said to be a line perfect graph if "L"("G") is a perfect graph. The line perfect graphs are exactly the graphs that do not contain a simple cycle of odd length greater than three. Equivalently, a graph is line perfect if and only if each of its biconnected components is either bipartite or of the form "K"4 (the tetrahedron) or "K"1,1,"n" (a book of one or more triangles all sharing a common edge). Every line perfect graph is itself perfect. Other related graph families. All line graphs are claw-free graphs, graphs without an induced subgraph in the form of a three-leaf tree. As with claw-free graphs more generally, every connected line graph "L"("G") with an even number of edges has a perfect matching; equivalently, this means that if the underlying graph G has an even number of edges, its edges can be partitioned into two-edge paths. The line graphs of trees are exactly the claw-free block graphs. These graphs have been used to solve a problem in extremal graph theory, of constructing a graph with a given number of edges and vertices whose largest tree induced as a subgraph is as small as possible. All eigenvalues of the adjacency matrix A of a line graph are at least −2. The reason for this is that A can be written as formula_0, where J is the signless incidence matrix of the pre-line graph and I is the identity. In particular, "A" + 2"I" is the Gramian matrix of a system of vectors: all graphs with this property have been called generalized line graphs. Characterization and recognition. Clique partition. For an arbitrary graph G, and an arbitrary vertex v in G, the set of edges incident to v corresponds to a clique in the line graph "L"("G"). The cliques formed in this way partition the edges of "L"("G"). 
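The spectral property stated above — that the adjacency matrix of a line graph can be written as formula_0 and therefore has no eigenvalue below −2 — can be checked numerically. A minimal sketch using NumPy (the example graph and variable names are illustrative):

```python
import numpy as np

# Edges of a small graph G (a 4-cycle plus one chord), on vertices 0..3.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n_vertices = 4

# Signless incidence matrix J: rows = vertices, columns = edges,
# J[v, e] = 1 when vertex v is an endpoint of edge e.
J = np.zeros((n_vertices, len(edges)))
for e, (u, v) in enumerate(edges):
    J[u, e] = 1
    J[v, e] = 1

# Adjacency matrix of the line graph: A = J^T J - 2 I.
A = J.T @ J - 2 * np.eye(len(edges))

eigenvalues = np.linalg.eigvalsh(A)
print(np.round(eigenvalues, 3))
print(eigenvalues.min() >= -2 - 1e-9)   # True: no eigenvalue lies below -2
```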
Each vertex of "L"("G") belongs to exactly two of these cliques (the two corresponding to the two endpoints of the corresponding edge in G). The existence of such a partition into cliques can be used to characterize the line graphs: A graph L is the line graph of some other graph or multigraph if and only if it is possible to find a collection of cliques in L (allowing some of the cliques to be single vertices) that partition the edges of L, such that each vertex of L belongs to exactly two of the cliques. It is the line graph of a graph (rather than a multigraph) if this set of cliques satisfies the additional condition that no two vertices of L are both in the same two cliques. Given such a family of cliques, the underlying graph G for which L is the line graph can be recovered by making one vertex in G for each clique, and an edge in G for each vertex in L with its endpoints being the two cliques containing the vertex in L. By the strong version of Whitney's isomorphism theorem, if the underlying graph G has more than four vertices, there can be only one partition of this type. For example, this characterization can be used to show that the following graph is not a line graph: In this example, the edges going upward, to the left, and to the right from the central degree-four vertex do not have any cliques in common. Therefore, any partition of the graph's edges into cliques would have to have at least one clique for each of these three edges, and these three cliques would all intersect in that central vertex, violating the requirement that each vertex appear in exactly two cliques. Thus, the graph shown is not a line graph. Forbidden subgraphs. Another characterization of line graphs was proven by Beineke (who had reported it earlier without proof). He showed that there are nine minimal graphs that are not line graphs, such that any graph that is not a line graph has one of these nine graphs as an induced subgraph. That is, a graph is a line graph if and only if no subset of its vertices induces one of these nine graphs. In the example above, the four topmost vertices induce a claw (that is, a complete bipartite graph "K"1,3), shown on the top left of the illustration of forbidden subgraphs. Therefore, by Beineke's characterization, this example cannot be a line graph. For graphs with minimum degree at least 5, only the six subgraphs in the left and right columns of the figure are needed in the characterization. Algorithms. Linear time algorithms have been described for recognizing line graphs and reconstructing their original graphs, and these methods have been generalized to directed graphs. A later dynamic algorithm described an efficient data structure for maintaining a dynamic graph, subject to vertex insertions and deletions, and maintaining a representation of the input as a line graph (when it exists) in time proportional to the number of changed edges at each step. The earlier linear time algorithms are based on characterizations of line graphs involving odd triangles (triangles in the line graph with the property that there exists another vertex adjacent to an odd number of triangle vertices). However, the dynamic algorithm uses only Whitney's isomorphism theorem.
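Since the claw "K"1,3 is one of Beineke's nine forbidden graphs, a quick necessary (but not sufficient) test for being a line graph is to search for an induced claw. A brute-force sketch, suitable for small graphs only (names are illustrative):

```python
from itertools import combinations

def has_induced_claw(adj):
    """Return True if the graph (given as {vertex: set_of_neighbours}) contains
    an induced K_{1,3}: a centre adjacent to three pairwise non-adjacent
    vertices.  Line graphs never contain one."""
    for centre, nbrs in adj.items():
        for a, b, c in combinations(nbrs, 3):
            if b not in adj[a] and c not in adj[a] and c not in adj[b]:
                return True
    return False

# The claw itself: centre 0 joined to 1, 2, 3.
claw = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(has_induced_claw(claw))      # True, so the claw is not a line graph

# A triangle is claw-free (it is the line graph of both K_3 and K_{1,3}).
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(has_induced_claw(triangle))  # False
```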
The dynamic algorithm is complicated by the need to recognize deletions that cause the remaining graph to become a line graph, but when specialized to the static recognition problem only insertions need to be performed: the vertices of the input are added one at a time, and the maintained representation of the underlying graph is updated after each insertion of a vertex v. Each step either takes constant time, or involves finding a vertex cover of constant size within a graph S whose size is proportional to the number of neighbors of v. Thus, the total time for the whole algorithm is proportional to the sum of the numbers of neighbors of all vertices, which (by the handshaking lemma) is proportional to the number of input edges. Iterating the line graph operator. van Rooij and Wilf consider the sequence of graphs formula_1 They show that, when G is a finite connected graph, only four behaviors are possible for this sequence: if G is a cycle graph, then "L"("G") and each subsequent graph in the sequence is isomorphic to G itself; if G is the claw "K"1,3, then "L"("G") and all subsequent graphs are triangles; if G is a path graph, then each subsequent graph is a shorter path, until the sequence eventually reaches the empty graph; in all remaining cases, the sizes of the graphs in the sequence increase without bound. If G is not connected, this classification applies separately to each component of G. For connected graphs that are not paths, all sufficiently high numbers of iteration of the line graph operation produce graphs that are Hamiltonian. Generalizations. Medial graphs and convex polyhedra. When a planar graph G has maximum vertex degree three, its line graph is planar, and every planar embedding of G can be extended to an embedding of "L"("G"). However, there exist planar graphs with higher degree whose line graphs are nonplanar. These include, for example, the 5-star "K"1,5, the gem graph formed by adding two non-crossing diagonals within a regular pentagon, and all convex polyhedra with a vertex of degree four or more. An alternative construction, the medial graph, coincides with the line graph for planar graphs with maximum degree three, but is always planar. It has the same vertices as the line graph, but potentially fewer edges: two vertices of the medial graph are adjacent if and only if the corresponding two edges are consecutive on some face of the planar embedding. The medial graph of the dual graph of a plane graph is the same as the medial graph of the original plane graph. For regular polyhedra or simple polyhedra, the medial graph operation can be represented geometrically by the operation of cutting off each vertex of the polyhedron by a plane through the midpoints of all its incident edges. This operation is known variously as the second truncation, degenerate truncation, or rectification. Total graphs. The total graph "T"("G") of a graph G has as its vertices the elements (vertices or edges) of G, and has an edge between two elements whenever they are either incident or adjacent. The total graph may also be obtained by subdividing each edge of G and then taking the square of the subdivided graph. Multigraphs. The concept of the line graph of G may naturally be extended to the case where G is a multigraph. In this case, the characterizations of these graphs can be simplified: the characterization in terms of clique partitions no longer needs to prevent two vertices from belonging to the same two cliques, and the characterization by forbidden graphs has seven forbidden graphs instead of nine. However, for multigraphs, there are larger numbers of pairs of non-isomorphic graphs that have the same line graphs. For instance a complete bipartite graph "K"1,"n" has the same line graph as the dipole graph and Shannon multigraph with the same number of edges. Nevertheless, analogues to Whitney's isomorphism theorem can still be derived in this case. Line digraphs. It is also possible to generalize line graphs to directed graphs.
If G is a directed graph, its directed line graph or line digraph has one vertex for each edge of G. Two vertices representing directed edges from u to v and from w to x in G are connected by an edge from uv to wx in the line digraph when "v" = "w". That is, each edge in the line digraph of G represents a length-two directed path in G. The de Bruijn graphs may be formed by repeating this process of forming directed line graphs, starting from a complete directed graph. Weighted line graphs. In a line graph "L"("G"), each vertex of degree k in the original graph G creates "k"("k" − 1)/2 edges in the line graph. For many types of analysis this means high-degree nodes in G are over-represented in the line graph "L"("G"). For instance, consider a random walk on the vertices of the original graph G. This will pass along some edge e with some frequency f. On the other hand, this edge e is mapped to a unique vertex, say v, in the line graph "L"("G"). If we now perform the same type of random walk on the vertices of the line graph, the frequency with which v is visited can be completely different from "f". If our edge e in G was connected to nodes of degree "O"("k"), it will be traversed "O"("k"2) more frequently in the line graph "L"("G"). Put another way, the Whitney graph isomorphism theorem guarantees that the line graph almost always encodes the topology of the original graph G faithfully but it does not guarantee that dynamics on these two graphs have a simple relationship. One solution is to construct a weighted line graph, that is, a line graph with weighted edges. There are several natural ways to do this. For instance if edges d and e in the graph G are incident at a vertex v with degree k, then in the line graph "L"("G") the edge connecting the two vertices d and e can be given weight 1/("k" − 1). In this way every edge in G (provided neither end is connected to a vertex of degree 1) will have strength 2 in the line graph "L"("G") corresponding to the two ends that the edge has in G. It is straightforward to extend this definition of a weighted line graph to cases where the original graph G was directed or even weighted. The principle in all cases is to ensure the line graph "L"("G") reflects the dynamics as well as the topology of the original graph G. Line graphs of hypergraphs. The edges of a hypergraph may form an arbitrary family of sets, so the line graph of a hypergraph is the same as the intersection graph of the sets from the family. Disjointness graph. The disjointness graph of G, denoted "D"("G"), is constructed in the following way: for each edge in G, make a vertex in "D"("G"); for every two edges in G that do "not" have a vertex in common, make an edge between their corresponding vertices in "D"("G"). In other words, "D"("G") is the complement graph of "L"("G"). A clique in "D"("G") corresponds to an independent set in "L"("G"), and vice versa. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
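The directed line graph construction described above is equally short in code. A minimal sketch (names are illustrative):

```python
def line_digraph(arcs):
    """Given the arcs of a directed graph D as (tail, head) pairs, return the
    arcs of its directed line graph: one vertex per arc of D, and an arc from
    (u, v) to (w, x) exactly when v == w, i.e. whenever the two arcs form a
    directed path of length two in D."""
    return [((u, v), (w, x)) for (u, v) in arcs for (w, x) in arcs if v == w]

# A directed triangle 0 -> 1 -> 2 -> 0.
D = [(0, 1), (1, 2), (2, 0)]
for tail, head in line_digraph(D):
    print(tail, "->", head)
# (0,1) -> (1,2), (1,2) -> (2,0), (2,0) -> (0,1): again a directed triangle.
```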
[ { "math_id": 0, "text": "A = J^\\mathsf{T}J - 2I" }, { "math_id": 1, "text": "G, L(G), L(L(G)), L(L(L(G))), \\dots.\\ " } ]
https://en.wikipedia.org/wiki?curid=675231
67525376
Representative layer theory
The concept of the representative layer came about through the work of Donald Dahm, with the assistance of Kevin Dahm and Karl Norris, to describe spectroscopic properties of particulate samples, especially as applied to near-infrared spectroscopy. A representative layer has the same void fraction as the sample it represents and each particle type in the sample has the same volume fraction and surface area fraction as does the sample as a whole. The spectroscopic properties of a representative layer can be derived from the spectroscopic properties of particles, which may be determined in a wide variety of ways. While a representative layer could be used in any theory that relies on the mathematics of plane parallel layers, there is a set of definitions and mathematics, some old and some new, which have become part of representative layer theory. Representative layer theory can be used to determine the spectroscopic properties of an assembly of particles from those of the individual particles in the assembly. The sample is modeled as a series of layers, each of which is parallel to the others and perpendicular to the incident beam. The mathematics of plane parallel layers is then used to extract the desired properties from the data, most notably that of the linear absorption coefficient which behaves in the manner of the coefficient in Beer's law. The representative layer theory gives a way of performing the calculations for new sample properties by changing the properties of a single layer of the particles, which doesn't require reworking the mathematics for a sample as a whole. History. The first attempt to account for transmission and reflection of a layered material was carried out by George G. Stokes in about 1860 and led to some very useful relationships. John W. Strutt (Lord Rayleigh) and Gustav Mie developed the theory of single scatter to a high degree, but Arthur Schuster was the first to consider multiple scatter. He was concerned with the cloudy atmospheres of stars, and developed a plane-parallel layer model in which the radiation field was divided into forward and backward components. This same model was used much later by Paul Kubelka and Franz Munk, whose names are usually attached to it by spectroscopists. Following WWII, the field of reflectance spectroscopy was heavily researched, both theoretically and experimentally. The remission function, formula_0, following Kubelka-Munk theory, was the leading contender as the metric of absorption analogous to the absorbance function in transmission absorption spectroscopy. The form of the K-M solution originally was: formula_1, but it was rewritten in terms of linear coefficients by some authors, becoming formula_2 , taking formula_3 and formula_4 as being equivalent to the linear absorption and scattering coefficients as they appear in the Bouguer-Lambert law, even though sources who derived the equations preferred the symbolism formula_5 and usually emphasized that formula_6 and formula_7 was a remission or back-scattering parameter, which for the case of diffuse scatter should properly be taken as an integral. In 1966, in a book entitled Reflectance Spectroscopy, Harry Hecht had pointed out that the formulation formula_8 led to formula_9, which enabled plotting formula_0 "against the wavelength or wave-number for a particular sample" giving a curve corresponding "to the real absorption determined by transmission measurements, except for a displacement by formula_10 in the ordinate direction."
However, in data presented, "the marked deviation in the remission function ... in the region of large extinction is obvious." He listed various reasons given by other authors for this "failure ... to remain valid in strongly absorbing materials", including: "incomplete diffusion in the scattering process"; failure to use "diffuse illumination; "increased proportion of regular reflection"; but concluded that "notwithstanding the above mentioned difficulties, ... the remission function should be a linear function of the concentration at a given wavelength for a constant particle size" though stating that "this discussion has been restricted entirely to the reflectance of homogeneous powder layers" though "equation systems for combination of inhomogeneous layers cannot be solved for the scattering and absorbing properties even in the simple case of a dual combination of sublayers. ... This means that the (Kubelka-Munk) theory fails to include, in an explicit manner, any dependence of reflection on particle size or shape or refractive index". The field of Near infrared spectroscopy (NIR) got its start in 1968, when Karl Norris and co-workers with the Instrumentation Research Lab of the U.S. Department of Agriculture first applied the technology to agricultural products. The USDA discovered how to use NIR empirically, based on available sources, gratings, and detector materials. Even the wavelength range of NIR was empirically set based on the operational range of a PbS detector. Consequently, it was not seen as a rigorous science: it had not evolved in the usual way, from research institutions to general usage. Even though the Kubelka-Munk theory provided a remission function that could have been used as the absorption metric, Norris selected formula_11 for convenience. He believed that the problem of non-linearity between the metric and concentration was due to particle size (a theoretical concern) and stray light (an instrumental effect). In qualitative terms, he would explain differences in spectra of different particle size as changes in the effective path length that the light traveled though the sample. In 1976, Hecht published an exhaustive evaluation of the various theories which were considered to be fairly general. In it, he presented his derivation of the Hecht finite difference formula by replacing the fundamental differential equations of the Kubelka-Munk theory by the finite difference equations, and obtained: formula_12 . He noted "it is well known that a plot of formula_0 versus formula_13 deviates from linearity for high values of formula_13, and it appears that (this equation) can be used to explain the deviations in part", and "represents an improvement in the range of validity and shows the need to consider the particulate nature of scattering media in developing a more precise theory by which absolute absorptivities can be determined." In 1982, Gerry Birth convened a meeting of experts in several areas that impacted NIR Spectroscopy, with emphasis on diffuse reflectance spectroscopy, no matter which portion of the electromagnetic spectrum might be used. This was the beginning of the International Diffuse Reflectance Conference. At this meeting was Harry Hecht, who may have at the time been the world's most knowledgeable person in the theory of diffuse reflectance. Gerry himself took many photographs illustrating various aspects of diffuse reflectance, many of which were not explainable with the best available theories. 
In 1987, Birth and Hecht wrote a joint article in a new handbook, which pointed a direction for future theoretical work. In 1994, Donald and Kevin Dahm began using numerical techniques to calculate remission and transmission from samples of varying numbers of plane parallel layers from absorption and remission fractions for a single layer. Using this entirely independent approach, they found a function that was the independent of the number of layers of the sample. This function, called the Absorption/Remission function and nick-named the ART function, is defined as: formula_14 . Besides the relationships displayed here, the formulas obtained for the general case are entirely consistent with the Stokes formulas, the equations of Benford, and Hecht's finite difference formula. For the special cases of infinitesimal or infinitely dilute particles, it gives results consistent with the Schuster equation for isotropic scattering and Kubelka–Munk equation. These equations are all for plane parallel layers using two light streams. This cumulative mathematics was tested on data collected using directed radiation on plastic sheets, a system that precisely matches the physical model of a series of plane parallel layers, and found to conform. The mathematics provided: 1) a method to use plane parallel mathematics to separate absorption and remission coefficients for a sample; 2) an Absorption/Remission function that is constant for all sample thickness; and 3) equations relating the absorption and remission of one thickness of sample to that of any other thickness. Mathematics of plane parallel layers in absorption spectroscopy. Using simplifying assumptions, the spectroscopic parameters (absorption, remission, and transmission fractions) of a plane parallel layer can be built from the refractive index of the material making up the layer, the linear absorption coefficient (absorbing power) of the material, and the thickness of the layer. While other assumptions could be made, those most often used are those of normal incidence of a directed beam of light, with internal and external reflection from the surface being the same. Determining the "A", "R", "T" fractions for a surface. For the special case where the incident radiation is normal (perpendicular) to a surface and the absorption is negligible, the intensity of the reflected and transmitted beams can be calculated from the refractive indices "η"1 and "η"2 of the two media, where r is the fraction of the incident light reflected, and t is the fraction of the transmitted light: formula_15 , formula_16 , with the fraction absorbed taken as zero ( formula_17 = 0 ). Illustration. For a beam of light traveling in air with an approximate index of refraction of 1.0, and encountering the surface of a material having an index of refraction of 1.5: formula_18 , formula_19 Determining the "A", "R", "T" fractions for a sheet. There is a simplified special case for the spectroscopic parameters of a sheet. This sheet consists of three plane parallel layers (1:front surface, 2:interior, 3:rear surface) in which the surfaces both have the same remission fraction when illuminated from either direction, regardless of the relative refractive indices of the two media on either side of the surface. 
For the case of zero absorption in the interior of such a sheet, the total remission and transmission from the layer can be determined from the infinite series, where formula_20 is the remission from the surface: formula_21 formula_22 formula_23 These formulas can be modified to account for absorption. Alternatively, the spectroscopic parameters of a sheet (or slab) can be built up from the spectroscopic parameters of the individual pieces that compose the layer: surface, interior, surface. This can be done using an approach developed by Kubelka for treatment of inhomogeneous layers. Using the example from the previous section: { "A"1 = 0, "R"1 = 0.04, "T"1 = 0.96 } {"A"3 = 0, "R"3 = 0.04, "T"3 = 0.96 }. We will assume the interior of the sheet is composed of a material that has Napierian absorption coefficient k of 0.5 cm−1, and the sheet is 1 mm thick ("d" = 1 mm). For this case, on a single trip through the interior, according to the Bouguer-Lambert law, formula_24, which according to our assumptions yields formula_25 and formula_26. Thus { "A"2 = 0.05, "R"2 = 0, "T"2 = 0.95 }. Then one of Benford's equations can be applied. If Ax, Rx and Tx are known for layer x and Ay, Ry and Ty are known for layer y, the ART fractions for a sample composed of layer x and layer y are: formula_27 formula_28 formula_29 (The symbol formula_30 means the reflectance of layer formula_31 when the direction of illumination is antiparallel to that of the incident beam. The difference in direction is important when dealing with inhomogeneous layers. This consideration was added by Paul Kubelka in 1954. He also pointed out that transmission was independent of the direction of illumination, but absorption and remission were not.) Illustration. Step 1: We take layer 1 as x, and layer 2 as y. By our assumptions in this case, { formula_32 }. formula_33 formula_34 formula_35 Step 2: We take the result from step 1 as the value for new x [ x is old x+y; (-x) is old y+x ], and the value for layer 3 as new y. formula_36 formula_37 formula_38 Dahm has shown that for this special case, the total amount of light absorbed by the interior of the sheet (considering surface remission) is the same as that absorbed in a single trip (independent of surface remission). This is borne out by the calculations. The decadic absorbance (formula_39) of the sheet is given by: formula_40 Determining the "A", "R", "T" fractions for "n" layers. The Stokes Formulas can be used to calculate the ART fractions for any number of layers. Alternatively, they can be calculated by successive application of Benford's equation for "one more layer". If "A"1, "R"1, and "T"1 are known for the representative layer of a sample, and An, Rn and Tn are known for a layer composed of n representative layers, the ART fractions for a layer with thickness of "n" + 1 are: formula_41 formula_42 formula_43 Illustration. In the above example, { formula_44 }. The Table shows the results of repeated application of the above formulas. Absorbing Power: The Scatter Corrected Absorbance of a sample. Within a homogeneous medium such as a solution, there is no scatter. For this case, the absorbance is linear with both the concentration of the absorbing species and the path-length. Additionally, the contributions of individual absorbing species are additive. For samples which scatter light, absorbance is defined as "the negative logarithm of one minus absorptance (absorption fraction: formula_45) as measured on a uniform sample". For decadic absorbance, this may be symbolized as: formula_46 .
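The two-step sheet illustration above can be reproduced with a short routine. The combining rule used below is the standard two-flux expression for stacking plane parallel layers, assumed here to correspond to the Benford relations written symbolically as formula_27, formula_28 and formula_29; the result agrees with the single-trip interior absorption of 0.05 noted by Dahm:

```python
def combine(x, y, r_x_back=None):
    """Combine two plane parallel layers.  x and y are (A, R, T) triples for
    illumination in the direction of travel; r_x_back is the remission of
    layer x when illuminated from behind (equal to R_x for a symmetric layer).
    Standard two-flux layer-addition form, assumed equivalent to Benford's
    equations as quoted in the text."""
    _, Rx, Tx = x
    _, Ry, Ty = y
    if r_x_back is None:
        r_x_back = Rx
    denom = 1.0 - r_x_back * Ry
    T = Tx * Ty / denom
    R = Rx + Tx * Tx * Ry / denom
    return (1.0 - R - T, R, T)

surface = (0.0, 0.04, 0.96)     # front and rear surfaces of the sheet
interior = (0.05, 0.0, 0.95)    # single pass through the interior

step1 = combine(surface, interior)              # front surface + interior
# Remission of the partial stack when lit from behind, needed for step 2:
r_back = interior[2] ** 2 * surface[1]          # 0.95**2 * 0.04 = 0.0361
sheet = combine(step1, surface, r_x_back=r_back)

print([round(v, 4) for v in sheet])  # roughly [0.0499, 0.0733, 0.8768]
# The total absorbed fraction is about 0.05, the same as in a single trip
# through the interior, as stated in the text.
```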
Even though the absorbance function just defined is useful with scattering samples, the function does not have the same desirable characteristics as it does for non-scattering samples. There is, however, a property called absorbing power which may be estimated for these samples. The absorbing power of a single unit thickness of material making up a scattering sample is the same as the absorbance of the same thickness of the material in the absence of scatter. Illustration. Suppose that we have a sample consisting of 14 of the sheets described above, each one of which has an absorbance of 0.0222. If we are able to estimate the absorbing power (the absorbance of a sample of the same thickness, but having no scatter) from the sample without knowing how many sheets are in the sample (as would be the general case), it would have the desirable property of being proportional to the thickness. In this case, we know that the absorbing power (scatter corrected absorbance) should be: {14 x the absorbance of a single sheet} formula_47 . This is the value we should have for the sample if the absorbance is to follow the law of Bouguer (often referred to as Beer's law). In the Table below, we see that the sample has the A,R,T values for the case of 14 sheets in the Table above. Because of the presence of scatter, the measured absorbance of the sample would be: formula_48. Then we calculate this for the half sample thickness using another of Benford's equations. If Ad, Rd and Td are known for a layer with thickness d, the ART fractions for a layer with thickness of "d"/2 are: formula_49 formula_50 formula_51 In the line for half sample [S/2], we see the values which are the same as those for 7 layers in the Table above, as we expect. Note that formula_52. We desire to have the absorbance be linear with sample thickness, but we find when we multiply this value by 2, we get formula_53 , which is a significant departure from the previous estimate for the absorbing power. The next iteration of the formula produces the estimate for A,R,T for a quarter sample: formula_54. Note that this time the calculation corresponds to three and a half layers, a thickness of sample that cannot exist physically. Continuing for the sequentially higher powers of two, we see a monotonically increasing estimate. Eventually the numbers will start jumping with round-off error, but one can stop when getting a constant value to a specified number of significant figures. In this case, the estimate becomes constant to 4 significant figures at 0.3105, which is our estimate for the absorbing power of the sample. This corresponds to our target value of 0.312 determined above. Expressing particulate mixtures as layers. If one wants to use a theory based on plane parallel layers, optimally the samples would be describable as layers. But a particulate sample often looks like a jumbled maze of particles of various sizes and shapes, showing no structured pattern of any kind, and certainly not literally divided into distinct, identical layers. Even so, it is a tenet of Representative Layer Theory that for spectroscopic purposes, we may treat the complex sample as if it were a series of layers, each one representative of the sample as a whole. Definition of a representative layer. To be representative, the layer must meet the following criteria: • The volume fraction of each type of particle is the same in the representative layer as in the sample as a whole. • The surface area fraction of each type of particle is the same in the representative layer as in the sample as a whole.
• The void fraction of the representative layer is the same as in the sample. • The representative layer is nowhere more than one particle thick. Note this means the “thickness” of the representative layer is not uniform. This criterion is imposed so that we can assume that a given photon of light has only one interaction with the layer. It might be transmitted, remitted, or absorbed as a result of this interaction, but it is assumed not to interact with a second particle within the same layer. In the above discussion, when we talk about a “type” of particle, we must clearly distinguish between particles of different composition. In addition, however, we must distinguish between particles of different sizes. Recall that scattering is envisioned as a surface phenomenon and absorption is envisioned as occurring at the molecular level throughout the particle. Consequently, our expectation is that the contribution of a “type” of particle to absorption will be proportional to the volume fraction of that particle in the sample, and the contribution of a “type” of particle to scattering will be proportional to the surface area fraction of that particle in the sample. This is why our “representative layer” criteria above incorporate both volume fraction and surface area fraction. Since small particles have larger surface area-to-volume ratios than large particles, it is necessary to distinguish between them. Determining spectroscopic properties of a representative layer. Under these criteria, we can propose a model for the fractions of incident light that are absorbed (formula_55), remitted (formula_56), and transmitted (formula_57) by one representative layer. formula_58 , formula_59 , formula_60 in which: • formula_61 is the fraction of cross-sectional surface area that is occupied by particles of type formula_62. • formula_63 is the effective absorption coefficient for particles of type formula_62. • formula_64 is the remission coefficient for particles of type formula_62. • formula_65 is the thickness of a particle of type formula_62 in the direction of the incident beam. • The summation is carried out over all of the distinct “types” of particle. In effect, formula_61represents the fraction of light that will interact with a particle of type formula_62, and formula_63and formula_64 quantify the likelihood of that interaction resulting in absorption and remission, respectively. Surface area fractions and volume fractions for each type of particle can be defined as follows: formula_66 , formula_67 , formula_68 , formula_69 in which: • formula_70 is the mass fraction of particles of type i in the sample. • formula_71 is the fraction of "occupied" volume composed of particles of type i. • formula_72 is the fraction of particle surface area that is composed of particles of type i. • formula_73 is the fraction of "total" volume composed of particles of type i. • formula_74 is the fraction of cross-sectional surface area that is composed of particles of type i. • formula_75 is the density of particles of type i. • formula_76 is the void fraction of the sample. This is a logical way of relating the spectroscopic behavior of a “representative layer” to the properties of the individual particles that make up the layer. The values of the absorption and remission coefficients represent a challenge in this modeling approach. 
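Before turning to how the absorption and remission coefficients themselves are obtained, a minimal numerical sketch of the layer relations above may be helpful. The Python fragment below uses two hypothetical particle types and a void fraction invented purely for illustration; it computes the cross-sectional surface fractions formula_61 and then the absorbed, remitted and transmitted fractions formula_55, formula_56 and formula_57 of one representative layer, with the transmitted fraction taken as the complement of the other two.

import math

# Hypothetical particle types: mass fraction w, density rho (g/cm^3),
# thickness d (cm), absorption coefficient k (1/cm), remission coefficient b (1/cm)
particles = [
    dict(w=0.7, rho=1.5, d=50e-4, k=20.0, b=2.0),
    dict(w=0.3, rho=1.2, d=20e-4, k=5.0, b=4.0),
]
v0 = 0.4                                   # assumed void fraction of the sample

# Surface-area fractions s_i and cross-sectional fractions S_i = (1 - v0) * s_i
den = sum(p["w"] / (p["rho"] * p["d"]) for p in particles)
for p in particles:
    p["S"] = (1 - v0) * (p["w"] / (p["rho"] * p["d"])) / den

# Fractions of incident light absorbed, remitted and transmitted by one layer
A1 = sum(p["S"] * (1 - math.exp(-p["k"] * p["d"])) for p in particles)
R1 = sum(p["S"] * p["b"] * p["d"] for p in particles)
T1 = 1 - A1 - R1
print(A1, R1, T1)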
Absorption is calculated from the fraction of light striking each type of particle and a “Beer’s law”-type calculation of the absorption by each type of particle, so the values of formula_77 used should ideally model the ability of the particle to absorb light, independent of other processes (scattering, remission) that also occur. We referred to this as the absorbing power in the section above. List of principal symbols used. Where a given letter is used in both capital and lower case form (r, R and t, T) the capital letter refers to the macroscopic observable and the lower case letter to the corresponding variable for an individual particle or layer of the material. Greek symbols are used for properties of a single particle.
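The absorbing-power estimate worked through in the Illustration above is straightforward to reproduce numerically. The Python sketch below starts only from the single-sheet values A = 0.05, R = 0.0733, T = 0.877 quoted earlier, builds the 14-sheet sample with Benford's layer-addition equations, and then applies the thickness-halving equations repeatedly; the rescaled absorbance climbs toward roughly 0.31, in line with the value stated above. The number of halvings printed is an arbitrary choice; as noted above, round-off error eventually makes the digits jump.

import math

A1, R1, T1 = 0.05, 0.0733, 0.877          # single-sheet values from the text

# Build the 14-sheet sample with Benford's equations for adding one sheet
A, R, T = A1, R1, T1
for _ in range(13):
    denom = 1 - R * R1
    T_next = T * T1 / denom
    R_next = R + T * T * R1 / denom
    A, R, T = 1 - T_next - R_next, R_next, T_next

print(-math.log10(1 - A))                  # measured absorbance, about 0.27

# Halve the thickness repeatedly and rescale the absorbance; the estimate
# of the absorbing power settles near 0.31 before round-off error sets in
Ad, Rd, Td, scale = A, R, T, 1
for _ in range(12):
    scale *= 2
    Rh = Rd / (1 + Td)
    Th = math.sqrt(Td * (1 - Rh * Rh))
    Ad, Rd, Td = 1 - Th - Rh, Rh, Th
    print(scale * -math.log10(1 - Ad))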
[ { "math_id": 0, "text": "F(R_\\infty)" }, { "math_id": 1, "text": "F(R_\\infty)\\equiv \\frac{(1-R_\\infty)^2}{2R_\\infty} = \\frac {a_0}{r_0} " }, { "math_id": 2, "text": " F(R_\\infty)\\equiv\\frac{(1-R_\\infty)^2}{2R_\\infty} = \\frac {k}{s}" }, { "math_id": 3, "text": "k" }, { "math_id": 4, "text": "s" }, { "math_id": 5, "text": " \\frac {K}S" }, { "math_id": 6, "text": "K=2k" }, { "math_id": 7, "text": "S" }, { "math_id": 8, "text": " F(R_\\infty) = \\frac {k}{s}" }, { "math_id": 9, "text": "\\log F(R_\\infty ) = \\log k - \\log s" }, { "math_id": 10, "text": "-\\log s" }, { "math_id": 11, "text": "\\log(1/R_\\infty)" }, { "math_id": 12, "text": "F(R_\\infty) = a\\biggl( \\frac {1}{r} - 1\\biggr) - \\frac {a^2}{2r}" }, { "math_id": 13, "text": "K" }, { "math_id": 14, "text": "A(R,T) \\equiv \\frac {(1-R_n)^2-T_n^2}{R_n} = \\frac {(2-a-2r)a}{r} = \\frac {a(1+t-r)}{r} = 2 F(R_\\infty) = \\frac {2a_0}{r_0} " }, { "math_id": 15, "text": "r = \\frac {(\\eta_2-\\eta_1)^2}{(\\eta_2+\\eta_1)^2}" }, { "math_id": 16, "text": "t= \\frac {4 \\eta_1\\eta_2}{(\\eta_2+\\eta_1)^2}" }, { "math_id": 17, "text": "a" }, { "math_id": 18, "text": "r = \\frac {(\\eta_2-\\eta_1)^2}{(\\eta_2+\\eta_1)^2} = \\frac {0.5^2}{2.5^2}= 0.04 " }, { "math_id": 19, "text": "t= \\frac {4 \\eta_1\\eta_2}{(\\eta_2+\\eta_1)^2} = \\frac {(4)(1)(1.5)}{2.5^2} = 0.96" }, { "math_id": 20, "text": "r_0" }, { "math_id": 21, "text": "A = 0,\\qquad" }, { "math_id": 22, "text": "R = r_0 + (1-r_0)^2 \\sum_{n=1}^\\infty r_0^{ (2n-1)},\\qquad" }, { "math_id": 23, "text": "T = (1-r_0)^2 \\sum_{n=0}^\\infty r_0^{ 2n}." }, { "math_id": 24, "text": "T=\\exp(-kd)" }, { "math_id": 25, "text": "T = \\exp (- 0.5\\ \\text{cm}^{-1}\\cdot 0.1\\ \\text{cm}) =0.95" }, { "math_id": 26, "text": "A = 0.05" }, { "math_id": 27, "text": "T_{x+y} = \\frac {T_x T_y}{1-R_{(-x)} R_y},\\qquad" }, { "math_id": 28, "text": "R_{x+y} = R_x + \\frac {T_x^2 R_y}{1-R_{(-x)} R_y},\\qquad" }, { "math_id": 29, "text": "A_{x+y} = 1 - T_{x+y} - R_{x+y}" }, { "math_id": 30, "text": "R_{(-x)}" }, { "math_id": 31, "text": "x" }, { "math_id": 32, "text": "R_1=R_{(-1)} = 0.04 , R_2=0 , T_1=0.96 , T_2=0.95" }, { "math_id": 33, "text": "T_{x+y} = \\frac {T_x T_y}{1-R_{(-x)} R_y}= \\frac {(0.96)(0.95)}{1-(0.04) 0} = 0.912 \\quad" }, { "math_id": 34, "text": "R_{x+y} = R_x + \\frac {T_x^2 R_y}{1-R_{(-x)} R_y} = 0.04 + \\frac {(0.96^2)0}{1-(0.04)0} = 0.04 \\quad" }, { "math_id": 35, "text": "R_{y+x} = R_y + \\frac {T_y^2 R_x}{1-R_{(-y)} R_x} = 0+\\frac {(0.95^2)(0.04)}{1-0(0.04)} = 0.0361" }, { "math_id": 36, "text": "T_{x+y} = \\frac {T_x T_y}{1-R_{(-x)} R_y} = \\frac {(0.912)(0.96)}{1-(0.0361)(0.04)} =0.877 = T_{123}\n" }, { "math_id": 37, "text": "R_{x+y} = R_x + \\frac {T_x^2 R_y}{1-R_{(-x)} R_y} = 0.04 + \\frac {(0.912^2)(0.04)}{1-(0.0361)(0.04)} = .0733 =R_{123}" }, { "math_id": 38, "text": "A_{123} = 1-R_{123}-T_{123} = 1 - 0.877 - .073 = 0.05 " }, { "math_id": 39, "text": "\\mathsf{\\Alpha b}_{10}" }, { "math_id": 40, "text": "\\mathsf{\\Alpha b}_{10} = -log(1-A_{123}) = 0.0222" }, { "math_id": 41, "text": "T_{n+1} = \\frac {T_n T_1}{1-R_n R_1},\\qquad" }, { "math_id": 42, "text": "R_{n+1} = R_n + \\frac {T_n^2 R_1}{1-R_n R_1},\\qquad" }, { "math_id": 43, "text": "A_{n+1} = 1 - T_{n+1} - R_{n+1}" }, { "math_id": 44, "text": "A_1 = 0.05, R_1=0.0733 , T_1 =0.877 " }, { "math_id": 45, "text": "\\alpha" }, { "math_id": 46, "text": "\\Alpha_{10}=-log_{10}(1-\\alpha)" }, { "math_id": 47, "text": "= (14)(0.0222) = 0.312" }, { "math_id": 48, "text": "\\mathsf{\\Alpha b}_{10} = 
-log(1-A_S) = -log(0.466) = 0.2728 " }, { "math_id": 49, "text": "R_{d/2} = \\frac {R_d}{1+T_d},\\qquad" }, { "math_id": 50, "text": "T_{d/2} = \\sqrt{T_d (1-R_{d/2}^2)},\\qquad" }, { "math_id": 51, "text": "A_{d/2} = 1 - T_{d/2} - R_{d/2}," }, { "math_id": 52, "text": "-log(1-A_{S/2}) = -log(1-0.292) = 0.150 " }, { "math_id": 53, "text": "(2)(0.150) = 0.300" }, { "math_id": 54, "text": "-log(1-0.162) \\times 4 =0.307 " }, { "math_id": 55, "text": "A_1" }, { "math_id": 56, "text": "R_1" }, { "math_id": 57, "text": "T_1" }, { "math_id": 58, "text": "A_1= \\sum S_j (1-exp(-k_jd_j))" }, { "math_id": 59, "text": "R_1=\\sum S_jb_jd_j" }, { "math_id": 60, "text": "T_1 = 1-R_1-T_1" }, { "math_id": 61, "text": "S_j" }, { "math_id": 62, "text": "j" }, { "math_id": 63, "text": "K_j" }, { "math_id": 64, "text": "b_j" }, { "math_id": 65, "text": "d_j" }, { "math_id": 66, "text": "v_i= \\frac { \\frac {w_i}{\\rho_i}} {\\sum \\frac {w_j}{\\rho_j}}" }, { "math_id": 67, "text": "s_i= \\frac { \\frac {w_i}{\\rho_i d_i}} {\\sum \\frac {w_j}{\\rho_jd_j}}" }, { "math_id": 68, "text": " V_i = (1-v_0)v_i" }, { "math_id": 69, "text": "S_i = (1-v_0)s_i" }, { "math_id": 70, "text": "w_i" }, { "math_id": 71, "text": "v_i" }, { "math_id": 72, "text": "s_i" }, { "math_id": 73, "text": "V_i" }, { "math_id": 74, "text": "S_i" }, { "math_id": 75, "text": "\\rho _i" }, { "math_id": 76, "text": "v_0" }, { "math_id": 77, "text": "K_i" }, { "math_id": 78, "text": "r" }, { "math_id": 79, "text": "t" } ]
https://en.wikipedia.org/wiki?curid=67525376
67526365
Gravitational contact terms
In quantum field theory, a contact term is a radiatively induced point-like interaction. These typically occur when the vertex for the emission of a massless particle such as a photon, a graviton, or a gluon is proportional to formula_0 (the squared invariant momentum of the radiated particle). This factor cancels the formula_1 of the Feynman propagator, and causes the exchange of the massless particle to produce a point-like formula_2-function effective interaction, rather than the usual formula_3 long-range potential. A notable example occurs in the weak interactions, where a W-boson radiative correction to a gluon vertex produces a formula_0 term, leading to what is known as a "penguin" interaction. The contact term then generates a correction to the full action of the theory. Contact terms occur in gravity when there are non-minimal interactions, formula_4, or in Brans–Dicke theory, formula_5. The non-minimal couplings are quantum equivalent to an "Einstein frame," with a pure Einstein-Hilbert action, formula_6, owing to gravitational contact terms. These arise classically from graviton exchange interactions. The contact terms are an essential, yet hidden, part of the action and, if they are ignored, the Feynman diagram loops in different frames yield different results. At leading order in formula_7, including the contact terms is equivalent to performing a Weyl transformation to remove the non-minimal couplings, taking the theory to the Einstein-Hilbert form. In this sense, the Einstein-Hilbert form of the action is unique and "frame ambiguities" in loop calculations do not exist. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " q^2 " }, { "math_id": 1, "text": " 1/q^2 " }, { "math_id": 2, "text": "\\delta" }, { "math_id": 3, "text": " \\sim 1/r " }, { "math_id": 4, "text": " ( M_{Planck}^2 +\\alpha \\phi^2) R " }, { "math_id": 5, "text": " ( M_{Planck}^2 +\\kappa M_{Planck}\\Phi) R " }, { "math_id": 6, "text": " M_{Planck}^2 R " }, { "math_id": 7, "text": "{1}/{M_{Planck}^2} " } ]
https://en.wikipedia.org/wiki?curid=67526365
67535153
General time- and transfer constant analysis
The general time- and transfer-constants (TTC) analysis is the generalized version of the Cochran-Grabel (CG) method, which itself is the generalized version of the zero-value time-constants (ZVT) method, which in turn is a generalization of the open-circuit time constant (OCT) method. While the other methods mentioned provide the various terms of only the denominator of an arbitrary transfer function, TTC can be used to determine every term, both in the numerator and the denominator. Its denominator terms are the same as those of the Cochran-Grabel method when stated in terms of time constants (when expressed in Rosenstark notation); however, the numerator terms are determined using a combination of transfer constants and time constants, where the time constants are the same as those in the CG method. Transfer constants are low-frequency ratios of the output variable to the input variable under different combinations of open- and short-circuited reactive elements. In general, a transfer function (which can characterize gain, admittance, impedance, trans-impedance, etc., based on the choice of the input and output variables) can be written as: formula_0 The denominator terms. The first denominator term formula_1 can be expressed as the sum of zero-value time constants (ZVTs): formula_2 where formula_3 is the time constant associated with the reactive element formula_4 when all the other reactive elements are zero-valued (hence the superscript '0'). Setting a capacitor value to zero corresponds to an open circuit, while a zero-valued inductor is a short circuit. So for the calculation of formula_3, all other capacitors are open-circuited and all other inductors are short-circuited. This is the essence of the ZVT method, which reduces to OCT when only capacitors are involved. All independent sources are also zero-valued during the time constant calculations (voltage sources short-circuited and current sources open-circuited). In this case, if the element in question (element formula_4) is a capacitor, the time constant is given by formula_5 and when element formula_4 is an inductor it is given by formula_6, where in both cases the resistance formula_7 is the resistance seen by element formula_4 (denoted by the subscript) when all the other elements are zero-valued (denoted by the zero superscript). The second-order denominator term is equal to: formula_8 where the second form is the often-used shorthand notation for a sum that does not repeat permutations (e.g., only one of the permutations formula_9 and formula_10 is counted). The second-order time constant formula_11 is simply the time constant associated with the reactive element formula_4 (where the subscript always denotes the index of the element in question) when element formula_12 is infinite-valued. In this notation, the superscript always denotes the index of the element (or elements) being infinite-valued, with a superscript of zero implying all elements are zero-valued. Infinite-valued capacitors are short circuits, while infinite-valued inductors are open circuits. In general, any denominator term can be expressed as: formula_13 where formula_14 is the time constant associated with element formula_4 when all the elements with an index in the superscript (i.e., formula_15) are infinite-valued (shorted capacitors and opened inductors). Usually, the higher-order time constants involve simpler calculations, as there are more infinite-valued elements involved during their calculations. The numerator terms. 
The major addition of the TTC method over the Cochran-Grabel method is its ability to calculate all the numerator terms in a similar fashion, using the same time constants used for the denominator calculation in conjunction with transfer constants, denoted formula_16. Transfer constants are low-frequency gains (or, in general, ratios of the output to input variables) under different combinations of reactive elements being zero- and infinite-valued. The notation uses the same convention, with all the elements whose indexes appear in the superscript of formula_17 being infinite-valued (shorted capacitors and opened inductors) and all unlisted elements zero-valued. The zeroth-order transfer constant formula_18 denotes the ratio of the output to input when all elements are zero-valued (hence the superscript of 0). Using the time constants and transfer constants, all terms of the numerator can be calculated. In particular: formula_19 which is the transfer constant when all elements are zero-valued (e.g., the dc gain). The first-order numerator term can be expressed as the sum of the products of first-order transfer constants formula_20 and their associated zero-value time constants formula_3, formula_21 where the transfer constant formula_20 is the ratio of the output to input when element formula_4 is infinite-valued and all others are zero-valued. Similarly, the second-order numerator term is given by formula_22 where, again, the transfer constant formula_23 is the transfer constant when both elements formula_4 and formula_12 are infinite-valued (all else zero-valued). In general, the formula_24th numerator term is given by: formula_25 This allows for full calculation of any transfer function to any degree of accuracy by generating a sufficient number of numerator and denominator terms using the above expressions.
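As a concrete, deliberately simple illustration of the recipe above, consider a hypothetical first-order circuit: an ideal source drives resistor R1 to the output node, capacitor C sits in parallel with R1 (from input to output), and R2 connects the output node to ground. With a single reactive element the transfer function is H(s) = (a0 + a1 s)/(1 + b1 s), where b1 is the zero-value time constant of C, a0 is the transfer constant with C zero-valued, and a1 is that time constant multiplied by the transfer constant with C infinite-valued. The Python sketch below (element values are arbitrary) evaluates the TTC expression and checks it against direct impedance analysis at one frequency.

import math

R1, R2, C = 1e3, 2e3, 1e-9                # hypothetical element values

# Transfer constants: C zero-valued (open) and C infinite-valued (short)
H0 = R2 / (R1 + R2)                        # plain resistive divider
H1 = 1.0                                   # C shorted ties the input to the output

# Zero-value time constant: C times the resistance it sees, R1 || R2
tau = C * (R1 * R2 / (R1 + R2))

a0, a1, b1 = H0, tau * H1, tau             # H(s) = (a0 + a1*s) / (1 + b1*s)

s = 1j * 2 * math.pi * 1e6                 # spot check at 1 MHz
H_ttc = (a0 + a1 * s) / (1 + b1 * s)
Zc = 1 / (s * C)
H_direct = R2 / (R2 + R1 * Zc / (R1 + Zc)) # R1 in parallel with C, then divider
print(abs(H_ttc - H_direct))               # agrees to machine precision

References. &lt;templatestyles src="Reflist/styles.css" /&gt;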
[ { "math_id": 0, "text": " H(s)= \\frac{a_0+a_1 s+a_2s^2+\\ldots+a_ms^m}{1+b_1s+b_2s^2+ \\ldots +b_ns^n} " }, { "math_id": 1, "text": "b_1" }, { "math_id": 2, "text": " b_1 = \\sum_{i=1}^N \\tau_i^0" }, { "math_id": 3, "text": "\\tau_i^0" }, { "math_id": 4, "text": "i" }, { "math_id": 5, "text": " \\tau_i^0= R_i^0 C_i" }, { "math_id": 6, "text": " \\tau_i^0= L_i/R_i^0" }, { "math_id": 7, "text": "R_i^0" }, { "math_id": 8, "text": " b_2= \\sum_{i=1}^{N-1}\\sum_{j=i+1}^{N} \\tau_i^0 \\tau_j^i=\\sum_i^{1\\leqslant i}\\sum_j^{<j\\leqslant N} \\tau_i^0 \\tau_j^i" }, { "math_id": 9, "text": "\\tau_i^0 \\tau_j^i" }, { "math_id": 10, "text": "\\tau_j^0 \\tau_i^j" }, { "math_id": 11, "text": "\\tau_i^j" }, { "math_id": 12, "text": "j" }, { "math_id": 13, "text": " b_n =\\sum_i^{1\\leqslant i<}\\sum_j^{j<k}\\sum_{k \\cdots}^{\\ldots \\leqslant N} \\ldots \\tau_i^0 \\tau_j^i \\tau_k^{ij} \\ldots " }, { "math_id": 14, "text": "\\tau_i^{jkl\\ldots}" }, { "math_id": 15, "text": "jkl\\ldots" }, { "math_id": 16, "text": "H^{ijk\\ldots}" }, { "math_id": 17, "text": "H" }, { "math_id": 18, "text": "H^0" }, { "math_id": 19, "text": " a_0 = H^0 " }, { "math_id": 20, "text": "H^i" }, { "math_id": 21, "text": "a_1=\\sum_{i=1}^N \\tau_i^0 H^i" }, { "math_id": 22, "text": "a_2=\\sum_i^{1\\leqslant i}\\sum_j^{<j\\leqslant N} \\tau_i^0 \\tau_j^i H^{ij}" }, { "math_id": 23, "text": "H^{ij}" }, { "math_id": 24, "text": "n" }, { "math_id": 25, "text": "a_n =\\sum_i^{1\\leqslant i<}\\sum_j^{j<k}\\sum_{k \\cdots}^{\\ldots \\leqslant N} \\ldots \\tau_i^0 \\tau_j^i \\tau_k^{ij} \\ldots H^{ijk \\ldots}" } ]
https://en.wikipedia.org/wiki?curid=67535153
6753623
Exchangeable random variables
Concept in statistics In statistics, an exchangeable sequence of random variables (also sometimes interchangeable) is a sequence "X"1, "X"2, "X"3, ... (which may be finitely or infinitely long) whose joint probability distribution does not change when the positions in the sequence in which finitely many of them appear are altered. In other words, the joint distribution is invariant to finite permutation. Thus, for example the sequences formula_0 both have the same joint probability distribution. It is closely related to the use of independent and identically distributed random variables in statistical models. Exchangeable sequences of random variables arise in cases of simple random sampling. Definition. Formally, an exchangeable sequence of random variables is a finite or infinite sequence "X"1, "X"2, "X"3, ... of random variables such that for any finite permutation σ of the indices 1, 2, 3, ..., (the permutation acts on only finitely many indices, with the rest fixed), the joint probability distribution of the permuted sequence formula_1 is the same as the joint probability distribution of the original sequence. (A sequence "E"1, "E"2, "E"3, ... of events is said to be exchangeable precisely if the sequence of its indicator functions is exchangeable.) The distribution function "F""X"1...,"X""n"("x"1, ..., "x""n") of a finite sequence of exchangeable random variables is symmetric in its arguments "x"1, ..., "x""n". Olav Kallenberg provided an appropriate definition of exchangeability for continuous-time stochastic processes. History. The concept was introduced by William Ernest Johnson in his 1924 book "Logic, Part III: The Logical Foundations of Science". Exchangeability is equivalent to the concept of statistical control introduced by Walter Shewhart also in 1924. Exchangeability and the i.i.d. statistical model. The property of exchangeability is closely related to the use of independent and identically distributed (i.i.d.) random variables in statistical models. A sequence of random variables that are i.i.d, conditional on some underlying distributional form, is exchangeable. This follows directly from the structure of the joint probability distribution generated by the i.i.d. form. Mixtures of exchangeable sequences (in particular, sequences of i.i.d. variables) are exchangeable. The converse can be established for infinite sequences, through an important representation theorem by Bruno de Finetti (later extended by other probability theorists such as Halmos and Savage). The extended versions of the theorem show that in any infinite sequence of exchangeable random variables, the random variables are conditionally independent and identically-distributed, given the underlying distributional form. This theorem is stated briefly below. (De Finetti's original theorem only showed this to be true for random indicator variables, but this was later extended to encompass all sequences of random variables.) Another way of putting this is that de Finetti's theorem characterizes exchangeable sequences as mixtures of i.i.d. sequences—while an exchangeable sequence need not itself be unconditionally i.i.d., it can be expressed as a mixture of underlying i.i.d. sequences. This means that infinite sequences of exchangeable random variables can be regarded equivalently as sequences of conditionally i.i.d. random variables, based on some underlying distributional form. (Note that this equivalence does not quite hold for finite exchangeability. 
However, for finite vectors of random variables there is a close approximation to the i.i.d. model.) An infinite exchangeable sequence is strictly stationary and so a law of large numbers in the form of Birkhoff–Khinchin theorem applies. This means that the underlying distribution can be given an operational interpretation as the limiting empirical distribution of the sequence of values. The close relationship between exchangeable sequences of random variables and the i.i.d. form means that the latter can be justified on the basis of infinite exchangeability. This notion is central to Bruno de Finetti's development of predictive inference and to Bayesian statistics. It can also be shown to be a useful foundational assumption in frequentist statistics and to link the two paradigms. The representation theorem: This statement is based on the presentation in O'Neill (2009) in references below. Given an infinite sequence of random variables formula_2 we define the limiting empirical distribution function formula_3 by formula_4 (This is the Cesàro limit of the indicator functions. In cases where the Cesàro limit does not exist this function can actually be defined as the Banach limit of the indicator functions, which is an extension of this limit. This latter limit always exists for sums of indicator functions, so that the empirical distribution is always well-defined.) This means that for any vector of random variables in the sequence we have joint distribution function given by formula_5 If the distribution function formula_3 is indexed by another parameter formula_6 then (with densities appropriately defined) we have formula_7 These equations show the joint distribution or density characterised as a mixture distribution based on the underlying limiting empirical distribution (or a parameter indexing this distribution). Note that not all finite exchangeable sequences are mixtures of i.i.d. To see this, consider sampling without replacement from a finite set until no elements are left. The resulting sequence is exchangeable, but not a mixture of i.i.d. Indeed, conditioned on all other elements in the sequence, the remaining element is known. Covariance and correlation. Exchangeable sequences have some basic covariance and correlation properties which mean that they are generally positively correlated. For infinite sequences of exchangeable random variables, the covariance between the random variables is equal to the variance of the mean of the underlying distribution function. For finite exchangeable sequences the covariance is also a fixed value which does not depend on the particular random variables in the sequence. There is a weaker lower bound than for infinite exchangeability and it is possible for negative correlation to exist. Covariance for exchangeable sequences (infinite): If the sequence formula_8 is exchangeable, then formula_9 Covariance for exchangeable sequences (finite): If formula_10 is exchangeable with formula_11, then formula_12 The finite sequence result may be proved as follows. Using the fact that the values are exchangeable, we have formula_13 We can then solve the inequality for the covariance yielding the stated lower bound. The non-negativity of the covariance for the infinite sequence can then be obtained as a limiting result from this finite sequence result. Equality of the lower bound for finite sequences is achieved in a simple urn model: An urn contains 1 red marble and "n" − 1 green marbles, and these are sampled without replacement until the urn is empty. 
Let "X""i" = 1 if the red marble is drawn on the "i"-th trial and 0 otherwise. A finite sequence that achieves the lower covariance bound cannot be extended to a longer exchangeable sequence. Applications. The von Neumann extractor is a randomness extractor that depends on exchangeability: it gives a method to take an exchangeable sequence of 0s and 1s (Bernoulli trials), with some probability "p" of 0 and formula_28 of 1, and produce a (shorter) exchangeable sequence of 0s and 1s with probability 1/2. Partition the sequence into non-overlapping pairs: if the two elements of the pair are equal (00 or 11), discard it; if the two elements of the pair are unequal (01 or 10), keep the first. This yields a sequence of Bernoulli trials with formula_29 as, by exchangeability, the odds of a given pair being 01 or 10 are equal. Exchangeable random variables arise in the study of U statistics, particularly in the Hoeffding decomposition. Exchangeability is a key assumption of the distribution-free inference method of conformal prediction. Refererences. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " X_1, X_2, X_3, X_4, X_5, X_6 \\quad \\text{ and } \\quad X_3, X_6, X_1, X_5, X_2, X_4 " }, { "math_id": 1, "text": " X_{\\sigma(1)}, X_{\\sigma(2)}, X_{\\sigma(3)}, \\dots" }, { "math_id": 2, "text": "\\mathbf{X}=(X_1,X_2,X_3,\\ldots)" }, { "math_id": 3, "text": "F_\\mathbf{X}" }, { "math_id": 4, "text": "F_\\mathbf{X}(x) = \\lim_{n\\to\\infty} \\frac{1}{n} \\sum_{i=1}^n I(X_i \\le x)." }, { "math_id": 5, "text": "\\Pr (X_1 \\le x_1,X_2 \\le x_2,\\ldots,X_n \\le x_n) = \\int \\prod_{i=1}^n F_\\mathbf{X}(x_i)\\,dP(F_\\mathbf{X})." }, { "math_id": 6, "text": "\\theta" }, { "math_id": 7, "text": "p_{X_1,\\ldots,X_n}(x_1,\\ldots,x_n) = \\int \\prod_{i=1}^n p_{X_i}(x_i\\mid\\theta)\\,dP(\\theta)." }, { "math_id": 8, "text": "X_1,X_2,X_3,\\ldots" }, { "math_id": 9, "text": " \\operatorname{cov} (X_i,X_j) = \\operatorname{var} (\\operatorname{E}(X_i\\mid F_\\mathbf{X})) = \\operatorname{var} (\\operatorname{E}(X_i\\mid\\theta)) \\ge 0 \\quad\\text{for }i \\ne j." }, { "math_id": 10, "text": "X_1,X_2,\\ldots,X_n" }, { "math_id": 11, "text": "\\sigma^2 = \\operatorname{var} (X_i)" }, { "math_id": 12, "text": " \\operatorname{cov} (X_i,X_j) \\ge - \\frac{\\sigma^2}{n-1} \\quad\\text{for }i \\ne j." }, { "math_id": 13, "text": "\n\\begin{align}\n0 & \\le \\operatorname{var}(X_1 + \\cdots + X_n) \\\\\n& = \\operatorname{var}(X_1) + \\cdots + \\operatorname{var}(X_n) + \\underbrace{\\operatorname{cov}(X_1,X_2) + \\cdots\\quad{}}_\\text{all ordered pairs} \\\\\n& = n\\sigma^2 + n(n-1)\\operatorname{cov}(X_1,X_2).\n\\end{align}\n" }, { "math_id": 14, "text": "n" }, { "math_id": 15, "text": "m" }, { "math_id": 16, "text": "X_i" }, { "math_id": 17, "text": "i" }, { "math_id": 18, "text": "\\left\\{ X_i \\right\\}_{i=1, \\dots, n+m}" }, { "math_id": 19, "text": "\\left\\{ X_i \\right\\}_{i\\in \\N}" }, { "math_id": 20, "text": "(X, Y)" }, { "math_id": 21, "text": "\\mu = 0" }, { "math_id": 22, "text": "\\sigma_x = \\sigma_y = 1" }, { "math_id": 23, "text": "\\rho\\in (-1, 1)" }, { "math_id": 24, "text": "X" }, { "math_id": 25, "text": "Y" }, { "math_id": 26, "text": "\\rho=0" }, { "math_id": 27, "text": "p(x, y) = p(y, x) \\propto \\exp\\left[-\\frac{1}{2(1-\\rho^2)}(x^2+y^2-2\\rho xy)\\right]." }, { "math_id": 28, "text": "q=1-p" }, { "math_id": 29, "text": "p=1/2," } ]
https://en.wikipedia.org/wiki?curid=6753623
675364
Voltage regulation
Concept in electrical engineering In electrical engineering, particularly power engineering, voltage regulation is a measure of change in the voltage magnitude between the sending and receiving end of a component, such as a transmission or distribution line. Voltage regulation describes the ability of a system to provide near constant voltage over a wide range of load conditions. The term may refer to a passive property that results in more or less voltage drop under various load conditions, or to the active intervention with devices for the specific purpose of adjusting voltage. Electrical power systems. In electrical power systems, voltage regulation is a dimensionless quantity defined at the receiving end of a transmission line as: formula_0 where "Vnl" is voltage at no load and "Vfl" is voltage at full load. The percent voltage regulation of an ideal transmission line, as defined by a transmission line with zero resistance and reactance, would equal zero due to "Vnl" equaling "Vfl" as a result of there being no voltage drop along the line. This is why a smaller value of "Voltage Regulation" is usually beneficial, indicating that the line is closer to ideal. The Voltage Regulation formula could be visualized with the following: "Consider power being delivered to a load such that the voltage at the load is the load's rated voltage "VRated", if then the load disappears, the voltage at the point of the load will rise to "Vnl"." Voltage regulation in transmission lines occurs due to the impedance of the line between its sending and receiving ends. Transmission lines intrinsically have some amount of resistance, inductance, and capacitance that all change the voltage continuously along the line. Both the magnitude and phase angle of voltage change along a real transmission line. The effects of line impedance can be modeled with simplified circuits such as the short line approximation (least accurate), the medium line approximation (more accurate), and the long line approximation (most accurate). The short line approximation ignores capacitance of the transmission line and models the resistance and reactance of the transmission line as a simple series resistor and inductor. This combination has impedance R + jωL or R + jX. There is a single line current I = IS = IR in the short line approximation, different from the medium and long line. The medium length line approximation takes into account the shunt admittance, usually pure capacitance, by distributing half the admittance at the sending and receiving end of the line. This configuration is often referred to as a nominal - π. The long line approximation takes these lumped impedance and admittance values and distributes them uniformly along the length of the line. The long line approximation therefore requires the solving of differential equations and results in the highest degree of accuracy. In the voltage regulation formula, Vno load is the voltage measured at the receiving end terminals when the receiving end is an open circuit. The entire short line model is an open circuit in this condition, and no current flows in an open circuit, so I = 0 A and the voltage drop across the line given by Ohm’s law Vline drop = IZline is 0 V. The sending and receiving end voltages are thus the same. This value is what the voltage at the receiving end would be if the transmission line had no impedance. The voltage would not be changed at all by the line, which is an ideal scenario in power transmission. 
Vfull load is the voltage across the load at the receiving end when the load is connected and current flows in the transmission line. Now Vline drop = IZline is nonzero, so the voltages and the sending and receiving ends of the transmission line are not equal. The current I can be found by solving Ohm’s law using a combined line and load impedance: formula_1. Then the VR, full load is given by formula_2. The effects of this modulation on voltage magnitude and phase angle is illustrated using phasor diagrams that map VR, VS, and the resistive and inductive components of Vline drop. Three power factor scenarios are shown, where (a) the line serves an inductive load so the current lags receiving end voltage, (b) the line serves a completely real load so the current and receiving end voltage are in phase, and (c) the line serves a capacitive load so the current leads receiving end voltage. In all cases the line resistance R causes a voltage drop that is in phase with current, and the reactance of the line X causes a voltage drop that leads current by 90 degrees. These successive voltage drops are summed to the receiving end voltage, tracing backward from VR to VS in the short line approximation circuit. The vector sum of VR and the voltage drops equals VS, and it is apparent in the diagrams that VS does not equal VR in magnitude or phase angle. The diagrams show that the phase angle of current in the line affects voltage regulation significantly. Lagging current in (a) makes the required magnitude of sending end voltage quite large relative to the receiving end. The phase angle difference between sending and receiving end is minimized, however. Leading current in (c) actually allows the sending end voltage magnitude be smaller than the receiving end magnitude, so the voltage counterintuitively increases along the line. In-phase current in (b) does little to affect the magnitude of voltage between sending and receiving ends, but the phase angle shifts considerably. Real transmission lines typically serve inductive loads, which are the motors that exist everywhere in modern electronics and machines. Transferring a large amount of reactive power Q to inductive loads makes the line current lag voltage, and the voltage regulation is characterized by decrease in voltage magnitude. In transferring a large amount of real power P to real loads, current is mostly in phase with voltage. The voltage regulation in this scenario is characterized by a decrease in phase angle rather than magnitude. Sometimes, the term voltage regulation is used to describe processes by which the quantity "VR" is reduced, especially concerning special circuits and devices for this purpose (see below). Electronic power supply parameters. The quality of a system's voltage regulation is described by three main parameters: Distribution feeder regulation. Electric utilities aim to provide service to customers at a specific voltage level, for example, 220 V or 240 V. However, due to Kirchhoff's Laws, the voltage magnitude and thus the service voltage to customers will in fact vary along the length of a conductor such as a distribution feeder (see Electric power distribution). Depending on law and local practice, actual service voltage within a tolerance band such as ±5% or ±10% may be considered acceptable. 
In order to maintain voltage within tolerance under changing load conditions, various types of devices are traditionally employed: A new generation of devices for voltage regulation based on solid-state technology are in the early commercialization stages. Distribution regulation involves a "regulation point": the point at which the equipment tries to maintain constant voltage. Customers further than this point observe an expected effect: higher voltage at light load, and lower voltage at high load. Customers closer than this point experience the opposite effect: higher voltage at high load, and lower voltage at light load. Complications due to distributed generation. Distributed generation, in particular photovoltaics connected at the distribution level, presents a number of significant challenges for voltage regulation. Conventional voltage regulation equipment works under the assumption that line voltage changes predictably with distance along the feeder. Specifically, feeder voltage drops with increasing distance from the substation due to line impedance and the rate of voltage drop decreases farther away from the substation. However, this assumption may not hold when DG is present. For example, a long feeder with a high concentration of DG at the end will experience significant current injection at points where the voltage is normally lowest. If the load is sufficiently low, current will flow in the reverse direction (i.e. towards the substation), resulting in a voltage profile that increases with distance from the substation. This inverted voltage profile may confuse conventional controls. In one such scenario, load tap changers expecting voltage to decrease with distance from the substation may choose an operating point that in fact causes voltage down the line to exceed operating limits. The voltage regulation issues caused by DG at the distribution level are complicated by lack of utility monitoring equipment along distribution feeders. The relative scarcity of information on distribution voltages and loads makes it difficult for utilities to make adjustments necessary to keep voltage levels within operating limits. Although DG poses a number of significant challenges for distribution level voltage regulation, if combined with intelligent power electronics DG can actually serve to enhance voltage regulation efforts. One such example is PV connected to the grid through inverters with volt-VAR control. In a study conducted jointly by the National Renewable Energy Laboratory (NREL) and Electric Power Research Institute (EPRI), when volt-VAR control was added to a distribution feeder with 20% PV penetration, the diurnal voltage swings on the feeder were significantly reduced. Transformers. One case of voltage regulation is in a transformer. The unideal components of the transformer cause a change in voltage when current flows. Under no load, when no current flows through the secondary coils, "Vnl" is given by the ideal model, where "VS = VP*NS/NP". Looking at the equivalent circuit and neglecting the shunt components, as is a reasonable approximation, one can refer all resistance and reactance to the secondary side and clearly see that the secondary voltage at no load will indeed be given by the ideal model. In contrast, when the transformer delivers full load, a voltage drop occurs over the winding resistance, causing the terminal voltage across the load to be lower than anticipated. 
By the definition above, this leads to a nonzero voltage regulation which must be considered in the use of the transformer.
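A short numerical sketch may make the definition concrete. The values below are invented purely for illustration; Z stands for the series impedance of a short transmission line or for a transformer's winding impedance referred to the secondary, and disconnecting the load gives the no-load voltage.

# Percent voltage regulation of a series impedance feeding a load (Python)
VS = 240 + 0j                 # sending-end (or ideal no-load secondary) voltage, volts
Z = 0.5 + 1.5j                # assumed series line/winding impedance, ohms
Zload = 10 + 3j               # assumed full-load impedance, ohms (lagging power factor)

V_nl = VS                                  # no load: no current, so no drop across Z
V_fl = VS * Zload / (Z + Zload)            # full load: voltage-divider result

VR = (abs(V_nl) - abs(V_fl)) / abs(V_fl) * 100
print(round(VR, 2), "percent")

References. &lt;templatestyles src="Reflist/styles.css" /&gt;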
[ { "math_id": 0, "text": "\\text{Percent } VR = \\frac{|V_{nl}| - |V_{fl}|}{|V_{fl}|} \\times 100" }, { "math_id": 1, "text": "I = \\frac{V_{S}}{Z_{line} + Z_{load}}" }, { "math_id": 2, "text": "V_{S} - \\frac{V_{S}Z_{line}}{Z_{line} + Z_{load}}" } ]
https://en.wikipedia.org/wiki?curid=675364
67541582
Transfer constant
Transfer constants are low-frequency gains (or, in general, ratios of the output to input variables) evaluated under different combinations of shorting and opening of the reactive elements in the circuit (i.e., capacitors and inductors). They are used in general time- and transfer constant (TTC) analysis to determine the numerator terms, and hence the zeros, of the transfer function. The transfer constants are calculated under the same zero- and infinite-value conditions of the reactive elements used in the Cochran-Grabel (CG) method to calculate time constants, but evaluating the low-frequency transfer function from a defined input source to the output terminal instead of the resistance seen by the reactive elements. Transfer constants are written as formula_0, where the superscripts formula_1 are the indexes of the elements that are infinite-valued (short-circuited capacitors and open-circuited inductors) in the calculation of the transfer constant, with the remaining elements zero-valued. The zeroth-order transfer constant formula_2 denotes the ratio of the output to input when all elements are zero-valued (hence the superscript of 0). formula_2 often corresponds to the dc gain of the system. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "H^{ijk\\ldots}" }, { "math_id": 1, "text": "ijk\\ldots" }, { "math_id": 2, "text": "H^0" } ]
https://en.wikipedia.org/wiki?curid=67541582
67554277
Rectangular lattice
2-dimensional lattice The rectangular lattice and rhombic lattice (or centered rectangular lattice) constitute two of the five two-dimensional Bravais lattice types. The symmetry categories of these lattices are wallpaper groups pmm and cmm respectively. The conventional translation vectors of the rectangular lattices form an angle of 90° and are of unequal lengths. Bravais lattices. There are two rectangular Bravais lattices: primitive rectangular and centered rectangular (also rhombic). The primitive rectangular lattice can also be described by a centered rhombic unit cell, while the centered rectangular lattice can also be described by a primitive rhombic unit cell. Note that the length formula_0 in the lower row is not the same as in the upper row. For the first column above, formula_0 of the second row equals formula_1 of the first row, and for the second column it equals formula_2. Crystal classes. The "rectangular lattice" class names, Schönflies notation, Hermann-Mauguin notation, orbifold notation, Coxeter notation, and wallpaper groups are listed in the table below. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "a" }, { "math_id": 1, "text": "\\sqrt{a^2+b^2}" }, { "math_id": 2, "text": "\\frac{1}{2} \\sqrt{a^2+b^2}" } ]
https://en.wikipedia.org/wiki?curid=67554277
6755482
Membrane reactor
A membrane reactor is a physical device that combines a chemical conversion process with a membrane separation process to add reactants or remove products of the reaction. Chemical reactors making use of membranes are usually referred to as membrane reactors. The membrane can be used for different tasks. Membrane reactors are an example of the combination of two unit operations in one step, e.g., membrane filtration with the chemical reaction. The integration of the reaction section with selective extraction of a product allows an enhancement of the conversion compared to the equilibrium value. This characteristic makes membrane reactors suitable for performing equilibrium-limited endothermic reactions. Benefits and critical issues. Selective membranes inside the reactor lead to several benefits: the reactor section replaces several downstream separation processes. Moreover, removing a product makes it possible to exceed thermodynamic limitations. In this way, it is possible to reach higher conversions of the reactants or to obtain the same conversion at a lower temperature. Reversible reactions are usually limited by thermodynamics: when the direct and reverse reactions, whose rates depend on the reactant and product concentrations, are balanced, a chemical equilibrium state is achieved. If temperature and pressure are fixed, this equilibrium state constrains the ratio of product to reactant concentrations, obstructing the possibility of reaching higher conversions. This limit can be overcome by removing a product of the reaction: in this way, the system cannot reach equilibrium and the reaction continues, reaching higher conversions (or the same conversion at a lower temperature). Nevertheless, there are several hurdles to industrial commercialization, due to technical difficulties in designing membranes with long-term stability and due to the high cost of membranes. Moreover, there is no single process that leads the technology, even though in recent years it has been successfully applied to hydrogen production and hydrocarbon dehydrogenation. Reactor configurations. Generally, membrane reactors can be classified based on the membrane position and the reactor configuration. Usually there is a catalyst inside: if the catalyst is installed inside the membrane, the reactor is called a "catalytic membrane reactor" (CMR); if the catalyst (and its support) are packed and fixed inside, the reactor is called a "packed bed membrane reactor"; if the speed of the gas is high enough and the particle size is small enough, fluidization of the bed occurs and the reactor is called a fluidized bed membrane reactor. Other types of reactor take their name from the membrane material, e.g., the zeolite membrane reactor. Among these configurations, greater attention in recent years, particularly for hydrogen production, has been given to the fixed bed and fluidized bed types: in these cases the standard reactor is simply integrated with membranes inside the reaction space. Membrane reactors for hydrogen production. Today hydrogen is mainly used in the chemical industry as a reactant in ammonia production and methanol synthesis, and in refinery processes for hydrocracking. Moreover, there is a growing interest in its use as an energy carrier and as a fuel in fuel cells. More than 50% of hydrogen is currently produced from steam reforming of natural gas, due to low costs and the fact that it is a mature technology. 
Traditional processes are composed of a steam reforming section to produce syngas from natural gas, two water gas shift reactors which enrich the hydrogen in the syngas, and a pressure swing adsorption unit for hydrogen purification. Membrane reactors achieve a process intensification by including all these sections in one single unit, with both economic and environmental benefits. Membranes for hydrogen production. To be suitable for the hydrogen production industry, membranes must have a high flux, high selectivity towards hydrogen, low cost and high stability. Among membranes, dense inorganic ones are the most suitable, having a selectivity orders of magnitude higher than porous ones. Among dense membranes, metallic ones are the most used, due to higher fluxes compared to ceramic ones. The most used material in hydrogen separation membranes is palladium, particularly its alloy with silver. This metal, even though it is more expensive than other options, shows very high solubility towards hydrogen. The transport of hydrogen inside palladium membranes follows a solution/diffusion mechanism: the hydrogen molecule is adsorbed onto the surface of the membrane and then split into hydrogen atoms; these atoms cross the membrane by diffusion and then recombine into a hydrogen molecule on the low-pressure side of the membrane, which is then desorbed from the surface. In recent years, several studies have examined the integration of palladium membranes inside fluidized bed membrane reactors for hydrogen production. Other applications. Membrane bioreactors for wastewater treatment. Submerged and sidestream membrane bioreactors in wastewater treatment plants are the most developed filtration-based membrane reactors. Electrochemical membrane reactors ecMR. The production of chlorine (Cl2) and caustic soda (NaOH) from NaCl is carried out industrially by the chlor-alkali process using a proton-conducting polyelectrolyte membrane. It is used on a large scale and has replaced diaphragm electrolysis. Nafion has been developed as a bilayer membrane to withstand the harsh conditions during the chemical conversion. Biological systems. In biological systems, membranes fulfill a number of essential functions. The compartmentalization of biological cells is achieved by membranes. Their semi-permeability makes it possible to separate reactions and reaction environments. A number of enzymes are membrane-bound, and mass transport through the membrane is often active, rather than passive as in artificial membranes, allowing the cell to maintain gradients, for example by using active transport of protons or water. The use of a natural membrane is the first example of the utilization of a membrane for a chemical reaction. By using the selective permeability of a pig's bladder, water could be removed from a condensation reaction to shift the equilibrium position of the reaction towards the condensation products, according to Le Chatelier's principle. Size exclusion: Enzyme Membrane Reactor. As enzymes are macromolecules and often differ greatly in size from the reactants, they can be separated by size exclusion membrane filtration with ultra- or nanofiltration artificial membranes. This is used on an industrial scale for the production of enantiopure amino acids by kinetic resolution of chemically derived racemic amino acids. The most prominent example is the production of L-methionine on a scale of 400 t/a. 
The advantage of this method over other forms of immobilization of the catalyst is that the enzymes are not altered in activity or selectivity, as they remain solubilized. The principle can be applied to all macromolecular catalysts which can be separated from the other reactants by means of filtration. So far, only enzymes have been used to a significant extent. Reaction combined with pervaporation. In pervaporation, dense membranes are used for separation. For dense membranes the separation is governed by the difference in the chemical potential of the components in the membrane. The selectivity of the transport through the membrane depends on the difference in solubility of the materials in the membrane and their diffusivity through the membrane. An example is the selective removal of water by using lipophilic membranes. This can be used to overcome the thermodynamic limitations of condensation reactions, e.g., esterifications, by removing water. Dosing: Partial oxidation of methane to methanol. In the STAR process, methane from natural gas is catalytically converted to methanol with oxygen from air by the partial oxidation &lt;br&gt; 2CH4 + O2 formula_0 2CH3OH. The partial pressure of oxygen has to be low to prevent the formation of explosive mixtures and to suppress the successive reaction to carbon monoxide, carbon dioxide and water. This is achieved by using a tubular reactor with an oxygen-selective membrane. The membrane allows the uniform distribution of oxygen, as the driving force for the permeation of oxygen through the membrane is the difference in partial pressures between the air side and the methane side. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightarrow" } ]
https://en.wikipedia.org/wiki?curid=6755482
67562204
Planetary coordinate system
Coordinate system for planets A planetary coordinate system (also referred to as planetographic, planetodetic, or planetocentric) is a generalization of the geographic, geodetic, and the geocentric coordinate systems for planets other than Earth. Similar coordinate systems are defined for other solid celestial bodies, such as in the "selenographic coordinates" for the Moon. The coordinate systems for almost all of the solid bodies in the Solar System were established by Merton E. Davies of the Rand Corporation, including Mercury, Venus, Mars, the four Galilean moons of Jupiter, and Triton, the largest moon of Neptune. A planetary datum is a generalization of geodetic datums for other planetary bodies, such as the Mars datum; it requires the specification of physical reference points or surfaces with fixed coordinates, such as a specific crater for the reference meridian or the best-fitting equigeopotential as zero-level surface. Longitude. The longitude systems of most of those bodies with observable rigid surfaces have been defined by references to a surface feature such as a crater. The north pole is that pole of rotation that lies on the north side of the invariable plane of the Solar System (near the ecliptic). The location of the prime meridian as well as the position of the body's north pole on the celestial sphere may vary with time due to precession of the axis of rotation of the planet (or satellite). If the position angle of the body's prime meridian increases with time, the body has a direct (or prograde) rotation; otherwise the rotation is said to be retrograde. In the absence of other information, the axis of rotation is assumed to be normal to the mean orbital plane; Mercury and most of the satellites are in this category. For many of the satellites, it is assumed that the rotation rate is equal to the mean orbital period. In the case of the giant planets, since their surface features are constantly changing and moving at various rates, the rotation of their magnetic fields is used as a reference instead. In the case of the Sun, even this criterion fails (because its magnetosphere is very complex and does not really rotate in a steady fashion), and an agreed-upon value for the rotation of its equator is used instead. For planetographic longitude, west longitudes (i.e., longitudes measured positively to the west) are used when the rotation is prograde, and east longitudes (i.e., longitudes measured positively to the east) when the rotation is retrograde. In simpler terms, imagine a distant, non-orbiting observer viewing a planet as it rotates. Also suppose that this observer is within the plane of the planet's equator. A point on the Equator that passes directly in front of this observer later in time has a higher planetographic longitude than a point that did so earlier in time. However, planetocentric longitude is always measured positively to the east, regardless of which way the planet rotates. "East" is defined as the counter-clockwise direction around the planet, as seen from above its north pole, and the north pole is whichever pole more closely aligns with the Earth's north pole. Longitudes traditionally have been written using "E" or "W" instead of "+" or "−" to indicate this polarity. For example, −91°, 91°W, +269° and 269°E all mean the same thing. The modern standard for maps of Mars (since about 2002) is to use planetocentric coordinates. Guided by the works of historical astronomers, Merton E. Davies established the meridian of Mars at Airy-0 crater. 
For Mercury, the only other planet with a solid surface visible from Earth, a thermocentric coordinate is used: the prime meridian runs through the point on the equator where the planet is hottest (due to the planet's rotation and orbit, the Sun briefly retrogrades at noon at this point during perihelion, giving it more sunlight). By convention, this meridian is defined as exactly twenty degrees of longitude east of Hun Kal. Tidally-locked bodies have a natural reference longitude passing through the point nearest to their parent body: 0° the center of the primary-facing hemisphere, 90° the center of the leading hemisphere, 180° the center of the anti-primary hemisphere, and 270° the center of the trailing hemisphere. However, libration due to non-circular orbits or axial tilts causes this point to move around any fixed point on the celestial body like an analemma. Latitude. Planetographic latitude and planetocentric latitude may be similarly defined. The zero latitude plane (Equator) can be defined as orthogonal to the mean axis of rotation (poles of astronomical bodies). The reference surfaces for some planets (such as Earth and Mars) are ellipsoids of revolution for which the equatorial radius is larger than the polar radius, such that they are oblate spheroids. Altitude. Vertical position can be expressed with respect to a given vertical datum, by means of physical quantities analogous to the topographical geocentric distance (compared to a constant nominal Earth radius or the varying geocentric radius of the reference ellipsoid surface) or altitude/elevation (above and below the geoid). The "areoid" (the geoid of Mars) has been measured using flight paths of satellite missions such as Mariner 9 and Viking. The main departures from the ellipsoid expected of an ideal fluid are from the Tharsis volcanic plateau, a continent-size region of elevated terrain, and its antipodes. The "selenoid" (the geoid of the Moon) has been measured gravimetrically by the GRAIL twin satellites. Ellipsoid of revolution (spheroid). Reference ellipsoids are also useful for defining geodetic coordinates and mapping other planetary bodies including planets, their satellites, asteroids and comet nuclei. Some well observed bodies such as the Moon and Mars now have quite precise reference ellipsoids. For rigid-surface nearly-spherical bodies, which includes all the rocky planets and many moons, ellipsoids are defined in terms of the axis of rotation and the mean surface height excluding any atmosphere. Mars is actually egg shaped, where its north and south polar radii differ by approximately , however this difference is small enough that the average polar radius is used to define its ellipsoid. The Earth's Moon is effectively spherical, having almost no bulge at its equator. Where possible, a fixed observable surface feature is used when defining a reference meridian. For gaseous planets like Jupiter, an effective surface for an ellipsoid is chosen as the equal-pressure boundary of one bar. Since they have no permanent observable features, the choices of prime meridians are made according to mathematical rules. Flattening. For the WGS84 ellipsoid to model Earth, the "defining" values are a (equatorial radius): 6 378 137.0 m formula_0 (inverse flattening): 298.257 223 563 from which one derives b (polar radius): 6 356 752.3142 m, so that the difference of the major and minor semi-axes is . 
This is only 0.335% of the major axis, so a representation of Earth on a computer screen would be sized as 300 pixels by 299 pixels. This is practically indistinguishable from a sphere shown as 300 pixels by 300 pixels. Thus illustrations typically greatly exaggerate the flattening to highlight the concept of any planet's oblateness. Other f values in the Solar System are 1⁄16 for Jupiter, 1⁄10 for Saturn, and 1⁄900 for the Moon. The flattening of the Sun is about . Origin of flattening. In 1687, Isaac Newton published the "Principia" in which he included a proof that a rotating self-gravitating fluid body in equilibrium takes the form of an oblate ellipsoid of revolution (a spheroid). The amount of flattening depends on the density and the balance of gravitational force and centrifugal force. Equatorial bulge. Generally any celestial body that is rotating (and that is sufficiently massive to draw itself into spherical or near spherical shape) will have an equatorial bulge matching its rotation rate. Saturn is the planet with the largest equatorial bulge in our Solar System. Equatorial ridges. Equatorial bulges should not be confused with "equatorial ridges". Equatorial ridges are a feature of at least four of Saturn's moons: the large moon Iapetus and the tiny moons Atlas, Pan, and Daphnis. These ridges closely follow the moons' equators. The ridges appear to be unique to the Saturnian system, but it is uncertain whether the occurrences are related or a coincidence. The first three were discovered by the "Cassini" probe in 2005; the Daphnean ridge was discovered in 2017. The ridge on Iapetus is nearly 20 km wide, 13 km high and 1300 km long. The ridge on Atlas is proportionally even more remarkable given the moon's much smaller size, giving it a disk-like shape. Images of Pan show a structure similar to that of Atlas, while the one on Daphnis is less pronounced. Triaxial ellipsoid. Small moons, asteroids, and comet nuclei frequently have irregular shapes. For some of these, such as Jupiter's Io, a scalene (triaxial) ellipsoid is a better fit than the oblate spheroid. For highly irregular bodies, the concept of a reference ellipsoid may have no useful value, so sometimes a spherical reference is used instead and points identified by planetocentric latitude and longitude. Even that can be problematic for non-convex bodies, such as Eros, in that latitude and longitude don't always uniquely identify a single surface location. Smaller bodies (Io, Mimas, etc.) tend to be better approximated by triaxial ellipsoids; however, triaxial ellipsoids would render many computations more complicated, especially those related to map projections. Many projections would lose their elegant and popular properties. For this reason spherical reference surfaces are frequently used in mapping programs. References.
[ { "math_id": 0, "text": "\\frac{1}{f}\\,\\!" } ]
https://en.wikipedia.org/wiki?curid=67562204
675651
2–3 heap
In computer science, a 2–3 heap is a data structure, a variation on the heap, designed by Tadao Takaoka in 1999. The structure is similar to the Fibonacci heap, and borrows from the 2–3 tree. Time costs for some common heap operations are: delete-min takes formula_0 time, while insertion and decrease-key take constant amortized time. Polynomial of trees. Source: A linear tree of size formula_1 is a sequential path of formula_1 nodes with the first node as the root of the tree, and it is represented by a bold formula_2 (e.g. formula_3 is a linear tree of a single node). The product formula_4 of two trees formula_5 and formula_6 is a new tree in which every node of formula_5 is replaced by a copy of formula_6, and for each edge of formula_5 we connect the roots of the trees corresponding to the endpoints of the edge. Note that this definition of product is associative but not commutative. The sum formula_7 of two trees formula_5 and formula_6 is the collection of the two trees formula_5 and formula_6. An r-ary polynomial of trees is defined as formula_8 where formula_9. This polynomial notation for trees of formula_10 nodes is unique. The tree formula_11 is actually formula_12 copies of formula_13 whose roots are connected sequentially with formula_14 edges, and the path of these formula_14 edges is called the main trunk of the tree formula_11. Furthermore, an r-ary polynomial of trees is called an r-nomial queue if the nodes of the polynomial of trees are associated with keys satisfying the heap property. Operations on r-nomial queues. To merge two terms of the form formula_15 and formula_16, we just reorder the trees in the main trunk based on the keys in the roots of the trees. If formula_17 we will have a term of the form formula_18 and a carry tree formula_19. Otherwise, we would have only a tree formula_20. So the sum of two r-nomial queues is analogous to the addition of two numbers in base formula_1. An insertion of a key into a polynomial queue is like merging a single node with the label of the key into the existing r-nomial queue, taking formula_21 time. To delete the minimum, we first find the minimum among the roots of the trees, say in formula_6; then we delete the minimum from formula_6 and add the resulting polynomial queue formula_22 to formula_23, in total time formula_21. (2,3)-heap. Source: An formula_24 tree formula_25 is defined recursively by formula_26 for formula_27 (formula_28 is between formula_29 and formula_1, and the formula_30 formula_31 operations are evaluated from right to left), where for two trees formula_32 and formula_33, the result of the operation formula_34 is connecting the root of formula_35 as a rightmost child to the root of formula_32, and formula_36 is a single node tree. Note that the root of the tree formula_25 has degree formula_37. An extended polynomial of trees, formula_38, is defined by formula_39. If we assign keys to the nodes of an extended polynomial of trees in heap order it is called formula_40, and the special case of formula_41 and formula_42 is called formula_43. Operations on (2,3)-heap. Delete-min: First find the minimum by scanning the roots of the trees. Let formula_25 be the tree containing the minimum element and let formula_22 be the result of removing the root from formula_25. Then merge formula_44 and formula_22 (the merge operation is similar to the merge of two r-nomial queues). Insertion: In order to insert a new key, merge the currently existing (2,3)-heap with a single node tree, formula_45, labeled with this key. Decrease Key: To be done! References.
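To illustrate the base-r carrying behaviour described under "Operations on r-nomial queues" above, here is a small digit-level sketch (Python, for illustration only; it deliberately ignores the actual tree linking and key comparisons, and only tracks how many trees of each rank are present).

def merge_digit_counts(A, B, r):
    """Digit-level view of summing two r-nomial queues.

    A[i] and B[i] count the trees of rank i (each between 0 and r-1) in the two queues.
    The merge behaves exactly like adding two numbers written in base r.
    """
    n = max(len(A), len(B)) + 1
    out, carry = [0] * n, 0
    for i in range(n):
        total = carry + (A[i] if i < len(A) else 0) + (B[i] if i < len(B) else 0)
        out[i], carry = total % r, total // r
    return out

# r = 3: merging 2 + 2 trees of rank 0 leaves 1 tree of rank 0 and carries one rank-1 tree.
print(merge_digit_counts([2, 1], [2, 0], r=3))   # [1, 2, 0]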
[ { "math_id": 0, "text": "O(\\log(n))" }, { "math_id": 1, "text": "r" }, { "math_id": 2, "text": "\\mathbf{r}" }, { "math_id": 3, "text": "\\mathbf{1}" }, { "math_id": 4, "text": "P = ST" }, { "math_id": 5, "text": "S" }, { "math_id": 6, "text": "T" }, { "math_id": 7, "text": "S+T" }, { "math_id": 8, "text": "P = \\mathbf{a}_{k-1}\\mathbf{r}^{k-1} + \\dots + \\mathbf{a}_1\\mathbf{r} + \\mathbf{a}_0" }, { "math_id": 9, "text": "0 \\leq a_i \\leq r-1" }, { "math_id": 10, "text": "n" }, { "math_id": 11, "text": "\\mathbf{a}_i\\mathbf{r}^i" }, { "math_id": 12, "text": "a_i" }, { "math_id": 13, "text": "\\mathbf{r}^i" }, { "math_id": 14, "text": "a_i-1" }, { "math_id": 15, "text": "\\mathbf{a}_i \\mathbf{r}^i" }, { "math_id": 16, "text": "\\mathbf{a}'_i \\mathbf{r}^i" }, { "math_id": 17, "text": "a_i + a'_i \\geq r" }, { "math_id": 18, "text": "(\\mathbf{a}_i+\\mathbf{a}'_i-\\mathbf{r}) \\mathbf{r}^i" }, { "math_id": 19, "text": "\\mathbf{r}^{i+1}" }, { "math_id": 20, "text": "(\\mathbf{a}_i+\\mathbf{a}'_i) \\mathbf{r}^i" }, { "math_id": 21, "text": "O(r \\log_r{n})" }, { "math_id": 22, "text": "Q" }, { "math_id": 23, "text": "P-T" }, { "math_id": 24, "text": "(l,r)-" }, { "math_id": 25, "text": "T(i)" }, { "math_id": 26, "text": "T(i) = T_1(i-1) \\triangleleft \\dots \\triangleleft T_s(i-1)" }, { "math_id": 27, "text": "i \\geq 1" }, { "math_id": 28, "text": "s" }, { "math_id": 29, "text": "l" }, { "math_id": 30, "text": "(s-1)" }, { "math_id": 31, "text": "\\triangleleft" }, { "math_id": 32, "text": "A" }, { "math_id": 33, "text": "B" }, { "math_id": 34, "text": " A \\triangleleft B " }, { "math_id": 35, "text": " B " }, { "math_id": 36, "text": "T(0) " }, { "math_id": 37, "text": "i" }, { "math_id": 38, "text": "P" }, { "math_id": 39, "text": "P = a_{k-1}T(k-1) + \\dots + a_1T(1) + a_0" }, { "math_id": 40, "text": "(l, r)-heap" }, { "math_id": 41, "text": "l = 2" }, { "math_id": 42, "text": "r = 3" }, { "math_id": 43, "text": "(2, 3)-heap" }, { "math_id": 44, "text": "P - T(i)" }, { "math_id": 45, "text": "T(0)" } ]
https://en.wikipedia.org/wiki?curid=675651
675699
Octree
Tree data structure in which each internal node has exactly eight children, to partition a 3D space An octree is a tree data structure in which each internal node has exactly eight children. Octrees are most often used to partition a three-dimensional space by recursively subdividing it into eight octants. Octrees are the three-dimensional analog of quadtrees. The word is derived from "oct" (Greek root meaning "eight") + "tree". Octrees are often used in 3D graphics and 3D game engines. For spatial representation. Each node in an octree subdivides the space it represents into eight octants. In a point region (PR) octree, the node stores an explicit three-dimensional point, which is the "center" of the subdivision for that node; the point defines one of the corners for each of the eight children. In a matrix-based (MX) octree, the subdivision point is implicitly the center of the space the node represents. The root node of a PR octree can represent infinite space; the root node of an MX octree must represent a finite bounded space so that the implicit centers are well-defined. Note that octrees are not the same as "k"-d trees: "k"-d trees split along a dimension and octrees split around a point. Also "k"-d trees are always binary, which is not the case for octrees. By using a depth-first search the nodes are to be traversed and only required surfaces are to be viewed. History. The use of octrees for 3D computer graphics was pioneered by Donald Meagher at Rensselaer Polytechnic Institute, described in a 1980 report "Octree Encoding: A New Technique for the Representation, Manipulation and Display of Arbitrary 3-D Objects by Computer", for which he holds a 1995 patent (with a 1984 priority date) "High-speed image generation of complex solid objects using octree encoding" Application to color quantization. The octree color quantization algorithm, invented by Gervautz and Purgathofer in 1988, encodes image color data as an octree up to nine levels deep. Octrees are used because formula_0 and there are three color components in the RGB system. The node index to branch out from at the top level is determined by a formula that uses the most significant bits of the red, green, and blue color components, e.g. 4r + 2g + b. The next lower level uses the next bit significance, and so on. Less significant bits are sometimes ignored to reduce the tree size. The algorithm is highly memory efficient because the tree's size can be limited. The bottom level of the octree consists of leaf nodes that accrue color data not represented in the tree; these nodes initially contain single bits. If much more than the desired number of palette colors are entered into the octree, its size can be continually reduced by seeking out a bottom-level node and averaging its bit data up into a leaf node, pruning part of the tree. Once sampling is complete, exploring all routes in the tree down to the leaf nodes, taking note of the bits along the way, will yield approximately the required number of colors. Implementation for point decomposition. The example recursive algorithm outline below (MATLAB syntax) decomposes an array of 3-dimensional points into octree style bins. The implementation begins with a single bin surrounding all given points, which then recursively subdivides into its 8 octree regions. Recursion is stopped when a given exit condition is met. 
Examples of such exit conditions (shown in the code below) are: the bin contains fewer than a given number of points, the bin's edges become shorter than a given minimum length, or the recursion exceeds a given maximum depth.

function [binDepths, binParents, binCorners, pointBins] = OcTree(points)
    binDepths = [0]     % Initialize an array of bin depths with this single base-level bin
    binParents = [0]    % This base level bin is not a child of other bins
    binCorners = [min(points) max(points)]     % It surrounds all points in XYZ space
    pointBins = ones(size(points, 1), 1)       % Initially, all points are assigned to this first bin
    divide(1)           % Begin dividing this first bin

    function divide(binNo)
        % If this bin meets any exit conditions, do not divide it any further.
        binPointCount = nnz(pointBins == binNo)
        binEdgeLengths = binCorners(binNo, 4:6) - binCorners(binNo, 1:3)   % max corner minus min corner
        binDepth = binDepths(binNo)
        exitConditionsMet = binPointCount < value || min(binEdgeLengths) < value || binDepth > value
        if exitConditionsMet
            return; % Exit recursive function
        end

        % Otherwise, split this bin into 8 new sub-bins with a new division point
        newDiv = (binCorners(binNo, 1:3) + binCorners(binNo, 4:6)) / 2
        for i = 1:8
            newBinNo = length(binDepths) + 1
            binDepths(newBinNo) = binDepths(binNo) + 1
            binParents(newBinNo) = binNo
            binCorners(newBinNo) = [one of the 8 pairs of the newDiv with minCorner or maxCorner]
            oldBinMask = pointBins == binNo
            % Calculate which points in pointBins == binNo now belong in newBinNo
            newBinMask = oldBinMask & [points lie within binCorners(newBinNo, :)]
            pointBins(newBinMask) = newBinNo
            % Recursively divide this newly created bin
            divide(newBinNo)
        end
    end
end

Example color quantization. Taking the full list of colors of a 24-bit RGB image as point input to the Octree point decomposition implementation outlined above, the following example shows the results of octree color quantization. The first image is the original (532818 distinct colors), while the second is the quantized image (184 distinct colors) using octree decomposition, with each pixel assigned the color at the center of the octree bin in which it falls. Alternatively, final colors could be chosen at the centroid of all colors in each octree bin; however, this added computation has very little effect on the visual result.

% Read the original RGB image
Img = imread('IMG_9980.CR2');
% Extract pixels as RGB point triplets
pts = reshape(Img, [], 3);
% Create OcTree decomposition object using a target bin capacity
OT = OcTree(pts, 'BinCapacity', ceil((size(pts, 1) / 256) * 7));
% Find which bins are "leaf nodes" on the octree object
leafs = find(~ismember(1:OT.BinCount, OT.BinParents) & ...
    ismember(1:OT.BinCount, OT.PointBins));
% Find the central RGB location of each leaf bin
binCents = mean(reshape(OT.BinBoundaries(leafs,:), [], 3, 2), 3);
% Make a new "indexed" image with a color map
ImgIdx = zeros(size(Img, 1), size(Img, 2));
for i = 1:length(leafs)
    pxNos = find(OT.PointBins==leafs(i));
    ImgIdx(pxNos) = i;
end
ImgMap = binCents / 255; % Convert 8-bit color to MATLAB rgb values
% Display the original 532818-color image and resulting 184-color image
figure
subplot(1, 2, 1), imshow(Img)
title(sprintf('Original %d color image', size(unique(pts,'rows'), 1)))
subplot(1, 2, 2), imshow(ImgIdx, ImgMap)
title(sprintf('Octree-quantized %d color image', size(ImgMap, 1)))

References.
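Relating back to the colour-quantization branching rule described earlier (child index 4r + 2g + b taken from successive bits of the colour components), the following small sketch (written in Python rather than MATLAB, purely for illustration) lists the child index chosen at each level for an 8-bit-per-channel colour.

def octree_path(r, g, b, depth=8):
    """Child indices (0-7) visited by an 8-bit RGB colour, one per octree level.

    At level k the index combines bit (7 - k) of each channel as 4*r_bit + 2*g_bit + b_bit,
    i.e. the most significant bits decide the top-level branch.
    """
    path = []
    for k in range(depth):
        shift = 7 - k
        idx = ((r >> shift & 1) << 2) | ((g >> shift & 1) << 1) | (b >> shift & 1)
        path.append(idx)
    return path

print(octree_path(0xC8, 0x3F, 0x80))   # [5, 4, 2, 2, 6, 2, 2, 2]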
[ { "math_id": 0, "text": "2^3 = 8" } ]
https://en.wikipedia.org/wiki?curid=675699
67571167
1 Kings 2
1 Kings, chapter 2 1 Kings 2 is the second chapter of the Books of Kings in the Hebrew Bible or the First Book of Kings in the Old Testament of the Christian Bible. The book is a compilation of various annals recording the acts of the kings of Israel and Judah by a Deuteronomic compiler in the seventh century BCE, with a supplement added in the sixth century BCE. This chapter belongs to the section focusing on the reign of Solomon over the unified kingdom of Judah and Israel (1 Kings 1 to 11). The focus of this chapter is the reign of David and Solomon, the kings of Israel. Text. This chapter was originally written in the Hebrew language and since the 16th century is divided into 53 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). In the middle of chapter 2 in 1 Kings (3 Reigns 2), the Septuagint of Codex Vaticanus has two long additions, called "Additions 1 and 2": 1. After verse 35 there are 14 additional verses, traditionally denoted 35a–35o, 2. After verse 46 there are 11 additional verses, denoted 46a–46l. Analysis. The first two chapters of the Books of Kings describe the final phase of David's story and the beginning of Solomon's. These chapters are markedly written differently than other biblical and extrabiblical ancient literature. David's bequest to Solomon (2:1–12). This section contains the only time in the books of Kings that David spoke directly to Solomon. The parting words are similar to God's words to Joshua after the death of Moses (Joshua 1:6–9). David first charged Solomon to reign in accordance to the "law of Moses" (cf. Deuteronomy 4:29; 6:2; 8:6; 9:5; 11:1; 29:9), because everyone in Israel, even the king (cf. Deuteronomy 17:18–20; Psalm 132:12; cf. 2 Samuel 7:14–16), should fall under God and his laws. It is followed by David's complaints to the 'wise' Solomon about the 'enemies', which were Joab and Shimei ben Gera (cf. 2 Samuel 3:27; 20:9–10; 16:5–14; cf. 19:24) and incited him to deal with them, which gave legitimation for the subsequent purges. David also encouraged reward for the old Barzillai (verse 7, cf. 2 Samuel 17:26–29; 19:32–39). After all the words, David was able to die in peace and buried in the necropolis within the "city of David". "And the time that David reigned over Israel was forty years. He reigned seven years in Hebron and thirty-three years in Jerusalem." The elimination of Adonijah (2:13–25). After a while, Adonijah began to 'dig his own grave' by lusting after Abishag the Shunammite, a dangerous move because 'she had, after all, lain in his father's bed', and 2 Samuel 16:20–22 indicate that having a sexual liaison with David's concubines was to legitimize Absalom's claim to the throne. Adonijah correctly recognized the power and influence of Bathsheba as the queen mother (shown in verse 19). He failed to understand her intentions and character, as she seemed to support Adonijah's petition, yet slipped the phrase 'your brother' to awaken Solomon's fears. Solomon used the opportunity to order Adonijah's execution by the unscrupulous Benaiah. The elimination of Abiathar (2:26–27). 
Solomon did not dare to harm Abiathar, one of David's trusted priests, but he had the authority to relieve the priest of all duties and banish him to Anathoth, a small country town north of Jerusalem. This is a fulfillment of 1 Samuel 2:27–36. Jeremiah the prophet also came from Anathoth (Jeremiah 1:1; 32), so he could be a descendant of Abiathar. Interestingly, David mentioned neither Abiathar nor Adonijah in his last words, so the actions against them were solely Solomon's decision. Zadok (cf. 2 Chronicles 1:8, 10, 34, 39) became the sole high priest after the departure of Abiathar (verse 35). The elimination of Joab (2:28–35). Joab realized the direction of the purge, so he took refuge in the Tabernacle. Solomon used Joab's words "I will die here" as a request that the king would gladly grant, adding words justifying the execution by Joab's past sins (verses 31–33), so Benaiah, under the explicit order of the king, could execute Joab at the altar. For his loyal service, Benaiah was appointed to Joab's post as army chief (verse 35). The elimination of Shimei (2:36–46). Solomon played a cruel game with Shimei, who had done unpleasant things to David but later received David's personal promise of safety (2 Samuel 16:5–14 and 19:17–24). The king placed Shimei under house arrest: he would only be executed if he left his house, with the addition of the seemingly reasonable requirement of "not crossing the Wadi Kidron" to the east of Jerusalem. However, when Shimei eventually left his house for Gath, west of Jerusalem, the leaving of his house became the grounds for his execution by Benaiah. The outcome of the actions in this chapter is that the kingdom was then firmly in Solomon's hands. Notes. References.
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=67571167
67571170
1 Kings 1
1 Kings, chapter 1 1 Kings 1 is the first chapter of the Books of Kings in the Hebrew Bible or the First Book of Kings in the Old Testament of the Christian Bible. The book is a compilation of various annals recording the acts of the kings of Israel and Judah by a Deuteronomic compiler in the seventh century BCE, with a supplement added in the sixth century BCE. This chapter belongs to the section focusing on the reign of Solomon over the unified kingdom of Judah and Israel (1 Kings 1 to 11). The focus of this chapter is the reign of David and Solomon, the kings of Israel. Text. This chapter was originally written in the Hebrew language and since the 16th century is divided into 53 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls, that is, 5Q2 (5QKings; 150–50 BCE) with extant verses 1, 16–17, 27–37. There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The first two chapters of the Books of Kings describe the final phase of David's story and the beginning of Solomon's. However, 1 Kings 1 is a new narrative, not a continuation of 1–2 Samuel, as 1–2 Kings also markedly differ from other biblical and extrabiblical ancient literature. This chapter in particular is strongly related to 2 Samuel 11–12, because only in these chapters (and not in between them) Bathsheba, Nathan the prophet, and Solomon are mentioned. The narrative clarifies how God fulfills His promise to establish David's kingdom forever through his son (). David's weakness and old age (1:1–4). The opening scene of the Books of Kings describes King David as an 'old and impotent man, shivering with cold', a unique depiction of a highly respected king in ancient historiography. The beautiful young Abishag was to accompany him and later played a role without even saying one word (1 Kings 2:17, 22). The loss of David's virility (implied in verse 4) suggested to the palace officers that the aging David might have lost his ability to govern as well. "Therefore, his servants said to him, "Let a young woman, a virgin, be sought for our lord the king, and let her stand before the king, and let her care for him; and let her lie in your bosom, that our lord the king may be warm."" The struggle for succession to David's throne (1:5–10). Feeling that the time for David's succession had arrived. Adonijah, who was David's fourth son but at that time was apparently the oldest surviving son after the death of his brothers Amnon and Absalom (2 Samuel 14; 18; cf. 2 Samuel 3:2–5). He seized the opportunity to announce his ambitions to be king, but he followed the same path as Absalom's and similarly failed (cf. ; 15:1). Adonijah seemed to take David's paternal silence as 'implied approval' and he gathered support from the leading personalities and classes in the land of Judah, notably Joab, the commander of the army (cf. and Abiathar, one of the chief priests and an old companion of David (cf. ; 2 Samuel 15:24–29), along with Judean court civil servants and other members of the royal family. 
On the other hand, Solomon was the tenth in the line of David's sons (cf. 5:14–16), but with David's explicit approval, he received the support of 'political and military heavyweights of the city of Jerusalem', notably: 'the mercenary general' Benaiah, with his elite troops, the other high priest, Zadok (2 Samuel 15:24–29), and the prophet Nathan (2 Samuel 7; 12). The situation was as tense as during Absalom's effort (2 Samuel 13:23–29; 15:7–12), because Adonijah invited his supporters to a great feast at "a well", probably in the valley of Kidron. David's decision in favor of Solomon (1:11–37). The narrator reports what had unfolded 'within the confines of the palace walls': Nathan talked to Bathsheba (Solomon's mother, cf. 2 Samuel 11–12), Bathsheba talked to David, David to Nathan, David to Bathsheba, then finally David gave a firm order to Zadok, Nathan, and Benaiah, that 'Solomon should be anointed king'. David decided to abdicate to make way for Solomon. Solomon's accession to power (1:38–53). The anointing of Solomon takes place at the Gihon Spring, just on the east side and below the palace grounds (in the City of David), guarded by David's 'powerful and readily available mercenary troop', the "Cherethites and Pelethites". The holy oil is brought from the tent where the ark of the covenant was placed, signifying the 'consecration of a king and his authorization to rule'. The witnesses cheered loudly in celebration, and the noise struck fear into the participants of Adonijah's party. Jonathan ben Abiathar informed Adonijah of the shocking news of Solomon's anointing. Adonijah fled to the altar standing next to the tent that hosted the ark of the Covenant, believing that its holiness would offer him amnesty (cf. Exodus 21:13–14). Solomon seemed to assure Adonijah of a pardon, although only on probation. Notes. References.
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=67571170
675776
Weather radar
Radar used to locate and monitor meteorological conditions Weather radar, also called weather surveillance radar (WSR) and Doppler weather radar, is a type of radar used to locate precipitation, calculate its motion, and estimate its type (rain, snow, hail etc.). Modern weather radars are mostly pulse-Doppler radars, capable of detecting the motion of rain droplets in addition to the intensity of the precipitation. Both types of data can be analyzed to determine the structure of storms and their potential to cause severe weather. During World War II, radar operators discovered that weather was causing echoes on their screens, masking potential enemy targets. Techniques were developed to filter them, but scientists began to study the phenomenon. Soon after the war, surplus radars were used to detect precipitation. Since then, weather radar has evolved and is used by national weather services, research departments in universities, and in television stations' weather departments. Raw images are routinely processed by specialized software to make short term forecasts of future positions and intensities of rain, snow, hail, and other weather phenomena. Radar output is even incorporated into numerical weather prediction models to improve analyses and forecasts. History. During World War II, military radar operators noticed noise in returned echoes due to rain, snow, and sleet. After the war, military scientists returned to civilian life or continued in the Armed Forces and pursued their work in developing a use for those echoes. In the United States, David Atlas at first working for the Air Force and later for MIT, developed the first operational weather radars. In Canada, J.S. Marshall and R.H. Douglas formed the "Stormy Weather Group" in Montreal. Marshall and his doctoral student Walter Palmer are well known for their work on the drop size distribution in mid-latitude rain that led to understanding of the Z-R relation, which correlates a given radar reflectivity with the rate at which rainwater is falling. In the United Kingdom, research continued to study the radar echo patterns and weather elements such as stratiform rain and convective clouds, and experiments were done to evaluate the potential of different wavelengths from 1 to 10 centimeters. By 1950 the UK company EKCO was demonstrating its airborne 'cloud and collision warning search radar equipment'. Between 1950 and 1980, reflectivity radars, which measure the position and intensity of precipitation, were incorporated by weather services around the world. The early meteorologists had to watch a cathode ray tube. In 1953 Donald Staggs, an electrical engineer working for the Illinois State Water Survey, made the first recorded radar observation of a "hook echo" associated with a tornadic thunderstorm. The first use of weather radar on television in the United States was in September 1961. As Hurricane Carla was approaching the state of Texas, local reporter Dan Rather, suspecting the hurricane was very large, took a trip to the U.S. Weather Bureau WSR-57 radar site in Galveston in order to get an idea of the size of the storm. He convinced the bureau staff to let him broadcast live from their office and asked a meteorologist to draw him a rough outline of the Gulf of Mexico on a transparent sheet of plastic. During the broadcast, he held that transparent overlay over the computer's black-and-white radar display to give his audience a sense both of Carla's size and of the location of the storm's eye. 
This made Rather a national name and his report helped in the alerted population accepting the evacuation of an estimated 350,000 people by the authorities, which was the largest evacuation in US history at that time. Just 46 people were killed thanks to the warning and it was estimated that the evacuation saved several thousand lives, as the smaller 1900 Galveston hurricane had killed an estimated 6000-12000 people. During the 1970s, radars began to be standardized and organized into networks. The first devices to capture radar images were developed. The number of scanned angles was increased to get a three-dimensional view of the precipitation, so that horizontal cross-sections (CAPPI) and vertical cross-sections could be performed. Studies of the organization of thunderstorms were then possible for the Alberta Hail Project in Canada and National Severe Storms Laboratory (NSSL) in the US in particular. The NSSL, created in 1964, began experimentation on dual polarization signals and on Doppler effect uses. In May 1973, a tornado devastated Union City, Oklahoma, just west of Oklahoma City. For the first time, a Dopplerized 10 cm wavelength radar from NSSL documented the entire life cycle of the tornado. The researchers discovered a mesoscale rotation in the cloud aloft before the tornado touched the ground – the tornadic vortex signature. NSSL's research helped convince the National Weather Service that Doppler radar was a crucial forecasting tool. The Super Outbreak of tornadoes on 3–4 April 1974 and their devastating destruction might have helped to get funding for further developments. Between 1980 and 2000, weather radar networks became the norm in North America, Europe, Japan and other developed countries. Conventional radars were replaced by Doppler radars, which in addition to position and intensity could track the relative velocity of the particles in the air. In the United States, the construction of a network consisting of 10 cm radars, called NEXRAD or WSR-88D (Weather Surveillance Radar 1988 Doppler), was started in 1988 following NSSL's research. In Canada, Environment Canada constructed the King City station, with a 5 cm research Doppler radar, by 1985; McGill University dopplerized its radar (J. S. Marshall Radar Observatory) in 1993. This led to a complete Canadian Doppler network between 1998 and 2004. France and other European countries had switched to Doppler networks by the early 2000s. Meanwhile, rapid advances in computer technology led to algorithms to detect signs of severe weather, and many applications for media outlets and researchers. After 2000, research on dual polarization technology moved into operational use, increasing the amount of information available on precipitation type (e.g. rain vs. snow). "Dual polarization" means that microwave radiation which is polarized both horizontally and vertically (with respect to the ground) is emitted. Wide-scale deployment was done by the end of the decade or the beginning of the next in some countries such as the United States, France, and Canada. In April 2013, all United States National Weather Service NEXRADs were completely dual-polarized. Since 2003, the U.S. National Oceanic and Atmospheric Administration has been experimenting with phased-array radar as a replacement for conventional parabolic antenna to provide more time resolution in atmospheric sounding. This could be significant with severe thunderstorms, as their evolution can be better evaluated with more timely data. 
Also in 2003, the National Science Foundation established the Engineering Research Center for Collaborative Adaptive Sensing of the Atmosphere (CASA), a multidisciplinary, multi-university collaboration of engineers, computer scientists, meteorologists, and sociologists to conduct fundamental research, develop enabling technology, and deploy prototype engineering systems designed to augment existing radar systems by sampling the generally undersampled lower troposphere with inexpensive, fast scanning, dual polarization, mechanically scanned and phased array radars. In 2023, the private American company Tomorrow.io launched a Ka-band space-based radar for weather observation and forecasting. Principle. Sending radar pulses. Weather radars send directional pulses of microwave radiation, on the order of one microsecond long, using a cavity magnetron or klystron tube connected by a waveguide to a parabolic antenna. The wavelengths of 1 – 10 cm are approximately ten times the diameter of the droplets or ice particles of interest, because Rayleigh scattering occurs at these frequencies. This means that part of the energy of each pulse will bounce off these small particles, back towards the radar station. Shorter wavelengths are useful for smaller particles, but the signal is more quickly attenuated. Thus 10 cm (S-band) radar is preferred but is more expensive than a 5 cm C-band system. 3 cm X-band radar is used only for short-range units, and 1 cm Ka-band weather radar is used only for research on small-particle phenomena such as drizzle and fog. W band (3 mm) weather radar systems have seen limited university use, but due to quicker attenuation, most data are not operational. Radar pulses diverge as they move away from the radar station. Thus the volume of air that a radar pulse is traversing is larger for areas farther away from the station, and smaller for nearby areas, decreasing resolution at farther distances. At the end of a 150 – 200 km sounding range, the volume of air scanned by a single pulse might be on the order of a cubic kilometer. This is called the "pulse volume". The volume of air that a given pulse takes up at any point in time may be approximated by the formula formula_0, where v is the volume enclosed by the pulse, h is pulse width (in e.g. meters, calculated from the duration in seconds of the pulse times the speed of light), r is the distance from the radar that the pulse has already traveled (in e.g. meters), and formula_1 is the beam width (in radians). This formula assumes the beam is symmetrically circular, "r" is much greater than "h" so "r" taken at the beginning or at the end of the pulse is almost the same, and the shape of the volume is a cone frustum of depth "h". Listening for return signals. Between each pulse, the radar station serves as a receiver as it listens for return signals from particles in the air. The duration of the "listen" cycle is on the order of a millisecond, which is a thousand times longer than the pulse duration. The length of this phase is determined by the need for the microwave radiation (which travels at the speed of light) to propagate from the detector to the weather target and back again, a distance which could be several hundred kilometers. The horizontal distance from station to target is calculated simply from the amount of time that elapses from the initiation of the pulse to the detection of the return signal. 
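As a rough numerical illustration of the pulse volume described above, the sketch below treats the sampled region as a thin cylinder of radius r·θ/2 and depth h = c·τ; the 1° beamwidth and 1 µs pulse duration are assumed, illustrative values, and the exact cone-frustum expression differs only by terms that are negligible at long range.

import math

def pulse_volume_km3(range_km, beamwidth_deg=1.0, pulse_duration_us=1.0):
    """Approximate volume of air sampled by one pulse at a given range."""
    c_km_per_s = 299_792.458                    # speed of light
    theta = math.radians(beamwidth_deg)         # full beam width in radians
    h = c_km_per_s * pulse_duration_us * 1e-6   # physical pulse length in km
    return math.pi * (range_km * theta / 2.0) ** 2 * h

print(f"{pulse_volume_km3(200.0):.2f} km^3")    # ~2.87 km^3, the 'order of a cubic kilometer' quoted above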
The time is converted into distance by multiplying by the speed of light in air: formula_2 where "c" = km/s is the speed of light, and "n" ≈ is the refractive index of air. If pulses are emitted too frequently, the returns from one pulse will be confused with the returns from previous pulses, resulting in incorrect distance calculations. Determining height. Since the Earth is round, the radar beam in vacuum would rise according to the reverse curvature of the Earth. However, the atmosphere has a refractive index that diminishes with height, due to its diminishing density. This bends the radar beam slightly toward the ground and with a standard atmosphere this is equivalent to considering that the curvature of the beam is 4/3 the actual curvature of the Earth. Depending on the elevation angle of the antenna and other considerations, the following formula may be used to calculate the target's height above ground: formula_3 where: "r" = distance radar–target, "k"e = 4/3, "a"e = Earth radius, "θ"e = elevation angle above the radar horizon, "h"a = height of the feedhorn above ground. A weather radar network uses a series of typical angles that are set according to its needs. After each scanning rotation, the antenna elevation is changed for the next sounding. This scenario will be repeated on many angles to scan the entire volume of air around the radar within the maximum range. Usually, the scanning strategy is completed within 5 to 10 minutes to have data within 15 km above ground and 250 km distance of the radar. For instance in Canada, the 5 cm weather radars use angles ranging from 0.3 to 25 degrees. The accompanying image shows the volume scanned when multiple angles are used. Due to the Earth's curvature and change of index of refraction with height, the radar cannot "see" below the height above ground of the minimal angle (shown in green) or closer to the radar than the maximal one (shown as a red cone in the center). Calibrating return intensity. Because the targets are not unique in each volume, the radar equation has to be developed beyond the basic one. Assuming a monostatic radar where formula_4: formula_5 where formula_6 is received power, formula_7 is transmitted power, formula_8 is the gain of the transmitting/receiving antenna, formula_9 is radar wavelength, formula_10 is the radar cross section of the target and formula_11 is the distance from transmitter to target. In this case, the cross sections of all the targets must be summed: formula_12 formula_13 where formula_14 is the light speed, formula_15 is temporal duration of a pulse and formula_1 is the beam width in radians. In combining the two equations: formula_16 Which leads to: formula_17 The return varies inversely to formula_18 instead of formula_19. In order to compare the data coming from different distances from the radar, one has to normalize them with this ratio. Data types. Reflectivity. Return echoes from targets ("reflectivity") are analyzed for their intensities to establish the precipitation rate in the scanned volume. The wavelengths used (1–10 cm) ensure that this return is proportional to the rate because they are within the validity of Rayleigh scattering which states that the targets must be much smaller than the wavelength of the scanning wave (by a factor of 10). Reflectivity perceived by the radar (Ze) varies by the sixth power of the rain droplets' diameter (D), the square of the dielectric constant (K) of the targets and the drop size distribution (e.g. N[D] of "Marshall-Palmer") of the drops. 
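Stepping back to the height formula given under "Determining height", a common textbook form of that relation, using the variables listed there, can be sketched as follows (the 6371 km Earth radius and a feedhorn at ground level are assumed values used only for illustration, not taken from the article).

import math

EARTH_RADIUS_KM = 6371.0
K_E = 4.0 / 3.0   # effective Earth-radius factor for a standard atmosphere

def beam_height_km(range_km, elev_deg, feedhorn_height_km=0.0):
    """Height of the beam centre above ground under the 4/3 Earth-radius model."""
    ka = K_E * EARTH_RADIUS_KM
    elev = math.radians(elev_deg)
    return math.sqrt(range_km**2 + ka**2 + 2.0 * range_km * ka * math.sin(elev)) - ka + feedhorn_height_km

print(f"{beam_height_km(160.0, 0.0):.2f} km")   # ~1.51 km, consistent with the 'about 1.5 km above ground at 160 km' figure used later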
Integrating this sixth-power dependence over the drop size distribution gives a truncated Gamma function, of the form: formula_20 Precipitation rate (R), on the other hand, is equal to the number of particles, their volume and their fall speed (v[D]) as: formula_21 So Ze and R have similar functional forms that can be resolved by giving a relation between the two, called the "Z-R relation": Z = aR^b, where a and b depend on the type of precipitation (snow, rain, convective or stratiform), which has different formula_22, K, N0 and v. For example, the classic Marshall–Palmer relation for stratiform rain uses a = 200 and b = 1.6 (with Z in mm^6/m^3 and R in mm/h). How to read reflectivity on a radar display. Radar returns are usually described by colour or level. The colours in a radar image normally range from blue or green for weak returns, to red or magenta for very strong returns. The numbers in a verbal report increase with the severity of the returns. For example, the U.S. National NEXRAD radar sites use the following scale for different levels of reflectivity: Strong returns (red or magenta) may indicate not only heavy rain but also thunderstorms, hail, strong winds, or tornadoes, but they need to be interpreted carefully, for reasons described below. Aviation conventions. When describing weather radar returns, pilots, dispatchers, and air traffic controllers will typically refer to three return levels: level 1 (green) for light precipitation, level 2 (yellow) for moderate precipitation, and level 3 (red) for heavy precipitation. Aircraft will try to avoid level 2 returns when possible, and will always avoid level 3 unless they are specially-designed research aircraft. Precipitation types. Some displays provided by commercial television outlets (both local and national) and weather websites, like The Weather Channel and AccuWeather, show precipitation types during the winter months: rain, snow, mixed precipitation (sleet and freezing rain). This is not an analysis of the radar data itself but a post-treatment done with other data sources, the primary being surface reports (METAR). Over the area covered by radar echoes, a program assigns a precipitation type according to the surface temperature and dew point reported at the underlying weather stations. Precipitation types reported by human operated stations and certain automatic ones (AWOS) will have higher weight. Then the program does interpolations to produce an image with defined zones. These will include interpolation errors due to the calculation. Mesoscale variations of the precipitation zones will also be lost. More sophisticated programs use the numerical weather prediction output from models, such as NAM and WRF, for the precipitation types and apply it as a first guess to the radar echoes, then use the surface data for final output. Until dual-polarization (section Polarization below) data are widely available, any precipitation types on radar images are only indirect information and must be taken with care. Velocity. Precipitation is found in and below clouds. Light precipitation such as drops and flakes is subject to the air currents, and scanning radar can pick up the horizontal component of this motion, thus making it possible to estimate the wind speed and direction where precipitation is present. A target's motion relative to the radar station causes a change in the reflected frequency of the radar pulse, due to the Doppler effect. With velocities of less than 70 metres per second for weather echoes and a radar wavelength of 10 cm, this amounts to a change of only 0.1 ppm. This difference is too small to be noted by electronic instruments. However, as the targets move slightly between each pulse, the returned wave has a noticeable phase difference or "phase shift" from pulse to pulse. Pulse pair.
Doppler weather radars use this phase difference (pulse pair difference) to calculate the precipitation's motion. The intensity of the successively returning pulse from the same scanned volume where targets have slightly moved is: formula_23 So formula_24, "v" = target speed = formula_25. This speed is called the radial Doppler velocity because it gives only the radial variation of distance versus time between the radar and the target. The real speed and direction of motion has to be extracted by the process described below. Doppler dilemma. The phase between pulse pairs can vary from -formula_26 and +formula_26, so the unambiguous Doppler velocity range is Vmax = formula_27formula_28 This is called the Nyquist velocity. This is inversely dependent on the time between successive pulses: the smaller the interval, the larger is the unambiguous velocity range. However, we know that the maximum range from reflectivity is directly proportional to formula_29: x = formula_30 The choice becomes increasing the range from reflectivity at the expense of velocity range, or increasing the latter at the expense of range from reflectivity. In general, the useful range compromise is 100–150 km for reflectivity. This means for a wavelength of 5 cm (as shown in the diagram), an unambiguous velocity range of 12.5 to 18.75 metre/second is produced (for 150 km and 100 km, respectively). For a 10 cm radar such as the NEXRAD, the unambiguous velocity range would be doubled. Some techniques using two alternating pulse repetition frequencies (PRF) allow a greater Doppler range. The velocities noted with the first pulse rate could be equal or different with the second. For instance, if the maximum velocity with a certain rate is 10 metre/second and the one with the other rate is 15 m/s. The data coming from both will be the same up to 10 m/s, and will differ thereafter. It is then possible to find a mathematical relation between the two returns and calculate the real velocity beyond the limitation of the two PRFs. Doppler interpretation. In a uniform rainstorm moving eastward, a radar beam pointing west will "see" the raindrops moving toward itself, while a beam pointing east will "see" the drops moving away. When the beam scans to the north or to the south, no relative motion is noted. Synoptic. In the synoptic scale interpretation, the user can extract the wind at different levels over the radar coverage region. As the beam is scanning 360 degrees around the radar, data will come from all those angles and be the radial projection of the actual wind on the individual angle. The intensity pattern formed by this scan can be represented by a cosine curve (maximum in the precipitation motion and zero in the perpendicular direction). One can then calculate the direction and the strength of the motion of particles as long as there is enough coverage on the radar screen. However, the rain drops are falling. As the radar only sees the radial component and has a certain elevation from ground, the radial velocities are contaminated by some fraction of the falling speed. This component is negligible in small elevation angles, but must be taken into account for higher scanning angles. Meso scale. In the velocity data, there could be smaller zones in the radar coverage where the wind varies from the one mentioned above. For example, a thunderstorm is a mesoscale phenomenon which often includes rotations and turbulence. These may only cover few square kilometers but are visible by variations in the radial speed. 
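Returning to the Doppler dilemma discussed above, the trade-off can be made concrete with a small sketch. It assumes the standard relations v_max = λ·PRF/4 and r_max = c/(2·PRF), which follow from the pulse-pair phase sampling and the echo-timing argument given earlier.

def nyquist_velocity_ms(wavelength_m, prf_hz):
    """Maximum unambiguous radial velocity in m/s for a given pulse repetition frequency."""
    return wavelength_m * prf_hz / 4.0

def max_unambiguous_range_km(prf_hz):
    """Maximum unambiguous range in km: the echo must return before the next pulse is sent."""
    c = 299_792_458.0
    return c / (2.0 * prf_hz) / 1000.0

prf = 1000.0                                     # one pulse per millisecond (illustrative value)
print(round(max_unambiguous_range_km(prf)))      # ~150 km of unambiguous range
print(round(nyquist_velocity_ms(0.05, prf), 1))  # 12.5 m/s for a 5 cm radar, as quoted above
print(round(nyquist_velocity_ms(0.10, prf), 1))  # 25.0 m/s for a 10 cm radar, i.e. doubled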
Users can recognize velocity patterns in the wind associated with rotations, such as mesocyclone, convergence (outflow boundary) and divergence (downburst). Polarization. Droplets of falling liquid water tend to have a larger horizontal axis due to the drag coefficient of air while falling (water droplets). This causes the water molecule dipole to be oriented in that direction; so, radar beams are, generally, polarized horizontally in order to receive the maximal signal reflection. If two pulses are sent simultaneously with orthogonal polarization (vertical and horizontal, "ZV" and "ZH" respectively), two independent sets of data will be received. These signals can be compared in several useful ways: * Differential Reflectivity ("Zdr") – Differential reflectivity is proportional to the ratio of the reflected horizontal and vertical power returns as "ZH" / "ZV". Among other things, it is a good indicator of droplet shape. Differential reflectivity also can provide an estimate of average droplet size, as larger drops are more subject to deformation by aerodynamic forces than are smaller ones (that is, larger drops are more likely to become "hamburger bun-shaped") as they fall through the air. * Correlation Coefficient ("ρhv") – A statistical correlation between the reflected horizontal and vertical power returns. High values, near one, indicate homogeneous precipitation types, while lower values indicate regions of mixed precipitation types, such as rain and snow, or hail, or in extreme cases debris aloft, usually coinciding with a tornado debris signature and a tornado vortex signature. * Linear Depolarization Ratio ("LDR") – This is a ratio of a vertical power return from a horizontal pulse or a horizontal power return from a vertical pulse. It can also indicate regions where there is a mixture of precipitation types. * Differential Phase ("formula_31") – The differential phase is a comparison of the returned phase difference between the horizontal and vertical pulses. This change in phase is caused by the difference in the number of wave cycles (or wavelengths) along the propagation path for horizontal and vertically polarized waves. It should not be confused with the Doppler frequency shift, which is caused by the motion of the cloud and precipitation particles. Unlike the differential reflectivity, correlation coefficient and linear depolarization ratio, which are all dependent on reflected power, the differential phase is a "propagation effect." It is a very good estimator of rain rate and is not affected by attenuation. The range derivative of differential phase (specific differential phase, "Kdp") can be used to localize areas of strong precipitation/attenuation. With more information about particle shape, dual-polarization radars can more easily distinguish airborne debris from precipitation, making it easier to locate tornados. With this new knowledge added to the reflectivity, velocity, and spectrum width produced by Doppler weather radars, researchers have been working on developing algorithms to differentiate precipitation types, non-meteorological targets, and to produce better rainfall accumulation estimates. In the U.S., NCAR and NSSL have been world leaders in this field. NOAA established a test deployment for dual-polametric radar at NSSL and equipped all its 10 cm NEXRAD radars with dual-polarization, which was completed in April 2013. 
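As a small numerical illustration of the differential reflectivity defined above, using the conventional decibel form Zdr = 10·log10(ZH/ZV); the sample power values below are invented purely for illustration.

import math

def zdr_db(zh_linear, zv_linear):
    """Differential reflectivity in dB from the linear horizontal and vertical power returns."""
    return 10.0 * math.log10(zh_linear / zv_linear)

# Large, flattened raindrops return noticeably more horizontal than vertical power,
# while tumbling hail looks nearly isotropic and gives Zdr close to 0 dB.
print(round(zdr_db(400.0, 200.0), 1))   # 3.0 dB: strongly oblate drops (large rain)
print(round(zdr_db(300.0, 290.0), 2))   # 0.15 dB: nearly spherical or tumbling targets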
In 2004, ARMOR Doppler Weather Radar in Huntsville, Alabama was equipped with a SIGMET Antenna Mounted Receiver, giving Dual-Polarmetric capabilities to the operator. McGill University J. S. Marshall Radar Observatory in Montreal, Canada has converted its instrument (1999) and the data were used operationally by Environment Canada in Montreal until its closure in 2018. Another Environment Canada radar, in King City (North of Toronto), was dual-polarized in 2005; it uses a 5 cm wavelength, which experiences greater attenuation. Environment Canada is converting graually all of its radars to dual-polarization. Météo-France is planning on incorporating dual-polarizing Doppler radar in its network coverage. Radar display methods. Various methods of displaying data from radar scans have been developed over time to address the needs of its users. This is a list of common and specialized displays: Plan position indicator. Since data is obtained one angle at a time, the first way of displaying it has been the Plan Position Indicator (PPI) which is only the layout of radar return on a two dimensional image. Importantly, the data coming from different distances to the radar are at different heights above ground. This is very important as a high rain rate seen near the radar is relatively close to what reaches the ground but what is seen from 160 km away is about 1.5 km above ground and could be far different from the amount reaching the surface. It is thus difficult to compare weather echoes at different distances from the radar. PPIs are affected by ground echoes near the radar. These can be misinterpreted as real echoes. Other products and further treatments of data have been developed to supplement such shortcomings. Usage: Reflectivity, Doppler and polarimetric data can use PPI. In the case of Doppler data, two points of view are possible: relative to the surface or the storm. When looking at the general motion of the rain to extract wind at different altitudes, it is better to use data relative to the radar. But when looking for rotation or wind shear under a thunderstorm, it is better to use storm relative images that subtract the general motion of precipitation leaving the user to view the air motion as if he would be sitting on the cloud. Constant-altitude plan position indicator. To avoid some of the PPI problems, the constant-altitude plan position indicator (CAPPI) has been developed by Canadian researchers. It is a horizontal cross-section through radar data. This way, one can compare precipitation on an equal footing at difference distance from the radar and avoid ground echoes. Although data are taken at a certain height above ground, a relation can be inferred between ground stations' reports and the radar data. CAPPIs call for a large number of angles from near the horizontal to near the vertical of the radar to have a cut that is as close as possible at all distance to the height needed. Even then, after a certain distance, there isn't any angle available and the CAPPI becomes the PPI of the lowest angle. The zigzag line on the angles diagram above shows the data used to produce 1.5 km and 4 km height CAPPIs. Notice that the section after 120 km is using the same data. Since the CAPPI uses the closest angle to the desired height at each point from the radar, the data can originate from slightly different altitudes, as seen on the image, in different points of the radar coverage. It is therefore crucial to have a large enough number of sounding angles to minimize this height change. 
Furthermore, the type of data must change relatively gradually with height to produce an image that is not noisy. Reflectivity data being relatively smooth with height, CAPPIs are mostly used for displaying them. Velocity data, on the other hand, can change rapidly in direction with height and CAPPIs of them are not common. It seems that only McGill University is producing regularly Doppler CAPPIs with the 24 angles available on their radar. However, some researchers have published papers using velocity CAPPIs to study tropical cyclones and development of NEXRAD products. Finally, polarimetric data are recent and often noisy. There doesn't seem to have regular use of CAPPI for them although the "SIGMET" company offer a software capable to produce those types of images. Vertical composite. Another solution to the PPI problems is to produce images of the maximum reflectivity in a layer above ground. This solution is usually taken when the number of angles available is small or variable. The American National Weather Service is using such Composite as their scanning scheme can vary from 4 to 14 angles, according to their need, which would make very coarse CAPPIs. The Composite assures that no strong echo is missed in the layer and a treatment using Doppler velocities eliminates the ground echoes. Comparing base and composite products, one can locate virga and updrafts zones. Accumulations. Another important use of radar data is the ability to assess the amount of precipitation that has fallen over large basins, to be used in hydrological calculations; such data is useful in flood control, sewer management and dam construction. The computed data from radar weather may be used in conjunction with data from ground stations. To produce radar accumulations, we have to estimate the rain rate over a point by the average value over that point between one PPI, or CAPPI, and the next; then multiply by the time between those images. If one wants for a longer period of time, one has to add up all the accumulations from image to image during that time. Echotops. Aviation is a heavy user of radar data. One map particularly important in this field is the Echotops for flight planning and avoidance of dangerous weather. Most country weather radars scan enough angles to have a 3D set of data over the area of coverage. It is relatively easy to estimate the maximum altitude at which precipitation is found within the volume. However, those are not the tops of clouds, as they always extend above the precipitation. Vertical cross sections. To know the vertical structure of clouds, in particular thunderstorms or the level of the melting layer, a vertical cross-section product of the radar data is available to meteorologists. This is done by displaying only the data along a line, from coordinates A to B, taken from the different angles scanned. Range Height Indicator. When a weather radar is scanning in only the vertical axis, it can obtain much higher resolution data than it could with a composite-vertical slice using combined PPI tilts. This output is called a "Range Height Indicator" (RHI), which is excellent for viewing the detailed smaller-scale vertical structure of a storm. As mentioned, this is different from the vertical cross section mentioned above, namely due to the fact that the radar antenna is scanning solely vertically, and does not scan over the entire 360 degrees around the site. This kind of product is typically only available on research radars. Radar networks. 
Over the past few decades, radar networks have been extended to allow the production of composite views covering large areas. For instance, countries such as the United States, Canada, Australia and Japan, and much of Europe, combine images from their radar networks into a single display. In fact, such a network can consist of different types of radar with different characteristics like beam width, wavelength and calibration. These differences have to be taken into account when matching data across the network, particularly when deciding what data to use when two radars cover the same point. If one uses the stronger echo but it comes from the more distant radar, one uses returns from higher altitude, coming from rain or snow that might evaporate before reaching the ground (virga). If one uses data from the closest radar, it might be attenuated by passing through a thunderstorm. Composite images of precipitation using a network of radars are made with all those limitations in mind. Automatic algorithms. To help meteorologists spot dangerous weather, mathematical algorithms have been introduced into the weather radar processing programs. These are particularly important in analyzing the Doppler velocity data, as they are more complex. The polarization data will require even more algorithms. Main algorithms for reflectivity: Main algorithms for Doppler velocities: Animations. The animation of radar products can show the evolution of reflectivity and velocity patterns. The user can extract information on the dynamics of the meteorological phenomena, including the ability to extrapolate the motion and observe development or dissipation. This can also reveal non-meteorological artifacts (false echoes) that will be discussed later. Radar Integrated Display with Geospatial Elements. A popular new presentation of weather radar data in the United States is via "Radar Integrated Display with Geospatial Elements" (RIDGE), in which the radar data is projected on a map with geospatial elements such as topography maps, highways, state/county boundaries and weather warnings. The projection is often flexible, giving the user a choice of various geographic elements. It is frequently used in conjunction with animations of radar data over a time period. Limitations and artifacts. Radar data interpretation depends on many hypotheses about the atmosphere and the weather targets, including: These assumptions are not always met; one must be able to differentiate between reliable and dubious echoes. Anomalous propagation (non-standard atmosphere). The first assumption is that the radar beam is moving through air that cools down at a certain rate with height. The position of the echoes depends heavily on this hypothesis. However, the real atmosphere can vary greatly from the norm. Super refraction. Temperature inversions often form near the ground, for instance by air cooling at night while remaining warm aloft. As the index of refraction of air decreases faster than normal, the radar beam bends toward the ground instead of continuing upward. Eventually, it will hit the ground and be reflected back toward the radar. The processing program will then wrongly place the return echoes at the height and distance they would have had in normal conditions. This type of false return is relatively easy to spot on a time loop if it is due to night cooling or marine inversion, as one sees very strong echoes developing over an area, spreading in size laterally but not moving and varying greatly in intensity. 
However, temperature inversions also exist ahead of warm fronts, and the abnormal propagation echoes are then mixed with real rain. The extreme of this problem occurs when the inversion is very strong and shallow: the radar beam then reflects toward the ground many times, as it has to follow a waveguide path. This will create multiple bands of strong echoes on the radar images. This situation can be found with inversions of temperature aloft or a rapid decrease of moisture with height. In the former case, it could be difficult to notice. Under refraction. On the other hand, if the air is unstable and cools faster than the standard atmosphere with height, the beam ends up higher than expected. Precipitation is then placed higher than its actual height. Such an error is difficult to detect without additional temperature lapse rate data for the area. Non-Rayleigh targets. If we want to reliably estimate the precipitation rate, the targets have to be 10 times smaller than the radar wave according to Rayleigh scattering. This is because the water molecule has to be excited by the radar wave to give a return. This is relatively true for rain or snow, as 5 or 10 cm wavelength radars are usually employed. However, for very large hydrometeors, since the wavelength is on the order of the size of the stone, the return levels off according to Mie theory. A return of more than 55 dBZ is likely to come from hail but won't vary proportionally to its size. On the other hand, very small targets such as cloud droplets are too small to be excited and do not give a recordable return on common weather radars. Resolution and partially filled scanned volume. As demonstrated at the start of the article, radar beams have a physical dimension and data are sampled at discrete angles, not continuously, along each angle of elevation. This results in an averaging of the values of the returns for reflectivity, velocity and polarization data over the resolution volume scanned. In the figure to the left, at the top is a view of a thunderstorm taken by a wind profiler as it was passing overhead. This is like a vertical cross section through the cloud with 150-metre vertical and 30-metre horizontal resolution. The reflectivity has large variations over a short distance. Compare this with a simulated view of what a regular weather radar would see at 60 km, in the bottom of the figure. Everything has been smoothed out. Not only does the coarser resolution of the radar blur the image, but the sounding also incorporates areas that are echo free, thus extending the thunderstorm beyond its real boundaries. This shows how the output of weather radar is only an approximation of reality. The image to the right compares real data from two almost colocated radars. The TDWR has about half the beamwidth of the other, and one can see twice as much detail as with the NEXRAD. Resolution can be improved by newer equipment, but some limitations cannot be overcome. As mentioned previously, the volume scanned increases with distance, so the possibility that the beam is only partially filled also increases. This leads to underestimation of the precipitation rate at larger distances and fools the user into thinking that rain is lighter as it moves away. Beam geometry. The radar beam has a distribution of energy similar to the diffraction pattern of light passing through a slit. This is because the wave is transmitted to the parabolic antenna through a slit in the wave-guide at the focal point. 
Most of the energy is at the center of the beam and decreases along a curve close to a Gaussian function on each side. However, there are secondary peaks of emission that will sample the targets at off-angles from the center. Designers attempt to minimize the power transmitted by such lobes, but they cannot be eliminated. When a secondary lobe hits a reflective target such as a mountain or a strong thunderstorm, some of the energy is reflected to the radar. This energy is relatively weak but arrives at the same time that the central peak is illuminating a different azimuth. The echo is thus misplaced by the processing program. This has the effect of actually broadening the real weather echo making a smearing of weaker values on each side of it. This causes the user to overestimate the extent of the real echoes. Non-weather targets. There is more than rain and snow in the sky. Other objects can be misinterpreted as rain or snow by weather radars. Insects and arthropods are swept along by the prevailing winds, while birds follow their own course. As such, fine line patterns within weather radar imagery, associated with converging winds, are dominated by insect returns. Bird migration, which tends to occur overnight within the lowest 2000 metres of the Earth's atmosphere, contaminates wind profiles gathered by weather radar, particularly the WSR-88D, by increasing the environmental wind returns by 30–60 km/h. Other objects within radar imagery include: Such extraneous objects have characteristics that allow a trained eye to distinguish them. It is also possible to eliminate some of them with post-treatment of data using reflectivity, Doppler, and polarization data. Wind farms. The rotating blades of windmills on modern wind farms can return the radar beam to the radar if they are in its path. Since the blades are moving, the echoes will have a velocity and can be mistaken for real precipitation. The closer the wind farm, the stronger the return, and the combined signal from many towers is stronger. In some conditions, the radar can even see toward and away velocities that generate false positives for the tornado vortex signature algorithm on weather radar; such an event occurred in 2009 in Dodge City, Kansas. As with other structures that stand in the beam, attenuation of radar returns from beyond windmills may also lead to underestimation. Attenuation. Microwaves used in weather radars can be absorbed by rain, depending on the wavelength used. For 10 cm radars, this attenuation is negligible. That is the reason why countries with high water content storms are using 10 cm wavelength, for example the US NEXRAD. The cost of a larger antenna, klystron and other related equipment is offset by this benefit. For a 5 cm radar, absorption becomes important in heavy rain and this attenuation leads to underestimation of echoes in and beyond a strong thunderstorm. Canada and other northern countries use this less costly kind of radar as the precipitation in such areas is usually less intense. However, users must consider this characteristic when interpreting data. The images above show how a strong line of echoes seems to vanish as it moves over the radar. To compensate for this behaviour, radar sites are often chosen to somewhat overlap in coverage to give different points of view of the same storms. Shorter wavelengths are even more attenuated and are only useful on short range radar. Many television stations in the United States have 5 cm radars to cover their audience area. 
Knowing their limitations and using them with the local NEXRAD can supplement the data available to a meteorologist. Due to the spread of dual-polarization radar systems, robust and efficient approaches for the compensation of rain attenuation are currently implemented by operational weather services. Attenuation correction in weather radars for snow particles is an active research topic. Bright band. A radar beam's reflectivity depends on the diameter of the target and its capacity to reflect. Snowflakes are large but weakly reflective while rain drops are small but highly reflective. When snow falls through a layer above freezing temperature, it melts into rain. Using the reflectivity equation, one can demonstrate that the returns from the snow before melting and the rain after, are not too different as the change in dielectric constant compensates for the change in size. However, during the melting process, the radar wave "sees" something akin to very large droplets as snow flakes become coated with water. This gives enhanced returns that can be mistaken for stronger precipitations. On a PPI, this will show up as an intense ring of precipitation at the altitude where the beam crosses the melting level while on a series of CAPPIs, only the ones near that level will have stronger echoes. A good way to confirm a bright band is to make a vertical cross section through the data, as illustrated in the picture above. An opposite problem is that drizzle (precipitation with small water droplet diameter) tends not to show up on radar because radar returns are proportional to the sixth power of droplet diameter. Multiple reflections. It is assumed that the beam hits the weather targets and returns directly to the radar. In fact, there is energy reflected in all directions. Most of it is weak, and multiple reflections diminish it even further so what can eventually return to the radar from such an event is negligible. However, some situations allow a multiple-reflected radar beam to be received by the radar antenna. For instance, when the beam hits hail, the energy spread toward the wet ground will be reflected back to the hail and then to the radar. The resulting echo is weak but noticeable. Due to the extra path length it has to go through, it arrives later at the antenna and is placed further than its source. This gives a kind of triangle of false weaker reflections placed radially behind the hail. Solutions and future solutions. Filtering. These two images show what can be achieved to clean up radar data. On the first image made from the raw returns, it is difficult to distinguish the real weather. Since rain and snow clouds are usually moving, Doppler velocities can be used to eliminate a good part of the clutter (ground echoes, reflections from buildings seen as urban spikes, anomalous propagation). The other image has been filtered using this property. However, not all non-meteorological targets remain stationary (birds, insects, dust). Others, like the bright band, depend on the structure of the precipitation. Polarization offers a direct typing of the echoes which could be used to filter more false data or produce separate images for specialized purposes, such as clutter, birds, etc. subsets. Mesonet. Another question is the resolution. As mentioned, radar data are an average of the scanned volume by the beam. Resolution can be improved by larger antenna or denser networks. 
A program by the Center for Collaborative Adaptive Sensing of the Atmosphere (CASA) aims to supplement the regular NEXRAD (a network in the United States) using many low cost X-band (3 cm) weather radar mounted on cellular telephone towers. These radars will subdivide the large area of the NEXRAD into smaller domains to look at altitudes below its lowest angle. These will give details not otherwise available. Using 3 cm radars, the antenna of each radar is small (about 1 meter diameter) but the resolution is similar at short distance to that of NEXRAD. The attenuation is significant due to the wavelength used but each point in the coverage area is seen by many radars, each viewing from a different direction and compensating for data lost from others. Scanning strategies. The number of elevation scanned and the time taken for a complete cycle depend on the weather. For example, with little or no precipitation the scheme may be limited to the lowest angles and use longer impulses in order to detect wind shift near the surface. On the other hand, for violent thunderstorms it is better to scan a large range of angles in order to have a 3-D view of the precipitation as often as possible. To mitigate the different demands, scanning strategies have been developed according to the type of radar, the wavelength used and the most common weather situations in the area considered. One example of scanning strategies is offered by the US NEXRAD radar network which has evolved over time. In 2008, it added extra resolution of data, and in 2014, additional intra-cycle scanning of the lowest level elevation (MESO-SAILS). Electronic sounding. Timeliness also needs improvement. With 5 to 10 minutes between complete scans of weather radar, much data is lost as a thunderstorm develops. A Phased-array radar is being tested at the National Severe Storms Lab in Norman, Oklahoma, to speed the data gathering. A team in Japan has also deployed a phased-array radar for 3D NowCasting at the RIKEN Advanced Institute for Computational Science (AICS). Specialized applications. Avionics weather radar. Aircraft application of radar systems include weather radar, collision avoidance, target tracking, ground proximity, and other systems. For commercial weather radar, ARINC 708 is the primary specification for weather radar systems using an airborne pulse-Doppler radar. Antennas. Unlike ground weather radar, which is set at a fixed angle, airborne weather radar is being utilized from the nose or wing of an aircraft. Not only will the aircraft be moving up, down, left, and right, but it will be rolling as well. To compensate for this, the antenna is linked and calibrated to the vertical gyroscope located on the aircraft. By doing this, the pilot is able to set a pitch or angle to the antenna that will enable the stabilizer to keep the antenna pointed in the right direction under moderate maneuvers. The small servo motors will not be able to keep up with abrupt maneuvers, but it will try. In doing this the pilot is able to adjust the radar so that it will point towards the weather system of interest. If the airplane is at a low altitude, the pilot would want to set the radar above the horizon line so that ground clutter is minimized on the display. If the airplane is at a very high altitude, the pilot will set the radar at a low or negative angle, to point the radar towards the clouds wherever they may be relative to the aircraft. 
If the airplane changes attitude, the stabilizer will adjust itself accordingly so that the pilot doesn't have to fly with one hand and adjust the radar with the other. Receivers/transmitters. There are two major systems when talking about the receiver/transmitter: the first is high-powered systems, and the second is low-powered systems; both of which operate in the X-band frequency range (8,000 – 12,500 MHz). High-powered systems operate at 10,000 – 60,000 watts. These systems consist of magnetrons that are fairly expensive (approximately $1,700) and allow for considerable noise due to irregularities with the system. Thus, these systems are highly dangerous for arcing and are not safe to be used around ground personnel. However, the alternative would be the low-powered systems. These systems operate 100 – 200 watts, and require a combination of high gain receivers, signal microprocessors, and transistors to operate as effectively as the high-powered systems. The complex microprocessors help to eliminate noise, providing a more accurate and detailed depiction of the sky. Also, since there are fewer irregularities throughout the system, the low-powered radars can be used to detect turbulence via the Doppler Effect. Since low-powered systems operate at considerable less wattage, they are safe from arcing and can be used at virtually all times. Thunderstorm tracking. Digital radar systems have capabilities far beyond their predecessors. They offer thunderstorm tracking surveillance which provides users with the ability to acquire detailed information of each storm cloud being tracked. Thunderstorms are identified by matching raw precipitation data received from the radar pulse, to a preprogrammed template. In order for a thunderstorm to be confirmed, it must meet strict definitions of intensity and shape to distinguish it from a non-convective cloud. Usually, it must show signs of horizontal organization and vertical continuity: and have a core or a more intense center identified and tracked by digital radar trackers. Once the thunderstorm cell is identified, speed, distance covered, direction, and Estimated Time of Arrival (ETA) are all tracked and recorded. Doppler radar and bird migration. Using Doppler weather radar is not limited to determining the location and velocity of precipitation. It can track bird migrations as well (non-weather targets section). The radio waves from the radars bounce off rain and birds alike (or even insects like butterflies). The US National Weather Service, for instance, has reported having flights of birds appear on their radars as clouds and then fade away when the birds land. The U.S. National Weather Service St. Louis has even reported monarch butterflies appearing on its radars. Different programs in North America use regular weather radars and specialized radar data to determine the paths, height of flight, and timing of migrations. This is useful information in planning windmill farm placement and operation, to reduce bird fatalities, improve aviation safety and other wildlife management. In Europe, there have been similar developments and even a comprehensive forecast program for aviation safety, based on radar detection. Meteorite fall detection. An image shows the Park Forest, Illinois, meteorite fall which occurred on 26 March 2003. The red-green feature at the upper left is the motion of clouds near the radar itself, and a signature of falling meteorites is inside the yellow ellipse at image center. 
The intermixed red and green pixels indicate turbulence, in this case arising from the wakes of falling, high-velocity meteorites. According to the American Meteor Society, meteorite falls occur on a daily basis somewhere on Earth. However, the database of worldwide meteorite falls maintained by the Meteoritical Society typically records only about 10–15 new meteorite falls annually. Meteorites occur when a meteoroid falls into the Earth's atmosphere, generating an optically bright meteor by ionization and frictional heating. If the meteor is large enough and the infall velocity is low enough, surviving meteorites will reach the ground. When the falling meteorites decelerate below about 2–4 km/s, usually at an altitude between 15 and 25 km, they no longer generate an optically bright meteor and enter "dark flight". Because of this, most meteorite falls occur over the oceans, happen during the day, or otherwise go unnoticed. It is in dark flight that falling meteorites typically fall through the interaction volume of most types of radars. It has been demonstrated that it is possible to identify falling meteorites in weather radar imagery. This is especially useful for meteorite recovery, as weather radars are part of widespread networks and scan the atmosphere continuously. Furthermore, the meteorites cause local wind turbulence, which is noticeable on Doppler outputs, and fall nearly vertically, so their resting place on the ground is close to their radar signature. References. &lt;templatestyles src="Reflist/styles.css" /&gt; *
[ { "math_id": 0, "text": "\\, {v = h r^2 \\theta^2}" }, { "math_id": 1, "text": "\\,\\theta" }, { "math_id": 2, "text": "\\text{Distance} = c \\frac{\\Delta t}{2n}," }, { "math_id": 3, "text": "H = \\sqrt{r^2+(k_ea_e)^2+2rk_{e}a_{e}\\sin(\\theta_e)} - k_{e}a_{e} + h_{a}," }, { "math_id": 4, "text": " G_t=A_r (\\mathrm{or} \\, G_r) =G" }, { "math_id": 5, "text": "P_r = P_t{{G^2 \\lambda^2 \\sigma}\\over{{(4\\pi)}^3 R^4}} \\propto \\frac {\\sigma} {R^4}" }, { "math_id": 6, "text": "\\scriptstyle P_r" }, { "math_id": 7, "text": "\\scriptstyle P_t" }, { "math_id": 8, "text": "\\scriptstyle G" }, { "math_id": 9, "text": "\\scriptstyle \\lambda" }, { "math_id": 10, "text": "\\scriptstyle \\sigma" }, { "math_id": 11, "text": "\\scriptstyle R" }, { "math_id": 12, "text": "\\sigma = \\bar \\sigma = V \\sum \\sigma_{j} = V \\eta " }, { "math_id": 13, "text": "\\begin{cases} V\\quad= \\mathrm{scanned \\, \\, volume} \\\\ \\qquad= \\mathrm{pulse \\, \\, length} \\times \\mathrm{beam \\, \\, width} \\\\ \\qquad= \\frac {c\\tau}{2}\\frac {\\pi R^2 \\theta^2}{4} \\end{cases}" }, { "math_id": 14, "text": "\\,c" }, { "math_id": 15, "text": "\\,\\tau" }, { "math_id": 16, "text": "P_r = P_t{{G^2 \\lambda^2 }\\over{{(4\\pi)}^3 R^4}} \\frac {c\\tau}{2} \\frac {\\pi R^2 \\theta^2}{4} \\eta = P_t \\tau G^2 \\lambda^2 \\theta^2 \\frac {c}{512(\\pi^2)} \\frac {\\eta} {R^2} " }, { "math_id": 17, "text": "P_r \\propto \\frac {\\eta} {R^2}" }, { "math_id": 18, "text": "\\, R^2" }, { "math_id": 19, "text": "\\,R^4" }, { "math_id": 20, "text": "Z_e = \\int_{0}^{Dmax} |K|^2 N_0 e^{-\\Lambda D} D^6dD " }, { "math_id": 21, "text": "R = \\int_{0}^{Dmax} N_0 e^{-\\Lambda D} {\\pi D^3 \\over 6} v(D)dD " }, { "math_id": 22, "text": "\\Lambda" }, { "math_id": 23, "text": "I = I_0 \\sin \\left(\\frac{4\\pi (x_0 + v \\Delta t)}{\\lambda}\\right) = I_0 \\sin \\left(\\Theta_0 + \\Delta\\Theta\\right) \\quad \\begin{cases} x = \\text{distance from radar to target} \\\\ \\lambda = \\text{radar wavelength} \\\\ \\Delta t = \\text{time between two pulses} \\end{cases} " }, { "math_id": 24, "text": "\\Delta\\Theta = \\frac{4\\pi v \\Delta t}{\\lambda}" }, { "math_id": 25, "text": "\\frac{\\lambda\\Delta\\Theta}{4\\pi \\Delta t}" }, { "math_id": 26, "text": "\\pi" }, { "math_id": 27, "text": "\\pm" }, { "math_id": 28, "text": "\\frac{\\lambda}{4\\Delta t}" }, { "math_id": 29, "text": "\\Delta t" }, { "math_id": 30, "text": "\\frac{c\\Delta t}{2}" }, { "math_id": 31, "text": "\\Phi_{dp}" } ]
https://en.wikipedia.org/wiki?curid=675776
67578967
Classical shadow
In quantum computing, classical shadow is a protocol for predicting functions of a quantum state using only a logarithmic number of measurements. Given an unknown state formula_0, a tomographically complete set of gates formula_1 (e.g. Clifford gates), a set of formula_2 observables formula_3 and a quantum channel formula_4 defined by randomly sampling from formula_5, applying it to formula_6 and measuring the resulting state, predict the expectation values formula_7. A list of classical shadows formula_8 is created using formula_6, formula_5 and formula_4 by running a Shadow generation algorithm. When predicting the properties of formula_6, a Median-of-means estimation algorithm is used to deal with the outliers in formula_8. Classical shadow is useful for direct fidelity estimation, entanglement verification, estimating correlation functions, and predicting entanglement entropy. Recently, researchers have built on classical shadow to devise provably efficient classical machine learning algorithms for a wide range of quantum many-body problems. For example, machine learning models could learn to solve ground states of quantum many-body systems and classify quantum phases of matter. Algorithm: Shadow generation. Inputs: formula_9 copies of an unknown formula_10-qubit state formula_6; a list of unitaries formula_5 that is tomographically complete; a classical description of the inverted quantum channel formula_11. For each formula_12 from formula_13 to formula_9, draw a random unitary formula_14 from formula_5, apply it to formula_6 to obtain the rotated state formula_15, and measure that state in the computational basis to get an outcome formula_16; then store an efficient classical description of the snapshot formula_17. Return the list formula_8 of these snapshots. Algorithm: Median-of-means estimation. Inputs: a list of observables formula_18; a classical shadow formula_19; a positive integer formula_20 that specifies how many linear estimates of formula_6 to calculate. Return a list formula_21, where formula_22, where formula_23 and where formula_24. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
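As a concrete illustration of the two procedures above, the following Python sketch implements classical shadows for the special case where each randomizing unitary is a tensor product of single-qubit rotations into the X, Y or Z basis (random Pauli measurements). In that case the inverse channel is known to factorize qubit-by-qubit, each snapshot factor being 3U†|b⟩⟨b|U − I, and the estimator shown handles tensor-product observables only to keep the sketch short. The two-qubit test state, the observable and all variable names are illustrative choices, not part of the protocol's definition.

import numpy as np

# Single-qubit rotations that map the X, Y and Z eigenbases onto the computational basis.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]])
I2 = np.eye(2)
BASIS_ROTATIONS = [H, H @ S.conj().T, I2]   # measure X, Y or Z

def random_pauli_shadow(rho, n_qubits, n_snapshots, rng):
    """Shadow generation for random single-qubit Pauli measurements.
    Each snapshot is returned as a list of per-qubit 2x2 factors; the full
    snapshot would be their tensor product (never built explicitly here)."""
    dim = 2 ** n_qubits
    snapshots = []
    for _ in range(n_snapshots):
        bases = rng.integers(0, 3, size=n_qubits)
        U = BASIS_ROTATIONS[bases[0]]
        for q in range(1, n_qubits):
            U = np.kron(U, BASIS_ROTATIONS[bases[q]])
        rotated = U @ rho @ U.conj().T
        probs = np.real(np.diag(rotated)).clip(min=0)
        b = rng.choice(dim, p=probs / probs.sum())
        bits = [(b >> (n_qubits - 1 - q)) & 1 for q in range(n_qubits)]
        # Inverse channel for random Pauli measurements factorizes per qubit:
        # rho_hat_q = 3 * U_q^dagger |b_q><b_q| U_q - I
        factors = []
        for q in range(n_qubits):
            Uq = BASIS_ROTATIONS[bases[q]]
            ket = np.zeros((2, 1)); ket[bits[q], 0] = 1.0
            factors.append(3.0 * Uq.conj().T @ (ket @ ket.conj().T) @ Uq - I2)
        snapshots.append(factors)
    return snapshots

def median_of_means(observable_factors, snapshots, k):
    """Median-of-means estimate of <O> for a tensor-product observable
    O = O_1 x O_2 x ..., given the per-qubit snapshot factors."""
    single = [np.prod([np.trace(O @ f).real
                       for O, f in zip(observable_factors, factors)])
              for factors in snapshots]
    chunks = np.array_split(np.array(single), k)
    return float(np.median([c.mean() for c in chunks]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy example: the 2-qubit state (|00> + |11>)/sqrt(2); estimate <Z x Z> (exact value 1).
    psi = np.zeros(4); psi[0] = psi[3] = 1 / np.sqrt(2)
    rho = np.outer(psi, psi.conj())
    Z = np.diag([1.0, -1.0])
    shadows = random_pauli_shadow(rho, n_qubits=2, n_snapshots=2000, rng=rng)
    print("estimated <ZZ> ~", median_of_means([Z, Z], shadows, k=10))

With a few thousand snapshots, the median-of-means estimate of ⟨Z⊗Z⟩ for this state should land close to its exact value of 1, while remaining robust to occasional outlying snapshots.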
[ { "math_id": 0, "text": " \\rho " }, { "math_id": 1, "text": " U" }, { "math_id": 2, "text": "M" }, { "math_id": 3, "text": "\\{O_{i}\\}" }, { "math_id": 4, "text": "\\mathcal{E}" }, { "math_id": 5, "text": "U" }, { "math_id": 6, "text": "\\rho" }, { "math_id": 7, "text": "\\operatorname{tr}(O_{i} \\rho)" }, { "math_id": 8, "text": "S" }, { "math_id": 9, "text": "N" }, { "math_id": 10, "text": "n" }, { "math_id": 11, "text": "\\mathcal{E}^{-1}" }, { "math_id": 12, "text": "i" }, { "math_id": 13, "text": "1" }, { "math_id": 14, "text": "U_{i}" }, { "math_id": 15, "text": "\\rho_{i}" }, { "math_id": 16, "text": "b_{i} \\in \\{0, 1\\}^{n}" }, { "math_id": 17, "text": "\\mathcal{E}^{-1}(U_{i}^{\\dagger}|b_{i}\\rangle\\langle b_{i}|U_{i})" }, { "math_id": 18, "text": " O_{1}, ...., O_{M} " }, { "math_id": 19, "text": "S(\\rho; N) = [\\hat{\\rho}_1, \\ldots, \\hat{\\rho}_N]" }, { "math_id": 20, "text": "K" }, { "math_id": 21, "text": "[o_{1}, ..., o_{M}] " }, { "math_id": 22, "text": "o_{i} = \\mathrm{median}(\\mathrm{trace}(O_{1} p_{1}),..., \\mathrm{trace}(O_{1} p_{K})) " }, { "math_id": 23, "text": "p_{k} = \\frac{1}{[\\frac{N}{K}]} \\sum_{i = (k-1)[\\frac{N}{K}] + 1}^{k [\\frac{N}{K}]} \\hat{\\rho}_{i}" }, { "math_id": 24, "text": "k = 1, ..., K" } ]
https://en.wikipedia.org/wiki?curid=67578967
675863
Oh-My-God particle
Ultra-high-energy cosmic ray detected in 1991 The Oh-My-God particle was an ultra-high-energy cosmic ray detected on 15 October 1991 by the Fly's Eye camera in Dugway Proving Ground, Utah, United States. It is the highest-energy cosmic ray ever observed. Its energy was estimated as (320 exa-eV). The particle's energy was unexpected and called into question prevailing theories about the origin and propagation of cosmic rays. Speed. It is not known what kind of particle it was, but most cosmic rays are protons. If formula_0 is the rest mass of the particle and formula_1 is its kinetic energy (energy above the rest mass energy), then its speed was formula_2 times the speed of light. Assuming it was a proton, for which formula_3 is 938 MeV, this means it was traveling at times the speed of light, its Lorentz factor was and its rapidity was . Due to special relativity, the relativistic time dilation experienced by a proton traveling at this speed would be extreme. If the proton originated from a distance of 1.5 billion light years, it would take approximately 1.71 days in the reference frame of the proton to travel that distance. Collision energy. The energy of the particle was some 40 million times that of the highest-energy protons that have been produced in any terrestrial particle accelerator. However, only a small fraction of this energy was available for its interaction with a nucleus in the Earth's atmosphere, with most of the energy remaining in the form of kinetic energy of the center of mass of the products of the interaction. If formula_4 is the mass of the "target" nucleus, the energy available for such a collision is formula_5 which for large formula_1 is approximately formula_6 For the Oh-My-God particle hitting a nitrogen nucleus, this gives 2900 TeV, which is roughly 200 times higher than the highest collision energy of the Large Hadron Collider, in which two high-energy particles going in opposite directions collide. In the center-of-mass frame of reference (which moved at almost the speed of light in our frame of reference), the products of the collision would therefore have had around 2900 TeV of energy, enough to transform the nucleus into many particles, moving apart at almost the speed of light even in this center-of-mass frame of reference. As with other cosmic rays, this generated a cascade of relativistic particles as the particles interacted with other nuclei. Comparisons. The "Oh-My-God particle"'s energy was estimated as , or . Although this amount is phenomenally large for a single elementary particle – far outstripping the highest energy that human technology can generate in a particle – it is still far below the level of the Planck scale, where exotic physics is expected. Though it was a subatomic particle, its energy was comparable to the gravitational potential energy of a 1 kilogram object that could fall 5 meters off a two-story building. The "Oh-My-God particle" had 10^20 (100 quintillion) times the photon energy of visible light, equivalent to a baseball travelling at about . Coordinates: 5h 40m 48s, +48° 0′ 0″. Its energy was 20 million times greater than the highest photon energy measured in electromagnetic radiation emitted by an extragalactic object, the blazar Markarian 501. 
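As a rough numerical check of the figures quoted in the Speed and Collision energy sections, the short Python sketch below evaluates the speed deficit 1 − v/c, the Lorentz factor and the energy available in a collision with a stationary nitrogen nucleus, assuming the primary was a proton with a kinetic energy of 3.2×10^20 eV; the rest energies and the 14 TeV figure used for the LHC comparison are rounded assumptions, and the available-energy formula is the one given above.

import math

EV_TO_J = 1.602176634e-19   # joules per eV
E_K = 3.2e20                # kinetic energy of the primary, eV (320 EeV)
M_P = 938.272e6             # proton rest energy, eV
M_N = 13.04e9               # approximate nitrogen-14 rest energy, eV

E_total = E_K + M_P                 # total energy of the assumed proton
gamma = E_total / M_P               # Lorentz factor

# 1 - v/c, computed without catastrophic cancellation:
# 1 - sqrt(1 - x) = x / (1 + sqrt(1 - x)), with x = (m_p c^2 / E_total)^2
x = (M_P / E_total) ** 2
speed_deficit = x / (1.0 + math.sqrt(1.0 - x))

# Energy available when hitting a stationary nucleus of rest energy M_N (all in eV):
# sqrt(2 * E_K * M_N + (M_P + M_N)^2) - (M_P + M_N)
E_avail = math.sqrt(2.0 * E_K * M_N + (M_P + M_N) ** 2) - (M_P + M_N)

print(f"kinetic energy      : {E_K * EV_TO_J:.0f} J")
print(f"Lorentz factor      : {gamma:.2e}")
print(f"1 - v/c             : {speed_deficit:.1e}")
# 14 TeV assumed as the LHC's collision energy for the rough comparison.
print(f"available CM energy : {E_avail / 1e12:.0f} TeV (~{E_avail / 14e12:.0f}x LHC)")

Run as-is, this reproduces the roughly 50 J of kinetic energy and the roughly 2900 TeV of available collision energy mentioned above; the printed speed deficit and Lorentz factor rest only on the assumption that the primary was a proton.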
High energy, but far below the Planck scale. While the particle's energy was higher than anything achieved in terrestrial accelerators, it was still about 40 million times "lower" than the Planck energy. Particles of that energy would be required in order to expose effects on the Planck scale. A proton with that much energy would travel times closer to the speed of light than the Oh-My-God particle did. Observed in Earth's reference frame, it would take about ( times the current age of the universe) for a photon to overtake a Planck energy proton with a 1 cm lead. Later similar events. Since the first observation, hundreds of similar events (energy or greater) have been recorded, confirming the phenomenon. These ultra-high-energy cosmic ray particles are very rare; the energy of most cosmic ray particles is between 10^7 eV and 10^10 eV. More recent studies using the Telescope Array Project have suggested a source of the particles within a 20 degree radius "warm spot" in the direction of the constellation Ursa Major. The Amaterasu particle, named after the sun goddess in Japanese mythology, was detected in 2021 and later identified in 2023, using the Telescope Array observatory in Utah, United States. It had an energy exceeding 240 exa-electron volts ( eV). This particle appears to have emerged from the Local Void, an empty area of space bordering the Milky Way galaxy. It carried an amount of energy comparable to that of a brick dropped from waist height. No promising astronomical object matching the direction from which the cosmic ray arrived has been identified. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "m_\\mathrm{p}" }, { "math_id": 1, "text": "E_\\mathrm{K}" }, { "math_id": 2, "text": "\\sqrt{1-[m_\\mathrm{p}c^2/(E_\\mathrm{K}+m_\\mathrm{p}c^2)]^2}" }, { "math_id": 3, "text": "m_\\mathrm{p}c^2" }, { "math_id": 4, "text": "m_\\mathrm{t}" }, { "math_id": 5, "text": "\\sqrt{ 2E_\\mathrm{K}m_\\mathrm{t}c^2+(m_\\mathrm{p}+m_\\mathrm{t})^2c^4 }-(m_\\mathrm{p}+m_\\mathrm{t})c^2" }, { "math_id": 6, "text": "\\sqrt{ 2E_\\mathrm{K}m_\\mathrm{t}c^2}." } ]
https://en.wikipedia.org/wiki?curid=675863
67587840
COVID-19 CPI
Brazilian parliamentary commission of inquiry The COVID-19 CPI, also known as the Pandemic CPI, Coronavirus CPI, or simply COVID CPI, was a parliamentary inquiry commission in Brazil, created with the goal of investigating alleged omissions and irregularities in federal government actions during the COVID-19 pandemic in Brazil. It was created on April 13, 2021 and officially installed in the Brazilian Senate on April 27, 2021. It ended with the vote on the final report on October 26, 2021. Origin. Senator Randolfe Rodrigues (REDE-AP) was the creator of the commission, prompted by the very serious health crisis in the state of Amazonas; his text highlighted alleged mistakes and omissions of the Federal Government in the health protocols under its responsibility. Rodrigues filed a request for a CPI on February 4, 2021, to investigate the points listed. Goals. The objective of the request presented by Randolfe Rodrigues is to examine the actions of the federal government in confronting the pandemic, in a scenario in which Brazil ranks second in the world in number of deaths from COVID-19. The CPI's main focus is the allegation that the federal government opposed sanitary measures such as social distancing and the mandatory use of face masks. The government is also accused of delays in the purchase of vaccines, as well as of promoting ineffective treatments and using public money to buy drugs without scientific proof of effectiveness. The dismissals of health ministers such as Luiz Henrique Mandetta and Nelson Teich will also be the subject of clarification, as will the cause of the lack of oxygen in Manaus hospitals, among other topics. Report from Federal Court of Accounts. According to the news portal UOL, the Bolsonaro government had not set aside money in 2021 for the Ministry of Health and, until March, had not passed on resources to states and municipalities to deal with the COVID-19 pandemic. This is what the report of the Federal Court of Accounts (TCU) pointed out; its documents will be attached to the COVID-19 CPI. According to the court inspectors, there were no appropriations for expenses in fighting the pandemic in the 2021 Budget Law prepared by the Government. In response, the Ministry of Economy said in a statement: "this report has not yet been considered by the Plenary of the Court and has not resulted in a judgment, because other ministers have requested views". The Ministry of Health declined to comment. In another report released earlier, the TCU accused the Bolsonaro government of altering documents to evade responsibility for leading the actions to combat and confront the pandemic, as well as of failing to monitor the supply of drugs and intubation kits to hospitals. Investigation lines. Herd immunity. Herd immunity by infection (also called vaccine-free herd immunity in the CPI) is a thesis contrary to the consensus of the world scientific community, based on the idea that antibodies can be acquired by natural infection and collective immunity achieved without vaccines. Herd immunity from naturally acquired infection was even considered by authorities in Brazil, the UK, and the US as a strategy to mitigate the COVID-19 pandemic, but infectious disease specialists have made it clear that this is not a hypothesis that can be seriously considered and that vaccination is the only acceptable path toward possible herd immunity. 
One line of investigation of the CPI is whether the Federal Government through a supposed "parallel ministry" adopted herd immunity by infection as a strategy in the fight against COVID-19 due to the repeated statements issued by President Jair Bolsonaro and Osmar Terra about herd immunization and defending that the virus, already underway, would not be stopped by isolation, and that the epidemic would only end after 70% of the population was infected. In statements to the CPI, General Eduardo Pazuello confirmed having been contacted superficially by Osmar Terra about the thesis that the health crisis would naturally cease after this percentage of people were infected. Suspicions of corruption. In its third phase the CPI focuses on suspicions of corruption involving the federal government and private companies to mitigate the COVID-19 pandemic. Along these lines, the investigations into the purchase of the Indian vaccine Covaxin are considered to have brought the most relevant among all the revelations made by the commission of inquiry. One of the reasons is that the episode directly involves President Jair Bolsonaro to a possible crime of prevarication which is an infraction provided for in article 319 of the Brazilian Penal Code. At this stage, Bolsonaro's supporters have demanded that the CPI be extended to investigate also possible cases of corruption in the Northeast Consortium. Decision from the Supreme Federal Court. Writ of Security. Minister Luís Roberto Barroso of the Supreme Federal Court (STF), in a letter filed by Senators Jorge Kajuru (PODE-GO) and Alessandro Vieira (Cidadania-SE), ordered Senate President Rodrigo Pacheco (DEM-MG) to create the commission because the number of signatures was more than necessary. In the decision, the minister fomented the thesis, arguing that it would be the responsibility of the president of the Board of the Legislative House to define the agenda and priorities, but that such prerogative could not hurt the constitutional right of the third of the parliamentarians in favor of the creation of the CPI. The president of the Senate accepted the request, but criticized what he called Barroso's decision as "political platform for 2022," stating that the CPI could have a role in anticipating the political-electoral discussion of 2022, which would not be appropriate for the moment the nation is going through. The next day, by 10 votes to 1, the STF ruled that the COVID-19 CPI is constitutional. Reactions. In response to the Supreme Court's validation of the CPI, impeachment proceedings against the Court's ministers gained momentum in the Senate. Two senators filed a request for impeachment against Luís Roberto Barroso because of the monocratic decision. There has also been talk of a bill on limiting these single decrees. A day after Barroso's decision, President Jair Bolsonaro in a conversation with supporters, and in a post on Twitter criticized the minister's monocratic decision on the creation of the CPI. He typed, saying, "Barroso omits himself by not determining the Senate to install impeachment proceedings against a Supreme Minister, even at the request of more than 3 million Brazilians. He lacks moral courage and has inappropriate political militancy". Hours later, the STF released a press release on the suit saying, "The ministers that make up the Court make decisions in accordance with the Constitution and the laws." 
It also says that, "within the democratic state of law, challenges to them (decisions) must be made in the proper appeals, contributing to the republican spirit prevailing in our country." Senator Jorge Kajuru disclosed conversations between him and President Jair Bolsonaro to BandNews FM radio between April 11 and 12. In the wiretap, the president asked that the CPI also include investigations against mayors and governors. In it, there were also threats, attacks, and offenses against Senator Randolfe Rodrigues. Habeas Corpus. The STF was asked to rule on a request by the Federal Attorney General's Office to allow former minister General Eduardo Pazuello to remain silent during testimony at the CPI and also to ensure that he would be immune from certain measures, such as imprisonment in case of non-compliance. The rapporteur of the case, minister Ricardo Lewandowski, granted the habeas corpus order. The minister also denied the habeas corpus order, filed by the defense of Mayra Pinheiro, requesting the possibility of remaining silent and safe conduct to eventual arrest. However, after a request for reconsideration, he authorized her to remain silent about the facts that occurred between December 2020 and January 2021. Argument of Noncompliance with a Fundamental Precept. Governors from 17 states and the Federal District filed, in the Supreme Federal Court, a plea of breach of fundamental precept (ADPF) with a request for an injunction to suspend the summoning of governors to testify in the Parliamentary Inquiry Commission. The Rapporteur of the action, minister Rosa Weber, of the Federal Supreme Court, determined that the president of the COVID CPI, senator Omar Aziz (PSD-AM) should present information within five days, and that the Attorney General of the Union and the Attorney General of the Republic should also manifest within five days. Members. Female group. Because no female senators were chosen for the composition of the CPI, neither as full or substitutes members, the Federal Senate women's group made an agreement within the commission so that one of them can ask questions to the CPI's witnesses; however, they will not be able to present requests or vote, a possibility that is restricted only to CPI members. A rotation format was chosen, so that there is alternation among the female senators. The female senators that make up the rotation are: Summoned and invited. During the course of the work, they were invited and summoned, to be questioned, as witnesses or investigated: Reaction of non-convoked involved parties. President Jair Bolsonaro. Quoted several times during the investigation, President Jair Bolsonaro was placed in complicated situations, as when he was quoted by the Miranda brothers (Luis Miranda and Luis Ricardo Miranda) during the investigation, they said that Bolsonaro was already aware of the corruption in the purchase of vaccines from Covaxin. This created a tension that justifies a criminal action against the president for prevarication, understanding that there was corruption but no action was taken. The president has always been very open about all issues pertaining to his person, especially when talking to supporters in his "enclosure" in Brasilia, but after the denunciation of the Miranda brothers, Bolsonaro did not defend himself on the accusations, he just started to attack the CPI trying to discredit it by accusing members of the investigation of corruption without reliable evidence. 
When provoked by the head of the CPI, Renan Calheiros, to present an answer about the accusations and forced by a letter sent by the senators, Bolsonaro had an invasive reaction and said the following: "I don't give a shit for the CPI. What good has it done for Brazil?" Then he added "I ignored you, there will be no answer". Bolsonaro is not legally obliged to respond to the senators, thus being an act that would only show disrespect and immorality. The president can be summoned to testify at a CPI, but the bureaucratic process for this to occur is extremely complex. After the president's inelegant answers, it is possible that the senators will summon the president to testify. If summoned, Bolsonaro may be able to obtain in court the right to remain silent, but he will have to testify (even if he doesn't say anything) and he will not be able to lie during the investigation. Armed Forces. The Armed Forces were also mentioned in some key moments of the investigation and with possible members in a list of future investigated such as Eduardo Pazuello who is an Army General, Élcio Franco who is an Army Reserve Colonel, Marcelo Blanco who is an Army Reserve Colonel, Bento Pires who is also a Colonel. And especially Roberto Dias, who is accused in a corruption scheme in the sale of overpriced vaccines. After a major repercussion of military personnel involved in corruption schemes, the Armed Forces issued a note with an intimidating tone in which they promise a "tougher response" if the CPI continues to reveal corruption schemes within them. CPI president Omar Aziz, an ex-military man, reacted to the Armed Forces' note stating that he will not allow them to intimidate the work of the commission, saying the following: "It has been many years since Brazil has seen members of the rotten side of the Armed Forces involved in corruption within the government." After that, he added: "The note is very disproportionate. Make a thousand notes against me, but don't intimidate me. If you intimidate me, you intimidate this House here." History. Background. On April 19, 2021, federal deputy Carla Zambelli appealed to the Supreme Court to prevent cases of politicians with evidence of suspicion or impediment from being members of the CPI. On April 26, 2021, the Casa Civil forwarded to the ministries a list with 23 accusations against the Federal Government. Minister Luiz Eduardo Ramos sent an e-mail to 13 ministries regarding 23 accusations and criticisms about the performance in fighting the COVID-19 pandemic in Brazil. In the list were accusations of early treatment, negligence of the government in buying vaccines, militarization of the Ministry of Health, minimization of the pandemic in Brazil, lack of adoption and restrictive measures to reduce the virus contagion. In response, the Civil House said that the document was created for the ministries to prepare responses. Installation and First Week. At the beginning of the committee, the senators voted on the requirement of 115 requests for information directed to various bodies, companies, and state governments. Second Week. The first to be heard by the CPI, on May 4, was Luiz Henrique Mandetta, the first health minister of the Jair Bolsonaro government. Mandetta delivered a letter to the CPI addressed to the president, in which he asked for the reconsideration of the position adopted by the government, "since the adoption of measures to the contrary could generate collapse of the health system and very serious consequences to the health of the population". 
The Bolsonaro government's second health minister, Nelson Teich, was heard on Wednesday, May 5. During the six hours of his testimony, the former minister explained the reasons for his resignation, motivated by disagreements with government policy. Teich said that if the government continued with its actions in the ministry, Brazil would have more vaccines. The next day, the fourth and current Minister of Health, Marcelo Queiroga, gave a statement that was considered evasive and ended up irritating the senators. Queiroga denied presidential pressure on technical issues, acknowledged the importance of vaccination and social distancing over "alternative treatments," and said he was unaware of "evidence of chemical warfare in China." The day before, President Bolsonaro had suggested that COVID-19 would have arisen in a chemical warfare action in China; later, the possibility would arise that the statement itself would be the subject of the CPI investigation. The senators consider the minister's answers insufficient, and planned to call him back. The third Health Minister, General Eduardo Pazuello, was initially scheduled to appear before the CPI on the 5th, but was rescheduled for the 19th, after the former minister revealed that he has had recent contact with people who have contracted COVID-19. According to the vice-president of the commission, Senator Randolfe Rodrigues, Pazuello sought to avoid being heard as a witness by the CPI. On April 13, 2021, the Federal Attorney General's Office sent a letter to the STF so that the former minister can remain silent during testimony at the CPI and also ensure that he is immune from certain measures, such as imprisonment in case of non-compliance. Hours earlier, a lawyer requested habeas corpus for Pazuello from the STF to prevent him from being arrested in case of non-compliance. The minister chosen to judge both cases was Ricardo Lewandowski. A day later, Lewandowski granted habeas corpus allowing the former minister to remain silent and avoid prison. Third week. On May 11, the testimony of the CEO Brazilian Health Regulatory Agency (Anvisa), Admiral Antonio Barra Torres, took place. In his statement, said that a series of statements by President Bolsonaro went against the recommendations of the agency. Barra Torres denied government pressure for the approval of chloroquine to combat COVID-19, but confirmed the existence of a proposal to change the drug's package insert at a meeting of ministers. Present at this meeting was physician Nise Yamaguchi, pointed out by Barra Torres as one of the heads of the supposed "Parallel Ministry of Health," a group independent of the administration of Minister Mandetta. On May 12, 2021, it was the turn of former Secretary of the Special Secretariat of Social Communication (SECOM) Fábio Wajngarten to testify at the COVID-19 CPI. During the inquiry, Wajngarten denied that the federal government had made attacks on opponents through a parallel communications secretariat, popularly known as the "hate cabinet"; also denying that Councilman Carlos Bolsonaro (Republicans-RJ) had influence in the secretariat during his term. Finally, Wajngarten said that the Ministry of Health has not been "incompetent" in the vaccine procurement process, during an interview with Veja magazine. However, the rapporteur of the CPI, Senator Renan Calheiros, considered Wajngarten's statements untrue and called for his arrest. 
Calheiros referred to a campaign with the title: "O Brasil não pode parar" ("Brazil can't stop"), broadcast in 2020 on the Federal Government's official website. Another lie he pointed out was during the Veja interview. The magazine itself released a 30-second audio of the interview. Senator Marcos Rogério rebuked him, saying that Renan Calheiros had abused his authority. Minutes later, Senator Flávio Bolsonaro (Republicans-RJ) argued with Calheiros over the arrest request. The president of the CPI Omar Aziz delivered to the Federal Public Ministry (MPF) the testimony of Wajngarten. The reason was the alleged lies in the questioning as a witness. On May 13, 2021, it was the turn of Carlos Murillo, a representative of the Pfizer laboratory, to testify as a witness at the COVID-19 CPI. The executive confirmed that councilman Carlos Bolsonaro and special advisor to the presidency Filipe Martins were present at the meeting dealing with the purchase of 70 million doses. The information about her was passed on to Shirley Meschike during Murillo's testimony to the CPI. He said he could not confirm the presence of the two in meetings he did not attend. He also said that the company only dealt with "authorities represented by the Government". CPI rapporteur Renan Calheiros threatened to summon two Pfizer representatives to clarify the situation. Two hours later, the general manager of the pharmaceutical company confirmed the presence of Carlos and Filipe. Also in the interrogation, Murillo said that the Federal Government has not expressed itself about the offer of approximately 1.5 million doses still for 2020. He said that the laboratory presented 3 proposals of offers to the Government: of 30 or 70 million doses each shipment. The negotiations took place on August 14, 18, and 26, 2020. In a new negotiation, he also said that there were offers for the end of 2020 and all, at 70 million doses each. On November 11 and November 24, to be precise. Fourth week. On May 18, 2021, it was former Foreign Minister Ernesto Araújo's turn to testify at the COVID-19 CPI. He denied that he made massive attacks on China. But the president of the CPI accused him of lying in the interrogation and asked that the country not punish Brazil with the delay in deliveries of active pharmaceutical ingredients (APIs) for vaccines. Senator Kátia Abreu (PP-TO) called the former chancellor a "compulsive denier". Araújo also reported in the CPI that the country's entry in the Covax Facility cost the Union R$2.5 billion (US$ 1) and that it was up to the Ministry of Health to adhere to the "minimum quantity of doses" in the consortium. About Pfizer's vaccines, Araújo said that he knew of the laboratory's intention to sell them to Brazil in September 2020, where Nestor Forster (Brazil's ambassador to the United States) was the recipient of this supposed letter and had informed the former chancellor 2 days later. Also in the CPI, Araújo said he was unaware of the existence of a "parallel cabinet in Health". Renan Calheiros then suggested that the philosopher and writer Olavo de Carvalho was a member of this group and asked him about it. Araújo limited himself to saying that he was a friend of his, but that did not mean the possibility of interference in the ministry. On May 19, 2021, it was the turn of Eduardo Pazuello, former Minister of Health to testify at the COVID-19 CPI. 
In the first question asked by rapporteur Renan Calheiros, Pazuello recalled the Supreme Court's decision, which gave autonomy to the 3 executive powers (states, municipalities, and the Union being the latter to conduct measures to confront the pandemic) and said that the mandate "limited" the actions of the Federal Government and cited several numbers of transfers to states and municipalities for the Health Secretariats. Also in the interrogation, he said that the ministry adopted "total transparency" in the accounts and that the communication strategy was aimed at the most vulnerable. Pazuello also said in his statement that he always had autonomy to make decisions in the post. He also said that his relationship with President Jair Bolsonaro has always been one of "simple friendship" without being close to the president, and that his nomination was made with official generals who work with the government. About the purchases of CoronaVac vaccines, the general did not answer about the president's misdeeds for the purchase of 46 million doses. In the statement, Pazuello also said that Bolsonaro listened to other people not part of the ministry but denied that there was a "parallel cabinet in Health". Renan Calheiros questioned the former minister about this "parallel counseling". The military said he would speak to put "a stone in this matter". About the delay in acquiring the vaccines from Pfizer, Pazuello said he received contrary recommendations from the TCU, the AGU and the CGU about the purchase of the immunizer. The Federal Audit Court (TCU) refuted Pazuello's answers about Pfizer and informed in a note to the CPI that no such opinion was sent and that the purchase of the doses was urgent due to the health crisis caused by COVID-19. The former minister then corrected himself and referred to the Controladoria-Geral da União report; however, Eduardo Braga refuted this. About the health crisis in Amazonas, Pazuello reported that in January he delivered oxygen. According to him, he was fully aware of the collapse on January 10 and would have delivered it six days before people died from the lack of oxygen. This statement angered the senators. Renan Calheiros claimed that the former minister "lied a lot" in the deposition arguing that he maneuvered to avoid answering the reported questions. Senator Eduardo Braga, on the other hand, harshly criticized Pazuello's statements about the collapse due to the lack of oxygen in Amazonas. He said that the deaths in the state occurred for several days at the beginning of the year and Manaus was the city most affected by the second wave. He also criticized the government's refusal of oxygen donations from Venezuela by not sending a Brazilian Air Force plane to the country. On the second day of questioning, Pazuello finally said about the refusal to purchase the CoronaVac vaccine. He said that the president "never" spoke to him personally about purchasing the Chinese immunizer. According to him, the delay happened because of the lack of an interim measure that would make the purchase possible. The statement irritated the senators. Senator Zenaide Maia (PROS-RN) said that Pazuello assumed responsibility for the vaccines even though he had witnessed the intention to buy 46 million doses from CONASS. Otto Alencar, said that yesterday Pazuello had said that the president had oriented him to buy the material. But he didn't bought it, since at that moment he was subordinated to Jair Bolsonaro. 
Pazuello sought to shield the president over his public statements opposing social distancing, mask wearing, and the use of 70% alcohol. Alessandro Vieira asked the general whether he had not tried to convince the president to change his mind about such statements. The former minister replied that "there is no scientific proof of the benefits of the use of masks, 70% alcohol, and social distancing." He also described such preventive measures as "really necessary". About the health collapse in Amazonas, Pazuello blamed the company White Martins and the municipal and state health secretariats for the chaos. According to him, the ministry was "proactive" about the lack of oxygen. In a note released to the press, the Amazonas state health secretariat said it had been monitoring the demand for gas and that, in December 2020, White Martins had not raised any logistical difficulties in maintaining the supply to hospitals. According to the G1 portal, on December 24, 2020, the company said in a note that it was monitoring the abnormal growth in demand for gas in Manaus and had requested information from the state health department about the demand. Also in the note, the company said that then Health Minister Eduardo Pazuello met with its executives and the state committee on January 11, 2021. Rapporteur Renan Calheiros said that he had "never seen a person lie so much in a CPI as Pazuello" and cited at least 15 allegedly false statements. Fifth week. On May 25, 2021, it was the turn of the Health Ministry's secretary of management and labor Mayra Pinheiro (commonly called "Captain Chloroquine") to testify at the COVID-19 CPI. In the interrogation she defended "early treatment" with drugs with no proven effectiveness against SARS-CoV-2 (chloroquine, azithromycin, ivermectin), while saying that she never ordered the ministry to use the drugs and that she established the "use of safe doses" prescribed by doctors. Renan Calheiros asked her whether the president of the Republic had pressured her, and she said no. About the health collapse in Amazonas, Mayra said that the health ministry "had no responsibility" for the chaos and blamed COVID-19. About the "TrateCov" app, Pinheiro said that there was no hacking into the app but rather an "improper extraction of data." This statement contradicts Eduardo Pazuello, who said in a statement that the app "was stolen by a hacker." The platform was discontinued by the ministry on the grounds that it recommended "early treatment" of COVID even when the person had no apparent symptoms. She further accused the press of accessing the app, "leaking" the alleged flaws, and disseminating content "out of context." Rapporteur Renan Calheiros made a comment comparing the COVID-19 CPI to the Nuremberg Tribunal, which tried Nazi crimes. The statement angered senators from the governing base. The rapporteur denied that his comment had offended Jews, saying he had only "compared" two problematic situations in the context of pandemic "denialism". Soon after, Calheiros said that there were similarities between the behavior of Brazilian authorities appearing before the commission and that of Marshal Hermann Göring, considered Adolf Hitler's number two. The Brazilian Israelite Confederation (Conib) repudiated Renan's remarks about the Holocaust. Senator Randolfe Rodrigues played in the CPI an audio of Mayra harshly criticizing Fiocruz's policies and saying that all of its members were leftists. 
The Senate Legislative Police opened an investigation against a columnist from the Folha de São Paulo newspaper about a report that had been published. Senators Luis Carlos Heinze and Eduardo Girão were the initiators of the investigation. On May 27, 2021, it was the turn of Dimas Covas, president of the Butantan Foundation to testify at the COVID-19 CPI. In the interrogation, he accused President Jair Bolsonaro for the delay in negotiating the CoronaVac vaccine. At the time, Bolsonaro stated in a press conference that the immunizer would not be purchased for "personal reasons". He also said that he had negotiated with Eduardo Pazuello the negotiation of the 46 million doses in a CONASS meeting. According to Covas, the institute offered 100 million doses in October 2020 but had not received any response from Ministry representatives. Renan Calheiros released a video with statements from the President of the Republic against the vaccine. He also stated that Brazil could be the first country in the world to start the vaccination campaign, but that there were "mishaps" along the way. He also said that the call for volunteers was affected due to the "fake news" spread on social networks about CoronaVac. About the possibility of a booster dose of CoronaVac due to its low efficacy, Dimas said he sees the need for a third dose to immunize the elderly and comorbid due to SARS-CoV-2 variants. He also explained that the 28-day period between 2 doses is "ideal" to complete the vaccine stage. There was discussion between Senator Marcos Rogério and the president of the CPI Omar Aziz. Rogério asked Dimas if João Doria's statements got in the way of China's negotiations for the purchase of vaccines. Soon after, an audio of the governor of SP was released. Sixth week. On June 1, 2021, Nise Yamaguchi testified as an invited guest. She was pointed out as a member of a so-called "parallel cabinet in Health", which allegedly advised Bolsonaro to make decisions in the fight against the pandemic. Yamaguchi claimed to hold the view that it was not necessary to vaccinate the population randomly, as well as claiming that doctors who advocated early treatment, which included the use of chloroquine, were politically persecuted. She also denied that she proposed to change the chloroquine package insert to include its use in the treatment of COVID-19, contradicting the statements of Antônio Barra Torres and Mandetta. In a more tense moment of the session, one of Nise's staff was expelled from the CPI for "asking respect" for the doctor during the break, arguing with senators. On June 2, 2021, Luana Araújo testified as an invited guest, in the position of infectious disease physician, indicated to assume the portfolio of the Ministry of Health's secretary of confrontation with COVID, but her nomination was not approved. About her harsh words in 2020 against the use of medicines from the "covid kit", she explained that in Medicine the evidence takes away from the individual responsibility of the professional a value judgment about the situation and that the so-called "early treatment" is "a delirious, oddball, anachronistic and counterproductive discussion". As for her non-appointment, the doctor said that she had received no explanation and that she went to Minister Queiroga's office and was told that there was no doubt about her technical capacity, but that she would have to withdraw the appointment, which had not been approved, without further explanation. 
Senator Marcos do Val, an alternate member of the commission, used his time to question why the senators had treated infectious disease specialist Luana well, in contrast to oncologist Nise Yamaguchi, who had appeared the day before. The senator also accused her of presenting herself as the "owner of the truth" and of having no right to undervalue the work of other doctors who defend early treatment. The infectious disease specialist replied that she was not the "owner of the truth", but was representing a class of scientists and entities. Other senators supporting the president did not agree with the doctor's scientific statements. Senator Marcos Rogério used the doctor's remarks against the so-called "early treatment" to claim that she was against people going to the doctor. Luana explained that early diagnosis is different from early treatment, and clarified that prophylaxis means preventing a person from getting sick or reducing the risk of getting sick, and that in the context of the COVID-19 pandemic it refers to vaccine intervention, combined with other non-pharmacological strategies, such as mask use and social distancing. Seventh week. On June 8, Minister Queiroga testified for the second time. He took an assertive position in defense of non-pharmacological measures and vaccine use, and on the lack of scientific evidence for early treatment. The minister's scientific position displeased pro-Bolsonaro senators. Queiroga was asked by the rapporteur whether he advises the president to wear a mask. The minister replied that the guidelines are for everyone, adding that he is the minister of health and not "the president's censor". As for the withdrawal of the nomination of infectious disease specialist Luana Araújo, the minister presented a version different from the one put forward by the doctor the week before. He affirmed that the non-appointment was exclusively his decision, owing to the lack of harmony between the scientific views she defended and those of the medical profession. He added that Luana's name had been approved by the Civil House, contradicting what Luana had reported. Regarding the purchase of vaccines, Queiroga had until then not accepted the purchase of 30 million doses of CoronaVac produced by the Butantan Institute, but said that he had personally spoken to Dimas Covas, director of the institute, about the purchase. On June 9, former secretary Élcio Franco testified as a witness. The testimony began an hour late, owing to a short deliberative session for approving testimonies and requests, in which Senator Jorginho Mello asked that the lifting of telephone and telematic secrecy be voted on the following day. President Omar Aziz granted the request. Colonel Élcio denied knowledge of the so-called "shadow cabinet", but claimed to know and to have met with some of its alleged members, such as Nise Yamaguchi, Arthur Weintraub, Osmar Terra, and Carlos Wizard. The colonel was questioned mainly about the purchase of vaccines. When asked by rapporteur Renan Calheiros, Élcio gave as explanations for the delay in buying vaccines the uncertainty about their effectiveness and the lack of legislation allowing the purchase of vaccines not approved by Anvisa. The colonel called phase three of vaccine development a "vaccine graveyard," which provoked criticism, since other vaccines (e.g., Covaxin) had been purchased even before entering phase three. Senator Eduardo Braga harshly criticized this delay. 
As for the thesis of letting the virus spread freely through the population, Élcio said that the idea of herd immunity had never been discussed in the technical area, and that the severity of the pandemic was known, and there should be annual vaccination campaigns. About the purchase of vaccines from Pfizer, Élcio Franco explained the contract clauses were one of the reasons for the delay of the purchase of vaccines that even required a law that guaranteed the purchase of vaccines. Of these clauses, he highlighted the non-liability for adverse effects of the vaccine and the mandatory signature of the president. The colonel himself went public to declare that the government would not sign these so-called "unfair" clauses, and was asked by Senator Eliziane Gama if this attitude did not hinder the negotiation with Pfizer; Élcio replied that his speech is part of a government pressure against the company and that it was a business strategy of the negotiation. As for the lost opportunities to buy the Pfizer vaccine earlier and cheaper, Senator Randolfe Rodríguez questioned why only the president's signature was removed from a draft of a provisional measure that would allow the purchase of the Pfizer vaccine. Élcio confessed that the lack of consensus came from the Ministry of Economy. On June 10, the governor of Amazonas Wilson Lima was summoned as a witness and appealed to the Supreme Federal Court for a writ of habeas corpus not to appear, because he was being investigated by the Federal Police. The day was taken up by a deliberative section for voting on requests. However, the session was tumultuous due to the STF decision, suspending the deposition of Wilson Lima, and the request for breaches of secrecy. Initially, Senator Marcos Rogério interrupted the session for more than twenty minutes on the complaint that the STF's interference was unconstitutional and that everyone should be investigated. On the other hand, regarding the telephone secrecy requests, the senator was against it and asked for a question of order to withdraw the requests alleging lack of motivation and invalidity of this request. Senator and law professor Fabiano Contarato explained the validity of the request. The senator and president of the commission Omar Aziz dismissed the question and explained that the requests would be properly grounded. On June 11, microbiologist and researcher Natalia Pasternak and the sanitary doctor and former president of Anvisa, Cláudio Maierovitch, were heard. In her opening speech, Natalia used her time to explain the importance of science and the difference between laboratory studies and observational studies, focusing on the lack of solid evidence for the use of early treatment, showing studies dating back to the mid-2020s. Maierovitch showed research proving that Brazil was one of the most prepared countries for possible epidemics, and contrasted this with another study that showed Brazil as the worst country in fighting the pandemic. Senator Renan Calheiros criticized the presidential speeches on June 9 and 10 about the non-effectiveness of the vaccine and the non-use of masks. In response to Senator Randolfe Rodrigues, the researcher and the doctor stated that it was not the time to stop using masks and make isolation measures more flexible. Natalia explained that this kind of measure should be based on the number of cases of COVID and not on the number of vaccinated people. 
In response to Senator Renan Calheiros about treatments, the sanitary doctor recalled the approval of the law for the use of phosphoethanolamine to fight cancer by the then federal deputy Jair Bolsonaro in 2016. Natalia complemented that on this occasion it cost R$10 million for the state of São Paulo, and that the measure was approved by popular clamor and not by scientific motivation. Later it was shown that this chemical compound was not effective against cancer, and the law was judged unconstitutional by the STF in 2020. Criticizing the "COVID kit", Maierovitch warned about the danger of irresponsible use of antibiotics, such as azithromycin, which could lead to antibiotic resistance by bacteria. Eighth week. On June 15, the former health secretary of Amazonas was heard. The civil engineer Marcellus Campelo adopted a calm speech and did not point out errors in his management, "We did what we could with available funds". The former secretary had been asked about the oxygen shortage crisis in the state, about the trip of the entourage of doctor Mayra Pinheiro and on measures to contain the pandemic. About the imminent lack of oxygen supply in the state, both opposition and ruling senators criticized the state's inaction. Senator Eduardo Braga showed correspondence from White Martins dated the beginning of September 2020 about the company is already operating at full capacity and that it would be necessary to hire additional sources of oxygen supply; the crisis will only occur after five months. The former secretary admitted that no plant for oxygen production was purchased by the state. In response to Renan, the former secretary said that there was only a lack of oxygen in the state hospitals on January 14 and 15, 2021. At the same moment senator Eduardo Braga claimed that the former secretary was lying, just like the former health minister Pazuello and Élcio Franco lied. According to the videos shown by the senator, the oxygen crisis at least lasted until January 26, 2021. As for the restriction measures, there was a growing number of cases in the state, and the secretary was alerted in September by the Health Surveillance Foundation. The secretary admitted that the planning considered that the worst-case scenario was a repeat of the crisis of the first wave and that "only at the end of December that we started to notice that there was something different in the contamination", not knowing possible variants of the virus. Senator Otto Alencar criticized that the health portfolio had been given to a civil engineer. The former secretary said that he had a 5-phase pandemic contingency plan that was activated when "the Covid-19 ICU occupancy rate exceeded 75% of its capacity". On December 23, 2020, phase 3 of the plan was activated, and the crisis committee prepared a decree that restricted activities and the circulation of people in the state. In the following days, the decree was revoked, relaxing the sanitary measures, and the demonstrations of the population against the decree was the reason exposed by the former secretary. In the words of the engineer, "there were many demonstrations [...] that forced the government to relax the decree". This position had been harshly criticized by Senator Alessandro, explaining that the one who revokes the decree is the state governor, not the population. The June 16th session began with voting on requests. 
The session lasted one hour and six minutes and removed secrecy from part of the documents sent to the commission as "secret" in order to correct documents that were incorrectly sent with this secrecy. This request displeased Senators Marcos Rogério (DEM-RO) and Ciro Nogueira (PP-PI). Three information requests were approved, plus five requests for the lifting of telephone and bank secrecy (one of them belonging to Carlos Wizard). Three more summonses were also approved and one rejected. Protected by habeas corpus, the former governor of the state of Rio de Janeiro was heard as a guest and could not answer questions and could leave the session at any time. Former federal judge Wilson Witzel revealed alleged retaliations of the federal government to the states during the pandemic. He explained that the distancing of the state and federal government took place from 2018, in this situation the former governor called for an "impartial" investigation into the death of councilwoman Marielle Franco; "From the Marielle case, I particularly realized that the federal government began to retaliate against me. After this event, I was no longer received at the Palácio do Planalto. I had difficulty talking to ministers," said Witzel. The former governor was impeached in April 2020 and among other charges were allegations of corruption involving kickbacks paid by Social Organizations (OSs) in the health area. Witzel claims to have been the target of a larger action that wanted to remove him from power, and as evidence show that process. As for the campaign hospitals, Witzel said these were targets of persecution by RJ deputies and lack of support by the federal government. "Unfortunately, the campaign hospitals were sabotaged from beginning to end with reports stimulated by these deputies, who were doing cartwheels in RJ against social isolation." He also said that perhaps the field hospitals would not be necessary had he opened the more than 800 beds closed in federal hospitals in the state; Witzel again blamed the federal government "The federal hospitals are untouchable. If the CPI breaks the secrets of the OSs that manage the hospitals, it will find out who owns the hospitals." As for the social restriction measures, he said that there was no communication with the ministry of health, stating that "Since there was no coordination from the federal government, the issue was solved by the Supreme Court (STF). And it must be made very clear: the STF did not prevent the federal government from doing anything, but gave governors and mayors the necessary conditions for us to overcome the omission of the federal government." He affirmed that he was one of the first governors to implement social isolation measures and criticized the delay in the coming of the emergency aid and concluded: "If you ask the population to stay at home, but don't give them conditions, it is more difficult to control the pandemic". There was uproar throughout the session. Senator Flávio (Patriota-RJ), not a member of the CPI, said that the former governor "was elected by lying, deceiving the people of Rio and revealed himself after he sat in the governor's chair. After the aggressive speeches by senators Jorginho Mello (PL-SC) and Eduardo Girão (Podemos-CE), the former governor decided to close the session. However, during the session a request was made to hold a session under the secrecy of the court to provide hard evidence of the federal government's actions against the governor. 
At the June 17 session, businessman Carlos Wizard and former TCU auditor Alexandre Marques were scheduled to be questioned as witnesses. The session was suspended because the businessman did not appear to testify. As a witness, Wizard was obliged to appear, so Omar Aziz, the president of the CPI, asked the STF to compel the businessman's appearance and to retain his passport. On this occasion Aziz commented on the businessman's lack of respect toward the STF, since he had requested habeas corpus in order to be able to remain silent while testifying: "What amazes me is a citizen seeking the STF to get a habeas corpus to come to this CPI to be silent and he does not appear. So why did he go to the Supreme if he wasn't coming?" said Aziz. As for the former auditor, his testimony would be scheduled for a later date. For both witnesses, the lifting of telephone and telematic secrecy had been approved on the previous day, June 16. Wizard's defense appealed against the lifting of the secrecy. In the June 18 session, two doctors who advocate so-called "early treatment" for COVID were invited. On this occasion, members of the opposition and the CPI rapporteur chose not to participate in the hearing. Senator Marcos Rogério interrupted the speech of Senator Jorginho Mello with a point of order, asking that some senator be designated to ask questions in place of the CPI rapporteur; in response, the president of the CPI, Omar Aziz, said that the doctors were appearing neither as witnesses nor as persons under investigation and that the rapporteur had no obligation to ask questions. In response to Senator Girão, the doctors confirmed their participation in the office headed by Arthur Weintraub at the OAS, and both denied having a political conflict of interest. As for the "treatment", both doctors defend the use of more than 20 drugs beyond chloroquine. As for the media characterizing the debate as unscientific, both doctors downplayed Dr. Luana Araújo's participation in the CPI, and Alves said she was not suitable for the position in the Ministry of Health. They also criticized researcher Pasternak for working in science communication without having seen COVID-19 patients. Physician Ricardo Ariel Zimerman criticized the horizontal lockdown, saying that this measure is the opposite of social distancing. He used the example of the state of Amazonas, saying that the average household there has 7 or 8 people, so that staying at home amounts to crowding and stimulates the development of virus variants such as the formula_0 (Gamma, or P.1) variant originating in Manaus. In response to Jorginho, the doctor corrected what he had said in a video, that the use of chloroquine reduces mortality by 21%, and stated that it reduces mortality by 73%. Zimerman invoked the concepts of cherry-picking and confirmation bias to explain why some people only read articles that confirm what they already think. Zimerman also warned that the lethal dose of chloroquine is low. Doctor Francisco Eduardo Cardoso Alves said that continuing with lockdown is "insisting on the error", and used the UK as an example of a country that saw an increase in the number of cases even under lockdown. With this, Alves concluded that lockdown is "proven not to work". Alves said that the vaccines are still new and warned of the risk that they may not be effective against new variants. Alves said that "early treatment" should just be called "treatment" and that it has saved "thousands of lives." 
On this subject, he said he does not care that he has been the target of "attacks" by doctors and scientists because he "knows he is right, (...) and it is not the mockery of a fellow denialist that will make me turn back". On several occasions Alves criticized the study in Manaus, saying that it was unethical and that the dose of chloroquine used was lethal. The doctor also said that it is "fake news" that ivermectin causes liver cirrhosis or hepatitis, claiming that this suspicion exists only in Brazil. Alves argued that, in a pandemic situation, the best evidence available at the time and pharmacological plausibility should be used. Alves said that post-COVID syndrome is less frequent among people who adopted the "early treatment". In response to Omar Aziz, doctor Alves vehemently criticized the indiscriminate use of the so-called "COVID kit," and argued that treatment should be individualized. Both doctors criticized the prescription of "early treatment" outside the hospital and/or prior to contamination. They also criticized the pursuit of herd immunity through contagion, and Zimerman encouraged immunization through vaccination. Ninth week. Over the weekend of June 19 and 20, Brazil passed the mark of 500,000 COVID-19 deaths. Out of respect, the CPI senators held a minute of silence. Senators Randolfe Rodrigues, Eliziane Gama, Humberto Costa, Rogerio Carvalho, and Otto Alencar held up mourning signs reading "500 thousand lives", "responsibility", and "vaccine". On Tuesday, June 22, congressman Osmar Terra was invited. The congressman was invited because of his presence in the supposed "parallel cabinet" and because of his public positions minimizing the pandemic. Osmar denied the existence of the cabinet, saying that there was only one meeting with other doctors to which he was invited. He denied having put forward a proposal to let the population "be freely contaminated"; however, he went on to state that quarantine and lockdown measures have no effect in fighting the pandemic. As for the erroneous predictions he made in 2020, when he claimed that there would be no more than 800 deaths and no variants of the virus, the deputy said: "The predictions I made were not based on an apocalyptic mathematical study like the one from Imperial College." On the 23rd, Francisco Emerson Maximiano did not attend the CPI for health reasons. On the 24th, Jurema Werneck and Pedro Hallal were invited to the CPI. A graph presented by Pedro showed that those most affected by the pandemic are Black people and, proportionally, Indigenous peoples. In 2020 he was leading the EPICOVID research study, which was cancelled after these results were presented. The study was not replaced by another in 2020. The epidemiologist showed that 400,000 deaths could have been avoided with a "median performance" in fighting the pandemic. Of those, about 95,500 deaths could have been avoided through vaccination if the purchase of the CoronaVac and Pfizer vaccines had not been delayed. On Friday the 25th, brothers Luiz Miranda (DEM-DF) and Luis Ricardo Miranda were heard. The contract for the purchase of the Indian vaccine Covaxin was on the agenda. During his testimony, Ricardo Miranda refuted Onyx Lorenzoni's accusations. After becoming suspicious of the purchase, the deputy and the civil servant claim to have met with President Bolsonaro. Tenth week. On June 29, the CPI heard from Fausto Junior, a state deputy in Amazonas and rapporteur of the state CPI that investigated crimes in health care in the state. 
The deputy was questioned about the state CPI's failure to indict or summon Governor Wilson Lima. Among the reasons given, the deputy said that the state CPI was not allowed to investigate the governor and that an investigation was already being conducted by the police. The explanation did not convince the senators, and senator Soraya Thronicke (PSL-MS) confronted him, saying that the responsibility for investigating governors lies with the state legislature. Senator Omar Aziz affirmed that the non-indictment of the governor of Amazonas was due to Junior's personal interests; "I will point out the reason why the congressman did not indict the governor of Amazonas, and the reason is very big", said Aziz. On June 30, Wizard was summoned as a witness. Protected by habeas corpus, the billionaire businessman decided to remain silent. The day was marked by biblical quotes from some senators. Eliziane Gama contrasted the posture of the businessman, for example his attempt to buy vaccines for the private sector, with the trajectory of Christ; "Jesus was on the side of the poor, orphans, widows, and the excluded," said the senator. During the day the senators showed several videos of the businessman, among which the most shown was a video of Wizard laughing about five deaths in Porto Feliz. Once again, Francisco Emerson Maximiano did not appear at the CPI: his testimony, scheduled for July 1, was postponed, and Luiz Paulo Dominghetti Pereira was called in his place, owing to a complaint Dominghetti had made to the newspaper Folha de São Paulo about a request for a bribe of US$1 per dose in the purchase of 400 million doses of AstraZeneca's vaccine. On July 1, Minas Gerais military police corporal and businessman Luiz Paulo Dominghetti Pereira testified. He represented the Davati company in the vaccine negotiations with the Ministry of Health. He stated that the bribe request was made by Roberto Ferreira Dias, Colonel Blanco, and Élcio Franco. Roberto Dias was the then director of Logistics at the Ministry of Health; Deputy Natália Bonavides (PT-RN) forwarded to the STF a request for an investigation of Dias for criminal association, passive corruption, and administrative advocacy. In the wake of the scandal, Dias was removed from his post. The alleged bribe request occurred in a restaurant in a shopping mall in Brasília, where 400 million doses of AstraZeneca were being negotiated. At the time, February 25, 2021, the market price was US$3.50 per dose, and Dias allegedly asked for a bribe of US$1.00 per dose. Dominghetti could not explain how Davati would have been able to deliver that quantity of vaccines and pointed to Herman, the company's owner, as the one responsible. Dominghetti revealed that Reverend Amilton Gomes de Paula was his contact with the health ministry. On February 22, 2021, the reverend met with Dominghetti and air force officer Hardaleson Araújo de Oliveira. 
During his testimony, Dominghetti reported that parliamentarians had sought out the CEO of Davati in Brazil, Cristiano Alberto Carvalho, to broker the purchase of vaccines. The deponent even played an audio recording of deputy Luis Miranda, the same deputy who had testified to the CPI a week earlier; the audio concerned the negotiation of some product with Cristiano, and Dominghetti could not say whether or not the negotiations were about vaccines. Rapporteur Renan Calheiros asked for the seller's cell phone to be seized for investigation. Luis Miranda gave a press conference and spoke with the chairman of the commission, Omar Aziz, together with Senators Bezerra and Marcos do Val, outside the CPI, and said that the audio had been tampered with, because it concerned negotiations made in 2020 about buying gloves. After the tampering was verified, Dominghetti said he did not know that the audio Cristiano Carvalho had sent him had been edited. Eleventh week. On July 6, contract inspector Regina Célia of the Ministry of Health testified. She was nominated for the position by Ricardo Barros, but denied knowing him. During the deposition she was questioned about approving the contract for the Indian Covaxin vaccine. In response, the inspector said that she "didn't find anything atypical". As for the invoice listing a third company, Madison Biotech of Singapore, to receive the money, the inspector attributed responsibility to the import sector for not having pointed out this error. On July 7, the former director of Logistics at the Ministry of Health, Roberto Dias, was heard. In his statement, Dias said that he did not negotiate vaccines and that this responsibility lay with executive secretary Élcio Franco. As for the meeting with Dominghetti in which vaccines were negotiated, Dias denied having invited him. President Omar Aziz ordered the arrest of the deponent for false testimony, claiming that he had enough evidence to show that the meeting with Dominghetti had been planned. The next day, Roberto Dias was released after paying bail of R$1,100.00. On July 8, Francieli Fantinato, the former coordinator of the National Immunization Program (PNI), testified as a person under investigation. During her testimony, the rapporteur changed her status from investigated to witness. The civil servant explained that the immunization program "cannot make a successful campaign without vaccines and without communication, without an effective publicity campaign". Senator Alessandro Vieira asked what other reasons had led her to resign from her position; Francieli reported that pressure from several municipalities to change the order of the priority groups "brought a difficulty to the campaign". The former coordinator revealed that the executive secretary, Colonel Élcio Franco, refused to include the incarcerated population in the PNI; she reported that "If it is taken out, it will go out without the program's official endorsement" and that "The one who asked to take out the deprived-of-liberty group was Colonel Élcio". As for the purchase of vaccines, the PNI's first request was made on June 19, 2020; she reported that it did not specify a vaccine supplier, requiring only "satisfactory results and approval by Anvisa". She added, however, that the negotiations took place in Élcio's executive secretariat. 
As for the government having adhered to the minimum quantity of 10% in the Covax consortium, the PNI coordinator reported having requested a larger quantity of vaccines than agreed; she said "we needed to vaccinate within the scenarios or 55% of the population up to 95%, in a scenario of uncertainty". At the time, Franciele questioned the executive secretary and Élcio Franco replied that "we can't put all our eggs in one basket". On July 9, the CPI heard William Santana, technical consultant for the Ministry of Health's import sector. Earlier in the CPI, Regina Célia pointed out that it was William who pointed out the discrepancies between the invoice and the Covaxin contract. William confirmed that he saw errors in the invoices and asked for their correction. Twelfth week. On July 13, 2021, the representative of Precisa Medicamentos Emanuela Medrades appeared before the COVID-19 CPI, but she remained silent throughout the session, claiming that she herself had already made statements to the Federal Police. The president of the CPI Omar Aziz suspended the hearing temporarily and the president of the STF Luiz Fux said that the deponent can only remain silent for questions that might embarrass her. In the evening of the same day, the session was resumed and again Emanuela did not answer the senators' questions. The president of the CPI then rescheduled her return to the commission for July 14. She claimed exhaustion and tiredness. On the 14th, Emanuela denied that the contract with Bharat Biotech started to be inspected late and reported that only the first dates were met; however, she contradicted herself. Regina Célia, already questioned in the commission, said that she only brokered the inspection of the COVAXIN contract on March 22. Medrades, on the other hand, said that she had followed up much earlier, on March 3. In the session, Medrades denied that Precisa offered the value of US$10 per dose of vaccines, and that the Ministry of Health opted to pay US$15 instead. In the contract signed by the government, the 20 million doses were not delivered, nothing was paid, and the contract was temporarily suspended. She also said that the first invoice of the COVAXIN vaccine was shipped with payment in advance, as it was a standard that the manufacturer Bharat practiced worldwide. The vaccine was the most expensive vaccine contracted by the government so far. About the negotiations of the vaccine, Emanuela said she dealt with Élcio Franco "most of the negotiations". Still in the interrogation she said that the delivery of the document was made on March 22, whose information contradicts the speech of the Miranda brothers and William Amorim, who informed the date to be on March 18, and Onyx Lorenzoni, who informed to be on the 19th. She suggested a confrontation. On July 15, Cristiano Carvalho, representative of Davati Medical Supply, testified at the COVID-19 CPI as a witness. He said that the "vaccine salesman", police officer Luiz Paulo Dominghetti, had approached him in January of this year to discuss the issue. He also said that he didn't know Dominghetti until then. This version contradicts the police officer's version, who claimed in the CPI that Cristiano had approached him first. Cristiano denied an alleged bribe payment to Davati for the purchase of the vaccines. He had said that "it was an opportunity for the vaccine issue" and, induced by Dominghetti, "he embarked on that journey". 
He also said that Colonel Blanco negotiated the purchase of 400 million doses from Oxford/AstraZeneca through a "commission". Also during the hearing, Carvalho said that at least eight people mediated the negotiation of the vaccines; of those on the list disclosed to the CPI, six are military personnel. He also reported that the meeting with the Ministry of Health was brokered by Reverend Amilton Gomes de Paula (owner of the Secretariat of Humanitarian Affairs) and Colonel Hélcio Bruno (of the Força Brasil Institute). The organizations involved are private, and these individuals do not hold public office.
[ { "math_id": 0, "text": "\\gamma" } ]
https://en.wikipedia.org/wiki?curid=67587840
6759
Context-free grammar
Type of formal grammar In formal language theory, a context-free grammar (CFG) is a formal grammar whose production rules can be applied to a nonterminal symbol regardless of its context. In particular, in a context-free grammar, each production rule is of the form formula_1 with formula_2 a "single" nonterminal symbol, and formula_3 a string of terminals and/or nonterminals (formula_3 can be empty). Regardless of which symbols surround it, the single nonterminal formula_2 on the left hand side can always be replaced by formula_3 on the right hand side. This distinguishes it from a context-sensitive grammar, which can have production rules in the form formula_4 with formula_2 a nonterminal symbol and formula_3, formula_5, and formula_6 strings of terminal and/or nonterminal symbols. A formal grammar is essentially a set of production rules that describe all possible strings in a given formal language. Production rules are simple replacements. For example, the first rule in the picture, formula_7 replaces formula_0 with formula_8. There can be multiple replacement rules for a given nonterminal symbol. The language generated by a grammar is the set of all strings of terminal symbols that can be derived, by repeated rule applications, from some particular nonterminal symbol ("start symbol"). Nonterminal symbols are used during the derivation process, but do not appear in its final result string. Languages generated by context-free grammars are known as context-free languages (CFL). Different context-free grammars can generate the same context-free language. It is important to distinguish the properties of the language (intrinsic properties) from the properties of a particular grammar (extrinsic properties). The language equality question (do two given context-free grammars generate the same language?) is undecidable. Context-free grammars arise in linguistics where they are used to describe the structure of sentences and words in a natural language, and they were invented by the linguist Noam Chomsky for this purpose. By contrast, in computer science, as the use of recursively-defined concepts increased, they were used more and more. In an early application, grammars are used to describe the structure of programming languages. In a newer application, they are used in an essential part of the Extensible Markup Language (XML) called the document type definition. In linguistics, some authors use the term phrase structure grammar to refer to context-free grammars, whereby phrase-structure grammars are distinct from dependency grammars. In computer science, a popular notation for context-free grammars is Backus–Naur form, or BNF. Background. Since at least the time of the ancient Indian scholar Pāṇini, linguists have described the grammars of languages in terms of their block structure, and described how sentences are recursively built up from smaller phrases, and eventually individual words or word elements. An essential property of these block structures is that logical units never overlap. For example, the sentence: "John, whose blue car was in the garage, walked to the grocery store." can be logically parenthesized (with the logical metasymbols [ ]) as follows: [John[, [whose [blue car]] [was [in [the garage]]],]] [walked [to [the [grocery store]]]]. A context-free grammar provides a simple and mathematically precise mechanism for describing the methods by which phrases in some natural language are built from smaller blocks, capturing the "block structure" of sentences in a natural way. 
Its simplicity makes the formalism amenable to rigorous mathematical study. Important features of natural language syntax such as agreement and reference are not part of the context-free grammar, but the basic recursive structure of sentences, the way in which clauses nest inside other clauses, and the way in which lists of adjectives and adverbs are swallowed by nouns and verbs, is described exactly. Context-free grammars are a special form of Semi-Thue systems that in their general form date back to the work of Axel Thue. The formalism of context-free grammars was developed in the mid-1950s by Noam Chomsky, and also their classification as a special type of formal grammar (which he called phrase-structure grammars). Some authors, however, reserve the term for more restricted grammars in the Chomsky hierarchy: context-sensitive grammars or context-free grammars. In a broader sense, phrase structure grammars are also known as constituency grammars. The defining trait of phrase structure grammars is thus their adherence to the constituency relation, as opposed to the dependency relation of dependency grammars. In Chomsky's generative grammar framework, the syntax of natural language was described by context-free rules combined with transformation rules. Block structure was introduced into computer programming languages by the Algol project (1957–1960), which, as a consequence, also featured a context-free grammar to describe the resulting Algol syntax. This became a standard feature of computer languages, and the notation for grammars used in concrete descriptions of computer languages came to be known as Backus–Naur form, after two members of the Algol language design committee. The "block structure" aspect that context-free grammars capture is so fundamental to grammar that the terms syntax and grammar are often identified with context-free grammar rules, especially in computer science. Formal constraints not captured by the grammar are then considered to be part of the "semantics" of the language. Context-free grammars are simple enough to allow the construction of efficient parsing algorithms that, for a given string, determine whether and how it can be generated from the grammar. An Earley parser is an example of such an algorithm, while the widely used LR and LL parsers are simpler algorithms that deal only with more restrictive subsets of context-free grammars. Formal definitions. A context-free grammar G is defined by the 4-tuple formula_9, where Production rule notation. A production rule in R is formalized mathematically as a pair formula_12, where formula_13 is a nonterminal and formula_14 is a string of variables and/or terminals; rather than using ordered pair notation, production rules are usually written using an arrow operator with formula_3 as its left hand side and β as its right hand side: formula_15. It is allowed for β to be the empty string, and in this case it is customary to denote it by ε. The form formula_16 is called an ε-production. It is common to list all right-hand sides for the same left-hand side on the same line, using | (the vertical bar) to separate them. Rules formula_17 and formula_18 can hence be written as formula_19. In this case, formula_20 and formula_21 are called the first and second alternative, respectively. Rule application. For any strings formula_22, we say u directly yields v, written as formula_23, if formula_24 with formula_13 and formula_25 such that formula_26 and formula_27. Thus, v is a result of applying the rule formula_28 to u. 
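To make the formal definition concrete, the following short Python sketch represents a grammar as the 4-tuple described above and enumerates the one-step "directly yields" relation. The representation (a Grammar named tuple, a direct_yields generator, rules stored as (head, body) pairs with the body given as a tuple of symbols, so that an ε-production has the empty tuple as its body) is an illustrative choice made for this sketch, not a standard library interface.
from typing import NamedTuple

class Grammar(NamedTuple):
    nonterminals: set   # V
    terminals: set      # Sigma
    rules: list         # R, as (head, body) pairs; body is a tuple of symbols
    start: str          # S

def direct_yields(grammar, u):
    # Enumerate every v with u => v: rewrite one nonterminal occurrence in u
    # using one production rule whose left-hand side matches that occurrence.
    for i, symbol in enumerate(u):
        if symbol in grammar.nonterminals:
            for head, body in grammar.rules:
                if head == symbol:
                    yield u[:i] + body + u[i + 1:]

# The first example grammar of the next section: S -> aSa | bSb | epsilon.
G = Grammar(
    nonterminals={"S"},
    terminals={"a", "b"},
    rules=[("S", ("a", "S", "a")), ("S", ("b", "S", "b")), ("S", ())],
    start="S",
)

for v in direct_yields(G, ("a", "S", "a")):
    print("".join(v))   # prints: aaSaa, abSba, aa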
Repetitive rule application. For any strings formula_29 we say u "yields" v or v is "derived" from u if there is a positive integer k and strings formula_30 such that formula_31. This relation is denoted formula_32, or formula_33 in some textbooks. If formula_34, the relation formula_35 holds. In other words, formula_36 and formula_37 are the reflexive transitive closure (allowing a string to yield itself) and the transitive closure (requiring at least one step) of formula_38, respectively. Context-free language. The language of a grammar formula_9 is the set formula_39 of all terminal-symbol strings derivable from the start symbol. A language L is said to be a context-free language (CFL), if there exists a CFG G, such that formula_40. Non-deterministic pushdown automata recognize exactly the context-free languages. Examples. Words concatenated with their reverse. The grammar formula_41, with productions "S" → "aSa", "S" → "bSb", "S" → ε, is context-free. It is not proper since it includes an ε-production. A typical derivation in this grammar is "S" → "aSa" → "aaSaa" → "aabSbaa" → "aabbaa". This makes it clear that formula_42. The language is context-free, however, it can be proved that it is not regular. If the productions "S" → "a", "S" → "b", are added, a context-free grammar for the set of all palindromes over the alphabet { "a", "b" } is obtained. Well-formed parentheses. The canonical example of a context-free grammar is parenthesis matching, which is representative of the general case. There are two terminal symbols "(" and ")" and one nonterminal symbol S. The production rules are "S" → "SS", "S" → ("S"), "S" → () The first rule allows the S symbol to multiply; the second rule allows the S symbol to become enclosed by matching parentheses; and the third rule terminates the recursion. Well-formed nested parentheses and square brackets. A second canonical example is two different kinds of matching nested parentheses, described by the productions: "S" → "SS" "S" → () "S" → ("S") "S" → [] "S" → ["S"] with terminal symbols [ ] ( ) and nonterminal S. The following sequence can be derived in that grammar: Matching pairs. In a context-free grammar, we can pair up characters the way we do with brackets. The simplest example: "S → aSb" "S → ab" This grammar generates the language formula_43, which is not regular (according to the pumping lemma for regular languages). The special character ε stands for the empty string. By changing the above grammar to "S → aSb" "S → ε" we obtain a grammar generating the language formula_44 instead. This differs only in that it contains the empty string while the original grammar did not. Distinct number of a's and b's. A context-free grammar for the language consisting of all strings over {a,b} containing an unequal number of a's and b's: "S → T | U" "T → VaT | VaV | TaV" "U → VbU | VbV | UbV" "V → aVbV | bVaV | ε" Here, the nonterminal T can generate all strings with more a's than b's, the nonterminal U generates all strings with more b's than a's and the nonterminal V generates all strings with an equal number of a's and b's. Omitting the third alternative in the rules for T and U does not restrict the grammar's language. Second block of b's of double size. Another example of a non-regular language is formula_45. It is context-free as it can be generated by the following context-free grammar: "S" → "bSbb" | "A" "A" → "aA" | "ε" First-order logic formulas. 
The formation rules for the terms and formulas of formal logic fit the definition of context-free grammar, except that the set of symbols may be infinite and there may be more than one start symbol. Examples of languages that are not context free. In contrast to well-formed nested parentheses and square brackets in the previous section, there is no context-free grammar for generating all sequences of two different types of parentheses, each separately balanced "disregarding the other", where the two types need not nest inside one another, for example: [ ( ] ) or [ ( ( ) ] ( ) ). The fact that this language is not context free can be proven using the pumping lemma for context-free languages and a proof by contradiction, observing that all words of the form formula_46 should belong to the language. This language belongs instead to a more general class and can be described by a conjunctive grammar, which in turn also includes other non-context-free languages, such as the language of all words of the form formula_47. Regular grammars. Every regular grammar is context-free, but not all context-free grammars are regular. The following context-free grammar, for example, is also regular. "S" → "a" "S" → "aS" "S" → "bS" The terminals here are "a" and "b", while the only nonterminal is "S". The language described is all nonempty strings of formula_48s and formula_49s that end in formula_48. This grammar is regular: no rule has more than one nonterminal in its right-hand side, and each of these nonterminals is at the same end of the right-hand side. Every regular grammar corresponds directly to a nondeterministic finite automaton, so we know that this is a regular language. Using vertical bars, the grammar above can be described more tersely as follows: "S" → "a" | "aS" | "bS" Derivations and syntax trees. A "derivation" of a string for a grammar is a sequence of grammar rule applications that transform the start symbol into the string. A derivation proves that the string belongs to the grammar's language. A derivation is fully determined by giving, for each step, the rule applied in that step and the occurrence of its left-hand side to which it is applied. For clarity, the intermediate string is usually given as well. For instance, with the grammar 1. "S" → "S" + "S", 2. "S" → 1, 3. "S" → "a", the string 1 + 1 + "a" can be derived from the start symbol "S" with the following derivation: "S" → "S" + "S" (by rule 1. on "S") → "S" + "S" + "S" (by rule 1. on the second "S") → 1 + "S" + "S" (by rule 2. on the first "S") → 1 + 1 + "S" (by rule 2. on the second "S") → 1 + 1 + "a" (by rule 3. on the third "S") Often, a strategy is followed that deterministically chooses the next nonterminal to rewrite: in a "leftmost derivation", it is always the leftmost nonterminal; in a "rightmost derivation", it is always the rightmost nonterminal. Given such a strategy, a derivation is completely determined by the sequence of rules applied. For instance, one leftmost derivation of the same string is "S" → "S" + "S" (by rule 1 on the leftmost "S") → 1 + "S" (by rule 2 on the leftmost "S") → 1 + "S" + "S" (by rule 1 on the leftmost "S") → 1 + 1 + "S" (by rule 2 on the leftmost "S") → 1 + 1 + "a" (by rule 3 on the leftmost "S"), which can be summarized as rule 1 rule 2 rule 1 rule 2 rule 3. One rightmost derivation is: "S" → "S" + "S" (by rule 1 on the rightmost "S") → "S" + "S" + "S" (by rule 1 on the rightmost "S") → "S" + "S" + "a" (by rule 3 on the rightmost "S") → "S" + 1 + "a" (by rule 2 on the rightmost "S") → 1 + 1 + "a" (by rule 2 on the rightmost "S"), which can be summarized as rule 1 rule 1 rule 3 rule 2 rule 2. 
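Because a leftmost derivation always rewrites the leftmost nonterminal, the sequence of rule numbers alone determines it, as noted above. The short Python sketch below (an illustration only; the names are not part of any standard tooling) replays the leftmost derivation of 1 + 1 + "a" from the rule sequence 1, 2, 1, 2, 3 for the grammar 1. S → S + S, 2. S → 1, 3. S → a.
RULES = {1: ["S", "+", "S"], 2: ["1"], 3: ["a"]}

def leftmost_derivation(rule_sequence):
    form = ["S"]                    # start with the start symbol
    print(" ".join(form))
    for r in rule_sequence:
        i = form.index("S")         # position of the leftmost nonterminal
        form[i:i + 1] = RULES[r]    # replace it by the rule's right-hand side
        print(f'{" ".join(form)}   (rule {r})')
    return " ".join(form)

# Replays the leftmost derivation summarised above: rule 1, rule 2, rule 1, rule 2, rule 3.
assert leftmost_derivation([1, 2, 1, 2, 3]) == "1 + 1 + a"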
The distinction between leftmost derivation and rightmost derivation is important because in most parsers the transformation of the input is defined by giving a piece of code for every grammar rule that is executed whenever the rule is applied. Therefore, it is important to know whether the parser determines a leftmost or a rightmost derivation because this determines the order in which the pieces of code will be executed. See LL parsers and LR parsers for an example. A derivation also imposes in some sense a hierarchical structure on the string that is derived. For example, if the string "1 + 1 + a" is derived according to the leftmost derivation outlined above, the structure of the string would be: { {1}"S" + { {1}"S" + {"a"}"S" }"S" }"S" where {...}"S" indicates a substring recognized as belonging to "S". This hierarchy can also be seen as a tree: This tree is called a "parse tree" or "concrete syntax tree" of the string, by contrast with the abstract syntax tree. In this case the presented leftmost and the rightmost derivations define the same parse tree; however, there is another rightmost derivation of the same string "S" → "S" + "S" (by rule 1 on the rightmost "S") → "S" + "a" (by rule 3 on the rightmost "S") → "S" + "S" + "a" (by rule 1 on the rightmost "S") → "S" + 1 + "a" (by rule 2 on the rightmost "S") → 1 + 1 + "a" (by rule 2 on the rightmost "S"), which defines a string with a different structure { { {1}"S" + {1}"S" }"S" + {"a"}"S" }"S" and a different parse tree: Note however that both parse trees can be obtained by both leftmost and rightmost derivations. For example, the last tree can be obtained with the leftmost derivation as follows: "S" → "S" + "S" (by rule 1 on the leftmost "S") → "S" + "S" + "S" (by rule 1 on the leftmost "S") → 1 + "S" + "S" (by rule 2 on the leftmost "S") → 1 + 1 + "S" (by rule 2 on the leftmost "S") → 1 + 1 + "a" (by rule 3 on the leftmost "S"). If a string in the language of the grammar has more than one parsing tree, then the grammar is said to be an "ambiguous grammar". Such grammars are usually hard to parse because the parser cannot always decide which grammar rule it has to apply. Usually, ambiguity is a feature of the grammar, not the language, and an unambiguous grammar can be found that generates the same context-free language. However, there are certain languages that can only be generated by ambiguous grammars; such languages are called "inherently ambiguous languages". Normal forms. Every context-free grammar with no ε-production has an equivalent grammar in Chomsky normal form, and a grammar in Greibach normal form. "Equivalent" here means that the two grammars generate the same language. The especially simple form of production rules in Chomsky normal form grammars has both theoretical and practical implications. For instance, given a context-free grammar, one can use the Chomsky normal form to construct a polynomial-time algorithm that decides whether a given string is in the language represented by that grammar or not (the CYK algorithm). 
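As an illustration of that last point, the following Python sketch implements the CYK recognizer for a small grammar already in Chomsky normal form. The grammar used here is a hand-made CNF grammar for { a^n b^n : n ≥ 1 } (S → AB | AT, T → SB, A → a, B → b); it is chosen for brevity and, like all names in the code, is an assumption of this sketch rather than the article's running example.
CNF_RULES = {
    "S": [("A", "B"), ("A", "T")],
    "T": [("S", "B")],
    "A": [("a",)],
    "B": [("b",)],
}

def cyk(word, rules, start="S"):
    n = len(word)
    if n == 0:
        return False                      # this sketch ignores the empty word
    # table[i][l]: set of nonterminals that derive the substring word[i:i+l]
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, ch in enumerate(word):         # length-1 substrings: terminal rules
        for head, bodies in rules.items():
            if (ch,) in bodies:
                table[i][1].add(head)
    for length in range(2, n + 1):        # longer substrings: binary rules
        for i in range(n - length + 1):
            for split in range(1, length):
                for head, bodies in rules.items():
                    for body in bodies:
                        if (len(body) == 2
                                and body[0] in table[i][split]
                                and body[1] in table[i + split][length - split]):
                            table[i][length].add(head)
    return start in table[0][n]

assert cyk("aabb", CNF_RULES)
assert not cyk("aab", CNF_RULES)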
Closure properties. Context-free languages are closed under various operations; that is, if the languages "K" and "L" are context-free, so is the result of each of the following operations: union "K" ∪ "L", concatenation "K" ∘ "L", Kleene star "L"*, reversal of "L", intersection of "L" with a regular language, and the image of "L" under a homomorphism or an inverse homomorphism. They are not closed under general intersection (hence neither under complementation) and set difference. Decidable problems. The following are some decidable problems about context-free grammars. Parsing. The parsing problem, checking whether a given word belongs to the language given by a context-free grammar, is decidable, using one of the general-purpose parsing algorithms, such as the CYK algorithm, Earley's algorithm, or a GLR parser. Context-free parsing for Chomsky normal form grammars was shown by Leslie G. Valiant to be reducible to Boolean matrix multiplication, thus inheriting its complexity upper bound of "O"("n"^2.3728639). Conversely, Lillian Lee has shown "O"("n"^(3−ε)) Boolean matrix multiplication to be reducible to "O"("n"^(3−3ε)) CFG parsing, thus establishing some kind of lower bound for the latter. Reachability, productiveness, nullability. A nonterminal symbol formula_50 is called "productive", or "generating", if there is a derivation formula_51 for some string formula_52 of terminal symbols. formula_50 is called "reachable" if there is a derivation formula_53 for some strings formula_54 of nonterminal and terminal symbols from the start symbol. formula_50 is called "useless" if it is unreachable or unproductive. formula_50 is called "nullable" if there is a derivation formula_55. A rule formula_56 is called an "ε-production". A derivation formula_57 is called a "cycle". Algorithms are known to eliminate from a given grammar, without changing its generated language, unproductive symbols, unreachable symbols, ε-productions (with one possible exception for the start symbol), and cycles. In particular, an alternative containing a useless nonterminal symbol can be deleted from the right-hand side of a rule. Such rules and alternatives are called "useless". In the depicted example grammar, the nonterminal "D" is unreachable, and "E" is unproductive, while "C" → "C" causes a cycle. Hence, omitting the last three rules does not change the language generated by the grammar, nor does omitting the alternatives "| "Cc" | "Ee"" from the right-hand side of the rule for "S". A context-free grammar is said to be "proper" if it has neither useless symbols nor ε-productions nor cycles. Combining the above algorithms, every context-free grammar not generating ε can be transformed into a weakly equivalent proper one. Emptiness and finiteness. There are algorithms to decide whether the language of a given context-free grammar is empty, as well as whether it is finite. 
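The emptiness check can be made concrete with the standard fixed-point computation of the productive ("generating") nonterminals defined above: a nonterminal becomes productive once some rule rewrites it into a string of terminals and already-productive nonterminals, and the language is empty exactly when the start symbol never becomes productive. The Python sketch below is illustrative only; the function names and the (head, body) rule encoding are choices made for this example.
def productive_nonterminals(rules, terminals):
    """rules: list of (head, body) pairs, body being a tuple of symbols."""
    productive = set()
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in productive and all(
                    s in terminals or s in productive for s in body):
                productive.add(head)
                changed = True
    return productive

def language_is_empty(rules, terminals, start):
    return start not in productive_nonterminals(rules, terminals)

# Example: S -> A b, A -> a A (A never terminates, so neither does S).
rules = [("S", ("A", "b")), ("A", ("a", "A"))]
assert language_is_empty(rules, {"a", "b"}, "S")
# Adding A -> a makes both A and S productive, so the language is non-empty.
assert not language_is_empty(rules + [("A", ("a",))], {"a", "b"}, "S")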
A CFG can be constructed that generates all strings that are not accepting computation histories for a particular Turing machine on a particular input, and thus it will accept all strings only if the machine does not accept that input. Language equality. Given two CFGs, do they generate the same language? The undecidability of this problem is a direct consequence of the previous: it is impossible to even decide whether a CFG is equivalent to the trivial CFG defining the language of all strings. Language inclusion. Given two CFGs, can the first one generate all strings that the second one can generate? If this problem was decidable, then language equality could be decided too: two CFGs formula_58 and formula_59 generate the same language if formula_60 is a subset of formula_61 and formula_61 is a subset of formula_60. Being in a lower or higher level of the Chomsky hierarchy. Using Greibach's theorem, it can be shown that the two following problems are undecidable: Grammar ambiguity. Given a CFG, is it ambiguous? The undecidability of this problem follows from the fact that if an algorithm to determine ambiguity existed, the Post correspondence problem could be decided, which is known to be undecidable. This may be proved by Ogden's lemma. Language disjointness. Given two CFGs, is there any string derivable from both grammars? If this problem was decidable, the undecidable Post correspondence problem (PCP) could be decided, too: given strings formula_62 over some alphabet formula_63, let the grammar {{tmath|G_1}} consist of the rule formula_64; where formula_65 denotes the reversed string formula_66 and formula_49 does not occur among the formula_67; and let grammar {{tmath|G_2}} consist of the rule formula_68; Then the PCP instance given by formula_62 has a solution if and only if {{tmath|L(G_1)}} and {{tmath|L(G_2)}} share a derivable string. The left of the string (before the formula_69) will represent the top of the solution for the PCP instance while the right side will be the bottom in reverse. Extensions. An obvious way to extend the context-free grammar formalism is to allow nonterminals to have arguments, the values of which are passed along within the rules. This allows natural language features such as agreement and reference, and programming language analogs such as the correct use and definition of identifiers, to be expressed in a natural way. E.g. we can now easily express that in English sentences, the subject and verb must agree in number. In computer science, examples of this approach include affix grammars, attribute grammars, indexed grammars, and Van Wijngaarden two-level grammars. Similar extensions exist in linguistics. An extended context-free grammar (or regular right part grammar) is one in which the right-hand side of the production rules is allowed to be a regular expression over the grammar's terminals and nonterminals. Extended context-free grammars describe exactly the context-free languages. Another extension is to allow additional terminal symbols to appear at the left-hand side of rules, constraining their application. This produces the formalism of context-sensitive grammars. Subclasses. There are a number of important subclasses of the context-free grammars: LR parsing extends LL parsing to support a larger range of grammars; in turn, generalized LR parsing extends LR parsing to support arbitrary context-free grammars. 
On LL grammars and LR grammars, it essentially performs LL parsing and LR parsing, respectively, while on nondeterministic grammars, it is as efficient as can be expected. Although GLR parsing was developed in the 1980s, many new language definitions and parser generators continue to be based on LL, LALR or LR parsing up to the present day. Linguistic applications. Chomsky initially hoped to overcome the limitations of context-free grammars by adding transformation rules. Such rules are another standard device in traditional linguistics; e.g. passivization in English. Much of generative grammar has been devoted to finding ways of refining the descriptive mechanisms of phrase-structure grammar and transformation rules such that exactly the kinds of constructions that natural language actually allows can be expressed. Allowing arbitrary transformations does not meet that goal: they are much too powerful, being Turing complete unless significant restrictions are added (e.g. no transformations that introduce and then rewrite symbols in a context-free fashion). Chomsky's general position regarding the non-context-freeness of natural language has held up since then, although his specific examples regarding the inadequacy of context-free grammars in terms of their weak generative capacity were later disproved. Gerald Gazdar and Geoffrey Pullum have argued that despite a few non-context-free constructions in natural language (such as cross-serial dependencies in Swiss German and reduplication in Bambara), the vast majority of forms in natural language are indeed context-free.
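The parsing problem discussed above can be made concrete with a short sketch of the CYK membership test for grammars in Chomsky normal form. The grammar below is a hypothetical, hand-converted CNF version of the example grammar "S" → "S" + "S" | 1 | "a" used earlier in this article; the nonterminals "R" and "P" are auxiliary symbols introduced only for this illustration.

```python
from itertools import product

# Hypothetical Chomsky-normal-form version of the example grammar
#   S -> S + S | 1 | a
# with auxiliary nonterminals R and P introduced for this illustration.
terminal_rules = {          # A -> terminal
    "S": {"1", "a"},
    "P": {"+"},
}
binary_rules = {            # A -> B C
    "S": {("S", "R")},
    "R": {("P", "S")},
}

def cyk(tokens, start="S"):
    """CYK membership test: is `tokens` derivable from `start`?"""
    n = len(tokens)
    # table[length][i] holds the nonterminals deriving tokens[i:i+length]
    table = [[set() for _ in range(n)] for _ in range(n + 1)]
    for i, tok in enumerate(tokens):
        for lhs, terminals in terminal_rules.items():
            if tok in terminals:
                table[1][i].add(lhs)
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            for split in range(1, length):
                left = table[split][i]
                right = table[length - split][i + split]
                for lhs, pairs in binary_rules.items():
                    if any(pair in pairs for pair in product(left, right)):
                        table[length][i].add(lhs)
    return start in table[n][0]

print(cyk(list("1+1+a")))   # True: the string is in the language
print(cyk(list("1++a")))    # False
```

The three nested loops make the cost cubic in the length of the input for a fixed grammar, which is the polynomial-time bound referred to in the section on normal forms.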
[ { "math_id": 0, "text": "\\langle\\text{Stmt}\\rangle" }, { "math_id": 1, "text": "A\\ \\to\\ \\alpha" }, { "math_id": 2, "text": "A" }, { "math_id": 3, "text": "\\alpha" }, { "math_id": 4, "text": "\\alpha A \\beta \\rightarrow \\alpha \\gamma \\beta" }, { "math_id": 5, "text": "\\beta" }, { "math_id": 6, "text": "\\gamma" }, { "math_id": 7, "text": "\\langle\\text{Stmt}\\rangle \\to \\langle\\text{Id}\\rangle = \\langle\\text{Expr}\\rangle ;" }, { "math_id": 8, "text": "\\langle\\text{Id}\\rangle = \\langle\\text{Expr}\\rangle ;" }, { "math_id": 9, "text": "G = (V, \\Sigma, R, S)" }, { "math_id": 10, "text": " v\\in V" }, { "math_id": 11, "text": "V\\times(V\\cup\\Sigma)^{*}" }, { "math_id": 12, "text": "(\\alpha, \\beta)\\in R" }, { "math_id": 13, "text": "\\alpha \\in V" }, { "math_id": 14, "text": "\\beta \\in (V\\cup\\Sigma)^{*}" }, { "math_id": 15, "text": "\\alpha\\rightarrow\\beta" }, { "math_id": 16, "text": "\\alpha\\rightarrow\\varepsilon" }, { "math_id": 17, "text": "\\alpha\\rightarrow \\beta_1" }, { "math_id": 18, "text": "\\alpha\\rightarrow\\beta_2" }, { "math_id": 19, "text": "\\alpha\\rightarrow\\beta_1\\mid\\beta_2" }, { "math_id": 20, "text": "\\beta_1" }, { "math_id": 21, "text": "\\beta_2" }, { "math_id": 22, "text": "u, v\\in (V\\cup\\Sigma)^{*}" }, { "math_id": 23, "text": "u\\Rightarrow v\\," }, { "math_id": 24, "text": "\\exists (\\alpha, \\beta)\\in R" }, { "math_id": 25, "text": "u_{1}, u_{2}\\in (V\\cup\\Sigma)^{*}" }, { "math_id": 26, "text": "u\\,=u_{1}\\alpha u_{2}" }, { "math_id": 27, "text": "v\\,=u_{1}\\beta u_{2}" }, { "math_id": 28, "text": "(\\alpha, \\beta)" }, { "math_id": 29, "text": "u, v\\in (V\\cup\\Sigma)^{*}, " }, { "math_id": 30, "text": "u_{1}, \\ldots, u_{k}\\in (V\\cup\\Sigma)^{*}" }, { "math_id": 31, "text": "u = u_{1} \\Rightarrow u_{2} \\Rightarrow \\cdots \\Rightarrow u_{k} = v" }, { "math_id": 32, "text": "u\\stackrel{*}{\\Rightarrow} v" }, { "math_id": 33, "text": "u\\Rightarrow\\Rightarrow v" }, { "math_id": 34, "text": "k\\geq 2" }, { "math_id": 35, "text": "u\\stackrel{+}{\\Rightarrow} v" }, { "math_id": 36, "text": "(\\stackrel{*}{\\Rightarrow})" }, { "math_id": 37, "text": "(\\stackrel{+}{\\Rightarrow})" }, { "math_id": 38, "text": "(\\Rightarrow)" }, { "math_id": 39, "text": "L(G) = \\{ w\\in\\Sigma^{*} : S\\stackrel{*}{\\Rightarrow} w\\}" }, { "math_id": 40, "text": "L\\,=\\,L(G)" }, { "math_id": 41, "text": "G = (\\{S\\}, \\{a, b\\}, P, S)" }, { "math_id": 42, "text": "L(G) = \\{ww^R:w\\in\\{a,b\\}^*\\}" }, { "math_id": 43, "text": " \\{ a^n b^n : n \\ge 1 \\} " }, { "math_id": 44, "text": " \\{ a^n b^n : n \\ge 0 \\} " }, { "math_id": 45, "text": " \\{ \\text{b}^n \\text{a}^m \\text{b}^{2n} : n \\ge 0, m \\ge 0 \\} " }, { "math_id": 46, "text": "\n{(}^n {[}^n {)}^n {]}^n\n" }, { "math_id": 47, "text": "\n\\text{a}^n \\text{b}^n \\text{c}^n\n" }, { "math_id": 48, "text": "a" }, { "math_id": 49, "text": "b" }, { "math_id": 50, "text": "X" }, { "math_id": 51, "text": "X \\stackrel{*}{\\Rightarrow} w" }, { "math_id": 52, "text": "w" }, { "math_id": 53, "text": "S \\stackrel{*}{\\Rightarrow} \\alpha X \\beta" }, { "math_id": 54, "text": "\\alpha,\\beta" }, { "math_id": 55, "text": "X \\stackrel{*}{\\Rightarrow} \\varepsilon" }, { "math_id": 56, "text": "X \\rightarrow \\varepsilon" }, { "math_id": 57, "text": "X \\stackrel{+}{\\Rightarrow} X" }, { "math_id": 58, "text": "G_1" }, { "math_id": 59, "text": "G_2" }, { "math_id": 60, "text": "L(G_1)" }, { "math_id": 61, "text": "L(G_2)" }, { "math_id": 62, "text": "\\alpha_1, 
\\ldots, \\alpha_N, \\beta_1, \\ldots, \\beta_N" }, { "math_id": 63, "text": "\\{a_1, \\ldots, a_k\\}" }, { "math_id": 64, "text": "S \\to \\alpha_1 S \\beta_1^{rev} | \\cdots | \\alpha_N S \\beta_N^{rev} | b" }, { "math_id": 65, "text": "\\beta_i^{rev}" }, { "math_id": 66, "text": "\\beta_i" }, { "math_id": 67, "text": "a_i" }, { "math_id": 68, "text": "T \\to a_1 T a_1^{rev} | \\cdots | a_k T a_k^{rev} | b" }, { "math_id": 69, "text": " b " } ]
https://en.wikipedia.org/wiki?curid=6759
67595838
Hamiltonian truncation
Numerical method in quantum field theory
Hamiltonian truncation is a numerical method used to study quantum field theories (QFTs) in formula_0 spacetime dimensions. Hamiltonian truncation is an adaptation of the Rayleigh–Ritz method from quantum mechanics. It is closely related to the exact diagonalization method used to treat spin systems in condensed matter physics. The method is typically used to study QFTs on spacetimes of the form formula_1, specifically to compute the spectrum of the Hamiltonian along formula_2. A key feature of Hamiltonian truncation is that an explicit ultraviolet cutoff formula_3 is introduced, akin to the lattice spacing "a" in lattice Monte Carlo methods. Since Hamiltonian truncation is a nonperturbative method, it can be used to study strong-coupling phenomena like spontaneous symmetry breaking. Principles. Energy cutoff. Local quantum field theories can be defined on any manifold. Often, the spacetime of interest includes a copy of formula_2, like formula_4 (flat space), formula_5 (an infinite hollow cylinder), formula_6 (space is taken to be a torus) or even Anti-de Sitter space in global coordinates. On such a manifold we can take time to run along formula_2, such that energies are conserved. Solving such a QFT amounts to finding the spectrum and eigenstates of the Hamiltonian "H", which is difficult or impossible to do analytically. Hamiltonian truncation provides a strategy to compute the spectrum of "H" to arbitrary precision. The idea is that many QFT Hamiltonians can be written as the sum of a "free" part formula_7 and an "interacting" part that describes interactions (for example a formula_8 term or a Yukawa coupling), schematically formula_9 where "V" can be written as the integral of a local operator formula_10 over "M". There may be multiple interaction terms formula_11, but that case generalizes straightforwardly from the case with a single interaction formula_12. Hamiltonian truncation amounts to the following recipe: first, choose a basis of eigenstates formula_13 of formula_7 whose energies lie below the cutoff, formula_14, orthonormalized such that formula_15; there is a finite number formula_16 of such states. Next, compute the formula_17 matrix formula_18, where formula_19 Finally, diagonalize the truncated Hamiltonian formula_20, that is, solve the eigenvalue problem formula_21. In a UV-finite quantum field theory, the resulting energies formula_22 have a finite limit as the cutoff formula_3 is taken to infinity, so at least in principle the exact spectrum of the Hamiltonian can be recovered. In practice the cutoff formula_3 is always finite, and the procedure is performed on a computer. Range of validity. For a given cutoff formula_3, Hamiltonian truncation has a finite range of validity, meaning that cutoff errors become important when the coupling "g" is too large. To make this precise, let's take "R" to be the rough size of the manifold "M", that is to say that formula_23 up to some c-number coefficient. If the deformation "V" is the integral of a local operator of dimension formula_24, then the coupling "g" will have mass dimension formula_25, so the redefined coupling formula_26 is dimensionless. Depending on the order of magnitude of formula_27, we can distinguish three different regimes: if formula_28, the theory is weakly coupled and ordinary perturbation theory applies; if the coupling is of order one, say formula_30, the theory is strongly coupled but Hamiltonian truncation is still expected to give reliable results, roughly as long as formula_29; and if formula_31, cutoff errors become large and the method loses its accuracy. Truncation errors and ultraviolet divergences. There are two intrinsic but related issues with Hamiltonian truncation: the computed energies formula_22 may diverge in the limit formula_32; and, even when the limit formula_33 exists, the convergence with the cutoff may be slow, leaving sizeable truncation errors at any finite formula_3. The first case is due to ultraviolet divergences of the quantum field theory in question. In this case, cutoff-dependent counterterms must be added to the Hamiltonian "H" in order to obtain a physically meaningful result. In order to understand the second problem, one can perform perturbative computations to understand the continuum limit analytically. Let us spell this out using an example. We have in mind a perturbation of the form "gV" with formula_34 where formula_35 is a local operator. 
Suppose that we want to compute the first corrections to the vacuum energy due to "V". In Rayleigh–Schrödinger perturbation theory, we know that formula_36 where formula_37 and the sum runs over all states formula_13 other than the vacuum formula_38 itself. Whether this integral converges or not depends on the large-"E" behavior of the spectral density formula_39. In turn, this depends on the short-distance behavior of the two-point correlation function of the operator formula_10. Indeed, we can write formula_40 where formula_41 evolves in Euclidean time in the interaction picture. Hence the large-"E" behavior of the spectral density encodes the short-time behavior of the formula_42 vacuum correlator, where both "x,y" are integrated over space. The large-"E" scaling can be computed in explicit theories; in general it goes as formula_43 where formula_44 is the scaling or mass dimension of the operator formula_10 and "c" is some constant. There are now two possibilities, depending on the value of formula_45: if formula_46, the integral diverges as formula_32, so that formula_47 does not have a finite continuum limit and a cutoff-dependent counterterm is required; if instead formula_48, the integral converges, and the residual error due to the finite cutoff falls off as a power law, formula_49 A similar analysis applies to cutoff errors in excited states and at higher orders in perturbation theory. Example of the massive scalar "φ"4 theory. Quantization. As an example, we can consider a massive scalar field formula_50 on some spacetime formula_1, where "M" is compact (possibly having a boundary). The total metric can be written as formula_51 Let's consider the action formula_52 where formula_53 is the Laplacian on formula_1. The "g"=0 theory can be canonically quantized, which endows the field formula_54 with a mode decomposition formula_55 where the creation and annihilation operators obey canonical commutation relations formula_56. The single-particle energies formula_57 and the mode functions formula_58 depend on the spatial manifold "M". The Hamiltonian at "t"=0 is then given by formula_59 Hamiltonian truncation. The Hilbert space of the formula_60 theory is the Fock space of the modes formula_61. That is to say that there exists a vacuum state formula_38 obeying formula_62 for all "n", and on top of that there are single- and multi-particle states. Explicitly, a general eigenstate of formula_7 is labeled by a tuple formula_63 of occupation numbers: formula_64 where the formula_65 can take values in the non-negative integers: formula_66. Such a state has energy formula_67 so finding a basis of low-energy states amounts to finding all tuples formula_63 obeying formula_68. Let's denote all such states schematically as formula_13. Next, the matrix elements formula_69 can be computed explicitly using the canonical commutation relations. Finally, the explicit Hamiltonian formula_18 has to be diagonalized. The resulting spectra can be used to study precision physics. Depending on the values of "g" and formula_70, the above formula_8 theory can be in a symmetry-preserving or a symmetry-broken phase, which can be studied explicitly using the above algorithm. The continuous phase transition between these two phases can also be analyzed, in which case the spectrum and eigenstates of "H" contain information about the conformal field theory of the Ising universality class. Special cases. Truncated Conformal Space Approach. The truncated conformal space approach (TCSA) is a version of the Hamiltonian truncation that applies to perturbed conformal field theories. This approach was introduced by Yurov and Al. Zamolodchikov in 1990 and has become a standard ingredient used to study two-dimensional QFTs. The "d"-dimensional version of TCSA was first studied in 2014. 
An RG flow emanating from a conformal field theory (CFT) is described by an action formula_71 where formula_10 is a scalar operator in the CFT of scaling dimension formula_72. At large distances, such theories are strongly coupled. It is convenient to study such RG flows on the cylinder formula_5, taking the sphere to have radius "R" and endowing the full space with coordinates formula_73. The reason is that the unperturbed ("g"=0) theory admits a simple description owing to radial quantization. Schematically, states formula_13 on the cylinder are in one-to-one correspondence with local operators formula_74 inserted at the origin of flat space: formula_75 where formula_76 is the CFT vacuum state. The Hamiltonian on the cylinder is precisely the dilatation operator "D" of the CFT: the unperturbed energies are given by formula_77 where formula_78 is the scaling dimension of the operator formula_74. Finally, the matrix elements of the deformation "V" formula_79 are proportional to OPE coefficients formula_80 in the original CFT. Lightcone truncation methods. Real-time QFTs are often studied in lightcone coordinates formula_81 Although the spectrum of the lightcone Hamiltonian formula_82 is continuous, it is still possible to compute certain observables using truncation methods. The most commonly used scheme, applicable when the UV theory is conformal, is known as lightcone conformal truncation (LCT). Notably, the spatial manifold "M" is non-compact in this case, unlike the equal-time quantization described previously. See also the page for light-front computational methods, which describes related computational setups. Numerical implementation. Hamiltonian truncation computations are normally performed using a computer algebra system, or a programming language like Python or C++. The number of low-energy states formula_16 tends to grow rapidly with the UV cutoff, and it is common to perform Hamiltonian truncation computations taking into account several thousand states. Nonetheless, one is often only interested in the first O(10) energies and eigenstates of "H". Instead of diagonalizing the full Hamiltonian explicitly (which is numerically very costly), approximation methods like Arnoldi iteration and the Lanczos algorithm are commonly used. In some cases, it is not possible to orthonormalize the low-energy states formula_13, either because this is numerically expensive or because the underlying Hilbert space is not positive definite. In that case, one has to solve the generalized eigenvalue problem formula_83 where formula_84 and formula_85 is the Gram matrix of the theory. In this formulation, the eigenstates of the truncated Hamiltonian are formula_86. In practice, it is important to keep track of the "symmetries" of the theory, that is to say all generators formula_87 that satisfy formula_88. There are two types of symmetries in Hamiltonian truncation: internal (global) symmetries, such as the formula_89 symmetry formula_90 of the scalar theory discussed above, and spacetime symmetries, such as the rotation group formula_91 when the spatial manifold is a sphere, formula_92. When all states are organized in symmetry sectors with respect to the formula_87, the Hamiltonian is block diagonal, so the effort required to diagonalize "H" is reduced. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
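As an illustration of the recipe described above, the following sketch applies the same truncation logic to a quantum-mechanical toy model rather than to a full QFT: the quartic anharmonic oscillator, i.e. the Rayleigh–Ritz setting from which Hamiltonian truncation descends. The coupling, cutoff, and basis size are hypothetical choices made for this example; the structure (pick the H0 eigenstates below a cutoff, build the matrix of H0 + gV in that basis, diagonalize) mirrors the QFT case.

```python
import numpy as np

def truncated_spectrum(g=0.2, n_max=60, n_keep=5):
    """Toy Hamiltonian truncation for H = H0 + g*x^4, where H0 is a 1D
    harmonic oscillator (hbar = m = omega = 1). The basis consists of the
    n_max lowest H0 eigenstates, i.e. all states below the energy cutoff."""
    n = np.arange(n_max)
    # Annihilation operator in the truncated basis: a|n> = sqrt(n)|n-1>
    a = np.diag(np.sqrt(n[1:]), k=1)
    x = (a + a.T) / np.sqrt(2.0)          # position operator
    h0 = np.diag(n + 0.5)                  # free energies e_i
    v = np.linalg.matrix_power(x, 4)       # x^4 in the truncated basis
    h = h0 + g * v                         # truncated Hamiltonian H(Lambda)
    return np.linalg.eigvalsh(h)[:n_keep]  # lowest eigenvalues E_alpha(Lambda)

# Convergence check: the low-lying energies stabilize as the cutoff grows
for n_max in (20, 40, 80):
    print(n_max, np.round(truncated_spectrum(n_max=n_max), 6))
```

In a genuine QFT computation, the basis would instead consist of the Fock states formula_13 satisfying formula_68, the matrix elements would follow from the canonical commutation relations as described above, and sparse eigensolvers such as the Lanczos algorithm would typically replace the dense diagonalization.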
[ { "math_id": 0, "text": "d \\geq 2" }, { "math_id": 1, "text": "\\mathbb{R} \\times M" }, { "math_id": 2, "text": "\\mathbb{R}" }, { "math_id": 3, "text": "\\Lambda" }, { "math_id": 4, "text": "\\mathbb{R}^d" }, { "math_id": 5, "text": "\\mathbb{R} \\times S^{d-1}" }, { "math_id": 6, "text": "\\mathbb{R} \\times \\mathbf{T}^{d-1}" }, { "math_id": 7, "text": "H_0" }, { "math_id": 8, "text": "\\phi^4" }, { "math_id": 9, "text": "H = H_0 + g V" }, { "math_id": 10, "text": "\\mathcal{V}" }, { "math_id": 11, "text": "g_1 V_1 + g_2 V_2 + \\ldots" }, { "math_id": 12, "text": "g V" }, { "math_id": 13, "text": "|i\\rangle" }, { "math_id": 14, "text": "e_i \\leq \\Lambda" }, { "math_id": 15, "text": "\\langle i | j \\rangle = \\delta_{ij}" }, { "math_id": 16, "text": "N(\\Lambda)" }, { "math_id": 17, "text": "N(\\Lambda) \\times N(\\Lambda)" }, { "math_id": 18, "text": "H(\\Lambda)_{ij} = e_i \\delta_{ij} + g V_{ij}" }, { "math_id": 19, "text": "V_{ij} = \\langle i | V | j \\rangle." }, { "math_id": 20, "text": "H(\\Lambda)" }, { "math_id": 21, "text": "H(\\Lambda)| \\psi_\\alpha \\rangle = E_\\alpha(\\Lambda) | \\psi_\\alpha \\rangle" }, { "math_id": 22, "text": "E_\\alpha(\\Lambda)" }, { "math_id": 23, "text": " \\text{vol}(M) \\sim R^{d-1}" }, { "math_id": 24, "text": "\\Delta" }, { "math_id": 25, "text": "[g] = d-\\Delta" }, { "math_id": 26, "text": "\\bar{g} \\equiv g R^{d-\\Delta}" }, { "math_id": 27, "text": "\\bar{g}" }, { "math_id": 28, "text": "\\bar{g} \\ll 1" }, { "math_id": 29, "text": "|\\bar{g}| \\leq \\bar{g}_\\text{max} = O(1)" }, { "math_id": 30, "text": "\\bar{g} = O(\\text{few})" }, { "math_id": 31, "text": "\\bar{g} \\gg 1" }, { "math_id": 32, "text": "\\Lambda \\to \\infty" }, { "math_id": 33, "text": "\\lim_{\\Lambda \\to \\infty} E_\\alpha(\\Lambda)" }, { "math_id": 34, "text": "V = \\int_M d{\\mathbf x} \\; \\mathcal{V}(t=0,{\\mathbf x})" }, { "math_id": 35, "text": "\\mathcal{V}(t,\\mathbf x)" }, { "math_id": 36, "text": "E_\\Omega(\\Lambda) = 0 + g \\langle \\Omega | V | \\Omega \\rangle + g^2 E_\\Omega^{(2)}(\\Lambda) + O(g^3)" }, { "math_id": 37, "text": "\nE_\\Omega^{(2)}(\\Lambda) = - \\int_0^\\Lambda \\frac{dE}{E}\\; \\rho_{\\Omega}(E),\n\\quad\n\\rho_\\Omega(E) = \\sum_i \\delta(E-e_i) |\\langle i | V | \\Omega \\rangle|^2 \\geq 0\n" }, { "math_id": 38, "text": "|\\Omega \\rangle" }, { "math_id": 39, "text": "\\rho_\\Omega(E)" }, { "math_id": 40, "text": " \\int_0^\\infty dE\\, e^{-E t}\\rho_\\Omega(E) = \\int_M d{\\mathbf x}\\int_Md{\\mathbf y} \\, \\left[ \\langle \\Omega | \\mathcal{V}(t,\\mathbf x)\\mathcal{V}(0,\\mathbf y) | \\Omega \\rangle - \\langle \\Omega | \\mathcal{V}(0,\\mathbf x) | \\Omega \\rangle \\langle \\Omega | \\mathcal{V}(0,\\mathbf y) | \\Omega \\rangle \\right] " }, { "math_id": 41, "text": "\\mathcal{V}(t,\\mathbf x) = e^{H_0 t} \\mathcal{V}(0,\\mathbf x) e^{-H_0 t}" }, { "math_id": 42, "text": "\\langle \\mathcal{V}(t,\\mathbf x)\\mathcal{V}(0,\\mathbf y) \\rangle" }, { "math_id": 43, "text": " \\rho_\\Omega(E) \\mathrel{\\mathop{\\sim}\\limits_{\\scriptstyle{E \\to \\infty}}} c \\cdot E^{2\\Delta_\\mathcal{V} - d}" }, { "math_id": 44, "text": "\\Delta_\\mathcal{V}" }, { "math_id": 45, "text": "\\gamma \\equiv d - 2\\Delta_\\mathcal{V}" }, { "math_id": 46, "text": "\\gamma \\leq 0" }, { "math_id": 47, "text": "E_\\Omega(\\Lambda)" }, { "math_id": 48, "text": "\\gamma > 0" }, { "math_id": 49, "text": " \\qquad E_\\Omega(\\infty) - E_\\Omega(\\Lambda) \\approx -g^2 \\frac{c}{\\gamma} \\Lambda^{-\\gamma} + O(g^3). 
" }, { "math_id": 50, "text": "\\phi(t,\\mathbf{x})" }, { "math_id": 51, "text": "ds^2 = -dt^2 + h_{ij}({\\mathbf x}) dx^i dx^j." }, { "math_id": 52, "text": " S = \\int_\\mathbb{R} dt \\int_M \\sqrt{h}d\\mathbf{x}\\, \\mathcal{L},\n\\quad\n\\mathcal{L} = - \\frac{1}{2} \\phi \\triangle \\phi + \\frac{1}{2} m^2 \\phi^2 + g \\phi^4\n" }, { "math_id": 53, "text": "\\triangle" }, { "math_id": 54, "text": "\\phi" }, { "math_id": 55, "text": " \\phi(t,\\mathbf{x}) = \\sum_{n \\in \\mathbb{N}} \\frac{1}{\\sqrt{2\\omega_n}} \\left(a_n \\, e^{-i\\omega_n t} f_n(\\mathbf{x}) + a_n^\\dagger \\, e^{i\\omega_n t} f_n(\\mathbf{x})^*\\right)" }, { "math_id": 56, "text": "[a_m,a_n^\\dagger] = \\delta_{mn}" }, { "math_id": 57, "text": "\\omega_n > 0" }, { "math_id": 58, "text": "f_n(\\mathbf{x})" }, { "math_id": 59, "text": "H = H_0 + gV,\n\\quad\nH_0 = \\sum_{n \\in \\mathbb{N}} \\omega_n a_n^\\dagger a_n\n\\quad\n\\text{and}\n\\quad\nV = \\int_M \\sqrt{h} d\\mathbf{x}\\; \\phi(t=0,\\mathbf{x})^4.\n" }, { "math_id": 60, "text": "g=0" }, { "math_id": 61, "text": "\\{a_n^\\dagger\\}" }, { "math_id": 62, "text": "a_n | \\Omega \\rangle = 0" }, { "math_id": 63, "text": "\\{k_n\\}" }, { "math_id": 64, "text": "| {\\mathbf k} \\rangle = \\prod_{n \\in \\mathbb{N}} \\frac{1}{\\sqrt{k_n!}} (a_n^\\dagger)^{k_n} | \\Omega \\rangle" }, { "math_id": 65, "text": "k_n" }, { "math_id": 66, "text": "k_n \\in \\{ 0,1,2,\\ldots\\}" }, { "math_id": 67, "text": "H_0 |{\\mathbf k} \\rangle = e({\\mathbf k}) | {\\mathbf k} \\rangle,\n\\quad\ne({\\mathbf k}) = \\sum_{n \\in \\mathbb{N}} k_n \\omega_n" }, { "math_id": 68, "text": "e(\\mathbf k) \\leq \\Lambda" }, { "math_id": 69, "text": "V_{ij}" }, { "math_id": 70, "text": "m^2" }, { "math_id": 71, "text": "S = S_\\text{CFT} + g \\int d^d x\\, \\mathcal{V}(x)" }, { "math_id": 72, "text": "\\Delta_\\mathcal{V} \\leq d" }, { "math_id": 73, "text": "(t,\\mathbf{n})" }, { "math_id": 74, "text": "\\mathcal{O}_i" }, { "math_id": 75, "text": "|i\\rangle = \\lim_{x \\to 0} \\mathcal{O}_i(x) | \\Omega \\rangle" }, { "math_id": 76, "text": "|\\Omega\\rangle" }, { "math_id": 77, "text": "H_0 | i \\rangle = \\frac{\\Delta_i}{R} | i \\rangle" }, { "math_id": 78, "text": "\\Delta_i " }, { "math_id": 79, "text": "V_{ij} = R^{d-1} \\int_{S^{d-1}} d\\mathbf{n}\\; \\langle i | \\mathcal{V}(t=0,\\mathbf{n}) | j \\rangle " }, { "math_id": 80, "text": " \\mathcal{V} \\times \\mathcal{O}_j \\sim \\mathcal{O}_i" }, { "math_id": 81, "text": "ds^2 = 2dx^{+}dx^{-} - (dx^i)^2." }, { "math_id": 82, "text": "P_{+} = i \\partial/\\partial x^{+}" }, { "math_id": 83, "text": " H_{ij}(\\Lambda) v_\\alpha^j = E_\\alpha(\\Lambda) \\, G_{ij} v_\\alpha^j" }, { "math_id": 84, "text": "H_{ij}(\\Lambda) = \\langle i | H(\\Lambda) | j \\rangle" }, { "math_id": 85, "text": "G_{ij} = \\langle i | j \\rangle" }, { "math_id": 86, "text": "|\\psi_\\alpha \\rangle = v_\\alpha^i | i \\rangle" }, { "math_id": 87, "text": "G_i" }, { "math_id": 88, "text": "[G_i,H(\\Lambda)] = 0" }, { "math_id": 89, "text": "\\mathbb{Z}_2" }, { "math_id": 90, "text": "\\phi \\mapsto -\\phi" }, { "math_id": 91, "text": "O(d)" }, { "math_id": 92, "text": "M = S^{d-1}" } ]
https://en.wikipedia.org/wiki?curid=67595838
67614629
Turner angle
The Turner angle "Tu", introduced by Ruddick (1983) and named after J. Stewart Turner, is a parameter used to describe the local stability of an inviscid water column as it undergoes double-diffusive convection. Temperature and salinity, which together determine the water density, both vary with depth. By treating their vertical gradients as orthogonal coordinates, the angle measured from the axis indicates the relative importance of the two in determining stability. The Turner angle is defined as: formula_0 where tan−1 is the four-quadrant arctangent; α is the coefficient of thermal expansion; β is the equivalent coefficient for the addition of salinity, sometimes referred to as the "coefficient of saline contraction"; θ is potential temperature; and S is salinity. The value of "Tu" indicates which stability regime (doubly stable, salt fingering, diffusive convection, or statically unstable) the water column is in. Relation to density ratio. The Turner angle is related to the density ratio mathematically by: formula_1 The Turner angle has some advantages over the density ratio: in particular, limiting cases in which the density ratio becomes 0 or infinite (because one of the two vertical gradients vanishes) remain well defined in terms of "Tu". Nevertheless, the Turner angle is less immediately interpretable than the density ratio when assessing the separate contributions of thermal and haline stratification; its main strength lies in classification. Physical description. The Turner angle is usually discussed when researching ocean stratification and double diffusion. It assesses vertical stability, indicating how the density of the water column changes with depth. The density is generally related to the potential temperature and salinity profiles: the cooler and saltier the water is, the denser it is. When lighter water overlies denser water, the water column is said to be stably stratified. The buoyancy force preserves stable stratification. One characteristic of stability is that the Brunt–Väisälä frequency satisfies N²&gt;0, which encompasses three situations: doubly stable, diffusive convection, and salt fingering. Considering that the density depends on both temperature and salinity, a "doubly stable" state, in which the temperature decreases with depth (∂θ/∂z&gt;0) and the salinity increases with depth (∂S/∂z&lt;0), is the ideal stable water column: it is stably stratified with respect to both θ and S. The water column can remain stable even when one of the two contributions is destabilizing. In one case, dominated by heat diffusion, diffusive convection is possible when the salinity structure is stable while the temperature structure is unstable (∂θ/∂z&lt;0, ∂S/∂z&lt;0). In the other case, salt fingering can be expected when relatively warm, salty water overlies relatively colder, fresher water (∂θ/∂z&gt;0, ∂S/∂z&gt;0). Both these cases lead to turbulence and mixing in the vertical structure of the water column. Since the Turner angle can indicate the thermal and haline properties of the water column, it is used to discuss thermal and haline structures, and it has proved useful for locating the boundaries of the subarctic front. Characteristics. The global meridional Turner angle distributions at the surface and at 300-m depth in different seasons were investigated by Tippins and Tomczak (2003), indicating the overall stability of the ocean over long time scales. It is worth noting that 300-m depth is deep enough to be beneath the mixed layer during all seasons over most of the subtropics, yet shallow enough to be located entirely in the permanent thermocline, even in the tropics. 
At the surface, as the temperature and salinity increase from the Subpolar Front towards the subtropics, the Turner angle is positive, while it becomes negative where the meridional salinity gradient is reversed, on the equatorial side of the subtropical surface salinity maximum. "Tu" becomes positive again in the Pacific and Atlantic Oceans near the equator. A band of negative "Tu" in the South Pacific extends westward along 45°S, produced by the low salinities resulting from heavy rainfall off the southern coast of Chile. At 300-m depth, positive "Tu" dominates nearly everywhere, except for narrow bands of negative Turner angles. This reflects the shape of the permanent thermocline, which sinks to its greatest depth in the center of the oceanic gyres and then rises again towards the equator, and which also indicates a vertical structure in temperature and salinity where both decrease with depth. Availability. Implementations of the Turner angle calculation are available: for Python, in the GSW Oceanographic Toolbox as the function gsw_Turner_Rsubrho; for R, in the CRAN gsw package as gsw_Turner_Rsubrho (Turner Angle and Density Ratio); for MATLAB, in the GSW-Matlab toolbox as gsw_Turner_Rsubrho.m. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
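For illustration, the following sketch evaluates the defining formula for "Tu" and the density-ratio relation given above directly with NumPy. The coefficient and gradient values are hypothetical and chosen only to show the calling convention; in practice one would instead use the gsw_Turner_Rsubrho routines listed above, which start from measured salinity and temperature profiles.

```python
import numpy as np

def turner_angle(dtheta_dz, dS_dz, alpha, beta):
    """Turner angle in degrees, Tu = atan2(a*dtheta/dz - b*dS/dz,
    a*dtheta/dz + b*dS/dz), following the definition given above."""
    y = alpha * dtheta_dz - beta * dS_dz
    x = alpha * dtheta_dz + beta * dS_dz
    return np.degrees(np.arctan2(y, x))

def density_ratio(tu_deg):
    """Density ratio from the Turner angle via R_rho = -tan(Tu + 45 deg)."""
    return -np.tan(np.radians(tu_deg + 45.0))

# Hypothetical values (z increasing upward); typical orders of magnitude only
alpha = 2.0e-4        # thermal expansion coefficient, 1/K
beta = 7.5e-4         # haline contraction coefficient, 1/(g/kg)
dtheta_dz = 1.0e-2    # K/m
dS_dz = -2.0e-3       # (g/kg)/m

tu = turner_angle(dtheta_dz, dS_dz, alpha, beta)
print(tu, density_ratio(tu))
```

Using the four-quadrant arctangent keeps the angle well defined even when one of the two gradient combinations vanishes, which is the advantage over the density ratio noted above.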
[ { "math_id": 0, "text": "Tu(\\deg)=\\tan^{-1}\\left ( \\alpha \\frac{\\partial \\theta}{\\partial z}-\\beta \\frac{\\partial S}{\\partial z}, \\alpha \\frac{\\partial \\theta}{\\partial z}+\\beta \\frac{\\partial S}{\\partial z} \\right )" }, { "math_id": 1, "text": "R_\\rho=-\\tan(Tu+45^\\circ)" } ]
https://en.wikipedia.org/wiki?curid=67614629
6762557
Probabilistic CTL
Probabilistic Computation Tree Logic (PCTL) is an extension of computation tree logic (CTL) that allows for probabilistic quantification of described properties. It was defined in a paper by Hansson and Jonsson. PCTL is a useful logic for stating soft deadline properties, e.g. "after a request for a service, there is at least a 98% probability that the service will be carried out within 2 seconds". Just as CTL is well suited to model checking, PCTL is widely used as a property specification language for probabilistic model checkers. PCTL syntax. A possible syntax of PCTL can be defined as follows: formula_0 Therein, formula_1 is a comparison operator and formula_2 is a probability threshold. &lt;br&gt; Formulas of PCTL are interpreted over discrete Markov chains. An interpretation structure is a quadruple formula_3, where formula_4 is a finite set of states, formula_5 is the initial state, formula_6 is a transition probability function, formula_7, such that for every state formula_8 the outgoing probabilities sum to one, formula_9, and formula_10 is a labeling function, formula_11, assigning atomic propositions from a set "A" to states. &lt;br&gt; A path formula_12 from a state formula_13 is an infinite sequence of states formula_14. The n-th state of the path is denoted as formula_15 and the prefix of formula_12 of length formula_16 is denoted as formula_17. Probability measure. A probability measure formula_18 on the set of paths with a common prefix of length formula_16 is given by the product of transition probabilities along the prefix of the path: formula_19 For formula_20 the probability measure is equal to formula_21. Satisfaction relation. The satisfaction relation formula_22 is inductively defined as follows: formula_23 if and only if formula_24; formula_25 if and only if "s" does not satisfy "f"; formula_26 if and only if formula_27 or formula_28; formula_29 if and only if formula_27 and formula_28; formula_30 if and only if formula_31; and formula_32 if and only if formula_33.
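To illustrate how the probability measure enters the clause for formula_30, the sketch below computes, for a small Markov chain, the measure of paths from each state that satisfy the until condition, and then compares it against a threshold. The chain, the labelling, and the threshold are invented for this example, and the probabilities are obtained by iterating the standard fixed-point equations for the until operator.

```python
import numpy as np

# Hypothetical 4-state discrete Markov chain (each row of T sums to 1)
T = np.array([
    [0.0, 0.5, 0.3, 0.2],
    [0.0, 0.2, 0.0, 0.8],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])
sat_f1 = {0, 1}   # states labelled with f1
sat_f2 = {3}      # states labelled with f2

def prob_until(T, sat_f1, sat_f2, n_iter=1000):
    """For every state s, the measure of paths from s satisfying f1 U f2,
    obtained by iterating the fixed-point equations for the until operator."""
    n = T.shape[0]
    x = np.array([1.0 if s in sat_f2 else 0.0 for s in range(n)])
    for _ in range(n_iter):
        y = T @ x
        for s in range(n):
            if s in sat_f2:
                y[s] = 1.0
            elif s not in sat_f1:
                y[s] = 0.0
        x = y
    return x

p = prob_until(T, sat_f1, sat_f2)
lam = 0.9
print(p)           # path probabilities, e.g. p[0] = 0.7 for this chain
print(p >= lam)    # states satisfying P_{>=0.9}(f1 U f2)
```

The override step in the loop encodes the base cases of the semantics: states already satisfying f2 contribute probability 1, and states satisfying neither f1 nor f2 contribute 0.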
[ { "math_id": 0, "text": "\n\\phi ::= p \\mid \\neg \\phi \\mid \\phi \\lor \\phi \\mid \\phi \\land \\phi \\mid \\mathcal{P}_{\\sim\\lambda}(\\phi \\mathcal{U} \\phi) \\mid \\mathcal{P}_{\\sim\\lambda}(\\square\\phi)\n" }, { "math_id": 1, "text": "\\sim \\in \\{ <, \\leq, \\geq, > \\}" }, { "math_id": 2, "text": "\\lambda" }, { "math_id": 3, "text": "K = \\langle S, s^i, \\mathcal{T}, L \\rangle" }, { "math_id": 4, "text": "S" }, { "math_id": 5, "text": "s^i \\in S" }, { "math_id": 6, "text": "\\mathcal{T}" }, { "math_id": 7, "text": "\\mathcal{T} : S \\times S \\to [0,1] " }, { "math_id": 8, "text": "s \\in S" }, { "math_id": 9, "text": "\\sum_{s'\\in S} \\mathcal{T}(s,s')=1" }, { "math_id": 10, "text": "L" }, { "math_id": 11, "text": "L:S\\to2^A" }, { "math_id": 12, "text": "\\sigma" }, { "math_id": 13, "text": "s_0" }, { "math_id": 14, "text": "s_0 \\to s_1 \\to \\dots \\to s_n \\to \\dots " }, { "math_id": 15, "text": "\\sigma[n]" }, { "math_id": 16, "text": "n" }, { "math_id": 17, "text": "\\sigma\\uparrow n" }, { "math_id": 18, "text": "\\mu_m" }, { "math_id": 19, "text": "\n\\mu_m(\\{\\sigma \\in X : \\sigma\\uparrow n = s_0 \\to \\dots \\to s_n \\}) = \\mathcal{T}(s_0,s_1) \\times\\dots\\times\\mathcal{T}(s_{n-1},s_n)\n" }, { "math_id": 20, "text": "n = 0" }, { "math_id": 21, "text": "\\mu_m(\\{\\sigma \\in X : \\sigma\\uparrow 0 = s_0 \\}) = 1" }, { "math_id": 22, "text": "s \\models_K f" }, { "math_id": 23, "text": "s \\models_K a" }, { "math_id": 24, "text": "a \\in L(s)" }, { "math_id": 25, "text": "s \\models_K \\neg f" }, { "math_id": 26, "text": "s \\models_K f_1 \\lor f_2" }, { "math_id": 27, "text": "s \\models_K f_1" }, { "math_id": 28, "text": "s \\models_K f_2" }, { "math_id": 29, "text": "s \\models_K f_1 \\land f_2" }, { "math_id": 30, "text": "s \\models_K \\mathcal{P}_{\\sim\\lambda}(f_1 \\mathcal{U} f_2)" }, { "math_id": 31, "text": "\\mu_m(\\{\\sigma : \\sigma[0] = s \\land (\\exists i)\\sigma[i] \\models_K f_2 \\land (\\forall 0 \\leq j < i) \\sigma[j] \\models_K f_1\\}) \\sim \\lambda" }, { "math_id": 32, "text": "s \\models_K \\mathcal{P}_{\\sim\\lambda}(\\square f)" }, { "math_id": 33, "text": "\\mu_m(\\{\\sigma : \\sigma[0] = s \\land (\\forall i \\geq 0)\\sigma[i] \\models_K f\\}) \\sim \\lambda" } ]
https://en.wikipedia.org/wiki?curid=6762557
6762618
Virgo interferometer
Gravitational wave detector in Santo Stefano a Macerata, Tuscany, Italy The Virgo interferometer is a large Michelson interferometer designed to detect the gravitational waves predicted by general relativity. It is in Santo Stefano a Macerata, near the city of Pisa, Italy. The instrument's two arms are three kilometres long and contain its mirrors and instrumentation inside an ultra-high vacuum. Virgo is hosted by the European Gravitational Observatory (EGO), a consortium founded by the French Centre National de la Recherche Scientifique (CNRS) and Italian Istituto Nazionale di Fisica Nucleare (INFN). The "Virgo Collaboration" operates the detector and defines the strategy and policy for its use and upgrades. It is composed of several hundreds of members across 16 different countries. These operations are carried out jointly with other similar detectors, including the two LIGO interferometers in the United States (at the Hanford Site and in Livingston, Louisiana) and the Japanese interferometer KAGRA (in the Kamioka mine). Cooperation between several detectors is crucial for detecting gravitational waves and pinpointing their origin, which is why the LIGO and Virgo collaborations have been sharing their data since 2007, with KAGRA joining in 2019 to form the LIGO-Virgo-KAGRA (LVK) collaboration. The interferometer is named after the Virgo Cluster, a cluster of about 1,500 galaxies in the Virgo constellation, about 50 million light-years from Earth. Founded at a time when gravitational waves were only a prediction by general relativity, it has now participated in detecting multiple gravitational wave events, making its first detection in 2017 (jointly with the two LIGO detectors), quickly followed by the GW170817 event, the only one to have also been observed with classical methods (optical, gamma-ray, X-ray and radio telescopes) as of 2024. The detector currently participates in joint observing runs with the other detectors, separated by commissioning periods during which the detector is upgraded to increase its sensitivity and scientific output. Organization. The Virgo experiment is managed by the European Gravitational Observatory (EGO) consortium, created in December 2000 by the CNRS and INFN. The Dutch Institute for Nuclear and High-Energy Physics, Nikhef, later joined as an observer and eventually became a full member. EGO is responsible for the Virgo site, in charge of the construction, maintenance, and operation of the detector, as well as its upgrades. One of the goals of EGO is also to promote research on and studies of gravitation in Europe. The Virgo Collaboration consolidates all the researchers working on various aspects of the detector. As of May 2023, around 850 members, representing 142 institutions in 16 different countries, are part of the collaboration. This includes institutions from France, Italy, the Netherlands, Poland, Spain, Belgium, Germany, Hungary, Portugal, Greece, Czechia, Denmark, Ireland, Monaco, China, and Japan. The Virgo Collaboration is part of the larger LIGO-Virgo-KAGRA (LVK) Collaboration, which gathers scientists from the other major gravitational waves experiment, for the purpose of carrying out joint analysis of the data which is crucial for gravitational wave detections. LVK first started in 2007 as the LIGO-Virgo Collaboration, and was expanded when KAGRA joined in 2019. History. The Virgo project was approved in 1992 by the French CNRS and in 1993 by the Italian INFN, the two institutes at the origin of the experiment. 
The construction of the detector started in 1996 at the Cascina site near Pisa, Italy, and was completed in 2003. After several observation runs without detection, the interferometer was shut down in 2011 to allow for significant upgrades as part of the Advanced Virgo project. It started making observations again in 2017, quickly making its first detections along with the LIGO detectors. Conception. Although the concept of gravitational waves is more than 100 years old, having been predicted by Einstein in 1916, it was not before the 1970s that serious projects for detecting them started to appear. The first were the Weber bars, invented by Joseph Weber; while they could in principle detect gravitational waves, none of the experiments succeeded. They did however spark the creation of many research groups dedicated to gravitational waves. The idea of a large interferometric detector began to gain credibility in the early 1980s, and in 1985, the Virgo project was conceptualized by the Italian researcher Adalberto Giazotto and the French researcher Alain Brillet after they met in Rome. One of the key ideas that set Virgo apart from other projects was targeting low frequencies (around 10 Hz), whereas most projects focused on higher frequencies (around 500 Hz); many believed at the time that this was not doable, and only France and Italy started working on the project, which was first presented in 1987. After being approved by the CNRS and the INFN, the construction of the interferometer began in 1996, with the aim of beginning observations by 2000. The first goal of Virgo was to directly observe gravitational waves. The study of the binary pulsar 1913+16 over three decades, whose discoverers were awarded the 1993 Nobel Prize in Physics, had already led to indirect evidence of their existence. The observed decrease of this binary pulsar's orbital period was in agreement with the hypothesis that the system was losing energy by emitting gravitational waves. Initial Virgo detector. In the 2000s, the Virgo detector was first built, commissioned, and operated. The instrument successfully reached its planned design sensitivity. This initial endeavor was used to validate the Virgo technical design choices; it also demonstrated that giant interferometers were promising devices for detecting gravitational waves in a wide frequency band. This phase is generally named the "initial Virgo" or "original Virgo". The construction of the initial Virgo detector was completed in June 2003, and several data collection periods ("science runs") followed between 2007 and 2011. Some of these runs were done simultaneously with the two LIGO detectors. There was a shut-down of a few months in 2010 to allow for a major upgrade of the Virgo suspension system: the original steel suspension wires were replaced by glass fibers to reduce the thermal noise. However, the initial Virgo detector was not sensitive enough to detect gravitational waves. After several months of data collection with the upgraded suspension system, the initial Virgo detector was shut down in September 2011 to begin the installation of Advanced Virgo. Advanced Virgo detector. The Advanced Virgo detector aimed to increase the sensitivity (and thus the distance at which a signal can be detected) by a factor of 10, allowing it to probe a volume of the Universe 1,000 times larger, making detection of gravitational waves more likely. It benefited from the experience gained with the initial detector and technological advances. 
The Advanced Virgo detector kept the same vacuum infrastructure as the initial Virgo, but the remainder of the interferometer was significantly upgraded. Four additional cryotraps were added at both ends of each arm to trap residual particles coming from the mirror towers. The new mirrors were larger (350 mm in diameter, with a weight of 40 kg), and their optical performance was improved. The critical optical elements used to control the interferometer are under vacuum on suspended benches. A system of adaptive optics was to be installed to correct the mirror aberrations "in-situ". In the original plan, the laser power was expected to reach 200 W in its final configuration. Advanced Virgo started the commissioning process in 2016, joining the two advanced LIGO detectors ("aLIGO") on 1 August 2017, during the "O2" observation period. On 14 August 2017, LIGO and Virgo detected a signal, GW170814, which was reported on 27 September 2017. It was the first binary black hole merger detected by both LIGO and Virgo (and the first one for Virgo). Just a few days later, GW170817 was detected by LIGO and Virgo on 17 August 2017. The signal was produced by the last minutes of two neutron stars spiralling closer to each other and finally merging, and represents both the first binary neutron star merger observed and the first gravitational wave observation which was confirmed by non-gravitational means. Indeed, the resulting gamma-ray burst was also detected, and optical telescopes later discovered a kilonova corresponding to the merger. After further upgrades, Virgo started the third observation run ("O3") in April 2019, planned to last one year. However, the run ended earlier that expected on 27 March 2020, due to the COVID-19 pandemic. The upgrades following O3 are part of the "Advanced Virgo +" program, divided in two phases, the first one preceding the O4 run and the second one preceding the O5 run. The first phase focuses on the reduction of quantum noise by introducing a more powerful laser, improving the squeezing introduced in O3, and implementing a new technique called signal recycling; seismic sensors are also installed around the mirrors. The second phase will then try to reduce the mirror thermal noise, by changing the geometry of the laser beam to increase its size on the mirrors (spreading the energy on a larger area and thus reducing the temperature), and by improving the coating of the mirrors; the end mirrors will also be significantly larger, requiring improvements to the suspension. Further improvements for quantum noise reduction are also expected in the second phase, building upon the changes from the first. The fourth observation run ("O4") was scheduled to start in May 2023, and was planned to last for a total of 20 months, including a commissioning break of up to two months. However, on 11 May 2023, Virgo announced that it would not join at the beginning of O4, as the interferometer was not stable enough to reach the expected sensitivity and needed to undergo the replacement of one of the mirrors, requiring several weeks of work. Virgo has not joined the O4 run during the first part of the run ("O4a"), which ended on 16 January 2024, as it only managed to reach a peak sensitivity of 45 Mpc instead of the 80 to 115 Mpc initially expected; it joined the second part of the run ("O4b") which began on 10 April 2024, with a sensitivity of 50 to 55 Mpc. In June 2024, it was announced that the O4 run would last until 9 June 2025, to get more preparation for the O5 upgrades. Future. 
Following the O4 run, the detector will once again be shut down to undergo upgrades, including an improvement in the coating of the mirrors. A fifth observing run (O5) is currently planned to begin around June 2027; the target sensitivity for Virgo, which was originally set to be 150–260 Mpc, is currently being redefined in light of the performance during O4; plans to enter the O5 run are expected to be known before the end of 2024. No official plans have been announced for the future of the Virgo installations following the O5 period, although projects for further improving the detectors have been suggested; the current plans of the collaboration are referred to as the Virgo_nEXT project. Science case. Virgo is designed to look for gravitational waves emitted by astrophysical sources across the universe, which can be broadly classified into three types: transient signals, most notably the coalescences of compact objects (black holes and neutron stars), but also bursts from violent events such as supernovae; continuous, nearly monochromatic signals, emitted for instance by rotating neutron stars that are not perfectly axisymmetric; and a stochastic background, resulting from the superposition of many unresolved sources or from processes in the early universe. The detection of these sources gives a new way to observe them (often carrying different information than more classical ways, e.g. using telescopes), and makes it possible to probe fundamental properties of gravity, such as the polarization of gravitational waves, possible gravitational lensing, or more generally whether the observed signals are correctly described by general relativity. It also provides a way to measure the Hubble constant. Instrument. Principle. In general relativity, a gravitational wave is a space-time perturbation which propagates at the speed of light. It slightly curves space-time, which locally changes the light path. Concretely, it can be detected using a Michelson interferometer design, where a laser is divided into two beams travelling in orthogonal directions, bouncing on a mirror located at the end of each arm. As the gravitational wave passes, it alters the path of the two beams in a different manner; they are then recombined, and the resulting interferometric pattern is measured using a photodiode. As the induced deformation is extremely small, the design requires an excellent precision in the position of the mirrors, the stability of the laser, the measurements, and a very good isolation from the outside world to reduce the amount of noise. Laser and injection system. The laser is the light source of the experiment. It must be powerful, while extremely stable in frequency and amplitude. To meet all these (somewhat opposing) specifications, the beam starts from a very low power, yet very stable, laser. The light from this laser passes through several amplifiers which enhance its power by a factor of 100. A 50 W output power was achieved for the last configuration of the initial Virgo detector, and later reached 100 W during the O3 run, following the Advanced Virgo upgrades; it is expected to be upgraded to 130 W at the beginning of the O4 run. The original Virgo detector used a master-slave laser system, where a low-power but very stable "master" laser is used to stabilize a high-powered "slave" laser. The solution retained for Advanced Virgo is a fiber laser with an amplification stage made of fibers as well, to improve the robustness of the system; in its final configuration, it is planned to coherently combine the light of two lasers in order to achieve the required power. The wavelength of the laser is 1064 nanometres, in both the original and Advanced Virgo configurations. 
This laser is sent into the interferometer after passing through the injection system, which further ensures the stability of the beam, adjusts its shape and power, and positions it correctly for entering the interferometer. Key components of the injection system include the input mode cleaner (a 140-metre-long cavity made for improving the beam quality, by stabilizing the frequency, removing light propagating in an unwanted way and reducing the effect of misalignment of the laser), a Faraday isolator preventing any light from returning to the laser, and a mode-matching telescope, which adapts the size and position of the beam right before it enters the interferometer. Mirrors. The large mirrors located in each arm are the most critical optics of the interferometer. They include the two end mirrors, located at the ends of the 3-km interferometer arms, and the two input mirrors, located near the beginning of the arms. Together, these mirrors make a resonant optical cavity in each arm, where the light bounces thousands of times before returning to the beam splitter, maximizing the effect of the signal on the laser path. It also makes it possible to increase the power of the light circulating in the arms. These mirrors have been specifically designed for Virgo and are made from state-of-the-art technologies. They are cylinders 35 cm in diameter and 20 cm thick, made from the purest glass in the world. The mirrors are polished to the atomic level to avoid diffusing (and hence losing) any light. Finally, a reflective coating (a Bragg reflector made with ion beam sputtering) is added. The mirrors located at the end of the arms reflect almost all incoming light; less than 0.002% of the light is lost at each reflection. In addition, two other mirrors are present in the final design: the power recycling mirror, located between the laser and the beam splitter, which sends the light travelling back towards the laser into the interferometer again, increasing the power circulating in the arms; and the signal recycling mirror, located between the beam splitter and the detection system, which reflects part of the outgoing light back into the interferometer to enhance the signal. Superattenuators. To mitigate the seismic noise which could propagate up to the mirrors, shaking them and hence obscuring potential gravitational wave signals, the large mirrors are suspended by a complex system. All of the main mirrors are suspended by four thin fibers made of silica which are attached to a series of attenuators. This chain of suspension, called the "superattenuator", is close to 8 meters high and is also under vacuum. The superattenuators not only limit the disturbances on the mirrors; they also allow the mirror position and orientation to be precisely steered. The optical table where the injection optics used to shape the laser beam are located, as well as the benches used for the light detection, are also suspended and under vacuum, to limit the seismic and acoustic noises. In the Advanced Virgo configuration, the whole instrumentation used to detect gravitational wave signals and to steer the interferometer (photodiodes, cameras, and the associated electronics) is also installed on several suspended benches, and under vacuum. The design of the superattenuators is mainly based on the passive attenuation of the seismic noise, which is achieved by chaining several pendula, each acting as a harmonic oscillator. They are characterized by a resonance frequency (which diminishes with the length of the pendulum) above which the noise will be dampened; chaining several pendula makes it possible to reduce the noise by twelve orders of magnitude, at the cost of introducing multiple, collective resonance frequencies, which are at higher frequencies than that of a single long pendulum. 
In the current design, the highest resonance frequency is around 2 Hz, providing a meaningful noise reduction starting at 4 Hz, and reaching the level needed for detecting gravitational waves around 10 Hz. A limit of the system is that the noise in the resonance frequency band (below 2 Hz) is not filtered and can generate large oscillations; this is mitigated by an active damping system, including sensors measuring the seismic noise and actuators controlling the superattenuator to counteract the noise. Detection system. Part of the light circulating in the arm cavities is sent towards the detection system by the beam splitter. In its optimal configuration, the interferometer works close to the "dark fringe", meaning that very little light is sent towards the output (most of it is sent back to the input, to be collected by the power recycling mirror). A fraction of this light is reflected back by the signal recycling mirror, and the rest is collected by the detection system. It first passes through the output mode cleaner, which allows to filter the so-called "high-order modes" (light propagating in an unwanted way, typically introduced by small defects in the mirrors, and susceptible to degrade the measurement), before reaching the photodiodes, which measure the light intensity. Both the output mode cleaner and the photodiodes are suspended and under vacuum. Starting with the O3 run, a squeezed vacuum source was introduced to reduce the quantum noise, which is one of the main limitations to sensitivity. When replacing the standard vacuum by a squeezed vacuum, the fluctuations of a quantity are decreased, at the expense of increasing the fluctuations of the other quantity due to Heisenberg's uncertainty principle. In the case of Virgo, the two quantities are the amplitude and the phase of the light. The idea of using squeezed vacuum was first proposed in 1981 by Carlton Caves, during the infancy of gravitational wave detectors. During the O3 run, frequency-independent squeezing was implemented, meaning that the squeezing is identical at all frequencies; it was used to reduce the shot noise (dominant at high frequencies) and increase the radiation pressure noise (dominant at low frequencies), as the latter was not limiting the instrument's sensitivity. Due to the addition of the squeezed vacuum injection, the quantum noise was reduced by 3.2 dB at high frequencies, resulting in an increase of the range of the detector by 5–8%. Currently, more sophisticated squeezed states are produced by combining the technology from O3 with a new 285 m long cavity, known as the filter cavity. This technology is known as frequency-dependent squeezing, and helps reduce the shot noise at high frequencies (where radiation pressure noise is not relevant), and reduce the radiation pressure noise at low frequencies (where shot noise is low). Infrastructure. Seen from the air, the Virgo detector has a characteristic "L" shape with its two 3-km-long perpendicular arms. The arm "tunnels" house vacuum pipes in which the laser beams are travelling. Virgo is the largest ultra-high vacuum installation in Europe, with a total volume of 6,800 cubic meters. The two 3-km arms are made of a long steel pipe 1.2m in diameter in which the target residual pressure is about 1 thousandth of a billionth of an atmosphere (improving by a factor of 100 from the original Virgo level). Thus, the residual gas molecules (mainly hydrogen and water) have a limited impact on the path of the laser beams. 
Large gate valves are located at both ends of the arms so that work can be done in the mirror vacuum towers without breaking an arm's ultra-high vacuum. The towers containing the mirrors and attenuators are themselves split in two sections with different pressures. The tubes undergo a process called baking, where they are heated at 150°C to remove unwanted particles stuck on the surfaces; while the towers were also baked-out in the initial Virgo design, cryogenic traps are now used to prevent contamination. Due to the high power in the interferometer, the mirrors are susceptible to thermal effects due to the heating induced by the laser (despite having an extremely low absorption). These thermal effects can take the shape of a deformation of the surface due to dilation, or a change in the refractive index of the substrate; this results in power escaping from the interferometer and in perturbations of the signal. These two effects are accounted for by the thermal compensation system (TCS), which includes sensors called Hartmann wavefront sensors (HWS), used to measure the optical aberration through an auxiliary light source, and two actuators: CO2 lasers, which selectively heat parts of the mirror to correct the defects, and ring heaters, which precisely adjust the radius of curvature of the mirror. The system also corrects the "cold defects", which are permanent defects introduced during the mirror manufacturing. During the O3 run, the TCS was able to increase the power circulating inside the interferometer by 15%, and decrease the power leaving the interferometer by a factor of 2. Another important component is the system for controlling stray light, which refers to any light leaving the designated path of the interferometer, either by scattering on a surface or from unwanted reflection. The recombination of this stray light with the main beam of the interferometer can be a significant source of noise, and is often hard to track and to model. Most of the efforts to mitigate stray light are based on absorbing plates called "baffles", placed near the optics as well as within the tubes; additional precautions are needed to prevent the baffles from having an effect on the interferometer operation. To estimate properly the response of the detector to gravitational waves and thus correctly reconstruct the signal, a calibration step is required, which involves moving the mirrors in a controlled way and measuring the result. During the initial Virgo era, this was primarily achieved by agitating one of the pendulum to which the mirror is suspended using coils to generate a magnetic field interacting with magnets fixed to the pendulum. This technique was employed until O2. For O3, the main calibration method became the photon calibration ("PCal") which had until then been used as a secondary method to validate the results; it uses an auxiliary laser to displace the mirror "via" radiation pressure. In addition, a new method called Newtonian calibration ("NCal") has been introduced at the end of O2 and is now used to validate the PCal; it relies on gravity to move the mirror, by placing a rotating mass at a specific distance of the mirror. Finally, the instrument requires an efficient data acquisition system. This system is in charge of managing the data measured at the output of the interferometer and from the many sensors present on the site, writing it in files, and distributing the files for data analysis. 
To this end, dedicated hardware and software have been developed to accommodate the specific needs of Virgo. Noise and sensitivity. Noise sources. Due to the precision required in the measurement, the Virgo detector is sensitive to several sources of noise which limit its ability to detect gravitational wave signals. Some of these sources correspond to large frequency ranges and limit the overall sensitivity of the detector, such as quantum noise (shot noise at high frequencies and radiation pressure noise at low frequencies), thermal noise of the mirrors and their suspensions, and residual seismic noise. In addition to these broad noise sources, some other ones may affect specific frequencies. These notably include a source at 50 Hz (as well as harmonics at 100, 150, and 200 Hz), corresponding to the frequency of the European power grid; so-called "violin modes" at 300 Hz (and several harmonics), corresponding to the resonance frequency of the suspension fibers (which can vibrate at a specific frequency just as the strings of a violin do); and calibration lines, appearing when mirrors are moved for calibration. Additional noise sources may also have a short-term impact—bad weather or earthquakes may temporarily increase the noise level. Finally, several short-lived artifacts may appear in the data due to many possible instrumental issues; these are usually referred to as "glitches". It is estimated that about 20% of the detected events are impacted by glitches, requiring specific data processing methods to mitigate their impact. Detector sensitivity. A detector like Virgo is characterized by its sensitivity, which provides information about the smallest signal the instrument could detect. As the sensitivity depends on the frequency, it is usually represented as a curve corresponding to the noise power spectrum (or often the amplitude spectrum, which is the square root of the power spectrum); the lower the curve, the better the sensitivity. Virgo is a wide band detector whose sensitivity ranges from a few Hz up to 10 kHz; the image attached shows an example of a Virgo sensitivity curve from 2011, plotted using a log-log scale. The most common measure for the sensitivity of a gravitational wave detector is the "horizon distance", defined as the distance at which a binary neutron star with masses 1.4 M☉–1.4 M☉ (where M☉ is the solar mass) produces a signal-to-noise ratio of 8 in the detector. It is generally expressed in megaparsecs. For instance, the range for Virgo during the O3 run was between 40 and 50 Mpc. This range is only an indicator and does not represent a maximal range for the detector; signals from more massive sources will have a larger amplitude, and can thus be detected from further away. Calculations show that the detector sensitivity roughly scales as formula_0, where formula_1 is the arm cavity length and formula_2 the laser power on the beam splitter. To improve it, these two quantities must be increased. This is achieved by having long arms, using optical cavities inside the arm to maximize the exposure to the signal, and implementing power recycling to increase the power in the arms. Data analysis. An important part of the Virgo collaboration's resources is dedicated to the development and deployment of data analysis software designed to process the output of the detector. Apart from the data acquisition software and the tools for distributing the data, this effort is mostly shared with members of the LIGO and KAGRA collaborations, as part of the LIGO-Virgo-KAGRA (LVK) collaboration. 
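As a rough numerical illustration of the sensitivity scaling quoted above (a sketch with round, hypothetical numbers, not an official Virgo tool), the following Python lines compute the relative noise level of a modified configuration with respect to a reference one, assuming only the 1/(L√P) dependence and leaving all other noise sources fixed; lower values mean a quieter detector and hence a larger range.

from math import sqrt

def relative_noise(arm_length_m, power_w, ref_length_m=3000.0, ref_power_w=100.0):
    # Noise amplitude relative to a reference configuration, assuming the
    # broadband sensitivity scales as 1 / (L * sqrt(P)) and nothing else changes.
    return (ref_length_m / arm_length_m) * sqrt(ref_power_w / power_w)

print(relative_noise(3000.0, 200.0))  # doubling the power: noise drops by ~1/sqrt(2), about 0.71
print(relative_noise(6000.0, 100.0))  # doubling the arm length: noise drops by a factor of 2 (0.5)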
The data from the detector is initially only available to LVK members; segments of data around detected events are released at the time of publication of the related paper, and the full data is released after a proprietary period, currently lasting 18 months. During the third observing run (O3), this resulted in two separate data releases (O3a and O3b), corresponding to the first six months and last six months of the run respectively. The data is then available to anyone on the Gravitational Wave Open Science Center (GWOSC) platform. The analysis of the data requires a variety of different techniques, targeting the different types of sources. The major part of the effort is dedicated to the detection and analysis of mergers of compact objects, the only type of source detected up until now. Several different analysis pipelines run on the data searching for this type of event, and a dedicated infrastructure is used to issue alerts to the wider astronomical community. Other efforts are carried out after the data taking period ("offline"), including searches for continuous sources or for a stochastic background, as well as deeper analysis of the detected events. Scientific results. The first detection of a gravitational wave signal by Virgo took place during the second observing run (O2) of the "Advanced" era, as only the LIGO detectors were operating during the first observing run. The event, named GW170814, was a coalescence between two black holes, and also the first event to be detected by three different detectors, allowing for its localization to be greatly improved compared to the events from the first observing run. It also allowed for the first conclusive measure of gravitational wave polarizations, providing evidence against the existence of polarizations other than the ones predicted by general relativity. It was soon followed by the more famous GW170817, the first merger of two neutron stars detected by the gravitational wave network, and as of January 2023 the only event with a confirmed detection of an electromagnetic counterpart, both in gamma rays and in optical telescopes, and later in the radio and X-ray domains. While no signal was observed in Virgo, this absence was crucial for putting tighter constraints on the localization of the event. This event had tremendous repercussions in the astronomical community, involving more than 4000 astronomers, improving the understanding of neutron star mergers, and putting very tight constraints on the speed of gravity. Several searches for continuous gravitational waves have been performed on data from the past runs. For the O3 run, these include an all-sky search, targeted searches toward Scorpius X-1 and several known pulsars (including the Crab and Vela pulsars), and directed searches towards the supernova remnants Cassiopeia A and Vela Jr. and the Galactic Center. While none of the searches managed to identify a signal, this allowed upper limits to be set on some parameters; in particular, it was found that the deviation from a perfect spinning ball for nearby known pulsars is at most of the order of 1 mm. Virgo was included in the latest search for a gravitational wave background along with LIGO, combining the results of O3 with those from the O1 and O2 runs (which only used LIGO data). No stochastic background was observed, improving previous constraints on the energy of the background by an order of magnitude. 
Constraints on the Hubble constant have also been obtained; the current best estimate is 68 km s−1 Mpc−1, combining results from binary black holes and from the GW170817 event. This result is consistent with other estimates of the constant, but not precise enough to resolve the tension regarding its exact value. Outreach. The Virgo collaboration participates in several activities promoting communication and education on gravitational waves for the general public. One of the most prominent activities is the organization of guided tours of the Virgo facilities for schools, universities, and the general public; however, many of the outreach activities take place outside the Virgo site. These include educational activities such as public lectures and courses about Virgo, including some aimed at school classes, as well as participation in several science festivals, which involves the development of methods and devices for the popularization of gravitational waves (and related topics). The collaboration is also involved in several artistic projects, ranging from visual projects such as "The Rhythm of Space" at the Museo della Grafica in Pisa, or "On Air" at the Palais de Tokyo, to musical ones with different concerts. It also takes part in activities promoting gender equality in science, for instance by highlighting the women working in Virgo in communications to the general public. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\frac{1}{L\\times\\sqrt{P}}" }, { "math_id": 1, "text": "L" }, { "math_id": 2, "text": " P " } ]
https://en.wikipedia.org/wiki?curid=6762618
6763
Cistron
Region of DNA equaling a gene as defined by complementation test A cistron is a region of DNA that is conceptually equivalent to some definitions of a gene, such that the terms are synonymous from certain viewpoints, especially with regard to the molecular gene as contrasted with the Mendelian gene. The question of how large a segment of DNA constitutes a unit of selection governs whether cistrons are the same thing as genes. The word "cistron" is used to emphasize that molecular genes exhibit a specific behavior in a complementation test (cis-trans test); distinct positions (or loci) within a genome are cistronic. History. The words "cistron" and "gene" were coined before the advancing state of biology made it clear to many people that the concepts they refer to, at least in some senses of the word "gene", are either equivalent or nearly so. The same historical naming practices are responsible for many of the synonyms in the life sciences. The term "cistron" was coined by Seymour Benzer in an article entitled "The elementary units of heredity". The cistron was defined by an operational test applicable to most organisms that is sometimes referred to as a cis-trans test, but more often as a complementation test. Richard Dawkins in his influential book "The Selfish Gene" argues "against" the cistron being the unit of selection and against it being the best definition of a gene. (He also argues against group selection.) He does not argue against the existence of cistrons, or their being elementary, but rather against the idea that natural selection selects them; he argues that it used to, back in earlier eras of life's development, but not anymore. He defines a gene as a larger unit, which others may now call gene clusters, as the unit of selection. He also defines replicators, more general than cistrons and genes, in this gene-centered view of evolution. Definition. Defining a cistron as a segment of DNA coding for a polypeptide, the structural gene in a transcription unit can be described as monocistronic (mostly in eukaryotes) or polycistronic (mostly in bacteria and other prokaryotes). For example, suppose a mutation at a chromosome position formula_0 is responsible for a change in a recessive trait in a diploid organism (where chromosomes come in pairs). We say that the mutation is recessive because the organism will exhibit the wild type phenotype (ordinary trait) unless both chromosomes of a pair have the mutation (homozygous mutation). Similarly, suppose a mutation at another position, formula_1, is responsible for the same recessive trait. The positions formula_0 and formula_1 are said to be within the same cistron when an organism that has the mutation at formula_0 on one chromosome and the mutation at position formula_1 on the paired chromosome exhibits the recessive trait even though the organism is not homozygous for either mutation. When instead the wild type trait is expressed, the positions are said to belong to distinct cistrons / genes. Simply put, mutations in the same cistron will not complement, whereas mutations in different cistrons may complement (see Benzer's T4 bacteriophage experiments on the T4 rII system). For example, an operon is a stretch of DNA that is transcribed to create a contiguous segment of RNA, but contains more than one cistron / gene. The operon is said to be polycistronic, whereas ordinary genes are said to be monocistronic. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
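As an illustrative toy model of the complementation logic described above (the mutation names and cistron assignments below are hypothetical, not taken from Benzer's data), a trans heterozygote shows the mutant phenotype only when both recessive mutations fall in the same cistron:

def phenotype(mutation_a, mutation_b, cistron_of):
    # Cis-trans (complementation) test for two recessive mutations carried on
    # opposite chromosomes of a diploid: same cistron -> no complementation.
    if cistron_of[mutation_a] == cistron_of[mutation_b]:
        return "mutant"       # the mutations fail to complement
    return "wild type"        # the mutations complement each other

cistron_of = {"x": "cistron A", "y": "cistron A", "z": "cistron B"}  # hypothetical assignment
print(phenotype("x", "y", cistron_of))  # mutant: x and y lie in the same cistron
print(phenotype("x", "z", cistron_of))  # wild type: x and z lie in different cistrons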
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": "y" } ]
https://en.wikipedia.org/wiki?curid=6763
67630778
Generalized suffix array
In computer science, a generalized suffix array (GSA) is a suffix array containing all suffixes for a set of strings. Given the set of strings formula_0 of total length formula_1, it is a lexicographically sorted array of all suffixes of each string in formula_2. It is primarily used in bioinformatics and string processing. Functionality. The functionality of a generalized suffix array is as follows: A generalized suffix array can be generated for a generalized suffix tree. Compared to a generalized suffix tree, a generalized suffix array requires more time to construct but uses less space. Construction Algorithms and Implementations. Algorithms and tools for constructing a generalized suffix array include: Solving the Pattern Matching Problem. Generalized suffix arrays can be used to solve the pattern matching problem: given a pattern formula_9 and a text formula_10, the occurrences of formula_9 are located by a binary search over the lexicographically sorted suffixes, where each of the formula_16 probes compares at most formula_17 characters of the pattern against a suffix. The runtime of the algorithm is formula_19. By comparison, solving this problem using suffix trees takes formula_20 time. Note that with a generalized suffix array, the space required is smaller compared to a suffix tree, since the algorithm only requires space for formula_1 words and the space to store the string. As mentioned above, by optionally keeping track of formula_18 (longest common prefix) information, which uses slightly more space, the running time of the algorithm can be improved to formula_21. References. &lt;templatestyles src="Reflist/styles.css" /&gt; External links. Generalized enhanced suffix array construction in external memory
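As a concrete sketch of the binary-search lookup described above (a naive construction by plain sorting, not one of the efficient construction algorithms from the literature; the key argument of bisect requires Python 3.10+), a generalized suffix array can be represented as a sorted list of (string index, suffix start) pairs:

from bisect import bisect_left, bisect_right

def generalized_suffix_array(strings):
    # Naive construction: sort all (string index, suffix start) pairs by the
    # suffix they denote; efficient algorithms achieve the bounds cited above.
    suffixes = [(i, j) for i, s in enumerate(strings) for j in range(len(s))]
    return sorted(suffixes, key=lambda p: strings[p[0]][p[1]:])

def find_occurrences(strings, gsa, pattern):
    # Binary search: each probe compares at most len(pattern) characters,
    # giving the O(m log n) query time quoted above.
    key = lambda p: strings[p[0]][p[1]:p[1] + len(pattern)]
    lo = bisect_left(gsa, pattern, key=key)
    hi = bisect_right(gsa, pattern, key=key)
    return gsa[lo:hi]

strings = ["banana", "bandana"]
gsa = generalized_suffix_array(strings)
print(find_occurrences(strings, gsa, "ana"))  # hits in both strings, as (string, position) pairs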
[ { "math_id": 0, "text": "S = S_1, S_2, S_3, ..., S_k" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "S" }, { "math_id": 3, "text": "(i, j)" }, { "math_id": 4, "text": "j" }, { "math_id": 5, "text": "s_i" }, { "math_id": 6, "text": "\\mathcal{O}(N \\log n)" }, { "math_id": 7, "text": "\\mathcal{O}(N)" }, { "math_id": 8, "text": "N" }, { "math_id": 9, "text": "P" }, { "math_id": 10, "text": "T" }, { "math_id": 11, "text": "G" }, { "math_id": 12, "text": "i" }, { "math_id": 13, "text": "suff_G[i]" }, { "math_id": 14, "text": "j(\\geq i)" }, { "math_id": 15, "text": "G[i..j]" }, { "math_id": 16, "text": "\\Theta (log n)" }, { "math_id": 17, "text": "|P| = m" }, { "math_id": 18, "text": "lcp" }, { "math_id": 19, "text": "\\Theta (m log n)" }, { "math_id": 20, "text": "\\Theta (m)" }, { "math_id": 21, "text": "\\Theta (m+log n)" }, { "math_id": 22, "text": "\\Theta (n^2)" } ]
https://en.wikipedia.org/wiki?curid=67630778
676328
Graph homomorphism
A structure-preserving correspondence between node-link graphs In the mathematical field of graph theory, a graph homomorphism is a mapping between two graphs that respects their structure. More concretely, it is a function between the vertex sets of two graphs that maps adjacent vertices to adjacent vertices. Homomorphisms generalize various notions of graph colorings and allow the expression of an important class of constraint satisfaction problems, such as certain scheduling or frequency assignment problems. The fact that homomorphisms can be composed leads to rich algebraic structures: a preorder on graphs, a distributive lattice, and a category (one for undirected graphs and one for directed graphs). The computational complexity of finding a homomorphism between given graphs is prohibitive in general, but a lot is known about special cases that are solvable in polynomial time. Boundaries between tractable and intractable cases have been an active area of research. Definitions. In this article, unless stated otherwise, "graphs" are finite, undirected graphs with loops allowed, but multiple edges (parallel edges) disallowed. A graph homomorphism "f"  from a graph formula_0 to a graph formula_1, written "f" : "G" → "H" is a function from formula_2 to formula_3 that preserves edges. Formally, formula_4 implies formula_5, for all pairs of vertices formula_6 in formula_2. If there exists any homomorphism from "G" to "H", then "G" is said to be homomorphic to "H" or "H"-colorable. This is often denoted as just "G" → "H" . The above definition is extended to directed graphs. Then, for a homomorphism "f" : "G" → "H", ("f"("u"),"f"("v")) is an arc (directed edge) of "H" whenever ("u","v") is an arc of "G". There is an injective homomorphism from "G" to "H" (i.e., one that maps distinct vertices in "G" to distinct vertices in "H") if and only if "G" is isomorphic to a subgraph of "H". If a homomorphism "f" : "G" → "H" is a bijection, and its inverse function "f" −1 is also a graph homomorphism, then "f" is a graph isomorphism. Covering maps are a special kind of homomorphisms that mirror the definition and many properties of covering maps in topology. They are defined as surjective homomorphisms (i.e., something maps to each vertex) that are also locally bijective, that is, a bijection on the neighbourhood of each vertex. An example is the bipartite double cover, formed from a graph by splitting each vertex "v" into "v0" and "v1" and replacing each edge "u","v" with edges "u0","v1" and "v0","u1". The function mapping "v0" and "v1" in the cover to "v" in the original graph is a homomorphism and a covering map. Graph homeomorphism is a different notion, not related directly to homomorphisms. Roughly speaking, it requires injectivity, but allows mapping edges to paths (not just to edges). Graph minors are a still more relaxed notion. Cores and retracts. Two graphs "G" and "H" are homomorphically equivalent if "G" → "H" and "H" → "G". The maps are not necessarily surjective nor injective. For instance, the complete bipartite graphs "K"2,2 and "K"3,3 are homomorphically equivalent: each map can be defined as taking the left (resp. right) half of the domain graph and mapping to just one vertex in the left (resp. right) half of the image graph. A retraction is a homomorphism "r" from a graph "G" to a subgraph "H" of "G" such that "r"("v") = "v" for each vertex "v" of "H". In this case the subgraph "H" is called a retract of "G". A core is a graph with no homomorphism to any proper subgraph. 
Equivalently, a core can be defined as a graph that does not retract to any proper subgraph. Every graph "G" is homomorphically equivalent to a unique core (up to isomorphism), called "the core" of "G". Notably, this is not true in general for infinite graphs. However, the same definitions apply to directed graphs and a directed graph is also equivalent to a unique core. Every graph and every directed graph contains its core as a retract and as an induced subgraph. For example, all complete graphs "K"n and all odd cycles (cycle graphs of odd length) are cores. Every 3-colorable graph "G" that contains a triangle (that is, has the complete graph "K"3 as a subgraph) is homomorphically equivalent to "K"3. This is because, on one hand, a 3-coloring of "G" is the same as a homomorphism "G" → "K"3, as explained below. On the other hand, every subgraph of "G" trivially admits a homomorphism into "G", implying "K"3 → "G". This also means that "K"3 is the core of any such graph "G". Similarly, every bipartite graph that has at least one edge is equivalent to "K"2. Connection to colorings. A "k"-coloring, for some integer "k", is an assignment of one of "k" colors to each vertex of a graph "G" such that the endpoints of each edge get different colors. The "k"-colorings of "G" correspond exactly to homomorphisms from "G" to the complete graph "K""k". Indeed, the vertices of "K""k" correspond to the "k" colors, and two colors are adjacent as vertices of "K""k" if and only if they are different. Hence a function defines a homomorphism to "K""k" if and only if it maps adjacent vertices of "G" to different colors (i.e., it is a "k"-coloring). In particular, "G" is "k"-colorable if and only if it is "K""k"-colorable. If there are two homomorphisms "G" → "H" and "H" → "K""k", then their composition "G" → "K""k" is also a homomorphism. In other words, if a graph "H" can be colored with "k" colors, and there is a homomorphism from "G" to "H", then "G" can also be "k"-colored. Therefore, "G" → "H" implies χ("G") ≤ χ("H"), where "χ" denotes the chromatic number of a graph (the least "k" for which it is "k"-colorable). Variants. General homomorphisms can also be thought of as a kind of coloring: if the vertices of a fixed graph "H" are the available "colors" and edges of "H" describe which colors are "compatible", then an "H"-coloring of "G" is an assignment of colors to vertices of "G" such that adjacent vertices get compatible colors. Many notions of graph coloring fit into this pattern and can be expressed as graph homomorphisms into different families of graphs. Circular colorings can be defined using homomorphisms into circular complete graphs, refining the usual notion of colorings. Fractional and "b"-fold coloring can be defined using homomorphisms into Kneser graphs. T-colorings correspond to homomorphisms into certain infinite graphs. An oriented coloring of a directed graph is a homomorphism into any oriented graph. An L(2,1)-coloring is a homomorphism into the complement of the path graph that is locally injective, meaning it is required to be injective on the neighbourhood of every vertex. Orientations without long paths. Another interesting connection concerns orientations of graphs. An orientation of an undirected graph "G" is any directed graph obtained by choosing one of the two possible orientations for each edge. An example of an orientation of the complete graph "Kk" is the transitive tournament "A""k" with vertices 1,2,…,"k" and arcs from "i" to "j" whenever "i" &lt; "j". 
A homomorphism between orientations of graphs "G" and "H" yields a homomorphism between the undirected graphs "G" and "H", simply by disregarding the orientations. On the other hand, given a homomorphism "G" → "H" between undirected graphs, any orientation "H"′ of "H" can be pulled back to an orientation "G"′ of "G" so that "G"′ has a homomorphism to "H"′. Therefore, a graph "G" is "k"-colorable (has a homomorphism to "Kk") if and only if some orientation of "G" has a homomorphism to "A""k". A folklore theorem states that for all "k", a directed graph "G" has a homomorphism to "A""k" if and only if it admits no homomorphism from the directed path "P""k"+1. Here "P""n" is the directed graph with vertices 1, 2, …, "n" and edges from "i" to "i" + 1, for "i" = 1, 2, …, "n" − 1. Therefore, a graph is "k"-colorable if and only if it has an orientation that admits no homomorphism from "P""k"+1. This statement can be strengthened slightly to say that a graph is "k"-colorable if and only if some orientation contains no directed path of length "k" (no "P""k"+1 as a subgraph). This is the Gallai–Hasse–Roy–Vitaver theorem. Connection to constraint satisfaction problems. Examples. Some scheduling problems can be modeled as a question about finding graph homomorphisms. As an example, one might want to assign workshop courses to time slots in a calendar so that two courses attended by the same student are not too close to each other in time. The courses form a graph "G", with an edge between any two courses that are attended by some common student. The time slots form a graph "H", with an edge between any two slots that are distant enough in time. For instance, if one wants a cyclical, weekly schedule, such that each student gets their workshop courses on non-consecutive days, then "H" would be the complement graph of "C"7. A graph homomorphism from "G" to "H" is then a schedule assigning courses to time slots, as specified. To add a requirement saying that, e.g., no single student has courses on both Friday and Monday, it suffices to remove the corresponding edge from "H". A simple frequency allocation problem can be specified as follows: a number of transmitters in a wireless network must choose a frequency channel on which they will transmit data. To avoid interference, transmitters that are geographically close should use channels with frequencies that are far apart. If this condition is approximated with a single threshold to define 'geographically close' and 'far apart', then a valid channel choice again corresponds to a graph homomorphism. It should go from the graph of transmitters "G", with edges between pairs that are geographically close, to the graph of channels "H", with edges between channels that are far apart. While this model is rather simplified, it does admit some flexibility: transmitter pairs that are not close but could interfere because of geographical features can be added to the edges of "G". Those that do not communicate at the same time can be removed from it. Similarly, channel pairs that are far apart but exhibit harmonic interference can be removed from the edge set of "H". In each case, these simplified models display many of the issues that have to be handled in practice. Constraint satisfaction problems, which generalize graph homomorphism problems, can express various additional types of conditions (such as individual preferences, or bounds on the number of coinciding assignments). This allows the models to be made more realistic and practical. Formal view. 
Graphs and directed graphs can be viewed as a special case of the far more general notion called relational structures (defined as a set with a tuple of relations on it). Directed graphs are structures with a single binary relation (adjacency) on the domain (the vertex set). Under this view, homomorphisms of such structures are exactly graph homomorphisms. In general, the question of finding a homomorphism from one relational structure to another is a constraint satisfaction problem (CSP). The case of graphs gives a concrete first step that helps to understand more complicated CSPs. Many algorithmic methods for finding graph homomorphisms, like backtracking, constraint propagation and local search, apply to all CSPs. For graphs "G" and "H", the question of whether "G" has a homomorphism to "H" corresponds to a CSP instance with only one kind of constraint, as follows. The "variables" are the vertices of "G" and the "domain" for each variable is the vertex set of "H". An "evaluation" is a function that assigns to each variable an element of the domain, so a function "f" from "V"("G") to "V"("H"). Each edge or arc ("u","v") of "G" then corresponds to the "constraint" (("u","v"), E("H")). This is a constraint expressing that the evaluation should map the arc ("u","v") to a pair ("f"("u"),"f"("v")) that is in the relation "E"("H"), that is, to an arc of "H". A solution to the CSP is an evaluation that respects all constraints, so it is exactly a homomorphism from "G" to "H". Structure of homomorphisms. Compositions of homomorphisms are homomorphisms. In particular, the relation → on graphs is transitive (and reflexive, trivially), so it is a preorder on graphs. Let the equivalence class of a graph "G" under homomorphic equivalence be ["G"]. The equivalence class can also be represented by the unique core in ["G"]. The relation → is a partial order on those equivalence classes; it defines a poset. Let "G" &lt; "H" denote that there is a homomorphism from "G" to "H", but no homomorphism from "H" to "G". The relation → is a dense order, meaning that for all (undirected) graphs "G", "H" such that "G" &lt; "H", there is a graph "K" such that "G" &lt; "K" &lt; "H" (this holds except for the trivial cases "G" = "K"0 or "K"1). For example, between any two complete graphs (except "K"0, "K"1, "K"2) there are infinitely many circular complete graphs, corresponding to rational numbers between natural numbers. The poset of equivalence classes of graphs under homomorphisms is a distributive lattice, with the join of ["G"] and ["H"] defined as (the equivalence class of) the disjoint union ["G" ∪ "H"], and the meet of ["G"] and ["H"] defined as the tensor product ["G" × "H"] (the choice of graphs "G" and "H" representing the equivalence classes ["G"] and ["H"] does not matter). The join-irreducible elements of this lattice are exactly connected graphs. This can be shown using the fact that a homomorphism maps a connected graph into one connected component of the target graph. The meet-irreducible elements of this lattice are exactly the multiplicative graphs. These are the graphs "K" such that a product "G" × "H" has a homomorphism to "K" only when one of "G" or "H" also does. Identifying multiplicative graphs lies at the heart of Hedetniemi's conjecture. Graph homomorphisms also form a category, with graphs as objects and homomorphisms as arrows. The initial object is the empty graph, while the terminal object is the graph with one vertex and one loop at that vertex. 
The tensor product of graphs is the category-theoretic product and the exponential graph is the exponential object for this category. Since these two operations are always defined, the category of graphs is a cartesian closed category. For the same reason, the lattice of equivalence classes of graphs under homomorphisms is in fact a Heyting algebra. For directed graphs the same definitions apply. In particular → is a partial order on equivalence classes of directed graphs. It is distinct from the order → on equivalence classes of undirected graphs, but contains it as a suborder. This is because every undirected graph can be thought of as a directed graph where every arc ("u","v") appears together with its inverse arc ("v","u"), and this does not change the definition of homomorphism. The order → for directed graphs is again a distributive lattice and a Heyting algebra, with join and meet operations defined as before. However, it is not dense. There is also a category with directed graphs as objects and homomorphisms as arrows, which is again a cartesian closed category. Incomparable graphs. There are many incomparable graphs with respect to the homomorphism preorder, that is, pairs of graphs such that neither admits a homomorphism into the other. One way to construct them is to consider the odd girth of a graph "G", the length of its shortest odd-length cycle. The odd girth is, equivalently, the smallest odd number "g" for which there exists a homomorphism from the cycle graph on "g" vertices to "G". For this reason, if "G" → "H", then the odd girth of "G" is greater than or equal to the odd girth of "H". On the other hand, if "G" → "H", then the chromatic number of "G" is less than or equal to the chromatic number of "H". Therefore, if "G" has strictly larger odd girth than "H" and strictly larger chromatic number than "H", then "G" and "H" are incomparable. For example, the Grötzsch graph is 4-chromatic and triangle-free (it has girth 4 and odd girth 5), so it is incomparable to the triangle graph "K"3. Examples of graphs with arbitrarily large values of odd girth and chromatic number are Kneser graphs and generalized Mycielskians. A sequence of such graphs, with simultaneously increasing values of both parameters, gives infinitely many incomparable graphs (an antichain in the homomorphism preorder). Other properties, such as density of the homomorphism preorder, can be proved using such families. Constructions of graphs with large values of chromatic number and girth, not just odd girth, are also possible, but more complicated (see Girth and graph coloring). Among directed graphs, it is much easier to find incomparable pairs. For example, consider the directed cycle graphs "Cn", with vertices 1, 2, …, "n" and edges from "i" to "i" + 1 (for "i" = 1, 2, …, "n" − 1) and from "n" to 1. There is a homomorphism from "Cn" to "Ck" ("n", "k" ≥ 3) if and only if "n" is a multiple of "k". In particular, directed cycle graphs "Cn" with "n" prime are all incomparable. Computational complexity. In the graph homomorphism problem, an instance is a pair of graphs ("G","H") and a solution is a homomorphism from "G" to "H". The general decision problem, asking whether there is any solution, is NP-complete. However, limiting allowed instances gives rise to a variety of different problems, some of which are much easier to solve. 
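For small instances, the general homomorphism problem just described can be illustrated with a brute-force search over all vertex maps; the Python sketch below (illustrative only, exponential in |V(G)|) also covers k-colorability as the special case where H is the complete graph K_k.

from itertools import product

def has_homomorphism(G_vertices, G_edges, H_vertices, H_edges):
    # True if some map f: V(G) -> V(H) sends every edge of G to an edge of H.
    # Edges are 2-tuples; for undirected graphs, list both (u, v) and (v, u),
    # and represent loops as (v, v).
    H_edge_set = set(H_edges)
    for assignment in product(H_vertices, repeat=len(G_vertices)):
        f = dict(zip(G_vertices, assignment))
        if all((f[u], f[v]) in H_edge_set for (u, v) in G_edges):
            return True
    return False

# A 5-cycle maps to K_3 (it is 3-colorable) but not to K_2 (it is not bipartite).
C5_vertices = [0, 1, 2, 3, 4]
C5_edges = [(i, (i + 1) % 5) for i in range(5)] + [((i + 1) % 5, i) for i in range(5)]
K3_edges = [(a, b) for a in range(3) for b in range(3) if a != b]
K2_edges = [(0, 1), (1, 0)]
print(has_homomorphism(C5_vertices, C5_edges, [0, 1, 2], K3_edges))  # True
print(has_homomorphism(C5_vertices, C5_edges, [0, 1], K2_edges))     # False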
Methods that apply when restraining the left side "G" are very different than for the right side "H", but in each case a dichotomy (a sharp boundary between easy and hard cases) is known or conjectured. Homomorphisms to a fixed graph. The homomorphism problem with a fixed graph "H" on the right side of each instance is also called the "H"-coloring problem. When "H" is the complete graph "K""k", this is the graph "k"-coloring problem, which is solvable in polynomial time for "k" = 0, 1, 2, and NP-complete otherwise. In particular, "K"2-colorability of a graph "G" is equivalent to "G" being bipartite, which can be tested in linear time. More generally, whenever "H" is a bipartite graph, "H"-colorability is equivalent to "K"2-colorability (or "K""0" / "K""1"-colorability when "H" is empty/edgeless), hence equally easy to decide. Pavol Hell and Jaroslav Nešetřil proved that, for undirected graphs, no other case is tractable: Hell–Nešetřil theorem (1990): The "H"-coloring problem is in P when "H" is bipartite and NP-complete otherwise. This is also known as the "dichotomy theorem" for (undirected) graph homomorphisms, since it divides "H"-coloring problems into NP-complete or P problems, with no intermediate cases. For directed graphs, the situation is more complicated and in fact equivalent to the much more general question of characterizing the complexity of constraint satisfaction problems. It turns out that "H"-coloring problems for directed graphs are just as general and as diverse as CSPs with any other kinds of constraints. Formally, a (finite) "constraint language" (or "template") "Γ" is a finite domain and a finite set of relations over this domain. CSP("Γ") is the constraint satisfaction problem where instances are only allowed to use constraints in "Γ". Theorem (Feder, Vardi 1998): For every constraint language "Γ", the problem CSP("Γ") is equivalent under polynomial-time reductions to some "H"-coloring problem, for some directed graph "H". Intuitively, this means that every algorithmic technique or complexity result that applies to "H"-coloring problems for directed graphs "H" applies just as well to general CSPs. In particular, one can ask whether the Hell–Nešetřil theorem can be extended to directed graphs. By the above theorem, this is equivalent to the Feder–Vardi conjecture (aka CSP conjecture, dichotomy conjecture) on CSP dichotomy, which states that for every constraint language "Γ", CSP("Γ") is NP-complete or in P. This conjecture was proved in 2017 independently by Dmitry Zhuk and Andrei Bulatov, leading to the following corollary: Corollary (Bulatov 2017; Zhuk 2017): The "H"-coloring problem on directed graphs, for a fixed "H", is either in P or NP-complete. Homomorphisms from a fixed family of graphs. The homomorphism problem with a single fixed graph "G" on left side of input instances can be solved by brute-force in time |"V"("H")|O(|"V"("G")|), so polynomial in the size of the input graph "H". In other words, the problem is trivially in P for graphs "G" of bounded size. The interesting question is then what other properties of "G", beside size, make polynomial algorithms possible. The crucial property turns out to be treewidth, a measure of how tree-like the graph is. For a graph "G" of treewidth at most "k" and a graph "H", the homomorphism problem can be solved in time |"V"("H")|O("k") with a standard dynamic programming approach. In fact, it is enough to assume that the core of "G" has treewidth at most "k". This holds even if the core is not known. 
The exponent in the |"V"("H")|O("k")-time algorithm cannot be lowered significantly: no algorithm with running time |"V"("H")|o(tw("G") /log tw("G")) exists, assuming the exponential time hypothesis (ETH), even if the inputs are restricted to any class of graphs of unbounded treewidth. The ETH is an unproven assumption similar to P ≠ NP, but stronger. Under the same assumption, there are also essentially no other properties that can be used to get polynomial time algorithms. This is formalized as follows: Theorem (Grohe): For a computable class of graphs formula_7, the homomorphism problem for instances formula_8 with formula_9 is in P if and only if graphs in formula_7 have cores of bounded treewidth (assuming ETH). One can ask whether the problem is at least solvable in a time arbitrarily highly dependent on "G", but with a fixed polynomial dependency on the size of "H". The answer is again positive if we limit "G" to a class of graphs with cores of bounded treewidth, and negative for every other class. In the language of parameterized complexity, this formally states that the homomorphism problem in formula_7 parameterized by the size (number of edges) of "G" exhibits a dichotomy. It is fixed-parameter tractable if graphs in formula_7 have cores of bounded treewidth, and W[1]-complete otherwise. The same statements hold more generally for constraint satisfaction problems (or for relational structures, in other words). The only assumption needed is that constraints can involve only a bounded number of variables (all relations are of some bounded arity, 2 in the case of graphs). The relevant parameter is then the treewidth of the primal constraint graph. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "G = (V(G), E(G))" }, { "math_id": 1, "text": " H = (V(H), E(H))" }, { "math_id": 2, "text": "V(G)" }, { "math_id": 3, "text": "V(H)" }, { "math_id": 4, "text": "(u,v) \\in E(G)" }, { "math_id": 5, "text": "(f(u),f(v)) \\in E(H)" }, { "math_id": 6, "text": "u, v" }, { "math_id": 7, "text": "\\mathcal{G}" }, { "math_id": 8, "text": "(G,H)" }, { "math_id": 9, "text": "G \\in \\mathcal{G}" } ]
https://en.wikipedia.org/wiki?curid=676328
67633142
1 Kings 4
1 Kings, chapter 4 1 Kings 4 is the fourth chapter of the Books of Kings in the Hebrew Bible or the First Book of Kings in the Old Testament of the Christian Bible. The book is a compilation of various annals recording the acts of the kings of Israel and Judah by a Deuteronomic compiler in the seventh century BCE, with a supplement added in the sixth century BCE. This chapter belongs to the section focusing on the reign of Solomon over the unified kingdom of Judah and Israel (1 Kings 1 to 11). The focus of this chapter is the reign of Solomon, the king of Israel. Text. This chapter was originally written in the Hebrew language and since the 16th century is divided into 34 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Analysis. The structure of the first 28 verses of this chapter centers on the abundant provision of Solomon's table:
A Solomon's royal officials and officers (4:1–19)
 B Judah and Israel eat, drink and rejoice (4:20)
  C Solomon rules over kingdoms between the Euphrates and Egypt (4:21)
   D Provisions of Solomon's table (4:22–23)
  C' Solomon rules over kingdoms between the Euphrates and Egypt (4:24)
 B' Judah and Israel are living in safety (4:25)
A' Officers provide for Solomon's household (4:26–28)
The section starting from 1 Kings 4:29 to 1 Kings 5:12 is organized as a chiasm:
A Solomon's wisdom (4:29–34)
 B Hiram sends servants to Solomon (5:1)
  C Solomon's message to Hiram (5:2–6)
 B' Hiram's response to Solomon (5:7–11)
A' Solomon's wisdom (5:12)
Solomon's royal officials and officers (4:1–19). The orderly structure of the kingdom shows the quality of Solomon's wisdom, resulting in happy and prosperous citizens, fulfilling not only the Abrahamic promise (Genesis 22:17), but also the fruit of Joshua's conquest (Joshua 11:23). A comparison with David's list of officers (2 Samuel 8:16–18; 20:23–26) demonstrates the continuity and development of the court, with the increase of the number of ministers: some remained (Ado[ni]ram and Jehoshaphat), some were removed (Joab and Abiathar), one was promoted (Benaiah), and some were rewards to his party followers (Zadok's son, Azariah, and Nathan's sons in verse 5). Provincial governors were assigned in the twelve provinces of Israel, mainly of northern Israel, not including Jerusalem and the land of Judah, nor the 'foreign possessions' (verses 7–19). The geographical organization of the list is interesting: beginning with the central mountains of Ephraim, moving north (Naphtali, Asher, and Issachar) and concluding with the south and south-east (Benjamin and Gad). "And Ahishar was over the household: and Adoniram the son of Abda was over the tribute." Solomon's prosperity and wisdom (4:20–34). Under Solomon, the kingdom prospered and had security from the neighboring states from the Euphrates to Egypt, while the state administration had become larger and more centralized since the time of Saul. Verses 29–34 focus on Solomon's wisdom, a full circle of Solomon's history since 1 Kings 3:1–15, more on the academic aspect, instead of as a king or a judge. 
In Solomon's time, science was already international, with the texts of wisdom from the whole of the ancient Near East (as found in archaeology) containing accumulated general knowledge. Solomon is named as the author of many proverbs (verse 32) and songs (Psalm 72, Psalm 127). He also had the ability to enumerate creation in natural order (verse 33; cf. Job 38–39, Psalm 104, and Genesis 1). Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=67633142
6763876
Skorokhod's representation theorem
In mathematics and statistics, Skorokhod's representation theorem is a result that shows that a weakly convergent sequence of probability measures whose limit measure is sufficiently well-behaved can be represented as the distribution/law of a pointwise convergent sequence of random variables defined on a common probability space. It is named for the Ukrainian mathematician A. V. Skorokhod. Statement. Let formula_0 be a sequence of probability measures on a metric space formula_1 such that formula_2 converges weakly to some probability measure formula_3 on formula_1 as formula_4. Suppose also that the support of formula_3 is separable. Then there exist formula_1-valued random variables formula_5 defined on a common probability space formula_6 such that the law of formula_5 is formula_2 for all formula_7 (including formula_8) and such that formula_9 converges to formula_10, formula_11-almost surely.
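In the real-valued case formula_1 = ℝ, a standard way to realize the theorem is the quantile (inverse-CDF) coupling: all the formula_5 are defined on the unit interval with Lebesgue measure by applying each inverse distribution function to a single uniform variable. The following Python sketch (an illustration with normal distributions chosen for this example, not part of the statement) shows the coupled values converging pathwise.

import random
from statistics import NormalDist

def coupled_sample(u, n):
    # Quantile coupling: X_n(u) = F_n^{-1}(u), here for mu_n = Normal(1/n, 1),
    # which converges weakly to mu_inf = Normal(0, 1) as n grows.
    return NormalDist(mu=1.0 / n, sigma=1.0).inv_cdf(u)

u = random.uniform(0.001, 0.999)          # one point of the common probability space
x_inf = NormalDist(0.0, 1.0).inv_cdf(u)   # X_inf(u) = F_inf^{-1}(u)
for n in (1, 10, 100, 1000):
    print(n, abs(coupled_sample(u, n) - x_inf))  # differences shrink to 0 (here exactly 1/n)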
[ { "math_id": 0, "text": "(\\mu_n)_{n \\in \\mathbb{N}}" }, { "math_id": 1, "text": "S" }, { "math_id": 2, "text": "\\mu_n" }, { "math_id": 3, "text": "\\mu_\\infty" }, { "math_id": 4, "text": "n \\to \\infty" }, { "math_id": 5, "text": "X_n" }, { "math_id": 6, "text": "(\\Omega,\\mathcal{F},\\mathbf{P})" }, { "math_id": 7, "text": "n" }, { "math_id": 8, "text": "n=\\infty" }, { "math_id": 9, "text": "(X_n)_{n \\in \\mathbb{N}}" }, { "math_id": 10, "text": "X_\\infty" }, { "math_id": 11, "text": "\\mathbf{P}" } ]
https://en.wikipedia.org/wiki?curid=6763876
676406
Plethysmograph
Medical instrument for measuring changes in volume A plethysmograph is an instrument for measuring changes in volume within an organ or whole body (usually resulting from fluctuations in the amount of blood or air it contains). The word is derived from the Greek "plethysmos" (increasing, enlarging, becoming full), and "graphein" (to write). Organs studied. Lungs. Pulmonary plethysmographs are commonly used to measure the functional residual capacity (FRC) of the lungs—the volume in the lungs when the muscles of respiration are relaxed—and total lung capacity. In a traditional plethysmograph (or "body box"), the test subject, or patient, is placed inside a sealed chamber the size of a small telephone booth with a single mouthpiece. At the end of normal expiration, the mouthpiece is closed. The patient is then asked to make an inspiratory effort. As the patient tries to inhale (a maneuver which looks and feels like panting), the lungs expand, decreasing pressure within the lungs and increasing lung volume. This, in turn, increases the pressure within the box since it is a closed system and the volume of the box compartment has decreased to accommodate the new volume of the subject. With cabinless plethysmography, the patient is seated next to a desktop testing device and inserts the mouthpiece into his/her mouth. The patient takes a series of normal tidal breaths for approximately one minute. During this tidal breathing, a series of rapid interruptions occurs, with a shutter opening and closing, measuring pressure and volume. Lung volume measurements taken with cabinless plethysmography are considered equivalent to those from body plethysmography. Methodological approach. Boyle's Law is used to calculate the unknown volume within the lungs. First, the change in volume of the chest is computed. The initial pressure of the box times its volume is considered equal to the known pressure after expansion times the unknown new volume. Once the new volume is found, the original volume minus the new volume is the change in volume in the box and also the change in volume in the chest. With this information, Boyle's Law is used again to determine the original volume of gas in the chest: the initial volume (unknown) times the initial pressure is equal to the final volume times the final pressure. Starting from this principle, it can be shown that the functional residual capacity is a function of the changes in volume and pressures as follows: formula_0 The difference between full and empty lungs can be used to assess diseases and airway passage restrictions. An obstructive disease will show increased FRC because some airways do not empty normally, while a restrictive disease will show decreased FRC. Body plethysmography is particularly appropriate for patients who have air spaces which do not communicate with the bronchial tree; in such patients helium dilution would give an incorrectly low reading. Another important parameter, which can be calculated with a body plethysmograph, is the airway resistance. During inhalation the chest expands, which increases the pressure within the box. By observing the so-called resistance loop (cabin pressure versus flow), diseases can easily be recognized. If the resistance loop becomes flat, this indicates poor compliance of the lung. COPD, for instance, can easily be recognized because of the characteristic shape of the corresponding resistance loop. Limbs. Some plethysmograph devices are attached to arms, legs or other extremities and used to determine circulatory capacity. 
In water plethysmography an extremity, e.g. an arm, is enclosed in a water-filled chamber where volume changes can be detected. Air plethysmography uses a similar principle but is based on a long air-filled cuff, which is more convenient but less accurate. Another practical device is the mercury-filled strain gauge, used to continuously measure the circumference of the extremity, e.g. at mid calf. Impedance plethysmography is a non-invasive method used to detect venous thrombosis in these areas of the body. Genitals. Another common type of plethysmograph is the penile plethysmograph. This device is used to measure changes in blood flow in the penis. Although some researchers use this device to assess sexual arousal and sexual orientation, courts that have considered penile plethysmography generally rule that the technique is not sufficiently reliable for use in court. An approximate female equivalent to penile plethysmography is vaginal photoplethysmography, which optically measures blood flow in the vagina. Use in preclinical research. Plethysmography is a widely used method in basic and preclinical research to study respiration. Several techniques are used: Respiratory parameters from conscious freely moving animals: whole-body plethysmography. Whole-body plethysmography is used to measure respiratory parameters in conscious unrestrained subjects, including quantification of bronchoconstriction. The standard plethysmograph sizes are for the study of mice, rats and guinea pigs. On request, larger plethysmographs can also be manufactured for other animals, such as rabbits, dogs, pigs, or primates. The plethysmograph has two chambers, each fitted with a pneumotachograph. The subject is placed in one of them (subject chamber) and the other remains empty (reference chamber). The pressure change is measured by a differential pressure transducer with one port exposed to the subject chamber and the other to the reference chamber. Respiratory parameters from conscious restrained animals: double-chamber / head-out plethysmography. The double-chamber plethysmograph (dcp) measures respiratory parameters in a conscious restrained subject, including airway resistance and conductance. Different sizes of plethysmograph exist to study mice, rats or guinea pigs. The head-out configuration is identical to the standard configuration described above except that there is no head chamber. The collar seal is still applied, so that the body chamber remains airtight. With only a thoracic signal, all parameters can be obtained except for specific airway resistance (SRaw) and specific airway conductance (Sgaw). Resistance/compliance from sedated animals. In anesthetized plethysmography, lung resistance and dynamic compliance are measured directly because the subject is anesthetized. Depending on the level of sedation, the subject may be spontaneously breathing (SB configuration) or under mechanical ventilation (MV configuration). A flow signal and a pressure signal are required to calculate compliance and resistance. Cerebral blood flow. Cerebral venous blood flow has recently been studied in an attempt to establish a connection between chronic cerebrospinal venous insufficiency and multiple sclerosis. The study is too small to establish a conclusion, but some association has been shown. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
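As a worked numerical sketch of the Boyle's-law calculation described in the "Lungs" section above (all pressures and volumes here are hypothetical round values, not clinical data):

# Step 1: the gas in the box is compressed as the chest expands against the
# closed mouthpiece, which gives the change in chest volume.
p_box_initial = 1000.0   # arbitrary pressure units
v_box_initial = 600.0    # litres of free gas in the box
p_box_final = 1000.17    # slightly higher pressure during the inspiratory effort

v_box_final = p_box_initial * v_box_initial / p_box_final   # Boyle's law for the box gas
dv_chest = v_box_initial - v_box_final                      # chest (and box) volume change

# Step 2: the gas in the chest expands and its pressure drops; Boyle's law again:
# P1 * FRC = P2 * (FRC + dV)  =>  FRC = P2 * dV / (P1 - P2)
p_mouth_initial = 101.3  # kPa, alveolar pressure before the effort (~barometric)
p_mouth_final = 98.0     # kPa, alveolar pressure during the effort

frc = p_mouth_final * dv_chest / (p_mouth_initial - p_mouth_final)
print(round(dv_chest, 3), "L chest expansion ->", round(frc, 2), "L estimated FRC")

With the mouth pressure close to barometric pressure, this calculation reduces to the approximation formula_0 given above.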
[ { "math_id": 0, "text": "FRC = K_V \\frac{\\Delta V_{box}}{\\Delta P_{mouth}} , K_V \\approx P_{Barometric}" } ]
https://en.wikipedia.org/wiki?curid=676406
676418
Reflection seismology
Explore subsurface properties with seismology Reflection seismology (or seismic reflection) is a method of exploration geophysics that uses the principles of seismology to estimate the properties of the Earth's subsurface from reflected seismic waves. The method requires a controlled seismic source of energy, such as dynamite or Tovex blast, a specialized air gun or a seismic vibrator. Reflection seismology is similar to sonar and echolocation. History. Reflections and refractions of seismic waves at geologic interfaces within the Earth were first observed on recordings of earthquake-generated seismic waves. The basic model of the Earth's deep interior is based on observations of earthquake-generated seismic waves transmitted through the Earth's interior (e.g., Mohorovičić, 1910). The use of human-generated seismic waves to map in detail the geology of the upper few kilometers of the Earth's crust followed shortly thereafter and has developed mainly due to commercial enterprise, particularly the petroleum industry. Seismic reflection exploration grew out of the seismic refraction exploration method, which was used to find oil associated with salt domes. Ludger Mintrop, a German mine surveyor, devised a mechanical seismograph in 1914 that he successfully used to detect salt domes in Germany. He applied for a German patent in 1919 that was issued in 1926. In 1921 he founded the company Seismos, which was hired to conduct seismic exploration in Texas and Mexico, resulting in the first commercial discovery of oil using the refraction seismic method in 1924. The 1924 discovery of the Orchard salt dome in Texas led to a boom in seismic refraction exploration along the Gulf Coast, but by 1930 the method had led to the discovery of most of the shallow Louann Salt domes, and the refraction seismic method faded. After WWI, those involved in the development of commercial applications of seismic waves included Mintrop, Reginald Fessenden, John Clarence Karcher, E. A. Eckhardt, William P. Haseman, and Burton McCollum. In 1920, Haseman, Karcher, Eckhardt and McCollum founded the Geological Engineering Company. In June 1921, Karcher, Haseman, I. Perrine and W. C. Kite recorded the first exploration reflection seismograph near Oklahoma City, Oklahoma. Early reflection seismology was viewed with skepticism by many in the oil industry. An early advocate of the method commented: "As one who personally tried to introduce the method into general consulting practice, the senior writer can definitely recall many times when reflections were not even considered on a par with the divining rod, for at least that device had a background of tradition." The Geological Engineering Company folded due to a drop in the price of oil. In 1925, oil prices had rebounded, and Karcher helped to form Geophysical Research Corporation (GRC) as part of the oil company Amerada. In 1930, Karcher left GRC and helped to found Geophysical Service Incorporated (GSI). GSI was one of the most successful seismic contracting companies for over 50 years and was the parent of an even more successful company, Texas Instruments. Early GSI employee Henry Salvatori left that company in 1933 to found another major seismic contractor, Western Geophysical. Many other companies using reflection seismology in hydrocarbon exploration, hydrology, engineering studies, and other applications have been formed since the method was first invented. 
Major service companies in recent years have included CGG, ION Geophysical, Petroleum Geo-Services, Polarcus, TGS and WesternGeco, but since the oil price crash of 2015, providers of seismic services such as Polarcus have continued to struggle financially, whilst companies that were seismic acquisition industry leaders just ten years ago, such as CGG and WesternGeco, have now withdrawn from seismic acquisition entirely and restructured to focus upon their existing seismic data libraries, seismic data management and non-seismic-related oilfield services. Summary of the method. Seismic waves are mechanical perturbations that travel in the Earth at a speed governed by the acoustic impedance of the medium in which they are travelling. The acoustic (or seismic) impedance, "Z", is defined by the equation: formula_0, where "v" is the seismic wave velocity and "ρ" (Greek "rho") is the density of the rock. When a seismic wave travelling through the Earth encounters an interface between two materials with different acoustic impedances, some of the wave energy will reflect off the interface and some will refract through the interface. At its most basic, the seismic reflection technique consists of generating seismic waves and measuring the time taken for the waves to travel from the source, reflect off an interface and be detected by an array of receivers (as geophones or hydrophones) at the surface. Knowing the travel times from the source to various receivers, and the velocity of the seismic waves, a geophysicist then attempts to reconstruct the pathways of the waves in order to build up an image of the subsurface. In common with other geophysical methods, reflection seismology may be seen as a type of inverse problem. That is, given a set of data collected by experimentation and the physical laws that apply to the experiment, the experimenter wishes to develop an abstract model of the physical system being studied. In the case of reflection seismology, the experimental data are recorded seismograms, and the desired result is a model of the structure and physical properties of the Earth's crust. In common with other types of inverse problems, the results obtained from reflection seismology are usually not unique (more than one model adequately fits the data) and may be sensitive to relatively small errors in data collection, processing, or analysis. For these reasons, great care must be taken when interpreting the results of a reflection seismic survey. The reflection experiment. The general principle of seismic reflection is to send elastic waves (using an energy source such as a dynamite explosion or Vibroseis) into the Earth, where each layer within the Earth reflects a portion of the wave's energy back and allows the rest to refract through. These reflected energy waves are recorded over a predetermined time period (called the record length) by receivers that detect the motion of the ground in which they are placed. On land, the typical receiver used is a small, portable instrument known as a geophone, which converts ground motion into an analogue electrical signal. In water, hydrophones are used, which convert pressure changes into electrical signals. Each receiver's response to a single shot is known as a "trace" and is recorded onto a data storage device, then the shot location is moved along and the process is repeated. Typically, the recorded signals are subjected to significant amounts of signal processing. Reflection and transmission at normal incidence. 
When a seismic P-wave encounters a boundary between two materials with different acoustic impedances, some of the energy in the wave will be reflected at the boundary, while some of the energy will be transmitted through the boundary. The amplitude of the reflected wave is predicted by multiplying the amplitude of the incident wave by the seismic "reflection coefficient" formula_1, determined by the impedance contrast between the two materials. For a wave that hits a boundary at normal incidence (head-on), the expression for the reflection coefficient is simply formula_2, where formula_3 and formula_4 are the impedance of the first and second medium, respectively. Similarly, the amplitude of the incident wave is multiplied by the "transmission coefficient" formula_5 to predict the amplitude of the wave transmitted through the boundary. The formula for the normal-incidence transmission coefficient is formula_6. As the sum of the energies of the reflected and transmitted wave has to be equal to the energy of the incident wave, it is easy to show that formula_7. By observing changes in the strength of reflections, seismologists can infer changes in the seismic impedances. In turn, they use this information to infer changes in the properties of the rocks at the interface, such as density and wave velocity, by means of seismic inversion. Reflection and transmission at non-normal incidence. The situation becomes much more complicated in the case of non-normal incidence, due to mode conversion between P-waves and S-waves, and is described by the Zoeppritz equations. In 1919, Karl Zoeppritz derived 4 equations that determine the amplitudes of reflected and refracted waves at a planar interface for an incident P-wave as a function of the angle of incidence and six independent elastic parameters. These equations have 4 unknowns and can be solved but they do not give an intuitive understanding for how the reflection amplitudes vary with the rock properties involved. The reflection and transmission coefficients, which govern the amplitude of each reflection, vary with angle of incidence and can be used to obtain information about (among many other things) the fluid content of the rock. Practical use of non-normal incidence phenomena, known as AVO (see amplitude versus offset) has been facilitated by theoretical work to derive workable approximations to the Zoeppritz equations and by advances in computer processing capacity. AVO studies attempt with some success to predict the fluid content (oil, gas, or water) of potential reservoirs, to lower the risk of drilling unproductive wells and to identify new petroleum reservoirs. The 3-term simplification of the Zoeppritz equations that is most commonly used was developed in 1985 and is known as the "Shuey equation". A further 2-term simplification is known as the "Shuey approximation", is valid for angles of incidence less than 30 degrees (usually the case in seismic surveys) and is given below: formula_8 where formula_9 = reflection coefficient at zero-offset (normal incidence); formula_10 = AVO gradient, describing reflection behaviour at intermediate offsets and formula_11 = angle of incidence. This equation reduces to that of normal incidence at formula_11=0. Interpretation of reflections. The time it takes for a reflection from a particular boundary to arrive at the geophone is called the "travel time". If the seismic wave velocity in the rock is known, then the travel time may be used to estimate the depth to the reflector. 
For a simple vertically traveling wave, the travel time formula_12 from the surface to the reflector and back is called the Two-Way Time (TWT) and is given by the formula formula_13, where formula_14 is the depth of the reflector and formula_15 is the wave velocity in the rock. A series of apparently related reflections on several seismograms is often referred to as a "reflection event". By correlating reflection events, a seismologist can create an estimated cross-section of the geologic structure that generated the reflections. Sources of noise. In addition to reflections off interfaces within the subsurface, there are a number of other seismic responses detected by receivers that are either unwanted or unneeded: Air wave. The airwave travels directly from the source to the receiver and is an example of coherent noise. It is easily recognizable because it travels at a speed of 330 m/s, the speed of sound in air. Ground roll / Rayleigh wave / Scholte wave / Surface wave. A Rayleigh wave typically propagates along a free surface of a solid, but the elastic constants and density of air are very low compared to those of rocks so the surface of the Earth is approximately a free surface. Low velocity, low frequency and high amplitude Rayleigh waves are frequently present on a seismic record and can obscure signal, degrading overall data quality. They are known within the industry as ‘Ground Roll’ and are an example of coherent noise that can be attenuated with a carefully designed seismic survey. The Scholte wave is similar to ground roll but occurs at the sea-floor (fluid/solid interface) and it can possibly obscure and mask deep reflections in marine seismic records. The velocity of these waves varies with wavelength, so they are said to be dispersive and the shape of the wavetrain varies with distance. Refraction / Head wave / Conical wave. A head wave refracts at an interface, travelling along it, within the lower medium and produces oscillatory motion parallel to the interface. This motion causes a disturbance in the upper medium that is detected on the surface. The same phenomenon is utilised in seismic refraction. Multiple reflection. An event on the seismic record that has incurred more than one reflection is called a "multiple". Multiples can be either short-path (peg-leg) or long-path, depending upon whether they interfere with primary reflections or not. Multiples from the bottom of a body of water and the air-water interface are common in marine seismic data, and are suppressed by seismic processing. Cultural noise. Cultural noise includes noise from weather effects, planes, helicopters, electrical pylons, and ships (in the case of marine surveys), all of which can be detected by the receivers. Electromagnetic noise. Electromagnetic noise, particularly important in urban environments (e.g. from power lines), is difficult to remove. Particular sensors, such as microelectromechanical systems (MEMS), are used to decrease this interference in such environments. 2D versus 3D. The original seismic reflection method involved acquisition along a two-dimensional vertical profile through the crust, now referred to as 2D data. This approach worked well with areas of relatively simple geological structure where dips are low. However, in areas of more complex structure, the 2D technique failed to properly image the subsurface due to out of plane reflections and other artefacts. Spatial aliasing is also an issue with 2D data due to the lack of resolution between the lines.
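Returning to the relations introduced in the preceding sections, the normal-incidence reflection and transmission coefficients, the two-term Shuey approximation and the two-way-time formula lend themselves to a short numerical illustration. The sketch below is not taken from the article or its sources: the layer velocities, densities, AVO gradient and travel-time pick are assumed values chosen only to show the form of the calculations.

```python
import math

# Assumed (illustrative) properties of two layers: velocity in m/s, density in kg/m^3
v1, rho1 = 2500.0, 2200.0   # upper layer
v2, rho2 = 3500.0, 2400.0   # lower, harder and denser layer

Z1, Z2 = v1 * rho1, v2 * rho2          # acoustic impedances Z = v * rho

# Normal-incidence reflection and transmission coefficients
R0 = (Z2 - Z1) / (Z2 + Z1)
T0 = 1 + R0                            # equivalently 2*Z2 / (Z1 + Z2)

# Energy balance check: R^2/Z1 + T^2/Z2 should equal 1/Z1
assert math.isclose(R0**2 / Z1 + T0**2 / Z2, 1.0 / Z1, rel_tol=1e-9)

# Two-term Shuey approximation R(theta) = R(0) + G * sin^2(theta),
# with an assumed AVO gradient G, valid for incidence angles below ~30 degrees
G = -0.15                              # assumed gradient, for illustration only
for deg in (0, 10, 20, 30):
    theta = math.radians(deg)
    print(f"theta = {deg:2d} deg  R(theta) = {R0 + G * math.sin(theta)**2:+.3f}")

# Two-way time to depth: t = 2*d/V  =>  d = V*t/2 (constant-velocity assumption)
twt = 1.2                              # seconds, assumed travel-time pick
print("reflector depth ~", v1 * twt / 2, "m")
```

The assertion simply confirms numerically the energy-conservation identity quoted above; in practice the coefficients would of course be derived from measured logs rather than assumed values.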
Beginning with initial experiments in the 1960s, the seismic technique explored the possibility of full three-dimensional acquisition and processing. In the late 1970s the first large 3D datasets were acquired and by the 1980s and 1990s this method became widely used. Applications. Reflection seismology is used extensively in a number of fields and its applications can be categorised into three groups, each defined by their depth of investigation: A method similar to reflection seismology which uses electromagnetic instead of elastic waves, and has a smaller depth of penetration, is known as Ground-penetrating radar or GPR. Hydrocarbon exploration. Reflection seismology, more commonly referred to as "seismic reflection" or abbreviated to "seismic" within the hydrocarbon industry, is used by petroleum geologists and geophysicists to map and interpret potential petroleum reservoirs. The size and scale of seismic surveys has increased alongside the significant increases in computer power since the late 20th century. This led the seismic industry from laboriously – and therefore rarely – acquiring small 3D surveys in the 1980s to routinely acquiring large-scale high resolution 3D surveys. The goals and basic principles have remained the same, but the methods have slightly changed over the years. The primary environments for seismic hydrocarbon exploration are land, the transition zone and marine: "Land" – The land environment covers almost every type of terrain that exists on Earth, each bringing its own logistical problems. Examples of this environment are jungle, desert, arctic tundra, forest, urban settings, mountain regions and savannah. "Transition Zone (TZ)" – The transition zone is considered to be the area where the land meets the sea, presenting unique challenges because the water is too shallow for large seismic vessels but too deep for the use of traditional methods of acquisition on land. Examples of this environment are river deltas, swamps and marshes, coral reefs, beach tidal areas and the surf zone. Transition zone seismic crews will often work on land, in the transition zone and in the shallow water marine environment on a single project in order to obtain a complete map of the subsurface. "Marine" – The marine zone is either in shallow water areas (water depths of less than 30 to 40 metres would normally be considered shallow water areas for 3D marine seismic operations) or in the deep water areas normally associated with the seas and oceans (such as the Gulf of Mexico). Seismic data acquisition. Seismic data acquisition is the first of the three distinct stages of seismic exploration, the other two being seismic data processing and seismic interpretation. Seismic surveys are typically designed by National oil companies and International oil companies who hire service companies such as CGG, Petroleum Geo-Services and WesternGeco to acquire them. Another company is then hired to process the data, although this can often be the same company that acquired the survey. Finally the finished seismic volume is delivered to the oil company so that it can be geologically interpreted. Land survey acquisition. Land seismic surveys tend to be large entities, requiring hundreds of tons of equipment and employing anywhere from a few hundred to a few thousand people, deployed over vast areas for many months. There are a number of options available for a controlled seismic source in a land survey and particularly common choices are Vibroseis and dynamite. 
Vibroseis is a non-impulsive source that is cheap and efficient but requires flat ground to operate on, making its use more difficult in undeveloped areas. The method comprises one or more heavy, all-terrain vehicles lowering a steel plate onto the ground, which is then vibrated with a specific frequency distribution and amplitude. It produces a low energy density, allowing it to be used in cities and other built-up areas where dynamite would cause significant damage, though the large weight attached to a Vibroseis truck can cause its own environmental damage. Dynamite is an impulsive source that is regarded as the ideal geophysical source due to it producing an almost perfect impulse function but it has obvious environmental drawbacks. For a long time, it was the only seismic source available until weight dropping was introduced around 1954, allowing geophysicists to make a trade-off between image quality and environmental damage. Compared to Vibroseis, dynamite is also operationally inefficient because each source point needs to be drilled and the dynamite placed in the hole. Unlike in marine seismic surveys, land geometries are not limited to narrow paths of acquisition, meaning that a wide range of offsets and azimuths is usually acquired and the largest challenge is increasing the rate of acquisition. The rate of production is obviously controlled by how fast the source (Vibroseis in this case) can be fired and then move on to the next source location. Attempts have been made to use multiple seismic sources at the same time in order to increase survey efficiency and a successful example of this technique is Independent Simultaneous Sweeping (ISS). A land seismic survey requires substantial logistical support; in addition to the day-to-day seismic operation itself, there must also be support for the main camp for resupply activities, medical support, camp and equipment maintenance tasks, security, personnel crew changes and waste management. Some operations may also operate smaller 'fly' camps that are set up remotely where the distance is too far to travel back to the main camp on a daily basis and these will also need logistical support on a frequent basis. Marine survey acquisition (Towed Streamer). Towed streamer marine seismic surveys are conducted using specialist seismic vessels that tow one or more cables known as streamers just below the surface typically between 5 and 15 metres depending upon the project specification that contain groups of hydrophones (or receiver groups) along their length (see diagram). Modern streamer vessels normally tow multiple streamers astern which can be secured to underwater wings, commonly known as doors or vanes that allow a number of streamers to be towed out wide to the port and starboard side of a vessel. Current streamer towing technology such as seen on the PGS operated Ramform series of vessels built between 2013 and 2017 has pushed the number of streamers up to 24 in total on these vessels. For vessels of this type of capacity, it is not uncommon for a streamer spread across the stern from 'door to door' to be in excess on one nautical mile. The precise configuration of the streamers on any project in terms of streamer length, streamer separation, hydrophone group length and the offset or distance between the source centre and the receivers will be dependent upon the geological area of interest below the sea floor that the client is trying to get data from. 
Streamer vessels also tow high energy sources, principally high pressure air gun arrays that operate at 2000 psi and fire together to create a tuned energy pulse into the seabed from which the reflected energy waves are recorded on the streamer receiver groups. Gun arrays are tuned, that is the frequency response of the resulting air bubble from the array when fired can be changed depending upon the combination and number of guns in a specific array and their individual volumes. Guns can be located individually on an array or can be combined to form clusters. Typically, source arrays have a volume of 2000 cubic inches to 7000 cubic inches, but this will depend upon the specific geology of the survey area. Marine seismic surveys generate a significant quantity of data due to the size of modern towed streamer vessels and their towing capabilities. A seismic vessel with 2 sources and towing a single streamer is known as a "Narrow-Azimuth Towed Streamer" (or NAZ or NATS). By the early 2000s, it was accepted that this type of acquisition was useful for initial exploration but inadequate for development and production, in which wells had to be accurately positioned. This led to the development of the "Multi-Azimuth Towed Streamer" (MAZ) which tried to break the limitations of the linear acquisition pattern of a NATS survey by acquiring a combination of NATS surveys at different azimuths (see diagram). This successfully delivered increased illumination of the subsurface and a better signal to noise ratio. The seismic properties of salt pose an additional problem for marine seismic surveys: it attenuates seismic waves and its structure contains overhangs that are difficult to image. This led to another variation on the NATS survey type, the "wide-azimuth towed streamer" (or WAZ or WATS), which was first tested on the Mad Dog field in 2004. This type of survey involved 1 vessel solely towing a set of 8 streamers and 2 separate vessels towing seismic sources that were located at the start and end of the last receiver line (see diagram). This configuration was "tiled" 4 times, with the receiver vessel moving further away from the source vessels each time and eventually creating the effect of a survey with 4 times the number of streamers. The end result was a seismic dataset with a larger range of wider azimuths, delivering a breakthrough in seismic imaging. These are now the three common types of marine towed streamer seismic surveys. Marine survey acquisition (Ocean Bottom Seismic (OBS)). Marine survey acquisition is not just limited to seismic vessels; it is also possible to lay cables of geophones and hydrophones on the sea bed in a similar way to how cables are used in a land seismic survey, and use a separate source vessel. This method was originally developed out of operational necessity in order to enable seismic surveys to be conducted in areas with obstructions, such as production platforms, without having to compromise the resultant image quality. Ocean bottom cables (OBC) are also extensively used in other areas where a seismic vessel cannot be used, for example in shallow marine (water depth less than 300 m) and transition zone environments, and can be deployed by remotely operated underwater vehicles (ROVs) in deep water when repeatability is valued (see 4D, below).
Conventional OBC surveys use dual-component receivers, combining a pressure sensor (hydrophone) and a vertical particle velocity sensor (vertical geophone), but more recent developments have expanded the method to use four-component sensors i.e. a hydrophone and three orthogonal geophones. Four-component sensors have the advantage of being able to also record shear waves, which do not travel through water but can still contain valuable information. In addition to the operational advantages, OBC also has geophysical advantages over a conventional NATS survey that arise from the increased fold and wider range of azimuths associated with the survey geometry. However, much like a land survey, the wider azimuths and increased fold come at a cost and the ability for large-scale OBC surveys is severely limited. In 2005, ocean bottom nodes (OBN) – an extension of the OBC method that uses battery-powered cableless receivers placed in deep water – was first trialled over the Atlantis Oil Field in a partnership between BP and Fairfield Geotechnologies. The placement of these nodes can be more flexible than the cables in OBC and they are easier to store and deploy due to their smaller size and lower weight. Marine survey acquisition (Ocean Bottom Nodes (OBN)). The development of node technology came as a direct development from that of ocean bottom cable technology, i.e. that ability to place a hydrophone in direct contact with the seafloor to eliminate the seafloor to hydrophone sea water space that exists with towed streamer technology. The ocean bottom hydrophone concept itself is not new and has been used for many years in scientific research, but its rapid use as a data acquisition methodology in oil and gas exploration is relatively recent. Nodes are self-contained 4-component units which include a hydrophone and three horizontal and vertical axis orientation sensors. Their physical dimensions vary depending on the design requirement and the manufacturer, but in general nodes tend to weigh in excess of 10 kilograms per unit to counteract buoyancy issues and to lessen the chance of movement on the seabed due to currents or tides. Nodes are usable in areas where streamer vessels may not be able to safely enter and so for the safe navigation of node vessels and prior to the deployment of nodes, a bathymetry seabed survey is normally conducted of the survey area using side-scan technology to map the seabed topography in detail. This will identify any possible hazards that could impact the safe navigation of node and source vessels and also to identify any issues for node deployment including subsea obstructions, wrecks, oilfield infrastructure or sudden changes in water depths from underwater cliffs, canyons or other locations where nodes may not be stable or not make a good connection to the seabed. Unlike OBC operations, a node vessel does not connect to a node line, whereas ocean bottom cables need to be physically attached to a recorder vessel to record data in real-time. With nodes, until the nodes are recovered and the data reaped from them (reaping is the industry term used to remove data from a recovered node when it is placed within a computerised system that copies the hard drive data from the node), there is an assumption that the data will be recorded as there is no real-time quality control element to a node's operating status as they are self-contained and not connected to any system once they are deployed. 
The technology is now well-established and very reliable and once a node and its battery system has passed all of its set up criteria there is a high degree of confidence that a node will work as specified. Technical downtime during node projects, i.e. individual node failures during deployment are usually in single figures as a percentage of the total nodes deployed. Nodes are powered by either rechargeable internal Lithium-ion battery packs or replaceable non-rechargeable batteries - the design and the specification of the node will determine what battery technology is used. The battery life of a node unit is a critical consideration in the design of a node project; this is because once the battery runs out on a node, the data that has been collected is no longer stored on the solid-state hard drive and all data recorded since it was deployed on the sea floor will be lost. Therefore, a node with a 30-day battery life must be deployed, record data, be recovered and reaped within that 30-day period. This also ties in with the number of nodes that are to be deployed as this is closely related to battery life too. If too many nodes are deployed and the OBN crew's resources are not sufficient to recover these in time or external factors such as adverse weather limits recovery operations, batteries can expire and data can be lost. Disposable or non-rechargeable batteries can also create a significant waste management issue as batteries must be transported to and from an operation and the drained batteries disposed of by a licensed contractor ashore. Another important consideration is that of synchronising the timing of individual node clock units with an internal clock drift correction. Any error in synchronising nodes properly before they are deployed can create unusable data. Because node acquisition is often multi-directional and from a number of sources simultaneously across a 24-hour time frame, for accurate data processing it is vital that all of the nodes are working to the same clock time. The node type and specification will determine the node handling system design and the deployment and recovery modes. At present there are two mainstream approaches; node on a rope and ROV operations. Node on a rope This method requires the node to be attached to a steel wire or a high specification rope. Each node will be evenly spaced along the rope which will have special fittings to securely connect the node to the rope, for example every 50 metres depending upon the prospect design. This rope is then laid by a specialist node vessel using an node handling system, usually with dynamic positioning along a pre-defined node line. The nodes are ‘landed’ onto pre-plotted positions with an agreed and acceptable error radius, for example, a node must be placed within a 12.5 metre radius from the navigation pre-plot positions. They are often accompanied by pingers, small transponders that can be detected by an underwater acoustic positioning transducer which allows a pinging vessel or the node vessel itself to establish a definite sea floor position for each node on deployment. Depending on the contract, pingers can be located on every node or every third node, for example. Pinging and pinging equipment is the industry shorthand for the use of USBL or Ultra-short baseline acoustic positioning systems which are interfaced with vessel based differential GPS or Differential Global Positioning System navigation equipment. 
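The placement tolerance described above, where each node must be landed within an agreed radius of its pre-plot position, is easy to express as a simple check. The sketch below is purely illustrative: the coordinates, the 12.5 metre radius and the function name are assumptions made for the example and are not part of any real navigation or quality-control system.

```python
import math

def within_tolerance(preplot, landed, radius_m=12.5):
    """Return True if a landed node lies within radius_m of its pre-plot position.

    Positions are (easting, northing) pairs in metres on a local grid.
    """
    dx = landed[0] - preplot[0]
    dy = landed[1] - preplot[1]
    return math.hypot(dx, dy) <= radius_m

# Hypothetical pre-plot and as-landed (pinged) positions for three nodes on a rope
preplots = [(0.0, 0.0), (50.0, 0.0), (100.0, 0.0)]      # nominal 50 m spacing
landed   = [(3.2, -4.1), (61.0, 8.5), (98.7, 7.9)]      # acoustically pinged positions

for i, (p, l) in enumerate(zip(preplots, landed)):
    status = "OK" if within_tolerance(p, l) else "outside spec - may need re-lay"
    print(f"node {i}: {status}")
```

In this made-up example the second node lands roughly 14 m from its pre-plot position and would fail a 12.5 m specification, which is the kind of result that can trigger the re-lay decision mentioned in the following paragraph.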
Node lines are usually recovered by anchor or grapple hook dragging to recover the node line back on board the vessel. Handling systems on node vessels are used to store, deploy and recover nodes and their specific design will depend upon the node design. Small nodes can include a manual handling element whereas larger nodes are automatically handled by robotic systems for moving, storing, recharging and reaping nodes. Node vessels also use systems such as spoolers to manage rope lines and rope bins to store the many kilometres of ropes often carried onboard node on a rope vessels. Node on a rope is normally used where there is shallow water within the prospect, for example less than 100 metres or a transition zone area close to a beach. For deeper water operations, a dynamic positioning vessel is used to ensure accurate deployment of nodes, but these larger vessels have a limitation as to how far into shore they can safely navigate into; the usual cutoff will be between 15 and 20 metres water depth depending on the vessel and its in-water equipment. Specialist shallow water boats can then be used for deploying and recovering nodes in water depths as shallow as 1 to 3 metres. These shallow water nodes can then be used to tie-in with geophones on the shore to provide a consistent seismic line transition from water to land. There are some issues with this approach which make them vulnerable to damage or loss on a project and these all must be risk assessed and mitigated. Since nodes connected together on a rope sit on the sea floor unattended: they can be moved due to strong currents, the ropes can snag on seabed obstructions, they can be dragged by third party vessel anchors and caught by trawling fishing vessels. The threat of these types of potential hazards to this equipment should normally be identified and assessed during the project planning phase, especially in oilfield locations where well heads, pipelines and other subsea structures exist and where any contact with these must be avoided, normally achieved by adopting exclusion zones. Since it is possible for node lines to be moved after deployment, the issue of node position on recovery is critical and therefore positioning during both deployment and recovery is a standard navigation quality control check. In some case, node lines may need to be recovered and re-laid if the nodes have moved outside of the contract specifications. ROV deployment This method utilises ROV (remotely operated underwater vehicle) technology to handle and place nodes at their pre-plotted positions. This type of deployment and recovery method uses a basket full of nodes which is lowered into the water. An ROV will connect with the compatible node basket and remove individual nodes from a tray in a pre-defined order. Each node will be placed on its allocated pre-plot position. On recovery, the process works in reverse; the node to be recovered is picked up by the ROV, placed into the node basket tray until the basket is full when it is lifted back to the surface. The basket is recovered onto the node vessel, the nodes are removed from the basket and reaped. ROV operations are normally used for deep water node projects, often in water depths to 3000 metres in the open ocean. However, there are some issues with ROV operations that need to be considered. ROV operations tend to be complex, especially deep water ROV operations and so the periodic maintenance demands may impact upon production. 
Umbilicals and other high-technology spares for ROVs can be extremely expensive, and repairs to ROVs which require onshore or third-party specialist support will stop a node project. Due to extreme water depths, the node deployment and recovery production rate will be much lower due to the time for node basket transit from surface to seafloor and there will almost certainly be weather or sea condition limitations for ROV operations in open ocean areas. The logistics for supporting operations far from shore can also be problematic for regular resupply, bunkering and crew change activities. Time lapse acquisition (4D). Time lapse or 4D surveys are 3D seismic surveys repeated after a period of time; the 4D term refers to the fourth dimension which in this case is time. Time lapse surveys are acquired in order to observe reservoir changes during production and identify areas where there are barriers to flow that may not be detectable in conventional seismic. Time lapse surveys consist of a baseline survey and a monitor or repeat survey, acquired after the field has been in production. Most of these surveys have been repeated NATS surveys as they are cheaper to acquire and most fields historically already had a NATS baseline survey. Some of these surveys are collected using ocean-bottom cables because the cables can be accurately placed in their previous location after being removed. Better repetition of the exact source and receiver location leads to improved repeatability and better signal to noise ratios. A number of 4D surveys have also been set up over fields in which ocean bottom cables have been purchased and permanently deployed. This method can be known as life of field seismic (LoFS) or permanent reservoir monitoring (PRM). 4D seismic surveys using towed streamer technology can be very challenging as the aim of a 4D survey is to repeat the original or baseline survey as accurately as possible. Weather, tides, currents and even the time of year can have a significant impact upon how accurately such a survey can achieve that repeatability goal. OBN has proven to be another very good way to accurately repeat a seismic acquisition. The world's first 4D survey using nodes was acquired over the Atlantis Oil Field in 2009, with the nodes being placed by an ROV in a water depth of 1300–2200 metres to within a few metres of where they were previously placed in 2005. Seismic data processing. There are three main processes in seismic data processing: deconvolution, common-midpoint (CMP) stacking and migration. "Deconvolution" is a process that tries to extract the reflectivity series of the Earth, under the assumption that a seismic trace is just the reflectivity series of the Earth convolved with distorting filters. This process improves temporal resolution by collapsing the seismic wavelet, but it is nonunique unless further information is available such as well logs, or further assumptions are made. "Deconvolution" operations can be cascaded, with each individual deconvolution designed to remove a particular type of distortion. "CMP stacking" is a robust process that uses the fact that a particular location in the subsurface will have been sampled numerous times and at different offsets. This allows a geophysicist to construct a group of traces with a range of offsets that all sample the same subsurface location, known as a "Common Midpoint Gather".
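Before the stacking step itself is described in the next paragraph, a toy numerical sketch shows why gathering and averaging traces helps: averaging several noisy traces that sample the same midpoint suppresses random noise roughly in proportion to the square root of the number of traces (the "fold"). The synthetic wavelet, fold and noise level below are arbitrary choices made for the sketch and do not represent any real dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples, fold = 500, 24            # samples per trace, number of traces in the gather
t = np.arange(n_samples)

# A simple synthetic "reflection": a Ricker-like pulse centred on sample 250
signal = np.exp(-((t - 250) / 8.0) ** 2) * (1 - ((t - 250) / 8.0) ** 2)

# Each trace in the common-midpoint gather = same signal + independent random noise.
# (A real gather would first need a normal-moveout correction to align the event.)
gather = signal + rng.normal(scale=0.5, size=(fold, n_samples))

stack = gather.mean(axis=0)          # the CMP stack: average over the fold

def snr(trace):
    """Crude peak-to-noise estimate using the event sample and a pre-event window."""
    return np.abs(trace[250]) / trace[:200].std()

print("single-trace SNR ~", round(snr(gather[0]), 2))
print("stacked SNR      ~", round(snr(stack), 2))
print("expected gain    ~ sqrt(fold) =", round(np.sqrt(fold), 2))
```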
The average amplitude is then calculated along a time sample, which significantly lowers the random noise but also loses all valuable information about the relationship between seismic amplitude and offset. Less significant processes that are applied shortly before the "CMP stack" are "Normal moveout correction" and "statics correction". Unlike marine seismic data, land seismic data has to be corrected for the elevation differences between the shot and receiver locations. This correction is in the form of a vertical time shift to a flat datum and is known as a "statics correction", but will need further correcting later in the processing sequence because the velocity of the near-surface is not accurately known. This further correction is known as a "residual statics correction." "Seismic migration" is the process by which seismic events are geometrically re-located in either space or time to the location the event occurred in the subsurface rather than the location that it was recorded at the surface, thereby creating a more accurate image of the subsurface. Seismic interpretation. The goal of seismic interpretation is to obtain a coherent geological story from the map of processed seismic reflections. At its most simple level, seismic interpretation involves tracing and correlating along continuous reflectors throughout the 2D or 3D dataset and using these as the basis for the geological interpretation. The aim of this is to produce structural maps that reflect the spatial variation in depth of certain geological layers. Using these maps, hydrocarbon traps can be identified and models of the subsurface can be created that allow volume calculations to be made. However, a seismic dataset rarely gives a picture clear enough to do this. This is mainly because of the vertical and horizontal seismic resolution but often noise and processing difficulties also result in a lower quality picture. Due to this, there is always a degree of uncertainty in a seismic interpretation and a particular dataset could have more than one solution that fits the data. In such a case, more data will be needed to constrain the solution, for example in the form of further seismic acquisition, borehole logging or gravity and magnetic survey data. Similarly to the mentality of a seismic processor, a seismic interpreter is generally encouraged to be optimistic in order to encourage further work rather than the abandonment of the survey area. Seismic interpretation is completed by both geologists and geophysicists, with most seismic interpreters having an understanding of both fields. In hydrocarbon exploration, the features that the interpreter is particularly trying to delineate are the parts that make up a petroleum reservoir – the source rock, the reservoir rock, the seal and trap. Seismic attribute analysis. Seismic attribute analysis involves extracting or deriving a quantity from seismic data that can be analysed in order to enhance information that might be more subtle in a traditional seismic image, leading to a better geological or geophysical interpretation of the data. Examples of attributes that can be analysed include mean amplitude, which can lead to the delineation of bright spots and dim spots, coherency and amplitude versus offset. Attributes that can show the presence of hydrocarbons are called direct hydrocarbon indicators. Crustal studies.
The use of reflection seismology in studies of tectonics and the Earth's crust was pioneered in the 1970s by groups such as the Consortium for Continental Reflection Profiling (COCORP), which inspired similar deep seismic exploration programmes in other countries, such as BIRPS in Great Britain and ECORS in France. The British Institutions Reflection Profiling Syndicate (BIRPS) was started up as a result of oil hydrocarbon exploration in the North Sea. It became clear that there was a lack of understanding of the tectonic processes that had formed the geological structures and sedimentary basins which were being explored. The effort produced some significant results and showed that it is possible to profile features such as thrust faults that penetrate through the crust to the upper mantle with marine seismic surveys. Environmental impact. As with all human activities, seismic reflection surveys have some impact on the Earth's natural environment and both the hydrocarbon industry and environmental groups partake in research to investigate these effects. Land. On land, conducting a seismic survey may require the building of roads, for transporting equipment and personnel, and vegetation may need to be cleared for the deployment of equipment. If the survey is in a relatively undeveloped area, significant habitat disturbance may occur and many governments require seismic companies to follow strict rules regarding destruction of the environment; for example, the use of dynamite as a seismic source may be disallowed. Seismic processing techniques allow for seismic lines to deviate around natural obstacles, or use pre-existing non-straight tracks and trails. With careful planning, this can greatly reduce the environmental impact of a land seismic survey. The more recent use of inertial navigation instruments for land survey instead of theodolites decreased the impact of seismic surveys by allowing survey lines to wind between trees. The potential impact of any seismic survey on land needs to be assessed at the planning stage and managed effectively. Well regulated environments would generally require Environmental and Social Impact Assessment (ESIA) or Environmental Impact Assessment (EIA) reports prior to any work starting. Project planning also needs to recognise what impact, if any, will be left behind once a project has been completed. It is the contractor's and client's responsibility to manage the remediation plan as per the contract and as per the laws where the project has been conducted. Depending upon the size of a project, land seismic operations can have a significant local impact and a sizeable physical footprint, especially where storage facilities, camp utilities, waste management facilities (including black and grey water management), general and seismic vehicle parking areas, workshops and maintenance facilities and living accommodation are required. Contact with local people can cause potential disruptions to their normal lives such as increased noise, 24-hour operations and increased traffic, and these have to be assessed and mitigated. Archeological considerations are also important and project planning must accommodate the legal, cultural and social requirements involved. Specialist techniques can be used to assess safe working distances from buildings and archeological structures to minimise their impact and prevent damage. Marine.
The main environmental concern for marine seismic surveys is the potential for noise associated with the high-energy seismic source to disturb or injure animal life, especially cetaceans such as whales, porpoises, and dolphins, as these mammals use sound as their primary method of communication with one another. High-level and long-duration sound can cause physical damage, such as hearing loss, whereas lower-level noise can cause temporary threshold shifts in hearing, obscuring sounds that are vital to marine life, or behavioural disturbance. A study has shown that migrating humpback whales will leave a minimum 3 km gap between themselves and an operating seismic vessel, with resting humpback whale pods with cows exhibiting increased sensitivity and leaving an increased gap of 7–12 km. Conversely, the study found that male humpback whales were attracted to a single operating airgun as they were believed to have confused the low-frequency sound with that of whale breaching behaviour. In addition to whales, sea turtles, fish and squid all showed alarm and avoidance behaviour in the presence of an approaching seismic source. It is difficult to compare reports on the effects of seismic survey noise on marine life because methods and units are often inadequately documented. The gray whale will avoid its regular migratory and feeding grounds by more than 30 km in areas of seismic testing. Similarly, the breathing of gray whales was shown to be more rapid, indicating discomfort and panic in the whale. It is circumstantial evidence such as this that has led researchers to believe that avoidance and panic might be responsible for increased whale beachings, although research is ongoing into these questions. Even so, airguns are shut down only when cetaceans are seen at very close range, usually under 1 km. Offering another point of view, a joint paper from the International Association of Geophysical Contractors (IAGC) and the International Association of Oil and Gas Producers (IOGP) argues that the noise created by marine seismic surveys is comparable to natural sources of seismic noise, stating: The UK government organisation, the Joint Nature Conservation Committee (more commonly known as JNCC), described as "...the public body that advises the UK Government and devolved administrations on UK-wide and international nature conservation", has had a long-term vested interest in the impact of geophysical or seismic surveys on the marine environment for many years. Even back in the 1990s, it was understood at a government level that the impact of the sound energy produced by seismic surveys needed to be investigated and monitored. JNCC guidelines have been and continue to be one of the references used internationally as a possible baseline standard for surveys in seismic contracts world-wide, such as the 'JNCC guidelines for minimising the risk of injury to marine mammals from geophysical surveys (seismic survey guidelines)', 2017. A complicating factor in the discussion of seismic sound energy as a disruptive factor to marine mammals is that of the size and scale of seismic surveys as they are conducted into the 21st century. Historically, seismic surveys tended to have a duration of weeks or months and to be localised, but with OBN technology, surveys can cover thousands of square kilometres of ocean and can continue for years, all of the time putting sound energy into the ocean 24 hours a day from multiple energy sources.
One current example of this is the 85,000 square kilometre mega seismic survey contract signed by the Abu Dhabi national oil company ADNOC in 2018 with an estimated duration into 2024 across a range of deep-water areas, coastal areas, islands and shallow water locations. It may be very difficult to assess the long-term impact of these huge operations on marine life. In 2017, IOGP recommended that, to avoid disturbance whilst surveying: A second factor is the regulatory environment where the seismic survey is taking place. In highly regulated locations such as the North Sea or the Gulf of Mexico, the legal requirements will be clearly stated at the contract level and both contractor and client will comply with the regulations as the consequences of non-compliance can be severe, such as substantial fines or withdrawal of permits for exploration blocks. However, there are some countries which have a varied and rich marine biome but where the environmental laws are weak and where a regulator is ineffective or even non-existent. This situation, where the regulatory framework is not robust, can severely compromise any attempts at protecting marine environments: this is frequently found where state-owned oil and gas companies are dominant in a country and where the regulator is also a state-owned and operated entity and therefore it is not considered to be truly independent. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. The following books cover important topics in reflection seismology. Most require some knowledge of mathematics, geology, and/or physics at the university level or above. Further research in reflection seismology may be found particularly in books and journals of the Society of Exploration Geophysicists, the American Geophysical Union, and the European Association of Geoscientists and Engineers.
[ { "math_id": 0, "text": "Z=v\\rho \\ " }, { "math_id": 1, "text": "R" }, { "math_id": 2, "text": "R=\\frac{Z_2 - Z_1}{Z_2 + Z_1}" }, { "math_id": 3, "text": "Z_1" }, { "math_id": 4, "text": "Z_2" }, { "math_id": 5, "text": "T" }, { "math_id": 6, "text": "T=1+R=\\frac{2 Z_2}{(Z_2 + Z_1)}" }, { "math_id": 7, "text": "\\frac{R^2}{Z_1}+\\frac{T^2}{Z_2}=\\frac{Z_2(Z_2 - Z_1)^2+4Z_1Z_2^2}{Z_1Z_2(Z_1+Z_2)^2}=\\frac{1}{Z_1}" }, { "math_id": 8, "text": "R(\\theta ) = R(0) + G \\sin^2 \\theta " }, { "math_id": 9, "text": "R(0)" }, { "math_id": 10, "text": "G" }, { "math_id": 11, "text": "(\\theta)" }, { "math_id": 12, "text": "t" }, { "math_id": 13, "text": "t = 2\\frac{d}{V}" }, { "math_id": 14, "text": "d" }, { "math_id": 15, "text": "V" } ]
https://en.wikipedia.org/wiki?curid=676418
67642835
Hilsenhoff Biotic Index
The Hilsenhoff Biotic Index (HBI) is a quantitative method of evaluating the abundance of arthropod fauna in stream ecosystems as a means of estimating water quality based on the predetermined pollution tolerances of the observed taxa. This biotic index was created by William Hilsenhoff in 1977 to measure the effects of oxygen depletion in Wisconsin streams resulting from organic or nutrient pollution. Calculating the HBI. The collection sample should contain 100+ arthropods. A tolerance value of 0 to 10 is assigned to each arthropod species (or genus) based on its known prevalence in stream habitats with varying states of detritus contamination. A highly tolerant species would receive a value of 10, while a species collected only in unaltered streams with high water quality would receive a value of 0. The sum of the products of the number of individuals in each species (or genus) and the tolerance of that species is divided by the total number of specimens in the sample to determine the HBI value. formula_0; where "n" = number of specimens in each taxon; "a" = tolerance value of each taxon; "N" = total number of specimens in the sample. Precautions should be taken to account for confounding variables, such as the effects of dominant species over-abundance, seasonal temperature stress, and water currents. Limiting the collection of individuals from each species to a maximum of 10 (10-Max BI) has been shown to minimize the effects of these phenomena on the True BI. The biotic index is then ranked for water quality and degree of organic pollution, as follows: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
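The index defined above reduces to a weighted average and is straightforward to compute. The following sketch uses made-up taxa counts and tolerance values purely to illustrate the arithmetic, together with the 10-Max variant mentioned in the text; it is not derived from any of the cited surveys.

```python
def hilsenhoff_index(samples, cap=None):
    """Compute HBI = sum(n_i * a_i) / N for (count, tolerance) pairs.

    If cap is given (e.g. cap=10 for the 10-Max BI), counts are truncated
    to that maximum per taxon before the index is computed.
    """
    counts = [min(n, cap) if cap else n for n, a in samples]
    tolerances = [a for n, a in samples]
    N = sum(counts)
    return sum(n * a for n, a in zip(counts, tolerances)) / N

# Hypothetical sample: (number of specimens, tolerance value) per taxon
sample = [(40, 8), (25, 6), (20, 4), (15, 2)]

print("True BI  :", round(hilsenhoff_index(sample), 2))        # 5.8
print("10-Max BI:", round(hilsenhoff_index(sample, cap=10), 2)) # 5.0
```

In this hypothetical sample the uncapped index is dominated by the most abundant, pollution-tolerant taxon, while the 10-Max variant reduces that dominance, which is the effect described above.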
[ { "math_id": 0, "text": "HBI = \\frac{\\Sigma (n_i a_i)}{N}\n" } ]
https://en.wikipedia.org/wiki?curid=67642835
67651019
Laser polishing
Laser polishing, also referred to as laser re-melting, is a type of micro-melting process employed for improving the surface quality of materials. As opposed to other conventional polishing processes, this process does not involve removal of material from the workpiece surface. In this process, the laser is made incident on the workpiece to melt the surface down to a certain depth, thus enabling subsequent improvement of surface parameters due to re-solidification of the melted material. Laser polishing can be done at two levels: micro and macro. The workpiece material can be any metal or metal alloy, and the process can also be used to polish certain ceramics and glasses. Principle and mechanism. The aim of this process lies in melting a thin layer of the workpiece surface to reduce the average height of the peaks found on the surface asperities. The melting depth is strictly restricted to a certain fraction of the asperity height to prevent any major microstructural changes deep in the workpiece material. This is hugely affected by the type of laser radiation, i.e. pulsed or continuous radiation, as well as the laser parameters, viz. laser power, feed rate or scanning velocity, laser beam diameter, and distance between the source (or, more precisely, the laser focal point) and the workpiece surface. This process is widely researched as a surface-roughness reduction technique for various materials. The two most general mechanisms are identified as Shallow Surface Melt (SSM) and Surface Over Melt (SOM). Shallow Surface Melt (SSM). The literature describes the SSM region as being formed by the dynamic behavior of the high-temperature liquid metal, which is forced into micro-asperities, essentially filling up the valleys present on the surface. The depth of the melted material is typically "less" than the peak-valley distance, which can be affected by the laser parameters. The cited SEM image shows a clearly distinguishable laser polished surface without showing major side effects on the surrounding material, and can be used as a reference for understanding the SSM mechanism. Surface Over Melt (SOM). Increasing the energy density of the laser beam beyond a certain level will change how the melt-pool (the melted material) behaves. With a gradual increase in the melt-pool thickness, it will exceed the peak-valley distance (or the asperity height), thus converting the entire metal surface into a melt-pool. Higher laser energy densities cause the molten material to be pulled away from the solidifying front, thus forming ripples on the metal surface. Thus, laser polishing with this mechanism requires extensive study of the effect of the laser parameters to reduce the waviness on the final polished surface. Mechanical properties of laser polished components. Since the workpiece surface is exposed to high temperatures, which establishes a large thermal gradient along its cross-section, there are some changes at the micro-structural level due to the material behavior at the surface. However, the majority of the literature reports little change in the overall material properties of the entire workpiece. Surface morphology and microstructure. The laser polished surface shows a large improvement in terms of the average surface roughness of the worked material. This can be attributed to the uniform distribution of the melt-pool during rapid solidification, due to the presence of laser pressure, gravity and surface tension.
The treated layer is divided into 3 major zones: the re-melted layer, the heat affected zone and the original workpiece material. The nearly uniform re-melted layer has finer grains compared to the rest of the material because of the high cooling rate. This reduction in grain size from the original can be explained as a result of "grain boundary pinning" due to pre-existing or fresh precipitates in the melted material. The fresh precipitates may sprout from the material matrix or may be induced from the surrounding environment. Going down the material, there is the heat affected zone, which is not exposed to the laser beam, but is affected by the melt-pool formed on the surface. The grain sizes here are coarser than in the re-melted surface layer, but not as large as the original grain sizes found further down the material (typically in an additively manufactured workpiece). Tensile properties. The polished surface has a significant increase in tensile strength, but the total elongation (to failure) is reduced. As a case study, consider a polymer-metal composite with aluminum fibers and PLA as the matrix. The cited study shows an increase in tensile strength from 41.01 MPa to 50.47 MPa with a reduced maximum elongation from an initial 60.6% to 33.2%. This can be explained as the result of densification and improved adhesion between the matrix and fiber components. The outcome is therefore increased rigidity and reduced ductility at the polished surface. For this specific case, the workpiece is fabricated with Fused Deposition Modelling (FDM), an additive manufacturing method. Typically, all additively manufactured components have defects throughout their matrix, viz. gas porosity, gaps between deposited layers, inconsistent lamination of the deposited layers and low adhesion among layers. All of the aforementioned defects have related or unrelated reasons of formation which can be studied in depth, but these are beyond the scope of this summary. These defects become the failure sources or origin of damage induced in the composite. Due to laser polishing, the failure behavior of the composite changes because of the combined elastoplastic behavior of the newly polished fiber and matrix at the workpiece surface. Furthermore, since melted surface material flows from peaks to unfilled valleys, many defects are removed. This also causes re-bonding of matrix to matrix as well as matrix to fiber, essentially improving the tensile strength as well as dynamic mechanical properties by creating a much denser structure. This can be mathematically explained by the rule of mixtures, by assuming constant strain for the matrix and continuous fibers and evaluating the tensile strength for the different stages found in a composite stress-strain curve. Other improvements seen on the polished surface are increased micro-hardness, wear resistance and corrosion resistance. Fracture behavior. Depending on the material being polished, the fracture mechanisms vary vastly among pure metals, non-metals, alloys, polymers, ceramics, amorphous solids and composites. All of them show improved fracture resistance post laser polishing because of reduced defects and increased resistance to crack propagation. However, this improvement is not universal; it is also affected by the presence of defects within the unaffected workpiece material. The improved fracture behavior can be quantified by defining the critical stress intensity factor (formula_0).
Theoretically, this value is achieved when the nominal applied stress is equal to the crack propagation stress, and is calculated taking into account the Griffith criterion. The final derived equation for a plane stress condition is given by the square root of the product of the material stiffness (formula_1) and the material toughness (formula_2). As is evident, with an increase in material stiffness the polished surface is bound to have increased fracture toughness. A more in-depth study reveals the role of more than just material stiffness in the increase of the fracture resistance of the laser-treated material. Multiple sources have described the effect of strain hardening (induced compression due to dislocation motion at elevated temperatures) and phase transformation within the material. Consider another case study of a silicon nitride engineering ceramic. The result of this study documents the change in surface hardness, surface crack length and the surface formula_3 (mode-1 formula_0) by using the Vickers indentation technique(s). The increase in surface hardness and in the formula_3 factor can be related to the induced residual compressive stress due to the motion of dislocations at elevated temperatures during the laser polishing process. These compressive stresses act against the externally applied tension, so that a certain threshold stress, in addition to the fracture stress (or crack propagation stress), is needed to completely overcome the opposing stresses before crack initiation. Other observations include a reduction of crack length by 37% in the laser polished Si3N4, and induced anisotropy, which is further discussed in the cited reference. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
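Two of the relations referred to above, the rule of mixtures for the composite tensile strength and the plane-stress estimate of the critical stress intensity factor as the square root of the product of stiffness and toughness, can be illustrated with a few lines of arithmetic. All numbers below are assumed, order-of-magnitude values chosen only to show the form of the calculation; they are not measurements from the cited studies.

```python
import math

# Rule of mixtures (iso-strain) for a continuous-fibre composite:
# sigma_c = V_f * sigma_f + (1 - V_f) * sigma_m
V_f = 0.3            # assumed fibre volume fraction
sigma_f = 90e6       # assumed fibre strength contribution, Pa
sigma_m = 40e6       # assumed matrix strength contribution, Pa
sigma_c = V_f * sigma_f + (1 - V_f) * sigma_m
print(f"composite strength ~ {sigma_c / 1e6:.1f} MPa")

# Plane-stress fracture toughness estimate: K_c = sqrt(E * G_c)
E = 300e9            # assumed stiffness, Pa (ceramic-like value)
G_c = 60.0           # assumed critical energy release rate, J/m^2
K_c = math.sqrt(E * G_c)
print(f"K_c ~ {K_c / 1e6:.1f} MPa·m^0.5")
```

With these assumed inputs the rule of mixtures gives roughly 55 MPa and the toughness estimate roughly 4 MPa·m^0.5, which is only intended to show how an increase in either stiffness or toughness raises the critical stress intensity factor.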
[ { "math_id": 0, "text": "K_c" }, { "math_id": 1, "text": "E" }, { "math_id": 2, "text": "G_c" }, { "math_id": 3, "text": "K_1c" } ]
https://en.wikipedia.org/wiki?curid=67651019
6765164
Gabor atom
In applied mathematics, Gabor atoms, or Gabor functions, are functions used in the analysis proposed by Dennis Gabor in 1946 in which a family of functions is built from translations and modulations of a generating function. Overview. In 1946, Dennis Gabor suggested the idea of using a granular system to produce sound. In his work, Gabor discussed the problems with Fourier analysis. Although he found the mathematics to be correct, it did not reflect the behaviour of sound in the world, because sounds, such as the sound of a siren, have variable frequencies over time. Another problem was the underlying supposition of sine-wave analysis that the signal of concern has infinite duration, even though sounds in real life have limited duration – see time–frequency analysis. Gabor applied ideas from quantum physics to sound, allowing an analogy between sound and quanta. He proposed a mathematical method to reduce Fourier analysis into cells. His research was aimed at the transmission of information through communication channels. Gabor saw in his atoms a possibility of transmitting the same information but using less data. Instead of transmitting the signal itself, it would be possible to transmit only the coefficients which represent the same signal using his atoms. Mathematical definition. The Gabor function is defined by formula_0 where "a" and "b" are constants and "g" is a fixed function in "L"2(R), such that ||"g"|| = 1. Depending on formula_1, formula_2, and formula_3, a Gabor system may be a basis for "L"2(R), which is defined by translations and modulations. This is similar to a wavelet system, which may form a basis through dilating and translating a mother wavelet. When one takes formula_4 one gets the kernel of the Gabor transform. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
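A direct way to see what the family of atoms looks like is to evaluate the defining expression numerically. The sketch below builds g_{ℓ,n}(x) = g(x − aℓ)·exp(2πi b n x) on a grid, using a Gaussian window of the form mentioned for the Gabor transform; the grid, the constants a and b, and the particular normalisation constant are illustrative choices made for the example, not part of the original definition.

```python
import numpy as np

def gabor_atom(x, ell, n, a=1.0, b=1.0):
    """Gabor atom g_{ell,n}(x) = g(x - a*ell) * exp(2*pi*i*b*n*x)
    with the Gaussian window g(t) = 2**0.25 * exp(-pi*t**2), chosen so ||g|| = 1."""
    g = 2 ** 0.25 * np.exp(-np.pi * (x - a * ell) ** 2)
    return g * np.exp(2j * np.pi * b * n * x)

x = np.linspace(-5, 5, 4001)
dx = x[1] - x[0]

atom = gabor_atom(x, ell=1, n=3)   # translated by a*1, modulated at frequency b*3

# Numerical check that the window normalisation gives (approximately) unit L2 norm
norm = np.sqrt(np.sum(np.abs(atom) ** 2) * dx)
print("||g_{1,3}|| ~", round(norm, 4))
```

The modulation has modulus one, so the norm of every atom equals the norm of the window, which the numerical check confirms to within the accuracy of the grid.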
[ { "math_id": 0, "text": "g_{\\ell,n}(x) = g(x - a\\ell)e^{2\\pi ibnx}, \\quad -\\infty < \\ell,n < \\infty," }, { "math_id": 1, "text": "a" }, { "math_id": 2, "text": "b" }, { "math_id": 3, "text": "g" }, { "math_id": 4, "text": "g(t) = A e^{-\\pi t^2}" } ]
https://en.wikipedia.org/wiki?curid=6765164
67654308
Tides in marginal seas
Dynamics of tidal wave deformation in the shallow waters of the marginal seas
Tides in marginal seas are tides affected by their location in semi-enclosed areas along the margins of continents and differ from tides in the open oceans. Tides are water level variations caused by the gravitational interaction between the Moon, the Sun and the Earth. The resulting tidal force is a secondary effect of gravity: it is the difference between the actual gravitational force and the centrifugal force. While the centrifugal force is constant across the Earth, the gravitational force depends on the distance between the two bodies and is therefore not constant across the Earth. The tidal force is thus the difference between these two forces at each location on the Earth. In an idealized situation, assuming a planet with no landmasses (an aqua planet), the tidal force would result in two tidal bulges on opposite sides of the Earth. This is called the equilibrium tide. However, due to global and local ocean responses, different tidal patterns are generated. The complicated ocean responses are the result of the continental barriers, resonance due to the shape of the ocean basin, the tidal wave's inability to keep up with the Moon's tracking, the Coriolis acceleration and the elastic response of the solid earth. In addition, when the tide arrives in the shallow seas, it interacts with the sea floor, which leads to the deformation of the tidal wave. As a result, tides in shallow waters tend to be larger, of shorter wavelength, and possibly nonlinear relative to tides in the deep ocean. Tides on the continental shelf. The transition from the deep ocean to the continental shelf, known as the continental slope, is characterized by a sudden decrease in water depth. To conserve energy, the tidal wave has to deform as a result of the decrease in water depth. The total energy of a linear progressive wave per wavelength is the sum of the potential energy (PE) and the kinetic energy (KE). The potential and kinetic energy integrated over a complete wavelength are the same, under the assumption that the water level variations are small compared to the water depth (formula_0). formula_1 where formula_2 is the density, formula_3 the gravitational acceleration and formula_4 the vertical tidal elevation. The total wave energy becomes: formula_5 If we now solve for a harmonic wave formula_6, where formula_7 is the wave number and formula_8 the amplitude, the total energy per unit area of surface becomes: formula_9 A tidal wave has a wavelength that is much larger than the water depth, and thus, according to the dispersion relation for gravity waves, it travels with the phase and group velocity of a shallow water wave: formula_10. The wave energy is transmitted at the group velocity of the wave and thus the energy flux (formula_11) is given by: formula_12 The energy flux needs to be conserved, and with formula_2 and formula_3 constant, this leads to: formula_13 where formula_14 and thus formula_15. When the tidal wave propagates onto the continental shelf, the water depth formula_16 decreases. In order to conserve the energy flux, the amplitude of the wave needs to increase (see figure 1). Transmission coefficient. The above explanation is a simplification, as not all tidal wave energy is transmitted; part of it is reflected at the continental slope.
The transmission coefficient of the tidal wave is given by: formula_17 This equation indicates that when formula_18, the transmitted tidal wave has the same amplitude as the original wave. Furthermore, the transmitted wave will be larger than the original wave when formula_19, as is the case for the transition to the continental shelf. The reflected wave amplitude (formula_20) is determined by the reflection coefficient of the tidal wave: formula_21 This equation indicates that when formula_18 there is no reflected wave, and if formula_19 the reflected tidal wave will be smaller than the original tidal wave. Internal tide and mixing. At the continental shelf, the reflection and transmission of the tidal wave can lead to the generation of internal tides on the pycnocline. The surface (i.e. barotropic) tide generates these internal tides where stratified waters are forced upwards over a sloping bottom topography. The internal tide extracts energy from the surface tide and propagates in both the shoreward and the seaward direction. The shoreward-propagating internal tide shoals when reaching shallower water, where the wave energy is dissipated by wave breaking. The shoaling of the internal tide drives mixing across the pycnocline, carbon sequestration and sediment resuspension. Furthermore, through nutrient mixing, the shoaling of the internal tide exerts a fundamental control on the functioning of ecosystems on the continental margin. Tidal propagation along coasts. After entering the continental shelf, a tidal wave quickly faces a boundary in the form of a landmass. When the tidal wave reaches a continental margin, it continues as a boundary-trapped Kelvin wave. Along the coast, a boundary-trapped Kelvin wave is also known as a coastal Kelvin wave or edge wave. A Kelvin wave is a special type of gravity wave that can exist when there is (1) gravity and stable stratification, (2) sufficient Coriolis force and (3) the presence of a vertical boundary. Kelvin waves are important in the ocean and shelf seas; they form a balance between inertia, the Coriolis force and the pressure gradient force. The simplest equations that describe the dynamics of Kelvin waves are the linearized shallow water equations for homogeneous, inviscid flows. These equations can be linearized for a small Rossby number, no frictional forces and under the assumption that the wave height is small compared to the water depth (formula_22). The linearized depth-averaged shallow water equations become: u momentum equation: formula_23 v momentum equation: formula_24 the continuity equation: formula_25 where formula_26 is the zonal velocity (formula_27 direction), formula_28 the meridional velocity (formula_29 direction), formula_30 is time and formula_31 is the Coriolis frequency. Kelvin waves are named after Lord Kelvin, who first described them after finding solutions to the linearized shallow water equations with the boundary condition formula_32. When this assumption is made, the linearized depth-averaged shallow water equations that describe a Kelvin wave become: u momentum equation: formula_33 v momentum equation: formula_34 the continuity equation: formula_35 Now it is possible to get an expression for formula_4 by taking the time derivative of the continuity equation and substituting the momentum equation: formula_36 The same can be done for formula_28 by taking the time derivative of the v momentum equation and substituting the continuity equation: formula_37 Both of these equations take the form of the classical wave equation, where formula_38.
This is the same velocity as the tidal wave and thus of a shallow water wave. These preceding equations govern the dynamics of a one-dimensional non-dispersive wave, for which the following general solutions exist: formula_39 formula_40 where the length formula_41 is the Rossby radius of deformation and formula_42 is an arbitrary function describing the wave motion. In its most simple form, formula_43 is a cosine or sine function which describes a wave motion in the positive or negative direction. The Rossby radius of deformation is a typical length scale in the ocean and atmosphere that indicates when rotational effects become important, and it is a measure for the trapping distance of a coastal Kelvin wave. The exponential term results in an amplitude that decays away from the coast. This set of equations therefore describes a wave that travels along the coast with a maximum amplitude at the coast which declines towards the ocean. These solutions also indicate that a Kelvin wave always travels with the coast on its right-hand side in the Northern Hemisphere and with the coast on its left-hand side in the Southern Hemisphere. In the limit of no rotation, where formula_44, the exponential term increases without bound and the wave becomes a simple gravity wave oriented perpendicular to the coast. In the next sections, it is shown how these Kelvin waves behave in enclosed shelf seas and in estuaries and basins. Tides in enclosed shelf seas. The expression of tides as a bounded Kelvin wave is well observable in enclosed shelf seas around the world (e.g. the English Channel, the North Sea or the Yellow Sea). Animation 1 shows the behaviour of a simplified case of a Kelvin wave in an enclosed shelf sea for the case with (lower panel) and without friction (upper panel). The shape of an enclosed shelf sea is represented as a simple rectangular domain in the Northern Hemisphere which is open on the left-hand side and closed on the right-hand side. The tidal wave, a Kelvin wave, enters the domain in the lower left corner and travels to the right with the coast on its right. The sea surface height (SSH, left panels of animation 1), the tidal elevation, is maximum at the coast and decreases towards the centre of the domain.
The tidal currents (right panels of animation 1) are in the direction of wave propagation under the crest and in the opposite direction under the trough. They are both maximum under the crest and the trough of the waves and decrease towards the centre. This was expected, as the equations for formula_4 and formula_28 are in phase: they both depend on the same arbitrary function describing the wave motion and on the exponential decay term. On the enclosed right-hand side, the Kelvin wave is reflected and, because it always travels with the coast on its right, it now travels in the opposite direction. The energy of the incoming Kelvin wave is transferred through Poincare waves along the enclosed side of the domain to the outgoing Kelvin wave. The final pattern of the SSH and the tidal currents is made up of the sum of the two Kelvin waves. These two can amplify each other, and this amplification is maximum when the length of the shelf sea is a quarter wavelength of the tidal wave. In addition, the sum of the two Kelvin waves results in several static minima in the centre of the domain which hardly experience any tidal motion; these are called amphidromic points. In the upper panel of figure 2, the absolute time-averaged SSH is shown in red shading and the dotted lines show the zero tidal elevation level at roughly hourly intervals, also known as cotidal lines. Where these lines intersect, the tidal elevation is zero during a full tidal period, and thus this is the location of the amphidromic points. In the real world, the reflected Kelvin wave has a lower amplitude due to energy loss as a result of friction and through the transfer via Poincare waves (lower left panel of animation 1). The tidal currents are proportional to the wave amplitude and therefore also decrease on the side of the reflected wave (lower right panel of animation 1). Finally, the static minima are no longer in the centre of the domain, as the wave amplitude is no longer symmetric. Therefore, the amphidromic points shift towards the side of the reflected wave (lower panel of figure 2). The dynamics of a tidal Kelvin wave in an enclosed shelf sea are well manifested and studied in the North Sea. Tides in estuaries and basins. When tides enter estuaries or basins, the boundary conditions change as the geometry changes drastically. The water depth becomes shallower and the width decreases; in addition, the depth and width become significantly variable over the length and width of the estuary or basin. As a result, the tidal wave deforms, which affects the tidal amplitude, the phase speed and the relative phase between tidal velocity and elevation. The deformation of the tide is largely controlled by the competition between bottom friction and channel convergence. Channel convergence increases the tidal amplitude and phase speed, as the energy of the tidal wave travels through a smaller area, while bottom friction decreases the amplitude through energy loss. The modification of the tide leads to the creation of overtides (e.g. formula_45 tidal constituents) or higher harmonics. These overtides are multiples, sums or differences of the astronomical tidal constituents, and as a result the tidal wave can become asymmetric. A tidal asymmetry is a difference between the duration of the rise and the fall of the tidal water elevation, and it can manifest itself as a difference in flood and ebb tidal currents. The tidal asymmetry and the resulting currents are important for the sediment transport and turbidity in estuaries and tidal basins.
Each estuary and basin has its own distinct geometry, and these can be subdivided into several groups of similar geometries, each with its own tidal dynamics.
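To make the trapping scale of the coastal Kelvin wave described above concrete, the sketch below evaluates the Rossby radius of deformation formula_41 and the offshore decay of the wave amplitude. The depth, latitude and coastal amplitude are illustrative assumptions for a mid-latitude shelf sea, not values taken from the article.

```python
import numpy as np

# Cross-shore structure of a coastal Kelvin wave: the amplitude decays offshore
# as exp(-x/R), with R = sqrt(g*h)/f the Rossby radius of deformation.
g = 9.81                                        # gravity (m/s2)
h = 80.0                                        # assumed water depth (m)
lat = 55.0                                      # assumed latitude (degrees north)
f = 2 * 7.2921e-5 * np.sin(np.radians(lat))     # Coriolis frequency (1/s)

c = np.sqrt(g * h)                              # shallow-water phase speed (m/s)
R = c / f                                       # Rossby radius (m)
A = 1.0                                         # assumed amplitude at the coast (m)

print(f"c = {c:.1f} m/s, R = {R/1e3:.0f} km")
for x in (0.0, 50e3, 100e3, 200e3):             # distance from the coast (m)
    print(f"x = {x/1e3:5.0f} km  amplitude = {A * np.exp(-x / R):.2f} m")
```

For these values the wave is trapped within roughly 200 km of the coast, comparable to the width of shelf seas such as the North Sea.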
[ { "math_id": 0, "text": "\\eta << H" }, { "math_id": 1, "text": "\\int_{0}^{\\lambda}PE = \\int_{0}^{\\lambda}KE = \\frac{1}{2}\\rho g\\int_{0}^{\\lambda}\\eta^2 dx" }, { "math_id": 2, "text": "\\rho" }, { "math_id": 3, "text": "g" }, { "math_id": 4, "text": "\\eta" }, { "math_id": 5, "text": "E =\\rho g\\int_{0}^{\\lambda}\\eta^2 dx" }, { "math_id": 6, "text": "\\eta(x) = Acos(kx)" }, { "math_id": 7, "text": "k" }, { "math_id": 8, "text": "A" }, { "math_id": 9, "text": "E_{s} = \\frac{1}{2}\\rho g A^2" }, { "math_id": 10, "text": "c_p = c_g = \\sqrt{gh}" }, { "math_id": 11, "text": "F_E" }, { "math_id": 12, "text": "F_E = \\frac{1}{2}\\rho g A^2 \\sqrt{gh}" }, { "math_id": 13, "text": "F_{E,1}=F_{E,2}\\Longrightarrow{A_1}^{2}\\sqrt{gh_1} = {A_2}^{2}\\sqrt{gh_2} " }, { "math_id": 14, "text": "h2<h1" }, { "math_id": 15, "text": "A_2 > A_1" }, { "math_id": 16, "text": "(h)" }, { "math_id": 17, "text": "\\frac{A_2}{A_1}=\\frac{2 c_1}{(c_1+c_2)}" }, { "math_id": 18, "text": "c_1 = c_2" }, { "math_id": 19, "text": "c_1>c_2" }, { "math_id": 20, "text": "A^'" }, { "math_id": 21, "text": "\\frac{A^'}{A_1}=\\frac{c_1-c_2}{(c_1+c_2)}" }, { "math_id": 22, "text": "\\eta<<h " }, { "math_id": 23, "text": "\\frac{\\partial u}{\\partial t} - fv = -g \\frac{\\partial\\eta}{\\partial x}" }, { "math_id": 24, "text": "\\frac{\\partial v}{\\partial t} + fu = -g \\frac{\\partial\\eta}{\\partial y}" }, { "math_id": 25, "text": "\\frac{\\partial \\eta}{\\partial t} + h(\\frac{\\partial u}{\\partial x} + \\frac{\\partial v}{\\partial y}) = 0" }, { "math_id": 26, "text": "u" }, { "math_id": 27, "text": "x" }, { "math_id": 28, "text": "v" }, { "math_id": 29, "text": "y" }, { "math_id": 30, "text": "t" }, { "math_id": 31, "text": "f" }, { "math_id": 32, "text": "u(x,y,t) = 0 " }, { "math_id": 33, "text": "v = \\frac{-g}{f} \\frac{\\partial\\eta}{\\partial x}" }, { "math_id": 34, "text": "\\frac{\\partial v}{\\partial t} = -g \\frac{\\partial\\eta}{\\partial y}" }, { "math_id": 35, "text": "\\frac{\\partial \\eta}{\\partial t} + h\\frac{\\partial v}{\\partial y} = 0" }, { "math_id": 36, "text": "\\frac{\\partial^2 \\eta}{\\partial t^2} - gh\\frac{\\partial^2 \\eta}{\\partial y^2} = 0" }, { "math_id": 37, "text": "\\frac{\\partial^2 v}{\\partial t^2} - gh\\frac{\\partial^2 v}{\\partial y^2} = 0" }, { "math_id": 38, "text": "c = \\sqrt{gh}" }, { "math_id": 39, "text": "\\eta = -h \\ F(y + ct) \\ e^{\\frac{-x}{R}}" }, { "math_id": 40, "text": "v = \\sqrt{gh}\\ F(y + ct) \\ e^{\\frac{-x}{R}}" }, { "math_id": 41, "text": "R = \\frac{\\sqrt{gh}}{f}" }, { "math_id": 42, "text": "F(y + ct)" }, { "math_id": 43, "text": "F" }, { "math_id": 44, "text": "f \\rightarrow 0" }, { "math_id": 45, "text": "M_{4}" } ]
https://en.wikipedia.org/wiki?curid=67654308
67654719
Southern Caribbean upwelling system
Low latitude tropical upwelling system
The Southern Caribbean Upwelling System (SCUS) is a low-latitude tropical upwelling system, in which, due to multiple environmental and bathymetric conditions, water from the deep sea is forced to the surface layers of the ocean. The SCUS is located at about 10°N on the southern coast of the Caribbean Sea basin, off Colombia, Venezuela, and Trinidad. There are two main upwelling zones in the system that vary in intensity throughout the year: the Western Upwelling Zone (WUZ) and the Eastern Upwelling Zone (EUZ). The WUZ is situated between 74–71°W and generates mainly seasonal upwelling and high offshore transport due to intense winds. The EUZ, situated between 71–60°W, is less intense but is favourable for upwelling throughout the year. General information. In the thirty years after 1990, the upwelling intensified, producing a cooling of the sea surface temperature (SST) in the WUZ; this contrasts with the general temperature of the Caribbean Sea, which has been shown to increase. The "typical" Caribbean surface water is a mixture of North Atlantic Surface Water (NASW) and riverine waters from the Orinoco and Amazon rivers. The intensity of the Caribbean low-level jet (further explained below) and the coastal orientation determine the timing and spatial variability of this upwelling system. The system is likely responsible for a major part of the primary production, owing to the nutrients added to the system through the upwelling. Beneath the Caribbean surface waters, more saline water is found, with values close to those typical of the Subtropical Underwater (SUW): salinity (SA) ~37, Θ ~22 °C. This forms a subsurface maximum (SSM) of water more saline than the water on top of it. After the rainy season, the SSM is lower due to dilution of the surface waters. Characterization of the SCUS. Since 1994, variations in upwelling have been studied using cycles of satellite SST (sea surface temperature). The SST is used as a proxy for upwelling in this tropical region (explained in more detail below), as are the dominant winds and chlorophyll-a. These are all proxies that are relatively easy to measure and readily accessible. Location and source of the SCUS. The location of the SCUS depends on the Rossby radius of deformation, R. The Rossby radius formula_0 sets the position of the upwelling relative to the coastline. The Rossby radius for this region is ~19 km, estimated using a mean depth h = 35 m and gravitational acceleration g = 9.81 m s^-2. The upwelling zones are found close to the coast, roughly within the 19 km given by the Rossby radius. However, in rare cases upwelled water moves offshore by over 250 km from the coast. Upwelled waters in the SCUS are consistent in geochemical composition with the "Subtropical Underwater". This is a water mass that comes from the central Atlantic and, due to its relatively dense water properties (SA (salinity) ~37 g per kg water, temperature ~22 °C), lies under the Caribbean surface waters. Because these properties are so similar to those of the water that is upwelled in the SCUS, it is likely that the water comes from this water mass. Sea surface temperature (SST). The SCUS is studied through SST at high resolution (1 km grid) from a radiometer of the National Oceanic and Atmospheric Administration (NOAA). These data are used to identify differences in the SST and to locate upwelling regions. There is a semi-annual cycle of SST within the upwelling areas,
with cooling periodically occurring between December and April, showing 2–4 upwelling pulses that peak during February–March. Around May there is a typical increase in SST, followed by cooling during June–August due to a midyear upwelling. There is a strong relationship between SST and chlorophyll-a, which is explained later in this article. Wind. The trade winds that blow over the southern Caribbean Sea, amplified by the Caribbean low-level jet, generate northward Ekman transport. The intensity of the trade winds varies per season (the image of the seasonal differences shows the WUZ in December in panels b and c, and in February in panels d and e), and this explains the variation in upwelling and the measured differences in SST mentioned above. The driving force of the Ekman transport is the wind stress "τ" on the sea surface. The driving winds differ between areas: east of 68°W, wind speeds are relatively stable (&gt;6 m formula_1), and slightly lower during August–November (4–6 m formula_1). The direction of the wind is generally parallel to the southern Caribbean Sea margin. However, between May and October the EUZ has more along-shore winds, deviating only ~1.7° from the along-shore direction, while from November until April the direction is more onshore, within ±12°. In the WUZ the wind is directed more offshore during the majority of the year, at approximately -14°. This changes during October–November, when winds are aligned with the shoreline (~0.2°). These wind directions produce the northward wind curl and thus offshore Ekman transport that is favourable for the SCUS year-round. Chlorophyll-a. Chlorophyll-a is used to assess the productivity of phytoplankton and therefore zooplankton. Amounts of chlorophyll-a increase with the higher nutrient concentrations found in upwelled water, and it can therefore be used as a proxy for upwelling systems. Within the SCUS there are strong correlations between the SST and chlorophyll-a. These show a chlorophyll-a maximum in December and April and a shorter maximum between June and July, further confirming the upwelling of nutrient-rich water. Biological impact. As mentioned in the chlorophyll-a section, the nutrient concentrations that come with the water from the SUW have a strong positive impact on phytoplankton productivity. It is estimated that up to 95% of the small pelagic biomass in the southern Caribbean Sea is sustained by the primary production that comes with these upwelled waters. In the EUZ there is about four times more small pelagic biomass than in the WUZ. This difference is attributed to the prolonged duration of the upwelling: the water in the EUZ has an SST &lt; 26°C for 8.5 months, compared with 6.9 months in the WUZ. In addition, the EUZ has a wider continental shelf. Upwelling over wide and shallow continental shelves can generate resuspension and transport of essential microelements from the benthic boundary layer to the surface. The Caribbean low-level jet (CLLJ). The CLLJ has a core in the western basin (70°W–80°W, 15°N) and maximum horizontal wind speeds of up to 16 m/s, peaking in July and February. The Caribbean low-level jet is an amplification of the large-scale circulation of the North Atlantic subtropical high (NASH). The NASH interacts closely with the trade winds and therefore connects the CLLJ with the trade winds.
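The ~19 km Rossby radius quoted above can be reproduced with a short numerical sketch. The relative density difference between the surface water and the underlying Subtropical Underwater and the exact latitude are assumed values chosen for illustration; only the depth of 35 m and the value of g are taken from the article.

```python
import numpy as np

# Baroclinic Rossby radius of deformation, R = sqrt(g * (drho/rho) * h) / f.
g = 9.81                       # gravitational acceleration (m/s2)
h = 35.0                       # mean depth of the upper layer (m), from the article
lat = 10.5                     # assumed latitude of the SCUS (degrees north)
drho_over_rho = 7.5e-4         # assumed relative density difference with the SUW

f = 2 * 7.2921e-5 * np.sin(np.radians(lat))    # Coriolis parameter (1/s)
R = np.sqrt(g * drho_over_rho * h) / f         # Rossby radius (m)

print(f"f = {f:.2e} 1/s, R = {R/1e3:.0f} km")  # roughly the ~19 km quoted above
```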
[ { "math_id": 0, "text": "R = \\sqrt{{g}\\frac{\\Delta\\rho}{\\rho}\\frac{h}{f}}" }, { "math_id": 1, "text": "s^-1" } ]
https://en.wikipedia.org/wiki?curid=67654719
67655573
Multiple equilibria in the Atlantic meridional overturning circulation
The Atlantic meridional overturning circulation (AMOC) is a large system of ocean currents, like a conveyor belt. It is driven by differences in temperature and salt content and it is an important component of the climate system. However, the AMOC is not a static feature of the global circulation. It is sensitive to changes in temperature, salinity and atmospheric forcing. Climate reconstructions from δ18O proxies from Greenland reveal an abrupt transition in global temperature about every 1470 years. These changes may be due to changes in ocean circulation, which suggests that two equilibria are possible in the AMOC. Stommel made a two-box model in 1961 which showed that two different states of the AMOC are possible on a single hemisphere. Stommel's result with an ocean box model initiated studies using three-dimensional ocean circulation models, confirming the existence of multiple equilibria in the AMOC. The AMOC. The Atlantic Meridional Overturning Circulation (AMOC) is a large system of ocean currents that carry warm water from the tropics northwards into the North Atlantic. It is driven by differences in temperature and salt content. The present-day AMOC is mainly temperature-driven, which means that there is a strong AMOC characterized by sinking in the North. In principle, it is also possible that sinking takes place at low latitudes. This was studied by Stommel in 1961. The climate of the Northern Hemisphere is influenced by the oceanic transport of heat and salt from the tropics to the sub-polar regions. The ocean releases heat to the atmosphere in the sub-polar Atlantic region. This northward heat transport is responsible for the relatively warm climate in Northwest Europe. Changes in the strength of the AMOC are thought to have been responsible for significant changes in past climate. A collapse of the AMOC would have large consequences for the temperatures in the North Atlantic region. It could lead to a reduction of air temperatures of up to 10 °C. Geological record of abrupt changes in the climate. The Cenozoic Era. The Cenozoic Era covers the period from 65.5 Ma to present. It is the most recent of the three classic geological eras (Paleozoic, Mesozoic, Cenozoic). The Earth is mostly characterized as a "greenhouse world" during the early Cenozoic, with no ice and high temperatures. The widespread occurrence of large glaciations started on Antarctica ~34 Ma, at the Eocene-Oligocene transition (EOT). During this time, the world became an "icehouse world" like we know it today, with ice sheets present at both poles simultaneously. Dansgaard-Oeschger events. There are also abrupt changes in the climate in the last glacial period. Willi Dansgaard analyzed the isotopic composition of ice cores from Camp Century in Greenland in 1972. He reported that the last glacial period showed more than 20 abrupt interstadials marked by very intense warming. Hans Oeschger reported 12 years later that the abrupt changes were accompanied by sudden increases in CO2 in the Greenland ice cores. These abrupt and dramatic changes in climate have since been known as Dansgaard-Oeschger events (DO-events), and they occur approximately every 1470 years. Paleo-proxy records based on δ18O provide evidence of temperature fluctuations of this magnitude. The cause of these fluctuations is still uncertain, but recent research suggests that they are due to changes in ocean circulation. These changes could be induced by North Atlantic freshwater perturbations. Stommel box model.
Several simple box models have been used to study the changes in the AMOC caused by, for example, changes in freshwater or salinity fluxes. Stommel was the first to do so and devised a single-hemispheric box model in 1961 (the Stommel box model). He made this model to explore the existence of stable responses to a constant forcing with either a temperature-driven or a salinity-driven AMOC. Stommel made use of the fundamental assumption that the strength of the AMOC is linearly proportional to the equator–pole density difference. This assumption implies that the AMOC is driven by surface thermohaline forcing. The model consists of two boxes. One box is at a high latitude (polar box) and the other one is at a low latitude (equatorial box). The high-latitude box has uniform temperature and salinity (T1,S1), and the same holds for the equatorial box (T2,S2). A linear equation of state is assumed: formula_0, where ρ0, T0 and S0 are the reference density, temperature and salinity, respectively. The thermal and haline coefficients are indicated by α and β. As said before, the flow strength between the boxes is set by the density difference between the boxes: formula_1, where k is a hydraulic pumping constant. Each box exchanges heat with the atmosphere. The atmospheric temperatures (T1a,T2a) are fixed in this model. The evaporated water (η2 ≥ 0) in the equatorial box is precipitated, via the atmosphere, in the high-latitude box. The governing differential equations for the temperatures and salinities in the Stommel box model are: formula_2 formula_3 formula_4 formula_5 In these relations, λT is the thermal exchange coefficient with the atmosphere, formula_6 and formula_7. From this it follows that: formula_8. There is a poleward surface flow if formula_9 and an equatorward surface flow if formula_10. Under the assumption of a steady state for T1 and T2, the flow strength is: formula_11. Here formula_12. It follows that the time evolution of the flow strength is given by: formula_13 The steady states of the flow strength are then given by: formula_16 for formula_9 and formula_17 for formula_10. The solution formula_18 is not physically possible, because it contradicts the assumption that formula_19. These formulas for the flow strength can be made dimensionless by setting formula_20 and formula_21, which gives: formula_22 formula_23 Solutions with formula_24 represent solutions with sinking in the polar box (high latitudes) and solutions with formula_25 represent solutions with sinking in the equatorial box (low latitudes). The solutions formula_26 and formula_27 are stable, and the solution formula_28 is unstable. This means that two stable states (equilibria) of the AMOC are possible on a single hemisphere for a certain range of the salinity forcing. At present, the circulation is on the positive branch with formula_24. If it were to switch to a circulation on the negative branch with formula_25, the oceanic heat transport to the Northern Hemisphere would weaken and the air temperatures would drop. The cooling would be largest in the North Atlantic region and could lead to a reduction of air temperatures in Northwest Europe of up to 10 °C. Switching between branches. Stommel proved the possibility of two equilibria on a single hemisphere. Next, it is important to investigate how these stable states react to changes in the salinity or freshwater forcing. An example of a change in forcing could be enhanced precipitation or evaporation. Hosing experiment.
One way to go from one equilibrium to the other is by a "hosing" experiment. Here, an instantaneous surface forcing perturbation is applied. This moves the system to the stable branch with negative formula_14. Next, the perturbation is removed. This puts the system back into the bistable regime of formula_15, but on the other branch. This gives two different steady states for the same dimensionless salinity forcing. Traditional hysteresis experiment. Another strategy is the traditional hysteresis experiment. Here, the forcing is gradually increased, which allows the system to follow the positive branch towards the negative branch. The AMOC quickly collapses when it reaches the threshold value of formula_15. From here it goes to the negative branch of formula_14. After reaching the negative branch, the forcing is slowly reduced again. This makes the system go to the other equilibrium. When the forcing is reduced even further, the AMOC will transition again to the positive branch of formula_14. Redistributing salt experiment. A third strategy is an experiment where the initial state is perturbed by redistributing salt in the ocean interior. This strategy leaves the salinity forcing unchanged. If the perturbation is large enough, the AMOC will collapse. This will make the system transition to the negative branch of formula_14. Examples of multiple equilibria in the ocean. Stommel's result with an ocean box model initiated studies using three-dimensional ocean circulation models, confirming the existence of multiple equilibria. The full range of possible equilibria in the ocean has not yet been well explored. Besides Stommel's box model and three-dimensional models, paleoclimatic evidence also suggests that variations in the AMOC are linked to abrupt climate changes. In the past. Dansgaard-Oeschger events. Dansgaard-Oeschger events are the most relevant paleoclimate phenomena associated with the instability of the AMOC in the past. They occur approximately every 1470 years. Recent research suggests that they are due to changes in ocean circulation. These changes could be induced by North Atlantic freshwater perturbations. Eocene-Oligocene Transition (EOT). An example of a switch between two equilibria in the AMOC is the Eocene-Oligocene transition (EOT), 34 Ma ago, where proxies of the deep circulation suggest the onset of the AMOC. This caused a major shift in the global climate towards colder and drier conditions. It also caused the formation of the Antarctic ice sheet. This colder and drier climate caused the large-scale extinction of flora and fauna in what is called the Eocene–Oligocene extinction event. It has been suggested that the shift from one equilibrium to the other was caused by a long-term decrease in atmospheric CO2. In the future. Owing to the growing evidence of abrupt climate change due to multiple equilibria in the AMOC, interest grew in the possibility of such events in the present climate. The recent anthropogenic forcing of the climate system may apply a forcing to the AMOC of similar magnitude to the freshwater forcing in the glacial past. As in paleoceanographic studies, the mechanism and likelihood of a collapse have been investigated using climate models. Most present-day climate models already predict a gradual weakening of the AMOC over the 21st century due to anthropogenic forcing, although there is large uncertainty in the amount of decrease.
Some researchers even argue that such a gradual slowdown has already started and that it is visible in proxy records of the AMOC from the mid-twentieth century. The Fifth Assessment Report of the Intergovernmental Panel on Climate Change identifies a collapse of the AMOC as one of the tipping points in the climate system, with a low probability of occurrence but a potentially high impact. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
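Returning to the dimensionless equilibria of the Stommel box model derived above, the sketch below evaluates the three branches for a few values of the salinity forcing formula_15. The chosen forcing values are assumptions for demonstration only; the disappearance of the positive branch beyond formula_15 = 1/4 corresponds to the collapse threshold mentioned in the hysteresis experiment.

```python
import numpy as np

# Dimensionless equilibria of the Stommel two-box model:
#   psi*_{1,2} = 1/2 +/- sqrt(1/4 - F*)   (positive branch, sinking at high latitudes)
#   psi*_4     = 1/2 -  sqrt(1/4 + F*)    (negative branch, sinking at low latitudes)
def stommel_equilibria(F):
    roots = {}
    if 0.0 <= F <= 0.25:                       # bistable range of the forcing
        roots["psi1* (stable)"] = 0.5 + np.sqrt(0.25 - F)
        roots["psi2* (unstable)"] = 0.5 - np.sqrt(0.25 - F)
    roots["psi4* (stable)"] = 0.5 - np.sqrt(0.25 + F)
    return roots

for F in (0.10, 0.20, 0.30):                   # assumed values of F*
    print(F, stommel_equilibria(F))            # for F* > 1/4 only the negative branch remains
```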
[ { "math_id": 0, "text": "\\rho = \\rho_0 - \\alpha (T - T_0) + \\beta (S - S_0)" }, { "math_id": 1, "text": "\\psi = k(\\rho_1 - \\rho_2)" }, { "math_id": 2, "text": "{dT_1\\over dt} = |\\psi | \\Delta T + \\lambda_T(T_1^a - T_1)\n" }, { "math_id": 3, "text": "{dT_2\\over dt} = -|\\psi | \\Delta T + \\lambda_T(T_2^a - T_2)" }, { "math_id": 4, "text": "{dS_1\\over dt} = |\\psi | \\Delta S - \\eta_2" }, { "math_id": 5, "text": "{dS_2\\over dt} = -|\\psi | \\Delta S + \\eta_2" }, { "math_id": 6, "text": "\\Delta T = T_2 - T_1" }, { "math_id": 7, "text": "\\Delta S = S_2 - S_1" }, { "math_id": 8, "text": "\\psi = k(\\alpha \\Delta T - \\beta \\Delta S)" }, { "math_id": 9, "text": "\\psi > 0" }, { "math_id": 10, "text": "\\psi < 0" }, { "math_id": 11, "text": "\\psi = k(\\alpha \\Delta T^a - \\beta \\Delta S)" }, { "math_id": 12, "text": "\\Delta T^a = T_2^a - T_1^a" }, { "math_id": 13, "text": "{d\\psi\\over dt} = -2|\\psi | \\psi + 2k\\alpha \\Delta T^a|\\psi | - 2k\\beta \\eta_2" }, { "math_id": 14, "text": "\\psi^*" }, { "math_id": 15, "text": "F^*" }, { "math_id": 16, "text": "\\psi_{1,2} = \\tfrac{k \\alpha \\Delta T^a}{2} \\pm \\sqrt{\\tfrac{k \\alpha \\Delta T^a}{2} - k \\beta \\eta_2}" }, { "math_id": 17, "text": "\\psi_{3,4} = \\tfrac{k \\alpha \\Delta T^a}{2} \\pm \\sqrt{\\tfrac{k \\alpha \\Delta T^a}{2} + k \\beta \\eta_2}" }, { "math_id": 18, "text": "\\psi_3" }, { "math_id": 19, "text": "\\psi_3 < 0" }, { "math_id": 20, "text": "\\psi^* = \\frac{\\psi}{k \\alpha \\Delta T^a}" }, { "math_id": 21, "text": "F^* = \\frac{\\beta \\eta_2}{k(\\alpha \\Delta T^a)^2}" }, { "math_id": 22, "text": "\\psi_{1,2}^* = \\frac{1}{2} \\pm \\sqrt{\\frac{1}{4} -F^*}" }, { "math_id": 23, "text": "\\psi_{4}^* = \\frac{1}{2} - \\sqrt{\\frac{1}{4} + F^*}" }, { "math_id": 24, "text": "\\psi^* > 0" }, { "math_id": 25, "text": "\\psi^* < 0" }, { "math_id": 26, "text": "\\psi_{1}^*" }, { "math_id": 27, "text": "\\psi_{4}^*" }, { "math_id": 28, "text": "\\psi_{2}^*" } ]
https://en.wikipedia.org/wiki?curid=67655573
67657720
Wave nonlinearity
The nonlinearity of surface gravity waves refers to their deviations from a sinusoidal shape. In the fields of physical oceanography and coastal engineering, the two categories of nonlinearity are skewness and asymmetry. Wave skewness and asymmetry occur when waves encounter an opposing current or a shallow area. As waves shoal in the nearshore zone, in addition to their wavelength and height changing, their asymmetry and skewness also change. Wave skewness and asymmetry are often implicated in ocean engineering and coastal engineering for the modelling of random sea states, in particular regarding the distribution of wave height, wavelength and crest length. For practical engineering purposes, it is important to know the probability of these wave characteristics in seas and oceans at a given place and time. This knowledge is crucial for the prediction of extreme waves, which are a danger for ships and offshore structures. Satellite altimeter Envisat RA-2 data show geographically coherent skewness fields in the ocean, and from these data it has been concluded that large values of skewness occur primarily in regions of large significant wave height. In the nearshore zone, skewness and asymmetry of surface gravity waves are the main drivers of sediment transport. Skewness and asymmetry. Sinusoidal waves (or linear waves) are waves having equal height and duration during the crest and the trough, and they can be mirrored in both the crest and the trough. Due to nonlinear effects, waves can transform from sinusoidal to a skewed and asymmetric shape. Skewed waves. In probability theory and statistics, skewness refers to a distortion or asymmetry that deviates from a normal distribution. Waves that are asymmetric along the horizontal axis are called skewed waves. Asymmetry along the horizontal axis indicates that the wave crest deviates from the wave trough in terms of duration and height. Generally, skewed waves have a short and high wave crest and a long and flat wave trough. A skewed wave shape results in larger orbital velocities under the wave crest compared to smaller orbital velocities under the wave trough. For waves having the same velocity variance, the ones with higher skewness result in a larger net sediment transport. Asymmetric waves. Waves that are asymmetric along the vertical axis are referred to as asymmetric waves. Wave asymmetry indicates the leaning forward or backward of the wave, with a steep front face and a gentle rear face. A steep front correlates with an upward tilt; a steep back correlates with a downward tilt. The duration and height of the wave crest equal the duration and height of the wave trough. An asymmetric wave shape results in a larger acceleration between trough and crest and a smaller acceleration between crest and trough. Mathematical description. Skewness (Sk) and asymmetry (As) are measures of the wave nonlinearity and can be described in terms of the following parameters: formula_0 formula_1 in which formula_2 is the sea surface elevation, formula_3 is the Hilbert transform and formula_4 denotes averaging. Values for the skewness are positive, with typical values between 0 and 1, where values of 1 indicate high skewness. Values for asymmetry are negative, with typical values between -1.5 and 0, where values of -1.5 indicate high asymmetry. Ursell number. The Ursell number, named after Fritz Ursell, relates the skewness and asymmetry and quantifies the degree of sea surface elevation nonlinearity. Ruessink et al.
defined the Ursell number as: formula_5, where formula_6 is the local significant wave height, formula_7 is the local wavenumber and formula_8 is the mean water depth. The skewness and asymmetry at a certain location nearshore can be predicted from the Ursell number by: formula_9 formula_10 formula_11 For small Ursell numbers, the skewness and asymmetry both approach zero and the waves have a sinusoidal shape; thus, waves with small Ursell numbers do not result in net sediment transport. For formula_12, the skewness is maximum, the asymmetry is small and the waves have a skewed shape. For large Ursell numbers, the skewness approaches 0 and the asymmetry is maximum, resulting in an asymmetric wave shape. In this way, if the wave shape is known, the Ursell number can be estimated, and consequently the size and direction of sediment transport at a certain location can be predicted. Impact on sediment transport. The nearshore zone is divided into the shoaling zone, surf zone and swash zone. In the shoaling zone, the wave nonlinearity increases due to the decreasing depth, and the sinusoidal waves approaching the coast transform into skewed waves. As waves propagate further towards the coast, the wave shape becomes more asymmetric due to wave breaking in the surf zone, until the waves run up on the beach in the swash zone. Skewness and asymmetry are not only observed in the shape of the wave, but also in the orbital velocity profiles beneath the waves. The skewed and asymmetric velocity profiles have important implications for sediment transport in shallow conditions, where they affect both the bedload transport and the suspended load transport. Skewed waves have higher flow velocities under the crest of the waves than under the trough, resulting in a net onshore sediment transport, as the high velocities under the crest are much more capable of moving large sediments. Beneath waves with high asymmetry, the change from onshore to offshore flow is more gradual than from offshore to onshore; sediments are stirred up during peaks in offshore velocity and are transported onshore because of the sudden change in flow direction. The local sediment transport generates nearshore bar formation and provides a mechanism for the generation of three-dimensional features such as rip currents and rhythmic bars. Models including wave skewness and asymmetry. Two different approaches exist to include wave shape in models: the phase-averaged approach and the phase-resolving approach. With the phase-averaged approach, wave skewness and asymmetry are included based on parameterizations. Phase-averaged models incorporate the evolution of the wave spectrum, in frequency and direction, in space and time. Examples of these kinds of models are WAVEWATCH3 (NOAA) and SWAN (TU Delft). WAVEWATCH3 is a global wave forecasting model with a focus on the deep ocean. SWAN is a nearshore model and mainly has coastal applications. Advantages of phase-averaged models are that they compute wave characteristics over a large domain, that they are fast, and that they can be coupled to sediment transport models, which makes them an efficient tool to study morphodynamics. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
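As an illustration of the parameterization above, the sketch below computes the skewness and asymmetry for assumed nearshore wave parameters. The wave height, wavelength and water depth are arbitrary example values, and the logarithm in the expression for B is taken as base-10, as in the original parameterization by Ruessink et al.

```python
import numpy as np

# Skewness Sk and asymmetry As from the Ursell number, Sk = B*cos(psi), As = B*sin(psi).
def ursell(Hm0, k, h):
    """Ursell number Ur = (3/8) * Hm0 * k / (k*h)**3."""
    return 3.0 / 8.0 * Hm0 * k / (k * h) ** 3

def skewness_asymmetry(Ur):
    B = 0.857 / (1.0 + np.exp((-0.471 - np.log10(Ur)) / 0.297))
    psi = np.deg2rad(-90.0 + 90.0 * np.tanh(0.8150 / Ur ** 0.672))
    return B * np.cos(psi), B * np.sin(psi)

# Assumed example: 1 m significant wave height, 50 m wavelength, 3 m water depth.
Hm0, wavelength, h = 1.0, 50.0, 3.0
k = 2.0 * np.pi / wavelength
Ur = ursell(Hm0, k, h)
Sk, As = skewness_asymmetry(Ur)
print(f"Ur = {Ur:.2f}, Sk = {Sk:.2f}, As = {As:.2f}")   # positive Sk, negative As
```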
[ { "math_id": 0, "text": "Sk = \\frac{\\langle\\eta^{3}\\rangle}{\\langle\\eta^2\\rangle^{\\frac{3}{2}}} " }, { "math_id": 1, "text": "As = \\frac{\\langle\\mathcal{H}(\\eta)^{3}\\rangle}{\\langle\\eta^2\\rangle^{\\frac{3}{2}}} " }, { "math_id": 2, "text": "\\eta" }, { "math_id": 3, "text": "\\mathcal{H} " }, { "math_id": 4, "text": "\\langle\\cdot\\rangle" }, { "math_id": 5, "text": "Ur = \\frac{3}{8}\\frac{H_{m0}k}{(kh)^3} " }, { "math_id": 6, "text": "H_{m0} " }, { "math_id": 7, "text": "k" }, { "math_id": 8, "text": "h" }, { "math_id": 9, "text": "Sk = B*\\cos(\\psi)" }, { "math_id": 10, "text": "As = B*\\sin(\\psi)\n" }, { "math_id": 11, "text": "\\textrm{where} \\; B = \\frac{0.857}{1 + \\exp(\\frac{-0.471 - \\log(Ur)}{0.297})}, \\; \\textrm{ and } \\; \\psi = -90^{\\circ} + 90^{\\circ}\\tanh(\\frac{0.8150}{Ur^{0.672}})" }, { "math_id": 12, "text": "Ur \\sim 1" } ]
https://en.wikipedia.org/wiki?curid=67657720
67658164
Ernst Julius Amberg
Swiss mathematician and mountain climber
Ernst Julius Amberg (6 September 1871 – 15 March 1952) was a Swiss mathematician and mountain climber. He is noteworthy as a mountain climber and as one of the organizers of the first International Congress of Mathematicians, held in Zürich in 1897. Biography. Amberg was born on 6 September 1871 in Zürich. He studied mathematics at ETH Zurich, obtaining a "Lehrerdiplom" (teaching diploma) in 1894. He received his doctorate in 1897 from the University of Zurich. His dissertation "Über einen Körper, dessen Zahlen sich rational aus zwei Quadratwurzeln zusammensetzen" (On a field whose elements have the form formula_0, where formula_1 are rational numbers) was supervised by Adolf Hurwitz. As an assistant at ETH Zurich, Amberg was one of the members of the organizing committee of the first International Congress of Mathematicians. In May 1897 he joined a subcommittee that selected plenary speakers. Other members of the subcommittee were Hurwitz, Hermann Minkowski, Karl Geiser and Jérôme Franel. When Johann Jakob Rebstein (because of military service) resigned as the organizing committee's German-language secretary, Amberg replaced him. After leaving ETH Zurich, he became a teacher at the "Kantonsschule" in Frauenfeld (in the canton of Thurgau). He was from 1903 to 1938 a mathematics teacher (as Walter Gröbli's successor) at the "Gymnasium" in Zürich, as well as the "Gymnasium"'s director from 1916 to 1938 (when he retired). &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;In 1912 he also became assistant professor for mathematics and analytic geometry at the ETH, and was promoted to "Titularprofessor" in 1918. Furthermore, he also lectured on teaching skills for mathematics both at the ETH and at the University of Zurich. Amberg lectured at the ETH until his retirement in 1938. During WW II, he was a substitute teacher in various Swiss "Gymnasiums". In addition to his school duties, he worked as an actuary for insurance and reinsurance companies. His dissertation seems to be his only published mathematical research, although he did write about actuarial mathematics and mathematics education. He was a keen mountaineer and headed the Zürich section of the Swiss Alpine Club for six years. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt; Amberg was married but had no children. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "c_0 + c_1 \\sqrt{a} + c_2 \\sqrt{b} + c_3 \\sqrt{ab}" }, { "math_id": 1, "text": "c_0, c_1, c_2, c_3" } ]
https://en.wikipedia.org/wiki?curid=67658164
67661885
Miles-Phillips mechanism
In physical oceanography and fluid mechanics, the Miles-Phillips mechanism describes the generation of wind waves from a flat sea surface by two distinct mechanisms. Wind blowing over the surface generates tiny wavelets. These wavelets develop over time and become ocean surface waves by absorbing the energy transferred from the wind. The Miles-Phillips mechanism is a physical interpretation of these wind-generated surface waves. Both mechanisms are applied to gravity-capillary waves and have in common that waves are generated by a resonance phenomenon. The Miles mechanism is based on the hypothesis that waves arise as an instability of the sea-atmosphere system. The Phillips mechanism assumes that turbulent eddies in the atmospheric boundary layer induce pressure fluctuations at the sea surface. The Phillips mechanism is generally assumed to be important in the first stages of wave growth, whereas the Miles mechanism is important in later stages, where the wave growth becomes exponential in time. History. It was Harold Jeffreys in 1925 who was the first to produce a plausible explanation for the phase shift between the water surface and the atmospheric pressure which can give rise to an energy flux between the air and the water. For the waves to grow, a higher pressure on the windward side of the wave, in comparison to the leeward side, is necessary to create a positive energy flux. Using dimensional analysis, Jeffreys showed that the atmospheric pressure can be expressed as formula_0 where formula_1 is the constant of proportionality, also termed the sheltering coefficient, formula_2 is the density of the atmosphere, formula_3 is the wind speed, formula_4 is the phase speed of the wave and formula_5 is the free surface elevation. The subscript formula_6 is used to make the distinction that no boundary layer is considered in this theory. Expanding this pressure term to the energy transfer yields formula_7 where formula_8 is the density of the water, formula_9 is the gravitational acceleration, formula_10 is the wave amplitude and formula_11 is the wavenumber. With this theory, Jeffreys calculated the sheltering coefficient at a value of 0.3, based on observations of wind speeds. In 1956, Fritz Ursell examined available data on pressure variation in wind tunnels from multiple sources and concluded that the value of formula_1 found by Jeffreys was too large. This result led Ursell to reject Jeffreys' theory. Ursell's work also resulted in new advances in the search for a plausible mechanism for wind-generated waves. These advances led a year later to two new theoretical concepts: the Miles and Phillips mechanisms. Miles' Theory. John W. Miles developed his theory in 1957 for inviscid, incompressible air and water. He assumed that the air can be described as a mean shear flow varying with height above the surface. By solving the hydrodynamic equations for the coupled sea-atmosphere system, Miles was able to express the free surface elevation as a function of wave parameters and sea-atmosphere characteristics as formula_12 where formula_13, formula_14 is the scale parameter, formula_15 is the phase speed of free gravity waves, formula_16 is the wind speed and formula_17 is the angular frequency of the wave.
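A minimal numerical sketch of the exponential amplitude growth implied by the expression above is given below. The wind speed, wavelength, initial amplitude and the value of the scale parameter formula_14 are assumed, order-of-magnitude choices rather than values from Miles' paper, and the linear result is only meaningful while the amplitude remains small.

```python
import numpy as np

# Amplitude growth in Miles' theory: a(t) = a0 * exp(0.5*eps*beta*k*C_w*(U/C_w)**2 * t).
rho_a, rho_w = 1.2, 1025.0          # air and water density (kg/m3)
eps = rho_a / rho_w                 # density ratio epsilon
g = 9.81
wavelength = 10.0                   # assumed wavelength (m)
k = 2.0 * np.pi / wavelength
C_w = np.sqrt(g / k)                # deep-water phase speed (m/s)
U = 5.0                             # assumed wind speed (m/s)
beta = 1.0                          # assumed O(1) scale parameter
a0 = 0.01                           # assumed initial amplitude (m)

rate = 0.5 * eps * beta * k * C_w * (U / C_w) ** 2     # amplitude growth rate (1/s)
for t in (0, 120, 300, 600):                            # time (s)
    print(f"t = {t:4d} s  amplitude = {a0 * np.exp(rate * t):.3f} m")
```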
The wind speed as a function of height was found by integrating the Orr-Sommerfeld equation, with the assumption of a logarithmic boundary layer and of no currents below the sea surface in the equilibrium state: formula_18 where formula_19 is the von Kármán constant, formula_20 is the friction velocity, formula_21 is the Reynolds stress and formula_22 is the roughness length. Furthermore, Miles defined the growth rate formula_23 of the wave energy for arbitrary angles formula_24 between the wind and the waves as formula_25 Miles determined formula_23 in his 1957 paper by solving the inviscid form of the Orr-Sommerfeld equation. He further expanded his theory on the growth rate of wind-driven waves by finding an expression for the dimensionless growth rate formula_26 at a critical height formula_27 above the surface, where the wind speed formula_16 is equal to the phase speed of the gravity waves formula_15: formula_28 with formula_29 the frequency of the wave and formula_30 the amplitude of the vertical velocity field at the critical height formula_27. The first derivative formula_31 describes the shear of the wind velocity field and the second derivative formula_32 describes the curvature of the wind velocity field. This result represents Miles' classical result for the growth of surface waves. It becomes clear that without wind shear in the atmosphere (formula_33), the result from Miles fails, hence the name 'shear instability mechanism'. Even though this theory gives an accurate description of the transfer of energy from the wind to the waves, it also has some limitations. The atmospheric energy input from the wind to the waves is represented by formula_35. Snyder and Cox (1967) were the first to produce a relationship for the experimental growth rate due to atmospheric forcing by use of experimental data. They found formula_36 where formula_37 is the wind speed measured at a height of 10 meters and formula_38 is a spectrum of the JONSWAP form. The JONSWAP spectrum is a spectrum based on data collected during the Joint North Sea Wave Observation Project; it is a variation of the Pierson-Moskowitz spectrum, multiplied by an extra peak enhancement factor formula_34: formula_39 Phillips' Theory. At the same time, but independently from Miles, Owen M. Phillips (1957) developed his theory for the generation of waves based on the resonance between a fluctuating pressure field and surface waves. The main idea behind Phillips' theory is that this resonance mechanism causes the waves to grow when the length of the waves matches the length of the atmospheric pressure fluctuations. This means that the energy will be transferred to the components in the spectrum which satisfy the resonance condition. Phillips determined the atmospheric source term for his theory as follows: formula_40 where formula_41 is the frequency spectrum, with formula_42 the three-dimensional wave number. The strong points of this theory are that waves can grow from an initially smooth surface, so that the initial presence of surface waves is not necessary, and that, contrary to Miles' theory, it predicts that no wave growth can occur if the wind speed is below a certain value. Miles' theory predicts exponential growth of waves with time, while Phillips' theory predicts linear growth with time. The linear growth of the wave is especially observed in the earliest stages of wave growth. For later stages, Miles' exponential growth is more consistent with observations.
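The JONSWAP form quoted above can be evaluated numerically as in the sketch below. The Phillips constant, the peak frequency, and the peak-enhancement value of 3.3 with spectral widths of 0.07 and 0.09 are conventional example values, assumed here rather than taken from this article.

```python
import numpy as np

# JONSWAP frequency spectrum: a Pierson-Moskowitz shape times a peak-enhancement factor.
def jonswap(f, fp=0.1, alpha=0.0081, gamma=3.3, g=9.81):
    sigma = np.where(f <= fp, 0.07, 0.09)                     # spectral width parameter
    pm = alpha * g**2 * (2.0 * np.pi) ** -4 * f ** -5.0 * np.exp(-1.25 * (f / fp) ** -4.0)
    enhancement = gamma ** np.exp(-((f - fp) ** 2) / (2.0 * sigma**2 * fp**2))
    return pm * enhancement

f = np.linspace(0.05, 0.5, 91)            # frequencies (Hz)
S = jonswap(f)
print(f"peak at f = {f[S.argmax()]:.2f} Hz, S_max = {S.max():.2f} m^2/Hz")
```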
References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "p = S \\rho_a (U_{\\infty} - C)^2 \\frac{\\partial \\eta}{\\partial x}" }, { "math_id": 1, "text": "S" }, { "math_id": 2, "text": "\\rho_a" }, { "math_id": 3, "text": "U_{\\infty}" }, { "math_id": 4, "text": "C" }, { "math_id": 5, "text": "\\eta" }, { "math_id": 6, "text": "\\infty" }, { "math_id": 7, "text": "\\frac{\\partial E}{\\partial t} = \\frac{1}{2\\rho_w g} S \\rho_a (U_{\\infty} - C)^2 (ak)^2 C" }, { "math_id": 8, "text": "\\rho_w" }, { "math_id": 9, "text": "g" }, { "math_id": 10, "text": "a" }, { "math_id": 11, "text": "k" }, { "math_id": 12, "text": "\\eta = a \\exp\\left[\\frac{1}{2} \\varepsilon \\beta k C_w \\left(\\frac{U}{C_w}\\right)^2 t \\right] \\exp[i(kx - \\omega t)]" }, { "math_id": 13, "text": "\\varepsilon = (\\rho_a / \\rho_w)" }, { "math_id": 14, "text": "\\beta" }, { "math_id": 15, "text": "C_w" }, { "math_id": 16, "text": "U" }, { "math_id": 17, "text": "\\omega" }, { "math_id": 18, "text": "U(z) = \\frac{u_*}{\\kappa} \\log\\left(1 + \\frac{z}{z_0}\\right)" }, { "math_id": 19, "text": "\\kappa" }, { "math_id": 20, "text": "u_* = (\\tau/\\rho_a)^{1/2}" }, { "math_id": 21, "text": "\\tau" }, { "math_id": 22, "text": "z_0" }, { "math_id": 23, "text": "\\gamma" }, { "math_id": 24, "text": "\\phi" }, { "math_id": 25, "text": " \\gamma = \\varepsilon \\beta \\omega \\left(\\frac{U}{C_w} \\cos\\phi\\right)^2 " }, { "math_id": 26, "text": "\\gamma / \\varepsilon f" }, { "math_id": 27, "text": "z_c" }, { "math_id": 28, "text": "\\frac{\\gamma}{\\varepsilon f} = -\\frac{\\pi}{2k}|\\chi|^2 \\frac{\\left(\\frac{\\partial^2 U}{\\partial z^2}\\right)_{z=z_c}}{\\left(\\frac{\\partial U}{\\partial z}\\right)_{z=z_c}}" }, { "math_id": 29, "text": "f" }, { "math_id": 30, "text": "\\chi" }, { "math_id": 31, "text": "U'(z)" }, { "math_id": 32, "text": "U''(z)" }, { "math_id": 33, "text": "U'(z) = 0" }, { "math_id": 34, "text": "\\beta^r" }, { "math_id": 35, "text": "S_{in}" }, { "math_id": 36, "text": "S_{in}(f,\\phi) = \\varepsilon \\beta \\omega \\left(\\frac{U_{10}}{C}\\cos\\phi-1\\right)^2 F(f,\\phi)" }, { "math_id": 37, "text": "U_{10}" }, { "math_id": 38, "text": "F(f,\\phi)" }, { "math_id": 39, "text": " F(f) = \\alpha g^2 (2\\pi)^{-4} f^{-5}\\exp\\left[{-\\frac{5}{4}\\left(\\frac{f}{f_p}\\right)^{-4}}\\right]\\cdot \\beta^{\\exp\\left[\\frac{-(f-f_p)^2}{2\\sigma^2 f_p^2}\\right]}" }, { "math_id": 40, "text": "S_{in}(f,\\phi)=\\frac{2\\pi^2\\omega}{\\rho_w^2 C^3 C_g} \\Pi(\\mathbf{k},\\omega)" }, { "math_id": 41, "text": "\\Pi(\\mathbf{k},\\omega)" }, { "math_id": 42, "text": "\\mathbf{k}" } ]
https://en.wikipedia.org/wiki?curid=67661885
67662668
Atlantification of the Arctic
Atlantification is the increasing influence of Atlantic water in the Arctic. Warmer and saltier Atlantic water is extending its reach northward into the Arctic Ocean. The Arctic Ocean is becoming warmer and saltier, and sea-ice is disappearing as a result. The process can be seen in the figure on the far right, which shows the sea surface temperature change over the past 50 years, up to 5 degrees in some places. This change in the Arctic climate is most prominent in the Barents Sea, a shallow shelf sea north of Scandinavia, where sea-ice is disappearing faster than in any other Arctic region, impacting the local and global ecosystem. Structure of the Arctic Ocean. The largest part of the Arctic Ocean has a strong division between ocean layers. At the top is a mixed layer of fresh water with a temperature near the freezing point and a salinity of around 30 psu (practical salinity unit). This water is fed by rivers and the melting of sea-ice. Underneath this fresh water is a layer where the salinity increases strongly but the temperature remains low: the cold halocline layer. Below this layer, the temperature increases with depth to above the freezing point. The layer which holds this temperature gradient is called the pycnocline layer. The water underneath is warm and salty, carried in from the Atlantic Ocean by the Atlantic Meridional Overturning Circulation (AMOC). This layer is warmer than the surface layer, but because of its salinity it has a higher density than the water above, and it is therefore less buoyant than the surface layer. The cold fresh water therefore floats on top, and the halocline, across which mixing tends to be weak even under ice-free conditions, protects the surface from the heat in the Atlantic water. Under the Atlantic water layer is a deep layer of Arctic bottom water extending to the bottom of the ocean. Process of Atlantification. The increasing influence of Atlantic water flowing into the Arctic Ocean and the loss of stratification cause the warm Atlantic water to mix with the fresh water at the surface. As can be seen in the figure below, the halocline weakens and therefore heat from the Atlantic water reaches the surface. This warming of the surface water causes a retreat of sea-ice in winter and a total absence of sea-ice in summer. The loss of winter sea-ice means that in summer the colder layer of freshwater at the surface is less replenished by melting ice, lessening the temperature difference between the layers. Also, a lack of sea-ice increases the influence of wind on the sea surface, mixing the layers further. Model predictions show neither an upward trend in volume transport into the Arctic from the North Atlantic nor an increase in the temperature of the inflowing water, leading some to conclude that the Atlantification of the Arctic is not caused by a process in the Atlantic Ocean but rather by atmospheric forcing in the Arctic region, amplified by sea-ice loss. However, observations show a regime shift from winter sea-ice cover to open water in the southern Barents Sea in response to the warming of the inflowing Atlantic water. Observations also reveal the increasing influence of Atlantic water heat further to the east, in the eastern Eurasian Basin, where in recent years the heat flux from the Atlantic water towards the surface has overtaken the atmospheric contribution in this region.
Furthermore, an observed weakening of the halocline over this period coincided with increasing wind-driven upper-ocean currents, pointing to increased mixing. Consequences. At the moment, the largest part of inflowing heat from the Atlantic Ocean is lost to the atmosphere within the Barents Sea. It is expected, though, that the temperature in the Barents Sea will increase due to changes in the interaction with the atmosphere. As a result, the water flowing out from the Barents Sea in between Franz Josef Land and Novaya Zemlya (Barents Sea exit) will warm significantly, from -0.2formula_0C to 2.2formula_0C by 2080. This shows that warm Atlantic water will penetrate further into the Arctic Ocean, ultimately extending throughout the Eurasian basin and leading to a reduction in sea-ice thickness in this region. Organisms. Atlantification, as part of the changing Arctic climate, has major consequences for all organisms living there. Due to the warming of the Barents Sea, phytoplankton blooms are moving further into the Eurasian Basin each year. Typical species have moved 5 degrees further north compared to 1989. Also, fish communities are moving northward at the pace of the local climate change, a process called borealization. Some predators now reach areas that were previously not warm enough for them, changing the ecological systems of the Arctic. As a result, Arctic shelf fish are being expelled and retreat northwards as well. For some species, depth might limit their options and this will induce changes in the biodiversity of the Arctic Ocean. This change in the marine ecosystem also influences the bird and mammal species living in the Arctic region. Sea birds, seals and whales depend directly on the fish populations. Land mammals like polar bears live on seals and are also strongly dependent on the sea-ice to live on. Tipping point. There are growing concerns that the Arctic climate might be moving to a so-called tipping point, meaning that if a critical point is reached, the system will settle around a different equilibrium state. In the Arctic this different state could be one with much less or no sea-ice. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "^{\\circ}" } ]
https://en.wikipedia.org/wiki?curid=67662668
67672691
Open ocean convection
Mixing of seawater at different depths by mesoscale ocean circulation and large, strong winds
Open ocean convection is a process in which the mesoscale ocean circulation and large, strong winds mix layers of water at different depths. Fresher water lying over the saltier or warmer over the colder leads to the stratification of water, or its separation into layers. Strong winds cause evaporation, so the ocean surface cools, weakening the stratification. As a result, the surface waters are overturned and sink while the "warmer" waters rise to the surface, starting the process of convection. This process has a crucial role in the formation of both bottom and intermediate water and in the large-scale thermohaline circulation, which largely determines global climate. It is also an important phenomenon that controls the intensity of the Atlantic Meridional Overturning Circulation (AMOC). Convection exists under certain conditions which are promoted by strong atmospheric forcing due to thermal or haline surface fluxes. This may be observed in oceans adjacent to boundaries with either dry and cold winds above or ice, inducing large latent heat and moisture fluxes. Ocean convection depends on the weakness of stratification under the surface mixed layer. These stratified water layers must rise near to the surface, resulting in their direct exposure to intense surface forcing. Major convection sites. Deep convection is observed in the subpolar North Atlantic (the Greenland Sea and the Labrador Sea), in the Weddell Sea in the southern hemisphere as well as in the northwestern Mediterranean. In sub-polar regions, the upper mixed layer starts deepening in late autumn and continues to do so until early spring, when the convection is at its deepest level, before the phenomenon weakens. The weak density stratification of the Labrador Sea is observed each wintertime, at depths between 1000 and 2000 m, making it one of the most extreme ocean convection sites in the world. The deep convection in the Labrador Sea is significantly affected by the North Atlantic Oscillation (NAO). In winter, when the NAO is in its positive phase above this region, the cyclonic activity is greater over the North Atlantic with an enhanced circulation of cold and dry air. During this positive phase of the NAO, the oceanic heat loss from the Labrador Sea is higher, contributing to deeper convection. According to Holdsworth et al. (2015), during the negative phase of the NAO, which is associated with an absence of high frequency forcing, the average maximum mixed layer depth decreases more than 20%. The Greenland Sea differs from the Labrador Sea because of the important role of ice in preconditioning during the months November until February. In early winter, the ice spreads eastward across the central Greenland Sea, and brine rejection under the ice increases the surface layer density. In March, when preconditioning is far enough advanced, and the meteorological conditions are favourable, deep convection develops. In the northwestern Mediterranean Sea, deep convection occurs in winter, when the water undergoes the necessary preconditioning with air-sea fluxes inducing buoyancy losses at the surface. In winter, the Gulf of Lions is regularly subject to atmospheric forcing under the intense cold winds Tramontane and Mistral, inducing strong evaporation and an intense cooling of surface waters. This leads to buoyancy losses and vertical deep mixing. The convection in the Weddell Sea is mostly associated with polynyas. According to Akitomo et al.
(1995), Arnold L. Gordon was the first to find the remnant of deep convection near the Maud Rise in 1977. This deep convection was probably accompanied by a large polynya which had been appearing in the central Weddell Sea every winter during 1974-76. Additionally, according to Van Westen and Dijkstra (2020), the formation of the Maud Rise polynya which was observed in 2016 is associated with subsurface convection. In particular, the Maud Rise region undergoes preconditioning due to the accumulation of subsurface heat and salt, leading to convection and favoring polynya formation. Phases of convection. Ocean convection is distinguished by three phases: preconditioning, deep convection and lateral exchange and spreading. Preconditioning refers to a period during which a cyclonic gyre-scale circulation and buoyancy forcing are combined to predispose a convective site to locally overturn. A site is preconditioned when a laterally extended deep region of relatively weak vertical density stratification exists there, and it is capped by a locally shallow thermocline. Cooling events lead to the second phase, deep convection, in which a part of the fluid column may overturn in numerous plumes that distribute the dense surface water along the vertical axis. These plumes form a homogeneous deep chimney. During this phase, the chimney deepens through plume-scale overturning and adjusts geostrophically. Additionally, at some point in time, the sea-surface buoyancy loss is completely offset through lateral buoyancy transfer by baroclinic eddies which are generated at the periphery of the convective regime, and thus a quasi-steady state can be achieved. Once the surface forcing decreases, the vertical heat transfer due to convection abates, leading to horizontal transfer associated with eddying on the geostrophic scale. The balance between sea-surface forcing and lateral eddy buoyancy flux becomes unstable. Due to gravity and planetary rotation, the mixed fluid disperses and spreads out, leading to the decay of the chimney. The residual pieces of the “broken” chimney are named cones. The lateral exchange and spreading are also known as the restratification phase. If surface conditions deteriorate again, deep convection can reinitiate while the remaining cones can form preferential centers for further deep convective activity. Phenomena involved in convection. Deep convection is distinguished into small-scale and mesoscale processes. Plumes represent the smallest-scale process while chimneys (patches) and eddies represent the mesoscale. Plumes. Plumes are the initial convectively driven vertical motions which are formed during the second phase of convection. They have horizontal scales between 100 m and 1 km and their vertical scale is around 1–2 km, with vertical velocities of up to 10 cm/s which are measured by acoustic Doppler current profilers (ADCPs). Time scales associated with convective plumes are reported to be several hours to several days. The plumes act as “conduits” or as “mixing agents” in terms of their dynamical role. If they act as “conduits”, they transport cooled and dense surface water downward. This is the main mechanism of water transport toward greater depths and of its renewal. However, plumes can act as “mixing agents” rather than as downward carriers of a flow. In this case, the convection cools and mixes a patch of water, creating a dense homogeneous cylinder, like a chimney, which ultimately collapses and adjusts under planetary rotation and gravity.
Coriolis force and thermobaricity are important in deep convective plumes. Thermobaricity is the effect whereby sinking cold saline water is formed under freezing conditions, resulting in downward acceleration. Additionally, many numerical and tank modeling experiments examine the role of rotation in the processes of the convection and in the morphology of the plumes. According to Paluszkiewicz et al. (1994), planetary rotation does not affect the individual plumes vertically, but does so horizontally. Under the influence of rotation, the diameter of the plumes becomes smaller compared to the diameter of the plumes in the absence of rotation. In contrast, chimneys and associated eddies are dominated by the effects of rotation due to thermal wind. Convection patch (or “Chimney”). The convective overturning of the water column occurs through a contribution of a high number of intense plumes, which strongly mix the column. The plumes can process large volumes of fluid to form what has become known as a “chimney” of homogenized fluid. These vertically isolated homogenized water columns have a diameter of 10 to 50 km and they are 1–2 km deep. Surface waters growing denser and sinking drive the initial deepening stage, while the final deepening stage and the restratification phase are affected by a buoyancy transfer, through the lateral surface of the chimney, by baroclinic eddies. Seasonality. The chimneys of deep convection remain open for one to three months, during winter, in a quasi-stable state, whereas they can collapse within a few weeks. The chimneys are destroyed in early spring, when the sea-surface buoyancy flux weakens and reverses while the stratification of the water layers under the mixed layer starts becoming stable. Formation. Formation of the convection chimneys is preconditioned by two processes: strong heat fluxes from the sea-surface and cyclonic circulation. A chimney is formed when a relatively strong buoyancy flux from the ocean’s surface exists for at least 1 to 3 days. The time, depth and diameter development of the chimney clearly depend on the buoyancy flux and the stratification of the surrounding ocean. As the surface water cools, it becomes denser and it overturns, forming a convectively modified layer of depth formula_0. In the center of the chimney, the mixed layer deepens, and its depth as a function of time is computed as described below. During the initial stage of the intensive deepening of the chimney, when the baroclinic instability effects are assumed to be unimportant, the depth can be found as a function of time using the buoyancy forcing. The buoyancy is defined as:formula_1Where formula_2 is the acceleration due to gravity, formula_3 the potential density and formula_4a constant reference value of density. The buoyancy equation for the mixed layer is:formula_5Where formula_6 is the buoyancy and formula_7 the buoyancy forcing. The buoyancy forcing is equal to formula_8where formula_9 is the buoyancy loss. As a simplification, the assumption that the buoyancy loss is constant in time (formula_10) is used. Neglecting horizontal advection and integrating the above equation over the mixed layer, we obtain:formula_11For a uniformly stratified fluid, the square of the buoyancy frequency is equal to:formula_12Therefore, the classical result for the nonpenetrative deepening of the upper mixed layer is:formula_13 The chimney evolution equation.
As time progresses and the baroclinic instability effects are becoming important, the time evolution of the chimney cannot be described only by the buoyancy forcing. The maximum depth that a convection chimney reaches must be found using the chimney evolution equation instead. Following Kovalevsky et al. (2020) and Visbeck et al. (1996), consider a chimney of radius formula_14 and a time-dependent height formula_15. The driving force of the chimney deepening is the surface buoyancy loss formula_10, which causes convective overturning leading to homogeneously mixed fluid in the interior of the chimney. Assuming that the density at the base of the chimney is continuous, the buoyancy anomaly formula_16 of a particle which is displaced a distance Δz within the chimney is:formula_17According to Kovalevsky et al. (2020) the buoyancy budget equation is:formula_18The left-hand side represents the time evolution of the total buoyancy anomaly accumulated in the time-dependent chimney volume formula_19. The first and second term on the right-hand side correspond to the total buoyancy loss from the sea-surface over the chimney and the buoyancy transfer between the interior of the chimney and the baroclinic eddies, respectively. Initially, the total buoyancy depends only on the total buoyancy loss through the sea-surface above the chimney. As time progresses, the buoyancy loss through the sea-surface above the chimney becomes partially balanced by the lateral buoyancy exchange between the chimney and the baroclinic eddies, through the chimney’s side walls. Visbeck et al. (1996), by using a suggestion by Green (1970) and Stone (1972), parameterized the eddy flux formula_20as:formula_21Where formula_22 is a constant of proportionality to be determined by observations and laboratory modeling. The variable formula_23represents the fluctuations of the horizontal current velocity component perpendicular to the side-walls of the chimney while, following Visbeck et al. (1996), formula_24 is equal to:formula_25 Decay. If the buoyancy loss is maintained for a sufficient time period, then the sea-surface cooling weakens and the restratification phase starts. In the surroundings of the convective regime, the stratification takes up an ambient value, while in the center of the chimney the stratification is eroded away. As a result, around the periphery of the chimney, the isopycnal surfaces deviate from their resting level, tilting towards the ocean’s surface. Associated with the tilting isopycnal surfaces, a thermal wind is set up, generating the rim current around the edge of the convection regime. This current must be in thermal wind balance with the density gradient between the chimney’s interior and exterior. The width of the rim current’s region and its baroclinic zone will initially be of the order of the Rossby radius of deformation. The existence of the rim current plays an important role for the chimney’s collapse. At the center of the chimney, the mixed layer will deepen as formula_26, until the growing baroclinic instability begins to carry convected fluid outward while water from the exterior flows into the chimney. At this moment, the rim current around the cooling region becomes baroclinically unstable and buoyancy is transferred laterally by the instability eddies. If the eddies are intense enough, deepening of the chimney will be limited.
In this limit, when the lateral buoyancy flux completely balances the sea-surface buoyancy loss, a quasi-steady state can be established:formula_27By solving the above equation, the final depth of the convective chimney can be found to be:formula_28 Consequently, the final mixing depth depends on the strength of the cooling, the radius of the cooling and the stratification. Therefore, the final mixing depth is not directly dependent on the rate of rotation. However, baroclinic instability is a consequence of the thermal wind, which is crucially dependent on rotation. The length scale of baroclinic eddies, assumed to be set by the Rossby radius of deformation, scales as: formula_29 This scale does depend on the rate of rotation "f" but is independent of the ambient stratification. The least time that the chimney needs to reach the quasi-equilibrium state is equivalent to the time that it needs to reach the depth formula_30 and it is equal to:formula_31The final timescale is independent of the rate of rotation, increases with the radius of the cooling region r and decreases with the surface buoyancy flux Bo. According to Visbeck et al. (1996), the constants of proportionality γ and β are found to be equal to 3.9 ± 0.9 and 12 ± 3 respectively, through laboratory experiments. Cones. Finally, the cooling of the surface as well as convective activity cease. Therefore, the chimney of homogenized cold water erodes into several small conical structures, named cones, which propagate outward. The cones travel outward, carrying cold water far from the area of cooling. As time progresses and the cones disperse, the magnitude of the rim current diminishes. The currents associated with the cones are intensified and cyclonic at the surface, whereas they are weaker and anticyclonic at depth. Effects of global warming on ocean convection. Deep convective activity in the Labrador Sea has decreased and become shallower since the beginning of the 20th century due to low-frequency variability of the North Atlantic oscillation. A warmer atmosphere warms the surface waters so that they do not sink to mix with the colder waters below. The resulting decline does not occur steeply but stepwise. Specifically, two severe drops in deep convective activity have been recorded, during the 1920s and the 1990s. Similarly, in the Greenland Sea, shallower deep mixed layers have been observed over the last 30 years due to the weakening of wintertime atmospheric forcing. The melting of the Greenland ice sheet could also contribute to an even earlier cessation of deep convection. Surface waters freshened by enhanced meltwater from the Greenland Ice Sheet are less dense, making it more difficult for oceanic convection to occur. Reduction of the deep wintertime convective mixing in the North Atlantic results in a weakening of the AMOC. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
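The scalings above can be illustrated with a short numerical sketch. All input values below are assumed, order-of-magnitude numbers chosen only for illustration (they are not observations); the constants γ ≈ 3.9 and β ≈ 12 are the laboratory values quoted above.

```python
import numpy as np

# Assumed, order-of-magnitude inputs for a wintertime convection site
# (illustrative only, not observed values).
B0    = 1e-7      # surface buoyancy loss [m^2 s^-3]
N     = 1e-3      # ambient buoyancy frequency [s^-1]
r0    = 50e3      # radius of the cooling region [m]
f     = 1.2e-4    # Coriolis parameter at subpolar latitudes [s^-1]
gamma = 3.9       # constant of proportionality (laboratory value)
beta  = 12.0      # constant of proportionality (laboratory value)

# Nonpenetrative deepening before eddies matter: h(t) = sqrt(2*B0*t)/N
for days in (5, 10, 20, 40):
    t = days * 86400.0
    h = np.sqrt(2.0 * B0 * t) / N
    print(f"day {days:2d}: h = {h:6.0f} m")

# Quasi-equilibrium chimney depth, eddy length scale and adjustment time
h_final = gamma * (B0 * r0) ** (1.0 / 3.0) / N
L_eddy  = gamma * (B0 * r0) ** (1.0 / 3.0) / f
t_final = beta * (r0 ** 2 / B0) ** (1.0 / 3.0)

print(f"h_final ~ {h_final:.0f} m")
print(f"L_eddy  ~ {L_eddy / 1e3:.1f} km")
print(f"t_final ~ {t_final / 86400:.0f} days")
```

With these assumed inputs the sketch gives a final mixing depth of a few hundred metres, eddy scales of a few kilometres and an adjustment time of several weeks, consistent with the orders of magnitude discussed above.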
[ { "math_id": 0, "text": "h" }, { "math_id": 1, "text": "b=-g\\frac{\\sigma}{\\rho_o}" }, { "math_id": 2, "text": "g" }, { "math_id": 3, "text": "\\sigma" }, { "math_id": 4, "text": "\\rho_o" }, { "math_id": 5, "text": "{Db \\over Dt}=F_B" }, { "math_id": 6, "text": "b" }, { "math_id": 7, "text": "F_B" }, { "math_id": 8, "text": "\\frac{\\partial{B}}{\\partial{z}}" }, { "math_id": 9, "text": "B" }, { "math_id": 10, "text": "B_o" }, { "math_id": 11, "text": "{\\partial{b}\\over\\partial z}{\\partial{h}\\over\\partial t}=\\frac{B_o}{h}" }, { "math_id": 12, "text": "N^2=\\frac{\\partial{b}}{\\partial{z}}" }, { "math_id": 13, "text": "h=\\frac{\\sqrt{2B_ot}}{N}\n" }, { "math_id": 14, "text": "r_o" }, { "math_id": 15, "text": "h(t)" }, { "math_id": 16, "text": "b'" }, { "math_id": 17, "text": "b'=N^2(z-h(t)),\\;\\;\\;\\;\\;\\;\\;\\;\\;0 < z < h(t)" }, { "math_id": 18, "text": "{d \\over dt}\\int_{V(t)}^{} b' dV=\\int_{S_{top}}^{} B_odS-\\int_{S_{lat}}^{} \\overline{u'b'}dS" }, { "math_id": 19, "text": "V(t)" }, { "math_id": 20, "text": "\\overline{u'b'}" }, { "math_id": 21, "text": "\\overline{u'b'}=\\alpha'\\frac{\\overline{(b')^2}}{N}" }, { "math_id": 22, "text": "\\alpha'" }, { "math_id": 23, "text": "u'" }, { "math_id": 24, "text": "\\overline{(b')^2}" }, { "math_id": 25, "text": "\\overline{(b')^2}=N^4h^2" }, { "math_id": 26, "text": "\\sqrt{t}" }, { "math_id": 27, "text": "\\int_{S_{top}}^{} B_odS=\\int_{S_{lat}}^{} \\overline{u'b'}dS" }, { "math_id": 28, "text": "h_{final}=\\gamma\\frac{(B_or_o)^{1/3}}{N},\\;\\;\\;\\;\\;\\;\\;\\;\\; \\gamma=\\left(\\frac{1}{2\\alpha'}\\right)^{1/3}\n" }, { "math_id": 29, "text": "L_{eddy,final}=\\frac{Nh_{final}}{f}=\\gamma\\frac{(B_or_o)^{1/3}}{f}\n" }, { "math_id": 30, "text": "h_{final}" }, { "math_id": 31, "text": "t_{final}=\\beta\\left(\\frac{r_o^2}{B_o}\\right)^{1/3},\\;\\;\\;\\;\\;\\;\\;\\;\\; \\beta=\\frac{\\gamma^2}{2}\n" } ]
https://en.wikipedia.org/wiki?curid=67672691
67678079
Landau kinetic equation
The Landau kinetic equation is a transport equation of weakly coupled charged particles performing Coulomb collisions in a plasma. The equation was derived by Lev Landau in 1936 as an alternative to the Boltzmann equation in the case of Coulomb interaction. When used with the Vlasov equation, the equation yields the time evolution for collisional plasma, hence it is considered a staple kinetic model in the theory of collisional plasma. Overview. Definition. Let formula_0 be a one-particle distribution function. The equation reads: formula_1 formula_2 The right-hand side of the equation is known as the Landau collision integral (in parallel to the Boltzmann collision integral). formula_3 is obtained by integrating over the intermolecular potential formula_4: formula_5 formula_6 For many intermolecular potentials (most notably power laws where formula_7), the expression for formula_3 diverges. Landau's solution to this problem is to introduce cutoffs at small and large angles. Uses. The equation is used primarily in statistical mechanics and particle physics to model plasma. As such, it has been used to model and study plasma in thermonuclear reactors. It has also seen use in the modeling of active matter. The equation and its properties have been studied in depth by Alexander Bobylev. Derivations. The first derivation was given in Landau's original paper. The rough idea for the derivation: assuming a spatially homogeneous gas of point particles with unit mass described by "formula_0", one may define a corrected potential for Coulomb interactions, formula_8, where formula_9 is the Coulomb potential, formula_10, and formula_11 is the Debye radius. The potential formula_12 is then plugged into the Boltzmann collision integral (the collision term of the Boltzmann equation), which is solved for the main asymptotic term in the limit formula_13. In 1946, the first formal derivation of the equation from the BBGKY hierarchy was published by Nikolay Bogolyubov. The Fokker–Planck–Landau equation. In 1957, the equation was derived independently by Marshall Rosenbluth. Solving the Fokker–Planck equation under an inverse-square force, one may obtain: formula_14 where formula_15 are the Rosenbluth potentials: formula_16 formula_17 for formula_18 The Fokker–Planck representation of the equation is primarily used for its convenience in numerical calculations. The relativistic Landau kinetic equation. A relativistic version of the equation was published in 1956 by Gersh Budker and Spartak Belyaev. Considering relativistic particles with momentum formula_19 and energy formula_20, the equation reads: formula_21 where the kernel is given by formula_22 such that: formula_23 formula_24 formula_25 A relativistic correction to the equation is relevant seeing as particles in hot plasmas often reach relativistic speeds. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
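As a rough numerical illustration of the divergence mentioned above, the sketch below evaluates the integral defining formula_3 for a Debye-screened Coulomb potential, whose Fourier transform is 4πq²/(k² + k_D²). The charge, the screening length and the placement of the upper cutoff are arbitrary assumptions, and the constant prefactor follows the convention of the formula above only schematically; the point of the sketch is the logarithmic growth with the upper cutoff.

```python
import numpy as np
from scipy.integrate import quad

# Arbitrary illustrative parameters (order unity, schematic units).
q2 = 1.0                 # squared charge
kD = 1.0                 # inverse Debye radius (screening wavenumber)

def U_hat(k):
    # Fourier transform of the screened Coulomb potential (q^2/r) e^{-kD r}
    return 4.0 * np.pi * q2 / (k ** 2 + kD ** 2)

def B_up_to(k_max):
    """Numerically evaluate (1/8pi) * int_0^{k_max} k^3 U_hat(k)^2 dk."""
    val, _ = quad(lambda k: k ** 3 * U_hat(k) ** 2, 0.0, k_max)
    return val / (8.0 * np.pi)

# The integrand behaves like 1/k at large k, so without an upper cutoff the
# integral grows logarithmically: this is the kind of divergence that the
# cutoffs mentioned in the article are introduced to remove.
for k_max in (1e1, 1e2, 1e3, 1e4):
    print(f"k_max = {k_max:8.0f}:  B = {B_up_to(k_max):7.2f}")
```

Each additional decade in the cutoff adds a roughly constant amount to the result, which is the numerical signature of the logarithmic divergence.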
[ { "math_id": 0, "text": "f(v, t)" }, { "math_id": 1, "text": "\\frac{\\partial f}{\\partial t} = B \\frac{\\partial}{\\partial v_i}\\left(\\int_{\\R^3}dw \\frac{\\left(u^2 \\delta_{ij}-u_iu_j\\right)}{u^3}\\left(\\frac{\\partial}{\\partial v_j} - \\frac{\\partial}{\\partial w_j}\\right)f(v)f(w)\\right)\n" }, { "math_id": 2, "text": "u = v - w" }, { "math_id": 3, "text": "B" }, { "math_id": 4, "text": "U(r)" }, { "math_id": 5, "text": "B = \\frac{1}{8 \\pi}\\int_0^\\infty dr \\, r^3 \\hat{U}(r)^2" }, { "math_id": 6, "text": "\\hat{U}(|k|) = \\int_{\\R^3} dx \\, U(|x|) e^{ikx}" }, { "math_id": 7, "text": "U(r) \\propto \\frac{1}{r^n}" }, { "math_id": 8, "text": "\\hat{U}_{ij} = U_{ij} \\exp\\left(-\\frac{r_{ij}}{r_D}\\right)" }, { "math_id": 9, "text": "U_{ij}" }, { "math_id": 10, "text": "U_{ij} = \\frac{e_i e_j}{|x_i - x_j|}" }, { "math_id": 11, "text": "r_D" }, { "math_id": 12, "text": "\\hat{U_{ij}}" }, { "math_id": 13, "text": "r_D \\rightarrow \\infin" }, { "math_id": 14, "text": "\\frac{1}{4 \\pi L} \\frac{\\partial f_i}{\\partial t} = \\frac{\\partial}{\\partial v_{\\alpha}} \\left(-f_i \\frac{\\partial h_i}{\\partial v_{\\alpha}}+\\frac{1}{2} \\frac{\\partial}{\\partial v_{\\beta}} \\left(f_i \\frac{\\partial^2 g_i}{\\partial v_{\\alpha} \\partial v_{\\beta}}\\right)\\right)" }, { "math_id": 15, "text": "h_i, g_i" }, { "math_id": 16, "text": "h_i = \\sum^n_{j=1} K_{ij} \\int dw \\frac{f_i(w, t)}{|v-w|}" }, { "math_id": 17, "text": "g_i = \\sum^n_{j=1} K_{ij} \\frac{m_j}{m_i} \\int dw \\frac{f_i(w, t)}{|v-w|}" }, { "math_id": 18, "text": "K_{ij} = \\frac{e_i^2 e_j^2}{m_i m_j}, i = 1, 2, \\dots, n" }, { "math_id": 19, "text": "p = (p^1, p^2, p^3) \\in \\mathbb{R}^3" }, { "math_id": 20, "text": "p^0 = \\sqrt{1+|p|^2}" }, { "math_id": 21, "text": "\\frac{\\partial f}{\\partial t} = \\frac{\\partial}{\\partial p_i}\\int_{\\R^3} dq \\, \\Phi^{ij}(p,\\ q) \\left[h(q)\\frac{\\partial}{\\partial p_j}g(p)-\\frac{\\partial}{\\partial q_j}h(q)g(p)\\right]" }, { "math_id": 22, "text": "\\Phi^{ij} = \\Alpha(p, q)S^{ij}(p, q)" }, { "math_id": 23, "text": "\\Alpha = \\frac{\\left(\\rho_- + 1\\right)^2}{p^0 q^0} \\left(\\rho_+ \\rho_-\\right)^{-3/2}" }, { "math_id": 24, "text": "S^{ij} = \\rho_+ \\rho_- \\delta_{ij} - \\left(p_i-q_i\\right)\\left(p_j-q_j\\right)+\\rho_-\\left(p_i q_j + p_j q_i\\right)" }, { "math_id": 25, "text": "\\rho_{\\pm} = p^0 q^0 - pq \\pm 1" } ]
https://en.wikipedia.org/wiki?curid=67678079
6768103
Intrabeam scattering
Intrabeam scattering (IBS) is an effect in accelerator physics where collisions between particles couple the beam emittance in all three dimensions. This generally causes the beam size to grow. In proton accelerators, intrabeam scattering causes the beam to grow slowly over a period of several hours. This limits the luminosity lifetime. In circular lepton accelerators, intrabeam scattering is counteracted by radiation damping, resulting in a new equilibrium beam emittance with a relaxation time on the order of milliseconds. Intrabeam scattering creates an inverse relationship between the smallness of the beam and the number of particles it contains, therefore limiting luminosity. The two principal methods for calculating the effects of intrabeam scattering were developed by Anton Piwinski in 1974 and by James Bjorken and Sekazi Mtingwa in 1983. The Bjorken-Mtingwa formulation is regarded as being the most general solution. Both of these methods are computationally intensive. Several approximations of these methods have been derived that are easier to evaluate, but less general. These approximations are summarized in "Intrabeam scattering formulas for high energy beams" by K. Kubo "et al." Intrabeam scattering rates have a formula_0 dependence. This means that its effects diminish with increasing beam energy. Other ways of mitigating IBS effects are the use of wigglers, and reducing beam intensity. Transverse intrabeam scattering rates are sensitive to dispersion. Intrabeam scattering is closely related to the Touschek effect. The Touschek effect is a lifetime based on intrabeam collisions that result in both particles being ejected from the beam. Intrabeam scattering is a risetime based on intrabeam collisions that result in momentum coupling. Bjorken–Mtingwa formulation. The betatron growth rates for intrabeam scattering are defined as formula_1, formula_2, formula_3. The following is general to all bunched beams, formula_4, where formula_5, formula_6, and formula_7 are the momentum spread, horizontal, and vertical betatron growth times. The angle brackets ⟨...⟩ indicate that the integral is averaged around the ring. formula_8 formula_9 formula_10 formula_11 formula_12 formula_13 formula_14 formula_15 Definitions: formula_16 is the classical radius of the particle; formula_17 is the speed of light; formula_18 is the number of particles per bunch; formula_19 is velocity divided by the speed of light; formula_20 is energy divided by mass; formula_21 and formula_22 are the betatron function and its derivative, respectively; formula_23 and formula_24 are the dispersion function and its derivative, respectively; formula_25 is the emittance; formula_26 is the bunch length; formula_27 is the momentum spread; formula_28 and formula_29 are the minimum and maximum impact parameters. The minimum impact parameter is the closest distance of approach between two particles in a collision. The maximum impact parameter is the largest distance between two particles such that their trajectories are unaltered by the collision. The maximum impact parameter should be taken to be the minimum beam size. Analyses of the Coulomb log and support for this choice can be found in the literature. formula_30 is the minimum scattering angle. Equilibrium and growth rate sum rule. IBS can be seen as a process in which the different "temperatures" try to equilibrate. The growth rates would be zero in the case that formula_31, in which the factor of formula_20 comes from the Lorentz transformation.
From this equation, we see that due to the factor of formula_20, the longitudinal is typically much "colder" than the transverse. Thus, we typically get growth in the longitudinal, and shrinking in the transverse. One may also express the conservation of energy in IBS in terms of the Piwinski invariant formula_32, where formula_33. Above transition, with just IBS, this implies that there is no equilibrium. However, for the case of radiation damping and diffusion, there is certainly an equilibrium. The effect of IBS is to cause a change in the equilibrium values of the emittances. Inclusion of coupling. In the case of a coupled beam, one must consider the evolution of the coupled eigenemittances. The growth rates are generalized to formula_34 Measurement and comparison with theory. Intrabeam scattering is an important effect in the proposed "ultimate storage ring" light sources and lepton damping rings for the International Linear Collider (ILC) and the Compact Linear Collider (CLIC). Experimental studies aimed at understanding intrabeam scattering in beams similar to those used in these types of machines have been conducted at KEK, CesrTA, and elsewhere. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
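As a sketch of how the Bjorken–Mtingwa expression can be evaluated in practice, the snippet below builds the matrices L(p), L(h) and L(v) at a single lattice point and performs the λ integral numerically. All lattice and beam values, the Lorentz factor and the value of the Coulomb log are assumptions chosen only for illustration, and the ring average is replaced by a single sample point, so the printed numbers are not a prediction for any real machine.

```python
import numpy as np
from scipy.integrate import quad

# Assumed, illustrative beam and lattice parameters at one point of a ring
# (a real calculation averages the integral around the whole ring).
r0      = 1.535e-18          # classical proton radius [m]
c       = 2.998e8            # speed of light [m/s]
Npart   = 1.0e11             # particles per bunch
gamma_r = 100.0              # Lorentz factor
beta_r  = np.sqrt(1.0 - 1.0 / gamma_r**2)
eps_h, eps_v     = 2e-9, 2e-9    # transverse emittances [m rad]
sigma_s, sigma_p = 0.05, 1e-3    # bunch length [m], momentum spread
beta_h, beta_v   = 10.0, 20.0    # beta functions [m]
eta_h, etap_h    = 1.0, 0.0      # horizontal dispersion and derivative
coulomb_log      = 20.0          # assumed value of (log)

# Dispersion terms (derivatives of the beta functions taken as zero here)
phi_h = etap_h
H_h   = (eta_h**2 + (beta_h * etap_h)**2) / beta_h

Lp = (gamma_r**2 / sigma_p**2) * np.diag([0.0, 1.0, 0.0])
Lh = (beta_h / eps_h) * np.array(
    [[1.0,              -gamma_r * phi_h,            0.0],
     [-gamma_r * phi_h,  gamma_r**2 * H_h / beta_h,  0.0],
     [0.0,               0.0,                        0.0]])
Lv = (beta_v / eps_v) * np.diag([0.0, 0.0, 1.0])   # no vertical dispersion
L  = Lp + Lh + Lv

A = (r0**2 * c * Npart
     / (64.0 * np.pi**2 * beta_r**3 * gamma_r**4
        * eps_h * eps_v * sigma_s * sigma_p))

def growth_rate(Li):
    """1/T_i at this single lattice point, following the formula above."""
    scale = np.trace(L) / 3.0          # rescale lambda for the quadrature
    def integrand(s):
        lam = s * scale
        M = L + lam * np.eye(3)
        Minv = np.linalg.inv(M)
        val = (np.sqrt(lam) / np.sqrt(np.linalg.det(M))
               * (np.trace(Li) * np.trace(Minv) - 3.0 * np.trace(Li @ Minv)))
        return val * scale             # Jacobian of the substitution
    integral, _ = quad(integrand, 0.0, np.inf, limit=200)
    return 4.0 * np.pi * A * coulomb_log * integral

for name, Li in (("1/T_p", Lp), ("1/T_h", Lh), ("1/T_v", Lv)):
    print(f"{name} = {growth_rate(Li): .3e} 1/s")
```

The rescaling of the integration variable simply keeps the adaptive quadrature well behaved, since the entries of the matrices differ by many orders of magnitude.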
[ { "math_id": 0, "text": "1/\\gamma^{4}" }, { "math_id": 1, "text": "\\frac{1}{T_{p}} \\ \\stackrel{\\mathrm{def}}{=}\\ \\frac{1}{\\sigma_{p}} \\frac{d\\sigma_{p}}{dt}" }, { "math_id": 2, "text": "\\frac{1}{T_{h}} \\ \\stackrel{\\mathrm{def}}{=}\\ \\frac{1}{\\epsilon_{h}^{1/2}} \\frac{d\\epsilon_{h}^{1/2}}{dt}" }, { "math_id": 3, "text": "\\frac{1}{T_{v}} \\ \\stackrel{\\mathrm{def}}{=}\\ \\frac{1}{\\epsilon_{v}^{1/2}} \\frac{d\\epsilon_{v}^{1/2}}{dt}" }, { "math_id": 4, "text": "\\frac{1}{T_{i}} = 4\\pi A (\\operatorname{log}) \\left\\langle \\int_{0}^{\\infty} \\,d\\lambda\\ \\frac{\\lambda^{1/2}}{[\\operatorname{det}(L+\\lambda I)]^{1/2}} \n\\left\\{\\operatorname{Tr}L^{i}\\operatorname{Tr}\\left(\\frac{1}{L+\\lambda I}\\right) - 3 \\operatorname{Tr}\\left[L^{i}\\left(\\frac{1}{L+\\lambda I}\n\\right)\\right]\\right\\}\\right\\rangle" }, { "math_id": 5, "text": "T_{p}" }, { "math_id": 6, "text": "T_{h}" }, { "math_id": 7, "text": "T_{v}" }, { "math_id": 8, "text": "(\\operatorname{log}) = \\ln \\frac{b_{min}}{b_{max}} = \\ln \\frac{2}{\\theta_{min}}" }, { "math_id": 9, "text": "A = \\frac{r_0^2 c N}{64 \\pi^2 \\beta^3 \\gamma^4 \\epsilon_h \\epsilon_v \\sigma_s \\sigma_p}" }, { "math_id": 10, "text": "L = L^{(p)} + L^{(h)} + L^{(v)}\\," }, { "math_id": 11, "text": "L^{(p)} = \\frac{\\gamma^2}{\\sigma^2_p}\\begin{pmatrix}\n0 & 0 & 0\\\\\n0 & 1 & 0\\\\\n0 & 0 & 0\\end{pmatrix}" }, { "math_id": 12, "text": "L^{(h)} = \\frac{\\beta_h}{\\epsilon_h}\\begin{pmatrix}\n1 & -\\gamma\\phi_h & 0\\\\\n-\\gamma\\phi_h & \\frac{\\gamma^2 {\\mathcal H}_h}{\\beta_h} & 0\\\\\n0 & 0 & 0\\end{pmatrix}" }, { "math_id": 13, "text": "L^{(v)} = \\frac{\\beta_v}{\\epsilon_v}\\begin{pmatrix}\n0 & 0 & 0\\\\\n0 & \\frac{\\gamma^2 {\\mathcal H}_v}{\\beta_v} & -\\gamma\\phi_v\\\\\n0 & -\\gamma\\phi_v & 1\\end{pmatrix}" }, { "math_id": 14, "text": "{\\mathcal H}_{h,v} = [\\eta^2_{h,v} + (\\beta_{h,v}\\eta'_{h,v} - \\frac{1}{2}\\beta'_{h,v}\\eta_h)^2]/\\beta_{h,v}" }, { "math_id": 15, "text": "\\phi_{h,v} = \\eta'_{h,v} - \\frac{1}{2}\\beta'_{h,v}\\eta_{h,v}/\\beta_{h,v}" }, { "math_id": 16, "text": "r_0^2" }, { "math_id": 17, "text": "c" }, { "math_id": 18, "text": "N" }, { "math_id": 19, "text": "\\beta" }, { "math_id": 20, "text": "\\gamma" }, { "math_id": 21, "text": "\\beta_{h,v}" }, { "math_id": 22, "text": "\\beta'_{h,v}" }, { "math_id": 23, "text": "\\eta_{h,v}" }, { "math_id": 24, "text": "\\eta'_{h,v}" }, { "math_id": 25, "text": "\\epsilon_{h,v}" }, { "math_id": 26, "text": "\\sigma_s" }, { "math_id": 27, "text": "\\sigma_p" }, { "math_id": 28, "text": "b_{min}" }, { "math_id": 29, "text": "b_{max}" }, { "math_id": 30, "text": "\\theta_{min}" }, { "math_id": 31, "text": " \\frac{\\sigma_\\delta}{\\gamma} = \\sigma_{x'} = \\sigma_{y'}" }, { "math_id": 32, "text": "\\frac{\\epsilon_x}{\\beta_x} + \\frac{\\epsilon_y}{\\beta_y} + \\eta_s \\frac{\\epsilon_z}{\\beta_z } " }, { "math_id": 33, "text": "\\eta_s = \\frac{1}{\\gamma^2} -\\alpha_c" }, { "math_id": 34, "text": "\\frac{1}{\\tau_{1,2,3}}=\\frac{1}{\\epsilon_{1,2,3}}\\frac{d\\epsilon_{1,2,3}}{dt}" } ]
https://en.wikipedia.org/wiki?curid=6768103
67685236
Haline contraction coefficient
The haline contraction coefficient, abbreviated as β, is a coefficient that describes the change in ocean density due to a salinity change, while the potential temperature and the pressure are kept constant. It is a parameter in the equation of state (EOS) of the ocean. β is also described as the saline contraction coefficient and is measured in [kg]/[g] in the EOS that describes the ocean. An example is TEOS-10, the thermodynamic equation of state of seawater. β is the salinity variant of the thermal expansion coefficient α, for which the density changes due to a change in temperature instead of salinity. With these two coefficients, the density ratio can be calculated. This determines the contribution of the temperature and salinity to the density of a water parcel. β is called a contraction coefficient because water becomes denser (contracts) when salinity increases, whereas water becomes less dense (expands) when the temperature increases. Definition. The haline contraction coefficient is defined as: formula_0 where ρ is the density of a water parcel in the ocean and Sformula_1 is the absolute salinity. The subscripts Θ and p indicate that β is defined at constant potential temperature Θ and constant pressure p. The haline contraction coefficient is constant when a water parcel moves adiabatically along the isobars. Application. The amount by which density is influenced by a change in salinity or temperature can be computed from the density formula that is derived from the thermal wind balance. formula_2 The Brunt–Väisälä frequency can also be defined when β is known, in combination with α, Θ and Sformula_1. This frequency is a measure of the stratification of a fluid column and is defined over depth as: formula_3. The direction of the mixing and whether the mixing is temperature- or salinity-driven can be determined from the density difference and the Brunt–Väisälä frequency. Computation. β can be computed when the conservative temperature, the absolute salinity and the pressure of a water parcel are known. Python offers the Gibbs SeaWater (GSW) oceanographic toolbox. It contains coupled non-linear equations that are derived from the Gibbs function. These equations are formulated in the equation of state of seawater, also called the equation of seawater. This equation relates the thermodynamic properties of the ocean (density, temperature, salinity and pressure) and is based on empirical thermodynamic properties. This means that the properties of the ocean can be computed from other thermodynamic properties. The difference between the earlier EOS and TEOS-10 is that in TEOS-10, salinity is stated as absolute salinity, while in the previous EOS version salinity was stated as conductivity-based salinity. The absolute salinity is based on density, as it uses the mass of all non-H2O molecules. Conductivity-based salinity is calculated directly from conductivity measurements taken by (for example) buoys. The GSW beta(SA,CT,p) function can calculate β when the absolute salinity (SA), conservative temperature (CT) and the pressure are known. The conservative temperature cannot be obtained directly from assimilation databases like GODAS, but it can be calculated with GSW. Physical examples. β is not a constant; it changes mostly with latitude and depth. At locations where salinity is high, as in the tropics, β is low, and where salinity is low, β is high. A high β means that a given increase in salinity increases the density more than it would where β is low. The effect of β is shown in the figures. Near Antarctica, ocean salinity is low.
This is because meltwater that runs off Antarctica dilutes the ocean. This water is dense because it is cold. β around Antarctica is relatively high. Near Antarctica, temperature is the main contributor to the high density there. Water near the tropics already has high salinity. Evaporation leaves salt behind in the water, increasing salinity and therefore density. As water temperatures are a lot higher, density in the tropics is lower than around the poles. In the tropics, salinity is the main contributor to density. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
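A minimal usage sketch of the GSW toolbox functions mentioned in the Computation section (assuming the Python gsw package is installed; the input values are arbitrary examples, not measurements):

```python
import gsw

# Arbitrary example inputs: absolute salinity [g/kg], conservative
# temperature [deg C] and sea pressure [dbar].
SA, CT, p = 35.0, 10.0, 100.0

beta  = gsw.beta(SA, CT, p)    # haline contraction coefficient [kg/g]
alpha = gsw.alpha(SA, CT, p)   # thermal expansion coefficient [1/K]

print(f"beta  = {beta:.3e} kg/g")
print(f"alpha = {alpha:.3e} 1/K")

# The ratio alpha/beta indicates how large a salinity change is needed to
# offset a one-degree temperature change in its effect on density.
print(f"alpha/beta = {alpha/beta:.3f} (g/kg) per K")
```

Together with vertical gradients of conservative temperature and absolute salinity, these two coefficients give the Brunt–Väisälä frequency defined above.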
[ { "math_id": 0, "text": "\\beta = \\frac{1}{\\rho} \\frac{\\partial \\rho}{\\partial S_A}\\Bigg |_{\\Theta,p}" }, { "math_id": 1, "text": "_A" }, { "math_id": 2, "text": "\\rho = \\rho_0 \\left( \\alpha \\Theta + \\beta S_A \\right)" }, { "math_id": 3, "text": "N^2 = g \\left( \\alpha \\frac{\\partial \\Theta}{\\partial z} - \\beta \\frac{\\partial S_A}{\\partial z} \\right)" } ]
https://en.wikipedia.org/wiki?curid=67685236
67686875
Baroclinic instabilities in the ocean
Fluid dynamical instability
A baroclinic instability is a fluid dynamical instability of fundamental importance in the atmosphere and ocean. It can lead to the formation of transient mesoscale eddies, with a horizontal scale of 10-100 km. In contrast, flows on the largest scale in the ocean are described as ocean currents; the largest-scale eddies are mostly created by shearing of two ocean currents, and static mesoscale eddies are formed by the flow around an obstacle (as seen in the animation at eddy (fluid dynamics)). Mesoscale eddies are circular currents with swirling motion and account for approximately 90% of the ocean's total kinetic energy. Therefore, they are key in the mixing and transport of, for example, heat, salt and nutrients. In a baroclinic medium, the density depends on both the temperature and pressure. The effect of the temperature on the density allows lines of equal density (isopycnals) and lines of equal pressure (isobars) to intersect. This is in contrast to a barotropic fluid, in which the density is only a function of pressure. For this barotropic case, isobars and isopycnals are parallel. The intersecting of isobars and isopycnals in a baroclinic medium may cause baroclinic instabilities to occur by the process of sloping convection. The sizes of baroclinic instabilities and therefore also the eddies they create scale with the Rossby radius of deformation, which strongly varies with latitude for the ocean. Instability and eddy generation. In a baroclinic fluid, the thermal-wind balance holds, which is a combination of the geostrophic balance and the hydrostatic balance. This implies that isopycnals can slope with respect to the isobars. Furthermore, this also results in changing horizontal velocities with height as a result of horizontal temperature and therefore density gradients. Under the thermal-wind balance, geostrophic balance and hydrostatic balance, a flow is in equilibrium. However, this is not the equilibrium of least energy. A reduction in slope of the isopycnals would lower the center of gravity and therefore also the potential energy. It would also reduce the pressure gradient, leading to an increase in the kinetic energy. However, under the thermal-wind balance, a decrease in slope of the isopycnals cannot occur spontaneously. It requires a change of potential vorticity. Under certain conditions, slight perturbations of the equilibrium under the thermal-wind balance may increase, leading to larger perturbations from the initial state and thus the growth of an instability. It is often considered that baroclinic instability is the mechanism which extracts potential energy stored in horizontal density gradients and uses this "eddy potential energy" to drive eddies. Sloping convection. These baroclinic instabilities may be initiated by the process of 'sloping convection' or 'slanted thermal convection'. To understand this, consider a fluid in steady state and under the thermal-wind balance. Initially, a fluid parcel is at location A. The fluid parcel is slightly perturbed to location B, while still retaining its original density. Therefore, the fluid parcel is now in a location with a lower density than itself and the parcel will just sink down to its original position; the fluid parcel is now stable. However, when a parcel is displaced to location C, it is surrounded by fluid with a higher density than the parcel itself. Due to its relatively low density with respect to its surroundings, the parcel will float up even further.
Now a small perturbation grows into a larger one, which implies a baroclinic instability. A criterion for an instability to occur can be defined. As stated before, in a baroclinic fluid, the thermal-wind balance holds, which implies the following two relations: formula_0 and formula_1, where formula_2 is the density and formula_3, formula_4 and formula_5 are the spatial coordinates in the horizontal (zonal and meridional) and vertical direction, respectively. formula_6 and formula_7 represent the horizontal (zonal and meridional) components of the velocity vector formula_8 in the formula_3- and formula_4-direction, respectively. Thus formula_9 and formula_10 are the two horizontal density gradients. formula_11 is the gravitational acceleration at the surface of the Earth and formula_12 the Coriolis parameter. Therefore a horizontal density gradient in the formula_4-direction formula_13 leads to a gradient in horizontal flow velocity formula_6 over depth formula_14. The slope of the displacement is defined as formula_15, where formula_16 and formula_17 are the horizontal and vertical velocities of the perturbation, respectively. An instability now occurs when the slope of the displacement is smaller than the slope of the isopycnals. The isopycnals can be mathematically described as formula_18. Now this results in an instability when: formula_19 From now on, only a two-layer system with formula_20 and formula_21 the velocities of the top and bottom layer, respectively, is considered to simplify the problem. This is now similar to the classic Phillips model. From the thermal-wind balance it now follows that formula_22 where formula_23 is the reduced gravity and formula_24 the reference Coriolis parameter in the beta-plane approximation. Performing a scale analysis on the slope of the perturbation allows physical quantities to be assigned to this mathematical problem. This now results in formula_25, where formula_26 is the scale height, formula_27 the horizontal length scale, and formula_28 is the Rossby parameter. From this it can be stated that an instability occurs when formula_29 or formula_30, where formula_23 is the reduced gravity and formula_31 is the velocity difference between the lower and upper layer. This criterion can be used to identify whether a small perturbation will grow into a larger one and thus whether an instability is expected to occur. From this it follows that some kind of shear formula_32 is needed to obtain an instability, that it is easier to get an instability for long waves (perturbations) with large formula_27, and that formula_28 and therefore the beta-effect is stabilizing. Furthermore, for the baroclinic Rossby radius of deformation it holds that formula_33. Now the instability criteria simplify to formula_34 or formula_35. From this analysis it also follows that baroclinic instabilities are important for small Rossby numbers, where formula_36. Observations of baroclinic instabilities and eddies. Recently, many observations of mesoscale eddies in the ocean have been made using sea surface height data from altimeters. It has been shown that regions with the highest growth rate of baroclinic instabilities indeed match the regions which are rich in eddies. Furthermore, the trajectories of both cyclonic and anticyclonic eddies can also be studied. From this it follows that there are approximately the same number of cyclonic and anticyclonic eddies observed and therefore it is concluded that the generation of these two types is very similar.
However, when considering longer-lived eddies, it was found that anticyclonic eddies clearly dominate. This implies that cyclonic eddies are less stable and therefore decay more rapidly. In addition, there are no eddies present above shallows in the ocean due to topographic steering as a result of the Taylor–Proudman theorem. Lastly, extremely long-lived eddies with lifetimes over 1.5 to 2 years are only found in gyres, most likely because the background flow is weak there. Four different types of baroclinic instabilities can be distinguished: the Eady type, the Charney surface type, the Charney bottom type, and the Phillips type. These four types are based on classical models (the classic Eady model, the Charney model, and the Phillips model, respectively), but can also be distinguished from observations. Overall, of the observed baroclinic instabilities, 47% are of the Charney surface type, 33% of the Phillips type, 13% of the Eady type and only 7% of the Charney bottom type. These different types of baroclinic instabilities all lead to different types of eddies. Important here is ψ, which is the absolute value of the complex eigenfunction of the stream function of the horizontal velocity. It represents the vertical structure of the baroclinic instability and ranges from 0, which implies a very low chance of an instability of this type (and thus of an eddy) forming, to 1, which means a high chance. The Eady type has a maximum ψ of one at the top and bottom, and a minimum of around 0.5 at half the total depth. For this type of model, an eddy thus occurs at both the surface and bottom of the ocean. It is therefore also called the surface- and bottom-intensified type and is found mainly at high latitudes. The Charney surface type is surface-intensified and has a maximum ψ at the surface, whereas the Charney bottom type only shows baroclinic instabilities at the bottom. For the Charney bottom type, ψ is low at the surface and increases to one with increasing depth. The Charney surface type is found in the subtropics, whereas the Charney bottom type is present at high latitudes. Lastly, for the Phillips type, ψ is zero at the surface, strongly increases to one just below the surface, and then slowly decreases again to zero for increasing depths. The locations of these Phillips-type instabilities agree with the occurrence of subsurface eddies, again supporting the idea that baroclinic instabilities lead to the formation of eddies. They are mostly found in the tropics and the eastern return flow of the subtropical gyres. It was found that the type of baroclinic instability present also depends on the mean background flow. An Eady type is preferred for a strong eastward mean flow in the upper ocean, and a weak westward flow in the deeper ocean. For the Charney bottom type this is similar, but now the westward flow in the deeper ocean is found to be stronger. The Charney surface and Phillips types exist for weaker background flows, also explaining why these are dominant in the ocean gyres. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
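The two-layer criteria derived in the instability section above can be illustrated with a short calculation; every number below is an assumed, order-of-magnitude value chosen for illustration, not an observation.

```python
import numpy as np

# Assumed two-layer parameters (illustrative orders of magnitude).
g     = 9.81          # gravitational acceleration [m/s^2]
rho0  = 1025.0        # reference density [kg/m^3]
drho  = 0.5           # density difference between the layers [kg/m^3]
H     = 1000.0        # scale height / layer depth [m]
f0    = 1e-4          # Coriolis parameter at mid-latitudes [1/s]
beta0 = 1.6e-11       # Rossby parameter [1/(m s)]
dU    = 0.05          # velocity difference between the layers [m/s]
L     = 100e3         # horizontal length scale of the perturbation [m]

g_prime = g * drho / rho0               # reduced gravity [m/s^2]
R       = np.sqrt(g_prime * H) / f0     # baroclinic Rossby radius [m]

print(f"reduced gravity g' = {g_prime:.4f} m/s^2")
print(f"Rossby radius  R  = {R/1e3:.1f} km")

# Two-layer instability criteria:
print(f"L >~ R            : {L >= R}")
print(f"dU >~ beta0 * R^2 : {dU >= beta0 * R**2}  "
      f"(beta0*R^2 = {beta0 * R**2:.3f} m/s)")
```

With these assumed values both criteria are satisfied, so a perturbation of this scale would be expected to grow into a baroclinic instability.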
[ { "math_id": 0, "text": "\\begin{align}\n \\frac{\\partial v}{\\partial z} = \\frac{g}{\\rho f} \\frac{\\partial \\rho}{\\partial x}\n\\end{align}" }, { "math_id": 1, "text": "\\begin{align}\n \\frac{\\partial u}{\\partial z} = - \\frac{g}{\\rho f} \\frac{\\partial \\rho}{\\partial y}\n\\end{align}" }, { "math_id": 2, "text": "\\rho" }, { "math_id": 3, "text": "x" }, { "math_id": 4, "text": "y" }, { "math_id": 5, "text": "z" }, { "math_id": 6, "text": "u" }, { "math_id": 7, "text": "v" }, { "math_id": 8, "text": "\\bold{u}" }, { "math_id": 9, "text": "\\frac{\\partial \\rho}{\\partial x}" }, { "math_id": 10, "text": "\\frac{\\partial \\rho}{\\partial y}" }, { "math_id": 11, "text": "g" }, { "math_id": 12, "text": "f" }, { "math_id": 13, "text": "\\left( \\frac{\\partial \\rho}{\\partial y} \\right)" }, { "math_id": 14, "text": "\\left( \\frac{\\partial u}{\\partial z} \\right)" }, { "math_id": 15, "text": " \\frac{\\Delta z}{\\Delta y} = \\frac{w' \\Delta t}{v' \\Delta t} = \\frac{w'}{v'} " }, { "math_id": 16, "text": "v'" }, { "math_id": 17, "text": "w'" }, { "math_id": 18, "text": "z = H + \\bar{a}" }, { "math_id": 19, "text": "\\begin{align}\n \\frac{d\\bar{a}}{d y} > \\frac{w'}{v'}.\n\\end{align}" }, { "math_id": 20, "text": "U_1" }, { "math_id": 21, "text": "U_2" }, { "math_id": 22, "text": "\\begin{align}\n \\frac{d\\bar{a}}{d y} = \\frac{f_0}{g'}\\left( U_1-U_2 \\right),\n\\end{align}" }, { "math_id": 23, "text": "g' = \\frac{g(\\rho_2-\\rho_1)}{\\rho_0}" }, { "math_id": 24, "text": "f_0" }, { "math_id": 25, "text": "\\begin{align}\n \\frac{w'}{v'} \\sim \\frac{H}{L}\\frac{U_1-U_2}{f_0L} \\sim \\frac{H}{L} \\frac{\\beta_0 L}{f_0}\n \n\\end{align}" }, { "math_id": 26, "text": "H" }, { "math_id": 27, "text": "L" }, { "math_id": 28, "text": "\\beta_0" }, { "math_id": 29, "text": "\\begin{align}\n \\frac{f_0^2L^2}{g'H} > 1\n \n\\end{align}" }, { "math_id": 30, "text": "\\begin{align}\n \\frac{f_0^2}{g'H}\\frac{\\Delta U}{\\beta_0} > 1\n \n\\end{align}" }, { "math_id": 31, "text": "\\Delta U = U_2 - U_1" }, { "math_id": 32, "text": "\\Delta U" }, { "math_id": 33, "text": "R \\sim \\frac{\\sqrt{g'H}}{f_0}" }, { "math_id": 34, "text": "\\begin{align}\n L \\gtrsim R\n\\end{align}" }, { "math_id": 35, "text": "\\begin{align}\n \\Delta U \\gtrsim \\beta_0 R^2\n\\end{align}" }, { "math_id": 36, "text": "Ro = \\frac{U}{Lf}" } ]
https://en.wikipedia.org/wiki?curid=67686875
67687189
Kuroshio Current Intrusion
Movement of water from the Pacific to the West Philippine/South China Sea
The Kuroshio Current is a northward flowing Western Boundary Current (WBC) in the Pacific Ocean. It is a bifurcation arm of the North Equatorial Current and consists of northwestern Pacific Ocean water. The Kuroshio Current flows along the eastern Philippine coast, up to 13.7 Sv of it leaking into the Luzon Strait - the gap between the Philippines and Taiwan - before continuing along the Japanese coast. Some of the leaked water manages to intrude into the South China Sea (SCS). This affects the heat and salt budgets and circulation and eddy generation mechanisms in the SCS. There are various theories about possible intrusion paths and what mechanisms initiate them. Intrusion paths. From satellite data, Nan et al. (2011) concluded there are three intrusion paths for the Kuroshio Current into the SCS. A northward flowing WBC (like the Kuroshio Current) can deform at a gap in a western boundary and form an anticyclonic current loop if the gap is wide enough. This results in a looping path, where water from the Kuroshio flows through the middle of the Luzon Strait into the SCS and out in the north of the strait. The current loop in the SCS forms due to Ekman transport resulting from northeasterly winds that push Kuroshio surface water westward. Anticyclonic eddies can shed from the current loop and penetrate farther into the SCS, as has been observed by Li (1997). During the winter monsoon season, Kuroshio intrusion strengthens. Winds blow in the northwestward direction, thereby pushing Kuroshio surface water into the Luzon Strait. This can result in an anticyclonic bending of the Kuroshio flow into the Luzon Strait, from which a branch detaches into the SCS. A cyclonic gyre forms northwestward of the Luzon Strait as a result of this leaking path. This theory is based on observations by D.Z. Qiu et al. (1984) from floaters, but more recently no such branch has been observed. The Kuroshio can also take a leaping path across the Luzon Strait and into the SCS. This is seen as a strengthening of the Luzon Cyclonic Gyre to the west of the strait while the Kuroshio continues northwards along the eastern Taiwanese coast. The anticyclonic gyre normally present in the SCS is significantly weakened as a result. Intrusion mechanisms. Wind forcing. The Luzon Strait and SCS experience seasonally reversing monsoon winds; these are southwestward and stronger in the boreal winter and northeastward and weaker in the boreal summer. This results in negative wind-driven Ekman transport in the winter, strengthening Kuroshio intrusion, and positive transport in the boreal summer, weakening intrusion. Wind-driven Ekman transport could therefore contribute to westward flow through the Luzon Strait and hence to Kuroshio leakage into the SCS. However, research has shown that less than 10% of Luzon Strait transport is due to purely wind-driven Ekman flow. Nevertheless, wind-driven Ekman drift still influences the inflow angle and speed of Kuroshio intrusion. Inter-basin pressure gradient. A build-up of water has been observed on the Pacific side of the Luzon Strait by Y. T. Song (2006), which results in a pressure gradient across the strait. This could initiate the bending of the Kuroshio Current into the Luzon Strait, thereby resulting in eventual leakage.
Satellite data show a decreasing trend in Kuroshio intrusion strength over time, which correlates with a decrease in the cross-Luzon Strait pressure gradient, thereby supporting this theory. However, the exact mechanism for a pressure gradient-induced intrusion is not yet fully understood. The proposed equation describing the transport formula_0 between the SCS and the Pacific Ocean basins is based on a two-layer ocean model. formula_1 It depends on the surface and bottom layer depths formula_2 and formula_3 respectively, the sea surface height difference between the two basins formula_4 and the height difference of the layer interface between the two basins formula_5. The Rossby radius of deformation formula_6 uses the reduced gravity formula_7, formula_8 determines the direction of the pressure gradient and formula_9 is the strait width. According to this mechanism, formula_10 describes the transport in the upper layer of the Luzon Strait, which is dominated by geostrophic balance. This model has a few drawbacks: it only divides the ocean into two layers which reduces accuracy, and the model outcome depends strongly on the delineation of the SCS and Pacific Ocean basins. Beta effect and hysteresis. The beta effect describes the changing of the Coriolis parameter formula_11 with latitude. This effect will cause a WBC like the Kuroshio current to intrude into a meridional gap. The intrusion can then either penetrate the gap, or leap over it, continuing its flow on the other side. Which flow path occurs depends on the ratio of flow inertia (which encourages leaping) to the beta effect (which encourages penetrating). There can also be a transition between flow states as this ratio changes. Two different flow states can arise from the same external forcings depending on the past flow state; there is hysteresis in the system. Potential vorticity conservation. Research by Nan et al. (2011) suggests that during Kuroshio intrusion into the SCS, potential vorticity must be conserved across the Luzon Strait. This means that intrusion must be in the form of current loops or rings that rotate either cyclonically or anticyclonically depending on the potential vorticity balance. The equation describing this motion is for a frictionless beta plane in a steady state with reduced gravity. formula_12 Here, formula_13 is the flow speed at the current core and formula_14 is the angle between the velocity vector and the positive formula_15-axis. Notice that the model is dependent on the inflow speed formula_16 and angle formula_17. However, this theory is based on the assumption of a steady state, which is not realistic since the Kuroshio intrusion process is unstable. Eddy activity. There are many eddies near the Luzon-Taiwan coast, especially to the east of the Kuroshio axis. Most eddies propagate westward with a mean speed of 7.2 cm/s and are deflected due to the Kuroshio current. This could be a source of Kuroshio intrusion into the SCS. However, few eddies from the Pacific can propagate into the Luzon Strait, since they are blocked by the Kuroshio Current. Mesoscale eddies can impact the strength of the Kuroshio and its inflow angle at the Luzon Strait by changing the local background flow. Furthermore, seasonal variations in eddy strength and frequency correlate with seasonal variations in Kuroshio intrusion and Luzon Strait transport, suggesting that the two could nevertheless be linked. Impacts of Kuroshio Intrusion.
The water intruding into the northern SCS from the Kuroshio current is relatively nutrient-rich. Therefore, it enriches dissolved organic matter stores and enhances ammonia oxidation in the SCS. Bacteria and phytoplankton use these resources to grow and support their biogeochemical activities. Microzooplankton are particularly affected by the influx of nutrients since they have limited transport mechanisms compared to zooplankton. Kuroshio current intrusion has oxidized and increased the salinity of the sedimentary environment northwest of Luzon Island in the SCS. The intrusion transports sediment high in illite and chlorite concentrations from around Taiwan southwestward into the deep sea environment. Pearl River sediment, high in concentrations of kaolinite and titanium, is also transported southwestward by the intrusion. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
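The two-layer transport relation described in the section on the inter-basin pressure gradient can be evaluated directly. The Python sketch below simply transcribes that piecewise relation; every parameter value is an illustrative assumption rather than an observed quantity, and the sign factor formula_8 is taken here as +/-1 according to the direction of the pressure gradient, which is an interpretation of the stated definition.
import math

# Illustrative (assumed) values, not observations
g = 9.81              # gravitational acceleration, m s^-2
f = 5.0e-5            # Coriolis parameter near the Luzon Strait (assumed), s^-1
rho0 = 1025.0         # reference density, kg m^-3
drho = 1.0            # density difference between the layers, kg m^-3
H1, H2 = 500.0, 2000.0    # upper- and lower-layer depths, m
deta = 0.05           # sea surface height difference across the strait, m
dh = 50.0             # interface height difference across the strait, m
W0 = 350e3            # strait width, m
kappa = 1.0           # taken as +/-1 from the sign of the pressure gradient (assumption)

gprime = g * drho / rho0                  # reduced gravity
R = math.sqrt(2 * gprime * dh) / f        # Rossby radius of deformation
if R < W0:
    Q = (g / f) * H1 * deta + (H2 / (2 * f)) * gprime * dh
else:
    Q = (g / f) * H1 * deta + kappa * (2 / 3) ** 1.5 * H2 * W0 * math.sqrt(gprime * dh)
print(f"R = {R / 1e3:.0f} km, transport Q = {Q / 1e6:.1f} Sv")
With these made-up numbers the sketch returns a transport of order 10 Sv, the same order of magnitude as the leakage quoted earlier in the article, but the result should only be read as a demonstration of how the relation is applied.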
[ { "math_id": 0, "text": "Q" }, { "math_id": 1, "text": "\nQ =\n\\begin{cases}\n\\frac{g}{f}H_1\\Delta\\eta+\\frac{H_2}{2f}g'\\Delta h, & \\text{if }R<W_0 \\\\\n\\frac{g}{f}H_1\\Delta\\eta+\\kappa\\left(\\frac{2}{3}\\right)^{3/2}H_2W_0\\sqrt{g'\\Delta h}, & \\text{otherwise}\n\\end{cases}\n" }, { "math_id": 2, "text": "H_1" }, { "math_id": 3, "text": "H_2" }, { "math_id": 4, "text": "\\Delta\\eta" }, { "math_id": 5, "text": "\\Delta h" }, { "math_id": 6, "text": "R=\\frac{\\sqrt{2g'\\Delta h}}{f}" }, { "math_id": 7, "text": "g'=\\frac{g \\Delta\\rho}{\\rho_0}" }, { "math_id": 8, "text": "\\kappa=\\pm(\\Delta p_b-\\Delta\\eta)" }, { "math_id": 9, "text": "W_0" }, { "math_id": 10, "text": "H_1\\Delta\\eta" }, { "math_id": 11, "text": "f" }, { "math_id": 12, "text": "\n\\frac{d\\theta}{dy}\\sin{\\theta}=-\\frac{\\beta}{\\nu_c}+\\frac{\\nu_{c0}}{\\nu_c}\\frac{d\\theta}{dy}\\bigg|_{y=0}\\sin{\\theta_0}\n" }, { "math_id": 13, "text": "\\nu_c" }, { "math_id": 14, "text": "\\theta" }, { "math_id": 15, "text": "x" }, { "math_id": 16, "text": "\\nu_{c0}" }, { "math_id": 17, "text": "\\theta_0" } ]
https://en.wikipedia.org/wiki?curid=67687189
67687783
Topographic steering
In fluid mechanics, topographic steering is the effect of potential vorticity conservation on the motion of a fluid parcel. This means that the fluid parcels will not only react to physical obstacles in their path, but also to changes in topography or latitude. The two types of 'fluids' where topographic steering is mainly observed in daily life are air (air can be considered a compressible fluid in fluid mechanics) and water in respectively the atmosphere and the oceans. Examples of topographic steering can be found in, among other things, paths of low pressure systems and oceanic currents. In 1869, Kelvin published his circulation theorem, which states that a barotropic, ideal fluid with conservative body forces conserves the circulation around a closed loop. To generalise this, Bjerknes published his own circulation theorem in 1898. Bjerknes extended the concept to inviscid, geostrophic and baroclinic fluids, resulting in addition of terms in the equation. Mathematical description. Circulation. The exact mathematical description of the different potential vorticities can all be obtained from the circulation theorem of Bjerkness, which is stated as formula_0. Here formula_1 is the circulation, the line integral of the velocity along a closed contour. Also, formula_2 is the material derivative, formula_3 is the density, formula_4 is the pressure, formula_5 is the angular velocity of the frame of reference and formula_6 is the area projection of the closed contour onto the equatorial plane. This means the bigger the angle between the contour and the equatorial plane, the smaller this projection becomes. The formula states that the change of the circulation along a fluid's path is affected by the variation of density in pressure coordinates and by the change in equatorial projection of the contour. Kelvin assumed both a barotropic fluid and a constant projection. Under these assumptions the right hand side of the equation is zero and Kelvin's theorem is found. Shallow water. When considering a relatively thin layer of fluid of constant density, with on the bottom a topography and on top a free surface, the shallow water approximation can be used. Using this approximation, Rossby showed in 1939, by integrating the shallow water equations over the depth of the fluid, that formula_7.(1) Here formula_8 is the relative vorticity, formula_9 is the Coriolis parameter and formula_10 is the height of the water layer. The quantity inside the material derivative was later called the "shallow water potential vorticity". Layered atmosphere. When considering an atmosphere with multiple layers of constant potential temperature, the quasi-2D shallow water equations on a beta plane can be used. In 1940, Rossby used this to show that formula_11.(2) Here formula_12 is the relative vorticity on an isentropic surface, formula_9 is the Coriolis parameter and formula_13 is a quantity measuring the weight of unit cross-section of an individual air column in the layer. This last quantity can also be seen as a measure of the vortex depth. The potential vorticity defined here is also called the "Rossby potential vorticity". Continuous atmosphere. When the approximation of the discrete layers is relaxed and the fluid becomes continuous, another potential vorticity can be defined which is conserved. It was shown by Ertel in 1942 that formula_14.(3) Here formula_15 is the absolute vorticity, formula_16 is the gradient in potential temperature and formula_3 the density. 
This potential vorticity is also called the "Ertel potential vorticity". To get to this result, first recall the circulation theorem from Kelvin formula_17. If the coordinate system is transformed to the one of the local tangent plane coordinates and we use potential temperature as the vertical coordinate, the equation can be slightly rewritten to formula_18. Where now formula_1 is the local circulation in the frame of reference, formula_9 is Coriolis parameter and formula_19 is the area on an isentropic surface over which the circulation formula_1. Because the local circulation can be approximated as a product between the area and the relative vorticity on the isentropic surface, the circulation equation yields formula_20. When a fluid parcel is between two isentropic layers and the pressure difference between these layers increases, the fluid parcel is 'stretched'. This is because it wants to conserve the potential temperature at each side of the parcel. To conserve the mass, this horizontally thins the fluid parcel while it is vertically stretched. So the area of the isentropic surface, formula_19, is a function of how quickly the lines of equal potential temperature change with pressure: formula_21. In the end this yields formula_22, which is exactly the result found by Ertel, written in a slightly different way. Note that when assuming a layered atmosphere, the gradient in the potential temperature becomes an absolute difference and the result from Kelvin for a layered atmosphere can be found. Also note that when the fluid is incompressible, the layer depth becomes a measure for the change in potential temperature. Then the result for shallow water potential vorticity can be extracted again. Effect. The different definitions of potential vorticity conservation, resulting from different approximations, can be used to explain phenomena observed here on earth. Fluid parcels will move along lines of constant potential vorticity. Oceans. Because the scale of large flows in the oceans is much larger than the depth of the ocean, the shallow water approximation and thus (1) can often be used. On top of that, the changes in relative vorticity are very small with respect to the changes in the Coriolis parameter. The direct result of that is that for a fluid parcel a change in ocean floor depth will have to be compensated by a change in latitude. In both hemispheres this means that a rising ocean floor, so a decrease in water depth, results in a deflection equatorwards. This phenomenon can explain different currents found on earth. One of them is the specific path the water takes in the Antarctic Circumpolar Current. This path is not a straight line, but curves according to the bathymetry. Another one is the water flowing through the Luzon Strait. Researchers Metzger and Hurlburt showed that the existence of three small shoals can explain the deflection of the current away from the strait instead of flowing through the strait. Atmosphere. In the atmosphere, topographic steering can also be observed. In most cases, the simple modeled layer of the atmosphere and thus (2) can explain the phenomena. When an isentropic layer flows zonally from west to east over a mountain, the topographic steering can create a wave-like pattern on the lee-side and eventually form an alternating pattern of ridges and troughs. Upon approach of the mountain, the layer depth will increase slightly. This is because the incline of the isentropic surfaces is less steep at the top of the layer than at the bottom. 
When the layer depth increases, the change in potential vorticity is countered by an increase in relative vorticity as well as the Coriolis parameter. The vortex will begin to move away from the equator and begin to rotate cyclonically. During the crossing of the mountain, the effect is reversed due to the shrinking of the layer depth. The vortex will rotate anti-cyclonically and move towards the equator. As the vortex leaves the mountain, the resulting latitude is closer to the equator than before. This means vortices will have a cyclonic rotation on the lee-side of the mountain and be turning northwards. The Coriolis parameter and relative vorticity increase and decrease in antiphase. This results in an alternation of cyclonic and anti-cyclonic flows after the mountain. The change in the Coriolis parameter and relative vorticity work against each other, creating a wave-like phenomenon. When looking at zonal flow from east to west, this effect is not occurring. This is because the change in the Coriolis parameter and the change in relative vorticity work in the same direction. The flow will return to zonal again some time after crossing the mountain. The effect described is often credited as the source of the tendency of cyclogenesis on lee-sides of mountains. One example of this are the so called Colorado lows, troughs originating from air passing over the Rocky Mountains. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
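As a rough numerical illustration of the shallow water argument above, the Python sketch below uses made-up but representative values, neglects the relative vorticity, and asks how far a water column must shift in latitude to keep the shallow water potential vorticity (1) constant when the ocean floor rises.
f0 = 1.0e-4               # s^-1, Coriolis parameter at the starting latitude (assumed)
beta = 2.0e-11            # m^-1 s^-1, meridional gradient of f (assumed)
h0, h1 = 4000.0, 3500.0   # m, water depth before and on top of a submarine ridge
# (zeta + f)/h is conserved; with the relative vorticity zeta assumed negligible,
# f must decrease in proportion to the depth:
f1 = f0 * h1 / h0
dy = (f1 - f0) / beta     # meridional displacement implied by the change in f
print(f"f changes from {f0:.2e} to {f1:.2e} s^-1,")
print(f"a shift of roughly {abs(dy) / 1e3:.0f} km toward the equator (northern hemisphere)")
A 500 m shoaling of a 4000 m deep column thus implies a deflection of several hundred kilometres toward the equator, consistent with the qualitative description above.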
[ { "math_id": 0, "text": "\\frac{DC}{Dt}=\\int_A \\frac{\\nabla\\rho\\times\\nabla p}{\\rho^2}\\cdot dA - 2\\Omega\\frac{DA_e}{Dt}" }, { "math_id": 1, "text": "C" }, { "math_id": 2, "text": "D/Dt" }, { "math_id": 3, "text": "\\rho" }, { "math_id": 4, "text": "p" }, { "math_id": 5, "text": "\\Omega" }, { "math_id": 6, "text": "A_e" }, { "math_id": 7, "text": "\\frac{D}{Dt}\\left(\\frac{\\zeta+f}{h}\\right)=0" }, { "math_id": 8, "text": "\\zeta" }, { "math_id": 9, "text": "f" }, { "math_id": 10, "text": "h" }, { "math_id": 11, "text": "\\frac{D}{Dt}\\left(\\frac{\\zeta_\\theta+f}{\\Delta}\\right)=0" }, { "math_id": 12, "text": "\\zeta_\\theta" }, { "math_id": 13, "text": "\\Delta=\\delta p/g" }, { "math_id": 14, "text": "\\frac{D}{Dt}\\left(\\frac{\\zeta_a\\cdot\\nabla\\theta}{\\rho}\\right)=0" }, { "math_id": 15, "text": "\\zeta_a" }, { "math_id": 16, "text": "\\nabla\\theta" }, { "math_id": 17, "text": "\\frac{DC}{Dt}=0" }, { "math_id": 18, "text": "\\frac{D}{Dt}\\left(C+f\\delta A\\right)=0" }, { "math_id": 19, "text": "\\delta A" }, { "math_id": 20, "text": "\\frac{D}{Dt}\\left(\\delta A\\left(\\zeta_\\theta+f\\right)\\right)=0" }, { "math_id": 21, "text": "\\delta A=Const*g\\left(-\\frac{\\partial\\theta}{\\partial p}\\right)" }, { "math_id": 22, "text": "\\frac{D}{Dt}\\left(\\frac{\\zeta_\\theta+f}{-g\\frac{\\partial\\theta}{\\partial p}}\\right)=0" } ]
https://en.wikipedia.org/wiki?curid=67687783
6768791
Lévy's modulus of continuity theorem
Lévy's modulus of continuity theorem is a result on the almost sure behaviour of the modulus of continuity of the Wiener process, the stochastic process used to model Brownian motion. The theorem is named after the French mathematician Paul Lévy. Statement of the result. Let formula_0 be a standard Wiener process. Then, almost surely, formula_1 In other words, the sample paths of Brownian motion have modulus of continuity formula_2 with probability one, for formula_3 and sufficiently small formula_4. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
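The statement can be checked numerically on a simulated path. The Python sketch below, an illustration rather than part of the original article, discretises a Wiener process on [0, 1] and compares the largest increment over gaps of size h with formula_2 for c = 1; the ratio should approach 1 as h decreases. For simplicity only gaps of exactly h on the grid are considered.
import numpy as np

rng = np.random.default_rng(0)
n = 2 ** 20                                  # grid points on [0, 1]
dt = 1.0 / n
increments = rng.normal(0.0, np.sqrt(dt), n)
B = np.concatenate(([0.0], np.cumsum(increments)))   # discretised Wiener path

for k in (4, 7, 10):                         # h = 2**-k
    h = 2.0 ** -k
    lag = int(round(h / dt))
    sup = np.abs(B[lag:] - B[:-lag]).max()   # largest increment over gaps of exactly h
    print(f"h = 2^-{k}: sup / sqrt(2 h log(1/h)) = {sup / np.sqrt(2 * h * np.log(1 / h)):.3f}")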
[ { "math_id": 0, "text": "B : [0, 1] \\times \\Omega \\to \\mathbb{R}" }, { "math_id": 1, "text": "\\lim_{h \\to 0} \\sup_{t, t'\\leq 1; |t-t'|\\leq h } \\frac{| B_{t'} - B_{t} |}{\\sqrt{2 h \\log (1 / h)}} = 1." }, { "math_id": 2, "text": "\\omega_{B} (\\delta) = c\\sqrt{2 \\delta \\log (1 / \\delta)}" }, { "math_id": 3, "text": "c > 1" }, { "math_id": 4, "text": "\\delta > 0" } ]
https://en.wikipedia.org/wiki?curid=6768791
67689502
Kaiser–Meyer–Olkin test
Statistical measure to determine how suited data is for factor analysis. The Kaiser–Meyer–Olkin (KMO) test is a statistical measure to determine how suited data is for factor analysis. The test measures sampling adequacy for each variable in the model and the complete model. The statistic is a measure of the proportion of variance among variables that might be common variance. The higher the proportion, the higher the KMO-value, the more suited the data is to factor analysis. History. Henry Kaiser introduced a Measure of Sampling Adequacy (MSA) of factor analytic data matrices in 1970. Kaiser and Rice then modified it in 1974. Measure of sampling adequacy. The measure of sampling adequacy is calculated for each indicator as formula_0 and indicates to what extent an indicator is suitable for a factor analysis. The Kaiser–Meyer–Olkin criterion is calculated and returns values between 0 and 1. formula_1 Kaiser–Meyer–Olkin criterion. Here formula_2 is the correlation between the variable in question and another, and formula_3 is the partial correlation. This is a function of the squared elements of the `image' matrix compared to the squares of the original correlations. The overall MSA as well as estimates for each item are found. The index is known as the Kaiser–Meyer–Olkin (KMO) index. Interpretation of result. In flamboyant fashion, Kaiser proposed that a KMO > 0.9 was marvelous, in the 0.80s, meritorious, in the 0.70s, middling, in the 0.60s, mediocre, in the 0.50s, miserable, and less than 0.5 would be unacceptable. In general, KMO values between 0.8 and 1 indicate the sampling is adequate. KMO values less than 0.6 indicate the sampling is not adequate and that remedial action should be taken. In contrast, others set this cutoff value at 0.5. A KMO value close to zero means that there are large partial correlations compared to the sum of correlations. In other words, there are widespread correlations which would be a large problem for factor analysis. An alternative measure of whether a matrix is factorable is the Bartlett test, which tests the degree that the matrix deviates from an identity matrix. Example in R. If the following is run in R with the library(psych):
library(psych)
set.seed(5L)
five.samples <- data.frame("A"=rnorm(100), "B"=rnorm(100), "C"=rnorm(100), "D"=rnorm(100), "E"=rnorm(100))
cor(five.samples)
KMO(five.samples)
The following is produced:
Kaiser-Meyer-Olkin factor adequacy
Call: KMO(r = five.samples)
Overall MSA = 0.53
MSA for each item =
A B C D E
0.52 0.56 0.52 0.48 0.54
This shows that the data is not that suited to Factor Analysis. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "MSA_j = \\frac{\\displaystyle \\sum_{k\\neq j} r_{jk}^2}{\\displaystyle \\sum_{k\\neq j} r_{jk}^2+\\sum_{k\\neq j} p_{jk}^2}" }, { "math_id": 1, "text": "KMO = \\frac{\\displaystyle \\underset{j\\neq k}{\\sum\\sum} r_{jk}^2}{\\displaystyle \\underset{j\\neq k}{\\sum\\sum} r_{jk}^2+\\underset{j\\neq k}{\\sum\\sum} p_{jk}^2}" }, { "math_id": 2, "text": "r_{jk}" }, { "math_id": 3, "text": "p_{jk}" } ]
https://en.wikipedia.org/wiki?curid=67689502
6768984
Phase margin
Parameter of electronic amplifiers In electronic amplifiers, the phase margin (PM) is the difference between the phase lag "φ" (&lt; 0) and -180°, for an amplifier's output signal (relative to its input) at zero dB gain - i.e. unity gain, or that the output signal has the same amplitude as the input. formula_0. For example, if the amplifier's open-loop gain crosses 0 dB at a frequency where the phase lag is -135°, then the phase margin of this feedback system is -135° -(-180°) = 45°. See Bode plot#Gain margin and phase margin for more details. Theory. Typically the open-loop phase lag (relative to input, "φ" &lt; 0) varies with frequency, progressively increasing to exceed 180°, at which frequency the output signal becomes inverted, or antiphase in relation to the input. The PM will be positive but decreasing at frequencies less than the frequency at which inversion sets in (at which PM = 0), and PM is negative (PM &lt; 0) at higher frequencies. In the presence of negative feedback, a zero or negative PM at a frequency where the loop gain exceeds unity (1) guarantees instability. Thus positive PM is a "safety margin" that ensures proper (non-oscillatory) operation of the circuit. This applies to amplifier circuits as well as more generally, to active filters, under various load conditions (e.g. reactive loads). In its simplest form, involving ideal negative feedback "voltage" amplifiers with non-reactive feedback, the phase margin is measured at the frequency where the open-loop voltage gain of the amplifier equals the desired closed-loop DC voltage gain. More generally, PM is defined as that of the amplifier and its feedback network combined (the "loop", normally opened at the amplifier input), measured at a frequency where the loop gain is unity, and prior to the closing of the loop, through tying the output of the open loop to the input source, in such a way as to subtract from it. In the above loop-gain definition, it is assumed that the amplifier input presents zero load. To make this work for non-zero-load input, the output of the feedback network needs to be loaded with an equivalent load for the purpose of determining the frequency response of the loop gain. It is also assumed that the graph of gain vs. frequency crosses unity gain with a negative slope and does so only once. This consideration matters only with reactive and active feedback networks, as may be the case with active filters. Phase margin and its important companion concept, gain margin, are measures of stability in closed-loop, dynamic-control systems. Phase margin indicates relative stability, the tendency to oscillate during its damped response to an input change such as a step function. Gain margin indicates absolute stability and the degree to which the system will oscillate, without limit, given any disturbance. The output signals of all amplifiers exhibit a time delay when compared to their input signals. This delay causes a phase difference between the amplifier's input and output signals. If there are enough stages in the amplifier, at some frequency, the output signal will lag behind the input signal by one cycle period at that frequency. In this situation, the amplifier's output signal will be in phase with its input signal though lagging behind it by 360°, i.e., the output will have a phase angle of −360°. This lag is of great consequence in amplifiers that use feedback. 
The reason: the amplifier will oscillate if the fed-back output signal is in phase with the input signal at the frequency at which its open-loop voltage gain equals its closed-loop voltage gain and the open-loop voltage gain is one or greater. The oscillation will occur because the fed-back output signal will then reinforce the input signal at that frequency. In conventional operational amplifiers, the critical output phase angle is −180° because the output is fed back to the input through an inverting input which adds an additional −180°. Phase margin, gain margin and their relation to feedback stability. Phase margin and gain margin are two measures of stability for a feedback control system. They indicate how much the gain or the phase of the system can vary before it becomes unstable. Phase margin is the difference (expressed as a positive number) between 180° and the phase shift where the magnitude of the loop transfer function is 0 dB. It is the additional phase shift that can be tolerated, with no gain change, while remaining stable. Gain margin is the difference (expressed as a positive dB value) between 0 dB and the magnitude of the loop transfer function at the frequency where the phase shift is 180°. It is the amount by which the gain can be increased or decreased without making the system unstable. For a stable system, both margins should be positive, or the phase margin should be greater than the gain margin. For a marginally stable system, the margins should be zero or the phase margin should be equal to the gain margin. Bode plots can be used to graphically determine the gain margin and phase margin of a system. A Bode plot maps the frequency response of the system through two graphs – the Bode magnitude plot (expressing the magnitude in decibels) and the Bode phase plot (expressing the phase shift in degrees). Practice. In practice, feedback amplifiers must be designed with phase margins substantially in excess of 0°, even though amplifiers with phase margins of, say, 1° are theoretically stable. The reason is that many practical factors can reduce the phase margin below the theoretical minimum. A prime example is when the amplifier's output is connected to a capacitive load. Therefore, operational amplifiers are usually compensated to achieve a minimum phase margin of 45° or so. This means that at the frequency at which the open and closed loop gains meet, the phase angle is −135°. The calculation is: -135° - (-180°) = 45°. See Warwick or Stout for a detailed analysis of the techniques and results of compensation to ensure adequate phase margins. See also the article "Pole splitting". Often amplifiers are designed to achieve a typical phase margin of 60 degrees. If the typical phase margin is around 60 degrees then the minimum phase margin will typically be greater than 45 degrees. A phase margin of 60 degrees is also a magic number because it allows for the fastest settling time when attempting to follow a voltage step input (a Butterworth design). An amplifier with lower phase margin will ring for longer and an amplifier with more phase margin will take a longer time to rise to the voltage step's final level. Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; See also. &lt;templatestyles src="Div col/styles.css"/&gt;
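As a numerical illustration of the definitions above, the Python sketch below evaluates an assumed three-pole open-loop response (the DC gain and pole locations are invented for the example, not taken from any particular amplifier) and reads off both margins from the frequency response.
import numpy as np

w = np.logspace(0, 7, 200000)               # rad/s
s = 1j * w
# assumed open-loop gain: DC gain 80 dB, poles at 10, 2e5 and 2e6 rad/s
L = 1e4 / ((1 + s / 10) * (1 + s / 2e5) * (1 + s / 2e6))
mag = np.abs(L)
phase = np.degrees(np.unwrap(np.angle(L)))

i_c = np.argmin(np.abs(mag - 1.0))          # unity-gain (0 dB) crossover
pm = phase[i_c] - (-180.0)                  # phase margin
i_p = np.argmin(np.abs(phase + 180.0))      # frequency where the phase lag reaches -180 deg
gm_db = -20.0 * np.log10(mag[i_p])          # gain margin in dB

print(f"crossover near {w[i_c]:.3g} rad/s, phase margin about {pm:.1f} deg")
print(f"phase reaches -180 deg near {w[i_p]:.3g} rad/s, gain margin about {gm_db:.1f} dB")
For this invented loop gain the sketch reports a phase margin of roughly 60 degrees and a gain margin of a few tens of dB, the kind of comfortable margins discussed in the Practice section.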
[ { "math_id": 0, "text": "\\mathrm{PM} = \\varphi-(-180^\\circ)" } ]
https://en.wikipedia.org/wiki?curid=6768984
67692128
Thermohaline staircase
Oceanographic Phenomenon Thermohaline staircases are patterns that form in oceans and other bodies of salt water, characterised by step-like structures observed in vertical temperature and salinity profiles; the patterns are formed and maintained by double diffusion of heat and salt. The ocean phenomenon consists of well-mixed layers of ocean water stacked on top of each other. The well-mixed layers are separated by high-gradient interfaces, which can be several meters thick. The total thickness of staircases ranges typically from tens to hundreds of meters. Two types of staircases are distinguished. Salt-fingering staircases can be found at locations where relatively warm, salty water overlies relatively colder, fresher water. Here, large-scale temperature and salinity both increase upward, making the mixing process of salt fingering possible. Locations where you can find these type of staircases are for example beneath the Mediterranean outflow, in the Tyrrhenian Sea, and northeast Caribbean. Diffusive staircases can be found at locations where both temperature and salinity increase downward, for example in the Arctic Ocean and in the Weddell Sea. An important feature of thermohaline staircases is their extreme stability in space and time. They can persist several years or more and can extend for hundreds of kilometers. The interest in thermohaline staircases is partly due to the fact that the staircases represent mixing hot spots in the main thermocline. Extensive definition and detection. To determine the presence of thermohaline staircases, the following steps can be taken according to the algorithm designed by Van der Boog. The first step of the algorithm is to identify the mixed layers by locating weak vertical density gradients in conservative temperature and absolute salinity. To do so, the threshold gradient method is used with a threshold of formula_2, with formula_3 the pressure and formula_4 formula_5 the reference pressure. The vertical conservative temperature, absolute salinity, and potential density gradients are all below the threshold value by meeting these three conditions: formula_6 formula_7 formula_8 with formula_9 the thermal expansion coefficient, formula_10 the haline contraction coefficient, formula_11 the reference density, formula_12 the conservative temperature, and formula_13 the salinity. The second step is to define the interface, which is the part of the water column in the middle of two mixed layers. It is required that the conservative temperature, absolute salinity, and potential density variations in the interface formula_14 should be larger than the variations within each mixed layer formula_15 to ensure a stepped structure. Therefore the following conditions should be met: formula_16 formula_17 formula_18 where subscript 1 corresponds to the mixed layer above the interface and subscript 2 corresponds to the mixed layer below the interface. The third step is to limit the interface height formula_19. The interface height should be smaller than the height of the mixed layers directly above and below the interface formula_20. This condition has to be met in order to ensure that the interface is relatively thin compared to the mixed layers surrounding it. Furthermore, the algorithm removes all interfaces with conservative temperature or absolute salinity inversions to make sure that it only detects step-like structures that are associated with the presence of thermohaline staircases. 
The fourth step is to determine the double-diffusive regime (salt-fingering or diffusive) of each interface. When both conservative temperature and absolute salinity of the mixed layers above and below the interface increase downward, the interface belongs to the diffusive regime. When both conservative temperature and absolute salinity of the mixed layers above and below the interface both increase upward, the interface is classified as the salt-fingering regime. Finally, only vertical sequences of at least two interfaces in the same double-diffusive regime are selected, where the interfaces should be separated from each other by only one mixed layer. This way, most thermohaline intrusions are removed, as these are characterised by alternating mixed layers in the diffusive and salt-finger regimes. Furthermore, the algorithm removes salt-fingering interfaces and diffusive-convective interfaces outside their favourable Turner angle formula_0, a parameter used to describe the local stability of an inviscid water column. Interfaces with salt-fingering characteristics should correspond to Turner angles of formula_21 and interfaces with diffusive-convective characteristics should correspond to Turner angles of formula_22. Staircase origin. The origin of thermohaline staircases relies on double diffusive convection, and specifically on the fact that heated water diffuses more readily than salty water. However, there is still much debate on which specific mechanism of layering plays a role. Six possible mechanisms are described below. Collective instability mechanism. This mechanism, involving collective instability, relies on the idea that after a period of active internal wave motion, layers appear. This hypothesis was motivated by laboratory experiments in which staircases formed from the initially uniform temperature and salinity gradients. Growing waves might overturn and generate the stepped structure of thermohaline staircases. Thermohaline intrusion mechanism. This hypothesis states that staircases represent the final stage in the evolution of thermohaline intrusions. Intrusions can evolve either to a state consisting of alternating salt-finger and diffusive interfaces separated by convecting layers, which is common at high density ratio formula_1, or to a series of salt-finger interfaces when the density ratio is low formula_23. This proposition relies on the presence of lateral property gradients to drive interleaving. This mechanism, where thermohaline intrusions are transformed into staircases, are likely to exist in strong temperature-salinity fronts. Metastable equilibria mechanism. A different theory states that staircases represent distinct metastable equilibria. It is suggested that finite amplitude perturbations to the gradient state force the system into a layered regime where it can remain for long periods of time. Large initial perturbations to the gradient state make the transition to the staircase more likely and accelerate the process. Once the staircase is created, the system becomes resilient to further structural changes. Applied flux mechanism. The applied flux mechanism was mainly tested in laboratory experiments, and is most likely at work in cases when layering is caused by geothermal heating. When a stable salinity gradient is heated from below, top-heavy convection will take place in the lower part of the water column. The well-mixed convecting layer is bounded from above by a thin high-gradient interface. 
By a combination of molecular diffusion and entrainment across the interface, heat is transferred upward from the convecting layer. The molecular transfer of heat exceeds that of salt, resulting in a supply of buoyancy to the region immediately above the interface. This leads to the formation of a second convecting layer. The process can repeat itself over and over, which results in a sequence of mixed layers separated by sharp interfaces, a thermohaline staircase. Negative density diffusion. In salt-fingering staircases, vertical temperature and salinity fluxes are downgradient, while the vertical density flux is upgradient. This is explained by the fact that the potential energy released in transporting salt downward must exceed that expended in transporting heat upward, resulting in a net downward transport of mass. This negative diffusion sharpens the fluctuations and therefore suggests a means for generating and maintaining staircases. Instability of flux-gradient laws. This mechanism is based on negative density diffusion as well. However, instead of combining temperature and salinity into a single density term, it treats both density components individually. In a publication by Radko, it is shown that the formation of steps in numerical models is caused by the parametric variation of the flux ratio as a function of the density ratio formula_1, leading to an instability of equilibrium with uniform stratification. These unstable perturbations continuously grow in time until well-defined layers are formed. Observations. Two types of staircases exist: salt-fingering staircases, where both temperature and salinity of the mixed layers decrease with pressure (and therefore with depth); and diffusive staircases, where both temperature and salinity of the mixed layers increase with pressure (so with depth). Salt-fingering staircases. Most observations of salt-fingering staircases have come from three locations: the western tropical Atlantic, the Tyrrhenian Sea, and the Mediterranean outflow. In these regions the density ratio formula_1 has a very low value, which appears to be a condition for sufficient staircase formation. No staircases have been reported for formula_24 values above 2. For values below 1.7, the step-like structures in vertical temperature and salinity profiles become apparent. Moreover, the spatial pattern of staircases is very sensitive to formula_24. With decreasing formula_24, the height of steps sharply increases and the staircases become more pronounced. The importance of the density ratio for the formation is a sign that staircases are a product of double diffusive convection. In the Tyrrhenian Sea, thermohaline staircases due to salt fingers are observed. The step-like shape is visible in the vertical temperature and salinity profiles. Staircases in the Tyrrhenian Sea show a very high stability in space and time. The weak deep circulation in this area might be an explanation for this stability. Diffusive staircases. Diffusive staircases are found at higher latitudes. In the Arctic Ocean, warm and salty water from the Atlantic enters the Arctic basin and subducts beneath the colder and fresher waters of the upper Arctic. In some regions, Pacific waters also sit below the mixed layer and above the Atlantic layer. A thermocline is found at the top of the Atlantic Water layer. In that region, temperature and salinity increase with depth and step-like patterns are observed in vertical temperature and salinity profiles. 
These staircases mediate the heat transport from the warm water of Atlantic origin to the Arctic halocline and therefore serve as an important process in determining the heat flux from the Atlantic Water upward to the sea ice. Staircases in the Arctic are characterised by much smaller steps than in salt-fingering staircases. On a much smaller scale, diffusive staircases have also been observed in low- and mid-latitudes. For example, Lake Kivu and Lake Nyos show characteristic staircase patterns. In these salt-water lakes, geothermal springs supply heat at the bottom resulting in the diffusive background stratification. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
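To make the first step of the detection algorithm described above concrete, the Python sketch below applies the three gradient thresholds to a synthetic, idealised staircase profile. The profile, the constant expansion and contraction coefficients and the linearised density are all simplifying assumptions chosen only for illustration; the full algorithm also involves the interface, height and regime checks described earlier.
import numpy as np

# synthetic, idealised staircase profile (illustrative values only)
p = np.arange(200.0, 400.0, 2.0)                 # pressure, dbar
step = np.floor((p - 200.0) / 50.0)              # four ~50 dbar thick mixed layers
T = 13.0 - 0.4 * step - 0.001 * (p - 200.0)      # conservative temperature, deg C
S = 38.6 - 0.05 * step - 0.0001 * (p - 200.0)    # absolute salinity, g/kg

alpha, beta, rho0 = 2.0e-4, 7.6e-4, 1029.0       # assumed constant coefficients
sigma1 = rho0 * (beta * (S - 38.6) - alpha * (T - 13.0))   # linearised density anomaly

thr = 0.0005                                     # kg m^-3 dbar^-1, threshold from the text
dT = np.gradient(T, p)
dS = np.gradient(S, p)
dsig = np.gradient(sigma1, p)
mixed = ((np.abs(alpha * rho0 * dT) <= thr)
         & (np.abs(beta * rho0 * dS) <= thr)
         & (np.abs(dsig) <= thr))
print(f"{mixed.sum()} of {mixed.size} levels flagged as mixed layer")
In this synthetic case the weak gradients inside each layer pass all three conditions, while the jumps between layers fail them and would be handed to the interface checks of the later steps.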
[ { "math_id": 0, "text": "(Tu)" }, { "math_id": 1, "text": "(R_\\rho)" }, { "math_id": 2, "text": "\\partial \\sigma_1 / \\partial p_{max} = 0.0005 \\, \\text{kg m}^{-3} \\text{pbar}^{-1}" }, { "math_id": 3, "text": "p\n\n" }, { "math_id": 4, "text": "\\sigma_1\n\n" }, { "math_id": 5, "text": "(= 1000 \\,\\text{dbar})" }, { "math_id": 6, "text": "\\left| \\alpha \\rho_0 \\frac{\\partial T}{\\partial p} \\right| \\,\\leq 0.0005 \\, \\text{kg m}^{-3} \\text{dbar}^{-1},\n\n" }, { "math_id": 7, "text": "\\left| \\beta \\rho_0 \\frac{\\partial S}{\\partial p} \\right| \\,\\leq 0.0005 \\, \\text{kg m}^{-3} \\text{dbar}^{-1},\n\n" }, { "math_id": 8, "text": "\\left| \\frac{\\partial \\sigma_1}{\\partial p} \\right| \\,\\leq 0.0005 \\, \\text{kg m}^{-3} \\text{dbar}^{-1},\n\n" }, { "math_id": 9, "text": "\\alpha\n\n" }, { "math_id": 10, "text": "\\beta\n\n" }, { "math_id": 11, "text": "\\rho_0\n\n" }, { "math_id": 12, "text": "T\n\n" }, { "math_id": 13, "text": "S\n\n" }, { "math_id": 14, "text": "(\\Delta T_{IF}, \\Delta S_{IF},\\Delta \\sigma_{IF}) \n\n" }, { "math_id": 15, "text": "(\\Delta T_{ML}, \\Delta S_{ML},\\Delta \\sigma_{ML}) \n\n" }, { "math_id": 16, "text": "\\text{max}\\left(\\left| \\Delta T_{ML,1} |, |\\Delta T_{ML,2} \\right| \\right) < |\\Delta T_{IF}|,\n\n" }, { "math_id": 17, "text": "\\text{max}\\left(\\left| \\Delta S_{ML,1} |, |\\Delta S_{ML,2} \\right| \\right) < |\\Delta S_{IF}|,\n" }, { "math_id": 18, "text": "\\text{max}\\left(| \\Delta \\sigma_{1_{ML,1}} |, |\\Delta \\sigma_{1_{ML,2}} | \\right) < |\\Delta \\sigma_{1_{IF}}|,\n" }, { "math_id": 19, "text": "(h_{IF})" }, { "math_id": 20, "text": "(h_{ML,1},h_{ML,2})" }, { "math_id": 21, "text": "45 ^{\\circ} < Tu < 90^{\\circ}" }, { "math_id": 22, "text": "-90 ^{\\circ} < Tu < -45^{\\circ}" }, { "math_id": 23, "text": "(R_\\rho < 1.6)" }, { "math_id": 24, "text": "R_\\rho" } ]
https://en.wikipedia.org/wiki?curid=67692128
67696150
Lagrangian ocean analysis
Lagrangian ocean analysis is a way of analysing ocean dynamics by computing the trajectories of virtual fluid particles, following the Lagrangian perspective of fluid flow, from a specified velocity field. Often, the Eulerian velocity field used as an input for Lagrangian ocean analysis has been computed using an ocean general circulation model (OGCM). Lagrangian techniques can be employed on a range of scales, from modelling the dispersal of biological matter within the Great Barrier Reef to global scales. Lagrangian ocean analysis has numerous applications, from modelling the diffusion of tracers, through the dispersal of aircraft debris and plastics, to determining the biological connectivity of ocean regions. Techniques. Lagrangian ocean analysis makes use of the relation between the Lagrangian and Eulerian specifications of the flow field, namely formula_0 where formula_1 defines the trajectory of a particle (fluid parcel), labelled formula_2, as a function of the time formula_3, and the partial derivative is taken for a given fluid parcel formula_2. In this context, formula_2 is used to identify a given virtual particle - physically it corresponds to the position through which that particle passed at time formula_4. In words, this equation expresses that the velocity of a fluid parcel at the position along its trajectory that it reaches at time formula_3 can also be interpreted as the velocity at that point in the Eulerian coordinate system. Using this relation, the Eulerian velocity field can be integrated in time to trace a trajectory, formula_5 where formula_6 is a dummy integration variable. In this equation, formula_7 is continuous in space – for the integration of trajectories in a Lagrangian ocean model, the velocity field must be evaluable at any point in space. Spatial interpolation is used so that the velocity field can be evaluated at points inside the grid cells outputted by OGCMs. Time Integration. In some cases, the time integration is performed using explicit time-stepping methods. Lagrangian ocean analysis codes may make use of, for instance, an Euler method, or a higher order method, such as Runge-Kutta 4 or Runge-Kutta 4-5. If the timestep of the integration method is shorter than the time resolution of the Eulerian velocity field used as an input, then the velocity field must be interpolated in the temporal domain, so that there is a velocity value to be integrated for each time. To ensure volume conservation in integrating the trajectories, symplectic methods, can be used. These methods are generally implicit in nature, requiring extra computation when compared to explicit methods. Alternatively, if each component of the flow velocity within a spatial grid is assumed to vary linearly along its axis, trajectories can be analytically calculated. If the velocity field is steady-state, then trajectories can be treated as streamlines, and considered together in bundles known as stream tubes, which bound fluid flow in different parts of the spatial domain. If the velocity field provided as the starting point of the Lagrangian analysis is a divergence-free flow, the volume of fluid moving through a stream tube is conserved throughout the stream tube. 
To show this mathematically, the starting point is the condition that the divergence of the velocity field is zero, formula_8 Integrating this over the volume of a stream tube, the divergence theorem can be used to show that formula_9 formula_10 formula_11 where formula_12 is the normal to the surface of the stream tube, formula_13 denotes the entire volume of the stream tube and formula_14 its surface. If the streamlines and pathlines (trajectories) are equivalent, as is the case for steady-state (non time-evolving) flows, then the walls of the stream tube do not contribute to the integral, as the flow cannot cross them. Thus, only the ends, formula_15 and formula_16 will contribute to the integral, so formula_17 Physically, this equation expresses that the fluid flux passing through the two ends of the streamtube are equal, demonstrating the volume conservation. The equation also shows that the area of each end of the stream tube is inversely proportional to the speed of the normal flow through it. These features of the analytical method of calculating trajectories lend themselves to Lagrangian analyses primarily concerned with the advective (as opposed to diffusive) component of the flow. There exists a caveat to this approach: given that the velocity fields considered can be time-evolving, the equivalence between stream tubes and material pathways may not hold. Lagrangian ocean models can address this formalism by considering the flow field to be a piecewise function in the temporal domain, where each sub-function is a steady-state velocity field. A Boussinesq model, in which flow is incompressible and thus non-divergent, can be used to generate the velocity field used as an input for the Lagrangian analysis code to ensure volume is conserved when using this method. Incorporating Diffusion. For a Lagrangian ocean analysis code to include the effects of molecular diffusion, or other small-scale mixing which may be modelled as a diffusive process, stochastic terms must be added to the trajectory computations. Stochastic terms may be added in accordance with a stochastic differential equation (SDE) derived from the tracer diffusion equation in the form of the Fokker-Planck equation. This method requires that a diffusion tensor be provided. Rather than using an SDE derived with a diffusion tensor, Lagrangian ocean models may instead find an SDE based on how well the resulting diffusivity statistics fit either observations or models built with a finer resolution. This method involves the use of a Markov chain; the order of the Markov chain used is another point where different Lagrangian analysis codes differ. Online and Offline Analysis. Lagrangian ocean analysis codes can be characterised as online or offline. Online codes work in tandem with the Eulerian model that outputs the velocity field: each time the Eulerian model updates, the trajectories are timestepped using the new velocity information. The Lagrangian analysis packages available with some OGCMs are examples of online Lagrangian ocean analysis codes. Offline codes calculate trajectories using stored velocity fields outputted by Eulerian models at a prior stage. As a result of this, offline models may be used to calculate trajectories forwards or backwards through time; the latter may be helpful in determining the origins of water masses. As of 2018, there are no examples of online models that use the tracer equation to compute diffusive effects. Applications. 
One strength of Lagrangian ocean models is that they can be less computationally expensive than calculating the advection diffusion of a tracer concentration within the Eulerian paradigm: for each timestep, the Lagrangian code needs only evaluate the position of each virtual particle, as opposed to the Eulerian model which must explicitly calculate the tracer concentration in every grid cell. In the case of offline models, trajectories may be advected backwards in time which can be useful in finding sources of the material being tracked. Lagrangian ocean analysis has found use in tracking water masses, for instance tracking the source and pathways of the Subantarctic Mode Water, as well as how its temperature and salinity evolve along its path. Another application of Lagrangian modelling techniques is in simulating the dispersal of biological matter in different ocean regions to gauge their biological connectivity. Lagrangian ocean analysis has also been used to model the dispersal of materials originating in human activities, for example in tracking the spread of oil following the 2010 Deepwater Horizon oil spill and in attempts to model dispersal of Debris from flight MH370 in the Indian Ocean since 2014. Furthermore, Lagrangian ocean analysis has played a role in the tracking of the transport of plastics across the global ocean, for instance helping researchers estimate the fraction of plastic debris that ends up at the coast. Examples. While small scale Lagrangian analysis codes have been written for many individual research projects, there exists a smaller number of community codes that can implement Lagrangian ocean models on a global scale. The differences between 10 such community codes are illustrated in the venn diagram, where they are grouped in terms of whether they are offline, whether they include a stochastic term to model diffusion or small-scale turbulent mixing, and whether they compute trajectories analytically. All of the codes in the diagram that lie outside of the analytic trajectories circle employ explicit integration methods to compute the trajectories. Ariane, for example, is a code that runs offline, calculates trajectories with the analytic streamtube method and does not model diffusive effects. By contrast, Parcels computes particle trajectories through explicit time-stepping and can include stochastic terms if the user desires; it is also offline. A caveat to this is that Parcels' integration framework is customisable, and could be reprogrammed to instead use the analytical method to calculate trajectories. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
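As a minimal illustration of the explicit time-stepping approach described in the Techniques section, the Python sketch below advects a small cluster of virtual particles through an analytic, divergence-free velocity field with a fourth-order Runge-Kutta scheme. The analytic field is a stand-in assumption; an actual Lagrangian ocean analysis code would instead interpolate gridded OGCM velocities in space and time.
import numpy as np

def velocity(x, y, t):
    # steady, divergence-free single-gyre test field on the unit square
    # (streamfunction psi = sin(pi x) sin(pi y); a stand-in for OGCM output)
    u = -np.pi * np.sin(np.pi * x) * np.cos(np.pi * y)
    v = np.pi * np.cos(np.pi * x) * np.sin(np.pi * y)
    return u, v

def rk4_step(x, y, t, dt):
    # one fourth-order Runge-Kutta step of dX/dt = u(X, t)
    k1u, k1v = velocity(x, y, t)
    k2u, k2v = velocity(x + 0.5 * dt * k1u, y + 0.5 * dt * k1v, t + 0.5 * dt)
    k3u, k3v = velocity(x + 0.5 * dt * k2u, y + 0.5 * dt * k2v, t + 0.5 * dt)
    k4u, k4v = velocity(x + dt * k3u, y + dt * k3v, t + dt)
    return (x + dt * (k1u + 2 * k2u + 2 * k3u + k4u) / 6,
            y + dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6)

rng = np.random.default_rng(1)
x = 0.5 + 0.05 * rng.standard_normal(100)    # release a cluster of 100 virtual particles
y = 0.25 + 0.05 * rng.standard_normal(100)
dt, nsteps = 0.01, 500
for n in range(nsteps):
    x, y = rk4_step(x, y, n * dt, dt)
print("mean final position:", float(x.mean()), float(y.mean()))
Stochastic terms modelling small-scale mixing, as discussed under Incorporating Diffusion, could be added to each step, but are left out of this deterministic sketch.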
[ { "math_id": 0, "text": " \\mathbf{u}(\\mathbf{x}, t) = \\mathbf{u}(\\mathbf{X}(\\mathbf{x_0}, t), t) = \\frac{\\partial\\mathbf{X}(\\mathbf{x_0}, t)}{\\partial t}," }, { "math_id": 1, "text": "\\mathbf{X}(\\mathbf{x_0}, t)" }, { "math_id": 2, "text": "\\mathbf{x_0}" }, { "math_id": 3, "text": "t" }, { "math_id": 4, "text": "t_0" }, { "math_id": 5, "text": " \\mathbf{X}(t+\\Delta t) = \\mathbf{X}(t) + \\int^{t+\\Delta t}_{t}\\mathbf{u}(\\mathbf{X}(\\mathbf{x_0}, t^\\prime), t^\\prime)dt^\\prime," }, { "math_id": 6, "text": "t^\\prime" }, { "math_id": 7, "text": "\\mathbf{u}" }, { "math_id": 8, "text": " \\nabla\\cdot\\mathbf{u} = 0 ." }, { "math_id": 9, "text": "\\iiint_V\\left(\\mathbf{\\nabla}\\cdot\\mathbf{u}\\right)\\,dV=" }, { "math_id": 10, "text": "\\scriptstyle S" }, { "math_id": 11, "text": "(\\mathbf{u}\\cdot\\mathbf{\\hat{n}})\\,dS = 0," }, { "math_id": 12, "text": "\\mathbf{\\hat{n}}" }, { "math_id": 13, "text": "V" }, { "math_id": 14, "text": "S" }, { "math_id": 15, "text": "S_1" }, { "math_id": 16, "text": "S_2" }, { "math_id": 17, "text": " \\iint_{S_1}(\\mathbf{u}\\cdot\\mathbf{\\hat{n}})\\,dS + \\iint_{S_2}(\\mathbf{u}\\cdot\\mathbf{\\hat{n}})\\,dS = 0. " } ]
https://en.wikipedia.org/wiki?curid=67696150
67696353
Remote sensing (oceanography)
Remote sensing in oceanography is a widely used observational technique which enables researchers to acquire data of a location without physically measuring at that location. Remote sensing in oceanography mostly refers to measuring properties of the ocean surface with sensors on satellites or planes, which compose an image of captured electromagnetic radiation. A remote sensing instrument can either receive radiation from the Earth’s surface (passive), whether reflected from the Sun or emitted, or send out radiation to the surface and catch the reflection (active). All remote sensing instruments carry a sensor to capture the intensity of the radiation at specific wavelength windows, to retrieve a spectral signature for every location. The physical and chemical state of the surface determines the emissivity and reflectance for all bands in the electromagnetic spectrum, linking the measurements to physical properties of the surface. Unlike passive instruments, active remote sensing instruments also measure the two-way travel time of the signal, which is used to calculate the distance between the sensor and the imaged surface. Remote sensing satellites often carry other instruments which keep track of their location and measure atmospheric conditions. Remote sensing observations, in comparison to (most) physical observations, are consistent in time and have good spatial coverage. Since the ocean is fluid, it is constantly changing on different spatial and temporal scales. Capturing the spatial variation of the ocean with remote sensing is considered extremely valuable and is on the frontier of oceanographic research. The high variability of the ocean surface is also the deterministic factor in the differences between land and ocean remote sensing. Remote sensing of the ocean. Characteristics. Remote sensing is actively used in various fields of natural sciences like geology, physical geography, ecology, archeology and meteorology, but remote sensing of the ocean is vastly different. Unlike most land processes, the ocean, just like the atmosphere, is variable on much shorter time scales over its entire spatial scale; the ocean is always moving. The temporal variability in the object of study determines the usability of specific data and the applicable methods and is the reason why remote sensing methods differ materially between ocean and land surfaces. A single wave on the surface of the ocean cannot be tracked by today's satellites. Ocean waves crash or disappear before a new observation is made; features with this time scale are rarer on land. Unlike vegetation, snow and other land covers, the ocean is opaque to most electromagnetic radiation (except for visible light); the ocean surface is therefore easy to monitor, but it is a challenge to retrieve information from deeper layers. Remote sensing enables temporal analysis over vast spatial scales, since satellites have a constant revisit time, provide a wide image and are often operational for multiple consecutive years. This concept of constant data in time and space was a breakthrough in oceanography, which previously relied on measurements from drifters, coastal locations like tide gauges, ships and buoys. All in-situ measurements either have a small spatial footprint or are varying in location and time, so do not deliver constant and comparable data. History. Remote sensing as we know it today started with the launch of Landsat 1, the first satellite dedicated to Earth resources observation, in 1972. 
Landsat 1 delivered the first multi-spectral images of features on land and coastal zones all over the world and already showed effectiveness in oceanography, although not specifically designed for it. In 1978, NASA took the next step in remote sensing for oceanography with the launch of the first orbiting satellite dedicated to ocean research, Seasat. The satellite carried five different instruments: a radar altimeter for retrieving sea surface height, a microwave scatterometer to retrieve wind speed and direction, a microwave radiometer to retrieve sea surface temperature (SST), an optical and infrared radiometer to check for clouds and surface characteristics and, lastly, the first spaceborne Synthetic Aperture Radar (SAR) instrument. Seasat was only operational for a few months but, together with the Coastal Zone Color Scanner (CZCS) on Nimbus-7, proved the feasibility of many techniques and instruments in ocean remote sensing. TOPEX/POSEIDON, an altimeter launched in 1992, provided the first continuous global map of sea surface topography and continued on the possibilities explored by Seasat. The Jason-1, Jason-2 and Jason-3 missions have continued the measurements, forming a complete time-series of the global sea surface height from 1992 to today. Other techniques hosted on Seasat also found continuation. The Advanced Very-High-Resolution Radiometer (AVHRR) is the sensor carried on all NOAA missions and made SST retrieval accessible with a continuous time-series since 1979. The European Space Agency (ESA) further developed SAR with the ERS-2, ENVISAT and now Sentinel-1 missions by providing larger spatial footprints, lowering the resolution and flying twin missions to reduce the effective revisit time. Optical remote sensing of the ocean found continuation after the CZCS with the polar orbiting missions ENVISAT, OrbView-2, MODIS and very recently Sentinel-3, to form a continuous record since 1997. Sentinel-3 is now one of the best-equipped missions to map the ocean, hosting a SAR altimeter, a multispectral spectrometer, a radiometer and several other instruments on multiple satellites with alternating orbits, providing exceptional temporal and spatial resolution. Methods. The physical and chemical state of a surface or object has a direct impact on the emissivity, reflectance and refractance of electromagnetic radiation. Sensors on remote sensing instruments capture radiation, which can be translated back to deduce the physio-chemical properties of the surface. Water content, temperature, roughness and colour are characteristics often deduced from the spectral characteristics of the surface. A sensor on a satellite returns the composite signal for a certain area inside the footprint called a cell; the size of these cells is referred to as the spatial resolution. The spatial resolution of a sensor is determined by the distance from Earth and the available bandwidth for data transfer. A satellite passes over the same location consistently through time with a fixed interval called the revisit time or temporal resolution. Sensors cannot have both a very high temporal and a very high spatial resolution, so a tradeoff has to be made specific to the goal of the mission. Sensors on satellites have measuring errors, caused by, for example, atmospheric interference, geolocation imprecision and topographic distortion. Complete derived products from remote sensing often use simple calculations or algorithms to transform the spectral signature from a cell to a physical value. 
All methods of transferring spectral data has certain biases which can contribute to the measurement errors of the final result. Often surface characteristics can be deduced with very low error margins due to data corrections, using onboard data or models, and a physically correct translation of spectral characteristics to physio-chemical characteristics. Although it is interesting to know the surface characteristics at a certain moment, often research is more interested in documenting the change of a surface over time or the transport of characteristics through space. Change detection leverages the consistent temporal component of remote sensing data to analyze the change of surface properties in time. Change detection relies on having at least two observations taken at different times to analyze the difference between the two images visually or analytically. In land remote sensing change detection is used for example: to assess the impact of a volcano eruption, check the growth of plants through time, map deforestation, and measure ice sheet melt. In oceanography the surface changes more quickly than the revisit time of a satellite making it difficult to monitor certain processes. Change detection in oceanography requires the characteristic to change continuously like sea level rise or change spatial scale slower than the revisit time of the satellite like algal blooms. Another way to infer change from only 1 acquisition is by computing the dynamical component and direction from a static image which is leveraged in RADAR altimetry to deduce surface current velocity. Remote sensing use cases. Sea surface temperature (thermal infrared radiometry). The ocean surface emits electromagnetic radiation formula_0 dependent on the temperature formula_1 at a certain frequency formula_2 following Planck's law for black body radiation, scaled by the emissivity formula_3 of the surface since the ocean is not a perfect black body. With formula_5 the spectral radiance, formula_6 the Planck constant; formula_7 the speed of light and formula_8 the Boltzmann constant. Most radiation emitted by earth is in the thermal-infrared spectrum which is part of the atmospheric window, the spectral region for which the atmosphere does not significantly absorb radiation. The radiation coming from the earth's surface with a wavelength within the atmospheric window can be captured by a passive radiometry sensor at satellite height. The radiation captured by the sensor is corrected for atmospheric disturbance and radiation noise to compute the brightness temperature of the ocean surface. With a correct estimation of the emissivity of sea water (~0.99) the grey body temperature of the ocean surface can be deduced, also referred to as the Sea Surface Temperature (SST). To correctly remove atmospheric disturbance, both emission and absorption, the airborne radiometers are calibrated for every measurement by SST measurements in multiple bands and/or under different angles. Atmospheric correction is only viable if the measured surface is not covered in clouds as they significantly disturb the emitted radiation. Clouds are either removed as viable pixels in the image using cloud busting algorithms or clouds are handled using histogram and spatial coherency techniques (up to 80% cloud cover). Radiometry captures the surface skin temperature (~10 micron depth) of the ocean, which significantly differs from bulk SST in-situ measurements. 
Phenomena close to but not at the surface, like diurnal thermocline formation, are not well captured by satellites, but SST can still be of tremendous value in oceanography. Overall, satellites measure the SST with an accuracy of ~0.1–0.6 K depending on the sensor and experience only limited issues, such as surface slicks. Retrieved SST datasets transformed oceanographic research during the 1980s and have multiple different uses. The SST is a clear climatological indicator linked to the ENSO cycles, weather and climate change, but it can also highlight the movement of ocean water. SST anomalies can highlight mesoscale eddies, ocean fronts and regions of upwelling, vertical mixing or river outflow, as the water is locally colder or warmer due to transport. The SST is directly linked to the horizontal density gradient, which is strongest at fronts and is induced by ocean currents and eddies. The currents and fronts are visible in SST images and can be detected using edge detection via high-pass filters or kernel transformations, to study their dynamics and origin. SST is widely used to track upwelling and river outflow strength, as these processes are clearly visible as negative SST anomalies. Mapping of algal blooms (Optical). An algal bloom is the enhanced growth of photosynthetic organisms in a water system, which manifests itself as a clear change of water color. Algal blooms are often caused by a local enrichment of the water system with nutrients, which temporarily removes the limiting growth factor of photosynthetic organisms like cyanobacteria. Due to oxygen depletion, the blocking of sunlight and the possible release of toxins, algal blooms can be harmful to their environment. Algae are characterized by their green color, caused by the absorption spectrum of the chlorophyll-a in these organisms. Optical satellites like Sentinel-2 or imaging radiometers like Sentinel-3 and MODIS can capture the reflectance of the ocean surface in the visible and near-infrared spectrum. Areas with a higher concentration of algae near the surface have a distinctly different color. The spectral signature of an algal bloom in water is captured by the sensor as a high reflectance of green and near-infrared radiation and a low reflectance of red light. To map algal blooms, thresholding is used in combination with a spectral index like the Normalized Difference Vegetation Index (NDVI). In one observation the intensity and location of the algal bloom can be recorded, and with a second observation at a different time the displacement and intensity change of the algal bloom can be tracked. Algal blooms are used to study internal wave structures, upwelling and river outflows, which all bring nutrients to surface waters and are therefore correlated with algae concentration. Pollution often coincides with nutrient-rich waters, making algal blooms good indicators of the severity and impact of water pollution. Sea surface height (RADAR altimetry). RADAR altimeters send microwave pulses to the surface and capture the reflection intensity over a short time period, measuring the two-way travel time of the signal. Electromagnetic radiation travels at the speed of light formula_7, thus the two-way travel time formula_9 gives information on the height of the satellite above the surface formula_10 following the formula formula_11. To deduce the sea surface height from the satellite height, the two-way travel time has to be corrected for dynamical errors, the atmospheric conditions and the local geoid height formula_12. 
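Before those corrections are applied, the ranging step itself is a single multiplication. The sketch below uses an assumed Jason-class orbit altitude of roughly 1336 km rather than real telemetry, and shows the timing precision that centimetre-level sea surface height measurements imply; the corrections themselves are described next.
<syntaxhighlight lang="python">
c = 2.998e8                      # speed of light, m/s

def range_from_travel_time(t):
    """R_sat = c * t / 2 for a measured two-way travel time t (in seconds)."""
    return c * t / 2.0

t_two_way = 2 * 1336e3 / c       # two-way travel time for an assumed 1336 km range, s
print(f"two-way travel time: {t_two_way * 1e3:.3f} ms")
print(f"recovered range:     {range_from_travel_time(t_two_way) / 1e3:.1f} km")

dt_per_cm = 2 * 0.01 / c         # change in travel time for a 1 cm height change, s
print(f"1 cm of sea surface height ~ {dt_per_cm * 1e12:.0f} ps of two-way travel time")
</syntaxhighlight>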
The local change of the sea surface height due to dynamical effects like wind and currents can be expressed using the following formula: formula_13 with formula_14 the dynamic sea surface height and formula_15 the height of the satellite. It is hard to correctly estimate formula_12, formula_16 and formula_17 for a certain moment and location. As a solution, remote sensing analysts use the Sea Surface Height Anomaly (SSHA), which only requires information on the tidal height and atmospheric pressure; these can be deduced from drifters, numerical weather models and tidal models. The geoid height for SSHA retrieval is deduced from a long time series of the same RADAR altimetry data. The SSHA is computed by subtracting the temporal mean of the SSH, or Mean Sea Surface (MSS), from the current SSH with formula_18 so that: formula_19 Although the SSHA can show anomalies in the surface currents of the ocean, often a measure called the Absolute Dynamic Topography (ADT) is computed using an independent measurement of the geoid height to display the total ocean currents: formula_20 with the geoid height as a measurement from instruments like the Gravity and Ocean Circulation Explorer (GOCE) or the Gravity Recovery and Climate Experiment (GRACE). The launch of TOPEX/POSEIDON in 1992 started a continuous time series of global SSH data, which has been extremely valuable in assessing sea level rise in the past decades by combining the data with local tide gauges. The dynamical sea surface height from radar altimetry provides useful insight into ocean currents. Assuming geostrophic balance, the velocity anomaly and direction of surface currents perpendicular to the satellite overpass can be computed using the formulas formula_21 and formula_22 for formula_23 and formula_24 with formula_25 the Coriolis parameter, formula_26 the gravitational acceleration, formula_27 the zonal and meridional velocity components and formula_28 the derived sea surface height anomaly. RADAR altimeters are able to collect data even in cloudy circumstances but only cover the globe up to latitudes of ~60–65°. The spatial resolution of RADAR altimeters is often not very high, but their temporal coverage is tremendous, allowing constant monitoring of the ocean surface. RADAR altimeters can also be used to determine the significant wave height and estimate wind velocities using the wave form and backscatter coefficient of the pulse-limited return signal. Challenges of Remote Sensing in Coastal Zones. There can be numerous limitations with the sensors and techniques used by remote sensing tools when it comes to mapping coastal regions. Some challenges stem from issues with resolution and pixel size, as most remote imaging satellites have a pixel size of approximately 1 square kilometer. This presents issues with analyzing coastal regions at the desired level of detail, as most coastal processes occur on a spatial scale that is approximately the same as (or smaller than) the pixel size provided by remote imaging satellites. Additionally, most ocean sensors have a global coverage frequency of 1–2 days, which may be too long to observe the temporal scale of coastal ocean processes. Furthermore, remote sensing of coastal areas has faced challenges in accurately interpreting the color of the ocean. The color of open ocean basins is mostly controlled by phytoplankton, and the other optically active constituents in the water column tend to covary predictably with chlorophyll a. However, as we get closer to the coastlines and move from the open ocean, to shelf seas, to coastal waters, the particles in the water no longer covary with chlorophyll. 
The apparent color may be influenced by optically active constituents in the water column, such as sediments from runoff or pollution. The satellites can also be influenced by "adjacency effects", where the color of the land can bleed into coastal ocean pixels. Finally, removing the effects of the atmosphere is difficult because of the complex and dynamic mix of coastal aerosols and sea spray. All of these factors can make it increasingly challenging to accurately analyze coastal regions from remote sensing satellites. Use of UAVs in Remote Sensing. Satellites, while the core of remote sensing, have limitations in their spatial, spectral, and temporal resolution. In an effort to combat these limitations, satellite remote sensing utilizes interpolation and modelling to fill in the gaps. While methods of interpolation and modelling can be developed to a high degree of statistical accuracy, they are in essence an educated guess based on surrounding conditions. The use of UAVs, or drones, as a remote sensing tool can provide data at higher resolutions that can then be used to fill in the gaps in satellite data, often at a lower price than satellites or crewed aircraft. Notable benefits can be found in the pixel gaps found along coastal areas in satellite data, as well as in the ability to conduct observations of a given area between satellite passes. Modern technology has provided UAV users with numerous platforms able to be outfitted with commercial or custom-made sensor packages. These sensor packages include multispectral and hyperspectral sensors as well as standard visual-spectrum, high-definition cameras. The size of modern UAVs is also a factor contributing to their applicability. Satellites and crewed aircraft require shore-based facilities or ships capable of supporting take-off and landing operations. Small UAVs, defined as those under 55 pounds, have the ability to be launched from nearly every location on shore as well as from vessels of any size at sea. They require very few crew to operate, and flight training requirements are affordable and relatively easy to meet. There are some limiting factors to UAV use for oceanic remote sensing. Firstly, the range is limited by the on-board fuel or battery capacity as well as by the distance from the controller. Many governments also impose restrictions on range, stating that UAVs must be flown within unaided visual line of sight. UAV use offshore must be accompanied by a vessel due to these range constraints. Furthermore, the sensors themselves encounter similar challenges to the sensors mounted on satellites, namely alterations to oceanic reflectance in coastal zones; however, the higher resolution provided by UAV-mounted sensors allows for a more diverse assignment of pixels, reducing the blending effect of terrestrial and aquatic environments and reducing the amount of calculation needed to account for reflectance shifts. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "B" }, { "math_id": 1, "text": "T" }, { "math_id": 2, "text": "\\nu" }, { "math_id": 3, "text": "\\epsilon" }, { "math_id": 4, "text": "B(\\nu, T) = \\frac{ 2 h \\nu^3}{c^2} \\frac{1}{e^\\frac{h\\nu}{k_\\mathrm B T} - 1} \\cdot \\epsilon" }, { "math_id": 5, "text": "B(\\nu, T)" }, { "math_id": 6, "text": "h" }, { "math_id": 7, "text": "c" }, { "math_id": 8, "text": "k" }, { "math_id": 9, "text": "t" }, { "math_id": 10, "text": "R_{sat}" }, { "math_id": 11, "text": "R_{sat} = \\frac{c \\cdot t}{2}" }, { "math_id": 12, "text": "h_{geoid}" }, { "math_id": 13, "text": "h_{dyn} = H_{sat} - RH_{sat} - h_{geoid} - h_{tide} - h_{atm}" }, { "math_id": 14, "text": "h_{dyn}" }, { "math_id": 15, "text": "H_{sat}" }, { "math_id": 16, "text": "h_{tide}" }, { "math_id": 17, "text": "h_{atm}" }, { "math_id": 18, "text": "h_{MSS} = [(h_{dyn} + h_{geoid})]" }, { "math_id": 19, "text": "h_{SSHA} = h_{dyn} + h_{geoid} - MSS" }, { "math_id": 20, "text": "h_{ADT} = h_{SSHA} + h_{MSS} - h_{geoid}\n " }, { "math_id": 21, "text": "fv_{a} = g \\frac{\\partial h_{ssha}}{\\partial x} " }, { "math_id": 22, "text": "fu_{a} = g \\frac{\\partial h_{ssha}}{\\partial y} " }, { "math_id": 23, "text": "u = [u] + u_a " }, { "math_id": 24, "text": "v = [v] + v_a " }, { "math_id": 25, "text": "f " }, { "math_id": 26, "text": "g " }, { "math_id": 27, "text": "u, v " }, { "math_id": 28, "text": "h_{SSHA}\n " } ]
https://en.wikipedia.org/wiki?curid=67696353
6769796
Polar curve
In algebraic geometry, the first polar, or simply polar of an algebraic plane curve "C" of degree "n" with respect to a point "Q" is an algebraic curve of degree "n"−1 which contains every point of "C" whose tangent line passes through "Q". It is used to investigate the relationship between the curve and its dual, for example in the derivation of the Plücker formulas. Definition. Let "C" be defined in homogeneous coordinates by "f"("x, y, z") = 0 where "f" is a homogeneous polynomial of degree "n", and let the homogeneous coordinates of "Q" be ("a", "b", "c"). Define the operator formula_0 Then Δ"Q""f" is a homogeneous polynomial of degree "n"−1 and Δ"Q""f"("x, y, z") = 0 defines a curve of degree "n"−1 called the "first polar" of "C" with respect to "Q". If "P"=("p", "q", "r") is a non-singular point on the curve "C" then the equation of the tangent at "P" is formula_1 In particular, "P" is on the intersection of "C" and its first polar with respect to "Q" if and only if "Q" is on the tangent to "C" at "P". For a double point of "C", the partial derivatives of "f" are all 0 so the first polar contains these points as well. Class of a curve. The "class" of "C" may be defined as the number of tangents that may be drawn to "C" from a point not on "C" (counting multiplicities and including imaginary tangents). Each of these tangents touches "C" at one of the points of intersection of "C" and the first polar, and by Bézout's theorem there are at most "n"("n"−1) of these. This puts an upper bound of "n"("n"−1) on the class of a curve of degree "n". The class may be computed exactly by counting the number and type of singular points on "C" (see Plücker formula). Higher polars. The "p"-th polar of "C" for a natural number "p" is defined as Δ"Q""p""f"("x, y, z") = 0. This is a curve of degree "n"−"p". When "p" is "n"−1 the "p"-th polar is a line called the "polar line" of "C" with respect to "Q". Similarly, when "p" is "n"−2 the curve is called the "polar conic" of "C". Using Taylor series in several variables and exploiting homogeneity, "f"(λ"a"+μ"p", λ"b"+μ"q", λ"c"+μ"r") can be expanded in two ways as formula_2 and formula_3 Comparing coefficients of λ"p"μ"n"−"p" shows that formula_4 In particular, the "p"-th polar of "C" with respect to "Q" is the locus of points "P" so that the ("n"−"p")-th polar of "C" with respect to "P" passes through "Q". Poles. If the polar line of "C" with respect to a point "Q" is a line "L", then "Q" is said to be a "pole" of "L". A given line has ("n"−1)2 poles (counting multiplicities etc.) where "n" is the degree of "C". To see this, pick two points "P" and "Q" on "L". The locus of points whose polar lines pass through "P" is the first polar of "P" and this is a curve of degree "n"−"1". Similarly, the locus of points whose polar lines pass through "Q" is the first polar of "Q" and this is also a curve of degree "n"−"1". The polar line of a point is "L" if and only if it contains both "P" and "Q", so the poles of "L" are exactly the points of intersection of the two first polars. By Bézout's theorem these curves have ("n"−1)2 points of intersection and these are the poles of "L". The Hessian. For a given point "Q"=("a", "b", "c"), the polar conic is the locus of points "P" so that "Q" is on the second polar of "P". In other words, the equation of the polar conic is formula_5 The conic is degenerate if and only if the determinant of the Hessian of "f", formula_6, vanishes. 
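These constructions are straightforward to check on a concrete example with a computer algebra system. The sketch below is illustrative only (the choice of cubic and the use of SymPy are assumptions, not part of the article): it computes the first polar and polar line of a cubic, verifies the reciprocity relation formula_4 for "p" = 1, and confirms that the matrix of the polar conic is the Hessian of "f" evaluated at "Q", so the conic degenerates exactly where the Hessian determinant vanishes.
<syntaxhighlight lang="python">
import sympy as sp

x, y, z, a, b, c = sp.symbols('x y z a b c')
f = x**3 + y**3 + z**3 - 3*x*y*z           # a plane cubic, so n = 3

def delta(g, pt):
    """The operator Delta_Q = a*d/dx + b*d/dy + c*d/dz for Q = pt."""
    qa, qb, qc = pt
    return sp.expand(qa*g.diff(x) + qb*g.diff(y) + qc*g.diff(z))

Q = (a, b, c)
first_polar = delta(f, Q)                   # degree n - 1 = 2 in (x, y, z)
polar_line = delta(first_polar, Q)          # p = n - 1 = 2: degree 1
assert sp.Poly(first_polar, x, y, z).total_degree() == 2
assert sp.Poly(polar_line, x, y, z).total_degree() == 1

# Reciprocity: (1/p!) Delta_Q^p f (P) = (1/(n-p)!) Delta_P^(n-p) f (Q), checked for p = 1.
pp, qq, rr = sp.symbols('p q r')
P = (pp, qq, rr)
lhs = first_polar.subs({x: pp, y: qq, z: rr})
rhs = sp.Rational(1, 2) * delta(delta(f, P), P).subs({x: a, y: b, z: c})
assert sp.expand(lhs - rhs) == 0

# Polar conic of Q: its matrix is the Hessian of f evaluated at Q, so the conic
# is degenerate exactly where |H(f)| (a polynomial of degree 3(n-2) = 3) vanishes.
V = sp.Matrix([x, y, z])
H = sp.hessian(f, (x, y, z))
polar_conic = sp.expand((V.T * H.subs({x: a, y: b, z: c}) * V)[0])
assert sp.Poly(H.det(), x, y, z).total_degree() == 3
</syntaxhighlight>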
Therefore, the equation |"H"("f")|=0 defines a curve, the locus of points whose polar conics are degenerate, of degree 3("n"−"2") called the "Hessian curve" of "C". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\Delta_Q = a{\\partial\\over\\partial x}+b{\\partial\\over\\partial y}+c{\\partial\\over\\partial z}." }, { "math_id": 1, "text": "x{\\partial f\\over\\partial x}(p, q, r)+y{\\partial f\\over\\partial y}(p, q, r)+z{\\partial f\\over\\partial z}(p, q, r)=0." }, { "math_id": 2, "text": "\\mu^nf(p, q, r) + \\lambda\\mu^{n-1}\\Delta_Q f(p, q, r) + \\frac{1}{2}\\lambda^2\\mu^{n-2}\\Delta_Q^2 f(p, q, r)+\\dots" }, { "math_id": 3, "text": "\\lambda^nf(a, b, c) + \\mu\\lambda^{n-1}\\Delta_P f(a, b, c) + \\frac{1}{2}\\mu^2\\lambda^{n-2}\\Delta_P^2 f(a, b, c)+\\dots ." }, { "math_id": 4, "text": "\\frac{1}{p!}\\Delta_Q^p f(p, q, r)=\\frac{1}{(n-p)!}\\Delta_P^{n-p} f(a, b, c)." }, { "math_id": 5, "text": "\\Delta_{(x, y, z)}^2 f(a, b, c)=x^2{\\partial^2 f\\over\\partial x^2}(a, b, c)+2xy{\\partial^2 f\\over\\partial x\\partial y}(a, b, c)+\\dots=0." }, { "math_id": 6, "text": "H(f) = \\begin{bmatrix}\n\\frac{\\partial^2 f}{\\partial x^2} & \\frac{\\partial^2 f}{\\partial x\\,\\partial y} & \\frac{\\partial^2 f}{\\partial x\\,\\partial z} \\\\ \\\\\n\\frac{\\partial^2 f}{\\partial y\\,\\partial x} & \\frac{\\partial^2 f}{\\partial y^2} & \\frac{\\partial^2 f}{\\partial y\\,\\partial z} \\\\ \\\\\n\\frac{\\partial^2 f}{\\partial z\\,\\partial x} & \\frac{\\partial^2 f}{\\partial z\\,\\partial y} & \\frac{\\partial^2 f}{\\partial z^2}\n\\end{bmatrix}," } ]
https://en.wikipedia.org/wiki?curid=6769796
6769823
Electrostatic force microscope
Electrostatic force microscopy (EFM) is a type of dynamic non-contact atomic force microscopy where the electrostatic force is probed. ("Dynamic" here means that the cantilever is oscillating and does not make contact with the sample). This force arises due to the attraction or repulsion of separated charges. It is a long-range force and can be detected 100 nm or more from the sample. Force measurement. For example, consider a conductive cantilever tip and sample which are separated by a distance "z", usually in a vacuum. A bias voltage between tip and sample is applied by an external battery, forming a capacitor, "C", between the two. The capacitance of the system depends on the geometry of the tip and sample. The total energy stored in that capacitor is "U = C⋅ΔV2/2". The work done by the battery to maintain a constant voltage, "ΔV", between the capacitor plates (tip and sample) is "-2U". By definition, taking the negative gradient of the total energy "Utotal = -U" gives the force. The "z" component of the force (the force along the axis connecting the tip and sample) is thus: formula_0. Since "∂C⁄∂z &lt; 0" this force is always attractive. The electrostatic force can be probed by changing the voltage, and that force is parabolic with respect to the voltage. Note that "ΔV" is not simply the voltage difference between the tip and sample. Since the tip and sample are often not the same material, and furthermore can be subject to trapped charges, debris, etc., there is a difference between the work functions of the two. This difference, when expressed in terms of a voltage, is called the contact potential difference, "VCPD". This causes the apex of the parabola to rest at "ΔV = Vtip − Vsample − VCPD = 0". Typically, the value of "VCPD" is on the order of a few hundred millivolts. Forces as small as piconewtons can routinely be detected with this method. Non-contact atomic force microscopy. A common form of electric force microscopy involves a noncontact AFM mode of operation. In this mode the cantilever is oscillated at one of its resonant frequencies and the AFM tip is held such that it only senses the long-range electrostatic forces without entering the repulsive contact regime. In this non-contact regime, the electric force gradient causes a shift in the resonance frequency of the cantilever. EFM images can be created by measuring the oscillation amplitude, phase and/or frequency shift of the cantilever in response to the electrostatic force gradient. Immersion. With an electrostatic force microscope, like the atomic force microscope it is based on, the sample can be immersed in non-conductive liquid only, because conductive liquids hinder the establishment of the electrical potential difference that causes the detected electrostatic force.
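For a rough sense of the magnitudes involved, the force law above can be evaluated with an idealized parallel-plate model of the tip apex, C(z) = ε0A/z; the apex radius, separation and bias values below are illustrative assumptions, not parameters of any particular instrument.
<syntaxhighlight lang="python">
import math

eps0 = 8.854e-12            # vacuum permittivity, F/m
tip_radius = 20e-9          # assumed effective apex radius, m
area = math.pi * tip_radius**2
z = 10e-9                   # assumed tip-sample separation, m

def electrostatic_force(delta_v):
    """F_z = (1/2) * dC/dz * dV**2 with dC/dz = -eps0 * area / z**2 for C = eps0*area/z."""
    dC_dz = -eps0 * area / z**2
    return 0.5 * dC_dz * delta_v**2

for dv in (0.1, 0.5, 1.0):  # effective bias dV = Vtip - Vsample - VCPD, in volts
    print(f"dV = {dv:3.1f} V  ->  F = {electrostatic_force(dv) * 1e12:7.2f} pN")
</syntaxhighlight>
Even with these crude numbers the force comes out in the tens of piconewtons for a 1 V bias (the negative sign indicating attraction toward the sample) and scales quadratically with ΔV, consistent with the parabolic voltage dependence described above.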
[ { "math_id": 0, "text": "F_{electrostatic} = \\frac{1}{2} \\frac{\\partial C}{\\partial z} \\Delta V^2 " } ]
https://en.wikipedia.org/wiki?curid=6769823
67698348
Pagh's problem
Algorithm for set intersection Pagh's problem is a data structure problem, named after Rasmus Pagh, that is often used when studying lower bounds in computer science. Mihai Pătrașcu was the first to give lower bounds for the problem. In 2021 it was shown that, given popular conjectures, the naive linear-time algorithm is optimal. Definition. We are given as inputs formula_0 subsets formula_1 over a universe formula_2. We must accept updates of the following kind: given a pointer to two subsets formula_3 and formula_4, create a new subset formula_5. After each update, we must output whether the new subset is empty or not. A sketch of the naive approach is given below. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
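A minimal sketch of the naive approach referred to above (class and method names are purely illustrative): every subset is stored explicitly, and each update intersects two stored sets in time linear in their size, appends the result, and reports whether it is empty.
<syntaxhighlight lang="python">
class NaivePagh:
    """Store every subset explicitly; each update costs time linear in the set sizes."""

    def __init__(self, initial_sets):
        self.sets = [set(s) for s in initial_sets]

    def update(self, i, j):
        """Create the intersection of the i-th and j-th stored subsets,
        store it as a new subset, and report whether it is empty."""
        new = self.sets[i] & self.sets[j]
        self.sets.append(new)
        return len(new) == 0

ds = NaivePagh([{1, 2, 3}, {2, 3, 4}, {5}])
print(ds.update(0, 1))   # False: the new subset {2, 3} is non-empty
print(ds.update(2, 3))   # True: {5} intersected with {2, 3} is empty
</syntaxhighlight>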
[ { "math_id": 0, "text": "k" }, { "math_id": 1, "text": "X_1, X_2, \\dots, X_k" }, { "math_id": 2, "text": "U=\\{1,\\dots,k\\}" }, { "math_id": 3, "text": "X_1" }, { "math_id": 4, "text": "X_2" }, { "math_id": 5, "text": "X_1\\cap X_2" } ]
https://en.wikipedia.org/wiki?curid=67698348
6769882
Linear circuit
Electronic circuits obeying the superposition principle A linear circuit is an electronic circuit which obeys the superposition principle. This means that the output of the circuit "F(x)" when a linear combination of signals "ax1(t) + bx2(t)" is applied to it is equal to the linear combination of the outputs due to the signals "x1(t)" and "x2(t)" applied separately: formula_0 It is called a linear circuit because the output voltage and current of such a circuit are linear functions of its input voltage and current. This kind of linearity is not the same as that of straight-line graphs. In the common case of a circuit in which the components' values are constant and don't change with time, an alternate definition of linearity is that when a sinusoidal input voltage or current of frequency "f" is applied, any steady-state output of the circuit (the current through any component, or the voltage between any two points) is also sinusoidal with frequency "f". A linear circuit with constant component values is called "linear time-invariant" (LTI). Informally, a linear circuit is one in which the electronic components' values (such as resistance, capacitance, inductance, gain, etc.) do not change with the level of voltage or current in the circuit. Linear circuits are important because they can amplify and process electronic signals without distortion. An example of an electronic device that uses linear circuits is a sound system. Alternate definition. The superposition principle, the defining equation of linearity, is equivalent to two properties, additivity and homogeneity, which are sometimes used as an alternate definition: formula_1 (additivity) formula_2 (homogeneity) That is, a linear circuit is a circuit in which (1) the output when a sum of two signals is applied is equal to the sum of the outputs when the two signals are applied separately, and (2) scaling the input signal formula_3 by a factor formula_4 scales the output signal formula_5 by the same factor. Linear and nonlinear components. A linear circuit is one that has no nonlinear electronic components in it. Examples of linear circuits are amplifiers, differentiators, and integrators, linear electronic filters, or any circuit composed exclusively of "ideal" resistors, capacitors, inductors, op-amps (in the "non-saturated" region), and other "linear" circuit elements. Some examples of nonlinear electronic components are: diodes, transistors, and iron core inductors and transformers when the core is saturated. Some examples of circuits that operate in a nonlinear way are mixers, modulators, rectifiers, radio receiver detectors and digital logic circuits. Significance. Linear time-invariant circuits are important because they can process analog signals without introducing intermodulation distortion. This means that separate frequencies in the signal stay separate and do not mix, creating new frequencies (heterodynes). They are also easier to understand and analyze. Because they obey the superposition principle, linear circuits are governed by linear differential equations, and can be analyzed with powerful mathematical frequency domain techniques, including Fourier analysis and the Laplace transform. These also give an intuitive understanding of the qualitative behavior of the circuit, characterizing it using terms such as gain, phase shift, resonant frequency, bandwidth, Q factor, poles, and zeros. The analysis of a linear circuit can often be done by hand using a scientific calculator. In contrast, nonlinear circuits usually do not have closed-form solutions. 
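The superposition property is also easy to test numerically. The sketch below is illustrative only (the discrete-time RC model, the clipping limit and the test signals are all assumed choices, not taken from any reference): it compares a linear first-order low-pass response with a nonlinear hard clipper, and shows that the first satisfies F(ax1 + bx2) = aF(x1) + bF(x2) to floating-point precision while the second does not.
<syntaxhighlight lang="python">
import numpy as np

def rc_lowpass(x, alpha=0.2):
    """Linear: y[n] = y[n-1] + alpha * (x[n] - y[n-1]), a discretized RC low-pass."""
    y = np.zeros_like(x)
    for n in range(1, len(x)):
        y[n] = y[n - 1] + alpha * (x[n] - y[n - 1])
    return y

def clipper(x, limit=1.0):
    """Nonlinear: the output saturates at +/- limit, like a supply-rail limit."""
    return np.clip(x, -limit, limit)

t = np.linspace(0.0, 1.0, 500)
x1 = np.sin(2 * np.pi * 5 * t)
x2 = 0.8 * np.sin(2 * np.pi * 13 * t)
a, b = 2.0, -1.5

for name, F in (("RC low-pass", rc_lowpass), ("hard clipper", clipper)):
    error = np.max(np.abs(F(a * x1 + b * x2) - (a * F(x1) + b * F(x2))))
    print(f"{name:12s}: max |F(a*x1 + b*x2) - a*F(x1) - b*F(x2)| = {error:.2e}")
</syntaxhighlight>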
Nonlinear circuits must instead be analyzed using approximate numerical methods, with electronic circuit simulation programs such as SPICE, if accurate results are desired. The behavior of such linear circuit elements as resistors, capacitors, and inductors can be specified by a single number (resistance, capacitance, inductance, respectively). In contrast, a nonlinear element's behavior is specified by its detailed transfer function, which may be given by a curved line on a graph. So specifying the characteristics of a nonlinear circuit requires more information than is needed for a linear circuit. "Linear" circuits and systems form a separate category within electronic manufacturing. Manufacturers of transistors and integrated circuits often divide their product lines into 'linear' and 'digital' lines. "Linear" here means "analog"; the linear line includes integrated circuits designed to process signals linearly, such as op-amps, audio amplifiers, and active filters, as well as a variety of signal processing circuits that implement nonlinear analog functions such as logarithmic amplifiers, analog multipliers, and peak detectors. Small signal approximation. Nonlinear elements such as transistors tend to behave linearly when small AC signals are applied to them. So in analyzing many circuits where the signal levels are small, for example those in TV and radio receivers, nonlinear elements can be replaced with a linear small-signal model, allowing linear analysis techniques to be used. Conversely, all circuit elements, even "linear" elements, show nonlinearity as the signal level is increased. If nothing else, the power supply voltage to the circuit usually puts a limit on the magnitude of voltage output from a circuit. Above that limit, the output ceases to scale in magnitude with the input, failing the definition of linearity. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "F(ax_1 + bx_2) = aF(x_1) + bF(x_2)\\," }, { "math_id": 1, "text": "F(x_1 + x_2) = F(x_1) + F(x_2)\\qquad" }, { "math_id": 2, "text": "F(hx) = hF(x)\\qquad\\qquad\\qquad\\qquad" }, { "math_id": 3, "text": "x(t)" }, { "math_id": 4, "text": "h" }, { "math_id": 5, "text": "F(x(t))" } ]
https://en.wikipedia.org/wiki?curid=6769882
6769980
BRST quantization
Formulation to quantize gauge field theories in physics In theoretical physics, the BRST formalism, or BRST quantization (where the "BRST" refers to the last names of Carlo Becchi, Alain Rouet, Raymond Stora and Igor Tyutin) denotes a relatively rigorous mathematical approach to quantizing a field theory with a gauge symmetry. Quantization rules in earlier quantum field theory (QFT) frameworks resembled "prescriptions" or "heuristics" more than proofs, especially in non-abelian QFT, where the use of "ghost fields" with superficially bizarre properties is almost unavoidable for technical reasons related to renormalization and anomaly cancellation. The BRST global supersymmetry introduced in the mid-1970s was quickly understood to rationalize the introduction of these Faddeev–Popov ghosts and their exclusion from "physical" asymptotic states when performing QFT calculations. Crucially, this symmetry of the path integral is preserved in loop order, and thus prevents introduction of counterterms which might spoil renormalizability of gauge theories. Work by other authors a few years later related the BRST operator to the existence of a rigorous alternative to path integrals when quantizing a gauge theory. Only in the late 1980s, when QFT was reformulated in fiber bundle language for application to problems in the topology of low-dimensional manifolds (topological quantum field theory), did it become apparent that the BRST "transformation" is fundamentally geometrical in character. In this light, "BRST quantization" becomes more than an alternate way to arrive at anomaly-cancelling ghosts. It is a different perspective on what the ghost fields represent, why the Faddeev–Popov method works, and how it is related to the use of Hamiltonian mechanics to construct a perturbative framework. The relationship between gauge invariance and "BRST invariance" forces the choice of a Hamiltonian system whose states are composed of "particles" according to the rules familiar from the canonical quantization formalism. This esoteric consistency condition therefore comes quite close to explaining how quanta and fermions arise in physics to begin with. In certain cases, notably gravity and supergravity, BRST must be superseded by a more general formalism, the Batalin–Vilkovisky formalism. Technical summary. BRST quantization is a differential geometric approach to performing consistent, anomaly-free perturbative calculations in a non-abelian gauge theory. The analytical form of the BRST "transformation" and its relevance to renormalization and anomaly cancellation were described by Carlo Maria Becchi, Alain Rouet, and Raymond Stora in a series of papers culminating in the 1976 "Renormalization of gauge theories". The equivalent transformation and many of its properties were independently discovered by Igor Viktorovich Tyutin. Its significance for rigorous canonical quantization of a Yang–Mills theory and its correct application to the Fock space of instantaneous field configurations were elucidated by Taichiro Kugo and Izumi Ojima. Later work by many authors, notably Thomas Schücker and Edward Witten, has clarified the geometric significance of the BRST operator and related fields and emphasized its importance to topological quantum field theory and string theory. In the BRST approach, one selects a perturbation-friendly gauge fixing procedure for the action principle of a gauge theory using the differential geometry of the gauge bundle on which the field theory lives. 
One then quantizes the theory to obtain a Hamiltonian system in the interaction picture in such a way that the "unphysical" fields introduced by the gauge fixing procedure resolve gauge anomalies without appearing in the asymptotic states of the theory. The result is a set of Feynman rules for use in a Dyson series perturbative expansion of the S-matrix which guarantee that it is unitary and renormalizable at each loop order—in short, a coherent approximation technique for making physical predictions about the results of scattering experiments. Classical BRST. This is related to a supersymplectic manifold where pure operators are graded by integral ghost numbers and we have a BRST cohomology. Gauge transformations in QFT. From a practical perspective, a quantum field theory consists of an action principle and a set of procedures for performing perturbative calculations. There are other kinds of "sanity checks" that can be performed on a quantum field theory to determine whether it fits qualitative phenomena such as quark confinement and asymptotic freedom. However, most of the predictive successes of quantum field theory, from quantum electrodynamics to the present day, have been quantified by matching S-matrix calculations against the results of scattering experiments. In the early days of QFT, one would have had to say that the quantization and renormalization prescriptions were as much part of the model as the Lagrangian density, especially when they relied on the powerful but mathematically ill-defined path integral formalism. It quickly became clear that QED was almost "magical" in its relative tractability, and that most of the ways that one might imagine extending it would not produce rational calculations. However, one class of field theories remained promising: gauge theories, in which the objects in the theory represent equivalence classes of physically indistinguishable field configurations, any two of which are related by a gauge transformation. This generalizes the QED idea of a local change of phase to a more complicated Lie group. QED itself is a gauge theory, as is general relativity, although the latter has proven resistant to quantization so far, for reasons related to renormalization. Another class of gauge theories with a non-Abelian gauge group, beginning with Yang–Mills theory, became amenable to quantization in the late 1960s and early 1970s, largely due to the work of Ludwig D. Faddeev, Victor Popov, Bryce DeWitt, and Gerardus 't Hooft. However, they remained very difficult to work with until the introduction of the BRST method. The BRST method provided the calculation techniques and renormalizability proofs needed to extract accurate results from both "unbroken" Yang–Mills theories and those in which the Higgs mechanism leads to spontaneous symmetry breaking. Representatives of these two types of Yang–Mills systems—quantum chromodynamics and electroweak theory—appear in the Standard Model of particle physics. It has proven rather more difficult to prove the "existence" of non-Abelian quantum field theory in a rigorous sense than to obtain accurate predictions using semi-heuristic calculation schemes. 
This is because analyzing a quantum field theory requires two mathematically interlocked perspectives: a Lagrangian system based on the action functional, composed of "fields" with distinct values at each point in spacetime and local operators which act on them, and a Hamiltonian system in the Dirac picture, composed of "states" which characterize the entire system at a given time and field operators which act on them. What makes this so difficult in a gauge theory is that the objects of the theory are not really local fields on spacetime; they are right-invariant local fields on the principal gauge bundle, and different local sections through a portion of the gauge bundle, related by "passive" transformations, produce different Dirac pictures. What is more, a description of the system as a whole in terms of a set of fields contains many redundant degrees of freedom; the distinct configurations of the theory are equivalence classes of field configurations, so that two descriptions which are related to one another by a gauge transformation are also really the same physical configuration. The "solutions" of a quantized gauge theory exist not in a straightforward space of fields with values at every point in spacetime but in a quotient space (or cohomology) whose elements are equivalence classes of field configurations. Hiding in the BRST formalism is a system for parameterizing the variations associated with all possible active gauge transformations and correctly accounting for their physical irrelevance during the conversion of a Lagrangian system to a Hamiltonian system. Gauge fixing and perturbation theory. The principle of gauge invariance is essential to constructing a workable quantum field theory. But it is generally not feasible to perform a perturbative calculation in a gauge theory without first "fixing the gauge"—adding terms to the Lagrangian density of the action principle which "break the gauge symmetry" to suppress these "unphysical" degrees of freedom. The idea of gauge fixing goes back to the Lorenz gauge approach to electromagnetism, which suppresses most of the excess degrees of freedom in the four-potential while retaining manifest Lorentz invariance. The Lorenz gauge is a great simplification relative to Maxwell's field-strength approach to classical electrodynamics, and illustrates why it is useful to deal with excess degrees of freedom in the representation of the objects in a theory at the Lagrangian stage, before passing over to Hamiltonian mechanics via the Legendre transformation. The Hamiltonian density is related to the Lie derivative of the Lagrangian density with respect to a unit timelike horizontal vector field on the gauge bundle. In a quantum mechanical context it is conventionally rescaled by a factor formula_0. Integrating it by parts over a spacelike cross section recovers the form of the integrand familiar from canonical quantization. Because the definition of the Hamiltonian involves a unit time vector field on the base space, a horizontal lift to the bundle space, and a spacelike surface "normal" (in the Minkowski metric) to the unit time vector field at each point on the base manifold, it is dependent both on the connection and the choice of Lorentz frame, and is far from being globally defined. But it is an essential ingredient in the perturbative framework of quantum field theory, into which the quantized Hamiltonian enters via the Dyson series. 
For perturbative purposes, we gather the configuration of all the fields of our theory on an entire three-dimensional horizontal spacelike cross section of "P" into one object (a Fock state), and then describe the "evolution" of this state over time using the interaction picture. The Fock space is spanned by the multi-particle eigenstates of the "unperturbed" or "non-interaction" portion formula_1 of the Hamiltonian formula_2. Hence the instantaneous description of any Fock state is a complex-amplitude-weighted sum of eigenstates of formula_1. In the interaction picture, we relate Fock states at different times by prescribing that each eigenstate of the unperturbed Hamiltonian experiences a constant rate of phase rotation proportional to its energy (the corresponding eigenvalue of the unperturbed Hamiltonian). Hence, in the zero-order approximation, the set of weights characterizing a Fock state does not change over time, but the corresponding field configuration does. In higher approximations, the weights also change; collider experiments in high-energy physics amount to measurements of the rate of change in these weights (or rather integrals of them over distributions representing uncertainty in the initial and final conditions of a scattering event). The Dyson series captures the effect of the discrepancy between formula_1 and the true Hamiltonian formula_2, in the form of a power series in the coupling constant "g"; it is the principal tool for making quantitative predictions from a quantum field theory. To use the Dyson series to calculate anything, one needs more than a gauge-invariant Lagrangian density; one also needs the quantization and gauge fixing prescriptions that enter into the Feynman rules of the theory. The Dyson series produces infinite integrals of various kinds when applied to the Hamiltonian of a particular QFT. This is partly because all usable quantum field theories to date must be considered effective field theories, describing only interactions on a certain range of energy scales that we can experimentally probe and therefore vulnerable to ultraviolet divergences. These are tolerable as long as they can be handled via standard techniques of renormalization; they are not so tolerable when they result in an infinite series of infinite renormalizations or, worse, in an obviously unphysical prediction such as an uncancelled gauge anomaly. There is a deep relationship between renormalizability and gauge invariance, which is easily lost in the course of attempts to obtain tractable Feynman rules by fixing the gauge. Pre-BRST approaches to gauge fixing. The traditional gauge fixing prescriptions of continuum electrodynamics select a unique representative from each gauge-transformation-related equivalence class using a constraint equation such as the Lorenz gauge formula_3. This sort of prescription can be applied to an Abelian gauge theory such as QED, although it results in some difficulty in explaining why the Ward identities of the classical theory carry over to the quantum theory—in other words, why Feynman diagrams containing internal longitudinally polarized virtual photons do not contribute to S-matrix calculations. This approach also does not generalize well to non-Abelian gauge groups such as the SU(2)xU(1) of Yang–Mills electroweak theory and the SU(3) of quantum chromodynamics. 
It suffers from Gribov ambiguities and from the difficulty of defining a gauge fixing constraint that is in some sense "orthogonal" to physically significant changes in the field configuration. More sophisticated approaches do not attempt to apply a delta function constraint to the gauge transformation degrees of freedom. Instead of "fixing" the gauge to a particular "constraint surface" in configuration space, one can break the gauge freedom with an additional, non-gauge-invariant term added to the Lagrangian density. In order to reproduce the successes of gauge fixing, this term is chosen to be minimal for the choice of gauge that corresponds to the desired constraint and to depend quadratically on the deviation of the gauge from the constraint surface. By the stationary phase approximation on which the Feynman path integral is based, the dominant contribution to perturbative calculations will come from field configurations in the neighborhood of the constraint surface. The perturbative expansion associated with this Lagrangian, using the method of functional quantization, is generally referred to as the "R"ξ gauge. It reduces in the case of an Abelian U(1) gauge to the same set of Feynman rules that one obtains in the method of canonical quantization. But there is an important difference: the broken gauge freedom appears in the functional integral as an additional factor in the overall normalization. This factor can only be pulled out of the perturbative expansion (and ignored) when the contribution to the Lagrangian of a perturbation along the gauge degrees of freedom is independent of the particular "physical" field configuration. This is the condition that fails to hold for non-Abelian gauge groups. If one ignores the problem and attempts to use the Feynman rules obtained from "naive" functional quantization, one finds that one's calculations contain unremovable anomalies. The problem of perturbative calculations in QCD was solved by introducing additional fields known as Faddeev–Popov ghosts, whose contribution to the gauge-fixed Lagrangian offsets the anomaly introduced by the coupling of "physical" and "unphysical" perturbations of the non-Abelian gauge field. From the functional quantization perspective, the "unphysical" perturbations of the field configuration (the gauge transformations) form a subspace of the space of all (infinitesimal) perturbations; in the non-Abelian case, the embedding of this subspace in the larger space depends on the configuration around which the perturbation takes place. The ghost term in the Lagrangian represents the functional determinant of the Jacobian of this embedding, and the properties of the ghost field are dictated by the exponent desired on the determinant in order to correct the functional measure on the remaining "physical" perturbation axes. Gauge bundles and the vertical ideal. Intuition for the BRST formalism is provided by describing it geometrically, in the setting of fiber bundles. This geometric setting contrasts with and illuminates the older traditional picture, that of algebra-valued fields on Minkowski space, provided in (earlier) quantum field theory texts. In this setting, a gauge field can be understood in one of two different ways. In one, the gauge field is a local section of the fiber bundle. In the other, the gauge field is little more than the connection between adjacent fibers, defined on the entire length of the fiber. Corresponding to these two understandings, there are two ways to look at a gauge transformation. 
In the first case, a gauge transformation is just a change of local section. In general relativity, this is referred to as a passive transformation. In the second view, a gauge transformation is a change of coordinates along the entire fiber (arising from multiplication by a group element "g") which induces a vertical diffeomorphism of the principal bundle. This second viewpoint provides the geometric foundation for the BRST method. Unlike a passive transformation, it is well-defined globally on a principal bundle, with any structure group over an arbitrary manifold. That is, the BRST formalism can be developed to describe the quantization of "any" principle bundle on any manifold. For concreteness and relevance to conventional QFT, much of this article sticks to the case of a principal gauge bundle with compact fiber over 4-dimensional Minkowski space. A principal gauge bundle "P" over a 4-manifold "M" is locally isomorphic to "U" × "F", where "U" ⊂ R4 and the fiber "F" is isomorphic to a Lie group "G", the gauge group of the field theory (this is an isomorphism of manifold structures, not of group structures; there is no special surface in "P" corresponding to 1 in "G", so it is more proper to say that the fiber "F" is a "G"-torsor). The most basic property as a fiber bundle is the "projection to the base space" π : "P" → "M", which defines the vertical directions on "P" (those lying within the fiber π−1("p") over each point "p" in "M"). As a gauge bundle it has a left action of "G" on "P" which respects the fiber structure, and as a principal bundle it also has a right action of "G" on "P" which also respects the fiber structure and commutes with the left action. The left action of the structure group "G" on "P" corresponds to a change of coordinate system on an individual fiber. The (global) right action "Rg" : "P" → "P" for a fixed "g" in "G" corresponds to an actual automorphism of each fiber and hence to a map of "P" to itself. In order for "P" to qualify as a principal "G"-bundle, the global right action of each "g" in "G" must be an automorphism with respect to the manifold structure of "P" with a smooth dependence on "g", that is, a diffeomorphism "P" × "G" → "P". The existence of the global right action of the structure group picks out a special class of right invariant geometric objects on "P"—those which do not change when they are pulled back along "Rg" for all values of "g" in "G". The most important right invariant objects on a principal bundle are the right invariant vector fields, which form an ideal formula_4 of the Lie algebra of infinitesimal diffeomorphisms on "P". Those vector fields on "P" which are both right invariant and vertical form an ideal formula_5 of formula_4, which has a relationship to the entire bundle "P" analogous to that of the Lie algebra formula_6 of the gauge group "G" to the individual "G"-torsor fiber "F". The "field theory" of interest is defined in terms of a set of "fields" (smooth maps into various vector spaces) defined on a principal gauge bundle "P". Different fields carry different representations of the gauge group "G", and perhaps of other symmetry groups of the manifold such as the Poincaré group. One may define the space formula_7 of local polynomials in these fields and their derivatives. The fundamental Lagrangian density of one's theory is presumed to lie in the subspace formula_8 of polynomials which are real-valued and invariant under any unbroken non-gauge symmetry groups. 
It is also presumed to be invariant not only under the left action (passive coordinate transformations) and the global right action of the gauge group but also under local gauge transformations—pullback along the infinitesimal diffeomorphism associated with an arbitrary choice of right-invariant vertical vector field formula_9. Identifying local gauge transformations with a particular subspace of vector fields on the manifold "P" provides a better framework for dealing with infinite-dimensional infinitesimals: differential geometry and the exterior calculus. The change in a scalar field under pullback along an infinitesimal automorphism is captured in the Lie derivative, and the notion of retaining only the term linear in the vector field is implemented by separating it into the interior derivative and the exterior derivative. In this context, "forms" and the exterior calculus refer exclusively to degrees of freedom which are dual to vector fields "on the gauge bundle", not to degrees of freedom expressed in (Greek) tensor indices on the base manifold or (Roman) matrix indices on the gauge algebra. The Lie derivative on a manifold is a globally well-defined operation in a way that the partial derivative is not. The proper generalization of Clairaut's theorem to the non-trivial manifold structure of "P" is given by the Lie bracket of vector fields and the nilpotence of the exterior derivative. This provides an essential tool for computation: the generalized Stokes theorem, which allows integration by parts and then elimination of the surface term, as long as the integrand drops off rapidly enough in directions where there is an open boundary. (This is not a trivial assumption, but can be dealt with by renormalization techniques such as dimensional regularization as long as the surface term can be made gauge invariant.) The BRST operator and asymptotic Fock space. Central to the BRST formalism is the BRST operator formula_10, defined as the tangent to the Ward operator formula_11. The Ward operator on each field may be identified (up to a sign convention) with the Lie derivative along the vertical vector field associated with the local gauge transformation formula_12 appearing as a parameter of the Ward operator. The BRST operator formula_10 on fields resembles the exterior derivative on the gauge bundle, or rather to its restriction to a reduced space of alternating forms which are defined only on vertical vector fields. The Ward and BRST operators are related (up to a phase convention introduced by Kugo and Ojima, whose notation we will follow in the treatment of state vectors below) by formula_13. Here, formula_14 is a zero-form (scalar). The space formula_15 is the space of real-valued polynomials in the fields and their derivatives that are invariant under any (unbroken) non-gauge symmetry groups. Like the exterior derivative, the BRST operator is nilpotent of degree 2, i. e., formula_16. The variation of any "BRST-exact form" formula_17 with respect to a local gauge transformation formula_12 is given by the interior derivative formula_18 It is formula_19 Note that this is also exact. The Hamiltonian perturbative formalism is carried out not on the fiber bundle, but on a local section. In this formalism, adding a BRST-exact term to a gauge invariant Lagrangian density preserves the relation formula_20 This implies that there is a related operator formula_21 on the state space for which formula_22 That is, the BRST operator on Fock states is a conserved charge of the Hamiltonian system. 
This implies that the time evolution operator in a Dyson series calculation will not evolve a field configuration obeying formula_23 into a later configuration with formula_24 (or vice versa). The nilpotence of the BRST operator can be understood as saying that its image (the space of BRST exact forms) lies entirely within its kernel (the space of BRST closed forms). The "true" Lagrangian, presumed to be invariant under local gauge transformations, is in the kernel of the BRST operator but not in its image. This implies that the universe of initial and final conditions can be limited to asymptotic "states" or field configurations at timelike infinity, where the interaction Lagrangian is "turned off". These states lie in the kernel of formula_25 but as the construction is invariant, the scattering matrix remains unitary. BRST-closed and exact states are defined similarly to BRST-closed and exact fields; closed states are annihilated by formula_25 while exact states are those obtainable by applying formula_21 to some arbitrary field configuration. When defining the asymptotic states, the states that lie inside the image of formula_21 can also be suppressed, but the reasoning is a bit subtler. Having postulated that the "true" Lagrangian of the theory is gauge invariant, the true "states" of the Hamiltonian system are equivalence classes under local gauge transformation; in other words, two initial or final states in the Hamiltonian picture that differ only by a BRST-exact state are physically equivalent. However, the use of a BRST-exact gauge breaking prescription does not guarantee that the interaction Hamiltonian will preserve any particular subspace of closed field configurations that are orthogonal to the space of exact configurations. This is a crucial point, often mishandled in QFT textbooks. There is no "a priori" inner product on field configurations built into the action principle; such an inner product is constructed as part of the Hamiltonian perturbative apparatus. The quantization prescription in the interaction picture is to build a vector space of BRST-closed configurations at a particular time, such that this can be converted into a Fock space of intermediate states suitable for Hamiltonian perturbation. As is conventional for second quantization, the Fock space is provided with ladder operators for the energy-momentum eigenconfigurations (particles) of each field, complete with appropriate (anti-)commutation rules, as well as a positive semi-definite inner product. The inner product is required to be singular exclusively along directions that correspond to BRST-exact eigenstates of the unperturbed Hamiltonian. This ensures that any pair of BRST-closed Fock states can be freely chosen out of the two equivalence classes of asymptotic field configurations corresponding to particular initial and final eigenstates of the (unbroken) free-field Hamiltonian. The desired quantization prescriptions provide a "quotient" Fock space isomorphic to the BRST cohomology, in which each BRST-closed equivalence class of intermediate states (differing only by an exact state) is represented by exactly one state that contains no quanta of the BRST-exact fields. This is the appropriate Fock space for the "asymptotic" states of the theory. The singularity of the inner product along BRST-exact degrees of freedom ensures that the physical scattering matrix contains only physical fields. 
This is in contrast to the (naive, gauge-fixed) Lagrangian dynamics, in which unphysical particles are propagated to the asymptotic states. By working in the cohomology, each asymptotic state is guaranteed to have one (and only one) corresponding physical state (free of ghosts). The operator formula_21 is Hermitian and non-zero, yet its square is zero. This implies that the Fock space of all states prior to the cohomological reduction has an indefinite norm, and so is not a Hilbert space. This requires the introduction of a Krein space for the BRST-closed intermediate Fock states, with the time reversal operator playing the role of the "fundamental symmetry" relating the Lorentz-invariant and positive semi-definite inner products. The asymptotic state space is then the Hilbert space obtained by quotienting BRST-exact states out of the Krein space. To summarize: no field introduced as part of a BRST gauge fixing procedure will appear in asymptotic states of the gauge-fixed theory. However, this does not imply that these "unphysical" fields are absent in the intermediate states of a perturbative calculation! This is because perturbative calculations are done in the interaction picture. They implicitly involve initial and final states of the non-interaction Hamiltonian formula_1, gradually transformed into states of the full Hamiltonian in accordance with the adiabatic theorem by "turning on" the interaction Hamiltonian (the gauge coupling). The expansion of the Dyson series in terms of Feynman diagrams will include vertices that couple "physical" particles (those that can appear in asymptotic states of the free Hamiltonian) to "unphysical" particles (states of fields that live outside the kernel of formula_10 or inside the image of formula_10) and vertices that couple "unphysical" particles to one another. The Kugo–Ojima answer to unitarity questions. T. Kugo and I. Ojima are commonly credited with the discovery of the principal QCD color confinement criterion. Their role in obtaining a correct version of the BRST formalism in the Lagrangian framework seems to be less widely appreciated. It is enlightening to inspect their variant of the BRST transformation, which emphasizes the hermitian properties of the newly introduced fields, before proceeding from an entirely geometrical angle. The formula_6-valued gauge fixing conditions are taken to be formula_26 where formula_27 is a positive number determining the gauge. There are other possible gauge fixings, but they are outside of the present scope. The fields appearing in the Lagrangian are as follows: the field formula_32 is used to deal with the gauge transformations, whereas formula_33 and formula_34 deal with the gauge fixing. There actually are some subtleties associated with the gauge fixing due to Gribov ambiguities but they will not be covered here. The BRST Lagrangian density is formula_35 Here, formula_36 is the covariant derivative with respect to the gauge field (connection) formula_28. The Faddeev–Popov ghost field formula_32 has a geometrical interpretation as a version of the Maurer–Cartan form on formula_5, which relates each right-invariant vertical vector field formula_37 to its representation (up to a phase) as a formula_6-valued field. This field must enter into the formulas for infinitesimal gauge transformations on objects (such as fermions formula_38, gauge bosons formula_39, and the ghost formula_32 itself) which carry a non-trivial representation of the gauge group. 
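The nilpotency that underlies this construction can be checked very concretely on the ghost sector alone. The sketch below is an illustration, not part of the Kugo–Ojima construction itself: it assumes the standard ghost self-transformation s c^a = −(1/2) f^a_{bc} c^b c^c (the ghost term of the transformation rules written out below, up to sign and coupling conventions), takes the structure constants of su(2), represents products of anticommuting ghosts as sorted index tuples with a sign, and verifies that applying the BRST variation twice to each ghost gives zero; in general this statement is equivalent to the Jacobi identity of the structure constants.
<syntaxhighlight lang="python">
from fractions import Fraction

DIM = 3  # su(2): three generators, structure constants f^a_{bc} = epsilon_{abc}

def eps(i, j, k):
    """Levi-Civita symbol, used here as the su(2) structure constants."""
    if len({i, j, k}) < 3:
        return 0
    sign, seq = 1, [i, j, k]
    for m in range(3):
        for n in range(2 - m):
            if seq[n] > seq[n + 1]:
                seq[n], seq[n + 1] = seq[n + 1], seq[n]
                sign = -sign
    return sign

def gmul(m1, m2):
    """Product of two ghost monomials (tuples of indices): sorted tuple and sign,
    or (None, 0) if an anticommuting ghost is repeated."""
    seq = list(m1) + list(m2)
    if len(set(seq)) < len(seq):
        return None, 0
    sign = 1
    for i in range(len(seq)):
        for j in range(len(seq) - 1 - i):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    return tuple(seq), sign

def add(poly, mono, coeff):
    if coeff:
        poly[mono] = poly.get(mono, 0) + coeff
        if poly[mono] == 0:
            del poly[mono]

def s_ghost(a):
    """BRST variation of a single ghost: s c^a = -(1/2) f^a_{bc} c^b c^c."""
    out = {}
    for b in range(DIM):
        for c in range(b + 1, DIM):
            add(out, (b, c), -Fraction(eps(a, b, c)))  # the factor 1/2 is absorbed by b < c
    return out

def s(poly):
    """Extend s to ghost polynomials as an odd derivation (antiderivation)."""
    out = {}
    for mono, coeff in poly.items():
        for pos, a in enumerate(mono):
            parity = (-1) ** pos                      # s anticommutes past 'pos' ghosts
            for smono, scoeff in s_ghost(a).items():
                m1, s1 = gmul(mono[:pos], smono)
                if s1 == 0:
                    continue
                m2, s2 = gmul(m1, mono[pos + 1:])
                if s2 == 0:
                    continue
                add(out, m2, coeff * scoeff * parity * s1 * s2)
    return out

# Nilpotency on the ghost sector: s(s c^a) = 0 for every generator.
for a in range(DIM):
    assert s(s_ghost(a)) == {}
print("s^2 = 0 on the su(2) ghost sector")
</syntaxhighlight>
In the full theory the same operator also acts on the gauge, matter, anti-ghost and auxiliary fields, as the transformation rules below spell out.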
While the Lagrangian density is not itself BRST invariant, its integral over all of spacetime, the action, is. The transformation of the fields under an infinitesimal gauge transformation formula_12 is given by formula_40 Note that formula_41 denotes the Lie bracket, not the commutator. These may be written in an equivalent form, using the charge operator formula_21 instead of formula_12. The BRST charge operator formula_21 is defined as formula_42 where formula_43 are the infinitesimal generators of the Lie group, and formula_44 are its structure constants. Using this, the transformation is given as formula_45 The details of the matter sector formula_38 are unspecified, as is the form of the Ward operator on it; these are unimportant so long as the representation of the gauge algebra on the matter fields is consistent with their coupling to formula_46. The properties of the other fields are fundamentally analytical rather than geometric. The bias towards connections with formula_3 is gauge-dependent and has no particular geometrical significance. The anti-ghost formula_47 is nothing but a Lagrange multiplier for the gauge fixing term, and the properties of the scalar field formula_34 are entirely dictated by the relationship formula_48. These fields are all Hermitian in Kugo–Ojima conventions, but the parameter formula_12 is an anti-Hermitian "anti-commuting "c"-number". This results in some unnecessary awkwardness with regard to phases and passing infinitesimal parameters through operators; this can be resolved with a change of conventions. We already know, from the relation of the BRST operator to the exterior derivative and the Faddeev–Popov ghost to the Maurer–Cartan form, that the ghost formula_32 corresponds (up to a phase) to a formula_6-valued 1-form on formula_5. In order for integration of a term like formula_49 to be meaningful, the anti-ghost formula_50 must carry representations of these two Lie algebras—the vertical ideal formula_5 and the gauge algebra formula_6—dual to those carried by the ghost. In geometric terms, formula_50 must be fiberwise dual to formula_6 and one rank short of being a top form on formula_5. Likewise, the auxiliary field formula_34 must carry the same representation of formula_6 (up to a phase) as formula_50, as well as the representation of formula_5 dual to its trivial representation on formula_51 That is, formula_34 is a fiberwise formula_6-dual top form on formula_5. The one-particle states of the theory are discussed in the adiabatically decoupled limit "g" → 0. There are two kinds of quanta in the Fock space of the gauge-fixed Hamiltonian that lie entirely outside the kernel of the BRST operator: those of the Faddeev–Popov anti-ghost formula_50 and the forward polarized gauge boson. This is because no combination of fields containing formula_50 is annihilated by formula_10 and the Lagrangian has a gauge breaking term that is equal, up to a divergence, to formula_52 Likewise, there are two kinds of quanta that will lie entirely in the image of the BRST operator: those of the Faddeev–Popov ghost formula_32 and the scalar field formula_34, which is "eaten" by completing the square in the functional integral to become the backward polarized gauge boson. These are the four types of "unphysical" quanta which do not appear in the asymptotic states of a perturbative calculation. The anti-ghost is taken to be a Lorentz scalar for the sake of Poincaré invariance in formula_49. However, its (anti-)commutation law relative to formula_32, i.e. 
its quantization prescription, which ignores the spin–statistics theorem by giving Fermi–Dirac statistics to a spin-0 particle, will be given by the requirement that the inner product on our Fock space of asymptotic states be singular along directions corresponding to the raising and lowering operators of some combination of non-BRST-closed and BRST-exact fields. This last statement is the key to "BRST quantization", as opposed to mere "BRST symmetry" or "BRST transformation". Mathematical approach to BRST. This section applies only to classical gauge theories, i.e. those that can be described with first class constraints. The more general formalism is described using the Batalin–Vilkovisky formalism. The BRST construction applies when one has a Hamiltonian action of a gauge group formula_53 on a phase space formula_54. Let formula_55 be the Lie algebra of formula_53 and formula_56 a regular value of the moment map formula_57. Let formula_58. Assume the formula_53-action on formula_59 is free and proper, and consider the space formula_60 of formula_53-orbits on formula_61. The Hamiltonian mechanics of a gauge theory is described by formula_62 first class constraints formula_63 acting upon a symplectic space formula_54. formula_61 is the submanifold satisfying the first class constraints. The action of the gauge symmetry partitions formula_61 into gauge orbits. The symplectic reduction is the quotient of formula_61 by the gauge orbits. According to algebraic geometry, the set of smooth functions over a space is a ring. The Koszul–Tate complex (used because the first class constraints are not regular in general) describes the algebra associated with the symplectic reduction in terms of the algebra formula_64. First, using equations defining formula_59 inside formula_65, construct a Koszul complex formula_66 so that formula_67 and formula_68 for formula_69. Then, for the fibration formula_70 one considers the complex of vertical exterior forms formula_71. Locally, formula_72 is isomorphic to formula_73, where formula_74 is the exterior algebra of the dual of a vector space formula_75. Using the Koszul resolution defined earlier, one obtains a bigraded complex formula_76 Finally (and this is the most nontrivial step), a differential formula_77 is defined on formula_78 which lifts formula_79 to formula_80 and such that formula_16 and formula_81 with respect to the grading by the ghost number: formula_82. Thus, the BRST operator or BRST differential formula_10 accomplishes on the level of functions what symplectic reduction does on the level of manifolds. There are two antiderivations, formula_83 and formula_84, which anticommute with each other. The BRST antiderivation formula_10 is given by formula_85. The operator formula_10 is nilpotent; formula_86 Consider the supercommutative algebra generated by formula_64 and Grassmann odd generators formula_87, i.e. the tensor product of a Grassmann algebra and formula_64. There is a unique antiderivation formula_83 satisfying formula_88 and formula_89 for all formula_90. The zeroth homology is given by formula_91. A longitudinal vector field on formula_61 is a vector field over formula_61 which is tangent everywhere to the gauge orbits. The Lie bracket of two longitudinal vector fields is itself another longitudinal vector field. Longitudinal formula_92-forms are dual to the exterior algebra of formula_92-vectors. 
formula_84 is essentially the longitudinal exterior derivative defined by formula_93 The zeroth cohomology of the longitudinal exterior derivative is the algebra of gauge invariant functions. The BRST construction applies when one has a Hamiltonian action of a compact, connected Lie group formula_53 on a phase space formula_54. Let formula_6 be the Lie algebra of formula_53 (via the Lie group–Lie algebra correspondence) and formula_94 (the dual of formula_95 a regular value of the momentum map formula_96. Let formula_97. Assume the formula_53-action on formula_61 is free and proper, and consider the space formula_98 of formula_53-orbits on formula_61, which is also known as a symplectic reduction quotient formula_99. First, using the regular sequence of functions defining formula_61 inside formula_54, construct a Koszul complex formula_100 The differential, formula_83, on this complex is an odd formula_64-linear derivation (differential algebra) of the graded formula_64-algebra formula_101. This odd derivation is defined by extending the Lie algebra homomorphism formula_102 of the Hamiltonian action. The resulting Koszul complex is the Koszul complex of the formula_103-module formula_64, where formula_104 is the symmetric algebra of formula_6, and the module structure comes from a ring homomorphism formula_105 induced by the Hamiltonian action formula_106. This Koszul complex is a resolution of the formula_107-module formula_108, that is, formula_109 Then, consider the Chevalley–Eilenberg complex for the Koszul complex formula_110 considered as a differential graded module over the Lie algebra formula_6: formula_111 The "horizontal" differential formula_112 is defined on the coefficients formula_110 by the action of formula_6 and on formula_113 as the exterior derivative of right-invariant differential forms on the Lie group formula_53, whose Lie algebra is formula_6. Let Tot("K") be a complex such that formula_114 with a differential "D" = "d" + δ. The cohomology groups of (Tot("K"), "D") are computed using a spectral sequence associated to the double complex formula_115. The first term of the spectral sequence computes the cohomology of the "vertical" differential formula_83: formula_116, if "j" = 0 and zero otherwise. The first term of the spectral sequence may be interpreted as the complex of vertical differential forms formula_117 for the fiber bundle formula_118. The second term of the spectral sequence computes the cohomology of the "horizontal" differential formula_84 on formula_119: formula_120, if formula_121 and zero otherwise. The spectral sequence collapses at the second term, so that formula_122, which is concentrated in degree zero. Therefore, formula_123, if "p" = 0 and 0 otherwise. References. Citations. &lt;templatestyles src="Reflist/styles.css" /&gt; Primary literature. Original BRST papers:
[ { "math_id": 0, "text": "i \\hbar" }, { "math_id": 1, "text": "\\mathcal{H}_0" }, { "math_id": 2, "text": "\\mathcal{H}" }, { "math_id": 3, "text": "\\partial^\\mu A_\\mu = 0" }, { "math_id": 4, "text": "\\mathfrak{E}" }, { "math_id": 5, "text": "V\\mathfrak{E}" }, { "math_id": 6, "text": "\\mathfrak{g}" }, { "math_id": 7, "text": "Pl" }, { "math_id": 8, "text": "Pl_0" }, { "math_id": 9, "text": "\\epsilon \\in V\\mathfrak{E}" }, { "math_id": 10, "text": "s_B" }, { "math_id": 11, "text": "W(\\delta\\lambda)" }, { "math_id": 12, "text": "\\delta\\lambda" }, { "math_id": 13, "text": "W(\\delta\\lambda) X = \\delta\\lambda\\; s_B X" }, { "math_id": 14, "text": "X \\in {Pl}_0" }, { "math_id": 15, "text": "{Pl}_0" }, { "math_id": 16, "text": "(s_B)^2 = 0" }, { "math_id": 17, "text": "s_B X" }, { "math_id": 18, "text": "\\iota_{\\delta\\lambda}." }, { "math_id": 19, "text": "\\begin{align}\n\\left [\\iota_{\\delta\\lambda}, s_B \\right ] s_B X &= \\iota_{\\delta\\lambda} (s_B s_B X) + s_B \\left (\\iota_{\\delta\\lambda} (s_B X) \\right ) \\\\\n&= s_B \\left (\\iota_{\\delta\\lambda} (s_B X) \\right )\n\\end{align}" }, { "math_id": 20, "text": "s_BX=0." }, { "math_id": 21, "text": "Q_B" }, { "math_id": 22, "text": "[Q_B, \\mathcal{H}] = 0." }, { "math_id": 23, "text": "Q_B |\\Psi_i\\rangle = 0" }, { "math_id": 24, "text": "Q_B |\\Psi_f\\rangle \\neq 0" }, { "math_id": 25, "text": "Q_B," }, { "math_id": 26, "text": "G=\\xi\\partial^\\mu A_\\mu," }, { "math_id": 27, "text": "\\xi" }, { "math_id": 28, "text": "A_\\mu." }, { "math_id": 29, "text": "c^i" }, { "math_id": 30, "text": "b_i=\\bar{c}_i" }, { "math_id": 31, "text": "B_i" }, { "math_id": 32, "text": "c" }, { "math_id": 33, "text": "b" }, { "math_id": 34, "text": "B" }, { "math_id": 35, "text": "\\mathcal{L} = \\mathcal{L}_\\textrm{matter}(\\psi,\\,A_\\mu^a) -{1\\over 4g^2} \\operatorname{Tr}[F^{\\mu\\nu}F_{\\mu\\nu}]+{1\\over 2g^2} \\operatorname{Tr}[BB]-{1\\over g^2} \\operatorname{Tr}[BG]-{\\xi\\over g^2} \\operatorname{Tr}[\\partial^\\mu b D_\\mu c]" }, { "math_id": 36, "text": "D_\\mu" }, { "math_id": 37, "text": "\\delta\\lambda \\in V\\mathfrak{E}" }, { "math_id": 38, "text": "\\psi" }, { "math_id": 39, "text": "A_\\mu" }, { "math_id": 40, "text": "\\begin{align}\n\\delta \\psi_i &= \\delta\\lambda D_i c \\\\\n\\delta A_\\mu &= \\delta\\lambda D_\\mu c \\\\\n\\delta c &= \\delta\\lambda \\tfrac{i}{2} [c, c] \\\\\n\\delta b= \\delta\\bar{c} &= \\delta\\lambda B \\\\\n\\delta B &= 0\n\\end{align}" }, { "math_id": 41, "text": "[\\cdot,\\cdot]" }, { "math_id": 42, "text": "Q_B = c^i \\left(L_i-\\frac 12 {{f_{i}}^j}_k b_j c^k\\right)" }, { "math_id": 43, "text": "L_i" }, { "math_id": 44, "text": "f_{ij}{}^k" }, { "math_id": 45, "text": "\\begin{align}\nQ_B A_\\mu &= D_\\mu c \\\\\nQ_B c &= {i\\over 2}[c,c] \\\\\nQ_B b &= B \\\\\nQ_B B &= 0\n\\end{align}" }, { "math_id": 46, "text": "\\delta A_\\mu" }, { "math_id": 47, "text": "b=\\bar{c}" }, { "math_id": 48, "text": "\\delta \\bar{c} = i \\delta\\lambda B" }, { "math_id": 49, "text": "-i (\\partial^\\mu \\bar{c}) D_\\mu c" }, { "math_id": 50, "text": "\\bar{c}" }, { "math_id": 51, "text": "A_\\mu ." }, { "math_id": 52, "text": "s_B \\left (\\bar{c} \\left (i \\partial^\\mu A_\\mu - \\tfrac{1}{2} \\xi s_B \\bar{c} \\right ) \\right )." 
}, { "math_id": 53, "text": "G" }, { "math_id": 54, "text": "M" }, { "math_id": 55, "text": "{\\mathfrak g}" }, { "math_id": 56, "text": " 0\\in {\\mathfrak g}^*" }, { "math_id": 57, "text": " \\Phi: M\\to {\\mathfrak g}^* " }, { "math_id": 58, "text": " M_0=\\Phi^{-1}(0) " }, { "math_id": 59, "text": " M_0 " }, { "math_id": 60, "text": "\\tilde M" }, { "math_id": 61, "text": "M_0" }, { "math_id": 62, "text": "r" }, { "math_id": 63, "text": "\\Phi_i" }, { "math_id": 64, "text": "C^\\infty(M)" }, { "math_id": 65, "text": " M " }, { "math_id": 66, "text": " ... \\to K^1(\\Phi) \\to C^{\\infty}(M) \\to 0 " }, { "math_id": 67, "text": " H^0(K(\\Phi))=C^\\infty(M_0) " }, { "math_id": 68, "text": " H^p(K(\\Phi))=0" }, { "math_id": 69, "text": " p\\ne 0" }, { "math_id": 70, "text": " M_0 \\to \\tilde M " }, { "math_id": 71, "text": " (\\Omega^\\cdot_{vert}(M_0), d_{vert}) " }, { "math_id": 72, "text": " \\Omega^\\cdot_{vert}(M_0) " }, { "math_id": 73, "text": " \\Lambda^\\cdot V^* \\otimes C^{\\infty}(\\tilde M) " }, { "math_id": 74, "text": " \\Lambda^\\cdot V^* " }, { "math_id": 75, "text": " V " }, { "math_id": 76, "text": " K^{i,j} = \\Lambda^i V^* \\otimes \\Lambda^j V \\otimes C^{\\infty}(M). " }, { "math_id": 77, "text": " s_B " }, { "math_id": 78, "text": " K=\\oplus_{i,j} K^{i,j} " }, { "math_id": 79, "text": " d_{vert} " }, { "math_id": 80, "text": " K " }, { "math_id": 81, "text": " H^0_{s_B} = C^{\\infty}(\\tilde M) " }, { "math_id": 82, "text": " K^n = \\oplus_{i-j=n} K^{i,j} " }, { "math_id": 83, "text": "\\delta" }, { "math_id": 84, "text": "d" }, { "math_id": 85, "text": "\\delta + d + \\mathrm{more}" }, { "math_id": 86, "text": "s^2=(\\delta+d)^2=\\delta^2 + d^2 + (\\delta d + d\\delta) = 0" }, { "math_id": 87, "text": "\\mathcal{P}_i" }, { "math_id": 88, "text": "\\delta \\mathcal{P}_i = -\\Phi_i" }, { "math_id": 89, "text": "\\delta f=0" }, { "math_id": 90, "text": "f\\in C^\\infty(M)" }, { "math_id": 91, "text": "C^\\infty(M_0)" }, { "math_id": 92, "text": "p" }, { "math_id": 93, "text": "\\begin{align}\nd\\omega(V_0, \\ldots, V_k) = & \\sum_i(-1)^{i} d_{{}_{V_i}} ( \\omega (V_0, \\ldots, \\widehat V_i, \\ldots,V_k ))\\\\\n& + \\sum_{i<j}(-1)^{i+j}\\omega ([V_i, V_j], V_0, \\ldots, \\widehat V_i, \\ldots, \\widehat V_j, \\ldots, V_k)\n\\end{align}" }, { "math_id": 94, "text": "0 \\in \\mathfrak{g}^*" }, { "math_id": 95, "text": "\\mathfrak{g})" }, { "math_id": 96, "text": "\\Phi: M\\to \\mathfrak{g}^*" }, { "math_id": 97, "text": "M_0=\\Phi^{-1}(0) " }, { "math_id": 98, "text": "\\widetilde M = M_0/G " }, { "math_id": 99, "text": "\\widetilde M = M/\\!\\!/G" }, { "math_id": 100, "text": "\\Lambda^\\bullet {\\mathfrak g} \\otimes C^{\\infty}(M)." 
}, { "math_id": 101, "text": "\\Lambda^\\bullet {\\mathfrak g} \\otimes C^{\\infty}(M) " }, { "math_id": 102, "text": " {\\mathfrak g}\\to C^{\\infty}(M) " }, { "math_id": 103, "text": "S({\\mathfrak g})" }, { "math_id": 104, "text": "S(\\mathfrak{g})" }, { "math_id": 105, "text": "S({\\mathfrak g}) \\to C^{\\infty}(M) " }, { "math_id": 106, "text": "\\mathfrak{g} \\to C^{\\infty}(M)" }, { "math_id": 107, "text": " S({\\mathfrak g})" }, { "math_id": 108, "text": " C^{\\infty}(M_0) " }, { "math_id": 109, "text": " H^{j}(\\Lambda^\\bullet {\\mathfrak g} \\otimes C^{\\infty}(M),\\delta) = \\begin{cases} C^{\\infty}(M_0) & j = 0 \\\\ 0 & j \\neq 0 \\end{cases}" }, { "math_id": 110, "text": " \\Lambda^\\bullet {\\mathfrak g} \\otimes C^{\\infty}(M) " }, { "math_id": 111, "text": " K^{\\bullet,\\bullet} = C^\\bullet \\left (\\mathfrak g,\\Lambda^\\bullet {\\mathfrak g} \\otimes C^{\\infty}(M) \\right ) = \\Lambda^\\bullet {\\mathfrak g}^* \\otimes \\Lambda^\\bullet {\\mathfrak g} \\otimes C^{\\infty}(M). " }, { "math_id": 112, "text": " d: K^{i,\\bullet} \\to K^{i+1,\\bullet} " }, { "math_id": 113, "text": " \\Lambda^\\bullet {\\mathfrak g}^*" }, { "math_id": 114, "text": "\\operatorname{Tot}(K)^n =\\bigoplus\\nolimits_{i-j=n} K^{i,j}" }, { "math_id": 115, "text": "(K^{\\bullet,\\bullet}, d, \\delta)" }, { "math_id": 116, "text": " E_1^{i,j} = H^j (K^{i,\\bullet},\\delta) = \\Lambda^i {\\mathfrak g}^* \\otimes C^{\\infty}(M_0)" }, { "math_id": 117, "text": " (\\Omega^\\bullet{\\operatorname{vert}}(M_0), d_{\\operatorname{vert}}) " }, { "math_id": 118, "text": " M_0 \\to \\widetilde M " }, { "math_id": 119, "text": "E_1^{\\bullet,\\bullet}" }, { "math_id": 120, "text": " E_2^{i,j} \\cong H^i(E_1^{\\bullet,j},d) = C^{\\infty}(M_0)^g = C^{\\infty}(\\widetilde M)" }, { "math_id": 121, "text": "i = j= 0" }, { "math_id": 122, "text": " E_{\\infty}^{i,j} = E_2^{i,j} " }, { "math_id": 123, "text": " H^p (\\operatorname{Tot}(K), D ) = C^{\\infty}(M_0)^g = C^{\\infty}(\\widetilde M)" } ]
https://en.wikipedia.org/wiki?curid=6769980
677
Ambiguity
Type of uncertainty of meaning in which several interpretations are plausible Ambiguity is the type of meaning in which a phrase, statement, or resolution is not explicitly defined, making for several interpretations; others describe it as a concept or statement that has no real reference. A common aspect of ambiguity is uncertainty. It is thus an attribute of any idea or statement whose intended meaning cannot be definitively resolved, according to a rule or process with a finite number of steps. (The prefix "ambi-" reflects the idea of "two", as in "two meanings"). The concept of ambiguity is generally contrasted with vagueness. In ambiguity, specific and distinct interpretations are permitted (although some may not be immediately obvious), whereas with vague information it is difficult to form any interpretation at the desired level of specificity. Linguistic forms. Lexical ambiguity is contrasted with semantic ambiguity. The former represents a choice between a finite number of known and meaningful context-dependent interpretations. The latter represents a choice between any number of possible interpretations, none of which may have a standard agreed-upon meaning. This form of ambiguity is closely related to vagueness. Ambiguity in human language is argued to reflect principles of efficient communication. Languages that communicate efficiently will avoid sending information that is redundant with information provided in the context. This can be shown mathematically to result in a system that is ambiguous when context is neglected. In this way, ambiguity is viewed as a generally useful feature of a linguistic system. Linguistic ambiguity can be a problem in law, because the interpretation of written documents and oral agreements is often of paramount importance. Lexical ambiguity. The lexical ambiguity of a word or phrase applies to it having more than one meaning in the language to which the word belongs. "Meaning" here refers to whatever should be represented by a good dictionary. For instance, the word "bank" has several distinct lexical definitions, including "financial institution" and "edge of a river". Or consider "apothecary". One could say "I bought herbs from the apothecary". This could mean one actually spoke to the apothecary (pharmacist) or went to the apothecary (pharmacy). The context in which an ambiguous word is used often makes it clearer which of the meanings is intended. If, for instance, someone says "I put $100 in the bank", most people would not think someone used a shovel to dig in the mud. However, some linguistic contexts do not provide sufficient information to make a used word clearer. Lexical ambiguity can be addressed by algorithmic methods that automatically associate the appropriate meaning with a word in context, a task referred to as word-sense disambiguation. The use of multi-defined words requires the author or speaker to clarify their context, and sometimes elaborate on their specific intended meaning (in which case, a less ambiguous term should have been used). The goal of clear concise communication is that the receiver(s) have no misunderstanding about what was meant to be conveyed. An exception to this could include a politician whose "weasel words" and obfuscation are necessary to gain support from multiple constituents with mutually exclusive conflicting desires from his or her candidate of choice. Ambiguity is a powerful tool of political science. More problematic are words whose multiple meanings express closely related concepts. 
"Good", for example, can mean "useful" or "functional" ("That's a good hammer"), "exemplary" ("She's a good student"), "pleasing" ("This is good soup"), "moral" ("a good person" versus "the lesson to be learned from a story"), "righteous", etc. "I have a good daughter" is not clear about which sense is intended. The various ways to apply prefixes and suffixes can also create ambiguity ("unlockable" can mean "capable of being opened" or "impossible to lock"). Semantic and syntactic ambiguity. Semantic ambiguity occurs when a word, phrase or sentence, taken out of context, has more than one interpretation. In "We saw her duck" (example due to Richard Nordquist), the words "her duck" can refer either Syntactic ambiguity arises when a sentence can have two (or more) different meanings because of the structure of the sentence—its syntax. This is often due to a modifying expression, such as a prepositional phrase, the application of which is unclear. "He ate the cookies on the couch", for example, could mean that he ate those cookies that were on the couch (as opposed to those that were on the table), or it could mean that he was sitting on the couch when he ate the cookies. "To get in, you will need an entrance fee of $10 or your voucher and your drivers' license." This could mean that you need EITHER ten dollars OR BOTH your voucher and your license. Or it could mean that you need your license AND you need EITHER ten dollars OR a voucher. Only rewriting the sentence, or placing appropriate punctuation can resolve a syntactic ambiguity. For the notion of, and theoretic results about, syntactic ambiguity in artificial, formal languages (such as computer programming languages), see Ambiguous grammar. Usually, semantic and syntactic ambiguity go hand in hand. The sentence "We saw her duck" is also syntactically ambiguous. Conversely, a sentence like "He ate the cookies on the couch" is also semantically ambiguous. Rarely, but occasionally, the different parsings of a syntactically ambiguous phrase result in the same meaning. For example, the command "Cook, cook!" can be parsed as "Cook (noun used as vocative), cook (imperative verb form)!", but also as "Cook (imperative verb form), cook (noun used as vocative)!". It is more common that a syntactically unambiguous phrase has a semantic ambiguity; for example, the lexical ambiguity in "Your boss is a funny man" is purely semantic, leading to the response "Funny ha-ha or funny peculiar?" Spoken language can contain many more types of ambiguities that are called phonological ambiguities, where there is more than one way to compose a set of sounds into words. For example, "ice cream" and "I scream". Such ambiguity is generally resolved according to the context. A mishearing of such, based on incorrectly resolved ambiguity, is called a mondegreen. Philosophy. Philosophers (and other users of logic) spend a lot of time and effort searching for and removing (or intentionally adding) ambiguity in arguments because it can lead to incorrect conclusions and can be used to deliberately conceal bad arguments. For example, a politician might say, "I oppose taxes which hinder economic growth", an example of a glittering generality. Some will think they oppose taxes in general because they hinder economic growth. Others may think they oppose only those taxes that they believe will hinder economic growth. 
In writing, the sentence can be rewritten to reduce possible misinterpretation, either by adding a comma after "taxes" (to convey the first sense) or by changing "which" to "that" (to convey the second sense) or by rewriting it in other ways. The devious politician hopes that each constituent will interpret the statement in the most desirable way, and think the politician supports everyone's opinion. However, the opposite can also be true—an opponent can turn a positive statement into a bad one if the speaker uses ambiguity (intentionally or not). The logical fallacies of amphiboly and equivocation rely heavily on the use of ambiguous words and phrases. In continental philosophy (particularly phenomenology and existentialism), there is much greater tolerance of ambiguity, as it is generally seen as an integral part of the human condition. Martin Heidegger argued that the relation between the subject and object is ambiguous, as is the relation of mind and body, and part and whole. In Heidegger's phenomenology, Dasein is always in a meaningful world, but there is always an underlying background for every instance of signification. Thus, although some things may be certain, they have little to do with Dasein's sense of care and existential anxiety, e.g., in the face of death. In calling his work Being and Nothingness an "essay in phenomenological ontology" Jean-Paul Sartre follows Heidegger in defining the human essence as ambiguous, or relating fundamentally to such ambiguity. Simone de Beauvoir tries to base an ethics on Heidegger's and Sartre's writings (The Ethics of Ambiguity), where she highlights the need to grapple with ambiguity: "as long as there have been philosophers and they have thought, most of them have tried to mask it ... And the ethics which they have proposed to their disciples has always pursued the same goal. It has been a matter of eliminating the ambiguity by making oneself pure inwardness or pure externality, by escaping from the sensible world or being engulfed by it, by yielding to eternity or enclosing oneself in the pure moment." Ethics cannot be based on the authoritative certainty given by mathematics and logic, or prescribed directly from the empirical findings of science. She states: "Since we do not succeed in fleeing it, let us, therefore, try to look the truth in the face. Let us try to assume our fundamental ambiguity. It is in the knowledge of the genuine conditions of our life that we must draw our strength to live and our reason for acting". Other continental philosophers suggest that concepts such as life, nature, and sex are ambiguous. Corey Anton has argued that we cannot be certain what is separate from or unified with something else: language, he asserts, divides what is not, in fact, separate. Following Ernest Becker, he argues that the desire to 'authoritatively disambiguate' the world and existence has led to numerous ideologies and historical events such as genocide. On this basis, he argues that ethics must focus on 'dialectically integrating opposites' and balancing tension, rather than seeking a priori validation or certainty. Like the existentialists and phenomenologists, he sees the ambiguity of life as the basis of creativity. Literature and rhetoric. In literature and rhetoric, ambiguity can be a useful tool. Groucho Marx's classic joke depends on a grammatical ambiguity for its humor, for example: "Last night I shot an elephant in my pajamas. How he got in my pajamas, I'll never know". 
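The two readings of the Groucho Marx line correspond to two distinct parse trees for the same string of words. As a concrete illustration (added here; it is not part of the original discussion), the following short Python sketch uses the NLTK toolkit, assuming it is installed, with a toy grammar adapted from the one NLTK distributes for exactly this sentence:

```python
import nltk

# Toy grammar for "I shot an elephant in my pajamas"; the prepositional phrase
# "in my pajamas" can attach either to the noun phrase "an elephant" or to the
# verb phrase "shot an elephant".
groucho_grammar = nltk.CFG.fromstring("""
S -> NP VP
PP -> P NP
NP -> Det N | Det N PP | 'I'
VP -> V NP | VP PP
Det -> 'an' | 'my'
N -> 'elephant' | 'pajamas'
V -> 'shot'
P -> 'in'
""")

parser = nltk.ChartParser(groucho_grammar)
sentence = "I shot an elephant in my pajamas".split()
for tree in parser.parse(sentence):
    print(tree)   # prints two trees, one for each reading
```

A parser that silently committed to a single attachment rule would pick one of the two readings; enumerating all parses makes the syntactic ambiguity explicit.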
Songs and poetry often rely on ambiguous words for artistic effect, as in the song title "Don't It Make My Brown Eyes Blue" (where "blue" can refer to the color, or to sadness). In the narrative, ambiguity can be introduced in several ways: motive, plot, character. F. Scott Fitzgerald uses the latter type of ambiguity with notable effect in his novel "The Great Gatsby". Mathematical notation. Mathematical notation is a helpful tool that eliminates a lot of misunderstandings associated with natural language in physics and other sciences. Nonetheless, there are still some inherent ambiguities due to lexical, syntactic, and semantic reasons that persist in mathematical notation. Names of functions. The ambiguity in the style of writing a function should not be confused with a multivalued function, which can (and should) be defined in a deterministic and unambiguous way. Several special functions still do not have established notations. Usually, the conversion to another notation requires scaling the argument or the resulting value; sometimes, the same name of the function is used, causing confusion. Expressions. Ambiguous expressions often appear in physical and mathematical texts. It is common practice to omit multiplication signs in mathematical expressions. Also, it is common to give the same name to a variable and a function, for example, formula_0. Then, if one sees formula_1, there is no way to distinguish whether it means the variable formula_3 multiplied by formula_2, or the function formula_3 evaluated at an argument equal to formula_2. In each case of use of such notations, the reader is supposed to be able to perform the deduction and reveal the true meaning. Creators of algorithmic languages try to avoid ambiguities. Many algorithmic languages (such as C++ and Fortran) require the character * as the symbol of multiplication. The Wolfram Language used in Mathematica allows the user to omit the multiplication symbol, but requires square brackets to indicate the argument of a function; square brackets are not allowed for grouping of expressions. Fortran, in addition, does not allow use of the same name (identifier) for different objects, for example, function and variable; in particular, the expression formula_4 is qualified as an error. The order of operations may depend on the context. In most programming languages, the operations of division and multiplication have equal priority and are executed from left to right. Until the last century, many editorial conventions assumed that multiplication is performed first, for example, formula_5 is interpreted as formula_6; in this case, the insertion of parentheses is required when translating the formulas to an algorithmic language. In addition, it is common to write an argument of a function without parentheses, which also may lead to ambiguity. In the scientific journal style, one uses roman letters to denote elementary functions, whereas variables are written using italics. For example, in mathematical journals the expression formula_7 does not denote the sine function, but the product of the three variables formula_8, formula_9, formula_10, although in the informal notation of a slide presentation it may stand for formula_11. Commas in multi-component subscripts and superscripts are sometimes omitted; this is also potentially ambiguous notation. 
For example, in the notation formula_12, the reader can only infer from the context whether it means a single-index object, taken with the subscript equal to the product of the variables formula_13, formula_10 and formula_14, or an indication of a trivalent tensor. Examples of potentially confusing ambiguous mathematical expressions. An expression such as formula_15 can be understood to mean either formula_16 or formula_17. Often the author's intention can be understood from the context, in cases where only one of the two makes sense, but an ambiguity like this should be avoided, for example by writing formula_18 or formula_19. The expression formula_20 means formula_21 in several texts, though it might be thought to mean formula_22, since formula_23 commonly means formula_24. Conversely, formula_25 might seem to mean formula_26, as this exponentiation notation usually denotes function iteration: in general, formula_27 means formula_28. However, for trigonometric and hyperbolic functions, this notation conventionally means exponentiation of the result of function application. The expression formula_29 can be interpreted as meaning formula_30; however, it is more commonly understood to mean formula_31. Notations in quantum optics and quantum mechanics. It is common to define the coherent states in quantum optics with formula_32 and states with fixed number of photons with formula_33. Then, there is an "unwritten rule": the state is coherent if there are more Greek characters than Latin characters in the argument, and a formula_10-photon state if the Latin characters dominate. The ambiguity becomes even worse if formula_34 is used for the state with a certain value of the coordinate, and formula_35 means the state with a certain value of the momentum, as may be done in books on quantum mechanics. Such ambiguities easily lead to confusion, especially if normalized, dimensionless variables are used. The expression formula_36 may mean a state with a single photon, or the coherent state with mean amplitude equal to 1, or a state with momentum equal to unity, and so on. The reader is supposed to guess from the context. Ambiguous terms in physics and mathematics. Some physical quantities do not yet have established notations; their value (and sometimes even dimension, as in the case of the Einstein coefficients) depends on the system of notations. Many terms are ambiguous. Each use of an ambiguous term should be preceded by a definition suitable for the specific case. As Ludwig Wittgenstein states in Tractatus Logico-Philosophicus: "... Only in the context of a proposition has a name meaning." A highly confusing term is "gain". For example, the sentence "the gain of a system should be doubled", without context, means close to nothing. The term "intensity" is ambiguous when applied to light. The term can refer to any of irradiance, luminous intensity, radiant intensity, or radiance, depending on the background of the person using the term. Also, confusion may be related to the use of atomic percent as a measure of the concentration of a dopant, or to the resolution of an imaging system as a measure of the size of the smallest detail that can still be resolved against the background of statistical noise. See also "Accuracy and precision". The Berry paradox arises as a result of systematic ambiguity in the meaning of terms such as "definable" or "nameable". Terms of this kind give rise to vicious circle fallacies. 
Other terms with this type of ambiguity are: satisfiable, true, false, function, property, class, relation, cardinal, and ordinal. Mathematical interpretation of ambiguity. In mathematics and logic, ambiguity can be considered to be an instance of the logical concept of underdetermination—for example, formula_37 leaves open what the value of formula_38 is—while overdetermination, except when like formula_39, is a self-contradiction, also called inconsistency, paradoxicalness, or oxymoron, or in mathematics an inconsistent system—such as formula_40, which has no solution. Logical ambiguity and self-contradiction is analogous to visual ambiguity and impossible objects, such as the Necker cube and impossible cube, or many of the drawings of M. C. Escher. Constructed language. Some languages have been created with the intention of avoiding ambiguity, especially lexical ambiguity. Lojban and Loglan are two related languages that have been created for this, focusing chiefly on syntactic ambiguity as well. The languages can be both spoken and written. These languages are intended to provide a greater technical precision over big natural languages, although historically, such attempts at language improvement have been criticized. Languages composed from many diverse sources contain much ambiguity and inconsistency. The many exceptions to syntax and semantic rules are time-consuming and difficult to learn. Biology. In structural biology, ambiguity has been recognized as a problem for studying protein conformations. The analysis of a protein three-dimensional structure consists in dividing the macromolecule into subunits called domains. The difficulty of this task arises from the fact that different definitions of what a domain is can be used (e.g. folding autonomy, function, thermodynamic stability, or domain motions), which sometimes results in a single protein having different—yet equally valid—domain assignments. Christianity and Judaism. Christianity and Judaism employ the concept of paradox synonymously with "ambiguity". Many Christians and Jews endorse Rudolf Otto's description of the sacred as 'mysterium tremendum et fascinans', the awe-inspiring mystery that fascinates humans. The apocryphal Book of Judith is noted for the "ingenious ambiguity" expressed by its heroine; for example, she says to the villain of the story, Holofernes, "my lord will not fail to achieve his purposes", without specifying whether "my lord" refers to the villain or to God. The orthodox Catholic writer G. K. Chesterton regularly employed paradox to tease out the meanings in common concepts that he found ambiguous or to reveal meaning often overlooked or forgotten in common phrases: the title of one of his most famous books, "Orthodoxy" (1908), itself employed such a paradox. Music. In music, pieces or sections that confound expectations and may be or are interpreted simultaneously in different ways are ambiguous, such as some polytonality, polymeter, other ambiguous meters or rhythms, and ambiguous phrasing, or (Stein 2005, p. 79) any aspect of music. The music of Africa is often purposely ambiguous. To quote Sir Donald Francis Tovey (1935, p. 195), "Theorists are apt to vex themselves with vain efforts to remove uncertainty just where it has a high aesthetic value." Visual art. In visual art, certain images are visually ambiguous, such as the Necker cube, which can be interpreted in two ways. Perceptions of such objects remain stable for a time, then may flip, a phenomenon called multistable perception. 
The opposite of such ambiguous images are impossible objects. Pictures or photographs may also be ambiguous at the semantic level: the visual image is unambiguous, but the meaning and narrative may be ambiguous: is a certain facial expression one of excitement or fear, for instance? Social psychology and the bystander effect. In social psychology, ambiguity is a factor used in determining people's responses to various situations. High levels of ambiguity in an emergency (e.g. an unconscious man lying on a park bench) make witnesses less likely to offer any sort of assistance, due to the fear that they may have misinterpreted the situation and acted unnecessarily. Alternatively, non-ambiguous emergencies (e.g. an injured person verbally asking for help) elicit more consistent intervention and assistance. With regard to the bystander effect, studies have shown that emergencies deemed ambiguous trigger the appearance of the classic bystander effect (wherein more witnesses decrease the likelihood of any of them helping) far more than non-ambiguous emergencies. Computer science. In computer science, the SI prefixes kilo-, mega- and giga- were historically used in certain contexts to mean the first three powers of 1024 (1024, 1024² and 1024³), contrary to the metric system in which these units unambiguously mean one thousand, one million, and one billion. This usage is particularly prevalent with electronic memory devices (e.g. DRAM) addressed directly by a binary machine register where a decimal interpretation makes no practical sense. Subsequently, the Ki, Mi, and Gi prefixes were introduced so that binary prefixes could be written explicitly, also rendering k, M, and G "unambiguous" in texts conforming to the new standard—this led to a "new" ambiguity in engineering documents lacking outward trace of the binary prefixes (necessarily indicating the new style) as to whether the usage of k, M, and G remains ambiguous (old style) or not (new style). 1 M (where M ambiguously means 1,000,000 or 1,048,576) is "less" uncertain than the engineering value 1.0e6 (defined to designate the interval 950,000 to 1,050,000). As non-volatile storage devices begin to exceed 1 GB in capacity (where the ambiguity begins to routinely impact the second significant digit), GB and TB almost always mean 10⁹ and 10¹² bytes. See also. &lt;templatestyles src="Div col/styles.css"/&gt;* References. &lt;templatestyles src="Reflist/styles.css" /&gt;
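As a small numerical illustration of the binary-versus-decimal prefix ambiguity described in the Computer science section above (added for concreteness; the variable names exist only for this example):

```python
# "Giga" read as the decimal SI prefix versus the binary prefix "gibi" (Gi).
GB_decimal = 10**9   # 1 GB in the unambiguous SI sense
GiB_binary = 2**30   # 1 GiB, the value that "GB" often denoted historically

print(GiB_binary - GB_decimal)              # 73741824 bytes of disagreement
print(100 * (GiB_binary / GB_decimal - 1))  # about 7.4 percent at the giga scale;
                                            # the gap grows to about 10 percent
                                            # at the tera scale (2**40 vs 10**12)
```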
[ { "math_id": 0, "text": "f=f(x)" }, { "math_id": 1, "text": "f=f(y+1)" }, { "math_id": 2, "text": "(y+1)" }, { "math_id": 3, "text": "f" }, { "math_id": 4, "text": "f = f(x)" }, { "math_id": 5, "text": "a/bc" }, { "math_id": 6, "text": "a/(bc)" }, { "math_id": 7, "text": "s i n" }, { "math_id": 8, "text": "s" }, { "math_id": 9, "text": "i" }, { "math_id": 10, "text": "n" }, { "math_id": 11, "text": "\\sin" }, { "math_id": 12, "text": "T_{mnk}" }, { "math_id": 13, "text": "m" }, { "math_id": 14, "text": "k" }, { "math_id": 15, "text": "\\sin^2\\alpha/2" }, { "math_id": 16, "text": "(\\sin(\\alpha/2))^2" }, { "math_id": 17, "text": "(\\sin \\alpha)^2/2" }, { "math_id": 18, "text": "\\sin^2(\\alpha/2)" }, { "math_id": 19, "text": "\\frac{1}{2}\\sin^2\\alpha" }, { "math_id": 20, "text": "\\sin^{-1}\\alpha" }, { "math_id": 21, "text": "\\arcsin(\\alpha)" }, { "math_id": 22, "text": "(\\sin \\alpha)^{-1}" }, { "math_id": 23, "text": "\\sin^{n} \\alpha" }, { "math_id": 24, "text": "(\\sin \\alpha)^{n}" }, { "math_id": 25, "text": "\\sin^2 \\alpha" }, { "math_id": 26, "text": "\\sin(\\sin \\alpha)" }, { "math_id": 27, "text": "f^2(x)" }, { "math_id": 28, "text": "f(f(x))" }, { "math_id": 29, "text": "a/2b" }, { "math_id": 30, "text": "(a/2)b" }, { "math_id": 31, "text": "a/(2b)" }, { "math_id": 32, "text": "~|\\alpha\\rangle~ " }, { "math_id": 33, "text": "~|n\\rangle~" }, { "math_id": 34, "text": "~|x\\rangle~" }, { "math_id": 35, "text": "~|p\\rangle~" }, { "math_id": 36, "text": " |1\\rangle " }, { "math_id": 37, "text": "X=Y" }, { "math_id": 38, "text": "X" }, { "math_id": 39, "text": "X=1, X=1, X=1" }, { "math_id": 40, "text": "X=2, X=3" } ]
https://en.wikipedia.org/wiki?curid=677
6770335
Car–Parrinello molecular dynamics
Computational chemistry software package Car–Parrinello molecular dynamics or CPMD refers to either a method used in molecular dynamics (also known as the Car–Parrinello method) or the computational chemistry software package used to implement this method. The CPMD method is one of the major methods for performing ab-initio molecular dynamics (ab-initio MD or AIMD). Ab initio molecular dynamics (ab initio MD) is a computational method that uses first principles, or fundamental laws of nature, to simulate the motion of atoms in a system. It is a type of molecular dynamics (MD) simulation that does not rely on empirical potentials or force fields to describe the interactions between atoms, but rather calculates these interactions directly from the electronic structure of the system using quantum mechanics. In an ab initio MD simulation, the total energy of the system is calculated at each time step using density functional theory (DFT) or another method of quantum chemistry. The forces acting on each atom are then determined from the gradient of the energy with respect to the atomic coordinates, and the equations of motion are solved to predict the trajectory of the atoms. AIMD permits chemical bond breaking and forming events to occur and accounts for electronic polarization effects. Therefore, ab initio MD simulations can be used to study a wide range of phenomena, including the structural, thermodynamic, and dynamic properties of materials and chemical reactions. They are particularly useful for systems that are not well described by empirical potentials or force fields, such as systems with strong electronic correlation or systems with many degrees of freedom. However, ab initio MD simulations are computationally demanding and require significant computational resources. The CPMD method is related to the more common Born–Oppenheimer molecular dynamics (BOMD) method in that the quantum mechanical effect of the electrons is included in the calculation of energy and forces for the classical motion of the nuclei. CPMD and BOMD are different types of AIMD. However, whereas BOMD treats the electronic structure problem within the time-"independent" Schrödinger equation, CPMD explicitly includes the electrons as active degrees of freedom, via (fictitious) dynamical variables. The software is a parallelized plane wave / pseudopotential implementation of density functional theory, particularly designed for "ab initio" molecular dynamics. Car–Parrinello method. The Car–Parrinello method is a type of molecular dynamics, usually employing periodic boundary conditions, planewave basis sets, and density functional theory, proposed in 1985 by Roberto Car and Michele Parrinello, who were subsequently awarded the Dirac Medal by ICTP in 2009. In contrast to Born–Oppenheimer molecular dynamics, wherein the nuclear (ionic) degrees of freedom are propagated using ionic forces that are calculated at each iteration by approximately solving the electronic problem with conventional matrix diagonalization methods, the Car–Parrinello method explicitly introduces the electronic degrees of freedom as (fictitious) dynamical variables, writing an extended Lagrangian for the system which leads to a system of coupled equations of motion for both ions and electrons. 
In this way, an explicit electronic minimization at each time step, as done in Born–Oppenheimer MD, is not needed: after an initial standard electronic minimization, the fictitious dynamics of the electrons keeps them on the electronic ground state corresponding to each new ionic configuration visited along the dynamics, thus yielding accurate ionic forces. In order to maintain this adiabaticity condition, it is necessary that the fictitious mass of the electrons is chosen small enough to avoid a significant energy transfer from the ionic to the electronic degrees of freedom. This small fictitious mass in turn requires that the equations of motion are integrated using a smaller time step than the one (1–10 fs) commonly used in Born–Oppenheimer molecular dynamics. Currently, the CPMD method can be applied to systems that consist of a few tens or hundreds of atoms and access timescales on the order of tens of picoseconds. General approach. In CPMD the core electrons are usually described by a pseudopotential and the wavefunctions of the valence electrons are approximated by a plane wave basis set. The ground state electronic density (for fixed nuclei) is calculated self-consistently, usually using the density functional theory method. Kohn–Sham equations are often used to calculate the electronic structure, where electronic orbitals are expanded in a plane-wave basis set. Then, using that density, forces on the nuclei can be computed, to update the trajectories (using, e.g., the Verlet integration algorithm). In addition, however, the coefficients used to obtain the electronic orbital functions can be treated as a set of extra spatial dimensions, and trajectories for the orbitals can be calculated in this context. Fictitious dynamics. CPMD is an approximation of the Born–Oppenheimer MD (BOMD) method. In BOMD, the electrons' wave function must be minimized via matrix diagonalization at every step in the trajectory. CPMD uses fictitious dynamics to keep the electrons close to the ground state, preventing the need for a costly self-consistent iterative minimization at each time step. The fictitious dynamics relies on the use of a fictitious electron mass (usually in the range of 400 – 800 a.u.) to ensure that there is very little energy transfer from nuclei to electrons, i.e. to ensure adiabaticity. Any increase in the fictitious electron mass resulting in energy transfer would cause the system to leave the ground-state BOMD surface. Lagrangian. formula_0 where formula_1 is the fictitious mass parameter; "E"[{"ψi"},{R"I"}] is the Kohn–Sham energy density functional, which outputs energy values when given Kohn–Sham orbitals and nuclear positions. Orthogonality constraint. formula_2 where "δij" is the Kronecker delta. Equations of motion. The equations of motion are obtained by finding the stationary point of the Lagrangian under variations of "ψi" and R"I", with the orthogonality constraint. formula_3 formula_4 where Λ"ij" is a Lagrangian multiplier matrix to comply with the orthonormality constraint. Born–Oppenheimer limit. In the formal limit where "μ" → 0, the equations of motion approach Born–Oppenheimer molecular dynamics. Software packages. There are a number of software packages available for performing AIMD simulations. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
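To make the role of the fictitious mass concrete, the following deliberately simplified toy sketch (an illustration added here; it is not the plane-wave CPMD scheme, and the model energy is invented for the example) propagates one "nuclear" coordinate R and one fictitious "electronic" variable psi with a velocity Verlet integrator. Because the fictitious mass mu is much smaller than the nuclear mass M, psi shadows the minimizer of the model energy at the current R without any explicit minimization inside the loop, mimicking how CPMD keeps the orbitals near the ground state:

```python
# Toy Car-Parrinello-style dynamics with the model energy
#   E(psi, R) = 0.5*(psi - R)**2 + 0.1*R**2,
# whose minimum over psi at fixed R (namely psi = R) plays the role of the
# Born-Oppenheimer surface.

def forces(psi, R):
    dE_dpsi = psi - R               # dE/dpsi
    dE_dR = -(psi - R) + 0.2 * R    # dE/dR
    return -dE_dpsi, -dE_dR         # forces are the negative gradients

M, mu = 100.0, 0.5    # heavy "nucleus", small fictitious electron mass
dt = 0.01             # time step small enough for the light fictitious mass
R, psi = 1.0, 1.0     # start with psi at its instantaneous minimum psi = R
vR, vpsi = 0.0, 0.0
fpsi, fR = forces(psi, R)

for step in range(20000):           # velocity Verlet for both degrees of freedom
    psi += vpsi * dt + 0.5 * (fpsi / mu) * dt**2
    R += vR * dt + 0.5 * (fR / M) * dt**2
    new_fpsi, new_fR = forces(psi, R)
    vpsi += 0.5 * (fpsi + new_fpsi) / mu * dt
    vR += 0.5 * (fR + new_fR) / M * dt
    fpsi, fR = new_fpsi, new_fR

# Because mu << M, the fictitious variable stays close to the instantaneous
# minimizer (here simply R), so the deviation printed below remains tiny.
print("psi - R after the run:", psi - R)
```

Making mu larger (or the time step too coarse) lets energy leak from the nuclear to the fictitious degree of freedom, which is the toy analogue of the system drifting off the ground-state Born–Oppenheimer surface.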
[ { "math_id": 0, "text": "\n\\mathcal{L} = \n\\frac{1}{2}\\left(\\sum_I^{\\mathrm{nuclei}}\\ M_I\\dot{\\mathbf{R}}_I^2 + \\mu\\sum_i^{\\mathrm{orbitals}}\\int d\\mathbf r\\ |\\dot{\\psi}_i(\\mathbf r,t)|^2 \\right)\n- E\\left[\\{\\psi_i\\},\\{\\mathbf R_I\\}\\right] + \\sum_{ij}\\Lambda_{ij}\\left(\\int d\\mathbf r\\ \\psi_i \\psi_j - \\delta_{ij}\\right),\n" }, { "math_id": 1, "text": "\\mu" }, { "math_id": 2, "text": "\n\\int d\\mathbf r\\ \\psi_i^*(\\mathbf r,t) \\psi_j(\\mathbf r,t) = \\delta_{ij},\n" }, { "math_id": 3, "text": "\nM_I \\ddot{\\mathbf R}_I = - \\nabla_I \\, E\\left[\\{\\psi_i\\},\\{\\mathbf R_I\\}\\right]\n" }, { "math_id": 4, "text": "\n\\mu \\ddot{\\psi}_i(\\mathbf r,t) = - \\frac{\\delta E}{\\delta \\psi_i^*(\\mathbf r,t)} + \\sum_j \\Lambda_{ij} \\psi_j(\\mathbf r,t),\n" } ]
https://en.wikipedia.org/wiki?curid=6770335
6770393
Quasitransitive relation
The mathematical notion of quasitransitivity is a weakened version of transitivity that is used in social choice theory and microeconomics. Informally, a relation is quasitransitive if it is symmetric for some values and transitive elsewhere. The concept was introduced by Amartya Sen in 1969 to study the consequences of Arrow's theorem. Formal definition. A binary relation T over a set "X" is quasitransitive if for all "a", "b", and "c" in "X" the following holds: formula_0 If the relation is also antisymmetric, T is transitive. Alternatively, for a relation T, define the asymmetric or "strict" part P: formula_1 Then T is quasitransitive if and only if P is transitive. Examples. Preferences are assumed to be quasitransitive (rather than transitive) in some economic contexts. The classic example is a person indifferent between 7 and 8 grams of sugar and indifferent between 8 and 9 grams of sugar, but who prefers 9 grams of sugar to 7. Similarly, the Sorites paradox can be resolved by weakening assumed transitivity of certain relations to quasitransitivity. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
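For concreteness, the definition can be checked by brute force on small examples; the following sketch (added here as an illustration) encodes the sugar example from the text, reading x T y as "x is regarded as at least as good as y":

```python
from itertools import product

def strict_part(T):
    """Asymmetric ("strict") part P of a relation T, given as a set of pairs."""
    return {(a, b) for (a, b) in T if (b, a) not in T}

def is_quasitransitive(T, X):
    """T is quasitransitive iff its strict part P is transitive."""
    P = strict_part(T)
    return all((a, c) in P
               for a, b, c in product(X, repeat=3)
               if (a, b) in P and (b, c) in P)

def is_transitive(T, X):
    return all((a, c) in T
               for a, b, c in product(X, repeat=3)
               if (a, b) in T and (b, c) in T)

# Sugar example: indifference between 7 and 8 grams and between 8 and 9 grams,
# but a strict preference for 9 grams over 7 grams.
X = {7, 8, 9}
T = {(7, 8), (8, 7), (8, 9), (9, 8), (9, 7), (7, 7), (8, 8), (9, 9)}

print(is_quasitransitive(T, X))  # True: the only strict comparison is 9 over 7
print(is_transitive(T, X))       # False: 7 T 8 and 8 T 9 hold, but 7 T 9 does not
```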
[ { "math_id": 0, "text": "(a\\operatorname{T}b) \\wedge \\neg(b\\operatorname{T}a) \\wedge (b\\operatorname{T}c) \\wedge \\neg(c\\operatorname{T}b) \\Rightarrow (a\\operatorname{T}c) \\wedge \\neg(c\\operatorname{T}a)." }, { "math_id": 1, "text": "(a\\operatorname{P}b) \\Leftrightarrow (a\\operatorname{T}b) \\wedge \\neg(b\\operatorname{T}a)." } ]
https://en.wikipedia.org/wiki?curid=6770393
67714643
Body of constant brightness
In convex geometry, a body of constant brightness is a three-dimensional convex set all of whose two-dimensional projections have equal area. A sphere is a body of constant brightness, but others exist. Bodies of constant brightness are a generalization of curves of constant width, but are not the same as another generalization, the surfaces of constant width. The name comes from interpreting the body as a shining body with isotropic luminance: a photo (with focus at infinity) of the body taken from any angle would then have the same total light energy hitting the photo. Properties. A body has constant brightness if and only if the reciprocal Gaussian curvatures at pairs of opposite points of tangency of parallel supporting planes have almost-everywhere-equal sums. According to an analogue of Barbier's theorem, all bodies of constant brightness that have the same projected area formula_0 as each other also have the same surface area as each other, equal to the surface area of a sphere of radius formula_1. This can be proved by the Crofton formula. Example. The first known body of constant brightness that is not a sphere was constructed by Wilhelm Blaschke in 1915. Its boundary is a surface of revolution of a curved triangle (but not the Reuleaux triangle). It is smooth except on a circle and at one isolated point where it is crossed by the axis of revolution. The circle separates two patches of different geometry from each other: one of these two patches is a spherical cap, and the other forms part of a football, a surface of constant Gaussian curvature with a pointed tip. Pairs of parallel supporting planes to this body have one plane tangent to a singular point (with reciprocal curvature zero) and the other tangent to one of these two patches, which both have the same curvature. Among bodies of revolution of constant brightness, Blaschke's shape (also called the Blaschke–Firey body) is the one with minimum volume, and the sphere is the one with maximum volume. Additional examples can be obtained by combining multiple bodies of constant brightness using the Blaschke sum, an operation on convex bodies that preserves the property of having constant brightness. Relation to constant width. A curve of constant width in the Euclidean plane has an analogous property: all of its one-dimensional projections have equal length. In this sense, the bodies of constant brightness are a three-dimensional generalization of this two-dimensional concept, different from the surfaces of constant width. Since the work of Blaschke, it has been conjectured that the only shape that has both constant brightness and constant width is a sphere. This was formulated explicitly by Nakajima in 1926, and it came to be known as "Nakajima's problem". Nakajima himself proved the conjecture under the additional assumption that the boundary of the shape is smooth. A proof of the full conjecture was published in 2006 by Ralph Howard. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
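The surface-area statement can be made quantitative with a short calculation, added here as an illustration; it uses Cauchy's surface area formula, which is the standard way the projection-averaging argument alluded to above (via the Crofton formula) is packaged for convex bodies:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
For a convex body $K\subset\mathbb{R}^{3}$, Cauchy's surface area formula states
that the surface area $S(K)$ is four times the mean area of its orthogonal
projections over all directions $u$ on the unit sphere $S^{2}$:
\begin{equation}
S(K) \;=\; \frac{4}{|S^{2}|}\int_{S^{2}}
  \operatorname{area}\bigl(K \mid u^{\perp}\bigr)\,\mathrm{d}u .
\end{equation}
If $K$ has constant brightness, every projection has the same area $A$, so the
mean is simply $A$ and
\begin{equation}
S(K) = 4A ,
\end{equation}
independently of the particular body. This equals the surface area of the sphere
of radius $r=\sqrt{A/\pi}$, the sphere whose projections likewise have area $A$,
since $4\pi r^{2}=4A$.
\end{document}
```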
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "\\textstyle\\sqrt{A/\\pi}" } ]
https://en.wikipedia.org/wiki?curid=67714643
677191
Differential (mathematics)
Mathematical notion of infinitesimal difference In mathematics, differential refers to several related notions derived from the early days of calculus, put on a rigorous footing, such as infinitesimal differences and the derivatives of functions. The term is used in various branches of mathematics such as calculus, differential geometry, algebraic geometry and algebraic topology. Introduction. The term differential is used nonrigorously in calculus to refer to an infinitesimal ("infinitely small") change in some varying quantity. For example, if "x" is a variable, then a change in the value of "x" is often denoted Δ"x" (pronounced "delta x"). The differential "dx" represents an infinitely small change in the variable "x". The idea of an infinitely small or infinitely slow change is, intuitively, extremely useful, and there are a number of ways to make the notion mathematically precise. Using calculus, it is possible to relate the infinitely small changes of various variables to each other mathematically using derivatives. If "y" is a function of "x", then the differential "dy" of "y" is related to "dx" by the formula formula_0 where formula_1 denotes the derivative of "y" with respect to "x". This formula summarizes the intuitive idea that the derivative of "y" with respect to "x" is the limit of the ratio of differences Δ"y"/Δ"x" as Δ"x" becomes infinitesimal. History and usage. Infinitesimal quantities played a significant role in the development of calculus. Archimedes used them, even though he did not believe that arguments involving infinitesimals were rigorous. Isaac Newton referred to them as fluxions. However, it was Gottfried Leibniz who coined the term "differentials" for infinitesimal quantities and introduced the notation for them which is still used today. In Leibniz's notation, if "x" is a variable quantity, then "dx" denotes an infinitesimal change in the variable "x". Thus, if "y" is a function of "x", then the derivative of "y" with respect to "x" is often denoted "dy"/"dx", which would otherwise be denoted (in the notation of Newton or Lagrange) "ẏ" or "y"′. The use of differentials in this form attracted much criticism, for instance in the famous pamphlet The Analyst by Bishop Berkeley. Nevertheless, the notation has remained popular because it suggests strongly the idea that the derivative of "y" at "x" is its instantaneous rate of change (the slope of the graph's tangent line), which may be obtained by taking the limit of the ratio Δ"y"/Δ"x" as Δ"x" becomes arbitrarily small. Differentials are also compatible with dimensional analysis, where a differential such as "dx" has the same dimensions as the variable "x". Calculus evolved into a distinct branch of mathematics during the 17th century CE, although there were antecedents going back to antiquity. The presentations of, e.g., Newton and Leibniz were marked by non-rigorous definitions of terms like differential, fluent and "infinitely small". While many of the arguments in Bishop Berkeley's 1734 The Analyst are theological in nature, modern mathematicians acknowledge the validity of his argument against "the Ghosts of departed Quantities"; however, the modern approaches do not have the same technical issues. Despite the lack of rigor, immense progress was made in the 17th and 18th centuries. In the 19th century, Cauchy and others gradually developed the epsilon–delta approach to continuity, limits and derivatives, giving a solid conceptual foundation for calculus. 
In the 20th century, several new concepts in, e.g., multivariable calculus, differential geometry, seemed to encapsulate the intent of the old terms, especially "differential"; both differential and infinitesimal are used with new, more rigorous, meanings. Differentials are also used in the notation for integrals because an integral can be regarded as an infinite sum of infinitesimal quantities: the area under a graph is obtained by subdividing the graph into infinitely thin strips and summing their areas. In an expression such as formula_2 the integral sign (which is a modified long s) denotes the infinite sum, "f"("x") denotes the "height" of a thin strip, and the differential "dx" denotes its infinitely thin width. Approaches. There are several approaches for making the notion of differentials mathematically precise. These approaches are very different from each other, but they have in common the idea of being "quantitative", i.e., saying not just that a differential is infinitely small, but "how" small it is. Differentials as linear maps. There is a simple way to make precise sense of differentials, first used on the Real line by regarding them as linear maps. It can be used on formula_3, formula_4, a Hilbert space, a Banach space, or more generally, a topological vector space. The case of the Real line is the easiest to explain. This type of differential is also known as a covariant vector or cotangent vector, depending on context. Differentials as linear maps on R. Suppose formula_5 is a real-valued function on formula_3. We can reinterpret the variable formula_6 in formula_5 as being a function rather than a number, namely the identity map on the real line, which takes a real number formula_7 to itself: formula_8. Then formula_5 is the composite of formula_9 with formula_6, whose value at formula_7 is formula_10. The differential formula_11 (which of course depends on formula_9) is then a function whose value at formula_7 (usually denoted formula_12) is not a number, but a linear map from formula_3 to formula_3. Since a linear map from formula_3 to formula_3 is given by a formula_13 matrix, it is essentially the same thing as a number, but the change in the point of view allows us to think of formula_12 as an infinitesimal and "compare" it with the "standard infinitesimal" formula_14, which is again just the identity map from formula_3 to formula_3 (a formula_13 matrix with entry formula_15). The identity map has the property that if formula_16 is very small, then formula_17 is very small, which enables us to regard it as infinitesimal. The differential formula_12 has the same property, because it is just a multiple of formula_14, and this multiple is the derivative formula_18 by definition. We therefore obtain that formula_19, and hence formula_20. Thus we recover the idea that formula_21 is the ratio of the differentials formula_22 and formula_23. This would just be a trick were it not for the fact that: Differentials as linear maps on Rn. If formula_9 is a function from formula_4 to formula_3, then we say that formula_9 is "differentiable" at formula_24 if there is a linear map formula_12 from formula_4 to formula_3 such that for any formula_25, there is a neighbourhood formula_26 of formula_7 such that for formula_27, formula_28 We can now use the same trick as in the one-dimensional case and think of the expression formula_29 as the composite of formula_9 with the standard coordinates formula_30 on formula_4 (so that formula_31 is the formula_32-th component of formula_24). 
Then the differentials formula_33 at a point formula_7 form a basis for the vector space of linear maps from formula_4 to formula_3 and therefore, if formula_9 is differentiable at formula_7, we can write "formula_34" as a linear combination of these basis elements: formula_35 The coefficients formula_36 are (by definition) the partial derivatives of formula_9 at formula_7 with respect to formula_30. Hence, if formula_9 is differentiable on all of formula_4, we can write, more concisely: formula_37 In the one-dimensional case this becomes formula_38 as before. This idea generalizes straightforwardly to functions from formula_4 to formula_39. Furthermore, it has the decisive advantage over other definitions of the derivative that it is invariant under changes of coordinates. This means that the same idea can be used to define the differential of smooth maps between smooth manifolds. Aside: Note that the existence of all the partial derivatives of formula_5 at formula_6 is a necessary condition for the existence of a differential at formula_6. However, it is not a sufficient condition. For counterexamples, see Gateaux derivative. Differentials as linear maps on a vector space. The same procedure works on a vector space with enough additional structure to reasonably talk about continuity. The most concrete case is a Hilbert space, also known as a complete inner product space, where the inner product and its associated norm define a suitable concept of distance. The same procedure works for a Banach space, also known as a complete normed vector space. However, for a more general topological vector space, some of the details are more abstract because there is no concept of distance. For the important case of finite dimension, any inner product space is a Hilbert space, any normed vector space is a Banach space and any topological vector space is complete. As a result, one can define a coordinate system from an arbitrary basis and use the same technique as for formula_4. Differentials as germs of functions. This approach works on any differentiable manifold. If U and V are open sets containing p, and formula_40 and formula_41 are continuous functions, then f is equivalent to g at p, denoted formula_42, if and only if there is an open formula_43 containing p such that formula_44 for every x in W. The germ of f at p, denoted formula_45, is the set of all real continuous functions equivalent to f at p; if f is smooth at p then formula_45 is a smooth germ. If formula_46, formula_47, formula_48, and formula_49 are open sets containing p, the functions formula_50, formula_51, formula_52, and formula_53 are smooth, r is a real number, formula_54, and formula_55, then formula_56, formula_57, and formula_58. This shows that the germs at p form an algebra. Define formula_59 to be the set of all smooth germs vanishing at p and formula_60 to be the product of ideals formula_61. Then a differential at p (cotangent vector at p) is an element of formula_62. The differential of a smooth function f at p, denoted formula_63, is formula_64. A similar approach is to define differential equivalence of first order in terms of derivatives in an arbitrary coordinate patch. Then the differential of f at p is the set of all functions differentially equivalent to formula_65 at p. Algebraic geometry. In algebraic geometry, differentials and other infinitesimal notions are handled in a very explicit way by accepting that the coordinate ring or structure sheaf of a space may contain nilpotent elements. The simplest example is the ring of dual numbers R["ε"], where "ε"2 = 0. This can be motivated by the algebro-geometric point of view on the derivative of a function "f" from R to R at a point "p". For this, note first that "f" − "f"("p") belongs to the ideal "I""p" of functions on R which vanish at "p". 
If the derivative "f" vanishes at "p", then "f" − "f"("p") belongs to the square "I""p"2 of this ideal. Hence the derivative of "f" at "p" may be captured by the equivalence class ["f" − "f"("p")] in the quotient space "I""p"/"I""p"2, and the 1-jet of "f" (which encodes its value and its first derivative) is the equivalence class of "f" in the space of all functions modulo "I""p"2. Algebraic geometers regard this equivalence class as the "restriction" of "f" to a "thickened" version of the point "p" whose coordinate ring is not R (which is the quotient space of functions on R modulo "I""p") but R["ε"] which is the quotient space of functions on R modulo "I""p"2. Such a thickened point is a simple example of a scheme. Algebraic geometry notions. Differentials are also important in algebraic geometry, and there are several important notions. Synthetic differential geometry. A fifth approach to infinitesimals is the method of synthetic differential geometry or smooth infinitesimal analysis. This is closely related to the algebraic-geometric approach, except that the infinitesimals are more implicit and intuitive. The main idea of this approach is to replace the category of sets with another category of "smoothly varying sets" which is a topos. In this category, one can define the real numbers, smooth functions, and so on, but the real numbers "automatically" contain nilpotent infinitesimals, so these do not need to be introduced by hand as in the algebraic geometric approach. However the logic in this new category is not identical to the familiar logic of the category of sets: in particular, the law of the excluded middle does not hold. This means that set-theoretic mathematical arguments only extend to smooth infinitesimal analysis if they are "constructive" (e.g., do not use proof by contradiction). Constuctivists regard this disadvantage as a positive thing, since it forces one to find constructive arguments wherever they are available. Nonstandard analysis. The final approach to infinitesimals again involves extending the real numbers, but in a less drastic way. In the nonstandard analysis approach there are no nilpotent infinitesimals, only invertible ones, which may be viewed as the reciprocals of infinitely large numbers. Such extensions of the real numbers may be constructed explicitly using equivalence classes of sequences of real numbers, so that, for example, the sequence (1, 1/2, 1/3, ..., 1/"n", ...) represents an infinitesimal. The first-order logic of this new set of hyperreal numbers is the same as the logic for the usual real numbers, but the completeness axiom (which involves second-order logic) does not hold. Nevertheless, this suffices to develop an elementary and quite intuitive approach to calculus using infinitesimals, see transfer principle. Differential geometry. The notion of a differential motivates several concepts in differential geometry (and differential topology). Other meanings. The term "differential" has also been adopted in homological algebra and algebraic topology, because of the role the exterior derivative plays in de Rham cohomology: in a cochain complex formula_66 the maps (or "coboundary operators") "di" are often called differentials. Dually, the boundary operators in a chain complex are sometimes called "codifferentials". The properties of the differential also motivate the algebraic notions of a "derivation" and a "differential algebra". Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Citations. 
&lt;templatestyles src="Reflist/styles.css" /&gt; References.
[ { "math_id": 0, "text": "dy = \\frac{dy}{dx} \\,dx," }, { "math_id": 1, "text": "\\frac{dy}{dx} \\," }, { "math_id": 2, "text": "\\int f(x) \\,dx," }, { "math_id": 3, "text": "\\mathbb{R}" }, { "math_id": 4, "text": "\\mathbb{R}^n" }, { "math_id": 5, "text": "f(x)" }, { "math_id": 6, "text": "x" }, { "math_id": 7, "text": "p" }, { "math_id": 8, "text": "x(p)=p" }, { "math_id": 9, "text": "f" }, { "math_id": 10, "text": "f(x(p))=f(p)" }, { "math_id": 11, "text": "\\operatorname{d}f" }, { "math_id": 12, "text": "df_p" }, { "math_id": 13, "text": "1\\times 1" }, { "math_id": 14, "text": "dx_p" }, { "math_id": 15, "text": "1" }, { "math_id": 16, "text": "\\varepsilon" }, { "math_id": 17, "text": "dx_p(\\varepsilon)" }, { "math_id": 18, "text": "f'(p)" }, { "math_id": 19, "text": "df_p=f'(p)\\,dx_p" }, { "math_id": 20, "text": "df=f'\\,dx" }, { "math_id": 21, "text": "f'" }, { "math_id": 22, "text": "df" }, { "math_id": 23, "text": "dx" }, { "math_id": 24, "text": "p\\in\\mathbb{R}^n" }, { "math_id": 25, "text": "\\varepsilon>0" }, { "math_id": 26, "text": "N" }, { "math_id": 27, "text": "x\\in N" }, { "math_id": 28, "text": "\\left|f(x) - f(p) - df_p(x-p)\\right| < \\varepsilon \\left|x-p\\right| ." }, { "math_id": 29, "text": "f(x_1, x_2, \\ldots, x_n)" }, { "math_id": 30, "text": "x_1, x_2, \\ldots, x_n" }, { "math_id": 31, "text": "x_j(p)" }, { "math_id": 32, "text": "j" }, { "math_id": 33, "text": "\\left(dx_1\\right)_p, \\left(dx_2\\right)_p, \\ldots, \\left(dx_n\\right)_p" }, { "math_id": 34, "text": "\\operatorname{d}f_p" }, { "math_id": 35, "text": "df_p = \\sum_{j=1}^n D_j f(p) \\,(dx_j)_p." }, { "math_id": 36, "text": "D_j f(p)" }, { "math_id": 37, "text": "\\operatorname{d}f = \\frac{\\partial f}{\\partial x_1} \\,dx_1 + \\frac{\\partial f}{\\partial x_2} \\,dx_2 + \\cdots +\\frac{\\partial f}{\\partial x_n} \\,dx_n." }, { "math_id": 38, "text": "df = \\frac{df}{dx}dx" }, { "math_id": 39, "text": "\\mathbb{R}^m" }, { "math_id": 40, "text": "f\\colon U\\to \\mathbb{R}" }, { "math_id": 41, "text": "g\\colon V\\to \\mathbb{R}" }, { "math_id": 42, "text": "f \\sim_p g" }, { "math_id": 43, "text": "W \\subseteq U \\cap V" }, { "math_id": 44, "text": "f(x) = g(x)" }, { "math_id": 45, "text": "[f]_p" }, { "math_id": 46, "text": "U_1" }, { "math_id": 47, "text": "U_2" }, { "math_id": 48, "text": "V_1" }, { "math_id": 49, "text": "V_2" }, { "math_id": 50, "text": "f_1\\colon U_1\\to \\mathbb{R}" }, { "math_id": 51, "text": "f_2\\colon U_2\\to \\mathbb{R}" }, { "math_id": 52, "text": "g_1\\colon V_1\\to \\mathbb{R}" }, { "math_id": 53, "text": "g_2\\colon V_2\\to \\mathbb{R}" }, { "math_id": 54, "text": "f_1 \\sim_p g_1" }, { "math_id": 55, "text": "f_2 \\sim_p g_2" }, { "math_id": 56, "text": "r*f_1 \\sim_p r*g_1" }, { "math_id": 57, "text": "f_1+f_2\\colon U_1 \\cap U_2\\to \\mathbb{R} \\sim_p g_1+g_2\\colon V_1 \\cap V_2\\to \\mathbb{R}" }, { "math_id": 58, "text": "f_1*f_2\\colon U_1 \\cap U_2\\to \\mathbb{R} \\sim_p g_1*g_2\\colon V_1 \\cap V_2\\to \\mathbb{R}" }, { "math_id": 59, "text": "\\mathcal{I}_p" }, { "math_id": 60, "text": "\\mathcal{I}_p^2" }, { "math_id": 61, "text": "\\mathcal{I}_p \\mathcal{I}_p" }, { "math_id": 62, "text": "\\mathcal{I}_p/\\mathcal{I}_p^2" }, { "math_id": 63, "text": "\\mathrm d f_p" }, { "math_id": 64, "text": "[f-f(p)]_p/\\mathcal{I}_p^2" }, { "math_id": 65, "text": "f-f(p)" }, { "math_id": 66, "text": "(C_\\bullet, d_\\bullet)," } ]
https://en.wikipedia.org/wiki?curid=677191
67727469
George Marshall (gunner)
George Marshall (1781 – August 2, 1855) was a chemist, pyrotechnist, artillery specialist, author, educator, and gunner in the United States Navy. He fought in the War of 1812, he was part of Commodore Isaac Chauncey's freshwater fleet on Lake Ontario. He served in the United States Navy with distinction for over forty-six years. He achieved the status of master gunner. He was one of the most important naval gunners in U.S. history. He was a 19th-century American scientist who helped build the framework of U.S. naval gunnery education. He wrote "Marshall's Practical Marine Gunnery" in 1821. Early life. George Marshall was born in Rhodes, Ottoman Empire in 1781. By the early 19th century he fled Rhodes and came to the United States. His name was Americanised and American Naval literature lists his origin as Greece as early as 1825 but the country was in the process of fighting for their independence from the Ottoman Empire. Marshall identified himself as a Greek from Greece. He married Phillippi Higgs around 1805, she was from Maryland. Their children included: Sophia, Maria, Eleanor, and George J. He enlisted in the Navy around 1807. He worked at the Washington Navy Yard as a seaman. At the time Thomas Jefferson was president and American hero Sicilian Salvadore Catalano was one of two gunners at the Navy Yard. According to records, Samuel Kelly was the other acting gunner he had one arm. Marshall began his career at the navy yard. Around 1807, Robert Fulton occasionally visited the navy yard to test his torpedo experimentation. The navy yard also conducted advanced cannon research and had a fully functioning steam engine. Catalano was the pilot of the Intrepid during the famous burning of the captured Philadelphia in Tripoli Harbor. America rewarded him for his service. He was invited to join the U.S. Navy. He received the warrant of sailing master and master gunner. Marshall was his student. War of 1812. Marshall became a warrant officer on July 15, 1809. His specialization was gunnery. He continued at the Navy Yard from 1807 to 1813. He was with Catalano and Thomas Tingey. Tingey was the commander of the Navy Yard. The War of 1812 depleted the navy yard of resources and officers. The U.S. armed forces took most of the cannons, ships, and experienced officers. Commander Tingey pleaded with the government and warned them that the navy yard had weakened defenses. Commander Charles G. Ridgely was an American hero who fought with Edward Preble in the First Barbary War. He was looking for glory in the War of 1812. He was assigned to the sloop-of-war Erie. Ridgely's gunners were having a hard time loading the cannons outfitted for his ship. The Washington Navy Yard's gunner was ordered to demonstrate the cannons at Henry Foxall’s Columbia Foundry in Washington. According to the naval contractor, Captain Ridgley's gunners did not know how to properly load and fire the carronade's. Marshall demonstrated the cannons to Captain Ridgley. Captain Ridgley hired Marshall as his Gunner. The Washington Navy Yard's defenses were further disabled with the departure of Gunner Marshall. The ship was first put to sea around March 1814. The war of 1812 continued and the sloop-of-war Erie was forced to return to Baltimore around April 1814. The British set up a strategic blockade outside of coastal Virginia. The ship berthed at Baltimore and was on standby until early 1815. Captain Ridgeley and his crew were reassigned to the Lake Ontario fleet under Commodore Isaac Chauncey. 
Ridgeley was in command of the brig Jefferson and Marshall was his gunner. The crew arrived at Sackett's Harbor in May but the cannons did not arrive until mid-summer. The fleet was delayed but sailed ten days after the Battle of Lundy's Lane. The battle was one of the bloodiest battles of the war. It continued further south of the Niagara River. The battle was now called the Siege of Fort Erie. Marshall and the crew were about to gain experience in warfare. The Jefferson, Sylph, and Oneida blockaded the north entrance of the Niagara River. The remaining fleet sailed for Kingston. The three ships blocked British vessels inside of the river and blocked the entrance preventing British supplies and troops from reaching the Niagara River at the Lake Ontario entrance. During this time Washington was invaded by the British and the city was burned. Tingey and Catalano burned the Washington Navy Yard to prevent the British from taking control. One month after the blockade the Jefferson and the other ships joined the fleet. The next assignment was to lure Captain James Lucas Yeo into a conflict. The fleet retired in November because the lake froze. By February the war ended. The crew boarded a ship called the Brig Surpize in New York returning to Baltimore. Regrettably, the ship sank outside the coast of New Jersey. Some of the passengers escaped. Marshall and the remaining crew stayed behind. Marshall and two other officers in an act of bravery tied the remaining pieces of the ship together with whatever they could find and they eventually reached the shore. A small number of the crew drowned. sloop-of-war Erie (1815-1819). The sloop-of-war sailed to Boston and joined Commodore William Bainbridge's squadron in May 1815. They sailed to the Mediterranean in July. The crew arrived slightly after the Second Barbary War. Marshall and the Erie joined the Mediterranean Squadron. They provided gunboat diplomacy and protected American ships bound for trade in Europe. The squadron sailed between the Strait of Gibraltar and the Strait of Sicily. The warships frequented Italy, Mahón, and the neighboring ports. They did not travel to present Greece because it was restricted and some Ottoman ports required special permission. On several occasions, the crew of the sloop-of-war participated in small disputes. The tour was dangerous. Marshall and the crew gained experience. The sloop-of-war remained stationed around the Strait of Gibraltar until late 1819. In one incident while they were returning to the United States. The Captain was ordered to pursue a pirate in the Caribbean. Marshall was back in the United States in early 1820. He was reassigned to the Gosport Navy Yard. Gosport Navy Yard (1821-1824). George Marshall gained experience as a Gunner. Marshall was reassigned to Gosport Navy Yard Portsmouth, VA in January 1821. The Washington Navy Yard was burned during the War of 1812. The Gosport Navy Yard was protected under the command of Commodore John Cassin. The yard was in a crucial location due to the British blockade of the Chesapeake. The U.S. Government enhanced its defenses. According to the 1821 naval rule book, special marines were dispatched to guard U.S. Navy Yards. Marshall was assigned to the crucial location. Captain Arthur Sinclair began his school for midshipmen and Captain William M. Crane was on board a ship in ordinary and Captain Lewis Warrington replaced Commodore John Cassin as the commander. Around 1821, Marshall published "Marshall's Practical Marine Gunnery". 
Captains Warrington, Sinclair, and Crane endorsed his book and recommended it for junior officers, and it became part of the curriculum in Sinclair's school for midshipmen. It was an early scientific handbook covering chemistry, physics, and the duties of the gunner, and it shaped U.S. gunnery education in the 19th century. The book described the equipment required for different types of cannons and pistols and included certificate templates that the gunner needed to fill out for each task. It also contained a time-management table. According to Marshall, it would take one man 37 days to complete a cannon for service; most of the listed tasks took one day, such as packing 100 blank cartridges or fixing 4 skyrockets. The book outlined an early record of the chemical composition of rockets and other practical mixtures, listing over 50 different chemical ingredients for pyrotechnics and describing how to mix them. It also gave the ingredients for smoke bombs, phosphorus of lime, and alum phosphorus, along with a technique to shoot fire at buildings and structures, historically known as Greek fire. The chemistry portion also covered metallic compounds such as gold, silver, tin, and iron, and listed chemical preparations to prove spirits, glue broken glass, remove stains, and dye hammock fabric. The textbook also featured a simple range-estimation equation. The book explained how to estimate the distance of a ship's gun from the sound of its discharge, taking sound to travel at a rate of 1,142 feet per second, the accepted value of the time. Following Marshall's rule, the gunner counted the seconds between seeing the flash of a cannon and hearing its report; multiplying the count by the speed of sound gave the distance to the gun. The book's example considers a nine-second interval, corresponding to a distance of approximately 10,278 feet, or 3,426 yards. The rule can be written as the equation below, where x is the time in seconds and y(x) is the distance in feet. formula_0 The equation is a practical sound-ranging rule rather than a treatment of projectile motion; it involves neither gravity nor a launch angle Θ. North Carolina 74 (1825-1827). After four years of service at the navy yard, and owing to his technical expertise, he was assigned to the ship of the line North Carolina 74. Around this time Greece was fighting for its independence from the Ottoman Empire. Marshall was in charge of the gunners on board; the gun crews numbered roughly 600-800 men and included the gunner's mate, quarter gunner, first gunner, second gunner, gunner's aids, powder boys, armorer, armorer's mate, gunsmith, and yeoman of the powder room. Before the ship's departure in early 1825 it was boarded by President James Monroe, Secretary of the Navy Samuel Southard, Navy Commissioner Charles Morris, and members of Congress. The ship carried a library of some 1,100 books. The North Carolina 74 served in the Mediterranean as flagship of Commodore John Rodgers; his captain was Master Commandant Charles W. Morgan. The ship sailed to the Mediterranean with two notable passengers, Estwick Evans and George Bethune English, who were traveling to aid war-torn Greece. The warship visited many of the Greek islands, including Paros, Milos, and Mytilini. 
Seaman sent letters back from the ship and they were published in American newspapers. Many of the seamen sympathized with the Greek cause. In one account a Greek commander boarded the North Carolina and was overwhelmed by the massive ship. The Mediterranean Squadron was celebrating the Fourth of July in the summer of 1826 off of the island of Tenedos. A large Ottoman Fleet approached their position and they eventually began to communicate. Recall most of the Ottoman ports were restricted until the Greek War of Independence. Commander John Rodgers had a historic meeting with the Kapudan Pasha. The Kapudan was the highest-ranking naval officer in the Ottoman Navy. Rogers and the Pasha met in Mytilini where they began trade talks. Eventually, around the time, Greece was recognized as a country American ships were allowed in Ottoman Ports under the Ottoman-American Treaty. Many refugees came to the United States namely: George Siran, John Celivergos Zachos, Gregory Anthony Perdicaris, Christophoros Plato Castanes, and Evangelinos Apostolides Sophocles. Americans began to witness the horrors of Greek slavery. This was used by American abolitionists. A notable Greek slave was Garafilia Mohalbi. Later life. Marshall was back with his wife Phillippi and his four children. His son George J Marshall was around 2 years old. He was reassigned to the Washington Navy Yard (1827-1832). Marshall was reunited with Commander Tingey and his Mentor Sicilian Salvadore Catalano. Marshall was the gunner of the navy yard. Catalano was in the Ordnance Department at the yard. Marshall and Catalano were both training gunners at the navy yard. Commander Thomas Tingey died and American hero Captain Isaac Hull replaced him. Two years later, Marshall was reassigned to the Gosport Navy Yard. Marshall met a young man named George Sirian he was also Greek from the island Psara. In 1835, Marshall's eldest daughter Sophia married Samuel G. City. He was also a gunner in the U.S. Navy. His son George J. began his training along with George Sirian. Both of the young men received warrants as gunners and were in command of gunnery on naval ships as teenagers. Marshall began to accumulate a family of gunners. By 1840, George Sirian married his third daughter Eleanor Elizabeth Marshall. One year later, the U.S. Navy promoted Marshall to Master Gunner. It was the highest rank a gunner could obtain. Around this time Sirian was assigned to the Washington Navy Yard with Marshall's mentor Master Gunner Salvador Catalano. He trained with him. Recall, Captain William M. Crane endorsed Marshall's book. He invited Master Gunner Marshall to the Bureau of Ordnance and Hydrography (1842–1862) Crane was the commander. The Bureau was located at the navy yard. Marshall was also the gunner of the yard. His son-in-law Samuel G. City was in the gunner's loft with him. Catalano died serving the U.S. Navy at 79 years old at Washington Navy Yard. Marshall resigned from the navy four months later in 1846. Records indicate his resignation was rescinded as soon as the resignation was tendered. His circle of captains and bureaucratic officials did not allow him to resign due to his hi-level of expertise in the science of gunnery. Marshall was reassigned to Washington Navy Yard from 1847 to 1848. Greek American U.S. Navy Chaplain and American Abolitionist Photius Fisk was stationed at the navy yard around the same period. Marshall‘s son George J. and son-in-law George Sirian both participated in the Mexican-American War. 
While serving on the sloop John Adams, his son G. J. Marshall died of yellow fever. Marshall returned to Gosport in 1849, where Lewis Warrington, who had endorsed Marshall's book and been his commander for many years, was now chief of the Bureau of Ordnance and Hydrography (1842–1862). In 1851, David Farragut, who later became the Navy's first full admiral, and Marshall each planted an oak tree outside the commandant's office at the Gosport Navy Yard; the yard was then commanded by Silas H. Stringham. Gosport Navy Yard was struck by yellow fever in 1855. Marshall died at the naval hospital on August 2, 1855; his sister-in-law died four days later. The yard was later renamed the Norfolk Navy Yard and is now one of the largest shipyards in the world, specializing in repairing, overhauling, and modernizing ships and submarines. It is the oldest and largest industrial facility belonging to the U.S. Navy. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
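A minimal sketch of the sound-ranging rule from "Marshall's Practical Marine Gunnery" described above; Python is used only for illustration, and the function name and the conversion to yards are illustrative choices, not taken from the book.

```python
# Sound-ranging rule from Marshall's 1821 gunnery text:
# distance (feet) = 1142 feet/second * seconds counted between
# seeing the flash of a gun and hearing its report.

SPEED_OF_SOUND_FT_PER_S = 1142  # value used in the book

def range_from_flash_to_report(seconds: float) -> tuple[float, float]:
    """Return the estimated range in feet and in yards."""
    feet = SPEED_OF_SOUND_FT_PER_S * seconds
    return feet, feet / 3.0

if __name__ == "__main__":
    feet, yards = range_from_flash_to_report(9)
    # Reproduces the book's example: 9 seconds -> 10,278 feet = 3,426 yards.
    print(f"{feet:,.0f} feet  ({yards:,.0f} yards)")
```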
[ { "math_id": 0, "text": "y(x) = 1142x" } ]
https://en.wikipedia.org/wiki?curid=67727469
67732445
Jelly roll (options)
Option trading strategy A jelly roll, or simply a roll, is an options trading strategy that captures the cost of carry of the underlying asset while remaining otherwise neutral. It is often used to take a position on dividends or interest rates, or to profit from mispriced calendar spreads. A jelly roll consists of a long call and a short put with one expiry date, and a long put and a short call with a different expiry date, all at the same strike price. In other words, a trader combines a synthetic long position at one expiry date with a synthetic short position at another expiry date. Equivalently, the trade can be seen as a combination of a long time spread and a short time spread, one with puts and one with calls, at the same strike price. The value of a call time spread (composed of a long call option and a short call option at the same strike price but with different expiry dates) and the corresponding put time spread should be related by put-call parity, with the difference in price explained by the effect of interest rates and dividends. If this expected relationship does not hold, a trader can profit from the difference either by buying the call spread and selling the put spread (a long jelly roll) or by selling the call spread and buying the put spread (a short jelly roll). Where this arbitrage opportunity exists, it is typically small, and retail traders are unlikely to be able to profit from it due to transaction costs. All four options must be for the same underlying at the same strike price. For example, a position composed of options on futures is not a true jelly roll if the underlying futures have different expiry dates. The jelly roll is a neutral position with no delta, gamma, theta, or vega. However, it is sensitive to interest rates and dividends. Value. Disregarding interest on dividends, the theoretical value of a jelly roll on European options is given by the formula: formula_0 where formula_1 is the value of the jelly roll, formula_2 is the strike price, formula_3 is the value of any dividends, formula_4 and formula_5 are the times to expiry, and formula_6 and formula_7 are the effective interest rates to time formula_4 and formula_5 respectively. Assuming a constant interest rate, this formula can be approximated by formula_8. This theoretical value formula_1 should be equal to the difference between the price of the call time spread (formula_9) and the price of the put time spread (formula_10): formula_11. If that equality does not hold for prices in the market, a trader may be able to profit from the mismatch. Typically the interest component outweighs the dividend component, and as a result the long jelly roll has a positive value (and the value of the call time spread is greater than the value of the put time spread). However, it is possible for the dividend component to outweigh the interest component, in which case the long jelly roll has a negative value, meaning that the value of the put time spread is greater than the value of the call time spread. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
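A short numerical sketch of the valuation formulas above, assuming simple (non-compounded) discounting exactly as in the formula; the strike, rate, expiry and dividend figures are illustrative, not market data.

```python
# Theoretical value of a long jelly roll, following the formula above:
#   JR = K/(1 + r1*t1) - K/(1 + r2*t2) - D
# and its constant-rate approximation JR ~ K*(t2 - t1)*r - D.

def jelly_roll_value(K: float, r1: float, t1: float,
                     r2: float, t2: float, D: float) -> float:
    return K / (1 + r1 * t1) - K / (1 + r2 * t2) - D

def jelly_roll_approx(K: float, r: float, t1: float, t2: float, D: float) -> float:
    return K * (t2 - t1) * r - D

if __name__ == "__main__":
    K, r, t1, t2, D = 100.0, 0.05, 0.25, 0.50, 0.40  # illustrative inputs
    exact = jelly_roll_value(K, r, t1, r, t2, D)
    approx = jelly_roll_approx(K, r, t1, t2, D)
    print(f"exact  JR = {exact:.4f}")   # close to the approximation
    print(f"approx JR = {approx:.4f}")
    # In a consistently priced market the call time spread should exceed
    # the put time spread by this amount: CTS - PTS = JR.
```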
[ { "math_id": 0, "text": "JR = \\frac{K}{1+r_1\\cdot t_1} - \\frac{K}{1+r_2\\cdot t_2} - D" }, { "math_id": 1, "text": "JR" }, { "math_id": 2, "text": "K" }, { "math_id": 3, "text": "D" }, { "math_id": 4, "text": "t_1" }, { "math_id": 5, "text": "t_2" }, { "math_id": 6, "text": "r_1" }, { "math_id": 7, "text": "r_2" }, { "math_id": 8, "text": "JR = K \\cdot (t_2 - t_1) \\cdot r - D" }, { "math_id": 9, "text": "CTS" }, { "math_id": 10, "text": "PTS" }, { "math_id": 11, "text": "CTS - PTS = JR" } ]
https://en.wikipedia.org/wiki?curid=67732445
6773580
Residual (numerical analysis)
Loosely speaking, a residual is the error in a result. To be precise, suppose we want to find "x" such that formula_0 Given an approximation "x"0 of "x", the residual is formula_1 that is, "what is left of the right-hand side" after subtracting "f"("x"0) (thus the name "residual": what is left, the rest). On the other hand, the error is formula_2 If the exact value of "x" is not known, the residual can be computed, whereas the error cannot. Residual of the approximation of a function. Similar terminology is used when dealing with differential, integral and functional equations. For the approximation formula_3 of the solution formula_4 of the equation formula_5 the residual can be taken either as the function formula_6, or as the maximum of the norm of this difference formula_7 over the domain formula_8, where the function formula_3 is expected to approximate the solution formula_9, or as some integral of a function of the difference, for example: formula_10 In many cases, the smallness of the residual means that the approximation is close to the solution, i.e., formula_11 In these cases, the initial equation is considered well-posed, and the residual can be regarded as a measure of the deviation of the approximation from the exact solution. Use of residuals. When one does not know the exact solution, one may look for the approximation with a small residual. Residuals appear in many areas of mathematics, including iterative solvers such as the generalized minimal residual method, which seeks solutions to equations by systematically minimizing the residual. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
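A brief sketch of the distinction drawn above between the residual (always computable) and the error (which requires the exact solution), using a small linear system viewed as f(x) = Ax = b; the matrix and vectors are made-up illustrative data.

```python
import numpy as np

# For f(x) = A @ x and a target b, the residual of an approximation x0 is
# b - A @ x0, which can always be computed; the error x - x0 needs the
# (usually unknown) exact solution x.

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

x0 = np.array([0.2, 0.6])           # some approximation of the solution
residual = b - A @ x0               # computable without knowing x
print("residual:", residual, " norm:", np.linalg.norm(residual))

x_exact = np.linalg.solve(A, b)     # only available here because the
error = x_exact - x0                # problem is small enough to solve exactly
print("error   :", error, " norm:", np.linalg.norm(error))
```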
[ { "math_id": 0, "text": "f(x)=b." }, { "math_id": 1, "text": "b - f(x_0)" }, { "math_id": 2, "text": "x - x_0" }, { "math_id": 3, "text": "f_\\text{a}" }, { "math_id": 4, "text": "f" }, { "math_id": 5, "text": " T(f)(x)=g(x) \\, ," }, { "math_id": 6, "text": "~g(x)~ - ~T(f_\\text{a})(x)" }, { "math_id": 7, "text": "\\max_{x\\in \\mathcal X} |g(x)-T(f_\\text{a})(x)| " }, { "math_id": 8, "text": "\\mathcal X" }, { "math_id": 9, "text": " f " }, { "math_id": 10, "text": "\\int_{\\mathcal X} |g(x)-T(f_\\text{a})(x)|^2~ \\mathrm dx." }, { "math_id": 11, "text": "\\left|\\frac{f_\\text{a}(x) - f(x)}{f(x)}\\right| \\ll 1." } ]
https://en.wikipedia.org/wiki?curid=6773580
6774211
Parallax barrier
3D imaging device A parallax barrier is a device placed in front of an image source, such as a liquid crystal display, to allow it to show a stereoscopic or multiscopic image without the need for the viewer to wear 3D glasses. Placed in front of the normal LCD, it consists of an opaque layer with a series of precisely spaced slits, allowing each eye to see a different set of pixels, so creating a sense of depth through parallax in an effect similar to what lenticular printing produces for printed products and lenticular lenses for other displays. A disadvantage of the method in its simplest form is that the viewer must be positioned in a well-defined spot to experience the 3D effect. However, recent versions of this technology have addressed this issue by using face-tracking to adjust the relative positions of the pixels and barrier slits according to the location of the user's eyes, allowing the user to experience the 3D from a wide range of positions. Another disadvantage is that the horizontal pixel count viewable by each eye is halved, reducing the overall horizontal resolution of the image. History. The principle of the parallax barrier was independently invented by Auguste Berthier, who published an article on stereoscopic pictures including his new idea illustrated with a diagram and pictures with purposely exaggerated dimensions of the interlaced image strips, and by Frederic E. Ives, who made and exhibited a functional autostereoscopic image in 1901. About two years later, Ives began selling specimen images as novelties, the first known commercial use. In the early 2000s, Sharp developed the electronic flat-panel application of this old technology to commercialization, briefly selling two laptops with the world's only 3D LCD screens, including the Actius RD3D. These displays are no longer available from Sharp but still being manufactured and further developed from other companies like Tridelity and SpatialView. Similarly, Hitachi has released the first 3D mobile phone for the Japanese market under distribution by KDDI. In 2009, Fujifilm released the Fujifilm FinePix Real 3D W1 digital camera, which features a built-in autostereoscopic LCD measuring 2.8" diagonal. Nintendo has also implemented this technology on its portable gaming console, the Nintendo 3DS. Applications. In addition to films and computer games, the technique has found uses in areas such as molecular modelling and airport security. It is also being used for the navigation system in the 2010-model Range Rover, allowing the driver to view (for example) GPS directions, while a passenger watches a movie. It is also used in the Nintendo 3DS hand-held game console and LG's Optimus 3D and Thrill smartphones, HTC's EVO 3D as well as Sharp's Galapagos smartphone series. The technology is harder to apply for 3D television sets, because of the requirement for a wide range of possible viewing angles. A Toshiba 21-inch 3D display uses parallax barrier technology with 9 pairs of images, to cover a viewing angle of 30 degrees. Design. The slits in the parallax barrier allow the viewer to see only left image pixels from the position of their left eye, right image pixels from the right eye. When choosing the geometry of the parallax barrier the important parameters that need to be optimised are; the pixel – barrier separation d, the parallax barrier pitch f, the pixel aperture a, and the parallax barrier slit width b. Pixel separation. 
The closer the parallax barrier is to the pixels, the wider the angle of separation between the left and right images. For a stereoscopic display the left and right images must hit the left and right eyes, which means the views must be separated by only a few degrees. The pixel- barrier separation "d" for this case can be derived as follows. From Snell’s law: formula_0 For small angles: formula_1 and formula_2 Therefore: formula_3 For a typical auto-stereoscopic display of pixel pitch 65 micrometers, eye separation 63mm, viewing distance 30 cm, and refractive index 1.52, the pixel-barrier separation needs to be about 470 micrometers. Pitch. The pitch of a parallax barrier should ideally be roughly two times the pitch of the pixels, but the optimum design should be slightly less than this. This perturbation to the barrier pitch compensates for the fact that the edges of a display are viewed at a different angle to that of the centre, it enables the left and right images target the eyes appropriately from all positions of the screen. Optimum pixel aperture and barrier slit width. In a parallax barrier system for a high-resolution display, the performance (brightness and crosstalk) can be simulated by Fresnel diffraction theory. From these simulations, the following can be deduced. If the slit width is small, light passing the slits is diffracted heavily causing crosstalk. The brightness of the display is also reduced. If the slit width is large, light passing the slit does not diffract so much, but the wider slits create crosstalk due to geometric ray paths. Therefore, the design suffers more crosstalk. The brightness of the display is increased. Therefore, the best slit width is given by a tradeoff between crosstalk and brightness. Barrier position. Note that the parallax barrier may also be placed behind the LCD pixels. In this case, light from a slit passes the left image pixel in the left direction, and vice versa. This produces the same basic effect as a front parallax barrier. Techniques for switching. In a parallax barrier system, the left eye sees only half the pixels (that is to say the left image pixels) and the same is true for the right eye. Therefore, the resolution of the display is reduced, and so it can be advantageous to make a parallax barrier that can be switched on when 3D is needed or off when a 2D image is required. One method of switching the parallax barrier on and off is to form it from a liquid crystal material, the parallax barrier can then be created similar to the way that an image is formed in a liquid crystal display. Time multiplexing to increase resolution. Time multiplexing provides a means of increasing the resolution of a parallax barrier system. In the design shown each eye is able to see the full resolution of the panel. The design requires a display that can switch fast enough to avoid image flicker as the images swap each frame. Tracking barriers for increased viewing freedom. In a standard parallax barrier system, the viewer must position themselves in an appropriate location so that the left and right eye views can be seen by their left and right eyes respectively. In a ‘tracked 3D system’, the viewing freedom can be increased considerably by tracking the position of the user and adjusting the parallax barrier so that the left and right views are always directed to the user's eyes correctly. 
Identification of the user's viewing angle can be done by using a forward-facing camera above the display and image-processing software that can recognise the position of the user's face. Adjustment of the angle at which the left and right views are projected can be done by mechanically or electronically shifting the parallax barrier relative to the pixels. Crosstalk. Crosstalk is the interference that exists between the left and right views in a 3D display. In a display with high crosstalk, each eye is able to see the image intended for the other eye faintly superimposed. The perception of crosstalk in stereoscopic displays has been studied widely, and it is generally acknowledged that high levels of crosstalk in a stereoscopic display are detrimental. The effects of crosstalk in an image include ghosting and loss of contrast, loss of 3D effect and depth resolution, and viewer discomfort. The visibility of crosstalk (ghosting) increases with increasing contrast and increasing binocular parallax of the image. For example, a stereoscopic image with high contrast will exhibit more ghosting on a particular stereoscopic display than will an image with low contrast. Measurement. A technique to quantify the level of crosstalk from a 3D display involves measuring the percentage of light that deviates from one view to the other. The crosstalk in a typical parallax-barrier-based 3D system at the best eye position might be 3%. Subjective tests of 3D image quality conclude that, for high-quality 3D, crosstalk should be 'no greater than around 1 to 2%'. Causes and countermeasures. Diffraction can be a major cause of crosstalk. Theoretical simulations of diffraction have been found to be a good predictor of experimental crosstalk measurements in emulsion parallax barrier systems. These simulations predict that the amount of crosstalk caused by the parallax barrier is highly dependent on the sharpness of the edges of the slits. For example, if the transmission of the barrier goes sharply from opaque to transparent as it moves from barrier to slit, this produces a wide diffraction pattern and consequently more crosstalk; if the transition is smoother, the diffraction does not spread so widely and less crosstalk is produced. This prediction is consistent with experimental results for a slightly soft-edged barrier (whose pitch was 182 micrometers, slit width was 48 micrometers, and transition between opaque and transmissive occurred over a region of about 3 micrometers). The slightly soft-edged barrier had a crosstalk of 2.3%, slightly lower than the roughly 2.7% measured for a harder-edged barrier. The diffraction simulations also suggest that if the slit edges had a transmission that decreases over a 10-micrometer region, crosstalk could fall as low as about 0.1%. Image processing is an alternative crosstalk countermeasure, in which the displayed images are pre-corrected to compensate for the expected leakage between views. Autostereoscopic displays based on a parallax barrier are commonly divided into three main types. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
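A small sketch of the pixel-barrier separation formula d = rnp/e from the Design section above, reproducing the worked example given there (65-micrometer pixel pitch, 63 mm eye separation, 30 cm viewing distance, refractive index 1.52); the function and parameter names are illustrative.

```python
# Pixel-barrier separation for a two-view parallax barrier, d = r*n*p / e,
# from the small-angle form of Snell's law given in the Design section.

def pixel_barrier_separation(view_dist_m: float, refractive_index: float,
                             pixel_pitch_m: float, eye_sep_m: float) -> float:
    return view_dist_m * refractive_index * pixel_pitch_m / eye_sep_m

if __name__ == "__main__":
    d = pixel_barrier_separation(view_dist_m=0.30,
                                 refractive_index=1.52,
                                 pixel_pitch_m=65e-6,
                                 eye_sep_m=63e-3)
    # Matches the article's figure of roughly 470 micrometers.
    print(f"pixel-barrier separation ~ {d * 1e6:.0f} micrometers")
    # The barrier pitch is then chosen close to (slightly less than) twice
    # the pixel pitch so the views stay aligned across the whole screen.
```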
[ { "math_id": 0, "text": "n \\sin x = \\sin y" }, { "math_id": 1, "text": "\\sin y \\approx \\frac {e} {2 r}" }, { "math_id": 2, "text": "\\sin x \\approx \\frac {p} {2 d} \\,." }, { "math_id": 3, "text": "d = \\frac {rnp} {e} \\,." } ]
https://en.wikipedia.org/wiki?curid=6774211
67752523
Jury theorem
Mathematical theory of majority voting A jury theorem is a mathematical theorem proving that, under certain assumptions, a decision attained using majority voting in a large group is more likely to be correct than a decision attained by a single expert. It serves as a formal argument for the idea of wisdom of the crowd, for decision of questions of fact by jury trial, and for democracy in general. The first and most famous jury theorem is Condorcet's jury theorem. It assumes that all voters have independent probabilities to vote for the correct alternative, these probabilities are larger than 1/2, and are the same for all voters. Under these assumptions, the probability that the majority decision is correct is strictly larger when the group is larger; and when the group size tends to infinity, the probability that the majority decision is correct tends to 1. There are many other jury theorems, relaxing some or all of these assumptions. Setting. The premise of all jury theorems is that there is an "objective truth", which is unknown to the voters. Most theorems focus on "binary issues" (issues with two possible states), for example, whether a certain defendant is guilty or innocent, whether a certain stock is going to rise or fall, etc. There are formula_0 voters (or jurors), and their goal is to reveal the truth. Each voter has an "opinion" about which of the two options is correct. The opinion of each voter is either correct (i.e., equals the true state), or wrong (i.e., differs than the true state). This is in contrast to other settings of voting, in which the opinion of each voter represents his/her subjective preferences and is thus always "correct" for this specific voter. The opinion of a voter can be considered a random variable: for each voter, there is a positive probability that his opinion equals the true state. The group decision is determined by the "majority rule". For example, if a majority of voters says "guilty" then the decision is "guilty", while if a majority says "innocent" then the decision is "innocent". To avoid ties, it is often assumed that the number of voters formula_0 is odd. Alternatively, if formula_0 is even, then ties are broken by tossing a fair coin. Jury theorems are interested in the "probability of correctness" - the probability that the majority decision coincides with the objective truth. Typical jury theorems make two kinds of claims on this probability: Claim 1 is often called the "non-asymptotic part" and claim 2 is often called the "asymptotic part" of the jury theorem. Obviously, these claims are not always true, but they are true under certain assumptions on the voters. Different jury theorems make different assumptions. Independence, competence, and uniformity. Condorcet's jury theorem makes the following three assumptions: The jury theorem of Condorcet says that these three assumptions imply Growing Reliability and Crowd Infallibility. Correlated votes: weakening the independence assumption. The opinions of different voters are often correlated, so Unconditional Independence may not hold. In this case, the Growing Reliability claim might fail. Example. Let formula_1 be the probability of a juror voting for the correct alternative and formula_2 be the (second-order) "correlation coefficient" between any two correct votes. 
If all higher-order correlation coefficients in the Bahadur representation of the joint probability distribution of votes equal to zero, and formula_3 is an admissible pair, then the probability of the jury collectively reaching the correct decision under simple majority is given by: formula_4 where formula_5 is the "regularized incomplete beta function". "Example:" Take a jury of three jurors formula_6, with individual competence formula_7 and second-order correlation formula_8. Then formula_9. The competence of the jury is lower than the competence of a single juror, which equals to formula_10. Moreover, enlarging the jury by two jurors formula_11 decreases the jury competence even further, formula_12. Note that formula_7 and formula_8 is an admissible pair of parameters. For formula_13 and formula_7, the maximum admissible second-order correlation coefficient equals formula_14. The above example shows that when the individual competence is low but the correlation is high: The above result is due to Kaniovski and Zaigraev. They also discuss optimal jury design for homogenous juries with correlated votes. There are several jury theorems that weaken the Independence assumption in various ways. Truth-sensitive independence and competence. In binary decision problems, there is often one option that is easier to detect that the other one. For example, it may be easier to detect that a defendant is guilty (as there is clear evidence for guilt) than to detect that he is innocent. In this case, the probability that the opinion of a single voter is correct is represented by two different numbers: probability given that option #1 is correct, and probability given that option #2 is correct. This also implies that opinions of different voters are correlated. This motivates the following relaxations of the above assumptions: Growing Reliability and Crowd Infallibility continue to hold under these weaker assumptions. One criticism of Conditional Competence is that it depends on the way the decision question is formulated. For example, instead of asking whether the defendant is guilty or innocent, one can ask whether the defendant is guilty of exactly 10 charges (option A), or guilty of another number of charges (0..9 or more than 11). This changes the conditions, and hence, the conditional probability. Moreover, if the state is very specific, then the probability of voting correctly might be below 1/2, so Conditional Competence might not hold. Effect of an opinion leader. Another cause of correlation between voters is the existence of an opinion leader. Suppose each voter makes an independent decision, but then each voter, with some fixed probability, changes his opinion to match that of the opinion leader. Jury theorems by Boland and Boland, Proschan and Tong shows that, if (and only if) the probability of following the opinion leader is less than 1-1/2"p" (where "p" is the competence level of all voters), then Crowd Infallibility holds. Problem-sensitive independence and competence. In addition to the dependence on the true option, there are many other reasons for which voters' opinions may be correlated. For example: It is possible to weaken the Conditional Independence assumption, and conditionalize on "all" common causes of the votes (rather than just the state). In other words, the votes are now independent "conditioned on the specific decision problem". However, in a specific problem, the Conditional Competence assumption may not be valid. 
For example, in a specific problem with false evidence, it is likely that most voters will have a wrong opinion. Thus, the two assumptions - conditional independence and conditional competence - are not justifiable simultaneously (under the same conditionalization). A possible solution is to weaken Conditional Competence as follows. For each voter and each problem "x", there is a probability "p"("x") that the voter's opinion is correct in this specific problem. Since "x" is a random variable, "p"("x") is a random variable too. Conditional Competence requires that "p"("x") &gt; 1/2 with probability 1. The weakened assumption is: A jury theorem by Dietrich and Spiekerman says that Conditional Independence, Tendency to Competence, and Conditional Uniformity, together imply Growing Reliability. Note that Crowd Infallibility is not implied. In fact, the probability of correctness tends to a value which is below 1, if and only of Conditional Competence does not hold. Bounded correlation. A jury theorem by Pivato shows that, if the average covariance between voters becomes small as the population becomes large, then Crowd Infallibility holds (for some voting rule). There are other jury theorems that take into account the degree to which votes may be correlated. Other solutions. Other ways to cope with voter correlation include causal networks, dependence structures, and interchangeability.2.2 Diverse capabilities: weakening the uniformity assumption. Different voters often have different competence levels, so the Uniformity assumption does not hold. In this case, both Growing Reliability and Crowd Infallibility may not hold. This may happen if new voters have much lower competence than existing voters, so that adding new voters decreases the group's probability of correctness. In some cases, the probability of correctness might converge to 1/2 (- a random decision) rather than to 1. Stronger competence requirements. Uniformity can be dismissed if the Competence assumption is strengthened. There are several ways to strengthen it: Random voter selection. instead of assuming that the voter identity is fixed, one can assume that there is a large pool of potential voters with different competence levels, and the actual voters are selected at random from this pool (as in sortition). A jury theorem by Ben Yashar and Paroush shows that, under certain conditions, the correctness probability of a jury, or of a subset of it chosen at random, is larger than the correctness probability of a single juror selected at random. A more general jury theorem by Berend and Sapir proves that Growing Reliability holds in this setting: the correctness probability of a random committee increases with the committee size. The theorem holds, under certain conditions, even with correlated votes. A jury theorem by Owen, Grofman and Feld analyzes a setting where the competence level is random. They show what distribution of individual competence maximizes or minimizes the probability of correctness. Weighted majority rule. When the competence levels of the voters are known, the simple majority rule may not be the best decision rule. There are various works on identifying the "optimal decision rule" - the rule maximizing the group correctness probability. 
Nitzan and Paroush show that, under Unconditional Independence, the optimal decision rule is a "weighted" majority rule, where the weight of each voter with correctness probability "pi" is log("pi"/(1-"pi")), and an alternative is selected if the sum of weights of its supporters is above some threshold. Grofman and Shapley analyze the effect of interdependencies between voters on the optimal decision rule. Ben-Yashar and Nitzan prove a more general result. Dietrich generalizes this result to a setting that does not require prior probabilities of the 'correctness' of the two alternative. The only required assumption is Epistemic Monotonicity, which says that, if under certain profile alternative "x" is selected, and the profile changes such that "x" becomes more probable, then x is still selected. Dietrich shows that Epistemic Monotonicity implies that the optimal decision rule is weighted majority with a threshold. In the same paper, he generalizes the optimal decision rule to a setting that does not require the input to be a vote for one of the alternatives. It can be, for example, a subjective degree of belief. Moreover, competence parameters do not need to be known. For example, if the inputs are subjective beliefs "x"1...,"xn", then the optimal decision rule sums log("xi"/(1-"xi")) and checks whether the sum is above some threshold. Epistemic Monotonicity is not sufficient for computing the threshold itself; the threshold can be computed by assuming expected-utility maximization and prior probabilities. A general problem with the weighted majority rules is that they require to know the competence levels of the different voters, which is usually hard to compute in an objective way. Baharad, Goldberger, Koppel and Nitzan present an algorithm that solves this problem using statistical machine learning. It requires as input only a list of past votes; it does not need to know whether these votes were correct or not. If the list is sufficiently large, then its probability of correctness converges to 1 even if the individual voters' competence levels are close to 1/2. More than two options. Often, decision problems involve three or more options. This critical limitation was in fact recognized by Condorcet (see Condorcet's paradox), and in general it is very difficult to reconcile individual decisions between three or more outcomes (see Arrow's theorem). This limitation may also be overcome by means of a sequence of votes on pairs of alternatives, as is commonly realized via the legislative amendment process. (However, as per Arrow's theorem, this creates a "path dependence" on the exact sequence of pairs of alternatives; e.g., which amendment is proposed first can make a difference in what amendment is ultimately passed, or if the law—with or without amendments—is passed at all.) With three or more options, Conditional Competence can be generalized as follows: A jury theorem by List and Goodin shows that Multioption Conditional Competence and Conditional Independence together imply Crowd Infallibility. Dietrich and Spiekermann conjecture that they imply Growing Reliability too. Another related jury theorem is by Everaere, Konieczny and Marquis. When there are more than two options, there are various voting rules that can be used instead of simple majority. The statistic and utilitarian properties of such rules are analyzed e.g. by Pivato. Indirect majority systems. Condorcet's theorem considers a "direct majority system", in which all votes are counted directly towards the final outcome. 
Many countries use an "indirect majority system", in which the voters are divided into groups. The voters in each group decide on an outcome by an internal majority vote; then, the groups decide on the final outcome by a majority vote among them. For example, suppose there are 15 voters. In a direct majority system, a decision is accepted whenever at least 8 votes support it. Suppose now that the voters are grouped into 3 groups of size 5 each. A decision is accepted whenever at least 2 groups support it, and in each group, a decision is accepted whenever at least 3 voters support it. Therefore, a decision may be accepted even if only 6 voters support it. Boland, Proschan and Tong prove that, when the voters are independent and "p" > 1/2, a direct majority system - as in Condorcet's theorem - always has a higher chance of accepting the correct decision than any indirect majority system. Berg and Paroush consider multi-tier voting hierarchies, which may have several levels with different decision-making rules in each level. They study the optimal voting structure, and compare the competence against the benefit of time-saving and other expenses. Goodin and Spiekermann compute the amount by which a small group of experts should be better than the average voters, in order for them to accept better decisions. Strategic voting. It is well known that, when there are three or more alternatives, and voters have different preferences, they may engage in strategic voting, for example, vote for the second-best option in order to prevent the worst option from being elected. Surprisingly, strategic voting might occur even with two alternatives and when all voters have the same preference, which is to reveal the truth. For example, suppose the question is whether a defendant is guilty or innocent, and suppose a certain juror thinks the true answer is "guilty". However, he also knows that his vote is effective only if the other votes are tied. But if the other votes are tied, it means that the probability that the defendant is guilty is close to 1/2. Taking this into account, our juror might decide that this probability is not sufficient for deciding "guilty", and thus will vote "innocent". But if all other voters do the same, the wrong answer is reached. In game-theoretic terms, truthful voting might not be a Nash equilibrium. This problem has been termed "the swing voter's curse", as it is analogous to the winner's curse in auction theory. A jury theorem by Peleg and Zamir gives necessary and sufficient conditions for the existence of a Bayesian-Nash equilibrium that satisfies Condorcet's jury theorem. Bozbay, Dietrich and Peters show voting rules that lead to efficient aggregation of the voters' private information even with strategic voting. In practice, this problem may not be very severe, since most voters care not only about the final outcome, but also about voting correctly by their conscience. Moreover, most voters are not sophisticated enough to vote strategically. Subjective opinions. The notion of "correctness" may not be meaningful when making policy decisions, which are based on values or preferences, rather than just on facts. Some defenders of the theorem hold that it is applicable when voting is aimed at determining which policy best promotes the public good, rather than at merely expressing individual preferences.
On this reading, what the theorem says is that although each member of the electorate may only have a vague perception of which of two policies is better, majority voting has an amplifying effect. The "group competence level", as represented by the probability that the majority chooses the better alternative, increases towards 1 as the size of the electorate grows, assuming that each voter is more often right than wrong. Several papers show that, under reasonable conditions, large groups are better trackers of the majority preference. Applicability. The applicability of jury theorems, in particular Condorcet's Jury Theorem (CJT), to democratic processes is debated, as it can prove majority rule to be either a perfect mechanism or a disaster, depending on individual competence. Recent studies show that, in a non-homogeneous case, the theorem's thesis does not hold almost surely (unless a weighted majority rule is used with stochastic weights that are correlated with epistemic rationality but such that every voter has a minimal weight of one). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "p" }, { "math_id": 2, "text": "c" }, { "math_id": 3, "text": "(p,c)\\in\\mathcal{B}_n" }, { "math_id": 4, "text": "P(n,p,c)=I_p\\left(\\frac{n+1}{2},\\frac{n+1}{2}\\right)+0.5c(n-1)(0.5-p)\\frac{\\partial I_p(\\frac{n+1}{2},\\frac{n+1}{2})}{\\partial p}," }, { "math_id": 5, "text": "I_p" }, { "math_id": 6, "text": "(n=3)" }, { "math_id": 7, "text": "p=0.55" }, { "math_id": 8, "text": "c=0.4" }, { "math_id": 9, "text": "P(3,0.55,0.4)=0.54505" }, { "math_id": 10, "text": "0.55" }, { "math_id": 11, "text": "(n=5)" }, { "math_id": 12, "text": "P(5,0.55,0.4)=0.5196194" }, { "math_id": 13, "text": "n=5" }, { "math_id": 14, "text": "\\approx 0.43" } ]
https://en.wikipedia.org/wiki?curid=67752523
67762866
Economy monetization
National economy metric Economy monetization is a metric of the national economy, reflecting its saturation with liquid assets. The level of monetization is determined both by the development of the national financial system and by the economy as a whole. The monetization of the economy also determines the freedom of capital movement. The important role played by the money supply has long been recognized by scientists. Nevertheless, only approximately 50 years ago did Milton Friedman convincingly show that changes in the quantity of money might have a very serious effect on GDP. Monetization is especially important in low- to middle-income countries, in which it is substantially correlated with per-capita GDP and real interest rates. This fact suggests that supporting an upward monetization trend can be an important policy objective for governments. The reverse concept is called economy demonetization. Monetization coefficient. The monetization coefficient (or ratio) of the economy is an indicator that is equal to the ratio of the money supply aggregate M2 to the gross domestic product (GDP)—both denominated in current prices. The coefficient reflects the proportion of the total goods and services of an economy that is monetized—actually paid for in money by the purchaser—rather than bartered. This is one of the most important characteristics of the level and course of economic development. The ratio can be as low as 10–20% for emerging economies and as high as 100%+ for developed countries. Formula. formula_0 The ratio is, in fact, based on the money demand function of Milton Friedman. This coefficient gives an idea of the degree of financial security of the economy. Many scientific publications calculate not only the indicator M2/GDP but also M3/GDP and M1/GDP. The higher M3/GDP is compared to M1/GDP, the more developed and elaborate the system of non-cash payments and the greater the financial potential of the economy. A small difference indicates that in this country a significant proportion of monetary transactions are carried out in cash, and the banking system is poorly developed. It is impossible to artificially increase the monetization coefficient; its growth is based on a high level of savings within the national financial system and on strengthened confidence in the national economic policy and economic growth. The ability of the state to borrow money in the domestic market and implement social programs depends on the value of the coefficient. The monetization ratio is positively related to expected wealth and negatively related to the opportunity costs of holding money. A high level of economy monetization is typical for developed countries with a well-functioning financial sector. A low level of monetization creates an artificial shortage of capital and, consequently, of investment. This limits economic growth. At the same time, saturating the economy with money when the financial system is undeveloped will only lead to an increase in inflation and, accordingly, an even greater decrease in economy monetization. This is because the additional money supply enters the consumer market, increasing aggregate demand, but does not proportionally affect the level of supply. Economy demonetization. There are two primary nonmonetized sectors in the economy: subsistence and barter.
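Before turning to demonetization, a tiny worked sketch of the coefficients discussed above; all figures are invented for the example:

```python
def monetization_coefficient(money_aggregate, gdp):
    # Ratio of a money-supply aggregate (M1, M2 or M3) to nominal GDP,
    # both in current prices and the same currency unit.
    return money_aggregate / gdp

# Invented figures, in billions of a national currency unit.
gdp, m1, m2, m3 = 2000.0, 400.0, 900.0, 1100.0
for name, aggregate in (("M1", m1), ("M2", m2), ("M3", m3)):
    print(f"{name}/GDP = {monetization_coefficient(aggregate, gdp):.0%}")
# A large gap between M3/GDP and M1/GDP suggests well-developed non-cash payments.
```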
Modern economic publications define economy demonetization as an increase in the share of barter in economic life and the displacement of money as a medium of exchange. Demonetization, as a transition from monetary to barter exchange, often occurs during periods of military operations and hyperinflation, that is, when money loses its natural role in the economy as a measure of value and a means of circulation, accumulation, and payment. Counterintuitively, demonetization can also be observed in peacetime, in the absence of hyperinflation. The microeconomic explanation of demonetization is the hypothesis of so-called "liquidity constraints": when entrepreneurs simply do not have enough money to carry out the necessary transactions, they have to resort to a commodity-for-commodity form of exchange. In the context of financial crises, demonetization is associated with strict state monetary policy. Monetary tightening (higher taxes, lower government spending, a reduction in the money supply to prevent inflation, etc.) leads to a relative stabilization of the financial sector, which, due to a decrease in liquidity, leads to the demonetization of the economy and exacerbates the production crisis. Monetary easing, in turn, exacerbates the financial crisis. Alternative explanations suggest that demonetization can be a form of tax evasion. Monetization coefficients for countries (2015–2018, %). The table includes data for both developed and emerging economies. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\n\\mbox{Monetization coefficient} = \\frac{\\mbox{M2}}{\\mbox{GDP}}\n" } ]
https://en.wikipedia.org/wiki?curid=67762866
67764540
1 Kings 9
1 Kings, chapter 9 1 Kings 9 is the ninth chapter of the Books of Kings in the Hebrew Bible or the First Book of Kings in the Old Testament of the Christian Bible. The book is a compilation of various annals recording the acts of the kings of Israel and Judah by a Deuteronomic compiler in the seventh century BCE, with a supplement added in the sixth century BCE. This chapter belongs to the section focusing on the reign of Solomon over the unified kingdom of Judah and Israel (1 Kings 1 to 11). The focus of this chapter is Solomon's achievements. Text. This chapter was originally written in the Hebrew language and, since the 16th century, has been divided into 28 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). God's response to Solomon (9:1–9). With the completion of the Temple, God did not need to appear to Solomon in Gibeon (verse 2) but in Jerusalem, assuring Solomon of the continuation of his dynasty and the temple, as long as God's laws were kept. The destruction of the Temple and the loss of land are predicted here, as well as the possibility of return, so this section contains two things: 'an explanation for woe and an offer of salvation'. The tribute to Tyre (9:10–14). Several times 1 Kings 9–10 overlaps with –, bracketing the construction of the temple. After paying Hiram I of Tyre with agricultural products, Solomon gave a strip of land in Galilee (at the Bay of Akko), but Hiram was not satisfied with this gift. However, in it is asserted that Hiram also gave Solomon some cities as a present. "Then Hiram sent the king one hundred and twenty talents of gold." Construction of towns and forced labor (9:15–28). This section parallels the narrative in 1 Kings 5:13–18, emphasizing that Israelites were not employed as forced labor, but 'only' Canaanites, for the construction of various cities outside Jerusalem. Currently, there are archaeological excavations of the cities in the list, in particular of Gezer, Megiddo, and Hazor. In Jerusalem, Solomon expanded the construction of 'Millo' (verse 15), a term which is probably related to the meaning of 'to fill', referring to a substructure designed to secure the sloping terrain of the palace grounds (cf. 2 Samuel 5:9; 1 Kings 11:27; 2 Kings 12:20). Pharaoh's daughter (verse 16) moved to her own palace (verse 24). Solomon's triannual sacrificial feasts at the temple are mentioned in verse 25, followed by a report of Solomon's shipping expedition from the Red Sea (or 'Reed Sea', cf. Exodus 14) to Ophir, a place that could be near Aden or on the Horn of Africa. "And this is the account of the forced labor that King Solomon drafted to build the house of the Lord and his own house and the Millo and the wall of Jerusalem and Hazor and Megiddo and Gezer" Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=67764540
67767951
Stein discrepancy
A Stein discrepancy is a statistical divergence between two probability measures that is rooted in Stein's method. It was first formulated as a tool to assess the quality of Markov chain Monte Carlo samplers, but has since been used in diverse settings in statistics, machine learning and computer science. Definition. Let formula_0 be a measurable space and let formula_1 be a set of measurable functions of the form formula_2. A natural notion of distance between two probability distributions formula_3, formula_4, defined on formula_0, is provided by an integral probability metric formula_5 where for the purposes of exposition we assume that the expectations exist, and that the set formula_1 is sufficiently rich that (1.1) is indeed a metric on the set of probability distributions on formula_0, i.e. formula_6 if and only if formula_7. The choice of the set formula_1 determines the topological properties of (1.1). However, for practical purposes the evaluation of (1.1) requires access to both formula_3 and formula_4, often rendering direct computation of (1.1) impractical. Stein's method is a theoretical tool that can be used to bound (1.1). Specifically, we suppose that we can identify an operator formula_8 and a set formula_9 of real-valued functions in the domain of formula_8, both of which may be formula_3-dependent, such that for each formula_10 there exists a solution formula_11 to the "Stein equation" formula_12 The operator formula_8 is termed a "Stein operator" and the set formula_9 is called a "Stein set". Substituting (1.2) into (1.1), we obtain an upper bound formula_13. This resulting bound formula_14 is called a "Stein discrepancy". In contrast to the original integral probability metric formula_15, it may be possible to analyse or compute formula_16 using expectations only with respect to the distribution formula_4. Examples. Several different Stein discrepancies have been studied, with some of the most widely used presented next. Classical Stein discrepancy. For a probability distribution formula_3 with positive and differentiable density function formula_17 on a convex set formula_18, whose boundary is denoted formula_19, the combination of the "Langevin–Stein operator" formula_20 and the "classical Stein set" formula_21 yields the "classical Stein discrepancy". Here formula_22 denotes the Euclidean norm and formula_23 the Euclidean inner product. Here formula_24 is the associated operator norm for matrices formula_25, and formula_26 denotes the outward unit normal to formula_19 at location formula_27. If formula_28 then we interpret formula_29. In the univariate case formula_30, the classical Stein discrepancy can be computed exactly by solving a quadratically constrained quadratic program. Graph Stein discrepancy. The first known computable Stein discrepancies were the graph Stein discrepancies (GSDs). Given a discrete distribution formula_31, one can define the graph formula_32 with vertex set formula_33 and edge set formula_34. From this graph, one can define the "graph Stein set" as formula_35 The combination of the Langevin–Stein operator and the graph Stein set is called the "graph Stein discrepancy" (GSD). The GSD is actually the solution of a finite-dimensional linear program, with the size of formula_36 as low as linear in formula_37, meaning that the GSD can be efficiently computed. Kernel Stein discrepancy. The supremum arising in the definition of Stein discrepancy can be evaluated in closed form using a particular choice of Stein set.
Indeed, let formula_38 be the unit ball in a (possibly vector-valued) reproducing kernel Hilbert space formula_39 with reproducing kernel formula_40, whose elements are in the domain of the Stein operator formula_41. Suppose that, for each formula_42, the map formula_43 is a continuous linear functional on formula_44, and that formula_45, where the Stein operator formula_41 acts on the first argument of formula_46 and formula_47 acts on the second argument. Then it can be shown that formula_48, where the random variables formula_49 and formula_50 in the expectation are independent. In particular, if formula_51 is a discrete distribution on formula_0, then the Stein discrepancy takes the closed form formula_52 A Stein discrepancy constructed in this manner is called a "kernel Stein discrepancy" and the construction is closely connected to the theory of kernel embedding of probability distributions. Let formula_53 be a reproducing kernel. For a probability distribution formula_3 with positive and differentiable density function formula_17 on formula_28, the combination of the Langevin–Stein operator formula_20 and the Stein set formula_54 associated to the matrix-valued reproducing kernel formula_55, yields a kernel Stein discrepancy with formula_56 where formula_57 (resp. formula_58) denotes the gradient with respect to the argument indexed by formula_59 (resp. formula_60). Concretely, if we take the "inverse multi-quadric" kernel formula_61 with parameters formula_62 and formula_63 a symmetric positive definite matrix, and if we denote formula_64, then we have formula_65. Diffusion Stein discrepancy. "Diffusion Stein discrepancies" generalize the Langevin Stein operator formula_66 to a class of "diffusion Stein operators" formula_67, each representing an Itô diffusion that has formula_3 as its stationary distribution. Here, formula_68 is a matrix-valued function determined by the infinitesimal generator of the diffusion. Other Stein discrepancies. Additional Stein discrepancies have been developed for constrained domains, non-Euclidean domains, discrete domains, improved scalability, and gradient-free Stein discrepancies where derivatives of the density formula_17 are circumvented. Properties. The flexibility in the choice of Stein operator and Stein set in the construction of Stein discrepancy precludes general statements of a theoretical nature. However, much is known about particular Stein discrepancies. Computable without the normalisation constant. Stein discrepancy can sometimes be computed in challenging settings where the probability distribution formula_3 admits a probability density function formula_69 (with respect to an appropriate reference measure on formula_70) of the form formula_71, where formula_72 and its derivative can be numerically evaluated but whose normalisation constant formula_73 is not easily computed or approximated. Considering (2.1), we observe that the dependence of formula_74 on formula_3 occurs only through the term formula_75 which does not depend on the normalisation constant formula_76. Stein discrepancy as a statistical divergence. A basic requirement of Stein discrepancy is that it is a statistical divergence, meaning that formula_77 and formula_78 if and only if formula_79. This property can be shown to hold for classical Stein discrepancy and kernel Stein discrepancy, provided that appropriate regularity conditions hold. Convergence control. A stronger property, compared to being a statistical divergence, is "convergence control", meaning that formula_80 implies formula_81 converges to formula_3 in a sense to be specified.
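To make the closed-form kernel expression above concrete, the following is a minimal numerical sketch (not from the cited literature) of the kernel Stein discrepancy of an equally weighted sample, using the inverse multi-quadric expression (2.1) with the matrix parameter formula_63 set to the identity; the function and variable names, and the choice of exponent, are illustrative:

```python
import numpy as np

def imq_stein_kernel(x, y, score_x, score_y, beta=0.5):
    # Stein reproducing kernel A_P A_P' K(x, y) for the inverse multi-quadric
    # base kernel, specialised to Sigma = I, following (2.1).
    d = x.shape[0]
    diff = x - y
    r2 = diff @ diff                      # (x - y)^T (x - y)
    base = 1.0 + r2
    term1 = -4.0 * beta * (beta + 1.0) * r2 / base ** (beta + 2.0)
    term2 = 2.0 * beta * (d + (score_x - score_y) @ diff) / base ** (beta + 1.0)
    term3 = (score_x @ score_y) / base ** beta
    return term1 + term2 + term3

def kernel_stein_discrepancy(samples, score_fn, beta=0.5):
    # KSD of the equally weighted empirical distribution of `samples` against
    # the target whose score function (gradient of the log-density) is `score_fn`.
    n = len(samples)
    scores = [score_fn(x) for x in samples]
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += imq_stein_kernel(samples[i], samples[j], scores[i], scores[j], beta)
    return np.sqrt(total) / n

# Example: a sample assessed against a standard normal target, whose score is -x.
rng = np.random.default_rng(0)
xs = rng.normal(size=(100, 2))
print(kernel_stein_discrepancy(xs, lambda x: -x))
```

Only the score function formula_64 is needed, so the normalisation constant of the target density never enters the computation, in line with the discussion above.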
For example, under appropriate regularity conditions, both the classical Stein discrepancy and graph Stein discrepancy enjoy "Wasserstein convergence control", meaning that formula_80 implies that the Wasserstein metric between formula_81 and formula_3 converges to zero. For the kernel Stein discrepancy, "weak convergence control" has been established under regularity conditions on the distribution formula_3 and the reproducing kernel formula_40, which are applicable in particular to (2.1). Other well-known choices of formula_82, such as those based on the Gaussian kernel, provably do not enjoy weak convergence control. Convergence detection. The converse property to convergence control is "convergence detection", meaning that formula_80 whenever formula_81 converges to formula_3 in a sense to be specified. For example, under appropriate regularity conditions, classical Stein discrepancy enjoys a particular form of "mean square convergence detection", meaning that formula_80 whenever formula_83 converges in mean-square to formula_84 and formula_85 converges in mean-square to formula_86. For kernel Stein discrepancy, "Wasserstein convergence detection" has been established, under appropriate regularity conditions on the distribution formula_3 and the reproducing kernel formula_40. Applications of Stein discrepancy. Several applications of Stein discrepancy have been proposed, some of which are now described. Optimal quantisation. Given a probability distribution formula_3 defined on a measurable space formula_0, the "quantisation" task is to select a small number of states formula_87 such that the associated discrete distribution formula_88 is an accurate approximation of formula_3 in a sense to be specified. "Stein points" are the result of performing "optimal" quantisation via minimisation of Stein discrepancy: formula_89 Under appropriate regularity conditions, it can be shown that formula_90 as formula_91. Thus, if the Stein discrepancy enjoys convergence control, it follows that formula_92 converges to formula_3. Extensions of this result, to allow for imperfect numerical optimisation, have also been derived. Sophisticated optimisation algorithms have been designed to perform efficient quantisation based on Stein discrepancy, including gradient flow algorithms that aim to minimise kernel Stein discrepancy over an appropriate space of probability measures. Optimal weighted approximation. If one is allowed to consider weighted combinations of point masses, then more accurate approximation is possible compared to (3.1). For simplicity of exposition, suppose we are given a set of states formula_93. Then the optimal weighted combination of the point masses formula_94, i.e. formula_95 which minimises Stein discrepancy, can be obtained in closed form when a kernel Stein discrepancy is used. Some authors consider imposing, in addition, a non-negativity constraint on the weights, i.e. formula_96. However, in both cases the computation of the optimal weights formula_97 can involve solving linear systems of equations that are numerically ill-conditioned. Interestingly, it has been shown that greedy approximation of formula_98 using an unweighted combination of formula_99 states can reduce this computational requirement.
In particular, the greedy "Stein thinning" algorithm formula_100 has been shown to satisfy an error bound formula_101 Non-myopic and mini-batch generalisations of the greedy algorithm have been demonstrated to yield further improvement in approximation quality relative to computational cost. Variational inference. Stein discrepancy has been exploited as a "variational objective" in variational Bayesian methods. Given a collection formula_102 of probability distributions on formula_0, parametrised by formula_103, one can seek the distribution in this collection that best approximates a distribution formula_3 of interest: formula_104 A possible advantage of Stein discrepancy in this context, compared to the traditional Kullback–Leibler variational objective, is that formula_105 need not be absolutely continuous with respect to formula_3 in order for formula_106 to be well-defined. This property can be used to circumvent the use of flow-based generative models, for example, which impose diffeomorphism constraints in order to enforce absolute continuity of formula_105 and formula_3. Statistical estimation. Stein discrepancy has been proposed as a tool to fit parametric statistical models to data. Given a dataset formula_93, consider the associated discrete distribution formula_107. For a given parametric collection formula_108 of probability distributions on formula_0, one can estimate a value of the parameter formula_109 which is compatible with the dataset using a "minimum Stein discrepancy estimator" formula_110 The approach is closely related to the framework of minimum distance estimation, with the role of the "distance" being played by the Stein discrepancy. Alternatively, a generalised Bayesian approach to estimation of the parameter formula_109 can be considered where, given a prior probability distribution with density function formula_111, formula_103, (with respect to an appropriate reference measure on formula_112), one constructs a "generalised posterior" with probability density function formula_113 for some formula_114 to be specified or determined. Hypothesis testing. The Stein discrepancy has also been used as a test statistic for performing goodness-of-fit testing and comparing latent variable models. Since the aforementioned tests have a computational cost quadratic in the sample size, alternatives have been developed with (near-)linear runtimes. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathcal{X}" }, { "math_id": 1, "text": "\\mathcal{M}" }, { "math_id": 2, "text": "m : \\mathcal{X} \\rightarrow \\mathbb{R}" }, { "math_id": 3, "text": "P" }, { "math_id": 4, "text": "Q" }, { "math_id": 5, "text": "\n(1.1) \\quad d_{\\mathcal{M}}(P , Q) := \\sup_{m \\in \\mathcal{M}} |\\mathbb{E}_{X \\sim P}[m(X)] - \\mathbb{E}_{Y \\sim Q}[m(Y)]| ,\n" }, { "math_id": 6, "text": "d_{\\mathcal{M}}(P,Q) = 0" }, { "math_id": 7, "text": "P=Q" }, { "math_id": 8, "text": "\\mathcal{A}_{P}" }, { "math_id": 9, "text": "\\mathcal{F}_{P}" }, { "math_id": 10, "text": "m \\in \\mathcal{M}" }, { "math_id": 11, "text": "f_m \\in \\mathcal{F}_{P}" }, { "math_id": 12, "text": "\n(1.2) \\quad m(x) - \\mathbb{E}_{X \\sim P}[m(X)] = \\mathcal{A}_{P} f_m(x) . \n" }, { "math_id": 13, "text": "\nd_{\\mathcal{M}}(P , Q) = \\sup_{m \\in \\mathcal{M}} |\\mathbb{E}_{Y \\sim Q}[m(Y) - \\mathbb{E}_{X \\sim P}[m(X)] ]|\n= \\sup_{m \\in \\mathcal{M}} |\\mathbb{E}_{Y \\sim Q}[ \\mathcal{A}_{P} f_m(Y) ]| \\leq \\sup_{f \\in \\mathcal{F}_{P}} |\\mathbb{E}_{Y \\sim Q}[\\mathcal{A}_{P} f(Y)]| \n" }, { "math_id": 14, "text": "\nD_P(Q) := \\sup_{f \\in \\mathcal{F}_P} |\\mathbb{E}_{Y \\sim Q}[\\mathcal{A}_P f(Y)]| \n" }, { "math_id": 15, "text": "d_{\\mathcal{M}}(P , Q)" }, { "math_id": 16, "text": "D_{P}(Q)" }, { "math_id": 17, "text": "p" }, { "math_id": 18, "text": "\\mathcal{X} \\subseteq \\mathbb{R}^d" }, { "math_id": 19, "text": "\\partial \\mathcal{X}" }, { "math_id": 20, "text": "\\mathcal{A}_{P} f = \\nabla \\cdot f + f \\cdot \\nabla \\log p" }, { "math_id": 21, "text": "\\mathcal{F}_P = \\left\\{ f : \\mathcal{X} \\rightarrow \\mathbb{R}^d \\,\\Biggl\\vert\\, \\sup_{x \\neq y} \\max \\left( \\|f(x)\\| , \\|\\nabla f(x) \\|, \\frac{\\|\\nabla f(x) - \\nabla f(y) \\|}{\\|x-y\\|} \\right) \\leq 1 , \\; \\langle f(x) , n(x) \\rangle = 0 \\; \\forall x \\in \\partial \\mathcal{X} \\right\\} " }, { "math_id": 22, "text": "\\|\\cdot\\|" }, { "math_id": 23, "text": "\\langle \\cdot , \\cdot \\rangle" }, { "math_id": 24, "text": "\\| M \\| = \\textstyle \\sup_{v \\in \\mathbb{R}^d, \\|v\\| = 1} \\|Mv\\|" }, { "math_id": 25, "text": "M \\in \\R^{d \\times d}" }, { "math_id": 26, "text": "n(x)" }, { "math_id": 27, "text": "x \\in \\partial \\mathcal{X}" }, { "math_id": 28, "text": "\\mathcal{X} = \\R^d" }, { "math_id": 29, "text": "\\partial \\mathcal{X} = \\emptyset" }, { "math_id": 30, "text": "d=1" }, { "math_id": 31, "text": "Q = \\textstyle \\sum_{i=1}^n w_i \\delta(x_i)" }, { "math_id": 32, "text": "G" }, { "math_id": 33, "text": "V = \\{x_1, \\dots, x_n\\}" }, { "math_id": 34, "text": "E \\subseteq V \\times V" }, { "math_id": 35, "text": "\n \\begin{align}\n \\mathcal{F}_P = \\Big\\{ f : \\mathcal{X} \\rightarrow \\mathbb{R}^d & \\,\\Bigl\\vert\\,\n \\max \\left(\\|f(v)\\|_\\infty, \\|\\nabla f(v)\\|_\\infty,\n {\\textstyle\\frac{\\|f(x) - f(y)\\|_\\infty}{\\|x - y\\|_1}},\n {\\textstyle \\frac{\\|\\nabla f(x) - \\nabla f(y)\\|_\\infty}{\\|x - y\\|_1}}\\right) \\le 1, \\\\[8pt]\n & {\\textstyle\\frac{\\|f(x) - f(y) - {\\nabla (x)}{(x - y)}\\|_\\infty}{\\frac{1}{2}\\|x - y\\|_1^2} \\leq 1},\n {\\textstyle\\frac{\\|f(x) - f(y) -{\\nabla f(y)}{(x - y)}\\|_\\infty}{\\frac{1}{2}\\|x - y\\|_1^2} \\leq 1},\n \\; \\forall v \\in \\operatorname{supp}(Q_n), (x,y)\\in E \\Big\\}.\n \\end{align}\n" }, { "math_id": 36, "text": "E" }, { "math_id": 37, "text": "n" }, { "math_id": 38, "text": "\\mathcal{F}_P = \\{f \\in H(K) : \\|f\\|_{H(K)} \\leq 1\\}" }, { "math_id": 39, "text": "H(K)" }, { "math_id": 
40, "text": "K" }, { "math_id": 41, "text": "\\mathcal{A}_P" }, { "math_id": 42, "text": "x \\in \\mathcal{X}" }, { "math_id": 43, "text": "f \\mapsto \\mathcal{A}_P[f](x)" }, { "math_id": 44, "text": "\\mathcal{F}_P" }, { "math_id": 45, "text": "\\mathbb{E}_{X \\sim Q}[ \\mathcal{A}_P \\mathcal{A}_P' K(X,X) ] < \\infty" }, { "math_id": 46, "text": "K(\\cdot,\\cdot)" }, { "math_id": 47, "text": "\\mathcal{A}_P'" }, { "math_id": 48, "text": "\nD_P(Q) = \\sqrt{ \\mathbb{E}_{X,X' \\sim Q} [ \\mathcal{A}_P \\mathcal{A}_P' K(X,X') ] }\n" }, { "math_id": 49, "text": "X" }, { "math_id": 50, "text": "X'" }, { "math_id": 51, "text": "Q = \\sum_{i=1}^n w_i \\delta(x_i)" }, { "math_id": 52, "text": "\nD_P(Q) = \\sqrt{ \\sum_{i=1}^n \\sum_{j=1}^n w_i w_j \\mathcal{A}_P \\mathcal{A}_P' K(x_i,x_j) }.\n" }, { "math_id": 53, "text": "k : \\mathcal{X} \\times \\mathcal{X} \\rightarrow \\mathbb{R}" }, { "math_id": 54, "text": "\\mathcal{F}_P = \\left\\{f \\in H(k) \\times \\dots \\times H(k) : \\sum_{i=1}^d \\|f_i\\|_{H(k)}^2 \\leq 1\\right\\}," }, { "math_id": 55, "text": "K(x,x') = k(x,x') I_{d \\times d}" }, { "math_id": 56, "text": "\\mathcal{A}_P \\mathcal{A}_P' K(x,x') = \\nabla_x \\cdot \\nabla_{x'} k(x,x') + \\nabla_x k(x,x') \\cdot \\nabla_{x'} \\log p(x') +\\nabla_{x'} k(x,x') \\cdot \\nabla_x \\log p(x) + k(x,x') \\nabla_x \\log p(x) \\cdot \\nabla_{x'} \\log p(x')" }, { "math_id": 57, "text": "\\nabla_x" }, { "math_id": 58, "text": "\\nabla_{x'}" }, { "math_id": 59, "text": "x" }, { "math_id": 60, "text": "x'" }, { "math_id": 61, "text": "k(x,x') = (1 + (x-x')^\\top \\Sigma^{-1} (x-x') )^{-\\beta} " }, { "math_id": 62, "text": "\\beta > 0" }, { "math_id": 63, "text": "\\Sigma \\in \\mathbb{R}^{d \\times d}" }, { "math_id": 64, "text": "u(x) = \\nabla \\log p(x)" }, { "math_id": 65, "text": "(2.1) \\quad \\mathcal{A}_P \\mathcal{A}_P' K(x,x') = - \\frac{4 \\beta (\\beta + 1) (x-x')^\\top \\Sigma^{-2} (x-x')}{ \\left(1 + (x-x')^\\top \\Sigma^{-1} (x-x') \\right)^{\\beta + 2} } + 2 \\beta \\left[ \\frac{ \\text{tr}(\\Sigma^{-1}) + [u(x) - u(x')]^\\top \\Sigma^{-1} (x-x') }{ \\left(1 + (x-x')^\\top \\Sigma^{-1} (x-x') \\right)^{1+\\beta} } \\right] + \\frac{ u(x)^\\top u(x') }{ \\left(1 + (x-x')^\\top \\Sigma^{-1} (x-x') \\right)^{\\beta} } " }, { "math_id": 66, "text": "\\mathcal{A}_{P} f = \\nabla \\cdot f + f \\cdot \\nabla \\log p = \\textstyle\\frac{1}{p}\\nabla \\cdot (f p)" }, { "math_id": 67, "text": "\\mathcal{A}_{P} f = \\textstyle\\frac{1}{p}\\nabla \\cdot (m f p)" }, { "math_id": 68, "text": "m" }, { "math_id": 69, "text": " p " }, { "math_id": 70, "text": "\\mathcal{X} " }, { "math_id": 71, "text": "\np(x) = \\textstyle \\frac{1}{Z} \\tilde{p}(x) " }, { "math_id": 72, "text": "\\tilde{p}(x) " }, { "math_id": 73, "text": " Z " }, { "math_id": 74, "text": "\\mathcal{A}_P \\mathcal{A}_P K(x,x') " }, { "math_id": 75, "text": "\nu(x) = \\nabla \\log p(x) = \\nabla \\log \\left( \\frac{\\tilde{p}(x)}{Z} \\right) = \\nabla \\log \\tilde{p}(x) - \\nabla \\log Z = \\nabla \\log \\tilde{p}(x) \n" }, { "math_id": 76, "text": "\nZ \n" }, { "math_id": 77, "text": "D_P(Q) \\geq 0" }, { "math_id": 78, "text": "D_P(Q) = 0" }, { "math_id": 79, "text": "Q=P" }, { "math_id": 80, "text": "D_P(Q_n) \\rightarrow 0" }, { "math_id": 81, "text": "Q_n" }, { "math_id": 82, "text": "K " }, { "math_id": 83, "text": "X_n \\sim Q_n" }, { "math_id": 84, "text": "X \\sim P" }, { "math_id": 85, "text": "\\nabla \\log p(X_m)" }, { "math_id": 86, "text": "\\nabla \\log p(X)" }, { "math_id": 87, "text": "x_1,\\dots,x_n 
\\in \\mathcal{X}" }, { "math_id": 88, "text": "Q^n = \\frac{1}{n} \\sum_{i=1}^n \\delta(x_i)" }, { "math_id": 89, "text": "\n(3.1) \\quad \\underset{x_1,\\dots,x_n \\in \\mathcal{X}}{\\operatorname{arg\\,min}} \\; D_{P}\\left( \\frac{1}{n} \\sum_{i=1}^n \\delta(x_i) \\right) \n" }, { "math_id": 90, "text": "D_P(Q^n) \\rightarrow 0" }, { "math_id": 91, "text": "n \\rightarrow \\infty" }, { "math_id": 92, "text": "Q^n" }, { "math_id": 93, "text": "\\{x_i\\}_{i=1}^n \\subset \\mathcal{X}" }, { "math_id": 94, "text": "\\delta(x_i) " }, { "math_id": 95, "text": "\nQ_n := \\sum_{i=1}^n w_i^* \\delta(x_i), \\qquad w^* \\in \\underset{w_1 + \\cdots + w_n = 1}{\\operatorname{arg\\,min}} \\; D_P\\left( \\sum_{i=1}^n w_i \\delta(x_i) \\right),\n" }, { "math_id": 96, "text": "w_i \\geq 0" }, { "math_id": 97, "text": " w^* " }, { "math_id": 98, "text": "Q_n " }, { "math_id": 99, "text": "m \\ll n " }, { "math_id": 100, "text": "\nQ_{n,m} := \\frac{1}{m} \\sum_{i=1}^m \\delta(x_{\\pi(i)}), \\qquad \\pi(m) \\in\n\\underset{j=1,\\dots,n}{\\operatorname{arg\\,min}} \\; D_P\\left( \\frac{1}{m} \\sum_{i=1}^{m-1} \\delta(x_{\\pi(i)}) + \\frac{1}{m} \\delta(x_j) \\right) \n" }, { "math_id": 101, "text": "D_P(Q_{n,m}) = D_P(Q_n) + O\\left(\\sqrt{\\frac{\\log m}{m}} \\right)." }, { "math_id": 102, "text": "\\{Q_\\theta\\}_{\\theta \\in \\Theta}" }, { "math_id": 103, "text": "\\theta \\in \\Theta" }, { "math_id": 104, "text": "\n\\underset{\\theta \\in \\Theta}{\\operatorname{arg\\,min}} \\; D_P(Q_\\theta)\n" }, { "math_id": 105, "text": "Q_\\theta" }, { "math_id": 106, "text": "D_P(Q_\\theta)" }, { "math_id": 107, "text": "Q^n = \\textstyle \\frac{1}{n}\\sum_{i=1}^n \\delta(x_i)" }, { "math_id": 108, "text": "\\{P_\\theta\\}_{\\theta \\in \\Theta}" }, { "math_id": 109, "text": "\\theta" }, { "math_id": 110, "text": "\n\\underset{\\theta \\in \\Theta}{\\operatorname{arg\\,min}} \\; D_{P_\\theta}(Q^n).\n" }, { "math_id": 111, "text": "\\pi(\\theta)" }, { "math_id": 112, "text": "\\Theta" }, { "math_id": 113, "text": "\n\\pi^n(\\theta) \\propto \\pi(\\theta) \\exp\\left( - \\gamma D_{P_\\theta}(Q^n)^2 \\right) ,\n" }, { "math_id": 114, "text": "\\gamma > 0" } ]
https://en.wikipedia.org/wiki?curid=67767951
6777260
Monetary conditions index
Macroeconomic index number relevant for monetary policy In macroeconomics, a monetary conditions index (MCI) is an index number calculated from a linear combination of a small number of economy-wide financial variables deemed relevant for monetary policy. These variables always include a short-run interest rate and an exchange rate. An MCI may also serve as a day-to-day operating target for the conduct of monetary policy, especially in small open economies. Central banks compute MCIs, with the Bank of Canada being the first to do so, beginning in the early 1990s. The MCI begins with a simple model of the determinants of aggregate demand in an open economy, which include variables such as the real exchange rate as well as the real interest rate. Moreover, monetary policy is assumed to have a significant effect on these variables, especially in the short run. Hence a linear combination of these variables can measure the effect of monetary policy on aggregate demand. Since the MCI is a function of the real exchange rate, the MCI is influenced by events such as terms of trade shocks, and changes in business and consumer confidence, which do not necessarily affect interest rates. Let aggregate demand take the following simple form: formula_0 Where: "y" = aggregate demand, logged; "r" = real interest rate, measured in percents, not decimal fractions; "q" = real exchange rate, defined as the foreign currency price of a unit of domestic currency. A rise in "q" means that the domestic currency appreciates. "q" is the natural log of an index number that is set to 1 in the base period (numbered 0 by convention); ν = stochastic error term assumed to capture all other influences on aggregate demand. "a"1 and "a"2 are the respective real interest rate and real exchange rate elasticities of aggregate demand. Empirically, we expect both "a"1 and "a"2 to be negative, and 0 ≤ "a"1/"a"2 ≤ 1. Let "MCI"0 be the (arbitrary) value of the MCI in the base year. The MCI is then defined as: formula_1 Hence "MCI"t is a weighted sum of the changes between periods 0 and t in the real interest and exchange rates. Only changes in the MCI, and not its numerical value, are meaningful, as is always the case with index numbers. Changes in the MCI reflect changes in monetary conditions between two points in time. A rise (fall) in the MCI means that monetary conditions have tightened (eased). Because an MCI begins with a linear combination, infinitely many distinct pairs of interest rates, "r", and exchange rates, "q", yield the same value of the MCI. Hence "r" and "q" can move a great deal, with little or no effect on the value of the MCI. Nevertheless, the differing value of "r" and "q" consistent with a given value of MCI may have widely differing implications for real output and the inflation rate, especially if the time lags in the transmission of monetary policy are material. Since "a"1 and "a"2 are expected to have the same sign, "r" and "q" may move in opposite directions with little or no change in the MCI. Hence an MCI that changes little after an announced change in monetary policy is evidence that financial markets view the policy change as lacking credibility. The real interest rate and real exchange rate require a measure of the price level, often calculated only quarterly and never more often than monthly. Hence calculating the MCI more often than monthly would not be meaningful. 
In practice, the MCI is calculated using the nominal exchange rate and a nominal short-run interest rate, for which data are readily available. This nominal variant of the MCI is very easy to compute in real time, even minute by minute, and, assuming low and stable inflation, it is not inconsistent with the underlying model of aggregate demand.
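A small illustrative sketch of the MCI calculation defined above; the elasticities, the base values, and the data are invented for the example:

```python
import math

def mci(r_t, q_index_t, r_0=4.0, a1=-0.8, a2=-0.4, mci_0=100.0):
    # MCI_t = MCI_0 * exp[(r_t - r_0) + (a2/a1) * q_t], where q_t is the natural
    # log of the exchange-rate index (the index is 1 in the base period, so q_0 = 0).
    q_t = math.log(q_index_t)
    return mci_0 * math.exp((r_t - r_0) + (a2 / a1) * q_t)

print(mci(r_t=4.0, q_index_t=1.00))  # base period: MCI equals its base value
print(mci(r_t=5.0, q_index_t=1.00))  # interest-rate rise: MCI rises (tighter conditions)
print(mci(r_t=4.0, q_index_t=0.95))  # currency depreciation: MCI falls (easier conditions)
```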
[ { "math_id": 0, "text": "\\ y = a_0 + a_{1}r + a_{2}q + \\nu ," }, { "math_id": 1, "text": "\\ MCI_t = MCI_{0}exp[(r_t - r_0) + (a_{2}/a_{1})q_t]." } ]
https://en.wikipedia.org/wiki?curid=6777260
67789038
Spaces of test functions and distributions
Topological vector spaces involving with the definition and use of Schwartz distributions. In mathematical analysis, the spaces of test functions and distributions are topological vector spaces (TVSs) that are used in the definition and application of distributions. Test functions are usually infinitely differentiable complex-valued (or sometimes real-valued) functions on a non-empty open subset formula_0 that have compact support. The space of all test functions, denoted by formula_1 is endowed with a certain topology, called the canonical LF-topology, that makes formula_2 into a complete Hausdorff locally convex TVS. The strong dual space of formula_2 is called the space of distributions on formula_3 and is denoted by formula_4 where the "formula_5" subscript indicates that the continuous dual space of formula_1 denoted by formula_6 is endowed with the strong dual topology. There are other possible choices for the space of test functions, which lead to other different spaces of distributions. If formula_7 then the use of Schwartz functions as test functions gives rise to a certain subspace of formula_8 whose elements are called tempered distributions. These are important because they allow the Fourier transform to be extended from "standard functions" to tempered distributions. The set of tempered distributions forms a vector subspace of the space of distributions formula_8 and is thus one example of a space of distributions; there are many other spaces of distributions. There also exist other major classes of test functions that are not subsets of formula_9 such as spaces of analytic test functions, which produce very different classes of distributions. The theory of such distributions has a different character from the previous one because there are no analytic functions with non-empty compact support. Use of analytic test functions leads to Sato's theory of hyperfunctions. Notation. The following notation will be used throughout this article: Definitions of test functions and distributions. In this section, we will formally define real-valued distributions on U. With minor modifications, one can also define complex-valued distributions, and one can replace formula_34 with any (paracompact) smooth manifold. &lt;templatestyles src="Block indent/styles.css"/&gt;Notation: Note that for all formula_49 and any compact subsets K and L of U, we have: formula_50 &lt;templatestyles src="Block indent/styles.css"/&gt;Definition: Elements of formula_51 are called test functions on U and formula_51 is called the space of test function on U. We will use both formula_52 and formula_51 to denote this space. Distributions on U are defined to be the continuous linear functionals on formula_51 when this vector space is endowed with a particular topology called the canonical LF-topology. This topology is unfortunately not easy to define but it is nevertheless still possible to characterize distributions in a way so that no mention of the canonical LF-topology is made. 
Proposition: If T is a linear functional on formula_51 then the T is a distribution if and only if the following equivalent conditions are satisfied: The above characterizations can be used to determine whether or not a linear functional is a distribution, but more advanced uses of distributions and test functions (such as applications to differential equations) is limited if no topologies are placed on formula_51 and formula_66 To define the space of distributions we must first define the canonical LF-topology, which in turn requires that several other locally convex topological vector spaces (TVSs) be defined first. First, a (non-normable) topology on formula_67 will be defined, then every formula_68 will be endowed with the subspace topology induced on it by formula_69 and finally the (non-metrizable) canonical LF-topology on formula_51 will be defined. The space of distributions, being defined as the continuous dual space of formula_9 is then endowed with the (non-metrizable) strong dual topology induced by formula_51 and the canonical LF-topology (this topology is a generalization of the usual operator norm induced topology that is placed on the continuous dual spaces of normed spaces). This finally permits consideration of more advanced notions such as convergence of distributions (both sequences and nets), various (sub)spaces of distributions, and operations on distributions, including extending differential equations to distributions. Choice of compact sets K. Throughout, formula_33 will be any collection of compact subsets of formula_3 such that (1) formula_70 and (2) for any compact formula_71 there exists some formula_72 such that formula_73 The most common choices for formula_33 are: We make formula_33 into a directed set by defining formula_81 if and only if formula_82 Note that although the definitions of the subsequently defined topologies explicitly reference formula_83 in reality they do not depend on the choice of formula_84 that is, if formula_85 and formula_86 are any two such collections of compact subsets of formula_74 then the topologies defined on formula_36 and formula_44 by using formula_85 in place of formula_33 are the same as those defined by using formula_86 in place of formula_46 Topology on "C""k"("U"). We now introduce the seminorms that will define the topology on formula_87 Different authors sometimes use different families of seminorms so we list the most common families below. However, the resulting topology is the same no matter which family is used. &lt;templatestyles src="Block indent/styles.css"/&gt;Suppose formula_88 and formula_45 is an arbitrary compact subset of formula_48 Suppose formula_89 an integer such that formula_90 and formula_92 is a multi-index with length formula_93 For formula_94 define: formula_95 while for formula_96 define all the functions above to be the constant 0 map. All of the functions above are non-negative formula_97-valued seminorms on formula_87 As explained in this article, every set of seminorms on a vector space induces a locally convex vector topology. Each of the following sets of seminorms formula_98 generate the same locally convex vector topology on formula_36 (so for example, the topology generated by the seminorms in formula_99 is equal to the topology generated by those in formula_100). &lt;templatestyles src="Block indent/styles.css"/&gt;The vector space formula_36 is endowed with the locally convex topology induced by any one of the four families formula_101 of seminorms described above. 
This topology is also equal to the vector topology induced by all of the seminorms in formula_102 With this topology, formula_36 becomes a locally convex Fréchet space that is not normable. Every element of formula_103 is a continuous seminorm on formula_87 Under this topology, a net formula_104 in formula_36 converges to formula_40 if and only if for every multi-index formula_92 with formula_105 and every compact formula_59 the net of partial derivatives formula_106 converges uniformly to formula_107 on formula_108 For any formula_109 any (von Neumann) bounded subset of formula_110 is a relatively compact subset of formula_87 In particular, a subset of formula_67 is bounded if and only if it is bounded in formula_111 for all formula_112 The space formula_36 is a Montel space if and only if formula_113 The topology on formula_67 is the superior limit of the subspace topologies induced on formula_67 by the TVSs formula_111 as i ranges over the non-negative integers. A subset formula_114 of formula_67 is open in this topology if and only if there exists formula_115 such that formula_114 is open when formula_67 is endowed with the subspace topology induced on it by formula_116 Metric defining the topology. If the family of compact sets formula_117 satisfies formula_118 and formula_77 for all formula_119 then a complete translation-invariant metric on formula_67 can be obtained by taking a suitable countable Fréchet combination of any one of the above defining families of seminorms ("A" through "D"). For example, using the seminorms formula_120 results in the metric formula_121 Often, it is easier to just consider seminorms (avoiding any metric) and use the tools of functional analysis. Topology on "C""k"("K"). As before, fix formula_35 Recall that if formula_45 is any compact subset of formula_3 then formula_122 &lt;templatestyles src="Block indent/styles.css"/&gt;Assumption: For any compact subset formula_37 we will henceforth assume that formula_38 is endowed with the subspace topology it inherits from the Fréchet space formula_87 For any compact subset formula_37 formula_38 is a closed subspace of the Fréchet space formula_36 and is thus also a Fréchet space. For all compact formula_123 satisfying formula_124 denote the inclusion map by formula_125 Then this map is a linear embedding of TVSs (that is, it is a linear map that is also a topological embedding) whose image (or "range") is closed in its codomain; said differently, the topology on formula_38 is identical to the subspace topology it inherits from formula_126 and also formula_38 is a closed subset of formula_127 The interior of formula_68 relative to formula_67 is empty. If formula_14 is finite then formula_38 is a Banach space with a topology that can be defined by the norm formula_128 And when formula_129 then formula_130 is even a Hilbert space. The space formula_68 is a distinguished Schwartz Montel space so if formula_131 then it is not normable and thus not a Banach space (although like all other formula_132 it is a Fréchet space). Trivial extensions and independence of "C""k"("K")'s topology from "U". 
The definition of formula_38 depends on U so we will let formula_39 denote the topological space formula_132 which by definition is a topological subspace of formula_87 Suppose formula_133 is an open subset of formula_34 containing formula_3 and for any compact subset formula_134 let formula_135 is the vector subspace of formula_136 consisting of maps with support contained in formula_108 Given formula_137 its trivial extension to V is by definition, the function formula_138 defined by: formula_139 so that formula_140 Let formula_141 denote the map that sends a function in formula_44 to its trivial extension on V. This map is a linear injection and for every compact subset formula_142 (where formula_45 is also a compact subset of formula_133 since formula_143) we have formula_144 If I is restricted to formula_145 then the following induced linear map is a homeomorphism (and thus a TVS-isomorphism): formula_146 and thus the next two maps (which like the previous map are defined by formula_147) are topological embeddings: formula_148 (the topology on formula_149 is the canonical LF topology, which is defined later). Using the injection formula_150 the vector space formula_44 is canonically identified with its image in formula_151 (however, if formula_152 then formula_153 is not a topological embedding when these spaces are endowed with their canonical LF topologies, although it is continuous). Because formula_154 through this identification, formula_145 can also be considered as a subset of formula_155 Importantly, the subspace topology formula_145 inherits from formula_36 (when it is viewed as a subset of formula_36) is identical to the subspace topology that it inherits from formula_136 (when formula_145 is viewed instead as a subset of formula_136 via the identification). Thus the topology on formula_39 is independent of the open subset U of formula_34 that contains K. This justifies the practice of written formula_38 instead of formula_156 Canonical LF topology. Recall that formula_44 denote all those functions in formula_36 that have compact support in formula_74 where note that formula_44 is the union of all formula_38 as K ranges over formula_46 Moreover, for every k, formula_44 is a dense subset of formula_87 The special case when formula_157 gives us the space of test functions. &lt;templatestyles src="Block indent/styles.css"/&gt;formula_51 is called the space of test functions on formula_3 and it may also be denoted by formula_66 This section defines the canonical LF topology as a direct limit. It is also possible to define this topology in terms of its neighborhoods of the origin, which is described afterwards. Topology defined by direct limits. For any two sets K and L, we declare that formula_158 if and only if formula_124 which in particular makes the collection formula_33 of compact subsets of U into a directed set (we say that such a collection is directed by subset inclusion). For all compact formula_123 satisfying formula_124 there are inclusion maps formula_159 Recall from above that the map formula_160 is a topological embedding. The collection of maps formula_161 forms a direct system in the category of locally convex topological vector spaces that is directed by formula_33 (under subset inclusion). 
This system's direct limit (in the category of locally convex TVSs) is the pair formula_162 where formula_163 are the natural inclusions and where formula_44 is now endowed with the (unique) strongest locally convex topology making all of the inclusion maps formula_164 continuous. &lt;templatestyles src="Block indent/styles.css"/&gt;The canonical LF topology on formula_44 is the finest locally convex topology on formula_44 making all of the inclusion maps formula_165 continuous (where K ranges over formula_33). &lt;templatestyles src="Block indent/styles.css"/&gt;As is common in mathematics literature, the space formula_44 is henceforth assumed to be endowed with its canonical LF topology (unless explicitly stated otherwise). Topology defined by neighborhoods of the origin. If U is a convex subset of formula_166 then U is a neighborhood of the origin in the canonical LF topology if and only if it satisfies the following condition: Note that any convex set satisfying this condition is necessarily absorbing in formula_169 Since the topology of any topological vector space is translation-invariant, any TVS-topology is completely determined by the set of neighborhood of the origin. This means that one could actually define the canonical LF topology by declaring that a convex balanced subset U is a neighborhood of the origin if and only if it satisfies condition CN. Topology defined via differential operators. A linear differential operator in U with smooth coefficients is a sum formula_170 where formula_171 and all but finitely many of formula_172 are identically 0. The integer formula_173 is called the order of the differential operator formula_174 If formula_175 is a linear differential operator of order k then it induces a canonical linear map formula_176 defined by formula_177 where we shall reuse notation and also denote this map by formula_174 For any formula_178 the canonical LF topology on formula_44 is the weakest locally convex TVS topology making all linear differential operators in formula_3 of order formula_179 into continuous maps from formula_44 into formula_180 Properties of the canonical LF topology. Canonical LF topology's independence from K. One benefit of defining the canonical LF topology as the direct limit of a direct system is that we may immediately use the universal property of direct limits. Another benefit is that we can use well-known results from category theory to deduce that the canonical LF topology is actually independent of the particular choice of the directed collection formula_33 of compact sets. And by considering different collections formula_33 (in particular, those formula_33 mentioned at the beginning of this article), we may deduce different properties of this topology. In particular, we may deduce that the canonical LF topology makes formula_44 into a Hausdorff locally convex strict LF-space (and also a strict LB-space if formula_91), which of course is the reason why this topology is called "the canonical LF topology" (see this footnote for more details). Universal property. From the universal property of direct limits, we know that if formula_181 is a linear map into a locally convex space Y (not necessarily Hausdorff), then u is continuous if and only if u is bounded if and only if for every formula_167 the restriction of u to formula_38 is continuous (or bounded). Dependence of the canonical LF topology on U. 
Suppose V is an open subset of formula_34 containing formula_48 Let formula_182 denote the map that sends a function in formula_44 to its trivial extension on V (which was defined above). This map is a continuous linear map. If (and only if) formula_152 then formula_183 is not a dense subset of formula_184 and formula_153 is not a topological embedding. Consequently, if formula_152 then the transpose of formula_153 is neither one-to-one nor onto. Bounded subsets. A subset formula_185 is bounded in formula_44 if and only if there exists some formula_72 such that formula_186 and formula_187 is a bounded subset of formula_168 Moreover, if formula_142 is compact and formula_188 then formula_189 is bounded in formula_38 if and only if it is bounded in formula_87 For any formula_190 any bounded subset of formula_191 (resp. formula_110) is a relatively compact subset of formula_44 (resp. formula_36), where formula_192 Non-metrizability. For all compact formula_37 the interior of formula_38 in formula_44 is empty so that formula_44 is of the first category in itself. It follows from Baire's theorem that formula_44 is not metrizable and thus also not normable (see this footnote for an explanation of how the non-metrizable space formula_44 can be complete even though it does not admit a metric). The fact that formula_51 is a nuclear Montel space makes up for the non-metrizability of formula_51 (see this footnote for a more detailed explanation). Relationships between spaces. Using the universal property of direct limits and the fact that the natural inclusions formula_160 are all topological embedding, one may show that all of the maps formula_165 are also topological embeddings. Said differently, the topology on formula_38 is identical to the subspace topology that it inherits from formula_166 where recall that formula_38's topology was defined to be the subspace topology induced on it by formula_87 In particular, both formula_44 and formula_36 induces the same subspace topology on formula_168 However, this does not imply that the canonical LF topology on formula_44 is equal to the subspace topology induced on formula_44 by formula_36; these two topologies on formula_44 are in fact never equal to each other since the canonical LF topology is never metrizable while the subspace topology induced on it by formula_36 is metrizable (since recall that formula_36 is metrizable). The canonical LF topology on formula_44 is actually strictly finer than the subspace topology that it inherits from formula_36 (thus the natural inclusion formula_193 is continuous but not a topological embedding). Indeed, the canonical LF topology is so fine that if formula_194 denotes some linear map that is a "natural inclusion" (such as formula_195 or formula_196 or other maps discussed below) then this map will typically be continuous, which (as is explained below) is ultimately the reason why locally integrable functions, Radon measures, etc. all induce distributions (via the transpose of such a "natural inclusion"). Said differently, the reason why there are so many different ways of defining distributions from other spaces ultimately stems from how very fine the canonical LF topology is. 
Moreover, since distributions are just continuous linear functionals on formula_9 the fine nature of the canonical LF topology means that more linear functionals on formula_51 end up being continuous ("more" means as compared to a coarser topology that we could have placed on formula_51 such as, for instance, the subspace topology induced by some formula_197 which, although it would have made formula_51 metrizable, would have also resulted in fewer linear functionals on formula_51 being continuous and thus there would have been fewer distributions; moreover, this particular coarser topology also has the disadvantage of not making formula_51 into a complete TVS). Distributions. As discussed earlier, continuous linear functionals on formula_51 are known as distributions on U. Thus the set of all distributions on U is the continuous dual space of formula_9 which, when endowed with the strong dual topology, is denoted by formula_201 &lt;templatestyles src="Block indent/styles.css"/&gt;By definition, a distribution on U is defined to be a continuous linear functional on formula_202 Said differently, a distribution on U is an element of the continuous dual space of formula_51 when formula_51 is endowed with its canonical LF topology. We have the canonical duality pairing between a distribution T on U and a test function formula_203 which is denoted using angle brackets by formula_204 One interprets this notation as the distribution T acting on the test function formula_16 to give a scalar, or symmetrically as the test function formula_16 acting on the distribution T. Characterizations of distributions. Proposition. If T is a linear functional on formula_51 then the following are equivalent: Topology on the space of distributions. &lt;templatestyles src="Block indent/styles.css"/&gt;Definition and notation: The space of distributions on U, denoted by formula_226 is the continuous dual space of formula_51 endowed with the topology of uniform convergence on bounded subsets of formula_202 More succinctly, the space of distributions on U is formula_227 The topology of uniform convergence on bounded subsets is also called the strong dual topology. This topology is chosen because it is with this topology that formula_8 becomes a nuclear Montel space and it is with this topology that the Schwartz kernel theorem holds. No matter what dual topology is placed on formula_226 a sequence of distributions converges in this topology if and only if it converges pointwise (although this need not be true of a net). No matter which topology is chosen, formula_8 will be a non-metrizable, locally convex topological vector space. The space formula_8 is separable and has the strong Pytkeev property but it is neither a k-space nor a sequential space, which in particular implies that it is not metrizable and also that its topology can not be defined using only sequences. Topological properties. Topological vector space categories. The canonical LF topology makes formula_44 into a complete distinguished strict LF-space (and a strict LB-space if and only if formula_91), which implies that formula_44 is a meager subset of itself. Furthermore, formula_166 as well as its strong dual space, is a complete Hausdorff locally convex barrelled bornological Mackey space.
The strong dual of formula_44 is a Fréchet space if and only if formula_91 so in particular, the strong dual of formula_9 which is the space formula_8 of distributions on U, is not metrizable (note that the weak-* topology on formula_8 also is not metrizable and moreover, it further lacks almost all of the nice properties that the strong dual topology gives formula_8). The three spaces formula_9 formula_69 and the Schwartz space formula_228 as well as the strong duals of each of these three spaces, are complete nuclear Montel bornological spaces, which implies that all six of these locally convex spaces are also paracompact reflexive barrelled Mackey spaces. The spaces formula_67 and formula_229 are both distinguished Fréchet spaces. Moreover, both formula_51 and formula_229 are Schwartz TVSs. Convergent sequences. Convergent sequences and their insufficiency to describe topologies. The strong dual spaces of formula_67 and formula_229 are sequential spaces but not Fréchet-Urysohn spaces. Moreover, neither the space of test functions formula_51 nor its strong dual formula_8 is a sequential space (not even an Ascoli space), which in particular implies that their topologies can not be defined entirely in terms of convergent sequences. A sequence formula_205 in formula_44 converges in formula_44 if and only if there exists some formula_72 such that formula_38 contains this sequence and this sequence converges in formula_38; equivalently, it converges if and only if the following two conditions hold: (1) there is some formula_72 that contains the supports of all formula_230 (2) for every multi-index formula_231 the sequence of partial derivatives formula_232 converges uniformly to formula_233 Neither the space formula_51 nor its strong dual formula_8 is a sequential space, and consequently, their topologies can not be defined entirely in terms of convergent sequences. For this reason, the above characterization of when a sequence converges is not enough to define the canonical LF topology on formula_202 The same can be said of the strong dual topology on formula_201 What sequences do characterize. Nevertheless, sequences do characterize many important properties, as we now discuss. It is known that in the dual space of any Montel space, a sequence converges in the strong dual topology if and only if it converges in the weak* topology, which, in particular, is the reason why a sequence of distributions converges (in the strong dual topology) if and only if it converges pointwise (this leads many authors to use pointwise convergence to actually define the convergence of a sequence of distributions; this is fine for sequences but it does not extend to the convergence of nets of distributions since a net may converge pointwise but fail to converge in the strong dual topology). Sequences characterize continuity of linear maps valued in locally convex space. Suppose X is a locally convex bornological space (such as any of the six TVSs mentioned earlier). Then a linear map formula_234 into a locally convex space Y is continuous if and only if it maps null sequences in X to bounded subsets of Y. More generally, such a linear map formula_234 is continuous if and only if it maps Mackey convergent null sequences to bounded subsets of formula_235 So in particular, if a linear map formula_234 into a locally convex space is sequentially continuous at the origin then it is continuous. However, this does not necessarily extend to non-linear maps and/or to maps valued in topological spaces that are not locally convex TVSs.
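As a concrete illustration of the convergence criterion above, the sequence obtained by scaling a fixed bump function by 1/i converges to zero in the space of test functions: all of its members are supported in one fixed compact set, and every derivative tends uniformly to zero. The following is a minimal numerical sketch of this behaviour (the names bump and f_i are ad hoc, and the derivatives are approximated by finite differences); by contrast, a sequence of bumps translating off to infinity would violate the common-compact-support requirement even though each derivative would still tend to zero pointwise.

```python
import numpy as np

def bump(x):
    # A standard test function: smooth and supported in the fixed compact set [-1, 1].
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

x = np.linspace(-1.5, 1.5, 30001)
dx = x[1] - x[0]

for i in (1, 10, 100, 1000):
    f_i = bump(x) / i           # the sequence f_i = bump / i
    d1 = np.gradient(f_i, dx)   # numerical first derivative
    d2 = np.gradient(d1, dx)    # numerical second derivative
    # The sup-norms of f_i and of its derivatives all shrink like 1/i,
    # while every support stays inside [-1, 1].
    print(i, np.abs(f_i).max(), np.abs(d1).max(), np.abs(d2).max())
```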
For every formula_236 is sequentially dense in formula_169 Furthermore, formula_237 is a sequentially dense subset of formula_8 (with its strong dual topology) and also a sequentially dense subset of the strong dual space of formula_238 Sequences of distributions. A sequence of distributions formula_239 converges with respect to the weak-* topology on formula_8 to a distribution T if and only if formula_240 for every test function formula_241 For example, if formula_242 is the function formula_243 and formula_244 is the distribution corresponding to formula_245 then formula_246 as formula_247 so formula_248 in formula_249 Thus, for large formula_250 the function formula_251 can be regarded as an approximation of the Dirac delta distribution. Localization of distributions. Preliminaries: Transpose of a linear operator. Operations on distributions and spaces of distributions are often defined by means of the transpose of a linear operator. This is because the transpose allows for a unified presentation of the many definitions in the theory of distributions and also because its properties are well known in functional analysis. For instance, the well-known Hermitian adjoint of a linear operator between Hilbert spaces is just the operator's transpose (but with the Riesz representation theorem used to identify each Hilbert space with its continuous dual space). In general the transpose of a continuous linear map formula_255 is the linear map formula_256 or equivalently, it is the unique map satisfying formula_257 for all formula_258 and all formula_259 (the prime symbol in formula_260 does not denote a derivative of any kind; it merely indicates that formula_260 is an element of the continuous dual space formula_261). Since formula_99 is continuous, the transpose formula_262 is also continuous when both duals are endowed with their respective strong dual topologies; it is also continuous when both duals are endowed with their respective weak* topologies (see the articles polar topology and dual system for more details). In the context of distributions, the characterization of the transpose can be refined slightly. Let formula_263 be a continuous linear map. Then by definition, the transpose of formula_99 is the unique linear operator formula_264 that satisfies: formula_265 Since formula_52 is dense in formula_266 (here, formula_52 actually refers to the set of distributions formula_267) it is sufficient that the defining equality hold for all distributions of the form formula_268 where formula_269 Explicitly, this means that a continuous linear map formula_270 is equal to formula_271 if and only if the condition below holds: formula_272 where the right hand side equals formula_273 Extensions and restrictions to an open subset. Let formula_274 be open subsets of formula_275 Every function formula_276 can be extended by zero from its domain formula_133 to a function on formula_3 by setting it equal to formula_10 on the complement formula_277 This extension is a smooth compactly supported function called the trivial extension of formula_16 to formula_3 and it will be denoted by formula_278 This assignment formula_279 defines the trivial extension operator formula_280 which is a continuous injective linear map. It is used to canonically identify formula_281 as a vector subspace of formula_52 (although not as a topological subspace). 
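The trivial extension operator is straightforward to realize concretely. The following is a minimal sketch for open intervals of the real line (the names trivial_extension and bump01 are ad hoc); smoothness of the extension relies on the fact that a test function on V vanishes to all orders at the boundary of its support.

```python
import numpy as np

def trivial_extension(f, V):
    """Extend a function defined on the open interval V = (a, b) by zero outside V."""
    a, b = V
    def F(x):
        x = np.asarray(x, dtype=float)
        out = np.zeros_like(x)
        inside = (x > a) & (x < b)
        out[inside] = f(x[inside])   # F = f on V and F = 0 on the complement
        return out
    return F

def bump01(x):
    # A smooth bump supported in (0, 1): exp(-1/(1 - t)) with t = (2x - 1)^2.
    x = np.asarray(x, dtype=float)
    t = (2.0 * x - 1.0) ** 2
    out = np.zeros_like(x)
    inside = t < 1
    out[inside] = np.exp(-1.0 / (1.0 - t[inside]))
    return out

F = trivial_extension(bump01, (0.0, 1.0))
print(F(np.array([-1.0, 0.25, 0.5, 1.5])))  # zero outside (0, 1), bump values inside
```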
Its transpose (explained here) formula_282 is called the restriction mapping and, as the name suggests, the image formula_283 of a distribution formula_284 under this map is a distribution on formula_133 called the restriction of formula_285 to formula_286 The defining condition of the restriction formula_283 is: formula_287 If formula_288 then the (continuous injective linear) trivial extension map formula_289 is not a topological embedding (in other words, if this linear injection was used to identify formula_281 as a subset of formula_52 then formula_281's topology would be strictly finer than the subspace topology that formula_52 induces on it; importantly, it would not be a topological subspace since that requires equality of topologies) and its range is also not dense in its codomain formula_66 Consequently, if formula_288 then the restriction mapping is neither injective nor surjective. A distribution formula_290 is said to be extendible to U if it belongs to the range of the transpose of formula_291 and it is called extendible if it is extendable to formula_275 Unless formula_292 the restriction to formula_133 is neither injective nor surjective. Spaces of distributions. For all formula_293 and all formula_294 all of the following canonical injections are continuous and have an image/range that is a dense subset of their codomain: formula_295 where the topologies on the LB-spaces formula_296 are the canonical LF topologies as defined below (so in particular, they are not the usual norm topologies). The range of each of the maps above (and of any composition of the maps above) is dense in the codomain. Indeed, formula_51 is even sequentially dense in every formula_169 For every formula_297 the canonical inclusion formula_298 into the normed space formula_299 (here formula_299 has its usual norm topology) is a continuous linear injection and the range of this injection is dense in its codomain if and only if formula_300. Suppose that formula_301 is one of the LF-spaces formula_44 (for formula_302) or LB-spaces formula_303 (for formula_304) or normed spaces formula_299 (for formula_305). Because the canonical injection formula_306 is a continuous injection whose image is dense in the codomain, this map's transpose formula_307 is a continuous injection. This injective transpose map thus allows the continuous dual space formula_308 of formula_301 to be identified with a certain vector subspace of the space formula_266 of all distributions (specifically, it is identified with the image of this transpose map). This continuous transpose map is not necessarily a TVS-embedding so the topology that this map transfers from its domain to the image formula_309 is finer than the subspace topology that this space inherits from formula_201 A linear subspace of formula_8 carrying a locally convex topology that is finer than the subspace topology induced by formula_310 is called a space of distributions. Almost all of the spaces of distributions mentioned in this article arise in this way (e.g. tempered distribution, restrictions, distributions of order formula_311 some integer, distributions induced by a positive Radon measure, distributions induced by an formula_312-function, etc.) and any representation theorem about the dual space of X may, through the transpose formula_313 be transferred directly to elements of the space formula_314 Compactly supported "Lp"-spaces.
Given formula_297 the vector space formula_296 of compactly supported formula_312 functions on formula_3 and its topology are defined as direct limits of the spaces formula_315 in a manner analogous to how the canonical LF-topologies on formula_44 were defined. For any compact formula_37 let formula_316 denote the set of all elements of formula_299 (which recall are equivalence classes of Lebesgue measurable formula_312 functions on formula_3) having a representative formula_16 whose support (which recall is the closure of formula_317 in formula_3) is a subset of formula_45 (such an formula_16 is defined almost everywhere in formula_45). The set formula_316 is a closed vector subspace of formula_299 and is thus a Banach space and when formula_318 even a Hilbert space. Let formula_296 be the union of all formula_316 as formula_142 ranges over all compact subsets of formula_48 The set formula_296 is a vector subspace of formula_299 whose elements are the (equivalence classes of) compactly supported formula_312 functions defined on formula_3 (or almost everywhere on formula_3). Endow formula_296 with the final topology (direct limit topology) induced by the inclusion maps formula_319 as formula_142 ranges over all compact subsets of formula_48 This topology is called the canonical LF topology and it is equal to the final topology induced by any countable set of inclusion maps formula_320 (formula_321) where formula_322 are any compact sets with union equal to formula_48 This topology makes formula_296 into an LB-space (and thus also an LF-space) with a topology that is strictly finer than the norm (subspace) topology that formula_299 induces on it. Radon measures. The inclusion map formula_323 is a continuous injection whose image is dense in its codomain, so the transpose formula_324 is also a continuous injection. Note that the continuous dual space formula_325 can be identified as the space of Radon measures, where there is a one-to-one correspondence between the continuous linear functionals formula_326 and integration with respect to a Radon measure; that is, (1) if formula_326 then there exists a Radon measure formula_327 on U such that for every formula_328 and (2) if formula_327 is a Radon measure on U then the linear functional on formula_329 defined by formula_330 is continuous. Through the injection formula_331 every Radon measure becomes a distribution on U. If formula_16 is a locally integrable function on U then the distribution formula_332 is a Radon measure; so Radon measures form a large and important space of distributions. The following is the theorem of the structure of distributions of Radon measures, which shows that every Radon measure can be written as a sum of derivatives of locally formula_333 functions in U: &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem. — Suppose formula_284 is a Radon measure, where formula_334 let formula_274 be a neighborhood of the support of formula_335 and let formula_336 There exists a family formula_337 of locally formula_333 functions on U such that formula_338 for every formula_339 and formula_340 Furthermore, formula_285 is also equal to a finite sum of derivatives of continuous functions on formula_74 where each derivative has order formula_341 Positive Radon measures. A linear functional T on a space of functions is called positive if whenever a function formula_16 that belongs to the domain of T is non-negative (meaning that formula_16 is real-valued and formula_342) then formula_343 One may show that every positive linear functional on formula_329 is necessarily continuous (that is, necessarily a Radon measure). Lebesgue measure is an example of a positive Radon measure.
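The pairing of a Radon measure with a compactly supported continuous function can be made concrete numerically. In the following minimal sketch the measure is taken to be the one with density exp(-x^2) with respect to Lebesgue measure on the real line (the names w, T_mu and bump are ad hoc choices); the computed value is non-negative whenever the test function is non-negative, illustrating positivity, and the functional is linear by linearity of the integral.

```python
import numpy as np
from scipy.integrate import quad

# A positive Radon measure on R given by the density w(x) = exp(-x**2) >= 0,
# i.e. d(mu) = w(x) dx.
w = lambda x: np.exp(-x ** 2)

def T_mu(phi, support=(-1.0, 1.0)):
    """The linear functional phi -> integral of phi with respect to mu,
    for phi continuous and supported in the interval `support`."""
    value, _ = quad(lambda x: phi(x) * w(x), *support)
    return value

def bump(x):
    # A continuous, non-negative function supported in [-1, 1].
    return np.maximum(0.0, 1.0 - x ** 2)

print(T_mu(bump))                                        # positive, since bump >= 0 and w >= 0
print(T_mu(lambda x: 2.0 * bump(x)) - 2.0 * T_mu(bump))  # ~0, illustrating linearity
```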
Locally integrable functions as distributions. One particularly important class of Radon measures consists of those induced by locally integrable functions. The function formula_344 is called locally integrable if it is Lebesgue integrable over every compact subset K of U. This is a large class of functions that includes all continuous functions and all formula_312 functions. The topology on formula_52 is defined in such a fashion that any locally integrable function formula_16 yields a continuous linear functional on formula_52 – that is, an element of formula_8 – denoted here by formula_345, whose value on the test function formula_346 is given by the Lebesgue integral: formula_347 Conventionally, one abuses notation by identifying formula_345 with formula_18 provided no confusion can arise, and thus the pairing between formula_345 and formula_346 is often written formula_348 If formula_16 and g are two locally integrable functions, then the associated distributions formula_345 and Tg are equal as elements of formula_8 if and only if formula_16 and g are equal almost everywhere. In a similar manner, every Radon measure formula_327 on U defines an element of formula_8 whose value on the test function formula_346 is formula_349 As above, it is conventional to abuse notation and write the pairing between a Radon measure formula_327 and a test function formula_346 as formula_350 Conversely, as shown in a theorem by Schwartz (similar to the Riesz representation theorem), every distribution which is non-negative on non-negative functions is of this form for some (positive) Radon measure. Test functions as distributions. The test functions are themselves locally integrable, and so define distributions. The space of test functions formula_51 is sequentially dense in formula_8 with respect to the strong topology on formula_201 This means that for any formula_351 there is a sequence of test functions formula_352 that converges to formula_353 (in its strong dual topology) when considered as a sequence of distributions. Or equivalently, formula_354 Furthermore, formula_51 is also sequentially dense in the strong dual space of formula_238 Distributions with compact support. The inclusion map formula_355 is a continuous injection whose image is dense in its codomain, so the transpose formula_356 is also a continuous injection. Thus the image of the transpose, denoted by formula_357 forms a space of distributions when it is endowed with the strong dual topology of formula_358 (transferred to it via the transpose map formula_359 so the topology of formula_360 is finer than the subspace topology that this set inherits from formula_8). The elements of formula_361 can be identified with the distributions with compact support. Explicitly, if T is a distribution on U then the following are equivalent: (1) formula_362 (2) the support of T is compact, and (3) there exists a compact subset K of U such that for every test function formula_346 whose support is disjoint from K, formula_363 Compactly supported distributions define continuous linear functionals on the space formula_67; recall that the topology on formula_67 is defined such that a sequence of test functions formula_364 converges to 0 if and only if all derivatives of formula_364 converge uniformly to 0 on every compact subset of U. Conversely, it can be shown that every continuous linear functional on this space defines a distribution of compact support. Thus compactly supported distributions can be identified with those distributions that can be extended from formula_51 to formula_238 Distributions of finite order.
Let formula_365 The inclusion map formula_366 is a continuous injection whose image is dense in its codomain, so the transpose formula_367 is also a continuous injection. Consequently, the image of formula_368 denoted by formula_369 forms a space of distributions when it is endowed with the strong dual topology of formula_370 (transferred to it via the transpose map formula_371 so formula_372's topology is finer than the subspace topology that this set inherits from formula_8). The elements of formula_373 are the distributions of order formula_374 The distributions of order formula_375 which are also called distributions of order formula_376 are exactly the distributions that are Radon measures (described above). For formula_377 a distribution of order formula_14 is a distribution of order formula_378 that is not a distribution of order formula_379 A distribution is said to be of finite order if there is some integer k such that it is a distribution of order formula_380 and the set of distributions of finite order is denoted by formula_381 Note that if formula_382 then formula_383 so that formula_384 is a vector subspace of formula_8 and furthermore, if and only if formula_385 Structure of distributions of finite order. Every distribution with compact support in U is a distribution of finite order. Indeed, every distribution in U is locally a distribution of finite order, in the following sense: If V is an open and relatively compact subset of U and if formula_386 is the restriction mapping from U to V, then the image of formula_8 under formula_386 is contained in formula_387 The following is the theorem of the structure of distributions of finite order, which shows that every distribution of finite order can be written as a sum of derivatives of Radon measures: &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem — Suppose formula_353 has finite order and formula_388 Given any open subset V of U containing the support of T, there is a family of Radon measures in U, formula_389 such that for every formula_390 and formula_391 Example. (Distributions of infinite order) Let formula_392 and for every test function formula_18 let formula_393 Then S is a distribution of infinite order on U. Moreover, S can not be extended to a distribution on formula_97; that is, there exists no distribution T on formula_97 such that the restriction of T to U is equal to S. Tempered distributions and Fourier transform. Defined below are the tempered distributions, which form a subspace of formula_394 the space of distributions on formula_275 This is a proper subspace: while every tempered distribution is a distribution and an element of formula_394 the converse is not true. Tempered distributions are useful if one studies the Fourier transform since all tempered distributions have a Fourier transform, which is not true for an arbitrary distribution in formula_395 Schwartz space. The Schwartz space, formula_228 is the space of all smooth functions that are rapidly decreasing at infinity along with all of their partial derivatives. Thus formula_396 is in the Schwartz space provided that any derivative of formula_397 multiplied by any power of formula_398 converges to 0 as formula_399 These functions form a complete TVS with a suitably defined family of seminorms.
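For instance, the Gaussian exp(-x^2) is a Schwartz function: every product of a power of x with one of its derivatives remains bounded and tends to 0 at infinity. The following minimal symbolic/numerical sketch checks this for a few exponents (the grid, the ranges of the exponents, and the name phi are arbitrary choices); the suprema being computed are precisely the kind of seminorms written out in the next paragraph.

```python
import numpy as np
import sympy as sp

x = sp.symbols("x", real=True)
phi = sp.exp(-x ** 2)              # a standard example of a Schwartz function

grid = np.linspace(-20.0, 20.0, 200001)
for a in range(3):                 # power of x
    for b in range(3):             # order of the derivative
        expr = x ** a * sp.diff(phi, x, b)
        f = sp.lambdify(x, expr, "numpy")
        sup = float(np.max(np.abs(f(grid))))
        print(a, b, sup)           # each supremum is finite (attained near the origin)
```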
More precisely, for any multi-indices formula_64 and formula_400 define: formula_401 Then formula_346 is in the Schwartz space if all the values satisfy: formula_402 The family of seminorms formula_403 defines a locally convex topology on the Schwartz space. For formula_404 the seminorms are, in fact, norms on the Schwartz space. One can also use the following family of seminorms to define the topology: formula_405 Alternatively, one can define a norm on formula_229 via formula_406 The Schwartz space is a Fréchet space (i.e. a complete metrizable locally convex space). Because the Fourier transform changes formula_407 into multiplication by formula_408 and vice versa, this symmetry implies that the Fourier transform of a Schwartz function is also a Schwartz function. A sequence formula_409 in formula_229 converges to 0 in formula_229 if and only if the functions formula_410 converge to 0 uniformly in the whole of formula_80 which implies that such a sequence must converge to zero in formula_411 formula_412 is dense in formula_413 The subset of all analytic Schwartz functions is dense in formula_229 as well. The Schwartz space is nuclear and the tensor product of two maps induces a canonical surjective TVS-isomorphism formula_414 where formula_415 represents the completion of the injective tensor product (which in this case is identical to the completion of the projective tensor product). Tempered distributions. The inclusion map formula_416 is a continuous injection whose image is dense in its codomain, so the transpose formula_417 is also a continuous injection. Thus, the image of the transpose map, denoted by formula_418 forms a space of distributions when it is endowed with the strong dual topology of formula_419 (transferred to it via the transpose map formula_420 so the topology of formula_421 is finer than the subspace topology that this set inherits from formula_422). The space formula_421 is called the space of tempered distributions. It is the continuous dual of the Schwartz space. Equivalently, a distribution T is a tempered distribution if and only if formula_423 The derivative of a tempered distribution is again a tempered distribution. Tempered distributions generalize the bounded (or slow-growing) locally integrable functions; all distributions with compact support and all square-integrable functions are tempered distributions. More generally, all functions that are products of polynomials with elements of the Lp space formula_424 for formula_425 are tempered distributions. The tempered distributions can also be characterized as slowly growing, meaning that each derivative of T grows at most as fast as some polynomial. This characterization is dual to the rapidly falling behaviour of the derivatives of a function in the Schwartz space, where each derivative of formula_346 decays faster than every inverse power of formula_426 An example of a rapidly falling function is formula_427 for any positive formula_428 Fourier transform. To study the Fourier transform, it is best to consider complex-valued test functions and complex-linear distributions. The ordinary continuous Fourier transform formula_429 is a TVS-automorphism of the Schwartz space, and the Fourier transform is defined to be its transpose formula_430 which (abusing notation) will again be denoted by F. So the Fourier transform of the tempered distribution T is defined by formula_431 for every Schwartz function formula_432 formula_433 is thus again a tempered distribution.
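When the tempered distribution is induced by an integrable function f, the transpose definition is consistent with the classical Fourier transform of f; this amounts to the classical multiplication formula, which states that integrating f against the transform of a test function gives the same value as integrating the transform of f against the test function. The following is a minimal numerical check of this, assuming for concreteness the normalization stated in the code comment (the helper name fourier and the choice of Gaussians are ad hoc).

```python
import numpy as np
from scipy.integrate import quad

# Convention assumed here: (F phi)(xi) = integral of phi(x) * exp(-i * x * xi) dx.
def fourier(phi, xi, lim=20.0):
    re, _ = quad(lambda x: phi(x) * np.cos(x * xi), -lim, lim)
    im, _ = quad(lambda x: -phi(x) * np.sin(x * xi), -lim, lim)
    return re + 1j * im

f = lambda x: np.exp(-x ** 2)          # an integrable function, viewed as the distribution T_f
psi = lambda x: np.exp(-2.0 * x ** 2)  # a Schwartz test function (both transforms are real here)

# <F T_f, psi> = <T_f, F psi>, the transpose definition ...
lhs, _ = quad(lambda t: f(t) * fourier(psi, t).real, -20.0, 20.0)
# ... which agrees with pairing the classical transform of f against psi.
rhs, _ = quad(lambda t: fourier(f, t).real * psi(t), -20.0, 20.0)
print(lhs, rhs, abs(lhs - rhs))        # equal up to quadrature error
```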
The Fourier transform is a TVS isomorphism from the space of tempered distributions onto itself. This operation is compatible with differentiation in the sense that formula_434 and also with convolution: if T is a tempered distribution and formula_435 is a slowly increasing smooth function on formula_80 formula_436 is again a tempered distribution and formula_437 is the convolution of formula_433 and formula_438. In particular, the Fourier transform of the constant function equal to 1 is the formula_439 distribution. Expressing tempered distributions as sums of derivatives. If formula_440 is a tempered distribution, then there exists a constant formula_214 and positive integers M and N such that for all Schwartz functions formula_441 formula_442 This estimate, along with some techniques from functional analysis, can be used to show that there is a continuous slowly increasing function F and a multi-index formula_64 such that formula_443 Restriction of distributions to compact sets. If formula_444 then for any compact set formula_445 there exists a continuous function F compactly supported in formula_34 (possibly supported on a larger set than K itself) and a multi-index formula_64 such that formula_446 on formula_447 Tensor product of distributions. Let formula_448 and formula_449 be open sets. Assume all vector spaces to be over the field formula_450 where formula_451 or formula_452 For formula_453 define for every formula_454 and every formula_455 the following functions: formula_456 Given formula_457 and formula_458 define the following functions: formula_459 where formula_460 and formula_461 These definitions associate every formula_462 and formula_463 with the (respective) continuous linear map: formula_464 Moreover, if either formula_189 (resp. formula_285) has compact support then it also induces a continuous linear map of formula_465 (resp. formula_466). &lt;templatestyles src="Math_theorem/styles.css" /&gt; Fubini's theorem for distributions — Let formula_462 and formula_467 If formula_453 then formula_468 The tensor product of formula_462 and formula_469 denoted by formula_470 or formula_471 is the distribution in formula_472 defined by: formula_473 Schwartz kernel theorem. The tensor product defines a bilinear map formula_474 the span of whose range is a dense subspace of its codomain. Furthermore, formula_475 Moreover, formula_476 induces continuous bilinear maps: formula_477 where formula_478 denotes the space of distributions with compact support and formula_479 is the Schwartz space of rapidly decreasing functions. &lt;templatestyles src="Math_theorem/styles.css" /&gt; Schwartz kernel theorem — Each of the canonical maps below (defined in the natural way) is a TVS isomorphism: formula_480 Here formula_415 represents the completion of the injective tensor product (which in this case is identical to the completion of the projective tensor product, since these spaces are nuclear) and formula_481 has the topology of uniform convergence on bounded subsets. This result does not hold for Hilbert spaces such as formula_482 and its dual space. Why does such a result hold for the space of distributions and test functions but not for other "nice" spaces like the Hilbert space formula_482? This question led Alexander Grothendieck to discover nuclear spaces, nuclear maps, and the injective tensor product. He ultimately showed that it is precisely because formula_52 is a nuclear space that the Schwartz kernel theorem holds.
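The defining identity of the tensor product, together with the Fubini theorem above, can be checked numerically when both distributions are given by integrable densities. In the following minimal sketch (Gaussian densities g and h, and a rapidly decaying function f standing in for a genuine test function of two variables; all names are ad hoc) the two iterated pairings and the double integral of g(x)h(y)f(x, y) agree up to quadrature error.

```python
import numpy as np
from scipy.integrate import quad, dblquad

g = lambda x: np.exp(-x ** 2)                               # density of S = T_g on R
h = lambda y: np.exp(-(y - 1.0) ** 2)                       # density of T = T_h on R
f = lambda x, y: np.exp(-(x ** 2 + x * y + 2.0 * y ** 2))   # stands in for a test function on R^2

L = 10.0  # the integrands are negligible outside [-L, L]

# <S, x -> <T, f(x, .)>> : pair in y first, then in x
inner_y = lambda x: quad(lambda y: h(y) * f(x, y), -L, L)[0]
val1 = quad(lambda x: g(x) * inner_y(x), -L, L)[0]

# <T, y -> <S, f(., y)>> : pair in x first, then in y
inner_x = lambda y: quad(lambda x: g(x) * f(x, y), -L, L)[0]
val2 = quad(lambda y: h(y) * inner_x(y), -L, L)[0]

# (S tensor T)(f) as a double integral of g(x) * h(y) * f(x, y)
val3 = dblquad(lambda y, x: g(x) * h(y) * f(x, y), -L, L, lambda x: -L, lambda x: L)[0]

print(val1, val2, val3)  # all three agree up to quadrature error
```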
Like Hilbert spaces, nuclear spaces may be thought of as generalizations of finite-dimensional Euclidean space. Using holomorphic functions as test functions. The success of the theory led to the investigation of the idea of hyperfunctions, in which spaces of holomorphic functions are used as test functions. A refined theory has been developed, in particular Mikio Sato's algebraic analysis, using sheaf theory and several complex variables. This extends the range of symbolic methods that can be made into rigorous mathematics, for example Feynman integrals. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "U \\subseteq \\R^n" }, { "math_id": 1, "text": "C^\\infty_c(U)," }, { "math_id": 2, "text": "C^\\infty_c(U)" }, { "math_id": 3, "text": "U" }, { "math_id": 4, "text": "\\mathcal{D}^{\\prime}(U) := \\left(C^\\infty_c(U)\\right)^{\\prime}_b," }, { "math_id": 5, "text": "b" }, { "math_id": 6, "text": "\\left(C^\\infty_c(U)\\right)^{\\prime}," }, { "math_id": 7, "text": "U = \\R^n" }, { "math_id": 8, "text": "\\mathcal{D}^{\\prime}(U)" }, { "math_id": 9, "text": "C_c^\\infty(U)," }, { "math_id": 10, "text": "0" }, { "math_id": 11, "text": "n" }, { "math_id": 12, "text": "\\R^{n}." }, { "math_id": 13, "text": "\\N = \\{0, 1, 2, \\ldots\\}" }, { "math_id": 14, "text": "k" }, { "math_id": 15, "text": "\\infty." }, { "math_id": 16, "text": "f" }, { "math_id": 17, "text": "\\operatorname{Dom}(f)" }, { "math_id": 18, "text": "f," }, { "math_id": 19, "text": "\\operatorname{supp}(f)," }, { "math_id": 20, "text": "\\{x \\in \\operatorname{Dom}(f): f(x) \\neq 0\\}" }, { "math_id": 21, "text": "\\operatorname{Dom}(f)." }, { "math_id": 22, "text": "f, g : U \\to \\Complex" }, { "math_id": 23, "text": "\\langle f, g\\rangle := \\int_U f(x) g(x) \\,dx." }, { "math_id": 24, "text": "\\N^n" }, { "math_id": 25, "text": "\\alpha = (\\alpha_1, \\ldots, \\alpha_n) \\in \\N^n" }, { "math_id": 26, "text": "\\alpha_1+\\cdots+\\alpha_n" }, { "math_id": 27, "text": "|\\alpha|." }, { "math_id": 28, "text": "\\begin{align}\nx^\\alpha &= x_1^{\\alpha_1} \\cdots x_n^{\\alpha_n} \\\\\n\\partial^\\alpha &= \\frac{\\partial^{|\\alpha|}}{\\partial x_1^{\\alpha_1}\\cdots \\partial x_n^{\\alpha_n}}\n\\end{align}" }, { "math_id": 29, "text": "\\beta \\geq \\alpha" }, { "math_id": 30, "text": "\\beta_i \\geq \\alpha_i" }, { "math_id": 31, "text": "1 \\leq i\\leq n." }, { "math_id": 32, "text": "\\binom{\\beta}{\\alpha} := \\binom{\\beta_1}{\\alpha_1} \\cdots \\binom{\\beta_n}{\\alpha_n}." }, { "math_id": 33, "text": "\\mathbb{K}" }, { "math_id": 34, "text": "\\R^n" }, { "math_id": 35, "text": "k \\in \\{0, 1, 2, \\ldots, \\infty\\}." }, { "math_id": 36, "text": "C^k(U)" }, { "math_id": 37, "text": "K \\subseteq U," }, { "math_id": 38, "text": "C^k(K)" }, { "math_id": 39, "text": "C^k(K;U)" }, { "math_id": 40, "text": "f \\in C^k(U)" }, { "math_id": 41, "text": "\\operatorname{supp}(f) \\subseteq K." }, { "math_id": 42, "text": "f \\in C^k(K)" }, { "math_id": 43, "text": "K = \\varnothing." }, { "math_id": 44, "text": "C_c^k(U)" }, { "math_id": 45, "text": "K" }, { "math_id": 46, "text": "\\mathbb{K}." }, { "math_id": 47, "text": "C^k" }, { "math_id": 48, "text": "U." }, { "math_id": 49, "text": "j, k \\in \\{0, 1, 2, \\ldots, \\infty\\}" }, { "math_id": 50, "text": "\\begin{align}\nC^k(K) &\\subseteq C^k_c(U) \\subseteq C^k(U) \\\\\nC^k(K) &\\subseteq C^k(L) && \\text{ if } K \\subseteq L \\\\\nC^k(K) &\\subseteq C^j(K) && \\text{ if } j \\leq k \\\\\nC_c^k(U) &\\subseteq C^j_c(U) && \\text{ if } j \\leq k \\\\\nC^k(U) &\\subseteq C^j(U) && \\text{ if } j \\leq k \\\\\n\\end{align}" }, { "math_id": 51, "text": "C_c^\\infty(U)" }, { "math_id": 52, "text": "\\mathcal{D}(U)" }, { "math_id": 53, "text": "K\\subseteq U" }, { "math_id": 54, "text": "C>0" }, { "math_id": 55, "text": "N\\in \\N" }, { "math_id": 56, "text": "f \\in C^\\infty(K)," }, { "math_id": 57, "text": "|T(f)| \\leq C \\sup \\{|\\partial^\\alpha f(x)|: x \\in U, |\\alpha| \\leq N\\}." 
}, { "math_id": 58, "text": "f \\in C_c^\\infty(U)" }, { "math_id": 59, "text": "K," }, { "math_id": 60, "text": "|T(f)| \\leq C \\sup \\{|\\partial^\\alpha f(x)|: x \\in K, |\\alpha|\\leq N\\}." }, { "math_id": 61, "text": "\\{f_i\\}_{i=1}^\\infty" }, { "math_id": 62, "text": "C^\\infty(K)," }, { "math_id": 63, "text": "\\{\\partial^\\alpha f_i\\}_{i=1}^\\infty" }, { "math_id": 64, "text": "\\alpha" }, { "math_id": 65, "text": "T(f_i) \\to 0." }, { "math_id": 66, "text": "\\mathcal{D}(U)." }, { "math_id": 67, "text": "C^\\infty(U)" }, { "math_id": 68, "text": "C^\\infty(K)" }, { "math_id": 69, "text": "C^\\infty(U)," }, { "math_id": 70, "text": "U = \\bigcup_{K \\in \\mathbb{K}} K," }, { "math_id": 71, "text": "K_1,K_2 \\subseteq U" }, { "math_id": 72, "text": "K \\in \\mathbb{K}" }, { "math_id": 73, "text": "K_1\\cup K_2 \\subseteq K." }, { "math_id": 74, "text": "U," }, { "math_id": 75, "text": "\\left\\{\\overline{U}_1, \\overline{U}_2, \\ldots \\right\\}" }, { "math_id": 76, "text": "U = \\bigcup_{i=1}^\\infty U_i," }, { "math_id": 77, "text": "\\overline{U}_i \\subseteq U_{i+1}" }, { "math_id": 78, "text": "U_i" }, { "math_id": 79, "text": "U_i," }, { "math_id": 80, "text": "\\R^n," }, { "math_id": 81, "text": "K_1 \\leq K_2" }, { "math_id": 82, "text": "K_1 \\subseteq K_2." }, { "math_id": 83, "text": "\\mathbb{K}," }, { "math_id": 84, "text": "\\mathbb{K};" }, { "math_id": 85, "text": "\\mathbb{K}_1" }, { "math_id": 86, "text": "\\mathbb{K}_2" }, { "math_id": 87, "text": "C^k(U)." }, { "math_id": 88, "text": "k \\in \\{0, 1, 2, \\ldots, \\infty\\}" }, { "math_id": 89, "text": "i" }, { "math_id": 90, "text": "0 \\leq i \\leq k" }, { "math_id": 91, "text": "k \\neq \\infty" }, { "math_id": 92, "text": "p" }, { "math_id": 93, "text": "| p|\\leq k." }, { "math_id": 94, "text": "K \\neq \\varnothing," }, { "math_id": 95, "text": "\\begin{alignat}{4}\n\\text{ (1) }\\ & s_{p,K}(f) &&:= \\sup_{x_0 \\in K} \\left| \\partial^p f(x_0) \\right| \\\\[4pt]\n\\text{ (2) }\\ & q_{i,K}(f) &&:= \\sup_{|p| \\leq i} \\left(\\sup_{x_0 \\in K} \\left| \\partial^p f(x_0) \\right|\\right) = \\sup_{|p| \\leq i} \\left(s_{p, K}(f)\\right) \\\\[4pt]\n\\text{ (3) }\\ & r_{i,K}(f) &&:= \\sup_{\\stackrel{|p| \\leq i}{x_0 \\in K}} \\left| \\partial^p f(x_0) \\right| \\\\[4pt]\n\\text{ (4) }\\ & t_{i,K}(f) &&:= \\sup_{x_0 \\in K} \\left(\\sum_{|p| \\leq i} \\left| \\partial^p f(x_0) \\right|\\right)\n\\end{alignat}" }, { "math_id": 96, "text": "K = \\varnothing," }, { "math_id": 97, "text": "\\R" }, { "math_id": 98, "text": "\\begin{alignat}{4}\nA ~:= \\quad &\\{q_{i,K} &&: \\;K \\text{ compact and } \\;&&i \\in \\N \\text{ satisfies } \\;&&0 \\leq i \\leq k\\} \\\\\nB ~:= \\quad &\\{r_{i,K} &&: \\;K \\text{ compact and } \\;&&i \\in \\N \\text{ satisfies } \\;&&0 \\leq i \\leq k\\} \\\\\nC ~:= \\quad &\\{t_{i,K} &&: \\;K \\text{ compact and } \\;&&i \\in \\N \\text{ satisfies } \\;&&0 \\leq i \\leq k\\} \\\\\nD ~:= \\quad &\\{s_{p,K} &&: \\;K \\text{ compact and } \\;&&p \\in \\N^n \\text{ satisfies } \\;&&|p| \\leq k\\}\n\\end{alignat}" }, { "math_id": 99, "text": "A" }, { "math_id": 100, "text": "C" }, { "math_id": 101, "text": "A, B, C, D" }, { "math_id": 102, "text": "A \\cup B \\cup C \\cup D." }, { "math_id": 103, "text": "A \\cup B \\cup C \\cup D" }, { "math_id": 104, "text": "(f_i)_{i\\in I}" }, { "math_id": 105, "text": "|p|< k + 1" }, { "math_id": 106, "text": "\\left(\\partial^p f_i\\right)_{i \\in I}" }, { "math_id": 107, "text": "\\partial^p f" }, { "math_id": 108, "text": "K." 
}, { "math_id": 109, "text": "k \\in \\{0, 1, 2, \\ldots, \\infty\\}," }, { "math_id": 110, "text": "C^{k+1}(U)" }, { "math_id": 111, "text": "C^i(U)" }, { "math_id": 112, "text": "i \\in \\N." }, { "math_id": 113, "text": "k = \\infty." }, { "math_id": 114, "text": "W" }, { "math_id": 115, "text": "i\\in \\N" }, { "math_id": 116, "text": "C^i(U)." }, { "math_id": 117, "text": "\\mathbb{K} = \\left\\{\\overline{U}_1, \\overline{U}_2, \\ldots \\right\\}" }, { "math_id": 118, "text": "U = \\bigcup_{j=1}^\\infty U_j" }, { "math_id": 119, "text": "i," }, { "math_id": 120, "text": "(r_{i,K_i})_{i=1}^\\infty" }, { "math_id": 121, "text": "d(f, g) := \\sum^\\infty_{i=1} \\frac{1}{2^i} \\frac{r_{i, \\overline{U}_i}(f - g)}{1 + r_{i, \\overline{U}_i}(f - g)} = \\sum^\\infty_{i=1} \\frac{1}{2^i} \\frac{\\sup_{|p| \\leq i, x \\in \\overline{U}_i} \\left| \\partial^p (f - g)(x) \\right|}{\\left[ 1 + \\sup_{|p| \\leq i, x \\in \\overline{U}_i} \\left| \\partial^p (f - g)(x) \\right| \\right]}." }, { "math_id": 122, "text": "C^k(K) \\subseteq C^k(U)." }, { "math_id": 123, "text": "K, L \\subseteq U" }, { "math_id": 124, "text": "K \\subseteq L," }, { "math_id": 125, "text": "\\operatorname{In}_K^L : C^k(K) \\to C^k(L)." }, { "math_id": 126, "text": "C^k(L)," }, { "math_id": 127, "text": "C^k(L)." }, { "math_id": 128, "text": "r_K(f) := \\sup_{|p|<k} \\left(\\sup_{x_0 \\in K} \\left| \\partial^p f(x_0) \\right|\\right)." }, { "math_id": 129, "text": "k = 2," }, { "math_id": 130, "text": "\\,C^k(K)" }, { "math_id": 131, "text": "C^\\infty(K) \\neq \\{0\\}" }, { "math_id": 132, "text": "C^k(K)," }, { "math_id": 133, "text": "V" }, { "math_id": 134, "text": "K \\subseteq V," }, { "math_id": 135, "text": "C^k(K; V)" }, { "math_id": 136, "text": "C^k(V)" }, { "math_id": 137, "text": "f \\in C_c^k(U)," }, { "math_id": 138, "text": "I(f) := F : V \\to \\Complex" }, { "math_id": 139, "text": "F(x) = \\begin{cases} f(x) & x \\in U, \\\\ 0 & \\text{otherwise}, \\end{cases}" }, { "math_id": 140, "text": "F \\in C^k(V)." }, { "math_id": 141, "text": "I : C_c^k(U) \\to C^k(V)" }, { "math_id": 142, "text": "K \\subseteq U" }, { "math_id": 143, "text": "K \\subseteq U \\subseteq V" }, { "math_id": 144, "text": "\\begin{alignat}{4}\nI\\left(C^k(K; U)\\right) &~=~ C^k(K; V) \\qquad \\text{ and thus } \\\\\nI\\left(C_c^k(U)\\right) &~\\subseteq~ C_c^k(V)\n\\end{alignat}" }, { "math_id": 145, "text": "C^k(K; U)" }, { "math_id": 146, "text": "\\begin{alignat}{4}\n \\,& C^k(K; U) && \\to \\,&& C^k(K;V) \\\\\n & f && \\mapsto\\,&& I(f) \\\\\n\\end{alignat}" }, { "math_id": 147, "text": "f \\mapsto I(f)" }, { "math_id": 148, "text": "C^k(K; U) \\to C^k(V), \\qquad \\text{ and } \\qquad C^k(K; U) \\to C_c^k(V)," }, { "math_id": 149, "text": "C_c^k(V)" }, { "math_id": 150, "text": "I : C_c^k(U) \\to C^k(V)" }, { "math_id": 151, "text": "C_c^k(V) \\subseteq C^k(V)" }, { "math_id": 152, "text": "U \\neq V" }, { "math_id": 153, "text": "I : C_c^\\infty(U)\\to C_c^\\infty(V)" }, { "math_id": 154, "text": "C^k(K; U) \\subseteq C_c^k(U)," }, { "math_id": 155, "text": "C^k(V)." }, { "math_id": 156, "text": "C^k(K; U)." }, { "math_id": 157, "text": "k = \\infty" }, { "math_id": 158, "text": "K \\leq L" }, { "math_id": 159, "text": "\\operatorname{In}_K^L : C^k(K) \\to C^k(L)\\quad \\text{and} \\quad \\operatorname{In}_K^U : C^k(K) \\to C_c^k(U)." 
}, { "math_id": 160, "text": "\\operatorname{In}_K^L : C^k(K) \\to C^k(L)" }, { "math_id": 161, "text": "\\left\\{\\operatorname{In}_K^L \\;:\\; K, L \\in \\mathbb{K} \\;\\text{ and }\\; K \\subseteq L \\right\\}" }, { "math_id": 162, "text": "(C_c^k(U), \\operatorname{In}_{\\bullet}^U)" }, { "math_id": 163, "text": "\\operatorname{In}_{\\bullet}^U := \\left(\\operatorname{In}_K^U\\right)_{K \\in \\mathbb{K}}" }, { "math_id": 164, "text": "\\operatorname{In}_\\bullet^U = (\\operatorname{In}_K^U)_{K \\in \\mathbb{K}}" }, { "math_id": 165, "text": "\\operatorname{In}_K^U : C^k(K) \\to C_c^k(U)" }, { "math_id": 166, "text": "C_c^k(U)," }, { "math_id": 167, "text": "K \\in \\mathbb{K}," }, { "math_id": 168, "text": "C^k(K)." }, { "math_id": 169, "text": "C_c^k(U)." }, { "math_id": 170, "text": "P := \\sum_{\\alpha \\in \\N^n} c_\\alpha \\partial^\\alpha" }, { "math_id": 171, "text": "c_\\alpha \\in C^\\infty(U)" }, { "math_id": 172, "text": "c_\\alpha" }, { "math_id": 173, "text": "\\sup \\{|\\alpha|: c_\\alpha \\neq 0\\}" }, { "math_id": 174, "text": "P." }, { "math_id": 175, "text": "P" }, { "math_id": 176, "text": "C^k(U) \\to C^0(U)" }, { "math_id": 177, "text": "\\phi \\mapsto P\\phi," }, { "math_id": 178, "text": "1 \\leq k \\leq \\infty," }, { "math_id": 179, "text": "\\,< k + 1" }, { "math_id": 180, "text": "C_c^0(U)." }, { "math_id": 181, "text": "u : C_c^k(U) \\to Y" }, { "math_id": 182, "text": "I : C_c^k(U)\\to C_c^k(V)" }, { "math_id": 183, "text": "I\\left(C_c^\\infty(U)\\right)" }, { "math_id": 184, "text": "C_c^\\infty(V)" }, { "math_id": 185, "text": "B \\subseteq C_c^k(U)" }, { "math_id": 186, "text": "B \\subseteq C^k(K)" }, { "math_id": 187, "text": "B" }, { "math_id": 188, "text": "S \\subseteq C^k(K)" }, { "math_id": 189, "text": "S" }, { "math_id": 190, "text": "0 \\leq k \\leq \\infty," }, { "math_id": 191, "text": "C_c^{k+1}(U)" }, { "math_id": 192, "text": "\\infty + 1 = \\infty." }, { "math_id": 193, "text": "C_c^k(U)\\to C^k(U)" }, { "math_id": 194, "text": "C_c^\\infty(U)\\to X" }, { "math_id": 195, "text": "C_c^\\infty(U)\\to C^k(U)," }, { "math_id": 196, "text": "C_c^\\infty(U)\\to L^p(U)," }, { "math_id": 197, "text": "C^k(U)," }, { "math_id": 198, "text": "C_c^\\infty(U) \\to C_c^\\infty(U)" }, { "math_id": 199, "text": "C^\\infty(\\R^m) \\times C_c^\\infty(\\R^n) \\to C_c^\\infty(\\R^{m+n})" }, { "math_id": 200, "text": "(f,g)\\mapsto fg" }, { "math_id": 201, "text": "\\mathcal{D}^{\\prime}(U)." }, { "math_id": 202, "text": "C_c^\\infty(U)." }, { "math_id": 203, "text": "f \\in C_c^\\infty(U)," }, { "math_id": 204, "text": "\\begin{cases} \\mathcal{D}^{\\prime}(U) \\times C_c^\\infty(U) \\to \\R \\\\ (T, f) \\mapsto \\langle T, f \\rangle := T(f) \\end{cases}" }, { "math_id": 205, "text": "\\left(f_i\\right)_{i=1}^\\infty" }, { "math_id": 206, "text": "\\lim_{i \\to \\infty} T\\left(f_i\\right) = T(f);" }, { "math_id": 207, "text": "\\lim_{i \\to \\infty} T\\left(f_i\\right) = 0." }, { "math_id": 208, "text": "\\left(T\\left(f_i\\right)\\right)_{i=1}^\\infty" }, { "math_id": 209, "text": "f_{\\bull} = \\left(f_i\\right)_{i=1}^\\infty" }, { "math_id": 210, "text": "r_{\\bull} = \\left(r_i\\right)_{i=1}^\\infty \\to \\infty" }, { "math_id": 211, "text": "\\left(r_i f_i\\right)_{i=1}^\\infty" }, { "math_id": 212, "text": "g" }, { "math_id": 213, "text": "|T| \\leq g." 
}, { "math_id": 214, "text": "C > 0," }, { "math_id": 215, "text": "\\mathcal{P}," }, { "math_id": 216, "text": "\\left\\{g_1, \\ldots, g_m\\right\\} \\subseteq \\mathcal{P}" }, { "math_id": 217, "text": "|T| \\leq C(g_1 + \\cdots g_m);" }, { "math_id": 218, "text": "C > 0" }, { "math_id": 219, "text": "N \\in \\N" }, { "math_id": 220, "text": "|T(f)| \\leq C \\sup \\{|\\partial^p f(x)| : x \\in U, |\\alpha|\\leq N\\}." }, { "math_id": 221, "text": "C_K>0" }, { "math_id": 222, "text": "N_K\\in \\N" }, { "math_id": 223, "text": "|T(f)| \\leq C_K \\sup \\{\\left|\\partial^\\alpha f(x)\\right| : x \\in K, |\\alpha|\\leq N_K\\}." }, { "math_id": 224, "text": "\\{\\partial^p f_i\\}_{i=1}^\\infty" }, { "math_id": 225, "text": "p," }, { "math_id": 226, "text": "\\mathcal{D}^{\\prime}(U)," }, { "math_id": 227, "text": "\\mathcal{D}^{\\prime}(U) := \\left(C_c^\\infty(U)\\right)^{\\prime}_b." }, { "math_id": 228, "text": "\\mathcal{S}(\\R^n)," }, { "math_id": 229, "text": "\\mathcal{S}(\\R^n)" }, { "math_id": 230, "text": "f_i." }, { "math_id": 231, "text": "\\alpha," }, { "math_id": 232, "text": "\\partial^\\alpha f_{i}" }, { "math_id": 233, "text": "\\partial^\\alpha f." }, { "math_id": 234, "text": "F : X \\to Y" }, { "math_id": 235, "text": "Y." }, { "math_id": 236, "text": "k \\in \\{0, 1, \\ldots, \\infty\\}, C_c^\\infty(U)" }, { "math_id": 237, "text": "\\{D_\\phi : \\phi \\in C_c^\\infty(U)\\}" }, { "math_id": 238, "text": "C^\\infty(U)." }, { "math_id": 239, "text": "(T_i)_{i=1}^\\infty" }, { "math_id": 240, "text": "\\langle T_i, f \\rangle \\to \\langle T, f \\rangle" }, { "math_id": 241, "text": "f \\in \\mathcal{D}(U)." }, { "math_id": 242, "text": "f_m:\\R\\to\\R" }, { "math_id": 243, "text": "f_m(x) = \\begin{cases} m & \\text{ if } x \\in [0,\\frac{1}{m}] \\\\ 0 & \\text{ otherwise } \\end{cases}" }, { "math_id": 244, "text": "T_m" }, { "math_id": 245, "text": "f_m," }, { "math_id": 246, "text": "\\langle T_m, f \\rangle = m \\int_0^{\\frac{1}{m}} f(x)\\, dx \\to f(0) = \\langle \\delta, f \\rangle" }, { "math_id": 247, "text": "m \\to \\infty," }, { "math_id": 248, "text": "T_m \\to \\delta" }, { "math_id": 249, "text": "\\mathcal{D}^{\\prime}(\\R)." }, { "math_id": 250, "text": "m," }, { "math_id": 251, "text": "f_m" }, { "math_id": 252, "text": "C_c^\\infty(U) \\to (\\mathcal{D}^{\\prime}(U))'_{b}" }, { "math_id": 253, "text": "d \\in \\mathcal{D}^{\\prime}(U)" }, { "math_id": 254, "text": "d(f)" }, { "math_id": 255, "text": "A : X \\to Y" }, { "math_id": 256, "text": "{}^{t}A : Y' \\to X' \\qquad \\text{ defined by } \\qquad {}^{t}A(y') := y' \\circ A," }, { "math_id": 257, "text": "\\langle y', A(x)\\rangle = \\left\\langle {}^{t}A (y'), x \\right\\rangle" }, { "math_id": 258, "text": "x \\in X" }, { "math_id": 259, "text": "y' \\in Y'" }, { "math_id": 260, "text": "y'" }, { "math_id": 261, "text": "Y'" }, { "math_id": 262, "text": "{}^{t}A : Y' \\to X'" }, { "math_id": 263, "text": "A : \\mathcal{D}(U) \\to \\mathcal{D}(U)" }, { "math_id": 264, "text": "A^t : \\mathcal{D}'(U) \\to \\mathcal{D}'(U)" }, { "math_id": 265, "text": "\\langle {}^{t}A(T), \\phi \\rangle = \\langle T, A(\\phi) \\rangle \\quad \\text{ for all } \\phi \\in \\mathcal{D}(U) \\text{ and all } T \\in \\mathcal{D}'(U)." }, { "math_id": 266, "text": "\\mathcal{D}'(U)" }, { "math_id": 267, "text": "\\left\\{D_\\psi : \\psi \\in \\mathcal{D}(U)\\right\\}" }, { "math_id": 268, "text": "T = D_\\psi" }, { "math_id": 269, "text": "\\psi \\in \\mathcal{D}(U)." 
}, { "math_id": 270, "text": "B : \\mathcal{D}'(U) \\to \\mathcal{D}'(U)" }, { "math_id": 271, "text": "{}^{t}A" }, { "math_id": 272, "text": "\\langle B(D_\\psi), \\phi \\rangle = \\langle {}^{t}A(D_\\psi), \\phi \\rangle \\quad \\text{ for all } \\phi, \\psi \\in \\mathcal{D}(U)" }, { "math_id": 273, "text": "\\langle {}^{t}A(D_\\psi), \\phi \\rangle = \\langle D_\\psi, A(\\phi) \\rangle = \\langle \\psi, A(\\phi) \\rangle = \\int_U \\psi \\cdot A(\\phi) \\,dx." }, { "math_id": 274, "text": "V \\subseteq U" }, { "math_id": 275, "text": "\\R^n." }, { "math_id": 276, "text": "f \\in \\mathcal{D}(V)" }, { "math_id": 277, "text": "U \\setminus V." }, { "math_id": 278, "text": "E_{VU} (f)." }, { "math_id": 279, "text": "f \\mapsto E_{VU} (f)" }, { "math_id": 280, "text": "E_{VU} : \\mathcal{D}(V) \\to \\mathcal{D}(U)," }, { "math_id": 281, "text": "\\mathcal{D}(V)" }, { "math_id": 282, "text": "\\rho_{VU} := {}^{t}E_{VU} : \\mathcal{D}'(U) \\to \\mathcal{D}'(V)," }, { "math_id": 283, "text": "\\rho_{VU}(T)" }, { "math_id": 284, "text": "T \\in \\mathcal{D}'(U)" }, { "math_id": 285, "text": "T" }, { "math_id": 286, "text": "V." }, { "math_id": 287, "text": "\\langle \\rho_{VU} T, \\phi \\rangle = \\langle T, E_{VU} \\phi \\rangle \\quad \\text{ for all } \\phi \\in \\mathcal{D}(V)." }, { "math_id": 288, "text": "V \\neq U" }, { "math_id": 289, "text": "E_{VU} : \\mathcal{D}(V) \\to \\mathcal{D}(U)" }, { "math_id": 290, "text": "S \\in \\mathcal{D}'(V)" }, { "math_id": 291, "text": "E_{VU}" }, { "math_id": 292, "text": "U = V," }, { "math_id": 293, "text": "0 < k < \\infty" }, { "math_id": 294, "text": "1 < p < \\infty," }, { "math_id": 295, "text": "\\begin{matrix}\nC_c^\\infty(U) & \\to & C_c^k(U) & \\to & C_c^0(U) & \\to & L_c^\\infty(U) & \\to & L_c^{p+1}(U) & \\to & L_c^p(U) & \\to & L_c^1(U) \\\\\n\\downarrow & &\\downarrow && \\downarrow && && && && && \\\\\nC^\\infty(U) & \\to & C^k(U) & \\to & C^0(U) && && && && &&\n\\end{matrix}" }, { "math_id": 296, "text": "L_c^p(U)" }, { "math_id": 297, "text": "1 \\leq p \\leq \\infty," }, { "math_id": 298, "text": "C_c^\\infty(U) \\to L^p(U)" }, { "math_id": 299, "text": "L^p(U)" }, { "math_id": 300, "text": "p \\neq \\infty" }, { "math_id": 301, "text": "X" }, { "math_id": 302, "text": "k \\in \\{0, 1, \\ldots, \\infty\\}" }, { "math_id": 303, "text": "L^p_c(U)" }, { "math_id": 304, "text": "1 \\leq p \\leq \\infty" }, { "math_id": 305, "text": "1 \\leq p < \\infty" }, { "math_id": 306, "text": "\\operatorname{In}_X : C_c^\\infty(U) \\to X" }, { "math_id": 307, "text": "{}^{t}\\operatorname{In}_X : X'_b \\to \\mathcal{D}'(U) = \\left(C_c^\\infty(U)\\right)'_b" }, { "math_id": 308, "text": "X'" }, { "math_id": 309, "text": "\\operatorname{Im}\\left({}^{t}\\operatorname{In}_X\\right)" }, { "math_id": 310, "text": "\\mathcal{D}^{\\prime}(U) = \\left(C_c^\\infty(U)\\right)^{\\prime}_b" }, { "math_id": 311, "text": "\\leq" }, { "math_id": 312, "text": "L^p" }, { "math_id": 313, "text": "{}^{t}\\operatorname{In}_X : X'_b \\to \\mathcal{D}^{\\prime}(U)," }, { "math_id": 314, "text": "\\operatorname{Im} \\left({}^{t}\\operatorname{In}_X\\right)." 
}, { "math_id": 315, "text": "L_c^p(K)" }, { "math_id": 316, "text": "L^p(K)" }, { "math_id": 317, "text": "\\{u \\in U : f(x) \\neq 0\\}" }, { "math_id": 318, "text": "p = 2," }, { "math_id": 319, "text": "L^p(K) \\to L_c^p(U)" }, { "math_id": 320, "text": "L^p(K_n) \\to L_c^p(U)" }, { "math_id": 321, "text": "n = 1, 2, \\ldots" }, { "math_id": 322, "text": "K_1 \\subseteq K_2 \\subseteq \\cdots" }, { "math_id": 323, "text": "\\operatorname{In} : C_c^\\infty(U) \\to C_c^0(U)" }, { "math_id": 324, "text": "{}^{t}\\operatorname{In} : \\left(C_c^0(U)\\right)^{\\prime}_b \\to \\mathcal{D}^{\\prime}(U) = \\left(C_c^\\infty(U)\\right)^{\\prime}_b" }, { "math_id": 325, "text": "\\left(C_c^0(U)\\right)^{\\prime}_b" }, { "math_id": 326, "text": "T \\in \\left(C_c^0(U)\\right)^{\\prime}_b" }, { "math_id": 327, "text": "\\mu" }, { "math_id": 328, "text": "f \\in C_c^0(U), T(f) = \\textstyle \\int_U f \\, d\\mu," }, { "math_id": 329, "text": "C_c^0(U)" }, { "math_id": 330, "text": "C_c^0(U) \\ni f \\mapsto \\textstyle \\int_U f \\, d\\mu" }, { "math_id": 331, "text": "{}^{t}\\operatorname{In} : \\left(C_c^0(U)\\right)^{\\prime}_b \\to \\mathcal{D}^{\\prime}(U)," }, { "math_id": 332, "text": "\\phi \\mapsto \\textstyle \\int_U f(x) \\phi(x) \\, dx" }, { "math_id": 333, "text": "L^\\infty" }, { "math_id": 334, "text": "U \\subseteq \\R^n," }, { "math_id": 335, "text": "T," }, { "math_id": 336, "text": "I = \\{p \\in \\N^n : |p| \\leq n\\}." }, { "math_id": 337, "text": "f=(f_p)_{p\\in I}" }, { "math_id": 338, "text": "\\operatorname{supp} f_p \\subseteq V" }, { "math_id": 339, "text": "p\\in I," }, { "math_id": 340, "text": "T = \\sum_{p\\in I} \\partial^p f_p." }, { "math_id": 341, "text": "\\leq 2 n." }, { "math_id": 342, "text": "f \\geq 0" }, { "math_id": 343, "text": "T(f) \\geq 0." }, { "math_id": 344, "text": "f : U \\to \\R" }, { "math_id": 345, "text": "T_f" }, { "math_id": 346, "text": "\\phi" }, { "math_id": 347, "text": "\\langle T_f, \\phi \\rangle = \\int_U f \\phi\\,dx." }, { "math_id": 348, "text": "\\langle f, \\phi \\rangle = \\langle T_f, \\phi \\rangle." }, { "math_id": 349, "text": "\\textstyle\\int\\phi \\,d\\mu." }, { "math_id": 350, "text": "\\langle \\mu, \\phi \\rangle." }, { "math_id": 351, "text": "T \\in \\mathcal{D}^{\\prime}(U)," }, { "math_id": 352, "text": "(\\phi_i)_{i=1}^\\infty," }, { "math_id": 353, "text": "T \\in \\mathcal{D}^{\\prime}(U)" }, { "math_id": 354, "text": "\\langle \\phi_i, \\psi \\rangle \\to \\langle T, \\psi \\rangle \\qquad \\text{ for all } \\psi \\in \\mathcal{D}(U)." }, { "math_id": 355, "text": "\\operatorname{In} : C_c^\\infty(U) \\to C^\\infty(U)" }, { "math_id": 356, "text": "{}^{t}\\operatorname{In} : \\left(C^\\infty(U)\\right)^{\\prime}_b \\to \\mathcal{D}^{\\prime}(U) = \\left(C_c^\\infty(U)\\right)^{\\prime}_b" }, { "math_id": 357, "text": "\\mathcal{E}^{\\prime}(U)," }, { "math_id": 358, "text": "\\left(C^\\infty(U)\\right)^{\\prime}_b" }, { "math_id": 359, "text": "{}^{t}\\operatorname{In} : \\left(C^\\infty(U)\\right)^{\\prime}_b \\to \\mathcal{E}^{\\prime}(U)," }, { "math_id": 360, "text": "\\mathcal{E}^{\\prime}(U)" }, { "math_id": 361, "text": "\\mathcal{E}^{\\prime}(U) = \\left(C^\\infty(U)\\right)^{\\prime}_b" }, { "math_id": 362, "text": "T \\in \\mathcal{E}^{\\prime}(U)" }, { "math_id": 363, "text": "T(\\phi)=0." }, { "math_id": 364, "text": "\\phi_k" }, { "math_id": 365, "text": "k \\in \\N." 
}, { "math_id": 366, "text": "\\operatorname{In} : C_c^\\infty(U) \\to C_c^k(U)" }, { "math_id": 367, "text": "{}^{t}\\operatorname{In} : \\left(C_c^k(U)\\right)^{\\prime}_b \\to \\mathcal{D}^{\\prime}(U) = \\left(C_c^\\infty(U)\\right)^{\\prime}_b" }, { "math_id": 368, "text": "{}^{t}\\operatorname{In}," }, { "math_id": 369, "text": "\\mathcal{D}'^k(U)," }, { "math_id": 370, "text": "\\left(C_c^k(U)\\right)^{\\prime}_b" }, { "math_id": 371, "text": "{}^{t}\\operatorname{In} : \\left(C^\\infty(U)\\right)^{\\prime}_b \\to \\mathcal{D}'^k(U)," }, { "math_id": 372, "text": "\\mathcal{D}'^{m}(U)" }, { "math_id": 373, "text": "\\mathcal{D}'^k(U)" }, { "math_id": 374, "text": "\\,\\leq k." }, { "math_id": 375, "text": "\\,\\leq 0," }, { "math_id": 376, "text": "0," }, { "math_id": 377, "text": "0 \\neq k \\in \\N," }, { "math_id": 378, "text": "\\,\\leq k" }, { "math_id": 379, "text": "\\,\\leq k - 1" }, { "math_id": 380, "text": "\\,\\leq k," }, { "math_id": 381, "text": "\\mathcal{D}'^{F}(U)." }, { "math_id": 382, "text": "k \\leq 1" }, { "math_id": 383, "text": "\\mathcal{D}'^k(U) \\subseteq \\mathcal{D}'^{l}(U)" }, { "math_id": 384, "text": "\\mathcal{D}'^{F}(U)" }, { "math_id": 385, "text": "\\mathcal{D}'^{F}(U) = \\mathcal{D}^{\\prime}(U)." }, { "math_id": 386, "text": "\\rho_{VU}" }, { "math_id": 387, "text": "\\mathcal{D}'^{F}(V)." }, { "math_id": 388, "text": "I =\\{p \\in \\N^n : |p| \\leq k\\}." }, { "math_id": 389, "text": "(\\mu_p)_{p \\in I}," }, { "math_id": 390, "text": "p \\in I, \\operatorname{supp}(\\mu_p) \\subseteq V" }, { "math_id": 391, "text": "T = \\sum_{|p| \\leq k} \\partial^p \\mu_p." }, { "math_id": 392, "text": "U := (0, \\infty)" }, { "math_id": 393, "text": "S f := \\sum_{m=1}^\\infty (\\partial^{m} f)\\left(\\frac{1}{m}\\right)." }, { "math_id": 394, "text": "\\mathcal{D}^{\\prime}(\\R^n)," }, { "math_id": 395, "text": "\\mathcal{D}^{\\prime}(\\R^n)." }, { "math_id": 396, "text": "\\phi:\\R^n\\to\\R" }, { "math_id": 397, "text": "\\phi," }, { "math_id": 398, "text": "|x|," }, { "math_id": 399, "text": "|x| \\to \\infty." }, { "math_id": 400, "text": "\\beta" }, { "math_id": 401, "text": "p_{\\alpha, \\beta} (\\phi) ~=~ \\sup_{x \\in \\R^n} \\left|x^\\alpha \\partial^\\beta \\phi(x) \\right|." }, { "math_id": 402, "text": "p_{\\alpha, \\beta} (\\phi) < \\infty." }, { "math_id": 403, "text": "p_{\\alpha,\\beta}" }, { "math_id": 404, "text": "n = 1," }, { "math_id": 405, "text": "|f|_{m,k} = \\sup_{|p|\\leq m} \\left(\\sup_{x \\in \\R^n} \\left\\{(1+|x|)^k \\left|(\\partial^\\alpha f)(x) \\right|\\right\\}\\right), \\qquad k,m \\in \\N." }, { "math_id": 406, "text": "\\|\\phi \\|_k ~=~ \\max_{|\\alpha| + |\\beta| \\leq k} \\sup_{x \\in \\R^n} \\left| x^\\alpha \\partial^\\beta \\phi(x)\\right|, \\qquad k \\geq 1." }, { "math_id": 407, "text": "\\partial^\\alpha" }, { "math_id": 408, "text": "x^\\alpha" }, { "math_id": 409, "text": "\\{f_i\\}" }, { "math_id": 410, "text": "(1 + |x|)^k (\\partial^p f_i)(x)" }, { "math_id": 411, "text": "C^\\infty(\\R^n)." }, { "math_id": 412, "text": "\\mathcal{D}(\\R^n)" }, { "math_id": 413, "text": "\\mathcal{S}(\\R^n)." 
}, { "math_id": 414, "text": "\\mathcal{S}(\\R^m) \\ \\widehat{\\otimes}\\ \\mathcal{S}(\\R^n) \\to \\mathcal{S}(\\R^{m + n})," }, { "math_id": 415, "text": "\\widehat{\\otimes}" }, { "math_id": 416, "text": "\\operatorname{In} : \\mathcal{D}(\\R^n) \\to \\mathcal{S}(\\R^n)" }, { "math_id": 417, "text": "{}^{t}\\operatorname{In} : (\\mathcal{S}(\\R^n))'_b \\to \\mathcal{D}^{\\prime}(\\R^n)" }, { "math_id": 418, "text": "\\mathcal{S}^{\\prime}(\\R^n)," }, { "math_id": 419, "text": "(\\mathcal{S}(\\R^n))'_b" }, { "math_id": 420, "text": "{}^{t}\\operatorname{In} : (\\mathcal{S}(\\R^n))'_b \\to \\mathcal{D}^{\\prime}(\\R^n)," }, { "math_id": 421, "text": "\\mathcal{S}^{\\prime}(\\R^n)" }, { "math_id": 422, "text": "\\mathcal{D}^{\\prime}(\\R^n)" }, { "math_id": 423, "text": "\\left(\\text{ for all } \\alpha, \\beta \\in \\N^n: \\lim_{m\\to \\infty} p_{\\alpha, \\beta} (\\phi_m) = 0\\right) \\Longrightarrow \\lim_{m\\to \\infty} T(\\phi_m)=0." }, { "math_id": 424, "text": "L^p(\\R^n)" }, { "math_id": 425, "text": "p \\geq 1" }, { "math_id": 426, "text": "|x|." }, { "math_id": 427, "text": "|x|^n\\exp (-\\lambda |x|^\\beta)" }, { "math_id": 428, "text": "n, \\lambda, \\beta." }, { "math_id": 429, "text": "F : \\mathcal{S}(\\R^n) \\to \\mathcal{S}(\\R^n)" }, { "math_id": 430, "text": "{}^{t}F : \\mathcal{S}^{\\prime}(\\R^n) \\to \\mathcal{S}^{\\prime}(\\R^n)," }, { "math_id": 431, "text": "(FT)(\\psi) = T(F \\psi)" }, { "math_id": 432, "text": "\\psi." }, { "math_id": 433, "text": "FT" }, { "math_id": 434, "text": "F \\dfrac{dT}{dx} = ixFT" }, { "math_id": 435, "text": "\\psi" }, { "math_id": 436, "text": "\\psi T" }, { "math_id": 437, "text": "F(\\psi T) = F \\psi * FT" }, { "math_id": 438, "text": "F \\psi" }, { "math_id": 439, "text": "\\delta" }, { "math_id": 440, "text": "T \\in \\mathcal{S}^{\\prime}(\\R^n)" }, { "math_id": 441, "text": "\\phi \\in \\mathcal{S}(\\R^n)" }, { "math_id": 442, "text": "\\langle T, \\phi \\rangle \\leq C\\sum\\nolimits_{|\\alpha|\\leq N, |\\beta|\\leq M}\\sup_{x \\in \\R^n} \\left|x^\\alpha \\partial^\\beta \\phi(x) \\right|=C\\sum\\nolimits_{|\\alpha|\\leq N, |\\beta|\\leq M} p_{\\alpha, \\beta}(\\phi)." }, { "math_id": 443, "text": "T = \\partial^\\alpha F." }, { "math_id": 444, "text": "T \\in \\mathcal{D}^{\\prime}(\\R^n)," }, { "math_id": 445, "text": "K \\subseteq \\R^n," }, { "math_id": 446, "text": "T = \\partial^\\alpha F" }, { "math_id": 447, "text": "C_c^\\infty(K)." }, { "math_id": 448, "text": "U \\subseteq \\R^m" }, { "math_id": 449, "text": "V \\subseteq \\R^n" }, { "math_id": 450, "text": "\\mathbb{F}," }, { "math_id": 451, "text": "\\mathbb{F}=\\R" }, { "math_id": 452, "text": "\\Complex." 
}, { "math_id": 453, "text": "f \\in \\mathcal{D}(U \\times V)" }, { "math_id": 454, "text": "u \\in U" }, { "math_id": 455, "text": "v \\in V" }, { "math_id": 456, "text": "\\begin{alignat}{9}\nf_u : \\,& V && \\to \\,&& \\mathbb{F} && \\quad \\text{ and } \\quad && f^v : \\,&& U && \\to \\,&& \\mathbb{F} \\\\\n & y && \\mapsto\\,&& f(u, y) && && && x && \\mapsto\\,&& f(x, v) \\\\\n\\end{alignat}" }, { "math_id": 457, "text": "S \\in \\mathcal{D}^{\\prime}(U)" }, { "math_id": 458, "text": "T \\in \\mathcal{D}^{\\prime}(V)," }, { "math_id": 459, "text": "\\begin{alignat}{9}\n\\langle S, f^{\\bullet}\\rangle : \\,& V && \\to \\,&& \\mathbb{F} && \\quad \\text{ and } \\quad && \\langle T, f_{\\bullet}\\rangle : \\,&& U && \\to \\,&& \\mathbb{F} \\\\\n & v && \\mapsto\\,&& \\langle S, f^v \\rangle && && && u && \\mapsto\\,&& \\langle T, f_u \\rangle \\\\\n\\end{alignat}" }, { "math_id": 460, "text": "\\langle T, f_{\\bullet}\\rangle \\in \\mathcal{D}(U)" }, { "math_id": 461, "text": "\\langle S, f^{\\bullet}\\rangle \\in \\mathcal{D}(V)." }, { "math_id": 462, "text": "S \\in \\mathcal{D}'(U)" }, { "math_id": 463, "text": "T \\in \\mathcal{D}'(V)" }, { "math_id": 464, "text": "\\begin{alignat}{9}\n \\,& \\mathcal{D}(U \\times V) && \\to \\,&& \\mathcal{D}(V) && \\quad \\text{ and } \\quad && \\,&& \\mathcal{D}(U \\times V) && \\to \\,&& \\mathcal{D}(U) \\\\\n & f && \\mapsto\\,&& \\langle S, f^{\\bullet} \\rangle && && && f && \\mapsto\\,&& \\langle T, f_{\\bullet} \\rangle \\\\\n\\end{alignat}" }, { "math_id": 465, "text": "C^\\infty(U \\times V) \\to C^\\infty(V)" }, { "math_id": 466, "text": "C^\\infty(U \\times V) \\to C^\\infty(U)" }, { "math_id": 467, "text": "T \\in \\mathcal{D}'(V)." }, { "math_id": 468, "text": "\\langle S, \\langle T, f_{\\bullet} \\rangle \\rangle = \\langle T, \\langle S, f^{\\bullet} \\rangle \\rangle." }, { "math_id": 469, "text": "T \\in \\mathcal{D}'(V)," }, { "math_id": 470, "text": "S \\otimes T" }, { "math_id": 471, "text": "T \\otimes S," }, { "math_id": 472, "text": "U \\times V" }, { "math_id": 473, "text": "(S \\otimes T)(f) := \\langle S, \\langle T, f_{\\bullet} \\rangle \\rangle = \\langle T, \\langle S, f^{\\bullet}\\rangle \\rangle." }, { "math_id": 474, "text": "\\begin{alignat}{4}\n \\,& \\mathcal{D}^{\\prime}(U) \\times \\mathcal{D}^{\\prime}(V) && \\to \\,&& \\mathcal{D}^{\\prime}(U \\times V) \\\\\n & ~~~~~~~~(S, T) && \\mapsto\\,&& S \\otimes T \\\\\n\\end{alignat}" }, { "math_id": 475, "text": "\\operatorname{supp} (S \\otimes T) = \\operatorname{supp}(S) \\times \\operatorname{supp}(T)." 
}, { "math_id": 476, "text": "(S,T) \\mapsto S \\otimes T" }, { "math_id": 477, "text": "\\begin{alignat}{8}\n&\\mathcal{E}^{\\prime}(U) &&\\times \\mathcal{E}^{\\prime}(V) &&\\to \\mathcal{E}^{\\prime}(U \\times V) \\\\ \n&\\mathcal{S}^{\\prime}(\\R^m) &&\\times \\mathcal{S}^{\\prime}(\\R^n) &&\\to \\mathcal{S}^{\\prime}(\\R^{m + n}) \\\\\n\\end{alignat}" }, { "math_id": 478, "text": "\\mathcal{E}'" }, { "math_id": 479, "text": "\\mathcal{S}" }, { "math_id": 480, "text": "\\begin{alignat}{8}\n&\\mathcal{S}^{\\prime}(\\R^{m+n}) ~&&~\\cong~&&~ \\mathcal{S}^{\\prime}(\\R^m) \\ &&\\widehat{\\otimes}\\ \\mathcal{S}^{\\prime}(\\R^n) ~&&~\\cong~&&~ L_b(\\mathcal{S}(\\R^m); &&\\;\\mathcal{S}^{\\prime}(\\R^n)) \\\\\n&\\mathcal{E}^{\\prime}(U \\times V) ~&&~\\cong~&&~ \\mathcal{E}^{\\prime}(U) \\ &&\\widehat{\\otimes}\\ \\mathcal{E}^{\\prime}(V) ~&&~\\cong~&&~ L_b(C^\\infty(U); &&\\;\\mathcal{E}^{\\prime}(V)) \\\\\n&\\mathcal{D}^{\\prime}(U \\times V) ~&&~\\cong~&&~ \\mathcal{D}^{\\prime}(U) \\ &&\\widehat{\\otimes}\\ \\mathcal{D}^{\\prime}(V) ~&&~\\cong~&&~ L_b(\\mathcal{D}(U); &&\\;\\mathcal{D}^{\\prime}(V)) \\\\\n\\end{alignat}" }, { "math_id": 481, "text": "L_b(X;Y)" }, { "math_id": 482, "text": "L^2" } ]
https://en.wikipedia.org/wiki?curid=67789038
6778984
Discrete tomography
Reconstruction of binary images from a small number of their projections Discrete tomography focuses on the problem of reconstruction of binary images (or finite subsets of the integer lattice) from a small number of their projections. In general, tomography deals with the problem of determining shape and dimensional information of an object from a set of projections. From the mathematical point of view, the object corresponds to a function and the problem posed is to reconstruct this function from its integrals or sums over subsets of its domain. In general, the tomographic inversion problem may be continuous or discrete. In continuous tomography both the domain and the range of the function are continuous and line integrals are used. In discrete tomography the domain of the function may be either discrete or continuous, and the range of the function is a finite set of real, usually nonnegative numbers. In continuous tomography when a large number of projections is available, accurate reconstructions can be made by many different algorithms. It is typical for discrete tomography that only a few projections (line sums) are used. In this case, conventional techniques all fail. A special case of discrete tomography deals with the problem of the reconstruction of a binary image from a small number of projections. The name "discrete tomography" is due to Larry Shepp, who organized the first meeting devoted to this topic (DIMACS Mini-Symposium on Discrete Tomography, September 19, 1994, Rutgers University). Theory. Discrete tomography has strong connections with other mathematical fields, such as number theory, discrete mathematics, computational complexity theory and combinatorics. In fact, a number of discrete tomography problems were first discussed as combinatorial problems. In 1957, H. J. Ryser found a necessary and sufficient condition for a pair of vectors being the two orthogonal projections of a discrete set. In the proof of his theorem, Ryser also described a reconstruction algorithm, the very first reconstruction algorithm for a general discrete set from two orthogonal projections. In the same year, David Gale found the same consistency conditions, but in connection with the network flow problem. Another result of Ryser's is the definition of the switching operation by which discrete sets having the same projections can be transformed into each other. The problem of reconstructing a binary image from a small number of projections generally leads to a large number of solutions. It is desirable to limit the class of possible solutions to only those that are typical of the class of the images which contains the image being reconstructed by using a priori information, such as convexity or connectedness. Theorems. For further results, see Algorithms. Among the reconstruction methods one can find algebraic reconstruction techniques (e.g., DART or ), greedy algorithms (see for approximation guarantees), and Monte Carlo algorithms. Applications. Various algorithms have been applied in image processing, medicine, three-dimensional statistical data security problems, computer tomograph assisted engineering and design, electron microscopy and materials science, including the 3DXRD microscope. A form of discrete tomography also forms the basis of nonograms, a type of logic puzzle in which information about the rows and columns of a digital image is used to reconstruct the image. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
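Ryser's two-projection reconstruction can be illustrated with a short script. The sketch below is a hedged illustration rather than Ryser's exact 1957 procedure: it greedily fills each row into the columns with the largest remaining column sums (the construction used in standard proofs of the Gale–Ryser theorem) and then verifies that the requested projections were actually met. The function name and the example projections are invented for illustration.

```python
# Reconstruct a binary image (0-1 matrix) from its row and column sums,
# i.e. from two orthogonal projections.  Greedy construction in the spirit
# of Ryser's algorithm; returns None if the projections could not be met.
def reconstruct(row_sums, col_sums):
    m, n = len(row_sums), len(col_sums)
    remaining = list(col_sums)              # column sums still to be satisfied
    image = [[0] * n for _ in range(m)]
    # Fill the rows with the largest sums first; within a row, place the 1s
    # in the columns that still need the most entries.
    for i in sorted(range(m), key=lambda k: -row_sums[k]):
        for j in sorted(range(n), key=lambda k: -remaining[k])[:row_sums[i]]:
            image[i][j] = 1
            remaining[j] -= 1
    # Verify the requested column sums were met; otherwise report failure
    # (for this greedy order, failure coincides with inconsistent projections).
    if any(r != 0 for r in remaining):
        return None
    return image

rows = [3, 2, 1]        # horizontal projection (row sums) - made-up example
cols = [2, 2, 1, 1]     # vertical projection (column sums)
for row in reconstruct(rows, cols):
    print(row)
```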
[ { "math_id": 0, "text": " m\\geq 3 " }, { "math_id": 1, "text": " m=2 " }, { "math_id": 2, "text": " k " }, { "math_id": 3, "text": "(k-1)" }, { "math_id": 4, "text": " k \\geq 3 " } ]
https://en.wikipedia.org/wiki?curid=6778984
6779393
Riesz potential
In mathematics, the Riesz potential is a potential named after its discoverer, the Hungarian mathematician Marcel Riesz. In a sense, the Riesz potential defines an inverse for a power of the Laplace operator on Euclidean space. They generalize to several variables the Riemann–Liouville integrals of one variable. Definition. If 0 &lt; "α" &lt; "n", then the Riesz potential "I"α"f" of a locally integrable function "f" on R"n" is the function defined by where the constant is given by formula_0 This singular integral is well-defined provided "f" decays sufficiently rapidly at infinity, specifically if "f" ∈ L"p"(R"n") with 1 ≤ "p" &lt; "n"/"α". In fact, for any 1 ≤ "p" ("p"&gt;1 is classical, due to Sobolev, while for "p"=1 see , the rate of decay of "f" and that of "I""α""f" are related in the form of an inequality (the Hardy–Littlewood–Sobolev inequality) formula_1 where formula_2 is the vector-valued Riesz transform. More generally, the operators "I""α" are well-defined for complex α such that 0 &lt; Re "α" &lt; "n". The Riesz potential can be defined more generally in a weak sense as the convolution formula_3 where "K"α is the locally integrable function: formula_4 The Riesz potential can therefore be defined whenever "f" is a compactly supported distribution. In this connection, the Riesz potential of a positive Borel measure μ with compact support is chiefly of interest in potential theory because "I""α"μ is then a (continuous) subharmonic function off the support of μ, and is lower semicontinuous on all of R"n". Consideration of the Fourier transform reveals that the Riesz potential is a Fourier multiplier. In fact, one has formula_5 and so, by the convolution theorem, formula_6 The Riesz potentials satisfy the following semigroup property on, for instance, rapidly decreasing continuous functions formula_7 provided formula_8 Furthermore, if 0 &lt; Re "α" &lt; "n"–2, then formula_9 One also has, for this class of functions, formula_10 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
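Because I_α acts as the Fourier multiplier |2πξ|^(−α), it can be applied numerically with an FFT. The sketch below is only an illustration under stated assumptions: the line is truncated to a periodic interval, ordinary (not angular) frequencies are used to match the convention above, and the singular ξ = 0 mode is simply set to zero, which discards the mean. It also checks the semigroup property, applying I_{α/2} twice and comparing with I_α on the grid.

```python
# Apply the Riesz potential I_alpha (n = 1, 0 < alpha < 1) to a rapidly
# decaying function via its Fourier multiplier |2*pi*xi|^(-alpha).
import numpy as np

alpha = 0.5
N, L = 4096, 100.0
x = np.linspace(-L, L, N, endpoint=False)
f = np.exp(-x**2)                          # Gaussian test function

xi = np.fft.fftfreq(N, d=x[1] - x[0])      # ordinary frequencies (e^{-2 pi i x xi} convention)

def multiplier(a):
    m = np.zeros_like(xi)
    nz = xi != 0
    m[nz] = np.abs(2 * np.pi * xi[nz]) ** (-a)   # zero mode dropped: the mean is discarded
    return m

def riesz(g, a):
    return np.real(np.fft.ifft(multiplier(a) * np.fft.fft(g)))

I_alpha_f = riesz(f, alpha)
# Semigroup property: I_{alpha/2} applied twice agrees with I_alpha on the grid
# up to rounding error.
print(np.max(np.abs(riesz(riesz(f, alpha / 2), alpha / 2) - I_alpha_f)))
```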
[ { "math_id": 0, "text": "c_\\alpha = \\pi^{n/2}2^\\alpha\\frac{\\Gamma(\\alpha/2)}{\\Gamma((n-\\alpha)/2)}." }, { "math_id": 1, "text": "\\|I_\\alpha f\\|_{p^*} \\le C_p \\|Rf\\|_p, \\quad p^*=\\frac{np}{n-\\alpha p}," }, { "math_id": 2, "text": "Rf=DI_1f" }, { "math_id": 3, "text": "I_\\alpha f = f*K_\\alpha" }, { "math_id": 4, "text": "K_\\alpha(x) = \\frac{1}{c_\\alpha}\\frac{1}{|x|^{n-\\alpha}}." }, { "math_id": 5, "text": "\\widehat{K_\\alpha}(\\xi) = \\int_{\\R^n} K_{\\alpha}(x) e^{-2\\pi i x \\xi }\\, \\mathrm{d}x = |2\\pi\\xi|^{-\\alpha}" }, { "math_id": 6, "text": "\\widehat{I_\\alpha f}(\\xi) = |2\\pi\\xi|^{-\\alpha} \\hat{f}(\\xi)." }, { "math_id": 7, "text": "I_\\alpha I_\\beta = I_{\\alpha+\\beta} " }, { "math_id": 8, "text": "0 < \\operatorname{Re} \\alpha, \\operatorname{Re} \\beta < n,\\quad 0 < \\operatorname{Re} (\\alpha+\\beta) < n." }, { "math_id": 9, "text": "\\Delta I_{\\alpha+2} = I_{\\alpha+2} \\Delta=-I_\\alpha. " }, { "math_id": 10, "text": "\\lim_{\\alpha\\to 0^+} (I_\\alpha f)(x) = f(x)." } ]
https://en.wikipedia.org/wiki?curid=6779393
677968
Information-theoretic security
Security of a cryptosystem which derives purely from information theory A cryptosystem is considered to have information-theoretic security (also called unconditional security) if the system is secure against adversaries with unlimited computing resources and time. In contrast, a system which depends on the computational cost of cryptanalysis to be secure (and thus can be broken by an attack with unlimited computation) is called computationally, or conditionally, secure. Overview. An encryption protocol with information-theoretic security is impossible to break even with infinite computational power. Protocols proven to be information-theoretically secure are resistant to future developments in computing. The concept of information-theoretically secure communication was introduced in 1949 by American mathematician Claude Shannon, one of the founders of classical information theory, who used it to prove the one-time pad system was secure. Information-theoretically secure cryptosystems have been used for the most sensitive governmental communications, such as diplomatic cables and high-level military communications. There are a variety of cryptographic tasks for which information-theoretic security is a meaningful and useful requirement. A few of these are: Physical layer encryption. Technical limitations. Algorithms which are computationally or conditionally secure (i.e., they are not information-theoretically secure) are dependent on resource limits. For example, RSA relies on the assertion that factoring large numbers is hard. A weaker notion of security, defined by Aaron D. Wyner, established a now-flourishing area of research that is known as physical layer encryption. It exploits the physical wireless channel for its security by communications, signal processing, and coding techniques. The security is provable, unbreakable, and quantifiable (in bits/second/hertz). Wyner's initial physical layer encryption work in the 1970s posed the Alice–Bob–Eve problem in which Alice wants to send a message to Bob without Eve decoding it. If the channel from Alice to Bob is statistically better than the channel from Alice to Eve, it had been shown that secure communication is possible. That is intuitive, but Wyner measured the secrecy in information theoretic terms defining secrecy capacity, which essentially is the rate at which Alice can transmit secret information to Bob. Shortly afterward, Imre Csiszár and Körner showed that secret communication was possible even if Eve had a statistically better channel to Alice than Bob did. The basic idea of the information theoretic approach to securely transmit confidential messages (without using an encryption key) to a legitimate receiver is to use the inherent randomness of the physical medium (including noises and channel fluctuations due to fading) and exploit the difference between the channel to a legitimate receiver and the channel to an eavesdropper to benefit the legitimate receiver. More recent theoretical results are concerned with determining the secrecy capacity and optimal power allocation in broadcast fading channels. There are caveats, as many capacities are not computable unless the assumption is made that Alice knows the channel to Eve. If that were known, Alice could simply place a null in Eve's direction. Secrecy capacity for MIMO and multiple colluding eavesdroppers is more recent and ongoing work, and such results still make the non-useful assumption about eavesdropper channel state information knowledge. 
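For the classical degraded Gaussian wiretap channel, the secrecy capacity Wyner introduced works out to the positive part of the difference between Bob's and Eve's channel capacities (a result due to Leung-Yan-Cheong and Hellman). The sketch below simply evaluates that formula; the SNR values are made-up examples, and rates are quoted in bits per real channel use.

```python
# Secrecy capacity of the (degraded) Gaussian wiretap channel: the positive
# part of the difference between Bob's and Eve's AWGN channel capacities.
import math

def awgn_capacity(snr):
    # Capacity of a real AWGN channel, bits per channel use.
    return 0.5 * math.log2(1.0 + snr)

def secrecy_capacity(snr_bob, snr_eve):
    return max(0.0, awgn_capacity(snr_bob) - awgn_capacity(snr_eve))

print(secrecy_capacity(snr_bob=15.0, snr_eve=3.0))   # positive secrecy rate
print(secrecy_capacity(snr_bob=3.0, snr_eve=15.0))   # 0.0: Eve's channel is better
```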
Still other work is less theoretical by attempting to compare implementable schemes. One physical layer encryption scheme is to broadcast artificial noise in all directions except that of Bob's channel, which basically jams Eve. One paper by Negi and Goel details its implementation, and Khisti and Wornell computed the secrecy capacity when only statistics about Eve's channel are known. Parallel to that work in the information theory community is work in the antenna community, which has been termed near-field direct antenna modulation or directional modulation. It has been shown that by using a parasitic array, the transmitted modulation in different directions could be controlled independently. Secrecy could be realized by making the modulations in undesired directions difficult to decode. Directional modulation data transmission was experimentally demonstrated using a phased array. Others have demonstrated directional modulation with switched arrays and phase-conjugating lenses. That type of directional modulation is really a subset of Negi and Goel's additive artificial noise encryption scheme. Another scheme using pattern-reconfigurable transmit antennas for Alice called reconfigurable multiplicative noise (RMN) complements additive artificial noise. The two work well together in channel simulations in which nothing is assumed known to Alice or Bob about the eavesdroppers. Secret key agreement. The different works mentioned in the previous part employ, in one way or another, the randomness present in the wireless channel to transmit information-theoretically secure messages. Conversely, we could analyze how much secrecy one can extract from the randomness itself in the form of a secret key. That is the goal of "secret key agreement". In this line of work, started by Maurer and Ahlswede and Csiszár, the basic system model removes any restriction on the communication schemes and assumes that the legitimate users can communicate over a two-way, public, noiseless, and authenticated channel at no cost. This model has been subsequently extended to account for multiple users and a noisy channel among others. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
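A key obtained by such a secret key agreement protocol is typically consumed as a one-time pad, the scheme Shannon proved information-theoretically secure. The minimal sketch below illustrates this; the security argument requires the key to be uniformly random, at least as long as the message, kept secret, and never reused, and the key here is simply drawn locally as a stand-in for an agreed key.

```python
# One-time pad: XOR the message with a random key of equal length.
# Information-theoretic security holds only if the key is uniformly random,
# at least as long as the message, kept secret, and never reused.
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    assert len(key) >= len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

otp_decrypt = otp_encrypt        # XOR is its own inverse

message = b"attack at dawn"
key = secrets.token_bytes(len(message))   # stands in for an agreed secret key
ciphertext = otp_encrypt(message, key)
assert otp_decrypt(ciphertext, key) == message
```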
[ { "math_id": 0, "text": "\\Pi" }, { "math_id": 1, "text": "\\Pi'" } ]
https://en.wikipedia.org/wiki?curid=677968
67805581
Eddy saturation and eddy compensation
Eddy saturation and eddy compensation are phenomena found in the Southern Ocean. Both are limiting processes in which the momentum input from strong westerlies goes into increased eddy activity rather than into strengthening the respective mean currents. Where eddy saturation affects the Antarctic Circumpolar Current (ACC), eddy compensation influences the associated Meridional Overturning Circulation (MOC). In recent decades wind stresses in the Southern Ocean have increased, partly due to greenhouse gases and ozone depletion in the stratosphere. The ACC and MOC play an important role in the global climate, affecting the stratification of the ocean and the uptake of heat, carbon dioxide and other passive tracers. Addressing how these increased zonal winds affect the MOC and the ACC will help in understanding whether these uptakes will change in the future, which could have a serious impact on the carbon cycle. This remains an important and critical research topic. Formation of eddies and overturning response. Dynamics in the Southern Ocean are dominated by two cells with opposing rotation, each forced by surface buoyancy fluxes. Via isopycnals, tracers are transferred from the deep to the surface. Isopycnal slopes are key for determining the depth of the global pycnocline and where water masses outcrop. Therefore, isopycnals play an important role in the interaction with the atmosphere. In the Southern Ocean it is thought that isopycnals are steepened by wind forcing while baroclinic eddies act to flatten them. The westerly winds (formula_2), which make the ACC flow eastward, induce a clockwise rotating Eulerian meridional circulation (formula_0) via Ekman dynamics, also known as the Deacon cell. This circulation acts to overturn the isopycnals, enhancing the buoyancy forcing and therefore increasing the mean flow. Although the ACC is very close to geostrophic balance, when frontal jets reach a sufficiently high velocity, geostrophic turbulence (i.e. chaotic motion of fluids that are near to a state of hydrostatic and geostrophic balance) arises. Due to this geostrophic turbulence, potential energy stored in the fronts of jet streams is converted into eddy kinetic energy (EKE), which finally leads to the formation of mesoscale eddies. The surface EKE has increased in recent decades, as shown by satellite altimetry. The relation between the increased wind stress and EKE is assumed to be near-linear, which explains the limited sensitivity of the ACC transport. In areas where stratification is very weak the formation of eddies is often associated with barotropic instabilities. Where stratification is more substantial, baroclinic instabilities (misalignment of isobars and isopycnals) are the main cause of the formation of eddies. Eddies have the tendency to flatten isopycnals (surfaces of equal buoyancy), which slows down the mean flow. Due to these instabilities a counterclockwise rotating eddy-induced circulation (formula_1) is formed, which partially counteracts the Eulerian meridional circulation. The balance between the two overturning circulations determines the residual overturning, formula_3. This residual flow (formula_4) is assumed to be directed along mean buoyancy surfaces in the interior but to have a diapycnal component in the mixed layer. Eddy saturation. The Southern Ocean contains a system of ocean currents, which together form the Antarctic Circumpolar Current (ACC).
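The relative sizes of these opposing cells can be illustrated with the standard zonally averaged scalings: the wind-driven (Deacon) cell scales with the Ekman transport τ/(ρ0|f|), while the eddy-induced cell scales as the product of a Gent–McWilliams-type eddy diffusivity and the isopycnal slope. The short sketch below is only a back-of-envelope illustration with made-up but typical parameter values; it is not taken from the studies discussed here.

```python
# Back-of-envelope comparison of the wind-driven (Deacon) overturning and the
# opposing eddy-induced overturning in the Southern Ocean.  All numbers are
# illustrative order-of-magnitude values.
rho0 = 1025.0          # reference density, kg m^-3
f = 1.0e-4             # magnitude of the Coriolis parameter, s^-1
tau = 0.15             # zonal wind stress, N m^-2
K = 1000.0             # Gent-McWilliams eddy diffusivity, m^2 s^-1
slope = 1.0e-3         # typical isopycnal slope
Lx = 2.0e7             # circumpolar path length, m

psi_ekman = tau / (rho0 * f)      # Ekman-driven cell, m^2 s^-1 per unit zonal length
psi_eddy = K * slope              # eddy-induced cell, opposing the Ekman cell
psi_res = psi_ekman - psi_eddy    # residual overturning per unit zonal length

to_sv = Lx / 1.0e6                # convert to sverdrups (10^6 m^3 s^-1)
print(f"Deacon cell  ~ {psi_ekman * to_sv:.0f} Sv")
print(f"Eddy cell    ~ {psi_eddy * to_sv:.0f} Sv")
print(f"Residual MOC ~ {psi_res * to_sv:.0f} Sv")
```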
These ocean currents are subject to strong westerly winds, which are jointly responsible for driving the zonal transport of the ACC. In recent decades a positive trend has been seen in the Southern Annular Mode (SAM) index, which measures the zonal pressure difference between the latitudes of 40S and 65S, indicating that the zonal winds over the Southern Ocean have strengthened. Studies indicate that the zonal transport of the ACC is relatively insensitive to these increases in wind stress. This behaviour can also be seen in the isopycnal slopes (surfaces of equal buoyancy), which show a limited response despite the intensification of the westerly winds. Therefore, the increased momentum (due to enhanced wind stress) is diverted into the oceanic mesoscale and transferred to the ocean bottom rather than into the horizontal mean flow. This flattens isopycnals (reduces the buoyancy forcing) and therefore slows down the mean flow. This phenomenon is known as eddy saturation. Eddy compensation. Alongside the ACC there is also the associated Meridional Overturning Circulation (MOC), which is also mainly driven by wind forcing. The insensitivity of the mean current to the accelerating wind forcing can also be seen in the MOC. This near independence of the MOC from the increase in wind stress is referred to as eddy compensation. Eddy compensation would be perfect if the Ekman transport were exactly balanced by the eddy-induced transport. There is a widespread belief that the sensitivities of the transport in the ACC and MOC are dynamically linked. However, eddy saturation and eddy compensation are distinct dynamical mechanisms, and the occurrence of one does not necessarily entail the occurrence of the other. It is hypothesized that the lack of a dynamical link between eddy saturation and eddy compensation is a consequence of the depth dependence of the cancellation between the Eulerian circulation and the eddy-induced circulation. Currently it is assumed that the ACC is fully eddy saturated but only partially eddy compensated; the degree of eddy compensation in the Southern Ocean is currently unknown. Models and sensitivity. Eddy-permitting and eddy-resolving models are used to examine the effects of eddy saturation and eddy compensation in the ACC. In these models resolution is of great importance; ocean observations do not have a high enough resolution to fully estimate the degree of eddy saturation and eddy compensation. Idealized studies show that the MOC response is more sensitive to model resolution than the ACC transport. A general conclusion of such numerical models is that southward eddy transport in combination with enhanced westerlies results in an increase in EKE. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
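One way to see what "partial" compensation means is a toy calculation: perturb the wind stress, let the eddy-induced transport respond through an eddy diffusivity that is assumed to scale near-linearly with the wind (mirroring the observed EKE response), and ask what fraction of the additional Ekman transport is offset. The closure and the numbers below are purely illustrative assumptions, not results from the cited studies.

```python
# Illustrative estimate of the degree of eddy compensation under a toy
# closure in which the eddy diffusivity K scales linearly with wind stress
# while the isopycnal slope stays fixed (consistent with eddy saturation).
rho0, f, slope = 1025.0, 1.0e-4, 1.0e-3
tau0, K0 = 0.15, 1000.0

def transports(tau):
    K = K0 * tau / tau0               # toy assumption: K proportional to tau
    return tau / (rho0 * f), K * slope

tau1 = 1.2 * tau0                     # 20% stronger westerlies
ek0, eddy0 = transports(tau0)
ek1, eddy1 = transports(tau1)

compensation = (eddy1 - eddy0) / (ek1 - ek0)
print(f"fraction of the extra Ekman transport offset by eddies: {compensation:.2f}")
```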
[ { "math_id": 0, "text": " \\overline{\\psi} " }, { "math_id": 1, "text": " \\psi* " }, { "math_id": 2, "text": "\\tau" }, { "math_id": 3, "text": " \\psi_{res} = \\overline{\\psi} + \\psi* " }, { "math_id": 4, "text": "\\psi_{res}" } ]
https://en.wikipedia.org/wiki?curid=67805581
678078
Inclusive fitness
Measure of evolutionary success based on the number of offspring the individual supports In evolutionary biology, inclusive fitness is one of two metrics of evolutionary success as defined by W. D. Hamilton in 1964: An individual's own child, who carries one half of the individual's genes, is defined as one offspring equivalent. A sibling's child, who will carry one-quarter of the individual's genes, is 1/2 offspring equivalent. Similarly, a cousin's child, who has 1/16 of the individual's genes, is 1/8 offspring equivalent. From a gene's point of view, evolutionary success ultimately depends on leaving behind the maximum number of copies of itself in the population. Prior to Hamilton's work, it was generally assumed that genes only achieved this through the number of viable offspring produced by the individual organism they occupied. However, this overlooked a wider consideration of a gene's success, most clearly in the case of the social insects where the vast majority of individuals do not produce (their own) offspring. Overview. The British evolutionary biologist W. D. Hamilton showed mathematically that, because other members of a population may share one's genes, a gene can also increase its evolutionary success by indirectly promoting the reproduction and survival of other individuals who also carry that gene. This is variously called "kin theory", "kin selection theory" or "inclusive fitness theory". The most obvious category of such individuals is close genetic relatives, and where these are concerned, the application of inclusive fitness theory is often more straightforwardly treated via the narrower kin selection theory. Hamilton's theory, alongside reciprocal altruism, is considered one of the two primary mechanisms for the evolution of social behaviors in natural species and a major contribution to the field of sociobiology, which holds that some behaviors can be dictated by genes, and therefore can be passed to future generations and may be selected for as the organism evolves. Belding's ground squirrel provides an example; it gives an alarm call to warn its local group of the presence of a predator. By emitting the alarm, it gives its own location away, putting itself in more danger. In the process, however, the squirrel may protect its relatives within the local group (along with the rest of the group). Therefore, if the effect of the trait influencing the alarm call typically protects the other squirrels in the immediate area, it will lead to the passing on of more copies of the alarm call trait in the next generation than the squirrel could leave by reproducing on its own. In such a case natural selection will increase the trait that influences giving the alarm call, provided that a sufficient fraction of the shared genes include the gene(s) predisposing to the alarm call. "Synalpheus regalis", a eusocial shrimp, is an organism whose social traits meet the inclusive fitness criterion. The larger defenders protect the young juveniles in the colony from outsiders. By ensuring the young's survival, the genes will continue to be passed on to future generations. Inclusive fitness is more generalized than strict kin selection, which requires that the shared genes are "identical by descent". Inclusive fitness is not limited to cases where "kin" ('close genetic relatives') are involved. Hamilton's rule. 
Hamilton's rule was originally derived in the framework of neighbour modulated fitness, where the fitness of a focal individual is considered to be modulated by the actions of its neighbours. This is the inverse of inclusive fitness where we consider how a focal individual modulates the fitness of its neighbours. However, taken over the entire population, these two approaches are equivalent to each other so long as fitness remains linear in trait value. A simple derivation of Hamilton's rule can be gained via the Price equation as follows. If an infinite population is assumed, such that any non-selective effects can be ignored, the Price equation can be written as: formula_0 Where formula_1 represents trait value and formula_2 represents fitness, either taken for an individual formula_3 or averaged over the entire population. If fitness is linear in trait value, the fitness for an individual formula_3 can be written as: formula_4 Where formula_5 is the component of an individual's fitness which is independent of trait value, formula_6 parameterizes the effect of individual formula_3's phenotype on its own fitness (written negative, by convention, to represent a fitness cost), formula_7 is the average trait value of individual formula_3's neighbours, and formula_8 parameterizes the effect of individual formula_3's neighbours on its fitness (written positive, by convention, to represent a fitness benefit). Substituting into the Price equation then gives: formula_9 Since formula_5 by definition does not covary with formula_10, this rearranges to: formula_11 Since formula_12 must, by definition, be greater than 0, it can then be said that that mean trait value will increase (formula_13) when: formula_14 Giving Hamilton's rule, where relatedness (formula_15) is a regression coefficient of the form formula_16, or formula_17. Relatedness here can vary between a value of 1 (only interacting with individuals of the same trait value) and -1 (only interacting with individuals of a [most] different trait value), and will be 0 when all individuals in the population interact with equal likelihood. Fitness in practice, however, does not tend to be linear in trait value -this would imply an increase to an infinitely large trait value being just as valuable to fitness as a similar increase to a very small trait value. Consequently, to apply Hamilton's rule to biological systems the conditions under which fitness can be approximate to being linear in trait value must first be found. There are two main methods used to approximate fitness as being linear in trait value; performing a partial regression (partial least squares regression) with respect to both the focal individual's trait value and its neighbours average trait value, or taking a first order Taylor series approximation of fitness with respect to trait value. Performing a partial regression requires minimal assumptions, but only provides a statistical relationship as opposed to a mechanistic one, and cannot be extrapolated beyond the dataset that it was generated from. Linearizing via a Taylor series approximation, however, provides a powerful mechanistic relationship, but requires the assumption that evolution proceeds in sufficiently small mutational steps that the difference in trait value between an individual and its neighbours is close to 0 (in accordance with Fisher's geometric model): although in practice this approximation can often still retain predictive power under larger mutational steps. Gardner "et al." 
(2007) suggest that Hamilton's rule can be applied to multi-locus models, but that it should be done at the point of interpreting theory, rather than as the starting point of enquiry. They suggest that one should "use standard population genetics, game theory, or other methodologies to derive a condition for when the social trait of interest is favored by selection and then use Hamilton's rule as an aid for conceptualizing this result". However, it is now becoming increasingly popular to use adaptive dynamics (evolutionary invasion analysis) approaches to bridge this gap and gain selection conditions which are directly interpretable with respect to Hamilton's rule. Altruism. The concept serves to explain how natural selection can perpetuate altruism. If there is an "altruism gene" (or complex of genes) that influences an organism's behavior to be helpful and protective of relatives and their offspring, this behavior also increases the proportion of the altruism gene in the population, because relatives are likely to share genes with the altruist due to common descent. In formal terms, if such a complex of genes arises, Hamilton's rule (rb > c) specifies the selective criteria (in terms of cost, benefit and relatedness) for such a trait to increase in frequency in the population. Hamilton noted that inclusive fitness theory "does not by itself predict" that a species will necessarily evolve such altruistic behaviors, since an opportunity or context for interaction between individuals is a more primary and necessary requirement in order for any social interaction to occur in the first place. As Hamilton put it, "Altruistic or selfish acts are only possible when a suitable social object is available. In this sense behaviours are conditional from the start". In other words, whilst inclusive fitness theory specifies a set of necessary criteria for the evolution of altruistic traits, it does not specify a sufficient condition for their evolution in any given species. More primary necessary criteria include the existence of gene complexes for altruistic traits in the gene pool, as mentioned above, and especially that "a suitable social object is available", as Hamilton noted. The American evolutionary biologist Paul W. Sherman gives a fuller discussion of Hamilton's latter point: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;To understand any species' pattern of nepotism, two questions about individuals' behavior must be considered: (1) what is reproductively ideal?, and (2) what is socially possible? With his formulation of "inclusive fitness," Hamilton suggested a mathematical way of answering (1). Here I suggest that the answer to (2) depends on demography, particularly its spatial component, dispersal, and its temporal component, mortality. Only when ecological circumstances affecting demography consistently make it socially possible will nepotism be elaborated according to what is reproductively ideal. For example, if dispersing is advantageous and if it usually separates relatives permanently, as in many birds, on the rare occasions when nestmates or other kin live in proximity, they will not preferentially cooperate. Similarly, nepotism will not be elaborated among relatives that have infrequently coexisted in a population's or a species' evolutionary history. If an animal's life history characteristics usually preclude the existence of certain relatives, that is if kin are usually unavailable, the rare coexistence of such kin will not occasion preferential treatment.
For example, if reproductives generally die soon after zygotes are formed, as in many temperate zone insects, the unusual individual that survives to interact with its offspring is not expected to behave parentally. The occurrence of sibling cannibalism in several species underlines the point that inclusive fitness theory should not be understood to simply predict that genetically related individuals will inevitably recognize and engage in positive social behaviors towards genetic relatives. Only in species that have the appropriate traits in their gene pool, and in which individuals typically interacted with genetic relatives in the natural conditions of their evolutionary history, will social behavior potentially be elaborated, and consideration of the evolutionarily typical demographic composition of grouping contexts of that species is thus a first step in understanding how selection pressures upon inclusive fitness have shaped the forms of its social behavior. Richard Dawkins gives a simplified illustration: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;If families [genetic relatives] happen to go around in groups, this fact provides a useful rule of thumb for kin selection: 'care for any individual you often see'." Evidence from a variety of species including primates and other social mammals suggests that contextual cues (such as familiarity) are often significant proximate mechanisms mediating the expression of altruistic behavior, regardless of whether the participants are always in fact genetic relatives or not. This is nevertheless evolutionarily stable since selection pressure acts on the "typical conditions", not on "the rare occasions" where actual genetic relatedness differs from that normally encountered. Inclusive fitness theory thus does not imply that "organisms evolve to direct altruism towards genetic relatives". Many popular treatments do however promote this interpretation, as illustrated in a review: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;[M]any misunderstandings persist. In many cases, they result from conflating "coefficient of relatedness" and "proportion of shared genes," which is a short step from the intuitively appealing—but incorrect—interpretation that "animals tend to be altruistic toward those with whom they share a lot of genes." These misunderstandings don't just crop up occasionally; they are repeated in many writings, including undergraduate psychology textbooks—most of them in the field of social psychology, within sections describing evolutionary approaches to altruism. (Park 2007, p860) Such misunderstandings of inclusive fitness' implications for the study of altruism, even amongst professional biologists utilizing the theory, are widespread, prompting prominent theorists to regularly attempt to highlight and clarify the mistakes. An example of attempted clarification is West et al. (2010): &lt;templatestyles src="Template:Blockquote/styles.css" /&gt; Green-beard effect. As well as interactions in reliable contexts of genetic relatedness, altruists may also have some way to recognize altruistic behavior in unrelated individuals and be inclined to support them. As Dawkins points out in "The Selfish Gene" and "The Extended Phenotype", this must be distinguished from the green-beard effect. 
The green-beard effect is the act of a gene (or several closely linked genes) that produces a perceptible phenotype (the proverbial "green beard"), allows its bearer to recognize that phenotype in other individuals, and causes the bearer to behave preferentially towards other carriers of the phenotype. The green-beard effect was originally a thought experiment by Hamilton in his publications on inclusive fitness in 1964, although it hadn't yet been observed. As of today, it has been observed in only a few species. Its rarity is probably due to its susceptibility to 'cheating', whereby individuals can gain the trait that confers the advantage without the altruistic behavior. This normally would occur via the crossing over of chromosomes, which happens frequently, often rendering the green-beard effect a transient state. However, Wang et al. have shown that in one of the species where the effect is common (fire ants), recombination cannot occur due to a large genetic inversion, essentially forming a supergene. This, along with homozygote inviability at the green-beard loci, allows for the extended maintenance of the green-beard effect. Equally, cheaters may not be able to invade the green-beard population if the mechanism for preferential treatment and the phenotype are intrinsically linked. In budding yeast ("Saccharomyces cerevisiae"), the dominant allele FLO1 is responsible for flocculation (self-adherence between cells), which helps protect the cells against harmful substances such as ethanol. While 'cheater' yeast cells occasionally find their way into the biofilm-like substance that is formed from FLO1-expressing yeast, they cannot invade, as the FLO1-expressing yeast will not bind to them in return, and thus the phenotype is intrinsically linked to the preference. Parent–offspring conflict and optimization. Early writings on inclusive fitness theory (including Hamilton 1964) used K in place of B/C. Thus Hamilton's rule was expressed as: formula_18 is the necessary and sufficient condition for selection for altruism, where B is the gain to the beneficiary, C is the cost to the actor and r is the number of its own offspring equivalents the actor expects in one of the offspring of the beneficiary. r is either called the coefficient of relatedness or the coefficient of relationship, depending on how it is computed. The method of computing has changed over time, as has the terminology. It is not clear whether or not changes in the terminology followed changes in computation. Robert Trivers (1974) defined "parent-offspring conflict" as any case where formula_19, i.e., K is between 1 and 2. The benefit is greater than the cost but is less than twice the cost. In this case, the parent would wish the offspring to behave as if r is 1 between siblings, although it is actually presumed to be 1/2 or closely approximated by 1/2. In other words, a parent would wish its offspring to give up ten offspring in order to raise 11 nieces and nephews. The offspring, when not manipulated by the parent, would require at least 21 nieces and nephews to justify the sacrifice of 10 of its own offspring. The parent is trying to maximize its number of grandchildren, while the offspring is trying to maximize the number of its own offspring equivalents (via offspring and nieces and nephews) it produces. If the parent cannot manipulate the offspring and therefore loses in the conflict, the grandparents with the fewest grandchildren seem to be selected for. In other words, if the parent has no influence on the offspring's behavior, grandparents with fewer grandchildren increase in frequency in the population. By extension, parents with the fewest offspring will also increase in frequency.
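The offspring-equivalent arithmetic behind the figures above (10 own offspring versus 11 nieces and nephews from the parent's perspective, versus at least 21 from the offspring's perspective) can be spelled out in a few lines. The helper functions below are invented purely for illustration; they simply convert counts of relatives into offspring equivalents using relatedness values of 1/2 and 1/4.

```python
# Offspring-equivalent bookkeeping for the 10-versus-11-versus-21 example:
# the parent values all grandchildren equally, while the focal individual
# discounts nieces and nephews by their lower relatedness.
r_own, r_niece = 0.5, 0.25      # relatedness to own offspring vs. a sibling's offspring

def worth_to_focal(own, nieces):
    """Offspring equivalents from the focal individual's point of view."""
    return (r_own * own + r_niece * nieces) / r_own

def worth_to_parent(own, nieces):
    """Grandchildren are all equally related to the (grand)parent."""
    return own + nieces

# Trading 10 of its own offspring for 11 nieces/nephews: the parent approves...
print(worth_to_parent(0, 11) > worth_to_parent(10, 0))   # True
# ...but the focal individual needs at least 21 nieces/nephews to come out ahead.
print(worth_to_focal(0, 20) > worth_to_focal(10, 0))     # False
print(worth_to_focal(0, 21) > worth_to_focal(10, 0))     # True
```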
This conclusion seems to go against Ronald Fisher's "Fundamental Theorem of Natural Selection", which states that the change in fitness over the course of a generation equals the variance in fitness at the beginning of the generation. Variance is defined as the square of a quantity (the standard deviation) and as a square must always be positive (or zero). That would imply that fitness could never decrease as time passes. This goes along with the intuitive idea that lower fitness cannot be selected for. During parent-offspring conflict, the number of stranger equivalents reared per offspring equivalent reared is going down. Consideration of this phenomenon caused Orlove (1979) and Grafen (2006) to say that nothing is being maximized. According to Trivers, if Sigmund Freud had tried to explain intra-family conflict after Hamilton instead of before him, he would have attributed the motivation for the conflict and for the castration complex to resource allocation issues rather than to sexual jealousy. Incidentally, when K = 1 or K = 2, the average number of offspring per parent stays constant as time goes by. When K < 1 or K > 2, the average number of offspring per parent increases as time goes by. The term "gene" can refer to a locus (location) on an organism's DNA, a section that codes for a particular trait. Alternative versions of the code at that location are called "alleles." If there are two alleles at a locus, one of which codes for altruism and the other for selfishness, an individual who has one of each is said to be a heterozygote at that locus. If the heterozygote uses half of its resources raising its own offspring and the other half helping its siblings raise theirs, that condition is called codominance. If there is codominance, the "2" in the above argument is exactly 2. If, by contrast, the altruism allele is more dominant, then the 2 in the above would be replaced by a number smaller than 2. If the selfishness allele is the more dominant, something greater than 2 would replace the 2. Opposing view. A 2010 paper by Martin Nowak, Corina Tarnita, and E. O. Wilson suggested that standard natural selection theory is superior to inclusive fitness theory, stating that the interactions between cost and benefit cannot be explained only in terms of relatedness. This, Nowak said, makes Hamilton's rule at worst superfluous and at best ad hoc. Gardner in turn was critical of the paper, describing it as "a really terrible article", and along with other co-authors has written a reply, submitted to "Nature". The disagreement stems from a long history of confusion over what Hamilton's rule represents. Hamilton's rule gives the direction of mean phenotypic change (directional selection) so long as fitness is linear in phenotype, and the utility of Hamilton's rule is simply a reflection of when it is suitable to consider fitness as being linear in phenotype. The primary (and strictest) case is when evolution proceeds in very small mutational steps. Under such circumstances Hamilton's rule then emerges as the result of taking a first order Taylor series approximation of fitness with respect to phenotype. This assumption of small mutational steps (otherwise known as δ-weak selection) is often made on the basis of Fisher's geometric model and underpins much of modern evolutionary theory. In work prior to Nowak "et al." (2010), various authors derived different versions of a formula for formula_15, all designed to preserve Hamilton's rule.
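The regression definition of formula_15 and the Price-equation derivation of Hamilton's rule given earlier lend themselves to a direct numerical check. The sketch below builds a small, entirely made-up population with linear fitness w = a − c·z + b·z_n, computes relatedness as the regression coefficient cov(z_n, z)/var(z), and confirms that the direction of change in the mean trait value matches the sign of rb − c.

```python
# Numerical check of Hamilton's rule via the Price equation, using the
# linear-fitness setup from the derivation.  Population and parameter
# values are invented for illustration.
import numpy as np

z  = np.array([1, 1, 1, 0, 0, 0, 1, 0], dtype=float)   # own trait value (1 = altruist)
zn = np.array([1, 1, 0, 0, 0, 1, 1, 0], dtype=float)   # mean trait value of neighbours

a, b, c = 2.0, 0.6, 0.25
w = a - c * z + b * zn                                  # linear fitness

def cov(x, y):
    return np.mean(x * y) - np.mean(x) * np.mean(y)     # population covariance

r = cov(zn, z) / cov(z, z)                  # relatedness as a regression coefficient
delta_zbar = cov(w, z) / np.mean(w)         # Price equation (no transmission bias)

print(f"r = {r:.2f}, r*b - c = {r * b - c:+.3f}")
print("direction predicted by Hamilton's rule matches:",
      np.sign(delta_zbar) == np.sign(r * b - c))
```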
Orlove noted that if a formula for formula_15 is defined so as to ensure that Hamilton's rule is preserved, then the approach is by definition ad hoc. However, he published an unrelated derivation of the same formula for formula_15 – a derivation designed to preserve two statements about the rate of selection – which on its own was similarly ad hoc. Orlove argued that the existence of two unrelated derivations of the formula for formula_15 reduces or eliminates the ad hoc nature of the formula, and of inclusive fitness theory as well. The derivations were demonstrated to be unrelated by corresponding parts of the two identical formulae for formula_15 being derived from the genotypes of different individuals. The parts that were derived from the genotypes of different individuals were terms to the right of the minus sign in the covariances in the two versions of the formula for formula_15. By contrast, the terms left of the minus sign in both derivations come from the same source. In populations containing only two trait values, it has since been shown that formula_15 is in fact Sewall Wright's coefficient of relationship. Engles (1982) suggested that the c/b ratio be considered as a continuum of this behavioral trait rather than discontinuous in nature. From this approach fitness transactions can be better observed because there is more to what is happening to affect an individual's fitness than just losing and gaining. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\bar{w}\\Delta{\\bar{z}} = \\operatorname{cov}(w_i, z_i)" }, { "math_id": 1, "text": "z" }, { "math_id": 2, "text": "w" }, { "math_id": 3, "text": "i" }, { "math_id": 4, "text": "w_i = \\alpha + (-c) z_i + b z_n" }, { "math_id": 5, "text": "\\alpha" }, { "math_id": 6, "text": "-c" }, { "math_id": 7, "text": "z_n" }, { "math_id": 8, "text": "b" }, { "math_id": 9, "text": "\\bar{w} \\Delta{\\bar{z}} = \\operatorname{cov}(\\alpha -c z_i + b z_{n}, z_i) = \\operatorname{cov}(\\alpha, z_i) - c\\operatorname{cov}(z_i, z_i) + b \\operatorname{cov}(z_n, z_i) " }, { "math_id": 10, "text": "z_i" }, { "math_id": 11, "text": "\\Delta{\\bar{z}} = \\frac{\\operatorname{cov}(z_i, z_i)}{\\bar{w}}(b\\frac{\\operatorname{cov}(z_n, z_i)}{\\operatorname{cov}(z_i, z_i)} - c)" }, { "math_id": 12, "text": "\\frac{\\operatorname{cov}(z_i, z_i)}{\\bar{w}}" }, { "math_id": 13, "text": "\\Delta{\\bar{z}} > 0" }, { "math_id": 14, "text": "r b > c" }, { "math_id": 15, "text": "r" }, { "math_id": 16, "text": "\\frac{\\operatorname{cov}(z_n, z_i)}{\\operatorname{cov}(z_i, z_i)}" }, { "math_id": 17, "text": "\\frac{\\operatorname{cov}(z_n, z_i)}{\\operatorname{var}(z_i)}" }, { "math_id": 18, "text": "K> 1/r" }, { "math_id": 19, "text": "1<K<2" } ]
https://en.wikipedia.org/wiki?curid=678078